author     Linus Torvalds <torvalds@linux-foundation.org>    2012-07-24 10:01:50 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>    2012-07-24 10:01:50 -0700
commit     3c4cfadef6a1665d9cd02a543782d03d3e6740c6
tree       3df72faaacd494d5ac8c9668df4f529b1b5e4457
parent     e017507f37d5cb8b541df165a824958bc333bec3
parent     320f5ea0cedc08ef65d67e056bcb9d181386ef2c
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking changes from David S Miller:
1) Remove the ipv4 routing cache. Now lookups go directly into the FIB
trie and use prebuilt routes cached there.
No more garbage collection, no more rDOS attacks on the routing
cache. Instead we now get predictable and consistent performance,
no matter what pattern of traffic we service.
This has been almost 2 years in the making. Special thanks to
Julian Anastasov, Eric Dumazet, Steffen Klassert, and others who
have helped along the way.
I'm sure that with a change of this magnitude there will be some
kind of fallout, but such things ought to be simple to fix at this
point. Luckily I'm not European so I'll be around all of August to
fix things :-)
The major stages of this work here are each fronted by a forced
merge commit whose commit message contains a top-level description
of the motivations and implementation issues.
2) Pre-demux of established ipv4 TCP sockets, saves a route demux on
input.
3) TCP SYN/ACK performance tweaks from Eric Dumazet.
4) Add namespace support for netfilter L4 conntrack helpers, from Gao
Feng.
5) Add config mechanism for Energy Efficient Ethernet to ethtool, from
Yuval Mintz.
6) Remove quadratic behavior from /proc/net/unix, from Eric Dumazet.
7) Support for connection tracker helpers in userspace, from Pablo
Neira Ayuso.
8) Allow userspace driven TX load balancing functions in TEAM driver,
from Jiri Pirko.
9) Kill off NLMSG_PUT and RTA_PUT macros, more gross stuff with
embedded gotos.
10) TCP Small Queues, which essentially minimize the amount of TCP data
queued up in the packet scheduler layer. Whereas the existing BQL (Byte
Queue Limits) limits the pkt_sched --> netdevice queueing levels,
this controls the TCP --> pkt_sched queueing levels (see the sysctl
sketch after this list).
From Eric Dumazet.
11) Reduce the number of get_page/put_page ops done on SKB fragments,
from Alexander Duyck.
12) Implement protection against blind resets in TCP (RFC 5961), from
Eric Dumazet.
13) Support the client side of TCP Fast Open, basically the ability to
send data in the SYN exchange, from Yuchung Cheng.
The sender hands its data to a sendmsg()/sendto() call using the
MSG_FASTOPEN flag, which performs the handshake and emits the queued-up
fastopen data in the SYN (see the client sketch after this list).
14) Avoid all the problems we get into in TCP when timers or PMTU events
hit a locked socket. The TCP Small Queues changes added a
tcp_release_cb() that allows us to queue work up to the
release_sock() caller, and that's what we use here too (the deferral
pattern is sketched after this list). From Eric
Dumazet.
15) Zero copy on TX support for TUN driver, from Michael S. Tsirkin.
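The TCP Small Queues limit from item 10 is exposed as the
tcp_limit_output_bytes sysctl documented in the ip-sysctl.txt hunk further
down (default 131072). A minimal C sketch for inspecting it, assuming the
usual /proc/sys/net/ipv4/ sysctl layout:

    /* tsq_limit.c: print the per-socket TCP Small Queues byte limit. */
    #include <stdio.h>

    int main(void)
    {
            const char *path = "/proc/sys/net/ipv4/tcp_limit_output_bytes";
            FILE *f = fopen(path, "r");
            long limit;

            if (!f) {
                    perror(path);
                    return 1;
            }
            if (fscanf(f, "%ld", &limit) == 1)
                    printf("TCP Small Queues limit: %ld bytes per socket\n",
                           limit);
            fclose(f);
            return 0;
    }

Lowering the value trades some throughput for less data parked in the qdisc
layer, per the tcp_limit_output_bytes description in the documentation hunk.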
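For item 13, the tcp_fastopen text added to ip-sysctl.txt in this merge says
the client uses sendmsg()/sendto() with MSG_FASTOPEN rather than a separate
connect(). A hedged client sketch (the destination address is a placeholder,
and MSG_FASTOPEN is defined locally in case the libc headers predate it):

    /* tfo_client.c: send request data in the SYN via TCP Fast Open. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000        /* older libc headers lack it */
    #endif

    int main(void)
    {
            struct sockaddr_in dst = {
                    .sin_family = AF_INET,
                    .sin_port   = htons(80),
            };
            const char req[] = "GET / HTTP/1.0\r\n\r\n";
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }
            inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder */

            /* No separate connect(): sendto() with MSG_FASTOPEN starts the
             * handshake and, once a Fast Open cookie is cached, carries the
             * request data in the SYN itself. */
            if (sendto(fd, req, strlen(req), MSG_FASTOPEN,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0)
                    perror("sendto(MSG_FASTOPEN)");

            close(fd);
            return 0;
    }

This needs the client bit (1) of net.ipv4.tcp_fastopen enabled, as described
in the sysctl documentation hunk below.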
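Item 14 relies on deferring work until the lock holder calls release_sock().
The sketch below illustrates only the general pattern, with made-up names
(fake_sock, timer_event, release_cb); the real kernel code uses atomic flag
bits on the socket and the socket lock rather than plain fields:

    /* defer_demo.c: defer asynchronous work while an owner holds a lock. */
    #include <stdbool.h>
    #include <stdio.h>

    enum { DEFER_RETRANSMIT = 1 << 0, DEFER_MTU_REDUCED = 1 << 1 };

    struct fake_sock {
            bool owned_by_user;     /* stand-in for the socket lock state */
            unsigned int deferred;  /* stand-in for per-socket deferred flags */
    };

    /* Called from timer / ICMP context. */
    static void timer_event(struct fake_sock *sk, unsigned int flag)
    {
            if (sk->owned_by_user) {
                    sk->deferred |= flag;   /* unsafe now, remember it */
                    return;
            }
            printf("handling event %#x immediately\n", flag);
    }

    /* Called when the owner drops the lock, like release_sock() -> release_cb. */
    static void release_cb(struct fake_sock *sk)
    {
            unsigned int pending = sk->deferred;

            sk->deferred = 0;
            if (pending & DEFER_RETRANSMIT)
                    printf("running deferred retransmit work\n");
            if (pending & DEFER_MTU_REDUCED)
                    printf("running deferred PMTU update\n");
    }

    int main(void)
    {
            struct fake_sock sk = { .owned_by_user = true };

            timer_event(&sk, DEFER_MTU_REDUCED);  /* gets queued */
            sk.owned_by_user = false;
            release_cb(&sk);                      /* deferred work runs here */
            return 0;
    }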
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1870 commits)
genetlink: define lockdep_genl_is_held() when CONFIG_LOCKDEP
r8169: revert "add byte queue limit support".
ipv4: Change rt->rt_iif encoding.
net: Make skb->skb_iif always track skb->dev
ipv4: Prepare for change of rt->rt_iif encoding.
ipv4: Remove all RTCF_DIRECTSRC handliing.
ipv4: Really ignore ICMP address requests/replies.
decnet: Don't set RTCF_DIRECTSRC.
net/ipv4/ip_vti.c: Fix __rcu warnings detected by sparse.
ipv4: Remove redundant assignment
rds: set correct msg_namelen
openvswitch: potential NULL deref in sample()
tcp: dont drop MTU reduction indications
bnx2x: Add new 57840 device IDs
tcp: avoid oops in tcp_metrics and reset tcpm_stamp
niu: Change niu_rbr_fill() to use unlikely() to check niu_rbr_add_page() return value
niu: Fix to check for dma mapping errors.
net: Fix references to out-of-scope variables in put_cmsg_compat()
net: ethernet: davinci_emac: add pm_runtime support
net: ethernet: davinci_emac: Remove unnecessary #include
...
1363 files changed, 69993 insertions, 57829 deletions
diff --git a/Documentation/DocBook/80211.tmpl b/Documentation/DocBook/80211.tmpl index f3e214f9e256..42e7f030cb16 100644 --- a/Documentation/DocBook/80211.tmpl +++ b/Documentation/DocBook/80211.tmpl @@ -404,7 +404,6 @@ !Finclude/net/mac80211.h ieee80211_get_tkip_p1k !Finclude/net/mac80211.h ieee80211_get_tkip_p1k_iv !Finclude/net/mac80211.h ieee80211_get_tkip_p2k -!Finclude/net/mac80211.h ieee80211_key_removed </chapter> <chapter id="powersave"> diff --git a/Documentation/connector/cn_test.c b/Documentation/connector/cn_test.c index 7764594778d4..adcca0368d60 100644 --- a/Documentation/connector/cn_test.c +++ b/Documentation/connector/cn_test.c @@ -69,9 +69,13 @@ static int cn_test_want_notify(void) return -ENOMEM; } - nlh = NLMSG_PUT(skb, 0, 0x123, NLMSG_DONE, size - sizeof(*nlh)); + nlh = nlmsg_put(skb, 0, 0x123, NLMSG_DONE, size - sizeof(*nlh), 0); + if (!nlh) { + kfree_skb(skb); + return -EMSGSIZE; + } - msg = (struct cn_msg *)NLMSG_DATA(nlh); + msg = nlmsg_data(nlh); memset(msg, 0, size0); @@ -117,11 +121,6 @@ static int cn_test_want_notify(void) pr_info("request was sent: group=0x%x\n", ctl->group); return 0; - -nlmsg_failure: - pr_err("failed to send %u.%u\n", msg->seq, msg->ack); - kfree_skb(skb); - return -EINVAL; } #endif diff --git a/Documentation/devicetree/bindings/net/broadcom-bcm87xx.txt b/Documentation/devicetree/bindings/net/broadcom-bcm87xx.txt new file mode 100644 index 000000000000..7c86d5e28a0e --- /dev/null +++ b/Documentation/devicetree/bindings/net/broadcom-bcm87xx.txt @@ -0,0 +1,29 @@ +The Broadcom BCM87XX devices are a family of 10G Ethernet PHYs. They +have these bindings in addition to the standard PHY bindings. + +Compatible: Should contain "broadcom,bcm8706" or "broadcom,bcm8727" and + "ethernet-phy-ieee802.3-c45" + +Optional Properties: + +- broadcom,c45-reg-init : one of more sets of 4 cells. The first cell + is the MDIO Manageable Device (MMD) address, the second a register + address within the MMD, the third cell contains a mask to be ANDed + with the existing register value, and the fourth cell is ORed with + he result to yield the new register value. If the third cell has a + value of zero, no read of the existing value is performed. + +Example: + + ethernet-phy@5 { + reg = <5>; + compatible = "broadcom,bcm8706", "ethernet-phy-ieee802.3-c45"; + interrupt-parent = <&gpio>; + interrupts = <12 8>; /* Pin 12, active low */ + /* + * Set PMD Digital Control Register for + * GPIO[1] Tx/Rx + * GPIO[0] R64 Sync Acquired + */ + broadcom,c45-reg-init = <1 0xc808 0xff8f 0x70>; + }; diff --git a/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt b/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt index f31b686d4556..8ff324eaa889 100644 --- a/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt +++ b/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt @@ -11,6 +11,9 @@ Required properties: - reg : Offset and length of the register set for this device - interrupts : Interrupt tuple for this device + +Optional properties: + - clock-frequency : The oscillator frequency driving the flexcan device Example: diff --git a/Documentation/devicetree/bindings/net/davinci_emac.txt b/Documentation/devicetree/bindings/net/davinci_emac.txt new file mode 100644 index 000000000000..48b259e29e87 --- /dev/null +++ b/Documentation/devicetree/bindings/net/davinci_emac.txt @@ -0,0 +1,41 @@ +* Texas Instruments Davinci EMAC + +This file provides information, what the device node +for the davinci_emac interface contains. 
+ +Required properties: +- compatible: "ti,davinci-dm6467-emac"; +- reg: Offset and length of the register set for the device +- ti,davinci-ctrl-reg-offset: offset to control register +- ti,davinci-ctrl-mod-reg-offset: offset to control module register +- ti,davinci-ctrl-ram-offset: offset to control module ram +- ti,davinci-ctrl-ram-size: size of control module ram +- ti,davinci-rmii-en: use RMII +- ti,davinci-no-bd-ram: has the emac controller BD RAM +- phy-handle: Contains a phandle to an Ethernet PHY. + if not, davinci_emac driver defaults to 100/FULL +- interrupts: interrupt mapping for the davinci emac interrupts sources: + 4 sources: <Receive Threshold Interrupt + Receive Interrupt + Transmit Interrupt + Miscellaneous Interrupt> + +Optional properties: +- local-mac-address : 6 bytes, mac address + +Example (enbw_cmc board): + eth0: emac@1e20000 { + compatible = "ti,davinci-dm6467-emac"; + reg = <0x220000 0x4000>; + ti,davinci-ctrl-reg-offset = <0x3000>; + ti,davinci-ctrl-mod-reg-offset = <0x2000>; + ti,davinci-ctrl-ram-offset = <0>; + ti,davinci-ctrl-ram-size = <0x2000>; + local-mac-address = [ 00 00 00 00 00 00 ]; + interrupts = <33 + 34 + 35 + 36 + >; + interrupt-parent = <&intc>; + }; diff --git a/Documentation/devicetree/bindings/net/fsl-fec.txt b/Documentation/devicetree/bindings/net/fsl-fec.txt index 4616fc28ee86..d53639221403 100644 --- a/Documentation/devicetree/bindings/net/fsl-fec.txt +++ b/Documentation/devicetree/bindings/net/fsl-fec.txt @@ -7,10 +7,14 @@ Required properties: - phy-mode : String, operation mode of the PHY interface. Supported values are: "mii", "gmii", "sgmii", "tbi", "rmii", "rgmii", "rgmii-id", "rgmii-rxid", "rgmii-txid", "rtbi", "smii". -- phy-reset-gpios : Should specify the gpio for phy reset Optional properties: - local-mac-address : 6 bytes, mac address +- phy-reset-gpios : Should specify the gpio for phy reset +- phy-reset-duration : Reset duration in milliseconds. Should present + only if property "phy-reset-gpios" is available. Missing the property + will have the duration be 1 millisecond. Numbers greater than 1000 are + invalid and 1 millisecond will be used instead. Example: diff --git a/Documentation/devicetree/bindings/net/phy.txt b/Documentation/devicetree/bindings/net/phy.txt index bb8c742eb8c5..7cd18fbfcf71 100644 --- a/Documentation/devicetree/bindings/net/phy.txt +++ b/Documentation/devicetree/bindings/net/phy.txt @@ -14,10 +14,20 @@ Required properties: - linux,phandle : phandle for this node; likely referenced by an ethernet controller node. +Optional Properties: + +- compatible: Compatible list, may contain + "ethernet-phy-ieee802.3-c22" or "ethernet-phy-ieee802.3-c45" for + PHYs that implement IEEE802.3 clause 22 or IEEE802.3 clause 45 + specifications. If neither of these are specified, the default is to + assume clause 22. The compatible list may also contain other + elements. 
+ Example: ethernet-phy@0 { - linux,phandle = <2452000> + compatible = "ethernet-phy-ieee802.3-c22"; + linux,phandle = <2452000>; interrupt-parent = <40000>; interrupts = <35 1>; reg = <0>; diff --git a/Documentation/devicetree/bindings/net/stmmac.txt b/Documentation/devicetree/bindings/net/stmmac.txt index 1f62623f8c3f..060bbf098ef3 100644 --- a/Documentation/devicetree/bindings/net/stmmac.txt +++ b/Documentation/devicetree/bindings/net/stmmac.txt @@ -1,7 +1,8 @@ * STMicroelectronics 10/100/1000 Ethernet driver (GMAC) Required properties: -- compatible: Should be "st,spear600-gmac" +- compatible: Should be "snps,dwmac-<ip_version>" "snps,dwmac" + For backwards compatibility: "st,spear600-gmac" is also supported. - reg: Address and length of the register set for the device - interrupt-parent: Should be the phandle for the interrupt controller that services interrupts for this device diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt index 56000b33340b..61d1a89baeaf 100644 --- a/Documentation/feature-removal-schedule.txt +++ b/Documentation/feature-removal-schedule.txt @@ -249,15 +249,6 @@ Who: Ravikiran Thirumalai <kiran@scalex86.org> --------------------------- -What: Code that is now under CONFIG_WIRELESS_EXT_SYSFS - (in net/core/net-sysfs.c) -When: 3.5 -Why: Over 1K .text/.data size reduction, data is available in other - ways (ioctls) -Who: Johannes Berg <johannes@sipsolutions.net> - ---------------------------- - What: sysfs ui for changing p4-clockmod parameters When: September 2009 Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and @@ -414,21 +405,6 @@ Who: Jean Delvare <khali@linux-fr.org> ---------------------------- -What: xt_connlimit rev 0 -When: 2012 -Who: Jan Engelhardt <jengelh@medozas.de> -Files: net/netfilter/xt_connlimit.c - ----------------------------- - -What: ipt_addrtype match include file -When: 2012 -Why: superseded by xt_addrtype -Who: Florian Westphal <fw@strlen.de> -Files: include/linux/netfilter_ipv4/ipt_addrtype.h - ----------------------------- - What: i2c_driver.attach_adapter i2c_driver.detach_adapter When: September 2011 @@ -449,6 +425,19 @@ Who: Hans Verkuil <hans.verkuil@cisco.com> ---------------------------- +What: CONFIG_CFG80211_WEXT +When: as soon as distributions ship new wireless tools, ie. wpa_supplicant 1.0 + and NetworkManager/connman/etc. that are able to use nl80211 +Why: Wireless extensions are deprecated, and userland tools are moving to + using nl80211. New drivers are no longer using wireless extensions, + and while there might still be old drivers, both new drivers and new + userland no longer needs them and they can't be used for an feature + developed in the past couple of years. As such, compatibility with + wireless extensions in new drivers will be removed. +Who: Johannes Berg <johannes@sipsolutions.net> + +---------------------------- + What: g_file_storage driver When: 3.8 Why: This driver has been superseded by g_mass_storage. 
@@ -589,6 +578,13 @@ Why: Remount currently allows changing bound subsystems and ---------------------------- +What: xt_recent rev 0 +When: 2013 +Who: Pablo Neira Ayuso <pablo@netfilter.org> +Files: net/netfilter/xt_recent.c + +---------------------------- + What: KVM debugfs statistics When: 2013 Why: KVM tracepoints provide mostly equivalent information in a much more diff --git a/Documentation/networking/batman-adv.txt b/Documentation/networking/batman-adv.txt index 75a592365af9..8f3ae4a6147e 100644 --- a/Documentation/networking/batman-adv.txt +++ b/Documentation/networking/batman-adv.txt @@ -211,6 +211,11 @@ The debug output can be changed at runtime using the file will enable debug messages for when routes change. +Counters for different types of packets entering and leaving the +batman-adv module are available through ethtool: + +# ethtool --statistics bat0 + BATCTL ------ diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt index bfea8a338901..6b1c7110534e 100644 --- a/Documentation/networking/bonding.txt +++ b/Documentation/networking/bonding.txt @@ -1210,7 +1210,7 @@ options, you may wish to use the "max_bonds" module parameter, documented above. To create multiple bonding devices with differing options, it is -preferrable to use bonding parameters exported by sysfs, documented in the +preferable to use bonding parameters exported by sysfs, documented in the section below. For versions of bonding without sysfs support, the only means to @@ -1950,7 +1950,7 @@ access to fail over to. Additionally, the bonding load balance modes support link monitoring of their members, so if individual links fail, the load will be rebalanced across the remaining devices. - See Section 13, "Configuring Bonding for Maximum Throughput" + See Section 12, "Configuring Bonding for Maximum Throughput" for information on configuring bonding with one peer device. 11.2 High Availability in a Multiple Switch Topology @@ -2620,7 +2620,7 @@ be found at: https://lists.sourceforge.net/lists/listinfo/bonding-devel - Discussions regarding the developpement of the bonding driver take place + Discussions regarding the development of the bonding driver take place on the main Linux network mailing list, hosted at vger.kernel.org. The list address is: diff --git a/Documentation/networking/bridge.txt b/Documentation/networking/bridge.txt index a7ba5e4e2c91..a27cb6214ed7 100644 --- a/Documentation/networking/bridge.txt +++ b/Documentation/networking/bridge.txt @@ -1,7 +1,14 @@ In order to use the Ethernet bridging functionality, you'll need the -userspace tools. These programs and documentation are available -at http://www.linuxfoundation.org/en/Net:Bridge. The download page is -http://prdownloads.sourceforge.net/bridge. +userspace tools. + +Documentation for Linux bridging is on: + http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge + +The bridge-utilities are maintained at: + git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/bridge-utils.git + +Additionally, the iproute2 utilities can be used to configure +bridge devices. If you still have questions, don't hesitate to post to the mailing list (more info https://lists.linux-foundation.org/mailman/listinfo/bridge). diff --git a/Documentation/networking/caif/Linux-CAIF.txt b/Documentation/networking/caif/Linux-CAIF.txt index e52fd62bef3a..0aa4bd381bec 100644 --- a/Documentation/networking/caif/Linux-CAIF.txt +++ b/Documentation/networking/caif/Linux-CAIF.txt @@ -19,60 +19,36 @@ and host. 
Currently, UART and Loopback are available for Linux. Architecture: ------------ The implementation of CAIF is divided into: -* CAIF Socket Layer, Kernel API, and Net Device. +* CAIF Socket Layer and GPRS IP Interface. * CAIF Core Protocol Implementation * CAIF Link Layer, implemented as NET devices. RTNL ! - ! +------+ +------+ +------+ - ! +------+! +------+! +------+! - ! ! Sock !! !Kernel!! ! Net !! - ! ! API !+ ! API !+ ! Dev !+ <- CAIF Client APIs - ! +------+ +------! +------+ - ! ! ! ! - ! +----------!----------+ - ! +------+ <- CAIF Protocol Implementation - +-------> ! CAIF ! - ! Core ! - +------+ - +--------!--------+ - ! ! - +------+ +-----+ - ! ! ! TTY ! <- Link Layer (Net Devices) - +------+ +-----+ - - -Using the Kernel API ----------------------- -The Kernel API is used for accessing CAIF channels from the -kernel. -The user of the API has to implement two callbacks for receive -and control. -The receive callback gives a CAIF packet as a SKB. The control -callback will -notify of channel initialization complete, and flow-on/flow- -off. - - - struct caif_device caif_dev = { - .caif_config = { - .name = "MYDEV" - .type = CAIF_CHTY_AT - } - .receive_cb = my_receive, - .control_cb = my_control, - }; - caif_add_device(&caif_dev); - caif_transmit(&caif_dev, skb); - -See the caif_kernel.h for details about the CAIF kernel API. + ! +------+ +------+ + ! +------+! +------+! + ! ! IP !! !Socket!! + +-------> !interf!+ ! API !+ <- CAIF Client APIs + ! +------+ +------! + ! ! ! + ! +-----------+ + ! ! + ! +------+ <- CAIF Core Protocol + ! ! CAIF ! + ! ! Core ! + ! +------+ + ! +----------!---------+ + ! ! ! ! + ! +------+ +-----+ +------+ + +--> ! HSI ! ! TTY ! ! USB ! <- Link Layer (Net Devices) + +------+ +-----+ +------+ + I M P L E M E N T A T I O N =========================== -=========================== + CAIF Core Protocol Layer ========================================= @@ -88,17 +64,13 @@ The Core CAIF implementation contains: - Simple implementation of CAIF. - Layered architecture (a la Streams), each layer in the CAIF specification is implemented in a separate c-file. - - Clients must implement PHY layer to access physical HW - with receive and transmit functions. - Clients must call configuration function to add PHY layer. - Clients must implement CAIF layer to consume/produce CAIF payload with receive and transmit functions. - Clients must call configuration function to add and connect the Client layer. - When receiving / transmitting CAIF Packets (cfpkt), ownership is passed - to the called function (except for framing layers' receive functions - or if a transmit function returns an error, in which case the caller - must free the packet). + to the called function (except for framing layers' receive function) Layered Architecture -------------------- @@ -109,11 +81,6 @@ Implementation. The support functions include: CAIF Packet has functions for creating, destroying and adding content and for adding/extracting header and trailers to protocol packets. - - CFLST CAIF list implementation. - - - CFGLUE CAIF Glue. Contains OS Specifics, such as memory - allocation, endianness, etc. - The CAIF Protocol implementation contains: - CFCNFG CAIF Configuration layer. Configures the CAIF Protocol @@ -128,7 +95,7 @@ The CAIF Protocol implementation contains: control and remote shutdown requests. - CFVEI CAIF VEI layer. Handles CAIF AT Channels on VEI (Virtual - External Interface). This layer encodes/decodes VEI frames. + External Interface). This layer encodes/decodes VEI frames. 
- CFDGML CAIF Datagram layer. Handles CAIF Datagram layer (IP traffic), encodes/decodes Datagram frames. @@ -170,7 +137,7 @@ The CAIF Protocol implementation contains: +---------+ +---------+ ! ! +---------+ +---------+ - | | | Serial | + | | | Serial | | | | CFSERL | +---------+ +---------+ @@ -186,24 +153,20 @@ In this layered approach the following "rules" apply. layer->dn->transmit(layer->dn, packet); -Linux Driver Implementation +CAIF Socket and IP interface =========================== -Linux GPRS Net Device and CAIF socket are implemented on top of the -CAIF Core protocol. The Net device and CAIF socket have an instance of +The IP interface and CAIF socket API are implemented on top of the +CAIF Core protocol. The IP Interface and CAIF socket have an instance of 'struct cflayer', just like the CAIF Core protocol stack. Net device and Socket implement the 'receive()' function defined by 'struct cflayer', just like the rest of the CAIF stack. In this way, transmit and receive of packets is handled as by the rest of the layers: the 'dn->transmit()' function is called in order to transmit data. -The layer on top of the CAIF Core implementation is -sometimes referred to as the "Client layer". - - Configuration of Link Layer --------------------------- -The Link Layer is implemented as Linux net devices (struct net_device). +The Link Layer is implemented as Linux network devices (struct net_device). Payload handling and registration is done using standard Linux mechanisms. The CAIF Protocol relies on a loss-less link layer without implementing diff --git a/Documentation/networking/can.txt b/Documentation/networking/can.txt index ac295399f0d4..820f55344edc 100644 --- a/Documentation/networking/can.txt +++ b/Documentation/networking/can.txt @@ -22,7 +22,8 @@ This file contains 4.1.2 RAW socket option CAN_RAW_ERR_FILTER 4.1.3 RAW socket option CAN_RAW_LOOPBACK 4.1.4 RAW socket option CAN_RAW_RECV_OWN_MSGS - 4.1.5 RAW socket returned message flags + 4.1.5 RAW socket option CAN_RAW_FD_FRAMES + 4.1.6 RAW socket returned message flags 4.2 Broadcast Manager protocol sockets (SOCK_DGRAM) 4.3 connected transport protocols (SOCK_SEQPACKET) 4.4 unconnected transport protocols (SOCK_DGRAM) @@ -41,7 +42,8 @@ This file contains 6.5.1 Netlink interface to set/get devices properties 6.5.2 Setting the CAN bit-timing 6.5.3 Starting and stopping the CAN network device - 6.6 supported CAN hardware + 6.6 CAN FD (flexible data rate) driver support + 6.7 supported CAN hardware 7 Socket CAN resources @@ -232,16 +234,16 @@ solution for a couple of reasons: arbitration problems and error frames caused by the different ECUs. The occurrence of detected errors are important for diagnosis and have to be logged together with the exact timestamp. For this - reason the CAN interface driver can generate so called Error Frames - that can optionally be passed to the user application in the same - way as other CAN frames. Whenever an error on the physical layer + reason the CAN interface driver can generate so called Error Message + Frames that can optionally be passed to the user application in the + same way as other CAN frames. Whenever an error on the physical layer or the MAC layer is detected (e.g. by the CAN controller) the driver - creates an appropriate error frame. Error frames can be requested by - the user application using the common CAN filter mechanisms. Inside - this filter definition the (interested) type of errors may be - selected. The reception of error frames is disabled by default. 
- The format of the CAN error frame is briefly described in the Linux - header file "include/linux/can/error.h". + creates an appropriate error message frame. Error messages frames can + be requested by the user application using the common CAN filter + mechanisms. Inside this filter definition the (interested) type of + errors may be selected. The reception of error messages is disabled + by default. The format of the CAN error message frame is briefly + described in the Linux header file "include/linux/can/error.h". 4. How to use Socket CAN ------------------------ @@ -273,7 +275,7 @@ solution for a couple of reasons: struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ - __u8 can_dlc; /* data length code: 0 .. 8 */ + __u8 can_dlc; /* frame payload length in byte (0 .. 8) */ __u8 data[8] __attribute__((aligned(8))); }; @@ -375,6 +377,51 @@ solution for a couple of reasons: nbytes = sendto(s, &frame, sizeof(struct can_frame), 0, (struct sockaddr*)&addr, sizeof(addr)); + Remark about CAN FD (flexible data rate) support: + + Generally the handling of CAN FD is very similar to the formerly described + examples. The new CAN FD capable CAN controllers support two different + bitrates for the arbitration phase and the payload phase of the CAN FD frame + and up to 64 bytes of payload. This extended payload length breaks all the + kernel interfaces (ABI) which heavily rely on the CAN frame with fixed eight + bytes of payload (struct can_frame) like the CAN_RAW socket. Therefore e.g. + the CAN_RAW socket supports a new socket option CAN_RAW_FD_FRAMES that + switches the socket into a mode that allows the handling of CAN FD frames + and (legacy) CAN frames simultaneously (see section 4.1.5). + + The struct canfd_frame is defined in include/linux/can.h: + + struct canfd_frame { + canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ + __u8 len; /* frame payload length in byte (0 .. 64) */ + __u8 flags; /* additional flags for CAN FD */ + __u8 __res0; /* reserved / padding */ + __u8 __res1; /* reserved / padding */ + __u8 data[64] __attribute__((aligned(8))); + }; + + The struct canfd_frame and the existing struct can_frame have the can_id, + the payload length and the payload data at the same offset inside their + structures. This allows to handle the different structures very similar. + When the content of a struct can_frame is copied into a struct canfd_frame + all structure elements can be used as-is - only the data[] becomes extended. + + When introducing the struct canfd_frame it turned out that the data length + code (DLC) of the struct can_frame was used as a length information as the + length and the DLC has a 1:1 mapping in the range of 0 .. 8. To preserve + the easy handling of the length information the canfd_frame.len element + contains a plain length value from 0 .. 64. So both canfd_frame.len and + can_frame.can_dlc are equal and contain a length information and no DLC. + For details about the distinction of CAN and CAN FD capable devices and + the mapping to the bus-relevant data length code (DLC), see chapter 6.6. + + The length of the two CAN(FD) frame structures define the maximum transfer + unit (MTU) of the CAN(FD) network interface and skbuff data length. 
Two + definitions are specified for CAN specific MTUs in include/linux/can.h : + + #define CAN_MTU (sizeof(struct can_frame)) == 16 => 'legacy' CAN frame + #define CANFD_MTU (sizeof(struct canfd_frame)) == 72 => CAN FD frame + 4.1 RAW protocol sockets with can_filters (SOCK_RAW) Using CAN_RAW sockets is extensively comparable to the commonly @@ -383,7 +430,7 @@ solution for a couple of reasons: defaults are set at RAW socket binding time: - The filters are set to exactly one filter receiving everything - - The socket only receives valid data frames (=> no error frames) + - The socket only receives valid data frames (=> no error message frames) - The loopback of sent CAN frames is enabled (see chapter 3.2) - The socket does not receive its own sent frames (in loopback mode) @@ -434,7 +481,7 @@ solution for a couple of reasons: 4.1.2 RAW socket option CAN_RAW_ERR_FILTER As described in chapter 3.4 the CAN interface driver can generate so - called Error Frames that can optionally be passed to the user + called Error Message Frames that can optionally be passed to the user application in the same way as other CAN frames. The possible errors are divided into different error classes that may be filtered using the appropriate error mask. To register for every possible @@ -472,7 +519,69 @@ solution for a couple of reasons: setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &recv_own_msgs, sizeof(recv_own_msgs)); - 4.1.5 RAW socket returned message flags + 4.1.5 RAW socket option CAN_RAW_FD_FRAMES + + CAN FD support in CAN_RAW sockets can be enabled with a new socket option + CAN_RAW_FD_FRAMES which is off by default. When the new socket option is + not supported by the CAN_RAW socket (e.g. on older kernels), switching the + CAN_RAW_FD_FRAMES option returns the error -ENOPROTOOPT. + + Once CAN_RAW_FD_FRAMES is enabled the application can send both CAN frames + and CAN FD frames. OTOH the application has to handle CAN and CAN FD frames + when reading from the socket. + + CAN_RAW_FD_FRAMES enabled: CAN_MTU and CANFD_MTU are allowed + CAN_RAW_FD_FRAMES disabled: only CAN_MTU is allowed (default) + + Example: + [ remember: CANFD_MTU == sizeof(struct canfd_frame) ] + + struct canfd_frame cfd; + + nbytes = read(s, &cfd, CANFD_MTU); + + if (nbytes == CANFD_MTU) { + printf("got CAN FD frame with length %d\n", cfd.len); + /* cfd.flags contains valid data */ + } else if (nbytes == CAN_MTU) { + printf("got legacy CAN frame with length %d\n", cfd.len); + /* cfd.flags is undefined */ + } else { + fprintf(stderr, "read: invalid CAN(FD) frame\n"); + return 1; + } + + /* the content can be handled independently from the received MTU size */ + + printf("can_id: %X data length: %d data: ", cfd.can_id, cfd.len); + for (i = 0; i < cfd.len; i++) + printf("%02X ", cfd.data[i]); + + When reading with size CANFD_MTU only returns CAN_MTU bytes that have + been received from the socket a legacy CAN frame has been read into the + provided CAN FD structure. Note that the canfd_frame.flags data field is + not specified in the struct can_frame and therefore it is only valid in + CANFD_MTU sized CAN FD frames. + + As long as the payload length is <=8 the received CAN frames from CAN FD + capable CAN devices can be received and read by legacy sockets too. When + user-generated CAN FD frames have a payload length <=8 these can be send + by legacy CAN network interfaces too. Sending CAN FD frames with payload + length > 8 to a legacy CAN network interface returns an -EMSGSIZE error. 
+ + Implementation hint for new CAN applications: + + To build a CAN FD aware application use struct canfd_frame as basic CAN + data structure for CAN_RAW based applications. When the application is + executed on an older Linux kernel and switching the CAN_RAW_FD_FRAMES + socket option returns an error: No problem. You'll get legacy CAN frames + or CAN FD frames and can process them the same way. + + When sending to CAN devices make sure that the device is capable to handle + CAN FD frames by checking if the device maximum transfer unit is CANFD_MTU. + The CAN device MTU can be retrieved e.g. with a SIOCGIFMTU ioctl() syscall. + + 4.1.6 RAW socket returned message flags When using recvmsg() call, the msg->msg_flags may contain following flags: @@ -527,7 +636,7 @@ solution for a couple of reasons: rcvlist_all - list for unfiltered entries (no filter operations) rcvlist_eff - list for single extended frame (EFF) entries - rcvlist_err - list for error frames masks + rcvlist_err - list for error message frames masks rcvlist_fil - list for mask/value filters rcvlist_inv - list for mask/value filters (inverse semantic) rcvlist_sff - list for single standard frame (SFF) entries @@ -573,10 +682,13 @@ solution for a couple of reasons: dev->type = ARPHRD_CAN; /* the netdevice hardware type */ dev->flags = IFF_NOARP; /* CAN has no arp */ - dev->mtu = sizeof(struct can_frame); + dev->mtu = CAN_MTU; /* sizeof(struct can_frame) -> legacy CAN interface */ - The struct can_frame is the payload of each socket buffer in the - protocol family PF_CAN. + or alternative, when the controller supports CAN with flexible data rate: + dev->mtu = CANFD_MTU; /* sizeof(struct canfd_frame) -> CAN FD interface */ + + The struct can_frame or struct canfd_frame is the payload of each socket + buffer (skbuff) in the protocol family PF_CAN. 6.2 local loopback of sent frames @@ -784,15 +896,41 @@ solution for a couple of reasons: $ ip link set canX type can restart-ms 100 Alternatively, the application may realize the "bus-off" condition - by monitoring CAN error frames and do a restart when appropriate with - the command: + by monitoring CAN error message frames and do a restart when + appropriate with the command: $ ip link set canX type can restart - Note that a restart will also create a CAN error frame (see also - chapter 3.4). + Note that a restart will also create a CAN error message frame (see + also chapter 3.4). + + 6.6 CAN FD (flexible data rate) driver support + + CAN FD capable CAN controllers support two different bitrates for the + arbitration phase and the payload phase of the CAN FD frame. Therefore a + second bittiming has to be specified in order to enable the CAN FD bitrate. + + Additionally CAN FD capable CAN controllers support up to 64 bytes of + payload. The representation of this length in can_frame.can_dlc and + canfd_frame.len for userspace applications and inside the Linux network + layer is a plain value from 0 .. 64 instead of the CAN 'data length code'. + The data length code was a 1:1 mapping to the payload length in the legacy + CAN frames anyway. The payload length to the bus-relevant DLC mapping is + only performed inside the CAN drivers, preferably with the helper + functions can_dlc2len() and can_len2dlc(). 
+ + The CAN netdevice driver capabilities can be distinguished by the network + devices maximum transfer unit (MTU): + + MTU = 16 (CAN_MTU) => sizeof(struct can_frame) => 'legacy' CAN device + MTU = 72 (CANFD_MTU) => sizeof(struct canfd_frame) => CAN FD capable device + + The CAN device MTU can be retrieved e.g. with a SIOCGIFMTU ioctl() syscall. + N.B. CAN FD capable devices can also handle and send legacy CAN frames. + + FIXME: Add details about the CAN FD controller configuration when available. - 6.6 Supported CAN hardware + 6.7 Supported CAN hardware Please check the "Kconfig" file in "drivers/net/can" to get an actual list of the support CAN hardware. On the Socket CAN project website diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index 6f896b94abdc..406a5226220d 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -468,6 +468,19 @@ tcp_syncookies - BOOLEAN SYN flood warnings in logs not being really flooded, your server is seriously misconfigured. +tcp_fastopen - INTEGER + Enable TCP Fast Open feature (draft-ietf-tcpm-fastopen) to send data + in the opening SYN packet. To use this feature, the client application + must not use connect(). Instead, it should use sendmsg() or sendto() + with MSG_FASTOPEN flag which performs a TCP handshake automatically. + + The values (bitmap) are: + 1: Enables sending data in the opening SYN on the client + 5: Enables sending data in the opening SYN on the client regardless + of cookie availability. + + Default: 0 + tcp_syn_retries - INTEGER Number of times initial SYNs for an active TCP connection attempt will be retransmitted. Should not be higher than 255. Default value @@ -551,6 +564,25 @@ tcp_thin_dupack - BOOLEAN Documentation/networking/tcp-thin.txt Default: 0 +tcp_limit_output_bytes - INTEGER + Controls TCP Small Queue limit per tcp socket. + TCP bulk sender tends to increase packets in flight until it + gets losses notifications. With SNDBUF autotuning, this can + result in a large amount of packets queued in qdisc/device + on the local machine, hurting latency of other flows, for + typical pfifo_fast qdiscs. + tcp_limit_output_bytes limits the number of bytes on qdisc + or device to reduce artificial RTT/cwnd and reduce bufferbloat. + Note: For GSO/TSO enabled flows, we try to have at least two + packets in flight. Reducing tcp_limit_output_bytes might also + reduce the size of individual GSO packet (64KB being the max) + Default: 131072 + +tcp_challenge_ack_limit - INTEGER + Limits number of Challenge ACK sent per second, as recommended + in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks) + Default: 100 + UDP variables: udp_mem - vector of 3 INTEGERs: min, pressure, max @@ -857,9 +889,19 @@ accept_source_route - BOOLEAN FALSE (host) accept_local - BOOLEAN - Accept packets with local source addresses. In combination with - suitable routing, this can be used to direct packets between two - local interfaces over the wire and have them accepted properly. + Accept packets with local source addresses. In combination + with suitable routing, this can be used to direct packets + between two local interfaces over the wire and have them + accepted properly. + + rp_filter must be set to a non-zero value in order for + accept_local to have an effect. + + default FALSE + +route_localnet - BOOLEAN + Do not consider loopback addresses as martian source or destination + while routing. This enables the use of 127/8 for local routing purposes. 
default FALSE rp_filter - INTEGER @@ -1398,6 +1440,20 @@ path_max_retrans - INTEGER Default: 5 +pf_retrans - INTEGER + The number of retransmissions that will be attempted on a given path + before traffic is redirected to an alternate transport (should one + exist). Note this is distinct from path_max_retrans, as a path that + passes the pf_retrans threshold can still be used. Its only + deprioritized when a transmission path is selected by the stack. This + setting is primarily used to enable fast failover mechanisms without + having to reduce path_max_retrans to a very low value. See: + http://www.ietf.org/id/draft-nishida-tsvwg-sctp-failover-05.txt + for details. Note also that a value of pf_retrans > path_max_retrans + disables this feature + + Default: 0 + rto_initial - INTEGER The initial round trip timeout value in milliseconds that will be used in calculating round trip times. This is the initial time interval diff --git a/Documentation/networking/openvswitch.txt b/Documentation/networking/openvswitch.txt index b8a048b8df3a..8fa2dd1e792e 100644 --- a/Documentation/networking/openvswitch.txt +++ b/Documentation/networking/openvswitch.txt @@ -118,7 +118,7 @@ essentially like this, ignoring metadata: Naively, to add VLAN support, it makes sense to add a new "vlan" flow key attribute to contain the VLAN tag, then continue to decode the encapsulated headers beyond the VLAN tag using the existing field -definitions. With this change, an TCP packet in VLAN 10 would have a +definitions. With this change, a TCP packet in VLAN 10 would have a flow key much like this: eth(...), vlan(vid=10, pcp=0), eth_type(0x0800), ip(proto=6, ...), tcp(...) diff --git a/Documentation/networking/s2io.txt b/Documentation/networking/s2io.txt index 4be0c039edbc..d2a9f43b5546 100644 --- a/Documentation/networking/s2io.txt +++ b/Documentation/networking/s2io.txt @@ -136,16 +136,6 @@ For more information, please review the AMD8131 errata at http://vip.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/ 26310_AMD-8131_HyperTransport_PCI-X_Tunnel_Revision_Guide_rev_3_18.pdf -6. Available Downloads -Neterion "s2io" driver in Red Hat and Suse 2.6-based distributions is kept up -to date, also the latest "s2io" code (including support for 2.4 kernels) is -available via "Support" link on the Neterion site: http://www.neterion.com. - -For Xframe User Guide (Programming manual), visit ftp site ns1.s2io.com, -user: linuxdocs password: HALdocs - -7. Support +6. Support For further support please contact either your 10GbE Xframe NIC vendor (IBM, -HP, SGI etc.) or click on the "Support" link on the Neterion site: -http://www.neterion.com. - +HP, SGI etc.) diff --git a/Documentation/networking/stmmac.txt b/Documentation/networking/stmmac.txt index 5cb9a1972460..c676b9cedbd0 100644 --- a/Documentation/networking/stmmac.txt +++ b/Documentation/networking/stmmac.txt @@ -257,9 +257,11 @@ reset procedure etc). o Makefile o stmmac_main.c: main network device driver; o stmmac_mdio.c: mdio functions; + o stmmac_pci: PCI driver; + o stmmac_platform.c: platform driver o stmmac_ethtool.c: ethtool support; o stmmac_timer.[ch]: timer code used for mitigating the driver dma interrupts - Only tested on ST40 platforms based. + (only tested on ST40 platforms based); o stmmac.h: private driver structure; o common.h: common definitions and VFTs; o descs.h: descriptor structure definitions; @@ -269,9 +271,11 @@ reset procedure etc). 
o dwmac100_core: MAC 100 core and dma code; o dwmac100_dma.c: dma funtions for the MAC chip; o dwmac1000.h: specific header file for the MAC; - o dwmac_lib.c: generic DMA functions shared among chips - o enh_desc.c: functions for handling enhanced descriptors - o norm_desc.c: functions for handling normal descriptors + o dwmac_lib.c: generic DMA functions shared among chips; + o enh_desc.c: functions for handling enhanced descriptors; + o norm_desc.c: functions for handling normal descriptors; + o chain_mode.c/ring_mode.c:: functions to manage RING/CHAINED modes; + o mmc_core.c/mmc.h: Management MAC Counters; 5) Debug Information @@ -304,7 +308,27 @@ All these are only useful during the developing stage and should never enabled inside the code for general usage. In fact, these can generate an huge amount of debug messages. -6) TODO: +6) Energy Efficient Ethernet + +Energy Efficient Ethernet(EEE) enables IEEE 802.3 MAC sublayer along +with a family of Physical layer to operate in the Low power Idle(LPI) +mode. The EEE mode supports the IEEE 802.3 MAC operation at 100Mbps, +1000Mbps & 10Gbps. + +The LPI mode allows power saving by switching off parts of the +communication device functionality when there is no data to be +transmitted & received. The system on both the side of the link can +disable some functionalities & save power during the period of low-link +utilization. The MAC controls whether the system should enter or exit +the LPI mode & communicate this to PHY. + +As soon as the interface is opened, the driver verifies if the EEE can +be supported. This is done by looking at both the DMA HW capability +register and the PHY devices MCD registers. +To enter in Tx LPI mode the driver needs to have a software timer +that enable and disable the LPI mode when there is nothing to be +transmitted. + +7) TODO: o XGMAC is not supported. - o Add the EEE - Energy Efficient Ethernet o Add the PTP - precision time protocol diff --git a/Documentation/networking/vxge.txt b/Documentation/networking/vxge.txt index d2e2997e6fa0..bb76c667a476 100644 --- a/Documentation/networking/vxge.txt +++ b/Documentation/networking/vxge.txt @@ -91,10 +91,3 @@ v) addr_learn_en virtualization environment. Valid range: 0,1 (disabled, enabled respectively) Default: 0 - -4) Troubleshooting: -------------------- - -To resolve an issue with the source code or X3100 series adapter, please collect -the statistics, register dumps using ethool, relevant logs and email them to -support@neterion.com. diff --git a/Documentation/nfc/nfc-hci.txt b/Documentation/nfc/nfc-hci.txt index 320f9336c781..89a339c9b079 100644 --- a/Documentation/nfc/nfc-hci.txt +++ b/Documentation/nfc/nfc-hci.txt @@ -178,3 +178,36 @@ ANY_GET_PARAMETER to the reader A gate to get information on the target that was discovered). Typically, such an event will be propagated to NFC Core from MSGRXWQ context. + +Error management +---------------- + +Errors that occur synchronously with the execution of an NFC Core request are +simply returned as the execution result of the request. These are easy. + +Errors that occur asynchronously (e.g. in a background protocol handling thread) +must be reported such that upper layers don't stay ignorant that something +went wrong below and know that expected events will probably never happen. +Handling of these errors is done as follows: + +- driver (pn544) fails to deliver an incoming frame: it stores the error such +that any subsequent call to the driver will result in this error. 
Then it calls +the standard nfc_shdlc_recv_frame() with a NULL argument to report the problem +above. shdlc stores a EREMOTEIO sticky status, which will trigger SMW to +report above in turn. + +- SMW is basically a background thread to handle incoming and outgoing shdlc +frames. This thread will also check the shdlc sticky status and report to HCI +when it discovers it is not able to run anymore because of an unrecoverable +error that happened within shdlc or below. If the problem occurs during shdlc +connection, the error is reported through the connect completion. + +- HCI: if an internal HCI error happens (frame is lost), or HCI is reported an +error from a lower layer, HCI will either complete the currently executing +command with that error, or notify NFC Core directly if no command is executing. + +- NFC Core: when NFC Core is notified of an error from below and polling is +active, it will send a tag discovered event with an empty tag list to the user +space to let it know that the poll operation will never be able to detect a tag. +If polling is not active and the error was sticky, lower levels will return it +at next invocation. diff --git a/MAINTAINERS b/MAINTAINERS index 3086d4b12711..7316ab62e5af 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -329,7 +329,7 @@ F: drivers/hwmon/adm1029.c ADM8211 WIRELESS DRIVER L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/ +W: http://wireless.kernel.org/ S: Orphan F: drivers/net/wireless/adm8211.* @@ -1441,7 +1441,7 @@ B43 WIRELESS DRIVER M: Stefano Brivio <stefano.brivio@polimi.it> L: linux-wireless@vger.kernel.org L: b43-dev@lists.infradead.org -W: http://linuxwireless.org/en/users/Drivers/b43 +W: http://wireless.kernel.org/en/users/Drivers/b43 S: Maintained F: drivers/net/wireless/b43/ @@ -1450,7 +1450,7 @@ M: Larry Finger <Larry.Finger@lwfinger.net> M: Stefano Brivio <stefano.brivio@polimi.it> L: linux-wireless@vger.kernel.org L: b43-dev@lists.infradead.org -W: http://linuxwireless.org/en/users/Drivers/b43 +W: http://wireless.kernel.org/en/users/Drivers/b43 S: Maintained F: drivers/net/wireless/b43legacy/ @@ -1613,6 +1613,7 @@ M: Arend van Spriel <arend@broadcom.com> M: Franky (Zhenhui) Lin <frankyl@broadcom.com> M: Kan Yan <kanyan@broadcom.com> L: linux-wireless@vger.kernel.org +L: brcm80211-dev-list@broadcom.com S: Supported F: drivers/net/wireless/brcm80211/ @@ -3679,14 +3680,6 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi.git S: Supported F: drivers/net/wireless/iwlwifi/ -INTEL WIRELESS MULTICOMM 3200 WIFI (iwmc3200wifi) -M: Samuel Ortiz <samuel.ortiz@intel.com> -M: Intel Linux Wireless <ilw@linux.intel.com> -L: linux-wireless@vger.kernel.org -S: Supported -W: http://wireless.kernel.org/en/users/Drivers/iwmc3200wifi -F: drivers/net/wireless/iwmc3200wifi/ - INTEL MANAGEMENT ENGINE (mei) M: Tomas Winkler <tomas.winkler@intel.com> L: linux-kernel@vger.kernel.org @@ -4370,7 +4363,7 @@ F: arch/m68k/hp300/ MAC80211 M: Johannes Berg <johannes@sipsolutions.net> L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/ +W: http://wireless.kernel.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git S: Maintained @@ -4382,7 +4375,7 @@ MAC80211 PID RATE CONTROL M: Stefano Brivio <stefano.brivio@polimi.it> M: Mattias Nissler <mattias.nissler@gmx.de> L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/en/developers/Documentation/mac80211/RateControl/PID +W: 
http://wireless.kernel.org/en/developers/Documentation/mac80211/RateControl/PID T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git S: Maintained @@ -4611,7 +4604,6 @@ S: Maintained F: drivers/usb/musb/ MYRICOM MYRI-10G 10GbE DRIVER (MYRI10GE) -M: Jon Mason <mason@myri.com> M: Andrew Gallatin <gallatin@myri.com> L: netdev@vger.kernel.org W: http://www.myri.com/scs/download-Myri10GE.html @@ -4656,8 +4648,6 @@ F: net/sched/sch_netem.c NETERION 10GbE DRIVERS (s2io/vxge) M: Jon Mason <jdmason@kudzu.us> L: netdev@vger.kernel.org -W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/Linux?Anonymous -W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/X3100Linux?Anonymous S: Supported F: Documentation/networking/s2io.txt F: Documentation/networking/vxge.txt @@ -5068,7 +5058,7 @@ F: fs/ocfs2/ ORINOCO DRIVER L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/en/users/Drivers/orinoco +W: http://wireless.kernel.org/en/users/Drivers/orinoco W: http://www.nongnu.org/orinoco/ S: Orphan F: drivers/net/wireless/orinoco/ @@ -5772,7 +5762,7 @@ F: net/rose/ RTL8180 WIRELESS DRIVER M: "John W. Linville" <linville@tuxdriver.com> L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/ +W: http://wireless.kernel.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git S: Maintained F: drivers/net/wireless/rtl818x/rtl8180/ @@ -5782,7 +5772,7 @@ M: Herton Ronaldo Krzesinski <herton@canonical.com> M: Hin-Tak Leung <htl10@users.sourceforge.net> M: Larry Finger <Larry.Finger@lwfinger.net> L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/ +W: http://wireless.kernel.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git S: Maintained F: drivers/net/wireless/rtl818x/rtl8187/ @@ -5791,7 +5781,7 @@ RTL8192CE WIRELESS DRIVER M: Larry Finger <Larry.Finger@lwfinger.net> M: Chaoming Li <chaoming_li@realsil.com.cn> L: linux-wireless@vger.kernel.org -W: http://linuxwireless.org/ +W: http://wireless.kernel.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git S: Maintained F: drivers/net/wireless/rtlwifi/ diff --git a/arch/blackfin/mach-bf537/boards/stamp.c b/arch/blackfin/mach-bf537/boards/stamp.c index c9d9473a5ab2..5ed654ae66e1 100644 --- a/arch/blackfin/mach-bf537/boards/stamp.c +++ b/arch/blackfin/mach-bf537/boards/stamp.c @@ -873,7 +873,7 @@ static struct adf702x_platform_data adf7021_platform_data = { }; static inline void adf702x_mac_init(void) { - random_ether_addr(adf7021_platform_data.mac_addr); + eth_random_addr(adf7021_platform_data.mac_addr); } #else static inline void adf702x_mac_init(void) {} diff --git a/arch/c6x/kernel/soc.c b/arch/c6x/kernel/soc.c index 0748c94ebef6..3ac74080fded 100644 --- a/arch/c6x/kernel/soc.c +++ b/arch/c6x/kernel/soc.c @@ -80,7 +80,7 @@ int soc_mac_addr(unsigned int index, u8 *addr) if (have_fuse_mac) memcpy(addr, c6x_fuse_mac, 6); else - random_ether_addr(addr); + eth_random_addr(addr); } /* adjust for specific EMAC device */ diff --git a/arch/m68k/include/asm/mcfne.h b/arch/m68k/include/asm/mcf8390.h index bf638be0958c..a72a20819a54 100644 --- a/arch/m68k/include/asm/mcfne.h +++ b/arch/m68k/include/asm/mcf8390.h @@ -1,7 +1,7 @@ /****************************************************************************/ /* - * mcfne.h -- NE2000 in ColdFire eval boards. + * mcf8390.h -- NS8390 support for ColdFire eval boards. 
* * (C) Copyright 1999-2000, Greg Ungerer (gerg@snapgear.com) * (C) Copyright 2000, Lineo (www.lineo.com) @@ -14,8 +14,8 @@ */ /****************************************************************************/ -#ifndef mcfne_h -#define mcfne_h +#ifndef mcf8390_h +#define mcf8390_h /****************************************************************************/ @@ -37,6 +37,7 @@ #if defined(CONFIG_ARN5206) #define NE2000_ADDR 0x40000300 #define NE2000_ODDOFFSET 0x00010000 +#define NE2000_ADDRSIZE 0x00020000 #define NE2000_IRQ_VECTOR 0xf0 #define NE2000_IRQ_PRIORITY 2 #define NE2000_IRQ_LEVEL 4 @@ -46,6 +47,7 @@ #if defined(CONFIG_M5206eC3) #define NE2000_ADDR 0x40000300 #define NE2000_ODDOFFSET 0x00010000 +#define NE2000_ADDRSIZE 0x00020000 #define NE2000_IRQ_VECTOR 0x1c #define NE2000_IRQ_PRIORITY 2 #define NE2000_IRQ_LEVEL 4 @@ -54,6 +56,7 @@ #if defined(CONFIG_M5206e) && defined(CONFIG_NETtel) #define NE2000_ADDR 0x30000300 +#define NE2000_ADDRSIZE 0x00001000 #define NE2000_IRQ_VECTOR 25 #define NE2000_IRQ_PRIORITY 1 #define NE2000_IRQ_LEVEL 3 @@ -63,6 +66,7 @@ #if defined(CONFIG_M5307C3) #define NE2000_ADDR 0x40000300 #define NE2000_ODDOFFSET 0x00010000 +#define NE2000_ADDRSIZE 0x00020000 #define NE2000_IRQ_VECTOR 0x1b #define NE2000_BYTE volatile unsigned short #endif @@ -70,6 +74,7 @@ #if defined(CONFIG_M5272) && defined(CONFIG_NETtel) #define NE2000_ADDR 0x30600300 #define NE2000_ODDOFFSET 0x00008000 +#define NE2000_ADDRSIZE 0x00010000 #define NE2000_IRQ_VECTOR 67 #undef BSWAP #define BSWAP(w) (w) @@ -82,6 +87,7 @@ #define NE2000_ADDR0 0x30600300 #define NE2000_ADDR1 0x30800300 #define NE2000_ODDOFFSET 0x00008000 +#define NE2000_ADDRSIZE 0x00010000 #define NE2000_IRQ_VECTOR0 27 #define NE2000_IRQ_VECTOR1 29 #undef BSWAP @@ -94,6 +100,7 @@ #if defined(CONFIG_M5307) && defined(CONFIG_SECUREEDGEMP3) #define NE2000_ADDR 0x30600300 #define NE2000_ODDOFFSET 0x00008000 +#define NE2000_ADDRSIZE 0x00010000 #define NE2000_IRQ_VECTOR 27 #undef BSWAP #define BSWAP(w) (w) @@ -105,6 +112,7 @@ #if defined(CONFIG_ARN5307) #define NE2000_ADDR 0xfe600300 #define NE2000_ODDOFFSET 0x00010000 +#define NE2000_ADDRSIZE 0x00020000 #define NE2000_IRQ_VECTOR 0x1b #define NE2000_IRQ_PRIORITY 2 #define NE2000_IRQ_LEVEL 3 @@ -114,129 +122,10 @@ #if defined(CONFIG_M5407C3) #define NE2000_ADDR 0x40000300 #define NE2000_ODDOFFSET 0x00010000 +#define NE2000_ADDRSIZE 0x00020000 #define NE2000_IRQ_VECTOR 0x1b #define NE2000_BYTE volatile unsigned short #endif /****************************************************************************/ - -/* - * Side-band address space for odd address requires re-mapping - * many of the standard ISA access functions. 
- */ -#ifdef NE2000_ODDOFFSET - -#undef outb -#undef outb_p -#undef inb -#undef inb_p -#undef outsb -#undef outsw -#undef insb -#undef insw - -#define outb ne2000_outb -#define inb ne2000_inb -#define outb_p ne2000_outb -#define inb_p ne2000_inb -#define outsb ne2000_outsb -#define outsw ne2000_outsw -#define insb ne2000_insb -#define insw ne2000_insw - - -#ifndef COLDFIRE_NE2000_FUNCS - -void ne2000_outb(unsigned int val, unsigned int addr); -int ne2000_inb(unsigned int addr); -void ne2000_insb(unsigned int addr, void *vbuf, int unsigned long len); -void ne2000_insw(unsigned int addr, void *vbuf, unsigned long len); -void ne2000_outsb(unsigned int addr, void *vbuf, unsigned long len); -void ne2000_outsw(unsigned int addr, void *vbuf, unsigned long len); - -#else - -/* - * This macro converts a conventional register address into the - * real memory pointer of the mapped NE2000 device. - * On most NE2000 implementations on ColdFire boards the chip is - * mapped in kinda funny, due to its ISA heritage. - */ -#define NE2000_PTR(addr) ((addr&0x1)?(NE2000_ODDOFFSET+addr-1):(addr)) -#define NE2000_DATA_PTR(addr) (addr) - - -void ne2000_outb(unsigned int val, unsigned int addr) -{ - NE2000_BYTE *rp; - - rp = (NE2000_BYTE *) NE2000_PTR(addr); - *rp = RSWAP(val); -} - -int ne2000_inb(unsigned int addr) -{ - NE2000_BYTE *rp, val; - - rp = (NE2000_BYTE *) NE2000_PTR(addr); - val = *rp; - return((int) ((NE2000_BYTE) RSWAP(val))); -} - -void ne2000_insb(unsigned int addr, void *vbuf, int unsigned long len) -{ - NE2000_BYTE *rp, val; - unsigned char *buf; - - buf = (unsigned char *) vbuf; - rp = (NE2000_BYTE *) NE2000_DATA_PTR(addr); - for (; (len > 0); len--) { - val = *rp; - *buf++ = RSWAP(val); - } -} - -void ne2000_insw(unsigned int addr, void *vbuf, unsigned long len) -{ - volatile unsigned short *rp; - unsigned short w, *buf; - - buf = (unsigned short *) vbuf; - rp = (volatile unsigned short *) NE2000_DATA_PTR(addr); - for (; (len > 0); len--) { - w = *rp; - *buf++ = BSWAP(w); - } -} - -void ne2000_outsb(unsigned int addr, const void *vbuf, unsigned long len) -{ - NE2000_BYTE *rp, val; - unsigned char *buf; - - buf = (unsigned char *) vbuf; - rp = (NE2000_BYTE *) NE2000_DATA_PTR(addr); - for (; (len > 0); len--) { - val = *buf++; - *rp = RSWAP(val); - } -} - -void ne2000_outsw(unsigned int addr, const void *vbuf, unsigned long len) -{ - volatile unsigned short *rp; - unsigned short w, *buf; - - buf = (unsigned short *) vbuf; - rp = (volatile unsigned short *) NE2000_DATA_PTR(addr); - for (; (len > 0); len--) { - w = *buf++; - *rp = BSWAP(w); - } -} - -#endif /* COLDFIRE_NE2000_FUNCS */ -#endif /* NE2000_OFFOFFSET */ - -/****************************************************************************/ -#endif /* mcfne_h */ +#endif /* mcf8390_h */ diff --git a/arch/mips/ar7/platform.c b/arch/mips/ar7/platform.c index 1a24d317e7a3..1bbc24b08685 100644 --- a/arch/mips/ar7/platform.c +++ b/arch/mips/ar7/platform.c @@ -310,10 +310,10 @@ static void __init cpmac_get_mac(int instance, unsigned char *dev_addr) &dev_addr[4], &dev_addr[5]) != 6) { pr_warning("cannot parse mac address, " "using random address\n"); - random_ether_addr(dev_addr); + eth_random_addr(dev_addr); } } else - random_ether_addr(dev_addr); + eth_random_addr(dev_addr); } /***************************************************************************** diff --git a/arch/mips/powertv/powertv_setup.c b/arch/mips/powertv/powertv_setup.c index 3933c373a438..820b8480f222 100644 --- a/arch/mips/powertv/powertv_setup.c +++ 
b/arch/mips/powertv/powertv_setup.c @@ -254,7 +254,7 @@ early_param("rfmac", rfmac_param); * Generates an Ethernet MAC address that is highly likely to be unique for * this particular system on a network with other systems of the same type. * - * The problem we are solving is that, when random_ether_addr() is used to + * The problem we are solving is that, when eth_random_addr() is used to * generate MAC addresses at startup, there isn't much entropy for the random * number generator to use and the addresses it produces are fairly likely to * be the same as those of other identical systems on the same local network. @@ -269,7 +269,7 @@ early_param("rfmac", rfmac_param); * Still, this does give us something to work with. * * The approach we take is: - * 1. If we can't get the RF MAC Address, just call random_ether_addr. + * 1. If we can't get the RF MAC Address, just call eth_random_addr. * 2. Use the 24-bit NIC-specific bits of the RF MAC address as the last 24 * bits of the new address. This is very likely to be unique, except for * the current box. @@ -299,7 +299,7 @@ void platform_random_ether_addr(u8 addr[ETH_ALEN]) if (!have_rfmac) { pr_warning("rfmac not available on command line; " "generating random MAC address\n"); - random_ether_addr(addr); + eth_random_addr(addr); } else { diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c index 1a69244e785b..e9073e9501b3 100644 --- a/arch/sparc/net/bpf_jit_comp.c +++ b/arch/sparc/net/bpf_jit_comp.c @@ -96,6 +96,7 @@ static void bpf_flush_icache(void *start_, void *end_) #define AND F3(2, 0x01) #define ANDCC F3(2, 0x11) #define OR F3(2, 0x02) +#define XOR F3(2, 0x03) #define SUB F3(2, 0x04) #define SUBCC F3(2, 0x14) #define MUL F3(2, 0x0a) /* umul */ @@ -462,6 +463,9 @@ void bpf_jit_compile(struct sk_filter *fp) case BPF_S_ALU_OR_K: /* A |= K */ emit_alu_K(OR, K); break; + case BPF_S_ANC_ALU_XOR_X: /* A ^= X; */ + emit_alu_X(XOR); + break; case BPF_S_ALU_LSH_X: /* A <<= X */ emit_alu_X(SLL); break; diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c index 0d60c5685c26..458d324f062d 100644 --- a/arch/um/drivers/net_kern.c +++ b/arch/um/drivers/net_kern.c @@ -339,7 +339,7 @@ static int setup_etheraddr(char *str, unsigned char *addr, char *name) random: printk(KERN_INFO "Choosing a random ethernet address for device %s\n", name); - random_ether_addr(addr); + eth_random_addr(addr); return 1; } diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 0597f95b6da6..33643a8bcbbb 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -309,6 +309,10 @@ void bpf_jit_compile(struct sk_filter *fp) else EMIT1_off32(0x0d, K); /* or imm32,%eax */ break; + case BPF_S_ANC_ALU_XOR_X: /* A ^= X; */ + seen |= SEEN_XREG; + EMIT2(0x31, 0xd8); /* xor %ebx,%eax */ + break; case BPF_S_ALU_LSH_X: /* A <<= X; */ seen |= SEEN_XREG; EMIT4(0x89, 0xd9, 0xd3, 0xe0); /* mov %ebx,%ecx; shl %cl,%eax */ diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c index 5a37eadb4e56..ba2c611154af 100644 --- a/crypto/crypto_user.c +++ b/crypto/crypto_user.c @@ -496,9 +496,12 @@ static void crypto_netlink_rcv(struct sk_buff *skb) static int __init crypto_user_init(void) { + struct netlink_kernel_cfg cfg = { + .input = crypto_netlink_rcv, + }; + crypto_nlsk = netlink_kernel_create(&init_net, NETLINK_CRYPTO, - 0, crypto_netlink_rcv, - NULL, THIS_MODULE); + THIS_MODULE, &cfg); if (!crypto_nlsk) return -ENOMEM; diff --git a/drivers/bcma/Kconfig b/drivers/bcma/Kconfig index fb7c80fb721e..06b3207adebd 100644 
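/*
 * Editor's illustrative aside, not part of the patch: the crypto_user hunk
 * above shows the new netlink_kernel_create() calling convention, where the
 * input callback moves into struct netlink_kernel_cfg.  A minimal sketch of
 * the same pattern for a hypothetical protocol; MY_NETLINK_PROTO and
 * my_netlink_rcv() are made-up names, only the API shape is taken from the
 * hunk.
 */
#include <linux/module.h>
#include <linux/netlink.h>
#include <net/net_namespace.h>
#include <net/sock.h>

#define MY_NETLINK_PROTO 31     /* example protocol number only */

static struct sock *my_nlsk;

static void my_netlink_rcv(struct sk_buff *skb)
{
        /* message handling would go here */
}

static int __init my_netlink_init(void)
{
        struct netlink_kernel_cfg cfg = {
                .input = my_netlink_rcv,   /* formerly a bare function argument */
        };

        my_nlsk = netlink_kernel_create(&init_net, MY_NETLINK_PROTO,
                                        THIS_MODULE, &cfg);
        return my_nlsk ? 0 : -ENOMEM;
}
module_init(my_netlink_init);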
--- a/drivers/bcma/Kconfig +++ b/drivers/bcma/Kconfig @@ -46,6 +46,25 @@ config BCMA_DRIVER_MIPS If unsure, say N +config BCMA_SFLASH + bool + depends on BCMA_DRIVER_MIPS && BROKEN + default y + +config BCMA_NFLASH + bool + depends on BCMA_DRIVER_MIPS && BROKEN + default y + +config BCMA_DRIVER_GMAC_CMN + bool "BCMA Broadcom GBIT MAC COMMON core driver" + depends on BCMA + help + Driver for the Broadcom GBIT MAC COMMON core attached to Broadcom + specific Advanced Microcontroller Bus. + + If unsure, say N + config BCMA_DEBUG bool "BCMA debugging" depends on BCMA diff --git a/drivers/bcma/Makefile b/drivers/bcma/Makefile index 82de24e5340c..8ad42d41b2f2 100644 --- a/drivers/bcma/Makefile +++ b/drivers/bcma/Makefile @@ -1,8 +1,11 @@ bcma-y += main.o scan.o core.o sprom.o bcma-y += driver_chipcommon.o driver_chipcommon_pmu.o +bcma-$(CONFIG_BCMA_SFLASH) += driver_chipcommon_sflash.o +bcma-$(CONFIG_BCMA_NFLASH) += driver_chipcommon_nflash.o bcma-y += driver_pci.o bcma-$(CONFIG_BCMA_DRIVER_PCI_HOSTMODE) += driver_pci_host.o bcma-$(CONFIG_BCMA_DRIVER_MIPS) += driver_mips.o +bcma-$(CONFIG_BCMA_DRIVER_GMAC_CMN) += driver_gmac_cmn.o bcma-$(CONFIG_BCMA_HOST_PCI) += host_pci.o bcma-$(CONFIG_BCMA_HOST_SOC) += host_soc.o obj-$(CONFIG_BCMA) += bcma.o diff --git a/drivers/bcma/bcma_private.h b/drivers/bcma/bcma_private.h index b81755bb4798..3cf9cc923cd2 100644 --- a/drivers/bcma/bcma_private.h +++ b/drivers/bcma/bcma_private.h @@ -10,6 +10,15 @@ #define BCMA_CORE_SIZE 0x1000 +#define bcma_err(bus, fmt, ...) \ + pr_err("bus%d: " fmt, (bus)->num, ##__VA_ARGS__) +#define bcma_warn(bus, fmt, ...) \ + pr_warn("bus%d: " fmt, (bus)->num, ##__VA_ARGS__) +#define bcma_info(bus, fmt, ...) \ + pr_info("bus%d: " fmt, (bus)->num, ##__VA_ARGS__) +#define bcma_debug(bus, fmt, ...) 
\ + pr_debug("bus%d: " fmt, (bus)->num, ##__VA_ARGS__) + struct bcma_bus; /* main.c */ @@ -42,6 +51,28 @@ void bcma_chipco_serial_init(struct bcma_drv_cc *cc); u32 bcma_pmu_alp_clock(struct bcma_drv_cc *cc); u32 bcma_pmu_get_clockcpu(struct bcma_drv_cc *cc); +#ifdef CONFIG_BCMA_SFLASH +/* driver_chipcommon_sflash.c */ +int bcma_sflash_init(struct bcma_drv_cc *cc); +#else +static inline int bcma_sflash_init(struct bcma_drv_cc *cc) +{ + bcma_err(cc->core->bus, "Serial flash not supported\n"); + return 0; +} +#endif /* CONFIG_BCMA_SFLASH */ + +#ifdef CONFIG_BCMA_NFLASH +/* driver_chipcommon_nflash.c */ +int bcma_nflash_init(struct bcma_drv_cc *cc); +#else +static inline int bcma_nflash_init(struct bcma_drv_cc *cc) +{ + bcma_err(cc->core->bus, "NAND flash not supported\n"); + return 0; +} +#endif /* CONFIG_BCMA_NFLASH */ + #ifdef CONFIG_BCMA_HOST_PCI /* host_pci.c */ extern int __init bcma_host_pci_init(void); diff --git a/drivers/bcma/core.c b/drivers/bcma/core.c index bc6e89212ad3..63c8b470536f 100644 --- a/drivers/bcma/core.c +++ b/drivers/bcma/core.c @@ -75,7 +75,7 @@ void bcma_core_set_clockmode(struct bcma_device *core, udelay(10); } if (i) - pr_err("HT force timeout\n"); + bcma_err(core->bus, "HT force timeout\n"); break; case BCMA_CLKMODE_DYNAMIC: bcma_set32(core, BCMA_CLKCTLST, ~BCMA_CLKCTLST_FORCEHT); @@ -102,9 +102,9 @@ void bcma_core_pll_ctl(struct bcma_device *core, u32 req, u32 status, bool on) udelay(10); } if (i) - pr_err("PLL enable timeout\n"); + bcma_err(core->bus, "PLL enable timeout\n"); } else { - pr_warn("Disabling PLL not supported yet!\n"); + bcma_warn(core->bus, "Disabling PLL not supported yet!\n"); } } EXPORT_SYMBOL_GPL(bcma_core_pll_ctl); @@ -120,8 +120,8 @@ u32 bcma_core_dma_translation(struct bcma_device *core) else return BCMA_DMA_TRANSLATION_DMA32_CMT; default: - pr_err("DMA translation unknown for host %d\n", - core->bus->hosttype); + bcma_err(core->bus, "DMA translation unknown for host %d\n", + core->bus->hosttype); } return BCMA_DMA_TRANSLATION_NONE; } diff --git a/drivers/bcma/driver_chipcommon.c b/drivers/bcma/driver_chipcommon.c index e9f1b3fd252c..a4c3ebcc4c86 100644 --- a/drivers/bcma/driver_chipcommon.c +++ b/drivers/bcma/driver_chipcommon.c @@ -44,7 +44,7 @@ void bcma_core_chipcommon_init(struct bcma_drv_cc *cc) if (cc->capabilities & BCMA_CC_CAP_PMU) bcma_pmu_init(cc); if (cc->capabilities & BCMA_CC_CAP_PCTL) - pr_err("Power control not implemented!\n"); + bcma_err(cc->core->bus, "Power control not implemented!\n"); if (cc->core->id.rev >= 16) { if (cc->core->bus->sprom.leddc_on_time && @@ -137,8 +137,7 @@ void bcma_chipco_serial_init(struct bcma_drv_cc *cc) | BCMA_CC_CORECTL_UARTCLKEN); } } else { - pr_err("serial not supported on this device ccrev: 0x%x\n", - ccrev); + bcma_err(cc->core->bus, "serial not supported on this device ccrev: 0x%x\n", ccrev); return; } diff --git a/drivers/bcma/driver_chipcommon_nflash.c b/drivers/bcma/driver_chipcommon_nflash.c new file mode 100644 index 000000000000..574d62435bc2 --- /dev/null +++ b/drivers/bcma/driver_chipcommon_nflash.c @@ -0,0 +1,19 @@ +/* + * Broadcom specific AMBA + * ChipCommon NAND flash interface + * + * Licensed under the GNU/GPL. See COPYING for details. 
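/*
 * Editor's illustrative aside, not part of the patch: the bcma_err/warn/info/
 * debug wrappers added above just prepend "bus%d: " to the format string so
 * every message identifies which bus it came from.  A stand-alone sketch of
 * the same macro shape (plain printf() instead of the kernel pr_* helpers;
 * struct and field names only mirror the bcma ones):
 */
#include <stdio.h>

struct ex_bus {
        int num;
};

/* ## __VA_ARGS__ is the same GNU extension the kernel macros rely on. */
#define ex_bus_err(bus, fmt, ...) \
        printf("bus%d: " fmt, (bus)->num, ##__VA_ARGS__)

int main(void)
{
        struct ex_bus bus = { .num = 0 };

        /* prints "bus0: Serial flash not supported" */
        ex_bus_err(&bus, "Serial flash not supported\n");
        return 0;
}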
+ */ + +#include <linux/bcma/bcma.h> +#include <linux/bcma/bcma_driver_chipcommon.h> +#include <linux/delay.h> + +#include "bcma_private.h" + +/* Initialize NAND flash access */ +int bcma_nflash_init(struct bcma_drv_cc *cc) +{ + bcma_err(cc->core->bus, "NAND flash support is broken\n"); + return 0; +} diff --git a/drivers/bcma/driver_chipcommon_pmu.c b/drivers/bcma/driver_chipcommon_pmu.c index 61ce4054b3c3..44326178db29 100644 --- a/drivers/bcma/driver_chipcommon_pmu.c +++ b/drivers/bcma/driver_chipcommon_pmu.c @@ -3,7 +3,8 @@ * ChipCommon Power Management Unit driver * * Copyright 2009, Michael Buesch <m@bues.ch> - * Copyright 2007, Broadcom Corporation + * Copyright 2007, 2011, Broadcom Corporation + * Copyright 2011, 2012, Hauke Mehrtens <hauke@hauke-m.de> * * Licensed under the GNU/GPL. See COPYING for details. */ @@ -54,39 +55,19 @@ void bcma_chipco_regctl_maskset(struct bcma_drv_cc *cc, u32 offset, u32 mask, } EXPORT_SYMBOL_GPL(bcma_chipco_regctl_maskset); -static void bcma_pmu_pll_init(struct bcma_drv_cc *cc) -{ - struct bcma_bus *bus = cc->core->bus; - - switch (bus->chipinfo.id) { - case 0x4313: - case 0x4331: - case 43224: - case 43225: - break; - default: - pr_err("PLL init unknown for device 0x%04X\n", - bus->chipinfo.id); - } -} - static void bcma_pmu_resources_init(struct bcma_drv_cc *cc) { struct bcma_bus *bus = cc->core->bus; u32 min_msk = 0, max_msk = 0; switch (bus->chipinfo.id) { - case 0x4313: + case BCMA_CHIP_ID_BCM4313: min_msk = 0x200D; max_msk = 0xFFFF; break; - case 0x4331: - case 43224: - case 43225: - break; default: - pr_err("PMU resource config unknown for device 0x%04X\n", - bus->chipinfo.id); + bcma_debug(bus, "PMU resource config unknown or not needed for device 0x%04X\n", + bus->chipinfo.id); } /* Set the resource masks. */ @@ -94,22 +75,9 @@ static void bcma_pmu_resources_init(struct bcma_drv_cc *cc) bcma_cc_write32(cc, BCMA_CC_PMU_MINRES_MSK, min_msk); if (max_msk) bcma_cc_write32(cc, BCMA_CC_PMU_MAXRES_MSK, max_msk); -} - -void bcma_pmu_swreg_init(struct bcma_drv_cc *cc) -{ - struct bcma_bus *bus = cc->core->bus; - switch (bus->chipinfo.id) { - case 0x4313: - case 0x4331: - case 43224: - case 43225: - break; - default: - pr_err("PMU switch/regulators init unknown for device " - "0x%04X\n", bus->chipinfo.id); - } + /* Add some delay; allow resources to come up and settle. */ + mdelay(2); } /* Disable to allow reading SPROM. Don't know the adventages of enabling it. 
*/ @@ -123,8 +91,11 @@ void bcma_chipco_bcm4331_ext_pa_lines_ctl(struct bcma_drv_cc *cc, bool enable) val |= BCMA_CHIPCTL_4331_EXTPA_EN; if (bus->chipinfo.pkg == 9 || bus->chipinfo.pkg == 11) val |= BCMA_CHIPCTL_4331_EXTPA_ON_GPIO2_5; + else if (bus->chipinfo.rev > 0) + val |= BCMA_CHIPCTL_4331_EXTPA_EN2; } else { val &= ~BCMA_CHIPCTL_4331_EXTPA_EN; + val &= ~BCMA_CHIPCTL_4331_EXTPA_EN2; val &= ~BCMA_CHIPCTL_4331_EXTPA_ON_GPIO2_5; } bcma_cc_write32(cc, BCMA_CC_CHIPCTL, val); @@ -135,28 +106,38 @@ void bcma_pmu_workarounds(struct bcma_drv_cc *cc) struct bcma_bus *bus = cc->core->bus; switch (bus->chipinfo.id) { - case 0x4313: - bcma_chipco_chipctl_maskset(cc, 0, ~0, 0x7); + case BCMA_CHIP_ID_BCM4313: + /* enable 12 mA drive strenth for 4313 and set chipControl + register bit 1 */ + bcma_chipco_chipctl_maskset(cc, 0, + BCMA_CCTRL_4313_12MA_LED_DRIVE, + BCMA_CCTRL_4313_12MA_LED_DRIVE); break; - case 0x4331: - case 43431: + case BCMA_CHIP_ID_BCM4331: + case BCMA_CHIP_ID_BCM43431: /* Ext PA lines must be enabled for tx on BCM4331 */ bcma_chipco_bcm4331_ext_pa_lines_ctl(cc, true); break; - case 43224: + case BCMA_CHIP_ID_BCM43224: + case BCMA_CHIP_ID_BCM43421: + /* enable 12 mA drive strenth for 43224 and set chipControl + register bit 15 */ if (bus->chipinfo.rev == 0) { - pr_err("Workarounds for 43224 rev 0 not fully " - "implemented\n"); - bcma_chipco_chipctl_maskset(cc, 0, ~0, 0x00F000F0); + bcma_cc_maskset32(cc, BCMA_CC_CHIPCTL, + BCMA_CCTRL_43224_GPIO_TOGGLE, + BCMA_CCTRL_43224_GPIO_TOGGLE); + bcma_chipco_chipctl_maskset(cc, 0, + BCMA_CCTRL_43224A0_12MA_LED_DRIVE, + BCMA_CCTRL_43224A0_12MA_LED_DRIVE); } else { - bcma_chipco_chipctl_maskset(cc, 0, ~0, 0xF0); + bcma_chipco_chipctl_maskset(cc, 0, + BCMA_CCTRL_43224B0_12MA_LED_DRIVE, + BCMA_CCTRL_43224B0_12MA_LED_DRIVE); } break; - case 43225: - break; default: - pr_err("Workarounds unknown for device 0x%04X\n", - bus->chipinfo.id); + bcma_debug(bus, "Workarounds unknown or not needed for device 0x%04X\n", + bus->chipinfo.id); } } @@ -167,8 +148,8 @@ void bcma_pmu_init(struct bcma_drv_cc *cc) pmucap = bcma_cc_read32(cc, BCMA_CC_PMU_CAP); cc->pmu.rev = (pmucap & BCMA_CC_PMU_CAP_REVISION); - pr_debug("Found rev %u PMU (capabilities 0x%08X)\n", cc->pmu.rev, - pmucap); + bcma_debug(cc->core->bus, "Found rev %u PMU (capabilities 0x%08X)\n", + cc->pmu.rev, pmucap); if (cc->pmu.rev == 1) bcma_cc_mask32(cc, BCMA_CC_PMU_CTL, @@ -177,12 +158,7 @@ void bcma_pmu_init(struct bcma_drv_cc *cc) bcma_cc_set32(cc, BCMA_CC_PMU_CTL, BCMA_CC_PMU_CTL_NOILPONW); - if (cc->core->id.id == 0x4329 && cc->core->id.rev == 2) - pr_err("Fix for 4329b0 bad LPOM state not implemented!\n"); - - bcma_pmu_pll_init(cc); bcma_pmu_resources_init(cc); - bcma_pmu_swreg_init(cc); bcma_pmu_workarounds(cc); } @@ -191,23 +167,22 @@ u32 bcma_pmu_alp_clock(struct bcma_drv_cc *cc) struct bcma_bus *bus = cc->core->bus; switch (bus->chipinfo.id) { - case 0x4716: - case 0x4748: - case 47162: - case 0x4313: - case 0x5357: - case 0x4749: - case 53572: + case BCMA_CHIP_ID_BCM4716: + case BCMA_CHIP_ID_BCM4748: + case BCMA_CHIP_ID_BCM47162: + case BCMA_CHIP_ID_BCM4313: + case BCMA_CHIP_ID_BCM5357: + case BCMA_CHIP_ID_BCM4749: + case BCMA_CHIP_ID_BCM53572: /* always 20Mhz */ return 20000 * 1000; - case 0x5356: - case 0x5300: + case BCMA_CHIP_ID_BCM5356: + case BCMA_CHIP_ID_BCM4706: /* always 25Mhz */ return 25000 * 1000; default: - pr_warn("No ALP clock specified for %04X device, " - "pmu rev. 
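/*
 * Editor's illustrative aside, not part of the patch: the *_maskset() calls
 * above express the usual read-modify-write idiom -- the current register
 * value is ANDed with "mask" and then ORed with "set" (note the removed call
 * that passed ~0 as the mask, i.e. "keep every bit").  A stand-alone sketch
 * of that idiom; the register and bit values are made up:
 */
#include <stdint.h>
#include <stdio.h>

/* Generic (value & mask) | set update, as the chipctl helpers appear to do. */
static uint32_t ex_maskset32(uint32_t value, uint32_t mask, uint32_t set)
{
        return (value & mask) | set;
}

int main(void)
{
        uint32_t chipctl = 0x00000120;          /* pretend current register value */
        uint32_t drive_bits = 0x00000007;       /* pretend drive-strength field */

        /* keep everything else, force the drive-strength bits on: 0x00000127 */
        chipctl = ex_maskset32(chipctl, ~0u, drive_bits);
        printf("0x%08x\n", (unsigned)chipctl);
        return 0;
}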
%d, using default %d Hz\n", - bus->chipinfo.id, cc->pmu.rev, BCMA_CC_PMU_ALP_CLOCK); + bcma_warn(bus, "No ALP clock specified for %04X device, pmu rev. %d, using default %d Hz\n", + bus->chipinfo.id, cc->pmu.rev, BCMA_CC_PMU_ALP_CLOCK); } return BCMA_CC_PMU_ALP_CLOCK; } @@ -224,7 +199,8 @@ static u32 bcma_pmu_clock(struct bcma_drv_cc *cc, u32 pll0, u32 m) BUG_ON(!m || m > 4); - if (bus->chipinfo.id == 0x5357 || bus->chipinfo.id == 0x4749) { + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM5357 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM4749) { /* Detect failure in clock setting */ tmp = bcma_cc_read32(cc, BCMA_CC_CHIPSTAT); if (tmp & 0x40000) @@ -250,33 +226,62 @@ static u32 bcma_pmu_clock(struct bcma_drv_cc *cc, u32 pll0, u32 m) return (fc / div) * 1000000; } +static u32 bcma_pmu_clock_bcm4706(struct bcma_drv_cc *cc, u32 pll0, u32 m) +{ + u32 tmp, ndiv, p1div, p2div; + u32 clock; + + BUG_ON(!m || m > 4); + + /* Get N, P1 and P2 dividers to determine CPU clock */ + tmp = bcma_chipco_pll_read(cc, pll0 + BCMA_CC_PMU6_4706_PROCPLL_OFF); + ndiv = (tmp & BCMA_CC_PMU6_4706_PROC_NDIV_INT_MASK) + >> BCMA_CC_PMU6_4706_PROC_NDIV_INT_SHIFT; + p1div = (tmp & BCMA_CC_PMU6_4706_PROC_P1DIV_MASK) + >> BCMA_CC_PMU6_4706_PROC_P1DIV_SHIFT; + p2div = (tmp & BCMA_CC_PMU6_4706_PROC_P2DIV_MASK) + >> BCMA_CC_PMU6_4706_PROC_P2DIV_SHIFT; + + tmp = bcma_cc_read32(cc, BCMA_CC_CHIPSTAT); + if (tmp & BCMA_CC_CHIPST_4706_PKG_OPTION) + /* Low cost bonding: Fixed reference clock 25MHz and m = 4 */ + clock = (25000000 / 4) * ndiv * p2div / p1div; + else + /* Fixed reference clock 25MHz and m = 2 */ + clock = (25000000 / 2) * ndiv * p2div / p1div; + + if (m == BCMA_CC_PMU5_MAINPLL_SSB) + clock = clock / 4; + + return clock; +} + /* query bus clock frequency for PMU-enabled chipcommon */ u32 bcma_pmu_get_clockcontrol(struct bcma_drv_cc *cc) { struct bcma_bus *bus = cc->core->bus; switch (bus->chipinfo.id) { - case 0x4716: - case 0x4748: - case 47162: + case BCMA_CHIP_ID_BCM4716: + case BCMA_CHIP_ID_BCM4748: + case BCMA_CHIP_ID_BCM47162: return bcma_pmu_clock(cc, BCMA_CC_PMU4716_MAINPLL_PLL0, BCMA_CC_PMU5_MAINPLL_SSB); - case 0x5356: + case BCMA_CHIP_ID_BCM5356: return bcma_pmu_clock(cc, BCMA_CC_PMU5356_MAINPLL_PLL0, BCMA_CC_PMU5_MAINPLL_SSB); - case 0x5357: - case 0x4749: + case BCMA_CHIP_ID_BCM5357: + case BCMA_CHIP_ID_BCM4749: return bcma_pmu_clock(cc, BCMA_CC_PMU5357_MAINPLL_PLL0, BCMA_CC_PMU5_MAINPLL_SSB); - case 0x5300: - return bcma_pmu_clock(cc, BCMA_CC_PMU4706_MAINPLL_PLL0, - BCMA_CC_PMU5_MAINPLL_SSB); - case 53572: + case BCMA_CHIP_ID_BCM4706: + return bcma_pmu_clock_bcm4706(cc, BCMA_CC_PMU4706_MAINPLL_PLL0, + BCMA_CC_PMU5_MAINPLL_SSB); + case BCMA_CHIP_ID_BCM53572: return 75000000; default: - pr_warn("No backplane clock specified for %04X device, " - "pmu rev. %d, using default %d Hz\n", - bus->chipinfo.id, cc->pmu.rev, BCMA_CC_PMU_HT_CLOCK); + bcma_warn(bus, "No backplane clock specified for %04X device, pmu rev. 
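/*
 * Editor's illustrative aside, not part of the patch: bcma_pmu_clock_bcm4706()
 * above derives the clock from a fixed 25 MHz reference and the N, P1 and P2
 * dividers read out of the PLL control register.  A stand-alone rerun of the
 * same arithmetic; the divider values are invented for the example and are
 * not taken from any real chip:
 */
#include <stdio.h>

int main(void)
{
        unsigned int ndiv = 48, p1div = 2, p2div = 2;   /* example values only */
        unsigned int clock;

        /* non-"low cost bonding" case in the patch: 25 MHz reference, m = 2 */
        clock = (25000000 / 2) * ndiv * p2div / p1div;
        printf("PLL output: %u Hz\n", clock);           /* 600000000 */

        /* the backplane (SSB) request then divides the result by four */
        printf("backplane:  %u Hz\n", clock / 4);       /* 150000000 */
        return 0;
}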
%d, using default %d Hz\n", + bus->chipinfo.id, cc->pmu.rev, BCMA_CC_PMU_HT_CLOCK); } return BCMA_CC_PMU_HT_CLOCK; } @@ -286,17 +291,21 @@ u32 bcma_pmu_get_clockcpu(struct bcma_drv_cc *cc) { struct bcma_bus *bus = cc->core->bus; - if (bus->chipinfo.id == 53572) + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM53572) return 300000000; if (cc->pmu.rev >= 5) { u32 pll; switch (bus->chipinfo.id) { - case 0x5356: + case BCMA_CHIP_ID_BCM4706: + return bcma_pmu_clock_bcm4706(cc, + BCMA_CC_PMU4706_MAINPLL_PLL0, + BCMA_CC_PMU5_MAINPLL_CPU); + case BCMA_CHIP_ID_BCM5356: pll = BCMA_CC_PMU5356_MAINPLL_PLL0; break; - case 0x5357: - case 0x4749: + case BCMA_CHIP_ID_BCM5357: + case BCMA_CHIP_ID_BCM4749: pll = BCMA_CC_PMU5357_MAINPLL_PLL0; break; default: @@ -304,10 +313,188 @@ u32 bcma_pmu_get_clockcpu(struct bcma_drv_cc *cc) break; } - /* TODO: if (bus->chipinfo.id == 0x5300) - return si_4706_pmu_clock(sih, osh, cc, PMU4706_MAINPLL_PLL0, PMU5_MAINPLL_CPU); */ return bcma_pmu_clock(cc, pll, BCMA_CC_PMU5_MAINPLL_CPU); } return bcma_pmu_get_clockcontrol(cc); } + +static void bcma_pmu_spuravoid_pll_write(struct bcma_drv_cc *cc, u32 offset, + u32 value) +{ + bcma_cc_write32(cc, BCMA_CC_PLLCTL_ADDR, offset); + bcma_cc_write32(cc, BCMA_CC_PLLCTL_DATA, value); +} + +void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid) +{ + u32 tmp = 0; + u8 phypll_offset = 0; + u8 bcm5357_bcm43236_p1div[] = {0x1, 0x5, 0x5}; + u8 bcm5357_bcm43236_ndiv[] = {0x30, 0xf6, 0xfc}; + struct bcma_bus *bus = cc->core->bus; + + switch (bus->chipinfo.id) { + case BCMA_CHIP_ID_BCM5357: + case BCMA_CHIP_ID_BCM4749: + case BCMA_CHIP_ID_BCM53572: + /* 5357[ab]0, 43236[ab]0, and 6362b0 */ + + /* BCM5357 needs to touch PLL1_PLLCTL[02], + so offset PLL0_PLLCTL[02] by 6 */ + phypll_offset = (bus->chipinfo.id == BCMA_CHIP_ID_BCM5357 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM4749 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM53572) ? 
6 : 0; + + /* RMW only the P1 divider */ + bcma_cc_write32(cc, BCMA_CC_PLLCTL_ADDR, + BCMA_CC_PMU_PLL_CTL0 + phypll_offset); + tmp = bcma_cc_read32(cc, BCMA_CC_PLLCTL_DATA); + tmp &= (~(BCMA_CC_PMU1_PLL0_PC0_P1DIV_MASK)); + tmp |= (bcm5357_bcm43236_p1div[spuravoid] << BCMA_CC_PMU1_PLL0_PC0_P1DIV_SHIFT); + bcma_cc_write32(cc, BCMA_CC_PLLCTL_DATA, tmp); + + /* RMW only the int feedback divider */ + bcma_cc_write32(cc, BCMA_CC_PLLCTL_ADDR, + BCMA_CC_PMU_PLL_CTL2 + phypll_offset); + tmp = bcma_cc_read32(cc, BCMA_CC_PLLCTL_DATA); + tmp &= ~(BCMA_CC_PMU1_PLL0_PC2_NDIV_INT_MASK); + tmp |= (bcm5357_bcm43236_ndiv[spuravoid]) << BCMA_CC_PMU1_PLL0_PC2_NDIV_INT_SHIFT; + bcma_cc_write32(cc, BCMA_CC_PLLCTL_DATA, tmp); + + tmp = 1 << 10; + break; + + case BCMA_CHIP_ID_BCM4331: + case BCMA_CHIP_ID_BCM43431: + if (spuravoid == 2) { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11500014); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x0FC00a08); + } else if (spuravoid == 1) { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11500014); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x0F600a08); + } else { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11100014); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x03000a08); + } + tmp = 1 << 10; + break; + + case BCMA_CHIP_ID_BCM43224: + case BCMA_CHIP_ID_BCM43225: + case BCMA_CHIP_ID_BCM43421: + if (spuravoid == 1) { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11500010); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x000C0C06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x0F600a08); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x2001E920); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } else { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11100010); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x000c0c06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x03000a08); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x200005c0); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } + tmp = 1 << 10; + break; + + case BCMA_CHIP_ID_BCM4716: + case BCMA_CHIP_ID_BCM4748: + case BCMA_CHIP_ID_BCM47162: + if (spuravoid == 1) { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11500060); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x080C0C06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x0F600000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x2001E924); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } else { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11100060); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x080c0c06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x03000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x200005c0); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } + + tmp = 3 << 9; + break; + + case BCMA_CHIP_ID_BCM43227: + case BCMA_CHIP_ID_BCM43228: + case BCMA_CHIP_ID_BCM43428: + /* LCNXN */ + /* PLL Settings for spur avoidance on/off mode, + no on2 support for 
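/*
 * Editor's illustrative aside, not part of the patch: the spur-avoidance code
 * above reaches the PLL registers indirectly -- it writes the register index
 * to BCMA_CC_PLLCTL_ADDR, then read-modify-writes BCMA_CC_PLLCTL_DATA,
 * clearing one field with its mask and ORing in the new value at the field's
 * shift.  A stand-alone sketch of that field update; the mask, shift and
 * divider value here are made up:
 */
#include <stdint.h>
#include <stdio.h>

#define EX_P1DIV_MASK  0x0000000fu      /* pretend the field occupies bits 3..0 */
#define EX_P1DIV_SHIFT 0

static uint32_t ex_update_field(uint32_t reg, uint32_t mask,
                                unsigned int shift, uint32_t val)
{
        reg &= ~mask;                   /* clear only the target field */
        reg |= (val << shift) & mask;   /* insert the new divider value */
        return reg;
}

int main(void)
{
        uint32_t pllctl = 0x11100014;   /* one of the literal PLL_CTL0 values above */

        pllctl = ex_update_field(pllctl, EX_P1DIV_MASK, EX_P1DIV_SHIFT, 0x5);
        printf("0x%08x\n", (unsigned)pllctl);   /* 0x11100015 */
        return 0;
}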
43228A0 */ + if (spuravoid == 1) { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x01100014); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x040C0C06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x03140A08); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00333333); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x202C2820); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } else { + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL0, + 0x11100014); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL1, + 0x040c0c06); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2, + 0x03000a08); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL3, + 0x00000000); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL4, + 0x200005c0); + bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5, + 0x88888815); + } + tmp = 1 << 10; + break; + default: + bcma_err(bus, "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n", + bus->chipinfo.id); + break; + } + + tmp |= bcma_cc_read32(cc, BCMA_CC_PMU_CTL); + bcma_cc_write32(cc, BCMA_CC_PMU_CTL, tmp); +} +EXPORT_SYMBOL_GPL(bcma_pmu_spuravoid_pllupdate); diff --git a/drivers/bcma/driver_chipcommon_sflash.c b/drivers/bcma/driver_chipcommon_sflash.c new file mode 100644 index 000000000000..6e157a58a1d7 --- /dev/null +++ b/drivers/bcma/driver_chipcommon_sflash.c @@ -0,0 +1,19 @@ +/* + * Broadcom specific AMBA + * ChipCommon serial flash interface + * + * Licensed under the GNU/GPL. See COPYING for details. + */ + +#include <linux/bcma/bcma.h> +#include <linux/bcma/bcma_driver_chipcommon.h> +#include <linux/delay.h> + +#include "bcma_private.h" + +/* Initialize serial flash access */ +int bcma_sflash_init(struct bcma_drv_cc *cc) +{ + bcma_err(cc->core->bus, "Serial flash support is broken\n"); + return 0; +} diff --git a/drivers/bcma/driver_gmac_cmn.c b/drivers/bcma/driver_gmac_cmn.c new file mode 100644 index 000000000000..834225f65e8f --- /dev/null +++ b/drivers/bcma/driver_gmac_cmn.c @@ -0,0 +1,14 @@ +/* + * Broadcom specific AMBA + * GBIT MAC COMMON Core + * + * Licensed under the GNU/GPL. See COPYING for details. 
+ */ + +#include "bcma_private.h" +#include <linux/bcma/bcma.h> + +void __devinit bcma_core_gmac_cmn_init(struct bcma_drv_gmac_cmn *gc) +{ + mutex_init(&gc->phy_mutex); +} diff --git a/drivers/bcma/driver_mips.c b/drivers/bcma/driver_mips.c index c3e9dff4224e..b013b049476d 100644 --- a/drivers/bcma/driver_mips.c +++ b/drivers/bcma/driver_mips.c @@ -22,15 +22,15 @@ /* The 47162a0 hangs when reading MIPS DMP registers registers */ static inline bool bcma_core_mips_bcm47162a0_quirk(struct bcma_device *dev) { - return dev->bus->chipinfo.id == 47162 && dev->bus->chipinfo.rev == 0 && - dev->id.id == BCMA_CORE_MIPS_74K; + return dev->bus->chipinfo.id == BCMA_CHIP_ID_BCM47162 && + dev->bus->chipinfo.rev == 0 && dev->id.id == BCMA_CORE_MIPS_74K; } /* The 5357b0 hangs when reading USB20H DMP registers */ static inline bool bcma_core_mips_bcm5357b0_quirk(struct bcma_device *dev) { - return (dev->bus->chipinfo.id == 0x5357 || - dev->bus->chipinfo.id == 0x4749) && + return (dev->bus->chipinfo.id == BCMA_CHIP_ID_BCM5357 || + dev->bus->chipinfo.id == BCMA_CHIP_ID_BCM4749) && dev->bus->chipinfo.pkg == 11 && dev->id.id == BCMA_CORE_USB20_HOST; } @@ -143,8 +143,8 @@ static void bcma_core_mips_set_irq(struct bcma_device *dev, unsigned int irq) 1 << irqflag); } - pr_info("set_irq: core 0x%04x, irq %d => %d\n", - dev->id.id, oldirq + 2, irq + 2); + bcma_info(bus, "set_irq: core 0x%04x, irq %d => %d\n", + dev->id.id, oldirq + 2, irq + 2); } static void bcma_core_mips_print_irq(struct bcma_device *dev, unsigned int irq) @@ -173,7 +173,7 @@ u32 bcma_cpu_clock(struct bcma_drv_mips *mcore) if (bus->drv_cc.capabilities & BCMA_CC_CAP_PMU) return bcma_pmu_get_clockcpu(&bus->drv_cc); - pr_err("No PMU available, need this to get the cpu clock\n"); + bcma_err(bus, "No PMU available, need this to get the cpu clock\n"); return 0; } EXPORT_SYMBOL(bcma_cpu_clock); @@ -185,10 +185,11 @@ static void bcma_core_mips_flash_detect(struct bcma_drv_mips *mcore) switch (bus->drv_cc.capabilities & BCMA_CC_CAP_FLASHT) { case BCMA_CC_FLASHT_STSER: case BCMA_CC_FLASHT_ATSER: - pr_err("Serial flash not supported.\n"); + bcma_debug(bus, "Found serial flash\n"); + bcma_sflash_init(&bus->drv_cc); break; case BCMA_CC_FLASHT_PARA: - pr_info("found parallel flash.\n"); + bcma_debug(bus, "Found parallel flash\n"); bus->drv_cc.pflash.window = 0x1c000000; bus->drv_cc.pflash.window_size = 0x02000000; @@ -199,7 +200,15 @@ static void bcma_core_mips_flash_detect(struct bcma_drv_mips *mcore) bus->drv_cc.pflash.buswidth = 2; break; default: - pr_err("flash not supported.\n"); + bcma_err(bus, "Flash type not supported\n"); + } + + if (bus->drv_cc.core->id.rev == 38 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM4706) { + if (bus->drv_cc.capabilities & BCMA_CC_CAP_NFLASH) { + bcma_debug(bus, "Found NAND flash\n"); + bcma_nflash_init(&bus->drv_cc); + } } } @@ -209,7 +218,7 @@ void bcma_core_mips_init(struct bcma_drv_mips *mcore) struct bcma_device *core; bus = mcore->core->bus; - pr_info("Initializing MIPS core...\n"); + bcma_info(bus, "Initializing MIPS core...\n"); if (!mcore->setup_done) mcore->assigned_irqs = 1; @@ -244,7 +253,7 @@ void bcma_core_mips_init(struct bcma_drv_mips *mcore) break; } } - pr_info("IRQ reconfiguration done\n"); + bcma_info(bus, "IRQ reconfiguration done\n"); bcma_core_mips_dump_irq(bus); if (mcore->setup_done) diff --git a/drivers/bcma/driver_pci_host.c b/drivers/bcma/driver_pci_host.c index b9a86edfec39..cbae2c231336 100644 --- a/drivers/bcma/driver_pci_host.c +++ b/drivers/bcma/driver_pci_host.c @@ -36,7 +36,7 @@ bool __devinit 
bcma_core_pci_is_in_hostmode(struct bcma_drv_pci *pc) return false; if (bus->sprom.boardflags_lo & BCMA_CORE_PCI_BFL_NOPCI) { - pr_info("This PCI core is disabled and not working\n"); + bcma_info(bus, "This PCI core is disabled and not working\n"); return false; } @@ -215,7 +215,8 @@ static int bcma_extpci_write_config(struct bcma_drv_pci *pc, unsigned int dev, } else { writel(val, mmio); - if (chipid == 0x4716 || chipid == 0x4748) + if (chipid == BCMA_CHIP_ID_BCM4716 || + chipid == BCMA_CHIP_ID_BCM4748) readl(mmio); } @@ -340,6 +341,7 @@ static u8 __devinit bcma_find_pci_capability(struct bcma_drv_pci *pc, */ static void __devinit bcma_core_pci_enable_crs(struct bcma_drv_pci *pc) { + struct bcma_bus *bus = pc->core->bus; u8 cap_ptr, root_ctrl, root_cap, dev; u16 val16; int i; @@ -378,7 +380,8 @@ static void __devinit bcma_core_pci_enable_crs(struct bcma_drv_pci *pc) udelay(10); } if (val16 == 0x1) - pr_err("PCI: Broken device in slot %d\n", dev); + bcma_err(bus, "PCI: Broken device in slot %d\n", + dev); } } } @@ -391,11 +394,11 @@ void __devinit bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc) u32 pci_membase_1G; unsigned long io_map_base; - pr_info("PCIEcore in host mode found\n"); + bcma_info(bus, "PCIEcore in host mode found\n"); pc_host = kzalloc(sizeof(*pc_host), GFP_KERNEL); if (!pc_host) { - pr_err("can not allocate memory"); + bcma_err(bus, "can not allocate memory"); return; } @@ -434,13 +437,14 @@ void __devinit bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc) * as mips can't generate 64-bit address on the * backplane. */ - if (bus->chipinfo.id == 0x4716 || bus->chipinfo.id == 0x4748) { + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM4716 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM4748) { pc_host->mem_resource.start = BCMA_SOC_PCI_MEM; pc_host->mem_resource.end = BCMA_SOC_PCI_MEM + BCMA_SOC_PCI_MEM_SZ - 1; pcicore_write32(pc, BCMA_CORE_PCI_SBTOPCI0, BCMA_CORE_PCI_SBTOPCI_MEM | BCMA_SOC_PCI_MEM); - } else if (bus->chipinfo.id == 0x5300) { + } else if (bus->chipinfo.id == BCMA_CHIP_ID_BCM4706) { tmp = BCMA_CORE_PCI_SBTOPCI_MEM; tmp |= BCMA_CORE_PCI_SBTOPCI_PREF; tmp |= BCMA_CORE_PCI_SBTOPCI_BURST; diff --git a/drivers/bcma/host_pci.c b/drivers/bcma/host_pci.c index 6c05cf470f96..11b32d2642df 100644 --- a/drivers/bcma/host_pci.c +++ b/drivers/bcma/host_pci.c @@ -18,7 +18,7 @@ static void bcma_host_pci_switch_core(struct bcma_device *core) pci_write_config_dword(core->bus->host_pci, BCMA_PCI_BAR0_WIN2, core->wrap); core->bus->mapped_core = core; - pr_debug("Switched to core: 0x%X\n", core->id.id); + bcma_debug(core->bus, "Switched to core: 0x%X\n", core->id.id); } /* Provides access to the requested core. Returns base offset that has to be @@ -188,7 +188,7 @@ static int __devinit bcma_host_pci_probe(struct pci_dev *dev, /* SSB needed additional powering up, do we have any AMBA PCI cards? 
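/*
 * Editor's illustrative aside, not part of the patch: in the config-write hunk
 * above the driver follows writel() with a readl() of the same location on
 * BCM4716/BCM4748, the usual way of flushing a posted MMIO write so it has
 * actually reached the device before the code moves on.  A minimal
 * kernel-style sketch of that idiom; the helper name and the needs_flush
 * parameter are invented:
 */
#include <linux/io.h>
#include <linux/types.h>

static void ex_write_flushed(void __iomem *reg, u32 val, bool needs_flush)
{
        writel(val, reg);
        if (needs_flush)
                readl(reg);     /* read back to push the posted write out */
}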
*/ if (!pci_is_pcie(dev)) - pr_err("PCI card detected, report problems.\n"); + bcma_err(bus, "PCI card detected, report problems.\n"); /* Map MMIO */ err = -ENOMEM; @@ -268,6 +268,7 @@ static SIMPLE_DEV_PM_OPS(bcma_pm_ops, bcma_host_pci_suspend, static DEFINE_PCI_DEVICE_TABLE(bcma_pci_bridge_tbl) = { { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x0576) }, + { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43224) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4331) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4353) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4357) }, diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c index 7e138ec21357..758af9ccdef0 100644 --- a/drivers/bcma/main.c +++ b/drivers/bcma/main.c @@ -61,6 +61,13 @@ static struct bus_type bcma_bus_type = { .dev_attrs = bcma_device_attrs, }; +static u16 bcma_cc_core_id(struct bcma_bus *bus) +{ + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM4706) + return BCMA_CORE_4706_CHIPCOMMON; + return BCMA_CORE_CHIPCOMMON; +} + struct bcma_device *bcma_find_core(struct bcma_bus *bus, u16 coreid) { struct bcma_device *core; @@ -91,10 +98,12 @@ static int bcma_register_cores(struct bcma_bus *bus) list_for_each_entry(core, &bus->cores, list) { /* We support that cores ourself */ switch (core->id.id) { + case BCMA_CORE_4706_CHIPCOMMON: case BCMA_CORE_CHIPCOMMON: case BCMA_CORE_PCI: case BCMA_CORE_PCIE: case BCMA_CORE_MIPS_74K: + case BCMA_CORE_4706_MAC_GBIT_COMMON: continue; } @@ -118,8 +127,9 @@ static int bcma_register_cores(struct bcma_bus *bus) err = device_register(&core->dev); if (err) { - pr_err("Could not register dev for core 0x%03X\n", - core->id.id); + bcma_err(bus, + "Could not register dev for core 0x%03X\n", + core->id.id); continue; } core->dev_registered = true; @@ -151,12 +161,12 @@ int __devinit bcma_bus_register(struct bcma_bus *bus) /* Scan for devices (cores) */ err = bcma_bus_scan(bus); if (err) { - pr_err("Failed to scan: %d\n", err); + bcma_err(bus, "Failed to scan: %d\n", err); return -1; } /* Init CC core */ - core = bcma_find_core(bus, BCMA_CORE_CHIPCOMMON); + core = bcma_find_core(bus, bcma_cc_core_id(bus)); if (core) { bus->drv_cc.core = core; bcma_core_chipcommon_init(&bus->drv_cc); @@ -176,17 +186,24 @@ int __devinit bcma_bus_register(struct bcma_bus *bus) bcma_core_pci_init(&bus->drv_pci); } + /* Init GBIT MAC COMMON core */ + core = bcma_find_core(bus, BCMA_CORE_4706_MAC_GBIT_COMMON); + if (core) { + bus->drv_gmac_cmn.core = core; + bcma_core_gmac_cmn_init(&bus->drv_gmac_cmn); + } + /* Try to get SPROM */ err = bcma_sprom_get(bus); if (err == -ENOENT) { - pr_err("No SPROM available\n"); + bcma_err(bus, "No SPROM available\n"); } else if (err) - pr_err("Failed to get SPROM: %d\n", err); + bcma_err(bus, "Failed to get SPROM: %d\n", err); /* Register found cores */ bcma_register_cores(bus); - pr_info("Bus registered\n"); + bcma_info(bus, "Bus registered\n"); return 0; } @@ -207,14 +224,14 @@ int __init bcma_bus_early_register(struct bcma_bus *bus, bcma_init_bus(bus); match.manuf = BCMA_MANUF_BCM; - match.id = BCMA_CORE_CHIPCOMMON; + match.id = bcma_cc_core_id(bus); match.class = BCMA_CL_SIM; match.rev = BCMA_ANY_REV; /* Scan for chip common core */ err = bcma_bus_scan_early(bus, &match, core_cc); if (err) { - pr_err("Failed to scan for common core: %d\n", err); + bcma_err(bus, "Failed to scan for common core: %d\n", err); return -1; } @@ -226,12 +243,12 @@ int __init bcma_bus_early_register(struct bcma_bus *bus, /* Scan for mips core */ err = bcma_bus_scan_early(bus, &match, core_mips); if (err) { - pr_err("Failed to scan for mips core: %d\n", 
err); + bcma_err(bus, "Failed to scan for mips core: %d\n", err); return -1; } /* Init CC core */ - core = bcma_find_core(bus, BCMA_CORE_CHIPCOMMON); + core = bcma_find_core(bus, bcma_cc_core_id(bus)); if (core) { bus->drv_cc.core = core; bcma_core_chipcommon_init(&bus->drv_cc); @@ -244,7 +261,7 @@ int __init bcma_bus_early_register(struct bcma_bus *bus, bcma_core_mips_init(&bus->drv_mips); } - pr_info("Early bus registered\n"); + bcma_info(bus, "Early bus registered\n"); return 0; } @@ -270,8 +287,7 @@ int bcma_bus_resume(struct bcma_bus *bus) struct bcma_device *core; /* Init CC core */ - core = bcma_find_core(bus, BCMA_CORE_CHIPCOMMON); - if (core) { + if (bus->drv_cc.core) { bus->drv_cc.setup_done = false; bcma_core_chipcommon_init(&bus->drv_cc); } diff --git a/drivers/bcma/scan.c b/drivers/bcma/scan.c index 5ed0718fc660..5672b13d0951 100644 --- a/drivers/bcma/scan.c +++ b/drivers/bcma/scan.c @@ -21,6 +21,7 @@ struct bcma_device_id_name { }; static const struct bcma_device_id_name bcma_arm_device_names[] = { + { BCMA_CORE_4706_MAC_GBIT_COMMON, "BCM4706 GBit MAC Common" }, { BCMA_CORE_ARM_1176, "ARM 1176" }, { BCMA_CORE_ARM_7TDMI, "ARM 7TDMI" }, { BCMA_CORE_ARM_CM3, "ARM CM3" }, @@ -28,6 +29,11 @@ static const struct bcma_device_id_name bcma_arm_device_names[] = { static const struct bcma_device_id_name bcma_bcm_device_names[] = { { BCMA_CORE_OOB_ROUTER, "OOB Router" }, + { BCMA_CORE_4706_CHIPCOMMON, "BCM4706 ChipCommon" }, + { BCMA_CORE_4706_SOC_RAM, "BCM4706 SOC RAM" }, + { BCMA_CORE_4706_MAC_GBIT, "BCM4706 GBit MAC" }, + { BCMA_CORE_AMEMC, "AMEMC (DDR)" }, + { BCMA_CORE_ALTA, "ALTA (I2S)" }, { BCMA_CORE_INVALID, "Invalid" }, { BCMA_CORE_CHIPCOMMON, "ChipCommon" }, { BCMA_CORE_ILINE20, "ILine 20" }, @@ -289,11 +295,15 @@ static int bcma_get_next_core(struct bcma_bus *bus, u32 __iomem **eromptr, /* check if component is a core at all */ if (wrappers[0] + wrappers[1] == 0) { - /* we could save addrl of the router - if (cid == BCMA_CORE_OOB_ROUTER) - */ - bcma_erom_skip_component(bus, eromptr); - return -ENXIO; + /* Some specific cores don't need wrappers */ + switch (core->id.id) { + case BCMA_CORE_4706_MAC_GBIT_COMMON: + /* Not used yet: case BCMA_CORE_OOB_ROUTER: */ + break; + default: + bcma_erom_skip_component(bus, eromptr); + return -ENXIO; + } } if (bcma_erom_is_bridge(bus, eromptr)) { @@ -334,7 +344,7 @@ static int bcma_get_next_core(struct bcma_bus *bus, u32 __iomem **eromptr, if (tmp <= 0) { return -EILSEQ; } else { - pr_info("Bridge found\n"); + bcma_info(bus, "Bridge found\n"); return -ENXIO; } } @@ -421,8 +431,8 @@ void bcma_init_bus(struct bcma_bus *bus) chipinfo->id = (tmp & BCMA_CC_ID_ID) >> BCMA_CC_ID_ID_SHIFT; chipinfo->rev = (tmp & BCMA_CC_ID_REV) >> BCMA_CC_ID_REV_SHIFT; chipinfo->pkg = (tmp & BCMA_CC_ID_PKG) >> BCMA_CC_ID_PKG_SHIFT; - pr_info("Found chip with id 0x%04X, rev 0x%02X and package 0x%02X\n", - chipinfo->id, chipinfo->rev, chipinfo->pkg); + bcma_info(bus, "Found chip with id 0x%04X, rev 0x%02X and package 0x%02X\n", + chipinfo->id, chipinfo->rev, chipinfo->pkg); bus->init_done = true; } @@ -476,13 +486,12 @@ int bcma_bus_scan(struct bcma_bus *bus) other_core = bcma_find_core_reverse(bus, core->id.id); core->core_unit = (other_core == NULL) ? 
0 : other_core->core_unit + 1; - pr_info("Core %d found: %s " - "(manuf 0x%03X, id 0x%03X, rev 0x%02X, class 0x%X)\n", - core->core_index, bcma_device_name(&core->id), - core->id.manuf, core->id.id, core->id.rev, - core->id.class); + bcma_info(bus, "Core %d found: %s (manuf 0x%03X, id 0x%03X, rev 0x%02X, class 0x%X)\n", + core->core_index, bcma_device_name(&core->id), + core->id.manuf, core->id.id, core->id.rev, + core->id.class); - list_add(&core->list, &bus->cores); + list_add_tail(&core->list, &bus->cores); } if (bus->hosttype == BCMA_HOSTTYPE_SOC) @@ -532,13 +541,12 @@ int __init bcma_bus_scan_early(struct bcma_bus *bus, core->core_index = core_num++; bus->nr_cores++; - pr_info("Core %d found: %s " - "(manuf 0x%03X, id 0x%03X, rev 0x%02X, class 0x%X)\n", - core->core_index, bcma_device_name(&core->id), - core->id.manuf, core->id.id, core->id.rev, - core->id.class); + bcma_info(bus, "Core %d found: %s (manuf 0x%03X, id 0x%03X, rev 0x%02X, class 0x%X)\n", + core->core_index, bcma_device_name(&core->id), + core->id.manuf, core->id.id, core->id.rev, + core->id.class); - list_add(&core->list, &bus->cores); + list_add_tail(&core->list, &bus->cores); err = 0; break; } diff --git a/drivers/bcma/scan.h b/drivers/bcma/scan.h index 113e6a66884c..30eb475e4d19 100644 --- a/drivers/bcma/scan.h +++ b/drivers/bcma/scan.h @@ -27,7 +27,7 @@ #define SCAN_CIB_NMW 0x0007C000 #define SCAN_CIB_NMW_SHIFT 14 #define SCAN_CIB_NSW 0x00F80000 -#define SCAN_CIB_NSW_SHIFT 17 +#define SCAN_CIB_NSW_SHIFT 19 #define SCAN_CIB_REV 0xFF000000 #define SCAN_CIB_REV_SHIFT 24 diff --git a/drivers/bcma/sprom.c b/drivers/bcma/sprom.c index f16f42d36071..26823d97fd9f 100644 --- a/drivers/bcma/sprom.c +++ b/drivers/bcma/sprom.c @@ -60,11 +60,11 @@ static int bcma_fill_sprom_with_fallback(struct bcma_bus *bus, if (err) goto fail; - pr_debug("Using SPROM revision %d provided by" - " platform.\n", bus->sprom.revision); + bcma_debug(bus, "Using SPROM revision %d provided by platform.\n", + bus->sprom.revision); return 0; fail: - pr_warn("Using fallback SPROM failed (err %d)\n", err); + bcma_warn(bus, "Using fallback SPROM failed (err %d)\n", err); return err; } @@ -468,11 +468,11 @@ static bool bcma_sprom_ext_available(struct bcma_bus *bus) /* older chipcommon revisions use chip status register */ chip_status = bcma_read32(bus->drv_cc.core, BCMA_CC_CHIPSTAT); switch (bus->chipinfo.id) { - case 0x4313: + case BCMA_CHIP_ID_BCM4313: present_mask = BCMA_CC_CHIPST_4313_SPROM_PRESENT; break; - case 0x4331: + case BCMA_CHIP_ID_BCM4331: present_mask = BCMA_CC_CHIPST_4331_SPROM_PRESENT; break; @@ -494,16 +494,16 @@ static bool bcma_sprom_onchip_available(struct bcma_bus *bus) chip_status = bcma_read32(bus->drv_cc.core, BCMA_CC_CHIPSTAT); switch (bus->chipinfo.id) { - case 0x4313: + case BCMA_CHIP_ID_BCM4313: present = chip_status & BCMA_CC_CHIPST_4313_OTP_PRESENT; break; - case 0x4331: + case BCMA_CHIP_ID_BCM4331: present = chip_status & BCMA_CC_CHIPST_4331_OTP_PRESENT; break; - case 43224: - case 43225: + case BCMA_CHIP_ID_BCM43224: + case BCMA_CHIP_ID_BCM43225: /* for these chips OTP is always available */ present = true; break; @@ -579,13 +579,15 @@ int bcma_sprom_get(struct bcma_bus *bus) if (!sprom) return -ENOMEM; - if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431) + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM4331 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM43431) bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, false); - pr_debug("SPROM offset 0x%x\n", offset); + bcma_debug(bus, "SPROM offset 0x%x\n", offset); 
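/*
 * Editor's illustrative aside, not part of the patch: the scan.h hunk above
 * fixes SCAN_CIB_NSW_SHIFT from 17 to 19.  The mask 0x00F80000 covers bits
 * 23..19, so the NSW field only extracts correctly when shifted right by 19.
 * A stand-alone check of that extraction; the encoded value 3 is arbitrary:
 */
#include <stdint.h>
#include <stdio.h>

#define EX_SCAN_CIB_NSW       0x00F80000u
#define EX_SCAN_CIB_NSW_SHIFT 19

int main(void)
{
        uint32_t cib = 3u << EX_SCAN_CIB_NSW_SHIFT;     /* example CIB word */
        uint32_t nsw = (cib & EX_SCAN_CIB_NSW) >> EX_SCAN_CIB_NSW_SHIFT;

        printf("nsw = %u\n", (unsigned)nsw);    /* 3; a shift of 17 would give 12 */
        return 0;
}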
bcma_sprom_read(bus, offset, sprom); - if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431) + if (bus->chipinfo.id == BCMA_CHIP_ID_BCM4331 || + bus->chipinfo.id == BCMA_CHIP_ID_BCM43431) bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, true); err = bcma_sprom_valid(sprom); diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig index 5ccf142ef0b8..e9f203eadb1f 100644 --- a/drivers/bluetooth/Kconfig +++ b/drivers/bluetooth/Kconfig @@ -81,6 +81,18 @@ config BT_HCIUART_LL Say Y here to compile support for HCILL protocol. +config BT_HCIUART_3WIRE + bool "Three-wire UART (H5) protocol support" + depends on BT_HCIUART + help + The HCI Three-wire UART Transport Layer makes it possible to + user the Bluetooth HCI over a serial port interface. The HCI + Three-wire UART Transport Layer assumes that the UART + communication may have bit errors, overrun errors or burst + errors and thereby making CTS/RTS lines unnecessary. + + Say Y here to compile support for Three-wire UART protocol. + config BT_HCIBCM203X tristate "HCI BCM203x USB driver" depends on USB diff --git a/drivers/bluetooth/Makefile b/drivers/bluetooth/Makefile index f4460f4f4b78..4afae20df512 100644 --- a/drivers/bluetooth/Makefile +++ b/drivers/bluetooth/Makefile @@ -28,4 +28,5 @@ hci_uart-$(CONFIG_BT_HCIUART_H4) += hci_h4.o hci_uart-$(CONFIG_BT_HCIUART_BCSP) += hci_bcsp.o hci_uart-$(CONFIG_BT_HCIUART_LL) += hci_ll.o hci_uart-$(CONFIG_BT_HCIUART_ATH3K) += hci_ath.o +hci_uart-$(CONFIG_BT_HCIUART_3WIRE) += hci_h5.o hci_uart-objs := $(hci_uart-y) diff --git a/drivers/bluetooth/bluecard_cs.c b/drivers/bluetooth/bluecard_cs.c index 1fcd92380356..66c3a6770c41 100644 --- a/drivers/bluetooth/bluecard_cs.c +++ b/drivers/bluetooth/bluecard_cs.c @@ -231,12 +231,12 @@ static void bluecard_write_wakeup(bluecard_info_t *info) } do { - register unsigned int iobase = info->p_dev->resource[0]->start; - register unsigned int offset; - register unsigned char command; - register unsigned long ready_bit; + unsigned int iobase = info->p_dev->resource[0]->start; + unsigned int offset; + unsigned char command; + unsigned long ready_bit; register struct sk_buff *skb; - register int len; + int len; clear_bit(XMIT_WAKEUP, &(info->tx_state)); @@ -621,7 +621,6 @@ static int bluecard_hci_flush(struct hci_dev *hdev) static int bluecard_hci_open(struct hci_dev *hdev) { bluecard_info_t *info = hci_get_drvdata(hdev); - unsigned int iobase = info->p_dev->resource[0]->start; if (test_bit(CARD_HAS_PCCARD_ID, &(info->hw_state))) bluecard_hci_set_baud_rate(hdev, DEFAULT_BAUD_RATE); @@ -630,6 +629,8 @@ static int bluecard_hci_open(struct hci_dev *hdev) return 0; if (test_bit(CARD_HAS_PCCARD_ID, &(info->hw_state))) { + unsigned int iobase = info->p_dev->resource[0]->start; + /* Enable LED */ outb(0x08 | 0x20, iobase + 0x30); } @@ -641,7 +642,6 @@ static int bluecard_hci_open(struct hci_dev *hdev) static int bluecard_hci_close(struct hci_dev *hdev) { bluecard_info_t *info = hci_get_drvdata(hdev); - unsigned int iobase = info->p_dev->resource[0]->start; if (!test_and_clear_bit(HCI_RUNNING, &(hdev->flags))) return 0; @@ -649,6 +649,8 @@ static int bluecard_hci_close(struct hci_dev *hdev) bluecard_hci_flush(hdev); if (test_bit(CARD_HAS_PCCARD_ID, &(info->hw_state))) { + unsigned int iobase = info->p_dev->resource[0]->start; + /* Disable LED */ outb(0x00, iobase + 0x30); } diff --git a/drivers/bluetooth/bpa10x.c b/drivers/bluetooth/bpa10x.c index 609861a53c28..29caaed2d715 100644 --- a/drivers/bluetooth/bpa10x.c +++ b/drivers/bluetooth/bpa10x.c @@ -470,7 +470,7 
@@ static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id * hdev->flush = bpa10x_flush; hdev->send = bpa10x_send_frame; - set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks); + set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); err = hci_register_dev(hdev); if (err < 0) { diff --git a/drivers/bluetooth/bt3c_cs.c b/drivers/bluetooth/bt3c_cs.c index 308c8599ab55..8925b6d672a6 100644 --- a/drivers/bluetooth/bt3c_cs.c +++ b/drivers/bluetooth/bt3c_cs.c @@ -186,9 +186,9 @@ static void bt3c_write_wakeup(bt3c_info_t *info) return; do { - register unsigned int iobase = info->p_dev->resource[0]->start; + unsigned int iobase = info->p_dev->resource[0]->start; register struct sk_buff *skb; - register int len; + int len; if (!pcmcia_dev_present(info->p_dev)) break; @@ -664,7 +664,7 @@ static int bt3c_check_config(struct pcmcia_device *p_dev, void *priv_data) { int *try = priv_data; - if (try == 0) + if (!try) p_dev->io_lines = 16; if ((p_dev->resource[0]->end != 8) || (p_dev->resource[0]->start == 0)) diff --git a/drivers/bluetooth/btmrvl_main.c b/drivers/bluetooth/btmrvl_main.c index dc304def8400..3a4343b3bd6d 100644 --- a/drivers/bluetooth/btmrvl_main.c +++ b/drivers/bluetooth/btmrvl_main.c @@ -47,10 +47,11 @@ EXPORT_SYMBOL_GPL(btmrvl_interrupt); bool btmrvl_check_evtpkt(struct btmrvl_private *priv, struct sk_buff *skb) { struct hci_event_hdr *hdr = (void *) skb->data; - struct hci_ev_cmd_complete *ec; - u16 opcode, ocf, ogf; if (hdr->evt == HCI_EV_CMD_COMPLETE) { + struct hci_ev_cmd_complete *ec; + u16 opcode, ocf, ogf; + ec = (void *) (skb->data + HCI_EVENT_HDR_SIZE); opcode = __le16_to_cpu(ec->opcode); ocf = hci_opcode_ocf(opcode); @@ -64,7 +65,8 @@ bool btmrvl_check_evtpkt(struct btmrvl_private *priv, struct sk_buff *skb) } if (ogf == OGF) { - BT_DBG("vendor event skipped: ogf 0x%4.4x", ogf); + BT_DBG("vendor event skipped: ogf 0x%4.4x ocf 0x%4.4x", + ogf, ocf); kfree_skb(skb); return false; } diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c index 0cd61d9f07cd..6a9e9717d3ab 100644 --- a/drivers/bluetooth/btmrvl_sdio.c +++ b/drivers/bluetooth/btmrvl_sdio.c @@ -110,6 +110,9 @@ static const struct sdio_device_id btmrvl_sdio_ids[] = { /* Marvell SD8787 Bluetooth device */ { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x911A), .driver_data = (unsigned long) &btmrvl_sdio_sd8787 }, + /* Marvell SD8787 Bluetooth AMP device */ + { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x911B), + .driver_data = (unsigned long) &btmrvl_sdio_sd8787 }, /* Marvell SD8797 Bluetooth device */ { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912A), .driver_data = (unsigned long) &btmrvl_sdio_sd8797 }, @@ -565,8 +568,9 @@ static int btmrvl_sdio_card_to_host(struct btmrvl_private *priv) if (type == HCI_EVENT_PKT) { if (btmrvl_check_evtpkt(priv, skb)) hci_recv_frame(skb); - } else + } else { hci_recv_frame(skb); + } hdev->stat.byte_rx += buf_len; break; diff --git a/drivers/bluetooth/btuart_cs.c b/drivers/bluetooth/btuart_cs.c index c4fc2f3fc32c..21e803a6a281 100644 --- a/drivers/bluetooth/btuart_cs.c +++ b/drivers/bluetooth/btuart_cs.c @@ -140,9 +140,9 @@ static void btuart_write_wakeup(btuart_info_t *info) } do { - register unsigned int iobase = info->p_dev->resource[0]->start; + unsigned int iobase = info->p_dev->resource[0]->start; register struct sk_buff *skb; - register int len; + int len; clear_bit(XMIT_WAKEUP, &(info->tx_state)); @@ -593,7 +593,7 @@ static int btuart_check_config(struct pcmcia_device *p_dev, void *priv_data) { int *try = priv_data; - if (try == 0) + if (!try) p_dev->io_lines = 16; if 
((p_dev->resource[0]->end != 8) || (p_dev->resource[0]->start == 0)) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 83ebb241bfcc..e27221411036 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -21,15 +21,7 @@ * */ -#include <linux/kernel.h> #include <linux/module.h> -#include <linux/init.h> -#include <linux/slab.h> -#include <linux/types.h> -#include <linux/sched.h> -#include <linux/errno.h> -#include <linux/skbuff.h> - #include <linux/usb.h> #include <net/bluetooth/bluetooth.h> @@ -1028,7 +1020,7 @@ static int btusb_probe(struct usb_interface *intf, data->isoc = usb_ifnum_to_if(data->udev, 1); if (!reset) - set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks); + set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); if (force_scofix || id->driver_info & BTUSB_WRONG_SCO_MTU) { if (!disable_scofix) @@ -1040,7 +1032,7 @@ static int btusb_probe(struct usb_interface *intf, if (id->driver_info & BTUSB_DIGIANSWER) { data->cmdreq_type = USB_TYPE_VENDOR; - set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks); + set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); } if (id->driver_info & BTUSB_CSR) { @@ -1048,7 +1040,7 @@ static int btusb_probe(struct usb_interface *intf, /* Old firmware would otherwise execute USB reset */ if (le16_to_cpu(udev->descriptor.bcdDevice) < 0x117) - set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks); + set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); } if (id->driver_info & BTUSB_SNIFFER) { diff --git a/drivers/bluetooth/dtl1_cs.c b/drivers/bluetooth/dtl1_cs.c index 6e8d96189684..97a7784db4a2 100644 --- a/drivers/bluetooth/dtl1_cs.c +++ b/drivers/bluetooth/dtl1_cs.c @@ -144,9 +144,9 @@ static void dtl1_write_wakeup(dtl1_info_t *info) } do { - register unsigned int iobase = info->p_dev->resource[0]->start; + unsigned int iobase = info->p_dev->resource[0]->start; register struct sk_buff *skb; - register int len; + int len; clear_bit(XMIT_WAKEUP, &(info->tx_state)); @@ -586,29 +586,31 @@ static int dtl1_confcheck(struct pcmcia_device *p_dev, void *priv_data) static int dtl1_config(struct pcmcia_device *link) { dtl1_info_t *info = link->priv; - int i; + int ret; /* Look for a generic full-sized window */ link->resource[0]->end = 8; - if (pcmcia_loop_config(link, dtl1_confcheck, NULL) < 0) + ret = pcmcia_loop_config(link, dtl1_confcheck, NULL); + if (ret) goto failed; - i = pcmcia_request_irq(link, dtl1_interrupt); - if (i != 0) + ret = pcmcia_request_irq(link, dtl1_interrupt); + if (ret) goto failed; - i = pcmcia_enable_device(link); - if (i != 0) + ret = pcmcia_enable_device(link); + if (ret) goto failed; - if (dtl1_open(info) != 0) + ret = dtl1_open(info); + if (ret) goto failed; return 0; failed: dtl1_detach(link); - return -ENODEV; + return ret; } static const struct pcmcia_device_id dtl1_ids[] = { diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c index 661a8dc4d2f8..57e502e06080 100644 --- a/drivers/bluetooth/hci_bcsp.c +++ b/drivers/bluetooth/hci_bcsp.c @@ -552,7 +552,7 @@ static u16 bscp_get_crc(struct bcsp_struct *bcsp) static int bcsp_recv(struct hci_uart *hu, void *data, int count) { struct bcsp_struct *bcsp = hu->priv; - register unsigned char *ptr; + unsigned char *ptr; BT_DBG("hu %p count %d rx_state %d rx_count %ld", hu, count, bcsp->rx_state, bcsp->rx_count); diff --git a/drivers/bluetooth/hci_h4.c b/drivers/bluetooth/hci_h4.c index 748329468d26..c60623f206d4 100644 --- a/drivers/bluetooth/hci_h4.c +++ b/drivers/bluetooth/hci_h4.c @@ -126,7 +126,7 @@ static int h4_enqueue(struct hci_uart *hu, struct sk_buff *skb) static 
inline int h4_check_data_len(struct h4_struct *h4, int len) { - register int room = skb_tailroom(h4->rx_skb); + int room = skb_tailroom(h4->rx_skb); BT_DBG("len %d room %d", len, room); diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c new file mode 100644 index 000000000000..b6154d5a07a5 --- /dev/null +++ b/drivers/bluetooth/hci_h5.c @@ -0,0 +1,747 @@ +/* + * + * Bluetooth HCI Three-wire UART driver + * + * Copyright (C) 2012 Intel Corporation + * + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/kernel.h> +#include <linux/errno.h> +#include <linux/skbuff.h> + +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> + +#include "hci_uart.h" + +#define HCI_3WIRE_ACK_PKT 0 +#define HCI_3WIRE_LINK_PKT 15 + +/* Sliding window size */ +#define H5_TX_WIN_MAX 4 + +#define H5_ACK_TIMEOUT msecs_to_jiffies(250) +#define H5_SYNC_TIMEOUT msecs_to_jiffies(100) + +/* + * Maximum Three-wire packet: + * 4 byte header + max value for 12-bit length + 2 bytes for CRC + */ +#define H5_MAX_LEN (4 + 0xfff + 2) + +/* Convenience macros for reading Three-wire header values */ +#define H5_HDR_SEQ(hdr) ((hdr)[0] & 0x07) +#define H5_HDR_ACK(hdr) (((hdr)[0] >> 3) & 0x07) +#define H5_HDR_CRC(hdr) (((hdr)[0] >> 6) & 0x01) +#define H5_HDR_RELIABLE(hdr) (((hdr)[0] >> 7) & 0x01) +#define H5_HDR_PKT_TYPE(hdr) ((hdr)[1] & 0x0f) +#define H5_HDR_LEN(hdr) ((((hdr)[1] >> 4) & 0xff) + ((hdr)[2] << 4)) + +#define SLIP_DELIMITER 0xc0 +#define SLIP_ESC 0xdb +#define SLIP_ESC_DELIM 0xdc +#define SLIP_ESC_ESC 0xdd + +/* H5 state flags */ +enum { + H5_RX_ESC, /* SLIP escape mode */ + H5_TX_ACK_REQ, /* Pending ack to send */ +}; + +struct h5 { + struct sk_buff_head unack; /* Unack'ed packets queue */ + struct sk_buff_head rel; /* Reliable packets queue */ + struct sk_buff_head unrel; /* Unreliable packets queue */ + + unsigned long flags; + + struct sk_buff *rx_skb; /* Receive buffer */ + size_t rx_pending; /* Expecting more bytes */ + u8 rx_ack; /* Last ack number received */ + + int (*rx_func) (struct hci_uart *hu, u8 c); + + struct timer_list timer; /* Retransmission timer */ + + u8 tx_seq; /* Next seq number to send */ + u8 tx_ack; /* Next ack number to send */ + u8 tx_win; /* Sliding window size */ + + enum { + H5_UNINITIALIZED, + H5_INITIALIZED, + H5_ACTIVE, + } state; + + enum { + H5_AWAKE, + H5_SLEEPING, + H5_WAKING_UP, + } sleep; +}; + +static void h5_reset_rx(struct h5 *h5); + +static void h5_link_control(struct hci_uart *hu, const void *data, size_t len) +{ + struct h5 *h5 = hu->priv; + struct sk_buff *nskb; + + nskb = alloc_skb(3, GFP_ATOMIC); + if (!nskb) + return; + + bt_cb(nskb)->pkt_type = HCI_3WIRE_LINK_PKT; + + memcpy(skb_put(nskb, len), data, len); + + skb_queue_tail(&h5->unrel, nskb); +} + +static u8 h5_cfg_field(struct h5 *h5) +{ + u8 field = 0; + + /* Sliding window size (first 3 bits) 
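/*
 * Editor's illustrative aside, not part of the patch: the H5_HDR_* macros
 * above imply the layout of the four-byte Three-wire header -- byte 0 packs
 * seq (bits 0-2), ack (bits 3-5), a CRC-present flag (bit 6) and the reliable
 * flag (bit 7); byte 1 packs the packet type (low nibble) and the low four
 * bits of the 12-bit length; byte 2 carries the remaining length bits.  A
 * stand-alone encoder for those bytes, decoded back with the same arithmetic
 * as the macros; the packet values are invented:
 */
#include <stdint.h>
#include <stdio.h>

static void ex_h5_build_hdr(uint8_t hdr[4], unsigned int seq, unsigned int ack,
                            int crc, int reliable, unsigned int type,
                            unsigned int len)
{
        hdr[0] = (seq & 0x07) | ((ack & 0x07) << 3) |
                 ((crc & 0x01) << 6) | ((reliable & 0x01) << 7);
        hdr[1] = (type & 0x0f) | ((len & 0x0f) << 4);
        hdr[2] = (len >> 4) & 0xff;
        hdr[3] = 0;     /* checksum byte, covered further down in the driver */
}

int main(void)
{
        uint8_t hdr[4];

        /* reliable ACL-data packet (type 2), seq 5, ack 3, 27 byte payload */
        ex_h5_build_hdr(hdr, 5, 3, 0, 1, 2, 27);

        printf("seq=%d ack=%d type=%d len=%d\n",
               hdr[0] & 0x07, (hdr[0] >> 3) & 0x07,
               hdr[1] & 0x0f,
               ((hdr[1] >> 4) & 0xff) + (hdr[2] << 4));
        /* prints: seq=5 ack=3 type=2 len=27 */
        return 0;
}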
*/ + field |= (h5->tx_win & 7); + + return field; +} + +static void h5_timed_event(unsigned long arg) +{ + const unsigned char sync_req[] = { 0x01, 0x7e }; + unsigned char conf_req[] = { 0x03, 0xfc, 0x01 }; + struct hci_uart *hu = (struct hci_uart *) arg; + struct h5 *h5 = hu->priv; + struct sk_buff *skb; + unsigned long flags; + + BT_DBG("%s", hu->hdev->name); + + if (h5->state == H5_UNINITIALIZED) + h5_link_control(hu, sync_req, sizeof(sync_req)); + + if (h5->state == H5_INITIALIZED) { + conf_req[2] = h5_cfg_field(h5); + h5_link_control(hu, conf_req, sizeof(conf_req)); + } + + if (h5->state != H5_ACTIVE) { + mod_timer(&h5->timer, jiffies + H5_SYNC_TIMEOUT); + goto wakeup; + } + + if (h5->sleep != H5_AWAKE) { + h5->sleep = H5_SLEEPING; + goto wakeup; + } + + BT_DBG("hu %p retransmitting %u pkts", hu, h5->unack.qlen); + + spin_lock_irqsave_nested(&h5->unack.lock, flags, SINGLE_DEPTH_NESTING); + + while ((skb = __skb_dequeue_tail(&h5->unack)) != NULL) { + h5->tx_seq = (h5->tx_seq - 1) & 0x07; + skb_queue_head(&h5->rel, skb); + } + + spin_unlock_irqrestore(&h5->unack.lock, flags); + +wakeup: + hci_uart_tx_wakeup(hu); +} + +static int h5_open(struct hci_uart *hu) +{ + struct h5 *h5; + const unsigned char sync[] = { 0x01, 0x7e }; + + BT_DBG("hu %p", hu); + + h5 = kzalloc(sizeof(*h5), GFP_KERNEL); + if (!h5) + return -ENOMEM; + + hu->priv = h5; + + skb_queue_head_init(&h5->unack); + skb_queue_head_init(&h5->rel); + skb_queue_head_init(&h5->unrel); + + h5_reset_rx(h5); + + init_timer(&h5->timer); + h5->timer.function = h5_timed_event; + h5->timer.data = (unsigned long) hu; + + h5->tx_win = H5_TX_WIN_MAX; + + set_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags); + + /* Send initial sync request */ + h5_link_control(hu, sync, sizeof(sync)); + mod_timer(&h5->timer, jiffies + H5_SYNC_TIMEOUT); + + return 0; +} + +static int h5_close(struct hci_uart *hu) +{ + struct h5 *h5 = hu->priv; + + skb_queue_purge(&h5->unack); + skb_queue_purge(&h5->rel); + skb_queue_purge(&h5->unrel); + + del_timer(&h5->timer); + + kfree(h5); + + return 0; +} + +static void h5_pkt_cull(struct h5 *h5) +{ + struct sk_buff *skb, *tmp; + unsigned long flags; + int i, to_remove; + u8 seq; + + spin_lock_irqsave(&h5->unack.lock, flags); + + to_remove = skb_queue_len(&h5->unack); + if (to_remove == 0) + goto unlock; + + seq = h5->tx_seq; + + while (to_remove > 0) { + if (h5->rx_ack == seq) + break; + + to_remove--; + seq = (seq - 1) % 8; + } + + if (seq != h5->rx_ack) + BT_ERR("Controller acked invalid packet"); + + i = 0; + skb_queue_walk_safe(&h5->unack, skb, tmp) { + if (i++ >= to_remove) + break; + + __skb_unlink(skb, &h5->unack); + kfree_skb(skb); + } + + if (skb_queue_empty(&h5->unack)) + del_timer(&h5->timer); + +unlock: + spin_unlock_irqrestore(&h5->unack.lock, flags); +} + +static void h5_handle_internal_rx(struct hci_uart *hu) +{ + struct h5 *h5 = hu->priv; + const unsigned char sync_req[] = { 0x01, 0x7e }; + const unsigned char sync_rsp[] = { 0x02, 0x7d }; + unsigned char conf_req[] = { 0x03, 0xfc, 0x01 }; + const unsigned char conf_rsp[] = { 0x04, 0x7b }; + const unsigned char wakeup_req[] = { 0x05, 0xfa }; + const unsigned char woken_req[] = { 0x06, 0xf9 }; + const unsigned char sleep_req[] = { 0x07, 0x78 }; + const unsigned char *hdr = h5->rx_skb->data; + const unsigned char *data = &h5->rx_skb->data[4]; + + BT_DBG("%s", hu->hdev->name); + + if (H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) + return; + + if (H5_HDR_LEN(hdr) < 2) + return; + + conf_req[2] = h5_cfg_field(h5); + + if (memcmp(data, sync_req, 2) == 0) { + 
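/*
 * Editor's illustrative aside, not part of the patch: the SLIP_* constants
 * defined earlier suggest standard SLIP byte stuffing for the Three-wire
 * frames -- frames are bracketed by 0xc0 delimiters, and a 0xc0 or 0xdb
 * inside the payload becomes the two-byte sequence 0xdb 0xdc or 0xdb 0xdd.
 * The driver's own SLIP routines follow later in the file and are not shown
 * in this excerpt, so this is only a sketch of the general technique:
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define EX_SLIP_DELIMITER 0xc0
#define EX_SLIP_ESC       0xdb
#define EX_SLIP_ESC_DELIM 0xdc
#define EX_SLIP_ESC_ESC   0xdd

/* Worst case output: two delimiters plus two bytes per input byte. */
static size_t ex_slip_encode(const uint8_t *in, size_t len, uint8_t *out)
{
        size_t i, n = 0;

        out[n++] = EX_SLIP_DELIMITER;
        for (i = 0; i < len; i++) {
                if (in[i] == EX_SLIP_DELIMITER) {
                        out[n++] = EX_SLIP_ESC;
                        out[n++] = EX_SLIP_ESC_DELIM;
                } else if (in[i] == EX_SLIP_ESC) {
                        out[n++] = EX_SLIP_ESC;
                        out[n++] = EX_SLIP_ESC_ESC;
                } else {
                        out[n++] = in[i];
                }
        }
        out[n++] = EX_SLIP_DELIMITER;
        return n;
}

int main(void)
{
        const uint8_t payload[] = { 0x01, 0xc0, 0x02 };
        uint8_t frame[2 + 2 * sizeof(payload)];
        size_t i, n = ex_slip_encode(payload, sizeof(payload), frame);

        for (i = 0; i < n; i++)
                printf("%02x ", (unsigned)frame[i]);
        printf("\n");           /* c0 01 db dc 02 c0 */
        return 0;
}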
h5_link_control(hu, sync_rsp, 2); + } else if (memcmp(data, sync_rsp, 2) == 0) { + h5->state = H5_INITIALIZED; + h5_link_control(hu, conf_req, 3); + } else if (memcmp(data, conf_req, 2) == 0) { + h5_link_control(hu, conf_rsp, 2); + h5_link_control(hu, conf_req, 3); + } else if (memcmp(data, conf_rsp, 2) == 0) { + if (H5_HDR_LEN(hdr) > 2) + h5->tx_win = (data[2] & 7); + BT_DBG("Three-wire init complete. tx_win %u", h5->tx_win); + h5->state = H5_ACTIVE; + hci_uart_init_ready(hu); + return; + } else if (memcmp(data, sleep_req, 2) == 0) { + BT_DBG("Peer went to sleep"); + h5->sleep = H5_SLEEPING; + return; + } else if (memcmp(data, woken_req, 2) == 0) { + BT_DBG("Peer woke up"); + h5->sleep = H5_AWAKE; + } else if (memcmp(data, wakeup_req, 2) == 0) { + BT_DBG("Peer requested wakeup"); + h5_link_control(hu, woken_req, 2); + h5->sleep = H5_AWAKE; + } else { + BT_DBG("Link Control: 0x%02hhx 0x%02hhx", data[0], data[1]); + return; + } + + hci_uart_tx_wakeup(hu); +} + +static void h5_complete_rx_pkt(struct hci_uart *hu) +{ + struct h5 *h5 = hu->priv; + const unsigned char *hdr = h5->rx_skb->data; + + if (H5_HDR_RELIABLE(hdr)) { + h5->tx_ack = (h5->tx_ack + 1) % 8; + set_bit(H5_TX_ACK_REQ, &h5->flags); + hci_uart_tx_wakeup(hu); + } + + h5->rx_ack = H5_HDR_ACK(hdr); + + h5_pkt_cull(h5); + + switch (H5_HDR_PKT_TYPE(hdr)) { + case HCI_EVENT_PKT: + case HCI_ACLDATA_PKT: + case HCI_SCODATA_PKT: + bt_cb(h5->rx_skb)->pkt_type = H5_HDR_PKT_TYPE(hdr); + + /* Remove Three-wire header */ + skb_pull(h5->rx_skb, 4); + + hci_recv_frame(h5->rx_skb); + h5->rx_skb = NULL; + + break; + + default: + h5_handle_internal_rx(hu); + break; + } + + h5_reset_rx(h5); +} + +static int h5_rx_crc(struct hci_uart *hu, unsigned char c) +{ + struct h5 *h5 = hu->priv; + + h5_complete_rx_pkt(hu); + h5_reset_rx(h5); + + return 0; +} + +static int h5_rx_payload(struct hci_uart *hu, unsigned char c) +{ + struct h5 *h5 = hu->priv; + const unsigned char *hdr = h5->rx_skb->data; + + if (H5_HDR_CRC(hdr)) { + h5->rx_func = h5_rx_crc; + h5->rx_pending = 2; + } else { + h5_complete_rx_pkt(hu); + h5_reset_rx(h5); + } + + return 0; +} + +static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c) +{ + struct h5 *h5 = hu->priv; + const unsigned char *hdr = h5->rx_skb->data; + + BT_DBG("%s rx: seq %u ack %u crc %u rel %u type %u len %u", + hu->hdev->name, H5_HDR_SEQ(hdr), H5_HDR_ACK(hdr), + H5_HDR_CRC(hdr), H5_HDR_RELIABLE(hdr), H5_HDR_PKT_TYPE(hdr), + H5_HDR_LEN(hdr)); + + if (((hdr[0] + hdr[1] + hdr[2] + hdr[3]) & 0xff) != 0xff) { + BT_ERR("Invalid header checksum"); + h5_reset_rx(h5); + return 0; + } + + if (H5_HDR_RELIABLE(hdr) && H5_HDR_SEQ(hdr) != h5->tx_ack) { + BT_ERR("Out-of-order packet arrived (%u != %u)", + H5_HDR_SEQ(hdr), h5->tx_ack); + h5_reset_rx(h5); + return 0; + } + + if (h5->state != H5_ACTIVE && + H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { + BT_ERR("Non-link packet received in non-active state"); + h5_reset_rx(h5); + } + + h5->rx_func = h5_rx_payload; + h5->rx_pending = H5_HDR_LEN(hdr); + + return 0; +} + +static int h5_rx_pkt_start(struct hci_uart *hu, unsigned char c) +{ + struct h5 *h5 = hu->priv; + + if (c == SLIP_DELIMITER) + return 1; + + h5->rx_func = h5_rx_3wire_hdr; + h5->rx_pending = 4; + + h5->rx_skb = bt_skb_alloc(H5_MAX_LEN, GFP_ATOMIC); + if (!h5->rx_skb) { + BT_ERR("Can't allocate mem for new packet"); + h5_reset_rx(h5); + return -ENOMEM; + } + + h5->rx_skb->dev = (void *) hu->hdev; + + return 0; +} + +static int h5_rx_delimiter(struct hci_uart *hu, unsigned char c) +{ + struct h5 *h5 = hu->priv; + + if 
(c == SLIP_DELIMITER) + h5->rx_func = h5_rx_pkt_start; + + return 1; +} + +static void h5_unslip_one_byte(struct h5 *h5, unsigned char c) +{ + const u8 delim = SLIP_DELIMITER, esc = SLIP_ESC; + const u8 *byte = &c; + + if (!test_bit(H5_RX_ESC, &h5->flags) && c == SLIP_ESC) { + set_bit(H5_RX_ESC, &h5->flags); + return; + } + + if (test_and_clear_bit(H5_RX_ESC, &h5->flags)) { + switch (c) { + case SLIP_ESC_DELIM: + byte = &delim; + break; + case SLIP_ESC_ESC: + byte = &esc; + break; + default: + BT_ERR("Invalid esc byte 0x%02hhx", c); + h5_reset_rx(h5); + return; + } + } + + memcpy(skb_put(h5->rx_skb, 1), byte, 1); + h5->rx_pending--; + + BT_DBG("unsliped 0x%02hhx, rx_pending %zu", *byte, h5->rx_pending); +} + +static void h5_reset_rx(struct h5 *h5) +{ + if (h5->rx_skb) { + kfree_skb(h5->rx_skb); + h5->rx_skb = NULL; + } + + h5->rx_func = h5_rx_delimiter; + h5->rx_pending = 0; + clear_bit(H5_RX_ESC, &h5->flags); +} + +static int h5_recv(struct hci_uart *hu, void *data, int count) +{ + struct h5 *h5 = hu->priv; + unsigned char *ptr = data; + + BT_DBG("%s pending %zu count %d", hu->hdev->name, h5->rx_pending, + count); + + while (count > 0) { + int processed; + + if (h5->rx_pending > 0) { + if (*ptr == SLIP_DELIMITER) { + BT_ERR("Too short H5 packet"); + h5_reset_rx(h5); + continue; + } + + h5_unslip_one_byte(h5, *ptr); + + ptr++; count--; + continue; + } + + processed = h5->rx_func(hu, *ptr); + if (processed < 0) + return processed; + + ptr += processed; + count -= processed; + } + + return 0; +} + +static int h5_enqueue(struct hci_uart *hu, struct sk_buff *skb) +{ + struct h5 *h5 = hu->priv; + + if (skb->len > 0xfff) { + BT_ERR("Packet too long (%u bytes)", skb->len); + kfree_skb(skb); + return 0; + } + + if (h5->state != H5_ACTIVE) { + BT_ERR("Ignoring HCI data in non-active state"); + kfree_skb(skb); + return 0; + } + + switch (bt_cb(skb)->pkt_type) { + case HCI_ACLDATA_PKT: + case HCI_COMMAND_PKT: + skb_queue_tail(&h5->rel, skb); + break; + + case HCI_SCODATA_PKT: + skb_queue_tail(&h5->unrel, skb); + break; + + default: + BT_ERR("Unknown packet type %u", bt_cb(skb)->pkt_type); + kfree_skb(skb); + break; + } + + return 0; +} + +static void h5_slip_delim(struct sk_buff *skb) +{ + const char delim = SLIP_DELIMITER; + + memcpy(skb_put(skb, 1), &delim, 1); +} + +static void h5_slip_one_byte(struct sk_buff *skb, u8 c) +{ + const char esc_delim[2] = { SLIP_ESC, SLIP_ESC_DELIM }; + const char esc_esc[2] = { SLIP_ESC, SLIP_ESC_ESC }; + + switch (c) { + case SLIP_DELIMITER: + memcpy(skb_put(skb, 2), &esc_delim, 2); + break; + case SLIP_ESC: + memcpy(skb_put(skb, 2), &esc_esc, 2); + break; + default: + memcpy(skb_put(skb, 1), &c, 1); + } +} + +static bool valid_packet_type(u8 type) +{ + switch (type) { + case HCI_ACLDATA_PKT: + case HCI_COMMAND_PKT: + case HCI_SCODATA_PKT: + case HCI_3WIRE_LINK_PKT: + case HCI_3WIRE_ACK_PKT: + return true; + default: + return false; + } +} + +static struct sk_buff *h5_prepare_pkt(struct hci_uart *hu, u8 pkt_type, + const u8 *data, size_t len) +{ + struct h5 *h5 = hu->priv; + struct sk_buff *nskb; + u8 hdr[4]; + int i; + + if (!valid_packet_type(pkt_type)) { + BT_ERR("Unknown packet type %u", pkt_type); + return NULL; + } + + /* + * Max len of packet: (original len + 4 (H5 hdr) + 2 (crc)) * 2 + * (because bytes 0xc0 and 0xdb are escaped, worst case is when + * the packet is all made of 0xc0 and 0xdb) + 2 (0xc0 + * delimiters at start and end). 
+ */ + nskb = alloc_skb((len + 6) * 2 + 2, GFP_ATOMIC); + if (!nskb) + return NULL; + + bt_cb(nskb)->pkt_type = pkt_type; + + h5_slip_delim(nskb); + + hdr[0] = h5->tx_ack << 3; + clear_bit(H5_TX_ACK_REQ, &h5->flags); + + /* Reliable packet? */ + if (pkt_type == HCI_ACLDATA_PKT || pkt_type == HCI_COMMAND_PKT) { + hdr[0] |= 1 << 7; + hdr[0] |= h5->tx_seq; + h5->tx_seq = (h5->tx_seq + 1) % 8; + } + + hdr[1] = pkt_type | ((len & 0x0f) << 4); + hdr[2] = len >> 4; + hdr[3] = ~((hdr[0] + hdr[1] + hdr[2]) & 0xff); + + BT_DBG("%s tx: seq %u ack %u crc %u rel %u type %u len %u", + hu->hdev->name, H5_HDR_SEQ(hdr), H5_HDR_ACK(hdr), + H5_HDR_CRC(hdr), H5_HDR_RELIABLE(hdr), H5_HDR_PKT_TYPE(hdr), + H5_HDR_LEN(hdr)); + + for (i = 0; i < 4; i++) + h5_slip_one_byte(nskb, hdr[i]); + + for (i = 0; i < len; i++) + h5_slip_one_byte(nskb, data[i]); + + h5_slip_delim(nskb); + + return nskb; +} + +static struct sk_buff *h5_dequeue(struct hci_uart *hu) +{ + struct h5 *h5 = hu->priv; + unsigned long flags; + struct sk_buff *skb, *nskb; + + if (h5->sleep != H5_AWAKE) { + const unsigned char wakeup_req[] = { 0x05, 0xfa }; + + if (h5->sleep == H5_WAKING_UP) + return NULL; + + h5->sleep = H5_WAKING_UP; + BT_DBG("Sending wakeup request"); + + mod_timer(&h5->timer, jiffies + HZ / 100); + return h5_prepare_pkt(hu, HCI_3WIRE_LINK_PKT, wakeup_req, 2); + } + + if ((skb = skb_dequeue(&h5->unrel)) != NULL) { + nskb = h5_prepare_pkt(hu, bt_cb(skb)->pkt_type, + skb->data, skb->len); + if (nskb) { + kfree_skb(skb); + return nskb; + } + + skb_queue_head(&h5->unrel, skb); + BT_ERR("Could not dequeue pkt because alloc_skb failed"); + } + + spin_lock_irqsave_nested(&h5->unack.lock, flags, SINGLE_DEPTH_NESTING); + + if (h5->unack.qlen >= h5->tx_win) + goto unlock; + + if ((skb = skb_dequeue(&h5->rel)) != NULL) { + nskb = h5_prepare_pkt(hu, bt_cb(skb)->pkt_type, + skb->data, skb->len); + if (nskb) { + __skb_queue_tail(&h5->unack, skb); + mod_timer(&h5->timer, jiffies + H5_ACK_TIMEOUT); + spin_unlock_irqrestore(&h5->unack.lock, flags); + return nskb; + } + + skb_queue_head(&h5->rel, skb); + BT_ERR("Could not dequeue pkt because alloc_skb failed"); + } + +unlock: + spin_unlock_irqrestore(&h5->unack.lock, flags); + + if (test_bit(H5_TX_ACK_REQ, &h5->flags)) + return h5_prepare_pkt(hu, HCI_3WIRE_ACK_PKT, NULL, 0); + + return NULL; +} + +static int h5_flush(struct hci_uart *hu) +{ + BT_DBG("hu %p", hu); + return 0; +} + +static struct hci_uart_proto h5p = { + .id = HCI_UART_3WIRE, + .open = h5_open, + .close = h5_close, + .recv = h5_recv, + .enqueue = h5_enqueue, + .dequeue = h5_dequeue, + .flush = h5_flush, +}; + +int __init h5_init(void) +{ + int err = hci_uart_register_proto(&h5p); + + if (!err) + BT_INFO("HCI Three-wire UART (H5) protocol initialized"); + else + BT_ERR("HCI Three-wire UART (H5) protocol init failed"); + + return err; +} + +int __exit h5_deinit(void) +{ + return hci_uart_unregister_proto(&h5p); +} diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c index e564579a6115..74e0966b3ead 100644 --- a/drivers/bluetooth/hci_ldisc.c +++ b/drivers/bluetooth/hci_ldisc.c @@ -156,6 +156,35 @@ restart: return 0; } +static void hci_uart_init_work(struct work_struct *work) +{ + struct hci_uart *hu = container_of(work, struct hci_uart, init_ready); + int err; + + if (!test_and_clear_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags)) + return; + + err = hci_register_dev(hu->hdev); + if (err < 0) { + BT_ERR("Can't register HCI device"); + hci_free_dev(hu->hdev); + hu->hdev = NULL; + hu->proto->close(hu); + } + + 
set_bit(HCI_UART_REGISTERED, &hu->flags); +} + +int hci_uart_init_ready(struct hci_uart *hu) +{ + if (!test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags)) + return -EALREADY; + + schedule_work(&hu->init_ready); + + return 0; +} + /* ------- Interface to HCI layer ------ */ /* Initialize device */ static int hci_uart_open(struct hci_dev *hdev) @@ -264,6 +293,8 @@ static int hci_uart_tty_open(struct tty_struct *tty) hu->tty = tty; tty->receive_room = 65536; + INIT_WORK(&hu->init_ready, hci_uart_init_work); + spin_lock_init(&hu->rx_lock); /* Flush any pending characters in the driver and line discipline. */ @@ -286,28 +317,30 @@ static int hci_uart_tty_open(struct tty_struct *tty) static void hci_uart_tty_close(struct tty_struct *tty) { struct hci_uart *hu = (void *)tty->disc_data; + struct hci_dev *hdev; BT_DBG("tty %p", tty); /* Detach from the tty */ tty->disc_data = NULL; - if (hu) { - struct hci_dev *hdev = hu->hdev; + if (!hu) + return; - if (hdev) - hci_uart_close(hdev); + hdev = hu->hdev; + if (hdev) + hci_uart_close(hdev); - if (test_and_clear_bit(HCI_UART_PROTO_SET, &hu->flags)) { - if (hdev) { + if (test_and_clear_bit(HCI_UART_PROTO_SET, &hu->flags)) { + if (hdev) { + if (test_bit(HCI_UART_REGISTERED, &hu->flags)) hci_unregister_dev(hdev); - hci_free_dev(hdev); - } - hu->proto->close(hu); + hci_free_dev(hdev); } - - kfree(hu); + hu->proto->close(hu); } + + kfree(hu); } /* hci_uart_tty_wakeup() @@ -394,19 +427,24 @@ static int hci_uart_register_dev(struct hci_uart *hu) set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks); if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags)) - set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks); + set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); if (test_bit(HCI_UART_CREATE_AMP, &hu->hdev_flags)) hdev->dev_type = HCI_AMP; else hdev->dev_type = HCI_BREDR; + if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags)) + return 0; + if (hci_register_dev(hdev) < 0) { BT_ERR("Can't register HCI device"); hci_free_dev(hdev); return -ENODEV; } + set_bit(HCI_UART_REGISTERED, &hu->flags); + return 0; } @@ -558,6 +596,9 @@ static int __init hci_uart_init(void) #ifdef CONFIG_BT_HCIUART_ATH3K ath_init(); #endif +#ifdef CONFIG_BT_HCIUART_3WIRE + h5_init(); +#endif return 0; } @@ -578,6 +619,9 @@ static void __exit hci_uart_exit(void) #ifdef CONFIG_BT_HCIUART_ATH3K ath_deinit(); #endif +#ifdef CONFIG_BT_HCIUART_3WIRE + h5_deinit(); +#endif /* Release tty registration of line discipline */ if ((err = tty_unregister_ldisc(N_HCI))) diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c index b874c0efde24..ff6d589c34a5 100644 --- a/drivers/bluetooth/hci_ll.c +++ b/drivers/bluetooth/hci_ll.c @@ -348,7 +348,7 @@ static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb) static inline int ll_check_data_len(struct ll_struct *ll, int len) { - register int room = skb_tailroom(ll->rx_skb); + int room = skb_tailroom(ll->rx_skb); BT_DBG("len %d room %d", len, room); @@ -374,11 +374,11 @@ static inline int ll_check_data_len(struct ll_struct *ll, int len) static int ll_recv(struct hci_uart *hu, void *data, int count) { struct ll_struct *ll = hu->priv; - register char *ptr; + char *ptr; struct hci_event_hdr *eh; struct hci_acl_hdr *ah; struct hci_sco_hdr *sh; - register int len, type, dlen; + int len, type, dlen; BT_DBG("hu %p count %d rx_state %ld rx_count %ld", hu, count, ll->rx_state, ll->rx_count); diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h index 6cf6ab22ad21..fffa61ff5cb1 100644 --- a/drivers/bluetooth/hci_uart.h +++ b/drivers/bluetooth/hci_uart.h 
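The hci_h5.c driver added above packs each frame behind a four-byte Three-wire header: sequence, ack and flag bits in byte 0, the packet type plus a 12-bit length split across bytes 1 and 2, and a one's-complement sum of the first three bytes in byte 3; h5_rx_3wire_hdr() drops any frame whose four header bytes do not sum to 0xff. The standalone C sketch below illustrates only that arithmetic (it is not part of the patch, and the example field values are made up), mirroring h5_prepare_pkt() and the H5_HDR_* macros:

/*
 * Illustration only, not part of the patch: pack a Three-wire header
 * the way h5_prepare_pkt() does and verify it the way
 * h5_rx_3wire_hdr() does.  Build with: cc -o h5hdr h5hdr.c
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t hdr[4];
	unsigned seq = 3, ack = 5, type = 2 /* HCI_ACLDATA_PKT */, len = 0x123;

	hdr[0] = (ack & 0x07) << 3;		/* ack in bits 3..5 */
	hdr[0] |= 1 << 7;			/* reliable packet */
	hdr[0] |= seq & 0x07;			/* seq in bits 0..2 */
	hdr[1] = (type & 0x0f) | ((len & 0x0f) << 4);
	hdr[2] = len >> 4;			/* upper 8 bits of the 12-bit len */
	hdr[3] = ~((hdr[0] + hdr[1] + hdr[2]) & 0xff);

	/* Receiver-side check from h5_rx_3wire_hdr(): bytes must sum to 0xff */
	if (((hdr[0] + hdr[1] + hdr[2] + hdr[3]) & 0xff) != 0xff) {
		fprintf(stderr, "Invalid header checksum\n");
		return 1;
	}

	/* Re-extract the fields using the H5_HDR_* recipes */
	printf("seq %u ack %u rel %u type %u len 0x%03x\n",
	       (unsigned)(hdr[0] & 0x07), (unsigned)((hdr[0] >> 3) & 0x07),
	       (unsigned)((hdr[0] >> 7) & 0x01), (unsigned)(hdr[1] & 0x0f),
	       (unsigned)(((hdr[1] >> 4) & 0xff) + (hdr[2] << 4)));
	return 0;
}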
@@ -47,6 +47,7 @@ #define HCI_UART_RAW_DEVICE 0 #define HCI_UART_RESET_ON_INIT 1 #define HCI_UART_CREATE_AMP 2 +#define HCI_UART_INIT_PENDING 3 struct hci_uart; @@ -66,6 +67,8 @@ struct hci_uart { unsigned long flags; unsigned long hdev_flags; + struct work_struct init_ready; + struct hci_uart_proto *proto; void *priv; @@ -76,6 +79,7 @@ struct hci_uart { /* HCI_UART proto flag bits */ #define HCI_UART_PROTO_SET 0 +#define HCI_UART_REGISTERED 1 /* TX states */ #define HCI_UART_SENDING 1 @@ -84,6 +88,7 @@ struct hci_uart { int hci_uart_register_proto(struct hci_uart_proto *p); int hci_uart_unregister_proto(struct hci_uart_proto *p); int hci_uart_tx_wakeup(struct hci_uart *hu); +int hci_uart_init_ready(struct hci_uart *hu); #ifdef CONFIG_BT_HCIUART_H4 int h4_init(void); @@ -104,3 +109,8 @@ int ll_deinit(void); int ath_init(void); int ath_deinit(void); #endif + +#ifdef CONFIG_BT_HCIUART_3WIRE +int h5_init(void); +int h5_deinit(void); +#endif diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c index 77e1e6cd66ce..3e92b7d3fcd2 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -46,7 +46,7 @@ static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 }; static inline void get_seq(__u32 *ts, int *cpu) { preempt_disable(); - *ts = __this_cpu_inc_return(proc_event_counts) -1; + *ts = __this_cpu_inc_return(proc_event_counts) - 1; *cpu = smp_processor_id(); preempt_enable(); } @@ -62,8 +62,8 @@ void proc_fork_connector(struct task_struct *task) if (atomic_read(&proc_event_num_listeners) < 1) return; - msg = (struct cn_msg*)buffer; - ev = (struct proc_event*)msg->data; + msg = (struct cn_msg *)buffer; + ev = (struct proc_event *)msg->data; get_seq(&msg->seq, &ev->cpu); ktime_get_ts(&ts); /* get high res monotonic timestamp */ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns); @@ -93,8 +93,8 @@ void proc_exec_connector(struct task_struct *task) if (atomic_read(&proc_event_num_listeners) < 1) return; - msg = (struct cn_msg*)buffer; - ev = (struct proc_event*)msg->data; + msg = (struct cn_msg *)buffer; + ev = (struct proc_event *)msg->data; get_seq(&msg->seq, &ev->cpu); ktime_get_ts(&ts); /* get high res monotonic timestamp */ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns); @@ -119,8 +119,8 @@ void proc_id_connector(struct task_struct *task, int which_id) if (atomic_read(&proc_event_num_listeners) < 1) return; - msg = (struct cn_msg*)buffer; - ev = (struct proc_event*)msg->data; + msg = (struct cn_msg *)buffer; + ev = (struct proc_event *)msg->data; ev->what = which_id; ev->event_data.id.process_pid = task->pid; ev->event_data.id.process_tgid = task->tgid; @@ -134,7 +134,7 @@ void proc_id_connector(struct task_struct *task, int which_id) ev->event_data.id.e.egid = cred->egid; } else { rcu_read_unlock(); - return; + return; } rcu_read_unlock(); get_seq(&msg->seq, &ev->cpu); @@ -241,8 +241,8 @@ void proc_exit_connector(struct task_struct *task) if (atomic_read(&proc_event_num_listeners) < 1) return; - msg = (struct cn_msg*)buffer; - ev = (struct proc_event*)msg->data; + msg = (struct cn_msg *)buffer; + ev = (struct proc_event *)msg->data; get_seq(&msg->seq, &ev->cpu); ktime_get_ts(&ts); /* get high res monotonic timestamp */ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns); @@ -276,8 +276,8 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack) if (atomic_read(&proc_event_num_listeners) < 1) return; - msg = (struct cn_msg*)buffer; - ev = (struct proc_event*)msg->data; + msg = (struct cn_msg *)buffer; + ev = (struct 
proc_event *)msg->data; msg->seq = rcvd_seq; ktime_get_ts(&ts); /* get high res monotonic timestamp */ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns); @@ -303,7 +303,7 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, if (msg->len != sizeof(*mc_op)) return; - mc_op = (enum proc_cn_mcast_op*)msg->data; + mc_op = (enum proc_cn_mcast_op *)msg->data; switch (*mc_op) { case PROC_CN_MCAST_LISTEN: atomic_inc(&proc_event_num_listeners); @@ -325,11 +325,11 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, */ static int __init cn_proc_init(void) { - int err; - - if ((err = cn_add_callback(&cn_proc_event_id, "cn_proc", - &cn_proc_mcast_ctl))) { - printk(KERN_WARNING "cn_proc failed to register\n"); + int err = cn_add_callback(&cn_proc_event_id, + "cn_proc", + &cn_proc_mcast_ctl); + if (err) { + pr_warn("cn_proc failed to register\n"); return err; } return 0; diff --git a/drivers/connector/cn_queue.c b/drivers/connector/cn_queue.c index c42c9d517790..1f8bf054d11c 100644 --- a/drivers/connector/cn_queue.c +++ b/drivers/connector/cn_queue.c @@ -1,5 +1,5 @@ /* - * cn_queue.c + * cn_queue.c * * 2004+ Copyright (c) Evgeniy Polyakov <zbr@ioremap.net> * All rights reserved. @@ -34,13 +34,14 @@ static struct cn_callback_entry * cn_queue_alloc_callback_entry(struct cn_queue_dev *dev, const char *name, struct cb_id *id, - void (*callback)(struct cn_msg *, struct netlink_skb_parms *)) + void (*callback)(struct cn_msg *, + struct netlink_skb_parms *)) { struct cn_callback_entry *cbq; cbq = kzalloc(sizeof(*cbq), GFP_KERNEL); if (!cbq) { - printk(KERN_ERR "Failed to create new callback queue.\n"); + pr_err("Failed to create new callback queue.\n"); return NULL; } @@ -71,7 +72,8 @@ int cn_cb_equal(struct cb_id *i1, struct cb_id *i2) int cn_queue_add_callback(struct cn_queue_dev *dev, const char *name, struct cb_id *id, - void (*callback)(struct cn_msg *, struct netlink_skb_parms *)) + void (*callback)(struct cn_msg *, + struct netlink_skb_parms *)) { struct cn_callback_entry *cbq, *__cbq; int found = 0; @@ -149,7 +151,7 @@ void cn_queue_free_dev(struct cn_queue_dev *dev) spin_unlock_bh(&dev->queue_lock); while (atomic_read(&dev->refcnt)) { - printk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n", + pr_info("Waiting for %s to become free: refcnt=%d.\n", dev->name, atomic_read(&dev->refcnt)); msleep(1000); } diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c index dde6a0fad408..82fa4f0f91d6 100644 --- a/drivers/connector/connector.c +++ b/drivers/connector/connector.c @@ -1,5 +1,5 @@ /* - * connector.c + * connector.c * * 2004+ Copyright (c) Evgeniy Polyakov <zbr@ioremap.net> * All rights reserved. @@ -101,19 +101,19 @@ int cn_netlink_send(struct cn_msg *msg, u32 __group, gfp_t gfp_mask) if (!skb) return -ENOMEM; - nlh = NLMSG_PUT(skb, 0, msg->seq, NLMSG_DONE, size - sizeof(*nlh)); + nlh = nlmsg_put(skb, 0, msg->seq, NLMSG_DONE, size - sizeof(*nlh), 0); + if (!nlh) { + kfree_skb(skb); + return -EMSGSIZE; + } - data = NLMSG_DATA(nlh); + data = nlmsg_data(nlh); memcpy(data, msg, sizeof(*data) + msg->len); NETLINK_CB(skb).dst_group = group; return netlink_broadcast(dev->nls, skb, 0, group, gfp_mask); - -nlmsg_failure: - kfree_skb(skb); - return -EINVAL; } EXPORT_SYMBOL_GPL(cn_netlink_send); @@ -185,7 +185,8 @@ static void cn_rx_skb(struct sk_buff *__skb) * May sleep. 
*/ int cn_add_callback(struct cb_id *id, const char *name, - void (*callback)(struct cn_msg *, struct netlink_skb_parms *)) + void (*callback)(struct cn_msg *, + struct netlink_skb_parms *)) { int err; struct cn_dev *dev = &cdev; @@ -251,15 +252,20 @@ static const struct file_operations cn_file_ops = { .release = single_release }; +static struct cn_dev cdev = { + .input = cn_rx_skb, +}; + static int __devinit cn_init(void) { struct cn_dev *dev = &cdev; - - dev->input = cn_rx_skb; + struct netlink_kernel_cfg cfg = { + .groups = CN_NETLINK_USERS + 0xf, + .input = dev->input, + }; dev->nls = netlink_kernel_create(&init_net, NETLINK_CONNECTOR, - CN_NETLINK_USERS + 0xf, - dev->input, NULL, THIS_MODULE); + THIS_MODULE, &cfg); if (!dev->nls) return -EIO; diff --git a/drivers/ieee802154/Kconfig b/drivers/ieee802154/Kconfig index 15c064073701..1fc4eefc20ed 100644 --- a/drivers/ieee802154/Kconfig +++ b/drivers/ieee802154/Kconfig @@ -19,6 +19,7 @@ config IEEE802154_FAKEHARD This driver can also be built as a module. To do so say M here. The module will be called 'fakehard'. + config IEEE802154_FAKELB depends on IEEE802154_DRIVERS && MAC802154 tristate "IEEE 802.15.4 loopback driver" @@ -28,3 +29,8 @@ config IEEE802154_FAKELB This driver can also be built as a module. To do so say M here. The module will be called 'fakelb'. + +config IEEE802154_AT86RF230 + depends on IEEE802154_DRIVERS && MAC802154 + tristate "AT86RF230/231 transceiver driver" + depends on SPI diff --git a/drivers/ieee802154/Makefile b/drivers/ieee802154/Makefile index ea784ea6f0f8..4f4371d3aa7d 100644 --- a/drivers/ieee802154/Makefile +++ b/drivers/ieee802154/Makefile @@ -1,2 +1,3 @@ obj-$(CONFIG_IEEE802154_FAKEHARD) += fakehard.o obj-$(CONFIG_IEEE802154_FAKELB) += fakelb.o +obj-$(CONFIG_IEEE802154_AT86RF230) += at86rf230.o diff --git a/drivers/ieee802154/at86rf230.c b/drivers/ieee802154/at86rf230.c new file mode 100644 index 000000000000..5d309408395d --- /dev/null +++ b/drivers/ieee802154/at86rf230.c @@ -0,0 +1,968 @@ +/* + * AT86RF230/RF231 driver + * + * Copyright (C) 2009-2012 Siemens AG + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
+ * + * Written by: + * Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> + * Alexander Smirnov <alex.bluesman.smirnov@gmail.com> + */ +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/interrupt.h> +#include <linux/gpio.h> +#include <linux/delay.h> +#include <linux/mutex.h> +#include <linux/workqueue.h> +#include <linux/spinlock.h> +#include <linux/spi/spi.h> +#include <linux/spi/at86rf230.h> +#include <linux/skbuff.h> + +#include <net/mac802154.h> +#include <net/wpan-phy.h> + +struct at86rf230_local { + struct spi_device *spi; + int rstn, slp_tr, dig2; + + u8 part; + u8 vers; + + u8 buf[2]; + struct mutex bmux; + + struct work_struct irqwork; + struct completion tx_complete; + + struct ieee802154_dev *dev; + + spinlock_t lock; + bool irq_disabled; + bool is_tx; +}; + +#define RG_TRX_STATUS (0x01) +#define SR_TRX_STATUS 0x01, 0x1f, 0 +#define SR_RESERVED_01_3 0x01, 0x20, 5 +#define SR_CCA_STATUS 0x01, 0x40, 6 +#define SR_CCA_DONE 0x01, 0x80, 7 +#define RG_TRX_STATE (0x02) +#define SR_TRX_CMD 0x02, 0x1f, 0 +#define SR_TRAC_STATUS 0x02, 0xe0, 5 +#define RG_TRX_CTRL_0 (0x03) +#define SR_CLKM_CTRL 0x03, 0x07, 0 +#define SR_CLKM_SHA_SEL 0x03, 0x08, 3 +#define SR_PAD_IO_CLKM 0x03, 0x30, 4 +#define SR_PAD_IO 0x03, 0xc0, 6 +#define RG_TRX_CTRL_1 (0x04) +#define SR_IRQ_POLARITY 0x04, 0x01, 0 +#define SR_IRQ_MASK_MODE 0x04, 0x02, 1 +#define SR_SPI_CMD_MODE 0x04, 0x0c, 2 +#define SR_RX_BL_CTRL 0x04, 0x10, 4 +#define SR_TX_AUTO_CRC_ON 0x04, 0x20, 5 +#define SR_IRQ_2_EXT_EN 0x04, 0x40, 6 +#define SR_PA_EXT_EN 0x04, 0x80, 7 +#define RG_PHY_TX_PWR (0x05) +#define SR_TX_PWR 0x05, 0x0f, 0 +#define SR_PA_LT 0x05, 0x30, 4 +#define SR_PA_BUF_LT 0x05, 0xc0, 6 +#define RG_PHY_RSSI (0x06) +#define SR_RSSI 0x06, 0x1f, 0 +#define SR_RND_VALUE 0x06, 0x60, 5 +#define SR_RX_CRC_VALID 0x06, 0x80, 7 +#define RG_PHY_ED_LEVEL (0x07) +#define SR_ED_LEVEL 0x07, 0xff, 0 +#define RG_PHY_CC_CCA (0x08) +#define SR_CHANNEL 0x08, 0x1f, 0 +#define SR_CCA_MODE 0x08, 0x60, 5 +#define SR_CCA_REQUEST 0x08, 0x80, 7 +#define RG_CCA_THRES (0x09) +#define SR_CCA_ED_THRES 0x09, 0x0f, 0 +#define SR_RESERVED_09_1 0x09, 0xf0, 4 +#define RG_RX_CTRL (0x0a) +#define SR_PDT_THRES 0x0a, 0x0f, 0 +#define SR_RESERVED_0a_1 0x0a, 0xf0, 4 +#define RG_SFD_VALUE (0x0b) +#define SR_SFD_VALUE 0x0b, 0xff, 0 +#define RG_TRX_CTRL_2 (0x0c) +#define SR_OQPSK_DATA_RATE 0x0c, 0x03, 0 +#define SR_RESERVED_0c_2 0x0c, 0x7c, 2 +#define SR_RX_SAFE_MODE 0x0c, 0x80, 7 +#define RG_ANT_DIV (0x0d) +#define SR_ANT_CTRL 0x0d, 0x03, 0 +#define SR_ANT_EXT_SW_EN 0x0d, 0x04, 2 +#define SR_ANT_DIV_EN 0x0d, 0x08, 3 +#define SR_RESERVED_0d_2 0x0d, 0x70, 4 +#define SR_ANT_SEL 0x0d, 0x80, 7 +#define RG_IRQ_MASK (0x0e) +#define SR_IRQ_MASK 0x0e, 0xff, 0 +#define RG_IRQ_STATUS (0x0f) +#define SR_IRQ_0_PLL_LOCK 0x0f, 0x01, 0 +#define SR_IRQ_1_PLL_UNLOCK 0x0f, 0x02, 1 +#define SR_IRQ_2_RX_START 0x0f, 0x04, 2 +#define SR_IRQ_3_TRX_END 0x0f, 0x08, 3 +#define SR_IRQ_4_CCA_ED_DONE 0x0f, 0x10, 4 +#define SR_IRQ_5_AMI 0x0f, 0x20, 5 +#define SR_IRQ_6_TRX_UR 0x0f, 0x40, 6 +#define SR_IRQ_7_BAT_LOW 0x0f, 0x80, 7 +#define RG_VREG_CTRL (0x10) +#define SR_RESERVED_10_6 0x10, 0x03, 0 +#define SR_DVDD_OK 0x10, 0x04, 2 +#define SR_DVREG_EXT 0x10, 0x08, 3 +#define SR_RESERVED_10_3 0x10, 0x30, 4 +#define SR_AVDD_OK 0x10, 0x40, 6 +#define SR_AVREG_EXT 0x10, 0x80, 7 +#define RG_BATMON (0x11) +#define SR_BATMON_VTH 0x11, 0x0f, 0 +#define SR_BATMON_HR 0x11, 0x10, 4 +#define SR_BATMON_OK 0x11, 0x20, 5 +#define SR_RESERVED_11_1 0x11, 0xc0, 6 +#define RG_XOSC_CTRL (0x12) +#define SR_XTAL_TRIM 
0x12, 0x0f, 0 +#define SR_XTAL_MODE 0x12, 0xf0, 4 +#define RG_RX_SYN (0x15) +#define SR_RX_PDT_LEVEL 0x15, 0x0f, 0 +#define SR_RESERVED_15_2 0x15, 0x70, 4 +#define SR_RX_PDT_DIS 0x15, 0x80, 7 +#define RG_XAH_CTRL_1 (0x17) +#define SR_RESERVED_17_8 0x17, 0x01, 0 +#define SR_AACK_PROM_MODE 0x17, 0x02, 1 +#define SR_AACK_ACK_TIME 0x17, 0x04, 2 +#define SR_RESERVED_17_5 0x17, 0x08, 3 +#define SR_AACK_UPLD_RES_FT 0x17, 0x10, 4 +#define SR_AACK_FLTR_RES_FT 0x17, 0x20, 5 +#define SR_RESERVED_17_2 0x17, 0x40, 6 +#define SR_RESERVED_17_1 0x17, 0x80, 7 +#define RG_FTN_CTRL (0x18) +#define SR_RESERVED_18_2 0x18, 0x7f, 0 +#define SR_FTN_START 0x18, 0x80, 7 +#define RG_PLL_CF (0x1a) +#define SR_RESERVED_1a_2 0x1a, 0x7f, 0 +#define SR_PLL_CF_START 0x1a, 0x80, 7 +#define RG_PLL_DCU (0x1b) +#define SR_RESERVED_1b_3 0x1b, 0x3f, 0 +#define SR_RESERVED_1b_2 0x1b, 0x40, 6 +#define SR_PLL_DCU_START 0x1b, 0x80, 7 +#define RG_PART_NUM (0x1c) +#define SR_PART_NUM 0x1c, 0xff, 0 +#define RG_VERSION_NUM (0x1d) +#define SR_VERSION_NUM 0x1d, 0xff, 0 +#define RG_MAN_ID_0 (0x1e) +#define SR_MAN_ID_0 0x1e, 0xff, 0 +#define RG_MAN_ID_1 (0x1f) +#define SR_MAN_ID_1 0x1f, 0xff, 0 +#define RG_SHORT_ADDR_0 (0x20) +#define SR_SHORT_ADDR_0 0x20, 0xff, 0 +#define RG_SHORT_ADDR_1 (0x21) +#define SR_SHORT_ADDR_1 0x21, 0xff, 0 +#define RG_PAN_ID_0 (0x22) +#define SR_PAN_ID_0 0x22, 0xff, 0 +#define RG_PAN_ID_1 (0x23) +#define SR_PAN_ID_1 0x23, 0xff, 0 +#define RG_IEEE_ADDR_0 (0x24) +#define SR_IEEE_ADDR_0 0x24, 0xff, 0 +#define RG_IEEE_ADDR_1 (0x25) +#define SR_IEEE_ADDR_1 0x25, 0xff, 0 +#define RG_IEEE_ADDR_2 (0x26) +#define SR_IEEE_ADDR_2 0x26, 0xff, 0 +#define RG_IEEE_ADDR_3 (0x27) +#define SR_IEEE_ADDR_3 0x27, 0xff, 0 +#define RG_IEEE_ADDR_4 (0x28) +#define SR_IEEE_ADDR_4 0x28, 0xff, 0 +#define RG_IEEE_ADDR_5 (0x29) +#define SR_IEEE_ADDR_5 0x29, 0xff, 0 +#define RG_IEEE_ADDR_6 (0x2a) +#define SR_IEEE_ADDR_6 0x2a, 0xff, 0 +#define RG_IEEE_ADDR_7 (0x2b) +#define SR_IEEE_ADDR_7 0x2b, 0xff, 0 +#define RG_XAH_CTRL_0 (0x2c) +#define SR_SLOTTED_OPERATION 0x2c, 0x01, 0 +#define SR_MAX_CSMA_RETRIES 0x2c, 0x0e, 1 +#define SR_MAX_FRAME_RETRIES 0x2c, 0xf0, 4 +#define RG_CSMA_SEED_0 (0x2d) +#define SR_CSMA_SEED_0 0x2d, 0xff, 0 +#define RG_CSMA_SEED_1 (0x2e) +#define SR_CSMA_SEED_1 0x2e, 0x07, 0 +#define SR_AACK_I_AM_COORD 0x2e, 0x08, 3 +#define SR_AACK_DIS_ACK 0x2e, 0x10, 4 +#define SR_AACK_SET_PD 0x2e, 0x20, 5 +#define SR_AACK_FVN_MODE 0x2e, 0xc0, 6 +#define RG_CSMA_BE (0x2f) +#define SR_MIN_BE 0x2f, 0x0f, 0 +#define SR_MAX_BE 0x2f, 0xf0, 4 + +#define CMD_REG 0x80 +#define CMD_REG_MASK 0x3f +#define CMD_WRITE 0x40 +#define CMD_FB 0x20 + +#define IRQ_BAT_LOW (1 << 7) +#define IRQ_TRX_UR (1 << 6) +#define IRQ_AMI (1 << 5) +#define IRQ_CCA_ED (1 << 4) +#define IRQ_TRX_END (1 << 3) +#define IRQ_RX_START (1 << 2) +#define IRQ_PLL_UNL (1 << 1) +#define IRQ_PLL_LOCK (1 << 0) + +#define STATE_P_ON 0x00 /* BUSY */ +#define STATE_BUSY_RX 0x01 +#define STATE_BUSY_TX 0x02 +#define STATE_FORCE_TRX_OFF 0x03 +#define STATE_FORCE_TX_ON 0x04 /* IDLE */ +/* 0x05 */ /* INVALID_PARAMETER */ +#define STATE_RX_ON 0x06 +/* 0x07 */ /* SUCCESS */ +#define STATE_TRX_OFF 0x08 +#define STATE_TX_ON 0x09 +/* 0x0a - 0x0e */ /* 0x0a - UNSUPPORTED_ATTRIBUTE */ +#define STATE_SLEEP 0x0F +#define STATE_BUSY_RX_AACK 0x11 +#define STATE_BUSY_TX_ARET 0x12 +#define STATE_BUSY_RX_AACK_ON 0x16 +#define STATE_BUSY_TX_ARET_ON 0x19 +#define STATE_RX_ON_NOCLK 0x1C +#define STATE_RX_AACK_ON_NOCLK 0x1D +#define STATE_BUSY_RX_AACK_NOCLK 0x1E +#define STATE_TRANSITION_IN_PROGRESS 0x1F + 
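The register map above describes every at86rf230 subregister as an (address, mask, shift) triplet hidden behind a single SR_* macro, and at86rf230_write_subreg() further down folds a new field value into the register with a read-modify-write. The standalone sketch below illustrates just that masking step (not part of the driver; the in-memory register file and the chosen field values are purely for the demo), reusing the SR_CHANNEL and SR_CCA_MODE triplets as defined above:

/*
 * Illustration only, not part of the driver: model the (addr, mask,
 * shift) subregister convention and the read-modify-write performed
 * by at86rf230_write_subreg().  Build with: cc -o subreg subreg.c
 */
#include <stdio.h>
#include <stdint.h>

/* Same shape as the driver's triplets for register PHY_CC_CCA (0x08) */
#define SR_CHANNEL	0x08, 0x1f, 0
#define SR_CCA_MODE	0x08, 0x60, 5

static uint8_t regs[0x30];	/* fake register file, demo only */

static void write_subreg(uint8_t addr, uint8_t mask, int shift, uint8_t data)
{
	uint8_t val = regs[addr];	/* read the whole register */

	val &= ~mask;			/* clear the field */
	val |= (data << shift) & mask;	/* insert the new value */
	regs[addr] = val;		/* write it back */
}

int main(void)
{
	write_subreg(SR_CHANNEL, 15);	/* channel 15 into bits 0..4 */
	write_subreg(SR_CCA_MODE, 2);	/* CCA mode 2 into bits 5..6 */
	printf("PHY_CC_CCA = 0x%02x\n", regs[0x08]);	/* prints 0x4f */
	return 0;
}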
+static int +__at86rf230_write(struct at86rf230_local *lp, u8 addr, u8 data) +{ + u8 *buf = lp->buf; + int status; + struct spi_message msg; + struct spi_transfer xfer = { + .len = 2, + .tx_buf = buf, + }; + + buf[0] = (addr & CMD_REG_MASK) | CMD_REG | CMD_WRITE; + buf[1] = data; + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + spi_message_init(&msg); + spi_message_add_tail(&xfer, &msg); + + status = spi_sync(lp->spi, &msg); + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + if (msg.status) + status = msg.status; + + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + + return status; +} + +static int +__at86rf230_read_subreg(struct at86rf230_local *lp, + u8 addr, u8 mask, int shift, u8 *data) +{ + u8 *buf = lp->buf; + int status; + struct spi_message msg; + struct spi_transfer xfer = { + .len = 2, + .tx_buf = buf, + .rx_buf = buf, + }; + + buf[0] = (addr & CMD_REG_MASK) | CMD_REG; + buf[1] = 0xff; + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + spi_message_init(&msg); + spi_message_add_tail(&xfer, &msg); + + status = spi_sync(lp->spi, &msg); + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + if (msg.status) + status = msg.status; + + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + + if (status == 0) + *data = buf[1]; + + return status; +} + +static int +at86rf230_read_subreg(struct at86rf230_local *lp, + u8 addr, u8 mask, int shift, u8 *data) +{ + int status; + + mutex_lock(&lp->bmux); + status = __at86rf230_read_subreg(lp, addr, mask, shift, data); + mutex_unlock(&lp->bmux); + + return status; +} + +static int +at86rf230_write_subreg(struct at86rf230_local *lp, + u8 addr, u8 mask, int shift, u8 data) +{ + int status; + u8 val; + + mutex_lock(&lp->bmux); + status = __at86rf230_read_subreg(lp, addr, 0xff, 0, &val); + if (status) + goto out; + + val &= ~mask; + val |= (data << shift) & mask; + + status = __at86rf230_write(lp, addr, val); +out: + mutex_unlock(&lp->bmux); + + return status; +} + +static int +at86rf230_write_fbuf(struct at86rf230_local *lp, u8 *data, u8 len) +{ + u8 *buf = lp->buf; + int status; + struct spi_message msg; + struct spi_transfer xfer_head = { + .len = 2, + .tx_buf = buf, + + }; + struct spi_transfer xfer_buf = { + .len = len, + .tx_buf = data, + }; + + mutex_lock(&lp->bmux); + buf[0] = CMD_WRITE | CMD_FB; + buf[1] = len + 2; /* 2 bytes for CRC that isn't written */ + + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + + spi_message_init(&msg); + spi_message_add_tail(&xfer_head, &msg); + spi_message_add_tail(&xfer_buf, &msg); + + status = spi_sync(lp->spi, &msg); + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + if (msg.status) + status = msg.status; + + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + + mutex_unlock(&lp->bmux); + return status; +} + +static int +at86rf230_read_fbuf(struct at86rf230_local *lp, u8 *data, u8 *len, u8 *lqi) +{ + u8 *buf = lp->buf; + int status; + struct spi_message msg; + struct spi_transfer xfer_head = { + .len = 2, + .tx_buf = buf, + .rx_buf = buf, + }; + struct spi_transfer xfer_head1 = { + .len = 2, + .tx_buf = buf, + .rx_buf = buf, + }; + struct spi_transfer xfer_buf = 
{ + .len = 0, + .rx_buf = data, + }; + + mutex_lock(&lp->bmux); + + buf[0] = CMD_FB; + buf[1] = 0x00; + + spi_message_init(&msg); + spi_message_add_tail(&xfer_head, &msg); + + status = spi_sync(lp->spi, &msg); + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + + xfer_buf.len = *(buf + 1) + 1; + *len = buf[1]; + + buf[0] = CMD_FB; + buf[1] = 0x00; + + spi_message_init(&msg); + spi_message_add_tail(&xfer_head1, &msg); + spi_message_add_tail(&xfer_buf, &msg); + + status = spi_sync(lp->spi, &msg); + + if (msg.status) + status = msg.status; + + dev_vdbg(&lp->spi->dev, "status = %d\n", status); + dev_vdbg(&lp->spi->dev, "buf[0] = %02x\n", buf[0]); + dev_vdbg(&lp->spi->dev, "buf[1] = %02x\n", buf[1]); + + if (status) { + if (lqi && (*len > lp->buf[1])) + *lqi = data[lp->buf[1]]; + } + mutex_unlock(&lp->bmux); + + return status; +} + +static int +at86rf230_ed(struct ieee802154_dev *dev, u8 *level) +{ + might_sleep(); + BUG_ON(!level); + *level = 0xbe; + return 0; +} + +static int +at86rf230_state(struct ieee802154_dev *dev, int state) +{ + struct at86rf230_local *lp = dev->priv; + int rc; + u8 val; + u8 desired_status; + + might_sleep(); + + if (state == STATE_FORCE_TX_ON) + desired_status = STATE_TX_ON; + else if (state == STATE_FORCE_TRX_OFF) + desired_status = STATE_TRX_OFF; + else + desired_status = state; + + do { + rc = at86rf230_read_subreg(lp, SR_TRX_STATUS, &val); + if (rc) + goto err; + } while (val == STATE_TRANSITION_IN_PROGRESS); + + if (val == desired_status) + return 0; + + /* state is equal to phy states */ + rc = at86rf230_write_subreg(lp, SR_TRX_CMD, state); + if (rc) + goto err; + + do { + rc = at86rf230_read_subreg(lp, SR_TRX_STATUS, &val); + if (rc) + goto err; + } while (val == STATE_TRANSITION_IN_PROGRESS); + + + if (val == desired_status) + return 0; + + pr_err("unexpected state change: %d, asked for %d\n", val, state); + return -EBUSY; + +err: + pr_err("error: %d\n", rc); + return rc; +} + +static int +at86rf230_start(struct ieee802154_dev *dev) +{ + struct at86rf230_local *lp = dev->priv; + u8 rc; + + rc = at86rf230_write_subreg(lp, SR_RX_SAFE_MODE, 1); + if (rc) + return rc; + + return at86rf230_state(dev, STATE_RX_ON); +} + +static void +at86rf230_stop(struct ieee802154_dev *dev) +{ + at86rf230_state(dev, STATE_FORCE_TRX_OFF); +} + +static int +at86rf230_channel(struct ieee802154_dev *dev, int page, int channel) +{ + struct at86rf230_local *lp = dev->priv; + int rc; + + might_sleep(); + + if (page != 0 || channel < 11 || channel > 26) { + WARN_ON(1); + return -EINVAL; + } + + rc = at86rf230_write_subreg(lp, SR_CHANNEL, channel); + msleep(1); /* Wait for PLL */ + dev->phy->current_channel = channel; + + return 0; +} + +static int +at86rf230_xmit(struct ieee802154_dev *dev, struct sk_buff *skb) +{ + struct at86rf230_local *lp = dev->priv; + int rc; + unsigned long flags; + + spin_lock(&lp->lock); + if (lp->irq_disabled) { + spin_unlock(&lp->lock); + return -EBUSY; + } + spin_unlock(&lp->lock); + + might_sleep(); + + rc = at86rf230_state(dev, STATE_FORCE_TX_ON); + if (rc) + goto err; + + spin_lock_irqsave(&lp->lock, flags); + lp->is_tx = 1; + INIT_COMPLETION(lp->tx_complete); + spin_unlock_irqrestore(&lp->lock, flags); + + rc = at86rf230_write_fbuf(lp, skb->data, skb->len); + if (rc) + goto err_rx; + + rc = at86rf230_write_subreg(lp, SR_TRX_CMD, STATE_BUSY_TX); + if (rc) + goto err_rx; + + rc = wait_for_completion_interruptible(&lp->tx_complete); + if (rc < 0) + goto err_rx; + + rc = at86rf230_start(dev); + + return rc; + +err_rx: + at86rf230_start(dev); +err: + 
pr_err("error: %d\n", rc); + + spin_lock_irqsave(&lp->lock, flags); + lp->is_tx = 0; + spin_unlock_irqrestore(&lp->lock, flags); + + return rc; +} + +static int at86rf230_rx(struct at86rf230_local *lp) +{ + u8 len = 128, lqi = 0; + struct sk_buff *skb; + + skb = alloc_skb(len, GFP_KERNEL); + + if (!skb) + return -ENOMEM; + + if (at86rf230_read_fbuf(lp, skb_put(skb, len), &len, &lqi)) + goto err; + + if (len < 2) + goto err; + + skb_trim(skb, len - 2); /* We do not put CRC into the frame */ + + ieee802154_rx_irqsafe(lp->dev, skb, lqi); + + dev_dbg(&lp->spi->dev, "READ_FBUF: %d %x\n", len, lqi); + + return 0; +err: + pr_debug("received frame is too small\n"); + + kfree_skb(skb); + return -EINVAL; +} + +static struct ieee802154_ops at86rf230_ops = { + .owner = THIS_MODULE, + .xmit = at86rf230_xmit, + .ed = at86rf230_ed, + .set_channel = at86rf230_channel, + .start = at86rf230_start, + .stop = at86rf230_stop, +}; + +static void at86rf230_irqwork(struct work_struct *work) +{ + struct at86rf230_local *lp = + container_of(work, struct at86rf230_local, irqwork); + u8 status = 0, val; + int rc; + unsigned long flags; + + rc = at86rf230_read_subreg(lp, RG_IRQ_STATUS, 0xff, 0, &val); + status |= val; + + status &= ~IRQ_PLL_LOCK; /* ignore */ + status &= ~IRQ_RX_START; /* ignore */ + status &= ~IRQ_AMI; /* ignore */ + status &= ~IRQ_TRX_UR; /* FIXME: possibly handle ???*/ + + if (status & IRQ_TRX_END) { + spin_lock_irqsave(&lp->lock, flags); + status &= ~IRQ_TRX_END; + if (lp->is_tx) { + lp->is_tx = 0; + spin_unlock_irqrestore(&lp->lock, flags); + complete(&lp->tx_complete); + } else { + spin_unlock_irqrestore(&lp->lock, flags); + at86rf230_rx(lp); + } + } + + spin_lock_irqsave(&lp->lock, flags); + lp->irq_disabled = 0; + spin_unlock_irqrestore(&lp->lock, flags); + + enable_irq(lp->spi->irq); +} + +static irqreturn_t at86rf230_isr(int irq, void *data) +{ + struct at86rf230_local *lp = data; + + disable_irq_nosync(irq); + + spin_lock(&lp->lock); + lp->irq_disabled = 1; + spin_unlock(&lp->lock); + + schedule_work(&lp->irqwork); + + return IRQ_HANDLED; +} + + +static int at86rf230_hw_init(struct at86rf230_local *lp) +{ + u8 status; + int rc; + + rc = at86rf230_read_subreg(lp, SR_TRX_STATUS, &status); + if (rc) + return rc; + + dev_info(&lp->spi->dev, "Status: %02x\n", status); + if (status == STATE_P_ON) { + rc = at86rf230_write_subreg(lp, SR_TRX_CMD, STATE_TRX_OFF); + if (rc) + return rc; + msleep(1); + rc = at86rf230_read_subreg(lp, SR_TRX_STATUS, &status); + if (rc) + return rc; + dev_info(&lp->spi->dev, "Status: %02x\n", status); + } + + rc = at86rf230_write_subreg(lp, SR_IRQ_MASK, 0xff); /* IRQ_TRX_UR | + * IRQ_CCA_ED | + * IRQ_TRX_END | + * IRQ_PLL_UNL | + * IRQ_PLL_LOCK + */ + if (rc) + return rc; + + /* CLKM changes are applied immediately */ + rc = at86rf230_write_subreg(lp, SR_CLKM_SHA_SEL, 0x00); + if (rc) + return rc; + + /* Turn CLKM Off */ + rc = at86rf230_write_subreg(lp, SR_CLKM_CTRL, 0x00); + if (rc) + return rc; + /* Wait the next SLEEP cycle */ + msleep(100); + + rc = at86rf230_write_subreg(lp, SR_TRX_CMD, STATE_TX_ON); + if (rc) + return rc; + msleep(1); + + rc = at86rf230_read_subreg(lp, SR_TRX_STATUS, &status); + if (rc) + return rc; + dev_info(&lp->spi->dev, "Status: %02x\n", status); + + rc = at86rf230_read_subreg(lp, SR_DVDD_OK, &status); + if (rc) + return rc; + if (!status) { + dev_err(&lp->spi->dev, "DVDD error\n"); + return -EINVAL; + } + + rc = at86rf230_read_subreg(lp, SR_AVDD_OK, &status); + if (rc) + return rc; + if (!status) { + dev_err(&lp->spi->dev, "AVDD error\n"); + 
return -EINVAL; + } + + return 0; +} + +static int at86rf230_suspend(struct spi_device *spi, pm_message_t message) +{ + return 0; +} + +static int at86rf230_resume(struct spi_device *spi) +{ + return 0; +} + +static int at86rf230_fill_data(struct spi_device *spi) +{ + struct at86rf230_local *lp = spi_get_drvdata(spi); + struct at86rf230_platform_data *pdata = spi->dev.platform_data; + + if (!pdata) { + dev_err(&spi->dev, "no platform_data\n"); + return -EINVAL; + } + + lp->rstn = pdata->rstn; + lp->slp_tr = pdata->slp_tr; + lp->dig2 = pdata->dig2; + + return 0; +} + +static int __devinit at86rf230_probe(struct spi_device *spi) +{ + struct ieee802154_dev *dev; + struct at86rf230_local *lp; + u8 man_id_0, man_id_1; + int rc; + const char *chip; + int supported = 0; + + if (!spi->irq) { + dev_err(&spi->dev, "no IRQ specified\n"); + return -EINVAL; + } + + dev = ieee802154_alloc_device(sizeof(*lp), &at86rf230_ops); + if (!dev) + return -ENOMEM; + + lp = dev->priv; + lp->dev = dev; + + lp->spi = spi; + + dev->priv = lp; + dev->parent = &spi->dev; + dev->extra_tx_headroom = 0; + /* We do support only 2.4 Ghz */ + dev->phy->channels_supported[0] = 0x7FFF800; + dev->flags = IEEE802154_HW_OMIT_CKSUM; + + mutex_init(&lp->bmux); + INIT_WORK(&lp->irqwork, at86rf230_irqwork); + spin_lock_init(&lp->lock); + init_completion(&lp->tx_complete); + + spi_set_drvdata(spi, lp); + + rc = at86rf230_fill_data(spi); + if (rc) + goto err_fill; + + rc = gpio_request(lp->rstn, "rstn"); + if (rc) + goto err_rstn; + + if (gpio_is_valid(lp->slp_tr)) { + rc = gpio_request(lp->slp_tr, "slp_tr"); + if (rc) + goto err_slp_tr; + } + + rc = gpio_direction_output(lp->rstn, 1); + if (rc) + goto err_gpio_dir; + + if (gpio_is_valid(lp->slp_tr)) { + rc = gpio_direction_output(lp->slp_tr, 0); + if (rc) + goto err_gpio_dir; + } + + /* Reset */ + msleep(1); + gpio_set_value(lp->rstn, 0); + msleep(1); + gpio_set_value(lp->rstn, 1); + msleep(1); + + rc = at86rf230_read_subreg(lp, SR_MAN_ID_0, &man_id_0); + if (rc) + goto err_gpio_dir; + rc = at86rf230_read_subreg(lp, SR_MAN_ID_1, &man_id_1); + if (rc) + goto err_gpio_dir; + + if (man_id_1 != 0x00 || man_id_0 != 0x1f) { + dev_err(&spi->dev, "Non-Atmel dev found (MAN_ID %02x %02x)\n", + man_id_1, man_id_0); + rc = -EINVAL; + goto err_gpio_dir; + } + + rc = at86rf230_read_subreg(lp, SR_PART_NUM, &lp->part); + if (rc) + goto err_gpio_dir; + + rc = at86rf230_read_subreg(lp, SR_VERSION_NUM, &lp->vers); + if (rc) + goto err_gpio_dir; + + switch (lp->part) { + case 2: + chip = "at86rf230"; + /* supported = 1; FIXME: should be easy to support; */ + break; + case 3: + chip = "at86rf231"; + supported = 1; + break; + default: + chip = "UNKNOWN"; + break; + } + + dev_info(&spi->dev, "Detected %s chip version %d\n", chip, lp->vers); + if (!supported) { + rc = -ENOTSUPP; + goto err_gpio_dir; + } + + rc = at86rf230_hw_init(lp); + if (rc) + goto err_gpio_dir; + + rc = request_irq(spi->irq, at86rf230_isr, IRQF_SHARED, + dev_name(&spi->dev), lp); + if (rc) + goto err_gpio_dir; + + rc = ieee802154_register_device(lp->dev); + if (rc) + goto err_irq; + + return rc; + + ieee802154_unregister_device(lp->dev); +err_irq: + free_irq(spi->irq, lp); + flush_work(&lp->irqwork); +err_gpio_dir: + if (gpio_is_valid(lp->slp_tr)) + gpio_free(lp->slp_tr); +err_slp_tr: + gpio_free(lp->rstn); +err_rstn: +err_fill: + spi_set_drvdata(spi, NULL); + mutex_destroy(&lp->bmux); + ieee802154_free_device(lp->dev); + return rc; +} + +static int __devexit at86rf230_remove(struct spi_device *spi) +{ + struct at86rf230_local *lp = 
spi_get_drvdata(spi); + + ieee802154_unregister_device(lp->dev); + + free_irq(spi->irq, lp); + flush_work(&lp->irqwork); + + if (gpio_is_valid(lp->slp_tr)) + gpio_free(lp->slp_tr); + gpio_free(lp->rstn); + + spi_set_drvdata(spi, NULL); + mutex_destroy(&lp->bmux); + ieee802154_free_device(lp->dev); + + dev_dbg(&spi->dev, "unregistered at86rf230\n"); + return 0; +} + +static struct spi_driver at86rf230_driver = { + .driver = { + .name = "at86rf230", + .owner = THIS_MODULE, + }, + .probe = at86rf230_probe, + .remove = __devexit_p(at86rf230_remove), + .suspend = at86rf230_suspend, + .resume = at86rf230_resume, +}; + +static int __init at86rf230_init(void) +{ + return spi_register_driver(&at86rf230_driver); +} +module_init(at86rf230_init); + +static void __exit at86rf230_exit(void) +{ + spi_unregister_driver(&at86rf230_driver); +} +module_exit(at86rf230_exit); + +MODULE_DESCRIPTION("AT86RF230 Transceiver Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/infiniband/core/netlink.c b/drivers/infiniband/core/netlink.c index e497dfbee435..3ae2bfd31015 100644 --- a/drivers/infiniband/core/netlink.c +++ b/drivers/infiniband/core/netlink.c @@ -108,12 +108,14 @@ void *ibnl_put_msg(struct sk_buff *skb, struct nlmsghdr **nlh, int seq, unsigned char *prev_tail; prev_tail = skb_tail_pointer(skb); - *nlh = NLMSG_NEW(skb, 0, seq, RDMA_NL_GET_TYPE(client, op), - len, NLM_F_MULTI); + *nlh = nlmsg_put(skb, 0, seq, RDMA_NL_GET_TYPE(client, op), + len, NLM_F_MULTI); + if (!*nlh) + goto out_nlmsg_trim; (*nlh)->nlmsg_len = skb_tail_pointer(skb) - prev_tail; - return NLMSG_DATA(*nlh); + return nlmsg_data(*nlh); -nlmsg_failure: +out_nlmsg_trim: nlmsg_trim(skb, prev_tail); return NULL; } @@ -171,8 +173,11 @@ static void ibnl_rcv(struct sk_buff *skb) int __init ibnl_init(void) { - nls = netlink_kernel_create(&init_net, NETLINK_RDMA, 0, ibnl_rcv, - NULL, THIS_MODULE); + struct netlink_kernel_cfg cfg = { + .input = ibnl_rcv, + }; + + nls = netlink_kernel_create(&init_net, NETLINK_RDMA, THIS_MODULE, &cfg); if (!nls) { pr_warn("Failed to create netlink socket\n"); return -ENOMEM; diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c index 740dcc065cf2..77b6b182778a 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_cm.c +++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c @@ -1374,7 +1374,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) goto reject; } dst = &rt->dst; - l2t = t3_l2t_get(tdev, dst, NULL); + l2t = t3_l2t_get(tdev, dst, NULL, &req->peer_ip); if (!l2t) { printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", __func__); @@ -1942,7 +1942,8 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) goto fail3; } ep->dst = &rt->dst; - ep->l2t = t3_l2t_get(ep->com.tdev, ep->dst, NULL); + ep->l2t = t3_l2t_get(ep->com.tdev, ep->dst, NULL, + &cm_id->remote_addr.sin_addr.s_addr); if (!ep->l2t) { printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); err = -ENOMEM; diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c index 3530c41fcd1f..a07b774e7864 100644 --- a/drivers/infiniband/hw/mlx4/main.c +++ b/drivers/infiniband/hw/mlx4/main.c @@ -718,26 +718,53 @@ int mlx4_ib_add_mc(struct mlx4_ib_dev *mdev, struct mlx4_ib_qp *mqp, return ret; } +struct mlx4_ib_steering { + struct list_head list; + u64 reg_id; + union ib_gid gid; +}; + static int mlx4_ib_mcg_attach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid) { int err; struct mlx4_ib_dev *mdev = to_mdev(ibqp->device); struct mlx4_ib_qp *mqp = 
to_mqp(ibqp); + u64 reg_id; + struct mlx4_ib_steering *ib_steering = NULL; + + if (mdev->dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { + ib_steering = kmalloc(sizeof(*ib_steering), GFP_KERNEL); + if (!ib_steering) + return -ENOMEM; + } - err = mlx4_multicast_attach(mdev->dev, &mqp->mqp, gid->raw, - !!(mqp->flags & MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK), - MLX4_PROT_IB_IPV6); + err = mlx4_multicast_attach(mdev->dev, &mqp->mqp, gid->raw, mqp->port, + !!(mqp->flags & + MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK), + MLX4_PROT_IB_IPV6, ®_id); if (err) - return err; + goto err_malloc; err = add_gid_entry(ibqp, gid); if (err) goto err_add; + if (ib_steering) { + memcpy(ib_steering->gid.raw, gid->raw, 16); + ib_steering->reg_id = reg_id; + mutex_lock(&mqp->mutex); + list_add(&ib_steering->list, &mqp->steering_rules); + mutex_unlock(&mqp->mutex); + } return 0; err_add: - mlx4_multicast_detach(mdev->dev, &mqp->mqp, gid->raw, MLX4_PROT_IB_IPV6); + mlx4_multicast_detach(mdev->dev, &mqp->mqp, gid->raw, + MLX4_PROT_IB_IPV6, reg_id); +err_malloc: + kfree(ib_steering); + return err; } @@ -765,9 +792,30 @@ static int mlx4_ib_mcg_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid) u8 mac[6]; struct net_device *ndev; struct mlx4_ib_gid_entry *ge; + u64 reg_id = 0; + + if (mdev->dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { + struct mlx4_ib_steering *ib_steering; + + mutex_lock(&mqp->mutex); + list_for_each_entry(ib_steering, &mqp->steering_rules, list) { + if (!memcmp(ib_steering->gid.raw, gid->raw, 16)) { + list_del(&ib_steering->list); + break; + } + } + mutex_unlock(&mqp->mutex); + if (&ib_steering->list == &mqp->steering_rules) { + pr_err("Couldn't find reg_id for mgid. Steering rule is left attached\n"); + return -EINVAL; + } + reg_id = ib_steering->reg_id; + kfree(ib_steering); + } - err = mlx4_multicast_detach(mdev->dev, - &mqp->mqp, gid->raw, MLX4_PROT_IB_IPV6); + err = mlx4_multicast_detach(mdev->dev, &mqp->mqp, gid->raw, + MLX4_PROT_IB_IPV6, reg_id); if (err) return err; @@ -1111,7 +1159,8 @@ static void mlx4_ib_alloc_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev) sprintf(name, "mlx4-ib-%d-%d@%s", i, j, dev->pdev->bus->name); /* Set IRQ for specific name (per ring) */ - if (mlx4_assign_eq(dev, name, &ibdev->eq_table[eq])) { + if (mlx4_assign_eq(dev, name, NULL, + &ibdev->eq_table[eq])) { /* Use legacy (same as mlx4_en driver) */ pr_warn("Can't allocate EQ %d; reverting to legacy\n", eq); ibdev->eq_table[eq] = diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index ff36655d23d3..42df4f7a6a5b 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -163,6 +163,7 @@ struct mlx4_ib_qp { u8 state; int mlx_type; struct list_head gid_list; + struct list_head steering_rules; }; struct mlx4_ib_srq { diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 8d4ed24aef93..6af19f6c2b11 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -495,6 +495,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, spin_lock_init(&qp->sq.lock); spin_lock_init(&qp->rq.lock); INIT_LIST_HEAD(&qp->gid_list); + INIT_LIST_HEAD(&qp->steering_rules); qp->state = IB_QPS_RESET; if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c index 014504d8e43c..1ca732201f33 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c +++ 
b/drivers/infiniband/ulp/ipoib/ipoib_cm.c @@ -1397,7 +1397,7 @@ void ipoib_cm_skb_too_long(struct net_device *dev, struct sk_buff *skb, int e = skb_queue_empty(&priv->cm.skb_queue); if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); skb_queue_tail(&priv->cm.skb_queue, skb); if (e) diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index 3974c290b667..bbee4b2d7a13 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c @@ -715,7 +715,7 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) rcu_read_lock(); if (likely(skb_dst(skb))) { - n = dst_get_neighbour_noref(skb_dst(skb)); + n = dst_neigh_lookup_skb(skb_dst(skb), skb); if (!n) { ++dev->stats.tx_dropped; dev_kfree_skb_any(skb); @@ -797,6 +797,8 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) } } unlock: + if (n) + neigh_release(n); rcu_read_unlock(); return NETDEV_TX_OK; } diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c index 20ebc6fd1bb9..7cecb16d3d48 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c @@ -658,9 +658,15 @@ static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast) void ipoib_mcast_send(struct net_device *dev, void *mgid, struct sk_buff *skb) { struct ipoib_dev_priv *priv = netdev_priv(dev); + struct dst_entry *dst = skb_dst(skb); struct ipoib_mcast *mcast; + struct neighbour *n; unsigned long flags; + n = NULL; + if (dst) + n = dst_neigh_lookup_skb(dst, skb); + spin_lock_irqsave(&priv->lock, flags); if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags) || @@ -715,29 +721,28 @@ void ipoib_mcast_send(struct net_device *dev, void *mgid, struct sk_buff *skb) out: if (mcast && mcast->ah) { - struct dst_entry *dst = skb_dst(skb); - struct neighbour *n = NULL; - - rcu_read_lock(); - if (dst) - n = dst_get_neighbour_noref(dst); - if (n && !*to_ipoib_neigh(n)) { - struct ipoib_neigh *neigh = ipoib_neigh_alloc(n, - skb->dev); - - if (neigh) { - kref_get(&mcast->ah->ref); - neigh->ah = mcast->ah; - list_add_tail(&neigh->list, &mcast->neigh_list); + if (n) { + if (!*to_ipoib_neigh(n)) { + struct ipoib_neigh *neigh; + + neigh = ipoib_neigh_alloc(n, skb->dev); + if (neigh) { + kref_get(&mcast->ah->ref); + neigh->ah = mcast->ah; + list_add_tail(&neigh->list, + &mcast->neigh_list); + } } + neigh_release(n); } - rcu_read_unlock(); spin_unlock_irqrestore(&priv->lock, flags); ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN); return; } unlock: + if (n) + neigh_release(n); spin_unlock_irqrestore(&priv->lock, flags); } diff --git a/drivers/isdn/gigaset/capi.c b/drivers/isdn/gigaset/capi.c index 27e4a3e21d64..68452b768da2 100644 --- a/drivers/isdn/gigaset/capi.c +++ b/drivers/isdn/gigaset/capi.c @@ -288,6 +288,7 @@ static inline void dump_rawmsg(enum debuglevel level, const char *tag, * format CAPI IE as string */ +#ifdef CONFIG_GIGASET_DEBUG static const char *format_ie(const char *ie) { static char result[3 * MAX_FMT_IE_LEN]; @@ -313,6 +314,7 @@ static const char *format_ie(const char *ie) *--pout = 0; return result; } +#endif /* * emit DATA_B3_CONF message diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c index c65c3440cd70..114f3bcba1b0 100644 --- a/drivers/isdn/hardware/mISDN/hfcsusb.c +++ b/drivers/isdn/hardware/mISDN/hfcsusb.c @@ 
-2084,13 +2084,21 @@ hfcsusb_probe(struct usb_interface *intf, const struct usb_device_id *id) /* create the control pipes needed for register access */ hw->ctrl_in_pipe = usb_rcvctrlpipe(hw->dev, 0); hw->ctrl_out_pipe = usb_sndctrlpipe(hw->dev, 0); + + driver_info = (struct hfcsusb_vdata *) + hfcsusb_idtab[vend_idx].driver_info; + hw->ctrl_urb = usb_alloc_urb(0, GFP_KERNEL); + if (!hw->ctrl_urb) { + pr_warn("%s: No memory for control urb\n", + driver_info->vend_name); + kfree(hw); + return -ENOMEM; + } - driver_info = - (struct hfcsusb_vdata *)hfcsusb_idtab[vend_idx].driver_info; - printk(KERN_DEBUG "%s: %s: detected \"%s\" (%s, if=%d alt=%d)\n", - hw->name, __func__, driver_info->vend_name, - conf_str[small_match], ifnum, alt_used); + pr_info("%s: %s: detected \"%s\" (%s, if=%d alt=%d)\n", + hw->name, __func__, driver_info->vend_name, + conf_str[small_match], ifnum, alt_used); if (setup_instance(hw, dev->dev.parent)) return -EIO; diff --git a/drivers/isdn/hisax/hfc_usb.c b/drivers/isdn/hisax/hfc_usb.c index 84f9c8103078..849a80752685 100644 --- a/drivers/isdn/hisax/hfc_usb.c +++ b/drivers/isdn/hisax/hfc_usb.c @@ -1483,13 +1483,21 @@ hfc_usb_probe(struct usb_interface *intf, const struct usb_device_id *id) usb_rcvctrlpipe(context->dev, 0); context->ctrl_out_pipe = usb_sndctrlpipe(context->dev, 0); + + driver_info = (hfcsusb_vdata *) + hfcusb_idtab[vend_idx].driver_info; + context->ctrl_urb = usb_alloc_urb(0, GFP_KERNEL); - driver_info = - (hfcsusb_vdata *) hfcusb_idtab[vend_idx]. - driver_info; - printk(KERN_INFO "HFC-S USB: detected \"%s\"\n", - driver_info->vend_name); + if (!context->ctrl_urb) { + pr_warn("%s: No memory for control urb\n", + driver_info->vend_name); + kfree(context); + return -ENOMEM; + } + + pr_info("HFC-S USB: detected \"%s\"\n", + driver_info->vend_name); DBG(HFCUSB_DBG_INIT, "HFC-S USB: Endpoint-Config: %s (if=%d alt=%d), E-Channel(%d)", diff --git a/drivers/isdn/hisax/isurf.c b/drivers/isdn/hisax/isurf.c index ea2717215296..c1530fe248c2 100644 --- a/drivers/isdn/hisax/isurf.c +++ b/drivers/isdn/hisax/isurf.c @@ -231,6 +231,11 @@ setup_isurf(struct IsdnCard *card) } pnp_disable_dev(pnp_d); err = pnp_activate_dev(pnp_d); + if (err < 0) { + pr_warn("%s: pnp_activate_dev ret=%d\n", + __func__, err); + return 0; + } cs->hw.isurf.reset = pnp_port_start(pnp_d, 0); cs->hw.isurf.phymem = pnp_mem_start(pnp_d, 1); cs->irq = pnp_irq(pnp_d, 0); diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index 2661f6e366f9..154f3ef07631 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -511,7 +511,6 @@ config USB_SWITCH_FSA9480 source "drivers/misc/c2port/Kconfig" source "drivers/misc/eeprom/Kconfig" source "drivers/misc/cb710/Kconfig" -source "drivers/misc/iwmc3200top/Kconfig" source "drivers/misc/ti-st/Kconfig" source "drivers/misc/lis3lv02d/Kconfig" source "drivers/misc/carma/Kconfig" diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile index 456972faaeb3..b88df7a350b8 100644 --- a/drivers/misc/Makefile +++ b/drivers/misc/Makefile @@ -36,7 +36,6 @@ obj-$(CONFIG_EP93XX_PWM) += ep93xx_pwm.o obj-$(CONFIG_DS1682) += ds1682.o obj-$(CONFIG_TI_DAC7512) += ti_dac7512.o obj-$(CONFIG_C2PORT) += c2port/ -obj-$(CONFIG_IWMC3200TOP) += iwmc3200top/ obj-$(CONFIG_HMC6352) += hmc6352.o obj-y += eeprom/ obj-y += cb710/ diff --git a/drivers/misc/iwmc3200top/Kconfig b/drivers/misc/iwmc3200top/Kconfig deleted file mode 100644 index 9e4b88fb57f1..000000000000 --- a/drivers/misc/iwmc3200top/Kconfig +++ /dev/null @@ -1,20 +0,0 @@ -config IWMC3200TOP - tristate "Intel Wireless 
MultiCom Top Driver" - depends on MMC && EXPERIMENTAL - select FW_LOADER - ---help--- - Intel Wireless MultiCom 3200 Top driver is responsible for - for firmware load and enabled coms enumeration - -config IWMC3200TOP_DEBUG - bool "Enable full debug output of iwmc3200top Driver" - depends on IWMC3200TOP - ---help--- - Enable full debug output of iwmc3200top Driver - -config IWMC3200TOP_DEBUGFS - bool "Enable Debugfs debugging interface for iwmc3200top" - depends on IWMC3200TOP - ---help--- - Enable creation of debugfs files for iwmc3200top - diff --git a/drivers/misc/iwmc3200top/Makefile b/drivers/misc/iwmc3200top/Makefile deleted file mode 100644 index fbf53fb4634e..000000000000 --- a/drivers/misc/iwmc3200top/Makefile +++ /dev/null @@ -1,29 +0,0 @@ -# iwmc3200top - Intel Wireless MultiCom 3200 Top Driver -# drivers/misc/iwmc3200top/Makefile -# -# Copyright (C) 2009 Intel Corporation. All rights reserved. -# -# This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public License version -# 2 as published by the Free Software Foundation. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program; if not, write to the Free Software -# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA -# 02110-1301, USA. -# -# -# Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> -# - -# -# - -obj-$(CONFIG_IWMC3200TOP) += iwmc3200top.o -iwmc3200top-objs := main.o fw-download.o -iwmc3200top-$(CONFIG_IWMC3200TOP_DEBUG) += log.o -iwmc3200top-$(CONFIG_IWMC3200TOP_DEBUGFS) += debugfs.o diff --git a/drivers/misc/iwmc3200top/debugfs.c b/drivers/misc/iwmc3200top/debugfs.c deleted file mode 100644 index 62fbaec48207..000000000000 --- a/drivers/misc/iwmc3200top/debugfs.c +++ /dev/null @@ -1,137 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/debufs.c - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
- * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/string.h> -#include <linux/ctype.h> -#include <linux/mmc/sdio_func.h> -#include <linux/mmc/sdio.h> -#include <linux/debugfs.h> - -#include "iwmc3200top.h" -#include "fw-msg.h" -#include "log.h" -#include "debugfs.h" - - - -/* Constants definition */ -#define HEXADECIMAL_RADIX 16 - -/* Functions definition */ - - -#define DEBUGFS_ADD(name, parent) do { \ - dbgfs->dbgfs_##parent##_files.file_##name = \ - debugfs_create_file(#name, 0644, dbgfs->dir_##parent, priv, \ - &iwmct_dbgfs_##name##_ops); \ -} while (0) - -#define DEBUGFS_RM(name) do { \ - debugfs_remove(name); \ - name = NULL; \ -} while (0) - -#define DEBUGFS_READ_FUNC(name) \ -ssize_t iwmct_dbgfs_##name##_read(struct file *file, \ - char __user *user_buf, \ - size_t count, loff_t *ppos); - -#define DEBUGFS_WRITE_FUNC(name) \ -ssize_t iwmct_dbgfs_##name##_write(struct file *file, \ - const char __user *user_buf, \ - size_t count, loff_t *ppos); - -#define DEBUGFS_READ_FILE_OPS(name) \ - DEBUGFS_READ_FUNC(name) \ - static const struct file_operations iwmct_dbgfs_##name##_ops = { \ - .read = iwmct_dbgfs_##name##_read, \ - .open = iwmct_dbgfs_open_file_generic, \ - .llseek = generic_file_llseek, \ - }; - -#define DEBUGFS_WRITE_FILE_OPS(name) \ - DEBUGFS_WRITE_FUNC(name) \ - static const struct file_operations iwmct_dbgfs_##name##_ops = { \ - .write = iwmct_dbgfs_##name##_write, \ - .open = iwmct_dbgfs_open_file_generic, \ - .llseek = generic_file_llseek, \ - }; - -#define DEBUGFS_READ_WRITE_FILE_OPS(name) \ - DEBUGFS_READ_FUNC(name) \ - DEBUGFS_WRITE_FUNC(name) \ - static const struct file_operations iwmct_dbgfs_##name##_ops = {\ - .write = iwmct_dbgfs_##name##_write, \ - .read = iwmct_dbgfs_##name##_read, \ - .open = iwmct_dbgfs_open_file_generic, \ - .llseek = generic_file_llseek, \ - }; - - -/* Debugfs file ops definitions */ - -/* - * Create the debugfs files and directories - * - */ -void iwmct_dbgfs_register(struct iwmct_priv *priv, const char *name) -{ - struct iwmct_debugfs *dbgfs; - - dbgfs = kzalloc(sizeof(struct iwmct_debugfs), GFP_KERNEL); - if (!dbgfs) { - LOG_ERROR(priv, DEBUGFS, "failed to allocate %zd bytes\n", - sizeof(struct iwmct_debugfs)); - return; - } - - priv->dbgfs = dbgfs; - dbgfs->name = name; - dbgfs->dir_drv = debugfs_create_dir(name, NULL); - if (!dbgfs->dir_drv) { - LOG_ERROR(priv, DEBUGFS, "failed to create debugfs dir\n"); - return; - } - - return; -} - -/** - * Remove the debugfs files and directories - * - */ -void iwmct_dbgfs_unregister(struct iwmct_debugfs *dbgfs) -{ - if (!dbgfs) - return; - - DEBUGFS_RM(dbgfs->dir_drv); - kfree(dbgfs); - dbgfs = NULL; -} - diff --git a/drivers/misc/iwmc3200top/debugfs.h b/drivers/misc/iwmc3200top/debugfs.h deleted file mode 100644 index 71d45759b40f..000000000000 --- a/drivers/misc/iwmc3200top/debugfs.h +++ /dev/null @@ -1,58 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/debufs.h - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#ifndef __DEBUGFS_H__ -#define __DEBUGFS_H__ - - -#ifdef CONFIG_IWMC3200TOP_DEBUGFS - -struct iwmct_debugfs { - const char *name; - struct dentry *dir_drv; - struct dir_drv_files { - } dbgfs_drv_files; -}; - -void iwmct_dbgfs_register(struct iwmct_priv *priv, const char *name); -void iwmct_dbgfs_unregister(struct iwmct_debugfs *dbgfs); - -#else /* CONFIG_IWMC3200TOP_DEBUGFS */ - -struct iwmct_debugfs; - -static inline void -iwmct_dbgfs_register(struct iwmct_priv *priv, const char *name) -{} - -static inline void -iwmct_dbgfs_unregister(struct iwmct_debugfs *dbgfs) -{} - -#endif /* CONFIG_IWMC3200TOP_DEBUGFS */ - -#endif /* __DEBUGFS_H__ */ - diff --git a/drivers/misc/iwmc3200top/fw-download.c b/drivers/misc/iwmc3200top/fw-download.c deleted file mode 100644 index e27afde6e99f..000000000000 --- a/drivers/misc/iwmc3200top/fw-download.c +++ /dev/null @@ -1,358 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/fw-download.c - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
- * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#include <linux/firmware.h> -#include <linux/mmc/sdio_func.h> -#include <linux/slab.h> -#include <asm/unaligned.h> - -#include "iwmc3200top.h" -#include "log.h" -#include "fw-msg.h" - -#define CHECKSUM_BYTES_NUM sizeof(u32) - -/** - init parser struct with file - */ -static int iwmct_fw_parser_init(struct iwmct_priv *priv, const u8 *file, - size_t file_size, size_t block_size) -{ - struct iwmct_parser *parser = &priv->parser; - struct iwmct_fw_hdr *fw_hdr = &parser->versions; - - LOG_TRACE(priv, FW_DOWNLOAD, "-->\n"); - - LOG_INFO(priv, FW_DOWNLOAD, "file_size=%zd\n", file_size); - - parser->file = file; - parser->file_size = file_size; - parser->cur_pos = 0; - parser->entry_point = 0; - parser->buf = kzalloc(block_size, GFP_KERNEL); - if (!parser->buf) { - LOG_ERROR(priv, FW_DOWNLOAD, "kzalloc error\n"); - return -ENOMEM; - } - parser->buf_size = block_size; - - /* extract fw versions */ - memcpy(fw_hdr, parser->file, sizeof(struct iwmct_fw_hdr)); - LOG_INFO(priv, FW_DOWNLOAD, "fw versions are:\n" - "top %u.%u.%u gps %u.%u.%u bt %u.%u.%u tic %s\n", - fw_hdr->top_major, fw_hdr->top_minor, fw_hdr->top_revision, - fw_hdr->gps_major, fw_hdr->gps_minor, fw_hdr->gps_revision, - fw_hdr->bt_major, fw_hdr->bt_minor, fw_hdr->bt_revision, - fw_hdr->tic_name); - - parser->cur_pos += sizeof(struct iwmct_fw_hdr); - - LOG_TRACE(priv, FW_DOWNLOAD, "<--\n"); - return 0; -} - -static bool iwmct_checksum(struct iwmct_priv *priv) -{ - struct iwmct_parser *parser = &priv->parser; - __le32 *file = (__le32 *)parser->file; - int i, pad, steps; - u32 accum = 0; - u32 checksum; - u32 mask = 0xffffffff; - - pad = (parser->file_size - CHECKSUM_BYTES_NUM) % 4; - steps = (parser->file_size - CHECKSUM_BYTES_NUM) / 4; - - LOG_INFO(priv, FW_DOWNLOAD, "pad=%d steps=%d\n", pad, steps); - - for (i = 0; i < steps; i++) - accum += le32_to_cpu(file[i]); - - if (pad) { - mask <<= 8 * (4 - pad); - accum += le32_to_cpu(file[steps]) & mask; - } - - checksum = get_unaligned_le32((__le32 *)(parser->file + - parser->file_size - CHECKSUM_BYTES_NUM)); - - LOG_INFO(priv, FW_DOWNLOAD, - "compare checksum accum=0x%x to checksum=0x%x\n", - accum, checksum); - - return checksum == accum; -} - -static int iwmct_parse_next_section(struct iwmct_priv *priv, const u8 **p_sec, - size_t *sec_size, __le32 *sec_addr) -{ - struct iwmct_parser *parser = &priv->parser; - struct iwmct_dbg *dbg = &priv->dbg; - struct iwmct_fw_sec_hdr *sec_hdr; - - LOG_TRACE(priv, FW_DOWNLOAD, "-->\n"); - - while (parser->cur_pos + sizeof(struct iwmct_fw_sec_hdr) - <= parser->file_size) { - - sec_hdr = (struct iwmct_fw_sec_hdr *) - (parser->file + parser->cur_pos); - parser->cur_pos += sizeof(struct iwmct_fw_sec_hdr); - - LOG_INFO(priv, FW_DOWNLOAD, - "sec hdr: type=%s addr=0x%x size=%d\n", - sec_hdr->type, sec_hdr->target_addr, - sec_hdr->data_size); - - if (strcmp(sec_hdr->type, "ENT") == 0) - parser->entry_point = le32_to_cpu(sec_hdr->target_addr); - else if (strcmp(sec_hdr->type, "LBL") == 0) - strcpy(dbg->label_fw, parser->file + parser->cur_pos); - else if (((strcmp(sec_hdr->type, "TOP") == 0) && - (priv->barker & BARKER_DNLOAD_TOP_MSK)) || - ((strcmp(sec_hdr->type, "GPS") == 0) && - (priv->barker & BARKER_DNLOAD_GPS_MSK)) || - ((strcmp(sec_hdr->type, "BTH") == 0) && - (priv->barker & BARKER_DNLOAD_BT_MSK))) { - *sec_addr = sec_hdr->target_addr; - *sec_size = le32_to_cpu(sec_hdr->data_size); - *p_sec = parser->file + parser->cur_pos; - parser->cur_pos += 
le32_to_cpu(sec_hdr->data_size); - return 1; - } else if (strcmp(sec_hdr->type, "LOG") != 0) - LOG_WARNING(priv, FW_DOWNLOAD, - "skipping section type %s\n", - sec_hdr->type); - - parser->cur_pos += le32_to_cpu(sec_hdr->data_size); - LOG_INFO(priv, FW_DOWNLOAD, - "finished with section cur_pos=%zd\n", parser->cur_pos); - } - - LOG_TRACE(priv, INIT, "<--\n"); - return 0; -} - -static int iwmct_download_section(struct iwmct_priv *priv, const u8 *p_sec, - size_t sec_size, __le32 addr) -{ - struct iwmct_parser *parser = &priv->parser; - struct iwmct_fw_load_hdr *hdr = (struct iwmct_fw_load_hdr *)parser->buf; - const u8 *cur_block = p_sec; - size_t sent = 0; - int cnt = 0; - int ret = 0; - u32 cmd = 0; - - LOG_TRACE(priv, FW_DOWNLOAD, "-->\n"); - LOG_INFO(priv, FW_DOWNLOAD, "Download address 0x%x size 0x%zx\n", - addr, sec_size); - - while (sent < sec_size) { - int i; - u32 chksm = 0; - u32 reset = atomic_read(&priv->reset); - /* actual FW data */ - u32 data_size = min(parser->buf_size - sizeof(*hdr), - sec_size - sent); - /* Pad to block size */ - u32 trans_size = (data_size + sizeof(*hdr) + - IWMC_SDIO_BLK_SIZE - 1) & - ~(IWMC_SDIO_BLK_SIZE - 1); - ++cnt; - - /* in case of reset, interrupt FW DOWNLAOD */ - if (reset) { - LOG_INFO(priv, FW_DOWNLOAD, - "Reset detected. Abort FW download!!!"); - ret = -ECANCELED; - goto exit; - } - - memset(parser->buf, 0, parser->buf_size); - cmd |= IWMC_OPCODE_WRITE << CMD_HDR_OPCODE_POS; - cmd |= IWMC_CMD_SIGNATURE << CMD_HDR_SIGNATURE_POS; - cmd |= (priv->dbg.direct ? 1 : 0) << CMD_HDR_DIRECT_ACCESS_POS; - cmd |= (priv->dbg.checksum ? 1 : 0) << CMD_HDR_USE_CHECKSUM_POS; - hdr->data_size = cpu_to_le32(data_size); - hdr->target_addr = addr; - - /* checksum is allowed for sizes divisible by 4 */ - if (data_size & 0x3) - cmd &= ~CMD_HDR_USE_CHECKSUM_MSK; - - memcpy(hdr->data, cur_block, data_size); - - - if (cmd & CMD_HDR_USE_CHECKSUM_MSK) { - - chksm = data_size + le32_to_cpu(addr) + cmd; - for (i = 0; i < data_size >> 2; i++) - chksm += ((u32 *)cur_block)[i]; - - hdr->block_chksm = cpu_to_le32(chksm); - LOG_INFO(priv, FW_DOWNLOAD, "Checksum = 0x%X\n", - hdr->block_chksm); - } - - LOG_INFO(priv, FW_DOWNLOAD, "trans#%d, len=%d, sent=%zd, " - "sec_size=%zd, startAddress 0x%X\n", - cnt, trans_size, sent, sec_size, addr); - - if (priv->dbg.dump) - LOG_HEXDUMP(FW_DOWNLOAD, parser->buf, trans_size); - - - hdr->cmd = cpu_to_le32(cmd); - /* send it down */ - /* TODO: add more proper sending and error checking */ - ret = iwmct_tx(priv, parser->buf, trans_size); - if (ret != 0) { - LOG_INFO(priv, FW_DOWNLOAD, - "iwmct_tx returned %d\n", ret); - goto exit; - } - - addr = cpu_to_le32(le32_to_cpu(addr) + data_size); - sent += data_size; - cur_block = p_sec + sent; - - if (priv->dbg.blocks && (cnt + 1) >= priv->dbg.blocks) { - LOG_INFO(priv, FW_DOWNLOAD, - "Block number limit is reached [%d]\n", - priv->dbg.blocks); - break; - } - } - - if (sent < sec_size) - ret = -EINVAL; -exit: - LOG_TRACE(priv, FW_DOWNLOAD, "<--\n"); - return ret; -} - -static int iwmct_kick_fw(struct iwmct_priv *priv, bool jump) -{ - struct iwmct_parser *parser = &priv->parser; - struct iwmct_fw_load_hdr *hdr = (struct iwmct_fw_load_hdr *)parser->buf; - int ret; - u32 cmd; - - LOG_TRACE(priv, FW_DOWNLOAD, "-->\n"); - - memset(parser->buf, 0, parser->buf_size); - cmd = IWMC_CMD_SIGNATURE << CMD_HDR_SIGNATURE_POS; - if (jump) { - cmd |= IWMC_OPCODE_JUMP << CMD_HDR_OPCODE_POS; - hdr->target_addr = cpu_to_le32(parser->entry_point); - LOG_INFO(priv, FW_DOWNLOAD, "jump address 0x%x\n", - parser->entry_point); 
- } else { - cmd |= IWMC_OPCODE_LAST_COMMAND << CMD_HDR_OPCODE_POS; - LOG_INFO(priv, FW_DOWNLOAD, "last command\n"); - } - - hdr->cmd = cpu_to_le32(cmd); - - LOG_HEXDUMP(FW_DOWNLOAD, parser->buf, sizeof(*hdr)); - /* send it down */ - /* TODO: add more proper sending and error checking */ - ret = iwmct_tx(priv, parser->buf, IWMC_SDIO_BLK_SIZE); - if (ret) - LOG_INFO(priv, FW_DOWNLOAD, "iwmct_tx returned %d", ret); - - LOG_TRACE(priv, FW_DOWNLOAD, "<--\n"); - return 0; -} - -int iwmct_fw_load(struct iwmct_priv *priv) -{ - const u8 *fw_name = FW_NAME(FW_API_VER); - const struct firmware *raw; - const u8 *pdata; - size_t len; - __le32 addr; - int ret; - - - LOG_INFO(priv, FW_DOWNLOAD, "barker download request 0x%x is:\n", - priv->barker); - LOG_INFO(priv, FW_DOWNLOAD, "******* Top FW %s requested ********\n", - (priv->barker & BARKER_DNLOAD_TOP_MSK) ? "was" : "not"); - LOG_INFO(priv, FW_DOWNLOAD, "******* GPS FW %s requested ********\n", - (priv->barker & BARKER_DNLOAD_GPS_MSK) ? "was" : "not"); - LOG_INFO(priv, FW_DOWNLOAD, "******* BT FW %s requested ********\n", - (priv->barker & BARKER_DNLOAD_BT_MSK) ? "was" : "not"); - - - /* get the firmware */ - ret = request_firmware(&raw, fw_name, &priv->func->dev); - if (ret < 0) { - LOG_ERROR(priv, FW_DOWNLOAD, "%s request_firmware failed %d\n", - fw_name, ret); - goto exit; - } - - if (raw->size < sizeof(struct iwmct_fw_sec_hdr)) { - LOG_ERROR(priv, FW_DOWNLOAD, "%s smaller then (%zd) (%zd)\n", - fw_name, sizeof(struct iwmct_fw_sec_hdr), raw->size); - goto exit; - } - - LOG_INFO(priv, FW_DOWNLOAD, "Read firmware '%s'\n", fw_name); - - /* clear parser struct */ - ret = iwmct_fw_parser_init(priv, raw->data, raw->size, priv->trans_len); - if (ret < 0) { - LOG_ERROR(priv, FW_DOWNLOAD, - "iwmct_parser_init failed: Reason %d\n", ret); - goto exit; - } - - if (!iwmct_checksum(priv)) { - LOG_ERROR(priv, FW_DOWNLOAD, "checksum error\n"); - ret = -EINVAL; - goto exit; - } - - /* download firmware to device */ - while (iwmct_parse_next_section(priv, &pdata, &len, &addr)) { - ret = iwmct_download_section(priv, pdata, len, addr); - if (ret) { - LOG_ERROR(priv, FW_DOWNLOAD, - "%s download section failed\n", fw_name); - goto exit; - } - } - - ret = iwmct_kick_fw(priv, !!(priv->barker & BARKER_DNLOAD_JUMP_MSK)); - -exit: - kfree(priv->parser.buf); - release_firmware(raw); - return ret; -} diff --git a/drivers/misc/iwmc3200top/fw-msg.h b/drivers/misc/iwmc3200top/fw-msg.h deleted file mode 100644 index 9e26b75bd482..000000000000 --- a/drivers/misc/iwmc3200top/fw-msg.h +++ /dev/null @@ -1,113 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/fw-msg.h - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
- * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#ifndef __FWMSG_H__ -#define __FWMSG_H__ - -#define COMM_TYPE_D2H 0xFF -#define COMM_TYPE_H2D 0xEE - -#define COMM_CATEGORY_OPERATIONAL 0x00 -#define COMM_CATEGORY_DEBUG 0x01 -#define COMM_CATEGORY_TESTABILITY 0x02 -#define COMM_CATEGORY_DIAGNOSTICS 0x03 - -#define OP_DBG_ZSTR_MSG cpu_to_le16(0x1A) - -#define FW_LOG_SRC_MAX 32 -#define FW_LOG_SRC_ALL 255 - -#define FW_STRING_TABLE_ADDR cpu_to_le32(0x0C000000) - -#define CMD_DBG_LOG_LEVEL cpu_to_le16(0x0001) -#define CMD_TST_DEV_RESET cpu_to_le16(0x0060) -#define CMD_TST_FUNC_RESET cpu_to_le16(0x0062) -#define CMD_TST_IFACE_RESET cpu_to_le16(0x0064) -#define CMD_TST_CPU_UTILIZATION cpu_to_le16(0x0065) -#define CMD_TST_TOP_DEEP_SLEEP cpu_to_le16(0x0080) -#define CMD_TST_WAKEUP cpu_to_le16(0x0081) -#define CMD_TST_FUNC_WAKEUP cpu_to_le16(0x0082) -#define CMD_TST_FUNC_DEEP_SLEEP_REQUEST cpu_to_le16(0x0083) -#define CMD_TST_GET_MEM_DUMP cpu_to_le16(0x0096) - -#define OP_OPR_ALIVE cpu_to_le16(0x0010) -#define OP_OPR_CMD_ACK cpu_to_le16(0x001F) -#define OP_OPR_CMD_NACK cpu_to_le16(0x0020) -#define OP_TST_MEM_DUMP cpu_to_le16(0x0043) - -#define CMD_FLAG_PADDING_256 0x80 - -#define FW_HCMD_BLOCK_SIZE 256 - -struct msg_hdr { - u8 type; - u8 category; - __le16 opcode; - u8 seqnum; - u8 flags; - __le16 length; -} __attribute__((__packed__)); - -struct log_hdr { - __le32 timestamp; - u8 severity; - u8 logsource; - __le16 reserved; -} __attribute__((__packed__)); - -struct mdump_hdr { - u8 dmpid; - u8 frag; - __le16 size; - __le32 addr; -} __attribute__((__packed__)); - -struct top_msg { - struct msg_hdr hdr; - union { - /* D2H messages */ - struct { - struct log_hdr log_hdr; - u8 data[1]; - } __attribute__((__packed__)) log; - - struct { - struct log_hdr log_hdr; - struct mdump_hdr md_hdr; - u8 data[1]; - } __attribute__((__packed__)) mdump; - - /* H2D messages */ - struct { - u8 logsource; - u8 sevmask; - } __attribute__((__packed__)) logdefs[FW_LOG_SRC_MAX]; - struct mdump_hdr mdump_req; - } u; -} __attribute__((__packed__)); - - -#endif /* __FWMSG_H__ */ diff --git a/drivers/misc/iwmc3200top/iwmc3200top.h b/drivers/misc/iwmc3200top/iwmc3200top.h deleted file mode 100644 index 620973ed8bf9..000000000000 --- a/drivers/misc/iwmc3200top/iwmc3200top.h +++ /dev/null @@ -1,205 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/iwmc3200top.h - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#ifndef __IWMC3200TOP_H__ -#define __IWMC3200TOP_H__ - -#include <linux/workqueue.h> - -#define DRV_NAME "iwmc3200top" -#define FW_API_VER 1 -#define _FW_NAME(api) DRV_NAME "." 
#api ".fw" -#define FW_NAME(api) _FW_NAME(api) - -#define IWMC_SDIO_BLK_SIZE 256 -#define IWMC_DEFAULT_TR_BLK 64 -#define IWMC_SDIO_DATA_ADDR 0x0 -#define IWMC_SDIO_INTR_ENABLE_ADDR 0x14 -#define IWMC_SDIO_INTR_STATUS_ADDR 0x13 -#define IWMC_SDIO_INTR_CLEAR_ADDR 0x13 -#define IWMC_SDIO_INTR_GET_SIZE_ADDR 0x2C - -#define COMM_HUB_HEADER_LENGTH 16 -#define LOGGER_HEADER_LENGTH 10 - - -#define BARKER_DNLOAD_BT_POS 0 -#define BARKER_DNLOAD_BT_MSK BIT(BARKER_DNLOAD_BT_POS) -#define BARKER_DNLOAD_GPS_POS 1 -#define BARKER_DNLOAD_GPS_MSK BIT(BARKER_DNLOAD_GPS_POS) -#define BARKER_DNLOAD_TOP_POS 2 -#define BARKER_DNLOAD_TOP_MSK BIT(BARKER_DNLOAD_TOP_POS) -#define BARKER_DNLOAD_RESERVED1_POS 3 -#define BARKER_DNLOAD_RESERVED1_MSK BIT(BARKER_DNLOAD_RESERVED1_POS) -#define BARKER_DNLOAD_JUMP_POS 4 -#define BARKER_DNLOAD_JUMP_MSK BIT(BARKER_DNLOAD_JUMP_POS) -#define BARKER_DNLOAD_SYNC_POS 5 -#define BARKER_DNLOAD_SYNC_MSK BIT(BARKER_DNLOAD_SYNC_POS) -#define BARKER_DNLOAD_RESERVED2_POS 6 -#define BARKER_DNLOAD_RESERVED2_MSK (0x3 << BARKER_DNLOAD_RESERVED2_POS) -#define BARKER_DNLOAD_BARKER_POS 8 -#define BARKER_DNLOAD_BARKER_MSK (0xffffff << BARKER_DNLOAD_BARKER_POS) - -#define IWMC_BARKER_REBOOT (0xdeadbe << BARKER_DNLOAD_BARKER_POS) -/* whole field barker */ -#define IWMC_BARKER_ACK 0xfeedbabe - -#define IWMC_CMD_SIGNATURE 0xcbbc - -#define CMD_HDR_OPCODE_POS 0 -#define CMD_HDR_OPCODE_MSK_MSK (0xf << CMD_HDR_OPCODE_MSK_POS) -#define CMD_HDR_RESPONSE_CODE_POS 4 -#define CMD_HDR_RESPONSE_CODE_MSK (0xf << CMD_HDR_RESPONSE_CODE_POS) -#define CMD_HDR_USE_CHECKSUM_POS 8 -#define CMD_HDR_USE_CHECKSUM_MSK BIT(CMD_HDR_USE_CHECKSUM_POS) -#define CMD_HDR_RESPONSE_REQUIRED_POS 9 -#define CMD_HDR_RESPONSE_REQUIRED_MSK BIT(CMD_HDR_RESPONSE_REQUIRED_POS) -#define CMD_HDR_DIRECT_ACCESS_POS 10 -#define CMD_HDR_DIRECT_ACCESS_MSK BIT(CMD_HDR_DIRECT_ACCESS_POS) -#define CMD_HDR_RESERVED_POS 11 -#define CMD_HDR_RESERVED_MSK BIT(0x1f << CMD_HDR_RESERVED_POS) -#define CMD_HDR_SIGNATURE_POS 16 -#define CMD_HDR_SIGNATURE_MSK BIT(0xffff << CMD_HDR_SIGNATURE_POS) - -enum { - IWMC_OPCODE_PING = 0, - IWMC_OPCODE_READ = 1, - IWMC_OPCODE_WRITE = 2, - IWMC_OPCODE_JUMP = 3, - IWMC_OPCODE_REBOOT = 4, - IWMC_OPCODE_PERSISTENT_WRITE = 5, - IWMC_OPCODE_PERSISTENT_READ = 6, - IWMC_OPCODE_READ_MODIFY_WRITE = 7, - IWMC_OPCODE_LAST_COMMAND = 15 -}; - -struct iwmct_fw_load_hdr { - __le32 cmd; - __le32 target_addr; - __le32 data_size; - __le32 block_chksm; - u8 data[0]; -}; - -/** - * struct iwmct_fw_hdr - * holds all sw components versions - */ -struct iwmct_fw_hdr { - u8 top_major; - u8 top_minor; - u8 top_revision; - u8 gps_major; - u8 gps_minor; - u8 gps_revision; - u8 bt_major; - u8 bt_minor; - u8 bt_revision; - u8 tic_name[31]; -}; - -/** - * struct iwmct_fw_sec_hdr - * @type: function type - * @data_size: section's data size - * @target_addr: download address - */ -struct iwmct_fw_sec_hdr { - u8 type[4]; - __le32 data_size; - __le32 target_addr; -}; - -/** - * struct iwmct_parser - * @file: fw image - * @file_size: fw size - * @cur_pos: position in file - * @buf: temp buf for download - * @buf_size: size of buf - * @entry_point: address to jump in fw kick-off - */ -struct iwmct_parser { - const u8 *file; - size_t file_size; - size_t cur_pos; - u8 *buf; - size_t buf_size; - u32 entry_point; - struct iwmct_fw_hdr versions; -}; - - -struct iwmct_work_struct { - struct list_head list; - ssize_t iosize; -}; - -struct iwmct_dbg { - int blocks; - bool dump; - bool jump; - bool direct; - bool checksum; - bool fw_download; - int block_size; - 
int download_trans_blks; - - char label_fw[256]; -}; - -struct iwmct_debugfs; - -struct iwmct_priv { - struct sdio_func *func; - struct iwmct_debugfs *dbgfs; - struct iwmct_parser parser; - atomic_t reset; - atomic_t dev_sync; - u32 trans_len; - u32 barker; - struct iwmct_dbg dbg; - - /* drivers work items */ - struct work_struct bus_rescan_worker; - struct work_struct isr_worker; - - /* drivers wait queue */ - wait_queue_head_t wait_q; - - /* rx request list */ - struct list_head read_req_list; -}; - -extern int iwmct_tx(struct iwmct_priv *priv, void *src, int count); -extern int iwmct_fw_load(struct iwmct_priv *priv); - -extern void iwmct_dbg_init_params(struct iwmct_priv *drv); -extern void iwmct_dbg_init_drv_attrs(struct device_driver *drv); -extern void iwmct_dbg_remove_drv_attrs(struct device_driver *drv); -extern int iwmct_send_hcmd(struct iwmct_priv *priv, u8 *cmd, u16 len); - -#endif /* __IWMC3200TOP_H__ */ diff --git a/drivers/misc/iwmc3200top/log.c b/drivers/misc/iwmc3200top/log.c deleted file mode 100644 index a36a55a49cac..000000000000 --- a/drivers/misc/iwmc3200top/log.c +++ /dev/null @@ -1,348 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/log.c - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#include <linux/kernel.h> -#include <linux/mmc/sdio_func.h> -#include <linux/slab.h> -#include <linux/ctype.h> -#include "fw-msg.h" -#include "iwmc3200top.h" -#include "log.h" - -/* Maximal hexadecimal string size of the FW memdump message */ -#define LOG_MSG_SIZE_MAX 12400 - -/* iwmct_logdefs is a global used by log macros */ -u8 iwmct_logdefs[LOG_SRC_MAX]; -static u8 iwmct_fw_logdefs[FW_LOG_SRC_MAX]; - - -static int _log_set_log_filter(u8 *logdefs, int size, u8 src, u8 logmask) -{ - int i; - - if (src < size) - logdefs[src] = logmask; - else if (src == LOG_SRC_ALL) - for (i = 0; i < size; i++) - logdefs[i] = logmask; - else - return -1; - - return 0; -} - - -int iwmct_log_set_filter(u8 src, u8 logmask) -{ - return _log_set_log_filter(iwmct_logdefs, LOG_SRC_MAX, src, logmask); -} - - -int iwmct_log_set_fw_filter(u8 src, u8 logmask) -{ - return _log_set_log_filter(iwmct_fw_logdefs, - FW_LOG_SRC_MAX, src, logmask); -} - - -static int log_msg_format_hex(char *str, int slen, u8 *ibuf, - int ilen, char *pref) -{ - int pos = 0; - int i; - int len; - - for (pos = 0, i = 0; pos < slen - 2 && pref[i] != '\0'; i++, pos++) - str[pos] = pref[i]; - - for (i = 0; pos < slen - 2 && i < ilen; pos += len, i++) - len = snprintf(&str[pos], slen - pos - 1, " %2.2X", ibuf[i]); - - if (i < ilen) - return -1; - - return 0; -} - -/* NOTE: This function is not thread safe. 
- Currently it's called only from sdio rx worker - no race there -*/ -void iwmct_log_top_message(struct iwmct_priv *priv, u8 *buf, int len) -{ - struct top_msg *msg; - static char logbuf[LOG_MSG_SIZE_MAX]; - - msg = (struct top_msg *)buf; - - if (len < sizeof(msg->hdr) + sizeof(msg->u.log.log_hdr)) { - LOG_ERROR(priv, FW_MSG, "Log message from TOP " - "is too short %d (expected %zd)\n", - len, sizeof(msg->hdr) + sizeof(msg->u.log.log_hdr)); - return; - } - - if (!(iwmct_fw_logdefs[msg->u.log.log_hdr.logsource] & - BIT(msg->u.log.log_hdr.severity)) || - !(iwmct_logdefs[LOG_SRC_FW_MSG] & BIT(msg->u.log.log_hdr.severity))) - return; - - switch (msg->hdr.category) { - case COMM_CATEGORY_TESTABILITY: - if (!(iwmct_logdefs[LOG_SRC_TST] & - BIT(msg->u.log.log_hdr.severity))) - return; - if (log_msg_format_hex(logbuf, LOG_MSG_SIZE_MAX, buf, - le16_to_cpu(msg->hdr.length) + - sizeof(msg->hdr), "<TST>")) - LOG_WARNING(priv, TST, - "TOP TST message is too long, truncating..."); - LOG_WARNING(priv, TST, "%s\n", logbuf); - break; - case COMM_CATEGORY_DEBUG: - if (msg->hdr.opcode == OP_DBG_ZSTR_MSG) - LOG_INFO(priv, FW_MSG, "%s %s", "<DBG>", - ((u8 *)msg) + sizeof(msg->hdr) - + sizeof(msg->u.log.log_hdr)); - else { - if (log_msg_format_hex(logbuf, LOG_MSG_SIZE_MAX, buf, - le16_to_cpu(msg->hdr.length) - + sizeof(msg->hdr), - "<DBG>")) - LOG_WARNING(priv, FW_MSG, - "TOP DBG message is too long," - "truncating..."); - LOG_WARNING(priv, FW_MSG, "%s\n", logbuf); - } - break; - default: - break; - } -} - -static int _log_get_filter_str(u8 *logdefs, int logdefsz, char *buf, int size) -{ - int i, pos, len; - for (i = 0, pos = 0; (pos < size-1) && (i < logdefsz); i++) { - len = snprintf(&buf[pos], size - pos - 1, "0x%02X%02X,", - i, logdefs[i]); - pos += len; - } - buf[pos-1] = '\n'; - buf[pos] = '\0'; - - if (i < logdefsz) - return -1; - return 0; -} - -int log_get_filter_str(char *buf, int size) -{ - return _log_get_filter_str(iwmct_logdefs, LOG_SRC_MAX, buf, size); -} - -int log_get_fw_filter_str(char *buf, int size) -{ - return _log_get_filter_str(iwmct_fw_logdefs, FW_LOG_SRC_MAX, buf, size); -} - -#define HEXADECIMAL_RADIX 16 -#define LOG_SRC_FORMAT 7 /* log level is in format of "0xXXXX," */ - -ssize_t show_iwmct_log_level(struct device *d, - struct device_attribute *attr, char *buf) -{ - struct iwmct_priv *priv = dev_get_drvdata(d); - char *str_buf; - int buf_size; - ssize_t ret; - - buf_size = (LOG_SRC_FORMAT * LOG_SRC_MAX) + 1; - str_buf = kzalloc(buf_size, GFP_KERNEL); - if (!str_buf) { - LOG_ERROR(priv, DEBUGFS, - "failed to allocate %d bytes\n", buf_size); - ret = -ENOMEM; - goto exit; - } - - if (log_get_filter_str(str_buf, buf_size) < 0) { - ret = -EINVAL; - goto exit; - } - - ret = sprintf(buf, "%s", str_buf); - -exit: - kfree(str_buf); - return ret; -} - -ssize_t store_iwmct_log_level(struct device *d, - struct device_attribute *attr, - const char *buf, size_t count) -{ - struct iwmct_priv *priv = dev_get_drvdata(d); - char *token, *str_buf = NULL; - long val; - ssize_t ret = count; - u8 src, mask; - - if (!count) - goto exit; - - str_buf = kzalloc(count, GFP_KERNEL); - if (!str_buf) { - LOG_ERROR(priv, DEBUGFS, - "failed to allocate %zd bytes\n", count); - ret = -ENOMEM; - goto exit; - } - - memcpy(str_buf, buf, count); - - while ((token = strsep(&str_buf, ",")) != NULL) { - while (isspace(*token)) - ++token; - if (strict_strtol(token, HEXADECIMAL_RADIX, &val)) { - LOG_ERROR(priv, DEBUGFS, - "failed to convert string to long %s\n", - token); - ret = -EINVAL; - goto exit; - } - - mask = val & 
0xFF; - src = (val & 0XFF00) >> 8; - iwmct_log_set_filter(src, mask); - } - -exit: - kfree(str_buf); - return ret; -} - -ssize_t show_iwmct_log_level_fw(struct device *d, - struct device_attribute *attr, char *buf) -{ - struct iwmct_priv *priv = dev_get_drvdata(d); - char *str_buf; - int buf_size; - ssize_t ret; - - buf_size = (LOG_SRC_FORMAT * FW_LOG_SRC_MAX) + 2; - - str_buf = kzalloc(buf_size, GFP_KERNEL); - if (!str_buf) { - LOG_ERROR(priv, DEBUGFS, - "failed to allocate %d bytes\n", buf_size); - ret = -ENOMEM; - goto exit; - } - - if (log_get_fw_filter_str(str_buf, buf_size) < 0) { - ret = -EINVAL; - goto exit; - } - - ret = sprintf(buf, "%s", str_buf); - -exit: - kfree(str_buf); - return ret; -} - -ssize_t store_iwmct_log_level_fw(struct device *d, - struct device_attribute *attr, - const char *buf, size_t count) -{ - struct iwmct_priv *priv = dev_get_drvdata(d); - struct top_msg cmd; - char *token, *str_buf = NULL; - ssize_t ret = count; - u16 cmdlen = 0; - int i; - long val; - u8 src, mask; - - if (!count) - goto exit; - - str_buf = kzalloc(count, GFP_KERNEL); - if (!str_buf) { - LOG_ERROR(priv, DEBUGFS, - "failed to allocate %zd bytes\n", count); - ret = -ENOMEM; - goto exit; - } - - memcpy(str_buf, buf, count); - - cmd.hdr.type = COMM_TYPE_H2D; - cmd.hdr.category = COMM_CATEGORY_DEBUG; - cmd.hdr.opcode = CMD_DBG_LOG_LEVEL; - - for (i = 0; ((token = strsep(&str_buf, ",")) != NULL) && - (i < FW_LOG_SRC_MAX); i++) { - - while (isspace(*token)) - ++token; - - if (strict_strtol(token, HEXADECIMAL_RADIX, &val)) { - LOG_ERROR(priv, DEBUGFS, - "failed to convert string to long %s\n", - token); - ret = -EINVAL; - goto exit; - } - - mask = val & 0xFF; /* LSB */ - src = (val & 0XFF00) >> 8; /* 2nd least significant byte. */ - iwmct_log_set_fw_filter(src, mask); - - cmd.u.logdefs[i].logsource = src; - cmd.u.logdefs[i].sevmask = mask; - } - - cmd.hdr.length = cpu_to_le16(i * sizeof(cmd.u.logdefs[0])); - cmdlen = (i * sizeof(cmd.u.logdefs[0]) + sizeof(cmd.hdr)); - - ret = iwmct_send_hcmd(priv, (u8 *)&cmd, cmdlen); - if (ret) { - LOG_ERROR(priv, DEBUGFS, - "Failed to send %d bytes of fwcmd, ret=%zd\n", - cmdlen, ret); - goto exit; - } else - LOG_INFO(priv, DEBUGFS, "fwcmd sent (%d bytes)\n", cmdlen); - - ret = count; - -exit: - kfree(str_buf); - return ret; -} - diff --git a/drivers/misc/iwmc3200top/log.h b/drivers/misc/iwmc3200top/log.h deleted file mode 100644 index 4434bb16cea7..000000000000 --- a/drivers/misc/iwmc3200top/log.h +++ /dev/null @@ -1,171 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/log.h - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
- * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#ifndef __LOG_H__ -#define __LOG_H__ - - -/* log severity: - * The log levels here match FW log levels - * so values need to stay as is */ -#define LOG_SEV_CRITICAL 0 -#define LOG_SEV_ERROR 1 -#define LOG_SEV_WARNING 2 -#define LOG_SEV_INFO 3 -#define LOG_SEV_INFOEX 4 - -/* Log levels not defined for FW */ -#define LOG_SEV_TRACE 5 -#define LOG_SEV_DUMP 6 - -#define LOG_SEV_FW_FILTER_ALL \ - (BIT(LOG_SEV_CRITICAL) | \ - BIT(LOG_SEV_ERROR) | \ - BIT(LOG_SEV_WARNING) | \ - BIT(LOG_SEV_INFO) | \ - BIT(LOG_SEV_INFOEX)) - -#define LOG_SEV_FILTER_ALL \ - (BIT(LOG_SEV_CRITICAL) | \ - BIT(LOG_SEV_ERROR) | \ - BIT(LOG_SEV_WARNING) | \ - BIT(LOG_SEV_INFO) | \ - BIT(LOG_SEV_INFOEX) | \ - BIT(LOG_SEV_TRACE) | \ - BIT(LOG_SEV_DUMP)) - -/* log source */ -#define LOG_SRC_INIT 0 -#define LOG_SRC_DEBUGFS 1 -#define LOG_SRC_FW_DOWNLOAD 2 -#define LOG_SRC_FW_MSG 3 -#define LOG_SRC_TST 4 -#define LOG_SRC_IRQ 5 - -#define LOG_SRC_MAX 6 -#define LOG_SRC_ALL 0xFF - -/** - * Default intitialization runtime log level - */ -#ifndef LOG_SEV_FILTER_RUNTIME -#define LOG_SEV_FILTER_RUNTIME \ - (BIT(LOG_SEV_CRITICAL) | \ - BIT(LOG_SEV_ERROR) | \ - BIT(LOG_SEV_WARNING)) -#endif - -#ifndef FW_LOG_SEV_FILTER_RUNTIME -#define FW_LOG_SEV_FILTER_RUNTIME LOG_SEV_FILTER_ALL -#endif - -#ifdef CONFIG_IWMC3200TOP_DEBUG -/** - * Log macros - */ - -#define priv2dev(priv) (&(priv->func)->dev) - -#define LOG_CRITICAL(priv, src, fmt, args...) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_CRITICAL)) \ - dev_crit(priv2dev(priv), "%s %d: " fmt, \ - __func__, __LINE__, ##args); \ -} while (0) - -#define LOG_ERROR(priv, src, fmt, args...) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_ERROR)) \ - dev_err(priv2dev(priv), "%s %d: " fmt, \ - __func__, __LINE__, ##args); \ -} while (0) - -#define LOG_WARNING(priv, src, fmt, args...) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_WARNING)) \ - dev_warn(priv2dev(priv), "%s %d: " fmt, \ - __func__, __LINE__, ##args); \ -} while (0) - -#define LOG_INFO(priv, src, fmt, args...) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_INFO)) \ - dev_info(priv2dev(priv), "%s %d: " fmt, \ - __func__, __LINE__, ##args); \ -} while (0) - -#define LOG_TRACE(priv, src, fmt, args...) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_TRACE)) \ - dev_dbg(priv2dev(priv), "%s %d: " fmt, \ - __func__, __LINE__, ##args); \ -} while (0) - -#define LOG_HEXDUMP(src, ptr, len) \ -do { \ - if (iwmct_logdefs[LOG_SRC_ ## src] & BIT(LOG_SEV_DUMP)) \ - print_hex_dump(KERN_DEBUG, "", DUMP_PREFIX_NONE, \ - 16, 1, ptr, len, false); \ -} while (0) - -void iwmct_log_top_message(struct iwmct_priv *priv, u8 *buf, int len); - -extern u8 iwmct_logdefs[]; - -int iwmct_log_set_filter(u8 src, u8 logmask); -int iwmct_log_set_fw_filter(u8 src, u8 logmask); - -ssize_t show_iwmct_log_level(struct device *d, - struct device_attribute *attr, char *buf); -ssize_t store_iwmct_log_level(struct device *d, - struct device_attribute *attr, - const char *buf, size_t count); -ssize_t show_iwmct_log_level_fw(struct device *d, - struct device_attribute *attr, char *buf); -ssize_t store_iwmct_log_level_fw(struct device *d, - struct device_attribute *attr, - const char *buf, size_t count); - -#else - -#define LOG_CRITICAL(priv, src, fmt, args...) -#define LOG_ERROR(priv, src, fmt, args...) -#define LOG_WARNING(priv, src, fmt, args...) -#define LOG_INFO(priv, src, fmt, args...) 
-#define LOG_TRACE(priv, src, fmt, args...) -#define LOG_HEXDUMP(src, ptr, len) - -static inline void iwmct_log_top_message(struct iwmct_priv *priv, - u8 *buf, int len) {} -static inline int iwmct_log_set_filter(u8 src, u8 logmask) { return 0; } -static inline int iwmct_log_set_fw_filter(u8 src, u8 logmask) { return 0; } - -#endif /* CONFIG_IWMC3200TOP_DEBUG */ - -int log_get_filter_str(char *buf, int size); -int log_get_fw_filter_str(char *buf, int size); - -#endif /* __LOG_H__ */ diff --git a/drivers/misc/iwmc3200top/main.c b/drivers/misc/iwmc3200top/main.c deleted file mode 100644 index 701eb600b127..000000000000 --- a/drivers/misc/iwmc3200top/main.c +++ /dev/null @@ -1,662 +0,0 @@ -/* - * iwmc3200top - Intel Wireless MultiCom 3200 Top Driver - * drivers/misc/iwmc3200top/main.c - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - * - * Author Name: Maxim Grabarnik <maxim.grabarnink@intel.com> - * - - * - */ - -#include <linux/module.h> -#include <linux/slab.h> -#include <linux/init.h> -#include <linux/kernel.h> -#include <linux/debugfs.h> -#include <linux/mmc/sdio_ids.h> -#include <linux/mmc/sdio_func.h> -#include <linux/mmc/sdio.h> - -#include "iwmc3200top.h" -#include "log.h" -#include "fw-msg.h" -#include "debugfs.h" - - -#define DRIVER_DESCRIPTION "Intel(R) IWMC 3200 Top Driver" -#define DRIVER_COPYRIGHT "Copyright (c) 2008 Intel Corporation." - -#define DRIVER_VERSION "0.1.62" - -MODULE_DESCRIPTION(DRIVER_DESCRIPTION); -MODULE_VERSION(DRIVER_VERSION); -MODULE_LICENSE("GPL"); -MODULE_AUTHOR(DRIVER_COPYRIGHT); -MODULE_FIRMWARE(FW_NAME(FW_API_VER)); - - -static inline int __iwmct_tx(struct iwmct_priv *priv, void *src, int count) -{ - return sdio_memcpy_toio(priv->func, IWMC_SDIO_DATA_ADDR, src, count); - -} -int iwmct_tx(struct iwmct_priv *priv, void *src, int count) -{ - int ret; - sdio_claim_host(priv->func); - ret = __iwmct_tx(priv, src, count); - sdio_release_host(priv->func); - return ret; -} -/* - * This workers main task is to wait for OP_OPR_ALIVE - * from TOP FW until ALIVE_MSG_TIMOUT timeout is elapsed. - * When OP_OPR_ALIVE received it will issue - * a call to "bus_rescan_devices". 
- */ -static void iwmct_rescan_worker(struct work_struct *ws) -{ - struct iwmct_priv *priv; - int ret; - - priv = container_of(ws, struct iwmct_priv, bus_rescan_worker); - - LOG_INFO(priv, FW_MSG, "Calling bus_rescan\n"); - - ret = bus_rescan_devices(priv->func->dev.bus); - if (ret < 0) - LOG_INFO(priv, INIT, "bus_rescan_devices FAILED!!!\n"); -} - -static void op_top_message(struct iwmct_priv *priv, struct top_msg *msg) -{ - switch (msg->hdr.opcode) { - case OP_OPR_ALIVE: - LOG_INFO(priv, FW_MSG, "Got ALIVE from device, wake rescan\n"); - schedule_work(&priv->bus_rescan_worker); - break; - default: - LOG_INFO(priv, FW_MSG, "Received msg opcode 0x%X\n", - msg->hdr.opcode); - break; - } -} - - -static void handle_top_message(struct iwmct_priv *priv, u8 *buf, int len) -{ - struct top_msg *msg; - - msg = (struct top_msg *)buf; - - if (msg->hdr.type != COMM_TYPE_D2H) { - LOG_ERROR(priv, FW_MSG, - "Message from TOP with invalid message type 0x%X\n", - msg->hdr.type); - return; - } - - if (len < sizeof(msg->hdr)) { - LOG_ERROR(priv, FW_MSG, - "Message from TOP is too short for message header " - "received %d bytes, expected at least %zd bytes\n", - len, sizeof(msg->hdr)); - return; - } - - if (len < le16_to_cpu(msg->hdr.length) + sizeof(msg->hdr)) { - LOG_ERROR(priv, FW_MSG, - "Message length (%d bytes) is shorter than " - "in header (%d bytes)\n", - len, le16_to_cpu(msg->hdr.length)); - return; - } - - switch (msg->hdr.category) { - case COMM_CATEGORY_OPERATIONAL: - op_top_message(priv, (struct top_msg *)buf); - break; - - case COMM_CATEGORY_DEBUG: - case COMM_CATEGORY_TESTABILITY: - case COMM_CATEGORY_DIAGNOSTICS: - iwmct_log_top_message(priv, buf, len); - break; - - default: - LOG_ERROR(priv, FW_MSG, - "Message from TOP with unknown category 0x%X\n", - msg->hdr.category); - break; - } -} - -int iwmct_send_hcmd(struct iwmct_priv *priv, u8 *cmd, u16 len) -{ - int ret; - u8 *buf; - - LOG_TRACE(priv, FW_MSG, "Sending hcmd:\n"); - - /* add padding to 256 for IWMC */ - ((struct top_msg *)cmd)->hdr.flags |= CMD_FLAG_PADDING_256; - - LOG_HEXDUMP(FW_MSG, cmd, len); - - if (len > FW_HCMD_BLOCK_SIZE) { - LOG_ERROR(priv, FW_MSG, "size %d exceeded hcmd max size %d\n", - len, FW_HCMD_BLOCK_SIZE); - return -1; - } - - buf = kzalloc(FW_HCMD_BLOCK_SIZE, GFP_KERNEL); - if (!buf) { - LOG_ERROR(priv, FW_MSG, "kzalloc error, buf size %d\n", - FW_HCMD_BLOCK_SIZE); - return -1; - } - - memcpy(buf, cmd, len); - ret = iwmct_tx(priv, buf, FW_HCMD_BLOCK_SIZE); - - kfree(buf); - return ret; -} - - -static void iwmct_irq_read_worker(struct work_struct *ws) -{ - struct iwmct_priv *priv; - struct iwmct_work_struct *read_req; - __le32 *buf = NULL; - int ret; - int iosize; - u32 barker; - bool is_barker; - - priv = container_of(ws, struct iwmct_priv, isr_worker); - - LOG_TRACE(priv, IRQ, "enter iwmct_irq_read_worker %p\n", ws); - - /* --------------------- Handshake with device -------------------- */ - sdio_claim_host(priv->func); - - /* all list manipulations have to be protected by - * sdio_claim_host/sdio_release_host */ - if (list_empty(&priv->read_req_list)) { - LOG_ERROR(priv, IRQ, "read_req_list empty in read worker\n"); - goto exit_release; - } - - read_req = list_entry(priv->read_req_list.next, - struct iwmct_work_struct, list); - - list_del(&read_req->list); - iosize = read_req->iosize; - kfree(read_req); - - buf = kzalloc(iosize, GFP_KERNEL); - if (!buf) { - LOG_ERROR(priv, IRQ, "kzalloc error, buf size %d\n", iosize); - goto exit_release; - } - - LOG_INFO(priv, IRQ, "iosize=%d, buf=%p, func=%d\n", - iosize, buf, 
priv->func->num); - - /* read from device */ - ret = sdio_memcpy_fromio(priv->func, buf, IWMC_SDIO_DATA_ADDR, iosize); - if (ret) { - LOG_ERROR(priv, IRQ, "error %d reading buffer\n", ret); - goto exit_release; - } - - LOG_HEXDUMP(IRQ, (u8 *)buf, iosize); - - barker = le32_to_cpu(buf[0]); - - /* Verify whether it's a barker and if not - treat as regular Rx */ - if (barker == IWMC_BARKER_ACK || - (barker & BARKER_DNLOAD_BARKER_MSK) == IWMC_BARKER_REBOOT) { - - /* Valid Barker is equal on first 4 dwords */ - is_barker = (buf[1] == buf[0]) && - (buf[2] == buf[0]) && - (buf[3] == buf[0]); - - if (!is_barker) { - LOG_WARNING(priv, IRQ, - "Potentially inconsistent barker " - "%08X_%08X_%08X_%08X\n", - le32_to_cpu(buf[0]), le32_to_cpu(buf[1]), - le32_to_cpu(buf[2]), le32_to_cpu(buf[3])); - } - } else { - is_barker = false; - } - - /* Handle Top CommHub message */ - if (!is_barker) { - sdio_release_host(priv->func); - handle_top_message(priv, (u8 *)buf, iosize); - goto exit; - } else if (barker == IWMC_BARKER_ACK) { /* Handle barkers */ - if (atomic_read(&priv->dev_sync) == 0) { - LOG_ERROR(priv, IRQ, - "ACK barker arrived out-of-sync\n"); - goto exit_release; - } - - /* Continuing to FW download (after Sync is completed)*/ - atomic_set(&priv->dev_sync, 0); - LOG_INFO(priv, IRQ, "ACK barker arrived " - "- starting FW download\n"); - } else { /* REBOOT barker */ - LOG_INFO(priv, IRQ, "Received reboot barker: %x\n", barker); - priv->barker = barker; - - if (barker & BARKER_DNLOAD_SYNC_MSK) { - /* Send the same barker back */ - ret = __iwmct_tx(priv, buf, iosize); - if (ret) { - LOG_ERROR(priv, IRQ, - "error %d echoing barker\n", ret); - goto exit_release; - } - LOG_INFO(priv, IRQ, "Echoing barker to device\n"); - atomic_set(&priv->dev_sync, 1); - goto exit_release; - } - - /* Continuing to FW download (without Sync) */ - LOG_INFO(priv, IRQ, "No sync requested " - "- starting FW download\n"); - } - - sdio_release_host(priv->func); - - if (priv->dbg.fw_download) - iwmct_fw_load(priv); - else - LOG_ERROR(priv, IRQ, "FW download not allowed\n"); - - goto exit; - -exit_release: - sdio_release_host(priv->func); -exit: - kfree(buf); - LOG_TRACE(priv, IRQ, "exit iwmct_irq_read_worker\n"); -} - -static void iwmct_irq(struct sdio_func *func) -{ - struct iwmct_priv *priv; - int val, ret; - int iosize; - int addr = IWMC_SDIO_INTR_GET_SIZE_ADDR; - struct iwmct_work_struct *read_req; - - priv = sdio_get_drvdata(func); - - LOG_TRACE(priv, IRQ, "enter iwmct_irq\n"); - - /* read the function's status register */ - val = sdio_readb(func, IWMC_SDIO_INTR_STATUS_ADDR, &ret); - - LOG_TRACE(priv, IRQ, "iir value = %d, ret=%d\n", val, ret); - - if (!val) { - LOG_ERROR(priv, IRQ, "iir = 0, exiting ISR\n"); - goto exit_clear_intr; - } - - - /* - * read 2 bytes of the transaction size - * IMPORTANT: sdio transaction size has to be read before clearing - * sdio interrupt!!! 
- */ - val = sdio_readb(priv->func, addr++, &ret); - iosize = val; - val = sdio_readb(priv->func, addr++, &ret); - iosize += val << 8; - - LOG_INFO(priv, IRQ, "READ size %d\n", iosize); - - if (iosize == 0) { - LOG_ERROR(priv, IRQ, "READ size %d, exiting ISR\n", iosize); - goto exit_clear_intr; - } - - /* allocate a work structure to pass iosize to the worker */ - read_req = kzalloc(sizeof(struct iwmct_work_struct), GFP_KERNEL); - if (!read_req) { - LOG_ERROR(priv, IRQ, "failed to allocate read_req, exit ISR\n"); - goto exit_clear_intr; - } - - INIT_LIST_HEAD(&read_req->list); - read_req->iosize = iosize; - - list_add_tail(&priv->read_req_list, &read_req->list); - - /* clear the function's interrupt request bit (write 1 to clear) */ - sdio_writeb(func, 1, IWMC_SDIO_INTR_CLEAR_ADDR, &ret); - - schedule_work(&priv->isr_worker); - - LOG_TRACE(priv, IRQ, "exit iwmct_irq\n"); - - return; - -exit_clear_intr: - /* clear the function's interrupt request bit (write 1 to clear) */ - sdio_writeb(func, 1, IWMC_SDIO_INTR_CLEAR_ADDR, &ret); -} - - -static int blocks; -module_param(blocks, int, 0604); -MODULE_PARM_DESC(blocks, "max_blocks_to_send"); - -static bool dump; -module_param(dump, bool, 0604); -MODULE_PARM_DESC(dump, "dump_hex_content"); - -static bool jump = 1; -module_param(jump, bool, 0604); - -static bool direct = 1; -module_param(direct, bool, 0604); - -static bool checksum = 1; -module_param(checksum, bool, 0604); - -static bool fw_download = 1; -module_param(fw_download, bool, 0604); - -static int block_size = IWMC_SDIO_BLK_SIZE; -module_param(block_size, int, 0404); - -static int download_trans_blks = IWMC_DEFAULT_TR_BLK; -module_param(download_trans_blks, int, 0604); - -static bool rubbish_barker; -module_param(rubbish_barker, bool, 0604); - -#ifdef CONFIG_IWMC3200TOP_DEBUG -static int log_level[LOG_SRC_MAX]; -static unsigned int log_level_argc; -module_param_array(log_level, int, &log_level_argc, 0604); -MODULE_PARM_DESC(log_level, "log_level"); - -static int log_level_fw[FW_LOG_SRC_MAX]; -static unsigned int log_level_fw_argc; -module_param_array(log_level_fw, int, &log_level_fw_argc, 0604); -MODULE_PARM_DESC(log_level_fw, "log_level_fw"); -#endif - -void iwmct_dbg_init_params(struct iwmct_priv *priv) -{ -#ifdef CONFIG_IWMC3200TOP_DEBUG - int i; - - for (i = 0; i < log_level_argc; i++) { - dev_notice(&priv->func->dev, "log_level[%d]=0x%X\n", - i, log_level[i]); - iwmct_log_set_filter((log_level[i] >> 8) & 0xFF, - log_level[i] & 0xFF); - } - for (i = 0; i < log_level_fw_argc; i++) { - dev_notice(&priv->func->dev, "log_level_fw[%d]=0x%X\n", - i, log_level_fw[i]); - iwmct_log_set_fw_filter((log_level_fw[i] >> 8) & 0xFF, - log_level_fw[i] & 0xFF); - } -#endif - - priv->dbg.blocks = blocks; - LOG_INFO(priv, INIT, "blocks=%d\n", blocks); - priv->dbg.dump = (bool)dump; - LOG_INFO(priv, INIT, "dump=%d\n", dump); - priv->dbg.jump = (bool)jump; - LOG_INFO(priv, INIT, "jump=%d\n", jump); - priv->dbg.direct = (bool)direct; - LOG_INFO(priv, INIT, "direct=%d\n", direct); - priv->dbg.checksum = (bool)checksum; - LOG_INFO(priv, INIT, "checksum=%d\n", checksum); - priv->dbg.fw_download = (bool)fw_download; - LOG_INFO(priv, INIT, "fw_download=%d\n", fw_download); - priv->dbg.block_size = block_size; - LOG_INFO(priv, INIT, "block_size=%d\n", block_size); - priv->dbg.download_trans_blks = download_trans_blks; - LOG_INFO(priv, INIT, "download_trans_blks=%d\n", download_trans_blks); -} - -/***************************************************************************** - * - * sysfs attributes - * - 
*****************************************************************************/ -static ssize_t show_iwmct_fw_version(struct device *d, - struct device_attribute *attr, char *buf) -{ - struct iwmct_priv *priv = dev_get_drvdata(d); - return sprintf(buf, "%s\n", priv->dbg.label_fw); -} -static DEVICE_ATTR(cc_label_fw, S_IRUGO, show_iwmct_fw_version, NULL); - -#ifdef CONFIG_IWMC3200TOP_DEBUG -static DEVICE_ATTR(log_level, S_IWUSR | S_IRUGO, - show_iwmct_log_level, store_iwmct_log_level); -static DEVICE_ATTR(log_level_fw, S_IWUSR | S_IRUGO, - show_iwmct_log_level_fw, store_iwmct_log_level_fw); -#endif - -static struct attribute *iwmct_sysfs_entries[] = { - &dev_attr_cc_label_fw.attr, -#ifdef CONFIG_IWMC3200TOP_DEBUG - &dev_attr_log_level.attr, - &dev_attr_log_level_fw.attr, -#endif - NULL -}; - -static struct attribute_group iwmct_attribute_group = { - .name = NULL, /* put in device directory */ - .attrs = iwmct_sysfs_entries, -}; - - -static int iwmct_probe(struct sdio_func *func, - const struct sdio_device_id *id) -{ - struct iwmct_priv *priv; - int ret; - int val = 1; - int addr = IWMC_SDIO_INTR_ENABLE_ADDR; - - dev_dbg(&func->dev, "enter iwmct_probe\n"); - - dev_dbg(&func->dev, "IRQ polling period id %u msecs, HZ is %d\n", - jiffies_to_msecs(2147483647), HZ); - - priv = kzalloc(sizeof(struct iwmct_priv), GFP_KERNEL); - if (!priv) { - dev_err(&func->dev, "kzalloc error\n"); - return -ENOMEM; - } - priv->func = func; - sdio_set_drvdata(func, priv); - - INIT_WORK(&priv->bus_rescan_worker, iwmct_rescan_worker); - INIT_WORK(&priv->isr_worker, iwmct_irq_read_worker); - - init_waitqueue_head(&priv->wait_q); - - sdio_claim_host(func); - /* FIXME: Remove after it is fixed in the Boot ROM upgrade */ - func->enable_timeout = 10; - - /* In our HW, setting the block size also wakes up the boot rom. 
*/ - ret = sdio_set_block_size(func, priv->dbg.block_size); - if (ret) { - LOG_ERROR(priv, INIT, - "sdio_set_block_size() failure: %d\n", ret); - goto error_sdio_enable; - } - - ret = sdio_enable_func(func); - if (ret) { - LOG_ERROR(priv, INIT, "sdio_enable_func() failure: %d\n", ret); - goto error_sdio_enable; - } - - /* init reset and dev_sync states */ - atomic_set(&priv->reset, 0); - atomic_set(&priv->dev_sync, 0); - - /* init read req queue */ - INIT_LIST_HEAD(&priv->read_req_list); - - /* process configurable parameters */ - iwmct_dbg_init_params(priv); - ret = sysfs_create_group(&func->dev.kobj, &iwmct_attribute_group); - if (ret) { - LOG_ERROR(priv, INIT, "Failed to register attributes and " - "initialize module_params\n"); - goto error_dev_attrs; - } - - iwmct_dbgfs_register(priv, DRV_NAME); - - if (!priv->dbg.direct && priv->dbg.download_trans_blks > 8) { - LOG_INFO(priv, INIT, - "Reducing transaction to 8 blocks = 2K (from %d)\n", - priv->dbg.download_trans_blks); - priv->dbg.download_trans_blks = 8; - } - priv->trans_len = priv->dbg.download_trans_blks * priv->dbg.block_size; - LOG_INFO(priv, INIT, "Transaction length = %d\n", priv->trans_len); - - ret = sdio_claim_irq(func, iwmct_irq); - if (ret) { - LOG_ERROR(priv, INIT, "sdio_claim_irq() failure: %d\n", ret); - goto error_claim_irq; - } - - - /* Enable function's interrupt */ - sdio_writeb(priv->func, val, addr, &ret); - if (ret) { - LOG_ERROR(priv, INIT, "Failure writing to " - "Interrupt Enable Register (%d): %d\n", addr, ret); - goto error_enable_int; - } - - sdio_release_host(func); - - LOG_INFO(priv, INIT, "exit iwmct_probe\n"); - - return ret; - -error_enable_int: - sdio_release_irq(func); -error_claim_irq: - sdio_disable_func(func); -error_dev_attrs: - iwmct_dbgfs_unregister(priv->dbgfs); - sysfs_remove_group(&func->dev.kobj, &iwmct_attribute_group); -error_sdio_enable: - sdio_release_host(func); - return ret; -} - -static void iwmct_remove(struct sdio_func *func) -{ - struct iwmct_work_struct *read_req; - struct iwmct_priv *priv = sdio_get_drvdata(func); - - LOG_INFO(priv, INIT, "enter\n"); - - sdio_claim_host(func); - sdio_release_irq(func); - sdio_release_host(func); - - /* Make sure works are finished */ - flush_work_sync(&priv->bus_rescan_worker); - flush_work_sync(&priv->isr_worker); - - sdio_claim_host(func); - sdio_disable_func(func); - sysfs_remove_group(&func->dev.kobj, &iwmct_attribute_group); - iwmct_dbgfs_unregister(priv->dbgfs); - sdio_release_host(func); - - /* free read requests */ - while (!list_empty(&priv->read_req_list)) { - read_req = list_entry(priv->read_req_list.next, - struct iwmct_work_struct, list); - - list_del(&read_req->list); - kfree(read_req); - } - - kfree(priv); -} - - -static const struct sdio_device_id iwmct_ids[] = { - /* Intel Wireless MultiCom 3200 Top Driver */ - { SDIO_DEVICE(SDIO_VENDOR_ID_INTEL, 0x1404)}, - { }, /* Terminating entry */ -}; - -MODULE_DEVICE_TABLE(sdio, iwmct_ids); - -static struct sdio_driver iwmct_driver = { - .probe = iwmct_probe, - .remove = iwmct_remove, - .name = DRV_NAME, - .id_table = iwmct_ids, -}; - -static int __init iwmct_init(void) -{ - int rc; - - /* Default log filter settings */ - iwmct_log_set_filter(LOG_SRC_ALL, LOG_SEV_FILTER_RUNTIME); - iwmct_log_set_filter(LOG_SRC_FW_MSG, LOG_SEV_FW_FILTER_ALL); - iwmct_log_set_fw_filter(LOG_SRC_ALL, FW_LOG_SEV_FILTER_RUNTIME); - - rc = sdio_register_driver(&iwmct_driver); - - return rc; -} - -static void __exit iwmct_exit(void) -{ - sdio_unregister_driver(&iwmct_driver); -} - -module_init(iwmct_init); 
-module_exit(iwmct_exit); - diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c index dd5e04813b76..545c09ed9079 100644 --- a/drivers/net/appletalk/cops.c +++ b/drivers/net/appletalk/cops.c @@ -936,7 +936,7 @@ static int cops_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) { struct cops_local *lp = netdev_priv(dev); struct sockaddr_at *sa = (struct sockaddr_at *)&ifr->ifr_addr; - struct atalk_addr *aa = (struct atalk_addr *)&lp->node_addr; + struct atalk_addr *aa = &lp->node_addr; switch(cmd) { diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index 3463b469e657..a030e635f001 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -2454,24 +2454,27 @@ int bond_3ad_xmit_xor(struct sk_buff *skb, struct net_device *dev) out: if (res) { /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); + kfree_skb(skb); } return NETDEV_TX_OK; } -int bond_3ad_lacpdu_recv(struct sk_buff *skb, struct bonding *bond, - struct slave *slave) +int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond, + struct slave *slave) { int ret = RX_HANDLER_ANOTHER; + struct lacpdu *lacpdu, _lacpdu; + if (skb->protocol != PKT_TYPE_LACPDU) return ret; - if (!pskb_may_pull(skb, sizeof(struct lacpdu))) + lacpdu = skb_header_pointer(skb, 0, sizeof(_lacpdu), &_lacpdu); + if (!lacpdu) return ret; read_lock(&bond->lock); - ret = bond_3ad_rx_indication((struct lacpdu *) skb->data, slave, skb->len); + ret = bond_3ad_rx_indication(lacpdu, slave, skb->len); read_unlock(&bond->lock); return ret; } diff --git a/drivers/net/bonding/bond_3ad.h b/drivers/net/bonding/bond_3ad.h index 5ee7e3c45db7..0cfaa4afdece 100644 --- a/drivers/net/bonding/bond_3ad.h +++ b/drivers/net/bonding/bond_3ad.h @@ -274,8 +274,8 @@ void bond_3ad_adapter_duplex_changed(struct slave *slave); void bond_3ad_handle_link_change(struct slave *slave, char link); int bond_3ad_get_active_agg_info(struct bonding *bond, struct ad_info *ad_info); int bond_3ad_xmit_xor(struct sk_buff *skb, struct net_device *dev); -int bond_3ad_lacpdu_recv(struct sk_buff *skb, struct bonding *bond, - struct slave *slave); +int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond, + struct slave *slave); int bond_3ad_set_carrier(struct bonding *bond); void bond_3ad_update_lacp_rate(struct bonding *bond); #endif //__BOND_3AD_H__ diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index 0f59c1564e53..e15cc11edbbe 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -342,27 +342,17 @@ static void rlb_update_entry_from_arp(struct bonding *bond, struct arp_pkt *arp) _unlock_rx_hashtbl_bh(bond); } -static int rlb_arp_recv(struct sk_buff *skb, struct bonding *bond, - struct slave *slave) +static int rlb_arp_recv(const struct sk_buff *skb, struct bonding *bond, + struct slave *slave) { - struct arp_pkt *arp; + struct arp_pkt *arp, _arp; if (skb->protocol != cpu_to_be16(ETH_P_ARP)) goto out; - arp = (struct arp_pkt *) skb->data; - if (!arp) { - pr_debug("Packet has no ARP data\n"); + arp = skb_header_pointer(skb, 0, sizeof(_arp), &_arp); + if (!arp) goto out; - } - - if (!pskb_may_pull(skb, arp_hdr_len(bond->dev))) - goto out; - - if (skb->len < sizeof(struct arp_pkt)) { - pr_debug("Packet is too small to be an ARP\n"); - goto out; - } if (arp->op_code == htons(ARPOP_REPLY)) { /* update rx hash table for this ARP */ @@ -1356,12 +1346,12 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev) } } + 
read_unlock(&bond->curr_slave_lock); + if (res) { /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); + kfree_skb(skb); } - read_unlock(&bond->curr_slave_lock); - return NETDEV_TX_OK; } diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 2ee76993f052..6fae5f3ec7f6 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -395,8 +395,8 @@ int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, skb->dev = slave_dev; BUILD_BUG_ON(sizeof(skb->queue_mapping) != - sizeof(qdisc_skb_cb(skb)->bond_queue_mapping)); - skb->queue_mapping = qdisc_skb_cb(skb)->bond_queue_mapping; + sizeof(qdisc_skb_cb(skb)->slave_dev_queue_mapping)); + skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping; if (unlikely(netpoll_tx_running(slave_dev))) bond_netpoll_send_skb(bond_get_slave_by_dev(bond, slave_dev), skb); @@ -1240,9 +1240,7 @@ static inline int slave_enable_netpoll(struct slave *slave) if (!np) goto out; - np->dev = slave->dev; - strlcpy(np->dev_name, slave->dev->name, IFNAMSIZ); - err = __netpoll_setup(np); + err = __netpoll_setup(np, slave->dev); if (err) { kfree(np); goto out; @@ -1384,6 +1382,7 @@ static void bond_compute_features(struct bonding *bond) netdev_features_t vlan_features = BOND_VLAN_FEATURES; unsigned short max_hard_header_len = ETH_HLEN; int i; + unsigned int flags, dst_release_flag = IFF_XMIT_DST_RELEASE; read_lock(&bond->lock); @@ -1394,6 +1393,7 @@ static void bond_compute_features(struct bonding *bond) vlan_features = netdev_increment_features(vlan_features, slave->dev->vlan_features, BOND_VLAN_FEATURES); + dst_release_flag &= slave->dev->priv_flags; if (slave->dev->hard_header_len > max_hard_header_len) max_hard_header_len = slave->dev->hard_header_len; } @@ -1402,6 +1402,9 @@ done: bond_dev->vlan_features = vlan_features; bond_dev->hard_header_len = max_hard_header_len; + flags = bond_dev->priv_flags & ~IFF_XMIT_DST_RELEASE; + bond_dev->priv_flags = flags | dst_release_flag; + read_unlock(&bond->lock); netdev_change_features(bond_dev); @@ -1445,8 +1448,8 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb) struct sk_buff *skb = *pskb; struct slave *slave; struct bonding *bond; - int (*recv_probe)(struct sk_buff *, struct bonding *, - struct slave *); + int (*recv_probe)(const struct sk_buff *, struct bonding *, + struct slave *); int ret = RX_HANDLER_ANOTHER; skb = skb_share_check(skb, GFP_ATOMIC); @@ -1463,15 +1466,10 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb) recv_probe = ACCESS_ONCE(bond->recv_probe); if (recv_probe) { - struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC); - - if (likely(nskb)) { - ret = recv_probe(nskb, bond, slave); - dev_kfree_skb(nskb); - if (ret == RX_HANDLER_CONSUMED) { - consume_skb(skb); - return ret; - } + ret = recv_probe(skb, bond, slave); + if (ret == RX_HANDLER_CONSUMED) { + consume_skb(skb); + return ret; } } @@ -2738,25 +2736,31 @@ static void bond_validate_arp(struct bonding *bond, struct slave *slave, __be32 } } -static int bond_arp_rcv(struct sk_buff *skb, struct bonding *bond, - struct slave *slave) +static int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond, + struct slave *slave) { - struct arphdr *arp; + struct arphdr *arp = (struct arphdr *)skb->data; unsigned char *arp_ptr; __be32 sip, tip; + int alen; if (skb->protocol != __cpu_to_be16(ETH_P_ARP)) return RX_HANDLER_ANOTHER; read_lock(&bond->lock); + alen = arp_hdr_len(bond->dev); pr_debug("bond_arp_rcv: bond %s skb->dev %s\n", 
bond->dev->name, skb->dev->name); - if (!pskb_may_pull(skb, arp_hdr_len(bond->dev))) - goto out_unlock; + if (alen > skb_headlen(skb)) { + arp = kmalloc(alen, GFP_ATOMIC); + if (!arp) + goto out_unlock; + if (skb_copy_bits(skb, 0, arp, alen) < 0) + goto out_unlock; + } - arp = arp_hdr(skb); if (arp->ar_hln != bond->dev->addr_len || skb->pkt_type == PACKET_OTHERHOST || skb->pkt_type == PACKET_LOOPBACK || @@ -2791,6 +2795,8 @@ static int bond_arp_rcv(struct sk_buff *skb, struct bonding *bond, out_unlock: read_unlock(&bond->lock); + if (arp != (struct arphdr *)skb->data) + kfree(arp); return RX_HANDLER_ANOTHER; } @@ -3993,7 +3999,7 @@ static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev out: if (res) { /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); + kfree_skb(skb); } return NETDEV_TX_OK; @@ -4015,11 +4021,11 @@ static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_d res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev); + read_unlock(&bond->curr_slave_lock); + if (res) /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); - - read_unlock(&bond->curr_slave_lock); + kfree_skb(skb); return NETDEV_TX_OK; } @@ -4058,7 +4064,7 @@ static int bond_xmit_xor(struct sk_buff *skb, struct net_device *bond_dev) if (res) { /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); + kfree_skb(skb); } return NETDEV_TX_OK; @@ -4096,7 +4102,7 @@ static int bond_xmit_broadcast(struct sk_buff *skb, struct net_device *bond_dev) res = bond_dev_queue_xmit(bond, skb2, tx_dev); if (res) { - dev_kfree_skb(skb2); + kfree_skb(skb2); continue; } } @@ -4110,7 +4116,7 @@ static int bond_xmit_broadcast(struct sk_buff *skb, struct net_device *bond_dev) out: if (res) /* no suitable interface, frame not sent */ - dev_kfree_skb(skb); + kfree_skb(skb); /* frame sent to all suitable interfaces */ return NETDEV_TX_OK; @@ -4178,7 +4184,7 @@ static u16 bond_select_queue(struct net_device *dev, struct sk_buff *skb) /* * Save the original txq to restore before passing to the driver */ - qdisc_skb_cb(skb)->bond_queue_mapping = skb->queue_mapping; + qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping; if (unlikely(txq >= dev->real_num_tx_queues)) { do { @@ -4216,7 +4222,7 @@ static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev pr_err("%s: Error: Unknown bonding mode %d\n", dev->name, bond->params.mode); WARN_ON_ONCE(1); - dev_kfree_skb(skb); + kfree_skb(skb); return NETDEV_TX_OK; } } @@ -4238,7 +4244,7 @@ static netdev_tx_t bond_start_xmit(struct sk_buff *skb, struct net_device *dev) if (bond->slave_cnt) ret = __bond_start_xmit(skb, dev); else - dev_kfree_skb(skb); + kfree_skb(skb); read_unlock(&bond->lock); @@ -4839,17 +4845,19 @@ static int bond_validate(struct nlattr *tb[], struct nlattr *data[]) return 0; } -static int bond_get_tx_queues(struct net *net, struct nlattr *tb[]) +static unsigned int bond_get_num_tx_queues(void) { return tx_queues; } static struct rtnl_link_ops bond_link_ops __read_mostly = { - .kind = "bond", - .priv_size = sizeof(struct bonding), - .setup = bond_setup, - .validate = bond_validate, - .get_tx_queues = bond_get_tx_queues, + .kind = "bond", + .priv_size = sizeof(struct bonding), + .setup = bond_setup, + .validate = bond_validate, + .get_num_tx_queues = bond_get_num_tx_queues, + .get_num_rx_queues = bond_get_num_tx_queues, /* Use the same number + as for TX queues */ }; /* Create a new bond based on the specified name and bonding parameters. 
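The bonding receive-probe hunks above drop pskb_may_pull() plus a cast of skb->data in favour of skb_header_pointer(), which lets a probe read a small header even when it sits in a paged fragment and never needs to modify the skb, so the skb argument can become const. A minimal sketch of that access pattern, with an invented header (demo_hdr) standing in for the LACPDU/ARP structures used here:

#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/types.h>

struct demo_hdr {			/* invented on-wire header, 4 bytes */
	__u8	type;
	__u8	flags;
	__be16	len;
};

static int demo_rcv(const struct sk_buff *skb)
{
	struct demo_hdr *hdr, _hdr;

	/*
	 * skb_header_pointer() returns a pointer into the skb when the
	 * first sizeof(_hdr) bytes are already linear, otherwise it copies
	 * them into _hdr on the stack; NULL means the packet is too short.
	 * The skb is never reallocated, which is why it can stay const.
	 */
	hdr = skb_header_pointer(skb, 0, sizeof(_hdr), &_hdr);
	if (!hdr)
		return -EINVAL;

	return hdr->type;
}

That is also why bond_handle_frame() above can now hand the original skb to recv_probe() instead of cloning it first.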
diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c index 485bedb8278c..dc15d248443f 100644 --- a/drivers/net/bonding/bond_sysfs.c +++ b/drivers/net/bonding/bond_sysfs.c @@ -1495,7 +1495,7 @@ static ssize_t bonding_store_queue_id(struct device *d, /* Check buffer length, valid ifname and queue id */ if (strlen(buffer) > IFNAMSIZ || !dev_valid_name(buffer) || - qid > bond->params.tx_queues) + qid > bond->dev->real_num_tx_queues) goto err_no_cmd; /* Get the pointer to that interface if it exists */ diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h index 4581aa5ccaba..f8af2fcd3d16 100644 --- a/drivers/net/bonding/bonding.h +++ b/drivers/net/bonding/bonding.h @@ -218,8 +218,8 @@ struct bonding { struct slave *primary_slave; bool force_primary; s32 slave_cnt; /* never change this value outside the attach/detach wrappers */ - int (*recv_probe)(struct sk_buff *, struct bonding *, - struct slave *); + int (*recv_probe)(const struct sk_buff *, struct bonding *, + struct slave *); rwlock_t lock; rwlock_t curr_slave_lock; u8 send_peer_notif; diff --git a/drivers/net/caif/caif_hsi.c b/drivers/net/caif/caif_hsi.c index 4a27adb7ae67..0def8b3106f4 100644 --- a/drivers/net/caif/caif_hsi.c +++ b/drivers/net/caif/caif_hsi.c @@ -11,7 +11,6 @@ #include <linux/init.h> #include <linux/module.h> #include <linux/device.h> -#include <linux/platform_device.h> #include <linux/netdevice.h> #include <linux/string.h> #include <linux/list.h> @@ -20,7 +19,7 @@ #include <linux/sched.h> #include <linux/if_arp.h> #include <linux/timer.h> -#include <linux/rtnetlink.h> +#include <net/rtnetlink.h> #include <linux/pkt_sched.h> #include <net/caif/caif_layer.h> #include <net/caif/caif_hsi.h> @@ -33,59 +32,46 @@ MODULE_DESCRIPTION("CAIF HSI driver"); #define PAD_POW2(x, pow) ((((x)&((pow)-1)) == 0) ? 0 :\ (((pow)-((x)&((pow)-1))))) -static int inactivity_timeout = 1000; -module_param(inactivity_timeout, int, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(inactivity_timeout, "Inactivity timeout on HSI, ms."); +static const struct cfhsi_config hsi_default_config = { -static int aggregation_timeout = 1; -module_param(aggregation_timeout, int, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(aggregation_timeout, "Aggregation timeout on HSI, ms."); + /* Inactivity timeout on HSI, ms */ + .inactivity_timeout = HZ, -/* - * HSI padding options. - * Warning: must be a base of 2 (& operation used) and can not be zero ! - */ -static int hsi_head_align = 4; -module_param(hsi_head_align, int, S_IRUGO); -MODULE_PARM_DESC(hsi_head_align, "HSI head alignment."); + /* Aggregation timeout (ms) of zero means no aggregation is done*/ + .aggregation_timeout = 1, -static int hsi_tail_align = 4; -module_param(hsi_tail_align, int, S_IRUGO); -MODULE_PARM_DESC(hsi_tail_align, "HSI tail alignment."); - -/* - * HSI link layer flowcontrol thresholds. - * Warning: A high threshold value migth increase throughput but it will at - * the same time prevent channel prioritization and increase the risk of - * flooding the modem. The high threshold should be above the low. - */ -static int hsi_high_threshold = 100; -module_param(hsi_high_threshold, int, S_IRUGO); -MODULE_PARM_DESC(hsi_high_threshold, "HSI high threshold (FLOW OFF)."); + /* + * HSI link layer flow-control thresholds. + * Threshold values for the HSI packet queue. Flow-control will be + * asserted when the number of packets exceeds q_high_mark. It will + * not be de-asserted before the number of packets drops below + * q_low_mark. 
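The caif_hsi rework that starts above folds the old module parameters into a single cfhsi_config with compiled-in defaults, including the q_high_mark/q_low_mark pair described in the new comment: flow control is asserted once the TX queue climbs past the high mark and released only after it drains back to the low mark or below, giving hysteresis rather than flapping at a single threshold. A small self-contained sketch of that watermark logic (plain C, demo_* names are illustrative only):

#include <stdbool.h>

struct demo_queue {
	unsigned int len;		/* packets currently queued          */
	unsigned int q_high_mark;	/* assert flow-off above this mark   */
	unsigned int q_low_mark;	/* de-assert only at/below this mark */
	bool	     flow_off_sent;	/* current flow-control state        */
};

/* Called after enqueueing a packet: true means tell the peer to stop. */
static bool demo_flow_should_stop(struct demo_queue *q)
{
	if (!q->flow_off_sent && q->len > q->q_high_mark) {
		q->flow_off_sent = true;
		return true;
	}
	return false;
}

/* Called after draining a packet: true means the peer may resume. */
static bool demo_flow_may_resume(struct demo_queue *q)
{
	if (q->flow_off_sent && q->len <= q->q_low_mark) {
		q->flow_off_sent = false;
		return true;
	}
	return false;
}

The driver's own version of this check appears further down, in cfhsi_xmit() (flow off) and cfhsi_tx_done() (flow on), now testing against cfhsi->cfg instead of module parameters.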
+ * Warning: A high threshold value might increase throughput but it + * will at the same time prevent channel prioritization and increase + * the risk of flooding the modem. The high threshold should be above + * the low. + */ + .q_high_mark = 100, + .q_low_mark = 50, -static int hsi_low_threshold = 50; -module_param(hsi_low_threshold, int, S_IRUGO); -MODULE_PARM_DESC(hsi_low_threshold, "HSI high threshold (FLOW ON)."); + /* + * HSI padding options. + * Warning: must be a base of 2 (& operation used) and can not be zero ! + */ + .head_align = 4, + .tail_align = 4, +}; #define ON 1 #define OFF 0 -/* - * Threshold values for the HSI packet queue. Flowcontrol will be asserted - * when the number of packets exceeds HIGH_WATER_MARK. It will not be - * de-asserted before the number of packets drops below LOW_WATER_MARK. - */ -#define LOW_WATER_MARK hsi_low_threshold -#define HIGH_WATER_MARK hsi_high_threshold - static LIST_HEAD(cfhsi_list); -static spinlock_t cfhsi_list_lock; static void cfhsi_inactivity_tout(unsigned long arg) { struct cfhsi *cfhsi = (struct cfhsi *)arg; - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); /* Schedule power down work queue. */ @@ -101,8 +87,8 @@ static void cfhsi_update_aggregation_stats(struct cfhsi *cfhsi, int hpad, tpad, len; info = (struct caif_payload_info *)&skb->cb; - hpad = 1 + PAD_POW2((info->hdr_len + 1), hsi_head_align); - tpad = PAD_POW2((skb->len + hpad), hsi_tail_align); + hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); + tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); len = skb->len + hpad + tpad; if (direction > 0) @@ -115,7 +101,7 @@ static bool cfhsi_can_send_aggregate(struct cfhsi *cfhsi) { int i; - if (cfhsi->aggregation_timeout < 0) + if (cfhsi->cfg.aggregation_timeout == 0) return true; for (i = 0; i < CFHSI_PRIO_BEBK; ++i) { @@ -171,7 +157,7 @@ static void cfhsi_abort_tx(struct cfhsi *cfhsi) cfhsi->tx_state = CFHSI_TX_STATE_IDLE; if (!test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) mod_timer(&cfhsi->inactivity_timer, - jiffies + cfhsi->inactivity_timeout); + jiffies + cfhsi->cfg.inactivity_timeout); spin_unlock_bh(&cfhsi->lock); } @@ -181,14 +167,14 @@ static int cfhsi_flush_fifo(struct cfhsi *cfhsi) size_t fifo_occupancy; int ret; - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); do { - ret = cfhsi->dev->cfhsi_fifo_occupancy(cfhsi->dev, + ret = cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, &fifo_occupancy); if (ret) { - dev_warn(&cfhsi->ndev->dev, + netdev_warn(cfhsi->ndev, "%s: can't get FIFO occupancy: %d.\n", __func__, ret); break; @@ -198,11 +184,11 @@ static int cfhsi_flush_fifo(struct cfhsi *cfhsi) fifo_occupancy = min(sizeof(buffer), fifo_occupancy); set_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits); - ret = cfhsi->dev->cfhsi_rx(buffer, fifo_occupancy, - cfhsi->dev); + ret = cfhsi->ops->cfhsi_rx(buffer, fifo_occupancy, + cfhsi->ops); if (ret) { clear_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits); - dev_warn(&cfhsi->ndev->dev, + netdev_warn(cfhsi->ndev, "%s: can't read data: %d.\n", __func__, ret); break; @@ -213,13 +199,13 @@ static int cfhsi_flush_fifo(struct cfhsi *cfhsi) !test_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits), ret); if (ret < 0) { - dev_warn(&cfhsi->ndev->dev, + netdev_warn(cfhsi->ndev, "%s: can't wait for flush complete: %d.\n", __func__, ret); break; } else if (!ret) { ret = -ETIMEDOUT; - dev_warn(&cfhsi->ndev->dev, + netdev_warn(cfhsi->ndev, "%s: timeout waiting for flush complete.\n", __func__); break; @@ -246,14 +232,14 @@ static int cfhsi_tx_frm(struct 
cfhsi_desc *desc, struct cfhsi *cfhsi) /* Check if we can embed a CAIF frame. */ if (skb->len < CFHSI_MAX_EMB_FRM_SZ) { struct caif_payload_info *info; - int hpad = 0; - int tpad = 0; + int hpad; + int tpad; /* Calculate needed head alignment and tail alignment. */ info = (struct caif_payload_info *)&skb->cb; - hpad = 1 + PAD_POW2((info->hdr_len + 1), hsi_head_align); - tpad = PAD_POW2((skb->len + hpad), hsi_tail_align); + hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); + tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); /* Check if frame still fits with added alignment. */ if ((skb->len + hpad + tpad) <= CFHSI_MAX_EMB_FRM_SZ) { @@ -282,8 +268,8 @@ static int cfhsi_tx_frm(struct cfhsi_desc *desc, struct cfhsi *cfhsi) pfrm = desc->emb_frm + CFHSI_MAX_EMB_FRM_SZ; while (nfrms < CFHSI_MAX_PKTS) { struct caif_payload_info *info; - int hpad = 0; - int tpad = 0; + int hpad; + int tpad; if (!skb) skb = cfhsi_dequeue(cfhsi); @@ -294,8 +280,8 @@ static int cfhsi_tx_frm(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Calculate needed head alignment and tail alignment. */ info = (struct caif_payload_info *)&skb->cb; - hpad = 1 + PAD_POW2((info->hdr_len + 1), hsi_head_align); - tpad = PAD_POW2((skb->len + hpad), hsi_tail_align); + hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); + tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); /* Fill in CAIF frame length in descriptor. */ desc->cffrm_len[nfrms] = hpad + skb->len + tpad; @@ -348,7 +334,7 @@ static void cfhsi_start_tx(struct cfhsi *cfhsi) struct cfhsi_desc *desc = (struct cfhsi_desc *)cfhsi->tx_buf; int len, res; - dev_dbg(&cfhsi->ndev->dev, "%s.\n", __func__); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) return; @@ -366,22 +352,22 @@ static void cfhsi_start_tx(struct cfhsi *cfhsi) cfhsi->tx_state = CFHSI_TX_STATE_IDLE; /* Start inactivity timer. */ mod_timer(&cfhsi->inactivity_timer, - jiffies + cfhsi->inactivity_timeout); + jiffies + cfhsi->cfg.inactivity_timeout); spin_unlock_bh(&cfhsi->lock); break; } /* Set up new transfer. 
*/ - res = cfhsi->dev->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->dev); + res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); if (WARN_ON(res < 0)) - dev_err(&cfhsi->ndev->dev, "%s: TX error %d.\n", + netdev_err(cfhsi->ndev, "%s: TX error %d.\n", __func__, res); } while (res < 0); } static void cfhsi_tx_done(struct cfhsi *cfhsi) { - dev_dbg(&cfhsi->ndev->dev, "%s.\n", __func__); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) return; @@ -392,7 +378,7 @@ static void cfhsi_tx_done(struct cfhsi *cfhsi) */ spin_lock_bh(&cfhsi->lock); if (cfhsi->flow_off_sent && - cfhsi_tx_queue_len(cfhsi) <= cfhsi->q_low_mark && + cfhsi_tx_queue_len(cfhsi) <= cfhsi->cfg.q_low_mark && cfhsi->cfdev.flowctrl) { cfhsi->flow_off_sent = 0; @@ -404,19 +390,19 @@ static void cfhsi_tx_done(struct cfhsi *cfhsi) cfhsi_start_tx(cfhsi); } else { mod_timer(&cfhsi->aggregation_timer, - jiffies + cfhsi->aggregation_timeout); + jiffies + cfhsi->cfg.aggregation_timeout); spin_unlock_bh(&cfhsi->lock); } return; } -static void cfhsi_tx_done_cb(struct cfhsi_drv *drv) +static void cfhsi_tx_done_cb(struct cfhsi_cb_ops *cb_ops) { struct cfhsi *cfhsi; - cfhsi = container_of(drv, struct cfhsi, drv); - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) @@ -433,7 +419,7 @@ static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) if ((desc->header & ~CFHSI_PIGGY_DESC) || (desc->offset > CFHSI_MAX_EMB_FRM_SZ)) { - dev_err(&cfhsi->ndev->dev, "%s: Invalid descriptor.\n", + netdev_err(cfhsi->ndev, "%s: Invalid descriptor.\n", __func__); return -EPROTO; } @@ -455,7 +441,7 @@ static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Sanity check length of CAIF frame. */ if (unlikely(len > CFHSI_MAX_CAIF_FRAME_SZ)) { - dev_err(&cfhsi->ndev->dev, "%s: Invalid length.\n", + netdev_err(cfhsi->ndev, "%s: Invalid length.\n", __func__); return -EPROTO; } @@ -463,7 +449,7 @@ static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Allocate SKB (OK even in IRQ context). */ skb = alloc_skb(len + 1, GFP_ATOMIC); if (!skb) { - dev_err(&cfhsi->ndev->dev, "%s: Out of memory !\n", + netdev_err(cfhsi->ndev, "%s: Out of memory !\n", __func__); return -ENOMEM; } @@ -477,8 +463,8 @@ static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) skb->dev = cfhsi->ndev; /* - * We are called from a arch specific platform device. - * Unfortunately we don't know what context we're + * We are in a callback handler and + * unfortunately we don't know what context we're * running in. */ if (in_interrupt()) @@ -504,7 +490,7 @@ static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) xfer_sz += CFHSI_DESC_SZ; if ((xfer_sz % 4) || (xfer_sz > (CFHSI_BUF_SZ_RX - CFHSI_DESC_SZ))) { - dev_err(&cfhsi->ndev->dev, + netdev_err(cfhsi->ndev, "%s: Invalid payload len: %d, ignored.\n", __func__, xfer_sz); return -EPROTO; @@ -551,7 +537,7 @@ static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Sanity check header and offset. 
*/ if (WARN_ON((desc->header & ~CFHSI_PIGGY_DESC) || (desc->offset > CFHSI_MAX_EMB_FRM_SZ))) { - dev_err(&cfhsi->ndev->dev, "%s: Invalid descriptor.\n", + netdev_err(cfhsi->ndev, "%s: Invalid descriptor.\n", __func__); return -EPROTO; } @@ -573,7 +559,7 @@ static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) struct sk_buff *skb; u8 *dst = NULL; u8 *pcffrm = NULL; - int len = 0; + int len; /* CAIF frame starts after head padding. */ pcffrm = pfrm + *pfrm + 1; @@ -585,7 +571,7 @@ static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Sanity check length of CAIF frames. */ if (unlikely(len > CFHSI_MAX_CAIF_FRAME_SZ)) { - dev_err(&cfhsi->ndev->dev, "%s: Invalid length.\n", + netdev_err(cfhsi->ndev, "%s: Invalid length.\n", __func__); return -EPROTO; } @@ -593,7 +579,7 @@ static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) /* Allocate SKB (OK even in IRQ context). */ skb = alloc_skb(len + 1, GFP_ATOMIC); if (!skb) { - dev_err(&cfhsi->ndev->dev, "%s: Out of memory !\n", + netdev_err(cfhsi->ndev, "%s: Out of memory !\n", __func__); cfhsi->rx_state.nfrms = nfrms; return -ENOMEM; @@ -608,7 +594,7 @@ static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) skb->dev = cfhsi->ndev; /* - * We're called from a platform device, + * We're called in callback from HSI * and don't know the context we're running in. */ if (in_interrupt()) @@ -639,7 +625,7 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) desc = (struct cfhsi_desc *)cfhsi->rx_buf; - dev_dbg(&cfhsi->ndev->dev, "%s\n", __func__); + netdev_dbg(cfhsi->ndev, "%s\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) return; @@ -647,7 +633,7 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) /* Update inactivity timer if pending. */ spin_lock_bh(&cfhsi->lock); mod_timer_pending(&cfhsi->inactivity_timer, - jiffies + cfhsi->inactivity_timeout); + jiffies + cfhsi->cfg.inactivity_timeout); spin_unlock_bh(&cfhsi->lock); if (cfhsi->rx_state.state == CFHSI_RX_STATE_DESC) { @@ -680,12 +666,11 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) if (desc_pld_len < 0) goto out_of_sync; - if (desc_pld_len > 0) + if (desc_pld_len > 0) { rx_len = desc_pld_len; - - if (desc_pld_len > 0 && - (piggy_desc->header & CFHSI_PIGGY_DESC)) - rx_len += CFHSI_DESC_SZ; + if (piggy_desc->header & CFHSI_PIGGY_DESC) + rx_len += CFHSI_DESC_SZ; + } /* * Copy needed information from the piggy-backed @@ -693,8 +678,6 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) */ memcpy(rx_buf, (u8 *)piggy_desc, CFHSI_DESC_SHORT_SZ); - if (desc_pld_len == -EPROTO) - goto out_of_sync; } } @@ -710,13 +693,13 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) /* Initiate next read */ if (test_bit(CFHSI_AWAKE, &cfhsi->bits)) { /* Set up new transfer. 
*/ - dev_dbg(&cfhsi->ndev->dev, "%s: Start RX.\n", + netdev_dbg(cfhsi->ndev, "%s: Start RX.\n", __func__); - res = cfhsi->dev->cfhsi_rx(rx_ptr, rx_len, - cfhsi->dev); + res = cfhsi->ops->cfhsi_rx(rx_ptr, rx_len, + cfhsi->ops); if (WARN_ON(res < 0)) { - dev_err(&cfhsi->ndev->dev, "%s: RX error %d.\n", + netdev_err(cfhsi->ndev, "%s: RX error %d.\n", __func__, res); cfhsi->ndev->stats.rx_errors++; cfhsi->ndev->stats.rx_dropped++; @@ -753,7 +736,7 @@ static void cfhsi_rx_done(struct cfhsi *cfhsi) return; out_of_sync: - dev_err(&cfhsi->ndev->dev, "%s: Out of sync.\n", __func__); + netdev_err(cfhsi->ndev, "%s: Out of sync.\n", __func__); print_hex_dump_bytes("--> ", DUMP_PREFIX_NONE, cfhsi->rx_buf, CFHSI_DESC_SZ); schedule_work(&cfhsi->out_of_sync_work); @@ -763,18 +746,18 @@ static void cfhsi_rx_slowpath(unsigned long arg) { struct cfhsi *cfhsi = (struct cfhsi *)arg; - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); cfhsi_rx_done(cfhsi); } -static void cfhsi_rx_done_cb(struct cfhsi_drv *drv) +static void cfhsi_rx_done_cb(struct cfhsi_cb_ops *cb_ops) { struct cfhsi *cfhsi; - cfhsi = container_of(drv, struct cfhsi, drv); - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) @@ -807,9 +790,9 @@ static void cfhsi_wake_up(struct work_struct *work) } /* Activate wake line. */ - cfhsi->dev->cfhsi_wake_up(cfhsi->dev); + cfhsi->ops->cfhsi_wake_up(cfhsi->ops); - dev_dbg(&cfhsi->ndev->dev, "%s: Start waiting.\n", + netdev_dbg(cfhsi->ndev, "%s: Start waiting.\n", __func__); /* Wait for acknowledge. */ @@ -819,33 +802,33 @@ static void cfhsi_wake_up(struct work_struct *work) &cfhsi->bits), ret); if (unlikely(ret < 0)) { /* Interrupted by signal. */ - dev_err(&cfhsi->ndev->dev, "%s: Signalled: %ld.\n", + netdev_err(cfhsi->ndev, "%s: Signalled: %ld.\n", __func__, ret); clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); - cfhsi->dev->cfhsi_wake_down(cfhsi->dev); + cfhsi->ops->cfhsi_wake_down(cfhsi->ops); return; } else if (!ret) { bool ca_wake = false; size_t fifo_occupancy = 0; /* Wakeup timeout */ - dev_dbg(&cfhsi->ndev->dev, "%s: Timeout.\n", + netdev_dbg(cfhsi->ndev, "%s: Timeout.\n", __func__); /* Check FIFO to check if modem has sent something. */ - WARN_ON(cfhsi->dev->cfhsi_fifo_occupancy(cfhsi->dev, + WARN_ON(cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, &fifo_occupancy)); - dev_dbg(&cfhsi->ndev->dev, "%s: Bytes in FIFO: %u.\n", + netdev_dbg(cfhsi->ndev, "%s: Bytes in FIFO: %u.\n", __func__, (unsigned) fifo_occupancy); /* Check if we misssed the interrupt. */ - WARN_ON(cfhsi->dev->cfhsi_get_peer_wake(cfhsi->dev, + WARN_ON(cfhsi->ops->cfhsi_get_peer_wake(cfhsi->ops, &ca_wake)); if (ca_wake) { - dev_err(&cfhsi->ndev->dev, "%s: CA Wake missed !.\n", + netdev_err(cfhsi->ndev, "%s: CA Wake missed !.\n", __func__); /* Clear the CFHSI_WAKE_UP_ACK bit to prevent race. */ @@ -856,11 +839,11 @@ static void cfhsi_wake_up(struct work_struct *work) } clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); - cfhsi->dev->cfhsi_wake_down(cfhsi->dev); + cfhsi->ops->cfhsi_wake_down(cfhsi->ops); return; } wake_ack: - dev_dbg(&cfhsi->ndev->dev, "%s: Woken.\n", + netdev_dbg(cfhsi->ndev, "%s: Woken.\n", __func__); /* Clear power up bit. */ @@ -868,11 +851,11 @@ wake_ack: clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); /* Resume read operation. 
*/ - dev_dbg(&cfhsi->ndev->dev, "%s: Start RX.\n", __func__); - res = cfhsi->dev->cfhsi_rx(cfhsi->rx_ptr, cfhsi->rx_len, cfhsi->dev); + netdev_dbg(cfhsi->ndev, "%s: Start RX.\n", __func__); + res = cfhsi->ops->cfhsi_rx(cfhsi->rx_ptr, cfhsi->rx_len, cfhsi->ops); if (WARN_ON(res < 0)) - dev_err(&cfhsi->ndev->dev, "%s: RX err %d.\n", __func__, res); + netdev_err(cfhsi->ndev, "%s: RX err %d.\n", __func__, res); /* Clear power up acknowledment. */ clear_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); @@ -881,16 +864,16 @@ wake_ack: /* Resume transmit if queues are not empty. */ if (!cfhsi_tx_queue_len(cfhsi)) { - dev_dbg(&cfhsi->ndev->dev, "%s: Peer wake, start timer.\n", + netdev_dbg(cfhsi->ndev, "%s: Peer wake, start timer.\n", __func__); /* Start inactivity timer. */ mod_timer(&cfhsi->inactivity_timer, - jiffies + cfhsi->inactivity_timeout); + jiffies + cfhsi->cfg.inactivity_timeout); spin_unlock_bh(&cfhsi->lock); return; } - dev_dbg(&cfhsi->ndev->dev, "%s: Host wake.\n", + netdev_dbg(cfhsi->ndev, "%s: Host wake.\n", __func__); spin_unlock_bh(&cfhsi->lock); @@ -900,14 +883,14 @@ wake_ack: if (likely(len > 0)) { /* Set up new transfer. */ - res = cfhsi->dev->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->dev); + res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); if (WARN_ON(res < 0)) { - dev_err(&cfhsi->ndev->dev, "%s: TX error %d.\n", + netdev_err(cfhsi->ndev, "%s: TX error %d.\n", __func__, res); cfhsi_abort_tx(cfhsi); } } else { - dev_err(&cfhsi->ndev->dev, + netdev_err(cfhsi->ndev, "%s: Failed to create HSI frame: %d.\n", __func__, len); } @@ -921,13 +904,13 @@ static void cfhsi_wake_down(struct work_struct *work) int retry = CFHSI_WAKE_TOUT; cfhsi = container_of(work, struct cfhsi, wake_down_work); - dev_dbg(&cfhsi->ndev->dev, "%s.\n", __func__); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) return; /* Deactivate wake line. */ - cfhsi->dev->cfhsi_wake_down(cfhsi->dev); + cfhsi->ops->cfhsi_wake_down(cfhsi->ops); /* Wait for acknowledge. */ ret = CFHSI_WAKE_TOUT; @@ -936,26 +919,26 @@ static void cfhsi_wake_down(struct work_struct *work) &cfhsi->bits), ret); if (ret < 0) { /* Interrupted by signal. */ - dev_err(&cfhsi->ndev->dev, "%s: Signalled: %ld.\n", + netdev_err(cfhsi->ndev, "%s: Signalled: %ld.\n", __func__, ret); return; } else if (!ret) { bool ca_wake = true; /* Timeout */ - dev_err(&cfhsi->ndev->dev, "%s: Timeout.\n", __func__); + netdev_err(cfhsi->ndev, "%s: Timeout.\n", __func__); /* Check if we misssed the interrupt. */ - WARN_ON(cfhsi->dev->cfhsi_get_peer_wake(cfhsi->dev, + WARN_ON(cfhsi->ops->cfhsi_get_peer_wake(cfhsi->ops, &ca_wake)); if (!ca_wake) - dev_err(&cfhsi->ndev->dev, "%s: CA Wake missed !.\n", + netdev_err(cfhsi->ndev, "%s: CA Wake missed !.\n", __func__); } /* Check FIFO occupancy. */ while (retry) { - WARN_ON(cfhsi->dev->cfhsi_fifo_occupancy(cfhsi->dev, + WARN_ON(cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, &fifo_occupancy)); if (!fifo_occupancy) @@ -967,14 +950,13 @@ static void cfhsi_wake_down(struct work_struct *work) } if (!retry) - dev_err(&cfhsi->ndev->dev, "%s: FIFO Timeout.\n", __func__); + netdev_err(cfhsi->ndev, "%s: FIFO Timeout.\n", __func__); /* Clear AWAKE condition. */ clear_bit(CFHSI_AWAKE, &cfhsi->bits); /* Cancel pending RX requests. 
*/ - cfhsi->dev->cfhsi_rx_cancel(cfhsi->dev); - + cfhsi->ops->cfhsi_rx_cancel(cfhsi->ops); } static void cfhsi_out_of_sync(struct work_struct *work) @@ -988,12 +970,12 @@ static void cfhsi_out_of_sync(struct work_struct *work) rtnl_unlock(); } -static void cfhsi_wake_up_cb(struct cfhsi_drv *drv) +static void cfhsi_wake_up_cb(struct cfhsi_cb_ops *cb_ops) { struct cfhsi *cfhsi = NULL; - cfhsi = container_of(drv, struct cfhsi, drv); - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); set_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); @@ -1007,12 +989,12 @@ static void cfhsi_wake_up_cb(struct cfhsi_drv *drv) queue_work(cfhsi->wq, &cfhsi->wake_up_work); } -static void cfhsi_wake_down_cb(struct cfhsi_drv *drv) +static void cfhsi_wake_down_cb(struct cfhsi_cb_ops *cb_ops) { struct cfhsi *cfhsi = NULL; - cfhsi = container_of(drv, struct cfhsi, drv); - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); /* Initiating low power is only permitted by the host (us). */ @@ -1024,7 +1006,7 @@ static void cfhsi_aggregation_tout(unsigned long arg) { struct cfhsi *cfhsi = (struct cfhsi *)arg; - dev_dbg(&cfhsi->ndev->dev, "%s.\n", + netdev_dbg(cfhsi->ndev, "%s.\n", __func__); cfhsi_start_tx(cfhsi); @@ -1077,7 +1059,7 @@ static int cfhsi_xmit(struct sk_buff *skb, struct net_device *dev) /* Send flow off if number of packets is above high water mark. */ if (!cfhsi->flow_off_sent && - cfhsi_tx_queue_len(cfhsi) > cfhsi->q_high_mark && + cfhsi_tx_queue_len(cfhsi) > cfhsi->cfg.q_high_mark && cfhsi->cfdev.flowctrl) { cfhsi->flow_off_sent = 1; cfhsi->cfdev.flowctrl(cfhsi->ndev, OFF); @@ -1114,9 +1096,9 @@ static int cfhsi_xmit(struct sk_buff *skb, struct net_device *dev) WARN_ON(!len); /* Set up new transfer. */ - res = cfhsi->dev->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->dev); + res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); if (WARN_ON(res < 0)) { - dev_err(&cfhsi->ndev->dev, "%s: TX error %d.\n", + netdev_err(cfhsi->ndev, "%s: TX error %d.\n", __func__, res); cfhsi_abort_tx(cfhsi); } @@ -1129,19 +1111,19 @@ static int cfhsi_xmit(struct sk_buff *skb, struct net_device *dev) return 0; } -static const struct net_device_ops cfhsi_ops; +static const struct net_device_ops cfhsi_netdevops; static void cfhsi_setup(struct net_device *dev) { int i; struct cfhsi *cfhsi = netdev_priv(dev); dev->features = 0; - dev->netdev_ops = &cfhsi_ops; dev->type = ARPHRD_CAIF; dev->flags = IFF_POINTOPOINT | IFF_NOARP; dev->mtu = CFHSI_MAX_CAIF_FRAME_SZ; dev->tx_queue_len = 0; dev->destructor = free_netdev; + dev->netdev_ops = &cfhsi_netdevops; for (i = 0; i < CFHSI_PRIO_LAST; ++i) skb_queue_head_init(&cfhsi->qhead[i]); cfhsi->cfdev.link_select = CAIF_LINK_HIGH_BANDW; @@ -1149,43 +1131,7 @@ static void cfhsi_setup(struct net_device *dev) cfhsi->cfdev.use_stx = false; cfhsi->cfdev.use_fcs = false; cfhsi->ndev = dev; -} - -int cfhsi_probe(struct platform_device *pdev) -{ - struct cfhsi *cfhsi = NULL; - struct net_device *ndev; - - int res; - - ndev = alloc_netdev(sizeof(struct cfhsi), "cfhsi%d", cfhsi_setup); - if (!ndev) - return -ENODEV; - - cfhsi = netdev_priv(ndev); - cfhsi->ndev = ndev; - cfhsi->pdev = pdev; - - /* Assign the HSI device. */ - cfhsi->dev = pdev->dev.platform_data; - - /* Assign the driver to this HSI device. */ - cfhsi->dev->drv = &cfhsi->drv; - - /* Register network device. 
*/ - res = register_netdev(ndev); - if (res) { - dev_err(&ndev->dev, "%s: Registration error: %d.\n", - __func__, res); - free_netdev(ndev); - return -ENODEV; - } - /* Add CAIF HSI device to list. */ - spin_lock(&cfhsi_list_lock); - list_add_tail(&cfhsi->list, &cfhsi_list); - spin_unlock(&cfhsi_list_lock); - - return res; + cfhsi->cfg = hsi_default_config; } static int cfhsi_open(struct net_device *ndev) @@ -1201,9 +1147,6 @@ static int cfhsi_open(struct net_device *ndev) /* Set flow info */ cfhsi->flow_off_sent = 0; - cfhsi->q_low_mark = LOW_WATER_MARK; - cfhsi->q_high_mark = HIGH_WATER_MARK; - /* * Allocate a TX buffer with the size of a HSI packet descriptors @@ -1231,20 +1174,8 @@ static int cfhsi_open(struct net_device *ndev) goto err_alloc_rx_flip; } - /* Pre-calculate inactivity timeout. */ - if (inactivity_timeout != -1) { - cfhsi->inactivity_timeout = - inactivity_timeout * HZ / 1000; - if (!cfhsi->inactivity_timeout) - cfhsi->inactivity_timeout = 1; - else if (cfhsi->inactivity_timeout > NEXT_TIMER_MAX_DELTA) - cfhsi->inactivity_timeout = NEXT_TIMER_MAX_DELTA; - } else { - cfhsi->inactivity_timeout = NEXT_TIMER_MAX_DELTA; - } - /* Initialize aggregation timeout */ - cfhsi->aggregation_timeout = aggregation_timeout; + cfhsi->cfg.aggregation_timeout = hsi_default_config.aggregation_timeout; /* Initialize recieve vaiables. */ cfhsi->rx_ptr = cfhsi->rx_buf; @@ -1254,10 +1185,10 @@ static int cfhsi_open(struct net_device *ndev) spin_lock_init(&cfhsi->lock); /* Set up the driver. */ - cfhsi->drv.tx_done_cb = cfhsi_tx_done_cb; - cfhsi->drv.rx_done_cb = cfhsi_rx_done_cb; - cfhsi->drv.wake_up_cb = cfhsi_wake_up_cb; - cfhsi->drv.wake_down_cb = cfhsi_wake_down_cb; + cfhsi->cb_ops.tx_done_cb = cfhsi_tx_done_cb; + cfhsi->cb_ops.rx_done_cb = cfhsi_rx_done_cb; + cfhsi->cb_ops.wake_up_cb = cfhsi_wake_up_cb; + cfhsi->cb_ops.wake_down_cb = cfhsi_wake_down_cb; /* Initialize the work queues. */ INIT_WORK(&cfhsi->wake_up_work, cfhsi_wake_up); @@ -1271,9 +1202,9 @@ static int cfhsi_open(struct net_device *ndev) clear_bit(CFHSI_AWAKE, &cfhsi->bits); /* Create work thread. */ - cfhsi->wq = create_singlethread_workqueue(cfhsi->pdev->name); + cfhsi->wq = create_singlethread_workqueue(cfhsi->ndev->name); if (!cfhsi->wq) { - dev_err(&cfhsi->ndev->dev, "%s: Failed to create work queue.\n", + netdev_err(cfhsi->ndev, "%s: Failed to create work queue.\n", __func__); res = -ENODEV; goto err_create_wq; @@ -1298,9 +1229,9 @@ static int cfhsi_open(struct net_device *ndev) cfhsi->aggregation_timer.function = cfhsi_aggregation_tout; /* Activate HSI interface. 
*/ - res = cfhsi->dev->cfhsi_up(cfhsi->dev); + res = cfhsi->ops->cfhsi_up(cfhsi->ops); if (res) { - dev_err(&cfhsi->ndev->dev, + netdev_err(cfhsi->ndev, "%s: can't activate HSI interface: %d.\n", __func__, res); goto err_activate; @@ -1309,14 +1240,14 @@ static int cfhsi_open(struct net_device *ndev) /* Flush FIFO */ res = cfhsi_flush_fifo(cfhsi); if (res) { - dev_err(&cfhsi->ndev->dev, "%s: Can't flush FIFO: %d.\n", + netdev_err(cfhsi->ndev, "%s: Can't flush FIFO: %d.\n", __func__, res); goto err_net_reg; } return res; err_net_reg: - cfhsi->dev->cfhsi_down(cfhsi->dev); + cfhsi->ops->cfhsi_down(cfhsi->ops); err_activate: destroy_workqueue(cfhsi->wq); err_create_wq: @@ -1346,7 +1277,7 @@ static int cfhsi_close(struct net_device *ndev) del_timer_sync(&cfhsi->aggregation_timer); /* Cancel pending RX request (if any) */ - cfhsi->dev->cfhsi_rx_cancel(cfhsi->dev); + cfhsi->ops->cfhsi_rx_cancel(cfhsi->ops); /* Destroy workqueue */ destroy_workqueue(cfhsi->wq); @@ -1359,7 +1290,7 @@ static int cfhsi_close(struct net_device *ndev) cfhsi_abort_tx(cfhsi); /* Deactivate interface */ - cfhsi->dev->cfhsi_down(cfhsi->dev); + cfhsi->ops->cfhsi_down(cfhsi->ops); /* Free buffers. */ kfree(tx_buf); @@ -1368,85 +1299,184 @@ static int cfhsi_close(struct net_device *ndev) return 0; } -static const struct net_device_ops cfhsi_ops = { +static void cfhsi_uninit(struct net_device *dev) +{ + struct cfhsi *cfhsi = netdev_priv(dev); + ASSERT_RTNL(); + symbol_put(cfhsi_get_device); + list_del(&cfhsi->list); +} + +static const struct net_device_ops cfhsi_netdevops = { + .ndo_uninit = cfhsi_uninit, .ndo_open = cfhsi_open, .ndo_stop = cfhsi_close, .ndo_start_xmit = cfhsi_xmit }; -int cfhsi_remove(struct platform_device *pdev) +static void cfhsi_netlink_parms(struct nlattr *data[], struct cfhsi *cfhsi) { - struct list_head *list_node; - struct list_head *n; - struct cfhsi *cfhsi = NULL; - struct cfhsi_dev *dev; + int i; - dev = (struct cfhsi_dev *)pdev->dev.platform_data; - spin_lock(&cfhsi_list_lock); - list_for_each_safe(list_node, n, &cfhsi_list) { - cfhsi = list_entry(list_node, struct cfhsi, list); - /* Find the corresponding device. */ - if (cfhsi->dev == dev) { - /* Remove from list. */ - list_del(list_node); - spin_unlock(&cfhsi_list_lock); - return 0; - } + if (!data) { + pr_debug("no params data found\n"); + return; } - spin_unlock(&cfhsi_list_lock); - return -ENODEV; + + i = __IFLA_CAIF_HSI_INACTIVITY_TOUT; + /* + * Inactivity timeout in millisecs. Lowest possible value is 1, + * and highest possible is NEXT_TIMER_MAX_DELTA. + */ + if (data[i]) { + u32 inactivity_timeout = nla_get_u32(data[i]); + /* Pre-calculate inactivity timeout. 
*/ + cfhsi->cfg.inactivity_timeout = inactivity_timeout * HZ / 1000; + if (cfhsi->cfg.inactivity_timeout == 0) + cfhsi->cfg.inactivity_timeout = 1; + else if (cfhsi->cfg.inactivity_timeout > NEXT_TIMER_MAX_DELTA) + cfhsi->cfg.inactivity_timeout = NEXT_TIMER_MAX_DELTA; + } + + i = __IFLA_CAIF_HSI_AGGREGATION_TOUT; + if (data[i]) + cfhsi->cfg.aggregation_timeout = nla_get_u32(data[i]); + + i = __IFLA_CAIF_HSI_HEAD_ALIGN; + if (data[i]) + cfhsi->cfg.head_align = nla_get_u32(data[i]); + + i = __IFLA_CAIF_HSI_TAIL_ALIGN; + if (data[i]) + cfhsi->cfg.tail_align = nla_get_u32(data[i]); + + i = __IFLA_CAIF_HSI_QHIGH_WATERMARK; + if (data[i]) + cfhsi->cfg.q_high_mark = nla_get_u32(data[i]); + + i = __IFLA_CAIF_HSI_QLOW_WATERMARK; + if (data[i]) + cfhsi->cfg.q_low_mark = nla_get_u32(data[i]); +} + +static int caif_hsi_changelink(struct net_device *dev, struct nlattr *tb[], + struct nlattr *data[]) +{ + cfhsi_netlink_parms(data, netdev_priv(dev)); + netdev_state_change(dev); + return 0; } -struct platform_driver cfhsi_plat_drv = { - .probe = cfhsi_probe, - .remove = cfhsi_remove, - .driver = { - .name = "cfhsi", - .owner = THIS_MODULE, - }, +static const struct nla_policy caif_hsi_policy[__IFLA_CAIF_HSI_MAX + 1] = { + [__IFLA_CAIF_HSI_INACTIVITY_TOUT] = { .type = NLA_U32, .len = 4 }, + [__IFLA_CAIF_HSI_AGGREGATION_TOUT] = { .type = NLA_U32, .len = 4 }, + [__IFLA_CAIF_HSI_HEAD_ALIGN] = { .type = NLA_U32, .len = 4 }, + [__IFLA_CAIF_HSI_TAIL_ALIGN] = { .type = NLA_U32, .len = 4 }, + [__IFLA_CAIF_HSI_QHIGH_WATERMARK] = { .type = NLA_U32, .len = 4 }, + [__IFLA_CAIF_HSI_QLOW_WATERMARK] = { .type = NLA_U32, .len = 4 }, }; -static void __exit cfhsi_exit_module(void) +static size_t caif_hsi_get_size(const struct net_device *dev) +{ + int i; + size_t s = 0; + for (i = __IFLA_CAIF_HSI_UNSPEC + 1; i < __IFLA_CAIF_HSI_MAX; i++) + s += nla_total_size(caif_hsi_policy[i].len); + return s; +} + +static int caif_hsi_fill_info(struct sk_buff *skb, const struct net_device *dev) +{ + struct cfhsi *cfhsi = netdev_priv(dev); + + if (nla_put_u32(skb, __IFLA_CAIF_HSI_INACTIVITY_TOUT, + cfhsi->cfg.inactivity_timeout) || + nla_put_u32(skb, __IFLA_CAIF_HSI_AGGREGATION_TOUT, + cfhsi->cfg.aggregation_timeout) || + nla_put_u32(skb, __IFLA_CAIF_HSI_HEAD_ALIGN, + cfhsi->cfg.head_align) || + nla_put_u32(skb, __IFLA_CAIF_HSI_TAIL_ALIGN, + cfhsi->cfg.tail_align) || + nla_put_u32(skb, __IFLA_CAIF_HSI_QHIGH_WATERMARK, + cfhsi->cfg.q_high_mark) || + nla_put_u32(skb, __IFLA_CAIF_HSI_QLOW_WATERMARK, + cfhsi->cfg.q_low_mark)) + return -EMSGSIZE; + + return 0; +} + +static int caif_hsi_newlink(struct net *src_net, struct net_device *dev, + struct nlattr *tb[], struct nlattr *data[]) { - struct list_head *list_node; - struct list_head *n; struct cfhsi *cfhsi = NULL; + struct cfhsi_ops *(*get_ops)(void); - spin_lock(&cfhsi_list_lock); - list_for_each_safe(list_node, n, &cfhsi_list) { - cfhsi = list_entry(list_node, struct cfhsi, list); + ASSERT_RTNL(); - /* Remove from list. */ - list_del(list_node); - spin_unlock(&cfhsi_list_lock); + cfhsi = netdev_priv(dev); + cfhsi_netlink_parms(data, cfhsi); + dev_net_set(cfhsi->ndev, src_net); + + get_ops = symbol_get(cfhsi_get_ops); + if (!get_ops) { + pr_err("%s: failed to get the cfhsi_ops\n", __func__); + return -ENODEV; + } - unregister_netdevice(cfhsi->ndev); + /* Assign the HSI device. */ + cfhsi->ops = (*get_ops)(); + if (!cfhsi->ops) { + pr_err("%s: failed to get the cfhsi_ops\n", __func__); + goto err; + } - spin_lock(&cfhsi_list_lock); + /* Assign the driver to this HSI device. 
*/ + cfhsi->ops->cb_ops = &cfhsi->cb_ops; + if (register_netdevice(dev)) { + pr_warn("%s: caif_hsi device registration failed\n", __func__); + goto err; } - spin_unlock(&cfhsi_list_lock); + /* Add CAIF HSI device to list. */ + list_add_tail(&cfhsi->list, &cfhsi_list); - /* Unregister platform driver. */ - platform_driver_unregister(&cfhsi_plat_drv); + return 0; +err: + symbol_put(cfhsi_get_ops); + return -ENODEV; } -static int __init cfhsi_init_module(void) +static struct rtnl_link_ops caif_hsi_link_ops __read_mostly = { + .kind = "cfhsi", + .priv_size = sizeof(struct cfhsi), + .setup = cfhsi_setup, + .maxtype = __IFLA_CAIF_HSI_MAX, + .policy = caif_hsi_policy, + .newlink = caif_hsi_newlink, + .changelink = caif_hsi_changelink, + .get_size = caif_hsi_get_size, + .fill_info = caif_hsi_fill_info, +}; + +static void __exit cfhsi_exit_module(void) { - int result; + struct list_head *list_node; + struct list_head *n; + struct cfhsi *cfhsi; - /* Initialize spin lock. */ - spin_lock_init(&cfhsi_list_lock); + rtnl_link_unregister(&caif_hsi_link_ops); - /* Register platform driver. */ - result = platform_driver_register(&cfhsi_plat_drv); - if (result) { - printk(KERN_ERR "Could not register platform HSI driver: %d.\n", - result); - goto err_dev_register; + rtnl_lock(); + list_for_each_safe(list_node, n, &cfhsi_list) { + cfhsi = list_entry(list_node, struct cfhsi, list); + unregister_netdev(cfhsi->ndev); } + rtnl_unlock(); +} - err_dev_register: - return result; +static int __init cfhsi_init_module(void) +{ + return rtnl_link_register(&caif_hsi_link_ops); } module_init(cfhsi_init_module); diff --git a/drivers/net/can/at91_can.c b/drivers/net/can/at91_can.c index 6ea905c2cf6d..fcff73a73b1d 100644 --- a/drivers/net/can/at91_can.c +++ b/drivers/net/can/at91_can.c @@ -170,7 +170,7 @@ static const struct at91_devtype_data at91_devtype_data[] __devinitconst = { }, }; -static struct can_bittiming_const at91_bittiming_const = { +static const struct can_bittiming_const at91_bittiming_const = { .name = KBUILD_MODNAME, .tseg1_min = 4, .tseg1_max = 16, diff --git a/drivers/net/can/bfin_can.c b/drivers/net/can/bfin_can.c index 3f88473423e9..f2d6d258a286 100644 --- a/drivers/net/can/bfin_can.c +++ b/drivers/net/can/bfin_can.c @@ -44,7 +44,7 @@ struct bfin_can_priv { /* * bfin can timing parameters */ -static struct can_bittiming_const bfin_can_bittiming_const = { +static const struct can_bittiming_const bfin_can_bittiming_const = { .name = DRV_NAME, .tseg1_min = 1, .tseg1_max = 16, @@ -597,7 +597,7 @@ static int __devinit bfin_can_probe(struct platform_device *pdev) dev_info(&pdev->dev, "%s device registered" "(®_base=%p, rx_irq=%d, tx_irq=%d, err_irq=%d, sclk=%d)\n", - DRV_NAME, (void *)priv->membase, priv->rx_irq, + DRV_NAME, priv->membase, priv->rx_irq, priv->tx_irq, priv->err_irq, priv->can.clock.freq); return 0; diff --git a/drivers/net/can/c_can/Kconfig b/drivers/net/can/c_can/Kconfig index ffb9773d102d..3b83bafcd947 100644 --- a/drivers/net/can/c_can/Kconfig +++ b/drivers/net/can/c_can/Kconfig @@ -1,15 +1,23 @@ menuconfig CAN_C_CAN - tristate "Bosch C_CAN devices" + tristate "Bosch C_CAN/D_CAN devices" depends on CAN_DEV && HAS_IOMEM if CAN_C_CAN config CAN_C_CAN_PLATFORM - tristate "Generic Platform Bus based C_CAN driver" + tristate "Generic Platform Bus based C_CAN/D_CAN driver" ---help--- - This driver adds support for the C_CAN chips connected to - the "platform bus" (Linux abstraction for directly to the + This driver adds support for the C_CAN/D_CAN chips connected + to the "platform bus" (Linux 
abstraction for directly to the processor attached devices) which can be found on various - boards from ST Microelectronics (http://www.st.com) - like the SPEAr1310 and SPEAr320 evaluation boards. + boards from ST Microelectronics (http://www.st.com) like the + SPEAr1310 and SPEAr320 evaluation boards & TI (www.ti.com) + boards like am335x, dm814x, dm813x and dm811x. + +config CAN_C_CAN_PCI + tristate "Generic PCI Bus based C_CAN/D_CAN driver" + depends on PCI + ---help--- + This driver adds support for the C_CAN/D_CAN chips connected + to the PCI bus. endif diff --git a/drivers/net/can/c_can/Makefile b/drivers/net/can/c_can/Makefile index 9273f6d5c4b7..ad1cc842170a 100644 --- a/drivers/net/can/c_can/Makefile +++ b/drivers/net/can/c_can/Makefile @@ -4,5 +4,6 @@ obj-$(CONFIG_CAN_C_CAN) += c_can.o obj-$(CONFIG_CAN_C_CAN_PLATFORM) += c_can_platform.o +obj-$(CONFIG_CAN_C_CAN_PCI) += c_can_pci.o ccflags-$(CONFIG_CAN_DEBUG_DEVICES) := -DDEBUG diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c index 86cd532c78f9..4c538e388655 100644 --- a/drivers/net/can/c_can/c_can.c +++ b/drivers/net/can/c_can/c_can.c @@ -41,6 +41,10 @@ #include "c_can.h" +/* Number of interface registers */ +#define IF_ENUM_REG_LEN 11 +#define C_CAN_IFACE(reg, iface) (C_CAN_IF1_##reg + (iface) * IF_ENUM_REG_LEN) + /* control register */ #define CONTROL_TEST BIT(7) #define CONTROL_CCE BIT(6) @@ -185,7 +189,7 @@ enum c_can_bus_error_types { C_CAN_ERROR_PASSIVE, }; -static struct can_bittiming_const c_can_bittiming_const = { +static const struct can_bittiming_const c_can_bittiming_const = { .name = KBUILD_MODNAME, .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ .tseg1_max = 16, @@ -209,10 +213,10 @@ static inline int get_tx_echo_msg_obj(const struct c_can_priv *priv) C_CAN_MSG_OBJ_TX_FIRST; } -static u32 c_can_read_reg32(struct c_can_priv *priv, void *reg) +static u32 c_can_read_reg32(struct c_can_priv *priv, enum reg index) { - u32 val = priv->read_reg(priv, reg); - val |= ((u32) priv->read_reg(priv, reg + 2)) << 16; + u32 val = priv->read_reg(priv, index); + val |= ((u32) priv->read_reg(priv, index + 1)) << 16; return val; } @@ -220,14 +224,14 @@ static void c_can_enable_all_interrupts(struct c_can_priv *priv, int enable) { unsigned int cntrl_save = priv->read_reg(priv, - &priv->regs->control); + C_CAN_CTRL_REG); if (enable) cntrl_save |= (CONTROL_SIE | CONTROL_EIE | CONTROL_IE); else cntrl_save &= ~(CONTROL_EIE | CONTROL_IE | CONTROL_SIE); - priv->write_reg(priv, &priv->regs->control, cntrl_save); + priv->write_reg(priv, C_CAN_CTRL_REG, cntrl_save); } static inline int c_can_msg_obj_is_busy(struct c_can_priv *priv, int iface) @@ -235,7 +239,7 @@ static inline int c_can_msg_obj_is_busy(struct c_can_priv *priv, int iface) int count = MIN_TIMEOUT_VALUE; while (count && priv->read_reg(priv, - &priv->regs->ifregs[iface].com_req) & + C_CAN_IFACE(COMREQ_REG, iface)) & IF_COMR_BUSY) { count--; udelay(1); @@ -258,9 +262,9 @@ static inline void c_can_object_get(struct net_device *dev, * register and message RAM must be complete in 6 CAN-CLK * period. 
*/ - priv->write_reg(priv, &priv->regs->ifregs[iface].com_mask, + priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface), IFX_WRITE_LOW_16BIT(mask)); - priv->write_reg(priv, &priv->regs->ifregs[iface].com_req, + priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface), IFX_WRITE_LOW_16BIT(objno)); if (c_can_msg_obj_is_busy(priv, iface)) @@ -278,9 +282,9 @@ static inline void c_can_object_put(struct net_device *dev, * register and message RAM must be complete in 6 CAN-CLK * period. */ - priv->write_reg(priv, &priv->regs->ifregs[iface].com_mask, + priv->write_reg(priv, C_CAN_IFACE(COMMSK_REG, iface), (IF_COMM_WR | IFX_WRITE_LOW_16BIT(mask))); - priv->write_reg(priv, &priv->regs->ifregs[iface].com_req, + priv->write_reg(priv, C_CAN_IFACE(COMREQ_REG, iface), IFX_WRITE_LOW_16BIT(objno)); if (c_can_msg_obj_is_busy(priv, iface)) @@ -306,18 +310,18 @@ static void c_can_write_msg_object(struct net_device *dev, flags |= IF_ARB_MSGVAL; - priv->write_reg(priv, &priv->regs->ifregs[iface].arb1, + priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), IFX_WRITE_LOW_16BIT(id)); - priv->write_reg(priv, &priv->regs->ifregs[iface].arb2, flags | + priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), flags | IFX_WRITE_HIGH_16BIT(id)); for (i = 0; i < frame->can_dlc; i += 2) { - priv->write_reg(priv, &priv->regs->ifregs[iface].data[i / 2], + priv->write_reg(priv, C_CAN_IFACE(DATA1_REG, iface) + i / 2, frame->data[i] | (frame->data[i + 1] << 8)); } /* enable interrupt for this message object */ - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), IF_MCONT_TXIE | IF_MCONT_TXRQST | IF_MCONT_EOB | frame->can_dlc); c_can_object_put(dev, iface, objno, IF_COMM_ALL); @@ -329,7 +333,7 @@ static inline void c_can_mark_rx_msg_obj(struct net_device *dev, { struct c_can_priv *priv = netdev_priv(dev); - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl_mask & ~(IF_MCONT_MSGLST | IF_MCONT_INTPND)); c_can_object_put(dev, iface, obj, IF_COMM_CONTROL); @@ -343,7 +347,7 @@ static inline void c_can_activate_all_lower_rx_msg_obj(struct net_device *dev, struct c_can_priv *priv = netdev_priv(dev); for (i = C_CAN_MSG_OBJ_RX_FIRST; i <= C_CAN_MSG_RX_LOW_LAST; i++) { - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl_mask & ~(IF_MCONT_MSGLST | IF_MCONT_INTPND | IF_MCONT_NEWDAT)); c_can_object_put(dev, iface, i, IF_COMM_CONTROL); @@ -356,7 +360,7 @@ static inline void c_can_activate_rx_msg_obj(struct net_device *dev, { struct c_can_priv *priv = netdev_priv(dev); - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), ctrl_mask & ~(IF_MCONT_MSGLST | IF_MCONT_INTPND | IF_MCONT_NEWDAT)); c_can_object_put(dev, iface, obj, IF_COMM_CONTROL); @@ -374,7 +378,7 @@ static void c_can_handle_lost_msg_obj(struct net_device *dev, c_can_object_get(dev, iface, objno, IF_COMM_ALL & ~IF_COMM_TXRQST); - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), IF_MCONT_CLR_MSGLST); c_can_object_put(dev, 0, objno, IF_COMM_CONTROL); @@ -410,8 +414,8 @@ static int c_can_read_msg_object(struct net_device *dev, int iface, int ctrl) frame->can_dlc = get_can_dlc(ctrl & 0x0F); - flags = priv->read_reg(priv, &priv->regs->ifregs[iface].arb2); - val = priv->read_reg(priv, &priv->regs->ifregs[iface].arb1) | + flags = priv->read_reg(priv, 
C_CAN_IFACE(ARB2_REG, iface)); + val = priv->read_reg(priv, C_CAN_IFACE(ARB1_REG, iface)) | (flags << 16); if (flags & IF_ARB_MSGXTD) @@ -424,7 +428,7 @@ static int c_can_read_msg_object(struct net_device *dev, int iface, int ctrl) else { for (i = 0; i < frame->can_dlc; i += 2) { data = priv->read_reg(priv, - &priv->regs->ifregs[iface].data[i / 2]); + C_CAN_IFACE(DATA1_REG, iface) + i / 2); frame->data[i] = data; frame->data[i + 1] = data >> 8; } @@ -444,40 +448,40 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface, { struct c_can_priv *priv = netdev_priv(dev); - priv->write_reg(priv, &priv->regs->ifregs[iface].mask1, + priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface), IFX_WRITE_LOW_16BIT(mask)); - priv->write_reg(priv, &priv->regs->ifregs[iface].mask2, + priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface), IFX_WRITE_HIGH_16BIT(mask)); - priv->write_reg(priv, &priv->regs->ifregs[iface].arb1, + priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), IFX_WRITE_LOW_16BIT(id)); - priv->write_reg(priv, &priv->regs->ifregs[iface].arb2, + priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), (IF_ARB_MSGVAL | IFX_WRITE_HIGH_16BIT(id))); - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, mcont); + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), mcont); c_can_object_put(dev, iface, objno, IF_COMM_ALL & ~IF_COMM_TXRQST); netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno, - c_can_read_reg32(priv, &priv->regs->msgval1)); + c_can_read_reg32(priv, C_CAN_MSGVAL1_REG)); } static void c_can_inval_msg_object(struct net_device *dev, int iface, int objno) { struct c_can_priv *priv = netdev_priv(dev); - priv->write_reg(priv, &priv->regs->ifregs[iface].arb1, 0); - priv->write_reg(priv, &priv->regs->ifregs[iface].arb2, 0); - priv->write_reg(priv, &priv->regs->ifregs[iface].msg_cntrl, 0); + priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface), 0); + priv->write_reg(priv, C_CAN_IFACE(ARB2_REG, iface), 0); + priv->write_reg(priv, C_CAN_IFACE(MSGCTRL_REG, iface), 0); c_can_object_put(dev, iface, objno, IF_COMM_ARB | IF_COMM_CONTROL); netdev_dbg(dev, "obj no:%d, msgval:0x%08x\n", objno, - c_can_read_reg32(priv, &priv->regs->msgval1)); + c_can_read_reg32(priv, C_CAN_MSGVAL1_REG)); } static inline int c_can_is_next_tx_obj_busy(struct c_can_priv *priv, int objno) { - int val = c_can_read_reg32(priv, &priv->regs->txrqst1); + int val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG); /* * as transmission request register's bit n-1 corresponds to @@ -540,12 +544,12 @@ static int c_can_set_bittiming(struct net_device *dev) netdev_info(dev, "setting BTR=%04x BRPE=%04x\n", reg_btr, reg_brpe); - ctrl_save = priv->read_reg(priv, &priv->regs->control); - priv->write_reg(priv, &priv->regs->control, + ctrl_save = priv->read_reg(priv, C_CAN_CTRL_REG); + priv->write_reg(priv, C_CAN_CTRL_REG, ctrl_save | CONTROL_CCE | CONTROL_INIT); - priv->write_reg(priv, &priv->regs->btr, reg_btr); - priv->write_reg(priv, &priv->regs->brp_ext, reg_brpe); - priv->write_reg(priv, &priv->regs->control, ctrl_save); + priv->write_reg(priv, C_CAN_BTR_REG, reg_btr); + priv->write_reg(priv, C_CAN_BRPEXT_REG, reg_brpe); + priv->write_reg(priv, C_CAN_CTRL_REG, ctrl_save); return 0; } @@ -587,36 +591,36 @@ static void c_can_chip_config(struct net_device *dev) struct c_can_priv *priv = netdev_priv(dev); /* enable automatic retransmission */ - priv->write_reg(priv, &priv->regs->control, + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_ENABLE_AR); if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) && (priv->can.ctrlmode & 
CAN_CTRLMODE_LOOPBACK)) { /* loopback + silent mode : useful for hot self-test */ - priv->write_reg(priv, &priv->regs->control, CONTROL_EIE | + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE | CONTROL_SIE | CONTROL_IE | CONTROL_TEST); - priv->write_reg(priv, &priv->regs->test, + priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK | TEST_SILENT); } else if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) { /* loopback mode : useful for self-test function */ - priv->write_reg(priv, &priv->regs->control, CONTROL_EIE | + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE | CONTROL_SIE | CONTROL_IE | CONTROL_TEST); - priv->write_reg(priv, &priv->regs->test, TEST_LBACK); + priv->write_reg(priv, C_CAN_TEST_REG, TEST_LBACK); } else if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) { /* silent mode : bus-monitoring mode */ - priv->write_reg(priv, &priv->regs->control, CONTROL_EIE | + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE | CONTROL_SIE | CONTROL_IE | CONTROL_TEST); - priv->write_reg(priv, &priv->regs->test, TEST_SILENT); + priv->write_reg(priv, C_CAN_TEST_REG, TEST_SILENT); } else /* normal mode*/ - priv->write_reg(priv, &priv->regs->control, + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_EIE | CONTROL_SIE | CONTROL_IE); /* configure message objects */ c_can_configure_msg_objects(dev); /* set a `lec` value so that we can check for updates later */ - priv->write_reg(priv, &priv->regs->status, LEC_UNUSED); + priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); /* set bittiming params */ c_can_set_bittiming(dev); @@ -669,7 +673,7 @@ static int c_can_get_berr_counter(const struct net_device *dev, unsigned int reg_err_counter; struct c_can_priv *priv = netdev_priv(dev); - reg_err_counter = priv->read_reg(priv, &priv->regs->err_cnt); + reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG); bec->rxerr = (reg_err_counter & ERR_CNT_REC_MASK) >> ERR_CNT_REC_SHIFT; bec->txerr = reg_err_counter & ERR_CNT_TEC_MASK; @@ -697,12 +701,12 @@ static void c_can_do_tx(struct net_device *dev) for (/* nix */; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) { msg_obj_no = get_tx_echo_msg_obj(priv); - val = c_can_read_reg32(priv, &priv->regs->txrqst1); + val = c_can_read_reg32(priv, C_CAN_TXRQST1_REG); if (!(val & (1 << (msg_obj_no - 1)))) { can_get_echo_skb(dev, msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST); stats->tx_bytes += priv->read_reg(priv, - &priv->regs->ifregs[0].msg_cntrl) + C_CAN_IFACE(MSGCTRL_REG, 0)) & IF_MCONT_DLC_MASK; stats->tx_packets++; c_can_inval_msg_object(dev, 0, msg_obj_no); @@ -744,11 +748,11 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota) u32 num_rx_pkts = 0; unsigned int msg_obj, msg_ctrl_save; struct c_can_priv *priv = netdev_priv(dev); - u32 val = c_can_read_reg32(priv, &priv->regs->intpnd1); + u32 val = c_can_read_reg32(priv, C_CAN_INTPND1_REG); for (msg_obj = C_CAN_MSG_OBJ_RX_FIRST; msg_obj <= C_CAN_MSG_OBJ_RX_LAST && quota > 0; - val = c_can_read_reg32(priv, &priv->regs->intpnd1), + val = c_can_read_reg32(priv, C_CAN_INTPND1_REG), msg_obj++) { /* * as interrupt pending register's bit n-1 corresponds to @@ -758,7 +762,7 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota) c_can_object_get(dev, 0, msg_obj, IF_COMM_ALL & ~IF_COMM_TXRQST); msg_ctrl_save = priv->read_reg(priv, - &priv->regs->ifregs[0].msg_cntrl); + C_CAN_IFACE(MSGCTRL_REG, 0)); if (msg_ctrl_save & IF_MCONT_EOB) return num_rx_pkts; @@ -819,7 +823,7 @@ static int c_can_handle_state_change(struct net_device *dev, return 0; c_can_get_berr_counter(dev, &bec); - reg_err_counter = priv->read_reg(priv, 
&priv->regs->err_cnt); + reg_err_counter = priv->read_reg(priv, C_CAN_ERR_CNT_REG); rx_err_passive = (reg_err_counter & ERR_CNT_RP_MASK) >> ERR_CNT_RP_SHIFT; @@ -935,7 +939,7 @@ static int c_can_handle_bus_err(struct net_device *dev, } /* set a `lec` value so that we can check for updates later */ - priv->write_reg(priv, &priv->regs->status, LEC_UNUSED); + priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); netif_receive_skb(skb); stats->rx_packets++; @@ -959,15 +963,15 @@ static int c_can_poll(struct napi_struct *napi, int quota) /* status events have the highest priority */ if (irqstatus == STATUS_INTERRUPT) { priv->current_status = priv->read_reg(priv, - &priv->regs->status); + C_CAN_STS_REG); /* handle Tx/Rx events */ if (priv->current_status & STATUS_TXOK) - priv->write_reg(priv, &priv->regs->status, + priv->write_reg(priv, C_CAN_STS_REG, priv->current_status & ~STATUS_TXOK); if (priv->current_status & STATUS_RXOK) - priv->write_reg(priv, &priv->regs->status, + priv->write_reg(priv, C_CAN_STS_REG, priv->current_status & ~STATUS_RXOK); /* handle state changes */ @@ -1033,7 +1037,7 @@ static irqreturn_t c_can_isr(int irq, void *dev_id) struct net_device *dev = (struct net_device *)dev_id; struct c_can_priv *priv = netdev_priv(dev); - priv->irqstatus = priv->read_reg(priv, &priv->regs->interrupt); + priv->irqstatus = priv->read_reg(priv, C_CAN_INT_REG); if (!priv->irqstatus) return IRQ_NONE; diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h index 5f32d34af507..01a7049ab990 100644 --- a/drivers/net/can/c_can/c_can.h +++ b/drivers/net/can/c_can/c_can.h @@ -22,43 +22,129 @@ #ifndef C_CAN_H #define C_CAN_H -/* c_can IF registers */ -struct c_can_if_regs { - u16 com_req; - u16 com_mask; - u16 mask1; - u16 mask2; - u16 arb1; - u16 arb2; - u16 msg_cntrl; - u16 data[4]; - u16 _reserved[13]; +enum reg { + C_CAN_CTRL_REG = 0, + C_CAN_STS_REG, + C_CAN_ERR_CNT_REG, + C_CAN_BTR_REG, + C_CAN_INT_REG, + C_CAN_TEST_REG, + C_CAN_BRPEXT_REG, + C_CAN_IF1_COMREQ_REG, + C_CAN_IF1_COMMSK_REG, + C_CAN_IF1_MASK1_REG, + C_CAN_IF1_MASK2_REG, + C_CAN_IF1_ARB1_REG, + C_CAN_IF1_ARB2_REG, + C_CAN_IF1_MSGCTRL_REG, + C_CAN_IF1_DATA1_REG, + C_CAN_IF1_DATA2_REG, + C_CAN_IF1_DATA3_REG, + C_CAN_IF1_DATA4_REG, + C_CAN_IF2_COMREQ_REG, + C_CAN_IF2_COMMSK_REG, + C_CAN_IF2_MASK1_REG, + C_CAN_IF2_MASK2_REG, + C_CAN_IF2_ARB1_REG, + C_CAN_IF2_ARB2_REG, + C_CAN_IF2_MSGCTRL_REG, + C_CAN_IF2_DATA1_REG, + C_CAN_IF2_DATA2_REG, + C_CAN_IF2_DATA3_REG, + C_CAN_IF2_DATA4_REG, + C_CAN_TXRQST1_REG, + C_CAN_TXRQST2_REG, + C_CAN_NEWDAT1_REG, + C_CAN_NEWDAT2_REG, + C_CAN_INTPND1_REG, + C_CAN_INTPND2_REG, + C_CAN_MSGVAL1_REG, + C_CAN_MSGVAL2_REG, }; -/* c_can hardware registers */ -struct c_can_regs { - u16 control; - u16 status; - u16 err_cnt; - u16 btr; - u16 interrupt; - u16 test; - u16 brp_ext; - u16 _reserved1; - struct c_can_if_regs ifregs[2]; /* [0] = IF1 and [1] = IF2 */ - u16 _reserved2[8]; - u16 txrqst1; - u16 txrqst2; - u16 _reserved3[6]; - u16 newdat1; - u16 newdat2; - u16 _reserved4[6]; - u16 intpnd1; - u16 intpnd2; - u16 _reserved5[6]; - u16 msgval1; - u16 msgval2; - u16 _reserved6[6]; +static const u16 reg_map_c_can[] = { + [C_CAN_CTRL_REG] = 0x00, + [C_CAN_STS_REG] = 0x02, + [C_CAN_ERR_CNT_REG] = 0x04, + [C_CAN_BTR_REG] = 0x06, + [C_CAN_INT_REG] = 0x08, + [C_CAN_TEST_REG] = 0x0A, + [C_CAN_BRPEXT_REG] = 0x0C, + [C_CAN_IF1_COMREQ_REG] = 0x10, + [C_CAN_IF1_COMMSK_REG] = 0x12, + [C_CAN_IF1_MASK1_REG] = 0x14, + [C_CAN_IF1_MASK2_REG] = 0x16, + [C_CAN_IF1_ARB1_REG] = 0x18, + [C_CAN_IF1_ARB2_REG] = 0x1A, + 
[C_CAN_IF1_MSGCTRL_REG] = 0x1C, + [C_CAN_IF1_DATA1_REG] = 0x1E, + [C_CAN_IF1_DATA2_REG] = 0x20, + [C_CAN_IF1_DATA3_REG] = 0x22, + [C_CAN_IF1_DATA4_REG] = 0x24, + [C_CAN_IF2_COMREQ_REG] = 0x40, + [C_CAN_IF2_COMMSK_REG] = 0x42, + [C_CAN_IF2_MASK1_REG] = 0x44, + [C_CAN_IF2_MASK2_REG] = 0x46, + [C_CAN_IF2_ARB1_REG] = 0x48, + [C_CAN_IF2_ARB2_REG] = 0x4A, + [C_CAN_IF2_MSGCTRL_REG] = 0x4C, + [C_CAN_IF2_DATA1_REG] = 0x4E, + [C_CAN_IF2_DATA2_REG] = 0x50, + [C_CAN_IF2_DATA3_REG] = 0x52, + [C_CAN_IF2_DATA4_REG] = 0x54, + [C_CAN_TXRQST1_REG] = 0x80, + [C_CAN_TXRQST2_REG] = 0x82, + [C_CAN_NEWDAT1_REG] = 0x90, + [C_CAN_NEWDAT2_REG] = 0x92, + [C_CAN_INTPND1_REG] = 0xA0, + [C_CAN_INTPND2_REG] = 0xA2, + [C_CAN_MSGVAL1_REG] = 0xB0, + [C_CAN_MSGVAL2_REG] = 0xB2, +}; + +static const u16 reg_map_d_can[] = { + [C_CAN_CTRL_REG] = 0x00, + [C_CAN_STS_REG] = 0x04, + [C_CAN_ERR_CNT_REG] = 0x08, + [C_CAN_BTR_REG] = 0x0C, + [C_CAN_BRPEXT_REG] = 0x0E, + [C_CAN_INT_REG] = 0x10, + [C_CAN_TEST_REG] = 0x14, + [C_CAN_TXRQST1_REG] = 0x88, + [C_CAN_TXRQST2_REG] = 0x8A, + [C_CAN_NEWDAT1_REG] = 0x9C, + [C_CAN_NEWDAT2_REG] = 0x9E, + [C_CAN_INTPND1_REG] = 0xB0, + [C_CAN_INTPND2_REG] = 0xB2, + [C_CAN_MSGVAL1_REG] = 0xC4, + [C_CAN_MSGVAL2_REG] = 0xC6, + [C_CAN_IF1_COMREQ_REG] = 0x100, + [C_CAN_IF1_COMMSK_REG] = 0x102, + [C_CAN_IF1_MASK1_REG] = 0x104, + [C_CAN_IF1_MASK2_REG] = 0x106, + [C_CAN_IF1_ARB1_REG] = 0x108, + [C_CAN_IF1_ARB2_REG] = 0x10A, + [C_CAN_IF1_MSGCTRL_REG] = 0x10C, + [C_CAN_IF1_DATA1_REG] = 0x110, + [C_CAN_IF1_DATA2_REG] = 0x112, + [C_CAN_IF1_DATA3_REG] = 0x114, + [C_CAN_IF1_DATA4_REG] = 0x116, + [C_CAN_IF2_COMREQ_REG] = 0x120, + [C_CAN_IF2_COMMSK_REG] = 0x122, + [C_CAN_IF2_MASK1_REG] = 0x124, + [C_CAN_IF2_MASK2_REG] = 0x126, + [C_CAN_IF2_ARB1_REG] = 0x128, + [C_CAN_IF2_ARB2_REG] = 0x12A, + [C_CAN_IF2_MSGCTRL_REG] = 0x12C, + [C_CAN_IF2_DATA1_REG] = 0x130, + [C_CAN_IF2_DATA2_REG] = 0x132, + [C_CAN_IF2_DATA3_REG] = 0x134, + [C_CAN_IF2_DATA4_REG] = 0x136, +}; + +enum c_can_dev_id { + C_CAN_DEVTYPE, + D_CAN_DEVTYPE, }; /* c_can private data structure */ @@ -69,9 +155,10 @@ struct c_can_priv { int tx_object; int current_status; int last_status; - u16 (*read_reg) (struct c_can_priv *priv, void *reg); - void (*write_reg) (struct c_can_priv *priv, void *reg, u16 val); - struct c_can_regs __iomem *regs; + u16 (*read_reg) (struct c_can_priv *priv, enum reg index); + void (*write_reg) (struct c_can_priv *priv, enum reg index, u16 val); + void __iomem *base; + const u16 *regs; unsigned long irq_flags; /* for request_irq() */ unsigned int tx_next; unsigned int tx_echo; diff --git a/drivers/net/can/c_can/c_can_pci.c b/drivers/net/can/c_can/c_can_pci.c new file mode 100644 index 000000000000..1011146ea513 --- /dev/null +++ b/drivers/net/can/c_can/c_can_pci.c @@ -0,0 +1,221 @@ +/* + * PCI bus driver for Bosch C_CAN/D_CAN controller + * + * Copyright (C) 2012 Federico Vaga <federico.vaga@gmail.com> + * + * Borrowed from c_can_platform.c + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/netdevice.h> +#include <linux/pci.h> + +#include <linux/can/dev.h> + +#include "c_can.h" + +enum c_can_pci_reg_align { + C_CAN_REG_ALIGN_16, + C_CAN_REG_ALIGN_32, +}; + +struct c_can_pci_data { + /* Specify if is C_CAN or D_CAN */ + enum c_can_dev_id type; + /* Set the register alignment in the memory */ + enum c_can_pci_reg_align reg_align; + /* Set the frequency */ + unsigned int freq; +}; + +/* + * 16-bit c_can registers can be arranged differently in the memory + * architecture of different implementations. For example: 16-bit + * registers can be aligned to a 16-bit boundary or 32-bit boundary etc. + * Handle the same by providing a common read/write interface. + */ +static u16 c_can_pci_read_reg_aligned_to_16bit(struct c_can_priv *priv, + enum reg index) +{ + return readw(priv->base + priv->regs[index]); +} + +static void c_can_pci_write_reg_aligned_to_16bit(struct c_can_priv *priv, + enum reg index, u16 val) +{ + writew(val, priv->base + priv->regs[index]); +} + +static u16 c_can_pci_read_reg_aligned_to_32bit(struct c_can_priv *priv, + enum reg index) +{ + return readw(priv->base + 2 * priv->regs[index]); +} + +static void c_can_pci_write_reg_aligned_to_32bit(struct c_can_priv *priv, + enum reg index, u16 val) +{ + writew(val, priv->base + 2 * priv->regs[index]); +} + +static int __devinit c_can_pci_probe(struct pci_dev *pdev, + const struct pci_device_id *ent) +{ + struct c_can_pci_data *c_can_pci_data = (void *)ent->driver_data; + struct c_can_priv *priv; + struct net_device *dev; + void __iomem *addr; + int ret; + + ret = pci_enable_device(pdev); + if (ret) { + dev_err(&pdev->dev, "pci_enable_device FAILED\n"); + goto out; + } + + ret = pci_request_regions(pdev, KBUILD_MODNAME); + if (ret) { + dev_err(&pdev->dev, "pci_request_regions FAILED\n"); + goto out_disable_device; + } + + pci_set_master(pdev); + pci_enable_msi(pdev); + + addr = pci_iomap(pdev, 0, pci_resource_len(pdev, 0)); + if (!addr) { + dev_err(&pdev->dev, + "device has no PCI memory resources, " + "failing adapter\n"); + ret = -ENOMEM; + goto out_release_regions; + } + + /* allocate the c_can device */ + dev = alloc_c_can_dev(); + if (!dev) { + ret = -ENOMEM; + goto out_iounmap; + } + + priv = netdev_priv(dev); + pci_set_drvdata(pdev, dev); + SET_NETDEV_DEV(dev, &pdev->dev); + + dev->irq = pdev->irq; + priv->base = addr; + + if (!c_can_pci_data->freq) { + dev_err(&pdev->dev, "no clock frequency defined\n"); + ret = -ENODEV; + goto out_free_c_can; + } else { + priv->can.clock.freq = c_can_pci_data->freq; + } + + /* Configure CAN type */ + switch (c_can_pci_data->type) { + case C_CAN_DEVTYPE: + priv->regs = reg_map_c_can; + break; + case D_CAN_DEVTYPE: + priv->regs = reg_map_d_can; + priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES; + break; + default: + ret = -EINVAL; + goto out_free_c_can; + } + + /* Configure access to registers */ + switch (c_can_pci_data->reg_align) { + case C_CAN_REG_ALIGN_32: + priv->read_reg = c_can_pci_read_reg_aligned_to_32bit; + priv->write_reg = c_can_pci_write_reg_aligned_to_32bit; + break; + case C_CAN_REG_ALIGN_16: + priv->read_reg = c_can_pci_read_reg_aligned_to_16bit; + priv->write_reg = c_can_pci_write_reg_aligned_to_16bit; + break; + default: + ret = -EINVAL; + goto out_free_c_can; + } + + ret = register_c_can_dev(dev); + if (ret) { + dev_err(&pdev->dev, "registering %s failed (err=%d)\n", + KBUILD_MODNAME, ret); + goto out_free_c_can; + } + + dev_dbg(&pdev->dev, "%s device registered (regs=%p, 
irq=%d)\n", + KBUILD_MODNAME, priv->regs, dev->irq); + + return 0; + +out_free_c_can: + pci_set_drvdata(pdev, NULL); + free_c_can_dev(dev); +out_iounmap: + pci_iounmap(pdev, addr); +out_release_regions: + pci_disable_msi(pdev); + pci_clear_master(pdev); + pci_release_regions(pdev); +out_disable_device: + pci_disable_device(pdev); +out: + return ret; +} + +static void __devexit c_can_pci_remove(struct pci_dev *pdev) +{ + struct net_device *dev = pci_get_drvdata(pdev); + struct c_can_priv *priv = netdev_priv(dev); + + unregister_c_can_dev(dev); + + pci_set_drvdata(pdev, NULL); + free_c_can_dev(dev); + + pci_iounmap(pdev, priv->base); + pci_disable_msi(pdev); + pci_clear_master(pdev); + pci_release_regions(pdev); + pci_disable_device(pdev); +} + +static struct c_can_pci_data c_can_sta2x11= { + .type = C_CAN_DEVTYPE, + .reg_align = C_CAN_REG_ALIGN_32, + .freq = 52000000, /* 52 Mhz */ +}; + +#define C_CAN_ID(_vend, _dev, _driverdata) { \ + PCI_DEVICE(_vend, _dev), \ + .driver_data = (unsigned long)&_driverdata, \ +} +static DEFINE_PCI_DEVICE_TABLE(c_can_pci_tbl) = { + C_CAN_ID(PCI_VENDOR_ID_STMICRO, PCI_DEVICE_ID_STMICRO_CAN, + c_can_sta2x11), + {}, +}; +static struct pci_driver c_can_pci_driver = { + .name = KBUILD_MODNAME, + .id_table = c_can_pci_tbl, + .probe = c_can_pci_probe, + .remove = __devexit_p(c_can_pci_remove), +}; + +module_pci_driver(c_can_pci_driver); + +MODULE_AUTHOR("Federico Vaga <federico.vaga@gmail.com>"); +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("PCI CAN bus driver for Bosch C_CAN/D_CAN controller"); +MODULE_DEVICE_TABLE(pci, c_can_pci_tbl); diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c index 5e1a5ff6476e..f0921d16f0a9 100644 --- a/drivers/net/can/c_can/c_can_platform.c +++ b/drivers/net/can/c_can/c_can_platform.c @@ -42,27 +42,27 @@ * Handle the same by providing a common read/write interface. 
*/ static u16 c_can_plat_read_reg_aligned_to_16bit(struct c_can_priv *priv, - void *reg) + enum reg index) { - return readw(reg); + return readw(priv->base + priv->regs[index]); } static void c_can_plat_write_reg_aligned_to_16bit(struct c_can_priv *priv, - void *reg, u16 val) + enum reg index, u16 val) { - writew(val, reg); + writew(val, priv->base + priv->regs[index]); } static u16 c_can_plat_read_reg_aligned_to_32bit(struct c_can_priv *priv, - void *reg) + enum reg index) { - return readw(reg + (long)reg - (long)priv->regs); + return readw(priv->base + 2 * priv->regs[index]); } static void c_can_plat_write_reg_aligned_to_32bit(struct c_can_priv *priv, - void *reg, u16 val) + enum reg index, u16 val) { - writew(val, reg + (long)reg - (long)priv->regs); + writew(val, priv->base + 2 * priv->regs[index]); } static int __devinit c_can_plat_probe(struct platform_device *pdev) @@ -71,6 +71,7 @@ static int __devinit c_can_plat_probe(struct platform_device *pdev) void __iomem *addr; struct net_device *dev; struct c_can_priv *priv; + const struct platform_device_id *id; struct resource *mem; int irq; #ifdef CONFIG_HAVE_CLK @@ -115,26 +116,40 @@ static int __devinit c_can_plat_probe(struct platform_device *pdev) } priv = netdev_priv(dev); + id = platform_get_device_id(pdev); + switch (id->driver_data) { + case C_CAN_DEVTYPE: + priv->regs = reg_map_c_can; + switch (mem->flags & IORESOURCE_MEM_TYPE_MASK) { + case IORESOURCE_MEM_32BIT: + priv->read_reg = c_can_plat_read_reg_aligned_to_32bit; + priv->write_reg = c_can_plat_write_reg_aligned_to_32bit; + break; + case IORESOURCE_MEM_16BIT: + default: + priv->read_reg = c_can_plat_read_reg_aligned_to_16bit; + priv->write_reg = c_can_plat_write_reg_aligned_to_16bit; + break; + } + break; + case D_CAN_DEVTYPE: + priv->regs = reg_map_d_can; + priv->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES; + priv->read_reg = c_can_plat_read_reg_aligned_to_16bit; + priv->write_reg = c_can_plat_write_reg_aligned_to_16bit; + break; + default: + ret = -EINVAL; + goto exit_free_device; + } dev->irq = irq; - priv->regs = addr; + priv->base = addr; #ifdef CONFIG_HAVE_CLK priv->can.clock.freq = clk_get_rate(clk); priv->priv = clk; #endif - switch (mem->flags & IORESOURCE_MEM_TYPE_MASK) { - case IORESOURCE_MEM_32BIT: - priv->read_reg = c_can_plat_read_reg_aligned_to_32bit; - priv->write_reg = c_can_plat_write_reg_aligned_to_32bit; - break; - case IORESOURCE_MEM_16BIT: - default: - priv->read_reg = c_can_plat_read_reg_aligned_to_16bit; - priv->write_reg = c_can_plat_write_reg_aligned_to_16bit; - break; - } - platform_set_drvdata(pdev, dev); SET_NETDEV_DEV(dev, &pdev->dev); @@ -146,7 +161,7 @@ static int __devinit c_can_plat_probe(struct platform_device *pdev) } dev_info(&pdev->dev, "%s device registered (regs=%p, irq=%d)\n", - KBUILD_MODNAME, priv->regs, dev->irq); + KBUILD_MODNAME, priv->base, dev->irq); return 0; exit_free_device: @@ -176,7 +191,7 @@ static int __devexit c_can_plat_remove(struct platform_device *pdev) platform_set_drvdata(pdev, NULL); free_c_can_dev(dev); - iounmap(priv->regs); + iounmap(priv->base); mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); release_mem_region(mem->start, resource_size(mem)); @@ -188,6 +203,20 @@ static int __devexit c_can_plat_remove(struct platform_device *pdev) return 0; } +static const struct platform_device_id c_can_id_table[] = { + { + .name = KBUILD_MODNAME, + .driver_data = C_CAN_DEVTYPE, + }, { + .name = "c_can", + .driver_data = C_CAN_DEVTYPE, + }, { + .name = "d_can", + .driver_data = D_CAN_DEVTYPE, + }, { + } +}; + 
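The c_can hunks above replace direct struct-member register access with an enum index that is resolved through a per-variant offset table (reg_map_c_can / reg_map_d_can) and applied on top of priv->base by the read_reg/write_reg hooks; the variants for 32-bit-aligned layouts simply double the table offset. A minimal sketch of that indirection, using invented toy_* names and only two registers rather than the driver's full map:

        #include <linux/types.h>
        #include <linux/io.h>

        enum toy_reg { TOY_CTRL_REG = 0, TOY_STS_REG, TOY_NR_REGS };

        /* one offset table per hardware variant; the logical indices stay the same */
        static const u16 toy_map_c_can[TOY_NR_REGS] = {
                [TOY_CTRL_REG] = 0x00,
                [TOY_STS_REG]  = 0x02,
        };

        static const u16 toy_map_d_can[TOY_NR_REGS] = {
                [TOY_CTRL_REG] = 0x00,
                [TOY_STS_REG]  = 0x04,
        };

        struct toy_priv {
                void __iomem *base;     /* ioremapped register window */
                const u16 *regs;        /* points at one of the maps above */
                u16 (*read_reg)(struct toy_priv *priv, enum toy_reg index);
        };

        /* 16-bit registers packed on 16-bit boundaries */
        static u16 toy_read_aligned_16bit(struct toy_priv *priv, enum toy_reg index)
        {
                return readw(priv->base + priv->regs[index]);
        }

        /* 16-bit registers living on 32-bit boundaries: double the table offset */
        static u16 toy_read_aligned_32bit(struct toy_priv *priv, enum toy_reg index)
        {
                return readw(priv->base + 2 * priv->regs[index]);
        }

        /* probe-time selection, mirroring the switch statements in the hunks above */
        static void toy_setup(struct toy_priv *priv, bool is_d_can, bool aligned_32bit)
        {
                priv->regs = is_d_can ? toy_map_d_can : toy_map_c_can;
                priv->read_reg = aligned_32bit ? toy_read_aligned_32bit
                                               : toy_read_aligned_16bit;
        }

A caller then reads status as priv->read_reg(priv, TOY_STS_REG) without knowing which variant or bus alignment sits underneath, which is the shape of the priv->read_reg(priv, C_CAN_STS_REG) calls introduced throughout c_can.c above.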
static struct platform_driver c_can_plat_driver = { .driver = { .name = KBUILD_MODNAME, @@ -195,6 +224,7 @@ static struct platform_driver c_can_plat_driver = { }, .probe = c_can_plat_probe, .remove = __devexit_p(c_can_plat_remove), + .id_table = c_can_id_table, }; module_platform_driver(c_can_plat_driver); diff --git a/drivers/net/can/cc770/cc770.c b/drivers/net/can/cc770/cc770.c index d42a6a7396f2..0f12abf6591c 100644 --- a/drivers/net/can/cc770/cc770.c +++ b/drivers/net/can/cc770/cc770.c @@ -90,7 +90,7 @@ static unsigned char cc770_obj_flags[CC770_OBJ_MAX] = { [CC770_OBJ_TX] = 0, }; -static struct can_bittiming_const cc770_bittiming_const = { +static const struct can_bittiming_const cc770_bittiming_const = { .name = KBUILD_MODNAME, .tseg1_min = 1, .tseg1_max = 16, @@ -695,7 +695,7 @@ static void cc770_tx_interrupt(struct net_device *dev, unsigned int o) netif_wake_queue(dev); } -irqreturn_t cc770_interrupt(int irq, void *dev_id) +static irqreturn_t cc770_interrupt(int irq, void *dev_id) { struct net_device *dev = (struct net_device *)dev_id; struct cc770_priv *priv = netdev_priv(dev); diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c index f03d7a481a80..963e2ccd10db 100644 --- a/drivers/net/can/dev.c +++ b/drivers/net/can/dev.c @@ -33,6 +33,39 @@ MODULE_DESCRIPTION(MOD_DESC); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Wolfgang Grandegger <wg@grandegger.com>"); +/* CAN DLC to real data length conversion helpers */ + +static const u8 dlc2len[] = {0, 1, 2, 3, 4, 5, 6, 7, + 8, 12, 16, 20, 24, 32, 48, 64}; + +/* get data length from can_dlc with sanitized can_dlc */ +u8 can_dlc2len(u8 can_dlc) +{ + return dlc2len[can_dlc & 0x0F]; +} +EXPORT_SYMBOL_GPL(can_dlc2len); + +static const u8 len2dlc[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, /* 0 - 8 */ + 9, 9, 9, 9, /* 9 - 12 */ + 10, 10, 10, 10, /* 13 - 16 */ + 11, 11, 11, 11, /* 17 - 20 */ + 12, 12, 12, 12, /* 21 - 24 */ + 13, 13, 13, 13, 13, 13, 13, 13, /* 25 - 32 */ + 14, 14, 14, 14, 14, 14, 14, 14, /* 33 - 40 */ + 14, 14, 14, 14, 14, 14, 14, 14, /* 41 - 48 */ + 15, 15, 15, 15, 15, 15, 15, 15, /* 49 - 56 */ + 15, 15, 15, 15, 15, 15, 15, 15}; /* 57 - 64 */ + +/* map the sanitized data length to an appropriate data length code */ +u8 can_len2dlc(u8 len) +{ + if (unlikely(len > 64)) + return 0xF; + + return len2dlc[len]; +} +EXPORT_SYMBOL_GPL(can_len2dlc); + #ifdef CONFIG_CAN_CALC_BITTIMING #define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */ @@ -368,7 +401,7 @@ EXPORT_SYMBOL_GPL(can_free_echo_skb); /* * CAN device restart for bus-off recovery */ -void can_restart(unsigned long data) +static void can_restart(unsigned long data) { struct net_device *dev = (struct net_device *)data; struct can_priv *priv = netdev_priv(dev); @@ -454,7 +487,7 @@ EXPORT_SYMBOL_GPL(can_bus_off); static void can_setup(struct net_device *dev) { dev->type = ARPHRD_CAN; - dev->mtu = sizeof(struct can_frame); + dev->mtu = CAN_MTU; dev->hard_header_len = 0; dev->addr_len = 0; dev->tx_queue_len = 10; diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index 81d474102378..c5f143165f80 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -34,6 +34,7 @@ #include <linux/list.h> #include <linux/module.h> #include <linux/of.h> +#include <linux/of_device.h> #include <linux/platform_device.h> #include <linux/pinctrl/consumer.h> @@ -165,10 +166,21 @@ struct flexcan_regs { u32 imask1; /* 0x28 */ u32 iflag2; /* 0x2c */ u32 iflag1; /* 0x30 */ - u32 _reserved2[19]; + u32 crl2; /* 0x34 */ + u32 esr2; /* 0x38 */ + u32 imeur; /* 0x3c */ + u32 lrfr; /* 
0x40 */ + u32 crcr; /* 0x44 */ + u32 rxfgmask; /* 0x48 */ + u32 rxfir; /* 0x4c */ + u32 _reserved3[12]; struct flexcan_mb cantxfg[64]; }; +struct flexcan_devtype_data { + u32 hw_ver; /* hardware controller version */ +}; + struct flexcan_priv { struct can_priv can; struct net_device *dev; @@ -178,11 +190,21 @@ struct flexcan_priv { u32 reg_esr; u32 reg_ctrl_default; - struct clk *clk; + struct clk *clk_ipg; + struct clk *clk_per; struct flexcan_platform_data *pdata; + const struct flexcan_devtype_data *devtype_data; +}; + +static struct flexcan_devtype_data fsl_p1010_devtype_data = { + .hw_ver = 3, +}; + +static struct flexcan_devtype_data fsl_imx6q_devtype_data = { + .hw_ver = 10, }; -static struct can_bittiming_const flexcan_bittiming_const = { +static const struct can_bittiming_const flexcan_bittiming_const = { .name = DRV_NAME, .tseg1_min = 4, .tseg1_max = 16, @@ -750,6 +772,9 @@ static int flexcan_chip_start(struct net_device *dev) flexcan_write(0x0, ®s->rx14mask); flexcan_write(0x0, ®s->rx15mask); + if (priv->devtype_data->hw_ver >= 10) + flexcan_write(0x0, ®s->rxfgmask); + flexcan_transceiver_switch(priv, 1); /* synchronize with the can bus */ @@ -804,7 +829,8 @@ static int flexcan_open(struct net_device *dev) struct flexcan_priv *priv = netdev_priv(dev); int err; - clk_prepare_enable(priv->clk); + clk_prepare_enable(priv->clk_ipg); + clk_prepare_enable(priv->clk_per); err = open_candev(dev); if (err) @@ -826,7 +852,8 @@ static int flexcan_open(struct net_device *dev) out_close: close_candev(dev); out: - clk_disable_unprepare(priv->clk); + clk_disable_unprepare(priv->clk_per); + clk_disable_unprepare(priv->clk_ipg); return err; } @@ -840,7 +867,8 @@ static int flexcan_close(struct net_device *dev) flexcan_chip_stop(dev); free_irq(dev->irq, dev); - clk_disable_unprepare(priv->clk); + clk_disable_unprepare(priv->clk_per); + clk_disable_unprepare(priv->clk_ipg); close_candev(dev); @@ -879,7 +907,8 @@ static int __devinit register_flexcandev(struct net_device *dev) struct flexcan_regs __iomem *regs = priv->base; u32 reg, err; - clk_prepare_enable(priv->clk); + clk_prepare_enable(priv->clk_ipg); + clk_prepare_enable(priv->clk_per); /* select "bus clock", chip must be disabled */ flexcan_chip_disable(priv); @@ -912,7 +941,8 @@ static int __devinit register_flexcandev(struct net_device *dev) out: /* disable core and turn off clocks */ flexcan_chip_disable(priv); - clk_disable_unprepare(priv->clk); + clk_disable_unprepare(priv->clk_per); + clk_disable_unprepare(priv->clk_ipg); return err; } @@ -922,12 +952,25 @@ static void __devexit unregister_flexcandev(struct net_device *dev) unregister_candev(dev); } +static const struct of_device_id flexcan_of_match[] = { + { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, }, + { .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data, }, + { /* sentinel */ }, +}; + +static const struct platform_device_id flexcan_id_table[] = { + { .name = "flexcan", .driver_data = (kernel_ulong_t)&fsl_p1010_devtype_data, }, + { /* sentinel */ }, +}; + static int __devinit flexcan_probe(struct platform_device *pdev) { + const struct of_device_id *of_id; + const struct flexcan_devtype_data *devtype_data; struct net_device *dev; struct flexcan_priv *priv; struct resource *mem; - struct clk *clk = NULL; + struct clk *clk_ipg = NULL, *clk_per = NULL; struct pinctrl *pinctrl; void __iomem *base; resource_size_t mem_size; @@ -938,23 +981,25 @@ static int __devinit flexcan_probe(struct platform_device *pdev) if (IS_ERR(pinctrl)) return 
PTR_ERR(pinctrl); - if (pdev->dev.of_node) { - const __be32 *clock_freq_p; - - clock_freq_p = of_get_property(pdev->dev.of_node, - "clock-frequency", NULL); - if (clock_freq_p) - clock_freq = be32_to_cpup(clock_freq_p); - } + if (pdev->dev.of_node) + of_property_read_u32(pdev->dev.of_node, + "clock-frequency", &clock_freq); if (!clock_freq) { - clk = clk_get(&pdev->dev, NULL); - if (IS_ERR(clk)) { - dev_err(&pdev->dev, "no clock defined\n"); - err = PTR_ERR(clk); + clk_ipg = devm_clk_get(&pdev->dev, "ipg"); + if (IS_ERR(clk_ipg)) { + dev_err(&pdev->dev, "no ipg clock defined\n"); + err = PTR_ERR(clk_ipg); + goto failed_clock; + } + clock_freq = clk_get_rate(clk_ipg); + + clk_per = devm_clk_get(&pdev->dev, "per"); + if (IS_ERR(clk_per)) { + dev_err(&pdev->dev, "no per clock defined\n"); + err = PTR_ERR(clk_per); goto failed_clock; } - clock_freq = clk_get_rate(clk); } mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); @@ -982,6 +1027,17 @@ static int __devinit flexcan_probe(struct platform_device *pdev) goto failed_alloc; } + of_id = of_match_device(flexcan_of_match, &pdev->dev); + if (of_id) { + devtype_data = of_id->data; + } else if (pdev->id_entry->driver_data) { + devtype_data = (struct flexcan_devtype_data *) + pdev->id_entry->driver_data; + } else { + err = -ENODEV; + goto failed_devtype; + } + dev->netdev_ops = &flexcan_netdev_ops; dev->irq = irq; dev->flags |= IFF_ECHO; @@ -996,8 +1052,10 @@ static int __devinit flexcan_probe(struct platform_device *pdev) CAN_CTRLMODE_BERR_REPORTING; priv->base = base; priv->dev = dev; - priv->clk = clk; + priv->clk_ipg = clk_ipg; + priv->clk_per = clk_per; priv->pdata = pdev->dev.platform_data; + priv->devtype_data = devtype_data; netif_napi_add(dev, &priv->napi, flexcan_poll, FLEXCAN_NAPI_WEIGHT); @@ -1016,14 +1074,13 @@ static int __devinit flexcan_probe(struct platform_device *pdev) return 0; failed_register: + failed_devtype: free_candev(dev); failed_alloc: iounmap(base); failed_map: release_mem_region(mem->start, mem_size); failed_get: - if (clk) - clk_put(clk); failed_clock: return err; } @@ -1041,20 +1098,46 @@ static int __devexit flexcan_remove(struct platform_device *pdev) mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); release_mem_region(mem->start, resource_size(mem)); - if (priv->clk) - clk_put(priv->clk); - free_candev(dev); return 0; } -static struct of_device_id flexcan_of_match[] = { - { - .compatible = "fsl,p1010-flexcan", - }, - {}, -}; +#ifdef CONFIG_PM +static int flexcan_suspend(struct platform_device *pdev, pm_message_t state) +{ + struct net_device *dev = platform_get_drvdata(pdev); + struct flexcan_priv *priv = netdev_priv(dev); + + flexcan_chip_disable(priv); + + if (netif_running(dev)) { + netif_stop_queue(dev); + netif_device_detach(dev); + } + priv->can.state = CAN_STATE_SLEEPING; + + return 0; +} + +static int flexcan_resume(struct platform_device *pdev) +{ + struct net_device *dev = platform_get_drvdata(pdev); + struct flexcan_priv *priv = netdev_priv(dev); + + priv->can.state = CAN_STATE_ERROR_ACTIVE; + if (netif_running(dev)) { + netif_device_attach(dev); + netif_start_queue(dev); + } + flexcan_chip_enable(priv); + + return 0; +} +#else +#define flexcan_suspend NULL +#define flexcan_resume NULL +#endif static struct platform_driver flexcan_driver = { .driver = { @@ -1064,6 +1147,9 @@ static struct platform_driver flexcan_driver = { }, .probe = flexcan_probe, .remove = __devexit_p(flexcan_remove), + .suspend = flexcan_suspend, + .resume = flexcan_resume, + .id_table = flexcan_id_table, }; 
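The flexcan probe changes above resolve a per-SoC flexcan_devtype_data either from the devicetree match (of_match_device()->data) or from the legacy platform_device_id driver_data, and then gate version-specific setup on hw_ver (the rxfgmask write only happens for hw_ver >= 10). A rough sketch of that dual lookup, with made-up toy_* names and error handling trimmed:

        #include <linux/of_device.h>
        #include <linux/platform_device.h>

        struct toy_devtype_data {
                u32 hw_ver;     /* hardware controller version */
        };

        static const struct toy_devtype_data toy_v3_data  = { .hw_ver = 3 };
        static const struct toy_devtype_data toy_v10_data = { .hw_ver = 10 };

        /* devicetree probe path carries the data pointer directly */
        static const struct of_device_id toy_of_match[] = {
                { .compatible = "acme,toy-v3",  .data = &toy_v3_data },
                { .compatible = "acme,toy-v10", .data = &toy_v10_data },
                { /* sentinel */ },
        };

        /* legacy board-file probe path carries it as driver_data */
        static const struct platform_device_id toy_id_table[] = {
                { .name = "toy", .driver_data = (kernel_ulong_t)&toy_v3_data },
                { /* sentinel */ },
        };

        static int toy_probe(struct platform_device *pdev)
        {
                const struct of_device_id *of_id;
                const struct toy_devtype_data *devtype;

                of_id = of_match_device(toy_of_match, &pdev->dev);
                if (of_id)
                        devtype = of_id->data;
                else if (pdev->id_entry && pdev->id_entry->driver_data)
                        devtype = (const struct toy_devtype_data *)
                                        pdev->id_entry->driver_data;
                else
                        return -ENODEV;

                if (devtype->hw_ver >= 10)
                        dev_info(&pdev->dev, "newer IP core, extra RX FIFO setup\n");

                return 0;
        }

In the diff itself, flexcan_id_table is wired into flexcan_driver through the new .id_table member, while the OF table gains .data pointers, so both probe paths end up handing the same devtype information to the driver core.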
module_platform_driver(flexcan_driver); diff --git a/drivers/net/can/janz-ican3.c b/drivers/net/can/janz-ican3.c index 08c893cb7896..98ee43819911 100644 --- a/drivers/net/can/janz-ican3.c +++ b/drivers/net/can/janz-ican3.c @@ -116,6 +116,7 @@ #define ICAN3_BUSERR_QUOTA_MAX 255 /* Janz ICAN3 CAN Frame Conversion */ +#define ICAN3_SNGL 0x02 #define ICAN3_ECHO 0x10 #define ICAN3_EFF_RTR 0x40 #define ICAN3_SFF_RTR 0x10 @@ -220,6 +221,9 @@ struct ican3_dev { /* old and new style host interface */ unsigned int iftype; + /* queue for echo packets */ + struct sk_buff_head echoq; + /* * Any function which changes the current DPM page must hold this * lock while it is performing data accesses. This ensures that the @@ -235,7 +239,6 @@ struct ican3_dev { /* fast host interface */ unsigned int fastrx_start; - unsigned int fastrx_int; unsigned int fastrx_num; unsigned int fasttx_start; unsigned int fasttx_num; @@ -454,7 +457,6 @@ static void __devinit ican3_init_fast_host_interface(struct ican3_dev *mod) /* save the start recv page */ mod->fastrx_start = mod->free_page; mod->fastrx_num = 0; - mod->fastrx_int = 0; /* build a single fast tohost queue descriptor */ memset(&desc, 0, sizeof(desc)); @@ -813,10 +815,10 @@ static void ican3_to_can_frame(struct ican3_dev *mod, cf->can_id |= desc->data[0] << 3; cf->can_id |= (desc->data[1] & 0xe0) >> 5; - cf->can_dlc = desc->data[1] & ICAN3_CAN_DLC_MASK; - memcpy(cf->data, &desc->data[2], sizeof(cf->data)); + cf->can_dlc = get_can_dlc(desc->data[1] & ICAN3_CAN_DLC_MASK); + memcpy(cf->data, &desc->data[2], cf->can_dlc); } else { - cf->can_dlc = desc->data[0] & ICAN3_CAN_DLC_MASK; + cf->can_dlc = get_can_dlc(desc->data[0] & ICAN3_CAN_DLC_MASK); if (desc->data[0] & ICAN3_EFF_RTR) cf->can_id |= CAN_RTR_FLAG; @@ -831,7 +833,7 @@ static void ican3_to_can_frame(struct ican3_dev *mod, cf->can_id |= desc->data[3] >> 5; /* 2-0 */ } - memcpy(cf->data, &desc->data[6], sizeof(cf->data)); + memcpy(cf->data, &desc->data[6], cf->can_dlc); } } @@ -847,6 +849,10 @@ static void can_frame_to_ican3(struct ican3_dev *mod, desc->data[0] |= cf->can_dlc; desc->data[1] |= ICAN3_ECHO; + /* support single transmission (no retries) mode */ + if (mod->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT) + desc->data[1] |= ICAN3_SNGL; + if (cf->can_id & CAN_RTR_FLAG) desc->data[0] |= ICAN3_EFF_RTR; @@ -863,7 +869,7 @@ static void can_frame_to_ican3(struct ican3_dev *mod, } /* copy the data bits into the descriptor */ - memcpy(&desc->data[6], cf->data, sizeof(cf->data)); + memcpy(&desc->data[6], cf->data, cf->can_dlc); } /* @@ -909,8 +915,8 @@ static void ican3_handle_msglost(struct ican3_dev *mod, struct ican3_msg *msg) if (skb) { cf->can_id |= CAN_ERR_CRTL; cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW; + stats->rx_over_errors++; stats->rx_errors++; - stats->rx_bytes += cf->can_dlc; netif_rx(skb); } } @@ -927,7 +933,7 @@ static int ican3_handle_cevtind(struct ican3_dev *mod, struct ican3_msg *msg) struct net_device *dev = mod->ndev; struct net_device_stats *stats = &dev->stats; enum can_state state = mod->can.state; - u8 status, isrc, rxerr, txerr; + u8 isrc, ecc, status, rxerr, txerr; struct can_frame *cf; struct sk_buff *skb; @@ -943,15 +949,53 @@ static int ican3_handle_cevtind(struct ican3_dev *mod, struct ican3_msg *msg) return -EINVAL; } - skb = alloc_can_err_skb(dev, &cf); - if (skb == NULL) - return -ENOMEM; - isrc = msg->data[0]; + ecc = msg->data[2]; status = msg->data[3]; rxerr = msg->data[4]; txerr = msg->data[5]; + /* + * This hardware lacks any support other than bus error messages to + * determine 
if packet transmission has failed. + * + * When TX errors happen, one echo skb needs to be dropped from the + * front of the queue. + * + * A small bit of code is duplicated here and below, to avoid error + * skb allocation when it will just be freed immediately. + */ + if (isrc == CEVTIND_BEI) { + int ret; + dev_dbg(mod->dev, "bus error interrupt\n"); + + /* TX error */ + if (!(ecc & ECC_DIR)) { + kfree_skb(skb_dequeue(&mod->echoq)); + stats->tx_errors++; + } else { + stats->rx_errors++; + } + + /* + * The controller automatically disables bus-error interrupts + * and therefore we must re-enable them. + */ + ret = ican3_set_buserror(mod, 1); + if (ret) { + dev_err(mod->dev, "unable to re-enable bus-error\n"); + return ret; + } + + /* bus error reporting is off, return immediately */ + if (!(mod->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)) + return 0; + } + + skb = alloc_can_err_skb(dev, &cf); + if (skb == NULL) + return -ENOMEM; + /* data overrun interrupt */ if (isrc == CEVTIND_DOI || isrc == CEVTIND_LOST) { dev_dbg(mod->dev, "data overrun interrupt\n"); @@ -980,11 +1024,7 @@ static int ican3_handle_cevtind(struct ican3_dev *mod, struct ican3_msg *msg) /* bus error interrupt */ if (isrc == CEVTIND_BEI) { - u8 ecc = msg->data[2]; - - dev_dbg(mod->dev, "bus error interrupt\n"); mod->can.can_stats.bus_error++; - stats->rx_errors++; cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR; switch (ecc & ECC_MASK) { @@ -1003,7 +1043,7 @@ static int ican3_handle_cevtind(struct ican3_dev *mod, struct ican3_msg *msg) break; } - if ((ecc & ECC_DIR) == 0) + if (!(ecc & ECC_DIR)) cf->data[2] |= CAN_ERR_PROT_TX; cf->data[6] = txerr; @@ -1030,8 +1070,6 @@ static int ican3_handle_cevtind(struct ican3_dev *mod, struct ican3_msg *msg) } mod->can.state = state; - stats->rx_errors++; - stats->rx_bytes += cf->can_dlc; netif_rx(skb); return 0; } @@ -1091,6 +1129,88 @@ static void ican3_handle_message(struct ican3_dev *mod, struct ican3_msg *msg) } /* + * The ican3 needs to store all echo skbs, and therefore cannot + * use the generic infrastructure for this. + */ +static void ican3_put_echo_skb(struct ican3_dev *mod, struct sk_buff *skb) +{ + struct sock *srcsk = skb->sk; + + if (atomic_read(&skb->users) != 1) { + struct sk_buff *old_skb = skb; + + skb = skb_clone(old_skb, GFP_ATOMIC); + kfree_skb(old_skb); + if (!skb) + return; + } else { + skb_orphan(skb); + } + + skb->sk = srcsk; + + /* save this skb for tx interrupt echo handling */ + skb_queue_tail(&mod->echoq, skb); +} + +static unsigned int ican3_get_echo_skb(struct ican3_dev *mod) +{ + struct sk_buff *skb = skb_dequeue(&mod->echoq); + struct can_frame *cf; + u8 dlc; + + /* this should never trigger unless there is a driver bug */ + if (!skb) { + netdev_err(mod->ndev, "BUG: echo skb not occupied\n"); + return 0; + } + + cf = (struct can_frame *)skb->data; + dlc = cf->can_dlc; + + /* check flag whether this packet has to be looped back */ + if (skb->pkt_type != PACKET_LOOPBACK) { + kfree_skb(skb); + return dlc; + } + + skb->protocol = htons(ETH_P_CAN); + skb->pkt_type = PACKET_BROADCAST; + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->dev = mod->ndev; + netif_receive_skb(skb); + return dlc; +} + +/* + * Compare an skb with an existing echo skb + * + * This function will be used on devices which have a hardware loopback. + * On these devices, this function can be used to compare a received skb + * with the saved echo skbs so that the hardware echo skb can be dropped. + * + * Returns true if the skb's are identical, false otherwise. 
+ */ +static bool ican3_echo_skb_matches(struct ican3_dev *mod, struct sk_buff *skb) +{ + struct can_frame *cf = (struct can_frame *)skb->data; + struct sk_buff *echo_skb = skb_peek(&mod->echoq); + struct can_frame *echo_cf; + + if (!echo_skb) + return false; + + echo_cf = (struct can_frame *)echo_skb->data; + if (cf->can_id != echo_cf->can_id) + return false; + + if (cf->can_dlc != echo_cf->can_dlc) + return false; + + return memcmp(cf->data, echo_cf->data, cf->can_dlc) == 0; +} + +/* * Check that there is room in the TX ring to transmit another skb * * LOCKING: must hold mod->lock @@ -1100,6 +1220,10 @@ static bool ican3_txok(struct ican3_dev *mod) struct ican3_fast_desc __iomem *desc; u8 control; + /* check that we have echo queue space */ + if (skb_queue_len(&mod->echoq) >= ICAN3_TX_BUFFERS) + return false; + /* copy the control bits of the descriptor */ ican3_set_page(mod, mod->fasttx_start + (mod->fasttx_num / 16)); desc = mod->dpm + ((mod->fasttx_num % 16) * sizeof(*desc)); @@ -1150,10 +1274,27 @@ static int ican3_recv_skb(struct ican3_dev *mod) /* convert the ICAN3 frame into Linux CAN format */ ican3_to_can_frame(mod, &desc, cf); - /* receive the skb, update statistics */ - netif_receive_skb(skb); + /* + * If this is an ECHO frame received from the hardware loopback + * feature, use the skb saved in the ECHO stack instead. This allows + * the Linux CAN core to support CAN_RAW_RECV_OWN_MSGS correctly. + * + * Since this is a confirmation of a successfully transmitted packet + * sent from this host, update the transmit statistics. + * + * Also, the netdevice queue needs to be allowed to send packets again. + */ + if (ican3_echo_skb_matches(mod, skb)) { + stats->tx_packets++; + stats->tx_bytes += ican3_get_echo_skb(mod); + kfree_skb(skb); + goto err_noalloc; + } + + /* update statistics, receive the skb */ stats->rx_packets++; stats->rx_bytes += cf->can_dlc; + netif_receive_skb(skb); err_noalloc: /* toggle the valid bit and return the descriptor to the ring */ @@ -1176,13 +1317,13 @@ err_noalloc: static int ican3_napi(struct napi_struct *napi, int budget) { struct ican3_dev *mod = container_of(napi, struct ican3_dev, napi); - struct ican3_msg msg; unsigned long flags; int received = 0; int ret; /* process all communication messages */ while (true) { + struct ican3_msg msg; ret = ican3_recv_msg(mod, &msg); if (ret) break; @@ -1325,7 +1466,7 @@ static int __devinit ican3_startup_module(struct ican3_dev *mod) } /* default to "bus errors enabled" */ - ret = ican3_set_buserror(mod, ICAN3_BUSERR_QUOTA_MAX); + ret = ican3_set_buserror(mod, 1); if (ret) { dev_err(mod->dev, "unable to set bus-error\n"); return ret; @@ -1354,7 +1495,6 @@ static int __devinit ican3_startup_module(struct ican3_dev *mod) static int ican3_open(struct net_device *ndev) { struct ican3_dev *mod = netdev_priv(ndev); - u8 quota; int ret; /* open the CAN layer */ @@ -1364,19 +1504,6 @@ static int ican3_open(struct net_device *ndev) return ret; } - /* set the bus error generation state appropriately */ - if (mod->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) - quota = ICAN3_BUSERR_QUOTA_MAX; - else - quota = 0; - - ret = ican3_set_buserror(mod, quota); - if (ret) { - dev_err(mod->dev, "unable to set bus-error\n"); - close_candev(ndev); - return ret; - } - /* bring the bus online */ ret = ican3_set_bus_state(mod, true); if (ret) { @@ -1408,6 +1535,9 @@ static int ican3_stop(struct net_device *ndev) return ret; } + /* drop all outstanding echo skbs */ + skb_queue_purge(&mod->echoq); + /* close the CAN layer */ 
close_candev(ndev); return 0; @@ -1416,18 +1546,19 @@ static int ican3_stop(struct net_device *ndev) static int ican3_xmit(struct sk_buff *skb, struct net_device *ndev) { struct ican3_dev *mod = netdev_priv(ndev); - struct net_device_stats *stats = &ndev->stats; struct can_frame *cf = (struct can_frame *)skb->data; struct ican3_fast_desc desc; void __iomem *desc_addr; unsigned long flags; + if (can_dropped_invalid_skb(ndev, skb)) + return NETDEV_TX_OK; + spin_lock_irqsave(&mod->lock, flags); /* check that we can actually transmit */ if (!ican3_txok(mod)) { - dev_err(mod->dev, "no free descriptors, stopping queue\n"); - netif_stop_queue(ndev); + dev_err(mod->dev, "BUG: no free descriptors\n"); spin_unlock_irqrestore(&mod->lock, flags); return NETDEV_TX_BUSY; } @@ -1442,6 +1573,14 @@ static int ican3_xmit(struct sk_buff *skb, struct net_device *ndev) can_frame_to_ican3(mod, cf, &desc); /* + * This hardware doesn't have TX-done notifications, so we'll try and + * emulate it the best we can using ECHO skbs. Add the skb to the ECHO + * stack. Upon packet reception, check if the ECHO skb and received + * skb match, and use that to wake the queue. + */ + ican3_put_echo_skb(mod, skb); + + /* * the programming manual says that you must set the IVALID bit, then * interrupt, then set the valid bit. Quite weird, but it seems to be * required for this to work @@ -1459,19 +1598,7 @@ static int ican3_xmit(struct sk_buff *skb, struct net_device *ndev) mod->fasttx_num = (desc.control & DESC_WRAP) ? 0 : (mod->fasttx_num + 1); - /* update statistics */ - stats->tx_packets++; - stats->tx_bytes += cf->can_dlc; - kfree_skb(skb); - - /* - * This hardware doesn't have TX-done notifications, so we'll try and - * emulate it the best we can using ECHO skbs. Get the next TX - * descriptor, and see if we have room to send. If not, stop the queue. - * It will be woken when the ECHO skb for the current packet is recv'd. - */ - - /* copy the control bits of the descriptor */ + /* if there is no free descriptor space, stop the transmit queue */ if (!ican3_txok(mod)) netif_stop_queue(ndev); @@ -1490,7 +1617,7 @@ static const struct net_device_ops ican3_netdev_ops = { */ /* This structure was stolen from drivers/net/can/sja1000/sja1000.c */ -static struct can_bittiming_const ican3_bittiming_const = { +static const struct can_bittiming_const ican3_bittiming_const = { .name = DRV_NAME, .tseg1_min = 1, .tseg1_max = 16, @@ -1667,6 +1794,7 @@ static int __devinit ican3_probe(struct platform_device *pdev) mod->dev = &pdev->dev; mod->num = pdata->modno; netif_napi_add(ndev, &mod->napi, ican3_napi, ICAN3_RX_BUFFERS); + skb_queue_head_init(&mod->echoq); spin_lock_init(&mod->lock); init_completion(&mod->termination_comp); init_completion(&mod->buserror_comp); @@ -1687,7 +1815,8 @@ static int __devinit ican3_probe(struct platform_device *pdev) mod->can.do_set_mode = ican3_set_mode; mod->can.do_get_berr_counter = ican3_get_berr_counter; mod->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES - | CAN_CTRLMODE_BERR_REPORTING; + | CAN_CTRLMODE_BERR_REPORTING + | CAN_CTRLMODE_ONE_SHOT; /* find our IRQ number */ mod->irq = platform_get_irq(pdev, 0); diff --git a/drivers/net/can/mcp251x.c b/drivers/net/can/mcp251x.c index 346785c56a25..a580db29e503 100644 --- a/drivers/net/can/mcp251x.c +++ b/drivers/net/can/mcp251x.c @@ -214,7 +214,7 @@ static int mcp251x_enable_dma; /* Enable SPI DMA. Default: 0 (Off) */ module_param(mcp251x_enable_dma, int, S_IRUGO); MODULE_PARM_DESC(mcp251x_enable_dma, "Enable SPI DMA. 
Default: 0 (Off)"); -static struct can_bittiming_const mcp251x_bittiming_const = { +static const struct can_bittiming_const mcp251x_bittiming_const = { .name = DEVICE_NAME, .tseg1_min = 3, .tseg1_max = 16, @@ -1020,8 +1020,7 @@ static int __devinit mcp251x_can_probe(struct spi_device *spi) GFP_DMA); if (priv->spi_tx_buf) { - priv->spi_rx_buf = (u8 *)(priv->spi_tx_buf + - (PAGE_SIZE / 2)); + priv->spi_rx_buf = (priv->spi_tx_buf + (PAGE_SIZE / 2)); priv->spi_rx_dma = (dma_addr_t)(priv->spi_tx_dma + (PAGE_SIZE / 2)); } else { diff --git a/drivers/net/can/mscan/mpc5xxx_can.c b/drivers/net/can/mscan/mpc5xxx_can.c index 5caa572d71e3..06adf881ea24 100644 --- a/drivers/net/can/mscan/mpc5xxx_can.c +++ b/drivers/net/can/mscan/mpc5xxx_can.c @@ -251,7 +251,7 @@ static struct of_device_id mpc5xxx_can_table[]; static int __devinit mpc5xxx_can_probe(struct platform_device *ofdev) { const struct of_device_id *match; - struct mpc5xxx_can_data *data; + const struct mpc5xxx_can_data *data; struct device_node *np = ofdev->dev.of_node; struct net_device *dev; struct mscan_priv *priv; diff --git a/drivers/net/can/mscan/mscan.c b/drivers/net/can/mscan/mscan.c index 41a2a2dda7ea..2b104d5f422c 100644 --- a/drivers/net/can/mscan/mscan.c +++ b/drivers/net/can/mscan/mscan.c @@ -34,7 +34,7 @@ #include "mscan.h" -static struct can_bittiming_const mscan_bittiming_const = { +static const struct can_bittiming_const mscan_bittiming_const = { .name = "mscan", .tseg1_min = 4, .tseg1_max = 16, diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c index 1226297e7676..48b3d62b34cb 100644 --- a/drivers/net/can/pch_can.c +++ b/drivers/net/can/pch_can.c @@ -184,7 +184,7 @@ struct pch_can_priv { int use_msi; }; -static struct can_bittiming_const pch_can_bittiming_const = { +static const struct can_bittiming_const pch_can_bittiming_const = { .name = KBUILD_MODNAME, .tseg1_min = 2, .tseg1_max = 16, diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c index 5e10472371ed..4c4f33d482d2 100644 --- a/drivers/net/can/sja1000/sja1000.c +++ b/drivers/net/can/sja1000/sja1000.c @@ -69,7 +69,7 @@ MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>"); MODULE_LICENSE("Dual BSD/GPL"); MODULE_DESCRIPTION(DRV_NAME "CAN netdevice driver"); -static struct can_bittiming_const sja1000_bittiming_const = { +static const struct can_bittiming_const sja1000_bittiming_const = { .name = DRV_NAME, .tseg1_min = 1, .tseg1_max = 16, diff --git a/drivers/net/can/softing/softing_main.c b/drivers/net/can/softing/softing_main.c index a7c77c744ee9..f2a221e7b968 100644 --- a/drivers/net/can/softing/softing_main.c +++ b/drivers/net/can/softing/softing_main.c @@ -826,12 +826,12 @@ static __devinit int softing_pdev_probe(struct platform_device *pdev) goto sysfs_failed; } - ret = -ENOMEM; for (j = 0; j < ARRAY_SIZE(card->net); ++j) { card->net[j] = netdev = softing_netdev_create(card, card->id.chip[j]); if (!netdev) { dev_alert(&pdev->dev, "failed to make can[%i]", j); + ret = -ENOMEM; goto netdev_failed; } priv = netdev_priv(card->net[j]); diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c index 4accd7ec6954..527dbcf95335 100644 --- a/drivers/net/can/ti_hecc.c +++ b/drivers/net/can/ti_hecc.c @@ -196,7 +196,7 @@ MODULE_VERSION(HECC_MODULE_VERSION); #define HECC_CANGIM_SIL BIT(2) /* system interrupts to int line 1 */ /* CAN Bittiming constants as per HECC specs */ -static struct can_bittiming_const ti_hecc_bittiming_const = { +static const struct can_bittiming_const ti_hecc_bittiming_const = { .name = 
DRV_NAME, .tseg1_min = 1, .tseg1_max = 16, diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c index 7ae65fc80032..086fa321677a 100644 --- a/drivers/net/can/usb/ems_usb.c +++ b/drivers/net/can/usb/ems_usb.c @@ -889,7 +889,7 @@ static const struct net_device_ops ems_usb_netdev_ops = { .ndo_start_xmit = ems_usb_start_xmit, }; -static struct can_bittiming_const ems_usb_bittiming_const = { +static const struct can_bittiming_const ems_usb_bittiming_const = { .name = "ems_usb", .tseg1_min = 1, .tseg1_max = 16, diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c index 09b1da5bc512..bd36e5517173 100644 --- a/drivers/net/can/usb/esd_usb2.c +++ b/drivers/net/can/usb/esd_usb2.c @@ -871,7 +871,7 @@ static const struct net_device_ops esd_usb2_netdev_ops = { .ndo_start_xmit = esd_usb2_start_xmit, }; -static struct can_bittiming_const esd_usb2_bittiming_const = { +static const struct can_bittiming_const esd_usb2_bittiming_const = { .name = "esd_usb2", .tseg1_min = ESD_USB2_TSEG1_MIN, .tseg1_max = ESD_USB2_TSEG1_MAX, diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.h b/drivers/net/can/usb/peak_usb/pcan_usb_core.h index a948c5a89401..4c775b620be2 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_core.h +++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.h @@ -45,7 +45,7 @@ struct peak_usb_adapter { char *name; u32 device_id; struct can_clock clock; - struct can_bittiming_const bittiming_const; + const struct can_bittiming_const bittiming_const; unsigned int ctrl_count; int (*intf_probe)(struct usb_interface *intf); diff --git a/drivers/net/can/vcan.c b/drivers/net/can/vcan.c index ea2d94285936..4f93c0be0053 100644 --- a/drivers/net/can/vcan.c +++ b/drivers/net/can/vcan.c @@ -70,13 +70,12 @@ MODULE_PARM_DESC(echo, "Echo sent frames (for testing). 
Default: 0 (Off)"); static void vcan_rx(struct sk_buff *skb, struct net_device *dev) { - struct can_frame *cf = (struct can_frame *)skb->data; + struct canfd_frame *cfd = (struct canfd_frame *)skb->data; struct net_device_stats *stats = &dev->stats; stats->rx_packets++; - stats->rx_bytes += cf->can_dlc; + stats->rx_bytes += cfd->len; - skb->protocol = htons(ETH_P_CAN); skb->pkt_type = PACKET_BROADCAST; skb->dev = dev; skb->ip_summed = CHECKSUM_UNNECESSARY; @@ -86,7 +85,7 @@ static void vcan_rx(struct sk_buff *skb, struct net_device *dev) static netdev_tx_t vcan_tx(struct sk_buff *skb, struct net_device *dev) { - struct can_frame *cf = (struct can_frame *)skb->data; + struct canfd_frame *cfd = (struct canfd_frame *)skb->data; struct net_device_stats *stats = &dev->stats; int loop; @@ -94,7 +93,7 @@ static netdev_tx_t vcan_tx(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; stats->tx_packets++; - stats->tx_bytes += cf->can_dlc; + stats->tx_bytes += cfd->len; /* set flag whether this packet has to be looped back */ loop = skb->pkt_type == PACKET_LOOPBACK; @@ -108,7 +107,7 @@ static netdev_tx_t vcan_tx(struct sk_buff *skb, struct net_device *dev) * CAN core already did the echo for us */ stats->rx_packets++; - stats->rx_bytes += cf->can_dlc; + stats->rx_bytes += cfd->len; } kfree_skb(skb); return NETDEV_TX_OK; @@ -133,14 +132,28 @@ static netdev_tx_t vcan_tx(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } +static int vcan_change_mtu(struct net_device *dev, int new_mtu) +{ + /* Do not allow changing the MTU while running */ + if (dev->flags & IFF_UP) + return -EBUSY; + + if (new_mtu != CAN_MTU && new_mtu != CANFD_MTU) + return -EINVAL; + + dev->mtu = new_mtu; + return 0; +} + static const struct net_device_ops vcan_netdev_ops = { .ndo_start_xmit = vcan_tx, + .ndo_change_mtu = vcan_change_mtu, }; static void vcan_setup(struct net_device *dev) { dev->type = ARPHRD_CAN; - dev->mtu = sizeof(struct can_frame); + dev->mtu = CAN_MTU; dev->hard_header_len = 0; dev->addr_len = 0; dev->tx_queue_len = 0; diff --git a/drivers/net/cris/eth_v10.c b/drivers/net/cris/eth_v10.c index 9c755db6b16d..f0c8bd54ce29 100644 --- a/drivers/net/cris/eth_v10.c +++ b/drivers/net/cris/eth_v10.c @@ -1008,7 +1008,7 @@ e100_send_mdio_bit(unsigned char bit) } static unsigned char -e100_receive_mdio_bit() +e100_receive_mdio_bit(void) { unsigned char bit; *R_NETWORK_MGM_CTRL = 0; diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c index bab0158f1cc3..c260af5411d0 100644 --- a/drivers/net/dummy.c +++ b/drivers/net/dummy.c @@ -40,18 +40,6 @@ static int numdummies = 1; -static int dummy_set_address(struct net_device *dev, void *p) -{ - struct sockaddr *sa = p; - - if (!is_valid_ether_addr(sa->sa_data)) - return -EADDRNOTAVAIL; - - dev->addr_assign_type &= ~NET_ADDR_RANDOM; - memcpy(dev->dev_addr, sa->sa_data, ETH_ALEN); - return 0; -} - /* fake multicast ability */ static void set_multicast_list(struct net_device *dev) { @@ -75,10 +63,10 @@ static struct rtnl_link_stats64 *dummy_get_stats64(struct net_device *dev, dstats = per_cpu_ptr(dev->dstats, i); do { - start = u64_stats_fetch_begin(&dstats->syncp); + start = u64_stats_fetch_begin_bh(&dstats->syncp); tbytes = dstats->tx_bytes; tpackets = dstats->tx_packets; - } while (u64_stats_fetch_retry(&dstats->syncp, start)); + } while (u64_stats_fetch_retry_bh(&dstats->syncp, start)); stats->tx_bytes += tbytes; stats->tx_packets += tpackets; } @@ -118,7 +106,7 @@ static const struct net_device_ops dummy_netdev_ops = { .ndo_start_xmit = dummy_xmit, 
.ndo_validate_addr = eth_validate_addr, .ndo_set_rx_mode = set_multicast_list, - .ndo_set_mac_address = dummy_set_address, + .ndo_set_mac_address = eth_mac_addr, .ndo_get_stats64 = dummy_get_stats64, }; @@ -134,6 +122,7 @@ static void dummy_setup(struct net_device *dev) dev->tx_queue_len = 0; dev->flags |= IFF_NOARP; dev->flags &= ~IFF_MULTICAST; + dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; dev->features |= NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO; dev->features |= NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_LLTX; eth_hw_addr_random(dev); diff --git a/drivers/net/ethernet/3com/3c501.c b/drivers/net/ethernet/3com/3c501.c index bf73e1a02293..2038eaabaea4 100644 --- a/drivers/net/ethernet/3com/3c501.c +++ b/drivers/net/ethernet/3com/3c501.c @@ -143,7 +143,7 @@ static int irq = 5; static int mem_start; /** - * el1_probe: - probe for a 3c501 + * el1_probe - probe for a 3c501 * @dev: The device structure passed in to probe. * * This can be called from two places. The network layer will probe using diff --git a/drivers/net/ethernet/8390/Kconfig b/drivers/net/ethernet/8390/Kconfig index 2e538676924d..e1219e037c04 100644 --- a/drivers/net/ethernet/8390/Kconfig +++ b/drivers/net/ethernet/8390/Kconfig @@ -162,6 +162,20 @@ config MAC8390 and read the Ethernet-HOWTO, available from <http://www.tldp.org/docs.html#howto>. +config MCF8390 + tristate "ColdFire NS8390 based Ethernet support" + depends on COLDFIRE + select CRC32 + ---help--- + This driver is for Ethernet devices using an NS8390-compatible + chipset on many common ColdFire CPU based boards. Many of the older + Freescale dev boards use this, and some other common boards like + some SnapGear routers do as well. + + If you have one of these boards and want to use the network interface + on them then choose Y. To compile this driver as a module, choose M + here, the module will be called mcf8390. + config NE2000 tristate "NE2000/NE1000 support" depends on (ISA || (Q40 && m) || M32R || MACH_TX49XX) diff --git a/drivers/net/ethernet/8390/Makefile b/drivers/net/ethernet/8390/Makefile index d13790b7fd27..f43038babf86 100644 --- a/drivers/net/ethernet/8390/Makefile +++ b/drivers/net/ethernet/8390/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_HPLAN_PLUS) += hp-plus.o 8390p.o obj-$(CONFIG_HPLAN) += hp.o 8390p.o obj-$(CONFIG_HYDRA) += hydra.o 8390.o obj-$(CONFIG_LNE390) += lne390.o 8390.o +obj-$(CONFIG_MCF8390) += mcf8390.o 8390.o obj-$(CONFIG_NE2000) += ne.o 8390p.o obj-$(CONFIG_NE2_MCA) += ne2.o 8390p.o obj-$(CONFIG_NE2K_PCI) += ne2k-pci.o 8390.o diff --git a/drivers/net/ethernet/8390/apne.c b/drivers/net/ethernet/8390/apne.c index 923959275a82..912ed7a5f33a 100644 --- a/drivers/net/ethernet/8390/apne.c +++ b/drivers/net/ethernet/8390/apne.c @@ -454,7 +454,7 @@ apne_block_input(struct net_device *dev, int count, struct sk_buff *skb, int rin buf[count-1] = inb(NE_BASE + NE_DATAPORT); } } else { - ptrc = (char*)buf; + ptrc = buf; for (cnt = 0; cnt < count; cnt++) *ptrc++ = inb(NE_BASE + NE_DATAPORT); } diff --git a/drivers/net/ethernet/8390/mcf8390.c b/drivers/net/ethernet/8390/mcf8390.c new file mode 100644 index 000000000000..230efd6fa5d5 --- /dev/null +++ b/drivers/net/ethernet/8390/mcf8390.c @@ -0,0 +1,480 @@ +/* + * Support for ColdFire CPU based boards using a NS8390 Ethernet device. + * + * Derived from the many other 8390 drivers. + * + * (C) Copyright 2012, Greg Ungerer <gerg@uclinux.org> + * + * This file is subject to the terms and conditions of the GNU General Public + * License. 
See the file COPYING in the main directory of the Linux + * distribution for more details. + */ + +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/errno.h> +#include <linux/init.h> +#include <linux/platform_device.h> +#include <linux/netdevice.h> +#include <linux/etherdevice.h> +#include <linux/jiffies.h> +#include <linux/io.h> +#include <asm/mcf8390.h> + +static const char version[] = + "mcf8390.c: (15-06-2012) Greg Ungerer <gerg@uclinux.org>"; + +#define NE_CMD 0x00 +#define NE_DATAPORT 0x10 /* NatSemi-defined port window offset */ +#define NE_RESET 0x1f /* Issue a read to reset ,a write to clear */ +#define NE_EN0_ISR 0x07 +#define NE_EN0_DCFG 0x0e +#define NE_EN0_RSARLO 0x08 +#define NE_EN0_RSARHI 0x09 +#define NE_EN0_RCNTLO 0x0a +#define NE_EN0_RXCR 0x0c +#define NE_EN0_TXCR 0x0d +#define NE_EN0_RCNTHI 0x0b +#define NE_EN0_IMR 0x0f + +#define NESM_START_PG 0x40 /* First page of TX buffer */ +#define NESM_STOP_PG 0x80 /* Last page +1 of RX ring */ + +#ifdef NE2000_ODDOFFSET +/* + * A lot of the ColdFire boards use a separate address region for odd offset + * register addresses. The following functions convert and map as required. + * Note that the data port accesses are treated a little differently, and + * always accessed via the insX/outsX functions. + */ +static inline u32 NE_PTR(u32 addr) +{ + if (addr & 1) + return addr - 1 + NE2000_ODDOFFSET; + return addr; +} + +static inline u32 NE_DATA_PTR(u32 addr) +{ + return addr; +} + +void ei_outb(u32 val, u32 addr) +{ + NE2000_BYTE *rp; + + rp = (NE2000_BYTE *) NE_PTR(addr); + *rp = RSWAP(val); +} + +#define ei_inb ei_inb +u8 ei_inb(u32 addr) +{ + NE2000_BYTE *rp, val; + + rp = (NE2000_BYTE *) NE_PTR(addr); + val = *rp; + return (u8) (RSWAP(val) & 0xff); +} + +void ei_insb(u32 addr, void *vbuf, int len) +{ + NE2000_BYTE *rp, val; + u8 *buf; + + buf = (u8 *) vbuf; + rp = (NE2000_BYTE *) NE_DATA_PTR(addr); + for (; (len > 0); len--) { + val = *rp; + *buf++ = RSWAP(val); + } +} + +void ei_insw(u32 addr, void *vbuf, int len) +{ + volatile u16 *rp; + u16 w, *buf; + + buf = (u16 *) vbuf; + rp = (volatile u16 *) NE_DATA_PTR(addr); + for (; (len > 0); len--) { + w = *rp; + *buf++ = BSWAP(w); + } +} + +void ei_outsb(u32 addr, const void *vbuf, int len) +{ + NE2000_BYTE *rp, val; + u8 *buf; + + buf = (u8 *) vbuf; + rp = (NE2000_BYTE *) NE_DATA_PTR(addr); + for (; (len > 0); len--) { + val = *buf++; + *rp = RSWAP(val); + } +} + +void ei_outsw(u32 addr, const void *vbuf, int len) +{ + volatile u16 *rp; + u16 w, *buf; + + buf = (u16 *) vbuf; + rp = (volatile u16 *) NE_DATA_PTR(addr); + for (; (len > 0); len--) { + w = *buf++; + *rp = BSWAP(w); + } +} + +#else /* !NE2000_ODDOFFSET */ + +#define ei_inb inb +#define ei_outb outb +#define ei_insb insb +#define ei_insw insw +#define ei_outsb outsb +#define ei_outsw outsw + +#endif /* !NE2000_ODDOFFSET */ + +#define ei_inb_p ei_inb +#define ei_outb_p ei_outb + +#include "lib8390.c" + +/* + * Hard reset the card. This used to pause for the same period that a + * 8390 reset command required, but that shouldn't be necessary. + */ +static void mcf8390_reset_8390(struct net_device *dev) +{ + unsigned long reset_start_time = jiffies; + u32 addr = dev->base_addr; + + if (ei_debug > 1) + netdev_dbg(dev, "resetting the 8390 t=%ld...\n", jiffies); + + ei_outb(ei_inb(addr + NE_RESET), addr + NE_RESET); + + ei_status.txing = 0; + ei_status.dmaing = 0; + + /* This check _should_not_ be necessary, omit eventually. 
*/ + while ((ei_inb(addr + NE_EN0_ISR) & ENISR_RESET) == 0) { + if (time_after(jiffies, reset_start_time + 2 * HZ / 100)) { + netdev_warn(dev, "%s: did not complete\n", __func__); + break; + } + } + + ei_outb(ENISR_RESET, addr + NE_EN0_ISR); +} + +/* + * This *shouldn't* happen. + * If it does, it's the last thing you'll see + */ +static void mcf8390_dmaing_err(const char *func, struct net_device *dev, + struct ei_device *ei_local) +{ + netdev_err(dev, "%s: DMAing conflict [DMAstat:%d][irqlock:%d]\n", + func, ei_local->dmaing, ei_local->irqlock); +} + +/* + * Grab the 8390 specific header. Similar to the block_input routine, but + * we don't need to be concerned with ring wrap as the header will be at + * the start of a page, so we optimize accordingly. + */ +static void mcf8390_get_8390_hdr(struct net_device *dev, + struct e8390_pkt_hdr *hdr, int ring_page) +{ + struct ei_device *ei_local = netdev_priv(dev); + u32 addr = dev->base_addr; + + if (ei_local->dmaing) { + mcf8390_dmaing_err(__func__, dev, ei_local); + return; + } + + ei_local->dmaing |= 0x01; + ei_outb(E8390_NODMA + E8390_PAGE0 + E8390_START, addr + NE_CMD); + ei_outb(ENISR_RDC, addr + NE_EN0_ISR); + ei_outb(sizeof(struct e8390_pkt_hdr), addr + NE_EN0_RCNTLO); + ei_outb(0, addr + NE_EN0_RCNTHI); + ei_outb(0, addr + NE_EN0_RSARLO); /* On page boundary */ + ei_outb(ring_page, addr + NE_EN0_RSARHI); + ei_outb(E8390_RREAD + E8390_START, addr + NE_CMD); + + ei_insw(addr + NE_DATAPORT, hdr, sizeof(struct e8390_pkt_hdr) >> 1); + + outb(ENISR_RDC, addr + NE_EN0_ISR); /* Ack intr */ + ei_local->dmaing &= ~0x01; + + hdr->count = cpu_to_le16(hdr->count); +} + +/* + * Block input and output, similar to the Crynwr packet driver. + * If you are porting to a new ethercard, look at the packet driver source + * for hints. The NEx000 doesn't share the on-board packet memory -- + * you have to put the packet out through the "remote DMA" dataport + * using z_writeb. + */ +static void mcf8390_block_input(struct net_device *dev, int count, + struct sk_buff *skb, int ring_offset) +{ + struct ei_device *ei_local = netdev_priv(dev); + u32 addr = dev->base_addr; + char *buf = skb->data; + + if (ei_local->dmaing) { + mcf8390_dmaing_err(__func__, dev, ei_local); + return; + } + + ei_local->dmaing |= 0x01; + ei_outb(E8390_NODMA + E8390_PAGE0 + E8390_START, addr + NE_CMD); + ei_outb(ENISR_RDC, addr + NE_EN0_ISR); + ei_outb(count & 0xff, addr + NE_EN0_RCNTLO); + ei_outb(count >> 8, addr + NE_EN0_RCNTHI); + ei_outb(ring_offset & 0xff, addr + NE_EN0_RSARLO); + ei_outb(ring_offset >> 8, addr + NE_EN0_RSARHI); + ei_outb(E8390_RREAD + E8390_START, addr + NE_CMD); + + ei_insw(addr + NE_DATAPORT, buf, count >> 1); + if (count & 1) + buf[count - 1] = ei_inb(addr + NE_DATAPORT); + + ei_outb(ENISR_RDC, addr + NE_EN0_ISR); /* Ack intr */ + ei_local->dmaing &= ~0x01; +} + +static void mcf8390_block_output(struct net_device *dev, int count, + const unsigned char *buf, + const int start_page) +{ + struct ei_device *ei_local = netdev_priv(dev); + u32 addr = dev->base_addr; + unsigned long dma_start; + + /* Make sure we transfer all bytes if 16bit IO writes */ + if (count & 0x1) + count++; + + if (ei_local->dmaing) { + mcf8390_dmaing_err(__func__, dev, ei_local); + return; + } + + ei_local->dmaing |= 0x01; + /* We should already be in page 0, but to be safe... */ + ei_outb(E8390_PAGE0 + E8390_START + E8390_NODMA, addr + NE_CMD); + + ei_outb(ENISR_RDC, addr + NE_EN0_ISR); + + /* Now the normal output. 
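Program the remote byte count (RCNTLO/RCNTHI) and the destination page (RSARLO/RSARHI), start a remote-DMA write, stream the frame out through the data port a word at a time, then wait up to about 20 ms for the remote-DMA-complete (RDC) bit before acking it.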
*/ + ei_outb(count & 0xff, addr + NE_EN0_RCNTLO); + ei_outb(count >> 8, addr + NE_EN0_RCNTHI); + ei_outb(0x00, addr + NE_EN0_RSARLO); + ei_outb(start_page, addr + NE_EN0_RSARHI); + ei_outb(E8390_RWRITE + E8390_START, addr + NE_CMD); + + ei_outsw(addr + NE_DATAPORT, buf, count >> 1); + + dma_start = jiffies; + while ((ei_inb(addr + NE_EN0_ISR) & ENISR_RDC) == 0) { + if (time_after(jiffies, dma_start + 2 * HZ / 100)) { /* 20ms */ + netdev_err(dev, "timeout waiting for Tx RDC\n"); + mcf8390_reset_8390(dev); + __NS8390_init(dev, 1); + break; + } + } + + ei_outb(ENISR_RDC, addr + NE_EN0_ISR); /* Ack intr */ + ei_local->dmaing &= ~0x01; +} + +static const struct net_device_ops mcf8390_netdev_ops = { + .ndo_open = __ei_open, + .ndo_stop = __ei_close, + .ndo_start_xmit = __ei_start_xmit, + .ndo_tx_timeout = __ei_tx_timeout, + .ndo_get_stats = __ei_get_stats, + .ndo_set_rx_mode = __ei_set_multicast_list, + .ndo_validate_addr = eth_validate_addr, + .ndo_set_mac_address = eth_mac_addr, + .ndo_change_mtu = eth_change_mtu, +#ifdef CONFIG_NET_POLL_CONTROLLER + .ndo_poll_controller = __ei_poll, +#endif +}; + +static int mcf8390_init(struct net_device *dev) +{ + static u32 offsets[] = { + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + }; + struct ei_device *ei_local = netdev_priv(dev); + unsigned char SA_prom[32]; + u32 addr = dev->base_addr; + int start_page, stop_page; + int i, ret; + + mcf8390_reset_8390(dev); + + /* + * Read the 16 bytes of station address PROM. + * We must first initialize registers, + * similar to NS8390_init(eifdev, 0). + * We can't reliably read the SAPROM address without this. + * (I learned the hard way!). + */ + { + static const struct { + u32 value; + u32 offset; + } program_seq[] = { + {E8390_NODMA + E8390_PAGE0 + E8390_STOP, NE_CMD}, + /* Select page 0 */ + {0x48, NE_EN0_DCFG}, /* 0x48: Set byte-wide access */ + {0x00, NE_EN0_RCNTLO}, /* Clear the count regs */ + {0x00, NE_EN0_RCNTHI}, + {0x00, NE_EN0_IMR}, /* Mask completion irq */ + {0xFF, NE_EN0_ISR}, + {E8390_RXOFF, NE_EN0_RXCR}, /* 0x20 Set to monitor */ + {E8390_TXOFF, NE_EN0_TXCR}, /* 0x02 and loopback mode */ + {32, NE_EN0_RCNTLO}, + {0x00, NE_EN0_RCNTHI}, + {0x00, NE_EN0_RSARLO}, /* DMA starting at 0x0000 */ + {0x00, NE_EN0_RSARHI}, + {E8390_RREAD + E8390_START, NE_CMD}, + }; + for (i = 0; i < ARRAY_SIZE(program_seq); i++) { + ei_outb(program_seq[i].value, + addr + program_seq[i].offset); + } + } + + for (i = 0; i < 16; i++) { + SA_prom[i] = ei_inb(addr + NE_DATAPORT); + ei_inb(addr + NE_DATAPORT); + } + + /* We must set the 8390 for word mode. 
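Writing 0x49 to the data configuration register sets the WTS bit for 16-bit transfers; the remaining bits select normal (non-loopback) operation and a mid-range FIFO threshold.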
*/ + ei_outb(0x49, addr + NE_EN0_DCFG); + start_page = NESM_START_PG; + stop_page = NESM_STOP_PG; + + /* Install the Interrupt handler */ + ret = request_irq(dev->irq, __ei_interrupt, 0, dev->name, dev); + if (ret) + return ret; + + for (i = 0; i < ETH_ALEN; i++) + dev->dev_addr[i] = SA_prom[i]; + + netdev_dbg(dev, "Found ethernet address: %pM\n", dev->dev_addr); + + ei_local->name = "mcf8390"; + ei_local->tx_start_page = start_page; + ei_local->stop_page = stop_page; + ei_local->word16 = 1; + ei_local->rx_start_page = start_page + TX_PAGES; + ei_local->reset_8390 = mcf8390_reset_8390; + ei_local->block_input = mcf8390_block_input; + ei_local->block_output = mcf8390_block_output; + ei_local->get_8390_hdr = mcf8390_get_8390_hdr; + ei_local->reg_offset = offsets; + + dev->netdev_ops = &mcf8390_netdev_ops; + __NS8390_init(dev, 0); + ret = register_netdev(dev); + if (ret) { + free_irq(dev->irq, dev); + return ret; + } + + netdev_info(dev, "addr=0x%08x irq=%d, Ethernet Address %pM\n", + addr, dev->irq, dev->dev_addr); + return 0; +} + +static int mcf8390_probe(struct platform_device *pdev) +{ + struct net_device *dev; + struct ei_device *ei_local; + struct resource *mem, *irq; + resource_size_t msize; + int ret; + + irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); + if (irq == NULL) { + dev_err(&pdev->dev, "no IRQ specified?\n"); + return -ENXIO; + } + + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (mem == NULL) { + dev_err(&pdev->dev, "no memory address specified?\n"); + return -ENXIO; + } + msize = resource_size(mem); + if (!request_mem_region(mem->start, msize, pdev->name)) + return -EBUSY; + + dev = ____alloc_ei_netdev(0); + if (dev == NULL) { + release_mem_region(mem->start, msize); + return -ENOMEM; + } + + SET_NETDEV_DEV(dev, &pdev->dev); + platform_set_drvdata(pdev, dev); + ei_local = netdev_priv(dev); + + dev->irq = irq->start; + dev->base_addr = mem->start; + + ret = mcf8390_init(dev); + if (ret) { + release_mem_region(mem->start, msize); + free_netdev(dev); + return ret; + } + return 0; +} + +static int mcf8390_remove(struct platform_device *pdev) +{ + struct net_device *dev = platform_get_drvdata(pdev); + struct resource *mem; + + unregister_netdev(dev); + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (mem) + release_mem_region(mem->start, resource_size(mem)); + free_netdev(dev); + return 0; +} + +static struct platform_driver mcf8390_drv = { + .driver = { + .name = "mcf8390", + .owner = THIS_MODULE, + }, + .probe = mcf8390_probe, + .remove = mcf8390_remove, +}; + +module_platform_driver(mcf8390_drv); + +MODULE_DESCRIPTION("MCF8390 ColdFire NS8390 driver"); +MODULE_AUTHOR("Greg Ungerer <gerg@uclinux.org>"); +MODULE_LICENSE("GPL"); +MODULE_ALIAS("platform:mcf8390"); diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c index 348501178089..9c77c736f171 100644 --- a/drivers/net/ethernet/aeroflex/greth.c +++ b/drivers/net/ethernet/aeroflex/greth.c @@ -1014,7 +1014,7 @@ static int greth_set_mac_add(struct net_device *dev, void *p) struct greth_regs *regs; greth = netdev_priv(dev); - regs = (struct greth_regs *) greth->regs; + regs = greth->regs; if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL; @@ -1036,7 +1036,7 @@ static void greth_set_hash_filter(struct net_device *dev) { struct netdev_hw_addr *ha; struct greth_private *greth = netdev_priv(dev); - struct greth_regs *regs = (struct greth_regs *) greth->regs; + struct greth_regs *regs = greth->regs; u32 mc_filter[2]; unsigned int bitnr; @@ -1055,7 +1055,7 
@@ static void greth_set_multicast_list(struct net_device *dev) { int cfg; struct greth_private *greth = netdev_priv(dev); - struct greth_regs *regs = (struct greth_regs *) greth->regs; + struct greth_regs *regs = greth->regs; cfg = GRETH_REGLOAD(regs->control); if (dev->flags & IFF_PROMISC) @@ -1414,7 +1414,7 @@ static int __devinit greth_of_probe(struct platform_device *ofdev) goto error1; } - regs = (struct greth_regs *) greth->regs; + regs = greth->regs; greth->irq = ofdev->archdata.irqs[0]; dev_set_drvdata(greth->dev, dev); diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c index 75299f500ee5..7203b522f234 100644 --- a/drivers/net/ethernet/amd/declance.c +++ b/drivers/net/ethernet/amd/declance.c @@ -623,7 +623,7 @@ static int lance_rx(struct net_device *dev) skb_put(skb, len); /* make room */ cp_from_buf(lp->type, skb->data, - (char *)lp->rx_buf_ptr_cpu[entry], len); + lp->rx_buf_ptr_cpu[entry], len); skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); @@ -919,7 +919,7 @@ static int lance_start_xmit(struct sk_buff *skb, struct net_device *dev) *lib_ptr(ib, btx_ring[entry].length, lp->type) = (-len); *lib_ptr(ib, btx_ring[entry].misc, lp->type) = 0; - cp_to_buf(lp->type, (char *)lp->tx_buf_ptr_cpu[entry], skb->data, len); + cp_to_buf(lp->type, lp->tx_buf_ptr_cpu[entry], skb->data, len); /* Now, give the packet to the lance */ *lib_ptr(ib, btx_ring[entry].tmd1, lp->type) = diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c index a6e2e840884e..5c728436b85e 100644 --- a/drivers/net/ethernet/amd/lance.c +++ b/drivers/net/ethernet/amd/lance.c @@ -873,10 +873,9 @@ lance_init_ring(struct net_device *dev, gfp_t gfp) skb = alloc_skb(PKT_BUF_SZ, GFP_DMA | gfp); lp->rx_skbuff[i] = skb; - if (skb) { - skb->dev = dev; + if (skb) rx_buff = skb->data; - } else + else rx_buff = kmalloc(PKT_BUF_SZ, GFP_DMA | gfp); if (rx_buff == NULL) lp->rx_ring[i].base = 0; diff --git a/drivers/net/ethernet/apple/macmace.c b/drivers/net/ethernet/apple/macmace.c index ab7ff8645ab1..a92ddee7f665 100644 --- a/drivers/net/ethernet/apple/macmace.c +++ b/drivers/net/ethernet/apple/macmace.c @@ -228,7 +228,7 @@ static int __devinit mace_probe(struct platform_device *pdev) * bits are reversed. 
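Each address byte therefore sits at a 16-byte stride in the PROM and has to be run through bitrev8(), which is exactly what the loop below does.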
*/ - addr = (void *)MACE_PROM; + addr = MACE_PROM; for (j = 0; j < 6; ++j) { u8 v = bitrev8(addr[j<<4]); diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_hw.c b/drivers/net/ethernet/atheros/atl1c/atl1c_hw.c index ff9c73859d45..21e261ffbe10 100644 --- a/drivers/net/ethernet/atheros/atl1c/atl1c_hw.c +++ b/drivers/net/ethernet/atheros/atl1c/atl1c_hw.c @@ -199,7 +199,7 @@ int atl1c_read_mac_addr(struct atl1c_hw *hw) err = atl1c_get_permanent_address(hw); if (err) - random_ether_addr(hw->perm_mac_addr); + eth_random_addr(hw->perm_mac_addr); memcpy(hw->mac_addr, hw->perm_mac_addr, sizeof(hw->perm_mac_addr)); return err; @@ -602,7 +602,7 @@ int atl1c_phy_reset(struct atl1c_hw *hw) int atl1c_phy_init(struct atl1c_hw *hw) { - struct atl1c_adapter *adapter = (struct atl1c_adapter *)hw->adapter; + struct atl1c_adapter *adapter = hw->adapter; struct pci_dev *pdev = adapter->pdev; int ret_val; u16 mii_bmcr_data = BMCR_RESET; @@ -696,7 +696,7 @@ int atl1c_get_speed_and_duplex(struct atl1c_hw *hw, u16 *speed, u16 *duplex) /* select one link mode to get lower power consumption */ int atl1c_phy_to_ps_link(struct atl1c_hw *hw) { - struct atl1c_adapter *adapter = (struct atl1c_adapter *)hw->adapter; + struct atl1c_adapter *adapter = hw->adapter; struct pci_dev *pdev = adapter->pdev; int ret = 0; u16 autoneg_advertised = ADVERTISED_10baseT_Half; @@ -768,7 +768,7 @@ int atl1c_restart_autoneg(struct atl1c_hw *hw) int atl1c_power_saving(struct atl1c_hw *hw, u32 wufc) { - struct atl1c_adapter *adapter = (struct atl1c_adapter *)hw->adapter; + struct atl1c_adapter *adapter = hw->adapter; struct pci_dev *pdev = adapter->pdev; u32 master_ctrl, mac_ctrl, phy_ctrl; u32 wol_ctrl, speed; diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_hw.h b/drivers/net/ethernet/atheros/atl1c/atl1c_hw.h index 17d935bdde0a..21d8c4dbdbe1 100644 --- a/drivers/net/ethernet/atheros/atl1c/atl1c_hw.h +++ b/drivers/net/ethernet/atheros/atl1c/atl1c_hw.h @@ -74,6 +74,8 @@ void atl1c_post_phy_linkchg(struct atl1c_hw *hw, u16 link_speed); #define PCI_DEVICE_ID_ATHEROS_L1D_2_0 0x1083 /* AR8151 v2.0 Gigabit 1000 */ #define L2CB_V10 0xc0 #define L2CB_V11 0xc1 +#define L2CB_V20 0xc0 +#define L2CB_V21 0xc1 /* register definition */ #define REG_DEVICE_CAP 0x5C @@ -87,6 +89,9 @@ void atl1c_post_phy_linkchg(struct atl1c_hw *hw, u16 link_speed); #define LINK_CTRL_L1_EN 0x02 #define LINK_CTRL_EXT_SYNC 0x80 +#define REG_PCIE_IND_ACC_ADDR 0x80 +#define REG_PCIE_IND_ACC_DATA 0x84 + #define REG_DEV_SERIALNUM_CTRL 0x200 #define REG_DEV_MAC_SEL_MASK 0x0 /* 0:EUI; 1:MAC */ #define REG_DEV_MAC_SEL_SHIFT 0 diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c index 1f78b63d5efe..1bf5bbfe778e 100644 --- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c +++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c @@ -166,7 +166,7 @@ static void atl1c_reset_pcie(struct atl1c_hw *hw, u32 flag) msleep(5); } -/* +/** * atl1c_irq_enable - Enable default interrupt generation settings * @adapter: board private structure */ @@ -179,7 +179,7 @@ static inline void atl1c_irq_enable(struct atl1c_adapter *adapter) } } -/* +/** * atl1c_irq_disable - Mask off interrupt generation on the NIC * @adapter: board private structure */ @@ -192,7 +192,7 @@ static inline void atl1c_irq_disable(struct atl1c_adapter *adapter) synchronize_irq(adapter->pdev->irq); } -/* +/** * atl1c_irq_reset - reset interrupt confiure on the NIC * @adapter: board private structure */ @@ -220,7 +220,7 @@ static u32 atl1c_wait_until_idle(struct atl1c_hw 
*hw, u32 modu_ctrl) return data; } -/* +/** * atl1c_phy_config - Timer Call-back * @data: pointer to netdev cast into an unsigned long */ @@ -360,7 +360,7 @@ static void atl1c_del_timer(struct atl1c_adapter *adapter) } -/* +/** * atl1c_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ @@ -373,7 +373,7 @@ static void atl1c_tx_timeout(struct net_device *netdev) schedule_work(&adapter->common_task); } -/* +/** * atl1c_set_multi - Multicast and Promiscuous mode set * @netdev: network interface device structure * @@ -452,7 +452,7 @@ static void atl1c_restore_vlan(struct atl1c_adapter *adapter) atl1c_vlan_mode(adapter->netdev, adapter->netdev->features); } -/* +/** * atl1c_set_mac - Change the Ethernet Address of the NIC * @netdev: network interface device structure * @p: pointer to an address structure @@ -517,7 +517,7 @@ static int atl1c_set_features(struct net_device *netdev, return 0; } -/* +/** * atl1c_change_mtu - Change the Maximum Transfer Unit * @netdev: network interface device structure * @new_mtu: new value for maximum frame size @@ -576,12 +576,6 @@ static void atl1c_mdio_write(struct net_device *netdev, int phy_id, atl1c_write_phy_reg(&adapter->hw, reg_num, val); } -/* - * atl1c_mii_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl1c_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { @@ -632,12 +626,6 @@ out: return retval; } -/* - * atl1c_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl1c_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { switch (cmd) { @@ -650,7 +638,7 @@ static int atl1c_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) } } -/* +/** * atl1c_alloc_queues - Allocate memory for all rings * @adapter: board private structure to initialize * @@ -739,6 +727,8 @@ static const struct atl1c_platform_patch plats[] __devinitdata = { static void __devinit atl1c_patch_assign(struct atl1c_hw *hw) { + struct pci_dev *pdev = hw->adapter->pdev; + u32 misc_ctrl; int i = 0; hw->msi_lnkpatch = false; @@ -753,8 +743,20 @@ static void __devinit atl1c_patch_assign(struct atl1c_hw *hw) } i++; } + + if (hw->device_id == PCI_DEVICE_ID_ATHEROS_L2C_B2 && + hw->revision_id == L2CB_V21) { + /* config acess mode */ + pci_write_config_dword(pdev, REG_PCIE_IND_ACC_ADDR, + REG_PCIE_DEV_MISC_CTRL); + pci_read_config_dword(pdev, REG_PCIE_IND_ACC_DATA, &misc_ctrl); + misc_ctrl &= ~0x100; + pci_write_config_dword(pdev, REG_PCIE_IND_ACC_ADDR, + REG_PCIE_DEV_MISC_CTRL); + pci_write_config_dword(pdev, REG_PCIE_IND_ACC_DATA, misc_ctrl); + } } -/* +/** * atl1c_sw_init - Initialize general software structures (struct atl1c_adapter) * @adapter: board private structure to initialize * @@ -780,7 +782,7 @@ static int __devinit atl1c_sw_init(struct atl1c_adapter *adapter) hw->device_id = pdev->device; hw->subsystem_vendor_id = pdev->subsystem_vendor; hw->subsystem_id = pdev->subsystem_device; - AT_READ_REG(hw, PCI_CLASS_REVISION, &revision); + pci_read_config_dword(pdev, PCI_CLASS_REVISION, &revision); hw->revision_id = revision & 0xFF; /* before link up, we assume hibernate is true */ hw->hibernate = true; @@ -852,7 +854,7 @@ static inline void atl1c_clean_buffer(struct pci_dev *pdev, buffer_info->skb = NULL; ATL1C_SET_BUFFER_STATE(buffer_info, ATL1C_BUFFER_FREE); } -/* +/** * atl1c_clean_tx_ring - Free Tx-skb * @adapter: board private structure */ @@ -877,7 +879,7 @@ static void atl1c_clean_tx_ring(struct atl1c_adapter *adapter, tpd_ring->next_to_use = 0; } -/* +/** * atl1c_clean_rx_ring - Free rx-reservation 
skbs * @adapter: board private structure */ @@ -930,7 +932,7 @@ static void atl1c_init_ring_ptrs(struct atl1c_adapter *adapter) } } -/* +/** * atl1c_free_ring_resources - Free Tx / RX descriptor Resources * @adapter: board private structure * @@ -953,7 +955,7 @@ static void atl1c_free_ring_resources(struct atl1c_adapter *adapter) } } -/* +/** * atl1c_setup_mem_resources - allocate Tx / RX descriptor resources * @adapter: board private structure * @@ -988,12 +990,12 @@ static int atl1c_setup_ring_resources(struct atl1c_adapter *adapter) } for (i = 0; i < AT_MAX_TRANSMIT_QUEUE; i++) { tpd_ring[i].buffer_info = - (struct atl1c_buffer *) (tpd_ring->buffer_info + count); + (tpd_ring->buffer_info + count); count += tpd_ring[i].count; } rfd_ring->buffer_info = - (struct atl1c_buffer *) (tpd_ring->buffer_info + count); + (tpd_ring->buffer_info + count); count += rfd_ring->count; rx_desc_count += rfd_ring->count; @@ -1226,7 +1228,7 @@ static void atl1c_start_mac(struct atl1c_adapter *adapter) */ static int atl1c_reset_mac(struct atl1c_hw *hw) { - struct atl1c_adapter *adapter = (struct atl1c_adapter *)hw->adapter; + struct atl1c_adapter *adapter = hw->adapter; struct pci_dev *pdev = adapter->pdev; u32 ctrl_data = 0; @@ -1362,7 +1364,7 @@ static void atl1c_set_aspm(struct atl1c_hw *hw, u16 link_speed) return; } -/* +/** * atl1c_configure - Configure Transmit&Receive Unit after Reset * @adapter: board private structure * @@ -1476,7 +1478,7 @@ static void atl1c_update_hw_stats(struct atl1c_adapter *adapter) } } -/* +/** * atl1c_get_stats - Get System Network Statistics * @netdev: network interface device structure * @@ -1530,8 +1532,7 @@ static inline void atl1c_clear_phy_int(struct atl1c_adapter *adapter) static bool atl1c_clean_tx_irq(struct atl1c_adapter *adapter, enum atl1c_trans_queue type) { - struct atl1c_tpd_ring *tpd_ring = (struct atl1c_tpd_ring *) - &adapter->tpd_ring[type]; + struct atl1c_tpd_ring *tpd_ring = &adapter->tpd_ring[type]; struct atl1c_buffer *buffer_info; struct pci_dev *pdev = adapter->pdev; u16 next_to_clean = atomic_read(&tpd_ring->next_to_clean); @@ -1558,11 +1559,10 @@ static bool atl1c_clean_tx_irq(struct atl1c_adapter *adapter, return true; } -/* +/** * atl1c_intr - Interrupt Handler * @irq: interrupt number * @data: pointer to a network interface device structure - * @pt_regs: CPU registers structure */ static irqreturn_t atl1c_intr(int irq, void *data) { @@ -1813,9 +1813,8 @@ rrs_checked: atl1c_alloc_rx_buffer(adapter); } -/* +/** * atl1c_clean - NAPI Rx polling callback - * @adapter: board private structure */ static int atl1c_clean(struct napi_struct *napi, int budget) { @@ -2270,7 +2269,7 @@ static void atl1c_down(struct atl1c_adapter *adapter) atl1c_reset_dma_ring(adapter); } -/* +/** * atl1c_open - Called when a network interface is made active * @netdev: network interface device structure * @@ -2309,7 +2308,7 @@ err_up: return err; } -/* +/** * atl1c_close - Disables a network interface * @netdev: network interface device structure * @@ -2432,7 +2431,7 @@ static int atl1c_init_netdev(struct net_device *netdev, struct pci_dev *pdev) return 0; } -/* +/** * atl1c_probe - Device Initialization Routine * @pdev: PCI device information struct * @ent: entry in atl1c_pci_tbl @@ -2579,7 +2578,7 @@ err_dma: return err; } -/* +/** * atl1c_remove - Device Removal Routine * @pdev: PCI device information struct * @@ -2605,7 +2604,7 @@ static void __devexit atl1c_remove(struct pci_dev *pdev) free_netdev(netdev); } -/* +/** * atl1c_io_error_detected - called when PCI error is 
detected * @pdev: Pointer to PCI device * @state: The current pci connection state @@ -2633,7 +2632,7 @@ static pci_ers_result_t atl1c_io_error_detected(struct pci_dev *pdev, return PCI_ERS_RESULT_NEED_RESET; } -/* +/** * atl1c_io_slot_reset - called after the pci bus has been reset. * @pdev: Pointer to PCI device * @@ -2661,7 +2660,7 @@ static pci_ers_result_t atl1c_io_slot_reset(struct pci_dev *pdev) return PCI_ERS_RESULT_RECOVERED; } -/* +/** * atl1c_io_resume - called when traffic can start flowing again. * @pdev: Pointer to PCI device * @@ -2704,7 +2703,7 @@ static struct pci_driver atl1c_driver = { .driver.pm = &atl1c_pm_ops, }; -/* +/** * atl1c_init_module - Driver Registration Routine * * atl1c_init_module is the first routine called when the driver is @@ -2715,7 +2714,7 @@ static int __init atl1c_init_module(void) return pci_register_driver(&atl1c_driver); } -/* +/** * atl1c_exit_module - Driver Exit Cleanup Routine * * atl1c_exit_module is called just before the driver is removed diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_ethtool.c b/drivers/net/ethernet/atheros/atl1e/atl1e_ethtool.c index 6e61f9f9ebb5..82b23861bf55 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_ethtool.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_ethtool.c @@ -268,7 +268,7 @@ static int atl1e_set_eeprom(struct net_device *netdev, if (eeprom_buff == NULL) return -ENOMEM; - ptr = (u32 *)eeprom_buff; + ptr = eeprom_buff; if (eeprom->offset & 3) { /* need read/modify/write of first changed EEPROM word */ diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c index 1220e511ced6..a98acc8a956f 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c @@ -89,7 +89,7 @@ static const u16 atl1e_pay_load_size[] = { 128, 256, 512, 1024, 2048, 4096, }; -/* +/** * atl1e_irq_enable - Enable default interrupt generation settings * @adapter: board private structure */ @@ -102,7 +102,7 @@ static inline void atl1e_irq_enable(struct atl1e_adapter *adapter) } } -/* +/** * atl1e_irq_disable - Mask off interrupt generation on the NIC * @adapter: board private structure */ @@ -114,7 +114,7 @@ static inline void atl1e_irq_disable(struct atl1e_adapter *adapter) synchronize_irq(adapter->pdev->irq); } -/* +/** * atl1e_irq_reset - reset interrupt confiure on the NIC * @adapter: board private structure */ @@ -126,7 +126,7 @@ static inline void atl1e_irq_reset(struct atl1e_adapter *adapter) AT_WRITE_FLUSH(&adapter->hw); } -/* +/** * atl1e_phy_config - Timer Call-back * @data: pointer to netdev cast into an unsigned long */ @@ -210,7 +210,7 @@ static int atl1e_check_link(struct atl1e_adapter *adapter) return 0; } -/* +/** * atl1e_link_chg_task - deal with link change event Out of interrupt context * @netdev: network interface device structure */ @@ -259,7 +259,7 @@ static void atl1e_cancel_work(struct atl1e_adapter *adapter) cancel_work_sync(&adapter->link_chg_task); } -/* +/** * atl1e_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ @@ -271,7 +271,7 @@ static void atl1e_tx_timeout(struct net_device *netdev) schedule_work(&adapter->reset_task); } -/* +/** * atl1e_set_multi - Multicast and Promiscuous mode set * @netdev: network interface device structure * @@ -345,7 +345,7 @@ static void atl1e_restore_vlan(struct atl1e_adapter *adapter) atl1e_vlan_mode(adapter->netdev, adapter->netdev->features); } -/* +/** * atl1e_set_mac - Change the Ethernet Address of the NIC * @netdev: 
network interface device structure * @p: pointer to an address structure @@ -397,7 +397,7 @@ static int atl1e_set_features(struct net_device *netdev, return 0; } -/* +/** * atl1e_change_mtu - Change the Maximum Transfer Unit * @netdev: network interface device structure * @new_mtu: new value for maximum frame size @@ -449,12 +449,6 @@ static void atl1e_mdio_write(struct net_device *netdev, int phy_id, atl1e_write_phy_reg(&adapter->hw, reg_num & MDIO_REG_ADDR_MASK, val); } -/* - * atl1e_mii_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl1e_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { @@ -505,12 +499,6 @@ out: } -/* - * atl1e_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl1e_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { switch (cmd) { @@ -541,7 +529,7 @@ static void atl1e_setup_pcicmd(struct pci_dev *pdev) msleep(1); } -/* +/** * atl1e_alloc_queues - Allocate memory for all rings * @adapter: board private structure to initialize * @@ -551,7 +539,7 @@ static int __devinit atl1e_alloc_queues(struct atl1e_adapter *adapter) return 0; } -/* +/** * atl1e_sw_init - Initialize general software structures (struct atl1e_adapter) * @adapter: board private structure to initialize * @@ -635,14 +623,13 @@ static int __devinit atl1e_sw_init(struct atl1e_adapter *adapter) return 0; } -/* +/** * atl1e_clean_tx_ring - Free Tx-skb * @adapter: board private structure */ static void atl1e_clean_tx_ring(struct atl1e_adapter *adapter) { - struct atl1e_tx_ring *tx_ring = (struct atl1e_tx_ring *) - &adapter->tx_ring; + struct atl1e_tx_ring *tx_ring = &adapter->tx_ring; struct atl1e_tx_buffer *tx_buffer = NULL; struct pci_dev *pdev = adapter->pdev; u16 index, ring_count; @@ -679,14 +666,14 @@ static void atl1e_clean_tx_ring(struct atl1e_adapter *adapter) ring_count); } -/* +/** * atl1e_clean_rx_ring - Free rx-reservation skbs * @adapter: board private structure */ static void atl1e_clean_rx_ring(struct atl1e_adapter *adapter) { struct atl1e_rx_ring *rx_ring = - (struct atl1e_rx_ring *)&adapter->rx_ring; + &adapter->rx_ring; struct atl1e_rx_page_desc *rx_page_desc = rx_ring->rx_page_desc; u16 i, j; @@ -762,7 +749,7 @@ static void atl1e_init_ring_ptrs(struct atl1e_adapter *adapter) } } -/* +/** * atl1e_free_ring_resources - Free Tx / RX descriptor Resources * @adapter: board private structure * @@ -787,7 +774,7 @@ static void atl1e_free_ring_resources(struct atl1e_adapter *adapter) } } -/* +/** * atl1e_setup_mem_resources - allocate Tx / RX descriptor resources * @adapter: board private structure * @@ -884,14 +871,12 @@ failed: return err; } -static inline void atl1e_configure_des_ring(const struct atl1e_adapter *adapter) +static inline void atl1e_configure_des_ring(struct atl1e_adapter *adapter) { - struct atl1e_hw *hw = (struct atl1e_hw *)&adapter->hw; - struct atl1e_rx_ring *rx_ring = - (struct atl1e_rx_ring *)&adapter->rx_ring; - struct atl1e_tx_ring *tx_ring = - (struct atl1e_tx_ring *)&adapter->tx_ring; + struct atl1e_hw *hw = &adapter->hw; + struct atl1e_rx_ring *rx_ring = &adapter->rx_ring; + struct atl1e_tx_ring *tx_ring = &adapter->tx_ring; struct atl1e_rx_page_desc *rx_page_desc = NULL; int i, j; @@ -932,7 +917,7 @@ static inline void atl1e_configure_des_ring(const struct atl1e_adapter *adapter) static inline void atl1e_configure_tx(struct atl1e_adapter *adapter) { - struct atl1e_hw *hw = (struct atl1e_hw *)&adapter->hw; + struct atl1e_hw *hw = &adapter->hw; u32 dev_ctrl_data = 0; u32 max_pay_load = 0; u32 jumbo_thresh = 0; @@ -975,7 +960,7 
@@ static inline void atl1e_configure_tx(struct atl1e_adapter *adapter) static inline void atl1e_configure_rx(struct atl1e_adapter *adapter) { - struct atl1e_hw *hw = (struct atl1e_hw *)&adapter->hw; + struct atl1e_hw *hw = &adapter->hw; u32 rxf_len = 0; u32 rxf_low = 0; u32 rxf_high = 0; @@ -1078,7 +1063,7 @@ static void atl1e_setup_mac_ctrl(struct atl1e_adapter *adapter) AT_WRITE_REG(hw, REG_MAC_CTRL, value); } -/* +/** * atl1e_configure - Configure Transmit&Receive Unit after Reset * @adapter: board private structure * @@ -1148,7 +1133,7 @@ static int atl1e_configure(struct atl1e_adapter *adapter) return 0; } -/* +/** * atl1e_get_stats - Get System Network Statistics * @netdev: network interface device structure * @@ -1224,8 +1209,7 @@ static inline void atl1e_clear_phy_int(struct atl1e_adapter *adapter) static bool atl1e_clean_tx_irq(struct atl1e_adapter *adapter) { - struct atl1e_tx_ring *tx_ring = (struct atl1e_tx_ring *) - &adapter->tx_ring; + struct atl1e_tx_ring *tx_ring = &adapter->tx_ring; struct atl1e_tx_buffer *tx_buffer = NULL; u16 hw_next_to_clean = AT_READ_REGW(&adapter->hw, REG_TPD_CONS_IDX); u16 next_to_clean = atomic_read(&tx_ring->next_to_clean); @@ -1261,11 +1245,10 @@ static bool atl1e_clean_tx_irq(struct atl1e_adapter *adapter) return true; } -/* +/** * atl1e_intr - Interrupt Handler * @irq: interrupt number * @data: pointer to a network interface device structure - * @pt_regs: CPU registers structure */ static irqreturn_t atl1e_intr(int irq, void *data) { @@ -1384,15 +1367,14 @@ static struct atl1e_rx_page *atl1e_get_rx_page(struct atl1e_adapter *adapter, (struct atl1e_rx_page_desc *) adapter->rx_ring.rx_page_desc; u8 rx_using = rx_page_desc[que].rx_using; - return (struct atl1e_rx_page *)&(rx_page_desc[que].rx_page[rx_using]); + return &(rx_page_desc[que].rx_page[rx_using]); } static void atl1e_clean_rx_irq(struct atl1e_adapter *adapter, u8 que, int *work_done, int work_to_do) { struct net_device *netdev = adapter->netdev; - struct atl1e_rx_ring *rx_ring = (struct atl1e_rx_ring *) - &adapter->rx_ring; + struct atl1e_rx_ring *rx_ring = &adapter->rx_ring; struct atl1e_rx_page_desc *rx_page_desc = (struct atl1e_rx_page_desc *) rx_ring->rx_page_desc; struct sk_buff *skb = NULL; @@ -1494,9 +1476,8 @@ fatal_err: schedule_work(&adapter->reset_task); } -/* +/** * atl1e_clean - NAPI Rx polling callback - * @adapter: board private structure */ static int atl1e_clean(struct napi_struct *napi, int budget) { @@ -1576,7 +1557,7 @@ static struct atl1e_tpd_desc *atl1e_get_tpd(struct atl1e_adapter *adapter) tx_ring->next_to_use = 0; memset(&tx_ring->desc[next_to_use], 0, sizeof(struct atl1e_tpd_desc)); - return (struct atl1e_tpd_desc *)&tx_ring->desc[next_to_use]; + return &tx_ring->desc[next_to_use]; } static struct atl1e_tx_buffer * @@ -1961,7 +1942,7 @@ void atl1e_down(struct atl1e_adapter *adapter) atl1e_clean_rx_ring(adapter); } -/* +/** * atl1e_open - Called when a network interface is made active * @netdev: network interface device structure * @@ -2007,7 +1988,7 @@ err_req_irq: return err; } -/* +/** * atl1e_close - Disables a network interface * @netdev: network interface device structure * @@ -2061,8 +2042,8 @@ static int atl1e_suspend(struct pci_dev *pdev, pm_message_t state) if (wufc) { /* get link status */ - atl1e_read_phy_reg(hw, MII_BMSR, (u16 *)&mii_bmsr_data); - atl1e_read_phy_reg(hw, MII_BMSR, (u16 *)&mii_bmsr_data); + atl1e_read_phy_reg(hw, MII_BMSR, &mii_bmsr_data); + atl1e_read_phy_reg(hw, MII_BMSR, &mii_bmsr_data); mii_advertise_data = ADVERTISE_10HALF; @@ 
-2086,7 +2067,7 @@ static int atl1e_suspend(struct pci_dev *pdev, pm_message_t state) for (i = 0; i < AT_SUSPEND_LINK_TIMEOUT; i++) { msleep(100); atl1e_read_phy_reg(hw, MII_BMSR, - (u16 *)&mii_bmsr_data); + &mii_bmsr_data); if (mii_bmsr_data & BMSR_LSTATUS) break; } @@ -2243,7 +2224,7 @@ static int atl1e_init_netdev(struct net_device *netdev, struct pci_dev *pdev) return 0; } -/* +/** * atl1e_probe - Device Initialization Routine * @pdev: PCI device information struct * @ent: entry in atl1e_pci_tbl @@ -2397,7 +2378,7 @@ err_dma: return err; } -/* +/** * atl1e_remove - Device Removal Routine * @pdev: PCI device information struct * @@ -2429,7 +2410,7 @@ static void __devexit atl1e_remove(struct pci_dev *pdev) pci_disable_device(pdev); } -/* +/** * atl1e_io_error_detected - called when PCI error is detected * @pdev: Pointer to PCI device * @state: The current pci connection state @@ -2457,7 +2438,7 @@ atl1e_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state) return PCI_ERS_RESULT_NEED_RESET; } -/* +/** * atl1e_io_slot_reset - called after the pci bus has been reset. * @pdev: Pointer to PCI device * @@ -2484,7 +2465,7 @@ static pci_ers_result_t atl1e_io_slot_reset(struct pci_dev *pdev) return PCI_ERS_RESULT_RECOVERED; } -/* +/** * atl1e_io_resume - called when traffic can start flowing again. * @pdev: Pointer to PCI device * @@ -2528,7 +2509,7 @@ static struct pci_driver atl1e_driver = { .err_handler = &atl1e_err_handler }; -/* +/** * atl1e_init_module - Driver Registration Routine * * atl1e_init_module is the first routine called when the driver is @@ -2539,7 +2520,7 @@ static int __init atl1e_init_module(void) return pci_register_driver(&atl1e_driver); } -/* +/** * atl1e_exit_module - Driver Exit Cleanup Routine * * atl1e_exit_module is called just before the driver is removed diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_param.c b/drivers/net/ethernet/atheros/atl1e/atl1e_param.c index 0ce60b6e7ef0..b5086f1e637f 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_param.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_param.c @@ -168,7 +168,7 @@ static int __devinit atl1e_validate_option(int *value, struct atl1e_option *opt, return -1; } -/* +/** * atl1e_check_options - Range Checking for Command Line Parameters * @adapter: board private structure * diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c index 5d10884e5080..7bae2ad7a7c0 100644 --- a/drivers/net/ethernet/atheros/atlx/atl1.c +++ b/drivers/net/ethernet/atheros/atlx/atl1.c @@ -195,7 +195,7 @@ static int __devinit atl1_validate_option(int *value, struct atl1_option *opt, return -1; } -/* +/** * atl1_check_options - Range Checking for Command Line Parameters * @adapter: board private structure * @@ -538,7 +538,7 @@ static s32 atl1_read_mac_addr(struct atl1_hw *hw) u16 i; if (atl1_get_permanent_address(hw)) { - random_ether_addr(hw->perm_mac_addr); + eth_random_addr(hw->perm_mac_addr); ret = 1; } @@ -937,7 +937,7 @@ static void atl1_set_mac_addr(struct atl1_hw *hw) iowrite32(value, (hw->hw_addr + REG_MAC_STA_ADDR) + (1 << 2)); } -/* +/** * atl1_sw_init - Initialize general software structures (struct atl1_adapter) * @adapter: board private structure to initialize * @@ -1014,12 +1014,6 @@ static void mdio_write(struct net_device *netdev, int phy_id, int reg_num, atl1_write_phy_reg(&adapter->hw, reg_num, val); } -/* - * atl1_mii_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl1_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { struct 
atl1_adapter *adapter = netdev_priv(netdev); @@ -1036,7 +1030,7 @@ static int atl1_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) return retval; } -/* +/** * atl1_setup_mem_resources - allocate Tx / RX descriptor resources * @adapter: board private structure * @@ -1061,7 +1055,7 @@ static s32 atl1_setup_ring_resources(struct atl1_adapter *adapter) goto err_nomem; } rfd_ring->buffer_info = - (struct atl1_buffer *)(tpd_ring->buffer_info + tpd_ring->count); + (tpd_ring->buffer_info + tpd_ring->count); /* * real ring DMA buffer @@ -1147,7 +1141,7 @@ static void atl1_init_ring_ptrs(struct atl1_adapter *adapter) atomic_set(&rrd_ring->next_to_clean, 0); } -/* +/** * atl1_clean_rx_ring - Free RFD Buffers * @adapter: board private structure */ @@ -1187,7 +1181,7 @@ static void atl1_clean_rx_ring(struct atl1_adapter *adapter) atomic_set(&rrd_ring->next_to_clean, 0); } -/* +/** * atl1_clean_tx_ring - Free Tx Buffers * @adapter: board private structure */ @@ -1227,7 +1221,7 @@ static void atl1_clean_tx_ring(struct atl1_adapter *adapter) atomic_set(&tpd_ring->next_to_clean, 0); } -/* +/** * atl1_free_ring_resources - Free Tx / RX descriptor Resources * @adapter: board private structure * @@ -1470,7 +1464,7 @@ static void set_flow_ctrl_new(struct atl1_hw *hw) iowrite32(value, hw->hw_addr + REG_RXQ_RRD_PAUSE_THRESH); } -/* +/** * atl1_configure - Configure Transmit&Receive Unit after Reset * @adapter: board private structure * @@ -1844,7 +1838,7 @@ static void atl1_rx_checksum(struct atl1_adapter *adapter, } } -/* +/** * atl1_alloc_rx_buffers - Replace used receive buffers * @adapter: address of board private structure */ @@ -2489,11 +2483,10 @@ static inline int atl1_sched_rings_clean(struct atl1_adapter* adapter) return 1; } -/* +/** * atl1_intr - Interrupt Handler * @irq: interrupt number * @data: pointer to a network interface device structure - * @pt_regs: CPU registers structure */ static irqreturn_t atl1_intr(int irq, void *data) { @@ -2574,7 +2567,7 @@ static irqreturn_t atl1_intr(int irq, void *data) } -/* +/** * atl1_phy_config - Timer Call-back * @data: pointer to netdev cast into an unsigned long */ @@ -2693,7 +2686,7 @@ static void atl1_reset_dev_task(struct work_struct *work) netif_device_attach(netdev); } -/* +/** * atl1_change_mtu - Change the Maximum Transfer Unit * @netdev: network interface device structure * @new_mtu: new value for maximum frame size @@ -2727,7 +2720,7 @@ static int atl1_change_mtu(struct net_device *netdev, int new_mtu) return 0; } -/* +/** * atl1_open - Called when a network interface is made active * @netdev: network interface device structure * @@ -2762,7 +2755,7 @@ err_up: return err; } -/* +/** * atl1_close - Disables a network interface * @netdev: network interface device structure * @@ -2930,7 +2923,7 @@ static const struct net_device_ops atl1_netdev_ops = { #endif }; -/* +/** * atl1_probe - Device Initialization Routine * @pdev: PCI device information struct * @ent: entry in atl1_pci_tbl @@ -3111,7 +3104,7 @@ err_request_regions: return err; } -/* +/** * atl1_remove - Device Removal Routine * @pdev: PCI device information struct * @@ -3158,7 +3151,7 @@ static struct pci_driver atl1_driver = { .driver.pm = ATL1_PM_OPS, }; -/* +/** * atl1_exit_module - Driver Exit Cleanup Routine * * atl1_exit_module is called just before the driver is removed @@ -3169,7 +3162,7 @@ static void __exit atl1_exit_module(void) pci_unregister_driver(&atl1_driver); } -/* +/** * atl1_init_module - Driver Registration Routine * * atl1_init_module is the first routine 
called when the driver is diff --git a/drivers/net/ethernet/atheros/atlx/atl2.c b/drivers/net/ethernet/atheros/atlx/atl2.c index 6762dc406b25..57d64b80fd72 100644 --- a/drivers/net/ethernet/atheros/atlx/atl2.c +++ b/drivers/net/ethernet/atheros/atlx/atl2.c @@ -75,7 +75,7 @@ static void atl2_set_ethtool_ops(struct net_device *netdev); static void atl2_check_options(struct atl2_adapter *adapter); -/* +/** * atl2_sw_init - Initialize general software structures (struct atl2_adapter) * @adapter: board private structure to initialize * @@ -123,7 +123,7 @@ static int __devinit atl2_sw_init(struct atl2_adapter *adapter) return 0; } -/* +/** * atl2_set_multi - Multicast and Promiscuous mode set * @netdev: network interface device structure * @@ -177,7 +177,7 @@ static void init_ring_ptrs(struct atl2_adapter *adapter) adapter->txs_next_clear = 0; } -/* +/** * atl2_configure - Configure Transmit&Receive Unit after Reset * @adapter: board private structure * @@ -283,7 +283,7 @@ static int atl2_configure(struct atl2_adapter *adapter) return value; } -/* +/** * atl2_setup_ring_resources - allocate Tx / RX descriptor resources * @adapter: board private structure * @@ -340,7 +340,7 @@ static s32 atl2_setup_ring_resources(struct atl2_adapter *adapter) return 0; } -/* +/** * atl2_irq_enable - Enable default interrupt generation settings * @adapter: board private structure */ @@ -350,7 +350,7 @@ static inline void atl2_irq_enable(struct atl2_adapter *adapter) ATL2_WRITE_FLUSH(&adapter->hw); } -/* +/** * atl2_irq_disable - Mask off interrupt generation on the NIC * @adapter: board private structure */ @@ -599,11 +599,10 @@ static inline void atl2_clear_phy_int(struct atl2_adapter *adapter) spin_unlock(&adapter->stats_lock); } -/* +/** * atl2_intr - Interrupt Handler * @irq: interrupt number * @data: pointer to a network interface device structure - * @pt_regs: CPU registers structure */ static irqreturn_t atl2_intr(int irq, void *data) { @@ -679,7 +678,7 @@ static int atl2_request_irq(struct atl2_adapter *adapter) netdev); } -/* +/** * atl2_free_ring_resources - Free Tx / RX descriptor Resources * @adapter: board private structure * @@ -692,7 +691,7 @@ static void atl2_free_ring_resources(struct atl2_adapter *adapter) adapter->ring_dma); } -/* +/** * atl2_open - Called when a network interface is made active * @netdev: network interface device structure * @@ -798,7 +797,7 @@ static void atl2_free_irq(struct atl2_adapter *adapter) #endif } -/* +/** * atl2_close - Disables a network interface * @netdev: network interface device structure * @@ -918,7 +917,7 @@ static netdev_tx_t atl2_xmit_frame(struct sk_buff *skb, return NETDEV_TX_OK; } -/* +/** * atl2_change_mtu - Change the Maximum Transfer Unit * @netdev: network interface device structure * @new_mtu: new value for maximum frame size @@ -943,7 +942,7 @@ static int atl2_change_mtu(struct net_device *netdev, int new_mtu) return 0; } -/* +/** * atl2_set_mac - Change the Ethernet Address of the NIC * @netdev: network interface device structure * @p: pointer to an address structure @@ -969,12 +968,6 @@ static int atl2_set_mac(struct net_device *netdev, void *p) return 0; } -/* - * atl2_mii_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int atl2_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { struct atl2_adapter *adapter = netdev_priv(netdev); @@ -1011,12 +1004,6 @@ static int atl2_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) return 0; } -/* - * atl2_ioctl - - * @netdev: - * @ifreq: - * @cmd: - */ static int 
atl2_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) { switch (cmd) { @@ -1033,7 +1020,7 @@ static int atl2_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) } } -/* +/** * atl2_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ @@ -1045,7 +1032,7 @@ static void atl2_tx_timeout(struct net_device *netdev) schedule_work(&adapter->reset_task); } -/* +/** * atl2_watchdog - Timer Call-back * @data: pointer to netdev cast into an unsigned long */ @@ -1070,7 +1057,7 @@ static void atl2_watchdog(unsigned long data) } } -/* +/** * atl2_phy_config - Timer Call-back * @data: pointer to netdev cast into an unsigned long */ @@ -1274,9 +1261,8 @@ static int atl2_check_link(struct atl2_adapter *adapter) return 0; } -/* +/** * atl2_link_chg_task - deal with link change event Out of interrupt context - * @netdev: network interface device structure */ static void atl2_link_chg_task(struct work_struct *work) { @@ -1341,7 +1327,7 @@ static const struct net_device_ops atl2_netdev_ops = { #endif }; -/* +/** * atl2_probe - Device Initialization Routine * @pdev: PCI device information struct * @ent: entry in atl2_pci_tbl @@ -1501,7 +1487,7 @@ err_dma: return err; } -/* +/** * atl2_remove - Device Removal Routine * @pdev: PCI device information struct * @@ -1728,7 +1714,7 @@ static struct pci_driver atl2_driver = { .shutdown = atl2_shutdown, }; -/* +/** * atl2_init_module - Driver Registration Routine * * atl2_init_module is the first routine called when the driver is @@ -1743,7 +1729,7 @@ static int __init atl2_init_module(void) } module_init(atl2_init_module); -/* +/** * atl2_exit_module - Driver Exit Cleanup Routine * * atl2_exit_module is called just before the driver is removed @@ -2360,7 +2346,7 @@ static s32 atl2_read_mac_addr(struct atl2_hw *hw) { if (get_permanent_address(hw)) { /* for test */ - /* FIXME: shouldn't we use random_ether_addr() here? */ + /* FIXME: shouldn't we use eth_random_addr() here? 
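(random_ether_addr() is now only a compatibility alias for eth_random_addr(); either one yields a random, locally administered unicast address instead of the fixed test address below.)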
*/ hw->perm_mac_addr[0] = 0x00; hw->perm_mac_addr[1] = 0x13; hw->perm_mac_addr[2] = 0x74; @@ -2997,7 +2983,7 @@ static int __devinit atl2_validate_option(int *value, struct atl2_option *opt) return -1; } -/* +/** * atl2_check_options - Range Checking for Command Line Parameters * @adapter: board private structure * diff --git a/drivers/net/ethernet/atheros/atlx/atlx.c b/drivers/net/ethernet/atheros/atlx/atlx.c index b4f3aa49a7fc..77ffbc4a5071 100644 --- a/drivers/net/ethernet/atheros/atlx/atlx.c +++ b/drivers/net/ethernet/atheros/atlx/atlx.c @@ -64,7 +64,7 @@ static int atlx_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) } } -/* +/** * atlx_set_mac - Change the Ethernet Address of the NIC * @netdev: network interface device structure * @p: pointer to an address structure @@ -115,7 +115,7 @@ static void atlx_check_for_link(struct atlx_adapter *adapter) schedule_work(&adapter->link_chg_task); } -/* +/** * atlx_set_multi - Multicast and Promiscuous mode set * @netdev: network interface device structure * @@ -162,7 +162,7 @@ static inline void atlx_imr_set(struct atlx_adapter *adapter, ioread32(adapter->hw.hw_addr + REG_IMR); } -/* +/** * atlx_irq_enable - Enable default interrupt generation settings * @adapter: board private structure */ @@ -172,7 +172,7 @@ static void atlx_irq_enable(struct atlx_adapter *adapter) adapter->int_enabled = true; } -/* +/** * atlx_irq_disable - Mask off interrupt generation on the NIC * @adapter: board private structure */ @@ -193,7 +193,7 @@ static void atlx_clear_phy_int(struct atlx_adapter *adapter) spin_unlock_irqrestore(&adapter->lock, flags); } -/* +/** * atlx_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c index d09c6b583d17..9786c0e9890e 100644 --- a/drivers/net/ethernet/broadcom/b44.c +++ b/drivers/net/ethernet/broadcom/b44.c @@ -483,9 +483,11 @@ out: static void b44_stats_update(struct b44 *bp) { unsigned long reg; - u32 *val; + u64 *val; val = &bp->hw_stats.tx_good_octets; + u64_stats_update_begin(&bp->hw_stats.syncp); + for (reg = B44_TX_GOOD_O; reg <= B44_TX_PAUSE; reg += 4UL) { *val++ += br32(bp, reg); } @@ -496,6 +498,8 @@ static void b44_stats_update(struct b44 *bp) for (reg = B44_RX_GOOD_O; reg <= B44_RX_NPAUSE; reg += 4UL) { *val++ += br32(bp, reg); } + + u64_stats_update_end(&bp->hw_stats.syncp); } static void b44_link_report(struct b44 *bp) @@ -1635,44 +1639,49 @@ static int b44_close(struct net_device *dev) return 0; } -static struct net_device_stats *b44_get_stats(struct net_device *dev) +static struct rtnl_link_stats64 *b44_get_stats64(struct net_device *dev, + struct rtnl_link_stats64 *nstat) { struct b44 *bp = netdev_priv(dev); - struct net_device_stats *nstat = &dev->stats; struct b44_hw_stats *hwstat = &bp->hw_stats; - - /* Convert HW stats into netdevice stats. 
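The b44 statistics rework in this hunk widens the hardware counters to u64 and guards them with a u64_stats_sync, with the reader retrying until it sees a consistent snapshot. A minimal, self-contained sketch of that writer/reader pairing is given below; all structure and function names are invented for illustration and do not appear in the driver.

#include <linux/u64_stats_sync.h>

struct example_hw_stats {
        u64 tx_bytes;
        u64 rx_bytes;
        struct u64_stats_sync syncp;
};

/* Writer side: bracket counter updates so readers can detect a torn read */
static void example_stats_update(struct example_hw_stats *s, u64 tx, u64 rx)
{
        u64_stats_update_begin(&s->syncp);
        s->tx_bytes += tx;
        s->rx_bytes += rx;
        u64_stats_update_end(&s->syncp);
}

/* Reader side: retry until the sequence count is stable across the copy */
static void example_stats_read(struct example_hw_stats *s, u64 *tx, u64 *rx)
{
        unsigned int start;

        do {
                start = u64_stats_fetch_begin_bh(&s->syncp);
                *tx = s->tx_bytes;
                *rx = s->rx_bytes;
        } while (u64_stats_fetch_retry_bh(&s->syncp, start));
}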
*/ - nstat->rx_packets = hwstat->rx_pkts; - nstat->tx_packets = hwstat->tx_pkts; - nstat->rx_bytes = hwstat->rx_octets; - nstat->tx_bytes = hwstat->tx_octets; - nstat->tx_errors = (hwstat->tx_jabber_pkts + - hwstat->tx_oversize_pkts + - hwstat->tx_underruns + - hwstat->tx_excessive_cols + - hwstat->tx_late_cols); - nstat->multicast = hwstat->tx_multicast_pkts; - nstat->collisions = hwstat->tx_total_cols; - - nstat->rx_length_errors = (hwstat->rx_oversize_pkts + - hwstat->rx_undersize); - nstat->rx_over_errors = hwstat->rx_missed_pkts; - nstat->rx_frame_errors = hwstat->rx_align_errs; - nstat->rx_crc_errors = hwstat->rx_crc_errs; - nstat->rx_errors = (hwstat->rx_jabber_pkts + - hwstat->rx_oversize_pkts + - hwstat->rx_missed_pkts + - hwstat->rx_crc_align_errs + - hwstat->rx_undersize + - hwstat->rx_crc_errs + - hwstat->rx_align_errs + - hwstat->rx_symbol_errs); - - nstat->tx_aborted_errors = hwstat->tx_underruns; + unsigned int start; + + do { + start = u64_stats_fetch_begin_bh(&hwstat->syncp); + + /* Convert HW stats into rtnl_link_stats64 stats. */ + nstat->rx_packets = hwstat->rx_pkts; + nstat->tx_packets = hwstat->tx_pkts; + nstat->rx_bytes = hwstat->rx_octets; + nstat->tx_bytes = hwstat->tx_octets; + nstat->tx_errors = (hwstat->tx_jabber_pkts + + hwstat->tx_oversize_pkts + + hwstat->tx_underruns + + hwstat->tx_excessive_cols + + hwstat->tx_late_cols); + nstat->multicast = hwstat->tx_multicast_pkts; + nstat->collisions = hwstat->tx_total_cols; + + nstat->rx_length_errors = (hwstat->rx_oversize_pkts + + hwstat->rx_undersize); + nstat->rx_over_errors = hwstat->rx_missed_pkts; + nstat->rx_frame_errors = hwstat->rx_align_errs; + nstat->rx_crc_errors = hwstat->rx_crc_errs; + nstat->rx_errors = (hwstat->rx_jabber_pkts + + hwstat->rx_oversize_pkts + + hwstat->rx_missed_pkts + + hwstat->rx_crc_align_errs + + hwstat->rx_undersize + + hwstat->rx_crc_errs + + hwstat->rx_align_errs + + hwstat->rx_symbol_errs); + + nstat->tx_aborted_errors = hwstat->tx_underruns; #if 0 - /* Carrier lost counter seems to be broken for some devices */ - nstat->tx_carrier_errors = hwstat->tx_carrier_lost; + /* Carrier lost counter seems to be broken for some devices */ + nstat->tx_carrier_errors = hwstat->tx_carrier_lost; #endif + } while (u64_stats_fetch_retry_bh(&hwstat->syncp, start)); return nstat; } @@ -1993,17 +2002,24 @@ static void b44_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 *data) { struct b44 *bp = netdev_priv(dev); - u32 *val = &bp->hw_stats.tx_good_octets; + struct b44_hw_stats *hwstat = &bp->hw_stats; + u64 *data_src, *data_dst; + unsigned int start; u32 i; spin_lock_irq(&bp->lock); - b44_stats_update(bp); + spin_unlock_irq(&bp->lock); - for (i = 0; i < ARRAY_SIZE(b44_gstrings); i++) - *data++ = *val++; + do { + data_src = &hwstat->tx_good_octets; + data_dst = data; + start = u64_stats_fetch_begin_bh(&hwstat->syncp); - spin_unlock_irq(&bp->lock); + for (i = 0; i < ARRAY_SIZE(b44_gstrings); i++) + *data_dst++ = *data_src++; + + } while (u64_stats_fetch_retry_bh(&hwstat->syncp, start)); } static void b44_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol) @@ -2113,7 +2129,7 @@ static const struct net_device_ops b44_netdev_ops = { .ndo_open = b44_open, .ndo_stop = b44_close, .ndo_start_xmit = b44_start_xmit, - .ndo_get_stats = b44_get_stats, + .ndo_get_stats64 = b44_get_stats64, .ndo_set_rx_mode = b44_set_rx_mode, .ndo_set_mac_address = b44_set_mac_addr, .ndo_validate_addr = eth_validate_addr, diff --git a/drivers/net/ethernet/broadcom/b44.h 
b/drivers/net/ethernet/broadcom/b44.h index e1905a49279f..8993d72f0420 100644 --- a/drivers/net/ethernet/broadcom/b44.h +++ b/drivers/net/ethernet/broadcom/b44.h @@ -338,9 +338,10 @@ struct ring_info { * the layout */ struct b44_hw_stats { -#define _B44(x) u32 x; +#define _B44(x) u64 x; B44_STAT_REG_DECLARE #undef _B44 + struct u64_stats_sync syncp; }; struct ssb_device; diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c index 1fa4927a45b1..79cebd8525ce 100644 --- a/drivers/net/ethernet/broadcom/bnx2.c +++ b/drivers/net/ethernet/broadcom/bnx2.c @@ -14,6 +14,7 @@ #include <linux/module.h> #include <linux/moduleparam.h> +#include <linux/stringify.h> #include <linux/kernel.h> #include <linux/timer.h> #include <linux/errno.h> @@ -57,8 +58,8 @@ #include "bnx2_fw.h" #define DRV_MODULE_NAME "bnx2" -#define DRV_MODULE_VERSION "2.2.1" -#define DRV_MODULE_RELDATE "Dec 18, 2011" +#define DRV_MODULE_VERSION "2.2.3" +#define DRV_MODULE_RELDATE "June 27, 2012" #define FW_MIPS_FILE_06 "bnx2/bnx2-mips-06-6.2.3.fw" #define FW_RV2P_FILE_06 "bnx2/bnx2-rv2p-06-6.0.15.fw" #define FW_MIPS_FILE_09 "bnx2/bnx2-mips-09-6.2.1b.fw" @@ -872,8 +873,7 @@ bnx2_alloc_mem(struct bnx2 *bp) bnapi = &bp->bnx2_napi[i]; - sblk = (void *) (status_blk + - BNX2_SBLK_MSIX_ALIGN_SIZE * i); + sblk = (status_blk + BNX2_SBLK_MSIX_ALIGN_SIZE * i); bnapi->status_blk.msix = sblk; bnapi->hw_tx_cons_ptr = &sblk->status_tx_quick_consumer_index; @@ -1972,22 +1972,26 @@ bnx2_remote_phy_event(struct bnx2 *bp) switch (speed) { case BNX2_LINK_STATUS_10HALF: bp->duplex = DUPLEX_HALF; + /* fall through */ case BNX2_LINK_STATUS_10FULL: bp->line_speed = SPEED_10; break; case BNX2_LINK_STATUS_100HALF: bp->duplex = DUPLEX_HALF; + /* fall through */ case BNX2_LINK_STATUS_100BASE_T4: case BNX2_LINK_STATUS_100FULL: bp->line_speed = SPEED_100; break; case BNX2_LINK_STATUS_1000HALF: bp->duplex = DUPLEX_HALF; + /* fall through */ case BNX2_LINK_STATUS_1000FULL: bp->line_speed = SPEED_1000; break; case BNX2_LINK_STATUS_2500HALF: bp->duplex = DUPLEX_HALF; + /* fall through */ case BNX2_LINK_STATUS_2500FULL: bp->line_speed = SPEED_2500; break; @@ -2473,6 +2477,7 @@ bnx2_dump_mcp_state(struct bnx2 *bp) bnx2_shmem_rd(bp, BNX2_BC_STATE_RESET_TYPE)); pr_cont(" condition[%08x]\n", bnx2_shmem_rd(bp, BNX2_BC_STATE_CONDITION)); + DP_SHMEM_LINE(bp, BNX2_BC_RESET_TYPE); DP_SHMEM_LINE(bp, 0x3cc); DP_SHMEM_LINE(bp, 0x3dc); DP_SHMEM_LINE(bp, 0x3ec); @@ -6245,7 +6250,7 @@ bnx2_enable_msix(struct bnx2 *bp, int msix_vecs) static int bnx2_setup_int_mode(struct bnx2 *bp, int dis_msi) { - int cpus = num_online_cpus(); + int cpus = netif_get_num_default_rss_queues(); int msix_vecs; if (!bp->num_req_rx_rings) @@ -6383,6 +6388,7 @@ bnx2_reset_task(struct work_struct *work) { struct bnx2 *bp = container_of(work, struct bnx2, reset_task); int rc; + u16 pcicmd; rtnl_lock(); if (!netif_running(bp->dev)) { @@ -6392,6 +6398,12 @@ bnx2_reset_task(struct work_struct *work) bnx2_netif_stop(bp, true); + pci_read_config_word(bp->pdev, PCI_COMMAND, &pcicmd); + if (!(pcicmd & PCI_COMMAND_MEMORY)) { + /* in case PCI block has reset */ + pci_restore_state(bp->pdev); + pci_save_state(bp->pdev); + } rc = bnx2_init_nic(bp, 1); if (rc) { netdev_err(bp->dev, "failed to reset NIC, closing\n"); @@ -6406,6 +6418,75 @@ bnx2_reset_task(struct work_struct *work) rtnl_unlock(); } +#define BNX2_FTQ_ENTRY(ftq) { __stringify(ftq##FTQ_CTL), BNX2_##ftq##FTQ_CTL } + +static void +bnx2_dump_ftq(struct bnx2 *bp) +{ + int i; + u32 reg, bdidx, cid, valid; + struct net_device *dev = 
bp->dev; + static const struct ftq_reg { + char *name; + u32 off; + } ftq_arr[] = { + BNX2_FTQ_ENTRY(RV2P_P), + BNX2_FTQ_ENTRY(RV2P_T), + BNX2_FTQ_ENTRY(RV2P_M), + BNX2_FTQ_ENTRY(TBDR_), + BNX2_FTQ_ENTRY(TDMA_), + BNX2_FTQ_ENTRY(TXP_), + BNX2_FTQ_ENTRY(TXP_), + BNX2_FTQ_ENTRY(TPAT_), + BNX2_FTQ_ENTRY(RXP_C), + BNX2_FTQ_ENTRY(RXP_), + BNX2_FTQ_ENTRY(COM_COMXQ_), + BNX2_FTQ_ENTRY(COM_COMTQ_), + BNX2_FTQ_ENTRY(COM_COMQ_), + BNX2_FTQ_ENTRY(CP_CPQ_), + }; + + netdev_err(dev, "<--- start FTQ dump --->\n"); + for (i = 0; i < ARRAY_SIZE(ftq_arr); i++) + netdev_err(dev, "%s %08x\n", ftq_arr[i].name, + bnx2_reg_rd_ind(bp, ftq_arr[i].off)); + + netdev_err(dev, "CPU states:\n"); + for (reg = BNX2_TXP_CPU_MODE; reg <= BNX2_CP_CPU_MODE; reg += 0x40000) + netdev_err(dev, "%06x mode %x state %x evt_mask %x pc %x pc %x instr %x\n", + reg, bnx2_reg_rd_ind(bp, reg), + bnx2_reg_rd_ind(bp, reg + 4), + bnx2_reg_rd_ind(bp, reg + 8), + bnx2_reg_rd_ind(bp, reg + 0x1c), + bnx2_reg_rd_ind(bp, reg + 0x1c), + bnx2_reg_rd_ind(bp, reg + 0x20)); + + netdev_err(dev, "<--- end FTQ dump --->\n"); + netdev_err(dev, "<--- start TBDC dump --->\n"); + netdev_err(dev, "TBDC free cnt: %ld\n", + REG_RD(bp, BNX2_TBDC_STATUS) & BNX2_TBDC_STATUS_FREE_CNT); + netdev_err(dev, "LINE CID BIDX CMD VALIDS\n"); + for (i = 0; i < 0x20; i++) { + int j = 0; + + REG_WR(bp, BNX2_TBDC_BD_ADDR, i); + REG_WR(bp, BNX2_TBDC_CAM_OPCODE, + BNX2_TBDC_CAM_OPCODE_OPCODE_CAM_READ); + REG_WR(bp, BNX2_TBDC_COMMAND, BNX2_TBDC_COMMAND_CMD_REG_ARB); + while ((REG_RD(bp, BNX2_TBDC_COMMAND) & + BNX2_TBDC_COMMAND_CMD_REG_ARB) && j < 100) + j++; + + cid = REG_RD(bp, BNX2_TBDC_CID); + bdidx = REG_RD(bp, BNX2_TBDC_BIDX); + valid = REG_RD(bp, BNX2_TBDC_CAM_OPCODE); + netdev_err(dev, "%02x %06x %04lx %02x [%x]\n", + i, cid, bdidx & BNX2_TBDC_BDIDX_BDIDX, + bdidx >> 24, (valid >> 8) & 0x0ff); + } + netdev_err(dev, "<--- end TBDC dump --->\n"); +} + static void bnx2_dump_state(struct bnx2 *bp) { @@ -6435,6 +6516,7 @@ bnx2_tx_timeout(struct net_device *dev) { struct bnx2 *bp = netdev_priv(dev); + bnx2_dump_ftq(bp); bnx2_dump_state(bp); bnx2_dump_mcp_state(bp); @@ -6628,6 +6710,7 @@ bnx2_close(struct net_device *dev) bnx2_disable_int_sync(bp); bnx2_napi_disable(bp); + netif_tx_disable(dev); del_timer_sync(&bp->timer); bnx2_shutdown_chip(bp); bnx2_free_irq(bp); @@ -7832,7 +7915,7 @@ bnx2_get_5709_media(struct bnx2 *bp) else strap = (val & BNX2_MISC_DUAL_MEDIA_CTRL_PHY_CTRL_STRAP) >> 8; - if (PCI_FUNC(bp->pdev->devfn) == 0) { + if (bp->func == 0) { switch (strap) { case 0x4: case 0x5: @@ -8131,9 +8214,12 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev) reg = bnx2_reg_rd_ind(bp, BNX2_SHM_HDR_SIGNATURE); + if (bnx2_reg_rd_ind(bp, BNX2_MCP_TOE_ID) & BNX2_MCP_TOE_ID_FUNCTION_ID) + bp->func = 1; + if ((reg & BNX2_SHM_HDR_SIGNATURE_SIG_MASK) == BNX2_SHM_HDR_SIGNATURE_SIG) { - u32 off = PCI_FUNC(pdev->devfn) << 2; + u32 off = bp->func << 2; bp->shmem_base = bnx2_reg_rd_ind(bp, BNX2_SHM_HDR_ADDR_0 + off); } else diff --git a/drivers/net/ethernet/broadcom/bnx2.h b/drivers/net/ethernet/broadcom/bnx2.h index dc06bda73be7..af6451dec295 100644 --- a/drivers/net/ethernet/broadcom/bnx2.h +++ b/drivers/net/ethernet/broadcom/bnx2.h @@ -4642,6 +4642,47 @@ struct l2_fhdr { #define BNX2_TBDR_FTQ_CTL_CUR_DEPTH (0x3ffL<<22) +/* + * tbdc definition + * offset: 0x5400 + */ +#define BNX2_TBDC_COMMAND 0x5400 +#define BNX2_TBDC_COMMAND_CMD_ENABLED (1UL<<0) +#define BNX2_TBDC_COMMAND_CMD_FLUSH (1UL<<1) +#define BNX2_TBDC_COMMAND_CMD_SOFT_RST (1UL<<2) +#define 
BNX2_TBDC_COMMAND_CMD_REG_ARB (1UL<<3) +#define BNX2_TBDC_COMMAND_WRCHK_RANGE_ERROR (1UL<<4) +#define BNX2_TBDC_COMMAND_WRCHK_ALL_ONES_ERROR (1UL<<5) +#define BNX2_TBDC_COMMAND_WRCHK_ALL_ZEROS_ERROR (1UL<<6) +#define BNX2_TBDC_COMMAND_WRCHK_ANY_ONES_ERROR (1UL<<7) +#define BNX2_TBDC_COMMAND_WRCHK_ANY_ZEROS_ERROR (1UL<<8) + +#define BNX2_TBDC_STATUS 0x5404 +#define BNX2_TBDC_STATUS_FREE_CNT (0x3fUL<<0) + +#define BNX2_TBDC_BD_ADDR 0x5424 + +#define BNX2_TBDC_BIDX 0x542c +#define BNX2_TBDC_BDIDX_BDIDX (0xffffUL<<0) +#define BNX2_TBDC_BDIDX_CMD (0xffUL<<24) + +#define BNX2_TBDC_CID 0x5430 + +#define BNX2_TBDC_CAM_OPCODE 0x5434 +#define BNX2_TBDC_CAM_OPCODE_OPCODE (0x7UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_SEARCH (0UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_CACHE_WRITE (1UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_INVALIDATE (2UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_CAM_WRITE (4UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_CAM_READ (5UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_RAM_WRITE (6UL<<0) +#define BNX2_TBDC_CAM_OPCODE_OPCODE_RAM_READ (7UL<<0) +#define BNX2_TBDC_CAM_OPCODE_SMASK_BDIDX (1UL<<4) +#define BNX2_TBDC_CAM_OPCODE_SMASK_CID (1UL<<5) +#define BNX2_TBDC_CAM_OPCODE_SMASK_CMD (1UL<<6) +#define BNX2_TBDC_CAM_OPCODE_WMT_FAILED (1UL<<7) +#define BNX2_TBDC_CAM_OPCODE_CAM_VALIDS (0xffUL<<8) + /* * tdma_reg definition @@ -6930,6 +6971,8 @@ struct bnx2 { struct bnx2_irq irq_tbl[BNX2_MAX_MSIX_VEC]; int irq_nvecs; + u8 func; + u8 num_tx_rings; u8 num_rx_rings; @@ -7314,6 +7357,8 @@ struct bnx2_rv2p_fw_file { #define BNX2_BC_STATE_RESET_TYPE_VALUE(msg) (BNX2_BC_STATE_RESET_TYPE_SIG | \ (msg)) +#define BNX2_BC_RESET_TYPE 0x000001c0 + #define BNX2_BC_STATE 0x000001c4 #define BNX2_BC_STATE_ERR_MASK 0x0000ff00 #define BNX2_BC_STATE_SIGN 0x42530000 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h index 7de824184979..77bcd4cb4ffb 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h @@ -23,8 +23,8 @@ * (you will need to reboot afterwards) */ /* #define BNX2X_STOP_ON_ERROR */ -#define DRV_MODULE_VERSION "1.72.50-0" -#define DRV_MODULE_RELDATE "2012/04/23" +#define DRV_MODULE_VERSION "1.72.51-0" +#define DRV_MODULE_RELDATE "2012/06/18" #define BNX2X_BC_VER 0x040200 #if defined(CONFIG_DCB) @@ -51,6 +51,7 @@ #include "bnx2x_reg.h" #include "bnx2x_fw_defs.h" +#include "bnx2x_mfw_req.h" #include "bnx2x_hsi.h" #include "bnx2x_link.h" #include "bnx2x_sp.h" @@ -248,13 +249,12 @@ enum { BNX2X_MAX_CNIC_ETH_CL_ID_IDX, }; -#define BNX2X_CNIC_START_ETH_CID 48 -enum { +#define BNX2X_CNIC_START_ETH_CID(bp) (BNX2X_NUM_NON_CNIC_QUEUES(bp) *\ + (bp)->max_cos) /* iSCSI L2 */ - BNX2X_ISCSI_ETH_CID = BNX2X_CNIC_START_ETH_CID, +#define BNX2X_ISCSI_ETH_CID(bp) (BNX2X_CNIC_START_ETH_CID(bp)) /* FCoE L2 */ - BNX2X_FCOE_ETH_CID, -}; +#define BNX2X_FCOE_ETH_CID(bp) (BNX2X_CNIC_START_ETH_CID(bp) + 1) /** Additional rings budgeting */ #ifdef BCM_CNIC @@ -276,29 +276,30 @@ enum { #define FIRST_TX_ONLY_COS_INDEX 1 #define FIRST_TX_COS_INDEX 0 -/* defines for decodeing the fastpath index and the cos index out of the - * transmission queue index - */ -#define MAX_TXQS_PER_COS FP_SB_MAX_E1x - -#define TXQ_TO_FP(txq_index) ((txq_index) % MAX_TXQS_PER_COS) -#define TXQ_TO_COS(txq_index) ((txq_index) / MAX_TXQS_PER_COS) - /* rules for calculating the cids of tx-only connections */ -#define CID_TO_FP(cid) ((cid) % MAX_TXQS_PER_COS) -#define CID_COS_TO_TX_ONLY_CID(cid, cos) (cid + cos * MAX_TXQS_PER_COS) +#define 
CID_TO_FP(cid, bp) ((cid) % BNX2X_NUM_NON_CNIC_QUEUES(bp)) +#define CID_COS_TO_TX_ONLY_CID(cid, cos, bp) \ + (cid + cos * BNX2X_NUM_NON_CNIC_QUEUES(bp)) /* fp index inside class of service range */ -#define FP_COS_TO_TXQ(fp, cos) ((fp)->index + cos * MAX_TXQS_PER_COS) - -/* - * 0..15 eth cos0 - * 16..31 eth cos1 if applicable - * 32..47 eth cos2 If applicable - * fcoe queue follows eth queues (16, 32, 48 depending on cos) +#define FP_COS_TO_TXQ(fp, cos, bp) \ + ((fp)->index + cos * BNX2X_NUM_NON_CNIC_QUEUES(bp)) + +/* Indexes for transmission queues array: + * txdata for RSS i CoS j is at location i + (j * num of RSS) + * txdata for FCoE (if exist) is at location max cos * num of RSS + * txdata for FWD (if exist) is one location after FCoE + * txdata for OOO (if exist) is one location after FWD */ -#define MAX_ETH_TXQ_IDX(bp) (MAX_TXQS_PER_COS * (bp)->max_cos) -#define FCOE_TXQ_IDX(bp) (MAX_ETH_TXQ_IDX(bp)) +enum { + FCOE_TXQ_IDX_OFFSET, + FWD_TXQ_IDX_OFFSET, + OOO_TXQ_IDX_OFFSET, +}; +#define MAX_ETH_TXQ_IDX(bp) (BNX2X_NUM_NON_CNIC_QUEUES(bp) * (bp)->max_cos) +#ifdef BCM_CNIC +#define FCOE_TXQ_IDX(bp) (MAX_ETH_TXQ_IDX(bp) + FCOE_TXQ_IDX_OFFSET) +#endif /* fast path */ /* @@ -453,6 +454,7 @@ struct bnx2x_agg_info { u16 vlan_tag; u16 len_on_bd; u32 rxhash; + bool l4_rxhash; u16 gro_size; u16 full_page; }; @@ -481,6 +483,8 @@ struct bnx2x_fp_txdata { __le16 *tx_cons_sb; int txq_index; + struct bnx2x_fastpath *parent_fp; + int tx_ring_size; }; enum bnx2x_tpa_mode_t { @@ -507,7 +511,7 @@ struct bnx2x_fastpath { enum bnx2x_tpa_mode_t mode; u8 max_cos; /* actual number of active tx coses */ - struct bnx2x_fp_txdata txdata[BNX2X_MULTI_TX_COS]; + struct bnx2x_fp_txdata *txdata_ptr[BNX2X_MULTI_TX_COS]; struct sw_rx_bd *rx_buf_ring; /* BDs mappings ring */ struct sw_rx_page *rx_page_ring; /* SGE pages mappings ring */ @@ -547,51 +551,45 @@ struct bnx2x_fastpath { rx_calls; /* TPA related */ - struct bnx2x_agg_info tpa_info[ETH_MAX_AGGREGATION_QUEUES_E1H_E2]; + struct bnx2x_agg_info *tpa_info; u8 disable_tpa; #ifdef BNX2X_STOP_ON_ERROR u64 tpa_queue_used; #endif - - struct tstorm_per_queue_stats old_tclient; - struct ustorm_per_queue_stats old_uclient; - struct xstorm_per_queue_stats old_xclient; - struct bnx2x_eth_q_stats eth_q_stats; - struct bnx2x_eth_q_stats_old eth_q_stats_old; - /* The size is calculated using the following: sizeof name field from netdev structure + 4 ('-Xx-' string) + 4 (for the digits and to make it DWORD aligned) */ #define FP_NAME_SIZE (sizeof(((struct net_device *)0)->name) + 8) char name[FP_NAME_SIZE]; - - /* MACs object */ - struct bnx2x_vlan_mac_obj mac_obj; - - /* Queue State object */ - struct bnx2x_queue_sp_obj q_obj; - }; -#define bnx2x_fp(bp, nr, var) (bp->fp[nr].var) +#define bnx2x_fp(bp, nr, var) ((bp)->fp[(nr)].var) +#define bnx2x_sp_obj(bp, fp) ((bp)->sp_objs[(fp)->index]) +#define bnx2x_fp_stats(bp, fp) (&((bp)->fp_stats[(fp)->index])) +#define bnx2x_fp_qstats(bp, fp) (&((bp)->fp_stats[(fp)->index].eth_q_stats)) /* Use 2500 as a mini-jumbo MTU for FCoE */ #define BNX2X_FCOE_MINI_JUMBO_MTU 2500 -/* FCoE L2 `fastpath' entry is right after the eth entries */ -#define FCOE_IDX BNX2X_NUM_ETH_QUEUES(bp) -#define bnx2x_fcoe_fp(bp) (&bp->fp[FCOE_IDX]) -#define bnx2x_fcoe(bp, var) (bnx2x_fcoe_fp(bp)->var) -#define bnx2x_fcoe_tx(bp, var) (bnx2x_fcoe_fp(bp)-> \ - txdata[FIRST_TX_COS_INDEX].var) +#define FCOE_IDX_OFFSET 0 + +#define FCOE_IDX(bp) (BNX2X_NUM_NON_CNIC_QUEUES(bp) + \ + FCOE_IDX_OFFSET) +#define bnx2x_fcoe_fp(bp) (&bp->fp[FCOE_IDX(bp)]) +#define bnx2x_fcoe(bp, 
var) (bnx2x_fcoe_fp(bp)->var) +#define bnx2x_fcoe_inner_sp_obj(bp) (&bp->sp_objs[FCOE_IDX(bp)]) +#define bnx2x_fcoe_sp_obj(bp, var) (bnx2x_fcoe_inner_sp_obj(bp)->var) +#define bnx2x_fcoe_tx(bp, var) (bnx2x_fcoe_fp(bp)-> \ + txdata_ptr[FIRST_TX_COS_INDEX] \ + ->var) #define IS_ETH_FP(fp) (fp->index < \ BNX2X_NUM_ETH_QUEUES(fp->bp)) #ifdef BCM_CNIC -#define IS_FCOE_FP(fp) (fp->index == FCOE_IDX) -#define IS_FCOE_IDX(idx) ((idx) == FCOE_IDX) +#define IS_FCOE_FP(fp) (fp->index == FCOE_IDX(fp->bp)) +#define IS_FCOE_IDX(idx) ((idx) == FCOE_IDX(bp)) #else #define IS_FCOE_FP(fp) false #define IS_FCOE_IDX(idx) false @@ -616,6 +614,22 @@ struct bnx2x_fastpath { #define TX_BD(x) ((x) & MAX_TX_BD) #define TX_BD_POFF(x) ((x) & MAX_TX_DESC_CNT) +/* number of NEXT_PAGE descriptors may be required during placement */ +#define NEXT_CNT_PER_TX_PKT(bds) \ + (((bds) + MAX_TX_DESC_CNT - 1) / \ + MAX_TX_DESC_CNT * NEXT_PAGE_TX_DESC_CNT) +/* max BDs per tx packet w/o next_pages: + * START_BD - describes packed + * START_BD(splitted) - includes unpaged data segment for GSO + * PARSING_BD - for TSO and CSUM data + * Frag BDs - decribes pages for frags + */ +#define BDS_PER_TX_PKT 3 +#define MAX_BDS_PER_TX_PKT (MAX_SKB_FRAGS + BDS_PER_TX_PKT) +/* max BDs per tx packet including next pages */ +#define MAX_DESC_PER_TX_PKT (MAX_BDS_PER_TX_PKT + \ + NEXT_CNT_PER_TX_PKT(MAX_BDS_PER_TX_PKT)) + /* The RX BD ring is special, each bd is 8 bytes but the last one is 16 */ #define NUM_RX_RINGS 8 #define RX_DESC_CNT (BCM_PAGE_SIZE / sizeof(struct eth_rx_bd)) @@ -805,8 +819,11 @@ struct bnx2x_common { #define CHIP_NUM_57810_MF 0x16ae #define CHIP_NUM_57811 0x163d #define CHIP_NUM_57811_MF 0x163e -#define CHIP_NUM_57840 0x168d -#define CHIP_NUM_57840_MF 0x16ab +#define CHIP_NUM_57840_OBSOLETE 0x168d +#define CHIP_NUM_57840_MF_OBSOLETE 0x16ab +#define CHIP_NUM_57840_4_10 0x16a1 +#define CHIP_NUM_57840_2_20 0x16a2 +#define CHIP_NUM_57840_MF 0x16a4 #define CHIP_IS_E1(bp) (CHIP_NUM(bp) == CHIP_NUM_57710) #define CHIP_IS_57711(bp) (CHIP_NUM(bp) == CHIP_NUM_57711) #define CHIP_IS_57711E(bp) (CHIP_NUM(bp) == CHIP_NUM_57711E) @@ -818,8 +835,12 @@ struct bnx2x_common { #define CHIP_IS_57810_MF(bp) (CHIP_NUM(bp) == CHIP_NUM_57810_MF) #define CHIP_IS_57811(bp) (CHIP_NUM(bp) == CHIP_NUM_57811) #define CHIP_IS_57811_MF(bp) (CHIP_NUM(bp) == CHIP_NUM_57811_MF) -#define CHIP_IS_57840(bp) (CHIP_NUM(bp) == CHIP_NUM_57840) -#define CHIP_IS_57840_MF(bp) (CHIP_NUM(bp) == CHIP_NUM_57840_MF) +#define CHIP_IS_57840(bp) \ + ((CHIP_NUM(bp) == CHIP_NUM_57840_4_10) || \ + (CHIP_NUM(bp) == CHIP_NUM_57840_2_20) || \ + (CHIP_NUM(bp) == CHIP_NUM_57840_OBSOLETE)) +#define CHIP_IS_57840_MF(bp) ((CHIP_NUM(bp) == CHIP_NUM_57840_MF) || \ + (CHIP_NUM(bp) == CHIP_NUM_57840_MF_OBSOLETE)) #define CHIP_IS_E1H(bp) (CHIP_IS_57711(bp) || \ CHIP_IS_57711E(bp)) #define CHIP_IS_E2(bp) (CHIP_IS_57712(bp) || \ @@ -978,8 +999,8 @@ union cdu_context { }; /* CDU host DB constants */ -#define CDU_ILT_PAGE_SZ_HW 3 -#define CDU_ILT_PAGE_SZ (8192 << CDU_ILT_PAGE_SZ_HW) /* 64K */ +#define CDU_ILT_PAGE_SZ_HW 2 +#define CDU_ILT_PAGE_SZ (8192 << CDU_ILT_PAGE_SZ_HW) /* 32K */ #define ILT_PAGE_CIDS (CDU_ILT_PAGE_SZ / sizeof(union cdu_context)) #ifdef BCM_CNIC @@ -1182,11 +1203,31 @@ struct bnx2x_prev_path_list { struct list_head list; }; +struct bnx2x_sp_objs { + /* MACs object */ + struct bnx2x_vlan_mac_obj mac_obj; + + /* Queue State object */ + struct bnx2x_queue_sp_obj q_obj; +}; + +struct bnx2x_fp_stats { + struct tstorm_per_queue_stats old_tclient; + struct ustorm_per_queue_stats 
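+ /* these per-queue stats were moved out of struct bnx2x_fastpath and
+ * are now reached through bp->fp_stats[fp->index], see the
+ * bnx2x_fp_stats() / bnx2x_fp_qstats() helpers above. */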
old_uclient; + struct xstorm_per_queue_stats old_xclient; + struct bnx2x_eth_q_stats eth_q_stats; + struct bnx2x_eth_q_stats_old eth_q_stats_old; +}; + struct bnx2x { /* Fields used in the tx and intr/napi performance paths * are grouped together in the beginning of the structure */ struct bnx2x_fastpath *fp; + struct bnx2x_sp_objs *sp_objs; + struct bnx2x_fp_stats *fp_stats; + struct bnx2x_fp_txdata *bnx2x_txq; + int bnx2x_txq_size; void __iomem *regview; void __iomem *doorbells; u16 db_size; @@ -1301,7 +1342,9 @@ struct bnx2x { #define NO_ISCSI_FLAG (1 << 14) #define NO_FCOE_FLAG (1 << 15) #define BC_SUPPORTS_PFC_STATS (1 << 17) +#define BC_SUPPORTS_FCOE_FEATURES (1 << 19) #define USING_SINGLE_MSIX_FLAG (1 << 20) +#define BC_SUPPORTS_DCBX_MSG_NON_PMF (1 << 21) #define NO_ISCSI(bp) ((bp)->flags & NO_ISCSI_FLAG) #define NO_ISCSI_OOO(bp) ((bp)->flags & NO_ISCSI_OOO_FLAG) @@ -1377,6 +1420,7 @@ struct bnx2x { #define BNX2X_MAX_COS 3 #define BNX2X_MAX_TX_COS 2 int num_queues; + int num_napi_queues; int disable_tpa; u32 rx_mode; @@ -1389,6 +1433,7 @@ struct bnx2x { u8 igu_dsb_id; u8 igu_base_sb; u8 igu_sb_cnt; + dma_addr_t def_status_blk_mapping; struct bnx2x_slowpath *slowpath; @@ -1420,7 +1465,11 @@ struct bnx2x { dma_addr_t fw_stats_data_mapping; int fw_stats_data_sz; - struct hw_context context; + /* For max 196 cids (64*3 + non-eth), 32KB ILT page size and 1KB + * context size we need 8 ILT entries. + */ +#define ILT_MAX_L2_LINES 8 + struct hw_context context[ILT_MAX_L2_LINES]; struct bnx2x_ilt *ilt; #define BP_ILT(bp) ((bp)->ilt) @@ -1433,13 +1482,14 @@ struct bnx2x { /* * Maximum CID count that might be required by the bnx2x: - * Max Tss * Max_Tx_Multi_Cos + CNIC L2 Clients (FCoE and iSCSI related) + * Max RSS * Max_Tx_Multi_Cos + FCoE + iSCSI */ -#define BNX2X_L2_CID_COUNT(bp) (MAX_TXQS_PER_COS * BNX2X_MULTI_TX_COS +\ - NON_ETH_CONTEXT_USE + CNIC_PRESENT) +#define BNX2X_L2_CID_COUNT(bp) (BNX2X_NUM_ETH_QUEUES(bp) * BNX2X_MULTI_TX_COS \ + + NON_ETH_CONTEXT_USE + CNIC_PRESENT) +#define BNX2X_L2_MAX_CID(bp) (BNX2X_MAX_RSS_COUNT(bp) * BNX2X_MULTI_TX_COS \ + + NON_ETH_CONTEXT_USE + CNIC_PRESENT) #define L2_ILT_LINES(bp) (DIV_ROUND_UP(BNX2X_L2_CID_COUNT(bp),\ ILT_PAGE_CIDS)) -#define BNX2X_DB_SIZE(bp) (BNX2X_L2_CID_COUNT(bp) * (1 << BNX2X_DB_SHIFT)) int qm_cid_count; @@ -1598,6 +1648,8 @@ struct bnx2x { extern int num_queues; #define BNX2X_NUM_QUEUES(bp) (bp->num_queues) #define BNX2X_NUM_ETH_QUEUES(bp) (BNX2X_NUM_QUEUES(bp) - NON_ETH_CONTEXT_USE) +#define BNX2X_NUM_NON_CNIC_QUEUES(bp) (BNX2X_NUM_QUEUES(bp) - \ + NON_ETH_CONTEXT_USE) #define BNX2X_NUM_RX_QUEUES(bp) BNX2X_NUM_QUEUES(bp) #define is_multi(bp) (BNX2X_NUM_QUEUES(bp) > 1) @@ -1656,6 +1708,9 @@ struct bnx2x_func_init_params { continue; \ else +#define for_each_napi_rx_queue(bp, var) \ + for ((var) = 0; (var) < bp->num_napi_queues; (var)++) + /* Skip OOO FP */ #define for_each_tx_queue(bp, var) \ for ((var) = 0; (var) < BNX2X_NUM_QUEUES(bp); (var)++) \ @@ -1709,15 +1764,6 @@ int bnx2x_set_mac_one(struct bnx2x *bp, u8 *mac, struct bnx2x_vlan_mac_obj *obj, bool set, int mac_type, unsigned long *ramrod_flags); /** - * Deletes all MACs configured for the specific MAC object. 
- * - * @param bp Function driver instance - * @param mac_obj MAC object to cleanup - * - * @return zero if all MACs were cleaned - */ - -/** * bnx2x_del_all_macs - delete all MACs configured for the specific MAC object * * @bp: driver handle @@ -1817,6 +1863,7 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms, #define LOAD_NORMAL 0 #define LOAD_OPEN 1 #define LOAD_DIAG 2 +#define LOAD_LOOPBACK_EXT 3 #define UNLOAD_NORMAL 0 #define UNLOAD_CLOSE 1 #define UNLOAD_RECOVERY 2 @@ -1899,13 +1946,17 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms, #define PCICFG_LINK_SPEED 0xf0000 #define PCICFG_LINK_SPEED_SHIFT 16 - -#define BNX2X_NUM_TESTS 7 +#define BNX2X_NUM_TESTS_SF 7 +#define BNX2X_NUM_TESTS_MF 3 +#define BNX2X_NUM_TESTS(bp) (IS_MF(bp) ? BNX2X_NUM_TESTS_MF : \ + BNX2X_NUM_TESTS_SF) #define BNX2X_PHY_LOOPBACK 0 #define BNX2X_MAC_LOOPBACK 1 +#define BNX2X_EXT_LOOPBACK 2 #define BNX2X_PHY_LOOPBACK_FAILED 1 #define BNX2X_MAC_LOOPBACK_FAILED 2 +#define BNX2X_EXT_LOOPBACK_FAILED 3 #define BNX2X_LOOPBACK_FAILED (BNX2X_MAC_LOOPBACK_FAILED | \ BNX2X_PHY_LOOPBACK_FAILED) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 8098eea9704d..e879e19eb0d6 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -40,12 +40,19 @@ * Makes sure the contents of the bp->fp[to].napi is kept * intact. This is done by first copying the napi struct from * the target to the source, and then mem copying the entire - * source onto the target + * source onto the target. Update txdata pointers and related + * content. */ static inline void bnx2x_move_fp(struct bnx2x *bp, int from, int to) { struct bnx2x_fastpath *from_fp = &bp->fp[from]; struct bnx2x_fastpath *to_fp = &bp->fp[to]; + struct bnx2x_sp_objs *from_sp_objs = &bp->sp_objs[from]; + struct bnx2x_sp_objs *to_sp_objs = &bp->sp_objs[to]; + struct bnx2x_fp_stats *from_fp_stats = &bp->fp_stats[from]; + struct bnx2x_fp_stats *to_fp_stats = &bp->fp_stats[to]; + int old_max_eth_txqs, new_max_eth_txqs; + int old_txdata_index = 0, new_txdata_index = 0; /* Copy the NAPI object as it has been already initialized */ from_fp->napi = to_fp->napi; @@ -53,6 +60,30 @@ static inline void bnx2x_move_fp(struct bnx2x *bp, int from, int to) /* Move bnx2x_fastpath contents */ memcpy(to_fp, from_fp, sizeof(*to_fp)); to_fp->index = to; + + /* move sp_objs contents as well, as their indices match fp ones */ + memcpy(to_sp_objs, from_sp_objs, sizeof(*to_sp_objs)); + + /* move fp_stats contents as well, as their indices match fp ones */ + memcpy(to_fp_stats, from_fp_stats, sizeof(*to_fp_stats)); + + /* Update txdata pointers in fp and move txdata content accordingly: + * Each fp consumes 'max_cos' txdata structures, so the index should be + * decremented by max_cos x delta. 
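+ * For example, assuming 8 ETH queues and max_cos = 3: moving the FCoE
+ * fp from index 8 to index 6 moves its txdata slot from
+ * 8 * 3 + 0 = 24 to 6 * 3 + 0 = 18 in bp->bnx2x_txq[].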
+ */ + + old_max_eth_txqs = BNX2X_NUM_ETH_QUEUES(bp) * (bp)->max_cos; + new_max_eth_txqs = (BNX2X_NUM_ETH_QUEUES(bp) - from + to) * + (bp)->max_cos; + if (from == FCOE_IDX(bp)) { + old_txdata_index = old_max_eth_txqs + FCOE_TXQ_IDX_OFFSET; + new_txdata_index = new_max_eth_txqs + FCOE_TXQ_IDX_OFFSET; + } + + memcpy(&bp->bnx2x_txq[old_txdata_index], + &bp->bnx2x_txq[new_txdata_index], + sizeof(struct bnx2x_fp_txdata)); + to_fp->txdata_ptr[0] = &bp->bnx2x_txq[new_txdata_index]; } int load_count[2][3] = { {0} }; /* per-path: 0-common, 1-port0, 2-port1 */ @@ -190,7 +221,7 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata) if ((netif_tx_queue_stopped(txq)) && (bp->state == BNX2X_STATE_OPEN) && - (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4)) + (bnx2x_tx_avail(bp, txdata) >= MAX_DESC_PER_TX_PKT)) netif_tx_wake_queue(txq); __netif_tx_unlock(txq); @@ -264,12 +295,20 @@ static inline void bnx2x_update_sge_prod(struct bnx2x_fastpath *fp, * CQE (calculated by HW). */ static u32 bnx2x_get_rxhash(const struct bnx2x *bp, - const struct eth_fast_path_rx_cqe *cqe) + const struct eth_fast_path_rx_cqe *cqe, + bool *l4_rxhash) { /* Set Toeplitz hash from CQE */ if ((bp->dev->features & NETIF_F_RXHASH) && - (cqe->status_flags & ETH_FAST_PATH_RX_CQE_RSS_HASH_FLG)) + (cqe->status_flags & ETH_FAST_PATH_RX_CQE_RSS_HASH_FLG)) { + enum eth_rss_hash_type htype; + + htype = cqe->status_flags & ETH_FAST_PATH_RX_CQE_RSS_HASH_TYPE; + *l4_rxhash = (htype == TCP_IPV4_HASH_TYPE) || + (htype == TCP_IPV6_HASH_TYPE); return le32_to_cpu(cqe->rss_hash_result); + } + *l4_rxhash = false; return 0; } @@ -323,7 +362,7 @@ static void bnx2x_tpa_start(struct bnx2x_fastpath *fp, u16 queue, tpa_info->tpa_state = BNX2X_TPA_START; tpa_info->len_on_bd = le16_to_cpu(cqe->len_on_bd); tpa_info->placement_offset = cqe->placement_offset; - tpa_info->rxhash = bnx2x_get_rxhash(bp, cqe); + tpa_info->rxhash = bnx2x_get_rxhash(bp, cqe, &tpa_info->l4_rxhash); if (fp->mode == TPA_MODE_GRO) { u16 gro_size = le16_to_cpu(cqe->pkt_len_or_gro_seg_len); tpa_info->full_page = @@ -479,7 +518,7 @@ static int bnx2x_fill_frag_skb(struct bnx2x *bp, struct bnx2x_fastpath *fp, where we are and drop the whole packet */ err = bnx2x_alloc_rx_sge(bp, fp, sge_idx); if (unlikely(err)) { - fp->eth_q_stats.rx_skb_alloc_failed++; + bnx2x_fp_qstats(bp, fp)->rx_skb_alloc_failed++; return err; } @@ -558,6 +597,7 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp, skb_reserve(skb, pad + NET_SKB_PAD); skb_put(skb, len); skb->rxhash = tpa_info->rxhash; + skb->l4_rxhash = tpa_info->l4_rxhash; skb->protocol = eth_type_trans(skb, bp->dev); skb->ip_summed = CHECKSUM_UNNECESSARY; @@ -584,7 +624,7 @@ drop: /* drop the packet and keep the buffer in the bin */ DP(NETIF_MSG_RX_STATUS, "Failed to allocate or map a new skb - dropping packet!\n"); - fp->eth_q_stats.rx_skb_alloc_failed++; + bnx2x_fp_stats(bp, fp)->eth_q_stats.rx_skb_alloc_failed++; } static int bnx2x_alloc_rx_data(struct bnx2x *bp, @@ -617,8 +657,10 @@ static int bnx2x_alloc_rx_data(struct bnx2x *bp, return 0; } -static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe, - struct bnx2x_fastpath *fp) +static +void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe, + struct bnx2x_fastpath *fp, + struct bnx2x_eth_q_stats *qstats) { /* Do nothing if no IP/L4 csum validation was done */ @@ -632,7 +674,7 @@ static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe, if (cqe->fast_path_cqe.type_error_flags & 
(ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG | ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG)) - fp->eth_q_stats.hw_csum_err++; + qstats->hw_csum_err++; else skb->ip_summed = CHECKSUM_UNNECESSARY; } @@ -679,6 +721,7 @@ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget) enum eth_rx_cqe_type cqe_fp_type; u16 len, pad, queue; u8 *data; + bool l4_rxhash; #ifdef BNX2X_STOP_ON_ERROR if (unlikely(bp->panic)) @@ -776,7 +819,7 @@ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget) DP(NETIF_MSG_RX_ERR | NETIF_MSG_RX_STATUS, "ERROR flags %x rx packet %u\n", cqe_fp_flags, sw_comp_cons); - fp->eth_q_stats.rx_err_discard_pkt++; + bnx2x_fp_qstats(bp, fp)->rx_err_discard_pkt++; goto reuse_rx; } @@ -789,7 +832,7 @@ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget) if (skb == NULL) { DP(NETIF_MSG_RX_ERR | NETIF_MSG_RX_STATUS, "ERROR packet dropped because of alloc failure\n"); - fp->eth_q_stats.rx_skb_alloc_failed++; + bnx2x_fp_qstats(bp, fp)->rx_skb_alloc_failed++; goto reuse_rx; } memcpy(skb->data, data + pad, len); @@ -803,14 +846,15 @@ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget) skb = build_skb(data, 0); if (unlikely(!skb)) { kfree(data); - fp->eth_q_stats.rx_skb_alloc_failed++; + bnx2x_fp_qstats(bp, fp)-> + rx_skb_alloc_failed++; goto next_rx; } skb_reserve(skb, pad); } else { DP(NETIF_MSG_RX_ERR | NETIF_MSG_RX_STATUS, "ERROR packet dropped because of alloc failure\n"); - fp->eth_q_stats.rx_skb_alloc_failed++; + bnx2x_fp_qstats(bp, fp)->rx_skb_alloc_failed++; reuse_rx: bnx2x_reuse_rx_data(fp, bd_cons, bd_prod); goto next_rx; @@ -821,13 +865,14 @@ reuse_rx: skb->protocol = eth_type_trans(skb, bp->dev); /* Set Toeplitz hash for a none-LRO skb */ - skb->rxhash = bnx2x_get_rxhash(bp, cqe_fp); + skb->rxhash = bnx2x_get_rxhash(bp, cqe_fp, &l4_rxhash); + skb->l4_rxhash = l4_rxhash; skb_checksum_none_assert(skb); if (bp->dev->features & NETIF_F_RXCSUM) - bnx2x_csum_validate(skb, cqe, fp); - + bnx2x_csum_validate(skb, cqe, fp, + bnx2x_fp_qstats(bp, fp)); skb_record_rx_queue(skb, fp->rx_queue); @@ -888,7 +933,7 @@ static irqreturn_t bnx2x_msix_fp_int(int irq, void *fp_cookie) prefetch(fp->rx_cons_sb); for_each_cos_in_tx_queue(fp, cos) - prefetch(fp->txdata[cos].tx_cons_sb); + prefetch(fp->txdata_ptr[cos]->tx_cons_sb); prefetch(&fp->sb_running_index[SM_RX_ID]); napi_schedule(&bnx2x_fp(bp, fp->index, napi)); @@ -1205,7 +1250,7 @@ static void bnx2x_free_tx_skbs(struct bnx2x *bp) for_each_tx_queue(bp, i) { struct bnx2x_fastpath *fp = &bp->fp[i]; for_each_cos_in_tx_queue(fp, cos) { - struct bnx2x_fp_txdata *txdata = &fp->txdata[cos]; + struct bnx2x_fp_txdata *txdata = fp->txdata_ptr[cos]; unsigned pkts_compl = 0, bytes_compl = 0; u16 sw_prod = txdata->tx_pkt_prod; @@ -1217,7 +1262,8 @@ static void bnx2x_free_tx_skbs(struct bnx2x *bp) sw_cons++; } netdev_tx_reset_queue( - netdev_get_tx_queue(bp->dev, txdata->txq_index)); + netdev_get_tx_queue(bp->dev, + txdata->txq_index)); } } } @@ -1325,7 +1371,7 @@ void bnx2x_free_irq(struct bnx2x *bp) free_irq(bp->dev->irq, bp->dev); } -int __devinit bnx2x_enable_msix(struct bnx2x *bp) +int bnx2x_enable_msix(struct bnx2x *bp) { int msix_vec = 0, i, rc, req_cnt; @@ -1579,6 +1625,8 @@ void bnx2x_set_num_queues(struct bnx2x *bp) #endif /* Add special queues */ bp->num_queues += NON_ETH_CONTEXT_USE; + + BNX2X_DEV_INFO("set number of queues to %d\n", bp->num_queues); } /** @@ -1607,8 +1655,8 @@ static int bnx2x_set_real_num_queues(struct bnx2x *bp) { int rc, tx, rx; - tx = MAX_TXQS_PER_COS * bp->max_cos; - rx = BNX2X_NUM_ETH_QUEUES(bp); + tx = BNX2X_NUM_ETH_QUEUES(bp) * 
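+ /* expose one netdev tx queue per ETH queue per CoS, matching the
+ * bp->bnx2x_txq[] layout described in bnx2x.h */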
bp->max_cos; + rx = BNX2X_NUM_QUEUES(bp) - NON_ETH_CONTEXT_USE; /* account for fcoe queue */ #ifdef BCM_CNIC @@ -1666,14 +1714,13 @@ static void bnx2x_set_rx_buf_size(struct bnx2x *bp) static int bnx2x_init_rss_pf(struct bnx2x *bp) { int i; - u8 ind_table[T_ETH_INDIRECTION_TABLE_SIZE] = {0}; u8 num_eth_queues = BNX2X_NUM_ETH_QUEUES(bp); /* Prepare the initial contents fo the indirection table if RSS is * enabled */ - for (i = 0; i < sizeof(ind_table); i++) - ind_table[i] = + for (i = 0; i < sizeof(bp->rss_conf_obj.ind_table); i++) + bp->rss_conf_obj.ind_table[i] = bp->fp->cl_id + ethtool_rxfh_indir_default(i, num_eth_queues); @@ -1685,12 +1732,11 @@ static int bnx2x_init_rss_pf(struct bnx2x *bp) * For 57712 and newer on the other hand it's a per-function * configuration. */ - return bnx2x_config_rss_eth(bp, ind_table, - bp->port.pmf || !CHIP_IS_E1x(bp)); + return bnx2x_config_rss_eth(bp, bp->port.pmf || !CHIP_IS_E1x(bp)); } int bnx2x_config_rss_pf(struct bnx2x *bp, struct bnx2x_rss_config_obj *rss_obj, - u8 *ind_table, bool config_hash) + bool config_hash) { struct bnx2x_config_rss_params params = {NULL}; int i; @@ -1713,11 +1759,15 @@ int bnx2x_config_rss_pf(struct bnx2x *bp, struct bnx2x_rss_config_obj *rss_obj, __set_bit(BNX2X_RSS_IPV4_TCP, ¶ms.rss_flags); __set_bit(BNX2X_RSS_IPV6, ¶ms.rss_flags); __set_bit(BNX2X_RSS_IPV6_TCP, ¶ms.rss_flags); + if (rss_obj->udp_rss_v4) + __set_bit(BNX2X_RSS_IPV4_UDP, ¶ms.rss_flags); + if (rss_obj->udp_rss_v6) + __set_bit(BNX2X_RSS_IPV6_UDP, ¶ms.rss_flags); /* Hash bits */ params.rss_result_mask = MULTI_MASK; - memcpy(params.ind_table, ind_table, sizeof(params.ind_table)); + memcpy(params.ind_table, rss_obj->ind_table, sizeof(params.ind_table)); if (config_hash) { /* RSS keys */ @@ -1754,7 +1804,7 @@ static void bnx2x_squeeze_objects(struct bnx2x *bp) int rc; unsigned long ramrod_flags = 0, vlan_mac_flags = 0; struct bnx2x_mcast_ramrod_params rparam = {NULL}; - struct bnx2x_vlan_mac_obj *mac_obj = &bp->fp->mac_obj; + struct bnx2x_vlan_mac_obj *mac_obj = &bp->sp_objs->mac_obj; /***************** Cleanup MACs' object first *************************/ @@ -1765,7 +1815,7 @@ static void bnx2x_squeeze_objects(struct bnx2x *bp) /* Clean ETH primary MAC */ __set_bit(BNX2X_ETH_MAC, &vlan_mac_flags); - rc = mac_obj->delete_all(bp, &bp->fp->mac_obj, &vlan_mac_flags, + rc = mac_obj->delete_all(bp, &bp->sp_objs->mac_obj, &vlan_mac_flags, &ramrod_flags); if (rc != 0) BNX2X_ERR("Failed to clean ETH MACs: %d\n", rc); @@ -1851,11 +1901,16 @@ bool bnx2x_test_firmware_version(struct bnx2x *bp, bool is_err) static void bnx2x_bz_fp(struct bnx2x *bp, int index) { struct bnx2x_fastpath *fp = &bp->fp[index]; + struct bnx2x_fp_stats *fp_stats = &bp->fp_stats[index]; + + int cos; struct napi_struct orig_napi = fp->napi; + struct bnx2x_agg_info *orig_tpa_info = fp->tpa_info; /* bzero bnx2x_fastpath contents */ - if (bp->stats_init) + if (bp->stats_init) { + memset(fp->tpa_info, 0, sizeof(*fp->tpa_info)); memset(fp, 0, sizeof(*fp)); - else { + } else { /* Keep Queue statistics */ struct bnx2x_eth_q_stats *tmp_eth_q_stats; struct bnx2x_eth_q_stats_old *tmp_eth_q_stats_old; @@ -1863,26 +1918,27 @@ static void bnx2x_bz_fp(struct bnx2x *bp, int index) tmp_eth_q_stats = kzalloc(sizeof(struct bnx2x_eth_q_stats), GFP_KERNEL); if (tmp_eth_q_stats) - memcpy(tmp_eth_q_stats, &fp->eth_q_stats, + memcpy(tmp_eth_q_stats, &fp_stats->eth_q_stats, sizeof(struct bnx2x_eth_q_stats)); tmp_eth_q_stats_old = kzalloc(sizeof(struct bnx2x_eth_q_stats_old), GFP_KERNEL); if (tmp_eth_q_stats_old) - 
memcpy(tmp_eth_q_stats_old, &fp->eth_q_stats_old, + memcpy(tmp_eth_q_stats_old, &fp_stats->eth_q_stats_old, sizeof(struct bnx2x_eth_q_stats_old)); + memset(fp->tpa_info, 0, sizeof(*fp->tpa_info)); memset(fp, 0, sizeof(*fp)); if (tmp_eth_q_stats) { - memcpy(&fp->eth_q_stats, tmp_eth_q_stats, - sizeof(struct bnx2x_eth_q_stats)); + memcpy(&fp_stats->eth_q_stats, tmp_eth_q_stats, + sizeof(struct bnx2x_eth_q_stats)); kfree(tmp_eth_q_stats); } if (tmp_eth_q_stats_old) { - memcpy(&fp->eth_q_stats_old, tmp_eth_q_stats_old, + memcpy(&fp_stats->eth_q_stats_old, tmp_eth_q_stats_old, sizeof(struct bnx2x_eth_q_stats_old)); kfree(tmp_eth_q_stats_old); } @@ -1891,7 +1947,7 @@ static void bnx2x_bz_fp(struct bnx2x *bp, int index) /* Restore the NAPI object as it has been already initialized */ fp->napi = orig_napi; - + fp->tpa_info = orig_tpa_info; fp->bp = bp; fp->index = index; if (IS_ETH_FP(fp)) @@ -1900,6 +1956,16 @@ static void bnx2x_bz_fp(struct bnx2x *bp, int index) /* Special queues support only one CoS */ fp->max_cos = 1; + /* Init txdata pointers */ +#ifdef BCM_CNIC + if (IS_FCOE_FP(fp)) + fp->txdata_ptr[0] = &bp->bnx2x_txq[FCOE_TXQ_IDX(bp)]; +#endif + if (IS_ETH_FP(fp)) + for_each_cos_in_tx_queue(fp, cos) + fp->txdata_ptr[cos] = &bp->bnx2x_txq[cos * + BNX2X_NUM_ETH_QUEUES(bp) + index]; + /* * set the tpa flag for each queue. The tpa flag determines the queue * minimal size so it must be set prior to queue memory allocation @@ -1949,11 +2015,13 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode) /* * Zero fastpath structures preserving invariants like napi, which are * allocated only once, fp index, max_cos, bp pointer. - * Also set fp->disable_tpa. + * Also set fp->disable_tpa and txdata_ptr. */ DP(NETIF_MSG_IFUP, "num queues: %d", bp->num_queues); for_each_queue(bp, i) bnx2x_bz_fp(bp, i); + memset(bp->bnx2x_txq, 0, bp->bnx2x_txq_size * + sizeof(struct bnx2x_fp_txdata)); /* Set the receive queues buffer size */ @@ -2176,6 +2244,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode) break; case LOAD_DIAG: + case LOAD_LOOPBACK_EXT: bp->state = BNX2X_STATE_DIAG; break; @@ -2195,6 +2264,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode) /* re-read iscsi info */ bnx2x_get_iscsi_info(bp); bnx2x_setup_cnic_irq_info(bp); + bnx2x_setup_cnic_info(bp); if (bp->state == BNX2X_STATE_OPEN) bnx2x_cnic_notify(bp, CNIC_CTL_START_CMD); #endif @@ -2215,7 +2285,10 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode) return -EBUSY; } - bnx2x_dcbx_init(bp); + /* If PMF - send ADMIN DCBX msg to MFW to initiate DCBX FSM */ + if (bp->port.pmf && (bp->state != BNX2X_STATE_DIAG)) + bnx2x_dcbx_init(bp, false); + return 0; #ifndef BNX2X_STOP_ON_ERROR @@ -2298,6 +2371,7 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode) /* Stop Tx */ bnx2x_tx_disable(bp); + netdev_reset_tc(bp->dev); #ifdef BCM_CNIC bnx2x_cnic_notify(bp, CNIC_CTL_STOP_CMD); @@ -2456,8 +2530,8 @@ int bnx2x_poll(struct napi_struct *napi, int budget) #endif for_each_cos_in_tx_queue(fp, cos) - if (bnx2x_tx_queue_has_work(&fp->txdata[cos])) - bnx2x_tx_int(bp, &fp->txdata[cos]); + if (bnx2x_tx_queue_has_work(fp->txdata_ptr[cos])) + bnx2x_tx_int(bp, fp->txdata_ptr[cos]); if (bnx2x_has_rx_work(fp)) { @@ -2834,7 +2908,6 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) { struct bnx2x *bp = netdev_priv(dev); - struct bnx2x_fastpath *fp; struct netdev_queue *txq; struct bnx2x_fp_txdata *txdata; struct sw_tx_bd *tx_buf; @@ -2844,7 +2917,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) struct 
eth_tx_parse_bd_e2 *pbd_e2 = NULL; u32 pbd_e2_parsing_data = 0; u16 pkt_prod, bd_prod; - int nbd, txq_index, fp_index, txdata_index; + int nbd, txq_index; dma_addr_t mapping; u32 xmit_type = bnx2x_xmit_type(bp, skb); int i; @@ -2863,39 +2936,22 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) BUG_ON(txq_index >= MAX_ETH_TXQ_IDX(bp) + FCOE_PRESENT); - /* decode the fastpath index and the cos index from the txq */ - fp_index = TXQ_TO_FP(txq_index); - txdata_index = TXQ_TO_COS(txq_index); - -#ifdef BCM_CNIC - /* - * Override the above for the FCoE queue: - * - FCoE fp entry is right after the ETH entries. - * - FCoE L2 queue uses bp->txdata[0] only. - */ - if (unlikely(!NO_FCOE(bp) && (txq_index == - bnx2x_fcoe_tx(bp, txq_index)))) { - fp_index = FCOE_IDX; - txdata_index = 0; - } -#endif + txdata = &bp->bnx2x_txq[txq_index]; /* enable this debug print to view the transmission queue being used DP(NETIF_MSG_TX_QUEUED, "indices: txq %d, fp %d, txdata %d\n", txq_index, fp_index, txdata_index); */ - /* locate the fastpath and the txdata */ - fp = &bp->fp[fp_index]; - txdata = &fp->txdata[txdata_index]; - /* enable this debug print to view the tranmission details DP(NETIF_MSG_TX_QUEUED, "transmitting packet cid %d fp index %d txdata_index %d tx_data ptr %p fp pointer %p\n", txdata->cid, fp_index, txdata_index, txdata, fp); */ if (unlikely(bnx2x_tx_avail(bp, txdata) < - (skb_shinfo(skb)->nr_frags + 3))) { - fp->eth_q_stats.driver_xoff++; + skb_shinfo(skb)->nr_frags + + BDS_PER_TX_PKT + + NEXT_CNT_PER_TX_PKT(MAX_BDS_PER_TX_PKT))) { + bnx2x_fp_qstats(bp, txdata->parent_fp)->driver_xoff++; netif_tx_stop_queue(txq); BNX2X_ERR("BUG! Tx ring full when queue awake!\n"); return NETDEV_TX_BUSY; @@ -3169,7 +3225,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) txdata->tx_bd_prod += nbd; - if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 4)) { + if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_DESC_PER_TX_PKT)) { netif_tx_stop_queue(txq); /* paired memory barrier is in bnx2x_tx_int(), we have to keep @@ -3177,8 +3233,8 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) * fp->bd_tx_cons */ smp_mb(); - fp->eth_q_stats.driver_xoff++; - if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4) + bnx2x_fp_qstats(bp, txdata->parent_fp)->driver_xoff++; + if (bnx2x_tx_avail(bp, txdata) >= MAX_DESC_PER_TX_PKT) netif_tx_wake_queue(txq); } txdata->tx_pkt++; @@ -3243,7 +3299,7 @@ int bnx2x_setup_tc(struct net_device *dev, u8 num_tc) /* configure traffic class to transmission queue mapping */ for (cos = 0; cos < bp->max_cos; cos++) { count = BNX2X_NUM_ETH_QUEUES(bp); - offset = cos * MAX_TXQS_PER_COS; + offset = cos * BNX2X_NUM_NON_CNIC_QUEUES(bp); netdev_set_tc_queue(dev, cos, count, offset); DP(BNX2X_MSG_SP | NETIF_MSG_IFUP, "mapping tc %d to offset %d count %d\n", @@ -3342,7 +3398,7 @@ static void bnx2x_free_fp_mem_at(struct bnx2x *bp, int fp_index) if (!skip_tx_queue(bp, fp_index)) { /* fastpath tx rings: tx_buf tx_desc */ for_each_cos_in_tx_queue(fp, cos) { - struct bnx2x_fp_txdata *txdata = &fp->txdata[cos]; + struct bnx2x_fp_txdata *txdata = fp->txdata_ptr[cos]; DP(NETIF_MSG_IFDOWN, "freeing tx memory of fp %d cos %d cid %d\n", @@ -3414,7 +3470,7 @@ static int bnx2x_alloc_rx_bds(struct bnx2x_fastpath *fp, cqe_ring_prod); fp->rx_pkt = fp->rx_calls = 0; - fp->eth_q_stats.rx_skb_alloc_failed += failure_cnt; + bnx2x_fp_stats(bp, fp)->eth_q_stats.rx_skb_alloc_failed += failure_cnt; return i - failure_cnt; } @@ -3499,7 +3555,7 @@ static 
int bnx2x_alloc_fp_mem_at(struct bnx2x *bp, int index) if (!skip_tx_queue(bp, index)) { /* fastpath tx rings: tx_buf tx_desc */ for_each_cos_in_tx_queue(fp, cos) { - struct bnx2x_fp_txdata *txdata = &fp->txdata[cos]; + struct bnx2x_fp_txdata *txdata = fp->txdata_ptr[cos]; DP(NETIF_MSG_IFUP, "allocating tx memory of fp %d cos %d\n", @@ -3582,7 +3638,7 @@ int bnx2x_alloc_fp_mem(struct bnx2x *bp) #ifdef BCM_CNIC if (!NO_FCOE(bp)) /* FCoE */ - if (bnx2x_alloc_fp_mem_at(bp, FCOE_IDX)) + if (bnx2x_alloc_fp_mem_at(bp, FCOE_IDX(bp))) /* we will fail load process instead of mark * NO_FCOE_FLAG */ @@ -3607,7 +3663,7 @@ int bnx2x_alloc_fp_mem(struct bnx2x *bp) */ /* move FCoE fp even NO_FCOE_FLAG is on */ - bnx2x_move_fp(bp, FCOE_IDX, FCOE_IDX - delta); + bnx2x_move_fp(bp, FCOE_IDX(bp), FCOE_IDX(bp) - delta); #endif bp->num_queues -= delta; BNX2X_ERR("Adjusted num of queues from %d to %d\n", @@ -3619,7 +3675,11 @@ int bnx2x_alloc_fp_mem(struct bnx2x *bp) void bnx2x_free_mem_bp(struct bnx2x *bp) { + kfree(bp->fp->tpa_info); kfree(bp->fp); + kfree(bp->sp_objs); + kfree(bp->fp_stats); + kfree(bp->bnx2x_txq); kfree(bp->msix_table); kfree(bp->ilt); } @@ -3630,6 +3690,8 @@ int __devinit bnx2x_alloc_mem_bp(struct bnx2x *bp) struct msix_entry *tbl; struct bnx2x_ilt *ilt; int msix_table_size = 0; + int fp_array_size; + int i; /* * The biggest MSI-X table we might need is as a maximum number of fast @@ -3638,12 +3700,44 @@ int __devinit bnx2x_alloc_mem_bp(struct bnx2x *bp) msix_table_size = bp->igu_sb_cnt + 1; /* fp array: RSS plus CNIC related L2 queues */ - fp = kcalloc(BNX2X_MAX_RSS_COUNT(bp) + NON_ETH_CONTEXT_USE, - sizeof(*fp), GFP_KERNEL); + fp_array_size = BNX2X_MAX_RSS_COUNT(bp) + NON_ETH_CONTEXT_USE; + BNX2X_DEV_INFO("fp_array_size %d", fp_array_size); + + fp = kcalloc(fp_array_size, sizeof(*fp), GFP_KERNEL); if (!fp) goto alloc_err; + for (i = 0; i < fp_array_size; i++) { + fp[i].tpa_info = + kcalloc(ETH_MAX_AGGREGATION_QUEUES_E1H_E2, + sizeof(struct bnx2x_agg_info), GFP_KERNEL); + if (!(fp[i].tpa_info)) + goto alloc_err; + } + bp->fp = fp; + /* allocate sp objs */ + bp->sp_objs = kcalloc(fp_array_size, sizeof(struct bnx2x_sp_objs), + GFP_KERNEL); + if (!bp->sp_objs) + goto alloc_err; + + /* allocate fp_stats */ + bp->fp_stats = kcalloc(fp_array_size, sizeof(struct bnx2x_fp_stats), + GFP_KERNEL); + if (!bp->fp_stats) + goto alloc_err; + + /* Allocate memory for the transmission queues array */ + bp->bnx2x_txq_size = BNX2X_MAX_RSS_COUNT(bp) * BNX2X_MULTI_TX_COS; +#ifdef BCM_CNIC + bp->bnx2x_txq_size++; +#endif + bp->bnx2x_txq = kcalloc(bp->bnx2x_txq_size, + sizeof(struct bnx2x_fp_txdata), GFP_KERNEL); + if (!bp->bnx2x_txq) + goto alloc_err; + /* msix table */ tbl = kcalloc(msix_table_size, sizeof(*tbl), GFP_KERNEL); if (!tbl) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h index 7cd99b75347a..dfa757e74296 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h @@ -29,6 +29,7 @@ extern int load_count[2][3]; /* per-path: 0-common, 1-port0, 2-port1 */ extern int num_queues; +extern int int_mode; /************************ Macros ********************************/ #define BNX2X_PCI_FREE(x, y, size) \ @@ -89,12 +90,12 @@ void bnx2x_send_unload_done(struct bnx2x *bp); * bnx2x_config_rss_pf - configure RSS parameters in a PF. 
* * @bp: driver handle - * @rss_obj RSS object to use + * @rss_obj: RSS object to use * @ind_table: indirection table to configure * @config_hash: re-configure RSS hash keys configuration */ int bnx2x_config_rss_pf(struct bnx2x *bp, struct bnx2x_rss_config_obj *rss_obj, - u8 *ind_table, bool config_hash); + bool config_hash); /** * bnx2x__init_func_obj - init function object @@ -244,6 +245,14 @@ int bnx2x_cnic_notify(struct bnx2x *bp, int cmd); * @bp: driver handle */ void bnx2x_setup_cnic_irq_info(struct bnx2x *bp); + +/** + * bnx2x_setup_cnic_info - provides cnic with updated info + * + * @bp: driver handle + */ +void bnx2x_setup_cnic_info(struct bnx2x *bp); + #endif /** @@ -409,7 +418,7 @@ void bnx2x_ilt_set_info(struct bnx2x *bp); * * @bp: driver handle */ -void bnx2x_dcbx_init(struct bnx2x *bp); +void bnx2x_dcbx_init(struct bnx2x *bp, bool update_shmem); /** * bnx2x_set_power_state - set power state to the requested value. @@ -487,7 +496,7 @@ void bnx2x_netif_start(struct bnx2x *bp); * fills msix_table, requests vectors, updates num_queues * according to number of available vectors. */ -int __devinit bnx2x_enable_msix(struct bnx2x *bp); +int bnx2x_enable_msix(struct bnx2x *bp); /** * bnx2x_enable_msi - request msi mode from OS, updated internals accordingly @@ -728,7 +737,7 @@ static inline bool bnx2x_has_tx_work(struct bnx2x_fastpath *fp) { u8 cos; for_each_cos_in_tx_queue(fp, cos) - if (bnx2x_tx_queue_has_work(&fp->txdata[cos])) + if (bnx2x_tx_queue_has_work(fp->txdata_ptr[cos])) return true; return false; } @@ -780,8 +789,10 @@ static inline void bnx2x_add_all_napi(struct bnx2x *bp) { int i; + bp->num_napi_queues = bp->num_queues; + /* Add NAPI objects */ - for_each_rx_queue(bp, i) + for_each_napi_rx_queue(bp, i) netif_napi_add(bp->dev, &bnx2x_fp(bp, i, napi), bnx2x_poll, BNX2X_NAPI_WEIGHT); } @@ -790,10 +801,12 @@ static inline void bnx2x_del_all_napi(struct bnx2x *bp) { int i; - for_each_rx_queue(bp, i) + for_each_napi_rx_queue(bp, i) netif_napi_del(&bnx2x_fp(bp, i, napi)); } +void bnx2x_set_int_mode(struct bnx2x *bp); + static inline void bnx2x_disable_msi(struct bnx2x *bp) { if (bp->flags & USING_MSIX_FLAG) { @@ -809,7 +822,8 @@ static inline int bnx2x_calc_num_queues(struct bnx2x *bp) { return num_queues ? 
min_t(int, num_queues, BNX2X_MAX_QUEUES(bp)) : - min_t(int, num_online_cpus(), BNX2X_MAX_QUEUES(bp)); + min_t(int, netif_get_num_default_rss_queues(), + BNX2X_MAX_QUEUES(bp)); } static inline void bnx2x_clear_sge_mask_next_elems(struct bnx2x_fastpath *fp) @@ -865,11 +879,9 @@ static inline int func_by_vn(struct bnx2x *bp, int vn) return 2 * vn + BP_PORT(bp); } -static inline int bnx2x_config_rss_eth(struct bnx2x *bp, u8 *ind_table, - bool config_hash) +static inline int bnx2x_config_rss_eth(struct bnx2x *bp, bool config_hash) { - return bnx2x_config_rss_pf(bp, &bp->rss_conf_obj, ind_table, - config_hash); + return bnx2x_config_rss_pf(bp, &bp->rss_conf_obj, config_hash); } /** @@ -975,8 +987,8 @@ static inline void bnx2x_init_vlan_mac_fp_objs(struct bnx2x_fastpath *fp, struct bnx2x *bp = fp->bp; /* Configure classification DBs */ - bnx2x_init_mac_obj(bp, &fp->mac_obj, fp->cl_id, fp->cid, - BP_FUNC(bp), bnx2x_sp(bp, mac_rdata), + bnx2x_init_mac_obj(bp, &bnx2x_sp_obj(bp, fp).mac_obj, fp->cl_id, + fp->cid, BP_FUNC(bp), bnx2x_sp(bp, mac_rdata), bnx2x_sp_mapping(bp, mac_rdata), BNX2X_FILTER_MAC_PENDING, &bp->sp_state, obj_type, @@ -1068,12 +1080,14 @@ static inline u32 bnx2x_rx_ustorm_prods_offset(struct bnx2x_fastpath *fp) } static inline void bnx2x_init_txdata(struct bnx2x *bp, - struct bnx2x_fp_txdata *txdata, u32 cid, int txq_index, - __le16 *tx_cons_sb) + struct bnx2x_fp_txdata *txdata, u32 cid, + int txq_index, __le16 *tx_cons_sb, + struct bnx2x_fastpath *fp) { txdata->cid = cid; txdata->txq_index = txq_index; txdata->tx_cons_sb = tx_cons_sb; + txdata->parent_fp = fp; DP(NETIF_MSG_IFUP, "created tx data cid %d, txq %d\n", txdata->cid, txdata->txq_index); @@ -1107,18 +1121,13 @@ static inline void bnx2x_init_fcoe_fp(struct bnx2x *bp) bnx2x_fcoe(bp, rx_queue) = BNX2X_NUM_ETH_QUEUES(bp); bnx2x_fcoe(bp, cl_id) = bnx2x_cnic_eth_cl_id(bp, BNX2X_FCOE_ETH_CL_ID_IDX); - /** Current BNX2X_FCOE_ETH_CID deffinition implies not more than - * 16 ETH clients per function when CNIC is enabled! - * - * Fix it ASAP!!! 
- */ - bnx2x_fcoe(bp, cid) = BNX2X_FCOE_ETH_CID; + bnx2x_fcoe(bp, cid) = BNX2X_FCOE_ETH_CID(bp); bnx2x_fcoe(bp, fw_sb_id) = DEF_SB_ID; bnx2x_fcoe(bp, igu_sb_id) = bp->igu_dsb_id; bnx2x_fcoe(bp, rx_cons_sb) = BNX2X_FCOE_L2_RX_INDEX; - - bnx2x_init_txdata(bp, &bnx2x_fcoe(bp, txdata[0]), - fp->cid, FCOE_TXQ_IDX(bp), BNX2X_FCOE_L2_TX_INDEX); + bnx2x_init_txdata(bp, bnx2x_fcoe(bp, txdata_ptr[0]), + fp->cid, FCOE_TXQ_IDX(bp), BNX2X_FCOE_L2_TX_INDEX, + fp); DP(NETIF_MSG_IFUP, "created fcoe tx data (fp index %d)\n", fp->index); @@ -1135,8 +1144,8 @@ static inline void bnx2x_init_fcoe_fp(struct bnx2x *bp) /* No multi-CoS for FCoE L2 client */ BUG_ON(fp->max_cos != 1); - bnx2x_init_queue_obj(bp, &fp->q_obj, fp->cl_id, &fp->cid, 1, - BP_FUNC(bp), bnx2x_sp(bp, q_rdata), + bnx2x_init_queue_obj(bp, &bnx2x_sp_obj(bp, fp).q_obj, fp->cl_id, + &fp->cid, 1, BP_FUNC(bp), bnx2x_sp(bp, q_rdata), bnx2x_sp_mapping(bp, q_rdata), q_type); DP(NETIF_MSG_IFUP, diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c index 4f9244bd7530..8a73374e52a7 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c @@ -972,23 +972,26 @@ void bnx2x_dcbx_init_params(struct bnx2x *bp) bp->dcbx_config_params.admin_default_priority = 0; } -void bnx2x_dcbx_init(struct bnx2x *bp) +void bnx2x_dcbx_init(struct bnx2x *bp, bool update_shmem) { u32 dcbx_lldp_params_offset = SHMEM_LLDP_DCBX_PARAMS_NONE; + /* only PMF can send ADMIN msg to MFW in old MFW versions */ + if ((!bp->port.pmf) && (!(bp->flags & BC_SUPPORTS_DCBX_MSG_NON_PMF))) + return; + if (bp->dcbx_enabled <= 0) return; /* validate: * chip of good for dcbx version, * dcb is wanted - * the function is pmf * shmem2 contains DCBX support fields */ DP(BNX2X_MSG_DCB, "dcb_state %d bp->port.pmf %d\n", bp->dcb_state, bp->port.pmf); - if (bp->dcb_state == BNX2X_DCB_STATE_ON && bp->port.pmf && + if (bp->dcb_state == BNX2X_DCB_STATE_ON && SHMEM2_HAS(bp, dcbx_lldp_params_offset)) { dcbx_lldp_params_offset = SHMEM2_RD(bp, dcbx_lldp_params_offset); @@ -999,12 +1002,23 @@ void bnx2x_dcbx_init(struct bnx2x *bp) bnx2x_update_drv_flags(bp, 1 << DRV_FLAGS_DCB_CONFIGURED, 0); if (SHMEM_LLDP_DCBX_PARAMS_NONE != dcbx_lldp_params_offset) { - bnx2x_dcbx_admin_mib_updated_params(bp, - dcbx_lldp_params_offset); + /* need HW lock to avoid scenario of two drivers + * writing in parallel to shmem + */ + bnx2x_acquire_hw_lock(bp, + HW_LOCK_RESOURCE_DCBX_ADMIN_MIB); + if (update_shmem) + bnx2x_dcbx_admin_mib_updated_params(bp, + dcbx_lldp_params_offset); /* Let HW start negotiation */ bnx2x_fw_command(bp, DRV_MSG_CODE_DCBX_ADMIN_PMF_MSG, 0); + /* release HW lock only after MFW acks that it finished + * reading values from shmem + */ + bnx2x_release_hw_lock(bp, + HW_LOCK_RESOURCE_DCBX_ADMIN_MIB); } } } @@ -2063,10 +2077,8 @@ static u8 bnx2x_dcbnl_set_all(struct net_device *netdev) "Handling parity error recovery. 
Try again later\n"); return 1; } - if (netif_running(bp->dev)) { - bnx2x_nic_unload(bp, UNLOAD_NORMAL); - rc = bnx2x_nic_load(bp, LOAD_NORMAL); - } + if (netif_running(bp->dev)) + bnx2x_dcbx_init(bp, true); DP(BNX2X_MSG_DCB, "set_dcbx_params done (%d)\n", rc); if (rc) return 1; diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c index ddc18ee5c5ae..fc4e0e3885b0 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c @@ -177,6 +177,8 @@ static const struct { 4, STATS_FLAGS_FUNC, "recoverable_errors" }, { STATS_OFFSET32(unrecoverable_error), 4, STATS_FLAGS_FUNC, "unrecoverable_errors" }, + { STATS_OFFSET32(eee_tx_lpi), + 4, STATS_FLAGS_PORT, "Tx LPI entry count"} }; #define BNX2X_NUM_STATS ARRAY_SIZE(bnx2x_stats_arr) @@ -185,7 +187,8 @@ static int bnx2x_get_port_type(struct bnx2x *bp) int port_type; u32 phy_idx = bnx2x_get_cur_phy_idx(bp); switch (bp->link_params.phy[phy_idx].media_type) { - case ETH_PHY_SFP_FIBER: + case ETH_PHY_SFPP_10G_FIBER: + case ETH_PHY_SFP_1G_FIBER: case ETH_PHY_XFP_FIBER: case ETH_PHY_KR: case ETH_PHY_CX4: @@ -218,6 +221,11 @@ static int bnx2x_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) (bp->port.supported[cfg_idx ^ 1] & (SUPPORTED_TP | SUPPORTED_FIBRE)); cmd->advertising = bp->port.advertising[cfg_idx]; + if (bp->link_params.phy[bnx2x_get_cur_phy_idx(bp)].media_type == + ETH_PHY_SFP_1G_FIBER) { + cmd->supported &= ~(SUPPORTED_10000baseT_Full); + cmd->advertising &= ~(ADVERTISED_10000baseT_Full); + } if ((bp->state == BNX2X_STATE_OPEN) && (bp->link_vars.link_up)) { if (!(bp->flags & MF_FUNC_DIS)) { @@ -293,7 +301,7 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) { struct bnx2x *bp = netdev_priv(dev); u32 advertising, cfg_idx, old_multi_phy_config, new_multi_phy_config; - u32 speed; + u32 speed, phy_idx; if (IS_MF_SD(bp)) return 0; @@ -548,9 +556,11 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) "10G half not supported\n"); return -EINVAL; } - + phy_idx = bnx2x_get_cur_phy_idx(bp); if (!(bp->port.supported[cfg_idx] - & SUPPORTED_10000baseT_Full)) { + & SUPPORTED_10000baseT_Full) || + (bp->link_params.phy[phy_idx].media_type == + ETH_PHY_SFP_1G_FIBER)) { DP(BNX2X_MSG_ETHTOOL, "10G full not supported\n"); return -EINVAL; @@ -824,7 +834,7 @@ static void bnx2x_get_drvinfo(struct net_device *dev, ((phy_fw_ver[0] != '\0') ? " phy " : ""), phy_fw_ver); strlcpy(info->bus_info, pci_name(bp->pdev), sizeof(info->bus_info)); info->n_stats = BNX2X_NUM_STATS; - info->testinfo_len = BNX2X_NUM_TESTS; + info->testinfo_len = BNX2X_NUM_TESTS(bp); info->eedump_len = bp->common.flash_size; info->regdump_len = bnx2x_get_regs_len(dev); } @@ -1150,6 +1160,65 @@ static int bnx2x_get_eeprom(struct net_device *dev, return rc; } +static int bnx2x_get_module_eeprom(struct net_device *dev, + struct ethtool_eeprom *ee, + u8 *data) +{ + struct bnx2x *bp = netdev_priv(dev); + int rc = 0, phy_idx; + u8 *user_data = data; + int remaining_len = ee->len, xfer_size; + unsigned int page_off = ee->offset; + + if (!netif_running(dev)) { + DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM, + "cannot access eeprom when the interface is down\n"); + return -EAGAIN; + } + + phy_idx = bnx2x_get_cur_phy_idx(bp); + bnx2x_acquire_phy_lock(bp); + while (!rc && remaining_len > 0) { + xfer_size = (remaining_len > SFP_EEPROM_PAGE_SIZE) ? 
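+ /* clamp each transfer to one SFP EEPROM page */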
+ SFP_EEPROM_PAGE_SIZE : remaining_len; + rc = bnx2x_read_sfp_module_eeprom(&bp->link_params.phy[phy_idx], + &bp->link_params, + page_off, + xfer_size, + user_data); + remaining_len -= xfer_size; + user_data += xfer_size; + page_off += xfer_size; + } + + bnx2x_release_phy_lock(bp); + return rc; +} + +static int bnx2x_get_module_info(struct net_device *dev, + struct ethtool_modinfo *modinfo) +{ + struct bnx2x *bp = netdev_priv(dev); + int phy_idx; + if (!netif_running(dev)) { + DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM, + "cannot access eeprom when the interface is down\n"); + return -EAGAIN; + } + + phy_idx = bnx2x_get_cur_phy_idx(bp); + switch (bp->link_params.phy[phy_idx].media_type) { + case ETH_PHY_SFPP_10G_FIBER: + case ETH_PHY_SFP_1G_FIBER: + case ETH_PHY_DA_TWINAX: + modinfo->type = ETH_MODULE_SFF_8079; + modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; + return 0; + default: + return -EOPNOTSUPP; + } +} + static int bnx2x_nvram_write_dword(struct bnx2x *bp, u32 offset, u32 val, u32 cmd_flags) { @@ -1531,18 +1600,146 @@ static int bnx2x_set_pauseparam(struct net_device *dev, return 0; } -static const struct { - char string[ETH_GSTRING_LEN]; -} bnx2x_tests_str_arr[BNX2X_NUM_TESTS] = { - { "register_test (offline)" }, - { "memory_test (offline)" }, - { "loopback_test (offline)" }, - { "nvram_test (online)" }, - { "interrupt_test (online)" }, - { "link_test (online)" }, - { "idle check (online)" } +static char *bnx2x_tests_str_arr[BNX2X_NUM_TESTS_SF] = { + "register_test (offline) ", + "memory_test (offline) ", + "int_loopback_test (offline)", + "ext_loopback_test (offline)", + "nvram_test (online) ", + "interrupt_test (online) ", + "link_test (online) " }; +static u32 bnx2x_eee_to_adv(u32 eee_adv) +{ + u32 modes = 0; + + if (eee_adv & SHMEM_EEE_100M_ADV) + modes |= ADVERTISED_100baseT_Full; + if (eee_adv & SHMEM_EEE_1G_ADV) + modes |= ADVERTISED_1000baseT_Full; + if (eee_adv & SHMEM_EEE_10G_ADV) + modes |= ADVERTISED_10000baseT_Full; + + return modes; +} + +static u32 bnx2x_adv_to_eee(u32 modes, u32 shift) +{ + u32 eee_adv = 0; + if (modes & ADVERTISED_100baseT_Full) + eee_adv |= SHMEM_EEE_100M_ADV; + if (modes & ADVERTISED_1000baseT_Full) + eee_adv |= SHMEM_EEE_1G_ADV; + if (modes & ADVERTISED_10000baseT_Full) + eee_adv |= SHMEM_EEE_10G_ADV; + + return eee_adv << shift; +} + +static int bnx2x_get_eee(struct net_device *dev, struct ethtool_eee *edata) +{ + struct bnx2x *bp = netdev_priv(dev); + u32 eee_cfg; + + if (!SHMEM2_HAS(bp, eee_status[BP_PORT(bp)])) { + DP(BNX2X_MSG_ETHTOOL, "BC Version does not support EEE\n"); + return -EOPNOTSUPP; + } + + eee_cfg = SHMEM2_RD(bp, eee_status[BP_PORT(bp)]); + + edata->supported = + bnx2x_eee_to_adv((eee_cfg & SHMEM_EEE_SUPPORTED_MASK) >> + SHMEM_EEE_SUPPORTED_SHIFT); + + edata->advertised = + bnx2x_eee_to_adv((eee_cfg & SHMEM_EEE_ADV_STATUS_MASK) >> + SHMEM_EEE_ADV_STATUS_SHIFT); + edata->lp_advertised = + bnx2x_eee_to_adv((eee_cfg & SHMEM_EEE_LP_ADV_STATUS_MASK) >> + SHMEM_EEE_LP_ADV_STATUS_SHIFT); + + /* SHMEM value is in 16u units --> Convert to 1u units. */ + edata->tx_lpi_timer = (eee_cfg & SHMEM_EEE_TIMER_MASK) << 4; + + edata->eee_enabled = (eee_cfg & SHMEM_EEE_REQUESTED_BIT) ? 1 : 0; + edata->eee_active = (eee_cfg & SHMEM_EEE_ACTIVE_BIT) ? 1 : 0; + edata->tx_lpi_enabled = (eee_cfg & SHMEM_EEE_LPI_REQUESTED_BIT) ? 
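+ /* tx_lpi_enabled mirrors the LPI-requested bit latched in shmem */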
1 : 0; + + return 0; +} + +static int bnx2x_set_eee(struct net_device *dev, struct ethtool_eee *edata) +{ + struct bnx2x *bp = netdev_priv(dev); + u32 eee_cfg; + u32 advertised; + + if (IS_MF(bp)) + return 0; + + if (!SHMEM2_HAS(bp, eee_status[BP_PORT(bp)])) { + DP(BNX2X_MSG_ETHTOOL, "BC Version does not support EEE\n"); + return -EOPNOTSUPP; + } + + eee_cfg = SHMEM2_RD(bp, eee_status[BP_PORT(bp)]); + + if (!(eee_cfg & SHMEM_EEE_SUPPORTED_MASK)) { + DP(BNX2X_MSG_ETHTOOL, "Board does not support EEE!\n"); + return -EOPNOTSUPP; + } + + advertised = bnx2x_adv_to_eee(edata->advertised, + SHMEM_EEE_ADV_STATUS_SHIFT); + if ((advertised != (eee_cfg & SHMEM_EEE_ADV_STATUS_MASK))) { + DP(BNX2X_MSG_ETHTOOL, + "Direct manipulation of EEE advertisment is not supported\n"); + return -EINVAL; + } + + if (edata->tx_lpi_timer > EEE_MODE_TIMER_MASK) { + DP(BNX2X_MSG_ETHTOOL, + "Maximal Tx Lpi timer supported is %x(u)\n", + EEE_MODE_TIMER_MASK); + return -EINVAL; + } + if (edata->tx_lpi_enabled && + (edata->tx_lpi_timer < EEE_MODE_NVRAM_AGGRESSIVE_TIME)) { + DP(BNX2X_MSG_ETHTOOL, + "Minimal Tx Lpi timer supported is %d(u)\n", + EEE_MODE_NVRAM_AGGRESSIVE_TIME); + return -EINVAL; + } + + /* All is well; Apply changes*/ + if (edata->eee_enabled) + bp->link_params.eee_mode |= EEE_MODE_ADV_LPI; + else + bp->link_params.eee_mode &= ~EEE_MODE_ADV_LPI; + + if (edata->tx_lpi_enabled) + bp->link_params.eee_mode |= EEE_MODE_ENABLE_LPI; + else + bp->link_params.eee_mode &= ~EEE_MODE_ENABLE_LPI; + + bp->link_params.eee_mode &= ~EEE_MODE_TIMER_MASK; + bp->link_params.eee_mode |= (edata->tx_lpi_timer & + EEE_MODE_TIMER_MASK) | + EEE_MODE_OVERRIDE_NVRAM | + EEE_MODE_OUTPUT_TIME; + + /* Restart link to propogate changes */ + if (netif_running(dev)) { + bnx2x_stats_handle(bp, STATS_EVENT_STOP); + bnx2x_link_set(bp); + } + + return 0; +} + + enum { BNX2X_CHIP_E1_OFST = 0, BNX2X_CHIP_E1H_OFST, @@ -1811,6 +2008,14 @@ static void bnx2x_wait_for_link(struct bnx2x *bp, u8 link_up, u8 is_serdes) if (cnt <= 0 && bnx2x_link_test(bp, is_serdes)) DP(BNX2X_MSG_ETHTOOL, "Timeout waiting for link up\n"); + + cnt = 1400; + while (!bp->link_vars.link_up && cnt--) + msleep(20); + + if (cnt <= 0 && !bp->link_vars.link_up) + DP(BNX2X_MSG_ETHTOOL, + "Timeout waiting for link init\n"); } } @@ -1821,7 +2026,7 @@ static int bnx2x_run_loopback(struct bnx2x *bp, int loopback_mode) unsigned char *packet; struct bnx2x_fastpath *fp_rx = &bp->fp[0]; struct bnx2x_fastpath *fp_tx = &bp->fp[0]; - struct bnx2x_fp_txdata *txdata = &fp_tx->txdata[0]; + struct bnx2x_fp_txdata *txdata = fp_tx->txdata_ptr[0]; u16 tx_start_idx, tx_idx; u16 rx_start_idx, rx_idx; u16 pkt_prod, bd_prod; @@ -1836,13 +2041,16 @@ static int bnx2x_run_loopback(struct bnx2x *bp, int loopback_mode) u16 len; int rc = -ENODEV; u8 *data; - struct netdev_queue *txq = netdev_get_tx_queue(bp->dev, txdata->txq_index); + struct netdev_queue *txq = netdev_get_tx_queue(bp->dev, + txdata->txq_index); /* check the loopback mode */ switch (loopback_mode) { case BNX2X_PHY_LOOPBACK: - if (bp->link_params.loopback_mode != LOOPBACK_XGXS) + if (bp->link_params.loopback_mode != LOOPBACK_XGXS) { + DP(BNX2X_MSG_ETHTOOL, "PHY loopback not supported\n"); return -EINVAL; + } break; case BNX2X_MAC_LOOPBACK: if (CHIP_IS_E3(bp)) { @@ -1859,6 +2067,13 @@ static int bnx2x_run_loopback(struct bnx2x *bp, int loopback_mode) bnx2x_phy_init(&bp->link_params, &bp->link_vars); break; + case BNX2X_EXT_LOOPBACK: + if (bp->link_params.loopback_mode != LOOPBACK_EXT) { + DP(BNX2X_MSG_ETHTOOL, + "Can't configure external 
loopback\n"); + return -EINVAL; + } + break; default: DP(BNX2X_MSG_ETHTOOL, "Command parameters not supported\n"); return -EINVAL; @@ -2030,6 +2245,38 @@ static int bnx2x_test_loopback(struct bnx2x *bp) return rc; } +static int bnx2x_test_ext_loopback(struct bnx2x *bp) +{ + int rc; + u8 is_serdes = + (bp->link_vars.link_status & LINK_STATUS_SERDES_LINK) > 0; + + if (BP_NOMCP(bp)) + return -ENODEV; + + if (!netif_running(bp->dev)) + return BNX2X_EXT_LOOPBACK_FAILED; + + bnx2x_nic_unload(bp, UNLOAD_NORMAL); + rc = bnx2x_nic_load(bp, LOAD_LOOPBACK_EXT); + if (rc) { + DP(BNX2X_MSG_ETHTOOL, + "Can't perform self-test, nic_load (for external lb) failed\n"); + return -ENODEV; + } + bnx2x_wait_for_link(bp, 1, is_serdes); + + bnx2x_netif_stop(bp, 1); + + rc = bnx2x_run_loopback(bp, BNX2X_EXT_LOOPBACK); + if (rc) + DP(BNX2X_MSG_ETHTOOL, "EXT loopback failed (res %d)\n", rc); + + bnx2x_netif_start(bp); + + return rc; +} + #define CRC32_RESIDUAL 0xdebb20e3 static int bnx2x_test_nvram(struct bnx2x *bp) @@ -2112,7 +2359,7 @@ static int bnx2x_test_intr(struct bnx2x *bp) return -ENODEV; } - params.q_obj = &bp->fp->q_obj; + params.q_obj = &bp->sp_objs->q_obj; params.cmd = BNX2X_Q_CMD_EMPTY; __set_bit(RAMROD_COMP_WAIT, ¶ms.ramrod_flags); @@ -2125,24 +2372,31 @@ static void bnx2x_self_test(struct net_device *dev, { struct bnx2x *bp = netdev_priv(dev); u8 is_serdes; + int rc; + if (bp->recovery_state != BNX2X_RECOVERY_DONE) { netdev_err(bp->dev, "Handling parity error recovery. Try again later\n"); etest->flags |= ETH_TEST_FL_FAILED; return; } + DP(BNX2X_MSG_ETHTOOL, + "Self-test command parameters: offline = %d, external_lb = %d\n", + (etest->flags & ETH_TEST_FL_OFFLINE), + (etest->flags & ETH_TEST_FL_EXTERNAL_LB)>>2); - memset(buf, 0, sizeof(u64) * BNX2X_NUM_TESTS); + memset(buf, 0, sizeof(u64) * BNX2X_NUM_TESTS(bp)); - if (!netif_running(dev)) + if (!netif_running(dev)) { + DP(BNX2X_MSG_ETHTOOL, + "Can't perform self-test when interface is down\n"); return; + } - /* offline tests are not supported in MF mode */ - if (IS_MF(bp)) - etest->flags &= ~ETH_TEST_FL_OFFLINE; is_serdes = (bp->link_vars.link_status & LINK_STATUS_SERDES_LINK) > 0; - if (etest->flags & ETH_TEST_FL_OFFLINE) { + /* offline tests are not supported in MF mode */ + if ((etest->flags & ETH_TEST_FL_OFFLINE) && !IS_MF(bp)) { int port = BP_PORT(bp); u32 val; u8 link_up; @@ -2155,7 +2409,14 @@ static void bnx2x_self_test(struct net_device *dev, link_up = bp->link_vars.link_up; bnx2x_nic_unload(bp, UNLOAD_NORMAL); - bnx2x_nic_load(bp, LOAD_DIAG); + rc = bnx2x_nic_load(bp, LOAD_DIAG); + if (rc) { + etest->flags |= ETH_TEST_FL_FAILED; + DP(BNX2X_MSG_ETHTOOL, + "Can't perform self-test, nic_load (for offline) failed\n"); + return; + } + /* wait until link state is restored */ bnx2x_wait_for_link(bp, 1, is_serdes); @@ -2168,30 +2429,51 @@ static void bnx2x_self_test(struct net_device *dev, etest->flags |= ETH_TEST_FL_FAILED; } - buf[2] = bnx2x_test_loopback(bp); + buf[2] = bnx2x_test_loopback(bp); /* internal LB */ if (buf[2] != 0) etest->flags |= ETH_TEST_FL_FAILED; + if (etest->flags & ETH_TEST_FL_EXTERNAL_LB) { + buf[3] = bnx2x_test_ext_loopback(bp); /* external LB */ + if (buf[3] != 0) + etest->flags |= ETH_TEST_FL_FAILED; + etest->flags |= ETH_TEST_FL_EXTERNAL_LB_DONE; + } + bnx2x_nic_unload(bp, UNLOAD_NORMAL); /* restore input for TX port IF */ REG_WR(bp, NIG_REG_EGRESS_UMP0_IN_EN + port*4, val); - - bnx2x_nic_load(bp, LOAD_NORMAL); + rc = bnx2x_nic_load(bp, LOAD_NORMAL); + if (rc) { + etest->flags |= ETH_TEST_FL_FAILED; + 
DP(BNX2X_MSG_ETHTOOL, + "Can't perform self-test, nic_load (for online) failed\n"); + return; + } /* wait until link state is restored */ bnx2x_wait_for_link(bp, link_up, is_serdes); } if (bnx2x_test_nvram(bp) != 0) { - buf[3] = 1; + if (!IS_MF(bp)) + buf[4] = 1; + else + buf[0] = 1; etest->flags |= ETH_TEST_FL_FAILED; } if (bnx2x_test_intr(bp) != 0) { - buf[4] = 1; + if (!IS_MF(bp)) + buf[5] = 1; + else + buf[1] = 1; etest->flags |= ETH_TEST_FL_FAILED; } if (bnx2x_link_test(bp, is_serdes) != 0) { - buf[5] = 1; + if (!IS_MF(bp)) + buf[6] = 1; + else + buf[2] = 1; etest->flags |= ETH_TEST_FL_FAILED; } @@ -2236,7 +2518,7 @@ static int bnx2x_get_sset_count(struct net_device *dev, int stringset) return num_stats; case ETH_SS_TEST: - return BNX2X_NUM_TESTS; + return BNX2X_NUM_TESTS(bp); default: return -EINVAL; @@ -2246,7 +2528,7 @@ static int bnx2x_get_sset_count(struct net_device *dev, int stringset) static void bnx2x_get_strings(struct net_device *dev, u32 stringset, u8 *buf) { struct bnx2x *bp = netdev_priv(dev); - int i, j, k; + int i, j, k, offset, start; char queue_name[MAX_QUEUE_NAME_LEN+1]; switch (stringset) { @@ -2277,7 +2559,17 @@ static void bnx2x_get_strings(struct net_device *dev, u32 stringset, u8 *buf) break; case ETH_SS_TEST: - memcpy(buf, bnx2x_tests_str_arr, sizeof(bnx2x_tests_str_arr)); + /* First 4 tests cannot be done in MF mode */ + if (!IS_MF(bp)) + start = 0; + else + start = 4; + for (i = 0, j = start; j < (start + BNX2X_NUM_TESTS(bp)); + i++, j++) { + offset = sprintf(buf+32*i, "%s", + bnx2x_tests_str_arr[j]); + *(buf+offset) = '\0'; + } break; } } @@ -2291,7 +2583,7 @@ static void bnx2x_get_ethtool_stats(struct net_device *dev, if (is_multi(bp)) { for_each_eth_queue(bp, i) { - hw_stats = (u32 *)&bp->fp[i].eth_q_stats; + hw_stats = (u32 *)&bp->fp_stats[i].eth_q_stats; for (j = 0; j < BNX2X_NUM_Q_STATS; j++) { if (bnx2x_q_stats_arr[j].size == 0) { /* skip this counter */ @@ -2375,6 +2667,41 @@ static int bnx2x_set_phys_id(struct net_device *dev, return 0; } +static int bnx2x_get_rss_flags(struct bnx2x *bp, struct ethtool_rxnfc *info) +{ + + switch (info->flow_type) { + case TCP_V4_FLOW: + case TCP_V6_FLOW: + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + break; + case UDP_V4_FLOW: + if (bp->rss_conf_obj.udp_rss_v4) + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + else + info->data = RXH_IP_SRC | RXH_IP_DST; + break; + case UDP_V6_FLOW: + if (bp->rss_conf_obj.udp_rss_v6) + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + else + info->data = RXH_IP_SRC | RXH_IP_DST; + break; + case IPV4_FLOW: + case IPV6_FLOW: + info->data = RXH_IP_SRC | RXH_IP_DST; + break; + default: + info->data = 0; + break; + } + + return 0; +} + static int bnx2x_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, u32 *rules __always_unused) { @@ -2384,7 +2711,102 @@ static int bnx2x_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, case ETHTOOL_GRXRINGS: info->data = BNX2X_NUM_ETH_QUEUES(bp); return 0; + case ETHTOOL_GRXFH: + return bnx2x_get_rss_flags(bp, info); + default: + DP(BNX2X_MSG_ETHTOOL, "Command parameters not supported\n"); + return -EOPNOTSUPP; + } +} + +static int bnx2x_set_rss_flags(struct bnx2x *bp, struct ethtool_rxnfc *info) +{ + int udp_rss_requested; + + DP(BNX2X_MSG_ETHTOOL, + "Set rss flags command parameters: flow type = %d, data = %llu\n", + info->flow_type, info->data); + + switch (info->flow_type) { + case TCP_V4_FLOW: + case TCP_V6_FLOW: + /* For TCP only 4-tupple hash is 
supported */ + if (info->data ^ (RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3)) { + DP(BNX2X_MSG_ETHTOOL, + "Command parameters not supported\n"); + return -EINVAL; + } else { + return 0; + } + case UDP_V4_FLOW: + case UDP_V6_FLOW: + /* For UDP either 2-tupple hash or 4-tupple hash is supported */ + if (info->data == (RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3)) + udp_rss_requested = 1; + else if (info->data == (RXH_IP_SRC | RXH_IP_DST)) + udp_rss_requested = 0; + else + return -EINVAL; + if ((info->flow_type == UDP_V4_FLOW) && + (bp->rss_conf_obj.udp_rss_v4 != udp_rss_requested)) { + bp->rss_conf_obj.udp_rss_v4 = udp_rss_requested; + DP(BNX2X_MSG_ETHTOOL, + "rss re-configured, UDP 4-tupple %s\n", + udp_rss_requested ? "enabled" : "disabled"); + return bnx2x_config_rss_pf(bp, &bp->rss_conf_obj, 0); + } else if ((info->flow_type == UDP_V6_FLOW) && + (bp->rss_conf_obj.udp_rss_v6 != udp_rss_requested)) { + bp->rss_conf_obj.udp_rss_v6 = udp_rss_requested; + return bnx2x_config_rss_pf(bp, &bp->rss_conf_obj, 0); + DP(BNX2X_MSG_ETHTOOL, + "rss re-configured, UDP 4-tupple %s\n", + udp_rss_requested ? "enabled" : "disabled"); + } else { + return 0; + } + case IPV4_FLOW: + case IPV6_FLOW: + /* For IP only 2-tupple hash is supported */ + if (info->data ^ (RXH_IP_SRC | RXH_IP_DST)) { + DP(BNX2X_MSG_ETHTOOL, + "Command parameters not supported\n"); + return -EINVAL; + } else { + return 0; + } + case SCTP_V4_FLOW: + case AH_ESP_V4_FLOW: + case AH_V4_FLOW: + case ESP_V4_FLOW: + case SCTP_V6_FLOW: + case AH_ESP_V6_FLOW: + case AH_V6_FLOW: + case ESP_V6_FLOW: + case IP_USER_FLOW: + case ETHER_FLOW: + /* RSS is not supported for these protocols */ + if (info->data) { + DP(BNX2X_MSG_ETHTOOL, + "Command parameters not supported\n"); + return -EINVAL; + } else { + return 0; + } + default: + return -EINVAL; + } +} + +static int bnx2x_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info) +{ + struct bnx2x *bp = netdev_priv(dev); + + switch (info->cmd) { + case ETHTOOL_SRXFH: + return bnx2x_set_rss_flags(bp, info); default: DP(BNX2X_MSG_ETHTOOL, "Command parameters not supported\n"); return -EOPNOTSUPP; @@ -2424,7 +2846,6 @@ static int bnx2x_set_rxfh_indir(struct net_device *dev, const u32 *indir) { struct bnx2x *bp = netdev_priv(dev); size_t i; - u8 ind_table[T_ETH_INDIRECTION_TABLE_SIZE] = {0}; for (i = 0; i < T_ETH_INDIRECTION_TABLE_SIZE; i++) { /* @@ -2436,10 +2857,88 @@ static int bnx2x_set_rxfh_indir(struct net_device *dev, const u32 *indir) * align the received table to the Client ID of the leading RSS * queue */ - ind_table[i] = indir[i] + bp->fp->cl_id; + bp->rss_conf_obj.ind_table[i] = indir[i] + bp->fp->cl_id; } - return bnx2x_config_rss_eth(bp, ind_table, false); + return bnx2x_config_rss_eth(bp, false); +} + +/** + * bnx2x_get_channels - gets the number of RSS queues. + * + * @dev: net device + * @channels: returns the number of max / current queues + */ +static void bnx2x_get_channels(struct net_device *dev, + struct ethtool_channels *channels) +{ + struct bnx2x *bp = netdev_priv(dev); + + channels->max_combined = BNX2X_MAX_RSS_COUNT(bp); + channels->combined_count = BNX2X_NUM_ETH_QUEUES(bp); +} + +/** + * bnx2x_change_num_queues - change the number of RSS queues. + * + * @bp: bnx2x private structure + * + * Re-configure interrupt mode to get the new number of MSI-X + * vectors and re-add NAPI objects. 
+ */ +static void bnx2x_change_num_queues(struct bnx2x *bp, int num_rss) +{ + bnx2x_del_all_napi(bp); + bnx2x_disable_msi(bp); + BNX2X_NUM_QUEUES(bp) = num_rss + NON_ETH_CONTEXT_USE; + bnx2x_set_int_mode(bp); + bnx2x_add_all_napi(bp); +} + +/** + * bnx2x_set_channels - sets the number of RSS queues. + * + * @dev: net device + * @channels: includes the number of queues requested + */ +static int bnx2x_set_channels(struct net_device *dev, + struct ethtool_channels *channels) +{ + struct bnx2x *bp = netdev_priv(dev); + + + DP(BNX2X_MSG_ETHTOOL, + "set-channels command parameters: rx = %d, tx = %d, other = %d, combined = %d\n", + channels->rx_count, channels->tx_count, channels->other_count, + channels->combined_count); + + /* We don't support separate rx / tx channels. + * We don't allow setting 'other' channels. + */ + if (channels->rx_count || channels->tx_count || channels->other_count + || (channels->combined_count == 0) || + (channels->combined_count > BNX2X_MAX_RSS_COUNT(bp))) { + DP(BNX2X_MSG_ETHTOOL, "command parameters not supported\n"); + return -EINVAL; + } + + /* Check if there was a change in the active parameters */ + if (channels->combined_count == BNX2X_NUM_ETH_QUEUES(bp)) { + DP(BNX2X_MSG_ETHTOOL, "No change in active parameters\n"); + return 0; + } + + /* Set the requested number of queues in bp context. + * Note that the actual number of queues created during load may be + * less than requested if memory is low. + */ + if (unlikely(!netif_running(dev))) { + bnx2x_change_num_queues(bp, channels->combined_count); + return 0; + } + bnx2x_nic_unload(bp, UNLOAD_NORMAL); + bnx2x_change_num_queues(bp, channels->combined_count); + return bnx2x_nic_load(bp, LOAD_NORMAL); } static const struct ethtool_ops bnx2x_ethtool_ops = { @@ -2469,9 +2968,17 @@ static const struct ethtool_ops bnx2x_ethtool_ops = { .set_phys_id = bnx2x_set_phys_id, .get_ethtool_stats = bnx2x_get_ethtool_stats, .get_rxnfc = bnx2x_get_rxnfc, + .set_rxnfc = bnx2x_set_rxnfc, .get_rxfh_indir_size = bnx2x_get_rxfh_indir_size, .get_rxfh_indir = bnx2x_get_rxfh_indir, .set_rxfh_indir = bnx2x_set_rxfh_indir, + .get_channels = bnx2x_get_channels, + .set_channels = bnx2x_set_channels, + .get_module_info = bnx2x_get_module_info, + .get_module_eeprom = bnx2x_get_module_eeprom, + .get_eee = bnx2x_get_eee, + .set_eee = bnx2x_set_eee, + .get_ts_info = ethtool_op_get_ts_info, }; void bnx2x_set_ethtool_ops(struct net_device *netdev) diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h index 426f77aa721a..bbc66ced9c25 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h @@ -321,9 +321,7 @@ #define DISABLE_STATISTIC_COUNTER_ID_VALUE 0 -/** - * This file defines HSI constants common to all microcode flows - */ +/* This file defines HSI constants common to all microcode flows */ #define PROTOCOL_STATE_BIT_OFFSET 6 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h index a440a8ba85f2..76b6e65790f8 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h @@ -10,6 +10,7 @@ #define BNX2X_HSI_H #include "bnx2x_fw_defs.h" +#include "bnx2x_mfw_req.h" #define FW_ENCODE_32BIT_PATTERN 0x1e1e1e1e @@ -33,12 +34,6 @@ struct license_key { u32 reserved_b[4]; }; - -#define PORT_0 0 -#define PORT_1 1 -#define PORT_MAX 2 -#define NVM_PATH_MAX 2 - 
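For illustration only, not code from this series: the get_channels/set_channels handlers registered above in bnx2x_ethtool_ops are reached from userspace through the SIOCETHTOOL ioctl, which is what `ethtool -L <dev> combined <n>` issues. A minimal userspace sketch (the interface name "eth0" and the queue count 4 are placeholders, not values from the patch), showing the ETHTOOL_GCHANNELS read followed by the ETHTOOL_SCHANNELS write that bnx2x_set_channels() validates:

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int set_combined_queues(const char *ifname, unsigned int n)
{
	struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
	struct ifreq ifr;
	int fd, rc;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&ch;

	/* Read max_combined / combined_count, which bnx2x_get_channels()
	 * fills from BNX2X_MAX_RSS_COUNT / BNX2X_NUM_ETH_QUEUES.
	 */
	rc = ioctl(fd, SIOCETHTOOL, &ifr);
	if (rc == 0 && n > 0 && n <= ch.max_combined) {
		/* bnx2x_set_channels() rejects non-zero rx/tx/other counts,
		 * so only combined_count is changed here.
		 */
		ch.cmd = ETHTOOL_SCHANNELS;
		ch.combined_count = n;
		rc = ioctl(fd, SIOCETHTOOL, &ifr);
	} else if (rc == 0) {
		rc = -1;	/* requested count out of range */
	}

	close(fd);
	return rc;
}

int main(void)
{
	return set_combined_queues("eth0", 4) ? 1 : 0;
}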
/**************************************************************************** * Shared HW configuration * ****************************************************************************/ @@ -1067,8 +1062,18 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ uses the same defines as link_config */ u32 mfw_wol_link_cfg2; /* 0x480 */ - u32 Reserved2[17]; /* 0x484 */ + /* EEE power saving mode */ + u32 eee_power_mode; /* 0x484 */ + #define PORT_FEAT_CFG_EEE_POWER_MODE_MASK 0x000000FF + #define PORT_FEAT_CFG_EEE_POWER_MODE_SHIFT 0 + #define PORT_FEAT_CFG_EEE_POWER_MODE_DISABLED 0x00000000 + #define PORT_FEAT_CFG_EEE_POWER_MODE_BALANCED 0x00000001 + #define PORT_FEAT_CFG_EEE_POWER_MODE_AGGRESSIVE 0x00000002 + #define PORT_FEAT_CFG_EEE_POWER_MODE_LOW_LATENCY 0x00000003 + + + u32 Reserved2[16]; /* 0x488 */ }; @@ -1140,6 +1145,7 @@ struct drv_port_mb { u32 link_status; /* Driver should update this field on any link change event */ + #define LINK_STATUS_NONE (0<<0) #define LINK_STATUS_LINK_FLAG_MASK 0x00000001 #define LINK_STATUS_LINK_UP 0x00000001 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK 0x0000001E @@ -1197,6 +1203,7 @@ struct drv_port_mb { #define LINK_STATUS_PFC_ENABLED 0x20000000 #define LINK_STATUS_PHYSICAL_LINK_FLAG 0x40000000 + #define LINK_STATUS_SFP_TX_FAULT 0x80000000 u32 port_stx; @@ -1240,9 +1247,11 @@ struct drv_func_mb { #define REQ_BC_VER_4_VRFY_AFEX_SUPPORTED 0x00070002 #define REQ_BC_VER_4_SFP_TX_DISABLE_SUPPORTED 0x00070014 #define REQ_BC_VER_4_PFC_STATS_SUPPORTED 0x00070201 + #define REQ_BC_VER_4_FCOE_FEATURES 0x00070209 #define DRV_MSG_CODE_DCBX_ADMIN_PMF_MSG 0xb0000000 #define DRV_MSG_CODE_DCBX_PMF_DRV_OK 0xb2000000 + #define REQ_BC_VER_4_DCBX_ADMIN_MSG_NON_PMF 0x00070401 #define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000 @@ -1255,6 +1264,8 @@ struct drv_func_mb { #define DRV_MSG_CODE_DRV_INFO_ACK 0xd8000000 #define DRV_MSG_CODE_DRV_INFO_NACK 0xd9000000 + #define DRV_MSG_CODE_EEE_RESULTS_ACK 0xda000000 + #define DRV_MSG_CODE_SET_MF_BW 0xe0000000 #define REQ_BC_VER_4_SET_MF_BW 0x00060202 #define DRV_MSG_CODE_SET_MF_BW_ACK 0xe1000000 @@ -1320,6 +1331,8 @@ struct drv_func_mb { #define FW_MSG_CODE_DRV_INFO_ACK 0xd8100000 #define FW_MSG_CODE_DRV_INFO_NACK 0xd9100000 + #define FW_MSG_CODE_EEE_RESULS_ACK 0xda100000 + #define FW_MSG_CODE_SET_MF_BW_SENT 0xe0000000 #define FW_MSG_CODE_SET_MF_BW_DONE 0xe1000000 @@ -1383,6 +1396,8 @@ struct drv_func_mb { #define DRV_STATUS_DRV_INFO_REQ 0x04000000 + #define DRV_STATUS_EEE_NEGOTIATION_RESULTS 0x08000000 + u32 virt_mac_upper; #define VIRT_MAC_SIGN_MASK 0xffff0000 #define VIRT_MAC_SIGNATURE 0x564d0000 @@ -1613,6 +1628,11 @@ struct fw_flr_mb { struct fw_flr_ack ack; }; +struct eee_remote_vals { + u32 tx_tw; + u32 rx_tw; +}; + /**** SUPPORT FOR SHMEM ARRRAYS *** * The SHMEM HSI is aligned on 32 bit boundaries which makes it difficult to * define arrays with storage types smaller then unsigned dwords. @@ -2053,6 +2073,41 @@ struct shmem2_region { #define DRV_INFO_CONTROL_OP_CODE_MASK 0x0000ff00 #define DRV_INFO_CONTROL_OP_CODE_SHIFT 8 u32 ibft_host_addr; /* initialized by option ROM */ + struct eee_remote_vals eee_remote_vals[PORT_MAX]; + u32 reserved[E2_FUNC_MAX]; + + + /* the status of EEE auto-negotiation + * bits 15:0 the configured tx-lpi entry timer value. Depends on bit 31. + * bits 19:16 the supported modes for EEE. + * bits 23:20 the speeds advertised for EEE. + * bits 27:24 the speeds the Link partner advertised for EEE. + * The supported/adv. 
modes in bits 27:19 originate from the + * SHMEM_EEE_XXX_ADV definitions (where XXX is replaced by speed). + * bit 28 when 1'b1 EEE was requested. + * bit 29 when 1'b1 tx lpi was requested. + * bit 30 when 1'b1 EEE was negotiated. Tx lpi will be asserted iff + * 30:29 are 2'b11. + * bit 31 when 1'b0 bits 15:0 contain a PORT_FEAT_CFG_EEE_ define as + * value. When 1'b1 those bits contains a value times 16 microseconds. + */ + u32 eee_status[PORT_MAX]; + #define SHMEM_EEE_TIMER_MASK 0x0000ffff + #define SHMEM_EEE_SUPPORTED_MASK 0x000f0000 + #define SHMEM_EEE_SUPPORTED_SHIFT 16 + #define SHMEM_EEE_ADV_STATUS_MASK 0x00f00000 + #define SHMEM_EEE_100M_ADV (1<<0) + #define SHMEM_EEE_1G_ADV (1<<1) + #define SHMEM_EEE_10G_ADV (1<<2) + #define SHMEM_EEE_ADV_STATUS_SHIFT 20 + #define SHMEM_EEE_LP_ADV_STATUS_MASK 0x0f000000 + #define SHMEM_EEE_LP_ADV_STATUS_SHIFT 24 + #define SHMEM_EEE_REQUESTED_BIT 0x10000000 + #define SHMEM_EEE_LPI_REQUESTED_BIT 0x20000000 + #define SHMEM_EEE_ACTIVE_BIT 0x40000000 + #define SHMEM_EEE_TIME_OUTPUT_BIT 0x80000000 + + u32 sizeof_port_stats; }; @@ -2599,6 +2654,9 @@ struct host_port_stats { u32 pfc_frames_tx_lo; u32 pfc_frames_rx_hi; u32 pfc_frames_rx_lo; + + u32 eee_lpi_count_hi; + u32 eee_lpi_count_lo; }; @@ -2638,118 +2696,6 @@ struct host_func_stats { /* VIC definitions */ #define VICSTATST_UIF_INDEX 2 -/* current drv_info version */ -#define DRV_INFO_CUR_VER 1 - -/* drv_info op codes supported */ -enum drv_info_opcode { - ETH_STATS_OPCODE, - FCOE_STATS_OPCODE, - ISCSI_STATS_OPCODE -}; - -#define ETH_STAT_INFO_VERSION_LEN 12 -/* Per PCI Function Ethernet Statistics required from the driver */ -struct eth_stats_info { - /* Function's Driver Version. padded to 12 */ - u8 version[ETH_STAT_INFO_VERSION_LEN]; - /* Locally Admin Addr. BigEndian EIU48. Actual size is 6 bytes */ - u8 mac_local[8]; - u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. */ - u8 mac_add2[8]; /* Additional Programmed MAC Addr 2. */ - u32 mtu_size; /* MTU Size. Note : Negotiated MTU */ - u32 feature_flags; /* Feature_Flags. */ -#define FEATURE_ETH_CHKSUM_OFFLOAD_MASK 0x01 -#define FEATURE_ETH_LSO_MASK 0x02 -#define FEATURE_ETH_BOOTMODE_MASK 0x1C -#define FEATURE_ETH_BOOTMODE_SHIFT 2 -#define FEATURE_ETH_BOOTMODE_NONE (0x0 << 2) -#define FEATURE_ETH_BOOTMODE_PXE (0x1 << 2) -#define FEATURE_ETH_BOOTMODE_ISCSI (0x2 << 2) -#define FEATURE_ETH_BOOTMODE_FCOE (0x3 << 2) -#define FEATURE_ETH_TOE_MASK 0x20 - u32 lso_max_size; /* LSO MaxOffloadSize. */ - u32 lso_min_seg_cnt; /* LSO MinSegmentCount. */ - /* Num Offloaded Connections TCP_IPv4. */ - u32 ipv4_ofld_cnt; - /* Num Offloaded Connections TCP_IPv6. */ - u32 ipv6_ofld_cnt; - u32 promiscuous_mode; /* Promiscuous Mode. non-zero true */ - u32 txq_size; /* TX Descriptors Queue Size */ - u32 rxq_size; /* RX Descriptors Queue Size */ - /* TX Descriptor Queue Avg Depth. % Avg Queue Depth since last poll */ - u32 txq_avg_depth; - /* RX Descriptors Queue Avg Depth. % Avg Queue Depth since last poll */ - u32 rxq_avg_depth; - /* IOV_Offload. 0=none; 1=MultiQueue, 2=VEB 3= VEPA*/ - u32 iov_offload; - /* Number of NetQueue/VMQ Config'd. */ - u32 netq_cnt; - u32 vf_cnt; /* Num VF assigned to this PF. */ -}; - -/* Per PCI Function FCOE Statistics required from the driver */ -struct fcoe_stats_info { - u8 version[12]; /* Function's Driver Version. */ - u8 mac_local[8]; /* Locally Admin Addr. */ - u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. */ - u8 mac_add2[8]; /* Additional Programmed MAC Addr 2. */ - /* QoS Priority (per 802.1p). 
0-7255 */ - u32 qos_priority; - u32 txq_size; /* FCoE TX Descriptors Queue Size. */ - u32 rxq_size; /* FCoE RX Descriptors Queue Size. */ - /* FCoE TX Descriptor Queue Avg Depth. */ - u32 txq_avg_depth; - /* FCoE RX Descriptors Queue Avg Depth. */ - u32 rxq_avg_depth; - u32 rx_frames_lo; /* FCoE RX Frames received. */ - u32 rx_frames_hi; /* FCoE RX Frames received. */ - u32 rx_bytes_lo; /* FCoE RX Bytes received. */ - u32 rx_bytes_hi; /* FCoE RX Bytes received. */ - u32 tx_frames_lo; /* FCoE TX Frames sent. */ - u32 tx_frames_hi; /* FCoE TX Frames sent. */ - u32 tx_bytes_lo; /* FCoE TX Bytes sent. */ - u32 tx_bytes_hi; /* FCoE TX Bytes sent. */ -}; - -/* Per PCI Function iSCSI Statistics required from the driver*/ -struct iscsi_stats_info { - u8 version[12]; /* Function's Driver Version. */ - u8 mac_local[8]; /* Locally Admin iSCSI MAC Addr. */ - u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. */ - /* QoS Priority (per 802.1p). 0-7255 */ - u32 qos_priority; - u8 initiator_name[64]; /* iSCSI Boot Initiator Node name. */ - u8 ww_port_name[64]; /* iSCSI World wide port name */ - u8 boot_target_name[64];/* iSCSI Boot Target Name. */ - u8 boot_target_ip[16]; /* iSCSI Boot Target IP. */ - u32 boot_target_portal; /* iSCSI Boot Target Portal. */ - u8 boot_init_ip[16]; /* iSCSI Boot Initiator IP Address. */ - u32 max_frame_size; /* Max Frame Size. bytes */ - u32 txq_size; /* PDU TX Descriptors Queue Size. */ - u32 rxq_size; /* PDU RX Descriptors Queue Size. */ - u32 txq_avg_depth; /* PDU TX Descriptor Queue Avg Depth. */ - u32 rxq_avg_depth; /* PDU RX Descriptors Queue Avg Depth. */ - u32 rx_pdus_lo; /* iSCSI PDUs received. */ - u32 rx_pdus_hi; /* iSCSI PDUs received. */ - u32 rx_bytes_lo; /* iSCSI RX Bytes received. */ - u32 rx_bytes_hi; /* iSCSI RX Bytes received. */ - u32 tx_pdus_lo; /* iSCSI PDUs sent. */ - u32 tx_pdus_hi; /* iSCSI PDUs sent. */ - u32 tx_bytes_lo; /* iSCSI PDU TX Bytes sent. */ - u32 tx_bytes_hi; /* iSCSI PDU TX Bytes sent. */ - u32 pcp_prior_map_tbl; /* C-PCP to S-PCP Priority MapTable. - * 9 nibbles, the position of each nibble - * represents the C-PCP value, the value - * of the nibble = S-PCP value. - */ -}; - -union drv_info_to_mcp { - struct eth_stats_info ether_stat; - struct fcoe_stats_info fcoe_stat; - struct iscsi_stats_info iscsi_stat; -}; /* stats collected for afex. * NOTE: structure is exactly as expected to be received by the switch. 
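For illustration only, not driver code: the eee_status[] words added to shmem2_region above pack the LPI idle timer, the supported/advertised/link-partner speed sets and the negotiation state into a single u32, as the comment and the SHMEM_EEE_* defines describe. A standalone sketch that decodes such a word (mask values are restated locally so it compiles on its own; the sample value is arbitrary):

#include <stdio.h>
#include <stdint.h>

/* Local copies of the layout documented above (SHMEM_EEE_*). */
#define EEE_TIMER_MASK		0x0000ffffu	/* bits 15:0  */
#define EEE_SUPPORTED_SHIFT	16		/* bits 19:16 */
#define EEE_ADV_SHIFT		20		/* bits 23:20 */
#define EEE_LP_ADV_SHIFT	24		/* bits 27:24 */
#define EEE_LPI_REQUESTED_BIT	0x20000000u	/* bit 29 */
#define EEE_ACTIVE_BIT		0x40000000u	/* bit 30 */
#define EEE_TIME_OUTPUT_BIT	0x80000000u	/* bit 31 */

static void print_speeds(const char *tag, uint32_t nibble)
{
	/* Speed bits follow SHMEM_EEE_100M_ADV / _1G_ADV / _10G_ADV. */
	printf("%-14s:%s%s%s\n", tag,
	       nibble & (1u << 0) ? " 100M" : "",
	       nibble & (1u << 1) ? " 1G" : "",
	       nibble & (1u << 2) ? " 10G" : "");
}

static void dump_eee_status(uint32_t eee_status)
{
	/* Bit 31 selects whether bits 15:0 hold an nvram mode define or a
	 * timer value in units of 16 usec.
	 */
	printf("%-14s: 0x%x (%s)\n", "timer field",
	       eee_status & EEE_TIMER_MASK,
	       eee_status & EEE_TIME_OUTPUT_BIT ? "x16 usec" : "nvram mode");
	print_speeds("supported", (eee_status >> EEE_SUPPORTED_SHIFT) & 0xf);
	print_speeds("advertised", (eee_status >> EEE_ADV_SHIFT) & 0xf);
	print_speeds("lp advertised", (eee_status >> EEE_LP_ADV_SHIFT) & 0xf);
	/* Tx LPI is asserted only when bits 30:29 are both set. */
	printf("%-14s: %s\n", "tx lpi active",
	       (eee_status & EEE_ACTIVE_BIT) &&
	       (eee_status & EEE_LPI_REQUESTED_BIT) ? "yes" : "no");
}

int main(void)
{
	dump_eee_status(0xf6670080);	/* arbitrary sample value */
	return 0;
}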
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c index 6e7d5c0843b4..f4beb46c4709 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c @@ -285,7 +285,6 @@ #define ETS_E3B0_PBF_MIN_W_VAL (10000) #define MAX_PACKET_SIZE (9700) -#define WC_UC_TIMEOUT 100 #define MAX_KR_LINK_RETRY 4 /**********************************************************/ @@ -1306,6 +1305,94 @@ int bnx2x_ets_strict(const struct link_params *params, const u8 strict_cos) return 0; } + +/******************************************************************/ +/* EEE section */ +/******************************************************************/ +static u8 bnx2x_eee_has_cap(struct link_params *params) +{ + struct bnx2x *bp = params->bp; + + if (REG_RD(bp, params->shmem2_base) <= + offsetof(struct shmem2_region, eee_status[params->port])) + return 0; + + return 1; +} + +static int bnx2x_eee_nvram_to_time(u32 nvram_mode, u32 *idle_timer) +{ + switch (nvram_mode) { + case PORT_FEAT_CFG_EEE_POWER_MODE_BALANCED: + *idle_timer = EEE_MODE_NVRAM_BALANCED_TIME; + break; + case PORT_FEAT_CFG_EEE_POWER_MODE_AGGRESSIVE: + *idle_timer = EEE_MODE_NVRAM_AGGRESSIVE_TIME; + break; + case PORT_FEAT_CFG_EEE_POWER_MODE_LOW_LATENCY: + *idle_timer = EEE_MODE_NVRAM_LATENCY_TIME; + break; + default: + *idle_timer = 0; + break; + } + + return 0; +} + +static int bnx2x_eee_time_to_nvram(u32 idle_timer, u32 *nvram_mode) +{ + switch (idle_timer) { + case EEE_MODE_NVRAM_BALANCED_TIME: + *nvram_mode = PORT_FEAT_CFG_EEE_POWER_MODE_BALANCED; + break; + case EEE_MODE_NVRAM_AGGRESSIVE_TIME: + *nvram_mode = PORT_FEAT_CFG_EEE_POWER_MODE_AGGRESSIVE; + break; + case EEE_MODE_NVRAM_LATENCY_TIME: + *nvram_mode = PORT_FEAT_CFG_EEE_POWER_MODE_LOW_LATENCY; + break; + default: + *nvram_mode = PORT_FEAT_CFG_EEE_POWER_MODE_DISABLED; + break; + } + + return 0; +} + +static u32 bnx2x_eee_calc_timer(struct link_params *params) +{ + u32 eee_mode, eee_idle; + struct bnx2x *bp = params->bp; + + if (params->eee_mode & EEE_MODE_OVERRIDE_NVRAM) { + if (params->eee_mode & EEE_MODE_OUTPUT_TIME) { + /* time value in eee_mode --> used directly*/ + eee_idle = params->eee_mode & EEE_MODE_TIMER_MASK; + } else { + /* hsi value in eee_mode --> time */ + if (bnx2x_eee_nvram_to_time(params->eee_mode & + EEE_MODE_NVRAM_MASK, + &eee_idle)) + return 0; + } + } else { + /* hsi values in nvram --> time*/ + eee_mode = ((REG_RD(bp, params->shmem_base + + offsetof(struct shmem_region, dev_info. + port_feature_config[params->port]. 
+ eee_power_mode)) & + PORT_FEAT_CFG_EEE_POWER_MODE_MASK) >> + PORT_FEAT_CFG_EEE_POWER_MODE_SHIFT); + + if (bnx2x_eee_nvram_to_time(eee_mode, &eee_idle)) + return 0; + } + + return eee_idle; +} + + /******************************************************************/ /* PFC section */ /******************************************************************/ @@ -1540,7 +1627,7 @@ static void bnx2x_umac_enable(struct link_params *params, /* Reset UMAC */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR, (MISC_REGISTERS_RESET_REG_2_UMAC0 << params->port)); - usleep_range(1000, 1000); + usleep_range(1000, 2000); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_SET, (MISC_REGISTERS_RESET_REG_2_UMAC0 << params->port)); @@ -1631,7 +1718,7 @@ static void bnx2x_xmac_init(struct link_params *params, u32 max_speed) * ports of the path */ - if ((CHIP_NUM(bp) == CHIP_NUM_57840) && + if ((CHIP_NUM(bp) == CHIP_NUM_57840_4_10) && (REG_RD(bp, MISC_REG_RESET_REG_2) & MISC_REGISTERS_RESET_REG_2_XMAC)) { DP(NETIF_MSG_LINK, @@ -1642,7 +1729,7 @@ static void bnx2x_xmac_init(struct link_params *params, u32 max_speed) /* Hard reset */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR, MISC_REGISTERS_RESET_REG_2_XMAC); - usleep_range(1000, 1000); + usleep_range(1000, 2000); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_SET, MISC_REGISTERS_RESET_REG_2_XMAC); @@ -1672,7 +1759,7 @@ static void bnx2x_xmac_init(struct link_params *params, u32 max_speed) /* Soft reset */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR, MISC_REGISTERS_RESET_REG_2_XMAC_SOFT); - usleep_range(1000, 1000); + usleep_range(1000, 2000); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_SET, MISC_REGISTERS_RESET_REG_2_XMAC_SOFT); @@ -1730,6 +1817,14 @@ static int bnx2x_xmac_enable(struct link_params *params, /* update PFC */ bnx2x_update_pfc_xmac(params, vars, 0); + if (vars->eee_status & SHMEM_EEE_ADV_STATUS_MASK) { + DP(NETIF_MSG_LINK, "Setting XMAC for EEE\n"); + REG_WR(bp, xmac_base + XMAC_REG_EEE_TIMERS_HI, 0x1380008); + REG_WR(bp, xmac_base + XMAC_REG_EEE_CTRL, 0x1); + } else { + REG_WR(bp, xmac_base + XMAC_REG_EEE_CTRL, 0x0); + } + /* Enable TX and RX */ val = XMAC_CTRL_REG_TX_EN | XMAC_CTRL_REG_RX_EN; @@ -1785,11 +1880,6 @@ static int bnx2x_emac_enable(struct link_params *params, bnx2x_bits_en(bp, emac_base + EMAC_REG_EMAC_TX_MODE, EMAC_TX_MODE_RESET); - if (CHIP_REV_IS_SLOW(bp)) { - /* config GMII mode */ - val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MODE); - EMAC_WR(bp, EMAC_REG_EMAC_MODE, (val | EMAC_MODE_PORT_GMII)); - } else { /* ASIC */ /* pause enable/disable */ bnx2x_bits_dis(bp, emac_base + EMAC_REG_EMAC_RX_MODE, EMAC_RX_MODE_FLOW_EN); @@ -1812,7 +1902,6 @@ static int bnx2x_emac_enable(struct link_params *params, } else bnx2x_bits_en(bp, emac_base + EMAC_REG_EMAC_TX_MODE, EMAC_TX_MODE_FLOW_EN); - } /* KEEP_VLAN_TAG, promiscuous */ val = REG_RD(bp, emac_base + EMAC_REG_EMAC_RX_MODE); @@ -1851,23 +1940,23 @@ static int bnx2x_emac_enable(struct link_params *params, val &= ~0x810; EMAC_WR(bp, EMAC_REG_EMAC_MODE, val); - /* enable emac */ + /* Enable emac */ REG_WR(bp, NIG_REG_NIG_EMAC0_EN + port*4, 1); - /* enable emac for jumbo packets */ + /* Enable emac for jumbo packets */ EMAC_WR(bp, EMAC_REG_EMAC_RX_MTU_SIZE, (EMAC_RX_MTU_SIZE_JUMBO_ENA | (ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD))); - /* strip CRC */ + /* Strip CRC */ REG_WR(bp, NIG_REG_NIG_INGRESS_EMAC0_NO_CRC + port*4, 0x1); - /* disable the NIG in/out to the bmac */ + /* Disable the NIG in/out to the bmac */ REG_WR(bp, 
NIG_REG_BMAC0_IN_EN + port*4, 0x0); REG_WR(bp, NIG_REG_BMAC0_PAUSE_OUT_EN + port*4, 0x0); REG_WR(bp, NIG_REG_BMAC0_OUT_EN + port*4, 0x0); - /* enable the NIG in/out to the emac */ + /* Enable the NIG in/out to the emac */ REG_WR(bp, NIG_REG_EMAC0_IN_EN + port*4, 0x1); val = 0; if ((params->feature_config_flags & @@ -1902,7 +1991,7 @@ static void bnx2x_update_pfc_bmac1(struct link_params *params, wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_RX_CONTROL, wb_data, 2); - /* tx control */ + /* TX control */ val = 0xc0; if (!(params->feature_config_flags & FEATURE_CONFIG_PFC_ENABLED) && @@ -1962,7 +2051,7 @@ static void bnx2x_update_pfc_bmac2(struct link_params *params, wb_data[0] &= ~(1<<2); } else { DP(NETIF_MSG_LINK, "PFC is disabled\n"); - /* disable PFC RX & TX & STATS and set 8 COS */ + /* Disable PFC RX & TX & STATS and set 8 COS */ wb_data[0] = 0x8; wb_data[1] = 0; } @@ -2056,7 +2145,7 @@ static int bnx2x_pfc_brb_get_config_params( PFC_E2_BRB_MAC_FULL_XOFF_THR_PAUSE; config_val->pauseable_th.full_xon = PFC_E2_BRB_MAC_FULL_XON_THR_PAUSE; - /* non pause able*/ + /* Non pause able*/ config_val->non_pauseable_th.pause_xoff = PFC_E2_BRB_MAC_PAUSE_XOFF_THR_NON_PAUSE; config_val->non_pauseable_th.pause_xon = @@ -2084,7 +2173,7 @@ static int bnx2x_pfc_brb_get_config_params( PFC_E3A0_BRB_MAC_FULL_XOFF_THR_PAUSE; config_val->pauseable_th.full_xon = PFC_E3A0_BRB_MAC_FULL_XON_THR_PAUSE; - /* non pause able*/ + /* Non pause able*/ config_val->non_pauseable_th.pause_xoff = PFC_E3A0_BRB_MAC_PAUSE_XOFF_THR_NON_PAUSE; config_val->non_pauseable_th.pause_xon = @@ -2114,7 +2203,7 @@ static int bnx2x_pfc_brb_get_config_params( PFC_E3B0_4P_BRB_MAC_FULL_XOFF_THR_PAUSE; config_val->pauseable_th.full_xon = PFC_E3B0_4P_BRB_MAC_FULL_XON_THR_PAUSE; - /* non pause able*/ + /* Non pause able*/ config_val->non_pauseable_th.pause_xoff = PFC_E3B0_4P_BRB_MAC_PAUSE_XOFF_THR_NON_PAUSE; config_val->non_pauseable_th.pause_xon = @@ -2132,7 +2221,7 @@ static int bnx2x_pfc_brb_get_config_params( PFC_E3B0_2P_BRB_MAC_FULL_XOFF_THR_PAUSE; config_val->pauseable_th.full_xon = PFC_E3B0_2P_BRB_MAC_FULL_XON_THR_PAUSE; - /* non pause able*/ + /* Non pause able*/ config_val->non_pauseable_th.pause_xoff = PFC_E3B0_2P_BRB_MAC_PAUSE_XOFF_THR_NON_PAUSE; config_val->non_pauseable_th.pause_xon = @@ -2189,7 +2278,7 @@ static void bnx2x_pfc_brb_get_e3b0_config_params( if (pfc_params->cos0_pauseable != pfc_params->cos1_pauseable) { - /* nonpauseable= Lossy + pauseable = Lossless*/ + /* Nonpauseable= Lossy + pauseable = Lossless*/ e3b0_val->lb_guarantied = PFC_E3B0_2P_MIX_PAUSE_LB_GUART; e3b0_val->mac_0_class_t_guarantied = @@ -2388,9 +2477,9 @@ static int bnx2x_update_pfc_brb(struct link_params *params, * This function is needed because NIG ARB_CREDIT_WEIGHT_X are * not continues and ARB_CREDIT_WEIGHT_0 + offset is suitable. 
******************************************************************************/ -int bnx2x_pfc_nig_rx_priority_mask(struct bnx2x *bp, - u8 cos_entry, - u32 priority_mask, u8 port) +static int bnx2x_pfc_nig_rx_priority_mask(struct bnx2x *bp, + u8 cos_entry, + u32 priority_mask, u8 port) { u32 nig_reg_rx_priority_mask_add = 0; @@ -2440,6 +2529,16 @@ static void bnx2x_update_mng(struct link_params *params, u32 link_status) port_mb[params->port].link_status), link_status); } +static void bnx2x_update_mng_eee(struct link_params *params, u32 eee_status) +{ + struct bnx2x *bp = params->bp; + + if (bnx2x_eee_has_cap(params)) + REG_WR(bp, params->shmem2_base + + offsetof(struct shmem2_region, + eee_status[params->port]), eee_status); +} + static void bnx2x_update_pfc_nig(struct link_params *params, struct link_vars *vars, struct bnx2x_nig_brb_pfc_port_params *nig_params) @@ -2507,7 +2606,7 @@ static void bnx2x_update_pfc_nig(struct link_params *params, REG_WR(bp, port ? NIG_REG_LLFC_EGRESS_SRC_ENABLE_1 : NIG_REG_LLFC_EGRESS_SRC_ENABLE_0, 0x7); - /* output enable for RX_XCM # IF */ + /* Output enable for RX_XCM # IF */ REG_WR(bp, port ? NIG_REG_XCM1_OUT_EN : NIG_REG_XCM0_OUT_EN, xcm_out_en); @@ -2556,10 +2655,10 @@ int bnx2x_update_pfc(struct link_params *params, bnx2x_update_mng(params, vars->link_status); - /* update NIG params */ + /* Update NIG params */ bnx2x_update_pfc_nig(params, vars, pfc_params); - /* update BRB params */ + /* Update BRB params */ bnx2x_status = bnx2x_update_pfc_brb(params, vars, pfc_params); if (bnx2x_status) return bnx2x_status; @@ -2614,7 +2713,7 @@ static int bnx2x_bmac1_enable(struct link_params *params, REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_BMAC_XGXS_CONTROL, wb_data, 2); - /* tx MAC SA */ + /* TX MAC SA */ wb_data[0] = ((params->mac_addr[2] << 24) | (params->mac_addr[3] << 16) | (params->mac_addr[4] << 8) | @@ -2623,7 +2722,7 @@ static int bnx2x_bmac1_enable(struct link_params *params, params->mac_addr[1]); REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_TX_SOURCE_ADDR, wb_data, 2); - /* mac control */ + /* MAC control */ val = 0x3; if (is_lb) { val |= 0x4; @@ -2633,24 +2732,24 @@ static int bnx2x_bmac1_enable(struct link_params *params, wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL, wb_data, 2); - /* set rx mtu */ + /* Set rx mtu */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_RX_MAX_SIZE, wb_data, 2); bnx2x_update_pfc_bmac1(params, vars); - /* set tx mtu */ + /* Set tx mtu */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_TX_MAX_SIZE, wb_data, 2); - /* set cnt max size */ + /* Set cnt max size */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_CNT_MAX_SIZE, wb_data, 2); - /* configure safc */ + /* Configure SAFC */ wb_data[0] = 0x1000200; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_RX_LLFC_MSG_FLDS, @@ -2684,7 +2783,7 @@ static int bnx2x_bmac2_enable(struct link_params *params, udelay(30); - /* tx MAC SA */ + /* TX MAC SA */ wb_data[0] = ((params->mac_addr[2] << 24) | (params->mac_addr[3] << 16) | (params->mac_addr[4] << 8) | @@ -2703,18 +2802,18 @@ static int bnx2x_bmac2_enable(struct link_params *params, wb_data, 2); udelay(30); - /* set rx mtu */ + /* Set RX MTU */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC2_REGISTER_RX_MAX_SIZE, wb_data, 2); udelay(30); 
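For illustration only, not code from this series: bnx2x_eee_has_cap() and bnx2x_update_mng_eee() above guard accesses to the new shmem2 eee_status[] field by comparing the dword at the start of shmem2 (the region's size field) against offsetof(struct shmem2_region, eee_status[port]), so older bootcodes whose shmem2 region ends before that field are never written to. A self-contained sketch of the same size-versus-offset capability check, using a hypothetical trimmed-down layout:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for struct shmem2_region; only the field
 * ordering matters for the check.
 */
struct mock_shmem2 {
	uint32_t size;			/* written by the bootcode */
	uint32_t older_fields[64];	/* fields every bootcode version has */
	uint32_t eee_status[2];		/* newer addition, may lie past 'size' */
};

/* Same idea as bnx2x_eee_has_cap(): the field is usable only if the
 * advertised size extends beyond its offset.
 */
static int has_eee_status(uint32_t advertised_size, int port)
{
	return advertised_size >
	       offsetof(struct mock_shmem2, eee_status) +
	       port * sizeof(uint32_t);
}

int main(void)
{
	uint32_t old_bc = offsetof(struct mock_shmem2, eee_status);
	uint32_t new_bc = sizeof(struct mock_shmem2);

	printf("old bootcode (size=%u): eee_status usable: %d\n",
	       (unsigned int)old_bc, has_eee_status(old_bc, 0));
	printf("new bootcode (size=%u): eee_status usable: %d\n",
	       (unsigned int)new_bc, has_eee_status(new_bc, 0));
	return 0;
}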
- /* set tx mtu */ + /* Set TX MTU */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC2_REGISTER_TX_MAX_SIZE, wb_data, 2); udelay(30); - /* set cnt max size */ + /* Set cnt max size */ wb_data[0] = ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD - 2; wb_data[1] = 0; REG_WR_DMAE(bp, bmac_addr + BIGMAC2_REGISTER_CNT_MAX_SIZE, wb_data, 2); @@ -2732,15 +2831,15 @@ static int bnx2x_bmac_enable(struct link_params *params, u8 port = params->port; struct bnx2x *bp = params->bp; u32 val; - /* reset and unreset the BigMac */ + /* Reset and unreset the BigMac */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR, (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port)); - msleep(1); + usleep_range(1000, 2000); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_SET, (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port)); - /* enable access for bmac registers */ + /* Enable access for bmac registers */ REG_WR(bp, NIG_REG_BMAC0_REGS_OUT_EN + port*4, 0x1); /* Enable BMAC according to BMAC type*/ @@ -2798,7 +2897,7 @@ static void bnx2x_bmac_rx_disable(struct bnx2x *bp, u8 port) BIGMAC_REGISTER_BMAC_CONTROL, wb_data, 2); } - msleep(1); + usleep_range(1000, 2000); } } @@ -2810,17 +2909,16 @@ static int bnx2x_pbf_update(struct link_params *params, u32 flow_ctrl, u32 init_crd, crd; u32 count = 1000; - /* disable port */ + /* Disable port */ REG_WR(bp, PBF_REG_DISABLE_NEW_TASK_PROC_P0 + port*4, 0x1); - /* wait for init credit */ + /* Wait for init credit */ init_crd = REG_RD(bp, PBF_REG_P0_INIT_CRD + port*4); crd = REG_RD(bp, PBF_REG_P0_CREDIT + port*8); DP(NETIF_MSG_LINK, "init_crd 0x%x crd 0x%x\n", init_crd, crd); while ((init_crd != crd) && count) { - msleep(5); - + usleep_range(5000, 10000); crd = REG_RD(bp, PBF_REG_P0_CREDIT + port*8); count--; } @@ -2837,18 +2935,18 @@ static int bnx2x_pbf_update(struct link_params *params, u32 flow_ctrl, line_speed == SPEED_1000 || line_speed == SPEED_2500) { REG_WR(bp, PBF_REG_P0_PAUSE_ENABLE + port*4, 1); - /* update threshold */ + /* Update threshold */ REG_WR(bp, PBF_REG_P0_ARB_THRSH + port*4, 0); - /* update init credit */ + /* Update init credit */ init_crd = 778; /* (800-18-4) */ } else { u32 thresh = (ETH_MAX_JUMBO_PACKET_SIZE + ETH_OVREHEAD)/16; REG_WR(bp, PBF_REG_P0_PAUSE_ENABLE + port*4, 0); - /* update threshold */ + /* Update threshold */ REG_WR(bp, PBF_REG_P0_ARB_THRSH + port*4, thresh); - /* update init credit */ + /* Update init credit */ switch (line_speed) { case SPEED_10000: init_crd = thresh + 553 - 22; @@ -2863,12 +2961,12 @@ static int bnx2x_pbf_update(struct link_params *params, u32 flow_ctrl, DP(NETIF_MSG_LINK, "PBF updated to speed %d credit %d\n", line_speed, init_crd); - /* probe the credit changes */ + /* Probe the credit changes */ REG_WR(bp, PBF_REG_INIT_P0 + port*4, 0x1); - msleep(5); + usleep_range(5000, 10000); REG_WR(bp, PBF_REG_INIT_P0 + port*4, 0x0); - /* enable port */ + /* Enable port */ REG_WR(bp, PBF_REG_DISABLE_NEW_TASK_PROC_P0 + port*4, 0x0); return 0; } @@ -2935,7 +3033,7 @@ static int bnx2x_cl22_write(struct bnx2x *bp, REG_WR(bp, phy->mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, mode & ~EMAC_MDIO_MODE_CLAUSE_45); - /* address */ + /* Address */ tmp = ((phy->addr << 21) | (reg << 16) | val | EMAC_MDIO_COMM_COMMAND_WRITE_22 | EMAC_MDIO_COMM_START_BUSY); @@ -2971,7 +3069,7 @@ static int bnx2x_cl22_read(struct bnx2x *bp, REG_WR(bp, phy->mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, mode & ~EMAC_MDIO_MODE_CLAUSE_45); - /* address */ + /* Address */ val = ((phy->addr << 21) | (reg << 16) | 
EMAC_MDIO_COMM_COMMAND_READ_22 | EMAC_MDIO_COMM_START_BUSY); @@ -3009,7 +3107,7 @@ static int bnx2x_cl45_read(struct bnx2x *bp, struct bnx2x_phy *phy, if (phy->flags & FLAGS_MDC_MDIO_WA_B0) bnx2x_bits_en(bp, phy->mdio_ctrl + EMAC_REG_EMAC_MDIO_STATUS, EMAC_MDIO_STATUS_10MB); - /* address */ + /* Address */ val = ((phy->addr << 21) | (devad << 16) | reg | EMAC_MDIO_COMM_COMMAND_ADDRESS | EMAC_MDIO_COMM_START_BUSY); @@ -3030,7 +3128,7 @@ static int bnx2x_cl45_read(struct bnx2x *bp, struct bnx2x_phy *phy, *ret_val = 0; rc = -EFAULT; } else { - /* data */ + /* Data */ val = ((phy->addr << 21) | (devad << 16) | EMAC_MDIO_COMM_COMMAND_READ_45 | EMAC_MDIO_COMM_START_BUSY); @@ -3078,7 +3176,7 @@ static int bnx2x_cl45_write(struct bnx2x *bp, struct bnx2x_phy *phy, bnx2x_bits_en(bp, phy->mdio_ctrl + EMAC_REG_EMAC_MDIO_STATUS, EMAC_MDIO_STATUS_10MB); - /* address */ + /* Address */ tmp = ((phy->addr << 21) | (devad << 16) | reg | EMAC_MDIO_COMM_COMMAND_ADDRESS | EMAC_MDIO_COMM_START_BUSY); @@ -3098,7 +3196,7 @@ static int bnx2x_cl45_write(struct bnx2x *bp, struct bnx2x_phy *phy, netdev_err(bp->dev, "MDC/MDIO access timeout\n"); rc = -EFAULT; } else { - /* data */ + /* Data */ tmp = ((phy->addr << 21) | (devad << 16) | val | EMAC_MDIO_COMM_COMMAND_WRITE_45 | EMAC_MDIO_COMM_START_BUSY); @@ -3188,23 +3286,23 @@ static int bnx2x_bsc_read(struct link_params *params, xfer_cnt = 16 - lc_addr; - /* enable the engine */ + /* Enable the engine */ val = REG_RD(bp, MCP_REG_MCPR_IMC_COMMAND); val |= MCPR_IMC_COMMAND_ENABLE; REG_WR(bp, MCP_REG_MCPR_IMC_COMMAND, val); - /* program slave device ID */ + /* Program slave device ID */ val = (sl_devid << 16) | sl_addr; REG_WR(bp, MCP_REG_MCPR_IMC_SLAVE_CONTROL, val); - /* start xfer with 0 byte to update the address pointer ???*/ + /* Start xfer with 0 byte to update the address pointer ???*/ val = (MCPR_IMC_COMMAND_ENABLE) | (MCPR_IMC_COMMAND_WRITE_OP << MCPR_IMC_COMMAND_OPERATION_BITSHIFT) | (lc_addr << MCPR_IMC_COMMAND_TRANSFER_ADDRESS_BITSHIFT) | (0); REG_WR(bp, MCP_REG_MCPR_IMC_COMMAND, val); - /* poll for completion */ + /* Poll for completion */ i = 0; val = REG_RD(bp, MCP_REG_MCPR_IMC_COMMAND); while (((val >> MCPR_IMC_COMMAND_IMC_STATUS_BITSHIFT) & 0x3) != 1) { @@ -3220,7 +3318,7 @@ static int bnx2x_bsc_read(struct link_params *params, if (rc == -EFAULT) return rc; - /* start xfer with read op */ + /* Start xfer with read op */ val = (MCPR_IMC_COMMAND_ENABLE) | (MCPR_IMC_COMMAND_READ_OP << MCPR_IMC_COMMAND_OPERATION_BITSHIFT) | @@ -3228,7 +3326,7 @@ static int bnx2x_bsc_read(struct link_params *params, (xfer_cnt); REG_WR(bp, MCP_REG_MCPR_IMC_COMMAND, val); - /* poll for completion */ + /* Poll for completion */ i = 0; val = REG_RD(bp, MCP_REG_MCPR_IMC_COMMAND); while (((val >> MCPR_IMC_COMMAND_IMC_STATUS_BITSHIFT) & 0x3) != 1) { @@ -3331,7 +3429,7 @@ static u8 bnx2x_get_warpcore_lane(struct bnx2x_phy *phy, port = port ^ 1; lane = (port<<1) + path; - } else { /* two port mode - no port swap */ + } else { /* Two port mode - no port swap */ /* Figure out path swap value */ path_swap_ovr = @@ -3409,7 +3507,7 @@ static void bnx2x_serdes_deassert(struct bnx2x *bp, u8 port) val = SERDES_RESET_BITS << (port*16); - /* reset and unreset the SerDes/XGXS */ + /* Reset and unreset the SerDes/XGXS */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_CLEAR, val); udelay(500); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_SET, val); @@ -3430,7 +3528,7 @@ static void bnx2x_xgxs_deassert(struct link_params *params) val = XGXS_RESET_BITS << (port*16); - /* reset and 
unreset the SerDes/XGXS */ + /* Reset and unreset the SerDes/XGXS */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_CLEAR, val); udelay(500); REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_SET, val); @@ -3522,7 +3620,7 @@ static void bnx2x_ext_phy_set_pause(struct link_params *params, { u16 val; struct bnx2x *bp = params->bp; - /* read modify write pause advertizing */ + /* Read modify write pause advertizing */ bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_ADV_PAUSE, &val); val &= ~MDIO_AN_REG_ADV_PAUSE_BOTH; @@ -3657,44 +3755,35 @@ static u8 bnx2x_ext_phy_resolve_fc(struct bnx2x_phy *phy, static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy, struct link_params *params, struct link_vars *vars) { - u16 val16 = 0, lane, bam37 = 0; - struct bnx2x *bp = params->bp; + u16 val16 = 0, lane, i; + struct bnx2x *bp = params->bp; + static struct bnx2x_reg_set reg_set[] = { + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7}, + {MDIO_AN_DEVAD, MDIO_WC_REG_PAR_DET_10G_CTRL, 0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, 0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_XGXSBLK1_LANECTRL0, 0xff}, + {MDIO_WC_DEVAD, MDIO_WC_REG_XGXSBLK1_LANECTRL1, 0x5555}, + {MDIO_PMA_DEVAD, MDIO_WC_REG_IEEE0BLK_AUTONEGNP, 0x0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_RX66_CONTROL, 0x7415}, + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_MISC2, 0x6190}, + /* Disable Autoneg: re-enable it after adv is done. */ + {MDIO_AN_DEVAD, MDIO_WC_REG_IEEE0BLK_MIICNTL, 0} + }; DP(NETIF_MSG_LINK, "Enable Auto Negotiation for KR\n"); /* Set to default registers that may be overriden by 10G force */ - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7); - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_PAR_DET_10G_CTRL, 0); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, 0); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK1_LANECTRL0, 0xff); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK1_LANECTRL1, 0x5555); - bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, - MDIO_WC_REG_IEEE0BLK_AUTONEGNP, 0x0); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_RX66_CONTROL, 0x7415); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_MISC2, 0x6190); - /* Disable Autoneg: re-enable it after adv is done. 
*/ - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_IEEE0BLK_MIICNTL, 0); + for (i = 0; i < sizeof(reg_set)/sizeof(struct bnx2x_reg_set); i++) + bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg, + reg_set[i].val); /* Check adding advertisement for 1G KX */ if (((vars->line_speed == SPEED_AUTO_NEG) && (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) || (vars->line_speed == SPEED_1000)) { - u16 sd_digital; + u32 addr = MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2; val16 |= (1<<5); /* Enable CL37 1G Parallel Detect */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, &sd_digital); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, - (sd_digital | 0x1)); - + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, addr, 0x1); DP(NETIF_MSG_LINK, "Advertize 1G\n"); } if (((vars->line_speed == SPEED_AUTO_NEG) && @@ -3704,7 +3793,7 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy, val16 |= (1<<7); /* Enable 10G Parallel Detect */ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_PAR_DET_10G_CTRL, 1); + MDIO_WC_REG_PAR_DET_10G_CTRL, 1); DP(NETIF_MSG_LINK, "Advertize 10G\n"); } @@ -3738,10 +3827,9 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy, offsetof(struct shmem_region, dev_info. port_hw_config[params->port].default_cfg)) & PORT_HW_CFG_ENABLE_BAM_ON_KR_ENABLED) { - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL6_MP5_NEXTPAGECTRL, &bam37); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL6_MP5_NEXTPAGECTRL, bam37 | 1); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_DIGITAL6_MP5_NEXTPAGECTRL, + 1); DP(NETIF_MSG_LINK, "Enable CL37 BAM on KR\n"); } @@ -3755,11 +3843,8 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "Enable AN KR work-around\n"); vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY; } - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL5_MISC7, &val16); - - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL5_MISC7, val16 | 0x100); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_DIGITAL5_MISC7, 0x100); /* Over 1G - AN local device user page 1 */ bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, @@ -3776,50 +3861,35 @@ static void bnx2x_warpcore_set_10G_KR(struct bnx2x_phy *phy, struct link_vars *vars) { struct bnx2x *bp = params->bp; - u16 val; - - /* Disable Autoneg */ - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7); - - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_PAR_DET_10G_CTRL, 0); - - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, 0x3f00); - - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_AN_IEEE1BLK_AN_ADVERTISEMENT1, 0); - - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, - MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x0); - - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL3_UP1, 0x1); - - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL5_MISC7, 0xa); - - /* Disable CL36 PCS Tx */ - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK1_LANECTRL0, 0x0); - - /* Double Wide Single Data Rate @ pll rate */ - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK1_LANECTRL1, 0xFFFF); - - /* Leave cl72 training enable, needed for KR */ - bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, + u16 i; + static struct bnx2x_reg_set reg_set[] = { + /* Disable Autoneg */ + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7}, + 
{MDIO_AN_DEVAD, MDIO_WC_REG_PAR_DET_10G_CTRL, 0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, + 0x3f00}, + {MDIO_AN_DEVAD, MDIO_WC_REG_AN_IEEE1BLK_AN_ADVERTISEMENT1, 0}, + {MDIO_AN_DEVAD, MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_DIGITAL3_UP1, 0x1}, + {MDIO_WC_DEVAD, MDIO_WC_REG_DIGITAL5_MISC7, 0xa}, + /* Disable CL36 PCS Tx */ + {MDIO_WC_DEVAD, MDIO_WC_REG_XGXSBLK1_LANECTRL0, 0x0}, + /* Double Wide Single Data Rate @ pll rate */ + {MDIO_WC_DEVAD, MDIO_WC_REG_XGXSBLK1_LANECTRL1, 0xFFFF}, + /* Leave cl72 training enable, needed for KR */ + {MDIO_PMA_DEVAD, MDIO_WC_REG_PMD_IEEE9BLK_TENGBASE_KR_PMD_CONTROL_REGISTER_150, - 0x2); + 0x2} + }; + + for (i = 0; i < sizeof(reg_set)/sizeof(struct bnx2x_reg_set); i++) + bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg, + reg_set[i].val); /* Leave CL72 enabled */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, - &val); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, - val | 0x3800); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_CL72_USERB0_CL72_MISC1_CONTROL, + 0x3800); /* Set speed via PMA/PMD register */ bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, @@ -3840,7 +3910,7 @@ static void bnx2x_warpcore_set_10G_KR(struct bnx2x_phy *phy, bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, MDIO_WC_REG_RX66_CONTROL, 0xF9); - /* set and clear loopback to cause a reset to 64/66 decoder */ + /* Set and clear loopback to cause a reset to 64/66 decoder */ bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x4000); bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, @@ -3855,16 +3925,12 @@ static void bnx2x_warpcore_set_10G_XFI(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 misc1_val, tap_val, tx_driver_val, lane, val; /* Hold rxSeqStart */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DSC2B0_DSC_MISC_CTRL0, &val); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DSC2B0_DSC_MISC_CTRL0, (val | 0x8000)); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_DSC2B0_DSC_MISC_CTRL0, 0x8000); /* Hold tx_fifo_reset */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X3, &val); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X3, (val | 0x1)); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X3, 0x1); /* Disable CL73 AN */ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_CTRL, 0); @@ -3876,10 +3942,8 @@ static void bnx2x_warpcore_set_10G_XFI(struct bnx2x_phy *phy, MDIO_WC_REG_FX100_CTRL1, (val & 0xFFFA)); /* Disable 100FX Idle detect */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_FX100_CTRL3, &val); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_FX100_CTRL3, (val | 0x0080)); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_FX100_CTRL3, 0x0080); /* Set Block address to Remote PHY & Clear forced_speed[5] */ bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, @@ -3940,16 +4004,20 @@ static void bnx2x_warpcore_set_10G_XFI(struct bnx2x_phy *phy, tx_driver_val); /* Enable fiber mode, enable and invert sig_det */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, &val); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, val | 0xd); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, 0xd); /* Set Block address to Remote PHY & Set forced_speed[5], 
40bit mode */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL4_MISC3, &val); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_DIGITAL4_MISC3, 0x8080); + + /* Enable LPI pass through */ + DP(NETIF_MSG_LINK, "Configure WC for LPI pass through\n"); bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL4_MISC3, val | 0x8080); + MDIO_WC_REG_EEE_COMBO_CONTROL0, + 0x7c); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_DIGITAL4_MISC5, 0xc000); /* 10G XFI Full Duplex */ bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, @@ -4139,40 +4207,35 @@ static void bnx2x_warpcore_clear_regs(struct bnx2x_phy *phy, u16 lane) { struct bnx2x *bp = params->bp; - u16 val16; - + u16 i; + static struct bnx2x_reg_set wc_regs[] = { + {MDIO_AN_DEVAD, MDIO_AN_REG_CTRL, 0}, + {MDIO_WC_DEVAD, MDIO_WC_REG_FX100_CTRL1, 0x014a}, + {MDIO_WC_DEVAD, MDIO_WC_REG_FX100_CTRL3, 0x0800}, + {MDIO_WC_DEVAD, MDIO_WC_REG_DIGITAL4_MISC3, 0x8008}, + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, + 0x0195}, + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, + 0x0007}, + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X3, + 0x0002}, + {MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_MISC1, 0x6000}, + {MDIO_WC_DEVAD, MDIO_WC_REG_TX_FIR_TAP, 0x0000}, + {MDIO_WC_DEVAD, MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x2040}, + {MDIO_WC_DEVAD, MDIO_WC_REG_COMBO_IEEE0_MIICTRL, 0x0140} + }; /* Set XFI clock comp as default. */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_RX66_CONTROL, &val16); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_RX66_CONTROL, val16 | (3<<13)); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_RX66_CONTROL, (3<<13)); + + for (i = 0; i < sizeof(wc_regs)/sizeof(struct bnx2x_reg_set); i++) + bnx2x_cl45_write(bp, phy, wc_regs[i].devad, wc_regs[i].reg, + wc_regs[i].val); - bnx2x_warpcore_reset_lane(bp, phy, 1); - bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_CTRL, 0); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_FX100_CTRL1, 0x014a); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_FX100_CTRL3, 0x0800); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_DIGITAL4_MISC3, 0x8008); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X1, 0x0195); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x0007); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X3, 0x0002); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_SERDESDIGITAL_MISC1, 0x6000); lane = bnx2x_get_warpcore_lane(phy, params); bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_TX_FIR_TAP, 0x0000); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, MDIO_WC_REG_TX0_TX_DRIVER + 0x10*lane, 0x0990); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x2040); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_COMBO_IEEE0_MIICTRL, 0x0140); - bnx2x_warpcore_reset_lane(bp, phy, 0); + } static int bnx2x_get_mod_abs_int_cfg(struct bnx2x *bp, @@ -4260,7 +4323,7 @@ static void bnx2x_warpcore_config_runtime(struct bnx2x_phy *phy, if (!vars->turn_to_run_wc_rt) return; - /* return if there is no link partner */ + /* Return if there is no link partner */ if (!(bnx2x_warpcore_get_sigdet(phy, params))) { DP(NETIF_MSG_LINK, "bnx2x_warpcore_get_sigdet false\n"); return; @@ -4294,7 +4357,7 @@ static void bnx2x_warpcore_config_runtime(struct bnx2x_phy *phy, bnx2x_warpcore_reset_lane(bp, phy, 1); bnx2x_warpcore_reset_lane(bp, phy, 0); - /* restart 
Autoneg */ + /* Restart Autoneg */ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x1200); @@ -4311,6 +4374,23 @@ static void bnx2x_warpcore_config_runtime(struct bnx2x_phy *phy, } /*params->rx_tx_asic_rst*/ } +static void bnx2x_warpcore_config_sfi(struct bnx2x_phy *phy, + struct link_params *params) +{ + u16 lane = bnx2x_get_warpcore_lane(phy, params); + struct bnx2x *bp = params->bp; + bnx2x_warpcore_clear_regs(phy, params, lane); + if ((params->req_line_speed[LINK_CONFIG_IDX(INT_PHY)] == + SPEED_10000) && + (phy->media_type != ETH_PHY_SFP_1G_FIBER)) { + DP(NETIF_MSG_LINK, "Setting 10G SFI\n"); + bnx2x_warpcore_set_10G_XFI(phy, params, 0); + } else { + DP(NETIF_MSG_LINK, "Setting 1G Fiber\n"); + bnx2x_warpcore_set_sgmii_speed(phy, params, 1, 0); + } +} + static void bnx2x_warpcore_config_init(struct bnx2x_phy *phy, struct link_params *params, struct link_vars *vars) @@ -4371,19 +4451,11 @@ static void bnx2x_warpcore_config_init(struct bnx2x_phy *phy, break; case PORT_HW_CFG_NET_SERDES_IF_SFI: - - bnx2x_warpcore_clear_regs(phy, params, lane); - if (vars->line_speed == SPEED_10000) { - DP(NETIF_MSG_LINK, "Setting 10G SFI\n"); - bnx2x_warpcore_set_10G_XFI(phy, params, 0); - } else if (vars->line_speed == SPEED_1000) { - DP(NETIF_MSG_LINK, "Setting 1G Fiber\n"); - bnx2x_warpcore_set_sgmii_speed( - phy, params, 1, 0); - } /* Issue Module detection */ if (bnx2x_is_sfp_module_plugged(phy, params)) bnx2x_sfp_module_detection(phy, params); + + bnx2x_warpcore_config_sfi(phy, params); break; case PORT_HW_CFG_NET_SERDES_IF_DXGXS: @@ -4500,12 +4572,9 @@ static void bnx2x_set_warpcore_loopback(struct bnx2x_phy *phy, CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_AER_BLOCK, MDIO_AER_BLOCK_AER_REG, 0); /* Enable 1G MDIO (1-copy) */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK0_XGXSCONTROL, - &val16); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_XGXSBLK0_XGXSCONTROL, - val16 | 0x10); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_XGXSBLK0_XGXSCONTROL, + 0x10); /* Set 1G loopback based on lane (1-copy) */ lane = bnx2x_get_warpcore_lane(phy, params); bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, @@ -4518,22 +4587,19 @@ static void bnx2x_set_warpcore_loopback(struct bnx2x_phy *phy, bnx2x_set_aer_mmd(params, phy); } else { /* 10G & 20G */ - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_COMBO_IEEE0_MIICTRL, &val16); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_COMBO_IEEE0_MIICTRL, val16 | - 0x4000); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_COMBO_IEEE0_MIICTRL, + 0x4000); - bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_IEEE0BLK_MIICNTL, &val16); - bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, - MDIO_WC_REG_IEEE0BLK_MIICNTL, val16 | 0x1); + bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, + MDIO_WC_REG_IEEE0BLK_MIICNTL, 0x1); } } -void bnx2x_sync_link(struct link_params *params, - struct link_vars *vars) + +static void bnx2x_sync_link(struct link_params *params, + struct link_vars *vars) { struct bnx2x *bp = params->bp; u8 link_10g_plus; @@ -4606,7 +4672,7 @@ void bnx2x_sync_link(struct link_params *params, USES_WARPCORE(bp) && (vars->line_speed == SPEED_1000)) vars->phy_flags |= PHY_SGMII_FLAG; - /* anything 10 and over uses the bmac */ + /* Anything 10 and over uses the bmac */ link_10g_plus = (vars->line_speed >= SPEED_10000); if (link_10g_plus) { @@ -4620,7 +4686,7 @@ void bnx2x_sync_link(struct link_params *params, else vars->mac_type = MAC_TYPE_EMAC; } - } else { /* link down */ + } else { /* Link 
down */ DP(NETIF_MSG_LINK, "phy link down\n"); vars->phy_link_up = 0; @@ -4629,10 +4695,12 @@ void bnx2x_sync_link(struct link_params *params, vars->duplex = DUPLEX_FULL; vars->flow_ctrl = BNX2X_FLOW_CTRL_NONE; - /* indicate no mac active */ + /* Indicate no mac active */ vars->mac_type = MAC_TYPE_NONE; if (vars->link_status & LINK_STATUS_PHYSICAL_LINK_FLAG) vars->phy_flags |= PHY_HALF_OPEN_CONN_FLAG; + if (vars->link_status & LINK_STATUS_SFP_TX_FAULT) + vars->phy_flags |= PHY_SFP_TX_FAULT_FLAG; } } @@ -4698,7 +4766,7 @@ static void bnx2x_set_master_ln(struct link_params *params, PORT_HW_CFG_LANE_SWAP_CFG_MASTER_MASK) >> PORT_HW_CFG_LANE_SWAP_CFG_MASTER_SHIFT); - /* set the master_ln for AN */ + /* Set the master_ln for AN */ CL22_RD_OVER_CL45(bp, phy, MDIO_REG_BANK_XGXS_BLOCK2, MDIO_XGXS_BLOCK2_TEST_MODE_LANE, @@ -4721,7 +4789,7 @@ static int bnx2x_reset_unicore(struct link_params *params, MDIO_REG_BANK_COMBO_IEEE0, MDIO_COMBO_IEEE0_MII_CONTROL, &mii_control); - /* reset the unicore */ + /* Reset the unicore */ CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_COMBO_IEEE0, MDIO_COMBO_IEEE0_MII_CONTROL, @@ -4730,11 +4798,11 @@ static int bnx2x_reset_unicore(struct link_params *params, if (set_serdes) bnx2x_set_serdes_access(bp, params->port); - /* wait for the reset to self clear */ + /* Wait for the reset to self clear */ for (i = 0; i < MDIO_ACCESS_TIMEOUT; i++) { udelay(5); - /* the reset erased the previous bank value */ + /* The reset erased the previous bank value */ CL22_RD_OVER_CL45(bp, phy, MDIO_REG_BANK_COMBO_IEEE0, MDIO_COMBO_IEEE0_MII_CONTROL, @@ -4952,7 +5020,7 @@ static void bnx2x_set_autoneg(struct bnx2x_phy *phy, MDIO_CL73_IEEEB0_CL73_AN_CONTROL, reg_val); } -/* program SerDes, forced speed */ +/* Program SerDes, forced speed */ static void bnx2x_program_serdes(struct bnx2x_phy *phy, struct link_params *params, struct link_vars *vars) @@ -4960,7 +5028,7 @@ static void bnx2x_program_serdes(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 reg_val; - /* program duplex, disable autoneg and sgmii*/ + /* Program duplex, disable autoneg and sgmii*/ CL22_RD_OVER_CL45(bp, phy, MDIO_REG_BANK_COMBO_IEEE0, MDIO_COMBO_IEEE0_MII_CONTROL, ®_val); @@ -4979,7 +5047,7 @@ static void bnx2x_program_serdes(struct bnx2x_phy *phy, CL22_RD_OVER_CL45(bp, phy, MDIO_REG_BANK_SERDES_DIGITAL, MDIO_SERDES_DIGITAL_MISC1, ®_val); - /* clearing the speed value before setting the right speed */ + /* Clearing the speed value before setting the right speed */ DP(NETIF_MSG_LINK, "MDIO_REG_BANK_SERDES_DIGITAL = 0x%x\n", reg_val); reg_val &= ~(MDIO_SERDES_DIGITAL_MISC1_FORCE_SPEED_MASK | @@ -5008,7 +5076,7 @@ static void bnx2x_set_brcm_cl37_advertisement(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 val = 0; - /* set extended capabilities */ + /* Set extended capabilities */ if (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_2_5G) val |= MDIO_OVER_1G_UP1_2_5G; if (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_10G) @@ -5028,7 +5096,7 @@ static void bnx2x_set_ieee_aneg_advertisement(struct bnx2x_phy *phy, { struct bnx2x *bp = params->bp; u16 val; - /* for AN, we are always publishing full duplex */ + /* For AN, we are always publishing full duplex */ CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_COMBO_IEEE0, @@ -5090,14 +5158,14 @@ static void bnx2x_initialize_sgmii_process(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 control1; - /* in SGMII mode, the unicore is always slave */ + /* In SGMII mode, the unicore is always slave */ CL22_RD_OVER_CL45(bp, phy, 
MDIO_REG_BANK_SERDES_DIGITAL, MDIO_SERDES_DIGITAL_A_1000X_CONTROL1, &control1); control1 |= MDIO_SERDES_DIGITAL_A_1000X_CONTROL1_INVERT_SIGNAL_DETECT; - /* set sgmii mode (and not fiber) */ + /* Set sgmii mode (and not fiber) */ control1 &= ~(MDIO_SERDES_DIGITAL_A_1000X_CONTROL1_FIBER_MODE | MDIO_SERDES_DIGITAL_A_1000X_CONTROL1_AUTODET | MDIO_SERDES_DIGITAL_A_1000X_CONTROL1_MSTR_MODE); @@ -5106,9 +5174,9 @@ static void bnx2x_initialize_sgmii_process(struct bnx2x_phy *phy, MDIO_SERDES_DIGITAL_A_1000X_CONTROL1, control1); - /* if forced speed */ + /* If forced speed */ if (!(vars->line_speed == SPEED_AUTO_NEG)) { - /* set speed, disable autoneg */ + /* Set speed, disable autoneg */ u16 mii_control; CL22_RD_OVER_CL45(bp, phy, @@ -5129,16 +5197,16 @@ static void bnx2x_initialize_sgmii_process(struct bnx2x_phy *phy, MDIO_COMBO_IEEO_MII_CONTROL_MAN_SGMII_SP_1000; break; case SPEED_10: - /* there is nothing to set for 10M */ + /* There is nothing to set for 10M */ break; default: - /* invalid speed for SGMII */ + /* Invalid speed for SGMII */ DP(NETIF_MSG_LINK, "Invalid line_speed 0x%x\n", vars->line_speed); break; } - /* setting the full duplex */ + /* Setting the full duplex */ if (phy->req_duplex == DUPLEX_FULL) mii_control |= MDIO_COMBO_IEEO_MII_CONTROL_FULL_DUPLEX; @@ -5148,7 +5216,7 @@ static void bnx2x_initialize_sgmii_process(struct bnx2x_phy *phy, mii_control); } else { /* AN mode */ - /* enable and restart AN */ + /* Enable and restart AN */ bnx2x_restart_autoneg(phy, params, 0); } } @@ -5244,7 +5312,7 @@ static void bnx2x_flow_ctrl_resolve(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; vars->flow_ctrl = BNX2X_FLOW_CTRL_NONE; - /* resolve from gp_status in case of AN complete and not sgmii */ + /* Resolve from gp_status in case of AN complete and not sgmii */ if (phy->req_flow_ctrl != BNX2X_FLOW_CTRL_AUTO) { /* Update the advertised flow-controled of LD/LP in AN */ if (phy->req_line_speed == SPEED_AUTO_NEG) @@ -5468,7 +5536,7 @@ static int bnx2x_link_settings_status(struct bnx2x_phy *phy, bnx2x_xgxs_an_resolve(phy, params, vars, gp_status); } - } else { /* link_down */ + } else { /* Link_down */ if ((phy->req_line_speed == SPEED_AUTO_NEG) && SINGLE_MEDIA_DIRECT(params)) { /* Check signal is detected */ @@ -5617,12 +5685,12 @@ static void bnx2x_set_gmii_tx_driver(struct link_params *params) u16 tx_driver; u16 bank; - /* read precomp */ + /* Read precomp */ CL22_RD_OVER_CL45(bp, phy, MDIO_REG_BANK_OVER_1G, MDIO_OVER_1G_LP_UP2, &lp_up2); - /* bits [10:7] at lp_up2, positioned at [15:12] */ + /* Bits [10:7] at lp_up2, positioned at [15:12] */ lp_up2 = (((lp_up2 & MDIO_OVER_1G_LP_UP2_PREEMPHASIS_MASK) >> MDIO_OVER_1G_LP_UP2_PREEMPHASIS_SHIFT) << MDIO_TX0_TX_DRIVER_PREEMPHASIS_SHIFT); @@ -5636,7 +5704,7 @@ static void bnx2x_set_gmii_tx_driver(struct link_params *params) bank, MDIO_TX0_TX_DRIVER, &tx_driver); - /* replace tx_driver bits [15:12] */ + /* Replace tx_driver bits [15:12] */ if (lp_up2 != (tx_driver & MDIO_TX0_TX_DRIVER_PREEMPHASIS_MASK)) { tx_driver &= ~MDIO_TX0_TX_DRIVER_PREEMPHASIS_MASK; @@ -5732,16 +5800,16 @@ static void bnx2x_xgxs_config_init(struct bnx2x_phy *phy, FEATURE_CONFIG_OVERRIDE_PREEMPHASIS_ENABLED)) bnx2x_set_preemphasis(phy, params); - /* forced speed requested? */ + /* Forced speed requested? 
*/ if (vars->line_speed != SPEED_AUTO_NEG || (SINGLE_MEDIA_DIRECT(params) && params->loopback_mode == LOOPBACK_EXT)) { DP(NETIF_MSG_LINK, "not SGMII, no AN\n"); - /* disable autoneg */ + /* Disable autoneg */ bnx2x_set_autoneg(phy, params, vars, 0); - /* program speed and duplex */ + /* Program speed and duplex */ bnx2x_program_serdes(phy, params, vars); } else { /* AN_mode */ @@ -5750,14 +5818,14 @@ static void bnx2x_xgxs_config_init(struct bnx2x_phy *phy, /* AN enabled */ bnx2x_set_brcm_cl37_advertisement(phy, params); - /* program duplex & pause advertisement (for aneg) */ + /* Program duplex & pause advertisement (for aneg) */ bnx2x_set_ieee_aneg_advertisement(phy, params, vars->ieee_fc); - /* enable autoneg */ + /* Enable autoneg */ bnx2x_set_autoneg(phy, params, vars, enable_cl73); - /* enable and restart AN */ + /* Enable and restart AN */ bnx2x_restart_autoneg(phy, params, enable_cl73); } @@ -5793,12 +5861,12 @@ static int bnx2x_prepare_xgxs(struct bnx2x_phy *phy, bnx2x_set_master_ln(params, phy); rc = bnx2x_reset_unicore(params, phy, 0); - /* reset the SerDes and wait for reset bit return low */ - if (rc != 0) + /* Reset the SerDes and wait for reset bit return low */ + if (rc) return rc; bnx2x_set_aer_mmd(params, phy); - /* setting the masterLn_def again after the reset */ + /* Setting the masterLn_def again after the reset */ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT) { bnx2x_set_master_ln(params, phy); bnx2x_set_swap_lanes(params, phy); @@ -5823,7 +5891,7 @@ static u16 bnx2x_wait_reset_complete(struct bnx2x *bp, MDIO_PMA_REG_CTRL, &ctrl); if (!(ctrl & (1<<15))) break; - msleep(1); + usleep_range(1000, 2000); } if (cnt == 1000) @@ -6054,7 +6122,7 @@ static void bnx2x_set_xgxs_loopback(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "XGXS 10G loopback enable\n"); if (!CHIP_IS_E3(bp)) { - /* change the uni_phy_addr in the nig */ + /* Change the uni_phy_addr in the nig */ md_devad = REG_RD(bp, (NIG_REG_XGXS0_CTRL_MD_DEVAD + port*0x18)); @@ -6074,11 +6142,11 @@ static void bnx2x_set_xgxs_loopback(struct bnx2x_phy *phy, (MDIO_CL73_IEEEB0_CL73_AN_CONTROL & 0xf)), 0x6041); msleep(200); - /* set aer mmd back */ + /* Set aer mmd back */ bnx2x_set_aer_mmd(params, phy); if (!CHIP_IS_E3(bp)) { - /* and md_devad */ + /* And md_devad */ REG_WR(bp, NIG_REG_XGXS0_CTRL_MD_DEVAD + port*0x18, md_devad); } @@ -6275,7 +6343,7 @@ int bnx2x_test_link(struct link_params *params, struct link_vars *vars, MDIO_REG_BANK_GP_STATUS, MDIO_GP_STATUS_TOP_AN_STATUS1, &gp_status); - /* link is up only if both local phy and external phy are up */ + /* Link is up only if both local phy and external phy are up */ if (!(gp_status & MDIO_GP_STATUS_TOP_AN_STATUS1_LINK_STATUS)) return -ESRCH; } @@ -6296,7 +6364,9 @@ int bnx2x_test_link(struct link_params *params, struct link_vars *vars, for (phy_index = EXT_PHY1; phy_index < params->num_phys; phy_index++) { serdes_phy_type = ((params->phy[phy_index].media_type == - ETH_PHY_SFP_FIBER) || + ETH_PHY_SFPP_10G_FIBER) || + (params->phy[phy_index].media_type == + ETH_PHY_SFP_1G_FIBER) || (params->phy[phy_index].media_type == ETH_PHY_XFP_FIBER) || (params->phy[phy_index].media_type == @@ -6397,7 +6467,7 @@ static int bnx2x_link_initialize(struct link_params *params, static void bnx2x_int_link_reset(struct bnx2x_phy *phy, struct link_params *params) { - /* reset the SerDes/XGXS */ + /* Reset the SerDes/XGXS */ REG_WR(params->bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_CLEAR, (0x1ff << (params->port*16))); } @@ -6430,10 +6500,10 @@ static int bnx2x_update_link_down(struct 
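/* Several hunks in this range convert short msleep() calls to usleep_range().
 * For waits of a few milliseconds, msleep(1) may sleep for a whole jiffy, so
 * the ~1 ms poll idiom looks like the sketch below (modelled on the
 * bnx2x_wait_reset_complete() loop above; assumes bp and phy in scope as in
 * the surrounding functions, not part of the patch itself):
 */
u16 cnt, ctrl = 0;

for (cnt = 0; cnt < 1000; cnt++) {
	bnx2x_cl45_read(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_REG_CTRL, &ctrl);
	if (!(ctrl & (1 << 15)))	/* reset bit self-clears when done */
		break;
	usleep_range(1000, 2000);	/* ~1 ms, with scheduler slack */
}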
link_params *params, DP(NETIF_MSG_LINK, "Port %x: Link is down\n", port); bnx2x_set_led(params, vars, LED_MODE_OFF, 0); vars->phy_flags &= ~PHY_PHYSICAL_LINK_FLAG; - /* indicate no mac active */ + /* Indicate no mac active */ vars->mac_type = MAC_TYPE_NONE; - /* update shared memory */ + /* Update shared memory */ vars->link_status &= ~(LINK_STATUS_SPEED_AND_DUPLEX_MASK | LINK_STATUS_LINK_UP | LINK_STATUS_PHYSICAL_LINK_FLAG | @@ -6446,15 +6516,15 @@ static int bnx2x_update_link_down(struct link_params *params, vars->line_speed = 0; bnx2x_update_mng(params, vars->link_status); - /* activate nig drain */ + /* Activate nig drain */ REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + port*4, 1); - /* disable emac */ + /* Disable emac */ if (!CHIP_IS_E3(bp)) REG_WR(bp, NIG_REG_NIG_EMAC0_EN + port*4, 0); - msleep(10); - /* reset BigMac/Xmac */ + usleep_range(10000, 20000); + /* Reset BigMac/Xmac */ if (CHIP_IS_E1x(bp) || CHIP_IS_E2(bp)) { bnx2x_bmac_rx_disable(bp, params->port); @@ -6463,6 +6533,16 @@ static int bnx2x_update_link_down(struct link_params *params, (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port)); } if (CHIP_IS_E3(bp)) { + /* Prevent LPI Generation by chip */ + REG_WR(bp, MISC_REG_CPMU_LP_FW_ENABLE_P0 + (params->port << 2), + 0); + REG_WR(bp, MISC_REG_CPMU_LP_DR_ENABLE, 0); + REG_WR(bp, MISC_REG_CPMU_LP_MASK_ENT_P0 + (params->port << 2), + 0); + vars->eee_status &= ~(SHMEM_EEE_LP_ADV_STATUS_MASK | + SHMEM_EEE_ACTIVE_BIT); + + bnx2x_update_mng_eee(params, vars->eee_status); bnx2x_xmac_disable(params); bnx2x_umac_disable(params); } @@ -6502,6 +6582,16 @@ static int bnx2x_update_link_up(struct link_params *params, bnx2x_umac_enable(params, vars, 0); bnx2x_set_led(params, vars, LED_MODE_OPER, vars->line_speed); + + if ((vars->eee_status & SHMEM_EEE_ACTIVE_BIT) && + (vars->eee_status & SHMEM_EEE_LPI_REQUESTED_BIT)) { + DP(NETIF_MSG_LINK, "Enabling LPI assertion\n"); + REG_WR(bp, MISC_REG_CPMU_LP_FW_ENABLE_P0 + + (params->port << 2), 1); + REG_WR(bp, MISC_REG_CPMU_LP_DR_ENABLE, 1); + REG_WR(bp, MISC_REG_CPMU_LP_MASK_ENT_P0 + + (params->port << 2), 0xfc20); + } } if ((CHIP_IS_E1x(bp) || CHIP_IS_E2(bp))) { @@ -6534,12 +6624,12 @@ static int bnx2x_update_link_up(struct link_params *params, rc |= bnx2x_pbf_update(params, vars->flow_ctrl, vars->line_speed); - /* disable drain */ + /* Disable drain */ REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + port*4, 0); - /* update shared memory */ + /* Update shared memory */ bnx2x_update_mng(params, vars->link_status); - + bnx2x_update_mng_eee(params, vars->eee_status); /* Check remote fault */ for (phy_idx = INT_PHY; phy_idx < MAX_PHYS; phy_idx++) { if (params->phy[phy_idx].flags & FLAGS_TX_ERROR_CHECK) { @@ -6583,6 +6673,8 @@ int bnx2x_link_update(struct link_params *params, struct link_vars *vars) phy_vars[phy_index].phy_link_up = 0; phy_vars[phy_index].link_up = 0; phy_vars[phy_index].fault_detected = 0; + /* different consideration, since vars holds inner state */ + phy_vars[phy_index].eee_status = vars->eee_status; } if (USES_WARPCORE(bp)) @@ -6603,7 +6695,7 @@ int bnx2x_link_update(struct link_params *params, struct link_vars *vars) REG_RD(bp, NIG_REG_XGXS0_STATUS_LINK10G + port*0x68), REG_RD(bp, NIG_REG_XGXS0_STATUS_LINK_STATUS + port*0x68)); - /* disable emac */ + /* Disable emac */ if (!CHIP_IS_E3(bp)) REG_WR(bp, NIG_REG_NIG_EMAC0_EN + port*4, 0); @@ -6712,6 +6804,9 @@ int bnx2x_link_update(struct link_params *params, struct link_vars *vars) vars->link_status |= LINK_STATUS_SERDES_LINK; else vars->link_status &= ~LINK_STATUS_SERDES_LINK; + + vars->eee_status = 
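/* In bnx2x_update_link_up() above, LPI assertion is only armed when EEE was
 * actually negotiated on the link and the configuration asked for LPI
 * generation.  A hypothetical predicate (illustrative name, not part of the
 * patch) makes that gating explicit:
 */
static bool bnx2x_lpi_requested_and_active(const struct link_vars *vars)
{
	return (vars->eee_status & SHMEM_EEE_ACTIVE_BIT) &&
	       (vars->eee_status & SHMEM_EEE_LPI_REQUESTED_BIT);
}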
phy_vars[active_external_phy].eee_status; + DP(NETIF_MSG_LINK, "Active external phy selected: %x\n", active_external_phy); } @@ -6745,11 +6840,11 @@ int bnx2x_link_update(struct link_params *params, struct link_vars *vars) } else if (prev_line_speed != vars->line_speed) { REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 0); - msleep(1); + usleep_range(1000, 2000); } } - /* anything 10 and over uses the bmac */ + /* Anything 10 and over uses the bmac */ link_10g_plus = (vars->line_speed >= SPEED_10000); bnx2x_link_int_ack(params, vars, link_10g_plus); @@ -6815,7 +6910,7 @@ void bnx2x_ext_phy_hw_reset(struct bnx2x *bp, u8 port) { bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1, MISC_REGISTERS_GPIO_OUTPUT_LOW, port); - msleep(1); + usleep_range(1000, 2000); bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1, MISC_REGISTERS_GPIO_OUTPUT_HIGH, port); } @@ -6912,7 +7007,7 @@ static int bnx2x_8073_8727_external_rom_boot(struct bnx2x *bp, MDIO_PMA_REG_GEN_CTRL, 0x0001); - /* ucode reboot and rst */ + /* Ucode reboot and rst */ bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_REG_GEN_CTRL, @@ -6956,7 +7051,7 @@ static int bnx2x_8073_8727_external_rom_boot(struct bnx2x *bp, MDIO_PMA_DEVAD, MDIO_PMA_REG_M8051_MSGOUT_REG, &fw_msgout); - msleep(1); + usleep_range(1000, 2000); } while (fw_ver1 == 0 || fw_ver1 == 0x4321 || ((fw_msgout & 0xff) != 0x03 && (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8073))); @@ -7050,11 +7145,11 @@ static int bnx2x_8073_xaui_wa(struct bnx2x *bp, struct bnx2x_phy *phy) "XAUI workaround has completed\n"); return 0; } - msleep(3); + usleep_range(3000, 6000); } break; } - msleep(3); + usleep_range(3000, 6000); } DP(NETIF_MSG_LINK, "Warning: XAUI work-around timeout !!!\n"); return -EINVAL; @@ -7128,7 +7223,7 @@ static int bnx2x_8073_config_init(struct bnx2x_phy *phy, bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1, MISC_REGISTERS_GPIO_OUTPUT_HIGH, gpio_port); - /* enable LASI */ + /* Enable LASI */ bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_LASI_RXCTRL, (1<<2)); bnx2x_cl45_write(bp, phy, @@ -7276,7 +7371,7 @@ static u8 bnx2x_8073_read_status(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "8703 LASI status 0x%x\n", val1); - /* clear the interrupt LASI status register */ + /* Clear the interrupt LASI status register */ bnx2x_cl45_read(bp, phy, MDIO_PCS_DEVAD, MDIO_PCS_REG_STATUS, &val2); bnx2x_cl45_read(bp, phy, @@ -7601,7 +7696,7 @@ static int bnx2x_8726_read_sfp_module_eeprom(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 val = 0; u16 i; - if (byte_cnt > 16) { + if (byte_cnt > SFP_EEPROM_PAGE_SIZE) { DP(NETIF_MSG_LINK, "Reading from eeprom is limited to 0xf\n"); return -EINVAL; @@ -7655,7 +7750,7 @@ static int bnx2x_8726_read_sfp_module_eeprom(struct bnx2x_phy *phy, if ((val & MDIO_PMA_REG_SFP_TWO_WIRE_CTRL_STATUS_MASK) == MDIO_PMA_REG_SFP_TWO_WIRE_STATUS_IDLE) return 0; - msleep(1); + usleep_range(1000, 2000); } return -EINVAL; } @@ -7692,7 +7787,8 @@ static int bnx2x_warpcore_read_sfp_module_eeprom(struct bnx2x_phy *phy, u32 data_array[4]; u16 addr32; struct bnx2x *bp = params->bp; - if (byte_cnt > 16) { + + if (byte_cnt > SFP_EEPROM_PAGE_SIZE) { DP(NETIF_MSG_LINK, "Reading from eeprom is limited to 16 bytes\n"); return -EINVAL; @@ -7728,7 +7824,7 @@ static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy, struct bnx2x *bp = params->bp; u16 val, i; - if (byte_cnt > 16) { + if (byte_cnt > SFP_EEPROM_PAGE_SIZE) { DP(NETIF_MSG_LINK, "Reading from eeprom is limited to 0xf\n"); return -EINVAL; @@ -7765,7 +7861,7 @@ static int bnx2x_8727_read_sfp_module_eeprom(struct 
bnx2x_phy *phy, /* Wait appropriate time for two-wire command to finish before * polling the status register */ - msleep(1); + usleep_range(1000, 2000); /* Wait up to 500us for command complete status */ for (i = 0; i < 100; i++) { @@ -7801,7 +7897,7 @@ static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy, if ((val & MDIO_PMA_REG_SFP_TWO_WIRE_CTRL_STATUS_MASK) == MDIO_PMA_REG_SFP_TWO_WIRE_STATUS_IDLE) return 0; - msleep(1); + usleep_range(1000, 2000); } return -EINVAL; @@ -7811,7 +7907,7 @@ int bnx2x_read_sfp_module_eeprom(struct bnx2x_phy *phy, struct link_params *params, u16 addr, u8 byte_cnt, u8 *o_buf) { - int rc = -EINVAL; + int rc = -EOPNOTSUPP; switch (phy->type) { case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8726: rc = bnx2x_8726_read_sfp_module_eeprom(phy, params, addr, @@ -7836,7 +7932,7 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy, { struct bnx2x *bp = params->bp; u32 sync_offset = 0, phy_idx, media_types; - u8 val, check_limiting_mode = 0; + u8 val[2], check_limiting_mode = 0; *edc_mode = EDC_MODE_LIMITING; phy->media_type = ETH_PHY_UNSPECIFIED; @@ -7844,13 +7940,13 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy, if (bnx2x_read_sfp_module_eeprom(phy, params, SFP_EEPROM_CON_TYPE_ADDR, - 1, - &val) != 0) { + 2, + (u8 *)val) != 0) { DP(NETIF_MSG_LINK, "Failed to read from SFP+ module EEPROM\n"); return -EINVAL; } - switch (val) { + switch (val[0]) { case SFP_EEPROM_CON_TYPE_VAL_COPPER: { u8 copper_module_type; @@ -7888,13 +7984,29 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy, break; } case SFP_EEPROM_CON_TYPE_VAL_LC: - phy->media_type = ETH_PHY_SFP_FIBER; - DP(NETIF_MSG_LINK, "Optic module detected\n"); check_limiting_mode = 1; + if ((val[1] & (SFP_EEPROM_COMP_CODE_SR_MASK | + SFP_EEPROM_COMP_CODE_LR_MASK | + SFP_EEPROM_COMP_CODE_LRM_MASK)) == 0) { + DP(NETIF_MSG_LINK, "1G Optic module detected\n"); + phy->media_type = ETH_PHY_SFP_1G_FIBER; + phy->req_line_speed = SPEED_1000; + } else { + int idx, cfg_idx = 0; + DP(NETIF_MSG_LINK, "10G Optic module detected\n"); + for (idx = INT_PHY; idx < MAX_PHYS; idx++) { + if (params->phy[idx].type == phy->type) { + cfg_idx = LINK_CONFIG_IDX(idx); + break; + } + } + phy->media_type = ETH_PHY_SFPP_10G_FIBER; + phy->req_line_speed = params->req_line_speed[cfg_idx]; + } break; default: DP(NETIF_MSG_LINK, "Unable to determine module type 0x%x !!!\n", - val); + val[0]); return -EINVAL; } sync_offset = params->shmem_base + @@ -7980,7 +8092,7 @@ static int bnx2x_verify_sfp_module(struct bnx2x_phy *phy, return 0; } - /* format the warning message */ + /* Format the warning message */ if (bnx2x_read_sfp_module_eeprom(phy, params, SFP_EEPROM_VENDOR_NAME_ADDR, @@ -8026,7 +8138,7 @@ static int bnx2x_wait_for_sfp_module_initialized(struct bnx2x_phy *phy, timeout * 5); return 0; } - msleep(5); + usleep_range(5000, 10000); } return -EINVAL; } @@ -8338,7 +8450,7 @@ int bnx2x_sfp_module_detection(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "Failed to get valid module type\n"); return -EINVAL; } else if (bnx2x_verify_sfp_module(phy, params) != 0) { - /* check SFP+ module compatibility */ + /* Check SFP+ module compatibility */ DP(NETIF_MSG_LINK, "Module verification failed!!\n"); rc = -EINVAL; /* Turn on fault module-detected led */ @@ -8401,14 +8513,34 @@ void bnx2x_handle_module_detect_int(struct link_params *params) /* Call the handling function in case module is detected */ if (gpio_val == 0) { + bnx2x_set_mdio_clk(bp, params->chip_id, params->port); + bnx2x_set_aer_mmd(params, phy); + bnx2x_power_sfp_module(params, phy, 1); 
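/* The byte_cnt checks above cap every low-level SFP EEPROM read at
 * SFP_EEPROM_PAGE_SIZE (16) bytes.  A caller needing a larger region has to
 * split the transfer itself; a hypothetical wrapper (not part of the patch)
 * built on the bnx2x_read_sfp_module_eeprom() entry point could do:
 */
static int bnx2x_read_sfp_eeprom_region(struct bnx2x_phy *phy,
					struct link_params *params,
					u16 addr, u16 len, u8 *buf)
{
	while (len) {
		u8 chunk = (u8)min_t(u16, len, SFP_EEPROM_PAGE_SIZE);
		int rc = bnx2x_read_sfp_module_eeprom(phy, params, addr,
						      chunk, buf);
		if (rc)
			return rc;
		addr += chunk;
		buf += chunk;
		len -= chunk;
	}
	return 0;
}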
bnx2x_set_gpio_int(bp, gpio_num, MISC_REGISTERS_GPIO_INT_OUTPUT_CLR, gpio_port); - if (bnx2x_wait_for_sfp_module_initialized(phy, params) == 0) + if (bnx2x_wait_for_sfp_module_initialized(phy, params) == 0) { bnx2x_sfp_module_detection(phy, params); - else + if (CHIP_IS_E3(bp)) { + u16 rx_tx_in_reset; + /* In case WC is out of reset, reconfigure the + * link speed while taking into account 1G + * module limitation. + */ + bnx2x_cl45_read(bp, phy, + MDIO_WC_DEVAD, + MDIO_WC_REG_DIGITAL5_MISC6, + &rx_tx_in_reset); + if (!rx_tx_in_reset) { + bnx2x_warpcore_reset_lane(bp, phy, 1); + bnx2x_warpcore_config_sfi(phy, params); + bnx2x_warpcore_reset_lane(bp, phy, 0); + } + } + } else { DP(NETIF_MSG_LINK, "SFP+ module is not initialized\n"); + } } else { u32 val = REG_RD(bp, params->shmem_base + offsetof(struct shmem_region, dev_info. @@ -8469,7 +8601,7 @@ static u8 bnx2x_8706_8726_read_status(struct bnx2x_phy *phy, bnx2x_sfp_mask_fault(bp, phy, MDIO_PMA_LASI_TXSTAT, MDIO_PMA_LASI_TXCTRL); - /* clear LASI indication*/ + /* Clear LASI indication*/ bnx2x_cl45_read(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_LASI_STAT, &val1); bnx2x_cl45_read(bp, phy, @@ -8537,7 +8669,7 @@ static u8 bnx2x_8706_config_init(struct bnx2x_phy *phy, MDIO_PMA_DEVAD, MDIO_PMA_REG_ROM_VER1, &val); if (val) break; - msleep(10); + usleep_range(10000, 20000); } DP(NETIF_MSG_LINK, "XGXS 8706 is initialized after %d ms\n", cnt); if ((params->feature_config_flags & @@ -8666,7 +8798,7 @@ static void bnx2x_8726_external_rom_boot(struct bnx2x_phy *phy, MDIO_PMA_REG_GEN_CTRL, MDIO_PMA_REG_GEN_CTRL_ROM_RESET_INTERNAL_MP); - /* wait for 150ms for microcode load */ + /* Wait for 150ms for microcode load */ msleep(150); /* Disable serial boot control, tristates pins SS_N, SCK, MOSI, MISO */ @@ -8860,6 +8992,63 @@ static void bnx2x_8727_hw_reset(struct bnx2x_phy *phy, MISC_REGISTERS_GPIO_OUTPUT_LOW, port); } +static void bnx2x_8727_config_speed(struct bnx2x_phy *phy, + struct link_params *params) +{ + struct bnx2x *bp = params->bp; + u16 tmp1, val; + /* Set option 1G speed */ + if ((phy->req_line_speed == SPEED_1000) || + (phy->media_type == ETH_PHY_SFP_1G_FIBER)) { + DP(NETIF_MSG_LINK, "Setting 1G force\n"); + bnx2x_cl45_write(bp, phy, + MDIO_PMA_DEVAD, MDIO_PMA_REG_CTRL, 0x40); + bnx2x_cl45_write(bp, phy, + MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, 0xD); + bnx2x_cl45_read(bp, phy, + MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, &tmp1); + DP(NETIF_MSG_LINK, "1.7 = 0x%x\n", tmp1); + /* Power down the XAUI until link is up in case of dual-media + * and 1G + */ + if (DUAL_MEDIA(params)) { + bnx2x_cl45_read(bp, phy, + MDIO_PMA_DEVAD, + MDIO_PMA_REG_8727_PCS_GP, &val); + val |= (3<<10); + bnx2x_cl45_write(bp, phy, + MDIO_PMA_DEVAD, + MDIO_PMA_REG_8727_PCS_GP, val); + } + } else if ((phy->req_line_speed == SPEED_AUTO_NEG) && + ((phy->speed_cap_mask & + PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) && + ((phy->speed_cap_mask & + PORT_HW_CFG_SPEED_CAPABILITY_D0_10G) != + PORT_HW_CFG_SPEED_CAPABILITY_D0_10G)) { + + DP(NETIF_MSG_LINK, "Setting 1G clause37\n"); + bnx2x_cl45_write(bp, phy, + MDIO_AN_DEVAD, MDIO_AN_REG_8727_MISC_CTRL, 0); + bnx2x_cl45_write(bp, phy, + MDIO_AN_DEVAD, MDIO_AN_REG_CL37_AN, 0x1300); + } else { + /* Since the 8727 has only single reset pin, need to set the 10G + * registers although it is default + */ + bnx2x_cl45_write(bp, phy, + MDIO_AN_DEVAD, MDIO_AN_REG_8727_MISC_CTRL, + 0x0020); + bnx2x_cl45_write(bp, phy, + MDIO_AN_DEVAD, MDIO_AN_REG_CL37_AN, 0x0100); + bnx2x_cl45_write(bp, phy, + MDIO_PMA_DEVAD, MDIO_PMA_REG_CTRL, 0x2040); + 
bnx2x_cl45_write(bp, phy, + MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, + 0x0008); + } +} + static int bnx2x_8727_config_init(struct bnx2x_phy *phy, struct link_params *params, struct link_vars *vars) @@ -8877,7 +9066,7 @@ static int bnx2x_8727_config_init(struct bnx2x_phy *phy, lasi_ctrl_val = 0x0006; DP(NETIF_MSG_LINK, "Initializing BCM8727\n"); - /* enable LASI */ + /* Enable LASI */ bnx2x_cl45_write(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_LASI_RXCTRL, rx_alarm_ctrl_val); @@ -8929,56 +9118,7 @@ static int bnx2x_8727_config_init(struct bnx2x_phy *phy, bnx2x_cl45_read(bp, phy, MDIO_PMA_DEVAD, MDIO_PMA_LASI_RXSTAT, &tmp1); - /* Set option 1G speed */ - if (phy->req_line_speed == SPEED_1000) { - DP(NETIF_MSG_LINK, "Setting 1G force\n"); - bnx2x_cl45_write(bp, phy, - MDIO_PMA_DEVAD, MDIO_PMA_REG_CTRL, 0x40); - bnx2x_cl45_write(bp, phy, - MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, 0xD); - bnx2x_cl45_read(bp, phy, - MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, &tmp1); - DP(NETIF_MSG_LINK, "1.7 = 0x%x\n", tmp1); - /* Power down the XAUI until link is up in case of dual-media - * and 1G - */ - if (DUAL_MEDIA(params)) { - bnx2x_cl45_read(bp, phy, - MDIO_PMA_DEVAD, - MDIO_PMA_REG_8727_PCS_GP, &val); - val |= (3<<10); - bnx2x_cl45_write(bp, phy, - MDIO_PMA_DEVAD, - MDIO_PMA_REG_8727_PCS_GP, val); - } - } else if ((phy->req_line_speed == SPEED_AUTO_NEG) && - ((phy->speed_cap_mask & - PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) && - ((phy->speed_cap_mask & - PORT_HW_CFG_SPEED_CAPABILITY_D0_10G) != - PORT_HW_CFG_SPEED_CAPABILITY_D0_10G)) { - - DP(NETIF_MSG_LINK, "Setting 1G clause37\n"); - bnx2x_cl45_write(bp, phy, - MDIO_AN_DEVAD, MDIO_AN_REG_8727_MISC_CTRL, 0); - bnx2x_cl45_write(bp, phy, - MDIO_AN_DEVAD, MDIO_AN_REG_CL37_AN, 0x1300); - } else { - /* Since the 8727 has only single reset pin, need to set the 10G - * registers although it is default - */ - bnx2x_cl45_write(bp, phy, - MDIO_AN_DEVAD, MDIO_AN_REG_8727_MISC_CTRL, - 0x0020); - bnx2x_cl45_write(bp, phy, - MDIO_AN_DEVAD, MDIO_AN_REG_CL37_AN, 0x0100); - bnx2x_cl45_write(bp, phy, - MDIO_PMA_DEVAD, MDIO_PMA_REG_CTRL, 0x2040); - bnx2x_cl45_write(bp, phy, - MDIO_PMA_DEVAD, MDIO_PMA_REG_10G_CTRL2, - 0x0008); - } - + bnx2x_8727_config_speed(phy, params); /* Set 2-wire transfer rate of SFP+ module EEPROM * to 100Khz since some DACs(direct attached cables) do * not work at 400Khz. 
@@ -9105,6 +9245,9 @@ static void bnx2x_8727_handle_mod_abs(struct bnx2x_phy *phy, bnx2x_sfp_module_detection(phy, params); else DP(NETIF_MSG_LINK, "SFP+ module is not initialized\n"); + + /* Reconfigure link speed based on module type limitations */ + bnx2x_8727_config_speed(phy, params); } DP(NETIF_MSG_LINK, "8727 RX_ALARM_STATUS 0x%x\n", @@ -9585,9 +9728,9 @@ static int bnx2x_8481_config_init(struct bnx2x_phy *phy, static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy, struct link_params *params, u16 fw_cmd, - u16 cmd_args[]) + u16 cmd_args[], int argc) { - u32 idx; + int idx; u16 val; struct bnx2x *bp = params->bp; /* Write CMD_OPEN_OVERRIDE to STATUS reg */ @@ -9599,7 +9742,7 @@ static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy, MDIO_84833_CMD_HDLR_STATUS, &val); if (val == PHY84833_STATUS_CMD_OPEN_FOR_CMDS) break; - msleep(1); + usleep_range(1000, 2000); } if (idx >= PHY84833_CMDHDLR_WAIT) { DP(NETIF_MSG_LINK, "FW cmd: FW not ready.\n"); @@ -9607,7 +9750,7 @@ static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy, } /* Prepare argument(s) and issue command */ - for (idx = 0; idx < PHY84833_CMDHDLR_MAX_ARGS; idx++) { + for (idx = 0; idx < argc; idx++) { bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD, MDIO_84833_CMD_HDLR_DATA1 + idx, cmd_args[idx]); @@ -9620,7 +9763,7 @@ static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy, if ((val == PHY84833_STATUS_CMD_COMPLETE_PASS) || (val == PHY84833_STATUS_CMD_COMPLETE_ERROR)) break; - msleep(1); + usleep_range(1000, 2000); } if ((idx >= PHY84833_CMDHDLR_WAIT) || (val == PHY84833_STATUS_CMD_COMPLETE_ERROR)) { @@ -9628,7 +9771,7 @@ static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy, return -EINVAL; } /* Gather returning data */ - for (idx = 0; idx < PHY84833_CMDHDLR_MAX_ARGS; idx++) { + for (idx = 0; idx < argc; idx++) { bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD, MDIO_84833_CMD_HDLR_DATA1 + idx, &cmd_args[idx]); @@ -9662,7 +9805,7 @@ static int bnx2x_84833_pair_swap_cfg(struct bnx2x_phy *phy, data[1] = (u16)pair_swap; status = bnx2x_84833_cmd_hdlr(phy, params, - PHY84833_CMD_SET_PAIR_SWAP, data); + PHY84833_CMD_SET_PAIR_SWAP, data, PHY84833_CMDHDLR_MAX_ARGS); if (status == 0) DP(NETIF_MSG_LINK, "Pairswap OK, val=0x%x\n", data[1]); @@ -9740,6 +9883,95 @@ static int bnx2x_84833_hw_reset_phy(struct bnx2x_phy *phy, return 0; } +static int bnx2x_8483x_eee_timers(struct link_params *params, + struct link_vars *vars) +{ + u32 eee_idle = 0, eee_mode; + struct bnx2x *bp = params->bp; + + eee_idle = bnx2x_eee_calc_timer(params); + + if (eee_idle) { + REG_WR(bp, MISC_REG_CPMU_LP_IDLE_THR_P0 + (params->port << 2), + eee_idle); + } else if ((params->eee_mode & EEE_MODE_ENABLE_LPI) && + (params->eee_mode & EEE_MODE_OVERRIDE_NVRAM) && + (params->eee_mode & EEE_MODE_OUTPUT_TIME)) { + DP(NETIF_MSG_LINK, "Error: Tx LPI is enabled with timer 0\n"); + return -EINVAL; + } + + vars->eee_status &= ~(SHMEM_EEE_TIMER_MASK | SHMEM_EEE_TIME_OUTPUT_BIT); + if (params->eee_mode & EEE_MODE_OUTPUT_TIME) { + /* eee_idle in 1u --> eee_status in 16u */ + eee_idle >>= 4; + vars->eee_status |= (eee_idle & SHMEM_EEE_TIMER_MASK) | + SHMEM_EEE_TIME_OUTPUT_BIT; + } else { + if (bnx2x_eee_time_to_nvram(eee_idle, &eee_mode)) + return -EINVAL; + vars->eee_status |= eee_mode; + } + + return 0; +} + +static int bnx2x_8483x_disable_eee(struct bnx2x_phy *phy, + struct link_params *params, + struct link_vars *vars) +{ + int rc; + struct bnx2x *bp = params->bp; + u16 cmd_args = 0; + + DP(NETIF_MSG_LINK, "Don't Advertise 10GBase-T EEE\n"); + + /* Make Certain LPI is disabled */ + REG_WR(bp, 
MISC_REG_CPMU_LP_FW_ENABLE_P0 + (params->port << 2), 0); + REG_WR(bp, MISC_REG_CPMU_LP_DR_ENABLE, 0); + + /* Prevent Phy from working in EEE and advertising it */ + rc = bnx2x_84833_cmd_hdlr(phy, params, + PHY84833_CMD_SET_EEE_MODE, &cmd_args, 1); + if (rc) { + DP(NETIF_MSG_LINK, "EEE disable failed.\n"); + return rc; + } + + bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_EEE_ADV, 0); + vars->eee_status &= ~SHMEM_EEE_ADV_STATUS_MASK; + + return 0; +} + +static int bnx2x_8483x_enable_eee(struct bnx2x_phy *phy, + struct link_params *params, + struct link_vars *vars) +{ + int rc; + struct bnx2x *bp = params->bp; + u16 cmd_args = 1; + + DP(NETIF_MSG_LINK, "Advertise 10GBase-T EEE\n"); + + rc = bnx2x_84833_cmd_hdlr(phy, params, + PHY84833_CMD_SET_EEE_MODE, &cmd_args, 1); + if (rc) { + DP(NETIF_MSG_LINK, "EEE enable failed.\n"); + return rc; + } + + bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_EEE_ADV, 0x8); + + /* Mask events preventing LPI generation */ + REG_WR(bp, MISC_REG_CPMU_LP_MASK_EXT_P0 + (params->port << 2), 0xfc20); + + vars->eee_status &= ~SHMEM_EEE_ADV_STATUS_MASK; + vars->eee_status |= (SHMEM_EEE_10G_ADV << SHMEM_EEE_ADV_STATUS_SHIFT); + + return 0; +} + #define PHY84833_CONSTANT_LATENCY 1193 static int bnx2x_848x3_config_init(struct bnx2x_phy *phy, struct link_params *params, @@ -9752,7 +9984,7 @@ static int bnx2x_848x3_config_init(struct bnx2x_phy *phy, u16 cmd_args[PHY84833_CMDHDLR_MAX_ARGS]; int rc = 0; - msleep(1); + usleep_range(1000, 2000); if (!(CHIP_IS_E1x(bp))) port = BP_PATH(bp); @@ -9839,8 +10071,9 @@ static int bnx2x_848x3_config_init(struct bnx2x_phy *phy, cmd_args[2] = PHY84833_CONSTANT_LATENCY + 1; cmd_args[3] = PHY84833_CONSTANT_LATENCY; rc = bnx2x_84833_cmd_hdlr(phy, params, - PHY84833_CMD_SET_EEE_MODE, cmd_args); - if (rc != 0) + PHY84833_CMD_SET_EEE_MODE, cmd_args, + PHY84833_CMDHDLR_MAX_ARGS); + if (rc) DP(NETIF_MSG_LINK, "Cfg AutogrEEEn failed.\n"); } if (initialize) @@ -9864,6 +10097,48 @@ static int bnx2x_848x3_config_init(struct bnx2x_phy *phy, MDIO_CTL_REG_84823_USER_CTRL_REG, val); } + bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD, + MDIO_84833_TOP_CFG_FW_REV, &val); + + /* Configure EEE support */ + if ((val >= MDIO_84833_TOP_CFG_FW_EEE) && bnx2x_eee_has_cap(params)) { + phy->flags |= FLAGS_EEE_10GBT; + vars->eee_status |= SHMEM_EEE_10G_ADV << + SHMEM_EEE_SUPPORTED_SHIFT; + /* Propogate params' bits --> vars (for migration exposure) */ + if (params->eee_mode & EEE_MODE_ENABLE_LPI) + vars->eee_status |= SHMEM_EEE_LPI_REQUESTED_BIT; + else + vars->eee_status &= ~SHMEM_EEE_LPI_REQUESTED_BIT; + + if (params->eee_mode & EEE_MODE_ADV_LPI) + vars->eee_status |= SHMEM_EEE_REQUESTED_BIT; + else + vars->eee_status &= ~SHMEM_EEE_REQUESTED_BIT; + + rc = bnx2x_8483x_eee_timers(params, vars); + if (rc) { + DP(NETIF_MSG_LINK, "Failed to configure EEE timers\n"); + bnx2x_8483x_disable_eee(phy, params, vars); + return rc; + } + + if ((params->req_duplex[actual_phy_selection] == DUPLEX_FULL) && + (params->eee_mode & EEE_MODE_ADV_LPI) && + (bnx2x_eee_calc_timer(params) || + !(params->eee_mode & EEE_MODE_ENABLE_LPI))) + rc = bnx2x_8483x_enable_eee(phy, params, vars); + else + rc = bnx2x_8483x_disable_eee(phy, params, vars); + if (rc) { + DP(NETIF_MSG_LINK, "Failed to set EEE advertisment\n"); + return rc; + } + } else { + phy->flags &= ~FLAGS_EEE_10GBT; + vars->eee_status &= ~SHMEM_EEE_SUPPORTED_MASK; + } + if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833) { /* Bring PHY out of super isolate mode as the final step. 
*/ bnx2x_cl45_read(bp, phy, @@ -9918,17 +10193,19 @@ static u8 bnx2x_848xx_read_status(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "Legacy speed status = 0x%x\n", legacy_status); link_up = ((legacy_status & (1<<11)) == (1<<11)); - if (link_up) { - legacy_speed = (legacy_status & (3<<9)); - if (legacy_speed == (0<<9)) - vars->line_speed = SPEED_10; - else if (legacy_speed == (1<<9)) - vars->line_speed = SPEED_100; - else if (legacy_speed == (2<<9)) - vars->line_speed = SPEED_1000; - else /* Should not happen */ - vars->line_speed = 0; + legacy_speed = (legacy_status & (3<<9)); + if (legacy_speed == (0<<9)) + vars->line_speed = SPEED_10; + else if (legacy_speed == (1<<9)) + vars->line_speed = SPEED_100; + else if (legacy_speed == (2<<9)) + vars->line_speed = SPEED_1000; + else { /* Should not happen: Treat as link down */ + vars->line_speed = 0; + link_up = 0; + } + if (link_up) { if (legacy_status & (1<<8)) vars->duplex = DUPLEX_FULL; else @@ -9956,7 +10233,7 @@ static u8 bnx2x_848xx_read_status(struct bnx2x_phy *phy, } } if (link_up) { - DP(NETIF_MSG_LINK, "BCM84823: link speed is %d\n", + DP(NETIF_MSG_LINK, "BCM848x3: link speed is %d\n", vars->line_speed); bnx2x_ext_phy_resolve_fc(phy, params, vars); @@ -9995,6 +10272,31 @@ static u8 bnx2x_848xx_read_status(struct bnx2x_phy *phy, if (val & (1<<11)) vars->link_status |= LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE; + + /* Determine if EEE was negotiated */ + if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833) { + u32 eee_shmem = 0; + + bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD, + MDIO_AN_REG_EEE_ADV, &val1); + bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD, + MDIO_AN_REG_LP_EEE_ADV, &val2); + if ((val1 & val2) & 0x8) { + DP(NETIF_MSG_LINK, "EEE negotiated\n"); + vars->eee_status |= SHMEM_EEE_ACTIVE_BIT; + } + + if (val2 & 0x12) + eee_shmem |= SHMEM_EEE_100M_ADV; + if (val2 & 0x4) + eee_shmem |= SHMEM_EEE_1G_ADV; + if (val2 & 0x68) + eee_shmem |= SHMEM_EEE_10G_ADV; + + vars->eee_status &= ~SHMEM_EEE_LP_ADV_STATUS_MASK; + vars->eee_status |= (eee_shmem << + SHMEM_EEE_LP_ADV_STATUS_SHIFT); + } } return link_up; @@ -10273,7 +10575,7 @@ static int bnx2x_54618se_config_init(struct bnx2x_phy *phy, u32 cfg_pin; DP(NETIF_MSG_LINK, "54618SE cfg init\n"); - usleep_range(1000, 1000); + usleep_range(1000, 2000); /* This works with E3 only, no need to check the chip * before determining the port. 
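/* The read_status hunk above declares EEE negotiated when bit 3 is set in
 * both the local advertisement (MDIO_AN_REG_EEE_ADV) and the link partner's
 * (MDIO_AN_REG_LP_EEE_ADV); in the 802.3az layout of clause 45 registers
 * 7.60/7.61 that bit is the 10GBASE-T EEE capability.  Sketch of the
 * resolution (helper name is illustrative, not part of the patch):
 */
static bool bnx2x_eee_10gbt_negotiated(u16 local_adv, u16 lp_adv)
{
	return !!((local_adv & lp_adv) & (1 << 3));
}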
@@ -10342,7 +10644,7 @@ static int bnx2x_54618se_config_init(struct bnx2x_phy *phy, MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH) fc_val |= MDIO_AN_REG_ADV_PAUSE_PAUSE; - /* read all advertisement */ + /* Read all advertisement */ bnx2x_cl22_read(bp, phy, 0x09, &an_1000_val); @@ -10379,7 +10681,7 @@ static int bnx2x_54618se_config_init(struct bnx2x_phy *phy, 0x09, &an_1000_val); - /* set 100 speed advertisement */ + /* Set 100 speed advertisement */ if (((phy->req_line_speed == SPEED_AUTO_NEG) && (phy->speed_cap_mask & (PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL | @@ -10393,7 +10695,7 @@ static int bnx2x_54618se_config_init(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "Advertising 100M\n"); } - /* set 10 speed advertisement */ + /* Set 10 speed advertisement */ if (((phy->req_line_speed == SPEED_AUTO_NEG) && (phy->speed_cap_mask & (PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL | @@ -10532,7 +10834,7 @@ static u8 bnx2x_54618se_read_status(struct bnx2x_phy *phy, /* Get speed operation status */ bnx2x_cl22_read(bp, phy, - 0x19, + MDIO_REG_GPHY_AUX_STATUS, &legacy_status); DP(NETIF_MSG_LINK, "54618SE read_status: 0x%x\n", legacy_status); @@ -10759,7 +11061,7 @@ static u8 bnx2x_7101_read_status(struct bnx2x_phy *phy, DP(NETIF_MSG_LINK, "10G-base-T PMA status 0x%x->0x%x\n", val2, val1); link_up = ((val1 & 4) == 4); - /* if link is up print the AN outcome of the SFX7101 PHY */ + /* If link is up print the AN outcome of the SFX7101 PHY */ if (link_up) { bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD, MDIO_AN_REG_MASTER_STATUS, @@ -10771,7 +11073,7 @@ static u8 bnx2x_7101_read_status(struct bnx2x_phy *phy, bnx2x_ext_phy_10G_an_resolve(bp, phy, vars); bnx2x_ext_phy_resolve_fc(phy, params, vars); - /* read LP advertised speeds */ + /* Read LP advertised speeds */ if (val2 & (1<<11)) vars->link_status |= LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE; @@ -11090,7 +11392,7 @@ static struct bnx2x_phy phy_8706 = { SUPPORTED_FIBRE | SUPPORTED_Pause | SUPPORTED_Asym_Pause), - .media_type = ETH_PHY_SFP_FIBER, + .media_type = ETH_PHY_SFPP_10G_FIBER, .ver_addr = 0, .req_flow_ctrl = 0, .req_line_speed = 0, @@ -11249,7 +11551,8 @@ static struct bnx2x_phy phy_84833 = { .def_md_devad = 0, .flags = (FLAGS_FAN_FAILURE_DET_REQ | FLAGS_REARM_LATCH_SIGNAL | - FLAGS_TX_ERROR_CHECK), + FLAGS_TX_ERROR_CHECK | + FLAGS_EEE_10GBT), .rx_preemphasis = {0xffff, 0xffff, 0xffff, 0xffff}, .tx_preemphasis = {0xffff, 0xffff, 0xffff, 0xffff}, .mdio_ctrl = 0, @@ -11428,7 +11731,7 @@ static int bnx2x_populate_int_phy(struct bnx2x *bp, u32 shmem_base, u8 port, SUPPORTED_FIBRE | SUPPORTED_Pause | SUPPORTED_Asym_Pause); - phy->media_type = ETH_PHY_SFP_FIBER; + phy->media_type = ETH_PHY_SFPP_10G_FIBER; break; case PORT_HW_CFG_NET_SERDES_IF_KR: phy->media_type = ETH_PHY_KR; @@ -11968,7 +12271,7 @@ int bnx2x_phy_init(struct link_params *params, struct link_vars *vars) vars->mac_type = MAC_TYPE_NONE; vars->phy_flags = 0; - /* disable attentions */ + /* Disable attentions */ bnx2x_bits_dis(bp, NIG_REG_MASK_INTERRUPT_PORT0 + params->port*4, (NIG_MASK_XGXS0_LINK_STATUS | NIG_MASK_XGXS0_LINK10G | @@ -12017,6 +12320,8 @@ int bnx2x_phy_init(struct link_params *params, struct link_vars *vars) break; } bnx2x_update_mng(params, vars->link_status); + + bnx2x_update_mng_eee(params, vars->eee_status); return 0; } @@ -12026,19 +12331,22 @@ int bnx2x_link_reset(struct link_params *params, struct link_vars *vars, struct bnx2x *bp = params->bp; u8 phy_index, port = params->port, clear_latch_ind = 0; DP(NETIF_MSG_LINK, "Resetting the link of port %d\n", port); - /* disable attentions 
*/ + /* Disable attentions */ vars->link_status = 0; bnx2x_update_mng(params, vars->link_status); + vars->eee_status &= ~(SHMEM_EEE_LP_ADV_STATUS_MASK | + SHMEM_EEE_ACTIVE_BIT); + bnx2x_update_mng_eee(params, vars->eee_status); bnx2x_bits_dis(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4, (NIG_MASK_XGXS0_LINK_STATUS | NIG_MASK_XGXS0_LINK10G | NIG_MASK_SERDES0_LINK_STATUS | NIG_MASK_MI_INT)); - /* activate nig drain */ + /* Activate nig drain */ REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + port*4, 1); - /* disable nig egress interface */ + /* Disable nig egress interface */ if (!CHIP_IS_E3(bp)) { REG_WR(bp, NIG_REG_BMAC0_OUT_EN + port*4, 0); REG_WR(bp, NIG_REG_EGRESS_EMAC0_OUT_EN + port*4, 0); @@ -12051,15 +12359,15 @@ int bnx2x_link_reset(struct link_params *params, struct link_vars *vars, bnx2x_xmac_disable(params); bnx2x_umac_disable(params); } - /* disable emac */ + /* Disable emac */ if (!CHIP_IS_E3(bp)) REG_WR(bp, NIG_REG_NIG_EMAC0_EN + port*4, 0); - msleep(10); + usleep_range(10000, 20000); /* The PHY reset is controlled by GPIO 1 * Hold it as vars low */ - /* clear link led */ + /* Clear link led */ bnx2x_set_mdio_clk(bp, params->chip_id, port); bnx2x_set_led(params, vars, LED_MODE_OFF, 0); @@ -12089,9 +12397,9 @@ int bnx2x_link_reset(struct link_params *params, struct link_vars *vars, params->phy[INT_PHY].link_reset( ¶ms->phy[INT_PHY], params); - /* disable nig ingress interface */ + /* Disable nig ingress interface */ if (!CHIP_IS_E3(bp)) { - /* reset BigMac */ + /* Reset BigMac */ REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR, (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port)); REG_WR(bp, NIG_REG_BMAC0_IN_EN + port*4, 0); @@ -12148,7 +12456,7 @@ static int bnx2x_8073_common_init_phy(struct bnx2x *bp, DP(NETIF_MSG_LINK, "populate_phy failed\n"); return -EINVAL; } - /* disable attentions */ + /* Disable attentions */ bnx2x_bits_dis(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port_of_path*4, (NIG_MASK_XGXS0_LINK_STATUS | @@ -12222,7 +12530,7 @@ static int bnx2x_8073_common_init_phy(struct bnx2x *bp, bnx2x_cl45_write(bp, phy_blk[port], MDIO_PMA_DEVAD, MDIO_PMA_REG_TX_POWER_DOWN, (val & (~(1<<10)))); - msleep(15); + usleep_range(15000, 30000); /* Read modify write the SPI-ROM version select register */ bnx2x_cl45_read(bp, phy_blk[port], @@ -12254,7 +12562,7 @@ static int bnx2x_8726_common_init_phy(struct bnx2x *bp, REG_WR(bp, MISC_REG_GPIO_EVENT_EN, val); bnx2x_ext_phy_hw_reset(bp, 0); - msleep(5); + usleep_range(5000, 10000); for (port = 0; port < PORT_MAX; port++) { u32 shmem_base, shmem2_base; @@ -12361,11 +12669,11 @@ static int bnx2x_8727_common_init_phy(struct bnx2x *bp, /* Initiate PHY reset*/ bnx2x_set_gpio(bp, reset_gpio, MISC_REGISTERS_GPIO_OUTPUT_LOW, port); - msleep(1); + usleep_range(1000, 2000); bnx2x_set_gpio(bp, reset_gpio, MISC_REGISTERS_GPIO_OUTPUT_HIGH, port); - msleep(5); + usleep_range(5000, 10000); /* PART1 - Reset both phys */ for (port = PORT_MAX - 1; port >= PORT_0; port--) { @@ -12459,7 +12767,7 @@ static int bnx2x_84833_pre_init_phy(struct bnx2x *bp, MDIO_PMA_REG_CTRL, &val); if (!(val & (1<<15))) break; - msleep(1); + usleep_range(1000, 2000); } if (cnt >= 1500) { DP(NETIF_MSG_LINK, "84833 reset timeout\n"); @@ -12549,7 +12857,7 @@ static int bnx2x_ext_phy_common_init(struct bnx2x *bp, u32 shmem_base_path[], break; } - if (rc != 0) + if (rc) netdev_err(bp->dev, "Warning: PHY was not initialized," " Port %d\n", 0); @@ -12630,30 +12938,41 @@ static void bnx2x_check_over_curr(struct link_params *params, vars->phy_flags &= ~PHY_OVER_CURRENT_FLAG; } -static void 
bnx2x_analyze_link_error(struct link_params *params, - struct link_vars *vars, u32 lss_status, - u8 notify) +/* Returns 0 if no change occured since last check; 1 otherwise. */ +static u8 bnx2x_analyze_link_error(struct link_params *params, + struct link_vars *vars, u32 status, + u32 phy_flag, u32 link_flag, u8 notify) { struct bnx2x *bp = params->bp; /* Compare new value with previous value */ u8 led_mode; - u32 half_open_conn = (vars->phy_flags & PHY_HALF_OPEN_CONN_FLAG) > 0; + u32 old_status = (vars->phy_flags & phy_flag) ? 1 : 0; - if ((lss_status ^ half_open_conn) == 0) - return; + if ((status ^ old_status) == 0) + return 0; /* If values differ */ - DP(NETIF_MSG_LINK, "Link changed:%x %x->%x\n", vars->link_up, - half_open_conn, lss_status); + switch (phy_flag) { + case PHY_HALF_OPEN_CONN_FLAG: + DP(NETIF_MSG_LINK, "Analyze Remote Fault\n"); + break; + case PHY_SFP_TX_FAULT_FLAG: + DP(NETIF_MSG_LINK, "Analyze TX Fault\n"); + break; + default: + DP(NETIF_MSG_LINK, "Analyze UNKOWN\n"); + } + DP(NETIF_MSG_LINK, "Link changed:[%x %x]->%x\n", vars->link_up, + old_status, status); /* a. Update shmem->link_status accordingly * b. Update link_vars->link_up */ - if (lss_status) { - DP(NETIF_MSG_LINK, "Remote Fault detected !!!\n"); + if (status) { vars->link_status &= ~LINK_STATUS_LINK_UP; + vars->link_status |= link_flag; vars->link_up = 0; - vars->phy_flags |= PHY_HALF_OPEN_CONN_FLAG; + vars->phy_flags |= phy_flag; /* activate nig drain */ REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 1); @@ -12662,10 +12981,10 @@ static void bnx2x_analyze_link_error(struct link_params *params, */ led_mode = LED_MODE_OFF; } else { - DP(NETIF_MSG_LINK, "Remote Fault cleared\n"); vars->link_status |= LINK_STATUS_LINK_UP; + vars->link_status &= ~link_flag; vars->link_up = 1; - vars->phy_flags &= ~PHY_HALF_OPEN_CONN_FLAG; + vars->phy_flags &= ~phy_flag; led_mode = LED_MODE_OPER; /* Clear nig drain */ @@ -12682,6 +13001,8 @@ static void bnx2x_analyze_link_error(struct link_params *params, vars->periodic_flags |= PERIODIC_FLAGS_LINK_EVENT; if (notify) bnx2x_notify_link_changed(bp); + + return 1; } /****************************************************************************** @@ -12723,7 +13044,9 @@ int bnx2x_check_half_open_conn(struct link_params *params, if (REG_RD(bp, mac_base + XMAC_REG_RX_LSS_STATUS)) lss_status = 1; - bnx2x_analyze_link_error(params, vars, lss_status, notify); + bnx2x_analyze_link_error(params, vars, lss_status, + PHY_HALF_OPEN_CONN_FLAG, + LINK_STATUS_NONE, notify); } else if (REG_RD(bp, MISC_REG_RESET_REG_2) & (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << params->port)) { /* Check E1X / E2 BMAC */ @@ -12740,11 +13063,55 @@ int bnx2x_check_half_open_conn(struct link_params *params, REG_RD_DMAE(bp, mac_base + lss_status_reg, wb_data, 2); lss_status = (wb_data[0] > 0); - bnx2x_analyze_link_error(params, vars, lss_status, notify); + bnx2x_analyze_link_error(params, vars, lss_status, + PHY_HALF_OPEN_CONN_FLAG, + LINK_STATUS_NONE, notify); } return 0; } +static void bnx2x_sfp_tx_fault_detection(struct bnx2x_phy *phy, + struct link_params *params, + struct link_vars *vars) +{ + struct bnx2x *bp = params->bp; + u32 cfg_pin, value = 0; + u8 led_change, port = params->port; + /* Get The SFP+ TX_Fault controlling pin ([eg]pio) */ + cfg_pin = (REG_RD(bp, params->shmem_base + offsetof(struct shmem_region, + dev_info.port_hw_config[port].e3_cmn_pin_cfg)) & + PORT_HW_CFG_E3_TX_FAULT_MASK) >> + PORT_HW_CFG_E3_TX_FAULT_SHIFT; + + if (bnx2x_get_cfg_pin(bp, cfg_pin, &value)) { + DP(NETIF_MSG_LINK, 
"Failed to read pin 0x%02x\n", cfg_pin); + return; + } + + led_change = bnx2x_analyze_link_error(params, vars, value, + PHY_SFP_TX_FAULT_FLAG, + LINK_STATUS_SFP_TX_FAULT, 1); + + if (led_change) { + /* Change TX_Fault led, set link status for further syncs */ + u8 led_mode; + + if (vars->phy_flags & PHY_SFP_TX_FAULT_FLAG) { + led_mode = MISC_REGISTERS_GPIO_HIGH; + vars->link_status |= LINK_STATUS_SFP_TX_FAULT; + } else { + led_mode = MISC_REGISTERS_GPIO_LOW; + vars->link_status &= ~LINK_STATUS_SFP_TX_FAULT; + } + + /* If module is unapproved, led should be on regardless */ + if (!(phy->flags & FLAGS_SFP_NOT_APPROVED)) { + DP(NETIF_MSG_LINK, "Change TX_Fault LED: ->%x\n", + led_mode); + bnx2x_set_e3_module_fault_led(params, led_mode); + } + } +} void bnx2x_period_func(struct link_params *params, struct link_vars *vars) { u16 phy_idx; @@ -12763,7 +13130,26 @@ void bnx2x_period_func(struct link_params *params, struct link_vars *vars) struct bnx2x_phy *phy = ¶ms->phy[INT_PHY]; bnx2x_set_aer_mmd(params, phy); bnx2x_check_over_curr(params, vars); - bnx2x_warpcore_config_runtime(phy, params, vars); + if (vars->rx_tx_asic_rst) + bnx2x_warpcore_config_runtime(phy, params, vars); + + if ((REG_RD(bp, params->shmem_base + + offsetof(struct shmem_region, dev_info. + port_hw_config[params->port].default_cfg)) + & PORT_HW_CFG_NET_SERDES_IF_MASK) == + PORT_HW_CFG_NET_SERDES_IF_SFI) { + if (bnx2x_is_sfp_module_plugged(phy, params)) { + bnx2x_sfp_tx_fault_detection(phy, params, vars); + } else if (vars->link_status & + LINK_STATUS_SFP_TX_FAULT) { + /* Clean trail, interrupt corrects the leds */ + vars->link_status &= ~LINK_STATUS_SFP_TX_FAULT; + vars->phy_flags &= ~PHY_SFP_TX_FAULT_FLAG; + /* Update link status in the shared memory */ + bnx2x_update_mng(params, vars->link_status); + } + } + } } diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h index ea4371f4335f..51cac8130051 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h @@ -41,6 +41,7 @@ #define SPEED_AUTO_NEG 0 #define SPEED_20000 20000 +#define SFP_EEPROM_PAGE_SIZE 16 #define SFP_EEPROM_VENDOR_NAME_ADDR 0x14 #define SFP_EEPROM_VENDOR_NAME_SIZE 16 #define SFP_EEPROM_VENDOR_OUI_ADDR 0x25 @@ -125,6 +126,11 @@ typedef void (*set_link_led_t)(struct bnx2x_phy *phy, struct link_params *params, u8 mode); typedef void (*phy_specific_func_t)(struct bnx2x_phy *phy, struct link_params *params, u32 action); +struct bnx2x_reg_set { + u8 devad; + u16 reg; + u16 val; +}; struct bnx2x_phy { u32 type; @@ -149,6 +155,7 @@ struct bnx2x_phy { #define FLAGS_DUMMY_READ (1<<9) #define FLAGS_MDC_MDIO_WA_B0 (1<<10) #define FLAGS_TX_ERROR_CHECK (1<<12) +#define FLAGS_EEE_10GBT (1<<13) /* preemphasis values for the rx side */ u16 rx_preemphasis[4]; @@ -162,14 +169,15 @@ struct bnx2x_phy { u32 supported; u32 media_type; -#define ETH_PHY_UNSPECIFIED 0x0 -#define ETH_PHY_SFP_FIBER 0x1 -#define ETH_PHY_XFP_FIBER 0x2 -#define ETH_PHY_DA_TWINAX 0x3 -#define ETH_PHY_BASE_T 0x4 -#define ETH_PHY_KR 0xf0 -#define ETH_PHY_CX4 0xf1 -#define ETH_PHY_NOT_PRESENT 0xff +#define ETH_PHY_UNSPECIFIED 0x0 +#define ETH_PHY_SFPP_10G_FIBER 0x1 +#define ETH_PHY_XFP_FIBER 0x2 +#define ETH_PHY_DA_TWINAX 0x3 +#define ETH_PHY_BASE_T 0x4 +#define ETH_PHY_SFP_1G_FIBER 0x5 +#define ETH_PHY_KR 0xf0 +#define ETH_PHY_CX4 0xf1 +#define ETH_PHY_NOT_PRESENT 0xff /* The address in which version is located*/ u32 ver_addr; @@ -265,6 +273,30 @@ struct link_params { u8 num_phys; u8 rsrv; + + 
/* Used to configure the EEE Tx LPI timer, has several modes of + * operation, according to bits 29:28 - + * 2'b00: Timer will be configured by nvram, output will be the value + * from nvram. + * 2'b01: Timer will be configured by nvram, output will be in + * microseconds. + * 2'b10: bits 1:0 contain an nvram value which will be used instead + * of the one located in the nvram. Output will be that value. + * 2'b11: bits 19:0 contain the idle timer in microseconds; output + * will be in microseconds. + * Bits 31:30 should be 2'b11 in order for EEE to be enabled. + */ + u32 eee_mode; +#define EEE_MODE_NVRAM_BALANCED_TIME (0xa00) +#define EEE_MODE_NVRAM_AGGRESSIVE_TIME (0x100) +#define EEE_MODE_NVRAM_LATENCY_TIME (0x6000) +#define EEE_MODE_NVRAM_MASK (0x3) +#define EEE_MODE_TIMER_MASK (0xfffff) +#define EEE_MODE_OUTPUT_TIME (1<<28) +#define EEE_MODE_OVERRIDE_NVRAM (1<<29) +#define EEE_MODE_ENABLE_LPI (1<<30) +#define EEE_MODE_ADV_LPI (1<<31) + u16 hw_led_mode; /* part of the hw_config read from the shmem */ u32 multi_phy_config; @@ -282,6 +314,7 @@ struct link_vars { #define PHY_PHYSICAL_LINK_FLAG (1<<2) #define PHY_HALF_OPEN_CONN_FLAG (1<<3) #define PHY_OVER_CURRENT_FLAG (1<<4) +#define PHY_SFP_TX_FAULT_FLAG (1<<5) u8 mac_type; #define MAC_TYPE_NONE 0 @@ -301,6 +334,7 @@ struct link_vars { /* The same definitions as the shmem parameter */ u32 link_status; + u32 eee_status; u8 fault_detected; u8 rsrv1; u16 periodic_flags; @@ -459,8 +493,7 @@ struct bnx2x_ets_params { struct bnx2x_ets_cos_params cos[DCBX_MAX_NUM_COS]; }; -/** - * Used to update the PFC attributes in EMAC, BMAC, NIG and BRB +/* Used to update the PFC attributes in EMAC, BMAC, NIG and BRB * when link is already up */ int bnx2x_update_pfc(struct link_params *params, diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index f755a665dab3..9aaf863b4237 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -74,6 +74,8 @@ #define FW_FILE_NAME_E1H "bnx2x/bnx2x-e1h-" FW_FILE_VERSION ".fw" #define FW_FILE_NAME_E2 "bnx2x/bnx2x-e2-" FW_FILE_VERSION ".fw" +#define MAC_LEADING_ZERO_CNT (ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN) + /* Time in jiffies before concluding the transmitter is hung */ #define TX_TIMEOUT (5*HZ) @@ -104,7 +106,7 @@ MODULE_PARM_DESC(disable_tpa, " Disable the TPA (LRO) feature"); #define INT_MODE_INTx 1 #define INT_MODE_MSI 2 -static int int_mode; +int int_mode; module_param(int_mode, int, 0); MODULE_PARM_DESC(int_mode, " Force interrupt mode other than MSI-X " "(1 INT#x; 2 MSI)"); @@ -135,7 +137,10 @@ enum bnx2x_board_type { BCM57800_MF, BCM57810, BCM57810_MF, - BCM57840, + BCM57840_O, + BCM57840_4_10, + BCM57840_2_20, + BCM57840_MFO, BCM57840_MF, BCM57811, BCM57811_MF @@ -155,6 +160,9 @@ static struct { { "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet" }, { "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function" }, { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet" }, + { "Broadcom NetXtreme II BCM57840 10 Gigabit Ethernet" }, + { "Broadcom NetXtreme II BCM57840 20 Gigabit Ethernet" }, + { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function"}, { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function"}, { "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet"}, { "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet Multi Function"}, @@ -187,8 +195,17 @@ static struct { #ifndef PCI_DEVICE_ID_NX2_57810_MF #define PCI_DEVICE_ID_NX2_57810_MF 
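/* Example of composing eee_mode per the comment above: request a 2048 us Tx
 * LPI idle timer taken from bits 19:0 (bits 29:28 = 2'b11, timer override
 * with output in microseconds) with EEE fully enabled (bits 31:30 = 2'b11).
 * The timer value is illustrative only.
 */
u32 eee_mode = (2048 & EEE_MODE_TIMER_MASK) |	/* idle timer, microseconds */
	       EEE_MODE_OUTPUT_TIME |		/* bit 28 */
	       EEE_MODE_OVERRIDE_NVRAM |	/* bit 29 */
	       EEE_MODE_ENABLE_LPI |		/* bit 30 */
	       EEE_MODE_ADV_LPI;		/* bit 31 */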
CHIP_NUM_57810_MF #endif -#ifndef PCI_DEVICE_ID_NX2_57840 -#define PCI_DEVICE_ID_NX2_57840 CHIP_NUM_57840 +#ifndef PCI_DEVICE_ID_NX2_57840_O +#define PCI_DEVICE_ID_NX2_57840_O CHIP_NUM_57840_OBSOLETE +#endif +#ifndef PCI_DEVICE_ID_NX2_57840_4_10 +#define PCI_DEVICE_ID_NX2_57840_4_10 CHIP_NUM_57840_4_10 +#endif +#ifndef PCI_DEVICE_ID_NX2_57840_2_20 +#define PCI_DEVICE_ID_NX2_57840_2_20 CHIP_NUM_57840_2_20 +#endif +#ifndef PCI_DEVICE_ID_NX2_57840_MFO +#define PCI_DEVICE_ID_NX2_57840_MFO CHIP_NUM_57840_MF_OBSOLETE #endif #ifndef PCI_DEVICE_ID_NX2_57840_MF #define PCI_DEVICE_ID_NX2_57840_MF CHIP_NUM_57840_MF @@ -209,7 +226,10 @@ static DEFINE_PCI_DEVICE_TABLE(bnx2x_pci_tbl) = { { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57800_MF), BCM57800_MF }, { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57810), BCM57810 }, { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57810_MF), BCM57810_MF }, - { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840), BCM57840 }, + { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_O), BCM57840_O }, + { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_4_10), BCM57840_4_10 }, + { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_2_20), BCM57840_2_20 }, + { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_MFO), BCM57840_MFO }, { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_MF), BCM57840_MF }, { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57811), BCM57811 }, { PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57811_MF), BCM57811_MF }, @@ -758,7 +778,7 @@ void bnx2x_panic_dump(struct bnx2x *bp) /* Tx */ for_each_cos_in_tx_queue(fp, cos) { - txdata = fp->txdata[cos]; + txdata = *fp->txdata_ptr[cos]; BNX2X_ERR("fp%d: tx_pkt_prod(0x%x) tx_pkt_cons(0x%x) tx_bd_prod(0x%x) tx_bd_cons(0x%x) *tx_cons_sb(0x%x)\n", i, txdata.tx_pkt_prod, txdata.tx_pkt_cons, txdata.tx_bd_prod, @@ -876,7 +896,7 @@ void bnx2x_panic_dump(struct bnx2x *bp) for_each_tx_queue(bp, i) { struct bnx2x_fastpath *fp = &bp->fp[i]; for_each_cos_in_tx_queue(fp, cos) { - struct bnx2x_fp_txdata *txdata = &fp->txdata[cos]; + struct bnx2x_fp_txdata *txdata = fp->txdata_ptr[cos]; start = TX_BD(le16_to_cpu(*txdata->tx_cons_sb) - 10); end = TX_BD(le16_to_cpu(*txdata->tx_cons_sb) + 245); @@ -1583,7 +1603,7 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe) int cid = SW_CID(rr_cqe->ramrod_cqe.conn_and_cmd_data); int command = CQE_CMD(rr_cqe->ramrod_cqe.conn_and_cmd_data); enum bnx2x_queue_cmd drv_cmd = BNX2X_Q_CMD_MAX; - struct bnx2x_queue_sp_obj *q_obj = &fp->q_obj; + struct bnx2x_queue_sp_obj *q_obj = &bnx2x_sp_obj(bp, fp).q_obj; DP(BNX2X_MSG_SP, "fp %d cid %d got ramrod #%d state is %x type is %d\n", @@ -1710,7 +1730,7 @@ irqreturn_t bnx2x_interrupt(int irq, void *dev_instance) /* Handle Rx or Tx according to SB id */ prefetch(fp->rx_cons_sb); for_each_cos_in_tx_queue(fp, cos) - prefetch(fp->txdata[cos].tx_cons_sb); + prefetch(fp->txdata_ptr[cos]->tx_cons_sb); prefetch(&fp->sb_running_index[SM_RX_ID]); napi_schedule(&bnx2x_fp(bp, fp->index, napi)); status &= ~mask; @@ -2124,6 +2144,11 @@ u8 bnx2x_initial_phy_init(struct bnx2x *bp, int load_mode) } } + if (load_mode == LOAD_LOOPBACK_EXT) { + struct link_params *lp = &bp->link_params; + lp->loopback_mode = LOOPBACK_EXT; + } + rc = bnx2x_phy_init(&bp->link_params, &bp->link_vars); bnx2x_release_phy_lock(bp); @@ -2916,7 +2941,7 @@ static void bnx2x_pf_tx_q_prep(struct bnx2x *bp, struct bnx2x_fastpath *fp, struct bnx2x_txq_setup_params *txq_init, u8 cos) { - txq_init->dscr_map = fp->txdata[cos].tx_desc_mapping; + txq_init->dscr_map = fp->txdata_ptr[cos]->tx_desc_mapping; txq_init->sb_cq_index = 
HC_INDEX_ETH_FIRST_TX_CQ_CONS + cos; txq_init->traffic_type = LLFC_TRAFFIC_TYPE_NW; txq_init->fw_sb_id = fp->fw_sb_id; @@ -3030,9 +3055,9 @@ static void bnx2x_drv_info_ether_stat(struct bnx2x *bp) memcpy(ether_stat->version, DRV_MODULE_VERSION, ETH_STAT_INFO_VERSION_LEN - 1); - bp->fp[0].mac_obj.get_n_elements(bp, &bp->fp[0].mac_obj, - DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED, - ether_stat->mac_local); + bp->sp_objs[0].mac_obj.get_n_elements(bp, &bp->sp_objs[0].mac_obj, + DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED, + ether_stat->mac_local); ether_stat->mtu_size = bp->dev->mtu; @@ -3055,7 +3080,8 @@ static void bnx2x_drv_info_fcoe_stat(struct bnx2x *bp) struct fcoe_stats_info *fcoe_stat = &bp->slowpath->drv_info_to_mcp.fcoe_stat; - memcpy(fcoe_stat->mac_local, bp->fip_mac, ETH_ALEN); + memcpy(fcoe_stat->mac_local + MAC_LEADING_ZERO_CNT, + bp->fip_mac, ETH_ALEN); fcoe_stat->qos_priority = app->traffic_type_priority[LLFC_TRAFFIC_TYPE_FCOE]; @@ -3063,11 +3089,11 @@ static void bnx2x_drv_info_fcoe_stat(struct bnx2x *bp) /* insert FCoE stats from ramrod response */ if (!NO_FCOE(bp)) { struct tstorm_per_queue_stats *fcoe_q_tstorm_stats = - &bp->fw_stats_data->queue_stats[FCOE_IDX]. + &bp->fw_stats_data->queue_stats[FCOE_IDX(bp)]. tstorm_queue_statistics; struct xstorm_per_queue_stats *fcoe_q_xstorm_stats = - &bp->fw_stats_data->queue_stats[FCOE_IDX]. + &bp->fw_stats_data->queue_stats[FCOE_IDX(bp)]. xstorm_queue_statistics; struct fcoe_statistics_params *fw_fcoe_stat = @@ -3146,7 +3172,8 @@ static void bnx2x_drv_info_iscsi_stat(struct bnx2x *bp) struct iscsi_stats_info *iscsi_stat = &bp->slowpath->drv_info_to_mcp.iscsi_stat; - memcpy(iscsi_stat->mac_local, bp->cnic_eth_dev.iscsi_mac, ETH_ALEN); + memcpy(iscsi_stat->mac_local + MAC_LEADING_ZERO_CNT, + bp->cnic_eth_dev.iscsi_mac, ETH_ALEN); iscsi_stat->qos_priority = app->traffic_type_priority[LLFC_TRAFFIC_TYPE_ISCSI]; @@ -3176,6 +3203,12 @@ static void bnx2x_set_mf_bw(struct bnx2x *bp) bnx2x_fw_command(bp, DRV_MSG_CODE_SET_MF_BW_ACK, 0); } +static void bnx2x_handle_eee_event(struct bnx2x *bp) +{ + DP(BNX2X_MSG_MCP, "EEE - LLDP event\n"); + bnx2x_fw_command(bp, DRV_MSG_CODE_EEE_RESULTS_ACK, 0); +} + static void bnx2x_handle_drv_info_req(struct bnx2x *bp) { enum drv_info_opcode op_code; @@ -3742,6 +3775,8 @@ static void bnx2x_attn_int_deasserted3(struct bnx2x *bp, u32 attn) if (val & DRV_STATUS_AFEX_EVENT_MASK) bnx2x_handle_afex_cmd(bp, val & DRV_STATUS_AFEX_EVENT_MASK); + if (val & DRV_STATUS_EEE_NEGOTIATION_RESULTS) + bnx2x_handle_eee_event(bp); if (bp->link_vars.periodic_flags & PERIODIC_FLAGS_LINK_EVENT) { /* sync with link */ @@ -4615,11 +4650,11 @@ static void bnx2x_handle_classification_eqe(struct bnx2x *bp, case BNX2X_FILTER_MAC_PENDING: DP(BNX2X_MSG_SP, "Got SETUP_MAC completions\n"); #ifdef BCM_CNIC - if (cid == BNX2X_ISCSI_ETH_CID) + if (cid == BNX2X_ISCSI_ETH_CID(bp)) vlan_mac_obj = &bp->iscsi_l2_mac_obj; else #endif - vlan_mac_obj = &bp->fp[cid].mac_obj; + vlan_mac_obj = &bp->sp_objs[cid].mac_obj; break; case BNX2X_FILTER_MCAST_PENDING: @@ -4717,7 +4752,7 @@ static void bnx2x_after_function_update(struct bnx2x *bp) for_each_eth_queue(bp, q) { /* Set the appropriate Queue object */ fp = &bp->fp[q]; - queue_params.q_obj = &fp->q_obj; + queue_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj; /* send the ramrod */ rc = bnx2x_queue_state_change(bp, &queue_params); @@ -4728,8 +4763,8 @@ static void bnx2x_after_function_update(struct bnx2x *bp) #ifdef BCM_CNIC if (!NO_FCOE(bp)) { - fp = &bp->fp[FCOE_IDX]; - queue_params.q_obj = &fp->q_obj; + fp = 
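/* The MAC_LEADING_ZERO_CNT offset used in the two memcpy() calls above works
 * out to ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN = ALIGN(6, 4) - 6 = 2, i.e.
 * the 6-byte MAC is placed in the last six bytes of the u32-aligned
 * mac_local field, leaving two leading zero bytes (hence the name).
 */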
&bp->fp[FCOE_IDX(bp)]; + queue_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj; /* clear pending completion bit */ __clear_bit(RAMROD_COMP_WAIT, &queue_params.ramrod_flags); @@ -4761,11 +4796,11 @@ static struct bnx2x_queue_sp_obj *bnx2x_cid_to_q_obj( { DP(BNX2X_MSG_SP, "retrieving fp from cid %d\n", cid); #ifdef BCM_CNIC - if (cid == BNX2X_FCOE_ETH_CID) - return &bnx2x_fcoe(bp, q_obj); + if (cid == BNX2X_FCOE_ETH_CID(bp)) + return &bnx2x_fcoe_sp_obj(bp, q_obj); else #endif - return &bnx2x_fp(bp, CID_TO_FP(cid), q_obj); + return &bp->sp_objs[CID_TO_FP(cid, bp)].q_obj; } static void bnx2x_eq_int(struct bnx2x *bp) @@ -5647,15 +5682,15 @@ static void bnx2x_init_eth_fp(struct bnx2x *bp, int fp_idx) /* init tx data */ for_each_cos_in_tx_queue(fp, cos) { - bnx2x_init_txdata(bp, &fp->txdata[cos], - CID_COS_TO_TX_ONLY_CID(fp->cid, cos), - FP_COS_TO_TXQ(fp, cos), - BNX2X_TX_SB_INDEX_BASE + cos); - cids[cos] = fp->txdata[cos].cid; + bnx2x_init_txdata(bp, fp->txdata_ptr[cos], + CID_COS_TO_TX_ONLY_CID(fp->cid, cos, bp), + FP_COS_TO_TXQ(fp, cos, bp), + BNX2X_TX_SB_INDEX_BASE + cos, fp); + cids[cos] = fp->txdata_ptr[cos]->cid; } - bnx2x_init_queue_obj(bp, &fp->q_obj, fp->cl_id, cids, fp->max_cos, - BP_FUNC(bp), bnx2x_sp(bp, q_rdata), + bnx2x_init_queue_obj(bp, &bnx2x_sp_obj(bp, fp).q_obj, fp->cl_id, cids, + fp->max_cos, BP_FUNC(bp), bnx2x_sp(bp, q_rdata), bnx2x_sp_mapping(bp, q_rdata), q_type); /** @@ -5706,7 +5741,7 @@ static void bnx2x_init_tx_rings(struct bnx2x *bp) for_each_tx_queue(bp, i) for_each_cos_in_tx_queue(&bp->fp[i], cos) - bnx2x_init_tx_ring_one(&bp->fp[i].txdata[cos]); + bnx2x_init_tx_ring_one(bp->fp[i].txdata_ptr[cos]); } void bnx2x_nic_init(struct bnx2x *bp, u32 load_code) @@ -7055,12 +7090,10 @@ static int bnx2x_init_hw_func(struct bnx2x *bp) cdu_ilt_start = ilt->clients[ILT_CLIENT_CDU].start; for (i = 0; i < L2_ILT_LINES(bp); i++) { - ilt->lines[cdu_ilt_start + i].page = - bp->context.vcxt + (ILT_PAGE_CIDS * i); + ilt->lines[cdu_ilt_start + i].page = bp->context[i].vcxt; ilt->lines[cdu_ilt_start + i].page_mapping = - bp->context.cxt_mapping + (CDU_ILT_PAGE_SZ * i); - /* cdu ilt pages are allocated manually so there's no need to - set the size */ + bp->context[i].cxt_mapping; + ilt->lines[cdu_ilt_start + i].size = bp->context[i].size; } bnx2x_ilt_init_op(bp, INITOP_SET); @@ -7327,6 +7360,8 @@ static int bnx2x_init_hw_func(struct bnx2x *bp) void bnx2x_free_mem(struct bnx2x *bp) { + int i; + /* fastpath */ bnx2x_free_fp_mem(bp); /* end of fastpath */ @@ -7340,9 +7375,9 @@ void bnx2x_free_mem(struct bnx2x *bp) BNX2X_PCI_FREE(bp->slowpath, bp->slowpath_mapping, sizeof(struct bnx2x_slowpath)); - BNX2X_PCI_FREE(bp->context.vcxt, bp->context.cxt_mapping, - bp->context.size); - + for (i = 0; i < L2_ILT_LINES(bp); i++) + BNX2X_PCI_FREE(bp->context[i].vcxt, bp->context[i].cxt_mapping, + bp->context[i].size); bnx2x_ilt_mem_op(bp, ILT_MEMOP_FREE); BNX2X_FREE(bp->ilt->lines); @@ -7428,6 +7463,8 @@ alloc_mem_err: int bnx2x_alloc_mem(struct bnx2x *bp) { + int i, allocated, context_size; + #ifdef BCM_CNIC if (!CHIP_IS_E1x(bp)) /* size = the status block + ramrod buffers */ @@ -7457,11 +7494,29 @@ int bnx2x_alloc_mem(struct bnx2x *bp) if (bnx2x_alloc_fw_stats_mem(bp)) goto alloc_mem_err; - bp->context.size = sizeof(union cdu_context) * BNX2X_L2_CID_COUNT(bp); - - BNX2X_PCI_ALLOC(bp->context.vcxt, &bp->context.cxt_mapping, - bp->context.size); + /* Allocate memory for CDU context: + * This memory is allocated separately and not in the generic ILT + * functions because CDU differs in few aspects: + * 1. 
There are multiple entities allocating memory for context - + * 'regular' driver, CNIC and SRIOV driver. Each separately controls + * its own ILT lines. + * 2. Since CDU page-size is not a single 4KB page (which is the case + * for the other ILT clients), to be efficient we want to support + * allocation of sub-page-size in the last entry. + * 3. Context pointers are used by the driver to pass to FW / update + * the context (for the other ILT clients the pointers are used just to + * free the memory during unload). + */ + context_size = sizeof(union cdu_context) * BNX2X_L2_CID_COUNT(bp); + for (i = 0, allocated = 0; allocated < context_size; i++) { + bp->context[i].size = min(CDU_ILT_PAGE_SZ, + (context_size - allocated)); + BNX2X_PCI_ALLOC(bp->context[i].vcxt, + &bp->context[i].cxt_mapping, + bp->context[i].size); + allocated += bp->context[i].size; + } BNX2X_ALLOC(bp->ilt->lines, sizeof(struct ilt_line) * ILT_MAX_LINES); if (bnx2x_ilt_mem_op(bp, ILT_MEMOP_ALLOC)) @@ -7563,8 +7618,8 @@ int bnx2x_set_eth_mac(struct bnx2x *bp, bool set) __set_bit(RAMROD_COMP_WAIT, &ramrod_flags); /* Eth MAC is set on RSS leading client (fp[0]) */ - return bnx2x_set_mac_one(bp, bp->dev->dev_addr, &bp->fp->mac_obj, set, - BNX2X_ETH_MAC, &ramrod_flags); + return bnx2x_set_mac_one(bp, bp->dev->dev_addr, &bp->sp_objs->mac_obj, + set, BNX2X_ETH_MAC, &ramrod_flags); } int bnx2x_setup_leading(struct bnx2x *bp) @@ -7579,7 +7634,7 @@ int bnx2x_setup_leading(struct bnx2x *bp) * * In case of MSI-X it will also try to enable MSI-X. */ -static void __devinit bnx2x_set_int_mode(struct bnx2x *bp) +void bnx2x_set_int_mode(struct bnx2x *bp) { switch (int_mode) { case INT_MODE_MSI: @@ -7590,11 +7645,6 @@ static void __devinit bnx2x_set_int_mode(struct bnx2x *bp) BNX2X_DEV_INFO("set number of queues to 1\n"); break; default: - /* Set number of queues for MSI-X mode */ - bnx2x_set_num_queues(bp); - - BNX2X_DEV_INFO("set number of queues to %d\n", bp->num_queues); - /* if we can't use MSI-X we only need one fp, * so try to enable MSI-X with the requested number of fp's * and fallback to MSI or legacy INTx with one fp @@ -7735,6 +7785,8 @@ static void bnx2x_pf_q_prep_init(struct bnx2x *bp, { u8 cos; + int cxt_index, cxt_offset; + /* FCoE Queue uses Default SB, thus has no HC capabilities */ if (!IS_FCOE_FP(fp)) { __set_bit(BNX2X_Q_FLG_HC, &init_params->rx.flags); @@ -7771,9 +7823,13 @@ static void bnx2x_pf_q_prep_init(struct bnx2x *bp, fp->index, init_params->max_cos); /* set the context pointers queue object */ - for (cos = FIRST_TX_COS_INDEX; cos < init_params->max_cos; cos++) + for (cos = FIRST_TX_COS_INDEX; cos < init_params->max_cos; cos++) { + cxt_index = fp->txdata_ptr[cos]->cid / ILT_PAGE_CIDS; + cxt_offset = fp->txdata_ptr[cos]->cid - (cxt_index * + ILT_PAGE_CIDS); init_params->cxts[cos] = - &bp->context.vcxt[fp->txdata[cos].cid].eth; + &bp->context[cxt_index].vcxt[cxt_offset].eth; + } } int bnx2x_setup_tx_only(struct bnx2x *bp, struct bnx2x_fastpath *fp, @@ -7838,7 +7894,7 @@ int bnx2x_setup_queue(struct bnx2x *bp, struct bnx2x_fastpath *fp, bnx2x_ack_sb(bp, fp->igu_sb_id, USTORM_ID, 0, IGU_INT_ENABLE, 0); - q_params.q_obj = &fp->q_obj; + q_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj; /* We want to wait for completion in this context */ __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags); @@ -7911,7 +7967,7 @@ static int bnx2x_stop_queue(struct bnx2x *bp, int index) DP(NETIF_MSG_IFDOWN, "stopping queue %d cid %d\n", index, fp->cid); - q_params.q_obj = &fp->q_obj; + q_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj; /* We want 
to wait for completion in this context */ __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags); @@ -7922,7 +7978,7 @@ static int bnx2x_stop_queue(struct bnx2x *bp, int index) tx_index++){ /* ascertain this is a normal queue*/ - txdata = &fp->txdata[tx_index]; + txdata = fp->txdata_ptr[tx_index]; DP(NETIF_MSG_IFDOWN, "stopping tx-only queue %d\n", txdata->txq_index); @@ -8289,7 +8345,7 @@ void bnx2x_chip_cleanup(struct bnx2x *bp, int unload_mode) struct bnx2x_fastpath *fp = &bp->fp[i]; for_each_cos_in_tx_queue(fp, cos) - rc = bnx2x_clean_tx_queue(bp, &fp->txdata[cos]); + rc = bnx2x_clean_tx_queue(bp, fp->txdata_ptr[cos]); #ifdef BNX2X_STOP_ON_ERROR if (rc) return; @@ -8300,12 +8356,13 @@ void bnx2x_chip_cleanup(struct bnx2x *bp, int unload_mode) usleep_range(1000, 1000); /* Clean all ETH MACs */ - rc = bnx2x_del_all_macs(bp, &bp->fp[0].mac_obj, BNX2X_ETH_MAC, false); + rc = bnx2x_del_all_macs(bp, &bp->sp_objs[0].mac_obj, BNX2X_ETH_MAC, + false); if (rc < 0) BNX2X_ERR("Failed to delete all ETH macs: %d\n", rc); /* Clean up UC list */ - rc = bnx2x_del_all_macs(bp, &bp->fp[0].mac_obj, BNX2X_UC_LIST_MAC, + rc = bnx2x_del_all_macs(bp, &bp->sp_objs[0].mac_obj, BNX2X_UC_LIST_MAC, true); if (rc < 0) BNX2X_ERR("Failed to schedule DEL commands for UC MACs list: %d\n", @@ -9697,6 +9754,11 @@ static void __devinit bnx2x_get_common_hwinfo(struct bnx2x *bp) bp->flags |= (val >= REQ_BC_VER_4_PFC_STATS_SUPPORTED) ? BC_SUPPORTS_PFC_STATS : 0; + bp->flags |= (val >= REQ_BC_VER_4_FCOE_FEATURES) ? + BC_SUPPORTS_FCOE_FEATURES : 0; + + bp->flags |= (val >= REQ_BC_VER_4_DCBX_ADMIN_MSG_NON_PMF) ? + BC_SUPPORTS_DCBX_MSG_NON_PMF : 0; boot_mode = SHMEM_RD(bp, dev_info.port_feature_config[BP_PORT(bp)].mba_config) & PORT_FEATURE_MBA_BOOT_AGENT_TYPE_MASK; @@ -10082,7 +10144,7 @@ static void __devinit bnx2x_get_port_hwinfo(struct bnx2x *bp) { int port = BP_PORT(bp); u32 config; - u32 ext_phy_type, ext_phy_config; + u32 ext_phy_type, ext_phy_config, eee_mode; bp->link_params.bp = bp; bp->link_params.port = port; @@ -10149,6 +10211,19 @@ static void __devinit bnx2x_get_port_hwinfo(struct bnx2x *bp) bp->port.need_hw_lock = bnx2x_hw_lock_required(bp, bp->common.shmem_base, bp->common.shmem2_base); + + /* Configure link feature according to nvram value */ + eee_mode = (((SHMEM_RD(bp, dev_info. 
+ port_feature_config[port].eee_power_mode)) & + PORT_FEAT_CFG_EEE_POWER_MODE_MASK) >> + PORT_FEAT_CFG_EEE_POWER_MODE_SHIFT); + if (eee_mode != PORT_FEAT_CFG_EEE_POWER_MODE_DISABLED) { + bp->link_params.eee_mode = EEE_MODE_ADV_LPI | + EEE_MODE_ENABLE_LPI | + EEE_MODE_OUTPUT_TIME; + } else { + bp->link_params.eee_mode = 0; + } } void bnx2x_get_iscsi_info(struct bnx2x *bp) @@ -10997,7 +11072,7 @@ static int bnx2x_set_uc_list(struct bnx2x *bp) int rc; struct net_device *dev = bp->dev; struct netdev_hw_addr *ha; - struct bnx2x_vlan_mac_obj *mac_obj = &bp->fp->mac_obj; + struct bnx2x_vlan_mac_obj *mac_obj = &bp->sp_objs->mac_obj; unsigned long ramrod_flags = 0; /* First schedule a cleanup up of old configuration */ @@ -11503,8 +11578,7 @@ static void bnx2x_prep_ops(const u8 *_source, u8 *_target, u32 n) } } -/** - * IRO array is stored in the following format: +/* IRO array is stored in the following format: * {base(24bit), m1(16bit), m2(16bit), m3(16bit), size(16bit) } */ static void bnx2x_prep_iro(const u8 *_source, u8 *_target, u32 n) @@ -11672,7 +11746,7 @@ void bnx2x__init_func_obj(struct bnx2x *bp) /* must be called after sriov-enable */ static int bnx2x_set_qm_cid_count(struct bnx2x *bp) { - int cid_count = BNX2X_L2_CID_COUNT(bp); + int cid_count = BNX2X_L2_MAX_CID(bp); #ifdef BCM_CNIC cid_count += CNIC_CID_MAX; @@ -11717,7 +11791,7 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, struct bnx2x *bp; int pcie_width, pcie_speed; int rc, max_non_def_sbs; - int rx_count, tx_count, rss_count; + int rx_count, tx_count, rss_count, doorbell_size; /* * An estimated maximum supported CoS number according to the chip * version. @@ -11745,7 +11819,10 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, case BCM57800_MF: case BCM57810: case BCM57810_MF: - case BCM57840: + case BCM57840_O: + case BCM57840_4_10: + case BCM57840_2_20: + case BCM57840_MFO: case BCM57840_MF: case BCM57811: case BCM57811_MF: @@ -11760,13 +11837,6 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, max_non_def_sbs = bnx2x_get_num_non_def_sbs(pdev); - /* !!! FIXME !!! - * Do not allow the maximum SB count to grow above 16 - * since Special CIDs starts from 16*BNX2X_MULTI_TX_COS=48. - * We will use the FP_SB_MAX_E1x macro for this matter. 
- */ - max_non_def_sbs = min_t(int, FP_SB_MAX_E1x, max_non_def_sbs); - WARN_ON(!max_non_def_sbs); /* Maximum number of RSS queues: one IGU SB goes to CNIC */ @@ -11777,9 +11847,9 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, /* * Maximum number of netdev Tx queues: - * Maximum TSS queues * Maximum supported number of CoS + FCoE L2 + * Maximum TSS queues * Maximum supported number of CoS + FCoE L2 */ - tx_count = MAX_TXQS_PER_COS * max_cos_est + FCOE_PRESENT; + tx_count = rss_count * max_cos_est + FCOE_PRESENT; /* dev zeroed in init_etherdev */ dev = alloc_etherdev_mqs(sizeof(*bp), tx_count, rx_count); @@ -11788,9 +11858,6 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, bp = netdev_priv(dev); - BNX2X_DEV_INFO("Allocated netdev with %d tx and %d rx queues\n", - tx_count, rx_count); - bp->igu_sb_cnt = max_non_def_sbs; bp->msg_enable = debug; pci_set_drvdata(pdev, dev); @@ -11803,6 +11870,9 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, BNX2X_DEV_INFO("max_non_def_sbs %d\n", max_non_def_sbs); + BNX2X_DEV_INFO("Allocated netdev with %d tx and %d rx queues\n", + tx_count, rx_count); + rc = bnx2x_init_bp(bp); if (rc) goto init_one_exit; @@ -11811,9 +11881,15 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, * Map doorbels here as we need the real value of bp->max_cos which * is initialized in bnx2x_init_bp(). */ + doorbell_size = BNX2X_L2_MAX_CID(bp) * (1 << BNX2X_DB_SHIFT); + if (doorbell_size > pci_resource_len(pdev, 2)) { + dev_err(&bp->pdev->dev, + "Cannot map doorbells, bar size too small, aborting\n"); + rc = -ENOMEM; + goto init_one_exit; + } bp->doorbells = ioremap_nocache(pci_resource_start(pdev, 2), - min_t(u64, BNX2X_DB_SIZE(bp), - pci_resource_len(pdev, 2))); + doorbell_size); if (!bp->doorbells) { dev_err(&bp->pdev->dev, "Cannot map doorbell space, aborting\n"); @@ -11831,8 +11907,12 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev, #endif + + /* Set bp->num_queues for MSI-X mode*/ + bnx2x_set_num_queues(bp); + /* Configure interrupt mode: try to enable MSI-X/MSI if - * needed, set bp->num_queues appropriately. + * needed. */ bnx2x_set_int_mode(bp); @@ -12176,6 +12256,7 @@ static int bnx2x_set_iscsi_eth_mac_addr(struct bnx2x *bp) static void bnx2x_cnic_sp_post(struct bnx2x *bp, int count) { struct eth_spe *spe; + int cxt_index, cxt_offset; #ifdef BNX2X_STOP_ON_ERROR if (unlikely(bp->panic)) @@ -12198,10 +12279,16 @@ static void bnx2x_cnic_sp_post(struct bnx2x *bp, int count) * ramrod */ if (type == ETH_CONNECTION_TYPE) { - if (cmd == RAMROD_CMD_ID_ETH_CLIENT_SETUP) - bnx2x_set_ctx_validation(bp, &bp->context. - vcxt[BNX2X_ISCSI_ETH_CID].eth, - BNX2X_ISCSI_ETH_CID); + if (cmd == RAMROD_CMD_ID_ETH_CLIENT_SETUP) { + cxt_index = BNX2X_ISCSI_ETH_CID(bp) / + ILT_PAGE_CIDS; + cxt_offset = BNX2X_ISCSI_ETH_CID(bp) - + (cxt_index * ILT_PAGE_CIDS); + bnx2x_set_ctx_validation(bp, + &bp->context[cxt_index]. 
+ vcxt[cxt_offset].eth, + BNX2X_ISCSI_ETH_CID(bp)); + } } /* @@ -12488,21 +12575,45 @@ static int bnx2x_drv_ctl(struct net_device *dev, struct drv_ctl_info *ctl) break; } case DRV_CTL_ULP_REGISTER_CMD: { - int ulp_type = ctl->data.ulp_type; + int ulp_type = ctl->data.register_data.ulp_type; if (CHIP_IS_E3(bp)) { int idx = BP_FW_MB_IDX(bp); - u32 cap; + u32 cap = SHMEM2_RD(bp, drv_capabilities_flag[idx]); + int path = BP_PATH(bp); + int port = BP_PORT(bp); + int i; + u32 scratch_offset; + u32 *host_addr; - cap = SHMEM2_RD(bp, drv_capabilities_flag[idx]); + /* first write capability to shmem2 */ if (ulp_type == CNIC_ULP_ISCSI) cap |= DRV_FLAGS_CAPABILITIES_LOADED_ISCSI; else if (ulp_type == CNIC_ULP_FCOE) cap |= DRV_FLAGS_CAPABILITIES_LOADED_FCOE; SHMEM2_WR(bp, drv_capabilities_flag[idx], cap); + + if ((ulp_type != CNIC_ULP_FCOE) || + (!SHMEM2_HAS(bp, ncsi_oem_data_addr)) || + (!(bp->flags & BC_SUPPORTS_FCOE_FEATURES))) + break; + + /* if reached here - should write fcoe capabilities */ + scratch_offset = SHMEM2_RD(bp, ncsi_oem_data_addr); + if (!scratch_offset) + break; + scratch_offset += offsetof(struct glob_ncsi_oem_data, + fcoe_features[path][port]); + host_addr = (u32 *) &(ctl->data.register_data. + fcoe_features); + for (i = 0; i < sizeof(struct fcoe_capabilities); + i += 4) + REG_WR(bp, scratch_offset + i, + *(host_addr + i/4)); } break; } + case DRV_CTL_ULP_UNREGISTER_CMD: { int ulp_type = ctl->data.ulp_type; @@ -12554,6 +12665,21 @@ void bnx2x_setup_cnic_irq_info(struct bnx2x *bp) cp->num_irq = 2; } +void bnx2x_setup_cnic_info(struct bnx2x *bp) +{ + struct cnic_eth_dev *cp = &bp->cnic_eth_dev; + + + cp->ctx_tbl_offset = FUNC_ILT_BASE(BP_FUNC(bp)) + + bnx2x_cid_ilt_lines(bp); + cp->starting_cid = bnx2x_cid_ilt_lines(bp) * ILT_PAGE_CIDS; + cp->fcoe_init_cid = BNX2X_FCOE_ETH_CID(bp); + cp->iscsi_l2_cid = BNX2X_ISCSI_ETH_CID(bp); + + if (NO_ISCSI_OOO(bp)) + cp->drv_state |= CNIC_DRV_STATE_NO_ISCSI_OOO; +} + static int bnx2x_register_cnic(struct net_device *dev, struct cnic_ops *ops, void *data) { @@ -12632,10 +12758,10 @@ struct cnic_eth_dev *bnx2x_cnic_probe(struct net_device *dev) cp->drv_ctl = bnx2x_drv_ctl; cp->drv_register_cnic = bnx2x_register_cnic; cp->drv_unregister_cnic = bnx2x_unregister_cnic; - cp->fcoe_init_cid = BNX2X_FCOE_ETH_CID; + cp->fcoe_init_cid = BNX2X_FCOE_ETH_CID(bp); cp->iscsi_l2_client_id = bnx2x_cnic_eth_cl_id(bp, BNX2X_ISCSI_ETH_CL_ID_IDX); - cp->iscsi_l2_cid = BNX2X_ISCSI_ETH_CID; + cp->iscsi_l2_cid = BNX2X_ISCSI_ETH_CID(bp); if (NO_ISCSI_OOO(bp)) cp->drv_state |= CNIC_DRV_STATE_NO_ISCSI_OOO; diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_mfw_req.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_mfw_req.h new file mode 100644 index 000000000000..ddd5106ad2f9 --- /dev/null +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_mfw_req.h @@ -0,0 +1,168 @@ +/* bnx2x_mfw_req.h: Broadcom Everest network driver. + * + * Copyright (c) 2012 Broadcom Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation. 
+ */ + +#ifndef BNX2X_MFW_REQ_H +#define BNX2X_MFW_REQ_H + +#define PORT_0 0 +#define PORT_1 1 +#define PORT_MAX 2 +#define NVM_PATH_MAX 2 + +/* FCoE capabilities required from the driver */ +struct fcoe_capabilities { + u32 capability1; + /* Maximum number of I/Os per connection */ + #define FCOE_IOS_PER_CONNECTION_MASK 0x0000ffff + #define FCOE_IOS_PER_CONNECTION_SHIFT 0 + /* Maximum number of Logins per port */ + #define FCOE_LOGINS_PER_PORT_MASK 0xffff0000 + #define FCOE_LOGINS_PER_PORT_SHIFT 16 + + u32 capability2; + /* Maximum number of exchanges */ + #define FCOE_NUMBER_OF_EXCHANGES_MASK 0x0000ffff + #define FCOE_NUMBER_OF_EXCHANGES_SHIFT 0 + /* Maximum NPIV WWN per port */ + #define FCOE_NPIV_WWN_PER_PORT_MASK 0xffff0000 + #define FCOE_NPIV_WWN_PER_PORT_SHIFT 16 + + u32 capability3; + /* Maximum number of targets supported */ + #define FCOE_TARGETS_SUPPORTED_MASK 0x0000ffff + #define FCOE_TARGETS_SUPPORTED_SHIFT 0 + /* Maximum number of outstanding commands across all connections */ + #define FCOE_OUTSTANDING_COMMANDS_MASK 0xffff0000 + #define FCOE_OUTSTANDING_COMMANDS_SHIFT 16 + + u32 capability4; + #define FCOE_CAPABILITY4_STATEFUL 0x00000001 + #define FCOE_CAPABILITY4_STATELESS 0x00000002 + #define FCOE_CAPABILITY4_CAPABILITIES_REPORTED_VALID 0x00000004 +}; + +struct glob_ncsi_oem_data { + u32 driver_version; + u32 unused[3]; + struct fcoe_capabilities fcoe_features[NVM_PATH_MAX][PORT_MAX]; +}; + +/* current drv_info version */ +#define DRV_INFO_CUR_VER 2 + +/* drv_info op codes supported */ +enum drv_info_opcode { + ETH_STATS_OPCODE, + FCOE_STATS_OPCODE, + ISCSI_STATS_OPCODE +}; + +#define ETH_STAT_INFO_VERSION_LEN 12 +/* Per PCI Function Ethernet Statistics required from the driver */ +struct eth_stats_info { + /* Function's Driver Version. padded to 12 */ + u8 version[ETH_STAT_INFO_VERSION_LEN]; + /* Locally Admin Addr. BigEndian EIU48. Actual size is 6 bytes */ + u8 mac_local[8]; + u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. */ + u8 mac_add2[8]; /* Additional Programmed MAC Addr 2. */ + u32 mtu_size; /* MTU Size. Note : Negotiated MTU */ + u32 feature_flags; /* Feature_Flags. */ +#define FEATURE_ETH_CHKSUM_OFFLOAD_MASK 0x01 +#define FEATURE_ETH_LSO_MASK 0x02 +#define FEATURE_ETH_BOOTMODE_MASK 0x1C +#define FEATURE_ETH_BOOTMODE_SHIFT 2 +#define FEATURE_ETH_BOOTMODE_NONE (0x0 << 2) +#define FEATURE_ETH_BOOTMODE_PXE (0x1 << 2) +#define FEATURE_ETH_BOOTMODE_ISCSI (0x2 << 2) +#define FEATURE_ETH_BOOTMODE_FCOE (0x3 << 2) +#define FEATURE_ETH_TOE_MASK 0x20 + u32 lso_max_size; /* LSO MaxOffloadSize. */ + u32 lso_min_seg_cnt; /* LSO MinSegmentCount. */ + /* Num Offloaded Connections TCP_IPv4. */ + u32 ipv4_ofld_cnt; + /* Num Offloaded Connections TCP_IPv6. */ + u32 ipv6_ofld_cnt; + u32 promiscuous_mode; /* Promiscuous Mode. non-zero true */ + u32 txq_size; /* TX Descriptors Queue Size */ + u32 rxq_size; /* RX Descriptors Queue Size */ + /* TX Descriptor Queue Avg Depth. % Avg Queue Depth since last poll */ + u32 txq_avg_depth; + /* RX Descriptors Queue Avg Depth. % Avg Queue Depth since last poll */ + u32 rxq_avg_depth; + /* IOV_Offload. 0=none; 1=MultiQueue, 2=VEB 3= VEPA*/ + u32 iov_offload; + /* Number of NetQueue/VMQ Config'd. */ + u32 netq_cnt; + u32 vf_cnt; /* Num VF assigned to this PF. */ +}; + +/* Per PCI Function FCOE Statistics required from the driver */ +struct fcoe_stats_info { + u8 version[12]; /* Function's Driver Version. */ + u8 mac_local[8]; /* Locally Admin Addr. */ + u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. 
*/ + u8 mac_add2[8]; /* Additional Programmed MAC Addr 2. */ + /* QoS Priority (per 802.1p). 0-7255 */ + u32 qos_priority; + u32 txq_size; /* FCoE TX Descriptors Queue Size. */ + u32 rxq_size; /* FCoE RX Descriptors Queue Size. */ + /* FCoE TX Descriptor Queue Avg Depth. */ + u32 txq_avg_depth; + /* FCoE RX Descriptors Queue Avg Depth. */ + u32 rxq_avg_depth; + u32 rx_frames_lo; /* FCoE RX Frames received. */ + u32 rx_frames_hi; /* FCoE RX Frames received. */ + u32 rx_bytes_lo; /* FCoE RX Bytes received. */ + u32 rx_bytes_hi; /* FCoE RX Bytes received. */ + u32 tx_frames_lo; /* FCoE TX Frames sent. */ + u32 tx_frames_hi; /* FCoE TX Frames sent. */ + u32 tx_bytes_lo; /* FCoE TX Bytes sent. */ + u32 tx_bytes_hi; /* FCoE TX Bytes sent. */ +}; + +/* Per PCI Function iSCSI Statistics required from the driver*/ +struct iscsi_stats_info { + u8 version[12]; /* Function's Driver Version. */ + u8 mac_local[8]; /* Locally Admin iSCSI MAC Addr. */ + u8 mac_add1[8]; /* Additional Programmed MAC Addr 1. */ + /* QoS Priority (per 802.1p). 0-7255 */ + u32 qos_priority; + u8 initiator_name[64]; /* iSCSI Boot Initiator Node name. */ + u8 ww_port_name[64]; /* iSCSI World wide port name */ + u8 boot_target_name[64];/* iSCSI Boot Target Name. */ + u8 boot_target_ip[16]; /* iSCSI Boot Target IP. */ + u32 boot_target_portal; /* iSCSI Boot Target Portal. */ + u8 boot_init_ip[16]; /* iSCSI Boot Initiator IP Address. */ + u32 max_frame_size; /* Max Frame Size. bytes */ + u32 txq_size; /* PDU TX Descriptors Queue Size. */ + u32 rxq_size; /* PDU RX Descriptors Queue Size. */ + u32 txq_avg_depth; /* PDU TX Descriptor Queue Avg Depth. */ + u32 rxq_avg_depth; /* PDU RX Descriptors Queue Avg Depth. */ + u32 rx_pdus_lo; /* iSCSI PDUs received. */ + u32 rx_pdus_hi; /* iSCSI PDUs received. */ + u32 rx_bytes_lo; /* iSCSI RX Bytes received. */ + u32 rx_bytes_hi; /* iSCSI RX Bytes received. */ + u32 tx_pdus_lo; /* iSCSI PDUs sent. */ + u32 tx_pdus_hi; /* iSCSI PDUs sent. */ + u32 tx_bytes_lo; /* iSCSI PDU TX Bytes sent. */ + u32 tx_bytes_hi; /* iSCSI PDU TX Bytes sent. */ + u32 pcp_prior_map_tbl; /* C-PCP to S-PCP Priority MapTable. + * 9 nibbles, the position of each nibble + * represents the C-PCP value, the value + * of the nibble = S-PCP value. + */ +}; + +union drv_info_to_mcp { + struct eth_stats_info ether_stat; + struct fcoe_stats_info fcoe_stat; + struct iscsi_stats_info iscsi_stat; +}; +#endif /* BNX2X_MFW_REQ_H */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h index bbd387492a80..ec62a5c8bd37 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h @@ -1488,6 +1488,121 @@ * 2:1 - otp_misc_do[51:50]; 0 - otp_misc_do[1]. */ #define MISC_REG_CHIP_TYPE 0xac60 #define MISC_REG_CHIP_TYPE_57811_MASK (1<<1) +#define MISC_REG_CPMU_LP_DR_ENABLE 0xa858 +/* [RW 1] FW EEE LPI Enable. When 1 indicates that EEE LPI mode is enabled + * by FW. When 0 indicates that the EEE LPI mode is disabled by FW. Clk + * 25MHz. Reset on hard reset. */ +#define MISC_REG_CPMU_LP_FW_ENABLE_P0 0xa84c +/* [RW 32] EEE LPI Idle Threshold. The threshold value for the idle EEE LPI + * counter. Timer tick is 1 us. Clock 25MHz. Reset on hard reset. */ +#define MISC_REG_CPMU_LP_IDLE_THR_P0 0xa8a0 +/* [RW 18] LPI entry events mask. [0] - Vmain SM Mask. When 1 indicates that + * the Vmain SM end state is disabled. When 0 indicates that the Vmain SM + * end state is enabled. [1] - FW Queues Empty Mask. 
When 1 indicates that + * the FW command that all Queues are empty is disabled. When 0 indicates + * that the FW command that all Queues are empty is enabled. [2] - FW Early + * Exit Mask / Reserved (Entry mask). When 1 indicates that the FW Early + * Exit command is disabled. When 0 indicates that the FW Early Exit command + * is enabled. This bit applicable only in the EXIT Events Mask registers. + * [3] - PBF Request Mask. When 1 indicates that the PBF Request indication + * is disabled. When 0 indicates that the PBF Request indication is enabled. + * [4] - Tx Request Mask. When =1 indicates that the Tx other Than PBF + * Request indication is disabled. When 0 indicates that the Tx Other Than + * PBF Request indication is enabled. [5] - Rx EEE LPI Status Mask. When 1 + * indicates that the RX EEE LPI Status indication is disabled. When 0 + * indicates that the RX EEE LPI Status indication is enabled. In the EXIT + * Events Masks registers; this bit masks the falling edge detect of the LPI + * Status (Rx LPI is on - off). [6] - Tx Pause Mask. When 1 indicates that + * the Tx Pause indication is disabled. When 0 indicates that the Tx Pause + * indication is enabled. [7] - BRB1 Empty Mask. When 1 indicates that the + * BRB1 EMPTY indication is disabled. When 0 indicates that the BRB1 EMPTY + * indication is enabled. [8] - QM Idle Mask. When 1 indicates that the QM + * IDLE indication is disabled. When 0 indicates that the QM IDLE indication + * is enabled. (One bit for both VOQ0 and VOQ1). [9] - QM LB Idle Mask. When + * 1 indicates that the QM IDLE indication for LOOPBACK is disabled. When 0 + * indicates that the QM IDLE indication for LOOPBACK is enabled. [10] - L1 + * Status Mask. When 1 indicates that the L1 Status indication from the PCIE + * CORE is disabled. When 0 indicates that the RX EEE LPI Status indication + * from the PCIE CORE is enabled. In the EXIT Events Masks registers; this + * bit masks the falling edge detect of the L1 status (L1 is on - off). [11] + * - P0 E0 EEE EEE LPI REQ Mask. When =1 indicates that the P0 E0 EEE EEE + * LPI REQ indication is disabled. When =0 indicates that the P0 E0 EEE LPI + * REQ indication is enabled. [12] - P1 E0 EEE LPI REQ Mask. When =1 + * indicates that the P0 EEE LPI REQ indication is disabled. When =0 + * indicates that the P0 EEE LPI REQ indication is enabled. [13] - P0 E1 EEE + * LPI REQ Mask. When =1 indicates that the P0 EEE LPI REQ indication is + * disabled. When =0 indicates that the P0 EEE LPI REQ indication is + * enabled. [14] - P1 E1 EEE LPI REQ Mask. When =1 indicates that the P0 EEE + * LPI REQ indication is disabled. When =0 indicates that the P0 EEE LPI REQ + * indication is enabled. [15] - L1 REQ Mask. When =1 indicates that the L1 + * REQ indication is disabled. When =0 indicates that the L1 indication is + * enabled. [16] - Rx EEE LPI Status Edge Detect Mask. When =1 indicates + * that the RX EEE LPI Status Falling Edge Detect indication is disabled (Rx + * EEE LPI is on - off). When =0 indicates that the RX EEE LPI Status + * Falling Edge Detec indication is enabled (Rx EEE LPI is on - off). This + * bit is applicable only in the EXIT Events Masks registers. [17] - L1 + * Status Edge Detect Mask. When =1 indicates that the L1 Status Falling + * Edge Detect indication from the PCIE CORE is disabled (L1 is on - off). + * When =0 indicates that the L1 Status Falling Edge Detect indication from + * the PCIE CORE is enabled (L1 is on - off). This bit is applicable only in + * the EXIT Events Masks registers. 
Clock 25MHz. Reset on hard reset. */ +#define MISC_REG_CPMU_LP_MASK_ENT_P0 0xa880 +/* [RW 18] EEE LPI exit events mask. [0] - Vmain SM Mask. When 1 indicates + * that the Vmain SM end state is disabled. When 0 indicates that the Vmain + * SM end state is enabled. [1] - FW Queues Empty Mask. When 1 indicates + * that the FW command that all Queues are empty is disabled. When 0 + * indicates that the FW command that all Queues are empty is enabled. [2] - + * FW Early Exit Mask / Reserved (Entry mask). When 1 indicates that the FW + * Early Exit command is disabled. When 0 indicates that the FW Early Exit + * command is enabled. This bit applicable only in the EXIT Events Mask + * registers. [3] - PBF Request Mask. When 1 indicates that the PBF Request + * indication is disabled. When 0 indicates that the PBF Request indication + * is enabled. [4] - Tx Request Mask. When =1 indicates that the Tx other + * Than PBF Request indication is disabled. When 0 indicates that the Tx + * Other Than PBF Request indication is enabled. [5] - Rx EEE LPI Status + * Mask. When 1 indicates that the RX EEE LPI Status indication is disabled. + * When 0 indicates that the RX LPI Status indication is enabled. In the + * EXIT Events Masks registers; this bit masks the falling edge detect of + * the EEE LPI Status (Rx EEE LPI is on - off). [6] - Tx Pause Mask. When 1 + * indicates that the Tx Pause indication is disabled. When 0 indicates that + * the Tx Pause indication is enabled. [7] - BRB1 Empty Mask. When 1 + * indicates that the BRB1 EMPTY indication is disabled. When 0 indicates + * that the BRB1 EMPTY indication is enabled. [8] - QM Idle Mask. When 1 + * indicates that the QM IDLE indication is disabled. When 0 indicates that + * the QM IDLE indication is enabled. (One bit for both VOQ0 and VOQ1). [9] + * - QM LB Idle Mask. When 1 indicates that the QM IDLE indication for + * LOOPBACK is disabled. When 0 indicates that the QM IDLE indication for + * LOOPBACK is enabled. [10] - L1 Status Mask. When 1 indicates that the L1 + * Status indication from the PCIE CORE is disabled. When 0 indicates that + * the RX EEE LPI Status indication from the PCIE CORE is enabled. In the + * EXIT Events Masks registers; this bit masks the falling edge detect of + * the L1 status (L1 is on - off). [11] - P0 E0 EEE EEE LPI REQ Mask. When + * =1 indicates that the P0 E0 EEE EEE LPI REQ indication is disabled. When + * =0 indicates that the P0 E0 EEE LPI REQ indication is enabled. [12] - P1 + * E0 EEE LPI REQ Mask. When =1 indicates that the P0 EEE LPI REQ indication + * is disabled. When =0 indicates that the P0 EEE LPI REQ indication is + * enabled. [13] - P0 E1 EEE LPI REQ Mask. When =1 indicates that the P0 EEE + * LPI REQ indication is disabled. When =0 indicates that the P0 EEE LPI REQ + * indication is enabled. [14] - P1 E1 EEE LPI REQ Mask. When =1 indicates + * that the P0 EEE LPI REQ indication is disabled. When =0 indicates that + * the P0 EEE LPI REQ indication is enabled. [15] - L1 REQ Mask. When =1 + * indicates that the L1 REQ indication is disabled. When =0 indicates that + * the L1 indication is enabled. [16] - Rx EEE LPI Status Edge Detect Mask. + * When =1 indicates that the RX EEE LPI Status Falling Edge Detect + * indication is disabled (Rx EEE LPI is on - off). When =0 indicates that + * the RX EEE LPI Status Falling Edge Detec indication is enabled (Rx EEE + * LPI is on - off). This bit is applicable only in the EXIT Events Masks + * registers. [17] - L1 Status Edge Detect Mask. 
When =1 indicates that the + * L1 Status Falling Edge Detect indication from the PCIE CORE is disabled + * (L1 is on - off). When =0 indicates that the L1 Status Falling Edge + * Detect indication from the PCIE CORE is enabled (L1 is on - off). This + * bit is applicable only in the EXIT Events Masks registers.Clock 25MHz. + * Reset on hard reset. */ +#define MISC_REG_CPMU_LP_MASK_EXT_P0 0xa888 +/* [RW 16] EEE LPI Entry Events Counter. A statistic counter with the number + * of counts that the SM entered the EEE LPI state. Clock 25MHz. Read only + * register. Reset on hard reset. */ +#define MISC_REG_CPMU_LP_SM_ENT_CNT_P0 0xa8b8 /* [RW 32] The following driver registers(1...16) represent 16 drivers and 32 clients. Each client can be controlled by one driver only. One in each bit represent that this driver control the appropriate client (Ex: bit 5 @@ -5372,6 +5487,8 @@ /* [RW 32] Lower 48 bits of ctrl_sa register. Used as the SA in PAUSE/PFC * packets transmitted by the MAC */ #define XMAC_REG_CTRL_SA_LO 0x28 +#define XMAC_REG_EEE_CTRL 0xd8 +#define XMAC_REG_EEE_TIMERS_HI 0xe4 #define XMAC_REG_PAUSE_CTRL 0x68 #define XMAC_REG_PFC_CTRL 0x70 #define XMAC_REG_PFC_CTRL_HI 0x74 @@ -5796,6 +5913,7 @@ #define MISC_REGISTERS_SPIO_OUTPUT_LOW 0 #define MISC_REGISTERS_SPIO_SET_POS 8 #define HW_LOCK_MAX_RESOURCE_VALUE 31 +#define HW_LOCK_RESOURCE_DCBX_ADMIN_MIB 13 #define HW_LOCK_RESOURCE_DRV_FLAGS 10 #define HW_LOCK_RESOURCE_GPIO 1 #define HW_LOCK_RESOURCE_MDIO 0 @@ -6813,6 +6931,8 @@ Theotherbitsarereservedandshouldbezero*/ #define MDIO_AN_REG_LP_AUTO_NEG 0x0013 #define MDIO_AN_REG_LP_AUTO_NEG2 0x0014 #define MDIO_AN_REG_MASTER_STATUS 0x0021 +#define MDIO_AN_REG_EEE_ADV 0x003c +#define MDIO_AN_REG_LP_EEE_ADV 0x003d /*bcm*/ #define MDIO_AN_REG_LINK_STATUS 0x8304 #define MDIO_AN_REG_CL37_CL73 0x8370 @@ -6866,6 +6986,8 @@ Theotherbitsarereservedandshouldbezero*/ #define MDIO_PMA_REG_84823_LED3_STRETCH_EN 0x0080 /* BCM84833 only */ +#define MDIO_84833_TOP_CFG_FW_REV 0x400f +#define MDIO_84833_TOP_CFG_FW_EEE 0x10b1 #define MDIO_84833_TOP_CFG_XGPHY_STRAP1 0x401a #define MDIO_84833_SUPER_ISOLATE 0x8000 /* These are mailbox register set used by 84833. 
*/ @@ -6993,11 +7115,13 @@ Theotherbitsarereservedandshouldbezero*/ #define MDIO_WC_REG_DIGITAL3_UP1 0x8329 #define MDIO_WC_REG_DIGITAL3_LP_UP1 0x832c #define MDIO_WC_REG_DIGITAL4_MISC3 0x833c +#define MDIO_WC_REG_DIGITAL4_MISC5 0x833e #define MDIO_WC_REG_DIGITAL5_MISC6 0x8345 #define MDIO_WC_REG_DIGITAL5_MISC7 0x8349 #define MDIO_WC_REG_DIGITAL5_ACTUAL_SPEED 0x834e #define MDIO_WC_REG_DIGITAL6_MP5_NEXTPAGECTRL 0x8350 #define MDIO_WC_REG_CL49_USERB0_CTRL 0x8368 +#define MDIO_WC_REG_EEE_COMBO_CONTROL0 0x8390 #define MDIO_WC_REG_TX66_CONTROL 0x83b0 #define MDIO_WC_REG_RX66_CONTROL 0x83c0 #define MDIO_WC_REG_RX66_SCW0 0x83c2 @@ -7036,6 +7160,7 @@ Theotherbitsarereservedandshouldbezero*/ #define MDIO_REG_GPHY_EEE_1G (0x1 << 2) #define MDIO_REG_GPHY_EEE_100 (0x1 << 1) #define MDIO_REG_GPHY_EEE_RESOLVED 0x803e +#define MDIO_REG_GPHY_AUX_STATUS 0x19 #define MDIO_REG_INTR_STATUS 0x1a #define MDIO_REG_INTR_MASK 0x1b #define MDIO_REG_INTR_MASK_LINK_STATUS (0x1 << 1) @@ -7150,8 +7275,7 @@ Theotherbitsarereservedandshouldbezero*/ #define CDU_REGION_NUMBER_UCM_AG 4 -/** - * String-to-compress [31:8] = CID (all 24 bits) +/* String-to-compress [31:8] = CID (all 24 bits) * String-to-compress [7:4] = Region * String-to-compress [3:0] = Type */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c index 6c14b4a4e82c..734fd87cd990 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c @@ -4107,6 +4107,10 @@ static int bnx2x_setup_rss(struct bnx2x *bp, data->capabilities |= ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY; + if (test_bit(BNX2X_RSS_IPV4_UDP, &p->rss_flags)) + data->capabilities |= + ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY; + if (test_bit(BNX2X_RSS_IPV6, &p->rss_flags)) data->capabilities |= ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY; @@ -4115,6 +4119,10 @@ static int bnx2x_setup_rss(struct bnx2x *bp, data->capabilities |= ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY; + if (test_bit(BNX2X_RSS_IPV6_UDP, &p->rss_flags)) + data->capabilities |= + ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY; + /* Hashing mask */ data->rss_result_mask = p->rss_result_mask; diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h index efd80bdd0dfe..f83e033da6da 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h @@ -167,9 +167,8 @@ typedef int (*exe_q_remove)(struct bnx2x *bp, union bnx2x_qable_obj *o, struct bnx2x_exeq_elem *elem); -/** - * @return positive is entry was optimized, 0 - if not, negative - * in case of an error. +/* Return positive if entry was optimized, 0 - if not, negative + * in case of an error. */ typedef int (*exe_q_optimize)(struct bnx2x *bp, union bnx2x_qable_obj *o, @@ -694,8 +693,10 @@ enum { BNX2X_RSS_IPV4, BNX2X_RSS_IPV4_TCP, + BNX2X_RSS_IPV4_UDP, BNX2X_RSS_IPV6, BNX2X_RSS_IPV6_TCP, + BNX2X_RSS_IPV6_UDP, }; struct bnx2x_config_rss_params { @@ -729,6 +730,10 @@ struct bnx2x_rss_config_obj { /* Last configured indirection table */ u8 ind_table[T_ETH_INDIRECTION_TABLE_SIZE]; + /* flags for enabling 4-tupple hash on UDP */ + u8 udp_rss_v4; + u8 udp_rss_v6; + int (*config_rss)(struct bnx2x *bp, struct bnx2x_config_rss_params *p); }; @@ -1280,12 +1285,11 @@ void bnx2x_init_rx_mode_obj(struct bnx2x *bp, struct bnx2x_rx_mode_obj *o); /** - * Send and RX_MODE ramrod according to the provided parameters. 
+ * bnx2x_config_rx_mode - Send and RX_MODE ramrod according to the provided parameters. * - * @param bp - * @param p Command parameters + * @p: Command parameters * - * @return 0 - if operation was successfull and there is no pending completions, + * Return: 0 - if operation was successfull and there is no pending completions, * positive number - if there are pending completions, * negative - if there were errors */ @@ -1302,7 +1306,11 @@ void bnx2x_init_mcast_obj(struct bnx2x *bp, bnx2x_obj_type type); /** - * Configure multicast MACs list. May configure a new list + * bnx2x_config_mcast - Configure multicast MACs list. + * + * @cmd: command to execute: BNX2X_MCAST_CMD_X + * + * May configure a new list * provided in p->mcast_list (BNX2X_MCAST_CMD_ADD), clean up * (BNX2X_MCAST_CMD_DEL) or restore (BNX2X_MCAST_CMD_RESTORE) a current * configuration, continue to execute the pending commands @@ -1313,11 +1321,7 @@ void bnx2x_init_mcast_obj(struct bnx2x *bp, * the current command will be enqueued to the tail of the * pending commands list. * - * @param bp - * @param p - * @param command to execute: BNX2X_MCAST_CMD_X - * - * @return 0 is operation was sucessfull and there are no pending completions, + * Return: 0 is operation was sucessfull and there are no pending completions, * negative if there were errors, positive if there are pending * completions. */ @@ -1342,21 +1346,17 @@ void bnx2x_init_rss_config_obj(struct bnx2x *bp, bnx2x_obj_type type); /** - * Updates RSS configuration according to provided parameters. - * - * @param bp - * @param p + * bnx2x_config_rss - Updates RSS configuration according to provided parameters * - * @return 0 in case of success + * Return: 0 in case of success */ int bnx2x_config_rss(struct bnx2x *bp, struct bnx2x_config_rss_params *p); /** - * Return the current ind_table configuration. + * bnx2x_get_rss_ind_table - Return the current ind_table configuration. * - * @param bp - * @param ind_table buffer to fill with the current indirection + * @ind_table: buffer to fill with the current indirection * table content. Should be at least * T_ETH_INDIRECTION_TABLE_SIZE bytes long. */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c index 1e2785cd11d0..667d89042d35 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c @@ -785,6 +785,10 @@ static int bnx2x_hw_stats_update(struct bnx2x *bp) pstats->host_port_stats_counter++; + if (CHIP_IS_E3(bp)) + estats->eee_tx_lpi += REG_RD(bp, + MISC_REG_CPMU_LP_SM_ENT_CNT_P0); + if (!BP_NOMCP(bp)) { u32 nig_timer_max = SHMEM_RD(bp, port_mb[BP_PORT(bp)].stat_nig_timer); @@ -855,17 +859,22 @@ static int bnx2x_storm_stats_update(struct bnx2x *bp) struct tstorm_per_queue_stats *tclient = &bp->fw_stats_data->queue_stats[i]. tstorm_queue_statistics; - struct tstorm_per_queue_stats *old_tclient = &fp->old_tclient; + struct tstorm_per_queue_stats *old_tclient = + &bnx2x_fp_stats(bp, fp)->old_tclient; struct ustorm_per_queue_stats *uclient = &bp->fw_stats_data->queue_stats[i]. ustorm_queue_statistics; - struct ustorm_per_queue_stats *old_uclient = &fp->old_uclient; + struct ustorm_per_queue_stats *old_uclient = + &bnx2x_fp_stats(bp, fp)->old_uclient; struct xstorm_per_queue_stats *xclient = &bp->fw_stats_data->queue_stats[i]. 
xstorm_queue_statistics; - struct xstorm_per_queue_stats *old_xclient = &fp->old_xclient; - struct bnx2x_eth_q_stats *qstats = &fp->eth_q_stats; - struct bnx2x_eth_q_stats_old *qstats_old = &fp->eth_q_stats_old; + struct xstorm_per_queue_stats *old_xclient = + &bnx2x_fp_stats(bp, fp)->old_xclient; + struct bnx2x_eth_q_stats *qstats = + &bnx2x_fp_stats(bp, fp)->eth_q_stats; + struct bnx2x_eth_q_stats_old *qstats_old = + &bnx2x_fp_stats(bp, fp)->eth_q_stats_old; u32 diff; @@ -1048,8 +1057,11 @@ static void bnx2x_net_stats_update(struct bnx2x *bp) nstats->tx_bytes = bnx2x_hilo(&estats->total_bytes_transmitted_hi); tmp = estats->mac_discard; - for_each_rx_queue(bp, i) - tmp += le32_to_cpu(bp->fp[i].old_tclient.checksum_discard); + for_each_rx_queue(bp, i) { + struct tstorm_per_queue_stats *old_tclient = + &bp->fp_stats[i].old_tclient; + tmp += le32_to_cpu(old_tclient->checksum_discard); + } nstats->rx_dropped = tmp + bp->net_stats_old.rx_dropped; nstats->tx_dropped = 0; @@ -1099,9 +1111,9 @@ static void bnx2x_drv_stats_update(struct bnx2x *bp) int i; for_each_queue(bp, i) { - struct bnx2x_eth_q_stats *qstats = &bp->fp[i].eth_q_stats; + struct bnx2x_eth_q_stats *qstats = &bp->fp_stats[i].eth_q_stats; struct bnx2x_eth_q_stats_old *qstats_old = - &bp->fp[i].eth_q_stats_old; + &bp->fp_stats[i].eth_q_stats_old; UPDATE_ESTAT_QSTAT(driver_xoff); UPDATE_ESTAT_QSTAT(rx_err_discard_pkt); @@ -1309,12 +1321,9 @@ static void bnx2x_port_stats_base_init(struct bnx2x *bp) bnx2x_stats_comp(bp); } -/** - * This function will prepare the statistics ramrod data the way +/* This function will prepare the statistics ramrod data the way * we will only have to increment the statistics counter and * send the ramrod each time we have to. - * - * @param bp */ static void bnx2x_prep_fw_stats_req(struct bnx2x *bp) { @@ -1428,7 +1437,7 @@ static void bnx2x_prep_fw_stats_req(struct bnx2x *bp) query[first_queue_query_index + i]; cur_query_entry->kind = STATS_TYPE_QUEUE; - cur_query_entry->index = bnx2x_stats_id(&bp->fp[FCOE_IDX]); + cur_query_entry->index = bnx2x_stats_id(&bp->fp[FCOE_IDX(bp)]); cur_query_entry->funcID = cpu_to_le16(BP_FUNC(bp)); cur_query_entry->address.hi = cpu_to_le32(U64_HI(cur_data_offset)); @@ -1479,15 +1488,19 @@ void bnx2x_stats_init(struct bnx2x *bp) /* function stats */ for_each_queue(bp, i) { - struct bnx2x_fastpath *fp = &bp->fp[i]; - - memset(&fp->old_tclient, 0, sizeof(fp->old_tclient)); - memset(&fp->old_uclient, 0, sizeof(fp->old_uclient)); - memset(&fp->old_xclient, 0, sizeof(fp->old_xclient)); + struct bnx2x_fp_stats *fp_stats = &bp->fp_stats[i]; + + memset(&fp_stats->old_tclient, 0, + sizeof(fp_stats->old_tclient)); + memset(&fp_stats->old_uclient, 0, + sizeof(fp_stats->old_uclient)); + memset(&fp_stats->old_xclient, 0, + sizeof(fp_stats->old_xclient)); if (bp->stats_init) { - memset(&fp->eth_q_stats, 0, sizeof(fp->eth_q_stats)); - memset(&fp->eth_q_stats_old, 0, - sizeof(fp->eth_q_stats_old)); + memset(&fp_stats->eth_q_stats, 0, + sizeof(fp_stats->eth_q_stats)); + memset(&fp_stats->eth_q_stats_old, 0, + sizeof(fp_stats->eth_q_stats_old)); } } @@ -1529,8 +1542,10 @@ void bnx2x_save_statistics(struct bnx2x *bp) /* save queue statistics */ for_each_eth_queue(bp, i) { struct bnx2x_fastpath *fp = &bp->fp[i]; - struct bnx2x_eth_q_stats *qstats = &fp->eth_q_stats; - struct bnx2x_eth_q_stats_old *qstats_old = &fp->eth_q_stats_old; + struct bnx2x_eth_q_stats *qstats = + &bnx2x_fp_stats(bp, fp)->eth_q_stats; + struct bnx2x_eth_q_stats_old *qstats_old = + &bnx2x_fp_stats(bp, fp)->eth_q_stats_old; 
UPDATE_QSTAT_OLD(total_unicast_bytes_received_hi); UPDATE_QSTAT_OLD(total_unicast_bytes_received_lo); @@ -1569,7 +1584,7 @@ void bnx2x_afex_collect_stats(struct bnx2x *bp, void *void_afex_stats, struct afex_stats *afex_stats = (struct afex_stats *)void_afex_stats; struct bnx2x_eth_stats *estats = &bp->eth_stats; struct per_queue_stats *fcoe_q_stats = - &bp->fw_stats_data->queue_stats[FCOE_IDX]; + &bp->fw_stats_data->queue_stats[FCOE_IDX(bp)]; struct tstorm_per_queue_stats *fcoe_q_tstorm_stats = &fcoe_q_stats->tstorm_queue_statistics; @@ -1586,8 +1601,7 @@ void bnx2x_afex_collect_stats(struct bnx2x *bp, void *void_afex_stats, memset(afex_stats, 0, sizeof(struct afex_stats)); for_each_eth_queue(bp, i) { - struct bnx2x_fastpath *fp = &bp->fp[i]; - struct bnx2x_eth_q_stats *qstats = &fp->eth_q_stats; + struct bnx2x_eth_q_stats *qstats = &bp->fp_stats[i].eth_q_stats; ADD_64(afex_stats->rx_unicast_bytes_hi, qstats->total_unicast_bytes_received_hi, diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h index 93e689fdfeda..24b8e505b60c 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h @@ -203,6 +203,8 @@ struct bnx2x_eth_stats { /* Recovery */ u32 recoverable_error; u32 unrecoverable_error; + /* src: Clear-on-Read register; Will not survive PMF Migration */ + u32 eee_tx_lpi; }; diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c index 2c89d17cbb29..3b4fc61f24cf 100644 --- a/drivers/net/ethernet/broadcom/cnic.c +++ b/drivers/net/ethernet/broadcom/cnic.c @@ -256,11 +256,16 @@ static void cnic_ulp_ctl(struct cnic_dev *dev, int ulp_type, bool reg) struct cnic_local *cp = dev->cnic_priv; struct cnic_eth_dev *ethdev = cp->ethdev; struct drv_ctl_info info; + struct fcoe_capabilities *fcoe_cap = + &info.data.register_data.fcoe_features; - if (reg) + if (reg) { info.cmd = DRV_CTL_ULP_REGISTER_CMD; - else + if (ulp_type == CNIC_ULP_FCOE && dev->fcoe_cap) + memcpy(fcoe_cap, dev->fcoe_cap, sizeof(*fcoe_cap)); + } else { info.cmd = DRV_CTL_ULP_UNREGISTER_CMD; + } info.data.ulp_type = ulp_type; ethdev->drv_ctl(dev->netdev, &info); @@ -286,6 +291,9 @@ static int cnic_get_l5_cid(struct cnic_local *cp, u32 cid, u32 *l5_cid) { u32 i; + if (!cp->ctx_tbl) + return -EINVAL; + for (i = 0; i < cp->max_cid_space; i++) { if (cp->ctx_tbl[i].cid == cid) { *l5_cid = i; @@ -612,6 +620,8 @@ static int cnic_unregister_device(struct cnic_dev *dev, int ulp_type) if (ulp_type == CNIC_ULP_ISCSI) cnic_send_nlmsg(cp, ISCSI_KEVENT_IF_DOWN, NULL); + else if (ulp_type == CNIC_ULP_FCOE) + dev->fcoe_cap = NULL; synchronize_rcu(); @@ -2589,7 +2599,7 @@ static void cnic_bnx2x_kwqe_err(struct cnic_dev *dev, struct kwqe *kwqe) return; } - cqes[0] = (struct kcqe *) &kcqe; + cqes[0] = &kcqe; cnic_reply_bnx2x_kcqes(dev, ulp_type, cqes, 1); } @@ -3217,6 +3227,9 @@ static int cnic_ctl(void *data, struct cnic_ctl_info *info) u32 l5_cid; struct cnic_local *cp = dev->cnic_priv; + if (!test_bit(CNIC_F_CNIC_UP, &dev->flags)) + break; + if (cnic_get_l5_cid(cp, cid, &l5_cid) == 0) { struct cnic_context *ctx = &cp->ctx_tbl[l5_cid]; @@ -3947,6 +3960,15 @@ static void cnic_cm_process_kcqe(struct cnic_dev *dev, struct kcqe *kcqe) cnic_cm_upcall(cp, csk, opcode); break; + case L5CM_RAMROD_CMD_ID_CLOSE: + if (l4kcqe->status != 0) { + netdev_warn(dev->netdev, "RAMROD CLOSE compl with " + "status 0x%x\n", l4kcqe->status); + opcode = L4_KCQE_OPCODE_VALUE_CLOSE_COMP; + /* Fall through */ + } else { + 
break; + } case L4_KCQE_OPCODE_VALUE_RESET_RECEIVED: case L4_KCQE_OPCODE_VALUE_CLOSE_COMP: case L4_KCQE_OPCODE_VALUE_RESET_COMP: @@ -4250,8 +4272,6 @@ static int cnic_cm_shutdown(struct cnic_dev *dev) struct cnic_local *cp = dev->cnic_priv; int i; - cp->stop_cm(dev); - if (!cp->csk_tbl) return 0; @@ -4669,9 +4689,9 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev) cp->kcq1.sw_prod_idx = 0; cp->kcq1.hw_prod_idx_ptr = - (u16 *) &sblk->status_completion_producer_index; + &sblk->status_completion_producer_index; - cp->kcq1.status_idx_ptr = (u16 *) &sblk->status_idx; + cp->kcq1.status_idx_ptr = &sblk->status_idx; /* Initialize the kernel complete queue context. */ val = KRNLQ_TYPE_TYPE_KRNLQ | KRNLQ_SIZE_TYPE_SIZE | @@ -4697,9 +4717,9 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev) u32 sb = BNX2_L2CTX_L5_STATUSB_NUM(sb_id); cp->kcq1.hw_prod_idx_ptr = - (u16 *) &msblk->status_completion_producer_index; - cp->kcq1.status_idx_ptr = (u16 *) &msblk->status_idx; - cp->kwq_con_idx_ptr = (u16 *) &msblk->status_cmd_consumer_index; + &msblk->status_completion_producer_index; + cp->kcq1.status_idx_ptr = &msblk->status_idx; + cp->kwq_con_idx_ptr = &msblk->status_cmd_consumer_index; cp->int_num = sb_id << BNX2_PCICFG_INT_ACK_CMD_INT_NUM_SHIFT; cnic_ctx_wr(dev, kwq_cid_addr, L5_KRNLQ_HOST_QIDX, sb); cnic_ctx_wr(dev, kcq_cid_addr, L5_KRNLQ_HOST_QIDX, sb); @@ -4981,8 +5001,14 @@ static int cnic_start_bnx2x_hw(struct cnic_dev *dev) cp->port_mode = CHIP_PORT_MODE_NONE; if (BNX2X_CHIP_IS_E2_PLUS(cp->chip_id)) { - u32 val = CNIC_RD(dev, MISC_REG_PORT4MODE_EN_OVWR); + u32 val; + pci_read_config_dword(dev->pcidev, PCICFG_ME_REGISTER, &val); + cp->func = (u8) ((val & ME_REG_ABS_PF_NUM) >> + ME_REG_ABS_PF_NUM_SHIFT); + func = CNIC_FUNC(cp); + + val = CNIC_RD(dev, MISC_REG_PORT4MODE_EN_OVWR); if (!(val & 1)) val = CNIC_RD(dev, MISC_REG_PORT4MODE_EN); else @@ -5287,6 +5313,7 @@ static void cnic_stop_hw(struct cnic_dev *dev) i++; } cnic_shutdown_rings(dev); + cp->stop_cm(dev); clear_bit(CNIC_F_CNIC_UP, &dev->flags); RCU_INIT_POINTER(cp->ulp_ops[CNIC_ULP_L4], NULL); synchronize_rcu(); @@ -5516,9 +5543,7 @@ static void cnic_rcv_netevent(struct cnic_local *cp, unsigned long event, rcu_read_unlock(); } -/** - * netdev event handler - */ +/* netdev event handler */ static int cnic_netdev_event(struct notifier_block *this, unsigned long event, void *ptr) { diff --git a/drivers/net/ethernet/broadcom/cnic_if.h b/drivers/net/ethernet/broadcom/cnic_if.h index 289274e546be..5cb88881bba1 100644 --- a/drivers/net/ethernet/broadcom/cnic_if.h +++ b/drivers/net/ethernet/broadcom/cnic_if.h @@ -12,8 +12,10 @@ #ifndef CNIC_IF_H #define CNIC_IF_H -#define CNIC_MODULE_VERSION "2.5.10" -#define CNIC_MODULE_RELDATE "March 21, 2012" +#include "bnx2x/bnx2x_mfw_req.h" + +#define CNIC_MODULE_VERSION "2.5.12" +#define CNIC_MODULE_RELDATE "June 29, 2012" #define CNIC_ULP_RDMA 0 #define CNIC_ULP_ISCSI 1 @@ -131,6 +133,11 @@ struct drv_ctl_l2_ring { u32 cid; }; +struct drv_ctl_register_data { + int ulp_type; + struct fcoe_capabilities fcoe_features; +}; + struct drv_ctl_info { int cmd; union { @@ -138,6 +145,7 @@ struct drv_ctl_info { struct drv_ctl_io io; struct drv_ctl_l2_ring ring; int ulp_type; + struct drv_ctl_register_data register_data; char bytes[MAX_DRV_CTL_DATA]; } data; }; @@ -305,6 +313,7 @@ struct cnic_dev { int max_rdma_conn; union drv_info_to_mcp *stats_addr; + struct fcoe_capabilities *fcoe_cap; void *cnic_priv; }; diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 
e47ff8be1d7b..fce4c1e4dd3f 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -44,6 +44,10 @@ #include <linux/prefetch.h> #include <linux/dma-mapping.h> #include <linux/firmware.h> +#if IS_ENABLED(CONFIG_HWMON) +#include <linux/hwmon.h> +#include <linux/hwmon-sysfs.h> +#endif #include <net/checksum.h> #include <net/ip.h> @@ -298,6 +302,7 @@ static DEFINE_PCI_DEVICE_TABLE(tg3_pci_tbl) = { {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57795)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_5719)}, {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_5720)}, + {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57762)}, {PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9DXX)}, {PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9MXX)}, {PCI_DEVICE(PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1000)}, @@ -730,44 +735,131 @@ static void tg3_ape_unlock(struct tg3 *tp, int locknum) tg3_ape_write32(tp, gnt + 4 * locknum, bit); } -static void tg3_ape_send_event(struct tg3 *tp, u32 event) +static int tg3_ape_event_lock(struct tg3 *tp, u32 timeout_us) { - int i; u32 apedata; - /* NCSI does not support APE events */ - if (tg3_flag(tp, APE_HAS_NCSI)) - return; + while (timeout_us) { + if (tg3_ape_lock(tp, TG3_APE_LOCK_MEM)) + return -EBUSY; + + apedata = tg3_ape_read32(tp, TG3_APE_EVENT_STATUS); + if (!(apedata & APE_EVENT_STATUS_EVENT_PENDING)) + break; + + tg3_ape_unlock(tp, TG3_APE_LOCK_MEM); + + udelay(10); + timeout_us -= (timeout_us > 10) ? 10 : timeout_us; + } + + return timeout_us ? 0 : -EBUSY; +} + +static int tg3_ape_wait_for_event(struct tg3 *tp, u32 timeout_us) +{ + u32 i, apedata; + + for (i = 0; i < timeout_us / 10; i++) { + apedata = tg3_ape_read32(tp, TG3_APE_EVENT_STATUS); + + if (!(apedata & APE_EVENT_STATUS_EVENT_PENDING)) + break; + + udelay(10); + } + + return i == timeout_us / 10; +} + +int tg3_ape_scratchpad_read(struct tg3 *tp, u32 *data, u32 base_off, u32 len) +{ + int err; + u32 i, bufoff, msgoff, maxlen, apedata; + + if (!tg3_flag(tp, APE_HAS_NCSI)) + return 0; apedata = tg3_ape_read32(tp, TG3_APE_SEG_SIG); if (apedata != APE_SEG_SIG_MAGIC) - return; + return -ENODEV; apedata = tg3_ape_read32(tp, TG3_APE_FW_STATUS); if (!(apedata & APE_FW_STATUS_READY)) - return; + return -EAGAIN; - /* Wait for up to 1 millisecond for APE to service previous event. */ - for (i = 0; i < 10; i++) { - if (tg3_ape_lock(tp, TG3_APE_LOCK_MEM)) - return; + bufoff = tg3_ape_read32(tp, TG3_APE_SEG_MSG_BUF_OFF) + + TG3_APE_SHMEM_BASE; + msgoff = bufoff + 2 * sizeof(u32); + maxlen = tg3_ape_read32(tp, TG3_APE_SEG_MSG_BUF_LEN); - apedata = tg3_ape_read32(tp, TG3_APE_EVENT_STATUS); + while (len) { + u32 length; - if (!(apedata & APE_EVENT_STATUS_EVENT_PENDING)) - tg3_ape_write32(tp, TG3_APE_EVENT_STATUS, - event | APE_EVENT_STATUS_EVENT_PENDING); + /* Cap xfer sizes to scratchpad limits. */ + length = (len > maxlen) ? maxlen : len; + len -= length; + + apedata = tg3_ape_read32(tp, TG3_APE_FW_STATUS); + if (!(apedata & APE_FW_STATUS_READY)) + return -EAGAIN; + + /* Wait for up to 1 msec for APE to service previous event. 
*/ + err = tg3_ape_event_lock(tp, 1000); + if (err) + return err; + + apedata = APE_EVENT_STATUS_DRIVER_EVNT | + APE_EVENT_STATUS_SCRTCHPD_READ | + APE_EVENT_STATUS_EVENT_PENDING; + tg3_ape_write32(tp, TG3_APE_EVENT_STATUS, apedata); + + tg3_ape_write32(tp, bufoff, base_off); + tg3_ape_write32(tp, bufoff + sizeof(u32), length); tg3_ape_unlock(tp, TG3_APE_LOCK_MEM); + tg3_ape_write32(tp, TG3_APE_EVENT, APE_EVENT_1); - if (!(apedata & APE_EVENT_STATUS_EVENT_PENDING)) - break; + base_off += length; - udelay(100); + if (tg3_ape_wait_for_event(tp, 30000)) + return -EAGAIN; + + for (i = 0; length; i += 4, length -= 4) { + u32 val = tg3_ape_read32(tp, msgoff + i); + memcpy(data, &val, sizeof(u32)); + data++; + } } - if (!(apedata & APE_EVENT_STATUS_EVENT_PENDING)) - tg3_ape_write32(tp, TG3_APE_EVENT, APE_EVENT_1); + return 0; +} + +static int tg3_ape_send_event(struct tg3 *tp, u32 event) +{ + int err; + u32 apedata; + + apedata = tg3_ape_read32(tp, TG3_APE_SEG_SIG); + if (apedata != APE_SEG_SIG_MAGIC) + return -EAGAIN; + + apedata = tg3_ape_read32(tp, TG3_APE_FW_STATUS); + if (!(apedata & APE_FW_STATUS_READY)) + return -EAGAIN; + + /* Wait for up to 1 millisecond for APE to service previous event. */ + err = tg3_ape_event_lock(tp, 1000); + if (err) + return err; + + tg3_ape_write32(tp, TG3_APE_EVENT_STATUS, + event | APE_EVENT_STATUS_EVENT_PENDING); + + tg3_ape_unlock(tp, TG3_APE_LOCK_MEM); + tg3_ape_write32(tp, TG3_APE_EVENT, APE_EVENT_1); + + return 0; } static void tg3_ape_driver_state_change(struct tg3 *tp, int kind) @@ -9393,6 +9485,110 @@ static int tg3_init_hw(struct tg3 *tp, int reset_phy) return tg3_reset_hw(tp, reset_phy); } +#if IS_ENABLED(CONFIG_HWMON) +static void tg3_sd_scan_scratchpad(struct tg3 *tp, struct tg3_ocir *ocir) +{ + int i; + + for (i = 0; i < TG3_SD_NUM_RECS; i++, ocir++) { + u32 off = i * TG3_OCIR_LEN, len = TG3_OCIR_LEN; + + tg3_ape_scratchpad_read(tp, (u32 *) ocir, off, len); + off += len; + + if (ocir->signature != TG3_OCIR_SIG_MAGIC || + !(ocir->version_flags & TG3_OCIR_FLAG_ACTIVE)) + memset(ocir, 0, TG3_OCIR_LEN); + } +} + +/* sysfs attributes for hwmon */ +static ssize_t tg3_show_temp(struct device *dev, + struct device_attribute *devattr, char *buf) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct net_device *netdev = pci_get_drvdata(pdev); + struct tg3 *tp = netdev_priv(netdev); + struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); + u32 temperature; + + spin_lock_bh(&tp->lock); + tg3_ape_scratchpad_read(tp, &temperature, attr->index, + sizeof(temperature)); + spin_unlock_bh(&tp->lock); + return sprintf(buf, "%u\n", temperature); +} + + +static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, tg3_show_temp, NULL, + TG3_TEMP_SENSOR_OFFSET); +static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, tg3_show_temp, NULL, + TG3_TEMP_CAUTION_OFFSET); +static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, tg3_show_temp, NULL, + TG3_TEMP_MAX_OFFSET); + +static struct attribute *tg3_attributes[] = { + &sensor_dev_attr_temp1_input.dev_attr.attr, + &sensor_dev_attr_temp1_crit.dev_attr.attr, + &sensor_dev_attr_temp1_max.dev_attr.attr, + NULL +}; + +static const struct attribute_group tg3_group = { + .attrs = tg3_attributes, +}; + +#endif + +static void tg3_hwmon_close(struct tg3 *tp) +{ +#if IS_ENABLED(CONFIG_HWMON) + if (tp->hwmon_dev) { + hwmon_device_unregister(tp->hwmon_dev); + tp->hwmon_dev = NULL; + sysfs_remove_group(&tp->pdev->dev.kobj, &tg3_group); + } +#endif +} + +static void tg3_hwmon_open(struct tg3 *tp) +{ +#if IS_ENABLED(CONFIG_HWMON) + int i, err; + u32 size = 
0; + struct pci_dev *pdev = tp->pdev; + struct tg3_ocir ocirs[TG3_SD_NUM_RECS]; + + tg3_sd_scan_scratchpad(tp, ocirs); + + for (i = 0; i < TG3_SD_NUM_RECS; i++) { + if (!ocirs[i].src_data_length) + continue; + + size += ocirs[i].src_hdr_length; + size += ocirs[i].src_data_length; + } + + if (!size) + return; + + /* Register hwmon sysfs hooks */ + err = sysfs_create_group(&pdev->dev.kobj, &tg3_group); + if (err) { + dev_err(&pdev->dev, "Cannot create sysfs group, aborting\n"); + return; + } + + tp->hwmon_dev = hwmon_device_register(&pdev->dev); + if (IS_ERR(tp->hwmon_dev)) { + tp->hwmon_dev = NULL; + dev_err(&pdev->dev, "Cannot register hwmon device, aborting\n"); + sysfs_remove_group(&pdev->dev.kobj, &tg3_group); + } +#endif +} + + #define TG3_STAT_ADD32(PSTAT, REG) \ do { u32 __val = tr32(REG); \ (PSTAT)->low += __val; \ @@ -9908,7 +10104,7 @@ static bool tg3_enable_msix(struct tg3 *tp) int i, rc; struct msix_entry msix_ent[tp->irq_max]; - tp->irq_cnt = num_online_cpus(); + tp->irq_cnt = netif_get_num_default_rss_queues(); if (tp->irq_cnt > 1) { /* We want as many rx rings enabled as there are cpus. * In multiqueue MSI-X mode, the first MSI-X vector @@ -10101,6 +10297,8 @@ static int tg3_open(struct net_device *dev) tg3_phy_start(tp); + tg3_hwmon_open(tp); + tg3_full_lock(tp, 0); tg3_timer_start(tp); @@ -10150,6 +10348,8 @@ static int tg3_close(struct net_device *dev) tg3_timer_stop(tp); + tg3_hwmon_close(tp); + tg3_phy_stop(tp); tg3_full_lock(tp, 1); @@ -13857,14 +14057,9 @@ static void __devinit tg3_read_mgmtfw_ver(struct tg3 *tp) } } -static void __devinit tg3_read_dash_ver(struct tg3 *tp) +static void __devinit tg3_probe_ncsi(struct tg3 *tp) { - int vlen; u32 apedata; - char *fwtype; - - if (!tg3_flag(tp, ENABLE_APE) || !tg3_flag(tp, ENABLE_ASF)) - return; apedata = tg3_ape_read32(tp, TG3_APE_SEG_SIG); if (apedata != APE_SEG_SIG_MAGIC) @@ -13874,14 +14069,22 @@ static void __devinit tg3_read_dash_ver(struct tg3 *tp) if (!(apedata & APE_FW_STATUS_READY)) return; + if (tg3_ape_read32(tp, TG3_APE_FW_FEATURES) & TG3_APE_FW_FEATURE_NCSI) + tg3_flag_set(tp, APE_HAS_NCSI); +} + +static void __devinit tg3_read_dash_ver(struct tg3 *tp) +{ + int vlen; + u32 apedata; + char *fwtype; + apedata = tg3_ape_read32(tp, TG3_APE_FW_VERSION); - if (tg3_ape_read32(tp, TG3_APE_FW_FEATURES) & TG3_APE_FW_FEATURE_NCSI) { - tg3_flag_set(tp, APE_HAS_NCSI); + if (tg3_flag(tp, APE_HAS_NCSI)) fwtype = "NCSI"; - } else { + else fwtype = "DASH"; - } vlen = strlen(tp->fw_ver); @@ -13915,20 +14118,17 @@ static void __devinit tg3_read_fw_ver(struct tg3 *tp) tg3_read_sb_ver(tp, val); else if ((val & TG3_EEPROM_MAGIC_HW_MSK) == TG3_EEPROM_MAGIC_HW) tg3_read_hwsb_ver(tp); - else - return; - - if (vpd_vers) - goto done; - if (tg3_flag(tp, ENABLE_APE)) { - if (tg3_flag(tp, ENABLE_ASF)) - tg3_read_dash_ver(tp); - } else if (tg3_flag(tp, ENABLE_ASF)) { - tg3_read_mgmtfw_ver(tp); + if (tg3_flag(tp, ENABLE_ASF)) { + if (tg3_flag(tp, ENABLE_APE)) { + tg3_probe_ncsi(tp); + if (!vpd_vers) + tg3_read_dash_ver(tp); + } else if (!vpd_vers) { + tg3_read_mgmtfw_ver(tp); + } } -done: tp->fw_ver[TG3_VER_SIZE - 1] = 0; } diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h index 93865f899a4f..a1b75cd67b9d 100644 --- a/drivers/net/ethernet/broadcom/tg3.h +++ b/drivers/net/ethernet/broadcom/tg3.h @@ -2311,10 +2311,11 @@ #define APE_LOCK_REQ_DRIVER 0x00001000 #define TG3_APE_LOCK_GRANT 0x004c #define APE_LOCK_GRANT_DRIVER 0x00001000 -#define TG3_APE_SEG_SIG 0x4000 -#define APE_SEG_SIG_MAGIC 0x41504521 /* APE 
shared memory. Accessible through BAR1 */ +#define TG3_APE_SHMEM_BASE 0x4000 +#define TG3_APE_SEG_SIG 0x4000 +#define APE_SEG_SIG_MAGIC 0x41504521 #define TG3_APE_FW_STATUS 0x400c #define APE_FW_STATUS_READY 0x00000100 #define TG3_APE_FW_FEATURES 0x4010 @@ -2327,6 +2328,8 @@ #define APE_FW_VERSION_REVMSK 0x0000ff00 #define APE_FW_VERSION_REVSFT 8 #define APE_FW_VERSION_BLDMSK 0x000000ff +#define TG3_APE_SEG_MSG_BUF_OFF 0x401c +#define TG3_APE_SEG_MSG_BUF_LEN 0x4020 #define TG3_APE_HOST_SEG_SIG 0x4200 #define APE_HOST_SEG_SIG_MAGIC 0x484f5354 #define TG3_APE_HOST_SEG_LEN 0x4204 @@ -2353,6 +2356,8 @@ #define APE_EVENT_STATUS_DRIVER_EVNT 0x00000010 #define APE_EVENT_STATUS_STATE_CHNGE 0x00000500 +#define APE_EVENT_STATUS_SCRTCHPD_READ 0x00001600 +#define APE_EVENT_STATUS_SCRTCHPD_WRITE 0x00001700 #define APE_EVENT_STATUS_STATE_START 0x00010000 #define APE_EVENT_STATUS_STATE_UNLOAD 0x00020000 #define APE_EVENT_STATUS_STATE_WOL 0x00030000 @@ -2671,6 +2676,40 @@ struct tg3_hw_stats { u8 __reserved4[0xb00-0x9c8]; }; +#define TG3_SD_NUM_RECS 3 +#define TG3_OCIR_LEN (sizeof(struct tg3_ocir)) +#define TG3_OCIR_SIG_MAGIC 0x5253434f +#define TG3_OCIR_FLAG_ACTIVE 0x00000001 + +#define TG3_TEMP_CAUTION_OFFSET 0xc8 +#define TG3_TEMP_MAX_OFFSET 0xcc +#define TG3_TEMP_SENSOR_OFFSET 0xd4 + + +struct tg3_ocir { + u32 signature; + u16 version_flags; + u16 refresh_int; + u32 refresh_tmr; + u32 update_tmr; + u32 dst_base_addr; + u16 src_hdr_offset; + u16 src_hdr_length; + u16 src_data_offset; + u16 src_data_length; + u16 dst_hdr_offset; + u16 dst_data_offset; + u16 dst_reg_upd_offset; + u16 dst_sem_offset; + u32 reserved1[2]; + u32 port0_flags; + u32 port1_flags; + u32 port2_flags; + u32 port3_flags; + u32 reserved2[1]; +}; + + /* 'mapping' is superfluous as the chip does not write into * the tx/rx post rings so we could just fetch it from there. * But the cache behavior is better how we are doing it now. 
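The hwmon hookup above registers the new temp1_* attributes on the tg3 PCI device itself (the hwmon class device only carries a device/ link back to its parent), so the sensor can be sanity-checked from userspace with a trivial reader like the sketch below. The hwmon index in the path is an assumption, and this first cut of tg3_show_temp() prints the raw APE scratchpad value, so the unit may be plain degrees rather than the millidegrees Celsius that hwmon normally expects.

/* Minimal sketch: read the temp1_input attribute added by this patch.
 * Assumes the tg3 NIC ended up as hwmon0; pass a different path as
 * argv[1] if it did not.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1]
		: "/sys/class/hwmon/hwmon0/device/temp1_input";
	FILE *f = fopen(path, "r");
	unsigned int temp;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%u", &temp) != 1) {
		fprintf(stderr, "could not parse %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("tg3 temperature: %u\n", temp);
	return 0;
}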
@@ -3206,6 +3245,10 @@ struct tg3 { const char *fw_needed; const struct firmware *fw; u32 fw_len; /* includes BSS */ + +#if IS_ENABLED(CONFIG_HWMON) + struct device *hwmon_dev; +#endif }; #endif /* !(_T3_H) */ diff --git a/drivers/net/ethernet/brocade/bna/bfa_cee.c b/drivers/net/ethernet/brocade/bna/bfa_cee.c index 689e5e19cc0b..550d2521ba76 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_cee.c +++ b/drivers/net/ethernet/brocade/bna/bfa_cee.c @@ -52,13 +52,7 @@ bfa_cee_format_lldp_cfg(struct bfa_cee_lldp_cfg *lldp_cfg) } /** - * bfa_cee_attr_meminfo() - * - * @brief Returns the size of the DMA memory needed by CEE attributes - * - * @param[in] void - * - * @return Size of DMA region + * bfa_cee_attr_meminfo - Returns the size of the DMA memory needed by CEE attributes */ static u32 bfa_cee_attr_meminfo(void) @@ -66,13 +60,7 @@ bfa_cee_attr_meminfo(void) return roundup(sizeof(struct bfa_cee_attr), BFA_DMA_ALIGN_SZ); } /** - * bfa_cee_stats_meminfo() - * - * @brief Returns the size of the DMA memory needed by CEE stats - * - * @param[in] void - * - * @return Size of DMA region + * bfa_cee_stats_meminfo - Returns the size of the DMA memory needed by CEE stats */ static u32 bfa_cee_stats_meminfo(void) @@ -81,14 +69,10 @@ bfa_cee_stats_meminfo(void) } /** - * bfa_cee_get_attr_isr() - * - * @brief CEE ISR for get-attributes responses from f/w - * - * @param[in] cee - Pointer to the CEE module - * status - Return status from the f/w + * bfa_cee_get_attr_isr - CEE ISR for get-attributes responses from f/w * - * @return void + * @cee: Pointer to the CEE module + * @status: Return status from the f/w */ static void bfa_cee_get_attr_isr(struct bfa_cee *cee, enum bfa_status status) @@ -105,14 +89,10 @@ bfa_cee_get_attr_isr(struct bfa_cee *cee, enum bfa_status status) } /** - * bfa_cee_get_attr_isr() - * - * @brief CEE ISR for get-stats responses from f/w + * bfa_cee_get_attr_isr - CEE ISR for get-stats responses from f/w * - * @param[in] cee - Pointer to the CEE module - * status - Return status from the f/w - * - * @return void + * @cee: Pointer to the CEE module + * @status: Return status from the f/w */ static void bfa_cee_get_stats_isr(struct bfa_cee *cee, enum bfa_status status) @@ -147,13 +127,7 @@ bfa_cee_reset_stats_isr(struct bfa_cee *cee, enum bfa_status status) cee->cbfn.reset_stats_cbfn(cee->cbfn.reset_stats_cbarg, status); } /** - * bfa_nw_cee_meminfo() - * - * @brief Returns the size of the DMA memory needed by CEE module - * - * @param[in] void - * - * @return Size of DMA region + * bfa_nw_cee_meminfo - Returns the size of the DMA memory needed by CEE module */ u32 bfa_nw_cee_meminfo(void) @@ -162,15 +136,11 @@ bfa_nw_cee_meminfo(void) } /** - * bfa_nw_cee_mem_claim() - * - * @brief Initialized CEE DMA Memory - * - * @param[in] cee CEE module pointer - * dma_kva Kernel Virtual Address of CEE DMA Memory - * dma_pa Physical Address of CEE DMA Memory + * bfa_nw_cee_mem_claim - Initialized CEE DMA Memory * - * @return void + * @cee: CEE module pointer + * @dma_kva: Kernel Virtual Address of CEE DMA Memory + * @dma_pa: Physical Address of CEE DMA Memory */ void bfa_nw_cee_mem_claim(struct bfa_cee *cee, u8 *dma_kva, u64 dma_pa) @@ -185,13 +155,11 @@ bfa_nw_cee_mem_claim(struct bfa_cee *cee, u8 *dma_kva, u64 dma_pa) } /** - * bfa_cee_get_attr() - * - * @brief Send the request to the f/w to fetch CEE attributes. + * bfa_cee_get_attr - Send the request to the f/w to fetch CEE attributes. * - * @param[in] Pointer to the CEE module data structure. 
+ * @cee: Pointer to the CEE module data structure. * - * @return Status + * Return: status */ enum bfa_status bfa_nw_cee_get_attr(struct bfa_cee *cee, struct bfa_cee_attr *attr, @@ -220,13 +188,7 @@ bfa_nw_cee_get_attr(struct bfa_cee *cee, struct bfa_cee_attr *attr, } /** - * bfa_cee_isrs() - * - * @brief Handles Mail-box interrupts for CEE module. - * - * @param[in] Pointer to the CEE module data structure. - * - * @return void + * bfa_cee_isrs - Handles Mail-box interrupts for CEE module. */ static void @@ -253,14 +215,9 @@ bfa_cee_isr(void *cbarg, struct bfi_mbmsg *m) } /** - * bfa_cee_notify() - * - * @brief CEE module heart-beat failure handler. - * @brief CEE module IOC event handler. - * - * @param[in] IOC event type + * bfa_cee_notify - CEE module heart-beat failure handler. * - * @return void + * @event: IOC event type */ static void @@ -307,17 +264,13 @@ bfa_cee_notify(void *arg, enum bfa_ioc_event event) } /** - * bfa_nw_cee_attach() - * - * @brief CEE module-attach API + * bfa_nw_cee_attach - CEE module-attach API * - * @param[in] cee - Pointer to the CEE module data structure - * ioc - Pointer to the ioc module data structure - * dev - Pointer to the device driver module data structure - * The device driver specific mbox ISR functions have - * this pointer as one of the parameters. - * - * @return void + * @cee: Pointer to the CEE module data structure + * @ioc: Pointer to the ioc module data structure + * @dev: Pointer to the device driver module data structure. + * The device driver specific mbox ISR functions have + * this pointer as one of the parameters. */ void bfa_nw_cee_attach(struct bfa_cee *cee, struct bfa_ioc *ioc, diff --git a/drivers/net/ethernet/brocade/bna/bfa_cs.h b/drivers/net/ethernet/brocade/bna/bfa_cs.h index 3da1a946ccdd..ad004a4c3897 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_cs.h +++ b/drivers/net/ethernet/brocade/bna/bfa_cs.h @@ -16,23 +16,18 @@ * www.brocade.com */ -/** - * @file bfa_cs.h BFA common services - */ +/* BFA common services */ #ifndef __BFA_CS_H__ #define __BFA_CS_H__ #include "cna.h" -/** - * @ BFA state machine interfaces - */ +/* BFA state machine interfaces */ typedef void (*bfa_sm_t)(void *sm, int event); -/** - * oc - object class eg. bfa_ioc +/* oc - object class eg. bfa_ioc * st - state, eg. reset * otype - object type, eg. struct bfa_ioc * etype - object type, eg. enum ioc_event @@ -45,9 +40,7 @@ typedef void (*bfa_sm_t)(void *sm, int event); #define bfa_sm_get_state(_sm) ((_sm)->sm) #define bfa_sm_cmp_state(_sm, _state) ((_sm)->sm == (bfa_sm_t)(_state)) -/** - * For converting from state machine function to state encoding. - */ +/* For converting from state machine function to state encoding. */ struct bfa_sm_table { bfa_sm_t sm; /*!< state machine function */ int state; /*!< state machine encoding */ @@ -55,13 +48,10 @@ struct bfa_sm_table { }; #define BFA_SM(_sm) ((bfa_sm_t)(_sm)) -/** - * State machine with entry actions. - */ +/* State machine with entry actions. */ typedef void (*bfa_fsm_t)(void *fsm, int event); -/** - * oc - object class eg. bfa_ioc +/* oc - object class eg. bfa_ioc * st - state, eg. reset * otype - object type, eg. struct bfa_ioc * etype - object type, eg. enum ioc_event @@ -90,9 +80,7 @@ bfa_sm_to_state(const struct bfa_sm_table *smt, bfa_sm_t sm) return smt[i].state; } -/** - * @ Generic wait counter. - */ +/* Generic wait counter. 
*/ typedef void (*bfa_wc_resume_t) (void *cbarg); @@ -116,9 +104,7 @@ bfa_wc_down(struct bfa_wc *wc) wc->wc_resume(wc->wc_cbarg); } -/** - * Initialize a waiting counter. - */ +/* Initialize a waiting counter. */ static inline void bfa_wc_init(struct bfa_wc *wc, bfa_wc_resume_t wc_resume, void *wc_cbarg) { @@ -128,9 +114,7 @@ bfa_wc_init(struct bfa_wc *wc, bfa_wc_resume_t wc_resume, void *wc_cbarg) bfa_wc_up(wc); } -/** - * Wait for counter to reach zero - */ +/* Wait for counter to reach zero */ static inline void bfa_wc_wait(struct bfa_wc *wc) { diff --git a/drivers/net/ethernet/brocade/bna/bfa_defs.h b/drivers/net/ethernet/brocade/bna/bfa_defs.h index 48f877337390..e423f82da490 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_defs.h +++ b/drivers/net/ethernet/brocade/bna/bfa_defs.h @@ -26,13 +26,9 @@ #define BFA_STRING_32 32 #define BFA_VERSION_LEN 64 -/** - * ---------------------- adapter definitions ------------ - */ +/* ---------------------- adapter definitions ------------ */ -/** - * BFA adapter level attributes. - */ +/* BFA adapter level attributes. */ enum { BFA_ADAPTER_SERIAL_NUM_LEN = STRSZ(BFA_MFG_SERIALNUM_SIZE), /* @@ -74,18 +70,14 @@ struct bfa_adapter_attr { u8 trunk_capable; }; -/** - * ---------------------- IOC definitions ------------ - */ +/* ---------------------- IOC definitions ------------ */ enum { BFA_IOC_DRIVER_LEN = 16, BFA_IOC_CHIP_REV_LEN = 8, }; -/** - * Driver and firmware versions. - */ +/* Driver and firmware versions. */ struct bfa_ioc_driver_attr { char driver[BFA_IOC_DRIVER_LEN]; /*!< driver name */ char driver_ver[BFA_VERSION_LEN]; /*!< driver version */ @@ -95,9 +87,7 @@ struct bfa_ioc_driver_attr { char ob_ver[BFA_VERSION_LEN]; /*!< openboot version */ }; -/** - * IOC PCI device attributes - */ +/* IOC PCI device attributes */ struct bfa_ioc_pci_attr { u16 vendor_id; /*!< PCI vendor ID */ u16 device_id; /*!< PCI device ID */ @@ -108,9 +98,7 @@ struct bfa_ioc_pci_attr { char chip_rev[BFA_IOC_CHIP_REV_LEN]; /*!< chip revision */ }; -/** - * IOC states - */ +/* IOC states */ enum bfa_ioc_state { BFA_IOC_UNINIT = 1, /*!< IOC is in uninit state */ BFA_IOC_RESET = 2, /*!< IOC is in reset state */ @@ -127,9 +115,7 @@ enum bfa_ioc_state { BFA_IOC_HWFAIL = 13, /*!< PCI mapping doesn't exist */ }; -/** - * IOC firmware stats - */ +/* IOC firmware stats */ struct bfa_fw_ioc_stats { u32 enable_reqs; u32 disable_reqs; @@ -139,9 +125,7 @@ struct bfa_fw_ioc_stats { u32 unknown_reqs; }; -/** - * IOC driver stats - */ +/* IOC driver stats */ struct bfa_ioc_drv_stats { u32 ioc_isrs; u32 ioc_enables; @@ -157,9 +141,7 @@ struct bfa_ioc_drv_stats { u32 rsvd; }; -/** - * IOC statistics - */ +/* IOC statistics */ struct bfa_ioc_stats { struct bfa_ioc_drv_stats drv_stats; /*!< driver IOC stats */ struct bfa_fw_ioc_stats fw_stats; /*!< firmware IOC stats */ @@ -171,9 +153,7 @@ enum bfa_ioc_type { BFA_IOC_TYPE_LL = 3, }; -/** - * IOC attributes returned in queries - */ +/* IOC attributes returned in queries */ struct bfa_ioc_attr { enum bfa_ioc_type ioc_type; enum bfa_ioc_state state; /*!< IOC state */ @@ -187,22 +167,16 @@ struct bfa_ioc_attr { u8 rsvd[4]; /*!< 64bit align */ }; -/** - * Adapter capability mask definition - */ +/* Adapter capability mask definition */ enum { BFA_CM_HBA = 0x01, BFA_CM_CNA = 0x02, BFA_CM_NIC = 0x04, }; -/** - * ---------------------- mfg definitions ------------ - */ +/* ---------------------- mfg definitions ------------ */ -/** - * Checksum size - */ +/* Checksum size */ #define BFA_MFG_CHKSUM_SIZE 16 #define BFA_MFG_PARTNUM_SIZE 14 @@ 
-213,8 +187,7 @@ enum { #pragma pack(1) -/** - * @brief BFA adapter manufacturing block definition. +/* BFA adapter manufacturing block definition. * * All numerical fields are in big-endian format. */ @@ -256,9 +229,7 @@ struct bfa_mfg_block { #pragma pack() -/** - * ---------------------- pci definitions ------------ - */ +/* ---------------------- pci definitions ------------ */ /* * PCI device ID information @@ -275,9 +246,7 @@ enum { #define bfa_asic_id_ctc(device) \ (bfa_asic_id_ct(device) || bfa_asic_id_ct2(device)) -/** - * PCI sub-system device and vendor ID information - */ +/* PCI sub-system device and vendor ID information */ enum { BFA_PCI_FCOE_SSDEVICE_ID = 0x14, BFA_PCI_CT2_SSID_FCoE = 0x22, diff --git a/drivers/net/ethernet/brocade/bna/bfa_defs_cna.h b/drivers/net/ethernet/brocade/bna/bfa_defs_cna.h index 8ab33ee2c2bc..b39c5f23974b 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_defs_cna.h +++ b/drivers/net/ethernet/brocade/bna/bfa_defs_cna.h @@ -20,10 +20,7 @@ #include "bfa_defs.h" -/** - * @brief - * FC physical port statistics. - */ +/* FC physical port statistics. */ struct bfa_port_fc_stats { u64 secs_reset; /*!< Seconds since stats is reset */ u64 tx_frames; /*!< Tx frames */ @@ -59,10 +56,7 @@ struct bfa_port_fc_stats { u64 bbsc_link_resets; /*!< Credit Recovery-Link Resets */ }; -/** - * @brief - * Eth Physical Port statistics. - */ +/* Eth Physical Port statistics. */ struct bfa_port_eth_stats { u64 secs_reset; /*!< Seconds since stats is reset */ u64 frame_64; /*!< Frames 64 bytes */ @@ -108,10 +102,7 @@ struct bfa_port_eth_stats { u64 tx_iscsi_zero_pause; /*!< Tx iSCSI zero pause */ }; -/** - * @brief - * Port statistics. - */ +/* Port statistics. */ union bfa_port_stats_u { struct bfa_port_fc_stats fc; struct bfa_port_eth_stats eth; diff --git a/drivers/net/ethernet/brocade/bna/bfa_defs_mfg_comm.h b/drivers/net/ethernet/brocade/bna/bfa_defs_mfg_comm.h index 6681fe87c1e1..7fb396fe679d 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_defs_mfg_comm.h +++ b/drivers/net/ethernet/brocade/bna/bfa_defs_mfg_comm.h @@ -20,33 +20,23 @@ #include "bfa_defs.h" -/** - * Manufacturing block version - */ +/* Manufacturing block version */ #define BFA_MFG_VERSION 3 #define BFA_MFG_VERSION_UNINIT 0xFF -/** - * Manufacturing block encrypted version - */ +/* Manufacturing block encrypted version */ #define BFA_MFG_ENC_VER 2 -/** - * Manufacturing block version 1 length - */ +/* Manufacturing block version 1 length */ #define BFA_MFG_VER1_LEN 128 -/** - * Manufacturing block header length - */ +/* Manufacturing block header length */ #define BFA_MFG_HDR_LEN 4 #define BFA_MFG_SERIALNUM_SIZE 11 #define STRSZ(_n) (((_n) + 4) & ~3) -/** - * Manufacturing card type - */ +/* Manufacturing card type */ enum { BFA_MFG_TYPE_CB_MAX = 825, /*!< Crossbow card type max */ BFA_MFG_TYPE_FC8P2 = 825, /*!< 8G 2port FC card */ @@ -70,9 +60,7 @@ enum { #pragma pack(1) -/** - * Check if Mezz card - */ +/* Check if Mezz card */ #define bfa_mfg_is_mezz(type) (( \ (type) == BFA_MFG_TYPE_JAYHAWK || \ (type) == BFA_MFG_TYPE_WANCHESE || \ @@ -127,9 +115,7 @@ do { \ } \ } while (0) -/** - * VPD data length - */ +/* VPD data length */ #define BFA_MFG_VPD_LEN 512 #define BFA_MFG_VPD_LEN_INVALID 0 @@ -137,9 +123,7 @@ do { \ #define BFA_MFG_VPD_PCI_VER_MASK 0x07 /*!< version mask 3 bits */ #define BFA_MFG_VPD_PCI_VDR_MASK 0xf8 /*!< vendor mask 5 bits */ -/** - * VPD vendor tag - */ +/* VPD vendor tag */ enum { BFA_MFG_VPD_UNKNOWN = 0, /*!< vendor unknown */ BFA_MFG_VPD_IBM = 1, /*!< vendor IBM */ @@ -151,8 
+135,7 @@ enum { BFA_MFG_VPD_PCI_BRCD = 0xf8, /*!< PCI VPD Brocade */ }; -/** - * @brief BFA adapter flash vpd data definition. +/* BFA adapter flash vpd data definition. * * All numerical fields are in big-endian format. */ diff --git a/drivers/net/ethernet/brocade/bna/bfa_defs_status.h b/drivers/net/ethernet/brocade/bna/bfa_defs_status.h index 7c5fe6c2e80e..ea9af9ae754d 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_defs_status.h +++ b/drivers/net/ethernet/brocade/bna/bfa_defs_status.h @@ -18,8 +18,7 @@ #ifndef __BFA_DEFS_STATUS_H__ #define __BFA_DEFS_STATUS_H__ -/** - * API status return values +/* API status return values * * NOTE: The error msgs are auto generated from the comments. Only singe line * comments are supported diff --git a/drivers/net/ethernet/brocade/bna/bfa_ioc.c b/drivers/net/ethernet/brocade/bna/bfa_ioc.c index 0b640fafbda3..959c58ef972a 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_ioc.c +++ b/drivers/net/ethernet/brocade/bna/bfa_ioc.c @@ -20,13 +20,9 @@ #include "bfi_reg.h" #include "bfa_defs.h" -/** - * IOC local definitions - */ +/* IOC local definitions */ -/** - * Asic specific macros : see bfa_hw_cb.c and bfa_hw_ct.c for details. - */ +/* Asic specific macros : see bfa_hw_cb.c and bfa_hw_ct.c for details. */ #define bfa_ioc_firmware_lock(__ioc) \ ((__ioc)->ioc_hwif->ioc_firmware_lock(__ioc)) @@ -96,9 +92,7 @@ static void bfa_ioc_get_adapter_manufacturer(struct bfa_ioc *ioc, static void bfa_ioc_get_adapter_model(struct bfa_ioc *ioc, char *model); static u64 bfa_ioc_get_pwwn(struct bfa_ioc *ioc); -/** - * IOC state machine definitions/declarations - */ +/* IOC state machine definitions/declarations */ enum ioc_event { IOC_E_RESET = 1, /*!< IOC reset request */ IOC_E_ENABLE = 2, /*!< IOC enable request */ @@ -148,9 +142,7 @@ static void bfa_iocpf_initfail(struct bfa_ioc *ioc); static void bfa_iocpf_getattrfail(struct bfa_ioc *ioc); static void bfa_iocpf_stop(struct bfa_ioc *ioc); -/** - * IOCPF state machine events - */ +/* IOCPF state machine events */ enum iocpf_event { IOCPF_E_ENABLE = 1, /*!< IOCPF enable request */ IOCPF_E_DISABLE = 2, /*!< IOCPF disable request */ @@ -166,9 +158,7 @@ enum iocpf_event { IOCPF_E_SEM_ERROR = 12, /*!< h/w sem mapping error */ }; -/** - * IOCPF states - */ +/* IOCPF states */ enum bfa_iocpf_state { BFA_IOCPF_RESET = 1, /*!< IOC is in reset state */ BFA_IOCPF_SEMWAIT = 2, /*!< Waiting for IOC h/w semaphore */ @@ -215,21 +205,15 @@ static struct bfa_sm_table iocpf_sm_table[] = { {BFA_SM(bfa_iocpf_sm_disabled), BFA_IOCPF_DISABLED}, }; -/** - * IOC State Machine - */ +/* IOC State Machine */ -/** - * Beginning state. IOC uninit state. - */ +/* Beginning state. IOC uninit state. */ static void bfa_ioc_sm_uninit_entry(struct bfa_ioc *ioc) { } -/** - * IOC is in uninit state. - */ +/* IOC is in uninit state. */ static void bfa_ioc_sm_uninit(struct bfa_ioc *ioc, enum ioc_event event) { @@ -243,18 +227,14 @@ bfa_ioc_sm_uninit(struct bfa_ioc *ioc, enum ioc_event event) } } -/** - * Reset entry actions -- initialize state machine - */ +/* Reset entry actions -- initialize state machine */ static void bfa_ioc_sm_reset_entry(struct bfa_ioc *ioc) { bfa_fsm_set_state(&ioc->iocpf, bfa_iocpf_sm_reset); } -/** - * IOC is in reset state. - */ +/* IOC is in reset state. */ static void bfa_ioc_sm_reset(struct bfa_ioc *ioc, enum ioc_event event) { @@ -282,8 +262,7 @@ bfa_ioc_sm_enabling_entry(struct bfa_ioc *ioc) bfa_iocpf_enable(ioc); } -/** - * Host IOC function is being enabled, awaiting response from firmware. 
+/* Host IOC function is being enabled, awaiting response from firmware. * Semaphore is acquired. */ static void @@ -325,9 +304,7 @@ bfa_ioc_sm_enabling(struct bfa_ioc *ioc, enum ioc_event event) } } -/** - * Semaphore should be acquired for version check. - */ +/* Semaphore should be acquired for version check. */ static void bfa_ioc_sm_getattr_entry(struct bfa_ioc *ioc) { @@ -336,9 +313,7 @@ bfa_ioc_sm_getattr_entry(struct bfa_ioc *ioc) bfa_ioc_send_getattr(ioc); } -/** - * IOC configuration in progress. Timer is active. - */ +/* IOC configuration in progress. Timer is active. */ static void bfa_ioc_sm_getattr(struct bfa_ioc *ioc, enum ioc_event event) { @@ -419,9 +394,7 @@ bfa_ioc_sm_disabling_entry(struct bfa_ioc *ioc) bfa_iocpf_disable(ioc); } -/** - * IOC is being disabled - */ +/* IOC is being disabled */ static void bfa_ioc_sm_disabling(struct bfa_ioc *ioc, enum ioc_event event) { @@ -449,9 +422,7 @@ bfa_ioc_sm_disabling(struct bfa_ioc *ioc, enum ioc_event event) } } -/** - * IOC disable completion entry. - */ +/* IOC disable completion entry. */ static void bfa_ioc_sm_disabled_entry(struct bfa_ioc *ioc) { @@ -485,9 +456,7 @@ bfa_ioc_sm_fail_retry_entry(struct bfa_ioc *ioc) { } -/** - * Hardware initialization retry. - */ +/* Hardware initialization retry. */ static void bfa_ioc_sm_fail_retry(struct bfa_ioc *ioc, enum ioc_event event) { @@ -534,9 +503,7 @@ bfa_ioc_sm_fail_entry(struct bfa_ioc *ioc) { } -/** - * IOC failure. - */ +/* IOC failure. */ static void bfa_ioc_sm_fail(struct bfa_ioc *ioc, enum ioc_event event) { @@ -568,9 +535,7 @@ bfa_ioc_sm_hwfail_entry(struct bfa_ioc *ioc) { } -/** - * IOC failure. - */ +/* IOC failure. */ static void bfa_ioc_sm_hwfail(struct bfa_ioc *ioc, enum ioc_event event) { @@ -593,13 +558,9 @@ bfa_ioc_sm_hwfail(struct bfa_ioc *ioc, enum ioc_event event) } } -/** - * IOCPF State Machine - */ +/* IOCPF State Machine */ -/** - * Reset entry actions -- initialize state machine - */ +/* Reset entry actions -- initialize state machine */ static void bfa_iocpf_sm_reset_entry(struct bfa_iocpf *iocpf) { @@ -607,9 +568,7 @@ bfa_iocpf_sm_reset_entry(struct bfa_iocpf *iocpf) iocpf->auto_recover = bfa_nw_auto_recover; } -/** - * Beginning state. IOC is in reset state. - */ +/* Beginning state. IOC is in reset state. */ static void bfa_iocpf_sm_reset(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -626,9 +585,7 @@ bfa_iocpf_sm_reset(struct bfa_iocpf *iocpf, enum iocpf_event event) } } -/** - * Semaphore should be acquired for version check. - */ +/* Semaphore should be acquired for version check. */ static void bfa_iocpf_sm_fwcheck_entry(struct bfa_iocpf *iocpf) { @@ -636,9 +593,7 @@ bfa_iocpf_sm_fwcheck_entry(struct bfa_iocpf *iocpf) bfa_ioc_hw_sem_get(iocpf->ioc); } -/** - * Awaiting h/w semaphore to continue with version check. - */ +/* Awaiting h/w semaphore to continue with version check. */ static void bfa_iocpf_sm_fwcheck(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -683,9 +638,7 @@ bfa_iocpf_sm_fwcheck(struct bfa_iocpf *iocpf, enum iocpf_event event) } } -/** - * Notify enable completion callback - */ +/* Notify enable completion callback */ static void bfa_iocpf_sm_mismatch_entry(struct bfa_iocpf *iocpf) { @@ -698,9 +651,7 @@ bfa_iocpf_sm_mismatch_entry(struct bfa_iocpf *iocpf) msecs_to_jiffies(BFA_IOC_TOV)); } -/** - * Awaiting firmware version match. - */ +/* Awaiting firmware version match. 
*/ static void bfa_iocpf_sm_mismatch(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -727,18 +678,14 @@ bfa_iocpf_sm_mismatch(struct bfa_iocpf *iocpf, enum iocpf_event event) } } -/** - * Request for semaphore. - */ +/* Request for semaphore. */ static void bfa_iocpf_sm_semwait_entry(struct bfa_iocpf *iocpf) { bfa_ioc_hw_sem_get(iocpf->ioc); } -/** - * Awaiting semaphore for h/w initialzation. - */ +/* Awaiting semaphore for h/w initialzation. */ static void bfa_iocpf_sm_semwait(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -778,8 +725,7 @@ bfa_iocpf_sm_hwinit_entry(struct bfa_iocpf *iocpf) bfa_ioc_reset(iocpf->ioc, false); } -/** - * Hardware is being initialized. Interrupts are enabled. +/* Hardware is being initialized. Interrupts are enabled. * Holding hardware semaphore lock. */ static void @@ -822,8 +768,7 @@ bfa_iocpf_sm_enabling_entry(struct bfa_iocpf *iocpf) bfa_ioc_send_enable(iocpf->ioc); } -/** - * Host IOC function is being enabled, awaiting response from firmware. +/* Host IOC function is being enabled, awaiting response from firmware. * Semaphore is acquired. */ static void @@ -896,9 +841,7 @@ bfa_iocpf_sm_disabling_entry(struct bfa_iocpf *iocpf) bfa_ioc_send_disable(iocpf->ioc); } -/** - * IOC is being disabled - */ +/* IOC is being disabled */ static void bfa_iocpf_sm_disabling(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -935,9 +878,7 @@ bfa_iocpf_sm_disabling_sync_entry(struct bfa_iocpf *iocpf) bfa_ioc_hw_sem_get(iocpf->ioc); } -/** - * IOC hb ack request is being removed. - */ +/* IOC hb ack request is being removed. */ static void bfa_iocpf_sm_disabling_sync(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -963,9 +904,7 @@ bfa_iocpf_sm_disabling_sync(struct bfa_iocpf *iocpf, enum iocpf_event event) } } -/** - * IOC disable completion entry. - */ +/* IOC disable completion entry. */ static void bfa_iocpf_sm_disabled_entry(struct bfa_iocpf *iocpf) { @@ -1000,9 +939,7 @@ bfa_iocpf_sm_initfail_sync_entry(struct bfa_iocpf *iocpf) bfa_ioc_hw_sem_get(iocpf->ioc); } -/** - * Hardware initialization failed. - */ +/* Hardware initialization failed. */ static void bfa_iocpf_sm_initfail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -1046,9 +983,7 @@ bfa_iocpf_sm_initfail_entry(struct bfa_iocpf *iocpf) { } -/** - * Hardware initialization failed. - */ +/* Hardware initialization failed. */ static void bfa_iocpf_sm_initfail(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -1084,9 +1019,7 @@ bfa_iocpf_sm_fail_sync_entry(struct bfa_iocpf *iocpf) bfa_ioc_hw_sem_get(iocpf->ioc); } -/** - * IOC is in failed state. - */ +/* IOC is in failed state. */ static void bfa_iocpf_sm_fail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -1134,10 +1067,7 @@ bfa_iocpf_sm_fail_entry(struct bfa_iocpf *iocpf) { } -/** - * @brief - * IOC is in failed state. - */ +/* IOC is in failed state. */ static void bfa_iocpf_sm_fail(struct bfa_iocpf *iocpf, enum iocpf_event event) { @@ -1151,13 +1081,9 @@ bfa_iocpf_sm_fail(struct bfa_iocpf *iocpf, enum iocpf_event event) } } -/** - * BFA IOC private functions - */ +/* BFA IOC private functions */ -/** - * Notify common modules registered for notification. - */ +/* Notify common modules registered for notification. 
*/ static void bfa_ioc_event_notify(struct bfa_ioc *ioc, enum bfa_ioc_event event) { @@ -1298,10 +1224,7 @@ bfa_ioc_hw_sem_get_cancel(struct bfa_ioc *ioc) del_timer(&ioc->sem_timer); } -/** - * @brief - * Initialize LPU local memory (aka secondary memory / SRAM) - */ +/* Initialize LPU local memory (aka secondary memory / SRAM) */ static void bfa_ioc_lmem_init(struct bfa_ioc *ioc) { @@ -1366,9 +1289,7 @@ bfa_ioc_lpu_stop(struct bfa_ioc *ioc) writel(pss_ctl, ioc->ioc_regs.pss_ctl_reg); } -/** - * Get driver and firmware versions. - */ +/* Get driver and firmware versions. */ void bfa_nw_ioc_fwver_get(struct bfa_ioc *ioc, struct bfi_ioc_image_hdr *fwhdr) { @@ -1388,9 +1309,7 @@ bfa_nw_ioc_fwver_get(struct bfa_ioc *ioc, struct bfi_ioc_image_hdr *fwhdr) } } -/** - * Returns TRUE if same. - */ +/* Returns TRUE if same. */ bool bfa_nw_ioc_fwver_cmp(struct bfa_ioc *ioc, struct bfi_ioc_image_hdr *fwhdr) { @@ -1408,8 +1327,7 @@ bfa_nw_ioc_fwver_cmp(struct bfa_ioc *ioc, struct bfi_ioc_image_hdr *fwhdr) return true; } -/** - * Return true if current running version is valid. Firmware signature and +/* Return true if current running version is valid. Firmware signature and * execution context (driver/bios) must match. */ static bool @@ -1430,9 +1348,7 @@ bfa_ioc_fwver_valid(struct bfa_ioc *ioc, u32 boot_env) return bfa_nw_ioc_fwver_cmp(ioc, &fwhdr); } -/** - * Conditionally flush any pending message from firmware at start. - */ +/* Conditionally flush any pending message from firmware at start. */ static void bfa_ioc_msgflush(struct bfa_ioc *ioc) { @@ -1443,9 +1359,6 @@ bfa_ioc_msgflush(struct bfa_ioc *ioc) writel(1, ioc->ioc_regs.lpu_mbox_cmd); } -/** - * @img ioc_init_logic.jpg - */ static void bfa_ioc_hwinit(struct bfa_ioc *ioc, bool force) { @@ -1603,10 +1516,7 @@ bfa_ioc_hb_stop(struct bfa_ioc *ioc) del_timer(&ioc->hb_timer); } -/** - * @brief - * Initiate a full firmware download. - */ +/* Initiate a full firmware download. */ static void bfa_ioc_download_fw(struct bfa_ioc *ioc, u32 boot_type, u32 boot_env) @@ -1672,9 +1582,7 @@ bfa_ioc_reset(struct bfa_ioc *ioc, bool force) bfa_ioc_hwinit(ioc, force); } -/** - * BFA ioc enable reply by firmware - */ +/* BFA ioc enable reply by firmware */ static void bfa_ioc_enable_reply(struct bfa_ioc *ioc, enum bfa_mode port_mode, u8 cap_bm) @@ -1686,10 +1594,7 @@ bfa_ioc_enable_reply(struct bfa_ioc *ioc, enum bfa_mode port_mode, bfa_fsm_send_event(iocpf, IOCPF_E_FWRSP_ENABLE); } -/** - * @brief - * Update BFA configuration from firmware configuration. - */ +/* Update BFA configuration from firmware configuration. */ static void bfa_ioc_getattr_reply(struct bfa_ioc *ioc) { @@ -1702,9 +1607,7 @@ bfa_ioc_getattr_reply(struct bfa_ioc *ioc) bfa_fsm_send_event(ioc, IOC_E_FWRSP_GETATTR); } -/** - * Attach time initialization of mbox logic. - */ +/* Attach time initialization of mbox logic. */ static void bfa_ioc_mbox_attach(struct bfa_ioc *ioc) { @@ -1718,9 +1621,7 @@ bfa_ioc_mbox_attach(struct bfa_ioc *ioc) } } -/** - * Mbox poll timer -- restarts any pending mailbox requests. - */ +/* Mbox poll timer -- restarts any pending mailbox requests. */ static void bfa_ioc_mbox_poll(struct bfa_ioc *ioc) { @@ -1760,9 +1661,7 @@ bfa_ioc_mbox_poll(struct bfa_ioc *ioc) } } -/** - * Cleanup any pending requests. - */ +/* Cleanup any pending requests. 
*/ static void bfa_ioc_mbox_flush(struct bfa_ioc *ioc) { @@ -1774,12 +1673,12 @@ bfa_ioc_mbox_flush(struct bfa_ioc *ioc) } /** - * Read data from SMEM to host through PCI memmap + * bfa_nw_ioc_smem_read - Read data from SMEM to host through PCI memmap * - * @param[in] ioc memory for IOC - * @param[in] tbuf app memory to store data from smem - * @param[in] soff smem offset - * @param[in] sz size of smem in bytes + * @ioc: memory for IOC + * @tbuf: app memory to store data from smem + * @soff: smem offset + * @sz: size of smem in bytes */ static int bfa_nw_ioc_smem_read(struct bfa_ioc *ioc, void *tbuf, u32 soff, u32 sz) @@ -1826,9 +1725,7 @@ bfa_nw_ioc_smem_read(struct bfa_ioc *ioc, void *tbuf, u32 soff, u32 sz) return 0; } -/** - * Retrieve saved firmware trace from a prior IOC failure. - */ +/* Retrieve saved firmware trace from a prior IOC failure. */ int bfa_nw_ioc_debug_fwtrc(struct bfa_ioc *ioc, void *trcdata, int *trclen) { @@ -1844,9 +1741,7 @@ bfa_nw_ioc_debug_fwtrc(struct bfa_ioc *ioc, void *trcdata, int *trclen) return status; } -/** - * Save firmware trace if configured. - */ +/* Save firmware trace if configured. */ static void bfa_nw_ioc_debug_save_ftrc(struct bfa_ioc *ioc) { @@ -1861,9 +1756,7 @@ bfa_nw_ioc_debug_save_ftrc(struct bfa_ioc *ioc) } } -/** - * Retrieve saved firmware trace from a prior IOC failure. - */ +/* Retrieve saved firmware trace from a prior IOC failure. */ int bfa_nw_ioc_debug_fwsave(struct bfa_ioc *ioc, void *trcdata, int *trclen) { @@ -1892,9 +1785,7 @@ bfa_ioc_fail_notify(struct bfa_ioc *ioc) bfa_nw_ioc_debug_save_ftrc(ioc); } -/** - * IOCPF to IOC interface - */ +/* IOCPF to IOC interface */ static void bfa_ioc_pf_enabled(struct bfa_ioc *ioc) { @@ -1928,9 +1819,7 @@ bfa_ioc_pf_fwmismatch(struct bfa_ioc *ioc) ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE); } -/** - * IOC public - */ +/* IOC public */ static enum bfa_status bfa_ioc_pll_init(struct bfa_ioc *ioc) { @@ -1954,8 +1843,7 @@ bfa_ioc_pll_init(struct bfa_ioc *ioc) return BFA_STATUS_OK; } -/** - * Interface used by diag module to do firmware boot with memory test +/* Interface used by diag module to do firmware boot with memory test * as the entry vector. */ static void @@ -1983,9 +1871,7 @@ bfa_ioc_boot(struct bfa_ioc *ioc, enum bfi_fwboot_type boot_type, bfa_ioc_lpu_start(ioc); } -/** - * Enable/disable IOC failure auto recovery. - */ +/* Enable/disable IOC failure auto recovery. */ void bfa_nw_ioc_auto_recover(bool auto_recover) { @@ -2056,10 +1942,10 @@ bfa_ioc_isr(struct bfa_ioc *ioc, struct bfi_mbmsg *m) } /** - * IOC attach time initialization and setup. + * bfa_nw_ioc_attach - IOC attach time initialization and setup. * - * @param[in] ioc memory for IOC - * @param[in] bfa driver instance structure + * @ioc: memory for IOC + * @bfa: driver instance structure */ void bfa_nw_ioc_attach(struct bfa_ioc *ioc, void *bfa, struct bfa_ioc_cbfn *cbfn) @@ -2078,9 +1964,7 @@ bfa_nw_ioc_attach(struct bfa_ioc *ioc, void *bfa, struct bfa_ioc_cbfn *cbfn) bfa_fsm_send_event(ioc, IOC_E_RESET); } -/** - * Driver detach time IOC cleanup. - */ +/* Driver detach time IOC cleanup. */ void bfa_nw_ioc_detach(struct bfa_ioc *ioc) { @@ -2091,9 +1975,9 @@ bfa_nw_ioc_detach(struct bfa_ioc *ioc) } /** - * Setup IOC PCI properties. + * bfa_nw_ioc_pci_init - Setup IOC PCI properties. 
* - * @param[in] pcidev PCI device information for this IOC + * @pcidev: PCI device information for this IOC */ void bfa_nw_ioc_pci_init(struct bfa_ioc *ioc, struct bfa_pcidev *pcidev, @@ -2160,10 +2044,10 @@ bfa_nw_ioc_pci_init(struct bfa_ioc *ioc, struct bfa_pcidev *pcidev, } /** - * Initialize IOC dma memory + * bfa_nw_ioc_mem_claim - Initialize IOC dma memory * - * @param[in] dm_kva kernel virtual address of IOC dma memory - * @param[in] dm_pa physical address of IOC dma memory + * @dm_kva: kernel virtual address of IOC dma memory + * @dm_pa: physical address of IOC dma memory */ void bfa_nw_ioc_mem_claim(struct bfa_ioc *ioc, u8 *dm_kva, u64 dm_pa) @@ -2176,9 +2060,7 @@ bfa_nw_ioc_mem_claim(struct bfa_ioc *ioc, u8 *dm_kva, u64 dm_pa) ioc->attr = (struct bfi_ioc_attr *) dm_kva; } -/** - * Return size of dma memory required. - */ +/* Return size of dma memory required. */ u32 bfa_nw_ioc_meminfo(void) { @@ -2201,9 +2083,7 @@ bfa_nw_ioc_disable(struct bfa_ioc *ioc) bfa_fsm_send_event(ioc, IOC_E_DISABLE); } -/** - * Initialize memory for saving firmware trace. - */ +/* Initialize memory for saving firmware trace. */ void bfa_nw_ioc_debug_memclaim(struct bfa_ioc *ioc, void *dbg_fwsave) { @@ -2217,9 +2097,7 @@ bfa_ioc_smem_pgnum(struct bfa_ioc *ioc, u32 fmaddr) return PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, fmaddr); } -/** - * Register mailbox message handler function, to be called by common modules - */ +/* Register mailbox message handler function, to be called by common modules */ void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc, bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg) @@ -2231,11 +2109,12 @@ bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc, } /** - * Queue a mailbox command request to firmware. Waits if mailbox is busy. - * Responsibility of caller to serialize + * bfa_nw_ioc_mbox_queue - Queue a mailbox command request to firmware. * - * @param[in] ioc IOC instance - * @param[i] cmd Mailbox command + * @ioc: IOC instance + * @cmd: Mailbox command + * + * Waits if mailbox is busy. Responsibility of caller to serialize */ bool bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd, @@ -2272,9 +2151,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd, return false; } -/** - * Handle mailbox interrupts - */ +/* Handle mailbox interrupts */ void bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc) { @@ -2314,9 +2191,7 @@ bfa_nw_ioc_error_isr(struct bfa_ioc *ioc) bfa_fsm_send_event(ioc, IOC_E_HWERROR); } -/** - * return true if IOC is disabled - */ +/* return true if IOC is disabled */ bool bfa_nw_ioc_is_disabled(struct bfa_ioc *ioc) { @@ -2324,17 +2199,14 @@ bfa_nw_ioc_is_disabled(struct bfa_ioc *ioc) bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabled); } -/** - * return true if IOC is operational - */ +/* return true if IOC is operational */ bool bfa_nw_ioc_is_operational(struct bfa_ioc *ioc) { return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_op); } -/** - * Add to IOC heartbeat failure notification queue. To be used by common +/* Add to IOC heartbeat failure notification queue. To be used by common * modules such as cee, port, diag. */ void @@ -2518,9 +2390,7 @@ bfa_nw_ioc_get_attr(struct bfa_ioc *ioc, struct bfa_ioc_attr *ioc_attr) bfa_ioc_get_pci_chip_rev(ioc, ioc_attr->pci_attr.chip_rev); } -/** - * WWN public - */ +/* WWN public */ static u64 bfa_ioc_get_pwwn(struct bfa_ioc *ioc) { @@ -2533,9 +2403,7 @@ bfa_nw_ioc_get_mac(struct bfa_ioc *ioc) return ioc->attr->mac; } -/** - * Firmware failure detected. Start recovery actions. 
- */ +/* Firmware failure detected. Start recovery actions. */ static void bfa_ioc_recover(struct bfa_ioc *ioc) { @@ -2545,10 +2413,7 @@ bfa_ioc_recover(struct bfa_ioc *ioc) bfa_fsm_send_event(ioc, IOC_E_HBFAIL); } -/** - * @dg hal_iocpf_pvt BFA IOC PF private functions - * @{ - */ +/* BFA IOC PF private functions */ static void bfa_iocpf_enable(struct bfa_ioc *ioc) @@ -2669,8 +2534,6 @@ bfa_flash_notify(void *cbarg, enum bfa_ioc_event event) /* * Send flash write request. - * - * @param[in] cbarg - callback argument */ static void bfa_flash_write_send(struct bfa_flash *flash) @@ -2699,10 +2562,10 @@ bfa_flash_write_send(struct bfa_flash *flash) flash->offset += len; } -/* - * Send flash read request. +/** + * bfa_flash_read_send - Send flash read request. * - * @param[in] cbarg - callback argument + * @cbarg: callback argument */ static void bfa_flash_read_send(void *cbarg) @@ -2724,11 +2587,11 @@ bfa_flash_read_send(void *cbarg) bfa_nw_ioc_mbox_queue(flash->ioc, &flash->mb, NULL, NULL); } -/* - * Process flash response messages upon receiving interrupts. +/** + * bfa_flash_intr - Process flash response messages upon receiving interrupts. * - * @param[in] flasharg - flash structure - * @param[in] msg - message structure + * @flasharg: flash structure + * @msg: message structure */ static void bfa_flash_intr(void *flasharg, struct bfi_mbmsg *msg) @@ -2821,12 +2684,12 @@ bfa_nw_flash_meminfo(void) return roundup(BFA_FLASH_DMA_BUF_SZ, BFA_DMA_ALIGN_SZ); } -/* - * Flash attach API. +/** + * bfa_nw_flash_attach - Flash attach API. * - * @param[in] flash - flash structure - * @param[in] ioc - ioc structure - * @param[in] dev - device structure + * @flash: flash structure + * @ioc: ioc structure + * @dev: device structure */ void bfa_nw_flash_attach(struct bfa_flash *flash, struct bfa_ioc *ioc, void *dev) @@ -2842,12 +2705,12 @@ bfa_nw_flash_attach(struct bfa_flash *flash, struct bfa_ioc *ioc, void *dev) list_add_tail(&flash->ioc_notify.qe, &flash->ioc->notify_q); } -/* - * Claim memory for flash +/** + * bfa_nw_flash_memclaim - Claim memory for flash * - * @param[in] flash - flash structure - * @param[in] dm_kva - pointer to virtual memory address - * @param[in] dm_pa - physical memory address + * @flash: flash structure + * @dm_kva: pointer to virtual memory address + * @dm_pa: physical memory address */ void bfa_nw_flash_memclaim(struct bfa_flash *flash, u8 *dm_kva, u64 dm_pa) @@ -2859,13 +2722,13 @@ bfa_nw_flash_memclaim(struct bfa_flash *flash, u8 *dm_kva, u64 dm_pa) dm_pa += roundup(BFA_FLASH_DMA_BUF_SZ, BFA_DMA_ALIGN_SZ); } -/* - * Get flash attribute. +/** + * bfa_nw_flash_get_attr - Get flash attribute. * - * @param[in] flash - flash structure - * @param[in] attr - flash attribute structure - * @param[in] cbfn - callback function - * @param[in] cbarg - callback argument + * @flash: flash structure + * @attr: flash attribute structure + * @cbfn: callback function + * @cbarg: callback argument * * Return status. */ @@ -2895,17 +2758,17 @@ bfa_nw_flash_get_attr(struct bfa_flash *flash, struct bfa_flash_attr *attr, return BFA_STATUS_OK; } -/* - * Update flash partition. +/** + * bfa_nw_flash_update_part - Update flash partition. 
* - * @param[in] flash - flash structure - * @param[in] type - flash partition type - * @param[in] instance - flash partition instance - * @param[in] buf - update data buffer - * @param[in] len - data buffer length - * @param[in] offset - offset relative to the partition starting address - * @param[in] cbfn - callback function - * @param[in] cbarg - callback argument + * @flash: flash structure + * @type: flash partition type + * @instance: flash partition instance + * @buf: update data buffer + * @len: data buffer length + * @offset: offset relative to the partition starting address + * @cbfn: callback function + * @cbarg: callback argument * * Return status. */ @@ -2944,17 +2807,17 @@ bfa_nw_flash_update_part(struct bfa_flash *flash, u32 type, u8 instance, return BFA_STATUS_OK; } -/* - * Read flash partition. +/** + * bfa_nw_flash_read_part - Read flash partition. * - * @param[in] flash - flash structure - * @param[in] type - flash partition type - * @param[in] instance - flash partition instance - * @param[in] buf - read data buffer - * @param[in] len - data buffer length - * @param[in] offset - offset relative to the partition starting address - * @param[in] cbfn - callback function - * @param[in] cbarg - callback argument + * @flash: flash structure + * @type: flash partition type + * @instance: flash partition instance + * @buf: read data buffer + * @len: data buffer length + * @offset: offset relative to the partition starting address + * @cbfn: callback function + * @cbarg: callback argument * * Return status. */ diff --git a/drivers/net/ethernet/brocade/bna/bfa_ioc.h b/drivers/net/ethernet/brocade/bna/bfa_ioc.h index 3b4460fdc148..63a85e555df8 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_ioc.h +++ b/drivers/net/ethernet/brocade/bna/bfa_ioc.h @@ -30,9 +30,7 @@ #define BNA_DBG_FWTRC_LEN (BFI_IOC_TRC_ENTS * BFI_IOC_TRC_ENT_SZ + \ BFI_IOC_TRC_HDR_SZ) -/** - * PCI device information required by IOC - */ +/* PCI device information required by IOC */ struct bfa_pcidev { int pci_slot; u8 pci_func; @@ -41,8 +39,7 @@ struct bfa_pcidev { void __iomem *pci_bar_kva; }; -/** - * Structure used to remember the DMA-able memory block's KVA and Physical +/* Structure used to remember the DMA-able memory block's KVA and Physical * Address */ struct bfa_dma { @@ -52,15 +49,11 @@ struct bfa_dma { #define BFA_DMA_ALIGN_SZ 256 -/** - * smem size for Crossbow and Catapult - */ +/* smem size for Crossbow and Catapult */ #define BFI_SMEM_CB_SIZE 0x200000U /* ! 2MB for crossbow */ #define BFI_SMEM_CT_SIZE 0x280000U /* ! 2.5MB for catapult */ -/** - * @brief BFA dma address assignment macro. (big endian format) - */ +/* BFA dma address assignment macro. 
(big endian format) */ #define bfa_dma_be_addr_set(dma_addr, pa) \ __bfa_dma_be_addr_set(&dma_addr, (u64)pa) static inline void @@ -108,9 +101,7 @@ struct bfa_ioc_regs { u32 smem_pg0; }; -/** - * IOC Mailbox structures - */ +/* IOC Mailbox structures */ typedef void (*bfa_mbox_cmd_cbfn_t)(void *cbarg); struct bfa_mbox_cmd { struct list_head qe; @@ -119,9 +110,7 @@ struct bfa_mbox_cmd { u32 msg[BFI_IOC_MSGSZ]; }; -/** - * IOC mailbox module - */ +/* IOC mailbox module */ typedef void (*bfa_ioc_mbox_mcfunc_t)(void *cbarg, struct bfi_mbmsg *m); struct bfa_ioc_mbox_mod { struct list_head cmd_q; /*!< pending mbox queue */ @@ -132,9 +121,7 @@ struct bfa_ioc_mbox_mod { } mbhdlr[BFI_MC_MAX]; }; -/** - * IOC callback function interfaces - */ +/* IOC callback function interfaces */ typedef void (*bfa_ioc_enable_cbfn_t)(void *bfa, enum bfa_status status); typedef void (*bfa_ioc_disable_cbfn_t)(void *bfa); typedef void (*bfa_ioc_hbfail_cbfn_t)(void *bfa); @@ -146,9 +133,7 @@ struct bfa_ioc_cbfn { bfa_ioc_reset_cbfn_t reset_cbfn; }; -/** - * IOC event notification mechanism. - */ +/* IOC event notification mechanism. */ enum bfa_ioc_event { BFA_IOC_E_ENABLED = 1, BFA_IOC_E_DISABLED = 2, @@ -163,9 +148,7 @@ struct bfa_ioc_notify { void *cbarg; }; -/** - * Initialize a IOC event notification structure - */ +/* Initialize a IOC event notification structure */ #define bfa_ioc_notify_init(__notify, __cbfn, __cbarg) do { \ (__notify)->cbfn = (__cbfn); \ (__notify)->cbarg = (__cbarg); \ @@ -261,9 +244,7 @@ struct bfa_ioc_hwif { #define BFA_IOC_FLASH_OFFSET_IN_CHUNK(off) (off % BFI_FLASH_CHUNK_SZ_WORDS) #define BFA_IOC_FLASH_CHUNK_ADDR(chunkno) (chunkno * BFI_FLASH_CHUNK_SZ_WORDS) -/** - * IOC mailbox interface - */ +/* IOC mailbox interface */ bool bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd, bfa_mbox_cmd_cbfn_t cbfn, void *cbarg); @@ -271,9 +252,7 @@ void bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc); void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc, bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg); -/** - * IOC interfaces - */ +/* IOC interfaces */ #define bfa_ioc_pll_init_asic(__ioc) \ ((__ioc)->ioc_hwif->ioc_pll_init((__ioc)->pcidev.pci_bar_kva, \ diff --git a/drivers/net/ethernet/brocade/bna/bfa_ioc_ct.c b/drivers/net/ethernet/brocade/bna/bfa_ioc_ct.c index b6b036a143ae..5df0b0c68c5a 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_ioc_ct.c +++ b/drivers/net/ethernet/brocade/bna/bfa_ioc_ct.c @@ -87,9 +87,7 @@ static const struct bfa_ioc_hwif nw_hwif_ct2 = { .ioc_sync_complete = bfa_ioc_ct_sync_complete, }; -/** - * Called from bfa_ioc_attach() to map asic specific calls. - */ +/* Called from bfa_ioc_attach() to map asic specific calls. */ void bfa_nw_ioc_set_ct_hwif(struct bfa_ioc *ioc) { @@ -102,9 +100,7 @@ bfa_nw_ioc_set_ct2_hwif(struct bfa_ioc *ioc) ioc->ioc_hwif = &nw_hwif_ct2; } -/** - * Return true if firmware of current driver matches the running firmware. - */ +/* Return true if firmware of current driver matches the running firmware. */ static bool bfa_ioc_ct_firmware_lock(struct bfa_ioc *ioc) { @@ -182,9 +178,7 @@ bfa_ioc_ct_firmware_unlock(struct bfa_ioc *ioc) bfa_nw_ioc_sem_release(ioc->ioc_regs.ioc_usage_sem_reg); } -/** - * Notify other functions on HB failure. - */ +/* Notify other functions on HB failure. 
*/ static void bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc) { @@ -195,9 +189,7 @@ bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc) readl(ioc->ioc_regs.alt_ll_halt); } -/** - * Host to LPU mailbox message addresses - */ +/* Host to LPU mailbox message addresses */ static const struct { u32 hfn_mbox; u32 lpu_mbox; @@ -209,9 +201,7 @@ static const struct { { HOSTFN3_LPU_MBOX0_8, LPU_HOSTFN3_MBOX0_8, HOST_PAGE_NUM_FN3 } }; -/** - * Host <-> LPU mailbox command/status registers - port 0 - */ +/* Host <-> LPU mailbox command/status registers - port 0 */ static const struct { u32 hfn; u32 lpu; @@ -222,9 +212,7 @@ static const struct { { HOSTFN3_LPU0_CMD_STAT, LPU0_HOSTFN3_CMD_STAT } }; -/** - * Host <-> LPU mailbox command/status registers - port 1 - */ +/* Host <-> LPU mailbox command/status registers - port 1 */ static const struct { u32 hfn; u32 lpu; @@ -368,9 +356,7 @@ bfa_ioc_ct2_reg_init(struct bfa_ioc *ioc) ioc->ioc_regs.err_set = rb + ERR_SET_REG; } -/** - * Initialize IOC to port mapping. - */ +/* Initialize IOC to port mapping. */ #define FNC_PERS_FN_SHIFT(__fn) ((__fn) * 8) static void @@ -398,9 +384,7 @@ bfa_ioc_ct2_map_port(struct bfa_ioc *ioc) ioc->port_id = ((r32 & __FC_LL_PORT_MAP__MK) >> __FC_LL_PORT_MAP__SH); } -/** - * Set interrupt mode for a function: INTX or MSIX - */ +/* Set interrupt mode for a function: INTX or MSIX */ static void bfa_ioc_ct_isr_mode_set(struct bfa_ioc *ioc, bool msix) { @@ -443,9 +427,7 @@ bfa_ioc_ct2_lpu_read_stat(struct bfa_ioc *ioc) return false; } -/** - * MSI-X resource allocation for 1860 with no asic block - */ +/* MSI-X resource allocation for 1860 with no asic block */ #define HOSTFN_MSIX_DEFAULT 64 #define HOSTFN_MSIX_VT_INDEX_MBOX_ERR 0x30138 #define HOSTFN_MSIX_VT_OFST_NUMVT 0x3013c @@ -473,9 +455,7 @@ bfa_nw_ioc_ct2_poweron(struct bfa_ioc *ioc) rb + HOSTFN_MSIX_VT_INDEX_MBOX_ERR); } -/** - * Cleanup hw semaphore and usecnt registers - */ +/* Cleanup hw semaphore and usecnt registers */ static void bfa_ioc_ct_ownership_reset(struct bfa_ioc *ioc) { @@ -492,9 +472,7 @@ bfa_ioc_ct_ownership_reset(struct bfa_ioc *ioc) bfa_nw_ioc_hw_sem_release(ioc); } -/** - * Synchronized IOC failure processing routines - */ +/* Synchronized IOC failure processing routines */ static bool bfa_ioc_ct_sync_start(struct bfa_ioc *ioc) { @@ -518,9 +496,7 @@ bfa_ioc_ct_sync_start(struct bfa_ioc *ioc) return bfa_ioc_ct_sync_complete(ioc); } -/** - * Synchronized IOC failure processing routines - */ +/* Synchronized IOC failure processing routines */ static void bfa_ioc_ct_sync_join(struct bfa_ioc *ioc) { diff --git a/drivers/net/ethernet/brocade/bna/bfa_msgq.c b/drivers/net/ethernet/brocade/bna/bfa_msgq.c index dd36427f4752..55067d0d25cf 100644 --- a/drivers/net/ethernet/brocade/bna/bfa_msgq.c +++ b/drivers/net/ethernet/brocade/bna/bfa_msgq.c @@ -16,9 +16,7 @@ * www.brocade.com */ -/** - * @file bfa_msgq.c MSGQ module source file. - */ +/* MSGQ module source file. 
*/ #include "bfi.h" #include "bfa_msgq.h" diff --git a/drivers/net/ethernet/brocade/bna/bfi.h b/drivers/net/ethernet/brocade/bna/bfi.h index 0d9df695397a..1f24c23dc786 100644 --- a/drivers/net/ethernet/brocade/bna/bfi.h +++ b/drivers/net/ethernet/brocade/bna/bfi.h @@ -22,15 +22,11 @@ #pragma pack(1) -/** - * BFI FW image type - */ +/* BFI FW image type */ #define BFI_FLASH_CHUNK_SZ 256 /*!< Flash chunk size */ #define BFI_FLASH_CHUNK_SZ_WORDS (BFI_FLASH_CHUNK_SZ/sizeof(u32)) -/** - * Msg header common to all msgs - */ +/* Msg header common to all msgs */ struct bfi_mhdr { u8 msg_class; /*!< @ref enum bfi_mclass */ u8 msg_id; /*!< msg opcode with in the class */ @@ -65,17 +61,14 @@ struct bfi_mhdr { #define BFI_I2H_OPCODE_BASE 128 #define BFA_I2HM(_x) ((_x) + BFI_I2H_OPCODE_BASE) -/** - **************************************************************************** +/**************************************************************************** * * Scatter Gather Element and Page definition * **************************************************************************** */ -/** - * DMA addresses - */ +/* DMA addresses */ union bfi_addr_u { struct { u32 addr_lo; @@ -83,9 +76,7 @@ union bfi_addr_u { } a32; }; -/** - * Generic DMA addr-len pair. - */ +/* Generic DMA addr-len pair. */ struct bfi_alen { union bfi_addr_u al_addr; /* DMA addr of buffer */ u32 al_len; /* length of buffer */ @@ -98,26 +89,20 @@ struct bfi_alen { #define BFI_LMSG_PL_WSZ \ ((BFI_LMSG_SZ - sizeof(struct bfi_mhdr)) / 4) -/** - * Mailbox message structure - */ +/* Mailbox message structure */ #define BFI_MBMSG_SZ 7 struct bfi_mbmsg { struct bfi_mhdr mh; u32 pl[BFI_MBMSG_SZ]; }; -/** - * Supported PCI function class codes (personality) - */ +/* Supported PCI function class codes (personality) */ enum bfi_pcifn_class { BFI_PCIFN_CLASS_FC = 0x0c04, BFI_PCIFN_CLASS_ETH = 0x0200, }; -/** - * Message Classes - */ +/* Message Classes */ enum bfi_mclass { BFI_MC_IOC = 1, /*!< IO Controller (IOC) */ BFI_MC_DIAG = 2, /*!< Diagnostic Msgs */ @@ -159,15 +144,12 @@ enum bfi_mclass { #define BFI_FWBOOT_ENV_OS 0 -/** - *---------------------------------------------------------------------- +/*---------------------------------------------------------------------- * IOC *---------------------------------------------------------------------- */ -/** - * Different asic generations - */ +/* Different asic generations */ enum bfi_asic_gen { BFI_ASIC_GEN_CB = 1, BFI_ASIC_GEN_CT = 2, @@ -196,9 +178,7 @@ enum bfi_ioc_i2h_msgs { BFI_IOC_I2H_HBEAT = BFA_I2HM(4), }; -/** - * BFI_IOC_H2I_GETATTR_REQ message - */ +/* BFI_IOC_H2I_GETATTR_REQ message */ struct bfi_ioc_getattr_req { struct bfi_mhdr mh; union bfi_addr_u attr_addr; @@ -231,30 +211,22 @@ struct bfi_ioc_attr { u32 card_type; /*!< card type */ }; -/** - * BFI_IOC_I2H_GETATTR_REPLY message - */ +/* BFI_IOC_I2H_GETATTR_REPLY message */ struct bfi_ioc_getattr_reply { struct bfi_mhdr mh; /*!< Common msg header */ u8 status; /*!< cfg reply status */ u8 rsvd[3]; }; -/** - * Firmware memory page offsets - */ +/* Firmware memory page offsets */ #define BFI_IOC_SMEM_PG0_CB (0x40) #define BFI_IOC_SMEM_PG0_CT (0x180) -/** - * Firmware statistic offset - */ +/* Firmware statistic offset */ #define BFI_IOC_FWSTATS_OFF (0x6B40) #define BFI_IOC_FWSTATS_SZ (4096) -/** - * Firmware trace offset - */ +/* Firmware trace offset */ #define BFI_IOC_TRC_OFF (0x4b00) #define BFI_IOC_TRC_ENTS 256 #define BFI_IOC_TRC_ENT_SZ 16 @@ -299,9 +271,7 @@ struct bfi_ioc_hbeat { u32 hb_count; /*!< current heart beat count */ }; -/** 
- * IOC hardware/firmware state - */ +/* IOC hardware/firmware state */ enum bfi_ioc_state { BFI_IOC_UNINIT = 0, /*!< not initialized */ BFI_IOC_INITING = 1, /*!< h/w is being initialized */ @@ -345,9 +315,7 @@ enum { ((__adap_type) & (BFI_ADAPTER_TTV | BFI_ADAPTER_PROTO | \ BFI_ADAPTER_UNSUPP)) -/** - * BFI_IOC_H2I_ENABLE_REQ & BFI_IOC_H2I_DISABLE_REQ messages - */ +/* BFI_IOC_H2I_ENABLE_REQ & BFI_IOC_H2I_DISABLE_REQ messages */ struct bfi_ioc_ctrl_req { struct bfi_mhdr mh; u16 clscode; @@ -355,9 +323,7 @@ struct bfi_ioc_ctrl_req { u32 tv_sec; }; -/** - * BFI_IOC_I2H_ENABLE_REPLY & BFI_IOC_I2H_DISABLE_REPLY messages - */ +/* BFI_IOC_I2H_ENABLE_REPLY & BFI_IOC_I2H_DISABLE_REPLY messages */ struct bfi_ioc_ctrl_reply { struct bfi_mhdr mh; /*!< Common msg header */ u8 status; /*!< enable/disable status */ @@ -367,9 +333,7 @@ struct bfi_ioc_ctrl_reply { }; #define BFI_IOC_MSGSZ 8 -/** - * H2I Messages - */ +/* H2I Messages */ union bfi_ioc_h2i_msg_u { struct bfi_mhdr mh; struct bfi_ioc_ctrl_req enable_req; @@ -378,17 +342,14 @@ union bfi_ioc_h2i_msg_u { u32 mboxmsg[BFI_IOC_MSGSZ]; }; -/** - * I2H Messages - */ +/* I2H Messages */ union bfi_ioc_i2h_msg_u { struct bfi_mhdr mh; struct bfi_ioc_ctrl_reply fw_event; u32 mboxmsg[BFI_IOC_MSGSZ]; }; -/** - *---------------------------------------------------------------------- +/*---------------------------------------------------------------------- * MSGQ *---------------------------------------------------------------------- */ diff --git a/drivers/net/ethernet/brocade/bna/bfi_cna.h b/drivers/net/ethernet/brocade/bna/bfi_cna.h index 4eecabea397b..6704a4392973 100644 --- a/drivers/net/ethernet/brocade/bna/bfi_cna.h +++ b/drivers/net/ethernet/brocade/bna/bfi_cna.h @@ -37,18 +37,14 @@ enum bfi_port_i2h { BFI_PORT_I2H_CLEAR_STATS_RSP = BFA_I2HM(4), }; -/** - * Generic REQ type - */ +/* Generic REQ type */ struct bfi_port_generic_req { struct bfi_mhdr mh; /*!< msg header */ u32 msgtag; /*!< msgtag for reply */ u32 rsvd; }; -/** - * Generic RSP type - */ +/* Generic RSP type */ struct bfi_port_generic_rsp { struct bfi_mhdr mh; /*!< common msg header */ u8 status; /*!< port enable status */ @@ -56,44 +52,12 @@ struct bfi_port_generic_rsp { u32 msgtag; /*!< msgtag for reply */ }; -/** - * @todo - * BFI_PORT_H2I_ENABLE_REQ - */ - -/** - * @todo - * BFI_PORT_I2H_ENABLE_RSP - */ - -/** - * BFI_PORT_H2I_DISABLE_REQ - */ - -/** - * BFI_PORT_I2H_DISABLE_RSP - */ - -/** - * BFI_PORT_H2I_GET_STATS_REQ - */ +/* BFI_PORT_H2I_GET_STATS_REQ */ struct bfi_port_get_stats_req { struct bfi_mhdr mh; /*!< common msg header */ union bfi_addr_u dma_addr; }; -/** - * BFI_PORT_I2H_GET_STATS_RSP - */ - -/** - * BFI_PORT_H2I_CLEAR_STATS_REQ - */ - -/** - * BFI_PORT_I2H_CLEAR_STATS_RSP - */ - union bfi_port_h2i_msg_u { struct bfi_mhdr mh; struct bfi_port_generic_req enable_req; diff --git a/drivers/net/ethernet/brocade/bna/bfi_enet.h b/drivers/net/ethernet/brocade/bna/bfi_enet.h index a90f1cf46b41..eef6e1f8aecc 100644 --- a/drivers/net/ethernet/brocade/bna/bfi_enet.h +++ b/drivers/net/ethernet/brocade/bna/bfi_enet.h @@ -16,12 +16,9 @@ * www.brocade.com */ -/** - * @file bfi_enet.h BNA Hardware and Firmware Interface - */ +/* BNA Hardware and Firmware Interface */ -/** - * Skipping statistics collection to avoid clutter. +/* Skipping statistics collection to avoid clutter. * Command is no longer needed: * MTU * TxQ Stop @@ -64,9 +61,7 @@ union bfi_addr_be_u { } a32; }; -/** - * T X Q U E U E D E F I N E S - */ +/* T X Q U E U E D E F I N E S */ /* TxQ Vector (a.k.a. 
Tx-Buffer Descriptor) */ /* TxQ Entry Opcodes */ #define BFI_ENET_TXQ_WI_SEND (0x402) /* Single Frame Transmission */ @@ -106,10 +101,7 @@ struct bfi_enet_txq_wi_vector { /* Tx Buffer Descriptor */ union bfi_addr_be_u addr; }; -/** - * TxQ Entry Structure - * - */ +/* TxQ Entry Structure */ struct bfi_enet_txq_entry { union { struct bfi_enet_txq_wi_base base; @@ -124,16 +116,12 @@ struct bfi_enet_txq_entry { #define BFI_ENET_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \ (((_hdr_size) << 10) | ((_offset) & 0x3FF)) -/** - * R X Q U E U E D E F I N E S - */ +/* R X Q U E U E D E F I N E S */ struct bfi_enet_rxq_entry { union bfi_addr_be_u rx_buffer; }; -/** - * R X C O M P L E T I O N Q U E U E D E F I N E S - */ +/* R X C O M P L E T I O N Q U E U E D E F I N E S */ /* CQ Entry Flags */ #define BFI_ENET_CQ_EF_MAC_ERROR (1 << 0) #define BFI_ENET_CQ_EF_FCS_ERROR (1 << 1) @@ -174,9 +162,7 @@ struct bfi_enet_cq_entry { u8 rxq_id; }; -/** - * E N E T C O N T R O L P A T H C O M M A N D S - */ +/* E N E T C O N T R O L P A T H C O M M A N D S */ struct bfi_enet_q { union bfi_addr_u pg_tbl; union bfi_addr_u first_entry; @@ -222,9 +208,7 @@ struct bfi_enet_ib { u16 rsvd; }; -/** - * ENET command messages - */ +/* ENET command messages */ enum bfi_enet_h2i_msgs { /* Rx Commands */ BFI_ENET_H2I_RX_CFG_SET_REQ = 1, @@ -350,9 +334,7 @@ enum bfi_enet_i2h_msgs { BFI_ENET_I2H_BW_UPDATE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 4), }; -/** - * The following error codes can be returned by the enet commands - */ +/* The following error codes can be returned by the enet commands */ enum bfi_enet_err { BFI_ENET_CMD_OK = 0, BFI_ENET_CMD_FAIL = 1, @@ -364,8 +346,7 @@ enum bfi_enet_err { BFI_ENET_CMD_PORT_DISABLED = 7, /* !< port in disabled state */ }; -/** - * Generic Request +/* Generic Request * * bfi_enet_req is used by: * BFI_ENET_H2I_RX_CFG_CLR_REQ @@ -375,8 +356,7 @@ struct bfi_enet_req { struct bfi_msgq_mhdr mh; }; -/** - * Enable/Disable Request +/* Enable/Disable Request * * bfi_enet_enable_req is used by: * BFI_ENET_H2I_RSS_ENABLE_REQ (enet_id must be zero) @@ -391,9 +371,7 @@ struct bfi_enet_enable_req { u8 rsvd[3]; }; -/** - * Generic Response - */ +/* Generic Response */ struct bfi_enet_rsp { struct bfi_msgq_mhdr mh; u8 error; /*!< if error see cmd_offset */ @@ -401,20 +379,16 @@ struct bfi_enet_rsp { u16 cmd_offset; /*!< offset to invalid parameter */ }; -/** - * GLOBAL CONFIGURATION - */ +/* GLOBAL CONFIGURATION */ -/** - * bfi_enet_attr_req is used by: +/* bfi_enet_attr_req is used by: * BFI_ENET_H2I_GET_ATTR_REQ */ struct bfi_enet_attr_req { struct bfi_msgq_mhdr mh; }; -/** - * bfi_enet_attr_rsp is used by: +/* bfi_enet_attr_rsp is used by: * BFI_ENET_I2H_GET_ATTR_RSP */ struct bfi_enet_attr_rsp { @@ -427,8 +401,7 @@ struct bfi_enet_attr_rsp { u32 rit_size; }; -/** - * Tx Configuration +/* Tx Configuration * * bfi_enet_tx_cfg is used by: * BFI_ENET_H2I_TX_CFG_SET_REQ @@ -477,8 +450,7 @@ struct bfi_enet_tx_cfg_rsp { } q_handles[BFI_ENET_TXQ_PRIO_MAX]; }; -/** - * Rx Configuration +/* Rx Configuration * * bfi_enet_rx_cfg is used by: * BFI_ENET_H2I_RX_CFG_SET_REQ @@ -553,8 +525,7 @@ struct bfi_enet_rx_cfg_rsp { } q_handles[BFI_ENET_RX_QSET_MAX]; }; -/** - * RIT +/* RIT * * bfi_enet_rit_req is used by: * BFI_ENET_H2I_RIT_CFG_REQ @@ -566,8 +537,7 @@ struct bfi_enet_rit_req { u8 table[BFI_ENET_RSS_RIT_MAX]; }; -/** - * RSS +/* RSS * * bfi_enet_rss_cfg_req is used by: * BFI_ENET_H2I_RSS_CFG_REQ @@ -591,8 +561,7 @@ struct bfi_enet_rss_cfg_req { struct bfi_enet_rss_cfg cfg; }; -/** - * MAC Unicast +/* MAC Unicast 
* * bfi_enet_rx_vlan_req is used by: * BFI_ENET_H2I_MAC_UCAST_SET_REQ @@ -606,17 +575,14 @@ struct bfi_enet_ucast_req { u8 rsvd[2]; }; -/** - * MAC Unicast + VLAN - */ +/* MAC Unicast + VLAN */ struct bfi_enet_mac_n_vlan_req { struct bfi_msgq_mhdr mh; u16 vlan_id; mac_t mac_addr; }; -/** - * MAC Multicast +/* MAC Multicast * * bfi_enet_mac_mfilter_add_req is used by: * BFI_ENET_H2I_MAC_MCAST_ADD_REQ @@ -627,8 +593,7 @@ struct bfi_enet_mcast_add_req { u8 rsvd[2]; }; -/** - * bfi_enet_mac_mfilter_add_rsp is used by: +/* bfi_enet_mac_mfilter_add_rsp is used by: * BFI_ENET_I2H_MAC_MCAST_ADD_RSP */ struct bfi_enet_mcast_add_rsp { @@ -640,8 +605,7 @@ struct bfi_enet_mcast_add_rsp { u8 rsvd1[2]; }; -/** - * bfi_enet_mac_mfilter_del_req is used by: +/* bfi_enet_mac_mfilter_del_req is used by: * BFI_ENET_H2I_MAC_MCAST_DEL_REQ */ struct bfi_enet_mcast_del_req { @@ -650,8 +614,7 @@ struct bfi_enet_mcast_del_req { u8 rsvd[2]; }; -/** - * VLAN +/* VLAN * * bfi_enet_rx_vlan_req is used by: * BFI_ENET_H2I_RX_VLAN_SET_REQ @@ -663,8 +626,7 @@ struct bfi_enet_rx_vlan_req { u32 bit_mask[BFI_ENET_VLAN_WORDS_MAX]; }; -/** - * PAUSE +/* PAUSE * * bfi_enet_set_pause_req is used by: * BFI_ENET_H2I_SET_PAUSE_REQ @@ -676,8 +638,7 @@ struct bfi_enet_set_pause_req { u8 rx_pause; /* 1 = enable; 0 = disable */ }; -/** - * DIAGNOSTICS +/* DIAGNOSTICS * * bfi_enet_diag_lb_req is used by: * BFI_ENET_H2I_DIAG_LOOPBACK @@ -689,16 +650,13 @@ struct bfi_enet_diag_lb_req { u8 enable; /* 1 = enable; 0 = disable */ }; -/** - * enum for Loopback opmodes - */ +/* enum for Loopback opmodes */ enum { BFI_ENET_DIAG_LB_OPMODE_EXT = 0, BFI_ENET_DIAG_LB_OPMODE_CBL = 1, }; -/** - * STATISTICS +/* STATISTICS * * bfi_enet_stats_req is used by: * BFI_ENET_H2I_STATS_GET_REQ @@ -713,9 +671,7 @@ struct bfi_enet_stats_req { union bfi_addr_u host_buffer; }; -/** - * defines for "stats_mask" above. - */ +/* defines for "stats_mask" above. 
*/ #define BFI_ENET_STATS_MAC (1 << 0) /* !< MAC Statistics */ #define BFI_ENET_STATS_BPC (1 << 1) /* !< Pause Stats from BPC */ #define BFI_ENET_STATS_RAD (1 << 2) /* !< Rx Admission Statistics */ @@ -881,8 +837,7 @@ struct bfi_enet_stats_mac { u64 tx_fragments; }; -/** - * Complete statistics, DMAed from fw to host followed by +/* Complete statistics, DMAed from fw to host followed by * BFI_ENET_I2H_STATS_GET_RSP */ struct bfi_enet_stats { diff --git a/drivers/net/ethernet/brocade/bna/bfi_reg.h b/drivers/net/ethernet/brocade/bna/bfi_reg.h index 0e094fe46dfd..c49fa312ddbd 100644 --- a/drivers/net/ethernet/brocade/bna/bfi_reg.h +++ b/drivers/net/ethernet/brocade/bna/bfi_reg.h @@ -221,9 +221,7 @@ enum { #define __PMM_1T_RESET_P 0x00000001 #define PMM_1T_RESET_REG_P1 0x00023c1c -/** - * Brocade 1860 Adapter specific defines - */ +/* Brocade 1860 Adapter specific defines */ #define CT2_PCI_CPQ_BASE 0x00030000 #define CT2_PCI_APP_BASE 0x00030100 #define CT2_PCI_ETH_BASE 0x00030400 diff --git a/drivers/net/ethernet/brocade/bna/bna.h b/drivers/net/ethernet/brocade/bna/bna.h index 4d7a5de08e12..ede532b4e9db 100644 --- a/drivers/net/ethernet/brocade/bna/bna.h +++ b/drivers/net/ethernet/brocade/bna/bna.h @@ -25,11 +25,7 @@ extern const u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX]; -/** - * - * Macros and constants - * - */ +/* Macros and constants */ #define BNA_IOC_TIMER_FREQ 200 @@ -356,11 +352,7 @@ do { \ } \ } while (0) -/** - * - * Inline functions - * - */ +/* Inline functions */ static inline struct bna_mac *bna_mac_find(struct list_head *q, u8 *addr) { @@ -377,15 +369,9 @@ static inline struct bna_mac *bna_mac_find(struct list_head *q, u8 *addr) #define bna_attr(_bna) (&(_bna)->ioceth.attr) -/** - * - * Function prototypes - * - */ +/* Function prototypes */ -/** - * BNA - */ +/* BNA */ /* FW response handlers */ void bna_bfi_stats_clr_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr); @@ -413,24 +399,19 @@ struct bna_mcam_handle *bna_mcam_mod_handle_get(struct bna_mcam_mod *mod); void bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod, struct bna_mcam_handle *handle); -/** - * MBOX - */ +/* MBOX */ /* API for BNAD */ void bna_mbox_handler(struct bna *bna, u32 intr_status); -/** - * ETHPORT - */ +/* ETHPORT */ /* Callbacks for RX */ void bna_ethport_cb_rx_started(struct bna_ethport *ethport); void bna_ethport_cb_rx_stopped(struct bna_ethport *ethport); -/** - * TX MODULE AND TX - */ +/* TX MODULE AND TX */ + /* FW response handelrs */ void bna_bfi_tx_enet_start_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr); @@ -462,9 +443,7 @@ void bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type, void bna_tx_cleanup_complete(struct bna_tx *tx); void bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo); -/** - * RX MODULE, RX, RXF - */ +/* RX MODULE, RX, RXF */ /* FW response handlers */ void bna_bfi_rx_enet_start_rsp(struct bna_rx *rx, @@ -522,9 +501,7 @@ bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode rxmode, void bna_rx_vlan_add(struct bna_rx *rx, int vlan_id); void bna_rx_vlan_del(struct bna_rx *rx, int vlan_id); void bna_rx_vlanfilter_enable(struct bna_rx *rx); -/** - * ENET - */ +/* ENET */ /* API for RX */ int bna_enet_mtu_get(struct bna_enet *enet); @@ -544,18 +521,14 @@ void bna_enet_mtu_set(struct bna_enet *enet, int mtu, void (*cbfn)(struct bnad *)); void bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac); -/** - * IOCETH - */ +/* IOCETH */ /* APIs for BNAD */ void bna_ioceth_enable(struct bna_ioceth *ioceth); void bna_ioceth_disable(struct bna_ioceth 
*ioceth, enum bna_cleanup_type type); -/** - * BNAD - */ +/* BNAD */ /* Callbacks for ENET */ void bnad_cb_ethport_link_status(struct bnad *bnad, diff --git a/drivers/net/ethernet/brocade/bna/bna_enet.c b/drivers/net/ethernet/brocade/bna/bna_enet.c index 9ccc586e3767..db14f69d63bc 100644 --- a/drivers/net/ethernet/brocade/bna/bna_enet.c +++ b/drivers/net/ethernet/brocade/bna/bna_enet.c @@ -378,9 +378,8 @@ bna_msgq_rsp_handler(void *arg, struct bfi_msgq_mhdr *msghdr) } } -/** - * ETHPORT - */ +/* ETHPORT */ + #define call_ethport_stop_cbfn(_ethport) \ do { \ if ((_ethport)->stop_cbfn) { \ @@ -804,9 +803,8 @@ bna_ethport_cb_rx_stopped(struct bna_ethport *ethport) } } -/** - * ENET - */ +/* ENET */ + #define bna_enet_chld_start(enet) \ do { \ enum bna_tx_type tx_type = \ @@ -1328,9 +1326,8 @@ bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac) *mac = bfa_nw_ioc_get_mac(&enet->bna->ioceth.ioc); } -/** - * IOCETH - */ +/* IOCETH */ + #define enable_mbox_intr(_ioceth) \ do { \ u32 intr_status; \ diff --git a/drivers/net/ethernet/brocade/bna/bna_hw_defs.h b/drivers/net/ethernet/brocade/bna/bna_hw_defs.h index 4c6aab2a9534..b8c4e21fbf4c 100644 --- a/drivers/net/ethernet/brocade/bna/bna_hw_defs.h +++ b/drivers/net/ethernet/brocade/bna/bna_hw_defs.h @@ -16,20 +16,15 @@ * www.brocade.com */ -/** - * File for interrupt macros and functions - */ +/* File for interrupt macros and functions */ #ifndef __BNA_HW_DEFS_H__ #define __BNA_HW_DEFS_H__ #include "bfi_reg.h" -/** - * - * SW imposed limits - * - */ +/* SW imposed limits */ + #define BFI_ENET_DEF_TXQ 1 #define BFI_ENET_DEF_RXP 1 #define BFI_ENET_DEF_UCAM 1 @@ -141,11 +136,8 @@ } #define bna_port_id_get(_bna) ((_bna)->ioceth.ioc.port_id) -/** - * - * Interrupt related bits, flags and macros - * - */ + +/* Interrupt related bits, flags and macros */ #define IB_STATUS_BITS 0x0000ffff @@ -280,11 +272,7 @@ do { \ (writel(BNA_DOORBELL_Q_PRD_IDX((_rcb)->producer_index), \ (_rcb)->q_dbell)); -/** - * - * TxQ, RxQ, CQ related bits, offsets, macros - * - */ +/* TxQ, RxQ, CQ related bits, offsets, macros */ /* TxQ Entry Opcodes */ #define BNA_TXQ_WI_SEND (0x402) /* Single Frame Transmission */ @@ -334,11 +322,7 @@ do { \ #define BNA_CQ_EF_LOCAL (1 << 20) -/** - * - * Data structures - * - */ +/* Data structures */ struct bna_reg_offset { u32 fn_int_status; @@ -371,8 +355,7 @@ struct bna_txq_wi_vector { struct bna_dma_addr host_addr; /* Tx-Buf DMA addr */ }; -/** - * TxQ Entry Structure +/* TxQ Entry Structure * * BEWARE: Load values into this structure with correct endianess. */ diff --git a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c index 276fcb589f4b..71144b396e02 100644 --- a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c +++ b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c @@ -18,9 +18,7 @@ #include "bna.h" #include "bfi.h" -/** - * IB - */ +/* IB */ static void bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo) { @@ -29,9 +27,7 @@ bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo) (u32)ib->coalescing_timeo, 0); } -/** - * RXF - */ +/* RXF */ #define bna_rxf_vlan_cfg_soft_reset(rxf) \ do { \ @@ -1312,9 +1308,7 @@ bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf) return 0; } -/** - * RX - */ +/* RX */ #define BNA_GET_RXQS(qcfg) (((qcfg)->rxp_type == BNA_RXP_SINGLE) ? 
\ (qcfg)->num_paths : ((qcfg)->num_paths * 2)) @@ -2791,9 +2785,8 @@ const u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = { {1, 2}, }; -/** - * TX - */ +/* TX */ + #define call_tx_stop_cbfn(tx) \ do { \ if ((tx)->stop_cbfn) { \ diff --git a/drivers/net/ethernet/brocade/bna/bna_types.h b/drivers/net/ethernet/brocade/bna/bna_types.h index e8d3ab7ea6cb..d3eb8bddfb2a 100644 --- a/drivers/net/ethernet/brocade/bna/bna_types.h +++ b/drivers/net/ethernet/brocade/bna/bna_types.h @@ -23,11 +23,7 @@ #include "bfa_cee.h" #include "bfa_msgq.h" -/** - * - * Forward declarations - * - */ +/* Forward declarations */ struct bna_mcam_handle; struct bna_txq; @@ -40,11 +36,7 @@ struct bna_enet; struct bna; struct bnad; -/** - * - * Enums, primitive data types - * - */ +/* Enums, primitive data types */ enum bna_status { BNA_STATUS_T_DISABLED = 0, @@ -331,11 +323,7 @@ struct bna_attr { int max_rit_size; }; -/** - * - * IOCEth - * - */ +/* IOCEth */ struct bna_ioceth { bfa_fsm_t fsm; @@ -351,11 +339,7 @@ struct bna_ioceth { struct bna *bna; }; -/** - * - * Enet - * - */ +/* Enet */ /* Pause configuration */ struct bna_pause_config { @@ -390,11 +374,7 @@ struct bna_enet { struct bna *bna; }; -/** - * - * Ethport - * - */ +/* Ethport */ struct bna_ethport { bfa_fsm_t fsm; @@ -419,11 +399,7 @@ struct bna_ethport { struct bna *bna; }; -/** - * - * Interrupt Block - * - */ +/* Interrupt Block */ /* Doorbell structure */ struct bna_ib_dbell { @@ -447,11 +423,7 @@ struct bna_ib { int interpkt_timeo; }; -/** - * - * Tx object - * - */ +/* Tx object */ /* Tx datapath control structure */ #define BNA_Q_NAME_SIZE 16 @@ -585,11 +557,7 @@ struct bna_tx_mod { struct bna *bna; }; -/** - * - * Rx object - * - */ +/* Rx object */ /* Rx datapath control structure */ struct bna_rcb { @@ -898,11 +866,7 @@ struct bna_rx_mod { u32 rid_mask; }; -/** - * - * CAM - * - */ +/* CAM */ struct bna_ucam_mod { struct bna_mac *ucmac; /* BFI_MAX_UCMAC entries */ @@ -927,11 +891,7 @@ struct bna_mcam_mod { struct bna *bna; }; -/** - * - * Statistics - * - */ +/* Statistics */ struct bna_stats { struct bna_dma_addr hw_stats_dma; @@ -949,11 +909,7 @@ struct bna_stats_mod { struct bfi_enet_stats_req stats_clr; }; -/** - * - * BNA - * - */ +/* BNA */ struct bna { struct bna_ident ident; diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c index 67cd2ed0306a..b441f33258e7 100644 --- a/drivers/net/ethernet/brocade/bna/bnad.c +++ b/drivers/net/ethernet/brocade/bna/bnad.c @@ -1302,8 +1302,7 @@ bnad_txrx_irq_alloc(struct bnad *bnad, enum bnad_intr_source src, return 0; } -/** - * NOTE: Should be called for MSIX only +/* NOTE: Should be called for MSIX only * Unregisters Tx MSIX vector(s) from the kernel */ static void @@ -1322,8 +1321,7 @@ bnad_tx_msix_unregister(struct bnad *bnad, struct bnad_tx_info *tx_info, } } -/** - * NOTE: Should be called for MSIX only +/* NOTE: Should be called for MSIX only * Registers Tx MSIX vector(s) and ISR(s), cookie with the kernel */ static int @@ -1354,8 +1352,7 @@ err_return: return -1; } -/** - * NOTE: Should be called for MSIX only +/* NOTE: Should be called for MSIX only * Unregisters Rx MSIX vector(s) from the kernel */ static void @@ -1375,8 +1372,7 @@ bnad_rx_msix_unregister(struct bnad *bnad, struct bnad_rx_info *rx_info, } } -/** - * NOTE: Should be called for MSIX only +/* NOTE: Should be called for MSIX only * Registers Tx MSIX vector(s) and ISR(s), cookie with the kernel */ static int diff --git a/drivers/net/ethernet/brocade/bna/bnad.h 
b/drivers/net/ethernet/brocade/bna/bnad.h index 72742be11277..d78339224751 100644 --- a/drivers/net/ethernet/brocade/bna/bnad.h +++ b/drivers/net/ethernet/brocade/bna/bnad.h @@ -389,9 +389,7 @@ extern void bnad_netdev_hwstats_fill(struct bnad *bnad, void bnad_debugfs_init(struct bnad *bnad); void bnad_debugfs_uninit(struct bnad *bnad); -/** - * MACROS - */ +/* MACROS */ /* To set & get the stats counters */ #define BNAD_UPDATE_CTR(_bnad, _ctr) \ (((_bnad)->stats.drv_stats._ctr)++) diff --git a/drivers/net/ethernet/brocade/bna/cna_fwimg.c b/drivers/net/ethernet/brocade/bna/cna_fwimg.c index cfc22a64157e..6a68e8d93309 100644 --- a/drivers/net/ethernet/brocade/bna/cna_fwimg.c +++ b/drivers/net/ethernet/brocade/bna/cna_fwimg.c @@ -67,10 +67,10 @@ bfa_cb_image_get_chunk(enum bfi_asic_gen asic_gen, u32 off) { switch (asic_gen) { case BFI_ASIC_GEN_CT: - return (u32 *)(bfi_image_ct_cna + off); + return (bfi_image_ct_cna + off); break; case BFI_ASIC_GEN_CT2: - return (u32 *)(bfi_image_ct2_cna + off); + return (bfi_image_ct2_cna + off); break; default: return NULL; diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c index 1466bc4e3dda..033064b7b576 100644 --- a/drivers/net/ethernet/cadence/macb.c +++ b/drivers/net/ethernet/cadence/macb.c @@ -179,13 +179,16 @@ static void macb_handle_link_change(struct net_device *dev) spin_unlock_irqrestore(&bp->lock, flags); if (status_change) { - if (phydev->link) + if (phydev->link) { + netif_carrier_on(dev); netdev_info(dev, "link up (%d/%s)\n", phydev->speed, phydev->duplex == DUPLEX_FULL ? "Full" : "Half"); - else + } else { + netif_carrier_off(dev); netdev_info(dev, "link down\n"); + } } } @@ -1033,6 +1036,9 @@ static int macb_open(struct net_device *dev) netdev_dbg(bp->dev, "open\n"); + /* carrier starts down */ + netif_carrier_off(dev); + /* if the phy is not yet register, retry later*/ if (!bp->phy_dev) return -EAGAIN; @@ -1406,6 +1412,8 @@ static int __init macb_probe(struct platform_device *pdev) platform_set_drvdata(pdev, dev); + netif_carrier_off(dev); + netdev_info(dev, "Cadence %s at 0x%08lx irq %d (%pM)\n", macb_is_gem(bp) ? 
"GEM" : "MACB", dev->base_addr, dev->irq, dev->dev_addr); @@ -1469,6 +1477,7 @@ static int macb_suspend(struct platform_device *pdev, pm_message_t state) struct net_device *netdev = platform_get_drvdata(pdev); struct macb *bp = netdev_priv(netdev); + netif_carrier_off(netdev); netif_device_detach(netdev); clk_disable(bp->hclk); diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c index 11f667f6131a..2b4b4f529ab4 100644 --- a/drivers/net/ethernet/calxeda/xgmac.c +++ b/drivers/net/ethernet/calxeda/xgmac.c @@ -264,7 +264,7 @@ #define XGMAC_OMR_FEF 0x00000080 /* Forward Error Frames */ #define XGMAC_OMR_DT 0x00000040 /* Drop TCP/IP csum Errors */ #define XGMAC_OMR_RSF 0x00000020 /* RX FIFO Store and Forward */ -#define XGMAC_OMR_RTC 0x00000010 /* RX Threshhold Ctrl */ +#define XGMAC_OMR_RTC_256 0x00000018 /* RX Threshhold Ctrl */ #define XGMAC_OMR_RTC_MASK 0x00000018 /* RX Threshhold Ctrl MASK */ /* XGMAC HW Features Register */ @@ -671,26 +671,23 @@ static void xgmac_rx_refill(struct xgmac_priv *priv) p = priv->dma_rx + entry; - if (priv->rx_skbuff[entry] != NULL) - continue; - - skb = __skb_dequeue(&priv->rx_recycle); - if (skb == NULL) - skb = netdev_alloc_skb(priv->dev, priv->dma_buf_sz); - if (unlikely(skb == NULL)) - break; - - priv->rx_skbuff[entry] = skb; - paddr = dma_map_single(priv->device, skb->data, - priv->dma_buf_sz, DMA_FROM_DEVICE); - desc_set_buf_addr(p, paddr, priv->dma_buf_sz); + if (priv->rx_skbuff[entry] == NULL) { + skb = __skb_dequeue(&priv->rx_recycle); + if (skb == NULL) + skb = netdev_alloc_skb(priv->dev, priv->dma_buf_sz); + if (unlikely(skb == NULL)) + break; + + priv->rx_skbuff[entry] = skb; + paddr = dma_map_single(priv->device, skb->data, + priv->dma_buf_sz, DMA_FROM_DEVICE); + desc_set_buf_addr(p, paddr, priv->dma_buf_sz); + } netdev_dbg(priv->dev, "rx ring: head %d, tail %d\n", priv->rx_head, priv->rx_tail); priv->rx_head = dma_ring_incr(priv->rx_head, DMA_RX_RING_SZ); - /* Ensure descriptor is in memory before handing to h/w */ - wmb(); desc_set_rx_owner(p); } } @@ -933,6 +930,7 @@ static void xgmac_tx_err(struct xgmac_priv *priv) desc_init_tx_desc(priv->dma_tx, DMA_TX_RING_SZ); priv->tx_tail = 0; priv->tx_head = 0; + writel(priv->dma_tx_phy, priv->base + XGMAC_DMA_TX_BASE_ADDR); writel(reg | DMA_CONTROL_ST, priv->base + XGMAC_DMA_CONTROL); writel(DMA_STATUS_TU | DMA_STATUS_TPS | DMA_STATUS_NIS | DMA_STATUS_AIS, @@ -972,7 +970,7 @@ static int xgmac_hw_init(struct net_device *dev) writel(DMA_INTR_DEFAULT_MASK, ioaddr + XGMAC_DMA_INTR_ENA); /* XGMAC requires AXI bus init. 
This is a 'magic number' for now */ - writel(0x000100E, ioaddr + XGMAC_DMA_AXI_BUS); + writel(0x0077000E, ioaddr + XGMAC_DMA_AXI_BUS); ctrl |= XGMAC_CONTROL_DDIC | XGMAC_CONTROL_JE | XGMAC_CONTROL_ACS | XGMAC_CONTROL_CAR; @@ -984,7 +982,8 @@ static int xgmac_hw_init(struct net_device *dev) writel(value, ioaddr + XGMAC_DMA_CONTROL); /* Set the HW DMA mode and the COE */ - writel(XGMAC_OMR_TSF | XGMAC_OMR_RSF | XGMAC_OMR_RFD | XGMAC_OMR_RFA, + writel(XGMAC_OMR_TSF | XGMAC_OMR_RFD | XGMAC_OMR_RFA | + XGMAC_OMR_RTC_256, ioaddr + XGMAC_OMR); /* Reset the MMC counters */ diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c index abb6ce7c1b7e..6505070abcfa 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c +++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c @@ -3050,7 +3050,7 @@ static struct pci_error_handlers t3_err_handler = { static void set_nqsets(struct adapter *adap) { int i, j = 0; - int num_cpus = num_online_cpus(); + int num_cpus = netif_get_num_default_rss_queues(); int hwports = adap->params.nports; int nqsets = adap->msix_nvectors - 1; @@ -3173,6 +3173,9 @@ static void __devinit cxgb3_init_iscsi_mac(struct net_device *dev) pi->iscsic.mac_addr[3] |= 0x80; } +#define TSO_FLAGS (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_TSO_ECN) +#define VLAN_FEAT (NETIF_F_SG | NETIF_F_IP_CSUM | TSO_FLAGS | \ + NETIF_F_IPV6_CSUM | NETIF_F_HIGHDMA) static int __devinit init_one(struct pci_dev *pdev, const struct pci_device_id *ent) { @@ -3293,6 +3296,7 @@ static int __devinit init_one(struct pci_dev *pdev, netdev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_RX; netdev->features |= netdev->hw_features | NETIF_F_HW_VLAN_TX; + netdev->vlan_features |= netdev->features & VLAN_FEAT; if (pci_using_dac) netdev->features |= NETIF_F_HIGHDMA; diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c index 65e4b280619a..2dbbcbb450d3 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c +++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_offload.c @@ -62,7 +62,9 @@ static const unsigned int MAX_ATIDS = 64 * 1024; static const unsigned int ATID_BASE = 0x10000; static void cxgb_neigh_update(struct neighbour *neigh); -static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new); +static void cxgb_redirect(struct dst_entry *old, struct neighbour *old_neigh, + struct dst_entry *new, struct neighbour *new_neigh, + const void *daddr); static inline int offload_activated(struct t3cdev *tdev) { @@ -575,7 +577,7 @@ static void t3_process_tid_release_list(struct work_struct *work) if (!skb) { spin_lock_bh(&td->tid_release_lock); p->ctx = (void *)td->tid_release_list; - td->tid_release_list = (struct t3c_tid_entry *)p; + td->tid_release_list = p; break; } mk_tid_release(skb, p - td->tid_maps.tid_tab); @@ -968,8 +970,10 @@ static int nb_callback(struct notifier_block *self, unsigned long event, } case (NETEVENT_REDIRECT):{ struct netevent_redirect *nr = ctx; - cxgb_redirect(nr->old, nr->new); - cxgb_neigh_update(dst_get_neighbour_noref(nr->new)); + cxgb_redirect(nr->old, nr->old_neigh, + nr->new, nr->new_neigh, + nr->daddr); + cxgb_neigh_update(nr->new_neigh); break; } default: @@ -1107,10 +1111,11 @@ static void set_l2t_ix(struct t3cdev *tdev, u32 tid, struct l2t_entry *e) tdev->send(tdev, skb); } -static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new) +static void cxgb_redirect(struct dst_entry *old, struct neighbour 
*old_neigh, + struct dst_entry *new, struct neighbour *new_neigh, + const void *daddr) { struct net_device *olddev, *newdev; - struct neighbour *n; struct tid_info *ti; struct t3cdev *tdev; u32 tid; @@ -1118,15 +1123,8 @@ static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new) struct l2t_entry *e; struct t3c_tid_entry *te; - n = dst_get_neighbour_noref(old); - if (!n) - return; - olddev = n->dev; - - n = dst_get_neighbour_noref(new); - if (!n) - return; - newdev = n->dev; + olddev = old_neigh->dev; + newdev = new_neigh->dev; if (!is_offloading(olddev)) return; @@ -1144,7 +1142,7 @@ static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new) } /* Add new L2T entry */ - e = t3_l2t_get(tdev, new, newdev); + e = t3_l2t_get(tdev, new, newdev, daddr); if (!e) { printk(KERN_ERR "%s: couldn't allocate new l2t entry!\n", __func__); diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.c b/drivers/net/ethernet/chelsio/cxgb3/l2t.c index 3fa3c8833ed7..8d53438638b2 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/l2t.c +++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.c @@ -299,7 +299,7 @@ static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh) } struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst, - struct net_device *dev) + struct net_device *dev, const void *daddr) { struct l2t_entry *e = NULL; struct neighbour *neigh; @@ -311,7 +311,7 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst, int smt_idx; rcu_read_lock(); - neigh = dst_get_neighbour_noref(dst); + neigh = dst_neigh_lookup(dst, daddr); if (!neigh) goto done_rcu; @@ -360,6 +360,8 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst, done_unlock: write_unlock_bh(&d->lock); done_rcu: + if (neigh) + neigh_release(neigh); rcu_read_unlock(); return e; } diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.h b/drivers/net/ethernet/chelsio/cxgb3/l2t.h index c4e864369751..8cffcdfd5678 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/l2t.h +++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.h @@ -110,7 +110,7 @@ static inline void set_arp_failure_handler(struct sk_buff *skb, void t3_l2e_free(struct l2t_data *d, struct l2t_entry *e); void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh); struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct dst_entry *dst, - struct net_device *dev); + struct net_device *dev, const void *daddr); int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb, struct l2t_entry *e); void t3_l2t_send_event(struct t3cdev *dev, struct l2t_entry *e); diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c index cfb60e1f51da..dd901c5061b9 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c @@ -2877,7 +2877,7 @@ static void sge_timer_tx(unsigned long data) mod_timer(&qs->tx_reclaim_timer, jiffies + next_period); } -/* +/** * sge_timer_rx - perform periodic maintenance of an SGE qset * @data: the SGE queue set to maintain * diff --git a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c index 44ac2f40b644..bff8a3cdd3df 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c +++ b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c @@ -1076,7 +1076,7 @@ static int t3_flash_erase_sectors(struct adapter *adapter, int start, int end) return 0; } -/* +/** * t3_load_fw - download firmware * @adapter: the adapter * @fw_data: the firmware image to write diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c 
b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index e1f96fbb48c1..5ed49af23d6a 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -3493,8 +3493,8 @@ static void __devinit cfg_queues(struct adapter *adap) */ if (n10g) q10g = (MAX_ETH_QSETS - (adap->params.nports - n10g)) / n10g; - if (q10g > num_online_cpus()) - q10g = num_online_cpus(); + if (q10g > netif_get_num_default_rss_queues()) + q10g = netif_get_num_default_rss_queues(); for_each_port(adap, i) { struct port_info *pi = adap2pinfo(adap, i); diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c index e111d974afd8..8596acaa402b 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c @@ -753,7 +753,7 @@ static void write_sgl(const struct sk_buff *skb, struct sge_txq *q, end = (void *)q->desc + part1; } if ((uintptr_t)end & 8) /* 0-pad to multiple of 16 */ - *(u64 *)end = 0; + *end = 0; } /** diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c index 32e1dd566a14..fa947dfa4c30 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c @@ -2010,7 +2010,7 @@ int t4_fwaddrspace_write(struct adapter *adap, unsigned int mbox, return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL); } -/* +/** * t4_mem_win_read_len - read memory through PCIE memory window * @adap: the adapter * @addr: address of first byte requested aligned on 32b. diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c index 25e3308fc9d8..9dad56101e23 100644 --- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c @@ -418,7 +418,7 @@ static int fwevtq_handler(struct sge_rspq *rspq, const __be64 *rsp, * restart a TX Ethernet Queue which was stopped for lack of * free TX Queue Descriptors ... 
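Both the cxgb3 set_nqsets() hunk and the cxgb4 cfg_queues() hunk above (and be_num_rss_want() in be_main.c further down) stop sizing their RSS queue sets from num_online_cpus() and instead cap them with the new netif_get_num_default_rss_queues() helper. A minimal sketch of that sizing pattern, using a hypothetical foo_calc_rss_queues() helper and made-up limits rather than code from this patch:

    /* Sketch only: clamp the RSS queue count to the core's default,
     * the MSI-X budget and the hardware maximum.
     * netif_get_num_default_rss_queues() comes from <linux/netdevice.h>,
     * min()/max() from <linux/kernel.h>.
     */
    static int foo_calc_rss_queues(int msix_vectors, int hw_max_queues)
    {
            int n = netif_get_num_default_rss_queues();

            n = min(n, msix_vectors);
            n = min(n, hw_max_queues);
            return max(n, 1);       /* never drop below one queue */
    }

The point of the helper is that on a large box one queue per online CPU burns interrupt vectors and memory for little extra throughput, so the networking core picks a bounded default that drivers then clamp to their own interrupt and hardware limits.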
*/ - const struct cpl_sge_egr_update *p = (void *)cpl; + const struct cpl_sge_egr_update *p = cpl; unsigned int qid = EGR_QID(be32_to_cpu(p->opcode_qid)); struct sge *s = &adapter->sge; struct sge_txq *tq; diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c index 0bd585bba39d..f2d1ecdcaf98 100644 --- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c @@ -934,7 +934,7 @@ static void write_sgl(const struct sk_buff *skb, struct sge_txq *tq, end = (void *)tq->desc + part1; } if ((uintptr_t)end & 8) /* 0-pad to multiple of 16 */ - *(u64 *)end = 0; + *end = 0; } /** @@ -1323,8 +1323,7 @@ int t4vf_eth_xmit(struct sk_buff *skb, struct net_device *dev) */ if (unlikely((void *)sgl == (void *)tq->stat)) { sgl = (void *)tq->desc; - end = (void *)((void *)tq->desc + - ((void *)end - (void *)tq->stat)); + end = ((void *)tq->desc + ((void *)end - (void *)tq->stat)); } write_sgl(skb, tq, sgl, end, 0, addr); diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c index 8132c785cea8..ad1468b3ab91 100644 --- a/drivers/net/ethernet/cisco/enic/enic_main.c +++ b/drivers/net/ethernet/cisco/enic/enic_main.c @@ -1300,8 +1300,6 @@ static void enic_rq_indicate_buf(struct vnic_rq *rq, skb->ip_summed = CHECKSUM_COMPLETE; } - skb->dev = netdev; - if (vlan_stripped) __vlan_hwaccel_put_tag(skb, vlan_tci); diff --git a/drivers/net/ethernet/dec/tulip/de4x5.c b/drivers/net/ethernet/dec/tulip/de4x5.c index d3cd489d11a2..f879e9224846 100644 --- a/drivers/net/ethernet/dec/tulip/de4x5.c +++ b/drivers/net/ethernet/dec/tulip/de4x5.c @@ -3973,7 +3973,7 @@ DevicePresent(struct net_device *dev, u_long aprom_addr) tmp = srom_rd(aprom_addr, i); *p++ = cpu_to_le16(tmp); } - de4x5_dbg_srom((struct de4x5_srom *)&lp->srom); + de4x5_dbg_srom(&lp->srom); } } diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h index c5c4c0e83bd1..d266c86a53f7 100644 --- a/drivers/net/ethernet/emulex/benet/be.h +++ b/drivers/net/ethernet/emulex/benet/be.h @@ -34,7 +34,7 @@ #include "be_hw.h" #include "be_roce.h" -#define DRV_VER "4.2.220u" +#define DRV_VER "4.4.31.0u" #define DRV_NAME "be2net" #define BE_NAME "ServerEngines BladeEngine2 10Gbps NIC" #define BE3_NAME "ServerEngines BladeEngine3 10Gbps NIC" @@ -389,6 +389,7 @@ struct be_adapter { struct delayed_work work; u16 work_counter; + struct delayed_work func_recovery_work; u32 flags; /* Ethtool knobs and info */ char fw_ver[FW_VER_LEN]; @@ -396,9 +397,10 @@ struct be_adapter { u32 *pmac_id; /* MAC addr handle used by BE card */ u32 beacon_state; /* for set_phys_id */ - bool eeh_err; - bool ue_detected; + bool eeh_error; bool fw_timeout; + bool hw_error; + u32 port_num; bool promiscuous; u32 function_mode; @@ -435,6 +437,7 @@ struct be_adapter { u32 max_pmac_cnt; /* Max secondary UC MACs programmable */ u32 uc_macs; /* Count of secondary UC MAC programmed */ u32 msg_enable; + int be_get_temp_freq; }; #define be_physfn(adapter) (!adapter->virtfn) @@ -454,6 +457,9 @@ struct be_adapter { #define lancer_chip(adapter) ((adapter->pdev->device == OC_DEVICE_ID3) || \ (adapter->pdev->device == OC_DEVICE_ID4)) +#define skyhawk_chip(adapter) (adapter->pdev->device == OC_DEVICE_ID5) + + #define be_roce_supported(adapter) ((adapter->if_type == SLI_INTF_TYPE_3 || \ adapter->sli_family == SKYHAWK_SLI_FAMILY) && \ (adapter->function_mode & RDMA_ENABLED)) @@ -573,6 +579,11 @@ static inline u8 is_udp_pkt(struct sk_buff *skb) return val; } +static 
inline bool is_ipv4_pkt(struct sk_buff *skb) +{ + return skb->protocol == htons(ETH_P_IP) && ip_hdr(skb)->version == 4; +} + static inline void be_vf_eth_addr_generate(struct be_adapter *adapter, u8 *mac) { u32 addr; @@ -593,7 +604,19 @@ static inline bool be_multi_rxq(const struct be_adapter *adapter) static inline bool be_error(struct be_adapter *adapter) { - return adapter->eeh_err || adapter->ue_detected || adapter->fw_timeout; + return adapter->eeh_error || adapter->hw_error || adapter->fw_timeout; +} + +static inline bool be_crit_error(struct be_adapter *adapter) +{ + return adapter->eeh_error || adapter->hw_error; +} + +static inline void be_clear_all_error(struct be_adapter *adapter) +{ + adapter->eeh_error = false; + adapter->hw_error = false; + adapter->fw_timeout = false; } static inline bool be_is_wol_excluded(struct be_adapter *adapter) diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c index 921c2082af4c..7fac97b4bb59 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.c +++ b/drivers/net/ethernet/emulex/benet/be_cmds.c @@ -19,9 +19,6 @@ #include "be.h" #include "be_cmds.h" -/* Must be a power of 2 or else MODULO will BUG_ON */ -static int be_get_temp_freq = 64; - static inline void *embedded_payload(struct be_mcc_wrb *wrb) { return wrb->payload.embedded_payload; @@ -115,7 +112,7 @@ static int be_mcc_compl_process(struct be_adapter *adapter, } } else { if (opcode == OPCODE_COMMON_GET_CNTL_ADDITIONAL_ATTRIBUTES) - be_get_temp_freq = 0; + adapter->be_get_temp_freq = 0; if (compl_status == MCC_STATUS_NOT_SUPPORTED || compl_status == MCC_STATUS_ILLEGAL_REQUEST) @@ -144,6 +141,11 @@ static void be_async_link_state_process(struct be_adapter *adapter, /* When link status changes, link speed must be re-queried from FW */ adapter->phy.link_speed = -1; + /* Ignore physical link event */ + if (lancer_chip(adapter) && + !(evt->port_link_status & LOGICAL_LINK_STATUS_MASK)) + return; + /* For the initial link status do not rely on the ASYNC event as * it may not be received in some cases. 
*/ @@ -352,7 +354,7 @@ static int be_mbox_db_ready_wait(struct be_adapter *adapter, void __iomem *db) if (msecs > 4000) { dev_err(&adapter->pdev->dev, "FW not responding\n"); adapter->fw_timeout = true; - be_detect_dump_ue(adapter); + be_detect_error(adapter); return -1; } @@ -429,12 +431,65 @@ static int be_POST_stage_get(struct be_adapter *adapter, u16 *stage) return 0; } -int be_cmd_POST(struct be_adapter *adapter) +int lancer_wait_ready(struct be_adapter *adapter) +{ +#define SLIPORT_READY_TIMEOUT 30 + u32 sliport_status; + int status = 0, i; + + for (i = 0; i < SLIPORT_READY_TIMEOUT; i++) { + sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET); + if (sliport_status & SLIPORT_STATUS_RDY_MASK) + break; + + msleep(1000); + } + + if (i == SLIPORT_READY_TIMEOUT) + status = -1; + + return status; +} + +int lancer_test_and_set_rdy_state(struct be_adapter *adapter) +{ + int status; + u32 sliport_status, err, reset_needed; + status = lancer_wait_ready(adapter); + if (!status) { + sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET); + err = sliport_status & SLIPORT_STATUS_ERR_MASK; + reset_needed = sliport_status & SLIPORT_STATUS_RN_MASK; + if (err && reset_needed) { + iowrite32(SLI_PORT_CONTROL_IP_MASK, + adapter->db + SLIPORT_CONTROL_OFFSET); + + /* check adapter has corrected the error */ + status = lancer_wait_ready(adapter); + sliport_status = ioread32(adapter->db + + SLIPORT_STATUS_OFFSET); + sliport_status &= (SLIPORT_STATUS_ERR_MASK | + SLIPORT_STATUS_RN_MASK); + if (status || sliport_status) + status = -1; + } else if (err || reset_needed) { + status = -1; + } + } + return status; +} + +int be_fw_wait_ready(struct be_adapter *adapter) { u16 stage; int status, timeout = 0; struct device *dev = &adapter->pdev->dev; + if (lancer_chip(adapter)) { + status = lancer_wait_ready(adapter); + return status; + } + do { status = be_POST_stage_get(adapter, &stage); if (status) { @@ -565,6 +620,9 @@ int be_cmd_fw_init(struct be_adapter *adapter) u8 *wrb; int status; + if (lancer_chip(adapter)) + return 0; + if (mutex_lock_interruptible(&adapter->mbox_lock)) return -1; @@ -592,6 +650,9 @@ int be_cmd_fw_clean(struct be_adapter *adapter) u8 *wrb; int status; + if (lancer_chip(adapter)) + return 0; + if (mutex_lock_interruptible(&adapter->mbox_lock)) return -1; @@ -610,6 +671,7 @@ int be_cmd_fw_clean(struct be_adapter *adapter) mutex_unlock(&adapter->mbox_lock); return status; } + int be_cmd_eq_create(struct be_adapter *adapter, struct be_queue_info *eq, int eq_delay) { @@ -1132,7 +1194,7 @@ err: * Uses MCCQ */ int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, u32 en_flags, - u8 *mac, u32 *if_handle, u32 *pmac_id, u32 domain) + u32 *if_handle, u32 domain) { struct be_mcc_wrb *wrb; struct be_cmd_req_if_create *req; @@ -1152,17 +1214,13 @@ int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, u32 en_flags, req->hdr.domain = domain; req->capability_flags = cpu_to_le32(cap_flags); req->enable_flags = cpu_to_le32(en_flags); - if (mac) - memcpy(req->mac_addr, mac, ETH_ALEN); - else - req->pmac_invalid = true; + + req->pmac_invalid = true; status = be_mcc_notify_wait(adapter); if (!status) { struct be_cmd_resp_if_create *resp = embedded_payload(wrb); *if_handle = le32_to_cpu(resp->interface_id); - if (mac) - *pmac_id = le32_to_cpu(resp->pmac_id); } err: @@ -1210,9 +1268,6 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd) struct be_cmd_req_hdr *hdr; int status = 0; - if (MODULO(adapter->work_counter, be_get_temp_freq) == 0) - 
be_cmd_get_die_temperature(adapter); - spin_lock_bh(&adapter->mcc_lock); wrb = wrb_from_mccq(adapter); @@ -1581,7 +1636,8 @@ int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value) /* Reset mcast promisc mode if already set by setting mask * and not setting flags field */ - req->if_flags_mask |= + if (!lancer_chip(adapter) || be_physfn(adapter)) + req->if_flags_mask |= cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS); req->mcast_num = cpu_to_le32(netdev_mc_count(adapter->netdev)); @@ -1692,6 +1748,20 @@ int be_cmd_reset_function(struct be_adapter *adapter) struct be_cmd_req_hdr *req; int status; + if (lancer_chip(adapter)) { + status = lancer_wait_ready(adapter); + if (!status) { + iowrite32(SLI_PORT_CONTROL_IP_MASK, + adapter->db + SLIPORT_CONTROL_OFFSET); + status = lancer_test_and_set_rdy_state(adapter); + } + if (status) { + dev_err(&adapter->pdev->dev, + "Adapter in non recoverable error\n"); + } + return status; + } + if (mutex_lock_interruptible(&adapter->mbox_lock)) return -1; @@ -1728,6 +1798,13 @@ int be_cmd_rss_config(struct be_adapter *adapter, u8 *rsstable, u16 table_size) req->if_id = cpu_to_le32(adapter->if_handle); req->enable_rss = cpu_to_le16(RSS_ENABLE_TCP_IPV4 | RSS_ENABLE_IPV4 | RSS_ENABLE_TCP_IPV6 | RSS_ENABLE_IPV6); + + if (lancer_chip(adapter) || skyhawk_chip(adapter)) { + req->hdr.version = 1; + req->enable_rss |= cpu_to_le16(RSS_ENABLE_UDP_IPV4 | + RSS_ENABLE_UDP_IPV6); + } + req->cpu_table_size_log2 = cpu_to_le16(fls(table_size) - 1); memcpy(req->cpu_table, rsstable, table_size); memcpy(req->hash, myhash, sizeof(myhash)); @@ -1805,8 +1882,9 @@ err: } int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd, - u32 data_size, u32 data_offset, const char *obj_name, - u32 *data_written, u8 *addn_status) + u32 data_size, u32 data_offset, + const char *obj_name, u32 *data_written, + u8 *change_status, u8 *addn_status) { struct be_mcc_wrb *wrb; struct lancer_cmd_req_write_object *req; @@ -1862,10 +1940,12 @@ int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd, status = adapter->flash_status; resp = embedded_payload(wrb); - if (!status) + if (!status) { *data_written = le32_to_cpu(resp->actual_write_len); - else + *change_status = resp->change_status; + } else { *addn_status = resp->additional_status; + } return status; @@ -2330,8 +2410,8 @@ err: } /* Uses synchronous MCCQ */ -int be_cmd_get_mac_from_list(struct be_adapter *adapter, u32 domain, - bool *pmac_id_active, u32 *pmac_id, u8 *mac) +int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac, + bool *pmac_id_active, u32 *pmac_id, u8 domain) { struct be_mcc_wrb *wrb; struct be_cmd_req_get_mac_list *req; @@ -2376,8 +2456,9 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u32 domain, get_mac_list_cmd.va; mac_count = resp->true_mac_count + resp->pseudo_mac_count; /* Mac list returned could contain one or more active mac_ids - * or one or more pseudo permanant mac addresses. If an active - * mac_id is present, return first active mac_id found + * or one or more true or pseudo permanant mac addresses. + * If an active mac_id is present, return first active mac_id + * found. 
*/ for (i = 0; i < mac_count; i++) { struct get_list_macaddr *mac_entry; @@ -2396,7 +2477,7 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u32 domain, goto out; } } - /* If no active mac_id found, return first pseudo mac addr */ + /* If no active mac_id found, return first mac addr */ *pmac_id_active = false; memcpy(mac, resp->macaddr_list[0].mac_addr_id.macaddr, ETH_ALEN); @@ -2648,6 +2729,44 @@ err: return status; } +int be_cmd_query_port_name(struct be_adapter *adapter, u8 *port_name) +{ + struct be_mcc_wrb *wrb; + struct be_cmd_req_get_port_name *req; + int status; + + if (!lancer_chip(adapter)) { + *port_name = adapter->hba_port_num + '0'; + return 0; + } + + spin_lock_bh(&adapter->mcc_lock); + + wrb = wrb_from_mccq(adapter); + if (!wrb) { + status = -EBUSY; + goto err; + } + + req = embedded_payload(wrb); + + be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON, + OPCODE_COMMON_GET_PORT_NAME, sizeof(*req), wrb, + NULL); + req->hdr.version = 1; + + status = be_mcc_notify_wait(adapter); + if (!status) { + struct be_cmd_resp_get_port_name *resp = embedded_payload(wrb); + *port_name = resp->port_name[adapter->hba_port_num]; + } else { + *port_name = adapter->hba_port_num + '0'; + } +err: + spin_unlock_bh(&adapter->mcc_lock); + return status; +} + int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload, int wrb_payload_size, u16 *cmd_status, u16 *ext_status) { diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h index b3f3fc3d1323..250f19b5f7b6 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.h +++ b/drivers/net/ethernet/emulex/benet/be_cmds.h @@ -93,6 +93,7 @@ enum { LINK_UP = 0x1 }; #define LINK_STATUS_MASK 0x1 +#define LOGICAL_LINK_STATUS_MASK 0x2 /* When the event code of an async trailer is link-state, the mcc_compl * must be interpreted as follows @@ -186,6 +187,7 @@ struct be_mcc_mailbox { #define OPCODE_COMMON_ENABLE_DISABLE_BEACON 69 #define OPCODE_COMMON_GET_BEACON_STATE 70 #define OPCODE_COMMON_READ_TRANSRECV_DATA 73 +#define OPCODE_COMMON_GET_PORT_NAME 77 #define OPCODE_COMMON_GET_PHY_DETAILS 102 #define OPCODE_COMMON_SET_DRIVER_FUNCTION_CAP 103 #define OPCODE_COMMON_GET_CNTL_ADDITIONAL_ATTRIBUTES 121 @@ -1081,13 +1083,25 @@ struct be_cmd_resp_query_fw_cfg { u32 function_caps; }; -/******************** RSS Config *******************/ -/* RSS types */ +/******************** RSS Config ****************************************/ +/* RSS type Input parameters used to compute RX hash + * RSS_ENABLE_IPV4 SRC IPv4, DST IPv4 + * RSS_ENABLE_TCP_IPV4 SRC IPv4, DST IPv4, TCP SRC PORT, TCP DST PORT + * RSS_ENABLE_IPV6 SRC IPv6, DST IPv6 + * RSS_ENABLE_TCP_IPV6 SRC IPv6, DST IPv6, TCP SRC PORT, TCP DST PORT + * RSS_ENABLE_UDP_IPV4 SRC IPv4, DST IPv4, UDP SRC PORT, UDP DST PORT + * RSS_ENABLE_UDP_IPV6 SRC IPv6, DST IPv6, UDP SRC PORT, UDP DST PORT + * + * When multiple RSS types are enabled, HW picks the best hash policy + * based on the type of the received packet. 
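One practical consequence of this hash-selection table, not spelled out in the comment above but in line with how RSS normally behaves: with both RSS_ENABLE_TCP_IPV4 and RSS_ENABLE_IPV4 set, a regular TCP segment is spread across RX queues by the full {src IP, dst IP, src port, dst port} 4-tuple, while an IPv4 fragment, which has no parsable TCP header, presumably falls back to the plain {src IP, dst IP} 2-tuple, keeping all fragments of one datagram on the same queue. The new RSS_ENABLE_UDP_IPV4/RSS_ENABLE_UDP_IPV6 flags defined below extend the same 4-tuple treatment to UDP flows, and be_cmd_rss_config() only sets them on Lancer and Skyhawk chips.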
+ */ #define RSS_ENABLE_NONE 0x0 #define RSS_ENABLE_IPV4 0x1 #define RSS_ENABLE_TCP_IPV4 0x2 #define RSS_ENABLE_IPV6 0x4 #define RSS_ENABLE_TCP_IPV6 0x8 +#define RSS_ENABLE_UDP_IPV4 0x10 +#define RSS_ENABLE_UDP_IPV6 0x20 struct be_cmd_req_rss_config { struct be_cmd_req_hdr hdr; @@ -1163,6 +1177,8 @@ struct lancer_cmd_req_write_object { u32 addr_high; }; +#define LANCER_NO_RESET_NEEDED 0x00 +#define LANCER_FW_RESET_NEEDED 0x02 struct lancer_cmd_resp_write_object { u8 opcode; u8 subsystem; @@ -1173,6 +1189,8 @@ struct lancer_cmd_resp_write_object { u32 resp_len; u32 actual_resp_len; u32 actual_write_len; + u8 change_status; + u8 rsvd3[3]; }; /************************ Lancer Read FW info **************/ @@ -1502,6 +1520,17 @@ struct be_cmd_resp_get_hsw_config { u32 rsvd; }; +/******************* get port names ***************/ +struct be_cmd_req_get_port_name { + struct be_cmd_req_hdr hdr; + u32 rsvd0; +}; + +struct be_cmd_resp_get_port_name { + struct be_cmd_req_hdr hdr; + u8 port_name[4]; +}; + /*************** HW Stats Get v1 **********************************/ #define BE_TXP_SW_SZ 48 struct be_port_rxf_stats_v1 { @@ -1656,7 +1685,7 @@ struct be_cmd_req_set_ext_fat_caps { }; extern int be_pci_fnum_get(struct be_adapter *adapter); -extern int be_cmd_POST(struct be_adapter *adapter); +extern int be_fw_wait_ready(struct be_adapter *adapter); extern int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr, u8 type, bool permanent, u32 if_handle, u32 pmac_id); extern int be_cmd_pmac_add(struct be_adapter *adapter, u8 *mac_addr, @@ -1664,8 +1693,7 @@ extern int be_cmd_pmac_add(struct be_adapter *adapter, u8 *mac_addr, extern int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 domain); extern int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, - u32 en_flags, u8 *mac, u32 *if_handle, u32 *pmac_id, - u32 domain); + u32 en_flags, u32 *if_handle, u32 domain); extern int be_cmd_if_destroy(struct be_adapter *adapter, int if_handle, u32 domain); extern int be_cmd_eq_create(struct be_adapter *adapter, @@ -1719,10 +1747,11 @@ extern int be_cmd_write_flashrom(struct be_adapter *adapter, struct be_dma_mem *cmd, u32 flash_oper, u32 flash_opcode, u32 buf_size); extern int lancer_cmd_write_object(struct be_adapter *adapter, - struct be_dma_mem *cmd, - u32 data_size, u32 data_offset, - const char *obj_name, - u32 *data_written, u8 *addn_status); + struct be_dma_mem *cmd, + u32 data_size, u32 data_offset, + const char *obj_name, + u32 *data_written, u8 *change_status, + u8 *addn_status); int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, u32 data_size, u32 data_offset, const char *obj_name, u32 *data_read, u32 *eof, u8 *addn_status); @@ -1745,14 +1774,15 @@ extern int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num, u8 loopback_type, u8 enable); extern int be_cmd_get_phy_info(struct be_adapter *adapter); extern int be_cmd_set_qos(struct be_adapter *adapter, u32 bps, u32 domain); -extern void be_detect_dump_ue(struct be_adapter *adapter); +extern void be_detect_error(struct be_adapter *adapter); extern int be_cmd_get_die_temperature(struct be_adapter *adapter); extern int be_cmd_get_cntl_attributes(struct be_adapter *adapter); extern int be_cmd_req_native_mode(struct be_adapter *adapter); extern int be_cmd_get_reg_len(struct be_adapter *adapter, u32 *log_size); extern void be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf); -extern int be_cmd_get_mac_from_list(struct be_adapter *adapter, u32 domain, - bool 
*pmac_id_active, u32 *pmac_id, u8 *mac); +extern int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac, + bool *pmac_id_active, u32 *pmac_id, + u8 domain); extern int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array, u8 mac_count, u32 domain); extern int be_cmd_set_hsw_config(struct be_adapter *adapter, u16 pvid, @@ -1765,4 +1795,7 @@ extern int be_cmd_get_ext_fat_capabilites(struct be_adapter *adapter, extern int be_cmd_set_ext_fat_capabilites(struct be_adapter *adapter, struct be_dma_mem *cmd, struct be_fat_conf_params *cfgs); +extern int lancer_wait_ready(struct be_adapter *adapter); +extern int lancer_test_and_set_rdy_state(struct be_adapter *adapter); +extern int be_cmd_query_port_name(struct be_adapter *adapter, u8 *port_name); diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c index 63e51d476900..e34be1c7ae8a 100644 --- a/drivers/net/ethernet/emulex/benet/be_ethtool.c +++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c @@ -648,7 +648,7 @@ be_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *ecmd) struct be_adapter *adapter = netdev_priv(netdev); int status; - if (ecmd->autoneg != 0) + if (ecmd->autoneg != adapter->phy.fc_autoneg) return -EINVAL; adapter->tx_fc = ecmd->tx_pause; adapter->rx_fc = ecmd->rx_pause; diff --git a/drivers/net/ethernet/emulex/benet/be_hw.h b/drivers/net/ethernet/emulex/benet/be_hw.h index d9fb0c501fa1..b755f7061dce 100644 --- a/drivers/net/ethernet/emulex/benet/be_hw.h +++ b/drivers/net/ethernet/emulex/benet/be_hw.h @@ -45,20 +45,19 @@ #define POST_STAGE_ARMFW_RDY 0xc000 /* FW is done with POST */ -/* Lancer SLIPORT_CONTROL SLIPORT_STATUS registers */ +/* Lancer SLIPORT registers */ #define SLIPORT_STATUS_OFFSET 0x404 #define SLIPORT_CONTROL_OFFSET 0x408 #define SLIPORT_ERROR1_OFFSET 0x40C #define SLIPORT_ERROR2_OFFSET 0x410 +#define PHYSDEV_CONTROL_OFFSET 0x414 #define SLIPORT_STATUS_ERR_MASK 0x80000000 #define SLIPORT_STATUS_RN_MASK 0x01000000 #define SLIPORT_STATUS_RDY_MASK 0x00800000 - - #define SLI_PORT_CONTROL_IP_MASK 0x08000000 - -#define PCICFG_CUST_SCRATCHPAD_CSR 0x1EC +#define PHYSDEV_CONTROL_FW_RESET_MASK 0x00000002 +#define PHYSDEV_CONTROL_INP_MASK 0x40000000 /********* Memory BAR register ************/ #define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET 0xfc diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 501dfa9c88ec..4d9677174490 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -155,7 +155,7 @@ static void be_intr_set(struct be_adapter *adapter, bool enable) { u32 reg, enabled; - if (adapter->eeh_err) + if (adapter->eeh_error) return; pci_read_config_dword(adapter->pdev, PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET, @@ -201,7 +201,7 @@ static void be_eq_notify(struct be_adapter *adapter, u16 qid, val |= ((qid & DB_EQ_RING_ID_EXT_MASK) << DB_EQ_RING_ID_EXT_MASK_SHIFT); - if (adapter->eeh_err) + if (adapter->eeh_error) return; if (arm) @@ -220,7 +220,7 @@ void be_cq_notify(struct be_adapter *adapter, u16 qid, bool arm, u16 num_popped) val |= ((qid & DB_CQ_RING_ID_EXT_MASK) << DB_CQ_RING_ID_EXT_MASK_SHIFT); - if (adapter->eeh_err) + if (adapter->eeh_error) return; if (arm) @@ -558,6 +558,7 @@ static inline void wrb_fill(struct be_eth_wrb *wrb, u64 addr, int len) wrb->frag_pa_hi = upper_32_bits(addr); wrb->frag_pa_lo = addr & 0xFFFFFFFF; wrb->frag_len = len & ETH_WRB_FRAG_LEN_MASK; + wrb->rsvd0 = 0; } static inline u16 be_get_tx_vlan_tag(struct 
be_adapter *adapter, @@ -576,6 +577,11 @@ static inline u16 be_get_tx_vlan_tag(struct be_adapter *adapter, return vlan_tag; } +static int be_vlan_tag_chk(struct be_adapter *adapter, struct sk_buff *skb) +{ + return vlan_tx_tag_present(skb) || adapter->pvid; +} + static void wrb_fill_hdr(struct be_adapter *adapter, struct be_eth_hdr_wrb *hdr, struct sk_buff *skb, u32 wrb_cnt, u32 len) { @@ -703,33 +709,56 @@ dma_err: return 0; } +static struct sk_buff *be_insert_vlan_in_pkt(struct be_adapter *adapter, + struct sk_buff *skb) +{ + u16 vlan_tag = 0; + + skb = skb_share_check(skb, GFP_ATOMIC); + if (unlikely(!skb)) + return skb; + + if (vlan_tx_tag_present(skb)) { + vlan_tag = be_get_tx_vlan_tag(adapter, skb); + __vlan_put_tag(skb, vlan_tag); + skb->vlan_tci = 0; + } + + return skb; +} + static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev) { struct be_adapter *adapter = netdev_priv(netdev); struct be_tx_obj *txo = &adapter->tx_obj[skb_get_queue_mapping(skb)]; struct be_queue_info *txq = &txo->q; + struct iphdr *ip = NULL; u32 wrb_cnt = 0, copied = 0; - u32 start = txq->head; + u32 start = txq->head, eth_hdr_len; bool dummy_wrb, stopped = false; - /* For vlan tagged pkts, BE - * 1) calculates checksum even when CSO is not requested - * 2) calculates checksum wrongly for padded pkt less than - * 60 bytes long. - * As a workaround disable TX vlan offloading in such cases. + eth_hdr_len = ntohs(skb->protocol) == ETH_P_8021Q ? + VLAN_ETH_HLEN : ETH_HLEN; + + /* HW has a bug which considers padding bytes as legal + * and modifies the IPv4 hdr's 'tot_len' field */ - if (unlikely(vlan_tx_tag_present(skb) && - (skb->ip_summed != CHECKSUM_PARTIAL || skb->len <= 60))) { - skb = skb_share_check(skb, GFP_ATOMIC); - if (unlikely(!skb)) - goto tx_drop; + if (skb->len <= 60 && be_vlan_tag_chk(adapter, skb) && + is_ipv4_pkt(skb)) { + ip = (struct iphdr *)ip_hdr(skb); + pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len)); + } - skb = __vlan_put_tag(skb, be_get_tx_vlan_tag(adapter, skb)); + /* HW has a bug wherein it will calculate CSUM for VLAN + * pkts even though it is disabled. + * Manually insert VLAN in pkt. + */ + if (skb->ip_summed != CHECKSUM_PARTIAL && + be_vlan_tag_chk(adapter, skb)) { + skb = be_insert_vlan_in_pkt(adapter, skb); if (unlikely(!skb)) goto tx_drop; - - skb->vlan_tci = 0; } wrb_cnt = wrb_cnt_for_skb(adapter, skb, &dummy_wrb); @@ -786,19 +815,12 @@ static int be_change_mtu(struct net_device *netdev, int new_mtu) * A max of 64 (BE_NUM_VLANS_SUPPORTED) vlans can be configured in BE. * If the user configures more, place BE in vlan promiscuous mode. 
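A concrete illustration of the padded-runt workaround in the be_xmit() hunk above (illustrative numbers, not taken from the patch): an empty UDP/IPv4 datagram is 14 bytes of Ethernet header plus 28 bytes of IP and UDP headers, and if it reaches the driver already padded out to the 60-byte minimum, ip->tot_len still reads 28. Because the skb is VLAN-tagged (or a pvid is configured) and no longer than 60 bytes, the new code calls pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len)), i.e. trims the frame back to 14 + 28 = 42 bytes, so the 18 pad bytes never reach the NIC and cannot be folded into tot_len by the hardware bug the comment describes.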
*/ -static int be_vid_config(struct be_adapter *adapter, bool vf, u32 vf_num) +static int be_vid_config(struct be_adapter *adapter) { - struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf_num]; - u16 vtag[BE_NUM_VLANS_SUPPORTED]; - u16 ntags = 0, i; + u16 vids[BE_NUM_VLANS_SUPPORTED]; + u16 num = 0, i; int status = 0; - if (vf) { - vtag[0] = cpu_to_le16(vf_cfg->vlan_tag); - status = be_cmd_vlan_config(adapter, vf_cfg->if_handle, vtag, - 1, 1, 0); - } - /* No need to further configure vids if in promiscuous mode */ if (adapter->promiscuous) return 0; @@ -809,10 +831,10 @@ static int be_vid_config(struct be_adapter *adapter, bool vf, u32 vf_num) /* Construct VLAN Table to give to HW */ for (i = 0; i < VLAN_N_VID; i++) if (adapter->vlan_tag[i]) - vtag[ntags++] = cpu_to_le16(i); + vids[num++] = cpu_to_le16(i); status = be_cmd_vlan_config(adapter, adapter->if_handle, - vtag, ntags, 1, 0); + vids, num, 1, 0); /* Set to VLAN promisc mode as setting VLAN filter failed */ if (status) { @@ -841,7 +863,7 @@ static int be_vlan_add_vid(struct net_device *netdev, u16 vid) adapter->vlan_tag[vid] = 1; if (adapter->vlans_added <= (adapter->max_vlans + 1)) - status = be_vid_config(adapter, false, 0); + status = be_vid_config(adapter); if (!status) adapter->vlans_added++; @@ -863,7 +885,7 @@ static int be_vlan_rem_vid(struct net_device *netdev, u16 vid) adapter->vlan_tag[vid] = 0; if (adapter->vlans_added <= adapter->max_vlans) - status = be_vid_config(adapter, false, 0); + status = be_vid_config(adapter); if (!status) adapter->vlans_added--; @@ -890,7 +912,7 @@ static void be_set_rx_mode(struct net_device *netdev) be_cmd_rx_filter(adapter, IFF_PROMISC, OFF); if (adapter->vlans_added) - be_vid_config(adapter, false, 0); + be_vid_config(adapter); } /* Enable multicast promisc if num configured exceeds what we support */ @@ -1057,13 +1079,16 @@ static int be_find_vfs(struct be_adapter *adapter, int vf_state) u16 offset, stride; pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV); + if (!pos) + return 0; pci_read_config_word(pdev, pos + PCI_SRIOV_VF_OFFSET, &offset); pci_read_config_word(pdev, pos + PCI_SRIOV_VF_STRIDE, &stride); dev = pci_get_device(pdev->vendor, PCI_ANY_ID, NULL); while (dev) { vf_fn = (pdev->devfn + offset + stride * vfs) & 0xFFFF; - if (dev->is_virtfn && dev->devfn == vf_fn) { + if (dev->is_virtfn && dev->devfn == vf_fn && + dev->bus->number == pdev->bus->number) { vfs++; if (dev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) assigned_vfs++; @@ -1203,16 +1228,16 @@ static void skb_fill_rx_data(struct be_rx_obj *rxo, struct sk_buff *skb, /* Copy data in the first descriptor of this completion */ curr_frag_len = min(rxcp->pkt_size, rx_frag_size); - /* Copy the header portion into skb_data */ - hdr_len = min(BE_HDR_LEN, curr_frag_len); - memcpy(skb->data, start, hdr_len); skb->len = curr_frag_len; if (curr_frag_len <= BE_HDR_LEN) { /* tiny packet */ + memcpy(skb->data, start, curr_frag_len); /* Complete packet has now been moved to data */ put_page(page_info->page); skb->data_len = 0; skb->tail += curr_frag_len; } else { + hdr_len = ETH_HLEN; + memcpy(skb->data, start, hdr_len); skb_shinfo(skb)->nr_frags = 1; skb_frag_set_page(skb, 0, page_info->page); skb_shinfo(skb)->frags[0].page_offset = @@ -1709,9 +1734,10 @@ static void be_evt_queues_destroy(struct be_adapter *adapter) int i; for_all_evt_queues(adapter, eqo, i) { - be_eq_clean(eqo); - if (eqo->q.created) + if (eqo->q.created) { + be_eq_clean(eqo); be_cmd_q_destroy(adapter, &eqo->q, QTYPE_EQ); + } be_queue_free(adapter, &eqo->q); } } @@ -1898,6 
+1924,12 @@ static int be_rx_cqs_create(struct be_adapter *adapter) */ adapter->num_rx_qs = (num_irqs(adapter) > 1) ? num_irqs(adapter) + 1 : 1; + if (adapter->num_rx_qs != MAX_RX_QS) { + rtnl_lock(); + netif_set_real_num_rx_queues(adapter->netdev, + adapter->num_rx_qs); + rtnl_unlock(); + } adapter->big_page_size = (1 << get_order(rx_frag_size)) * PAGE_SIZE; for_all_rx_queues(adapter, rxo, i) { @@ -2067,13 +2099,13 @@ int be_poll(struct napi_struct *napi, int budget) return max_work; } -void be_detect_dump_ue(struct be_adapter *adapter) +void be_detect_error(struct be_adapter *adapter) { u32 ue_lo = 0, ue_hi = 0, ue_lo_mask = 0, ue_hi_mask = 0; u32 sliport_status = 0, sliport_err1 = 0, sliport_err2 = 0; u32 i; - if (adapter->eeh_err || adapter->ue_detected) + if (be_crit_error(adapter)) return; if (lancer_chip(adapter)) { @@ -2094,16 +2126,24 @@ void be_detect_dump_ue(struct be_adapter *adapter) pci_read_config_dword(adapter->pdev, PCICFG_UE_STATUS_HI_MASK, &ue_hi_mask); - ue_lo = (ue_lo & (~ue_lo_mask)); - ue_hi = (ue_hi & (~ue_hi_mask)); + ue_lo = (ue_lo & ~ue_lo_mask); + ue_hi = (ue_hi & ~ue_hi_mask); } if (ue_lo || ue_hi || sliport_status & SLIPORT_STATUS_ERR_MASK) { - adapter->ue_detected = true; - adapter->eeh_err = true; + adapter->hw_error = true; + dev_err(&adapter->pdev->dev, + "Error detected in the card\n"); + } + + if (sliport_status & SLIPORT_STATUS_ERR_MASK) { + dev_err(&adapter->pdev->dev, + "ERR: sliport status 0x%x\n", sliport_status); + dev_err(&adapter->pdev->dev, + "ERR: sliport error1 0x%x\n", sliport_err1); dev_err(&adapter->pdev->dev, - "Unrecoverable error in the card\n"); + "ERR: sliport error2 0x%x\n", sliport_err2); } if (ue_lo) { @@ -2113,6 +2153,7 @@ void be_detect_dump_ue(struct be_adapter *adapter) "UE: %s bit set\n", ue_status_low_desc[i]); } } + if (ue_hi) { for (i = 0; ue_hi; ue_hi >>= 1, i++) { if (ue_hi & 1) @@ -2121,14 +2162,6 @@ void be_detect_dump_ue(struct be_adapter *adapter) } } - if (sliport_status & SLIPORT_STATUS_ERR_MASK) { - dev_err(&adapter->pdev->dev, - "sliport status 0x%x\n", sliport_status); - dev_err(&adapter->pdev->dev, - "sliport error1 0x%x\n", sliport_err1); - dev_err(&adapter->pdev->dev, - "sliport error2 0x%x\n", sliport_err2); - } } static void be_msix_disable(struct be_adapter *adapter) @@ -2141,12 +2174,14 @@ static void be_msix_disable(struct be_adapter *adapter) static uint be_num_rss_want(struct be_adapter *adapter) { + u32 num = 0; if ((adapter->function_caps & BE_FUNCTION_CAPS_RSS) && !sriov_want(adapter) && be_physfn(adapter) && - !be_is_mc(adapter)) - return (adapter->be3_native) ? BE3_MAX_RSS_QS : BE2_MAX_RSS_QS; - else - return 0; + !be_is_mc(adapter)) { + num = (adapter->be3_native) ? 
BE3_MAX_RSS_QS : BE2_MAX_RSS_QS; + num = min_t(u32, num, (u32)netif_get_num_default_rss_queues()); + } + return num; } static void be_msix_enable(struct be_adapter *adapter) @@ -2540,11 +2575,7 @@ static int be_clear(struct be_adapter *adapter) be_tx_queues_destroy(adapter); be_evt_queues_destroy(adapter); - /* tell fw we're done with firing cmds */ - be_cmd_fw_clean(adapter); - be_msix_disable(adapter); - pci_write_config_dword(adapter->pdev, PCICFG_CUST_SCRATCHPAD_CSR, 0); return 0; } @@ -2602,8 +2633,8 @@ static int be_vf_setup(struct be_adapter *adapter) cap_flags = en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST | BE_IF_FLAGS_MULTICAST; for_all_vfs(adapter, vf_cfg, vf) { - status = be_cmd_if_create(adapter, cap_flags, en_flags, NULL, - &vf_cfg->if_handle, NULL, vf + 1); + status = be_cmd_if_create(adapter, cap_flags, en_flags, + &vf_cfg->if_handle, vf + 1); if (status) goto err; } @@ -2643,29 +2674,43 @@ static void be_setup_init(struct be_adapter *adapter) adapter->phy.forced_port_speed = -1; } -static int be_add_mac_from_list(struct be_adapter *adapter, u8 *mac) +static int be_get_mac_addr(struct be_adapter *adapter, u8 *mac, u32 if_handle, + bool *active_mac, u32 *pmac_id) { - u32 pmac_id; - int status; - bool pmac_id_active; + int status = 0; - status = be_cmd_get_mac_from_list(adapter, 0, &pmac_id_active, - &pmac_id, mac); - if (status != 0) - goto do_none; + if (!is_zero_ether_addr(adapter->netdev->perm_addr)) { + memcpy(mac, adapter->netdev->dev_addr, ETH_ALEN); + if (!lancer_chip(adapter) && !be_physfn(adapter)) + *active_mac = true; + else + *active_mac = false; - if (pmac_id_active) { - status = be_cmd_mac_addr_query(adapter, mac, - MAC_ADDRESS_TYPE_NETWORK, - false, adapter->if_handle, pmac_id); + return status; + } - if (!status) - adapter->pmac_id[0] = pmac_id; + if (lancer_chip(adapter)) { + status = be_cmd_get_mac_from_list(adapter, mac, + active_mac, pmac_id, 0); + if (*active_mac) { + status = be_cmd_mac_addr_query(adapter, mac, + MAC_ADDRESS_TYPE_NETWORK, + false, if_handle, + *pmac_id); + } + } else if (be_physfn(adapter)) { + /* For BE3, for PF get permanent MAC */ + status = be_cmd_mac_addr_query(adapter, mac, + MAC_ADDRESS_TYPE_NETWORK, true, + 0, 0); + *active_mac = false; } else { - status = be_cmd_pmac_add(adapter, mac, - adapter->if_handle, &adapter->pmac_id[0], 0); + /* For BE3, for VF get soft MAC assigned by PF*/ + status = be_cmd_mac_addr_query(adapter, mac, + MAC_ADDRESS_TYPE_NETWORK, false, + if_handle, 0); + *active_mac = true; } -do_none: return status; } @@ -2686,12 +2731,12 @@ static int be_get_config(struct be_adapter *adapter) static int be_setup(struct be_adapter *adapter) { - struct net_device *netdev = adapter->netdev; struct device *dev = &adapter->pdev->dev; u32 cap_flags, en_flags; u32 tx_fc, rx_fc; int status; u8 mac[ETH_ALEN]; + bool active_mac; be_setup_init(adapter); @@ -2717,14 +2762,6 @@ static int be_setup(struct be_adapter *adapter) if (status) goto err; - memset(mac, 0, ETH_ALEN); - status = be_cmd_mac_addr_query(adapter, mac, MAC_ADDRESS_TYPE_NETWORK, - true /*permanent */, 0, 0); - if (status) - return status; - memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN); - memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN); - en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST | BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS; cap_flags = en_flags | BE_IF_FLAGS_MCAST_PROMISCUOUS | @@ -2734,27 +2771,36 @@ static int be_setup(struct be_adapter *adapter) cap_flags |= BE_IF_FLAGS_RSS; en_flags |= BE_IF_FLAGS_RSS; } + + if 
(lancer_chip(adapter) && !be_physfn(adapter)) { + en_flags = BE_IF_FLAGS_UNTAGGED | + BE_IF_FLAGS_BROADCAST | + BE_IF_FLAGS_MULTICAST; + cap_flags = en_flags; + } + status = be_cmd_if_create(adapter, cap_flags, en_flags, - netdev->dev_addr, &adapter->if_handle, - &adapter->pmac_id[0], 0); + &adapter->if_handle, 0); if (status != 0) goto err; - /* The VF's permanent mac queried from card is incorrect. - * For BEx: Query the mac configued by the PF using if_handle - * For Lancer: Get and use mac_list to obtain mac address. - */ - if (!be_physfn(adapter)) { - if (lancer_chip(adapter)) - status = be_add_mac_from_list(adapter, mac); - else - status = be_cmd_mac_addr_query(adapter, mac, - MAC_ADDRESS_TYPE_NETWORK, false, - adapter->if_handle, 0); - if (!status) { - memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN); - memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN); - } + memset(mac, 0, ETH_ALEN); + active_mac = false; + status = be_get_mac_addr(adapter, mac, adapter->if_handle, + &active_mac, &adapter->pmac_id[0]); + if (status != 0) + goto err; + + if (!active_mac) { + status = be_cmd_pmac_add(adapter, mac, adapter->if_handle, + &adapter->pmac_id[0], 0); + if (status != 0) + goto err; + } + + if (is_zero_ether_addr(adapter->netdev->dev_addr)) { + memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN); + memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN); } status = be_tx_qs_create(adapter); @@ -2763,7 +2809,8 @@ static int be_setup(struct be_adapter *adapter) be_cmd_get_fw_ver(adapter, adapter->fw_ver, NULL); - be_vid_config(adapter, false, 0); + if (adapter->vlans_added) + be_vid_config(adapter); be_set_rx_mode(adapter->netdev); @@ -2773,8 +2820,6 @@ static int be_setup(struct be_adapter *adapter) be_cmd_set_flow_control(adapter, adapter->tx_fc, adapter->rx_fc); - pcie_set_readrq(adapter->pdev, 4096); - if (be_physfn(adapter) && num_vfs) { if (adapter->dev_num_vfs) be_vf_setup(adapter); @@ -2788,8 +2833,6 @@ static int be_setup(struct be_adapter *adapter) schedule_delayed_work(&adapter->work, msecs_to_jiffies(1000)); adapter->flags |= BE_FLAGS_WORKER_SCHEDULED; - - pci_write_config_dword(adapter->pdev, PCICFG_CUST_SCRATCHPAD_CSR, 1); return 0; err: be_clear(adapter); @@ -3033,6 +3076,40 @@ static int get_ufigen_type(struct flash_file_hdr_g2 *fhdr) return 0; } +static int lancer_wait_idle(struct be_adapter *adapter) +{ +#define SLIPORT_IDLE_TIMEOUT 30 + u32 reg_val; + int status = 0, i; + + for (i = 0; i < SLIPORT_IDLE_TIMEOUT; i++) { + reg_val = ioread32(adapter->db + PHYSDEV_CONTROL_OFFSET); + if ((reg_val & PHYSDEV_CONTROL_INP_MASK) == 0) + break; + + ssleep(1); + } + + if (i == SLIPORT_IDLE_TIMEOUT) + status = -1; + + return status; +} + +static int lancer_fw_reset(struct be_adapter *adapter) +{ + int status = 0; + + status = lancer_wait_idle(adapter); + if (status) + return status; + + iowrite32(PHYSDEV_CONTROL_FW_RESET_MASK, adapter->db + + PHYSDEV_CONTROL_OFFSET); + + return status; +} + static int lancer_fw_download(struct be_adapter *adapter, const struct firmware *fw) { @@ -3047,6 +3124,7 @@ static int lancer_fw_download(struct be_adapter *adapter, u32 offset = 0; int status = 0; u8 add_status = 0; + u8 change_status; if (!IS_ALIGNED(fw->size, sizeof(u32))) { dev_err(&adapter->pdev->dev, @@ -3079,9 +3157,10 @@ static int lancer_fw_download(struct be_adapter *adapter, memcpy(dest_image_ptr, data_ptr, chunk_size); status = lancer_cmd_write_object(adapter, &flash_cmd, - chunk_size, offset, LANCER_FW_DOWNLOAD_LOCATION, - &data_written, &add_status); - + chunk_size, offset, + 
LANCER_FW_DOWNLOAD_LOCATION, + &data_written, &change_status, + &add_status); if (status) break; @@ -3093,8 +3172,10 @@ static int lancer_fw_download(struct be_adapter *adapter, if (!status) { /* Commit the FW written */ status = lancer_cmd_write_object(adapter, &flash_cmd, - 0, offset, LANCER_FW_DOWNLOAD_LOCATION, - &data_written, &add_status); + 0, offset, + LANCER_FW_DOWNLOAD_LOCATION, + &data_written, &change_status, + &add_status); } dma_free_coherent(&adapter->pdev->dev, flash_cmd.size, flash_cmd.va, @@ -3107,6 +3188,20 @@ static int lancer_fw_download(struct be_adapter *adapter, goto lancer_fw_exit; } + if (change_status == LANCER_FW_RESET_NEEDED) { + status = lancer_fw_reset(adapter); + if (status) { + dev_err(&adapter->pdev->dev, + "Adapter busy for FW reset.\n" + "New FW will not be active.\n"); + goto lancer_fw_exit; + } + } else if (change_status != LANCER_NO_RESET_NEEDED) { + dev_err(&adapter->pdev->dev, + "System reboot required for new FW" + " to be active\n"); + } + dev_info(&adapter->pdev->dev, "Firmware flashed successfully\n"); lancer_fw_exit: return status; @@ -3435,10 +3530,15 @@ static void __devexit be_remove(struct pci_dev *pdev) be_roce_dev_remove(adapter); + cancel_delayed_work_sync(&adapter->func_recovery_work); + unregister_netdev(adapter->netdev); be_clear(adapter); + /* tell fw we're done with firing cmds */ + be_cmd_fw_clean(adapter); + be_stats_cleanup(adapter); be_ctrl_cleanup(adapter); @@ -3530,6 +3630,9 @@ static int be_get_initial_config(struct be_adapter *adapter) if (be_is_wol_supported(adapter)) adapter->wol = true; + /* Must be a power of 2 or else MODULO will BUG_ON */ + adapter->be_get_temp_freq = 64; + level = be_get_fw_log_level(adapter); adapter->msg_enable = level <= FW_LOG_LEVEL_DEFAULT ? NETIF_MSG_HW : 0; @@ -3585,101 +3688,68 @@ static int be_dev_type_check(struct be_adapter *adapter) return 0; } -static int lancer_wait_ready(struct be_adapter *adapter) +static int lancer_recover_func(struct be_adapter *adapter) { -#define SLIPORT_READY_TIMEOUT 30 - u32 sliport_status; - int status = 0, i; + int status; - for (i = 0; i < SLIPORT_READY_TIMEOUT; i++) { - sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET); - if (sliport_status & SLIPORT_STATUS_RDY_MASK) - break; + status = lancer_test_and_set_rdy_state(adapter); + if (status) + goto err; - msleep(1000); - } + if (netif_running(adapter->netdev)) + be_close(adapter->netdev); - if (i == SLIPORT_READY_TIMEOUT) - status = -1; + be_clear(adapter); - return status; -} + adapter->hw_error = false; + adapter->fw_timeout = false; -static int lancer_test_and_set_rdy_state(struct be_adapter *adapter) -{ - int status; - u32 sliport_status, err, reset_needed; - status = lancer_wait_ready(adapter); - if (!status) { - sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET); - err = sliport_status & SLIPORT_STATUS_ERR_MASK; - reset_needed = sliport_status & SLIPORT_STATUS_RN_MASK; - if (err && reset_needed) { - iowrite32(SLI_PORT_CONTROL_IP_MASK, - adapter->db + SLIPORT_CONTROL_OFFSET); - - /* check adapter has corrected the error */ - status = lancer_wait_ready(adapter); - sliport_status = ioread32(adapter->db + - SLIPORT_STATUS_OFFSET); - sliport_status &= (SLIPORT_STATUS_ERR_MASK | - SLIPORT_STATUS_RN_MASK); - if (status || sliport_status) - status = -1; - } else if (err || reset_needed) { - status = -1; - } + status = be_setup(adapter); + if (status) + goto err; + + if (netif_running(adapter->netdev)) { + status = be_open(adapter->netdev); + if (status) + goto err; } + + 
dev_err(&adapter->pdev->dev, + "Adapter SLIPORT recovery succeeded\n"); + return 0; +err: + dev_err(&adapter->pdev->dev, + "Adapter SLIPORT recovery failed\n"); + return status; } -static void lancer_test_and_recover_fn_err(struct be_adapter *adapter) +static void be_func_recovery_task(struct work_struct *work) { + struct be_adapter *adapter = + container_of(work, struct be_adapter, func_recovery_work.work); int status; - u32 sliport_status; - - if (adapter->eeh_err || adapter->ue_detected) - return; - sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET); + be_detect_error(adapter); - if (sliport_status & SLIPORT_STATUS_ERR_MASK) { - dev_err(&adapter->pdev->dev, - "Adapter in error state." - "Trying to recover.\n"); + if (adapter->hw_error && lancer_chip(adapter)) { - status = lancer_test_and_set_rdy_state(adapter); - if (status) - goto err; + if (adapter->eeh_error) + goto out; + rtnl_lock(); netif_device_detach(adapter->netdev); + rtnl_unlock(); - if (netif_running(adapter->netdev)) - be_close(adapter->netdev); - - be_clear(adapter); - - adapter->fw_timeout = false; - - status = be_setup(adapter); - if (status) - goto err; - - if (netif_running(adapter->netdev)) { - status = be_open(adapter->netdev); - if (status) - goto err; - } - - netif_device_attach(adapter->netdev); + status = lancer_recover_func(adapter); - dev_err(&adapter->pdev->dev, - "Adapter error recovery succeeded\n"); + if (!status) + netif_device_attach(adapter->netdev); } - return; -err: - dev_err(&adapter->pdev->dev, - "Adapter error recovery failed\n"); + +out: + schedule_delayed_work(&adapter->func_recovery_work, + msecs_to_jiffies(1000)); } static void be_worker(struct work_struct *work) @@ -3690,11 +3760,6 @@ static void be_worker(struct work_struct *work) struct be_eq_obj *eqo; int i; - if (lancer_chip(adapter)) - lancer_test_and_recover_fn_err(adapter); - - be_detect_dump_ue(adapter); - /* when interrupts are not yet enabled, just reap any pending * mcc completions */ if (!netif_running(adapter->netdev)) { @@ -3710,6 +3775,9 @@ static void be_worker(struct work_struct *work) be_cmd_get_stats(adapter, &adapter->stats_cmd); } + if (MODULO(adapter->work_counter, adapter->be_get_temp_freq) == 0) + be_cmd_get_die_temperature(adapter); + for_all_rx_queues(adapter, rxo, i) { if (rxo->rx_post_starved) { rxo->rx_post_starved = false; @@ -3727,10 +3795,7 @@ reschedule: static bool be_reset_required(struct be_adapter *adapter) { - u32 reg; - - pci_read_config_dword(adapter->pdev, PCICFG_CUST_SCRATCHPAD_CSR, ®); - return reg; + return be_find_vfs(adapter, ENABLED) > 0 ? 
false : true; } static int __devinit be_probe(struct pci_dev *pdev, @@ -3739,6 +3804,7 @@ static int __devinit be_probe(struct pci_dev *pdev, int status = 0; struct be_adapter *adapter; struct net_device *netdev; + char port_name; status = pci_enable_device(pdev); if (status) @@ -3749,7 +3815,7 @@ static int __devinit be_probe(struct pci_dev *pdev, goto disable_dev; pci_set_master(pdev); - netdev = alloc_etherdev_mq(sizeof(struct be_adapter), MAX_TX_QS); + netdev = alloc_etherdev_mqs(sizeof(*adapter), MAX_TX_QS, MAX_RX_QS); if (netdev == NULL) { status = -ENOMEM; goto rel_reg; @@ -3780,22 +3846,9 @@ static int __devinit be_probe(struct pci_dev *pdev, if (status) goto free_netdev; - if (lancer_chip(adapter)) { - status = lancer_wait_ready(adapter); - if (!status) { - iowrite32(SLI_PORT_CONTROL_IP_MASK, - adapter->db + SLIPORT_CONTROL_OFFSET); - status = lancer_test_and_set_rdy_state(adapter); - } - if (status) { - dev_err(&pdev->dev, "Adapter in non recoverable error\n"); - goto ctrl_clean; - } - } - /* sync up with fw's ready state */ if (be_physfn(adapter)) { - status = be_cmd_POST(adapter); + status = be_fw_wait_ready(adapter); if (status) goto ctrl_clean; } @@ -3826,6 +3879,7 @@ static int __devinit be_probe(struct pci_dev *pdev, goto stats_clean; INIT_DELAYED_WORK(&adapter->work, be_worker); + INIT_DELAYED_WORK(&adapter->func_recovery_work, be_func_recovery_task); adapter->rx_fc = adapter->tx_fc = true; status = be_setup(adapter); @@ -3839,8 +3893,13 @@ static int __devinit be_probe(struct pci_dev *pdev, be_roce_dev_add(adapter); - dev_info(&pdev->dev, "%s: %s port %d\n", netdev->name, nic_name(pdev), - adapter->port_num); + schedule_delayed_work(&adapter->func_recovery_work, + msecs_to_jiffies(1000)); + + be_cmd_query_port_name(adapter, &port_name); + + dev_info(&pdev->dev, "%s: %s port %c\n", netdev->name, nic_name(pdev), + port_name); return 0; @@ -3872,6 +3931,8 @@ static int be_suspend(struct pci_dev *pdev, pm_message_t state) if (adapter->wol) be_setup_wol(adapter, true); + cancel_delayed_work_sync(&adapter->func_recovery_work); + netif_device_detach(netdev); if (netif_running(netdev)) { rtnl_lock(); @@ -3912,6 +3973,9 @@ static int be_resume(struct pci_dev *pdev) be_open(netdev); rtnl_unlock(); } + + schedule_delayed_work(&adapter->func_recovery_work, + msecs_to_jiffies(1000)); netif_device_attach(netdev); if (adapter->wol) @@ -3931,6 +3995,7 @@ static void be_shutdown(struct pci_dev *pdev) return; cancel_delayed_work_sync(&adapter->work); + cancel_delayed_work_sync(&adapter->func_recovery_work); netif_device_detach(adapter->netdev); @@ -3950,9 +4015,13 @@ static pci_ers_result_t be_eeh_err_detected(struct pci_dev *pdev, dev_err(&adapter->pdev->dev, "EEH error detected\n"); - adapter->eeh_err = true; + adapter->eeh_error = true; + cancel_delayed_work_sync(&adapter->func_recovery_work); + + rtnl_lock(); netif_device_detach(netdev); + rtnl_unlock(); if (netif_running(netdev)) { rtnl_lock(); @@ -3980,9 +4049,7 @@ static pci_ers_result_t be_eeh_reset(struct pci_dev *pdev) int status; dev_info(&adapter->pdev->dev, "EEH reset\n"); - adapter->eeh_err = false; - adapter->ue_detected = false; - adapter->fw_timeout = false; + be_clear_all_error(adapter); status = pci_enable_device(pdev); if (status) @@ -3993,7 +4060,7 @@ static pci_ers_result_t be_eeh_reset(struct pci_dev *pdev) pci_restore_state(pdev); /* Check if card is ok and fw is ready */ - status = be_cmd_POST(adapter); + status = be_fw_wait_ready(adapter); if (status) return PCI_ERS_RESULT_DISCONNECT; @@ -4015,6 +4082,10 @@ static 
void be_eeh_resume(struct pci_dev *pdev) if (status) goto err; + status = be_cmd_reset_function(adapter); + if (status) + goto err; + status = be_setup(adapter); if (status) goto err; @@ -4024,6 +4095,9 @@ static void be_eeh_resume(struct pci_dev *pdev) if (status) goto err; } + + schedule_delayed_work(&adapter->func_recovery_work, + msecs_to_jiffies(1000)); netif_device_attach(netdev); return; err: diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c index a38167810546..94b7bfcdb24e 100644 --- a/drivers/net/ethernet/ethoc.c +++ b/drivers/net/ethernet/ethoc.c @@ -902,7 +902,7 @@ static const struct net_device_ops ethoc_netdev_ops = { }; /** - * ethoc_probe() - initialize OpenCores ethernet MAC + * ethoc_probe - initialize OpenCores ethernet MAC * pdev: platform device */ static int __devinit ethoc_probe(struct platform_device *pdev) @@ -1057,7 +1057,7 @@ static int __devinit ethoc_probe(struct platform_device *pdev) /* Check the MAC again for validity, if it still isn't choose and * program a random one. */ if (!is_valid_ether_addr(netdev->dev_addr)) { - random_ether_addr(netdev->dev_addr); + eth_random_addr(netdev->dev_addr); random_mac = true; } @@ -1140,7 +1140,7 @@ out: } /** - * ethoc_remove() - shutdown OpenCores ethernet MAC + * ethoc_remove - shutdown OpenCores ethernet MAC * @pdev: platform device */ static int __devexit ethoc_remove(struct platform_device *pdev) diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c index 16b07048274c..74d749e29aab 100644 --- a/drivers/net/ethernet/faraday/ftgmac100.c +++ b/drivers/net/ethernet/faraday/ftgmac100.c @@ -479,9 +479,14 @@ static bool ftgmac100_rx_packet(struct ftgmac100 *priv, int *processed) rxdes = ftgmac100_current_rxdes(priv); } while (!done); - if (skb->len <= 64) + /* Small frames are copied into linear part of skb to free one page */ + if (skb->len <= 128) { skb->truesize -= PAGE_SIZE; - __pskb_pull_tail(skb, min(skb->len, 64U)); + __pskb_pull_tail(skb, skb->len); + } else { + /* We pull the minimum amount into linear part */ + __pskb_pull_tail(skb, ETH_HLEN); + } skb->protocol = eth_type_trans(skb, netdev); netdev->stats.rx_packets++; diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c index 829b1092fd78..b901a01e3fa5 100644 --- a/drivers/net/ethernet/faraday/ftmac100.c +++ b/drivers/net/ethernet/faraday/ftmac100.c @@ -441,11 +441,14 @@ static bool ftmac100_rx_packet(struct ftmac100 *priv, int *processed) skb->len += length; skb->data_len += length; - /* page might be freed in __pskb_pull_tail() */ - if (length > 64) + if (length > 128) { skb->truesize += PAGE_SIZE; - __pskb_pull_tail(skb, min(length, 64)); - + /* We pull the minimum amount into linear part */ + __pskb_pull_tail(skb, ETH_HLEN); + } else { + /* Small frames are copied into linear part to free one page */ + __pskb_pull_tail(skb, length); + } ftmac100_alloc_rx_page(priv, rxdes, GFP_ATOMIC); ftmac100_rx_pointer_advance(priv); diff --git a/drivers/net/ethernet/freescale/fec.c b/drivers/net/ethernet/freescale/fec.c index ff7f4c5115a1..fffd20528b5d 100644 --- a/drivers/net/ethernet/freescale/fec.c +++ b/drivers/net/ethernet/freescale/fec.c @@ -49,6 +49,7 @@ #include <linux/of_gpio.h> #include <linux/of_net.h> #include <linux/pinctrl/consumer.h> +#include <linux/regulator/consumer.h> #include <asm/cacheflush.h> @@ -1388,8 +1389,8 @@ fec_set_mac_address(struct net_device *ndev, void *p) } #ifdef CONFIG_NET_POLL_CONTROLLER -/* - * fec_poll_controller: FEC Poll 
controller function +/** + * fec_poll_controller - FEC Poll controller function * @dev: The FEC network adapter * * Polled functionality used by netconsole and others in non interrupt mode @@ -1506,18 +1507,25 @@ static int __devinit fec_get_phy_mode_dt(struct platform_device *pdev) static void __devinit fec_reset_phy(struct platform_device *pdev) { int err, phy_reset; + int msec = 1; struct device_node *np = pdev->dev.of_node; if (!np) return; + of_property_read_u32(np, "phy-reset-duration", &msec); + /* A sane reset duration should not be longer than 1s */ + if (msec > 1000) + msec = 1; + phy_reset = of_get_named_gpio(np, "phy-reset-gpios", 0); - err = gpio_request_one(phy_reset, GPIOF_OUT_INIT_LOW, "phy-reset"); + err = devm_gpio_request_one(&pdev->dev, phy_reset, + GPIOF_OUT_INIT_LOW, "phy-reset"); if (err) { pr_debug("FEC: failed to get gpio phy-reset: %d\n", err); return; } - msleep(1); + msleep(msec); gpio_set_value(phy_reset, 1); } #else /* CONFIG_OF */ @@ -1546,6 +1554,7 @@ fec_probe(struct platform_device *pdev) const struct of_device_id *of_id; static int dev_id; struct pinctrl *pinctrl; + struct regulator *reg_phy; of_id = of_match_device(fec_dt_ids, &pdev->dev); if (of_id) @@ -1593,8 +1602,6 @@ fec_probe(struct platform_device *pdev) fep->phy_interface = ret; } - fec_reset_phy(pdev); - for (i = 0; i < FEC_IRQ_NUM; i++) { irq = platform_get_irq(pdev, i); if (irq < 0) { @@ -1634,6 +1641,18 @@ fec_probe(struct platform_device *pdev) clk_prepare_enable(fep->clk_ahb); clk_prepare_enable(fep->clk_ipg); + reg_phy = devm_regulator_get(&pdev->dev, "phy"); + if (!IS_ERR(reg_phy)) { + ret = regulator_enable(reg_phy); + if (ret) { + dev_err(&pdev->dev, + "Failed to enable phy regulator: %d\n", ret); + goto failed_regulator; + } + } + + fec_reset_phy(pdev); + ret = fec_enet_init(ndev); if (ret) goto failed_init; @@ -1655,6 +1674,7 @@ failed_register: fec_enet_mii_remove(fep); failed_mii_init: failed_init: +failed_regulator: clk_disable_unprepare(fep->clk_ahb); clk_disable_unprepare(fep->clk_ipg); failed_pin: diff --git a/drivers/net/ethernet/freescale/fsl_pq_mdio.c b/drivers/net/ethernet/freescale/fsl_pq_mdio.c index f7f0bf5d037b..9527b28d70d1 100644 --- a/drivers/net/ethernet/freescale/fsl_pq_mdio.c +++ b/drivers/net/ethernet/freescale/fsl_pq_mdio.c @@ -47,6 +47,9 @@ #include "gianfar.h" #include "fsl_pq_mdio.h" +/* Number of microseconds to wait for an MII register to respond */ +#define MII_TIMEOUT 1000 + struct fsl_pq_mdio_priv { void __iomem *map; struct fsl_pq_mdio __iomem *regs; @@ -64,6 +67,8 @@ struct fsl_pq_mdio_priv { int fsl_pq_local_mdio_write(struct fsl_pq_mdio __iomem *regs, int mii_id, int regnum, u16 value) { + u32 status; + /* Set the PHY address and the register address we want to write */ out_be32(®s->miimadd, (mii_id << 8) | regnum); @@ -71,10 +76,10 @@ int fsl_pq_local_mdio_write(struct fsl_pq_mdio __iomem *regs, int mii_id, out_be32(®s->miimcon, value); /* Wait for the transaction to finish */ - while (in_be32(®s->miimind) & MIIMIND_BUSY) - cpu_relax(); + status = spin_event_timeout(!(in_be32(®s->miimind) & MIIMIND_BUSY), + MII_TIMEOUT, 0); - return 0; + return status ? 
0 : -ETIMEDOUT; } /* @@ -91,6 +96,7 @@ int fsl_pq_local_mdio_read(struct fsl_pq_mdio __iomem *regs, int mii_id, int regnum) { u16 value; + u32 status; /* Set the PHY address and the register address we want to read */ out_be32(®s->miimadd, (mii_id << 8) | regnum); @@ -99,9 +105,12 @@ int fsl_pq_local_mdio_read(struct fsl_pq_mdio __iomem *regs, out_be32(®s->miimcom, 0); out_be32(®s->miimcom, MII_READ_COMMAND); - /* Wait for the transaction to finish */ - while (in_be32(®s->miimind) & (MIIMIND_NOTVALID | MIIMIND_BUSY)) - cpu_relax(); + /* Wait for the transaction to finish, normally less than 100us */ + status = spin_event_timeout(!(in_be32(®s->miimind) & + (MIIMIND_NOTVALID | MIIMIND_BUSY)), + MII_TIMEOUT, 0); + if (!status) + return -ETIMEDOUT; /* Grab the value of the register from miimstat */ value = in_be32(®s->miimstat); @@ -144,7 +153,7 @@ int fsl_pq_mdio_read(struct mii_bus *bus, int mii_id, int regnum) static int fsl_pq_mdio_reset(struct mii_bus *bus) { struct fsl_pq_mdio __iomem *regs = fsl_pq_mdio_get_regs(bus); - int timeout = PHY_INIT_TIMEOUT; + u32 status; mutex_lock(&bus->mdio_lock); @@ -155,12 +164,12 @@ static int fsl_pq_mdio_reset(struct mii_bus *bus) out_be32(®s->miimcfg, MIIMCFG_INIT_VALUE); /* Wait until the bus is free */ - while ((in_be32(®s->miimind) & MIIMIND_BUSY) && timeout--) - cpu_relax(); + status = spin_event_timeout(!(in_be32(®s->miimind) & MIIMIND_BUSY), + MII_TIMEOUT, 0); mutex_unlock(&bus->mdio_lock); - if (timeout < 0) { + if (!status) { printk(KERN_ERR "%s: The MII Bus is stuck!\n", bus->name); return -EBUSY; diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c index ab1d80ff0791..4605f7246687 100644 --- a/drivers/net/ethernet/freescale/gianfar.c +++ b/drivers/net/ethernet/freescale/gianfar.c @@ -1,5 +1,4 @@ -/* - * drivers/net/ethernet/freescale/gianfar.c +/* drivers/net/ethernet/freescale/gianfar.c * * Gianfar Ethernet Driver * This driver is designed for the non-CPM ethernet controllers @@ -114,7 +113,7 @@ static void gfar_timeout(struct net_device *dev); static int gfar_close(struct net_device *dev); struct sk_buff *gfar_new_skb(struct net_device *dev); static void gfar_new_rxbdp(struct gfar_priv_rx_q *rx_queue, struct rxbd8 *bdp, - struct sk_buff *skb); + struct sk_buff *skb); static int gfar_set_mac_address(struct net_device *dev); static int gfar_change_mtu(struct net_device *dev, int new_mtu); static irqreturn_t gfar_error(int irq, void *dev_id); @@ -266,8 +265,8 @@ static int gfar_alloc_skb_resources(struct net_device *ndev) tx_queue->tx_bd_dma_base = addr; tx_queue->dev = ndev; /* enet DMA only understands physical addresses */ - addr += sizeof(struct txbd8) *tx_queue->tx_ring_size; - vaddr += sizeof(struct txbd8) *tx_queue->tx_ring_size; + addr += sizeof(struct txbd8) * tx_queue->tx_ring_size; + vaddr += sizeof(struct txbd8) * tx_queue->tx_ring_size; } /* Start the rx descriptor ring where the tx ring leaves off */ @@ -276,15 +275,16 @@ static int gfar_alloc_skb_resources(struct net_device *ndev) rx_queue->rx_bd_base = vaddr; rx_queue->rx_bd_dma_base = addr; rx_queue->dev = ndev; - addr += sizeof (struct rxbd8) * rx_queue->rx_ring_size; - vaddr += sizeof (struct rxbd8) * rx_queue->rx_ring_size; + addr += sizeof(struct rxbd8) * rx_queue->rx_ring_size; + vaddr += sizeof(struct rxbd8) * rx_queue->rx_ring_size; } /* Setup the skbuff rings */ for (i = 0; i < priv->num_tx_queues; i++) { tx_queue = priv->tx_queue[i]; tx_queue->tx_skbuff = kmalloc(sizeof(*tx_queue->tx_skbuff) * - tx_queue->tx_ring_size, 
GFP_KERNEL); + tx_queue->tx_ring_size, + GFP_KERNEL); if (!tx_queue->tx_skbuff) { netif_err(priv, ifup, ndev, "Could not allocate tx_skbuff\n"); @@ -298,7 +298,8 @@ static int gfar_alloc_skb_resources(struct net_device *ndev) for (i = 0; i < priv->num_rx_queues; i++) { rx_queue = priv->rx_queue[i]; rx_queue->rx_skbuff = kmalloc(sizeof(*rx_queue->rx_skbuff) * - rx_queue->rx_ring_size, GFP_KERNEL); + rx_queue->rx_ring_size, + GFP_KERNEL); if (!rx_queue->rx_skbuff) { netif_err(priv, ifup, ndev, @@ -327,15 +328,15 @@ static void gfar_init_tx_rx_base(struct gfar_private *priv) int i; baddr = ®s->tbase0; - for(i = 0; i < priv->num_tx_queues; i++) { + for (i = 0; i < priv->num_tx_queues; i++) { gfar_write(baddr, priv->tx_queue[i]->tx_bd_dma_base); - baddr += 2; + baddr += 2; } baddr = ®s->rbase0; - for(i = 0; i < priv->num_rx_queues; i++) { + for (i = 0; i < priv->num_rx_queues; i++) { gfar_write(baddr, priv->rx_queue[i]->rx_bd_dma_base); - baddr += 2; + baddr += 2; } } @@ -405,7 +406,8 @@ static void gfar_init_mac(struct net_device *ndev) gfar_write(®s->attreli, attrs); /* Start with defaults, and add stashing or locking - * depending on the approprate variables */ + * depending on the approprate variables + */ attrs = ATTR_INIT_SETTINGS; if (priv->bd_stash_en) @@ -426,16 +428,16 @@ static struct net_device_stats *gfar_get_stats(struct net_device *dev) struct gfar_private *priv = netdev_priv(dev); unsigned long rx_packets = 0, rx_bytes = 0, rx_dropped = 0; unsigned long tx_packets = 0, tx_bytes = 0; - int i = 0; + int i; for (i = 0; i < priv->num_rx_queues; i++) { rx_packets += priv->rx_queue[i]->stats.rx_packets; - rx_bytes += priv->rx_queue[i]->stats.rx_bytes; + rx_bytes += priv->rx_queue[i]->stats.rx_bytes; rx_dropped += priv->rx_queue[i]->stats.rx_dropped; } dev->stats.rx_packets = rx_packets; - dev->stats.rx_bytes = rx_bytes; + dev->stats.rx_bytes = rx_bytes; dev->stats.rx_dropped = rx_dropped; for (i = 0; i < priv->num_tx_queues; i++) { @@ -443,7 +445,7 @@ static struct net_device_stats *gfar_get_stats(struct net_device *dev) tx_packets += priv->tx_queue[i]->stats.tx_packets; } - dev->stats.tx_bytes = tx_bytes; + dev->stats.tx_bytes = tx_bytes; dev->stats.tx_packets = tx_packets; return &dev->stats; @@ -468,7 +470,7 @@ static const struct net_device_ops gfar_netdev_ops = { void lock_rx_qs(struct gfar_private *priv) { - int i = 0x0; + int i; for (i = 0; i < priv->num_rx_queues; i++) spin_lock(&priv->rx_queue[i]->rxlock); @@ -476,7 +478,7 @@ void lock_rx_qs(struct gfar_private *priv) void lock_tx_qs(struct gfar_private *priv) { - int i = 0x0; + int i; for (i = 0; i < priv->num_tx_queues; i++) spin_lock(&priv->tx_queue[i]->txlock); @@ -484,7 +486,7 @@ void lock_tx_qs(struct gfar_private *priv) void unlock_rx_qs(struct gfar_private *priv) { - int i = 0x0; + int i; for (i = 0; i < priv->num_rx_queues; i++) spin_unlock(&priv->rx_queue[i]->rxlock); @@ -492,7 +494,7 @@ void unlock_rx_qs(struct gfar_private *priv) void unlock_tx_qs(struct gfar_private *priv) { - int i = 0x0; + int i; for (i = 0; i < priv->num_tx_queues; i++) spin_unlock(&priv->tx_queue[i]->txlock); @@ -508,13 +510,13 @@ static bool gfar_is_vlan_on(struct gfar_private *priv) static inline int gfar_uses_fcb(struct gfar_private *priv) { return gfar_is_vlan_on(priv) || - (priv->ndev->features & NETIF_F_RXCSUM) || - (priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER); + (priv->ndev->features & NETIF_F_RXCSUM) || + (priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER); } static void free_tx_pointers(struct gfar_private *priv) { - int i = 0; + 
int i; for (i = 0; i < priv->num_tx_queues; i++) kfree(priv->tx_queue[i]); @@ -522,7 +524,7 @@ static void free_tx_pointers(struct gfar_private *priv) static void free_rx_pointers(struct gfar_private *priv) { - int i = 0; + int i; for (i = 0; i < priv->num_rx_queues; i++) kfree(priv->rx_queue[i]); @@ -530,7 +532,7 @@ static void free_rx_pointers(struct gfar_private *priv) static void unmap_group_regs(struct gfar_private *priv) { - int i = 0; + int i; for (i = 0; i < MAXGROUPS; i++) if (priv->gfargrp[i].regs) @@ -539,7 +541,7 @@ static void unmap_group_regs(struct gfar_private *priv) static void disable_napi(struct gfar_private *priv) { - int i = 0; + int i; for (i = 0; i < priv->num_grps; i++) napi_disable(&priv->gfargrp[i].napi); @@ -547,14 +549,14 @@ static void disable_napi(struct gfar_private *priv) static void enable_napi(struct gfar_private *priv) { - int i = 0; + int i; for (i = 0; i < priv->num_grps; i++) napi_enable(&priv->gfargrp[i].napi); } static int gfar_parse_group(struct device_node *np, - struct gfar_private *priv, const char *model) + struct gfar_private *priv, const char *model) { u32 *queue_mask; @@ -580,15 +582,13 @@ static int gfar_parse_group(struct device_node *np, priv->gfargrp[priv->num_grps].grp_id = priv->num_grps; priv->gfargrp[priv->num_grps].priv = priv; spin_lock_init(&priv->gfargrp[priv->num_grps].grplock); - if(priv->mode == MQ_MG_MODE) { - queue_mask = (u32 *)of_get_property(np, - "fsl,rx-bit-map", NULL); - priv->gfargrp[priv->num_grps].rx_bit_map = - queue_mask ? *queue_mask :(DEFAULT_MAPPING >> priv->num_grps); - queue_mask = (u32 *)of_get_property(np, - "fsl,tx-bit-map", NULL); - priv->gfargrp[priv->num_grps].tx_bit_map = - queue_mask ? *queue_mask : (DEFAULT_MAPPING >> priv->num_grps); + if (priv->mode == MQ_MG_MODE) { + queue_mask = (u32 *)of_get_property(np, "fsl,rx-bit-map", NULL); + priv->gfargrp[priv->num_grps].rx_bit_map = queue_mask ? + *queue_mask : (DEFAULT_MAPPING >> priv->num_grps); + queue_mask = (u32 *)of_get_property(np, "fsl,tx-bit-map", NULL); + priv->gfargrp[priv->num_grps].tx_bit_map = queue_mask ? 
+ *queue_mask : (DEFAULT_MAPPING >> priv->num_grps); } else { priv->gfargrp[priv->num_grps].rx_bit_map = 0xFF; priv->gfargrp[priv->num_grps].tx_bit_map = 0xFF; @@ -652,7 +652,7 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) priv->num_rx_queues = num_rx_qs; priv->num_grps = 0x0; - /* Init Rx queue filer rule set linked list*/ + /* Init Rx queue filer rule set linked list */ INIT_LIST_HEAD(&priv->rx_list.list); priv->rx_list.count = 0; mutex_init(&priv->rx_queue_access); @@ -673,7 +673,7 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) } else { priv->mode = SQ_SG_MODE; err = gfar_parse_group(np, priv, model); - if(err) + if (err) goto err_grp_init; } @@ -730,27 +730,27 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) priv->device_flags |= FSL_GIANFAR_DEV_HAS_BUF_STASHING; mac_addr = of_get_mac_address(np); + if (mac_addr) memcpy(dev->dev_addr, mac_addr, ETH_ALEN); if (model && !strcasecmp(model, "TSEC")) - priv->device_flags = - FSL_GIANFAR_DEV_HAS_GIGABIT | - FSL_GIANFAR_DEV_HAS_COALESCE | - FSL_GIANFAR_DEV_HAS_RMON | - FSL_GIANFAR_DEV_HAS_MULTI_INTR; + priv->device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT | + FSL_GIANFAR_DEV_HAS_COALESCE | + FSL_GIANFAR_DEV_HAS_RMON | + FSL_GIANFAR_DEV_HAS_MULTI_INTR; + if (model && !strcasecmp(model, "eTSEC")) - priv->device_flags = - FSL_GIANFAR_DEV_HAS_GIGABIT | - FSL_GIANFAR_DEV_HAS_COALESCE | - FSL_GIANFAR_DEV_HAS_RMON | - FSL_GIANFAR_DEV_HAS_MULTI_INTR | - FSL_GIANFAR_DEV_HAS_PADDING | - FSL_GIANFAR_DEV_HAS_CSUM | - FSL_GIANFAR_DEV_HAS_VLAN | - FSL_GIANFAR_DEV_HAS_MAGIC_PACKET | - FSL_GIANFAR_DEV_HAS_EXTENDED_HASH | - FSL_GIANFAR_DEV_HAS_TIMER; + priv->device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT | + FSL_GIANFAR_DEV_HAS_COALESCE | + FSL_GIANFAR_DEV_HAS_RMON | + FSL_GIANFAR_DEV_HAS_MULTI_INTR | + FSL_GIANFAR_DEV_HAS_PADDING | + FSL_GIANFAR_DEV_HAS_CSUM | + FSL_GIANFAR_DEV_HAS_VLAN | + FSL_GIANFAR_DEV_HAS_MAGIC_PACKET | + FSL_GIANFAR_DEV_HAS_EXTENDED_HASH | + FSL_GIANFAR_DEV_HAS_TIMER; ctype = of_get_property(np, "phy-connection-type", NULL); @@ -781,7 +781,7 @@ err_grp_init: } static int gfar_hwtstamp_ioctl(struct net_device *netdev, - struct ifreq *ifr, int cmd) + struct ifreq *ifr, int cmd) { struct hwtstamp_config config; struct gfar_private *priv = netdev_priv(netdev); @@ -851,6 +851,7 @@ static unsigned int reverse_bitmap(unsigned int bit_map, unsigned int max_qs) { unsigned int new_bit_map = 0x0; int mask = 0x1 << (max_qs - 1), i; + for (i = 0; i < max_qs; i++) { if (bit_map & mask) new_bit_map = new_bit_map + (1 << i); @@ -936,22 +937,22 @@ static void gfar_detect_errata(struct gfar_private *priv) /* MPC8313 Rev 2.0 and higher; All MPC837x */ if ((pvr == 0x80850010 && mod == 0x80b0 && rev >= 0x0020) || - (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) + (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) priv->errata |= GFAR_ERRATA_74; /* MPC8313 and MPC837x all rev */ if ((pvr == 0x80850010 && mod == 0x80b0) || - (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) + (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) priv->errata |= GFAR_ERRATA_76; /* MPC8313 and MPC837x all rev */ if ((pvr == 0x80850010 && mod == 0x80b0) || - (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) + (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) priv->errata |= GFAR_ERRATA_A002; /* MPC8313 Rev < 2.0, MPC8548 rev 2.0 */ if ((pvr == 0x80850010 && mod == 0x80b0 && rev < 0x0020) || - (pvr == 0x80210020 && mod == 0x8030 && rev == 0x0020)) + (pvr == 0x80210020 && mod == 0x8030 && 
rev == 0x0020)) priv->errata |= GFAR_ERRATA_12; if (priv->errata) @@ -960,7 +961,8 @@ static void gfar_detect_errata(struct gfar_private *priv) } /* Set up the ethernet device structure, private data, - * and anything else we need before we start */ + * and anything else we need before we start + */ static int gfar_probe(struct platform_device *ofdev) { u32 tempval; @@ -991,8 +993,9 @@ static int gfar_probe(struct platform_device *ofdev) gfar_detect_errata(priv); - /* Stop the DMA engine now, in case it was running before */ - /* (The firmware could have used it, and left it running). */ + /* Stop the DMA engine now, in case it was running before + * (The firmware could have used it, and left it running). + */ gfar_halt(dev); /* Reset MAC layer */ @@ -1026,13 +1029,14 @@ static int gfar_probe(struct platform_device *ofdev) /* Register for napi ...We are registering NAPI for each grp */ for (i = 0; i < priv->num_grps; i++) - netif_napi_add(dev, &priv->gfargrp[i].napi, gfar_poll, GFAR_DEV_WEIGHT); + netif_napi_add(dev, &priv->gfargrp[i].napi, gfar_poll, + GFAR_DEV_WEIGHT); if (priv->device_flags & FSL_GIANFAR_DEV_HAS_CSUM) { dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG | - NETIF_F_RXCSUM; + NETIF_F_RXCSUM; dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG | - NETIF_F_RXCSUM | NETIF_F_HIGHDMA; + NETIF_F_RXCSUM | NETIF_F_HIGHDMA; } if (priv->device_flags & FSL_GIANFAR_DEV_HAS_VLAN) { @@ -1081,7 +1085,7 @@ static int gfar_probe(struct platform_device *ofdev) priv->padding = 0; if (dev->features & NETIF_F_IP_CSUM || - priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER) + priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER) dev->needed_headroom = GMAC_FCB_LEN; /* Program the isrg regs only if number of grps > 1 */ @@ -1098,28 +1102,32 @@ static int gfar_probe(struct platform_device *ofdev) /* Need to reverse the bit maps as bit_map's MSB is q0 * but, for_each_set_bit parses from right to left, which - * basically reverses the queue numbers */ + * basically reverses the queue numbers + */ for (i = 0; i< priv->num_grps; i++) { - priv->gfargrp[i].tx_bit_map = reverse_bitmap( - priv->gfargrp[i].tx_bit_map, MAX_TX_QS); - priv->gfargrp[i].rx_bit_map = reverse_bitmap( - priv->gfargrp[i].rx_bit_map, MAX_RX_QS); + priv->gfargrp[i].tx_bit_map = + reverse_bitmap(priv->gfargrp[i].tx_bit_map, MAX_TX_QS); + priv->gfargrp[i].rx_bit_map = + reverse_bitmap(priv->gfargrp[i].rx_bit_map, MAX_RX_QS); } /* Calculate RSTAT, TSTAT, RQUEUE and TQUEUE values, - * also assign queues to groups */ + * also assign queues to groups + */ for (grp_idx = 0; grp_idx < priv->num_grps; grp_idx++) { priv->gfargrp[grp_idx].num_rx_queues = 0x0; + for_each_set_bit(i, &priv->gfargrp[grp_idx].rx_bit_map, - priv->num_rx_queues) { + priv->num_rx_queues) { priv->gfargrp[grp_idx].num_rx_queues++; priv->rx_queue[i]->grp = &priv->gfargrp[grp_idx]; rstat = rstat | (RSTAT_CLEAR_RHALT >> i); rqueue = rqueue | ((RQUEUE_EN0 | RQUEUE_EX0) >> i); } priv->gfargrp[grp_idx].num_tx_queues = 0x0; + for_each_set_bit(i, &priv->gfargrp[grp_idx].tx_bit_map, - priv->num_tx_queues) { + priv->num_tx_queues) { priv->gfargrp[grp_idx].num_tx_queues++; priv->tx_queue[i]->grp = &priv->gfargrp[grp_idx]; tstat = tstat | (TSTAT_CLEAR_THALT >> i); @@ -1149,7 +1157,7 @@ static int gfar_probe(struct platform_device *ofdev) priv->rx_queue[i]->rxic = DEFAULT_RXIC; } - /* always enable rx filer*/ + /* always enable rx filer */ priv->rx_filer_enable = 1; /* Enable most messages by default */ priv->msg_enable = (NETIF_MSG_IFUP << 1 ) - 1; @@ -1165,7 +1173,8 @@ static int gfar_probe(struct 
platform_device *ofdev) } device_init_wakeup(&dev->dev, - priv->device_flags & FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); + priv->device_flags & + FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); /* fill out IRQ number and name fields */ for (i = 0; i < priv->num_grps; i++) { @@ -1189,13 +1198,14 @@ static int gfar_probe(struct platform_device *ofdev) /* Print out the device info */ netdev_info(dev, "mac: %pM\n", dev->dev_addr); - /* Even more device info helps when determining which kernel */ - /* provided which set of benchmarks. */ + /* Even more device info helps when determining which kernel + * provided which set of benchmarks. + */ netdev_info(dev, "Running with NAPI enabled\n"); for (i = 0; i < priv->num_rx_queues; i++) netdev_info(dev, "RX BD ring size for Q[%d]: %d\n", i, priv->rx_queue[i]->rx_ring_size); - for(i = 0; i < priv->num_tx_queues; i++) + for (i = 0; i < priv->num_tx_queues; i++) netdev_info(dev, "TX BD ring size for Q[%d]: %d\n", i, priv->tx_queue[i]->tx_ring_size); @@ -1242,7 +1252,8 @@ static int gfar_suspend(struct device *dev) u32 tempval; int magic_packet = priv->wol_en && - (priv->device_flags & FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); + (priv->device_flags & + FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); netif_device_detach(ndev); @@ -1294,7 +1305,8 @@ static int gfar_resume(struct device *dev) unsigned long flags; u32 tempval; int magic_packet = priv->wol_en && - (priv->device_flags & FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); + (priv->device_flags & + FSL_GIANFAR_DEV_HAS_MAGIC_PACKET); if (!netif_running(ndev)) { netif_device_attach(ndev); @@ -1393,13 +1405,13 @@ static phy_interface_t gfar_get_interface(struct net_device *dev) } if (ecntrl & ECNTRL_REDUCED_MODE) { - if (ecntrl & ECNTRL_REDUCED_MII_MODE) + if (ecntrl & ECNTRL_REDUCED_MII_MODE) { return PHY_INTERFACE_MODE_RMII; + } else { phy_interface_t interface = priv->interface; - /* - * This isn't autodetected right now, so it must + /* This isn't autodetected right now, so it must * be set by the device tree or platform code. */ if (interface == PHY_INTERFACE_MODE_RGMII_ID) @@ -1453,8 +1465,7 @@ static int init_phy(struct net_device *dev) return 0; } -/* - * Initialize TBI PHY interface for communicating with the +/* Initialize TBI PHY interface for communicating with the * SERDES lynx PHY on the chip. We communicate with this PHY * through the MDIO bus on each controller, treating it as a * "normal" PHY at the address found in the TBIPA register. We assume @@ -1479,8 +1490,7 @@ static void gfar_configure_serdes(struct net_device *dev) return; } - /* - * If the link is already up, we must already be ok, and don't need to + /* If the link is already up, we must already be ok, and don't need to * configure and reset the TBI<->SerDes link. Maybe U-Boot configured * everything for us? Resetting it takes the link down and requires * several seconds for it to come back. 
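The bit-map reversal in gfar_probe() above exists because the device-tree queue masks treat the most significant bit as queue 0, while for_each_set_bit() walks from bit 0 upward. A self-contained sketch of the same mirroring (reverse_low_bits() is an illustrative stand-in for the driver's reverse_bitmap() helper, not driver code):

#include <stdio.h>

/* Mirror the low max_qs bits so that bit 0 of the result refers to
 * queue 0, as a bit-0-first iterator expects. */
static unsigned int reverse_low_bits(unsigned int bit_map, unsigned int max_qs)
{
	unsigned int out = 0, i;

	for (i = 0; i < max_qs; i++)
		if (bit_map & (1u << (max_qs - 1 - i)))
			out |= 1u << i;

	return out;
}

int main(void)
{
	/* With 8 queues, 0x80 means "queue 0 only"; reversed it is 0x01. */
	printf("0x%02x\n", reverse_low_bits(0x80, 8));
	return 0;
}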
@@ -1492,18 +1502,19 @@ static void gfar_configure_serdes(struct net_device *dev) phy_write(tbiphy, MII_TBICON, TBICON_CLK_SELECT); phy_write(tbiphy, MII_ADVERTISE, - ADVERTISE_1000XFULL | ADVERTISE_1000XPAUSE | - ADVERTISE_1000XPSE_ASYM); + ADVERTISE_1000XFULL | ADVERTISE_1000XPAUSE | + ADVERTISE_1000XPSE_ASYM); - phy_write(tbiphy, MII_BMCR, BMCR_ANENABLE | - BMCR_ANRESTART | BMCR_FULLDPLX | BMCR_SPEED1000); + phy_write(tbiphy, MII_BMCR, + BMCR_ANENABLE | BMCR_ANRESTART | BMCR_FULLDPLX | + BMCR_SPEED1000); } static void init_registers(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); struct gfar __iomem *regs = NULL; - int i = 0; + int i; for (i = 0; i < priv->num_grps; i++) { regs = priv->gfargrp[i].regs; @@ -1554,15 +1565,13 @@ static int __gfar_is_rx_idle(struct gfar_private *priv) { u32 res; - /* - * Normaly TSEC should not hang on GRS commands, so we should + /* Normaly TSEC should not hang on GRS commands, so we should * actually wait for IEVENT_GRSC flag. */ if (likely(!gfar_has_errata(priv, GFAR_ERRATA_A002))) return 0; - /* - * Read the eTSEC register at offset 0xD1C. If bits 7-14 are + /* Read the eTSEC register at offset 0xD1C. If bits 7-14 are * the same as bits 23-30, the eTSEC Rx is assumed to be idle * and the Rx can be safely reset. */ @@ -1580,7 +1589,7 @@ static void gfar_halt_nodisable(struct net_device *dev) struct gfar_private *priv = netdev_priv(dev); struct gfar __iomem *regs = NULL; u32 tempval; - int i = 0; + int i; for (i = 0; i < priv->num_grps; i++) { regs = priv->gfargrp[i].regs; @@ -1594,8 +1603,8 @@ static void gfar_halt_nodisable(struct net_device *dev) regs = priv->gfargrp[0].regs; /* Stop the DMA, and wait for it to stop */ tempval = gfar_read(®s->dmactrl); - if ((tempval & (DMACTRL_GRS | DMACTRL_GTS)) - != (DMACTRL_GRS | DMACTRL_GTS)) { + if ((tempval & (DMACTRL_GRS | DMACTRL_GTS)) != + (DMACTRL_GRS | DMACTRL_GTS)) { int ret; tempval |= (DMACTRL_GRS | DMACTRL_GTS); @@ -1660,7 +1669,7 @@ void stop_gfar(struct net_device *dev) } else { for (i = 0; i < priv->num_grps; i++) free_irq(priv->gfargrp[i].interruptTransmit, - &priv->gfargrp[i]); + &priv->gfargrp[i]); } free_skb_resources(priv); @@ -1679,13 +1688,13 @@ static void free_skb_tx_queue(struct gfar_priv_tx_q *tx_queue) continue; dma_unmap_single(&priv->ofdev->dev, txbdp->bufPtr, - txbdp->length, DMA_TO_DEVICE); + txbdp->length, DMA_TO_DEVICE); txbdp->lstatus = 0; for (j = 0; j < skb_shinfo(tx_queue->tx_skbuff[i])->nr_frags; - j++) { + j++) { txbdp++; dma_unmap_page(&priv->ofdev->dev, txbdp->bufPtr, - txbdp->length, DMA_TO_DEVICE); + txbdp->length, DMA_TO_DEVICE); } txbdp++; dev_kfree_skb_any(tx_queue->tx_skbuff[i]); @@ -1705,8 +1714,8 @@ static void free_skb_rx_queue(struct gfar_priv_rx_q *rx_queue) for (i = 0; i < rx_queue->rx_ring_size; i++) { if (rx_queue->rx_skbuff[i]) { dma_unmap_single(&priv->ofdev->dev, - rxbdp->bufPtr, priv->rx_buffer_size, - DMA_FROM_DEVICE); + rxbdp->bufPtr, priv->rx_buffer_size, + DMA_FROM_DEVICE); dev_kfree_skb_any(rx_queue->rx_skbuff[i]); rx_queue->rx_skbuff[i] = NULL; } @@ -1718,7 +1727,8 @@ static void free_skb_rx_queue(struct gfar_priv_rx_q *rx_queue) } /* If there are any tx skbs or rx skbs still around, free them. 
- * Then free tx_skbuff and rx_skbuff */ + * Then free tx_skbuff and rx_skbuff + */ static void free_skb_resources(struct gfar_private *priv) { struct gfar_priv_tx_q *tx_queue = NULL; @@ -1728,24 +1738,25 @@ static void free_skb_resources(struct gfar_private *priv) /* Go through all the buffer descriptors and free their data buffers */ for (i = 0; i < priv->num_tx_queues; i++) { struct netdev_queue *txq; + tx_queue = priv->tx_queue[i]; txq = netdev_get_tx_queue(tx_queue->dev, tx_queue->qindex); - if(tx_queue->tx_skbuff) + if (tx_queue->tx_skbuff) free_skb_tx_queue(tx_queue); netdev_tx_reset_queue(txq); } for (i = 0; i < priv->num_rx_queues; i++) { rx_queue = priv->rx_queue[i]; - if(rx_queue->rx_skbuff) + if (rx_queue->rx_skbuff) free_skb_rx_queue(rx_queue); } dma_free_coherent(&priv->ofdev->dev, - sizeof(struct txbd8) * priv->total_tx_ring_size + - sizeof(struct rxbd8) * priv->total_rx_ring_size, - priv->tx_queue[0]->tx_bd_base, - priv->tx_queue[0]->tx_bd_dma_base); + sizeof(struct txbd8) * priv->total_tx_ring_size + + sizeof(struct rxbd8) * priv->total_rx_ring_size, + priv->tx_queue[0]->tx_bd_base, + priv->tx_queue[0]->tx_bd_dma_base); skb_queue_purge(&priv->rx_recycle); } @@ -1784,7 +1795,7 @@ void gfar_start(struct net_device *dev) } void gfar_configure_coalescing(struct gfar_private *priv, - unsigned long tx_mask, unsigned long rx_mask) + unsigned long tx_mask, unsigned long rx_mask) { struct gfar __iomem *regs = priv->gfargrp[0].regs; u32 __iomem *baddr; @@ -1794,11 +1805,11 @@ void gfar_configure_coalescing(struct gfar_private *priv, * multiple queues, there's only single reg to program */ gfar_write(®s->txic, 0); - if(likely(priv->tx_queue[0]->txcoalescing)) + if (likely(priv->tx_queue[0]->txcoalescing)) gfar_write(®s->txic, priv->tx_queue[0]->txic); gfar_write(®s->rxic, 0); - if(unlikely(priv->rx_queue[0]->rxcoalescing)) + if (unlikely(priv->rx_queue[0]->rxcoalescing)) gfar_write(®s->rxic, priv->rx_queue[0]->rxic); if (priv->mode == MQ_MG_MODE) { @@ -1825,12 +1836,14 @@ static int register_grp_irqs(struct gfar_priv_grp *grp) int err; /* If the device has multiple interrupts, register for - * them. Otherwise, only register for the one */ + * them. 
Otherwise, only register for the one + */ if (priv->device_flags & FSL_GIANFAR_DEV_HAS_MULTI_INTR) { /* Install our interrupt handlers for Error, - * Transmit, and Receive */ - if ((err = request_irq(grp->interruptError, gfar_error, 0, - grp->int_name_er,grp)) < 0) { + * Transmit, and Receive + */ + if ((err = request_irq(grp->interruptError, gfar_error, + 0, grp->int_name_er, grp)) < 0) { netif_err(priv, intr, dev, "Can't get IRQ %d\n", grp->interruptError); @@ -1838,21 +1851,21 @@ static int register_grp_irqs(struct gfar_priv_grp *grp) } if ((err = request_irq(grp->interruptTransmit, gfar_transmit, - 0, grp->int_name_tx, grp)) < 0) { + 0, grp->int_name_tx, grp)) < 0) { netif_err(priv, intr, dev, "Can't get IRQ %d\n", grp->interruptTransmit); goto tx_irq_fail; } - if ((err = request_irq(grp->interruptReceive, gfar_receive, 0, - grp->int_name_rx, grp)) < 0) { + if ((err = request_irq(grp->interruptReceive, gfar_receive, + 0, grp->int_name_rx, grp)) < 0) { netif_err(priv, intr, dev, "Can't get IRQ %d\n", grp->interruptReceive); goto rx_irq_fail; } } else { - if ((err = request_irq(grp->interruptTransmit, gfar_interrupt, 0, - grp->int_name_tx, grp)) < 0) { + if ((err = request_irq(grp->interruptTransmit, gfar_interrupt, + 0, grp->int_name_tx, grp)) < 0) { netif_err(priv, intr, dev, "Can't get IRQ %d\n", grp->interruptTransmit); goto err_irq_fail; @@ -1912,8 +1925,9 @@ irq_fail: return err; } -/* Called when something needs to use the ethernet device */ -/* Returns 0 for success. */ +/* Called when something needs to use the ethernet device + * Returns 0 for success. + */ static int gfar_enet_open(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); @@ -1958,18 +1972,17 @@ static inline struct txfcb *gfar_add_fcb(struct sk_buff *skb) } static inline void gfar_tx_checksum(struct sk_buff *skb, struct txfcb *fcb, - int fcb_length) + int fcb_length) { - u8 flags = 0; - /* If we're here, it's a IP packet with a TCP or UDP * payload. We set it to checksum, using a pseudo-header * we provide */ - flags = TXFCB_DEFAULT; + u8 flags = TXFCB_DEFAULT; - /* Tell the controller what the protocol is */ - /* And provide the already calculated phcs */ + /* Tell the controller what the protocol is + * And provide the already calculated phcs + */ if (ip_hdr(skb)->protocol == IPPROTO_UDP) { flags |= TXFCB_UDP; fcb->phcs = udp_hdr(skb)->check; @@ -1979,7 +1992,8 @@ static inline void gfar_tx_checksum(struct sk_buff *skb, struct txfcb *fcb, /* l3os is the distance between the start of the * frame (skb->data) and the start of the IP hdr. * l4os is the distance between the start of the - * l3 hdr and the l4 hdr */ + * l3 hdr and the l4 hdr + */ fcb->l3os = (u16)(skb_network_offset(skb) - fcb_length); fcb->l4os = skb_network_header_len(skb); @@ -1993,7 +2007,7 @@ void inline gfar_tx_vlan(struct sk_buff *skb, struct txfcb *fcb) } static inline struct txbd8 *skip_txbd(struct txbd8 *bdp, int stride, - struct txbd8 *base, int ring_size) + struct txbd8 *base, int ring_size) { struct txbd8 *new_bd = bdp + stride; @@ -2001,13 +2015,14 @@ static inline struct txbd8 *skip_txbd(struct txbd8 *bdp, int stride, } static inline struct txbd8 *next_txbd(struct txbd8 *bdp, struct txbd8 *base, - int ring_size) + int ring_size) { return skip_txbd(bdp, 1, base, ring_size); } -/* This is called by the kernel when a frame is ready for transmission. */ -/* It is pointed to by the dev->hard_start_xmit function pointer */ +/* This is called by the kernel when a frame is ready for transmission. 
+ * It is pointed to by the dev->hard_start_xmit function pointer + */ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); @@ -2022,13 +2037,12 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) unsigned long flags; unsigned int nr_frags, nr_txbds, length, fcb_length = GMAC_FCB_LEN; - /* - * TOE=1 frames larger than 2500 bytes may see excess delays + /* TOE=1 frames larger than 2500 bytes may see excess delays * before start of transmission. */ if (unlikely(gfar_has_errata(priv, GFAR_ERRATA_76) && - skb->ip_summed == CHECKSUM_PARTIAL && - skb->len > 2500)) { + skb->ip_summed == CHECKSUM_PARTIAL && + skb->len > 2500)) { int ret; ret = skb_checksum_help(skb); @@ -2044,16 +2058,16 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) /* check if time stamp should be generated */ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP && - priv->hwts_tx_en)) { + priv->hwts_tx_en)) { do_tstamp = 1; fcb_length = GMAC_FCB_LEN + GMAC_TXPAL_LEN; } /* make space for additional header when fcb is needed */ if (((skb->ip_summed == CHECKSUM_PARTIAL) || - vlan_tx_tag_present(skb) || - unlikely(do_tstamp)) && - (skb_headroom(skb) < fcb_length)) { + vlan_tx_tag_present(skb) || + unlikely(do_tstamp)) && + (skb_headroom(skb) < fcb_length)) { struct sk_buff *skb_new; skb_new = skb_realloc_headroom(skb, fcb_length); @@ -2096,12 +2110,12 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) /* Time stamp insertion requires one additional TxBD */ if (unlikely(do_tstamp)) txbdp_tstamp = txbdp = next_txbd(txbdp, base, - tx_queue->tx_ring_size); + tx_queue->tx_ring_size); if (nr_frags == 0) { if (unlikely(do_tstamp)) txbdp_tstamp->lstatus |= BD_LFLAG(TXBD_LAST | - TXBD_INTERRUPT); + TXBD_INTERRUPT); else lstatus |= BD_LFLAG(TXBD_LAST | TXBD_INTERRUPT); } else { @@ -2113,7 +2127,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) length = skb_shinfo(skb)->frags[i].size; lstatus = txbdp->lstatus | length | - BD_LFLAG(TXBD_READY); + BD_LFLAG(TXBD_READY); /* Handle the last BD specially */ if (i == nr_frags - 1) @@ -2143,8 +2157,8 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) if (CHECKSUM_PARTIAL == skb->ip_summed) { fcb = gfar_add_fcb(skb); /* as specified by errata */ - if (unlikely(gfar_has_errata(priv, GFAR_ERRATA_12) - && ((unsigned long)fcb % 0x20) > 0x18)) { + if (unlikely(gfar_has_errata(priv, GFAR_ERRATA_12) && + ((unsigned long)fcb % 0x20) > 0x18)) { __skb_pull(skb, GMAC_FCB_LEN); skb_checksum_help(skb); } else { @@ -2172,10 +2186,9 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) } txbdp_start->bufPtr = dma_map_single(&priv->ofdev->dev, skb->data, - skb_headlen(skb), DMA_TO_DEVICE); + skb_headlen(skb), DMA_TO_DEVICE); - /* - * If time stamping is requested one additional TxBD must be set up. The + /* If time stamping is requested one additional TxBD must be set up. The * first TxBD points to the FCB and must have a data length of * GMAC_FCB_LEN. The second TxBD points to the actual frame data with * the full frame length. 
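The GFAR_ERRATA_12 test in gfar_start_xmit() above, (unsigned long)fcb % 0x20 > 0x18, falls back to __skb_pull() plus skb_checksum_help() when the frame control block would start too far into a 32-byte window. A standalone restatement under that reading (fcb_alignment_ok() and LINE_SIZE are illustrative names, and the "would otherwise cross a 32-byte boundary" interpretation is an assumption, not spelled out in the patch):

#include <stdbool.h>
#include <stdint.h>

#define FCB_LEN   8	/* 8-byte frame control block (GMAC_FCB_LEN) */
#define LINE_SIZE 32	/* window size implied by the % 0x20 check */

/* True when an FCB starting at fcb_addr stays within the current
 * 32-byte window, i.e. its offset does not exceed 0x18. */
static bool fcb_alignment_ok(uintptr_t fcb_addr)
{
	return (fcb_addr % LINE_SIZE) <= (LINE_SIZE - FCB_LEN);
}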
@@ -2183,7 +2196,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) if (unlikely(do_tstamp)) { txbdp_tstamp->bufPtr = txbdp_start->bufPtr + fcb_length; txbdp_tstamp->lstatus |= BD_LFLAG(TXBD_READY) | - (skb_headlen(skb) - fcb_length); + (skb_headlen(skb) - fcb_length); lstatus |= BD_LFLAG(TXBD_CRC | TXBD_READY) | GMAC_FCB_LEN; } else { lstatus |= BD_LFLAG(TXBD_CRC | TXBD_READY) | skb_headlen(skb); @@ -2191,8 +2204,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) netdev_tx_sent_queue(txq, skb->len); - /* - * We can work in parallel with gfar_clean_tx_ring(), except + /* We can work in parallel with gfar_clean_tx_ring(), except * when modifying num_txbdfree. Note that we didn't grab the lock * when we were reading the num_txbdfree and checking for available * space, that's because outside of this function it can only grow, @@ -2205,8 +2217,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) */ spin_lock_irqsave(&tx_queue->txlock, flags); - /* - * The powerpc-specific eieio() is used, as wmb() has too strong + /* The powerpc-specific eieio() is used, as wmb() has too strong * semantics (it requires synchronization between cacheable and * uncacheable mappings, which eieio doesn't provide and which we * don't need), thus requiring a more expensive sync instruction. At @@ -2222,9 +2233,10 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) tx_queue->tx_skbuff[tx_queue->skb_curtx] = skb; /* Update the current skb pointer to the next entry we will use - * (wrapping if necessary) */ + * (wrapping if necessary) + */ tx_queue->skb_curtx = (tx_queue->skb_curtx + 1) & - TX_RING_MOD_MASK(tx_queue->tx_ring_size); + TX_RING_MOD_MASK(tx_queue->tx_ring_size); tx_queue->cur_tx = next_txbd(txbdp, base, tx_queue->tx_ring_size); @@ -2232,7 +2244,8 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev) tx_queue->num_txbdfree -= (nr_txbds); /* If the next BD still needs to be cleaned up, then the bds - are full. We need to tell the kernel to stop sending us stuff. */ + * are full. We need to tell the kernel to stop sending us stuff. 
+ */ if (!tx_queue->num_txbdfree) { netif_tx_stop_queue(txq); @@ -2357,12 +2370,12 @@ static int gfar_change_mtu(struct net_device *dev, int new_mtu) frame_size += priv->padding; - tempsize = - (frame_size & ~(INCREMENTAL_BUFFER_SIZE - 1)) + - INCREMENTAL_BUFFER_SIZE; + tempsize = (frame_size & ~(INCREMENTAL_BUFFER_SIZE - 1)) + + INCREMENTAL_BUFFER_SIZE; /* Only stop and start the controller if it isn't already - * stopped, and we changed something */ + * stopped, and we changed something + */ if ((oldsize != tempsize) && (dev->flags & IFF_UP)) stop_gfar(dev); @@ -2375,11 +2388,12 @@ static int gfar_change_mtu(struct net_device *dev, int new_mtu) /* If the mtu is larger than the max size for standard * ethernet frames (ie, a jumbo frame), then set maccfg2 - * to allow huge frames, and to check the length */ + * to allow huge frames, and to check the length + */ tempval = gfar_read(®s->maccfg2); if (priv->rx_buffer_size > DEFAULT_RX_BUFFER_SIZE || - gfar_has_errata(priv, GFAR_ERRATA_74)) + gfar_has_errata(priv, GFAR_ERRATA_74)) tempval |= (MACCFG2_HUGEFRAME | MACCFG2_LENGTHCHECK); else tempval &= ~(MACCFG2_HUGEFRAME | MACCFG2_LENGTHCHECK); @@ -2400,7 +2414,7 @@ static int gfar_change_mtu(struct net_device *dev, int new_mtu) static void gfar_reset_task(struct work_struct *work) { struct gfar_private *priv = container_of(work, struct gfar_private, - reset_task); + reset_task); struct net_device *dev = priv->ndev; if (dev->flags & IFF_UP) { @@ -2427,7 +2441,7 @@ static void gfar_align_skb(struct sk_buff *skb) * as many bytes as needed to align the data properly */ skb_reserve(skb, RXBUF_ALIGNMENT - - (((unsigned long) skb->data) & (RXBUF_ALIGNMENT - 1))); + (((unsigned long) skb->data) & (RXBUF_ALIGNMENT - 1))); } /* Interrupt Handler for Transmit complete */ @@ -2461,8 +2475,7 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue) frags = skb_shinfo(skb)->nr_frags; - /* - * When time stamping, one additional TxBD must be freed. + /* When time stamping, one additional TxBD must be freed. * Also, we need to dma_unmap_single() the TxPAL. 
*/ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) @@ -2476,7 +2489,7 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue) /* Only clean completed frames */ if ((lstatus & BD_LFLAG(TXBD_READY)) && - (lstatus & BD_LENGTH_MASK)) + (lstatus & BD_LENGTH_MASK)) break; if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) { @@ -2486,11 +2499,12 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue) buflen = bdp->length; dma_unmap_single(&priv->ofdev->dev, bdp->bufPtr, - buflen, DMA_TO_DEVICE); + buflen, DMA_TO_DEVICE); if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) { struct skb_shared_hwtstamps shhwtstamps; u64 *ns = (u64*) (((u32)skb->data + 0x10) & ~0x7); + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); shhwtstamps.hwtstamp = ns_to_ktime(*ns); skb_pull(skb, GMAC_FCB_LEN + GMAC_TXPAL_LEN); @@ -2503,23 +2517,20 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue) bdp = next_txbd(bdp, base, tx_ring_size); for (i = 0; i < frags; i++) { - dma_unmap_page(&priv->ofdev->dev, - bdp->bufPtr, - bdp->length, - DMA_TO_DEVICE); + dma_unmap_page(&priv->ofdev->dev, bdp->bufPtr, + bdp->length, DMA_TO_DEVICE); bdp->lstatus &= BD_LFLAG(TXBD_WRAP); bdp = next_txbd(bdp, base, tx_ring_size); } bytes_sent += skb->len; - /* - * If there's room in the queue (limit it to rx_buffer_size) + /* If there's room in the queue (limit it to rx_buffer_size) * we add this skb back into the pool, if it's the right size */ if (skb_queue_len(&priv->rx_recycle) < rx_queue->rx_ring_size && - skb_recycle_check(skb, priv->rx_buffer_size + - RXBUF_ALIGNMENT)) { + skb_recycle_check(skb, priv->rx_buffer_size + + RXBUF_ALIGNMENT)) { gfar_align_skb(skb); skb_queue_head(&priv->rx_recycle, skb); } else @@ -2528,7 +2539,7 @@ static int gfar_clean_tx_ring(struct gfar_priv_tx_q *tx_queue) tx_queue->tx_skbuff[skb_dirtytx] = NULL; skb_dirtytx = (skb_dirtytx + 1) & - TX_RING_MOD_MASK(tx_ring_size); + TX_RING_MOD_MASK(tx_ring_size); howmany++; spin_lock_irqsave(&tx_queue->txlock, flags); @@ -2558,8 +2569,7 @@ static void gfar_schedule_cleanup(struct gfar_priv_grp *gfargrp) gfar_write(&gfargrp->regs->imask, IMASK_RTX_DISABLED); __napi_schedule(&gfargrp->napi); } else { - /* - * Clear IEVENT, so interrupts aren't called again + /* Clear IEVENT, so interrupts aren't called again * because of the packets that have already arrived. 
*/ gfar_write(&gfargrp->regs->ievent, IEVENT_RTX_MASK); @@ -2576,7 +2586,7 @@ static irqreturn_t gfar_transmit(int irq, void *grp_id) } static void gfar_new_rxbdp(struct gfar_priv_rx_q *rx_queue, struct rxbd8 *bdp, - struct sk_buff *skb) + struct sk_buff *skb) { struct net_device *dev = rx_queue->dev; struct gfar_private *priv = netdev_priv(dev); @@ -2587,7 +2597,7 @@ static void gfar_new_rxbdp(struct gfar_priv_rx_q *rx_queue, struct rxbd8 *bdp, gfar_init_rxbdp(rx_queue, bdp, buf); } -static struct sk_buff * gfar_alloc_skb(struct net_device *dev) +static struct sk_buff *gfar_alloc_skb(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); struct sk_buff *skb = NULL; @@ -2601,7 +2611,7 @@ static struct sk_buff * gfar_alloc_skb(struct net_device *dev) return skb; } -struct sk_buff * gfar_new_skb(struct net_device *dev) +struct sk_buff *gfar_new_skb(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); struct sk_buff *skb = NULL; @@ -2619,8 +2629,7 @@ static inline void count_errors(unsigned short status, struct net_device *dev) struct net_device_stats *stats = &dev->stats; struct gfar_extra_stats *estats = &priv->extra_stats; - /* If the packet was truncated, none of the other errors - * matter */ + /* If the packet was truncated, none of the other errors matter */ if (status & RXBD_TRUNCATED) { stats->rx_length_errors++; @@ -2661,7 +2670,8 @@ static inline void gfar_rx_checksum(struct sk_buff *skb, struct rxfcb *fcb) { /* If valid headers were found, and valid sums * were verified, then we tell the kernel that no - * checksumming is necessary. Otherwise, it is */ + * checksumming is necessary. Otherwise, it is [FIXME] + */ if ((fcb->flags & RXFCB_CSUM_MASK) == (RXFCB_CIP | RXFCB_CTU)) skb->ip_summed = CHECKSUM_UNNECESSARY; else @@ -2669,8 +2679,7 @@ static inline void gfar_rx_checksum(struct sk_buff *skb, struct rxfcb *fcb) } -/* gfar_process_frame() -- handle one incoming packet if skb - * isn't NULL. */ +/* gfar_process_frame() -- handle one incoming packet if skb isn't NULL. */ static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, int amount_pull, struct napi_struct *napi) { @@ -2682,8 +2691,9 @@ static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, /* fcb is at the beginning if exists */ fcb = (struct rxfcb *)skb->data; - /* Remove the FCB from the skb */ - /* Remove the padded bytes, if there are any */ + /* Remove the FCB from the skb + * Remove the padded bytes, if there are any + */ if (amount_pull) { skb_record_rx_queue(skb, fcb->rq); skb_pull(skb, amount_pull); @@ -2693,6 +2703,7 @@ static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, if (priv->hwts_rx_en) { struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb); u64 *ns = (u64 *) skb->data; + memset(shhwtstamps, 0, sizeof(*shhwtstamps)); shhwtstamps->hwtstamp = ns_to_ktime(*ns); } @@ -2706,8 +2717,7 @@ static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, /* Tell the skb what kind of packet this is */ skb->protocol = eth_type_trans(skb, dev); - /* - * There's need to check for NETIF_F_HW_VLAN_RX here. + /* There's need to check for NETIF_F_HW_VLAN_RX here. * Even if vlan rx accel is disabled, on some chips * RXFCB_VLN is pseudo randomly set. */ @@ -2725,8 +2735,8 @@ static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, } /* gfar_clean_rx_ring() -- Processes each frame in the rx ring - * until the budget/quota has been reached. 
Returns the number - * of frames handled + * until the budget/quota has been reached. Returns the number + * of frames handled */ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) { @@ -2746,6 +2756,7 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) while (!((bdp->status & RXBD_EMPTY) || (--rx_work_limit < 0))) { struct sk_buff *newskb; + rmb(); /* Add another skb for the future */ @@ -2754,15 +2765,15 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) skb = rx_queue->rx_skbuff[rx_queue->skb_currx]; dma_unmap_single(&priv->ofdev->dev, bdp->bufPtr, - priv->rx_buffer_size, DMA_FROM_DEVICE); + priv->rx_buffer_size, DMA_FROM_DEVICE); if (unlikely(!(bdp->status & RXBD_ERR) && - bdp->length > priv->rx_buffer_size)) + bdp->length > priv->rx_buffer_size)) bdp->status = RXBD_LARGE; /* We drop the frame if we failed to allocate a new buffer */ if (unlikely(!newskb || !(bdp->status & RXBD_LAST) || - bdp->status & RXBD_ERR)) { + bdp->status & RXBD_ERR)) { count_errors(bdp->status, dev); if (unlikely(!newskb)) @@ -2781,7 +2792,7 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) rx_queue->stats.rx_bytes += pkt_len; skb_record_rx_queue(skb, rx_queue->qindex); gfar_process_frame(dev, skb, amount_pull, - &rx_queue->grp->napi); + &rx_queue->grp->napi); } else { netif_warn(priv, rx_err, dev, "Missing skb!\n"); @@ -2800,9 +2811,8 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) bdp = next_bd(bdp, base, rx_queue->rx_ring_size); /* update to point at the next skb */ - rx_queue->skb_currx = - (rx_queue->skb_currx + 1) & - RX_RING_MOD_MASK(rx_queue->rx_ring_size); + rx_queue->skb_currx = (rx_queue->skb_currx + 1) & + RX_RING_MOD_MASK(rx_queue->rx_ring_size); } /* Update the current rxbd pointer to be the next one */ @@ -2813,8 +2823,8 @@ int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit) static int gfar_poll(struct napi_struct *napi, int budget) { - struct gfar_priv_grp *gfargrp = container_of(napi, - struct gfar_priv_grp, napi); + struct gfar_priv_grp *gfargrp = + container_of(napi, struct gfar_priv_grp, napi); struct gfar_private *priv = gfargrp->priv; struct gfar __iomem *regs = gfargrp->regs; struct gfar_priv_tx_q *tx_queue = NULL; @@ -2828,11 +2838,11 @@ static int gfar_poll(struct napi_struct *napi, int budget) budget_per_queue = budget/num_queues; /* Clear IEVENT, so interrupts aren't called again - * because of the packets that have already arrived */ + * because of the packets that have already arrived + */ gfar_write(®s->ievent, IEVENT_RTX_MASK); while (num_queues && left_over_budget) { - budget_per_queue = left_over_budget/num_queues; left_over_budget = 0; @@ -2843,12 +2853,13 @@ static int gfar_poll(struct napi_struct *napi, int budget) tx_queue = priv->tx_queue[rx_queue->qindex]; tx_cleaned += gfar_clean_tx_ring(tx_queue); - rx_cleaned_per_queue = gfar_clean_rx_ring(rx_queue, - budget_per_queue); + rx_cleaned_per_queue = + gfar_clean_rx_ring(rx_queue, budget_per_queue); rx_cleaned += rx_cleaned_per_queue; - if(rx_cleaned_per_queue < budget_per_queue) { + if (rx_cleaned_per_queue < budget_per_queue) { left_over_budget = left_over_budget + - (budget_per_queue - rx_cleaned_per_queue); + (budget_per_queue - + rx_cleaned_per_queue); set_bit(i, &serviced_queues); num_queues--; } @@ -2866,25 +2877,25 @@ static int gfar_poll(struct napi_struct *napi, int budget) gfar_write(®s->imask, IMASK_DEFAULT); - /* If we are coalescing interrupts, update the 
timer */ - /* Otherwise, clear it */ - gfar_configure_coalescing(priv, - gfargrp->rx_bit_map, gfargrp->tx_bit_map); + /* If we are coalescing interrupts, update the timer + * Otherwise, clear it + */ + gfar_configure_coalescing(priv, gfargrp->rx_bit_map, + gfargrp->tx_bit_map); } return rx_cleaned; } #ifdef CONFIG_NET_POLL_CONTROLLER -/* - * Polling 'interrupt' - used by things like netconsole to send skbs +/* Polling 'interrupt' - used by things like netconsole to send skbs * without having to re-enable interrupts. It's not called while * the interrupt routine is executing. */ static void gfar_netpoll(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); - int i = 0; + int i; /* If the device has multiple interrupts, run tx/rx */ if (priv->device_flags & FSL_GIANFAR_DEV_HAS_MULTI_INTR) { @@ -2893,7 +2904,7 @@ static void gfar_netpoll(struct net_device *dev) disable_irq(priv->gfargrp[i].interruptReceive); disable_irq(priv->gfargrp[i].interruptError); gfar_interrupt(priv->gfargrp[i].interruptTransmit, - &priv->gfargrp[i]); + &priv->gfargrp[i]); enable_irq(priv->gfargrp[i].interruptError); enable_irq(priv->gfargrp[i].interruptReceive); enable_irq(priv->gfargrp[i].interruptTransmit); @@ -2902,7 +2913,7 @@ static void gfar_netpoll(struct net_device *dev) for (i = 0; i < priv->num_grps; i++) { disable_irq(priv->gfargrp[i].interruptTransmit); gfar_interrupt(priv->gfargrp[i].interruptTransmit, - &priv->gfargrp[i]); + &priv->gfargrp[i]); enable_irq(priv->gfargrp[i].interruptTransmit); } } @@ -2954,7 +2965,8 @@ static void adjust_link(struct net_device *dev) u32 ecntrl = gfar_read(®s->ecntrl); /* Now we make sure that we can be in full duplex mode. - * If not, we operate in half-duplex mode. */ + * If not, we operate in half-duplex mode. + */ if (phydev->duplex != priv->oldduplex) { new_state = 1; if (!(phydev->duplex)) @@ -2980,7 +2992,8 @@ static void adjust_link(struct net_device *dev) ((tempval & ~(MACCFG2_IF)) | MACCFG2_MII); /* Reduced mode distinguishes - * between 10 and 100 */ + * between 10 and 100 + */ if (phydev->speed == SPEED_100) ecntrl |= ECNTRL_R100; else @@ -3019,7 +3032,8 @@ static void adjust_link(struct net_device *dev) /* Update the hash table based on the current list of multicast * addresses we subscribe to. 
Also, change the promiscuity of * the device based on the flags (this function is called - * whenever dev->flags is changed */ + * whenever dev->flags is changed + */ static void gfar_set_multi(struct net_device *dev) { struct netdev_hw_addr *ha; @@ -3081,7 +3095,8 @@ static void gfar_set_multi(struct net_device *dev) /* If we have extended hash tables, we need to * clear the exact match registers to prepare for - * setting them */ + * setting them + */ if (priv->extended_hash) { em_num = GFAR_EM_NUM + 1; gfar_clear_exact_match(dev); @@ -3107,13 +3122,14 @@ static void gfar_set_multi(struct net_device *dev) /* Clears each of the exact match registers to zero, so they - * don't interfere with normal reception */ + * don't interfere with normal reception + */ static void gfar_clear_exact_match(struct net_device *dev) { int idx; static const u8 zero_arr[ETH_ALEN] = {0, 0, 0, 0, 0, 0}; - for(idx = 1;idx < GFAR_EM_NUM + 1;idx++) + for (idx = 1; idx < GFAR_EM_NUM + 1; idx++) gfar_set_mac_for_addr(dev, idx, zero_arr); } @@ -3129,7 +3145,8 @@ static void gfar_clear_exact_match(struct net_device *dev) * hash index which gaddr register to use, and the 5 other bits * indicate which bit (assuming an IBM numbering scheme, which * for PowerPC (tm) is usually the case) in the register holds - * the entry. */ + * the entry. + */ static void gfar_set_hash_for_addr(struct net_device *dev, u8 *addr) { u32 tempval; @@ -3161,8 +3178,9 @@ static void gfar_set_mac_for_addr(struct net_device *dev, int num, macptr += num*2; - /* Now copy it into the mac registers backwards, cuz */ - /* little endian is silly */ + /* Now copy it into the mac registers backwards, cuz + * little endian is silly + */ for (idx = 0; idx < ETH_ALEN; idx++) tmpbuf[ETH_ALEN - 1 - idx] = addr[idx]; @@ -3194,7 +3212,8 @@ static irqreturn_t gfar_error(int irq, void *grp_id) /* Hmm... 
*/ if (netif_msg_rx_err(priv) || netif_msg_tx_err(priv)) - netdev_dbg(dev, "error interrupt (ievent=0x%08x imask=0x%08x)\n", + netdev_dbg(dev, + "error interrupt (ievent=0x%08x imask=0x%08x)\n", events, gfar_read(®s->imask)); /* Update the error counters */ diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c index 8a025570d97e..8971921cc1c8 100644 --- a/drivers/net/ethernet/freescale/gianfar_ethtool.c +++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c @@ -46,18 +46,24 @@ #include "gianfar.h" extern void gfar_start(struct net_device *dev); -extern int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, int rx_work_limit); +extern int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue, + int rx_work_limit); #define GFAR_MAX_COAL_USECS 0xffff #define GFAR_MAX_COAL_FRAMES 0xff static void gfar_fill_stats(struct net_device *dev, struct ethtool_stats *dummy, - u64 * buf); + u64 *buf); static void gfar_gstrings(struct net_device *dev, u32 stringset, u8 * buf); -static int gfar_gcoalesce(struct net_device *dev, struct ethtool_coalesce *cvals); -static int gfar_scoalesce(struct net_device *dev, struct ethtool_coalesce *cvals); -static void gfar_gringparam(struct net_device *dev, struct ethtool_ringparam *rvals); -static int gfar_sringparam(struct net_device *dev, struct ethtool_ringparam *rvals); -static void gfar_gdrvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo); +static int gfar_gcoalesce(struct net_device *dev, + struct ethtool_coalesce *cvals); +static int gfar_scoalesce(struct net_device *dev, + struct ethtool_coalesce *cvals); +static void gfar_gringparam(struct net_device *dev, + struct ethtool_ringparam *rvals); +static int gfar_sringparam(struct net_device *dev, + struct ethtool_ringparam *rvals); +static void gfar_gdrvinfo(struct net_device *dev, + struct ethtool_drvinfo *drvinfo); static const char stat_gstrings[][ETH_GSTRING_LEN] = { "rx-dropped-by-kernel", @@ -130,14 +136,15 @@ static void gfar_gstrings(struct net_device *dev, u32 stringset, u8 * buf) memcpy(buf, stat_gstrings, GFAR_STATS_LEN * ETH_GSTRING_LEN); else memcpy(buf, stat_gstrings, - GFAR_EXTRA_STATS_LEN * ETH_GSTRING_LEN); + GFAR_EXTRA_STATS_LEN * ETH_GSTRING_LEN); } /* Fill in an array of 64-bit statistics from various sources. 
* This array will be appended to the end of the ethtool_stats * structure, and returned to user space */ -static void gfar_fill_stats(struct net_device *dev, struct ethtool_stats *dummy, u64 * buf) +static void gfar_fill_stats(struct net_device *dev, struct ethtool_stats *dummy, + u64 *buf) { int i; struct gfar_private *priv = netdev_priv(dev); @@ -174,8 +181,8 @@ static int gfar_sset_count(struct net_device *dev, int sset) } /* Fills in the drvinfo structure with some basic info */ -static void gfar_gdrvinfo(struct net_device *dev, struct - ethtool_drvinfo *drvinfo) +static void gfar_gdrvinfo(struct net_device *dev, + struct ethtool_drvinfo *drvinfo) { strncpy(drvinfo->driver, DRV_NAME, GFAR_INFOSTR_LEN); strncpy(drvinfo->version, gfar_driver_version, GFAR_INFOSTR_LEN); @@ -226,7 +233,8 @@ static int gfar_reglen(struct net_device *dev) } /* Return a dump of the GFAR register space */ -static void gfar_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *regbuf) +static void gfar_get_regs(struct net_device *dev, struct ethtool_regs *regs, + void *regbuf) { int i; struct gfar_private *priv = netdev_priv(dev); @@ -239,7 +247,8 @@ static void gfar_get_regs(struct net_device *dev, struct ethtool_regs *regs, voi /* Convert microseconds to ethernet clock ticks, which changes * depending on what speed the controller is running at */ -static unsigned int gfar_usecs2ticks(struct gfar_private *priv, unsigned int usecs) +static unsigned int gfar_usecs2ticks(struct gfar_private *priv, + unsigned int usecs) { unsigned int count; @@ -263,7 +272,8 @@ static unsigned int gfar_usecs2ticks(struct gfar_private *priv, unsigned int use } /* Convert ethernet clock ticks to microseconds */ -static unsigned int gfar_ticks2usecs(struct gfar_private *priv, unsigned int ticks) +static unsigned int gfar_ticks2usecs(struct gfar_private *priv, + unsigned int ticks) { unsigned int count; @@ -288,7 +298,8 @@ static unsigned int gfar_ticks2usecs(struct gfar_private *priv, unsigned int tic /* Get the coalescing parameters, and put them in the cvals * structure. */ -static int gfar_gcoalesce(struct net_device *dev, struct ethtool_coalesce *cvals) +static int gfar_gcoalesce(struct net_device *dev, + struct ethtool_coalesce *cvals) { struct gfar_private *priv = netdev_priv(dev); struct gfar_priv_rx_q *rx_queue = NULL; @@ -353,7 +364,8 @@ static int gfar_gcoalesce(struct net_device *dev, struct ethtool_coalesce *cvals * Both cvals->*_usecs and cvals->*_frames have to be > 0 * in order for coalescing to be active */ -static int gfar_scoalesce(struct net_device *dev, struct ethtool_coalesce *cvals) +static int gfar_scoalesce(struct net_device *dev, + struct ethtool_coalesce *cvals) { struct gfar_private *priv = netdev_priv(dev); int i = 0; @@ -364,7 +376,8 @@ static int gfar_scoalesce(struct net_device *dev, struct ethtool_coalesce *cvals /* Set up rx coalescing */ /* As of now, we will enable/disable coalescing for all * queues together in case of eTSEC2, this will be modified - * along with the ethtool interface */ + * along with the ethtool interface + */ if ((cvals->rx_coalesce_usecs == 0) || (cvals->rx_max_coalesced_frames == 0)) { for (i = 0; i < priv->num_rx_queues; i++) @@ -433,7 +446,8 @@ static int gfar_scoalesce(struct net_device *dev, struct ethtool_coalesce *cvals /* Fills in rvals with the current ring parameters. 
Currently, * rx, rx_mini, and rx_jumbo rings are the same size, as mini and * jumbo are ignored by the driver */ -static void gfar_gringparam(struct net_device *dev, struct ethtool_ringparam *rvals) +static void gfar_gringparam(struct net_device *dev, + struct ethtool_ringparam *rvals) { struct gfar_private *priv = netdev_priv(dev); struct gfar_priv_tx_q *tx_queue = NULL; @@ -459,8 +473,10 @@ static void gfar_gringparam(struct net_device *dev, struct ethtool_ringparam *rv /* Change the current ring parameters, stopping the controller if * necessary so that we don't mess things up while we're in * motion. We wait for the ring to be clean before reallocating - * the rings. */ -static int gfar_sringparam(struct net_device *dev, struct ethtool_ringparam *rvals) + * the rings. + */ +static int gfar_sringparam(struct net_device *dev, + struct ethtool_ringparam *rvals) { struct gfar_private *priv = netdev_priv(dev); int err = 0, i = 0; @@ -486,7 +502,8 @@ static int gfar_sringparam(struct net_device *dev, struct ethtool_ringparam *rva unsigned long flags; /* Halt TX and RX, and process the frames which - * have already been received */ + * have already been received + */ local_irq_save(flags); lock_tx_qs(priv); lock_rx_qs(priv); @@ -499,7 +516,7 @@ static int gfar_sringparam(struct net_device *dev, struct ethtool_ringparam *rva for (i = 0; i < priv->num_rx_queues; i++) gfar_clean_rx_ring(priv->rx_queue[i], - priv->rx_queue[i]->rx_ring_size); + priv->rx_queue[i]->rx_ring_size); /* Now we take down the rings to rebuild them */ stop_gfar(dev); @@ -509,7 +526,8 @@ static int gfar_sringparam(struct net_device *dev, struct ethtool_ringparam *rva for (i = 0; i < priv->num_rx_queues; i++) { priv->rx_queue[i]->rx_ring_size = rvals->rx_pending; priv->tx_queue[i]->tx_ring_size = rvals->tx_pending; - priv->tx_queue[i]->num_txbdfree = priv->tx_queue[i]->tx_ring_size; + priv->tx_queue[i]->num_txbdfree = + priv->tx_queue[i]->tx_ring_size; } /* Rebuild the rings with the new size */ @@ -535,7 +553,8 @@ int gfar_set_features(struct net_device *dev, netdev_features_t features) if (dev->flags & IFF_UP) { /* Halt TX and RX, and process the frames which - * have already been received */ + * have already been received + */ local_irq_save(flags); lock_tx_qs(priv); lock_rx_qs(priv); @@ -548,7 +567,7 @@ int gfar_set_features(struct net_device *dev, netdev_features_t features) for (i = 0; i < priv->num_rx_queues; i++) gfar_clean_rx_ring(priv->rx_queue[i], - priv->rx_queue[i]->rx_ring_size); + priv->rx_queue[i]->rx_ring_size); /* Now we take down the rings to rebuild them */ stop_gfar(dev); @@ -564,12 +583,14 @@ int gfar_set_features(struct net_device *dev, netdev_features_t features) static uint32_t gfar_get_msglevel(struct net_device *dev) { struct gfar_private *priv = netdev_priv(dev); + return priv->msg_enable; } static void gfar_set_msglevel(struct net_device *dev, uint32_t data) { struct gfar_private *priv = netdev_priv(dev); + priv->msg_enable = data; } @@ -614,14 +635,14 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_L2DA) { fcr = RQFCR_PID_DAH |RQFCR_CMP_NOMATCH | - RQFCR_HASH | RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_HASH | RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); priv->cur_filer_idx = priv->cur_filer_idx - 1; fcr = RQFCR_PID_DAL | RQFCR_AND | RQFCR_CMP_NOMATCH | - RQFCR_HASH | RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_HASH | RQFCR_AND | 
RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -630,7 +651,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_VLAN) { fcr = RQFCR_PID_VID | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; @@ -639,7 +660,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_IP_SRC) { fcr = RQFCR_PID_SIA | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -648,7 +669,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & (RXH_IP_DST)) { fcr = RQFCR_PID_DIA | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -657,7 +678,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_L3_PROTO) { fcr = RQFCR_PID_L4P | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -666,7 +687,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_L4_B_0_1) { fcr = RQFCR_PID_SPT | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -675,7 +696,7 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) if (ethflow & RXH_L4_B_2_3) { fcr = RQFCR_PID_DPT | RQFCR_CMP_NOMATCH | RQFCR_HASH | - RQFCR_AND | RQFCR_HASHTBL_0; + RQFCR_AND | RQFCR_HASHTBL_0; priv->ftp_rqfpr[priv->cur_filer_idx] = fpr; priv->ftp_rqfcr[priv->cur_filer_idx] = fcr; gfar_write_filer(priv, priv->cur_filer_idx, fcr, fpr); @@ -683,7 +704,8 @@ static void ethflow_to_filer_rules (struct gfar_private *priv, u64 ethflow) } } -static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, u64 class) +static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, + u64 class) { unsigned int last_rule_idx = priv->cur_filer_idx; unsigned int cmp_rqfpr; @@ -694,9 +716,9 @@ static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, u int ret = 1; local_rqfpr = kmalloc(sizeof(unsigned int) * (MAX_FILER_IDX + 1), - GFP_KERNEL); + GFP_KERNEL); local_rqfcr = kmalloc(sizeof(unsigned int) * (MAX_FILER_IDX + 1), - GFP_KERNEL); + GFP_KERNEL); if (!local_rqfpr || !local_rqfcr) { pr_err("Out of memory\n"); ret = 0; @@ -726,9 +748,9 @@ static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, u local_rqfpr[j] = priv->ftp_rqfpr[i]; local_rqfcr[j] = priv->ftp_rqfcr[i]; j--; - if ((priv->ftp_rqfcr[i] == (RQFCR_PID_PARSE | - RQFCR_CLE |RQFCR_AND)) && - (priv->ftp_rqfpr[i] == cmp_rqfpr)) + if ((priv->ftp_rqfcr[i] == + (RQFCR_PID_PARSE | 
RQFCR_CLE | RQFCR_AND)) && + (priv->ftp_rqfpr[i] == cmp_rqfpr)) break; } @@ -743,12 +765,12 @@ static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, u */ for (l = i+1; l < MAX_FILER_IDX; l++) { if ((priv->ftp_rqfcr[l] & RQFCR_CLE) && - !(priv->ftp_rqfcr[l] & RQFCR_AND)) { + !(priv->ftp_rqfcr[l] & RQFCR_AND)) { priv->ftp_rqfcr[l] = RQFCR_CLE | RQFCR_CMP_EXACT | - RQFCR_HASHTBL_0 | RQFCR_PID_MASK; + RQFCR_HASHTBL_0 | RQFCR_PID_MASK; priv->ftp_rqfpr[l] = FPR_FILER_MASK; gfar_write_filer(priv, l, priv->ftp_rqfcr[l], - priv->ftp_rqfpr[l]); + priv->ftp_rqfpr[l]); break; } @@ -773,7 +795,7 @@ static int gfar_ethflow_to_filer_table(struct gfar_private *priv, u64 ethflow, u priv->ftp_rqfpr[priv->cur_filer_idx] = local_rqfpr[k]; priv->ftp_rqfcr[priv->cur_filer_idx] = local_rqfcr[k]; gfar_write_filer(priv, priv->cur_filer_idx, - local_rqfcr[k], local_rqfpr[k]); + local_rqfcr[k], local_rqfpr[k]); if (!priv->cur_filer_idx) break; priv->cur_filer_idx = priv->cur_filer_idx - 1; @@ -785,7 +807,8 @@ err: return ret; } -static int gfar_set_hash_opts(struct gfar_private *priv, struct ethtool_rxnfc *cmd) +static int gfar_set_hash_opts(struct gfar_private *priv, + struct ethtool_rxnfc *cmd) { /* write the filer rules here */ if (!gfar_ethflow_to_filer_table(priv, cmd->data, cmd->flow_type)) @@ -810,10 +833,10 @@ static int gfar_check_filer_hardware(struct gfar_private *priv) i &= RCTRL_PRSDEP_MASK | RCTRL_PRSFM; if (i == (RCTRL_PRSDEP_MASK | RCTRL_PRSFM)) { netdev_info(priv->ndev, - "Receive Queue Filtering enabled\n"); + "Receive Queue Filtering enabled\n"); } else { netdev_warn(priv->ndev, - "Receive Queue Filtering disabled\n"); + "Receive Queue Filtering disabled\n"); return -EOPNOTSUPP; } } @@ -823,16 +846,17 @@ static int gfar_check_filer_hardware(struct gfar_private *priv) i &= RCTRL_PRSDEP_MASK; if (i == RCTRL_PRSDEP_MASK) { netdev_info(priv->ndev, - "Receive Queue Filtering enabled\n"); + "Receive Queue Filtering enabled\n"); } else { netdev_warn(priv->ndev, - "Receive Queue Filtering disabled\n"); + "Receive Queue Filtering disabled\n"); return -EOPNOTSUPP; } } /* Sets the properties for arbitrary filer rule - * to the first 4 Layer 4 Bytes */ + * to the first 4 Layer 4 Bytes + */ regs->rbifx = 0xC0C1C2C3; return 0; } @@ -870,14 +894,14 @@ static void gfar_set_mask(u32 mask, struct filer_table *tab) static void gfar_set_parse_bits(u32 value, u32 mask, struct filer_table *tab) { gfar_set_mask(mask, tab); - tab->fe[tab->index].ctrl = RQFCR_CMP_EXACT | RQFCR_PID_PARSE - | RQFCR_AND; + tab->fe[tab->index].ctrl = RQFCR_CMP_EXACT | RQFCR_PID_PARSE | + RQFCR_AND; tab->fe[tab->index].prop = value; tab->index++; } static void gfar_set_general_attribute(u32 value, u32 mask, u32 flag, - struct filer_table *tab) + struct filer_table *tab) { gfar_set_mask(mask, tab); tab->fe[tab->index].ctrl = RQFCR_CMP_EXACT | RQFCR_AND | flag; @@ -885,8 +909,7 @@ static void gfar_set_general_attribute(u32 value, u32 mask, u32 flag, tab->index++; } -/* - * For setting a tuple of value and mask of type flag +/* For setting a tuple of value and mask of type flag * Example: * IP-Src = 10.0.0.0/255.0.0.0 * value: 0x0A000000 mask: FF000000 flag: RQFPR_IPV4 @@ -901,7 +924,7 @@ static void gfar_set_general_attribute(u32 value, u32 mask, u32 flag, * Further the all masks are one-padded for better hardware efficiency. 
*/ static void gfar_set_attribute(u32 value, u32 mask, u32 flag, - struct filer_table *tab) + struct filer_table *tab) { switch (flag) { /* 3bit */ @@ -959,7 +982,8 @@ static void gfar_set_attribute(u32 value, u32 mask, u32 flag, /* Translates value and mask for UDP, TCP or SCTP */ static void gfar_set_basic_ip(struct ethtool_tcpip4_spec *value, - struct ethtool_tcpip4_spec *mask, struct filer_table *tab) + struct ethtool_tcpip4_spec *mask, + struct filer_table *tab) { gfar_set_attribute(value->ip4src, mask->ip4src, RQFCR_PID_SIA, tab); gfar_set_attribute(value->ip4dst, mask->ip4dst, RQFCR_PID_DIA, tab); @@ -970,97 +994,92 @@ static void gfar_set_basic_ip(struct ethtool_tcpip4_spec *value, /* Translates value and mask for RAW-IP4 */ static void gfar_set_user_ip(struct ethtool_usrip4_spec *value, - struct ethtool_usrip4_spec *mask, struct filer_table *tab) + struct ethtool_usrip4_spec *mask, + struct filer_table *tab) { gfar_set_attribute(value->ip4src, mask->ip4src, RQFCR_PID_SIA, tab); gfar_set_attribute(value->ip4dst, mask->ip4dst, RQFCR_PID_DIA, tab); gfar_set_attribute(value->tos, mask->tos, RQFCR_PID_TOS, tab); gfar_set_attribute(value->proto, mask->proto, RQFCR_PID_L4P, tab); gfar_set_attribute(value->l4_4_bytes, mask->l4_4_bytes, RQFCR_PID_ARB, - tab); + tab); } /* Translates value and mask for ETHER spec */ static void gfar_set_ether(struct ethhdr *value, struct ethhdr *mask, - struct filer_table *tab) + struct filer_table *tab) { u32 upper_temp_mask = 0; u32 lower_temp_mask = 0; + /* Source address */ if (!is_broadcast_ether_addr(mask->h_source)) { - if (is_zero_ether_addr(mask->h_source)) { upper_temp_mask = 0xFFFFFFFF; lower_temp_mask = 0xFFFFFFFF; } else { - upper_temp_mask = mask->h_source[0] << 16 - | mask->h_source[1] << 8 - | mask->h_source[2]; - lower_temp_mask = mask->h_source[3] << 16 - | mask->h_source[4] << 8 - | mask->h_source[5]; + upper_temp_mask = mask->h_source[0] << 16 | + mask->h_source[1] << 8 | + mask->h_source[2]; + lower_temp_mask = mask->h_source[3] << 16 | + mask->h_source[4] << 8 | + mask->h_source[5]; } /* Upper 24bit */ - gfar_set_attribute( - value->h_source[0] << 16 | value->h_source[1] - << 8 | value->h_source[2], - upper_temp_mask, RQFCR_PID_SAH, tab); + gfar_set_attribute(value->h_source[0] << 16 | + value->h_source[1] << 8 | + value->h_source[2], + upper_temp_mask, RQFCR_PID_SAH, tab); /* And the same for the lower part */ - gfar_set_attribute( - value->h_source[3] << 16 | value->h_source[4] - << 8 | value->h_source[5], - lower_temp_mask, RQFCR_PID_SAL, tab); + gfar_set_attribute(value->h_source[3] << 16 | + value->h_source[4] << 8 | + value->h_source[5], + lower_temp_mask, RQFCR_PID_SAL, tab); } /* Destination address */ if (!is_broadcast_ether_addr(mask->h_dest)) { - /* Special for destination is limited broadcast */ - if ((is_broadcast_ether_addr(value->h_dest) - && is_zero_ether_addr(mask->h_dest))) { + if ((is_broadcast_ether_addr(value->h_dest) && + is_zero_ether_addr(mask->h_dest))) { gfar_set_parse_bits(RQFPR_EBC, RQFPR_EBC, tab); } else { - if (is_zero_ether_addr(mask->h_dest)) { upper_temp_mask = 0xFFFFFFFF; lower_temp_mask = 0xFFFFFFFF; } else { - upper_temp_mask = mask->h_dest[0] << 16 - | mask->h_dest[1] << 8 - | mask->h_dest[2]; - lower_temp_mask = mask->h_dest[3] << 16 - | mask->h_dest[4] << 8 - | mask->h_dest[5]; + upper_temp_mask = mask->h_dest[0] << 16 | + mask->h_dest[1] << 8 | + mask->h_dest[2]; + lower_temp_mask = mask->h_dest[3] << 16 | + mask->h_dest[4] << 8 | + mask->h_dest[5]; } /* Upper 24bit */ - gfar_set_attribute( - 
value->h_dest[0] << 16 - | value->h_dest[1] << 8 - | value->h_dest[2], - upper_temp_mask, RQFCR_PID_DAH, tab); + gfar_set_attribute(value->h_dest[0] << 16 | + value->h_dest[1] << 8 | + value->h_dest[2], + upper_temp_mask, RQFCR_PID_DAH, tab); /* And the same for the lower part */ - gfar_set_attribute( - value->h_dest[3] << 16 - | value->h_dest[4] << 8 - | value->h_dest[5], - lower_temp_mask, RQFCR_PID_DAL, tab); + gfar_set_attribute(value->h_dest[3] << 16 | + value->h_dest[4] << 8 | + value->h_dest[5], + lower_temp_mask, RQFCR_PID_DAL, tab); } } gfar_set_attribute(value->h_proto, mask->h_proto, RQFCR_PID_ETY, tab); - } /* Convert a rule to binary filter format of gianfar */ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, - struct filer_table *tab) + struct filer_table *tab) { u32 vlan = 0, vlan_mask = 0; u32 id = 0, id_mask = 0; u32 cfi = 0, cfi_mask = 0; u32 prio = 0, prio_mask = 0; - u32 old_index = tab->index; /* Check if vlan is wanted */ @@ -1076,13 +1095,16 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, id_mask = rule->m_ext.vlan_tci & VLAN_VID_MASK; cfi = rule->h_ext.vlan_tci & VLAN_CFI_MASK; cfi_mask = rule->m_ext.vlan_tci & VLAN_CFI_MASK; - prio = (rule->h_ext.vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; - prio_mask = (rule->m_ext.vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + prio = (rule->h_ext.vlan_tci & VLAN_PRIO_MASK) >> + VLAN_PRIO_SHIFT; + prio_mask = (rule->m_ext.vlan_tci & VLAN_PRIO_MASK) >> + VLAN_PRIO_SHIFT; if (cfi == VLAN_TAG_PRESENT && cfi_mask == VLAN_TAG_PRESENT) { vlan |= RQFPR_CFI; vlan_mask |= RQFPR_CFI; - } else if (cfi != VLAN_TAG_PRESENT && cfi_mask == VLAN_TAG_PRESENT) { + } else if (cfi != VLAN_TAG_PRESENT && + cfi_mask == VLAN_TAG_PRESENT) { vlan_mask |= RQFPR_CFI; } } @@ -1090,34 +1112,36 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, switch (rule->flow_type & ~FLOW_EXT) { case TCP_V4_FLOW: gfar_set_parse_bits(RQFPR_IPV4 | RQFPR_TCP | vlan, - RQFPR_IPV4 | RQFPR_TCP | vlan_mask, tab); + RQFPR_IPV4 | RQFPR_TCP | vlan_mask, tab); gfar_set_basic_ip(&rule->h_u.tcp_ip4_spec, - &rule->m_u.tcp_ip4_spec, tab); + &rule->m_u.tcp_ip4_spec, tab); break; case UDP_V4_FLOW: gfar_set_parse_bits(RQFPR_IPV4 | RQFPR_UDP | vlan, - RQFPR_IPV4 | RQFPR_UDP | vlan_mask, tab); + RQFPR_IPV4 | RQFPR_UDP | vlan_mask, tab); gfar_set_basic_ip(&rule->h_u.udp_ip4_spec, - &rule->m_u.udp_ip4_spec, tab); + &rule->m_u.udp_ip4_spec, tab); break; case SCTP_V4_FLOW: gfar_set_parse_bits(RQFPR_IPV4 | vlan, RQFPR_IPV4 | vlan_mask, - tab); + tab); gfar_set_attribute(132, 0, RQFCR_PID_L4P, tab); - gfar_set_basic_ip((struct ethtool_tcpip4_spec *) &rule->h_u, - (struct ethtool_tcpip4_spec *) &rule->m_u, tab); + gfar_set_basic_ip((struct ethtool_tcpip4_spec *)&rule->h_u, + (struct ethtool_tcpip4_spec *)&rule->m_u, + tab); break; case IP_USER_FLOW: gfar_set_parse_bits(RQFPR_IPV4 | vlan, RQFPR_IPV4 | vlan_mask, - tab); + tab); gfar_set_user_ip((struct ethtool_usrip4_spec *) &rule->h_u, - (struct ethtool_usrip4_spec *) &rule->m_u, tab); + (struct ethtool_usrip4_spec *) &rule->m_u, + tab); break; case ETHER_FLOW: if (vlan) gfar_set_parse_bits(vlan, vlan_mask, tab); gfar_set_ether((struct ethhdr *) &rule->h_u, - (struct ethhdr *) &rule->m_u, tab); + (struct ethhdr *) &rule->m_u, tab); break; default: return -1; @@ -1152,7 +1176,9 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, tab->fe[tab->index - 1].ctrl |= RQFCR_CLE; } - /* In rare cases the cache can be full while there is free space in hw */ + /* In rare cases 
the cache can be full while there is + * free space in hw + */ if (tab->index > MAX_FILER_CACHE_IDX - 1) return -EBUSY; @@ -1161,7 +1187,7 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, /* Copy size filer entries */ static void gfar_copy_filer_entries(struct gfar_filer_entry dst[0], - struct gfar_filer_entry src[0], s32 size) + struct gfar_filer_entry src[0], s32 size) { while (size > 0) { size--; @@ -1171,10 +1197,12 @@ static void gfar_copy_filer_entries(struct gfar_filer_entry dst[0], } /* Delete the contents of the filer-table between start and end - * and collapse them */ + * and collapse them + */ static int gfar_trim_filer_entries(u32 begin, u32 end, struct filer_table *tab) { int length; + if (end > MAX_FILER_CACHE_IDX || end < begin) return -EINVAL; @@ -1200,14 +1228,14 @@ static int gfar_trim_filer_entries(u32 begin, u32 end, struct filer_table *tab) /* Make space on the wanted location */ static int gfar_expand_filer_entries(u32 begin, u32 length, - struct filer_table *tab) + struct filer_table *tab) { - if (length == 0 || length + tab->index > MAX_FILER_CACHE_IDX || begin - > MAX_FILER_CACHE_IDX) + if (length == 0 || length + tab->index > MAX_FILER_CACHE_IDX || + begin > MAX_FILER_CACHE_IDX) return -EINVAL; gfar_copy_filer_entries(&(tab->fe[begin + length]), &(tab->fe[begin]), - tab->index - length + 1); + tab->index - length + 1); tab->index += length; return 0; @@ -1215,9 +1243,10 @@ static int gfar_expand_filer_entries(u32 begin, u32 length, static int gfar_get_next_cluster_start(int start, struct filer_table *tab) { - for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1); start++) { - if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) - == (RQFCR_AND | RQFCR_CLE)) + for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1); + start++) { + if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) == + (RQFCR_AND | RQFCR_CLE)) return start; } return -1; @@ -1225,16 +1254,16 @@ static int gfar_get_next_cluster_start(int start, struct filer_table *tab) static int gfar_get_next_cluster_end(int start, struct filer_table *tab) { - for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1); start++) { - if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) - == (RQFCR_CLE)) + for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1); + start++) { + if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) == + (RQFCR_CLE)) return start; } return -1; } -/* - * Uses hardwares clustering option to reduce +/* Uses hardwares clustering option to reduce * the number of filer table entries */ static void gfar_cluster_filer(struct filer_table *tab) @@ -1244,8 +1273,7 @@ static void gfar_cluster_filer(struct filer_table *tab) while ((i = gfar_get_next_cluster_start(++i, tab)) != -1) { j = i; while ((j = gfar_get_next_cluster_start(++j, tab)) != -1) { - /* - * The cluster entries self and the previous one + /* The cluster entries self and the previous one * (a mask) must be identical! */ if (tab->fe[i].ctrl != tab->fe[j].ctrl) @@ -1260,21 +1288,21 @@ static void gfar_cluster_filer(struct filer_table *tab) jend = gfar_get_next_cluster_end(j, tab); if (jend == -1 || iend == -1) break; - /* - * First we make some free space, where our cluster + + /* First we make some free space, where our cluster * element should be. Then we copy it there and finally * delete in from its old location. 
*/ - - if (gfar_expand_filer_entries(iend, (jend - j), tab) - == -EINVAL) + if (gfar_expand_filer_entries(iend, (jend - j), tab) == + -EINVAL) break; gfar_copy_filer_entries(&(tab->fe[iend + 1]), - &(tab->fe[jend + 1]), jend - j); + &(tab->fe[jend + 1]), jend - j); if (gfar_trim_filer_entries(jend - 1, - jend + (jend - j), tab) == -EINVAL) + jend + (jend - j), + tab) == -EINVAL) return; /* Mask out cluster bit */ @@ -1285,8 +1313,9 @@ static void gfar_cluster_filer(struct filer_table *tab) /* Swaps the masked bits of a1<>a2 and b1<>b2 */ static void gfar_swap_bits(struct gfar_filer_entry *a1, - struct gfar_filer_entry *a2, struct gfar_filer_entry *b1, - struct gfar_filer_entry *b2, u32 mask) + struct gfar_filer_entry *a2, + struct gfar_filer_entry *b1, + struct gfar_filer_entry *b2, u32 mask) { u32 temp[4]; temp[0] = a1->ctrl & mask; @@ -1305,13 +1334,12 @@ static void gfar_swap_bits(struct gfar_filer_entry *a1, b2->ctrl |= temp[2]; } -/* - * Generate a list consisting of masks values with their start and +/* Generate a list consisting of masks values with their start and * end of validity and block as indicator for parts belonging * together (glued by ANDs) in mask_table */ static u32 gfar_generate_mask_table(struct gfar_mask_entry *mask_table, - struct filer_table *tab) + struct filer_table *tab) { u32 i, and_index = 0, block_index = 1; @@ -1327,13 +1355,13 @@ static u32 gfar_generate_mask_table(struct gfar_mask_entry *mask_table, and_index++; } /* cluster starts and ends will be separated because they should - * hold their position */ + * hold their position + */ if (tab->fe[i].ctrl & RQFCR_CLE) block_index++; /* A not set AND indicates the end of a depended block */ if (!(tab->fe[i].ctrl & RQFCR_AND)) block_index++; - } mask_table[and_index - 1].end = i - 1; @@ -1341,14 +1369,13 @@ static u32 gfar_generate_mask_table(struct gfar_mask_entry *mask_table, return and_index; } -/* - * Sorts the entries of mask_table by the values of the masks. +/* Sorts the entries of mask_table by the values of the masks. * Important: The 0xFF80 flags of the first and last entry of a * block must hold their position (which queue, CLusterEnable, ReJEct, * AND) */ static void gfar_sort_mask_table(struct gfar_mask_entry *mask_table, - struct filer_table *temp_table, u32 and_index) + struct filer_table *temp_table, u32 and_index) { /* Pointer to compare function (_asc or _desc) */ int (*gfar_comp)(const void *, const void *); @@ -1359,16 +1386,16 @@ static void gfar_sort_mask_table(struct gfar_mask_entry *mask_table, gfar_comp = &gfar_comp_desc; for (i = 0; i < and_index; i++) { - if (prev != mask_table[i].block) { old_first = mask_table[start].start + 1; old_last = mask_table[i - 1].end; sort(mask_table + start, size, - sizeof(struct gfar_mask_entry), - gfar_comp, &gfar_swap); + sizeof(struct gfar_mask_entry), + gfar_comp, &gfar_swap); /* Toggle order for every block. This makes the - * thing more efficient! */ + * thing more efficient! 
+ */ if (gfar_comp == gfar_comp_desc) gfar_comp = &gfar_comp_asc; else @@ -1378,12 +1405,11 @@ static void gfar_sort_mask_table(struct gfar_mask_entry *mask_table, new_last = mask_table[i - 1].end; gfar_swap_bits(&temp_table->fe[new_first], - &temp_table->fe[old_first], - &temp_table->fe[new_last], - &temp_table->fe[old_last], - RQFCR_QUEUE | RQFCR_CLE | - RQFCR_RJE | RQFCR_AND - ); + &temp_table->fe[old_first], + &temp_table->fe[new_last], + &temp_table->fe[old_last], + RQFCR_QUEUE | RQFCR_CLE | + RQFCR_RJE | RQFCR_AND); start = i; size = 0; @@ -1391,11 +1417,9 @@ static void gfar_sort_mask_table(struct gfar_mask_entry *mask_table, size++; prev = mask_table[i].block; } - } -/* - * Reduces the number of masks needed in the filer table to save entries +/* Reduces the number of masks needed in the filer table to save entries * This is done by sorting the masks of a depended block. A depended block is * identified by gluing ANDs or CLE. The sorting order toggles after every * block. Of course entries in scope of a mask must change their location with @@ -1410,13 +1434,14 @@ static int gfar_optimize_filer_masks(struct filer_table *tab) s32 ret = 0; /* We need a copy of the filer table because - * we want to change its order */ + * we want to change its order + */ temp_table = kmemdup(tab, sizeof(*temp_table), GFP_KERNEL); if (temp_table == NULL) return -ENOMEM; mask_table = kcalloc(MAX_FILER_CACHE_IDX / 2 + 1, - sizeof(struct gfar_mask_entry), GFP_KERNEL); + sizeof(struct gfar_mask_entry), GFP_KERNEL); if (mask_table == NULL) { ret = -ENOMEM; @@ -1428,7 +1453,8 @@ static int gfar_optimize_filer_masks(struct filer_table *tab) gfar_sort_mask_table(mask_table, temp_table, and_index); /* Now we can copy the data from our duplicated filer table to - * the real one in the order the mask table says */ + * the real one in the order the mask table says + */ for (i = 0; i < and_index; i++) { size = mask_table[i].end - mask_table[i].start + 1; gfar_copy_filer_entries(&(tab->fe[j]), @@ -1437,7 +1463,8 @@ static int gfar_optimize_filer_masks(struct filer_table *tab) } /* And finally we just have to check for duplicated masks and drop the - * second ones */ + * second ones + */ for (i = 0; i < tab->index && i < MAX_FILER_CACHE_IDX; i++) { if (tab->fe[i].ctrl == 0x80) { previous_mask = i++; @@ -1448,7 +1475,8 @@ static int gfar_optimize_filer_masks(struct filer_table *tab) if (tab->fe[i].ctrl == 0x80) { if (tab->fe[i].prop == tab->fe[previous_mask].prop) { /* Two identical ones found! - * So drop the second one! */ + * So drop the second one! + */ gfar_trim_filer_entries(i, i, tab); } else /* Not identical! 
*/ @@ -1463,7 +1491,7 @@ end: kfree(temp_table); /* Write the bit-pattern from software's buffer to hardware registers */ static int gfar_write_filer_table(struct gfar_private *priv, - struct filer_table *tab) + struct filer_table *tab) { u32 i = 0; if (tab->index > MAX_FILER_IDX - 1) @@ -1473,13 +1501,15 @@ static int gfar_write_filer_table(struct gfar_private *priv, lock_rx_qs(priv); /* Fill regular entries */ - for (; i < MAX_FILER_IDX - 1 && (tab->fe[i].ctrl | tab->fe[i].ctrl); i++) + for (; i < MAX_FILER_IDX - 1 && (tab->fe[i].ctrl | tab->fe[i].ctrl); + i++) gfar_write_filer(priv, i, tab->fe[i].ctrl, tab->fe[i].prop); /* Fill the rest with fall-troughs */ for (; i < MAX_FILER_IDX - 1; i++) gfar_write_filer(priv, i, 0x60, 0xFFFFFFFF); /* Last entry must be default accept - * because that's what people expect */ + * because that's what people expect + */ gfar_write_filer(priv, i, 0x20, 0x0); unlock_rx_qs(priv); @@ -1488,21 +1518,21 @@ static int gfar_write_filer_table(struct gfar_private *priv, } static int gfar_check_capability(struct ethtool_rx_flow_spec *flow, - struct gfar_private *priv) + struct gfar_private *priv) { if (flow->flow_type & FLOW_EXT) { if (~flow->m_ext.data[0] || ~flow->m_ext.data[1]) netdev_warn(priv->ndev, - "User-specific data not supported!\n"); + "User-specific data not supported!\n"); if (~flow->m_ext.vlan_etype) netdev_warn(priv->ndev, - "VLAN-etype not supported!\n"); + "VLAN-etype not supported!\n"); } if (flow->flow_type == IP_USER_FLOW) if (flow->h_u.usr_ip4_spec.ip_ver != ETH_RX_NFC_IP4) netdev_warn(priv->ndev, - "IP-Version differing from IPv4 not supported!\n"); + "IP-Version differing from IPv4 not supported!\n"); return 0; } @@ -1520,15 +1550,18 @@ static int gfar_process_filer_changes(struct gfar_private *priv) return -ENOMEM; /* Now convert the existing filer data from flow_spec into - * filer tables binary format */ + * filer tables binary format + */ list_for_each_entry(j, &priv->rx_list.list, list) { ret = gfar_convert_to_filer(&j->fs, tab); if (ret == -EBUSY) { - netdev_err(priv->ndev, "Rule not added: No free space!\n"); + netdev_err(priv->ndev, + "Rule not added: No free space!\n"); goto end; } if (ret == -1) { - netdev_err(priv->ndev, "Rule not added: Unsupported Flow-type!\n"); + netdev_err(priv->ndev, + "Rule not added: Unsupported Flow-type!\n"); goto end; } } @@ -1540,9 +1573,9 @@ static int gfar_process_filer_changes(struct gfar_private *priv) gfar_optimize_filer_masks(tab); pr_debug("\n\tSummary:\n" - "\tData on hardware: %d\n" - "\tCompression rate: %d%%\n", - tab->index, 100 - (100 * tab->index) / i); + "\tData on hardware: %d\n" + "\tCompression rate: %d%%\n", + tab->index, 100 - (100 * tab->index) / i); /* Write everything to hardware */ ret = gfar_write_filer_table(priv, tab); @@ -1551,7 +1584,8 @@ static int gfar_process_filer_changes(struct gfar_private *priv) goto end; } -end: kfree(tab); +end: + kfree(tab); return ret; } @@ -1569,7 +1603,7 @@ static void gfar_invert_masks(struct ethtool_rx_flow_spec *flow) } static int gfar_add_cls(struct gfar_private *priv, - struct ethtool_rx_flow_spec *flow) + struct ethtool_rx_flow_spec *flow) { struct ethtool_flow_spec_container *temp, *comp; int ret = 0; @@ -1591,7 +1625,6 @@ static int gfar_add_cls(struct gfar_private *priv, list_add(&temp->list, &priv->rx_list.list); goto process; } else { - list_for_each_entry(comp, &priv->rx_list.list, list) { if (comp->fs.location > flow->location) { list_add_tail(&temp->list, &comp->list); @@ -1599,8 +1632,8 @@ static int gfar_add_cls(struct 
gfar_private *priv, } if (comp->fs.location == flow->location) { netdev_err(priv->ndev, - "Rule not added: ID %d not free!\n", - flow->location); + "Rule not added: ID %d not free!\n", + flow->location); ret = -EBUSY; goto clean_mem; } @@ -1642,7 +1675,6 @@ static int gfar_del_cls(struct gfar_private *priv, u32 loc) } return ret; - } static int gfar_get_cls(struct gfar_private *priv, struct ethtool_rxnfc *cmd) @@ -1663,7 +1695,7 @@ static int gfar_get_cls(struct gfar_private *priv, struct ethtool_rxnfc *cmd) } static int gfar_get_cls_all(struct gfar_private *priv, - struct ethtool_rxnfc *cmd, u32 *rule_locs) + struct ethtool_rxnfc *cmd, u32 *rule_locs) { struct ethtool_flow_spec_container *comp; u32 i = 0; @@ -1714,7 +1746,7 @@ static int gfar_set_nfc(struct net_device *dev, struct ethtool_rxnfc *cmd) } static int gfar_get_nfc(struct net_device *dev, struct ethtool_rxnfc *cmd, - u32 *rule_locs) + u32 *rule_locs) { struct gfar_private *priv = netdev_priv(dev); int ret = 0; @@ -1748,23 +1780,19 @@ static int gfar_get_ts_info(struct net_device *dev, struct gfar_private *priv = netdev_priv(dev); if (!(priv->device_flags & FSL_GIANFAR_DEV_HAS_TIMER)) { - info->so_timestamping = - SOF_TIMESTAMPING_RX_SOFTWARE | - SOF_TIMESTAMPING_SOFTWARE; + info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE | + SOF_TIMESTAMPING_SOFTWARE; info->phc_index = -1; return 0; } - info->so_timestamping = - SOF_TIMESTAMPING_TX_HARDWARE | - SOF_TIMESTAMPING_RX_HARDWARE | - SOF_TIMESTAMPING_RAW_HARDWARE; + info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE | + SOF_TIMESTAMPING_RX_HARDWARE | + SOF_TIMESTAMPING_RAW_HARDWARE; info->phc_index = gfar_phc_index; - info->tx_types = - (1 << HWTSTAMP_TX_OFF) | - (1 << HWTSTAMP_TX_ON); - info->rx_filters = - (1 << HWTSTAMP_FILTER_NONE) | - (1 << HWTSTAMP_FILTER_ALL); + info->tx_types = (1 << HWTSTAMP_TX_OFF) | + (1 << HWTSTAMP_TX_ON); + info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) | + (1 << HWTSTAMP_FILTER_ALL); return 0; } diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index 9ac14f804851..21c6574c5f15 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -185,7 +185,7 @@ static void mem_disp(u8 *addr, int size) for (; (u32) i < (u32) addr + size4Aling; i += 4) printk("%08x ", *((u32 *) (i))); for (; (u32) i < (u32) addr + size; i++) - printk("%02x", *((u8 *) (i))); + printk("%02x", *((i))); if (notAlign == 1) printk("\r\n"); } diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c index d496673f0908..3f4391bede81 100644 --- a/drivers/net/ethernet/hp/hp100.c +++ b/drivers/net/ethernet/hp/hp100.c @@ -1217,7 +1217,7 @@ static int hp100_init_rxpdl(struct net_device *dev, ringptr->pdl = pdlptr + 1; ringptr->pdl_paddr = virt_to_whatever(dev, pdlptr + 1); - ringptr->skb = (void *) NULL; + ringptr->skb = NULL; /* * Write address and length of first PDL Fragment (which is used for @@ -1243,7 +1243,7 @@ static int hp100_init_txpdl(struct net_device *dev, ringptr->pdl = pdlptr; /* +1; */ ringptr->pdl_paddr = virt_to_whatever(dev, pdlptr); /* +1 */ - ringptr->skb = (void *) NULL; + ringptr->skb = NULL; return roundup(MAX_TX_FRAG * 2 + 2, 4); } @@ -1628,7 +1628,7 @@ static void hp100_clean_txring(struct net_device *dev) /* Conversion to new PCI API : NOP */ pci_unmap_single(lp->pci_dev, (dma_addr_t) lp->txrhead->pdl[1], lp->txrhead->pdl[2], PCI_DMA_TODEVICE); dev_kfree_skb_any(lp->txrhead->skb); - lp->txrhead->skb = (void *) NULL; + lp->txrhead->skb = NULL; 
lp->txrhead = lp->txrhead->next; lp->txrcommit--; } diff --git a/drivers/net/ethernet/i825xx/lp486e.c b/drivers/net/ethernet/i825xx/lp486e.c index 6c2952c8ea15..3735bfa53600 100644 --- a/drivers/net/ethernet/i825xx/lp486e.c +++ b/drivers/net/ethernet/i825xx/lp486e.c @@ -629,10 +629,10 @@ init_i596(struct net_device *dev) { memcpy ((void *)lp->eth_addr, dev->dev_addr, 6); lp->set_add.command = CmdIASetup; - i596_add_cmd(dev, (struct i596_cmd *)&lp->set_add); + i596_add_cmd(dev, &lp->set_add); lp->tdr.command = CmdTDR; - i596_add_cmd(dev, (struct i596_cmd *)&lp->tdr); + i596_add_cmd(dev, &lp->tdr); if (lp->scb.command && i596_timeout(dev, "i82596 init", 200)) return 1; @@ -737,7 +737,7 @@ i596_cleanup_cmd(struct net_device *dev) { lp = netdev_priv(dev); while (lp->cmd_head) { - cmd = (struct i596_cmd *)lp->cmd_head; + cmd = lp->cmd_head; lp->cmd_head = pa_to_va(lp->cmd_head->pa_next); lp->cmd_backlog--; @@ -1281,7 +1281,7 @@ static void set_multicast_list(struct net_device *dev) { lp->i596_config[8] |= 0x01; } - i596_add_cmd(dev, (struct i596_cmd *) &lp->set_conf); + i596_add_cmd(dev, &lp->set_conf); } } diff --git a/drivers/net/ethernet/i825xx/sun3_82586.c b/drivers/net/ethernet/i825xx/sun3_82586.c index cae17f4bc93e..353f57f675d0 100644 --- a/drivers/net/ethernet/i825xx/sun3_82586.c +++ b/drivers/net/ethernet/i825xx/sun3_82586.c @@ -571,7 +571,7 @@ static int init586(struct net_device *dev) } #endif - ptr = alloc_rfa(dev,(void *)ptr); /* init receive-frame-area */ + ptr = alloc_rfa(dev,ptr); /* init receive-frame-area */ /* * alloc xmit-buffs / init xmit_cmds @@ -584,7 +584,7 @@ static int init586(struct net_device *dev) ptr = (char *) ptr + XMIT_BUFF_SIZE; p->xmit_buffs[i] = (struct tbd_struct *)ptr; /* TBD */ ptr = (char *) ptr + sizeof(struct tbd_struct); - if((void *)ptr > (void *)dev->mem_end) + if(ptr > (void *)dev->mem_end) { printk("%s: not enough shared-mem for your configuration!\n",dev->name); return 1; diff --git a/drivers/net/ethernet/ibm/ehea/ehea_qmr.c b/drivers/net/ethernet/ibm/ehea/ehea_qmr.c index 4fb47f14dbfe..cb66f574dc97 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea_qmr.c +++ b/drivers/net/ethernet/ibm/ehea/ehea_qmr.c @@ -376,9 +376,7 @@ int ehea_destroy_eq(struct ehea_eq *eq) return 0; } -/** - * allocates memory for a queue and registers pages in phyp - */ +/* allocates memory for a queue and registers pages in phyp */ static int ehea_qp_alloc_register(struct ehea_qp *qp, struct hw_queue *hw_queue, int nr_pages, int wqe_size, int act_nr_sges, struct ehea_adapter *adapter, int h_call_q_selector) diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c index ada720b42ff6..535f94fac4a1 100644 --- a/drivers/net/ethernet/intel/e100.c +++ b/drivers/net/ethernet/intel/e100.c @@ -1249,20 +1249,35 @@ static const struct firmware *e100_request_firmware(struct nic *nic) const struct firmware *fw = nic->fw; u8 timer, bundle, min_size; int err = 0; + bool required = false; /* do not load u-code for ICH devices */ if (nic->flags & ich) return NULL; - /* Search for ucode match against h/w revision */ - if (nic->mac == mac_82559_D101M) + /* Search for ucode match against h/w revision + * + * Based on comments in the source code for the FreeBSD fxp + * driver, the FIRMWARE_D102E ucode includes both CPUSaver and + * + * "fixes for bugs in the B-step hardware (specifically, bugs + * with Inline Receive)." + * + * So we must fail if it cannot be loaded. + * + * The other microcode files are only required for the optional + * CPUSaver feature. 
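The e100 change that starts here splits the microcode images into two groups: the D102E image becomes mandatory for 82551 parts because it carries B-step hardware fixes, while the D101M/D101S images only enable the optional CPUSaver feature, so a failed firmware request is reported but not treated as fatal. A rough sketch of that selection, assuming a made-up mac_type enum in place of the driver's nic state; the returned strings stand in for the driver's FIRMWARE_D101M/D101S/D102E name macros.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the driver's mac type codes. */
enum mac_type {
        MAC_82559_D101M,
        MAC_82559_D101S,
        MAC_82551_F,
        MAC_82551_10,
        MAC_OTHER
};

/* Returns the microcode image to request, or NULL when none is needed;
 * *required tells the caller whether a load failure must abort probe. */
static const char *pick_ucode(enum mac_type mac, bool *required)
{
        *required = false;

        switch (mac) {
        case MAC_82559_D101M:
                return "D101M";         /* optional: CPUSaver only */
        case MAC_82559_D101S:
                return "D101S";         /* optional: CPUSaver only */
        case MAC_82551_F:
        case MAC_82551_10:
                *required = true;       /* fixes B-step Inline Receive bugs */
                return "D102E";
        default:
                return NULL;            /* no ucode on other devices */
        }
}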
Nice to have, but no reason to fail. + */ + if (nic->mac == mac_82559_D101M) { fw_name = FIRMWARE_D101M; - else if (nic->mac == mac_82559_D101S) + } else if (nic->mac == mac_82559_D101S) { fw_name = FIRMWARE_D101S; - else if (nic->mac == mac_82551_F || nic->mac == mac_82551_10) + } else if (nic->mac == mac_82551_F || nic->mac == mac_82551_10) { fw_name = FIRMWARE_D102E; - else /* No ucode on other devices */ + required = true; + } else { /* No ucode on other devices */ return NULL; + } /* If the firmware has not previously been loaded, request a pointer * to it. If it was previously loaded, we are reinitializing the @@ -1273,10 +1288,17 @@ static const struct firmware *e100_request_firmware(struct nic *nic) err = request_firmware(&fw, fw_name, &nic->pdev->dev); if (err) { - netif_err(nic, probe, nic->netdev, - "Failed to load firmware \"%s\": %d\n", - fw_name, err); - return ERR_PTR(err); + if (required) { + netif_err(nic, probe, nic->netdev, + "Failed to load firmware \"%s\": %d\n", + fw_name, err); + return ERR_PTR(err); + } else { + netif_info(nic, probe, nic->netdev, + "CPUSaver disabled. Needs \"%s\": %d\n", + fw_name, err); + return NULL; + } } /* Firmware should be precisely UCODE_SIZE (words) plus three bytes diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c index 3103f0b6bf5e..736a7d987db5 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c +++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c @@ -1851,6 +1851,7 @@ static const struct ethtool_ops e1000_ethtool_ops = { .get_sset_count = e1000_get_sset_count, .get_coalesce = e1000_get_coalesce, .set_coalesce = e1000_set_coalesce, + .get_ts_info = ethtool_op_get_ts_info, }; void e1000_set_ethtool_ops(struct net_device *netdev) diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c index c526279e4927..3d6839528761 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_hw.c +++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c @@ -399,7 +399,7 @@ void e1000_set_media_type(struct e1000_hw *hw) } /** - * e1000_reset_hw: reset the hardware completely + * e1000_reset_hw - reset the hardware completely * @hw: Struct containing variables accessed by shared code * * Reset the transmit and receive units; mask and clear all interrupts. @@ -546,7 +546,7 @@ s32 e1000_reset_hw(struct e1000_hw *hw) } /** - * e1000_init_hw: Performs basic configuration of the adapter. + * e1000_init_hw - Performs basic configuration of the adapter. * @hw: Struct containing variables accessed by shared code * * Assumes that the controller has previously been reset and is in a @@ -2591,7 +2591,7 @@ s32 e1000_check_for_link(struct e1000_hw *hw) * @hw: Struct containing variables accessed by shared code * @speed: Speed of the connection * @duplex: Duplex setting of the connection - + * * Detects the current speed and duplex settings of the hardware. 
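Several hunks in this region only repair kernel-doc punctuation: the opening line must read "name - short description", every parameter needs an "@name: description" line, and a bare '-' or a missing ' * ' continuation breaks the generated documentation. For reference, a correctly formed block around a purely illustrative stub; example_get_speed and struct example_hw are invented for the example and are not part of the patch.

struct example_hw;

/**
 * example_get_speed - report the negotiated link speed
 * @hw: device state shared with the hardware access helpers
 * @speed: filled in with the detected speed, in Mb/s
 *
 * Detects the current speed setting of the hardware and stores it in
 * @speed.  Returns 0 on success or a negative errno on failure.
 */
static int example_get_speed(struct example_hw *hw, unsigned int *speed)
{
        (void)hw;               /* illustrative stub only */
        *speed = 1000;
        return 0;
}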
*/ s32 e1000_get_speed_and_duplex(struct e1000_hw *hw, u16 *speed, u16 *duplex) @@ -2959,7 +2959,7 @@ static s32 e1000_read_phy_reg_ex(struct e1000_hw *hw, u32 reg_addr, * @hw: Struct containing variables accessed by shared code * @reg_addr: address of the PHY register to write * @data: data to write to the PHY - + * * Writes a value to a PHY register */ s32 e1000_write_phy_reg(struct e1000_hw *hw, u32 reg_addr, u16 phy_data) diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c index 7483ca0a6282..3bfbb8df8989 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_main.c +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c @@ -721,9 +721,7 @@ void e1000_reset(struct e1000_adapter *adapter) e1000_release_manageability(adapter); } -/** - * Dump the eeprom for users having checksum issues - **/ +/* Dump the eeprom for users having checksum issues */ static void e1000_dump_eeprom(struct e1000_adapter *adapter) { struct net_device *netdev = adapter->netdev; @@ -1078,18 +1076,18 @@ static int __devinit e1000_probe(struct pci_dev *pdev, netdev->priv_flags |= IFF_SUPP_NOFCS; netdev->features |= netdev->hw_features; - netdev->hw_features |= NETIF_F_RXCSUM; - netdev->hw_features |= NETIF_F_RXALL; - netdev->hw_features |= NETIF_F_RXFCS; + netdev->hw_features |= (NETIF_F_RXCSUM | + NETIF_F_RXALL | + NETIF_F_RXFCS); if (pci_using_dac) { netdev->features |= NETIF_F_HIGHDMA; netdev->vlan_features |= NETIF_F_HIGHDMA; } - netdev->vlan_features |= NETIF_F_TSO; - netdev->vlan_features |= NETIF_F_HW_CSUM; - netdev->vlan_features |= NETIF_F_SG; + netdev->vlan_features |= (NETIF_F_TSO | + NETIF_F_HW_CSUM | + NETIF_F_SG); netdev->priv_flags |= IFF_UNICAST_FLT; @@ -3056,14 +3054,13 @@ static void e1000_tx_queue(struct e1000_adapter *adapter, mmiowb(); } -/** - * 82547 workaround to avoid controller hang in half-duplex environment. +/* 82547 workaround to avoid controller hang in half-duplex environment. * The workaround is to avoid queuing a large packet that would span * the internal Tx FIFO ring boundary by notifying the stack to resend * the packet at a later time. This gives the Tx FIFO an opportunity to * flush all packets. When that occurs, we reset the Tx FIFO pointers * to the beginning of the Tx FIFO. 
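The 82547 comment above describes the workaround only in prose: in half duplex, a frame that would wrap past the end of the internal Tx FIFO can hang the controller, so the driver asks the stack to requeue the packet and resets the FIFO pointers once the FIFO has drained. A simplified stand-alone sketch of the space check, assuming head/size bookkeeping similar to what the driver keeps; the real code also rounds the length up to the E1000_FIFO_HDR granularity and accounts for E1000_82547_PAD_LEN.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: decide whether a frame of tx_len bytes could wrap
 * the internal Tx FIFO and should therefore be requeued until the FIFO
 * has been flushed and its pointers reset. */
static bool must_defer_frame(uint32_t fifo_head, uint32_t fifo_size,
                             uint32_t tx_len, bool half_duplex)
{
        uint32_t fifo_space = fifo_size - fifo_head;

        if (!half_duplex)
                return false;   /* the hang only occurs in half duplex */

        return tx_len >= fifo_space;
}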
- **/ + */ #define E1000_FIFO_HDR 0x10 #define E1000_82547_PAD_LEN 0x3E0 diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c index 1f063dcd8f85..0b3bade957fd 100644 --- a/drivers/net/ethernet/intel/e1000e/82571.c +++ b/drivers/net/ethernet/intel/e1000e/82571.c @@ -1680,16 +1680,18 @@ static s32 e1000_check_for_serdes_link_82571(struct e1000_hw *hw) e_dbg("ANYSTATE -> DOWN\n"); } else { /* - * Check several times, if Sync and Config - * both are consistently 1 then simply ignore - * the Invalid bit and restart Autoneg + * Check several times, if SYNCH bit and CONFIG + * bit both are consistently 1 then simply ignore + * the IV bit and restart Autoneg */ for (i = 0; i < AN_RETRY_COUNT; i++) { udelay(10); rxcw = er32(RXCW); - if ((rxcw & E1000_RXCW_IV) && - !((rxcw & E1000_RXCW_SYNCH) && - (rxcw & E1000_RXCW_C))) { + if ((rxcw & E1000_RXCW_SYNCH) && + (rxcw & E1000_RXCW_C)) + continue; + + if (rxcw & E1000_RXCW_IV) { mac->serdes_has_link = false; mac->serdes_link_state = e1000_serdes_link_down; diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h index 6e6fffb34581..cd153326c3cf 100644 --- a/drivers/net/ethernet/intel/e1000e/e1000.h +++ b/drivers/net/ethernet/intel/e1000e/e1000.h @@ -514,6 +514,7 @@ extern void e1000e_set_interrupt_capability(struct e1000_adapter *adapter); extern void e1000e_reset_interrupt_capability(struct e1000_adapter *adapter); extern void e1000e_get_hw_control(struct e1000_adapter *adapter); extern void e1000e_release_hw_control(struct e1000_adapter *adapter); +extern void e1000e_write_itr(struct e1000_adapter *adapter, u32 itr); extern unsigned int copybreak; diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c index 905e2147d918..0349e2478df8 100644 --- a/drivers/net/ethernet/intel/e1000e/ethtool.c +++ b/drivers/net/ethernet/intel/e1000e/ethtool.c @@ -1897,7 +1897,6 @@ static int e1000_set_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec) { struct e1000_adapter *adapter = netdev_priv(netdev); - struct e1000_hw *hw = &adapter->hw; if ((ec->rx_coalesce_usecs > E1000_MAX_ITR_USECS) || ((ec->rx_coalesce_usecs > 4) && @@ -1916,9 +1915,9 @@ static int e1000_set_coalesce(struct net_device *netdev, } if (adapter->itr_setting != 0) - ew32(ITR, 1000000000 / (adapter->itr * 256)); + e1000e_write_itr(adapter, adapter->itr); else - ew32(ITR, 0); + e1000e_write_itr(adapter, 0); return 0; } @@ -2062,6 +2061,7 @@ static const struct ethtool_ops e1000_ethtool_ops = { .get_coalesce = e1000_get_coalesce, .set_coalesce = e1000_set_coalesce, .get_rxnfc = e1000_get_rxnfc, + .get_ts_info = ethtool_op_get_ts_info, }; void e1000e_set_ethtool_ops(struct net_device *netdev) diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 623e30b9964d..95b245310f17 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -2159,7 +2159,7 @@ void e1000e_release_hw_control(struct e1000_adapter *adapter) } /** - * @e1000_alloc_ring - allocate memory for a ring structure + * e1000_alloc_ring_dma - allocate memory for a ring structure **/ static int e1000_alloc_ring_dma(struct e1000_adapter *adapter, struct e1000_ring *ring) @@ -2474,6 +2474,30 @@ set_itr_now: } /** + * e1000e_write_itr - write the ITR value to the appropriate registers + * @adapter: address of board private structure + * @itr: new ITR value to program + * + * e1000e_write_itr determines 
if the adapter is in MSI-X mode + * and, if so, writes the EITR registers with the ITR value. + * Otherwise, it writes the ITR value into the ITR register. + **/ +void e1000e_write_itr(struct e1000_adapter *adapter, u32 itr) +{ + struct e1000_hw *hw = &adapter->hw; + u32 new_itr = itr ? 1000000000 / (itr * 256) : 0; + + if (adapter->msix_entries) { + int vector; + + for (vector = 0; vector < adapter->num_vectors; vector++) + writel(new_itr, hw->hw_addr + E1000_EITR_82574(vector)); + } else { + ew32(ITR, new_itr); + } +} + +/** * e1000_alloc_queues - Allocate memory for all rings * @adapter: board private structure to initialize **/ @@ -3059,7 +3083,7 @@ static void e1000_configure_rx(struct e1000_adapter *adapter) /* irq moderation */ ew32(RADV, adapter->rx_abs_int_delay); if ((adapter->itr_setting != 0) && (adapter->itr != 0)) - ew32(ITR, 1000000000 / (adapter->itr * 256)); + e1000e_write_itr(adapter, adapter->itr); ctrl_ext = er32(CTRL_EXT); /* Auto-Mask interrupts upon ICR access */ @@ -3486,14 +3510,14 @@ void e1000e_reset(struct e1000_adapter *adapter) dev_info(&adapter->pdev->dev, "Interrupt Throttle Rate turned off\n"); adapter->flags2 |= FLAG2_DISABLE_AIM; - ew32(ITR, 0); + e1000e_write_itr(adapter, 0); } } else if (adapter->flags2 & FLAG2_DISABLE_AIM) { dev_info(&adapter->pdev->dev, "Interrupt Throttle Rate turned on\n"); adapter->flags2 &= ~FLAG2_DISABLE_AIM; adapter->itr = 20000; - ew32(ITR, 1000000000 / (adapter->itr * 256)); + e1000e_write_itr(adapter, adapter->itr); } } @@ -4576,7 +4600,7 @@ link_up: adapter->gorc - adapter->gotc) / 10000; u32 itr = goc > 0 ? (dif * 6000 / goc + 2000) : 8000; - ew32(ITR, 1000000000 / (itr * 256)); + e1000e_write_itr(adapter, itr); } /* Cause software interrupt to ensure Rx ring is cleaned */ @@ -6191,7 +6215,8 @@ static int __devinit e1000_probe(struct pci_dev *pdev, } if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw)) - e_info("PHY reset is blocked due to SOL/IDER session.\n"); + dev_info(&pdev->dev, + "PHY reset is blocked due to SOL/IDER session.\n"); /* Set initial default active device features */ netdev->features = (NETIF_F_SG | @@ -6241,7 +6266,7 @@ static int __devinit e1000_probe(struct pci_dev *pdev, if (e1000_validate_nvm_checksum(&adapter->hw) >= 0) break; if (i == 2) { - e_err("The NVM Checksum Is Not Valid\n"); + dev_err(&pdev->dev, "The NVM Checksum Is Not Valid\n"); err = -EIO; goto err_eeprom; } @@ -6251,13 +6276,15 @@ static int __devinit e1000_probe(struct pci_dev *pdev, /* copy the MAC address */ if (e1000e_read_mac_addr(&adapter->hw)) - e_err("NVM Read Error while reading MAC address\n"); + dev_err(&pdev->dev, + "NVM Read Error while reading MAC address\n"); memcpy(netdev->dev_addr, adapter->hw.mac.addr, netdev->addr_len); memcpy(netdev->perm_addr, adapter->hw.mac.addr, netdev->addr_len); if (!is_valid_ether_addr(netdev->perm_addr)) { - e_err("Invalid MAC Address: %pM\n", netdev->perm_addr); + dev_err(&pdev->dev, "Invalid MAC Address: %pM\n", + netdev->perm_addr); err = -EIO; goto err_eeprom; } diff --git a/drivers/net/ethernet/intel/e1000e/param.c b/drivers/net/ethernet/intel/e1000e/param.c index 55cc1565bc2f..dfbfa7fd98c3 100644 --- a/drivers/net/ethernet/intel/e1000e/param.c +++ b/drivers/net/ethernet/intel/e1000e/param.c @@ -199,16 +199,19 @@ static int __devinit e1000_validate_option(unsigned int *value, case enable_option: switch (*value) { case OPTION_ENABLED: - e_info("%s Enabled\n", opt->name); + dev_info(&adapter->pdev->dev, "%s Enabled\n", + opt->name); return 0; case OPTION_DISABLED: - 
e_info("%s Disabled\n", opt->name); + dev_info(&adapter->pdev->dev, "%s Disabled\n", + opt->name); return 0; } break; case range_option: if (*value >= opt->arg.r.min && *value <= opt->arg.r.max) { - e_info("%s set to %i\n", opt->name, *value); + dev_info(&adapter->pdev->dev, "%s set to %i\n", + opt->name, *value); return 0; } break; @@ -220,7 +223,8 @@ static int __devinit e1000_validate_option(unsigned int *value, ent = &opt->arg.l.p[i]; if (*value == ent->i) { if (ent->str[0] != '\0') - e_info("%s\n", ent->str); + dev_info(&adapter->pdev->dev, "%s\n", + ent->str); return 0; } } @@ -230,8 +234,8 @@ static int __devinit e1000_validate_option(unsigned int *value, BUG(); } - e_info("Invalid %s value specified (%i) %s\n", opt->name, *value, - opt->err); + dev_info(&adapter->pdev->dev, "Invalid %s value specified (%i) %s\n", + opt->name, *value, opt->err); *value = opt->def; return -1; } @@ -251,8 +255,10 @@ void __devinit e1000e_check_options(struct e1000_adapter *adapter) int bd = adapter->bd_number; if (bd >= E1000_MAX_NIC) { - e_notice("Warning: no configuration for board #%i\n", bd); - e_notice("Using defaults for all values\n"); + dev_notice(&adapter->pdev->dev, + "Warning: no configuration for board #%i\n", bd); + dev_notice(&adapter->pdev->dev, + "Using defaults for all values\n"); } { /* Transmit Interrupt Delay */ @@ -366,27 +372,32 @@ void __devinit e1000e_check_options(struct e1000_adapter *adapter) * default values */ if (adapter->itr > 4) - e_info("%s set to default %d\n", opt.name, - adapter->itr); + dev_info(&adapter->pdev->dev, + "%s set to default %d\n", opt.name, + adapter->itr); } adapter->itr_setting = adapter->itr; switch (adapter->itr) { case 0: - e_info("%s turned off\n", opt.name); + dev_info(&adapter->pdev->dev, "%s turned off\n", + opt.name); break; case 1: - e_info("%s set to dynamic mode\n", opt.name); + dev_info(&adapter->pdev->dev, + "%s set to dynamic mode\n", opt.name); adapter->itr = 20000; break; case 3: - e_info("%s set to dynamic conservative mode\n", - opt.name); + dev_info(&adapter->pdev->dev, + "%s set to dynamic conservative mode\n", + opt.name); adapter->itr = 20000; break; case 4: - e_info("%s set to simplified (2000-8000 ints) mode\n", - opt.name); + dev_info(&adapter->pdev->dev, + "%s set to simplified (2000-8000 ints) mode\n", + opt.name); break; default: /* diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h index 35d1e4f2c92c..10efcd88dca0 100644 --- a/drivers/net/ethernet/intel/igb/e1000_regs.h +++ b/drivers/net/ethernet/intel/igb/e1000_regs.h @@ -117,6 +117,7 @@ /* TX Rate Limit Registers */ #define E1000_RTTDQSEL 0x3604 /* Tx Desc Plane Queue Select - WO */ +#define E1000_RTTBCNRM 0x3690 /* Tx BCN Rate-scheduler MMW */ #define E1000_RTTBCNRC 0x36B0 /* Tx BCN Rate-Scheduler Config - WO */ /* Split and Replication RX Control - RW */ diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h index ae6d3f393a54..9e572dd29ab2 100644 --- a/drivers/net/ethernet/intel/igb/igb.h +++ b/drivers/net/ethernet/intel/igb/igb.h @@ -65,19 +65,30 @@ struct igb_adapter; #define MAX_Q_VECTORS 8 /* Transmit and receive queues */ -#define IGB_MAX_RX_QUEUES ((adapter->vfs_allocated_count ? 2 : \ - (hw->mac.type > e1000_82575 ? 
8 : 4))) -#define IGB_MAX_RX_QUEUES_I210 4 +#define IGB_MAX_RX_QUEUES 8 +#define IGB_MAX_RX_QUEUES_82575 4 #define IGB_MAX_RX_QUEUES_I211 2 -#define IGB_MAX_TX_QUEUES 16 -#define IGB_MAX_TX_QUEUES_I210 4 -#define IGB_MAX_TX_QUEUES_I211 2 +#define IGB_MAX_TX_QUEUES 8 #define IGB_MAX_VF_MC_ENTRIES 30 #define IGB_MAX_VF_FUNCTIONS 8 #define IGB_MAX_VFTA_ENTRIES 128 #define IGB_82576_VF_DEV_ID 0x10CA #define IGB_I350_VF_DEV_ID 0x1520 +/* NVM version defines */ +#define IGB_MAJOR_MASK 0xF000 +#define IGB_MINOR_MASK 0x0FF0 +#define IGB_BUILD_MASK 0x000F +#define IGB_COMB_VER_MASK 0x00FF +#define IGB_MAJOR_SHIFT 12 +#define IGB_MINOR_SHIFT 4 +#define IGB_COMB_VER_SHFT 8 +#define IGB_NVM_VER_INVALID 0xFFFF +#define IGB_ETRACK_SHIFT 16 +#define NVM_ETRACK_WORD 0x0042 +#define NVM_COMB_VER_OFF 0x0083 +#define NVM_COMB_VER_PTR 0x003d + struct vf_data_storage { unsigned char vf_mac_addresses[ETH_ALEN]; u16 vf_mc_hashes[IGB_MAX_VF_MC_ENTRIES]; @@ -371,6 +382,7 @@ struct igb_adapter { spinlock_t tmreg_lock; struct cyclecounter cc; struct timecounter tc; + char fw_version[32]; }; #define IGB_FLAG_HAS_MSI (1 << 0) @@ -420,6 +432,7 @@ extern void igb_update_stats(struct igb_adapter *, struct rtnl_link_stats64 *); extern bool igb_has_link(struct igb_adapter *adapter); extern void igb_set_ethtool_ops(struct net_device *); extern void igb_power_up_link(struct igb_adapter *); +extern void igb_set_fw_version(struct igb_adapter *); #ifdef CONFIG_IGB_PTP extern void igb_ptp_init(struct igb_adapter *adapter); extern void igb_ptp_remove(struct igb_adapter *adapter); diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index 812d4f963bd1..a19c84cad0e9 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -710,6 +710,7 @@ static int igb_set_eeprom(struct net_device *netdev, if ((ret_val == 0) && ((first_word <= NVM_CHECKSUM_REG))) hw->nvm.ops.update(hw); + igb_set_fw_version(adapter); kfree(eeprom_buff); return ret_val; } @@ -718,20 +719,16 @@ static void igb_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo) { struct igb_adapter *adapter = netdev_priv(netdev); - u16 eeprom_data; strlcpy(drvinfo->driver, igb_driver_name, sizeof(drvinfo->driver)); strlcpy(drvinfo->version, igb_driver_version, sizeof(drvinfo->version)); - /* EEPROM image version # is reported as firmware version # for - * 82575 controllers */ - adapter->hw.nvm.ops.read(&adapter->hw, 5, 1, &eeprom_data); - snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), - "%d.%d-%d", - (eeprom_data & 0xF000) >> 12, - (eeprom_data & 0x0FF0) >> 4, - eeprom_data & 0x000F); - + /* + * EEPROM image version # is reported as firmware version # for + * 82575 controllers + */ + strlcpy(drvinfo->fw_version, adapter->fw_version, + sizeof(drvinfo->fw_version)); strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), sizeof(drvinfo->bus_info)); drvinfo->n_stats = IGB_STATS_LEN; @@ -2271,6 +2268,38 @@ static void igb_ethtool_complete(struct net_device *netdev) pm_runtime_put(&adapter->pdev->dev); } +#ifdef CONFIG_IGB_PTP +static int igb_ethtool_get_ts_info(struct net_device *dev, + struct ethtool_ts_info *info) +{ + struct igb_adapter *adapter = netdev_priv(dev); + + info->so_timestamping = + SOF_TIMESTAMPING_TX_HARDWARE | + SOF_TIMESTAMPING_RX_HARDWARE | + SOF_TIMESTAMPING_RAW_HARDWARE; + + if (adapter->ptp_clock) + info->phc_index = ptp_clock_index(adapter->ptp_clock); + else + info->phc_index = -1; + + info->tx_types = + (1 << HWTSTAMP_TX_OFF) | 
+ (1 << HWTSTAMP_TX_ON); + + info->rx_filters = + (1 << HWTSTAMP_FILTER_NONE) | + (1 << HWTSTAMP_FILTER_ALL) | + (1 << HWTSTAMP_FILTER_SOME) | + (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) | + (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) | + (1 << HWTSTAMP_FILTER_PTP_V2_EVENT); + + return 0; +} + +#endif static const struct ethtool_ops igb_ethtool_ops = { .get_settings = igb_get_settings, .set_settings = igb_set_settings, @@ -2299,6 +2328,9 @@ static const struct ethtool_ops igb_ethtool_ops = { .set_coalesce = igb_set_coalesce, .begin = igb_ethtool_begin, .complete = igb_ethtool_complete, +#ifdef CONFIG_IGB_PTP + .get_ts_info = igb_ethtool_get_ts_info, +#endif }; void igb_set_ethtool_ops(struct net_device *netdev) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index dd3bfe8cd36c..1050411e7ca3 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -59,9 +59,9 @@ #endif #include "igb.h" -#define MAJ 3 -#define MIN 4 -#define BUILD 7 +#define MAJ 4 +#define MIN 0 +#define BUILD 1 #define DRV_VERSION __stringify(MAJ) "." __stringify(MIN) "." \ __stringify(BUILD) "-k" char igb_driver_name[] = "igb"; @@ -1048,11 +1048,6 @@ static int igb_set_interrupt_capability(struct igb_adapter *adapter) if (!(adapter->flags & IGB_FLAG_QUEUE_PAIRS)) numvecs += adapter->num_tx_queues; - /* i210 and i211 can only have 4 MSIX vectors for rx/tx queues. */ - if ((adapter->hw.mac.type == e1000_i210) - || (adapter->hw.mac.type == e1000_i211)) - numvecs = 4; - /* store the number of vectors reserved for queues */ adapter->num_q_vectors = numvecs; @@ -1505,11 +1500,12 @@ static void igb_configure(struct igb_adapter *adapter) **/ void igb_power_up_link(struct igb_adapter *adapter) { + igb_reset_phy(&adapter->hw); + if (adapter->hw.phy.media_type == e1000_media_type_copper) igb_power_up_phy_copper(&adapter->hw); else igb_power_up_serdes_link_82575(&adapter->hw); - igb_reset_phy(&adapter->hw); } /** @@ -1821,6 +1817,69 @@ static const struct net_device_ops igb_netdev_ops = { }; /** + * igb_set_fw_version - Configure version string for ethtool + * @adapter: adapter struct + * + **/ +void igb_set_fw_version(struct igb_adapter *adapter) +{ + struct e1000_hw *hw = &adapter->hw; + u16 eeprom_verh, eeprom_verl, comb_verh, comb_verl, comb_offset; + u16 major, build, patch, fw_version; + u32 etrack_id; + + hw->nvm.ops.read(hw, 5, 1, &fw_version); + if (adapter->hw.mac.type != e1000_i211) { + hw->nvm.ops.read(hw, NVM_ETRACK_WORD, 1, &eeprom_verh); + hw->nvm.ops.read(hw, (NVM_ETRACK_WORD + 1), 1, &eeprom_verl); + etrack_id = (eeprom_verh << IGB_ETRACK_SHIFT) | eeprom_verl; + + /* combo image version needs to be found */ + hw->nvm.ops.read(hw, NVM_COMB_VER_PTR, 1, &comb_offset); + if ((comb_offset != 0x0) && + (comb_offset != IGB_NVM_VER_INVALID)) { + hw->nvm.ops.read(hw, (NVM_COMB_VER_OFF + comb_offset + + 1), 1, &comb_verh); + hw->nvm.ops.read(hw, (NVM_COMB_VER_OFF + comb_offset), + 1, &comb_verl); + + /* Only display Option Rom if it exists and is valid */ + if ((comb_verh && comb_verl) && + ((comb_verh != IGB_NVM_VER_INVALID) && + (comb_verl != IGB_NVM_VER_INVALID))) { + major = comb_verl >> IGB_COMB_VER_SHFT; + build = (comb_verl << IGB_COMB_VER_SHFT) | + (comb_verh >> IGB_COMB_VER_SHFT); + patch = comb_verh & IGB_COMB_VER_MASK; + snprintf(adapter->fw_version, + sizeof(adapter->fw_version), + "%d.%d%d, 0x%08x, %d.%d.%d", + (fw_version & IGB_MAJOR_MASK) >> + IGB_MAJOR_SHIFT, + (fw_version & IGB_MINOR_MASK) >> + IGB_MINOR_SHIFT, + 
(fw_version & IGB_BUILD_MASK), + etrack_id, major, build, patch); + goto out; + } + } + snprintf(adapter->fw_version, sizeof(adapter->fw_version), + "%d.%d%d, 0x%08x", + (fw_version & IGB_MAJOR_MASK) >> IGB_MAJOR_SHIFT, + (fw_version & IGB_MINOR_MASK) >> IGB_MINOR_SHIFT, + (fw_version & IGB_BUILD_MASK), etrack_id); + } else { + snprintf(adapter->fw_version, sizeof(adapter->fw_version), + "%d.%d%d", + (fw_version & IGB_MAJOR_MASK) >> IGB_MAJOR_SHIFT, + (fw_version & IGB_MINOR_MASK) >> IGB_MINOR_SHIFT, + (fw_version & IGB_BUILD_MASK)); + } +out: + return; +} + +/** * igb_probe - Device Initialization Routine * @pdev: PCI device information struct * @ent: entry in igb_pci_tbl @@ -2030,6 +2089,9 @@ static int __devinit igb_probe(struct pci_dev *pdev, goto err_eeprom; } + /* get firmware version for ethtool -i */ + igb_set_fw_version(adapter); + setup_timer(&adapter->watchdog_timer, igb_watchdog, (unsigned long) adapter); setup_timer(&adapter->phy_info_timer, igb_update_phy_info, @@ -2338,6 +2400,7 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter) struct e1000_hw *hw = &adapter->hw; struct net_device *netdev = adapter->netdev; struct pci_dev *pdev = adapter->pdev; + u32 max_rss_queues; pci_read_config_word(pdev, PCI_COMMAND, &hw->bus.pci_cmd_word); @@ -2370,40 +2433,69 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter) } else adapter->vfs_allocated_count = max_vfs; break; - case e1000_i210: - case e1000_i211: - adapter->vfs_allocated_count = 0; - break; default: break; } #endif /* CONFIG_PCI_IOV */ + + /* Determine the maximum number of RSS queues supported. */ switch (hw->mac.type) { + case e1000_i211: + max_rss_queues = IGB_MAX_RX_QUEUES_I211; + break; + case e1000_82575: case e1000_i210: - adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I210, - num_online_cpus()); + max_rss_queues = IGB_MAX_RX_QUEUES_82575; + break; + case e1000_i350: + /* I350 cannot do RSS and SR-IOV at the same time */ + if (!!adapter->vfs_allocated_count) { + max_rss_queues = 1; + break; + } + /* fall through */ + case e1000_82576: + if (!!adapter->vfs_allocated_count) { + max_rss_queues = 2; + break; + } + /* fall through */ + case e1000_82580: + default: + max_rss_queues = IGB_MAX_RX_QUEUES; break; + } + + adapter->rss_queues = min_t(u32, max_rss_queues, num_online_cpus()); + + /* Determine if we need to pair queues. */ + switch (hw->mac.type) { + case e1000_82575: case e1000_i211: - adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I211, - num_online_cpus()); + /* Device supports enough interrupts without queue pairing. */ break; + case e1000_82576: + /* + * If VFs are going to be allocated with RSS queues then we + * should pair the queues in order to conserve interrupts due + * to limited supply. + */ + if ((adapter->rss_queues > 1) && + (adapter->vfs_allocated_count > 6)) + adapter->flags |= IGB_FLAG_QUEUE_PAIRS; + /* fall through */ + case e1000_82580: + case e1000_i350: + case e1000_i210: default: - adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES, - num_online_cpus()); + /* + * If rss_queues > half of max_rss_queues, pair the queues in + * order to conserve interrupts due to limited supply. 
+ */ + if (adapter->rss_queues > (max_rss_queues / 2)) + adapter->flags |= IGB_FLAG_QUEUE_PAIRS; break; } - /* i350 cannot do RSS and SR-IOV at the same time */ - if (hw->mac.type == e1000_i350 && adapter->vfs_allocated_count) - adapter->rss_queues = 1; - - /* - * if rss_queues > 4 or vfs are going to be allocated with rss_queues - * then we should combine the queues into a queue pair in order to - * conserve interrupts due to limited supply - */ - if ((adapter->rss_queues > 4) || - ((adapter->rss_queues > 1) && (adapter->vfs_allocated_count > 6))) - adapter->flags |= IGB_FLAG_QUEUE_PAIRS; /* Setup and initialize a copy of the hw vlan table array */ adapter->shadow_vfta = kzalloc(sizeof(u32) * @@ -4917,7 +5009,7 @@ static int igb_vf_configure(struct igb_adapter *adapter, int vf) unsigned int device_id; u16 thisvf_devfn; - random_ether_addr(mac_addr); + eth_random_addr(mac_addr); igb_set_vf_mac(adapter, vf, mac_addr); switch (adapter->hw.mac.type) { @@ -5326,7 +5418,7 @@ static void igb_vf_reset_event(struct igb_adapter *adapter, u32 vf) /* generate a new mac address as we were hotplug removed/added */ if (!(adapter->vf_data[vf].flags & IGB_VF_FLAG_PF_SET_MAC)) - random_ether_addr(vf_mac); + eth_random_addr(vf_mac); /* process remaining reset events */ igb_vf_reset(adapter, vf); @@ -5686,6 +5778,7 @@ static void igb_tx_hwtstamp(struct igb_q_vector *q_vector, /** * igb_clean_tx_irq - Reclaim resources after transmit completes * @q_vector: pointer to q_vector containing needed info + * * returns true if ring is completely cleaned **/ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector) @@ -6997,6 +7090,11 @@ static void igb_set_vf_rate_limit(struct e1000_hw *hw, int vf, int tx_rate, } wr32(E1000_RTTDQSEL, vf); /* vf X uses queue X */ + /* + * Set global transmit compensation time to the MMW_SIZE in RTTBCNRM + * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported. 
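The igb_set_vf_rate_limit() hunk being touched here programs the per-VF transmit rate scheduler in three steps: select the VF's queue through RTTDQSEL, set the global transmit compensation time (MMW) in the newly defined RTTBCNRM, then write the rate configuration to RTTBCNRC. A sketch of that ordering with the register offsets taken from the e1000_regs.h hunk above; wr32() is a placeholder for the driver's register write and bcnrc_val is assumed to have been computed from the requested tx_rate earlier in the function.

#include <stdint.h>

#define E1000_RTTDQSEL 0x3604   /* Tx Desc Plane Queue Select - WO */
#define E1000_RTTBCNRM 0x3690   /* Tx BCN Rate-scheduler MMW */
#define E1000_RTTBCNRC 0x36B0   /* Tx BCN Rate-Scheduler Config - WO */

/* Placeholder for the driver's MMIO write helper. */
static void wr32(uint32_t reg, uint32_t val)
{
        (void)reg;
        (void)val;
}

static void program_vf_tx_rate(int vf, uint32_t bcnrc_val)
{
        wr32(E1000_RTTDQSEL, (uint32_t)vf);     /* vf X uses queue X */
        /* MMW_SIZE = 0x14 when 9728-byte jumbo frames are supported */
        wr32(E1000_RTTBCNRM, 0x14);
        wr32(E1000_RTTBCNRC, bcnrc_val);
}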
+ */ + wr32(E1000_RTTBCNRM, 0x14); wr32(E1000_RTTBCNRC, bcnrc_val); } diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c index d5ee7fa50723..c846ea9131a3 100644 --- a/drivers/net/ethernet/intel/igb/igb_ptp.c +++ b/drivers/net/ethernet/intel/igb/igb_ptp.c @@ -330,7 +330,17 @@ void igb_ptp_init(struct igb_adapter *adapter) void igb_ptp_remove(struct igb_adapter *adapter) { - cancel_delayed_work_sync(&adapter->overflow_work); + switch (adapter->hw.mac.type) { + case e1000_i211: + case e1000_i210: + case e1000_i350: + case e1000_82580: + case e1000_82576: + cancel_delayed_work_sync(&adapter->overflow_work); + break; + default: + return; + } if (adapter->ptp_clock) { ptp_clock_unregister(adapter->ptp_clock); diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index 8ec74b07f940..0696abfe9944 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -766,6 +766,7 @@ static void igbvf_set_itr(struct igbvf_adapter *adapter) /** * igbvf_clean_tx_irq - Reclaim resources after transmit completes * @adapter: board private structure + * * returns true if ring is completely cleaned **/ static bool igbvf_clean_tx_irq(struct igbvf_ring *tx_ring) diff --git a/drivers/net/ethernet/intel/igbvf/vf.c b/drivers/net/ethernet/intel/igbvf/vf.c index 30a6cc426037..eea0e10ce12f 100644 --- a/drivers/net/ethernet/intel/igbvf/vf.c +++ b/drivers/net/ethernet/intel/igbvf/vf.c @@ -283,7 +283,8 @@ static s32 e1000_set_vfta_vf(struct e1000_hw *hw, u16 vid, bool set) return err; } -/** e1000_rlpml_set_vf - Set the maximum receive packet length +/** + * e1000_rlpml_set_vf - Set the maximum receive packet length * @hw: pointer to the HW structure * @max_size: value to assign to max frame size **/ @@ -302,7 +303,7 @@ void e1000_rlpml_set_vf(struct e1000_hw *hw, u16 max_size) * e1000_rar_set_vf - set device MAC address * @hw: pointer to the HW structure * @addr: pointer to the receive address - * @index receive address array register + * @index: receive address array register **/ static void e1000_rar_set_vf(struct e1000_hw *hw, u8 * addr, u32 index) { diff --git a/drivers/net/ethernet/intel/ixgb/ixgb_hw.c b/drivers/net/ethernet/intel/ixgb/ixgb_hw.c index 99b69adb4a0f..bf9a220f71fb 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb_hw.c +++ b/drivers/net/ethernet/intel/ixgb/ixgb_hw.c @@ -32,6 +32,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include <linux/pci_ids.h> #include "ixgb_hw.h" #include "ixgb_ids.h" @@ -96,7 +97,7 @@ static u32 ixgb_mac_reset(struct ixgb_hw *hw) ASSERT(!(ctrl_reg & IXGB_CTRL0_RST)); #endif - if (hw->subsystem_vendor_id == SUN_SUBVENDOR_ID) { + if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN) { ctrl_reg = /* Enable interrupt from XFP and SerDes */ IXGB_CTRL1_GPI0_EN | IXGB_CTRL1_SDP6_DIR | @@ -271,7 +272,7 @@ ixgb_identify_phy(struct ixgb_hw *hw) } /* update phy type for sun specific board */ - if (hw->subsystem_vendor_id == SUN_SUBVENDOR_ID) + if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN) phy_type = ixgb_phy_type_bcm; return phy_type; diff --git a/drivers/net/ethernet/intel/ixgb/ixgb_ids.h b/drivers/net/ethernet/intel/ixgb/ixgb_ids.h index 2a58847f46e8..32c1b302d791 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb_ids.h +++ b/drivers/net/ethernet/intel/ixgb/ixgb_ids.h @@ -33,11 +33,6 @@ ** The Device and Vendor IDs for 10 Gigabit MACs **********************************************************************/ -#define INTEL_VENDOR_ID 0x8086 -#define 
INTEL_SUBVENDOR_ID 0x8086 -#define SUN_VENDOR_ID 0x108E -#define SUN_SUBVENDOR_ID 0x108E - #define IXGB_DEVICE_ID_82597EX 0x1048 #define IXGB_DEVICE_ID_82597EX_SR 0x1A48 #define IXGB_DEVICE_ID_82597EX_LR 0x1B48 diff --git a/drivers/net/ethernet/intel/ixgb/ixgb_main.c b/drivers/net/ethernet/intel/ixgb/ixgb_main.c index 5fce363d810a..d05fc95befc5 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb_main.c +++ b/drivers/net/ethernet/intel/ixgb/ixgb_main.c @@ -54,13 +54,13 @@ MODULE_PARM_DESC(copybreak, * Class, Class Mask, private data (not used) } */ static DEFINE_PCI_DEVICE_TABLE(ixgb_pci_tbl) = { - {INTEL_VENDOR_ID, IXGB_DEVICE_ID_82597EX, + {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, - {INTEL_VENDOR_ID, IXGB_DEVICE_ID_82597EX_CX4, + {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_CX4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, - {INTEL_VENDOR_ID, IXGB_DEVICE_ID_82597EX_SR, + {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_SR, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, - {INTEL_VENDOR_ID, IXGB_DEVICE_ID_82597EX_LR, + {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_LR, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, /* required last entry */ @@ -195,7 +195,7 @@ ixgb_irq_enable(struct ixgb_adapter *adapter) { u32 val = IXGB_INT_RXT0 | IXGB_INT_RXDMT0 | IXGB_INT_TXDW | IXGB_INT_LSC; - if (adapter->hw.subsystem_vendor_id == SUN_SUBVENDOR_ID) + if (adapter->hw.subsystem_vendor_id == PCI_VENDOR_ID_SUN) val |= IXGB_INT_GPI0; IXGB_WRITE_REG(&adapter->hw, IMS, val); IXGB_WRITE_FLUSH(&adapter->hw); @@ -2276,9 +2276,9 @@ static void ixgb_netpoll(struct net_device *dev) #endif /** - * ixgb_io_error_detected() - called when PCI error is detected - * @pdev pointer to pci device with error - * @state pci channel state after error + * ixgb_io_error_detected - called when PCI error is detected + * @pdev: pointer to pci device with error + * @state: pci channel state after error * * This callback is called by the PCI subsystem whenever * a PCI bus error is detected. diff --git a/drivers/net/ethernet/intel/ixgbe/Makefile b/drivers/net/ethernet/intel/ixgbe/Makefile index 0bdf06bc5c49..5fd5d04c26c9 100644 --- a/drivers/net/ethernet/intel/ixgbe/Makefile +++ b/drivers/net/ethernet/intel/ixgbe/Makefile @@ -34,11 +34,11 @@ obj-$(CONFIG_IXGBE) += ixgbe.o ixgbe-objs := ixgbe_main.o ixgbe_common.o ixgbe_ethtool.o \ ixgbe_82599.o ixgbe_82598.o ixgbe_phy.o ixgbe_sriov.o \ - ixgbe_mbx.o ixgbe_x540.o ixgbe_sysfs.o ixgbe_lib.o + ixgbe_mbx.o ixgbe_x540.o ixgbe_lib.o ixgbe-$(CONFIG_IXGBE_DCB) += ixgbe_dcb.o ixgbe_dcb_82598.o \ ixgbe_dcb_82599.o ixgbe_dcb_nl.o ixgbe-$(CONFIG_IXGBE_PTP) += ixgbe_ptp.o - +ixgbe-$(CONFIG_IXGBE_HWMON) += ixgbe_sysfs.o ixgbe-$(CONFIG_FCOE:m=y) += ixgbe_fcoe.o diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h index 7af291e236bf..b9623e9ea895 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h @@ -77,17 +77,18 @@ #define IXGBE_MAX_FCPAUSE 0xFFFF /* Supported Rx Buffer Sizes */ -#define IXGBE_RXBUFFER_512 512 /* Used for packet split */ +#define IXGBE_RXBUFFER_256 256 /* Used for skb receive header */ #define IXGBE_MAX_RXBUFFER 16384 /* largest size for a single descriptor */ /* - * NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN mans we - * reserve 2 more, and skb_shared_info adds an additional 384 bytes more, - * this adds up to 512 bytes of extra data meaning the smallest allocation - * we could have is 1K. - * i.e. 
RXBUFFER_512 --> size-1024 slab + * NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we + * reserve 64 more, and skb_shared_info adds an additional 320 bytes more, + * this adds up to 448 bytes of extra data. + * + * Since netdev_alloc_skb now allocates a page fragment we can use a value + * of 256 and the resultant skb will have a truesize of 960 or less. */ -#define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_512 +#define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_256 #define MAXIMUM_ETHERNET_VLAN_SIZE (ETH_FRAME_LEN + ETH_FCS_LEN + VLAN_HLEN) @@ -113,7 +114,7 @@ #define IXGBE_MAX_VFTA_ENTRIES 128 #define MAX_EMULATION_MAC_ADDRS 16 #define IXGBE_MAX_PF_MACVLANS 15 -#define VMDQ_P(p) ((p) + adapter->num_vfs) +#define VMDQ_P(p) ((p) + adapter->ring_feature[RING_F_VMDQ].offset) #define IXGBE_82599_VF_DEVICE_ID 0x10ED #define IXGBE_X540_VF_DEVICE_ID 0x1515 @@ -130,7 +131,6 @@ struct vf_data_storage { u16 tx_rate; u16 vlan_count; u8 spoofchk_enabled; - struct pci_dev *vfdev; }; struct vf_macvlans { @@ -278,10 +278,16 @@ enum ixgbe_ring_f_enum { #define MAX_TX_QUEUES IXGBE_MAX_FDIR_INDICES #endif /* IXGBE_FCOE */ struct ixgbe_ring_feature { - int indices; - int mask; + u16 limit; /* upper limit on feature indices */ + u16 indices; /* current value of indices */ + u16 mask; /* Mask used for feature to ring mapping */ + u16 offset; /* offset to start of feature */ } ____cacheline_internodealigned_in_smp; +#define IXGBE_82599_VMDQ_8Q_MASK 0x78 +#define IXGBE_82599_VMDQ_4Q_MASK 0x7C +#define IXGBE_82599_VMDQ_2Q_MASK 0x7E + /* * FCoE requires that all Rx buffers be over 2200 bytes in length. Since * this is twice the size of a half page we need to double the page order @@ -315,7 +321,7 @@ struct ixgbe_ring_container { ? 8 : 1) #define MAX_TX_PACKET_BUFFERS MAX_RX_PACKET_BUFFERS -/* MAX_MSIX_Q_VECTORS of these are allocated, +/* MAX_Q_VECTORS of these are allocated, * but we only use one per queue-specific vector. */ struct ixgbe_q_vector { @@ -401,11 +407,11 @@ static inline u16 ixgbe_desc_unused(struct ixgbe_ring *ring) #define NON_Q_VECTORS (OTHER_VECTOR) #define MAX_MSIX_VECTORS_82599 64 -#define MAX_MSIX_Q_VECTORS_82599 64 +#define MAX_Q_VECTORS_82599 64 #define MAX_MSIX_VECTORS_82598 18 -#define MAX_MSIX_Q_VECTORS_82598 16 +#define MAX_Q_VECTORS_82598 16 -#define MAX_MSIX_Q_VECTORS MAX_MSIX_Q_VECTORS_82599 +#define MAX_Q_VECTORS MAX_Q_VECTORS_82599 #define MAX_MSIX_COUNT MAX_MSIX_VECTORS_82599 #define MIN_MSIX_Q_VECTORS 1 @@ -427,35 +433,33 @@ struct ixgbe_adapter { * thus the additional *_CAPABLE flags. 
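The ixgbe buffer-size comment above is simple arithmetic: roughly 64 bytes of netdev_alloc_skb headroom, another 64 for NET_IP_ALIGN padding, and about 320 bytes of skb_shared_info add up to 448 bytes of overhead, so a 256-byte receive header taken from a page fragment stays within the quoted 960-byte truesize, where the old 512-byte header pushed the allocation into a 1 KiB slab object. A quick check of those figures; the overhead numbers are the ones quoted in the comment, not recomputed from current kernel structures.

#include <stdio.h>

int main(void)
{
        /* Figures quoted in the ixgbe comment above. */
        const int headroom    = 64;   /* netdev_alloc_skb reservation   */
        const int ip_align    = 64;   /* NET_IP_ALIGN, rounded up       */
        const int shared_info = 320;  /* approx. struct skb_shared_info */
        const int overhead    = headroom + ip_align + shared_info;

        printf("overhead          : %d bytes\n", overhead);        /* 448 */
        printf("256B header + ovh : %d bytes\n", 256 + overhead);  /* 704, under the quoted 960 truesize */
        return 0;
}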
*/ u32 flags; -#define IXGBE_FLAG_MSI_CAPABLE (u32)(1 << 1) -#define IXGBE_FLAG_MSI_ENABLED (u32)(1 << 2) -#define IXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 3) -#define IXGBE_FLAG_MSIX_ENABLED (u32)(1 << 4) -#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 6) -#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 7) -#define IXGBE_FLAG_RX_PS_ENABLED (u32)(1 << 8) -#define IXGBE_FLAG_IN_NETPOLL (u32)(1 << 9) -#define IXGBE_FLAG_DCA_ENABLED (u32)(1 << 10) -#define IXGBE_FLAG_DCA_CAPABLE (u32)(1 << 11) -#define IXGBE_FLAG_IMIR_ENABLED (u32)(1 << 12) -#define IXGBE_FLAG_MQ_CAPABLE (u32)(1 << 13) -#define IXGBE_FLAG_DCB_ENABLED (u32)(1 << 14) -#define IXGBE_FLAG_RSS_ENABLED (u32)(1 << 16) -#define IXGBE_FLAG_RSS_CAPABLE (u32)(1 << 17) -#define IXGBE_FLAG_VMDQ_CAPABLE (u32)(1 << 18) -#define IXGBE_FLAG_VMDQ_ENABLED (u32)(1 << 19) -#define IXGBE_FLAG_FAN_FAIL_CAPABLE (u32)(1 << 20) -#define IXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 22) -#define IXGBE_FLAG_NEED_LINK_CONFIG (u32)(1 << 23) -#define IXGBE_FLAG_FDIR_HASH_CAPABLE (u32)(1 << 24) -#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE (u32)(1 << 25) -#define IXGBE_FLAG_FCOE_CAPABLE (u32)(1 << 26) -#define IXGBE_FLAG_FCOE_ENABLED (u32)(1 << 27) -#define IXGBE_FLAG_SRIOV_CAPABLE (u32)(1 << 28) -#define IXGBE_FLAG_SRIOV_ENABLED (u32)(1 << 29) +#define IXGBE_FLAG_MSI_CAPABLE (u32)(1 << 0) +#define IXGBE_FLAG_MSI_ENABLED (u32)(1 << 1) +#define IXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 2) +#define IXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3) +#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 4) +#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 5) +#define IXGBE_FLAG_RX_PS_ENABLED (u32)(1 << 6) +#define IXGBE_FLAG_IN_NETPOLL (u32)(1 << 7) +#define IXGBE_FLAG_DCA_ENABLED (u32)(1 << 8) +#define IXGBE_FLAG_DCA_CAPABLE (u32)(1 << 9) +#define IXGBE_FLAG_IMIR_ENABLED (u32)(1 << 10) +#define IXGBE_FLAG_MQ_CAPABLE (u32)(1 << 11) +#define IXGBE_FLAG_DCB_ENABLED (u32)(1 << 12) +#define IXGBE_FLAG_VMDQ_CAPABLE (u32)(1 << 13) +#define IXGBE_FLAG_VMDQ_ENABLED (u32)(1 << 14) +#define IXGBE_FLAG_FAN_FAIL_CAPABLE (u32)(1 << 15) +#define IXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 16) +#define IXGBE_FLAG_NEED_LINK_CONFIG (u32)(1 << 17) +#define IXGBE_FLAG_FDIR_HASH_CAPABLE (u32)(1 << 18) +#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE (u32)(1 << 19) +#define IXGBE_FLAG_FCOE_CAPABLE (u32)(1 << 20) +#define IXGBE_FLAG_FCOE_ENABLED (u32)(1 << 21) +#define IXGBE_FLAG_SRIOV_CAPABLE (u32)(1 << 22) +#define IXGBE_FLAG_SRIOV_ENABLED (u32)(1 << 23) u32 flags2; -#define IXGBE_FLAG2_RSC_CAPABLE (u32)(1) +#define IXGBE_FLAG2_RSC_CAPABLE (u32)(1 << 0) #define IXGBE_FLAG2_RSC_ENABLED (u32)(1 << 1) #define IXGBE_FLAG2_TEMP_SENSOR_CAPABLE (u32)(1 << 2) #define IXGBE_FLAG2_TEMP_SENSOR_EVENT (u32)(1 << 3) @@ -496,7 +500,7 @@ struct ixgbe_adapter { u32 alloc_rx_page_failed; u32 alloc_rx_buff_failed; - struct ixgbe_q_vector *q_vector[MAX_MSIX_Q_VECTORS]; + struct ixgbe_q_vector *q_vector[MAX_Q_VECTORS]; /* DCB parameters */ struct ieee_pfc *ixgbe_ieee_pfc; @@ -507,8 +511,8 @@ struct ixgbe_adapter { u8 dcbx_cap; enum ixgbe_fc_mode last_lfc_mode; - int num_msix_vectors; - int max_msix_q_vectors; /* true count of q_vectors for device */ + int num_q_vectors; /* current number of q_vectors for device */ + int max_q_vectors; /* true count of q_vectors for device */ struct ixgbe_ring_feature ring_feature[RING_F_ARRAY_SIZE]; struct msix_entry *msix_entries; @@ -561,6 +565,7 @@ struct ixgbe_adapter { spinlock_t tmreg_lock; struct cyclecounter cc; struct timecounter tc; + int rx_hwtstamp_filter; u32 base_incval; u32 cycle_speed; #endif /* CONFIG_IXGBE_PTP */ @@ 
-686,7 +691,6 @@ extern void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter); extern int ixgbe_fso(struct ixgbe_ring *tx_ring, struct ixgbe_tx_buffer *first, u8 *hdr_len); -extern void ixgbe_cleanup_fcoe(struct ixgbe_adapter *adapter); extern int ixgbe_fcoe_ddp(struct ixgbe_adapter *adapter, union ixgbe_adv_rx_desc *rx_desc, struct sk_buff *skb); @@ -695,6 +699,8 @@ extern int ixgbe_fcoe_ddp_get(struct net_device *netdev, u16 xid, extern int ixgbe_fcoe_ddp_target(struct net_device *netdev, u16 xid, struct scatterlist *sgl, unsigned int sgc); extern int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid); +extern int ixgbe_setup_fcoe_ddp_resources(struct ixgbe_adapter *adapter); +extern void ixgbe_free_fcoe_ddp_resources(struct ixgbe_adapter *adapter); extern int ixgbe_fcoe_enable(struct net_device *netdev); extern int ixgbe_fcoe_disable(struct net_device *netdev); #ifdef CONFIG_IXGBE_DCB @@ -704,6 +710,7 @@ extern u8 ixgbe_fcoe_setapp(struct ixgbe_adapter *adapter, u8 up); extern int ixgbe_fcoe_get_wwn(struct net_device *netdev, u64 *wwn, int type); extern int ixgbe_fcoe_get_hbainfo(struct net_device *netdev, struct netdev_fcoe_hbainfo *info); +extern u8 ixgbe_fcoe_get_tc(struct ixgbe_adapter *adapter); #endif /* IXGBE_FCOE */ static inline struct netdev_queue *txring_txq(const struct ixgbe_ring *ring) @@ -718,6 +725,7 @@ extern void ixgbe_ptp_overflow_check(struct ixgbe_adapter *adapter); extern void ixgbe_ptp_tx_hwtstamp(struct ixgbe_q_vector *q_vector, struct sk_buff *skb); extern void ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector, + union ixgbe_adv_rx_desc *rx_desc, struct sk_buff *skb); extern int ixgbe_ptp_hwtstamp_ioctl(struct ixgbe_adapter *adapter, struct ifreq *ifr, int cmd); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c index dee64d2703f0..50fc137501da 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c @@ -241,7 +241,9 @@ static s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, /* Determine 1G link capabilities off of SFP+ type */ if (hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0 || - hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1) { + hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) { *speed = IXGBE_LINK_SPEED_1GB_FULL; *negotiation = true; goto out; @@ -1023,6 +1025,9 @@ mac_reset_top: hw->mac.ops.set_rar(hw, hw->mac.num_rar_entries - 1, hw->mac.san_addr, 0, IXGBE_RAH_AV); + /* Save the SAN MAC RAR index */ + hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1; + /* Reserve the last RAR for the SAN MAC address */ hw->mac.num_rar_entries--; } @@ -2104,6 +2109,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = { .set_rar = &ixgbe_set_rar_generic, .clear_rar = &ixgbe_clear_rar_generic, .set_vmdq = &ixgbe_set_vmdq_generic, + .set_vmdq_san_mac = &ixgbe_set_vmdq_san_mac_generic, .clear_vmdq = &ixgbe_clear_vmdq_generic, .init_rx_addrs = &ixgbe_init_rx_addrs_generic, .update_mc_addr_list = &ixgbe_update_mc_addr_list_generic, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c index 77ac41feb0fe..90e41db3cb69 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c @@ -2848,6 +2848,31 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq) } /** + * This function should only be involved in the IOV 
mode. + * In IOV mode, Default pool is next pool after the number of + * VFs advertized and not 0. + * MPSAR table needs to be updated for SAN_MAC RAR [hw->mac.san_mac_rar_index] + * + * ixgbe_set_vmdq_san_mac - Associate default VMDq pool index with a rx address + * @hw: pointer to hardware struct + * @vmdq: VMDq pool index + **/ +s32 ixgbe_set_vmdq_san_mac_generic(struct ixgbe_hw *hw, u32 vmdq) +{ + u32 rar = hw->mac.san_mac_rar_index; + + if (vmdq < 32) { + IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 1 << vmdq); + IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 0); + } else { + IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 0); + IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 1 << (vmdq - 32)); + } + + return 0; +} + +/** * ixgbe_init_uta_tables_generic - Initialize the Unicast Table Array * @hw: pointer to hardware structure **/ @@ -3132,7 +3157,7 @@ s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed, } /** - * ixgbe_get_wwn_prefix_generic Get alternative WWNN/WWPN prefix from + * ixgbe_get_wwn_prefix_generic - Get alternative WWNN/WWPN prefix from * the EEPROM * @hw: pointer to hardware structure * @wwnn_prefix: the alternative WWNN prefix @@ -3200,20 +3225,22 @@ void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf) * PFVFSPOOF register array is size 8 with 8 bits assigned to * MAC anti-spoof enables in each register array element. */ - for (j = 0; j < IXGBE_PFVFSPOOF_REG_COUNT; j++) + for (j = 0; j < pf_target_reg; j++) IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof); - /* If not enabling anti-spoofing then done */ - if (!enable) - return; - /* * The PF should be allowed to spoof so that it can support - * emulation mode NICs. Reset the bit assigned to the PF + * emulation mode NICs. Do not set the bits assigned to the PF */ - pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(pf_target_reg)); - pfvfspoof ^= (1 << pf_target_shift); - IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(pf_target_reg), pfvfspoof); + pfvfspoof &= (1 << pf_target_shift) - 1; + IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof); + + /* + * Remaining pools belong to the PF so they do not need to have + * anti-spoofing enabled. + */ + for (j++; j < IXGBE_PFVFSPOOF_REG_COUNT; j++) + IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), 0); } /** @@ -3325,6 +3352,7 @@ void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw, * ixgbe_calculate_checksum - Calculate checksum for buffer * @buffer: pointer to EEPROM * @length: size of EEPROM to calculate a checksum for + * * Calculates the checksum for some buffer on a specified length. The * checksum calculated is returned. 
**/ diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h index 6222fdb3d3f1..d813d1188c36 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h @@ -85,6 +85,7 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u16 mask); void ixgbe_release_swfw_sync(struct ixgbe_hw *hw, u16 mask); s32 ixgbe_get_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr); s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq); +s32 ixgbe_set_vmdq_san_mac_generic(struct ixgbe_hw *hw, u32 vmdq); s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq); s32 ixgbe_init_uta_tables_generic(struct ixgbe_hw *hw); s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c index 8bfaaee5ac5b..9bc17c0cb972 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c @@ -180,67 +180,83 @@ out: void ixgbe_dcb_unpack_pfc(struct ixgbe_dcb_config *cfg, u8 *pfc_en) { - int i; + struct tc_configuration *tc_config = &cfg->tc_config[0]; + int tc; - *pfc_en = 0; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) - *pfc_en |= !!(cfg->tc_config[i].dcb_pfc & 0xF) << i; + for (*pfc_en = 0, tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) { + if (tc_config[tc].dcb_pfc != pfc_disabled) + *pfc_en |= 1 << tc; + } } void ixgbe_dcb_unpack_refill(struct ixgbe_dcb_config *cfg, int direction, u16 *refill) { - struct tc_bw_alloc *p; - int i; + struct tc_configuration *tc_config = &cfg->tc_config[0]; + int tc; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) { - p = &cfg->tc_config[i].path[direction]; - refill[i] = p->data_credits_refill; - } + for (tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) + refill[tc] = tc_config[tc].path[direction].data_credits_refill; } void ixgbe_dcb_unpack_max(struct ixgbe_dcb_config *cfg, u16 *max) { - int i; + struct tc_configuration *tc_config = &cfg->tc_config[0]; + int tc; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) - max[i] = cfg->tc_config[i].desc_credits_max; + for (tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) + max[tc] = tc_config[tc].desc_credits_max; } void ixgbe_dcb_unpack_bwgid(struct ixgbe_dcb_config *cfg, int direction, u8 *bwgid) { - struct tc_bw_alloc *p; - int i; + struct tc_configuration *tc_config = &cfg->tc_config[0]; + int tc; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) { - p = &cfg->tc_config[i].path[direction]; - bwgid[i] = p->bwg_id; - } + for (tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) + bwgid[tc] = tc_config[tc].path[direction].bwg_id; } void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *cfg, int direction, u8 *ptype) { - struct tc_bw_alloc *p; - int i; + struct tc_configuration *tc_config = &cfg->tc_config[0]; + int tc; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) { - p = &cfg->tc_config[i].path[direction]; - ptype[i] = p->prio_type; + for (tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) + ptype[tc] = tc_config[tc].path[direction].prio_type; +} + +u8 ixgbe_dcb_get_tc_from_up(struct ixgbe_dcb_config *cfg, int direction, u8 up) +{ + struct tc_configuration *tc_config = &cfg->tc_config[0]; + u8 prio_mask = 1 << up; + u8 tc = cfg->num_tcs.pg_tcs; + + /* If tc is 0 then DCB is likely not enabled or supported */ + if (!tc) + goto out; + + /* + * Test from maximum TC to 1 and report the first match we find. 
If + * we find no match we can assume that the TC is 0 since the TC must + * be set for all user priorities + */ + for (tc--; tc; tc--) { + if (prio_mask & tc_config[tc].path[direction].up_to_tc_bitmap) + break; } +out: + return tc; } void ixgbe_dcb_unpack_map(struct ixgbe_dcb_config *cfg, int direction, u8 *map) { - int i, up; - unsigned long bitmap; + u8 up; - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) { - bitmap = cfg->tc_config[i].path[direction].up_to_tc_bitmap; - for_each_set_bit(up, &bitmap, MAX_USER_PRIORITY) - map[up] = i; - } + for (up = 0; up < MAX_USER_PRIORITY; up++) + map[up] = ixgbe_dcb_get_tc_from_up(cfg, direction, up); } /** diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.h index 24333b718166..1f4108ee154b 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.h @@ -146,6 +146,7 @@ void ixgbe_dcb_unpack_max(struct ixgbe_dcb_config *, u16 *); void ixgbe_dcb_unpack_bwgid(struct ixgbe_dcb_config *, int, u8 *); void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *, int, u8 *); void ixgbe_dcb_unpack_map(struct ixgbe_dcb_config *, int, u8 *); +u8 ixgbe_dcb_get_tc_from_up(struct ixgbe_dcb_config *, int, u8); /* DCB credits calculation */ s32 ixgbe_dcb_calculate_tc_credits(struct ixgbe_hw *, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c index 5164a21b13ca..f1e002d5fa8f 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c @@ -151,34 +151,21 @@ static u8 ixgbe_dcbnl_get_state(struct net_device *netdev) static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state) { - int err = 0; - u8 prio_tc[MAX_USER_PRIORITY] = {0}; - int i; struct ixgbe_adapter *adapter = netdev_priv(netdev); + int err = 0; /* Fail command if not in CEE mode */ if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE)) return 1; /* verify there is something to do, if not then exit */ - if (!!state != !(adapter->flags & IXGBE_FLAG_DCB_ENABLED)) - goto out; - - if (state > 0) { - err = ixgbe_setup_tc(netdev, adapter->dcb_cfg.num_tcs.pg_tcs); - ixgbe_dcb_unpack_map(&adapter->dcb_cfg, DCB_TX_CONFIG, prio_tc); - } else { - err = ixgbe_setup_tc(netdev, 0); - } - - if (err) + if (!state == !(adapter->flags & IXGBE_FLAG_DCB_ENABLED)) goto out; - for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) - netdev_set_prio_tc_map(netdev, i, prio_tc[i]); - + err = ixgbe_setup_tc(netdev, + state ? adapter->dcb_cfg.num_tcs.pg_tcs : 0); out: - return err ? 
1 : 0; + return !!err; } static void ixgbe_dcbnl_get_perm_hw_addr(struct net_device *netdev, @@ -584,9 +571,6 @@ static int ixgbe_dcbnl_ieee_setets(struct net_device *dev, if (err) goto err_out; - for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) - netdev_set_prio_tc_map(dev, i, ets->prio_tc[i]); - err = ixgbe_dcb_hw_ets(&adapter->hw, ets, max_frame); err_out: return err; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c index 3178f1ec3711..4104ea25d818 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c @@ -154,100 +154,60 @@ static int ixgbe_get_settings(struct net_device *netdev, { struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; + ixgbe_link_speed supported_link; u32 link_speed = 0; + bool autoneg; bool link_up; - ecmd->supported = SUPPORTED_10000baseT_Full; - ecmd->autoneg = AUTONEG_ENABLE; - ecmd->transceiver = XCVR_EXTERNAL; - if ((hw->phy.media_type == ixgbe_media_type_copper) || - (hw->phy.multispeed_fiber)) { - ecmd->supported |= (SUPPORTED_1000baseT_Full | - SUPPORTED_Autoneg); - - switch (hw->mac.type) { - case ixgbe_mac_X540: - ecmd->supported |= SUPPORTED_100baseT_Full; - break; - default: - break; - } - - ecmd->advertising = ADVERTISED_Autoneg; - if (hw->phy.autoneg_advertised) { - if (hw->phy.autoneg_advertised & - IXGBE_LINK_SPEED_100_FULL) - ecmd->advertising |= ADVERTISED_100baseT_Full; - if (hw->phy.autoneg_advertised & - IXGBE_LINK_SPEED_10GB_FULL) - ecmd->advertising |= ADVERTISED_10000baseT_Full; - if (hw->phy.autoneg_advertised & - IXGBE_LINK_SPEED_1GB_FULL) - ecmd->advertising |= ADVERTISED_1000baseT_Full; - } else { - /* - * Default advertised modes in case - * phy.autoneg_advertised isn't set. 
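The ixgbe_dcb_get_tc_from_up() helper added a little earlier resolves a user priority to a traffic class by scanning from the highest configured TC downwards and falling back to TC 0 when nothing matches or when DCB is not configured at all. The same search as a stand-alone function; the bitmap array parameter stands in for the per-TC up_to_tc_bitmap fields of one direction.

#include <stdint.h>

/* up_to_tc_bitmap[tc] holds the user-priority mask assigned to TC tc;
 * num_tcs mirrors cfg->num_tcs.pg_tcs in the hunk above. */
static uint8_t tc_from_up(const uint8_t up_to_tc_bitmap[], uint8_t num_tcs,
                          uint8_t up)
{
        uint8_t prio_mask = 1u << up;
        uint8_t tc = num_tcs;

        if (!tc)
                return 0;       /* DCB likely not enabled or supported */

        /* Test from the maximum TC down to 1; no match means TC 0. */
        for (tc--; tc; tc--)
                if (prio_mask & up_to_tc_bitmap[tc])
                        break;

        return tc;
}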
- */ - ecmd->advertising |= (ADVERTISED_10000baseT_Full | - ADVERTISED_1000baseT_Full); - if (hw->mac.type == ixgbe_mac_X540) - ecmd->advertising |= ADVERTISED_100baseT_Full; - } - - if (hw->phy.media_type == ixgbe_media_type_copper) { - ecmd->supported |= SUPPORTED_TP; - ecmd->advertising |= ADVERTISED_TP; - ecmd->port = PORT_TP; - } else { - ecmd->supported |= SUPPORTED_FIBRE; - ecmd->advertising |= ADVERTISED_FIBRE; - ecmd->port = PORT_FIBRE; - } - } else if (hw->phy.media_type == ixgbe_media_type_backplane) { - /* Set as FIBRE until SERDES defined in kernel */ - if (hw->device_id == IXGBE_DEV_ID_82598_BX) { - ecmd->supported = (SUPPORTED_1000baseT_Full | - SUPPORTED_FIBRE); - ecmd->advertising = (ADVERTISED_1000baseT_Full | - ADVERTISED_FIBRE); - ecmd->port = PORT_FIBRE; - ecmd->autoneg = AUTONEG_DISABLE; - } else if ((hw->device_id == IXGBE_DEV_ID_82599_COMBO_BACKPLANE) || - (hw->device_id == IXGBE_DEV_ID_82599_KX4_MEZZ)) { - ecmd->supported |= (SUPPORTED_1000baseT_Full | - SUPPORTED_Autoneg | - SUPPORTED_FIBRE); - ecmd->advertising = (ADVERTISED_10000baseT_Full | - ADVERTISED_1000baseT_Full | - ADVERTISED_Autoneg | - ADVERTISED_FIBRE); - ecmd->port = PORT_FIBRE; - } else { - ecmd->supported |= (SUPPORTED_1000baseT_Full | - SUPPORTED_FIBRE); - ecmd->advertising = (ADVERTISED_10000baseT_Full | - ADVERTISED_1000baseT_Full | - ADVERTISED_FIBRE); - ecmd->port = PORT_FIBRE; - } + hw->mac.ops.get_link_capabilities(hw, &supported_link, &autoneg); + + /* set the supported link speeds */ + if (supported_link & IXGBE_LINK_SPEED_10GB_FULL) + ecmd->supported |= SUPPORTED_10000baseT_Full; + if (supported_link & IXGBE_LINK_SPEED_1GB_FULL) + ecmd->supported |= SUPPORTED_1000baseT_Full; + if (supported_link & IXGBE_LINK_SPEED_100_FULL) + ecmd->supported |= SUPPORTED_100baseT_Full; + + /* set the advertised speeds */ + if (hw->phy.autoneg_advertised) { + if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_100_FULL) + ecmd->advertising |= ADVERTISED_100baseT_Full; + if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_10GB_FULL) + ecmd->advertising |= ADVERTISED_10000baseT_Full; + if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_1GB_FULL) + ecmd->advertising |= ADVERTISED_1000baseT_Full; } else { - ecmd->supported |= SUPPORTED_FIBRE; - ecmd->advertising = (ADVERTISED_10000baseT_Full | - ADVERTISED_FIBRE); - ecmd->port = PORT_FIBRE; - ecmd->autoneg = AUTONEG_DISABLE; + /* default modes in case phy.autoneg_advertised isn't set */ + if (supported_link & IXGBE_LINK_SPEED_10GB_FULL) + ecmd->advertising |= ADVERTISED_10000baseT_Full; + if (supported_link & IXGBE_LINK_SPEED_1GB_FULL) + ecmd->advertising |= ADVERTISED_1000baseT_Full; + if (supported_link & IXGBE_LINK_SPEED_100_FULL) + ecmd->advertising |= ADVERTISED_100baseT_Full; } - /* Get PHY type */ + if (autoneg) { + ecmd->supported |= SUPPORTED_Autoneg; + ecmd->advertising |= ADVERTISED_Autoneg; + ecmd->autoneg = AUTONEG_ENABLE; + } else + ecmd->autoneg = AUTONEG_DISABLE; + + ecmd->transceiver = XCVR_EXTERNAL; + + /* Determine the remaining settings based on the PHY type. 
*/ switch (adapter->hw.phy.type) { case ixgbe_phy_tn: case ixgbe_phy_aq: case ixgbe_phy_cu_unknown: - /* Copper 10G-BASET */ + ecmd->supported |= SUPPORTED_TP; + ecmd->advertising |= ADVERTISED_TP; ecmd->port = PORT_TP; break; case ixgbe_phy_qt: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_FIBRE; break; case ixgbe_phy_nl: @@ -257,42 +217,59 @@ static int ixgbe_get_settings(struct net_device *netdev, case ixgbe_phy_sfp_avago: case ixgbe_phy_sfp_intel: case ixgbe_phy_sfp_unknown: - switch (adapter->hw.phy.sfp_type) { /* SFP+ devices, further checking needed */ + switch (adapter->hw.phy.sfp_type) { case ixgbe_sfp_type_da_cu: case ixgbe_sfp_type_da_cu_core0: case ixgbe_sfp_type_da_cu_core1: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_DA; break; case ixgbe_sfp_type_sr: case ixgbe_sfp_type_lr: case ixgbe_sfp_type_srlr_core0: case ixgbe_sfp_type_srlr_core1: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_FIBRE; break; case ixgbe_sfp_type_not_present: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_NONE; break; case ixgbe_sfp_type_1g_cu_core0: case ixgbe_sfp_type_1g_cu_core1: + ecmd->supported |= SUPPORTED_TP; + ecmd->advertising |= ADVERTISED_TP; ecmd->port = PORT_TP; - ecmd->supported = SUPPORTED_TP; - ecmd->advertising = (ADVERTISED_1000baseT_Full | - ADVERTISED_TP); + break; + case ixgbe_sfp_type_1g_sx_core0: + case ixgbe_sfp_type_1g_sx_core1: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; + ecmd->port = PORT_FIBRE; break; case ixgbe_sfp_type_unknown: default: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_OTHER; break; } break; case ixgbe_phy_xaui: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_NONE; break; case ixgbe_phy_unknown: case ixgbe_phy_generic: case ixgbe_phy_sfp_unsupported: default: + ecmd->supported |= SUPPORTED_FIBRE; + ecmd->advertising |= ADVERTISED_FIBRE; ecmd->port = PORT_OTHER; break; } @@ -2113,7 +2090,6 @@ static int ixgbe_set_coalesce(struct net_device *netdev, struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_q_vector *q_vector; int i; - int num_vectors; u16 tx_itr_param, rx_itr_param; bool need_reset = false; @@ -2149,12 +2125,7 @@ static int ixgbe_set_coalesce(struct net_device *netdev, /* check the old value and enable RSC if necessary */ need_reset = ixgbe_update_rsc(adapter); - if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) - num_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - else - num_vectors = 1; - - for (i = 0; i < num_vectors; i++) { + for (i = 0; i < adapter->num_q_vectors; i++) { q_vector = adapter->q_vector[i]; if (q_vector->tx.count && !q_vector->rx.count) /* tx only */ @@ -2274,10 +2245,6 @@ static int ixgbe_get_rss_hash_opts(struct ixgbe_adapter *adapter, { cmd->data = 0; - /* if RSS is disabled then report no hashing */ - if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED)) - return 0; - /* Report default options for RSS on ixgbe */ switch (cmd->flow_type) { case TCP_V4_FLOW: diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c index bc07933d67da..ae73ef14fdf3 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c @@ -38,7 +38,7 @@ /** * ixgbe_fcoe_clear_ddp - clear the given ddp context - * @ddp - 
ptr to the ixgbe_fcoe_ddp + * @ddp: ptr to the ixgbe_fcoe_ddp * * Returns : none * @@ -104,10 +104,10 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid) udelay(100); } if (ddp->sgl) - pci_unmap_sg(adapter->pdev, ddp->sgl, ddp->sgc, + dma_unmap_sg(&adapter->pdev->dev, ddp->sgl, ddp->sgc, DMA_FROM_DEVICE); if (ddp->pool) { - pci_pool_free(ddp->pool, ddp->udl, ddp->udp); + dma_pool_free(ddp->pool, ddp->udl, ddp->udp); ddp->pool = NULL; } @@ -134,6 +134,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, struct ixgbe_hw *hw; struct ixgbe_fcoe *fcoe; struct ixgbe_fcoe_ddp *ddp; + struct ixgbe_fcoe_ddp_pool *ddp_pool; struct scatterlist *sg; unsigned int i, j, dmacount; unsigned int len; @@ -144,8 +145,6 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, unsigned int thislen = 0; u32 fcbuff, fcdmarw, fcfltrw, fcrxctl; dma_addr_t addr = 0; - struct pci_pool *pool; - unsigned int cpu; if (!netdev || !sgl) return 0; @@ -162,11 +161,6 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, return 0; fcoe = &adapter->fcoe; - if (!fcoe->pool) { - e_warn(drv, "xid=0x%x no ddp pool for fcoe\n", xid); - return 0; - } - ddp = &fcoe->ddp[xid]; if (ddp->sgl) { e_err(drv, "xid 0x%x w/ non-null sgl=%p nents=%d\n", @@ -175,22 +169,32 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, } ixgbe_fcoe_clear_ddp(ddp); + + if (!fcoe->ddp_pool) { + e_warn(drv, "No ddp_pool resources allocated\n"); + return 0; + } + + ddp_pool = per_cpu_ptr(fcoe->ddp_pool, get_cpu()); + if (!ddp_pool->pool) { + e_warn(drv, "xid=0x%x no ddp pool for fcoe\n", xid); + goto out_noddp; + } + /* setup dma from scsi command sgl */ - dmacount = pci_map_sg(adapter->pdev, sgl, sgc, DMA_FROM_DEVICE); + dmacount = dma_map_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE); if (dmacount == 0) { e_err(drv, "xid 0x%x DMA map error\n", xid); - return 0; + goto out_noddp; } /* alloc the udl from per cpu ddp pool */ - cpu = get_cpu(); - pool = *per_cpu_ptr(fcoe->pool, cpu); - ddp->udl = pci_pool_alloc(pool, GFP_ATOMIC, &ddp->udp); + ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_ATOMIC, &ddp->udp); if (!ddp->udl) { e_err(drv, "failed allocated ddp context\n"); goto out_noddp_unmap; } - ddp->pool = pool; + ddp->pool = ddp_pool->pool; ddp->sgl = sgl; ddp->sgc = sgc; @@ -201,7 +205,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, while (len) { /* max number of buffers allowed in one DDP context */ if (j >= IXGBE_BUFFCNT_MAX) { - *per_cpu_ptr(fcoe->pcpu_noddp, cpu) += 1; + ddp_pool->noddp++; goto out_noddp_free; } @@ -241,7 +245,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, */ if (lastsize == bufflen) { if (j >= IXGBE_BUFFCNT_MAX) { - *per_cpu_ptr(fcoe->pcpu_noddp_ext_buff, cpu) += 1; + ddp_pool->noddp_ext_buff++; goto out_noddp_free; } @@ -293,11 +297,12 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid, return 1; out_noddp_free: - pci_pool_free(pool, ddp->udl, ddp->udp); + dma_pool_free(ddp->pool, ddp->udl, ddp->udp); ixgbe_fcoe_clear_ddp(ddp); out_noddp_unmap: - pci_unmap_sg(adapter->pdev, sgl, sgc, DMA_FROM_DEVICE); + dma_unmap_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE); +out_noddp: put_cpu(); return 0; } @@ -409,7 +414,7 @@ int ixgbe_fcoe_ddp(struct ixgbe_adapter *adapter, break; /* unmap the sg list when FCPRSP is received */ case __constant_cpu_to_le32(IXGBE_RXDADV_STAT_FCSTAT_FCPRSP): - pci_unmap_sg(adapter->pdev, ddp->sgl, + dma_unmap_sg(&adapter->pdev->dev, ddp->sgl, ddp->sgc, 
DMA_FROM_DEVICE); ddp->err = ddp_err; ddp->sgl = NULL; @@ -563,44 +568,37 @@ int ixgbe_fso(struct ixgbe_ring *tx_ring, return 0; } -static void ixgbe_fcoe_ddp_pools_free(struct ixgbe_fcoe *fcoe) +static void ixgbe_fcoe_dma_pool_free(struct ixgbe_fcoe *fcoe, unsigned int cpu) { - unsigned int cpu; - struct pci_pool **pool; + struct ixgbe_fcoe_ddp_pool *ddp_pool; - for_each_possible_cpu(cpu) { - pool = per_cpu_ptr(fcoe->pool, cpu); - if (*pool) - pci_pool_destroy(*pool); - } - free_percpu(fcoe->pool); - fcoe->pool = NULL; + ddp_pool = per_cpu_ptr(fcoe->ddp_pool, cpu); + if (ddp_pool->pool) + dma_pool_destroy(ddp_pool->pool); + ddp_pool->pool = NULL; } -static void ixgbe_fcoe_ddp_pools_alloc(struct ixgbe_adapter *adapter) +static int ixgbe_fcoe_dma_pool_alloc(struct ixgbe_fcoe *fcoe, + struct device *dev, + unsigned int cpu) { - struct ixgbe_fcoe *fcoe = &adapter->fcoe; - unsigned int cpu; - struct pci_pool **pool; + struct ixgbe_fcoe_ddp_pool *ddp_pool; + struct dma_pool *pool; char pool_name[32]; - fcoe->pool = alloc_percpu(struct pci_pool *); - if (!fcoe->pool) - return; + snprintf(pool_name, 32, "ixgbe_fcoe_ddp_%d", cpu); - /* allocate pci pool for each cpu */ - for_each_possible_cpu(cpu) { - snprintf(pool_name, 32, "ixgbe_fcoe_ddp_%d", cpu); - pool = per_cpu_ptr(fcoe->pool, cpu); - *pool = pci_pool_create(pool_name, - adapter->pdev, IXGBE_FCPTR_MAX, - IXGBE_FCPTR_ALIGN, PAGE_SIZE); - if (!*pool) { - e_err(drv, "failed to alloc DDP pool on cpu:%d\n", cpu); - ixgbe_fcoe_ddp_pools_free(fcoe); - return; - } - } + pool = dma_pool_create(pool_name, dev, IXGBE_FCPTR_MAX, + IXGBE_FCPTR_ALIGN, PAGE_SIZE); + if (!pool) + return -ENOMEM; + + ddp_pool = per_cpu_ptr(fcoe->ddp_pool, cpu); + ddp_pool->pool = pool; + ddp_pool->noddp = 0; + ddp_pool->noddp_ext_buff = 0; + + return 0; } /** @@ -613,132 +611,171 @@ static void ixgbe_fcoe_ddp_pools_alloc(struct ixgbe_adapter *adapter) */ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter) { - int i, fcoe_q, fcoe_i; + struct ixgbe_ring_feature *fcoe = &adapter->ring_feature[RING_F_FCOE]; struct ixgbe_hw *hw = &adapter->hw; - struct ixgbe_fcoe *fcoe = &adapter->fcoe; - struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_FCOE]; - unsigned int cpu; - - if (!fcoe->pool) { - spin_lock_init(&fcoe->lock); - - ixgbe_fcoe_ddp_pools_alloc(adapter); - if (!fcoe->pool) { - e_err(drv, "failed to alloc percpu fcoe DDP pools\n"); - return; - } - - /* Extra buffer to be shared by all DDPs for HW work around */ - fcoe->extra_ddp_buffer = kmalloc(IXGBE_FCBUFF_MIN, GFP_ATOMIC); - if (fcoe->extra_ddp_buffer == NULL) { - e_err(drv, "failed to allocated extra DDP buffer\n"); - goto out_ddp_pools; - } + int i, fcoe_q, fcoe_i; + u32 etqf; - fcoe->extra_ddp_buffer_dma = - dma_map_single(&adapter->pdev->dev, - fcoe->extra_ddp_buffer, - IXGBE_FCBUFF_MIN, - DMA_FROM_DEVICE); - if (dma_mapping_error(&adapter->pdev->dev, - fcoe->extra_ddp_buffer_dma)) { - e_err(drv, "failed to map extra DDP buffer\n"); - goto out_extra_ddp_buffer; - } + /* Minimal functionality for FCoE requires at least CRC offloads */ + if (!(adapter->netdev->features & NETIF_F_FCOE_CRC)) + return; - /* Alloc per cpu mem to count the ddp alloc failure number */ - fcoe->pcpu_noddp = alloc_percpu(u64); - if (!fcoe->pcpu_noddp) { - e_err(drv, "failed to alloc noddp counter\n"); - goto out_pcpu_noddp_alloc_fail; - } + /* Enable L2 EtherType filter for FCoE, needed for FCoE CRC and DDP */ + etqf = ETH_P_FCOE | IXGBE_ETQF_FCOE | IXGBE_ETQF_FILTER_EN; + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { + etqf |= 
IXGBE_ETQF_POOL_ENABLE; + etqf |= VMDQ_P(0) << IXGBE_ETQF_POOL_SHIFT; + } + IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FCOE), etqf); + IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FCOE), 0); - fcoe->pcpu_noddp_ext_buff = alloc_percpu(u64); - if (!fcoe->pcpu_noddp_ext_buff) { - e_err(drv, "failed to alloc noddp extra buff cnt\n"); - goto out_pcpu_noddp_extra_buff_alloc_fail; - } + /* leave registers un-configured if FCoE is disabled */ + if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) + return; - for_each_possible_cpu(cpu) { - *per_cpu_ptr(fcoe->pcpu_noddp, cpu) = 0; - *per_cpu_ptr(fcoe->pcpu_noddp_ext_buff, cpu) = 0; - } + /* Use one or more Rx queues for FCoE by redirection table */ + for (i = 0; i < IXGBE_FCRETA_SIZE; i++) { + fcoe_i = fcoe->offset + (i % fcoe->indices); + fcoe_i &= IXGBE_FCRETA_ENTRY_MASK; + fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; + IXGBE_WRITE_REG(hw, IXGBE_FCRETA(i), fcoe_q); } + IXGBE_WRITE_REG(hw, IXGBE_FCRECTL, IXGBE_FCRECTL_ENA); - /* Enable L2 eth type filter for FCoE */ - IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FCOE), - (ETH_P_FCOE | IXGBE_ETQF_FCOE | IXGBE_ETQF_FILTER_EN)); - /* Enable L2 eth type filter for FIP */ - IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FIP), - (ETH_P_FIP | IXGBE_ETQF_FILTER_EN)); - if (adapter->ring_feature[RING_F_FCOE].indices) { - /* Use multiple rx queues for FCoE by redirection table */ - for (i = 0; i < IXGBE_FCRETA_SIZE; i++) { - fcoe_i = f->mask + i % f->indices; - fcoe_i &= IXGBE_FCRETA_ENTRY_MASK; - fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; - IXGBE_WRITE_REG(hw, IXGBE_FCRETA(i), fcoe_q); - } - IXGBE_WRITE_REG(hw, IXGBE_FCRECTL, IXGBE_FCRECTL_ENA); - IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FCOE), 0); - } else { - /* Use single rx queue for FCoE */ - fcoe_i = f->mask; - fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; - IXGBE_WRITE_REG(hw, IXGBE_FCRECTL, 0); - IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FCOE), - IXGBE_ETQS_QUEUE_EN | - (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT)); + /* Enable L2 EtherType filter for FIP */ + etqf = ETH_P_FIP | IXGBE_ETQF_FILTER_EN; + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { + etqf |= IXGBE_ETQF_POOL_ENABLE; + etqf |= VMDQ_P(0) << IXGBE_ETQF_POOL_SHIFT; } - /* send FIP frames to the first FCoE queue */ - fcoe_i = f->mask; - fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; + IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FIP), etqf); + + /* Send FIP frames to the first FCoE queue */ + fcoe_q = adapter->rx_ring[fcoe->offset]->reg_idx; IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FIP), IXGBE_ETQS_QUEUE_EN | (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT)); - IXGBE_WRITE_REG(hw, IXGBE_FCRXCTRL, IXGBE_FCRXCTRL_FCCRCBO | + /* Configure FCoE Rx control */ + IXGBE_WRITE_REG(hw, IXGBE_FCRXCTRL, + IXGBE_FCRXCTRL_FCCRCBO | (FC_FCOE_VER << IXGBE_FCRXCTRL_FCOEVER_SHIFT)); - return; -out_pcpu_noddp_extra_buff_alloc_fail: - free_percpu(fcoe->pcpu_noddp); -out_pcpu_noddp_alloc_fail: - dma_unmap_single(&adapter->pdev->dev, - fcoe->extra_ddp_buffer_dma, - IXGBE_FCBUFF_MIN, - DMA_FROM_DEVICE); -out_extra_ddp_buffer: - kfree(fcoe->extra_ddp_buffer); -out_ddp_pools: - ixgbe_fcoe_ddp_pools_free(fcoe); } /** - * ixgbe_cleanup_fcoe - release all fcoe ddp context resources + * ixgbe_free_fcoe_ddp_resources - release all fcoe ddp context resources * @adapter : ixgbe adapter * * Cleans up outstanding ddp context resources * * Returns : none */ -void ixgbe_cleanup_fcoe(struct ixgbe_adapter *adapter) +void ixgbe_free_fcoe_ddp_resources(struct ixgbe_adapter *adapter) { - int i; struct 
ixgbe_fcoe *fcoe = &adapter->fcoe; + int cpu, i; - if (!fcoe->pool) + /* do nothing if no DDP pools were allocated */ + if (!fcoe->ddp_pool) return; for (i = 0; i < IXGBE_FCOE_DDP_MAX; i++) ixgbe_fcoe_ddp_put(adapter->netdev, i); + + for_each_possible_cpu(cpu) + ixgbe_fcoe_dma_pool_free(fcoe, cpu); + dma_unmap_single(&adapter->pdev->dev, fcoe->extra_ddp_buffer_dma, IXGBE_FCBUFF_MIN, DMA_FROM_DEVICE); - free_percpu(fcoe->pcpu_noddp); - free_percpu(fcoe->pcpu_noddp_ext_buff); kfree(fcoe->extra_ddp_buffer); - ixgbe_fcoe_ddp_pools_free(fcoe); + + fcoe->extra_ddp_buffer = NULL; + fcoe->extra_ddp_buffer_dma = 0; +} + +/** + * ixgbe_setup_fcoe_ddp_resources - setup all fcoe ddp context resources + * @adapter: ixgbe adapter + * + * Sets up ddp context resouces + * + * Returns : 0 indicates success or -EINVAL on failure + */ +int ixgbe_setup_fcoe_ddp_resources(struct ixgbe_adapter *adapter) +{ + struct ixgbe_fcoe *fcoe = &adapter->fcoe; + struct device *dev = &adapter->pdev->dev; + void *buffer; + dma_addr_t dma; + unsigned int cpu; + + /* do nothing if no DDP pools were allocated */ + if (!fcoe->ddp_pool) + return 0; + + /* Extra buffer to be shared by all DDPs for HW work around */ + buffer = kmalloc(IXGBE_FCBUFF_MIN, GFP_ATOMIC); + if (!buffer) { + e_err(drv, "failed to allocate extra DDP buffer\n"); + return -ENOMEM; + } + + dma = dma_map_single(dev, buffer, IXGBE_FCBUFF_MIN, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, dma)) { + e_err(drv, "failed to map extra DDP buffer\n"); + kfree(buffer); + return -ENOMEM; + } + + fcoe->extra_ddp_buffer = buffer; + fcoe->extra_ddp_buffer_dma = dma; + + /* allocate pci pool for each cpu */ + for_each_possible_cpu(cpu) { + int err = ixgbe_fcoe_dma_pool_alloc(fcoe, dev, cpu); + if (!err) + continue; + + e_err(drv, "failed to alloc DDP pool on cpu:%d\n", cpu); + ixgbe_free_fcoe_ddp_resources(adapter); + return -ENOMEM; + } + + return 0; +} + +static int ixgbe_fcoe_ddp_enable(struct ixgbe_adapter *adapter) +{ + struct ixgbe_fcoe *fcoe = &adapter->fcoe; + + if (!(adapter->flags & IXGBE_FLAG_FCOE_CAPABLE)) + return -EINVAL; + + fcoe->ddp_pool = alloc_percpu(struct ixgbe_fcoe_ddp_pool); + + if (!fcoe->ddp_pool) { + e_err(drv, "failed to allocate percpu DDP resources\n"); + return -ENOMEM; + } + + adapter->netdev->fcoe_ddp_xid = IXGBE_FCOE_DDP_MAX - 1; + + return 0; +} + +static void ixgbe_fcoe_ddp_disable(struct ixgbe_adapter *adapter) +{ + struct ixgbe_fcoe *fcoe = &adapter->fcoe; + + adapter->netdev->fcoe_ddp_xid = 0; + + if (!fcoe->ddp_pool) + return; + + free_percpu(fcoe->ddp_pool); + fcoe->ddp_pool = NULL; } /** @@ -751,40 +788,37 @@ void ixgbe_cleanup_fcoe(struct ixgbe_adapter *adapter) */ int ixgbe_fcoe_enable(struct net_device *netdev) { - int rc = -EINVAL; struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_fcoe *fcoe = &adapter->fcoe; + atomic_inc(&fcoe->refcnt); if (!(adapter->flags & IXGBE_FLAG_FCOE_CAPABLE)) - goto out_enable; + return -EINVAL; - atomic_inc(&fcoe->refcnt); if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) - goto out_enable; + return -EINVAL; e_info(drv, "Enabling FCoE offload features.\n"); if (netif_running(netdev)) netdev->netdev_ops->ndo_stop(netdev); - ixgbe_clear_interrupt_scheme(adapter); + /* Allocate per CPU memory to track DDP pools */ + ixgbe_fcoe_ddp_enable(adapter); + /* enable FCoE and notify stack */ adapter->flags |= IXGBE_FLAG_FCOE_ENABLED; - adapter->ring_feature[RING_F_FCOE].indices = IXGBE_FCRETA_SIZE; - netdev->features |= NETIF_F_FCOE_CRC; - netdev->features |= NETIF_F_FSO; netdev->features |= 
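/*
 * Illustrative sketch, not part of the patch: the DDP resource setup above
 * follows a common "allocate per-CPU resources, unwind everything on the first
 * failure" pattern. A simplified userspace model of that control flow; plain
 * malloc() stands in for dma_pool_create() and a fixed CPU count stands in for
 * for_each_possible_cpu().
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct ddp_pool_model {
        void *pool;
        unsigned long long noddp;
        unsigned long long noddp_ext_buff;
};

static struct ddp_pool_model pools[NR_CPUS];

static void free_ddp_pools(void)
{
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                free(pools[cpu].pool);
                pools[cpu].pool = NULL;
        }
}

static int setup_ddp_pools(void)
{
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                pools[cpu].pool = malloc(4096); /* stand-in for one DMA pool */
                if (!pools[cpu].pool) {
                        fprintf(stderr, "pool alloc failed on cpu %d\n", cpu);
                        free_ddp_pools();       /* unwind the partial state */
                        return -1;
                }
                pools[cpu].noddp = 0;
                pools[cpu].noddp_ext_buff = 0;
        }
        return 0;
}

int main(void)
{
        if (setup_ddp_pools())
                return 1;
        free_ddp_pools();
        return 0;
}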
NETIF_F_FCOE_MTU; - netdev->fcoe_ddp_xid = IXGBE_FCOE_DDP_MAX - 1; + netdev_features_change(netdev); + /* release existing queues and reallocate them */ + ixgbe_clear_interrupt_scheme(adapter); ixgbe_init_interrupt_scheme(adapter); - netdev_features_change(netdev); if (netif_running(netdev)) netdev->netdev_ops->ndo_open(netdev); - rc = 0; -out_enable: - return rc; + return 0; } /** @@ -797,41 +831,35 @@ out_enable: */ int ixgbe_fcoe_disable(struct net_device *netdev) { - int rc = -EINVAL; struct ixgbe_adapter *adapter = netdev_priv(netdev); - struct ixgbe_fcoe *fcoe = &adapter->fcoe; - if (!(adapter->flags & IXGBE_FLAG_FCOE_CAPABLE)) - goto out_disable; + if (!atomic_dec_and_test(&adapter->fcoe.refcnt)) + return -EINVAL; if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) - goto out_disable; - - if (!atomic_dec_and_test(&fcoe->refcnt)) - goto out_disable; + return -EINVAL; e_info(drv, "Disabling FCoE offload features.\n"); - netdev->features &= ~NETIF_F_FCOE_CRC; - netdev->features &= ~NETIF_F_FSO; - netdev->features &= ~NETIF_F_FCOE_MTU; - netdev->fcoe_ddp_xid = 0; - netdev_features_change(netdev); - if (netif_running(netdev)) netdev->netdev_ops->ndo_stop(netdev); - ixgbe_clear_interrupt_scheme(adapter); + /* Free per CPU memory to track DDP pools */ + ixgbe_fcoe_ddp_disable(adapter); + + /* disable FCoE and notify stack */ adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED; - adapter->ring_feature[RING_F_FCOE].indices = 0; - ixgbe_cleanup_fcoe(adapter); + netdev->features &= ~NETIF_F_FCOE_MTU; + + netdev_features_change(netdev); + + /* release existing queues and reallocate them */ + ixgbe_clear_interrupt_scheme(adapter); ixgbe_init_interrupt_scheme(adapter); if (netif_running(netdev)) netdev->netdev_ops->ndo_open(netdev); - rc = 0; -out_disable: - return rc; + return 0; } /** @@ -960,3 +988,18 @@ int ixgbe_fcoe_get_hbainfo(struct net_device *netdev, return 0; } + +/** + * ixgbe_fcoe_get_tc - get the current TC that fcoe is mapped to + * @adapter - pointer to the device adapter structure + * + * Return : TC that FCoE is mapped to + */ +u8 ixgbe_fcoe_get_tc(struct ixgbe_adapter *adapter) +{ +#ifdef CONFIG_IXGBE_DCB + return netdev_get_prio_tc_map(adapter->netdev, adapter->fcoe.up); +#else + return 0; +#endif +} diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.h index 1dbed17c8107..bf724da99375 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.h @@ -62,19 +62,24 @@ struct ixgbe_fcoe_ddp { struct scatterlist *sgl; dma_addr_t udp; u64 *udl; - struct pci_pool *pool; + struct dma_pool *pool; +}; + +/* per cpu variables */ +struct ixgbe_fcoe_ddp_pool { + struct dma_pool *pool; + u64 noddp; + u64 noddp_ext_buff; }; struct ixgbe_fcoe { - struct pci_pool **pool; + struct ixgbe_fcoe_ddp_pool __percpu *ddp_pool; atomic_t refcnt; spinlock_t lock; struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX]; - unsigned char *extra_ddp_buffer; + void *extra_ddp_buffer; dma_addr_t extra_ddp_buffer_dma; unsigned long mode; - u64 __percpu *pcpu_noddp; - u64 __percpu *pcpu_noddp_ext_buff; #ifdef CONFIG_IXGBE_DCB u8 up; #endif diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c index c377706e81a8..17ecbcedd548 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c @@ -28,28 +28,83 @@ #include "ixgbe.h" #include "ixgbe_sriov.h" +#ifdef CONFIG_IXGBE_DCB /** - * ixgbe_cache_ring_rss - Descriptor ring to register 
mapping for RSS + * ixgbe_cache_ring_dcb_sriov - Descriptor ring to register mapping for SR-IOV * @adapter: board private structure to initialize * - * Cache the descriptor ring offsets for RSS to the assigned rings. + * Cache the descriptor ring offsets for SR-IOV to the assigned rings. It + * will also try to cache the proper offsets if RSS/FCoE are enabled along + * with VMDq. * **/ -static inline bool ixgbe_cache_ring_rss(struct ixgbe_adapter *adapter) +static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter) { +#ifdef IXGBE_FCOE + struct ixgbe_ring_feature *fcoe = &adapter->ring_feature[RING_F_FCOE]; +#endif /* IXGBE_FCOE */ + struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ]; int i; + u16 reg_idx; + u8 tcs = netdev_get_num_tc(adapter->netdev); - if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED)) + /* verify we have DCB queueing enabled before proceeding */ + if (tcs <= 1) return false; - for (i = 0; i < adapter->num_rx_queues; i++) - adapter->rx_ring[i]->reg_idx = i; - for (i = 0; i < adapter->num_tx_queues; i++) - adapter->tx_ring[i]->reg_idx = i; + /* verify we have VMDq enabled before proceeding */ + if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) + return false; + + /* start at VMDq register offset for SR-IOV enabled setups */ + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= tcs) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->rx_ring[i]->reg_idx = reg_idx; + } + + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= tcs) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->tx_ring[i]->reg_idx = reg_idx; + } + +#ifdef IXGBE_FCOE + /* nothing to do if FCoE is disabled */ + if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) + return true; + + /* The work is already done if the FCoE ring is shared */ + if (fcoe->offset < tcs) + return true; + + /* The FCoE rings exist separately, we need to move their reg_idx */ + if (fcoe->indices) { + u16 queues_per_pool = __ALIGN_MASK(1, ~vmdq->mask); + u8 fcoe_tc = ixgbe_fcoe_get_tc(adapter); + + reg_idx = (vmdq->offset + vmdq->indices) * queues_per_pool; + for (i = fcoe->offset; i < adapter->num_rx_queues; i++) { + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask) + fcoe_tc; + adapter->rx_ring[i]->reg_idx = reg_idx; + reg_idx++; + } + + reg_idx = (vmdq->offset + vmdq->indices) * queues_per_pool; + for (i = fcoe->offset; i < adapter->num_tx_queues; i++) { + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask) + fcoe_tc; + adapter->tx_ring[i]->reg_idx = reg_idx; + reg_idx++; + } + } +#endif /* IXGBE_FCOE */ return true; } -#ifdef CONFIG_IXGBE_DCB /* ixgbe_get_first_reg_idx - Return first register index associated with ring */ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc, @@ -64,42 +119,37 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc, switch (hw->mac.type) { case ixgbe_mac_82598EB: - *tx = tc << 2; - *rx = tc << 3; + /* TxQs/TC: 4 RxQs/TC: 8 */ + *tx = tc << 2; /* 0, 4, 8, 12, 16, 20, 24, 28 */ + *rx = tc << 3; /* 0, 8, 16, 24, 32, 40, 48, 56 */ break; case ixgbe_mac_82599EB: case ixgbe_mac_X540: if (num_tcs > 4) { - if (tc < 3) { - *tx = tc << 5; - *rx = tc << 4; - } else if (tc < 5) { - *tx = ((tc + 2) << 4); - *rx = tc << 4; - } else if (tc < 
num_tcs) { - *tx = ((tc + 8) << 3); - *rx = tc << 4; - } + /* + * TCs : TC0/1 TC2/3 TC4-7 + * TxQs/TC: 32 16 8 + * RxQs/TC: 16 16 16 + */ + *rx = tc << 4; + if (tc < 3) + *tx = tc << 5; /* 0, 32, 64 */ + else if (tc < 5) + *tx = (tc + 2) << 4; /* 80, 96 */ + else + *tx = (tc + 8) << 3; /* 104, 112, 120 */ } else { - *rx = tc << 5; - switch (tc) { - case 0: - *tx = 0; - break; - case 1: - *tx = 64; - break; - case 2: - *tx = 96; - break; - case 3: - *tx = 112; - break; - default: - break; - } + /* + * TCs : TC0 TC1 TC2/3 + * TxQs/TC: 64 32 16 + * RxQs/TC: 32 32 32 + */ + *rx = tc << 5; + if (tc < 2) + *tx = tc << 6; /* 0, 64 */ + else + *tx = (tc + 4) << 4; /* 96, 112 */ } - break; default: break; } @@ -112,106 +162,115 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc, * Cache the descriptor ring offsets for DCB to the assigned rings. * **/ -static inline bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter) +static bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter) { struct net_device *dev = adapter->netdev; - int i, j, k; + unsigned int tx_idx, rx_idx; + int tc, offset, rss_i, i; u8 num_tcs = netdev_get_num_tc(dev); - if (!num_tcs) + /* verify we have DCB queueing enabled before proceeding */ + if (num_tcs <= 1) return false; - for (i = 0, k = 0; i < num_tcs; i++) { - unsigned int tx_s, rx_s; - u16 count = dev->tc_to_txq[i].count; + rss_i = adapter->ring_feature[RING_F_RSS].indices; - ixgbe_get_first_reg_idx(adapter, i, &tx_s, &rx_s); - for (j = 0; j < count; j++, k++) { - adapter->tx_ring[k]->reg_idx = tx_s + j; - adapter->rx_ring[k]->reg_idx = rx_s + j; - adapter->tx_ring[k]->dcb_tc = i; - adapter->rx_ring[k]->dcb_tc = i; + for (tc = 0, offset = 0; tc < num_tcs; tc++, offset += rss_i) { + ixgbe_get_first_reg_idx(adapter, tc, &tx_idx, &rx_idx); + for (i = 0; i < rss_i; i++, tx_idx++, rx_idx++) { + adapter->tx_ring[offset + i]->reg_idx = tx_idx; + adapter->rx_ring[offset + i]->reg_idx = rx_idx; + adapter->tx_ring[offset + i]->dcb_tc = tc; + adapter->rx_ring[offset + i]->dcb_tc = tc; } } return true; } -#endif +#endif /** - * ixgbe_cache_ring_fdir - Descriptor ring to register mapping for Flow Director + * ixgbe_cache_ring_sriov - Descriptor ring to register mapping for sriov * @adapter: board private structure to initialize * - * Cache the descriptor ring offsets for Flow Director to the assigned rings. + * SR-IOV doesn't use any descriptor rings but changes the default if + * no other mapping is used. 
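/*
 * Illustrative sketch, not part of the patch: the reworked
 * ixgbe_get_first_reg_idx() above encodes the 82599/X540 queue layout as shift
 * arithmetic. This standalone model reproduces the same arithmetic and prints
 * the first Tx/Rx register index per TC, so the 0/32/64/80/96/104/112/120 Tx
 * offsets quoted in the hunk's comments can be checked by eye.
 */
#include <stdio.h>

static void first_reg_idx(int num_tcs, int tc, unsigned int *tx, unsigned int *rx)
{
        if (num_tcs > 4) {
                /* 8 TCs: 32/16/8 Tx queues per TC, 16 Rx queues per TC */
                *rx = tc << 4;
                if (tc < 3)
                        *tx = tc << 5;
                else if (tc < 5)
                        *tx = (tc + 2) << 4;
                else
                        *tx = (tc + 8) << 3;
        } else {
                /* 4 TCs: 64/32/16 Tx queues per TC, 32 Rx queues per TC */
                *rx = tc << 5;
                if (tc < 2)
                        *tx = tc << 6;
                else
                        *tx = (tc + 4) << 4;
        }
}

int main(void)
{
        unsigned int tx, rx;
        int tc;

        for (tc = 0; tc < 8; tc++) {
                first_reg_idx(8, tc, &tx, &rx);
                printf("8-TC tc=%d tx=%3u rx=%3u\n", tc, tx, rx);
        }
        for (tc = 0; tc < 4; tc++) {
                first_reg_idx(4, tc, &tx, &rx);
                printf("4-TC tc=%d tx=%3u rx=%3u\n", tc, tx, rx);
        }
        return 0;
}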
* - **/ -static inline bool ixgbe_cache_ring_fdir(struct ixgbe_adapter *adapter) + */ +static bool ixgbe_cache_ring_sriov(struct ixgbe_adapter *adapter) { +#ifdef IXGBE_FCOE + struct ixgbe_ring_feature *fcoe = &adapter->ring_feature[RING_F_FCOE]; +#endif /* IXGBE_FCOE */ + struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ]; + struct ixgbe_ring_feature *rss = &adapter->ring_feature[RING_F_RSS]; int i; - bool ret = false; - - if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) && - (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)) { - for (i = 0; i < adapter->num_rx_queues; i++) - adapter->rx_ring[i]->reg_idx = i; - for (i = 0; i < adapter->num_tx_queues; i++) - adapter->tx_ring[i]->reg_idx = i; - ret = true; + u16 reg_idx; + + /* only proceed if VMDq is enabled */ + if (!(adapter->flags & IXGBE_FLAG_VMDQ_ENABLED)) + return false; + + /* start at VMDq register offset for SR-IOV enabled setups */ + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) { +#ifdef IXGBE_FCOE + /* Allow first FCoE queue to be mapped as RSS */ + if (fcoe->offset && (i > fcoe->offset)) + break; +#endif + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= rss->indices) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->rx_ring[i]->reg_idx = reg_idx; } - return ret; -} +#ifdef IXGBE_FCOE + /* FCoE uses a linear block of queues so just assigning 1:1 */ + for (; i < adapter->num_rx_queues; i++, reg_idx++) + adapter->rx_ring[i]->reg_idx = reg_idx; +#endif + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) { #ifdef IXGBE_FCOE -/** - * ixgbe_cache_ring_fcoe - Descriptor ring to register mapping for the FCoE - * @adapter: board private structure to initialize - * - * Cache the descriptor ring offsets for FCoE mode to the assigned rings. - * - */ -static inline bool ixgbe_cache_ring_fcoe(struct ixgbe_adapter *adapter) -{ - struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_FCOE]; - int i; - u8 fcoe_rx_i = 0, fcoe_tx_i = 0; + /* Allow first FCoE queue to be mapped as RSS */ + if (fcoe->offset && (i > fcoe->offset)) + break; +#endif + /* If we are greater than indices move to next pool */ + if ((reg_idx & rss->mask) >= rss->indices) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->tx_ring[i]->reg_idx = reg_idx; + } - if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) - return false; +#ifdef IXGBE_FCOE + /* FCoE uses a linear block of queues so just assigning 1:1 */ + for (; i < adapter->num_tx_queues; i++, reg_idx++) + adapter->tx_ring[i]->reg_idx = reg_idx; - if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) { - if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) - ixgbe_cache_ring_fdir(adapter); - else - ixgbe_cache_ring_rss(adapter); +#endif - fcoe_rx_i = f->mask; - fcoe_tx_i = f->mask; - } - for (i = 0; i < f->indices; i++, fcoe_rx_i++, fcoe_tx_i++) { - adapter->rx_ring[f->mask + i]->reg_idx = fcoe_rx_i; - adapter->tx_ring[f->mask + i]->reg_idx = fcoe_tx_i; - } return true; } -#endif /* IXGBE_FCOE */ /** - * ixgbe_cache_ring_sriov - Descriptor ring to register mapping for sriov + * ixgbe_cache_ring_rss - Descriptor ring to register mapping for RSS * @adapter: board private structure to initialize * - * SR-IOV doesn't use any descriptor rings but changes the default if - * no other mapping is used. + * Cache the descriptor ring offsets for RSS to the assigned rings. 
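/*
 * Illustrative sketch, not part of the patch: the SR-IOV ring-caching code
 * above walks a flat register index and, whenever it crosses the number of
 * queues a pool actually uses, rounds up to the next pool boundary with
 * __ALIGN_MASK(). A standalone model of that walk; the 4-queues-per-pool and
 * 2-used-queues numbers are example stand-ins, not the hardware masks.
 */
#include <stdio.h>

#define ALIGN_MASK(x, mask)     (((x) + (mask)) & ~(mask))

int main(void)
{
        unsigned int queues_per_pool = 4;       /* stand-in pool stride */
        unsigned int used_per_pool = 2;         /* e.g. RSS indices per pool */
        unsigned int pool_mask = queues_per_pool - 1;
        unsigned int reg_idx = 0;
        int i;

        for (i = 0; i < 8; i++, reg_idx++) {
                /* past the used queues of this pool: jump to the next pool */
                if ((reg_idx & pool_mask) >= used_per_pool)
                        reg_idx = ALIGN_MASK(reg_idx, pool_mask);
                printf("ring %d -> reg_idx %u\n", i, reg_idx);
        }
        return 0;
}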
* - */ -static inline bool ixgbe_cache_ring_sriov(struct ixgbe_adapter *adapter) + **/ +static bool ixgbe_cache_ring_rss(struct ixgbe_adapter *adapter) { - adapter->rx_ring[0]->reg_idx = adapter->num_vfs * 2; - adapter->tx_ring[0]->reg_idx = adapter->num_vfs * 2; - if (adapter->num_vfs) - return true; - else - return false; + int i; + + for (i = 0; i < adapter->num_rx_queues; i++) + adapter->rx_ring[i]->reg_idx = i; + for (i = 0; i < adapter->num_tx_queues; i++) + adapter->tx_ring[i]->reg_idx = i; + + return true; } /** @@ -231,186 +290,384 @@ static void ixgbe_cache_ring_register(struct ixgbe_adapter *adapter) adapter->rx_ring[0]->reg_idx = 0; adapter->tx_ring[0]->reg_idx = 0; - if (ixgbe_cache_ring_sriov(adapter)) - return; - #ifdef CONFIG_IXGBE_DCB - if (ixgbe_cache_ring_dcb(adapter)) + if (ixgbe_cache_ring_dcb_sriov(adapter)) return; -#endif -#ifdef IXGBE_FCOE - if (ixgbe_cache_ring_fcoe(adapter)) + if (ixgbe_cache_ring_dcb(adapter)) return; -#endif /* IXGBE_FCOE */ - if (ixgbe_cache_ring_fdir(adapter)) +#endif + if (ixgbe_cache_ring_sriov(adapter)) return; - if (ixgbe_cache_ring_rss(adapter)) - return; + ixgbe_cache_ring_rss(adapter); } -/** - * ixgbe_set_sriov_queues: Allocate queues for IOV use - * @adapter: board private structure to initialize - * - * IOV doesn't actually use anything, so just NAK the - * request for now and let the other queue routines - * figure out what to do. - */ -static inline bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter) -{ - return false; -} +#define IXGBE_RSS_16Q_MASK 0xF +#define IXGBE_RSS_8Q_MASK 0x7 +#define IXGBE_RSS_4Q_MASK 0x3 +#define IXGBE_RSS_2Q_MASK 0x1 +#define IXGBE_RSS_DISABLED_MASK 0x0 +#ifdef CONFIG_IXGBE_DCB /** - * ixgbe_set_rss_queues: Allocate queues for RSS + * ixgbe_set_dcb_sriov_queues: Allocate queues for SR-IOV devices w/ DCB * @adapter: board private structure to initialize * - * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try - * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU. + * When SR-IOV (Single Root IO Virtualiztion) is enabled, allocate queues + * and VM pools where appropriate. Also assign queues based on DCB + * priorities and map accordingly.. 
* **/ -static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter) +static bool ixgbe_set_dcb_sriov_queues(struct ixgbe_adapter *adapter) { - bool ret = false; - struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS]; - - if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) { - f->mask = 0xF; - adapter->num_rx_queues = f->indices; - adapter->num_tx_queues = f->indices; - ret = true; + int i; + u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit; + u16 vmdq_m = 0; +#ifdef IXGBE_FCOE + u16 fcoe_i = 0; +#endif + u8 tcs = netdev_get_num_tc(adapter->netdev); + + /* verify we have DCB queueing enabled before proceeding */ + if (tcs <= 1) + return false; + + /* verify we have VMDq enabled before proceeding */ + if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) + return false; + + /* Add starting offset to total pool count */ + vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset; + + /* 16 pools w/ 8 TC per pool */ + if (tcs > 4) { + vmdq_i = min_t(u16, vmdq_i, 16); + vmdq_m = IXGBE_82599_VMDQ_8Q_MASK; + /* 32 pools w/ 4 TC per pool */ + } else { + vmdq_i = min_t(u16, vmdq_i, 32); + vmdq_m = IXGBE_82599_VMDQ_4Q_MASK; } - return ret; -} +#ifdef IXGBE_FCOE + /* queues in the remaining pools are available for FCoE */ + fcoe_i = (128 / __ALIGN_MASK(1, ~vmdq_m)) - vmdq_i; -/** - * ixgbe_set_fdir_queues: Allocate queues for Flow Director - * @adapter: board private structure to initialize - * - * Flow Director is an advanced Rx filter, attempting to get Rx flows back - * to the original CPU that initiated the Tx session. This runs in addition - * to RSS, so if a packet doesn't match an FDIR filter, we can still spread the - * Rx load across CPUs using RSS. - * - **/ -static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter) -{ - bool ret = false; - struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR]; +#endif + /* remove the starting offset from the pool count */ + vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset; - f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices); - f_fdir->mask = 0; + /* save features for later use */ + adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i; + adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m; /* - * Use RSS in addition to Flow Director to ensure the best - * distribution of flows across cores, even when an FDIR flow - * isn't matched. 
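/*
 * Illustrative sketch, not part of the patch: ixgbe_set_dcb_sriov_queues()
 * above budgets the device's 128 queues between VMDq pools and FCoE. A
 * standalone model of that arithmetic; the pool strides and the 128-queue
 * total mirror the comments in the hunk, while requested_pools and tcs are
 * arbitrary example inputs, and the leftover is clamped further by CPU and
 * feature limits in the real code.
 */
#include <stdio.h>

int main(void)
{
        unsigned int total_queues = 128;
        unsigned int requested_pools = 12;
        unsigned int tcs = 8;                   /* traffic classes per pool */

        /* more than 4 TCs: 8 queues per pool, at most 16 pools; else 4 and 32 */
        unsigned int queues_per_pool = (tcs > 4) ? 8 : 4;
        unsigned int max_pools = (tcs > 4) ? 16 : 32;
        unsigned int vmdq_i = requested_pools < max_pools ? requested_pools
                                                          : max_pools;

        /* pool slots not taken by VMDq remain available as the FCoE budget */
        unsigned int fcoe_budget = total_queues / queues_per_pool - vmdq_i;

        printf("pools=%u queues/pool=%u leftover pool slots for FCoE=%u\n",
               vmdq_i, queues_per_pool, fcoe_budget);
        return 0;
}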
+ * We do not support DCB, VMDq, and RSS all simultaneously + * so we will disable RSS since it is the lowest priority */ - if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) && - (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)) { - adapter->num_tx_queues = f_fdir->indices; - adapter->num_rx_queues = f_fdir->indices; - ret = true; - } else { - adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; + adapter->ring_feature[RING_F_RSS].indices = 1; + adapter->ring_feature[RING_F_RSS].mask = IXGBE_RSS_DISABLED_MASK; + + /* disable ATR as it is not supported when VMDq is enabled */ + adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; + + adapter->num_rx_pools = vmdq_i; + adapter->num_rx_queues_per_pool = tcs; + + adapter->num_tx_queues = vmdq_i * tcs; + adapter->num_rx_queues = vmdq_i * tcs; + +#ifdef IXGBE_FCOE + if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { + struct ixgbe_ring_feature *fcoe; + + fcoe = &adapter->ring_feature[RING_F_FCOE]; + + /* limit ourselves based on feature limits */ + fcoe_i = min_t(u16, fcoe_i, num_online_cpus()); + fcoe_i = min_t(u16, fcoe_i, fcoe->limit); + + if (fcoe_i) { + /* alloc queues for FCoE separately */ + fcoe->indices = fcoe_i; + fcoe->offset = vmdq_i * tcs; + + /* add queues to adapter */ + adapter->num_tx_queues += fcoe_i; + adapter->num_rx_queues += fcoe_i; + } else if (tcs > 1) { + /* use queue belonging to FcoE TC */ + fcoe->indices = 1; + fcoe->offset = ixgbe_fcoe_get_tc(adapter); + } else { + adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED; + + fcoe->indices = 0; + fcoe->offset = 0; + } } - return ret; + +#endif /* IXGBE_FCOE */ + /* configure TC to queue mapping */ + for (i = 0; i < tcs; i++) + netdev_set_tc_queue(adapter->netdev, i, 1, i); + + return true; } +static bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter) +{ + struct net_device *dev = adapter->netdev; + struct ixgbe_ring_feature *f; + int rss_i, rss_m, i; + int tcs; + + /* Map queue offset and counts onto allocated tx queues */ + tcs = netdev_get_num_tc(dev); + + /* verify we have DCB queueing enabled before proceeding */ + if (tcs <= 1) + return false; + + /* determine the upper limit for our current DCB mode */ + rss_i = dev->num_tx_queues / tcs; + if (adapter->hw.mac.type == ixgbe_mac_82598EB) { + /* 8 TC w/ 4 queues per TC */ + rss_i = min_t(u16, rss_i, 4); + rss_m = IXGBE_RSS_4Q_MASK; + } else if (tcs > 4) { + /* 8 TC w/ 8 queues per TC */ + rss_i = min_t(u16, rss_i, 8); + rss_m = IXGBE_RSS_8Q_MASK; + } else { + /* 4 TC w/ 16 queues per TC */ + rss_i = min_t(u16, rss_i, 16); + rss_m = IXGBE_RSS_16Q_MASK; + } + + /* set RSS mask and indices */ + f = &adapter->ring_feature[RING_F_RSS]; + rss_i = min_t(int, rss_i, f->limit); + f->indices = rss_i; + f->mask = rss_m; + + /* disable ATR as it is not supported when multiple TCs are enabled */ + adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; + #ifdef IXGBE_FCOE + /* FCoE enabled queues require special configuration indexed + * by feature specific indices and offset. Here we map FCoE + * indices onto the DCB queue pairs allowing FCoE to own + * configuration later. 
+ */ + if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { + u8 tc = ixgbe_fcoe_get_tc(adapter); + + f = &adapter->ring_feature[RING_F_FCOE]; + f->indices = min_t(u16, rss_i, f->limit); + f->offset = rss_i * tc; + } + +#endif /* IXGBE_FCOE */ + for (i = 0; i < tcs; i++) + netdev_set_tc_queue(dev, i, rss_i, rss_i * i); + + adapter->num_tx_queues = rss_i * tcs; + adapter->num_rx_queues = rss_i * tcs; + + return true; +} + +#endif /** - * ixgbe_set_fcoe_queues: Allocate queues for Fiber Channel over Ethernet (FCoE) + * ixgbe_set_sriov_queues - Allocate queues for SR-IOV devices * @adapter: board private structure to initialize * - * FCoE RX FCRETA can use up to 8 rx queues for up to 8 different exchanges. - * The ring feature mask is not used as a mask for FCoE, as it can take any 8 - * rx queues out of the max number of rx queues, instead, it is used as the - * index of the first rx queue used by FCoE. + * When SR-IOV (Single Root IO Virtualiztion) is enabled, allocate queues + * and VM pools where appropriate. If RSS is available, then also try and + * enable RSS and map accordingly. * **/ -static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter) +static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter) { - struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_FCOE]; + u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit; + u16 vmdq_m = 0; + u16 rss_i = adapter->ring_feature[RING_F_RSS].limit; + u16 rss_m = IXGBE_RSS_DISABLED_MASK; +#ifdef IXGBE_FCOE + u16 fcoe_i = 0; +#endif - if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) + /* only proceed if SR-IOV is enabled */ + if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) return false; - f->indices = min_t(int, num_online_cpus(), f->indices); + /* Add starting offset to total pool count */ + vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset; - adapter->num_rx_queues = 1; - adapter->num_tx_queues = 1; + /* double check we are limited to maximum pools */ + vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i); - if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) { - e_info(probe, "FCoE enabled with RSS\n"); - if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) - ixgbe_set_fdir_queues(adapter); - else - ixgbe_set_rss_queues(adapter); + /* 64 pool mode with 2 queues per pool */ + if ((vmdq_i > 32) || (rss_i < 4)) { + vmdq_m = IXGBE_82599_VMDQ_2Q_MASK; + rss_m = IXGBE_RSS_2Q_MASK; + rss_i = min_t(u16, rss_i, 2); + /* 32 pool mode with 4 queues per pool */ + } else { + vmdq_m = IXGBE_82599_VMDQ_4Q_MASK; + rss_m = IXGBE_RSS_4Q_MASK; + rss_i = 4; } - /* adding FCoE rx rings to the end */ - f->mask = adapter->num_rx_queues; - adapter->num_rx_queues += f->indices; - adapter->num_tx_queues += f->indices; +#ifdef IXGBE_FCOE + /* queues in the remaining pools are available for FCoE */ + fcoe_i = 128 - (vmdq_i * __ALIGN_MASK(1, ~vmdq_m)); + +#endif + /* remove the starting offset from the pool count */ + vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset; + + /* save features for later use */ + adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i; + adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m; + + /* limit RSS based on user input and save for later use */ + adapter->ring_feature[RING_F_RSS].indices = rss_i; + adapter->ring_feature[RING_F_RSS].mask = rss_m; + adapter->num_rx_pools = vmdq_i; + adapter->num_rx_queues_per_pool = rss_i; + + adapter->num_rx_queues = vmdq_i * rss_i; + adapter->num_tx_queues = vmdq_i * rss_i; + + /* disable ATR as it is not supported when VMDq is enabled */ + adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; 
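/*
 * Illustrative sketch, not part of the patch: ixgbe_set_dcb_queues() above
 * carves the Tx/Rx rings into equal per-TC blocks and reports each block to
 * the stack as an (offset, count) pair via netdev_set_tc_queue(). A standalone
 * model of the resulting layout; 4 TCs with 16 queues per TC is one example
 * configuration, not the only one the driver supports.
 */
#include <stdio.h>

int main(void)
{
        unsigned int tcs = 4;
        unsigned int rss_i = 16;        /* queues per traffic class */
        unsigned int tc;

        for (tc = 0; tc < tcs; tc++)
                printf("tc %u -> queues [%u, %u)\n",
                       tc, rss_i * tc, rss_i * tc + rss_i);

        printf("total tx/rx queues: %u\n", rss_i * tcs);
        return 0;
}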
+ +#ifdef IXGBE_FCOE + /* + * FCoE can use rings from adjacent buffers to allow RSS + * like behavior. To account for this we need to add the + * FCoE indices to the total ring count. + */ + if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { + struct ixgbe_ring_feature *fcoe; + + fcoe = &adapter->ring_feature[RING_F_FCOE]; + + /* limit ourselves based on feature limits */ + fcoe_i = min_t(u16, fcoe_i, fcoe->limit); + + if (vmdq_i > 1 && fcoe_i) { + /* reserve no more than number of CPUs */ + fcoe_i = min_t(u16, fcoe_i, num_online_cpus()); + + /* alloc queues for FCoE separately */ + fcoe->indices = fcoe_i; + fcoe->offset = vmdq_i * rss_i; + } else { + /* merge FCoE queues with RSS queues */ + fcoe_i = min_t(u16, fcoe_i + rss_i, num_online_cpus()); + + /* limit indices to rss_i if MSI-X is disabled */ + if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) + fcoe_i = rss_i; + + /* attempt to reserve some queues for just FCoE */ + fcoe->indices = min_t(u16, fcoe_i, fcoe->limit); + fcoe->offset = fcoe_i - fcoe->indices; + + fcoe_i -= rss_i; + } + + /* add queues to adapter */ + adapter->num_tx_queues += fcoe_i; + adapter->num_rx_queues += fcoe_i; + } + +#endif return true; } -#endif /* IXGBE_FCOE */ -/* Artificial max queue cap per traffic class in DCB mode */ -#define DCB_QUEUE_CAP 8 - -#ifdef CONFIG_IXGBE_DCB -static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter) +/** + * ixgbe_set_rss_queues - Allocate queues for RSS + * @adapter: board private structure to initialize + * + * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try + * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU. + * + **/ +static bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter) { - int per_tc_q, q, i, offset = 0; - struct net_device *dev = adapter->netdev; - int tcs = netdev_get_num_tc(dev); + struct ixgbe_ring_feature *f; + u16 rss_i; - if (!tcs) - return false; + /* set mask for 16 queue limit of RSS */ + f = &adapter->ring_feature[RING_F_RSS]; + rss_i = f->limit; - /* Map queue offset and counts onto allocated tx queues */ - per_tc_q = min_t(unsigned int, dev->num_tx_queues / tcs, DCB_QUEUE_CAP); - q = min_t(int, num_online_cpus(), per_tc_q); + f->indices = rss_i; + f->mask = IXGBE_RSS_16Q_MASK; - for (i = 0; i < tcs; i++) { - netdev_set_tc_queue(dev, i, q, offset); - offset += q; - } + /* disable ATR by default, it will be configured below */ + adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; + + /* + * Use Flow Director in addition to RSS to ensure the best + * distribution of flows across cores, even when an FDIR flow + * isn't matched. + */ + if (rss_i > 1 && adapter->atr_sample_rate) { + f = &adapter->ring_feature[RING_F_FDIR]; - adapter->num_tx_queues = q * tcs; - adapter->num_rx_queues = q * tcs; + f->indices = min_t(u16, num_online_cpus(), f->limit); + rss_i = max_t(u16, rss_i, f->indices); + + if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) + adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE; + } #ifdef IXGBE_FCOE - /* FCoE enabled queues require special configuration indexed - * by feature specific indices and mask. Here we map FCoE - * indices onto the DCB queue pairs allowing FCoE to own - * configuration later. + /* + * FCoE can exist on the same rings as standard network traffic + * however it is preferred to avoid that if possible. 
In order + * to get the best performance we allocate as many FCoE queues + * as we can and we place them at the end of the ring array to + * avoid sharing queues with standard RSS on systems with 24 or + * more CPUs. */ if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { - u8 prio_tc[MAX_USER_PRIORITY] = {0}; - int tc; - struct ixgbe_ring_feature *f = - &adapter->ring_feature[RING_F_FCOE]; - - ixgbe_dcb_unpack_map(&adapter->dcb_cfg, DCB_TX_CONFIG, prio_tc); - tc = prio_tc[adapter->fcoe.up]; - f->indices = dev->tc_to_txq[tc].count; - f->mask = dev->tc_to_txq[tc].offset; + struct net_device *dev = adapter->netdev; + u16 fcoe_i; + + f = &adapter->ring_feature[RING_F_FCOE]; + + /* merge FCoE queues with RSS queues */ + fcoe_i = min_t(u16, f->limit + rss_i, num_online_cpus()); + fcoe_i = min_t(u16, fcoe_i, dev->num_tx_queues); + + /* limit indices to rss_i if MSI-X is disabled */ + if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) + fcoe_i = rss_i; + + /* attempt to reserve some queues for just FCoE */ + f->indices = min_t(u16, fcoe_i, f->limit); + f->offset = fcoe_i - f->indices; + rss_i = max_t(u16, fcoe_i, rss_i); } -#endif + +#endif /* IXGBE_FCOE */ + adapter->num_rx_queues = rss_i; + adapter->num_tx_queues = rss_i; return true; } -#endif /** - * ixgbe_set_num_queues: Allocate queues for device, feature dependent + * ixgbe_set_num_queues - Allocate queues for device, feature dependent * @adapter: board private structure to initialize * * This is the top level queue allocation routine. The order here is very @@ -420,7 +677,7 @@ static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter) * fallthrough conditions. * **/ -static int ixgbe_set_num_queues(struct ixgbe_adapter *adapter) +static void ixgbe_set_num_queues(struct ixgbe_adapter *adapter) { /* Start with base case */ adapter->num_rx_queues = 1; @@ -428,38 +685,18 @@ static int ixgbe_set_num_queues(struct ixgbe_adapter *adapter) adapter->num_rx_pools = adapter->num_rx_queues; adapter->num_rx_queues_per_pool = 1; - if (ixgbe_set_sriov_queues(adapter)) - goto done; - #ifdef CONFIG_IXGBE_DCB + if (ixgbe_set_dcb_sriov_queues(adapter)) + return; + if (ixgbe_set_dcb_queues(adapter)) - goto done; + return; #endif -#ifdef IXGBE_FCOE - if (ixgbe_set_fcoe_queues(adapter)) - goto done; - -#endif /* IXGBE_FCOE */ - if (ixgbe_set_fdir_queues(adapter)) - goto done; - - if (ixgbe_set_rss_queues(adapter)) - goto done; - - /* fallback to base case */ - adapter->num_rx_queues = 1; - adapter->num_tx_queues = 1; - -done: - if ((adapter->netdev->reg_state == NETREG_UNREGISTERED) || - (adapter->netdev->reg_state == NETREG_UNREGISTERING)) - return 0; + if (ixgbe_set_sriov_queues(adapter)) + return; - /* Notify the stack of the (possibly) reduced queue counts. */ - netif_set_real_num_tx_queues(adapter->netdev, adapter->num_tx_queues); - return netif_set_real_num_rx_queues(adapter->netdev, - adapter->num_rx_queues); + ixgbe_set_rss_queues(adapter); } static void ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter, @@ -507,8 +744,8 @@ static void ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter, * of max_msix_q_vectors + NON_Q_VECTORS, or the number of * vectors we were allocated. 
*/ - adapter->num_msix_vectors = min(vectors, - adapter->max_msix_q_vectors + NON_Q_VECTORS); + vectors -= NON_Q_VECTORS; + adapter->num_q_vectors = min(vectors, adapter->max_q_vectors); } } @@ -632,8 +869,8 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter, if (adapter->netdev->features & NETIF_F_FCOE_MTU) { struct ixgbe_ring_feature *f; f = &adapter->ring_feature[RING_F_FCOE]; - if ((rxr_idx >= f->mask) && - (rxr_idx < f->mask + f->indices)) + if ((rxr_idx >= f->offset) && + (rxr_idx < f->offset + f->indices)) set_bit(__IXGBE_RX_FCOE, &ring->state); } @@ -695,7 +932,7 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx) **/ static int ixgbe_alloc_q_vectors(struct ixgbe_adapter *adapter) { - int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; + int q_vectors = adapter->num_q_vectors; int rxr_remaining = adapter->num_rx_queues; int txr_remaining = adapter->num_tx_queues; int rxr_idx = 0, txr_idx = 0, v_idx = 0; @@ -739,10 +976,12 @@ static int ixgbe_alloc_q_vectors(struct ixgbe_adapter *adapter) return 0; err_out: - while (v_idx) { - v_idx--; + adapter->num_tx_queues = 0; + adapter->num_rx_queues = 0; + adapter->num_q_vectors = 0; + + while (v_idx--) ixgbe_free_q_vector(adapter, v_idx); - } return -ENOMEM; } @@ -757,14 +996,13 @@ err_out: **/ static void ixgbe_free_q_vectors(struct ixgbe_adapter *adapter) { - int v_idx, q_vectors; + int v_idx = adapter->num_q_vectors; - if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) - q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - else - q_vectors = 1; + adapter->num_tx_queues = 0; + adapter->num_rx_queues = 0; + adapter->num_q_vectors = 0; - for (v_idx = 0; v_idx < q_vectors; v_idx++) + while (v_idx--) ixgbe_free_q_vector(adapter, v_idx); } @@ -788,11 +1026,10 @@ static void ixgbe_reset_interrupt_capability(struct ixgbe_adapter *adapter) * Attempt to configure the interrupts using the best available * capabilities of the hardware and the kernel. **/ -static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter) +static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; - int err = 0; - int vector, v_budget; + int vector, v_budget, err; /* * It's easy to be greedy for MSI-X vectors, but it really @@ -825,38 +1062,41 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter) ixgbe_acquire_msix_vectors(adapter, v_budget); if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) - goto out; + return; } - adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED; - adapter->flags &= ~IXGBE_FLAG_RSS_ENABLED; - if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) { - e_err(probe, - "ATR is not supported while multiple " - "queues are disabled. 
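/*
 * Illustrative sketch, not part of the patch: after MSI-X allocation trims the
 * request, the code above sets aside the non-queue vector(s) and caps the
 * remainder at max_q_vectors to get num_q_vectors. A standalone model of that
 * bookkeeping; the NON_Q_VECTORS and max values are stand-ins for the driver's
 * constants.
 */
#include <stdio.h>

int main(void)
{
        int acquired = 18;              /* vectors the OS actually granted */
        int non_q_vectors = 1;          /* e.g. one "other"/link interrupt */
        int max_q_vectors = 64;

        int q_vectors = acquired - non_q_vectors;
        if (q_vectors > max_q_vectors)
                q_vectors = max_q_vectors;

        printf("queue vectors in use: %d\n", q_vectors);
        return 0;
}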
Disabling Flow Director\n"); + /* disable DCB if number of TCs exceeds 1 */ + if (netdev_get_num_tc(adapter->netdev) > 1) { + e_err(probe, "num TCs exceeds number of queues - disabling DCB\n"); + netdev_reset_tc(adapter->netdev); + + if (adapter->hw.mac.type == ixgbe_mac_82598EB) + adapter->hw.fc.requested_mode = adapter->last_lfc_mode; + + adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED; + adapter->temp_dcb_cfg.pfc_mode_enable = false; + adapter->dcb_cfg.pfc_mode_enable = false; } - adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; - adapter->atr_sample_rate = 0; - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) - ixgbe_disable_sriov(adapter); + adapter->dcb_cfg.num_tcs.pg_tcs = 1; + adapter->dcb_cfg.num_tcs.pfc_tcs = 1; + + /* disable SR-IOV */ + ixgbe_disable_sriov(adapter); - err = ixgbe_set_num_queues(adapter); - if (err) - return err; + /* disable RSS */ + adapter->ring_feature[RING_F_RSS].limit = 1; + + ixgbe_set_num_queues(adapter); + adapter->num_q_vectors = 1; err = pci_enable_msi(adapter->pdev); - if (!err) { - adapter->flags |= IXGBE_FLAG_MSI_ENABLED; - } else { + if (err) { netif_printk(adapter, hw, KERN_DEBUG, adapter->netdev, "Unable to allocate MSI interrupt, " "falling back to legacy. Error: %d\n", err); - /* reset err */ - err = 0; + return; } - -out: - return err; + adapter->flags |= IXGBE_FLAG_MSI_ENABLED; } /** @@ -874,15 +1114,10 @@ int ixgbe_init_interrupt_scheme(struct ixgbe_adapter *adapter) int err; /* Number of supported queues */ - err = ixgbe_set_num_queues(adapter); - if (err) - return err; + ixgbe_set_num_queues(adapter); - err = ixgbe_set_interrupt_capability(adapter); - if (err) { - e_dev_err("Unable to setup interrupt capabilities\n"); - goto err_set_interrupt; - } + /* Set interrupt mode */ + ixgbe_set_interrupt_capability(adapter); err = ixgbe_alloc_q_vectors(adapter); if (err) { @@ -902,7 +1137,6 @@ int ixgbe_init_interrupt_scheme(struct ixgbe_adapter *adapter) err_alloc_q_vectors: ixgbe_reset_interrupt_capability(adapter); -err_set_interrupt: return err; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index e242104ab471..3b6784cf134a 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -516,7 +516,7 @@ static void ixgbe_get_hw_control(struct ixgbe_adapter *adapter) ctrl_ext | IXGBE_CTRL_EXT_DRV_LOAD); } -/* +/** * ixgbe_set_ivar - set the IVAR registers, mapping interrupt causes to vectors * @adapter: pointer to adapter struct * @direction: 0 for Rx, 1 for Tx, -1 for other causes @@ -790,12 +790,10 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector, total_packets += tx_buffer->gso_segs; #ifdef CONFIG_IXGBE_PTP - if (unlikely(tx_buffer->tx_flags & - IXGBE_TX_FLAGS_TSTAMP)) - ixgbe_ptp_tx_hwtstamp(q_vector, - tx_buffer->skb); - + if (unlikely(tx_buffer->tx_flags & IXGBE_TX_FLAGS_TSTAMP)) + ixgbe_ptp_tx_hwtstamp(q_vector, tx_buffer->skb); #endif + /* free the skb */ dev_kfree_skb_any(tx_buffer->skb); @@ -995,7 +993,6 @@ out_no_update: static void ixgbe_setup_dca(struct ixgbe_adapter *adapter) { - int num_q_vectors; int i; if (!(adapter->flags & IXGBE_FLAG_DCA_ENABLED)) @@ -1004,12 +1001,7 @@ static void ixgbe_setup_dca(struct ixgbe_adapter *adapter) /* always use CB2 mode, difference is masked in the CB driver */ IXGBE_WRITE_REG(&adapter->hw, IXGBE_DCA_CTRL, 2); - if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) - num_q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - else - num_q_vectors = 1; - - for (i = 0; i < 
num_q_vectors; i++) { + for (i = 0; i < adapter->num_q_vectors; i++) { adapter->q_vector[i]->cpu = -1; ixgbe_update_dca(adapter->q_vector[i]); } @@ -1399,8 +1391,7 @@ static void ixgbe_process_skb_fields(struct ixgbe_ring *rx_ring, ixgbe_rx_checksum(rx_ring, rx_desc, skb); #ifdef CONFIG_IXGBE_PTP - if (ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_STAT_TS)) - ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector, skb); + ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector, rx_desc, skb); #endif if ((dev->features & NETIF_F_HW_VLAN_RX) && @@ -1526,8 +1517,8 @@ static bool ixgbe_cleanup_headers(struct ixgbe_ring *rx_ring, * 60 bytes if the skb->len is less than 60 for skb_pad. */ pull_len = skb_frag_size(frag); - if (pull_len > 256) - pull_len = ixgbe_get_headlen(va, pull_len); + if (pull_len > IXGBE_RX_HDR_SIZE) + pull_len = ixgbe_get_headlen(va, IXGBE_RX_HDR_SIZE); /* align pull length to size of long to optimize memcpy performance */ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long))); @@ -1834,11 +1825,9 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector, static void ixgbe_configure_msix(struct ixgbe_adapter *adapter) { struct ixgbe_q_vector *q_vector; - int q_vectors, v_idx; + int v_idx; u32 mask; - q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - /* Populate MSIX to EITR Select */ if (adapter->num_vfs > 32) { u32 eitrsel = (1 << (adapter->num_vfs - 32)) - 1; @@ -1849,7 +1838,7 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter) * Populate the IVAR table and set the ITR values to the * corresponding register. */ - for (v_idx = 0; v_idx < q_vectors; v_idx++) { + for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) { struct ixgbe_ring *ring; q_vector = adapter->q_vector[v_idx]; @@ -2413,11 +2402,10 @@ int ixgbe_poll(struct napi_struct *napi, int budget) static int ixgbe_request_msix_irqs(struct ixgbe_adapter *adapter) { struct net_device *netdev = adapter->netdev; - int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; int vector, err; int ri = 0, ti = 0; - for (vector = 0; vector < q_vectors; vector++) { + for (vector = 0; vector < adapter->num_q_vectors; vector++) { struct ixgbe_q_vector *q_vector = adapter->q_vector[vector]; struct msix_entry *entry = &adapter->msix_entries[vector]; @@ -2572,30 +2560,28 @@ static int ixgbe_request_irq(struct ixgbe_adapter *adapter) static void ixgbe_free_irq(struct ixgbe_adapter *adapter) { - if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) { - int i, q_vectors; + int vector; - q_vectors = adapter->num_msix_vectors; - i = q_vectors - 1; - free_irq(adapter->msix_entries[i].vector, adapter); - i--; + if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) { + free_irq(adapter->pdev->irq, adapter); + return; + } - for (; i >= 0; i--) { - /* free only the irqs that were actually requested */ - if (!adapter->q_vector[i]->rx.ring && - !adapter->q_vector[i]->tx.ring) - continue; + for (vector = 0; vector < adapter->num_q_vectors; vector++) { + struct ixgbe_q_vector *q_vector = adapter->q_vector[vector]; + struct msix_entry *entry = &adapter->msix_entries[vector]; - /* clear the affinity_mask in the IRQ descriptor */ - irq_set_affinity_hint(adapter->msix_entries[i].vector, - NULL); + /* free only the irqs that were actually requested */ + if (!q_vector->rx.ring && !q_vector->tx.ring) + continue; - free_irq(adapter->msix_entries[i].vector, - adapter->q_vector[i]); - } - } else { - free_irq(adapter->pdev->irq, adapter); + /* clear the affinity_mask in the IRQ descriptor */ + irq_set_affinity_hint(entry->vector, NULL); + + 
free_irq(entry->vector, q_vector); } + + free_irq(adapter->msix_entries[vector++].vector, adapter); } /** @@ -2619,9 +2605,12 @@ static inline void ixgbe_irq_disable(struct ixgbe_adapter *adapter) } IXGBE_WRITE_FLUSH(&adapter->hw); if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) { - int i; - for (i = 0; i < adapter->num_msix_vectors; i++) - synchronize_irq(adapter->msix_entries[i].vector); + int vector; + + for (vector = 0; vector < adapter->num_q_vectors; vector++) + synchronize_irq(adapter->msix_entries[vector].vector); + + synchronize_irq(adapter->msix_entries[vector++].vector); } else { synchronize_irq(adapter->pdev->irq); } @@ -2699,8 +2688,7 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter, 32; /* PTHRESH = 32 */ /* reinitialize flowdirector state */ - if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) && - adapter->atr_sample_rate) { + if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) { ring->atr_sample_rate = adapter->atr_sample_rate; ring->atr_count = 0; set_bit(__IXGBE_TX_FDIR_INIT_DONE, &ring->state); @@ -2730,8 +2718,7 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter, static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; - u32 rttdcs; - u32 reg; + u32 rttdcs, mtqc; u8 tcs = netdev_get_num_tc(adapter->netdev); if (hw->mac.type == ixgbe_mac_82598EB) @@ -2743,28 +2730,32 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter) IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, rttdcs); /* set transmit pool layout */ - switch (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { - case (IXGBE_FLAG_SRIOV_ENABLED): - IXGBE_WRITE_REG(hw, IXGBE_MTQC, - (IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF)); - break; - default: - if (!tcs) - reg = IXGBE_MTQC_64Q_1PB; - else if (tcs <= 4) - reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ; + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { + mtqc = IXGBE_MTQC_VT_ENA; + if (tcs > 4) + mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ; + else if (tcs > 1) + mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ; + else if (adapter->ring_feature[RING_F_RSS].indices == 4) + mtqc |= IXGBE_MTQC_32VF; else - reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ; + mtqc |= IXGBE_MTQC_64VF; + } else { + if (tcs > 4) + mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ; + else if (tcs > 1) + mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ; + else + mtqc = IXGBE_MTQC_64Q_1PB; + } - IXGBE_WRITE_REG(hw, IXGBE_MTQC, reg); + IXGBE_WRITE_REG(hw, IXGBE_MTQC, mtqc); - /* Enable Security TX Buffer IFG for multiple pb */ - if (tcs) { - reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG); - reg |= IXGBE_SECTX_DCB; - IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg); - } - break; + /* Enable Security TX Buffer IFG for multiple pb */ + if (tcs) { + u32 sectx = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG); + sectx |= IXGBE_SECTX_DCB; + IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, sectx); } /* re-enable the arbiter */ @@ -2858,40 +2849,34 @@ static void ixgbe_set_rx_drop_en(struct ixgbe_adapter *adapter) static void ixgbe_configure_srrctl(struct ixgbe_adapter *adapter, struct ixgbe_ring *rx_ring) { + struct ixgbe_hw *hw = &adapter->hw; u32 srrctl; u8 reg_idx = rx_ring->reg_idx; - switch (adapter->hw.mac.type) { - case ixgbe_mac_82598EB: { - struct ixgbe_ring_feature *feature = adapter->ring_feature; - const int mask = feature[RING_F_RSS].mask; - reg_idx = reg_idx & mask; - } - break; - case ixgbe_mac_82599EB: - case ixgbe_mac_X540: - default: - break; - } - - srrctl = IXGBE_READ_REG(&adapter->hw, IXGBE_SRRCTL(reg_idx)); + if (hw->mac.type == ixgbe_mac_82598EB) { + u16 mask = 
adapter->ring_feature[RING_F_RSS].mask; - srrctl &= ~IXGBE_SRRCTL_BSIZEHDR_MASK; - srrctl &= ~IXGBE_SRRCTL_BSIZEPKT_MASK; - if (adapter->num_vfs) - srrctl |= IXGBE_SRRCTL_DROP_EN; + /* + * if VMDq is not active we must program one srrctl register + * per RSS queue since we have enabled RDRXCTL.MVMEN + */ + reg_idx &= mask; + } - srrctl |= (IXGBE_RX_HDR_SIZE << IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT) & - IXGBE_SRRCTL_BSIZEHDR_MASK; + /* configure header buffer length, needed for RSC */ + srrctl = IXGBE_RX_HDR_SIZE << IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT; + /* configure the packet buffer length */ #if PAGE_SIZE > IXGBE_MAX_RXBUFFER srrctl |= IXGBE_MAX_RXBUFFER >> IXGBE_SRRCTL_BSIZEPKT_SHIFT; #else srrctl |= ixgbe_rx_bufsz(rx_ring) >> IXGBE_SRRCTL_BSIZEPKT_SHIFT; #endif + + /* configure descriptor type */ srrctl |= IXGBE_SRRCTL_DESCTYPE_ADV_ONEBUF; - IXGBE_WRITE_REG(&adapter->hw, IXGBE_SRRCTL(reg_idx), srrctl); + IXGBE_WRITE_REG(hw, IXGBE_SRRCTL(reg_idx), srrctl); } static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter) @@ -2903,11 +2888,15 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter) u32 mrqc = 0, reta = 0; u32 rxcsum; int i, j; - u8 tcs = netdev_get_num_tc(adapter->netdev); - int maxq = adapter->ring_feature[RING_F_RSS].indices; + u16 rss_i = adapter->ring_feature[RING_F_RSS].indices; - if (tcs) - maxq = min(maxq, adapter->num_tx_queues / tcs); + /* + * Program table for at least 2 queues w/ SR-IOV so that VFs can + * make full use of any rings they may have. We will use the + * PSRTYPE register to control how many rings we use within the PF. + */ + if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2)) + rss_i = 2; /* Fill out hash function seeds */ for (i = 0; i < 10; i++) @@ -2915,7 +2904,7 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter) /* Fill out redirection table */ for (i = 0, j = 0; i < 128; i++, j++) { - if (j == maxq) + if (j == rss_i) j = 0; /* reta = 4-byte sliding window of * 0x00..(indices-1)(indices-1)00..etc. 
*/ @@ -2929,35 +2918,36 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter) rxcsum |= IXGBE_RXCSUM_PCSD; IXGBE_WRITE_REG(hw, IXGBE_RXCSUM, rxcsum); - if (adapter->hw.mac.type == ixgbe_mac_82598EB && - (adapter->flags & IXGBE_FLAG_RSS_ENABLED)) { - mrqc = IXGBE_MRQC_RSSEN; + if (adapter->hw.mac.type == ixgbe_mac_82598EB) { + if (adapter->ring_feature[RING_F_RSS].mask) + mrqc = IXGBE_MRQC_RSSEN; } else { - int mask = adapter->flags & (IXGBE_FLAG_RSS_ENABLED - | IXGBE_FLAG_SRIOV_ENABLED); - - switch (mask) { - case (IXGBE_FLAG_RSS_ENABLED): - if (!tcs) - mrqc = IXGBE_MRQC_RSSEN; - else if (tcs <= 4) - mrqc = IXGBE_MRQC_RTRSS4TCEN; + u8 tcs = netdev_get_num_tc(adapter->netdev); + + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { + if (tcs > 4) + mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */ + else if (tcs > 1) + mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */ + else if (adapter->ring_feature[RING_F_RSS].indices == 4) + mrqc = IXGBE_MRQC_VMDQRSS32EN; else + mrqc = IXGBE_MRQC_VMDQRSS64EN; + } else { + if (tcs > 4) mrqc = IXGBE_MRQC_RTRSS8TCEN; - break; - case (IXGBE_FLAG_SRIOV_ENABLED): - mrqc = IXGBE_MRQC_VMDQEN; - break; - default: - break; + else if (tcs > 1) + mrqc = IXGBE_MRQC_RTRSS4TCEN; + else + mrqc = IXGBE_MRQC_RSSEN; } } /* Perform hash on these packet types */ - mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4 - | IXGBE_MRQC_RSS_FIELD_IPV4_TCP - | IXGBE_MRQC_RSS_FIELD_IPV6 - | IXGBE_MRQC_RSS_FIELD_IPV6_TCP; + mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4 | + IXGBE_MRQC_RSS_FIELD_IPV4_TCP | + IXGBE_MRQC_RSS_FIELD_IPV6 | + IXGBE_MRQC_RSS_FIELD_IPV6_TCP; if (adapter->flags2 & IXGBE_FLAG2_RSS_FIELD_IPV4_UDP) mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP; @@ -3108,6 +3098,7 @@ void ixgbe_configure_rx_ring(struct ixgbe_adapter *adapter, static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; + int rss_i = adapter->ring_feature[RING_F_RSS].indices; int p; /* PSRTYPE must be initialized in non 82598 adapters */ @@ -3120,58 +3111,69 @@ static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter) if (hw->mac.type == ixgbe_mac_82598EB) return; - if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) - psrtype |= (adapter->num_rx_queues_per_pool << 29); + if (rss_i > 3) + psrtype |= 2 << 29; + else if (rss_i > 1) + psrtype |= 1 << 29; for (p = 0; p < adapter->num_rx_pools; p++) - IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(adapter->num_vfs + p), + IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(p)), psrtype); } static void ixgbe_configure_virtualization(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; - u32 gcr_ext; - u32 vt_reg_bits; u32 reg_offset, vf_shift; - u32 vmdctl; + u32 gcr_ext, vmdctl; int i; if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) return; vmdctl = IXGBE_READ_REG(hw, IXGBE_VT_CTL); - vt_reg_bits = IXGBE_VMD_CTL_VMDQ_EN | IXGBE_VT_CTL_REPLEN; - vt_reg_bits |= (adapter->num_vfs << IXGBE_VT_CTL_POOL_SHIFT); - IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, vmdctl | vt_reg_bits); + vmdctl |= IXGBE_VMD_CTL_VMDQ_EN; + vmdctl &= ~IXGBE_VT_CTL_POOL_MASK; + vmdctl |= VMDQ_P(0) << IXGBE_VT_CTL_POOL_SHIFT; + vmdctl |= IXGBE_VT_CTL_REPLEN; + IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, vmdctl); - vf_shift = adapter->num_vfs % 32; - reg_offset = (adapter->num_vfs >= 32) ? 1 : 0; + vf_shift = VMDQ_P(0) % 32; + reg_offset = (VMDQ_P(0) >= 32) ? 
1 : 0; /* Enable only the PF's pool for Tx/Rx */ - IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), (1 << vf_shift)); - IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset ^ 1), 0); - IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), (1 << vf_shift)); - IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset ^ 1), 0); + IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), (~0) << vf_shift); + IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset ^ 1), reg_offset - 1); + IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), (~0) << vf_shift); + IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset ^ 1), reg_offset - 1); IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN); /* Map PF MAC address in RAR Entry 0 to first pool following VFs */ - hw->mac.ops.set_vmdq(hw, 0, adapter->num_vfs); + hw->mac.ops.set_vmdq(hw, 0, VMDQ_P(0)); /* * Set up VF register offsets for selected VT Mode, * i.e. 32 or 64 VFs for SR-IOV */ - gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT); - gcr_ext |= IXGBE_GCR_EXT_MSIX_EN; - gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64; + switch (adapter->ring_feature[RING_F_VMDQ].mask) { + case IXGBE_82599_VMDQ_8Q_MASK: + gcr_ext = IXGBE_GCR_EXT_VT_MODE_16; + break; + case IXGBE_82599_VMDQ_4Q_MASK: + gcr_ext = IXGBE_GCR_EXT_VT_MODE_32; + break; + default: + gcr_ext = IXGBE_GCR_EXT_VT_MODE_64; + break; + } + IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext); /* enable Tx loopback for VF/PF communication */ IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN); + /* Enable MAC Anti-Spoofing */ - hw->mac.ops.set_mac_anti_spoofing(hw, - (adapter->num_vfs != 0), + hw->mac.ops.set_mac_anti_spoofing(hw, (adapter->num_vfs != 0), adapter->num_vfs); /* For VFs that have spoof checking turned off */ for (i = 0; i < adapter->num_vfs; i++) { @@ -3307,10 +3309,9 @@ static int ixgbe_vlan_rx_add_vid(struct net_device *netdev, u16 vid) { struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; - int pool_ndx = adapter->num_vfs; /* add VID to filter table */ - hw->mac.ops.set_vfta(&adapter->hw, vid, pool_ndx, true); + hw->mac.ops.set_vfta(&adapter->hw, vid, VMDQ_P(0), true); set_bit(vid, adapter->active_vlans); return 0; @@ -3320,10 +3321,9 @@ static int ixgbe_vlan_rx_kill_vid(struct net_device *netdev, u16 vid) { struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; - int pool_ndx = adapter->num_vfs; /* remove VID from filter table */ - hw->mac.ops.set_vfta(&adapter->hw, vid, pool_ndx, false); + hw->mac.ops.set_vfta(&adapter->hw, vid, VMDQ_P(0), false); clear_bit(vid, adapter->active_vlans); return 0; @@ -3441,15 +3441,18 @@ static int ixgbe_write_uc_addr_list(struct net_device *netdev) { struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; - unsigned int vfn = adapter->num_vfs; - unsigned int rar_entries = IXGBE_MAX_PF_MACVLANS; + unsigned int rar_entries = hw->mac.num_rar_entries - 1; int count = 0; + /* In SR-IOV mode significantly less RAR entries are available */ + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) + rar_entries = IXGBE_MAX_PF_MACVLANS - 1; + /* return ENOMEM indicating insufficient memory for addresses */ if (netdev_uc_count(netdev) > rar_entries) return -ENOMEM; - if (!netdev_uc_empty(netdev) && rar_entries) { + if (!netdev_uc_empty(netdev)) { struct netdev_hw_addr *ha; /* return error if we do not support writing to RAR table */ if (!hw->mac.ops.set_rar) @@ -3459,7 +3462,7 @@ static int ixgbe_write_uc_addr_list(struct net_device *netdev) if (!rar_entries) break; hw->mac.ops.set_rar(hw, rar_entries--, ha->addr, - vfn, IXGBE_RAH_AV); + VMDQ_P(0), 
IXGBE_RAH_AV); count++; } } @@ -3533,12 +3536,14 @@ void ixgbe_set_rx_mode(struct net_device *netdev) vmolr |= IXGBE_VMOLR_ROPE; } - if (adapter->num_vfs) { + if (adapter->num_vfs) ixgbe_restore_vf_multicasts(adapter); - vmolr |= IXGBE_READ_REG(hw, IXGBE_VMOLR(adapter->num_vfs)) & + + if (hw->mac.type != ixgbe_mac_82598EB) { + vmolr |= IXGBE_READ_REG(hw, IXGBE_VMOLR(VMDQ_P(0))) & ~(IXGBE_VMOLR_MPE | IXGBE_VMOLR_ROMPE | IXGBE_VMOLR_ROPE); - IXGBE_WRITE_REG(hw, IXGBE_VMOLR(adapter->num_vfs), vmolr); + IXGBE_WRITE_REG(hw, IXGBE_VMOLR(VMDQ_P(0)), vmolr); } /* This is useful for sniffing bad packets. */ @@ -3564,37 +3569,21 @@ void ixgbe_set_rx_mode(struct net_device *netdev) static void ixgbe_napi_enable_all(struct ixgbe_adapter *adapter) { int q_idx; - struct ixgbe_q_vector *q_vector; - int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - /* legacy and MSI only use one vector */ - if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) - q_vectors = 1; - - for (q_idx = 0; q_idx < q_vectors; q_idx++) { - q_vector = adapter->q_vector[q_idx]; - napi_enable(&q_vector->napi); - } + for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) + napi_enable(&adapter->q_vector[q_idx]->napi); } static void ixgbe_napi_disable_all(struct ixgbe_adapter *adapter) { int q_idx; - struct ixgbe_q_vector *q_vector; - int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - - /* legacy and MSI only use one vector */ - if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) - q_vectors = 1; - for (q_idx = 0; q_idx < q_vectors; q_idx++) { - q_vector = adapter->q_vector[q_idx]; - napi_disable(&q_vector->napi); - } + for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) + napi_disable(&adapter->q_vector[q_idx]->napi); } #ifdef CONFIG_IXGBE_DCB -/* +/** * ixgbe_configure_dcb - Configure DCB hardware * @adapter: ixgbe adapter struct * @@ -3641,19 +3630,16 @@ static void ixgbe_configure_dcb(struct ixgbe_adapter *adapter) /* Enable RSS Hash per TC */ if (hw->mac.type != ixgbe_mac_82598EB) { - int i; - u32 reg = 0; - - for (i = 0; i < MAX_TRAFFIC_CLASS; i++) { - u8 msb = 0; - u8 cnt = adapter->netdev->tc_to_txq[i].count; - - while (cnt >>= 1) - msb++; + u32 msb = 0; + u16 rss_i = adapter->ring_feature[RING_F_RSS].indices - 1; - reg |= msb << IXGBE_RQTC_SHIFT_TC(i); + while (rss_i) { + msb++; + rss_i >>= 1; } - IXGBE_WRITE_REG(hw, IXGBE_RQTC, reg); + + /* write msb to all 8 TCs in one write */ + IXGBE_WRITE_REG(hw, IXGBE_RQTC, msb * 0x11111111); } } #endif @@ -3661,11 +3647,11 @@ static void ixgbe_configure_dcb(struct ixgbe_adapter *adapter) /* Additional bittime to account for IXGBE framing */ #define IXGBE_ETH_FRAMING 20 -/* +/** * ixgbe_hpbthresh - calculate high water mark for flow control * * @adapter: board private structure to calculate for - * @pb - packet buffer to calculate + * @pb: packet buffer to calculate */ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb) { @@ -3679,18 +3665,12 @@ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb) #ifdef IXGBE_FCOE /* FCoE traffic class uses FCOE jumbo frames */ - if (dev->features & NETIF_F_FCOE_MTU) { - int fcoe_pb = 0; + if ((dev->features & NETIF_F_FCOE_MTU) && + (tc < IXGBE_FCOE_JUMBO_FRAME_SIZE) && + (pb == ixgbe_fcoe_get_tc(adapter))) + tc = IXGBE_FCOE_JUMBO_FRAME_SIZE; -#ifdef CONFIG_IXGBE_DCB - fcoe_pb = netdev_get_prio_tc_map(dev, adapter->fcoe.up); - -#endif - if (fcoe_pb == pb && tc < IXGBE_FCOE_JUMBO_FRAME_SIZE) - tc = IXGBE_FCOE_JUMBO_FRAME_SIZE; - } #endif - /* Calculate delay value for device */ switch (hw->mac.type) { case 
ixgbe_mac_X540: @@ -3725,11 +3705,11 @@ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb) return marker; } -/* +/** * ixgbe_lpbthresh - calculate low water mark for flow control * * @adapter: board private structure to calculate for - * @pb - packet buffer to calculate + * @pb: packet buffer to calculate */ static int ixgbe_lpbthresh(struct ixgbe_adapter *adapter) { @@ -3830,12 +3810,6 @@ static void ixgbe_configure(struct ixgbe_adapter *adapter) ixgbe_set_rx_mode(adapter->netdev); ixgbe_restore_vlan(adapter); -#ifdef IXGBE_FCOE - if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) - ixgbe_configure_fcoe(adapter); - -#endif /* IXGBE_FCOE */ - switch (hw->mac.type) { case ixgbe_mac_82599EB: case ixgbe_mac_X540: @@ -3865,6 +3839,11 @@ static void ixgbe_configure(struct ixgbe_adapter *adapter) ixgbe_configure_virtualization(adapter); +#ifdef IXGBE_FCOE + /* configure FCoE L2 filters, redirection table, and Rx control */ + ixgbe_configure_fcoe(adapter); + +#endif /* IXGBE_FCOE */ ixgbe_configure_tx(adapter); ixgbe_configure_rx(adapter); } @@ -3973,7 +3952,18 @@ static void ixgbe_setup_gpie(struct ixgbe_adapter *adapter) if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { gpie &= ~IXGBE_GPIE_VTMODE_MASK; - gpie |= IXGBE_GPIE_VTMODE_64; + + switch (adapter->ring_feature[RING_F_VMDQ].mask) { + case IXGBE_82599_VMDQ_8Q_MASK: + gpie |= IXGBE_GPIE_VTMODE_16; + break; + case IXGBE_82599_VMDQ_4Q_MASK: + gpie |= IXGBE_GPIE_VTMODE_32; + break; + default: + gpie |= IXGBE_GPIE_VTMODE_64; + break; + } } /* Enable Thermal over heat sensor interrupt */ @@ -4131,8 +4121,11 @@ void ixgbe_reset(struct ixgbe_adapter *adapter) clear_bit(__IXGBE_IN_SFP_INIT, &adapter->state); /* reprogram the RAR[0] in case user changed it. */ - hw->mac.ops.set_rar(hw, 0, hw->mac.addr, adapter->num_vfs, - IXGBE_RAH_AV); + hw->mac.ops.set_rar(hw, 0, hw->mac.addr, VMDQ_P(0), IXGBE_RAH_AV); + + /* update SAN MAC vmdq pool selection */ + if (hw->mac.san_mac_rar_index) + hw->mac.ops.set_vmdq_san_mac(hw, VMDQ_P(0)); } /** @@ -4413,32 +4406,29 @@ static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter) /* Set capability flags */ rss = min_t(int, IXGBE_MAX_RSS_INDICES, num_online_cpus()); - adapter->ring_feature[RING_F_RSS].indices = rss; - adapter->flags |= IXGBE_FLAG_RSS_ENABLED; + adapter->ring_feature[RING_F_RSS].limit = rss; switch (hw->mac.type) { case ixgbe_mac_82598EB: if (hw->device_id == IXGBE_DEV_ID_82598AT) adapter->flags |= IXGBE_FLAG_FAN_FAIL_CAPABLE; - adapter->max_msix_q_vectors = MAX_MSIX_Q_VECTORS_82598; + adapter->max_q_vectors = MAX_Q_VECTORS_82598; break; case ixgbe_mac_X540: adapter->flags2 |= IXGBE_FLAG2_TEMP_SENSOR_CAPABLE; case ixgbe_mac_82599EB: - adapter->max_msix_q_vectors = MAX_MSIX_Q_VECTORS_82599; + adapter->max_q_vectors = MAX_Q_VECTORS_82599; adapter->flags2 |= IXGBE_FLAG2_RSC_CAPABLE; adapter->flags2 |= IXGBE_FLAG2_RSC_ENABLED; if (hw->device_id == IXGBE_DEV_ID_82599_T3_LOM) adapter->flags2 |= IXGBE_FLAG2_TEMP_SENSOR_CAPABLE; /* Flow Director hash filters enabled */ - adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE; adapter->atr_sample_rate = 20; - adapter->ring_feature[RING_F_FDIR].indices = + adapter->ring_feature[RING_F_FDIR].limit = IXGBE_MAX_FDIR_INDICES; adapter->fdir_pballoc = IXGBE_FDIR_PBALLOC_64K; #ifdef IXGBE_FCOE adapter->flags |= IXGBE_FLAG_FCOE_CAPABLE; adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED; - adapter->ring_feature[RING_F_FCOE].indices = 0; #ifdef CONFIG_IXGBE_DCB /* Default traffic class to use for FCoE */ adapter->fcoe.up = IXGBE_FCOE_DEFTC; @@ -4449,6 +4439,11 @@ 
static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter) break; } +#ifdef IXGBE_FCOE + /* FCoE support exists, always init the FCoE lock */ + spin_lock_init(&adapter->fcoe.lock); + +#endif /* n-tuple support exists, always init our spinlock */ spin_lock_init(&adapter->fdir_perfect_lock); @@ -4497,6 +4492,12 @@ static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter) hw->fc.send_xon = true; hw->fc.disable_fc_autoneg = false; +#ifdef CONFIG_PCI_IOV + /* assign number of SR-IOV VFs */ + if (hw->mac.type != ixgbe_mac_82598EB) + adapter->num_vfs = (max_vfs > 63) ? 0 : max_vfs; + +#endif /* enable itr by default in dynamic mode */ adapter->rx_itr_setting = 1; adapter->tx_itr_setting = 1; @@ -4588,10 +4589,16 @@ static int ixgbe_setup_all_tx_resources(struct ixgbe_adapter *adapter) err = ixgbe_setup_tx_resources(adapter->tx_ring[i]); if (!err) continue; + e_err(probe, "Allocation for Tx Queue %u failed\n", i); - break; + goto err_setup_tx; } + return 0; +err_setup_tx: + /* rewind the index freeing the rings as we go */ + while (i--) + ixgbe_free_tx_resources(adapter->tx_ring[i]); return err; } @@ -4666,10 +4673,20 @@ static int ixgbe_setup_all_rx_resources(struct ixgbe_adapter *adapter) err = ixgbe_setup_rx_resources(adapter->rx_ring[i]); if (!err) continue; + e_err(probe, "Allocation for Rx Queue %u failed\n", i); - break; + goto err_setup_rx; } +#ifdef IXGBE_FCOE + err = ixgbe_setup_fcoe_ddp_resources(adapter); + if (!err) +#endif + return 0; +err_setup_rx: + /* rewind the index freeing the rings as we go */ + while (i--) + ixgbe_free_rx_resources(adapter->rx_ring[i]); return err; } @@ -4744,6 +4761,10 @@ static void ixgbe_free_all_rx_resources(struct ixgbe_adapter *adapter) { int i; +#ifdef IXGBE_FCOE + ixgbe_free_fcoe_ddp_resources(adapter); + +#endif for (i = 0; i < adapter->num_rx_queues; i++) if (adapter->rx_ring[i]->desc) ixgbe_free_rx_resources(adapter->rx_ring[i]); @@ -4825,15 +4846,31 @@ static int ixgbe_open(struct net_device *netdev) if (err) goto err_req_irq; + /* Notify the stack of the actual queue counts. */ + err = netif_set_real_num_tx_queues(netdev, + adapter->num_rx_pools > 1 ? 1 : + adapter->num_tx_queues); + if (err) + goto err_set_queues; + + + err = netif_set_real_num_rx_queues(netdev, + adapter->num_rx_pools > 1 ? 
1 : + adapter->num_rx_queues); + if (err) + goto err_set_queues; + ixgbe_up_complete(adapter); return 0; +err_set_queues: + ixgbe_free_irq(adapter); err_req_irq: -err_setup_rx: ixgbe_free_all_rx_resources(adapter); -err_setup_tx: +err_setup_rx: ixgbe_free_all_tx_resources(adapter); +err_setup_tx: ixgbe_reset(adapter); return err; @@ -4891,23 +4928,19 @@ static int ixgbe_resume(struct pci_dev *pdev) pci_wake_from_d3(pdev, false); - rtnl_lock(); - err = ixgbe_init_interrupt_scheme(adapter); - rtnl_unlock(); - if (err) { - e_dev_err("Cannot initialize interrupts for device\n"); - return err; - } - ixgbe_reset(adapter); IXGBE_WRITE_REG(&adapter->hw, IXGBE_WUS, ~0); - if (netif_running(netdev)) { + rtnl_lock(); + err = ixgbe_init_interrupt_scheme(adapter); + if (!err && netif_running(netdev)) err = ixgbe_open(netdev); - if (err) - return err; - } + + rtnl_unlock(); + + if (err) + return err; netif_device_attach(netdev); @@ -5043,11 +5076,6 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter) u64 non_eop_descs = 0, restart_queue = 0, tx_busy = 0; u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0; u64 bytes = 0, packets = 0, hw_csum_rx_error = 0; -#ifdef IXGBE_FCOE - struct ixgbe_fcoe *fcoe = &adapter->fcoe; - unsigned int cpu; - u64 fcoe_noddp_counts_sum = 0, fcoe_noddp_ext_buff_counts_sum = 0; -#endif /* IXGBE_FCOE */ if (test_bit(__IXGBE_DOWN, &adapter->state) || test_bit(__IXGBE_RESETTING, &adapter->state)) @@ -5178,17 +5206,19 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter) hwstats->fcoedwrc += IXGBE_READ_REG(hw, IXGBE_FCOEDWRC); hwstats->fcoedwtc += IXGBE_READ_REG(hw, IXGBE_FCOEDWTC); /* Add up per cpu counters for total ddp aloc fail */ - if (fcoe->pcpu_noddp && fcoe->pcpu_noddp_ext_buff) { + if (adapter->fcoe.ddp_pool) { + struct ixgbe_fcoe *fcoe = &adapter->fcoe; + struct ixgbe_fcoe_ddp_pool *ddp_pool; + unsigned int cpu; + u64 noddp = 0, noddp_ext_buff = 0; for_each_possible_cpu(cpu) { - fcoe_noddp_counts_sum += - *per_cpu_ptr(fcoe->pcpu_noddp, cpu); - fcoe_noddp_ext_buff_counts_sum += - *per_cpu_ptr(fcoe-> - pcpu_noddp_ext_buff, cpu); + ddp_pool = per_cpu_ptr(fcoe->ddp_pool, cpu); + noddp += ddp_pool->noddp; + noddp_ext_buff += ddp_pool->noddp_ext_buff; } + hwstats->fcoe_noddp = noddp; + hwstats->fcoe_noddp_ext_buff = noddp_ext_buff; } - hwstats->fcoe_noddp = fcoe_noddp_counts_sum; - hwstats->fcoe_noddp_ext_buff = fcoe_noddp_ext_buff_counts_sum; #endif /* IXGBE_FCOE */ break; default: @@ -5246,7 +5276,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter) /** * ixgbe_fdir_reinit_subtask - worker thread to reinit FDIR filter table - * @adapter - pointer to the device adapter structure + * @adapter: pointer to the device adapter structure **/ static void ixgbe_fdir_reinit_subtask(struct ixgbe_adapter *adapter) { @@ -5282,7 +5312,7 @@ static void ixgbe_fdir_reinit_subtask(struct ixgbe_adapter *adapter) /** * ixgbe_check_hang_subtask - check for hung queues and dropped interrupts - * @adapter - pointer to the device adapter structure + * @adapter: pointer to the device adapter structure * * This function serves two purposes. First it strobes the interrupt lines * in order to make certain interrupts are occurring. 
Secondly it sets the @@ -5316,7 +5346,7 @@ static void ixgbe_check_hang_subtask(struct ixgbe_adapter *adapter) (IXGBE_EICS_TCP_TIMER | IXGBE_EICS_OTHER)); } else { /* get one bit for every active tx/rx interrupt vector */ - for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) { + for (i = 0; i < adapter->num_q_vectors; i++) { struct ixgbe_q_vector *qv = adapter->q_vector[i]; if (qv->rx.ring || qv->tx.ring) eics |= ((u64)1 << i); @@ -5330,8 +5360,8 @@ static void ixgbe_check_hang_subtask(struct ixgbe_adapter *adapter) /** * ixgbe_watchdog_update_link - update the link status - * @adapter - pointer to the device adapter structure - * @link_speed - pointer to a u32 to store the link_speed + * @adapter: pointer to the device adapter structure + * @link_speed: pointer to a u32 to store the link_speed **/ static void ixgbe_watchdog_update_link(struct ixgbe_adapter *adapter) { @@ -5374,7 +5404,7 @@ static void ixgbe_watchdog_update_link(struct ixgbe_adapter *adapter) /** * ixgbe_watchdog_link_is_up - update netif_carrier status and * print link up message - * @adapter - pointer to the device adapter structure + * @adapter: pointer to the device adapter structure **/ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter) { @@ -5429,12 +5459,15 @@ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter) netif_carrier_on(netdev); ixgbe_check_vf_rate_limit(adapter); + + /* ping all the active vfs to let them know link has changed */ + ixgbe_ping_all_vfs(adapter); } /** * ixgbe_watchdog_link_is_down - update netif_carrier status and * print link down message - * @adapter - pointer to the adapter structure + * @adapter: pointer to the adapter structure **/ static void ixgbe_watchdog_link_is_down(struct ixgbe_adapter *adapter) { @@ -5458,11 +5491,14 @@ static void ixgbe_watchdog_link_is_down(struct ixgbe_adapter *adapter) e_info(drv, "NIC Link is Down\n"); netif_carrier_off(netdev); + + /* ping all the active vfs to let them know link has changed */ + ixgbe_ping_all_vfs(adapter); } /** * ixgbe_watchdog_flush_tx - flush queues on link down - * @adapter - pointer to the device adapter structure + * @adapter: pointer to the device adapter structure **/ static void ixgbe_watchdog_flush_tx(struct ixgbe_adapter *adapter) { @@ -5511,7 +5547,7 @@ static void ixgbe_spoof_check(struct ixgbe_adapter *adapter) /** * ixgbe_watchdog_subtask - check and bring link up - * @adapter - pointer to the device adapter structure + * @adapter: pointer to the device adapter structure **/ static void ixgbe_watchdog_subtask(struct ixgbe_adapter *adapter) { @@ -5535,7 +5571,7 @@ static void ixgbe_watchdog_subtask(struct ixgbe_adapter *adapter) /** * ixgbe_sfp_detection_subtask - poll for SFP+ cable - * @adapter - the ixgbe adapter structure + * @adapter: the ixgbe adapter structure **/ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter) { @@ -5602,7 +5638,7 @@ sfp_out: /** * ixgbe_sfp_link_config_subtask - set up link SFP after module install - * @adapter - the ixgbe adapter structure + * @adapter: the ixgbe adapter structure **/ static void ixgbe_sfp_link_config_subtask(struct ixgbe_adapter *adapter) { @@ -6233,8 +6269,14 @@ static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb) if (((protocol == htons(ETH_P_FCOE)) || (protocol == htons(ETH_P_FIP))) && (adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) { - txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1); - txq += adapter->ring_feature[RING_F_FCOE].mask; + struct ixgbe_ring_feature *f; + + f = 
&adapter->ring_feature[RING_F_FCOE]; + + while (txq >= f->indices) + txq -= f->indices; + txq += adapter->ring_feature[RING_F_FCOE].offset; + return txq; } #endif @@ -6348,7 +6390,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb, #ifdef IXGBE_FCOE /* setup tx offload for FCoE */ if ((protocol == __constant_htons(ETH_P_FCOE)) && - (adapter->flags & IXGBE_FLAG_FCOE_ENABLED)) { + (tx_ring->netdev->features & (NETIF_F_FSO | NETIF_F_FCOE_CRC))) { tso = ixgbe_fso(tx_ring, first, &hdr_len); if (tso < 0) goto out_drop; @@ -6389,17 +6431,12 @@ static netdev_tx_t ixgbe_xmit_frame(struct sk_buff *skb, struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_ring *tx_ring; - if (skb->len <= 0) { - dev_kfree_skb_any(skb); - return NETDEV_TX_OK; - } - /* * The minimum packet size for olinfo paylen is 17 so pad the skb * in order to meet this minimum size requirement. */ - if (skb->len < 17) { - if (skb_padto(skb, 17)) + if (unlikely(skb->len < 17)) { + if (skb_pad(skb, 17 - skb->len)) return NETDEV_TX_OK; skb->len = 17; } @@ -6427,8 +6464,7 @@ static int ixgbe_set_mac(struct net_device *netdev, void *p) memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len); memcpy(hw->mac.addr, addr->sa_data, netdev->addr_len); - hw->mac.ops.set_rar(hw, 0, hw->mac.addr, adapter->num_vfs, - IXGBE_RAH_AV); + hw->mac.ops.set_rar(hw, 0, hw->mac.addr, VMDQ_P(0), IXGBE_RAH_AV); return 0; } @@ -6485,12 +6521,15 @@ static int ixgbe_add_sanmac_netdev(struct net_device *dev) { int err = 0; struct ixgbe_adapter *adapter = netdev_priv(dev); - struct ixgbe_mac_info *mac = &adapter->hw.mac; + struct ixgbe_hw *hw = &adapter->hw; - if (is_valid_ether_addr(mac->san_addr)) { + if (is_valid_ether_addr(hw->mac.san_addr)) { rtnl_lock(); - err = dev_addr_add(dev, mac->san_addr, NETDEV_HW_ADDR_T_SAN); + err = dev_addr_add(dev, hw->mac.san_addr, NETDEV_HW_ADDR_T_SAN); rtnl_unlock(); + + /* update SAN MAC vmdq pool selection */ + hw->mac.ops.set_vmdq_san_mac(hw, VMDQ_P(0)); } return err; } @@ -6533,11 +6572,8 @@ static void ixgbe_netpoll(struct net_device *netdev) adapter->flags |= IXGBE_FLAG_IN_NETPOLL; if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) { - int num_q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - for (i = 0; i < num_q_vectors; i++) { - struct ixgbe_q_vector *q_vector = adapter->q_vector[i]; - ixgbe_msix_clean_rings(0, q_vector); - } + for (i = 0; i < adapter->num_q_vectors; i++) + ixgbe_msix_clean_rings(0, adapter->q_vector[i]); } else { ixgbe_intr(adapter->pdev->irq, netdev); } @@ -6594,8 +6630,9 @@ static struct rtnl_link_stats64 *ixgbe_get_stats64(struct net_device *netdev, } #ifdef CONFIG_IXGBE_DCB -/* ixgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid. - * #adapter: pointer to ixgbe_adapter +/** + * ixgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid. + * @adapter: pointer to ixgbe_adapter * @tc: number of traffic classes currently enabled * * Configure a valid 802.1Qp to Rx packet buffer mapping ie confirm @@ -6630,8 +6667,33 @@ static void ixgbe_validate_rtr(struct ixgbe_adapter *adapter, u8 tc) return; } -/* ixgbe_setup_tc - routine to configure net_device for multiple traffic - * classes. 
+/** + * ixgbe_set_prio_tc_map - Configure netdev prio tc map + * @adapter: Pointer to adapter struct + * + * Populate the netdev user priority to tc map + */ +static void ixgbe_set_prio_tc_map(struct ixgbe_adapter *adapter) +{ + struct net_device *dev = adapter->netdev; + struct ixgbe_dcb_config *dcb_cfg = &adapter->dcb_cfg; + struct ieee_ets *ets = adapter->ixgbe_ieee_ets; + u8 prio; + + for (prio = 0; prio < MAX_USER_PRIORITY; prio++) { + u8 tc = 0; + + if (adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE) + tc = ixgbe_dcb_get_tc_from_up(dcb_cfg, 0, prio); + else if (ets) + tc = ets->prio_tc[prio]; + + netdev_set_prio_tc_map(dev, prio, tc); + } +} + +/** + * ixgbe_setup_tc - configure net_device for multiple traffic classes * * @netdev: net device to configure * @tc: number of traffic classes to enable @@ -6641,17 +6703,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc) struct ixgbe_adapter *adapter = netdev_priv(dev); struct ixgbe_hw *hw = &adapter->hw; - /* Multiple traffic classes requires multiple queues */ - if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) { - e_err(drv, "Enable failed, needs MSI-X\n"); - return -EINVAL; - } - - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { - e_err(drv, "Enable failed, SR-IOV enabled\n"); - return -EINVAL; - } - /* Hardware supports up to 8 traffic classes */ if (tc > adapter->dcb_cfg.num_tcs.pg_tcs || (hw->mac.type == ixgbe_mac_82598EB && @@ -6668,8 +6719,9 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc) if (tc) { netdev_set_num_tc(dev, tc); + ixgbe_set_prio_tc_map(adapter); + adapter->flags |= IXGBE_FLAG_DCB_ENABLED; - adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; if (adapter->hw.mac.type == ixgbe_mac_82598EB) { adapter->last_lfc_mode = adapter->hw.fc.requested_mode; @@ -6677,11 +6729,11 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc) } } else { netdev_reset_tc(dev); + if (adapter->hw.mac.type == ixgbe_mac_82598EB) adapter->hw.fc.requested_mode = adapter->last_lfc_mode; adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED; - adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE; adapter->temp_dcb_cfg.pfc_mode_enable = false; adapter->dcb_cfg.pfc_mode_enable = false; @@ -6711,10 +6763,6 @@ static netdev_features_t ixgbe_fix_features(struct net_device *netdev, { struct ixgbe_adapter *adapter = netdev_priv(netdev); - /* return error if RXHASH is being enabled when RSS is not supported */ - if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED)) - features &= ~NETIF_F_RXHASH; - /* If Rx checksum is disabled, then RSC/LRO should also be disabled */ if (!(features & NETIF_F_RXCSUM)) features &= ~NETIF_F_LRO; @@ -6754,20 +6802,40 @@ static int ixgbe_set_features(struct net_device *netdev, * Check if Flow Director n-tuple support was enabled or disabled. If * the state changed, we need to reset. 
*/ - if (!(features & NETIF_F_NTUPLE)) { - if (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) { - /* turn off Flow Director, set ATR and reset */ - if ((adapter->flags & IXGBE_FLAG_RSS_ENABLED) && - !(adapter->flags & IXGBE_FLAG_DCB_ENABLED)) - adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE; - need_reset = true; - } - adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE; - } else if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) { + switch (features & NETIF_F_NTUPLE) { + case NETIF_F_NTUPLE: /* turn off ATR, enable perfect filters and reset */ + if (!(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)) + need_reset = true; + adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE; adapter->flags |= IXGBE_FLAG_FDIR_PERFECT_CAPABLE; - need_reset = true; + break; + default: + /* turn off perfect filters, enable ATR and reset */ + if (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE) + need_reset = true; + + adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE; + + /* We cannot enable ATR if SR-IOV is enabled */ + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) + break; + + /* We cannot enable ATR if we have 2 or more traffic classes */ + if (netdev_get_num_tc(netdev) > 1) + break; + + /* We cannot enable ATR if RSS is disabled */ + if (adapter->ring_feature[RING_F_RSS].limit <= 1) + break; + + /* A sample rate of 0 indicates ATR disabled */ + if (!adapter->atr_sample_rate) + break; + + adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE; + break; } if (features & NETIF_F_HW_VLAN_RX) @@ -6791,7 +6859,10 @@ static int ixgbe_ndo_fdb_add(struct ndmsg *ndm, u16 flags) { struct ixgbe_adapter *adapter = netdev_priv(dev); - int err = -EOPNOTSUPP; + int err; + + if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) + return -EOPNOTSUPP; if (ndm->ndm_state & NUD_PERMANENT) { pr_info("%s: FDB only supports static addresses\n", @@ -6799,13 +6870,17 @@ static int ixgbe_ndo_fdb_add(struct ndmsg *ndm, return -EINVAL; } - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { - if (is_unicast_ether_addr(addr)) + if (is_unicast_ether_addr(addr)) { + u32 rar_uc_entries = IXGBE_MAX_PF_MACVLANS; + + if (netdev_uc_count(dev) < rar_uc_entries) err = dev_uc_add_excl(dev, addr); - else if (is_multicast_ether_addr(addr)) - err = dev_mc_add_excl(dev, addr); else - err = -EINVAL; + err = -ENOMEM; + } else if (is_multicast_ether_addr(addr)) { + err = dev_mc_add_excl(dev, addr); + } else { + err = -EINVAL; } /* Only return duplicate errors if NLM_F_EXCL is set */ @@ -6894,26 +6969,6 @@ static const struct net_device_ops ixgbe_netdev_ops = { .ndo_fdb_dump = ixgbe_ndo_fdb_dump, }; -static void __devinit ixgbe_probe_vf(struct ixgbe_adapter *adapter, - const struct ixgbe_info *ii) -{ -#ifdef CONFIG_PCI_IOV - struct ixgbe_hw *hw = &adapter->hw; - - if (hw->mac.type == ixgbe_mac_82598EB) - return; - - /* The 82599 supports up to 64 VFs per physical function - * but this implementation limits allocation to 63 so that - * basic networking resources are still available to the - * physical function. If the user requests greater thn - * 63 VFs then it is an error - reset to default of zero. - */ - adapter->num_vfs = (max_vfs > 63) ? 
0 : max_vfs; - ixgbe_enable_sriov(adapter, ii); -#endif /* CONFIG_PCI_IOV */ -} - /** * ixgbe_wol_supported - Check whether device supports WoL * @hw: hw specific details @@ -6940,6 +6995,7 @@ int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id, if (hw->bus.func != 0) break; case IXGBE_SUBDEV_ID_82599_SFP: + case IXGBE_SUBDEV_ID_82599_RNDC: is_wol_supported = 1; break; } @@ -6987,6 +7043,7 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, int i, err, pci_using_dac; u8 part_str[IXGBE_PBANUM_LENGTH]; unsigned int indices = num_possible_cpus(); + unsigned int dcb_max = 0; #ifdef IXGBE_FCOE u16 device_caps; #endif @@ -7036,7 +7093,12 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, pci_save_state(pdev); #ifdef CONFIG_IXGBE_DCB - indices *= MAX_TRAFFIC_CLASS; + if (ii->mac == ixgbe_mac_82598EB) + dcb_max = min_t(unsigned int, indices * MAX_TRAFFIC_CLASS, + IXGBE_MAX_RSS_INDICES); + else + dcb_max = min_t(unsigned int, indices * MAX_TRAFFIC_CLASS, + IXGBE_MAX_FDIR_INDICES); #endif if (ii->mac == ixgbe_mac_82598EB) @@ -7048,6 +7110,7 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, indices += min_t(unsigned int, num_possible_cpus(), IXGBE_MAX_FCOE_INDICES); #endif + indices = max_t(unsigned int, dcb_max, indices); netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), indices); if (!netdev) { err = -ENOMEM; @@ -7154,8 +7217,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, goto err_sw_init; } - ixgbe_probe_vf(adapter, ii); +#ifdef CONFIG_PCI_IOV + ixgbe_enable_sriov(adapter, ii); +#endif netdev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | @@ -7191,10 +7256,6 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_SUPP_NOFCS; - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) - adapter->flags &= ~(IXGBE_FLAG_RSS_ENABLED | - IXGBE_FLAG_DCB_ENABLED); - #ifdef CONFIG_IXGBE_DCB netdev->dcbnl_ops = &dcbnl_ops; #endif @@ -7206,11 +7267,15 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, if (device_caps & IXGBE_DEVICE_CAPS_FCOE_OFFLOADS) adapter->flags &= ~IXGBE_FLAG_FCOE_CAPABLE; } - } - if (adapter->flags & IXGBE_FLAG_FCOE_CAPABLE) { - netdev->vlan_features |= NETIF_F_FCOE_CRC; - netdev->vlan_features |= NETIF_F_FSO; - netdev->vlan_features |= NETIF_F_FCOE_MTU; + + adapter->ring_feature[RING_F_FCOE].limit = IXGBE_FCRETA_SIZE; + + netdev->features |= NETIF_F_FSO | + NETIF_F_FCOE_CRC; + + netdev->vlan_features |= NETIF_F_FSO | + NETIF_F_FCOE_CRC | + NETIF_F_FCOE_MTU; } #endif /* IXGBE_FCOE */ if (pci_using_dac) { @@ -7249,11 +7314,6 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev, if (err) goto err_sw_init; - if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED)) { - netdev->hw_features &= ~NETIF_F_RXHASH; - netdev->features &= ~NETIF_F_RXHASH; - } - /* WOL not supported for all devices */ adapter->wol = 0; hw->eeprom.ops.read(hw, 0x2c, &adapter->eeprom_cap); @@ -7364,8 +7424,7 @@ err_register: ixgbe_release_hw_control(adapter); ixgbe_clear_interrupt_scheme(adapter); err_sw_init: - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) - ixgbe_disable_sriov(adapter); + ixgbe_disable_sriov(adapter); adapter->flags2 &= ~IXGBE_FLAG2_SEARCH_FOR_SFP; iounmap(hw->hw_addr); err_ioremap: @@ -7412,25 +7471,13 @@ static void __devexit ixgbe_remove(struct pci_dev *pdev) ixgbe_sysfs_exit(adapter); #endif /* CONFIG_IXGBE_HWMON */ -#ifdef IXGBE_FCOE - if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) - ixgbe_cleanup_fcoe(adapter); - -#endif /* IXGBE_FCOE */ - /* remove the added san 
mac */ ixgbe_del_sanmac_netdev(netdev); if (netdev->reg_state == NETREG_REGISTERED) unregister_netdev(netdev); - if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { - if (!(ixgbe_check_vf_assignment(adapter))) - ixgbe_disable_sriov(adapter); - else - e_dev_warn("Unloading driver while VFs are assigned " - "- VFs will not be deallocated\n"); - } + ixgbe_disable_sriov(adapter); ixgbe_clear_interrupt_scheme(adapter); @@ -7521,11 +7568,11 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev, } /* Find the pci device of the offending VF */ - vfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, device_id, NULL); + vfdev = pci_get_device(PCI_VENDOR_ID_INTEL, device_id, NULL); while (vfdev) { if (vfdev->devfn == (req_id & 0xFF)) break; - vfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, + vfdev = pci_get_device(PCI_VENDOR_ID_INTEL, device_id, vfdev); } /* diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c index 24117709d6a2..71659edf81aa 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c @@ -907,6 +907,8 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) * 8 SFP_act_lmt_DA_CORE1 - 82599-specific * 9 SFP_1g_cu_CORE0 - 82599-specific * 10 SFP_1g_cu_CORE1 - 82599-specific + * 11 SFP_1g_sx_CORE0 - 82599-specific + * 12 SFP_1g_sx_CORE1 - 82599-specific */ if (hw->mac.type == ixgbe_mac_82598EB) { if (cable_tech & IXGBE_SFF_DA_PASSIVE_CABLE) @@ -957,6 +959,13 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) else hw->phy.sfp_type = ixgbe_sfp_type_1g_cu_core1; + } else if (comp_codes_1g & IXGBE_SFF_1GBASESX_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + ixgbe_sfp_type_1g_sx_core0; + else + hw->phy.sfp_type = + ixgbe_sfp_type_1g_sx_core1; } else { hw->phy.sfp_type = ixgbe_sfp_type_unknown; } @@ -1049,7 +1058,9 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) /* Verify supported 1G SFP modules */ if (comp_codes_10g == 0 && !(hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1 || - hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0)) { + hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1)) { hw->phy.type = ixgbe_phy_sfp_unsupported; status = IXGBE_ERR_SFP_NOT_SUPPORTED; goto out; @@ -1064,7 +1075,9 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) hw->mac.ops.get_device_caps(hw, &enforce_sfp); if (!(enforce_sfp & IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP) && !((hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0) || - (hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1))) { + (hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1) || + (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0) || + (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1))) { /* Make sure we're a supported PHY type */ if (hw->phy.type == ixgbe_phy_sfp_intel) { status = 0; @@ -1128,10 +1141,12 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw, * SR modules */ if (sfp_type == ixgbe_sfp_type_da_act_lmt_core0 || - sfp_type == ixgbe_sfp_type_1g_cu_core0) + sfp_type == ixgbe_sfp_type_1g_cu_core0 || + sfp_type == ixgbe_sfp_type_1g_sx_core0) sfp_type = ixgbe_sfp_type_srlr_core0; else if (sfp_type == ixgbe_sfp_type_da_act_lmt_core1 || - sfp_type == ixgbe_sfp_type_1g_cu_core1) + sfp_type == ixgbe_sfp_type_1g_cu_core1 || + sfp_type == ixgbe_sfp_type_1g_sx_core1) sfp_type = ixgbe_sfp_type_srlr_core1; /* Read offset to PHY init contents */ diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c 
b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c index dcebd128becf..3456d5617143 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c @@ -26,6 +26,7 @@ *******************************************************************************/ #include "ixgbe.h" #include <linux/export.h> +#include <linux/ptp_classify.h> /* * The 82599 and the X540 do not have true 64bit nanosecond scale @@ -100,9 +101,13 @@ #define NSECS_PER_SEC 1000000000ULL #endif +static struct sock_filter ptp_filter[] = { + PTP_FILTER +}; + /** * ixgbe_ptp_read - read raw cycle counter (to be used by time counter) - * @cc - the cyclecounter structure + * @cc: the cyclecounter structure * * this function reads the cyclecounter registers and is called by the * cyclecounter structure used to construct a ns counter from the @@ -123,8 +128,8 @@ static cycle_t ixgbe_ptp_read(const struct cyclecounter *cc) /** * ixgbe_ptp_adjfreq - * @ptp - the ptp clock structure - * @ppb - parts per billion adjustment from base + * @ptp: the ptp clock structure + * @ppb: parts per billion adjustment from base * * adjust the frequency of the ptp cycle counter by the * indicated ppb from the base frequency. @@ -170,8 +175,8 @@ static int ixgbe_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb) /** * ixgbe_ptp_adjtime - * @ptp - the ptp clock structure - * @delta - offset to adjust the cycle counter by + * @ptp: the ptp clock structure + * @delta: offset to adjust the cycle counter by * * adjust the timer by resetting the timecounter structure. */ @@ -198,8 +203,8 @@ static int ixgbe_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) /** * ixgbe_ptp_gettime - * @ptp - the ptp clock structure - * @ts - timespec structure to hold the current time value + * @ptp: the ptp clock structure + * @ts: timespec structure to hold the current time value * * read the timecounter and return the correct value on ns, * after converting it into a struct timespec. @@ -224,8 +229,8 @@ static int ixgbe_ptp_gettime(struct ptp_clock_info *ptp, struct timespec *ts) /** * ixgbe_ptp_settime - * @ptp - the ptp clock structure - * @ts - the timespec containing the new time for the cycle counter + * @ptp: the ptp clock structure + * @ts: the timespec containing the new time for the cycle counter * * reset the timecounter to use a new base value instead of the kernel * wall timer value. @@ -251,9 +256,9 @@ static int ixgbe_ptp_settime(struct ptp_clock_info *ptp, /** * ixgbe_ptp_enable - * @ptp - the ptp clock structure - * @rq - the requested feature to change - * @on - whether to enable or disable the feature + * @ptp: the ptp clock structure + * @rq: the requested feature to change + * @on: whether to enable or disable the feature * * enable (or disable) ancillary features of the phc subsystem. * our driver only supports the PPS feature on the X540 @@ -289,8 +294,8 @@ static int ixgbe_ptp_enable(struct ptp_clock_info *ptp, /** * ixgbe_ptp_check_pps_event - * @adapter - the private adapter structure - * @eicr - the interrupt cause register value + * @adapter: the private adapter structure + * @eicr: the interrupt cause register value * * This function is called by the interrupt routine when checking for * interrupts. It will check and handle a pps event. 
@@ -307,20 +312,21 @@ void ixgbe_ptp_check_pps_event(struct ixgbe_adapter *adapter, u32 eicr) !(adapter->flags2 & IXGBE_FLAG2_PTP_PPS_ENABLED)) return; - switch (hw->mac.type) { - case ixgbe_mac_X540: - if (eicr & IXGBE_EICR_TIMESYNC) + if (unlikely(eicr & IXGBE_EICR_TIMESYNC)) { + switch (hw->mac.type) { + case ixgbe_mac_X540: ptp_clock_event(adapter->ptp_clock, &event); - break; - default: - break; + break; + default: + break; + } } } /** * ixgbe_ptp_enable_sdp - * @hw - the hardware private structure - * @shift - the clock shift for calculating nanoseconds + * @hw: the hardware private structure + * @shift: the clock shift for calculating nanoseconds * * this function enables the clock out feature on the sdp0 for the * X540 device. It will create a 1second periodic output that can be @@ -393,7 +399,7 @@ static void ixgbe_ptp_enable_sdp(struct ixgbe_hw *hw, int shift) /** * ixgbe_ptp_disable_sdp - * @hw - the private hardware structure + * @hw: the private hardware structure * * this function disables the auxiliary SDP clock out feature */ @@ -425,6 +431,68 @@ void ixgbe_ptp_overflow_check(struct ixgbe_adapter *adapter) } /** + * ixgbe_ptp_match - determine if this skb matches a ptp packet + * @skb: pointer to the skb + * @hwtstamp: pointer to the hwtstamp_config to check + * + * Determine whether the skb should have been timestamped, assuming the + * hwtstamp was set via the hwtstamp ioctl. Returns non-zero when the packet + * should have a timestamp waiting in the registers, and 0 otherwise. + * + * V1 packets have to check the version type to determine whether they are + * correct. However, we can't directly access the data because it might be + * fragmented in the SKB, in paged memory. In order to work around this, we + * use skb_copy_bits which will properly copy the data whether it is in the + * paged memory fragments or not. We have to copy the IP header as well as the + * message type. 
+ */ +static int ixgbe_ptp_match(struct sk_buff *skb, int rx_filter) +{ + struct iphdr iph; + u8 msgtype; + unsigned int type, offset; + + if (rx_filter == HWTSTAMP_FILTER_NONE) + return 0; + + type = sk_run_filter(skb, ptp_filter); + + if (likely(rx_filter == HWTSTAMP_FILTER_PTP_V2_EVENT)) + return type & PTP_CLASS_V2; + + /* For the remaining cases actually check message type */ + switch (type) { + case PTP_CLASS_V1_IPV4: + skb_copy_bits(skb, OFF_IHL, &iph, sizeof(iph)); + offset = ETH_HLEN + (iph.ihl << 2) + UDP_HLEN + OFF_PTP_CONTROL; + break; + case PTP_CLASS_V1_IPV6: + offset = OFF_PTP6 + OFF_PTP_CONTROL; + break; + default: + /* other cases invalid or handled above */ + return 0; + } + + /* Make sure our buffer is long enough */ + if (skb->len < offset) + return 0; + + skb_copy_bits(skb, offset, &msgtype, sizeof(msgtype)); + + switch (rx_filter) { + case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: + return (msgtype == IXGBE_RXMTRL_V1_SYNC_MSG); + break; + case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: + return (msgtype == IXGBE_RXMTRL_V1_DELAY_REQ_MSG); + break; + default: + return 0; + } +} + +/** * ixgbe_ptp_tx_hwtstamp - utility function which checks for TX time stamp * @q_vector: structure containing interrupt and ring information * @skb: particular skb to send timestamp with @@ -473,6 +541,7 @@ void ixgbe_ptp_tx_hwtstamp(struct ixgbe_q_vector *q_vector, /** * ixgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp * @q_vector: structure containing interrupt and ring information + * @rx_desc: the rx descriptor * @skb: particular skb to send timestamp with * * if the timestamp is valid, we convert it into the timecounter ns @@ -480,6 +549,7 @@ void ixgbe_ptp_tx_hwtstamp(struct ixgbe_q_vector *q_vector, * is passed up the network stack */ void ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector, + union ixgbe_adv_rx_desc *rx_desc, struct sk_buff *skb) { struct ixgbe_adapter *adapter; @@ -497,21 +567,33 @@ void ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector, hw = &adapter->hw; tsyncrxctl = IXGBE_READ_REG(hw, IXGBE_TSYNCRXCTL); + + /* Check if we have a valid timestamp and make sure the skb should + * have been timestamped */ + if (likely(!(tsyncrxctl & IXGBE_TSYNCRXCTL_VALID) || + !ixgbe_ptp_match(skb, adapter->rx_hwtstamp_filter))) + return; + + /* + * Always read the registers, in order to clear a possible fault + * because of stagnant RX timestamp values for a packet that never + * reached the queue. + */ regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPL); regval |= (u64)IXGBE_READ_REG(hw, IXGBE_RXSTMPH) << 32; /* - * If this bit is set, then the RX registers contain the time stamp. No - * other packet will be time stamped until we read these registers, so - * read the registers to make them available again. Because only one - * packet can be time stamped at a time, we know that the register - * values must belong to this one here and therefore we don't need to - * compare any of the additional attributes stored for it. + * If the timestamp bit is set in the packet's descriptor, we know the + * timestamp belongs to this packet. No other packet can be + * timestamped until the registers for timestamping have been read. + * Therefore only one packet with this bit can be in the queue at a + * time, and the rx timestamp values that were in the registers belong + * to this packet. * * If nothing went wrong, then it should have a skb_shared_tx that we * can turn into a skb_shared_hwtstamps. 
*/ - if (!(tsyncrxctl & IXGBE_TSYNCRXCTL_VALID)) + if (unlikely(!ixgbe_test_staterr(rx_desc, IXGBE_RXDADV_STAT_TS))) return; spin_lock_irqsave(&adapter->tmreg_lock, flags); @@ -539,6 +621,11 @@ void ixgbe_ptp_rx_hwtstamp(struct ixgbe_q_vector *q_vector, * type has to be specified. Matching the kind of event packet is * not supported, with the exception of "all V2 events regardless of * level 2 or 4". + * + * Since hardware always timestamps Path delay packets when timestamping V2 + * packets, regardless of the type specified in the register, only use V2 + * Event mode. This more accurately tells the user what the hardware is going + * to do anyways. */ int ixgbe_ptp_hwtstamp_ioctl(struct ixgbe_adapter *adapter, struct ifreq *ifr, int cmd) @@ -582,41 +669,30 @@ int ixgbe_ptp_hwtstamp_ioctl(struct ixgbe_adapter *adapter, tsync_rx_mtrl = IXGBE_RXMTRL_V1_DELAY_REQ_MSG; is_l4 = true; break; + case HWTSTAMP_FILTER_PTP_V2_EVENT: + case HWTSTAMP_FILTER_PTP_V2_L2_EVENT: + case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: case HWTSTAMP_FILTER_PTP_V2_SYNC: case HWTSTAMP_FILTER_PTP_V2_L2_SYNC: case HWTSTAMP_FILTER_PTP_V2_L4_SYNC: - tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L2_L4_V2; - tsync_rx_mtrl = IXGBE_RXMTRL_V2_SYNC_MSG; - is_l2 = true; - is_l4 = true; - config.rx_filter = HWTSTAMP_FILTER_SOME; - break; case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ: case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ: - tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L2_L4_V2; - tsync_rx_mtrl = IXGBE_RXMTRL_V2_DELAY_REQ_MSG; - is_l2 = true; - is_l4 = true; - config.rx_filter = HWTSTAMP_FILTER_SOME; - break; - case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: - case HWTSTAMP_FILTER_PTP_V2_L2_EVENT: - case HWTSTAMP_FILTER_PTP_V2_EVENT: tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_EVENT_V2; - config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; is_l2 = true; is_l4 = true; + config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; break; case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: case HWTSTAMP_FILTER_ALL: default: /* - * register RXMTRL must be set, therefore it is not - * possible to time stamp both V1 Sync and Delay_Req messages - * and hardware does not support timestamping all packets - * => return error + * register RXMTRL must be set in order to do V1 packets, + * therefore it is not possible to time stamp both V1 Sync and + * Delay_Req messages and hardware does not support + * timestamping all packets => return error */ + config.rx_filter = HWTSTAMP_FILTER_NONE; return -ERANGE; } @@ -626,6 +702,9 @@ int ixgbe_ptp_hwtstamp_ioctl(struct ixgbe_adapter *adapter, return 0; } + /* Store filter value for later use */ + adapter->rx_hwtstamp_filter = config.rx_filter; + /* define ethertype filter for timestamped packets */ if (is_l2) IXGBE_WRITE_REG(hw, IXGBE_ETQF(3), @@ -690,7 +769,7 @@ int ixgbe_ptp_hwtstamp_ioctl(struct ixgbe_adapter *adapter, /** * ixgbe_ptp_start_cyclecounter - create the cycle counter from hw - * @adapter - pointer to the adapter structure + * @adapter: pointer to the adapter structure * * this function initializes the timecounter and cyclecounter * structures for use in generated a ns counter from the arbitrary @@ -826,7 +905,7 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter) /** * ixgbe_ptp_init - * @adapter - the ixgbe private adapter structure + * @adapter: the ixgbe private adapter structure * * This function performs the required steps for enabling ptp * support. 
If ptp support has already been loaded it simply calls the @@ -870,6 +949,10 @@ void ixgbe_ptp_init(struct ixgbe_adapter *adapter) return; } + /* initialize the ptp filter */ + if (ptp_filter_init(ptp_filter, ARRAY_SIZE(ptp_filter))) + e_dev_warn("ptp_filter_init failed\n"); + spin_lock_init(&adapter->tmreg_lock); ixgbe_ptp_start_cyclecounter(adapter); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index 2d971d18696e..4fea8716ab64 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -44,50 +44,15 @@ #include "ixgbe_sriov.h" #ifdef CONFIG_PCI_IOV -static int ixgbe_find_enabled_vfs(struct ixgbe_adapter *adapter) -{ - struct pci_dev *pdev = adapter->pdev; - struct pci_dev *pvfdev; - u16 vf_devfn = 0; - int device_id; - int vfs_found = 0; - - switch (adapter->hw.mac.type) { - case ixgbe_mac_82599EB: - device_id = IXGBE_DEV_ID_82599_VF; - break; - case ixgbe_mac_X540: - device_id = IXGBE_DEV_ID_X540_VF; - break; - default: - device_id = 0; - break; - } - - vf_devfn = pdev->devfn + 0x80; - pvfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, device_id, NULL); - while (pvfdev) { - if (pvfdev->devfn == vf_devfn && - (pvfdev->bus->number >= pdev->bus->number)) - vfs_found++; - vf_devfn += 2; - pvfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, - device_id, pvfdev); - } - - return vfs_found; -} - void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, const struct ixgbe_info *ii) { struct ixgbe_hw *hw = &adapter->hw; - int err = 0; int num_vf_macvlans, i; struct vf_macvlans *mv_list; int pre_existing_vfs = 0; - pre_existing_vfs = ixgbe_find_enabled_vfs(adapter); + pre_existing_vfs = pci_num_vf(adapter->pdev); if (!pre_existing_vfs && !adapter->num_vfs) return; @@ -106,16 +71,33 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, "enabled for this device - Please reload all " "VF drivers to avoid spoofed packet errors\n"); } else { + int err; + /* + * The 82599 supports up to 64 VFs per physical function + * but this implementation limits allocation to 63 so that + * basic networking resources are still available to the + * physical function. If the user requests greater than + * 63 VFs then it is an error - reset to default of zero.
+ */ + adapter->num_vfs = min_t(unsigned int, adapter->num_vfs, 63); + err = pci_enable_sriov(adapter->pdev, adapter->num_vfs); + if (err) { + e_err(probe, "Failed to enable PCI sriov: %d\n", err); + adapter->num_vfs = 0; + return; + } } - if (err) { - e_err(probe, "Failed to enable PCI sriov: %d\n", err); - goto err_novfs; - } - adapter->flags |= IXGBE_FLAG_SRIOV_ENABLED; + adapter->flags |= IXGBE_FLAG_SRIOV_ENABLED; e_info(probe, "SR-IOV enabled with %d VFs\n", adapter->num_vfs); + /* Enable VMDq flag so device will be set in VM mode */ + adapter->flags |= IXGBE_FLAG_VMDQ_ENABLED; + if (!adapter->ring_feature[RING_F_VMDQ].limit) + adapter->ring_feature[RING_F_VMDQ].limit = 1; + adapter->ring_feature[RING_F_VMDQ].offset = adapter->num_vfs; + num_vf_macvlans = hw->mac.num_rar_entries - (IXGBE_MAX_PF_MACVLANS + 1 + adapter->num_vfs); @@ -146,12 +128,39 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, * and memory allocated set up the mailbox parameters */ ixgbe_init_mbx_params_pf(hw); - memcpy(&hw->mbx.ops, ii->mbx_ops, - sizeof(hw->mbx.ops)); + memcpy(&hw->mbx.ops, ii->mbx_ops, sizeof(hw->mbx.ops)); + + /* limit traffic classes based on VFs enabled */ + if ((adapter->hw.mac.type == ixgbe_mac_82599EB) && + (adapter->num_vfs < 16)) { + adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS; + adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS; + } else if (adapter->num_vfs < 32) { + adapter->dcb_cfg.num_tcs.pg_tcs = 4; + adapter->dcb_cfg.num_tcs.pfc_tcs = 4; + } else { + adapter->dcb_cfg.num_tcs.pg_tcs = 1; + adapter->dcb_cfg.num_tcs.pfc_tcs = 1; + } + + /* We do not support RSS w/ SR-IOV */ + adapter->ring_feature[RING_F_RSS].limit = 1; /* Disable RSC when in SR-IOV mode */ adapter->flags2 &= ~(IXGBE_FLAG2_RSC_CAPABLE | IXGBE_FLAG2_RSC_ENABLED); + +#ifdef IXGBE_FCOE + /* + * When SR-IOV is enabled 82599 cannot support jumbo frames + * so we must disable FCoE because we cannot support FCoE MTU.
+ */ + if (adapter->hw.mac.type == ixgbe_mac_82599EB) + adapter->flags &= ~(IXGBE_FLAG_FCOE_ENABLED | + IXGBE_FLAG_FCOE_CAPABLE); +#endif + + /* enable spoof checking for all VFs */ for (i = 0; i < adapter->num_vfs; i++) adapter->vfinfo[i].spoofchk_enabled = true; return; @@ -160,31 +169,80 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, /* Oh oh */ e_err(probe, "Unable to allocate memory for VF Data Storage - " "SRIOV disabled\n"); - pci_disable_sriov(adapter->pdev); + ixgbe_disable_sriov(adapter); +} -err_novfs: - adapter->flags &= ~IXGBE_FLAG_SRIOV_ENABLED; - adapter->num_vfs = 0; +static bool ixgbe_vfs_are_assigned(struct ixgbe_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct pci_dev *vfdev; + int dev_id; + + switch (adapter->hw.mac.type) { + case ixgbe_mac_82599EB: + dev_id = IXGBE_DEV_ID_82599_VF; + break; + case ixgbe_mac_X540: + dev_id = IXGBE_DEV_ID_X540_VF; + break; + default: + return false; + } + + /* loop through all the VFs to see if we own any that are assigned */ + vfdev = pci_get_device(PCI_VENDOR_ID_INTEL, dev_id, NULL); + while (vfdev) { + /* if we don't own it we don't care */ + if (vfdev->is_virtfn && vfdev->physfn == pdev) { + /* if it is assigned we cannot release it */ + if (vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) + return true; + } + + vfdev = pci_get_device(PCI_VENDOR_ID_INTEL, dev_id, vfdev); + } + + return false; } -#endif /* #ifdef CONFIG_PCI_IOV */ +#endif /* #ifdef CONFIG_PCI_IOV */ void ixgbe_disable_sriov(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; - u32 gcr; u32 gpie; u32 vmdctl; - int i; + + /* set num VFs to 0 to prevent access to vfinfo */ + adapter->num_vfs = 0; + + /* free VF control structures */ + kfree(adapter->vfinfo); + adapter->vfinfo = NULL; + + /* free macvlan list */ + kfree(adapter->mv_list); + adapter->mv_list = NULL; + + /* if SR-IOV is already disabled then there is nothing to do */ + if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) + return; #ifdef CONFIG_PCI_IOV + /* + * If our VFs are assigned we cannot shut down SR-IOV + * without causing issues, so just leave the hardware + * available but disabled + */ + if (ixgbe_vfs_are_assigned(adapter)) { + e_dev_warn("Unloading driver while VFs are assigned - VFs will not be deallocated\n"); + return; + } /* disable iov and allow time for transactions to clear */ pci_disable_sriov(adapter->pdev); #endif /* turn off device IOV mode */ - gcr = IXGBE_READ_REG(hw, IXGBE_GCR_EXT); - gcr &= ~(IXGBE_GCR_EXT_SRIOV); - IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr); + IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, 0); gpie = IXGBE_READ_REG(hw, IXGBE_GPIE); gpie &= ~IXGBE_GPIE_VTMODE_MASK; IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie); @@ -195,19 +253,14 @@ void ixgbe_disable_sriov(struct ixgbe_adapter *adapter) IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, vmdctl); IXGBE_WRITE_FLUSH(hw); + /* Disable VMDq flag so device will be set in VM mode */ + if (adapter->ring_feature[RING_F_VMDQ].limit == 1) + adapter->flags &= ~IXGBE_FLAG_VMDQ_ENABLED; + adapter->ring_feature[RING_F_VMDQ].offset = 0; + /* take a breather then clean up driver data */ msleep(100); - /* Release reference to VF devices */ - for (i = 0; i < adapter->num_vfs; i++) { - if (adapter->vfinfo[i].vfdev) - pci_dev_put(adapter->vfinfo[i].vfdev); - } - kfree(adapter->vfinfo); - kfree(adapter->mv_list); - adapter->vfinfo = NULL; - - adapter->num_vfs = 0; adapter->flags &= ~IXGBE_FLAG_SRIOV_ENABLED; } @@ -441,33 +494,16 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter, return 0; } -int 
ixgbe_check_vf_assignment(struct ixgbe_adapter *adapter) -{ -#ifdef CONFIG_PCI_IOV - int i; - for (i = 0; i < adapter->num_vfs; i++) { - if (adapter->vfinfo[i].vfdev->dev_flags & - PCI_DEV_FLAGS_ASSIGNED) - return true; - } -#endif - return false; -} - int ixgbe_vf_configuration(struct pci_dev *pdev, unsigned int event_mask) { unsigned char vf_mac_addr[6]; struct ixgbe_adapter *adapter = pci_get_drvdata(pdev); unsigned int vfn = (event_mask & 0x3f); - struct pci_dev *pvfdev; - unsigned int device_id; - u16 thisvf_devfn = (pdev->devfn + 0x80 + (vfn << 1)) | - (pdev->devfn & 1); bool enable = ((event_mask & 0x10000000U) != 0); if (enable) { - random_ether_addr(vf_mac_addr); + eth_random_addr(vf_mac_addr); e_info(probe, "IOV: VF %d is enabled MAC %pM\n", vfn, vf_mac_addr); /* @@ -475,31 +511,6 @@ int ixgbe_vf_configuration(struct pci_dev *pdev, unsigned int event_mask) * for it later. */ memcpy(adapter->vfinfo[vfn].vf_mac_addresses, vf_mac_addr, 6); - - switch (adapter->hw.mac.type) { - case ixgbe_mac_82599EB: - device_id = IXGBE_DEV_ID_82599_VF; - break; - case ixgbe_mac_X540: - device_id = IXGBE_DEV_ID_X540_VF; - break; - default: - device_id = 0; - break; - } - - pvfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, device_id, NULL); - while (pvfdev) { - if (pvfdev->devfn == thisvf_devfn) - break; - pvfdev = pci_get_device(IXGBE_INTEL_VENDOR_ID, - device_id, pvfdev); - } - if (pvfdev) - adapter->vfinfo[vfn].vfdev = pvfdev; - else - e_err(drv, "Couldn't find pci dev ptr for VF %4.4x\n", - thisvf_devfn); } return 0; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h index 2ab38d5fda92..1be1d30e4e78 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h @@ -42,7 +42,6 @@ int ixgbe_ndo_get_vf_config(struct net_device *netdev, int vf, struct ifla_vf_info *ivi); void ixgbe_check_vf_rate_limit(struct ixgbe_adapter *adapter); void ixgbe_disable_sriov(struct ixgbe_adapter *adapter); -int ixgbe_check_vf_assignment(struct ixgbe_adapter *adapter); #ifdef CONFIG_PCI_IOV void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, const struct ixgbe_info *ii); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.c index 1d80b1cefa6a..16ddf14e8ba4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.c @@ -37,7 +37,6 @@ #include <linux/netdevice.h> #include <linux/hwmon.h> -#ifdef CONFIG_IXGBE_HWMON /* hwmon callback functions */ static ssize_t ixgbe_hwmon_show_location(struct device *dev, struct device_attribute *attr, @@ -96,11 +95,11 @@ static ssize_t ixgbe_hwmon_show_maxopthresh(struct device *dev, return sprintf(buf, "%u\n", value); } -/* +/** * ixgbe_add_hwmon_attr - Create hwmon attr table for a hwmon sysfs file. 
- * @ adapter: pointer to the adapter structure - * @ offset: offset in the eeprom sensor data table - * @ type: type of sensor data to display + * @adapter: pointer to the adapter structure + * @offset: offset in the eeprom sensor data table + * @type: type of sensor data to display * * For each file we want in hwmon's sysfs interface we need a device_attribute * This is included in our hwmon_attr struct that contains the references to @@ -241,5 +240,4 @@ err: exit: return rc; } -#endif /* CONFIG_IXGBE_HWMON */ diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h index 204848d2448c..400f86a31174 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h @@ -32,9 +32,6 @@ #include <linux/mdio.h> #include <linux/netdevice.h> -/* Vendor ID */ -#define IXGBE_INTEL_VENDOR_ID 0x8086 - /* Device IDs */ #define IXGBE_DEV_ID_82598 0x10B6 #define IXGBE_DEV_ID_82598_BX 0x1508 @@ -57,6 +54,7 @@ #define IXGBE_DEV_ID_82599_BACKPLANE_FCOE 0x152a #define IXGBE_DEV_ID_82599_SFP_FCOE 0x1529 #define IXGBE_SUBDEV_ID_82599_SFP 0x11A9 +#define IXGBE_SUBDEV_ID_82599_RNDC 0x1F72 #define IXGBE_SUBDEV_ID_82599_560FLR 0x17D0 #define IXGBE_DEV_ID_82599_SFP_EM 0x1507 #define IXGBE_DEV_ID_82599_SFP_SF2 0x154D @@ -1452,6 +1450,7 @@ enum { #define IXGBE_ETQF_1588 0x40000000 /* bit 30 */ #define IXGBE_ETQF_FILTER_EN 0x80000000 /* bit 31 */ #define IXGBE_ETQF_POOL_ENABLE (1 << 26) /* bit 26 */ +#define IXGBE_ETQF_POOL_SHIFT 20 #define IXGBE_ETQS_RX_QUEUE 0x007F0000 /* bits 22:16 */ #define IXGBE_ETQS_RX_QUEUE_SHIFT 16 @@ -2419,7 +2418,7 @@ typedef u32 ixgbe_physical_layer; */ /* BitTimes (BT) conversion */ -#define IXGBE_BT2KB(BT) ((BT + 1023) / (8 * 1024)) +#define IXGBE_BT2KB(BT) ((BT + (8 * 1024 - 1)) / (8 * 1024)) #define IXGBE_B2BT(BT) (BT * 8) /* Calculate Delay to respond to PFC */ @@ -2450,24 +2449,31 @@ typedef u32 ixgbe_physical_layer; #define IXGBE_PCI_DELAY 10000 /* Calculate X540 delay value in bit times */ -#define IXGBE_FILL_RATE (36 / 25) - -#define IXGBE_DV_X540(LINK, TC) (IXGBE_FILL_RATE * \ - (IXGBE_B2BT(LINK) + IXGBE_PFC_D + \ - (2 * IXGBE_CABLE_DC) + \ - (2 * IXGBE_ID_X540) + \ - IXGBE_HD + IXGBE_B2BT(TC))) +#define IXGBE_DV_X540(_max_frame_link, _max_frame_tc) \ + ((36 * \ + (IXGBE_B2BT(_max_frame_link) + \ + IXGBE_PFC_D + \ + (2 * IXGBE_CABLE_DC) + \ + (2 * IXGBE_ID_X540) + \ + IXGBE_HD) / 25 + 1) + \ + 2 * IXGBE_B2BT(_max_frame_tc)) /* Calculate 82599, 82598 delay value in bit times */ -#define IXGBE_DV(LINK, TC) (IXGBE_FILL_RATE * \ - (IXGBE_B2BT(LINK) + IXGBE_PFC_D + \ - (2 * IXGBE_CABLE_DC) + (2 * IXGBE_ID) + \ - IXGBE_HD + IXGBE_B2BT(TC))) +#define IXGBE_DV(_max_frame_link, _max_frame_tc) \ + ((36 * \ + (IXGBE_B2BT(_max_frame_link) + \ + IXGBE_PFC_D + \ + (2 * IXGBE_CABLE_DC) + \ + (2 * IXGBE_ID) + \ + IXGBE_HD) / 25 + 1) + \ + 2 * IXGBE_B2BT(_max_frame_tc)) /* Calculate low threshold delay values */ -#define IXGBE_LOW_DV_X540(TC) (2 * IXGBE_B2BT(TC) + \ - (IXGBE_FILL_RATE * IXGBE_PCI_DELAY)) -#define IXGBE_LOW_DV(TC) (2 * IXGBE_LOW_DV_X540(TC)) +#define IXGBE_LOW_DV_X540(_max_frame_tc) \ + (2 * IXGBE_B2BT(_max_frame_tc) + \ + (36 * IXGBE_PCI_DELAY / 25) + 1) +#define IXGBE_LOW_DV(_max_frame_tc) \ + (2 * IXGBE_LOW_DV_X540(_max_frame_tc)) /* Software ATR hash keys */ #define IXGBE_ATR_BUCKET_HASH_KEY 0x3DAD14E2 @@ -2597,6 +2603,8 @@ enum ixgbe_sfp_type { ixgbe_sfp_type_da_act_lmt_core1 = 8, ixgbe_sfp_type_1g_cu_core0 = 9, ixgbe_sfp_type_1g_cu_core1 = 10, + ixgbe_sfp_type_1g_sx_core0 = 11, + 
ixgbe_sfp_type_1g_sx_core1 = 12, ixgbe_sfp_type_not_present = 0xFFFE, ixgbe_sfp_type_unknown = 0xFFFF }; @@ -2837,6 +2845,7 @@ struct ixgbe_mac_operations { s32 (*set_rar)(struct ixgbe_hw *, u32, u8 *, u32, u32); s32 (*clear_rar)(struct ixgbe_hw *, u32); s32 (*set_vmdq)(struct ixgbe_hw *, u32, u32); + s32 (*set_vmdq_san_mac)(struct ixgbe_hw *, u32); s32 (*clear_vmdq)(struct ixgbe_hw *, u32, u32); s32 (*init_rx_addrs)(struct ixgbe_hw *); s32 (*update_mc_addr_list)(struct ixgbe_hw *, struct net_device *); @@ -2912,6 +2921,7 @@ struct ixgbe_mac_info { bool orig_link_settings_stored; bool autotry_restart; u8 flags; + u8 san_mac_rar_index; struct ixgbe_thermal_sensor_data thermal_sensor_data; }; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c index f90ec078ece2..de4da5219b71 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c @@ -156,6 +156,9 @@ mac_reset_top: hw->mac.ops.set_rar(hw, hw->mac.num_rar_entries - 1, hw->mac.san_addr, 0, IXGBE_RAH_AV); + /* Save the SAN MAC RAR index */ + hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1; + /* Reserve the last RAR for the SAN MAC address */ hw->mac.num_rar_entries--; } @@ -832,6 +835,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = { .set_rar = &ixgbe_set_rar_generic, .clear_rar = &ixgbe_clear_rar_generic, .set_vmdq = &ixgbe_set_vmdq_generic, + .set_vmdq_san_mac = &ixgbe_set_vmdq_san_mac_generic, .clear_vmdq = &ixgbe_clear_vmdq_generic, .init_rx_addrs = &ixgbe_init_rx_addrs_generic, .update_mc_addr_list = &ixgbe_update_mc_addr_list_generic, diff --git a/drivers/net/ethernet/intel/ixgbevf/defines.h b/drivers/net/ethernet/intel/ixgbevf/defines.h index e09a6cc633bb..418af827b230 100644 --- a/drivers/net/ethernet/intel/ixgbevf/defines.h +++ b/drivers/net/ethernet/intel/ixgbevf/defines.h @@ -251,6 +251,7 @@ struct ixgbe_adv_tx_context_desc { #define IXGBE_ADVTXD_TUCMD_L4T_TCP 0x00000800 /* L4 Packet TYPE of TCP */ #define IXGBE_ADVTXD_TUCMD_L4T_SCTP 0x00001000 /* L4 Packet TYPE of SCTP */ #define IXGBE_ADVTXD_IDX_SHIFT 4 /* Adv desc Index shift */ +#define IXGBE_ADVTXD_CC 0x00000080 /* Check Context */ #define IXGBE_ADVTXD_POPTS_SHIFT 8 /* Adv desc POPTS shift */ #define IXGBE_ADVTXD_POPTS_IXSM (IXGBE_TXD_POPTS_IXSM << \ IXGBE_ADVTXD_POPTS_SHIFT) @@ -264,32 +265,9 @@ struct ixgbe_adv_tx_context_desc { /* Interrupt register bitmasks */ -/* Extended Interrupt Cause Read */ -#define IXGBE_EICR_RTX_QUEUE 0x0000FFFF /* RTx Queue Interrupt */ -#define IXGBE_EICR_MAILBOX 0x00080000 /* VF to PF Mailbox Interrupt */ -#define IXGBE_EICR_OTHER 0x80000000 /* Interrupt Cause Active */ - -/* Extended Interrupt Cause Set */ -#define IXGBE_EICS_RTX_QUEUE IXGBE_EICR_RTX_QUEUE /* RTx Queue Interrupt */ -#define IXGBE_EICS_MAILBOX IXGBE_EICR_MAILBOX /* VF to PF Mailbox Int */ -#define IXGBE_EICS_OTHER IXGBE_EICR_OTHER /* INT Cause Active */ - -/* Extended Interrupt Mask Set */ -#define IXGBE_EIMS_RTX_QUEUE IXGBE_EICR_RTX_QUEUE /* RTx Queue Interrupt */ -#define IXGBE_EIMS_MAILBOX IXGBE_EICR_MAILBOX /* VF to PF Mailbox Int */ -#define IXGBE_EIMS_OTHER IXGBE_EICR_OTHER /* INT Cause Active */ - -/* Extended Interrupt Mask Clear */ -#define IXGBE_EIMC_RTX_QUEUE IXGBE_EICR_RTX_QUEUE /* RTx Queue Interrupt */ -#define IXGBE_EIMC_MAILBOX IXGBE_EICR_MAILBOX /* VF to PF Mailbox Int */ -#define IXGBE_EIMC_OTHER IXGBE_EICR_OTHER /* INT Cause Active */ - -#define IXGBE_EIMS_ENABLE_MASK ( \ - IXGBE_EIMS_RTX_QUEUE | \ - IXGBE_EIMS_MAILBOX | \ - 
IXGBE_EIMS_OTHER) - #define IXGBE_EITR_CNT_WDIS 0x80000000 +#define IXGBE_MAX_EITR 0x00000FF8 +#define IXGBE_MIN_EITR 8 /* Error Codes */ #define IXGBE_ERR_INVALID_MAC_ADDR -1 diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c index e8dddf572d38..8f2070439b59 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c +++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c @@ -43,7 +43,6 @@ #define IXGBE_ALL_RAR_ENTRIES 16 -#ifdef ETHTOOL_GSTATS struct ixgbe_stats { char stat_string[ETH_GSTRING_LEN]; int sizeof_stat; @@ -75,21 +74,17 @@ static const struct ixgbe_stats ixgbe_gstrings_stats[] = { zero_base)}, {"tx_csum_offload_ctxt", IXGBEVF_STAT(hw_csum_tx_good, zero_base, zero_base)}, - {"rx_header_split", IXGBEVF_STAT(rx_hdr_split, zero_base, zero_base)}, }; #define IXGBE_QUEUE_STATS_LEN 0 #define IXGBE_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbe_gstrings_stats) #define IXGBEVF_STATS_LEN (IXGBE_GLOBAL_STATS_LEN + IXGBE_QUEUE_STATS_LEN) -#endif /* ETHTOOL_GSTATS */ -#ifdef ETHTOOL_TEST static const char ixgbe_gstrings_test[][ETH_GSTRING_LEN] = { "Register test (offline)", "Link test (on/offline)" }; #define IXGBE_TEST_LEN (sizeof(ixgbe_gstrings_test) / ETH_GSTRING_LEN) -#endif /* ETHTOOL_TEST */ static int ixgbevf_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) @@ -289,13 +284,11 @@ static void ixgbevf_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring) { struct ixgbevf_adapter *adapter = netdev_priv(netdev); - struct ixgbevf_ring *tx_ring = adapter->tx_ring; - struct ixgbevf_ring *rx_ring = adapter->rx_ring; ring->rx_max_pending = IXGBEVF_MAX_RXD; ring->tx_max_pending = IXGBEVF_MAX_TXD; - ring->rx_pending = rx_ring->count; - ring->tx_pending = tx_ring->count; + ring->rx_pending = adapter->rx_ring_count; + ring->tx_pending = adapter->tx_ring_count; } static int ixgbevf_set_ringparam(struct net_device *netdev, @@ -303,33 +296,28 @@ static int ixgbevf_set_ringparam(struct net_device *netdev, { struct ixgbevf_adapter *adapter = netdev_priv(netdev); struct ixgbevf_ring *tx_ring = NULL, *rx_ring = NULL; - int i, err = 0; u32 new_rx_count, new_tx_count; + int i, err = 0; if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending)) return -EINVAL; - new_rx_count = max(ring->rx_pending, (u32)IXGBEVF_MIN_RXD); - new_rx_count = min(new_rx_count, (u32)IXGBEVF_MAX_RXD); - new_rx_count = ALIGN(new_rx_count, IXGBE_REQ_RX_DESCRIPTOR_MULTIPLE); - - new_tx_count = max(ring->tx_pending, (u32)IXGBEVF_MIN_TXD); - new_tx_count = min(new_tx_count, (u32)IXGBEVF_MAX_TXD); + new_tx_count = max_t(u32, ring->tx_pending, IXGBEVF_MIN_TXD); + new_tx_count = min_t(u32, new_tx_count, IXGBEVF_MAX_TXD); new_tx_count = ALIGN(new_tx_count, IXGBE_REQ_TX_DESCRIPTOR_MULTIPLE); - if ((new_tx_count == adapter->tx_ring->count) && - (new_rx_count == adapter->rx_ring->count)) { - /* nothing to do */ + new_rx_count = max_t(u32, ring->rx_pending, IXGBEVF_MIN_RXD); + new_rx_count = min_t(u32, new_rx_count, IXGBEVF_MAX_RXD); + new_rx_count = ALIGN(new_rx_count, IXGBE_REQ_RX_DESCRIPTOR_MULTIPLE); + + /* if nothing to do return success */ + if ((new_tx_count == adapter->tx_ring_count) && + (new_rx_count == adapter->rx_ring_count)) return 0; - } while (test_and_set_bit(__IXGBEVF_RESETTING, &adapter->state)) - msleep(1); + usleep_range(1000, 2000); - /* - * If the adapter isn't up and running then just set the - * new parameters and scurry for the exits. 
- */ if (!netif_running(adapter->netdev)) { for (i = 0; i < adapter->num_tx_queues; i++) adapter->tx_ring[i].count = new_tx_count; @@ -340,82 +328,98 @@ static int ixgbevf_set_ringparam(struct net_device *netdev, goto clear_reset; } - tx_ring = kcalloc(adapter->num_tx_queues, - sizeof(struct ixgbevf_ring), GFP_KERNEL); - if (!tx_ring) { - err = -ENOMEM; - goto clear_reset; - } - - rx_ring = kcalloc(adapter->num_rx_queues, - sizeof(struct ixgbevf_ring), GFP_KERNEL); - if (!rx_ring) { - err = -ENOMEM; - goto err_rx_setup; - } - - ixgbevf_down(adapter); + if (new_tx_count != adapter->tx_ring_count) { + tx_ring = vmalloc(adapter->num_tx_queues * sizeof(*tx_ring)); + if (!tx_ring) { + err = -ENOMEM; + goto clear_reset; + } - memcpy(tx_ring, adapter->tx_ring, - adapter->num_tx_queues * sizeof(struct ixgbevf_ring)); - for (i = 0; i < adapter->num_tx_queues; i++) { - tx_ring[i].count = new_tx_count; - err = ixgbevf_setup_tx_resources(adapter, &tx_ring[i]); - if (err) { + for (i = 0; i < adapter->num_tx_queues; i++) { + /* clone ring and setup updated count */ + tx_ring[i] = adapter->tx_ring[i]; + tx_ring[i].count = new_tx_count; + err = ixgbevf_setup_tx_resources(adapter, &tx_ring[i]); + if (!err) + continue; while (i) { i--; - ixgbevf_free_tx_resources(adapter, - &tx_ring[i]); + ixgbevf_free_tx_resources(adapter, &tx_ring[i]); } - goto err_tx_ring_setup; + + vfree(tx_ring); + tx_ring = NULL; + + goto clear_reset; } - tx_ring[i].v_idx = adapter->tx_ring[i].v_idx; } - memcpy(rx_ring, adapter->rx_ring, - adapter->num_rx_queues * sizeof(struct ixgbevf_ring)); - for (i = 0; i < adapter->num_rx_queues; i++) { - rx_ring[i].count = new_rx_count; - err = ixgbevf_setup_rx_resources(adapter, &rx_ring[i]); - if (err) { + if (new_rx_count != adapter->rx_ring_count) { + rx_ring = vmalloc(adapter->num_rx_queues * sizeof(*rx_ring)); + if (!rx_ring) { + err = -ENOMEM; + goto clear_reset; + } + + for (i = 0; i < adapter->num_rx_queues; i++) { + /* clone ring and setup updated count */ + rx_ring[i] = adapter->rx_ring[i]; + rx_ring[i].count = new_rx_count; + err = ixgbevf_setup_rx_resources(adapter, &rx_ring[i]); + if (!err) + continue; while (i) { i--; - ixgbevf_free_rx_resources(adapter, - &rx_ring[i]); + ixgbevf_free_rx_resources(adapter, &rx_ring[i]); } - goto err_rx_ring_setup; + + vfree(rx_ring); + rx_ring = NULL; + + goto clear_reset; } - rx_ring[i].v_idx = adapter->rx_ring[i].v_idx; } - /* - * Only switch to new rings if all the prior allocations - * and ring setups have succeeded. - */ - kfree(adapter->tx_ring); - adapter->tx_ring = tx_ring; - adapter->tx_ring_count = new_tx_count; - - kfree(adapter->rx_ring); - adapter->rx_ring = rx_ring; - adapter->rx_ring_count = new_rx_count; + /* bring interface down to prepare for update */ + ixgbevf_down(adapter); - /* success! 
*/ - ixgbevf_up(adapter); + /* Tx */ + if (tx_ring) { + for (i = 0; i < adapter->num_tx_queues; i++) { + ixgbevf_free_tx_resources(adapter, + &adapter->tx_ring[i]); + adapter->tx_ring[i] = tx_ring[i]; + } + adapter->tx_ring_count = new_tx_count; - goto clear_reset; + vfree(tx_ring); + tx_ring = NULL; + } -err_rx_ring_setup: - for(i = 0; i < adapter->num_tx_queues; i++) - ixgbevf_free_tx_resources(adapter, &tx_ring[i]); + /* Rx */ + if (rx_ring) { + for (i = 0; i < adapter->num_rx_queues; i++) { + ixgbevf_free_rx_resources(adapter, + &adapter->rx_ring[i]); + adapter->rx_ring[i] = rx_ring[i]; + } + adapter->rx_ring_count = new_rx_count; -err_tx_ring_setup: - kfree(rx_ring); + vfree(rx_ring); + rx_ring = NULL; + } -err_rx_setup: - kfree(tx_ring); + /* restore interface using new values */ + ixgbevf_up(adapter); clear_reset: + /* free Tx resources if Rx error is encountered */ + if (tx_ring) { + for (i = 0; i < adapter->num_tx_queues; i++) + ixgbevf_free_tx_resources(adapter, &tx_ring[i]); + vfree(tx_ring); + } + clear_bit(__IXGBEVF_RESETTING, &adapter->state); return err; } @@ -674,10 +678,8 @@ static int ixgbevf_nway_reset(struct net_device *netdev) { struct ixgbevf_adapter *adapter = netdev_priv(netdev); - if (netif_running(netdev)) { - if (!adapter->dev_closed) - ixgbevf_reinit_locked(adapter); - } + if (netif_running(netdev)) + ixgbevf_reinit_locked(adapter); return 0; } diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h index 0a1b99240d43..98cadb0c4dab 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h @@ -52,12 +52,12 @@ struct ixgbevf_tx_buffer { struct ixgbevf_rx_buffer { struct sk_buff *skb; dma_addr_t dma; - struct page *page; - dma_addr_t page_dma; - unsigned int page_offset; }; struct ixgbevf_ring { + struct ixgbevf_ring *next; + struct net_device *netdev; + struct device *dev; struct ixgbevf_adapter *adapter; /* backlink */ void *desc; /* descriptor ring memory */ dma_addr_t dma; /* phys. address of descriptor ring */ @@ -83,29 +83,9 @@ struct ixgbevf_ring { * offset associated with this ring, which is different * for DCB and RSS modes */ -#if defined(CONFIG_DCA) || defined(CONFIG_DCA_MODULE) - /* cpu for tx queue */ - int cpu; -#endif - - u64 v_idx; /* maps directly to the index for this ring in the hardware - * vector array, can also be used for finding the bit in EICR - * and friends that represents the vector for this ring */ - - u16 work_limit; /* max work per interrupt */ u16 rx_buf_len; }; -enum ixgbevf_ring_f_enum { - RING_F_NONE = 0, - RING_F_ARRAY_SIZE /* must be last in enum set */ -}; - -struct ixgbevf_ring_feature { - int indices; - int mask; -}; - /* How many Rx Buffers do we bundle into one write to the hardware ? 
*/ #define IXGBEVF_RX_BUFFER_WRITE 16 /* Must be power of 2 */ @@ -120,8 +100,6 @@ struct ixgbevf_ring_feature { #define IXGBEVF_MIN_RXD 64 /* Supported Rx Buffer Sizes */ -#define IXGBEVF_RXBUFFER_64 64 /* Used for packet split */ -#define IXGBEVF_RXBUFFER_128 128 /* Used for packet split */ #define IXGBEVF_RXBUFFER_256 256 /* Used for packet split */ #define IXGBEVF_RXBUFFER_2048 2048 #define IXGBEVF_MAX_RXBUFFER 16384 /* largest size for single descriptor */ @@ -140,22 +118,42 @@ struct ixgbevf_ring_feature { #define IXGBE_TX_FLAGS_VLAN_PRIO_MASK 0x0000e000 #define IXGBE_TX_FLAGS_VLAN_SHIFT 16 +struct ixgbevf_ring_container { + struct ixgbevf_ring *ring; /* pointer to linked list of rings */ + unsigned int total_bytes; /* total bytes processed this int */ + unsigned int total_packets; /* total packets processed this int */ + u8 count; /* total number of rings in vector */ + u8 itr; /* current ITR setting for ring */ +}; + +/* iterator for handling rings in ring container */ +#define ixgbevf_for_each_ring(pos, head) \ + for (pos = (head).ring; pos != NULL; pos = pos->next) + /* MAX_MSIX_Q_VECTORS of these are allocated, * but we only use one per queue-specific vector. */ struct ixgbevf_q_vector { struct ixgbevf_adapter *adapter; + u16 v_idx; /* index of q_vector within array, also used for + * finding the bit in EICR and friends that + * represents the vector for this ring */ + u16 itr; /* Interrupt throttle rate written to EITR */ struct napi_struct napi; - DECLARE_BITMAP(rxr_idx, MAX_RX_QUEUES); /* Rx ring indices */ - DECLARE_BITMAP(txr_idx, MAX_TX_QUEUES); /* Tx ring indices */ - u8 rxr_count; /* Rx ring count assigned to this vector */ - u8 txr_count; /* Tx ring count assigned to this vector */ - u8 tx_itr; - u8 rx_itr; - u32 eitr; - int v_idx; /* vector index in list */ + struct ixgbevf_ring_container rx, tx; + char name[IFNAMSIZ + 9]; }; +/* + * microsecond values for various ITR rates shifted by 2 to fit itr register + * with the first 3 bits reserved 0 + */ +#define IXGBE_MIN_RSC_ITR 24 +#define IXGBE_100K_ITR 40 +#define IXGBE_20K_ITR 200 +#define IXGBE_10K_ITR 400 +#define IXGBE_8K_ITR 500 + /* Helper macros to switch between ints/sec and what the register uses. * And yes, it's the same math going both ways. The lowest value * supported by all of the ixgbe hardware is 8. @@ -168,12 +166,12 @@ struct ixgbevf_q_vector { ((((R)->next_to_clean > (R)->next_to_use) ? 
0 : (R)->count) + \ (R)->next_to_clean - (R)->next_to_use - 1) -#define IXGBE_RX_DESC_ADV(R, i) \ - (&(((union ixgbe_adv_rx_desc *)((R).desc))[i])) -#define IXGBE_TX_DESC_ADV(R, i) \ - (&(((union ixgbe_adv_tx_desc *)((R).desc))[i])) -#define IXGBE_TX_CTXTDESC_ADV(R, i) \ - (&(((struct ixgbe_adv_tx_context_desc *)((R).desc))[i])) +#define IXGBEVF_RX_DESC(R, i) \ + (&(((union ixgbe_adv_rx_desc *)((R)->desc))[i])) +#define IXGBEVF_TX_DESC(R, i) \ + (&(((union ixgbe_adv_tx_desc *)((R)->desc))[i])) +#define IXGBEVF_TX_CTXTDESC(R, i) \ + (&(((struct ixgbe_adv_tx_context_desc *)((R)->desc))[i])) #define IXGBE_MAX_JUMBO_FRAME_SIZE 16128 @@ -181,9 +179,8 @@ struct ixgbevf_q_vector { #define NON_Q_VECTORS (OTHER_VECTOR) #define MAX_MSIX_Q_VECTORS 2 -#define MAX_MSIX_COUNT 2 -#define MIN_MSIX_Q_VECTORS 2 +#define MIN_MSIX_Q_VECTORS 1 #define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NON_Q_VECTORS) /* board specific private data structure */ @@ -193,12 +190,14 @@ struct ixgbevf_adapter { u16 bd_number; struct work_struct reset_task; struct ixgbevf_q_vector *q_vector[MAX_MSIX_Q_VECTORS]; - char name[MAX_MSIX_COUNT][IFNAMSIZ + 9]; /* Interrupt Throttle Rate */ - u32 itr_setting; - u16 eitr_low; - u16 eitr_high; + u16 rx_itr_setting; + u16 tx_itr_setting; + + /* interrupt masks */ + u32 eims_enable_mask; + u32 eims_other; /* TX */ struct ixgbevf_ring *tx_ring; /* One per active queue */ @@ -213,18 +212,13 @@ struct ixgbevf_adapter { /* RX */ struct ixgbevf_ring *rx_ring; /* One per active queue */ int num_rx_queues; - int num_rx_pools; /* == num_rx_queues in 82598 */ - int num_rx_queues_per_pool; /* 1 if 82598, can be many if 82599 */ u64 hw_csum_rx_error; u64 hw_rx_no_dma_resources; u64 hw_csum_rx_good; u64 non_eop_descs; int num_msix_vectors; - int max_msix_q_vectors; /* true count of q_vectors for device */ - struct ixgbevf_ring_feature ring_feature[RING_F_ARRAY_SIZE]; struct msix_entry *msix_entries; - u64 rx_hdr_split; u32 alloc_rx_page_failed; u32 alloc_rx_buff_failed; @@ -232,15 +226,8 @@ struct ixgbevf_adapter { * thus the additional *_CAPABLE flags. 
*/ u32 flags; -#define IXGBE_FLAG_RX_CSUM_ENABLED (u32)(1) -#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 1) -#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 2) -#define IXGBE_FLAG_RX_PS_ENABLED (u32)(1 << 3) -#define IXGBE_FLAG_IN_NETPOLL (u32)(1 << 4) -#define IXGBE_FLAG_IMIR_ENABLED (u32)(1 << 5) -#define IXGBE_FLAG_MQ_CAPABLE (u32)(1 << 6) -#define IXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 7) -#define IXGBE_FLAG_IN_WATCHDOG_TASK (u32)(1 << 8) +#define IXGBE_FLAG_IN_WATCHDOG_TASK (u32)(1) + /* OS defined structs */ struct net_device *netdev; struct pci_dev *pdev; @@ -254,18 +241,16 @@ struct ixgbevf_adapter { u32 eitr_param; unsigned long state; - u32 *config_space; u64 tx_busy; unsigned int tx_ring_count; unsigned int rx_ring_count; u32 link_speed; bool link_up; - unsigned long link_check_timeout; struct work_struct watchdog_task; - bool netdev_registered; - bool dev_closed; + + spinlock_t mbx_lock; }; enum ixbgevf_state_t { @@ -301,11 +286,8 @@ extern void ixgbevf_free_rx_resources(struct ixgbevf_adapter *, extern void ixgbevf_free_tx_resources(struct ixgbevf_adapter *, struct ixgbevf_ring *); extern void ixgbevf_update_stats(struct ixgbevf_adapter *adapter); - -#ifdef ETHTOOL_OPS_COMPAT extern int ethtool_ioctl(struct ifreq *ifr); -#endif extern void ixgbe_napi_add_all(struct ixgbevf_adapter *adapter); extern void ixgbe_napi_del_all(struct ixgbevf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 41e32257a4e8..3f9841d619ad 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -42,6 +42,7 @@ #include <linux/in.h> #include <linux/ip.h> #include <linux/tcp.h> +#include <linux/sctp.h> #include <linux/ipv6.h> #include <linux/slab.h> #include <net/checksum.h> @@ -97,9 +98,7 @@ module_param(debug, int, 0); MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)"); /* forward decls */ -static void ixgbevf_set_itr_msix(struct ixgbevf_q_vector *q_vector); -static void ixgbevf_write_eitr(struct ixgbevf_adapter *adapter, int v_idx, - u32 itr_reg); +static void ixgbevf_set_itr(struct ixgbevf_q_vector *q_vector); static inline void ixgbevf_release_rx_desc(struct ixgbe_hw *hw, struct ixgbevf_ring *rx_ring, @@ -115,7 +114,7 @@ static inline void ixgbevf_release_rx_desc(struct ixgbe_hw *hw, IXGBE_WRITE_REG(hw, IXGBE_VFRDT(rx_ring->reg_idx), val); } -/* +/** * ixgbevf_set_ivar - set IVAR registers - maps interrupt causes to vectors * @adapter: pointer to adapter struct * @direction: 0 for Rx, 1 for Tx, -1 for other causes @@ -146,18 +145,18 @@ static void ixgbevf_set_ivar(struct ixgbevf_adapter *adapter, s8 direction, } } -static void ixgbevf_unmap_and_free_tx_resource(struct ixgbevf_adapter *adapter, +static void ixgbevf_unmap_and_free_tx_resource(struct ixgbevf_ring *tx_ring, struct ixgbevf_tx_buffer *tx_buffer_info) { if (tx_buffer_info->dma) { if (tx_buffer_info->mapped_as_page) - dma_unmap_page(&adapter->pdev->dev, + dma_unmap_page(tx_ring->dev, tx_buffer_info->dma, tx_buffer_info->length, DMA_TO_DEVICE); else - dma_unmap_single(&adapter->pdev->dev, + dma_unmap_single(tx_ring->dev, tx_buffer_info->dma, tx_buffer_info->length, DMA_TO_DEVICE); @@ -175,27 +174,20 @@ static void ixgbevf_unmap_and_free_tx_resource(struct ixgbevf_adapter *adapter, #define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR) /* Tx Descriptors needed, worst case */ -#define TXD_USE_COUNT(S) (((S) >> IXGBE_MAX_TXD_PWR) + \ - (((S) & (IXGBE_MAX_DATA_PER_TXD - 1)) ? 
1 : 0)) -#ifdef MAX_SKB_FRAGS -#define DESC_NEEDED (TXD_USE_COUNT(IXGBE_MAX_DATA_PER_TXD) /* skb->data */ + \ - MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE) + 1) /* for context */ -#else -#define DESC_NEEDED TXD_USE_COUNT(IXGBE_MAX_DATA_PER_TXD) -#endif +#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD) +#define DESC_NEEDED (MAX_SKB_FRAGS + 4) static void ixgbevf_tx_timeout(struct net_device *netdev); /** * ixgbevf_clean_tx_irq - Reclaim resources after transmit completes - * @adapter: board private structure + * @q_vector: board private structure * @tx_ring: tx ring to clean **/ -static bool ixgbevf_clean_tx_irq(struct ixgbevf_adapter *adapter, +static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector, struct ixgbevf_ring *tx_ring) { - struct net_device *netdev = adapter->netdev; - struct ixgbe_hw *hw = &adapter->hw; + struct ixgbevf_adapter *adapter = q_vector->adapter; union ixgbe_adv_tx_desc *tx_desc, *eop_desc; struct ixgbevf_tx_buffer *tx_buffer_info; unsigned int i, eop, count = 0; @@ -206,10 +198,10 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_adapter *adapter, i = tx_ring->next_to_clean; eop = tx_ring->tx_buffer_info[i].next_to_watch; - eop_desc = IXGBE_TX_DESC_ADV(*tx_ring, eop); + eop_desc = IXGBEVF_TX_DESC(tx_ring, eop); while ((eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)) && - (count < tx_ring->work_limit)) { + (count < tx_ring->count)) { bool cleaned = false; rmb(); /* read buffer_info after eop_desc */ /* eop could change between read and DD-check */ @@ -217,7 +209,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_adapter *adapter, goto cont_loop; for ( ; !cleaned; count++) { struct sk_buff *skb; - tx_desc = IXGBE_TX_DESC_ADV(*tx_ring, i); + tx_desc = IXGBEVF_TX_DESC(tx_ring, i); tx_buffer_info = &tx_ring->tx_buffer_info[i]; cleaned = (i == eop); skb = tx_buffer_info->skb; @@ -234,7 +226,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_adapter *adapter, total_bytes += bytecount; } - ixgbevf_unmap_and_free_tx_resource(adapter, + ixgbevf_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); tx_desc->wb.status = 0; @@ -246,37 +238,25 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_adapter *adapter, cont_loop: eop = tx_ring->tx_buffer_info[i].next_to_watch; - eop_desc = IXGBE_TX_DESC_ADV(*tx_ring, eop); + eop_desc = IXGBEVF_TX_DESC(tx_ring, eop); } tx_ring->next_to_clean = i; #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) - if (unlikely(count && netif_carrier_ok(netdev) && + if (unlikely(count && netif_carrier_ok(tx_ring->netdev) && (IXGBE_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))) { /* Make sure that anybody stopping the queue after this * sees the new next_to_clean. 
*/ smp_mb(); -#ifdef HAVE_TX_MQ - if (__netif_subqueue_stopped(netdev, tx_ring->queue_index) && - !test_bit(__IXGBEVF_DOWN, &adapter->state)) { - netif_wake_subqueue(netdev, tx_ring->queue_index); - ++adapter->restart_queue; - } -#else - if (netif_queue_stopped(netdev) && + if (__netif_subqueue_stopped(tx_ring->netdev, + tx_ring->queue_index) && !test_bit(__IXGBEVF_DOWN, &adapter->state)) { - netif_wake_queue(netdev); + netif_wake_subqueue(tx_ring->netdev, + tx_ring->queue_index); ++adapter->restart_queue; } -#endif - } - - /* re-arm the interrupt */ - if ((count >= tx_ring->work_limit) && - (!test_bit(__IXGBEVF_DOWN, &adapter->state))) { - IXGBE_WRITE_REG(hw, IXGBE_VTEICS, tx_ring->v_idx); } u64_stats_update_begin(&tx_ring->syncp); @@ -284,7 +264,7 @@ cont_loop: tx_ring->total_packets += total_packets; u64_stats_update_end(&tx_ring->syncp); - return count < tx_ring->work_limit; + return count < tx_ring->count; } /** @@ -304,13 +284,10 @@ static void ixgbevf_receive_skb(struct ixgbevf_q_vector *q_vector, bool is_vlan = (status & IXGBE_RXD_STAT_VP); u16 tag = le16_to_cpu(rx_desc->wb.upper.vlan); - if (is_vlan && test_bit(tag, adapter->active_vlans)) + if (is_vlan && test_bit(tag & VLAN_VID_MASK, adapter->active_vlans)) __vlan_hwaccel_put_tag(skb, tag); - if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL)) - napi_gro_receive(&q_vector->napi, skb); - else - netif_rx(skb); + napi_gro_receive(&q_vector->napi, skb); } /** @@ -320,12 +297,13 @@ static void ixgbevf_receive_skb(struct ixgbevf_q_vector *q_vector, * @skb: skb currently being received and modified **/ static inline void ixgbevf_rx_checksum(struct ixgbevf_adapter *adapter, + struct ixgbevf_ring *ring, u32 status_err, struct sk_buff *skb) { skb_checksum_none_assert(skb); /* Rx csum disabled */ - if (!(adapter->flags & IXGBE_FLAG_RX_CSUM_ENABLED)) + if (!(ring->netdev->features & NETIF_F_RXCSUM)) return; /* if IP and error */ @@ -360,52 +338,21 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_adapter *adapter, union ixgbe_adv_rx_desc *rx_desc; struct ixgbevf_rx_buffer *bi; struct sk_buff *skb; - unsigned int i; - unsigned int bufsz = rx_ring->rx_buf_len + NET_IP_ALIGN; + unsigned int i = rx_ring->next_to_use; - i = rx_ring->next_to_use; bi = &rx_ring->rx_buffer_info[i]; while (cleaned_count--) { - rx_desc = IXGBE_RX_DESC_ADV(*rx_ring, i); - - if (!bi->page_dma && - (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED)) { - if (!bi->page) { - bi->page = alloc_page(GFP_ATOMIC | __GFP_COLD); - if (!bi->page) { - adapter->alloc_rx_page_failed++; - goto no_buffers; - } - bi->page_offset = 0; - } else { - /* use a half page if we're re-using */ - bi->page_offset ^= (PAGE_SIZE / 2); - } - - bi->page_dma = dma_map_page(&pdev->dev, bi->page, - bi->page_offset, - (PAGE_SIZE / 2), - DMA_FROM_DEVICE); - } - + rx_desc = IXGBEVF_RX_DESC(rx_ring, i); skb = bi->skb; if (!skb) { - skb = netdev_alloc_skb(adapter->netdev, - bufsz); - + skb = netdev_alloc_skb_ip_align(rx_ring->netdev, + rx_ring->rx_buf_len); if (!skb) { adapter->alloc_rx_buff_failed++; goto no_buffers; } - /* - * Make buffer alignment 2 beyond a 16 byte boundary - * this will result in a 16 byte aligned IP header after - * the 14 byte MAC header is removed - */ - skb_reserve(skb, NET_IP_ALIGN); - bi->skb = skb; } if (!bi->dma) { @@ -413,14 +360,7 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_adapter *adapter, rx_ring->rx_buf_len, DMA_FROM_DEVICE); } - /* Refresh the desc even if buffer_addrs didn't change because - * each write-back erases this info. 
*/ - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { - rx_desc->read.pkt_addr = cpu_to_le64(bi->page_dma); - rx_desc->read.hdr_addr = cpu_to_le64(bi->dma); - } else { - rx_desc->read.pkt_addr = cpu_to_le64(bi->dma); - } + rx_desc->read.pkt_addr = cpu_to_le64(bi->dma); i++; if (i == rx_ring->count) @@ -431,36 +371,22 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_adapter *adapter, no_buffers: if (rx_ring->next_to_use != i) { rx_ring->next_to_use = i; - if (i-- == 0) - i = (rx_ring->count - 1); ixgbevf_release_rx_desc(&adapter->hw, rx_ring, i); } } static inline void ixgbevf_irq_enable_queues(struct ixgbevf_adapter *adapter, - u64 qmask) + u32 qmask) { - u32 mask; struct ixgbe_hw *hw = &adapter->hw; - mask = (qmask & 0xFFFFFFFF); - IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, mask); -} - -static inline u16 ixgbevf_get_hdr_info(union ixgbe_adv_rx_desc *rx_desc) -{ - return rx_desc->wb.lower.lo_dword.hs_rss.hdr_info; -} - -static inline u16 ixgbevf_get_pkt_info(union ixgbe_adv_rx_desc *rx_desc) -{ - return rx_desc->wb.lower.lo_dword.hs_rss.pkt_info; + IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, qmask); } static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, struct ixgbevf_ring *rx_ring, - int *work_done, int work_to_do) + int budget) { struct ixgbevf_adapter *adapter = q_vector->adapter; struct pci_dev *pdev = adapter->pdev; @@ -469,36 +395,21 @@ static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, struct sk_buff *skb; unsigned int i; u32 len, staterr; - u16 hdr_info; - bool cleaned = false; int cleaned_count = 0; unsigned int total_rx_bytes = 0, total_rx_packets = 0; i = rx_ring->next_to_clean; - rx_desc = IXGBE_RX_DESC_ADV(*rx_ring, i); + rx_desc = IXGBEVF_RX_DESC(rx_ring, i); staterr = le32_to_cpu(rx_desc->wb.upper.status_error); rx_buffer_info = &rx_ring->rx_buffer_info[i]; while (staterr & IXGBE_RXD_STAT_DD) { - u32 upper_len = 0; - if (*work_done >= work_to_do) + if (!budget) break; - (*work_done)++; + budget--; rmb(); /* read descriptor and rx_buffer_info after status DD */ - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { - hdr_info = le16_to_cpu(ixgbevf_get_hdr_info(rx_desc)); - len = (hdr_info & IXGBE_RXDADV_HDRBUFLEN_MASK) >> - IXGBE_RXDADV_HDRBUFLEN_SHIFT; - if (hdr_info & IXGBE_RXDADV_SPH) - adapter->rx_hdr_split++; - if (len > IXGBEVF_RX_HDR_SIZE) - len = IXGBEVF_RX_HDR_SIZE; - upper_len = le16_to_cpu(rx_desc->wb.upper.length); - } else { - len = le16_to_cpu(rx_desc->wb.upper.length); - } - cleaned = true; + len = le16_to_cpu(rx_desc->wb.upper.length); skb = rx_buffer_info->skb; prefetch(skb->data - NET_IP_ALIGN); rx_buffer_info->skb = NULL; @@ -511,46 +422,19 @@ static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, skb_put(skb, len); } - if (upper_len) { - dma_unmap_page(&pdev->dev, rx_buffer_info->page_dma, - PAGE_SIZE / 2, DMA_FROM_DEVICE); - rx_buffer_info->page_dma = 0; - skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, - rx_buffer_info->page, - rx_buffer_info->page_offset, - upper_len); - - if ((rx_ring->rx_buf_len > (PAGE_SIZE / 2)) || - (page_count(rx_buffer_info->page) != 1)) - rx_buffer_info->page = NULL; - else - get_page(rx_buffer_info->page); - - skb->len += upper_len; - skb->data_len += upper_len; - skb->truesize += upper_len; - } - i++; if (i == rx_ring->count) i = 0; - next_rxd = IXGBE_RX_DESC_ADV(*rx_ring, i); + next_rxd = IXGBEVF_RX_DESC(rx_ring, i); prefetch(next_rxd); cleaned_count++; next_buffer = &rx_ring->rx_buffer_info[i]; if (!(staterr & IXGBE_RXD_STAT_EOP)) { - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { - 
rx_buffer_info->skb = next_buffer->skb; - rx_buffer_info->dma = next_buffer->dma; - next_buffer->skb = skb; - next_buffer->dma = 0; - } else { - skb->next = next_buffer->skb; - skb->next->prev = skb; - } + skb->next = next_buffer->skb; + skb->next->prev = skb; adapter->non_eop_descs++; goto next_desc; } @@ -561,7 +445,7 @@ static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, goto next_desc; } - ixgbevf_rx_checksum(adapter, staterr, skb); + ixgbevf_rx_checksum(adapter, rx_ring, staterr, skb); /* probably a little skewed due to removing CRC */ total_rx_bytes += skb->len; @@ -576,7 +460,7 @@ static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, if (header_fixup_len < 14) skb_push(skb, header_fixup_len); } - skb->protocol = eth_type_trans(skb, adapter->netdev); + skb->protocol = eth_type_trans(skb, rx_ring->netdev); ixgbevf_receive_skb(q_vector, skb, staterr, rx_ring, rx_desc); @@ -608,95 +492,74 @@ next_desc: rx_ring->total_bytes += total_rx_bytes; u64_stats_update_end(&rx_ring->syncp); - return cleaned; + return !!budget; } /** - * ixgbevf_clean_rxonly - msix (aka one shot) rx clean routine + * ixgbevf_poll - NAPI polling callback * @napi: napi struct with our devices info in it * @budget: amount of work driver is allowed to do this pass, in packets * - * This function is optimized for cleaning one queue only on a single - * q_vector!!! + * This function will clean one or more rings associated with a + * q_vector. **/ -static int ixgbevf_clean_rxonly(struct napi_struct *napi, int budget) +static int ixgbevf_poll(struct napi_struct *napi, int budget) { struct ixgbevf_q_vector *q_vector = container_of(napi, struct ixgbevf_q_vector, napi); struct ixgbevf_adapter *adapter = q_vector->adapter; - struct ixgbevf_ring *rx_ring = NULL; - int work_done = 0; - long r_idx; - - r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues); - rx_ring = &(adapter->rx_ring[r_idx]); + struct ixgbevf_ring *ring; + int per_ring_budget; + bool clean_complete = true; - ixgbevf_clean_rx_irq(q_vector, rx_ring, &work_done, budget); + ixgbevf_for_each_ring(ring, q_vector->tx) + clean_complete &= ixgbevf_clean_tx_irq(q_vector, ring); - /* If all Rx work done, exit the polling mode */ - if (work_done < budget) { - napi_complete(napi); - if (adapter->itr_setting & 1) - ixgbevf_set_itr_msix(q_vector); - if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) - ixgbevf_irq_enable_queues(adapter, rx_ring->v_idx); - } + /* attempt to distribute budget to each queue fairly, but don't allow + * the budget to go below 1 because we'll exit polling */ + if (q_vector->rx.count > 1) + per_ring_budget = max(budget/q_vector->rx.count, 1); + else + per_ring_budget = budget; + + ixgbevf_for_each_ring(ring, q_vector->rx) + clean_complete &= ixgbevf_clean_rx_irq(q_vector, ring, + per_ring_budget); + + /* If all work not completed, return budget and keep polling */ + if (!clean_complete) + return budget; + /* all work done, exit the polling mode */ + napi_complete(napi); + if (adapter->rx_itr_setting & 1) + ixgbevf_set_itr(q_vector); + if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) + ixgbevf_irq_enable_queues(adapter, + 1 << q_vector->v_idx); - return work_done; + return 0; } /** - * ixgbevf_clean_rxonly_many - msix (aka one shot) rx clean routine - * @napi: napi struct with our devices info in it - * @budget: amount of work driver is allowed to do this pass, in packets - * - * This function will clean more than one rx queue associated with a - * q_vector.
- **/ -static int ixgbevf_clean_rxonly_many(struct napi_struct *napi, int budget) + * ixgbevf_write_eitr - write VTEITR register in hardware specific way + * @q_vector: structure containing interrupt and ring information + */ +static void ixgbevf_write_eitr(struct ixgbevf_q_vector *q_vector) { - struct ixgbevf_q_vector *q_vector = - container_of(napi, struct ixgbevf_q_vector, napi); struct ixgbevf_adapter *adapter = q_vector->adapter; - struct ixgbevf_ring *rx_ring = NULL; - int work_done = 0, i; - long r_idx; - u64 enable_mask = 0; - - /* attempt to distribute budget to each queue fairly, but don't allow - * the budget to go below 1 because we'll exit polling */ - budget /= (q_vector->rxr_count ?: 1); - budget = max(budget, 1); - r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues); - for (i = 0; i < q_vector->rxr_count; i++) { - rx_ring = &(adapter->rx_ring[r_idx]); - ixgbevf_clean_rx_irq(q_vector, rx_ring, &work_done, budget); - enable_mask |= rx_ring->v_idx; - r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues, - r_idx + 1); - } - -#ifndef HAVE_NETDEV_NAPI_LIST - if (!netif_running(adapter->netdev)) - work_done = 0; - -#endif - r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues); - rx_ring = &(adapter->rx_ring[r_idx]); + struct ixgbe_hw *hw = &adapter->hw; + int v_idx = q_vector->v_idx; + u32 itr_reg = q_vector->itr & IXGBE_MAX_EITR; - /* If all Rx work done, exit the polling mode */ - if (work_done < budget) { - napi_complete(napi); - if (adapter->itr_setting & 1) - ixgbevf_set_itr_msix(q_vector); - if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) - ixgbevf_irq_enable_queues(adapter, enable_mask); - } + /* + * set the WDIS bit to not clear the timer bits and cause an + * immediate assertion of the interrupt + */ + itr_reg |= IXGBE_EITR_CNT_WDIS; - return work_done; + IXGBE_WRITE_REG(hw, IXGBE_VTEITR(v_idx), itr_reg); } - /** * ixgbevf_configure_msix - Configure MSI-X hardware * @adapter: board private structure @@ -707,56 +570,49 @@ static int ixgbevf_clean_rxonly_many(struct napi_struct *napi, int budget) static void ixgbevf_configure_msix(struct ixgbevf_adapter *adapter) { struct ixgbevf_q_vector *q_vector; - struct ixgbe_hw *hw = &adapter->hw; - int i, j, q_vectors, v_idx, r_idx; - u32 mask; + int q_vectors, v_idx; q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; + adapter->eims_enable_mask = 0; /* * Populate the IVAR table and set the ITR values to the * corresponding register. */ for (v_idx = 0; v_idx < q_vectors; v_idx++) { + struct ixgbevf_ring *ring; q_vector = adapter->q_vector[v_idx]; - /* XXX for_each_set_bit(...) 
*/ - r_idx = find_first_bit(q_vector->rxr_idx, - adapter->num_rx_queues); - - for (i = 0; i < q_vector->rxr_count; i++) { - j = adapter->rx_ring[r_idx].reg_idx; - ixgbevf_set_ivar(adapter, 0, j, v_idx); - r_idx = find_next_bit(q_vector->rxr_idx, - adapter->num_rx_queues, - r_idx + 1); - } - r_idx = find_first_bit(q_vector->txr_idx, - adapter->num_tx_queues); - - for (i = 0; i < q_vector->txr_count; i++) { - j = adapter->tx_ring[r_idx].reg_idx; - ixgbevf_set_ivar(adapter, 1, j, v_idx); - r_idx = find_next_bit(q_vector->txr_idx, - adapter->num_tx_queues, - r_idx + 1); + + ixgbevf_for_each_ring(ring, q_vector->rx) + ixgbevf_set_ivar(adapter, 0, ring->reg_idx, v_idx); + + ixgbevf_for_each_ring(ring, q_vector->tx) + ixgbevf_set_ivar(adapter, 1, ring->reg_idx, v_idx); + + if (q_vector->tx.ring && !q_vector->rx.ring) { + /* tx only vector */ + if (adapter->tx_itr_setting == 1) + q_vector->itr = IXGBE_10K_ITR; + else + q_vector->itr = adapter->tx_itr_setting; + } else { + /* rx or rx/tx vector */ + if (adapter->rx_itr_setting == 1) + q_vector->itr = IXGBE_20K_ITR; + else + q_vector->itr = adapter->rx_itr_setting; } - /* if this is a tx only vector halve the interrupt rate */ - if (q_vector->txr_count && !q_vector->rxr_count) - q_vector->eitr = (adapter->eitr_param >> 1); - else if (q_vector->rxr_count) - /* rx only */ - q_vector->eitr = adapter->eitr_param; + /* add q_vector eims value to global eims_enable_mask */ + adapter->eims_enable_mask |= 1 << v_idx; - ixgbevf_write_eitr(adapter, v_idx, q_vector->eitr); + ixgbevf_write_eitr(q_vector); } ixgbevf_set_ivar(adapter, -1, 1, v_idx); - - /* set up to autoclear timer, and the vectors */ - mask = IXGBE_EIMS_ENABLE_MASK; - mask &= ~IXGBE_EIMS_OTHER; - IXGBE_WRITE_REG(hw, IXGBE_VTEIAC, mask); + /* setup eims_other and add value to global eims_enable_mask */ + adapter->eims_other = 1 << v_idx; + adapter->eims_enable_mask |= adapter->eims_other; } enum latency_range { @@ -768,11 +624,8 @@ enum latency_range { /** * ixgbevf_update_itr - update the dynamic ITR value based on statistics - * @adapter: pointer to adapter - * @eitr: eitr setting (ints per sec) to give last timeslice - * @itr_setting: current throttle rate in ints/second - * @packets: the number of packets during this measurement interval - * @bytes: the number of bytes during this measurement interval + * @q_vector: structure containing interrupt and ring information + * @ring_container: structure containing ring performance data * * Stores a new ITR value based on packets and byte * counts during the last interrupt. The advantage of per interrupt @@ -782,17 +635,17 @@ enum latency_range { * on testing data as well as attempting to minimize response time * while increasing bulk throughput. **/ -static u8 ixgbevf_update_itr(struct ixgbevf_adapter *adapter, - u32 eitr, u8 itr_setting, - int packets, int bytes) +static void ixgbevf_update_itr(struct ixgbevf_q_vector *q_vector, + struct ixgbevf_ring_container *ring_container) { - unsigned int retval = itr_setting; + int bytes = ring_container->total_bytes; + int packets = ring_container->total_packets; u32 timepassed_us; u64 bytes_perint; + u8 itr_setting = ring_container->itr; if (packets == 0) - goto update_itr_done; - + return; /* simple throttlerate management * 0-20MB/s lowest (100000 ints/s) @@ -800,134 +653,77 @@ static u8 ixgbevf_update_itr(struct ixgbevf_adapter *adapter, * 100-1249MB/s bulk (8000 ints/s) */ /* what was last interrupt timeslice? 
*/ - timepassed_us = 1000000/eitr; + timepassed_us = q_vector->itr >> 2; bytes_perint = bytes / timepassed_us; /* bytes/usec */ switch (itr_setting) { case lowest_latency: - if (bytes_perint > adapter->eitr_low) - retval = low_latency; + if (bytes_perint > 10) + itr_setting = low_latency; break; case low_latency: - if (bytes_perint > adapter->eitr_high) - retval = bulk_latency; - else if (bytes_perint <= adapter->eitr_low) - retval = lowest_latency; + if (bytes_perint > 20) + itr_setting = bulk_latency; + else if (bytes_perint <= 10) + itr_setting = lowest_latency; break; case bulk_latency: - if (bytes_perint <= adapter->eitr_high) - retval = low_latency; + if (bytes_perint <= 20) + itr_setting = low_latency; break; } -update_itr_done: - return retval; + /* clear work counters since we have the values we need */ + ring_container->total_bytes = 0; + ring_container->total_packets = 0; + + /* write updated itr to ring container */ + ring_container->itr = itr_setting; } -/** - * ixgbevf_write_eitr - write VTEITR register in hardware specific way - * @adapter: pointer to adapter struct - * @v_idx: vector index into q_vector array - * @itr_reg: new value to be written in *register* format, not ints/s - * - * This function is made to be called by ethtool and by the driver - * when it needs to update VTEITR registers at runtime. Hardware - * specific quirks/differences are taken care of here. - */ -static void ixgbevf_write_eitr(struct ixgbevf_adapter *adapter, int v_idx, - u32 itr_reg) +static void ixgbevf_set_itr(struct ixgbevf_q_vector *q_vector) { - struct ixgbe_hw *hw = &adapter->hw; + u32 new_itr = q_vector->itr; + u8 current_itr; - itr_reg = EITR_INTS_PER_SEC_TO_REG(itr_reg); + ixgbevf_update_itr(q_vector, &q_vector->tx); + ixgbevf_update_itr(q_vector, &q_vector->rx); - /* - * set the WDIS bit to not clear the timer bits and cause an - * immediate assertion of the interrupt - */ - itr_reg |= IXGBE_EITR_CNT_WDIS; - - IXGBE_WRITE_REG(hw, IXGBE_VTEITR(v_idx), itr_reg); -} - -static void ixgbevf_set_itr_msix(struct ixgbevf_q_vector *q_vector) -{ - struct ixgbevf_adapter *adapter = q_vector->adapter; - u32 new_itr; - u8 current_itr, ret_itr; - int i, r_idx, v_idx = q_vector->v_idx; - struct ixgbevf_ring *rx_ring, *tx_ring; - - r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues); - for (i = 0; i < q_vector->txr_count; i++) { - tx_ring = &(adapter->tx_ring[r_idx]); - ret_itr = ixgbevf_update_itr(adapter, q_vector->eitr, - q_vector->tx_itr, - tx_ring->total_packets, - tx_ring->total_bytes); - /* if the result for this queue would decrease interrupt - * rate for this vector then use that result */ - q_vector->tx_itr = ((q_vector->tx_itr > ret_itr) ? - q_vector->tx_itr - 1 : ret_itr); - r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues, - r_idx + 1); - } - - r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues); - for (i = 0; i < q_vector->rxr_count; i++) { - rx_ring = &(adapter->rx_ring[r_idx]); - ret_itr = ixgbevf_update_itr(adapter, q_vector->eitr, - q_vector->rx_itr, - rx_ring->total_packets, - rx_ring->total_bytes); - /* if the result for this queue would decrease interrupt - * rate for this vector then use that result */ - q_vector->rx_itr = ((q_vector->rx_itr > ret_itr) ? 
- q_vector->rx_itr - 1 : ret_itr); - r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues, - r_idx + 1); - } - - current_itr = max(q_vector->rx_itr, q_vector->tx_itr); + current_itr = max(q_vector->rx.itr, q_vector->tx.itr); switch (current_itr) { /* counts and packets in update_itr are dependent on these numbers */ case lowest_latency: - new_itr = 100000; + new_itr = IXGBE_100K_ITR; break; case low_latency: - new_itr = 20000; /* aka hwitr = ~200 */ + new_itr = IXGBE_20K_ITR; break; case bulk_latency: default: - new_itr = 8000; + new_itr = IXGBE_8K_ITR; break; } - if (new_itr != q_vector->eitr) { - u32 itr_reg; - - /* save the algorithm value here, not the smoothed one */ - q_vector->eitr = new_itr; + if (new_itr != q_vector->itr) { /* do an exponential smoothing */ - new_itr = ((q_vector->eitr * 90)/100) + ((new_itr * 10)/100); - itr_reg = EITR_INTS_PER_SEC_TO_REG(new_itr); - ixgbevf_write_eitr(adapter, v_idx, itr_reg); + new_itr = (10 * new_itr * q_vector->itr) / + ((9 * new_itr) + q_vector->itr); + + /* save the algorithm value here */ + q_vector->itr = new_itr; + + ixgbevf_write_eitr(q_vector); } } static irqreturn_t ixgbevf_msix_mbx(int irq, void *data) { - struct net_device *netdev = data; - struct ixgbevf_adapter *adapter = netdev_priv(netdev); + struct ixgbevf_adapter *adapter = data; struct ixgbe_hw *hw = &adapter->hw; - u32 eicr; u32 msg; bool got_ack = false; - eicr = IXGBE_READ_REG(hw, IXGBE_VTEICS); - IXGBE_WRITE_REG(hw, IXGBE_VTEICR, eicr); - if (!hw->mbx.ops.check_for_ack(hw)) got_ack = true; @@ -956,63 +752,24 @@ static irqreturn_t ixgbevf_msix_mbx(int irq, void *data) if (got_ack) hw->mbx.v2p_mailbox |= IXGBE_VFMAILBOX_PFACK; - return IRQ_HANDLED; -} - -static irqreturn_t ixgbevf_msix_clean_tx(int irq, void *data) -{ - struct ixgbevf_q_vector *q_vector = data; - struct ixgbevf_adapter *adapter = q_vector->adapter; - struct ixgbevf_ring *tx_ring; - int i, r_idx; - - if (!q_vector->txr_count) - return IRQ_HANDLED; - - r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues); - for (i = 0; i < q_vector->txr_count; i++) { - tx_ring = &(adapter->tx_ring[r_idx]); - ixgbevf_clean_tx_irq(adapter, tx_ring); - r_idx = find_next_bit(q_vector->txr_idx, adapter->num_tx_queues, - r_idx + 1); - } - - if (adapter->itr_setting & 1) - ixgbevf_set_itr_msix(q_vector); + IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, adapter->eims_other); return IRQ_HANDLED; } + /** - * ixgbevf_msix_clean_rx - single unshared vector rx clean (all queues) + * ixgbevf_msix_clean_rings - single unshared vector rx clean (all queues) * @irq: unused * @data: pointer to our q_vector struct for this interrupt vector **/ -static irqreturn_t ixgbevf_msix_clean_rx(int irq, void *data) +static irqreturn_t ixgbevf_msix_clean_rings(int irq, void *data) { struct ixgbevf_q_vector *q_vector = data; - struct ixgbevf_adapter *adapter = q_vector->adapter; - struct ixgbe_hw *hw = &adapter->hw; - struct ixgbevf_ring *rx_ring; - int r_idx; - - if (!q_vector->rxr_count) - return IRQ_HANDLED; - - r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues); - rx_ring = &(adapter->rx_ring[r_idx]); - /* disable interrupts on this vector only */ - IXGBE_WRITE_REG(hw, IXGBE_VTEIMC, rx_ring->v_idx); - napi_schedule(&q_vector->napi); - - - return IRQ_HANDLED; -} -static irqreturn_t ixgbevf_msix_clean_many(int irq, void *data) -{ - ixgbevf_msix_clean_rx(irq, data); - ixgbevf_msix_clean_tx(irq, data); + /* EIAM disabled interrupts (on this vector) for us */ + if (q_vector->rx.ring || q_vector->tx.ring) + 
napi_schedule(&q_vector->napi); return IRQ_HANDLED; } @@ -1022,9 +779,9 @@ static inline void map_vector_to_rxq(struct ixgbevf_adapter *a, int v_idx, { struct ixgbevf_q_vector *q_vector = a->q_vector[v_idx]; - set_bit(r_idx, q_vector->rxr_idx); - q_vector->rxr_count++; - a->rx_ring[r_idx].v_idx = 1 << v_idx; + a->rx_ring[r_idx].next = q_vector->rx.ring; + q_vector->rx.ring = &a->rx_ring[r_idx]; + q_vector->rx.count++; } static inline void map_vector_to_txq(struct ixgbevf_adapter *a, int v_idx, @@ -1032,9 +789,9 @@ static inline void map_vector_to_txq(struct ixgbevf_adapter *a, int v_idx, { struct ixgbevf_q_vector *q_vector = a->q_vector[v_idx]; - set_bit(t_idx, q_vector->txr_idx); - q_vector->txr_count++; - a->tx_ring[t_idx].v_idx = 1 << v_idx; + a->tx_ring[t_idx].next = q_vector->tx.ring; + q_vector->tx.ring = &a->tx_ring[t_idx]; + q_vector->tx.count++; } /** @@ -1110,37 +867,30 @@ out: static int ixgbevf_request_msix_irqs(struct ixgbevf_adapter *adapter) { struct net_device *netdev = adapter->netdev; - irqreturn_t (*handler)(int, void *); - int i, vector, q_vectors, err; + int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; + int vector, err; int ri = 0, ti = 0; - /* Decrement for Other and TCP Timer vectors */ - q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - -#define SET_HANDLER(_v) (((_v)->rxr_count && (_v)->txr_count) \ - ? &ixgbevf_msix_clean_many : \ - (_v)->rxr_count ? &ixgbevf_msix_clean_rx : \ - (_v)->txr_count ? &ixgbevf_msix_clean_tx : \ - NULL) for (vector = 0; vector < q_vectors; vector++) { - handler = SET_HANDLER(adapter->q_vector[vector]); - - if (handler == &ixgbevf_msix_clean_rx) { - sprintf(adapter->name[vector], "%s-%s-%d", - netdev->name, "rx", ri++); - } else if (handler == &ixgbevf_msix_clean_tx) { - sprintf(adapter->name[vector], "%s-%s-%d", - netdev->name, "tx", ti++); - } else if (handler == &ixgbevf_msix_clean_many) { - sprintf(adapter->name[vector], "%s-%s-%d", - netdev->name, "TxRx", vector); + struct ixgbevf_q_vector *q_vector = adapter->q_vector[vector]; + struct msix_entry *entry = &adapter->msix_entries[vector]; + + if (q_vector->tx.ring && q_vector->rx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s-%d", netdev->name, "TxRx", ri++); + ti++; + } else if (q_vector->rx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s-%d", netdev->name, "rx", ri++); + } else if (q_vector->tx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s-%d", netdev->name, "tx", ti++); } else { /* skip this unused q_vector */ continue; } - err = request_irq(adapter->msix_entries[vector].vector, - handler, 0, adapter->name[vector], - adapter->q_vector[vector]); + err = request_irq(entry->vector, &ixgbevf_msix_clean_rings, 0, + q_vector->name, q_vector); if (err) { hw_dbg(&adapter->hw, "request_irq failed for MSIX interrupt " @@ -1149,9 +899,8 @@ static int ixgbevf_request_msix_irqs(struct ixgbevf_adapter *adapter) } } - sprintf(adapter->name[vector], "%s:mbx", netdev->name); err = request_irq(adapter->msix_entries[vector].vector, - &ixgbevf_msix_mbx, 0, adapter->name[vector], netdev); + &ixgbevf_msix_mbx, 0, netdev->name, adapter); if (err) { hw_dbg(&adapter->hw, "request_irq for msix_mbx failed: %d\n", err); @@ -1161,9 +910,11 @@ static int ixgbevf_request_msix_irqs(struct ixgbevf_adapter *adapter) return 0; free_queue_irqs: - for (i = vector - 1; i >= 0; i--) - free_irq(adapter->msix_entries[--vector].vector, - &(adapter->q_vector[i])); + while (vector) { + vector--; + 
free_irq(adapter->msix_entries[vector].vector, + adapter->q_vector[vector]); + } pci_disable_msix(adapter->pdev); kfree(adapter->msix_entries); adapter->msix_entries = NULL; @@ -1176,11 +927,10 @@ static inline void ixgbevf_reset_q_vectors(struct ixgbevf_adapter *adapter) for (i = 0; i < q_vectors; i++) { struct ixgbevf_q_vector *q_vector = adapter->q_vector[i]; - bitmap_zero(q_vector->rxr_idx, MAX_RX_QUEUES); - bitmap_zero(q_vector->txr_idx, MAX_TX_QUEUES); - q_vector->rxr_count = 0; - q_vector->txr_count = 0; - q_vector->eitr = adapter->eitr_param; + q_vector->rx.ring = NULL; + q_vector->tx.ring = NULL; + q_vector->rx.count = 0; + q_vector->tx.count = 0; } } @@ -1206,17 +956,20 @@ static int ixgbevf_request_irq(struct ixgbevf_adapter *adapter) static void ixgbevf_free_irq(struct ixgbevf_adapter *adapter) { - struct net_device *netdev = adapter->netdev; int i, q_vectors; q_vectors = adapter->num_msix_vectors; - i = q_vectors - 1; - free_irq(adapter->msix_entries[i].vector, netdev); + free_irq(adapter->msix_entries[i].vector, adapter); i--; for (; i >= 0; i--) { + /* free only the irqs that were actually requested */ + if (!adapter->q_vector[i]->rx.ring && + !adapter->q_vector[i]->tx.ring) + continue; + free_irq(adapter->msix_entries[i].vector, adapter->q_vector[i]); } @@ -1230,10 +983,12 @@ static void ixgbevf_free_irq(struct ixgbevf_adapter *adapter) **/ static inline void ixgbevf_irq_disable(struct ixgbevf_adapter *adapter) { - int i; struct ixgbe_hw *hw = &adapter->hw; + int i; + IXGBE_WRITE_REG(hw, IXGBE_VTEIAM, 0); IXGBE_WRITE_REG(hw, IXGBE_VTEIMC, ~0); + IXGBE_WRITE_REG(hw, IXGBE_VTEIAC, 0); IXGBE_WRITE_FLUSH(hw); @@ -1245,23 +1000,13 @@ static inline void ixgbevf_irq_disable(struct ixgbevf_adapter *adapter) * ixgbevf_irq_enable - Enable default interrupt generation settings * @adapter: board private structure **/ -static inline void ixgbevf_irq_enable(struct ixgbevf_adapter *adapter, - bool queues, bool flush) +static inline void ixgbevf_irq_enable(struct ixgbevf_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; - u32 mask; - u64 qmask; - - mask = (IXGBE_EIMS_ENABLE_MASK & ~IXGBE_EIMS_RTX_QUEUE); - qmask = ~0; - - IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, mask); - - if (queues) - ixgbevf_irq_enable_queues(adapter, qmask); - if (flush) - IXGBE_WRITE_FLUSH(hw); + IXGBE_WRITE_REG(hw, IXGBE_VTEIAM, adapter->eims_enable_mask); + IXGBE_WRITE_REG(hw, IXGBE_VTEIAC, adapter->eims_enable_mask); + IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, adapter->eims_enable_mask); } /** @@ -1311,29 +1056,14 @@ static void ixgbevf_configure_srrctl(struct ixgbevf_adapter *adapter, int index) srrctl = IXGBE_SRRCTL_DROP_EN; - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { - u16 bufsz = IXGBEVF_RXBUFFER_2048; - /* grow the amount we can receive on large page machines */ - if (bufsz < (PAGE_SIZE / 2)) - bufsz = (PAGE_SIZE / 2); - /* cap the bufsz at our largest descriptor size */ - bufsz = min((u16)IXGBEVF_MAX_RXBUFFER, bufsz); - - srrctl |= bufsz >> IXGBE_SRRCTL_BSIZEPKT_SHIFT; - srrctl |= IXGBE_SRRCTL_DESCTYPE_HDR_SPLIT_ALWAYS; - srrctl |= ((IXGBEVF_RX_HDR_SIZE << - IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT) & - IXGBE_SRRCTL_BSIZEHDR_MASK); - } else { - srrctl |= IXGBE_SRRCTL_DESCTYPE_ADV_ONEBUF; + srrctl |= IXGBE_SRRCTL_DESCTYPE_ADV_ONEBUF; - if (rx_ring->rx_buf_len == MAXIMUM_ETHERNET_VLAN_SIZE) - srrctl |= IXGBEVF_RXBUFFER_2048 >> - IXGBE_SRRCTL_BSIZEPKT_SHIFT; - else - srrctl |= rx_ring->rx_buf_len >> - IXGBE_SRRCTL_BSIZEPKT_SHIFT; - } + if (rx_ring->rx_buf_len == MAXIMUM_ETHERNET_VLAN_SIZE) + srrctl |= IXGBEVF_RXBUFFER_2048 
>> + IXGBE_SRRCTL_BSIZEPKT_SHIFT; + else + srrctl |= rx_ring->rx_buf_len >> + IXGBE_SRRCTL_BSIZEPKT_SHIFT; IXGBE_WRITE_REG(hw, IXGBE_VFSRRCTL(index), srrctl); } @@ -1353,36 +1083,12 @@ static void ixgbevf_configure_rx(struct ixgbevf_adapter *adapter) u32 rdlen; int rx_buf_len; - /* Decide whether to use packet split mode or not */ - if (netdev->mtu > ETH_DATA_LEN) { - if (adapter->flags & IXGBE_FLAG_RX_PS_CAPABLE) - adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED; - else - adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED; - } else { - if (adapter->flags & IXGBE_FLAG_RX_1BUF_CAPABLE) - adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED; - else - adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED; - } - - /* Set the RX buffer length according to the mode */ - if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) { - /* PSRTYPE must be initialized in 82599 */ - u32 psrtype = IXGBE_PSRTYPE_TCPHDR | - IXGBE_PSRTYPE_UDPHDR | - IXGBE_PSRTYPE_IPV4HDR | - IXGBE_PSRTYPE_IPV6HDR | - IXGBE_PSRTYPE_L2HDR; - IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype); - rx_buf_len = IXGBEVF_RX_HDR_SIZE; - } else { - IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, 0); - if (netdev->mtu <= ETH_DATA_LEN) - rx_buf_len = MAXIMUM_ETHERNET_VLAN_SIZE; - else - rx_buf_len = ALIGN(max_frame, 1024); - } + /* PSRTYPE must be initialized in 82599 */ + IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, 0); + if (netdev->mtu <= ETH_DATA_LEN) + rx_buf_len = MAXIMUM_ETHERNET_VLAN_SIZE; + else + rx_buf_len = ALIGN(max_frame, 1024); rdlen = adapter->rx_ring[0].count * sizeof(union ixgbe_adv_rx_desc); /* Setup the HW Rx Head and Tail Descriptor Pointers and @@ -1409,9 +1115,14 @@ static int ixgbevf_vlan_rx_add_vid(struct net_device *netdev, u16 vid) struct ixgbevf_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; + spin_lock(&adapter->mbx_lock); + /* add VID to filter table */ if (hw->mac.ops.set_vfta) hw->mac.ops.set_vfta(hw, vid, 0, true); + + spin_unlock(&adapter->mbx_lock); + set_bit(vid, adapter->active_vlans); return 0; @@ -1422,9 +1133,14 @@ static int ixgbevf_vlan_rx_kill_vid(struct net_device *netdev, u16 vid) struct ixgbevf_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; + spin_lock(&adapter->mbx_lock); + /* remove VID from filter table */ if (hw->mac.ops.set_vfta) hw->mac.ops.set_vfta(hw, vid, 0, false); + + spin_unlock(&adapter->mbx_lock); + clear_bit(vid, adapter->active_vlans); return 0; @@ -1479,11 +1195,15 @@ static void ixgbevf_set_rx_mode(struct net_device *netdev) struct ixgbevf_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; + spin_lock(&adapter->mbx_lock); + /* reprogram multicast list */ if (hw->mac.ops.update_mc_addr_list) hw->mac.ops.update_mc_addr_list(hw, netdev); ixgbevf_write_uc_addr_list(netdev); + + spin_unlock(&adapter->mbx_lock); } static void ixgbevf_napi_enable_all(struct ixgbevf_adapter *adapter) @@ -1493,15 +1213,8 @@ static void ixgbevf_napi_enable_all(struct ixgbevf_adapter *adapter) int q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; for (q_idx = 0; q_idx < q_vectors; q_idx++) { - struct napi_struct *napi; q_vector = adapter->q_vector[q_idx]; - if (!q_vector->rxr_count) - continue; - napi = &q_vector->napi; - if (q_vector->rxr_count > 1) - napi->poll = &ixgbevf_clean_rxonly_many; - - napi_enable(napi); + napi_enable(&q_vector->napi); } } @@ -1513,8 +1226,6 @@ static void ixgbevf_napi_disable_all(struct ixgbevf_adapter *adapter) for (q_idx = 0; q_idx < q_vectors; q_idx++) { q_vector = adapter->q_vector[q_idx]; - if (!q_vector->rxr_count) - continue; 
napi_disable(&q_vector->napi); } } @@ -1532,9 +1243,8 @@ static void ixgbevf_configure(struct ixgbevf_adapter *adapter) ixgbevf_configure_rx(adapter); for (i = 0; i < adapter->num_rx_queues; i++) { struct ixgbevf_ring *ring = &adapter->rx_ring[i]; - ixgbevf_alloc_rx_buffers(adapter, ring, ring->count); - ring->next_to_use = ring->count - 1; - writel(ring->next_to_use, adapter->hw.hw_addr + ring->tail); + ixgbevf_alloc_rx_buffers(adapter, ring, + IXGBE_DESC_UNUSED(ring)); } } @@ -1638,6 +1348,8 @@ static void ixgbevf_up_complete(struct ixgbevf_adapter *adapter) ixgbevf_configure_msix(adapter); + spin_lock(&adapter->mbx_lock); + if (hw->mac.ops.set_rar) { if (is_valid_ether_addr(hw->mac.addr)) hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0); @@ -1649,6 +1361,8 @@ static void ixgbevf_up_complete(struct ixgbevf_adapter *adapter) msg[1] = netdev->mtu + ETH_HLEN + ETH_FCS_LEN; hw->mbx.ops.write_posted(hw, msg, 2); + spin_unlock(&adapter->mbx_lock); + clear_bit(__IXGBEVF_DOWN, &adapter->state); ixgbevf_napi_enable_all(adapter); @@ -1658,10 +1372,6 @@ static void ixgbevf_up_complete(struct ixgbevf_adapter *adapter) ixgbevf_save_reset_stats(adapter); ixgbevf_init_last_counter_stats(adapter); - /* bring the link up in the watchdog, this could race with our first - * link up interrupt but shouldn't be a problem */ - adapter->flags |= IXGBE_FLAG_NEED_LINK_UPDATE; - adapter->link_check_timeout = jiffies; mod_timer(&adapter->watchdog_timer, jiffies); } @@ -1676,7 +1386,7 @@ void ixgbevf_up(struct ixgbevf_adapter *adapter) /* clear any pending interrupts, may auto mask */ IXGBE_READ_REG(hw, IXGBE_VTEICR); - ixgbevf_irq_enable(adapter, true, true); + ixgbevf_irq_enable(adapter); } /** @@ -1714,14 +1424,6 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_adapter *adapter, dev_kfree_skb(this); } while (skb); } - if (!rx_buffer_info->page) - continue; - dma_unmap_page(&pdev->dev, rx_buffer_info->page_dma, - PAGE_SIZE / 2, DMA_FROM_DEVICE); - rx_buffer_info->page_dma = 0; - put_page(rx_buffer_info->page); - rx_buffer_info->page = NULL; - rx_buffer_info->page_offset = 0; } size = sizeof(struct ixgbevf_rx_buffer) * rx_ring->count; @@ -1758,7 +1460,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_adapter *adapter, for (i = 0; i < tx_ring->count; i++) { tx_buffer_info = &tx_ring->tx_buffer_info[i]; - ixgbevf_unmap_and_free_tx_resource(adapter, tx_buffer_info); + ixgbevf_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); } size = sizeof(struct ixgbevf_tx_buffer) * tx_ring->count; @@ -1873,11 +1575,15 @@ void ixgbevf_reset(struct ixgbevf_adapter *adapter) struct ixgbe_hw *hw = &adapter->hw; struct net_device *netdev = adapter->netdev; + spin_lock(&adapter->mbx_lock); + if (hw->mac.ops.reset_hw(hw)) hw_dbg(hw, "PF still resetting\n"); else hw->mac.ops.init_hw(hw); + spin_unlock(&adapter->mbx_lock); + if (is_valid_ether_addr(adapter->hw.mac.addr)) { memcpy(netdev->dev_addr, adapter->hw.mac.addr, netdev->addr_len); @@ -1891,10 +1597,9 @@ static void ixgbevf_acquire_msix_vectors(struct ixgbevf_adapter *adapter, { int err, vector_threshold; - /* We'll want at least 3 (vector_threshold): - * 1) TxQ[0] Cleanup - * 2) RxQ[0] Cleanup - * 3) Other (Link Status Change, etc.) + /* We'll want at least 2 (vector_threshold): + * 1) TxQ[0] + RxQ[0] handler + * 2) Other (Link Status Change, etc.) 
*/ vector_threshold = MIN_MSIX_COUNT; @@ -1933,8 +1638,8 @@ static void ixgbevf_acquire_msix_vectors(struct ixgbevf_adapter *adapter, } } -/* - * ixgbevf_set_num_queues: Allocate queues for device, feature dependent +/** + * ixgbevf_set_num_queues - Allocate queues for device, feature dependent * @adapter: board private structure to initialize * * This is the top level queue allocation routine. The order here is very @@ -1949,8 +1654,6 @@ static void ixgbevf_set_num_queues(struct ixgbevf_adapter *adapter) /* Start with base case */ adapter->num_rx_queues = 1; adapter->num_tx_queues = 1; - adapter->num_rx_pools = adapter->num_rx_queues; - adapter->num_rx_queues_per_pool = 1; } /** @@ -1979,12 +1682,16 @@ static int ixgbevf_alloc_queues(struct ixgbevf_adapter *adapter) adapter->tx_ring[i].count = adapter->tx_ring_count; adapter->tx_ring[i].queue_index = i; adapter->tx_ring[i].reg_idx = i; + adapter->tx_ring[i].dev = &adapter->pdev->dev; + adapter->tx_ring[i].netdev = adapter->netdev; } for (i = 0; i < adapter->num_rx_queues; i++) { adapter->rx_ring[i].count = adapter->rx_ring_count; adapter->rx_ring[i].queue_index = i; adapter->rx_ring[i].reg_idx = i; + adapter->rx_ring[i].dev = &adapter->pdev->dev; + adapter->rx_ring[i].netdev = adapter->netdev; } return 0; @@ -2011,10 +1718,12 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter) * It's easy to be greedy for MSI-X vectors, but it really * doesn't do us much good if we have a lot more vectors * than CPU's. So let's be conservative and only ask for - * (roughly) twice the number of vectors as there are CPU's. + * (roughly) the same number of vectors as there are CPU's. + * The default is to use pairs of vectors. */ - v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues, - (int)(num_online_cpus() * 2)) + NON_Q_VECTORS; + v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues); + v_budget = min_t(int, v_budget, num_online_cpus()); + v_budget += NON_Q_VECTORS; /* A failure in MSI-X entry allocation isn't fatal, but it does * mean we disable MSI-X capabilities of the adapter. 
*/ @@ -2045,12 +1754,8 @@ static int ixgbevf_alloc_q_vectors(struct ixgbevf_adapter *adapter) { int q_idx, num_q_vectors; struct ixgbevf_q_vector *q_vector; - int napi_vectors; - int (*poll)(struct napi_struct *, int); num_q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS; - napi_vectors = adapter->num_rx_queues; - poll = &ixgbevf_clean_rxonly; for (q_idx = 0; q_idx < num_q_vectors; q_idx++) { q_vector = kzalloc(sizeof(struct ixgbevf_q_vector), GFP_KERNEL); @@ -2058,10 +1763,8 @@ static int ixgbevf_alloc_q_vectors(struct ixgbevf_adapter *adapter) goto err_out; q_vector->adapter = adapter; q_vector->v_idx = q_idx; - q_vector->eitr = adapter->eitr_param; - if (q_idx < napi_vectors) - netif_napi_add(adapter->netdev, &q_vector->napi, - (*poll), 64); + netif_napi_add(adapter->netdev, &q_vector->napi, + ixgbevf_poll, 64); adapter->q_vector[q_idx] = q_vector; } @@ -2207,21 +1910,17 @@ static int __devinit ixgbevf_sw_init(struct ixgbevf_adapter *adapter) adapter->netdev->addr_len); } - /* Enable dynamic interrupt throttling rates */ - adapter->eitr_param = 20000; - adapter->itr_setting = 1; + /* lock to protect mailbox accesses */ + spin_lock_init(&adapter->mbx_lock); - /* set defaults for eitr in MegaBytes */ - adapter->eitr_low = 10; - adapter->eitr_high = 20; + /* Enable dynamic interrupt throttling rates */ + adapter->rx_itr_setting = 1; + adapter->tx_itr_setting = 1; /* set default ring sizes */ adapter->tx_ring_count = IXGBEVF_DEFAULT_TXD; adapter->rx_ring_count = IXGBEVF_DEFAULT_RXD; - /* enable rx csum by default */ - adapter->flags |= IXGBE_FLAG_RX_CSUM_ENABLED; - set_bit(__IXGBEVF_DOWN, &adapter->state); return 0; @@ -2281,7 +1980,7 @@ static void ixgbevf_watchdog(unsigned long data) { struct ixgbevf_adapter *adapter = (struct ixgbevf_adapter *)data; struct ixgbe_hw *hw = &adapter->hw; - u64 eics = 0; + u32 eics = 0; int i; /* @@ -2295,11 +1994,11 @@ static void ixgbevf_watchdog(unsigned long data) /* get one bit for every active tx/rx interrupt vector */ for (i = 0; i < adapter->num_msix_vectors - NON_Q_VECTORS; i++) { struct ixgbevf_q_vector *qv = adapter->q_vector[i]; - if (qv->rxr_count || qv->txr_count) - eics |= (1 << i); + if (qv->rx.ring || qv->tx.ring) + eics |= 1 << i; } - IXGBE_WRITE_REG(hw, IXGBE_VTEICS, (u32)eics); + IXGBE_WRITE_REG(hw, IXGBE_VTEICS, eics); watchdog_short_circuit: schedule_work(&adapter->watchdog_task); @@ -2353,8 +2052,16 @@ static void ixgbevf_watchdog_task(struct work_struct *work) * no LSC interrupt */ if (hw->mac.ops.check_link) { - if ((hw->mac.ops.check_link(hw, &link_speed, - &link_up, false)) != 0) { + s32 need_reset; + + spin_lock(&adapter->mbx_lock); + + need_reset = hw->mac.ops.check_link(hw, &link_speed, + &link_up, false); + + spin_unlock(&adapter->mbx_lock); + + if (need_reset) { adapter->link_up = link_up; adapter->link_speed = link_speed; netif_carrier_off(netdev); @@ -2469,7 +2176,6 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_adapter *adapter, tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; - tx_ring->work_limit = tx_ring->count; return 0; err: @@ -2673,7 +2379,7 @@ static int ixgbevf_open(struct net_device *netdev) if (err) goto err_req_irq; - ixgbevf_irq_enable(adapter, true, true); + ixgbevf_irq_enable(adapter); return 0; @@ -2715,172 +2421,153 @@ static int ixgbevf_close(struct net_device *netdev) return 0; } -static int ixgbevf_tso(struct ixgbevf_adapter *adapter, - struct ixgbevf_ring *tx_ring, - struct sk_buff *skb, u32 tx_flags, u8 *hdr_len) +static void ixgbevf_tx_ctxtdesc(struct ixgbevf_ring *tx_ring, + u32 
vlan_macip_lens, u32 type_tucmd, + u32 mss_l4len_idx) { struct ixgbe_adv_tx_context_desc *context_desc; - unsigned int i; - int err; - struct ixgbevf_tx_buffer *tx_buffer_info; - u32 vlan_macip_lens = 0, type_tucmd_mlhl; - u32 mss_l4len_idx, l4len; + u16 i = tx_ring->next_to_use; - if (skb_is_gso(skb)) { - if (skb_header_cloned(skb)) { - err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC); - if (err) - return err; - } - l4len = tcp_hdrlen(skb); - *hdr_len += l4len; - - if (skb->protocol == htons(ETH_P_IP)) { - struct iphdr *iph = ip_hdr(skb); - iph->tot_len = 0; - iph->check = 0; - tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, - iph->daddr, 0, - IPPROTO_TCP, - 0); - adapter->hw_tso_ctxt++; - } else if (skb_is_gso_v6(skb)) { - ipv6_hdr(skb)->payload_len = 0; - tcp_hdr(skb)->check = - ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, - &ipv6_hdr(skb)->daddr, - 0, IPPROTO_TCP, 0); - adapter->hw_tso6_ctxt++; - } + context_desc = IXGBEVF_TX_CTXTDESC(tx_ring, i); - i = tx_ring->next_to_use; + i++; + tx_ring->next_to_use = (i < tx_ring->count) ? i : 0; - tx_buffer_info = &tx_ring->tx_buffer_info[i]; - context_desc = IXGBE_TX_CTXTDESC_ADV(*tx_ring, i); - - /* VLAN MACLEN IPLEN */ - if (tx_flags & IXGBE_TX_FLAGS_VLAN) - vlan_macip_lens |= - (tx_flags & IXGBE_TX_FLAGS_VLAN_MASK); - vlan_macip_lens |= ((skb_network_offset(skb)) << - IXGBE_ADVTXD_MACLEN_SHIFT); - *hdr_len += skb_network_offset(skb); - vlan_macip_lens |= - (skb_transport_header(skb) - skb_network_header(skb)); - *hdr_len += - (skb_transport_header(skb) - skb_network_header(skb)); - context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens); - context_desc->seqnum_seed = 0; - - /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */ - type_tucmd_mlhl = (IXGBE_TXD_CMD_DEXT | - IXGBE_ADVTXD_DTYP_CTXT); - - if (skb->protocol == htons(ETH_P_IP)) - type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_IPV4; - type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP; - context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd_mlhl); - - /* MSS L4LEN IDX */ - mss_l4len_idx = - (skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT); - mss_l4len_idx |= (l4len << IXGBE_ADVTXD_L4LEN_SHIFT); - /* use index 1 for TSO */ - mss_l4len_idx |= (1 << IXGBE_ADVTXD_IDX_SHIFT); - context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx); - - tx_buffer_info->time_stamp = jiffies; - tx_buffer_info->next_to_watch = i; + /* set bits to identify this as an advanced context descriptor */ + type_tucmd |= IXGBE_TXD_CMD_DEXT | IXGBE_ADVTXD_DTYP_CTXT; - i++; - if (i == tx_ring->count) - i = 0; - tx_ring->next_to_use = i; + context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens); + context_desc->seqnum_seed = 0; + context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd); + context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx); +} - return true; +static int ixgbevf_tso(struct ixgbevf_ring *tx_ring, + struct sk_buff *skb, u32 tx_flags, u8 *hdr_len) +{ + u32 vlan_macip_lens, type_tucmd; + u32 mss_l4len_idx, l4len; + + if (!skb_is_gso(skb)) + return 0; + + if (skb_header_cloned(skb)) { + int err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC); + if (err) + return err; } - return false; + /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */ + type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP; + + if (skb->protocol == htons(ETH_P_IP)) { + struct iphdr *iph = ip_hdr(skb); + iph->tot_len = 0; + iph->check = 0; + tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, + 0); + type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4; + } else if (skb_is_gso_v6(skb)) { + ipv6_hdr(skb)->payload_len = 0; + tcp_hdr(skb)->check = + 
~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, + 0, IPPROTO_TCP, 0); + } + + /* compute header lengths */ + l4len = tcp_hdrlen(skb); + *hdr_len += l4len; + *hdr_len = skb_transport_offset(skb) + l4len; + + /* mss_l4len_id: use 1 as index for TSO */ + mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT; + mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT; + mss_l4len_idx |= 1 << IXGBE_ADVTXD_IDX_SHIFT; + + /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */ + vlan_macip_lens = skb_network_header_len(skb); + vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT; + vlan_macip_lens |= tx_flags & IXGBE_TX_FLAGS_VLAN_MASK; + + ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens, + type_tucmd, mss_l4len_idx); + + return 1; } -static bool ixgbevf_tx_csum(struct ixgbevf_adapter *adapter, - struct ixgbevf_ring *tx_ring, +static bool ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring, struct sk_buff *skb, u32 tx_flags) { - struct ixgbe_adv_tx_context_desc *context_desc; - unsigned int i; - struct ixgbevf_tx_buffer *tx_buffer_info; - u32 vlan_macip_lens = 0, type_tucmd_mlhl = 0; - if (skb->ip_summed == CHECKSUM_PARTIAL || - (tx_flags & IXGBE_TX_FLAGS_VLAN)) { - i = tx_ring->next_to_use; - tx_buffer_info = &tx_ring->tx_buffer_info[i]; - context_desc = IXGBE_TX_CTXTDESC_ADV(*tx_ring, i); - - if (tx_flags & IXGBE_TX_FLAGS_VLAN) - vlan_macip_lens |= (tx_flags & - IXGBE_TX_FLAGS_VLAN_MASK); - vlan_macip_lens |= (skb_network_offset(skb) << - IXGBE_ADVTXD_MACLEN_SHIFT); - if (skb->ip_summed == CHECKSUM_PARTIAL) - vlan_macip_lens |= (skb_transport_header(skb) - - skb_network_header(skb)); - - context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens); - context_desc->seqnum_seed = 0; - - type_tucmd_mlhl |= (IXGBE_TXD_CMD_DEXT | - IXGBE_ADVTXD_DTYP_CTXT); - - if (skb->ip_summed == CHECKSUM_PARTIAL) { - switch (skb->protocol) { - case __constant_htons(ETH_P_IP): - type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_IPV4; - if (ip_hdr(skb)->protocol == IPPROTO_TCP) - type_tucmd_mlhl |= - IXGBE_ADVTXD_TUCMD_L4T_TCP; - break; - case __constant_htons(ETH_P_IPV6): - /* XXX what about other V6 headers?? 
*/ - if (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP) - type_tucmd_mlhl |= - IXGBE_ADVTXD_TUCMD_L4T_TCP; - break; - default: - if (unlikely(net_ratelimit())) { - pr_warn("partial checksum but " - "proto=%x!\n", skb->protocol); - } - break; - } - } - context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd_mlhl); - /* use index zero for tx checksum offload */ - context_desc->mss_l4len_idx = 0; - tx_buffer_info->time_stamp = jiffies; - tx_buffer_info->next_to_watch = i; + u32 vlan_macip_lens = 0; + u32 mss_l4len_idx = 0; + u32 type_tucmd = 0; - adapter->hw_csum_tx_good++; - i++; - if (i == tx_ring->count) - i = 0; - tx_ring->next_to_use = i; + if (skb->ip_summed == CHECKSUM_PARTIAL) { + u8 l4_hdr = 0; + switch (skb->protocol) { + case __constant_htons(ETH_P_IP): + vlan_macip_lens |= skb_network_header_len(skb); + type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4; + l4_hdr = ip_hdr(skb)->protocol; + break; + case __constant_htons(ETH_P_IPV6): + vlan_macip_lens |= skb_network_header_len(skb); + l4_hdr = ipv6_hdr(skb)->nexthdr; + break; + default: + if (unlikely(net_ratelimit())) { + dev_warn(tx_ring->dev, + "partial checksum but proto=%x!\n", + skb->protocol); + } + break; + } - return true; + switch (l4_hdr) { + case IPPROTO_TCP: + type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_TCP; + mss_l4len_idx = tcp_hdrlen(skb) << + IXGBE_ADVTXD_L4LEN_SHIFT; + break; + case IPPROTO_SCTP: + type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_SCTP; + mss_l4len_idx = sizeof(struct sctphdr) << + IXGBE_ADVTXD_L4LEN_SHIFT; + break; + case IPPROTO_UDP: + mss_l4len_idx = sizeof(struct udphdr) << + IXGBE_ADVTXD_L4LEN_SHIFT; + break; + default: + if (unlikely(net_ratelimit())) { + dev_warn(tx_ring->dev, + "partial checksum but l4 proto=%x!\n", + l4_hdr); + } + break; + } } - return false; + /* vlan_macip_lens: MACLEN, VLAN tag */ + vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT; + vlan_macip_lens |= tx_flags & IXGBE_TX_FLAGS_VLAN_MASK; + + ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens, + type_tucmd, mss_l4len_idx); + + return (skb->ip_summed == CHECKSUM_PARTIAL); } -static int ixgbevf_tx_map(struct ixgbevf_adapter *adapter, - struct ixgbevf_ring *tx_ring, +static int ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, struct sk_buff *skb, u32 tx_flags, unsigned int first) { - struct pci_dev *pdev = adapter->pdev; struct ixgbevf_tx_buffer *tx_buffer_info; unsigned int len; unsigned int total = skb->len; @@ -2899,12 +2586,11 @@ static int ixgbevf_tx_map(struct ixgbevf_adapter *adapter, tx_buffer_info->length = size; tx_buffer_info->mapped_as_page = false; - tx_buffer_info->dma = dma_map_single(&adapter->pdev->dev, + tx_buffer_info->dma = dma_map_single(tx_ring->dev, skb->data + offset, size, DMA_TO_DEVICE); - if (dma_mapping_error(&pdev->dev, tx_buffer_info->dma)) + if (dma_mapping_error(tx_ring->dev, tx_buffer_info->dma)) goto dma_error; - tx_buffer_info->time_stamp = jiffies; tx_buffer_info->next_to_watch = i; len -= size; @@ -2929,12 +2615,12 @@ static int ixgbevf_tx_map(struct ixgbevf_adapter *adapter, tx_buffer_info->length = size; tx_buffer_info->dma = - skb_frag_dma_map(&adapter->pdev->dev, frag, + skb_frag_dma_map(tx_ring->dev, frag, offset, size, DMA_TO_DEVICE); tx_buffer_info->mapped_as_page = true; - if (dma_mapping_error(&pdev->dev, tx_buffer_info->dma)) + if (dma_mapping_error(tx_ring->dev, + tx_buffer_info->dma)) goto dma_error; - tx_buffer_info->time_stamp = jiffies; tx_buffer_info->next_to_watch = i; len -= size; @@ -2955,15 +2641,15 @@ static int ixgbevf_tx_map(struct ixgbevf_adapter *adapter, i = i - 1; 
tx_ring->tx_buffer_info[i].skb = skb; tx_ring->tx_buffer_info[first].next_to_watch = i; + tx_ring->tx_buffer_info[first].time_stamp = jiffies; return count; dma_error: - dev_err(&pdev->dev, "TX DMA map failed\n"); + dev_err(tx_ring->dev, "TX DMA map failed\n"); /* clear timestamp and dma mappings for failed tx_buffer_info map */ tx_buffer_info->dma = 0; - tx_buffer_info->time_stamp = 0; tx_buffer_info->next_to_watch = 0; count--; @@ -2974,14 +2660,13 @@ dma_error: if (i < 0) i += tx_ring->count; tx_buffer_info = &tx_ring->tx_buffer_info[i]; - ixgbevf_unmap_and_free_tx_resource(adapter, tx_buffer_info); + ixgbevf_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); } return count; } -static void ixgbevf_tx_queue(struct ixgbevf_adapter *adapter, - struct ixgbevf_ring *tx_ring, int tx_flags, +static void ixgbevf_tx_queue(struct ixgbevf_ring *tx_ring, int tx_flags, int count, u32 paylen, u8 hdr_len) { union ixgbe_adv_tx_desc *tx_desc = NULL; @@ -2998,28 +2683,31 @@ static void ixgbevf_tx_queue(struct ixgbevf_adapter *adapter, if (tx_flags & IXGBE_TX_FLAGS_VLAN) cmd_type_len |= IXGBE_ADVTXD_DCMD_VLE; + if (tx_flags & IXGBE_TX_FLAGS_CSUM) + olinfo_status |= IXGBE_ADVTXD_POPTS_TXSM; + if (tx_flags & IXGBE_TX_FLAGS_TSO) { cmd_type_len |= IXGBE_ADVTXD_DCMD_TSE; - olinfo_status |= IXGBE_TXD_POPTS_TXSM << - IXGBE_ADVTXD_POPTS_SHIFT; - /* use index 1 context for tso */ olinfo_status |= (1 << IXGBE_ADVTXD_IDX_SHIFT); if (tx_flags & IXGBE_TX_FLAGS_IPV4) - olinfo_status |= IXGBE_TXD_POPTS_IXSM << - IXGBE_ADVTXD_POPTS_SHIFT; + olinfo_status |= IXGBE_ADVTXD_POPTS_IXSM; + + } - } else if (tx_flags & IXGBE_TX_FLAGS_CSUM) - olinfo_status |= IXGBE_TXD_POPTS_TXSM << - IXGBE_ADVTXD_POPTS_SHIFT; + /* + * Check Context must be set if Tx switch is enabled, which it + * always is for case where virtual functions are running + */ + olinfo_status |= IXGBE_ADVTXD_CC; olinfo_status |= ((paylen - hdr_len) << IXGBE_ADVTXD_PAYLEN_SHIFT); i = tx_ring->next_to_use; while (count--) { tx_buffer_info = &tx_ring->tx_buffer_info[i]; - tx_desc = IXGBE_TX_DESC_ADV(*tx_ring, i); + tx_desc = IXGBEVF_TX_DESC(tx_ring, i); tx_desc->read.buffer_addr = cpu_to_le64(tx_buffer_info->dma); tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type_len | tx_buffer_info->length); @@ -3031,24 +2719,14 @@ static void ixgbevf_tx_queue(struct ixgbevf_adapter *adapter, tx_desc->read.cmd_type_len |= cpu_to_le32(txd_cmd); - /* - * Force memory writes to complete before letting h/w - * know there are new descriptors to fetch. (Only - * applicable for weak-ordered memory model archs, - * such as IA-64). - */ - wmb(); - tx_ring->next_to_use = i; - writel(i, adapter->hw.hw_addr + tx_ring->tail); } -static int __ixgbevf_maybe_stop_tx(struct net_device *netdev, - struct ixgbevf_ring *tx_ring, int size) +static int __ixgbevf_maybe_stop_tx(struct ixgbevf_ring *tx_ring, int size) { - struct ixgbevf_adapter *adapter = netdev_priv(netdev); + struct ixgbevf_adapter *adapter = netdev_priv(tx_ring->netdev); - netif_stop_subqueue(netdev, tx_ring->queue_index); + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); /* Herbert's original patch had: * smp_mb__after_netif_stop_queue(); * but since that doesn't exist yet, just open code it. */ @@ -3060,17 +2738,16 @@ static int __ixgbevf_maybe_stop_tx(struct net_device *netdev, return -EBUSY; /* A reprieve! 
- use start_queue because it doesn't call schedule */ - netif_start_subqueue(netdev, tx_ring->queue_index); + netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index); ++adapter->restart_queue; return 0; } -static int ixgbevf_maybe_stop_tx(struct net_device *netdev, - struct ixgbevf_ring *tx_ring, int size) +static int ixgbevf_maybe_stop_tx(struct ixgbevf_ring *tx_ring, int size) { if (likely(IXGBE_DESC_UNUSED(tx_ring) >= size)) return 0; - return __ixgbevf_maybe_stop_tx(netdev, tx_ring, size); + return __ixgbevf_maybe_stop_tx(tx_ring, size); } static int ixgbevf_xmit_frame(struct sk_buff *skb, struct net_device *netdev) @@ -3081,54 +2758,66 @@ static int ixgbevf_xmit_frame(struct sk_buff *skb, struct net_device *netdev) unsigned int tx_flags = 0; u8 hdr_len = 0; int r_idx = 0, tso; - int count = 0; - - unsigned int f; + u16 count = TXD_USE_COUNT(skb_headlen(skb)); +#if PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD + unsigned short f; +#endif tx_ring = &adapter->tx_ring[r_idx]; + /* + * need: 1 descriptor per page * PAGE_SIZE/IXGBE_MAX_DATA_PER_TXD, + * + 1 desc for skb_headlen/IXGBE_MAX_DATA_PER_TXD, + * + 2 desc gap to keep tail from touching head, + * + 1 desc for context descriptor, + * otherwise try next time + */ +#if PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) + count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size); +#else + count += skb_shinfo(skb)->nr_frags; +#endif + if (ixgbevf_maybe_stop_tx(tx_ring, count + 3)) { + adapter->tx_busy++; + return NETDEV_TX_BUSY; + } + if (vlan_tx_tag_present(skb)) { tx_flags |= vlan_tx_tag_get(skb); tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT; tx_flags |= IXGBE_TX_FLAGS_VLAN; } - /* four things can cause us to need a context descriptor */ - if (skb_is_gso(skb) || - (skb->ip_summed == CHECKSUM_PARTIAL) || - (tx_flags & IXGBE_TX_FLAGS_VLAN)) - count++; - - count += TXD_USE_COUNT(skb_headlen(skb)); - for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) - count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->frags[f])); - - if (ixgbevf_maybe_stop_tx(netdev, tx_ring, count)) { - adapter->tx_busy++; - return NETDEV_TX_BUSY; - } - first = tx_ring->next_to_use; if (skb->protocol == htons(ETH_P_IP)) tx_flags |= IXGBE_TX_FLAGS_IPV4; - tso = ixgbevf_tso(adapter, tx_ring, skb, tx_flags, &hdr_len); + tso = ixgbevf_tso(tx_ring, skb, tx_flags, &hdr_len); if (tso < 0) { dev_kfree_skb_any(skb); return NETDEV_TX_OK; } if (tso) - tx_flags |= IXGBE_TX_FLAGS_TSO; - else if (ixgbevf_tx_csum(adapter, tx_ring, skb, tx_flags) && - (skb->ip_summed == CHECKSUM_PARTIAL)) + tx_flags |= IXGBE_TX_FLAGS_TSO | IXGBE_TX_FLAGS_CSUM; + else if (ixgbevf_tx_csum(tx_ring, skb, tx_flags)) tx_flags |= IXGBE_TX_FLAGS_CSUM; - ixgbevf_tx_queue(adapter, tx_ring, tx_flags, - ixgbevf_tx_map(adapter, tx_ring, skb, tx_flags, first), + ixgbevf_tx_queue(tx_ring, tx_flags, + ixgbevf_tx_map(tx_ring, skb, tx_flags, first), skb->len, hdr_len); + /* + * Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). 
+ */ + wmb(); - ixgbevf_maybe_stop_tx(netdev, tx_ring, DESC_NEEDED); + writel(tx_ring->next_to_use, adapter->hw.hw_addr + tx_ring->tail); + + ixgbevf_maybe_stop_tx(tx_ring, DESC_NEEDED); return NETDEV_TX_OK; } @@ -3152,9 +2841,13 @@ static int ixgbevf_set_mac(struct net_device *netdev, void *p) memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len); memcpy(hw->mac.addr, addr->sa_data, netdev->addr_len); + spin_lock(&adapter->mbx_lock); + if (hw->mac.ops.set_rar) hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0); + spin_unlock(&adapter->mbx_lock); + return 0; } @@ -3211,9 +2904,7 @@ static void ixgbevf_shutdown(struct pci_dev *pdev) ixgbevf_free_all_rx_resources(adapter); } -#ifdef CONFIG_PM pci_save_state(pdev); -#endif pci_disable_device(pdev); } @@ -3256,19 +2947,6 @@ static struct rtnl_link_stats64 *ixgbevf_get_stats(struct net_device *netdev, return stats; } -static int ixgbevf_set_features(struct net_device *netdev, - netdev_features_t features) -{ - struct ixgbevf_adapter *adapter = netdev_priv(netdev); - - if (features & NETIF_F_RXCSUM) - adapter->flags |= IXGBE_FLAG_RX_CSUM_ENABLED; - else - adapter->flags &= ~IXGBE_FLAG_RX_CSUM_ENABLED; - - return 0; -} - static const struct net_device_ops ixgbe_netdev_ops = { .ndo_open = ixgbevf_open, .ndo_stop = ixgbevf_close, @@ -3281,7 +2959,6 @@ static const struct net_device_ops ixgbe_netdev_ops = { .ndo_tx_timeout = ixgbevf_tx_timeout, .ndo_vlan_rx_add_vid = ixgbevf_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = ixgbevf_vlan_rx_kill_vid, - .ndo_set_features = ixgbevf_set_features, }; static void ixgbevf_assign_netdev_ops(struct net_device *dev) @@ -3341,12 +3018,8 @@ static int __devinit ixgbevf_probe(struct pci_dev *pdev, pci_set_master(pdev); -#ifdef HAVE_TX_MQ netdev = alloc_etherdev_mq(sizeof(struct ixgbevf_adapter), MAX_TX_QUEUES); -#else - netdev = alloc_etherdev(sizeof(struct ixgbevf_adapter)); -#endif if (!netdev) { err = -ENOMEM; goto err_alloc_etherdev; @@ -3387,10 +3060,6 @@ static int __devinit ixgbevf_probe(struct pci_dev *pdev, memcpy(&hw->mbx.ops, &ixgbevf_mbx_ops, sizeof(struct ixgbe_mbx_operations)); - adapter->flags &= ~IXGBE_FLAG_RX_PS_CAPABLE; - adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED; - adapter->flags |= IXGBE_FLAG_RX_1BUF_CAPABLE; - /* setup the private structure */ err = ixgbevf_sw_init(adapter); if (err) @@ -3449,8 +3118,6 @@ static int __devinit ixgbevf_probe(struct pci_dev *pdev, if (err) goto err_register; - adapter->netdev_registered = true; - netif_carrier_off(netdev); ixgbevf_init_last_counter_stats(adapter); @@ -3460,8 +3127,6 @@ static int __devinit ixgbevf_probe(struct pci_dev *pdev, hw_dbg(hw, "MAC: %d\n", hw->mac.type); - hw_dbg(hw, "LRO is disabled\n"); - hw_dbg(hw, "Intel(R) 82599 Virtual Function\n"); cards_found++; return 0; @@ -3501,10 +3166,8 @@ static void __devexit ixgbevf_remove(struct pci_dev *pdev) cancel_work_sync(&adapter->reset_task); cancel_work_sync(&adapter->watchdog_task); - if (adapter->netdev_registered) { + if (netdev->reg_state == NETREG_REGISTERED) unregister_netdev(netdev); - adapter->netdev_registered = false; - } ixgbevf_reset_interrupt_capability(adapter); @@ -3521,12 +3184,92 @@ static void __devexit ixgbevf_remove(struct pci_dev *pdev) pci_disable_device(pdev); } +/** + * ixgbevf_io_error_detected - called when PCI error is detected + * @pdev: Pointer to PCI device + * @state: The current pci connection state + * + * This function is called after a PCI bus error affecting + * this device has been detected. 
+ */ +static pci_ers_result_t ixgbevf_io_error_detected(struct pci_dev *pdev, + pci_channel_state_t state) +{ + struct net_device *netdev = pci_get_drvdata(pdev); + struct ixgbevf_adapter *adapter = netdev_priv(netdev); + + netif_device_detach(netdev); + + if (state == pci_channel_io_perm_failure) + return PCI_ERS_RESULT_DISCONNECT; + + if (netif_running(netdev)) + ixgbevf_down(adapter); + + pci_disable_device(pdev); + + /* Request a slot slot reset. */ + return PCI_ERS_RESULT_NEED_RESET; +} + +/** + * ixgbevf_io_slot_reset - called after the pci bus has been reset. + * @pdev: Pointer to PCI device + * + * Restart the card from scratch, as if from a cold-boot. Implementation + * resembles the first-half of the ixgbevf_resume routine. + */ +static pci_ers_result_t ixgbevf_io_slot_reset(struct pci_dev *pdev) +{ + struct net_device *netdev = pci_get_drvdata(pdev); + struct ixgbevf_adapter *adapter = netdev_priv(netdev); + + if (pci_enable_device_mem(pdev)) { + dev_err(&pdev->dev, + "Cannot re-enable PCI device after reset.\n"); + return PCI_ERS_RESULT_DISCONNECT; + } + + pci_set_master(pdev); + + ixgbevf_reset(adapter); + + return PCI_ERS_RESULT_RECOVERED; +} + +/** + * ixgbevf_io_resume - called when traffic can start flowing again. + * @pdev: Pointer to PCI device + * + * This callback is called when the error recovery driver tells us that + * its OK to resume normal operation. Implementation resembles the + * second-half of the ixgbevf_resume routine. + */ +static void ixgbevf_io_resume(struct pci_dev *pdev) +{ + struct net_device *netdev = pci_get_drvdata(pdev); + struct ixgbevf_adapter *adapter = netdev_priv(netdev); + + if (netif_running(netdev)) + ixgbevf_up(adapter); + + netif_device_attach(netdev); +} + +/* PCI Error Recovery (ERS) */ +static struct pci_error_handlers ixgbevf_err_handler = { + .error_detected = ixgbevf_io_error_detected, + .slot_reset = ixgbevf_io_slot_reset, + .resume = ixgbevf_io_resume, +}; + static struct pci_driver ixgbevf_driver = { .name = ixgbevf_driver_name, .id_table = ixgbevf_pci_tbl, .probe = ixgbevf_probe, .remove = __devexit_p(ixgbevf_remove), .shutdown = ixgbevf_shutdown, + .err_handler = &ixgbevf_err_handler }; /** diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c index 4ea6580d3ae8..c911d883c27e 100644 --- a/drivers/net/ethernet/jme.c +++ b/drivers/net/ethernet/jme.c @@ -2743,6 +2743,17 @@ jme_set_features(struct net_device *netdev, netdev_features_t features) return 0; } +#ifdef CONFIG_NET_POLL_CONTROLLER +static void jme_netpoll(struct net_device *dev) +{ + unsigned long flags; + + local_irq_save(flags); + jme_intr(dev->irq, dev); + local_irq_restore(flags); +} +#endif + static int jme_nway_reset(struct net_device *netdev) { @@ -2944,6 +2955,9 @@ static const struct net_device_ops jme_netdev_ops = { .ndo_tx_timeout = jme_tx_timeout, .ndo_fix_features = jme_fix_features, .ndo_set_features = jme_set_features, +#ifdef CONFIG_NET_POLL_CONTROLLER + .ndo_poll_controller = jme_netpoll, +#endif }; static int __devinit diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c index 5dc9cbd51514..003c5bc7189f 100644 --- a/drivers/net/ethernet/lantiq_etop.c +++ b/drivers/net/ethernet/lantiq_etop.c @@ -149,7 +149,6 @@ ltq_etop_hw_receive(struct ltq_etop_chan *ch) spin_unlock_irqrestore(&priv->lock, flags); skb_put(skb, len); - skb->dev = ch->netdev; skb->protocol = eth_type_trans(skb, ch->netdev); netif_receive_skb(skb); } @@ -646,7 +645,7 @@ ltq_etop_init(struct net_device *dev) memcpy(&mac, &priv->pldata->mac, 
sizeof(struct sockaddr)); if (!is_valid_ether_addr(mac.sa_data)) { pr_warn("etop: invalid MAC, using random\n"); - random_ether_addr(mac.sa_data); + eth_random_addr(mac.sa_data); random_mac = true; } diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c index f0f06b2bc28b..770ee557924c 100644 --- a/drivers/net/ethernet/marvell/mv643xx_eth.c +++ b/drivers/net/ethernet/marvell/mv643xx_eth.c @@ -1896,7 +1896,7 @@ static int rxq_init(struct mv643xx_eth_private *mp, int index) goto out_free; } - rx_desc = (struct rx_desc *)rxq->rx_desc_area; + rx_desc = rxq->rx_desc_area; for (i = 0; i < rxq->rx_ring_size; i++) { int nexti; @@ -2001,7 +2001,7 @@ static int txq_init(struct mv643xx_eth_private *mp, int index) txq->tx_desc_area_size = size; - tx_desc = (struct tx_desc *)txq->tx_desc_area; + tx_desc = txq->tx_desc_area; for (i = 0; i < txq->tx_ring_size; i++) { struct tx_desc *txd = tx_desc + i; int nexti; diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c index 1db023b075a1..59489722e898 100644 --- a/drivers/net/ethernet/marvell/pxa168_eth.c +++ b/drivers/net/ethernet/marvell/pxa168_eth.c @@ -1032,7 +1032,7 @@ static int rxq_init(struct net_device *dev) } memset((void *)pep->p_rx_desc_area, 0, size); /* initialize the next_desc_ptr links in the Rx descriptors ring */ - p_rx_desc = (struct rx_desc *)pep->p_rx_desc_area; + p_rx_desc = pep->p_rx_desc_area; for (i = 0; i < rx_desc_num; i++) { p_rx_desc[i].next_desc_ptr = pep->rx_desc_dma + ((i + 1) % rx_desc_num) * sizeof(struct rx_desc); @@ -1095,7 +1095,7 @@ static int txq_init(struct net_device *dev) } memset((void *)pep->p_tx_desc_area, 0, pep->tx_desc_area_size); /* Initialize the next_desc_ptr links in the Tx descriptors ring */ - p_tx_desc = (struct tx_desc *)pep->p_tx_desc_area; + p_tx_desc = pep->p_tx_desc_area; for (i = 0; i < tx_desc_num; i++) { p_tx_desc[i].next_desc_ptr = pep->tx_desc_dma + ((i + 1) % tx_desc_num) * sizeof(struct tx_desc); diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c index 28a54451a3e5..2b0748dba8b8 100644 --- a/drivers/net/ethernet/marvell/sky2.c +++ b/drivers/net/ethernet/marvell/sky2.c @@ -141,6 +141,7 @@ static DEFINE_PCI_DEVICE_TABLE(sky2_id_table) = { { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */ + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */ { 0 } }; @@ -3079,8 +3080,10 @@ static irqreturn_t sky2_intr(int irq, void *dev_id) /* Reading this mask interrupts as side effect */ status = sky2_read32(hw, B0_Y2_SP_ISRC2); - if (status == 0 || status == ~0) + if (status == 0 || status == ~0) { + sky2_write32(hw, B0_Y2_SP_ICR, 2); return IRQ_NONE; + } prefetch(&hw->st_le[hw->st_idx]); @@ -3349,6 +3352,17 @@ static void sky2_reset(struct sky2_hw *hw) sky2_pci_write16(hw, pdev->pcie_cap + PCI_EXP_LNKCTL, reg); + if (hw->chip_id == CHIP_ID_YUKON_PRM && + hw->chip_rev == CHIP_REV_YU_PRM_A0) { + /* change PHY Interrupt polarity to low active */ + reg = sky2_read16(hw, GPHY_CTRL); + sky2_write16(hw, GPHY_CTRL, reg | GPC_INTPOL); + + /* adapt HW for low active PHY Interrupt */ + reg = sky2_read16(hw, Y2_CFG_SPC + PCI_LDO_CTRL); + sky2_write16(hw, Y2_CFG_SPC + PCI_LDO_CTRL, reg | PHY_M_UNDOC1); + } + sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF); /* re-enable PEX PM in PEX PHY debug reg. 
8 (clear bit 12) */ @@ -4871,7 +4885,7 @@ static const char *sky2_name(u8 chipid, char *buf, int sz) "UL 2", /* 0xba */ "Unknown", /* 0xbb */ "Optima", /* 0xbc */ - "Optima Prime", /* 0xbd */ + "OptimaEEE", /* 0xbd */ "Optima 2", /* 0xbe */ }; diff --git a/drivers/net/ethernet/marvell/sky2.h b/drivers/net/ethernet/marvell/sky2.h index 3c896ce80b71..615ac63ea860 100644 --- a/drivers/net/ethernet/marvell/sky2.h +++ b/drivers/net/ethernet/marvell/sky2.h @@ -23,6 +23,7 @@ enum { PSM_CONFIG_REG3 = 0x164, PSM_CONFIG_REG4 = 0x168, + PCI_LDO_CTRL = 0xbc, }; /* Yukon-2 */ @@ -586,6 +587,10 @@ enum yukon_supr_rev { CHIP_REV_YU_SU_B1 = 3, }; +enum yukon_prm_rev { + CHIP_REV_YU_PRM_Z1 = 1, + CHIP_REV_YU_PRM_A0 = 2, +}; /* B2_Y2_CLK_GATE 8 bit Clock Gating (Yukon-2 only) */ enum { diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c index 842c8ce9494e..7e94987d030c 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c @@ -1080,6 +1080,25 @@ static struct mlx4_cmd_info cmd_info[] = { .verify = NULL, .wrapper = NULL }, + /* flow steering commands */ + { + .opcode = MLX4_QP_FLOW_STEERING_ATTACH, + .has_inbox = true, + .has_outbox = false, + .out_is_imm = true, + .encode_slave_id = false, + .verify = NULL, + .wrapper = mlx4_QP_FLOW_STEERING_ATTACH_wrapper + }, + { + .opcode = MLX4_QP_FLOW_STEERING_DETACH, + .has_inbox = false, + .has_outbox = false, + .out_is_imm = false, + .encode_slave_id = false, + .verify = NULL, + .wrapper = mlx4_QP_FLOW_STEERING_DETACH_wrapper + }, }; static int mlx4_master_process_vhcr(struct mlx4_dev *dev, int slave, diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c index 908a460d8db6..aa9c2f6cf3c0 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c @@ -77,6 +77,12 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq, struct mlx4_en_dev *mdev = priv->mdev; int err = 0; char name[25]; + struct cpu_rmap *rmap = +#ifdef CONFIG_RFS_ACCEL + priv->dev->rx_cpu_rmap; +#else + NULL; +#endif cq->dev = mdev->pndev[priv->port]; cq->mcq.set_ci_db = cq->wqres.db.db; @@ -91,7 +97,8 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq, sprintf(name, "%s-%d", priv->dev->name, cq->ring); /* Set IRQ for specific name (per ring) */ - if (mlx4_assign_eq(mdev->dev, name, &cq->vector)) { + if (mlx4_assign_eq(mdev->dev, name, rmap, + &cq->vector)) { cq->vector = (cq->ring + 1 + priv->port) % mdev->dev->caps.num_comp_vectors; mlx4_warn(mdev, "Failed Assigning an EQ to " diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c index 72901ce2b088..9d0b88eea02b 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c @@ -34,10 +34,14 @@ #include <linux/kernel.h> #include <linux/ethtool.h> #include <linux/netdevice.h> +#include <linux/mlx4/driver.h> #include "mlx4_en.h" #include "en_port.h" +#define EN_ETHTOOL_QP_ATTACH (1ull << 63) +#define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff) +#define EN_ETHTOOL_WORD_MASK cpu_to_be32(0xffffffff) static void mlx4_en_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo) @@ -599,16 +603,369 @@ static int mlx4_en_set_rxfh_indir(struct net_device *dev, return err; } +#define all_zeros_or_all_ones(field) \ + ((field) == 0 || (field) == (__force typeof(field))-1) + +static int mlx4_en_validate_flow(struct net_device 
*dev, + struct ethtool_rxnfc *cmd) +{ + struct ethtool_usrip4_spec *l3_mask; + struct ethtool_tcpip4_spec *l4_mask; + struct ethhdr *eth_mask; + u64 full_mac = ~0ull; + u64 zero_mac = 0; + + if (cmd->fs.location >= MAX_NUM_OF_FS_RULES) + return -EINVAL; + + switch (cmd->fs.flow_type & ~FLOW_EXT) { + case TCP_V4_FLOW: + case UDP_V4_FLOW: + if (cmd->fs.m_u.tcp_ip4_spec.tos) + return -EINVAL; + l4_mask = &cmd->fs.m_u.tcp_ip4_spec; + /* don't allow mask which isn't all 0 or 1 */ + if (!all_zeros_or_all_ones(l4_mask->ip4src) || + !all_zeros_or_all_ones(l4_mask->ip4dst) || + !all_zeros_or_all_ones(l4_mask->psrc) || + !all_zeros_or_all_ones(l4_mask->pdst)) + return -EINVAL; + break; + case IP_USER_FLOW: + l3_mask = &cmd->fs.m_u.usr_ip4_spec; + if (l3_mask->l4_4_bytes || l3_mask->tos || l3_mask->proto || + cmd->fs.h_u.usr_ip4_spec.ip_ver != ETH_RX_NFC_IP4 || + (!l3_mask->ip4src && !l3_mask->ip4dst) || + !all_zeros_or_all_ones(l3_mask->ip4src) || + !all_zeros_or_all_ones(l3_mask->ip4dst)) + return -EINVAL; + break; + case ETHER_FLOW: + eth_mask = &cmd->fs.m_u.ether_spec; + /* source mac mask must not be set */ + if (memcmp(eth_mask->h_source, &zero_mac, ETH_ALEN)) + return -EINVAL; + + /* dest mac mask must be ff:ff:ff:ff:ff:ff */ + if (memcmp(eth_mask->h_dest, &full_mac, ETH_ALEN)) + return -EINVAL; + + if (!all_zeros_or_all_ones(eth_mask->h_proto)) + return -EINVAL; + break; + default: + return -EINVAL; + } + + if ((cmd->fs.flow_type & FLOW_EXT)) { + if (cmd->fs.m_ext.vlan_etype || + !(cmd->fs.m_ext.vlan_tci == 0 || + cmd->fs.m_ext.vlan_tci == cpu_to_be16(0xfff))) + return -EINVAL; + } + + return 0; +} + +static int add_ip_rule(struct mlx4_en_priv *priv, + struct ethtool_rxnfc *cmd, + struct list_head *list_h) +{ + struct mlx4_spec_list *spec_l3; + struct ethtool_usrip4_spec *l3_mask = &cmd->fs.m_u.usr_ip4_spec; + + spec_l3 = kzalloc(sizeof *spec_l3, GFP_KERNEL); + if (!spec_l3) { + en_err(priv, "Fail to alloc ethtool rule.\n"); + return -ENOMEM; + } + + spec_l3->id = MLX4_NET_TRANS_RULE_ID_IPV4; + spec_l3->ipv4.src_ip = cmd->fs.h_u.usr_ip4_spec.ip4src; + if (l3_mask->ip4src) + spec_l3->ipv4.src_ip_msk = EN_ETHTOOL_WORD_MASK; + spec_l3->ipv4.dst_ip = cmd->fs.h_u.usr_ip4_spec.ip4dst; + if (l3_mask->ip4dst) + spec_l3->ipv4.dst_ip_msk = EN_ETHTOOL_WORD_MASK; + list_add_tail(&spec_l3->list, list_h); + + return 0; +} + +static int add_tcp_udp_rule(struct mlx4_en_priv *priv, + struct ethtool_rxnfc *cmd, + struct list_head *list_h, int proto) +{ + struct mlx4_spec_list *spec_l3; + struct mlx4_spec_list *spec_l4; + struct ethtool_tcpip4_spec *l4_mask = &cmd->fs.m_u.tcp_ip4_spec; + + spec_l3 = kzalloc(sizeof *spec_l3, GFP_KERNEL); + spec_l4 = kzalloc(sizeof *spec_l4, GFP_KERNEL); + if (!spec_l4 || !spec_l3) { + en_err(priv, "Fail to alloc ethtool rule.\n"); + kfree(spec_l3); + kfree(spec_l4); + return -ENOMEM; + } + + spec_l3->id = MLX4_NET_TRANS_RULE_ID_IPV4; + + if (proto == TCP_V4_FLOW) { + spec_l4->id = MLX4_NET_TRANS_RULE_ID_TCP; + spec_l3->ipv4.src_ip = cmd->fs.h_u.tcp_ip4_spec.ip4src; + spec_l3->ipv4.dst_ip = cmd->fs.h_u.tcp_ip4_spec.ip4dst; + spec_l4->tcp_udp.src_port = cmd->fs.h_u.tcp_ip4_spec.psrc; + spec_l4->tcp_udp.dst_port = cmd->fs.h_u.tcp_ip4_spec.pdst; + } else { + spec_l4->id = MLX4_NET_TRANS_RULE_ID_UDP; + spec_l3->ipv4.src_ip = cmd->fs.h_u.udp_ip4_spec.ip4src; + spec_l3->ipv4.dst_ip = cmd->fs.h_u.udp_ip4_spec.ip4dst; + spec_l4->tcp_udp.src_port = cmd->fs.h_u.udp_ip4_spec.psrc; + spec_l4->tcp_udp.dst_port = cmd->fs.h_u.udp_ip4_spec.pdst; + } + + if (l4_mask->ip4src) + 
spec_l3->ipv4.src_ip_msk = EN_ETHTOOL_WORD_MASK; + if (l4_mask->ip4dst) + spec_l3->ipv4.dst_ip_msk = EN_ETHTOOL_WORD_MASK; + + if (l4_mask->psrc) + spec_l4->tcp_udp.src_port_msk = EN_ETHTOOL_SHORT_MASK; + if (l4_mask->pdst) + spec_l4->tcp_udp.dst_port_msk = EN_ETHTOOL_SHORT_MASK; + + list_add_tail(&spec_l3->list, list_h); + list_add_tail(&spec_l4->list, list_h); + + return 0; +} + +static int mlx4_en_ethtool_to_net_trans_rule(struct net_device *dev, + struct ethtool_rxnfc *cmd, + struct list_head *rule_list_h) +{ + int err; + u64 mac; + __be64 be_mac; + struct ethhdr *eth_spec; + struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_spec_list *spec_l2; + __be64 mac_msk = cpu_to_be64(MLX4_MAC_MASK << 16); + + err = mlx4_en_validate_flow(dev, cmd); + if (err) + return err; + + spec_l2 = kzalloc(sizeof *spec_l2, GFP_KERNEL); + if (!spec_l2) + return -ENOMEM; + + mac = priv->mac & MLX4_MAC_MASK; + be_mac = cpu_to_be64(mac << 16); + + spec_l2->id = MLX4_NET_TRANS_RULE_ID_ETH; + memcpy(spec_l2->eth.dst_mac_msk, &mac_msk, ETH_ALEN); + if ((cmd->fs.flow_type & ~FLOW_EXT) != ETHER_FLOW) + memcpy(spec_l2->eth.dst_mac, &be_mac, ETH_ALEN); + + if ((cmd->fs.flow_type & FLOW_EXT) && cmd->fs.m_ext.vlan_tci) { + spec_l2->eth.vlan_id = cmd->fs.h_ext.vlan_tci; + spec_l2->eth.vlan_id_msk = cpu_to_be16(0xfff); + } + + list_add_tail(&spec_l2->list, rule_list_h); + + switch (cmd->fs.flow_type & ~FLOW_EXT) { + case ETHER_FLOW: + eth_spec = &cmd->fs.h_u.ether_spec; + memcpy(&spec_l2->eth.dst_mac, eth_spec->h_dest, ETH_ALEN); + spec_l2->eth.ether_type = eth_spec->h_proto; + if (eth_spec->h_proto) + spec_l2->eth.ether_type_enable = 1; + break; + case IP_USER_FLOW: + err = add_ip_rule(priv, cmd, rule_list_h); + break; + case TCP_V4_FLOW: + err = add_tcp_udp_rule(priv, cmd, rule_list_h, TCP_V4_FLOW); + break; + case UDP_V4_FLOW: + err = add_tcp_udp_rule(priv, cmd, rule_list_h, UDP_V4_FLOW); + break; + } + + return err; +} + +static int mlx4_en_flow_replace(struct net_device *dev, + struct ethtool_rxnfc *cmd) +{ + int err; + struct mlx4_en_priv *priv = netdev_priv(dev); + struct ethtool_flow_id *loc_rule; + struct mlx4_spec_list *spec, *tmp_spec; + u32 qpn; + u64 reg_id; + + struct mlx4_net_trans_rule rule = { + .queue_mode = MLX4_NET_TRANS_Q_FIFO, + .exclusive = 0, + .allow_loopback = 1, + .promisc_mode = MLX4_FS_PROMISC_NONE, + }; + + rule.port = priv->port; + rule.priority = MLX4_DOMAIN_ETHTOOL | cmd->fs.location; + INIT_LIST_HEAD(&rule.list); + + /* Allow direct QP attaches if the EN_ETHTOOL_QP_ATTACH flag is set */ + if (cmd->fs.ring_cookie == RX_CLS_FLOW_DISC) + qpn = priv->drop_qp.qpn; + else if (cmd->fs.ring_cookie & EN_ETHTOOL_QP_ATTACH) { + qpn = cmd->fs.ring_cookie & (EN_ETHTOOL_QP_ATTACH - 1); + } else { + if (cmd->fs.ring_cookie >= priv->rx_ring_num) { + en_warn(priv, "rxnfc: RX ring (%llu) doesn't exist.\n", + cmd->fs.ring_cookie); + return -EINVAL; + } + qpn = priv->rss_map.qps[cmd->fs.ring_cookie].qpn; + if (!qpn) { + en_warn(priv, "rxnfc: RX ring (%llu) is inactive.\n", + cmd->fs.ring_cookie); + return -EINVAL; + } + } + rule.qpn = qpn; + err = mlx4_en_ethtool_to_net_trans_rule(dev, cmd, &rule.list); + if (err) + goto out_free_list; + + loc_rule = &priv->ethtool_rules[cmd->fs.location]; + if (loc_rule->id) { + err = mlx4_flow_detach(priv->mdev->dev, loc_rule->id); + if (err) { + en_err(priv, "Fail to detach network rule at location %d. 
registration id = %llx\n", + cmd->fs.location, loc_rule->id); + goto out_free_list; + } + loc_rule->id = 0; + memset(&loc_rule->flow_spec, 0, + sizeof(struct ethtool_rx_flow_spec)); + } + err = mlx4_flow_attach(priv->mdev->dev, &rule, &reg_id); + if (err) { + en_err(priv, "Fail to attach network rule at location %d.\n", + cmd->fs.location); + goto out_free_list; + } + loc_rule->id = reg_id; + memcpy(&loc_rule->flow_spec, &cmd->fs, + sizeof(struct ethtool_rx_flow_spec)); + +out_free_list: + list_for_each_entry_safe(spec, tmp_spec, &rule.list, list) { + list_del(&spec->list); + kfree(spec); + } + return err; +} + +static int mlx4_en_flow_detach(struct net_device *dev, + struct ethtool_rxnfc *cmd) +{ + int err = 0; + struct ethtool_flow_id *rule; + struct mlx4_en_priv *priv = netdev_priv(dev); + + if (cmd->fs.location >= MAX_NUM_OF_FS_RULES) + return -EINVAL; + + rule = &priv->ethtool_rules[cmd->fs.location]; + if (!rule->id) { + err = -ENOENT; + goto out; + } + + err = mlx4_flow_detach(priv->mdev->dev, rule->id); + if (err) { + en_err(priv, "Fail to detach network rule at location %d. registration id = 0x%llx\n", + cmd->fs.location, rule->id); + goto out; + } + rule->id = 0; + memset(&rule->flow_spec, 0, sizeof(struct ethtool_rx_flow_spec)); +out: + return err; + +} + +static int mlx4_en_get_flow(struct net_device *dev, struct ethtool_rxnfc *cmd, + int loc) +{ + int err = 0; + struct ethtool_flow_id *rule; + struct mlx4_en_priv *priv = netdev_priv(dev); + + if (loc < 0 || loc >= MAX_NUM_OF_FS_RULES) + return -EINVAL; + + rule = &priv->ethtool_rules[loc]; + if (rule->id) + memcpy(&cmd->fs, &rule->flow_spec, + sizeof(struct ethtool_rx_flow_spec)); + else + err = -ENOENT; + + return err; +} + +static int mlx4_en_get_num_flows(struct mlx4_en_priv *priv) +{ + + int i, res = 0; + for (i = 0; i < MAX_NUM_OF_FS_RULES; i++) { + if (priv->ethtool_rules[i].id) + res++; + } + return res; + +} + static int mlx4_en_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd, u32 *rule_locs) { struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_en_dev *mdev = priv->mdev; int err = 0; + int i = 0, priority = 0; + + if ((cmd->cmd == ETHTOOL_GRXCLSRLCNT || + cmd->cmd == ETHTOOL_GRXCLSRULE || + cmd->cmd == ETHTOOL_GRXCLSRLALL) && + mdev->dev->caps.steering_mode != MLX4_STEERING_MODE_DEVICE_MANAGED) + return -EINVAL; switch (cmd->cmd) { case ETHTOOL_GRXRINGS: cmd->data = priv->rx_ring_num; break; + case ETHTOOL_GRXCLSRLCNT: + cmd->rule_cnt = mlx4_en_get_num_flows(priv); + break; + case ETHTOOL_GRXCLSRULE: + err = mlx4_en_get_flow(dev, cmd, cmd->fs.location); + break; + case ETHTOOL_GRXCLSRLALL: + while ((!err || err == -ENOENT) && priority < cmd->rule_cnt) { + err = mlx4_en_get_flow(dev, cmd, i); + if (!err) + rule_locs[priority++] = i; + i++; + } + err = 0; + break; default: err = -EOPNOTSUPP; break; @@ -617,6 +974,30 @@ static int mlx4_en_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd, return err; } +static int mlx4_en_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd) +{ + int err = 0; + struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_en_dev *mdev = priv->mdev; + + if (mdev->dev->caps.steering_mode != MLX4_STEERING_MODE_DEVICE_MANAGED) + return -EINVAL; + + switch (cmd->cmd) { + case ETHTOOL_SRXCLSRLINS: + err = mlx4_en_flow_replace(dev, cmd); + break; + case ETHTOOL_SRXCLSRLDEL: + err = mlx4_en_flow_detach(dev, cmd); + break; + default: + en_warn(priv, "Unsupported ethtool command.
(%d)\n", cmd->cmd); + return -EINVAL; + } + + return err; +} + const struct ethtool_ops mlx4_en_ethtool_ops = { .get_drvinfo = mlx4_en_get_drvinfo, .get_settings = mlx4_en_get_settings, @@ -637,6 +1018,7 @@ const struct ethtool_ops mlx4_en_ethtool_ops = { .get_ringparam = mlx4_en_get_ringparam, .set_ringparam = mlx4_en_set_ringparam, .get_rxnfc = mlx4_en_get_rxnfc, + .set_rxnfc = mlx4_en_set_rxnfc, .get_rxfh_indir_size = mlx4_en_get_rxfh_indir_size, .get_rxfh_indir = mlx4_en_get_rxfh_indir, .set_rxfh_indir = mlx4_en_set_rxfh_indir, diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 073b85b45fc5..8864d8b53737 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -36,6 +36,8 @@ #include <linux/if_vlan.h> #include <linux/delay.h> #include <linux/slab.h> +#include <linux/hash.h> +#include <net/ip.h> #include <linux/mlx4/driver.h> #include <linux/mlx4/device.h> @@ -66,6 +68,299 @@ static int mlx4_en_setup_tc(struct net_device *dev, u8 up) return 0; } +#ifdef CONFIG_RFS_ACCEL + +struct mlx4_en_filter { + struct list_head next; + struct work_struct work; + + __be32 src_ip; + __be32 dst_ip; + __be16 src_port; + __be16 dst_port; + + int rxq_index; + struct mlx4_en_priv *priv; + u32 flow_id; /* RFS infrastructure id */ + int id; /* mlx4_en driver id */ + u64 reg_id; /* Flow steering API id */ + u8 activated; /* Used to prevent expiry before filter + * is attached + */ + struct hlist_node filter_chain; +}; + +static void mlx4_en_filter_rfs_expire(struct mlx4_en_priv *priv); + +static void mlx4_en_filter_work(struct work_struct *work) +{ + struct mlx4_en_filter *filter = container_of(work, + struct mlx4_en_filter, + work); + struct mlx4_en_priv *priv = filter->priv; + struct mlx4_spec_list spec_tcp = { + .id = MLX4_NET_TRANS_RULE_ID_TCP, + { + .tcp_udp = { + .dst_port = filter->dst_port, + .dst_port_msk = (__force __be16)-1, + .src_port = filter->src_port, + .src_port_msk = (__force __be16)-1, + }, + }, + }; + struct mlx4_spec_list spec_ip = { + .id = MLX4_NET_TRANS_RULE_ID_IPV4, + { + .ipv4 = { + .dst_ip = filter->dst_ip, + .dst_ip_msk = (__force __be32)-1, + .src_ip = filter->src_ip, + .src_ip_msk = (__force __be32)-1, + }, + }, + }; + struct mlx4_spec_list spec_eth = { + .id = MLX4_NET_TRANS_RULE_ID_ETH, + }; + struct mlx4_net_trans_rule rule = { + .list = LIST_HEAD_INIT(rule.list), + .queue_mode = MLX4_NET_TRANS_Q_LIFO, + .exclusive = 1, + .allow_loopback = 1, + .promisc_mode = MLX4_FS_PROMISC_NONE, + .port = priv->port, + .priority = MLX4_DOMAIN_RFS, + }; + int rc; + __be64 mac; + __be64 mac_mask = cpu_to_be64(MLX4_MAC_MASK << 16); + + list_add_tail(&spec_eth.list, &rule.list); + list_add_tail(&spec_ip.list, &rule.list); + list_add_tail(&spec_tcp.list, &rule.list); + + mac = cpu_to_be64((priv->mac & MLX4_MAC_MASK) << 16); + + rule.qpn = priv->rss_map.qps[filter->rxq_index].qpn; + memcpy(spec_eth.eth.dst_mac, &mac, ETH_ALEN); + memcpy(spec_eth.eth.dst_mac_msk, &mac_mask, ETH_ALEN); + + filter->activated = 0; + + if (filter->reg_id) { + rc = mlx4_flow_detach(priv->mdev->dev, filter->reg_id); + if (rc && rc != -ENOENT) + en_err(priv, "Error detaching flow. rc = %d\n", rc); + } + + rc = mlx4_flow_attach(priv->mdev->dev, &rule, &filter->reg_id); + if (rc) + en_err(priv, "Error attaching flow. 
err = %d\n", rc); + + mlx4_en_filter_rfs_expire(priv); + + filter->activated = 1; +} + +static inline struct hlist_head * +filter_hash_bucket(struct mlx4_en_priv *priv, __be32 src_ip, __be32 dst_ip, + __be16 src_port, __be16 dst_port) +{ + unsigned long l; + int bucket_idx; + + l = (__force unsigned long)src_port | + ((__force unsigned long)dst_port << 2); + l ^= (__force unsigned long)(src_ip ^ dst_ip); + + bucket_idx = hash_long(l, MLX4_EN_FILTER_HASH_SHIFT); + + return &priv->filter_hash[bucket_idx]; +} + +static struct mlx4_en_filter * +mlx4_en_filter_alloc(struct mlx4_en_priv *priv, int rxq_index, __be32 src_ip, + __be32 dst_ip, __be16 src_port, __be16 dst_port, + u32 flow_id) +{ + struct mlx4_en_filter *filter = NULL; + + filter = kzalloc(sizeof(struct mlx4_en_filter), GFP_ATOMIC); + if (!filter) + return NULL; + + filter->priv = priv; + filter->rxq_index = rxq_index; + INIT_WORK(&filter->work, mlx4_en_filter_work); + + filter->src_ip = src_ip; + filter->dst_ip = dst_ip; + filter->src_port = src_port; + filter->dst_port = dst_port; + + filter->flow_id = flow_id; + + filter->id = priv->last_filter_id++; + + list_add_tail(&filter->next, &priv->filters); + hlist_add_head(&filter->filter_chain, + filter_hash_bucket(priv, src_ip, dst_ip, src_port, + dst_port)); + + return filter; +} + +static void mlx4_en_filter_free(struct mlx4_en_filter *filter) +{ + struct mlx4_en_priv *priv = filter->priv; + int rc; + + list_del(&filter->next); + + rc = mlx4_flow_detach(priv->mdev->dev, filter->reg_id); + if (rc && rc != -ENOENT) + en_err(priv, "Error detaching flow. rc = %d\n", rc); + + kfree(filter); +} + +static inline struct mlx4_en_filter * +mlx4_en_filter_find(struct mlx4_en_priv *priv, __be32 src_ip, __be32 dst_ip, + __be16 src_port, __be16 dst_port) +{ + struct hlist_node *elem; + struct mlx4_en_filter *filter; + struct mlx4_en_filter *ret = NULL; + + hlist_for_each_entry(filter, elem, + filter_hash_bucket(priv, src_ip, dst_ip, + src_port, dst_port), + filter_chain) { + if (filter->src_ip == src_ip && + filter->dst_ip == dst_ip && + filter->src_port == src_port && + filter->dst_port == dst_port) { + ret = filter; + break; + } + } + + return ret; +} + +static int +mlx4_en_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb, + u16 rxq_index, u32 flow_id) +{ + struct mlx4_en_priv *priv = netdev_priv(net_dev); + struct mlx4_en_filter *filter; + const struct iphdr *ip; + const __be16 *ports; + __be32 src_ip; + __be32 dst_ip; + __be16 src_port; + __be16 dst_port; + int nhoff = skb_network_offset(skb); + int ret = 0; + + if (skb->protocol != htons(ETH_P_IP)) + return -EPROTONOSUPPORT; + + ip = (const struct iphdr *)(skb->data + nhoff); + if (ip_is_fragment(ip)) + return -EPROTONOSUPPORT; + + ports = (const __be16 *)(skb->data + nhoff + 4 * ip->ihl); + + src_ip = ip->saddr; + dst_ip = ip->daddr; + src_port = ports[0]; + dst_port = ports[1]; + + if (ip->protocol != IPPROTO_TCP) + return -EPROTONOSUPPORT; + + spin_lock_bh(&priv->filters_lock); + filter = mlx4_en_filter_find(priv, src_ip, dst_ip, src_port, dst_port); + if (filter) { + if (filter->rxq_index == rxq_index) + goto out; + + filter->rxq_index = rxq_index; + } else { + filter = mlx4_en_filter_alloc(priv, rxq_index, + src_ip, dst_ip, + src_port, dst_port, flow_id); + if (!filter) { + ret = -ENOMEM; + goto err; + } + } + + queue_work(priv->mdev->workqueue, &filter->work); + +out: + ret = filter->id; +err: + spin_unlock_bh(&priv->filters_lock); + + return ret; +} + +void mlx4_en_cleanup_filters(struct mlx4_en_priv *priv, + struct 
mlx4_en_rx_ring *rx_ring) +{ + struct mlx4_en_filter *filter, *tmp; + LIST_HEAD(del_list); + + spin_lock_bh(&priv->filters_lock); + list_for_each_entry_safe(filter, tmp, &priv->filters, next) { + list_move(&filter->next, &del_list); + hlist_del(&filter->filter_chain); + } + spin_unlock_bh(&priv->filters_lock); + + list_for_each_entry_safe(filter, tmp, &del_list, next) { + cancel_work_sync(&filter->work); + mlx4_en_filter_free(filter); + } +} + +static void mlx4_en_filter_rfs_expire(struct mlx4_en_priv *priv) +{ + struct mlx4_en_filter *filter = NULL, *tmp, *last_filter = NULL; + LIST_HEAD(del_list); + int i = 0; + + spin_lock_bh(&priv->filters_lock); + list_for_each_entry_safe(filter, tmp, &priv->filters, next) { + if (i > MLX4_EN_FILTER_EXPIRY_QUOTA) + break; + + if (filter->activated && + !work_pending(&filter->work) && + rps_may_expire_flow(priv->dev, + filter->rxq_index, filter->flow_id, + filter->id)) { + list_move(&filter->next, &del_list); + hlist_del(&filter->filter_chain); + } else + last_filter = filter; + + i++; + } + + if (last_filter && (&last_filter->next != priv->filters.next)) + list_move(&priv->filters, &last_filter->next); + + spin_unlock_bh(&priv->filters_lock); + + list_for_each_entry_safe(filter, tmp, &del_list, next) + mlx4_en_filter_free(filter); +} +#endif + static int mlx4_en_vlan_rx_add_vid(struct net_device *dev, unsigned short vid) { struct mlx4_en_priv *priv = netdev_priv(dev); @@ -170,33 +465,81 @@ static void mlx4_en_do_set_mac(struct work_struct *work) static void mlx4_en_clear_list(struct net_device *dev) { struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_en_mc_list *tmp, *mc_to_del; - kfree(priv->mc_addrs); - priv->mc_addrs = NULL; - priv->mc_addrs_cnt = 0; + list_for_each_entry_safe(mc_to_del, tmp, &priv->mc_list, list) { + list_del(&mc_to_del->list); + kfree(mc_to_del); + } } static void mlx4_en_cache_mclist(struct net_device *dev) { struct mlx4_en_priv *priv = netdev_priv(dev); struct netdev_hw_addr *ha; - char *mc_addrs; - int mc_addrs_cnt = netdev_mc_count(dev); - int i; + struct mlx4_en_mc_list *tmp; - mc_addrs = kmalloc(mc_addrs_cnt * ETH_ALEN, GFP_ATOMIC); - if (!mc_addrs) { - en_err(priv, "failed to allocate multicast list\n"); - return; - } - i = 0; - netdev_for_each_mc_addr(ha, dev) - memcpy(mc_addrs + i++ * ETH_ALEN, ha->addr, ETH_ALEN); mlx4_en_clear_list(dev); - priv->mc_addrs = mc_addrs; - priv->mc_addrs_cnt = mc_addrs_cnt; + netdev_for_each_mc_addr(ha, dev) { + tmp = kzalloc(sizeof(struct mlx4_en_mc_list), GFP_ATOMIC); + if (!tmp) { + en_err(priv, "failed to allocate multicast list\n"); + mlx4_en_clear_list(dev); + return; + } + memcpy(tmp->addr, ha->addr, ETH_ALEN); + list_add_tail(&tmp->list, &priv->mc_list); + } } +static void update_mclist_flags(struct mlx4_en_priv *priv, + struct list_head *dst, + struct list_head *src) +{ + struct mlx4_en_mc_list *dst_tmp, *src_tmp, *new_mc; + bool found; + + /* Find all the entries that should be removed from dst, + * These are the entries that are not found in src + */ + list_for_each_entry(dst_tmp, dst, list) { + found = false; + list_for_each_entry(src_tmp, src, list) { + if (!memcmp(dst_tmp->addr, src_tmp->addr, ETH_ALEN)) { + found = true; + break; + } + } + if (!found) + dst_tmp->action = MCLIST_REM; + } + + /* Add entries that exist in src but not in dst + * mark them as need to add + */ + list_for_each_entry(src_tmp, src, list) { + found = false; + list_for_each_entry(dst_tmp, dst, list) { + if (!memcmp(dst_tmp->addr, src_tmp->addr, ETH_ALEN)) { + dst_tmp->action = MCLIST_NONE; + 
found = true; + break; + } + } + if (!found) { + new_mc = kmalloc(sizeof(struct mlx4_en_mc_list), + GFP_KERNEL); + if (!new_mc) { + en_err(priv, "Failed to allocate current multicast list\n"); + return; + } + memcpy(new_mc, src_tmp, + sizeof(struct mlx4_en_mc_list)); + new_mc->action = MCLIST_ADD; + list_add_tail(&new_mc->list, dst); + } + } +} static void mlx4_en_set_multicast(struct net_device *dev) { @@ -214,9 +557,10 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) mcast_task); struct mlx4_en_dev *mdev = priv->mdev; struct net_device *dev = priv->dev; + struct mlx4_en_mc_list *mclist, *tmp; u64 mcast_addr = 0; u8 mc_list[16] = {0}; - int err; + int err = 0; mutex_lock(&mdev->state_lock); if (!mdev->device_up) { @@ -251,16 +595,46 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) priv->flags |= MLX4_EN_FLAG_PROMISC; /* Enable promiscouos mode */ - if (!(mdev->dev->caps.flags & - MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - err = mlx4_SET_PORT_qpn_calc(mdev->dev, priv->port, - priv->base_qpn, 1); - else - err = mlx4_unicast_promisc_add(mdev->dev, priv->base_qpn, + switch (mdev->dev->caps.steering_mode) { + case MLX4_STEERING_MODE_DEVICE_MANAGED: + err = mlx4_flow_steer_promisc_add(mdev->dev, + priv->port, + priv->base_qpn, + MLX4_FS_PROMISC_UPLINK); + if (err) + en_err(priv, "Failed enabling promiscuous mode\n"); + priv->flags |= MLX4_EN_FLAG_MC_PROMISC; + break; + + case MLX4_STEERING_MODE_B0: + err = mlx4_unicast_promisc_add(mdev->dev, + priv->base_qpn, priv->port); - if (err) - en_err(priv, "Failed enabling " - "promiscuous mode\n"); + if (err) + en_err(priv, "Failed enabling unicast promiscuous mode\n"); + + /* Add the default qp number as multicast + * promisc + */ + if (!(priv->flags & MLX4_EN_FLAG_MC_PROMISC)) { + err = mlx4_multicast_promisc_add(mdev->dev, + priv->base_qpn, + priv->port); + if (err) + en_err(priv, "Failed enabling multicast promiscuous mode\n"); + priv->flags |= MLX4_EN_FLAG_MC_PROMISC; + } + break; + + case MLX4_STEERING_MODE_A0: + err = mlx4_SET_PORT_qpn_calc(mdev->dev, + priv->port, + priv->base_qpn, + 1); + if (err) + en_err(priv, "Failed enabling promiscuous mode\n"); + break; + } /* Disable port multicast filter (unconditionally) */ err = mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, @@ -269,15 +643,6 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) en_err(priv, "Failed disabling " "multicast filter\n"); - /* Add the default qp number as multicast promisc */ - if (!(priv->flags & MLX4_EN_FLAG_MC_PROMISC)) { - err = mlx4_multicast_promisc_add(mdev->dev, priv->base_qpn, - priv->port); - if (err) - en_err(priv, "Failed entering multicast promisc mode\n"); - priv->flags |= MLX4_EN_FLAG_MC_PROMISC; - } - /* Disable port VLAN filter */ err = mlx4_SET_VLAN_FLTR(mdev->dev, priv); if (err) @@ -296,22 +661,40 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) priv->flags &= ~MLX4_EN_FLAG_PROMISC; /* Disable promiscouos mode */ - if (!(mdev->dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - err = mlx4_SET_PORT_qpn_calc(mdev->dev, priv->port, - priv->base_qpn, 0); - else - err = mlx4_unicast_promisc_remove(mdev->dev, priv->base_qpn, + switch (mdev->dev->caps.steering_mode) { + case MLX4_STEERING_MODE_DEVICE_MANAGED: + err = mlx4_flow_steer_promisc_remove(mdev->dev, + priv->port, + MLX4_FS_PROMISC_UPLINK); + if (err) + en_err(priv, "Failed disabling promiscuous mode\n"); + priv->flags &= ~MLX4_EN_FLAG_MC_PROMISC; + break; + + case MLX4_STEERING_MODE_B0: + err = mlx4_unicast_promisc_remove(mdev->dev, + 
priv->base_qpn, priv->port); - if (err) - en_err(priv, "Failed disabling promiscuous mode\n"); + if (err) + en_err(priv, "Failed disabling unicast promiscuous mode\n"); + /* Disable Multicast promisc */ + if (priv->flags & MLX4_EN_FLAG_MC_PROMISC) { + err = mlx4_multicast_promisc_remove(mdev->dev, + priv->base_qpn, + priv->port); + if (err) + en_err(priv, "Failed disabling multicast promiscuous mode\n"); + priv->flags &= ~MLX4_EN_FLAG_MC_PROMISC; + } + break; - /* Disable Multicast promisc */ - if (priv->flags & MLX4_EN_FLAG_MC_PROMISC) { - err = mlx4_multicast_promisc_remove(mdev->dev, priv->base_qpn, - priv->port); + case MLX4_STEERING_MODE_A0: + err = mlx4_SET_PORT_qpn_calc(mdev->dev, + priv->port, + priv->base_qpn, 0); if (err) - en_err(priv, "Failed disabling multicast promiscuous mode\n"); - priv->flags &= ~MLX4_EN_FLAG_MC_PROMISC; + en_err(priv, "Failed disabling promiscuous mode\n"); + break; } /* Enable port VLAN filter */ @@ -329,18 +712,46 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) /* Add the default qp number as multicast promisc */ if (!(priv->flags & MLX4_EN_FLAG_MC_PROMISC)) { - err = mlx4_multicast_promisc_add(mdev->dev, priv->base_qpn, - priv->port); + switch (mdev->dev->caps.steering_mode) { + case MLX4_STEERING_MODE_DEVICE_MANAGED: + err = mlx4_flow_steer_promisc_add(mdev->dev, + priv->port, + priv->base_qpn, + MLX4_FS_PROMISC_ALL_MULTI); + break; + + case MLX4_STEERING_MODE_B0: + err = mlx4_multicast_promisc_add(mdev->dev, + priv->base_qpn, + priv->port); + break; + + case MLX4_STEERING_MODE_A0: + break; + } if (err) en_err(priv, "Failed entering multicast promisc mode\n"); priv->flags |= MLX4_EN_FLAG_MC_PROMISC; } } else { - int i; /* Disable Multicast promisc */ if (priv->flags & MLX4_EN_FLAG_MC_PROMISC) { - err = mlx4_multicast_promisc_remove(mdev->dev, priv->base_qpn, - priv->port); + switch (mdev->dev->caps.steering_mode) { + case MLX4_STEERING_MODE_DEVICE_MANAGED: + err = mlx4_flow_steer_promisc_remove(mdev->dev, + priv->port, + MLX4_FS_PROMISC_ALL_MULTI); + break; + + case MLX4_STEERING_MODE_B0: + err = mlx4_multicast_promisc_remove(mdev->dev, + priv->base_qpn, + priv->port); + break; + + case MLX4_STEERING_MODE_A0: + break; + } if (err) en_err(priv, "Failed disabling multicast promiscuous mode\n"); priv->flags &= ~MLX4_EN_FLAG_MC_PROMISC; @@ -351,13 +762,6 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) if (err) en_err(priv, "Failed disabling multicast filter\n"); - /* Detach our qp from all the multicast addresses */ - for (i = 0; i < priv->mc_addrs_cnt; i++) { - memcpy(&mc_list[10], priv->mc_addrs + i * ETH_ALEN, ETH_ALEN); - mc_list[5] = priv->port; - mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp, - mc_list, MLX4_PROT_ETH); - } /* Flush mcast filter and init it with broadcast address */ mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, ETH_BCAST, 1, MLX4_MCAST_CONFIG); @@ -367,13 +771,8 @@ static void mlx4_en_do_set_multicast(struct work_struct *work) netif_tx_lock_bh(dev); mlx4_en_cache_mclist(dev); netif_tx_unlock_bh(dev); - for (i = 0; i < priv->mc_addrs_cnt; i++) { - mcast_addr = - mlx4_en_mac_to_u64(priv->mc_addrs + i * ETH_ALEN); - memcpy(&mc_list[10], priv->mc_addrs + i * ETH_ALEN, ETH_ALEN); - mc_list[5] = priv->port; - mlx4_multicast_attach(mdev->dev, &priv->rss_map.indir_qp, - mc_list, 0, MLX4_PROT_ETH); + list_for_each_entry(mclist, &priv->mc_list, list) { + mcast_addr = mlx4_en_mac_to_u64(mclist->addr); mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, mcast_addr, 0, MLX4_MCAST_CONFIG); } @@ -381,6 +780,40 @@ 
static void mlx4_en_do_set_multicast(struct work_struct *work) 0, MLX4_MCAST_ENABLE); if (err) en_err(priv, "Failed enabling multicast filter\n"); + + update_mclist_flags(priv, &priv->curr_list, &priv->mc_list); + list_for_each_entry_safe(mclist, tmp, &priv->curr_list, list) { + if (mclist->action == MCLIST_REM) { + /* detach this address and delete from list */ + memcpy(&mc_list[10], mclist->addr, ETH_ALEN); + mc_list[5] = priv->port; + err = mlx4_multicast_detach(mdev->dev, + &priv->rss_map.indir_qp, + mc_list, + MLX4_PROT_ETH, + mclist->reg_id); + if (err) + en_err(priv, "Fail to detach multicast address\n"); + + /* remove from list */ + list_del(&mclist->list); + kfree(mclist); + } else if (mclist->action == MCLIST_ADD) { + /* attach the address */ + memcpy(&mc_list[10], mclist->addr, ETH_ALEN); + /* needed for B0 steering support */ + mc_list[5] = priv->port; + err = mlx4_multicast_attach(mdev->dev, + &priv->rss_map.indir_qp, + mc_list, + priv->port, 0, + MLX4_PROT_ETH, + &mclist->reg_id); + if (err) + en_err(priv, "Fail to attach multicast address\n"); + + } + } } out: mutex_unlock(&mdev->state_lock); @@ -605,6 +1038,9 @@ int mlx4_en_start_port(struct net_device *dev) return 0; } + INIT_LIST_HEAD(&priv->mc_list); + INIT_LIST_HEAD(&priv->curr_list); + /* Calculate Rx buf size */ dev->mtu = min(dev->mtu, priv->max_mtu); mlx4_en_calc_rx_buf(dev); @@ -653,6 +1089,10 @@ int mlx4_en_start_port(struct net_device *dev) goto mac_err; } + err = mlx4_en_create_drop_qp(priv); + if (err) + goto rss_err; + /* Configure tx cq's and rings */ for (i = 0; i < priv->tx_ring_num; i++) { /* Configure cq */ @@ -720,13 +1160,23 @@ int mlx4_en_start_port(struct net_device *dev) /* Attach rx QP to bradcast address */ memset(&mc_list[10], 0xff, ETH_ALEN); - mc_list[5] = priv->port; + mc_list[5] = priv->port; /* needed for B0 steering support */ if (mlx4_multicast_attach(mdev->dev, &priv->rss_map.indir_qp, mc_list, - 0, MLX4_PROT_ETH)) + priv->port, 0, MLX4_PROT_ETH, + &priv->broadcast_id)) mlx4_warn(mdev, "Failed Attaching Broadcast\n"); /* Must redo promiscuous mode setup. 
*/ priv->flags &= ~(MLX4_EN_FLAG_PROMISC | MLX4_EN_FLAG_MC_PROMISC); + if (mdev->dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { + mlx4_flow_steer_promisc_remove(mdev->dev, + priv->port, + MLX4_FS_PROMISC_UPLINK); + mlx4_flow_steer_promisc_remove(mdev->dev, + priv->port, + MLX4_FS_PROMISC_ALL_MULTI); + } /* Schedule multicast task to populate multicast list */ queue_work(mdev->workqueue, &priv->mcast_task); @@ -742,7 +1192,8 @@ tx_err: mlx4_en_deactivate_tx_ring(priv, &priv->tx_ring[tx_index]); mlx4_en_deactivate_cq(priv, &priv->tx_cq[tx_index]); } - + mlx4_en_destroy_drop_qp(priv); +rss_err: mlx4_en_release_rss_steer(priv); mac_err: mlx4_put_eth_qp(mdev->dev, priv->port, priv->mac, priv->base_qpn); @@ -760,6 +1211,7 @@ void mlx4_en_stop_port(struct net_device *dev) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_en_dev *mdev = priv->mdev; + struct mlx4_en_mc_list *mclist, *tmp; int i; u8 mc_list[16] = {0}; @@ -778,19 +1230,26 @@ void mlx4_en_stop_port(struct net_device *dev) /* Detach All multicasts */ memset(&mc_list[10], 0xff, ETH_ALEN); - mc_list[5] = priv->port; + mc_list[5] = priv->port; /* needed for B0 steering support */ mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp, mc_list, - MLX4_PROT_ETH); - for (i = 0; i < priv->mc_addrs_cnt; i++) { - memcpy(&mc_list[10], priv->mc_addrs + i * ETH_ALEN, ETH_ALEN); + MLX4_PROT_ETH, priv->broadcast_id); + list_for_each_entry(mclist, &priv->curr_list, list) { + memcpy(&mc_list[10], mclist->addr, ETH_ALEN); mc_list[5] = priv->port; mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp, - mc_list, MLX4_PROT_ETH); + mc_list, MLX4_PROT_ETH, mclist->reg_id); } mlx4_en_clear_list(dev); + list_for_each_entry_safe(mclist, tmp, &priv->curr_list, list) { + list_del(&mclist->list); + kfree(mclist); + } + /* Flush multicast filter */ mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, 1, MLX4_MCAST_CONFIG); + mlx4_en_destroy_drop_qp(priv); + /* Free TX Rings */ for (i = 0; i < priv->tx_ring_num; i++) { mlx4_en_deactivate_tx_ring(priv, &priv->tx_ring[i]); @@ -915,6 +1374,11 @@ void mlx4_en_free_resources(struct mlx4_en_priv *priv) { int i; +#ifdef CONFIG_RFS_ACCEL + free_irq_cpu_rmap(priv->dev->rx_cpu_rmap); + priv->dev->rx_cpu_rmap = NULL; +#endif + for (i = 0; i < priv->tx_ring_num; i++) { if (priv->tx_ring[i].tx_info) mlx4_en_destroy_tx_ring(priv, &priv->tx_ring[i]); @@ -970,6 +1434,15 @@ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv) goto err; } +#ifdef CONFIG_RFS_ACCEL + priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->rx_ring_num); + if (!priv->dev->rx_cpu_rmap) + goto err; + + INIT_LIST_HEAD(&priv->filters); + spin_lock_init(&priv->filters_lock); +#endif + return 0; err: @@ -1077,6 +1550,9 @@ static const struct net_device_ops mlx4_netdev_ops = { #endif .ndo_set_features = mlx4_en_set_features, .ndo_setup_tc = mlx4_en_setup_tc, +#ifdef CONFIG_RFS_ACCEL + .ndo_rx_flow_steer = mlx4_en_filter_rfs, +#endif }; int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, @@ -1194,6 +1670,10 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, NETIF_F_HW_VLAN_FILTER; dev->hw_features |= NETIF_F_LOOPBACK; + if (mdev->dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) + dev->hw_features |= NETIF_F_NTUPLE; + mdev->pndev[port] = dev; netif_carrier_off(dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index d49a7ac3187d..f32e70300770 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ 
-41,41 +41,75 @@ #include "mlx4_en.h" - -static int mlx4_en_alloc_frag(struct mlx4_en_priv *priv, - struct mlx4_en_rx_desc *rx_desc, - struct page_frag *skb_frags, - struct mlx4_en_rx_alloc *ring_alloc, - int i) +static int mlx4_en_alloc_frags(struct mlx4_en_priv *priv, + struct mlx4_en_rx_desc *rx_desc, + struct mlx4_en_rx_alloc *frags, + struct mlx4_en_rx_alloc *ring_alloc) { - struct mlx4_en_frag_info *frag_info = &priv->frag_info[i]; - struct mlx4_en_rx_alloc *page_alloc = &ring_alloc[i]; + struct mlx4_en_rx_alloc page_alloc[MLX4_EN_MAX_RX_FRAGS]; + struct mlx4_en_frag_info *frag_info; struct page *page; dma_addr_t dma; + int i; - if (page_alloc->offset == frag_info->last_offset) { - /* Allocate new page */ - page = alloc_pages(GFP_ATOMIC | __GFP_COMP, MLX4_EN_ALLOC_ORDER); - if (!page) - return -ENOMEM; - - skb_frags[i].page = page_alloc->page; - skb_frags[i].offset = page_alloc->offset; - page_alloc->page = page; - page_alloc->offset = frag_info->frag_align; - } else { - page = page_alloc->page; - get_page(page); + for (i = 0; i < priv->num_frags; i++) { + frag_info = &priv->frag_info[i]; + if (ring_alloc[i].offset == frag_info->last_offset) { + page = alloc_pages(GFP_ATOMIC | __GFP_COMP, + MLX4_EN_ALLOC_ORDER); + if (!page) + goto out; + dma = dma_map_page(priv->ddev, page, 0, + MLX4_EN_ALLOC_SIZE, PCI_DMA_FROMDEVICE); + if (dma_mapping_error(priv->ddev, dma)) { + put_page(page); + goto out; + } + page_alloc[i].page = page; + page_alloc[i].dma = dma; + page_alloc[i].offset = frag_info->frag_align; + } else { + page_alloc[i].page = ring_alloc[i].page; + get_page(ring_alloc[i].page); + page_alloc[i].dma = ring_alloc[i].dma; + page_alloc[i].offset = ring_alloc[i].offset + + frag_info->frag_stride; + } + } - skb_frags[i].page = page; - skb_frags[i].offset = page_alloc->offset; - page_alloc->offset += frag_info->frag_stride; + for (i = 0; i < priv->num_frags; i++) { + frags[i] = ring_alloc[i]; + dma = ring_alloc[i].dma + ring_alloc[i].offset; + ring_alloc[i] = page_alloc[i]; + rx_desc->data[i].addr = cpu_to_be64(dma); } - dma = dma_map_single(priv->ddev, page_address(skb_frags[i].page) + - skb_frags[i].offset, frag_info->frag_size, - PCI_DMA_FROMDEVICE); - rx_desc->data[i].addr = cpu_to_be64(dma); + return 0; + + +out: + while (i--) { + frag_info = &priv->frag_info[i]; + if (ring_alloc[i].offset == frag_info->last_offset) + dma_unmap_page(priv->ddev, page_alloc[i].dma, + MLX4_EN_ALLOC_SIZE, PCI_DMA_FROMDEVICE); + put_page(page_alloc[i].page); + } + return -ENOMEM; +} + +static void mlx4_en_free_frag(struct mlx4_en_priv *priv, + struct mlx4_en_rx_alloc *frags, + int i) +{ + struct mlx4_en_frag_info *frag_info = &priv->frag_info[i]; + + if (frags[i].offset == frag_info->last_offset) { + dma_unmap_page(priv->ddev, frags[i].dma, MLX4_EN_ALLOC_SIZE, + PCI_DMA_FROMDEVICE); + } + if (frags[i].page) + put_page(frags[i].page); } static int mlx4_en_init_allocator(struct mlx4_en_priv *priv, @@ -91,6 +125,13 @@ static int mlx4_en_init_allocator(struct mlx4_en_priv *priv, if (!page_alloc->page) goto out; + page_alloc->dma = dma_map_page(priv->ddev, page_alloc->page, 0, + MLX4_EN_ALLOC_SIZE, PCI_DMA_FROMDEVICE); + if (dma_mapping_error(priv->ddev, page_alloc->dma)) { + put_page(page_alloc->page); + page_alloc->page = NULL; + goto out; + } page_alloc->offset = priv->frag_info[i].frag_align; en_dbg(DRV, priv, "Initialized allocator:%d with page:%p\n", i, page_alloc->page); @@ -100,6 +141,8 @@ static int mlx4_en_init_allocator(struct mlx4_en_priv *priv, out: while (i--) { page_alloc = 
&ring->page_alloc[i]; + dma_unmap_page(priv->ddev, page_alloc->dma, + MLX4_EN_ALLOC_SIZE, PCI_DMA_FROMDEVICE); put_page(page_alloc->page); page_alloc->page = NULL; } @@ -117,24 +160,22 @@ static void mlx4_en_destroy_allocator(struct mlx4_en_priv *priv, en_dbg(DRV, priv, "Freeing allocator:%d count:%d\n", i, page_count(page_alloc->page)); + dma_unmap_page(priv->ddev, page_alloc->dma, + MLX4_EN_ALLOC_SIZE, PCI_DMA_FROMDEVICE); put_page(page_alloc->page); page_alloc->page = NULL; } } - static void mlx4_en_init_rx_desc(struct mlx4_en_priv *priv, struct mlx4_en_rx_ring *ring, int index) { struct mlx4_en_rx_desc *rx_desc = ring->buf + ring->stride * index; - struct skb_frag_struct *skb_frags = ring->rx_info + - (index << priv->log_rx_info); int possible_frags; int i; /* Set size and memtype fields */ for (i = 0; i < priv->num_frags; i++) { - skb_frag_size_set(&skb_frags[i], priv->frag_info[i].frag_size); rx_desc->data[i].byte_count = cpu_to_be32(priv->frag_info[i].frag_size); rx_desc->data[i].lkey = cpu_to_be32(priv->mdev->mr.key); @@ -151,29 +192,14 @@ static void mlx4_en_init_rx_desc(struct mlx4_en_priv *priv, } } - static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv, struct mlx4_en_rx_ring *ring, int index) { struct mlx4_en_rx_desc *rx_desc = ring->buf + (index * ring->stride); - struct page_frag *skb_frags = ring->rx_info + - (index << priv->log_rx_info); - int i; + struct mlx4_en_rx_alloc *frags = ring->rx_info + + (index << priv->log_rx_info); - for (i = 0; i < priv->num_frags; i++) - if (mlx4_en_alloc_frag(priv, rx_desc, skb_frags, ring->page_alloc, i)) - goto err; - - return 0; - -err: - while (i--) { - dma_addr_t dma = be64_to_cpu(rx_desc->data[i].addr); - pci_unmap_single(priv->mdev->pdev, dma, skb_frags[i].size, - PCI_DMA_FROMDEVICE); - put_page(skb_frags[i].page); - } - return -ENOMEM; + return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc); } static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring) @@ -185,20 +211,13 @@ static void mlx4_en_free_rx_desc(struct mlx4_en_priv *priv, struct mlx4_en_rx_ring *ring, int index) { - struct page_frag *skb_frags; - struct mlx4_en_rx_desc *rx_desc = ring->buf + (index << ring->log_stride); - dma_addr_t dma; + struct mlx4_en_rx_alloc *frags; int nr; - skb_frags = ring->rx_info + (index << priv->log_rx_info); + frags = ring->rx_info + (index << priv->log_rx_info); for (nr = 0; nr < priv->num_frags; nr++) { en_dbg(DRV, priv, "Freeing fragment:%d\n", nr); - dma = be64_to_cpu(rx_desc->data[nr].addr); - - en_dbg(DRV, priv, "Unmapping buffer at dma:0x%llx\n", (u64) dma); - dma_unmap_single(priv->ddev, dma, skb_frags[nr].size, - PCI_DMA_FROMDEVICE); - put_page(skb_frags[nr].page); + mlx4_en_free_frag(priv, frags, nr); } } @@ -268,10 +287,9 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv, struct mlx4_en_rx_ring *ring, u32 size, u16 stride) { struct mlx4_en_dev *mdev = priv->mdev; - int err; + int err = -ENOMEM; int tmp; - ring->prod = 0; ring->cons = 0; ring->size = size; @@ -281,7 +299,7 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv, ring->buf_size = ring->size * ring->stride + TXBB_SIZE; tmp = size * roundup_pow_of_two(MLX4_EN_MAX_RX_FRAGS * - sizeof(struct skb_frag_struct)); + sizeof(struct mlx4_en_rx_alloc)); ring->rx_info = vmalloc(tmp); if (!ring->rx_info) return -ENOMEM; @@ -338,7 +356,7 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv) memset(ring->buf, 0, ring->buf_size); mlx4_en_update_rx_prod_db(ring); - /* Initailize all descriptors */ + /* Initialize all descriptors */ for 
(i = 0; i < ring->size; i++) mlx4_en_init_rx_desc(priv, ring, i); @@ -389,6 +407,9 @@ void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv, mlx4_free_hwq_res(mdev->dev, &ring->wqres, size * stride + TXBB_SIZE); vfree(ring->rx_info); ring->rx_info = NULL; +#ifdef CONFIG_RFS_ACCEL + mlx4_en_cleanup_filters(priv, ring); +#endif } void mlx4_en_deactivate_rx_ring(struct mlx4_en_priv *priv, @@ -401,12 +422,10 @@ void mlx4_en_deactivate_rx_ring(struct mlx4_en_priv *priv, } -/* Unmap a completed descriptor and free unused pages */ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv, struct mlx4_en_rx_desc *rx_desc, - struct page_frag *skb_frags, + struct mlx4_en_rx_alloc *frags, struct sk_buff *skb, - struct mlx4_en_rx_alloc *page_alloc, int length) { struct skb_frag_struct *skb_frags_rx = skb_shinfo(skb)->frags; @@ -414,26 +433,24 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv, int nr; dma_addr_t dma; - /* Collect used fragments while replacing them in the HW descirptors */ + /* Collect used fragments while replacing them in the HW descriptors */ for (nr = 0; nr < priv->num_frags; nr++) { frag_info = &priv->frag_info[nr]; if (length <= frag_info->frag_prefix_size) break; + if (!frags[nr].page) + goto fail; - /* Save page reference in skb */ - __skb_frag_set_page(&skb_frags_rx[nr], skb_frags[nr].page); - skb_frag_size_set(&skb_frags_rx[nr], skb_frags[nr].size); - skb_frags_rx[nr].page_offset = skb_frags[nr].offset; - skb->truesize += frag_info->frag_stride; dma = be64_to_cpu(rx_desc->data[nr].addr); + dma_sync_single_for_cpu(priv->ddev, dma, frag_info->frag_size, + DMA_FROM_DEVICE); - /* Allocate a replacement page */ - if (mlx4_en_alloc_frag(priv, rx_desc, skb_frags, page_alloc, nr)) - goto fail; - - /* Unmap buffer */ - dma_unmap_single(priv->ddev, dma, skb_frag_size(&skb_frags_rx[nr]), - PCI_DMA_FROMDEVICE); + /* Save page reference in skb */ + get_page(frags[nr].page); + __skb_frag_set_page(&skb_frags_rx[nr], frags[nr].page); + skb_frag_size_set(&skb_frags_rx[nr], frag_info->frag_size); + skb_frags_rx[nr].page_offset = frags[nr].offset; + skb->truesize += frag_info->frag_stride; } /* Adjust size of last fragment to match actual length */ if (nr > 0) @@ -442,8 +459,6 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv, return nr; fail: - /* Drop all accumulated fragments (which have already been replaced in - * the descriptor) of this packet; remaining fragments are reused... 
*/ while (nr > 0) { nr--; __skb_frag_unref(&skb_frags_rx[nr]); @@ -454,8 +469,7 @@ fail: static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv, struct mlx4_en_rx_desc *rx_desc, - struct page_frag *skb_frags, - struct mlx4_en_rx_alloc *page_alloc, + struct mlx4_en_rx_alloc *frags, unsigned int length) { struct sk_buff *skb; @@ -473,23 +487,20 @@ static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv, /* Get pointer to first fragment so we could copy the headers into the * (linear part of the) skb */ - va = page_address(skb_frags[0].page) + skb_frags[0].offset; + va = page_address(frags[0].page) + frags[0].offset; if (length <= SMALL_PACKET_SIZE) { /* We are copying all relevant data to the skb - temporarily - * synch buffers for the copy */ + * sync buffers for the copy */ dma = be64_to_cpu(rx_desc->data[0].addr); dma_sync_single_for_cpu(priv->ddev, dma, length, DMA_FROM_DEVICE); skb_copy_to_linear_data(skb, va, length); - dma_sync_single_for_device(priv->ddev, dma, length, - DMA_FROM_DEVICE); skb->tail += length; } else { - /* Move relevant fragments to skb */ - used_frags = mlx4_en_complete_rx_desc(priv, rx_desc, skb_frags, - skb, page_alloc, length); + used_frags = mlx4_en_complete_rx_desc(priv, rx_desc, frags, + skb, length); if (unlikely(!used_frags)) { kfree_skb(skb); return NULL; @@ -526,12 +537,25 @@ out_loopback: dev_kfree_skb_any(skb); } +static void mlx4_en_refill_rx_buffers(struct mlx4_en_priv *priv, + struct mlx4_en_rx_ring *ring) +{ + int index = ring->prod & ring->size_mask; + + while ((u32) (ring->prod - ring->cons) < ring->actual_size) { + if (mlx4_en_prepare_rx_desc(priv, ring, index)) + break; + ring->prod++; + index = ring->prod & ring->size_mask; + } +} + int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_cqe *cqe; struct mlx4_en_rx_ring *ring = &priv->rx_ring[cq->ring]; - struct page_frag *skb_frags; + struct mlx4_en_rx_alloc *frags; struct mlx4_en_rx_desc *rx_desc; struct sk_buff *skb; int index; @@ -540,6 +564,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud int polled = 0; int ip_summed; struct ethhdr *ethh; + dma_addr_t dma; u64 s_mac; if (!priv->port_up) @@ -555,7 +580,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK, cq->mcq.cons_index & cq->size)) { - skb_frags = ring->rx_info + (index << priv->log_rx_info); + frags = ring->rx_info + (index << priv->log_rx_info); rx_desc = ring->buf + (index << ring->log_stride); /* @@ -579,8 +604,11 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud /* Get pointer to first fragment since we haven't skb yet and * cast it to ethhdr struct */ - ethh = (struct ethhdr *)(page_address(skb_frags[0].page) + - skb_frags[0].offset); + dma = be64_to_cpu(rx_desc->data[0].addr); + dma_sync_single_for_cpu(priv->ddev, dma, sizeof(*ethh), + DMA_FROM_DEVICE); + ethh = (struct ethhdr *)(page_address(frags[0].page) + + frags[0].offset); s_mac = mlx4_en_mac_to_u64(ethh->h_source); /* If source MAC is equal to our own MAC and not performing @@ -612,10 +640,9 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud if (!gro_skb) goto next; - nr = mlx4_en_complete_rx_desc( - priv, rx_desc, - skb_frags, gro_skb, - ring->page_alloc, length); + nr = mlx4_en_complete_rx_desc(priv, + rx_desc, frags, gro_skb, + length); if (!nr) goto next; @@ -651,8 +678,7 @@ 
int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud ring->csum_none++; } - skb = mlx4_en_rx_skb(priv, rx_desc, skb_frags, - ring->page_alloc, length); + skb = mlx4_en_rx_skb(priv, rx_desc, frags, length); if (!skb) { priv->stats.rx_dropped++; goto next; @@ -678,6 +704,9 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud netif_receive_skb(skb); next: + for (nr = 0; nr < priv->num_frags; nr++) + mlx4_en_free_frag(priv, frags, nr); + ++cq->mcq.cons_index; index = (cq->mcq.cons_index) & ring->size_mask; cqe = &cq->buf[index]; @@ -693,7 +722,7 @@ out: mlx4_cq_set_ci(&cq->mcq); wmb(); /* ensure HW sees CQ consumer before we post new buffers */ ring->cons = cq->mcq.cons_index; - ring->prod += polled; /* Polled descriptors were realocated in place */ + mlx4_en_refill_rx_buffers(priv, ring); mlx4_en_update_rx_prod_db(ring); return polled; } @@ -782,7 +811,7 @@ void mlx4_en_calc_rx_buf(struct net_device *dev) priv->num_frags = i; priv->rx_skb_size = eff_mtu; - priv->log_rx_info = ROUNDUP_LOG2(i * sizeof(struct skb_frag_struct)); + priv->log_rx_info = ROUNDUP_LOG2(i * sizeof(struct mlx4_en_rx_alloc)); en_dbg(DRV, priv, "Rx buffer scatter-list (effective-mtu:%d " "num_frags:%d):\n", eff_mtu, priv->num_frags); @@ -844,6 +873,36 @@ out: return err; } +int mlx4_en_create_drop_qp(struct mlx4_en_priv *priv) +{ + int err; + u32 qpn; + + err = mlx4_qp_reserve_range(priv->mdev->dev, 1, 1, &qpn); + if (err) { + en_err(priv, "Failed reserving drop qpn\n"); + return err; + } + err = mlx4_qp_alloc(priv->mdev->dev, qpn, &priv->drop_qp); + if (err) { + en_err(priv, "Failed allocating drop qp\n"); + mlx4_qp_release_range(priv->mdev->dev, qpn, 1); + return err; + } + + return 0; +} + +void mlx4_en_destroy_drop_qp(struct mlx4_en_priv *priv) +{ + u32 qpn; + + qpn = priv->drop_qp.qpn; + mlx4_qp_remove(priv->mdev->dev, &priv->drop_qp); + mlx4_qp_free(priv->mdev->dev, &priv->drop_qp); + mlx4_qp_release_range(priv->mdev->dev, qpn, 1); +} + /* Allocate rx qp's and configure them according to rss map */ int mlx4_en_config_rss_steer(struct mlx4_en_priv *priv) { @@ -954,8 +1013,3 @@ void mlx4_en_release_rss_steer(struct mlx4_en_priv *priv) } mlx4_qp_release_range(mdev->dev, rss_map->base_qpn, priv->rx_ring_num); } - - - - - diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c index bce98d9c0039..cd48337cbfc0 100644 --- a/drivers/net/ethernet/mellanox/mlx4/eq.c +++ b/drivers/net/ethernet/mellanox/mlx4/eq.c @@ -39,6 +39,7 @@ #include <linux/dma-mapping.h> #include <linux/mlx4/cmd.h> +#include <linux/cpu_rmap.h> #include "mlx4.h" #include "fw.h" @@ -1060,7 +1061,8 @@ int mlx4_test_interrupts(struct mlx4_dev *dev) } EXPORT_SYMBOL(mlx4_test_interrupts); -int mlx4_assign_eq(struct mlx4_dev *dev, char* name, int * vector) +int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, + int *vector) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -1074,6 +1076,14 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char* name, int * vector) snprintf(priv->eq_table.irq_names + vec * MLX4_IRQNAME_SIZE, MLX4_IRQNAME_SIZE, "%s", name); +#ifdef CONFIG_RFS_ACCEL + if (rmap) { + err = irq_cpu_rmap_add(rmap, + priv->eq_table.eq[vec].irq); + if (err) + mlx4_warn(dev, "Failed adding irq rmap\n"); + } +#endif err = request_irq(priv->eq_table.eq[vec].irq, mlx4_msi_x_interrupt, 0, &priv->eq_table.irq_names[vec<<5], diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c index 9c83bb8151ea..1d70657058a5 
100644 --- a/drivers/net/ethernet/mellanox/mlx4/fw.c +++ b/drivers/net/ethernet/mellanox/mlx4/fw.c @@ -123,7 +123,8 @@ static void dump_dev_cap_flags2(struct mlx4_dev *dev, u64 flags) static const char * const fname[] = { [0] = "RSS support", [1] = "RSS Toeplitz Hash Function support", - [2] = "RSS XOR Hash Function support" + [2] = "RSS XOR Hash Function support", + [3] = "Device manage flow steering support" }; int i; @@ -391,6 +392,8 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap) #define QUERY_DEV_CAP_RSVD_XRC_OFFSET 0x66 #define QUERY_DEV_CAP_MAX_XRC_OFFSET 0x67 #define QUERY_DEV_CAP_MAX_COUNTERS_OFFSET 0x68 +#define QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET 0x76 +#define QUERY_DEV_CAP_FLOW_STEERING_MAX_QP_OFFSET 0x77 #define QUERY_DEV_CAP_RDMARC_ENTRY_SZ_OFFSET 0x80 #define QUERY_DEV_CAP_QPC_ENTRY_SZ_OFFSET 0x82 #define QUERY_DEV_CAP_AUX_ENTRY_SZ_OFFSET 0x84 @@ -474,6 +477,12 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap) dev_cap->num_ports = field & 0xf; MLX4_GET(field, outbox, QUERY_DEV_CAP_MAX_MSG_SZ_OFFSET); dev_cap->max_msg_sz = 1 << (field & 0x1f); + MLX4_GET(field, outbox, QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET); + if (field & 0x80) + dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_FS_EN; + dev_cap->fs_log_max_ucast_qp_range_size = field & 0x1f; + MLX4_GET(field, outbox, QUERY_DEV_CAP_FLOW_STEERING_MAX_QP_OFFSET); + dev_cap->fs_max_num_qp_per_entry = field; MLX4_GET(stat_rate, outbox, QUERY_DEV_CAP_RATE_SUPPORT_OFFSET); dev_cap->stat_rate_support = stat_rate; MLX4_GET(ext_flags, outbox, QUERY_DEV_CAP_EXT_FLAGS_OFFSET); @@ -1061,6 +1070,15 @@ int mlx4_INIT_HCA(struct mlx4_dev *dev, struct mlx4_init_hca_param *param) #define INIT_HCA_LOG_MC_HASH_SZ_OFFSET (INIT_HCA_MCAST_OFFSET + 0x16) #define INIT_HCA_UC_STEERING_OFFSET (INIT_HCA_MCAST_OFFSET + 0x18) #define INIT_HCA_LOG_MC_TABLE_SZ_OFFSET (INIT_HCA_MCAST_OFFSET + 0x1b) +#define INIT_HCA_DEVICE_MANAGED_FLOW_STEERING_EN 0x6 +#define INIT_HCA_FS_PARAM_OFFSET 0x1d0 +#define INIT_HCA_FS_BASE_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x00) +#define INIT_HCA_FS_LOG_ENTRY_SZ_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x12) +#define INIT_HCA_FS_LOG_TABLE_SZ_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x1b) +#define INIT_HCA_FS_ETH_BITS_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x21) +#define INIT_HCA_FS_ETH_NUM_ADDRS_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x22) +#define INIT_HCA_FS_IB_BITS_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x25) +#define INIT_HCA_FS_IB_NUM_ADDRS_OFFSET (INIT_HCA_FS_PARAM_OFFSET + 0x26) #define INIT_HCA_TPT_OFFSET 0x0f0 #define INIT_HCA_DMPT_BASE_OFFSET (INIT_HCA_TPT_OFFSET + 0x00) #define INIT_HCA_LOG_MPT_SZ_OFFSET (INIT_HCA_TPT_OFFSET + 0x0b) @@ -1119,14 +1137,44 @@ int mlx4_INIT_HCA(struct mlx4_dev *dev, struct mlx4_init_hca_param *param) MLX4_PUT(inbox, param->rdmarc_base, INIT_HCA_RDMARC_BASE_OFFSET); MLX4_PUT(inbox, param->log_rd_per_qp, INIT_HCA_LOG_RD_OFFSET); - /* multicast attributes */ - - MLX4_PUT(inbox, param->mc_base, INIT_HCA_MC_BASE_OFFSET); - MLX4_PUT(inbox, param->log_mc_entry_sz, INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET); - MLX4_PUT(inbox, param->log_mc_hash_sz, INIT_HCA_LOG_MC_HASH_SZ_OFFSET); - if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER) - MLX4_PUT(inbox, (u8) (1 << 3), INIT_HCA_UC_STEERING_OFFSET); - MLX4_PUT(inbox, param->log_mc_table_sz, INIT_HCA_LOG_MC_TABLE_SZ_OFFSET); + /* steering attributes */ + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { + *(inbox + INIT_HCA_FLAGS_OFFSET / 4) |= + cpu_to_be32(1 << + 
INIT_HCA_DEVICE_MANAGED_FLOW_STEERING_EN); + + MLX4_PUT(inbox, param->mc_base, INIT_HCA_FS_BASE_OFFSET); + MLX4_PUT(inbox, param->log_mc_entry_sz, + INIT_HCA_FS_LOG_ENTRY_SZ_OFFSET); + MLX4_PUT(inbox, param->log_mc_table_sz, + INIT_HCA_FS_LOG_TABLE_SZ_OFFSET); + /* Enable Ethernet flow steering + * with udp unicast and tcp unicast + */ + MLX4_PUT(inbox, param->fs_hash_enable_bits, + INIT_HCA_FS_ETH_BITS_OFFSET); + MLX4_PUT(inbox, (u16) MLX4_FS_NUM_OF_L2_ADDR, + INIT_HCA_FS_ETH_NUM_ADDRS_OFFSET); + /* Enable IPoIB flow steering + * with udp unicast and tcp unicast + */ + MLX4_PUT(inbox, param->fs_hash_enable_bits, + INIT_HCA_FS_IB_BITS_OFFSET); + MLX4_PUT(inbox, (u16) MLX4_FS_NUM_OF_L2_ADDR, + INIT_HCA_FS_IB_NUM_ADDRS_OFFSET); + } else { + MLX4_PUT(inbox, param->mc_base, INIT_HCA_MC_BASE_OFFSET); + MLX4_PUT(inbox, param->log_mc_entry_sz, + INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET); + MLX4_PUT(inbox, param->log_mc_hash_sz, + INIT_HCA_LOG_MC_HASH_SZ_OFFSET); + MLX4_PUT(inbox, param->log_mc_table_sz, + INIT_HCA_LOG_MC_TABLE_SZ_OFFSET); + if (dev->caps.steering_mode == MLX4_STEERING_MODE_B0) + MLX4_PUT(inbox, (u8) (1 << 3), + INIT_HCA_UC_STEERING_OFFSET); + } /* TPT attributes */ @@ -1188,15 +1236,24 @@ int mlx4_QUERY_HCA(struct mlx4_dev *dev, MLX4_GET(param->rdmarc_base, outbox, INIT_HCA_RDMARC_BASE_OFFSET); MLX4_GET(param->log_rd_per_qp, outbox, INIT_HCA_LOG_RD_OFFSET); - /* multicast attributes */ + /* steering attributes */ + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { - MLX4_GET(param->mc_base, outbox, INIT_HCA_MC_BASE_OFFSET); - MLX4_GET(param->log_mc_entry_sz, outbox, - INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET); - MLX4_GET(param->log_mc_hash_sz, outbox, - INIT_HCA_LOG_MC_HASH_SZ_OFFSET); - MLX4_GET(param->log_mc_table_sz, outbox, - INIT_HCA_LOG_MC_TABLE_SZ_OFFSET); + MLX4_GET(param->mc_base, outbox, INIT_HCA_FS_BASE_OFFSET); + MLX4_GET(param->log_mc_entry_sz, outbox, + INIT_HCA_FS_LOG_ENTRY_SZ_OFFSET); + MLX4_GET(param->log_mc_table_sz, outbox, + INIT_HCA_FS_LOG_TABLE_SZ_OFFSET); + } else { + MLX4_GET(param->mc_base, outbox, INIT_HCA_MC_BASE_OFFSET); + MLX4_GET(param->log_mc_entry_sz, outbox, + INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET); + MLX4_GET(param->log_mc_hash_sz, outbox, + INIT_HCA_LOG_MC_HASH_SZ_OFFSET); + MLX4_GET(param->log_mc_table_sz, outbox, + INIT_HCA_LOG_MC_TABLE_SZ_OFFSET); + } /* TPT attributes */ diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h index 64c0399e4b78..83fcbbf1b169 100644 --- a/drivers/net/ethernet/mellanox/mlx4/fw.h +++ b/drivers/net/ethernet/mellanox/mlx4/fw.h @@ -78,6 +78,8 @@ struct mlx4_dev_cap { u16 wavelength[MLX4_MAX_PORTS + 1]; u64 trans_code[MLX4_MAX_PORTS + 1]; u16 stat_rate_support; + int fs_log_max_ucast_qp_range_size; + int fs_max_num_qp_per_entry; u64 flags; u64 flags2; int reserved_uars; @@ -165,6 +167,7 @@ struct mlx4_init_hca_param { u8 log_mpt_sz; u8 log_uar_sz; u8 uar_page_sz; /* log pg sz in 4k chunks */ + u8 fs_hash_enable_bits; }; struct mlx4_init_ib_param { diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index a0313de122de..42645166bae2 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -41,6 +41,7 @@ #include <linux/slab.h> #include <linux/io-mapping.h> #include <linux/delay.h> +#include <linux/netdevice.h> #include <linux/mlx4/device.h> #include <linux/mlx4/doorbell.h> @@ -90,7 +91,9 @@ module_param_named(log_num_mgm_entry_size, MODULE_PARM_DESC(log_num_mgm_entry_size, "log mgm size, 
that defines the num" " of qp per mcg, for example:" " 10 gives 248.range: 9<=" - " log_num_mgm_entry_size <= 12"); + " log_num_mgm_entry_size <= 12." + " Not in use with device managed" + " flow steering"); #define MLX4_VF (1 << 0) @@ -243,7 +246,6 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap) dev->caps.reserved_srqs = dev_cap->reserved_srqs; dev->caps.max_sq_desc_sz = dev_cap->max_sq_desc_sz; dev->caps.max_rq_desc_sz = dev_cap->max_rq_desc_sz; - dev->caps.num_qp_per_mgm = mlx4_get_qp_per_mgm(dev); /* * Subtract 1 from the limit because we need to allocate a * spare CQE so the HCA HW can tell the difference between an @@ -274,6 +276,28 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap) dev->caps.max_gso_sz = dev_cap->max_gso_sz; dev->caps.max_rss_tbl_sz = dev_cap->max_rss_tbl_sz; + if (dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_FS_EN) { + dev->caps.steering_mode = MLX4_STEERING_MODE_DEVICE_MANAGED; + dev->caps.num_qp_per_mgm = dev_cap->fs_max_num_qp_per_entry; + dev->caps.fs_log_max_ucast_qp_range_size = + dev_cap->fs_log_max_ucast_qp_range_size; + } else { + if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER && + dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER) { + dev->caps.steering_mode = MLX4_STEERING_MODE_B0; + } else { + dev->caps.steering_mode = MLX4_STEERING_MODE_A0; + + if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER || + dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER) + mlx4_warn(dev, "Must have UC_STEER and MC_STEER flags " + "set to use B0 steering. Falling back to A0 steering mode.\n"); + } + dev->caps.num_qp_per_mgm = mlx4_get_qp_per_mgm(dev); + } + mlx4_dbg(dev, "Steering mode is: %s\n", + mlx4_steering_mode_str(dev->caps.steering_mode)); + /* Sense port always allowed on supported devices for ConnectX1 and 2 */ if (dev->pdev->device != 0x1003) dev->caps.flags |= MLX4_DEV_CAP_FLAG_SENSE_SUPPORT; @@ -967,9 +991,11 @@ static int mlx4_init_icm(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap, } /* - * It's not strictly required, but for simplicity just map the - * whole multicast group table now. The table isn't very big - * and it's a lot easier than trying to track ref counts. + * For flow steering device managed mode it is required to use + * mlx4_init_icm_table. For B0 steering mode it's not strictly + * required, but for simplicity just map the whole multicast + * group table now. The table isn't very big and it's a lot + * easier than trying to track ref counts. 
*/ err = mlx4_init_icm_table(dev, &priv->mcg_table.table, init_hca->mc_base, @@ -1205,7 +1231,26 @@ static int mlx4_init_hca(struct mlx4_dev *dev) goto err_stop_fw; } + priv->fs_hash_mode = MLX4_FS_L2_HASH; + + switch (priv->fs_hash_mode) { + case MLX4_FS_L2_HASH: + init_hca.fs_hash_enable_bits = 0; + break; + + case MLX4_FS_L2_L3_L4_HASH: + /* Enable flow steering with + * udp unicast and tcp unicast + */ + init_hca.fs_hash_enable_bits = + MLX4_FS_UDP_UC_EN | MLX4_FS_TCP_UC_EN; + break; + } + profile = default_profile; + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) + profile.num_mcg = MLX4_FS_NUM_MCG; icm_size = mlx4_make_profile(dev, &profile, &dev_cap, &init_hca); @@ -1539,8 +1584,8 @@ static void mlx4_enable_msi_x(struct mlx4_dev *dev) struct mlx4_priv *priv = mlx4_priv(dev); struct msix_entry *entries; int nreq = min_t(int, dev->caps.num_ports * - min_t(int, num_online_cpus() + 1, MAX_MSIX_P_PORT) - + MSIX_LEGACY_SZ, MAX_MSIX); + min_t(int, netif_get_num_default_rss_queues() + 1, + MAX_MSIX_P_PORT) + MSIX_LEGACY_SZ, MAX_MSIX); int err; int i; diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c index f4a8f98e402a..4ec3835e1bc2 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mcg.c +++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c @@ -54,7 +54,12 @@ struct mlx4_mgm { int mlx4_get_mgm_entry_size(struct mlx4_dev *dev) { - return min((1 << mlx4_log_num_mgm_entry_size), MLX4_MAX_MGM_ENTRY_SIZE); + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) + return 1 << MLX4_FS_MGM_LOG_ENTRY_SIZE; + else + return min((1 << mlx4_log_num_mgm_entry_size), + MLX4_MAX_MGM_ENTRY_SIZE); } int mlx4_get_qp_per_mgm(struct mlx4_dev *dev) @@ -62,6 +67,35 @@ int mlx4_get_qp_per_mgm(struct mlx4_dev *dev) return 4 * (mlx4_get_mgm_entry_size(dev) / 16 - 2); } +static int mlx4_QP_FLOW_STEERING_ATTACH(struct mlx4_dev *dev, + struct mlx4_cmd_mailbox *mailbox, + u32 size, + u64 *reg_id) +{ + u64 imm; + int err = 0; + + err = mlx4_cmd_imm(dev, mailbox->dma, &imm, size, 0, + MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + if (err) + return err; + *reg_id = imm; + + return err; +} + +static int mlx4_QP_FLOW_STEERING_DETACH(struct mlx4_dev *dev, u64 regid) +{ + int err = 0; + + err = mlx4_cmd(dev, regid, 0, 0, + MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + + return err; +} + static int mlx4_READ_ENTRY(struct mlx4_dev *dev, int index, struct mlx4_cmd_mailbox *mailbox) { @@ -614,6 +648,311 @@ static int find_entry(struct mlx4_dev *dev, u8 port, return err; } +struct mlx4_net_trans_rule_hw_ctrl { + __be32 ctrl; + __be32 vf_vep_port; + __be32 qpn; + __be32 reserved; +}; + +static void trans_rule_ctrl_to_hw(struct mlx4_net_trans_rule *ctrl, + struct mlx4_net_trans_rule_hw_ctrl *hw) +{ + static const u8 __promisc_mode[] = { + [MLX4_FS_PROMISC_NONE] = 0x0, + [MLX4_FS_PROMISC_UPLINK] = 0x1, + [MLX4_FS_PROMISC_FUNCTION_PORT] = 0x2, + [MLX4_FS_PROMISC_ALL_MULTI] = 0x3, + }; + + u32 dw = 0; + + dw = ctrl->queue_mode == MLX4_NET_TRANS_Q_LIFO ? 1 : 0; + dw |= ctrl->exclusive ? (1 << 2) : 0; + dw |= ctrl->allow_loopback ? 
(1 << 3) : 0; + dw |= __promisc_mode[ctrl->promisc_mode] << 8; + dw |= ctrl->priority << 16; + + hw->ctrl = cpu_to_be32(dw); + hw->vf_vep_port = cpu_to_be32(ctrl->port); + hw->qpn = cpu_to_be32(ctrl->qpn); +} + +struct mlx4_net_trans_rule_hw_ib { + u8 size; + u8 rsvd1; + __be16 id; + u32 rsvd2; + __be32 qpn; + __be32 qpn_mask; + u8 dst_gid[16]; + u8 dst_gid_msk[16]; +} __packed; + +struct mlx4_net_trans_rule_hw_eth { + u8 size; + u8 rsvd; + __be16 id; + u8 rsvd1[6]; + u8 dst_mac[6]; + u16 rsvd2; + u8 dst_mac_msk[6]; + u16 rsvd3; + u8 src_mac[6]; + u16 rsvd4; + u8 src_mac_msk[6]; + u8 rsvd5; + u8 ether_type_enable; + __be16 ether_type; + __be16 vlan_id_msk; + __be16 vlan_id; +} __packed; + +struct mlx4_net_trans_rule_hw_tcp_udp { + u8 size; + u8 rsvd; + __be16 id; + __be16 rsvd1[3]; + __be16 dst_port; + __be16 rsvd2; + __be16 dst_port_msk; + __be16 rsvd3; + __be16 src_port; + __be16 rsvd4; + __be16 src_port_msk; +} __packed; + +struct mlx4_net_trans_rule_hw_ipv4 { + u8 size; + u8 rsvd; + __be16 id; + __be32 rsvd1; + __be32 dst_ip; + __be32 dst_ip_msk; + __be32 src_ip; + __be32 src_ip_msk; +} __packed; + +struct _rule_hw { + union { + struct { + u8 size; + u8 rsvd; + __be16 id; + }; + struct mlx4_net_trans_rule_hw_eth eth; + struct mlx4_net_trans_rule_hw_ib ib; + struct mlx4_net_trans_rule_hw_ipv4 ipv4; + struct mlx4_net_trans_rule_hw_tcp_udp tcp_udp; + }; +}; + +static int parse_trans_rule(struct mlx4_dev *dev, struct mlx4_spec_list *spec, + struct _rule_hw *rule_hw) +{ + static const u16 __sw_id_hw[] = { + [MLX4_NET_TRANS_RULE_ID_ETH] = 0xE001, + [MLX4_NET_TRANS_RULE_ID_IB] = 0xE005, + [MLX4_NET_TRANS_RULE_ID_IPV6] = 0xE003, + [MLX4_NET_TRANS_RULE_ID_IPV4] = 0xE002, + [MLX4_NET_TRANS_RULE_ID_TCP] = 0xE004, + [MLX4_NET_TRANS_RULE_ID_UDP] = 0xE006 + }; + + static const size_t __rule_hw_sz[] = { + [MLX4_NET_TRANS_RULE_ID_ETH] = + sizeof(struct mlx4_net_trans_rule_hw_eth), + [MLX4_NET_TRANS_RULE_ID_IB] = + sizeof(struct mlx4_net_trans_rule_hw_ib), + [MLX4_NET_TRANS_RULE_ID_IPV6] = 0, + [MLX4_NET_TRANS_RULE_ID_IPV4] = + sizeof(struct mlx4_net_trans_rule_hw_ipv4), + [MLX4_NET_TRANS_RULE_ID_TCP] = + sizeof(struct mlx4_net_trans_rule_hw_tcp_udp), + [MLX4_NET_TRANS_RULE_ID_UDP] = + sizeof(struct mlx4_net_trans_rule_hw_tcp_udp) + }; + if (spec->id >= MLX4_NET_TRANS_RULE_NUM) { + mlx4_err(dev, "Invalid network rule id. 
id = %d\n", spec->id); + return -EINVAL; + } + memset(rule_hw, 0, __rule_hw_sz[spec->id]); + rule_hw->id = cpu_to_be16(__sw_id_hw[spec->id]); + rule_hw->size = __rule_hw_sz[spec->id] >> 2; + + switch (spec->id) { + case MLX4_NET_TRANS_RULE_ID_ETH: + memcpy(rule_hw->eth.dst_mac, spec->eth.dst_mac, ETH_ALEN); + memcpy(rule_hw->eth.dst_mac_msk, spec->eth.dst_mac_msk, + ETH_ALEN); + memcpy(rule_hw->eth.src_mac, spec->eth.src_mac, ETH_ALEN); + memcpy(rule_hw->eth.src_mac_msk, spec->eth.src_mac_msk, + ETH_ALEN); + if (spec->eth.ether_type_enable) { + rule_hw->eth.ether_type_enable = 1; + rule_hw->eth.ether_type = spec->eth.ether_type; + } + rule_hw->eth.vlan_id = spec->eth.vlan_id; + rule_hw->eth.vlan_id_msk = spec->eth.vlan_id_msk; + break; + + case MLX4_NET_TRANS_RULE_ID_IB: + rule_hw->ib.qpn = spec->ib.r_qpn; + rule_hw->ib.qpn_mask = spec->ib.qpn_msk; + memcpy(&rule_hw->ib.dst_gid, &spec->ib.dst_gid, 16); + memcpy(&rule_hw->ib.dst_gid_msk, &spec->ib.dst_gid_msk, 16); + break; + + case MLX4_NET_TRANS_RULE_ID_IPV6: + return -EOPNOTSUPP; + + case MLX4_NET_TRANS_RULE_ID_IPV4: + rule_hw->ipv4.src_ip = spec->ipv4.src_ip; + rule_hw->ipv4.src_ip_msk = spec->ipv4.src_ip_msk; + rule_hw->ipv4.dst_ip = spec->ipv4.dst_ip; + rule_hw->ipv4.dst_ip_msk = spec->ipv4.dst_ip_msk; + break; + + case MLX4_NET_TRANS_RULE_ID_TCP: + case MLX4_NET_TRANS_RULE_ID_UDP: + rule_hw->tcp_udp.dst_port = spec->tcp_udp.dst_port; + rule_hw->tcp_udp.dst_port_msk = spec->tcp_udp.dst_port_msk; + rule_hw->tcp_udp.src_port = spec->tcp_udp.src_port; + rule_hw->tcp_udp.src_port_msk = spec->tcp_udp.src_port_msk; + break; + + default: + return -EINVAL; + } + + return __rule_hw_sz[spec->id]; +} + +static void mlx4_err_rule(struct mlx4_dev *dev, char *str, + struct mlx4_net_trans_rule *rule) +{ +#define BUF_SIZE 256 + struct mlx4_spec_list *cur; + char buf[BUF_SIZE]; + int len = 0; + + mlx4_err(dev, "%s", str); + len += snprintf(buf + len, BUF_SIZE - len, + "port = %d prio = 0x%x qp = 0x%x ", + rule->port, rule->priority, rule->qpn); + + list_for_each_entry(cur, &rule->list, list) { + switch (cur->id) { + case MLX4_NET_TRANS_RULE_ID_ETH: + len += snprintf(buf + len, BUF_SIZE - len, + "dmac = %pM ", &cur->eth.dst_mac); + if (cur->eth.ether_type) + len += snprintf(buf + len, BUF_SIZE - len, + "ethertype = 0x%x ", + be16_to_cpu(cur->eth.ether_type)); + if (cur->eth.vlan_id) + len += snprintf(buf + len, BUF_SIZE - len, + "vlan-id = %d ", + be16_to_cpu(cur->eth.vlan_id)); + break; + + case MLX4_NET_TRANS_RULE_ID_IPV4: + if (cur->ipv4.src_ip) + len += snprintf(buf + len, BUF_SIZE - len, + "src-ip = %pI4 ", + &cur->ipv4.src_ip); + if (cur->ipv4.dst_ip) + len += snprintf(buf + len, BUF_SIZE - len, + "dst-ip = %pI4 ", + &cur->ipv4.dst_ip); + break; + + case MLX4_NET_TRANS_RULE_ID_TCP: + case MLX4_NET_TRANS_RULE_ID_UDP: + if (cur->tcp_udp.src_port) + len += snprintf(buf + len, BUF_SIZE - len, + "src-port = %d ", + be16_to_cpu(cur->tcp_udp.src_port)); + if (cur->tcp_udp.dst_port) + len += snprintf(buf + len, BUF_SIZE - len, + "dst-port = %d ", + be16_to_cpu(cur->tcp_udp.dst_port)); + break; + + case MLX4_NET_TRANS_RULE_ID_IB: + len += snprintf(buf + len, BUF_SIZE - len, + "dst-gid = %pI6\n", cur->ib.dst_gid); + len += snprintf(buf + len, BUF_SIZE - len, + "dst-gid-mask = %pI6\n", + cur->ib.dst_gid_msk); + break; + + case MLX4_NET_TRANS_RULE_ID_IPV6: + break; + + default: + break; + } + } + len += snprintf(buf + len, BUF_SIZE - len, "\n"); + mlx4_err(dev, "%s", buf); + + if (len >= BUF_SIZE) + mlx4_err(dev, "Network rule error message was truncated, 
print buffer is too small.\n"); +} + +int mlx4_flow_attach(struct mlx4_dev *dev, + struct mlx4_net_trans_rule *rule, u64 *reg_id) +{ + struct mlx4_cmd_mailbox *mailbox; + struct mlx4_spec_list *cur; + u32 size = 0; + int ret; + + mailbox = mlx4_alloc_cmd_mailbox(dev); + if (IS_ERR(mailbox)) + return PTR_ERR(mailbox); + + memset(mailbox->buf, 0, sizeof(struct mlx4_net_trans_rule_hw_ctrl)); + trans_rule_ctrl_to_hw(rule, mailbox->buf); + + size += sizeof(struct mlx4_net_trans_rule_hw_ctrl); + + list_for_each_entry(cur, &rule->list, list) { + ret = parse_trans_rule(dev, cur, mailbox->buf + size); + if (ret < 0) { + mlx4_free_cmd_mailbox(dev, mailbox); + return -EINVAL; + } + size += ret; + } + + ret = mlx4_QP_FLOW_STEERING_ATTACH(dev, mailbox, size >> 2, reg_id); + if (ret == -ENOMEM) + mlx4_err_rule(dev, + "mcg table is full. Fail to register network rule.\n", + rule); + else if (ret) + mlx4_err_rule(dev, "Fail to register network rule.\n", rule); + + mlx4_free_cmd_mailbox(dev, mailbox); + + return ret; +} +EXPORT_SYMBOL_GPL(mlx4_flow_attach); + +int mlx4_flow_detach(struct mlx4_dev *dev, u64 reg_id) +{ + int err; + + err = mlx4_QP_FLOW_STEERING_DETACH(dev, reg_id); + if (err) + mlx4_err(dev, "Fail to detach network rule. registration id = 0x%llx\n", + reg_id); + return err; +} +EXPORT_SYMBOL_GPL(mlx4_flow_detach); + int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], int block_mcast_loopback, enum mlx4_protocol prot, enum mlx4_steer_type steer) @@ -866,49 +1205,159 @@ static int mlx4_QP_ATTACH(struct mlx4_dev *dev, struct mlx4_qp *qp, } int mlx4_multicast_attach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], - int block_mcast_loopback, enum mlx4_protocol prot) + u8 port, int block_mcast_loopback, + enum mlx4_protocol prot, u64 *reg_id) { - if (prot == MLX4_PROT_ETH && - !(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)) - return 0; - if (prot == MLX4_PROT_ETH) - gid[7] |= (MLX4_MC_STEER << 1); + switch (dev->caps.steering_mode) { + case MLX4_STEERING_MODE_A0: + if (prot == MLX4_PROT_ETH) + return 0; + + case MLX4_STEERING_MODE_B0: + if (prot == MLX4_PROT_ETH) + gid[7] |= (MLX4_MC_STEER << 1); + + if (mlx4_is_mfunc(dev)) + return mlx4_QP_ATTACH(dev, qp, gid, 1, + block_mcast_loopback, prot); + return mlx4_qp_attach_common(dev, qp, gid, + block_mcast_loopback, prot, + MLX4_MC_STEER); + + case MLX4_STEERING_MODE_DEVICE_MANAGED: { + struct mlx4_spec_list spec = { {NULL} }; + __be64 mac_mask = cpu_to_be64(MLX4_MAC_MASK << 16); + + struct mlx4_net_trans_rule rule = { + .queue_mode = MLX4_NET_TRANS_Q_FIFO, + .exclusive = 0, + .promisc_mode = MLX4_FS_PROMISC_NONE, + .priority = MLX4_DOMAIN_NIC, + }; + + rule.allow_loopback = ~block_mcast_loopback; + rule.port = port; + rule.qpn = qp->qpn; + INIT_LIST_HEAD(&rule.list); + + switch (prot) { + case MLX4_PROT_ETH: + spec.id = MLX4_NET_TRANS_RULE_ID_ETH; + memcpy(spec.eth.dst_mac, &gid[10], ETH_ALEN); + memcpy(spec.eth.dst_mac_msk, &mac_mask, ETH_ALEN); + break; - if (mlx4_is_mfunc(dev)) - return mlx4_QP_ATTACH(dev, qp, gid, 1, - block_mcast_loopback, prot); + case MLX4_PROT_IB_IPV6: + spec.id = MLX4_NET_TRANS_RULE_ID_IB; + memcpy(spec.ib.dst_gid, gid, 16); + memset(&spec.ib.dst_gid_msk, 0xff, 16); + break; + default: + return -EINVAL; + } + list_add_tail(&spec.list, &rule.list); - return mlx4_qp_attach_common(dev, qp, gid, block_mcast_loopback, - prot, MLX4_MC_STEER); + return mlx4_flow_attach(dev, &rule, reg_id); + } + + default: + return -EINVAL; + } } EXPORT_SYMBOL_GPL(mlx4_multicast_attach); int 
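mlx4_flow_attach() and mlx4_flow_detach() are the primitives the DEVICE_MANAGED branches above are built on: the caller chains one or more mlx4_spec_list entries onto an mlx4_net_trans_rule, attaches the rule, and keeps the returned 64-bit registration id for the eventual detach (the size handed to the firmware command is in dwords, hence the size >> 2 above). A condensed, illustrative caller; the helper name is hypothetical and error handling is minimal, but the types and enums are the ones this series adds to include/linux/mlx4/device.h:

#include <linux/list.h>
#include <linux/string.h>
#include <linux/etherdevice.h>
#include <linux/mlx4/device.h>

static int example_steer_dmac_to_qp(struct mlx4_dev *dev, u8 port, u32 qpn,
				    const u8 *dmac, u64 *reg_id)
{
	struct mlx4_spec_list spec = { .id = MLX4_NET_TRANS_RULE_ID_ETH };
	struct mlx4_net_trans_rule rule = {
		.queue_mode	= MLX4_NET_TRANS_Q_FIFO,
		.exclusive	= 0,
		.allow_loopback	= 1,
		.promisc_mode	= MLX4_FS_PROMISC_NONE,
		.priority	= MLX4_DOMAIN_NIC,
		.port		= port,
		.qpn		= qpn,
	};

	INIT_LIST_HEAD(&rule.list);

	/* One L2 spec: match the destination MAC exactly. */
	memcpy(spec.eth.dst_mac, dmac, ETH_ALEN);
	memset(spec.eth.dst_mac_msk, 0xff, ETH_ALEN);
	list_add_tail(&spec.list, &rule.list);

	/* On success *reg_id holds the firmware handle needed later for
	 * mlx4_flow_detach(dev, *reg_id). */
	return mlx4_flow_attach(dev, &rule, reg_id);
}

The resource-tracker changes further down store the same registration id per slave, so rules left behind by a dying VF can still be detached by the PF.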
mlx4_multicast_detach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], - enum mlx4_protocol prot) + enum mlx4_protocol prot, u64 reg_id) { - if (prot == MLX4_PROT_ETH && - !(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)) - return 0; + switch (dev->caps.steering_mode) { + case MLX4_STEERING_MODE_A0: + if (prot == MLX4_PROT_ETH) + return 0; - if (prot == MLX4_PROT_ETH) - gid[7] |= (MLX4_MC_STEER << 1); + case MLX4_STEERING_MODE_B0: + if (prot == MLX4_PROT_ETH) + gid[7] |= (MLX4_MC_STEER << 1); - if (mlx4_is_mfunc(dev)) - return mlx4_QP_ATTACH(dev, qp, gid, 0, 0, prot); + if (mlx4_is_mfunc(dev)) + return mlx4_QP_ATTACH(dev, qp, gid, 0, 0, prot); + + return mlx4_qp_detach_common(dev, qp, gid, prot, + MLX4_MC_STEER); + + case MLX4_STEERING_MODE_DEVICE_MANAGED: + return mlx4_flow_detach(dev, reg_id); - return mlx4_qp_detach_common(dev, qp, gid, prot, MLX4_MC_STEER); + default: + return -EINVAL; + } } EXPORT_SYMBOL_GPL(mlx4_multicast_detach); +int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port, + u32 qpn, enum mlx4_net_trans_promisc_mode mode) +{ + struct mlx4_net_trans_rule rule; + u64 *regid_p; + + switch (mode) { + case MLX4_FS_PROMISC_UPLINK: + case MLX4_FS_PROMISC_FUNCTION_PORT: + regid_p = &dev->regid_promisc_array[port]; + break; + case MLX4_FS_PROMISC_ALL_MULTI: + regid_p = &dev->regid_allmulti_array[port]; + break; + default: + return -1; + } + + if (*regid_p != 0) + return -1; + + rule.promisc_mode = mode; + rule.port = port; + rule.qpn = qpn; + INIT_LIST_HEAD(&rule.list); + mlx4_err(dev, "going promisc on %x\n", port); + + return mlx4_flow_attach(dev, &rule, regid_p); +} +EXPORT_SYMBOL_GPL(mlx4_flow_steer_promisc_add); + +int mlx4_flow_steer_promisc_remove(struct mlx4_dev *dev, u8 port, + enum mlx4_net_trans_promisc_mode mode) +{ + int ret; + u64 *regid_p; + + switch (mode) { + case MLX4_FS_PROMISC_UPLINK: + case MLX4_FS_PROMISC_FUNCTION_PORT: + regid_p = &dev->regid_promisc_array[port]; + break; + case MLX4_FS_PROMISC_ALL_MULTI: + regid_p = &dev->regid_allmulti_array[port]; + break; + default: + return -1; + } + + if (*regid_p == 0) + return -1; + + ret = mlx4_flow_detach(dev, *regid_p); + if (ret == 0) + *regid_p = 0; + + return ret; +} +EXPORT_SYMBOL_GPL(mlx4_flow_steer_promisc_remove); + int mlx4_unicast_attach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], int block_mcast_loopback, enum mlx4_protocol prot) { - if (prot == MLX4_PROT_ETH && - !(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - return 0; - if (prot == MLX4_PROT_ETH) gid[7] |= (MLX4_UC_STEER << 1); @@ -924,10 +1373,6 @@ EXPORT_SYMBOL_GPL(mlx4_unicast_attach); int mlx4_unicast_detach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], enum mlx4_protocol prot) { - if (prot == MLX4_PROT_ETH && - !(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - return 0; - if (prot == MLX4_PROT_ETH) gid[7] |= (MLX4_UC_STEER << 1); @@ -968,9 +1413,6 @@ static int mlx4_PROMISC(struct mlx4_dev *dev, u32 qpn, int mlx4_multicast_promisc_add(struct mlx4_dev *dev, u32 qpn, u8 port) { - if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)) - return 0; - if (mlx4_is_mfunc(dev)) return mlx4_PROMISC(dev, qpn, MLX4_MC_STEER, 1, port); @@ -980,9 +1422,6 @@ EXPORT_SYMBOL_GPL(mlx4_multicast_promisc_add); int mlx4_multicast_promisc_remove(struct mlx4_dev *dev, u32 qpn, u8 port) { - if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER)) - return 0; - if (mlx4_is_mfunc(dev)) return mlx4_PROMISC(dev, qpn, MLX4_MC_STEER, 0, port); @@ -992,9 +1431,6 @@ EXPORT_SYMBOL_GPL(mlx4_multicast_promisc_remove); int 
mlx4_unicast_promisc_add(struct mlx4_dev *dev, u32 qpn, u8 port) { - if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - return 0; - if (mlx4_is_mfunc(dev)) return mlx4_PROMISC(dev, qpn, MLX4_UC_STEER, 1, port); @@ -1004,9 +1440,6 @@ EXPORT_SYMBOL_GPL(mlx4_unicast_promisc_add); int mlx4_unicast_promisc_remove(struct mlx4_dev *dev, u32 qpn, u8 port) { - if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) - return 0; - if (mlx4_is_mfunc(dev)) return mlx4_PROMISC(dev, qpn, MLX4_UC_STEER, 0, port); @@ -1019,6 +1452,10 @@ int mlx4_init_mcg_table(struct mlx4_dev *dev) struct mlx4_priv *priv = mlx4_priv(dev); int err; + /* No need for mcg_table when fw managed the mcg table*/ + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) + return 0; err = mlx4_bitmap_init(&priv->mcg_table.bitmap, dev->caps.num_amgms, dev->caps.num_amgms - 1, 0, 0); if (err) @@ -1031,5 +1468,7 @@ int mlx4_init_mcg_table(struct mlx4_dev *dev) void mlx4_cleanup_mcg_table(struct mlx4_dev *dev) { - mlx4_bitmap_cleanup(&mlx4_priv(dev)->mcg_table.bitmap); + if (dev->caps.steering_mode != + MLX4_STEERING_MODE_DEVICE_MANAGED) + mlx4_bitmap_cleanup(&mlx4_priv(dev)->mcg_table.bitmap); } diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h index e5d20220762c..d2c436b10fbf 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h @@ -39,6 +39,7 @@ #include <linux/mutex.h> #include <linux/radix-tree.h> +#include <linux/rbtree.h> #include <linux/timer.h> #include <linux/semaphore.h> #include <linux/workqueue.h> @@ -53,6 +54,17 @@ #define DRV_VERSION "1.1" #define DRV_RELDATE "Dec, 2011" +#define MLX4_FS_UDP_UC_EN (1 << 1) +#define MLX4_FS_TCP_UC_EN (1 << 2) +#define MLX4_FS_NUM_OF_L2_ADDR 8 +#define MLX4_FS_MGM_LOG_ENTRY_SIZE 7 +#define MLX4_FS_NUM_MCG (1 << 17) + +enum { + MLX4_FS_L2_HASH = 0, + MLX4_FS_L2_L3_L4_HASH, +}; + #define MLX4_NUM_UP 8 #define MLX4_NUM_TC 8 #define MLX4_RATELIMIT_UNITS 3 /* 100 Mbps */ @@ -137,6 +149,7 @@ enum mlx4_resource { RES_VLAN, RES_EQ, RES_COUNTER, + RES_FS_RULE, MLX4_NUM_OF_RESOURCE_TYPE }; @@ -509,7 +522,7 @@ struct slave_list { struct mlx4_resource_tracker { spinlock_t lock; /* tree for each resources */ - struct radix_tree_root res_tree[MLX4_NUM_OF_RESOURCE_TYPE]; + struct rb_root res_tree[MLX4_NUM_OF_RESOURCE_TYPE]; /* num_of_slave's lists, one per slave */ struct slave_list *slave_list; }; @@ -703,6 +716,7 @@ struct mlx4_set_port_rqp_calc_context { struct mlx4_mac_entry { u64 mac; + u64 reg_id; }; struct mlx4_port_info { @@ -776,6 +790,7 @@ struct mlx4_priv { struct mutex bf_mutex; struct io_mapping *bf_mapping; int reserved_mtts; + int fs_hash_mode; }; static inline struct mlx4_priv *mlx4_priv(struct mlx4_dev *dev) @@ -1032,7 +1047,7 @@ int mlx4_SET_PORT(struct mlx4_dev *dev, u8 port); /* resource tracker functions*/ int mlx4_get_slave_from_resource_id(struct mlx4_dev *dev, enum mlx4_resource resource_type, - int resource_id, int *slave); + u64 resource_id, int *slave); void mlx4_delete_all_resources_for_slave(struct mlx4_dev *dev, int slave_id); int mlx4_init_resource_tracker(struct mlx4_dev *dev); @@ -1117,6 +1132,16 @@ int mlx4_QUERY_IF_STAT_wrapper(struct mlx4_dev *dev, int slave, struct mlx4_cmd_mailbox *inbox, struct mlx4_cmd_mailbox *outbox, struct mlx4_cmd_info *cmd); +int mlx4_QP_FLOW_STEERING_ATTACH_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, + struct mlx4_cmd_mailbox *outbox, + struct mlx4_cmd_info *cmd); +int 
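With device managed steering the multicast group table is owned entirely by firmware, which is why mlx4_init_mcg_table() above bails out early and why the profile.c hunk further down sizes the table from MLX4_FS_NUM_MCG without splitting it into MGM/AMGM halves. A back-of-the-envelope check of the ICM footprint implied by the two constants defined here (illustration only; the real sizing is done by mlx4_make_profile()):

#include <stdio.h>

int main(void)
{
	unsigned long long entries    = 1ULL << 17; /* MLX4_FS_NUM_MCG */
	unsigned long long entry_size = 1ULL << 7;  /* 1 << MLX4_FS_MGM_LOG_ENTRY_SIZE */

	/* 2^17 entries of 128 bytes each: 2^24 bytes, i.e. 16 MiB of MCG ICM. */
	printf("MCG ICM: %llu MiB\n", (entries * entry_size) >> 20);
	return 0;
}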
mlx4_QP_FLOW_STEERING_DETACH_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, + struct mlx4_cmd_mailbox *outbox, + struct mlx4_cmd_info *cmd); int mlx4_get_mgm_entry_size(struct mlx4_dev *dev); int mlx4_get_qp_per_mgm(struct mlx4_dev *dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h index 225c20d47900..5f1ab105debc 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -43,6 +43,7 @@ #ifdef CONFIG_MLX4_EN_DCB #include <linux/dcbnl.h> #endif +#include <linux/cpu_rmap.h> #include <linux/mlx4/device.h> #include <linux/mlx4/qp.h> @@ -75,6 +76,10 @@ #define STAMP_SHIFT 31 #define STAMP_VAL 0x7fffffff #define STATS_DELAY (HZ / 4) +#define MAX_NUM_OF_FS_RULES 256 + +#define MLX4_EN_FILTER_HASH_SHIFT 4 +#define MLX4_EN_FILTER_EXPIRY_QUOTA 60 /* Typical TSO descriptor with 16 gather entries is 352 bytes... */ #define MAX_DESC_SIZE 512 @@ -106,7 +111,7 @@ enum { #define MLX4_EN_MAX_TX_SIZE 8192 #define MLX4_EN_MAX_RX_SIZE 8192 -/* Minimum ring size for our page-allocation sceme to work */ +/* Minimum ring size for our page-allocation scheme to work */ #define MLX4_EN_MIN_RX_SIZE (MLX4_EN_ALLOC_SIZE / SMP_CACHE_BYTES) #define MLX4_EN_MIN_TX_SIZE (4096 / TXBB_SIZE) @@ -227,6 +232,7 @@ struct mlx4_en_tx_desc { struct mlx4_en_rx_alloc { struct page *page; + dma_addr_t dma; u16 offset; }; @@ -404,6 +410,19 @@ struct mlx4_en_perf_stats { #define NUM_PERF_COUNTERS 6 }; +enum mlx4_en_mclist_act { + MCLIST_NONE, + MCLIST_REM, + MCLIST_ADD, +}; + +struct mlx4_en_mc_list { + struct list_head list; + enum mlx4_en_mclist_act action; + u8 addr[ETH_ALEN]; + u64 reg_id; +}; + struct mlx4_en_frag_info { u16 frag_size; u16 frag_prefix_size; @@ -422,6 +441,11 @@ struct mlx4_en_frag_info { #endif +struct ethtool_flow_id { + struct ethtool_rx_flow_spec flow_spec; + u64 id; +}; + struct mlx4_en_priv { struct mlx4_en_dev *mdev; struct mlx4_en_port_profile *prof; @@ -431,6 +455,7 @@ struct mlx4_en_priv { struct net_device_stats ret_stats; struct mlx4_en_port_state port_state; spinlock_t stats_lock; + struct ethtool_flow_id ethtool_rules[MAX_NUM_OF_FS_RULES]; unsigned long last_moder_packets[MAX_RX_RINGS]; unsigned long last_moder_tx_packets; @@ -480,6 +505,7 @@ struct mlx4_en_priv { struct mlx4_en_rx_ring rx_ring[MAX_RX_RINGS]; struct mlx4_en_cq *tx_cq; struct mlx4_en_cq rx_cq[MAX_RX_RINGS]; + struct mlx4_qp drop_qp; struct work_struct mcast_task; struct work_struct mac_task; struct work_struct watchdog_task; @@ -489,8 +515,9 @@ struct mlx4_en_priv { struct mlx4_en_pkt_stats pkstats; struct mlx4_en_port_stats port_stats; u64 stats_bitmap; - char *mc_addrs; - int mc_addrs_cnt; + struct list_head mc_list; + struct list_head curr_list; + u64 broadcast_id; struct mlx4_en_stat_out_mbox hw_stats; int vids[128]; bool wol; @@ -501,6 +528,13 @@ struct mlx4_en_priv { struct ieee_ets ets; u16 maxrate[IEEE_8021QAZ_MAX_TCS]; #endif +#ifdef CONFIG_RFS_ACCEL + spinlock_t filters_lock; + int last_filter_id; + struct list_head filters; + struct hlist_head filter_hash[1 << MLX4_EN_FILTER_HASH_SHIFT]; +#endif + }; enum mlx4_en_wol { @@ -565,6 +599,8 @@ void mlx4_en_unmap_buffer(struct mlx4_buf *buf); void mlx4_en_calc_rx_buf(struct net_device *dev); int mlx4_en_config_rss_steer(struct mlx4_en_priv *priv); void mlx4_en_release_rss_steer(struct mlx4_en_priv *priv); +int mlx4_en_create_drop_qp(struct mlx4_en_priv *priv); +void mlx4_en_destroy_drop_qp(struct mlx4_en_priv 
*priv); int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring); void mlx4_en_rx_irq(struct mlx4_cq *mcq); @@ -578,6 +614,11 @@ int mlx4_en_QUERY_PORT(struct mlx4_en_dev *mdev, u8 port); extern const struct dcbnl_rtnl_ops mlx4_en_dcbnl_ops; #endif +#ifdef CONFIG_RFS_ACCEL +void mlx4_en_cleanup_filters(struct mlx4_en_priv *priv, + struct mlx4_en_rx_ring *rx_ring); +#endif + #define MLX4_EN_NUM_SELF_TEST 5 void mlx4_en_ex_selftest(struct net_device *dev, u32 *flags, u64 *buf); u64 mlx4_en_mac_to_u64(u8 *addr); diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c index a8fb52992c64..028833ffc56f 100644 --- a/drivers/net/ethernet/mellanox/mlx4/port.c +++ b/drivers/net/ethernet/mellanox/mlx4/port.c @@ -39,7 +39,6 @@ #include "mlx4.h" #define MLX4_MAC_VALID (1ull << 63) -#define MLX4_MAC_MASK 0xffffffffffffULL #define MLX4_VLAN_VALID (1u << 31) #define MLX4_VLAN_MASK 0xfff @@ -75,21 +74,54 @@ void mlx4_init_vlan_table(struct mlx4_dev *dev, struct mlx4_vlan_table *table) table->total = 0; } -static int mlx4_uc_steer_add(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) +static int mlx4_uc_steer_add(struct mlx4_dev *dev, u8 port, + u64 mac, int *qpn, u64 *reg_id) { - struct mlx4_qp qp; - u8 gid[16] = {0}; __be64 be_mac; int err; - qp.qpn = *qpn; - - mac &= 0xffffffffffffULL; + mac &= MLX4_MAC_MASK; be_mac = cpu_to_be64(mac << 16); - memcpy(&gid[10], &be_mac, ETH_ALEN); - gid[5] = port; - err = mlx4_unicast_attach(dev, &qp, gid, 0, MLX4_PROT_ETH); + switch (dev->caps.steering_mode) { + case MLX4_STEERING_MODE_B0: { + struct mlx4_qp qp; + u8 gid[16] = {0}; + + qp.qpn = *qpn; + memcpy(&gid[10], &be_mac, ETH_ALEN); + gid[5] = port; + + err = mlx4_unicast_attach(dev, &qp, gid, 0, MLX4_PROT_ETH); + break; + } + case MLX4_STEERING_MODE_DEVICE_MANAGED: { + struct mlx4_spec_list spec_eth = { {NULL} }; + __be64 mac_mask = cpu_to_be64(MLX4_MAC_MASK << 16); + + struct mlx4_net_trans_rule rule = { + .queue_mode = MLX4_NET_TRANS_Q_FIFO, + .exclusive = 0, + .allow_loopback = 1, + .promisc_mode = MLX4_FS_PROMISC_NONE, + .priority = MLX4_DOMAIN_NIC, + }; + + rule.port = port; + rule.qpn = *qpn; + INIT_LIST_HEAD(&rule.list); + + spec_eth.id = MLX4_NET_TRANS_RULE_ID_ETH; + memcpy(spec_eth.eth.dst_mac, &be_mac, ETH_ALEN); + memcpy(spec_eth.eth.dst_mac_msk, &mac_mask, ETH_ALEN); + list_add_tail(&spec_eth.list, &rule.list); + + err = mlx4_flow_attach(dev, &rule, reg_id); + break; + } + default: + return -EINVAL; + } if (err) mlx4_warn(dev, "Failed Attaching Unicast\n"); @@ -97,19 +129,30 @@ static int mlx4_uc_steer_add(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) } static void mlx4_uc_steer_release(struct mlx4_dev *dev, u8 port, - u64 mac, int qpn) + u64 mac, int qpn, u64 reg_id) { - struct mlx4_qp qp; - u8 gid[16] = {0}; - __be64 be_mac; + switch (dev->caps.steering_mode) { + case MLX4_STEERING_MODE_B0: { + struct mlx4_qp qp; + u8 gid[16] = {0}; + __be64 be_mac; - qp.qpn = qpn; - mac &= 0xffffffffffffULL; - be_mac = cpu_to_be64(mac << 16); - memcpy(&gid[10], &be_mac, ETH_ALEN); - gid[5] = port; + qp.qpn = qpn; + mac &= MLX4_MAC_MASK; + be_mac = cpu_to_be64(mac << 16); + memcpy(&gid[10], &be_mac, ETH_ALEN); + gid[5] = port; - mlx4_unicast_detach(dev, &qp, gid, MLX4_PROT_ETH); + mlx4_unicast_detach(dev, &qp, gid, MLX4_PROT_ETH); + break; + } + case MLX4_STEERING_MODE_DEVICE_MANAGED: { + mlx4_flow_detach(dev, reg_id); + break; + } + default: + mlx4_err(dev, "Invalid steering mode.\n"); + } } static int validate_index(struct mlx4_dev *dev, @@ -144,6 
+187,7 @@ int mlx4_get_eth_qp(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) struct mlx4_mac_entry *entry; int index = 0; int err = 0; + u64 reg_id; mlx4_dbg(dev, "Registering MAC: 0x%llx for adding\n", (unsigned long long) mac); @@ -155,7 +199,7 @@ int mlx4_get_eth_qp(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) return err; } - if (!(dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER)) { + if (dev->caps.steering_mode == MLX4_STEERING_MODE_A0) { *qpn = info->base_qpn + index; return 0; } @@ -167,7 +211,7 @@ int mlx4_get_eth_qp(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) goto qp_err; } - err = mlx4_uc_steer_add(dev, port, mac, qpn); + err = mlx4_uc_steer_add(dev, port, mac, qpn, ®_id); if (err) goto steer_err; @@ -177,6 +221,7 @@ int mlx4_get_eth_qp(struct mlx4_dev *dev, u8 port, u64 mac, int *qpn) goto alloc_err; } entry->mac = mac; + entry->reg_id = reg_id; err = radix_tree_insert(&info->mac_tree, *qpn, entry); if (err) goto insert_err; @@ -186,7 +231,7 @@ insert_err: kfree(entry); alloc_err: - mlx4_uc_steer_release(dev, port, mac, *qpn); + mlx4_uc_steer_release(dev, port, mac, *qpn, reg_id); steer_err: mlx4_qp_release_range(dev, *qpn, 1); @@ -206,13 +251,14 @@ void mlx4_put_eth_qp(struct mlx4_dev *dev, u8 port, u64 mac, int qpn) (unsigned long long) mac); mlx4_unregister_mac(dev, port, mac); - if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER) { + if (dev->caps.steering_mode != MLX4_STEERING_MODE_A0) { entry = radix_tree_lookup(&info->mac_tree, qpn); if (entry) { mlx4_dbg(dev, "Releasing qp: port %d, mac 0x%llx," " qpn %d\n", port, (unsigned long long) mac, qpn); - mlx4_uc_steer_release(dev, port, entry->mac, qpn); + mlx4_uc_steer_release(dev, port, entry->mac, + qpn, entry->reg_id); mlx4_qp_release_range(dev, qpn, 1); radix_tree_delete(&info->mac_tree, qpn); kfree(entry); @@ -359,15 +405,18 @@ int mlx4_replace_mac(struct mlx4_dev *dev, u8 port, int qpn, u64 new_mac) int index = qpn - info->base_qpn; int err = 0; - if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER) { + if (dev->caps.steering_mode != MLX4_STEERING_MODE_A0) { entry = radix_tree_lookup(&info->mac_tree, qpn); if (!entry) return -EINVAL; - mlx4_uc_steer_release(dev, port, entry->mac, qpn); + mlx4_uc_steer_release(dev, port, entry->mac, + qpn, entry->reg_id); mlx4_unregister_mac(dev, port, entry->mac); entry->mac = new_mac; + entry->reg_id = 0; mlx4_register_mac(dev, port, new_mac); - err = mlx4_uc_steer_add(dev, port, entry->mac, &qpn); + err = mlx4_uc_steer_add(dev, port, entry->mac, + &qpn, &entry->reg_id); return err; } @@ -803,8 +852,7 @@ int mlx4_SET_PORT_qpn_calc(struct mlx4_dev *dev, u8 port, u32 base_qpn, u32 m_promisc = (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER) ? 
MCAST_DIRECT : MCAST_DEFAULT; - if (dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_MC_STEER && - dev->caps.flags & MLX4_DEV_CAP_FLAG_VEP_UC_STEER) + if (dev->caps.steering_mode != MLX4_STEERING_MODE_A0) return 0; mailbox = mlx4_alloc_cmd_mailbox(dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/profile.c b/drivers/net/ethernet/mellanox/mlx4/profile.c index b83bc928d52a..9ee4725363d5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/profile.c +++ b/drivers/net/ethernet/mellanox/mlx4/profile.c @@ -237,13 +237,19 @@ u64 mlx4_make_profile(struct mlx4_dev *dev, init_hca->mtt_base = profile[i].start; break; case MLX4_RES_MCG: - dev->caps.num_mgms = profile[i].num >> 1; - dev->caps.num_amgms = profile[i].num >> 1; init_hca->mc_base = profile[i].start; init_hca->log_mc_entry_sz = ilog2(mlx4_get_mgm_entry_size(dev)); init_hca->log_mc_table_sz = profile[i].log_num; - init_hca->log_mc_hash_sz = profile[i].log_num - 1; + if (dev->caps.steering_mode == + MLX4_STEERING_MODE_DEVICE_MANAGED) { + dev->caps.num_mgms = profile[i].num; + } else { + init_hca->log_mc_hash_sz = + profile[i].log_num - 1; + dev->caps.num_mgms = profile[i].num >> 1; + dev->caps.num_amgms = profile[i].num >> 1; + } break; default: break; diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c index b45d0e7f6ab0..94ceddd17ab2 100644 --- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c @@ -41,13 +41,12 @@ #include <linux/slab.h> #include <linux/mlx4/cmd.h> #include <linux/mlx4/qp.h> +#include <linux/if_ether.h> #include "mlx4.h" #include "fw.h" #define MLX4_MAC_VALID (1ull << 63) -#define MLX4_MAC_MASK 0x7fffffffffffffffULL -#define ETH_ALEN 6 struct mac_res { struct list_head list; @@ -57,7 +56,8 @@ struct mac_res { struct res_common { struct list_head list; - u32 res_id; + struct rb_node node; + u64 res_id; int owner; int state; int from_state; @@ -189,6 +189,58 @@ struct res_xrcdn { int port; }; +enum res_fs_rule_states { + RES_FS_RULE_BUSY = RES_ANY_BUSY, + RES_FS_RULE_ALLOCATED, +}; + +struct res_fs_rule { + struct res_common com; +}; + +static void *res_tracker_lookup(struct rb_root *root, u64 res_id) +{ + struct rb_node *node = root->rb_node; + + while (node) { + struct res_common *res = container_of(node, struct res_common, + node); + + if (res_id < res->res_id) + node = node->rb_left; + else if (res_id > res->res_id) + node = node->rb_right; + else + return res; + } + return NULL; +} + +static int res_tracker_insert(struct rb_root *root, struct res_common *res) +{ + struct rb_node **new = &(root->rb_node), *parent = NULL; + + /* Figure out where to put new node */ + while (*new) { + struct res_common *this = container_of(*new, struct res_common, + node); + + parent = *new; + if (res->res_id < this->res_id) + new = &((*new)->rb_left); + else if (res->res_id > this->res_id) + new = &((*new)->rb_right); + else + return -EEXIST; + } + + /* Add new node and rebalance tree. 
*/ + rb_link_node(&res->node, parent, new); + rb_insert_color(&res->node, root); + + return 0; +} + /* For Debug uses */ static const char *ResourceType(enum mlx4_resource rt) { @@ -201,6 +253,7 @@ static const char *ResourceType(enum mlx4_resource rt) case RES_MAC: return "RES_MAC"; case RES_EQ: return "RES_EQ"; case RES_COUNTER: return "RES_COUNTER"; + case RES_FS_RULE: return "RES_FS_RULE"; case RES_XRCD: return "RES_XRCD"; default: return "Unknown resource type !!!"; }; @@ -228,8 +281,7 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev) mlx4_dbg(dev, "Started init_resource_tracker: %ld slaves\n", dev->num_slaves); for (i = 0 ; i < MLX4_NUM_OF_RESOURCE_TYPE; i++) - INIT_RADIX_TREE(&priv->mfunc.master.res_tracker.res_tree[i], - GFP_ATOMIC|__GFP_NOWARN); + priv->mfunc.master.res_tracker.res_tree[i] = RB_ROOT; spin_lock_init(&priv->mfunc.master.res_tracker.lock); return 0 ; @@ -277,11 +329,11 @@ static void *find_res(struct mlx4_dev *dev, int res_id, { struct mlx4_priv *priv = mlx4_priv(dev); - return radix_tree_lookup(&priv->mfunc.master.res_tracker.res_tree[type], - res_id); + return res_tracker_lookup(&priv->mfunc.master.res_tracker.res_tree[type], + res_id); } -static int get_res(struct mlx4_dev *dev, int slave, int res_id, +static int get_res(struct mlx4_dev *dev, int slave, u64 res_id, enum mlx4_resource type, void *res) { @@ -307,7 +359,7 @@ static int get_res(struct mlx4_dev *dev, int slave, int res_id, r->from_state = r->state; r->state = RES_ANY_BUSY; - mlx4_dbg(dev, "res %s id 0x%x to busy\n", + mlx4_dbg(dev, "res %s id 0x%llx to busy\n", ResourceType(type), r->res_id); if (res) @@ -320,7 +372,7 @@ exit: int mlx4_get_slave_from_resource_id(struct mlx4_dev *dev, enum mlx4_resource type, - int res_id, int *slave) + u64 res_id, int *slave) { struct res_common *r; @@ -341,7 +393,7 @@ int mlx4_get_slave_from_resource_id(struct mlx4_dev *dev, return err; } -static void put_res(struct mlx4_dev *dev, int slave, int res_id, +static void put_res(struct mlx4_dev *dev, int slave, u64 res_id, enum mlx4_resource type) { struct res_common *r; @@ -473,7 +525,21 @@ static struct res_common *alloc_xrcdn_tr(int id) return &ret->com; } -static struct res_common *alloc_tr(int id, enum mlx4_resource type, int slave, +static struct res_common *alloc_fs_rule_tr(u64 id) +{ + struct res_fs_rule *ret; + + ret = kzalloc(sizeof *ret, GFP_KERNEL); + if (!ret) + return NULL; + + ret->com.res_id = id; + ret->com.state = RES_FS_RULE_ALLOCATED; + + return &ret->com; +} + +static struct res_common *alloc_tr(u64 id, enum mlx4_resource type, int slave, int extra) { struct res_common *ret; @@ -506,6 +572,9 @@ static struct res_common *alloc_tr(int id, enum mlx4_resource type, int slave, case RES_XRCD: ret = alloc_xrcdn_tr(id); break; + case RES_FS_RULE: + ret = alloc_fs_rule_tr(id); + break; default: return NULL; } @@ -515,7 +584,7 @@ static struct res_common *alloc_tr(int id, enum mlx4_resource type, int slave, return ret; } -static int add_res_range(struct mlx4_dev *dev, int slave, int base, int count, +static int add_res_range(struct mlx4_dev *dev, int slave, u64 base, int count, enum mlx4_resource type, int extra) { int i; @@ -523,7 +592,7 @@ static int add_res_range(struct mlx4_dev *dev, int slave, int base, int count, struct mlx4_priv *priv = mlx4_priv(dev); struct res_common **res_arr; struct mlx4_resource_tracker *tracker = &priv->mfunc.master.res_tracker; - struct radix_tree_root *root = &tracker->res_tree[type]; + struct rb_root *root = &tracker->res_tree[type]; res_arr = kzalloc(count * sizeof 
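The tracker switches from a radix tree to an rb-tree and widens res_id to 64 bits because device managed rules are tracked by the opaque registration id that QP_FLOW_STEERING_ATTACH returns, and such ids cannot serve as radix-tree indices on 32-bit hosts. A minimal, illustrative use of the two helpers defined above; in the driver this bookkeeping is done by add_res_range() against tracker->res_tree[RES_FS_RULE], so the function below (hypothetical name, assumed to sit next to the helpers in resource_tracker.c) is only a sketch:

static int example_track_rule(struct rb_root *root, u64 reg_id, int slave)
{
	struct res_common *res;

	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res)
		return -ENOMEM;

	res->res_id = reg_id;	/* the 64-bit id returned by the attach command */
	res->owner  = slave;
	res->state  = RES_FS_RULE_ALLOCATED;

	if (res_tracker_insert(root, res)) {	/* -EEXIST if the id is already tracked */
		kfree(res);
		return -EEXIST;
	}

	/* res_tracker_lookup(root, reg_id) now returns this entry. */
	return 0;
}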
*res_arr, GFP_KERNEL); if (!res_arr) @@ -546,7 +615,7 @@ static int add_res_range(struct mlx4_dev *dev, int slave, int base, int count, err = -EEXIST; goto undo; } - err = radix_tree_insert(root, base + i, res_arr[i]); + err = res_tracker_insert(root, res_arr[i]); if (err) goto undo; list_add_tail(&res_arr[i]->list, @@ -559,7 +628,7 @@ static int add_res_range(struct mlx4_dev *dev, int slave, int base, int count, undo: for (--i; i >= base; --i) - radix_tree_delete(&tracker->res_tree[type], i); + rb_erase(&res_arr[i]->node, root); spin_unlock_irq(mlx4_tlock(dev)); @@ -638,6 +707,16 @@ static int remove_xrcdn_ok(struct res_xrcdn *res) return 0; } +static int remove_fs_rule_ok(struct res_fs_rule *res) +{ + if (res->com.state == RES_FS_RULE_BUSY) + return -EBUSY; + else if (res->com.state != RES_FS_RULE_ALLOCATED) + return -EPERM; + + return 0; +} + static int remove_cq_ok(struct res_cq *res) { if (res->com.state == RES_CQ_BUSY) @@ -679,15 +758,17 @@ static int remove_ok(struct res_common *res, enum mlx4_resource type, int extra) return remove_counter_ok((struct res_counter *)res); case RES_XRCD: return remove_xrcdn_ok((struct res_xrcdn *)res); + case RES_FS_RULE: + return remove_fs_rule_ok((struct res_fs_rule *)res); default: return -EINVAL; } } -static int rem_res_range(struct mlx4_dev *dev, int slave, int base, int count, +static int rem_res_range(struct mlx4_dev *dev, int slave, u64 base, int count, enum mlx4_resource type, int extra) { - int i; + u64 i; int err; struct mlx4_priv *priv = mlx4_priv(dev); struct mlx4_resource_tracker *tracker = &priv->mfunc.master.res_tracker; @@ -695,7 +776,7 @@ static int rem_res_range(struct mlx4_dev *dev, int slave, int base, int count, spin_lock_irq(mlx4_tlock(dev)); for (i = base; i < base + count; ++i) { - r = radix_tree_lookup(&tracker->res_tree[type], i); + r = res_tracker_lookup(&tracker->res_tree[type], i); if (!r) { err = -ENOENT; goto out; @@ -710,8 +791,8 @@ static int rem_res_range(struct mlx4_dev *dev, int slave, int base, int count, } for (i = base; i < base + count; ++i) { - r = radix_tree_lookup(&tracker->res_tree[type], i); - radix_tree_delete(&tracker->res_tree[type], i); + r = res_tracker_lookup(&tracker->res_tree[type], i); + rb_erase(&r->node, &tracker->res_tree[type]); list_del(&r->list); kfree(r); } @@ -733,7 +814,7 @@ static int qp_res_start_move_to(struct mlx4_dev *dev, int slave, int qpn, int err = 0; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[RES_QP], qpn); + r = res_tracker_lookup(&tracker->res_tree[RES_QP], qpn); if (!r) err = -ENOENT; else if (r->com.owner != slave) @@ -741,7 +822,7 @@ static int qp_res_start_move_to(struct mlx4_dev *dev, int slave, int qpn, else { switch (state) { case RES_QP_BUSY: - mlx4_dbg(dev, "%s: failed RES_QP, 0x%x\n", + mlx4_dbg(dev, "%s: failed RES_QP, 0x%llx\n", __func__, r->com.res_id); err = -EBUSY; break; @@ -750,7 +831,7 @@ static int qp_res_start_move_to(struct mlx4_dev *dev, int slave, int qpn, if (r->com.state == RES_QP_MAPPED && !alloc) break; - mlx4_dbg(dev, "failed RES_QP, 0x%x\n", r->com.res_id); + mlx4_dbg(dev, "failed RES_QP, 0x%llx\n", r->com.res_id); err = -EINVAL; break; @@ -759,7 +840,7 @@ static int qp_res_start_move_to(struct mlx4_dev *dev, int slave, int qpn, r->com.state == RES_QP_HW) break; else { - mlx4_dbg(dev, "failed RES_QP, 0x%x\n", + mlx4_dbg(dev, "failed RES_QP, 0x%llx\n", r->com.res_id); err = -EINVAL; } @@ -779,7 +860,7 @@ static int qp_res_start_move_to(struct mlx4_dev *dev, int slave, int qpn, r->com.to_state = state; r->com.state = 
RES_QP_BUSY; if (qp) - *qp = (struct res_qp *)r; + *qp = r; } } @@ -797,7 +878,7 @@ static int mr_res_start_move_to(struct mlx4_dev *dev, int slave, int index, int err = 0; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[RES_MPT], index); + r = res_tracker_lookup(&tracker->res_tree[RES_MPT], index); if (!r) err = -ENOENT; else if (r->com.owner != slave) @@ -832,7 +913,7 @@ static int mr_res_start_move_to(struct mlx4_dev *dev, int slave, int index, r->com.to_state = state; r->com.state = RES_MPT_BUSY; if (mpt) - *mpt = (struct res_mpt *)r; + *mpt = r; } } @@ -850,7 +931,7 @@ static int eq_res_start_move_to(struct mlx4_dev *dev, int slave, int index, int err = 0; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[RES_EQ], index); + r = res_tracker_lookup(&tracker->res_tree[RES_EQ], index); if (!r) err = -ENOENT; else if (r->com.owner != slave) @@ -898,7 +979,7 @@ static int cq_res_start_move_to(struct mlx4_dev *dev, int slave, int cqn, int err; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[RES_CQ], cqn); + r = res_tracker_lookup(&tracker->res_tree[RES_CQ], cqn); if (!r) err = -ENOENT; else if (r->com.owner != slave) @@ -952,7 +1033,7 @@ static int srq_res_start_move_to(struct mlx4_dev *dev, int slave, int index, int err = 0; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[RES_SRQ], index); + r = res_tracker_lookup(&tracker->res_tree[RES_SRQ], index); if (!r) err = -ENOENT; else if (r->com.owner != slave) @@ -1001,7 +1082,7 @@ static void res_abort_move(struct mlx4_dev *dev, int slave, struct res_common *r; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[type], id); + r = res_tracker_lookup(&tracker->res_tree[type], id); if (r && (r->owner == slave)) r->state = r->from_state; spin_unlock_irq(mlx4_tlock(dev)); @@ -1015,7 +1096,7 @@ static void res_end_move(struct mlx4_dev *dev, int slave, struct res_common *r; spin_lock_irq(mlx4_tlock(dev)); - r = radix_tree_lookup(&tracker->res_tree[type], id); + r = res_tracker_lookup(&tracker->res_tree[type], id); if (r && (r->owner == slave)) r->state = r->to_state; spin_unlock_irq(mlx4_tlock(dev)); @@ -2695,6 +2776,60 @@ ex_put: return err; } +int mlx4_QP_FLOW_STEERING_ATTACH_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, + struct mlx4_cmd_mailbox *outbox, + struct mlx4_cmd_info *cmd) +{ + int err; + + if (dev->caps.steering_mode != + MLX4_STEERING_MODE_DEVICE_MANAGED) + return -EOPNOTSUPP; + + err = mlx4_cmd_imm(dev, inbox->dma, &vhcr->out_param, + vhcr->in_modifier, 0, + MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + if (err) + return err; + + err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, 0); + if (err) { + mlx4_err(dev, "Fail to add flow steering resources.\n "); + /* detach rule*/ + mlx4_cmd(dev, vhcr->out_param, 0, 0, + MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + } + return err; +} + +int mlx4_QP_FLOW_STEERING_DETACH_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, + struct mlx4_cmd_mailbox *outbox, + struct mlx4_cmd_info *cmd) +{ + int err; + + if (dev->caps.steering_mode != + MLX4_STEERING_MODE_DEVICE_MANAGED) + return -EOPNOTSUPP; + + err = rem_res_range(dev, slave, vhcr->in_param, 1, RES_FS_RULE, 0); + if (err) { + mlx4_err(dev, "Fail to remove flow steering resources.\n "); + return err; + } + + err = mlx4_cmd(dev, vhcr->in_param, 0, 
0, + MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + return err; +} + enum { BUSY_MAX_RETRIES = 10 }; @@ -2751,7 +2886,7 @@ static int _move_all_busy(struct mlx4_dev *dev, int slave, if (r->state == RES_ANY_BUSY) { if (print) mlx4_dbg(dev, - "%s id 0x%x is busy\n", + "%s id 0x%llx is busy\n", ResourceType(type), r->res_id); ++busy; @@ -2817,8 +2952,8 @@ static void rem_slave_qps(struct mlx4_dev *dev, int slave) switch (state) { case RES_QP_RESERVED: spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_QP], - qp->com.res_id); + rb_erase(&qp->com.node, + &tracker->res_tree[RES_QP]); list_del(&qp->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(qp); @@ -2888,8 +3023,8 @@ static void rem_slave_srqs(struct mlx4_dev *dev, int slave) case RES_SRQ_ALLOCATED: __mlx4_srq_free_icm(dev, srqn); spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_SRQ], - srqn); + rb_erase(&srq->com.node, + &tracker->res_tree[RES_SRQ]); list_del(&srq->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(srq); @@ -2954,8 +3089,8 @@ static void rem_slave_cqs(struct mlx4_dev *dev, int slave) case RES_CQ_ALLOCATED: __mlx4_cq_free_icm(dev, cqn); spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_CQ], - cqn); + rb_erase(&cq->com.node, + &tracker->res_tree[RES_CQ]); list_del(&cq->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(cq); @@ -3017,8 +3152,8 @@ static void rem_slave_mrs(struct mlx4_dev *dev, int slave) case RES_MPT_RESERVED: __mlx4_mr_release(dev, mpt->key); spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_MPT], - mptn); + rb_erase(&mpt->com.node, + &tracker->res_tree[RES_MPT]); list_del(&mpt->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(mpt); @@ -3086,8 +3221,8 @@ static void rem_slave_mtts(struct mlx4_dev *dev, int slave) __mlx4_free_mtt_range(dev, base, mtt->order); spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_MTT], - base); + rb_erase(&mtt->com.node, + &tracker->res_tree[RES_MTT]); list_del(&mtt->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(mtt); @@ -3104,6 +3239,58 @@ static void rem_slave_mtts(struct mlx4_dev *dev, int slave) spin_unlock_irq(mlx4_tlock(dev)); } +static void rem_slave_fs_rule(struct mlx4_dev *dev, int slave) +{ + struct mlx4_priv *priv = mlx4_priv(dev); + struct mlx4_resource_tracker *tracker = + &priv->mfunc.master.res_tracker; + struct list_head *fs_rule_list = + &tracker->slave_list[slave].res_list[RES_FS_RULE]; + struct res_fs_rule *fs_rule; + struct res_fs_rule *tmp; + int state; + u64 base; + int err; + + err = move_all_busy(dev, slave, RES_FS_RULE); + if (err) + mlx4_warn(dev, "rem_slave_fs_rule: Could not move all mtts to busy for slave %d\n", + slave); + + spin_lock_irq(mlx4_tlock(dev)); + list_for_each_entry_safe(fs_rule, tmp, fs_rule_list, com.list) { + spin_unlock_irq(mlx4_tlock(dev)); + if (fs_rule->com.owner == slave) { + base = fs_rule->com.res_id; + state = fs_rule->com.from_state; + while (state != 0) { + switch (state) { + case RES_FS_RULE_ALLOCATED: + /* detach rule */ + err = mlx4_cmd(dev, base, 0, 0, + MLX4_QP_FLOW_STEERING_DETACH, + MLX4_CMD_TIME_CLASS_A, + MLX4_CMD_NATIVE); + + spin_lock_irq(mlx4_tlock(dev)); + rb_erase(&fs_rule->com.node, + &tracker->res_tree[RES_FS_RULE]); + list_del(&fs_rule->com.list); + spin_unlock_irq(mlx4_tlock(dev)); + kfree(fs_rule); + state = 0; + break; + + default: + state = 0; + } + } + } + spin_lock_irq(mlx4_tlock(dev)); + } + spin_unlock_irq(mlx4_tlock(dev)); +} + static void 
rem_slave_eqs(struct mlx4_dev *dev, int slave) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -3133,8 +3320,8 @@ static void rem_slave_eqs(struct mlx4_dev *dev, int slave) switch (state) { case RES_EQ_RESERVED: spin_lock_irq(mlx4_tlock(dev)); - radix_tree_delete(&tracker->res_tree[RES_EQ], - eqn); + rb_erase(&eq->com.node, + &tracker->res_tree[RES_EQ]); list_del(&eq->com.list); spin_unlock_irq(mlx4_tlock(dev)); kfree(eq); @@ -3191,7 +3378,8 @@ static void rem_slave_counters(struct mlx4_dev *dev, int slave) list_for_each_entry_safe(counter, tmp, counter_list, com.list) { if (counter->com.owner == slave) { index = counter->com.res_id; - radix_tree_delete(&tracker->res_tree[RES_COUNTER], index); + rb_erase(&counter->com.node, + &tracker->res_tree[RES_COUNTER]); list_del(&counter->com.list); kfree(counter); __mlx4_counter_free(dev, index); @@ -3220,7 +3408,7 @@ static void rem_slave_xrcdns(struct mlx4_dev *dev, int slave) list_for_each_entry_safe(xrcd, tmp, xrcdn_list, com.list) { if (xrcd->com.owner == slave) { xrcdn = xrcd->com.res_id; - radix_tree_delete(&tracker->res_tree[RES_XRCD], xrcdn); + rb_erase(&xrcd->com.node, &tracker->res_tree[RES_XRCD]); list_del(&xrcd->com.list); kfree(xrcd); __mlx4_xrcd_free(dev, xrcdn); @@ -3244,5 +3432,6 @@ void mlx4_delete_all_resources_for_slave(struct mlx4_dev *dev, int slave) rem_slave_mtts(dev, slave); rem_slave_counters(dev, slave); rem_slave_xrcdns(dev, slave); + rem_slave_fs_rule(dev, slave); mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex); } diff --git a/drivers/net/ethernet/micrel/ks8851.c b/drivers/net/ethernet/micrel/ks8851.c index 5e313e9a252f..1540ebeb8669 100644 --- a/drivers/net/ethernet/micrel/ks8851.c +++ b/drivers/net/ethernet/micrel/ks8851.c @@ -422,7 +422,7 @@ static void ks8851_read_mac_addr(struct net_device *dev) * * Get or create the initial mac address for the device and then set that * into the station address register. If there is an EEPROM present, then - * we try that. If no valid mac address is found we use random_ether_addr() + * we try that. If no valid mac address is found we use eth_random_addr() * to create a new one. */ static void ks8851_init_mac(struct ks8851_net *ks) diff --git a/drivers/net/ethernet/micrel/ks8851_mll.c b/drivers/net/ethernet/micrel/ks8851_mll.c index 5ffde23ac8fb..38529edfe350 100644 --- a/drivers/net/ethernet/micrel/ks8851_mll.c +++ b/drivers/net/ethernet/micrel/ks8851_mll.c @@ -16,8 +16,7 @@ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ -/** - * Supports: +/* Supports: * KS8851 16bit MLL chip from Micrel Inc. */ @@ -35,7 +34,7 @@ #include <linux/platform_device.h> #include <linux/delay.h> #include <linux/slab.h> -#include <asm/io.h> +#include <linux/ks8851_mll.h> #define DRV_NAME "ks8851_mll" @@ -465,8 +464,7 @@ static int msg_enable; #define BE1 0x2000 /* Byte Enable 1 */ #define BE0 0x1000 /* Byte Enable 0 */ -/** - * register read/write calls. +/* register read/write calls. * * All these calls issue transactions to access the chip's registers. 
They * all require that the necessary lock is held to prevent accesses when the @@ -1103,7 +1101,7 @@ static void ks_set_grpaddr(struct ks_net *ks) } } /* ks_set_grpaddr */ -/* +/** * ks_clear_mcast - clear multicast information * * @ks : The chip information @@ -1515,6 +1513,7 @@ static int __devinit ks8851_probe(struct platform_device *pdev) struct net_device *netdev; struct ks_net *ks; u16 id, data; + struct ks8851_mll_platform_data *pdata; io_d = platform_get_resource(pdev, IORESOURCE_MEM, 0); io_c = platform_get_resource(pdev, IORESOURCE_MEM, 1); @@ -1596,17 +1595,27 @@ static int __devinit ks8851_probe(struct platform_device *pdev) ks_disable_qmu(ks); ks_setup(ks); ks_setup_int(ks); - memcpy(netdev->dev_addr, ks->mac_addr, 6); data = ks_rdreg16(ks, KS_OBCR); ks_wrreg16(ks, KS_OBCR, data | OBCR_ODS_16MA); - /** - * If you want to use the default MAC addr, - * comment out the 2 functions below. - */ + /* overwriting the default MAC address */ + pdata = pdev->dev.platform_data; + if (!pdata) { + netdev_err(netdev, "No platform data\n"); + err = -ENODEV; + goto err_pdata; + } + memcpy(ks->mac_addr, pdata->mac_addr, 6); + if (!is_valid_ether_addr(ks->mac_addr)) { + /* Use random MAC address if none passed */ + eth_random_addr(ks->mac_addr); + netdev_info(netdev, "Using random mac address\n"); + } + netdev_info(netdev, "Mac address is: %pM\n", ks->mac_addr); + + memcpy(netdev->dev_addr, ks->mac_addr, 6); - random_ether_addr(netdev->dev_addr); ks_set_mac(ks, netdev->dev_addr); id = ks_rdreg16(ks, KS_CIDER); @@ -1615,6 +1624,8 @@ static int __devinit ks8851_probe(struct platform_device *pdev) (id >> 8) & 0xff, (id >> 4) & 0xf, (id >> 1) & 0x7); return 0; +err_pdata: + unregister_netdev(netdev); err_register: err_get_irq: iounmap(ks->hw_addr_cmd); diff --git a/drivers/net/ethernet/micrel/ksz884x.c b/drivers/net/ethernet/micrel/ksz884x.c index eaf9ff0262a9..318fee91c79d 100644 --- a/drivers/net/ethernet/micrel/ksz884x.c +++ b/drivers/net/ethernet/micrel/ksz884x.c @@ -3913,7 +3913,7 @@ static void hw_start_rx(struct ksz_hw *hw) hw->rx_stop = 2; } -/* +/** * hw_stop_rx - stop receiving * @hw: The hardware instance. * @@ -4480,14 +4480,12 @@ static void ksz_init_rx_buffers(struct dev_info *adapter) dma_buf->len = adapter->mtu; if (!dma_buf->skb) dma_buf->skb = alloc_skb(dma_buf->len, GFP_ATOMIC); - if (dma_buf->skb && !dma_buf->dma) { - dma_buf->skb->dev = adapter->dev; + if (dma_buf->skb && !dma_buf->dma) dma_buf->dma = pci_map_single( adapter->pdev, skb_tail_pointer(dma_buf->skb), dma_buf->len, PCI_DMA_FROMDEVICE); - } /* Set descriptor. 
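The ks8851_mll probe hunk above now insists on platform data for the MAC address and falls back to eth_random_addr() only when the supplied address is invalid. A hedged sketch of the board-side wiring, assuming the new <linux/ks8851_mll.h> header exposes a mac_addr[] field as the memcpy above implies; the addresses, IRQ number and resource layout here are illustrative, not taken from a real board:

#include <linux/kernel.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/ks8851_mll.h>

/* Two MMIO windows (data and command), matching the IORESOURCE_MEM 0/1
 * lookups in the probe function, plus the interrupt line. */
static struct resource board_ks8851_resources[] = {
	{ .start = 0x30000000, .end = 0x30000003, .flags = IORESOURCE_MEM },
	{ .start = 0x30000004, .end = 0x30000007, .flags = IORESOURCE_MEM },
	{ .start = 42,         .end = 42,         .flags = IORESOURCE_IRQ },
};

static struct ks8851_mll_platform_data board_ks8851_pdata = {
	.mac_addr = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
};

static struct platform_device board_ks8851_device = {
	.name           = "ks8851_mll",
	.id             = -1,
	.num_resources  = ARRAY_SIZE(board_ks8851_resources),
	.resource       = board_ks8851_resources,
	.dev            = { .platform_data = &board_ks8851_pdata },
};

/* Registered from the board init code with
 * platform_device_register(&board_ks8851_device); without platform data
 * the probe above now fails with -ENODEV. */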
*/ set_rx_buf(desc, dma_buf->dma); @@ -4881,8 +4879,8 @@ static netdev_tx_t netdev_tx(struct sk_buff *skb, struct net_device *dev) left = hw_alloc_pkt(hw, skb->len, num); if (left) { if (left < num || - ((CHECKSUM_PARTIAL == skb->ip_summed) && - (ETH_P_IPV6 == htons(skb->protocol)))) { + (CHECKSUM_PARTIAL == skb->ip_summed && + skb->protocol == htons(ETH_P_IPV6))) { struct sk_buff *org_skb = skb; skb = netdev_alloc_skb(dev, org_skb->len); diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c index 90153fc983cb..fa85cf1353fd 100644 --- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c +++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c @@ -3775,7 +3775,7 @@ static void myri10ge_probe_slices(struct myri10ge_priv *mgp) mgp->num_slices = 1; msix_cap = pci_find_capability(pdev, PCI_CAP_ID_MSIX); - ncpus = num_online_cpus(); + ncpus = netif_get_num_default_rss_queues(); if (myri10ge_max_slices == 1 || msix_cap == 0 || (myri10ge_max_slices == -1 && ncpus < 2)) diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c index bb367582c1e8..d958c2299372 100644 --- a/drivers/net/ethernet/neterion/s2io.c +++ b/drivers/net/ethernet/neterion/s2io.c @@ -3377,7 +3377,7 @@ static int wait_for_cmd_complete(void __iomem *addr, u64 busy_bit, } while (cnt < 20); return ret; } -/* +/** * check_pci_device_id - Checks if the device id is supported * @id : device id * Description: Function to check if the pci device id is supported by driver. @@ -5238,7 +5238,7 @@ static u64 do_s2io_read_unicast_mc(struct s2io_nic *sp, int offset) } /** - * s2io_set_mac_addr driver entry point + * s2io_set_mac_addr - driver entry point */ static int s2io_set_mac_addr(struct net_device *dev, void *p) @@ -6088,7 +6088,7 @@ static int s2io_bist_test(struct s2io_nic *sp, uint64_t *data) } /** - * s2io-link_test - verifies the link state of the nic + * s2io_link_test - verifies the link state of the nic * @sp ; private member of the device structure, which is a pointer to the * s2io_nic structure. * @data: variable that returns the result of each of the test conducted by @@ -6116,9 +6116,9 @@ static int s2io_link_test(struct s2io_nic *sp, uint64_t *data) /** * s2io_rldram_test - offline test for access to the RldRam chip on the NIC - * @sp - private member of the device structure, which is a pointer to the + * @sp: private member of the device structure, which is a pointer to the * s2io_nic structure. - * @data - variable that returns the result of each of the test + * @data: variable that returns the result of each of the test * conducted by the driver. 
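Several hunks in this merge (the mlx4 MSI-X sizing earlier, myri10ge here, vxge below) stop deriving their default RSS/MSI-X vector count straight from num_online_cpus() and call netif_get_num_default_rss_queues() instead, so a many-core host no longer claims one queue per CPU by default. The helper itself is added elsewhere in this series; its effective policy is just a bounded minimum, roughly as sketched below (function name is made up, and the cap, DEFAULT_MAX_NUM_RSS_QUEUES, is 8 at this point in the series):

#include <linux/kernel.h>
#include <linux/cpumask.h>

/* Sketch of the policy behind netif_get_num_default_rss_queues(); the real
 * helper lives in net/core/dev.c and is not part of these hunks. */
static unsigned int default_rss_queues_sketch(void)
{
	return min_t(unsigned int, 8, num_online_cpus());
}

Drivers can still ask for more; myri10ge, for instance, keeps its max_slices module parameter and only uses the helper for the default.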
* Description: * This is one of the offline test that tests the read and write @@ -6946,9 +6946,9 @@ static int rxd_owner_bit_reset(struct s2io_nic *sp) if (sp->rxd_mode == RXD_MODE_3B) ba = &ring->ba[j][k]; if (set_rxd_buffer_pointer(sp, rxdp, ba, &skb, - (u64 *)&temp0_64, - (u64 *)&temp1_64, - (u64 *)&temp2_64, + &temp0_64, + &temp1_64, + &temp2_64, size) == -ENOMEM) { return 0; } @@ -7149,7 +7149,7 @@ static int s2io_card_up(struct s2io_nic *sp) int i, ret = 0; struct config_param *config; struct mac_info *mac_control; - struct net_device *dev = (struct net_device *)sp->dev; + struct net_device *dev = sp->dev; u16 interruptible; /* Initialize the H/W I/O registers */ @@ -7325,7 +7325,7 @@ static void s2io_tx_watchdog(struct net_device *dev) static int rx_osm_handler(struct ring_info *ring_data, struct RxD_t * rxdp) { struct s2io_nic *sp = ring_data->nic; - struct net_device *dev = (struct net_device *)ring_data->dev; + struct net_device *dev = ring_data->dev; struct sk_buff *skb = (struct sk_buff *) ((unsigned long)rxdp->Host_Control); int ring_no = ring_data->ring_no; @@ -7508,7 +7508,7 @@ aggregate: static void s2io_link(struct s2io_nic *sp, int link) { - struct net_device *dev = (struct net_device *)sp->dev; + struct net_device *dev = sp->dev; struct swStat *swstats = &sp->mac_control.stats_info->sw_stat; if (link != sp->last_link_state) { @@ -8280,7 +8280,7 @@ static int check_L2_lro_capable(u8 *buffer, struct iphdr **ip, return -1; } - *ip = (struct iphdr *)((u8 *)buffer + ip_off); + *ip = (struct iphdr *)(buffer + ip_off); ip_len = (u8)((*ip)->ihl); ip_len <<= 2; *tcp = (struct tcphdr *)((unsigned long)*ip + ip_len); diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.c b/drivers/net/ethernet/neterion/vxge/vxge-config.c index 98e2c10ae08b..32d06824fe3e 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-config.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-config.c @@ -2346,7 +2346,7 @@ void __vxge_hw_blockpool_blocks_add(struct __vxge_hw_blockpool *blockpool) for (i = 0; i < nreq; i++) vxge_os_dma_malloc_async( - ((struct __vxge_hw_device *)blockpool->hldev)->pdev, + (blockpool->hldev)->pdev, blockpool->hldev, VXGE_HW_BLOCK_SIZE); } @@ -2428,13 +2428,13 @@ __vxge_hw_blockpool_blocks_remove(struct __vxge_hw_blockpool *blockpool) break; pci_unmap_single( - ((struct __vxge_hw_device *)blockpool->hldev)->pdev, + (blockpool->hldev)->pdev, ((struct __vxge_hw_blockpool_entry *)p)->dma_addr, ((struct __vxge_hw_blockpool_entry *)p)->length, PCI_DMA_BIDIRECTIONAL); vxge_os_dma_free( - ((struct __vxge_hw_device *)blockpool->hldev)->pdev, + (blockpool->hldev)->pdev, ((struct __vxge_hw_blockpool_entry *)p)->memblock, &((struct __vxge_hw_blockpool_entry *)p)->acc_handle); @@ -4059,7 +4059,7 @@ __vxge_hw_vpath_sw_reset(struct __vxge_hw_device *hldev, u32 vp_id) enum vxge_hw_status status = VXGE_HW_OK; struct __vxge_hw_virtualpath *vpath; - vpath = (struct __vxge_hw_virtualpath *)&hldev->virtual_paths[vp_id]; + vpath = &hldev->virtual_paths[vp_id]; if (vpath->ringh) { status = __vxge_hw_ring_reset(vpath->ringh); diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.h b/drivers/net/ethernet/neterion/vxge/vxge-config.h index 5046a64f0fe8..9e0c1eed5dc5 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-config.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-config.h @@ -1922,7 +1922,7 @@ realloc: /* misaligned, free current one and try allocating * size + VXGE_CACHE_LINE_SIZE memory */ - kfree((void *) vaddr); + kfree(vaddr); size += VXGE_CACHE_LINE_SIZE; realloc_flag = 1; goto 
realloc; diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c index 51387c31914b..de2190443510 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-main.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c @@ -1134,7 +1134,7 @@ static void vxge_set_multicast(struct net_device *dev) "%s:%d", __func__, __LINE__); vdev = netdev_priv(dev); - hldev = (struct __vxge_hw_device *)vdev->devh; + hldev = vdev->devh; if (unlikely(!is_vxge_card_up(vdev))) return; @@ -3131,12 +3131,12 @@ vxge_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *net_stats) u64 packets, bytes, multicast; do { - start = u64_stats_fetch_begin(&rxstats->syncp); + start = u64_stats_fetch_begin_bh(&rxstats->syncp); packets = rxstats->rx_frms; multicast = rxstats->rx_mcast; bytes = rxstats->rx_bytes; - } while (u64_stats_fetch_retry(&rxstats->syncp, start)); + } while (u64_stats_fetch_retry_bh(&rxstats->syncp, start)); net_stats->rx_packets += packets; net_stats->rx_bytes += bytes; @@ -3146,11 +3146,11 @@ vxge_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *net_stats) net_stats->rx_dropped += rxstats->rx_dropped; do { - start = u64_stats_fetch_begin(&txstats->syncp); + start = u64_stats_fetch_begin_bh(&txstats->syncp); packets = txstats->tx_frms; bytes = txstats->tx_bytes; - } while (u64_stats_fetch_retry(&txstats->syncp, start)); + } while (u64_stats_fetch_retry_bh(&txstats->syncp, start)); net_stats->tx_packets += packets; net_stats->tx_bytes += bytes; @@ -3687,7 +3687,8 @@ static int __devinit vxge_config_vpaths( return 0; if (!driver_config->g_no_cpus) - driver_config->g_no_cpus = num_online_cpus(); + driver_config->g_no_cpus = + netif_get_num_default_rss_queues(); driver_config->vpath_per_dev = driver_config->g_no_cpus >> 1; if (!driver_config->vpath_per_dev) @@ -3989,16 +3990,16 @@ static void __devinit vxge_print_parm(struct vxgedev *vdev, u64 vpath_mask) continue; vxge_debug_ll_config(VXGE_TRACE, "%s: MTU size - %d", vdev->ndev->name, - ((struct __vxge_hw_device *)(vdev->devh))-> + ((vdev->devh))-> config.vp_config[i].mtu); vxge_debug_init(VXGE_TRACE, "%s: VLAN tag stripping %s", vdev->ndev->name, - ((struct __vxge_hw_device *)(vdev->devh))-> + ((vdev->devh))-> config.vp_config[i].rpa_strip_vlan_tag ? 
"Enabled" : "Disabled"); vxge_debug_ll_config(VXGE_TRACE, "%s: Max frags : %d", vdev->ndev->name, - ((struct __vxge_hw_device *)(vdev->devh))-> + ((vdev->devh))-> config.vp_config[i].fifo.max_frags); break; } @@ -4260,9 +4261,7 @@ static int vxge_probe_fw_update(struct vxgedev *vdev) if (VXGE_FW_VER(VXGE_CERT_FW_VER_MAJOR, VXGE_CERT_FW_VER_MINOR, 0) > VXGE_FW_VER(maj, min, 0)) { vxge_debug_init(VXGE_ERR, "%s: Firmware %d.%d.%d is too old to" - " be used with this driver.\n" - "Please get the latest version from " - "ftp://ftp.s2io.com/pub/X3100-Drivers/FIRMWARE", + " be used with this driver.", VXGE_DRIVER_NAME, maj, min, bld); return -EINVAL; } diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.h b/drivers/net/ethernet/neterion/vxge/vxge-main.h index 35f3e7552ec2..36ca40f8f249 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-main.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-main.h @@ -430,8 +430,7 @@ void vxge_initialize_ethtool_ops(struct net_device *ndev); enum vxge_hw_status vxge_reset_all_vpaths(struct vxgedev *vdev); int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override); -/** - * #define VXGE_DEBUG_INIT: debug for initialization functions +/* #define VXGE_DEBUG_INIT: debug for initialization functions * #define VXGE_DEBUG_TX : debug transmit related functions * #define VXGE_DEBUG_RX : debug recevice related functions * #define VXGE_DEBUG_MEM : debug memory module diff --git a/drivers/net/ethernet/neterion/vxge/vxge-traffic.c b/drivers/net/ethernet/neterion/vxge/vxge-traffic.c index 5954fa264da1..99749bd07d72 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-traffic.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-traffic.c @@ -533,8 +533,7 @@ __vxge_hw_device_handle_error(struct __vxge_hw_device *hldev, u32 vp_id, /* notify driver */ if (hldev->uld_callbacks->crit_err) - hldev->uld_callbacks->crit_err( - (struct __vxge_hw_device *)hldev, + hldev->uld_callbacks->crit_err(hldev, type, vp_id); out: @@ -1322,7 +1321,7 @@ enum vxge_hw_status vxge_hw_ring_rxd_next_completed( /* check whether it is not the end */ if (!own || *t_code == VXGE_HW_RING_T_CODE_FRM_DROP) { - vxge_assert(((struct vxge_hw_ring_rxd_1 *)rxdp)->host_control != + vxge_assert((rxdp)->host_control != 0); ++ring->cmpl_cnt; diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c index 928913c4f3ff..f45def01a98e 100644 --- a/drivers/net/ethernet/nvidia/forcedeth.c +++ b/drivers/net/ethernet/nvidia/forcedeth.c @@ -3218,7 +3218,7 @@ static void nv_force_linkspeed(struct net_device *dev, int speed, int duplex) } /** - * nv_update_linkspeed: Setup the MAC according to the link partner + * nv_update_linkspeed - Setup the MAC according to the link partner * @dev: Network device to be configured * * The function queries the PHY and checks if there is a link partner. @@ -3552,8 +3552,7 @@ static irqreturn_t nv_nic_irq(int foo, void *data) return IRQ_HANDLED; } -/** - * All _optimized functions are used to help increase performance +/* All _optimized functions are used to help increase performance * (reduce CPU and increase throughput). They use descripter version 3, * compiler directives, and reduce memory accesses. 
*/ @@ -3776,7 +3775,7 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) np->link_timeout = jiffies + LINK_TIMEOUT; } if (events & NVREG_IRQ_RECOVER_ERROR) { - spin_lock_irq(&np->lock); + spin_lock_irqsave(&np->lock, flags); /* disable interrupts on the nic */ writel(NVREG_IRQ_OTHER, base + NvRegIrqMask); pci_push(base); @@ -3786,7 +3785,7 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) np->recover_error = 1; mod_timer(&np->nic_poll, jiffies + POLL_WAIT); } - spin_unlock_irq(&np->lock); + spin_unlock_irqrestore(&np->lock, flags); break; } if (unlikely(i > max_interrupt_work)) { @@ -5183,6 +5182,7 @@ static const struct ethtool_ops ops = { .get_ethtool_stats = nv_get_ethtool_stats, .get_sset_count = nv_get_sset_count, .self_test = nv_self_test, + .get_ts_info = ethtool_op_get_ts_info, }; /* The mgmt unit and driver use a semaphore to access the phy during init */ diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c index 083d6715335c..4069edab229e 100644 --- a/drivers/net/ethernet/nxp/lpc_eth.c +++ b/drivers/net/ethernet/nxp/lpc_eth.c @@ -44,7 +44,6 @@ #include <linux/of_net.h> #include <linux/types.h> -#include <linux/delay.h> #include <linux/io.h> #include <mach/board.h> #include <mach/platform.h> @@ -52,7 +51,6 @@ #define MODNAME "lpc-eth" #define DRV_VERSION "1.00" -#define PHYDEF_ADDR 0x00 #define ENET_MAXF_SIZE 1536 #define ENET_RX_DESC 48 @@ -416,9 +414,6 @@ static bool use_iram_for_net(struct device *dev) #define TXDESC_CONTROL_LAST (1 << 30) #define TXDESC_CONTROL_INT (1 << 31) -static int lpc_eth_hard_start_xmit(struct sk_buff *skb, - struct net_device *ndev); - /* * Structure of a TX/RX descriptors and RX status */ @@ -440,7 +435,7 @@ struct netdata_local { spinlock_t lock; void __iomem *net_base; u32 msg_enable; - struct sk_buff *skb[ENET_TX_DESC]; + unsigned int skblen[ENET_TX_DESC]; unsigned int last_tx_idx; unsigned int num_used_tx_buffs; struct mii_bus *mii_bus; @@ -903,12 +898,11 @@ err_out: static void __lpc_handle_xmit(struct net_device *ndev) { struct netdata_local *pldat = netdev_priv(ndev); - struct sk_buff *skb; u32 txcidx, *ptxstat, txstat; txcidx = readl(LPC_ENET_TXCONSUMEINDEX(pldat->net_base)); while (pldat->last_tx_idx != txcidx) { - skb = pldat->skb[pldat->last_tx_idx]; + unsigned int skblen = pldat->skblen[pldat->last_tx_idx]; /* A buffer is available, get buffer status */ ptxstat = &pldat->tx_stat_v[pldat->last_tx_idx]; @@ -945,9 +939,8 @@ static void __lpc_handle_xmit(struct net_device *ndev) } else { /* Update stats */ ndev->stats.tx_packets++; - ndev->stats.tx_bytes += skb->len; + ndev->stats.tx_bytes += skblen; } - dev_kfree_skb_irq(skb); txcidx = readl(LPC_ENET_TXCONSUMEINDEX(pldat->net_base)); } @@ -1132,7 +1125,7 @@ static int lpc_eth_hard_start_xmit(struct sk_buff *skb, struct net_device *ndev) memcpy(pldat->tx_buff_v + txidx * ENET_MAXF_SIZE, skb->data, len); /* Save the buffer and increment the buffer counter */ - pldat->skb[txidx] = skb; + pldat->skblen[txidx] = len; pldat->num_used_tx_buffs++; /* Start transmit */ @@ -1147,6 +1140,7 @@ static int lpc_eth_hard_start_xmit(struct sk_buff *skb, struct net_device *ndev) spin_unlock_irq(&pldat->lock); + dev_kfree_skb(skb); return NETDEV_TX_OK; } @@ -1442,7 +1436,7 @@ static int lpc_eth_drv_probe(struct platform_device *pdev) res->start); netdev_dbg(ndev, "IO address size :%d\n", res->end - res->start + 1); - netdev_err(ndev, "IO address (mapped) :0x%p\n", + netdev_dbg(ndev, "IO address (mapped) :0x%p\n", pldat->net_base); netdev_dbg(ndev, "IRQ number 
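
/*
 * Shape of the forcedeth fix above: spin_unlock_irq() unconditionally
 * re-enables interrupts, which is wrong if the caller already had them
 * disabled; the irqsave variant records the previous IRQ state in "flags"
 * and restores exactly that.  Kernel-style sketch with a made-up private
 * structure, not the driver's own:
 */
#include <linux/spinlock.h>

struct example_priv {              /* hypothetical, not forcedeth's struct */
	spinlock_t lock;
	int recover_error;
};

static void example_mark_recover(struct example_priv *np)
{
	unsigned long flags;

	spin_lock_irqsave(&np->lock, flags);      /* save caller's IRQ state */
	np->recover_error = 1;                    /* touch state shared with the IRQ path */
	spin_unlock_irqrestore(&np->lock, flags); /* restore it, don't force-enable */
}
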
:%d\n", ndev->irq); netdev_dbg(ndev, "DMA buffer size :%d\n", pldat->dma_buff_size); diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_api.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_api.c index e48f084ad226..5ae03e815ee9 100644 --- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_api.c +++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_api.c @@ -60,7 +60,7 @@ static void pch_gbe_plat_get_bus_info(struct pch_gbe_hw *hw) /** * pch_gbe_plat_init_hw - Initialize hardware * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successfully * Negative value: Failed-EBUSY */ @@ -108,7 +108,7 @@ static void pch_gbe_plat_init_function_pointers(struct pch_gbe_hw *hw) /** * pch_gbe_hal_setup_init_funcs - Initializes function pointers * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successfully * ENOSYS: Function is not registered */ @@ -137,7 +137,7 @@ inline void pch_gbe_hal_get_bus_info(struct pch_gbe_hw *hw) /** * pch_gbe_hal_init_hw - Initialize hardware * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successfully * ENOSYS: Function is not registered */ @@ -155,7 +155,7 @@ inline s32 pch_gbe_hal_init_hw(struct pch_gbe_hw *hw) * @hw: Pointer to the HW structure * @offset: The register to read * @data: The buffer to store the 16-bit read. - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -172,7 +172,7 @@ inline s32 pch_gbe_hal_read_phy_reg(struct pch_gbe_hw *hw, u32 offset, * @hw: Pointer to the HW structure * @offset: The register to read * @data: The value to write. - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -211,7 +211,7 @@ inline void pch_gbe_hal_phy_sw_reset(struct pch_gbe_hw *hw) /** * pch_gbe_hal_read_mac_addr - Reads MAC address * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successfully * ENOSYS: Function is not registered */ diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_ethtool.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_ethtool.c index ac4e72d529e5..9dbf38c10a68 100644 --- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_ethtool.c +++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_ethtool.c @@ -77,7 +77,7 @@ static const struct pch_gbe_stats pch_gbe_gstrings_stats[] = { * pch_gbe_get_settings - Get device-specific settings * @netdev: Network interface device structure * @ecmd: Ethtool command - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -100,7 +100,7 @@ static int pch_gbe_get_settings(struct net_device *netdev, * pch_gbe_set_settings - Set device-specific settings * @netdev: Network interface device structure * @ecmd: Ethtool command - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -220,7 +220,7 @@ static void pch_gbe_get_wol(struct net_device *netdev, * pch_gbe_set_wol - Turn Wake-on-Lan on or off * @netdev: Network interface device structure * @wol: Pointer of wake-on-Lan information straucture - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -248,7 +248,7 @@ static int pch_gbe_set_wol(struct net_device *netdev, /** * pch_gbe_nway_reset - Restart autonegotiation * @netdev: Network interface device structure - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -398,7 +398,7 @@ static void pch_gbe_get_pauseparam(struct net_device *netdev, * pch_gbe_set_pauseparam - Set pause paramters * @netdev: Network interface device structure * @pause: Pause parameters structure - * Returns + * Returns: * 0: Successful. * Negative value: Failed. 
*/ diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c index 3787c64ee71c..b1006563f736 100644 --- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c +++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c @@ -301,7 +301,7 @@ inline void pch_gbe_mac_load_mac_addr(struct pch_gbe_hw *hw) /** * pch_gbe_mac_read_mac_addr - Read MAC address * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successful. */ s32 pch_gbe_mac_read_mac_addr(struct pch_gbe_hw *hw) @@ -483,7 +483,7 @@ static void pch_gbe_mac_mc_addr_list_update(struct pch_gbe_hw *hw, /** * pch_gbe_mac_force_mac_fc - Force the MAC's flow control settings * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -639,7 +639,7 @@ static void pch_gbe_mac_set_pause_packet(struct pch_gbe_hw *hw) /** * pch_gbe_alloc_queues - Allocate memory for all rings * @adapter: Board private structure to initialize - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -670,7 +670,7 @@ static void pch_gbe_init_stats(struct pch_gbe_adapter *adapter) /** * pch_gbe_init_phy - Initialize PHY * @adapter: Board private structure to initialize - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -720,7 +720,7 @@ static int pch_gbe_init_phy(struct pch_gbe_adapter *adapter) * @netdev: Network interface device structure * @addr: Phy ID * @reg: Access location - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -1364,7 +1364,7 @@ static void pch_gbe_start_receive(struct pch_gbe_hw *hw) * pch_gbe_intr - Interrupt Handler * @irq: Interrupt number * @data: Pointer to a network interface device structure - * Returns + * Returns: * - IRQ_HANDLED: Our interrupt * - IRQ_NONE: Not our interrupt */ @@ -1566,7 +1566,7 @@ static void pch_gbe_alloc_tx_buffers(struct pch_gbe_adapter *adapter, * pch_gbe_clean_tx - Reclaim resources after transmit completes * @adapter: Board private structure * @tx_ring: Tx descriptor ring - * Returns + * Returns: * true: Cleaned the descriptor * false: Not cleaned the descriptor */ @@ -1660,7 +1660,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter, * @rx_ring: Rx descriptor ring * @work_done: Completed count * @work_to_do: Request count - * Returns + * Returns: * true: Cleaned the descriptor * false: Not cleaned the descriptor */ @@ -1775,7 +1775,7 @@ pch_gbe_clean_rx(struct pch_gbe_adapter *adapter, * pch_gbe_setup_tx_resources - Allocate Tx resources (Descriptors) * @adapter: Board private structure * @tx_ring: Tx descriptor ring (for a specific queue) to setup - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -1822,7 +1822,7 @@ int pch_gbe_setup_tx_resources(struct pch_gbe_adapter *adapter, * pch_gbe_setup_rx_resources - Allocate Rx resources (Descriptors) * @adapter: Board private structure * @rx_ring: Rx descriptor ring (for a specific queue) to setup - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -1899,7 +1899,7 @@ void pch_gbe_free_rx_resources(struct pch_gbe_adapter *adapter, /** * pch_gbe_request_irq - Allocate an interrupt line * @adapter: Board private structure - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -1932,7 +1932,7 @@ static int pch_gbe_request_irq(struct pch_gbe_adapter *adapter) /** * pch_gbe_up - Up GbE network device * @adapter: Board private structure - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -2018,7 +2018,7 
@@ void pch_gbe_down(struct pch_gbe_adapter *adapter) /** * pch_gbe_sw_init - Initialize general software structures (struct pch_gbe_adapter) * @adapter: Board private structure to initialize - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -2057,7 +2057,7 @@ static int pch_gbe_sw_init(struct pch_gbe_adapter *adapter) /** * pch_gbe_open - Called when a network interface is made active * @netdev: Network interface device structure - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -2097,7 +2097,7 @@ err_setup_tx: /** * pch_gbe_stop - Disables a network interface * @netdev: Network interface device structure - * Returns + * Returns: * 0: Successfully */ static int pch_gbe_stop(struct net_device *netdev) @@ -2117,7 +2117,7 @@ static int pch_gbe_stop(struct net_device *netdev) * pch_gbe_xmit_frame - Packet transmitting start * @skb: Socket buffer structure * @netdev: Network interface device structure - * Returns + * Returns: * - NETDEV_TX_OK: Normal end * - NETDEV_TX_BUSY: Error end */ @@ -2225,7 +2225,7 @@ static void pch_gbe_set_multi(struct net_device *netdev) * pch_gbe_set_mac - Change the Ethernet Address of the NIC * @netdev: Network interface device structure * @addr: Pointer to an address structure - * Returns + * Returns: * 0: Successfully * -EADDRNOTAVAIL: Failed */ @@ -2256,7 +2256,7 @@ static int pch_gbe_set_mac(struct net_device *netdev, void *addr) * pch_gbe_change_mtu - Change the Maximum Transfer Unit * @netdev: Network interface device structure * @new_mtu: New value for maximum frame size - * Returns + * Returns: * 0: Successfully * -EINVAL: Failed */ @@ -2309,7 +2309,7 @@ static int pch_gbe_change_mtu(struct net_device *netdev, int new_mtu) * pch_gbe_set_features - Reset device after features changed * @netdev: Network interface device structure * @features: New features - * Returns + * Returns: * 0: HW state updated successfully */ static int pch_gbe_set_features(struct net_device *netdev, @@ -2334,7 +2334,7 @@ static int pch_gbe_set_features(struct net_device *netdev, * @netdev: Network interface device structure * @ifr: Pointer to ifr structure * @cmd: Control command - * Returns + * Returns: * 0: Successfully * Negative value: Failed */ @@ -2369,7 +2369,7 @@ static void pch_gbe_tx_timeout(struct net_device *netdev) * pch_gbe_napi_poll - NAPI receive and transfer polling callback * @napi: Pointer of polling device struct * @budget: The maximum number of a packet - * Returns + * Returns: * false: Exit the polling mode * true: Continue the polling mode */ diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c index 29e23bec809c..8653c3b81f84 100644 --- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c +++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c @@ -139,7 +139,7 @@ MODULE_PARM_DESC(XsumTX, "Disable or enable Transmit Checksum offload"); /** * pch_gbe_option - Force the MAC's flow control settings * @hw: Pointer to the HW structure - * Returns + * Returns: * 0: Successful. * Negative value: Failed. */ @@ -220,7 +220,7 @@ static const struct pch_gbe_opt_list fc_list[] = { * @value: value * @opt: option * @adapter: Board private structure - * Returns + * Returns: * 0: Successful. * Negative value: Failed. 
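
/*
 * The long run of pch_gbe hunks around here changes only comments:
 * "Returns" becomes "Returns:" so the kernel-doc tooling treats the lines
 * below it as a labelled section rather than free text.  Shape of such a
 * comment on a hypothetical function:
 */
struct example_hw;   /* placeholder type, not a driver structure */

/**
 * example_read_reg - Read one device register
 * @hw:     Pointer to the HW structure
 * @offset: The register to read
 * Returns:
 *      0: Successfully
 *      Negative value: Failed
 */
int example_read_reg(struct example_hw *hw, unsigned int offset);
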
*/ diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic.h b/drivers/net/ethernet/qlogic/netxen/netxen_nic.h index 37ccbe54e62d..eb3dfdbb642b 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic.h +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic.h @@ -53,8 +53,8 @@ #define _NETXEN_NIC_LINUX_MAJOR 4 #define _NETXEN_NIC_LINUX_MINOR 0 -#define _NETXEN_NIC_LINUX_SUBVERSION 79 -#define NETXEN_NIC_LINUX_VERSIONID "4.0.79" +#define _NETXEN_NIC_LINUX_SUBVERSION 80 +#define NETXEN_NIC_LINUX_VERSIONID "4.0.80" #define NETXEN_VERSION_CODE(a, b, c) (((a) << 24) + ((b) << 16) + (c)) #define _major(v) (((v) >> 24) & 0xff) diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_ethtool.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_ethtool.c index 39730403782f..10468e7932dd 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_ethtool.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_ethtool.c @@ -489,7 +489,7 @@ netxen_nic_get_pauseparam(struct net_device *dev, int port = adapter->physical_port; if (adapter->ahw.port_type == NETXEN_NIC_GBE) { - if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) + if ((port < 0) || (port >= NETXEN_NIU_MAX_GBE_PORTS)) return; /* get flow control settings */ val = NXRD32(adapter, NETXEN_NIU_GB_MAC_CONFIG_0(port)); @@ -511,7 +511,7 @@ netxen_nic_get_pauseparam(struct net_device *dev, break; } } else if (adapter->ahw.port_type == NETXEN_NIC_XGBE) { - if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS)) + if ((port < 0) || (port >= NETXEN_NIU_MAX_XG_PORTS)) return; pause->rx_pause = 1; val = NXRD32(adapter, NETXEN_NIU_XG_PAUSE_CTL); @@ -534,7 +534,7 @@ netxen_nic_set_pauseparam(struct net_device *dev, int port = adapter->physical_port; /* read mode */ if (adapter->ahw.port_type == NETXEN_NIC_GBE) { - if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) + if ((port < 0) || (port >= NETXEN_NIU_MAX_GBE_PORTS)) return -EIO; /* set flow control */ val = NXRD32(adapter, NETXEN_NIU_GB_MAC_CONFIG_0(port)); @@ -577,7 +577,7 @@ netxen_nic_set_pauseparam(struct net_device *dev, } NXWR32(adapter, NETXEN_NIU_GB_PAUSE_CTL, val); } else if (adapter->ahw.port_type == NETXEN_NIC_XGBE) { - if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS)) + if ((port < 0) || (port >= NETXEN_NIU_MAX_XG_PORTS)) return -EIO; val = NXRD32(adapter, NETXEN_NIU_XG_PAUSE_CTL); if (port == 0) { @@ -826,7 +826,12 @@ netxen_get_dump_flag(struct net_device *netdev, struct ethtool_dump *dump) dump->len = mdump->md_dump_size; else dump->len = 0; - dump->flag = mdump->md_capture_mask; + + if (!mdump->md_enabled) + dump->flag = ETH_FW_DUMP_DISABLE; + else + dump->flag = mdump->md_capture_mask; + dump->version = adapter->fw_version; return 0; } @@ -840,8 +845,10 @@ netxen_set_dump(struct net_device *netdev, struct ethtool_dump *val) switch (val->flag) { case NX_FORCE_FW_DUMP_KEY: - if (!mdump->md_enabled) - mdump->md_enabled = 1; + if (!mdump->md_enabled) { + netdev_info(netdev, "FW dump not enabled\n"); + return 0; + } if (adapter->fw_mdump_rdy) { netdev_info(netdev, "Previous dump not cleared, not forcing dump\n"); return 0; diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c index de96a948bb7f..946160fa5843 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c @@ -365,7 +365,7 @@ static int netxen_niu_disable_xg_port(struct netxen_adapter *adapter) if (NX_IS_REVISION_P3(adapter->ahw.revision_id)) return 0; - if (port > NETXEN_NIU_MAX_XG_PORTS) + if (port >= 
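
/*
 * The nearby netxen hunks tighten "port > MAX" to "port >= MAX": an array
 * with MAX entries has valid indices 0 .. MAX-1, so allowing port == MAX
 * would index one element past the end.  Stand-alone illustration:
 */
#include <stdio.h>

#define MAX_PORTS 4

static int port_config[MAX_PORTS] = { 1, 2, 3, 4 };

static int get_port_config(int port)
{
	if (port < 0 || port >= MAX_PORTS)   /* ">=" rejects port == MAX_PORTS */
		return -1;
	return port_config[port];
}

int main(void)
{
	printf("%d %d\n", get_port_config(3), get_port_config(MAX_PORTS));
	return 0;
}
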
NETXEN_NIU_MAX_XG_PORTS) return -EINVAL; mac_cfg = 0; @@ -392,7 +392,7 @@ static int netxen_p2_nic_set_promisc(struct netxen_adapter *adapter, u32 mode) u32 port = adapter->physical_port; u16 board_type = adapter->ahw.board_type; - if (port > NETXEN_NIU_MAX_XG_PORTS) + if (port >= NETXEN_NIU_MAX_XG_PORTS) return -EINVAL; mac_cfg = NXRD32(adapter, NETXEN_NIU_XGE_CONFIG_0 + (0x10000 * port)); diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c index 8694124ef77d..bc165f4d0f65 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c @@ -1437,8 +1437,6 @@ netxen_handle_linkevent(struct netxen_adapter *adapter, nx_fw_msg_t *msg) netdev->name, cable_len); } - netxen_advert_link_change(adapter, link_status); - /* update link parameters */ if (duplex == LINKEVENT_FULL_DUPLEX) adapter->link_duplex = DUPLEX_FULL; @@ -1447,6 +1445,8 @@ netxen_handle_linkevent(struct netxen_adapter *adapter, nx_fw_msg_t *msg) adapter->module_type = module; adapter->link_autoneg = autoneg; adapter->link_speed = link_speed; + + netxen_advert_link_change(adapter, link_status); } static void @@ -1532,8 +1532,6 @@ static struct sk_buff *netxen_process_rxbuf(struct netxen_adapter *adapter, } else skb->ip_summed = CHECKSUM_NONE; - skb->dev = adapter->netdev; - buffer->skb = NULL; no_skb: buffer->state = NETXEN_BUFFER_FREE; diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h index 8680a5dae4a2..eaa1db9fec32 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h @@ -36,8 +36,8 @@ #define _QLCNIC_LINUX_MAJOR 5 #define _QLCNIC_LINUX_MINOR 0 -#define _QLCNIC_LINUX_SUBVERSION 28 -#define QLCNIC_LINUX_VERSIONID "5.0.28" +#define _QLCNIC_LINUX_SUBVERSION 29 +#define QLCNIC_LINUX_VERSIONID "5.0.29" #define QLCNIC_DRV_IDC_VER 0x01 #define QLCNIC_DRIVER_VERSION ((_QLCNIC_LINUX_MAJOR << 16) |\ (_QLCNIC_LINUX_MINOR << 8) | (_QLCNIC_LINUX_SUBVERSION)) @@ -258,6 +258,8 @@ struct rcv_desc { (((sts_data) >> 52) & 0x1) #define qlcnic_get_lro_sts_seq_number(sts_data) \ ((sts_data) & 0x0FFFFFFFF) +#define qlcnic_get_lro_sts_mss(sts_data1) \ + ((sts_data1 >> 32) & 0x0FFFF) struct status_desc { @@ -610,7 +612,11 @@ struct qlcnic_recv_context { #define QLCNIC_CDRP_CMD_GET_MAC_STATS 0x00000037 #define QLCNIC_RCODE_SUCCESS 0 +#define QLCNIC_RCODE_INVALID_ARGS 6 #define QLCNIC_RCODE_NOT_SUPPORTED 9 +#define QLCNIC_RCODE_NOT_PERMITTED 10 +#define QLCNIC_RCODE_NOT_IMPL 15 +#define QLCNIC_RCODE_INVALID 16 #define QLCNIC_RCODE_TIMEOUT 17 #define QLCNIC_DESTROY_CTX_RESET 0 @@ -623,6 +629,7 @@ struct qlcnic_recv_context { #define QLCNIC_CAP0_JUMBO_CONTIGUOUS (1 << 7) #define QLCNIC_CAP0_LRO_CONTIGUOUS (1 << 8) #define QLCNIC_CAP0_VALIDOFF (1 << 11) +#define QLCNIC_CAP0_LRO_MSS (1 << 21) /* * Context state @@ -829,6 +836,9 @@ struct qlcnic_mac_list_s { #define QLCNIC_FW_CAPABILITY_FVLANTX BIT_9 #define QLCNIC_FW_CAPABILITY_HW_LRO BIT_10 #define QLCNIC_FW_CAPABILITY_MULTI_LOOPBACK BIT_27 +#define QLCNIC_FW_CAPABILITY_MORE_CAPS BIT_31 + +#define QLCNIC_FW_CAPABILITY_2_LRO_MAX_TCP_SEG BIT_2 /* module types */ #define LINKEVENT_MODULE_NOT_PRESENT 1 @@ -918,6 +928,7 @@ struct qlcnic_ipaddr { #define QLCNIC_NEED_FLR 0x1000 #define QLCNIC_FW_RESET_OWNER 0x2000 #define QLCNIC_FW_HANG 0x4000 +#define QLCNIC_FW_LRO_MSS_CAP 0x8000 #define QLCNIC_IS_MSI_FAMILY(adapter) \ ((adapter)->flags & (QLCNIC_MSI_ENABLED | QLCNIC_MSIX_ENABLED)) 
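
/*
 * The qlcnic header hunk above adds qlcnic_get_lro_sts_mss(), a
 * shift-and-mask accessor for a field packed into a 64-bit hardware status
 * word.  Generic illustration with made-up field positions (only the
 * pattern, not the firmware's actual layout):
 */
#include <stdint.h>
#include <stdio.h>

/* bits 32..47 carry the MSS in this hypothetical layout */
#define STS_GET_MSS(sts)   (((sts) >> 32) & 0xffffULL)
/* bits 0..31 carry a sequence number */
#define STS_GET_SEQ(sts)   ((sts) & 0xffffffffULL)

int main(void)
{
	uint64_t sts = ((uint64_t)1460 << 32) | 0x12345678u;

	printf("mss=%llu seq=0x%llx\n",
	       (unsigned long long)STS_GET_MSS(sts),
	       (unsigned long long)STS_GET_SEQ(sts));
	return 0;
}
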
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c index 8db85244e8ad..b8ead696141e 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c @@ -53,12 +53,39 @@ qlcnic_issue_cmd(struct qlcnic_adapter *adapter, struct qlcnic_cmd_args *cmd) rsp = qlcnic_poll_rsp(adapter); if (rsp == QLCNIC_CDRP_RSP_TIMEOUT) { - dev_err(&pdev->dev, "card response timeout.\n"); + dev_err(&pdev->dev, "CDRP response timeout.\n"); cmd->rsp.cmd = QLCNIC_RCODE_TIMEOUT; } else if (rsp == QLCNIC_CDRP_RSP_FAIL) { cmd->rsp.cmd = QLCRD32(adapter, QLCNIC_ARG1_CRB_OFFSET); - dev_err(&pdev->dev, "failed card response code:0x%x\n", + switch (cmd->rsp.cmd) { + case QLCNIC_RCODE_INVALID_ARGS: + dev_err(&pdev->dev, "CDRP invalid args: 0x%x.\n", cmd->rsp.cmd); + break; + case QLCNIC_RCODE_NOT_SUPPORTED: + case QLCNIC_RCODE_NOT_IMPL: + dev_err(&pdev->dev, + "CDRP command not supported: 0x%x.\n", + cmd->rsp.cmd); + break; + case QLCNIC_RCODE_NOT_PERMITTED: + dev_err(&pdev->dev, + "CDRP requested action not permitted: 0x%x.\n", + cmd->rsp.cmd); + break; + case QLCNIC_RCODE_INVALID: + dev_err(&pdev->dev, + "CDRP invalid or unknown cmd received: 0x%x.\n", + cmd->rsp.cmd); + break; + case QLCNIC_RCODE_TIMEOUT: + dev_err(&pdev->dev, "CDRP command timeout: 0x%x.\n", + cmd->rsp.cmd); + break; + default: + dev_err(&pdev->dev, "CDRP command failed: 0x%x.\n", + cmd->rsp.cmd); + } } else if (rsp == QLCNIC_CDRP_RSP_OK) { cmd->rsp.cmd = QLCNIC_RCODE_SUCCESS; if (cmd->rsp.arg2) @@ -237,6 +264,9 @@ qlcnic_fw_cmd_create_rx_ctx(struct qlcnic_adapter *adapter) | QLCNIC_CAP0_VALIDOFF); cap |= (QLCNIC_CAP0_JUMBO_CONTIGUOUS | QLCNIC_CAP0_LRO_CONTIGUOUS); + if (adapter->flags & QLCNIC_FW_LRO_MSS_CAP) + cap |= QLCNIC_CAP0_LRO_MSS; + prq->valid_field_offset = offsetof(struct qlcnic_hostrq_rx_ctx, msix_handler); prq->txrx_sds_binding = nsds_rings - 1; @@ -954,9 +984,6 @@ int qlcnic_get_mac_stats(struct qlcnic_adapter *adapter, mac_stats->mac_rx_jabber = le64_to_cpu(stats->mac_rx_jabber); mac_stats->mac_rx_dropped = le64_to_cpu(stats->mac_rx_dropped); mac_stats->mac_rx_crc_error = le64_to_cpu(stats->mac_rx_crc_error); - } else { - dev_info(&adapter->pdev->dev, - "%s: Get mac stats failed =%d.\n", __func__, err); } dma_free_coherent(&adapter->pdev->dev, stats_size, stats_addr, diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hdr.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hdr.h index 6ced3195aad3..28a6b28192e3 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hdr.h +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hdr.h @@ -588,6 +588,7 @@ enum { #define CRB_DRIVER_VERSION (QLCNIC_REG(0x2a0)) #define CRB_FW_CAPABILITIES_1 (QLCNIC_CAM_RAM(0x128)) +#define CRB_FW_CAPABILITIES_2 (QLCNIC_CAM_RAM(0x12c)) #define CRB_MAC_BLOCK_START (QLCNIC_CAM_RAM(0x1c0)) /* diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c index 799fd40ed03a..0bcda9c51e9b 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c @@ -1488,8 +1488,6 @@ static struct sk_buff *qlcnic_process_rxbuf(struct qlcnic_adapter *adapter, skb_checksum_none_assert(skb); } - skb->dev = adapter->netdev; - buffer->skb = NULL; return skb; @@ -1653,6 +1651,9 @@ qlcnic_process_lro(struct qlcnic_adapter *adapter, length = skb->len; + if (adapter->flags & QLCNIC_FW_LRO_MSS_CAP) + skb_shinfo(skb)->gso_size = qlcnic_get_lro_sts_mss(sts_data1); + if (vid != 0xffff) 
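
/*
 * The qlcnic_issue_cmd() hunk above turns one generic "failed card response"
 * message into a switch over the known firmware return codes.  Equivalent
 * table-style sketch; the codes and strings here are illustrative stand-ins,
 * not the firmware's definitions:
 */
#include <stdio.h>

enum { RC_OK = 0, RC_INVALID_ARGS = 6, RC_NOT_SUPPORTED = 9,
       RC_NOT_PERMITTED = 10, RC_TIMEOUT = 17 };

static const char *rc_to_msg(int rc)
{
	switch (rc) {
	case RC_INVALID_ARGS:  return "invalid arguments";
	case RC_NOT_SUPPORTED: return "command not supported";
	case RC_NOT_PERMITTED: return "action not permitted";
	case RC_TIMEOUT:       return "command timeout";
	default:               return "command failed";
	}
}

int main(void)
{
	printf("rc 9: %s\n", rc_to_msg(RC_NOT_SUPPORTED));
	return 0;
}
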
__vlan_hwaccel_put_tag(skb, vid); netif_receive_skb(skb); diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c index ad98f4d7919d..212c12193275 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c @@ -1136,6 +1136,8 @@ static int __qlcnic_up(struct qlcnic_adapter *adapter, struct net_device *netdev) { int ring; + u32 capab2; + struct qlcnic_host_rds_ring *rds_ring; if (adapter->is_up != QLCNIC_ADAPTER_UP_MAGIC) @@ -1146,6 +1148,12 @@ __qlcnic_up(struct qlcnic_adapter *adapter, struct net_device *netdev) if (qlcnic_set_eswitch_port_config(adapter)) return -EIO; + if (adapter->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) { + capab2 = QLCRD32(adapter, CRB_FW_CAPABILITIES_2); + if (capab2 & QLCNIC_FW_CAPABILITY_2_LRO_MAX_TCP_SEG) + adapter->flags |= QLCNIC_FW_LRO_MSS_CAP; + } + if (qlcnic_fw_create_ctx(adapter)) return -EIO; @@ -1215,6 +1223,7 @@ __qlcnic_down(struct qlcnic_adapter *adapter, struct net_device *netdev) qlcnic_napi_disable(adapter); qlcnic_fw_destroy_ctx(adapter); + adapter->flags &= ~QLCNIC_FW_LRO_MSS_CAP; qlcnic_reset_rx_buffers_list(adapter); qlcnic_release_tx_buffers(adapter); @@ -2024,6 +2033,7 @@ qlcnic_tx_pkt(struct qlcnic_adapter *adapter, vh = (struct vlan_ethhdr *)skb->data; flags = FLAGS_VLAN_TAGGED; vlan_tci = vh->h_vlan_TCI; + protocol = ntohs(vh->h_vlan_encapsulated_proto); } else if (vlan_tx_tag_present(skb)) { flags = FLAGS_VLAN_OOB; vlan_tci = vlan_tx_tag_get(skb); diff --git a/drivers/net/ethernet/qlogic/qlge/qlge.h b/drivers/net/ethernet/qlogic/qlge/qlge.h index 5a639df33f18..a131d7b5d2fe 100644 --- a/drivers/net/ethernet/qlogic/qlge/qlge.h +++ b/drivers/net/ethernet/qlogic/qlge/qlge.h @@ -18,13 +18,15 @@ */ #define DRV_NAME "qlge" #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver " -#define DRV_VERSION "v1.00.00.30.00.00-01" +#define DRV_VERSION "v1.00.00.31" #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */ #define QLGE_VENDOR_ID 0x1077 #define QLGE_DEVICE_ID_8012 0x8012 #define QLGE_DEVICE_ID_8000 0x8000 +#define QLGE_MEZZ_SSYS_ID_068 0x0068 +#define QLGE_MEZZ_SSYS_ID_180 0x0180 #define MAX_CPUS 8 #define MAX_TX_RINGS MAX_CPUS #define MAX_RX_RINGS ((MAX_CPUS * 2) + 1) @@ -1397,7 +1399,6 @@ struct tx_ring { struct tx_ring_desc *q; /* descriptor list for the queue */ spinlock_t lock; atomic_t tx_count; /* counts down for every outstanding IO */ - atomic_t queue_stopped; /* Turns queue off when full. */ struct delayed_work tx_work; struct ql_adapter *qdev; u64 tx_packets; @@ -1535,6 +1536,14 @@ struct nic_stats { u64 rx_1024_to_1518_pkts; u64 rx_1519_to_max_pkts; u64 rx_len_err_pkts; + /* Receive Mac Err stats */ + u64 rx_code_err; + u64 rx_oversize_err; + u64 rx_undersize_err; + u64 rx_preamble_err; + u64 rx_frame_len_err; + u64 rx_crc_err; + u64 rx_err_count; /* * These stats come from offset 500h to 5C8h * in the XGMAC register. 
diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c b/drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c index 8e2c2a74f3a5..6f316ab23257 100644 --- a/drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c +++ b/drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c @@ -35,10 +35,152 @@ #include "qlge.h" +struct ql_stats { + char stat_string[ETH_GSTRING_LEN]; + int sizeof_stat; + int stat_offset; +}; + +#define QL_SIZEOF(m) FIELD_SIZEOF(struct ql_adapter, m) +#define QL_OFF(m) offsetof(struct ql_adapter, m) + +static const struct ql_stats ql_gstrings_stats[] = { + {"tx_pkts", QL_SIZEOF(nic_stats.tx_pkts), QL_OFF(nic_stats.tx_pkts)}, + {"tx_bytes", QL_SIZEOF(nic_stats.tx_bytes), QL_OFF(nic_stats.tx_bytes)}, + {"tx_mcast_pkts", QL_SIZEOF(nic_stats.tx_mcast_pkts), + QL_OFF(nic_stats.tx_mcast_pkts)}, + {"tx_bcast_pkts", QL_SIZEOF(nic_stats.tx_bcast_pkts), + QL_OFF(nic_stats.tx_bcast_pkts)}, + {"tx_ucast_pkts", QL_SIZEOF(nic_stats.tx_ucast_pkts), + QL_OFF(nic_stats.tx_ucast_pkts)}, + {"tx_ctl_pkts", QL_SIZEOF(nic_stats.tx_ctl_pkts), + QL_OFF(nic_stats.tx_ctl_pkts)}, + {"tx_pause_pkts", QL_SIZEOF(nic_stats.tx_pause_pkts), + QL_OFF(nic_stats.tx_pause_pkts)}, + {"tx_64_pkts", QL_SIZEOF(nic_stats.tx_64_pkt), + QL_OFF(nic_stats.tx_64_pkt)}, + {"tx_65_to_127_pkts", QL_SIZEOF(nic_stats.tx_65_to_127_pkt), + QL_OFF(nic_stats.tx_65_to_127_pkt)}, + {"tx_128_to_255_pkts", QL_SIZEOF(nic_stats.tx_128_to_255_pkt), + QL_OFF(nic_stats.tx_128_to_255_pkt)}, + {"tx_256_511_pkts", QL_SIZEOF(nic_stats.tx_256_511_pkt), + QL_OFF(nic_stats.tx_256_511_pkt)}, + {"tx_512_to_1023_pkts", QL_SIZEOF(nic_stats.tx_512_to_1023_pkt), + QL_OFF(nic_stats.tx_512_to_1023_pkt)}, + {"tx_1024_to_1518_pkts", QL_SIZEOF(nic_stats.tx_1024_to_1518_pkt), + QL_OFF(nic_stats.tx_1024_to_1518_pkt)}, + {"tx_1519_to_max_pkts", QL_SIZEOF(nic_stats.tx_1519_to_max_pkt), + QL_OFF(nic_stats.tx_1519_to_max_pkt)}, + {"tx_undersize_pkts", QL_SIZEOF(nic_stats.tx_undersize_pkt), + QL_OFF(nic_stats.tx_undersize_pkt)}, + {"tx_oversize_pkts", QL_SIZEOF(nic_stats.tx_oversize_pkt), + QL_OFF(nic_stats.tx_oversize_pkt)}, + {"rx_bytes", QL_SIZEOF(nic_stats.rx_bytes), QL_OFF(nic_stats.rx_bytes)}, + {"rx_bytes_ok", QL_SIZEOF(nic_stats.rx_bytes_ok), + QL_OFF(nic_stats.rx_bytes_ok)}, + {"rx_pkts", QL_SIZEOF(nic_stats.rx_pkts), QL_OFF(nic_stats.rx_pkts)}, + {"rx_pkts_ok", QL_SIZEOF(nic_stats.rx_pkts_ok), + QL_OFF(nic_stats.rx_pkts_ok)}, + {"rx_bcast_pkts", QL_SIZEOF(nic_stats.rx_bcast_pkts), + QL_OFF(nic_stats.rx_bcast_pkts)}, + {"rx_mcast_pkts", QL_SIZEOF(nic_stats.rx_mcast_pkts), + QL_OFF(nic_stats.rx_mcast_pkts)}, + {"rx_ucast_pkts", QL_SIZEOF(nic_stats.rx_ucast_pkts), + QL_OFF(nic_stats.rx_ucast_pkts)}, + {"rx_undersize_pkts", QL_SIZEOF(nic_stats.rx_undersize_pkts), + QL_OFF(nic_stats.rx_undersize_pkts)}, + {"rx_oversize_pkts", QL_SIZEOF(nic_stats.rx_oversize_pkts), + QL_OFF(nic_stats.rx_oversize_pkts)}, + {"rx_jabber_pkts", QL_SIZEOF(nic_stats.rx_jabber_pkts), + QL_OFF(nic_stats.rx_jabber_pkts)}, + {"rx_undersize_fcerr_pkts", + QL_SIZEOF(nic_stats.rx_undersize_fcerr_pkts), + QL_OFF(nic_stats.rx_undersize_fcerr_pkts)}, + {"rx_drop_events", QL_SIZEOF(nic_stats.rx_drop_events), + QL_OFF(nic_stats.rx_drop_events)}, + {"rx_fcerr_pkts", QL_SIZEOF(nic_stats.rx_fcerr_pkts), + QL_OFF(nic_stats.rx_fcerr_pkts)}, + {"rx_align_err", QL_SIZEOF(nic_stats.rx_align_err), + QL_OFF(nic_stats.rx_align_err)}, + {"rx_symbol_err", QL_SIZEOF(nic_stats.rx_symbol_err), + QL_OFF(nic_stats.rx_symbol_err)}, + {"rx_mac_err", QL_SIZEOF(nic_stats.rx_mac_err), + QL_OFF(nic_stats.rx_mac_err)}, + 
{"rx_ctl_pkts", QL_SIZEOF(nic_stats.rx_ctl_pkts), + QL_OFF(nic_stats.rx_ctl_pkts)}, + {"rx_pause_pkts", QL_SIZEOF(nic_stats.rx_pause_pkts), + QL_OFF(nic_stats.rx_pause_pkts)}, + {"rx_64_pkts", QL_SIZEOF(nic_stats.rx_64_pkts), + QL_OFF(nic_stats.rx_64_pkts)}, + {"rx_65_to_127_pkts", QL_SIZEOF(nic_stats.rx_65_to_127_pkts), + QL_OFF(nic_stats.rx_65_to_127_pkts)}, + {"rx_128_255_pkts", QL_SIZEOF(nic_stats.rx_128_255_pkts), + QL_OFF(nic_stats.rx_128_255_pkts)}, + {"rx_256_511_pkts", QL_SIZEOF(nic_stats.rx_256_511_pkts), + QL_OFF(nic_stats.rx_256_511_pkts)}, + {"rx_512_to_1023_pkts", QL_SIZEOF(nic_stats.rx_512_to_1023_pkts), + QL_OFF(nic_stats.rx_512_to_1023_pkts)}, + {"rx_1024_to_1518_pkts", QL_SIZEOF(nic_stats.rx_1024_to_1518_pkts), + QL_OFF(nic_stats.rx_1024_to_1518_pkts)}, + {"rx_1519_to_max_pkts", QL_SIZEOF(nic_stats.rx_1519_to_max_pkts), + QL_OFF(nic_stats.rx_1519_to_max_pkts)}, + {"rx_len_err_pkts", QL_SIZEOF(nic_stats.rx_len_err_pkts), + QL_OFF(nic_stats.rx_len_err_pkts)}, + {"rx_code_err", QL_SIZEOF(nic_stats.rx_code_err), + QL_OFF(nic_stats.rx_code_err)}, + {"rx_oversize_err", QL_SIZEOF(nic_stats.rx_oversize_err), + QL_OFF(nic_stats.rx_oversize_err)}, + {"rx_undersize_err", QL_SIZEOF(nic_stats.rx_undersize_err), + QL_OFF(nic_stats.rx_undersize_err)}, + {"rx_preamble_err", QL_SIZEOF(nic_stats.rx_preamble_err), + QL_OFF(nic_stats.rx_preamble_err)}, + {"rx_frame_len_err", QL_SIZEOF(nic_stats.rx_frame_len_err), + QL_OFF(nic_stats.rx_frame_len_err)}, + {"rx_crc_err", QL_SIZEOF(nic_stats.rx_crc_err), + QL_OFF(nic_stats.rx_crc_err)}, + {"rx_err_count", QL_SIZEOF(nic_stats.rx_err_count), + QL_OFF(nic_stats.rx_err_count)}, + {"tx_cbfc_pause_frames0", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames0), + QL_OFF(nic_stats.tx_cbfc_pause_frames0)}, + {"tx_cbfc_pause_frames1", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames1), + QL_OFF(nic_stats.tx_cbfc_pause_frames1)}, + {"tx_cbfc_pause_frames2", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames2), + QL_OFF(nic_stats.tx_cbfc_pause_frames2)}, + {"tx_cbfc_pause_frames3", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames3), + QL_OFF(nic_stats.tx_cbfc_pause_frames3)}, + {"tx_cbfc_pause_frames4", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames4), + QL_OFF(nic_stats.tx_cbfc_pause_frames4)}, + {"tx_cbfc_pause_frames5", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames5), + QL_OFF(nic_stats.tx_cbfc_pause_frames5)}, + {"tx_cbfc_pause_frames6", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames6), + QL_OFF(nic_stats.tx_cbfc_pause_frames6)}, + {"tx_cbfc_pause_frames7", QL_SIZEOF(nic_stats.tx_cbfc_pause_frames7), + QL_OFF(nic_stats.tx_cbfc_pause_frames7)}, + {"rx_cbfc_pause_frames0", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames0), + QL_OFF(nic_stats.rx_cbfc_pause_frames0)}, + {"rx_cbfc_pause_frames1", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames1), + QL_OFF(nic_stats.rx_cbfc_pause_frames1)}, + {"rx_cbfc_pause_frames2", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames2), + QL_OFF(nic_stats.rx_cbfc_pause_frames2)}, + {"rx_cbfc_pause_frames3", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames3), + QL_OFF(nic_stats.rx_cbfc_pause_frames3)}, + {"rx_cbfc_pause_frames4", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames4), + QL_OFF(nic_stats.rx_cbfc_pause_frames4)}, + {"rx_cbfc_pause_frames5", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames5), + QL_OFF(nic_stats.rx_cbfc_pause_frames5)}, + {"rx_cbfc_pause_frames6", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames6), + QL_OFF(nic_stats.rx_cbfc_pause_frames6)}, + {"rx_cbfc_pause_frames7", QL_SIZEOF(nic_stats.rx_cbfc_pause_frames7), + QL_OFF(nic_stats.rx_cbfc_pause_frames7)}, + {"rx_nic_fifo_drop", 
QL_SIZEOF(nic_stats.rx_nic_fifo_drop), + QL_OFF(nic_stats.rx_nic_fifo_drop)}, +}; + static const char ql_gstrings_test[][ETH_GSTRING_LEN] = { "Loopback test (offline)" }; #define QLGE_TEST_LEN (sizeof(ql_gstrings_test) / ETH_GSTRING_LEN) +#define QLGE_STATS_LEN ARRAY_SIZE(ql_gstrings_stats) static int ql_update_ring_coalescing(struct ql_adapter *qdev) { @@ -183,73 +325,19 @@ quit: QL_DUMP_STAT(qdev); } -static char ql_stats_str_arr[][ETH_GSTRING_LEN] = { - {"tx_pkts"}, - {"tx_bytes"}, - {"tx_mcast_pkts"}, - {"tx_bcast_pkts"}, - {"tx_ucast_pkts"}, - {"tx_ctl_pkts"}, - {"tx_pause_pkts"}, - {"tx_64_pkts"}, - {"tx_65_to_127_pkts"}, - {"tx_128_to_255_pkts"}, - {"tx_256_511_pkts"}, - {"tx_512_to_1023_pkts"}, - {"tx_1024_to_1518_pkts"}, - {"tx_1519_to_max_pkts"}, - {"tx_undersize_pkts"}, - {"tx_oversize_pkts"}, - {"rx_bytes"}, - {"rx_bytes_ok"}, - {"rx_pkts"}, - {"rx_pkts_ok"}, - {"rx_bcast_pkts"}, - {"rx_mcast_pkts"}, - {"rx_ucast_pkts"}, - {"rx_undersize_pkts"}, - {"rx_oversize_pkts"}, - {"rx_jabber_pkts"}, - {"rx_undersize_fcerr_pkts"}, - {"rx_drop_events"}, - {"rx_fcerr_pkts"}, - {"rx_align_err"}, - {"rx_symbol_err"}, - {"rx_mac_err"}, - {"rx_ctl_pkts"}, - {"rx_pause_pkts"}, - {"rx_64_pkts"}, - {"rx_65_to_127_pkts"}, - {"rx_128_255_pkts"}, - {"rx_256_511_pkts"}, - {"rx_512_to_1023_pkts"}, - {"rx_1024_to_1518_pkts"}, - {"rx_1519_to_max_pkts"}, - {"rx_len_err_pkts"}, - {"tx_cbfc_pause_frames0"}, - {"tx_cbfc_pause_frames1"}, - {"tx_cbfc_pause_frames2"}, - {"tx_cbfc_pause_frames3"}, - {"tx_cbfc_pause_frames4"}, - {"tx_cbfc_pause_frames5"}, - {"tx_cbfc_pause_frames6"}, - {"tx_cbfc_pause_frames7"}, - {"rx_cbfc_pause_frames0"}, - {"rx_cbfc_pause_frames1"}, - {"rx_cbfc_pause_frames2"}, - {"rx_cbfc_pause_frames3"}, - {"rx_cbfc_pause_frames4"}, - {"rx_cbfc_pause_frames5"}, - {"rx_cbfc_pause_frames6"}, - {"rx_cbfc_pause_frames7"}, - {"rx_nic_fifo_drop"}, -}; - static void ql_get_strings(struct net_device *dev, u32 stringset, u8 *buf) { + int index; switch (stringset) { + case ETH_SS_TEST: + memcpy(buf, *ql_gstrings_test, QLGE_TEST_LEN * ETH_GSTRING_LEN); + break; case ETH_SS_STATS: - memcpy(buf, ql_stats_str_arr, sizeof(ql_stats_str_arr)); + for (index = 0; index < QLGE_STATS_LEN; index++) { + memcpy(buf + index * ETH_GSTRING_LEN, + ql_gstrings_stats[index].stat_string, + ETH_GSTRING_LEN); + } break; } } @@ -260,7 +348,7 @@ static int ql_get_sset_count(struct net_device *dev, int sset) case ETH_SS_TEST: return QLGE_TEST_LEN; case ETH_SS_STATS: - return ARRAY_SIZE(ql_stats_str_arr); + return QLGE_STATS_LEN; default: return -EOPNOTSUPP; } @@ -271,69 +359,17 @@ ql_get_ethtool_stats(struct net_device *ndev, struct ethtool_stats *stats, u64 *data) { struct ql_adapter *qdev = netdev_priv(ndev); - struct nic_stats *s = &qdev->nic_stats; + int index, length; + length = QLGE_STATS_LEN; ql_update_stats(qdev); - *data++ = s->tx_pkts; - *data++ = s->tx_bytes; - *data++ = s->tx_mcast_pkts; - *data++ = s->tx_bcast_pkts; - *data++ = s->tx_ucast_pkts; - *data++ = s->tx_ctl_pkts; - *data++ = s->tx_pause_pkts; - *data++ = s->tx_64_pkt; - *data++ = s->tx_65_to_127_pkt; - *data++ = s->tx_128_to_255_pkt; - *data++ = s->tx_256_511_pkt; - *data++ = s->tx_512_to_1023_pkt; - *data++ = s->tx_1024_to_1518_pkt; - *data++ = s->tx_1519_to_max_pkt; - *data++ = s->tx_undersize_pkt; - *data++ = s->tx_oversize_pkt; - *data++ = s->rx_bytes; - *data++ = s->rx_bytes_ok; - *data++ = s->rx_pkts; - *data++ = s->rx_pkts_ok; - *data++ = s->rx_bcast_pkts; - *data++ = s->rx_mcast_pkts; - *data++ = s->rx_ucast_pkts; - *data++ = 
s->rx_undersize_pkts; - *data++ = s->rx_oversize_pkts; - *data++ = s->rx_jabber_pkts; - *data++ = s->rx_undersize_fcerr_pkts; - *data++ = s->rx_drop_events; - *data++ = s->rx_fcerr_pkts; - *data++ = s->rx_align_err; - *data++ = s->rx_symbol_err; - *data++ = s->rx_mac_err; - *data++ = s->rx_ctl_pkts; - *data++ = s->rx_pause_pkts; - *data++ = s->rx_64_pkts; - *data++ = s->rx_65_to_127_pkts; - *data++ = s->rx_128_255_pkts; - *data++ = s->rx_256_511_pkts; - *data++ = s->rx_512_to_1023_pkts; - *data++ = s->rx_1024_to_1518_pkts; - *data++ = s->rx_1519_to_max_pkts; - *data++ = s->rx_len_err_pkts; - *data++ = s->tx_cbfc_pause_frames0; - *data++ = s->tx_cbfc_pause_frames1; - *data++ = s->tx_cbfc_pause_frames2; - *data++ = s->tx_cbfc_pause_frames3; - *data++ = s->tx_cbfc_pause_frames4; - *data++ = s->tx_cbfc_pause_frames5; - *data++ = s->tx_cbfc_pause_frames6; - *data++ = s->tx_cbfc_pause_frames7; - *data++ = s->rx_cbfc_pause_frames0; - *data++ = s->rx_cbfc_pause_frames1; - *data++ = s->rx_cbfc_pause_frames2; - *data++ = s->rx_cbfc_pause_frames3; - *data++ = s->rx_cbfc_pause_frames4; - *data++ = s->rx_cbfc_pause_frames5; - *data++ = s->rx_cbfc_pause_frames6; - *data++ = s->rx_cbfc_pause_frames7; - *data++ = s->rx_nic_fifo_drop; + for (index = 0; index < length; index++) { + char *p = (char *)qdev + + ql_gstrings_stats[index].stat_offset; + *data++ = (ql_gstrings_stats[index].sizeof_stat == + sizeof(u64)) ? *(u64 *)p : (*(u32 *)p); + } } static int ql_get_settings(struct net_device *ndev, @@ -388,30 +424,33 @@ static void ql_get_drvinfo(struct net_device *ndev, static void ql_get_wol(struct net_device *ndev, struct ethtool_wolinfo *wol) { struct ql_adapter *qdev = netdev_priv(ndev); - /* What we support. */ - wol->supported = WAKE_MAGIC; - /* What we've currently got set. */ - wol->wolopts = qdev->wol; + unsigned short ssys_dev = qdev->pdev->subsystem_device; + + /* WOL is only supported for mezz card. */ + if (ssys_dev == QLGE_MEZZ_SSYS_ID_068 || + ssys_dev == QLGE_MEZZ_SSYS_ID_180) { + wol->supported = WAKE_MAGIC; + wol->wolopts = qdev->wol; + } } static int ql_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol) { struct ql_adapter *qdev = netdev_priv(ndev); - int status; + unsigned short ssys_dev = qdev->pdev->subsystem_device; + /* WOL is only supported for mezz card. */ + if (ssys_dev != QLGE_MEZZ_SSYS_ID_068 && + ssys_dev != QLGE_MEZZ_SSYS_ID_180) { + netif_info(qdev, drv, qdev->ndev, + "WOL is only supported for mezz card\n"); + return -EOPNOTSUPP; + } if (wol->wolopts & ~WAKE_MAGIC) return -EINVAL; qdev->wol = wol->wolopts; netif_info(qdev, drv, qdev->ndev, "Set wol option 0x%x\n", qdev->wol); - if (!qdev->wol) { - u32 wol = 0; - status = ql_mb_wol_mode(qdev, wol); - netif_err(qdev, drv, qdev->ndev, "WOL %s (wol code 0x%x)\n", - status == 0 ? 
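
/*
 * The qlge_ethtool.c rewrite above replaces dozens of "*data++ = s->foo"
 * lines with a {name, size, offset} table walked by one loop, the same idea
 * as the driver's QL_SIZEOF()/QL_OFF() macros.  Minimal stand-alone version
 * of that pattern with hypothetical fields:
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct nic_stats { uint64_t tx_pkts; uint64_t rx_pkts; uint32_t rx_crc_err; };
struct adapter   { int id; struct nic_stats nic_stats; };

struct stat_desc {
	const char *name;
	size_t size;     /* sizeof the field          */
	size_t offset;   /* offset inside the adapter */
};

#define STAT(m) { #m, sizeof(((struct adapter *)0)->m), offsetof(struct adapter, m) }

static const struct stat_desc stats[] = {
	STAT(nic_stats.tx_pkts),
	STAT(nic_stats.rx_pkts),
	STAT(nic_stats.rx_crc_err),
};

int main(void)
{
	struct adapter a = { 0, { 10, 20, 3 } };
	uint64_t data[3];
	size_t i;

	for (i = 0; i < sizeof(stats) / sizeof(stats[0]); i++) {
		const char *p = (const char *)&a + stats[i].offset;

		/* widen 32-bit counters so ethtool-style u64 output works */
		data[i] = (stats[i].size == sizeof(uint64_t)) ?
			  *(const uint64_t *)p : *(const uint32_t *)p;
		printf("%-22s %llu\n", stats[i].name,
		       (unsigned long long)data[i]);
	}
	return 0;
}
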
"cleared successfully" : "clear failed", - wol); - } - return 0; } @@ -528,6 +567,8 @@ static void ql_self_test(struct net_device *ndev, { struct ql_adapter *qdev = netdev_priv(ndev); + memset(data, 0, sizeof(u64) * QLGE_TEST_LEN); + if (netif_running(ndev)) { set_bit(QL_SELFTEST, &qdev->flags); if (eth_test->flags == ETH_TEST_FL_OFFLINE) { diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c index 09d8d33171df..3769f5711cc3 100644 --- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c +++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c @@ -1433,6 +1433,36 @@ map_error: return NETDEV_TX_BUSY; } +/* Categorizing receive firmware frame errors */ +static void ql_categorize_rx_err(struct ql_adapter *qdev, u8 rx_err) +{ + struct nic_stats *stats = &qdev->nic_stats; + + stats->rx_err_count++; + + switch (rx_err & IB_MAC_IOCB_RSP_ERR_MASK) { + case IB_MAC_IOCB_RSP_ERR_CODE_ERR: + stats->rx_code_err++; + break; + case IB_MAC_IOCB_RSP_ERR_OVERSIZE: + stats->rx_oversize_err++; + break; + case IB_MAC_IOCB_RSP_ERR_UNDERSIZE: + stats->rx_undersize_err++; + break; + case IB_MAC_IOCB_RSP_ERR_PREAMBLE: + stats->rx_preamble_err++; + break; + case IB_MAC_IOCB_RSP_ERR_FRAME_LEN: + stats->rx_frame_len_err++; + break; + case IB_MAC_IOCB_RSP_ERR_CRC: + stats->rx_crc_err++; + default: + break; + } +} + /* Process an inbound completion from an rx ring. */ static void ql_process_mac_rx_gro_page(struct ql_adapter *qdev, struct rx_ring *rx_ring, @@ -1499,15 +1529,6 @@ static void ql_process_mac_rx_page(struct ql_adapter *qdev, addr = lbq_desc->p.pg_chunk.va; prefetch(addr); - - /* Frame error, so drop the packet. */ - if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { - netif_info(qdev, drv, qdev->ndev, - "Receive error, flags2 = 0x%x\n", ib_mac_rsp->flags2); - rx_ring->rx_errors++; - goto err_out; - } - /* The max framesize filter on this chip is set higher than * MTU since FCoE uses 2k frames. */ @@ -1546,7 +1567,7 @@ static void ql_process_mac_rx_page(struct ql_adapter *qdev, struct iphdr *iph = (struct iphdr *) ((u8 *)addr + ETH_HLEN); if (!(iph->frag_off & - cpu_to_be16(IP_MF|IP_OFFSET))) { + htons(IP_MF|IP_OFFSET))) { skb->ip_summed = CHECKSUM_UNNECESSARY; netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, @@ -1593,15 +1614,6 @@ static void ql_process_mac_rx_skb(struct ql_adapter *qdev, memcpy(skb_put(new_skb, length), skb->data, length); skb = new_skb; - /* Frame error, so drop the packet. */ - if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { - netif_info(qdev, drv, qdev->ndev, - "Receive error, flags2 = 0x%x\n", ib_mac_rsp->flags2); - dev_kfree_skb_any(skb); - rx_ring->rx_errors++; - return; - } - /* loopback self test for ethtool */ if (test_bit(QL_SELFTEST, &qdev->flags)) { ql_check_lb_frame(qdev, skb); @@ -1619,7 +1631,6 @@ static void ql_process_mac_rx_skb(struct ql_adapter *qdev, } prefetch(skb->data); - skb->dev = ndev; if (ib_mac_rsp->flags1 & IB_MAC_IOCB_RSP_M_MASK) { netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "%s Multicast.\n", @@ -1654,7 +1665,7 @@ static void ql_process_mac_rx_skb(struct ql_adapter *qdev, /* Unfragmented ipv4 UDP frame. */ struct iphdr *iph = (struct iphdr *) skb->data; if (!(iph->frag_off & - ntohs(IP_MF|IP_OFFSET))) { + htons(IP_MF|IP_OFFSET))) { skb->ip_summed = CHECKSUM_UNNECESSARY; netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, @@ -1908,15 +1919,6 @@ static void ql_process_mac_split_rx_intr(struct ql_adapter *qdev, return; } - /* Frame error, so drop the packet. 
*/ - if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { - netif_info(qdev, drv, qdev->ndev, - "Receive error, flags2 = 0x%x\n", ib_mac_rsp->flags2); - dev_kfree_skb_any(skb); - rx_ring->rx_errors++; - return; - } - /* The max framesize filter on this chip is set higher than * MTU since FCoE uses 2k frames. */ @@ -1934,7 +1936,6 @@ static void ql_process_mac_split_rx_intr(struct ql_adapter *qdev, } prefetch(skb->data); - skb->dev = ndev; if (ib_mac_rsp->flags1 & IB_MAC_IOCB_RSP_M_MASK) { netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "%s Multicast.\n", (ib_mac_rsp->flags1 & IB_MAC_IOCB_RSP_M_MASK) == @@ -1968,7 +1969,7 @@ static void ql_process_mac_split_rx_intr(struct ql_adapter *qdev, /* Unfragmented ipv4 UDP frame. */ struct iphdr *iph = (struct iphdr *) skb->data; if (!(iph->frag_off & - ntohs(IP_MF|IP_OFFSET))) { + htons(IP_MF|IP_OFFSET))) { skb->ip_summed = CHECKSUM_UNNECESSARY; netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev, "TCP checksum done!\n"); @@ -1999,6 +2000,12 @@ static unsigned long ql_process_mac_rx_intr(struct ql_adapter *qdev, QL_DUMP_IB_MAC_RSP(ib_mac_rsp); + /* Frame error, so drop the packet. */ + if (ib_mac_rsp->flags2 & IB_MAC_IOCB_RSP_ERR_MASK) { + ql_categorize_rx_err(qdev, ib_mac_rsp->flags2); + return (unsigned long)length; + } + if (ib_mac_rsp->flags4 & IB_MAC_IOCB_RSP_HV) { /* The data and headers are split into * separate buffers. @@ -2173,8 +2180,7 @@ static int ql_clean_outbound_rx_ring(struct rx_ring *rx_ring) ql_write_cq_idx(rx_ring); tx_ring = &qdev->tx_ring[net_rsp->txq_idx]; if (__netif_subqueue_stopped(qdev->ndev, tx_ring->wq_id)) { - if (atomic_read(&tx_ring->queue_stopped) && - (atomic_read(&tx_ring->tx_count) > (tx_ring->wq_len / 4))) + if ((atomic_read(&tx_ring->tx_count) > (tx_ring->wq_len / 4))) /* * The queue got stopped because the tx_ring was full. * Wake it up, because it's now at least 25% empty. @@ -2558,10 +2564,9 @@ static netdev_tx_t qlge_send(struct sk_buff *skb, struct net_device *ndev) if (unlikely(atomic_read(&tx_ring->tx_count) < 2)) { netif_info(qdev, tx_queued, qdev->ndev, - "%s: shutting down tx queue %d du to lack of resources.\n", + "%s: BUG! shutting down tx queue %d due to lack of resources.\n", __func__, tx_ring_idx); netif_stop_subqueue(ndev, tx_ring->wq_id); - atomic_inc(&tx_ring->queue_stopped); tx_ring->tx_errors++; return NETDEV_TX_BUSY; } @@ -2612,6 +2617,16 @@ static netdev_tx_t qlge_send(struct sk_buff *skb, struct net_device *ndev) tx_ring->prod_idx, skb->len); atomic_dec(&tx_ring->tx_count); + + if (unlikely(atomic_read(&tx_ring->tx_count) < 2)) { + netif_stop_subqueue(ndev, tx_ring->wq_id); + if ((atomic_read(&tx_ring->tx_count) > (tx_ring->wq_len / 4))) + /* + * The queue got stopped because the tx_ring was full. + * Wake it up, because it's now at least 25% empty. 
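
/*
 * The qlge hunks above spell the fragment test as
 * "iph->frag_off & htons(IP_MF | IP_OFFSET)": frag_off sits in the packet in
 * network byte order, so the host-order constant is converted once with
 * htons().  For a 16-bit value htons() and ntohs() swap the same bytes, so
 * this is about stating the intent clearly rather than changing behaviour.
 * Stand-alone illustration with locally defined constants (not netinet's):
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define MY_IP_MF     0x2000   /* "more fragments" flag, host order */
#define MY_IP_OFFSET 0x1fff   /* fragment offset mask, host order  */

int main(void)
{
	uint16_t frag_off = htons(0x2001);   /* as it would appear on the wire */

	if (frag_off & htons(MY_IP_MF | MY_IP_OFFSET))
		printf("fragmented datagram\n");
	else
		printf("not fragmented\n");
	return 0;
}
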
+ */ + netif_wake_subqueue(qdev->ndev, tx_ring->wq_id); + } return NETDEV_TX_OK; } @@ -2680,7 +2695,6 @@ static void ql_init_tx_ring(struct ql_adapter *qdev, struct tx_ring *tx_ring) tx_ring_desc++; } atomic_set(&tx_ring->tx_count, tx_ring->wq_len); - atomic_set(&tx_ring->queue_stopped, 0); } static void ql_free_tx_resources(struct ql_adapter *qdev, @@ -2703,10 +2717,9 @@ static int ql_alloc_tx_resources(struct ql_adapter *qdev, &tx_ring->wq_base_dma); if ((tx_ring->wq_base == NULL) || - tx_ring->wq_base_dma & WQ_ADDR_ALIGN) { - netif_err(qdev, ifup, qdev->ndev, "tx_ring alloc failed.\n"); - return -ENOMEM; - } + tx_ring->wq_base_dma & WQ_ADDR_ALIGN) + goto pci_alloc_err; + tx_ring->q = kmalloc(tx_ring->wq_len * sizeof(struct tx_ring_desc), GFP_KERNEL); if (tx_ring->q == NULL) @@ -2716,6 +2729,9 @@ static int ql_alloc_tx_resources(struct ql_adapter *qdev, err: pci_free_consistent(qdev->pdev, tx_ring->wq_size, tx_ring->wq_base, tx_ring->wq_base_dma); + tx_ring->wq_base = NULL; +pci_alloc_err: + netif_err(qdev, ifup, qdev->ndev, "tx_ring alloc failed.\n"); return -ENOMEM; } @@ -4649,7 +4665,7 @@ static int __devinit qlge_probe(struct pci_dev *pdev, int err = 0; ndev = alloc_etherdev_mq(sizeof(struct ql_adapter), - min(MAX_CPUS, (int)num_online_cpus())); + min(MAX_CPUS, netif_get_num_default_rss_queues())); if (!ndev) return -ENOMEM; diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c index d1827e887f4e..557a26545d75 100644 --- a/drivers/net/ethernet/rdc/r6040.c +++ b/drivers/net/ethernet/rdc/r6040.c @@ -1256,7 +1256,6 @@ static void __devexit r6040_remove_one(struct pci_dev *pdev) kfree(lp->mii_bus->irq); mdiobus_free(lp->mii_bus); netif_napi_del(&lp->napi); - pci_set_drvdata(pdev, NULL); pci_iounmap(pdev, lp->base); pci_release_regions(pdev); free_netdev(dev); @@ -1278,17 +1277,4 @@ static struct pci_driver r6040_driver = { .remove = __devexit_p(r6040_remove_one), }; - -static int __init r6040_init(void) -{ - return pci_register_driver(&r6040_driver); -} - - -static void __exit r6040_cleanup(void) -{ - pci_unregister_driver(&r6040_driver); -} - -module_init(r6040_init); -module_exit(r6040_cleanup); +module_pci_driver(r6040_driver); diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index d7a04e091101..b47d5b35024e 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -46,6 +46,8 @@ #define FIRMWARE_8105E_1 "rtl_nic/rtl8105e-1.fw" #define FIRMWARE_8402_1 "rtl_nic/rtl8402-1.fw" #define FIRMWARE_8411_1 "rtl_nic/rtl8411-1.fw" +#define FIRMWARE_8106E_1 "rtl_nic/rtl8106e-1.fw" +#define FIRMWARE_8168G_1 "rtl_nic/rtl8168g-1.fw" #ifdef RTL8169_DEBUG #define assert(expr) \ @@ -141,6 +143,9 @@ enum mac_version { RTL_GIGA_MAC_VER_36, RTL_GIGA_MAC_VER_37, RTL_GIGA_MAC_VER_38, + RTL_GIGA_MAC_VER_39, + RTL_GIGA_MAC_VER_40, + RTL_GIGA_MAC_VER_41, RTL_GIGA_MAC_NONE = 0xff, }; @@ -259,6 +264,14 @@ static const struct { [RTL_GIGA_MAC_VER_38] = _R("RTL8411", RTL_TD_1, FIRMWARE_8411_1, JUMBO_9K, false), + [RTL_GIGA_MAC_VER_39] = + _R("RTL8106e", RTL_TD_1, FIRMWARE_8106E_1, + JUMBO_1K, true), + [RTL_GIGA_MAC_VER_40] = + _R("RTL8168g/8111g", RTL_TD_1, FIRMWARE_8168G_1, + JUMBO_9K, false), + [RTL_GIGA_MAC_VER_41] = + _R("RTL8168g/8111g", RTL_TD_1, NULL, JUMBO_9K, false), }; #undef _R @@ -389,8 +402,12 @@ enum rtl8168_8101_registers { TWSI = 0xd2, MCU = 0xd3, #define NOW_IS_OOB (1 << 7) +#define TX_EMPTY (1 << 5) +#define RX_EMPTY (1 << 4) +#define RXTX_EMPTY (TX_EMPTY | RX_EMPTY) #define EN_NDP (1 << 3) 
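
/*
 * Shape of the qlge_send() change above: after queuing a frame the driver
 * stops the TX subqueue when descriptors run low, then re-checks the count
 * and wakes the queue if the completion path already freed enough entries.
 * Without the re-check, a completion landing between the count read and the
 * stop could leave the queue stalled.  Hedged kernel-style sketch with a
 * hypothetical ring structure, not the driver's:
 */
#include <linux/atomic.h>
#include <linux/netdevice.h>

struct example_tx_ring {           /* placeholder, not struct tx_ring */
	atomic_t tx_count;         /* free descriptors                */
	u16 wq_id;
	unsigned int wq_len;
};

static void example_maybe_stop_tx(struct net_device *ndev,
				  struct example_tx_ring *ring)
{
	if (unlikely(atomic_read(&ring->tx_count) < 2)) {
		netif_stop_subqueue(ndev, ring->wq_id);
		/* completions may have run since the read above */
		if (atomic_read(&ring->tx_count) > (ring->wq_len / 4))
			netif_wake_subqueue(ndev, ring->wq_id);
	}
}
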
#define EN_OOB_RESET (1 << 2) +#define LINK_LIST_RDY (1 << 1) EFUSEAR = 0xdc, #define EFUSEAR_FLAG 0x80000000 #define EFUSEAR_WRITE_CMD 0x80000000 @@ -416,6 +433,7 @@ enum rtl8168_registers { #define ERIAR_MASK_SHIFT 12 #define ERIAR_MASK_0001 (0x1 << ERIAR_MASK_SHIFT) #define ERIAR_MASK_0011 (0x3 << ERIAR_MASK_SHIFT) +#define ERIAR_MASK_0101 (0x5 << ERIAR_MASK_SHIFT) #define ERIAR_MASK_1111 (0xf << ERIAR_MASK_SHIFT) EPHY_RXER_NUM = 0x7c, OCPDR = 0xb0, /* OCP GPHY access */ @@ -428,10 +446,14 @@ enum rtl8168_registers { #define OCPAR_FLAG 0x80000000 #define OCPAR_GPHY_WRITE_CMD 0x8000f060 #define OCPAR_GPHY_READ_CMD 0x0000f060 + GPHY_OCP = 0xb8, RDSAR1 = 0xd0, /* 8168c only. Undocumented on 8168dp */ MISC = 0xf0, /* 8168e only. */ #define TXPLA_RST (1 << 29) +#define DISABLE_LAN_EN (1 << 23) /* Enable GPIO pin */ #define PWM_EN (1 << 22) +#define RXDV_GATED_EN (1 << 19) +#define EARLY_TALLY_EN (1 << 16) }; enum rtl_register_content { @@ -721,8 +743,8 @@ struct rtl8169_private { u16 event_slow; struct mdio_ops { - void (*write)(void __iomem *, int, int); - int (*read)(void __iomem *, int); + void (*write)(struct rtl8169_private *, int, int); + int (*read)(struct rtl8169_private *, int); } mdio_ops; struct pll_power_ops { @@ -736,8 +758,8 @@ struct rtl8169_private { } jumbo_ops; struct csi_ops { - void (*write)(void __iomem *, int, int); - u32 (*read)(void __iomem *, int); + void (*write)(struct rtl8169_private *, int, int); + u32 (*read)(struct rtl8169_private *, int); } csi_ops; int (*set_speed)(struct net_device *, u8 aneg, u16 sp, u8 dpx, u32 adv); @@ -774,6 +796,8 @@ struct rtl8169_private { } phy_action; } *rtl_fw; #define RTL_FIRMWARE_UNKNOWN ERR_PTR(-EAGAIN) + + u32 ocp_base; }; MODULE_AUTHOR("Realtek and the Linux r8169 crew <netdev@vger.kernel.org>"); @@ -794,6 +818,8 @@ MODULE_FIRMWARE(FIRMWARE_8168F_1); MODULE_FIRMWARE(FIRMWARE_8168F_2); MODULE_FIRMWARE(FIRMWARE_8402_1); MODULE_FIRMWARE(FIRMWARE_8411_1); +MODULE_FIRMWARE(FIRMWARE_8106E_1); +MODULE_FIRMWARE(FIRMWARE_8168G_1); static void rtl_lock_work(struct rtl8169_private *tp) { @@ -818,47 +844,114 @@ static void rtl_tx_performance_tweak(struct pci_dev *pdev, u16 force) } } +struct rtl_cond { + bool (*check)(struct rtl8169_private *); + const char *msg; +}; + +static void rtl_udelay(unsigned int d) +{ + udelay(d); +} + +static bool rtl_loop_wait(struct rtl8169_private *tp, const struct rtl_cond *c, + void (*delay)(unsigned int), unsigned int d, int n, + bool high) +{ + int i; + + for (i = 0; i < n; i++) { + delay(d); + if (c->check(tp) == high) + return true; + } + netif_err(tp, drv, tp->dev, "%s == %d (loop: %d, delay: %d).\n", + c->msg, !high, n, d); + return false; +} + +static bool rtl_udelay_loop_wait_high(struct rtl8169_private *tp, + const struct rtl_cond *c, + unsigned int d, int n) +{ + return rtl_loop_wait(tp, c, rtl_udelay, d, n, true); +} + +static bool rtl_udelay_loop_wait_low(struct rtl8169_private *tp, + const struct rtl_cond *c, + unsigned int d, int n) +{ + return rtl_loop_wait(tp, c, rtl_udelay, d, n, false); +} + +static bool rtl_msleep_loop_wait_high(struct rtl8169_private *tp, + const struct rtl_cond *c, + unsigned int d, int n) +{ + return rtl_loop_wait(tp, c, msleep, d, n, true); +} + +static bool rtl_msleep_loop_wait_low(struct rtl8169_private *tp, + const struct rtl_cond *c, + unsigned int d, int n) +{ + return rtl_loop_wait(tp, c, msleep, d, n, false); +} + +#define DECLARE_RTL_COND(name) \ +static bool name ## _check(struct rtl8169_private *); \ + \ +static const struct rtl_cond name = { \ + .check = 
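
/*
 * The r8169 rework above folds its many open-coded "poll a register bit with
 * udelay/msleep" loops into one helper driven by a named condition
 * (struct rtl_cond plus rtl_loop_wait and the DECLARE_RTL_COND macro).
 * Generic stand-alone version of the same idea with a toy condition:
 */
#include <stdbool.h>
#include <stdio.h>

struct poll_cond {
	bool (*check)(void *ctx);   /* samples the bit being waited on */
	const char *msg;            /* printed on timeout              */
};

static bool poll_wait(void *ctx, const struct poll_cond *c,
		      void (*delay)(unsigned int), unsigned int d,
		      int tries, bool want_high)
{
	int i;

	for (i = 0; i < tries; i++) {
		delay(d);
		if (c->check(ctx) == want_high)
			return true;
	}
	printf("%s == %d (loop: %d, delay: %u)\n", c->msg, !want_high, tries, d);
	return false;
}

/* toy condition: a "ready" flag in a fake device context */
static bool ready_check(void *ctx) { return *(int *)ctx != 0; }
static const struct poll_cond ready_cond = { ready_check, "ready_cond" };

static void no_delay(unsigned int d) { (void)d; }

int main(void)
{
	int ready = 1;

	return poll_wait(&ready, &ready_cond, no_delay, 1, 20, true) ? 0 : 1;
}
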
name ## _check, \ + .msg = #name \ +}; \ + \ +static bool name ## _check(struct rtl8169_private *tp) + +DECLARE_RTL_COND(rtl_ocpar_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(OCPAR) & OCPAR_FLAG; +} + static u32 ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg) { void __iomem *ioaddr = tp->mmio_addr; - int i; RTL_W32(OCPAR, ((u32)mask & 0x0f) << 12 | (reg & 0x0fff)); - for (i = 0; i < 20; i++) { - udelay(100); - if (RTL_R32(OCPAR) & OCPAR_FLAG) - break; - } - return RTL_R32(OCPDR); + + return rtl_udelay_loop_wait_high(tp, &rtl_ocpar_cond, 100, 20) ? + RTL_R32(OCPDR) : ~0; } static void ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, u32 data) { void __iomem *ioaddr = tp->mmio_addr; - int i; RTL_W32(OCPDR, data); RTL_W32(OCPAR, OCPAR_FLAG | ((u32)mask & 0x0f) << 12 | (reg & 0x0fff)); - for (i = 0; i < 20; i++) { - udelay(100); - if ((RTL_R32(OCPAR) & OCPAR_FLAG) == 0) - break; - } + + rtl_udelay_loop_wait_low(tp, &rtl_ocpar_cond, 100, 20); +} + +DECLARE_RTL_COND(rtl_eriar_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(ERIAR) & ERIAR_FLAG; } static void rtl8168_oob_notify(struct rtl8169_private *tp, u8 cmd) { void __iomem *ioaddr = tp->mmio_addr; - int i; RTL_W8(ERIDR, cmd); RTL_W32(ERIAR, 0x800010e8); msleep(2); - for (i = 0; i < 5; i++) { - udelay(100); - if (!(RTL_R32(ERIAR) & ERIAR_FLAG)) - break; - } + + if (!rtl_udelay_loop_wait_low(tp, &rtl_eriar_cond, 100, 5)) + return; ocp_write(tp, 0x1, 0x30, 0x00000001); } @@ -872,36 +965,27 @@ static u16 rtl8168_get_ocp_reg(struct rtl8169_private *tp) return (tp->mac_version == RTL_GIGA_MAC_VER_31) ? 0xb8 : 0x10; } -static void rtl8168_driver_start(struct rtl8169_private *tp) +DECLARE_RTL_COND(rtl_ocp_read_cond) { u16 reg; - int i; - - rtl8168_oob_notify(tp, OOB_CMD_DRIVER_START); reg = rtl8168_get_ocp_reg(tp); - for (i = 0; i < 10; i++) { - msleep(10); - if (ocp_read(tp, 0x0f, reg) & 0x00000800) - break; - } + return ocp_read(tp, 0x0f, reg) & 0x00000800; } -static void rtl8168_driver_stop(struct rtl8169_private *tp) +static void rtl8168_driver_start(struct rtl8169_private *tp) { - u16 reg; - int i; + rtl8168_oob_notify(tp, OOB_CMD_DRIVER_START); - rtl8168_oob_notify(tp, OOB_CMD_DRIVER_STOP); + rtl_msleep_loop_wait_high(tp, &rtl_ocp_read_cond, 10, 10); +} - reg = rtl8168_get_ocp_reg(tp); +static void rtl8168_driver_stop(struct rtl8169_private *tp) +{ + rtl8168_oob_notify(tp, OOB_CMD_DRIVER_STOP); - for (i = 0; i < 10; i++) { - msleep(10); - if ((ocp_read(tp, 0x0f, reg) & 0x00000800) == 0) - break; - } + rtl_msleep_loop_wait_low(tp, &rtl_ocp_read_cond, 10, 10); } static int r8168dp_check_dash(struct rtl8169_private *tp) @@ -911,21 +995,114 @@ static int r8168dp_check_dash(struct rtl8169_private *tp) return (ocp_read(tp, 0x0f, reg) & 0x00008000) ? 1 : 0; } -static void r8169_mdio_write(void __iomem *ioaddr, int reg_addr, int value) +static bool rtl_ocp_reg_failure(struct rtl8169_private *tp, u32 reg) { - int i; + if (reg & 0xffff0001) { + netif_err(tp, drv, tp->dev, "Invalid ocp reg %x!\n", reg); + return true; + } + return false; +} - RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0x1f) << 16 | (value & 0xffff)); +DECLARE_RTL_COND(rtl_ocp_gphy_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; - for (i = 20; i > 0; i--) { - /* - * Check if the RTL8169 has completed writing to the specified - * MII register. 
- */ - if (!(RTL_R32(PHYAR) & 0x80000000)) - break; - udelay(25); + return RTL_R32(GPHY_OCP) & OCPAR_FLAG; +} + +static void r8168_phy_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data) +{ + void __iomem *ioaddr = tp->mmio_addr; + + if (rtl_ocp_reg_failure(tp, reg)) + return; + + RTL_W32(GPHY_OCP, OCPAR_FLAG | (reg << 15) | data); + + rtl_udelay_loop_wait_low(tp, &rtl_ocp_gphy_cond, 25, 10); +} + +static u16 r8168_phy_ocp_read(struct rtl8169_private *tp, u32 reg) +{ + void __iomem *ioaddr = tp->mmio_addr; + + if (rtl_ocp_reg_failure(tp, reg)) + return 0; + + RTL_W32(GPHY_OCP, reg << 15); + + return rtl_udelay_loop_wait_high(tp, &rtl_ocp_gphy_cond, 25, 10) ? + (RTL_R32(GPHY_OCP) & 0xffff) : ~0; +} + +static void rtl_w1w0_phy_ocp(struct rtl8169_private *tp, int reg, int p, int m) +{ + int val; + + val = r8168_phy_ocp_read(tp, reg); + r8168_phy_ocp_write(tp, reg, (val | p) & ~m); +} + +static void r8168_mac_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data) +{ + void __iomem *ioaddr = tp->mmio_addr; + + if (rtl_ocp_reg_failure(tp, reg)) + return; + + RTL_W32(OCPDR, OCPAR_FLAG | (reg << 15) | data); +} + +static u16 r8168_mac_ocp_read(struct rtl8169_private *tp, u32 reg) +{ + void __iomem *ioaddr = tp->mmio_addr; + + if (rtl_ocp_reg_failure(tp, reg)) + return 0; + + RTL_W32(OCPDR, reg << 15); + + return RTL_R32(OCPDR); +} + +#define OCP_STD_PHY_BASE 0xa400 + +static void r8168g_mdio_write(struct rtl8169_private *tp, int reg, int value) +{ + if (reg == 0x1f) { + tp->ocp_base = value ? value << 4 : OCP_STD_PHY_BASE; + return; } + + if (tp->ocp_base != OCP_STD_PHY_BASE) + reg -= 0x10; + + r8168_phy_ocp_write(tp, tp->ocp_base + reg * 2, value); +} + +static int r8168g_mdio_read(struct rtl8169_private *tp, int reg) +{ + if (tp->ocp_base != OCP_STD_PHY_BASE) + reg -= 0x10; + + return r8168_phy_ocp_read(tp, tp->ocp_base + reg * 2); +} + +DECLARE_RTL_COND(rtl_phyar_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(PHYAR) & 0x80000000; +} + +static void r8169_mdio_write(struct rtl8169_private *tp, int reg, int value) +{ + void __iomem *ioaddr = tp->mmio_addr; + + RTL_W32(PHYAR, 0x80000000 | (reg & 0x1f) << 16 | (value & 0xffff)); + + rtl_udelay_loop_wait_low(tp, &rtl_phyar_cond, 25, 20); /* * According to hardware specs a 20us delay is required after write * complete indication, but before sending next command. @@ -933,23 +1110,16 @@ static void r8169_mdio_write(void __iomem *ioaddr, int reg_addr, int value) udelay(20); } -static int r8169_mdio_read(void __iomem *ioaddr, int reg_addr) +static int r8169_mdio_read(struct rtl8169_private *tp, int reg) { - int i, value = -1; + void __iomem *ioaddr = tp->mmio_addr; + int value; - RTL_W32(PHYAR, 0x0 | (reg_addr & 0x1f) << 16); + RTL_W32(PHYAR, 0x0 | (reg & 0x1f) << 16); + + value = rtl_udelay_loop_wait_high(tp, &rtl_phyar_cond, 25, 20) ? + RTL_R32(PHYAR) & 0xffff : ~0; - for (i = 20; i > 0; i--) { - /* - * Check if the RTL8169 has completed retrieving data from - * the specified MII register. - */ - if (RTL_R32(PHYAR) & 0x80000000) { - value = RTL_R32(PHYAR) & 0xffff; - break; - } - udelay(25); - } /* * According to hardware specs a 20us delay is required after read * complete indication, but before sending next command. 
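/*
 * [Editor's note -- illustrative sketch, not part of the r8169 patch
 *  above.]  The hunks in this file replace many open-coded "poll a
 *  register until a flag flips" loops with a single rtl_loop_wait()
 *  helper driven by small named rtl_cond descriptors (see
 *  DECLARE_RTL_COND earlier in the diff).  The self-contained userspace
 *  analogue below shows the shape of that pattern; struct poll_ctx,
 *  demo_ready_cond and the usleep()-based delay are hypothetical
 *  stand-ins, not kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct poll_ctx {
	unsigned int reg;		/* stands in for a device register */
};

struct poll_cond {
	bool (*check)(const struct poll_ctx *);
	const char *msg;
};

/* Poll until check() == high, giving up after n attempts. */
static bool loop_wait(const struct poll_ctx *ctx, const struct poll_cond *c,
		      unsigned int delay_us, int n, bool high)
{
	int i;

	for (i = 0; i < n; i++) {
		usleep(delay_us);
		if (c->check(ctx) == high)
			return true;
	}
	fprintf(stderr, "%s == %d (loop: %d, delay: %u us)\n",
		c->msg, !high, n, delay_us);
	return false;
}

/* One named condition -- roughly what DECLARE_RTL_COND() expands to. */
static bool demo_ready_cond_check(const struct poll_ctx *ctx)
{
	return ctx->reg & 0x80000000u;
}

static const struct poll_cond demo_ready_cond = {
	.check	= demo_ready_cond_check,
	.msg	= "demo_ready_cond",
};

int main(void)
{
	struct poll_ctx ctx = { .reg = 0x80000000u };

	/* Equivalent of rtl_udelay_loop_wait_high(tp, &cond, 100, 20). */
	return loop_wait(&ctx, &demo_ready_cond, 100, 20, true) ? 0 : 1;
}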
@@ -959,45 +1129,35 @@ static int r8169_mdio_read(void __iomem *ioaddr, int reg_addr) return value; } -static void r8168dp_1_mdio_access(void __iomem *ioaddr, int reg_addr, u32 data) +static void r8168dp_1_mdio_access(struct rtl8169_private *tp, int reg, u32 data) { - int i; + void __iomem *ioaddr = tp->mmio_addr; - RTL_W32(OCPDR, data | - ((reg_addr & OCPDR_REG_MASK) << OCPDR_GPHY_REG_SHIFT)); + RTL_W32(OCPDR, data | ((reg & OCPDR_REG_MASK) << OCPDR_GPHY_REG_SHIFT)); RTL_W32(OCPAR, OCPAR_GPHY_WRITE_CMD); RTL_W32(EPHY_RXER_NUM, 0); - for (i = 0; i < 100; i++) { - mdelay(1); - if (!(RTL_R32(OCPAR) & OCPAR_FLAG)) - break; - } + rtl_udelay_loop_wait_low(tp, &rtl_ocpar_cond, 1000, 100); } -static void r8168dp_1_mdio_write(void __iomem *ioaddr, int reg_addr, int value) +static void r8168dp_1_mdio_write(struct rtl8169_private *tp, int reg, int value) { - r8168dp_1_mdio_access(ioaddr, reg_addr, OCPDR_WRITE_CMD | - (value & OCPDR_DATA_MASK)); + r8168dp_1_mdio_access(tp, reg, + OCPDR_WRITE_CMD | (value & OCPDR_DATA_MASK)); } -static int r8168dp_1_mdio_read(void __iomem *ioaddr, int reg_addr) +static int r8168dp_1_mdio_read(struct rtl8169_private *tp, int reg) { - int i; + void __iomem *ioaddr = tp->mmio_addr; - r8168dp_1_mdio_access(ioaddr, reg_addr, OCPDR_READ_CMD); + r8168dp_1_mdio_access(tp, reg, OCPDR_READ_CMD); mdelay(1); RTL_W32(OCPAR, OCPAR_GPHY_READ_CMD); RTL_W32(EPHY_RXER_NUM, 0); - for (i = 0; i < 100; i++) { - mdelay(1); - if (RTL_R32(OCPAR) & OCPAR_FLAG) - break; - } - - return RTL_R32(OCPDR) & OCPDR_DATA_MASK; + return rtl_udelay_loop_wait_high(tp, &rtl_ocpar_cond, 1000, 100) ? + RTL_R32(OCPDR) & OCPDR_DATA_MASK : ~0; } #define R8168DP_1_MDIO_ACCESS_BIT 0x00020000 @@ -1012,22 +1172,25 @@ static void r8168dp_2_mdio_stop(void __iomem *ioaddr) RTL_W32(0xd0, RTL_R32(0xd0) | R8168DP_1_MDIO_ACCESS_BIT); } -static void r8168dp_2_mdio_write(void __iomem *ioaddr, int reg_addr, int value) +static void r8168dp_2_mdio_write(struct rtl8169_private *tp, int reg, int value) { + void __iomem *ioaddr = tp->mmio_addr; + r8168dp_2_mdio_start(ioaddr); - r8169_mdio_write(ioaddr, reg_addr, value); + r8169_mdio_write(tp, reg, value); r8168dp_2_mdio_stop(ioaddr); } -static int r8168dp_2_mdio_read(void __iomem *ioaddr, int reg_addr) +static int r8168dp_2_mdio_read(struct rtl8169_private *tp, int reg) { + void __iomem *ioaddr = tp->mmio_addr; int value; r8168dp_2_mdio_start(ioaddr); - value = r8169_mdio_read(ioaddr, reg_addr); + value = r8169_mdio_read(tp, reg); r8168dp_2_mdio_stop(ioaddr); @@ -1036,12 +1199,12 @@ static int r8168dp_2_mdio_read(void __iomem *ioaddr, int reg_addr) static void rtl_writephy(struct rtl8169_private *tp, int location, u32 val) { - tp->mdio_ops.write(tp->mmio_addr, location, val); + tp->mdio_ops.write(tp, location, val); } static int rtl_readphy(struct rtl8169_private *tp, int location) { - return tp->mdio_ops.read(tp->mmio_addr, location); + return tp->mdio_ops.read(tp, location); } static void rtl_patchphy(struct rtl8169_private *tp, int reg_addr, int value) @@ -1072,79 +1235,64 @@ static int rtl_mdio_read(struct net_device *dev, int phy_id, int location) return rtl_readphy(tp, location); } -static void rtl_ephy_write(void __iomem *ioaddr, int reg_addr, int value) +DECLARE_RTL_COND(rtl_ephyar_cond) { - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(EPHYAR) & EPHYAR_FLAG; +} + +static void rtl_ephy_write(struct rtl8169_private *tp, int reg_addr, int value) +{ + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(EPHYAR, EPHYAR_WRITE_CMD | (value & 
EPHYAR_DATA_MASK) | (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT); - for (i = 0; i < 100; i++) { - if (!(RTL_R32(EPHYAR) & EPHYAR_FLAG)) - break; - udelay(10); - } + rtl_udelay_loop_wait_low(tp, &rtl_ephyar_cond, 10, 100); + + udelay(10); } -static u16 rtl_ephy_read(void __iomem *ioaddr, int reg_addr) +static u16 rtl_ephy_read(struct rtl8169_private *tp, int reg_addr) { - u16 value = 0xffff; - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(EPHYAR, (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT); - for (i = 0; i < 100; i++) { - if (RTL_R32(EPHYAR) & EPHYAR_FLAG) { - value = RTL_R32(EPHYAR) & EPHYAR_DATA_MASK; - break; - } - udelay(10); - } - - return value; + return rtl_udelay_loop_wait_high(tp, &rtl_ephyar_cond, 10, 100) ? + RTL_R32(EPHYAR) & EPHYAR_DATA_MASK : ~0; } -static -void rtl_eri_write(void __iomem *ioaddr, int addr, u32 mask, u32 val, int type) +static void rtl_eri_write(struct rtl8169_private *tp, int addr, u32 mask, + u32 val, int type) { - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; BUG_ON((addr & 3) || (mask == 0)); RTL_W32(ERIDR, val); RTL_W32(ERIAR, ERIAR_WRITE_CMD | type | mask | addr); - for (i = 0; i < 100; i++) { - if (!(RTL_R32(ERIAR) & ERIAR_FLAG)) - break; - udelay(100); - } + rtl_udelay_loop_wait_low(tp, &rtl_eriar_cond, 100, 100); } -static u32 rtl_eri_read(void __iomem *ioaddr, int addr, int type) +static u32 rtl_eri_read(struct rtl8169_private *tp, int addr, int type) { - u32 value = ~0x00; - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(ERIAR, ERIAR_READ_CMD | type | ERIAR_MASK_1111 | addr); - for (i = 0; i < 100; i++) { - if (RTL_R32(ERIAR) & ERIAR_FLAG) { - value = RTL_R32(ERIDR); - break; - } - udelay(100); - } - - return value; + return rtl_udelay_loop_wait_high(tp, &rtl_eriar_cond, 100, 100) ? + RTL_R32(ERIDR) : ~0; } -static void -rtl_w1w0_eri(void __iomem *ioaddr, int addr, u32 mask, u32 p, u32 m, int type) +static void rtl_w1w0_eri(struct rtl8169_private *tp, int addr, u32 mask, u32 p, + u32 m, int type) { u32 val; - val = rtl_eri_read(ioaddr, addr, type); - rtl_eri_write(ioaddr, addr, mask, (val & ~m) | p, type); + val = rtl_eri_read(tp, addr, type); + rtl_eri_write(tp, addr, mask, (val & ~m) | p, type); } struct exgmac_reg { @@ -1153,31 +1301,30 @@ struct exgmac_reg { u32 val; }; -static void rtl_write_exgmac_batch(void __iomem *ioaddr, +static void rtl_write_exgmac_batch(struct rtl8169_private *tp, const struct exgmac_reg *r, int len) { while (len-- > 0) { - rtl_eri_write(ioaddr, r->addr, r->mask, r->val, ERIAR_EXGMAC); + rtl_eri_write(tp, r->addr, r->mask, r->val, ERIAR_EXGMAC); r++; } } -static u8 rtl8168d_efuse_read(void __iomem *ioaddr, int reg_addr) +DECLARE_RTL_COND(rtl_efusear_cond) { - u8 value = 0xff; - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; - RTL_W32(EFUSEAR, (reg_addr & EFUSEAR_REG_MASK) << EFUSEAR_REG_SHIFT); + return RTL_R32(EFUSEAR) & EFUSEAR_FLAG; +} - for (i = 0; i < 300; i++) { - if (RTL_R32(EFUSEAR) & EFUSEAR_FLAG) { - value = RTL_R32(EFUSEAR) & EFUSEAR_DATA_MASK; - break; - } - udelay(100); - } +static u8 rtl8168d_efuse_read(struct rtl8169_private *tp, int reg_addr) +{ + void __iomem *ioaddr = tp->mmio_addr; - return value; + RTL_W32(EFUSEAR, (reg_addr & EFUSEAR_REG_MASK) << EFUSEAR_REG_SHIFT); + + return rtl_udelay_loop_wait_high(tp, &rtl_efusear_cond, 100, 300) ? 
+ RTL_R32(EFUSEAR) & EFUSEAR_DATA_MASK : ~0; } static u16 rtl_get_events(struct rtl8169_private *tp) @@ -1276,48 +1423,48 @@ static void rtl_link_chg_patch(struct rtl8169_private *tp) if (tp->mac_version == RTL_GIGA_MAC_VER_34 || tp->mac_version == RTL_GIGA_MAC_VER_38) { if (RTL_R8(PHYstatus) & _1000bpsF) { - rtl_eri_write(ioaddr, 0x1bc, ERIAR_MASK_1111, - 0x00000011, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_1111, - 0x00000005, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1bc, ERIAR_MASK_1111, 0x00000011, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_1111, 0x00000005, + ERIAR_EXGMAC); } else if (RTL_R8(PHYstatus) & _100bps) { - rtl_eri_write(ioaddr, 0x1bc, ERIAR_MASK_1111, - 0x0000001f, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_1111, - 0x00000005, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1bc, ERIAR_MASK_1111, 0x0000001f, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_1111, 0x00000005, + ERIAR_EXGMAC); } else { - rtl_eri_write(ioaddr, 0x1bc, ERIAR_MASK_1111, - 0x0000001f, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_1111, - 0x0000003f, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1bc, ERIAR_MASK_1111, 0x0000001f, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_1111, 0x0000003f, + ERIAR_EXGMAC); } /* Reset packet filter */ - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); } else if (tp->mac_version == RTL_GIGA_MAC_VER_35 || tp->mac_version == RTL_GIGA_MAC_VER_36) { if (RTL_R8(PHYstatus) & _1000bpsF) { - rtl_eri_write(ioaddr, 0x1bc, ERIAR_MASK_1111, - 0x00000011, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_1111, - 0x00000005, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1bc, ERIAR_MASK_1111, 0x00000011, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_1111, 0x00000005, + ERIAR_EXGMAC); } else { - rtl_eri_write(ioaddr, 0x1bc, ERIAR_MASK_1111, - 0x0000001f, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_1111, - 0x0000003f, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1bc, ERIAR_MASK_1111, 0x0000001f, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_1111, 0x0000003f, + ERIAR_EXGMAC); } } else if (tp->mac_version == RTL_GIGA_MAC_VER_37) { if (RTL_R8(PHYstatus) & _10bps) { - rtl_eri_write(ioaddr, 0x1d0, ERIAR_MASK_0011, - 0x4d02, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0x1dc, ERIAR_MASK_0011, - 0x0060, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1d0, ERIAR_MASK_0011, 0x4d02, + ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1dc, ERIAR_MASK_0011, 0x0060, + ERIAR_EXGMAC); } else { - rtl_eri_write(ioaddr, 0x1d0, ERIAR_MASK_0011, - 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1d0, ERIAR_MASK_0011, 0x0000, + ERIAR_EXGMAC); } } } @@ -1784,6 +1931,13 @@ static int rtl8169_get_sset_count(struct net_device *dev, int sset) } } +DECLARE_RTL_COND(rtl_counters_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(CounterAddrLow) & CounterDump; +} + static void rtl8169_update_counters(struct net_device *dev) { struct rtl8169_private *tp = netdev_priv(dev); @@ -1792,7 +1946,6 @@ static void rtl8169_update_counters(struct net_device *dev) struct rtl8169_counters *counters; dma_addr_t paddr; u32 cmd; - int wait = 1000; /* * Some chips are unable to dump tally counters when the receiver @@ -1810,13 +1963,8 @@ static void rtl8169_update_counters(struct net_device *dev) RTL_W32(CounterAddrLow, cmd); RTL_W32(CounterAddrLow, cmd | CounterDump); - while 
(wait--) { - if ((RTL_R32(CounterAddrLow) & CounterDump) == 0) { - memcpy(&tp->counters, counters, sizeof(*counters)); - break; - } - udelay(10); - } + if (rtl_udelay_loop_wait_low(tp, &rtl_counters_cond, 10, 1000)) + memcpy(&tp->counters, counters, sizeof(*counters)); RTL_W32(CounterAddrLow, 0); RTL_W32(CounterAddrHigh, 0); @@ -1894,6 +2042,10 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp, u32 val; int mac_version; } mac_info[] = { + /* 8168G family. */ + { 0x7cf00000, 0x4c100000, RTL_GIGA_MAC_VER_41 }, + { 0x7cf00000, 0x4c000000, RTL_GIGA_MAC_VER_40 }, + /* 8168F family. */ { 0x7c800000, 0x48800000, RTL_GIGA_MAC_VER_38 }, { 0x7cf00000, 0x48100000, RTL_GIGA_MAC_VER_36 }, @@ -1933,6 +2085,8 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp, { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 }, /* 8101 family. */ + { 0x7cf00000, 0x44900000, RTL_GIGA_MAC_VER_39 }, + { 0x7c800000, 0x44800000, RTL_GIGA_MAC_VER_39 }, { 0x7c800000, 0x44000000, RTL_GIGA_MAC_VER_37 }, { 0x7cf00000, 0x40b00000, RTL_GIGA_MAC_VER_30 }, { 0x7cf00000, 0x40a00000, RTL_GIGA_MAC_VER_30 }, @@ -2186,7 +2340,7 @@ static void rtl_phy_write_fw(struct rtl8169_private *tp, struct rtl_fw *rtl_fw) index -= regno; break; case PHY_READ_EFUSE: - predata = rtl8168d_efuse_read(tp->mmio_addr, regno); + predata = rtl8168d_efuse_read(tp, regno); index++; break; case PHY_CLEAR_READCOUNT: @@ -2626,7 +2780,6 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp) { 0x1f, 0x0000 }, { 0x0d, 0xf880 } }; - void __iomem *ioaddr = tp->mmio_addr; rtl_writephy_batch(tp, phy_reg_init_0, ARRAY_SIZE(phy_reg_init_0)); @@ -2638,7 +2791,7 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp) rtl_w1w0_phy(tp, 0x0b, 0x0010, 0x00ef); rtl_w1w0_phy(tp, 0x0c, 0xa200, 0x5d00); - if (rtl8168d_efuse_read(ioaddr, 0x01) == 0xb1) { + if (rtl8168d_efuse_read(tp, 0x01) == 0xb1) { static const struct phy_reg phy_reg_init[] = { { 0x1f, 0x0002 }, { 0x05, 0x669a }, @@ -2738,11 +2891,10 @@ static void rtl8168d_2_hw_phy_config(struct rtl8169_private *tp) { 0x1f, 0x0000 }, { 0x0d, 0xf880 } }; - void __iomem *ioaddr = tp->mmio_addr; rtl_writephy_batch(tp, phy_reg_init_0, ARRAY_SIZE(phy_reg_init_0)); - if (rtl8168d_efuse_read(ioaddr, 0x01) == 0xb1) { + if (rtl8168d_efuse_read(tp, 0x01) == 0xb1) { static const struct phy_reg phy_reg_init[] = { { 0x1f, 0x0002 }, { 0x05, 0x669a }, @@ -3010,8 +3162,7 @@ static void rtl8168e_2_hw_phy_config(struct rtl8169_private *tp) rtl_writephy(tp, 0x1f, 0x0000); /* EEE setting */ - rtl_w1w0_eri(tp->mmio_addr, 0x1b0, ERIAR_MASK_1111, 0x0000, 0x0003, - ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_1111, 0x0000, 0x0003, ERIAR_EXGMAC); rtl_writephy(tp, 0x1f, 0x0005); rtl_writephy(tp, 0x05, 0x8b85); rtl_w1w0_phy(tp, 0x06, 0x0000, 0x2000); @@ -3115,7 +3266,6 @@ static void rtl8168f_2_hw_phy_config(struct rtl8169_private *tp) static void rtl8411_hw_phy_config(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; static const struct phy_reg phy_reg_init[] = { /* Channel estimation fine tune */ { 0x1f, 0x0003 }, @@ -3189,7 +3339,7 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp) rtl_writephy(tp, 0x1f, 0x0000); /* eee setting */ - rtl_w1w0_eri(ioaddr, 0x1b0, ERIAR_MASK_0001, 0x00, 0x03, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x00, 0x03, ERIAR_EXGMAC); rtl_writephy(tp, 0x1f, 0x0005); rtl_writephy(tp, 0x05, 0x8b85); rtl_w1w0_phy(tp, 0x06, 0x0000, 0x2000); @@ -3211,6 +3361,55 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp) 
rtl_writephy(tp, 0x1f, 0x0000); } +static void rtl8168g_1_hw_phy_config(struct rtl8169_private *tp) +{ + static const u16 mac_ocp_patch[] = { + 0xe008, 0xe01b, 0xe01d, 0xe01f, + 0xe021, 0xe023, 0xe025, 0xe027, + 0x49d2, 0xf10d, 0x766c, 0x49e2, + 0xf00a, 0x1ec0, 0x8ee1, 0xc60a, + + 0x77c0, 0x4870, 0x9fc0, 0x1ea0, + 0xc707, 0x8ee1, 0x9d6c, 0xc603, + 0xbe00, 0xb416, 0x0076, 0xe86c, + 0xc602, 0xbe00, 0x0000, 0xc602, + + 0xbe00, 0x0000, 0xc602, 0xbe00, + 0x0000, 0xc602, 0xbe00, 0x0000, + 0xc602, 0xbe00, 0x0000, 0xc602, + 0xbe00, 0x0000, 0xc602, 0xbe00, + + 0x0000, 0x0000, 0x0000, 0x0000 + }; + u32 i; + + /* Patch code for GPHY reset */ + for (i = 0; i < ARRAY_SIZE(mac_ocp_patch); i++) + r8168_mac_ocp_write(tp, 0xf800 + 2*i, mac_ocp_patch[i]); + r8168_mac_ocp_write(tp, 0xfc26, 0x8000); + r8168_mac_ocp_write(tp, 0xfc28, 0x0075); + + rtl_apply_firmware(tp); + + if (r8168_phy_ocp_read(tp, 0xa460) & 0x0100) + rtl_w1w0_phy_ocp(tp, 0xbcc4, 0x0000, 0x8000); + else + rtl_w1w0_phy_ocp(tp, 0xbcc4, 0x8000, 0x0000); + + if (r8168_phy_ocp_read(tp, 0xa466) & 0x0100) + rtl_w1w0_phy_ocp(tp, 0xc41a, 0x0002, 0x0000); + else + rtl_w1w0_phy_ocp(tp, 0xbcc4, 0x0000, 0x0002); + + rtl_w1w0_phy_ocp(tp, 0xa442, 0x000c, 0x0000); + rtl_w1w0_phy_ocp(tp, 0xa4b2, 0x0004, 0x0000); + + r8168_phy_ocp_write(tp, 0xa436, 0x8012); + rtl_w1w0_phy_ocp(tp, 0xa438, 0x8000, 0x0000); + + rtl_w1w0_phy_ocp(tp, 0xc422, 0x4000, 0x2000); +} + static void rtl8102e_hw_phy_config(struct rtl8169_private *tp) { static const struct phy_reg phy_reg_init[] = { @@ -3256,8 +3455,6 @@ static void rtl8105e_hw_phy_config(struct rtl8169_private *tp) static void rtl8402_hw_phy_config(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; - /* Disable ALDPS before setting firmware */ rtl_writephy(tp, 0x1f, 0x0000); rtl_writephy(tp, 0x18, 0x0310); @@ -3266,13 +3463,35 @@ static void rtl8402_hw_phy_config(struct rtl8169_private *tp) rtl_apply_firmware(tp); /* EEE setting */ - rtl_eri_write(ioaddr, 0x1b0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); rtl_writephy(tp, 0x1f, 0x0004); rtl_writephy(tp, 0x10, 0x401f); rtl_writephy(tp, 0x19, 0x7030); rtl_writephy(tp, 0x1f, 0x0000); } +static void rtl8106e_hw_phy_config(struct rtl8169_private *tp) +{ + static const struct phy_reg phy_reg_init[] = { + { 0x1f, 0x0004 }, + { 0x10, 0xc07f }, + { 0x19, 0x7030 }, + { 0x1f, 0x0000 } + }; + + /* Disable ALDPS before ram code */ + rtl_writephy(tp, 0x1f, 0x0000); + rtl_writephy(tp, 0x18, 0x0310); + msleep(100); + + rtl_apply_firmware(tp); + + rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_writephy_batch(tp, phy_reg_init, ARRAY_SIZE(phy_reg_init)); + + rtl_eri_write(tp, 0x1d0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); +} + static void rtl_hw_phy_config(struct net_device *dev) { struct rtl8169_private *tp = netdev_priv(dev); @@ -3369,6 +3588,15 @@ static void rtl_hw_phy_config(struct net_device *dev) rtl8411_hw_phy_config(tp); break; + case RTL_GIGA_MAC_VER_39: + rtl8106e_hw_phy_config(tp); + break; + + case RTL_GIGA_MAC_VER_40: + rtl8168g_1_hw_phy_config(tp); + break; + + case RTL_GIGA_MAC_VER_41: default: break; } @@ -3426,18 +3654,16 @@ static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev, free_netdev(dev); } +DECLARE_RTL_COND(rtl_phy_reset_cond) +{ + return tp->phy_reset_pending(tp); +} + static void rtl8169_phy_reset(struct net_device *dev, struct rtl8169_private *tp) { - unsigned int i; - tp->phy_reset_enable(tp); - for (i = 0; i < 100; i++) { - if 
(!tp->phy_reset_pending(tp)) - return; - msleep(1); - } - netif_err(tp, link, dev, "PHY reset failed\n"); + rtl_msleep_loop_wait_low(tp, &rtl_phy_reset_cond, 1, 100); } static bool rtl_tbi_enabled(struct rtl8169_private *tp) @@ -3512,7 +3738,7 @@ static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr) low >> 16 }, }; - rtl_write_exgmac_batch(ioaddr, e, ARRAY_SIZE(e)); + rtl_write_exgmac_batch(tp, e, ARRAY_SIZE(e)); } RTL_W8(Cfg9346, Cfg9346_Lock); @@ -3589,6 +3815,11 @@ static void __devinit rtl_init_mdio_ops(struct rtl8169_private *tp) ops->write = r8168dp_2_mdio_write; ops->read = r8168dp_2_mdio_read; break; + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: + ops->write = r8168g_mdio_write; + ops->read = r8168g_mdio_read; + break; default: ops->write = r8169_mdio_write; ops->read = r8169_mdio_read; @@ -3608,6 +3839,9 @@ static void rtl_wol_suspend_quirk(struct rtl8169_private *tp) case RTL_GIGA_MAC_VER_34: case RTL_GIGA_MAC_VER_37: case RTL_GIGA_MAC_VER_38: + case RTL_GIGA_MAC_VER_39: + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: RTL_W32(RxConfig, RTL_R32(RxConfig) | AcceptBroadcast | AcceptMulticast | AcceptMyPhys); break; @@ -3761,7 +3995,7 @@ static void r8168_pll_power_down(struct rtl8169_private *tp) if (tp->mac_version == RTL_GIGA_MAC_VER_32 || tp->mac_version == RTL_GIGA_MAC_VER_33) - rtl_ephy_write(ioaddr, 0x19, 0xff64); + rtl_ephy_write(tp, 0x19, 0xff64); if (rtl_wol_pll_power_down(tp)) return; @@ -3830,6 +4064,7 @@ static void __devinit rtl_init_pll_power_ops(struct rtl8169_private *tp) case RTL_GIGA_MAC_VER_29: case RTL_GIGA_MAC_VER_30: case RTL_GIGA_MAC_VER_37: + case RTL_GIGA_MAC_VER_39: ops->down = r810x_pll_power_down; ops->up = r810x_pll_power_up; break; @@ -3855,6 +4090,8 @@ static void __devinit rtl_init_pll_power_ops(struct rtl8169_private *tp) case RTL_GIGA_MAC_VER_35: case RTL_GIGA_MAC_VER_36: case RTL_GIGA_MAC_VER_38: + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: ops->down = r8168_pll_power_down; ops->up = r8168_pll_power_up; break; @@ -4051,6 +4288,8 @@ static void __devinit rtl_init_jumbo_ops(struct rtl8169_private *tp) * No action needed for jumbo frames with 8169. * No jumbo for 810x at all. */ + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: default: ops->disable = NULL; ops->enable = NULL; @@ -4058,20 +4297,20 @@ static void __devinit rtl_init_jumbo_ops(struct rtl8169_private *tp) } } +DECLARE_RTL_COND(rtl_chipcmd_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R8(ChipCmd) & CmdReset; +} + static void rtl_hw_reset(struct rtl8169_private *tp) { void __iomem *ioaddr = tp->mmio_addr; - int i; - /* Soft reset the chip. */ RTL_W8(ChipCmd, CmdReset); - /* Check that the chip has finished the reset. 
*/ - for (i = 0; i < 100; i++) { - if ((RTL_R8(ChipCmd) & CmdReset) == 0) - break; - udelay(100); - } + rtl_udelay_loop_wait_low(tp, &rtl_chipcmd_cond, 100, 100); } static void rtl_request_uncached_firmware(struct rtl8169_private *tp) @@ -4125,6 +4364,20 @@ static void rtl_rx_close(struct rtl8169_private *tp) RTL_W32(RxConfig, RTL_R32(RxConfig) & ~RX_CONFIG_ACCEPT_MASK); } +DECLARE_RTL_COND(rtl_npq_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R8(TxPoll) & NPQ; +} + +DECLARE_RTL_COND(rtl_txcfg_empty_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(TxConfig) & TXCFG_EMPTY; +} + static void rtl8169_hw_reset(struct rtl8169_private *tp) { void __iomem *ioaddr = tp->mmio_addr; @@ -4137,16 +4390,16 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp) if (tp->mac_version == RTL_GIGA_MAC_VER_27 || tp->mac_version == RTL_GIGA_MAC_VER_28 || tp->mac_version == RTL_GIGA_MAC_VER_31) { - while (RTL_R8(TxPoll) & NPQ) - udelay(20); + rtl_udelay_loop_wait_low(tp, &rtl_npq_cond, 20, 42*42); } else if (tp->mac_version == RTL_GIGA_MAC_VER_34 || tp->mac_version == RTL_GIGA_MAC_VER_35 || tp->mac_version == RTL_GIGA_MAC_VER_36 || tp->mac_version == RTL_GIGA_MAC_VER_37 || + tp->mac_version == RTL_GIGA_MAC_VER_40 || + tp->mac_version == RTL_GIGA_MAC_VER_41 || tp->mac_version == RTL_GIGA_MAC_VER_38) { RTL_W8(ChipCmd, RTL_R8(ChipCmd) | StopReq); - while (!(RTL_R32(TxConfig) & TXCFG_EMPTY)) - udelay(100); + rtl_udelay_loop_wait_high(tp, &rtl_txcfg_empty_cond, 100, 666); } else { RTL_W8(ChipCmd, RTL_R8(ChipCmd) | StopReq); udelay(100); @@ -4352,15 +4605,12 @@ static void rtl_hw_start_8169(struct net_device *dev) static void rtl_csi_write(struct rtl8169_private *tp, int addr, int value) { if (tp->csi_ops.write) - tp->csi_ops.write(tp->mmio_addr, addr, value); + tp->csi_ops.write(tp, addr, value); } static u32 rtl_csi_read(struct rtl8169_private *tp, int addr) { - if (tp->csi_ops.read) - return tp->csi_ops.read(tp->mmio_addr, addr); - else - return ~0; + return tp->csi_ops.read ? tp->csi_ops.read(tp, addr) : ~0; } static void rtl_csi_access_enable(struct rtl8169_private *tp, u32 bits) @@ -4381,73 +4631,56 @@ static void rtl_csi_access_enable_2(struct rtl8169_private *tp) rtl_csi_access_enable(tp, 0x27000000); } -static void r8169_csi_write(void __iomem *ioaddr, int addr, int value) +DECLARE_RTL_COND(rtl_csiar_cond) { - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R32(CSIAR) & CSIAR_FLAG; +} + +static void r8169_csi_write(struct rtl8169_private *tp, int addr, int value) +{ + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(CSIDR, value); RTL_W32(CSIAR, CSIAR_WRITE_CMD | (addr & CSIAR_ADDR_MASK) | CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT); - for (i = 0; i < 100; i++) { - if (!(RTL_R32(CSIAR) & CSIAR_FLAG)) - break; - udelay(10); - } + rtl_udelay_loop_wait_low(tp, &rtl_csiar_cond, 10, 100); } -static u32 r8169_csi_read(void __iomem *ioaddr, int addr) +static u32 r8169_csi_read(struct rtl8169_private *tp, int addr) { - u32 value = ~0x00; - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(CSIAR, (addr & CSIAR_ADDR_MASK) | CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT); - for (i = 0; i < 100; i++) { - if (RTL_R32(CSIAR) & CSIAR_FLAG) { - value = RTL_R32(CSIDR); - break; - } - udelay(10); - } - - return value; + return rtl_udelay_loop_wait_high(tp, &rtl_csiar_cond, 10, 100) ? 
+ RTL_R32(CSIDR) : ~0; } -static void r8402_csi_write(void __iomem *ioaddr, int addr, int value) +static void r8402_csi_write(struct rtl8169_private *tp, int addr, int value) { - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(CSIDR, value); RTL_W32(CSIAR, CSIAR_WRITE_CMD | (addr & CSIAR_ADDR_MASK) | CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT | CSIAR_FUNC_NIC); - for (i = 0; i < 100; i++) { - if (!(RTL_R32(CSIAR) & CSIAR_FLAG)) - break; - udelay(10); - } + rtl_udelay_loop_wait_low(tp, &rtl_csiar_cond, 10, 100); } -static u32 r8402_csi_read(void __iomem *ioaddr, int addr) +static u32 r8402_csi_read(struct rtl8169_private *tp, int addr) { - u32 value = ~0x00; - unsigned int i; + void __iomem *ioaddr = tp->mmio_addr; RTL_W32(CSIAR, (addr & CSIAR_ADDR_MASK) | CSIAR_FUNC_NIC | CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT); - for (i = 0; i < 100; i++) { - if (RTL_R32(CSIAR) & CSIAR_FLAG) { - value = RTL_R32(CSIDR); - break; - } - udelay(10); - } - - return value; + return rtl_udelay_loop_wait_high(tp, &rtl_csiar_cond, 10, 100) ? + RTL_R32(CSIDR) : ~0; } static void __devinit rtl_init_csi_ops(struct rtl8169_private *tp) @@ -4492,13 +4725,14 @@ struct ephy_info { u16 bits; }; -static void rtl_ephy_init(void __iomem *ioaddr, const struct ephy_info *e, int len) +static void rtl_ephy_init(struct rtl8169_private *tp, const struct ephy_info *e, + int len) { u16 w; while (len-- > 0) { - w = (rtl_ephy_read(ioaddr, e->offset) & ~e->mask) | e->bits; - rtl_ephy_write(ioaddr, e->offset, w); + w = (rtl_ephy_read(tp, e->offset) & ~e->mask) | e->bits; + rtl_ephy_write(tp, e->offset, w); e++; } } @@ -4582,7 +4816,6 @@ static void __rtl_hw_start_8168cp(struct rtl8169_private *tp) static void rtl_hw_start_8168cp_1(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; static const struct ephy_info e_info_8168cp[] = { { 0x01, 0, 0x0001 }, { 0x02, 0x0800, 0x1000 }, @@ -4593,7 +4826,7 @@ static void rtl_hw_start_8168cp_1(struct rtl8169_private *tp) rtl_csi_access_enable_2(tp); - rtl_ephy_init(ioaddr, e_info_8168cp, ARRAY_SIZE(e_info_8168cp)); + rtl_ephy_init(tp, e_info_8168cp, ARRAY_SIZE(e_info_8168cp)); __rtl_hw_start_8168cp(tp); } @@ -4644,14 +4877,13 @@ static void rtl_hw_start_8168c_1(struct rtl8169_private *tp) RTL_W8(DBG_REG, 0x06 | FIX_NAK_1 | FIX_NAK_2); - rtl_ephy_init(ioaddr, e_info_8168c_1, ARRAY_SIZE(e_info_8168c_1)); + rtl_ephy_init(tp, e_info_8168c_1, ARRAY_SIZE(e_info_8168c_1)); __rtl_hw_start_8168cp(tp); } static void rtl_hw_start_8168c_2(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; static const struct ephy_info e_info_8168c_2[] = { { 0x01, 0, 0x0001 }, { 0x03, 0x0400, 0x0220 } @@ -4659,7 +4891,7 @@ static void rtl_hw_start_8168c_2(struct rtl8169_private *tp) rtl_csi_access_enable_2(tp); - rtl_ephy_init(ioaddr, e_info_8168c_2, ARRAY_SIZE(e_info_8168c_2)); + rtl_ephy_init(tp, e_info_8168c_2, ARRAY_SIZE(e_info_8168c_2)); __rtl_hw_start_8168cp(tp); } @@ -4727,8 +4959,8 @@ static void rtl_hw_start_8168d_4(struct rtl8169_private *tp) const struct ephy_info *e = e_info_8168d_4 + i; u16 w; - w = rtl_ephy_read(ioaddr, e->offset); - rtl_ephy_write(ioaddr, 0x03, (w & e->mask) | e->bits); + w = rtl_ephy_read(tp, e->offset); + rtl_ephy_write(tp, 0x03, (w & e->mask) | e->bits); } rtl_enable_clock_request(pdev); @@ -4756,7 +4988,7 @@ static void rtl_hw_start_8168e_1(struct rtl8169_private *tp) rtl_csi_access_enable_2(tp); - rtl_ephy_init(ioaddr, e_info_8168e_1, ARRAY_SIZE(e_info_8168e_1)); + rtl_ephy_init(tp, e_info_8168e_1, ARRAY_SIZE(e_info_8168e_1)); 
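/*
 * [Editor's note -- illustrative sketch, not part of the patch.]  The
 *  rtl_ephy_init() conversion above keeps its table-driven
 *  read-modify-write shape: for each { offset, mask, bits } entry it
 *  clears "mask" and sets "bits" in the register at "offset".  A
 *  minimal standalone version of the same idea, with a plain array as
 *  a fake register file (struct reg_fixup, regs[] and apply_fixups()
 *  are invented names, not driver API):
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct reg_fixup {
	unsigned int offset;
	uint16_t mask;			/* bits to clear */
	uint16_t bits;			/* bits to set   */
};

static void apply_fixups(uint16_t *regs, const struct reg_fixup *f, size_t len)
{
	while (len-- > 0) {
		regs[f->offset] = (uint16_t)((regs[f->offset] & ~f->mask) | f->bits);
		f++;
	}
}

int main(void)
{
	uint16_t regs[8] = { 0 };
	static const struct reg_fixup fixup[] = {
		{ 0x01, 0x0000, 0x0001 },	/* set bit 0 */
		{ 0x03, 0x0400, 0x0220 },	/* clear bit 10, set bits 5 and 9 */
	};

	apply_fixups(regs, fixup, sizeof(fixup) / sizeof(fixup[0]));
	printf("reg[1]=0x%04x reg[3]=0x%04x\n",
	       (unsigned int)regs[1], (unsigned int)regs[3]);
	return 0;
}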
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT); @@ -4782,19 +5014,18 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp) rtl_csi_access_enable_1(tp); - rtl_ephy_init(ioaddr, e_info_8168e_2, ARRAY_SIZE(e_info_8168e_2)); + rtl_ephy_init(tp, e_info_8168e_2, ARRAY_SIZE(e_info_8168e_2)); rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT); - rtl_eri_write(ioaddr, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xc8, ERIAR_MASK_1111, 0x00100002, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xd0, ERIAR_MASK_1111, 0x07ff0060, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, - ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc8, ERIAR_MASK_1111, 0x00100002, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xd0, ERIAR_MASK_1111, 0x07ff0060, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC); RTL_W8(MaxTxPacketSize, EarlySize); @@ -4820,16 +5051,16 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp) rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT); - rtl_eri_write(ioaddr, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xc8, ERIAR_MASK_1111, 0x00100002, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0x1d0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xd0, ERIAR_MASK_1111, 0x00000060, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc8, ERIAR_MASK_1111, 0x00100002, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x1d0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xd0, ERIAR_MASK_1111, 0x00000060, ERIAR_EXGMAC); RTL_W8(MaxTxPacketSize, EarlySize); @@ -4854,10 +5085,9 @@ static void rtl_hw_start_8168f_1(struct rtl8169_private *tp) rtl_hw_start_8168f(tp); - rtl_ephy_init(ioaddr, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); + rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); - rtl_w1w0_eri(ioaddr, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, - ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC); 
/* Adjust EEE LED frequency */ RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07); @@ -4865,7 +5095,6 @@ static void rtl_hw_start_8168f_1(struct rtl8169_private *tp) static void rtl_hw_start_8411(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; static const struct ephy_info e_info_8168f_1[] = { { 0x06, 0x00c0, 0x0020 }, { 0x0f, 0xffff, 0x5200 }, @@ -4875,10 +5104,39 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp) rtl_hw_start_8168f(tp); - rtl_ephy_init(ioaddr, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); + rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); - rtl_w1w0_eri(ioaddr, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0x0000, - ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0x0000, ERIAR_EXGMAC); +} + +static void rtl_hw_start_8168g_1(struct rtl8169_private *tp) +{ + void __iomem *ioaddr = tp->mmio_addr; + struct pci_dev *pdev = tp->pci_dev; + + rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x080002, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC); + + rtl_csi_access_enable_1(tp); + + rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT); + + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); + + RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb); + RTL_W32(MISC, RTL_R32(MISC) & ~RXDV_GATED_EN); + RTL_W8(MaxTxPacketSize, EarlySize); + + rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + + /* Adjust EEE LED frequency */ + RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07); + + rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x02, ERIAR_EXGMAC); } static void rtl_hw_start_8168(struct net_device *dev) @@ -4982,6 +5240,11 @@ static void rtl_hw_start_8168(struct net_device *dev) rtl_hw_start_8411(tp); break; + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: + rtl_hw_start_8168g_1(tp); + break; + default: printk(KERN_ERR PFX "%s: unknown chipset (mac_version = %d).\n", dev->name, tp->mac_version); @@ -5036,7 +5299,7 @@ static void rtl_hw_start_8102e_1(struct rtl8169_private *tp) if ((cfg1 & LEDS0) && (cfg1 & LEDS1)) RTL_W8(Config1, cfg1 & ~LEDS0); - rtl_ephy_init(ioaddr, e_info_8102e_1, ARRAY_SIZE(e_info_8102e_1)); + rtl_ephy_init(tp, e_info_8102e_1, ARRAY_SIZE(e_info_8102e_1)); } static void rtl_hw_start_8102e_2(struct rtl8169_private *tp) @@ -5056,7 +5319,7 @@ static void rtl_hw_start_8102e_3(struct rtl8169_private *tp) { rtl_hw_start_8102e_2(tp); - rtl_ephy_write(tp->mmio_addr, 0x03, 0xc2f9); + rtl_ephy_write(tp, 0x03, 0xc2f9); } static void rtl_hw_start_8105e_1(struct rtl8169_private *tp) @@ -5082,15 +5345,13 @@ static void rtl_hw_start_8105e_1(struct rtl8169_private *tp) RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); RTL_W8(DLLPR, RTL_R8(DLLPR) | PFM_EN); - rtl_ephy_init(ioaddr, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); + rtl_ephy_init(tp, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); } static void rtl_hw_start_8105e_2(struct rtl8169_private *tp) { - void __iomem *ioaddr = tp->mmio_addr; - rtl_hw_start_8105e_1(tp); - rtl_ephy_write(ioaddr, 0x1e, rtl_ephy_read(ioaddr, 0x1e) | 0x8000); + rtl_ephy_write(tp, 0x1e, rtl_ephy_read(tp, 0x1e) | 0x8000); } static void rtl_hw_start_8402(struct rtl8169_private *tp) @@ -5109,18 +5370,29 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp) RTL_W32(TxConfig, RTL_R32(TxConfig) | 
TXCFG_AUTO_FIFO); RTL_W8(MCU, RTL_R8(MCU) & ~NOW_IS_OOB); - rtl_ephy_init(ioaddr, e_info_8402, ARRAY_SIZE(e_info_8402)); + rtl_ephy_init(tp, e_info_8402, ARRAY_SIZE(e_info_8402)); rtl_tx_performance_tweak(tp->pci_dev, 0x5 << MAX_READ_REQUEST_SHIFT); - rtl_eri_write(ioaddr, 0xc8, ERIAR_MASK_1111, 0x00000002, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xe8, ERIAR_MASK_1111, 0x00000006, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_eri_write(ioaddr, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); - rtl_w1w0_eri(ioaddr, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, - ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc8, ERIAR_MASK_1111, 0x00000002, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00000006, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); + rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC); +} + +static void rtl_hw_start_8106(struct rtl8169_private *tp) +{ + void __iomem *ioaddr = tp->mmio_addr; + + /* Force LAN exit from ASPM if Rx/Tx are not idle */ + RTL_W32(FuncEvent, RTL_R32(FuncEvent) | 0x002800); + + RTL_W32(MISC, (RTL_R32(MISC) | DISABLE_LAN_EN) & ~EARLY_TALLY_EN); + RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); + RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN); } static void rtl_hw_start_8101(struct net_device *dev) @@ -5167,6 +5439,10 @@ static void rtl_hw_start_8101(struct net_device *dev) case RTL_GIGA_MAC_VER_37: rtl_hw_start_8402(tp); break; + + case RTL_GIGA_MAC_VER_39: + rtl_hw_start_8106(tp); + break; } RTL_W8(Cfg9346, Cfg9346_Lock); @@ -5380,7 +5656,6 @@ static void rtl8169_tx_clear(struct rtl8169_private *tp) { rtl8169_tx_clear_range(tp, tp->dirty_tx, NUM_TX_DESC); tp->cur_tx = tp->dirty_tx = 0; - netdev_reset_queue(tp->dev); } static void rtl_reset_work(struct rtl8169_private *tp) @@ -5535,8 +5810,6 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb, txd->opts2 = cpu_to_le32(opts[1]); - netdev_sent_queue(dev, skb->len); - skb_tx_timestamp(skb); wmb(); @@ -5633,16 +5906,9 @@ static void rtl8169_pcierr_interrupt(struct net_device *dev) rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING); } -struct rtl_txc { - int packets; - int bytes; -}; - static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) { - struct rtl8169_stats *tx_stats = &tp->tx_stats; unsigned int dirty_tx, tx_left; - struct rtl_txc txc = { 0, 0 }; dirty_tx = tp->dirty_tx; smp_rmb(); @@ -5661,24 +5927,17 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb, tp->TxDescArray + entry); if (status & LastFrag) { - struct sk_buff *skb = tx_skb->skb; - - txc.packets++; - txc.bytes += skb->len; - dev_kfree_skb(skb); + u64_stats_update_begin(&tp->tx_stats.syncp); + tp->tx_stats.packets++; + tp->tx_stats.bytes += tx_skb->skb->len; + u64_stats_update_end(&tp->tx_stats.syncp); + dev_kfree_skb(tx_skb->skb); tx_skb->skb = NULL; } dirty_tx++; tx_left--; } - u64_stats_update_begin(&tx_stats->syncp); - tx_stats->packets += txc.packets; - tx_stats->bytes += txc.bytes; - u64_stats_update_end(&tx_stats->syncp); - - netdev_completed_queue(dev, txc.packets, txc.bytes); - if (tp->dirty_tx != dirty_tx) { 
tp->dirty_tx = dirty_tx; /* Sync with rtl8169_start_xmit: @@ -6435,6 +6694,67 @@ static unsigned rtl_try_msi(struct rtl8169_private *tp, return msi; } +DECLARE_RTL_COND(rtl_link_list_ready_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return RTL_R8(MCU) & LINK_LIST_RDY; +} + +DECLARE_RTL_COND(rtl_rxtx_empty_cond) +{ + void __iomem *ioaddr = tp->mmio_addr; + + return (RTL_R8(MCU) & RXTX_EMPTY) == RXTX_EMPTY; +} + +static void __devinit rtl_hw_init_8168g(struct rtl8169_private *tp) +{ + void __iomem *ioaddr = tp->mmio_addr; + u32 data; + + tp->ocp_base = OCP_STD_PHY_BASE; + + RTL_W32(MISC, RTL_R32(MISC) | RXDV_GATED_EN); + + if (!rtl_udelay_loop_wait_high(tp, &rtl_txcfg_empty_cond, 100, 42)) + return; + + if (!rtl_udelay_loop_wait_high(tp, &rtl_rxtx_empty_cond, 100, 42)) + return; + + RTL_W8(ChipCmd, RTL_R8(ChipCmd) & ~(CmdTxEnb | CmdRxEnb)); + msleep(1); + RTL_W8(MCU, RTL_R8(MCU) & ~NOW_IS_OOB); + + data = r8168_mac_ocp_read(tp, 0xe8de); + data &= ~(1 << 14); + r8168_mac_ocp_write(tp, 0xe8de, data); + + if (!rtl_udelay_loop_wait_high(tp, &rtl_link_list_ready_cond, 100, 42)) + return; + + data = r8168_mac_ocp_read(tp, 0xe8de); + data |= (1 << 15); + r8168_mac_ocp_write(tp, 0xe8de, data); + + if (!rtl_udelay_loop_wait_high(tp, &rtl_link_list_ready_cond, 100, 42)) + return; +} + +static void __devinit rtl_hw_initialize(struct rtl8169_private *tp) +{ + switch (tp->mac_version) { + case RTL_GIGA_MAC_VER_40: + case RTL_GIGA_MAC_VER_41: + rtl_hw_init_8168g(tp); + break; + + default: + break; + } +} + static int __devinit rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) { @@ -6544,6 +6864,8 @@ rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) rtl_irq_disable(tp); + rtl_hw_initialize(tp); + rtl_hw_reset(tp); rtl_ack_events(tp, 0xffff); diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c index 79bf09b41971..af0b867a6cf6 100644 --- a/drivers/net/ethernet/renesas/sh_eth.c +++ b/drivers/net/ethernet/renesas/sh_eth.c @@ -49,6 +49,34 @@ NETIF_MSG_RX_ERR| \ NETIF_MSG_TX_ERR) +#if defined(CONFIG_CPU_SUBTYPE_SH7734) || \ + defined(CONFIG_CPU_SUBTYPE_SH7763) || \ + defined(CONFIG_ARCH_R8A7740) +static void sh_eth_select_mii(struct net_device *ndev) +{ + u32 value = 0x0; + struct sh_eth_private *mdp = netdev_priv(ndev); + + switch (mdp->phy_interface) { + case PHY_INTERFACE_MODE_GMII: + value = 0x2; + break; + case PHY_INTERFACE_MODE_MII: + value = 0x1; + break; + case PHY_INTERFACE_MODE_RMII: + value = 0x0; + break; + default: + pr_warn("PHY interface mode was not setup. 
Set to MII.\n"); + value = 0x1; + break; + } + + sh_eth_write(ndev, value, RMII_MII); +} +#endif + /* There is CPU dependent code */ #if defined(CONFIG_CPU_SUBTYPE_SH7724) #define SH_ETH_RESET_DEFAULT 1 @@ -102,6 +130,8 @@ static struct sh_eth_cpu_data sh_eth_my_cpu_data = { #elif defined(CONFIG_CPU_SUBTYPE_SH7757) #define SH_ETH_HAS_BOTH_MODULES 1 #define SH_ETH_HAS_TSU 1 +static int sh_eth_check_reset(struct net_device *ndev); + static void sh_eth_set_duplex(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); @@ -176,23 +206,19 @@ static void sh_eth_chip_reset_giga(struct net_device *ndev) } static int sh_eth_is_gether(struct sh_eth_private *mdp); -static void sh_eth_reset(struct net_device *ndev) +static int sh_eth_reset(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); - int cnt = 100; + int ret = 0; if (sh_eth_is_gether(mdp)) { sh_eth_write(ndev, 0x03, EDSR); sh_eth_write(ndev, sh_eth_read(ndev, EDMR) | EDMR_SRST_GETHER, EDMR); - while (cnt > 0) { - if (!(sh_eth_read(ndev, EDMR) & 0x3)) - break; - mdelay(1); - cnt--; - } - if (cnt < 0) - printk(KERN_ERR "Device reset fail\n"); + + ret = sh_eth_check_reset(ndev); + if (ret) + goto out; /* Table Init */ sh_eth_write(ndev, 0x0, TDLAR); @@ -210,6 +236,9 @@ static void sh_eth_reset(struct net_device *ndev) sh_eth_write(ndev, sh_eth_read(ndev, EDMR) & ~EDMR_SRST_ETHER, EDMR); } + +out: + return ret; } static void sh_eth_set_duplex_giga(struct net_device *ndev) @@ -282,7 +311,9 @@ static struct sh_eth_cpu_data *sh_eth_get_cpu_data(struct sh_eth_private *mdp) #elif defined(CONFIG_CPU_SUBTYPE_SH7734) || defined(CONFIG_CPU_SUBTYPE_SH7763) #define SH_ETH_HAS_TSU 1 +static int sh_eth_check_reset(struct net_device *ndev); static void sh_eth_reset_hw_crc(struct net_device *ndev); + static void sh_eth_chip_reset(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); @@ -292,35 +323,6 @@ static void sh_eth_chip_reset(struct net_device *ndev) mdelay(1); } -static void sh_eth_reset(struct net_device *ndev) -{ - int cnt = 100; - - sh_eth_write(ndev, EDSR_ENALL, EDSR); - sh_eth_write(ndev, sh_eth_read(ndev, EDMR) | EDMR_SRST_GETHER, EDMR); - while (cnt > 0) { - if (!(sh_eth_read(ndev, EDMR) & 0x3)) - break; - mdelay(1); - cnt--; - } - if (cnt == 0) - printk(KERN_ERR "Device reset fail\n"); - - /* Table Init */ - sh_eth_write(ndev, 0x0, TDLAR); - sh_eth_write(ndev, 0x0, TDFAR); - sh_eth_write(ndev, 0x0, TDFXR); - sh_eth_write(ndev, 0x0, TDFFR); - sh_eth_write(ndev, 0x0, RDLAR); - sh_eth_write(ndev, 0x0, RDFAR); - sh_eth_write(ndev, 0x0, RDFXR); - sh_eth_write(ndev, 0x0, RDFFR); - - /* Reset HW CRC register */ - sh_eth_reset_hw_crc(ndev); -} - static void sh_eth_set_duplex(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); @@ -377,9 +379,41 @@ static struct sh_eth_cpu_data sh_eth_my_cpu_data = { .tsu = 1, #if defined(CONFIG_CPU_SUBTYPE_SH7734) .hw_crc = 1, + .select_mii = 1, #endif }; +static int sh_eth_reset(struct net_device *ndev) +{ + int ret = 0; + + sh_eth_write(ndev, EDSR_ENALL, EDSR); + sh_eth_write(ndev, sh_eth_read(ndev, EDMR) | EDMR_SRST_GETHER, EDMR); + + ret = sh_eth_check_reset(ndev); + if (ret) + goto out; + + /* Table Init */ + sh_eth_write(ndev, 0x0, TDLAR); + sh_eth_write(ndev, 0x0, TDFAR); + sh_eth_write(ndev, 0x0, TDFXR); + sh_eth_write(ndev, 0x0, TDFFR); + sh_eth_write(ndev, 0x0, RDLAR); + sh_eth_write(ndev, 0x0, RDFAR); + sh_eth_write(ndev, 0x0, RDFXR); + sh_eth_write(ndev, 0x0, RDFFR); + + /* Reset HW CRC register */ + 
sh_eth_reset_hw_crc(ndev); + + /* Select MII mode */ + if (sh_eth_my_cpu_data.select_mii) + sh_eth_select_mii(ndev); +out: + return ret; +} + static void sh_eth_reset_hw_crc(struct net_device *ndev) { if (sh_eth_my_cpu_data.hw_crc) @@ -388,44 +422,29 @@ static void sh_eth_reset_hw_crc(struct net_device *ndev) #elif defined(CONFIG_ARCH_R8A7740) #define SH_ETH_HAS_TSU 1 +static int sh_eth_check_reset(struct net_device *ndev); + static void sh_eth_chip_reset(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); - unsigned long mii; /* reset device */ sh_eth_tsu_write(mdp, ARSTR_ARSTR, ARSTR); mdelay(1); - switch (mdp->phy_interface) { - case PHY_INTERFACE_MODE_GMII: - mii = 2; - break; - case PHY_INTERFACE_MODE_MII: - mii = 1; - break; - case PHY_INTERFACE_MODE_RMII: - default: - mii = 0; - break; - } - sh_eth_write(ndev, mii, RMII_MII); + sh_eth_select_mii(ndev); } -static void sh_eth_reset(struct net_device *ndev) +static int sh_eth_reset(struct net_device *ndev) { - int cnt = 100; + int ret = 0; sh_eth_write(ndev, EDSR_ENALL, EDSR); sh_eth_write(ndev, sh_eth_read(ndev, EDMR) | EDMR_SRST_GETHER, EDMR); - while (cnt > 0) { - if (!(sh_eth_read(ndev, EDMR) & 0x3)) - break; - mdelay(1); - cnt--; - } - if (cnt == 0) - printk(KERN_ERR "Device reset fail\n"); + + ret = sh_eth_check_reset(ndev); + if (ret) + goto out; /* Table Init */ sh_eth_write(ndev, 0x0, TDLAR); @@ -436,6 +455,9 @@ static void sh_eth_reset(struct net_device *ndev) sh_eth_write(ndev, 0x0, RDFAR); sh_eth_write(ndev, 0x0, RDFXR); sh_eth_write(ndev, 0x0, RDFFR); + +out: + return ret; } static void sh_eth_set_duplex(struct net_device *ndev) @@ -492,6 +514,7 @@ static struct sh_eth_cpu_data sh_eth_my_cpu_data = { .no_trimd = 1, .no_ade = 1, .tsu = 1, + .select_mii = 1, }; #elif defined(CONFIG_CPU_SUBTYPE_SH7619) @@ -543,11 +566,31 @@ static void sh_eth_set_default_cpu_data(struct sh_eth_cpu_data *cd) #if defined(SH_ETH_RESET_DEFAULT) /* Chip Reset */ -static void sh_eth_reset(struct net_device *ndev) +static int sh_eth_reset(struct net_device *ndev) { sh_eth_write(ndev, sh_eth_read(ndev, EDMR) | EDMR_SRST_ETHER, EDMR); mdelay(3); sh_eth_write(ndev, sh_eth_read(ndev, EDMR) & ~EDMR_SRST_ETHER, EDMR); + + return 0; +} +#else +static int sh_eth_check_reset(struct net_device *ndev) +{ + int ret = 0; + int cnt = 100; + + while (cnt > 0) { + if (!(sh_eth_read(ndev, EDMR) & 0x3)) + break; + mdelay(1); + cnt--; + } + if (cnt < 0) { + printk(KERN_ERR "Device reset fail\n"); + ret = -ETIMEDOUT; + } + return ret; } #endif @@ -739,21 +782,23 @@ static void sh_eth_ring_free(struct net_device *ndev) /* Free Rx skb ringbuffer */ if (mdp->rx_skbuff) { - for (i = 0; i < RX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_rx_ring; i++) { if (mdp->rx_skbuff[i]) dev_kfree_skb(mdp->rx_skbuff[i]); } } kfree(mdp->rx_skbuff); + mdp->rx_skbuff = NULL; /* Free Tx skb ringbuffer */ if (mdp->tx_skbuff) { - for (i = 0; i < TX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_tx_ring; i++) { if (mdp->tx_skbuff[i]) dev_kfree_skb(mdp->tx_skbuff[i]); } } kfree(mdp->tx_skbuff); + mdp->tx_skbuff = NULL; } /* format skb and descriptor buffer */ @@ -764,8 +809,8 @@ static void sh_eth_ring_format(struct net_device *ndev) struct sk_buff *skb; struct sh_eth_rxdesc *rxdesc = NULL; struct sh_eth_txdesc *txdesc = NULL; - int rx_ringsize = sizeof(*rxdesc) * RX_RING_SIZE; - int tx_ringsize = sizeof(*txdesc) * TX_RING_SIZE; + int rx_ringsize = sizeof(*rxdesc) * mdp->num_rx_ring; + int tx_ringsize = sizeof(*txdesc) * mdp->num_tx_ring; mdp->cur_rx = mdp->cur_tx = 0; 
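/*
 * [Editor's note -- illustrative sketch, not part of the sh_eth patch.]
 *  The sh_eth hunks replace the compile-time RX_RING_SIZE/TX_RING_SIZE
 *  constants with per-device num_rx_ring/num_tx_ring values so the
 *  rings can be resized through ethtool -G.  The driver keeps
 *  free-running cur/dirty counters and derives slot indices with
 *  "counter % ring size", as in the minimal stand-alone ring below
 *  (struct demo_ring and its helpers are invented names, not driver
 *  API).
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_ring {
	unsigned int num_desc;	/* runtime-configurable ring size */
	unsigned int cur;	/* next slot to fill (free-running) */
	unsigned int dirty;	/* next slot to reclaim (free-running) */
	int *slots;
};

static bool ring_init(struct demo_ring *r, unsigned int num_desc)
{
	r->slots = calloc(num_desc, sizeof(*r->slots));
	if (!r->slots)
		return false;
	r->num_desc = num_desc;
	r->cur = r->dirty = 0;
	return true;
}

static bool ring_post(struct demo_ring *r, int value)
{
	/* Leave a small gap, like the "num_tx_ring - 4" check in the driver. */
	if (r->cur - r->dirty >= r->num_desc - 4)
		return false;
	r->slots[r->cur % r->num_desc] = value;
	r->cur++;
	return true;
}

static void ring_reclaim(struct demo_ring *r)
{
	while (r->cur - r->dirty > 0) {
		unsigned int entry = r->dirty % r->num_desc;

		printf("completed slot %u -> %d\n", entry, r->slots[entry]);
		r->dirty++;
	}
}

int main(void)
{
	struct demo_ring r;
	int i;

	if (!ring_init(&r, 16))		/* e.g. "ethtool -G eth0 tx 16" */
		return 1;
	for (i = 0; i < 8; i++)
		ring_post(&r, i);
	ring_reclaim(&r);
	free(r.slots);
	return 0;
}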
mdp->dirty_rx = mdp->dirty_tx = 0; @@ -773,7 +818,7 @@ static void sh_eth_ring_format(struct net_device *ndev) memset(mdp->rx_ring, 0, rx_ringsize); /* build Rx ring buffer */ - for (i = 0; i < RX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_rx_ring; i++) { /* skb */ mdp->rx_skbuff[i] = NULL; skb = netdev_alloc_skb(ndev, mdp->rx_buf_sz); @@ -799,7 +844,7 @@ static void sh_eth_ring_format(struct net_device *ndev) } } - mdp->dirty_rx = (u32) (i - RX_RING_SIZE); + mdp->dirty_rx = (u32) (i - mdp->num_rx_ring); /* Mark the last entry as wrapping the ring. */ rxdesc->status |= cpu_to_edmac(mdp, RD_RDEL); @@ -807,7 +852,7 @@ static void sh_eth_ring_format(struct net_device *ndev) memset(mdp->tx_ring, 0, tx_ringsize); /* build Tx ring buffer */ - for (i = 0; i < TX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_tx_ring; i++) { mdp->tx_skbuff[i] = NULL; txdesc = &mdp->tx_ring[i]; txdesc->status = cpu_to_edmac(mdp, TD_TFP); @@ -841,7 +886,7 @@ static int sh_eth_ring_init(struct net_device *ndev) mdp->rx_buf_sz += NET_IP_ALIGN; /* Allocate RX and TX skb rings */ - mdp->rx_skbuff = kmalloc(sizeof(*mdp->rx_skbuff) * RX_RING_SIZE, + mdp->rx_skbuff = kmalloc(sizeof(*mdp->rx_skbuff) * mdp->num_rx_ring, GFP_KERNEL); if (!mdp->rx_skbuff) { dev_err(&ndev->dev, "Cannot allocate Rx skb\n"); @@ -849,7 +894,7 @@ static int sh_eth_ring_init(struct net_device *ndev) return ret; } - mdp->tx_skbuff = kmalloc(sizeof(*mdp->tx_skbuff) * TX_RING_SIZE, + mdp->tx_skbuff = kmalloc(sizeof(*mdp->tx_skbuff) * mdp->num_tx_ring, GFP_KERNEL); if (!mdp->tx_skbuff) { dev_err(&ndev->dev, "Cannot allocate Tx skb\n"); @@ -858,7 +903,7 @@ static int sh_eth_ring_init(struct net_device *ndev) } /* Allocate all Rx descriptors. */ - rx_ringsize = sizeof(struct sh_eth_rxdesc) * RX_RING_SIZE; + rx_ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring; mdp->rx_ring = dma_alloc_coherent(NULL, rx_ringsize, &mdp->rx_desc_dma, GFP_KERNEL); @@ -872,7 +917,7 @@ static int sh_eth_ring_init(struct net_device *ndev) mdp->dirty_rx = 0; /* Allocate all Tx descriptors. 
*/ - tx_ringsize = sizeof(struct sh_eth_txdesc) * TX_RING_SIZE; + tx_ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring; mdp->tx_ring = dma_alloc_coherent(NULL, tx_ringsize, &mdp->tx_desc_dma, GFP_KERNEL); if (!mdp->tx_ring) { @@ -890,19 +935,41 @@ desc_ring_free: skb_ring_free: /* Free Rx and Tx skb ring buffer */ sh_eth_ring_free(ndev); + mdp->tx_ring = NULL; + mdp->rx_ring = NULL; return ret; } -static int sh_eth_dev_init(struct net_device *ndev) +static void sh_eth_free_dma_buffer(struct sh_eth_private *mdp) +{ + int ringsize; + + if (mdp->rx_ring) { + ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring; + dma_free_coherent(NULL, ringsize, mdp->rx_ring, + mdp->rx_desc_dma); + mdp->rx_ring = NULL; + } + + if (mdp->tx_ring) { + ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring; + dma_free_coherent(NULL, ringsize, mdp->tx_ring, + mdp->tx_desc_dma); + mdp->tx_ring = NULL; + } +} + +static int sh_eth_dev_init(struct net_device *ndev, bool start) { int ret = 0; struct sh_eth_private *mdp = netdev_priv(ndev); - u_int32_t rx_int_var, tx_int_var; u32 val; /* Soft Reset */ - sh_eth_reset(ndev); + ret = sh_eth_reset(ndev); + if (ret) + goto out; /* Descriptor format */ sh_eth_ring_format(ndev); @@ -926,9 +993,7 @@ static int sh_eth_dev_init(struct net_device *ndev) /* Frame recv control */ sh_eth_write(ndev, mdp->cd->rmcr_value, RMCR); - rx_int_var = mdp->rx_int_var = DESC_I_RINT8 | DESC_I_RINT5; - tx_int_var = mdp->tx_int_var = DESC_I_TINT2; - sh_eth_write(ndev, rx_int_var | tx_int_var, TRSCER); + sh_eth_write(ndev, DESC_I_RINT8 | DESC_I_RINT5 | DESC_I_TINT2, TRSCER); if (mdp->cd->bculr) sh_eth_write(ndev, 0x800, BCULR); /* Burst sycle set */ @@ -943,7 +1008,8 @@ static int sh_eth_dev_init(struct net_device *ndev) RFLR); sh_eth_write(ndev, sh_eth_read(ndev, EESR), EESR); - sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); + if (start) + sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); /* PAUSE Prohibition */ val = (sh_eth_read(ndev, ECMR) & ECMR_DM) | @@ -958,7 +1024,8 @@ static int sh_eth_dev_init(struct net_device *ndev) sh_eth_write(ndev, mdp->cd->ecsr_value, ECSR); /* E-MAC Interrupt Enable register */ - sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR); + if (start) + sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR); /* Set MAC address */ update_mac_address(ndev); @@ -971,11 +1038,14 @@ static int sh_eth_dev_init(struct net_device *ndev) if (mdp->cd->tpauser) sh_eth_write(ndev, TPAUSER_UNLIMITED, TPAUSER); - /* Setting the Rx mode will start the Rx process. */ - sh_eth_write(ndev, EDRRR_R, EDRRR); + if (start) { + /* Setting the Rx mode will start the Rx process. 
*/ + sh_eth_write(ndev, EDRRR_R, EDRRR); - netif_start_queue(ndev); + netif_start_queue(ndev); + } +out: return ret; } @@ -988,7 +1058,7 @@ static int sh_eth_txfree(struct net_device *ndev) int entry = 0; for (; mdp->cur_tx - mdp->dirty_tx > 0; mdp->dirty_tx++) { - entry = mdp->dirty_tx % TX_RING_SIZE; + entry = mdp->dirty_tx % mdp->num_tx_ring; txdesc = &mdp->tx_ring[entry]; if (txdesc->status & cpu_to_edmac(mdp, TD_TACT)) break; @@ -1001,7 +1071,7 @@ static int sh_eth_txfree(struct net_device *ndev) freeNum++; } txdesc->status = cpu_to_edmac(mdp, TD_TFP); - if (entry >= TX_RING_SIZE - 1) + if (entry >= mdp->num_tx_ring - 1) txdesc->status |= cpu_to_edmac(mdp, TD_TDLE); ndev->stats.tx_packets++; @@ -1016,8 +1086,8 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status) struct sh_eth_private *mdp = netdev_priv(ndev); struct sh_eth_rxdesc *rxdesc; - int entry = mdp->cur_rx % RX_RING_SIZE; - int boguscnt = (mdp->dirty_rx + RX_RING_SIZE) - mdp->cur_rx; + int entry = mdp->cur_rx % mdp->num_rx_ring; + int boguscnt = (mdp->dirty_rx + mdp->num_rx_ring) - mdp->cur_rx; struct sk_buff *skb; u16 pkt_len = 0; u32 desc_status; @@ -1068,13 +1138,13 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status) ndev->stats.rx_bytes += pkt_len; } rxdesc->status |= cpu_to_edmac(mdp, RD_RACT); - entry = (++mdp->cur_rx) % RX_RING_SIZE; + entry = (++mdp->cur_rx) % mdp->num_rx_ring; rxdesc = &mdp->rx_ring[entry]; } /* Refill the Rx ring buffers. */ for (; mdp->cur_rx - mdp->dirty_rx > 0; mdp->dirty_rx++) { - entry = mdp->dirty_rx % RX_RING_SIZE; + entry = mdp->dirty_rx % mdp->num_rx_ring; rxdesc = &mdp->rx_ring[entry]; /* The size of the buffer is 16 byte boundary. */ rxdesc->buffer_length = ALIGN(mdp->rx_buf_sz, 16); @@ -1091,7 +1161,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status) skb_checksum_none_assert(skb); rxdesc->addr = virt_to_phys(PTR_ALIGN(skb->data, 4)); } - if (entry >= RX_RING_SIZE - 1) + if (entry >= mdp->num_rx_ring - 1) rxdesc->status |= cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL); else @@ -1293,14 +1363,6 @@ other_irq: return ret; } -static void sh_eth_timer(unsigned long data) -{ - struct net_device *ndev = (struct net_device *)data; - struct sh_eth_private *mdp = netdev_priv(ndev); - - mod_timer(&mdp->timer, jiffies + (10 * HZ)); -} - /* PHY state control function */ static void sh_eth_adjust_link(struct net_device *ndev) { @@ -1499,6 +1561,71 @@ static void sh_eth_get_strings(struct net_device *ndev, u32 stringset, u8 *data) } } +static void sh_eth_get_ringparam(struct net_device *ndev, + struct ethtool_ringparam *ring) +{ + struct sh_eth_private *mdp = netdev_priv(ndev); + + ring->rx_max_pending = RX_RING_MAX; + ring->tx_max_pending = TX_RING_MAX; + ring->rx_pending = mdp->num_rx_ring; + ring->tx_pending = mdp->num_tx_ring; +} + +static int sh_eth_set_ringparam(struct net_device *ndev, + struct ethtool_ringparam *ring) +{ + struct sh_eth_private *mdp = netdev_priv(ndev); + int ret; + + if (ring->tx_pending > TX_RING_MAX || + ring->rx_pending > RX_RING_MAX || + ring->tx_pending < TX_RING_MIN || + ring->rx_pending < RX_RING_MIN) + return -EINVAL; + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + if (netif_running(ndev)) { + netif_tx_disable(ndev); + /* Disable interrupts by clearing the interrupt mask. */ + sh_eth_write(ndev, 0x0000, EESIPR); + /* Stop the chip's Tx and Rx processes. */ + sh_eth_write(ndev, 0, EDTRR); + sh_eth_write(ndev, 0, EDRRR); + synchronize_irq(ndev->irq); + } + + /* Free all the skbuffs in the Rx queue. 
*/ + sh_eth_ring_free(ndev); + /* Free DMA buffer */ + sh_eth_free_dma_buffer(mdp); + + /* Set new parameters */ + mdp->num_rx_ring = ring->rx_pending; + mdp->num_tx_ring = ring->tx_pending; + + ret = sh_eth_ring_init(ndev); + if (ret < 0) { + dev_err(&ndev->dev, "%s: sh_eth_ring_init failed.\n", __func__); + return ret; + } + ret = sh_eth_dev_init(ndev, false); + if (ret < 0) { + dev_err(&ndev->dev, "%s: sh_eth_dev_init failed.\n", __func__); + return ret; + } + + if (netif_running(ndev)) { + sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); + /* Setting the Rx mode will start the Rx process. */ + sh_eth_write(ndev, EDRRR_R, EDRRR); + netif_wake_queue(ndev); + } + + return 0; +} + static const struct ethtool_ops sh_eth_ethtool_ops = { .get_settings = sh_eth_get_settings, .set_settings = sh_eth_set_settings, @@ -1509,6 +1636,8 @@ static const struct ethtool_ops sh_eth_ethtool_ops = { .get_strings = sh_eth_get_strings, .get_ethtool_stats = sh_eth_get_ethtool_stats, .get_sset_count = sh_eth_get_sset_count, + .get_ringparam = sh_eth_get_ringparam, + .set_ringparam = sh_eth_set_ringparam, }; /* network device open function */ @@ -1539,7 +1668,7 @@ static int sh_eth_open(struct net_device *ndev) goto out_free_irq; /* device init */ - ret = sh_eth_dev_init(ndev); + ret = sh_eth_dev_init(ndev, true); if (ret) goto out_free_irq; @@ -1548,11 +1677,6 @@ static int sh_eth_open(struct net_device *ndev) if (ret) goto out_free_irq; - /* Set the timer to check for link beat. */ - init_timer(&mdp->timer); - mdp->timer.expires = (jiffies + (24 * HZ)) / 10;/* 2.4 sec. */ - setup_timer(&mdp->timer, sh_eth_timer, (unsigned long)ndev); - return ret; out_free_irq: @@ -1577,11 +1701,8 @@ static void sh_eth_tx_timeout(struct net_device *ndev) /* tx_errors count up */ ndev->stats.tx_errors++; - /* timer off */ - del_timer_sync(&mdp->timer); - /* Free all the skbuffs in the Rx queue. */ - for (i = 0; i < RX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_rx_ring; i++) { rxdesc = &mdp->rx_ring[i]; rxdesc->status = 0; rxdesc->addr = 0xBADF00D0; @@ -1589,18 +1710,14 @@ static void sh_eth_tx_timeout(struct net_device *ndev) dev_kfree_skb(mdp->rx_skbuff[i]); mdp->rx_skbuff[i] = NULL; } - for (i = 0; i < TX_RING_SIZE; i++) { + for (i = 0; i < mdp->num_tx_ring; i++) { if (mdp->tx_skbuff[i]) dev_kfree_skb(mdp->tx_skbuff[i]); mdp->tx_skbuff[i] = NULL; } /* device init */ - sh_eth_dev_init(ndev); - - /* timer on */ - mdp->timer.expires = (jiffies + (24 * HZ)) / 10;/* 2.4 sec. */ - add_timer(&mdp->timer); + sh_eth_dev_init(ndev, true); } /* Packet transmit function */ @@ -1612,7 +1729,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev) unsigned long flags; spin_lock_irqsave(&mdp->lock, flags); - if ((mdp->cur_tx - mdp->dirty_tx) >= (TX_RING_SIZE - 4)) { + if ((mdp->cur_tx - mdp->dirty_tx) >= (mdp->num_tx_ring - 4)) { if (!sh_eth_txfree(ndev)) { if (netif_msg_tx_queued(mdp)) dev_warn(&ndev->dev, "TxFD exhausted.\n"); @@ -1623,7 +1740,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev) } spin_unlock_irqrestore(&mdp->lock, flags); - entry = mdp->cur_tx % TX_RING_SIZE; + entry = mdp->cur_tx % mdp->num_tx_ring; mdp->tx_skbuff[entry] = skb; txdesc = &mdp->tx_ring[entry]; /* soft swap. 
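The get/set_ringparam hooks registered above are driven through the standard ethtool ring interface; roughly what "ethtool -G eth0 rx 256 tx 256" issues under the hood is sketched below (the interface name is a placeholder, and requests outside the RX/TX_RING_MIN..MAX window added in sh_eth.h come back as -EINVAL):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	struct ethtool_ringparam ering = { .cmd = ETHTOOL_GRINGPARAM };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* placeholder name */
	ifr.ifr_data = (char *)&ering;

	if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)		/* read current/maximum sizes */
		printf("rx %u/%u tx %u/%u\n",
		       ering.rx_pending, ering.rx_max_pending,
		       ering.tx_pending, ering.tx_max_pending);

	ering.cmd = ETHTOOL_SRINGPARAM;			/* ask the driver to resize */
	ering.rx_pending = 256;
	ering.tx_pending = 256;
	if (ioctl(fd, SIOCETHTOOL, &ifr) != 0)
		perror("ETHTOOL_SRINGPARAM");

	close(fd);
	return 0;
}
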
*/ @@ -1637,7 +1754,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev) else txdesc->buffer_length = skb->len; - if (entry >= TX_RING_SIZE - 1) + if (entry >= mdp->num_tx_ring - 1) txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE); else txdesc->status |= cpu_to_edmac(mdp, TD_TACT); @@ -1654,7 +1771,6 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev) static int sh_eth_close(struct net_device *ndev) { struct sh_eth_private *mdp = netdev_priv(ndev); - int ringsize; netif_stop_queue(ndev); @@ -1673,18 +1789,11 @@ static int sh_eth_close(struct net_device *ndev) free_irq(ndev->irq, ndev); - del_timer_sync(&mdp->timer); - /* Free all the skbuffs in the Rx queue. */ sh_eth_ring_free(ndev); /* free DMA buffer */ - ringsize = sizeof(struct sh_eth_rxdesc) * RX_RING_SIZE; - dma_free_coherent(NULL, ringsize, mdp->rx_ring, mdp->rx_desc_dma); - - /* free DMA buffer */ - ringsize = sizeof(struct sh_eth_txdesc) * TX_RING_SIZE; - dma_free_coherent(NULL, ringsize, mdp->tx_ring, mdp->tx_desc_dma); + sh_eth_free_dma_buffer(mdp); pm_runtime_put_sync(&mdp->pdev->dev); @@ -2275,6 +2384,8 @@ static int sh_eth_drv_probe(struct platform_device *pdev) ether_setup(ndev); mdp = netdev_priv(ndev); + mdp->num_tx_ring = TX_RING_SIZE; + mdp->num_rx_ring = RX_RING_SIZE; mdp->addr = ioremap(res->start, resource_size(res)); if (mdp->addr == NULL) { ret = -ENOMEM; @@ -2312,8 +2423,6 @@ static int sh_eth_drv_probe(struct platform_device *pdev) /* debug message level */ mdp->msg_enable = SH_ETH_DEF_MSG_ENABLE; - mdp->post_rx = POST_RX >> (devno << 1); - mdp->post_fw = POST_FW >> (devno << 1); /* read and set MAC address */ read_mac_address(ndev, pd->mac_addr); diff --git a/drivers/net/ethernet/renesas/sh_eth.h b/drivers/net/ethernet/renesas/sh_eth.h index 57b8e1fc5d15..bae84fd2e73a 100644 --- a/drivers/net/ethernet/renesas/sh_eth.h +++ b/drivers/net/ethernet/renesas/sh_eth.h @@ -27,6 +27,10 @@ #define TX_TIMEOUT (5*HZ) #define TX_RING_SIZE 64 /* Tx ring size */ #define RX_RING_SIZE 64 /* Rx ring size */ +#define TX_RING_MIN 64 +#define RX_RING_MIN 64 +#define TX_RING_MAX 1024 +#define RX_RING_MAX 1024 #define ETHERSMALL 60 #define PKT_BUF_SZ 1538 #define SH_ETH_TSU_TIMEOUT_MS 500 @@ -585,71 +589,6 @@ enum RPADIR_BIT { /* FDR */ #define DEFAULT_FDR_INIT 0x00000707 -enum phy_offsets { - PHY_CTRL = 0, PHY_STAT = 1, PHY_IDT1 = 2, PHY_IDT2 = 3, - PHY_ANA = 4, PHY_ANL = 5, PHY_ANE = 6, - PHY_16 = 16, -}; - -/* PHY_CTRL */ -enum PHY_CTRL_BIT { - PHY_C_RESET = 0x8000, PHY_C_LOOPBK = 0x4000, PHY_C_SPEEDSL = 0x2000, - PHY_C_ANEGEN = 0x1000, PHY_C_PWRDN = 0x0800, PHY_C_ISO = 0x0400, - PHY_C_RANEG = 0x0200, PHY_C_DUPLEX = 0x0100, PHY_C_COLT = 0x0080, -}; -#define DM9161_PHY_C_ANEGEN 0 /* auto nego special */ - -/* PHY_STAT */ -enum PHY_STAT_BIT { - PHY_S_100T4 = 0x8000, PHY_S_100X_F = 0x4000, PHY_S_100X_H = 0x2000, - PHY_S_10T_F = 0x1000, PHY_S_10T_H = 0x0800, PHY_S_ANEGC = 0x0020, - PHY_S_RFAULT = 0x0010, PHY_S_ANEGA = 0x0008, PHY_S_LINK = 0x0004, - PHY_S_JAB = 0x0002, PHY_S_EXTD = 0x0001, -}; - -/* PHY_ANA */ -enum PHY_ANA_BIT { - PHY_A_NP = 0x8000, PHY_A_ACK = 0x4000, PHY_A_RF = 0x2000, - PHY_A_FCS = 0x0400, PHY_A_T4 = 0x0200, PHY_A_FDX = 0x0100, - PHY_A_HDX = 0x0080, PHY_A_10FDX = 0x0040, PHY_A_10HDX = 0x0020, - PHY_A_SEL = 0x001e, -}; -/* PHY_ANL */ -enum PHY_ANL_BIT { - PHY_L_NP = 0x8000, PHY_L_ACK = 0x4000, PHY_L_RF = 0x2000, - PHY_L_FCS = 0x0400, PHY_L_T4 = 0x0200, PHY_L_FDX = 0x0100, - PHY_L_HDX = 0x0080, PHY_L_10FDX = 0x0040, PHY_L_10HDX = 0x0020, - PHY_L_SEL = 0x001f, 
-}; - -/* PHY_ANE */ -enum PHY_ANE_BIT { - PHY_E_PDF = 0x0010, PHY_E_LPNPA = 0x0008, PHY_E_NPA = 0x0004, - PHY_E_PRX = 0x0002, PHY_E_LPANEGA = 0x0001, -}; - -/* DM9161 */ -enum PHY_16_BIT { - PHY_16_BP4B45 = 0x8000, PHY_16_BPSCR = 0x4000, PHY_16_BPALIGN = 0x2000, - PHY_16_BP_ADPOK = 0x1000, PHY_16_Repeatmode = 0x0800, - PHY_16_TXselect = 0x0400, - PHY_16_Rsvd = 0x0200, PHY_16_RMIIEnable = 0x0100, - PHY_16_Force100LNK = 0x0080, - PHY_16_APDLED_CTL = 0x0040, PHY_16_COLLED_CTL = 0x0020, - PHY_16_RPDCTR_EN = 0x0010, - PHY_16_ResetStMch = 0x0008, PHY_16_PreamSupr = 0x0004, - PHY_16_Sleepmode = 0x0002, - PHY_16_RemoteLoopOut = 0x0001, -}; - -#define POST_RX 0x08 -#define POST_FW 0x04 -#define POST0_RX (POST_RX) -#define POST0_FW (POST_FW) -#define POST1_RX (POST_RX >> 2) -#define POST1_FW (POST_FW >> 2) -#define POST_ALL (POST0_RX | POST0_FW | POST1_RX | POST1_FW) - /* ARSTR */ enum ARSTR_BIT { ARSTR_ARSTR = 0x00000001, }; @@ -757,6 +696,7 @@ struct sh_eth_cpu_data { unsigned no_trimd:1; /* E-DMAC DO NOT have TRIMD */ unsigned no_ade:1; /* E-DMAC DO NOT have ADE bit in EESR */ unsigned hw_crc:1; /* E-DMAC have CSMR */ + unsigned select_mii:1; /* EtherC have RMII_MII (MII select register) */ }; struct sh_eth_private { @@ -765,13 +705,14 @@ struct sh_eth_private { const u16 *reg_offset; void __iomem *addr; void __iomem *tsu_addr; + u32 num_rx_ring; + u32 num_tx_ring; dma_addr_t rx_desc_dma; dma_addr_t tx_desc_dma; struct sh_eth_rxdesc *rx_ring; struct sh_eth_txdesc *tx_ring; struct sk_buff **rx_skbuff; struct sk_buff **tx_skbuff; - struct timer_list timer; spinlock_t lock; u32 cur_rx, dirty_rx; /* Producer/consumer ring indices */ u32 cur_tx, dirty_tx; @@ -786,10 +727,6 @@ struct sh_eth_private { int msg_enable; int speed; int duplex; - u32 rx_int_var, tx_int_var; /* interrupt control variables */ - char post_rx; /* POST receive */ - char post_fw; /* POST forward */ - struct net_device_stats tsu_stats; /* TSU forward status */ int port; /* for TSU */ int vlan_num_ids; /* for VLAN tag filter */ diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c index b95f2e1b33f0..70554a1b2b02 100644 --- a/drivers/net/ethernet/sfc/efx.c +++ b/drivers/net/ethernet/sfc/efx.c @@ -1103,8 +1103,8 @@ static int efx_init_io(struct efx_nic *efx) * masks event though they reject 46 bit masks. */ while (dma_mask > 0x7fffffffUL) { - if (pci_dma_supported(pci_dev, dma_mask)) { - rc = pci_set_dma_mask(pci_dev, dma_mask); + if (dma_supported(&pci_dev->dev, dma_mask)) { + rc = dma_set_mask(&pci_dev->dev, dma_mask); if (rc == 0) break; } @@ -1117,10 +1117,10 @@ static int efx_init_io(struct efx_nic *efx) } netif_dbg(efx, probe, efx->net_dev, "using DMA mask %llx\n", (unsigned long long) dma_mask); - rc = pci_set_consistent_dma_mask(pci_dev, dma_mask); + rc = dma_set_coherent_mask(&pci_dev->dev, dma_mask); if (rc) { - /* pci_set_consistent_dma_mask() is not *allowed* to - * fail with a mask that pci_set_dma_mask() accepted, + /* dma_set_coherent_mask() is not *allowed* to + * fail with a mask that dma_set_mask() accepted, * but just in case... */ netif_err(efx, probe, efx->net_dev, diff --git a/drivers/net/ethernet/sfc/enum.h b/drivers/net/ethernet/sfc/enum.h index d725a8fbe1a6..182dbe2cc6e4 100644 --- a/drivers/net/ethernet/sfc/enum.h +++ b/drivers/net/ethernet/sfc/enum.h @@ -136,10 +136,10 @@ enum efx_loopback_mode { * * Reset methods are numbered in order of increasing scope. 
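The efx_init_io() hunk above replaces the pci_* mask helpers with the generic DMA API while keeping the step-down negotiation; compressed into a sketch (the 46-bit starting width is borrowed from the sfc comment, not a general rule), the pattern is:

#include <linux/errno.h>
#include <linux/dma-mapping.h>

/* Start from the widest mask the device could use and shrink until
 * dma_set_mask() accepts it, then mirror the result into the coherent
 * mask, which is not allowed to fail for a mask the streaming API took.
 */
static int example_set_dma_mask(struct device *dev)
{
	u64 mask = DMA_BIT_MASK(46);
	int rc = -EIO;

	while (mask > 0x7fffffffUL) {
		if (dma_supported(dev, mask)) {
			rc = dma_set_mask(dev, mask);
			if (rc == 0)
				break;
		}
		mask >>= 1;
	}
	if (rc)
		return rc;

	return dma_set_coherent_mask(dev, mask);
}
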
* - * @RESET_TYPE_INVISIBLE: don't reset the PHYs or interrupts - * @RESET_TYPE_ALL: reset everything but PCI core blocks - * @RESET_TYPE_WORLD: reset everything, save & restore PCI config - * @RESET_TYPE_DISABLE: disable NIC + * @RESET_TYPE_INVISIBLE: Reset datapath and MAC (Falcon only) + * @RESET_TYPE_ALL: Reset datapath, MAC and PHY + * @RESET_TYPE_WORLD: Reset as much as possible + * @RESET_TYPE_DISABLE: Reset datapath, MAC and PHY; leave NIC disabled * @RESET_TYPE_TX_WATCHDOG: reset due to TX watchdog * @RESET_TYPE_INT_ERROR: reset due to internal error * @RESET_TYPE_RX_RECOVERY: reset to recover from RX datapath errors diff --git a/drivers/net/ethernet/sfc/ethtool.c b/drivers/net/ethernet/sfc/ethtool.c index 03ded364c8da..10536f93b561 100644 --- a/drivers/net/ethernet/sfc/ethtool.c +++ b/drivers/net/ethernet/sfc/ethtool.c @@ -453,7 +453,7 @@ static void efx_ethtool_get_strings(struct net_device *net_dev, switch (string_set) { case ETH_SS_STATS: for (i = 0; i < EFX_ETHTOOL_NUM_STATS; i++) - strncpy(ethtool_strings[i].name, + strlcpy(ethtool_strings[i].name, efx_ethtool_stats[i].name, sizeof(ethtool_strings[i].name)); break; diff --git a/drivers/net/ethernet/sfc/falcon.c b/drivers/net/ethernet/sfc/falcon.c index 3a1ca2bd1548..12b573a8e82b 100644 --- a/drivers/net/ethernet/sfc/falcon.c +++ b/drivers/net/ethernet/sfc/falcon.c @@ -25,9 +25,12 @@ #include "io.h" #include "phy.h" #include "workarounds.h" +#include "selftest.h" /* Hardware control for SFC4000 (aka Falcon). */ +static int falcon_reset_hw(struct efx_nic *efx, enum reset_type method); + static const unsigned int /* "Large" EEPROM device: Atmel AT25640 or similar * 8 KB, 16-bit address, 32 B write block */ @@ -1034,10 +1037,34 @@ static const struct efx_nic_register_test falcon_b0_register_tests[] = { EFX_OWORD32(0x0003FF0F, 0x00000000, 0x00000000, 0x00000000) }, }; -static int falcon_b0_test_registers(struct efx_nic *efx) +static int +falcon_b0_test_chip(struct efx_nic *efx, struct efx_self_tests *tests) { - return efx_nic_test_registers(efx, falcon_b0_register_tests, - ARRAY_SIZE(falcon_b0_register_tests)); + enum reset_type reset_method = RESET_TYPE_INVISIBLE; + int rc, rc2; + + mutex_lock(&efx->mac_lock); + if (efx->loopback_modes) { + /* We need the 312 clock from the PHY to test the XMAC + * registers, so move into XGMII loopback if available */ + if (efx->loopback_modes & (1 << LOOPBACK_XGMII)) + efx->loopback_mode = LOOPBACK_XGMII; + else + efx->loopback_mode = __ffs(efx->loopback_modes); + } + __efx_reconfigure_port(efx); + mutex_unlock(&efx->mac_lock); + + efx_reset_down(efx, reset_method); + + tests->registers = + efx_nic_test_registers(efx, falcon_b0_register_tests, + ARRAY_SIZE(falcon_b0_register_tests)) + ? -1 : 1; + + rc = falcon_reset_hw(efx, reset_method); + rc2 = efx_reset_up(efx, reset_method, rc == 0); + return rc ? 
rc : rc2; } /************************************************************************** @@ -1818,7 +1845,7 @@ const struct efx_nic_type falcon_b0_nic_type = { .get_wol = falcon_get_wol, .set_wol = falcon_set_wol, .resume_wol = efx_port_dummy_op_void, - .test_registers = falcon_b0_test_registers, + .test_chip = falcon_b0_test_chip, .test_nvram = falcon_test_nvram, .revision = EFX_REV_FALCON_B0, diff --git a/drivers/net/ethernet/sfc/falcon_xmac.c b/drivers/net/ethernet/sfc/falcon_xmac.c index 6106ef15dee3..8333865d4c95 100644 --- a/drivers/net/ethernet/sfc/falcon_xmac.c +++ b/drivers/net/ethernet/sfc/falcon_xmac.c @@ -341,12 +341,12 @@ void falcon_update_stats_xmac(struct efx_nic *efx) FALCON_STAT(efx, XgTxIpSrcErrPkt, tx_ip_src_error); /* Update derived statistics */ - mac_stats->tx_good_bytes = - (mac_stats->tx_bytes - mac_stats->tx_bad_bytes - - mac_stats->tx_control * 64); - mac_stats->rx_bad_bytes = - (mac_stats->rx_bytes - mac_stats->rx_good_bytes - - mac_stats->rx_control * 64); + efx_update_diff_stat(&mac_stats->tx_good_bytes, + mac_stats->tx_bytes - mac_stats->tx_bad_bytes - + mac_stats->tx_control * 64); + efx_update_diff_stat(&mac_stats->rx_bad_bytes, + mac_stats->rx_bytes - mac_stats->rx_good_bytes - + mac_stats->rx_control * 64); } void falcon_poll_xmac(struct efx_nic *efx) diff --git a/drivers/net/ethernet/sfc/filter.c b/drivers/net/ethernet/sfc/filter.c index fea7f7300675..c3fd61f0a95c 100644 --- a/drivers/net/ethernet/sfc/filter.c +++ b/drivers/net/ethernet/sfc/filter.c @@ -662,7 +662,7 @@ s32 efx_filter_insert_filter(struct efx_nic *efx, struct efx_filter_spec *spec, struct efx_filter_table *table = efx_filter_spec_table(state, spec); struct efx_filter_spec *saved_spec; efx_oword_t filter; - unsigned int filter_idx, depth; + unsigned int filter_idx, depth = 0; u32 key; int rc; diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c index 17b6463e459c..fc5e7bbcbc9e 100644 --- a/drivers/net/ethernet/sfc/mcdi.c +++ b/drivers/net/ethernet/sfc/mcdi.c @@ -1001,12 +1001,17 @@ static void efx_mcdi_exit_assertion(struct efx_nic *efx) { u8 inbuf[MC_CMD_REBOOT_IN_LEN]; - /* Atomically reboot the mcfw out of the assertion handler */ + /* If the MC is running debug firmware, it might now be + * waiting for a debugger to attach, but we just want it to + * reboot. We set a flag that makes the command a no-op if it + * has already done so. We don't know what return code to + * expect (0 or -EIO), so ignore it. 
+ */ BUILD_BUG_ON(MC_CMD_REBOOT_OUT_LEN != 0); MCDI_SET_DWORD(inbuf, REBOOT_IN_FLAGS, MC_CMD_REBOOT_FLAGS_AFTER_ASSERTION); - efx_mcdi_rpc(efx, MC_CMD_REBOOT, inbuf, MC_CMD_REBOOT_IN_LEN, - NULL, 0, NULL); + (void) efx_mcdi_rpc(efx, MC_CMD_REBOOT, inbuf, MC_CMD_REBOOT_IN_LEN, + NULL, 0, NULL); } int efx_mcdi_handle_assertion(struct efx_nic *efx) diff --git a/drivers/net/ethernet/sfc/mcdi_mon.c b/drivers/net/ethernet/sfc/mcdi_mon.c index fb7f65b59eb8..1d552f0664d7 100644 --- a/drivers/net/ethernet/sfc/mcdi_mon.c +++ b/drivers/net/ethernet/sfc/mcdi_mon.c @@ -222,6 +222,7 @@ efx_mcdi_mon_add_attr(struct efx_nic *efx, const char *name, attr->index = index; attr->type = type; attr->limit_value = limit_value; + sysfs_attr_init(&attr->dev_attr.attr); attr->dev_attr.attr.name = attr->name; attr->dev_attr.attr.mode = S_IRUGO; attr->dev_attr.show = reader; diff --git a/drivers/net/ethernet/sfc/mcdi_pcol.h b/drivers/net/ethernet/sfc/mcdi_pcol.h index 0310b9f08c9b..db4beed97669 100644 --- a/drivers/net/ethernet/sfc/mcdi_pcol.h +++ b/drivers/net/ethernet/sfc/mcdi_pcol.h @@ -48,8 +48,7 @@ /* Unused commands: 0x23, 0x27, 0x30, 0x31 */ -/** - * MCDI version 1 +/* MCDI version 1 * * Each MCDI request starts with an MCDI_HEADER, which is a 32byte * structure, filled in by the client. diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h index 0e575359af17..cd9c0a989692 100644 --- a/drivers/net/ethernet/sfc/net_driver.h +++ b/drivers/net/ethernet/sfc/net_driver.h @@ -68,6 +68,8 @@ #define EFX_TXQ_TYPES 4 #define EFX_MAX_TX_QUEUES (EFX_TXQ_TYPES * EFX_MAX_CHANNELS) +struct efx_self_tests; + /** * struct efx_special_buffer - An Efx special buffer * @addr: CPU base address of the buffer @@ -100,7 +102,7 @@ struct efx_special_buffer { * @len: Length of this fragment. * This field is zero when the queue slot is empty. * @continuation: True if this fragment is not the end of a packet. - * @unmap_single: True if pci_unmap_single should be used. + * @unmap_single: True if dma_unmap_single should be used. * @unmap_len: Length of this fragment to unmap */ struct efx_tx_buffer { @@ -527,7 +529,7 @@ struct efx_phy_operations { }; /** - * @enum efx_phy_mode - PHY operating mode flags + * enum efx_phy_mode - PHY operating mode flags * @PHY_MODE_NORMAL: on and should pass traffic * @PHY_MODE_TX_DISABLED: on with TX disabled * @PHY_MODE_LOW_POWER: set to low power through MDIO @@ -901,7 +903,8 @@ static inline unsigned int efx_port_num(struct efx_nic *efx) * @get_wol: Get WoL configuration from driver state * @set_wol: Push WoL configuration to the NIC * @resume_wol: Synchronise WoL state between driver and MC (e.g. after resume) - * @test_registers: Test read/write functionality of control registers + * @test_chip: Test registers. Should use efx_nic_test_registers(), and is + * expected to reset the NIC. 
* @test_nvram: Test validity of NVRAM contents * @revision: Hardware architecture revision * @mem_map_size: Memory BAR mapped size @@ -946,7 +949,7 @@ struct efx_nic_type { void (*get_wol)(struct efx_nic *efx, struct ethtool_wolinfo *wol); int (*set_wol)(struct efx_nic *efx, u32 type); void (*resume_wol)(struct efx_nic *efx); - int (*test_registers)(struct efx_nic *efx); + int (*test_chip)(struct efx_nic *efx, struct efx_self_tests *tests); int (*test_nvram)(struct efx_nic *efx); int revision; diff --git a/drivers/net/ethernet/sfc/nic.c b/drivers/net/ethernet/sfc/nic.c index 4a9a5beec8fc..326d799762d6 100644 --- a/drivers/net/ethernet/sfc/nic.c +++ b/drivers/net/ethernet/sfc/nic.c @@ -124,9 +124,6 @@ int efx_nic_test_registers(struct efx_nic *efx, unsigned address = 0, i, j; efx_oword_t mask, imask, original, reg, buf; - /* Falcon should be in loopback to isolate the XMAC from the PHY */ - WARN_ON(!LOOPBACK_INTERNAL(efx)); - for (i = 0; i < n_regs; ++i) { address = regs[i].address; mask = imask = regs[i].mask; @@ -308,8 +305,8 @@ efx_free_special_buffer(struct efx_nic *efx, struct efx_special_buffer *buffer) int efx_nic_alloc_buffer(struct efx_nic *efx, struct efx_buffer *buffer, unsigned int len) { - buffer->addr = pci_alloc_consistent(efx->pci_dev, len, - &buffer->dma_addr); + buffer->addr = dma_alloc_coherent(&efx->pci_dev->dev, len, + &buffer->dma_addr, GFP_ATOMIC); if (!buffer->addr) return -ENOMEM; buffer->len = len; @@ -320,8 +317,8 @@ int efx_nic_alloc_buffer(struct efx_nic *efx, struct efx_buffer *buffer, void efx_nic_free_buffer(struct efx_nic *efx, struct efx_buffer *buffer) { if (buffer->addr) { - pci_free_consistent(efx->pci_dev, buffer->len, - buffer->addr, buffer->dma_addr); + dma_free_coherent(&efx->pci_dev->dev, buffer->len, + buffer->addr, buffer->dma_addr); buffer->addr = NULL; } } diff --git a/drivers/net/ethernet/sfc/nic.h b/drivers/net/ethernet/sfc/nic.h index f48ccf6bb3b9..bab5cd9f5740 100644 --- a/drivers/net/ethernet/sfc/nic.h +++ b/drivers/net/ethernet/sfc/nic.h @@ -294,6 +294,24 @@ extern bool falcon_xmac_check_fault(struct efx_nic *efx); extern int falcon_reconfigure_xmac(struct efx_nic *efx); extern void falcon_update_stats_xmac(struct efx_nic *efx); +/* Some statistics are computed as A - B where A and B each increase + * linearly with some hardware counter(s) and the counters are read + * asynchronously. If the counters contributing to B are always read + * after those contributing to A, the computed value may be lower than + * the true value by some variable amount, and may decrease between + * subsequent computations. + * + * We should never allow statistics to decrease or to exceed the true + * value. Since the computed value will never be greater than the + * true value, we can achieve this by only storing the computed value + * when it increases. 
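The helper this comment describes follows immediately; first, a small worked example of why the signed comparison keeps these derived statistics monotonic (numbers invented, userspace types used for brevity):

#include <stdio.h>
#include <stdint.h>

/* Mirror of the helper defined right below, with userspace types. */
static void update_diff_stat(uint64_t *stat, uint64_t diff)
{
	if ((int64_t)(diff - *stat) > 0)
		*stat = diff;
}

int main(void)
{
	uint64_t tx_good_bytes = 0;

	update_diff_stat(&tx_good_bytes, 1000);	/* first reading: stored */
	update_diff_stat(&tx_good_bytes, 980);	/* B counters read late: 1000 kept, no dip */
	update_diff_stat(&tx_good_bytes, 1500);	/* hardware caught up: stored */

	printf("%llu\n", (unsigned long long)tx_good_bytes);	/* prints 1500 */
	return 0;
}
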
+ */ +static inline void efx_update_diff_stat(u64 *stat, u64 diff) +{ + if ((s64)(diff - *stat) > 0) + *stat = diff; +} + /* Interrupts and test events */ extern int efx_nic_init_interrupt(struct efx_nic *efx); extern void efx_nic_enable_interrupts(struct efx_nic *efx); diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c index 243e91f3dff9..719319b89d7a 100644 --- a/drivers/net/ethernet/sfc/rx.c +++ b/drivers/net/ethernet/sfc/rx.c @@ -155,11 +155,11 @@ static int efx_init_rx_buffers_skb(struct efx_rx_queue *rx_queue) rx_buf->len = skb_len - NET_IP_ALIGN; rx_buf->flags = 0; - rx_buf->dma_addr = pci_map_single(efx->pci_dev, + rx_buf->dma_addr = dma_map_single(&efx->pci_dev->dev, skb->data, rx_buf->len, - PCI_DMA_FROMDEVICE); - if (unlikely(pci_dma_mapping_error(efx->pci_dev, - rx_buf->dma_addr))) { + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(&efx->pci_dev->dev, + rx_buf->dma_addr))) { dev_kfree_skb_any(skb); rx_buf->u.skb = NULL; return -EIO; @@ -200,10 +200,10 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue) efx->rx_buffer_order); if (unlikely(page == NULL)) return -ENOMEM; - dma_addr = pci_map_page(efx->pci_dev, page, 0, + dma_addr = dma_map_page(&efx->pci_dev->dev, page, 0, efx_rx_buf_size(efx), - PCI_DMA_FROMDEVICE); - if (unlikely(pci_dma_mapping_error(efx->pci_dev, dma_addr))) { + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(&efx->pci_dev->dev, dma_addr))) { __free_pages(page, efx->rx_buffer_order); return -EIO; } @@ -247,14 +247,14 @@ static void efx_unmap_rx_buffer(struct efx_nic *efx, state = page_address(rx_buf->u.page); if (--state->refcnt == 0) { - pci_unmap_page(efx->pci_dev, + dma_unmap_page(&efx->pci_dev->dev, state->dma_addr, efx_rx_buf_size(efx), - PCI_DMA_FROMDEVICE); + DMA_FROM_DEVICE); } } else if (!(rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.skb) { - pci_unmap_single(efx->pci_dev, rx_buf->dma_addr, - rx_buf->len, PCI_DMA_FROMDEVICE); + dma_unmap_single(&efx->pci_dev->dev, rx_buf->dma_addr, + rx_buf->len, DMA_FROM_DEVICE); } } @@ -336,6 +336,7 @@ static void efx_recycle_rx_buffer(struct efx_channel *channel, /** * efx_fast_push_rx_descriptors - push new RX descriptors quickly * @rx_queue: RX descriptor queue + * * This will aim to fill the RX descriptor queue up to * @rx_queue->@max_fill. If there is insufficient atomic * memory to do so, a slow fill will be scheduled. diff --git a/drivers/net/ethernet/sfc/selftest.c b/drivers/net/ethernet/sfc/selftest.c index de4c0069f5b2..96068d15b601 100644 --- a/drivers/net/ethernet/sfc/selftest.c +++ b/drivers/net/ethernet/sfc/selftest.c @@ -120,19 +120,6 @@ static int efx_test_nvram(struct efx_nic *efx, struct efx_self_tests *tests) return rc; } -static int efx_test_chip(struct efx_nic *efx, struct efx_self_tests *tests) -{ - int rc = 0; - - /* Test register access */ - if (efx->type->test_registers) { - rc = efx->type->test_registers(efx); - tests->registers = rc ? 
-1 : 1; - } - - return rc; -} - /************************************************************************** * * Interrupt and event queue testing @@ -488,7 +475,7 @@ static int efx_end_loopback(struct efx_tx_queue *tx_queue, skb = state->skbs[i]; if (skb && !skb_shared(skb)) ++tx_done; - dev_kfree_skb_any(skb); + dev_kfree_skb(skb); } netif_tx_unlock_bh(efx->net_dev); @@ -699,8 +686,7 @@ int efx_selftest(struct efx_nic *efx, struct efx_self_tests *tests, { enum efx_loopback_mode loopback_mode = efx->loopback_mode; int phy_mode = efx->phy_mode; - enum reset_type reset_method = RESET_TYPE_INVISIBLE; - int rc_test = 0, rc_reset = 0, rc; + int rc_test = 0, rc_reset, rc; efx_selftest_async_cancel(efx); @@ -737,44 +723,26 @@ int efx_selftest(struct efx_nic *efx, struct efx_self_tests *tests, */ netif_device_detach(efx->net_dev); - mutex_lock(&efx->mac_lock); - if (efx->loopback_modes) { - /* We need the 312 clock from the PHY to test the XMAC - * registers, so move into XGMII loopback if available */ - if (efx->loopback_modes & (1 << LOOPBACK_XGMII)) - efx->loopback_mode = LOOPBACK_XGMII; - else - efx->loopback_mode = __ffs(efx->loopback_modes); - } - - __efx_reconfigure_port(efx); - mutex_unlock(&efx->mac_lock); - - /* free up all consumers of SRAM (including all the queues) */ - efx_reset_down(efx, reset_method); - - rc = efx_test_chip(efx, tests); - if (rc && !rc_test) - rc_test = rc; + if (efx->type->test_chip) { + rc_reset = efx->type->test_chip(efx, tests); + if (rc_reset) { + netif_err(efx, hw, efx->net_dev, + "Unable to recover from chip test\n"); + efx_schedule_reset(efx, RESET_TYPE_DISABLE); + return rc_reset; + } - /* reset the chip to recover from the register test */ - rc_reset = efx->type->reset(efx, reset_method); + if ((tests->registers < 0) && !rc_test) + rc_test = -EIO; + } /* Ensure that the phy is powered and out of loopback * for the bist and loopback tests */ + mutex_lock(&efx->mac_lock); efx->phy_mode &= ~PHY_MODE_LOW_POWER; efx->loopback_mode = LOOPBACK_NONE; - - rc = efx_reset_up(efx, reset_method, rc_reset == 0); - if (rc && !rc_reset) - rc_reset = rc; - - if (rc_reset) { - netif_err(efx, drv, efx->net_dev, - "Unable to recover from chip test\n"); - efx_schedule_reset(efx, RESET_TYPE_DISABLE); - return rc_reset; - } + __efx_reconfigure_port(efx); + mutex_unlock(&efx->mac_lock); rc = efx_test_phy(efx, tests, flags); if (rc && !rc_test) diff --git a/drivers/net/ethernet/sfc/siena.c b/drivers/net/ethernet/sfc/siena.c index 9f8d7cea3967..6bafd216e55e 100644 --- a/drivers/net/ethernet/sfc/siena.c +++ b/drivers/net/ethernet/sfc/siena.c @@ -25,10 +25,12 @@ #include "workarounds.h" #include "mcdi.h" #include "mcdi_pcol.h" +#include "selftest.h" /* Hardware control for SFC9000 family including SFL9021 (aka Siena). 
*/ static void siena_init_wol(struct efx_nic *efx); +static int siena_reset_hw(struct efx_nic *efx, enum reset_type method); static void siena_push_irq_moderation(struct efx_channel *channel) @@ -154,10 +156,29 @@ static const struct efx_nic_register_test siena_register_tests[] = { EFX_OWORD32(0xFFFFFFFF, 0xFFFFFFFF, 0x00000007, 0x00000000) }, }; -static int siena_test_registers(struct efx_nic *efx) +static int siena_test_chip(struct efx_nic *efx, struct efx_self_tests *tests) { - return efx_nic_test_registers(efx, siena_register_tests, - ARRAY_SIZE(siena_register_tests)); + enum reset_type reset_method = reset_method; + int rc, rc2; + + efx_reset_down(efx, reset_method); + + /* Reset the chip immediately so that it is completely + * quiescent regardless of what any VF driver does. + */ + rc = siena_reset_hw(efx, reset_method); + if (rc) + goto out; + + tests->registers = + efx_nic_test_registers(efx, siena_register_tests, + ARRAY_SIZE(siena_register_tests)) + ? -1 : 1; + + rc = siena_reset_hw(efx, reset_method); +out: + rc2 = efx_reset_up(efx, reset_method, rc == 0); + return rc ? rc : rc2; } /************************************************************************** @@ -437,8 +458,8 @@ static int siena_try_update_nic_stats(struct efx_nic *efx) MAC_STAT(tx_bytes, TX_BYTES); MAC_STAT(tx_bad_bytes, TX_BAD_BYTES); - mac_stats->tx_good_bytes = (mac_stats->tx_bytes - - mac_stats->tx_bad_bytes); + efx_update_diff_stat(&mac_stats->tx_good_bytes, + mac_stats->tx_bytes - mac_stats->tx_bad_bytes); MAC_STAT(tx_packets, TX_PKTS); MAC_STAT(tx_bad, TX_BAD_FCS_PKTS); MAC_STAT(tx_pause, TX_PAUSE_PKTS); @@ -471,8 +492,8 @@ static int siena_try_update_nic_stats(struct efx_nic *efx) MAC_STAT(tx_ip_src_error, TX_IP_SRC_ERR_PKTS); MAC_STAT(rx_bytes, RX_BYTES); MAC_STAT(rx_bad_bytes, RX_BAD_BYTES); - mac_stats->rx_good_bytes = (mac_stats->rx_bytes - - mac_stats->rx_bad_bytes); + efx_update_diff_stat(&mac_stats->rx_good_bytes, + mac_stats->rx_bytes - mac_stats->rx_bad_bytes); MAC_STAT(rx_packets, RX_PKTS); MAC_STAT(rx_good, RX_GOOD_PKTS); MAC_STAT(rx_bad, RX_BAD_FCS_PKTS); @@ -649,7 +670,7 @@ const struct efx_nic_type siena_a0_nic_type = { .get_wol = siena_get_wol, .set_wol = siena_set_wol, .resume_wol = siena_init_wol, - .test_registers = siena_test_registers, + .test_chip = siena_test_chip, .test_nvram = efx_mcdi_nvram_test_all, .revision = EFX_REV_SIENA_A0, diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c index 94d0365b31cd..9b225a7769f7 100644 --- a/drivers/net/ethernet/sfc/tx.c +++ b/drivers/net/ethernet/sfc/tx.c @@ -36,15 +36,15 @@ static void efx_dequeue_buffer(struct efx_tx_queue *tx_queue, unsigned int *bytes_compl) { if (buffer->unmap_len) { - struct pci_dev *pci_dev = tx_queue->efx->pci_dev; + struct device *dma_dev = &tx_queue->efx->pci_dev->dev; dma_addr_t unmap_addr = (buffer->dma_addr + buffer->len - buffer->unmap_len); if (buffer->unmap_single) - pci_unmap_single(pci_dev, unmap_addr, buffer->unmap_len, - PCI_DMA_TODEVICE); + dma_unmap_single(dma_dev, unmap_addr, buffer->unmap_len, + DMA_TO_DEVICE); else - pci_unmap_page(pci_dev, unmap_addr, buffer->unmap_len, - PCI_DMA_TODEVICE); + dma_unmap_page(dma_dev, unmap_addr, buffer->unmap_len, + DMA_TO_DEVICE); buffer->unmap_len = 0; buffer->unmap_single = false; } @@ -138,7 +138,7 @@ efx_max_tx_len(struct efx_nic *efx, dma_addr_t dma_addr) netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) { struct efx_nic *efx = tx_queue->efx; - struct pci_dev *pci_dev = efx->pci_dev; + struct device *dma_dev = 
&efx->pci_dev->dev; struct efx_tx_buffer *buffer; skb_frag_t *fragment; unsigned int len, unmap_len = 0, fill_level, insert_ptr; @@ -167,17 +167,17 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) fill_level = tx_queue->insert_count - tx_queue->old_read_count; q_space = efx->txq_entries - 1 - fill_level; - /* Map for DMA. Use pci_map_single rather than pci_map_page + /* Map for DMA. Use dma_map_single rather than dma_map_page * since this is more efficient on machines with sparse * memory. */ unmap_single = true; - dma_addr = pci_map_single(pci_dev, skb->data, len, PCI_DMA_TODEVICE); + dma_addr = dma_map_single(dma_dev, skb->data, len, PCI_DMA_TODEVICE); /* Process all fragments */ while (1) { - if (unlikely(pci_dma_mapping_error(pci_dev, dma_addr))) - goto pci_err; + if (unlikely(dma_mapping_error(dma_dev, dma_addr))) + goto dma_err; /* Store fields for marking in the per-fragment final * descriptor */ @@ -246,7 +246,7 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) i++; /* Map for DMA */ unmap_single = false; - dma_addr = skb_frag_dma_map(&pci_dev->dev, fragment, 0, len, + dma_addr = skb_frag_dma_map(dma_dev, fragment, 0, len, DMA_TO_DEVICE); } @@ -261,7 +261,7 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) return NETDEV_TX_OK; - pci_err: + dma_err: netif_err(efx, tx_err, efx->net_dev, " TX queue %d could not map skb with %d bytes %d " "fragments for DMA\n", tx_queue->queue, skb->len, @@ -284,11 +284,11 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) /* Free the fragment we were mid-way through pushing */ if (unmap_len) { if (unmap_single) - pci_unmap_single(pci_dev, unmap_addr, unmap_len, - PCI_DMA_TODEVICE); + dma_unmap_single(dma_dev, unmap_addr, unmap_len, + DMA_TO_DEVICE); else - pci_unmap_page(pci_dev, unmap_addr, unmap_len, - PCI_DMA_TODEVICE); + dma_unmap_page(dma_dev, unmap_addr, unmap_len, + DMA_TO_DEVICE); } return rc; @@ -651,17 +651,8 @@ static __be16 efx_tso_check_protocol(struct sk_buff *skb) EFX_BUG_ON_PARANOID(((struct ethhdr *)skb->data)->h_proto != protocol); if (protocol == htons(ETH_P_8021Q)) { - /* Find the encapsulated protocol; reset network header - * and transport header based on that. */ struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data; protocol = veh->h_vlan_encapsulated_proto; - skb_set_network_header(skb, sizeof(*veh)); - if (protocol == htons(ETH_P_IP)) - skb_set_transport_header(skb, sizeof(*veh) + - 4 * ip_hdr(skb)->ihl); - else if (protocol == htons(ETH_P_IPV6)) - skb_set_transport_header(skb, sizeof(*veh) + - sizeof(struct ipv6hdr)); } if (protocol == htons(ETH_P_IP)) { @@ -684,20 +675,19 @@ static __be16 efx_tso_check_protocol(struct sk_buff *skb) */ static int efx_tsoh_block_alloc(struct efx_tx_queue *tx_queue) { - - struct pci_dev *pci_dev = tx_queue->efx->pci_dev; + struct device *dma_dev = &tx_queue->efx->pci_dev->dev; struct efx_tso_header *tsoh; dma_addr_t dma_addr; u8 *base_kva, *kva; - base_kva = pci_alloc_consistent(pci_dev, PAGE_SIZE, &dma_addr); + base_kva = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr, GFP_ATOMIC); if (base_kva == NULL) { netif_err(tx_queue->efx, tx_err, tx_queue->efx->net_dev, "Unable to allocate page for TSO headers\n"); return -ENOMEM; } - /* pci_alloc_consistent() allocates pages. */ + /* dma_alloc_coherent() allocates pages. 
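Most of the sfc churn in this file is the mechanical pci_* to dma_* substitution; for reference, the correspondences being applied look roughly like this (sketch only, TX direction shown):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Substitutions applied throughout rx.c/tx.c/nic.c in this series:
 *   pci_map_single(pdev, p, len, PCI_DMA_TODEVICE)
 *       -> dma_map_single(&pdev->dev, p, len, DMA_TO_DEVICE)
 *   pci_dma_mapping_error(pdev, addr)
 *       -> dma_mapping_error(&pdev->dev, addr)
 *   pci_unmap_single(pdev, addr, len, PCI_DMA_TODEVICE)
 *       -> dma_unmap_single(&pdev->dev, addr, len, DMA_TO_DEVICE)
 *   pci_alloc_consistent(pdev, size, &dma)
 *       -> dma_alloc_coherent(&pdev->dev, size, &dma, GFP_ATOMIC)
 *   pci_free_consistent(pdev, size, cpu_addr, dma)
 *       -> dma_free_coherent(&pdev->dev, size, cpu_addr, dma)
 */
static dma_addr_t example_map_tx(struct pci_dev *pdev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);

	/* 0 stands in for "not mapped" in this sketch only */
	if (dma_mapping_error(&pdev->dev, addr))
		return 0;
	return addr;
}
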
*/ EFX_BUG_ON_PARANOID(dma_addr & (PAGE_SIZE - 1u)); for (kva = base_kva; kva < base_kva + PAGE_SIZE; kva += TSOH_STD_SIZE) { @@ -714,7 +704,7 @@ static int efx_tsoh_block_alloc(struct efx_tx_queue *tx_queue) /* Free up a TSO header, and all others in the same page. */ static void efx_tsoh_block_free(struct efx_tx_queue *tx_queue, struct efx_tso_header *tsoh, - struct pci_dev *pci_dev) + struct device *dma_dev) { struct efx_tso_header **p; unsigned long base_kva; @@ -731,7 +721,7 @@ static void efx_tsoh_block_free(struct efx_tx_queue *tx_queue, p = &(*p)->next; } - pci_free_consistent(pci_dev, PAGE_SIZE, (void *)base_kva, base_dma); + dma_free_coherent(dma_dev, PAGE_SIZE, (void *)base_kva, base_dma); } static struct efx_tso_header * @@ -743,11 +733,11 @@ efx_tsoh_heap_alloc(struct efx_tx_queue *tx_queue, size_t header_len) if (unlikely(!tsoh)) return NULL; - tsoh->dma_addr = pci_map_single(tx_queue->efx->pci_dev, + tsoh->dma_addr = dma_map_single(&tx_queue->efx->pci_dev->dev, TSOH_BUFFER(tsoh), header_len, - PCI_DMA_TODEVICE); - if (unlikely(pci_dma_mapping_error(tx_queue->efx->pci_dev, - tsoh->dma_addr))) { + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(&tx_queue->efx->pci_dev->dev, + tsoh->dma_addr))) { kfree(tsoh); return NULL; } @@ -759,9 +749,9 @@ efx_tsoh_heap_alloc(struct efx_tx_queue *tx_queue, size_t header_len) static void efx_tsoh_heap_free(struct efx_tx_queue *tx_queue, struct efx_tso_header *tsoh) { - pci_unmap_single(tx_queue->efx->pci_dev, + dma_unmap_single(&tx_queue->efx->pci_dev->dev, tsoh->dma_addr, tsoh->unmap_len, - PCI_DMA_TODEVICE); + DMA_TO_DEVICE); kfree(tsoh); } @@ -892,13 +882,13 @@ static void efx_enqueue_unwind(struct efx_tx_queue *tx_queue) unmap_addr = (buffer->dma_addr + buffer->len - buffer->unmap_len); if (buffer->unmap_single) - pci_unmap_single(tx_queue->efx->pci_dev, + dma_unmap_single(&tx_queue->efx->pci_dev->dev, unmap_addr, buffer->unmap_len, - PCI_DMA_TODEVICE); + DMA_TO_DEVICE); else - pci_unmap_page(tx_queue->efx->pci_dev, + dma_unmap_page(&tx_queue->efx->pci_dev->dev, unmap_addr, buffer->unmap_len, - PCI_DMA_TODEVICE); + DMA_TO_DEVICE); buffer->unmap_len = 0; } buffer->len = 0; @@ -927,7 +917,6 @@ static void tso_start(struct tso_state *st, const struct sk_buff *skb) EFX_BUG_ON_PARANOID(tcp_hdr(skb)->syn); EFX_BUG_ON_PARANOID(tcp_hdr(skb)->rst); - st->packet_space = st->full_packet_size; st->out_len = skb->len - st->header_len; st->unmap_len = 0; st->unmap_single = false; @@ -954,9 +943,9 @@ static int tso_get_head_fragment(struct tso_state *st, struct efx_nic *efx, int hl = st->header_len; int len = skb_headlen(skb) - hl; - st->unmap_addr = pci_map_single(efx->pci_dev, skb->data + hl, - len, PCI_DMA_TODEVICE); - if (likely(!pci_dma_mapping_error(efx->pci_dev, st->unmap_addr))) { + st->unmap_addr = dma_map_single(&efx->pci_dev->dev, skb->data + hl, + len, DMA_TO_DEVICE); + if (likely(!dma_mapping_error(&efx->pci_dev->dev, st->unmap_addr))) { st->unmap_single = true; st->unmap_len = len; st->in_len = len; @@ -1008,7 +997,7 @@ static int tso_fill_packet_with_fragment(struct efx_tx_queue *tx_queue, buffer->continuation = !end_of_packet; if (st->in_len == 0) { - /* Transfer ownership of the pci mapping */ + /* Transfer ownership of the DMA mapping */ buffer->unmap_len = st->unmap_len; buffer->unmap_single = st->unmap_single; st->unmap_len = 0; @@ -1181,18 +1170,18 @@ static int efx_enqueue_skb_tso(struct efx_tx_queue *tx_queue, mem_err: netif_err(efx, tx_err, efx->net_dev, - "Out of memory for TSO headers, or PCI mapping error\n"); + "Out of 
memory for TSO headers, or DMA mapping error\n"); dev_kfree_skb_any(skb); unwind: /* Free the DMA mapping we were in the process of writing out */ if (state.unmap_len) { if (state.unmap_single) - pci_unmap_single(efx->pci_dev, state.unmap_addr, - state.unmap_len, PCI_DMA_TODEVICE); + dma_unmap_single(&efx->pci_dev->dev, state.unmap_addr, + state.unmap_len, DMA_TO_DEVICE); else - pci_unmap_page(efx->pci_dev, state.unmap_addr, - state.unmap_len, PCI_DMA_TODEVICE); + dma_unmap_page(&efx->pci_dev->dev, state.unmap_addr, + state.unmap_len, DMA_TO_DEVICE); } efx_enqueue_unwind(tx_queue); @@ -1216,5 +1205,5 @@ static void efx_fini_tso(struct efx_tx_queue *tx_queue) while (tx_queue->tso_headers_free != NULL) efx_tsoh_block_free(tx_queue, tx_queue->tso_headers_free, - tx_queue->efx->pci_dev); + &tx_queue->efx->pci_dev->dev); } diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c index ac149d99f78f..b5ba3084c7fc 100644 --- a/drivers/net/ethernet/sgi/ioc3-eth.c +++ b/drivers/net/ethernet/sgi/ioc3-eth.c @@ -583,7 +583,7 @@ static inline void ioc3_rx(struct net_device *dev) unsigned long *rxr; u32 w0, err; - rxr = (unsigned long *) ip->rxr; /* Ring base */ + rxr = ip->rxr; /* Ring base */ rx_entry = ip->rx_ci; /* RX consume index */ n_entry = ip->rx_pi; @@ -903,7 +903,7 @@ static void ioc3_alloc_rings(struct net_device *dev) if (ip->rxr == NULL) { /* Allocate and initialize rx ring. 4kb = 512 entries */ ip->rxr = (unsigned long *) get_zeroed_page(GFP_ATOMIC); - rxr = (unsigned long *) ip->rxr; + rxr = ip->rxr; if (!rxr) printk("ioc3_alloc_rings(): get_zeroed_page() failed!\n"); diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c index 8814b2f5d46f..8d15f7a74b45 100644 --- a/drivers/net/ethernet/smsc/smc911x.c +++ b/drivers/net/ethernet/smsc/smc911x.c @@ -773,7 +773,7 @@ static int smc911x_phy_fixed(struct net_device *dev) return 1; } -/* +/** * smc911x_phy_reset - reset the phy * @dev: net device * @phy: phy address @@ -819,7 +819,7 @@ static int smc911x_phy_reset(struct net_device *dev, int phy) return reg & PMT_CTRL_PHY_RST_; } -/* +/** * smc911x_phy_powerdown - powerdown phy * @dev: net device * @phy: phy address @@ -837,7 +837,7 @@ static void smc911x_phy_powerdown(struct net_device *dev, int phy) SMC_SET_PHY_BMCR(lp, phy, bmcr); } -/* +/** * smc911x_phy_check_media - check the media status and adjust BMCR * @dev: net device * @init: set true for initialisation diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c index fee449355014..318adc935a53 100644 --- a/drivers/net/ethernet/smsc/smc91x.c +++ b/drivers/net/ethernet/smsc/smc91x.c @@ -942,7 +942,7 @@ static int smc_phy_fixed(struct net_device *dev) return 1; } -/* +/** * smc_phy_reset - reset the phy * @dev: net device * @phy: phy address @@ -976,7 +976,7 @@ static int smc_phy_reset(struct net_device *dev, int phy) return bmcr & BMCR_RESET; } -/* +/** * smc_phy_powerdown - powerdown phy * @dev: net device * @@ -1000,7 +1000,7 @@ static void smc_phy_powerdown(struct net_device *dev) smc_phy_write(dev, phy, MII_BMCR, bmcr | BMCR_PDOWN); } -/* +/** * smc_phy_check_media - check the media status and adjust TCR * @dev: net device * @init: set true for initialisation diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c index 1466e5d2af44..62d1baf111ea 100644 --- a/drivers/net/ethernet/smsc/smsc911x.c +++ b/drivers/net/ethernet/smsc/smsc911x.c @@ -1442,6 +1442,14 @@ smsc911x_set_hw_mac_address(struct smsc911x_data 
*pdata, u8 dev_addr[6]) smsc911x_mac_write(pdata, ADDRL, mac_low32); } +static void smsc911x_disable_irq_chip(struct net_device *dev) +{ + struct smsc911x_data *pdata = netdev_priv(dev); + + smsc911x_reg_write(pdata, INT_EN, 0); + smsc911x_reg_write(pdata, INT_STS, 0xFFFFFFFF); +} + static int smsc911x_open(struct net_device *dev) { struct smsc911x_data *pdata = netdev_priv(dev); @@ -1494,8 +1502,7 @@ static int smsc911x_open(struct net_device *dev) spin_unlock_irq(&pdata->mac_lock); /* Initialise irqs, but leave all sources disabled */ - smsc911x_reg_write(pdata, INT_EN, 0); - smsc911x_reg_write(pdata, INT_STS, 0xFFFFFFFF); + smsc911x_disable_irq_chip(dev); /* Set interrupt deassertion to 100uS */ intcfg = ((10 << 24) | INT_CFG_IRQ_EN_); @@ -2215,9 +2222,6 @@ static int __devinit smsc911x_init(struct net_device *dev) if (smsc911x_soft_reset(pdata)) return -ENODEV; - /* Disable all interrupt sources until we bring the device up */ - smsc911x_reg_write(pdata, INT_EN, 0); - ether_setup(dev); dev->flags |= IFF_MULTICAST; netif_napi_add(dev, &pdata->napi, smsc911x_poll, SMSC_NAPI_WEIGHT); @@ -2434,8 +2438,7 @@ static int __devinit smsc911x_drv_probe(struct platform_device *pdev) smsc911x_reg_write(pdata, INT_CFG, intcfg); /* Ensure interrupts are globally disabled before connecting ISR */ - smsc911x_reg_write(pdata, INT_EN, 0); - smsc911x_reg_write(pdata, INT_STS, 0xFFFFFFFF); + smsc911x_disable_irq_chip(dev); retval = request_irq(dev->irq, smsc911x_irqhandler, irq_flags | IRQF_SHARED, dev->name, dev); @@ -2485,7 +2488,7 @@ static int __devinit smsc911x_drv_probe(struct platform_device *pdev) eth_hw_addr_random(dev); smsc911x_set_hw_mac_address(pdata, dev->dev_addr); SMSC_TRACE(pdata, probe, - "MAC Address is set to random_ether_addr"); + "MAC Address is set to eth_random_addr"); } } diff --git a/drivers/net/ethernet/smsc/smsc9420.c b/drivers/net/ethernet/smsc/smsc9420.c index fd33b21f6c96..1fcd914ec39b 100644 --- a/drivers/net/ethernet/smsc/smsc9420.c +++ b/drivers/net/ethernet/smsc/smsc9420.c @@ -1640,8 +1640,7 @@ smsc9420_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out_free_io_4; /* descriptors are aligned due to the nature of pci_alloc_consistent */ - pd->tx_ring = (struct smsc9420_dma_desc *) - (pd->rx_ring + RX_RING_SIZE); + pd->tx_ring = (pd->rx_ring + RX_RING_SIZE); pd->tx_dma_addr = pd->rx_dma_addr + sizeof(struct smsc9420_dma_desc) * RX_RING_SIZE; diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h index bcd54d6e94fd..e2d083228f3a 100644 --- a/drivers/net/ethernet/stmicro/stmmac/common.h +++ b/drivers/net/ethernet/stmicro/stmmac/common.h @@ -95,6 +95,16 @@ struct stmmac_extra_stats { unsigned long poll_n; unsigned long sched_timer_n; unsigned long normal_irq_n; + unsigned long mmc_tx_irq_n; + unsigned long mmc_rx_irq_n; + unsigned long mmc_rx_csum_offload_irq_n; + /* EEE */ + unsigned long irq_receive_pmt_irq_n; + unsigned long irq_tx_path_in_lpi_mode_n; + unsigned long irq_tx_path_exit_lpi_mode_n; + unsigned long irq_rx_path_in_lpi_mode_n; + unsigned long irq_rx_path_exit_lpi_mode_n; + unsigned long phy_eee_wakeup_error_n; }; /* CSR Frequency Access Defines*/ @@ -162,6 +172,17 @@ enum tx_dma_irq_status { handle_tx_rx = 3, }; +enum core_specific_irq_mask { + core_mmc_tx_irq = 1, + core_mmc_rx_irq = 2, + core_mmc_rx_csum_offload_irq = 4, + core_irq_receive_pmt_irq = 8, + core_irq_tx_path_in_lpi_mode = 16, + core_irq_tx_path_exit_lpi_mode = 32, + core_irq_rx_path_in_lpi_mode = 64, + core_irq_rx_path_exit_lpi_mode 
= 128, +}; + /* DMA HW capabilities */ struct dma_features { unsigned int mbps_10_100; @@ -208,6 +229,10 @@ struct dma_features { #define MAC_ENABLE_TX 0x00000008 /* Transmitter Enable */ #define MAC_RNABLE_RX 0x00000004 /* Receiver Enable */ +/* Default LPI timers */ +#define STMMAC_DEFAULT_LIT_LS_TIMER 0x3E8 +#define STMMAC_DEFAULT_TWT_LS_TIMER 0x0 + struct stmmac_desc_ops { /* DMA RX descriptor ring initialization */ void (*init_rx_desc) (struct dma_desc *p, unsigned int ring_size, @@ -278,7 +303,7 @@ struct stmmac_ops { /* Dump MAC registers */ void (*dump_regs) (void __iomem *ioaddr); /* Handle extra events on specific interrupts hw dependent */ - void (*host_irq_status) (void __iomem *ioaddr); + int (*host_irq_status) (void __iomem *ioaddr); /* Multicast filter setting */ void (*set_filter) (struct net_device *dev, int id); /* Flow control setting */ @@ -291,6 +316,10 @@ struct stmmac_ops { unsigned int reg_n); void (*get_umac_addr) (void __iomem *ioaddr, unsigned char *addr, unsigned int reg_n); + void (*set_eee_mode) (void __iomem *ioaddr); + void (*reset_eee_mode) (void __iomem *ioaddr); + void (*set_eee_timer) (void __iomem *ioaddr, int ls, int tw); + void (*set_eee_pls) (void __iomem *ioaddr, int link); }; struct mac_link { diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h index 23478bf4ed7a..f90fcb5f9573 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h @@ -36,6 +36,7 @@ #define GMAC_INT_STATUS 0x00000038 /* interrupt status register */ enum dwmac1000_irq_status { + lpiis_irq = 0x400, time_stamp_irq = 0x0200, mmc_rx_csum_offload_irq = 0x0080, mmc_tx_irq = 0x0040, @@ -60,6 +61,25 @@ enum power_event { power_down = 0x00000001, }; +/* Energy Efficient Ethernet (EEE) + * + * LPI status, timer and control register offset + */ +#define LPI_CTRL_STATUS 0x0030 +#define LPI_TIMER_CTRL 0x0034 + +/* LPI control and status defines */ +#define LPI_CTRL_STATUS_LPITXA 0x00080000 /* Enable LPI TX Automate */ +#define LPI_CTRL_STATUS_PLSEN 0x00040000 /* Enable PHY Link Status */ +#define LPI_CTRL_STATUS_PLS 0x00020000 /* PHY Link Status */ +#define LPI_CTRL_STATUS_LPIEN 0x00010000 /* LPI Enable */ +#define LPI_CTRL_STATUS_RLPIST 0x00000200 /* Receive LPI state */ +#define LPI_CTRL_STATUS_TLPIST 0x00000100 /* Transmit LPI state */ +#define LPI_CTRL_STATUS_RLPIEX 0x00000008 /* Receive LPI Exit */ +#define LPI_CTRL_STATUS_RLPIEN 0x00000004 /* Receive LPI Entry */ +#define LPI_CTRL_STATUS_TLPIEX 0x00000002 /* Transmit LPI Exit */ +#define LPI_CTRL_STATUS_TLPIEN 0x00000001 /* Transmit LPI Entry */ + /* GMAC HW ADDR regs */ #define GMAC_ADDR_HIGH(reg) (((reg > 15) ? 0x00000800 : 0x00000040) + \ (reg * 8)) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c index b5e4d02f15c9..bfe022605498 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c @@ -194,26 +194,107 @@ static void dwmac1000_pmt(void __iomem *ioaddr, unsigned long mode) } -static void dwmac1000_irq_status(void __iomem *ioaddr) +static int dwmac1000_irq_status(void __iomem *ioaddr) { u32 intr_status = readl(ioaddr + GMAC_INT_STATUS); + int status = 0; /* Not used events (e.g. MMC interrupts) are not handled. 
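host_irq_status() changing from void to int lets the core report which events it saw; the accounting on the caller side is added in stmmac_main.c and is not part of this excerpt, but it would look roughly like the sketch below, reusing the enum and xstats fields introduced above:

#include "stmmac.h"

/* Hypothetical condensation of the consumer: translate the bitmask now
 * returned by hw->mac->host_irq_status() into the new xstats counters.
 */
static void example_handle_core_irq(struct stmmac_priv *priv)
{
	int status = priv->hw->mac->host_irq_status(priv->ioaddr);

	if (status & core_mmc_tx_irq)
		priv->xstats.mmc_tx_irq_n++;
	if (status & core_mmc_rx_irq)
		priv->xstats.mmc_rx_irq_n++;
	if (status & core_irq_receive_pmt_irq)
		priv->xstats.irq_receive_pmt_irq_n++;
	if (status & core_irq_tx_path_in_lpi_mode)
		priv->xstats.irq_tx_path_in_lpi_mode_n++;
	if (status & core_irq_tx_path_exit_lpi_mode)
		priv->xstats.irq_tx_path_exit_lpi_mode_n++;
}
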
*/ - if ((intr_status & mmc_tx_irq)) - CHIP_DBG(KERN_DEBUG "GMAC: MMC tx interrupt: 0x%08x\n", + if ((intr_status & mmc_tx_irq)) { + CHIP_DBG(KERN_INFO "GMAC: MMC tx interrupt: 0x%08x\n", readl(ioaddr + GMAC_MMC_TX_INTR)); - if (unlikely(intr_status & mmc_rx_irq)) - CHIP_DBG(KERN_DEBUG "GMAC: MMC rx interrupt: 0x%08x\n", + status |= core_mmc_tx_irq; + } + if (unlikely(intr_status & mmc_rx_irq)) { + CHIP_DBG(KERN_INFO "GMAC: MMC rx interrupt: 0x%08x\n", readl(ioaddr + GMAC_MMC_RX_INTR)); - if (unlikely(intr_status & mmc_rx_csum_offload_irq)) - CHIP_DBG(KERN_DEBUG "GMAC: MMC rx csum offload: 0x%08x\n", + status |= core_mmc_rx_irq; + } + if (unlikely(intr_status & mmc_rx_csum_offload_irq)) { + CHIP_DBG(KERN_INFO "GMAC: MMC rx csum offload: 0x%08x\n", readl(ioaddr + GMAC_MMC_RX_CSUM_OFFLOAD)); + status |= core_mmc_rx_csum_offload_irq; + } if (unlikely(intr_status & pmt_irq)) { - CHIP_DBG(KERN_DEBUG "GMAC: received Magic frame\n"); + CHIP_DBG(KERN_INFO "GMAC: received Magic frame\n"); /* clear the PMT bits 5 and 6 by reading the PMT * status register. */ readl(ioaddr + GMAC_PMT); + status |= core_irq_receive_pmt_irq; } + /* MAC trx/rx EEE LPI entry/exit interrupts */ + if (intr_status & lpiis_irq) { + /* Clean LPI interrupt by reading the Reg 12 */ + u32 lpi_status = readl(ioaddr + LPI_CTRL_STATUS); + + if (lpi_status & LPI_CTRL_STATUS_TLPIEN) { + CHIP_DBG(KERN_INFO "GMAC TX entered in LPI\n"); + status |= core_irq_tx_path_in_lpi_mode; + } + if (lpi_status & LPI_CTRL_STATUS_TLPIEX) { + CHIP_DBG(KERN_INFO "GMAC TX exit from LPI\n"); + status |= core_irq_tx_path_exit_lpi_mode; + } + if (lpi_status & LPI_CTRL_STATUS_RLPIEN) { + CHIP_DBG(KERN_INFO "GMAC RX entered in LPI\n"); + status |= core_irq_rx_path_in_lpi_mode; + } + if (lpi_status & LPI_CTRL_STATUS_RLPIEX) { + CHIP_DBG(KERN_INFO "GMAC RX exit from LPI\n"); + status |= core_irq_rx_path_exit_lpi_mode; + } + } + + return status; +} + +static void dwmac1000_set_eee_mode(void __iomem *ioaddr) +{ + u32 value; + + /* Enable the link status receive on RGMII, SGMII ore SMII + * receive path and instruct the transmit to enter in LPI + * state. */ + value = readl(ioaddr + LPI_CTRL_STATUS); + value |= LPI_CTRL_STATUS_LPIEN | LPI_CTRL_STATUS_LPITXA; + writel(value, ioaddr + LPI_CTRL_STATUS); +} + +static void dwmac1000_reset_eee_mode(void __iomem *ioaddr) +{ + u32 value; + + value = readl(ioaddr + LPI_CTRL_STATUS); + value &= ~(LPI_CTRL_STATUS_LPIEN | LPI_CTRL_STATUS_LPITXA); + writel(value, ioaddr + LPI_CTRL_STATUS); +} + +static void dwmac1000_set_eee_pls(void __iomem *ioaddr, int link) +{ + u32 value; + + value = readl(ioaddr + LPI_CTRL_STATUS); + + if (link) + value |= LPI_CTRL_STATUS_PLS; + else + value &= ~LPI_CTRL_STATUS_PLS; + + writel(value, ioaddr + LPI_CTRL_STATUS); +} + +static void dwmac1000_set_eee_timer(void __iomem *ioaddr, int ls, int tw) +{ + int value = ((tw & 0xffff)) | ((ls & 0x7ff) << 16); + + /* Program the timers in the LPI timer control register: + * LS: minimum time (ms) for which the link + * status from PHY should be ok before transmitting + * the LPI pattern. + * TW: minimum time (us) for which the core waits + * after it has stopped transmitting the LPI pattern. 
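Assuming the LS default added in common.h (0x3E8, i.e. 1000 ms) and a wake time left at the TWT default of 0, the value that dwmac1000_set_eee_timer() above computes for LPI_TIMER_CTRL works out as in this throwaway sketch:

#include <stdio.h>

int main(void)
{
	int ls = 0x3E8;	/* STMMAC_DEFAULT_LIT_LS_TIMER: link-status wait, ms */
	int tw = 0x0;	/* STMMAC_DEFAULT_TWT_LS_TIMER: wake time, us */
	int value = (tw & 0xffff) | ((ls & 0x7ff) << 16);

	printf("LPI_TIMER_CTRL = 0x%08x\n", value);	/* 0x03e80000 */
	return 0;
}
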
+ */ + writel(value, ioaddr + LPI_TIMER_CTRL); } static const struct stmmac_ops dwmac1000_ops = { @@ -226,6 +307,10 @@ static const struct stmmac_ops dwmac1000_ops = { .pmt = dwmac1000_pmt, .set_umac_addr = dwmac1000_set_umac_addr, .get_umac_addr = dwmac1000_get_umac_addr, + .set_eee_mode = dwmac1000_set_eee_mode, + .reset_eee_mode = dwmac1000_reset_eee_mode, + .set_eee_timer = dwmac1000_set_eee_timer, + .set_eee_pls = dwmac1000_set_eee_pls, }; struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c index 19e0f4eed2bc..f83210e7c221 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c @@ -72,9 +72,9 @@ static int dwmac100_rx_ipc_enable(void __iomem *ioaddr) return 0; } -static void dwmac100_irq_status(void __iomem *ioaddr) +static int dwmac100_irq_status(void __iomem *ioaddr) { - return; + return 0; } static void dwmac100_set_umac_addr(void __iomem *ioaddr, unsigned char *addr, diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h index 6e0360f9cfde..e678ce39d014 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h @@ -70,6 +70,7 @@ #define DMA_INTR_DEFAULT_MASK (DMA_INTR_NORMAL | DMA_INTR_ABNORMAL) /* DMA Status register defines */ +#define DMA_STATUS_GLPII 0x40000000 /* GMAC LPI interrupt */ #define DMA_STATUS_GPI 0x10000000 /* PMT interrupt */ #define DMA_STATUS_GMI 0x08000000 /* MMC interrupt */ #define DMA_STATUS_GLI 0x04000000 /* GMAC Line interface int */ diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index dc20c56efc9d..ab4c376cb276 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -87,6 +87,12 @@ struct stmmac_priv { #endif int clk_csr; int synopsys_id; + struct timer_list eee_ctrl_timer; + bool tx_path_in_lpi_mode; + int lpi_irq; + int eee_enabled; + int eee_active; + int tx_lpi_timer; }; extern int phyaddr; @@ -104,6 +110,8 @@ int stmmac_dvr_remove(struct net_device *ndev); struct stmmac_priv *stmmac_dvr_probe(struct device *device, struct plat_stmmacenet_data *plat_dat, void __iomem *addr); +void stmmac_disable_eee_mode(struct stmmac_priv *priv); +bool stmmac_eee_init(struct stmmac_priv *priv); #ifdef CONFIG_HAVE_CLK static inline int stmmac_clk_enable(struct stmmac_priv *priv) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c index ce431846fc6f..76fd61aa005f 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c @@ -93,6 +93,16 @@ static const struct stmmac_stats stmmac_gstrings_stats[] = { STMMAC_STAT(poll_n), STMMAC_STAT(sched_timer_n), STMMAC_STAT(normal_irq_n), + STMMAC_STAT(normal_irq_n), + STMMAC_STAT(mmc_tx_irq_n), + STMMAC_STAT(mmc_rx_irq_n), + STMMAC_STAT(mmc_rx_csum_offload_irq_n), + STMMAC_STAT(irq_receive_pmt_irq_n), + STMMAC_STAT(irq_tx_path_in_lpi_mode_n), + STMMAC_STAT(irq_tx_path_exit_lpi_mode_n), + STMMAC_STAT(irq_rx_path_in_lpi_mode_n), + STMMAC_STAT(irq_rx_path_exit_lpi_mode_n), + STMMAC_STAT(phy_eee_wakeup_error_n), }; #define STMMAC_STATS_LEN ARRAY_SIZE(stmmac_gstrings_stats) @@ -366,6 +376,11 @@ static void stmmac_get_ethtool_stats(struct net_device *dev, (*(u32 *)p); } } + if 
(priv->eee_enabled) { + int val = phy_get_eee_err(priv->phydev); + if (val) + priv->xstats.phy_eee_wakeup_error_n = val; + } } for (i = 0; i < STMMAC_STATS_LEN; i++) { char *p = (char *)priv + stmmac_gstrings_stats[i].stat_offset; @@ -464,6 +479,46 @@ static int stmmac_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol) return 0; } +static int stmmac_ethtool_op_get_eee(struct net_device *dev, + struct ethtool_eee *edata) +{ + struct stmmac_priv *priv = netdev_priv(dev); + + if (!priv->dma_cap.eee) + return -EOPNOTSUPP; + + edata->eee_enabled = priv->eee_enabled; + edata->eee_active = priv->eee_active; + edata->tx_lpi_timer = priv->tx_lpi_timer; + + return phy_ethtool_get_eee(priv->phydev, edata); +} + +static int stmmac_ethtool_op_set_eee(struct net_device *dev, + struct ethtool_eee *edata) +{ + struct stmmac_priv *priv = netdev_priv(dev); + + priv->eee_enabled = edata->eee_enabled; + + if (!priv->eee_enabled) + stmmac_disable_eee_mode(priv); + else { + /* We are asking for enabling the EEE but it is safe + * to verify all by invoking the eee_init function. + * In case of failure it will return an error. + */ + priv->eee_enabled = stmmac_eee_init(priv); + if (!priv->eee_enabled) + return -EOPNOTSUPP; + + /* Do not change tx_lpi_timer in case of failure */ + priv->tx_lpi_timer = edata->tx_lpi_timer; + } + + return phy_ethtool_set_eee(priv->phydev, edata); +} + static const struct ethtool_ops stmmac_ethtool_ops = { .begin = stmmac_check_if_running, .get_drvinfo = stmmac_ethtool_getdrvinfo, @@ -480,6 +535,8 @@ static const struct ethtool_ops stmmac_ethtool_ops = { .get_strings = stmmac_get_strings, .get_wol = stmmac_get_wol, .set_wol = stmmac_set_wol, + .get_eee = stmmac_ethtool_op_get_eee, + .set_eee = stmmac_ethtool_op_set_eee, .get_sset_count = stmmac_get_sset_count, .get_ts_info = ethtool_op_get_ts_info, }; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index ea3003edde18..f6b04c1a3672 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -133,6 +133,12 @@ static const u32 default_msg_level = (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN | NETIF_MSG_TIMER); +#define STMMAC_DEFAULT_LPI_TIMER 1000 +static int eee_timer = STMMAC_DEFAULT_LPI_TIMER; +module_param(eee_timer, int, S_IRUGO | S_IWUSR); +MODULE_PARM_DESC(eee_timer, "LPI tx expiration time in msec"); +#define STMMAC_LPI_TIMER(x) (jiffies + msecs_to_jiffies(x)) + static irqreturn_t stmmac_interrupt(int irq, void *dev_id); #ifdef CONFIG_STMMAC_DEBUG_FS @@ -161,6 +167,8 @@ static void stmmac_verify_args(void) flow_ctrl = FLOW_OFF; if (unlikely((pause < 0) || (pause > 0xffff))) pause = PAUSE_TIME; + if (eee_timer < 0) + eee_timer = STMMAC_DEFAULT_LPI_TIMER; } static void stmmac_clk_csr_set(struct stmmac_priv *priv) @@ -229,6 +237,85 @@ static inline void stmmac_hw_fix_mac_speed(struct stmmac_priv *priv) phydev->speed); } +static void stmmac_enable_eee_mode(struct stmmac_priv *priv) +{ + /* Check and enter in LPI mode */ + if ((priv->dirty_tx == priv->cur_tx) && + (priv->tx_path_in_lpi_mode == false)) + priv->hw->mac->set_eee_mode(priv->ioaddr); +} + +void stmmac_disable_eee_mode(struct stmmac_priv *priv) +{ + /* Exit and disable EEE in case of we are are in LPI state. 
*/ + priv->hw->mac->reset_eee_mode(priv->ioaddr); + del_timer_sync(&priv->eee_ctrl_timer); + priv->tx_path_in_lpi_mode = false; +} + +/** + * stmmac_eee_ctrl_timer + * @arg : data hook + * Description: + * If there is no data transfer and if we are not in LPI state, + * then MAC Transmitter can be moved to LPI state. + */ +static void stmmac_eee_ctrl_timer(unsigned long arg) +{ + struct stmmac_priv *priv = (struct stmmac_priv *)arg; + + stmmac_enable_eee_mode(priv); + mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_TIMER(eee_timer)); +} + +/** + * stmmac_eee_init + * @priv: private device pointer + * Description: + * If the EEE support has been enabled while configuring the driver, + * if the GMAC actually supports the EEE (from the HW cap reg) and the + * phy can also manage EEE, so enable the LPI state and start the timer + * to verify if the tx path can enter in LPI state. + */ +bool stmmac_eee_init(struct stmmac_priv *priv) +{ + bool ret = false; + + /* MAC core supports the EEE feature. */ + if (priv->dma_cap.eee) { + /* Check if the PHY supports EEE */ + if (phy_init_eee(priv->phydev, 1)) + goto out; + + priv->eee_active = 1; + init_timer(&priv->eee_ctrl_timer); + priv->eee_ctrl_timer.function = stmmac_eee_ctrl_timer; + priv->eee_ctrl_timer.data = (unsigned long)priv; + priv->eee_ctrl_timer.expires = STMMAC_LPI_TIMER(eee_timer); + add_timer(&priv->eee_ctrl_timer); + + priv->hw->mac->set_eee_timer(priv->ioaddr, + STMMAC_DEFAULT_LIT_LS_TIMER, + priv->tx_lpi_timer); + + pr_info("stmmac: Energy-Efficient Ethernet initialized\n"); + + ret = true; + } +out: + return ret; +} + +static void stmmac_eee_adjust(struct stmmac_priv *priv) +{ + /* When the EEE has been already initialised we have to + * modify the PLS bit in the LPI ctrl & status reg according + * to the PHY link status. For this reason. 
+ */ + if (priv->eee_enabled) + priv->hw->mac->set_eee_pls(priv->ioaddr, priv->phydev->link); +} + /** * stmmac_adjust_link * @dev: net device structure @@ -249,6 +336,7 @@ static void stmmac_adjust_link(struct net_device *dev) phydev->addr, phydev->link); spin_lock_irqsave(&priv->lock, flags); + if (phydev->link) { u32 ctrl = readl(priv->ioaddr + MAC_CTRL_REG); @@ -315,6 +403,8 @@ static void stmmac_adjust_link(struct net_device *dev) if (new_state && netif_msg_link(priv)) phy_print_status(phydev); + stmmac_eee_adjust(priv); + spin_unlock_irqrestore(&priv->lock, flags); DBG(probe, DEBUG, "stmmac_adjust_link: exiting\n"); @@ -332,7 +422,7 @@ static int stmmac_init_phy(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); struct phy_device *phydev; - char phy_id[MII_BUS_ID_SIZE + 3]; + char phy_id_fmt[MII_BUS_ID_SIZE + 3]; char bus_id[MII_BUS_ID_SIZE]; int interface = priv->plat->interface; priv->oldlink = 0; @@ -346,11 +436,12 @@ static int stmmac_init_phy(struct net_device *dev) snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x", priv->plat->bus_id); - snprintf(phy_id, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id, + snprintf(phy_id_fmt, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id, priv->plat->phy_addr); - pr_debug("stmmac_init_phy: trying to attach to %s\n", phy_id); + pr_debug("stmmac_init_phy: trying to attach to %s\n", phy_id_fmt); - phydev = phy_connect(dev, phy_id, &stmmac_adjust_link, 0, interface); + phydev = phy_connect(dev, phy_id_fmt, &stmmac_adjust_link, 0, + interface); if (IS_ERR(phydev)) { pr_err("%s: Could not attach to PHY\n", dev->name); @@ -677,7 +768,7 @@ static void stmmac_tx(struct stmmac_priv *priv) priv->hw->desc->release_tx_desc(p); - entry = (++priv->dirty_tx) % txsize; + priv->dirty_tx++; } if (unlikely(netif_queue_stopped(priv->dev) && stmmac_tx_avail(priv) > STMMAC_TX_THRESH(priv))) { @@ -689,6 +780,11 @@ static void stmmac_tx(struct stmmac_priv *priv) } netif_tx_unlock(priv->dev); } + + if ((priv->eee_enabled) && (!priv->tx_path_in_lpi_mode)) { + stmmac_enable_eee_mode(priv); + mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_TIMER(eee_timer)); + } spin_unlock(&priv->tx_lock); } @@ -1027,6 +1123,17 @@ static int stmmac_open(struct net_device *dev) } } + /* Request the IRQ lines */ + if (priv->lpi_irq != -ENXIO) { + ret = request_irq(priv->lpi_irq, stmmac_interrupt, IRQF_SHARED, + dev->name, dev); + if (unlikely(ret < 0)) { + pr_err("%s: ERROR: allocating the LPI IRQ %d (%d)\n", + __func__, priv->lpi_irq, ret); + goto open_error_lpiirq; + } + } + /* Enable the MAC Rx/Tx */ stmmac_set_mac(priv->ioaddr, true); @@ -1062,12 +1169,19 @@ static int stmmac_open(struct net_device *dev) if (priv->phydev) phy_start(priv->phydev); + priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS_TIMER; + priv->eee_enabled = stmmac_eee_init(priv); + napi_enable(&priv->napi); skb_queue_head_init(&priv->rx_recycle); netif_start_queue(dev); return 0; +open_error_lpiirq: + if (priv->wol_irq != dev->irq) + free_irq(priv->wol_irq, dev); + open_error_wolirq: free_irq(dev->irq, dev); @@ -1093,6 +1207,9 @@ static int stmmac_release(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); + if (priv->eee_enabled) + del_timer_sync(&priv->eee_ctrl_timer); + /* Stop and disconnect the PHY */ if (priv->phydev) { phy_stop(priv->phydev); @@ -1115,6 +1232,8 @@ static int stmmac_release(struct net_device *dev) free_irq(dev->irq, dev); if (priv->wol_irq != dev->irq) free_irq(priv->wol_irq, dev); + if (priv->lpi_irq != -ENXIO) + free_irq(priv->lpi_irq, dev); /* Stop TX/RX DMA and clear the descriptors 
*/ priv->hw->dma->stop_tx(priv->ioaddr); @@ -1164,6 +1283,9 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) spin_lock(&priv->tx_lock); + if (priv->tx_path_in_lpi_mode) + stmmac_disable_eee_mode(priv); + entry = priv->cur_tx % txsize; #ifdef STMMAC_XMIT_DEBUG @@ -1311,7 +1433,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit) display_ring(priv->dma_rx, rxsize); } #endif - count = 0; while (!priv->hw->desc->get_rx_owner(p)) { int status; @@ -1544,10 +1665,37 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id) return IRQ_NONE; } - if (priv->plat->has_gmac) - /* To handle GMAC own interrupts */ - priv->hw->mac->host_irq_status((void __iomem *) dev->base_addr); + /* To handle GMAC own interrupts */ + if (priv->plat->has_gmac) { + int status = priv->hw->mac->host_irq_status((void __iomem *) + dev->base_addr); + if (unlikely(status)) { + if (status & core_mmc_tx_irq) + priv->xstats.mmc_tx_irq_n++; + if (status & core_mmc_rx_irq) + priv->xstats.mmc_rx_irq_n++; + if (status & core_mmc_rx_csum_offload_irq) + priv->xstats.mmc_rx_csum_offload_irq_n++; + if (status & core_irq_receive_pmt_irq) + priv->xstats.irq_receive_pmt_irq_n++; + + /* For LPI we need to save the tx status */ + if (status & core_irq_tx_path_in_lpi_mode) { + priv->xstats.irq_tx_path_in_lpi_mode_n++; + priv->tx_path_in_lpi_mode = true; + } + if (status & core_irq_tx_path_exit_lpi_mode) { + priv->xstats.irq_tx_path_exit_lpi_mode_n++; + priv->tx_path_in_lpi_mode = false; + } + if (status & core_irq_rx_path_in_lpi_mode) + priv->xstats.irq_rx_path_in_lpi_mode_n++; + if (status & core_irq_rx_path_exit_lpi_mode) + priv->xstats.irq_rx_path_exit_lpi_mode_n++; + } + } + /* To handle DMA interrupts */ stmmac_dma_interrupt(priv); return IRQ_HANDLED; @@ -2133,42 +2281,38 @@ static int __init stmmac_cmdline_opt(char *str) return -EINVAL; while ((opt = strsep(&str, ",")) != NULL) { if (!strncmp(opt, "debug:", 6)) { - if (strict_strtoul(opt + 6, 0, (unsigned long *)&debug)) + if (kstrtoint(opt + 6, 0, &debug)) goto err; } else if (!strncmp(opt, "phyaddr:", 8)) { - if (strict_strtoul(opt + 8, 0, - (unsigned long *)&phyaddr)) + if (kstrtoint(opt + 8, 0, &phyaddr)) goto err; } else if (!strncmp(opt, "dma_txsize:", 11)) { - if (strict_strtoul(opt + 11, 0, - (unsigned long *)&dma_txsize)) + if (kstrtoint(opt + 11, 0, &dma_txsize)) goto err; } else if (!strncmp(opt, "dma_rxsize:", 11)) { - if (strict_strtoul(opt + 11, 0, - (unsigned long *)&dma_rxsize)) + if (kstrtoint(opt + 11, 0, &dma_rxsize)) goto err; } else if (!strncmp(opt, "buf_sz:", 7)) { - if (strict_strtoul(opt + 7, 0, - (unsigned long *)&buf_sz)) + if (kstrtoint(opt + 7, 0, &buf_sz)) goto err; } else if (!strncmp(opt, "tc:", 3)) { - if (strict_strtoul(opt + 3, 0, (unsigned long *)&tc)) + if (kstrtoint(opt + 3, 0, &tc)) goto err; } else if (!strncmp(opt, "watchdog:", 9)) { - if (strict_strtoul(opt + 9, 0, - (unsigned long *)&watchdog)) + if (kstrtoint(opt + 9, 0, &watchdog)) goto err; } else if (!strncmp(opt, "flow_ctrl:", 10)) { - if (strict_strtoul(opt + 10, 0, - (unsigned long *)&flow_ctrl)) + if (kstrtoint(opt + 10, 0, &flow_ctrl)) goto err; } else if (!strncmp(opt, "pause:", 6)) { - if (strict_strtoul(opt + 6, 0, (unsigned long *)&pause)) + if (kstrtoint(opt + 6, 0, &pause)) + goto err; + } else if (!strncmp(opt, "eee_timer:", 6)) { + if (kstrtoint(opt + 10, 0, &eee_timer)) goto err; #ifdef CONFIG_STMMAC_TIMER } else if (!strncmp(opt, "tmrate:", 7)) { - if (strict_strtoul(opt + 7, 0, - (unsigned long *)&tmrate)) + if (kstrtoint(opt + 7, 
0, &tmrate)) goto err; #endif } diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c index cf826e6b6aa1..13afb8edfadc 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c @@ -125,7 +125,7 @@ err_out_req_reg_failed: } /** - * stmmac_dvr_remove + * stmmac_pci_remove * * @pdev: platform device pointer * Description: this function calls the main to free the net resources diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index 680d2b8dfe27..cd01ee7ecef1 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -49,7 +49,9 @@ static int __devinit stmmac_probe_config_dt(struct platform_device *pdev, * are provided. All other properties should be added * once needed on other platforms. */ - if (of_device_is_compatible(np, "st,spear600-gmac")) { + if (of_device_is_compatible(np, "st,spear600-gmac") || + of_device_is_compatible(np, "snps,dwmac-3.70a") || + of_device_is_compatible(np, "snps,dwmac")) { plat->has_gmac = 1; plat->pmt = 1; } @@ -156,6 +158,8 @@ static int stmmac_pltfr_probe(struct platform_device *pdev) if (priv->wol_irq == -ENXIO) priv->wol_irq = priv->dev->irq; + priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); + platform_set_drvdata(pdev, priv->dev); pr_debug("STMMAC platform driver registration completed"); @@ -190,7 +194,7 @@ static int stmmac_pltfr_remove(struct platform_device *pdev) platform_set_drvdata(pdev, NULL); - iounmap((void *)priv->ioaddr); + iounmap((void __force __iomem *)priv->ioaddr); res = platform_get_resource(pdev, IORESOURCE_MEM, 0); release_mem_region(res->start, resource_size(res)); @@ -250,7 +254,9 @@ static const struct dev_pm_ops stmmac_pltfr_pm_ops; #endif /* CONFIG_PM */ static const struct of_device_id stmmac_dt_ids[] = { - { .compatible = "st,spear600-gmac", }, + { .compatible = "st,spear600-gmac"}, + { .compatible = "snps,dwmac-3.70a"}, + { .compatible = "snps,dwmac"}, { /* sentinel */ } }; MODULE_DEVICE_TABLE(of, stmmac_dt_ids); diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c index 8c726b7004d3..c2a0fe393267 100644 --- a/drivers/net/ethernet/sun/niu.c +++ b/drivers/net/ethernet/sun/niu.c @@ -3335,6 +3335,10 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp, addr = np->ops->map_page(np->device, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + if (!addr) { + __free_page(page); + return -ENOMEM; + } niu_hash_page(rp, page, addr); if (rp->rbr_blocks_per_page > 1) @@ -3513,7 +3517,7 @@ static int niu_rbr_fill(struct niu *np, struct rx_ring_info *rp, gfp_t mask) err = 0; while (index < (rp->rbr_table_size - blocks_per_page)) { err = niu_rbr_add_page(np, rp, mask, index); - if (err) + if (unlikely(err)) break; index += blocks_per_page; diff --git a/drivers/net/ethernet/sun/sunbmac.c b/drivers/net/ethernet/sun/sunbmac.c index 2a83fc57edba..967fe8cb476e 100644 --- a/drivers/net/ethernet/sun/sunbmac.c +++ b/drivers/net/ethernet/sun/sunbmac.c @@ -233,7 +233,6 @@ static void bigmac_init_rings(struct bigmac *bp, int from_irq) continue; bp->rx_skbs[i] = skb; - skb->dev = dev; /* Because we reserve afterwards. 
*/ skb_put(skb, ETH_FRAME_LEN); @@ -838,7 +837,6 @@ static void bigmac_rx(struct bigmac *bp) RX_BUF_ALLOC_SIZE - 34, DMA_FROM_DEVICE); bp->rx_skbs[elem] = new_skb; - new_skb->dev = bp->dev; skb_put(new_skb, ETH_FRAME_LEN); skb_reserve(new_skb, 34); this->rx_addr = diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c index 3cf4ab755838..9ae12d0c9632 100644 --- a/drivers/net/ethernet/sun/sungem.c +++ b/drivers/net/ethernet/sun/sungem.c @@ -752,7 +752,6 @@ static __inline__ struct sk_buff *gem_alloc_skb(struct net_device *dev, int size if (likely(skb)) { unsigned long offset = ALIGNED_RX_SKB_ADDR(skb->data); skb_reserve(skb, offset); - skb->dev = dev; } return skb; } diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c index dfc00c4683e5..73f341b8befb 100644 --- a/drivers/net/ethernet/sun/sunhme.c +++ b/drivers/net/ethernet/sun/sunhme.c @@ -1249,7 +1249,6 @@ static void happy_meal_clean_rings(struct happy_meal *hp) static void happy_meal_init_rings(struct happy_meal *hp) { struct hmeal_init_block *hb = hp->happy_block; - struct net_device *dev = hp->dev; int i; HMD(("happy_meal_init_rings: counters to zero, ")); @@ -1270,7 +1269,6 @@ static void happy_meal_init_rings(struct happy_meal *hp) continue; } hp->rx_skbs[i] = skb; - skb->dev = dev; /* Because we reserve afterwards. */ skb_put(skb, (ETH_FRAME_LEN + RX_OFFSET + 4)); @@ -2031,7 +2029,6 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev) } dma_unmap_single(hp->dma_dev, dma_addr, RX_BUF_ALLOC_SIZE, DMA_FROM_DEVICE); hp->rx_skbs[elem] = new_skb; - new_skb->dev = dev; skb_put(new_skb, (ETH_FRAME_LEN + RX_OFFSET + 4)); hme_write_rxd(hp, this, (RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)), diff --git a/drivers/net/ethernet/sun/sunqe.c b/drivers/net/ethernet/sun/sunqe.c index 7d4a040d84a2..aeded7ff1c8f 100644 --- a/drivers/net/ethernet/sun/sunqe.c +++ b/drivers/net/ethernet/sun/sunqe.c @@ -441,7 +441,7 @@ static void qe_rx(struct sunqe *qep) } else { skb_reserve(skb, 2); skb_put(skb, len); - skb_copy_to_linear_data(skb, (unsigned char *) this_qbuf, + skb_copy_to_linear_data(skb, this_qbuf, len); skb->protocol = eth_type_trans(skb, qep->dev); netif_rx(skb); diff --git a/drivers/net/ethernet/tehuti/tehuti.c b/drivers/net/ethernet/tehuti/tehuti.c index 447a6932cab3..6ce9edd95c04 100644 --- a/drivers/net/ethernet/tehuti/tehuti.c +++ b/drivers/net/ethernet/tehuti/tehuti.c @@ -137,14 +137,15 @@ static void print_eth_id(struct net_device *ndev) #define bdx_disable_interrupts(priv) \ do { WRITE_REG(priv, regIMR, 0); } while (0) -/* bdx_fifo_init - * create TX/RX descriptor fifo for host-NIC communication. +/** + * bdx_fifo_init - create TX/RX descriptor fifo for host-NIC communication. 
+ * @priv: NIC private structure + * @f: fifo to initialize + * @fsz_type: fifo size type: 0-4KB, 1-8KB, 2-16KB, 3-32KB + * @reg_XXX: offsets of registers relative to base address + * * 1K extra space is allocated at the end of the fifo to simplify * processing of descriptors that wraps around fifo's end - * @priv - NIC private structure - * @f - fifo to initialize - * @fsz_type - fifo size type: 0-4KB, 1-8KB, 2-16KB, 3-32KB - * @reg_XXX - offsets of registers relative to base address * * Returns 0 on success, negative value on failure * @@ -177,9 +178,10 @@ bdx_fifo_init(struct bdx_priv *priv, struct fifo *f, int fsz_type, RET(0); } -/* bdx_fifo_free - free all resources used by fifo - * @priv - NIC private structure - * @f - fifo to release +/** + * bdx_fifo_free - free all resources used by fifo + * @priv: NIC private structure + * @f: fifo to release */ static void bdx_fifo_free(struct bdx_priv *priv, struct fifo *f) { @@ -192,9 +194,9 @@ static void bdx_fifo_free(struct bdx_priv *priv, struct fifo *f) RET(); } -/* +/** * bdx_link_changed - notifies OS about hw link state. - * @bdx_priv - hw adapter structure + * @priv: hw adapter structure */ static void bdx_link_changed(struct bdx_priv *priv) { @@ -233,10 +235,10 @@ static void bdx_isr_extra(struct bdx_priv *priv, u32 isr) } -/* bdx_isr - Interrupt Service Routine for Bordeaux NIC - * @irq - interrupt number - * @ndev - network device - * @regs - CPU registers +/** + * bdx_isr_napi - Interrupt Service Routine for Bordeaux NIC + * @irq: interrupt number + * @dev: network device * * Return IRQ_NONE if it was not our interrupt, IRQ_HANDLED - otherwise * @@ -307,8 +309,10 @@ static int bdx_poll(struct napi_struct *napi, int budget) return work_done; } -/* bdx_fw_load - loads firmware to NIC - * @priv - NIC private structure +/** + * bdx_fw_load - loads firmware to NIC + * @priv: NIC private structure + * * Firmware is loaded via TXD fifo, so it must be initialized first. * Firware must be loaded once per NIC not per PCI device provided by NIC (NIC * can have few of them). 
So all drivers use semaphore register to choose one @@ -380,8 +384,9 @@ static void bdx_restore_mac(struct net_device *ndev, struct bdx_priv *priv) RET(); } -/* bdx_hw_start - inits registers and starts HW's Rx and Tx engines - * @priv - NIC private structure +/** + * bdx_hw_start - inits registers and starts HW's Rx and Tx engines + * @priv: NIC private structure */ static int bdx_hw_start(struct bdx_priv *priv) { @@ -691,12 +696,13 @@ static int bdx_ioctl(struct net_device *ndev, struct ifreq *ifr, int cmd) RET(-EOPNOTSUPP); } -/* +/** * __bdx_vlan_rx_vid - private helper for adding/killing VLAN vid - * by passing VLAN filter table to hardware - * @ndev network device - * @vid VLAN vid - * @op add or kill operation + * @ndev: network device + * @vid: VLAN vid + * @op: add or kill operation + * + * Passes VLAN filter table to hardware */ static void __bdx_vlan_rx_vid(struct net_device *ndev, uint16_t vid, int enable) { @@ -722,10 +728,10 @@ static void __bdx_vlan_rx_vid(struct net_device *ndev, uint16_t vid, int enable) RET(); } -/* +/** * bdx_vlan_rx_add_vid - kernel hook for adding VLAN vid to hw filtering table - * @ndev network device - * @vid VLAN vid to add + * @ndev: network device + * @vid: VLAN vid to add */ static int bdx_vlan_rx_add_vid(struct net_device *ndev, uint16_t vid) { @@ -733,10 +739,10 @@ static int bdx_vlan_rx_add_vid(struct net_device *ndev, uint16_t vid) return 0; } -/* +/** * bdx_vlan_rx_kill_vid - kernel hook for killing VLAN vid in hw filtering table - * @ndev network device - * @vid VLAN vid to kill + * @ndev: network device + * @vid: VLAN vid to kill */ static int bdx_vlan_rx_kill_vid(struct net_device *ndev, unsigned short vid) { @@ -974,8 +980,9 @@ static inline void bdx_rxdb_free_elem(struct rxdb *db, int n) * Rx Init * *************************************************************************/ -/* bdx_rx_init - initialize RX all related HW and SW resources - * @priv - NIC private structure +/** + * bdx_rx_init - initialize RX all related HW and SW resources + * @priv: NIC private structure * * Returns 0 on success, negative value on failure * @@ -1016,9 +1023,10 @@ err_mem: return -ENOMEM; } -/* bdx_rx_free_skbs - frees and unmaps all skbs allocated for the fifo - * @priv - NIC private structure - * @f - RXF fifo +/** + * bdx_rx_free_skbs - frees and unmaps all skbs allocated for the fifo + * @priv: NIC private structure + * @f: RXF fifo */ static void bdx_rx_free_skbs(struct bdx_priv *priv, struct rxf_fifo *f) { @@ -1045,8 +1053,10 @@ static void bdx_rx_free_skbs(struct bdx_priv *priv, struct rxf_fifo *f) } } -/* bdx_rx_free - release all Rx resources - * @priv - NIC private structure +/** + * bdx_rx_free - release all Rx resources + * @priv: NIC private structure + * * It assumes that Rx is desabled in HW */ static void bdx_rx_free(struct bdx_priv *priv) @@ -1067,9 +1077,11 @@ static void bdx_rx_free(struct bdx_priv *priv) * Rx Engine * *************************************************************************/ -/* bdx_rx_alloc_skbs - fill rxf fifo with new skbs - * @priv - nic's private structure - * @f - RXF fifo that needs skbs +/** + * bdx_rx_alloc_skbs - fill rxf fifo with new skbs + * @priv: nic's private structure + * @f: RXF fifo that needs skbs + * * It allocates skbs, build rxf descs and push it (rxf descr) into rxf fifo. * skb's virtual and physical addresses are stored in skb db. 
* To calculate free space, func uses cached values of RPTR and WPTR @@ -1179,13 +1191,15 @@ static void bdx_recycle_skb(struct bdx_priv *priv, struct rxd_desc *rxdd) RET(); } -/* bdx_rx_receive - receives full packets from RXD fifo and pass them to OS +/** + * bdx_rx_receive - receives full packets from RXD fifo and pass them to OS * NOTE: a special treatment is given to non-continuous descriptors * that start near the end, wraps around and continue at the beginning. a second * part is copied right after the first, and then descriptor is interpreted as * normal. fifo has an extra space to allow such operations - * @priv - nic's private structure - * @f - RXF fifo that needs skbs + * @priv: nic's private structure + * @f: RXF fifo that needs skbs + * @budget: maximum number of packets to receive */ /* TBD: replace memcpy func call by explicite inline asm */ @@ -1375,9 +1389,10 @@ static inline int bdx_tx_db_size(struct txdb *db) return db->size - taken; } -/* __bdx_tx_ptr_next - helper function, increment read/write pointer + wrap - * @d - tx data base - * @ptr - read or write pointer +/** + * __bdx_tx_db_ptr_next - helper function, increment read/write pointer + wrap + * @db: tx data base + * @pptr: read or write pointer */ static inline void __bdx_tx_db_ptr_next(struct txdb *db, struct tx_map **pptr) { @@ -1394,8 +1409,9 @@ static inline void __bdx_tx_db_ptr_next(struct txdb *db, struct tx_map **pptr) *pptr = db->start; } -/* bdx_tx_db_inc_rptr - increment read pointer - * @d - tx data base +/** + * bdx_tx_db_inc_rptr - increment read pointer + * @db: tx data base */ static inline void bdx_tx_db_inc_rptr(struct txdb *db) { @@ -1403,8 +1419,9 @@ static inline void bdx_tx_db_inc_rptr(struct txdb *db) __bdx_tx_db_ptr_next(db, &db->rptr); } -/* bdx_tx_db_inc_rptr - increment write pointer - * @d - tx data base +/** + * bdx_tx_db_inc_wptr - increment write pointer + * @db: tx data base */ static inline void bdx_tx_db_inc_wptr(struct txdb *db) { @@ -1413,9 +1430,11 @@ static inline void bdx_tx_db_inc_wptr(struct txdb *db) a result of write */ } -/* bdx_tx_db_init - creates and initializes tx db - * @d - tx data base - * @sz_type - size of tx fifo +/** + * bdx_tx_db_init - creates and initializes tx db + * @d: tx data base + * @sz_type: size of tx fifo + * * Returns 0 on success, error code otherwise */ static int bdx_tx_db_init(struct txdb *d, int sz_type) @@ -1441,8 +1460,9 @@ static int bdx_tx_db_init(struct txdb *d, int sz_type) return 0; } -/* bdx_tx_db_close - closes tx db and frees all memory - * @d - tx data base +/** + * bdx_tx_db_close - closes tx db and frees all memory + * @d: tx data base */ static void bdx_tx_db_close(struct txdb *d) { @@ -1463,9 +1483,11 @@ static struct { u16 qwords; /* qword = 64 bit */ } txd_sizes[MAX_SKB_FRAGS + 1]; -/* txdb_map_skb - creates and stores dma mappings for skb's data blocks - * @priv - NIC private structure - * @skb - socket buffer to map +/** + * bdx_tx_map_skb - creates and stores dma mappings for skb's data blocks + * @priv: NIC private structure + * @skb: socket buffer to map + * @txdd: TX descriptor to use * * It makes dma mappings for skb's data blocks and writes them to PBL of * new tx descriptor. 
It also stores them in the tx db, so they could be @@ -1562,9 +1584,10 @@ err_mem: return -ENOMEM; } -/* +/** * bdx_tx_space - calculates available space in TX fifo - * @priv - NIC private structure + * @priv: NIC private structure + * * Returns available space in TX fifo in bytes */ static inline int bdx_tx_space(struct bdx_priv *priv) @@ -1579,9 +1602,10 @@ static inline int bdx_tx_space(struct bdx_priv *priv) return fsize; } -/* bdx_tx_transmit - send packet to NIC - * @skb - packet to send - * ndev - network device assigned to NIC +/** + * bdx_tx_transmit - send packet to NIC + * @skb: packet to send + * @ndev: network device assigned to NIC * Return codes: * o NETDEV_TX_OK everything ok. * o NETDEV_TX_BUSY Cannot transmit packet, try later @@ -1699,8 +1723,10 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb, return NETDEV_TX_OK; } -/* bdx_tx_cleanup - clean TXF fifo, run in the context of IRQ. - * @priv - bdx adapter +/** + * bdx_tx_cleanup - clean TXF fifo, run in the context of IRQ. + * @priv: bdx adapter + * * It scans TXF fifo for descriptors, frees DMA mappings and reports to OS * that those packets were sent */ @@ -1761,7 +1787,8 @@ static void bdx_tx_cleanup(struct bdx_priv *priv) spin_unlock(&priv->tx_lock); } -/* bdx_tx_free_skbs - frees all skbs from TXD fifo. +/** + * bdx_tx_free_skbs - frees all skbs from TXD fifo. * It gets called when OS stops this dev, eg upon "ifconfig down" or rmmod */ static void bdx_tx_free_skbs(struct bdx_priv *priv) @@ -1790,10 +1817,11 @@ static void bdx_tx_free(struct bdx_priv *priv) bdx_tx_db_close(&priv->txdb); } -/* bdx_tx_push_desc - push descriptor to TxD fifo - * @priv - NIC private structure - * @data - desc's data - * @size - desc's size +/** + * bdx_tx_push_desc - push descriptor to TxD fifo + * @priv: NIC private structure + * @data: desc's data + * @size: desc's size * * Pushes desc to TxD fifo and overlaps it if needed. * NOTE: this func does not check for available space. this is responsibility @@ -1819,10 +1847,11 @@ static void bdx_tx_push_desc(struct bdx_priv *priv, void *data, int size) WRITE_REG(priv, f->m.reg_WPTR, f->m.wptr & TXF_WPTR_WR_PTR); } -/* bdx_tx_push_desc_safe - push descriptor to TxD fifo in a safe way - * @priv - NIC private structure - * @data - desc's data - * @size - desc's size +/** + * bdx_tx_push_desc_safe - push descriptor to TxD fifo in a safe way + * @priv: NIC private structure + * @data: desc's data + * @size: desc's size * * NOTE: this func does check for available space and, if necessary, waits for * NIC to read existing data before writing new one. 
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 6685bbb5705a..1e5d85b06e71 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -27,6 +27,7 @@ #include <linux/phy.h> #include <linux/workqueue.h> #include <linux/delay.h> +#include <linux/pm_runtime.h> #include <linux/platform_data/cpsw.h> @@ -494,11 +495,7 @@ static int cpsw_ndo_open(struct net_device *ndev) cpsw_intr_disable(priv); netif_carrier_off(ndev); - ret = clk_enable(priv->clk); - if (ret < 0) { - dev_err(priv->dev, "unable to turn on device clock\n"); - return ret; - } + pm_runtime_get_sync(&priv->pdev->dev); reg = __raw_readl(&priv->regs->id_ver); @@ -569,7 +566,7 @@ static int cpsw_ndo_stop(struct net_device *ndev) netif_carrier_off(priv->ndev); cpsw_ale_stop(priv->ale); for_each_slave(priv, cpsw_slave_stop, priv); - clk_disable(priv->clk); + pm_runtime_put_sync(&priv->pdev->dev); return 0; } @@ -748,7 +745,7 @@ static int __devinit cpsw_probe(struct platform_device *pdev) memcpy(priv->mac_addr, data->slave_data[0].mac_addr, ETH_ALEN); pr_info("Detected MACID = %pM", priv->mac_addr); } else { - random_ether_addr(priv->mac_addr); + eth_random_addr(priv->mac_addr); pr_info("Random MACID = %pM", priv->mac_addr); } @@ -763,10 +760,12 @@ static int __devinit cpsw_probe(struct platform_device *pdev) for (i = 0; i < data->slaves; i++) priv->slaves[i].slave_num = i; - priv->clk = clk_get(&pdev->dev, NULL); + pm_runtime_enable(&pdev->dev); + priv->clk = clk_get(&pdev->dev, "fck"); if (IS_ERR(priv->clk)) { - dev_err(priv->dev, "failed to get device clock)\n"); - ret = -EBUSY; + dev_err(&pdev->dev, "fck is not found\n"); + ret = -ENODEV; + goto clean_slave_ret; } priv->cpsw_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); @@ -935,6 +934,8 @@ clean_cpsw_iores_ret: resource_size(priv->cpsw_res)); clean_clk_ret: clk_put(priv->clk); +clean_slave_ret: + pm_runtime_disable(&pdev->dev); kfree(priv->slaves); clean_ndev_ret: free_netdev(ndev); @@ -959,6 +960,7 @@ static int __devexit cpsw_remove(struct platform_device *pdev) resource_size(priv->cpsw_res)); release_mem_region(priv->cpsw_ss_res->start, resource_size(priv->cpsw_ss_res)); + pm_runtime_disable(&pdev->dev); clk_put(priv->clk); kfree(priv->slaves); free_netdev(ndev); @@ -973,6 +975,8 @@ static int cpsw_suspend(struct device *dev) if (netif_running(ndev)) cpsw_ndo_stop(ndev); + pm_runtime_put_sync(&pdev->dev); + return 0; } @@ -981,6 +985,7 @@ static int cpsw_resume(struct device *dev) struct platform_device *pdev = to_platform_device(dev); struct net_device *ndev = platform_get_drvdata(pdev); + pm_runtime_get_sync(&pdev->dev); if (netif_running(ndev)) cpsw_ndo_open(ndev); return 0; diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c index 4da93a5d7ec6..fce89a0ab06e 100644 --- a/drivers/net/ethernet/ti/davinci_emac.c +++ b/drivers/net/ethernet/ti/davinci_emac.c @@ -57,7 +57,12 @@ #include <linux/bitops.h> #include <linux/io.h> #include <linux/uaccess.h> +#include <linux/pm_runtime.h> #include <linux/davinci_emac.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/of_irq.h> +#include <linux/of_net.h> #include <asm/irq.h> #include <asm/page.h> @@ -339,6 +344,9 @@ struct emac_priv { u32 rx_addr_type; atomic_t cur_tx; const char *phy_id; +#ifdef CONFIG_OF + struct device_node *phy_node; +#endif struct phy_device *phydev; spinlock_t lock; /*platform specific members*/ @@ -346,10 +354,6 @@ struct emac_priv { void (*int_disable) (void); }; -/* clock frequency for 
EMAC */ -static struct clk *emac_clk; -static unsigned long emac_bus_frequency; - /* EMAC TX Host Error description strings */ static char *emac_txhost_errcodes[16] = { "No error", "SOP error", "Ownership bit not set in SOP buffer", @@ -375,7 +379,7 @@ static char *emac_rxhost_errcodes[16] = { #define emac_ctrl_write(reg, val) iowrite32(val, (priv->ctrl_base + (reg))) /** - * emac_dump_regs: Dump important EMAC registers to debug terminal + * emac_dump_regs - Dump important EMAC registers to debug terminal * @priv: The DaVinci EMAC private adapter structure * * Executes ethtool set cmd & sets phy mode @@ -466,7 +470,7 @@ static void emac_dump_regs(struct emac_priv *priv) } /** - * emac_get_drvinfo: Get EMAC driver information + * emac_get_drvinfo - Get EMAC driver information * @ndev: The DaVinci EMAC network adapter * @info: ethtool info structure containing name and version * @@ -481,7 +485,7 @@ static void emac_get_drvinfo(struct net_device *ndev, } /** - * emac_get_settings: Get EMAC settings + * emac_get_settings - Get EMAC settings * @ndev: The DaVinci EMAC network adapter * @ecmd: ethtool command * @@ -500,7 +504,7 @@ static int emac_get_settings(struct net_device *ndev, } /** - * emac_set_settings: Set EMAC settings + * emac_set_settings - Set EMAC settings * @ndev: The DaVinci EMAC network adapter * @ecmd: ethtool command * @@ -518,7 +522,7 @@ static int emac_set_settings(struct net_device *ndev, struct ethtool_cmd *ecmd) } /** - * emac_get_coalesce : Get interrupt coalesce settings for this device + * emac_get_coalesce - Get interrupt coalesce settings for this device * @ndev : The DaVinci EMAC network adapter * @coal : ethtool coalesce settings structure * @@ -536,7 +540,7 @@ static int emac_get_coalesce(struct net_device *ndev, } /** - * emac_set_coalesce : Set interrupt coalesce settings for this device + * emac_set_coalesce - Set interrupt coalesce settings for this device * @ndev : The DaVinci EMAC network adapter * @coal : ethtool coalesce settings structure * @@ -614,11 +618,9 @@ static int emac_set_coalesce(struct net_device *ndev, } -/** - * ethtool_ops: DaVinci EMAC Ethtool structure +/* ethtool_ops: DaVinci EMAC Ethtool structure * * Ethtool support for EMAC adapter - * */ static const struct ethtool_ops ethtool_ops = { .get_drvinfo = emac_get_drvinfo, @@ -631,7 +633,7 @@ static const struct ethtool_ops ethtool_ops = { }; /** - * emac_update_phystatus: Update Phy status + * emac_update_phystatus - Update Phy status * @priv: The DaVinci EMAC private adapter structure * * Updates phy status and takes action for network queue if required @@ -697,7 +699,7 @@ static void emac_update_phystatus(struct emac_priv *priv) } /** - * hash_get: Calculate hash value from mac address + * hash_get - Calculate hash value from mac address * @addr: mac address to delete from hash table * * Calculates hash value from mac address @@ -723,9 +725,9 @@ static u32 hash_get(u8 *addr) } /** - * hash_add: Hash function to add mac addr from hash table + * hash_add - Hash function to add mac addr from hash table * @priv: The DaVinci EMAC private adapter structure - * mac_addr: mac address to delete from hash table + * @mac_addr: mac address to delete from hash table * * Adds mac address to the internal hash table * @@ -765,9 +767,9 @@ static int hash_add(struct emac_priv *priv, u8 *mac_addr) } /** - * hash_del: Hash function to delete mac addr from hash table + * hash_del - Hash function to delete mac addr from hash table * @priv: The DaVinci EMAC private adapter structure - * mac_addr: mac address 
to delete from hash table + * @mac_addr: mac address to delete from hash table * * Removes mac address from the internal hash table * @@ -807,7 +809,7 @@ static int hash_del(struct emac_priv *priv, u8 *mac_addr) #define EMAC_ALL_MULTI_CLR 3 /** - * emac_add_mcast: Set multicast address in the EMAC adapter (Internal) + * emac_add_mcast - Set multicast address in the EMAC adapter (Internal) * @priv: The DaVinci EMAC private adapter structure * @action: multicast operation to perform * mac_addr: mac address to set @@ -855,7 +857,7 @@ static void emac_add_mcast(struct emac_priv *priv, u32 action, u8 *mac_addr) } /** - * emac_dev_mcast_set: Set multicast address in the EMAC adapter + * emac_dev_mcast_set - Set multicast address in the EMAC adapter * @ndev: The DaVinci EMAC network adapter * * Set multicast addresses in EMAC adapter @@ -901,7 +903,7 @@ static void emac_dev_mcast_set(struct net_device *ndev) *************************************************************************/ /** - * emac_int_disable: Disable EMAC module interrupt (from adapter) + * emac_int_disable - Disable EMAC module interrupt (from adapter) * @priv: The DaVinci EMAC private adapter structure * * Disable EMAC interrupt on the adapter @@ -931,7 +933,7 @@ static void emac_int_disable(struct emac_priv *priv) } /** - * emac_int_enable: Enable EMAC module interrupt (from adapter) + * emac_int_enable - Enable EMAC module interrupt (from adapter) * @priv: The DaVinci EMAC private adapter structure * * Enable EMAC interrupt on the adapter @@ -967,7 +969,7 @@ static void emac_int_enable(struct emac_priv *priv) } /** - * emac_irq: EMAC interrupt handler + * emac_irq - EMAC interrupt handler * @irq: interrupt number * @dev_id: EMAC network adapter data structure ptr * @@ -1060,7 +1062,7 @@ static void emac_tx_handler(void *token, int len, int status) } /** - * emac_dev_xmit: EMAC Transmit function + * emac_dev_xmit - EMAC Transmit function * @skb: SKB pointer * @ndev: The DaVinci EMAC network adapter * @@ -1111,7 +1113,7 @@ fail_tx: } /** - * emac_dev_tx_timeout: EMAC Transmit timeout function + * emac_dev_tx_timeout - EMAC Transmit timeout function * @ndev: The DaVinci EMAC network adapter * * Called when system detects that a skb timeout period has expired @@ -1138,7 +1140,7 @@ static void emac_dev_tx_timeout(struct net_device *ndev) } /** - * emac_set_type0addr: Set EMAC Type0 mac address + * emac_set_type0addr - Set EMAC Type0 mac address * @priv: The DaVinci EMAC private adapter structure * @ch: RX channel number * @mac_addr: MAC address to set in device @@ -1165,7 +1167,7 @@ static void emac_set_type0addr(struct emac_priv *priv, u32 ch, char *mac_addr) } /** - * emac_set_type1addr: Set EMAC Type1 mac address + * emac_set_type1addr - Set EMAC Type1 mac address * @priv: The DaVinci EMAC private adapter structure * @ch: RX channel number * @mac_addr: MAC address to set in device @@ -1187,7 +1189,7 @@ static void emac_set_type1addr(struct emac_priv *priv, u32 ch, char *mac_addr) } /** - * emac_set_type2addr: Set EMAC Type2 mac address + * emac_set_type2addr - Set EMAC Type2 mac address * @priv: The DaVinci EMAC private adapter structure * @ch: RX channel number * @mac_addr: MAC address to set in device @@ -1213,7 +1215,7 @@ static void emac_set_type2addr(struct emac_priv *priv, u32 ch, } /** - * emac_setmac: Set mac address in the adapter (internal function) + * emac_setmac - Set mac address in the adapter (internal function) * @priv: The DaVinci EMAC private adapter structure * @ch: RX channel number * @mac_addr: MAC address to 
set in device @@ -1242,7 +1244,7 @@ static void emac_setmac(struct emac_priv *priv, u32 ch, char *mac_addr) } /** - * emac_dev_setmac_addr: Set mac address in the adapter + * emac_dev_setmac_addr - Set mac address in the adapter * @ndev: The DaVinci EMAC network adapter * @addr: MAC address to set in device * @@ -1277,7 +1279,7 @@ static int emac_dev_setmac_addr(struct net_device *ndev, void *addr) } /** - * emac_hw_enable: Enable EMAC hardware for packet transmission/reception + * emac_hw_enable - Enable EMAC hardware for packet transmission/reception * @priv: The DaVinci EMAC private adapter structure * * Enables EMAC hardware for packet processing - enables PHY, enables RX @@ -1347,7 +1349,7 @@ static int emac_hw_enable(struct emac_priv *priv) } /** - * emac_poll: EMAC NAPI Poll function + * emac_poll - EMAC NAPI Poll function * @ndev: The DaVinci EMAC network adapter * @budget: Number of receive packets to process (as told by NAPI layer) * @@ -1430,7 +1432,7 @@ static int emac_poll(struct napi_struct *napi, int budget) #ifdef CONFIG_NET_POLL_CONTROLLER /** - * emac_poll_controller: EMAC Poll controller function + * emac_poll_controller - EMAC Poll controller function * @ndev: The DaVinci EMAC network adapter * * Polled functionality used by netconsole and others in non interrupt mode @@ -1489,7 +1491,7 @@ static void emac_adjust_link(struct net_device *ndev) *************************************************************************/ /** - * emac_devioctl: EMAC adapter ioctl + * emac_devioctl - EMAC adapter ioctl * @ndev: The DaVinci EMAC network adapter * @ifrq: request parameter * @cmd: command parameter @@ -1516,7 +1518,7 @@ static int match_first_device(struct device *dev, void *data) } /** - * emac_dev_open: EMAC device open + * emac_dev_open - EMAC device open * @ndev: The DaVinci EMAC network adapter * * Called when system wants to start the interface. We init TX/RX channels @@ -1535,6 +1537,8 @@ static int emac_dev_open(struct net_device *ndev) int k = 0; struct emac_priv *priv = netdev_priv(ndev); + pm_runtime_get(&priv->pdev->dev); + netif_carrier_off(ndev); for (cnt = 0; cnt < ETH_ALEN; cnt++) ndev->dev_addr[cnt] = priv->mac_addr[cnt]; @@ -1604,7 +1608,7 @@ static int emac_dev_open(struct net_device *ndev) priv->phy_id); ret = PTR_ERR(priv->phydev); priv->phydev = NULL; - return ret; + goto err; } priv->link = 0; @@ -1645,11 +1649,15 @@ rollback: res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k-1); m = res->end; } - return -EBUSY; + + ret = -EBUSY; +err: + pm_runtime_put(&priv->pdev->dev); + return ret; } /** - * emac_dev_stop: EMAC device stop + * emac_dev_stop - EMAC device stop * @ndev: The DaVinci EMAC network adapter * * Called when system wants to stop or down the interface. We stop the network @@ -1687,11 +1695,12 @@ static int emac_dev_stop(struct net_device *ndev) if (netif_msg_drv(priv)) dev_notice(emac_dev, "DaVinci EMAC: %s stopped\n", ndev->name); + pm_runtime_put(&priv->pdev->dev); return 0; } /** - * emac_dev_getnetstats: EMAC get statistics function + * emac_dev_getnetstats - EMAC get statistics function * @ndev: The DaVinci EMAC network adapter * * Called when system wants to get statistics from the device. 
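The open/stop hunks above replace direct clock handling with runtime-PM references and make the open error path drop the reference it took. A minimal sketch of that pairing, not the driver's code: the example_* names and the stub hardware-start helper are hypothetical, while pm_runtime_get()/pm_runtime_put() and netdev_priv() are the real APIs the patch uses.

#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

struct example_priv {
	struct platform_device *pdev;	/* set at probe time, as in the driver */
};

static int example_hw_start(struct example_priv *priv)
{
	return 0;			/* placeholder for hardware bring-up */
}

static int example_ndo_open(struct net_device *ndev)
{
	struct example_priv *priv = netdev_priv(ndev);
	int ret;

	/* Take a runtime-PM reference so the hardware is powered. */
	pm_runtime_get(&priv->pdev->dev);

	ret = example_hw_start(priv);
	if (ret)
		goto err;
	return 0;

err:
	/* Any failure after the get must drop the reference again. */
	pm_runtime_put(&priv->pdev->dev);
	return ret;
}

static int example_ndo_stop(struct net_device *ndev)
{
	struct example_priv *priv = netdev_priv(ndev);

	/* Balance the reference taken in open. */
	pm_runtime_put(&priv->pdev->dev);
	return 0;
}

This pattern still relies on pm_runtime_enable() being called on the platform device at probe time and pm_runtime_disable() at remove, which is what the probe/remove hunks in these drivers add.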
@@ -1762,8 +1771,79 @@ static const struct net_device_ops emac_netdev_ops = { #endif }; +#ifdef CONFIG_OF +static struct emac_platform_data + *davinci_emac_of_get_pdata(struct platform_device *pdev, + struct emac_priv *priv) +{ + struct device_node *np; + struct emac_platform_data *pdata = NULL; + const u8 *mac_addr; + u32 data; + int ret; + + pdata = pdev->dev.platform_data; + if (!pdata) { + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); + if (!pdata) + goto nodata; + } + + np = pdev->dev.of_node; + if (!np) + goto nodata; + else + pdata->version = EMAC_VERSION_2; + + if (!is_valid_ether_addr(pdata->mac_addr)) { + mac_addr = of_get_mac_address(np); + if (mac_addr) + memcpy(pdata->mac_addr, mac_addr, ETH_ALEN); + } + + ret = of_property_read_u32(np, "ti,davinci-ctrl-reg-offset", &data); + if (!ret) + pdata->ctrl_reg_offset = data; + + ret = of_property_read_u32(np, "ti,davinci-ctrl-mod-reg-offset", + &data); + if (!ret) + pdata->ctrl_mod_reg_offset = data; + + ret = of_property_read_u32(np, "ti,davinci-ctrl-ram-offset", &data); + if (!ret) + pdata->ctrl_ram_offset = data; + + ret = of_property_read_u32(np, "ti,davinci-ctrl-ram-size", &data); + if (!ret) + pdata->ctrl_ram_size = data; + + ret = of_property_read_u32(np, "ti,davinci-rmii-en", &data); + if (!ret) + pdata->rmii_en = data; + + ret = of_property_read_u32(np, "ti,davinci-no-bd-ram", &data); + if (!ret) + pdata->no_bd_ram = data; + + priv->phy_node = of_parse_phandle(np, "phy-handle", 0); + if (!priv->phy_node) + pdata->phy_id = ""; + + pdev->dev.platform_data = pdata; +nodata: + return pdata; +} +#else +static struct emac_platform_data + *davinci_emac_of_get_pdata(struct platform_device *pdev, + struct emac_priv *priv) +{ + return pdev->dev.platform_data; +} +#endif /** - * davinci_emac_probe: EMAC device probe + * davinci_emac_probe - EMAC device probe * @pdev: The DaVinci EMAC device that we are removing * * Called when probing for emac devicesr. 
We get details of instances and @@ -1780,6 +1860,9 @@ static int __devinit davinci_emac_probe(struct platform_device *pdev) struct emac_platform_data *pdata; struct device *emac_dev; struct cpdma_params dma_params; + struct clk *emac_clk; + unsigned long emac_bus_frequency; + /* obtain emac clock from kernel */ emac_clk = clk_get(&pdev->dev, NULL); @@ -1788,12 +1871,14 @@ static int __devinit davinci_emac_probe(struct platform_device *pdev) return -EBUSY; } emac_bus_frequency = clk_get_rate(emac_clk); + clk_put(emac_clk); + /* TODO: Probe PHY here if possible */ ndev = alloc_etherdev(sizeof(struct emac_priv)); if (!ndev) { rc = -ENOMEM; - goto free_clk; + goto no_ndev; } platform_set_drvdata(pdev, ndev); @@ -1804,7 +1889,7 @@ static int __devinit davinci_emac_probe(struct platform_device *pdev) spin_lock_init(&priv->lock); - pdata = pdev->dev.platform_data; + pdata = davinci_emac_of_get_pdata(pdev, priv); if (!pdata) { dev_err(&pdev->dev, "no platform data\n"); rc = -ENODEV; @@ -1909,15 +1994,13 @@ static int __devinit davinci_emac_probe(struct platform_device *pdev) SET_ETHTOOL_OPS(ndev, ðtool_ops); netif_napi_add(ndev, &priv->napi, emac_poll, EMAC_POLL_WEIGHT); - clk_enable(emac_clk); - /* register the network device */ SET_NETDEV_DEV(ndev, &pdev->dev); rc = register_netdev(ndev); if (rc) { dev_err(&pdev->dev, "error in register_netdev\n"); rc = -ENODEV; - goto netdev_reg_err; + goto no_irq_res; } @@ -1926,10 +2009,12 @@ static int __devinit davinci_emac_probe(struct platform_device *pdev) "(regs: %p, irq: %d)\n", (void *)priv->emac_base_phys, ndev->irq); } + + pm_runtime_enable(&pdev->dev); + pm_runtime_resume(&pdev->dev); + return 0; -netdev_reg_err: - clk_disable(emac_clk); no_irq_res: if (priv->txchan) cpdma_chan_destroy(priv->txchan); @@ -1943,13 +2028,12 @@ no_dma: probe_quit: free_netdev(ndev); -free_clk: - clk_put(emac_clk); +no_ndev: return rc; } /** - * davinci_emac_remove: EMAC device remove + * davinci_emac_remove - EMAC device remove * @pdev: The DaVinci EMAC device that we are removing * * Called when removing the device driver. We disable clock usage and release @@ -1978,9 +2062,6 @@ static int __devexit davinci_emac_remove(struct platform_device *pdev) iounmap(priv->remap_addr); free_netdev(ndev); - clk_disable(emac_clk); - clk_put(emac_clk); - return 0; } @@ -1992,8 +2073,6 @@ static int davinci_emac_suspend(struct device *dev) if (netif_running(ndev)) emac_dev_stop(ndev); - clk_disable(emac_clk); - return 0; } @@ -2002,8 +2081,6 @@ static int davinci_emac_resume(struct device *dev) struct platform_device *pdev = to_platform_device(dev); struct net_device *ndev = platform_get_drvdata(pdev); - clk_enable(emac_clk); - if (netif_running(ndev)) emac_dev_open(ndev); @@ -2015,21 +2092,26 @@ static const struct dev_pm_ops davinci_emac_pm_ops = { .resume = davinci_emac_resume, }; -/** - * davinci_emac_driver: EMAC platform driver structure - */ +static const struct of_device_id davinci_emac_of_match[] = { + {.compatible = "ti,davinci-dm6467-emac", }, + {}, +}; +MODULE_DEVICE_TABLE(of, davinci_emac_of_match); + +/* davinci_emac_driver: EMAC platform driver structure */ static struct platform_driver davinci_emac_driver = { .driver = { .name = "davinci_emac", .owner = THIS_MODULE, .pm = &davinci_emac_pm_ops, + .of_match_table = of_match_ptr(davinci_emac_of_match), }, .probe = davinci_emac_probe, .remove = __devexit_p(davinci_emac_remove), }; /** - * davinci_emac_init: EMAC driver module init + * davinci_emac_init - EMAC driver module init * * Called when initializing the driver. 
We register the driver with * the platform. @@ -2041,7 +2123,7 @@ static int __init davinci_emac_init(void) late_initcall(davinci_emac_init); /** - * davinci_emac_exit: EMAC driver module exit + * davinci_emac_exit - EMAC driver module exit * * Called when exiting the driver completely. We unregister the driver with * the platform and exit diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c index e4e47088e26b..cd7ee204e94a 100644 --- a/drivers/net/ethernet/ti/davinci_mdio.c +++ b/drivers/net/ethernet/ti/davinci_mdio.c @@ -34,6 +34,7 @@ #include <linux/clk.h> #include <linux/err.h> #include <linux/io.h> +#include <linux/pm_runtime.h> #include <linux/davinci_emac.h> /* @@ -321,7 +322,9 @@ static int __devinit davinci_mdio_probe(struct platform_device *pdev) snprintf(data->bus->id, MII_BUS_ID_SIZE, "%s-%x", pdev->name, pdev->id); - data->clk = clk_get(dev, NULL); + pm_runtime_enable(&pdev->dev); + pm_runtime_get_sync(&pdev->dev); + data->clk = clk_get(&pdev->dev, "fck"); if (IS_ERR(data->clk)) { dev_err(dev, "failed to get device clock\n"); ret = PTR_ERR(data->clk); @@ -329,8 +332,6 @@ static int __devinit davinci_mdio_probe(struct platform_device *pdev) goto bail_out; } - clk_enable(data->clk); - dev_set_drvdata(dev, data); data->dev = dev; spin_lock_init(&data->lock); @@ -378,10 +379,10 @@ bail_out: if (data->bus) mdiobus_free(data->bus); - if (data->clk) { - clk_disable(data->clk); + if (data->clk) clk_put(data->clk); - } + pm_runtime_put_sync(&pdev->dev); + pm_runtime_disable(&pdev->dev); kfree(data); @@ -396,10 +397,10 @@ static int __devexit davinci_mdio_remove(struct platform_device *pdev) if (data->bus) mdiobus_free(data->bus); - if (data->clk) { - clk_disable(data->clk); + if (data->clk) clk_put(data->clk); - } + pm_runtime_put_sync(&pdev->dev); + pm_runtime_disable(&pdev->dev); dev_set_drvdata(dev, NULL); @@ -421,8 +422,7 @@ static int davinci_mdio_suspend(struct device *dev) __raw_writel(ctrl, &data->regs->control); wait_for_idle(data); - if (data->clk) - clk_disable(data->clk); + pm_runtime_put_sync(data->dev); data->suspended = true; spin_unlock(&data->lock); @@ -436,8 +436,7 @@ static int davinci_mdio_resume(struct device *dev) u32 ctrl; spin_lock(&data->lock); - if (data->clk) - clk_enable(data->clk); + pm_runtime_put_sync(data->dev); /* restart the scan state machine */ ctrl = __raw_readl(&data->regs->control); diff --git a/drivers/net/ethernet/tile/tilegx.c b/drivers/net/ethernet/tile/tilegx.c index 83b4b388ad49..4e2a1628484d 100644 --- a/drivers/net/ethernet/tile/tilegx.c +++ b/drivers/net/ethernet/tile/tilegx.c @@ -123,6 +123,7 @@ struct tile_net_comps { /* The transmit wake timer for a given cpu and echannel. 
*/ struct tile_net_tx_wake { + int tx_queue_idx; struct hrtimer timer; struct net_device *dev; }; @@ -573,12 +574,14 @@ static void add_comp(gxio_mpipe_equeue_t *equeue, comps->comp_next++; } -static void tile_net_schedule_tx_wake_timer(struct net_device *dev) +static void tile_net_schedule_tx_wake_timer(struct net_device *dev, + int tx_queue_idx) { - struct tile_net_info *info = &__get_cpu_var(per_cpu_info); + struct tile_net_info *info = &per_cpu(per_cpu_info, tx_queue_idx); struct tile_net_priv *priv = netdev_priv(dev); + struct tile_net_tx_wake *tx_wake = &info->tx_wake[priv->echannel]; - hrtimer_start(&info->tx_wake[priv->echannel].timer, + hrtimer_start(&tx_wake->timer, ktime_set(0, TX_TIMER_DELAY_USEC * 1000UL), HRTIMER_MODE_REL_PINNED); } @@ -587,7 +590,7 @@ static enum hrtimer_restart tile_net_handle_tx_wake_timer(struct hrtimer *t) { struct tile_net_tx_wake *tx_wake = container_of(t, struct tile_net_tx_wake, timer); - netif_wake_subqueue(tx_wake->dev, smp_processor_id()); + netif_wake_subqueue(tx_wake->dev, tx_wake->tx_queue_idx); return HRTIMER_NORESTART; } @@ -1218,6 +1221,7 @@ static int tile_net_open(struct net_device *dev) hrtimer_init(&tx_wake->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + tx_wake->tx_queue_idx = cpu; tx_wake->timer.function = tile_net_handle_tx_wake_timer; tx_wake->dev = dev; } @@ -1291,6 +1295,7 @@ static inline void *tile_net_frag_buf(skb_frag_t *f) * stop the queue and schedule the tx_wake timer. */ static s64 tile_net_equeue_try_reserve(struct net_device *dev, + int tx_queue_idx, struct tile_net_comps *comps, gxio_mpipe_equeue_t *equeue, int num_edescs) @@ -1313,8 +1318,8 @@ static s64 tile_net_equeue_try_reserve(struct net_device *dev, } /* Still nothing; give up and stop the queue for a short while. */ - netif_stop_subqueue(dev, smp_processor_id()); - tile_net_schedule_tx_wake_timer(dev); + netif_stop_subqueue(dev, tx_queue_idx); + tile_net_schedule_tx_wake_timer(dev, tx_queue_idx); return -1; } @@ -1328,11 +1333,12 @@ static s64 tile_net_equeue_try_reserve(struct net_device *dev, static int tso_count_edescs(struct sk_buff *skb) { struct skb_shared_info *sh = skb_shinfo(skb); - unsigned int data_len = skb->data_len; + unsigned int sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb); + unsigned int data_len = skb->data_len + skb->hdr_len - sh_len; unsigned int p_len = sh->gso_size; long f_id = -1; /* id of the current fragment */ - long f_size = -1; /* size of the current fragment */ - long f_used = -1; /* bytes used from the current fragment */ + long f_size = skb->hdr_len; /* size of the current fragment */ + long f_used = sh_len; /* bytes used from the current fragment */ long n; /* size of the current piece of payload */ int num_edescs = 0; int segment; @@ -1377,13 +1383,14 @@ static void tso_headers_prepare(struct sk_buff *skb, unsigned char *headers, struct skb_shared_info *sh = skb_shinfo(skb); struct iphdr *ih; struct tcphdr *th; - unsigned int data_len = skb->data_len; + unsigned int sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb); + unsigned int data_len = skb->data_len + skb->hdr_len - sh_len; unsigned char *data = skb->data; - unsigned int ih_off, th_off, sh_len, p_len; + unsigned int ih_off, th_off, p_len; unsigned int isum_seed, tsum_seed, id, seq; long f_id = -1; /* id of the current fragment */ - long f_size = -1; /* size of the current fragment */ - long f_used = -1; /* bytes used from the current fragment */ + long f_size = skb->hdr_len; /* size of the current fragment */ + long f_used = sh_len; /* bytes used from the current 
fragment */ long n; /* size of the current piece of payload */ int segment; @@ -1392,14 +1399,13 @@ static void tso_headers_prepare(struct sk_buff *skb, unsigned char *headers, th = tcp_hdr(skb); ih_off = skb_network_offset(skb); th_off = skb_transport_offset(skb); - sh_len = th_off + tcp_hdrlen(skb); p_len = sh->gso_size; /* Set up seed values for IP and TCP csum and initialize id and seq. */ isum_seed = ((0xFFFF - ih->check) + (0xFFFF - ih->tot_len) + (0xFFFF - ih->id)); - tsum_seed = th->check + (0xFFFF ^ htons(skb->len)); + tsum_seed = th->check + (0xFFFF ^ htons(sh_len + data_len)); id = ntohs(ih->id); seq = ntohl(th->seq); @@ -1471,21 +1477,22 @@ static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue, { struct tile_net_priv *priv = netdev_priv(dev); struct skb_shared_info *sh = skb_shinfo(skb); - unsigned int data_len = skb->data_len; + unsigned int sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb); + unsigned int data_len = skb->data_len + skb->hdr_len - sh_len; unsigned int p_len = sh->gso_size; gxio_mpipe_edesc_t edesc_head = { { 0 } }; gxio_mpipe_edesc_t edesc_body = { { 0 } }; long f_id = -1; /* id of the current fragment */ - long f_size = -1; /* size of the current fragment */ - long f_used = -1; /* bytes used from the current fragment */ + long f_size = skb->hdr_len; /* size of the current fragment */ + long f_used = sh_len; /* bytes used from the current fragment */ + void *f_data = skb->data; long n; /* size of the current piece of payload */ unsigned long tx_packets = 0, tx_bytes = 0; - unsigned int csum_start, sh_len; + unsigned int csum_start; int segment; /* Prepare to egress the headers: set up header edesc. */ csum_start = skb_checksum_start_offset(skb); - sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb); edesc_head.csum = 1; edesc_head.csum_start = csum_start; edesc_head.csum_dest = csum_start + skb->csum_offset; @@ -1497,7 +1504,6 @@ static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue, /* Egress all the edescs. */ for (segment = 0; segment < sh->gso_segs; segment++) { - void *va; unsigned char *buf; unsigned int p_used = 0; @@ -1516,10 +1522,9 @@ static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue, f_id++; f_size = sh->frags[f_id].size; f_used = 0; + f_data = tile_net_frag_buf(&sh->frags[f_id]); } - va = tile_net_frag_buf(&sh->frags[f_id]) + f_used; - /* Use bytes from the current fragment. */ n = p_len - p_used; if (n > f_size - f_used) @@ -1528,7 +1533,7 @@ static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue, p_used += n; /* Egress a piece of the payload. */ - edesc_body.va = va_to_tile_io_addr(va); + edesc_body.va = va_to_tile_io_addr(f_data) + f_used; edesc_body.xfer_size = n; edesc_body.bound = !(p_used < p_len); gxio_mpipe_equeue_put_at(equeue, edesc_body, slot); @@ -1580,7 +1585,8 @@ static int tile_net_tx_tso(struct sk_buff *skb, struct net_device *dev) local_irq_save(irqflags); /* Try to acquire a completion entry and an egress slot. */ - slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs); + slot = tile_net_equeue_try_reserve(dev, skb->queue_mapping, comps, + equeue, num_edescs); if (slot < 0) { local_irq_restore(irqflags); return NETDEV_TX_BUSY; @@ -1674,7 +1680,8 @@ static int tile_net_tx(struct sk_buff *skb, struct net_device *dev) local_irq_save(irqflags); /* Try to acquire a completion entry and an egress slot. 
*/ - slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs); + slot = tile_net_equeue_try_reserve(dev, skb->queue_mapping, comps, + equeue, num_edescs); if (slot < 0) { local_irq_restore(irqflags); return NETDEV_TX_BUSY; @@ -1844,7 +1851,7 @@ static void tile_net_dev_init(const char *name, const uint8_t *mac) memcpy(dev->dev_addr, mac, 6); dev->addr_len = 6; } else { - random_ether_addr(dev->dev_addr); + eth_hw_addr_random(dev); } /* Register the network device. */ diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c index 6199f6b387b6..c1ebfe9efcb3 100644 --- a/drivers/net/ethernet/toshiba/spider_net.c +++ b/drivers/net/ethernet/toshiba/spider_net.c @@ -114,7 +114,8 @@ spider_net_write_reg(struct spider_net_card *card, u32 reg, u32 value) out_be32(card->regs + reg, value); } -/** spider_net_write_phy - write to phy register +/** + * spider_net_write_phy - write to phy register * @netdev: adapter to be written to * @mii_id: id of MII * @reg: PHY register @@ -137,7 +138,8 @@ spider_net_write_phy(struct net_device *netdev, int mii_id, spider_net_write_reg(card, SPIDER_NET_GPCWOPCMD, writevalue); } -/** spider_net_read_phy - read from phy register +/** + * spider_net_read_phy - read from phy register * @netdev: network device to be read from * @mii_id: id of MII * @reg: PHY register diff --git a/drivers/net/ethernet/via/via-velocity.c b/drivers/net/ethernet/via/via-velocity.c index ea3e0a21ba74..a46c19859683 100644 --- a/drivers/net/ethernet/via/via-velocity.c +++ b/drivers/net/ethernet/via/via-velocity.c @@ -486,7 +486,7 @@ static void __devinit velocity_get_options(struct velocity_opt *opts, int index, velocity_set_bool_opt(&opts->flags, IP_byte_align[index], IP_ALIG_DEF, VELOCITY_FLAGS_IP_ALIGN, "IP_byte_align", devname); velocity_set_bool_opt(&opts->flags, ValPktLen[index], VAL_PKT_LEN_DEF, VELOCITY_FLAGS_VAL_PKT_LEN, "ValPktLen", devname); velocity_set_int_opt((int *) &opts->spd_dpx, speed_duplex[index], MED_LNK_MIN, MED_LNK_MAX, MED_LNK_DEF, "Media link mode", devname); - velocity_set_int_opt((int *) &opts->wol_opts, wol_opts[index], WOL_OPT_MIN, WOL_OPT_MAX, WOL_OPT_DEF, "Wake On Lan options", devname); + velocity_set_int_opt(&opts->wol_opts, wol_opts[index], WOL_OPT_MIN, WOL_OPT_MAX, WOL_OPT_DEF, "Wake On Lan options", devname); opts->numrx = (opts->numrx & ~3); } diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c index a75e9ef5a4ce..a5826a3111a6 100644 --- a/drivers/net/ethernet/wiznet/w5100.c +++ b/drivers/net/ethernet/wiznet/w5100.c @@ -637,7 +637,7 @@ static int __devinit w5100_hw_probe(struct platform_device *pdev) if (data && is_valid_ether_addr(data->mac_addr)) { memcpy(ndev->dev_addr, data->mac_addr, ETH_ALEN); } else { - random_ether_addr(ndev->dev_addr); + eth_random_addr(ndev->dev_addr); ndev->addr_assign_type |= NET_ADDR_RANDOM; } diff --git a/drivers/net/ethernet/wiznet/w5300.c b/drivers/net/ethernet/wiznet/w5300.c index 3306a20ec211..bdd8891c215a 100644 --- a/drivers/net/ethernet/wiznet/w5300.c +++ b/drivers/net/ethernet/wiznet/w5300.c @@ -557,7 +557,7 @@ static int __devinit w5300_hw_probe(struct platform_device *pdev) if (data && is_valid_ether_addr(data->mac_addr)) { memcpy(ndev->dev_addr, data->mac_addr, ETH_ALEN); } else { - random_ether_addr(ndev->dev_addr); + eth_random_addr(ndev->dev_addr); ndev->addr_assign_type |= NET_ADDR_RANDOM; } diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c index 1eaf7128afee..f8e351880119 
100644 --- a/drivers/net/ethernet/xilinx/ll_temac_main.c +++ b/drivers/net/ethernet/xilinx/ll_temac_main.c @@ -197,7 +197,7 @@ static int temac_dcr_setup(struct temac_local *lp, struct platform_device *op, #endif /** - * * temac_dma_bd_release - Release buffer descriptor rings + * temac_dma_bd_release - Release buffer descriptor rings */ static void temac_dma_bd_release(struct net_device *ndev) { @@ -768,7 +768,6 @@ static void ll_temac_recv(struct net_device *ndev) DMA_FROM_DEVICE); skb_put(skb, length); - skb->dev = ndev; skb->protocol = eth_type_trans(skb, ndev); skb_checksum_none_assert(skb); diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c index 9c365e192a31..0793299bd39e 100644 --- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c @@ -312,7 +312,7 @@ static void axienet_set_mac_address(struct net_device *ndev, void *address) if (address) memcpy(ndev->dev_addr, address, ETH_ALEN); if (!is_valid_ether_addr(ndev->dev_addr)) - random_ether_addr(ndev->dev_addr); + eth_random_addr(ndev->dev_addr); /* Set up unicast MAC address filter set its mac address */ axienet_iow(lp, XAE_UAW0_OFFSET, diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c index 4ad80f771099..6695a1dadf4e 100644 --- a/drivers/net/fddi/defxx.c +++ b/drivers/net/fddi/defxx.c @@ -2962,7 +2962,7 @@ static int dfx_rcv_init(DFX_board_t *bp, int get_buffers) bp->descr_block_virt->rcv_data[i+j].long_0 = (u32) (PI_RCV_DESCR_M_SOP | ((PI_RCV_DATA_K_SIZE_MAX / PI_ALIGN_K_RCV_DATA_BUFF) << PI_RCV_DESCR_V_SEG_LEN)); bp->descr_block_virt->rcv_data[i+j].long_1 = (u32) (bp->rcv_block_phys + (i * PI_RCV_DATA_K_SIZE_MAX)); - bp->p_rcv_buff_va[i+j] = (char *) (bp->rcv_block_virt + (i * PI_RCV_DATA_K_SIZE_MAX)); + bp->p_rcv_buff_va[i+j] = (bp->rcv_block_virt + (i * PI_RCV_DATA_K_SIZE_MAX)); } #endif } @@ -3030,7 +3030,7 @@ static void dfx_rcv_queue_process( #ifdef DYNAMIC_BUFFERS p_buff = (char *) (((struct sk_buff *)bp->p_rcv_buff_va[entry])->data); #else - p_buff = (char *) bp->p_rcv_buff_va[entry]; + p_buff = bp->p_rcv_buff_va[entry]; #endif memcpy(&descr, p_buff + RCV_BUFF_K_DESCR, sizeof(u32)); diff --git a/drivers/net/fddi/skfp/pmf.c b/drivers/net/fddi/skfp/pmf.c index 9ac4665d7411..24d8566cfd8b 100644 --- a/drivers/net/fddi/skfp/pmf.c +++ b/drivers/net/fddi/skfp/pmf.c @@ -1242,7 +1242,7 @@ static int smt_set_para(struct s_smc *smc, struct smt_para *pa, int index, if (len < 8) goto len_error ; if (set) - memcpy((char *) to,(char *) from+2,6) ; + memcpy(to,from+2,6) ; to += 8 ; from += 8 ; len -= 8 ; @@ -1251,7 +1251,7 @@ static int smt_set_para(struct s_smc *smc, struct smt_para *pa, int index, if (len < 4) goto len_error ; if (set) - memcpy((char *) to,(char *) from,4) ; + memcpy(to,from,4) ; to += 4 ; from += 4 ; len -= 4 ; @@ -1260,7 +1260,7 @@ static int smt_set_para(struct s_smc *smc, struct smt_para *pa, int index, if (len < 8) goto len_error ; if (set) - memcpy((char *) to,(char *) from,8) ; + memcpy(to,from,8) ; to += 8 ; from += 8 ; len -= 8 ; @@ -1269,7 +1269,7 @@ static int smt_set_para(struct s_smc *smc, struct smt_para *pa, int index, if (len < 32) goto len_error ; if (set) - memcpy((char *) to,(char *) from,32) ; + memcpy(to,from,32) ; to += 32 ; from += 32 ; len -= 32 ; diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c index aed1a6105b24..2c0894a92abd 100644 --- a/drivers/net/hamradio/mkiss.c +++ b/drivers/net/hamradio/mkiss.c @@ -485,7 +485,7 @@ static void 
ax_encaps(struct net_device *dev, unsigned char *icp, int len) return; default: - count = kiss_esc(p, (unsigned char *)ax->xbuff, len); + count = kiss_esc(p, ax->xbuff, len); } } else { unsigned short crc; @@ -497,7 +497,7 @@ static void ax_encaps(struct net_device *dev, unsigned char *icp, int len) case CRC_MODE_SMACK: *p |= 0x80; crc = swab16(crc16(0, p, len)); - count = kiss_esc_crc(p, (unsigned char *)ax->xbuff, crc, len+2); + count = kiss_esc_crc(p, ax->xbuff, crc, len+2); break; case CRC_MODE_FLEX_TEST: ax->crcmode = CRC_MODE_NONE; @@ -506,11 +506,11 @@ static void ax_encaps(struct net_device *dev, unsigned char *icp, int len) case CRC_MODE_FLEX: *p |= 0x20; crc = calc_crc_flex(p, len); - count = kiss_esc_crc(p, (unsigned char *)ax->xbuff, crc, len+2); + count = kiss_esc_crc(p, ax->xbuff, crc, len+2); break; default: - count = kiss_esc(p, (unsigned char *)ax->xbuff, len); + count = kiss_esc(p, ax->xbuff, len); } } spin_unlock_bh(&ax->buflock); diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h index 2857ab078aac..95ceb3593043 100644 --- a/drivers/net/hyperv/hyperv_net.h +++ b/drivers/net/hyperv/hyperv_net.h @@ -131,6 +131,7 @@ int rndis_filter_send(struct hv_device *dev, struct hv_netvsc_packet *pkt); int rndis_filter_set_packet_filter(struct rndis_device *dev, u32 new_filter); +int rndis_filter_set_device_mac(struct hv_device *hdev, char *mac); #define NVSP_INVALID_PROTOCOL_VERSION ((u32)0xFFFFFFFF) diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index 0c569831db5a..6cee2917eb02 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -614,7 +614,7 @@ retry_send_cmplt: static void netvsc_receive_completion(void *context) { struct hv_netvsc_packet *packet = context; - struct hv_device *device = (struct hv_device *)packet->device; + struct hv_device *device = packet->device; struct netvsc_device *net_device; u64 transaction_id = 0; bool fsend_receive_comp = false; diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index 8f8ed3320425..8e23c084c4a7 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -341,6 +341,34 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu) return 0; } + +static int netvsc_set_mac_addr(struct net_device *ndev, void *p) +{ + struct net_device_context *ndevctx = netdev_priv(ndev); + struct hv_device *hdev = ndevctx->device_ctx; + struct sockaddr *addr = p; + char save_adr[14]; + unsigned char save_aatype; + int err; + + memcpy(save_adr, ndev->dev_addr, ETH_ALEN); + save_aatype = ndev->addr_assign_type; + + err = eth_mac_addr(ndev, p); + if (err != 0) + return err; + + err = rndis_filter_set_device_mac(hdev, addr->sa_data); + if (err != 0) { + /* roll back to saved MAC */ + memcpy(ndev->dev_addr, save_adr, ETH_ALEN); + ndev->addr_assign_type = save_aatype; + } + + return err; +} + + static const struct ethtool_ops ethtool_ops = { .get_drvinfo = netvsc_get_drvinfo, .get_link = ethtool_op_get_link, @@ -353,7 +381,7 @@ static const struct net_device_ops device_ops = { .ndo_set_rx_mode = netvsc_set_multicast_list, .ndo_change_mtu = netvsc_change_mtu, .ndo_validate_addr = eth_validate_addr, - .ndo_set_mac_address = eth_mac_addr, + .ndo_set_mac_address = netvsc_set_mac_addr, }; /* diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c index 981ebb115637..fbf539468205 100644 --- a/drivers/net/hyperv/rndis_filter.c +++ b/drivers/net/hyperv/rndis_filter.c @@ -27,6 +27,7 @@ #include <linux/if_ether.h> 
#include <linux/netdevice.h> #include <linux/if_vlan.h> +#include <linux/nls.h> #include "hyperv_net.h" @@ -47,6 +48,7 @@ struct rndis_request { struct hv_page_buffer buf; /* FIXME: We assumed a fixed size request here. */ struct rndis_message request_msg; + u8 ext[100]; }; static void rndis_filter_send_completion(void *ctx); @@ -511,6 +513,83 @@ static int rndis_filter_query_device_mac(struct rndis_device *dev) dev->hw_mac_adr, &size); } +#define NWADR_STR "NetworkAddress" +#define NWADR_STRLEN 14 + +int rndis_filter_set_device_mac(struct hv_device *hdev, char *mac) +{ + struct netvsc_device *nvdev = hv_get_drvdata(hdev); + struct rndis_device *rdev = nvdev->extension; + struct net_device *ndev = nvdev->ndev; + struct rndis_request *request; + struct rndis_set_request *set; + struct rndis_config_parameter_info *cpi; + wchar_t *cfg_nwadr, *cfg_mac; + struct rndis_set_complete *set_complete; + char macstr[2*ETH_ALEN+1]; + u32 extlen = sizeof(struct rndis_config_parameter_info) + + 2*NWADR_STRLEN + 4*ETH_ALEN; + int ret, t; + + request = get_rndis_request(rdev, RNDIS_MSG_SET, + RNDIS_MESSAGE_SIZE(struct rndis_set_request) + extlen); + if (!request) + return -ENOMEM; + + set = &request->request_msg.msg.set_req; + set->oid = RNDIS_OID_GEN_RNDIS_CONFIG_PARAMETER; + set->info_buflen = extlen; + set->info_buf_offset = sizeof(struct rndis_set_request); + set->dev_vc_handle = 0; + + cpi = (struct rndis_config_parameter_info *)((ulong)set + + set->info_buf_offset); + cpi->parameter_name_offset = + sizeof(struct rndis_config_parameter_info); + /* Multiply by 2 because host needs 2 bytes (utf16) for each char */ + cpi->parameter_name_length = 2*NWADR_STRLEN; + cpi->parameter_type = RNDIS_CONFIG_PARAM_TYPE_STRING; + cpi->parameter_value_offset = + cpi->parameter_name_offset + cpi->parameter_name_length; + /* Multiply by 4 because each MAC byte displayed as 2 utf16 chars */ + cpi->parameter_value_length = 4*ETH_ALEN; + + cfg_nwadr = (wchar_t *)((ulong)cpi + cpi->parameter_name_offset); + cfg_mac = (wchar_t *)((ulong)cpi + cpi->parameter_value_offset); + ret = utf8s_to_utf16s(NWADR_STR, NWADR_STRLEN, UTF16_HOST_ENDIAN, + cfg_nwadr, NWADR_STRLEN); + if (ret < 0) + goto cleanup; + snprintf(macstr, 2*ETH_ALEN+1, "%pm", mac); + ret = utf8s_to_utf16s(macstr, 2*ETH_ALEN, UTF16_HOST_ENDIAN, + cfg_mac, 2*ETH_ALEN); + if (ret < 0) + goto cleanup; + + ret = rndis_filter_send_request(rdev, request); + if (ret != 0) + goto cleanup; + + t = wait_for_completion_timeout(&request->wait_event, 5*HZ); + if (t == 0) { + netdev_err(ndev, "timeout before we got a set response...\n"); + /* + * can't put_rndis_request, since we may still receive a + * send-completion. 
+ */ + return -EBUSY; + } else { + set_complete = &request->response_msg.msg.set_complete; + if (set_complete->status != RNDIS_STATUS_SUCCESS) + ret = -EINVAL; + } + +cleanup: + put_rndis_request(rdev, request); + return ret; +} + + static int rndis_filter_query_device_link_status(struct rndis_device *dev) { u32 size = sizeof(u32); diff --git a/drivers/net/irda/ali-ircc.c b/drivers/net/irda/ali-ircc.c index dcc80d652b78..84872043b5c6 100644 --- a/drivers/net/irda/ali-ircc.c +++ b/drivers/net/irda/ali-ircc.c @@ -1017,7 +1017,7 @@ static void ali_ircc_fir_change_speed(struct ali_ircc_cb *priv, __u32 baud) { int iobase; - struct ali_ircc_cb *self = (struct ali_ircc_cb *) priv; + struct ali_ircc_cb *self = priv; struct net_device *dev; IRDA_DEBUG(1, "%s(), ---------------- Start ----------------\n", __func__ ); @@ -1052,7 +1052,7 @@ static void ali_ircc_fir_change_speed(struct ali_ircc_cb *priv, __u32 baud) */ static void ali_ircc_sir_change_speed(struct ali_ircc_cb *priv, __u32 speed) { - struct ali_ircc_cb *self = (struct ali_ircc_cb *) priv; + struct ali_ircc_cb *self = priv; unsigned long flags; int iobase; int fcr; /* FIFO control reg */ @@ -1121,7 +1121,7 @@ static void ali_ircc_sir_change_speed(struct ali_ircc_cb *priv, __u32 speed) static void ali_ircc_change_dongle_speed(struct ali_ircc_cb *priv, int speed) { - struct ali_ircc_cb *self = (struct ali_ircc_cb *) priv; + struct ali_ircc_cb *self = priv; int iobase,dongle_id; int tmp = 0; diff --git a/drivers/net/irda/au1k_ir.c b/drivers/net/irda/au1k_ir.c index fc503aa5288e..e09417df8f39 100644 --- a/drivers/net/irda/au1k_ir.c +++ b/drivers/net/irda/au1k_ir.c @@ -794,7 +794,7 @@ static int __devinit au1k_irda_net_init(struct net_device *dev) /* allocate the data buffers */ aup->db[0].vaddr = - (void *)dma_alloc(MAX_BUF_SIZE * 2 * NUM_IR_DESC, &temp); + dma_alloc(MAX_BUF_SIZE * 2 * NUM_IR_DESC, &temp); if (!aup->db[0].vaddr) goto out3; diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c index 32eb94ece6c1..e2a06fd996d5 100644 --- a/drivers/net/loopback.c +++ b/drivers/net/loopback.c @@ -107,10 +107,10 @@ static struct rtnl_link_stats64 *loopback_get_stats64(struct net_device *dev, lb_stats = per_cpu_ptr(dev->lstats, i); do { - start = u64_stats_fetch_begin(&lb_stats->syncp); + start = u64_stats_fetch_begin_bh(&lb_stats->syncp); tbytes = lb_stats->bytes; tpackets = lb_stats->packets; - } while (u64_stats_fetch_retry(&lb_stats->syncp, start)); + } while (u64_stats_fetch_retry_bh(&lb_stats->syncp, start)); bytes += tbytes; packets += tpackets; } diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c index 2ee56de7b0ca..0737bd4d1669 100644 --- a/drivers/net/macvtap.c +++ b/drivers/net/macvtap.c @@ -847,13 +847,12 @@ static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb, const struct iovec *iv, unsigned long len, int noblock) { - DECLARE_WAITQUEUE(wait, current); + DEFINE_WAIT(wait); struct sk_buff *skb; ssize_t ret = 0; - add_wait_queue(sk_sleep(&q->sk), &wait); while (len) { - current->state = TASK_INTERRUPTIBLE; + prepare_to_wait(sk_sleep(&q->sk), &wait, TASK_INTERRUPTIBLE); /* Read frames from the queue */ skb = skb_dequeue(&q->sk.sk_receive_queue); @@ -875,8 +874,7 @@ static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb, break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk_sleep(&q->sk), &wait); + finish_wait(sk_sleep(&q->sk), &wait); return ret; } diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig index 944cdfb80fe4..3090dc65a6f1 100644 --- 
a/drivers/net/phy/Kconfig +++ b/drivers/net/phy/Kconfig @@ -67,6 +67,11 @@ config BCM63XX_PHY ---help--- Currently supports the 6348 and 6358 PHYs. +config BCM87XX_PHY + tristate "Driver for Broadcom BCM8706 and BCM8727 PHYs" + help + Currently supports the BCM8706 and BCM8727 10G Ethernet PHYs. + config ICPLUS_PHY tristate "Drivers for ICPlus PHYs" ---help--- diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile index f51af688ef8b..6d2dc6c94f2e 100644 --- a/drivers/net/phy/Makefile +++ b/drivers/net/phy/Makefile @@ -12,6 +12,7 @@ obj-$(CONFIG_SMSC_PHY) += smsc.o obj-$(CONFIG_VITESSE_PHY) += vitesse.o obj-$(CONFIG_BROADCOM_PHY) += broadcom.o obj-$(CONFIG_BCM63XX_PHY) += bcm63xx.o +obj-$(CONFIG_BCM87XX_PHY) += bcm87xx.o obj-$(CONFIG_ICPLUS_PHY) += icplus.o obj-$(CONFIG_REALTEK_PHY) += realtek.o obj-$(CONFIG_LSI_ET1011C_PHY) += et1011c.o diff --git a/drivers/net/phy/amd.c b/drivers/net/phy/amd.c index cfabd5fe5372..a3fb5ceb6487 100644 --- a/drivers/net/phy/amd.c +++ b/drivers/net/phy/amd.c @@ -77,13 +77,7 @@ static struct phy_driver am79c_driver = { static int __init am79c_init(void) { - int ret; - - ret = phy_driver_register(&am79c_driver); - if (ret) - return ret; - - return 0; + return phy_driver_register(&am79c_driver); } static void __exit am79c_exit(void) diff --git a/drivers/net/phy/bcm63xx.c b/drivers/net/phy/bcm63xx.c index cd802eb25fd2..84c7a39b1c65 100644 --- a/drivers/net/phy/bcm63xx.c +++ b/drivers/net/phy/bcm63xx.c @@ -71,7 +71,8 @@ static int bcm63xx_config_intr(struct phy_device *phydev) return err; } -static struct phy_driver bcm63xx_1_driver = { +static struct phy_driver bcm63xx_driver[] = { +{ .phy_id = 0x00406000, .phy_id_mask = 0xfffffc00, .name = "Broadcom BCM63XX (1)", @@ -84,10 +85,8 @@ static struct phy_driver bcm63xx_1_driver = { .ack_interrupt = bcm63xx_ack_interrupt, .config_intr = bcm63xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -/* same phy as above, with just a different OUI */ -static struct phy_driver bcm63xx_2_driver = { +}, { + /* same phy as above, with just a different OUI */ .phy_id = 0x002bdc00, .phy_id_mask = 0xfffffc00, .name = "Broadcom BCM63XX (2)", @@ -99,30 +98,18 @@ static struct phy_driver bcm63xx_2_driver = { .ack_interrupt = bcm63xx_ack_interrupt, .config_intr = bcm63xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; +} }; static int __init bcm63xx_phy_init(void) { - int ret; - - ret = phy_driver_register(&bcm63xx_1_driver); - if (ret) - goto out_63xx_1; - ret = phy_driver_register(&bcm63xx_2_driver); - if (ret) - goto out_63xx_2; - return ret; - -out_63xx_2: - phy_driver_unregister(&bcm63xx_1_driver); -out_63xx_1: - return ret; + return phy_drivers_register(bcm63xx_driver, + ARRAY_SIZE(bcm63xx_driver)); } static void __exit bcm63xx_phy_exit(void) { - phy_driver_unregister(&bcm63xx_1_driver); - phy_driver_unregister(&bcm63xx_2_driver); + phy_drivers_unregister(bcm63xx_driver, + ARRAY_SIZE(bcm63xx_driver)); } module_init(bcm63xx_phy_init); diff --git a/drivers/net/phy/bcm87xx.c b/drivers/net/phy/bcm87xx.c new file mode 100644 index 000000000000..2346b38b9837 --- /dev/null +++ b/drivers/net/phy/bcm87xx.c @@ -0,0 +1,231 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2011 - 2012 Cavium, Inc. 
+ */ + +#include <linux/module.h> +#include <linux/phy.h> +#include <linux/of.h> + +#define PHY_ID_BCM8706 0x0143bdc1 +#define PHY_ID_BCM8727 0x0143bff0 + +#define BCM87XX_PMD_RX_SIGNAL_DETECT (MII_ADDR_C45 | 0x1000a) +#define BCM87XX_10GBASER_PCS_STATUS (MII_ADDR_C45 | 0x30020) +#define BCM87XX_XGXS_LANE_STATUS (MII_ADDR_C45 | 0x40018) + +#define BCM87XX_LASI_CONTROL (MII_ADDR_C45 | 0x39002) +#define BCM87XX_LASI_STATUS (MII_ADDR_C45 | 0x39005) + +#if IS_ENABLED(CONFIG_OF_MDIO) +/* Set and/or override some configuration registers based on the + * broadcom,c45-reg-init property stored in the of_node for the phydev. + * + * broadcom,c45-reg-init = <devid reg mask value>,...; + * + * There may be one or more sets of <devid reg mask value>: + * + * devid: which sub-device to use. + * reg: the register. + * mask: if non-zero, ANDed with existing register value. + * value: ORed with the masked value and written to the regiser. + * + */ +static int bcm87xx_of_reg_init(struct phy_device *phydev) +{ + const __be32 *paddr; + const __be32 *paddr_end; + int len, ret; + + if (!phydev->dev.of_node) + return 0; + + paddr = of_get_property(phydev->dev.of_node, + "broadcom,c45-reg-init", &len); + if (!paddr) + return 0; + + paddr_end = paddr + (len /= sizeof(*paddr)); + + ret = 0; + + while (paddr + 3 < paddr_end) { + u16 devid = be32_to_cpup(paddr++); + u16 reg = be32_to_cpup(paddr++); + u16 mask = be32_to_cpup(paddr++); + u16 val_bits = be32_to_cpup(paddr++); + int val; + u32 regnum = MII_ADDR_C45 | (devid << 16) | reg; + val = 0; + if (mask) { + val = phy_read(phydev, regnum); + if (val < 0) { + ret = val; + goto err; + } + val &= mask; + } + val |= val_bits; + + ret = phy_write(phydev, regnum, val); + if (ret < 0) + goto err; + } +err: + return ret; +} +#else +static int bcm87xx_of_reg_init(struct phy_device *phydev) +{ + return 0; +} +#endif /* CONFIG_OF_MDIO */ + +static int bcm87xx_config_init(struct phy_device *phydev) +{ + phydev->supported = SUPPORTED_10000baseR_FEC; + phydev->advertising = ADVERTISED_10000baseR_FEC; + phydev->state = PHY_NOLINK; + phydev->autoneg = AUTONEG_DISABLE; + + bcm87xx_of_reg_init(phydev); + + return 0; +} + +static int bcm87xx_config_aneg(struct phy_device *phydev) +{ + return -EINVAL; +} + +static int bcm87xx_read_status(struct phy_device *phydev) +{ + int rx_signal_detect; + int pcs_status; + int xgxs_lane_status; + + rx_signal_detect = phy_read(phydev, BCM87XX_PMD_RX_SIGNAL_DETECT); + if (rx_signal_detect < 0) + return rx_signal_detect; + + if ((rx_signal_detect & 1) == 0) + goto no_link; + + pcs_status = phy_read(phydev, BCM87XX_10GBASER_PCS_STATUS); + if (pcs_status < 0) + return pcs_status; + + if ((pcs_status & 1) == 0) + goto no_link; + + xgxs_lane_status = phy_read(phydev, BCM87XX_XGXS_LANE_STATUS); + if (xgxs_lane_status < 0) + return xgxs_lane_status; + + if ((xgxs_lane_status & 0x1000) == 0) + goto no_link; + + phydev->speed = 10000; + phydev->link = 1; + phydev->duplex = 1; + return 0; + +no_link: + phydev->link = 0; + return 0; +} + +static int bcm87xx_config_intr(struct phy_device *phydev) +{ + int reg, err; + + reg = phy_read(phydev, BCM87XX_LASI_CONTROL); + + if (reg < 0) + return reg; + + if (phydev->interrupts == PHY_INTERRUPT_ENABLED) + reg |= 1; + else + reg &= ~1; + + err = phy_write(phydev, BCM87XX_LASI_CONTROL, reg); + return err; +} + +static int bcm87xx_did_interrupt(struct phy_device *phydev) +{ + int reg; + + reg = phy_read(phydev, BCM87XX_LASI_STATUS); + + if (reg < 0) { + dev_err(&phydev->dev, + "Error: Read of BCM87XX_LASI_STATUS failed: 
%d\n", reg); + return 0; + } + return (reg & 1) != 0; +} + +static int bcm87xx_ack_interrupt(struct phy_device *phydev) +{ + /* Reading the LASI status clears it. */ + bcm87xx_did_interrupt(phydev); + return 0; +} + +static int bcm8706_match_phy_device(struct phy_device *phydev) +{ + return phydev->c45_ids.device_ids[4] == PHY_ID_BCM8706; +} + +static int bcm8727_match_phy_device(struct phy_device *phydev) +{ + return phydev->c45_ids.device_ids[4] == PHY_ID_BCM8727; +} + +static struct phy_driver bcm87xx_driver[] = { +{ + .phy_id = PHY_ID_BCM8706, + .phy_id_mask = 0xffffffff, + .name = "Broadcom BCM8706", + .flags = PHY_HAS_INTERRUPT, + .config_init = bcm87xx_config_init, + .config_aneg = bcm87xx_config_aneg, + .read_status = bcm87xx_read_status, + .ack_interrupt = bcm87xx_ack_interrupt, + .config_intr = bcm87xx_config_intr, + .did_interrupt = bcm87xx_did_interrupt, + .match_phy_device = bcm8706_match_phy_device, + .driver = { .owner = THIS_MODULE }, +}, { + .phy_id = PHY_ID_BCM8727, + .phy_id_mask = 0xffffffff, + .name = "Broadcom BCM8727", + .flags = PHY_HAS_INTERRUPT, + .config_init = bcm87xx_config_init, + .config_aneg = bcm87xx_config_aneg, + .read_status = bcm87xx_read_status, + .ack_interrupt = bcm87xx_ack_interrupt, + .config_intr = bcm87xx_config_intr, + .did_interrupt = bcm87xx_did_interrupt, + .match_phy_device = bcm8727_match_phy_device, + .driver = { .owner = THIS_MODULE }, +} }; + +static int __init bcm87xx_init(void) +{ + return phy_drivers_register(bcm87xx_driver, + ARRAY_SIZE(bcm87xx_driver)); +} +module_init(bcm87xx_init); + +static void __exit bcm87xx_exit(void) +{ + phy_drivers_unregister(bcm87xx_driver, + ARRAY_SIZE(bcm87xx_driver)); +} +module_exit(bcm87xx_exit); diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c index 60338ff63092..f8c90ea75108 100644 --- a/drivers/net/phy/broadcom.c +++ b/drivers/net/phy/broadcom.c @@ -682,7 +682,8 @@ static int brcm_fet_config_intr(struct phy_device *phydev) return err; } -static struct phy_driver bcm5411_driver = { +static struct phy_driver broadcom_drivers[] = { +{ .phy_id = PHY_ID_BCM5411, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5411", @@ -695,9 +696,7 @@ static struct phy_driver bcm5411_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5421_driver = { +}, { .phy_id = PHY_ID_BCM5421, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5421", @@ -710,9 +709,7 @@ static struct phy_driver bcm5421_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5461_driver = { +}, { .phy_id = PHY_ID_BCM5461, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5461", @@ -725,9 +722,7 @@ static struct phy_driver bcm5461_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5464_driver = { +}, { .phy_id = PHY_ID_BCM5464, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5464", @@ -740,9 +735,7 @@ static struct phy_driver bcm5464_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5481_driver = { +}, { .phy_id = PHY_ID_BCM5481, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5481", @@ -755,9 +748,7 @@ static struct phy_driver bcm5481_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = 
bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5482_driver = { +}, { .phy_id = PHY_ID_BCM5482, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5482", @@ -770,9 +761,7 @@ static struct phy_driver bcm5482_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm50610_driver = { +}, { .phy_id = PHY_ID_BCM50610, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM50610", @@ -785,9 +774,7 @@ static struct phy_driver bcm50610_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm50610m_driver = { +}, { .phy_id = PHY_ID_BCM50610M, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM50610M", @@ -800,9 +787,7 @@ static struct phy_driver bcm50610m_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm57780_driver = { +}, { .phy_id = PHY_ID_BCM57780, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM57780", @@ -815,9 +800,7 @@ static struct phy_driver bcm57780_driver = { .ack_interrupt = bcm54xx_ack_interrupt, .config_intr = bcm54xx_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcmac131_driver = { +}, { .phy_id = PHY_ID_BCMAC131, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCMAC131", @@ -830,9 +813,7 @@ static struct phy_driver bcmac131_driver = { .ack_interrupt = brcm_fet_ack_interrupt, .config_intr = brcm_fet_config_intr, .driver = { .owner = THIS_MODULE }, -}; - -static struct phy_driver bcm5241_driver = { +}, { .phy_id = PHY_ID_BCM5241, .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5241", @@ -845,84 +826,18 @@ static struct phy_driver bcm5241_driver = { .ack_interrupt = brcm_fet_ack_interrupt, .config_intr = brcm_fet_config_intr, .driver = { .owner = THIS_MODULE }, -}; +} }; static int __init broadcom_init(void) { - int ret; - - ret = phy_driver_register(&bcm5411_driver); - if (ret) - goto out_5411; - ret = phy_driver_register(&bcm5421_driver); - if (ret) - goto out_5421; - ret = phy_driver_register(&bcm5461_driver); - if (ret) - goto out_5461; - ret = phy_driver_register(&bcm5464_driver); - if (ret) - goto out_5464; - ret = phy_driver_register(&bcm5481_driver); - if (ret) - goto out_5481; - ret = phy_driver_register(&bcm5482_driver); - if (ret) - goto out_5482; - ret = phy_driver_register(&bcm50610_driver); - if (ret) - goto out_50610; - ret = phy_driver_register(&bcm50610m_driver); - if (ret) - goto out_50610m; - ret = phy_driver_register(&bcm57780_driver); - if (ret) - goto out_57780; - ret = phy_driver_register(&bcmac131_driver); - if (ret) - goto out_ac131; - ret = phy_driver_register(&bcm5241_driver); - if (ret) - goto out_5241; - return ret; - -out_5241: - phy_driver_unregister(&bcmac131_driver); -out_ac131: - phy_driver_unregister(&bcm57780_driver); -out_57780: - phy_driver_unregister(&bcm50610m_driver); -out_50610m: - phy_driver_unregister(&bcm50610_driver); -out_50610: - phy_driver_unregister(&bcm5482_driver); -out_5482: - phy_driver_unregister(&bcm5481_driver); -out_5481: - phy_driver_unregister(&bcm5464_driver); -out_5464: - phy_driver_unregister(&bcm5461_driver); -out_5461: - phy_driver_unregister(&bcm5421_driver); -out_5421: - phy_driver_unregister(&bcm5411_driver); -out_5411: - return ret; + return phy_drivers_register(broadcom_drivers, + ARRAY_SIZE(broadcom_drivers)); } static void __exit 
broadcom_exit(void) { - phy_driver_unregister(&bcm5241_driver); - phy_driver_unregister(&bcmac131_driver); - phy_driver_unregister(&bcm57780_driver); - phy_driver_unregister(&bcm50610m_driver); - phy_driver_unregister(&bcm50610_driver); - phy_driver_unregister(&bcm5482_driver); - phy_driver_unregister(&bcm5481_driver); - phy_driver_unregister(&bcm5464_driver); - phy_driver_unregister(&bcm5461_driver); - phy_driver_unregister(&bcm5421_driver); - phy_driver_unregister(&bcm5411_driver); + phy_drivers_unregister(broadcom_drivers, + ARRAY_SIZE(broadcom_drivers)); } module_init(broadcom_init); diff --git a/drivers/net/phy/cicada.c b/drivers/net/phy/cicada.c index d28173161c21..db472ffb6e89 100644 --- a/drivers/net/phy/cicada.c +++ b/drivers/net/phy/cicada.c @@ -102,7 +102,8 @@ static int cis820x_config_intr(struct phy_device *phydev) } /* Cicada 8201, a.k.a Vitesse VSC8201 */ -static struct phy_driver cis8201_driver = { +static struct phy_driver cis820x_driver[] = { +{ .phy_id = 0x000fc410, .name = "Cicada Cis8201", .phy_id_mask = 0x000ffff0, @@ -113,11 +114,8 @@ static struct phy_driver cis8201_driver = { .read_status = &genphy_read_status, .ack_interrupt = &cis820x_ack_interrupt, .config_intr = &cis820x_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; - -/* Cicada 8204 */ -static struct phy_driver cis8204_driver = { + .driver = { .owner = THIS_MODULE,}, +}, { .phy_id = 0x000fc440, .name = "Cicada Cis8204", .phy_id_mask = 0x000fffc0, @@ -128,32 +126,19 @@ static struct phy_driver cis8204_driver = { .read_status = &genphy_read_status, .ack_interrupt = &cis820x_ack_interrupt, .config_intr = &cis820x_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; + .driver = { .owner = THIS_MODULE,}, +} }; static int __init cicada_init(void) { - int ret; - - ret = phy_driver_register(&cis8204_driver); - if (ret) - goto err1; - - ret = phy_driver_register(&cis8201_driver); - if (ret) - goto err2; - return 0; - -err2: - phy_driver_unregister(&cis8204_driver); -err1: - return ret; + return phy_drivers_register(cis820x_driver, + ARRAY_SIZE(cis820x_driver)); } static void __exit cicada_exit(void) { - phy_driver_unregister(&cis8204_driver); - phy_driver_unregister(&cis8201_driver); + phy_drivers_unregister(cis820x_driver, + ARRAY_SIZE(cis820x_driver)); } module_init(cicada_init); diff --git a/drivers/net/phy/davicom.c b/drivers/net/phy/davicom.c index 5f59cc064778..81c7bc010dd8 100644 --- a/drivers/net/phy/davicom.c +++ b/drivers/net/phy/davicom.c @@ -144,7 +144,8 @@ static int dm9161_ack_interrupt(struct phy_device *phydev) return (err < 0) ? 
err : 0; } -static struct phy_driver dm9161e_driver = { +static struct phy_driver dm91xx_driver[] = { +{ .phy_id = 0x0181b880, .name = "Davicom DM9161E", .phy_id_mask = 0x0ffffff0, @@ -153,9 +154,7 @@ static struct phy_driver dm9161e_driver = { .config_aneg = dm9161_config_aneg, .read_status = genphy_read_status, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver dm9161a_driver = { +}, { .phy_id = 0x0181b8a0, .name = "Davicom DM9161A", .phy_id_mask = 0x0ffffff0, @@ -164,9 +163,7 @@ static struct phy_driver dm9161a_driver = { .config_aneg = dm9161_config_aneg, .read_status = genphy_read_status, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver dm9131_driver = { +}, { .phy_id = 0x00181b80, .name = "Davicom DM9131", .phy_id_mask = 0x0ffffff0, @@ -177,38 +174,18 @@ static struct phy_driver dm9131_driver = { .ack_interrupt = dm9161_ack_interrupt, .config_intr = dm9161_config_intr, .driver = { .owner = THIS_MODULE,}, -}; +} }; static int __init davicom_init(void) { - int ret; - - ret = phy_driver_register(&dm9161e_driver); - if (ret) - goto err1; - - ret = phy_driver_register(&dm9161a_driver); - if (ret) - goto err2; - - ret = phy_driver_register(&dm9131_driver); - if (ret) - goto err3; - return 0; - - err3: - phy_driver_unregister(&dm9161a_driver); - err2: - phy_driver_unregister(&dm9161e_driver); - err1: - return ret; + return phy_drivers_register(dm91xx_driver, + ARRAY_SIZE(dm91xx_driver)); } static void __exit davicom_exit(void) { - phy_driver_unregister(&dm9161e_driver); - phy_driver_unregister(&dm9161a_driver); - phy_driver_unregister(&dm9131_driver); + phy_drivers_unregister(dm91xx_driver, + ARRAY_SIZE(dm91xx_driver)); } module_init(davicom_init); diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c index 940b29022d0c..b0da0226661f 100644 --- a/drivers/net/phy/dp83640.c +++ b/drivers/net/phy/dp83640.c @@ -17,6 +17,9 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
*/ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/ethtool.h> #include <linux/kernel.h> #include <linux/list.h> @@ -453,16 +456,16 @@ static void enable_status_frames(struct phy_device *phydev, bool on) ext_write(0, phydev, PAGE6, PSF_CFG1, ver); if (!phydev->attached_dev) { - pr_warning("dp83640: expected to find an attached netdevice\n"); + pr_warn("expected to find an attached netdevice\n"); return; } if (on) { if (dev_mc_add(phydev->attached_dev, status_frame_dst)) - pr_warning("dp83640: failed to add mc address\n"); + pr_warn("failed to add mc address\n"); } else { if (dev_mc_del(phydev->attached_dev, status_frame_dst)) - pr_warning("dp83640: failed to delete mc address\n"); + pr_warn("failed to delete mc address\n"); } } @@ -582,9 +585,9 @@ static void recalibrate(struct dp83640_clock *clock) * read out and correct offsets */ val = ext_read(master, PAGE4, PTP_STS); - pr_info("master PTP_STS 0x%04hx", val); + pr_info("master PTP_STS 0x%04hx\n", val); val = ext_read(master, PAGE4, PTP_ESTS); - pr_info("master PTP_ESTS 0x%04hx", val); + pr_info("master PTP_ESTS 0x%04hx\n", val); event_ts.ns_lo = ext_read(master, PAGE4, PTP_EDATA); event_ts.ns_hi = ext_read(master, PAGE4, PTP_EDATA); event_ts.sec_lo = ext_read(master, PAGE4, PTP_EDATA); @@ -594,9 +597,9 @@ static void recalibrate(struct dp83640_clock *clock) list_for_each(this, &clock->phylist) { tmp = list_entry(this, struct dp83640_private, list); val = ext_read(tmp->phydev, PAGE4, PTP_STS); - pr_info("slave PTP_STS 0x%04hx", val); + pr_info("slave PTP_STS 0x%04hx\n", val); val = ext_read(tmp->phydev, PAGE4, PTP_ESTS); - pr_info("slave PTP_ESTS 0x%04hx", val); + pr_info("slave PTP_ESTS 0x%04hx\n", val); event_ts.ns_lo = ext_read(tmp->phydev, PAGE4, PTP_EDATA); event_ts.ns_hi = ext_read(tmp->phydev, PAGE4, PTP_EDATA); event_ts.sec_lo = ext_read(tmp->phydev, PAGE4, PTP_EDATA); @@ -686,7 +689,7 @@ static void decode_rxts(struct dp83640_private *dp83640, prune_rx_ts(dp83640); if (list_empty(&dp83640->rxpool)) { - pr_debug("dp83640: rx timestamp pool is empty\n"); + pr_debug("rx timestamp pool is empty\n"); goto out; } rxts = list_first_entry(&dp83640->rxpool, struct rxts, list); @@ -709,7 +712,7 @@ static void decode_txts(struct dp83640_private *dp83640, skb = skb_dequeue(&dp83640->tx_queue); if (!skb) { - pr_debug("dp83640: have timestamp but tx_queue empty\n"); + pr_debug("have timestamp but tx_queue empty\n"); return; } ns = phy2txts(phy_txts); @@ -847,7 +850,7 @@ static void dp83640_free_clocks(void) list_for_each_safe(this, next, &phyter_clocks) { clock = list_entry(this, struct dp83640_clock, list); if (!list_empty(&clock->phylist)) { - pr_warning("phy list non-empty while unloading"); + pr_warn("phy list non-empty while unloading\n"); BUG(); } list_del(&clock->list); diff --git a/drivers/net/phy/fixed.c b/drivers/net/phy/fixed.c index 633680d0828e..ba55adfc7aae 100644 --- a/drivers/net/phy/fixed.c +++ b/drivers/net/phy/fixed.c @@ -70,7 +70,7 @@ static int fixed_phy_update_regs(struct fixed_phy *fp) lpa |= LPA_10FULL; break; default: - printk(KERN_WARNING "fixed phy: unknown speed\n"); + pr_warn("fixed phy: unknown speed\n"); return -EINVAL; } } else { @@ -90,7 +90,7 @@ static int fixed_phy_update_regs(struct fixed_phy *fp) lpa |= LPA_10HALF; break; default: - printk(KERN_WARNING "fixed phy: unknown speed\n"); + pr_warn("fixed phy: unknown speed\n"); return -EINVAL; } } diff --git a/drivers/net/phy/icplus.c b/drivers/net/phy/icplus.c index 47f8e8939266..d5199cb4caec 100644 --- a/drivers/net/phy/icplus.c +++ 
b/drivers/net/phy/icplus.c @@ -202,7 +202,8 @@ static int ip101a_g_ack_interrupt(struct phy_device *phydev) return 0; } -static struct phy_driver ip175c_driver = { +static struct phy_driver icplus_driver[] = { +{ .phy_id = 0x02430d80, .name = "ICPlus IP175C", .phy_id_mask = 0x0ffffff0, @@ -213,9 +214,7 @@ static struct phy_driver ip175c_driver = { .suspend = genphy_suspend, .resume = genphy_resume, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ip1001_driver = { +}, { .phy_id = 0x02430d90, .name = "ICPlus IP1001", .phy_id_mask = 0x0ffffff0, @@ -227,9 +226,7 @@ static struct phy_driver ip1001_driver = { .suspend = genphy_suspend, .resume = genphy_resume, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ip101a_g_driver = { +}, { .phy_id = 0x02430c54, .name = "ICPlus IP101A/G", .phy_id_mask = 0x0ffffff0, @@ -243,28 +240,18 @@ static struct phy_driver ip101a_g_driver = { .suspend = genphy_suspend, .resume = genphy_resume, .driver = { .owner = THIS_MODULE,}, -}; +} }; static int __init icplus_init(void) { - int ret = 0; - - ret = phy_driver_register(&ip1001_driver); - if (ret < 0) - return -ENODEV; - - ret = phy_driver_register(&ip101a_g_driver); - if (ret < 0) - return -ENODEV; - - return phy_driver_register(&ip175c_driver); + return phy_drivers_register(icplus_driver, + ARRAY_SIZE(icplus_driver)); } static void __exit icplus_exit(void) { - phy_driver_unregister(&ip1001_driver); - phy_driver_unregister(&ip101a_g_driver); - phy_driver_unregister(&ip175c_driver); + phy_drivers_unregister(icplus_driver, + ARRAY_SIZE(icplus_driver)); } module_init(icplus_init); diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c index 6f6e8b616a62..6d1e3fcc43e2 100644 --- a/drivers/net/phy/lxt.c +++ b/drivers/net/phy/lxt.c @@ -149,7 +149,8 @@ static int lxt973_config_aneg(struct phy_device *phydev) return phydev->priv ? 
0 : genphy_config_aneg(phydev); } -static struct phy_driver lxt970_driver = { +static struct phy_driver lxt97x_driver[] = { +{ .phy_id = 0x78100000, .name = "LXT970", .phy_id_mask = 0xfffffff0, @@ -160,10 +161,8 @@ static struct phy_driver lxt970_driver = { .read_status = genphy_read_status, .ack_interrupt = lxt970_ack_interrupt, .config_intr = lxt970_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver lxt971_driver = { + .driver = { .owner = THIS_MODULE,}, +}, { .phy_id = 0x001378e0, .name = "LXT971", .phy_id_mask = 0xfffffff0, @@ -173,10 +172,8 @@ static struct phy_driver lxt971_driver = { .read_status = genphy_read_status, .ack_interrupt = lxt971_ack_interrupt, .config_intr = lxt971_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver lxt973_driver = { + .driver = { .owner = THIS_MODULE,}, +}, { .phy_id = 0x00137a10, .name = "LXT973", .phy_id_mask = 0xfffffff0, @@ -185,39 +182,19 @@ static struct phy_driver lxt973_driver = { .probe = lxt973_probe, .config_aneg = lxt973_config_aneg, .read_status = genphy_read_status, - .driver = { .owner = THIS_MODULE,}, -}; + .driver = { .owner = THIS_MODULE,}, +} }; static int __init lxt_init(void) { - int ret; - - ret = phy_driver_register(&lxt970_driver); - if (ret) - goto err1; - - ret = phy_driver_register(&lxt971_driver); - if (ret) - goto err2; - - ret = phy_driver_register(&lxt973_driver); - if (ret) - goto err3; - return 0; - - err3: - phy_driver_unregister(&lxt971_driver); - err2: - phy_driver_unregister(&lxt970_driver); - err1: - return ret; + return phy_drivers_register(lxt97x_driver, + ARRAY_SIZE(lxt97x_driver)); } static void __exit lxt_exit(void) { - phy_driver_unregister(&lxt970_driver); - phy_driver_unregister(&lxt971_driver); - phy_driver_unregister(&lxt973_driver); + phy_drivers_unregister(lxt97x_driver, + ARRAY_SIZE(lxt97x_driver)); } module_init(lxt_init); diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c index 418928d644bf..5d2a3f215887 100644 --- a/drivers/net/phy/marvell.c +++ b/drivers/net/phy/marvell.c @@ -826,28 +826,14 @@ static struct phy_driver marvell_drivers[] = { static int __init marvell_init(void) { - int ret; - int i; - - for (i = 0; i < ARRAY_SIZE(marvell_drivers); i++) { - ret = phy_driver_register(&marvell_drivers[i]); - - if (ret) { - while (i-- > 0) - phy_driver_unregister(&marvell_drivers[i]); - return ret; - } - } - - return 0; + return phy_drivers_register(marvell_drivers, + ARRAY_SIZE(marvell_drivers)); } static void __exit marvell_exit(void) { - int i; - - for (i = 0; i < ARRAY_SIZE(marvell_drivers); i++) - phy_driver_unregister(&marvell_drivers[i]); + phy_drivers_unregister(marvell_drivers, + ARRAY_SIZE(marvell_drivers)); } module_init(marvell_init); diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c index 5061608f408c..170eb411ab5d 100644 --- a/drivers/net/phy/mdio_bus.c +++ b/drivers/net/phy/mdio_bus.c @@ -13,6 +13,9 @@ * option) any later version. 
* */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/string.h> #include <linux/errno.h> @@ -22,6 +25,7 @@ #include <linux/init.h> #include <linux/delay.h> #include <linux/device.h> +#include <linux/of_device.h> #include <linux/netdevice.h> #include <linux/etherdevice.h> #include <linux/skbuff.h> @@ -148,7 +152,7 @@ int mdiobus_register(struct mii_bus *bus) err = device_register(&bus->dev); if (err) { - printk(KERN_ERR "mii_bus %s failed to register\n", bus->id); + pr_err("mii_bus %s failed to register\n", bus->id); return -EINVAL; } @@ -229,7 +233,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr) struct phy_device *phydev; int err; - phydev = get_phy_device(bus, addr); + phydev = get_phy_device(bus, addr, false); if (IS_ERR(phydev) || phydev == NULL) return phydev; @@ -305,6 +309,12 @@ static int mdio_bus_match(struct device *dev, struct device_driver *drv) struct phy_device *phydev = to_phy_device(dev); struct phy_driver *phydrv = to_phy_driver(drv); + if (of_driver_match_device(dev, drv)) + return 1; + + if (phydrv->match_phy_device) + return phydrv->match_phy_device(phydev); + return ((phydrv->phy_id & phydrv->phy_id_mask) == (phydev->phy_id & phydrv->phy_id_mask)); } diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c index 9d6c80c8a0cf..cf287e0eb408 100644 --- a/drivers/net/phy/micrel.c +++ b/drivers/net/phy/micrel.c @@ -114,7 +114,8 @@ static int ks8051_config_init(struct phy_device *phydev) return 0; } -static struct phy_driver ks8737_driver = { +static struct phy_driver ksphy_driver[] = { +{ .phy_id = PHY_ID_KS8737, .phy_id_mask = 0x00fffff0, .name = "Micrel KS8737", @@ -126,9 +127,7 @@ static struct phy_driver ks8737_driver = { .ack_interrupt = kszphy_ack_interrupt, .config_intr = ks8737_config_intr, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ks8041_driver = { +}, { .phy_id = PHY_ID_KS8041, .phy_id_mask = 0x00fffff0, .name = "Micrel KS8041", @@ -141,9 +140,7 @@ static struct phy_driver ks8041_driver = { .ack_interrupt = kszphy_ack_interrupt, .config_intr = kszphy_config_intr, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ks8051_driver = { +}, { .phy_id = PHY_ID_KS8051, .phy_id_mask = 0x00fffff0, .name = "Micrel KS8051", @@ -156,9 +153,7 @@ static struct phy_driver ks8051_driver = { .ack_interrupt = kszphy_ack_interrupt, .config_intr = kszphy_config_intr, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ks8001_driver = { +}, { .phy_id = PHY_ID_KS8001, .name = "Micrel KS8001 or KS8721", .phy_id_mask = 0x00ffffff, @@ -170,9 +165,7 @@ static struct phy_driver ks8001_driver = { .ack_interrupt = kszphy_ack_interrupt, .config_intr = kszphy_config_intr, .driver = { .owner = THIS_MODULE,}, -}; - -static struct phy_driver ksz9021_driver = { +}, { .phy_id = PHY_ID_KSZ9021, .phy_id_mask = 0x000ffffe, .name = "Micrel KSZ9021 Gigabit PHY", @@ -185,51 +178,18 @@ static struct phy_driver ksz9021_driver = { .ack_interrupt = kszphy_ack_interrupt, .config_intr = ksz9021_config_intr, .driver = { .owner = THIS_MODULE, }, -}; +} }; static int __init ksphy_init(void) { - int ret; - - ret = phy_driver_register(&ks8001_driver); - if (ret) - goto err1; - - ret = phy_driver_register(&ksz9021_driver); - if (ret) - goto err2; - - ret = phy_driver_register(&ks8737_driver); - if (ret) - goto err3; - ret = phy_driver_register(&ks8041_driver); - if (ret) - goto err4; - ret = phy_driver_register(&ks8051_driver); - if (ret) - goto err5; - - return 0; - -err5: - 
phy_driver_unregister(&ks8041_driver); -err4: - phy_driver_unregister(&ks8737_driver); -err3: - phy_driver_unregister(&ksz9021_driver); -err2: - phy_driver_unregister(&ks8001_driver); -err1: - return ret; + return phy_drivers_register(ksphy_driver, + ARRAY_SIZE(ksphy_driver)); } static void __exit ksphy_exit(void) { - phy_driver_unregister(&ks8001_driver); - phy_driver_unregister(&ks8737_driver); - phy_driver_unregister(&ksz9021_driver); - phy_driver_unregister(&ks8041_driver); - phy_driver_unregister(&ks8051_driver); + phy_drivers_unregister(ksphy_driver, + ARRAY_SIZE(ksphy_driver)); } module_init(ksphy_init); diff --git a/drivers/net/phy/national.c b/drivers/net/phy/national.c index 04bb8fcc0cb5..9a5f234d95b0 100644 --- a/drivers/net/phy/national.c +++ b/drivers/net/phy/national.c @@ -15,6 +15,8 @@ * */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/module.h> #include <linux/mii.h> @@ -22,6 +24,8 @@ #include <linux/phy.h> #include <linux/netdevice.h> +#define DEBUG + /* DP83865 phy identifier values */ #define DP83865_PHY_ID 0x20005c7a @@ -112,8 +116,8 @@ static void ns_10_base_t_hdx_loopack(struct phy_device *phydev, int disable) ns_exp_write(phydev, 0x1c0, ns_exp_read(phydev, 0x1c0) & 0xfffe); - printk(KERN_DEBUG "DP83865 PHY: 10BASE-T HDX loopback %s\n", - (ns_exp_read(phydev, 0x1c0) & 0x0001) ? "off" : "on"); + pr_debug("10BASE-T HDX loopback %s\n", + (ns_exp_read(phydev, 0x1c0) & 0x0001) ? "off" : "on"); } static int ns_config_init(struct phy_device *phydev) diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 3cbda0851f83..7ca2ff97c368 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -15,6 +15,9 @@ * option) any later version. * */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/string.h> #include <linux/errno.h> @@ -32,6 +35,7 @@ #include <linux/phy.h> #include <linux/timer.h> #include <linux/workqueue.h> +#include <linux/mdio.h> #include <linux/atomic.h> #include <asm/io.h> @@ -44,18 +48,16 @@ */ void phy_print_status(struct phy_device *phydev) { - pr_info("PHY: %s - Link is %s", dev_name(&phydev->dev), - phydev->link ? "Up" : "Down"); if (phydev->link) - printk(KERN_CONT " - %d/%s", phydev->speed, - DUPLEX_FULL == phydev->duplex ? - "Full" : "Half"); - - printk(KERN_CONT "\n"); + pr_info("%s - Link is Up - %d/%s\n", + dev_name(&phydev->dev), + phydev->speed, + DUPLEX_FULL == phydev->duplex ? "Full" : "Half"); + else + pr_info("%s - Link is Down\n", dev_name(&phydev->dev)); } EXPORT_SYMBOL(phy_print_status); - /** * phy_clear_interrupt - Ack the phy device's interrupt * @phydev: the phy_device struct @@ -482,9 +484,8 @@ static void phy_force_reduction(struct phy_device *phydev) phydev->speed = settings[idx].speed; phydev->duplex = settings[idx].duplex; - pr_info("Trying %d/%s\n", phydev->speed, - DUPLEX_FULL == phydev->duplex ? - "FULL" : "HALF"); + pr_info("Trying %d/%s\n", + phydev->speed, DUPLEX_FULL == phydev->duplex ? "FULL" : "HALF"); } @@ -598,9 +599,8 @@ int phy_start_interrupts(struct phy_device *phydev) IRQF_SHARED, "phy_interrupt", phydev) < 0) { - printk(KERN_WARNING "%s: Can't get IRQ %d (PHY)\n", - phydev->bus->name, - phydev->irq); + pr_warn("%s: Can't get IRQ %d (PHY)\n", + phydev->bus->name, phydev->irq); phydev->irq = PHY_POLL; return 0; } @@ -838,10 +838,10 @@ void phy_state_machine(struct work_struct *work) phydev->autoneg = AUTONEG_DISABLE; - pr_info("Trying %d/%s\n", phydev->speed, - DUPLEX_FULL == - phydev->duplex ? 
- "FULL" : "HALF"); + pr_info("Trying %d/%s\n", + phydev->speed, + DUPLEX_FULL == phydev->duplex ? + "FULL" : "HALF"); } break; case PHY_NOLINK: @@ -968,3 +968,283 @@ void phy_state_machine(struct work_struct *work) schedule_delayed_work(&phydev->state_queue, PHY_STATE_TIME * HZ); } + +static inline void mmd_phy_indirect(struct mii_bus *bus, int prtad, int devad, + int addr) +{ + /* Write the desired MMD Devad */ + bus->write(bus, addr, MII_MMD_CTRL, devad); + + /* Write the desired MMD register address */ + bus->write(bus, addr, MII_MMD_DATA, prtad); + + /* Select the Function : DATA with no post increment */ + bus->write(bus, addr, MII_MMD_CTRL, (devad | MII_MMD_CTRL_NOINCR)); +} + +/** + * phy_read_mmd_indirect - reads data from the MMD registers + * @bus: the target MII bus + * @prtad: MMD Address + * @devad: MMD DEVAD + * @addr: PHY address on the MII bus + * + * Description: it reads data from the MMD registers (clause 22 to access to + * clause 45) of the specified phy address. + * To read these register we have: + * 1) Write reg 13 // DEVAD + * 2) Write reg 14 // MMD Address + * 3) Write reg 13 // MMD Data Command for MMD DEVAD + * 3) Read reg 14 // Read MMD data + */ +static int phy_read_mmd_indirect(struct mii_bus *bus, int prtad, int devad, + int addr) +{ + u32 ret; + + mmd_phy_indirect(bus, prtad, devad, addr); + + /* Read the content of the MMD's selected register */ + ret = bus->read(bus, addr, MII_MMD_DATA); + + return ret; +} + +/** + * phy_write_mmd_indirect - writes data to the MMD registers + * @bus: the target MII bus + * @prtad: MMD Address + * @devad: MMD DEVAD + * @addr: PHY address on the MII bus + * @data: data to write in the MMD register + * + * Description: Write data from the MMD registers of the specified + * phy address. 
+ * To write these register we have: + * 1) Write reg 13 // DEVAD + * 2) Write reg 14 // MMD Address + * 3) Write reg 13 // MMD Data Command for MMD DEVAD + * 3) Write reg 14 // Write MMD data + */ +static void phy_write_mmd_indirect(struct mii_bus *bus, int prtad, int devad, + int addr, u32 data) +{ + mmd_phy_indirect(bus, prtad, devad, addr); + + /* Write the data into MMD's selected register */ + bus->write(bus, addr, MII_MMD_DATA, data); +} + +static u32 phy_eee_to_adv(u16 eee_adv) +{ + u32 adv = 0; + + if (eee_adv & MDIO_EEE_100TX) + adv |= ADVERTISED_100baseT_Full; + if (eee_adv & MDIO_EEE_1000T) + adv |= ADVERTISED_1000baseT_Full; + if (eee_adv & MDIO_EEE_10GT) + adv |= ADVERTISED_10000baseT_Full; + if (eee_adv & MDIO_EEE_1000KX) + adv |= ADVERTISED_1000baseKX_Full; + if (eee_adv & MDIO_EEE_10GKX4) + adv |= ADVERTISED_10000baseKX4_Full; + if (eee_adv & MDIO_EEE_10GKR) + adv |= ADVERTISED_10000baseKR_Full; + + return adv; +} + +static u32 phy_eee_to_supported(u16 eee_caported) +{ + u32 supported = 0; + + if (eee_caported & MDIO_EEE_100TX) + supported |= SUPPORTED_100baseT_Full; + if (eee_caported & MDIO_EEE_1000T) + supported |= SUPPORTED_1000baseT_Full; + if (eee_caported & MDIO_EEE_10GT) + supported |= SUPPORTED_10000baseT_Full; + if (eee_caported & MDIO_EEE_1000KX) + supported |= SUPPORTED_1000baseKX_Full; + if (eee_caported & MDIO_EEE_10GKX4) + supported |= SUPPORTED_10000baseKX4_Full; + if (eee_caported & MDIO_EEE_10GKR) + supported |= SUPPORTED_10000baseKR_Full; + + return supported; +} + +static u16 phy_adv_to_eee(u32 adv) +{ + u16 reg = 0; + + if (adv & ADVERTISED_100baseT_Full) + reg |= MDIO_EEE_100TX; + if (adv & ADVERTISED_1000baseT_Full) + reg |= MDIO_EEE_1000T; + if (adv & ADVERTISED_10000baseT_Full) + reg |= MDIO_EEE_10GT; + if (adv & ADVERTISED_1000baseKX_Full) + reg |= MDIO_EEE_1000KX; + if (adv & ADVERTISED_10000baseKX4_Full) + reg |= MDIO_EEE_10GKX4; + if (adv & ADVERTISED_10000baseKR_Full) + reg |= MDIO_EEE_10GKR; + + return reg; +} + +/** + * phy_init_eee - init and check the EEE feature + * @phydev: target phy_device struct + * @clk_stop_enable: PHY may stop the clock during LPI + * + * Description: it checks if the Energy-Efficient Ethernet (EEE) + * is supported by looking at the MMD registers 3.20 and 7.60/61 + * and it programs the MMD register 3.0 setting the "Clock stop enable" + * bit if required. + */ +int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable) +{ + int ret = -EPROTONOSUPPORT; + + /* According to 802.3az,the EEE is supported only in full duplex-mode. + * Also EEE feature is active when core is operating with MII, GMII + * or RGMII. + */ + if ((phydev->duplex == DUPLEX_FULL) && + ((phydev->interface == PHY_INTERFACE_MODE_MII) || + (phydev->interface == PHY_INTERFACE_MODE_GMII) || + (phydev->interface == PHY_INTERFACE_MODE_RGMII))) { + int eee_lp, eee_cap, eee_adv; + u32 lp, cap, adv; + int idx, status; + + /* Read phy status to properly get the right settings */ + status = phy_read_status(phydev); + if (status) + return status; + + /* First check if the EEE ability is supported */ + eee_cap = phy_read_mmd_indirect(phydev->bus, MDIO_PCS_EEE_ABLE, + MDIO_MMD_PCS, phydev->addr); + if (eee_cap < 0) + return eee_cap; + + cap = phy_eee_to_supported(eee_cap); + if (!cap) + goto eee_exit; + + /* Check which link settings negotiated and verify it in + * the EEE advertising registers. 
+ */ + eee_lp = phy_read_mmd_indirect(phydev->bus, MDIO_AN_EEE_LPABLE, + MDIO_MMD_AN, phydev->addr); + if (eee_lp < 0) + return eee_lp; + + eee_adv = phy_read_mmd_indirect(phydev->bus, MDIO_AN_EEE_ADV, + MDIO_MMD_AN, phydev->addr); + if (eee_adv < 0) + return eee_adv; + + adv = phy_eee_to_adv(eee_adv); + lp = phy_eee_to_adv(eee_lp); + idx = phy_find_setting(phydev->speed, phydev->duplex); + if ((lp & adv & settings[idx].setting)) + goto eee_exit; + + if (clk_stop_enable) { + /* Configure the PHY to stop receiving xMII + * clock while it is signaling LPI. + */ + int val = phy_read_mmd_indirect(phydev->bus, MDIO_CTRL1, + MDIO_MMD_PCS, + phydev->addr); + if (val < 0) + return val; + + val |= MDIO_PCS_CTRL1_CLKSTOP_EN; + phy_write_mmd_indirect(phydev->bus, MDIO_CTRL1, + MDIO_MMD_PCS, phydev->addr, val); + } + + ret = 0; /* EEE supported */ + } + +eee_exit: + return ret; +} +EXPORT_SYMBOL(phy_init_eee); + +/** + * phy_get_eee_err - report the EEE wake error count + * @phydev: target phy_device struct + * + * Description: it is to report the number of time where the PHY + * failed to complete its normal wake sequence. + */ +int phy_get_eee_err(struct phy_device *phydev) +{ + return phy_read_mmd_indirect(phydev->bus, MDIO_PCS_EEE_WK_ERR, + MDIO_MMD_PCS, phydev->addr); + +} +EXPORT_SYMBOL(phy_get_eee_err); + +/** + * phy_ethtool_get_eee - get EEE supported and status + * @phydev: target phy_device struct + * @data: ethtool_eee data + * + * Description: it reportes the Supported/Advertisement/LP Advertisement + * capabilities. + */ +int phy_ethtool_get_eee(struct phy_device *phydev, struct ethtool_eee *data) +{ + int val; + + /* Get Supported EEE */ + val = phy_read_mmd_indirect(phydev->bus, MDIO_PCS_EEE_ABLE, + MDIO_MMD_PCS, phydev->addr); + if (val < 0) + return val; + data->supported = phy_eee_to_supported(val); + + /* Get advertisement EEE */ + val = phy_read_mmd_indirect(phydev->bus, MDIO_AN_EEE_ADV, + MDIO_MMD_AN, phydev->addr); + if (val < 0) + return val; + data->advertised = phy_eee_to_adv(val); + + /* Get LP advertisement EEE */ + val = phy_read_mmd_indirect(phydev->bus, MDIO_AN_EEE_LPABLE, + MDIO_MMD_AN, phydev->addr); + if (val < 0) + return val; + data->lp_advertised = phy_eee_to_adv(val); + + return 0; +} +EXPORT_SYMBOL(phy_ethtool_get_eee); + +/** + * phy_ethtool_set_eee - set EEE supported and status + * @phydev: target phy_device struct + * @data: ethtool_eee data + * + * Description: it is to program the Advertisement EEE register. + */ +int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data) +{ + int val; + + val = phy_adv_to_eee(data->advertised); + phy_write_mmd_indirect(phydev->bus, MDIO_AN_EEE_ADV, MDIO_MMD_AN, + phydev->addr, val); + + return 0; +} +EXPORT_SYMBOL(phy_ethtool_set_eee); diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index de86a5582224..8af46e88a181 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -14,6 +14,9 @@ * option) any later version. 
* */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/string.h> #include <linux/errno.h> @@ -149,8 +152,8 @@ int phy_scan_fixups(struct phy_device *phydev) } EXPORT_SYMBOL(phy_scan_fixups); -static struct phy_device* phy_device_create(struct mii_bus *bus, - int addr, int phy_id) +struct phy_device *phy_device_create(struct mii_bus *bus, int addr, int phy_id, + bool is_c45, struct phy_c45_device_ids *c45_ids) { struct phy_device *dev; @@ -171,8 +174,11 @@ static struct phy_device* phy_device_create(struct mii_bus *bus, dev->autoneg = AUTONEG_ENABLE; + dev->is_c45 = is_c45; dev->addr = addr; dev->phy_id = phy_id; + if (c45_ids) + dev->c45_ids = *c45_ids; dev->bus = bus; dev->dev.parent = bus->parent; dev->dev.bus = &mdio_bus_type; @@ -197,20 +203,99 @@ static struct phy_device* phy_device_create(struct mii_bus *bus, return dev; } +EXPORT_SYMBOL(phy_device_create); + +/** + * get_phy_c45_ids - reads the specified addr for its 802.3-c45 IDs. + * @bus: the target MII bus + * @addr: PHY address on the MII bus + * @phy_id: where to store the ID retrieved. + * @c45_ids: where to store the c45 ID information. + * + * If the PHY devices-in-package appears to be valid, it and the + * corresponding identifiers are stored in @c45_ids, zero is stored + * in @phy_id. Otherwise 0xffffffff is stored in @phy_id. Returns + * zero on success. + * + */ +static int get_phy_c45_ids(struct mii_bus *bus, int addr, u32 *phy_id, + struct phy_c45_device_ids *c45_ids) { + int phy_reg; + int i, reg_addr; + const int num_ids = ARRAY_SIZE(c45_ids->device_ids); + + /* Find first non-zero Devices In package. Device + * zero is reserved, so don't probe it. + */ + for (i = 1; + i < num_ids && c45_ids->devices_in_package == 0; + i++) { + reg_addr = MII_ADDR_C45 | i << 16 | 6; + phy_reg = mdiobus_read(bus, addr, reg_addr); + if (phy_reg < 0) + return -EIO; + c45_ids->devices_in_package = (phy_reg & 0xffff) << 16; + + reg_addr = MII_ADDR_C45 | i << 16 | 5; + phy_reg = mdiobus_read(bus, addr, reg_addr); + if (phy_reg < 0) + return -EIO; + c45_ids->devices_in_package |= (phy_reg & 0xffff); + + /* If mostly Fs, there is no device there, + * let's get out of here. + */ + if ((c45_ids->devices_in_package & 0x1fffffff) == 0x1fffffff) { + *phy_id = 0xffffffff; + return 0; + } + } + + /* Now probe Device Identifiers for each device present. */ + for (i = 1; i < num_ids; i++) { + if (!(c45_ids->devices_in_package & (1 << i))) + continue; + + reg_addr = MII_ADDR_C45 | i << 16 | MII_PHYSID1; + phy_reg = mdiobus_read(bus, addr, reg_addr); + if (phy_reg < 0) + return -EIO; + c45_ids->device_ids[i] = (phy_reg & 0xffff) << 16; + + reg_addr = MII_ADDR_C45 | i << 16 | MII_PHYSID2; + phy_reg = mdiobus_read(bus, addr, reg_addr); + if (phy_reg < 0) + return -EIO; + c45_ids->device_ids[i] |= (phy_reg & 0xffff); + } + *phy_id = 0; + return 0; +} /** * get_phy_id - reads the specified addr for its ID. * @bus: the target MII bus * @addr: PHY address on the MII bus * @phy_id: where to store the ID retrieved. + * @is_c45: If true the PHY uses the 802.3 clause 45 protocol + * @c45_ids: where to store the c45 ID information. + * + * Description: In the case of a 802.3-c22 PHY, reads the ID registers + * of the PHY at @addr on the @bus, stores it in @phy_id and returns + * zero on success. + * + * In the case of a 802.3-c45 PHY, get_phy_c45_ids() is invoked, and + * its return value is in turn returned. 
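get_phy_c45_ids() builds each clause-45 register address as MII_ADDR_C45 | devad << 16 | regnum and assembles a 32-bit device identifier from the two 16-bit ID registers of every MMD present in the package. A standalone sketch of that encoding and assembly, using a hypothetical bus-read callback and a stand-in flag bit:

/* Sketch (userspace, hypothetical bus callback): encode a clause-45 register
 * address and assemble a 32-bit device ID from the two 16-bit ID registers,
 * mirroring the second loop in get_phy_c45_ids(). ADDR_C45_FLAG is a stand-in
 * marker bit, not the kernel's MII_ADDR_C45 value. */
#include <stdio.h>
#include <stdint.h>

#define ADDR_C45_FLAG (1u << 30)    /* stand-in for MII_ADDR_C45 */

/* Hypothetical bus accessor: returns the 16-bit register value. */
static int fake_mdio_read(int phy_addr, uint32_t reg_addr)
{
    (void)phy_addr;
    /* Pretend ID register 2 reads 0x0141 and ID register 3 reads 0x0cc2. */
    return ((reg_addr & 0xffff) == 2) ? 0x0141 : 0x0cc2;
}

static uint32_t c45_addr(int devad, int regnum)
{
    return ADDR_C45_FLAG | (uint32_t)devad << 16 | (uint32_t)regnum;
}

int main(void)
{
    int devad = 1;    /* e.g. the PMA/PMD MMD */
    uint32_t id;

    /* Registers 2 and 3 hold the upper and lower halves of the device ID. */
    id  = (uint32_t)(fake_mdio_read(0, c45_addr(devad, 2)) & 0xffff) << 16;
    id |= (uint32_t)(fake_mdio_read(0, c45_addr(devad, 3)) & 0xffff);

    printf("MMD %d device id: 0x%08x\n", devad, (unsigned int)id);
    return 0;
}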
* - * Description: Reads the ID registers of the PHY at @addr on the - * @bus, stores it in @phy_id and returns zero on success. */ -static int get_phy_id(struct mii_bus *bus, int addr, u32 *phy_id) +static int get_phy_id(struct mii_bus *bus, int addr, u32 *phy_id, + bool is_c45, struct phy_c45_device_ids *c45_ids) { int phy_reg; + if (is_c45) + return get_phy_c45_ids(bus, addr, phy_id, c45_ids); + /* Grab the bits from PHYIR1, and put them * in the upper half */ phy_reg = mdiobus_read(bus, addr, MII_PHYSID1); @@ -235,17 +320,19 @@ static int get_phy_id(struct mii_bus *bus, int addr, u32 *phy_id) * get_phy_device - reads the specified PHY device and returns its @phy_device struct * @bus: the target MII bus * @addr: PHY address on the MII bus + * @is_c45: If true the PHY uses the 802.3 clause 45 protocol * * Description: Reads the ID registers of the PHY at @addr on the * @bus, then allocates and returns the phy_device to represent it. */ -struct phy_device * get_phy_device(struct mii_bus *bus, int addr) +struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45) { + struct phy_c45_device_ids c45_ids = {0}; struct phy_device *dev = NULL; - u32 phy_id; + u32 phy_id = 0; int r; - r = get_phy_id(bus, addr, &phy_id); + r = get_phy_id(bus, addr, &phy_id, is_c45, &c45_ids); if (r) return ERR_PTR(r); @@ -253,7 +340,7 @@ struct phy_device * get_phy_device(struct mii_bus *bus, int addr) if ((phy_id & 0x1fffffff) == 0x1fffffff) return NULL; - dev = phy_device_create(bus, addr, phy_id); + dev = phy_device_create(bus, addr, phy_id, is_c45, &c45_ids); return dev; } @@ -446,6 +533,11 @@ static int phy_attach_direct(struct net_device *dev, struct phy_device *phydev, /* Assume that if there is no driver, that it doesn't * exist, and we should use the genphy driver. 
*/ if (NULL == d->driver) { + if (phydev->is_c45) { + pr_err("No driver for phy %x\n", phydev->phy_id); + return -ENODEV; + } + d->driver = &genphy_driver.driver; err = d->driver->probe(d); @@ -975,8 +1067,8 @@ int phy_driver_register(struct phy_driver *new_driver) retval = driver_register(&new_driver->driver); if (retval) { - printk(KERN_ERR "%s: Error %d in registering driver\n", - new_driver->name, retval); + pr_err("%s: Error %d in registering driver\n", + new_driver->name, retval); return retval; } @@ -987,12 +1079,37 @@ int phy_driver_register(struct phy_driver *new_driver) } EXPORT_SYMBOL(phy_driver_register); +int phy_drivers_register(struct phy_driver *new_driver, int n) +{ + int i, ret = 0; + + for (i = 0; i < n; i++) { + ret = phy_driver_register(new_driver + i); + if (ret) { + while (i-- > 0) + phy_driver_unregister(new_driver + i); + break; + } + } + return ret; +} +EXPORT_SYMBOL(phy_drivers_register); + void phy_driver_unregister(struct phy_driver *drv) { driver_unregister(&drv->driver); } EXPORT_SYMBOL(phy_driver_unregister); +void phy_drivers_unregister(struct phy_driver *drv, int n) +{ + int i; + for (i = 0; i < n; i++) { + phy_driver_unregister(drv + i); + } +} +EXPORT_SYMBOL(phy_drivers_unregister); + static struct phy_driver genphy_driver = { .phy_id = 0xffffffff, .phy_id_mask = 0xffffffff, diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c index f414ffb5b728..72f93470ea35 100644 --- a/drivers/net/phy/realtek.c +++ b/drivers/net/phy/realtek.c @@ -65,11 +65,7 @@ static struct phy_driver rtl821x_driver = { static int __init realtek_init(void) { - int ret; - - ret = phy_driver_register(&rtl821x_driver); - - return ret; + return phy_driver_register(&rtl821x_driver); } static void __exit realtek_exit(void) diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c index fc3e7e96c88c..c6b06d311fee 100644 --- a/drivers/net/phy/smsc.c +++ b/drivers/net/phy/smsc.c @@ -61,7 +61,8 @@ static int lan911x_config_init(struct phy_device *phydev) return smsc_phy_ack_interrupt(phydev); } -static struct phy_driver lan83c185_driver = { +static struct phy_driver smsc_phy_driver[] = { +{ .phy_id = 0x0007c0a0, /* OUI=0x00800f, Model#=0x0a */ .phy_id_mask = 0xfffffff0, .name = "SMSC LAN83C185", @@ -83,9 +84,7 @@ static struct phy_driver lan83c185_driver = { .resume = genphy_resume, .driver = { .owner = THIS_MODULE, } -}; - -static struct phy_driver lan8187_driver = { +}, { .phy_id = 0x0007c0b0, /* OUI=0x00800f, Model#=0x0b */ .phy_id_mask = 0xfffffff0, .name = "SMSC LAN8187", @@ -107,9 +106,7 @@ static struct phy_driver lan8187_driver = { .resume = genphy_resume, .driver = { .owner = THIS_MODULE, } -}; - -static struct phy_driver lan8700_driver = { +}, { .phy_id = 0x0007c0c0, /* OUI=0x00800f, Model#=0x0c */ .phy_id_mask = 0xfffffff0, .name = "SMSC LAN8700", @@ -131,9 +128,7 @@ static struct phy_driver lan8700_driver = { .resume = genphy_resume, .driver = { .owner = THIS_MODULE, } -}; - -static struct phy_driver lan911x_int_driver = { +}, { .phy_id = 0x0007c0d0, /* OUI=0x00800f, Model#=0x0d */ .phy_id_mask = 0xfffffff0, .name = "SMSC LAN911x Internal PHY", @@ -155,9 +150,7 @@ static struct phy_driver lan911x_int_driver = { .resume = genphy_resume, .driver = { .owner = THIS_MODULE, } -}; - -static struct phy_driver lan8710_driver = { +}, { .phy_id = 0x0007c0f0, /* OUI=0x00800f, Model#=0x0f */ .phy_id_mask = 0xfffffff0, .name = "SMSC LAN8710/LAN8720", @@ -179,53 +172,18 @@ static struct phy_driver lan8710_driver = { .resume = genphy_resume, .driver = { .owner = THIS_MODULE, } 
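The new phy_drivers_register()/phy_drivers_unregister() pair lets a driver hand over a whole array of phy_driver structures and relies on a register-or-roll-back loop: if entry i fails, entries 0..i-1 are unregistered again before the error is returned. The smsc, ste10Xp and vitesse conversions in this patch build on exactly that. A generic sketch of the pattern, with hypothetical item and register/unregister helpers:

/* Generic sketch of the "register an array, roll back on failure" pattern
 * used by phy_drivers_register(). The item type and the register/unregister
 * callbacks are hypothetical stand-ins. */
#include <stdio.h>

struct item { const char *name; };

static int register_item(struct item *it)
{
    /* Fail on a specific name purely to demonstrate the unwind path. */
    if (it->name[0] == 'x')
        return -1;
    printf("registered %s\n", it->name);
    return 0;
}

static void unregister_item(struct item *it)
{
    printf("unregistered %s\n", it->name);
}

static int register_items(struct item *items, int n)
{
    int i, ret = 0;

    for (i = 0; i < n; i++) {
        ret = register_item(&items[i]);
        if (ret) {
            /* Undo everything registered so far, newest first. */
            while (i-- > 0)
                unregister_item(&items[i]);
            break;
        }
    }
    return ret;
}

int main(void)
{
    struct item drivers[] = { { "a" }, { "b" }, { "xfail" } };

    return register_items(drivers, 3) ? 1 : 0;
}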
-}; +} }; static int __init smsc_init(void) { - int ret; - - ret = phy_driver_register (&lan83c185_driver); - if (ret) - goto err1; - - ret = phy_driver_register (&lan8187_driver); - if (ret) - goto err2; - - ret = phy_driver_register (&lan8700_driver); - if (ret) - goto err3; - - ret = phy_driver_register (&lan911x_int_driver); - if (ret) - goto err4; - - ret = phy_driver_register (&lan8710_driver); - if (ret) - goto err5; - - return 0; - -err5: - phy_driver_unregister (&lan911x_int_driver); -err4: - phy_driver_unregister (&lan8700_driver); -err3: - phy_driver_unregister (&lan8187_driver); -err2: - phy_driver_unregister (&lan83c185_driver); -err1: - return ret; + return phy_drivers_register(smsc_phy_driver, + ARRAY_SIZE(smsc_phy_driver)); } static void __exit smsc_exit(void) { - phy_driver_unregister (&lan8710_driver); - phy_driver_unregister (&lan911x_int_driver); - phy_driver_unregister (&lan8700_driver); - phy_driver_unregister (&lan8187_driver); - phy_driver_unregister (&lan83c185_driver); + return phy_drivers_unregister(smsc_phy_driver, + ARRAY_SIZE(smsc_phy_driver)); } MODULE_DESCRIPTION("SMSC PHY driver"); diff --git a/drivers/net/phy/spi_ks8995.c b/drivers/net/phy/spi_ks8995.c index 4eb98bc52a0a..1c3abce78b6a 100644 --- a/drivers/net/phy/spi_ks8995.c +++ b/drivers/net/phy/spi_ks8995.c @@ -11,6 +11,8 @@ * by the Free Software Foundation. */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/types.h> #include <linux/kernel.h> #include <linux/init.h> @@ -356,7 +358,7 @@ static struct spi_driver ks8995_driver = { static int __init ks8995_init(void) { - printk(KERN_INFO DRV_DESC " version " DRV_VERSION"\n"); + pr_info(DRV_DESC " version " DRV_VERSION "\n"); return spi_register_driver(&ks8995_driver); } diff --git a/drivers/net/phy/ste10Xp.c b/drivers/net/phy/ste10Xp.c index 187a2fa814f2..5e1eb138916f 100644 --- a/drivers/net/phy/ste10Xp.c +++ b/drivers/net/phy/ste10Xp.c @@ -81,7 +81,8 @@ static int ste10Xp_ack_interrupt(struct phy_device *phydev) return 0; } -static struct phy_driver ste101p_pdriver = { +static struct phy_driver ste10xp_pdriver[] = { +{ .phy_id = STE101P_PHY_ID, .phy_id_mask = 0xfffffff0, .name = "STe101p", @@ -95,9 +96,7 @@ static struct phy_driver ste101p_pdriver = { .suspend = genphy_suspend, .resume = genphy_resume, .driver = {.owner = THIS_MODULE,} -}; - -static struct phy_driver ste100p_pdriver = { +}, { .phy_id = STE100P_PHY_ID, .phy_id_mask = 0xffffffff, .name = "STe100p", @@ -111,22 +110,18 @@ static struct phy_driver ste100p_pdriver = { .suspend = genphy_suspend, .resume = genphy_resume, .driver = {.owner = THIS_MODULE,} -}; +} }; static int __init ste10Xp_init(void) { - int retval; - - retval = phy_driver_register(&ste100p_pdriver); - if (retval < 0) - return retval; - return phy_driver_register(&ste101p_pdriver); + return phy_drivers_register(ste10xp_pdriver, + ARRAY_SIZE(ste10xp_pdriver)); } static void __exit ste10Xp_exit(void) { - phy_driver_unregister(&ste100p_pdriver); - phy_driver_unregister(&ste101p_pdriver); + phy_drivers_unregister(ste10xp_pdriver, + ARRAY_SIZE(ste10xp_pdriver)); } module_init(ste10Xp_init); diff --git a/drivers/net/phy/vitesse.c b/drivers/net/phy/vitesse.c index 0ec8e09cc2ac..2585c383e623 100644 --- a/drivers/net/phy/vitesse.c +++ b/drivers/net/phy/vitesse.c @@ -138,21 +138,6 @@ static int vsc82xx_config_intr(struct phy_device *phydev) return err; } -/* Vitesse 824x */ -static struct phy_driver vsc8244_driver = { - .phy_id = PHY_ID_VSC8244, - .name = "Vitesse VSC8244", - .phy_id_mask = 0x000fffc0, - .features = 
PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, - .config_init = &vsc824x_config_init, - .config_aneg = &genphy_config_aneg, - .read_status = &genphy_read_status, - .ack_interrupt = &vsc824x_ack_interrupt, - .config_intr = &vsc82xx_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; - static int vsc8221_config_init(struct phy_device *phydev) { int err; @@ -165,8 +150,22 @@ static int vsc8221_config_init(struct phy_device *phydev) Options are 802.3Z SerDes or SGMII */ } -/* Vitesse 8221 */ -static struct phy_driver vsc8221_driver = { +/* Vitesse 824x */ +static struct phy_driver vsc82xx_driver[] = { +{ + .phy_id = PHY_ID_VSC8244, + .name = "Vitesse VSC8244", + .phy_id_mask = 0x000fffc0, + .features = PHY_GBIT_FEATURES, + .flags = PHY_HAS_INTERRUPT, + .config_init = &vsc824x_config_init, + .config_aneg = &genphy_config_aneg, + .read_status = &genphy_read_status, + .ack_interrupt = &vsc824x_ack_interrupt, + .config_intr = &vsc82xx_config_intr, + .driver = { .owner = THIS_MODULE,}, +}, { + /* Vitesse 8221 */ .phy_id = PHY_ID_VSC8221, .phy_id_mask = 0x000ffff0, .name = "Vitesse VSC8221", @@ -177,26 +176,19 @@ static struct phy_driver vsc8221_driver = { .read_status = &genphy_read_status, .ack_interrupt = &vsc824x_ack_interrupt, .config_intr = &vsc82xx_config_intr, - .driver = { .owner = THIS_MODULE,}, -}; + .driver = { .owner = THIS_MODULE,}, +} }; static int __init vsc82xx_init(void) { - int err; - - err = phy_driver_register(&vsc8244_driver); - if (err < 0) - return err; - err = phy_driver_register(&vsc8221_driver); - if (err < 0) - phy_driver_unregister(&vsc8244_driver); - return err; + return phy_drivers_register(vsc82xx_driver, + ARRAY_SIZE(vsc82xx_driver)); } static void __exit vsc82xx_exit(void) { - phy_driver_unregister(&vsc8244_driver); - phy_driver_unregister(&vsc8221_driver); + return phy_drivers_unregister(vsc82xx_driver, + ARRAY_SIZE(vsc82xx_driver)); } module_init(vsc82xx_init); diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c index d4c9db3da22a..a34d6bf5e43b 100644 --- a/drivers/net/slip/slip.c +++ b/drivers/net/slip/slip.c @@ -390,10 +390,10 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len) #endif #ifdef CONFIG_SLIP_MODE_SLIP6 if (sl->mode & SL_MODE_SLIP6) - count = slip_esc6(p, (unsigned char *) sl->xbuff, len); + count = slip_esc6(p, sl->xbuff, len); else #endif - count = slip_esc(p, (unsigned char *) sl->xbuff, len); + count = slip_esc(p, sl->xbuff, len); /* Order of next two lines is *very* important. * When we are sending a little amount of data, diff --git a/drivers/net/team/Kconfig b/drivers/net/team/Kconfig index 89024d5fc33a..6a7260b03a1e 100644 --- a/drivers/net/team/Kconfig +++ b/drivers/net/team/Kconfig @@ -15,6 +15,17 @@ menuconfig NET_TEAM if NET_TEAM +config NET_TEAM_MODE_BROADCAST + tristate "Broadcast mode support" + depends on NET_TEAM + ---help--- + Basic mode where packets are transmitted always by all suitable ports. + + All added ports are setup to have team's mac address. + + To compile this team mode as a module, choose M here: the module + will be called team_mode_broadcast. + config NET_TEAM_MODE_ROUNDROBIN tristate "Round-robin mode support" depends on NET_TEAM @@ -22,7 +33,7 @@ config NET_TEAM_MODE_ROUNDROBIN Basic mode where port used for transmitting packets is selected in round-robin fashion using packet counter. - All added ports are setup to have bond's mac address. + All added ports are setup to have team's mac address. 
To compile this team mode as a module, choose M here: the module will be called team_mode_roundrobin. diff --git a/drivers/net/team/Makefile b/drivers/net/team/Makefile index fb9f4c1c51ff..975763014e5a 100644 --- a/drivers/net/team/Makefile +++ b/drivers/net/team/Makefile @@ -3,6 +3,7 @@ # obj-$(CONFIG_NET_TEAM) += team.o +obj-$(CONFIG_NET_TEAM_MODE_BROADCAST) += team_mode_broadcast.o obj-$(CONFIG_NET_TEAM_MODE_ROUNDROBIN) += team_mode_roundrobin.o obj-$(CONFIG_NET_TEAM_MODE_ACTIVEBACKUP) += team_mode_activebackup.o obj-$(CONFIG_NET_TEAM_MODE_LOADBALANCE) += team_mode_loadbalance.o diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index c61ae35a53ce..b104c05225f7 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -1,5 +1,5 @@ /* - * net/drivers/team/team.c - Network team device driver + * drivers/net/team/team.c - Network team device driver * Copyright (c) 2011 Jiri Pirko <jpirko@redhat.com> * * This program is free software; you can redistribute it and/or modify @@ -18,6 +18,7 @@ #include <linux/ctype.h> #include <linux/notifier.h> #include <linux/netdevice.h> +#include <linux/netpoll.h> #include <linux/if_vlan.h> #include <linux/if_arp.h> #include <linux/socket.h> @@ -26,6 +27,7 @@ #include <net/rtnetlink.h> #include <net/genetlink.h> #include <net/netlink.h> +#include <net/sch_generic.h> #include <linux/if_team.h> #define DRV_NAME "team" @@ -82,14 +84,16 @@ static void team_refresh_port_linkup(struct team_port *port) port->state.linkup; } + /******************* * Options handling *******************/ struct team_option_inst { /* One for each option instance */ struct list_head list; + struct list_head tmp_list; struct team_option *option; - struct team_port *port; /* != NULL if per-port */ + struct team_option_inst_info info; bool changed; bool removed; }; @@ -106,22 +110,6 @@ static struct team_option *__team_find_option(struct team *team, return NULL; } -static int __team_option_inst_add(struct team *team, struct team_option *option, - struct team_port *port) -{ - struct team_option_inst *opt_inst; - - opt_inst = kmalloc(sizeof(*opt_inst), GFP_KERNEL); - if (!opt_inst) - return -ENOMEM; - opt_inst->option = option; - opt_inst->port = port; - opt_inst->changed = true; - opt_inst->removed = false; - list_add_tail(&opt_inst->list, &team->option_inst_list); - return 0; -} - static void __team_option_inst_del(struct team_option_inst *opt_inst) { list_del(&opt_inst->list); @@ -139,14 +127,49 @@ static void __team_option_inst_del_option(struct team *team, } } +static int __team_option_inst_add(struct team *team, struct team_option *option, + struct team_port *port) +{ + struct team_option_inst *opt_inst; + unsigned int array_size; + unsigned int i; + int err; + + array_size = option->array_size; + if (!array_size) + array_size = 1; /* No array but still need one instance */ + + for (i = 0; i < array_size; i++) { + opt_inst = kmalloc(sizeof(*opt_inst), GFP_KERNEL); + if (!opt_inst) + return -ENOMEM; + opt_inst->option = option; + opt_inst->info.port = port; + opt_inst->info.array_index = i; + opt_inst->changed = true; + opt_inst->removed = false; + list_add_tail(&opt_inst->list, &team->option_inst_list); + if (option->init) { + err = option->init(team, &opt_inst->info); + if (err) + return err; + } + + } + return 0; +} + static int __team_option_inst_add_option(struct team *team, struct team_option *option) { struct team_port *port; int err; - if (!option->per_port) - return __team_option_inst_add(team, option, 0); + if (!option->per_port) { + err = 
__team_option_inst_add(team, option, NULL); + if (err) + goto inst_del_option; + } list_for_each_entry(port, &team->port_list, list) { err = __team_option_inst_add(team, option, port); @@ -180,7 +203,7 @@ static void __team_option_inst_del_port(struct team *team, list_for_each_entry_safe(opt_inst, tmp, &team->option_inst_list, list) { if (opt_inst->option->per_port && - opt_inst->port == port) + opt_inst->info.port == port) __team_option_inst_del(opt_inst); } } @@ -211,7 +234,7 @@ static void __team_option_inst_mark_removed_port(struct team *team, struct team_option_inst *opt_inst; list_for_each_entry(opt_inst, &team->option_inst_list, list) { - if (opt_inst->port == port) { + if (opt_inst->info.port == port) { opt_inst->changed = true; opt_inst->removed = true; } @@ -324,28 +347,12 @@ void team_options_unregister(struct team *team, } EXPORT_SYMBOL(team_options_unregister); -static int team_option_port_add(struct team *team, struct team_port *port) -{ - int err; - - err = __team_option_inst_add_port(team, port); - if (err) - return err; - __team_options_change_check(team); - return 0; -} - -static void team_option_port_del(struct team *team, struct team_port *port) -{ - __team_option_inst_mark_removed_port(team, port); - __team_options_change_check(team); - __team_option_inst_del_port(team, port); -} - static int team_option_get(struct team *team, struct team_option_inst *opt_inst, struct team_gsetter_ctx *ctx) { + if (!opt_inst->option->getter) + return -EOPNOTSUPP; return opt_inst->option->getter(team, ctx); } @@ -353,16 +360,26 @@ static int team_option_set(struct team *team, struct team_option_inst *opt_inst, struct team_gsetter_ctx *ctx) { - int err; + if (!opt_inst->option->setter) + return -EOPNOTSUPP; + return opt_inst->option->setter(team, ctx); +} - err = opt_inst->option->setter(team, ctx); - if (err) - return err; +void team_option_inst_set_change(struct team_option_inst_info *opt_inst_info) +{ + struct team_option_inst *opt_inst; + opt_inst = container_of(opt_inst_info, struct team_option_inst, info); opt_inst->changed = true; +} +EXPORT_SYMBOL(team_option_inst_set_change); + +void team_options_change_check(struct team *team) +{ __team_options_change_check(team); - return err; } +EXPORT_SYMBOL(team_options_change_check); + /**************** * Mode handling @@ -371,13 +388,18 @@ static int team_option_set(struct team *team, static LIST_HEAD(mode_list); static DEFINE_SPINLOCK(mode_list_lock); -static struct team_mode *__find_mode(const char *kind) +struct team_mode_item { + struct list_head list; + const struct team_mode *mode; +}; + +static struct team_mode_item *__find_mode(const char *kind) { - struct team_mode *mode; + struct team_mode_item *mitem; - list_for_each_entry(mode, &mode_list, list) { - if (strcmp(mode->kind, kind) == 0) - return mode; + list_for_each_entry(mitem, &mode_list, list) { + if (strcmp(mitem->mode->kind, kind) == 0) + return mitem; } return NULL; } @@ -392,49 +414,65 @@ static bool is_good_mode_name(const char *name) return true; } -int team_mode_register(struct team_mode *mode) +int team_mode_register(const struct team_mode *mode) { int err = 0; + struct team_mode_item *mitem; if (!is_good_mode_name(mode->kind) || mode->priv_size > TEAM_MODE_PRIV_SIZE) return -EINVAL; + + mitem = kmalloc(sizeof(*mitem), GFP_KERNEL); + if (!mitem) + return -ENOMEM; + spin_lock(&mode_list_lock); if (__find_mode(mode->kind)) { err = -EEXIST; + kfree(mitem); goto unlock; } - list_add_tail(&mode->list, &mode_list); + mitem->mode = mode; + list_add_tail(&mitem->list, 
&mode_list); unlock: spin_unlock(&mode_list_lock); return err; } EXPORT_SYMBOL(team_mode_register); -int team_mode_unregister(struct team_mode *mode) +void team_mode_unregister(const struct team_mode *mode) { + struct team_mode_item *mitem; + spin_lock(&mode_list_lock); - list_del_init(&mode->list); + mitem = __find_mode(mode->kind); + if (mitem) { + list_del_init(&mitem->list); + kfree(mitem); + } spin_unlock(&mode_list_lock); - return 0; } EXPORT_SYMBOL(team_mode_unregister); -static struct team_mode *team_mode_get(const char *kind) +static const struct team_mode *team_mode_get(const char *kind) { - struct team_mode *mode; + struct team_mode_item *mitem; + const struct team_mode *mode = NULL; spin_lock(&mode_list_lock); - mode = __find_mode(kind); - if (!mode) { + mitem = __find_mode(kind); + if (!mitem) { spin_unlock(&mode_list_lock); request_module("team-mode-%s", kind); spin_lock(&mode_list_lock); - mode = __find_mode(kind); + mitem = __find_mode(kind); } - if (mode) + if (mitem) { + mode = mitem->mode; if (!try_module_get(mode->owner)) mode = NULL; + } spin_unlock(&mode_list_lock); return mode; @@ -458,26 +496,45 @@ rx_handler_result_t team_dummy_receive(struct team *team, return RX_HANDLER_ANOTHER; } -static void team_adjust_ops(struct team *team) +static const struct team_mode __team_no_mode = { + .kind = "*NOMODE*", +}; + +static bool team_is_mode_set(struct team *team) +{ + return team->mode != &__team_no_mode; +} + +static void team_set_no_mode(struct team *team) +{ + team->mode = &__team_no_mode; +} + +static void __team_adjust_ops(struct team *team, int en_port_count) { /* * To avoid checks in rx/tx skb paths, ensure here that non-null and * correct ops are always set. */ - if (list_empty(&team->port_list) || - !team->mode || !team->mode->ops->transmit) + if (!en_port_count || !team_is_mode_set(team) || + !team->mode->ops->transmit) team->ops.transmit = team_dummy_transmit; else team->ops.transmit = team->mode->ops->transmit; - if (list_empty(&team->port_list) || - !team->mode || !team->mode->ops->receive) + if (!en_port_count || !team_is_mode_set(team) || + !team->mode->ops->receive) team->ops.receive = team_dummy_receive; else team->ops.receive = team->mode->ops->receive; } +static void team_adjust_ops(struct team *team) +{ + __team_adjust_ops(team, team->en_port_count); +} + /* * We can benefit from the fact that it's ensured no port is present * at the time of mode change. 
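team_mode_get() now searches a list of team_mode_item wrappers (so a registered team_mode can stay const), and when the requested kind is missing it drops the mode-list lock, asks for the "team-mode-<kind>" module to be loaded and searches again. A simplified, lock-free userspace sketch of that find / load / retry flow, with a stand-in registry and loader:

/* Simplified sketch of the lookup / request-module / retry flow in
 * team_mode_get(). The registry is a fixed array and "loading" just adds
 * an entry; the real code walks a locked linked list of team_mode_item. */
#include <stdio.h>
#include <string.h>

#define MAX_MODES 8

static const char *registry[MAX_MODES];
static int nr_modes;

static const char *find_mode(const char *kind)
{
    int i;

    for (i = 0; i < nr_modes; i++)
        if (!strcmp(registry[i], kind))
            return registry[i];
    return NULL;
}

/* Stand-in for request_module("team-mode-%s", kind): pretend the module
 * loads and registers its mode. */
static void request_mode_module(const char *kind)
{
    if (nr_modes < MAX_MODES)
        registry[nr_modes++] = kind;
}

static const char *mode_get(const char *kind)
{
    const char *mode = find_mode(kind);

    if (!mode) {
        request_mode_module(kind);
        mode = find_mode(kind);    /* retry after the module has loaded */
    }
    return mode;
}

int main(void)
{
    printf("got mode: %s\n", mode_get("broadcast") ? "broadcast" : "(none)");
    return 0;
}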
Therefore no packets are in fly so there's no @@ -487,7 +544,7 @@ static int __team_change_mode(struct team *team, const struct team_mode *new_mode) { /* Check if mode was previously set and do cleanup if so */ - if (team->mode) { + if (team_is_mode_set(team)) { void (*exit_op)(struct team *team) = team->ops.exit; /* Clear ops area so no callback is called any longer */ @@ -497,7 +554,7 @@ static int __team_change_mode(struct team *team, if (exit_op) exit_op(team); team_mode_put(team->mode); - team->mode = NULL; + team_set_no_mode(team); /* zero private data area */ memset(&team->mode_priv, 0, sizeof(struct team) - offsetof(struct team, mode_priv)); @@ -523,7 +580,7 @@ static int __team_change_mode(struct team *team, static int team_change_mode(struct team *team, const char *kind) { - struct team_mode *new_mode; + const struct team_mode *new_mode; struct net_device *dev = team->dev; int err; @@ -532,7 +589,7 @@ static int team_change_mode(struct team *team, const char *kind) return -EBUSY; } - if (team->mode && strcmp(team->mode->kind, kind) == 0) { + if (team_is_mode_set(team) && strcmp(team->mode->kind, kind) == 0) { netdev_err(dev, "Unable to change to the same mode the team is in\n"); return -EINVAL; } @@ -559,8 +616,6 @@ static int team_change_mode(struct team *team, const char *kind) * Rx path frame handler ************************/ -static bool team_port_enabled(struct team_port *port); - /* note: already called with rcu_read_lock */ static rx_handler_result_t team_handle_frame(struct sk_buff **pskb) { @@ -618,11 +673,6 @@ static bool team_port_find(const struct team *team, return false; } -static bool team_port_enabled(struct team_port *port) -{ - return port->index != -1; -} - /* * Enable/disable port by adding to enabled port hashlist and setting * port->index (Might be racy so reader could see incorrect ifindex when @@ -637,6 +687,9 @@ static void team_port_enable(struct team *team, port->index = team->en_port_count++; hlist_add_head_rcu(&port->hlist, team_port_index_hash(team, port->index)); + team_adjust_ops(team); + if (team->ops.port_enabled) + team->ops.port_enabled(team, port); } static void __reconstruct_port_hlist(struct team *team, int rm_index) @@ -656,14 +709,20 @@ static void __reconstruct_port_hlist(struct team *team, int rm_index) static void team_port_disable(struct team *team, struct team_port *port) { - int rm_index = port->index; - if (!team_port_enabled(port)) return; + if (team->ops.port_disabled) + team->ops.port_disabled(team, port); hlist_del_rcu(&port->hlist); - __reconstruct_port_hlist(team, rm_index); - team->en_port_count--; + __reconstruct_port_hlist(team, port->index); port->index = -1; + __team_adjust_ops(team, team->en_port_count - 1); + /* + * Wait until readers see adjusted ops. 
This ensures that + * readers never see team->en_port_count == 0 + */ + synchronize_rcu(); + team->en_port_count--; } #define TEAM_VLAN_FEATURES (NETIF_F_ALL_CSUM | NETIF_F_SG | \ @@ -675,12 +734,14 @@ static void __team_compute_features(struct team *team) struct team_port *port; u32 vlan_features = TEAM_VLAN_FEATURES; unsigned short max_hard_header_len = ETH_HLEN; + unsigned int flags, dst_release_flag = IFF_XMIT_DST_RELEASE; list_for_each_entry(port, &team->port_list, list) { vlan_features = netdev_increment_features(vlan_features, port->dev->vlan_features, TEAM_VLAN_FEATURES); + dst_release_flag &= port->dev->priv_flags; if (port->dev->hard_header_len > max_hard_header_len) max_hard_header_len = port->dev->hard_header_len; } @@ -688,6 +749,9 @@ static void __team_compute_features(struct team *team) team->dev->vlan_features = vlan_features; team->dev->hard_header_len = max_hard_header_len; + flags = team->dev->priv_flags & ~IFF_XMIT_DST_RELEASE; + team->dev->priv_flags = flags | dst_release_flag; + netdev_change_features(team->dev); } @@ -730,6 +794,58 @@ static void team_port_leave(struct team *team, struct team_port *port) dev_put(team->dev); } +#ifdef CONFIG_NET_POLL_CONTROLLER +static int team_port_enable_netpoll(struct team *team, struct team_port *port) +{ + struct netpoll *np; + int err; + + np = kzalloc(sizeof(*np), GFP_KERNEL); + if (!np) + return -ENOMEM; + + err = __netpoll_setup(np, port->dev); + if (err) { + kfree(np); + return err; + } + port->np = np; + return err; +} + +static void team_port_disable_netpoll(struct team_port *port) +{ + struct netpoll *np = port->np; + + if (!np) + return; + port->np = NULL; + + /* Wait for transmitting packets to finish before freeing. */ + synchronize_rcu_bh(); + __netpoll_cleanup(np); + kfree(np); +} + +static struct netpoll_info *team_netpoll_info(struct team *team) +{ + return team->dev->npinfo; +} + +#else +static int team_port_enable_netpoll(struct team *team, struct team_port *port) +{ + return 0; +} +static void team_port_disable_netpoll(struct team_port *port) +{ +} +static struct netpoll_info *team_netpoll_info(struct team *team) +{ + return NULL; +} +#endif + static void __team_port_change_check(struct team_port *port, bool linkup); static int team_port_add(struct team *team, struct net_device *port_dev) @@ -758,7 +874,8 @@ static int team_port_add(struct team *team, struct net_device *port_dev) return -EBUSY; } - port = kzalloc(sizeof(struct team_port), GFP_KERNEL); + port = kzalloc(sizeof(struct team_port) + team->mode->port_priv_size, + GFP_KERNEL); if (!port) return -ENOMEM; @@ -795,6 +912,15 @@ static int team_port_add(struct team *team, struct net_device *port_dev) goto err_vids_add; } + if (team_netpoll_info(team)) { + err = team_port_enable_netpoll(team, port); + if (err) { + netdev_err(dev, "Failed to enable netpoll on device %s\n", + portname); + goto err_enable_netpoll; + } + } + err = netdev_set_master(port_dev, dev); if (err) { netdev_err(dev, "Device %s failed to set master\n", portname); @@ -809,7 +935,7 @@ static int team_port_add(struct team *team, struct net_device *port_dev) goto err_handler_register; } - err = team_option_port_add(team, port); + err = __team_option_inst_add_port(team, port); if (err) { netdev_err(dev, "Device %s failed to add per-port options\n", portname); @@ -819,9 +945,9 @@ static int team_port_add(struct team *team, struct net_device *port_dev) port->index = -1; team_port_enable(team, port); list_add_tail_rcu(&port->list, &team->port_list); - team_adjust_ops(team); 
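team_port_disable() above deliberately calls synchronize_rcu() before decrementing en_port_count: transmit-side readers pick an enabled port as hash % en_port_count (as the load-balance mode does), so the count must never be observed as zero, or smaller than a still-published port index, while such a reader can run. A plain single-threaded sketch of the selection that this ordering protects:

/* Sketch of the index selection guarded by the ordering in
 * team_port_disable(). Single-threaded, no RCU; it only shows why the
 * divisor must stay non-zero while readers may still select a port. */
#include <stdio.h>

static int pick_port(unsigned int hash, int en_port_count)
{
    if (en_port_count <= 0)
        return -1;    /* the kernel avoids this case by decrementing the
                       * count only after synchronize_rcu() */
    return (int)(hash % (unsigned int)en_port_count);
}

int main(void)
{
    unsigned int hash = 0xdeadbeef;
    int count;

    for (count = 3; count >= 0; count--)
        printf("en_port_count=%d -> port index %d\n",
               count, pick_port(hash, count));
    return 0;
}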
__team_compute_features(team); __team_port_change_check(port, !!netif_carrier_ok(port_dev)); + __team_options_change_check(team); netdev_info(dev, "Port device %s added\n", portname); @@ -834,6 +960,9 @@ err_handler_register: netdev_set_master(port_dev, NULL); err_set_master: + team_port_disable_netpoll(port); + +err_enable_netpoll: vlan_vids_del_by_dev(port_dev, dev); err_vids_add: @@ -865,14 +994,16 @@ static int team_port_del(struct team *team, struct net_device *port_dev) return -ENOENT; } + __team_option_inst_mark_removed_port(team, port); + __team_options_change_check(team); + __team_option_inst_del_port(team, port); port->removed = true; __team_port_change_check(port, false); team_port_disable(team, port); list_del_rcu(&port->list); - team_adjust_ops(team); - team_option_port_del(team, port); netdev_rx_handler_unregister(port_dev); netdev_set_master(port_dev, NULL); + team_port_disable_netpoll(port); vlan_vids_del_by_dev(port_dev, dev); dev_close(port_dev); team_port_leave(team, port); @@ -891,11 +1022,9 @@ static int team_port_del(struct team *team, struct net_device *port_dev) * Net device ops *****************/ -static const char team_no_mode_kind[] = "*NOMODE*"; - static int team_mode_option_get(struct team *team, struct team_gsetter_ctx *ctx) { - ctx->data.str_val = team->mode ? team->mode->kind : team_no_mode_kind; + ctx->data.str_val = team->mode->kind; return 0; } @@ -907,39 +1036,47 @@ static int team_mode_option_set(struct team *team, struct team_gsetter_ctx *ctx) static int team_port_en_option_get(struct team *team, struct team_gsetter_ctx *ctx) { - ctx->data.bool_val = team_port_enabled(ctx->port); + struct team_port *port = ctx->info->port; + + ctx->data.bool_val = team_port_enabled(port); return 0; } static int team_port_en_option_set(struct team *team, struct team_gsetter_ctx *ctx) { + struct team_port *port = ctx->info->port; + if (ctx->data.bool_val) - team_port_enable(team, ctx->port); + team_port_enable(team, port); else - team_port_disable(team, ctx->port); + team_port_disable(team, port); return 0; } static int team_user_linkup_option_get(struct team *team, struct team_gsetter_ctx *ctx) { - ctx->data.bool_val = ctx->port->user.linkup; + struct team_port *port = ctx->info->port; + + ctx->data.bool_val = port->user.linkup; return 0; } static int team_user_linkup_option_set(struct team *team, struct team_gsetter_ctx *ctx) { - ctx->port->user.linkup = ctx->data.bool_val; - team_refresh_port_linkup(ctx->port); + struct team_port *port = ctx->info->port; + + port->user.linkup = ctx->data.bool_val; + team_refresh_port_linkup(port); return 0; } static int team_user_linkup_en_option_get(struct team *team, struct team_gsetter_ctx *ctx) { - struct team_port *port = ctx->port; + struct team_port *port = ctx->info->port; ctx->data.bool_val = port->user.linkup_enabled; return 0; @@ -948,10 +1085,10 @@ static int team_user_linkup_en_option_get(struct team *team, static int team_user_linkup_en_option_set(struct team *team, struct team_gsetter_ctx *ctx) { - struct team_port *port = ctx->port; + struct team_port *port = ctx->info->port; port->user.linkup_enabled = ctx->data.bool_val; - team_refresh_port_linkup(ctx->port); + team_refresh_port_linkup(port); return 0; } @@ -985,6 +1122,22 @@ static const struct team_option team_options[] = { }, }; +static struct lock_class_key team_netdev_xmit_lock_key; +static struct lock_class_key team_netdev_addr_lock_key; + +static void team_set_lockdep_class_one(struct net_device *dev, + struct netdev_queue *txq, + void *unused) +{ + 
lockdep_set_class(&txq->_xmit_lock, &team_netdev_xmit_lock_key); +} + +static void team_set_lockdep_class(struct net_device *dev) +{ + lockdep_set_class(&dev->addr_list_lock, &team_netdev_addr_lock_key); + netdev_for_each_tx_queue(dev, team_set_lockdep_class_one, NULL); +} + static int team_init(struct net_device *dev) { struct team *team = netdev_priv(dev); @@ -993,6 +1146,7 @@ static int team_init(struct net_device *dev) team->dev = dev; mutex_init(&team->lock); + team_set_no_mode(team); team->pcpu_stats = alloc_percpu(struct team_pcpu_stats); if (!team->pcpu_stats) @@ -1011,6 +1165,8 @@ static int team_init(struct net_device *dev) goto err_options_register; netif_carrier_off(dev); + team_set_lockdep_class(dev); + return 0; err_options_register: @@ -1079,6 +1235,29 @@ static netdev_tx_t team_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } +static u16 team_select_queue(struct net_device *dev, struct sk_buff *skb) +{ + /* + * This helper function exists to help dev_pick_tx get the correct + * destination queue. Using a helper function skips a call to + * skb_tx_hash and will put the skbs in the queue we expect on their + * way down to the team driver. + */ + u16 txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0; + + /* + * Save the original txq to restore before passing to the driver + */ + qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping; + + if (unlikely(txq >= dev->real_num_tx_queues)) { + do { + txq -= dev->real_num_tx_queues; + } while (txq >= dev->real_num_tx_queues); + } + return txq; +} + static void team_change_rx_flags(struct net_device *dev, int change) { struct team *team = netdev_priv(dev); @@ -1116,10 +1295,11 @@ static int team_set_mac_address(struct net_device *dev, void *p) { struct team *team = netdev_priv(dev); struct team_port *port; - struct sockaddr *addr = p; + int err; - dev->addr_assign_type &= ~NET_ADDR_RANDOM; - memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); + err = eth_mac_addr(dev, p); + if (err) + return err; rcu_read_lock(); list_for_each_entry_rcu(port, &team->port_list, list) if (team->ops.port_change_mac) @@ -1240,6 +1420,48 @@ static int team_vlan_rx_kill_vid(struct net_device *dev, uint16_t vid) return 0; } +#ifdef CONFIG_NET_POLL_CONTROLLER +static void team_poll_controller(struct net_device *dev) +{ +} + +static void __team_netpoll_cleanup(struct team *team) +{ + struct team_port *port; + + list_for_each_entry(port, &team->port_list, list) + team_port_disable_netpoll(port); +} + +static void team_netpoll_cleanup(struct net_device *dev) +{ + struct team *team = netdev_priv(dev); + + mutex_lock(&team->lock); + __team_netpoll_cleanup(team); + mutex_unlock(&team->lock); +} + +static int team_netpoll_setup(struct net_device *dev, + struct netpoll_info *npifo) +{ + struct team *team = netdev_priv(dev); + struct team_port *port; + int err; + + mutex_lock(&team->lock); + list_for_each_entry(port, &team->port_list, list) { + err = team_port_enable_netpoll(team, port); + if (err) { + __team_netpoll_cleanup(team); + break; + } + } + mutex_unlock(&team->lock); + return err; +} +#endif + static int team_add_slave(struct net_device *dev, struct net_device *port_dev) { struct team *team = netdev_priv(dev); @@ -1289,6 +1511,7 @@ static const struct net_device_ops team_netdev_ops = { .ndo_open = team_open, .ndo_stop = team_close, .ndo_start_xmit = team_xmit, + .ndo_select_queue = team_select_queue, .ndo_change_rx_flags = team_change_rx_flags, .ndo_set_rx_mode = team_set_rx_mode, .ndo_set_mac_address = 
team_set_mac_address, @@ -1296,6 +1519,11 @@ static const struct net_device_ops team_netdev_ops = { .ndo_get_stats64 = team_get_stats64, .ndo_vlan_rx_add_vid = team_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = team_vlan_rx_kill_vid, +#ifdef CONFIG_NET_POLL_CONTROLLER + .ndo_poll_controller = team_poll_controller, + .ndo_netpoll_setup = team_netpoll_setup, + .ndo_netpoll_cleanup = team_netpoll_cleanup, +#endif .ndo_add_slave = team_add_slave, .ndo_del_slave = team_del_slave, .ndo_fix_features = team_fix_features, @@ -1321,7 +1549,7 @@ static void team_setup(struct net_device *dev) * bring us to promisc mode in case a unicast addr is added. * Let this up to underlay drivers. */ - dev->priv_flags |= IFF_UNICAST_FLT; + dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; dev->features |= NETIF_F_LLTX; dev->features |= NETIF_F_GRO; @@ -1358,12 +1586,24 @@ static int team_validate(struct nlattr *tb[], struct nlattr *data[]) return 0; } +static unsigned int team_get_num_tx_queues(void) +{ + return TEAM_DEFAULT_NUM_TX_QUEUES; +} + +static unsigned int team_get_num_rx_queues(void) +{ + return TEAM_DEFAULT_NUM_RX_QUEUES; +} + static struct rtnl_link_ops team_link_ops __read_mostly = { - .kind = DRV_NAME, - .priv_size = sizeof(struct team), - .setup = team_setup, - .newlink = team_newlink, - .validate = team_validate, + .kind = DRV_NAME, + .priv_size = sizeof(struct team), + .setup = team_setup, + .newlink = team_newlink, + .validate = team_validate, + .get_num_tx_queues = team_get_num_tx_queues, + .get_num_rx_queues = team_get_num_rx_queues, }; @@ -1404,7 +1644,7 @@ static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info) void *hdr; int err; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return -ENOMEM; @@ -1466,7 +1706,7 @@ static int team_nl_send_generic(struct genl_info *info, struct team *team, struct sk_buff *skb; int err; - skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!skb) return -ENOMEM; @@ -1482,16 +1722,128 @@ err_fill: return err; } -static int team_nl_fill_options_get(struct sk_buff *skb, - u32 pid, u32 seq, int flags, - struct team *team, bool fillall) +typedef int team_nl_send_func_t(struct sk_buff *skb, + struct team *team, u32 pid); + +static int team_nl_send_unicast(struct sk_buff *skb, struct team *team, u32 pid) +{ + return genlmsg_unicast(dev_net(team->dev), skb, pid); +} + +static int team_nl_fill_one_option_get(struct sk_buff *skb, struct team *team, + struct team_option_inst *opt_inst) +{ + struct nlattr *option_item; + struct team_option *option = opt_inst->option; + struct team_option_inst_info *opt_inst_info = &opt_inst->info; + struct team_gsetter_ctx ctx; + int err; + + ctx.info = opt_inst_info; + err = team_option_get(team, opt_inst, &ctx); + if (err) + return err; + + option_item = nla_nest_start(skb, TEAM_ATTR_ITEM_OPTION); + if (!option_item) + return -EMSGSIZE; + + if (nla_put_string(skb, TEAM_ATTR_OPTION_NAME, option->name)) + goto nest_cancel; + if (opt_inst_info->port && + nla_put_u32(skb, TEAM_ATTR_OPTION_PORT_IFINDEX, + opt_inst_info->port->dev->ifindex)) + goto nest_cancel; + if (opt_inst->option->array_size && + nla_put_u32(skb, TEAM_ATTR_OPTION_ARRAY_INDEX, + opt_inst_info->array_index)) + goto nest_cancel; + + switch (option->type) { + case TEAM_OPTION_TYPE_U32: + if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_U32)) + goto nest_cancel; + if (nla_put_u32(skb, TEAM_ATTR_OPTION_DATA, ctx.data.u32_val)) + goto nest_cancel; + 
break; + case TEAM_OPTION_TYPE_STRING: + if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_STRING)) + goto nest_cancel; + if (nla_put_string(skb, TEAM_ATTR_OPTION_DATA, + ctx.data.str_val)) + goto nest_cancel; + break; + case TEAM_OPTION_TYPE_BINARY: + if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_BINARY)) + goto nest_cancel; + if (nla_put(skb, TEAM_ATTR_OPTION_DATA, ctx.data.bin_val.len, + ctx.data.bin_val.ptr)) + goto nest_cancel; + break; + case TEAM_OPTION_TYPE_BOOL: + if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_FLAG)) + goto nest_cancel; + if (ctx.data.bool_val && + nla_put_flag(skb, TEAM_ATTR_OPTION_DATA)) + goto nest_cancel; + break; + default: + BUG(); + } + if (opt_inst->removed && nla_put_flag(skb, TEAM_ATTR_OPTION_REMOVED)) + goto nest_cancel; + if (opt_inst->changed) { + if (nla_put_flag(skb, TEAM_ATTR_OPTION_CHANGED)) + goto nest_cancel; + opt_inst->changed = false; + } + nla_nest_end(skb, option_item); + return 0; + +nest_cancel: + nla_nest_cancel(skb, option_item); + return -EMSGSIZE; +} + +static int __send_and_alloc_skb(struct sk_buff **pskb, + struct team *team, u32 pid, + team_nl_send_func_t *send_func) +{ + int err; + + if (*pskb) { + err = send_func(*pskb, team, pid); + if (err) + return err; + } + *pskb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!*pskb) + return -ENOMEM; + return 0; +} + +static int team_nl_send_options_get(struct team *team, u32 pid, u32 seq, + int flags, team_nl_send_func_t *send_func, + struct list_head *sel_opt_inst_list) { struct nlattr *option_list; + struct nlmsghdr *nlh; void *hdr; struct team_option_inst *opt_inst; int err; + struct sk_buff *skb = NULL; + bool incomplete; + int i; - hdr = genlmsg_put(skb, pid, seq, &team_nl_family, flags, + opt_inst = list_first_entry(sel_opt_inst_list, + struct team_option_inst, tmp_list); + +start_again: + err = __send_and_alloc_skb(&skb, team, pid, send_func); + if (err) + return err; + + hdr = genlmsg_put(skb, pid, seq, &team_nl_family, flags | NLM_F_MULTI, TEAM_CMD_OPTIONS_GET); if (IS_ERR(hdr)) return PTR_ERR(hdr); @@ -1500,122 +1852,80 @@ static int team_nl_fill_options_get(struct sk_buff *skb, goto nla_put_failure; option_list = nla_nest_start(skb, TEAM_ATTR_LIST_OPTION); if (!option_list) - return -EMSGSIZE; - - list_for_each_entry(opt_inst, &team->option_inst_list, list) { - struct nlattr *option_item; - struct team_option *option = opt_inst->option; - struct team_gsetter_ctx ctx; + goto nla_put_failure; - /* Include only changed options if fill all mode is not on */ - if (!fillall && !opt_inst->changed) - continue; - option_item = nla_nest_start(skb, TEAM_ATTR_ITEM_OPTION); - if (!option_item) - goto nla_put_failure; - if (nla_put_string(skb, TEAM_ATTR_OPTION_NAME, option->name)) - goto nla_put_failure; - if (opt_inst->changed) { - if (nla_put_flag(skb, TEAM_ATTR_OPTION_CHANGED)) - goto nla_put_failure; - opt_inst->changed = false; - } - if (opt_inst->removed && - nla_put_flag(skb, TEAM_ATTR_OPTION_REMOVED)) - goto nla_put_failure; - if (opt_inst->port && - nla_put_u32(skb, TEAM_ATTR_OPTION_PORT_IFINDEX, - opt_inst->port->dev->ifindex)) - goto nla_put_failure; - ctx.port = opt_inst->port; - switch (option->type) { - case TEAM_OPTION_TYPE_U32: - if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_U32)) - goto nla_put_failure; - err = team_option_get(team, opt_inst, &ctx); - if (err) - goto errout; - if (nla_put_u32(skb, TEAM_ATTR_OPTION_DATA, - ctx.data.u32_val)) - goto nla_put_failure; - break; - case TEAM_OPTION_TYPE_STRING: - if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_STRING)) - goto 
nla_put_failure; - err = team_option_get(team, opt_inst, &ctx); - if (err) - goto errout; - if (nla_put_string(skb, TEAM_ATTR_OPTION_DATA, - ctx.data.str_val)) - goto nla_put_failure; - break; - case TEAM_OPTION_TYPE_BINARY: - if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_BINARY)) - goto nla_put_failure; - err = team_option_get(team, opt_inst, &ctx); - if (err) - goto errout; - if (nla_put(skb, TEAM_ATTR_OPTION_DATA, - ctx.data.bin_val.len, ctx.data.bin_val.ptr)) - goto nla_put_failure; - break; - case TEAM_OPTION_TYPE_BOOL: - if (nla_put_u8(skb, TEAM_ATTR_OPTION_TYPE, NLA_FLAG)) - goto nla_put_failure; - err = team_option_get(team, opt_inst, &ctx); - if (err) - goto errout; - if (ctx.data.bool_val && - nla_put_flag(skb, TEAM_ATTR_OPTION_DATA)) - goto nla_put_failure; - break; - default: - BUG(); + i = 0; + incomplete = false; + list_for_each_entry_from(opt_inst, sel_opt_inst_list, tmp_list) { + err = team_nl_fill_one_option_get(skb, team, opt_inst); + if (err) { + if (err == -EMSGSIZE) { + if (!i) + goto errout; + incomplete = true; + break; + } + goto errout; } - nla_nest_end(skb, option_item); + i++; } nla_nest_end(skb, option_list); - return genlmsg_end(skb, hdr); + genlmsg_end(skb, hdr); + if (incomplete) + goto start_again; + +send_done: + nlh = nlmsg_put(skb, pid, seq, NLMSG_DONE, 0, flags | NLM_F_MULTI); + if (!nlh) { + err = __send_and_alloc_skb(&skb, team, pid, send_func); + if (err) + goto errout; + goto send_done; + } + + return send_func(skb, team, pid); nla_put_failure: err = -EMSGSIZE; errout: genlmsg_cancel(skb, hdr); + nlmsg_free(skb); return err; } -static int team_nl_fill_options_get_all(struct sk_buff *skb, - struct genl_info *info, int flags, - struct team *team) -{ - return team_nl_fill_options_get(skb, info->snd_pid, - info->snd_seq, NLM_F_ACK, - team, true); -} - static int team_nl_cmd_options_get(struct sk_buff *skb, struct genl_info *info) { struct team *team; + struct team_option_inst *opt_inst; int err; + LIST_HEAD(sel_opt_inst_list); team = team_nl_team_get(info); if (!team) return -EINVAL; - err = team_nl_send_generic(info, team, team_nl_fill_options_get_all); + list_for_each_entry(opt_inst, &team->option_inst_list, list) + list_add_tail(&opt_inst->tmp_list, &sel_opt_inst_list); + err = team_nl_send_options_get(team, info->snd_pid, info->snd_seq, + NLM_F_ACK, team_nl_send_unicast, + &sel_opt_inst_list); team_nl_team_put(team); return err; } +static int team_nl_send_event_options_get(struct team *team, + struct list_head *sel_opt_inst_list); + static int team_nl_cmd_options_set(struct sk_buff *skb, struct genl_info *info) { struct team *team; int err = 0; int i; struct nlattr *nl_option; + LIST_HEAD(opt_inst_list); team = team_nl_team_get(info); if (!team) @@ -1629,10 +1939,12 @@ static int team_nl_cmd_options_set(struct sk_buff *skb, struct genl_info *info) nla_for_each_nested(nl_option, info->attrs[TEAM_ATTR_LIST_OPTION], i) { struct nlattr *opt_attrs[TEAM_ATTR_OPTION_MAX + 1]; - struct nlattr *attr_port_ifindex; + struct nlattr *attr; struct nlattr *attr_data; enum team_option_type opt_type; int opt_port_ifindex = 0; /* != 0 for per-port options */ + u32 opt_array_index = 0; + bool opt_is_array = false; struct team_option_inst *opt_inst; char *opt_name; bool opt_found = false; @@ -1674,23 +1986,33 @@ static int team_nl_cmd_options_set(struct sk_buff *skb, struct genl_info *info) } opt_name = nla_data(opt_attrs[TEAM_ATTR_OPTION_NAME]); - attr_port_ifindex = opt_attrs[TEAM_ATTR_OPTION_PORT_IFINDEX]; - if (attr_port_ifindex) - opt_port_ifindex = 
nla_get_u32(attr_port_ifindex); + attr = opt_attrs[TEAM_ATTR_OPTION_PORT_IFINDEX]; + if (attr) + opt_port_ifindex = nla_get_u32(attr); + + attr = opt_attrs[TEAM_ATTR_OPTION_ARRAY_INDEX]; + if (attr) { + opt_is_array = true; + opt_array_index = nla_get_u32(attr); + } list_for_each_entry(opt_inst, &team->option_inst_list, list) { struct team_option *option = opt_inst->option; struct team_gsetter_ctx ctx; + struct team_option_inst_info *opt_inst_info; int tmp_ifindex; - tmp_ifindex = opt_inst->port ? - opt_inst->port->dev->ifindex : 0; + opt_inst_info = &opt_inst->info; + tmp_ifindex = opt_inst_info->port ? + opt_inst_info->port->dev->ifindex : 0; if (option->type != opt_type || strcmp(option->name, opt_name) || - tmp_ifindex != opt_port_ifindex) + tmp_ifindex != opt_port_ifindex || + (option->array_size && !opt_is_array) || + opt_inst_info->array_index != opt_array_index) continue; opt_found = true; - ctx.port = opt_inst->port; + ctx.info = opt_inst_info; switch (opt_type) { case TEAM_OPTION_TYPE_U32: ctx.data.u32_val = nla_get_u32(attr_data); @@ -1715,6 +2037,8 @@ static int team_nl_cmd_options_set(struct sk_buff *skb, struct genl_info *info) err = team_option_set(team, opt_inst, &ctx); if (err) goto team_put; + opt_inst->changed = true; + list_add(&opt_inst->tmp_list, &opt_inst_list); } if (!opt_found) { err = -ENOENT; @@ -1722,6 +2046,8 @@ static int team_nl_cmd_options_set(struct sk_buff *skb, struct genl_info *info) } } + err = team_nl_send_event_options_get(team, &opt_inst_list); + team_put: team_nl_team_put(team); @@ -1746,7 +2072,7 @@ static int team_nl_fill_port_list_get(struct sk_buff *skb, goto nla_put_failure; port_list = nla_nest_start(skb, TEAM_ATTR_LIST_PORT); if (!port_list) - return -EMSGSIZE; + goto nla_put_failure; list_for_each_entry(port, &team->port_list, list) { struct nlattr *port_item; @@ -1838,27 +2164,18 @@ static struct genl_multicast_group team_change_event_mcgrp = { .name = TEAM_GENL_CHANGE_EVENT_MC_GRP_NAME, }; -static int team_nl_send_event_options_get(struct team *team) +static int team_nl_send_multicast(struct sk_buff *skb, + struct team *team, u32 pid) { - struct sk_buff *skb; - int err; - struct net *net = dev_net(team->dev); - - skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); - if (!skb) - return -ENOMEM; - - err = team_nl_fill_options_get(skb, 0, 0, 0, team, false); - if (err < 0) - goto err_fill; - - err = genlmsg_multicast_netns(net, skb, 0, team_change_event_mcgrp.id, - GFP_KERNEL); - return err; + return genlmsg_multicast_netns(dev_net(team->dev), skb, 0, + team_change_event_mcgrp.id, GFP_KERNEL); +} -err_fill: - nlmsg_free(skb); - return err; +static int team_nl_send_event_options_get(struct team *team, + struct list_head *sel_opt_inst_list) +{ + return team_nl_send_options_get(team, 0, 0, 0, team_nl_send_multicast, + sel_opt_inst_list); } static int team_nl_send_event_port_list_get(struct team *team) @@ -1867,7 +2184,7 @@ static int team_nl_send_event_port_list_get(struct team *team) int err; struct net *net = dev_net(team->dev); - skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!skb) return -ENOMEM; @@ -1918,10 +2235,17 @@ static void team_nl_fini(void) static void __team_options_change_check(struct team *team) { int err; + struct team_option_inst *opt_inst; + LIST_HEAD(sel_opt_inst_list); - err = team_nl_send_event_options_get(team); + list_for_each_entry(opt_inst, &team->option_inst_list, list) { + if (opt_inst->changed) + list_add_tail(&opt_inst->tmp_list, &sel_opt_inst_list); + } + err = 
team_nl_send_event_options_get(team, &sel_opt_inst_list); if (err) - netdev_warn(team->dev, "Failed to send options change via netlink\n"); + netdev_warn(team->dev, "Failed to send options change via netlink (err %d)\n", + err); } /* rtnl lock is held */ @@ -1965,6 +2289,7 @@ static void team_port_change_check(struct team_port *port, bool linkup) mutex_unlock(&team->lock); } + /************************************ * Net device notifier event handler ************************************/ diff --git a/drivers/net/team/team_mode_activebackup.c b/drivers/net/team/team_mode_activebackup.c index fd6bd03aaa89..6262b4defd93 100644 --- a/drivers/net/team/team_mode_activebackup.c +++ b/drivers/net/team/team_mode_activebackup.c @@ -1,5 +1,5 @@ /* - * net/drivers/team/team_mode_activebackup.c - Active-backup mode for team + * drivers/net/team/team_mode_activebackup.c - Active-backup mode for team * Copyright (c) 2011 Jiri Pirko <jpirko@redhat.com> * * This program is free software; you can redistribute it and/or modify @@ -40,11 +40,10 @@ static bool ab_transmit(struct team *team, struct sk_buff *skb) { struct team_port *active_port; - active_port = rcu_dereference(ab_priv(team)->active_port); + active_port = rcu_dereference_bh(ab_priv(team)->active_port); if (unlikely(!active_port)) goto drop; - skb->dev = active_port->dev; - if (dev_queue_xmit(skb)) + if (team_dev_queue_xmit(team, active_port, skb)) return false; return true; @@ -61,8 +60,12 @@ static void ab_port_leave(struct team *team, struct team_port *port) static int ab_active_port_get(struct team *team, struct team_gsetter_ctx *ctx) { - if (ab_priv(team)->active_port) - ctx->data.u32_val = ab_priv(team)->active_port->dev->ifindex; + struct team_port *active_port; + + active_port = rcu_dereference_protected(ab_priv(team)->active_port, + lockdep_is_held(&team->lock)); + if (active_port) + ctx->data.u32_val = active_port->dev->ifindex; else ctx->data.u32_val = 0; return 0; @@ -108,7 +111,7 @@ static const struct team_mode_ops ab_mode_ops = { .port_leave = ab_port_leave, }; -static struct team_mode ab_mode = { +static const struct team_mode ab_mode = { .kind = "activebackup", .owner = THIS_MODULE, .priv_size = sizeof(struct ab_priv), diff --git a/drivers/net/team/team_mode_broadcast.c b/drivers/net/team/team_mode_broadcast.c new file mode 100644 index 000000000000..c96e4d2967f0 --- /dev/null +++ b/drivers/net/team/team_mode_broadcast.c @@ -0,0 +1,87 @@ +/* + * drivers/net/team/team_mode_broadcast.c - Broadcast mode for team + * Copyright (c) 2012 Jiri Pirko <jpirko@redhat.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
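The broadcast mode added in the new file below (bc_transmit()) clones the skb for every transmit-capable port except the last one it finds and hands the original to that last port, so exactly one transmission avoids a clone. A standalone sketch of that fan-out; ports, packets and the send routine are stand-ins, and the accumulated return value is simplified here to "at least one send worked".

/* Sketch of the broadcast fan-out used by bc_transmit() below: duplicate
 * the packet for all but the last eligible port and hand the original to
 * the last one. Ports and packets are stand-ins (flags, strings, printf). */
#include <stdio.h>
#include <stdbool.h>

#define NPORTS 3

static bool port_txable[NPORTS] = { true, false, true };

static bool send_on_port(int port, const char *pkt, bool cloned)
{
    printf("port %d: sending %s (%s)\n", port, pkt,
           cloned ? "clone" : "original");
    return true;
}

static bool broadcast(const char *pkt)
{
    int cur, last = -1;
    bool sum_ret = false;

    for (cur = 0; cur < NPORTS; cur++) {
        if (!port_txable[cur])
            continue;
        if (last >= 0) {
            /* Every eligible port except the final one gets a copy. */
            if (send_on_port(last, pkt, true))
                sum_ret = true;
        }
        last = cur;
    }
    if (last >= 0 && send_on_port(last, pkt, false))
        sum_ret = true;
    return sum_ret;    /* simplified: true if at least one send worked */
}

int main(void)
{
    return broadcast("hello") ? 0 : 1;
}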
+ */ + +#include <linux/kernel.h> +#include <linux/types.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/errno.h> +#include <linux/netdevice.h> +#include <linux/if_team.h> + +static bool bc_transmit(struct team *team, struct sk_buff *skb) +{ + struct team_port *cur; + struct team_port *last = NULL; + struct sk_buff *skb2; + bool ret; + bool sum_ret = false; + + list_for_each_entry_rcu(cur, &team->port_list, list) { + if (team_port_txable(cur)) { + if (last) { + skb2 = skb_clone(skb, GFP_ATOMIC); + if (skb2) { + ret = team_dev_queue_xmit(team, last, + skb2); + if (!sum_ret) + sum_ret = ret; + } + } + last = cur; + } + } + if (last) { + ret = team_dev_queue_xmit(team, last, skb); + if (!sum_ret) + sum_ret = ret; + } + return sum_ret; +} + +static int bc_port_enter(struct team *team, struct team_port *port) +{ + return team_port_set_team_mac(port); +} + +static void bc_port_change_mac(struct team *team, struct team_port *port) +{ + team_port_set_team_mac(port); +} + +static const struct team_mode_ops bc_mode_ops = { + .transmit = bc_transmit, + .port_enter = bc_port_enter, + .port_change_mac = bc_port_change_mac, +}; + +static const struct team_mode bc_mode = { + .kind = "broadcast", + .owner = THIS_MODULE, + .ops = &bc_mode_ops, +}; + +static int __init bc_init_module(void) +{ + return team_mode_register(&bc_mode); +} + +static void __exit bc_cleanup_module(void) +{ + team_mode_unregister(&bc_mode); +} + +module_init(bc_init_module); +module_exit(bc_cleanup_module); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Jiri Pirko <jpirko@redhat.com>"); +MODULE_DESCRIPTION("Broadcast mode for team"); +MODULE_ALIAS("team-mode-broadcast"); diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c index 86e8183c8e3d..cdc31b5ea15e 100644 --- a/drivers/net/team/team_mode_loadbalance.c +++ b/drivers/net/team/team_mode_loadbalance.c @@ -17,34 +17,209 @@ #include <linux/filter.h> #include <linux/if_team.h> +struct lb_priv; + +typedef struct team_port *lb_select_tx_port_func_t(struct team *, + struct lb_priv *, + struct sk_buff *, + unsigned char); + +#define LB_TX_HASHTABLE_SIZE 256 /* hash is a char */ + +struct lb_stats { + u64 tx_bytes; +}; + +struct lb_pcpu_stats { + struct lb_stats hash_stats[LB_TX_HASHTABLE_SIZE]; + struct u64_stats_sync syncp; +}; + +struct lb_stats_info { + struct lb_stats stats; + struct lb_stats last_stats; + struct team_option_inst_info *opt_inst_info; +}; + +struct lb_port_mapping { + struct team_port __rcu *port; + struct team_option_inst_info *opt_inst_info; +}; + +struct lb_priv_ex { + struct team *team; + struct lb_port_mapping tx_hash_to_port_mapping[LB_TX_HASHTABLE_SIZE]; + struct sock_fprog *orig_fprog; + struct { + unsigned int refresh_interval; /* in tenths of second */ + struct delayed_work refresh_dw; + struct lb_stats_info info[LB_TX_HASHTABLE_SIZE]; + } stats; +}; + struct lb_priv { struct sk_filter __rcu *fp; - struct sock_fprog *orig_fprog; + lb_select_tx_port_func_t __rcu *select_tx_port_func; + struct lb_pcpu_stats __percpu *pcpu_stats; + struct lb_priv_ex *ex; /* priv extension */ }; -static struct lb_priv *lb_priv(struct team *team) +static struct lb_priv *get_lb_priv(struct team *team) { return (struct lb_priv *) &team->mode_priv; } -static bool lb_transmit(struct team *team, struct sk_buff *skb) +struct lb_port_priv { + struct lb_stats __percpu *pcpu_stats; + struct lb_stats_info stats_info; +}; + +static struct lb_port_priv *get_lb_port_priv(struct team_port *port) +{ + return (struct lb_port_priv 
*) &port->mode_priv; +} + +#define LB_HTPM_PORT_BY_HASH(lp_priv, hash) \ + (lb_priv)->ex->tx_hash_to_port_mapping[hash].port + +#define LB_HTPM_OPT_INST_INFO_BY_HASH(lp_priv, hash) \ + (lb_priv)->ex->tx_hash_to_port_mapping[hash].opt_inst_info + +static void lb_tx_hash_to_port_mapping_null_port(struct team *team, + struct team_port *port) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + bool changed = false; + int i; + + for (i = 0; i < LB_TX_HASHTABLE_SIZE; i++) { + struct lb_port_mapping *pm; + + pm = &lb_priv->ex->tx_hash_to_port_mapping[i]; + if (rcu_access_pointer(pm->port) == port) { + RCU_INIT_POINTER(pm->port, NULL); + team_option_inst_set_change(pm->opt_inst_info); + changed = true; + } + } + if (changed) + team_options_change_check(team); +} + +/* Basic tx selection based solely by hash */ +static struct team_port *lb_hash_select_tx_port(struct team *team, + struct lb_priv *lb_priv, + struct sk_buff *skb, + unsigned char hash) { - struct sk_filter *fp; - struct team_port *port; - unsigned int hash; int port_index; - fp = rcu_dereference(lb_priv(team)->fp); - if (unlikely(!fp)) - goto drop; - hash = SK_RUN_FILTER(fp, skb); port_index = hash % team->en_port_count; - port = team_get_port_by_index_rcu(team, port_index); + return team_get_port_by_index_rcu(team, port_index); +} + +/* Hash to port mapping select tx port */ +static struct team_port *lb_htpm_select_tx_port(struct team *team, + struct lb_priv *lb_priv, + struct sk_buff *skb, + unsigned char hash) +{ + return rcu_dereference_bh(LB_HTPM_PORT_BY_HASH(lb_priv, hash)); +} + +struct lb_select_tx_port { + char *name; + lb_select_tx_port_func_t *func; +}; + +static const struct lb_select_tx_port lb_select_tx_port_list[] = { + { + .name = "hash", + .func = lb_hash_select_tx_port, + }, + { + .name = "hash_to_port_mapping", + .func = lb_htpm_select_tx_port, + }, +}; +#define LB_SELECT_TX_PORT_LIST_COUNT ARRAY_SIZE(lb_select_tx_port_list) + +static char *lb_select_tx_port_get_name(lb_select_tx_port_func_t *func) +{ + int i; + + for (i = 0; i < LB_SELECT_TX_PORT_LIST_COUNT; i++) { + const struct lb_select_tx_port *item; + + item = &lb_select_tx_port_list[i]; + if (item->func == func) + return item->name; + } + return NULL; +} + +static lb_select_tx_port_func_t *lb_select_tx_port_get_func(const char *name) +{ + int i; + + for (i = 0; i < LB_SELECT_TX_PORT_LIST_COUNT; i++) { + const struct lb_select_tx_port *item; + + item = &lb_select_tx_port_list[i]; + if (!strcmp(item->name, name)) + return item->func; + } + return NULL; +} + +static unsigned int lb_get_skb_hash(struct lb_priv *lb_priv, + struct sk_buff *skb) +{ + struct sk_filter *fp; + uint32_t lhash; + unsigned char *c; + + fp = rcu_dereference_bh(lb_priv->fp); + if (unlikely(!fp)) + return 0; + lhash = SK_RUN_FILTER(fp, skb); + c = (char *) &lhash; + return c[0] ^ c[1] ^ c[2] ^ c[3]; +} + +static void lb_update_tx_stats(unsigned int tx_bytes, struct lb_priv *lb_priv, + struct lb_port_priv *lb_port_priv, + unsigned char hash) +{ + struct lb_pcpu_stats *pcpu_stats; + struct lb_stats *port_stats; + struct lb_stats *hash_stats; + + pcpu_stats = this_cpu_ptr(lb_priv->pcpu_stats); + port_stats = this_cpu_ptr(lb_port_priv->pcpu_stats); + hash_stats = &pcpu_stats->hash_stats[hash]; + u64_stats_update_begin(&pcpu_stats->syncp); + port_stats->tx_bytes += tx_bytes; + hash_stats->tx_bytes += tx_bytes; + u64_stats_update_end(&pcpu_stats->syncp); +} + +static bool lb_transmit(struct team *team, struct sk_buff *skb) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + lb_select_tx_port_func_t 
*select_tx_port_func; + struct team_port *port; + unsigned char hash; + unsigned int tx_bytes = skb->len; + + hash = lb_get_skb_hash(lb_priv, skb); + select_tx_port_func = rcu_dereference_bh(lb_priv->select_tx_port_func); + port = select_tx_port_func(team, lb_priv, skb, hash); if (unlikely(!port)) goto drop; - skb->dev = port->dev; - if (dev_queue_xmit(skb)) + if (team_dev_queue_xmit(team, port, skb)) return false; + lb_update_tx_stats(tx_bytes, lb_priv, get_lb_port_priv(port), hash); return true; drop: @@ -54,14 +229,16 @@ drop: static int lb_bpf_func_get(struct team *team, struct team_gsetter_ctx *ctx) { - if (!lb_priv(team)->orig_fprog) { + struct lb_priv *lb_priv = get_lb_priv(team); + + if (!lb_priv->ex->orig_fprog) { ctx->data.bin_val.len = 0; ctx->data.bin_val.ptr = NULL; return 0; } - ctx->data.bin_val.len = lb_priv(team)->orig_fprog->len * + ctx->data.bin_val.len = lb_priv->ex->orig_fprog->len * sizeof(struct sock_filter); - ctx->data.bin_val.ptr = lb_priv(team)->orig_fprog->filter; + ctx->data.bin_val.ptr = lb_priv->ex->orig_fprog->filter; return 0; } @@ -94,7 +271,9 @@ static void __fprog_destroy(struct sock_fprog *fprog) static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx) { + struct lb_priv *lb_priv = get_lb_priv(team); struct sk_filter *fp = NULL; + struct sk_filter *orig_fp; struct sock_fprog *fprog = NULL; int err; @@ -110,14 +289,238 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx) } } - if (lb_priv(team)->orig_fprog) { + if (lb_priv->ex->orig_fprog) { /* Clear old filter data */ - __fprog_destroy(lb_priv(team)->orig_fprog); - sk_unattached_filter_destroy(lb_priv(team)->fp); + __fprog_destroy(lb_priv->ex->orig_fprog); + orig_fp = rcu_dereference_protected(lb_priv->fp, + lockdep_is_held(&team->lock)); + sk_unattached_filter_destroy(orig_fp); } - rcu_assign_pointer(lb_priv(team)->fp, fp); - lb_priv(team)->orig_fprog = fprog; + rcu_assign_pointer(lb_priv->fp, fp); + lb_priv->ex->orig_fprog = fprog; + return 0; +} + +static int lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + lb_select_tx_port_func_t *func; + char *name; + + func = rcu_dereference_protected(lb_priv->select_tx_port_func, + lockdep_is_held(&team->lock)); + name = lb_select_tx_port_get_name(func); + BUG_ON(!name); + ctx->data.str_val = name; + return 0; +} + +static int lb_tx_method_set(struct team *team, struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + lb_select_tx_port_func_t *func; + + func = lb_select_tx_port_get_func(ctx->data.str_val); + if (!func) + return -EINVAL; + rcu_assign_pointer(lb_priv->select_tx_port_func, func); + return 0; +} + +static int lb_tx_hash_to_port_mapping_init(struct team *team, + struct team_option_inst_info *info) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + unsigned char hash = info->array_index; + + LB_HTPM_OPT_INST_INFO_BY_HASH(lb_priv, hash) = info; + return 0; +} + +static int lb_tx_hash_to_port_mapping_get(struct team *team, + struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + struct team_port *port; + unsigned char hash = ctx->info->array_index; + + port = LB_HTPM_PORT_BY_HASH(lb_priv, hash); + ctx->data.u32_val = port ? 
port->dev->ifindex : 0; + return 0; +} + +static int lb_tx_hash_to_port_mapping_set(struct team *team, + struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + struct team_port *port; + unsigned char hash = ctx->info->array_index; + + list_for_each_entry(port, &team->port_list, list) { + if (ctx->data.u32_val == port->dev->ifindex && + team_port_enabled(port)) { + rcu_assign_pointer(LB_HTPM_PORT_BY_HASH(lb_priv, hash), + port); + return 0; + } + } + return -ENODEV; +} + +static int lb_hash_stats_init(struct team *team, + struct team_option_inst_info *info) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + unsigned char hash = info->array_index; + + lb_priv->ex->stats.info[hash].opt_inst_info = info; + return 0; +} + +static int lb_hash_stats_get(struct team *team, struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + unsigned char hash = ctx->info->array_index; + + ctx->data.bin_val.ptr = &lb_priv->ex->stats.info[hash].stats; + ctx->data.bin_val.len = sizeof(struct lb_stats); + return 0; +} + +static int lb_port_stats_init(struct team *team, + struct team_option_inst_info *info) +{ + struct team_port *port = info->port; + struct lb_port_priv *lb_port_priv = get_lb_port_priv(port); + + lb_port_priv->stats_info.opt_inst_info = info; + return 0; +} + +static int lb_port_stats_get(struct team *team, struct team_gsetter_ctx *ctx) +{ + struct team_port *port = ctx->info->port; + struct lb_port_priv *lb_port_priv = get_lb_port_priv(port); + + ctx->data.bin_val.ptr = &lb_port_priv->stats_info.stats; + ctx->data.bin_val.len = sizeof(struct lb_stats); + return 0; +} + +static void __lb_stats_info_refresh_prepare(struct lb_stats_info *s_info) +{ + memcpy(&s_info->last_stats, &s_info->stats, sizeof(struct lb_stats)); + memset(&s_info->stats, 0, sizeof(struct lb_stats)); +} + +static bool __lb_stats_info_refresh_check(struct lb_stats_info *s_info, + struct team *team) +{ + if (memcmp(&s_info->last_stats, &s_info->stats, + sizeof(struct lb_stats))) { + team_option_inst_set_change(s_info->opt_inst_info); + return true; + } + return false; +} + +static void __lb_one_cpu_stats_add(struct lb_stats *acc_stats, + struct lb_stats *cpu_stats, + struct u64_stats_sync *syncp) +{ + unsigned int start; + struct lb_stats tmp; + + do { + start = u64_stats_fetch_begin_bh(syncp); + tmp.tx_bytes = cpu_stats->tx_bytes; + } while (u64_stats_fetch_retry_bh(syncp, start)); + acc_stats->tx_bytes += tmp.tx_bytes; +} + +static void lb_stats_refresh(struct work_struct *work) +{ + struct team *team; + struct lb_priv *lb_priv; + struct lb_priv_ex *lb_priv_ex; + struct lb_pcpu_stats *pcpu_stats; + struct lb_stats *stats; + struct lb_stats_info *s_info; + struct team_port *port; + bool changed = false; + int i; + int j; + + lb_priv_ex = container_of(work, struct lb_priv_ex, + stats.refresh_dw.work); + + team = lb_priv_ex->team; + lb_priv = get_lb_priv(team); + + if (!mutex_trylock(&team->lock)) { + schedule_delayed_work(&lb_priv_ex->stats.refresh_dw, 0); + return; + } + + for (j = 0; j < LB_TX_HASHTABLE_SIZE; j++) { + s_info = &lb_priv->ex->stats.info[j]; + __lb_stats_info_refresh_prepare(s_info); + for_each_possible_cpu(i) { + pcpu_stats = per_cpu_ptr(lb_priv->pcpu_stats, i); + stats = &pcpu_stats->hash_stats[j]; + __lb_one_cpu_stats_add(&s_info->stats, stats, + &pcpu_stats->syncp); + } + changed |= __lb_stats_info_refresh_check(s_info, team); + } + + list_for_each_entry(port, &team->port_list, list) { + struct lb_port_priv *lb_port_priv = get_lb_port_priv(port); + + s_info 
= &lb_port_priv->stats_info; + __lb_stats_info_refresh_prepare(s_info); + for_each_possible_cpu(i) { + pcpu_stats = per_cpu_ptr(lb_priv->pcpu_stats, i); + stats = per_cpu_ptr(lb_port_priv->pcpu_stats, i); + __lb_one_cpu_stats_add(&s_info->stats, stats, + &pcpu_stats->syncp); + } + changed |= __lb_stats_info_refresh_check(s_info, team); + } + + if (changed) + team_options_change_check(team); + + schedule_delayed_work(&lb_priv_ex->stats.refresh_dw, + (lb_priv_ex->stats.refresh_interval * HZ) / 10); + + mutex_unlock(&team->lock); +} + +static int lb_stats_refresh_interval_get(struct team *team, + struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + + ctx->data.u32_val = lb_priv->ex->stats.refresh_interval; + return 0; +} + +static int lb_stats_refresh_interval_set(struct team *team, + struct team_gsetter_ctx *ctx) +{ + struct lb_priv *lb_priv = get_lb_priv(team); + unsigned int interval; + + interval = ctx->data.u32_val; + if (lb_priv->ex->stats.refresh_interval == interval) + return 0; + lb_priv->ex->stats.refresh_interval = interval; + if (interval) + schedule_delayed_work(&lb_priv->ex->stats.refresh_dw, 0); + else + cancel_delayed_work(&lb_priv->ex->stats.refresh_dw); return 0; } @@ -128,30 +531,125 @@ static const struct team_option lb_options[] = { .getter = lb_bpf_func_get, .setter = lb_bpf_func_set, }, + { + .name = "lb_tx_method", + .type = TEAM_OPTION_TYPE_STRING, + .getter = lb_tx_method_get, + .setter = lb_tx_method_set, + }, + { + .name = "lb_tx_hash_to_port_mapping", + .array_size = LB_TX_HASHTABLE_SIZE, + .type = TEAM_OPTION_TYPE_U32, + .init = lb_tx_hash_to_port_mapping_init, + .getter = lb_tx_hash_to_port_mapping_get, + .setter = lb_tx_hash_to_port_mapping_set, + }, + { + .name = "lb_hash_stats", + .array_size = LB_TX_HASHTABLE_SIZE, + .type = TEAM_OPTION_TYPE_BINARY, + .init = lb_hash_stats_init, + .getter = lb_hash_stats_get, + }, + { + .name = "lb_port_stats", + .per_port = true, + .type = TEAM_OPTION_TYPE_BINARY, + .init = lb_port_stats_init, + .getter = lb_port_stats_get, + }, + { + .name = "lb_stats_refresh_interval", + .type = TEAM_OPTION_TYPE_U32, + .getter = lb_stats_refresh_interval_get, + .setter = lb_stats_refresh_interval_set, + }, }; static int lb_init(struct team *team) { - return team_options_register(team, lb_options, - ARRAY_SIZE(lb_options)); + struct lb_priv *lb_priv = get_lb_priv(team); + lb_select_tx_port_func_t *func; + int err; + + /* set default tx port selector */ + func = lb_select_tx_port_get_func("hash"); + BUG_ON(!func); + rcu_assign_pointer(lb_priv->select_tx_port_func, func); + + lb_priv->ex = kzalloc(sizeof(*lb_priv->ex), GFP_KERNEL); + if (!lb_priv->ex) + return -ENOMEM; + lb_priv->ex->team = team; + + lb_priv->pcpu_stats = alloc_percpu(struct lb_pcpu_stats); + if (!lb_priv->pcpu_stats) { + err = -ENOMEM; + goto err_alloc_pcpu_stats; + } + + INIT_DELAYED_WORK(&lb_priv->ex->stats.refresh_dw, lb_stats_refresh); + + err = team_options_register(team, lb_options, ARRAY_SIZE(lb_options)); + if (err) + goto err_options_register; + return 0; + +err_options_register: + free_percpu(lb_priv->pcpu_stats); +err_alloc_pcpu_stats: + kfree(lb_priv->ex); + return err; } static void lb_exit(struct team *team) { + struct lb_priv *lb_priv = get_lb_priv(team); + team_options_unregister(team, lb_options, ARRAY_SIZE(lb_options)); + cancel_delayed_work_sync(&lb_priv->ex->stats.refresh_dw); + free_percpu(lb_priv->pcpu_stats); + kfree(lb_priv->ex); +} + +static int lb_port_enter(struct team *team, struct team_port *port) +{ + struct 
lb_port_priv *lb_port_priv = get_lb_port_priv(port); + + lb_port_priv->pcpu_stats = alloc_percpu(struct lb_stats); + if (!lb_port_priv->pcpu_stats) + return -ENOMEM; + return 0; +} + +static void lb_port_leave(struct team *team, struct team_port *port) +{ + struct lb_port_priv *lb_port_priv = get_lb_port_priv(port); + + free_percpu(lb_port_priv->pcpu_stats); +} + +static void lb_port_disabled(struct team *team, struct team_port *port) +{ + lb_tx_hash_to_port_mapping_null_port(team, port); } static const struct team_mode_ops lb_mode_ops = { .init = lb_init, .exit = lb_exit, + .port_enter = lb_port_enter, + .port_leave = lb_port_leave, + .port_disabled = lb_port_disabled, .transmit = lb_transmit, }; -static struct team_mode lb_mode = { +static const struct team_mode lb_mode = { .kind = "loadbalance", .owner = THIS_MODULE, .priv_size = sizeof(struct lb_priv), + .port_priv_size = sizeof(struct lb_port_priv), .ops = &lb_mode_ops, }; diff --git a/drivers/net/team/team_mode_roundrobin.c b/drivers/net/team/team_mode_roundrobin.c index 6abfbdc96be5..ad7ed0ec544c 100644 --- a/drivers/net/team/team_mode_roundrobin.c +++ b/drivers/net/team/team_mode_roundrobin.c @@ -1,5 +1,5 @@ /* - * net/drivers/team/team_mode_roundrobin.c - Round-robin mode for team + * drivers/net/team/team_mode_roundrobin.c - Round-robin mode for team * Copyright (c) 2011 Jiri Pirko <jpirko@redhat.com> * * This program is free software; you can redistribute it and/or modify @@ -30,16 +30,16 @@ static struct team_port *__get_first_port_up(struct team *team, { struct team_port *cur; - if (port->linkup) + if (team_port_txable(port)) return port; cur = port; list_for_each_entry_continue_rcu(cur, &team->port_list, list) - if (cur->linkup) + if (team_port_txable(port)) return cur; list_for_each_entry_rcu(cur, &team->port_list, list) { if (cur == port) break; - if (cur->linkup) + if (team_port_txable(port)) return cur; } return NULL; @@ -55,8 +55,7 @@ static bool rr_transmit(struct team *team, struct sk_buff *skb) port = __get_first_port_up(team, port); if (unlikely(!port)) goto drop; - skb->dev = port->dev; - if (dev_queue_xmit(skb)) + if (team_dev_queue_xmit(team, port, skb)) return false; return true; @@ -81,7 +80,7 @@ static const struct team_mode_ops rr_mode_ops = { .port_change_mac = rr_port_change_mac, }; -static struct team_mode rr_mode = { +static const struct team_mode rr_mode = { .kind = "roundrobin", .owner = THIS_MODULE, .priv_size = sizeof(struct rr_priv), diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 987aeefbc774..c62163e272cd 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -22,7 +22,7 @@ * Add TUNSETLINK ioctl to set the link encapsulation * * Mark Smith <markzzzsmith@yahoo.com.au> - * Use random_ether_addr() for tap MAC address. + * Use eth_random_addr() for tap MAC address. * * Harald Roelle <harald.roelle@ifi.lmu.de> 2004/04/20 * Fixes in packet dropping, queue length setting and queue wakeup. @@ -100,6 +100,8 @@ do { \ } while (0) #endif +#define GOODCOPY_LEN 128 + #define FLT_EXACT_COUNT 8 struct tap_filter { unsigned int count; /* Number of addrs. Zero means disabled */ @@ -358,6 +360,8 @@ static void tun_free_netdev(struct net_device *dev) { struct tun_struct *tun = netdev_priv(dev); + BUG_ON(!test_bit(SOCK_EXTERNALLY_ALLOCATED, &tun->socket.flags)); + sk_release_kernel(tun->socket.sk); } @@ -414,6 +418,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev) /* Orphan the skb - required as we might hang on to it * for indefinite time. 
*/ + if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) + goto drop; skb_orphan(skb); /* Enqueue packet */ @@ -600,19 +606,100 @@ static struct sk_buff *tun_alloc_skb(struct tun_struct *tun, return skb; } +/* set skb frags from iovec, this can move to core network code for reuse */ +static int zerocopy_sg_from_iovec(struct sk_buff *skb, const struct iovec *from, + int offset, size_t count) +{ + int len = iov_length(from, count) - offset; + int copy = skb_headlen(skb); + int size, offset1 = 0; + int i = 0; + + /* Skip over from offset */ + while (count && (offset >= from->iov_len)) { + offset -= from->iov_len; + ++from; + --count; + } + + /* copy up to skb headlen */ + while (count && (copy > 0)) { + size = min_t(unsigned int, copy, from->iov_len - offset); + if (copy_from_user(skb->data + offset1, from->iov_base + offset, + size)) + return -EFAULT; + if (copy > size) { + ++from; + --count; + offset = 0; + } else + offset += size; + copy -= size; + offset1 += size; + } + + if (len == offset1) + return 0; + + while (count--) { + struct page *page[MAX_SKB_FRAGS]; + int num_pages; + unsigned long base; + unsigned long truesize; + + len = from->iov_len - offset; + if (!len) { + offset = 0; + ++from; + continue; + } + base = (unsigned long)from->iov_base + offset; + size = ((base & ~PAGE_MASK) + len + ~PAGE_MASK) >> PAGE_SHIFT; + if (i + size > MAX_SKB_FRAGS) + return -EMSGSIZE; + num_pages = get_user_pages_fast(base, size, 0, &page[i]); + if (num_pages != size) { + for (i = 0; i < num_pages; i++) + put_page(page[i]); + return -EFAULT; + } + truesize = size * PAGE_SIZE; + skb->data_len += len; + skb->len += len; + skb->truesize += truesize; + atomic_add(truesize, &skb->sk->sk_wmem_alloc); + while (len) { + int off = base & ~PAGE_MASK; + int size = min_t(int, len, PAGE_SIZE - off); + __skb_fill_page_desc(skb, i, page[i], off, size); + skb_shinfo(skb)->nr_frags++; + /* increase sk_wmem_alloc */ + base += size; + len -= size; + i++; + } + offset = 0; + ++from; + } + return 0; +} + /* Get packet from user space buffer */ -static ssize_t tun_get_user(struct tun_struct *tun, - const struct iovec *iv, size_t count, - int noblock) +static ssize_t tun_get_user(struct tun_struct *tun, void *msg_control, + const struct iovec *iv, size_t total_len, + size_t count, int noblock) { struct tun_pi pi = { 0, cpu_to_be16(ETH_P_IP) }; struct sk_buff *skb; - size_t len = count, align = NET_SKB_PAD; + size_t len = total_len, align = NET_SKB_PAD; struct virtio_net_hdr gso = { 0 }; int offset = 0; + int copylen; + bool zerocopy = false; + int err; if (!(tun->flags & TUN_NO_PI)) { - if ((len -= sizeof(pi)) > count) + if ((len -= sizeof(pi)) > total_len) return -EINVAL; if (memcpy_fromiovecend((void *)&pi, iv, 0, sizeof(pi))) @@ -621,7 +708,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, } if (tun->flags & TUN_VNET_HDR) { - if ((len -= tun->vnet_hdr_sz) > count) + if ((len -= tun->vnet_hdr_sz) > total_len) return -EINVAL; if (memcpy_fromiovecend((void *)&gso, iv, offset, sizeof(gso))) @@ -643,14 +730,46 @@ static ssize_t tun_get_user(struct tun_struct *tun, return -EINVAL; } - skb = tun_alloc_skb(tun, align, len, gso.hdr_len, noblock); + if (msg_control) + zerocopy = true; + + if (zerocopy) { + /* Userspace may produce vectors with count greater than + * MAX_SKB_FRAGS, so we need to linearize parts of the skb + * to let the rest of data to be fit in the frags. 
+ */ + if (count > MAX_SKB_FRAGS) { + copylen = iov_length(iv, count - MAX_SKB_FRAGS); + if (copylen < offset) + copylen = 0; + else + copylen -= offset; + } else + copylen = 0; + /* There are 256 bytes to be copied in skb, so there is enough + * room for skb expand head in case it is used. + * The rest of the buffer is mapped from userspace. + */ + if (copylen < gso.hdr_len) + copylen = gso.hdr_len; + if (!copylen) + copylen = GOODCOPY_LEN; + } else + copylen = len; + + skb = tun_alloc_skb(tun, align, copylen, gso.hdr_len, noblock); if (IS_ERR(skb)) { if (PTR_ERR(skb) != -EAGAIN) tun->dev->stats.rx_dropped++; return PTR_ERR(skb); } - if (skb_copy_datagram_from_iovec(skb, 0, iv, offset, len)) { + if (zerocopy) + err = zerocopy_sg_from_iovec(skb, iv, offset, count); + else + err = skb_copy_datagram_from_iovec(skb, 0, iv, offset, len); + + if (err) { tun->dev->stats.rx_dropped++; kfree_skb(skb); return -EFAULT; @@ -724,12 +843,18 @@ static ssize_t tun_get_user(struct tun_struct *tun, skb_shinfo(skb)->gso_segs = 0; } + /* copy skb_ubuf_info for callback when skb has no error */ + if (zerocopy) { + skb_shinfo(skb)->destructor_arg = msg_control; + skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY; + } + netif_rx_ni(skb); tun->dev->stats.rx_packets++; tun->dev->stats.rx_bytes += len; - return count; + return total_len; } static ssize_t tun_chr_aio_write(struct kiocb *iocb, const struct iovec *iv, @@ -744,7 +869,7 @@ static ssize_t tun_chr_aio_write(struct kiocb *iocb, const struct iovec *iv, tun_debug(KERN_INFO, tun, "tun_chr_write %ld\n", count); - result = tun_get_user(tun, iv, iov_length(iv, count), + result = tun_get_user(tun, NULL, iv, iov_length(iv, count), count, file->f_flags & O_NONBLOCK); tun_put(tun); @@ -958,8 +1083,8 @@ static int tun_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *m, size_t total_len) { struct tun_struct *tun = container_of(sock, struct tun_struct, socket); - return tun_get_user(tun, m->msg_iov, total_len, - m->msg_flags & MSG_DONTWAIT); + return tun_get_user(tun, m->msg_control, m->msg_iov, total_len, + m->msg_iovlen, m->msg_flags & MSG_DONTWAIT); } static int tun_recvmsg(struct kiocb *iocb, struct socket *sock, @@ -1115,6 +1240,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr) tun->flags = flags; tun->txflt.count = 0; tun->vnet_hdr_sz = sizeof(struct virtio_net_hdr); + set_bit(SOCK_EXTERNALLY_ALLOCATED, &tun->socket.flags); err = -ENOMEM; sk = sk_alloc(&init_net, AF_UNSPEC, GFP_KERNEL, &tun_proto); @@ -1128,6 +1254,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr) sock_init_data(&tun->socket, sk); sk->sk_write_space = tun_sock_write_space; sk->sk_sndbuf = INT_MAX; + sock_set_flag(sk, SOCK_ZEROCOPY); tun_sk(sk)->tun = tun; diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig index 833e32f8d63b..c1ae76968f47 100644 --- a/drivers/net/usb/Kconfig +++ b/drivers/net/usb/Kconfig @@ -134,6 +134,7 @@ config USB_NET_AX8817X tristate "ASIX AX88xxx Based USB 2.0 Ethernet Adapters" depends on USB_USBNET select CRC32 + select PHYLIB default y help This option adds support for ASIX AX88xxx based USB 2.0 diff --git a/drivers/net/usb/Makefile b/drivers/net/usb/Makefile index a2e2d72c52a0..bf063008c1af 100644 --- a/drivers/net/usb/Makefile +++ b/drivers/net/usb/Makefile @@ -8,6 +8,7 @@ obj-$(CONFIG_USB_PEGASUS) += pegasus.o obj-$(CONFIG_USB_RTL8150) += rtl8150.o obj-$(CONFIG_USB_HSO) += hso.o obj-$(CONFIG_USB_NET_AX8817X) += asix.o +asix-y := asix_devices.o asix_common.o ax88172a.o 
obj-$(CONFIG_USB_NET_CDCETHER) += cdc_ether.o obj-$(CONFIG_USB_NET_CDC_EEM) += cdc_eem.o obj-$(CONFIG_USB_NET_DM9601) += dm9601.o diff --git a/drivers/net/usb/asix.h b/drivers/net/usb/asix.h new file mode 100644 index 000000000000..e889631161b8 --- /dev/null +++ b/drivers/net/usb/asix.h @@ -0,0 +1,218 @@ +/* + * ASIX AX8817X based USB 2.0 Ethernet Devices + * Copyright (C) 2003-2006 David Hollis <dhollis@davehollis.com> + * Copyright (C) 2005 Phil Chang <pchang23@sbcglobal.net> + * Copyright (C) 2006 James Painter <jamie.painter@iname.com> + * Copyright (c) 2002-2003 TiVo Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#ifndef _ASIX_H +#define _ASIX_H + +// #define DEBUG // error path messages, extra info +// #define VERBOSE // more; success messages + +#include <linux/module.h> +#include <linux/kmod.h> +#include <linux/init.h> +#include <linux/netdevice.h> +#include <linux/etherdevice.h> +#include <linux/ethtool.h> +#include <linux/workqueue.h> +#include <linux/mii.h> +#include <linux/usb.h> +#include <linux/crc32.h> +#include <linux/usb/usbnet.h> +#include <linux/slab.h> +#include <linux/if_vlan.h> + +#define DRIVER_VERSION "22-Dec-2011" +#define DRIVER_NAME "asix" + +/* ASIX AX8817X based USB 2.0 Ethernet Devices */ + +#define AX_CMD_SET_SW_MII 0x06 +#define AX_CMD_READ_MII_REG 0x07 +#define AX_CMD_WRITE_MII_REG 0x08 +#define AX_CMD_SET_HW_MII 0x0a +#define AX_CMD_READ_EEPROM 0x0b +#define AX_CMD_WRITE_EEPROM 0x0c +#define AX_CMD_WRITE_ENABLE 0x0d +#define AX_CMD_WRITE_DISABLE 0x0e +#define AX_CMD_READ_RX_CTL 0x0f +#define AX_CMD_WRITE_RX_CTL 0x10 +#define AX_CMD_READ_IPG012 0x11 +#define AX_CMD_WRITE_IPG0 0x12 +#define AX_CMD_WRITE_IPG1 0x13 +#define AX_CMD_READ_NODE_ID 0x13 +#define AX_CMD_WRITE_NODE_ID 0x14 +#define AX_CMD_WRITE_IPG2 0x14 +#define AX_CMD_WRITE_MULTI_FILTER 0x16 +#define AX88172_CMD_READ_NODE_ID 0x17 +#define AX_CMD_READ_PHY_ID 0x19 +#define AX_CMD_READ_MEDIUM_STATUS 0x1a +#define AX_CMD_WRITE_MEDIUM_MODE 0x1b +#define AX_CMD_READ_MONITOR_MODE 0x1c +#define AX_CMD_WRITE_MONITOR_MODE 0x1d +#define AX_CMD_READ_GPIOS 0x1e +#define AX_CMD_WRITE_GPIOS 0x1f +#define AX_CMD_SW_RESET 0x20 +#define AX_CMD_SW_PHY_STATUS 0x21 +#define AX_CMD_SW_PHY_SELECT 0x22 + +#define AX_PHY_SELECT_MASK (BIT(3) | BIT(2)) +#define AX_PHY_SELECT_INTERNAL 0 +#define AX_PHY_SELECT_EXTERNAL BIT(2) + +#define AX_MONITOR_MODE 0x01 +#define AX_MONITOR_LINK 0x02 +#define AX_MONITOR_MAGIC 0x04 +#define AX_MONITOR_HSFS 0x10 + +/* AX88172 Medium Status Register values */ +#define AX88172_MEDIUM_FD 0x02 +#define AX88172_MEDIUM_TX 0x04 +#define AX88172_MEDIUM_FC 0x10 +#define AX88172_MEDIUM_DEFAULT \ + ( AX88172_MEDIUM_FD | AX88172_MEDIUM_TX | AX88172_MEDIUM_FC ) + +#define AX_MCAST_FILTER_SIZE 8 +#define AX_MAX_MCAST 64 + +#define AX_SWRESET_CLEAR 0x00 +#define AX_SWRESET_RR 0x01 +#define AX_SWRESET_RT 0x02 +#define 
AX_SWRESET_PRTE 0x04 +#define AX_SWRESET_PRL 0x08 +#define AX_SWRESET_BZ 0x10 +#define AX_SWRESET_IPRL 0x20 +#define AX_SWRESET_IPPD 0x40 + +#define AX88772_IPG0_DEFAULT 0x15 +#define AX88772_IPG1_DEFAULT 0x0c +#define AX88772_IPG2_DEFAULT 0x12 + +/* AX88772 & AX88178 Medium Mode Register */ +#define AX_MEDIUM_PF 0x0080 +#define AX_MEDIUM_JFE 0x0040 +#define AX_MEDIUM_TFC 0x0020 +#define AX_MEDIUM_RFC 0x0010 +#define AX_MEDIUM_ENCK 0x0008 +#define AX_MEDIUM_AC 0x0004 +#define AX_MEDIUM_FD 0x0002 +#define AX_MEDIUM_GM 0x0001 +#define AX_MEDIUM_SM 0x1000 +#define AX_MEDIUM_SBP 0x0800 +#define AX_MEDIUM_PS 0x0200 +#define AX_MEDIUM_RE 0x0100 + +#define AX88178_MEDIUM_DEFAULT \ + (AX_MEDIUM_PS | AX_MEDIUM_FD | AX_MEDIUM_AC | \ + AX_MEDIUM_RFC | AX_MEDIUM_TFC | AX_MEDIUM_JFE | \ + AX_MEDIUM_RE) + +#define AX88772_MEDIUM_DEFAULT \ + (AX_MEDIUM_FD | AX_MEDIUM_RFC | \ + AX_MEDIUM_TFC | AX_MEDIUM_PS | \ + AX_MEDIUM_AC | AX_MEDIUM_RE) + +/* AX88772 & AX88178 RX_CTL values */ +#define AX_RX_CTL_SO 0x0080 +#define AX_RX_CTL_AP 0x0020 +#define AX_RX_CTL_AM 0x0010 +#define AX_RX_CTL_AB 0x0008 +#define AX_RX_CTL_SEP 0x0004 +#define AX_RX_CTL_AMALL 0x0002 +#define AX_RX_CTL_PRO 0x0001 +#define AX_RX_CTL_MFB_2048 0x0000 +#define AX_RX_CTL_MFB_4096 0x0100 +#define AX_RX_CTL_MFB_8192 0x0200 +#define AX_RX_CTL_MFB_16384 0x0300 + +#define AX_DEFAULT_RX_CTL (AX_RX_CTL_SO | AX_RX_CTL_AB) + +/* GPIO 0 .. 2 toggles */ +#define AX_GPIO_GPO0EN 0x01 /* GPIO0 Output enable */ +#define AX_GPIO_GPO_0 0x02 /* GPIO0 Output value */ +#define AX_GPIO_GPO1EN 0x04 /* GPIO1 Output enable */ +#define AX_GPIO_GPO_1 0x08 /* GPIO1 Output value */ +#define AX_GPIO_GPO2EN 0x10 /* GPIO2 Output enable */ +#define AX_GPIO_GPO_2 0x20 /* GPIO2 Output value */ +#define AX_GPIO_RESERVED 0x40 /* Reserved */ +#define AX_GPIO_RSE 0x80 /* Reload serial EEPROM */ + +#define AX_EEPROM_MAGIC 0xdeadbeef +#define AX_EEPROM_LEN 0x200 + +/* This structure cannot exceed sizeof(unsigned long [5]) AKA 20 bytes */ +struct asix_data { + u8 multi_filter[AX_MCAST_FILTER_SIZE]; + u8 mac_addr[ETH_ALEN]; + u8 phymode; + u8 ledmode; + u8 res; +}; + +int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, + u16 size, void *data); + +int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, + u16 size, void *data); + +void asix_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data); + +int asix_rx_fixup(struct usbnet *dev, struct sk_buff *skb); + +struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb, + gfp_t flags); + +int asix_set_sw_mii(struct usbnet *dev); +int asix_set_hw_mii(struct usbnet *dev); + +int asix_read_phy_addr(struct usbnet *dev, int internal); +int asix_get_phy_addr(struct usbnet *dev); + +int asix_sw_reset(struct usbnet *dev, u8 flags); + +u16 asix_read_rx_ctl(struct usbnet *dev); +int asix_write_rx_ctl(struct usbnet *dev, u16 mode); + +u16 asix_read_medium_status(struct usbnet *dev); +int asix_write_medium_mode(struct usbnet *dev, u16 mode); + +int asix_write_gpio(struct usbnet *dev, u16 value, int sleep); + +void asix_set_multicast(struct net_device *net); + +int asix_mdio_read(struct net_device *netdev, int phy_id, int loc); +void asix_mdio_write(struct net_device *netdev, int phy_id, int loc, int val); + +void asix_get_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo); +int asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo); + +int asix_get_eeprom_len(struct net_device *net); +int asix_get_eeprom(struct net_device *net, struct 
ethtool_eeprom *eeprom, + u8 *data); +int asix_set_eeprom(struct net_device *net, struct ethtool_eeprom *eeprom, + u8 *data); + +void asix_get_drvinfo(struct net_device *net, struct ethtool_drvinfo *info); + +int asix_set_mac_address(struct net_device *net, void *p); + +#endif /* _ASIX_H */ diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c new file mode 100644 index 000000000000..774d9ce2dafc --- /dev/null +++ b/drivers/net/usb/asix_common.c @@ -0,0 +1,631 @@ +/* + * ASIX AX8817X based USB 2.0 Ethernet Devices + * Copyright (C) 2003-2006 David Hollis <dhollis@davehollis.com> + * Copyright (C) 2005 Phil Chang <pchang23@sbcglobal.net> + * Copyright (C) 2006 James Painter <jamie.painter@iname.com> + * Copyright (c) 2002-2003 TiVo Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#include "asix.h" + +int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, + u16 size, void *data) +{ + void *buf; + int err = -ENOMEM; + + netdev_dbg(dev->net, "asix_read_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", + cmd, value, index, size); + + buf = kmalloc(size, GFP_KERNEL); + if (!buf) + goto out; + + err = usb_control_msg( + dev->udev, + usb_rcvctrlpipe(dev->udev, 0), + cmd, + USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, + value, + index, + buf, + size, + USB_CTRL_GET_TIMEOUT); + if (err == size) + memcpy(data, buf, size); + else if (err >= 0) + err = -EINVAL; + kfree(buf); + +out: + return err; +} + +int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, + u16 size, void *data) +{ + void *buf = NULL; + int err = -ENOMEM; + + netdev_dbg(dev->net, "asix_write_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", + cmd, value, index, size); + + if (data) { + buf = kmemdup(data, size, GFP_KERNEL); + if (!buf) + goto out; + } + + err = usb_control_msg( + dev->udev, + usb_sndctrlpipe(dev->udev, 0), + cmd, + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, + value, + index, + buf, + size, + USB_CTRL_SET_TIMEOUT); + kfree(buf); + +out: + return err; +} + +static void asix_async_cmd_callback(struct urb *urb) +{ + struct usb_ctrlrequest *req = (struct usb_ctrlrequest *)urb->context; + int status = urb->status; + + if (status < 0) + printk(KERN_DEBUG "asix_async_cmd_callback() failed with %d", + status); + + kfree(req); + usb_free_urb(urb); +} + +void asix_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, u16 index, + u16 size, void *data) +{ + struct usb_ctrlrequest *req; + int status; + struct urb *urb; + + netdev_dbg(dev->net, "asix_write_cmd_async() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", + cmd, value, index, size); + + urb = usb_alloc_urb(0, GFP_ATOMIC); + if (!urb) { + netdev_err(dev->net, "Error allocating URB in write_cmd_async!\n"); + return; + } + + req = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); + if (!req) { + netdev_err(dev->net, "Failed 
to allocate memory for control request\n"); + usb_free_urb(urb); + return; + } + + req->bRequestType = USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE; + req->bRequest = cmd; + req->wValue = cpu_to_le16(value); + req->wIndex = cpu_to_le16(index); + req->wLength = cpu_to_le16(size); + + usb_fill_control_urb(urb, dev->udev, + usb_sndctrlpipe(dev->udev, 0), + (void *)req, data, size, + asix_async_cmd_callback, req); + + status = usb_submit_urb(urb, GFP_ATOMIC); + if (status < 0) { + netdev_err(dev->net, "Error submitting the control message: status=%d\n", + status); + kfree(req); + usb_free_urb(urb); + } +} + +int asix_rx_fixup(struct usbnet *dev, struct sk_buff *skb) +{ + int offset = 0; + + while (offset + sizeof(u32) < skb->len) { + struct sk_buff *ax_skb; + u16 size; + u32 header = get_unaligned_le32(skb->data + offset); + + offset += sizeof(u32); + + /* get the packet length */ + size = (u16) (header & 0x7ff); + if (size != ((~header >> 16) & 0x07ff)) { + netdev_err(dev->net, "asix_rx_fixup() Bad Header Length\n"); + return 0; + } + + if ((size > dev->net->mtu + ETH_HLEN + VLAN_HLEN) || + (size + offset > skb->len)) { + netdev_err(dev->net, "asix_rx_fixup() Bad RX Length %d\n", + size); + return 0; + } + ax_skb = netdev_alloc_skb_ip_align(dev->net, size); + if (!ax_skb) + return 0; + + skb_put(ax_skb, size); + memcpy(ax_skb->data, skb->data + offset, size); + usbnet_skb_return(dev, ax_skb); + + offset += (size + 1) & 0xfffe; + } + + if (skb->len != offset) { + netdev_err(dev->net, "asix_rx_fixup() Bad SKB Length %d\n", + skb->len); + return 0; + } + return 1; +} + +struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb, + gfp_t flags) +{ + int padlen; + int headroom = skb_headroom(skb); + int tailroom = skb_tailroom(skb); + u32 packet_len; + u32 padbytes = 0xffff0000; + + padlen = ((skb->len + 4) & (dev->maxpacket - 1)) ? 0 : 4; + + /* We need to push 4 bytes in front of frame (packet_len) + * and maybe add 4 bytes after the end (if padlen is 4) + * + * Avoid skb_copy_expand() expensive call, using following rules : + * - We are allowed to push 4 bytes in headroom if skb_header_cloned() + * is false (and if we have 4 bytes of headroom) + * - We are allowed to put 4 bytes at tail if skb_cloned() + * is false (and if we have 4 bytes of tailroom) + * + * TCP packets for example are cloned, but skb_header_release() + * was called in tcp stack, allowing us to use headroom for our needs. 
+ */ + if (!skb_header_cloned(skb) && + !(padlen && skb_cloned(skb)) && + headroom + tailroom >= 4 + padlen) { + /* following should not happen, but better be safe */ + if (headroom < 4 || + tailroom < padlen) { + skb->data = memmove(skb->head + 4, skb->data, skb->len); + skb_set_tail_pointer(skb, skb->len); + } + } else { + struct sk_buff *skb2; + + skb2 = skb_copy_expand(skb, 4, padlen, flags); + dev_kfree_skb_any(skb); + skb = skb2; + if (!skb) + return NULL; + } + + packet_len = ((skb->len ^ 0x0000ffff) << 16) + skb->len; + skb_push(skb, 4); + cpu_to_le32s(&packet_len); + skb_copy_to_linear_data(skb, &packet_len, sizeof(packet_len)); + + if (padlen) { + cpu_to_le32s(&padbytes); + memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); + skb_put(skb, sizeof(padbytes)); + } + return skb; +} + +int asix_set_sw_mii(struct usbnet *dev) +{ + int ret; + ret = asix_write_cmd(dev, AX_CMD_SET_SW_MII, 0x0000, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to enable software MII access\n"); + return ret; +} + +int asix_set_hw_mii(struct usbnet *dev) +{ + int ret; + ret = asix_write_cmd(dev, AX_CMD_SET_HW_MII, 0x0000, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to enable hardware MII access\n"); + return ret; +} + +int asix_read_phy_addr(struct usbnet *dev, int internal) +{ + int offset = (internal ? 1 : 0); + u8 buf[2]; + int ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID, 0, 0, 2, buf); + + netdev_dbg(dev->net, "asix_get_phy_addr()\n"); + + if (ret < 0) { + netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret); + goto out; + } + netdev_dbg(dev->net, "asix_get_phy_addr() returning 0x%04x\n", + *((__le16 *)buf)); + ret = buf[offset]; + +out: + return ret; +} + +int asix_get_phy_addr(struct usbnet *dev) +{ + /* return the address of the internal phy */ + return asix_read_phy_addr(dev, 1); +} + + +int asix_sw_reset(struct usbnet *dev, u8 flags) +{ + int ret; + + ret = asix_write_cmd(dev, AX_CMD_SW_RESET, flags, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to send software reset: %02x\n", ret); + + return ret; +} + +u16 asix_read_rx_ctl(struct usbnet *dev) +{ + __le16 v; + int ret = asix_read_cmd(dev, AX_CMD_READ_RX_CTL, 0, 0, 2, &v); + + if (ret < 0) { + netdev_err(dev->net, "Error reading RX_CTL register: %02x\n", ret); + goto out; + } + ret = le16_to_cpu(v); +out: + return ret; +} + +int asix_write_rx_ctl(struct usbnet *dev, u16 mode) +{ + int ret; + + netdev_dbg(dev->net, "asix_write_rx_ctl() - mode = 0x%04x\n", mode); + ret = asix_write_cmd(dev, AX_CMD_WRITE_RX_CTL, mode, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to write RX_CTL mode to 0x%04x: %02x\n", + mode, ret); + + return ret; +} + +u16 asix_read_medium_status(struct usbnet *dev) +{ + __le16 v; + int ret = asix_read_cmd(dev, AX_CMD_READ_MEDIUM_STATUS, 0, 0, 2, &v); + + if (ret < 0) { + netdev_err(dev->net, "Error reading Medium Status register: %02x\n", + ret); + return ret; /* TODO: callers not checking for error ret */ + } + + return le16_to_cpu(v); + +} + +int asix_write_medium_mode(struct usbnet *dev, u16 mode) +{ + int ret; + + netdev_dbg(dev->net, "asix_write_medium_mode() - mode = 0x%04x\n", mode); + ret = asix_write_cmd(dev, AX_CMD_WRITE_MEDIUM_MODE, mode, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to write Medium Mode mode to 0x%04x: %02x\n", + mode, ret); + + return ret; +} + +int asix_write_gpio(struct usbnet *dev, u16 value, int sleep) +{ + int ret; + + netdev_dbg(dev->net, "asix_write_gpio() - value = 0x%04x\n", value); + ret = 
asix_write_cmd(dev, AX_CMD_WRITE_GPIOS, value, 0, 0, NULL); + if (ret < 0) + netdev_err(dev->net, "Failed to write GPIO value 0x%04x: %02x\n", + value, ret); + + if (sleep) + msleep(sleep); + + return ret; +} + +/* + * AX88772 & AX88178 have a 16-bit RX_CTL value + */ +void asix_set_multicast(struct net_device *net) +{ + struct usbnet *dev = netdev_priv(net); + struct asix_data *data = (struct asix_data *)&dev->data; + u16 rx_ctl = AX_DEFAULT_RX_CTL; + + if (net->flags & IFF_PROMISC) { + rx_ctl |= AX_RX_CTL_PRO; + } else if (net->flags & IFF_ALLMULTI || + netdev_mc_count(net) > AX_MAX_MCAST) { + rx_ctl |= AX_RX_CTL_AMALL; + } else if (netdev_mc_empty(net)) { + /* just broadcast and directed */ + } else { + /* We use the 20 byte dev->data + * for our 8 byte filter buffer + * to avoid allocating memory that + * is tricky to free later */ + struct netdev_hw_addr *ha; + u32 crc_bits; + + memset(data->multi_filter, 0, AX_MCAST_FILTER_SIZE); + + /* Build the multicast hash filter. */ + netdev_for_each_mc_addr(ha, net) { + crc_bits = ether_crc(ETH_ALEN, ha->addr) >> 26; + data->multi_filter[crc_bits >> 3] |= + 1 << (crc_bits & 7); + } + + asix_write_cmd_async(dev, AX_CMD_WRITE_MULTI_FILTER, 0, 0, + AX_MCAST_FILTER_SIZE, data->multi_filter); + + rx_ctl |= AX_RX_CTL_AM; + } + + asix_write_cmd_async(dev, AX_CMD_WRITE_RX_CTL, rx_ctl, 0, 0, NULL); +} + +int asix_mdio_read(struct net_device *netdev, int phy_id, int loc) +{ + struct usbnet *dev = netdev_priv(netdev); + __le16 res; + + mutex_lock(&dev->phy_mutex); + asix_set_sw_mii(dev); + asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id, + (__u16)loc, 2, &res); + asix_set_hw_mii(dev); + mutex_unlock(&dev->phy_mutex); + + netdev_dbg(dev->net, "asix_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x\n", + phy_id, loc, le16_to_cpu(res)); + + return le16_to_cpu(res); +} + +void asix_mdio_write(struct net_device *netdev, int phy_id, int loc, int val) +{ + struct usbnet *dev = netdev_priv(netdev); + __le16 res = cpu_to_le16(val); + + netdev_dbg(dev->net, "asix_mdio_write() phy_id=0x%02x, loc=0x%02x, val=0x%04x\n", + phy_id, loc, val); + mutex_lock(&dev->phy_mutex); + asix_set_sw_mii(dev); + asix_write_cmd(dev, AX_CMD_WRITE_MII_REG, phy_id, (__u16)loc, 2, &res); + asix_set_hw_mii(dev); + mutex_unlock(&dev->phy_mutex); +} + +void asix_get_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) +{ + struct usbnet *dev = netdev_priv(net); + u8 opt; + + if (asix_read_cmd(dev, AX_CMD_READ_MONITOR_MODE, 0, 0, 1, &opt) < 0) { + wolinfo->supported = 0; + wolinfo->wolopts = 0; + return; + } + wolinfo->supported = WAKE_PHY | WAKE_MAGIC; + wolinfo->wolopts = 0; + if (opt & AX_MONITOR_LINK) + wolinfo->wolopts |= WAKE_PHY; + if (opt & AX_MONITOR_MAGIC) + wolinfo->wolopts |= WAKE_MAGIC; +} + +int asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) +{ + struct usbnet *dev = netdev_priv(net); + u8 opt = 0; + + if (wolinfo->wolopts & WAKE_PHY) + opt |= AX_MONITOR_LINK; + if (wolinfo->wolopts & WAKE_MAGIC) + opt |= AX_MONITOR_MAGIC; + + if (asix_write_cmd(dev, AX_CMD_WRITE_MONITOR_MODE, + opt, 0, 0, NULL) < 0) + return -EINVAL; + + return 0; +} + +int asix_get_eeprom_len(struct net_device *net) +{ + return AX_EEPROM_LEN; +} + +int asix_get_eeprom(struct net_device *net, struct ethtool_eeprom *eeprom, + u8 *data) +{ + struct usbnet *dev = netdev_priv(net); + u16 *eeprom_buff; + int first_word, last_word; + int i; + + if (eeprom->len == 0) + return -EINVAL; + + eeprom->magic = AX_EEPROM_MAGIC; + + first_word = eeprom->offset >> 1; + last_word = 
(eeprom->offset + eeprom->len - 1) >> 1; + + eeprom_buff = kmalloc(sizeof(u16) * (last_word - first_word + 1), + GFP_KERNEL); + if (!eeprom_buff) + return -ENOMEM; + + /* ax8817x returns 2 bytes from eeprom on read */ + for (i = first_word; i <= last_word; i++) { + if (asix_read_cmd(dev, AX_CMD_READ_EEPROM, i, 0, 2, + &(eeprom_buff[i - first_word])) < 0) { + kfree(eeprom_buff); + return -EIO; + } + } + + memcpy(data, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len); + kfree(eeprom_buff); + return 0; +} + +int asix_set_eeprom(struct net_device *net, struct ethtool_eeprom *eeprom, + u8 *data) +{ + struct usbnet *dev = netdev_priv(net); + u16 *eeprom_buff; + int first_word, last_word; + int i; + int ret; + + netdev_dbg(net, "write EEPROM len %d, offset %d, magic 0x%x\n", + eeprom->len, eeprom->offset, eeprom->magic); + + if (eeprom->len == 0) + return -EINVAL; + + if (eeprom->magic != AX_EEPROM_MAGIC) + return -EINVAL; + + first_word = eeprom->offset >> 1; + last_word = (eeprom->offset + eeprom->len - 1) >> 1; + + eeprom_buff = kmalloc(sizeof(u16) * (last_word - first_word + 1), + GFP_KERNEL); + if (!eeprom_buff) + return -ENOMEM; + + /* align data to 16 bit boundaries, read the missing data from + the EEPROM */ + if (eeprom->offset & 1) { + ret = asix_read_cmd(dev, AX_CMD_READ_EEPROM, first_word, 0, 2, + &(eeprom_buff[0])); + if (ret < 0) { + netdev_err(net, "Failed to read EEPROM at offset 0x%02x.\n", first_word); + goto free; + } + } + + if ((eeprom->offset + eeprom->len) & 1) { + ret = asix_read_cmd(dev, AX_CMD_READ_EEPROM, last_word, 0, 2, + &(eeprom_buff[last_word - first_word])); + if (ret < 0) { + netdev_err(net, "Failed to read EEPROM at offset 0x%02x.\n", last_word); + goto free; + } + } + + memcpy((u8 *)eeprom_buff + (eeprom->offset & 1), data, eeprom->len); + + /* write data to EEPROM */ + ret = asix_write_cmd(dev, AX_CMD_WRITE_ENABLE, 0x0000, 0, 0, NULL); + if (ret < 0) { + netdev_err(net, "Failed to enable EEPROM write\n"); + goto free; + } + msleep(20); + + for (i = first_word; i <= last_word; i++) { + netdev_dbg(net, "write to EEPROM at offset 0x%02x, data 0x%04x\n", + i, eeprom_buff[i - first_word]); + ret = asix_write_cmd(dev, AX_CMD_WRITE_EEPROM, i, + eeprom_buff[i - first_word], 0, NULL); + if (ret < 0) { + netdev_err(net, "Failed to write EEPROM at offset 0x%02x.\n", + i); + goto free; + } + msleep(20); + } + + ret = asix_write_cmd(dev, AX_CMD_WRITE_DISABLE, 0x0000, 0, 0, NULL); + if (ret < 0) { + netdev_err(net, "Failed to disable EEPROM write\n"); + goto free; + } + + ret = 0; +free: + kfree(eeprom_buff); + return ret; +} + +void asix_get_drvinfo(struct net_device *net, struct ethtool_drvinfo *info) +{ + /* Inherit standard device info */ + usbnet_get_drvinfo(net, info); + strncpy (info->driver, DRIVER_NAME, sizeof info->driver); + strncpy (info->version, DRIVER_VERSION, sizeof info->version); + info->eedump_len = AX_EEPROM_LEN; +} + +int asix_set_mac_address(struct net_device *net, void *p) +{ + struct usbnet *dev = netdev_priv(net); + struct asix_data *data = (struct asix_data *)&dev->data; + struct sockaddr *addr = p; + + if (netif_running(net)) + return -EBUSY; + if (!is_valid_ether_addr(addr->sa_data)) + return -EADDRNOTAVAIL; + + memcpy(net->dev_addr, addr->sa_data, ETH_ALEN); + + /* We use the 20 byte dev->data + * for our 6 byte mac buffer + * to avoid allocating memory that + * is tricky to free later */ + memcpy(data->mac_addr, addr->sa_data, ETH_ALEN); + asix_write_cmd_async(dev, AX_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN, + data->mac_addr); + + return 0; +} 
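The new asix_common.c above gathers the vendor-command, MDIO and ethtool helpers that used to be private to asix.c, and asix.h now declares them so that asix_devices.c and the new ax88172a.c (added to the asix module by the Makefile hunk earlier) can share a single implementation. As a rough illustration of how a caller uses these shared helpers, here is a minimal, hypothetical reset sequence written only against the declarations in asix.h; the function name, the AX_SWRESET_IPRL flag choice and the ordering are illustrative assumptions, not code from this patch:

/* Hypothetical sketch, not part of the patch: a caller of the shared
 * helpers declared in asix.h. Function name, flag choice and ordering
 * are illustrative only.
 */
#include "asix.h"

static int example_asix_reset(struct usbnet *dev)
{
	int phy_addr;
	int ret;

	/* read the internal PHY address via the vendor-command helper */
	phy_addr = asix_get_phy_addr(dev);
	if (phy_addr < 0)
		return phy_addr;
	netdev_dbg(dev->net, "internal PHY at 0x%02x\n", phy_addr);

	/* soft-reset the device, then restore default RX control and medium mode */
	ret = asix_sw_reset(dev, AX_SWRESET_IPRL);
	if (ret < 0)
		return ret;

	ret = asix_write_rx_ctl(dev, AX_DEFAULT_RX_CTL);
	if (ret < 0)
		return ret;

	return asix_write_medium_mode(dev, AX88772_MEDIUM_DEFAULT);
}

The real per-chip bind and reset paths live in asix_devices.c and ax88172a.c; the point of the split is that they can call these helpers instead of carrying duplicate copies, which is exactly what the following asix.c -> asix_devices.c diff removes.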
diff --git a/drivers/net/usb/asix.c b/drivers/net/usb/asix_devices.c index 3ae80eccd0ef..4fd48df6b989 100644 --- a/drivers/net/usb/asix.c +++ b/drivers/net/usb/asix_devices.c @@ -20,137 +20,7 @@ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ -// #define DEBUG // error path messages, extra info -// #define VERBOSE // more; success messages - -#include <linux/module.h> -#include <linux/kmod.h> -#include <linux/init.h> -#include <linux/netdevice.h> -#include <linux/etherdevice.h> -#include <linux/ethtool.h> -#include <linux/workqueue.h> -#include <linux/mii.h> -#include <linux/usb.h> -#include <linux/crc32.h> -#include <linux/usb/usbnet.h> -#include <linux/slab.h> -#include <linux/if_vlan.h> - -#define DRIVER_VERSION "22-Dec-2011" -#define DRIVER_NAME "asix" - -/* ASIX AX8817X based USB 2.0 Ethernet Devices */ - -#define AX_CMD_SET_SW_MII 0x06 -#define AX_CMD_READ_MII_REG 0x07 -#define AX_CMD_WRITE_MII_REG 0x08 -#define AX_CMD_SET_HW_MII 0x0a -#define AX_CMD_READ_EEPROM 0x0b -#define AX_CMD_WRITE_EEPROM 0x0c -#define AX_CMD_WRITE_ENABLE 0x0d -#define AX_CMD_WRITE_DISABLE 0x0e -#define AX_CMD_READ_RX_CTL 0x0f -#define AX_CMD_WRITE_RX_CTL 0x10 -#define AX_CMD_READ_IPG012 0x11 -#define AX_CMD_WRITE_IPG0 0x12 -#define AX_CMD_WRITE_IPG1 0x13 -#define AX_CMD_READ_NODE_ID 0x13 -#define AX_CMD_WRITE_NODE_ID 0x14 -#define AX_CMD_WRITE_IPG2 0x14 -#define AX_CMD_WRITE_MULTI_FILTER 0x16 -#define AX88172_CMD_READ_NODE_ID 0x17 -#define AX_CMD_READ_PHY_ID 0x19 -#define AX_CMD_READ_MEDIUM_STATUS 0x1a -#define AX_CMD_WRITE_MEDIUM_MODE 0x1b -#define AX_CMD_READ_MONITOR_MODE 0x1c -#define AX_CMD_WRITE_MONITOR_MODE 0x1d -#define AX_CMD_READ_GPIOS 0x1e -#define AX_CMD_WRITE_GPIOS 0x1f -#define AX_CMD_SW_RESET 0x20 -#define AX_CMD_SW_PHY_STATUS 0x21 -#define AX_CMD_SW_PHY_SELECT 0x22 - -#define AX_MONITOR_MODE 0x01 -#define AX_MONITOR_LINK 0x02 -#define AX_MONITOR_MAGIC 0x04 -#define AX_MONITOR_HSFS 0x10 - -/* AX88172 Medium Status Register values */ -#define AX88172_MEDIUM_FD 0x02 -#define AX88172_MEDIUM_TX 0x04 -#define AX88172_MEDIUM_FC 0x10 -#define AX88172_MEDIUM_DEFAULT \ - ( AX88172_MEDIUM_FD | AX88172_MEDIUM_TX | AX88172_MEDIUM_FC ) - -#define AX_MCAST_FILTER_SIZE 8 -#define AX_MAX_MCAST 64 - -#define AX_SWRESET_CLEAR 0x00 -#define AX_SWRESET_RR 0x01 -#define AX_SWRESET_RT 0x02 -#define AX_SWRESET_PRTE 0x04 -#define AX_SWRESET_PRL 0x08 -#define AX_SWRESET_BZ 0x10 -#define AX_SWRESET_IPRL 0x20 -#define AX_SWRESET_IPPD 0x40 - -#define AX88772_IPG0_DEFAULT 0x15 -#define AX88772_IPG1_DEFAULT 0x0c -#define AX88772_IPG2_DEFAULT 0x12 - -/* AX88772 & AX88178 Medium Mode Register */ -#define AX_MEDIUM_PF 0x0080 -#define AX_MEDIUM_JFE 0x0040 -#define AX_MEDIUM_TFC 0x0020 -#define AX_MEDIUM_RFC 0x0010 -#define AX_MEDIUM_ENCK 0x0008 -#define AX_MEDIUM_AC 0x0004 -#define AX_MEDIUM_FD 0x0002 -#define AX_MEDIUM_GM 0x0001 -#define AX_MEDIUM_SM 0x1000 -#define AX_MEDIUM_SBP 0x0800 -#define AX_MEDIUM_PS 0x0200 -#define AX_MEDIUM_RE 0x0100 - -#define AX88178_MEDIUM_DEFAULT \ - (AX_MEDIUM_PS | AX_MEDIUM_FD | AX_MEDIUM_AC | \ - AX_MEDIUM_RFC | AX_MEDIUM_TFC | AX_MEDIUM_JFE | \ - AX_MEDIUM_RE) - -#define AX88772_MEDIUM_DEFAULT \ - (AX_MEDIUM_FD | AX_MEDIUM_RFC | \ - AX_MEDIUM_TFC | AX_MEDIUM_PS | \ - AX_MEDIUM_AC | AX_MEDIUM_RE) - -/* AX88772 & AX88178 RX_CTL values */ -#define AX_RX_CTL_SO 0x0080 -#define AX_RX_CTL_AP 0x0020 -#define AX_RX_CTL_AM 0x0010 -#define AX_RX_CTL_AB 0x0008 -#define AX_RX_CTL_SEP 0x0004 -#define AX_RX_CTL_AMALL 0x0002 -#define AX_RX_CTL_PRO 0x0001 -#define 
AX_RX_CTL_MFB_2048 0x0000 -#define AX_RX_CTL_MFB_4096 0x0100 -#define AX_RX_CTL_MFB_8192 0x0200 -#define AX_RX_CTL_MFB_16384 0x0300 - -#define AX_DEFAULT_RX_CTL (AX_RX_CTL_SO | AX_RX_CTL_AB) - -/* GPIO 0 .. 2 toggles */ -#define AX_GPIO_GPO0EN 0x01 /* GPIO0 Output enable */ -#define AX_GPIO_GPO_0 0x02 /* GPIO0 Output value */ -#define AX_GPIO_GPO1EN 0x04 /* GPIO1 Output enable */ -#define AX_GPIO_GPO_1 0x08 /* GPIO1 Output value */ -#define AX_GPIO_GPO2EN 0x10 /* GPIO2 Output enable */ -#define AX_GPIO_GPO_2 0x20 /* GPIO2 Output value */ -#define AX_GPIO_RESERVED 0x40 /* Reserved */ -#define AX_GPIO_RSE 0x80 /* Reload serial EEPROM */ - -#define AX_EEPROM_MAGIC 0xdeadbeef -#define AX88172_EEPROM_LEN 0x40 -#define AX88772_EEPROM_LEN 0xff +#include "asix.h" #define PHY_MODE_MARVELL 0x0000 #define MII_MARVELL_LED_CTRL 0x0018 @@ -166,15 +36,6 @@ #define PHY_MODE_RTL8211CL 0x000C -/* This structure cannot exceed sizeof(unsigned long [5]) AKA 20 bytes */ -struct asix_data { - u8 multi_filter[AX_MCAST_FILTER_SIZE]; - u8 mac_addr[ETH_ALEN]; - u8 phymode; - u8 ledmode; - u8 eeprom_len; -}; - struct ax88172_int_data { __le16 res1; u8 link; @@ -183,209 +44,6 @@ struct ax88172_int_data { __le16 res3; } __packed; -static int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, - u16 size, void *data) -{ - void *buf; - int err = -ENOMEM; - - netdev_dbg(dev->net, "asix_read_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", - cmd, value, index, size); - - buf = kmalloc(size, GFP_KERNEL); - if (!buf) - goto out; - - err = usb_control_msg( - dev->udev, - usb_rcvctrlpipe(dev->udev, 0), - cmd, - USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, - value, - index, - buf, - size, - USB_CTRL_GET_TIMEOUT); - if (err == size) - memcpy(data, buf, size); - else if (err >= 0) - err = -EINVAL; - kfree(buf); - -out: - return err; -} - -static int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, - u16 size, void *data) -{ - void *buf = NULL; - int err = -ENOMEM; - - netdev_dbg(dev->net, "asix_write_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", - cmd, value, index, size); - - if (data) { - buf = kmemdup(data, size, GFP_KERNEL); - if (!buf) - goto out; - } - - err = usb_control_msg( - dev->udev, - usb_sndctrlpipe(dev->udev, 0), - cmd, - USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, - value, - index, - buf, - size, - USB_CTRL_SET_TIMEOUT); - kfree(buf); - -out: - return err; -} - -static void asix_async_cmd_callback(struct urb *urb) -{ - struct usb_ctrlrequest *req = (struct usb_ctrlrequest *)urb->context; - int status = urb->status; - - if (status < 0) - printk(KERN_DEBUG "asix_async_cmd_callback() failed with %d", - status); - - kfree(req); - usb_free_urb(urb); -} - -static void -asix_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, u16 index, - u16 size, void *data) -{ - struct usb_ctrlrequest *req; - int status; - struct urb *urb; - - netdev_dbg(dev->net, "asix_write_cmd_async() cmd=0x%02x value=0x%04x index=0x%04x size=%d\n", - cmd, value, index, size); - - urb = usb_alloc_urb(0, GFP_ATOMIC); - if (!urb) { - netdev_err(dev->net, "Error allocating URB in write_cmd_async!\n"); - return; - } - - req = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); - if (!req) { - netdev_err(dev->net, "Failed to allocate memory for control request\n"); - usb_free_urb(urb); - return; - } - - req->bRequestType = USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE; - req->bRequest = cmd; - req->wValue = cpu_to_le16(value); - req->wIndex = cpu_to_le16(index); - req->wLength = 
cpu_to_le16(size); - - usb_fill_control_urb(urb, dev->udev, - usb_sndctrlpipe(dev->udev, 0), - (void *)req, data, size, - asix_async_cmd_callback, req); - - status = usb_submit_urb(urb, GFP_ATOMIC); - if (status < 0) { - netdev_err(dev->net, "Error submitting the control message: status=%d\n", - status); - kfree(req); - usb_free_urb(urb); - } -} - -static int asix_rx_fixup(struct usbnet *dev, struct sk_buff *skb) -{ - int offset = 0; - - while (offset + sizeof(u32) < skb->len) { - struct sk_buff *ax_skb; - u16 size; - u32 header = get_unaligned_le32(skb->data + offset); - - offset += sizeof(u32); - - /* get the packet length */ - size = (u16) (header & 0x7ff); - if (size != ((~header >> 16) & 0x07ff)) { - netdev_err(dev->net, "asix_rx_fixup() Bad Header Length\n"); - return 0; - } - - if ((size > dev->net->mtu + ETH_HLEN + VLAN_HLEN) || - (size + offset > skb->len)) { - netdev_err(dev->net, "asix_rx_fixup() Bad RX Length %d\n", - size); - return 0; - } - ax_skb = netdev_alloc_skb_ip_align(dev->net, size); - if (!ax_skb) - return 0; - - skb_put(ax_skb, size); - memcpy(ax_skb->data, skb->data + offset, size); - usbnet_skb_return(dev, ax_skb); - - offset += (size + 1) & 0xfffe; - } - - if (skb->len != offset) { - netdev_err(dev->net, "asix_rx_fixup() Bad SKB Length %d\n", - skb->len); - return 0; - } - return 1; -} - -static struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb, - gfp_t flags) -{ - int padlen; - int headroom = skb_headroom(skb); - int tailroom = skb_tailroom(skb); - u32 packet_len; - u32 padbytes = 0xffff0000; - - padlen = ((skb->len + 4) & (dev->maxpacket - 1)) ? 0 : 4; - - if ((!skb_cloned(skb)) && - ((headroom + tailroom) >= (4 + padlen))) { - if ((headroom < 4) || (tailroom < padlen)) { - skb->data = memmove(skb->head + 4, skb->data, skb->len); - skb_set_tail_pointer(skb, skb->len); - } - } else { - struct sk_buff *skb2; - skb2 = skb_copy_expand(skb, 4, padlen, flags); - dev_kfree_skb_any(skb); - skb = skb2; - if (!skb) - return NULL; - } - - skb_push(skb, 4); - packet_len = (((skb->len - 4) ^ 0x0000ffff) << 16) + (skb->len - 4); - cpu_to_le32s(&packet_len); - skb_copy_to_linear_data(skb, &packet_len, sizeof(packet_len)); - - if (padlen) { - cpu_to_le32s(&padbytes); - memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); - skb_put(skb, sizeof(padbytes)); - } - return skb; -} - static void asix_status(struct usbnet *dev, struct urb *urb) { struct ax88172_int_data *event; @@ -406,200 +64,6 @@ static void asix_status(struct usbnet *dev, struct urb *urb) } } -static inline int asix_set_sw_mii(struct usbnet *dev) -{ - int ret; - ret = asix_write_cmd(dev, AX_CMD_SET_SW_MII, 0x0000, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to enable software MII access\n"); - return ret; -} - -static inline int asix_set_hw_mii(struct usbnet *dev) -{ - int ret; - ret = asix_write_cmd(dev, AX_CMD_SET_HW_MII, 0x0000, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to enable hardware MII access\n"); - return ret; -} - -static inline int asix_get_phy_addr(struct usbnet *dev) -{ - u8 buf[2]; - int ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID, 0, 0, 2, buf); - - netdev_dbg(dev->net, "asix_get_phy_addr()\n"); - - if (ret < 0) { - netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret); - goto out; - } - netdev_dbg(dev->net, "asix_get_phy_addr() returning 0x%04x\n", - *((__le16 *)buf)); - ret = buf[1]; - -out: - return ret; -} - -static int asix_sw_reset(struct usbnet *dev, u8 flags) -{ - int ret; - - ret = asix_write_cmd(dev, AX_CMD_SW_RESET, 
flags, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to send software reset: %02x\n", ret); - - return ret; -} - -static u16 asix_read_rx_ctl(struct usbnet *dev) -{ - __le16 v; - int ret = asix_read_cmd(dev, AX_CMD_READ_RX_CTL, 0, 0, 2, &v); - - if (ret < 0) { - netdev_err(dev->net, "Error reading RX_CTL register: %02x\n", ret); - goto out; - } - ret = le16_to_cpu(v); -out: - return ret; -} - -static int asix_write_rx_ctl(struct usbnet *dev, u16 mode) -{ - int ret; - - netdev_dbg(dev->net, "asix_write_rx_ctl() - mode = 0x%04x\n", mode); - ret = asix_write_cmd(dev, AX_CMD_WRITE_RX_CTL, mode, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to write RX_CTL mode to 0x%04x: %02x\n", - mode, ret); - - return ret; -} - -static u16 asix_read_medium_status(struct usbnet *dev) -{ - __le16 v; - int ret = asix_read_cmd(dev, AX_CMD_READ_MEDIUM_STATUS, 0, 0, 2, &v); - - if (ret < 0) { - netdev_err(dev->net, "Error reading Medium Status register: %02x\n", - ret); - return ret; /* TODO: callers not checking for error ret */ - } - - return le16_to_cpu(v); - -} - -static int asix_write_medium_mode(struct usbnet *dev, u16 mode) -{ - int ret; - - netdev_dbg(dev->net, "asix_write_medium_mode() - mode = 0x%04x\n", mode); - ret = asix_write_cmd(dev, AX_CMD_WRITE_MEDIUM_MODE, mode, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to write Medium Mode mode to 0x%04x: %02x\n", - mode, ret); - - return ret; -} - -static int asix_write_gpio(struct usbnet *dev, u16 value, int sleep) -{ - int ret; - - netdev_dbg(dev->net, "asix_write_gpio() - value = 0x%04x\n", value); - ret = asix_write_cmd(dev, AX_CMD_WRITE_GPIOS, value, 0, 0, NULL); - if (ret < 0) - netdev_err(dev->net, "Failed to write GPIO value 0x%04x: %02x\n", - value, ret); - - if (sleep) - msleep(sleep); - - return ret; -} - -/* - * AX88772 & AX88178 have a 16-bit RX_CTL value - */ -static void asix_set_multicast(struct net_device *net) -{ - struct usbnet *dev = netdev_priv(net); - struct asix_data *data = (struct asix_data *)&dev->data; - u16 rx_ctl = AX_DEFAULT_RX_CTL; - - if (net->flags & IFF_PROMISC) { - rx_ctl |= AX_RX_CTL_PRO; - } else if (net->flags & IFF_ALLMULTI || - netdev_mc_count(net) > AX_MAX_MCAST) { - rx_ctl |= AX_RX_CTL_AMALL; - } else if (netdev_mc_empty(net)) { - /* just broadcast and directed */ - } else { - /* We use the 20 byte dev->data - * for our 8 byte filter buffer - * to avoid allocating memory that - * is tricky to free later */ - struct netdev_hw_addr *ha; - u32 crc_bits; - - memset(data->multi_filter, 0, AX_MCAST_FILTER_SIZE); - - /* Build the multicast hash filter. 
*/ - netdev_for_each_mc_addr(ha, net) { - crc_bits = ether_crc(ETH_ALEN, ha->addr) >> 26; - data->multi_filter[crc_bits >> 3] |= - 1 << (crc_bits & 7); - } - - asix_write_cmd_async(dev, AX_CMD_WRITE_MULTI_FILTER, 0, 0, - AX_MCAST_FILTER_SIZE, data->multi_filter); - - rx_ctl |= AX_RX_CTL_AM; - } - - asix_write_cmd_async(dev, AX_CMD_WRITE_RX_CTL, rx_ctl, 0, 0, NULL); -} - -static int asix_mdio_read(struct net_device *netdev, int phy_id, int loc) -{ - struct usbnet *dev = netdev_priv(netdev); - __le16 res; - - mutex_lock(&dev->phy_mutex); - asix_set_sw_mii(dev); - asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id, - (__u16)loc, 2, &res); - asix_set_hw_mii(dev); - mutex_unlock(&dev->phy_mutex); - - netdev_dbg(dev->net, "asix_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x\n", - phy_id, loc, le16_to_cpu(res)); - - return le16_to_cpu(res); -} - -static void -asix_mdio_write(struct net_device *netdev, int phy_id, int loc, int val) -{ - struct usbnet *dev = netdev_priv(netdev); - __le16 res = cpu_to_le16(val); - - netdev_dbg(dev->net, "asix_mdio_write() phy_id=0x%02x, loc=0x%02x, val=0x%04x\n", - phy_id, loc, val); - mutex_lock(&dev->phy_mutex); - asix_set_sw_mii(dev); - asix_write_cmd(dev, AX_CMD_WRITE_MII_REG, phy_id, (__u16)loc, 2, &res); - asix_set_hw_mii(dev); - mutex_unlock(&dev->phy_mutex); -} - /* Get the PHY Identifier from the PHYSID1 & PHYSID2 MII registers */ static u32 asix_get_phyid(struct usbnet *dev) { @@ -629,88 +93,6 @@ static u32 asix_get_phyid(struct usbnet *dev) return phy_id; } -static void -asix_get_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) -{ - struct usbnet *dev = netdev_priv(net); - u8 opt; - - if (asix_read_cmd(dev, AX_CMD_READ_MONITOR_MODE, 0, 0, 1, &opt) < 0) { - wolinfo->supported = 0; - wolinfo->wolopts = 0; - return; - } - wolinfo->supported = WAKE_PHY | WAKE_MAGIC; - wolinfo->wolopts = 0; - if (opt & AX_MONITOR_LINK) - wolinfo->wolopts |= WAKE_PHY; - if (opt & AX_MONITOR_MAGIC) - wolinfo->wolopts |= WAKE_MAGIC; -} - -static int -asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo) -{ - struct usbnet *dev = netdev_priv(net); - u8 opt = 0; - - if (wolinfo->wolopts & WAKE_PHY) - opt |= AX_MONITOR_LINK; - if (wolinfo->wolopts & WAKE_MAGIC) - opt |= AX_MONITOR_MAGIC; - - if (asix_write_cmd(dev, AX_CMD_WRITE_MONITOR_MODE, - opt, 0, 0, NULL) < 0) - return -EINVAL; - - return 0; -} - -static int asix_get_eeprom_len(struct net_device *net) -{ - struct usbnet *dev = netdev_priv(net); - struct asix_data *data = (struct asix_data *)&dev->data; - - return data->eeprom_len; -} - -static int asix_get_eeprom(struct net_device *net, - struct ethtool_eeprom *eeprom, u8 *data) -{ - struct usbnet *dev = netdev_priv(net); - __le16 *ebuf = (__le16 *)data; - int i; - - /* Crude hack to ensure that we don't overwrite memory - * if an odd length is supplied - */ - if (eeprom->len % 2) - return -EINVAL; - - eeprom->magic = AX_EEPROM_MAGIC; - - /* ax8817x returns 2 bytes from eeprom on read */ - for (i=0; i < eeprom->len / 2; i++) { - if (asix_read_cmd(dev, AX_CMD_READ_EEPROM, - eeprom->offset + i, 0, 2, &ebuf[i]) < 0) - return -EINVAL; - } - return 0; -} - -static void asix_get_drvinfo (struct net_device *net, - struct ethtool_drvinfo *info) -{ - struct usbnet *dev = netdev_priv(net); - struct asix_data *data = (struct asix_data *)&dev->data; - - /* Inherit standard device info */ - usbnet_get_drvinfo(net, info); - strncpy (info->driver, DRIVER_NAME, sizeof info->driver); - strncpy (info->version, DRIVER_VERSION, sizeof info->version); - 
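/*
 * Minimal sketch of the 64-bin multicast hash built by asix_set_multicast()
 * above: the top six bits of the Ethernet CRC-32 of each multicast address
 * (ether_crc() in the driver; an example value stands in for it here) pick
 * one bit in the 8-byte filter that is written to the chip with
 * AX_CMD_WRITE_MULTI_FILTER.
 */
#include <stdint.h>
#include <stdio.h>

static void asix_hash_set(uint8_t filter[8], uint32_t crc)
{
        unsigned int bucket = crc >> 26;        /* 0..63 */

        filter[bucket >> 3] |= 1u << (bucket & 7);
}

int main(void)
{
        uint8_t filter[8] = { 0 };
        uint32_t crc = 0x1234abcd;      /* stands in for ether_crc(ETH_ALEN, addr) */
        unsigned int bucket = crc >> 26;

        asix_hash_set(filter, crc);
        printf("bucket %u -> filter[%u] = 0x%02x\n",
               bucket, bucket >> 3, filter[bucket >> 3]);
        return 0;
}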
info->eedump_len = data->eeprom_len; -} - static u32 asix_get_link(struct net_device *net) { struct usbnet *dev = netdev_priv(net); @@ -725,30 +107,6 @@ static int asix_ioctl (struct net_device *net, struct ifreq *rq, int cmd) return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL); } -static int asix_set_mac_address(struct net_device *net, void *p) -{ - struct usbnet *dev = netdev_priv(net); - struct asix_data *data = (struct asix_data *)&dev->data; - struct sockaddr *addr = p; - - if (netif_running(net)) - return -EBUSY; - if (!is_valid_ether_addr(addr->sa_data)) - return -EADDRNOTAVAIL; - - memcpy(net->dev_addr, addr->sa_data, ETH_ALEN); - - /* We use the 20 byte dev->data - * for our 6 byte mac buffer - * to avoid allocating memory that - * is tricky to free later */ - memcpy(data->mac_addr, addr->sa_data, ETH_ALEN); - asix_write_cmd_async(dev, AX_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN, - data->mac_addr); - - return 0; -} - /* We need to override some ethtool_ops so we require our own structure so we don't interfere with other usbnet devices that may be connected at the same time. */ @@ -761,6 +119,7 @@ static const struct ethtool_ops ax88172_ethtool_ops = { .set_wol = asix_set_wol, .get_eeprom_len = asix_get_eeprom_len, .get_eeprom = asix_get_eeprom, + .set_eeprom = asix_set_eeprom, .get_settings = usbnet_get_settings, .set_settings = usbnet_set_settings, .nway_reset = usbnet_nway_reset, @@ -843,9 +202,6 @@ static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf) u8 buf[ETH_ALEN]; int i; unsigned long gpio_bits = dev->driver_info->data; - struct asix_data *data = (struct asix_data *)&dev->data; - - data->eeprom_len = AX88172_EEPROM_LEN; usbnet_get_endpoints(dev,intf); @@ -880,6 +236,8 @@ static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf) dev->net->netdev_ops = &ax88172_netdev_ops; dev->net->ethtool_ops = &ax88172_ethtool_ops; + dev->net->needed_headroom = 4; /* cf asix_tx_fixup() */ + dev->net->needed_tailroom = 4; /* cf asix_tx_fixup() */ asix_mdio_write(dev->net, dev->mii.phy_id, MII_BMCR, BMCR_RESET); asix_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, @@ -901,6 +259,7 @@ static const struct ethtool_ops ax88772_ethtool_ops = { .set_wol = asix_set_wol, .get_eeprom_len = asix_get_eeprom_len, .get_eeprom = asix_get_eeprom, + .set_eeprom = asix_set_eeprom, .get_settings = usbnet_get_settings, .set_settings = usbnet_set_settings, .nway_reset = usbnet_nway_reset, @@ -1049,12 +408,9 @@ static const struct net_device_ops ax88772_netdev_ops = { static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf) { int ret, embd_phy; - struct asix_data *data = (struct asix_data *)&dev->data; u8 buf[ETH_ALEN]; u32 phyid; - data->eeprom_len = AX88772_EEPROM_LEN; - usbnet_get_endpoints(dev,intf); /* Get the MAC address */ @@ -1075,6 +431,8 @@ static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf) dev->net->netdev_ops = &ax88772_netdev_ops; dev->net->ethtool_ops = &ax88772_ethtool_ops; + dev->net->needed_headroom = 4; /* cf asix_tx_fixup() */ + dev->net->needed_tailroom = 4; /* cf asix_tx_fixup() */ embd_phy = ((dev->mii.phy_id & 0x1f) == 0x10 ? 
1 : 0); @@ -1122,6 +480,7 @@ static const struct ethtool_ops ax88178_ethtool_ops = { .set_wol = asix_set_wol, .get_eeprom_len = asix_get_eeprom_len, .get_eeprom = asix_get_eeprom, + .set_eeprom = asix_set_eeprom, .get_settings = usbnet_get_settings, .set_settings = usbnet_set_settings, .nway_reset = usbnet_nway_reset, @@ -1405,9 +764,6 @@ static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf) { int ret; u8 buf[ETH_ALEN]; - struct asix_data *data = (struct asix_data *)&dev->data; - - data->eeprom_len = AX88772_EEPROM_LEN; usbnet_get_endpoints(dev,intf); @@ -1510,6 +866,8 @@ static const struct driver_info ax88178_info = { .tx_fixup = asix_tx_fixup, }; +extern const struct driver_info ax88172a_info; + static const struct usb_device_id products [] = { { // Linksys USB200M @@ -1635,6 +993,10 @@ static const struct usb_device_id products [] = { // Asus USB Ethernet Adapter USB_DEVICE (0x0b95, 0x7e2b), .driver_info = (unsigned long) &ax88772_info, +}, { + /* ASIX 88172a demo board */ + USB_DEVICE(0x0b95, 0x172a), + .driver_info = (unsigned long) &ax88172a_info, }, { }, // END }; diff --git a/drivers/net/usb/ax88172a.c b/drivers/net/usb/ax88172a.c new file mode 100644 index 000000000000..c8e0aa85fb8e --- /dev/null +++ b/drivers/net/usb/ax88172a.c @@ -0,0 +1,414 @@ +/* + * ASIX AX88172A based USB 2.0 Ethernet Devices + * Copyright (C) 2012 OMICRON electronics GmbH + * + * Supports external PHYs via phylib. Based on the driver for the + * AX88772. Original copyrights follow: + * + * Copyright (C) 2003-2006 David Hollis <dhollis@davehollis.com> + * Copyright (C) 2005 Phil Chang <pchang23@sbcglobal.net> + * Copyright (C) 2006 James Painter <jamie.painter@iname.com> + * Copyright (c) 2002-2003 TiVo Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#include "asix.h" +#include <linux/phy.h> + +struct ax88172a_private { + struct mii_bus *mdio; + struct phy_device *phydev; + char phy_name[20]; + u16 phy_addr; + u16 oldmode; + int use_embdphy; +}; + +/* MDIO read and write wrappers for phylib */ +static int asix_mdio_bus_read(struct mii_bus *bus, int phy_id, int regnum) +{ + return asix_mdio_read(((struct usbnet *)bus->priv)->net, phy_id, + regnum); +} + +static int asix_mdio_bus_write(struct mii_bus *bus, int phy_id, int regnum, + u16 val) +{ + asix_mdio_write(((struct usbnet *)bus->priv)->net, phy_id, regnum, val); + return 0; +} + +static int ax88172a_ioctl(struct net_device *net, struct ifreq *rq, int cmd) +{ + if (!netif_running(net)) + return -EINVAL; + + if (!net->phydev) + return -ENODEV; + + return phy_mii_ioctl(net->phydev, rq, cmd); +} + +/* set MAC link settings according to information from phylib */ +static void ax88172a_adjust_link(struct net_device *netdev) +{ + struct phy_device *phydev = netdev->phydev; + struct usbnet *dev = netdev_priv(netdev); + struct ax88172a_private *priv = dev->driver_priv; + u16 mode = 0; + + if (phydev->link) { + mode = AX88772_MEDIUM_DEFAULT; + + if (phydev->duplex == DUPLEX_HALF) + mode &= ~AX_MEDIUM_FD; + + if (phydev->speed != SPEED_100) + mode &= ~AX_MEDIUM_PS; + } + + if (mode != priv->oldmode) { + asix_write_medium_mode(dev, mode); + priv->oldmode = mode; + netdev_dbg(netdev, "speed %u duplex %d, setting mode to 0x%04x\n", + phydev->speed, phydev->duplex, mode); + phy_print_status(phydev); + } +} + +static void ax88172a_status(struct usbnet *dev, struct urb *urb) +{ + /* link changes are detected by polling the phy */ +} + +/* use phylib infrastructure */ +static int ax88172a_init_mdio(struct usbnet *dev) +{ + struct ax88172a_private *priv = dev->driver_priv; + int ret, i; + + priv->mdio = mdiobus_alloc(); + if (!priv->mdio) { + netdev_err(dev->net, "Could not allocate MDIO bus\n"); + return -ENOMEM; + } + + priv->mdio->priv = (void *)dev; + priv->mdio->read = &asix_mdio_bus_read; + priv->mdio->write = &asix_mdio_bus_write; + priv->mdio->name = "Asix MDIO Bus"; + /* mii bus name is usb-<usb bus number>-<usb device number> */ + snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d", + dev->udev->bus->busnum, dev->udev->devnum); + + priv->mdio->irq = kzalloc(sizeof(int) * PHY_MAX_ADDR, GFP_KERNEL); + if (!priv->mdio->irq) { + netdev_err(dev->net, "Could not allocate mdio->irq\n"); + ret = -ENOMEM; + goto mfree; + } + for (i = 0; i < PHY_MAX_ADDR; i++) + priv->mdio->irq[i] = PHY_POLL; + + ret = mdiobus_register(priv->mdio); + if (ret) { + netdev_err(dev->net, "Could not register MDIO bus\n"); + goto ifree; + } + + netdev_info(dev->net, "registered mdio bus %s\n", priv->mdio->id); + return 0; + +ifree: + kfree(priv->mdio->irq); +mfree: + mdiobus_free(priv->mdio); + return ret; +} + +static void ax88172a_remove_mdio(struct usbnet *dev) +{ + struct ax88172a_private *priv = dev->driver_priv; + + netdev_info(dev->net, "deregistering mdio bus %s\n", priv->mdio->id); + mdiobus_unregister(priv->mdio); + kfree(priv->mdio->irq); + mdiobus_free(priv->mdio); +} + +static const struct net_device_ops ax88172a_netdev_ops = { + .ndo_open = usbnet_open, + .ndo_stop = usbnet_stop, + .ndo_start_xmit = usbnet_start_xmit, + .ndo_tx_timeout = usbnet_tx_timeout, + .ndo_change_mtu = 
usbnet_change_mtu, + .ndo_set_mac_address = asix_set_mac_address, + .ndo_validate_addr = eth_validate_addr, + .ndo_do_ioctl = ax88172a_ioctl, + .ndo_set_rx_mode = asix_set_multicast, +}; + +int ax88172a_get_settings(struct net_device *net, struct ethtool_cmd *cmd) +{ + if (!net->phydev) + return -ENODEV; + + return phy_ethtool_gset(net->phydev, cmd); +} + +int ax88172a_set_settings(struct net_device *net, struct ethtool_cmd *cmd) +{ + if (!net->phydev) + return -ENODEV; + + return phy_ethtool_sset(net->phydev, cmd); +} + +int ax88172a_nway_reset(struct net_device *net) +{ + if (!net->phydev) + return -ENODEV; + + return phy_start_aneg(net->phydev); +} + +static const struct ethtool_ops ax88172a_ethtool_ops = { + .get_drvinfo = asix_get_drvinfo, + .get_link = usbnet_get_link, + .get_msglevel = usbnet_get_msglevel, + .set_msglevel = usbnet_set_msglevel, + .get_wol = asix_get_wol, + .set_wol = asix_set_wol, + .get_eeprom_len = asix_get_eeprom_len, + .get_eeprom = asix_get_eeprom, + .set_eeprom = asix_set_eeprom, + .get_settings = ax88172a_get_settings, + .set_settings = ax88172a_set_settings, + .nway_reset = ax88172a_nway_reset, +}; + +static int ax88172a_reset_phy(struct usbnet *dev, int embd_phy) +{ + int ret; + + ret = asix_sw_reset(dev, AX_SWRESET_IPPD); + if (ret < 0) + goto err; + + msleep(150); + ret = asix_sw_reset(dev, AX_SWRESET_CLEAR); + if (ret < 0) + goto err; + + msleep(150); + + ret = asix_sw_reset(dev, embd_phy ? AX_SWRESET_IPRL : AX_SWRESET_IPPD); + if (ret < 0) + goto err; + + return 0; + +err: + return ret; +} + + +static int ax88172a_bind(struct usbnet *dev, struct usb_interface *intf) +{ + int ret; + u8 buf[ETH_ALEN]; + struct ax88172a_private *priv; + + usbnet_get_endpoints(dev, intf); + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) { + netdev_err(dev->net, "Could not allocate memory for private data\n"); + return -ENOMEM; + } + dev->driver_priv = priv; + + /* Get the MAC address */ + ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf); + if (ret < 0) { + netdev_err(dev->net, "Failed to read MAC address: %d\n", ret); + goto free; + } + memcpy(dev->net->dev_addr, buf, ETH_ALEN); + + dev->net->netdev_ops = &ax88172a_netdev_ops; + dev->net->ethtool_ops = &ax88172a_ethtool_ops; + + /* are we using the internal or the external phy? 
*/ + ret = asix_read_cmd(dev, AX_CMD_SW_PHY_STATUS, 0, 0, 1, buf); + if (ret < 0) { + netdev_err(dev->net, "Failed to read software interface selection register: %d\n", + ret); + goto free; + } + + netdev_dbg(dev->net, "AX_CMD_SW_PHY_STATUS = 0x%02x\n", buf[0]); + switch (buf[0] & AX_PHY_SELECT_MASK) { + case AX_PHY_SELECT_INTERNAL: + netdev_dbg(dev->net, "use internal phy\n"); + priv->use_embdphy = 1; + break; + case AX_PHY_SELECT_EXTERNAL: + netdev_dbg(dev->net, "use external phy\n"); + priv->use_embdphy = 0; + break; + default: + netdev_err(dev->net, "Interface mode not supported by driver\n"); + ret = -ENOTSUPP; + goto free; + } + + priv->phy_addr = asix_read_phy_addr(dev, priv->use_embdphy); + ax88172a_reset_phy(dev, priv->use_embdphy); + + /* Asix framing packs multiple eth frames into a 2K usb bulk transfer */ + if (dev->driver_info->flags & FLAG_FRAMING_AX) { + /* hard_mtu is still the default - the device does not support + jumbo eth frames */ + dev->rx_urb_size = 2048; + } + + /* init MDIO bus */ + ret = ax88172a_init_mdio(dev); + if (ret) + goto free; + + return 0; + +free: + kfree(priv); + return ret; +} + +static int ax88172a_stop(struct usbnet *dev) +{ + struct ax88172a_private *priv = dev->driver_priv; + + netdev_dbg(dev->net, "Stopping interface\n"); + + if (priv->phydev) { + netdev_info(dev->net, "Disconnecting from phy %s\n", + priv->phy_name); + phy_stop(priv->phydev); + phy_disconnect(priv->phydev); + } + + return 0; +} + +static void ax88172a_unbind(struct usbnet *dev, struct usb_interface *intf) +{ + struct ax88172a_private *priv = dev->driver_priv; + + ax88172a_remove_mdio(dev); + kfree(priv); +} + +static int ax88172a_reset(struct usbnet *dev) +{ + struct asix_data *data = (struct asix_data *)&dev->data; + struct ax88172a_private *priv = dev->driver_priv; + int ret; + u16 rx_ctl; + + ax88172a_reset_phy(dev, priv->use_embdphy); + + msleep(150); + rx_ctl = asix_read_rx_ctl(dev); + netdev_dbg(dev->net, "RX_CTL is 0x%04x after software reset\n", rx_ctl); + ret = asix_write_rx_ctl(dev, 0x0000); + if (ret < 0) + goto out; + + rx_ctl = asix_read_rx_ctl(dev); + netdev_dbg(dev->net, "RX_CTL is 0x%04x setting to 0x0000\n", rx_ctl); + + msleep(150); + + ret = asix_write_cmd(dev, AX_CMD_WRITE_IPG0, + AX88772_IPG0_DEFAULT | AX88772_IPG1_DEFAULT, + AX88772_IPG2_DEFAULT, 0, NULL); + if (ret < 0) { + netdev_err(dev->net, "Write IPG,IPG1,IPG2 failed: %d\n", ret); + goto out; + } + + /* Rewrite MAC address */ + memcpy(data->mac_addr, dev->net->dev_addr, ETH_ALEN); + ret = asix_write_cmd(dev, AX_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN, + data->mac_addr); + if (ret < 0) + goto out; + + /* Set RX_CTL to default values with 2k buffer, and enable cactus */ + ret = asix_write_rx_ctl(dev, AX_DEFAULT_RX_CTL); + if (ret < 0) + goto out; + + rx_ctl = asix_read_rx_ctl(dev); + netdev_dbg(dev->net, "RX_CTL is 0x%04x after all initializations\n", + rx_ctl); + + rx_ctl = asix_read_medium_status(dev); + netdev_dbg(dev->net, "Medium Status is 0x%04x after all initializations\n", + rx_ctl); + + /* Connect to PHY */ + snprintf(priv->phy_name, 20, PHY_ID_FMT, + priv->mdio->id, priv->phy_addr); + + priv->phydev = phy_connect(dev->net, priv->phy_name, + &ax88172a_adjust_link, + 0, PHY_INTERFACE_MODE_MII); + if (IS_ERR(priv->phydev)) { + netdev_err(dev->net, "Could not connect to PHY device %s\n", + priv->phy_name); + ret = PTR_ERR(priv->phydev); + goto out; + } + + netdev_info(dev->net, "Connected to phy %s\n", priv->phy_name); + + /* During power-up, the AX88172A set the power down (BMCR_PDOWN) + * bit of the 
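/*
 * Sketch of how the phy_connect() name used in ax88172a_reset() above is
 * composed, assuming PHY_ID_FMT expands to "%s:%02x" (MDIO bus id plus PHY
 * address).  With the "usb-<busnum>:<devnum>" bus id set up in
 * ax88172a_init_mdio() and the internal PHY at address 0x10, the example
 * values below give "usb-001:004:10".
 */
#include <stdio.h>

int main(void)
{
        char phy_name[20];

        snprintf(phy_name, sizeof(phy_name), "%s:%02x", "usb-001:004", 0x10);
        printf("%s\n", phy_name);       /* prints: usb-001:004:10 */
        return 0;
}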
PHY. Bring the PHY up again. + */ + genphy_resume(priv->phydev); + phy_start(priv->phydev); + + return 0; + +out: + return ret; + +} + +const struct driver_info ax88172a_info = { + .description = "ASIX AX88172A USB 2.0 Ethernet", + .bind = ax88172a_bind, + .reset = ax88172a_reset, + .stop = ax88172a_stop, + .unbind = ax88172a_unbind, + .status = ax88172a_status, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR | + FLAG_MULTI_PACKET, + .rx_fixup = asix_rx_fixup, + .tx_fixup = asix_tx_fixup, +}; diff --git a/drivers/net/usb/cdc-phonet.c b/drivers/net/usb/cdc-phonet.c index d848d4dd5754..187c144c5e5b 100644 --- a/drivers/net/usb/cdc-phonet.c +++ b/drivers/net/usb/cdc-phonet.c @@ -394,7 +394,7 @@ int usbpn_probe(struct usb_interface *intf, const struct usb_device_id *id) SET_NETDEV_DEV(dev, &intf->dev); pnd->dev = dev; - pnd->usb = usb_get_dev(usbdev); + pnd->usb = usbdev; pnd->intf = intf; pnd->data_intf = data_intf; spin_lock_init(&pnd->tx_lock); @@ -440,7 +440,6 @@ out: static void usbpn_disconnect(struct usb_interface *intf) { struct usbpn_dev *pnd = usb_get_intfdata(intf); - struct usb_device *usb = pnd->usb; if (pnd->disconnected) return; @@ -449,7 +448,6 @@ static void usbpn_disconnect(struct usb_interface *intf) usb_driver_release_interface(&usbpn_driver, (pnd->intf == intf) ? pnd->data_intf : pnd->intf); unregister_netdev(pnd->dev); - usb_put_dev(usb); } static struct usb_driver usbpn_driver = { diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c index 7023220456c5..a0b5807b30d4 100644 --- a/drivers/net/usb/pegasus.c +++ b/drivers/net/usb/pegasus.c @@ -1329,8 +1329,6 @@ static int pegasus_probe(struct usb_interface *intf, } pegasus_count++; - usb_get_dev(dev); - net = alloc_etherdev(sizeof(struct pegasus)); if (!net) goto out; @@ -1407,7 +1405,6 @@ out2: out1: free_netdev(net); out: - usb_put_dev(dev); pegasus_dec_workqueue(); return res; } @@ -1425,7 +1422,6 @@ static void pegasus_disconnect(struct usb_interface *intf) pegasus->flags |= PEGASUS_UNPLUG; cancel_delayed_work(&pegasus->carrier_check); unregister_netdev(pegasus->net); - usb_put_dev(interface_to_usbdev(intf)); unlink_all_urbs(pegasus); free_all_urbs(pegasus); free_skb_pool(pegasus); diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index a051cedd64bd..2ea126a16d79 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -1,6 +1,10 @@ /* * Copyright (c) 2012 Bjørn Mork <bjorn@mork.no> * + * The probing code is heavily inspired by cdc_ether, which is: + * Copyright (C) 2003-2005 by David Brownell + * Copyright (C) 2006 by Ole Andre Vadla Ravnas (ActiveSync) + * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * version 2 as published by the Free Software Foundation. @@ -15,11 +19,7 @@ #include <linux/usb/usbnet.h> #include <linux/usb/cdc-wdm.h> -/* The name of the CDC Device Management driver */ -#define DM_DRIVER "cdc_wdm" - -/* - * This driver supports wwan (3G/LTE/?) devices using a vendor +/* This driver supports wwan (3G/LTE/?) devices using a vendor * specific management protocol called Qualcomm MSM Interface (QMI) - * in addition to the more common AT commands over serial interface * management @@ -31,59 +31,117 @@ * management protocol is used in place of the standard CDC * notifications NOTIFY_NETWORK_CONNECTION and NOTIFY_SPEED_CHANGE * + * Alternatively, control and data functions can be combined in a + * single USB interface. 
+ * * Handling a protocol like QMI is out of the scope for any driver. - * It can be exported as a character device using the cdc-wdm driver, - * which will enable userspace applications ("modem managers") to - * handle it. This may be required to use the network interface - * provided by the driver. + * It is exported as a character device using the cdc-wdm driver as + * a subdriver, enabling userspace applications ("modem managers") to + * handle it. * * These devices may alternatively/additionally be configured using AT - * commands on any of the serial interfaces driven by the option driver - * - * This driver binds only to the data ("slave") interface to enable - * the cdc-wdm driver to bind to the control interface. It still - * parses the CDC functional descriptors on the control interface to - * a) verify that this is indeed a handled interface (CDC Union - * header lists it as slave) - * b) get MAC address and other ethernet config from the CDC Ethernet - * header - * c) enable user bind requests against the control interface, which - * is the common way to bind to CDC Ethernet Control Model type - * interfaces - * d) provide a hint to the user about which interface is the - * corresponding management interface + * commands on a serial interface */ +/* driver specific data */ +struct qmi_wwan_state { + struct usb_driver *subdriver; + atomic_t pmcount; + unsigned long unused; + struct usb_interface *control; + struct usb_interface *data; +}; + +/* using a counter to merge subdriver requests with our own into a combined state */ +static int qmi_wwan_manage_power(struct usbnet *dev, int on) +{ + struct qmi_wwan_state *info = (void *)&dev->data; + int rv = 0; + + dev_dbg(&dev->intf->dev, "%s() pmcount=%d, on=%d\n", __func__, atomic_read(&info->pmcount), on); + + if ((on && atomic_add_return(1, &info->pmcount) == 1) || (!on && atomic_dec_and_test(&info->pmcount))) { + /* need autopm_get/put here to ensure the usbcore sees the new value */ + rv = usb_autopm_get_interface(dev->intf); + if (rv < 0) + goto err; + dev->intf->needs_remote_wakeup = on; + usb_autopm_put_interface(dev->intf); + } +err: + return rv; +} + +static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on) +{ + struct usbnet *dev = usb_get_intfdata(intf); + + /* can be called while disconnecting */ + if (!dev) + return 0; + return qmi_wwan_manage_power(dev, on); +} + +/* collect all three endpoints and register subdriver */ +static int qmi_wwan_register_subdriver(struct usbnet *dev) +{ + int rv; + struct usb_driver *subdriver = NULL; + struct qmi_wwan_state *info = (void *)&dev->data; + + /* collect bulk endpoints */ + rv = usbnet_get_endpoints(dev, info->data); + if (rv < 0) + goto err; + + /* update status endpoint if separate control interface */ + if (info->control != info->data) + dev->status = &info->control->cur_altsetting->endpoint[0]; + + /* require interrupt endpoint for subdriver */ + if (!dev->status) { + rv = -EINVAL; + goto err; + } + + /* for subdriver power management */ + atomic_set(&info->pmcount, 0); + + /* register subdriver */ + subdriver = usb_cdc_wdm_register(info->control, &dev->status->desc, 512, &qmi_wwan_cdc_wdm_manage_power); + if (IS_ERR(subdriver)) { + dev_err(&info->control->dev, "subdriver registration failed\n"); + rv = PTR_ERR(subdriver); + goto err; + } + + /* prevent usbnet from using status endpoint */ + dev->status = NULL; + + /* save subdriver struct for suspend/resume wrappers */ + info->subdriver = subdriver; + +err: + return rv; +} + static int 
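/*
 * Stand-alone sketch of the pmcount scheme in qmi_wwan_manage_power() above:
 * usbnet and the cdc-wdm subdriver each vote for "power needed", and only
 * the 0->1 and 1->0 transitions of the shared counter actually toggle remote
 * wakeup on the interface.  C11 atomics and a printf stand in for the kernel
 * atomics and the usb_autopm/needs_remote_wakeup handling.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pmcount;

static void manage_power(int on)
{
        if ((on && atomic_fetch_add(&pmcount, 1) + 1 == 1) ||
            (!on && atomic_fetch_sub(&pmcount, 1) - 1 == 0))
                printf("remote wakeup %s\n", on ? "enabled" : "disabled");
}

int main(void)
{
        manage_power(1);        /* usbnet opens the netdev  -> enabled   */
        manage_power(1);        /* cdc-wdm opened as well   -> no change */
        manage_power(0);        /* cdc-wdm closed           -> no change */
        manage_power(0);        /* netdev closed            -> disabled  */
        return 0;
}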
qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf) { int status = -1; - struct usb_interface *control = NULL; u8 *buf = intf->cur_altsetting->extra; int len = intf->cur_altsetting->extralen; struct usb_interface_descriptor *desc = &intf->cur_altsetting->desc; struct usb_cdc_union_desc *cdc_union = NULL; struct usb_cdc_ether_desc *cdc_ether = NULL; - u32 required = 1 << USB_CDC_HEADER_TYPE | 1 << USB_CDC_UNION_TYPE; u32 found = 0; - atomic_t *pmcount = (void *)&dev->data[1]; + struct usb_driver *driver = driver_of(intf); + struct qmi_wwan_state *info = (void *)&dev->data; - atomic_set(pmcount, 0); + BUILD_BUG_ON((sizeof(((struct usbnet *)0)->data) < sizeof(struct qmi_wwan_state))); - /* - * assume a data interface has no additional descriptors and - * that the control and data interface are numbered - * consecutively - this holds for the Huawei device at least - */ - if (len == 0 && desc->bInterfaceNumber > 0) { - control = usb_ifnum_to_if(dev->udev, desc->bInterfaceNumber - 1); - if (!control) - goto err; - - buf = control->cur_altsetting->extra; - len = control->cur_altsetting->extralen; - dev_dbg(&intf->dev, "guessing \"control\" => %s, \"data\" => this\n", - dev_name(&control->dev)); - } + /* require a single interrupt status endpoint for subdriver */ + if (intf->cur_altsetting->desc.bNumEndpoints != 1) + goto err; while (len > 3) { struct usb_descriptor_header *h = (void *)buf; @@ -142,15 +200,23 @@ next_desc: } /* did we find all the required ones? */ - if ((found & required) != required) { + if (!(found & (1 << USB_CDC_HEADER_TYPE)) || + !(found & (1 << USB_CDC_UNION_TYPE))) { dev_err(&intf->dev, "CDC functional descriptors missing\n"); goto err; } - /* give the user a helpful hint if trying to bind to the wrong interface */ - if (cdc_union && desc->bInterfaceNumber == cdc_union->bMasterInterface0) { - dev_err(&intf->dev, "leaving \"control\" interface for " DM_DRIVER " - try binding to %s instead!\n", - dev_name(&usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0)->dev)); + /* verify CDC Union */ + if (desc->bInterfaceNumber != cdc_union->bMasterInterface0) { + dev_err(&intf->dev, "bogus CDC Union: master=%u\n", cdc_union->bMasterInterface0); + goto err; + } + + /* need to save these for unbind */ + info->control = intf; + info->data = usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0); + if (!info->data) { + dev_err(&intf->dev, "bogus CDC Union: slave=%u\n", cdc_union->bSlaveInterface0); goto err; } @@ -160,63 +226,29 @@ next_desc: usbnet_get_ethernet_addr(dev, cdc_ether->iMACAddress); } - /* success! point the user to the management interface */ - if (control) - dev_info(&intf->dev, "Use \"" DM_DRIVER "\" for QMI interface %s\n", - dev_name(&control->dev)); - - /* XXX: add a sysfs symlink somewhere to help management applications find it? 
*/ + /* claim data interface and set it up */ + status = usb_driver_claim_interface(driver, info->data, dev); + if (status < 0) + goto err; - /* collect bulk endpoints now that we know intf == "data" interface */ - status = usbnet_get_endpoints(dev, intf); + status = qmi_wwan_register_subdriver(dev); + if (status < 0) { + usb_set_intfdata(info->data, NULL); + usb_driver_release_interface(driver, info->data); + } err: return status; } -/* using a counter to merge subdriver requests with our own into a combined state */ -static int qmi_wwan_manage_power(struct usbnet *dev, int on) -{ - atomic_t *pmcount = (void *)&dev->data[1]; - int rv = 0; - - dev_dbg(&dev->intf->dev, "%s() pmcount=%d, on=%d\n", __func__, atomic_read(pmcount), on); - - if ((on && atomic_add_return(1, pmcount) == 1) || (!on && atomic_dec_and_test(pmcount))) { - /* need autopm_get/put here to ensure the usbcore sees the new value */ - rv = usb_autopm_get_interface(dev->intf); - if (rv < 0) - goto err; - dev->intf->needs_remote_wakeup = on; - usb_autopm_put_interface(dev->intf); - } -err: - return rv; -} - -static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on) -{ - struct usbnet *dev = usb_get_intfdata(intf); - - /* can be called while disconnecting */ - if (!dev) - return 0; - return qmi_wwan_manage_power(dev, on); -} - /* Some devices combine the "control" and "data" functions into a * single interface with all three endpoints: interrupt + bulk in and * out - * - * Setting up cdc-wdm as a subdriver owning the interrupt endpoint - * will let it provide userspace access to the encapsulated QMI - * protocol without interfering with the usbnet operations. - */ + */ static int qmi_wwan_bind_shared(struct usbnet *dev, struct usb_interface *intf) { int rv; - struct usb_driver *subdriver = NULL; - atomic_t *pmcount = (void *)&dev->data[1]; + struct qmi_wwan_state *info = (void *)&dev->data; /* ZTE makes devices where the interface descriptors and endpoint * configurations of two or more interfaces are identical, even @@ -232,43 +264,39 @@ static int qmi_wwan_bind_shared(struct usbnet *dev, struct usb_interface *intf) goto err; } - atomic_set(pmcount, 0); - - /* collect all three endpoints */ - rv = usbnet_get_endpoints(dev, intf); - if (rv < 0) - goto err; - - /* require interrupt endpoint for subdriver */ - if (!dev->status) { - rv = -EINVAL; - goto err; - } - - subdriver = usb_cdc_wdm_register(intf, &dev->status->desc, 512, &qmi_wwan_cdc_wdm_manage_power); - if (IS_ERR(subdriver)) { - rv = PTR_ERR(subdriver); - goto err; - } - - /* can't let usbnet use the interrupt endpoint */ - dev->status = NULL; - - /* save subdriver struct for suspend/resume wrappers */ - dev->data[0] = (unsigned long)subdriver; + /* control and data is shared */ + info->control = intf; + info->data = intf; + rv = qmi_wwan_register_subdriver(dev); err: return rv; } -static void qmi_wwan_unbind_shared(struct usbnet *dev, struct usb_interface *intf) +static void qmi_wwan_unbind(struct usbnet *dev, struct usb_interface *intf) { - struct usb_driver *subdriver = (void *)dev->data[0]; - - if (subdriver && subdriver->disconnect) - subdriver->disconnect(intf); + struct qmi_wwan_state *info = (void *)&dev->data; + struct usb_driver *driver = driver_of(intf); + struct usb_interface *other; + + if (info->subdriver && info->subdriver->disconnect) + info->subdriver->disconnect(info->control); + + /* allow user to unbind using either control or data */ + if (intf == info->control) + other = info->data; + else + other = info->control; + + /* only if 
not shared */ + if (other && intf != other) { + usb_set_intfdata(other, NULL); + usb_driver_release_interface(driver, other); + } - dev->data[0] = (unsigned long)NULL; + info->subdriver = NULL; + info->data = NULL; + info->control = NULL; } /* suspend/resume wrappers calling both usbnet and the cdc-wdm @@ -280,15 +308,15 @@ static void qmi_wwan_unbind_shared(struct usbnet *dev, struct usb_interface *int static int qmi_wwan_suspend(struct usb_interface *intf, pm_message_t message) { struct usbnet *dev = usb_get_intfdata(intf); - struct usb_driver *subdriver = (void *)dev->data[0]; + struct qmi_wwan_state *info = (void *)&dev->data; int ret; ret = usbnet_suspend(intf, message); if (ret < 0) goto err; - if (subdriver && subdriver->suspend) - ret = subdriver->suspend(intf, message); + if (info->subdriver && info->subdriver->suspend) + ret = info->subdriver->suspend(intf, message); if (ret < 0) usbnet_resume(intf); err: @@ -298,33 +326,33 @@ err: static int qmi_wwan_resume(struct usb_interface *intf) { struct usbnet *dev = usb_get_intfdata(intf); - struct usb_driver *subdriver = (void *)dev->data[0]; + struct qmi_wwan_state *info = (void *)&dev->data; int ret = 0; - if (subdriver && subdriver->resume) - ret = subdriver->resume(intf); + if (info->subdriver && info->subdriver->resume) + ret = info->subdriver->resume(intf); if (ret < 0) goto err; ret = usbnet_resume(intf); - if (ret < 0 && subdriver && subdriver->resume && subdriver->suspend) - subdriver->suspend(intf, PMSG_SUSPEND); + if (ret < 0 && info->subdriver && info->subdriver->resume && info->subdriver->suspend) + info->subdriver->suspend(intf, PMSG_SUSPEND); err: return ret; } - static const struct driver_info qmi_wwan_info = { - .description = "QMI speaking wwan device", + .description = "WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, }; static const struct driver_info qmi_wwan_shared = { - .description = "QMI speaking wwan device with combined interface", + .description = "WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, }; @@ -332,7 +360,7 @@ static const struct driver_info qmi_wwan_force_int0 = { .description = "Qualcomm WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(0), /* interface whitelist bitmap */ }; @@ -341,7 +369,7 @@ static const struct driver_info qmi_wwan_force_int1 = { .description = "Qualcomm WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(1), /* interface whitelist bitmap */ }; @@ -350,7 +378,7 @@ static const struct driver_info qmi_wwan_force_int2 = { .description = "Qualcomm WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(2), /* interface whitelist bitmap */ }; @@ -359,7 +387,7 @@ static const struct driver_info qmi_wwan_force_int3 = { .description = "Qualcomm WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(3), /* interface whitelist bitmap */ }; @@ -368,7 +396,7 @@ static const 
struct driver_info qmi_wwan_force_int4 = { .description = "Qualcomm WWAN/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(4), /* interface whitelist bitmap */ }; @@ -390,7 +418,7 @@ static const struct driver_info qmi_wwan_sierra = { .description = "Sierra Wireless wwan/QMI device", .flags = FLAG_WWAN, .bind = qmi_wwan_bind_shared, - .unbind = qmi_wwan_unbind_shared, + .unbind = qmi_wwan_unbind, .manage_power = qmi_wwan_manage_power, .data = BIT(8) | BIT(19), /* interface whitelist bitmap */ }; @@ -413,7 +441,7 @@ static const struct usb_device_id products[] = { .idVendor = HUAWEI_VENDOR_ID, .bInterfaceClass = USB_CLASS_VENDOR_SPEC, .bInterfaceSubClass = 1, - .bInterfaceProtocol = 8, /* NOTE: This is the *slave* interface of the CDC Union! */ + .bInterfaceProtocol = 9, /* CDC Ethernet *control* interface */ .driver_info = (unsigned long)&qmi_wwan_info, }, { /* Vodafone/Huawei K5005 (12d1:14c8) and similar modems */ @@ -421,7 +449,7 @@ static const struct usb_device_id products[] = { .idVendor = HUAWEI_VENDOR_ID, .bInterfaceClass = USB_CLASS_VENDOR_SPEC, .bInterfaceSubClass = 1, - .bInterfaceProtocol = 56, /* NOTE: This is the *slave* interface of the CDC Union! */ + .bInterfaceProtocol = 57, /* CDC Ethernet *control* interface */ .driver_info = (unsigned long)&qmi_wwan_info, }, { /* Huawei E392, E398 and possibly others in "Windows mode" @@ -453,6 +481,15 @@ static const struct usb_device_id products[] = { .bInterfaceProtocol = 0xff, .driver_info = (unsigned long)&qmi_wwan_force_int4, }, + { /* ZTE MF821D */ + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO, + .idVendor = 0x19d2, + .idProduct = 0x0326, + .bInterfaceClass = 0xff, + .bInterfaceSubClass = 0xff, + .bInterfaceProtocol = 0xff, + .driver_info = (unsigned long)&qmi_wwan_force_int4, + }, { /* ZTE (Vodafone) K3520-Z */ .match_flags = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO, .idVendor = 0x19d2, @@ -572,10 +609,27 @@ static const struct usb_device_id products[] = { }; MODULE_DEVICE_TABLE(usb, products); +static int qmi_wwan_probe(struct usb_interface *intf, const struct usb_device_id *prod) +{ + struct usb_device_id *id = (struct usb_device_id *)prod; + + /* Workaround to enable dynamic IDs. This disables usbnet + * blacklisting functionality. 
Which, if required, can be + * reimplemented here by using a magic "blacklist" value + * instead of 0 in the static device id table + */ + if (!id->driver_info) { + dev_dbg(&intf->dev, "setting defaults for dynamic device id\n"); + id->driver_info = (unsigned long)&qmi_wwan_shared; + } + + return usbnet_probe(intf, id); +} + static struct usb_driver qmi_wwan_driver = { .name = "qmi_wwan", .id_table = products, - .probe = usbnet_probe, + .probe = qmi_wwan_probe, .disconnect = usbnet_disconnect, .suspend = qmi_wwan_suspend, .resume = qmi_wwan_resume, @@ -584,17 +638,7 @@ static struct usb_driver qmi_wwan_driver = { .disable_hub_initiated_lpm = 1, }; -static int __init qmi_wwan_init(void) -{ - return usb_register(&qmi_wwan_driver); -} -module_init(qmi_wwan_init); - -static void __exit qmi_wwan_exit(void) -{ - usb_deregister(&qmi_wwan_driver); -} -module_exit(qmi_wwan_exit); +module_usb_driver(qmi_wwan_driver); MODULE_AUTHOR("Bjørn Mork <bjorn@mork.no>"); MODULE_DESCRIPTION("Qualcomm MSM Interface (QMI) WWAN driver"); diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c index 1c6e51588da7..6c0c5b76fc41 100644 --- a/drivers/net/usb/smsc75xx.c +++ b/drivers/net/usb/smsc75xx.c @@ -616,7 +616,7 @@ static void smsc75xx_init_mac_address(struct usbnet *dev) /* no eeprom, or eeprom values are invalid. generate random MAC */ eth_hw_addr_random(dev->net); - netif_dbg(dev, ifup, dev->net, "MAC address set to random_ether_addr"); + netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr"); } static int smsc75xx_set_mac_address(struct usbnet *dev) diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c index b1112e753859..25cc3a15a4ea 100644 --- a/drivers/net/usb/smsc95xx.c +++ b/drivers/net/usb/smsc95xx.c @@ -578,6 +578,36 @@ static int smsc95xx_ethtool_set_eeprom(struct net_device *netdev, return smsc95xx_write_eeprom(dev, ee->offset, ee->len, data); } +static int smsc95xx_ethtool_getregslen(struct net_device *netdev) +{ + /* all smsc95xx registers */ + return COE_CR - ID_REV + 1; +} + +static void +smsc95xx_ethtool_getregs(struct net_device *netdev, struct ethtool_regs *regs, + void *buf) +{ + struct usbnet *dev = netdev_priv(netdev); + unsigned int i, j; + int retval; + u32 *data = buf; + + retval = smsc95xx_read_reg(dev, ID_REV, ®s->version); + if (retval < 0) { + netdev_warn(netdev, "REGS: cannot read ID_REV\n"); + return; + } + + for (i = ID_REV, j = 0; i <= COE_CR; i += (sizeof(u32)), j++) { + retval = smsc95xx_read_reg(dev, i, &data[j]); + if (retval < 0) { + netdev_warn(netdev, "REGS: cannot read reg[%x]\n", i); + return; + } + } +} + static const struct ethtool_ops smsc95xx_ethtool_ops = { .get_link = usbnet_get_link, .nway_reset = usbnet_nway_reset, @@ -589,6 +619,8 @@ static const struct ethtool_ops smsc95xx_ethtool_ops = { .get_eeprom_len = smsc95xx_ethtool_get_eeprom_len, .get_eeprom = smsc95xx_ethtool_get_eeprom, .set_eeprom = smsc95xx_ethtool_set_eeprom, + .get_regs_len = smsc95xx_ethtool_getregslen, + .get_regs = smsc95xx_ethtool_getregs, }; static int smsc95xx_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd) @@ -615,7 +647,7 @@ static void smsc95xx_init_mac_address(struct usbnet *dev) /* no eeprom, or eeprom values are invalid. 
generate random MAC */ eth_hw_addr_random(dev->net); - netif_dbg(dev, ifup, dev->net, "MAC address set to random_ether_addr\n"); + netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); } static int smsc95xx_set_mac_address(struct usbnet *dev) diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index aba769d77459..8531c1caac28 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -180,7 +180,40 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress) } EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr); -static void intr_complete (struct urb *urb); +static void intr_complete (struct urb *urb) +{ + struct usbnet *dev = urb->context; + int status = urb->status; + + switch (status) { + /* success */ + case 0: + dev->driver_info->status(dev, urb); + break; + + /* software-driven interface shutdown */ + case -ENOENT: /* urb killed */ + case -ESHUTDOWN: /* hardware gone */ + netif_dbg(dev, ifdown, dev->net, + "intr shutdown, code %d\n", status); + return; + + /* NOTE: not throttling like RX/TX, since this endpoint + * already polls infrequently + */ + default: + netdev_dbg(dev->net, "intr status %d\n", status); + break; + } + + if (!netif_running (dev->net)) + return; + + status = usb_submit_urb (urb, GFP_ATOMIC); + if (status != 0) + netif_err(dev, timer, dev->net, + "intr resubmit --> %d\n", status); +} static int init_status (struct usbnet *dev, struct usb_interface *intf) { @@ -519,42 +552,6 @@ block: netif_dbg(dev, rx_err, dev->net, "no read resubmitted\n"); } -static void intr_complete (struct urb *urb) -{ - struct usbnet *dev = urb->context; - int status = urb->status; - - switch (status) { - /* success */ - case 0: - dev->driver_info->status(dev, urb); - break; - - /* software-driven interface shutdown */ - case -ENOENT: /* urb killed */ - case -ESHUTDOWN: /* hardware gone */ - netif_dbg(dev, ifdown, dev->net, - "intr shutdown, code %d\n", status); - return; - - /* NOTE: not throttling like RX/TX, since this endpoint - * already polls infrequently - */ - default: - netdev_dbg(dev->net, "intr status %d\n", status); - break; - } - - if (!netif_running (dev->net)) - return; - - memset(urb->transfer_buffer, 0, urb->transfer_buffer_length); - status = usb_submit_urb (urb, GFP_ATOMIC); - if (status != 0) - netif_err(dev, timer, dev->net, - "intr resubmit --> %d\n", status); -} - /*-------------------------------------------------------------------------*/ void usbnet_pause_rx(struct usbnet *dev) { @@ -1312,7 +1309,6 @@ void usbnet_disconnect (struct usb_interface *intf) usb_free_urb(dev->interrupt); free_netdev(net); - usb_put_dev (xdev); } EXPORT_SYMBOL_GPL(usbnet_disconnect); @@ -1368,8 +1364,6 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) xdev = interface_to_usbdev (udev); interface = udev->cur_altsetting; - usb_get_dev (xdev); - status = -ENOMEM; // set up our own records @@ -1498,7 +1492,6 @@ out3: out1: free_netdev(net); out: - usb_put_dev(xdev); return status; } EXPORT_SYMBOL_GPL(usbnet_probe); @@ -1600,7 +1593,7 @@ static int __init usbnet_init(void) BUILD_BUG_ON( FIELD_SIZEOF(struct sk_buff, cb) < sizeof(struct skb_data)); - random_ether_addr(node_id); + eth_random_addr(node_id); return 0; } module_init(usbnet_init); diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index f18149ae2588..83d2b0c34c5e 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -704,16 +704,16 @@ static struct rtnl_link_stats64 *virtnet_stats(struct net_device *dev, u64 tpackets, tbytes, 
rpackets, rbytes; do { - start = u64_stats_fetch_begin(&stats->tx_syncp); + start = u64_stats_fetch_begin_bh(&stats->tx_syncp); tpackets = stats->tx_packets; tbytes = stats->tx_bytes; - } while (u64_stats_fetch_retry(&stats->tx_syncp, start)); + } while (u64_stats_fetch_retry_bh(&stats->tx_syncp, start)); do { - start = u64_stats_fetch_begin(&stats->rx_syncp); + start = u64_stats_fetch_begin_bh(&stats->rx_syncp); rpackets = stats->rx_packets; rbytes = stats->rx_bytes; - } while (u64_stats_fetch_retry(&stats->rx_syncp, start)); + } while (u64_stats_fetch_retry_bh(&stats->rx_syncp, start)); tot->rx_packets += rpackets; tot->tx_packets += tpackets; @@ -1062,7 +1062,7 @@ static int virtnet_probe(struct virtio_device *vdev) return -ENOMEM; /* Set up network device as normal. */ - dev->priv_flags |= IFF_UNICAST_FLT; + dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; dev->netdev_ops = &virtnet_netdev; dev->features = NETIF_F_HIGHDMA; diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 3f04ba0a5454..93e0cfb739b8 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1037,7 +1037,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq, #endif dev_dbg(&adapter->netdev->dev, "txd[%u]: SOP 0x%Lx 0x%x 0x%x\n", - (u32)((union Vmxnet3_GenericDesc *)ctx.sop_txd - + (u32)(ctx.sop_txd - tq->tx_ring.base), le64_to_cpu(gdesc->txd.addr), le32_to_cpu(gdesc->dword[2]), le32_to_cpu(gdesc->dword[3])); diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c index d7a65e141d1a..44db8b75a531 100644 --- a/drivers/net/wan/x25_asy.c +++ b/drivers/net/wan/x25_asy.c @@ -231,7 +231,7 @@ static void x25_asy_encaps(struct x25_asy *sl, unsigned char *icp, int len) } p = icp; - count = x25_asy_esc(p, (unsigned char *) sl->xbuff, len); + count = x25_asy_esc(p, sl->xbuff, len); /* Order of next two lines is *very* important. * When we are sending a little amount of data, diff --git a/drivers/net/wimax/i2400m/Kconfig b/drivers/net/wimax/i2400m/Kconfig index 672de18a776c..71453db14258 100644 --- a/drivers/net/wimax/i2400m/Kconfig +++ b/drivers/net/wimax/i2400m/Kconfig @@ -7,9 +7,6 @@ config WIMAX_I2400M comment "Enable USB support to see WiMAX USB drivers" depends on USB = n -comment "Enable MMC support to see WiMAX SDIO drivers" - depends on MMC = n - config WIMAX_I2400M_USB tristate "Intel Wireless WiMAX Connection 2400 over USB (including 5x50)" depends on WIMAX && USB @@ -21,25 +18,6 @@ config WIMAX_I2400M_USB If unsure, it is safe to select M (module). -config WIMAX_I2400M_SDIO - tristate "Intel Wireless WiMAX Connection 2400 over SDIO" - depends on WIMAX && MMC - select WIMAX_I2400M - help - Select if you have a device based on the Intel WiMAX - Connection 2400 over SDIO. - - If unsure, it is safe to select M (module). - -config WIMAX_IWMC3200_SDIO - bool "Intel Wireless Multicom WiMAX Connection 3200 over SDIO (EXPERIMENTAL)" - depends on WIMAX_I2400M_SDIO - depends on EXPERIMENTAL - select IWMC3200TOP - help - Select if you have a device based on the Intel Multicom WiMAX - Connection 3200 over SDIO. 
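/*
 * Sketch of the seqcount retry pattern behind the
 * u64_stats_fetch_begin_bh()/u64_stats_fetch_retry_bh() pair that
 * virtnet_stats() switches to above: the reader re-reads the 64-bit counters
 * until it sees the same even sequence number before and after, so it never
 * returns a torn value on 32-bit machines.  The _bh flavour additionally
 * accounts for the counters being updated from bottom-half context; memory
 * ordering and the writer side are omitted from this sketch.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct pcpu_stats {
        atomic_uint seq;                /* odd while a writer is active */
        uint64_t packets;
        uint64_t bytes;
};

static void stats_fetch(struct pcpu_stats *s, uint64_t *packets,
                        uint64_t *bytes)
{
        unsigned int start;

        do {
                while ((start = atomic_load(&s->seq)) & 1)
                        ;               /* writer in progress, spin */
                *packets = s->packets;
                *bytes = s->bytes;
        } while (atomic_load(&s->seq) != start);
}

int main(void)
{
        struct pcpu_stats s;
        uint64_t p, b;

        atomic_init(&s.seq, 0);
        s.packets = 42;
        s.bytes = 64000;
        stats_fetch(&s, &p, &b);
        printf("%llu packets, %llu bytes\n",
               (unsigned long long)p, (unsigned long long)b);
        return 0;
}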
- config WIMAX_I2400M_DEBUG_LEVEL int "WiMAX i2400m debug level" depends on WIMAX_I2400M diff --git a/drivers/net/wimax/i2400m/Makefile b/drivers/net/wimax/i2400m/Makefile index 5d9e018d31af..f6d19c348082 100644 --- a/drivers/net/wimax/i2400m/Makefile +++ b/drivers/net/wimax/i2400m/Makefile @@ -1,7 +1,6 @@ obj-$(CONFIG_WIMAX_I2400M) += i2400m.o obj-$(CONFIG_WIMAX_I2400M_USB) += i2400m-usb.o -obj-$(CONFIG_WIMAX_I2400M_SDIO) += i2400m-sdio.o i2400m-y := \ control.o \ @@ -21,10 +20,3 @@ i2400m-usb-y := \ usb-tx.o \ usb-rx.o \ usb.o - - -i2400m-sdio-y := \ - sdio.o \ - sdio-tx.o \ - sdio-fw.o \ - sdio-rx.o diff --git a/drivers/net/wimax/i2400m/control.c b/drivers/net/wimax/i2400m/control.c index 2fea02b35b2d..4a01e5c7fe09 100644 --- a/drivers/net/wimax/i2400m/control.c +++ b/drivers/net/wimax/i2400m/control.c @@ -130,7 +130,7 @@ ssize_t i2400m_tlv_match(const struct i2400m_tlv_hdr *tlv, && le16_to_cpu(tlv->length) + sizeof(*tlv) != tlv_size) { size_t size = le16_to_cpu(tlv->length) + sizeof(*tlv); printk(KERN_WARNING "W: tlv type 0x%x mismatched because of " - "size (got %zu vs %zu expected)\n", + "size (got %zu vs %zd expected)\n", tlv_type, size, tlv_size); return size; } @@ -235,7 +235,7 @@ const struct i2400m_tlv_hdr *i2400m_tlv_find( break; if (match > 0) dev_warn(dev, "TLV type 0x%04x found with size " - "mismatch (%zu vs %zu needed)\n", + "mismatch (%zu vs %zd needed)\n", tlv_type, match, tlv_size); } return tlv; diff --git a/drivers/net/wimax/i2400m/driver.c b/drivers/net/wimax/i2400m/driver.c index 47cae7150bc1..025426132754 100644 --- a/drivers/net/wimax/i2400m/driver.c +++ b/drivers/net/wimax/i2400m/driver.c @@ -754,8 +754,7 @@ EXPORT_SYMBOL_GPL(i2400m_error_recovery); /* * Alloc the command and ack buffers for boot mode * - * Get the buffers needed to deal with boot mode messages. These - * buffers need to be allocated before the sdio receive irq is setup. + * Get the buffers needed to deal with boot mode messages. */ static int i2400m_bm_buf_alloc(struct i2400m *i2400m) @@ -897,7 +896,7 @@ int i2400m_setup(struct i2400m *i2400m, enum i2400m_bri bm_flags) result = i2400m_read_mac_addr(i2400m); if (result < 0) goto error_read_mac_addr; - random_ether_addr(i2400m->src_mac_addr); + eth_random_addr(i2400m->src_mac_addr); i2400m->pm_notifier.notifier_call = i2400m_pm_notifier; register_pm_notifier(&i2400m->pm_notifier); diff --git a/drivers/net/wimax/i2400m/fw.c b/drivers/net/wimax/i2400m/fw.c index 7cbd7d231e11..283237f6f074 100644 --- a/drivers/net/wimax/i2400m/fw.c +++ b/drivers/net/wimax/i2400m/fw.c @@ -51,8 +51,7 @@ * firmware. Normal hardware takes only signed firmware. * * On boot mode, in USB, we write to the device using the bulk out - * endpoint and read from it in the notification endpoint. In SDIO we - * talk to it via the write address and read from the read address. + * endpoint and read from it in the notification endpoint. 
* * Upon entrance to boot mode, the device sends (preceded with a few * zero length packets (ZLPs) on the notification endpoint in USB) a @@ -1268,7 +1267,7 @@ int i2400m_fw_check(struct i2400m *i2400m, const void *bcf, size_t bcf_size) size_t leftover, offset, header_len, size; leftover = top - itr; - offset = itr - (const void *) bcf; + offset = itr - bcf; if (leftover <= sizeof(*bcf_hdr)) { dev_err(dev, "firmware %s: %zu B left at @%zx, " "not enough for BCF header\n", diff --git a/drivers/net/wimax/i2400m/i2400m-sdio.h b/drivers/net/wimax/i2400m/i2400m-sdio.h deleted file mode 100644 index 1d63ffdedfde..000000000000 --- a/drivers/net/wimax/i2400m/i2400m-sdio.h +++ /dev/null @@ -1,157 +0,0 @@ -/* - * Intel Wireless WiMAX Connection 2400m - * SDIO-specific i2400m driver definitions - * - * - * Copyright (C) 2007-2008 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <linux-wimax@intel.com> - * Brian Bian <brian.bian@intel.com> - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> - * Yanir Lubetkin <yanirx.lubetkin@intel.com> - * - Initial implementation - * - * - * This driver implements the bus-specific part of the i2400m for - * SDIO. Check i2400m.h for a generic driver description. - * - * ARCHITECTURE - * - * This driver sits under the bus-generic i2400m driver, providing the - * connection to the device. - * - * When probed, all the function pointers are setup and then the - * bus-generic code called. The generic driver will then use the - * provided pointers for uploading firmware (i2400ms_bus_bm*() in - * sdio-fw.c) and then setting up the device (i2400ms_dev_*() in - * sdio.c). - * - * Once firmware is uploaded, TX functions (sdio-tx.c) are called when - * data is ready for transmission in the TX fifo; then the SDIO IRQ is - * fired and data is available (sdio-rx.c), it is sent to the generic - * driver for processing with i2400m_rx. 
- */ - -#ifndef __I2400M_SDIO_H__ -#define __I2400M_SDIO_H__ - -#include "i2400m.h" - -/* Host-Device interface for SDIO */ -enum { - I2400M_SDIO_BOOT_RETRIES = 3, - I2400MS_BLK_SIZE = 256, - I2400MS_PL_SIZE_MAX = 0x3E00, - - I2400MS_DATA_ADDR = 0x0, - I2400MS_INTR_STATUS_ADDR = 0x13, - I2400MS_INTR_CLEAR_ADDR = 0x13, - I2400MS_INTR_ENABLE_ADDR = 0x14, - I2400MS_INTR_GET_SIZE_ADDR = 0x2C, - /* The number of ticks to wait for the device to signal that - * it is ready */ - I2400MS_INIT_SLEEP_INTERVAL = 100, - /* How long to wait for the device to settle after reset */ - I2400MS_SETTLE_TIME = 40, - /* The number of msec to wait for IOR after sending IOE */ - IWMC3200_IOR_TIMEOUT = 10, -}; - - -/** - * struct i2400ms - descriptor for a SDIO connected i2400m - * - * @i2400m: bus-generic i2400m implementation; has to be first (see - * it's documentation in i2400m.h). - * - * @func: pointer to our SDIO function - * - * @tx_worker: workqueue struct used to TX data when the bus-generic - * code signals packets are pending for transmission to the device. - * - * @tx_workqueue: workqeueue used for data TX; we don't use the - * system's workqueue as that might cause deadlocks with code in - * the bus-generic driver. The read/write operation to the queue - * is protected with spinlock (tx_lock in struct i2400m) to avoid - * the queue being destroyed in the middle of a the queue read/write - * operation. - * - * @debugfs_dentry: dentry for the SDIO specific debugfs files - * - * Note this value is set to NULL upon destruction; this is - * because some routinges use it to determine if we are inside the - * probe() path or some other path. When debugfs is disabled, - * creation sets the dentry to '(void*) -ENODEV', which is valid - * for the test. - */ -struct i2400ms { - struct i2400m i2400m; /* FIRST! See doc */ - struct sdio_func *func; - - struct work_struct tx_worker; - struct workqueue_struct *tx_workqueue; - char tx_wq_name[32]; - - struct dentry *debugfs_dentry; - - wait_queue_head_t bm_wfa_wq; - int bm_wait_result; - size_t bm_ack_size; - - /* Device is any of the iwmc3200 SKUs */ - unsigned iwmc3200:1; -}; - - -static inline -void i2400ms_init(struct i2400ms *i2400ms) -{ - i2400m_init(&i2400ms->i2400m); -} - - -extern int i2400ms_rx_setup(struct i2400ms *); -extern void i2400ms_rx_release(struct i2400ms *); - -extern int i2400ms_tx_setup(struct i2400ms *); -extern void i2400ms_tx_release(struct i2400ms *); -extern void i2400ms_bus_tx_kick(struct i2400m *); - -extern ssize_t i2400ms_bus_bm_cmd_send(struct i2400m *, - const struct i2400m_bootrom_header *, - size_t, int); -extern ssize_t i2400ms_bus_bm_wait_for_ack(struct i2400m *, - struct i2400m_bootrom_header *, - size_t); -extern void i2400ms_bus_bm_release(struct i2400m *); -extern int i2400ms_bus_bm_setup(struct i2400m *); - -#endif /* #ifndef __I2400M_SDIO_H__ */ diff --git a/drivers/net/wimax/i2400m/i2400m.h b/drivers/net/wimax/i2400m/i2400m.h index c806d4550212..79c6505b5c20 100644 --- a/drivers/net/wimax/i2400m/i2400m.h +++ b/drivers/net/wimax/i2400m/i2400m.h @@ -46,7 +46,7 @@ * - bus generic driver (this part) * * The bus specific driver sets up stuff specific to the bus the - * device is connected to (USB, SDIO, PCI, tam-tam...non-authoritative + * device is connected to (USB, PCI, tam-tam...non-authoritative * nor binding list) which is basically the device-model management * (probe/disconnect, etc), moving data from device to kernel and * back, doing the power saving details and reseting the device. 
@@ -238,14 +238,13 @@ struct i2400m_barker_db; * amount needed for loading firmware, where us dev_start/stop setup * the rest needed to do full data/control traffic. * - * @bus_tx_block_size: [fill] SDIO imposes a 256 block size, USB 16, - * so we have a tx_blk_size variable that the bus layer sets to - * tell the engine how much of that we need. + * @bus_tx_block_size: [fill] USB imposes a 16 block size, but other + * busses will differ. So we have a tx_blk_size variable that the + * bus layer sets to tell the engine how much of that we need. * * @bus_tx_room_min: [fill] Minimum room required while allocating - * TX queue's buffer space for message header. SDIO requires - * 224 bytes and USB 16 bytes. Refer bus specific driver code - * for details. + * TX queue's buffer space for message header. USB requires + * 16 bytes. Refer to bus specific driver code for details. * * @bus_pl_size_max: [fill] Maximum payload size. * diff --git a/drivers/net/wimax/i2400m/sdio-debug-levels.h b/drivers/net/wimax/i2400m/sdio-debug-levels.h deleted file mode 100644 index c51998741301..000000000000 --- a/drivers/net/wimax/i2400m/sdio-debug-levels.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * debug levels control file for the i2400m module's - */ -#ifndef __debug_levels__h__ -#define __debug_levels__h__ - -/* Maximum compile and run time debug level for all submodules */ -#define D_MODULENAME i2400m_sdio -#define D_MASTER CONFIG_WIMAX_I2400M_DEBUG_LEVEL - -#include <linux/wimax/debug.h> - -/* List of all the enabled modules */ -enum d_module { - D_SUBMODULE_DECLARE(main), - D_SUBMODULE_DECLARE(tx), - D_SUBMODULE_DECLARE(rx), - D_SUBMODULE_DECLARE(fw) -}; - - -#endif /* #ifndef __debug_levels__h__ */ diff --git a/drivers/net/wimax/i2400m/sdio-fw.c b/drivers/net/wimax/i2400m/sdio-fw.c deleted file mode 100644 index 8e025418f5be..000000000000 --- a/drivers/net/wimax/i2400m/sdio-fw.c +++ /dev/null @@ -1,210 +0,0 @@ -/* - * Intel Wireless WiMAX Connection 2400m - * Firmware uploader's SDIO specifics - * - * - * Copyright (C) 2007-2008 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <linux-wimax@intel.com> - * Yanir Lubetkin <yanirx.lubetkin@intel.com> - * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> - * - Initial implementation - * - * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> - * - Bus generic/specific split for USB - * - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * - Initial implementation for SDIO - * - * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> - * - SDIO rehash for changes in the bus-driver model - * - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * - Make it IRQ based, not polling - * - * THE PROCEDURE - * - * See fw.c for the generic description of this procedure. - * - * This file implements only the SDIO specifics. It boils down to how - * to send a command and waiting for an acknowledgement from the - * device. - * - * All this code is sequential -- all i2400ms_bus_bm_*() functions are - * executed in the same thread, except i2400ms_bm_irq() [on its own by - * the SDIO driver]. This makes it possible to avoid locking. - * - * COMMAND EXECUTION - * - * The generic firmware upload code will call i2400m_bus_bm_cmd_send() - * to send commands. - * - * The SDIO devices expects things in 256 byte blocks, so it will pad - * it, compute the checksum (if needed) and pass it to SDIO. - * - * ACK RECEPTION - * - * This works in IRQ mode -- the fw loader says when to wait for data - * and for that it calls i2400ms_bus_bm_wait_for_ack(). - * - * This checks if there is any data available (RX size > 0); if not, - * waits for the IRQ handler to notify about it. Once there is data, - * it is read and passed to the caller. Doing it this way we don't - * need much coordination/locking, and it makes it much more difficult - * for an interrupt to be lost and the wait_for_ack() function getting - * stuck even when data is pending. - */ -#include <linux/mmc/sdio_func.h> -#include "i2400m-sdio.h" - - -#define D_SUBMODULE fw -#include "sdio-debug-levels.h" - - -/* - * Send a boot-mode command to the SDIO function - * - * We use a bounce buffer (i2400m->bm_cmd_buf) because we need to - * touch the header if the RAW flag is not set. - * - * @flags: pass thru from i2400m_bm_cmd() - * @return: cmd_size if ok, < 0 errno code on error. - * - * Note the command is padded to the SDIO block size for the device. - */ -ssize_t i2400ms_bus_bm_cmd_send(struct i2400m *i2400m, - const struct i2400m_bootrom_header *_cmd, - size_t cmd_size, int flags) -{ - ssize_t result; - struct device *dev = i2400m_dev(i2400m); - struct i2400ms *i2400ms = container_of(i2400m, struct i2400ms, i2400m); - int opcode = _cmd == NULL ? 
-1 : i2400m_brh_get_opcode(_cmd); - struct i2400m_bootrom_header *cmd; - /* SDIO restriction */ - size_t cmd_size_a = ALIGN(cmd_size, I2400MS_BLK_SIZE); - - d_fnstart(5, dev, "(i2400m %p cmd %p size %zu)\n", - i2400m, _cmd, cmd_size); - result = -E2BIG; - if (cmd_size > I2400M_BM_CMD_BUF_SIZE) - goto error_too_big; - - if (_cmd != i2400m->bm_cmd_buf) - memmove(i2400m->bm_cmd_buf, _cmd, cmd_size); - cmd = i2400m->bm_cmd_buf; - if (cmd_size_a > cmd_size) /* Zero pad space */ - memset(i2400m->bm_cmd_buf + cmd_size, 0, cmd_size_a - cmd_size); - if ((flags & I2400M_BM_CMD_RAW) == 0) { - if (WARN_ON(i2400m_brh_get_response_required(cmd) == 0)) - dev_warn(dev, "SW BUG: response_required == 0\n"); - i2400m_bm_cmd_prepare(cmd); - } - d_printf(4, dev, "BM cmd %d: %zu bytes (%zu padded)\n", - opcode, cmd_size, cmd_size_a); - d_dump(5, dev, cmd, cmd_size); - - sdio_claim_host(i2400ms->func); /* Send & check */ - result = sdio_memcpy_toio(i2400ms->func, I2400MS_DATA_ADDR, - i2400m->bm_cmd_buf, cmd_size_a); - sdio_release_host(i2400ms->func); - if (result < 0) { - dev_err(dev, "BM cmd %d: cannot send: %ld\n", - opcode, (long) result); - goto error_cmd_send; - } - result = cmd_size; -error_cmd_send: -error_too_big: - d_fnend(5, dev, "(i2400m %p cmd %p size %zu) = %d\n", - i2400m, _cmd, cmd_size, (int) result); - return result; -} - - -/* - * Read an ack from the device's boot-mode - * - * @i2400m: - * @_ack: pointer to where to store the read data - * @ack_size: how many bytes we should read - * - * Returns: < 0 errno code on error; otherwise, amount of received bytes. - * - * The ACK for a BM command is always at least sizeof(*ack) bytes, so - * check for that. We don't need to check for device reboots - * - */ -ssize_t i2400ms_bus_bm_wait_for_ack(struct i2400m *i2400m, - struct i2400m_bootrom_header *ack, - size_t ack_size) -{ - ssize_t result; - struct i2400ms *i2400ms = container_of(i2400m, struct i2400ms, i2400m); - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - int size; - - BUG_ON(sizeof(*ack) > ack_size); - - d_fnstart(5, dev, "(i2400m %p ack %p size %zu)\n", - i2400m, ack, ack_size); - - result = wait_event_timeout(i2400ms->bm_wfa_wq, - i2400ms->bm_ack_size != -EINPROGRESS, - 2 * HZ); - if (result == 0) { - result = -ETIMEDOUT; - dev_err(dev, "BM: error waiting for an ack\n"); - goto error_timeout; - } - - spin_lock(&i2400m->rx_lock); - result = i2400ms->bm_ack_size; - BUG_ON(result == -EINPROGRESS); - if (result < 0) /* so we exit when rx_release() is called */ - dev_err(dev, "BM: %s failed: %zd\n", __func__, result); - else { - size = min(ack_size, i2400ms->bm_ack_size); - memcpy(ack, i2400m->bm_ack_buf, size); - } - /* - * Remember always to clear the bm_ack_size to -EINPROGRESS - * after the RX data is processed - */ - i2400ms->bm_ack_size = -EINPROGRESS; - spin_unlock(&i2400m->rx_lock); - -error_timeout: - d_fnend(5, dev, "(i2400m %p ack %p size %zu) = %zd\n", - i2400m, ack, ack_size, result); - return result; -} diff --git a/drivers/net/wimax/i2400m/sdio-rx.c b/drivers/net/wimax/i2400m/sdio-rx.c deleted file mode 100644 index fb6396dd115f..000000000000 --- a/drivers/net/wimax/i2400m/sdio-rx.c +++ /dev/null @@ -1,301 +0,0 @@ -/* - * Intel Wireless WiMAX Connection 2400m - * SDIO RX handling - * - * - * Copyright (C) 2007-2008 Intel Corporation. All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <linux-wimax@intel.com> - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * - Initial implementation - * - * - * This handles the RX path on SDIO. - * - * The SDIO bus driver calls the "irq" routine when data is available. - * This is not a traditional interrupt routine since the SDIO bus - * driver calls us from its irq thread context. Because of this - * sleeping in the SDIO RX IRQ routine is okay. - * - * From there on, we obtain the size of the data that is available, - * allocate an skb, copy it and then pass it to the generic driver's - * RX routine [i2400m_rx()]. - * - * ROADMAP - * - * i2400ms_irq() - * i2400ms_rx() - * __i2400ms_rx_get_size() - * i2400m_is_boot_barker() - * i2400m_rx() - * - * i2400ms_rx_setup() - * - * i2400ms_rx_release() - */ -#include <linux/workqueue.h> -#include <linux/wait.h> -#include <linux/skbuff.h> -#include <linux/mmc/sdio.h> -#include <linux/mmc/sdio_func.h> -#include <linux/slab.h> -#include "i2400m-sdio.h" - -#define D_SUBMODULE rx -#include "sdio-debug-levels.h" - -static const __le32 i2400m_ACK_BARKER[4] = { - __constant_cpu_to_le32(I2400M_ACK_BARKER), - __constant_cpu_to_le32(I2400M_ACK_BARKER), - __constant_cpu_to_le32(I2400M_ACK_BARKER), - __constant_cpu_to_le32(I2400M_ACK_BARKER) -}; - - -/* - * Read and return the amount of bytes available for RX - * - * The RX size has to be read like this: byte reads of three - * sequential locations; then glue'em together. - * - * sdio_readl() doesn't work. 
- */ -static ssize_t __i2400ms_rx_get_size(struct i2400ms *i2400ms) -{ - int ret, cnt, val; - ssize_t rx_size; - unsigned xfer_size_addr; - struct sdio_func *func = i2400ms->func; - struct device *dev = &i2400ms->func->dev; - - d_fnstart(7, dev, "(i2400ms %p)\n", i2400ms); - xfer_size_addr = I2400MS_INTR_GET_SIZE_ADDR; - rx_size = 0; - for (cnt = 0; cnt < 3; cnt++) { - val = sdio_readb(func, xfer_size_addr + cnt, &ret); - if (ret < 0) { - dev_err(dev, "RX: Can't read byte %d of RX size from " - "0x%08x: %d\n", cnt, xfer_size_addr + cnt, ret); - rx_size = ret; - goto error_read; - } - rx_size = rx_size << 8 | (val & 0xff); - } - d_printf(6, dev, "RX: rx_size is %ld\n", (long) rx_size); -error_read: - d_fnend(7, dev, "(i2400ms %p) = %ld\n", i2400ms, (long) rx_size); - return rx_size; -} - - -/* - * Read data from the device (when in normal) - * - * Allocate an SKB of the right size, read the data in and then - * deliver it to the generic layer. - * - * We also check for a reboot barker. That means the device died and - * we have to reboot it. - */ -static -void i2400ms_rx(struct i2400ms *i2400ms) -{ - int ret; - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - struct i2400m *i2400m = &i2400ms->i2400m; - struct sk_buff *skb; - ssize_t rx_size; - - d_fnstart(7, dev, "(i2400ms %p)\n", i2400ms); - rx_size = __i2400ms_rx_get_size(i2400ms); - if (rx_size < 0) { - ret = rx_size; - goto error_get_size; - } - /* - * Hardware quirk: make sure to clear the INTR status register - * AFTER getting the data transfer size. - */ - sdio_writeb(func, 1, I2400MS_INTR_CLEAR_ADDR, &ret); - - ret = -ENOMEM; - skb = alloc_skb(rx_size, GFP_ATOMIC); - if (NULL == skb) { - dev_err(dev, "RX: unable to alloc skb\n"); - goto error_alloc_skb; - } - ret = sdio_memcpy_fromio(func, skb->data, - I2400MS_DATA_ADDR, rx_size); - if (ret < 0) { - dev_err(dev, "RX: SDIO data read failed: %d\n", ret); - goto error_memcpy_fromio; - } - - rmb(); /* make sure we get boot_mode from dev_reset_handle */ - if (unlikely(i2400m->boot_mode == 1)) { - spin_lock(&i2400m->rx_lock); - i2400ms->bm_ack_size = rx_size; - spin_unlock(&i2400m->rx_lock); - memcpy(i2400m->bm_ack_buf, skb->data, rx_size); - wake_up(&i2400ms->bm_wfa_wq); - d_printf(5, dev, "RX: SDIO boot mode message\n"); - kfree_skb(skb); - goto out; - } - ret = -EIO; - if (unlikely(rx_size < sizeof(__le32))) { - dev_err(dev, "HW BUG? only %zu bytes received\n", rx_size); - goto error_bad_size; - } - if (likely(i2400m_is_d2h_barker(skb->data))) { - skb_put(skb, rx_size); - i2400m_rx(i2400m, skb); - } else if (unlikely(i2400m_is_boot_barker(i2400m, - skb->data, rx_size))) { - ret = i2400m_dev_reset_handle(i2400m, "device rebooted"); - dev_err(dev, "RX: SDIO reboot barker\n"); - kfree_skb(skb); - } else { - i2400m_unknown_barker(i2400m, skb->data, rx_size); - kfree_skb(skb); - } -out: - d_fnend(7, dev, "(i2400ms %p) = void\n", i2400ms); - return; - -error_memcpy_fromio: - kfree_skb(skb); -error_alloc_skb: -error_get_size: -error_bad_size: - d_fnend(7, dev, "(i2400ms %p) = %d\n", i2400ms, ret); -} - - -/* - * Process an interrupt from the SDIO card - * - * FIXME: need to process other events that are not just ready-to-read - * - * Checks there is data ready and then proceeds to read it. 
- */ -static -void i2400ms_irq(struct sdio_func *func) -{ - int ret; - struct i2400ms *i2400ms = sdio_get_drvdata(func); - struct device *dev = &func->dev; - int val; - - d_fnstart(6, dev, "(i2400ms %p)\n", i2400ms); - val = sdio_readb(func, I2400MS_INTR_STATUS_ADDR, &ret); - if (ret < 0) { - dev_err(dev, "RX: Can't read interrupt status: %d\n", ret); - goto error_no_irq; - } - if (!val) { - dev_err(dev, "RX: BUG? got IRQ but no interrupt ready?\n"); - goto error_no_irq; - } - i2400ms_rx(i2400ms); -error_no_irq: - d_fnend(6, dev, "(i2400ms %p) = void\n", i2400ms); -} - - -/* - * Setup SDIO RX - * - * Hooks up the IRQ handler and then enables IRQs. - */ -int i2400ms_rx_setup(struct i2400ms *i2400ms) -{ - int result; - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - struct i2400m *i2400m = &i2400ms->i2400m; - - d_fnstart(5, dev, "(i2400ms %p)\n", i2400ms); - - init_waitqueue_head(&i2400ms->bm_wfa_wq); - spin_lock(&i2400m->rx_lock); - i2400ms->bm_wait_result = -EINPROGRESS; - /* - * Before we are about to enable the RX interrupt, make sure - * bm_ack_size is cleared to -EINPROGRESS which indicates - * no RX interrupt happened yet or the previous interrupt - * has been handled, we are ready to take the new interrupt - */ - i2400ms->bm_ack_size = -EINPROGRESS; - spin_unlock(&i2400m->rx_lock); - - sdio_claim_host(func); - result = sdio_claim_irq(func, i2400ms_irq); - if (result < 0) { - dev_err(dev, "Cannot claim IRQ: %d\n", result); - goto error_irq_claim; - } - result = 0; - sdio_writeb(func, 1, I2400MS_INTR_ENABLE_ADDR, &result); - if (result < 0) { - sdio_release_irq(func); - dev_err(dev, "Failed to enable interrupts %d\n", result); - } -error_irq_claim: - sdio_release_host(func); - d_fnend(5, dev, "(i2400ms %p) = %d\n", i2400ms, result); - return result; -} - - -/* - * Tear down SDIO RX - * - * Disables IRQs in the device and removes the IRQ handler. - */ -void i2400ms_rx_release(struct i2400ms *i2400ms) -{ - int result; - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - struct i2400m *i2400m = &i2400ms->i2400m; - - d_fnstart(5, dev, "(i2400ms %p)\n", i2400ms); - spin_lock(&i2400m->rx_lock); - i2400ms->bm_ack_size = -EINTR; - spin_unlock(&i2400m->rx_lock); - wake_up_all(&i2400ms->bm_wfa_wq); - sdio_claim_host(func); - sdio_writeb(func, 0, I2400MS_INTR_ENABLE_ADDR, &result); - sdio_release_irq(func); - sdio_release_host(func); - d_fnend(5, dev, "(i2400ms %p) = %d\n", i2400ms, result); -} diff --git a/drivers/net/wimax/i2400m/sdio-tx.c b/drivers/net/wimax/i2400m/sdio-tx.c deleted file mode 100644 index b53cd1c80e3e..000000000000 --- a/drivers/net/wimax/i2400m/sdio-tx.c +++ /dev/null @@ -1,177 +0,0 @@ -/* - * Intel Wireless WiMAX Connection 2400m - * SDIO TX transaction backends - * - * - * Copyright (C) 2007-2008 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. 
- * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <linux-wimax@intel.com> - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * - Initial implementation - * - * - * Takes the TX messages in the i2400m's driver TX FIFO and sends them - * to the device until there are no more. - * - * If we fail sending the message, we just drop it. There isn't much - * we can do at this point. Most of the traffic is network, which has - * recovery methods for dropped packets. - * - * The SDIO functions are not atomic, so we can't run from the context - * where i2400m->bus_tx_kick() [i2400ms_bus_tx_kick()] is being called - * (some times atomic). Thus, the actual TX work is deferred to a - * workqueue. - * - * ROADMAP - * - * i2400ms_bus_tx_kick() - * i2400ms_tx_submit() [through workqueue] - * - * i2400m_tx_setup() - * - * i2400m_tx_release() - */ -#include <linux/mmc/sdio_func.h> -#include "i2400m-sdio.h" - -#define D_SUBMODULE tx -#include "sdio-debug-levels.h" - - -/* - * Pull TX transations from the TX FIFO and send them to the device - * until there are no more. - */ -static -void i2400ms_tx_submit(struct work_struct *ws) -{ - int result; - struct i2400ms *i2400ms = container_of(ws, struct i2400ms, tx_worker); - struct i2400m *i2400m = &i2400ms->i2400m; - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - struct i2400m_msg_hdr *tx_msg; - size_t tx_msg_size; - - d_fnstart(4, dev, "(i2400ms %p, i2400m %p)\n", i2400ms, i2400ms); - - while (NULL != (tx_msg = i2400m_tx_msg_get(i2400m, &tx_msg_size))) { - d_printf(2, dev, "TX: submitting %zu bytes\n", tx_msg_size); - d_dump(5, dev, tx_msg, tx_msg_size); - - sdio_claim_host(func); - result = sdio_memcpy_toio(func, 0, tx_msg, tx_msg_size); - sdio_release_host(func); - - i2400m_tx_msg_sent(i2400m); - - if (result < 0) { - dev_err(dev, "TX: cannot submit TX; tx_msg @%zu %zu B:" - " %d\n", (void *) tx_msg - i2400m->tx_buf, - tx_msg_size, result); - } - - if (result == -ETIMEDOUT) { - i2400m_error_recovery(i2400m); - break; - } - d_printf(2, dev, "TX: %zub submitted\n", tx_msg_size); - } - - d_fnend(4, dev, "(i2400ms %p) = void\n", i2400ms); -} - - -/* - * The generic driver notifies us that there is data ready for TX - * - * Schedule a run of i2400ms_tx_submit() to handle it. 
- */ -void i2400ms_bus_tx_kick(struct i2400m *i2400m) -{ - struct i2400ms *i2400ms = container_of(i2400m, struct i2400ms, i2400m); - struct device *dev = &i2400ms->func->dev; - unsigned long flags; - - d_fnstart(3, dev, "(i2400m %p) = void\n", i2400m); - - /* schedule tx work, this is because tx may block, therefore - * it has to run in a thread context. - */ - spin_lock_irqsave(&i2400m->tx_lock, flags); - if (i2400ms->tx_workqueue != NULL) - queue_work(i2400ms->tx_workqueue, &i2400ms->tx_worker); - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - d_fnend(3, dev, "(i2400m %p) = void\n", i2400m); -} - -int i2400ms_tx_setup(struct i2400ms *i2400ms) -{ - int result; - struct device *dev = &i2400ms->func->dev; - struct i2400m *i2400m = &i2400ms->i2400m; - struct workqueue_struct *tx_workqueue; - unsigned long flags; - - d_fnstart(5, dev, "(i2400ms %p)\n", i2400ms); - - INIT_WORK(&i2400ms->tx_worker, i2400ms_tx_submit); - snprintf(i2400ms->tx_wq_name, sizeof(i2400ms->tx_wq_name), - "%s-tx", i2400m->wimax_dev.name); - tx_workqueue = - create_singlethread_workqueue(i2400ms->tx_wq_name); - if (tx_workqueue == NULL) { - dev_err(dev, "TX: failed to create workqueue\n"); - result = -ENOMEM; - } else - result = 0; - spin_lock_irqsave(&i2400m->tx_lock, flags); - i2400ms->tx_workqueue = tx_workqueue; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - d_fnend(5, dev, "(i2400ms %p) = %d\n", i2400ms, result); - return result; -} - -void i2400ms_tx_release(struct i2400ms *i2400ms) -{ - struct i2400m *i2400m = &i2400ms->i2400m; - struct workqueue_struct *tx_workqueue; - unsigned long flags; - - tx_workqueue = i2400ms->tx_workqueue; - - spin_lock_irqsave(&i2400m->tx_lock, flags); - i2400ms->tx_workqueue = NULL; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - if (tx_workqueue) - destroy_workqueue(tx_workqueue); -} diff --git a/drivers/net/wimax/i2400m/sdio.c b/drivers/net/wimax/i2400m/sdio.c deleted file mode 100644 index 21a9edd6e75d..000000000000 --- a/drivers/net/wimax/i2400m/sdio.c +++ /dev/null @@ -1,602 +0,0 @@ -/* - * Intel Wireless WiMAX Connection 2400m - * Linux driver model glue for the SDIO device, reset & fw upload - * - * - * Copyright (C) 2007-2008 Intel Corporation <linux-wimax@intel.com> - * Dirk Brandewie <dirk.j.brandewie@intel.com> - * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> - * Yanir Lubetkin <yanirx.lubetkin@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - * - * See i2400m-sdio.h for a general description of this driver. - * - * This file implements driver model glue, and hook ups for the - * generic driver to implement the bus-specific functions (device - * communication setup/tear down, firmware upload and resetting). 
- * - * ROADMAP - * - * i2400m_probe() - * alloc_netdev() - * i2400ms_netdev_setup() - * i2400ms_init() - * i2400m_netdev_setup() - * i2400ms_enable_function() - * i2400m_setup() - * - * i2400m_remove() - * i2400m_release() - * free_netdev(net_dev) - * - * i2400ms_bus_reset() Called by i2400m_reset - * __i2400ms_reset() - * __i2400ms_send_barker() - */ - -#include <linux/slab.h> -#include <linux/debugfs.h> -#include <linux/mmc/sdio_ids.h> -#include <linux/mmc/sdio.h> -#include <linux/mmc/sdio_func.h> -#include "i2400m-sdio.h" -#include <linux/wimax/i2400m.h> -#include <linux/module.h> - -#define D_SUBMODULE main -#include "sdio-debug-levels.h" - -/* IOE WiMAX function timeout in seconds */ -static int ioe_timeout = 2; -module_param(ioe_timeout, int, 0); - -static char i2400ms_debug_params[128]; -module_param_string(debug, i2400ms_debug_params, sizeof(i2400ms_debug_params), - 0644); -MODULE_PARM_DESC(debug, - "String of space-separated NAME:VALUE pairs, where NAMEs " - "are the different debug submodules and VALUE are the " - "initial debug value to set."); - -/* Our firmware file name list */ -static const char *i2400ms_bus_fw_names[] = { -#define I2400MS_FW_FILE_NAME "i2400m-fw-sdio-1.3.sbcf" - I2400MS_FW_FILE_NAME, - NULL -}; - - -static const struct i2400m_poke_table i2400ms_pokes[] = { - I2400M_FW_POKE(0x6BE260, 0x00000088), - I2400M_FW_POKE(0x080550, 0x00000005), - I2400M_FW_POKE(0xAE0000, 0x00000000), - I2400M_FW_POKE(0x000000, 0x00000000), /* MUST be 0 terminated or bad - * things will happen */ -}; - -/* - * Enable the SDIO function - * - * Tries to enable the SDIO function; might fail if it is still not - * ready (in some hardware, the SDIO WiMAX function is only enabled - * when we ask it to explicitly doing). Tries until a timeout is - * reached. - * - * The @maxtries argument indicates how many times (at most) it should - * be tried to enable the function. 0 means forever. This acts along - * with the timeout (ie: it'll stop trying as soon as the maximum - * number of tries is reached _or_ as soon as the timeout is reached). - * - * The reverse of this is...sdio_disable_function() - * - * Returns: 0 if the SDIO function was enabled, < 0 errno code on - * error (-ENODEV when it was unable to enable the function). - */ -static -int i2400ms_enable_function(struct i2400ms *i2400ms, unsigned maxtries) -{ - struct sdio_func *func = i2400ms->func; - u64 timeout; - int err; - struct device *dev = &func->dev; - unsigned tries = 0; - - d_fnstart(3, dev, "(func %p)\n", func); - /* Setup timeout (FIXME: This needs to read the CIS table to - * get a real timeout) and then wait for the device to signal - * it is ready */ - timeout = get_jiffies_64() + ioe_timeout * HZ; - err = -ENODEV; - while (err != 0 && time_before64(get_jiffies_64(), timeout)) { - sdio_claim_host(func); - /* - * There is a sillicon bug on the IWMC3200, where the - * IOE timeout will cause problems on Moorestown - * platforms (system hang). We explicitly overwrite - * func->enable_timeout here to work around the issue. 
- */ - if (i2400ms->iwmc3200) - func->enable_timeout = IWMC3200_IOR_TIMEOUT; - err = sdio_enable_func(func); - if (0 == err) { - sdio_release_host(func); - d_printf(2, dev, "SDIO function enabled\n"); - goto function_enabled; - } - d_printf(2, dev, "SDIO function failed to enable: %d\n", err); - sdio_release_host(func); - if (maxtries > 0 && ++tries >= maxtries) { - err = -ETIME; - break; - } - msleep(I2400MS_INIT_SLEEP_INTERVAL); - } - /* If timed out, device is not there yet -- get -ENODEV so - * the device driver core will retry later on. */ - if (err == -ETIME) { - dev_err(dev, "Can't enable WiMAX function; " - " has the function been enabled?\n"); - err = -ENODEV; - } -function_enabled: - d_fnend(3, dev, "(func %p) = %d\n", func, err); - return err; -} - - -/* - * Setup minimal device communication infrastructure needed to at - * least be able to update the firmware. - * - * Note the ugly trick: if we are in the probe path - * (i2400ms->debugfs_dentry == NULL), we only retry function - * enablement one, to avoid racing with the iwmc3200 top controller. - */ -static -int i2400ms_bus_setup(struct i2400m *i2400m) -{ - int result; - struct i2400ms *i2400ms = - container_of(i2400m, struct i2400ms, i2400m); - struct device *dev = i2400m_dev(i2400m); - struct sdio_func *func = i2400ms->func; - int retries; - - sdio_claim_host(func); - result = sdio_set_block_size(func, I2400MS_BLK_SIZE); - sdio_release_host(func); - if (result < 0) { - dev_err(dev, "Failed to set block size: %d\n", result); - goto error_set_blk_size; - } - - if (i2400ms->iwmc3200 && i2400ms->debugfs_dentry == NULL) - retries = 1; - else - retries = 0; - result = i2400ms_enable_function(i2400ms, retries); - if (result < 0) { - dev_err(dev, "Cannot enable SDIO function: %d\n", result); - goto error_func_enable; - } - - result = i2400ms_tx_setup(i2400ms); - if (result < 0) - goto error_tx_setup; - result = i2400ms_rx_setup(i2400ms); - if (result < 0) - goto error_rx_setup; - return 0; - -error_rx_setup: - i2400ms_tx_release(i2400ms); -error_tx_setup: - sdio_claim_host(func); - sdio_disable_func(func); - sdio_release_host(func); -error_func_enable: -error_set_blk_size: - return result; -} - - -/* - * Tear down minimal device communication infrastructure needed to at - * least be able to update the firmware. - */ -static -void i2400ms_bus_release(struct i2400m *i2400m) -{ - struct i2400ms *i2400ms = - container_of(i2400m, struct i2400ms, i2400m); - struct sdio_func *func = i2400ms->func; - - i2400ms_rx_release(i2400ms); - i2400ms_tx_release(i2400ms); - sdio_claim_host(func); - sdio_disable_func(func); - sdio_release_host(func); -} - - -/* - * Setup driver resources needed to communicate with the device - * - * The fw needs some time to settle, and it was just uploaded, - * so give it a break first. I'd prefer to just wait for the device to - * send something, but seems the poking we do to enable SDIO stuff - * interferes with it, so just give it a break before starting... - */ -static -int i2400ms_bus_dev_start(struct i2400m *i2400m) -{ - struct i2400ms *i2400ms = container_of(i2400m, struct i2400ms, i2400m); - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - - d_fnstart(3, dev, "(i2400m %p)\n", i2400m); - msleep(200); - d_fnend(3, dev, "(i2400m %p) = %d\n", i2400m, 0); - return 0; -} - - -/* - * Sends a barker buffer to the device - * - * This helper will allocate a kmalloced buffer and use it to transmit - * (then free it). 
Reason for this is that the SDIO host controller - * expects alignment (unknown exactly which) which the stack won't - * really provide and certain arches/host-controller combinations - * cannot use stack/vmalloc/text areas for DMA transfers. - */ -static -int __i2400ms_send_barker(struct i2400ms *i2400ms, - const __le32 *barker, size_t barker_size) -{ - int ret; - struct sdio_func *func = i2400ms->func; - struct device *dev = &func->dev; - void *buffer; - - ret = -ENOMEM; - buffer = kmalloc(I2400MS_BLK_SIZE, GFP_KERNEL); - if (buffer == NULL) - goto error_kzalloc; - - memcpy(buffer, barker, barker_size); - sdio_claim_host(func); - ret = sdio_memcpy_toio(func, 0, buffer, I2400MS_BLK_SIZE); - sdio_release_host(func); - - if (ret < 0) - d_printf(0, dev, "E: barker error: %d\n", ret); - - kfree(buffer); -error_kzalloc: - return ret; -} - - -/* - * Reset a device at different levels (warm, cold or bus) - * - * @i2400ms: device descriptor - * @reset_type: soft, warm or bus reset (I2400M_RT_WARM/SOFT/BUS) - * - * FIXME: not tested -- need to confirm expected effects - * - * Warm and cold resets get an SDIO reset if they fail (unimplemented) - * - * Warm reset: - * - * The device will be fully reset internally, but won't be - * disconnected from the bus (so no reenumeration will - * happen). Firmware upload will be necessary. - * - * The device will send a reboot barker that will trigger the driver - * to reinitialize the state via __i2400m_dev_reset_handle. - * - * - * Cold and bus reset: - * - * The device will be fully reset internally, disconnected from the - * bus an a reenumeration will happen. Firmware upload will be - * necessary. Thus, we don't do any locking or struct - * reinitialization, as we are going to be fully disconnected and - * reenumerated. - * - * Note we need to return -ENODEV if a warm reset was requested and we - * had to resort to a bus reset. See i2400m_op_reset(), wimax_reset() - * and wimax_dev->op_reset. - * - * WARNING: no driver state saved/fixed - */ -static -int i2400ms_bus_reset(struct i2400m *i2400m, enum i2400m_reset_type rt) -{ - int result = 0; - struct i2400ms *i2400ms = - container_of(i2400m, struct i2400ms, i2400m); - struct device *dev = i2400m_dev(i2400m); - static const __le32 i2400m_WARM_BOOT_BARKER[4] = { - cpu_to_le32(I2400M_WARM_RESET_BARKER), - cpu_to_le32(I2400M_WARM_RESET_BARKER), - cpu_to_le32(I2400M_WARM_RESET_BARKER), - cpu_to_le32(I2400M_WARM_RESET_BARKER), - }; - static const __le32 i2400m_COLD_BOOT_BARKER[4] = { - cpu_to_le32(I2400M_COLD_RESET_BARKER), - cpu_to_le32(I2400M_COLD_RESET_BARKER), - cpu_to_le32(I2400M_COLD_RESET_BARKER), - cpu_to_le32(I2400M_COLD_RESET_BARKER), - }; - - if (rt == I2400M_RT_WARM) - result = __i2400ms_send_barker(i2400ms, i2400m_WARM_BOOT_BARKER, - sizeof(i2400m_WARM_BOOT_BARKER)); - else if (rt == I2400M_RT_COLD) - result = __i2400ms_send_barker(i2400ms, i2400m_COLD_BOOT_BARKER, - sizeof(i2400m_COLD_BOOT_BARKER)); - else if (rt == I2400M_RT_BUS) { -do_bus_reset: - - i2400ms_bus_release(i2400m); - - /* Wait for the device to settle */ - msleep(40); - - result = i2400ms_bus_setup(i2400m); - } else - BUG(); - if (result < 0 && rt != I2400M_RT_BUS) { - dev_err(dev, "%s reset failed (%d); trying SDIO reset\n", - rt == I2400M_RT_WARM ? 
"warm" : "cold", result); - rt = I2400M_RT_BUS; - goto do_bus_reset; - } - return result; -} - - -static -void i2400ms_netdev_setup(struct net_device *net_dev) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct i2400ms *i2400ms = container_of(i2400m, struct i2400ms, i2400m); - i2400ms_init(i2400ms); - i2400m_netdev_setup(net_dev); -} - - -/* - * Debug levels control; see debug.h - */ -struct d_level D_LEVEL[] = { - D_SUBMODULE_DEFINE(main), - D_SUBMODULE_DEFINE(tx), - D_SUBMODULE_DEFINE(rx), - D_SUBMODULE_DEFINE(fw), -}; -size_t D_LEVEL_SIZE = ARRAY_SIZE(D_LEVEL); - - -#define __debugfs_register(prefix, name, parent) \ -do { \ - result = d_level_register_debugfs(prefix, name, parent); \ - if (result < 0) \ - goto error; \ -} while (0) - - -static -int i2400ms_debugfs_add(struct i2400ms *i2400ms) -{ - int result; - struct dentry *dentry = i2400ms->i2400m.wimax_dev.debugfs_dentry; - - dentry = debugfs_create_dir("i2400m-sdio", dentry); - result = PTR_ERR(dentry); - if (IS_ERR(dentry)) { - if (result == -ENODEV) - result = 0; /* No debugfs support */ - goto error; - } - i2400ms->debugfs_dentry = dentry; - __debugfs_register("dl_", main, dentry); - __debugfs_register("dl_", tx, dentry); - __debugfs_register("dl_", rx, dentry); - __debugfs_register("dl_", fw, dentry); - - return 0; - -error: - debugfs_remove_recursive(i2400ms->debugfs_dentry); - i2400ms->debugfs_dentry = NULL; - return result; -} - - -static struct device_type i2400ms_type = { - .name = "wimax", -}; - -/* - * Probe a i2400m interface and register it - * - * @func: SDIO function - * @id: SDIO device ID - * @returns: 0 if ok, < 0 errno code on error. - * - * Alloc a net device, initialize the bus-specific details and then - * calls the bus-generic initialization routine. That will register - * the wimax and netdev devices, upload the firmware [using - * _bus_bm_*()], call _bus_dev_start() to finalize the setup of the - * communication with the device and then will start to talk to it to - * finnish setting it up. - * - * Initialization is tricky; some instances of the hw are packed with - * others in a way that requires a third driver that enables the WiMAX - * function. In those cases, we can't enable the SDIO function and - * we'll return with -ENODEV. When the driver that enables the WiMAX - * function does its thing, it has to do a bus_rescan_devices() on the - * SDIO bus so this driver is called again to enumerate the WiMAX - * function. - */ -static -int i2400ms_probe(struct sdio_func *func, - const struct sdio_device_id *id) -{ - int result; - struct net_device *net_dev; - struct device *dev = &func->dev; - struct i2400m *i2400m; - struct i2400ms *i2400ms; - - /* Allocate instance [calls i2400m_netdev_setup() on it]. */ - result = -ENOMEM; - net_dev = alloc_netdev(sizeof(*i2400ms), "wmx%d", - i2400ms_netdev_setup); - if (net_dev == NULL) { - dev_err(dev, "no memory for network device instance\n"); - goto error_alloc_netdev; - } - SET_NETDEV_DEV(net_dev, dev); - SET_NETDEV_DEVTYPE(net_dev, &i2400ms_type); - i2400m = net_dev_to_i2400m(net_dev); - i2400ms = container_of(i2400m, struct i2400ms, i2400m); - i2400m->wimax_dev.net_dev = net_dev; - i2400ms->func = func; - sdio_set_drvdata(func, i2400ms); - - i2400m->bus_tx_block_size = I2400MS_BLK_SIZE; - /* - * Room required in the TX queue for SDIO message to accommodate - * a smallest payload while allocating header space is 224 bytes, - * which is the smallest message size(the block size 256 bytes) - * minus the smallest message header size(32 bytes). 
- */ - i2400m->bus_tx_room_min = I2400MS_BLK_SIZE - I2400M_PL_ALIGN * 2; - i2400m->bus_pl_size_max = I2400MS_PL_SIZE_MAX; - i2400m->bus_setup = i2400ms_bus_setup; - i2400m->bus_dev_start = i2400ms_bus_dev_start; - i2400m->bus_dev_stop = NULL; - i2400m->bus_release = i2400ms_bus_release; - i2400m->bus_tx_kick = i2400ms_bus_tx_kick; - i2400m->bus_reset = i2400ms_bus_reset; - /* The iwmc3200-wimax sometimes requires the driver to try - * hard when we paint it into a corner. */ - i2400m->bus_bm_retries = I2400M_SDIO_BOOT_RETRIES; - i2400m->bus_bm_cmd_send = i2400ms_bus_bm_cmd_send; - i2400m->bus_bm_wait_for_ack = i2400ms_bus_bm_wait_for_ack; - i2400m->bus_fw_names = i2400ms_bus_fw_names; - i2400m->bus_bm_mac_addr_impaired = 1; - i2400m->bus_bm_pokes_table = &i2400ms_pokes[0]; - - switch (func->device) { - case SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX: - case SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX_2G5: - i2400ms->iwmc3200 = 1; - break; - default: - i2400ms->iwmc3200 = 0; - } - - result = i2400m_setup(i2400m, I2400M_BRI_NO_REBOOT); - if (result < 0) { - dev_err(dev, "cannot setup device: %d\n", result); - goto error_setup; - } - - result = i2400ms_debugfs_add(i2400ms); - if (result < 0) { - dev_err(dev, "cannot create SDIO debugfs: %d\n", - result); - goto error_debugfs_add; - } - return 0; - -error_debugfs_add: - i2400m_release(i2400m); -error_setup: - sdio_set_drvdata(func, NULL); - free_netdev(net_dev); -error_alloc_netdev: - return result; -} - - -static -void i2400ms_remove(struct sdio_func *func) -{ - struct device *dev = &func->dev; - struct i2400ms *i2400ms = sdio_get_drvdata(func); - struct i2400m *i2400m = &i2400ms->i2400m; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - - d_fnstart(3, dev, "SDIO func %p\n", func); - debugfs_remove_recursive(i2400ms->debugfs_dentry); - i2400ms->debugfs_dentry = NULL; - i2400m_release(i2400m); - sdio_set_drvdata(func, NULL); - free_netdev(net_dev); - d_fnend(3, dev, "SDIO func %p\n", func); -} - -static -const struct sdio_device_id i2400ms_sdio_ids[] = { - /* Intel: i2400m WiMAX (iwmc3200) over SDIO */ - { SDIO_DEVICE(SDIO_VENDOR_ID_INTEL, - SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX) }, - { SDIO_DEVICE(SDIO_VENDOR_ID_INTEL, - SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX_2G5) }, - { /* end: all zeroes */ }, -}; -MODULE_DEVICE_TABLE(sdio, i2400ms_sdio_ids); - - -static -struct sdio_driver i2400m_sdio_driver = { - .name = KBUILD_MODNAME, - .probe = i2400ms_probe, - .remove = i2400ms_remove, - .id_table = i2400ms_sdio_ids, -}; - - -static -int __init i2400ms_driver_init(void) -{ - d_parse_params(D_LEVEL, D_LEVEL_SIZE, i2400ms_debug_params, - "i2400m_sdio.debug"); - return sdio_register_driver(&i2400m_sdio_driver); -} -module_init(i2400ms_driver_init); - - -static -void __exit i2400ms_driver_exit(void) -{ - sdio_unregister_driver(&i2400m_sdio_driver); -} -module_exit(i2400ms_driver_exit); - - -MODULE_AUTHOR("Intel Corporation <linux-wimax@intel.com>"); -MODULE_DESCRIPTION("Intel 2400M WiMAX networking for SDIO"); -MODULE_LICENSE("GPL"); -MODULE_FIRMWARE(I2400MS_FW_FILE_NAME); diff --git a/drivers/net/wimax/i2400m/usb-fw.c b/drivers/net/wimax/i2400m/usb-fw.c index 1fda46c55eb3..e74664b84925 100644 --- a/drivers/net/wimax/i2400m/usb-fw.c +++ b/drivers/net/wimax/i2400m/usb-fw.c @@ -212,7 +212,7 @@ ssize_t i2400mu_bus_bm_cmd_send(struct i2400m *i2400m, } if (result != cmd_size) { /* all was transferred? 
*/ dev_err(dev, "boot-mode cmd %d: incomplete transfer " - "(%zu vs %zu submitted)\n", opcode, result, cmd_size); + "(%zd vs %zu submitted)\n", opcode, result, cmd_size); result = -EIO; goto error_cmd_size; } diff --git a/drivers/net/wireless/Kconfig b/drivers/net/wireless/Kconfig index 5f58fa53238c..6deaae18db57 100644 --- a/drivers/net/wireless/Kconfig +++ b/drivers/net/wireless/Kconfig @@ -276,7 +276,6 @@ source "drivers/net/wireless/hostap/Kconfig" source "drivers/net/wireless/ipw2x00/Kconfig" source "drivers/net/wireless/iwlwifi/Kconfig" source "drivers/net/wireless/iwlegacy/Kconfig" -source "drivers/net/wireless/iwmc3200wifi/Kconfig" source "drivers/net/wireless/libertas/Kconfig" source "drivers/net/wireless/orinoco/Kconfig" source "drivers/net/wireless/p54/Kconfig" diff --git a/drivers/net/wireless/Makefile b/drivers/net/wireless/Makefile index 0ce218b931d4..062dfdff6364 100644 --- a/drivers/net/wireless/Makefile +++ b/drivers/net/wireless/Makefile @@ -53,8 +53,6 @@ obj-$(CONFIG_MAC80211_HWSIM) += mac80211_hwsim.o obj-$(CONFIG_WL_TI) += ti/ -obj-$(CONFIG_IWM) += iwmc3200wifi/ - obj-$(CONFIG_MWIFIEX) += mwifiex/ obj-$(CONFIG_BRCMFMAC) += brcm80211/ diff --git a/drivers/net/wireless/adm8211.c b/drivers/net/wireless/adm8211.c index 0ac09a2bd144..689a71c1af71 100644 --- a/drivers/net/wireless/adm8211.c +++ b/drivers/net/wireless/adm8211.c @@ -1738,8 +1738,7 @@ static int adm8211_alloc_rings(struct ieee80211_hw *dev) return -ENOMEM; } - priv->tx_ring = (struct adm8211_desc *)(priv->rx_ring + - priv->rx_ring_size); + priv->tx_ring = priv->rx_ring + priv->rx_ring_size; priv->tx_ring_dma = priv->rx_ring_dma + sizeof(struct adm8211_desc) * priv->rx_ring_size; @@ -1855,7 +1854,7 @@ static int __devinit adm8211_probe(struct pci_dev *pdev, if (!is_valid_ether_addr(perm_addr)) { printk(KERN_WARNING "%s (adm8211): Invalid hwaddr in EEPROM!\n", pci_name(pdev)); - random_ether_addr(perm_addr); + eth_random_addr(perm_addr); } SET_IEEE80211_PERM_ADDR(dev, perm_addr); diff --git a/drivers/net/wireless/airo.c b/drivers/net/wireless/airo.c index a747c632597a..f9f15bb3f03a 100644 --- a/drivers/net/wireless/airo.c +++ b/drivers/net/wireless/airo.c @@ -1997,7 +1997,7 @@ static int mpi_send_packet (struct net_device *dev) * ------------------------------------------------ */ - memcpy((char *)ai->txfids[0].virtual_host_addr, + memcpy(ai->txfids[0].virtual_host_addr, (char *)&wifictlhdr8023, sizeof(wifictlhdr8023)); payloadLen = (__le16 *)(ai->txfids[0].virtual_host_addr + @@ -4212,7 +4212,7 @@ static int PC4500_writerid(struct airo_info *ai, u16 rid, airo_print_err(ai->dev->name, "%s: len=%d", __func__, len); rc = -1; } else { - memcpy((char *)ai->config_desc.virtual_host_addr, + memcpy(ai->config_desc.virtual_host_addr, pBuf, len); rc = issuecommand(ai, &cmd, &rsp); diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h index 420d69b2674c..6169fbd23ed1 100644 --- a/drivers/net/wireless/ath/ath.h +++ b/drivers/net/wireless/ath/ath.h @@ -216,6 +216,7 @@ void ath_printk(const char *level, const struct ath_common *common, * used exclusively for WLAN-BT coexistence starting from * AR9462. 
* @ATH_DBG_DFS: radar datection + * @ATH_DBG_WOW: Wake on Wireless * @ATH_DBG_ANY: enable all debugging * * The debug level is used to control the amount and type of debugging output @@ -243,6 +244,7 @@ enum ATH_DEBUG { ATH_DBG_BSTUCK = 0x00008000, ATH_DBG_MCI = 0x00010000, ATH_DBG_DFS = 0x00020000, + ATH_DBG_WOW = 0x00040000, ATH_DBG_ANY = 0xffffffff }; diff --git a/drivers/net/wireless/ath/ath5k/Kconfig b/drivers/net/wireless/ath/ath5k/Kconfig index e18a9aa7b6ca..338c5c42357d 100644 --- a/drivers/net/wireless/ath/ath5k/Kconfig +++ b/drivers/net/wireless/ath/ath5k/Kconfig @@ -64,3 +64,11 @@ config ATH5K_PCI ---help--- This adds support for PCI type chipsets of the 5xxx Atheros family. + +config ATH5K_TEST_CHANNELS + bool "Enables testing channels on ath5k" + depends on ATH5K && CFG80211_CERTIFICATION_ONUS + ---help--- + This enables non-standard IEEE 802.11 channels on ath5k, which + can be used for research purposes. This option should be disabled + unless doing research. diff --git a/drivers/net/wireless/ath/ath5k/base.c b/drivers/net/wireless/ath/ath5k/base.c index 44ad6fe0278f..8c4c040a47b8 100644 --- a/drivers/net/wireless/ath/ath5k/base.c +++ b/drivers/net/wireless/ath/ath5k/base.c @@ -74,10 +74,6 @@ bool ath5k_modparam_nohwcrypt; module_param_named(nohwcrypt, ath5k_modparam_nohwcrypt, bool, S_IRUGO); MODULE_PARM_DESC(nohwcrypt, "Disable hardware encryption."); -static bool modparam_all_channels; -module_param_named(all_channels, modparam_all_channels, bool, S_IRUGO); -MODULE_PARM_DESC(all_channels, "Expose all channels the device can use."); - static bool modparam_fastchanswitch; module_param_named(fastchanswitch, modparam_fastchanswitch, bool, S_IRUGO); MODULE_PARM_DESC(fastchanswitch, "Enable fast channel switching for AR2413/AR5413 radios."); @@ -258,8 +254,15 @@ static int ath5k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *re \********************/ /* - * Returns true for the channel numbers used without all_channels modparam. + * Returns true for the channel numbers used. 
*/ +#ifdef CONFIG_ATH5K_TEST_CHANNELS +static bool ath5k_is_standard_channel(short chan, enum ieee80211_band band) +{ + return true; +} + +#else static bool ath5k_is_standard_channel(short chan, enum ieee80211_band band) { if (band == IEEE80211_BAND_2GHZ && chan <= 14) @@ -276,6 +279,7 @@ static bool ath5k_is_standard_channel(short chan, enum ieee80211_band band) /* 802.11j 4.9GHz (20MHz) */ (chan == 184 || chan == 188 || chan == 192 || chan == 196)); } +#endif static unsigned int ath5k_setup_channels(struct ath5k_hw *ah, struct ieee80211_channel *channels, @@ -316,8 +320,7 @@ ath5k_setup_channels(struct ath5k_hw *ah, struct ieee80211_channel *channels, if (!ath5k_channel_ok(ah, &channels[count])) continue; - if (!modparam_all_channels && - !ath5k_is_standard_channel(ch, band)) + if (!ath5k_is_standard_channel(ch, band)) continue; count++; diff --git a/drivers/net/wireless/ath/ath5k/mac80211-ops.c b/drivers/net/wireless/ath/ath5k/mac80211-ops.c index 22b80af0f47c..260e7dc7f751 100644 --- a/drivers/net/wireless/ath/ath5k/mac80211-ops.c +++ b/drivers/net/wireless/ath/ath5k/mac80211-ops.c @@ -594,7 +594,7 @@ ath5k_conf_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u16 queue, qi.tqi_aifs = params->aifs; qi.tqi_cw_min = params->cw_min; qi.tqi_cw_max = params->cw_max; - qi.tqi_burst_time = params->txop; + qi.tqi_burst_time = params->txop * 32; ATH5K_DBG(ah, ATH5K_DEBUG_ANY, "Configure tx [queue %d], " diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c index b869a358ce43..86aeef4b9d7e 100644 --- a/drivers/net/wireless/ath/ath6kl/cfg80211.c +++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c @@ -53,6 +53,11 @@ #define DEFAULT_BG_SCAN_PERIOD 60 +struct ath6kl_cfg80211_match_probe_ssid { + struct cfg80211_ssid ssid; + u8 flag; +}; + static struct ieee80211_rate ath6kl_rates[] = { RATETAB_ENT(10, 0x1, 0), RATETAB_ENT(20, 0x2, 0), @@ -576,6 +581,9 @@ static int ath6kl_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev, vif->nw_type = vif->next_mode; + /* enable enhanced bmiss detection if applicable */ + ath6kl_cfg80211_sta_bmiss_enhance(vif, true); + if (vif->wdev.iftype == NL80211_IFTYPE_P2P_CLIENT) nw_subtype = SUBTYPE_P2PCLIENT; @@ -852,20 +860,6 @@ void ath6kl_cfg80211_disconnect_event(struct ath6kl_vif *vif, u8 reason, } } - /* - * Send a disconnect command to target when a disconnect event is - * received with reason code other than 3 (DISCONNECT_CMD - disconnect - * request from host) to make the firmware stop trying to connect even - * after giving disconnect event. There will be one more disconnect - * event for this disconnect command with reason code DISCONNECT_CMD - * which will be notified to cfg80211. - */ - - if (reason != DISCONNECT_CMD) { - ath6kl_wmi_disconnect_cmd(ar->wmi, vif->fw_vif_idx); - return; - } - clear_bit(CONNECT_PEND, &vif->flags); if (vif->sme_state == SME_CONNECTING) { @@ -875,32 +869,96 @@ void ath6kl_cfg80211_disconnect_event(struct ath6kl_vif *vif, u8 reason, WLAN_STATUS_UNSPECIFIED_FAILURE, GFP_KERNEL); } else if (vif->sme_state == SME_CONNECTED) { - cfg80211_disconnected(vif->ndev, reason, + cfg80211_disconnected(vif->ndev, proto_reason, NULL, 0, GFP_KERNEL); } vif->sme_state = SME_DISCONNECTED; + + /* + * Send a disconnect command to target when a disconnect event is + * received with reason code other than 3 (DISCONNECT_CMD - disconnect + * request from host) to make the firmware stop trying to connect even + * after giving disconnect event. 
There will be one more disconnect + * event for this disconnect command with reason code DISCONNECT_CMD + * which won't be notified to cfg80211. + */ + if (reason != DISCONNECT_CMD) + ath6kl_wmi_disconnect_cmd(ar->wmi, vif->fw_vif_idx); } static int ath6kl_set_probed_ssids(struct ath6kl *ar, struct ath6kl_vif *vif, - struct cfg80211_ssid *ssids, int n_ssids) + struct cfg80211_ssid *ssids, int n_ssids, + struct cfg80211_match_set *match_set, + int n_match_ssid) { - u8 i; + u8 i, j, index_to_add, ssid_found = false; + struct ath6kl_cfg80211_match_probe_ssid ssid_list[MAX_PROBED_SSIDS]; + + memset(ssid_list, 0, sizeof(ssid_list)); - if (n_ssids > MAX_PROBED_SSID_INDEX) + if (n_ssids > MAX_PROBED_SSIDS || + n_match_ssid > MAX_PROBED_SSIDS) return -EINVAL; for (i = 0; i < n_ssids; i++) { + memcpy(ssid_list[i].ssid.ssid, + ssids[i].ssid, + ssids[i].ssid_len); + ssid_list[i].ssid.ssid_len = ssids[i].ssid_len; + + if (ssids[i].ssid_len) + ssid_list[i].flag = SPECIFIC_SSID_FLAG; + else + ssid_list[i].flag = ANY_SSID_FLAG; + + if (n_match_ssid == 0) + ssid_list[i].flag |= MATCH_SSID_FLAG; + } + + index_to_add = i; + + for (i = 0; i < n_match_ssid; i++) { + ssid_found = false; + + for (j = 0; j < n_ssids; j++) { + if ((match_set[i].ssid.ssid_len == + ssid_list[j].ssid.ssid_len) && + (!memcmp(ssid_list[j].ssid.ssid, + match_set[i].ssid.ssid, + match_set[i].ssid.ssid_len))) { + ssid_list[j].flag |= MATCH_SSID_FLAG; + ssid_found = true; + break; + } + } + + if (ssid_found) + continue; + + if (index_to_add >= MAX_PROBED_SSIDS) + continue; + + ssid_list[index_to_add].ssid.ssid_len = + match_set[i].ssid.ssid_len; + memcpy(ssid_list[index_to_add].ssid.ssid, + match_set[i].ssid.ssid, + match_set[i].ssid.ssid_len); + ssid_list[index_to_add].flag |= MATCH_SSID_FLAG; + index_to_add++; + } + + for (i = 0; i < index_to_add; i++) { ath6kl_wmi_probedssid_cmd(ar->wmi, vif->fw_vif_idx, i, - ssids[i].ssid_len ? 
- SPECIFIC_SSID_FLAG : ANY_SSID_FLAG, - ssids[i].ssid_len, - ssids[i].ssid); + ssid_list[i].flag, + ssid_list[i].ssid.ssid_len, + ssid_list[i].ssid.ssid); + } /* Make sure no old entries are left behind */ - for (i = n_ssids; i < MAX_PROBED_SSID_INDEX; i++) { + for (i = index_to_add; i < MAX_PROBED_SSIDS; i++) { ath6kl_wmi_probedssid_cmd(ar->wmi, vif->fw_vif_idx, i, DISABLE_SSID_FLAG, 0, NULL); } @@ -908,11 +966,11 @@ static int ath6kl_set_probed_ssids(struct ath6kl *ar, return 0; } -static int ath6kl_cfg80211_scan(struct wiphy *wiphy, struct net_device *ndev, +static int ath6kl_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) { - struct ath6kl *ar = ath6kl_priv(ndev); - struct ath6kl_vif *vif = netdev_priv(ndev); + struct ath6kl_vif *vif = ath6kl_vif_from_wdev(request->wdev); + struct ath6kl *ar = ath6kl_priv(vif->ndev); s8 n_channels = 0; u16 *channels = NULL; int ret = 0; @@ -934,7 +992,7 @@ static int ath6kl_cfg80211_scan(struct wiphy *wiphy, struct net_device *ndev, } ret = ath6kl_set_probed_ssids(ar, vif, request->ssids, - request->n_ssids); + request->n_ssids, NULL, 0); if (ret < 0) return ret; @@ -943,7 +1001,7 @@ static int ath6kl_cfg80211_scan(struct wiphy *wiphy, struct net_device *ndev, WMI_FRAME_PROBE_REQ, request->ie, request->ie_len); if (ret) { - ath6kl_err("failed to set Probe Request appie for scan"); + ath6kl_err("failed to set Probe Request appie for scan\n"); return ret; } @@ -1429,14 +1487,14 @@ static int ath6kl_cfg80211_set_power_mgmt(struct wiphy *wiphy, return 0; } -static struct net_device *ath6kl_cfg80211_add_iface(struct wiphy *wiphy, - char *name, - enum nl80211_iftype type, - u32 *flags, - struct vif_params *params) +static struct wireless_dev *ath6kl_cfg80211_add_iface(struct wiphy *wiphy, + char *name, + enum nl80211_iftype type, + u32 *flags, + struct vif_params *params) { struct ath6kl *ar = wiphy_priv(wiphy); - struct net_device *ndev; + struct wireless_dev *wdev; u8 if_idx, nw_type; if (ar->num_vif == ar->vif_max) { @@ -1449,20 +1507,20 @@ static struct net_device *ath6kl_cfg80211_add_iface(struct wiphy *wiphy, return ERR_PTR(-EINVAL); } - ndev = ath6kl_interface_add(ar, name, type, if_idx, nw_type); - if (!ndev) + wdev = ath6kl_interface_add(ar, name, type, if_idx, nw_type); + if (!wdev) return ERR_PTR(-ENOMEM); ar->num_vif++; - return ndev; + return wdev; } static int ath6kl_cfg80211_del_iface(struct wiphy *wiphy, - struct net_device *ndev) + struct wireless_dev *wdev) { struct ath6kl *ar = wiphy_priv(wiphy); - struct ath6kl_vif *vif = netdev_priv(ndev); + struct ath6kl_vif *vif = netdev_priv(wdev->netdev); spin_lock_bh(&ar->list_lock); list_del(&vif->list); @@ -1512,6 +1570,9 @@ static int ath6kl_cfg80211_change_iface(struct wiphy *wiphy, } } + /* need to clean up enhanced bmiss detection fw state */ + ath6kl_cfg80211_sta_bmiss_enhance(vif, false); + set_iface_type: switch (type) { case NL80211_IFTYPE_STATION: @@ -2074,7 +2135,9 @@ static int ath6kl_wow_suspend(struct ath6kl *ar, struct cfg80211_wowlan *wow) if (wow && (wow->n_patterns > WOW_MAX_FILTERS_PER_LIST)) return -EINVAL; - if (!test_bit(NETDEV_MCAST_ALL_ON, &vif->flags)) { + if (!test_bit(NETDEV_MCAST_ALL_ON, &vif->flags) && + test_bit(ATH6KL_FW_CAPABILITY_WOW_MULTICAST_FILTER, + ar->fw_capabilities)) { ret = ath6kl_wmi_mcast_filter_cmd(vif->ar->wmi, vif->fw_vif_idx, false); if (ret) @@ -2209,7 +2272,9 @@ static int ath6kl_wow_resume(struct ath6kl *ar) ar->state = ATH6KL_STATE_ON; - if (!test_bit(NETDEV_MCAST_ALL_OFF, &vif->flags)) { + if 
(!test_bit(NETDEV_MCAST_ALL_OFF, &vif->flags) && + test_bit(ATH6KL_FW_CAPABILITY_WOW_MULTICAST_FILTER, + ar->fw_capabilities)) { ret = ath6kl_wmi_mcast_filter_cmd(vif->ar->wmi, vif->fw_vif_idx, true); if (ret) @@ -2475,7 +2540,7 @@ void ath6kl_check_wow_status(struct ath6kl *ar) static int ath6kl_set_htcap(struct ath6kl_vif *vif, enum ieee80211_band band, bool ht_enable) { - struct ath6kl_htcap *htcap = &vif->htcap; + struct ath6kl_htcap *htcap = &vif->htcap[band]; if (htcap->ht_enable == ht_enable) return 0; @@ -2585,33 +2650,28 @@ static int ath6kl_set_ies(struct ath6kl_vif *vif, return 0; } -static int ath6kl_set_channel(struct wiphy *wiphy, struct net_device *dev, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type) +void ath6kl_cfg80211_sta_bmiss_enhance(struct ath6kl_vif *vif, bool enable) { - struct ath6kl_vif *vif; + int err; - /* - * 'dev' could be NULL if a channel change is required for the hardware - * device itself, instead of a particular VIF. - * - * FIXME: To be handled properly when monitor mode is supported. - */ - if (!dev) - return -EBUSY; + if (WARN_ON(!test_bit(WMI_READY, &vif->ar->flag))) + return; - vif = netdev_priv(dev); + if (vif->nw_type != INFRA_NETWORK) + return; - if (!ath6kl_cfg80211_ready(vif)) - return -EIO; + if (!test_bit(ATH6KL_FW_CAPABILITY_BMISS_ENHANCE, + vif->ar->fw_capabilities)) + return; - ath6kl_dbg(ATH6KL_DBG_WLAN_CFG, "%s: center_freq=%u hw_value=%u\n", - __func__, chan->center_freq, chan->hw_value); - vif->next_chan = chan->center_freq; - vif->next_ch_type = channel_type; - vif->next_ch_band = chan->band; + ath6kl_dbg(ATH6KL_DBG_WLAN_CFG, "%s fw bmiss enhance\n", + enable ? "enable" : "disable"); - return 0; + err = ath6kl_wmi_sta_bmiss_enhance_cmd(vif->ar->wmi, + vif->fw_vif_idx, enable); + if (err) + ath6kl_err("failed to %s enhanced bmiss detection: %d\n", + enable ? 
"enable" : "disable", err); } static int ath6kl_get_rsn_capab(struct cfg80211_beacon_data *beacon, @@ -2694,9 +2754,15 @@ static int ath6kl_start_ap(struct wiphy *wiphy, struct net_device *dev, /* TODO: * info->interval - * info->dtim_period */ + ret = ath6kl_wmi_ap_set_dtim_cmd(ar->wmi, vif->fw_vif_idx, + info->dtim_period); + + /* ignore error, just print a warning and continue normally */ + if (ret) + ath6kl_warn("Failed to set dtim_period in beacon: %d\n", ret); + if (info->beacon.head == NULL) return -EINVAL; mgmt = (struct ieee80211_mgmt *) info->beacon.head; @@ -2791,7 +2857,7 @@ static int ath6kl_start_ap(struct wiphy *wiphy, struct net_device *dev, p.ssid_len = vif->ssid_len; memcpy(p.ssid, vif->ssid, vif->ssid_len); p.dot11_auth_mode = vif->dot11_auth_mode; - p.ch = cpu_to_le16(vif->next_chan); + p.ch = cpu_to_le16(info->channel->center_freq); /* Enable uAPSD support by default */ res = ath6kl_wmi_ap_set_apsd(ar->wmi, vif->fw_vif_idx, true); @@ -2815,8 +2881,8 @@ static int ath6kl_start_ap(struct wiphy *wiphy, struct net_device *dev, return res; } - if (ath6kl_set_htcap(vif, vif->next_ch_band, - vif->next_ch_type != NL80211_CHAN_NO_HT)) + if (ath6kl_set_htcap(vif, info->channel->band, + info->channel_type != NL80211_CHAN_NO_HT)) return -EIO; /* @@ -2909,14 +2975,14 @@ static int ath6kl_change_station(struct wiphy *wiphy, struct net_device *dev, } static int ath6kl_remain_on_channel(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, u64 *cookie) { - struct ath6kl *ar = ath6kl_priv(dev); - struct ath6kl_vif *vif = netdev_priv(dev); + struct ath6kl_vif *vif = ath6kl_vif_from_wdev(wdev); + struct ath6kl *ar = ath6kl_priv(vif->ndev); u32 id; /* TODO: if already pending or ongoing remain-on-channel, @@ -2933,11 +2999,11 @@ static int ath6kl_remain_on_channel(struct wiphy *wiphy, } static int ath6kl_cancel_remain_on_channel(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u64 cookie) { - struct ath6kl *ar = ath6kl_priv(dev); - struct ath6kl_vif *vif = netdev_priv(dev); + struct ath6kl_vif *vif = ath6kl_vif_from_wdev(wdev); + struct ath6kl *ar = ath6kl_priv(vif->ndev); if (cookie != vif->last_roc_id) return -ENOENT; @@ -3068,15 +3134,15 @@ static bool ath6kl_is_p2p_go_ssid(const u8 *buf, size_t len) return false; } -static int ath6kl_mgmt_tx(struct wiphy *wiphy, struct net_device *dev, +static int ath6kl_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev, struct ieee80211_channel *chan, bool offchan, enum nl80211_channel_type channel_type, bool channel_type_valid, unsigned int wait, const u8 *buf, size_t len, bool no_cck, bool dont_wait_for_ack, u64 *cookie) { - struct ath6kl *ar = ath6kl_priv(dev); - struct ath6kl_vif *vif = netdev_priv(dev); + struct ath6kl_vif *vif = ath6kl_vif_from_wdev(wdev); + struct ath6kl *ar = ath6kl_priv(vif->ndev); u32 id; const struct ieee80211_mgmt *mgmt; bool more_data, queued; @@ -3121,10 +3187,10 @@ static int ath6kl_mgmt_tx(struct wiphy *wiphy, struct net_device *dev, } static void ath6kl_mgmt_frame_register(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u16 frame_type, bool reg) { - struct ath6kl_vif *vif = netdev_priv(dev); + struct ath6kl_vif *vif = ath6kl_vif_from_wdev(wdev); ath6kl_dbg(ATH6KL_DBG_WLAN_CFG, "%s: frame_type=0x%x reg=%d\n", __func__, frame_type, reg); @@ -3160,10 +3226,24 @@ static int ath6kl_cfg80211_sscan_start(struct wiphy *wiphy, 
ath6kl_cfg80211_scan_complete_event(vif, true); ret = ath6kl_set_probed_ssids(ar, vif, request->ssids, - request->n_ssids); + request->n_ssids, + request->match_sets, + request->n_match_sets); if (ret < 0) return ret; + if (!request->n_match_sets) { + ret = ath6kl_wmi_bssfilter_cmd(ar->wmi, vif->fw_vif_idx, + ALL_BSS_FILTER, 0); + if (ret < 0) + return ret; + } else { + ret = ath6kl_wmi_bssfilter_cmd(ar->wmi, vif->fw_vif_idx, + MATCHED_SSID_FILTER, 0); + if (ret < 0) + return ret; + } + /* fw uses seconds, also make sure that it's >0 */ interval = max_t(u16, 1, request->interval / 1000); @@ -3185,7 +3265,7 @@ static int ath6kl_cfg80211_sscan_start(struct wiphy *wiphy, WMI_FRAME_PROBE_REQ, request->ie, request->ie_len); if (ret) { - ath6kl_warn("Failed to set probe request IE for scheduled scan: %d", + ath6kl_warn("Failed to set probe request IE for scheduled scan: %d\n", ret); return ret; } @@ -3217,6 +3297,18 @@ static int ath6kl_cfg80211_sscan_stop(struct wiphy *wiphy, return 0; } +static int ath6kl_cfg80211_set_bitrate(struct wiphy *wiphy, + struct net_device *dev, + const u8 *addr, + const struct cfg80211_bitrate_mask *mask) +{ + struct ath6kl *ar = ath6kl_priv(dev); + struct ath6kl_vif *vif = netdev_priv(dev); + + return ath6kl_wmi_set_bitrate_mask(ar->wmi, vif->fw_vif_idx, + mask); +} + static const struct ieee80211_txrx_stypes ath6kl_mgmt_stypes[NUM_NL80211_IFTYPES] = { [NL80211_IFTYPE_STATION] = { @@ -3271,7 +3363,6 @@ static struct cfg80211_ops ath6kl_cfg80211_ops = { .suspend = __ath6kl_cfg80211_suspend, .resume = __ath6kl_cfg80211_resume, #endif - .set_channel = ath6kl_set_channel, .start_ap = ath6kl_start_ap, .change_beacon = ath6kl_change_beacon, .stop_ap = ath6kl_stop_ap, @@ -3283,6 +3374,7 @@ static struct cfg80211_ops ath6kl_cfg80211_ops = { .mgmt_frame_register = ath6kl_mgmt_frame_register, .sched_scan_start = ath6kl_cfg80211_sscan_start, .sched_scan_stop = ath6kl_cfg80211_sscan_stop, + .set_bitrate_mask = ath6kl_cfg80211_set_bitrate, }; void ath6kl_cfg80211_stop(struct ath6kl_vif *vif) @@ -3385,9 +3477,9 @@ void ath6kl_cfg80211_vif_cleanup(struct ath6kl_vif *vif) ar->num_vif--; } -struct net_device *ath6kl_interface_add(struct ath6kl *ar, char *name, - enum nl80211_iftype type, u8 fw_vif_idx, - u8 nw_type) +struct wireless_dev *ath6kl_interface_add(struct ath6kl *ar, char *name, + enum nl80211_iftype type, + u8 fw_vif_idx, u8 nw_type) { struct net_device *ndev; struct ath6kl_vif *vif; @@ -3410,7 +3502,8 @@ struct net_device *ath6kl_interface_add(struct ath6kl *ar, char *name, vif->listen_intvl_t = ATH6KL_DEFAULT_LISTEN_INTVAL; vif->bmiss_time_t = ATH6KL_DEFAULT_BMISS_TIME; vif->bg_scan_period = 0; - vif->htcap.ht_enable = true; + vif->htcap[IEEE80211_BAND_2GHZ].ht_enable = true; + vif->htcap[IEEE80211_BAND_5GHZ].ht_enable = true; memcpy(ndev->dev_addr, ar->mac_addr, ETH_ALEN); if (fw_vif_idx != 0) @@ -3440,7 +3533,7 @@ struct net_device *ath6kl_interface_add(struct ath6kl *ar, char *name, list_add_tail(&vif->list, &ar->vif_list); spin_unlock_bh(&ar->list_lock); - return ndev; + return &vif->wdev; err: aggr_module_destroy(vif->aggr_cntxt); @@ -3470,7 +3563,13 @@ int ath6kl_cfg80211_init(struct ath6kl *ar) } /* max num of ssids that can be probed during scanning */ - wiphy->max_scan_ssids = MAX_PROBED_SSID_INDEX; + wiphy->max_scan_ssids = MAX_PROBED_SSIDS; + + /* max num of ssids that can be matched after scan */ + if (test_bit(ATH6KL_FW_CAPABILITY_SCHED_SCAN_MATCH_LIST, + ar->fw_capabilities)) + wiphy->max_match_sets = MAX_PROBED_SSIDS; + wiphy->max_scan_ie_len = 1000; /* 
FIX: what is correct limit? */ switch (ar->hw.cap) { case WMI_11AN_CAP: @@ -3507,6 +3606,17 @@ int ath6kl_cfg80211_init(struct ath6kl *ar) ath6kl_band_5ghz.ht_cap.cap = 0; ath6kl_band_5ghz.ht_cap.ht_supported = false; } + + if (ar->hw.flags & ATH6KL_HW_FLAG_64BIT_RATES) { + ath6kl_band_2ghz.ht_cap.mcs.rx_mask[0] = 0xff; + ath6kl_band_5ghz.ht_cap.mcs.rx_mask[0] = 0xff; + ath6kl_band_2ghz.ht_cap.mcs.rx_mask[1] = 0xff; + ath6kl_band_5ghz.ht_cap.mcs.rx_mask[1] = 0xff; + } else { + ath6kl_band_2ghz.ht_cap.mcs.rx_mask[0] = 0xff; + ath6kl_band_5ghz.ht_cap.mcs.rx_mask[0] = 0xff; + } + if (band_2gig) wiphy->bands[IEEE80211_BAND_2GHZ] = &ath6kl_band_2ghz; if (band_5gig) @@ -3517,6 +3627,7 @@ int ath6kl_cfg80211_init(struct ath6kl *ar) wiphy->cipher_suites = cipher_suites; wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites); +#ifdef CONFIG_PM wiphy->wowlan.flags = WIPHY_WOWLAN_MAGIC_PKT | WIPHY_WOWLAN_DISCONNECT | WIPHY_WOWLAN_GTK_REKEY_FAILURE | @@ -3526,8 +3637,9 @@ int ath6kl_cfg80211_init(struct ath6kl *ar) wiphy->wowlan.n_patterns = WOW_MAX_FILTERS_PER_LIST; wiphy->wowlan.pattern_min_len = 1; wiphy->wowlan.pattern_max_len = WOW_PATTERN_SIZE; +#endif - wiphy->max_sched_scan_ssids = MAX_PROBED_SSID_INDEX; + wiphy->max_sched_scan_ssids = MAX_PROBED_SSIDS; ar->wiphy->flags |= WIPHY_FLAG_SUPPORTS_FW_ROAM | WIPHY_FLAG_HAVE_AP_SME | diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.h b/drivers/net/wireless/ath/ath6kl/cfg80211.h index 5ea8cbb79f43..56b1ebe79812 100644 --- a/drivers/net/wireless/ath/ath6kl/cfg80211.h +++ b/drivers/net/wireless/ath/ath6kl/cfg80211.h @@ -25,9 +25,9 @@ enum ath6kl_cfg_suspend_mode { ATH6KL_CFG_SUSPEND_SCHED_SCAN, }; -struct net_device *ath6kl_interface_add(struct ath6kl *ar, char *name, - enum nl80211_iftype type, - u8 fw_vif_idx, u8 nw_type); +struct wireless_dev *ath6kl_interface_add(struct ath6kl *ar, char *name, + enum nl80211_iftype type, + u8 fw_vif_idx, u8 nw_type); void ath6kl_cfg80211_ch_switch_notify(struct ath6kl_vif *vif, int freq, enum wmi_phy_mode mode); void ath6kl_cfg80211_scan_complete_event(struct ath6kl_vif *vif, bool aborted); @@ -62,5 +62,7 @@ void ath6kl_cfg80211_cleanup(struct ath6kl *ar); struct ath6kl *ath6kl_cfg80211_create(void); void ath6kl_cfg80211_destroy(struct ath6kl *ar); +/* TODO: remove this once ath6kl_vif_cleanup() is moved to cfg80211.c */ +void ath6kl_cfg80211_sta_bmiss_enhance(struct ath6kl_vif *vif, bool enable); #endif /* ATH6KL_CFG80211_H */ diff --git a/drivers/net/wireless/ath/ath6kl/core.c b/drivers/net/wireless/ath/ath6kl/core.c index fdb3b1decc76..82c4dd2a960e 100644 --- a/drivers/net/wireless/ath/ath6kl/core.c +++ b/drivers/net/wireless/ath/ath6kl/core.c @@ -56,7 +56,7 @@ EXPORT_SYMBOL(ath6kl_core_rx_complete); int ath6kl_core_init(struct ath6kl *ar, enum ath6kl_htc_type htc_type) { struct ath6kl_bmi_target_info targ_info; - struct net_device *ndev; + struct wireless_dev *wdev; int ret = 0, i; switch (htc_type) { @@ -187,12 +187,12 @@ int ath6kl_core_init(struct ath6kl *ar, enum ath6kl_htc_type htc_type) rtnl_lock(); /* Add an initial station interface */ - ndev = ath6kl_interface_add(ar, "wlan%d", NL80211_IFTYPE_STATION, 0, + wdev = ath6kl_interface_add(ar, "wlan%d", NL80211_IFTYPE_STATION, 0, INFRA_NETWORK); rtnl_unlock(); - if (!ndev) { + if (!wdev) { ath6kl_err("Failed to instantiate a network device\n"); ret = -ENOMEM; wiphy_unregister(ar->wiphy); @@ -200,7 +200,7 @@ int ath6kl_core_init(struct ath6kl *ar, enum ath6kl_htc_type htc_type) } ath6kl_dbg(ATH6KL_DBG_TRC, "%s: name=%s dev=0x%p, ar=0x%p\n", - __func__, 
ndev->name, ndev, ar);
+		   __func__, wdev->netdev->name, wdev->netdev, ar);
 
 	return ret;
 
diff --git a/drivers/net/wireless/ath/ath6kl/core.h b/drivers/net/wireless/ath/ath6kl/core.h
index 4d9c6f142698..cec49a31029a 100644
--- a/drivers/net/wireless/ath/ath6kl/core.h
+++ b/drivers/net/wireless/ath/ath6kl/core.h
@@ -100,6 +100,21 @@ enum ath6kl_fw_capability {
 	/* Firmware has support to override rsn cap of rsn ie */
 	ATH6KL_FW_CAPABILITY_RSN_CAP_OVERRIDE,
 
+	/*
+	 * Multicast support in WOW and host awake mode.
+	 * Allow all multicast in host awake mode.
+	 * Apply multicast filter in WOW mode.
+	 */
+	ATH6KL_FW_CAPABILITY_WOW_MULTICAST_FILTER,
+
+	/* Firmware supports enhanced bmiss detection */
+	ATH6KL_FW_CAPABILITY_BMISS_ENHANCE,
+
+	/*
+	 * FW supports matching of ssid in schedule scan
+	 */
+	ATH6KL_FW_CAPABILITY_SCHED_SCAN_MATCH_LIST,
+
 	/* this needs to be last */
 	ATH6KL_FW_CAPABILITY_MAX,
 };
@@ -112,6 +127,10 @@ struct ath6kl_fw_ie {
 	u8 data[0];
 };
 
+enum ath6kl_hw_flags {
+	ATH6KL_HW_FLAG_64BIT_RATES = BIT(0),
+};
+
 #define ATH6KL_FW_API2_FILE "fw-2.bin"
 #define ATH6KL_FW_API3_FILE "fw-3.bin"
@@ -196,7 +215,7 @@ struct ath6kl_fw_ie {
 
 #define AGGR_NUM_OF_FREE_NETBUFS   16
 
-#define AGGR_RX_TIMEOUT   400	/* in ms */
+#define AGGR_RX_TIMEOUT   100	/* in ms */
 
 #define WMI_TIMEOUT (2 * HZ)
 
@@ -245,7 +264,6 @@ struct skb_hold_q {
 
 struct rxtid {
 	bool aggr;
-	bool progress;
 	bool timer_mon;
 	u16 win_sz;
 	u16 seq_next;
@@ -254,9 +272,15 @@ struct rxtid {
 	struct sk_buff_head q;
 
 	/*
-	 * FIXME: No clue what this should protect. Apparently it should
-	 * protect some of the fields above but they are also accessed
-	 * without taking the lock.
+	 * lock mainly protects seq_next and hold_q. Movement of seq_next
+	 * needs to be protected between aggr_timeout() and
+	 * aggr_process_recv_frm(). hold_q will be holding the pending
+	 * reorder frames and it's access should also be protected.
+	 * Some of the other fields like hold_q_sz, win_sz and aggr are
+	 * initialized/reset when receiving addba/delba req, also while
+	 * deleting aggr state all the pending buffers are flushed before
+	 * resetting these fields, so there should not be any race in accessing
+	 * these fields.
 	 */
 	spinlock_t lock;
 };
@@ -541,7 +565,7 @@ struct ath6kl_vif {
 	struct ath6kl_wep_key wep_key_list[WMI_MAX_KEY_INDEX + 1];
 	struct ath6kl_key keys[WMI_MAX_KEY_INDEX + 1];
 	struct aggr_info *aggr_cntxt;
-	struct ath6kl_htcap htcap;
+	struct ath6kl_htcap htcap[IEEE80211_NUM_BANDS];
 
 	struct timer_list disconnect_timer;
 	struct timer_list sched_scan_timer;
@@ -553,9 +577,6 @@ struct ath6kl_vif {
 	u32 last_cancel_roc_id;
 	u32 send_action_id;
 	bool probe_req_report;
-	u16 next_chan;
-	enum nl80211_channel_type next_ch_type;
-	enum ieee80211_band next_ch_band;
 	u16 assoc_bss_beacon_int;
 	u16 listen_intvl_t;
 	u16 bmiss_time_t;
@@ -568,6 +589,11 @@ struct ath6kl_vif {
 	struct list_head mc_filter;
 };
 
+static inline struct ath6kl_vif *ath6kl_vif_from_wdev(struct wireless_dev *wdev)
+{
+	return container_of(wdev, struct ath6kl_vif, wdev);
+}
+
 #define WOW_LIST_ID 0
 #define WOW_HOST_REQ_DELAY 500 /* ms */
 
@@ -687,6 +713,8 @@ struct ath6kl {
 	u32 testscript_addr;
 	enum wmi_phy_cap cap;
 
+	u32 flags;
+
 	struct ath6kl_hw_fw {
 		const char *dir;
 		const char *otp;
diff --git a/drivers/net/wireless/ath/ath6kl/htc_mbox.c b/drivers/net/wireless/ath/ath6kl/htc_mbox.c
index 2798624d3a9d..cd0e1ba410d6 100644
--- a/drivers/net/wireless/ath/ath6kl/htc_mbox.c
+++ b/drivers/net/wireless/ath/ath6kl/htc_mbox.c
@@ -1309,7 +1309,7 @@ static int ath6kl_htc_rx_packet(struct htc_target *target,
 	}
 
 	ath6kl_dbg(ATH6KL_DBG_HTC,
-		   "htc rx 0x%p hdr x%x len %d mbox 0x%x\n",
+		   "htc rx 0x%p hdr 0x%x len %d mbox 0x%x\n",
 		   packet, packet->info.rx.exp_hdr,
 		   padded_len, dev->ar->mbox_info.htc_addr);
diff --git a/drivers/net/wireless/ath/ath6kl/init.c b/drivers/net/wireless/ath/ath6kl/init.c
index 7eb0515f458a..f90b5db741cf 100644
--- a/drivers/net/wireless/ath/ath6kl/init.c
+++ b/drivers/net/wireless/ath/ath6kl/init.c
@@ -42,6 +42,7 @@ static const struct ath6kl_hw hw_list[] = {
 		.reserved_ram_size	= 6912,
 		.refclk_hz		= 26000000,
 		.uarttx_pin		= 8,
+		.flags			= 0,
 
 		/* hw2.0 needs override address hardcoded */
 		.app_start_override_addr = 0x944C00,
@@ -67,6 +68,7 @@ static const struct ath6kl_hw hw_list[] = {
 		.refclk_hz		= 26000000,
 		.uarttx_pin		= 8,
 		.testscript_addr	= 0x57ef74,
+		.flags			= 0,
 
 		.fw = {
 			.dir		= AR6003_HW_2_1_1_FW_DIR,
@@ -91,6 +93,7 @@ static const struct ath6kl_hw hw_list[] = {
 		.board_addr		= 0x433900,
 		.refclk_hz		= 26000000,
 		.uarttx_pin		= 11,
+		.flags			= ATH6KL_HW_FLAG_64BIT_RATES,
 
 		.fw = {
 			.dir		= AR6004_HW_1_0_FW_DIR,
@@ -110,6 +113,7 @@ static const struct ath6kl_hw hw_list[] = {
 		.board_addr		= 0x43d400,
 		.refclk_hz		= 40000000,
 		.uarttx_pin		= 11,
+		.flags			= ATH6KL_HW_FLAG_64BIT_RATES,
 
 		.fw = {
 			.dir		= AR6004_HW_1_1_FW_DIR,
@@ -129,6 +133,7 @@ static const struct ath6kl_hw hw_list[] = {
 		.board_addr		= 0x435c00,
 		.refclk_hz		= 40000000,
 		.uarttx_pin		= 11,
+		.flags			= ATH6KL_HW_FLAG_64BIT_RATES,
 
 		.fw = {
 			.dir		= AR6004_HW_1_2_FW_DIR,
@@ -938,6 +943,14 @@ static int ath6kl_fetch_fw_apin(struct ath6kl *ar, const char *name)
 		}
 
 		switch (ie_id) {
+		case ATH6KL_FW_IE_FW_VERSION:
+			strlcpy(ar->wiphy->fw_version, data,
+				sizeof(ar->wiphy->fw_version));
+
+			ath6kl_dbg(ATH6KL_DBG_BOOT,
+				   "found fw version %s\n",
+				   ar->wiphy->fw_version);
+			break;
 		case ATH6KL_FW_IE_OTP_IMAGE:
 			ath6kl_dbg(ATH6KL_DBG_BOOT, "found otp image ie (%zd B)\n",
 				   ie_len);
@@ -991,9 +1004,6 @@ static int ath6kl_fetch_fw_apin(struct ath6kl *ar, const char *name)
 				   ar->hw.reserved_ram_size);
 			break;
 		case ATH6KL_FW_IE_CAPABILITIES:
-			if (ie_len < DIV_ROUND_UP(ATH6KL_FW_CAPABILITY_MAX, 8))
-				break;
-
 			ath6kl_dbg(ATH6KL_DBG_BOOT,
 				   "found firmware capabilities ie (%zd B)\n",
 				   ie_len);
@@ -1002,6 +1012,9 @@ static int
ath6kl_fetch_fw_apin(struct ath6kl *ar, const char *name) index = i / 8; bit = i % 8; + if (index == ie_len) + break; + if (data[index] & (1 << bit)) __set_bit(i, ar->fw_capabilities); } @@ -1392,6 +1405,12 @@ static int ath6kl_init_upload(struct ath6kl *ar) ar->version.target_ver == AR6003_HW_2_1_1_VERSION) { ath6kl_err("temporary war to avoid sdio crc error\n"); + param = 0x28; + address = GPIO_BASE_ADDRESS + GPIO_PIN9_ADDRESS; + status = ath6kl_bmi_reg_write(ar, address, param); + if (status) + return status; + param = 0x20; address = GPIO_BASE_ADDRESS + GPIO_PIN10_ADDRESS; @@ -1659,6 +1678,9 @@ void ath6kl_cleanup_vif(struct ath6kl_vif *vif, bool wmi_ready) cfg80211_scan_done(vif->scan_req, true); vif->scan_req = NULL; } + + /* need to clean up enhanced bmiss detection fw state */ + ath6kl_cfg80211_sta_bmiss_enhance(vif, false); } void ath6kl_stop_txrx(struct ath6kl *ar) diff --git a/drivers/net/wireless/ath/ath6kl/main.c b/drivers/net/wireless/ath/ath6kl/main.c index e5524470529c..c189e28e86a9 100644 --- a/drivers/net/wireless/ath/ath6kl/main.c +++ b/drivers/net/wireless/ath/ath6kl/main.c @@ -554,20 +554,24 @@ void ath6kl_ready_event(void *devt, u8 *datap, u32 sw_ver, u32 abi_ver, struct ath6kl *ar = devt; memcpy(ar->mac_addr, datap, ETH_ALEN); - ath6kl_dbg(ATH6KL_DBG_TRC, "%s: mac addr = %pM\n", - __func__, ar->mac_addr); + + ath6kl_dbg(ATH6KL_DBG_BOOT, + "ready event mac addr %pM sw_ver 0x%x abi_ver 0x%x cap 0x%x\n", + ar->mac_addr, sw_ver, abi_ver, cap); ar->version.wlan_ver = sw_ver; ar->version.abi_ver = abi_ver; ar->hw.cap = cap; - snprintf(ar->wiphy->fw_version, - sizeof(ar->wiphy->fw_version), - "%u.%u.%u.%u", - (ar->version.wlan_ver & 0xf0000000) >> 28, - (ar->version.wlan_ver & 0x0f000000) >> 24, - (ar->version.wlan_ver & 0x00ff0000) >> 16, - (ar->version.wlan_ver & 0x0000ffff)); + if (strlen(ar->wiphy->fw_version) == 0) { + snprintf(ar->wiphy->fw_version, + sizeof(ar->wiphy->fw_version), + "%u.%u.%u.%u", + (ar->version.wlan_ver & 0xf0000000) >> 28, + (ar->version.wlan_ver & 0x0f000000) >> 24, + (ar->version.wlan_ver & 0x00ff0000) >> 16, + (ar->version.wlan_ver & 0x0000ffff)); + } /* indicate to the waiting thread that the ready event was received */ set_bit(WMI_READY, &ar->flag); @@ -598,7 +602,6 @@ static int ath6kl_commit_ch_switch(struct ath6kl_vif *vif, u16 channel) struct ath6kl *ar = vif->ar; - vif->next_chan = channel; vif->profile.ch = cpu_to_le16(channel); switch (vif->nw_type) { @@ -1167,7 +1170,10 @@ static void ath6kl_set_multicast_list(struct net_device *ndev) else clear_bit(NETDEV_MCAST_ALL_ON, &vif->flags); - mc_all_on = mc_all_on || (vif->ar->state == ATH6KL_STATE_ON); + if (test_bit(ATH6KL_FW_CAPABILITY_WOW_MULTICAST_FILTER, + vif->ar->fw_capabilities)) { + mc_all_on = mc_all_on || (vif->ar->state == ATH6KL_STATE_ON); + } if (!(ndev->flags & IFF_MULTICAST)) { mc_all_on = false; diff --git a/drivers/net/wireless/ath/ath6kl/target.h b/drivers/net/wireless/ath/ath6kl/target.h index 78e0ef4567a5..a98c12ba70c1 100644 --- a/drivers/net/wireless/ath/ath6kl/target.h +++ b/drivers/net/wireless/ath/ath6kl/target.h @@ -45,6 +45,7 @@ #define LPO_CAL_ENABLE_S 20 #define LPO_CAL_ENABLE 0x00100000 +#define GPIO_PIN9_ADDRESS 0x0000004c #define GPIO_PIN10_ADDRESS 0x00000050 #define GPIO_PIN11_ADDRESS 0x00000054 #define GPIO_PIN12_ADDRESS 0x00000058 diff --git a/drivers/net/wireless/ath/ath6kl/txrx.c b/drivers/net/wireless/ath/ath6kl/txrx.c index 67206aedea6c..7dfa0fd86d7b 100644 --- a/drivers/net/wireless/ath/ath6kl/txrx.c +++ b/drivers/net/wireless/ath/ath6kl/txrx.c @@ 
-1036,6 +1036,7 @@ static void aggr_deque_frms(struct aggr_info_conn *agg_conn, u8 tid, rxtid = &agg_conn->rx_tid[tid]; stats = &agg_conn->stat[tid]; + spin_lock_bh(&rxtid->lock); idx = AGGR_WIN_IDX(rxtid->seq_next, rxtid->hold_q_sz); /* @@ -1054,8 +1055,6 @@ static void aggr_deque_frms(struct aggr_info_conn *agg_conn, u8 tid, seq_end = seq_no ? seq_no : rxtid->seq_next; idx_end = AGGR_WIN_IDX(seq_end, rxtid->hold_q_sz); - spin_lock_bh(&rxtid->lock); - do { node = &rxtid->hold_q[idx]; if ((order == 1) && (!node->skb)) @@ -1127,11 +1126,13 @@ static bool aggr_process_recv_frm(struct aggr_info_conn *agg_conn, u8 tid, ((end > extended_end) && (cur > extended_end) && (cur < end))) { aggr_deque_frms(agg_conn, tid, 0, 0); + spin_lock_bh(&rxtid->lock); if (cur >= rxtid->hold_q_sz - 1) rxtid->seq_next = cur - (rxtid->hold_q_sz - 1); else rxtid->seq_next = ATH6KL_MAX_SEQ_NO - (rxtid->hold_q_sz - 2 - cur); + spin_unlock_bh(&rxtid->lock); } else { /* * Dequeue only those frames that are outside the @@ -1185,25 +1186,25 @@ static bool aggr_process_recv_frm(struct aggr_info_conn *agg_conn, u8 tid, aggr_deque_frms(agg_conn, tid, 0, 1); if (agg_conn->timer_scheduled) - rxtid->progress = true; - else - for (idx = 0 ; idx < rxtid->hold_q_sz; idx++) { - if (rxtid->hold_q[idx].skb) { - /* - * There is a frame in the queue and no - * timer so start a timer to ensure that - * the frame doesn't remain stuck - * forever. - */ - agg_conn->timer_scheduled = true; - mod_timer(&agg_conn->timer, - (jiffies + - HZ * (AGGR_RX_TIMEOUT) / 1000)); - rxtid->progress = false; - rxtid->timer_mon = true; - break; - } + return is_queued; + + spin_lock_bh(&rxtid->lock); + for (idx = 0 ; idx < rxtid->hold_q_sz; idx++) { + if (rxtid->hold_q[idx].skb) { + /* + * There is a frame in the queue and no + * timer so start a timer to ensure that + * the frame doesn't remain stuck + * forever. 
+ */ + agg_conn->timer_scheduled = true; + mod_timer(&agg_conn->timer, + (jiffies + (HZ * AGGR_RX_TIMEOUT) / 1000)); + rxtid->timer_mon = true; + break; } + } + spin_unlock_bh(&rxtid->lock); return is_queued; } @@ -1608,7 +1609,7 @@ static void aggr_timeout(unsigned long arg) rxtid = &aggr_conn->rx_tid[i]; stats = &aggr_conn->stat[i]; - if (!rxtid->aggr || !rxtid->timer_mon || rxtid->progress) + if (!rxtid->aggr || !rxtid->timer_mon) continue; stats->num_timeouts++; @@ -1626,14 +1627,15 @@ static void aggr_timeout(unsigned long arg) rxtid = &aggr_conn->rx_tid[i]; if (rxtid->aggr && rxtid->hold_q) { + spin_lock_bh(&rxtid->lock); for (j = 0; j < rxtid->hold_q_sz; j++) { if (rxtid->hold_q[j].skb) { aggr_conn->timer_scheduled = true; rxtid->timer_mon = true; - rxtid->progress = false; break; } } + spin_unlock_bh(&rxtid->lock); if (j >= rxtid->hold_q_sz) rxtid->timer_mon = false; @@ -1660,7 +1662,6 @@ static void aggr_delete_tid_state(struct aggr_info_conn *aggr_conn, u8 tid) aggr_deque_frms(aggr_conn, tid, 0, 0); rxtid->aggr = false; - rxtid->progress = false; rxtid->timer_mon = false; rxtid->win_sz = 0; rxtid->seq_next = 0; @@ -1739,7 +1740,6 @@ void aggr_conn_init(struct ath6kl_vif *vif, struct aggr_info *aggr_info, for (i = 0; i < NUM_OF_TIDS; i++) { rxtid = &aggr_conn->rx_tid[i]; rxtid->aggr = false; - rxtid->progress = false; rxtid->timer_mon = false; skb_queue_head_init(&rxtid->q); spin_lock_init(&rxtid->lock); diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c index ee8ec2394c2c..c30ab4b11d61 100644 --- a/drivers/net/wireless/ath/ath6kl/wmi.c +++ b/drivers/net/wireless/ath/ath6kl/wmi.c @@ -474,7 +474,7 @@ static int ath6kl_wmi_remain_on_chnl_event_rx(struct wmi *wmi, u8 *datap, return -EINVAL; } id = vif->last_roc_id; - cfg80211_ready_on_channel(vif->ndev, id, chan, NL80211_CHAN_NO_HT, + cfg80211_ready_on_channel(&vif->wdev, id, chan, NL80211_CHAN_NO_HT, dur, GFP_ATOMIC); return 0; @@ -513,7 +513,7 @@ static int ath6kl_wmi_cancel_remain_on_chnl_event_rx(struct wmi *wmi, else id = vif->last_roc_id; /* timeout on uncanceled r-o-c */ vif->last_cancel_roc_id = 0; - cfg80211_remain_on_channel_expired(vif->ndev, id, chan, + cfg80211_remain_on_channel_expired(&vif->wdev, id, chan, NL80211_CHAN_NO_HT, GFP_ATOMIC); return 0; @@ -533,7 +533,7 @@ static int ath6kl_wmi_tx_status_event_rx(struct wmi *wmi, u8 *datap, int len, ath6kl_dbg(ATH6KL_DBG_WMI, "tx_status: id=%x ack_status=%u\n", id, ev->ack_status); if (wmi->last_mgmt_tx_frame) { - cfg80211_mgmt_tx_status(vif->ndev, id, + cfg80211_mgmt_tx_status(&vif->wdev, id, wmi->last_mgmt_tx_frame, wmi->last_mgmt_tx_frame_len, !!ev->ack_status, GFP_ATOMIC); @@ -568,7 +568,7 @@ static int ath6kl_wmi_rx_probe_req_event_rx(struct wmi *wmi, u8 *datap, int len, dlen, freq, vif->probe_req_report); if (vif->probe_req_report || vif->nw_type == AP_NETWORK) - cfg80211_rx_mgmt(vif->ndev, freq, 0, + cfg80211_rx_mgmt(&vif->wdev, freq, 0, ev->data, dlen, GFP_ATOMIC); return 0; @@ -608,7 +608,7 @@ static int ath6kl_wmi_rx_action_event_rx(struct wmi *wmi, u8 *datap, int len, return -EINVAL; } ath6kl_dbg(ATH6KL_DBG_WMI, "rx_action: len=%u freq=%u\n", dlen, freq); - cfg80211_rx_mgmt(vif->ndev, freq, 0, + cfg80211_rx_mgmt(&vif->wdev, freq, 0, ev->data, dlen, GFP_ATOMIC); return 0; @@ -743,7 +743,6 @@ int ath6kl_wmi_force_roam_cmd(struct wmi *wmi, const u8 *bssid) return -ENOMEM; cmd = (struct roam_ctrl_cmd *) skb->data; - memset(cmd, 0, sizeof(*cmd)); memcpy(cmd->info.bssid, bssid, ETH_ALEN); cmd->roam_ctrl = WMI_FORCE_ROAM; @@ 
-753,6 +752,22 @@ int ath6kl_wmi_force_roam_cmd(struct wmi *wmi, const u8 *bssid) NO_SYNC_WMIFLAG); } +int ath6kl_wmi_ap_set_dtim_cmd(struct wmi *wmi, u8 if_idx, u32 dtim_period) +{ + struct sk_buff *skb; + struct set_dtim_cmd *cmd; + + skb = ath6kl_wmi_get_new_buf(sizeof(*cmd)); + if (!skb) + return -ENOMEM; + + cmd = (struct set_dtim_cmd *) skb->data; + + cmd->dtim_period = cpu_to_le32(dtim_period); + return ath6kl_wmi_cmd_send(wmi, if_idx, skb, + WMI_AP_SET_DTIM_CMDID, NO_SYNC_WMIFLAG); +} + int ath6kl_wmi_set_roam_mode_cmd(struct wmi *wmi, enum wmi_roam_mode mode) { struct sk_buff *skb; @@ -763,7 +778,6 @@ int ath6kl_wmi_set_roam_mode_cmd(struct wmi *wmi, enum wmi_roam_mode mode) return -ENOMEM; cmd = (struct roam_ctrl_cmd *) skb->data; - memset(cmd, 0, sizeof(*cmd)); cmd->info.roam_mode = mode; cmd->roam_ctrl = WMI_SET_ROAM_MODE; @@ -1995,7 +2009,7 @@ int ath6kl_wmi_probedssid_cmd(struct wmi *wmi, u8 if_idx, u8 index, u8 flag, struct wmi_probed_ssid_cmd *cmd; int ret; - if (index > MAX_PROBED_SSID_INDEX) + if (index >= MAX_PROBED_SSIDS) return -EINVAL; if (ssid_len > sizeof(cmd->ssid)) @@ -2599,6 +2613,115 @@ static void ath6kl_wmi_relinquish_implicit_pstream_credits(struct wmi *wmi) spin_unlock_bh(&wmi->lock); } +static int ath6kl_set_bitrate_mask64(struct wmi *wmi, u8 if_idx, + const struct cfg80211_bitrate_mask *mask) +{ + struct sk_buff *skb; + int ret, mode, band; + u64 mcsrate, ratemask[IEEE80211_NUM_BANDS]; + struct wmi_set_tx_select_rates64_cmd *cmd; + + memset(&ratemask, 0, sizeof(ratemask)); + for (band = 0; band < IEEE80211_NUM_BANDS; band++) { + /* copy legacy rate mask */ + ratemask[band] = mask->control[band].legacy; + if (band == IEEE80211_BAND_5GHZ) + ratemask[band] = + mask->control[band].legacy << 4; + + /* copy mcs rate mask */ + mcsrate = mask->control[band].mcs[1]; + mcsrate <<= 8; + mcsrate |= mask->control[band].mcs[0]; + ratemask[band] |= mcsrate << 12; + ratemask[band] |= mcsrate << 28; + } + + ath6kl_dbg(ATH6KL_DBG_WMI, + "Ratemask 64 bit: 2.4:%llx 5:%llx\n", + ratemask[0], ratemask[1]); + + skb = ath6kl_wmi_get_new_buf(sizeof(*cmd) * WMI_RATES_MODE_MAX); + if (!skb) + return -ENOMEM; + + cmd = (struct wmi_set_tx_select_rates64_cmd *) skb->data; + for (mode = 0; mode < WMI_RATES_MODE_MAX; mode++) { + /* A mode operate in 5GHZ band */ + if (mode == WMI_RATES_MODE_11A || + mode == WMI_RATES_MODE_11A_HT20 || + mode == WMI_RATES_MODE_11A_HT40) + band = IEEE80211_BAND_5GHZ; + else + band = IEEE80211_BAND_2GHZ; + cmd->ratemask[mode] = cpu_to_le64(ratemask[band]); + } + + ret = ath6kl_wmi_cmd_send(wmi, if_idx, skb, + WMI_SET_TX_SELECT_RATES_CMDID, + NO_SYNC_WMIFLAG); + return ret; +} + +static int ath6kl_set_bitrate_mask32(struct wmi *wmi, u8 if_idx, + const struct cfg80211_bitrate_mask *mask) +{ + struct sk_buff *skb; + int ret, mode, band; + u32 mcsrate, ratemask[IEEE80211_NUM_BANDS]; + struct wmi_set_tx_select_rates32_cmd *cmd; + + memset(&ratemask, 0, sizeof(ratemask)); + for (band = 0; band < IEEE80211_NUM_BANDS; band++) { + /* copy legacy rate mask */ + ratemask[band] = mask->control[band].legacy; + if (band == IEEE80211_BAND_5GHZ) + ratemask[band] = + mask->control[band].legacy << 4; + + /* copy mcs rate mask */ + mcsrate = mask->control[band].mcs[0]; + ratemask[band] |= mcsrate << 12; + ratemask[band] |= mcsrate << 20; + } + + ath6kl_dbg(ATH6KL_DBG_WMI, + "Ratemask 32 bit: 2.4:%x 5:%x\n", + ratemask[0], ratemask[1]); + + skb = ath6kl_wmi_get_new_buf(sizeof(*cmd) * WMI_RATES_MODE_MAX); + if (!skb) + return -ENOMEM; + + cmd = (struct 
wmi_set_tx_select_rates32_cmd *) skb->data; + for (mode = 0; mode < WMI_RATES_MODE_MAX; mode++) { + /* A mode operate in 5GHZ band */ + if (mode == WMI_RATES_MODE_11A || + mode == WMI_RATES_MODE_11A_HT20 || + mode == WMI_RATES_MODE_11A_HT40) + band = IEEE80211_BAND_5GHZ; + else + band = IEEE80211_BAND_2GHZ; + cmd->ratemask[mode] = cpu_to_le32(ratemask[band]); + } + + ret = ath6kl_wmi_cmd_send(wmi, if_idx, skb, + WMI_SET_TX_SELECT_RATES_CMDID, + NO_SYNC_WMIFLAG); + return ret; +} + +int ath6kl_wmi_set_bitrate_mask(struct wmi *wmi, u8 if_idx, + const struct cfg80211_bitrate_mask *mask) +{ + struct ath6kl *ar = wmi->parent_dev; + + if (ar->hw.flags & ATH6KL_HW_FLAG_64BIT_RATES) + return ath6kl_set_bitrate_mask64(wmi, if_idx, mask); + else + return ath6kl_set_bitrate_mask32(wmi, if_idx, mask); +} + int ath6kl_wmi_set_host_sleep_mode_cmd(struct wmi *wmi, u8 if_idx, enum ath6kl_host_mode host_mode) { @@ -2997,6 +3120,25 @@ int ath6kl_wmi_add_del_mcast_filter_cmd(struct wmi *wmi, u8 if_idx, return ret; } +int ath6kl_wmi_sta_bmiss_enhance_cmd(struct wmi *wmi, u8 if_idx, bool enhance) +{ + struct sk_buff *skb; + struct wmi_sta_bmiss_enhance_cmd *cmd; + int ret; + + skb = ath6kl_wmi_get_new_buf(sizeof(*cmd)); + if (!skb) + return -ENOMEM; + + cmd = (struct wmi_sta_bmiss_enhance_cmd *) skb->data; + cmd->enable = enhance ? 1 : 0; + + ret = ath6kl_wmi_cmd_send(wmi, if_idx, skb, + WMI_STA_BMISS_ENHANCE_CMDID, + NO_SYNC_WMIFLAG); + return ret; +} + s32 ath6kl_wmi_get_rate(s8 rate_index) { if (rate_index == RATE_AUTO) diff --git a/drivers/net/wireless/ath/ath6kl/wmi.h b/drivers/net/wireless/ath/ath6kl/wmi.h index 9076bec3a2ba..43339aca585d 100644 --- a/drivers/net/wireless/ath/ath6kl/wmi.h +++ b/drivers/net/wireless/ath/ath6kl/wmi.h @@ -624,6 +624,10 @@ enum wmi_cmd_id { WMI_SEND_MGMT_CMDID, WMI_BEGIN_SCAN_CMDID, + WMI_SET_BLACK_LIST, + WMI_SET_MCASTRATE, + + WMI_STA_BMISS_ENHANCE_CMDID, }; enum wmi_mgmt_frame_type { @@ -960,6 +964,9 @@ enum wmi_bss_filter { /* beacons matching probed ssid */ PROBED_SSID_FILTER, + /* beacons matching matched ssid */ + MATCHED_SSID_FILTER, + /* marker only */ LAST_BSS_FILTER, }; @@ -978,7 +985,7 @@ struct wmi_bss_filter_cmd { } __packed; /* WMI_SET_PROBED_SSID_CMDID */ -#define MAX_PROBED_SSID_INDEX 9 +#define MAX_PROBED_SSIDS 16 enum wmi_ssid_flag { /* disables entry */ @@ -989,10 +996,13 @@ enum wmi_ssid_flag { /* probes for any ssid */ ANY_SSID_FLAG = 0x02, + + /* match for ssid */ + MATCH_SSID_FLAG = 0x08, }; struct wmi_probed_ssid_cmd { - /* 0 to MAX_PROBED_SSID_INDEX */ + /* 0 to MAX_PROBED_SSIDS - 1 */ u8 entry_index; /* see, enum wmi_ssid_flg */ @@ -1017,6 +1027,11 @@ struct wmi_bmiss_time_cmd { __le16 num_beacons; }; +/* WMI_STA_ENHANCE_BMISS_CMDID */ +struct wmi_sta_bmiss_enhance_cmd { + u8 enable; +} __packed; + /* WMI_SET_POWER_MODE_CMDID */ enum wmi_power_mode { REC_POWER = 0x01, @@ -1048,6 +1063,36 @@ struct wmi_power_params_cmd { __le16 ps_fail_event_policy; } __packed; +/* + * Ratemask for below modes should be passed + * to WMI_SET_TX_SELECT_RATES_CMDID. + * AR6003 has 32 bit mask for each modes. 
+ * First 12 bits for legacy rates, 13 to 20 + * bits for HT 20 rates and 21 to 28 bits for + * HT 40 rates + */ +enum wmi_mode_phy { + WMI_RATES_MODE_11A = 0, + WMI_RATES_MODE_11G, + WMI_RATES_MODE_11B, + WMI_RATES_MODE_11GONLY, + WMI_RATES_MODE_11A_HT20, + WMI_RATES_MODE_11G_HT20, + WMI_RATES_MODE_11A_HT40, + WMI_RATES_MODE_11G_HT40, + WMI_RATES_MODE_MAX +}; + +/* WMI_SET_TX_SELECT_RATES_CMDID */ +struct wmi_set_tx_select_rates32_cmd { + __le32 ratemask[WMI_RATES_MODE_MAX]; +} __packed; + +/* WMI_SET_TX_SELECT_RATES_CMDID */ +struct wmi_set_tx_select_rates64_cmd { + __le64 ratemask[WMI_RATES_MODE_MAX]; +} __packed; + /* WMI_SET_DISC_TIMEOUT_CMDID */ struct wmi_disc_timeout_cmd { /* seconds */ @@ -1572,6 +1617,10 @@ struct roam_ctrl_cmd { u8 roam_ctrl; } __packed; +struct set_dtim_cmd { + __le32 dtim_period; +} __packed; + /* BSS INFO HDR version 2.0 */ struct wmi_bss_info_hdr2 { __le16 ch; /* frequency in MHz */ @@ -2532,6 +2581,8 @@ int ath6kl_wmi_set_ip_cmd(struct wmi *wmi, u8 if_idx, __be32 ips0, __be32 ips1); int ath6kl_wmi_set_host_sleep_mode_cmd(struct wmi *wmi, u8 if_idx, enum ath6kl_host_mode host_mode); +int ath6kl_wmi_set_bitrate_mask(struct wmi *wmi, u8 if_idx, + const struct cfg80211_bitrate_mask *mask); int ath6kl_wmi_set_wow_mode_cmd(struct wmi *wmi, u8 if_idx, enum ath6kl_wow_mode wow_mode, u32 filter, u16 host_req_delay); @@ -2542,11 +2593,14 @@ int ath6kl_wmi_add_wow_pattern_cmd(struct wmi *wmi, u8 if_idx, int ath6kl_wmi_del_wow_pattern_cmd(struct wmi *wmi, u8 if_idx, u16 list_id, u16 filter_id); int ath6kl_wmi_set_roam_lrssi_cmd(struct wmi *wmi, u8 lrssi); +int ath6kl_wmi_ap_set_dtim_cmd(struct wmi *wmi, u8 if_idx, u32 dtim_period); int ath6kl_wmi_force_roam_cmd(struct wmi *wmi, const u8 *bssid); int ath6kl_wmi_set_roam_mode_cmd(struct wmi *wmi, enum wmi_roam_mode mode); int ath6kl_wmi_mcast_filter_cmd(struct wmi *wmi, u8 if_idx, bool mc_all_on); int ath6kl_wmi_add_del_mcast_filter_cmd(struct wmi *wmi, u8 if_idx, u8 *filter, bool add_filter); +int ath6kl_wmi_sta_bmiss_enhance_cmd(struct wmi *wmi, u8 if_idx, bool enable); + /* AP mode uAPSD */ int ath6kl_wmi_ap_set_apsd(struct wmi *wmi, u8 if_idx, u8 enable); diff --git a/drivers/net/wireless/ath/ath9k/Kconfig b/drivers/net/wireless/ath/ath9k/Kconfig index e507e78398f3..c7aa6646123e 100644 --- a/drivers/net/wireless/ath/ath9k/Kconfig +++ b/drivers/net/wireless/ath/ath9k/Kconfig @@ -64,7 +64,7 @@ config ATH9K_DEBUGFS config ATH9K_DFS_CERTIFIED bool "Atheros DFS support for certified platforms" - depends on ATH9K && EXPERT + depends on ATH9K && CFG80211_CERTIFICATION_ONUS default n ---help--- This option enables DFS support for initiating radiation on diff --git a/drivers/net/wireless/ath/ath9k/Makefile b/drivers/net/wireless/ath/ath9k/Makefile index 3f0b84723789..2ad8f9474ba1 100644 --- a/drivers/net/wireless/ath/ath9k/Makefile +++ b/drivers/net/wireless/ath/ath9k/Makefile @@ -3,7 +3,9 @@ ath9k-y += beacon.o \ init.o \ main.o \ recv.o \ - xmit.o + xmit.o \ + link.o \ + antenna.o ath9k-$(CONFIG_ATH9K_BTCOEX_SUPPORT) += mci.o ath9k-$(CONFIG_ATH9K_RATE_CONTROL) += rc.o @@ -15,6 +17,7 @@ ath9k-$(CONFIG_ATH9K_DFS_CERTIFIED) += \ dfs.o \ dfs_pattern_detector.o \ dfs_pri_detector.o +ath9k-$(CONFIG_PM_SLEEP) += wow.o obj-$(CONFIG_ATH9K) += ath9k.o diff --git a/drivers/net/wireless/ath/ath9k/ahb.c b/drivers/net/wireless/ath/ath9k/ahb.c index 5e47ca6d16a8..3a69804f4c16 100644 --- a/drivers/net/wireless/ath/ath9k/ahb.c +++ b/drivers/net/wireless/ath/ath9k/ahb.c @@ -35,6 +35,10 @@ static const struct platform_device_id 
ath9k_platform_id_table[] = { .name = "ar934x_wmac", .driver_data = AR9300_DEVID_AR9340, }, + { + .name = "qca955x_wmac", + .driver_data = AR9300_DEVID_QCA955X, + }, {}, }; @@ -126,7 +130,7 @@ static int ath_ahb_probe(struct platform_device *pdev) sc->irq = irq; /* Will be cleared in ath9k_start() */ - sc->sc_flags |= SC_OP_INVALID; + set_bit(SC_OP_INVALID, &sc->sc_flags); ret = request_irq(irq, ath_isr, IRQF_SHARED, "ath9k", sc); if (ret) { diff --git a/drivers/net/wireless/ath/ath9k/ani.c b/drivers/net/wireless/ath/ath9k/ani.c index b4c77f9d7470..ff007f500feb 100644 --- a/drivers/net/wireless/ath/ath9k/ani.c +++ b/drivers/net/wireless/ath/ath9k/ani.c @@ -104,11 +104,6 @@ static const struct ani_cck_level_entry cck_level_table[] = { #define ATH9K_ANI_CCK_DEF_LEVEL \ 2 /* default level - matches the INI settings */ -static bool use_new_ani(struct ath_hw *ah) -{ - return AR_SREV_9300_20_OR_LATER(ah) || modparam_force_new_ani; -} - static void ath9k_hw_update_mibstats(struct ath_hw *ah, struct ath9k_mib_stats *stats) { @@ -122,8 +117,6 @@ static void ath9k_hw_update_mibstats(struct ath_hw *ah, static void ath9k_ani_restart(struct ath_hw *ah) { struct ar5416AniState *aniState; - struct ath_common *common = ath9k_hw_common(ah); - u32 ofdm_base = 0, cck_base = 0; if (!DO_ANI(ah)) return; @@ -131,18 +124,10 @@ static void ath9k_ani_restart(struct ath_hw *ah) aniState = &ah->curchan->ani; aniState->listenTime = 0; - if (!use_new_ani(ah)) { - ofdm_base = AR_PHY_COUNTMAX - ah->config.ofdm_trig_high; - cck_base = AR_PHY_COUNTMAX - ah->config.cck_trig_high; - } - - ath_dbg(common, ANI, "Writing ofdmbase=%u cckbase=%u\n", - ofdm_base, cck_base); - ENABLE_REGWRITE_BUFFER(ah); - REG_WRITE(ah, AR_PHY_ERR_1, ofdm_base); - REG_WRITE(ah, AR_PHY_ERR_2, cck_base); + REG_WRITE(ah, AR_PHY_ERR_1, 0); + REG_WRITE(ah, AR_PHY_ERR_2, 0); REG_WRITE(ah, AR_PHY_ERR_MASK_1, AR_PHY_ERR_OFDM_TIMING); REG_WRITE(ah, AR_PHY_ERR_MASK_2, AR_PHY_ERR_CCK_TIMING); @@ -154,129 +139,23 @@ static void ath9k_ani_restart(struct ath_hw *ah) aniState->cckPhyErrCount = 0; } -static void ath9k_hw_ani_ofdm_err_trigger_old(struct ath_hw *ah) -{ - struct ieee80211_conf *conf = &ath9k_hw_common(ah)->hw->conf; - struct ar5416AniState *aniState; - int32_t rssi; - - aniState = &ah->curchan->ani; - - if (aniState->noiseImmunityLevel < HAL_NOISE_IMMUNE_MAX) { - if (ath9k_hw_ani_control(ah, ATH9K_ANI_NOISE_IMMUNITY_LEVEL, - aniState->noiseImmunityLevel + 1)) { - return; - } - } - - if (aniState->spurImmunityLevel < HAL_SPUR_IMMUNE_MAX) { - if (ath9k_hw_ani_control(ah, ATH9K_ANI_SPUR_IMMUNITY_LEVEL, - aniState->spurImmunityLevel + 1)) { - return; - } - } - - if (ah->opmode == NL80211_IFTYPE_AP) { - if (aniState->firstepLevel < HAL_FIRST_STEP_MAX) { - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel + 1); - } - return; - } - rssi = BEACON_RSSI(ah); - if (rssi > aniState->rssiThrHigh) { - if (!aniState->ofdmWeakSigDetectOff) { - if (ath9k_hw_ani_control(ah, - ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - false)) { - ath9k_hw_ani_control(ah, - ATH9K_ANI_SPUR_IMMUNITY_LEVEL, 0); - return; - } - } - if (aniState->firstepLevel < HAL_FIRST_STEP_MAX) { - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel + 1); - return; - } - } else if (rssi > aniState->rssiThrLow) { - if (aniState->ofdmWeakSigDetectOff) - ath9k_hw_ani_control(ah, - ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - true); - if (aniState->firstepLevel < HAL_FIRST_STEP_MAX) - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel + 1); - 
return; - } else { - if ((conf->channel->band == IEEE80211_BAND_2GHZ) && - !conf_is_ht(conf)) { - if (!aniState->ofdmWeakSigDetectOff) - ath9k_hw_ani_control(ah, - ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - false); - if (aniState->firstepLevel > 0) - ath9k_hw_ani_control(ah, - ATH9K_ANI_FIRSTEP_LEVEL, 0); - return; - } - } -} - -static void ath9k_hw_ani_cck_err_trigger_old(struct ath_hw *ah) -{ - struct ieee80211_conf *conf = &ath9k_hw_common(ah)->hw->conf; - struct ar5416AniState *aniState; - int32_t rssi; - - aniState = &ah->curchan->ani; - if (aniState->noiseImmunityLevel < HAL_NOISE_IMMUNE_MAX) { - if (ath9k_hw_ani_control(ah, ATH9K_ANI_NOISE_IMMUNITY_LEVEL, - aniState->noiseImmunityLevel + 1)) { - return; - } - } - if (ah->opmode == NL80211_IFTYPE_AP) { - if (aniState->firstepLevel < HAL_FIRST_STEP_MAX) { - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel + 1); - } - return; - } - rssi = BEACON_RSSI(ah); - if (rssi > aniState->rssiThrLow) { - if (aniState->firstepLevel < HAL_FIRST_STEP_MAX) - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel + 1); - } else { - if ((conf->channel->band == IEEE80211_BAND_2GHZ) && - !conf_is_ht(conf)) { - if (aniState->firstepLevel > 0) - ath9k_hw_ani_control(ah, - ATH9K_ANI_FIRSTEP_LEVEL, 0); - } - } -} - /* Adjust the OFDM Noise Immunity Level */ -static void ath9k_hw_set_ofdm_nil(struct ath_hw *ah, u8 immunityLevel) +static void ath9k_hw_set_ofdm_nil(struct ath_hw *ah, u8 immunityLevel, + bool scan) { struct ar5416AniState *aniState = &ah->curchan->ani; struct ath_common *common = ath9k_hw_common(ah); const struct ani_ofdm_level_entry *entry_ofdm; const struct ani_cck_level_entry *entry_cck; - - aniState->noiseFloor = BEACON_RSSI(ah); + bool weak_sig; ath_dbg(common, ANI, "**** ofdmlevel %d=>%d, rssi=%d[lo=%d hi=%d]\n", aniState->ofdmNoiseImmunityLevel, - immunityLevel, aniState->noiseFloor, + immunityLevel, BEACON_RSSI(ah), aniState->rssiThrLow, aniState->rssiThrHigh); - if (aniState->update_ani) - aniState->ofdmNoiseImmunityLevel = - (immunityLevel > ATH9K_ANI_OFDM_DEF_LEVEL) ? 
- immunityLevel : ATH9K_ANI_OFDM_DEF_LEVEL; + if (!scan) + aniState->ofdmNoiseImmunityLevel = immunityLevel; entry_ofdm = &ofdm_level_table[aniState->ofdmNoiseImmunityLevel]; entry_cck = &cck_level_table[aniState->cckNoiseImmunityLevel]; @@ -292,12 +171,22 @@ static void ath9k_hw_set_ofdm_nil(struct ath_hw *ah, u8 immunityLevel) ATH9K_ANI_FIRSTEP_LEVEL, entry_ofdm->fir_step_level); - if ((aniState->noiseFloor >= aniState->rssiThrHigh) && - (!aniState->ofdmWeakSigDetectOff != - entry_ofdm->ofdm_weak_signal_on)) { + weak_sig = entry_ofdm->ofdm_weak_signal_on; + if (ah->opmode == NL80211_IFTYPE_STATION && + BEACON_RSSI(ah) <= aniState->rssiThrHigh) + weak_sig = true; + + if (aniState->ofdmWeakSigDetect != weak_sig) ath9k_hw_ani_control(ah, ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, entry_ofdm->ofdm_weak_signal_on); + + if (aniState->ofdmNoiseImmunityLevel >= ATH9K_ANI_OFDM_DEF_LEVEL) { + ah->config.ofdm_trig_high = ATH9K_ANI_OFDM_TRIG_HIGH; + ah->config.ofdm_trig_low = ATH9K_ANI_OFDM_TRIG_LOW_ABOVE_INI; + } else { + ah->config.ofdm_trig_high = ATH9K_ANI_OFDM_TRIG_HIGH_BELOW_INI; + ah->config.ofdm_trig_low = ATH9K_ANI_OFDM_TRIG_LOW; } } @@ -308,43 +197,35 @@ static void ath9k_hw_ani_ofdm_err_trigger(struct ath_hw *ah) if (!DO_ANI(ah)) return; - if (!use_new_ani(ah)) { - ath9k_hw_ani_ofdm_err_trigger_old(ah); - return; - } - aniState = &ah->curchan->ani; if (aniState->ofdmNoiseImmunityLevel < ATH9K_ANI_OFDM_MAX_LEVEL) - ath9k_hw_set_ofdm_nil(ah, aniState->ofdmNoiseImmunityLevel + 1); + ath9k_hw_set_ofdm_nil(ah, aniState->ofdmNoiseImmunityLevel + 1, false); } /* * Set the ANI settings to match an CCK level. */ -static void ath9k_hw_set_cck_nil(struct ath_hw *ah, u_int8_t immunityLevel) +static void ath9k_hw_set_cck_nil(struct ath_hw *ah, u_int8_t immunityLevel, + bool scan) { struct ar5416AniState *aniState = &ah->curchan->ani; struct ath_common *common = ath9k_hw_common(ah); const struct ani_ofdm_level_entry *entry_ofdm; const struct ani_cck_level_entry *entry_cck; - aniState->noiseFloor = BEACON_RSSI(ah); ath_dbg(common, ANI, "**** ccklevel %d=>%d, rssi=%d[lo=%d hi=%d]\n", aniState->cckNoiseImmunityLevel, immunityLevel, - aniState->noiseFloor, aniState->rssiThrLow, + BEACON_RSSI(ah), aniState->rssiThrLow, aniState->rssiThrHigh); - if ((ah->opmode == NL80211_IFTYPE_STATION || - ah->opmode == NL80211_IFTYPE_ADHOC) && - aniState->noiseFloor <= aniState->rssiThrLow && + if (ah->opmode == NL80211_IFTYPE_STATION && + BEACON_RSSI(ah) <= aniState->rssiThrLow && immunityLevel > ATH9K_ANI_CCK_MAX_LEVEL_LOW_RSSI) immunityLevel = ATH9K_ANI_CCK_MAX_LEVEL_LOW_RSSI; - if (aniState->update_ani) - aniState->cckNoiseImmunityLevel = - (immunityLevel > ATH9K_ANI_CCK_DEF_LEVEL) ? 
- immunityLevel : ATH9K_ANI_CCK_DEF_LEVEL; + if (!scan) + aniState->cckNoiseImmunityLevel = immunityLevel; entry_ofdm = &ofdm_level_table[aniState->ofdmNoiseImmunityLevel]; entry_cck = &cck_level_table[aniState->cckNoiseImmunityLevel]; @@ -359,7 +240,7 @@ static void ath9k_hw_set_cck_nil(struct ath_hw *ah, u_int8_t immunityLevel) if (!AR_SREV_9300_20_OR_LATER(ah) || AR_SREV_9485(ah)) return; - if (aniState->mrcCCKOff == entry_cck->mrc_cck_on) + if (aniState->mrcCCK != entry_cck->mrc_cck_on) ath9k_hw_ani_control(ah, ATH9K_ANI_MRC_CCK, entry_cck->mrc_cck_on); @@ -372,68 +253,11 @@ static void ath9k_hw_ani_cck_err_trigger(struct ath_hw *ah) if (!DO_ANI(ah)) return; - if (!use_new_ani(ah)) { - ath9k_hw_ani_cck_err_trigger_old(ah); - return; - } - aniState = &ah->curchan->ani; if (aniState->cckNoiseImmunityLevel < ATH9K_ANI_CCK_MAX_LEVEL) - ath9k_hw_set_cck_nil(ah, aniState->cckNoiseImmunityLevel + 1); -} - -static void ath9k_hw_ani_lower_immunity_old(struct ath_hw *ah) -{ - struct ar5416AniState *aniState; - int32_t rssi; - - aniState = &ah->curchan->ani; - - if (ah->opmode == NL80211_IFTYPE_AP) { - if (aniState->firstepLevel > 0) { - if (ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel - 1)) - return; - } - } else { - rssi = BEACON_RSSI(ah); - if (rssi > aniState->rssiThrHigh) { - /* XXX: Handle me */ - } else if (rssi > aniState->rssiThrLow) { - if (aniState->ofdmWeakSigDetectOff) { - if (ath9k_hw_ani_control(ah, - ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - true)) - return; - } - if (aniState->firstepLevel > 0) { - if (ath9k_hw_ani_control(ah, - ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel - 1)) - return; - } - } else { - if (aniState->firstepLevel > 0) { - if (ath9k_hw_ani_control(ah, - ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel - 1)) - return; - } - } - } - - if (aniState->spurImmunityLevel > 0) { - if (ath9k_hw_ani_control(ah, ATH9K_ANI_SPUR_IMMUNITY_LEVEL, - aniState->spurImmunityLevel - 1)) - return; - } - - if (aniState->noiseImmunityLevel > 0) { - ath9k_hw_ani_control(ah, ATH9K_ANI_NOISE_IMMUNITY_LEVEL, - aniState->noiseImmunityLevel - 1); - return; - } + ath9k_hw_set_cck_nil(ah, aniState->cckNoiseImmunityLevel + 1, + false); } /* @@ -446,87 +270,18 @@ static void ath9k_hw_ani_lower_immunity(struct ath_hw *ah) aniState = &ah->curchan->ani; - if (!use_new_ani(ah)) { - ath9k_hw_ani_lower_immunity_old(ah); - return; - } - /* lower OFDM noise immunity */ if (aniState->ofdmNoiseImmunityLevel > 0 && (aniState->ofdmsTurn || aniState->cckNoiseImmunityLevel == 0)) { - ath9k_hw_set_ofdm_nil(ah, aniState->ofdmNoiseImmunityLevel - 1); + ath9k_hw_set_ofdm_nil(ah, aniState->ofdmNoiseImmunityLevel - 1, + false); return; } /* lower CCK noise immunity */ if (aniState->cckNoiseImmunityLevel > 0) - ath9k_hw_set_cck_nil(ah, aniState->cckNoiseImmunityLevel - 1); -} - -static void ath9k_ani_reset_old(struct ath_hw *ah, bool is_scanning) -{ - struct ar5416AniState *aniState; - struct ath9k_channel *chan = ah->curchan; - struct ath_common *common = ath9k_hw_common(ah); - - if (!DO_ANI(ah)) - return; - - aniState = &ah->curchan->ani; - - if (ah->opmode != NL80211_IFTYPE_STATION - && ah->opmode != NL80211_IFTYPE_ADHOC) { - ath_dbg(common, ANI, "Reset ANI state opmode %u\n", ah->opmode); - ah->stats.ast_ani_reset++; - - if (ah->opmode == NL80211_IFTYPE_AP) { - /* - * ath9k_hw_ani_control() will only process items set on - * ah->ani_function - */ - if (IS_CHAN_2GHZ(chan)) - ah->ani_function = (ATH9K_ANI_SPUR_IMMUNITY_LEVEL | - ATH9K_ANI_FIRSTEP_LEVEL); - else - ah->ani_function 
= 0; - } - - ath9k_hw_ani_control(ah, ATH9K_ANI_NOISE_IMMUNITY_LEVEL, 0); - ath9k_hw_ani_control(ah, ATH9K_ANI_SPUR_IMMUNITY_LEVEL, 0); - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, 0); - ath9k_hw_ani_control(ah, ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - !ATH9K_ANI_USE_OFDM_WEAK_SIG); - ath9k_hw_ani_control(ah, ATH9K_ANI_CCK_WEAK_SIGNAL_THR, - ATH9K_ANI_CCK_WEAK_SIG_THR); - - ath9k_ani_restart(ah); - return; - } - - if (aniState->noiseImmunityLevel != 0) - ath9k_hw_ani_control(ah, ATH9K_ANI_NOISE_IMMUNITY_LEVEL, - aniState->noiseImmunityLevel); - if (aniState->spurImmunityLevel != 0) - ath9k_hw_ani_control(ah, ATH9K_ANI_SPUR_IMMUNITY_LEVEL, - aniState->spurImmunityLevel); - if (aniState->ofdmWeakSigDetectOff) - ath9k_hw_ani_control(ah, ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION, - !aniState->ofdmWeakSigDetectOff); - if (aniState->cckWeakSigThreshold) - ath9k_hw_ani_control(ah, ATH9K_ANI_CCK_WEAK_SIGNAL_THR, - aniState->cckWeakSigThreshold); - if (aniState->firstepLevel != 0) - ath9k_hw_ani_control(ah, ATH9K_ANI_FIRSTEP_LEVEL, - aniState->firstepLevel); - - ath9k_ani_restart(ah); - - ENABLE_REGWRITE_BUFFER(ah); - - REG_WRITE(ah, AR_PHY_ERR_MASK_1, AR_PHY_ERR_OFDM_TIMING); - REG_WRITE(ah, AR_PHY_ERR_MASK_2, AR_PHY_ERR_CCK_TIMING); - - REGWRITE_BUFFER_FLUSH(ah); + ath9k_hw_set_cck_nil(ah, aniState->cckNoiseImmunityLevel - 1, + false); } /* @@ -539,13 +294,11 @@ void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning) struct ar5416AniState *aniState = &ah->curchan->ani; struct ath9k_channel *chan = ah->curchan; struct ath_common *common = ath9k_hw_common(ah); + int ofdm_nil, cck_nil; if (!DO_ANI(ah)) return; - if (!use_new_ani(ah)) - return ath9k_ani_reset_old(ah, is_scanning); - BUG_ON(aniState == NULL); ah->stats.ast_ani_reset++; @@ -563,6 +316,11 @@ void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning) /* always allow mode (on/off) to be controlled */ ah->ani_function |= ATH9K_ANI_MODE; + ofdm_nil = max_t(int, ATH9K_ANI_OFDM_DEF_LEVEL, + aniState->ofdmNoiseImmunityLevel); + cck_nil = max_t(int, ATH9K_ANI_CCK_DEF_LEVEL, + aniState->cckNoiseImmunityLevel); + if (is_scanning || (ah->opmode != NL80211_IFTYPE_STATION && ah->opmode != NL80211_IFTYPE_ADHOC)) { @@ -585,9 +343,8 @@ void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning) aniState->ofdmNoiseImmunityLevel, aniState->cckNoiseImmunityLevel); - aniState->update_ani = false; - ath9k_hw_set_ofdm_nil(ah, ATH9K_ANI_OFDM_DEF_LEVEL); - ath9k_hw_set_cck_nil(ah, ATH9K_ANI_CCK_DEF_LEVEL); + ofdm_nil = ATH9K_ANI_OFDM_DEF_LEVEL; + cck_nil = ATH9K_ANI_CCK_DEF_LEVEL; } } else { /* @@ -601,13 +358,9 @@ void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning) is_scanning, aniState->ofdmNoiseImmunityLevel, aniState->cckNoiseImmunityLevel); - - aniState->update_ani = true; - ath9k_hw_set_ofdm_nil(ah, - aniState->ofdmNoiseImmunityLevel); - ath9k_hw_set_cck_nil(ah, - aniState->cckNoiseImmunityLevel); } + ath9k_hw_set_ofdm_nil(ah, ofdm_nil, is_scanning); + ath9k_hw_set_cck_nil(ah, cck_nil, is_scanning); /* * enable phy counters if hw supports or if not, enable phy @@ -627,9 +380,6 @@ static bool ath9k_hw_ani_read_counters(struct ath_hw *ah) { struct ath_common *common = ath9k_hw_common(ah); struct ar5416AniState *aniState = &ah->curchan->ani; - u32 ofdm_base = 0; - u32 cck_base = 0; - u32 ofdmPhyErrCnt, cckPhyErrCnt; u32 phyCnt1, phyCnt2; int32_t listenTime; @@ -642,11 +392,6 @@ static bool ath9k_hw_ani_read_counters(struct ath_hw *ah) return false; } - if (!use_new_ani(ah)) { - ofdm_base = AR_PHY_COUNTMAX - ah->config.ofdm_trig_high; - cck_base = 
AR_PHY_COUNTMAX - ah->config.cck_trig_high; - } - aniState->listenTime += listenTime; ath9k_hw_update_mibstats(ah, &ah->ah_mibStats); @@ -654,35 +399,12 @@ static bool ath9k_hw_ani_read_counters(struct ath_hw *ah) phyCnt1 = REG_READ(ah, AR_PHY_ERR_1); phyCnt2 = REG_READ(ah, AR_PHY_ERR_2); - if (!use_new_ani(ah) && (phyCnt1 < ofdm_base || phyCnt2 < cck_base)) { - if (phyCnt1 < ofdm_base) { - ath_dbg(common, ANI, - "phyCnt1 0x%x, resetting counter value to 0x%x\n", - phyCnt1, ofdm_base); - REG_WRITE(ah, AR_PHY_ERR_1, ofdm_base); - REG_WRITE(ah, AR_PHY_ERR_MASK_1, - AR_PHY_ERR_OFDM_TIMING); - } - if (phyCnt2 < cck_base) { - ath_dbg(common, ANI, - "phyCnt2 0x%x, resetting counter value to 0x%x\n", - phyCnt2, cck_base); - REG_WRITE(ah, AR_PHY_ERR_2, cck_base); - REG_WRITE(ah, AR_PHY_ERR_MASK_2, - AR_PHY_ERR_CCK_TIMING); - } - return false; - } + ah->stats.ast_ani_ofdmerrs += phyCnt1 - aniState->ofdmPhyErrCount; + aniState->ofdmPhyErrCount = phyCnt1; - ofdmPhyErrCnt = phyCnt1 - ofdm_base; - ah->stats.ast_ani_ofdmerrs += - ofdmPhyErrCnt - aniState->ofdmPhyErrCount; - aniState->ofdmPhyErrCount = ofdmPhyErrCnt; + ah->stats.ast_ani_cckerrs += phyCnt2 - aniState->cckPhyErrCount; + aniState->cckPhyErrCount = phyCnt2; - cckPhyErrCnt = phyCnt2 - cck_base; - ah->stats.ast_ani_cckerrs += - cckPhyErrCnt - aniState->cckPhyErrCount; - aniState->cckPhyErrCount = cckPhyErrCnt; return true; } @@ -716,21 +438,10 @@ void ath9k_hw_ani_monitor(struct ath_hw *ah, struct ath9k_channel *chan) if (aniState->listenTime > ah->aniperiod) { if (cckPhyErrRate < ah->config.cck_trig_low && - ((ofdmPhyErrRate < ah->config.ofdm_trig_low && - aniState->ofdmNoiseImmunityLevel < - ATH9K_ANI_OFDM_DEF_LEVEL) || - (ofdmPhyErrRate < ATH9K_ANI_OFDM_TRIG_LOW_ABOVE_INI && - aniState->ofdmNoiseImmunityLevel >= - ATH9K_ANI_OFDM_DEF_LEVEL))) { + ofdmPhyErrRate < ah->config.ofdm_trig_low) { ath9k_hw_ani_lower_immunity(ah); aniState->ofdmsTurn = !aniState->ofdmsTurn; - } else if ((ofdmPhyErrRate > ah->config.ofdm_trig_high && - aniState->ofdmNoiseImmunityLevel >= - ATH9K_ANI_OFDM_DEF_LEVEL) || - (ofdmPhyErrRate > - ATH9K_ANI_OFDM_TRIG_HIGH_BELOW_INI && - aniState->ofdmNoiseImmunityLevel < - ATH9K_ANI_OFDM_DEF_LEVEL)) { + } else if (ofdmPhyErrRate > ah->config.ofdm_trig_high) { ath9k_hw_ani_ofdm_err_trigger(ah); aniState->ofdmsTurn = false; } else if (cckPhyErrRate > ah->config.cck_trig_high) { @@ -778,49 +489,6 @@ void ath9k_hw_disable_mib_counters(struct ath_hw *ah) } EXPORT_SYMBOL(ath9k_hw_disable_mib_counters); -/* - * Process a MIB interrupt. We may potentially be invoked because - * any of the MIB counters overflow/trigger so don't assume we're - * here because a PHY error counter triggered. - */ -void ath9k_hw_proc_mib_event(struct ath_hw *ah) -{ - u32 phyCnt1, phyCnt2; - - /* Reset these counters regardless */ - REG_WRITE(ah, AR_FILT_OFDM, 0); - REG_WRITE(ah, AR_FILT_CCK, 0); - if (!(REG_READ(ah, AR_SLP_MIB_CTRL) & AR_SLP_MIB_PENDING)) - REG_WRITE(ah, AR_SLP_MIB_CTRL, AR_SLP_MIB_CLEAR); - - /* Clear the mib counters and save them in the stats */ - ath9k_hw_update_mibstats(ah, &ah->ah_mibStats); - - if (!DO_ANI(ah)) { - /* - * We must always clear the interrupt cause by - * resetting the phy error regs. 
- */ - REG_WRITE(ah, AR_PHY_ERR_1, 0); - REG_WRITE(ah, AR_PHY_ERR_2, 0); - return; - } - - /* NB: these are not reset-on-read */ - phyCnt1 = REG_READ(ah, AR_PHY_ERR_1); - phyCnt2 = REG_READ(ah, AR_PHY_ERR_2); - if (((phyCnt1 & AR_MIBCNT_INTRMASK) == AR_MIBCNT_INTRMASK) || - ((phyCnt2 & AR_MIBCNT_INTRMASK) == AR_MIBCNT_INTRMASK)) { - - if (!use_new_ani(ah)) - ath9k_hw_ani_read_counters(ah); - - /* NB: always restart to insure the h/w counters are reset */ - ath9k_ani_restart(ah); - } -} -EXPORT_SYMBOL(ath9k_hw_proc_mib_event); - void ath9k_hw_ani_setup(struct ath_hw *ah) { int i; @@ -845,66 +513,37 @@ void ath9k_hw_ani_init(struct ath_hw *ah) ath_dbg(common, ANI, "Initialize ANI\n"); - if (use_new_ani(ah)) { - ah->config.ofdm_trig_high = ATH9K_ANI_OFDM_TRIG_HIGH_NEW; - ah->config.ofdm_trig_low = ATH9K_ANI_OFDM_TRIG_LOW_NEW; + ah->config.ofdm_trig_high = ATH9K_ANI_OFDM_TRIG_HIGH; + ah->config.ofdm_trig_low = ATH9K_ANI_OFDM_TRIG_LOW; - ah->config.cck_trig_high = ATH9K_ANI_CCK_TRIG_HIGH_NEW; - ah->config.cck_trig_low = ATH9K_ANI_CCK_TRIG_LOW_NEW; - } else { - ah->config.ofdm_trig_high = ATH9K_ANI_OFDM_TRIG_HIGH_OLD; - ah->config.ofdm_trig_low = ATH9K_ANI_OFDM_TRIG_LOW_OLD; - - ah->config.cck_trig_high = ATH9K_ANI_CCK_TRIG_HIGH_OLD; - ah->config.cck_trig_low = ATH9K_ANI_CCK_TRIG_LOW_OLD; - } + ah->config.cck_trig_high = ATH9K_ANI_CCK_TRIG_HIGH; + ah->config.cck_trig_low = ATH9K_ANI_CCK_TRIG_LOW; for (i = 0; i < ARRAY_SIZE(ah->channels); i++) { struct ath9k_channel *chan = &ah->channels[i]; struct ar5416AniState *ani = &chan->ani; - if (use_new_ani(ah)) { - ani->spurImmunityLevel = - ATH9K_ANI_SPUR_IMMUNE_LVL_NEW; + ani->spurImmunityLevel = ATH9K_ANI_SPUR_IMMUNE_LVL; - ani->firstepLevel = ATH9K_ANI_FIRSTEP_LVL_NEW; + ani->firstepLevel = ATH9K_ANI_FIRSTEP_LVL; - if (AR_SREV_9300_20_OR_LATER(ah)) - ani->mrcCCKOff = - !ATH9K_ANI_ENABLE_MRC_CCK; - else - ani->mrcCCKOff = true; - - ani->ofdmsTurn = true; - } else { - ani->spurImmunityLevel = - ATH9K_ANI_SPUR_IMMUNE_LVL_OLD; - ani->firstepLevel = ATH9K_ANI_FIRSTEP_LVL_OLD; - - ani->cckWeakSigThreshold = - ATH9K_ANI_CCK_WEAK_SIG_THR; - } + ani->mrcCCK = AR_SREV_9300_20_OR_LATER(ah) ? true : false; + + ani->ofdmsTurn = true; ani->rssiThrHigh = ATH9K_ANI_RSSI_THR_HIGH; ani->rssiThrLow = ATH9K_ANI_RSSI_THR_LOW; - ani->ofdmWeakSigDetectOff = - !ATH9K_ANI_USE_OFDM_WEAK_SIG; + ani->ofdmWeakSigDetect = ATH9K_ANI_USE_OFDM_WEAK_SIG; ani->cckNoiseImmunityLevel = ATH9K_ANI_CCK_DEF_LEVEL; ani->ofdmNoiseImmunityLevel = ATH9K_ANI_OFDM_DEF_LEVEL; - ani->update_ani = false; } /* * since we expect some ongoing maintenance on the tables, let's sanity * check here default level should not modify INI setting. 
*/ - if (use_new_ani(ah)) { - ah->aniperiod = ATH9K_ANI_PERIOD_NEW; - ah->config.ani_poll_interval = ATH9K_ANI_POLLINTERVAL_NEW; - } else { - ah->aniperiod = ATH9K_ANI_PERIOD_OLD; - ah->config.ani_poll_interval = ATH9K_ANI_POLLINTERVAL_OLD; - } + ah->aniperiod = ATH9K_ANI_PERIOD; + ah->config.ani_poll_interval = ATH9K_ANI_POLLINTERVAL; if (ah->config.enable_ani) ah->proc_phyerr |= HAL_PROCESS_ANI; diff --git a/drivers/net/wireless/ath/ath9k/ani.h b/drivers/net/wireless/ath/ath9k/ani.h index 72e2b874e179..1485bf5e3518 100644 --- a/drivers/net/wireless/ath/ath9k/ani.h +++ b/drivers/net/wireless/ath/ath9k/ani.h @@ -24,42 +24,34 @@ #define BEACON_RSSI(ahp) (ahp->stats.avgbrssi) /* units are errors per second */ -#define ATH9K_ANI_OFDM_TRIG_HIGH_OLD 500 -#define ATH9K_ANI_OFDM_TRIG_HIGH_NEW 3500 +#define ATH9K_ANI_OFDM_TRIG_HIGH 3500 #define ATH9K_ANI_OFDM_TRIG_HIGH_BELOW_INI 1000 /* units are errors per second */ -#define ATH9K_ANI_OFDM_TRIG_LOW_OLD 200 -#define ATH9K_ANI_OFDM_TRIG_LOW_NEW 400 +#define ATH9K_ANI_OFDM_TRIG_LOW 400 #define ATH9K_ANI_OFDM_TRIG_LOW_ABOVE_INI 900 /* units are errors per second */ -#define ATH9K_ANI_CCK_TRIG_HIGH_OLD 200 -#define ATH9K_ANI_CCK_TRIG_HIGH_NEW 600 +#define ATH9K_ANI_CCK_TRIG_HIGH 600 /* units are errors per second */ -#define ATH9K_ANI_CCK_TRIG_LOW_OLD 100 -#define ATH9K_ANI_CCK_TRIG_LOW_NEW 300 +#define ATH9K_ANI_CCK_TRIG_LOW 300 #define ATH9K_ANI_NOISE_IMMUNE_LVL 4 #define ATH9K_ANI_USE_OFDM_WEAK_SIG true #define ATH9K_ANI_CCK_WEAK_SIG_THR false -#define ATH9K_ANI_SPUR_IMMUNE_LVL_OLD 7 -#define ATH9K_ANI_SPUR_IMMUNE_LVL_NEW 3 +#define ATH9K_ANI_SPUR_IMMUNE_LVL 3 -#define ATH9K_ANI_FIRSTEP_LVL_OLD 0 -#define ATH9K_ANI_FIRSTEP_LVL_NEW 2 +#define ATH9K_ANI_FIRSTEP_LVL 2 #define ATH9K_ANI_RSSI_THR_HIGH 40 #define ATH9K_ANI_RSSI_THR_LOW 7 -#define ATH9K_ANI_PERIOD_OLD 100 -#define ATH9K_ANI_PERIOD_NEW 300 +#define ATH9K_ANI_PERIOD 300 /* in ms */ -#define ATH9K_ANI_POLLINTERVAL_OLD 100 -#define ATH9K_ANI_POLLINTERVAL_NEW 1000 +#define ATH9K_ANI_POLLINTERVAL 1000 #define HAL_NOISE_IMMUNE_MAX 4 #define HAL_SPUR_IMMUNE_MAX 7 @@ -70,8 +62,6 @@ #define ATH9K_SIG_SPUR_IMM_SETTING_MIN 0 #define ATH9K_SIG_SPUR_IMM_SETTING_MAX 22 -#define ATH9K_ANI_ENABLE_MRC_CCK true - /* values here are relative to the INI */ enum ath9k_ani_cmd { @@ -119,16 +109,14 @@ struct ar5416AniState { u8 ofdmNoiseImmunityLevel; u8 cckNoiseImmunityLevel; bool ofdmsTurn; - u8 mrcCCKOff; + u8 mrcCCK; u8 spurImmunityLevel; u8 firstepLevel; - u8 ofdmWeakSigDetectOff; + u8 ofdmWeakSigDetect; u8 cckWeakSigThreshold; - bool update_ani; u32 listenTime; int32_t rssiThrLow; int32_t rssiThrHigh; - u32 noiseFloor; u32 ofdmPhyErrCount; u32 cckPhyErrCount; int16_t pktRssi[2]; diff --git a/drivers/net/wireless/ath/ath9k/antenna.c b/drivers/net/wireless/ath/ath9k/antenna.c new file mode 100644 index 000000000000..bbcfeb3b2a60 --- /dev/null +++ b/drivers/net/wireless/ath/ath9k/antenna.c @@ -0,0 +1,776 @@ +/* + * Copyright (c) 2012 Qualcomm Atheros, Inc. + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +#include "ath9k.h" + +static inline bool ath_is_alt_ant_ratio_better(int alt_ratio, int maxdelta, + int mindelta, int main_rssi_avg, + int alt_rssi_avg, int pkt_count) +{ + return (((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && + (alt_rssi_avg > main_rssi_avg + maxdelta)) || + (alt_rssi_avg > main_rssi_avg + mindelta)) && (pkt_count > 50); +} + +static inline bool ath_ant_div_comb_alt_check(u8 div_group, int alt_ratio, + int curr_main_set, int curr_alt_set, + int alt_rssi_avg, int main_rssi_avg) +{ + bool result = false; + switch (div_group) { + case 0: + if (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO) + result = true; + break; + case 1: + case 2: + if ((((curr_main_set == ATH_ANT_DIV_COMB_LNA2) && + (curr_alt_set == ATH_ANT_DIV_COMB_LNA1) && + (alt_rssi_avg >= (main_rssi_avg - 5))) || + ((curr_main_set == ATH_ANT_DIV_COMB_LNA1) && + (curr_alt_set == ATH_ANT_DIV_COMB_LNA2) && + (alt_rssi_avg >= (main_rssi_avg - 2)))) && + (alt_rssi_avg >= 4)) + result = true; + else + result = false; + break; + } + + return result; +} + +static void ath_lnaconf_alt_good_scan(struct ath_ant_comb *antcomb, + struct ath_hw_antcomb_conf ant_conf, + int main_rssi_avg) +{ + antcomb->quick_scan_cnt = 0; + + if (ant_conf.main_lna_conf == ATH_ANT_DIV_COMB_LNA2) + antcomb->rssi_lna2 = main_rssi_avg; + else if (ant_conf.main_lna_conf == ATH_ANT_DIV_COMB_LNA1) + antcomb->rssi_lna1 = main_rssi_avg; + + switch ((ant_conf.main_lna_conf << 4) | ant_conf.alt_lna_conf) { + case 0x10: /* LNA2 A-B */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA1; + break; + case 0x20: /* LNA1 A-B */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA2; + break; + case 0x21: /* LNA1 LNA2 */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA2; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->second_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + break; + case 0x12: /* LNA2 LNA1 */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->second_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + break; + case 0x13: /* LNA2 A+B */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA1; + break; + case 0x23: /* LNA1 A+B */ + antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + antcomb->first_quick_scan_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA2; + break; + default: + break; + } +} + +static void ath_select_ant_div_from_quick_scan(struct ath_ant_comb *antcomb, + struct ath_hw_antcomb_conf *div_ant_conf, + int main_rssi_avg, int alt_rssi_avg, + int alt_ratio) +{ + /* alt_good */ + switch (antcomb->quick_scan_cnt) { + case 0: + /* set alt to main, and alt to first conf */ + div_ant_conf->main_lna_conf = antcomb->main_conf; + div_ant_conf->alt_lna_conf 
= antcomb->first_quick_scan_conf; + break; + case 1: + /* set alt to main, and alt to first conf */ + div_ant_conf->main_lna_conf = antcomb->main_conf; + div_ant_conf->alt_lna_conf = antcomb->second_quick_scan_conf; + antcomb->rssi_first = main_rssi_avg; + antcomb->rssi_second = alt_rssi_avg; + + if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) { + /* main is LNA1 */ + if (ath_is_alt_ant_ratio_better(alt_ratio, + ATH_ANT_DIV_COMB_LNA1_DELTA_HI, + ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, + main_rssi_avg, alt_rssi_avg, + antcomb->total_pkt_count)) + antcomb->first_ratio = true; + else + antcomb->first_ratio = false; + } else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) { + if (ath_is_alt_ant_ratio_better(alt_ratio, + ATH_ANT_DIV_COMB_LNA1_DELTA_MID, + ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, + main_rssi_avg, alt_rssi_avg, + antcomb->total_pkt_count)) + antcomb->first_ratio = true; + else + antcomb->first_ratio = false; + } else { + if ((((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && + (alt_rssi_avg > main_rssi_avg + + ATH_ANT_DIV_COMB_LNA1_DELTA_HI)) || + (alt_rssi_avg > main_rssi_avg)) && + (antcomb->total_pkt_count > 50)) + antcomb->first_ratio = true; + else + antcomb->first_ratio = false; + } + break; + case 2: + antcomb->alt_good = false; + antcomb->scan_not_start = false; + antcomb->scan = false; + antcomb->rssi_first = main_rssi_avg; + antcomb->rssi_third = alt_rssi_avg; + + if (antcomb->second_quick_scan_conf == ATH_ANT_DIV_COMB_LNA1) + antcomb->rssi_lna1 = alt_rssi_avg; + else if (antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA2) + antcomb->rssi_lna2 = alt_rssi_avg; + else if (antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2) { + if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) + antcomb->rssi_lna2 = main_rssi_avg; + else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) + antcomb->rssi_lna1 = main_rssi_avg; + } + + if (antcomb->rssi_lna2 > antcomb->rssi_lna1 + + ATH_ANT_DIV_COMB_LNA1_LNA2_SWITCH_DELTA) + div_ant_conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA2; + else + div_ant_conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA1; + + if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) { + if (ath_is_alt_ant_ratio_better(alt_ratio, + ATH_ANT_DIV_COMB_LNA1_DELTA_HI, + ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, + main_rssi_avg, alt_rssi_avg, + antcomb->total_pkt_count)) + antcomb->second_ratio = true; + else + antcomb->second_ratio = false; + } else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) { + if (ath_is_alt_ant_ratio_better(alt_ratio, + ATH_ANT_DIV_COMB_LNA1_DELTA_MID, + ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, + main_rssi_avg, alt_rssi_avg, + antcomb->total_pkt_count)) + antcomb->second_ratio = true; + else + antcomb->second_ratio = false; + } else { + if ((((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && + (alt_rssi_avg > main_rssi_avg + + ATH_ANT_DIV_COMB_LNA1_DELTA_HI)) || + (alt_rssi_avg > main_rssi_avg)) && + (antcomb->total_pkt_count > 50)) + antcomb->second_ratio = true; + else + antcomb->second_ratio = false; + } + + /* set alt to the conf with maximun ratio */ + if (antcomb->first_ratio && antcomb->second_ratio) { + if (antcomb->rssi_second > antcomb->rssi_third) { + /* first alt*/ + if ((antcomb->first_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA1) || + (antcomb->first_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA2)) + /* Set alt LNA1 or LNA2*/ + if (div_ant_conf->main_lna_conf == + ATH_ANT_DIV_COMB_LNA2) + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + else + /* Set alt to A+B or A-B */ + 
div_ant_conf->alt_lna_conf = + antcomb->first_quick_scan_conf; + } else if ((antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA1) || + (antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA2)) { + /* Set alt LNA1 or LNA2 */ + if (div_ant_conf->main_lna_conf == + ATH_ANT_DIV_COMB_LNA2) + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + } else { + /* Set alt to A+B or A-B */ + div_ant_conf->alt_lna_conf = + antcomb->second_quick_scan_conf; + } + } else if (antcomb->first_ratio) { + /* first alt */ + if ((antcomb->first_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA1) || + (antcomb->first_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA2)) + /* Set alt LNA1 or LNA2 */ + if (div_ant_conf->main_lna_conf == + ATH_ANT_DIV_COMB_LNA2) + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + else + /* Set alt to A+B or A-B */ + div_ant_conf->alt_lna_conf = + antcomb->first_quick_scan_conf; + } else if (antcomb->second_ratio) { + /* second alt */ + if ((antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA1) || + (antcomb->second_quick_scan_conf == + ATH_ANT_DIV_COMB_LNA2)) + /* Set alt LNA1 or LNA2 */ + if (div_ant_conf->main_lna_conf == + ATH_ANT_DIV_COMB_LNA2) + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + else + /* Set alt to A+B or A-B */ + div_ant_conf->alt_lna_conf = + antcomb->second_quick_scan_conf; + } else { + /* main is largest */ + if ((antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) || + (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2)) + /* Set alt LNA1 or LNA2 */ + if (div_ant_conf->main_lna_conf == + ATH_ANT_DIV_COMB_LNA2) + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else + div_ant_conf->alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + else + /* Set alt to A+B or A-B */ + div_ant_conf->alt_lna_conf = antcomb->main_conf; + } + break; + default: + break; + } +} + +static void ath_ant_div_conf_fast_divbias(struct ath_hw_antcomb_conf *ant_conf, + struct ath_ant_comb *antcomb, + int alt_ratio) +{ + if (ant_conf->div_group == 0) { + /* Adjust the fast_div_bias based on main and alt lna conf */ + switch ((ant_conf->main_lna_conf << 4) | + ant_conf->alt_lna_conf) { + case 0x01: /* A-B LNA2 */ + ant_conf->fast_div_bias = 0x3b; + break; + case 0x02: /* A-B LNA1 */ + ant_conf->fast_div_bias = 0x3d; + break; + case 0x03: /* A-B A+B */ + ant_conf->fast_div_bias = 0x1; + break; + case 0x10: /* LNA2 A-B */ + ant_conf->fast_div_bias = 0x7; + break; + case 0x12: /* LNA2 LNA1 */ + ant_conf->fast_div_bias = 0x2; + break; + case 0x13: /* LNA2 A+B */ + ant_conf->fast_div_bias = 0x7; + break; + case 0x20: /* LNA1 A-B */ + ant_conf->fast_div_bias = 0x6; + break; + case 0x21: /* LNA1 LNA2 */ + ant_conf->fast_div_bias = 0x0; + break; + case 0x23: /* LNA1 A+B */ + ant_conf->fast_div_bias = 0x6; + break; + case 0x30: /* A+B A-B */ + ant_conf->fast_div_bias = 0x1; + break; + case 0x31: /* A+B LNA2 */ + ant_conf->fast_div_bias = 0x3b; + break; + case 0x32: /* A+B LNA1 */ + ant_conf->fast_div_bias = 0x3d; + break; + default: + break; + } + } else if (ant_conf->div_group == 1) { + /* Adjust the fast_div_bias based on main and alt_lna_conf */ + switch ((ant_conf->main_lna_conf << 4) | + ant_conf->alt_lna_conf) { + case 0x01: /* A-B LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x02: /* A-B LNA1 */ + ant_conf->fast_div_bias = 0x1; + 
ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x03: /* A-B A+B */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x10: /* LNA2 A-B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x3f; + else + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x12: /* LNA2 LNA1 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x13: /* LNA2 A+B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x3f; + else + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x20: /* LNA1 A-B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x3f; + else + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x21: /* LNA1 LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x23: /* LNA1 A+B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x3f; + else + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x30: /* A+B A-B */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x31: /* A+B LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x32: /* A+B LNA1 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + default: + break; + } + } else if (ant_conf->div_group == 2) { + /* Adjust the fast_div_bias based on main and alt_lna_conf */ + switch ((ant_conf->main_lna_conf << 4) | + ant_conf->alt_lna_conf) { + case 0x01: /* A-B LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x02: /* A-B LNA1 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x03: /* A-B A+B */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x10: /* LNA2 A-B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x1; + else + ant_conf->fast_div_bias = 0x2; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x12: /* LNA2 LNA1 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x13: /* LNA2 A+B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x1; + else + ant_conf->fast_div_bias = 0x2; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x20: /* LNA1 A-B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x1; + else + ant_conf->fast_div_bias = 0x2; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x21: /* LNA1 LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x23: /* LNA1 A+B */ + if (!(antcomb->scan) && + (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) + ant_conf->fast_div_bias = 0x1; + else + ant_conf->fast_div_bias = 0x2; + ant_conf->main_gaintb = 
0; + ant_conf->alt_gaintb = 0; + break; + case 0x30: /* A+B A-B */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x31: /* A+B LNA2 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + case 0x32: /* A+B LNA1 */ + ant_conf->fast_div_bias = 0x1; + ant_conf->main_gaintb = 0; + ant_conf->alt_gaintb = 0; + break; + default: + break; + } + } +} + +void ath_ant_comb_scan(struct ath_softc *sc, struct ath_rx_status *rs) +{ + struct ath_hw_antcomb_conf div_ant_conf; + struct ath_ant_comb *antcomb = &sc->ant_comb; + int alt_ratio = 0, alt_rssi_avg = 0, main_rssi_avg = 0, curr_alt_set; + int curr_main_set; + int main_rssi = rs->rs_rssi_ctl0; + int alt_rssi = rs->rs_rssi_ctl1; + int rx_ant_conf, main_ant_conf; + bool short_scan = false; + + rx_ant_conf = (rs->rs_rssi_ctl2 >> ATH_ANT_RX_CURRENT_SHIFT) & + ATH_ANT_RX_MASK; + main_ant_conf = (rs->rs_rssi_ctl2 >> ATH_ANT_RX_MAIN_SHIFT) & + ATH_ANT_RX_MASK; + + /* Record packet only when both main_rssi and alt_rssi is positive */ + if (main_rssi > 0 && alt_rssi > 0) { + antcomb->total_pkt_count++; + antcomb->main_total_rssi += main_rssi; + antcomb->alt_total_rssi += alt_rssi; + if (main_ant_conf == rx_ant_conf) + antcomb->main_recv_cnt++; + else + antcomb->alt_recv_cnt++; + } + + /* Short scan check */ + if (antcomb->scan && antcomb->alt_good) { + if (time_after(jiffies, antcomb->scan_start_time + + msecs_to_jiffies(ATH_ANT_DIV_COMB_SHORT_SCAN_INTR))) + short_scan = true; + else + if (antcomb->total_pkt_count == + ATH_ANT_DIV_COMB_SHORT_SCAN_PKTCOUNT) { + alt_ratio = ((antcomb->alt_recv_cnt * 100) / + antcomb->total_pkt_count); + if (alt_ratio < ATH_ANT_DIV_COMB_ALT_ANT_RATIO) + short_scan = true; + } + } + + if (((antcomb->total_pkt_count < ATH_ANT_DIV_COMB_MAX_PKTCOUNT) || + rs->rs_moreaggr) && !short_scan) + return; + + if (antcomb->total_pkt_count) { + alt_ratio = ((antcomb->alt_recv_cnt * 100) / + antcomb->total_pkt_count); + main_rssi_avg = (antcomb->main_total_rssi / + antcomb->total_pkt_count); + alt_rssi_avg = (antcomb->alt_total_rssi / + antcomb->total_pkt_count); + } + + + ath9k_hw_antdiv_comb_conf_get(sc->sc_ah, &div_ant_conf); + curr_alt_set = div_ant_conf.alt_lna_conf; + curr_main_set = div_ant_conf.main_lna_conf; + + antcomb->count++; + + if (antcomb->count == ATH_ANT_DIV_COMB_MAX_COUNT) { + if (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO) { + ath_lnaconf_alt_good_scan(antcomb, div_ant_conf, + main_rssi_avg); + antcomb->alt_good = true; + } else { + antcomb->alt_good = false; + } + + antcomb->count = 0; + antcomb->scan = true; + antcomb->scan_not_start = true; + } + + if (!antcomb->scan) { + if (ath_ant_div_comb_alt_check(div_ant_conf.div_group, + alt_ratio, curr_main_set, curr_alt_set, + alt_rssi_avg, main_rssi_avg)) { + if (curr_alt_set == ATH_ANT_DIV_COMB_LNA2) { + /* Switch main and alt LNA */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + } else if (curr_alt_set == ATH_ANT_DIV_COMB_LNA1) { + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + } + + goto div_comb_done; + } else if ((curr_alt_set != ATH_ANT_DIV_COMB_LNA1) && + (curr_alt_set != ATH_ANT_DIV_COMB_LNA2)) { + /* Set alt to another LNA */ + if (curr_main_set == ATH_ANT_DIV_COMB_LNA2) + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + else if (curr_main_set == ATH_ANT_DIV_COMB_LNA1) + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + + 
goto div_comb_done; + } + + if ((alt_rssi_avg < (main_rssi_avg + + div_ant_conf.lna1_lna2_delta))) + goto div_comb_done; + } + + if (!antcomb->scan_not_start) { + switch (curr_alt_set) { + case ATH_ANT_DIV_COMB_LNA2: + antcomb->rssi_lna2 = alt_rssi_avg; + antcomb->rssi_lna1 = main_rssi_avg; + antcomb->scan = true; + /* set to A+B */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + break; + case ATH_ANT_DIV_COMB_LNA1: + antcomb->rssi_lna1 = alt_rssi_avg; + antcomb->rssi_lna2 = main_rssi_avg; + antcomb->scan = true; + /* set to A+B */ + div_ant_conf.main_lna_conf = ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + break; + case ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2: + antcomb->rssi_add = alt_rssi_avg; + antcomb->scan = true; + /* set to A-B */ + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + break; + case ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2: + antcomb->rssi_sub = alt_rssi_avg; + antcomb->scan = false; + if (antcomb->rssi_lna2 > + (antcomb->rssi_lna1 + + ATH_ANT_DIV_COMB_LNA1_LNA2_SWITCH_DELTA)) { + /* use LNA2 as main LNA */ + if ((antcomb->rssi_add > antcomb->rssi_lna1) && + (antcomb->rssi_add > antcomb->rssi_sub)) { + /* set to A+B */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + } else if (antcomb->rssi_sub > + antcomb->rssi_lna1) { + /* set to A-B */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + } else { + /* set to LNA1 */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + } + } else { + /* use LNA1 as main LNA */ + if ((antcomb->rssi_add > antcomb->rssi_lna2) && + (antcomb->rssi_add > antcomb->rssi_sub)) { + /* set to A+B */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; + } else if (antcomb->rssi_sub > + antcomb->rssi_lna1) { + /* set to A-B */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; + } else { + /* set to LNA2 */ + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + } + } + break; + default: + break; + } + } else { + if (!antcomb->alt_good) { + antcomb->scan_not_start = false; + /* Set alt to another LNA */ + if (curr_main_set == ATH_ANT_DIV_COMB_LNA2) { + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + } else if (curr_main_set == ATH_ANT_DIV_COMB_LNA1) { + div_ant_conf.main_lna_conf = + ATH_ANT_DIV_COMB_LNA1; + div_ant_conf.alt_lna_conf = + ATH_ANT_DIV_COMB_LNA2; + } + goto div_comb_done; + } + } + + ath_select_ant_div_from_quick_scan(antcomb, &div_ant_conf, + main_rssi_avg, alt_rssi_avg, + alt_ratio); + + antcomb->quick_scan_cnt++; + +div_comb_done: + ath_ant_div_conf_fast_divbias(&div_ant_conf, antcomb, alt_ratio); + ath9k_hw_antdiv_comb_conf_set(sc->sc_ah, &div_ant_conf); + + antcomb->scan_start_time = jiffies; + antcomb->total_pkt_count = 0; + antcomb->main_total_rssi = 0; + antcomb->alt_total_rssi = 0; + antcomb->main_recv_cnt = 0; + antcomb->alt_recv_cnt = 0; +} + +void ath_ant_comb_update(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath_hw_antcomb_conf div_ant_conf; + u8 lna_conf; + + ath9k_hw_antdiv_comb_conf_get(ah, &div_ant_conf); + + if 
(sc->ant_rx == 1) + lna_conf = ATH_ANT_DIV_COMB_LNA1; + else + lna_conf = ATH_ANT_DIV_COMB_LNA2; + + div_ant_conf.main_lna_conf = lna_conf; + div_ant_conf.alt_lna_conf = lna_conf; + + ath9k_hw_antdiv_comb_conf_set(ah, &div_ant_conf); +} diff --git a/drivers/net/wireless/ath/ath9k/ar5008_phy.c b/drivers/net/wireless/ath/ath9k/ar5008_phy.c index c7492c6a2519..874186bfda41 100644 --- a/drivers/net/wireless/ath/ath9k/ar5008_phy.c +++ b/drivers/net/wireless/ath/ath9k/ar5008_phy.c @@ -995,141 +995,6 @@ static u32 ar5008_hw_compute_pll_control(struct ath_hw *ah, return pll; } -static bool ar5008_hw_ani_control_old(struct ath_hw *ah, - enum ath9k_ani_cmd cmd, - int param) -{ - struct ar5416AniState *aniState = &ah->curchan->ani; - struct ath_common *common = ath9k_hw_common(ah); - - switch (cmd & ah->ani_function) { - case ATH9K_ANI_NOISE_IMMUNITY_LEVEL:{ - u32 level = param; - - if (level >= ARRAY_SIZE(ah->totalSizeDesired)) { - ath_dbg(common, ANI, "level out of range (%u > %zu)\n", - level, ARRAY_SIZE(ah->totalSizeDesired)); - return false; - } - - REG_RMW_FIELD(ah, AR_PHY_DESIRED_SZ, - AR_PHY_DESIRED_SZ_TOT_DES, - ah->totalSizeDesired[level]); - REG_RMW_FIELD(ah, AR_PHY_AGC_CTL1, - AR_PHY_AGC_CTL1_COARSE_LOW, - ah->coarse_low[level]); - REG_RMW_FIELD(ah, AR_PHY_AGC_CTL1, - AR_PHY_AGC_CTL1_COARSE_HIGH, - ah->coarse_high[level]); - REG_RMW_FIELD(ah, AR_PHY_FIND_SIG, - AR_PHY_FIND_SIG_FIRPWR, - ah->firpwr[level]); - - if (level > aniState->noiseImmunityLevel) - ah->stats.ast_ani_niup++; - else if (level < aniState->noiseImmunityLevel) - ah->stats.ast_ani_nidown++; - aniState->noiseImmunityLevel = level; - break; - } - case ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION:{ - u32 on = param ? 1 : 0; - - if (on) - REG_SET_BIT(ah, AR_PHY_SFCORR_LOW, - AR_PHY_SFCORR_LOW_USE_SELF_CORR_LOW); - else - REG_CLR_BIT(ah, AR_PHY_SFCORR_LOW, - AR_PHY_SFCORR_LOW_USE_SELF_CORR_LOW); - - if (!on != aniState->ofdmWeakSigDetectOff) { - if (on) - ah->stats.ast_ani_ofdmon++; - else - ah->stats.ast_ani_ofdmoff++; - aniState->ofdmWeakSigDetectOff = !on; - } - break; - } - case ATH9K_ANI_CCK_WEAK_SIGNAL_THR:{ - static const int weakSigThrCck[] = { 8, 6 }; - u32 high = param ? 
1 : 0; - - REG_RMW_FIELD(ah, AR_PHY_CCK_DETECT, - AR_PHY_CCK_DETECT_WEAK_SIG_THR_CCK, - weakSigThrCck[high]); - if (high != aniState->cckWeakSigThreshold) { - if (high) - ah->stats.ast_ani_cckhigh++; - else - ah->stats.ast_ani_ccklow++; - aniState->cckWeakSigThreshold = high; - } - break; - } - case ATH9K_ANI_FIRSTEP_LEVEL:{ - static const int firstep[] = { 0, 4, 8 }; - u32 level = param; - - if (level >= ARRAY_SIZE(firstep)) { - ath_dbg(common, ANI, "level out of range (%u > %zu)\n", - level, ARRAY_SIZE(firstep)); - return false; - } - REG_RMW_FIELD(ah, AR_PHY_FIND_SIG, - AR_PHY_FIND_SIG_FIRSTEP, - firstep[level]); - if (level > aniState->firstepLevel) - ah->stats.ast_ani_stepup++; - else if (level < aniState->firstepLevel) - ah->stats.ast_ani_stepdown++; - aniState->firstepLevel = level; - break; - } - case ATH9K_ANI_SPUR_IMMUNITY_LEVEL:{ - static const int cycpwrThr1[] = { 2, 4, 6, 8, 10, 12, 14, 16 }; - u32 level = param; - - if (level >= ARRAY_SIZE(cycpwrThr1)) { - ath_dbg(common, ANI, "level out of range (%u > %zu)\n", - level, ARRAY_SIZE(cycpwrThr1)); - return false; - } - REG_RMW_FIELD(ah, AR_PHY_TIMING5, - AR_PHY_TIMING5_CYCPWR_THR1, - cycpwrThr1[level]); - if (level > aniState->spurImmunityLevel) - ah->stats.ast_ani_spurup++; - else if (level < aniState->spurImmunityLevel) - ah->stats.ast_ani_spurdown++; - aniState->spurImmunityLevel = level; - break; - } - case ATH9K_ANI_PRESENT: - break; - default: - ath_dbg(common, ANI, "invalid cmd %u\n", cmd); - return false; - } - - ath_dbg(common, ANI, "ANI parameters:\n"); - ath_dbg(common, ANI, - "noiseImmunityLevel=%d, spurImmunityLevel=%d, ofdmWeakSigDetectOff=%d\n", - aniState->noiseImmunityLevel, - aniState->spurImmunityLevel, - !aniState->ofdmWeakSigDetectOff); - ath_dbg(common, ANI, - "cckWeakSigThreshold=%d, firstepLevel=%d, listenTime=%d\n", - aniState->cckWeakSigThreshold, - aniState->firstepLevel, - aniState->listenTime); - ath_dbg(common, ANI, "ofdmPhyErrCount=%d, cckPhyErrCount=%d\n\n", - aniState->ofdmPhyErrCount, - aniState->cckPhyErrCount); - - return true; -} - static bool ar5008_hw_ani_control_new(struct ath_hw *ah, enum ath9k_ani_cmd cmd, int param) @@ -1206,18 +1071,18 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, REG_CLR_BIT(ah, AR_PHY_SFCORR_LOW, AR_PHY_SFCORR_LOW_USE_SELF_CORR_LOW); - if (!on != aniState->ofdmWeakSigDetectOff) { + if (on != aniState->ofdmWeakSigDetect) { ath_dbg(common, ANI, "** ch %d: ofdm weak signal: %s=>%s\n", chan->channel, - !aniState->ofdmWeakSigDetectOff ? + aniState->ofdmWeakSigDetect ? "on" : "off", on ? 
"on" : "off"); if (on) ah->stats.ast_ani_ofdmon++; else ah->stats.ast_ani_ofdmoff++; - aniState->ofdmWeakSigDetectOff = !on; + aniState->ofdmWeakSigDetect = on; } break; } @@ -1236,7 +1101,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, * from INI file & cap value */ value = firstep_table[level] - - firstep_table[ATH9K_ANI_FIRSTEP_LVL_NEW] + + firstep_table[ATH9K_ANI_FIRSTEP_LVL] + aniState->iniDef.firstep; if (value < ATH9K_SIG_FIRSTEP_SETTING_MIN) value = ATH9K_SIG_FIRSTEP_SETTING_MIN; @@ -1251,7 +1116,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, * from INI file & cap value */ value2 = firstep_table[level] - - firstep_table[ATH9K_ANI_FIRSTEP_LVL_NEW] + + firstep_table[ATH9K_ANI_FIRSTEP_LVL] + aniState->iniDef.firstepLow; if (value2 < ATH9K_SIG_FIRSTEP_SETTING_MIN) value2 = ATH9K_SIG_FIRSTEP_SETTING_MIN; @@ -1267,7 +1132,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, chan->channel, aniState->firstepLevel, level, - ATH9K_ANI_FIRSTEP_LVL_NEW, + ATH9K_ANI_FIRSTEP_LVL, value, aniState->iniDef.firstep); ath_dbg(common, ANI, @@ -1275,7 +1140,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, chan->channel, aniState->firstepLevel, level, - ATH9K_ANI_FIRSTEP_LVL_NEW, + ATH9K_ANI_FIRSTEP_LVL, value2, aniState->iniDef.firstepLow); if (level > aniState->firstepLevel) @@ -1300,7 +1165,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, * from INI file & cap value */ value = cycpwrThr1_table[level] - - cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL_NEW] + + cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] + aniState->iniDef.cycpwrThr1; if (value < ATH9K_SIG_SPUR_IMM_SETTING_MIN) value = ATH9K_SIG_SPUR_IMM_SETTING_MIN; @@ -1316,7 +1181,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, * from INI file & cap value */ value2 = cycpwrThr1_table[level] - - cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL_NEW] + + cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] + aniState->iniDef.cycpwrThr1Ext; if (value2 < ATH9K_SIG_SPUR_IMM_SETTING_MIN) value2 = ATH9K_SIG_SPUR_IMM_SETTING_MIN; @@ -1331,7 +1196,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, chan->channel, aniState->spurImmunityLevel, level, - ATH9K_ANI_SPUR_IMMUNE_LVL_NEW, + ATH9K_ANI_SPUR_IMMUNE_LVL, value, aniState->iniDef.cycpwrThr1); ath_dbg(common, ANI, @@ -1339,7 +1204,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, chan->channel, aniState->spurImmunityLevel, level, - ATH9K_ANI_SPUR_IMMUNE_LVL_NEW, + ATH9K_ANI_SPUR_IMMUNE_LVL, value2, aniState->iniDef.cycpwrThr1Ext); if (level > aniState->spurImmunityLevel) @@ -1367,9 +1232,9 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah, ath_dbg(common, ANI, "ANI parameters: SI=%d, ofdmWS=%s FS=%d MRCcck=%s listenTime=%d ofdmErrs=%d cckErrs=%d\n", aniState->spurImmunityLevel, - !aniState->ofdmWeakSigDetectOff ? "on" : "off", + aniState->ofdmWeakSigDetect ? "on" : "off", aniState->firstepLevel, - !aniState->mrcCCKOff ? "on" : "off", + aniState->mrcCCK ? 
"on" : "off", aniState->listenTime, aniState->ofdmPhyErrCount, aniState->cckPhyErrCount); @@ -1454,10 +1319,10 @@ static void ar5008_hw_ani_cache_ini_regs(struct ath_hw *ah) AR_PHY_EXT_TIMING5_CYCPWR_THR1); /* these levels just got reset to defaults by the INI */ - aniState->spurImmunityLevel = ATH9K_ANI_SPUR_IMMUNE_LVL_NEW; - aniState->firstepLevel = ATH9K_ANI_FIRSTEP_LVL_NEW; - aniState->ofdmWeakSigDetectOff = !ATH9K_ANI_USE_OFDM_WEAK_SIG; - aniState->mrcCCKOff = true; /* not available on pre AR9003 */ + aniState->spurImmunityLevel = ATH9K_ANI_SPUR_IMMUNE_LVL; + aniState->firstepLevel = ATH9K_ANI_FIRSTEP_LVL; + aniState->ofdmWeakSigDetect = ATH9K_ANI_USE_OFDM_WEAK_SIG; + aniState->mrcCCK = false; /* not available on pre AR9003 */ } static void ar5008_hw_set_nf_limits(struct ath_hw *ah) @@ -1545,11 +1410,8 @@ void ar5008_hw_attach_phy_ops(struct ath_hw *ah) priv_ops->do_getnf = ar5008_hw_do_getnf; priv_ops->set_radar_params = ar5008_hw_set_radar_params; - if (modparam_force_new_ani) { - priv_ops->ani_control = ar5008_hw_ani_control_new; - priv_ops->ani_cache_ini_regs = ar5008_hw_ani_cache_ini_regs; - } else - priv_ops->ani_control = ar5008_hw_ani_control_old; + priv_ops->ani_control = ar5008_hw_ani_control_new; + priv_ops->ani_cache_ini_regs = ar5008_hw_ani_cache_ini_regs; if (AR_SREV_9100(ah) || AR_SREV_9160_10_OR_LATER(ah)) priv_ops->compute_pll_control = ar9160_hw_compute_pll_control; diff --git a/drivers/net/wireless/ath/ath9k/ar9002_hw.c b/drivers/net/wireless/ath/ath9k/ar9002_hw.c index d9a69fc470cd..648da3e885e9 100644 --- a/drivers/net/wireless/ath/ath9k/ar9002_hw.c +++ b/drivers/net/wireless/ath/ath9k/ar9002_hw.c @@ -21,110 +21,79 @@ #include "ar9002_initvals.h" #include "ar9002_phy.h" -int modparam_force_new_ani; -module_param_named(force_new_ani, modparam_force_new_ani, int, 0444); -MODULE_PARM_DESC(force_new_ani, "Force new ANI for AR5008, AR9001, AR9002"); - /* General hardware code for the A5008/AR9001/AR9002 hadware families */ static void ar9002_hw_init_mode_regs(struct ath_hw *ah) { if (AR_SREV_9271(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar9271Modes_9271, - ARRAY_SIZE(ar9271Modes_9271), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar9271Common_9271, - ARRAY_SIZE(ar9271Common_9271), 2); - INIT_INI_ARRAY(&ah->iniModes_9271_ANI_reg, ar9271Modes_9271_ANI_reg, - ARRAY_SIZE(ar9271Modes_9271_ANI_reg), 5); + INIT_INI_ARRAY(&ah->iniModes, ar9271Modes_9271); + INIT_INI_ARRAY(&ah->iniCommon, ar9271Common_9271); + INIT_INI_ARRAY(&ah->iniModes_9271_ANI_reg, ar9271Modes_9271_ANI_reg); return; } if (ah->config.pcie_clock_req) INIT_INI_ARRAY(&ah->iniPcieSerdes, - ar9280PciePhy_clkreq_off_L1_9280, - ARRAY_SIZE(ar9280PciePhy_clkreq_off_L1_9280), 2); + ar9280PciePhy_clkreq_off_L1_9280); else INIT_INI_ARRAY(&ah->iniPcieSerdes, - ar9280PciePhy_clkreq_always_on_L1_9280, - ARRAY_SIZE(ar9280PciePhy_clkreq_always_on_L1_9280), 2); + ar9280PciePhy_clkreq_always_on_L1_9280); +#ifdef CONFIG_PM_SLEEP + INIT_INI_ARRAY(&ah->iniPcieSerdesWow, + ar9280PciePhy_awow); +#endif if (AR_SREV_9287_11_OR_LATER(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar9287Modes_9287_1_1, - ARRAY_SIZE(ar9287Modes_9287_1_1), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar9287Common_9287_1_1, - ARRAY_SIZE(ar9287Common_9287_1_1), 2); + INIT_INI_ARRAY(&ah->iniModes, ar9287Modes_9287_1_1); + INIT_INI_ARRAY(&ah->iniCommon, ar9287Common_9287_1_1); } else if (AR_SREV_9285_12_OR_LATER(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar9285Modes_9285_1_2, - ARRAY_SIZE(ar9285Modes_9285_1_2), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar9285Common_9285_1_2, - 
ARRAY_SIZE(ar9285Common_9285_1_2), 2); + INIT_INI_ARRAY(&ah->iniModes, ar9285Modes_9285_1_2); + INIT_INI_ARRAY(&ah->iniCommon, ar9285Common_9285_1_2); } else if (AR_SREV_9280_20_OR_LATER(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar9280Modes_9280_2, - ARRAY_SIZE(ar9280Modes_9280_2), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar9280Common_9280_2, - ARRAY_SIZE(ar9280Common_9280_2), 2); + INIT_INI_ARRAY(&ah->iniModes, ar9280Modes_9280_2); + INIT_INI_ARRAY(&ah->iniCommon, ar9280Common_9280_2); INIT_INI_ARRAY(&ah->iniModesFastClock, - ar9280Modes_fast_clock_9280_2, - ARRAY_SIZE(ar9280Modes_fast_clock_9280_2), 3); + ar9280Modes_fast_clock_9280_2); } else if (AR_SREV_9160_10_OR_LATER(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar5416Modes_9160, - ARRAY_SIZE(ar5416Modes_9160), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar5416Common_9160, - ARRAY_SIZE(ar5416Common_9160), 2); + INIT_INI_ARRAY(&ah->iniModes, ar5416Modes_9160); + INIT_INI_ARRAY(&ah->iniCommon, ar5416Common_9160); if (AR_SREV_9160_11(ah)) { INIT_INI_ARRAY(&ah->iniAddac, - ar5416Addac_9160_1_1, - ARRAY_SIZE(ar5416Addac_9160_1_1), 2); + ar5416Addac_9160_1_1); } else { - INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac_9160, - ARRAY_SIZE(ar5416Addac_9160), 2); + INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac_9160); } } else if (AR_SREV_9100_OR_LATER(ah)) { - INIT_INI_ARRAY(&ah->iniModes, ar5416Modes_9100, - ARRAY_SIZE(ar5416Modes_9100), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar5416Common_9100, - ARRAY_SIZE(ar5416Common_9100), 2); - INIT_INI_ARRAY(&ah->iniBank6, ar5416Bank6_9100, - ARRAY_SIZE(ar5416Bank6_9100), 3); - INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac_9100, - ARRAY_SIZE(ar5416Addac_9100), 2); + INIT_INI_ARRAY(&ah->iniModes, ar5416Modes_9100); + INIT_INI_ARRAY(&ah->iniCommon, ar5416Common_9100); + INIT_INI_ARRAY(&ah->iniBank6, ar5416Bank6_9100); + INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac_9100); } else { - INIT_INI_ARRAY(&ah->iniModes, ar5416Modes, - ARRAY_SIZE(ar5416Modes), 5); - INIT_INI_ARRAY(&ah->iniCommon, ar5416Common, - ARRAY_SIZE(ar5416Common), 2); - INIT_INI_ARRAY(&ah->iniBank6TPC, ar5416Bank6TPC, - ARRAY_SIZE(ar5416Bank6TPC), 3); - INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac, - ARRAY_SIZE(ar5416Addac), 2); + INIT_INI_ARRAY(&ah->iniModes, ar5416Modes); + INIT_INI_ARRAY(&ah->iniCommon, ar5416Common); + INIT_INI_ARRAY(&ah->iniBank6TPC, ar5416Bank6TPC); + INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac); } if (!AR_SREV_9280_20_OR_LATER(ah)) { /* Common for AR5416, AR913x, AR9160 */ - INIT_INI_ARRAY(&ah->iniBB_RfGain, ar5416BB_RfGain, - ARRAY_SIZE(ar5416BB_RfGain), 3); - - INIT_INI_ARRAY(&ah->iniBank0, ar5416Bank0, - ARRAY_SIZE(ar5416Bank0), 2); - INIT_INI_ARRAY(&ah->iniBank1, ar5416Bank1, - ARRAY_SIZE(ar5416Bank1), 2); - INIT_INI_ARRAY(&ah->iniBank2, ar5416Bank2, - ARRAY_SIZE(ar5416Bank2), 2); - INIT_INI_ARRAY(&ah->iniBank3, ar5416Bank3, - ARRAY_SIZE(ar5416Bank3), 3); - INIT_INI_ARRAY(&ah->iniBank7, ar5416Bank7, - ARRAY_SIZE(ar5416Bank7), 2); + INIT_INI_ARRAY(&ah->iniBB_RfGain, ar5416BB_RfGain); + + INIT_INI_ARRAY(&ah->iniBank0, ar5416Bank0); + INIT_INI_ARRAY(&ah->iniBank1, ar5416Bank1); + INIT_INI_ARRAY(&ah->iniBank2, ar5416Bank2); + INIT_INI_ARRAY(&ah->iniBank3, ar5416Bank3); + INIT_INI_ARRAY(&ah->iniBank7, ar5416Bank7); /* Common for AR5416, AR9160 */ if (!AR_SREV_9100(ah)) - INIT_INI_ARRAY(&ah->iniBank6, ar5416Bank6, - ARRAY_SIZE(ar5416Bank6), 3); + INIT_INI_ARRAY(&ah->iniBank6, ar5416Bank6); /* Common for AR913x, AR9160 */ if (!AR_SREV_5416(ah)) - INIT_INI_ARRAY(&ah->iniBank6TPC, ar5416Bank6TPC_9100, - ARRAY_SIZE(ar5416Bank6TPC_9100), 3); + 
INIT_INI_ARRAY(&ah->iniBank6TPC, + ar5416Bank6TPC_9100); } /* iniAddac needs to be modified for these chips */ @@ -147,13 +116,9 @@ static void ar9002_hw_init_mode_regs(struct ath_hw *ah) } if (AR_SREV_9287_11_OR_LATER(ah)) { INIT_INI_ARRAY(&ah->iniCckfirNormal, - ar9287Common_normal_cck_fir_coeff_9287_1_1, - ARRAY_SIZE(ar9287Common_normal_cck_fir_coeff_9287_1_1), - 2); + ar9287Common_normal_cck_fir_coeff_9287_1_1); INIT_INI_ARRAY(&ah->iniCckfirJapan2484, - ar9287Common_japan_2484_cck_fir_coeff_9287_1_1, - ARRAY_SIZE(ar9287Common_japan_2484_cck_fir_coeff_9287_1_1), - 2); + ar9287Common_japan_2484_cck_fir_coeff_9287_1_1); } } @@ -167,20 +132,16 @@ static void ar9280_20_hw_init_rxgain_ini(struct ath_hw *ah) if (rxgain_type == AR5416_EEP_RXGAIN_13DB_BACKOFF) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9280Modes_backoff_13db_rxgain_9280_2, - ARRAY_SIZE(ar9280Modes_backoff_13db_rxgain_9280_2), 5); + ar9280Modes_backoff_13db_rxgain_9280_2); else if (rxgain_type == AR5416_EEP_RXGAIN_23DB_BACKOFF) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9280Modes_backoff_23db_rxgain_9280_2, - ARRAY_SIZE(ar9280Modes_backoff_23db_rxgain_9280_2), 5); + ar9280Modes_backoff_23db_rxgain_9280_2); else INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9280Modes_original_rxgain_9280_2, - ARRAY_SIZE(ar9280Modes_original_rxgain_9280_2), 5); + ar9280Modes_original_rxgain_9280_2); } else { INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9280Modes_original_rxgain_9280_2, - ARRAY_SIZE(ar9280Modes_original_rxgain_9280_2), 5); + ar9280Modes_original_rxgain_9280_2); } } @@ -190,16 +151,13 @@ static void ar9280_20_hw_init_txgain_ini(struct ath_hw *ah, u32 txgain_type) AR5416_EEP_MINOR_VER_19) { if (txgain_type == AR5416_EEP_TXGAIN_HIGH_POWER) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9280Modes_high_power_tx_gain_9280_2, - ARRAY_SIZE(ar9280Modes_high_power_tx_gain_9280_2), 5); + ar9280Modes_high_power_tx_gain_9280_2); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9280Modes_original_tx_gain_9280_2, - ARRAY_SIZE(ar9280Modes_original_tx_gain_9280_2), 5); + ar9280Modes_original_tx_gain_9280_2); } else { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9280Modes_original_tx_gain_9280_2, - ARRAY_SIZE(ar9280Modes_original_tx_gain_9280_2), 5); + ar9280Modes_original_tx_gain_9280_2); } } @@ -207,12 +165,10 @@ static void ar9271_hw_init_txgain_ini(struct ath_hw *ah, u32 txgain_type) { if (txgain_type == AR5416_EEP_TXGAIN_HIGH_POWER) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9271Modes_high_power_tx_gain_9271, - ARRAY_SIZE(ar9271Modes_high_power_tx_gain_9271), 5); + ar9271Modes_high_power_tx_gain_9271); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9271Modes_normal_power_tx_gain_9271, - ARRAY_SIZE(ar9271Modes_normal_power_tx_gain_9271), 5); + ar9271Modes_normal_power_tx_gain_9271); } static void ar9002_hw_init_mode_gain_regs(struct ath_hw *ah) @@ -221,8 +177,7 @@ static void ar9002_hw_init_mode_gain_regs(struct ath_hw *ah) if (AR_SREV_9287_11_OR_LATER(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9287Modes_rx_gain_9287_1_1, - ARRAY_SIZE(ar9287Modes_rx_gain_9287_1_1), 5); + ar9287Modes_rx_gain_9287_1_1); else if (AR_SREV_9280_20(ah)) ar9280_20_hw_init_rxgain_ini(ah); @@ -230,8 +185,7 @@ static void ar9002_hw_init_mode_gain_regs(struct ath_hw *ah) ar9271_hw_init_txgain_ini(ah, txgain_type); } else if (AR_SREV_9287_11_OR_LATER(ah)) { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9287Modes_tx_gain_9287_1_1, - ARRAY_SIZE(ar9287Modes_tx_gain_9287_1_1), 5); + ar9287Modes_tx_gain_9287_1_1); } else if (AR_SREV_9280_20(ah)) { ar9280_20_hw_init_txgain_ini(ah, txgain_type); } else if 
(AR_SREV_9285_12_OR_LATER(ah)) { @@ -239,26 +193,18 @@ static void ar9002_hw_init_mode_gain_regs(struct ath_hw *ah) if (txgain_type == AR5416_EEP_TXGAIN_HIGH_POWER) { if (AR_SREV_9285E_20(ah)) { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9285Modes_XE2_0_high_power, - ARRAY_SIZE( - ar9285Modes_XE2_0_high_power), 5); + ar9285Modes_XE2_0_high_power); } else { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9285Modes_high_power_tx_gain_9285_1_2, - ARRAY_SIZE( - ar9285Modes_high_power_tx_gain_9285_1_2), 5); + ar9285Modes_high_power_tx_gain_9285_1_2); } } else { if (AR_SREV_9285E_20(ah)) { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9285Modes_XE2_0_normal_power, - ARRAY_SIZE( - ar9285Modes_XE2_0_normal_power), 5); + ar9285Modes_XE2_0_normal_power); } else { INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9285Modes_original_tx_gain_9285_1_2, - ARRAY_SIZE( - ar9285Modes_original_tx_gain_9285_1_2), 5); + ar9285Modes_original_tx_gain_9285_1_2); } } } diff --git a/drivers/net/wireless/ath/ath9k/ar9002_initvals.h b/drivers/net/wireless/ath/ath9k/ar9002_initvals.h index 4d18c66a6790..beb6162cf97c 100644 --- a/drivers/net/wireless/ath/ath9k/ar9002_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9002_initvals.h @@ -925,6 +925,20 @@ static const u32 ar9280PciePhy_clkreq_always_on_L1_9280[][2] = { {0x00004044, 0x00000000}, }; +static const u32 ar9280PciePhy_awow[][2] = { + /* Addr allmodes */ + {0x00004040, 0x9248fd00}, + {0x00004040, 0x24924924}, + {0x00004040, 0xa8000019}, + {0x00004040, 0x13160820}, + {0x00004040, 0xe5980560}, + {0x00004040, 0xc01dcffd}, + {0x00004040, 0x1aaabe41}, + {0x00004040, 0xbe105554}, + {0x00004040, 0x00043007}, + {0x00004044, 0x00000000}, +}; + static const u32 ar9285Modes_9285_1_2[][5] = { /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, diff --git a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h index 952cb2b4656b..89bf94d4d8a1 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h @@ -1,5 +1,6 @@ /* * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above diff --git a/drivers/net/wireless/ath/ath9k/ar9003_calib.c b/drivers/net/wireless/ath/ath9k/ar9003_calib.c index 9fdd70fcaf5b..84b558d126ca 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_calib.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_calib.c @@ -159,14 +159,11 @@ static bool ar9003_hw_calibrate(struct ath_hw *ah, } } - /* Do NF cal only at longer intervals */ - if (longcal) { - /* - * Get the value from the previous NF cal and update - * history buffer. - */ - ath9k_hw_getnf(ah, chan); - + /* + * Do NF cal only at longer intervals. Get the value from + * the previous NF cal and update history buffer. + */ + if (longcal && ath9k_hw_getnf(ah, chan)) { /* * Load the NF from history buffer of the current channel. 
* NF is slow time-variant, so it is OK to use a historical @@ -653,7 +650,6 @@ static void ar9003_hw_detect_outlier(int *mp_coeff, int nmeasurement, } static void ar9003_hw_tx_iqcal_load_avg_2_passes(struct ath_hw *ah, - u8 num_chains, struct coeff *coeff, bool is_reusable) { @@ -677,7 +673,9 @@ static void ar9003_hw_tx_iqcal_load_avg_2_passes(struct ath_hw *ah, } /* Load the average of 2 passes */ - for (i = 0; i < num_chains; i++) { + for (i = 0; i < AR9300_MAX_CHAINS; i++) { + if (!(ah->txchainmask & (1 << i))) + continue; nmeasurement = REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_STATUS_B0, AR_PHY_CALIBRATED_GAINS_0); @@ -767,16 +765,13 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah, bool is_reusable) }; struct coeff coeff; s32 iq_res[6]; - u8 num_chains = 0; int i, im, j; int nmeasurement; for (i = 0; i < AR9300_MAX_CHAINS; i++) { - if (ah->txchainmask & (1 << i)) - num_chains++; - } + if (!(ah->txchainmask & (1 << i))) + continue; - for (i = 0; i < num_chains; i++) { nmeasurement = REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_STATUS_B0, AR_PHY_CALIBRATED_GAINS_0); @@ -839,8 +834,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah, bool is_reusable) coeff.phs_coeff[i][im] -= 128; } } - ar9003_hw_tx_iqcal_load_avg_2_passes(ah, num_chains, - &coeff, is_reusable); + ar9003_hw_tx_iqcal_load_avg_2_passes(ah, &coeff, is_reusable); return; @@ -901,7 +895,6 @@ static bool ar9003_hw_init_cal(struct ath_hw *ah, bool is_reusable = true, status = true; bool run_rtt_cal = false, run_agc_cal; bool rtt = !!(ah->caps.hw_caps & ATH9K_HW_CAP_RTT); - bool mci = !!(ah->caps.hw_caps & ATH9K_HW_CAP_MCI); u32 agc_ctrl = 0, agc_supp_cals = AR_PHY_AGC_CONTROL_OFFSET_CAL | AR_PHY_AGC_CONTROL_FLTR_CAL | AR_PHY_AGC_CONTROL_PKDET_CAL; @@ -970,7 +963,7 @@ static bool ar9003_hw_init_cal(struct ath_hw *ah, } else if (caldata && !caldata->done_txiqcal_once) run_agc_cal = true; - if (mci && IS_CHAN_2GHZ(chan) && run_agc_cal) + if (ath9k_hw_mci_is_enabled(ah) && IS_CHAN_2GHZ(chan) && run_agc_cal) ar9003_mci_init_cal_req(ah, &is_reusable); if (!(IS_CHAN_HALF_RATE(chan) || IS_CHAN_QUARTER_RATE(chan))) { @@ -993,7 +986,7 @@ skip_tx_iqcal: 0, AH_WAIT_TIMEOUT); } - if (mci && IS_CHAN_2GHZ(chan) && run_agc_cal) + if (ath9k_hw_mci_is_enabled(ah) && IS_CHAN_2GHZ(chan) && run_agc_cal) ar9003_mci_init_cal_done(ah); if (rtt && !run_rtt_cal) { diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c index dfb0441f406c..2588848f4a82 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c @@ -131,8 +131,9 @@ static const struct ar9300_eeprom ar9300_default = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0cf0e0e0), .papdRateMaskHt40 = LE32(0x6cf0e0e0), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext1 = { @@ -331,8 +332,9 @@ static const struct ar9300_eeprom ar9300_default = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0c80c080), .papdRateMaskHt40 = LE32(0x0080c080), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext2 = { @@ -704,8 +706,9 @@ static const struct ar9300_eeprom ar9300_x113 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0c80c080), .papdRateMaskHt40 = LE32(0x0080c080), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext1 = { @@ -904,8 +907,9 @@ static const struct ar9300_eeprom ar9300_x113 = { .thresh62 = 28, .papdRateMaskHt20 = 
LE32(0x0cf0e0e0), .papdRateMaskHt40 = LE32(0x6cf0e0e0), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext2 = { @@ -1278,8 +1282,9 @@ static const struct ar9300_eeprom ar9300_h112 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0c80c080), .papdRateMaskHt40 = LE32(0x0080c080), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext1 = { @@ -1478,8 +1483,9 @@ static const struct ar9300_eeprom ar9300_h112 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0cf0e0e0), .papdRateMaskHt40 = LE32(0x6cf0e0e0), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext2 = { @@ -1852,8 +1858,9 @@ static const struct ar9300_eeprom ar9300_x112 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0c80c080), .papdRateMaskHt40 = LE32(0x0080c080), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext1 = { @@ -2052,8 +2059,9 @@ static const struct ar9300_eeprom ar9300_x112 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0cf0e0e0), .papdRateMaskHt40 = LE32(0x6cf0e0e0), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext2 = { @@ -2425,8 +2433,9 @@ static const struct ar9300_eeprom ar9300_h116 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0c80C080), .papdRateMaskHt40 = LE32(0x0080C080), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext1 = { @@ -2625,8 +2634,9 @@ static const struct ar9300_eeprom ar9300_h116 = { .thresh62 = 28, .papdRateMaskHt20 = LE32(0x0cf0e0e0), .papdRateMaskHt40 = LE32(0x6cf0e0e0), + .xlna_bias_strength = 0, .futureModal = { - 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, }, }, .base_ext2 = { @@ -2971,14 +2981,6 @@ static u32 ath9k_hw_ar9300_get_eeprom(struct ath_hw *ah, return (pBase->txrxMask >> 4) & 0xf; case EEP_RX_MASK: return pBase->txrxMask & 0xf; - case EEP_DRIVE_STRENGTH: -#define AR9300_EEP_BASE_DRIV_STRENGTH 0x1 - return pBase->miscConfiguration & AR9300_EEP_BASE_DRIV_STRENGTH; - case EEP_INTERNAL_REGULATOR: - /* Bit 4 is internal regulator flag */ - return (pBase->featureEnable & 0x10) >> 4; - case EEP_SWREG: - return le32_to_cpu(pBase->swreg); case EEP_PAPRD: return !!(pBase->featureEnable & BIT(5)); case EEP_CHAIN_MASK_REDUCE: @@ -2989,8 +2991,6 @@ static u32 ath9k_hw_ar9300_get_eeprom(struct ath_hw *ah, return eep->modalHeader5G.antennaGain; case EEP_ANTENNA_GAIN_2G: return eep->modalHeader2G.antennaGain; - case EEP_QUICK_DROP: - return pBase->miscConfiguration & BIT(1); default: return 0; } @@ -3178,7 +3178,7 @@ static int ar9300_compress_decision(struct ath_hw *ah, mdata_size, length); return -1; } - memcpy(mptr, (u8 *) (word + COMP_HDR_LEN), length); + memcpy(mptr, word + COMP_HDR_LEN, length); ath_dbg(common, EEPROM, "restored eeprom %d: uncompressed, length %d\n", it, length); @@ -3199,7 +3199,7 @@ static int ar9300_compress_decision(struct ath_hw *ah, "restore eeprom %d: block, reference %d, length %d\n", it, reference, length); ar9300_uncompress_block(ah, mptr, mdata_size, - (u8 *) (word + COMP_HDR_LEN), length); + (word + COMP_HDR_LEN), length); break; default: ath_dbg(common, EEPROM, "unknown compression code %d\n", code); @@ -3260,10 +3260,20 @@ static int ar9300_eeprom_restore_internal(struct ath_hw *ah, int it; u16 checksum, mchecksum; struct ath_common *common = ath9k_hw_common(ah); + struct ar9300_eeprom *eep; eeprom_read_op read; - if 
(ath9k_hw_use_flash(ah)) - return ar9300_eeprom_restore_flash(ah, mptr, mdata_size); + if (ath9k_hw_use_flash(ah)) { + u8 txrx; + + ar9300_eeprom_restore_flash(ah, mptr, mdata_size); + + /* check if eeprom contains valid data */ + eep = (struct ar9300_eeprom *) mptr; + txrx = eep->baseEepHeader.txrxMask; + if (txrx != 0 && txrx != 0xff) + return 0; + } word = kzalloc(2048, GFP_KERNEL); if (!word) @@ -3412,11 +3422,11 @@ static u32 ath9k_hw_ar9003_dump_eeprom(struct ath_hw *ah, bool dump_base_hdr, if (!dump_base_hdr) { len += snprintf(buf + len, size - len, "%20s :\n", "2GHz modal Header"); - len += ar9003_dump_modal_eeprom(buf, len, size, + len = ar9003_dump_modal_eeprom(buf, len, size, &eep->modalHeader2G); len += snprintf(buf + len, size - len, "%20s :\n", "5GHz modal Header"); - len += ar9003_dump_modal_eeprom(buf, len, size, + len = ar9003_dump_modal_eeprom(buf, len, size, &eep->modalHeader5G); goto out; } @@ -3493,23 +3503,24 @@ static int ath9k_hw_ar9300_get_eeprom_rev(struct ath_hw *ah) return 0; } -static s32 ar9003_hw_xpa_bias_level_get(struct ath_hw *ah, bool is2ghz) +static struct ar9300_modal_eep_header *ar9003_modal_header(struct ath_hw *ah, + bool is2ghz) { struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; if (is2ghz) - return eep->modalHeader2G.xpaBiasLvl; + return &eep->modalHeader2G; else - return eep->modalHeader5G.xpaBiasLvl; + return &eep->modalHeader5G; } static void ar9003_hw_xpa_bias_level_apply(struct ath_hw *ah, bool is2ghz) { - int bias = ar9003_hw_xpa_bias_level_get(ah, is2ghz); + int bias = ar9003_modal_header(ah, is2ghz)->xpaBiasLvl; if (AR_SREV_9485(ah) || AR_SREV_9330(ah) || AR_SREV_9340(ah)) REG_RMW_FIELD(ah, AR_CH0_TOP2, AR_CH0_TOP2_XPABIASLVL, bias); - else if (AR_SREV_9462(ah)) + else if (AR_SREV_9462(ah) || AR_SREV_9550(ah)) REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias); else { REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias); @@ -3521,57 +3532,26 @@ static void ar9003_hw_xpa_bias_level_apply(struct ath_hw *ah, bool is2ghz) } } -static u16 ar9003_switch_com_spdt_get(struct ath_hw *ah, bool is_2ghz) +static u16 ar9003_switch_com_spdt_get(struct ath_hw *ah, bool is2ghz) { - struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; - __le16 val; - - if (is_2ghz) - val = eep->modalHeader2G.switchcomspdt; - else - val = eep->modalHeader5G.switchcomspdt; - return le16_to_cpu(val); + return le16_to_cpu(ar9003_modal_header(ah, is2ghz)->switchcomspdt); } static u32 ar9003_hw_ant_ctrl_common_get(struct ath_hw *ah, bool is2ghz) { - struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; - __le32 val; - - if (is2ghz) - val = eep->modalHeader2G.antCtrlCommon; - else - val = eep->modalHeader5G.antCtrlCommon; - return le32_to_cpu(val); + return le32_to_cpu(ar9003_modal_header(ah, is2ghz)->antCtrlCommon); } static u32 ar9003_hw_ant_ctrl_common_2_get(struct ath_hw *ah, bool is2ghz) { - struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; - __le32 val; - - if (is2ghz) - val = eep->modalHeader2G.antCtrlCommon2; - else - val = eep->modalHeader5G.antCtrlCommon2; - return le32_to_cpu(val); + return le32_to_cpu(ar9003_modal_header(ah, is2ghz)->antCtrlCommon2); } -static u16 ar9003_hw_ant_ctrl_chain_get(struct ath_hw *ah, - int chain, +static u16 ar9003_hw_ant_ctrl_chain_get(struct ath_hw *ah, int chain, bool is2ghz) { - struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; - __le16 val = 0; - - if (chain >= 0 && chain < AR9300_MAX_CHAINS) { - if (is2ghz) - val = eep->modalHeader2G.antCtrlChain[chain]; - else - val = eep->modalHeader5G.antCtrlChain[chain]; - } - + 
	__le16 val = ar9003_modal_header(ah, is2ghz)->antCtrlChain[chain];
	return le16_to_cpu(val);
}
@@ -3591,6 +3571,9 @@ static void ar9003_hw_ant_ctrl_apply(struct ath_hw *ah, bool is2ghz)
	if (AR_SREV_9462(ah)) {
		REG_RMW_FIELD(ah, AR_PHY_SWITCH_COM, AR_SWITCH_TABLE_COM_AR9462_ALL, value);
+	} else if (AR_SREV_9550(ah)) {
+		REG_RMW_FIELD(ah, AR_PHY_SWITCH_COM,
+			      AR_SWITCH_TABLE_COM_AR9550_ALL, value);
	} else
		REG_RMW_FIELD(ah, AR_PHY_SWITCH_COM, AR_SWITCH_TABLE_COM_ALL, value);
@@ -3613,6 +3596,7 @@ static void ar9003_hw_ant_ctrl_apply(struct ath_hw *ah, bool is2ghz)
		value = ar9003_switch_com_spdt_get(ah, is2ghz);
		REG_RMW_FIELD(ah, AR_PHY_GLB_CONTROL, AR_SWITCH_TABLE_COM_SPDT_ALL, value);
+		REG_SET_BIT(ah, AR_PHY_GLB_CONTROL, AR_BTCOEX_CTRL_SPDT_ENABLE);
	}
	value = ar9003_hw_ant_ctrl_common_2_get(ah, is2ghz);
@@ -3677,11 +3661,12 @@ static void ar9003_hw_ant_ctrl_apply(struct ath_hw *ah, bool is2ghz)
static void ar9003_hw_drive_strength_apply(struct ath_hw *ah)
{
+	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
+	struct ar9300_base_eep_hdr *pBase = &eep->baseEepHeader;
	int drive_strength;
	unsigned long reg;
-	drive_strength = ath9k_hw_ar9300_get_eeprom(ah, EEP_DRIVE_STRENGTH);
-
+	drive_strength = pBase->miscConfiguration & BIT(0);
	if (!drive_strength)
		return;
@@ -3811,11 +3796,11 @@ static bool is_pmu_set(struct ath_hw *ah, u32 pmu_reg, int pmu_set)
void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
{
-	int internal_regulator =
-		ath9k_hw_ar9300_get_eeprom(ah, EEP_INTERNAL_REGULATOR);
+	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
+	struct ar9300_base_eep_hdr *pBase = &eep->baseEepHeader;
	u32 reg_val;
-	if (internal_regulator) {
+	if (pBase->featureEnable & BIT(4)) {
		if (AR_SREV_9330(ah) || AR_SREV_9485(ah)) {
			int reg_pmu_set;
@@ -3859,11 +3844,11 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
			if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
				return;
		} else if (AR_SREV_9462(ah)) {
-			reg_val = ath9k_hw_ar9300_get_eeprom(ah, EEP_SWREG);
+			reg_val = le32_to_cpu(pBase->swreg);
			REG_WRITE(ah, AR_PHY_PMU1, reg_val);
		} else {
			/* Internal regulator is ON. Write swreg register. */
-			reg_val = ath9k_hw_ar9300_get_eeprom(ah, EEP_SWREG);
+			reg_val = le32_to_cpu(pBase->swreg);
			REG_WRITE(ah, AR_RTC_REG_CONTROL1,
				  REG_READ(ah, AR_RTC_REG_CONTROL1) &
				  (~AR_RTC_REG_CONTROL1_SWREG_PROGRAM));
@@ -3905,6 +3890,9 @@ static void ar9003_hw_apply_tuning_caps(struct ath_hw *ah)
	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
	u8 tuning_caps_param = eep->baseEepHeader.params_for_tuning_caps[0];
+	if (AR_SREV_9485(ah) || AR_SREV_9330(ah) || AR_SREV_9340(ah))
+		return;
+
	if (eep->baseEepHeader.featureEnable & 0x40) {
		tuning_caps_param &= 0x7f;
		REG_RMW_FIELD(ah, AR_CH0_XTAL, AR_CH0_XTAL_CAPINDAC,
@@ -3917,10 +3905,11 @@ static void ar9003_hw_quick_drop_apply(struct ath_hw *ah, u16 freq)
{
	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
-	int quick_drop = ath9k_hw_ar9300_get_eeprom(ah, EEP_QUICK_DROP);
+	struct ar9300_base_eep_hdr *pBase = &eep->baseEepHeader;
+	int quick_drop;
	s32 t[3], f[3] = {5180, 5500, 5785};
-	if (!quick_drop)
+	if (!(pBase->miscConfiguration & BIT(1)))
		return;
	if (freq < 4000)
@@ -3934,13 +3923,11 @@ static void ar9003_hw_quick_drop_apply(struct ath_hw *ah, u16 freq)
	REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop);
}
-static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, u16 freq)
+static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, bool is2ghz)
{
-	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
	u32 value;
-	value = (freq < 4000) ? eep->modalHeader2G.txEndToXpaOff :
-		eep->modalHeader5G.txEndToXpaOff;
+	value = ar9003_modal_header(ah, is2ghz)->txEndToXpaOff;
	REG_RMW_FIELD(ah, AR_PHY_XPA_TIMING_CTL,
		      AR_PHY_XPA_TIMING_CTL_TX_END_XPAB_OFF, value);
@@ -3948,19 +3935,63 @@ static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, u16 freq)
		      AR_PHY_XPA_TIMING_CTL_TX_END_XPAA_OFF, value);
}
+static void ar9003_hw_xpa_timing_control_apply(struct ath_hw *ah, bool is2ghz)
+{
+	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
+	u8 xpa_ctl;
+
+	if (!(eep->baseEepHeader.featureEnable & 0x80))
+		return;
+
+	if (!AR_SREV_9300(ah) && !AR_SREV_9340(ah) && !AR_SREV_9580(ah))
+		return;
+
+	xpa_ctl = ar9003_modal_header(ah, is2ghz)->txFrameToXpaOn;
+	if (is2ghz)
+		REG_RMW_FIELD(ah, AR_PHY_XPA_TIMING_CTL,
+			      AR_PHY_XPA_TIMING_CTL_FRAME_XPAB_ON, xpa_ctl);
+	else
+		REG_RMW_FIELD(ah, AR_PHY_XPA_TIMING_CTL,
+			      AR_PHY_XPA_TIMING_CTL_FRAME_XPAA_ON, xpa_ctl);
+}
+
+static void ar9003_hw_xlna_bias_strength_apply(struct ath_hw *ah, bool is2ghz)
+{
+	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
+	u8 bias;
+
+	if (!(eep->baseEepHeader.featureEnable & 0x40))
+		return;
+
+	if (!AR_SREV_9300(ah))
+		return;
+
+	bias = ar9003_modal_header(ah, is2ghz)->xlna_bias_strength;
+	REG_RMW_FIELD(ah, AR_PHY_65NM_CH0_RXTX4, AR_PHY_65NM_RXTX4_XLNA_BIAS,
+		      bias & 0x3);
+	bias >>= 2;
+	REG_RMW_FIELD(ah, AR_PHY_65NM_CH1_RXTX4, AR_PHY_65NM_RXTX4_XLNA_BIAS,
+		      bias & 0x3);
+	bias >>= 2;
+	REG_RMW_FIELD(ah, AR_PHY_65NM_CH2_RXTX4, AR_PHY_65NM_RXTX4_XLNA_BIAS,
+		      bias & 0x3);
+}
+
static void ath9k_hw_ar9300_set_board_values(struct ath_hw *ah,
					     struct ath9k_channel *chan)
{
-	ar9003_hw_xpa_bias_level_apply(ah, IS_CHAN_2GHZ(chan));
-	ar9003_hw_ant_ctrl_apply(ah, IS_CHAN_2GHZ(chan));
+	bool is2ghz = IS_CHAN_2GHZ(chan);
+	ar9003_hw_xpa_timing_control_apply(ah, is2ghz);
+	ar9003_hw_xpa_bias_level_apply(ah, is2ghz);
+	ar9003_hw_ant_ctrl_apply(ah, is2ghz);
	ar9003_hw_drive_strength_apply(ah);
+	ar9003_hw_xlna_bias_strength_apply(ah, is2ghz);
	ar9003_hw_atten_apply(ah, chan);
	ar9003_hw_quick_drop_apply(ah, chan->channel);
-	if (!AR_SREV_9330(ah) && !AR_SREV_9340(ah))
+	if (!AR_SREV_9330(ah) && !AR_SREV_9340(ah) && !AR_SREV_9550(ah))
		ar9003_hw_internal_regulator_apply(ah);
-	if (AR_SREV_9485(ah) || AR_SREV_9330(ah) || AR_SREV_9340(ah))
-		ar9003_hw_apply_tuning_caps(ah);
-	ar9003_hw_txend_to_xpa_off_apply(ah, chan->channel);
+	ar9003_hw_apply_tuning_caps(ah);
+	ar9003_hw_txend_to_xpa_off_apply(ah, is2ghz);
}
static void ath9k_hw_ar9300_set_addac(struct ath_hw *ah,
@@ -5096,14 +5127,9 @@ s32 ar9003_hw_get_rx_gain_idx(struct ath_hw *ah)
	return (eep->baseEepHeader.txrxgain) & 0xf; /* bits 3:0 */
}
-u8 *ar9003_get_spur_chan_ptr(struct ath_hw *ah, bool is_2ghz)
+u8 *ar9003_get_spur_chan_ptr(struct ath_hw *ah, bool is2ghz)
{
-	struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
-
-	if (is_2ghz)
-		return eep->modalHeader2G.spurChans;
-	else
-		return eep->modalHeader5G.spurChans;
+	return ar9003_modal_header(ah, is2ghz)->spurChans;
}
unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
index 8396d150ce01..3a1ff55bceb9 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
+++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
@@ -231,7 +231,8 @@ struct ar9300_modal_eep_header {
	__le32 papdRateMaskHt20;
	__le32 papdRateMaskHt40;
	__le16 switchcomspdt;
-	u8 futureModal[8];
+	u8 xlna_bias_strength;
+	u8 futureModal[7];
} __packed;
struct ar9300_cal_data_per_freq_op_loop {
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
index a0e3394b10dc..1e8a4da5952f 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
@@ -21,6 +21,7 @@
#include "ar9340_initvals.h"
#include "ar9330_1p1_initvals.h"
#include "ar9330_1p2_initvals.h"
+#include "ar955x_1p0_initvals.h"
#include "ar9580_1p0_initvals.h"
#include "ar9462_2p0_initvals.h"
@@ -43,408 +44,310 @@ static void ar9003_hw_init_mode_regs(struct ath_hw *ah)
		ar9462_2p0_baseband_core_txfir_coeff_japan_2484
	if (AR_SREV_9330_11(ah)) {
		/* mac */
-		INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0);
		INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE],
-				ar9331_1p1_mac_core,
-				ARRAY_SIZE(ar9331_1p1_mac_core), 2);
+				ar9331_1p1_mac_core);
		INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST],
-				ar9331_1p1_mac_postamble,
-				ARRAY_SIZE(ar9331_1p1_mac_postamble), 5);
+				ar9331_1p1_mac_postamble);
		/* bb */
-		INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0);
		INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE],
-				ar9331_1p1_baseband_core,
-				ARRAY_SIZE(ar9331_1p1_baseband_core), 2);
+				ar9331_1p1_baseband_core);
		INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST],
-				ar9331_1p1_baseband_postamble,
-				ARRAY_SIZE(ar9331_1p1_baseband_postamble), 5);
+				ar9331_1p1_baseband_postamble);
		/* radio */
-		INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0);
		INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE],
-				ar9331_1p1_radio_core,
-				ARRAY_SIZE(ar9331_1p1_radio_core), 2);
-		INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], NULL, 0, 0);
+				ar9331_1p1_radio_core);
		/* soc */
		INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE],
-				ar9331_1p1_soc_preamble,
-				ARRAY_SIZE(ar9331_1p1_soc_preamble), 2);
-		INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0);
+				ar9331_1p1_soc_preamble);
		INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST],
-				ar9331_1p1_soc_postamble,
-				ARRAY_SIZE(ar9331_1p1_soc_postamble), 2);
+				ar9331_1p1_soc_postamble);
		/* rx/tx gain */
		INIT_INI_ARRAY(&ah->iniModesRxGain,
-				ar9331_common_rx_gain_1p1,
-				ARRAY_SIZE(ar9331_common_rx_gain_1p1), 2);
+
ar9331_common_rx_gain_1p1); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_lowest_ob_db_tx_gain_1p1, - ARRAY_SIZE(ar9331_modes_lowest_ob_db_tx_gain_1p1), - 5); + ar9331_modes_lowest_ob_db_tx_gain_1p1); /* additional clock settings */ if (ah->is_clk_25mhz) INIT_INI_ARRAY(&ah->iniAdditional, - ar9331_1p1_xtal_25M, - ARRAY_SIZE(ar9331_1p1_xtal_25M), 2); + ar9331_1p1_xtal_25M); else INIT_INI_ARRAY(&ah->iniAdditional, - ar9331_1p1_xtal_40M, - ARRAY_SIZE(ar9331_1p1_xtal_40M), 2); + ar9331_1p1_xtal_40M); } else if (AR_SREV_9330_12(ah)) { /* mac */ - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], - ar9331_1p2_mac_core, - ARRAY_SIZE(ar9331_1p2_mac_core), 2); + ar9331_1p2_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9331_1p2_mac_postamble, - ARRAY_SIZE(ar9331_1p2_mac_postamble), 5); + ar9331_1p2_mac_postamble); /* bb */ - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9331_1p2_baseband_core, - ARRAY_SIZE(ar9331_1p2_baseband_core), 2); + ar9331_1p2_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9331_1p2_baseband_postamble, - ARRAY_SIZE(ar9331_1p2_baseband_postamble), 5); + ar9331_1p2_baseband_postamble); /* radio */ - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9331_1p2_radio_core, - ARRAY_SIZE(ar9331_1p2_radio_core), 2); - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], NULL, 0, 0); + ar9331_1p2_radio_core); /* soc */ INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9331_1p2_soc_preamble, - ARRAY_SIZE(ar9331_1p2_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); + ar9331_1p2_soc_preamble); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], - ar9331_1p2_soc_postamble, - ARRAY_SIZE(ar9331_1p2_soc_postamble), 2); + ar9331_1p2_soc_postamble); /* rx/tx gain */ INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9331_common_rx_gain_1p2, - ARRAY_SIZE(ar9331_common_rx_gain_1p2), 2); + ar9331_common_rx_gain_1p2); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_lowest_ob_db_tx_gain_1p2, - ARRAY_SIZE(ar9331_modes_lowest_ob_db_tx_gain_1p2), - 5); + ar9331_modes_lowest_ob_db_tx_gain_1p2); /* additional clock settings */ if (ah->is_clk_25mhz) INIT_INI_ARRAY(&ah->iniAdditional, - ar9331_1p2_xtal_25M, - ARRAY_SIZE(ar9331_1p2_xtal_25M), 2); + ar9331_1p2_xtal_25M); else INIT_INI_ARRAY(&ah->iniAdditional, - ar9331_1p2_xtal_40M, - ARRAY_SIZE(ar9331_1p2_xtal_40M), 2); + ar9331_1p2_xtal_40M); } else if (AR_SREV_9340(ah)) { /* mac */ - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], - ar9340_1p0_mac_core, - ARRAY_SIZE(ar9340_1p0_mac_core), 2); + ar9340_1p0_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9340_1p0_mac_postamble, - ARRAY_SIZE(ar9340_1p0_mac_postamble), 5); + ar9340_1p0_mac_postamble); /* bb */ - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9340_1p0_baseband_core, - ARRAY_SIZE(ar9340_1p0_baseband_core), 2); + ar9340_1p0_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9340_1p0_baseband_postamble, - ARRAY_SIZE(ar9340_1p0_baseband_postamble), 5); + ar9340_1p0_baseband_postamble); /* radio */ - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9340_1p0_radio_core, - ARRAY_SIZE(ar9340_1p0_radio_core), 2); + ar9340_1p0_radio_core); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], - ar9340_1p0_radio_postamble, - ARRAY_SIZE(ar9340_1p0_radio_postamble), 5); + 
ar9340_1p0_radio_postamble); /* soc */ INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9340_1p0_soc_preamble, - ARRAY_SIZE(ar9340_1p0_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); + ar9340_1p0_soc_preamble); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], - ar9340_1p0_soc_postamble, - ARRAY_SIZE(ar9340_1p0_soc_postamble), 5); + ar9340_1p0_soc_postamble); /* rx/tx gain */ INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9340Common_wo_xlna_rx_gain_table_1p0, - ARRAY_SIZE(ar9340Common_wo_xlna_rx_gain_table_1p0), - 5); + ar9340Common_wo_xlna_rx_gain_table_1p0); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9340Modes_high_ob_db_tx_gain_table_1p0, - ARRAY_SIZE(ar9340Modes_high_ob_db_tx_gain_table_1p0), - 5); + ar9340Modes_high_ob_db_tx_gain_table_1p0); INIT_INI_ARRAY(&ah->iniModesFastClock, - ar9340Modes_fast_clock_1p0, - ARRAY_SIZE(ar9340Modes_fast_clock_1p0), - 3); + ar9340Modes_fast_clock_1p0); if (!ah->is_clk_25mhz) INIT_INI_ARRAY(&ah->iniAdditional, - ar9340_1p0_radio_core_40M, - ARRAY_SIZE(ar9340_1p0_radio_core_40M), - 2); + ar9340_1p0_radio_core_40M); } else if (AR_SREV_9485_11(ah)) { /* mac */ - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], - ar9485_1_1_mac_core, - ARRAY_SIZE(ar9485_1_1_mac_core), 2); + ar9485_1_1_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9485_1_1_mac_postamble, - ARRAY_SIZE(ar9485_1_1_mac_postamble), 5); + ar9485_1_1_mac_postamble); /* bb */ - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], ar9485_1_1, - ARRAY_SIZE(ar9485_1_1), 2); + INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], ar9485_1_1); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9485_1_1_baseband_core, - ARRAY_SIZE(ar9485_1_1_baseband_core), 2); + ar9485_1_1_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9485_1_1_baseband_postamble, - ARRAY_SIZE(ar9485_1_1_baseband_postamble), 5); + ar9485_1_1_baseband_postamble); /* radio */ - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9485_1_1_radio_core, - ARRAY_SIZE(ar9485_1_1_radio_core), 2); + ar9485_1_1_radio_core); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], - ar9485_1_1_radio_postamble, - ARRAY_SIZE(ar9485_1_1_radio_postamble), 2); + ar9485_1_1_radio_postamble); /* soc */ INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9485_1_1_soc_preamble, - ARRAY_SIZE(ar9485_1_1_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], NULL, 0, 0); + ar9485_1_1_soc_preamble); /* rx/tx gain */ INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9485Common_wo_xlna_rx_gain_1_1, - ARRAY_SIZE(ar9485Common_wo_xlna_rx_gain_1_1), 2); + ar9485Common_wo_xlna_rx_gain_1_1); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9485_modes_lowest_ob_db_tx_gain_1_1, - ARRAY_SIZE(ar9485_modes_lowest_ob_db_tx_gain_1_1), - 5); + ar9485_modes_lowest_ob_db_tx_gain_1_1); /* Load PCIE SERDES settings from INI */ /* Awake Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdes, - ar9485_1_1_pcie_phy_clkreq_disable_L1, - ARRAY_SIZE(ar9485_1_1_pcie_phy_clkreq_disable_L1), - 2); + ar9485_1_1_pcie_phy_clkreq_disable_L1); /* Sleep Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower, - ar9485_1_1_pcie_phy_clkreq_disable_L1, - ARRAY_SIZE(ar9485_1_1_pcie_phy_clkreq_disable_L1), - 2); + ar9485_1_1_pcie_phy_clkreq_disable_L1); } else if (AR_SREV_9462_20(ah)) { - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], ar9462_2p0_mac_core, - ARRAY_SIZE(ar9462_2p0_mac_core), 2); + INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], 
ar9462_2p0_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9462_2p0_mac_postamble, - ARRAY_SIZE(ar9462_2p0_mac_postamble), 5); + ar9462_2p0_mac_postamble); - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9462_2p0_baseband_core, - ARRAY_SIZE(ar9462_2p0_baseband_core), 2); + ar9462_2p0_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9462_2p0_baseband_postamble, - ARRAY_SIZE(ar9462_2p0_baseband_postamble), 5); + ar9462_2p0_baseband_postamble); - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9462_2p0_radio_core, - ARRAY_SIZE(ar9462_2p0_radio_core), 2); + ar9462_2p0_radio_core); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], - ar9462_2p0_radio_postamble, - ARRAY_SIZE(ar9462_2p0_radio_postamble), 5); + ar9462_2p0_radio_postamble); INIT_INI_ARRAY(&ah->ini_radio_post_sys2ant, - ar9462_2p0_radio_postamble_sys2ant, - ARRAY_SIZE(ar9462_2p0_radio_postamble_sys2ant), - 5); + ar9462_2p0_radio_postamble_sys2ant); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9462_2p0_soc_preamble, - ARRAY_SIZE(ar9462_2p0_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); + ar9462_2p0_soc_preamble); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], - ar9462_2p0_soc_postamble, - ARRAY_SIZE(ar9462_2p0_soc_postamble), 5); + ar9462_2p0_soc_postamble); INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9462_common_rx_gain_table_2p0, - ARRAY_SIZE(ar9462_common_rx_gain_table_2p0), 2); + ar9462_common_rx_gain_table_2p0); /* Awake -> Sleep Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdes, - PCIE_PLL_ON_CREQ_DIS_L1_2P0, - ARRAY_SIZE(PCIE_PLL_ON_CREQ_DIS_L1_2P0), - 2); + PCIE_PLL_ON_CREQ_DIS_L1_2P0); /* Sleep -> Awake Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower, - PCIE_PLL_ON_CREQ_DIS_L1_2P0, - ARRAY_SIZE(PCIE_PLL_ON_CREQ_DIS_L1_2P0), - 2); + PCIE_PLL_ON_CREQ_DIS_L1_2P0); /* Fast clock modal settings */ INIT_INI_ARRAY(&ah->iniModesFastClock, - ar9462_modes_fast_clock_2p0, - ARRAY_SIZE(ar9462_modes_fast_clock_2p0), 3); + ar9462_modes_fast_clock_2p0); INIT_INI_ARRAY(&ah->iniCckfirJapan2484, - AR9462_BB_CTX_COEFJ(2p0), - ARRAY_SIZE(AR9462_BB_CTX_COEFJ(2p0)), 2); + AR9462_BB_CTX_COEFJ(2p0)); - INIT_INI_ARRAY(&ah->ini_japan2484, AR9462_BBC_TXIFR_COEFFJ, - ARRAY_SIZE(AR9462_BBC_TXIFR_COEFFJ), 2); + INIT_INI_ARRAY(&ah->ini_japan2484, AR9462_BBC_TXIFR_COEFFJ); + } else if (AR_SREV_9550(ah)) { + /* mac */ + INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], + ar955x_1p0_mac_core); + INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], + ar955x_1p0_mac_postamble); + + /* bb */ + INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], + ar955x_1p0_baseband_core); + INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], + ar955x_1p0_baseband_postamble); + + /* radio */ + INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], + ar955x_1p0_radio_core); + INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], + ar955x_1p0_radio_postamble); + + /* soc */ + INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], + ar955x_1p0_soc_preamble); + INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], + ar955x_1p0_soc_postamble); + + /* rx/tx gain */ + INIT_INI_ARRAY(&ah->iniModesRxGain, + ar955x_1p0_common_wo_xlna_rx_gain_table); + INIT_INI_ARRAY(&ah->ini_modes_rx_gain_bounds, + ar955x_1p0_common_wo_xlna_rx_gain_bounds); + INIT_INI_ARRAY(&ah->iniModesTxGain, + ar955x_1p0_modes_xpa_tx_gain_table); + /* Fast clock modal settings */ + INIT_INI_ARRAY(&ah->iniModesFastClock, + ar955x_1p0_modes_fast_clock); } else if (AR_SREV_9580(ah)) { /* mac */ - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); 
INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], - ar9580_1p0_mac_core, - ARRAY_SIZE(ar9580_1p0_mac_core), 2); + ar9580_1p0_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9580_1p0_mac_postamble, - ARRAY_SIZE(ar9580_1p0_mac_postamble), 5); + ar9580_1p0_mac_postamble); /* bb */ - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9580_1p0_baseband_core, - ARRAY_SIZE(ar9580_1p0_baseband_core), 2); + ar9580_1p0_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9580_1p0_baseband_postamble, - ARRAY_SIZE(ar9580_1p0_baseband_postamble), 5); + ar9580_1p0_baseband_postamble); /* radio */ - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9580_1p0_radio_core, - ARRAY_SIZE(ar9580_1p0_radio_core), 2); + ar9580_1p0_radio_core); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], - ar9580_1p0_radio_postamble, - ARRAY_SIZE(ar9580_1p0_radio_postamble), 5); + ar9580_1p0_radio_postamble); /* soc */ INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9580_1p0_soc_preamble, - ARRAY_SIZE(ar9580_1p0_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); + ar9580_1p0_soc_preamble); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], - ar9580_1p0_soc_postamble, - ARRAY_SIZE(ar9580_1p0_soc_postamble), 5); + ar9580_1p0_soc_postamble); /* rx/tx gain */ INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9580_1p0_rx_gain_table, - ARRAY_SIZE(ar9580_1p0_rx_gain_table), 2); + ar9580_1p0_rx_gain_table); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9580_1p0_low_ob_db_tx_gain_table, - ARRAY_SIZE(ar9580_1p0_low_ob_db_tx_gain_table), - 5); + ar9580_1p0_low_ob_db_tx_gain_table); INIT_INI_ARRAY(&ah->iniModesFastClock, - ar9580_1p0_modes_fast_clock, - ARRAY_SIZE(ar9580_1p0_modes_fast_clock), - 3); + ar9580_1p0_modes_fast_clock); } else { /* mac */ - INIT_INI_ARRAY(&ah->iniMac[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE], - ar9300_2p2_mac_core, - ARRAY_SIZE(ar9300_2p2_mac_core), 2); + ar9300_2p2_mac_core); INIT_INI_ARRAY(&ah->iniMac[ATH_INI_POST], - ar9300_2p2_mac_postamble, - ARRAY_SIZE(ar9300_2p2_mac_postamble), 5); + ar9300_2p2_mac_postamble); /* bb */ - INIT_INI_ARRAY(&ah->iniBB[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_CORE], - ar9300_2p2_baseband_core, - ARRAY_SIZE(ar9300_2p2_baseband_core), 2); + ar9300_2p2_baseband_core); INIT_INI_ARRAY(&ah->iniBB[ATH_INI_POST], - ar9300_2p2_baseband_postamble, - ARRAY_SIZE(ar9300_2p2_baseband_postamble), 5); + ar9300_2p2_baseband_postamble); /* radio */ - INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_PRE], NULL, 0, 0); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_CORE], - ar9300_2p2_radio_core, - ARRAY_SIZE(ar9300_2p2_radio_core), 2); + ar9300_2p2_radio_core); INIT_INI_ARRAY(&ah->iniRadio[ATH_INI_POST], - ar9300_2p2_radio_postamble, - ARRAY_SIZE(ar9300_2p2_radio_postamble), 5); + ar9300_2p2_radio_postamble); /* soc */ INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_PRE], - ar9300_2p2_soc_preamble, - ARRAY_SIZE(ar9300_2p2_soc_preamble), 2); - INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_CORE], NULL, 0, 0); + ar9300_2p2_soc_preamble); INIT_INI_ARRAY(&ah->iniSOC[ATH_INI_POST], - ar9300_2p2_soc_postamble, - ARRAY_SIZE(ar9300_2p2_soc_postamble), 5); + ar9300_2p2_soc_postamble); /* rx/tx gain */ INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9300Common_rx_gain_table_2p2, - ARRAY_SIZE(ar9300Common_rx_gain_table_2p2), 2); + ar9300Common_rx_gain_table_2p2); INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9300Modes_lowest_ob_db_tx_gain_table_2p2, - ARRAY_SIZE(ar9300Modes_lowest_ob_db_tx_gain_table_2p2), - 5); + 
ar9300Modes_lowest_ob_db_tx_gain_table_2p2); /* Load PCIE SERDES settings from INI */ /* Awake Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdes, - ar9300PciePhy_pll_on_clkreq_disable_L1_2p2, - ARRAY_SIZE(ar9300PciePhy_pll_on_clkreq_disable_L1_2p2), - 2); + ar9300PciePhy_pll_on_clkreq_disable_L1_2p2); /* Sleep Setting */ INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower, - ar9300PciePhy_pll_on_clkreq_disable_L1_2p2, - ARRAY_SIZE(ar9300PciePhy_pll_on_clkreq_disable_L1_2p2), - 2); + ar9300PciePhy_pll_on_clkreq_disable_L1_2p2); /* Fast clock modal settings */ INIT_INI_ARRAY(&ah->iniModesFastClock, - ar9300Modes_fast_clock_2p2, - ARRAY_SIZE(ar9300Modes_fast_clock_2p2), - 3); + ar9300Modes_fast_clock_2p2); } } @@ -452,146 +355,110 @@ static void ar9003_tx_gain_table_mode0(struct ath_hw *ah) { if (AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_lowest_ob_db_tx_gain_1p2, - ARRAY_SIZE(ar9331_modes_lowest_ob_db_tx_gain_1p2), - 5); + ar9331_modes_lowest_ob_db_tx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_lowest_ob_db_tx_gain_1p1, - ARRAY_SIZE(ar9331_modes_lowest_ob_db_tx_gain_1p1), - 5); + ar9331_modes_lowest_ob_db_tx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9340Modes_lowest_ob_db_tx_gain_table_1p0, - ARRAY_SIZE(ar9340Modes_lowest_ob_db_tx_gain_table_1p0), - 5); + ar9340Modes_lowest_ob_db_tx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9485_modes_lowest_ob_db_tx_gain_1_1, - ARRAY_SIZE(ar9485_modes_lowest_ob_db_tx_gain_1_1), - 5); + ar9485_modes_lowest_ob_db_tx_gain_1_1); + else if (AR_SREV_9550(ah)) + INIT_INI_ARRAY(&ah->iniModesTxGain, + ar955x_1p0_modes_xpa_tx_gain_table); else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9580_1p0_lowest_ob_db_tx_gain_table, - ARRAY_SIZE(ar9580_1p0_lowest_ob_db_tx_gain_table), - 5); + ar9580_1p0_lowest_ob_db_tx_gain_table); else if (AR_SREV_9462_20(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9462_modes_low_ob_db_tx_gain_table_2p0, - ARRAY_SIZE(ar9462_modes_low_ob_db_tx_gain_table_2p0), - 5); + ar9462_modes_low_ob_db_tx_gain_table_2p0); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9300Modes_lowest_ob_db_tx_gain_table_2p2, - ARRAY_SIZE(ar9300Modes_lowest_ob_db_tx_gain_table_2p2), - 5); + ar9300Modes_lowest_ob_db_tx_gain_table_2p2); } static void ar9003_tx_gain_table_mode1(struct ath_hw *ah) { if (AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_high_ob_db_tx_gain_1p2, - ARRAY_SIZE(ar9331_modes_high_ob_db_tx_gain_1p2), - 5); + ar9331_modes_high_ob_db_tx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_high_ob_db_tx_gain_1p1, - ARRAY_SIZE(ar9331_modes_high_ob_db_tx_gain_1p1), - 5); + ar9331_modes_high_ob_db_tx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9340Modes_lowest_ob_db_tx_gain_table_1p0, - ARRAY_SIZE(ar9340Modes_lowest_ob_db_tx_gain_table_1p0), - 5); + ar9340Modes_high_ob_db_tx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9485Modes_high_ob_db_tx_gain_1_1, - ARRAY_SIZE(ar9485Modes_high_ob_db_tx_gain_1_1), - 5); + ar9485Modes_high_ob_db_tx_gain_1_1); else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9580_1p0_high_ob_db_tx_gain_table, - ARRAY_SIZE(ar9580_1p0_high_ob_db_tx_gain_table), - 5); + ar9580_1p0_high_ob_db_tx_gain_table); + else if (AR_SREV_9550(ah)) + INIT_INI_ARRAY(&ah->iniModesTxGain, + ar955x_1p0_modes_no_xpa_tx_gain_table); else if 
(AR_SREV_9462_20(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9462_modes_high_ob_db_tx_gain_table_2p0, - ARRAY_SIZE(ar9462_modes_high_ob_db_tx_gain_table_2p0), - 5); + ar9462_modes_high_ob_db_tx_gain_table_2p0); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9300Modes_high_ob_db_tx_gain_table_2p2, - ARRAY_SIZE(ar9300Modes_high_ob_db_tx_gain_table_2p2), - 5); + ar9300Modes_high_ob_db_tx_gain_table_2p2); } static void ar9003_tx_gain_table_mode2(struct ath_hw *ah) { if (AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_low_ob_db_tx_gain_1p2, - ARRAY_SIZE(ar9331_modes_low_ob_db_tx_gain_1p2), - 5); + ar9331_modes_low_ob_db_tx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_low_ob_db_tx_gain_1p1, - ARRAY_SIZE(ar9331_modes_low_ob_db_tx_gain_1p1), - 5); + ar9331_modes_low_ob_db_tx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9340Modes_lowest_ob_db_tx_gain_table_1p0, - ARRAY_SIZE(ar9340Modes_lowest_ob_db_tx_gain_table_1p0), - 5); + ar9340Modes_low_ob_db_tx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9485Modes_low_ob_db_tx_gain_1_1, - ARRAY_SIZE(ar9485Modes_low_ob_db_tx_gain_1_1), - 5); + ar9485Modes_low_ob_db_tx_gain_1_1); else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9580_1p0_low_ob_db_tx_gain_table, - ARRAY_SIZE(ar9580_1p0_low_ob_db_tx_gain_table), - 5); + ar9580_1p0_low_ob_db_tx_gain_table); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9300Modes_low_ob_db_tx_gain_table_2p2, - ARRAY_SIZE(ar9300Modes_low_ob_db_tx_gain_table_2p2), - 5); + ar9300Modes_low_ob_db_tx_gain_table_2p2); } static void ar9003_tx_gain_table_mode3(struct ath_hw *ah) { if (AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_high_power_tx_gain_1p2, - ARRAY_SIZE(ar9331_modes_high_power_tx_gain_1p2), - 5); + ar9331_modes_high_power_tx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9331_modes_high_power_tx_gain_1p1, - ARRAY_SIZE(ar9331_modes_high_power_tx_gain_1p1), - 5); + ar9331_modes_high_power_tx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9340Modes_lowest_ob_db_tx_gain_table_1p0, - ARRAY_SIZE(ar9340Modes_lowest_ob_db_tx_gain_table_1p0), - 5); + ar9340Modes_high_power_tx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9485Modes_high_power_tx_gain_1_1, - ARRAY_SIZE(ar9485Modes_high_power_tx_gain_1_1), - 5); + ar9485Modes_high_power_tx_gain_1_1); else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9580_1p0_high_power_tx_gain_table, - ARRAY_SIZE(ar9580_1p0_high_power_tx_gain_table), - 5); + ar9580_1p0_high_power_tx_gain_table); else INIT_INI_ARRAY(&ah->iniModesTxGain, - ar9300Modes_high_power_tx_gain_table_2p2, - ARRAY_SIZE(ar9300Modes_high_power_tx_gain_table_2p2), - 5); + ar9300Modes_high_power_tx_gain_table_2p2); +} + +static void ar9003_tx_gain_table_mode4(struct ath_hw *ah) +{ + if (AR_SREV_9340(ah)) + INIT_INI_ARRAY(&ah->iniModesTxGain, + ar9340Modes_mixed_ob_db_tx_gain_table_1p0); + else if (AR_SREV_9580(ah)) + INIT_INI_ARRAY(&ah->iniModesTxGain, + ar9580_1p0_mixed_ob_db_tx_gain_table); } static void ar9003_tx_gain_table_apply(struct ath_hw *ah) @@ -610,6 +477,9 @@ static void ar9003_tx_gain_table_apply(struct ath_hw *ah) case 3: ar9003_tx_gain_table_mode3(ah); break; + case 4: + ar9003_tx_gain_table_mode4(ah); + break; } } @@ -617,86 +487,67 @@ static void ar9003_rx_gain_table_mode0(struct ath_hw *ah) { if 
(AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9331_common_rx_gain_1p2, - ARRAY_SIZE(ar9331_common_rx_gain_1p2), - 2); + ar9331_common_rx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9331_common_rx_gain_1p1, - ARRAY_SIZE(ar9331_common_rx_gain_1p1), - 2); + ar9331_common_rx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9340Common_rx_gain_table_1p0, - ARRAY_SIZE(ar9340Common_rx_gain_table_1p0), - 2); + ar9340Common_rx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9485Common_wo_xlna_rx_gain_1_1, - ARRAY_SIZE(ar9485Common_wo_xlna_rx_gain_1_1), - 2); - else if (AR_SREV_9580(ah)) + ar9485Common_wo_xlna_rx_gain_1_1); + else if (AR_SREV_9550(ah)) { + INIT_INI_ARRAY(&ah->iniModesRxGain, + ar955x_1p0_common_rx_gain_table); + INIT_INI_ARRAY(&ah->ini_modes_rx_gain_bounds, + ar955x_1p0_common_rx_gain_bounds); + } else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9580_1p0_rx_gain_table, - ARRAY_SIZE(ar9580_1p0_rx_gain_table), - 2); + ar9580_1p0_rx_gain_table); else if (AR_SREV_9462_20(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9462_common_rx_gain_table_2p0, - ARRAY_SIZE(ar9462_common_rx_gain_table_2p0), - 2); + ar9462_common_rx_gain_table_2p0); else INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9300Common_rx_gain_table_2p2, - ARRAY_SIZE(ar9300Common_rx_gain_table_2p2), - 2); + ar9300Common_rx_gain_table_2p2); } static void ar9003_rx_gain_table_mode1(struct ath_hw *ah) { if (AR_SREV_9330_12(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9331_common_wo_xlna_rx_gain_1p2, - ARRAY_SIZE(ar9331_common_wo_xlna_rx_gain_1p2), - 2); + ar9331_common_wo_xlna_rx_gain_1p2); else if (AR_SREV_9330_11(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9331_common_wo_xlna_rx_gain_1p1, - ARRAY_SIZE(ar9331_common_wo_xlna_rx_gain_1p1), - 2); + ar9331_common_wo_xlna_rx_gain_1p1); else if (AR_SREV_9340(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9340Common_wo_xlna_rx_gain_table_1p0, - ARRAY_SIZE(ar9340Common_wo_xlna_rx_gain_table_1p0), - 2); + ar9340Common_wo_xlna_rx_gain_table_1p0); else if (AR_SREV_9485_11(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9485Common_wo_xlna_rx_gain_1_1, - ARRAY_SIZE(ar9485Common_wo_xlna_rx_gain_1_1), - 2); + ar9485Common_wo_xlna_rx_gain_1_1); else if (AR_SREV_9462_20(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9462_common_wo_xlna_rx_gain_table_2p0, - ARRAY_SIZE(ar9462_common_wo_xlna_rx_gain_table_2p0), - 2); - else if (AR_SREV_9580(ah)) + ar9462_common_wo_xlna_rx_gain_table_2p0); + else if (AR_SREV_9550(ah)) { + INIT_INI_ARRAY(&ah->iniModesRxGain, + ar955x_1p0_common_wo_xlna_rx_gain_table); + INIT_INI_ARRAY(&ah->ini_modes_rx_gain_bounds, + ar955x_1p0_common_wo_xlna_rx_gain_bounds); + } else if (AR_SREV_9580(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9580_1p0_wo_xlna_rx_gain_table, - ARRAY_SIZE(ar9580_1p0_wo_xlna_rx_gain_table), - 2); + ar9580_1p0_wo_xlna_rx_gain_table); else INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9300Common_wo_xlna_rx_gain_table_2p2, - ARRAY_SIZE(ar9300Common_wo_xlna_rx_gain_table_2p2), - 2); + ar9300Common_wo_xlna_rx_gain_table_2p2); } static void ar9003_rx_gain_table_mode2(struct ath_hw *ah) { if (AR_SREV_9462_20(ah)) INIT_INI_ARRAY(&ah->iniModesRxGain, - ar9462_common_mixed_rx_gain_table_2p0, - ARRAY_SIZE(ar9462_common_mixed_rx_gain_table_2p0), 2); + ar9462_common_mixed_rx_gain_table_2p0); } static void ar9003_rx_gain_table_apply(struct ath_hw *ah) diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mac.c 
b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
index d9e0824af093..78816b8b2173 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_mac.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_mac.c
@@ -181,11 +181,14 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked)
	u32 mask2 = 0;
	struct ath9k_hw_capabilities *pCap = &ah->caps;
	struct ath_common *common = ath9k_hw_common(ah);
-	u32 sync_cause = 0, async_cause;
+	u32 sync_cause = 0, async_cause, async_mask = AR_INTR_MAC_IRQ;
+
+	if (ath9k_hw_mci_is_enabled(ah))
+		async_mask |= AR_INTR_ASYNC_MASK_MCI;
	async_cause = REG_READ(ah, AR_INTR_ASYNC_CAUSE);
-	if (async_cause & (AR_INTR_MAC_IRQ | AR_INTR_ASYNC_MASK_MCI)) {
+	if (async_cause & async_mask) {
		if ((REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M) == AR_RTC_STATUS_ON)
			isr = REG_READ(ah, AR_ISR);
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mci.c b/drivers/net/wireless/ath/ath9k/ar9003_mci.c
index ffbb180f91e1..9a34fcaae3ff 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_mci.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_mci.c
@@ -35,31 +35,30 @@ static int ar9003_mci_wait_for_interrupt(struct ath_hw *ah, u32 address,
	struct ath_common *common = ath9k_hw_common(ah);
	while (time_out) {
-		if (REG_READ(ah, address) & bit_position) {
-			REG_WRITE(ah, address, bit_position);
-
-			if (address == AR_MCI_INTERRUPT_RX_MSG_RAW) {
-				if (bit_position &
-				    AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE)
-					ar9003_mci_reset_req_wakeup(ah);
-
-				if (bit_position &
-				    (AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING |
-				     AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING))
-					REG_WRITE(ah, AR_MCI_INTERRUPT_RAW,
-						  AR_MCI_INTERRUPT_REMOTE_SLEEP_UPDATE);
-
-				REG_WRITE(ah, AR_MCI_INTERRUPT_RAW,
-					  AR_MCI_INTERRUPT_RX_MSG);
-			}
-			break;
-		}
+		if (!(REG_READ(ah, address) & bit_position)) {
+			udelay(10);
+			time_out -= 10;
-		udelay(10);
-		time_out -= 10;
+			if (time_out < 0)
+				break;
+			else
+				continue;
+		}
+		REG_WRITE(ah, address, bit_position);
-		if (time_out < 0)
+		if (address != AR_MCI_INTERRUPT_RX_MSG_RAW)
			break;
+
+		if (bit_position & AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE)
+			ar9003_mci_reset_req_wakeup(ah);
+
+		if (bit_position & (AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING |
+				    AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING))
+			REG_WRITE(ah, AR_MCI_INTERRUPT_RAW,
+				  AR_MCI_INTERRUPT_REMOTE_SLEEP_UPDATE);
+
+		REG_WRITE(ah, AR_MCI_INTERRUPT_RAW, AR_MCI_INTERRUPT_RX_MSG);
+		break;
	}
	if (time_out <= 0) {
@@ -127,14 +126,13 @@ static void ar9003_mci_send_coex_version_query(struct ath_hw *ah,
	struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci;
	u32 payload[4] = {0, 0, 0, 0};
-	if (!mci->bt_version_known &&
-	    (mci->bt_state != MCI_BT_SLEEP)) {
-		MCI_GPM_SET_TYPE_OPCODE(payload,
-					MCI_GPM_COEX_AGENT,
-					MCI_GPM_COEX_VERSION_QUERY);
-		ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16,
-					wait_done, true);
-	}
+	if (mci->bt_version_known ||
+	    (mci->bt_state == MCI_BT_SLEEP))
+		return;
+
+	MCI_GPM_SET_TYPE_OPCODE(payload, MCI_GPM_COEX_AGENT,
+				MCI_GPM_COEX_VERSION_QUERY);
+	ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16, wait_done, true);
}
static void ar9003_mci_send_coex_version_response(struct ath_hw *ah,
@@ -158,15 +156,14 @@ static void ar9003_mci_send_coex_wlan_channels(struct ath_hw *ah,
	struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci;
	u32 *payload = &mci->wlan_channels[0];
-	if ((mci->wlan_channels_update == true) &&
-	    (mci->bt_state != MCI_BT_SLEEP)) {
-		MCI_GPM_SET_TYPE_OPCODE(payload,
-					MCI_GPM_COEX_AGENT,
-					MCI_GPM_COEX_WLAN_CHANNELS);
-		ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16,
-					wait_done, true);
-		MCI_GPM_SET_TYPE_OPCODE(payload, 0xff, 0xff);
-	}
+	if
(!mci->wlan_channels_update || + (mci->bt_state == MCI_BT_SLEEP)) + return; + + MCI_GPM_SET_TYPE_OPCODE(payload, MCI_GPM_COEX_AGENT, + MCI_GPM_COEX_WLAN_CHANNELS); + ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16, wait_done, true); + MCI_GPM_SET_TYPE_OPCODE(payload, 0xff, 0xff); } static void ar9003_mci_send_coex_bt_status_query(struct ath_hw *ah, @@ -174,29 +171,30 @@ static void ar9003_mci_send_coex_bt_status_query(struct ath_hw *ah, { struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; u32 payload[4] = {0, 0, 0, 0}; - bool query_btinfo = !!(query_type & (MCI_GPM_COEX_QUERY_BT_ALL_INFO | - MCI_GPM_COEX_QUERY_BT_TOPOLOGY)); + bool query_btinfo; - if (mci->bt_state != MCI_BT_SLEEP) { + if (mci->bt_state == MCI_BT_SLEEP) + return; - MCI_GPM_SET_TYPE_OPCODE(payload, MCI_GPM_COEX_AGENT, - MCI_GPM_COEX_STATUS_QUERY); + query_btinfo = !!(query_type & (MCI_GPM_COEX_QUERY_BT_ALL_INFO | + MCI_GPM_COEX_QUERY_BT_TOPOLOGY)); + MCI_GPM_SET_TYPE_OPCODE(payload, MCI_GPM_COEX_AGENT, + MCI_GPM_COEX_STATUS_QUERY); - *(((u8 *)payload) + MCI_GPM_COEX_B_BT_BITMAP) = query_type; - - /* - * If bt_status_query message is not sent successfully, - * then need_flush_btinfo should be set again. - */ - if (!ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16, - wait_done, true)) { - if (query_btinfo) - mci->need_flush_btinfo = true; - } + *(((u8 *)payload) + MCI_GPM_COEX_B_BT_BITMAP) = query_type; + /* + * If bt_status_query message is not sent successfully, + * then need_flush_btinfo should be set again. + */ + if (!ar9003_mci_send_message(ah, MCI_GPM, 0, payload, 16, + wait_done, true)) { if (query_btinfo) - mci->query_bt = false; + mci->need_flush_btinfo = true; } + + if (query_btinfo) + mci->query_bt = false; } static void ar9003_mci_send_coex_halt_bt_gpm(struct ath_hw *ah, bool halt, @@ -241,73 +239,73 @@ static void ar9003_mci_prep_interface(struct ath_hw *ah) ar9003_mci_remote_reset(ah, true); ar9003_mci_send_req_wake(ah, true); - if (ar9003_mci_wait_for_interrupt(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING, 500)) { + if (!ar9003_mci_wait_for_interrupt(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, + AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING, 500)) + goto clear_redunt; - mci->bt_state = MCI_BT_AWAKE; + mci->bt_state = MCI_BT_AWAKE; - /* - * we don't need to send more remote_reset at this moment. - * If BT receive first remote_reset, then BT HW will - * be cleaned up and will be able to receive req_wake - * and BT HW will respond sys_waking. - * In this case, WLAN will receive BT's HW sys_waking. - * Otherwise, if BT SW missed initial remote_reset, - * that remote_reset will still clean up BT MCI RX, - * and the req_wake will wake BT up, - * and BT SW will respond this req_wake with a remote_reset and - * sys_waking. In this case, WLAN will receive BT's SW - * sys_waking. In either case, BT's RX is cleaned up. So we - * don't need to reply BT's remote_reset now, if any. - * Similarly, if in any case, WLAN can receive BT's sys_waking, - * that means WLAN's RX is also fine. - */ - ar9003_mci_send_sys_waking(ah, true); - udelay(10); + /* + * we don't need to send more remote_reset at this moment. + * If BT receive first remote_reset, then BT HW will + * be cleaned up and will be able to receive req_wake + * and BT HW will respond sys_waking. + * In this case, WLAN will receive BT's HW sys_waking. 
+ * Otherwise, if BT SW missed initial remote_reset, + * that remote_reset will still clean up BT MCI RX, + * and the req_wake will wake BT up, + * and BT SW will respond this req_wake with a remote_reset and + * sys_waking. In this case, WLAN will receive BT's SW + * sys_waking. In either case, BT's RX is cleaned up. So we + * don't need to reply BT's remote_reset now, if any. + * Similarly, if in any case, WLAN can receive BT's sys_waking, + * that means WLAN's RX is also fine. + */ + ar9003_mci_send_sys_waking(ah, true); + udelay(10); - /* - * Set BT priority interrupt value to be 0xff to - * avoid having too many BT PRIORITY interrupts. - */ - REG_WRITE(ah, AR_MCI_BT_PRI0, 0xFFFFFFFF); - REG_WRITE(ah, AR_MCI_BT_PRI1, 0xFFFFFFFF); - REG_WRITE(ah, AR_MCI_BT_PRI2, 0xFFFFFFFF); - REG_WRITE(ah, AR_MCI_BT_PRI3, 0xFFFFFFFF); - REG_WRITE(ah, AR_MCI_BT_PRI, 0X000000FF); + /* + * Set BT priority interrupt value to be 0xff to + * avoid having too many BT PRIORITY interrupts. + */ + REG_WRITE(ah, AR_MCI_BT_PRI0, 0xFFFFFFFF); + REG_WRITE(ah, AR_MCI_BT_PRI1, 0xFFFFFFFF); + REG_WRITE(ah, AR_MCI_BT_PRI2, 0xFFFFFFFF); + REG_WRITE(ah, AR_MCI_BT_PRI3, 0xFFFFFFFF); + REG_WRITE(ah, AR_MCI_BT_PRI, 0X000000FF); - /* - * A contention reset will be received after send out - * sys_waking. Also BT priority interrupt bits will be set. - * Clear those bits before the next step. - */ + /* + * A contention reset will be received after send out + * sys_waking. Also BT priority interrupt bits will be set. + * Clear those bits before the next step. + */ - REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_CONT_RST); - REG_WRITE(ah, AR_MCI_INTERRUPT_RAW, - AR_MCI_INTERRUPT_BT_PRI); + REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, + AR_MCI_INTERRUPT_RX_MSG_CONT_RST); + REG_WRITE(ah, AR_MCI_INTERRUPT_RAW, AR_MCI_INTERRUPT_BT_PRI); - if (mci->is_2g) { - ar9003_mci_send_lna_transfer(ah, true); - udelay(5); - } + if (mci->is_2g) { + ar9003_mci_send_lna_transfer(ah, true); + udelay(5); + } - if ((mci->is_2g && !mci->update_2g5g)) { - if (ar9003_mci_wait_for_interrupt(ah, - AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_LNA_INFO, - mci_timeout)) - ath_dbg(common, MCI, - "MCI WLAN has control over the LNA & BT obeys it\n"); - else - ath_dbg(common, MCI, - "MCI BT didn't respond to LNA_TRANS\n"); - } + if ((mci->is_2g && !mci->update_2g5g)) { + if (ar9003_mci_wait_for_interrupt(ah, + AR_MCI_INTERRUPT_RX_MSG_RAW, + AR_MCI_INTERRUPT_RX_MSG_LNA_INFO, + mci_timeout)) + ath_dbg(common, MCI, + "MCI WLAN has control over the LNA & BT obeys it\n"); + else + ath_dbg(common, MCI, + "MCI BT didn't respond to LNA_TRANS\n"); } +clear_redunt: /* Clear the extra redundant SYS_WAKING from BT */ if ((mci->bt_state == MCI_BT_AWAKE) && - (REG_READ_FIELD(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING)) && + (REG_READ_FIELD(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, + AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING)) && (REG_READ_FIELD(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING) == 0)) { REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, @@ -323,14 +321,13 @@ void ar9003_mci_set_full_sleep(struct ath_hw *ah) { struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; - if (ar9003_mci_state(ah, MCI_STATE_ENABLE, NULL) && + if (ar9003_mci_state(ah, MCI_STATE_ENABLE) && (mci->bt_state != MCI_BT_SLEEP) && !mci->halted_bt_gpm) { ar9003_mci_send_coex_halt_bt_gpm(ah, true, true); } mci->ready = false; - REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2); } static void ar9003_mci_disable_interrupt(struct ath_hw *ah) @@ -487,7 +484,7 @@ 
static void ar9003_mci_sync_bt_state(struct ath_hw *ah) struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; u32 cur_bt_state; - cur_bt_state = ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP, NULL); + cur_bt_state = ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP); if (mci->bt_state != cur_bt_state) mci->bt_state = cur_bt_state; @@ -596,8 +593,7 @@ static u32 ar9003_mci_wait_for_gpm(struct ath_hw *ah, u8 gpm_type, if (!time_out) break; - offset = ar9003_mci_state(ah, MCI_STATE_NEXT_GPM_OFFSET, - &more_data); + offset = ar9003_mci_get_next_gpm_offset(ah, false, &more_data); if (offset == MCI_GPM_INVALID) continue; @@ -615,9 +611,9 @@ static u32 ar9003_mci_wait_for_gpm(struct ath_hw *ah, u8 gpm_type, } break; } - } else if ((recv_type == gpm_type) && (recv_opcode == gpm_opcode)) { + } else if ((recv_type == gpm_type) && + (recv_opcode == gpm_opcode)) break; - } /* * check if it's cal_grant @@ -661,8 +657,7 @@ static u32 ar9003_mci_wait_for_gpm(struct ath_hw *ah, u8 gpm_type, time_out = 0; while (more_data == MCI_GPM_MORE) { - offset = ar9003_mci_state(ah, MCI_STATE_NEXT_GPM_OFFSET, - &more_data); + offset = ar9003_mci_get_next_gpm_offset(ah, false, &more_data); if (offset == MCI_GPM_INVALID) break; @@ -731,38 +726,38 @@ int ar9003_mci_end_reset(struct ath_hw *ah, struct ath9k_channel *chan, if (!IS_CHAN_2GHZ(chan) || (mci_hw->bt_state != MCI_BT_SLEEP)) goto exit; - if (ar9003_mci_check_int(ah, AR_MCI_INTERRUPT_RX_MSG_REMOTE_RESET) || - ar9003_mci_check_int(ah, AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE)) { + if (!ar9003_mci_check_int(ah, AR_MCI_INTERRUPT_RX_MSG_REMOTE_RESET) && + !ar9003_mci_check_int(ah, AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE)) + goto exit; - /* - * BT is sleeping. Check if BT wakes up during - * WLAN calibration. If BT wakes up during - * WLAN calibration, need to go through all - * message exchanges again and recal. - */ - REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_REMOTE_RESET | - AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE); + /* + * BT is sleeping. Check if BT wakes up during + * WLAN calibration. If BT wakes up during + * WLAN calibration, need to go through all + * message exchanges again and recal. 
+ */ + REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, + (AR_MCI_INTERRUPT_RX_MSG_REMOTE_RESET | + AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE)); - ar9003_mci_remote_reset(ah, true); - ar9003_mci_send_sys_waking(ah, true); - udelay(1); + ar9003_mci_remote_reset(ah, true); + ar9003_mci_send_sys_waking(ah, true); + udelay(1); - if (IS_CHAN_2GHZ(chan)) - ar9003_mci_send_lna_transfer(ah, true); + if (IS_CHAN_2GHZ(chan)) + ar9003_mci_send_lna_transfer(ah, true); - mci_hw->bt_state = MCI_BT_AWAKE; + mci_hw->bt_state = MCI_BT_AWAKE; - if (caldata) { - caldata->done_txiqcal_once = false; - caldata->done_txclcal_once = false; - caldata->rtt_done = false; - } + if (caldata) { + caldata->done_txiqcal_once = false; + caldata->done_txclcal_once = false; + caldata->rtt_done = false; + } - if (!ath9k_hw_init_cal(ah, chan)) - return -EIO; + if (!ath9k_hw_init_cal(ah, chan)) + return -EIO; - } exit: ar9003_mci_enable_interrupt(ah); return 0; @@ -772,10 +767,6 @@ static void ar9003_mci_mute_bt(struct ath_hw *ah) { /* disable all MCI messages */ REG_WRITE(ah, AR_MCI_MSG_ATTRIBUTES_TABLE, 0xffff0000); - REG_WRITE(ah, AR_BTCOEX_WL_WEIGHTS0, 0xffffffff); - REG_WRITE(ah, AR_BTCOEX_WL_WEIGHTS1, 0xffffffff); - REG_WRITE(ah, AR_BTCOEX_WL_WEIGHTS2, 0xffffffff); - REG_WRITE(ah, AR_BTCOEX_WL_WEIGHTS3, 0xffffffff); REG_SET_BIT(ah, AR_MCI_TX_CTRL, AR_MCI_TX_CTRL_DISABLE_LNA_UPDATE); /* wait pending HW messages to flush out */ @@ -798,29 +789,27 @@ static void ar9003_mci_osla_setup(struct ath_hw *ah, bool enable) struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; u32 thresh; - if (enable) { - REG_RMW_FIELD(ah, AR_MCI_SCHD_TABLE_2, - AR_MCI_SCHD_TABLE_2_HW_BASED, 1); - REG_RMW_FIELD(ah, AR_MCI_SCHD_TABLE_2, - AR_MCI_SCHD_TABLE_2_MEM_BASED, 1); - - if (!(mci->config & ATH_MCI_CONFIG_DISABLE_AGGR_THRESH)) { - thresh = MS(mci->config, ATH_MCI_CONFIG_AGGR_THRESH); - REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_AGGR_THRESH, thresh); - REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_TIME_TO_NEXT_BT_THRESH_EN, 1); - } else { - REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_TIME_TO_NEXT_BT_THRESH_EN, 0); - } - - REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_ONE_STEP_LOOK_AHEAD_EN, 1); - } else { + if (!enable) { REG_CLR_BIT(ah, AR_BTCOEX_CTRL, AR_BTCOEX_CTRL_ONE_STEP_LOOK_AHEAD_EN); + return; } + REG_RMW_FIELD(ah, AR_MCI_SCHD_TABLE_2, AR_MCI_SCHD_TABLE_2_HW_BASED, 1); + REG_RMW_FIELD(ah, AR_MCI_SCHD_TABLE_2, + AR_MCI_SCHD_TABLE_2_MEM_BASED, 1); + + if (!(mci->config & ATH_MCI_CONFIG_DISABLE_AGGR_THRESH)) { + thresh = MS(mci->config, ATH_MCI_CONFIG_AGGR_THRESH); + REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, + AR_BTCOEX_CTRL_AGGR_THRESH, thresh); + REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, + AR_BTCOEX_CTRL_TIME_TO_NEXT_BT_THRESH_EN, 1); + } else + REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, + AR_BTCOEX_CTRL_TIME_TO_NEXT_BT_THRESH_EN, 0); + + REG_RMW_FIELD(ah, AR_BTCOEX_CTRL, + AR_BTCOEX_CTRL_ONE_STEP_LOOK_AHEAD_EN, 1); } void ar9003_mci_reset(struct ath_hw *ah, bool en_int, bool is_2g, @@ -898,13 +887,16 @@ void ar9003_mci_reset(struct ath_hw *ah, bool en_int, bool is_2g, udelay(100); } + /* Check pending GPM msg before MCI Reset Rx */ + ar9003_mci_check_gpm_offset(ah); + regval |= SM(1, AR_MCI_COMMAND2_RESET_RX); REG_WRITE(ah, AR_MCI_COMMAND2, regval); udelay(1); regval &= ~SM(1, AR_MCI_COMMAND2_RESET_RX); REG_WRITE(ah, AR_MCI_COMMAND2, regval); - ar9003_mci_state(ah, MCI_STATE_INIT_GPM_OFFSET, NULL); + ar9003_mci_get_next_gpm_offset(ah, true, NULL); REG_WRITE(ah, AR_MCI_MSG_ATTRIBUTES_TABLE, (SM(0xe801, AR_MCI_MSG_ATTRIBUTES_TABLE_INVALID_HDR) | @@ -943,26 
+935,27 @@ static void ar9003_mci_send_2g5g_status(struct ath_hw *ah, bool wait_done) struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; u32 new_flags, to_set, to_clear; - if (mci->update_2g5g && (mci->bt_state != MCI_BT_SLEEP)) { - if (mci->is_2g) { - new_flags = MCI_2G_FLAGS; - to_clear = MCI_2G_FLAGS_CLEAR_MASK; - to_set = MCI_2G_FLAGS_SET_MASK; - } else { - new_flags = MCI_5G_FLAGS; - to_clear = MCI_5G_FLAGS_CLEAR_MASK; - to_set = MCI_5G_FLAGS_SET_MASK; - } + if (!mci->update_2g5g || (mci->bt_state == MCI_BT_SLEEP)) + return; + + if (mci->is_2g) { + new_flags = MCI_2G_FLAGS; + to_clear = MCI_2G_FLAGS_CLEAR_MASK; + to_set = MCI_2G_FLAGS_SET_MASK; + } else { + new_flags = MCI_5G_FLAGS; + to_clear = MCI_5G_FLAGS_CLEAR_MASK; + to_set = MCI_5G_FLAGS_SET_MASK; + } - if (to_clear) - ar9003_mci_send_coex_bt_flags(ah, wait_done, + if (to_clear) + ar9003_mci_send_coex_bt_flags(ah, wait_done, MCI_GPM_COEX_BT_FLAGS_CLEAR, to_clear); - if (to_set) - ar9003_mci_send_coex_bt_flags(ah, wait_done, + if (to_set) + ar9003_mci_send_coex_bt_flags(ah, wait_done, MCI_GPM_COEX_BT_FLAGS_SET, to_set); - } } static void ar9003_mci_queue_unsent_gpm(struct ath_hw *ah, u8 header, @@ -1014,38 +1007,36 @@ static void ar9003_mci_queue_unsent_gpm(struct ath_hw *ah, u8 header, } } -void ar9003_mci_2g5g_switch(struct ath_hw *ah, bool wait_done) +void ar9003_mci_2g5g_switch(struct ath_hw *ah, bool force) { struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; - if (mci->update_2g5g) { - if (mci->is_2g) { - ar9003_mci_send_2g5g_status(ah, true); - ar9003_mci_send_lna_transfer(ah, true); - udelay(5); + if (!mci->update_2g5g && !force) + return; - REG_CLR_BIT(ah, AR_MCI_TX_CTRL, - AR_MCI_TX_CTRL_DISABLE_LNA_UPDATE); - REG_CLR_BIT(ah, AR_PHY_GLB_CONTROL, - AR_BTCOEX_CTRL_BT_OWN_SPDT_CTRL); + if (mci->is_2g) { + ar9003_mci_send_2g5g_status(ah, true); + ar9003_mci_send_lna_transfer(ah, true); + udelay(5); - if (!(mci->config & ATH_MCI_CONFIG_DISABLE_OSLA)) { - REG_SET_BIT(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_ONE_STEP_LOOK_AHEAD_EN); - } - } else { - ar9003_mci_send_lna_take(ah, true); - udelay(5); + REG_CLR_BIT(ah, AR_MCI_TX_CTRL, + AR_MCI_TX_CTRL_DISABLE_LNA_UPDATE); + REG_CLR_BIT(ah, AR_PHY_GLB_CONTROL, + AR_BTCOEX_CTRL_BT_OWN_SPDT_CTRL); + + if (!(mci->config & ATH_MCI_CONFIG_DISABLE_OSLA)) + ar9003_mci_osla_setup(ah, true); + } else { + ar9003_mci_send_lna_take(ah, true); + udelay(5); - REG_SET_BIT(ah, AR_MCI_TX_CTRL, - AR_MCI_TX_CTRL_DISABLE_LNA_UPDATE); - REG_SET_BIT(ah, AR_PHY_GLB_CONTROL, - AR_BTCOEX_CTRL_BT_OWN_SPDT_CTRL); - REG_CLR_BIT(ah, AR_BTCOEX_CTRL, - AR_BTCOEX_CTRL_ONE_STEP_LOOK_AHEAD_EN); + REG_SET_BIT(ah, AR_MCI_TX_CTRL, + AR_MCI_TX_CTRL_DISABLE_LNA_UPDATE); + REG_SET_BIT(ah, AR_PHY_GLB_CONTROL, + AR_BTCOEX_CTRL_BT_OWN_SPDT_CTRL); - ar9003_mci_send_2g5g_status(ah, true); - } + ar9003_mci_osla_setup(ah, false); + ar9003_mci_send_2g5g_status(ah, true); } } @@ -1132,7 +1123,7 @@ void ar9003_mci_init_cal_req(struct ath_hw *ah, bool *is_reusable) if (ar9003_mci_wait_for_gpm(ah, MCI_GPM_BT_CAL_GRANT, 0, 50000)) { ath_dbg(common, MCI, "MCI BT_CAL_GRANT received\n"); } else { - is_reusable = false; + *is_reusable = false; ath_dbg(common, MCI, "MCI BT_CAL_GRANT not received\n"); } } @@ -1173,11 +1164,10 @@ void ar9003_mci_cleanup(struct ath_hw *ah) } EXPORT_SYMBOL(ar9003_mci_cleanup); -u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) +u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type) { - struct ath_common *common = ath9k_hw_common(ah); struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; - u32 value = 0, 
more_gpm = 0, gpm_ptr; + u32 value = 0; u8 query_type; switch (state_type) { @@ -1190,81 +1180,6 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) } value &= AR_BTCOEX_CTRL_MCI_MODE_EN; break; - case MCI_STATE_INIT_GPM_OFFSET: - value = MS(REG_READ(ah, AR_MCI_GPM_1), AR_MCI_GPM_WRITE_PTR); - mci->gpm_idx = value; - break; - case MCI_STATE_NEXT_GPM_OFFSET: - case MCI_STATE_LAST_GPM_OFFSET: - /* - * This could be useful to avoid new GPM message interrupt which - * may lead to spurious interrupt after power sleep, or multiple - * entry of ath_mci_intr(). - * Adding empty GPM check by returning HAL_MCI_GPM_INVALID can - * alleviate this effect, but clearing GPM RX interrupt bit is - * safe, because whether this is called from hw or driver code - * there must be an interrupt bit set/triggered initially - */ - REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, - AR_MCI_INTERRUPT_RX_MSG_GPM); - - gpm_ptr = MS(REG_READ(ah, AR_MCI_GPM_1), AR_MCI_GPM_WRITE_PTR); - value = gpm_ptr; - - if (value == 0) - value = mci->gpm_len - 1; - else if (value >= mci->gpm_len) { - if (value != 0xFFFF) - value = 0; - } else { - value--; - } - - if (value == 0xFFFF) { - value = MCI_GPM_INVALID; - more_gpm = MCI_GPM_NOMORE; - } else if (state_type == MCI_STATE_NEXT_GPM_OFFSET) { - if (gpm_ptr == mci->gpm_idx) { - value = MCI_GPM_INVALID; - more_gpm = MCI_GPM_NOMORE; - } else { - for (;;) { - u32 temp_index; - - /* skip reserved GPM if any */ - - if (value != mci->gpm_idx) - more_gpm = MCI_GPM_MORE; - else - more_gpm = MCI_GPM_NOMORE; - - temp_index = mci->gpm_idx; - mci->gpm_idx++; - - if (mci->gpm_idx >= - mci->gpm_len) - mci->gpm_idx = 0; - - if (ar9003_mci_is_gpm_valid(ah, - temp_index)) { - value = temp_index; - break; - } - - if (more_gpm == MCI_GPM_NOMORE) { - value = MCI_GPM_INVALID; - break; - } - } - } - if (p_data) - *p_data = more_gpm; - } - - if (value != MCI_GPM_INVALID) - value <<= 4; - - break; case MCI_STATE_LAST_SCHD_MSG_OFFSET: value = MS(REG_READ(ah, AR_MCI_RX_STATUS), AR_MCI_RX_LAST_SCHD_MSG_INDEX); @@ -1276,21 +1191,6 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) AR_MCI_RX_REMOTE_SLEEP) ? 
MCI_BT_SLEEP : MCI_BT_AWAKE; break; - case MCI_STATE_CONT_RSSI_POWER: - value = MS(mci->cont_status, AR_MCI_CONT_RSSI_POWER); - break; - case MCI_STATE_CONT_PRIORITY: - value = MS(mci->cont_status, AR_MCI_CONT_RRIORITY); - break; - case MCI_STATE_CONT_TXRX: - value = MS(mci->cont_status, AR_MCI_CONT_TXRX); - break; - case MCI_STATE_BT: - value = mci->bt_state; - break; - case MCI_STATE_SET_BT_SLEEP: - mci->bt_state = MCI_BT_SLEEP; - break; case MCI_STATE_SET_BT_AWAKE: mci->bt_state = MCI_BT_AWAKE; ar9003_mci_send_coex_version_query(ah, true); @@ -1299,7 +1199,7 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) if (mci->unhalt_bt_gpm) ar9003_mci_send_coex_halt_bt_gpm(ah, false, true); - ar9003_mci_2g5g_switch(ah, true); + ar9003_mci_2g5g_switch(ah, false); break; case MCI_STATE_SET_BT_CAL_START: mci->bt_state = MCI_BT_CAL_START; @@ -1323,34 +1223,6 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) case MCI_STATE_SEND_WLAN_COEX_VERSION: ar9003_mci_send_coex_version_response(ah, true); break; - case MCI_STATE_SET_BT_COEX_VERSION: - if (!p_data) - ath_dbg(common, MCI, - "MCI Set BT Coex version with NULL data!!\n"); - else { - mci->bt_ver_major = (*p_data >> 8) & 0xff; - mci->bt_ver_minor = (*p_data) & 0xff; - mci->bt_version_known = true; - ath_dbg(common, MCI, "MCI BT version set: %d.%d\n", - mci->bt_ver_major, mci->bt_ver_minor); - } - break; - case MCI_STATE_SEND_WLAN_CHANNELS: - if (p_data) { - if (((mci->wlan_channels[1] & 0xffff0000) == - (*(p_data + 1) & 0xffff0000)) && - (mci->wlan_channels[2] == *(p_data + 2)) && - (mci->wlan_channels[3] == *(p_data + 3))) - break; - - mci->wlan_channels[0] = *p_data++; - mci->wlan_channels[1] = *p_data++; - mci->wlan_channels[2] = *p_data++; - mci->wlan_channels[3] = *p_data++; - } - mci->wlan_channels_update = true; - ar9003_mci_send_coex_wlan_channels(ah, true); - break; case MCI_STATE_SEND_VERSION_QUERY: ar9003_mci_send_coex_version_query(ah, true); break; @@ -1358,38 +1230,16 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) query_type = MCI_GPM_COEX_QUERY_BT_TOPOLOGY; ar9003_mci_send_coex_bt_status_query(ah, true, query_type); break; - case MCI_STATE_NEED_FLUSH_BT_INFO: - /* - * btcoex_hw.mci.unhalt_bt_gpm means whether it's - * needed to send UNHALT message. It's set whenever - * there's a request to send HALT message. - * mci_halted_bt_gpm means whether HALT message is sent - * out successfully. - * - * Checking (mci_unhalt_bt_gpm == false) instead of - * checking (ah->mci_halted_bt_gpm == false) will make - * sure currently is in UNHALT-ed mode and BT can - * respond to status query. - */ - value = (!mci->unhalt_bt_gpm && - mci->need_flush_btinfo) ? 1 : 0; - if (p_data) - mci->need_flush_btinfo = - (*p_data != 0) ? 
true : false; - break; case MCI_STATE_RECOVER_RX: ar9003_mci_prep_interface(ah); mci->query_bt = true; mci->need_flush_btinfo = true; ar9003_mci_send_coex_wlan_channels(ah, true); - ar9003_mci_2g5g_switch(ah, true); + ar9003_mci_2g5g_switch(ah, false); break; case MCI_STATE_NEED_FTP_STOMP: value = !(mci->config & ATH_MCI_CONFIG_DISABLE_FTP_STOMP); break; - case MCI_STATE_NEED_TUNING: - value = !(mci->config & ATH_MCI_CONFIG_DISABLE_TUNING); - break; default: break; } @@ -1397,3 +1247,173 @@ u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data) return value; } EXPORT_SYMBOL(ar9003_mci_state); + +void ar9003_mci_bt_gain_ctrl(struct ath_hw *ah) +{ + struct ath_common *common = ath9k_hw_common(ah); + struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; + + ath_dbg(common, MCI, "Give LNA and SPDT control to BT\n"); + + ar9003_mci_send_lna_take(ah, true); + udelay(50); + + REG_SET_BIT(ah, AR_PHY_GLB_CONTROL, AR_BTCOEX_CTRL_BT_OWN_SPDT_CTRL); + mci->is_2g = false; + mci->update_2g5g = true; + ar9003_mci_send_2g5g_status(ah, true); + + /* Force another 2g5g update at next scanning */ + mci->update_2g5g = true; +} + +void ar9003_mci_set_power_awake(struct ath_hw *ah) +{ + u32 btcoex_ctrl2, diag_sw; + int i; + u8 lna_ctrl, bt_sleep; + + for (i = 0; i < AH_WAIT_TIMEOUT; i++) { + btcoex_ctrl2 = REG_READ(ah, AR_BTCOEX_CTRL2); + if (btcoex_ctrl2 != 0xdeadbeef) + break; + udelay(AH_TIME_QUANTUM); + } + REG_WRITE(ah, AR_BTCOEX_CTRL2, (btcoex_ctrl2 | BIT(23))); + + for (i = 0; i < AH_WAIT_TIMEOUT; i++) { + diag_sw = REG_READ(ah, AR_DIAG_SW); + if (diag_sw != 0xdeadbeef) + break; + udelay(AH_TIME_QUANTUM); + } + REG_WRITE(ah, AR_DIAG_SW, (diag_sw | BIT(27) | BIT(19) | BIT(18))); + lna_ctrl = REG_READ(ah, AR_OBS_BUS_CTRL) & 0x3; + bt_sleep = REG_READ(ah, AR_MCI_RX_STATUS) & AR_MCI_RX_REMOTE_SLEEP; + + REG_WRITE(ah, AR_BTCOEX_CTRL2, btcoex_ctrl2); + REG_WRITE(ah, AR_DIAG_SW, diag_sw); + + if (bt_sleep && (lna_ctrl == 2)) { + REG_SET_BIT(ah, AR_BTCOEX_RC, 0x1); + REG_CLR_BIT(ah, AR_BTCOEX_RC, 0x1); + udelay(50); + } +} + +void ar9003_mci_check_gpm_offset(struct ath_hw *ah) +{ + struct ath_common *common = ath9k_hw_common(ah); + struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; + u32 offset; + + /* + * This should only be called before "MAC Warm Reset" or "MCI Reset Rx". + */ + offset = MS(REG_READ(ah, AR_MCI_GPM_1), AR_MCI_GPM_WRITE_PTR); + if (mci->gpm_idx == offset) + return; + ath_dbg(common, MCI, "GPM cached write pointer mismatch %d %d\n", + mci->gpm_idx, offset); + mci->query_bt = true; + mci->need_flush_btinfo = true; + mci->gpm_idx = 0; +} + +u32 ar9003_mci_get_next_gpm_offset(struct ath_hw *ah, bool first, u32 *more) +{ + struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; + u32 offset, more_gpm = 0, gpm_ptr; + + if (first) { + gpm_ptr = MS(REG_READ(ah, AR_MCI_GPM_1), AR_MCI_GPM_WRITE_PTR); + mci->gpm_idx = gpm_ptr; + return gpm_ptr; + } + + /* + * This could be useful to avoid new GPM message interrupt which + * may lead to spurious interrupt after power sleep, or multiple + * entry of ath_mci_intr(). 
+ * Adding empty GPM check by returning HAL_MCI_GPM_INVALID can + * alleviate this effect, but clearing GPM RX interrupt bit is + * safe, because whether this is called from hw or driver code + * there must be an interrupt bit set/triggered initially + */ + REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_RAW, + AR_MCI_INTERRUPT_RX_MSG_GPM); + + gpm_ptr = MS(REG_READ(ah, AR_MCI_GPM_1), AR_MCI_GPM_WRITE_PTR); + offset = gpm_ptr; + + if (!offset) + offset = mci->gpm_len - 1; + else if (offset >= mci->gpm_len) { + if (offset != 0xFFFF) + offset = 0; + } else { + offset--; + } + + if ((offset == 0xFFFF) || (gpm_ptr == mci->gpm_idx)) { + offset = MCI_GPM_INVALID; + more_gpm = MCI_GPM_NOMORE; + goto out; + } + for (;;) { + u32 temp_index; + + /* skip reserved GPM if any */ + + if (offset != mci->gpm_idx) + more_gpm = MCI_GPM_MORE; + else + more_gpm = MCI_GPM_NOMORE; + + temp_index = mci->gpm_idx; + mci->gpm_idx++; + + if (mci->gpm_idx >= mci->gpm_len) + mci->gpm_idx = 0; + + if (ar9003_mci_is_gpm_valid(ah, temp_index)) { + offset = temp_index; + break; + } + + if (more_gpm == MCI_GPM_NOMORE) { + offset = MCI_GPM_INVALID; + break; + } + } + + if (offset != MCI_GPM_INVALID) + offset <<= 4; +out: + if (more) + *more = more_gpm; + + return offset; +} +EXPORT_SYMBOL(ar9003_mci_get_next_gpm_offset); + +void ar9003_mci_set_bt_version(struct ath_hw *ah, u8 major, u8 minor) +{ + struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; + + mci->bt_ver_major = major; + mci->bt_ver_minor = minor; + mci->bt_version_known = true; + ath_dbg(ath9k_hw_common(ah), MCI, "MCI BT version set: %d.%d\n", + mci->bt_ver_major, mci->bt_ver_minor); +} +EXPORT_SYMBOL(ar9003_mci_set_bt_version); + +void ar9003_mci_send_wlan_channels(struct ath_hw *ah) +{ + struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; + + mci->wlan_channels_update = true; + ar9003_mci_send_coex_wlan_channels(ah, true); +} +EXPORT_SYMBOL(ar9003_mci_send_wlan_channels); diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mci.h b/drivers/net/wireless/ath/ath9k/ar9003_mci.h index 4842f6c06b8c..d33b8e128855 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_mci.h +++ b/drivers/net/wireless/ath/ath9k/ar9003_mci.h @@ -189,30 +189,18 @@ enum mci_bt_state { /* Type of state query */ enum mci_state_type { MCI_STATE_ENABLE, - MCI_STATE_INIT_GPM_OFFSET, - MCI_STATE_NEXT_GPM_OFFSET, - MCI_STATE_LAST_GPM_OFFSET, - MCI_STATE_BT, - MCI_STATE_SET_BT_SLEEP, MCI_STATE_SET_BT_AWAKE, MCI_STATE_SET_BT_CAL_START, MCI_STATE_SET_BT_CAL, MCI_STATE_LAST_SCHD_MSG_OFFSET, MCI_STATE_REMOTE_SLEEP, - MCI_STATE_CONT_RSSI_POWER, - MCI_STATE_CONT_PRIORITY, - MCI_STATE_CONT_TXRX, MCI_STATE_RESET_REQ_WAKE, MCI_STATE_SEND_WLAN_COEX_VERSION, - MCI_STATE_SET_BT_COEX_VERSION, - MCI_STATE_SEND_WLAN_CHANNELS, MCI_STATE_SEND_VERSION_QUERY, MCI_STATE_SEND_STATUS_QUERY, - MCI_STATE_NEED_FLUSH_BT_INFO, MCI_STATE_SET_CONCUR_TX_PRI, MCI_STATE_RECOVER_RX, MCI_STATE_NEED_FTP_STOMP, - MCI_STATE_NEED_TUNING, MCI_STATE_DEBUG, MCI_STATE_MAX }; @@ -260,28 +248,26 @@ enum mci_gpm_coex_opcode { bool ar9003_mci_send_message(struct ath_hw *ah, u8 header, u32 flag, u32 *payload, u8 len, bool wait_done, bool check_bt); -u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type, u32 *p_data); +u32 ar9003_mci_state(struct ath_hw *ah, u32 state_type); void ar9003_mci_setup(struct ath_hw *ah, u32 gpm_addr, void *gpm_buf, u16 len, u32 sched_addr); void ar9003_mci_cleanup(struct ath_hw *ah); void ar9003_mci_get_interrupt(struct ath_hw *ah, u32 *raw_intr, u32 *rx_msg_intr); - +u32 ar9003_mci_get_next_gpm_offset(struct ath_hw *ah, bool first, u32 
*more); +void ar9003_mci_set_bt_version(struct ath_hw *ah, u8 major, u8 minor); +void ar9003_mci_send_wlan_channels(struct ath_hw *ah); /* * These functions are used by ath9k_hw. */ #ifdef CONFIG_ATH9K_BTCOEX_SUPPORT -static inline bool ar9003_mci_is_ready(struct ath_hw *ah) -{ - return ah->btcoex_hw.mci.ready; -} void ar9003_mci_stop_bt(struct ath_hw *ah, bool save_fullsleep); void ar9003_mci_init_cal_req(struct ath_hw *ah, bool *is_reusable); void ar9003_mci_init_cal_done(struct ath_hw *ah); void ar9003_mci_set_full_sleep(struct ath_hw *ah); -void ar9003_mci_2g5g_switch(struct ath_hw *ah, bool wait_done); +void ar9003_mci_2g5g_switch(struct ath_hw *ah, bool force); void ar9003_mci_check_bt(struct ath_hw *ah); bool ar9003_mci_start_reset(struct ath_hw *ah, struct ath9k_channel *chan); int ar9003_mci_end_reset(struct ath_hw *ah, struct ath9k_channel *chan, @@ -289,13 +275,12 @@ int ar9003_mci_end_reset(struct ath_hw *ah, struct ath9k_channel *chan, void ar9003_mci_reset(struct ath_hw *ah, bool en_int, bool is_2g, bool is_full_sleep); void ar9003_mci_get_isr(struct ath_hw *ah, enum ath9k_int *masked); +void ar9003_mci_bt_gain_ctrl(struct ath_hw *ah); +void ar9003_mci_set_power_awake(struct ath_hw *ah); +void ar9003_mci_check_gpm_offset(struct ath_hw *ah); #else -static inline bool ar9003_mci_is_ready(struct ath_hw *ah) -{ - return false; -} static inline void ar9003_mci_stop_bt(struct ath_hw *ah, bool save_fullsleep) { } @@ -330,6 +315,15 @@ static inline void ar9003_mci_reset(struct ath_hw *ah, bool en_int, bool is_2g, static inline void ar9003_mci_get_isr(struct ath_hw *ah, enum ath9k_int *masked) { } +static inline void ar9003_mci_bt_gain_ctrl(struct ath_hw *ah) +{ +} +static inline void ar9003_mci_set_power_awake(struct ath_hw *ah) +{ +} +static inline void ar9003_mci_check_gpm_offset(struct ath_hw *ah) +{ +} #endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */ #endif diff --git a/drivers/net/wireless/ath/ath9k/ar9003_paprd.c b/drivers/net/wireless/ath/ath9k/ar9003_paprd.c index 3d400e8d6535..2c9f7d7ed4cc 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_paprd.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_paprd.c @@ -211,7 +211,7 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah) AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_NUM_CORR_STAGES, 7); REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3, AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_MIN_LOOPBACK_DEL, 1); - if (AR_SREV_9485(ah) || AR_SREV_9462(ah)) + if (AR_SREV_9485(ah) || AR_SREV_9462(ah) || AR_SREV_9550(ah)) REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3, AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP, -3); diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.c b/drivers/net/wireless/ath/ath9k/ar9003_phy.c index 11abb972be1f..e476f9f92ce3 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_phy.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.c @@ -99,7 +99,7 @@ static int ar9003_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan) channelSel = (freq * 4) / 120; chan_frac = (((freq * 4) % 120) * 0x20000) / 120; channelSel = (channelSel << 17) | chan_frac; - } else if (AR_SREV_9340(ah)) { + } else if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) { if (ah->is_clk_25mhz) { u32 chan_frac; @@ -113,11 +113,12 @@ static int ar9003_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan) /* Set to 2G mode */ bMode = 1; } else { - if (AR_SREV_9340(ah) && ah->is_clk_25mhz) { + if ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) && + ah->is_clk_25mhz) { u32 chan_frac; - channelSel = (freq * 2) / 75; - chan_frac = (((freq * 2) % 75) * 0x20000) / 75; + channelSel 
= freq / 75; + chan_frac = ((freq % 75) * 0x20000) / 75; channelSel = (channelSel << 17) | chan_frac; } else { channelSel = CHANSEL_5G(freq); @@ -173,16 +174,15 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah, int cur_bb_spur, negative = 0, cck_spur_freq; int i; int range, max_spur_cnts, synth_freq; - u8 *spur_fbin_ptr = NULL; + u8 *spur_fbin_ptr = ar9003_get_spur_chan_ptr(ah, IS_CHAN_2GHZ(chan)); /* * Need to verify range +/- 10 MHz in control channel, otherwise spur * is out-of-band and can be ignored. */ - if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah)) { - spur_fbin_ptr = ar9003_get_spur_chan_ptr(ah, - IS_CHAN_2GHZ(chan)); + if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah) || + AR_SREV_9550(ah)) { if (spur_fbin_ptr[0] == 0) /* No spur */ return; max_spur_cnts = 5; @@ -207,7 +207,8 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah, if (AR_SREV_9462(ah) && (i == 0 || i == 3)) continue; negative = 0; - if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah)) + if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah) || + AR_SREV_9550(ah)) cur_bb_spur = ath9k_hw_fbin2freq(spur_fbin_ptr[i], IS_CHAN_2GHZ(chan)); else @@ -620,6 +621,50 @@ static void ar9003_hw_prog_ini(struct ath_hw *ah, } } +static int ar9550_hw_get_modes_txgain_index(struct ath_hw *ah, + struct ath9k_channel *chan) +{ + int ret; + + switch (chan->chanmode) { + case CHANNEL_A: + case CHANNEL_A_HT20: + if (chan->channel <= 5350) + ret = 1; + else if ((chan->channel > 5350) && (chan->channel <= 5600)) + ret = 3; + else + ret = 5; + break; + + case CHANNEL_A_HT40PLUS: + case CHANNEL_A_HT40MINUS: + if (chan->channel <= 5350) + ret = 2; + else if ((chan->channel > 5350) && (chan->channel <= 5600)) + ret = 4; + else + ret = 6; + break; + + case CHANNEL_G: + case CHANNEL_G_HT20: + case CHANNEL_B: + ret = 8; + break; + + case CHANNEL_G_HT40PLUS: + case CHANNEL_G_HT40MINUS: + ret = 7; + break; + + default: + ret = -EINVAL; + } + + return ret; +} + static int ar9003_hw_process_ini(struct ath_hw *ah, struct ath9k_channel *chan) { @@ -661,7 +706,22 @@ static int ar9003_hw_process_ini(struct ath_hw *ah, } REG_WRITE_ARRAY(&ah->iniModesRxGain, 1, regWrites); - REG_WRITE_ARRAY(&ah->iniModesTxGain, modesIndex, regWrites); + if (AR_SREV_9550(ah)) + REG_WRITE_ARRAY(&ah->ini_modes_rx_gain_bounds, modesIndex, + regWrites); + + if (AR_SREV_9550(ah)) { + int modes_txgain_index; + + modes_txgain_index = ar9550_hw_get_modes_txgain_index(ah, chan); + if (modes_txgain_index < 0) + return -EINVAL; + + REG_WRITE_ARRAY(&ah->iniModesTxGain, modes_txgain_index, + regWrites); + } else { + REG_WRITE_ARRAY(&ah->iniModesTxGain, modesIndex, regWrites); + } /* * For 5GHz channels requiring Fast Clock, apply @@ -676,6 +736,10 @@ static int ar9003_hw_process_ini(struct ath_hw *ah, if (chan->channel == 2484) ar9003_hw_prog_ini(ah, &ah->ini_japan2484, 1); + if (AR_SREV_9462(ah)) + REG_WRITE(ah, AR_GLB_SWREG_DISCONT_MODE, + AR_GLB_SWREG_DISCONT_EN_BT_WLAN); + ah->modes_index = modesIndex; ar9003_hw_override_ini(ah); ar9003_hw_set_channel_regs(ah, chan); @@ -821,18 +885,18 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, REG_CLR_BIT(ah, AR_PHY_SFCORR_LOW, AR_PHY_SFCORR_LOW_USE_SELF_CORR_LOW); - if (!on != aniState->ofdmWeakSigDetectOff) { + if (on != aniState->ofdmWeakSigDetect) { ath_dbg(common, ANI, "** ch %d: ofdm weak signal: %s=>%s\n", chan->channel, - !aniState->ofdmWeakSigDetectOff ? + aniState->ofdmWeakSigDetect ? "on" : "off", on ? 
"on" : "off"); if (on) ah->stats.ast_ani_ofdmon++; else ah->stats.ast_ani_ofdmoff++; - aniState->ofdmWeakSigDetectOff = !on; + aniState->ofdmWeakSigDetect = on; } break; } @@ -851,7 +915,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, * from INI file & cap value */ value = firstep_table[level] - - firstep_table[ATH9K_ANI_FIRSTEP_LVL_NEW] + + firstep_table[ATH9K_ANI_FIRSTEP_LVL] + aniState->iniDef.firstep; if (value < ATH9K_SIG_FIRSTEP_SETTING_MIN) value = ATH9K_SIG_FIRSTEP_SETTING_MIN; @@ -866,7 +930,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, * from INI file & cap value */ value2 = firstep_table[level] - - firstep_table[ATH9K_ANI_FIRSTEP_LVL_NEW] + + firstep_table[ATH9K_ANI_FIRSTEP_LVL] + aniState->iniDef.firstepLow; if (value2 < ATH9K_SIG_FIRSTEP_SETTING_MIN) value2 = ATH9K_SIG_FIRSTEP_SETTING_MIN; @@ -882,7 +946,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, chan->channel, aniState->firstepLevel, level, - ATH9K_ANI_FIRSTEP_LVL_NEW, + ATH9K_ANI_FIRSTEP_LVL, value, aniState->iniDef.firstep); ath_dbg(common, ANI, @@ -890,7 +954,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, chan->channel, aniState->firstepLevel, level, - ATH9K_ANI_FIRSTEP_LVL_NEW, + ATH9K_ANI_FIRSTEP_LVL, value2, aniState->iniDef.firstepLow); if (level > aniState->firstepLevel) @@ -915,7 +979,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, * from INI file & cap value */ value = cycpwrThr1_table[level] - - cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL_NEW] + + cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] + aniState->iniDef.cycpwrThr1; if (value < ATH9K_SIG_SPUR_IMM_SETTING_MIN) value = ATH9K_SIG_SPUR_IMM_SETTING_MIN; @@ -931,7 +995,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, * from INI file & cap value */ value2 = cycpwrThr1_table[level] - - cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL_NEW] + + cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] + aniState->iniDef.cycpwrThr1Ext; if (value2 < ATH9K_SIG_SPUR_IMM_SETTING_MIN) value2 = ATH9K_SIG_SPUR_IMM_SETTING_MIN; @@ -946,7 +1010,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, chan->channel, aniState->spurImmunityLevel, level, - ATH9K_ANI_SPUR_IMMUNE_LVL_NEW, + ATH9K_ANI_SPUR_IMMUNE_LVL, value, aniState->iniDef.cycpwrThr1); ath_dbg(common, ANI, @@ -954,7 +1018,7 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, chan->channel, aniState->spurImmunityLevel, level, - ATH9K_ANI_SPUR_IMMUNE_LVL_NEW, + ATH9K_ANI_SPUR_IMMUNE_LVL, value2, aniState->iniDef.cycpwrThr1Ext); if (level > aniState->spurImmunityLevel) @@ -975,16 +1039,16 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, AR_PHY_MRC_CCK_ENABLE, is_on); REG_RMW_FIELD(ah, AR_PHY_MRC_CCK_CTRL, AR_PHY_MRC_CCK_MUX_REG, is_on); - if (!is_on != aniState->mrcCCKOff) { + if (is_on != aniState->mrcCCK) { ath_dbg(common, ANI, "** ch %d: MRC CCK: %s=>%s\n", chan->channel, - !aniState->mrcCCKOff ? "on" : "off", + aniState->mrcCCK ? "on" : "off", is_on ? "on" : "off"); if (is_on) ah->stats.ast_ani_ccklow++; else ah->stats.ast_ani_cckhigh++; - aniState->mrcCCKOff = !is_on; + aniState->mrcCCK = is_on; } break; } @@ -998,9 +1062,9 @@ static bool ar9003_hw_ani_control(struct ath_hw *ah, ath_dbg(common, ANI, "ANI parameters: SI=%d, ofdmWS=%s FS=%d MRCcck=%s listenTime=%d ofdmErrs=%d cckErrs=%d\n", aniState->spurImmunityLevel, - !aniState->ofdmWeakSigDetectOff ? "on" : "off", + aniState->ofdmWeakSigDetect ? "on" : "off", aniState->firstepLevel, - !aniState->mrcCCKOff ? "on" : "off", + aniState->mrcCCK ? 
"on" : "off", aniState->listenTime, aniState->ofdmPhyErrCount, aniState->cckPhyErrCount); @@ -1107,10 +1171,10 @@ static void ar9003_hw_ani_cache_ini_regs(struct ath_hw *ah) AR_PHY_EXT_CYCPWR_THR1); /* these levels just got reset to defaults by the INI */ - aniState->spurImmunityLevel = ATH9K_ANI_SPUR_IMMUNE_LVL_NEW; - aniState->firstepLevel = ATH9K_ANI_FIRSTEP_LVL_NEW; - aniState->ofdmWeakSigDetectOff = !ATH9K_ANI_USE_OFDM_WEAK_SIG; - aniState->mrcCCKOff = !ATH9K_ANI_ENABLE_MRC_CCK; + aniState->spurImmunityLevel = ATH9K_ANI_SPUR_IMMUNE_LVL; + aniState->firstepLevel = ATH9K_ANI_FIRSTEP_LVL; + aniState->ofdmWeakSigDetect = ATH9K_ANI_USE_OFDM_WEAK_SIG; + aniState->mrcCCK = true; } static void ar9003_hw_set_radar_params(struct ath_hw *ah, diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h index 7268a48a92a1..7bfbaf065a43 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h +++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h @@ -633,11 +633,13 @@ #define AR_PHY_65NM_CH0_BIAS2 0x160c4 #define AR_PHY_65NM_CH0_BIAS4 0x160cc #define AR_PHY_65NM_CH0_RXTX4 0x1610c +#define AR_PHY_65NM_CH1_RXTX4 0x1650c +#define AR_PHY_65NM_CH2_RXTX4 0x1690c #define AR_CH0_TOP (AR_SREV_9300(ah) ? 0x16288 : \ ((AR_SREV_9462(ah) ? 0x1628c : 0x16280))) -#define AR_CH0_TOP_XPABIASLVL (0x300) -#define AR_CH0_TOP_XPABIASLVL_S (8) +#define AR_CH0_TOP_XPABIASLVL (AR_SREV_9550(ah) ? 0x3c0 : 0x300) +#define AR_CH0_TOP_XPABIASLVL_S (AR_SREV_9550(ah) ? 6 : 8) #define AR_CH0_THERM (AR_SREV_9300(ah) ? 0x16290 : \ ((AR_SREV_9485(ah) ? 0x1628c : 0x16294))) @@ -650,6 +652,8 @@ #define AR_SWITCH_TABLE_COM_ALL_S (0) #define AR_SWITCH_TABLE_COM_AR9462_ALL (0xffffff) #define AR_SWITCH_TABLE_COM_AR9462_ALL_S (0) +#define AR_SWITCH_TABLE_COM_AR9550_ALL (0xffffff) +#define AR_SWITCH_TABLE_COM_AR9550_ALL_S (0) #define AR_SWITCH_TABLE_COM_SPDT (0x00f00000) #define AR_SWITCH_TABLE_COM_SPDT_ALL (0x0000fff0) #define AR_SWITCH_TABLE_COM_SPDT_ALL_S (4) @@ -820,18 +824,26 @@ #define AR_PHY_CHAN_INFO_MEMORY_CAPTURE_MASK 0x0001 #define AR_PHY_RX_DELAY_DELAY 0x00003FFF #define AR_PHY_CCK_TX_CTRL_JAPAN 0x00000010 -#define AR_PHY_SPECTRAL_SCAN_ENABLE 0x00000001 -#define AR_PHY_SPECTRAL_SCAN_ENABLE_S 0 -#define AR_PHY_SPECTRAL_SCAN_ACTIVE 0x00000002 -#define AR_PHY_SPECTRAL_SCAN_ACTIVE_S 1 -#define AR_PHY_SPECTRAL_SCAN_FFT_PERIOD 0x000000F0 -#define AR_PHY_SPECTRAL_SCAN_FFT_PERIOD_S 4 -#define AR_PHY_SPECTRAL_SCAN_PERIOD 0x0000FF00 -#define AR_PHY_SPECTRAL_SCAN_PERIOD_S 8 -#define AR_PHY_SPECTRAL_SCAN_COUNT 0x00FF0000 -#define AR_PHY_SPECTRAL_SCAN_COUNT_S 16 -#define AR_PHY_SPECTRAL_SCAN_SHORT_REPEAT 0x01000000 -#define AR_PHY_SPECTRAL_SCAN_SHORT_REPEAT_S 24 + +#define AR_PHY_SPECTRAL_SCAN_ENABLE 0x00000001 +#define AR_PHY_SPECTRAL_SCAN_ENABLE_S 0 +#define AR_PHY_SPECTRAL_SCAN_ACTIVE 0x00000002 +#define AR_PHY_SPECTRAL_SCAN_ACTIVE_S 1 +#define AR_PHY_SPECTRAL_SCAN_FFT_PERIOD 0x000000F0 +#define AR_PHY_SPECTRAL_SCAN_FFT_PERIOD_S 4 +#define AR_PHY_SPECTRAL_SCAN_PERIOD 0x0000FF00 +#define AR_PHY_SPECTRAL_SCAN_PERIOD_S 8 +#define AR_PHY_SPECTRAL_SCAN_COUNT 0x0FFF0000 +#define AR_PHY_SPECTRAL_SCAN_COUNT_S 16 +#define AR_PHY_SPECTRAL_SCAN_SHORT_REPEAT 0x10000000 +#define AR_PHY_SPECTRAL_SCAN_SHORT_REPEAT_S 28 +#define AR_PHY_SPECTRAL_SCAN_PRIORITY 0x20000000 +#define AR_PHY_SPECTRAL_SCAN_PRIORITY_S 29 +#define AR_PHY_SPECTRAL_SCAN_USE_ERR5 0x40000000 +#define AR_PHY_SPECTRAL_SCAN_USE_ERR5_S 30 +#define AR_PHY_SPECTRAL_SCAN_COMPRESSED_RPT 0x80000000 +#define AR_PHY_SPECTRAL_SCAN_COMPRESSED_RPT_S 31 + #define 
AR_PHY_CHANNEL_STATUS_RX_CLEAR 0x00000004 #define AR_PHY_RTT_CTRL_ENA_RADIO_RETENTION 0x00000001 #define AR_PHY_RTT_CTRL_ENA_RADIO_RETENTION_S 0 @@ -866,6 +878,9 @@ #define AR_PHY_65NM_CH0_RXTX4_THERM_ON 0x10000000 #define AR_PHY_65NM_CH0_RXTX4_THERM_ON_S 28 +#define AR_PHY_65NM_RXTX4_XLNA_BIAS 0xC0000000 +#define AR_PHY_65NM_RXTX4_XLNA_BIAS_S 30 + /* * Channel 1 Register Map */ diff --git a/drivers/net/wireless/ath/ath9k/ar9330_1p1_initvals.h b/drivers/net/wireless/ath/ath9k/ar9330_1p1_initvals.h index 1bd3a3d22101..6e1756bc3833 100644 --- a/drivers/net/wireless/ath/ath9k/ar9330_1p1_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9330_1p1_initvals.h @@ -337,12 +337,7 @@ static const u32 ar9331_modes_low_ob_db_tx_gain_1p1[][5] = { {0x00016284, 0x14d3f000, 0x14d3f000, 0x14d3f000, 0x14d3f000}, }; -static const u32 ar9331_1p1_baseband_core_txfir_coeff_japan_2484[][2] = { - /* Addr allmodes */ - {0x0000a398, 0x00000000}, - {0x0000a39c, 0x6f7f0301}, - {0x0000a3a0, 0xca9228ee}, -}; +#define ar9331_1p1_baseband_core_txfir_coeff_japan_2484 ar9462_2p0_baseband_core_txfir_coeff_japan_2484 static const u32 ar9331_1p1_xtal_25M[][2] = { /* Addr allmodes */ @@ -783,17 +778,7 @@ static const u32 ar9331_modes_high_power_tx_gain_1p1[][5] = { {0x00016284, 0x14d3f000, 0x14d3f000, 0x14d3f000, 0x14d3f000}, }; -static const u32 ar9331_1p1_mac_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, - {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, - {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, - {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, - {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, - {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, - {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, - {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, -}; +#define ar9331_1p1_mac_postamble ar9300_2p2_mac_postamble static const u32 ar9331_1p1_soc_preamble[][2] = { /* Addr allmodes */ @@ -1112,38 +1097,4 @@ static const u32 ar9331_common_tx_gain_offset1_1[][1] = { {0x00000000}, }; -static const u32 ar9331_1p1_chansel_xtal_25M[] = { - 0x0101479e, - 0x0101d027, - 0x010258af, - 0x0102e138, - 0x010369c0, - 0x0103f249, - 0x01047ad1, - 0x0105035a, - 0x01058be2, - 0x0106146b, - 0x01069cf3, - 0x0107257c, - 0x0107ae04, - 0x0108f5b2, -}; - -static const u32 ar9331_1p1_chansel_xtal_40M[] = { - 0x00a0ccbe, - 0x00a12213, - 0x00a17769, - 0x00a1ccbe, - 0x00a22213, - 0x00a27769, - 0x00a2ccbe, - 0x00a32213, - 0x00a37769, - 0x00a3ccbe, - 0x00a42213, - 0x00a47769, - 0x00a4ccbe, - 0x00a5998b, -}; - #endif /* INITVALS_9330_1P1_H */ diff --git a/drivers/net/wireless/ath/ath9k/ar9330_1p2_initvals.h b/drivers/net/wireless/ath/ath9k/ar9330_1p2_initvals.h index 0e6ca0834b34..57ed8a112173 100644 --- a/drivers/net/wireless/ath/ath9k/ar9330_1p2_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9330_1p2_initvals.h @@ -1,5 +1,6 @@ /* - * Copyright (c) 2011 Atheros Communications Inc. + * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. 
* * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -17,8 +18,8 @@ #ifndef INITVALS_9330_1P2_H #define INITVALS_9330_1P2_H -static const u32 ar9331_modes_lowest_ob_db_tx_gain_1p2[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ +static const u32 ar9331_modes_high_ob_db_tx_gain_1p2[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x0000a410, 0x000050d7, 0x000050d7, 0x000050d7, 0x000050d7}, {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, @@ -102,8 +103,14 @@ static const u32 ar9331_modes_lowest_ob_db_tx_gain_1p2[][5] = { {0x0000a63c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, }; +#define ar9331_modes_high_power_tx_gain_1p2 ar9331_modes_high_ob_db_tx_gain_1p2 + +#define ar9331_modes_low_ob_db_tx_gain_1p2 ar9331_modes_high_power_tx_gain_1p2 + +#define ar9331_modes_lowest_ob_db_tx_gain_1p2 ar9331_modes_low_ob_db_tx_gain_1p2 + static const u32 ar9331_1p2_baseband_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8005, 0xd00a8005}, {0x00009820, 0x206a002e, 0x206a002e, 0x206a002e, 0x206a002e}, {0x00009824, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0}, @@ -147,191 +154,6 @@ static const u32 ar9331_1p2_baseband_postamble[][5] = { {0x0000ae18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, }; -static const u32 ar9331_modes_high_ob_db_tx_gain_1p2[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x0000a410, 0x000050d7, 0x000050d7, 0x000050d7, 0x000050d7}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x23000a00, 0x23000a00}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x27000a02, 0x27000a02}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2b000a04, 0x2b000a04}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x3f001620, 0x3f001620}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x41001621, 0x41001621}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x44001640, 0x44001640}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x46001641, 0x46001641}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x48001642, 0x48001642}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x4b001644, 0x4b001644}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4e001a81, 0x4e001a81}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x51001a83, 0x51001a83}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x54001c84, 0x54001c84}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x57001ce3, 0x57001ce3}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x5b001ce5, 0x5b001ce5}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5f001ce9, 0x5f001ce9}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x66001eec, 0x66001eec}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x66001eec, 0x66001eec}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x66001eec, 0x66001eec}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a574, 
0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a580, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a584, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a588, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a58c, 0x11062202, 0x11062202, 0x0b000200, 0x0b000200}, - {0x0000a590, 0x17022e00, 0x17022e00, 0x0f000202, 0x0f000202}, - {0x0000a594, 0x1d000ec2, 0x1d000ec2, 0x11000400, 0x11000400}, - {0x0000a598, 0x25020ec0, 0x25020ec0, 0x15000402, 0x15000402}, - {0x0000a59c, 0x2b020ec3, 0x2b020ec3, 0x19000404, 0x19000404}, - {0x0000a5a0, 0x2f001f04, 0x2f001f04, 0x1b000603, 0x1b000603}, - {0x0000a5a4, 0x35001fc4, 0x35001fc4, 0x1f000a02, 0x1f000a02}, - {0x0000a5a8, 0x3c022f04, 0x3c022f04, 0x23000a04, 0x23000a04}, - {0x0000a5ac, 0x41023e85, 0x41023e85, 0x26000a20, 0x26000a20}, - {0x0000a5b0, 0x48023ec6, 0x48023ec6, 0x2a000e20, 0x2a000e20}, - {0x0000a5b4, 0x4d023f01, 0x4d023f01, 0x2e000e22, 0x2e000e22}, - {0x0000a5b8, 0x53023f4b, 0x53023f4b, 0x31000e24, 0x31000e24}, - {0x0000a5bc, 0x5a027f09, 0x5a027f09, 0x34001640, 0x34001640}, - {0x0000a5c0, 0x5f027fc9, 0x5f027fc9, 0x38001660, 0x38001660}, - {0x0000a5c4, 0x6502feca, 0x6502feca, 0x3b001861, 0x3b001861}, - {0x0000a5c8, 0x6b02ff4a, 0x6b02ff4a, 0x3e001a81, 0x3e001a81}, - {0x0000a5cc, 0x7203feca, 0x7203feca, 0x42001a83, 0x42001a83}, - {0x0000a5d0, 0x7703ff0b, 0x7703ff0b, 0x44001c84, 0x44001c84}, - {0x0000a5d4, 0x7d06ffcb, 0x7d06ffcb, 0x48001ce3, 0x48001ce3}, - {0x0000a5d8, 0x8407ff0b, 0x8407ff0b, 0x4c001ce5, 0x4c001ce5}, - {0x0000a5dc, 0x8907ffcb, 0x8907ffcb, 0x50001ce9, 0x50001ce9}, - {0x0000a5e0, 0x900fff0b, 0x900fff0b, 0x54001ceb, 0x54001ceb}, - {0x0000a5e4, 0x960fffcb, 0x960fffcb, 0x56001eec, 0x56001eec}, - {0x0000a5e8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5ec, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f0, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f4, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5fc, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, - {0x0000a618, 0x02008501, 0x02008501, 0x02008501, 0x02008501}, - {0x0000a61c, 0x02008802, 0x02008802, 0x02008802, 0x02008802}, - {0x0000a620, 0x0300c802, 0x0300c802, 0x0300c802, 0x0300c802}, - {0x0000a624, 0x0300cc03, 0x0300cc03, 0x0300cc03, 0x0300cc03}, - {0x0000a628, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a62c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a630, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a634, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a638, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a63c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, -}; - -static const u32 ar9331_modes_low_ob_db_tx_gain_1p2[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x0000a410, 0x000050d7, 0x000050d7, 0x000050d7, 0x000050d7}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, 
- {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x23000a00, 0x23000a00}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x27000a02, 0x27000a02}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2b000a04, 0x2b000a04}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x3f001620, 0x3f001620}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x41001621, 0x41001621}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x44001640, 0x44001640}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x46001641, 0x46001641}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x48001642, 0x48001642}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x4b001644, 0x4b001644}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4e001a81, 0x4e001a81}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x51001a83, 0x51001a83}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x54001c84, 0x54001c84}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x57001ce3, 0x57001ce3}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x5b001ce5, 0x5b001ce5}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5f001ce9, 0x5f001ce9}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x66001eec, 0x66001eec}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x66001eec, 0x66001eec}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x66001eec, 0x66001eec}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a580, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a584, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a588, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a58c, 0x11062202, 0x11062202, 0x0b000200, 0x0b000200}, - {0x0000a590, 0x17022e00, 0x17022e00, 0x0f000202, 0x0f000202}, - {0x0000a594, 0x1d000ec2, 0x1d000ec2, 0x11000400, 0x11000400}, - {0x0000a598, 0x25020ec0, 0x25020ec0, 0x15000402, 0x15000402}, - {0x0000a59c, 0x2b020ec3, 0x2b020ec3, 0x19000404, 0x19000404}, - {0x0000a5a0, 0x2f001f04, 0x2f001f04, 0x1b000603, 0x1b000603}, - {0x0000a5a4, 0x35001fc4, 0x35001fc4, 0x1f000a02, 0x1f000a02}, - {0x0000a5a8, 0x3c022f04, 0x3c022f04, 0x23000a04, 0x23000a04}, - {0x0000a5ac, 0x41023e85, 0x41023e85, 0x26000a20, 0x26000a20}, - {0x0000a5b0, 0x48023ec6, 0x48023ec6, 0x2a000e20, 0x2a000e20}, - {0x0000a5b4, 0x4d023f01, 0x4d023f01, 0x2e000e22, 0x2e000e22}, - {0x0000a5b8, 0x53023f4b, 0x53023f4b, 0x31000e24, 0x31000e24}, - {0x0000a5bc, 0x5a027f09, 0x5a027f09, 0x34001640, 0x34001640}, - {0x0000a5c0, 0x5f027fc9, 0x5f027fc9, 0x38001660, 0x38001660}, - {0x0000a5c4, 0x6502feca, 0x6502feca, 0x3b001861, 0x3b001861}, - {0x0000a5c8, 0x6b02ff4a, 0x6b02ff4a, 0x3e001a81, 0x3e001a81}, - {0x0000a5cc, 0x7203feca, 0x7203feca, 0x42001a83, 0x42001a83}, - {0x0000a5d0, 0x7703ff0b, 0x7703ff0b, 0x44001c84, 0x44001c84}, - {0x0000a5d4, 0x7d06ffcb, 0x7d06ffcb, 0x48001ce3, 0x48001ce3}, - {0x0000a5d8, 0x8407ff0b, 0x8407ff0b, 0x4c001ce5, 0x4c001ce5}, - {0x0000a5dc, 0x8907ffcb, 0x8907ffcb, 0x50001ce9, 0x50001ce9}, - {0x0000a5e0, 0x900fff0b, 0x900fff0b, 0x54001ceb, 0x54001ceb}, - {0x0000a5e4, 0x960fffcb, 
0x960fffcb, 0x56001eec, 0x56001eec}, - {0x0000a5e8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5ec, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f0, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f4, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5fc, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, - {0x0000a618, 0x02008501, 0x02008501, 0x02008501, 0x02008501}, - {0x0000a61c, 0x02008802, 0x02008802, 0x02008802, 0x02008802}, - {0x0000a620, 0x0300c802, 0x0300c802, 0x0300c802, 0x0300c802}, - {0x0000a624, 0x0300cc03, 0x0300cc03, 0x0300cc03, 0x0300cc03}, - {0x0000a628, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a62c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a630, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a634, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a638, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a63c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, -}; - -static const u32 ar9331_1p2_baseband_core_txfir_coeff_japan_2484[][2] = { - /* Addr allmodes */ - {0x0000a398, 0x00000000}, - {0x0000a39c, 0x6f7f0301}, - {0x0000a3a0, 0xca9228ee}, -}; - -static const u32 ar9331_1p2_xtal_25M[][2] = { - /* Addr allmodes */ - {0x00007038, 0x000002f8}, - {0x00008244, 0x0010f3d7}, - {0x0000824c, 0x0001e7ae}, - {0x0001609c, 0x0f508f29}, -}; - static const u32 ar9331_1p2_radio_core[][2] = { /* Addr allmodes */ {0x00016000, 0x36db6db6}, @@ -397,684 +219,24 @@ static const u32 ar9331_1p2_radio_core[][2] = { {0x000163d4, 0x00000000}, }; -static const u32 ar9331_1p2_soc_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00007010, 0x00000022, 0x00000022, 0x00000022, 0x00000022}, -}; +#define ar9331_1p2_baseband_core_txfir_coeff_japan_2484 ar9331_1p1_baseband_core_txfir_coeff_japan_2484 -static const u32 ar9331_common_wo_xlna_rx_gain_1p2[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00060005}, - {0x0000a004, 0x00810080}, - {0x0000a008, 0x00830082}, - {0x0000a00c, 0x00850084}, - {0x0000a010, 0x01820181}, - {0x0000a014, 0x01840183}, - {0x0000a018, 0x01880185}, - {0x0000a01c, 0x018a0189}, - {0x0000a020, 0x02850284}, - {0x0000a024, 0x02890288}, - {0x0000a028, 0x028b028a}, - {0x0000a02c, 0x03850384}, - {0x0000a030, 0x03890388}, - {0x0000a034, 0x038b038a}, - {0x0000a038, 0x038d038c}, - {0x0000a03c, 0x03910390}, - {0x0000a040, 0x03930392}, - {0x0000a044, 0x03950394}, - {0x0000a048, 0x00000396}, - {0x0000a04c, 0x00000000}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - {0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - {0x0000a07c, 0x00000000}, - {0x0000a080, 0x28282828}, - {0x0000a084, 0x28282828}, - {0x0000a088, 0x28282828}, - {0x0000a08c, 0x28282828}, - {0x0000a090, 0x28282828}, - {0x0000a094, 0x24242428}, - {0x0000a098, 0x171e1e1e}, - {0x0000a09c, 0x02020b0b}, - {0x0000a0a0, 0x02020202}, - {0x0000a0a4, 0x00000000}, - 
{0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x22072208}, - {0x0000a0c4, 0x22052206}, - {0x0000a0c8, 0x22032204}, - {0x0000a0cc, 0x22012202}, - {0x0000a0d0, 0x221f2200}, - {0x0000a0d4, 0x221d221e}, - {0x0000a0d8, 0x33023303}, - {0x0000a0dc, 0x33003301}, - {0x0000a0e0, 0x331e331f}, - {0x0000a0e4, 0x4402331d}, - {0x0000a0e8, 0x44004401}, - {0x0000a0ec, 0x441e441f}, - {0x0000a0f0, 0x55025503}, - {0x0000a0f4, 0x55005501}, - {0x0000a0f8, 0x551e551f}, - {0x0000a0fc, 0x6602551d}, - {0x0000a100, 0x66006601}, - {0x0000a104, 0x661e661f}, - {0x0000a108, 0x7703661d}, - {0x0000a10c, 0x77017702}, - {0x0000a110, 0x00007700}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x111f1100}, - {0x0000a148, 0x111d111e}, - {0x0000a14c, 0x111b111c}, - {0x0000a150, 0x22032204}, - {0x0000a154, 0x22012202}, - {0x0000a158, 0x221f2200}, - {0x0000a15c, 0x221d221e}, - {0x0000a160, 0x33013302}, - {0x0000a164, 0x331f3300}, - {0x0000a168, 0x4402331e}, - {0x0000a16c, 0x44004401}, - {0x0000a170, 0x441e441f}, - {0x0000a174, 0x55015502}, - {0x0000a178, 0x551f5500}, - {0x0000a17c, 0x6602551e}, - {0x0000a180, 0x66006601}, - {0x0000a184, 0x661e661f}, - {0x0000a188, 0x7703661d}, - {0x0000a18c, 0x77017702}, - {0x0000a190, 0x00007700}, - {0x0000a194, 0x00000000}, - {0x0000a198, 0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, - {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000296}, -}; +#define ar9331_1p2_xtal_25M ar9331_1p1_xtal_25M -static const u32 ar9331_1p2_baseband_core[][2] = { - /* Addr allmodes */ - {0x00009800, 0xafe68e30}, - {0x00009804, 0xfd14e000}, - {0x00009808, 0x9c0a8f6b}, - {0x0000980c, 0x04800000}, - {0x00009814, 0x9280c00a}, - {0x00009818, 0x00000000}, - {0x0000981c, 0x00020028}, - {0x00009834, 0x5f3ca3de}, - {0x00009838, 0x0108ecff}, - {0x0000983c, 0x14750600}, - {0x00009880, 0x201fff00}, - {0x00009884, 0x00001042}, - {0x000098a4, 0x00200400}, - {0x000098b0, 0x32840bbe}, - {0x000098d0, 0x004b6a8e}, - {0x000098d4, 0x00000820}, - {0x000098dc, 0x00000000}, - {0x000098f0, 0x00000000}, - {0x000098f4, 0x00000000}, - {0x00009c04, 0x00000000}, - {0x00009c08, 0x03200000}, - {0x00009c0c, 0x00000000}, - {0x00009c10, 0x00000000}, - {0x00009c14, 0x00046384}, - {0x00009c18, 0x05b6b440}, - {0x00009c1c, 0x00b6b440}, - {0x00009d00, 0xc080a333}, - {0x00009d04, 0x40206c10}, - {0x00009d08, 0x009c4060}, - {0x00009d0c, 0x1883800a}, - {0x00009d10, 0x01834061}, - {0x00009d14, 0x00c00400}, - {0x00009d18, 0x00000000}, - {0x00009e08, 0x0038233c}, - {0x00009e24, 0x9927b515}, - {0x00009e28, 0x12ef0200}, - 
{0x00009e30, 0x06336f77}, - {0x00009e34, 0x6af6532f}, - {0x00009e38, 0x0cc80c00}, - {0x00009e40, 0x0d261820}, - {0x00009e4c, 0x00001004}, - {0x00009e50, 0x00ff03f1}, - {0x00009fc0, 0x803e4788}, - {0x00009fc4, 0x0001efb5}, - {0x00009fcc, 0x40000014}, - {0x0000a20c, 0x00000000}, - {0x0000a220, 0x00000000}, - {0x0000a224, 0x00000000}, - {0x0000a228, 0x10002310}, - {0x0000a23c, 0x00000000}, - {0x0000a244, 0x0c000000}, - {0x0000a2a0, 0x00000001}, - {0x0000a2c0, 0x00000001}, - {0x0000a2c8, 0x00000000}, - {0x0000a2cc, 0x18c43433}, - {0x0000a2d4, 0x00000000}, - {0x0000a2dc, 0x00000000}, - {0x0000a2e0, 0x00000000}, - {0x0000a2e4, 0x00000000}, - {0x0000a2e8, 0x00000000}, - {0x0000a2ec, 0x00000000}, - {0x0000a2f0, 0x00000000}, - {0x0000a2f4, 0x00000000}, - {0x0000a2f8, 0x00000000}, - {0x0000a344, 0x00000000}, - {0x0000a34c, 0x00000000}, - {0x0000a350, 0x0000a000}, - {0x0000a364, 0x00000000}, - {0x0000a370, 0x00000000}, - {0x0000a390, 0x00000001}, - {0x0000a394, 0x00000444}, - {0x0000a398, 0x001f0e0f}, - {0x0000a39c, 0x0075393f}, - {0x0000a3a0, 0xb79f6427}, - {0x0000a3a4, 0x00000000}, - {0x0000a3a8, 0xaaaaaaaa}, - {0x0000a3ac, 0x3c466478}, - {0x0000a3c0, 0x20202020}, - {0x0000a3c4, 0x22222220}, - {0x0000a3c8, 0x20200020}, - {0x0000a3cc, 0x20202020}, - {0x0000a3d0, 0x20202020}, - {0x0000a3d4, 0x20202020}, - {0x0000a3d8, 0x20202020}, - {0x0000a3dc, 0x20202020}, - {0x0000a3e0, 0x20202020}, - {0x0000a3e4, 0x20202020}, - {0x0000a3e8, 0x20202020}, - {0x0000a3ec, 0x20202020}, - {0x0000a3f0, 0x00000000}, - {0x0000a3f4, 0x00000006}, - {0x0000a3f8, 0x0cdbd380}, - {0x0000a3fc, 0x000f0f01}, - {0x0000a400, 0x8fa91f01}, - {0x0000a404, 0x00000000}, - {0x0000a408, 0x0e79e5c6}, - {0x0000a40c, 0x00820820}, - {0x0000a414, 0x1ce739ce}, - {0x0000a418, 0x2d001dce}, - {0x0000a41c, 0x1ce739ce}, - {0x0000a420, 0x000001ce}, - {0x0000a424, 0x1ce739ce}, - {0x0000a428, 0x000001ce}, - {0x0000a42c, 0x1ce739ce}, - {0x0000a430, 0x1ce739ce}, - {0x0000a434, 0x00000000}, - {0x0000a438, 0x00001801}, - {0x0000a43c, 0x00000000}, - {0x0000a440, 0x00000000}, - {0x0000a444, 0x00000000}, - {0x0000a448, 0x04000000}, - {0x0000a44c, 0x00000001}, - {0x0000a450, 0x00010000}, - {0x0000a458, 0x00000000}, - {0x0000a640, 0x00000000}, - {0x0000a644, 0x3fad9d74}, - {0x0000a648, 0x0048060a}, - {0x0000a64c, 0x00003c37}, - {0x0000a670, 0x03020100}, - {0x0000a674, 0x09080504}, - {0x0000a678, 0x0d0c0b0a}, - {0x0000a67c, 0x13121110}, - {0x0000a680, 0x31301514}, - {0x0000a684, 0x35343332}, - {0x0000a688, 0x00000036}, - {0x0000a690, 0x00000838}, - {0x0000a7c0, 0x00000000}, - {0x0000a7c4, 0xfffffffc}, - {0x0000a7c8, 0x00000000}, - {0x0000a7cc, 0x00000000}, - {0x0000a7d0, 0x00000000}, - {0x0000a7d4, 0x00000004}, - {0x0000a7dc, 0x00000001}, -}; +#define ar9331_1p2_xtal_40M ar9331_1p1_xtal_40M -static const u32 ar9331_modes_high_power_tx_gain_1p2[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x0000a410, 0x000050d7, 0x000050d7, 0x000050d7, 0x000050d7}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x23000a00, 0x23000a00}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 
0x27000a02, 0x27000a02}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2b000a04, 0x2b000a04}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x3f001620, 0x3f001620}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x41001621, 0x41001621}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x44001640, 0x44001640}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x46001641, 0x46001641}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x48001642, 0x48001642}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x4b001644, 0x4b001644}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4e001a81, 0x4e001a81}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x51001a83, 0x51001a83}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x54001c84, 0x54001c84}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x57001ce3, 0x57001ce3}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x5b001ce5, 0x5b001ce5}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5f001ce9, 0x5f001ce9}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x66001eec, 0x66001eec}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x66001eec, 0x66001eec}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x66001eec, 0x66001eec}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x66001eec, 0x66001eec}, - {0x0000a580, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a584, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a588, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a58c, 0x11062202, 0x11062202, 0x0b000200, 0x0b000200}, - {0x0000a590, 0x17022e00, 0x17022e00, 0x0f000202, 0x0f000202}, - {0x0000a594, 0x1d000ec2, 0x1d000ec2, 0x11000400, 0x11000400}, - {0x0000a598, 0x25020ec0, 0x25020ec0, 0x15000402, 0x15000402}, - {0x0000a59c, 0x2b020ec3, 0x2b020ec3, 0x19000404, 0x19000404}, - {0x0000a5a0, 0x2f001f04, 0x2f001f04, 0x1b000603, 0x1b000603}, - {0x0000a5a4, 0x35001fc4, 0x35001fc4, 0x1f000a02, 0x1f000a02}, - {0x0000a5a8, 0x3c022f04, 0x3c022f04, 0x23000a04, 0x23000a04}, - {0x0000a5ac, 0x41023e85, 0x41023e85, 0x26000a20, 0x26000a20}, - {0x0000a5b0, 0x48023ec6, 0x48023ec6, 0x2a000e20, 0x2a000e20}, - {0x0000a5b4, 0x4d023f01, 0x4d023f01, 0x2e000e22, 0x2e000e22}, - {0x0000a5b8, 0x53023f4b, 0x53023f4b, 0x31000e24, 0x31000e24}, - {0x0000a5bc, 0x5a027f09, 0x5a027f09, 0x34001640, 0x34001640}, - {0x0000a5c0, 0x5f027fc9, 0x5f027fc9, 0x38001660, 0x38001660}, - {0x0000a5c4, 0x6502feca, 0x6502feca, 0x3b001861, 0x3b001861}, - {0x0000a5c8, 0x6b02ff4a, 0x6b02ff4a, 0x3e001a81, 0x3e001a81}, - {0x0000a5cc, 0x7203feca, 0x7203feca, 0x42001a83, 0x42001a83}, - {0x0000a5d0, 0x7703ff0b, 0x7703ff0b, 0x44001c84, 0x44001c84}, - {0x0000a5d4, 0x7d06ffcb, 0x7d06ffcb, 0x48001ce3, 0x48001ce3}, - {0x0000a5d8, 0x8407ff0b, 0x8407ff0b, 0x4c001ce5, 0x4c001ce5}, - {0x0000a5dc, 0x8907ffcb, 0x8907ffcb, 0x50001ce9, 0x50001ce9}, - {0x0000a5e0, 0x900fff0b, 0x900fff0b, 0x54001ceb, 0x54001ceb}, - {0x0000a5e4, 0x960fffcb, 0x960fffcb, 0x56001eec, 0x56001eec}, - {0x0000a5e8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5ec, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f0, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f4, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5f8, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a5fc, 0x9c1fff0b, 0x9c1fff0b, 0x56001eec, 0x56001eec}, - {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - 
{0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, - {0x0000a618, 0x02008501, 0x02008501, 0x02008501, 0x02008501}, - {0x0000a61c, 0x02008802, 0x02008802, 0x02008802, 0x02008802}, - {0x0000a620, 0x0300c802, 0x0300c802, 0x0300c802, 0x0300c802}, - {0x0000a624, 0x0300cc03, 0x0300cc03, 0x0300cc03, 0x0300cc03}, - {0x0000a628, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a62c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a630, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a634, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a638, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, - {0x0000a63c, 0x04011004, 0x04011004, 0x04011004, 0x04011004}, -}; +#define ar9331_1p2_baseband_core ar9331_1p1_baseband_core -static const u32 ar9331_1p2_mac_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, - {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, - {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, - {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, - {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, - {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, - {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, - {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, -}; +#define ar9331_1p2_soc_postamble ar9331_1p1_soc_postamble -static const u32 ar9331_1p2_soc_preamble[][2] = { - /* Addr allmodes */ - {0x00007020, 0x00000000}, - {0x00007034, 0x00000002}, - {0x00007038, 0x000002f8}, -}; +#define ar9331_1p2_mac_postamble ar9331_1p1_mac_postamble -static const u32 ar9331_1p2_xtal_40M[][2] = { - /* Addr allmodes */ - {0x00007038, 0x000004c2}, - {0x00008244, 0x0010f400}, - {0x0000824c, 0x0001e800}, - {0x0001609c, 0x0b283f31}, -}; +#define ar9331_1p2_soc_preamble ar9331_1p1_soc_preamble -static const u32 ar9331_1p2_mac_core[][2] = { - /* Addr allmodes */ - {0x00000008, 0x00000000}, - {0x00000030, 0x00020085}, - {0x00000034, 0x00000005}, - {0x00000040, 0x00000000}, - {0x00000044, 0x00000000}, - {0x00000048, 0x00000008}, - {0x0000004c, 0x00000010}, - {0x00000050, 0x00000000}, - {0x00001040, 0x002ffc0f}, - {0x00001044, 0x002ffc0f}, - {0x00001048, 0x002ffc0f}, - {0x0000104c, 0x002ffc0f}, - {0x00001050, 0x002ffc0f}, - {0x00001054, 0x002ffc0f}, - {0x00001058, 0x002ffc0f}, - {0x0000105c, 0x002ffc0f}, - {0x00001060, 0x002ffc0f}, - {0x00001064, 0x002ffc0f}, - {0x000010f0, 0x00000100}, - {0x00001270, 0x00000000}, - {0x000012b0, 0x00000000}, - {0x000012f0, 0x00000000}, - {0x0000143c, 0x00000000}, - {0x0000147c, 0x00000000}, - {0x00008000, 0x00000000}, - {0x00008004, 0x00000000}, - {0x00008008, 0x00000000}, - {0x0000800c, 0x00000000}, - {0x00008018, 0x00000000}, - {0x00008020, 0x00000000}, - {0x00008038, 0x00000000}, - {0x0000803c, 0x00000000}, - {0x00008040, 0x00000000}, - {0x00008044, 0x00000000}, - {0x00008048, 0x00000000}, - {0x0000804c, 0xffffffff}, - {0x00008054, 0x00000000}, - {0x00008058, 0x00000000}, - {0x0000805c, 0x000fc78f}, - {0x00008060, 0x0000000f}, - {0x00008064, 0x00000000}, - {0x00008070, 0x00000310}, - {0x00008074, 0x00000020}, - {0x00008078, 0x00000000}, - {0x0000809c, 0x0000000f}, - {0x000080a0, 0x00000000}, - {0x000080a4, 0x02ff0000}, - {0x000080a8, 
0x0e070605}, - {0x000080ac, 0x0000000d}, - {0x000080b0, 0x00000000}, - {0x000080b4, 0x00000000}, - {0x000080b8, 0x00000000}, - {0x000080bc, 0x00000000}, - {0x000080c0, 0x2a800000}, - {0x000080c4, 0x06900168}, - {0x000080c8, 0x13881c20}, - {0x000080cc, 0x01f40000}, - {0x000080d0, 0x00252500}, - {0x000080d4, 0x00a00000}, - {0x000080d8, 0x00400000}, - {0x000080dc, 0x00000000}, - {0x000080e0, 0xffffffff}, - {0x000080e4, 0x0000ffff}, - {0x000080e8, 0x3f3f3f3f}, - {0x000080ec, 0x00000000}, - {0x000080f0, 0x00000000}, - {0x000080f4, 0x00000000}, - {0x000080fc, 0x00020000}, - {0x00008100, 0x00000000}, - {0x00008108, 0x00000052}, - {0x0000810c, 0x00000000}, - {0x00008110, 0x00000000}, - {0x00008114, 0x000007ff}, - {0x00008118, 0x000000aa}, - {0x0000811c, 0x00003210}, - {0x00008124, 0x00000000}, - {0x00008128, 0x00000000}, - {0x0000812c, 0x00000000}, - {0x00008130, 0x00000000}, - {0x00008134, 0x00000000}, - {0x00008138, 0x00000000}, - {0x0000813c, 0x0000ffff}, - {0x00008144, 0xffffffff}, - {0x00008168, 0x00000000}, - {0x0000816c, 0x00000000}, - {0x00008170, 0x18486200}, - {0x00008174, 0x33332210}, - {0x00008178, 0x00000000}, - {0x0000817c, 0x00020000}, - {0x000081c0, 0x00000000}, - {0x000081c4, 0x33332210}, - {0x000081c8, 0x00000000}, - {0x000081cc, 0x00000000}, - {0x000081d4, 0x00000000}, - {0x000081ec, 0x00000000}, - {0x000081f0, 0x00000000}, - {0x000081f4, 0x00000000}, - {0x000081f8, 0x00000000}, - {0x000081fc, 0x00000000}, - {0x00008240, 0x00100000}, - {0x00008248, 0x00000800}, - {0x00008250, 0x00000000}, - {0x00008254, 0x00000000}, - {0x00008258, 0x00000000}, - {0x0000825c, 0x40000000}, - {0x00008260, 0x00080922}, - {0x00008264, 0x9d400010}, - {0x00008268, 0xffffffff}, - {0x0000826c, 0x0000ffff}, - {0x00008270, 0x00000000}, - {0x00008274, 0x40000000}, - {0x00008278, 0x003e4180}, - {0x0000827c, 0x00000004}, - {0x00008284, 0x0000002c}, - {0x00008288, 0x0000002c}, - {0x0000828c, 0x000000ff}, - {0x00008294, 0x00000000}, - {0x00008298, 0x00000000}, - {0x0000829c, 0x00000000}, - {0x00008300, 0x00000140}, - {0x00008314, 0x00000000}, - {0x0000831c, 0x0000010d}, - {0x00008328, 0x00000000}, - {0x0000832c, 0x00000007}, - {0x00008330, 0x00000302}, - {0x00008334, 0x00000700}, - {0x00008338, 0x00ff0000}, - {0x0000833c, 0x02400000}, - {0x00008340, 0x000107ff}, - {0x00008344, 0xaa48105b}, - {0x00008348, 0x008f0000}, - {0x0000835c, 0x00000000}, - {0x00008360, 0xffffffff}, - {0x00008364, 0xffffffff}, - {0x00008368, 0x00000000}, - {0x00008370, 0x00000000}, - {0x00008374, 0x000000ff}, - {0x00008378, 0x00000000}, - {0x0000837c, 0x00000000}, - {0x00008380, 0xffffffff}, - {0x00008384, 0xffffffff}, - {0x00008390, 0xffffffff}, - {0x00008394, 0xffffffff}, - {0x00008398, 0x00000000}, - {0x0000839c, 0x00000000}, - {0x000083a0, 0x00000000}, - {0x000083a4, 0x0000fa14}, - {0x000083a8, 0x000f0c00}, - {0x000083ac, 0x33332210}, - {0x000083b0, 0x33332210}, - {0x000083b4, 0x33332210}, - {0x000083b8, 0x33332210}, - {0x000083bc, 0x00000000}, - {0x000083c0, 0x00000000}, - {0x000083c4, 0x00000000}, - {0x000083c8, 0x00000000}, - {0x000083cc, 0x00000200}, - {0x000083d0, 0x000301ff}, -}; +#define ar9331_1p2_mac_core ar9331_1p1_mac_core -static const u32 ar9331_common_rx_gain_1p2[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00010000}, - {0x0000a004, 0x00030002}, - {0x0000a008, 0x00050004}, - {0x0000a00c, 0x00810080}, - {0x0000a010, 0x01800082}, - {0x0000a014, 0x01820181}, - {0x0000a018, 0x01840183}, - {0x0000a01c, 0x01880185}, - {0x0000a020, 0x018a0189}, - {0x0000a024, 0x02850284}, - {0x0000a028, 0x02890288}, - {0x0000a02c, 
0x03850384}, - {0x0000a030, 0x03890388}, - {0x0000a034, 0x038b038a}, - {0x0000a038, 0x038d038c}, - {0x0000a03c, 0x03910390}, - {0x0000a040, 0x03930392}, - {0x0000a044, 0x03950394}, - {0x0000a048, 0x00000396}, - {0x0000a04c, 0x00000000}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - {0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - {0x0000a07c, 0x00000000}, - {0x0000a080, 0x28282828}, - {0x0000a084, 0x28282828}, - {0x0000a088, 0x28282828}, - {0x0000a08c, 0x28282828}, - {0x0000a090, 0x28282828}, - {0x0000a094, 0x21212128}, - {0x0000a098, 0x171c1c1c}, - {0x0000a09c, 0x02020212}, - {0x0000a0a0, 0x00000202}, - {0x0000a0a4, 0x00000000}, - {0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x001f0000}, - {0x0000a0c4, 0x111f1100}, - {0x0000a0c8, 0x111d111e}, - {0x0000a0cc, 0x111b111c}, - {0x0000a0d0, 0x22032204}, - {0x0000a0d4, 0x22012202}, - {0x0000a0d8, 0x221f2200}, - {0x0000a0dc, 0x221d221e}, - {0x0000a0e0, 0x33013302}, - {0x0000a0e4, 0x331f3300}, - {0x0000a0e8, 0x4402331e}, - {0x0000a0ec, 0x44004401}, - {0x0000a0f0, 0x441e441f}, - {0x0000a0f4, 0x55015502}, - {0x0000a0f8, 0x551f5500}, - {0x0000a0fc, 0x6602551e}, - {0x0000a100, 0x66006601}, - {0x0000a104, 0x661e661f}, - {0x0000a108, 0x7703661d}, - {0x0000a10c, 0x77017702}, - {0x0000a110, 0x00007700}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x111f1100}, - {0x0000a148, 0x111d111e}, - {0x0000a14c, 0x111b111c}, - {0x0000a150, 0x22032204}, - {0x0000a154, 0x22012202}, - {0x0000a158, 0x221f2200}, - {0x0000a15c, 0x221d221e}, - {0x0000a160, 0x33013302}, - {0x0000a164, 0x331f3300}, - {0x0000a168, 0x4402331e}, - {0x0000a16c, 0x44004401}, - {0x0000a170, 0x441e441f}, - {0x0000a174, 0x55015502}, - {0x0000a178, 0x551f5500}, - {0x0000a17c, 0x6602551e}, - {0x0000a180, 0x66006601}, - {0x0000a184, 0x661e661f}, - {0x0000a188, 0x7703661d}, - {0x0000a18c, 0x77017702}, - {0x0000a190, 0x00007700}, - {0x0000a194, 0x00000000}, - {0x0000a198, 0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, - {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000296}, -}; +#define ar9331_common_wo_xlna_rx_gain_1p2 ar9331_common_wo_xlna_rx_gain_1p1 + +#define ar9331_common_rx_gain_1p2 ar9485_common_rx_gain_1_1 #endif /* INITVALS_9330_1P2_H */ diff --git a/drivers/net/wireless/ath/ath9k/ar9340_initvals.h b/drivers/net/wireless/ath/ath9k/ar9340_initvals.h 
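Note on the ar9330 changes above: the 1p2 tables are not dropped, they are collapsed into preprocessor aliases so that a single copy of each identical register table (the 1p1 version, or in some cases a table shared with another family such as ar9300/ar9485) is kept and the per-revision names resolve to it at compile time, leaving the INI write paths unchanged. A minimal sketch of the pattern follows; the names are illustrative and not taken from this patch.

	/* One shared copy of the data... */
	static const u32 example_shared_table[][2] = {
		/* Addr      allmodes */
		{0x0000a398, 0x00000000},
		{0x0000a39c, 0x6f7f0301},
	};

	/* ...and the per-revision name becomes an alias resolved by the
	 * preprocessor, so existing references keep compiling unchanged. */
	#define example_rev2_table example_shared_table
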
index 815a8af1beef..1d8235e19f0f 100644 --- a/drivers/net/wireless/ath/ath9k/ar9340_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9340_initvals.h @@ -1,5 +1,6 @@ /* - * Copyright (c) 2011 Atheros Communications Inc. + * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -18,16 +19,16 @@ #define INITVALS_9340_H static const u32 ar9340_1p0_radio_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x000160ac, 0xa4646800, 0xa4646800, 0xa4646800, 0xa4646800}, - {0x0001610c, 0x08000000, 0x08000000, 0x00000000, 0x00000000}, + {0x0001610c, 0x08000000, 0x00000000, 0x00000000, 0x00000000}, {0x00016140, 0x10804000, 0x10804000, 0x50804000, 0x50804000}, - {0x0001650c, 0x08000000, 0x08000000, 0x00000000, 0x00000000}, + {0x0001650c, 0x08000000, 0x00000000, 0x00000000, 0x00000000}, {0x00016540, 0x10804000, 0x10804000, 0x50804000, 0x50804000}, }; static const u32 ar9340Modes_lowest_ob_db_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000a504, 0x06000003, 0x06000003, 0x04000002, 0x04000002}, @@ -99,21 +100,10 @@ static const u32 ar9340Modes_lowest_ob_db_tx_gain_table_1p0[][5] = { {0x00016448, 0x24925266, 0x24925266, 0x24925266, 0x24925266}, }; -static const u32 ar9340Modes_fast_clock_1p0[][3] = { - /* Addr 5G_HT20 5G_HT40 */ - {0x00001030, 0x00000268, 0x000004d0}, - {0x00001070, 0x0000018c, 0x00000318}, - {0x000010b0, 0x00000fd0, 0x00001fa0}, - {0x00008014, 0x044c044c, 0x08980898}, - {0x0000801c, 0x148ec02b, 0x148ec057}, - {0x00008318, 0x000044c0, 0x00008980}, - {0x00009e00, 0x03721821, 0x03721821}, - {0x0000a230, 0x0000000b, 0x00000016}, - {0x0000a254, 0x00000898, 0x00001130}, -}; +#define ar9340Modes_fast_clock_1p0 ar9300Modes_fast_clock_2p2 static const u32 ar9340_1p0_radio_core[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x00016000, 0x36db6db6}, {0x00016004, 0x6db6db40}, {0x00016008, 0x73f00000}, @@ -146,15 +136,13 @@ static const u32 ar9340_1p0_radio_core[][2] = { {0x00016100, 0x04cb0001}, {0x00016104, 0xfff80000}, {0x00016108, 0x00080010}, - {0x0001610c, 0x00000000}, {0x00016140, 0x50804008}, {0x00016144, 0x01884080}, {0x00016148, 0x000080c0}, {0x00016280, 0x01000015}, - {0x00016284, 0x05530000}, + {0x00016284, 0x15530000}, {0x00016288, 0x00318000}, {0x0001628c, 0x50000000}, - {0x00016290, 0x4080294f}, {0x00016380, 0x00000000}, {0x00016384, 0x00000000}, {0x00016388, 0x00800700}, @@ -219,52 +207,43 @@ static const u32 ar9340_1p0_radio_core[][2] = { }; static const u32 ar9340_1p0_radio_core_40M[][2] = { + /* Addr allmodes */ {0x0001609c, 0x02566f3a}, {0x000160ac, 0xa4647c00}, {0x000160b0, 0x01885f5a}, }; -static const u32 ar9340_1p0_mac_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, - {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, - {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, - {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, - {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, - {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, - {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, 
- {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, -}; +#define ar9340_1p0_mac_postamble ar9300_2p2_mac_postamble -static const u32 ar9340_1p0_soc_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00007010, 0x00000023, 0x00000023, 0x00000023, 0x00000023}, -}; +#define ar9340_1p0_soc_postamble ar9300_2p2_soc_postamble static const u32 ar9340_1p0_baseband_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8011, 0xd00a8011}, {0x00009820, 0x206a022e, 0x206a022e, 0x206a022e, 0x206a022e}, {0x00009824, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0}, {0x00009828, 0x06903081, 0x06903081, 0x06903881, 0x06903881}, {0x0000982c, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4}, {0x00009830, 0x0000059c, 0x0000059c, 0x0000119c, 0x0000119c}, - {0x00009c00, 0x00000044, 0x000000c4, 0x000000c4, 0x00000044}, - {0x00009e00, 0x0372161e, 0x0372161e, 0x037216a0, 0x037216a0}, - {0x00009e04, 0x00182020, 0x00182020, 0x00182020, 0x00182020}, + {0x00009c00, 0x000000c4, 0x000000c4, 0x000000c4, 0x000000c4}, + {0x00009e00, 0x0372111a, 0x0372111a, 0x037216a0, 0x037216a0}, + {0x00009e04, 0x001c2020, 0x001c2020, 0x001c2020, 0x001c2020}, {0x00009e0c, 0x6c4000e2, 0x6d4000e2, 0x6d4000e2, 0x6c4000e2}, {0x00009e10, 0x7ec88d2e, 0x7ec88d2e, 0x7ec88d2e, 0x7ec88d2e}, - {0x00009e14, 0x31395d5e, 0x3139605e, 0x3139605e, 0x31395d5e}, + {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3379605e, 0x33795d5e}, {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021}, + {0x00009e3c, 0xcf946220, 0xcf946220, 0xcf946222, 0xcf946222}, {0x00009e44, 0x02321e27, 0x02321e27, 0x02291e27, 0x02291e27}, {0x00009e48, 0x5030201a, 0x5030201a, 0x50302012, 0x50302012}, {0x00009fc8, 0x0003f000, 0x0003f000, 0x0001a000, 0x0001a000}, - {0x0000a204, 0x00003fc0, 0x00003fc4, 0x00003fc4, 0x00003fc0}, + {0x0000a204, 0x00003ec0, 0x00003ec4, 0x00003ec4, 0x00003ec0}, {0x0000a208, 0x00000104, 0x00000104, 0x00000004, 0x00000004}, + {0x0000a22c, 0x07e26a2f, 0x07e26a2f, 0x01026a2f, 0x01026a2f}, {0x0000a230, 0x0000000a, 0x00000014, 0x00000016, 0x0000000b}, + {0x0000a234, 0x00000fff, 0x10000fff, 0x10000fff, 0x00000fff}, {0x0000a238, 0xffb81018, 0xffb81018, 0xffb81018, 0xffb81018}, {0x0000a250, 0x00000000, 0x00000000, 0x00000210, 0x00000108}, {0x0000a254, 0x000007d0, 0x00000fa0, 0x00001130, 0x00000898}, @@ -277,11 +256,11 @@ static const u32 ar9340_1p0_baseband_postamble[][5] = { {0x0000a288, 0x00000220, 0x00000220, 0x00000110, 0x00000110}, {0x0000a28c, 0x00011111, 0x00011111, 0x00022222, 0x00022222}, {0x0000a2c4, 0x00158d18, 0x00158d18, 0x00158d18, 0x00158d18}, - {0x0000a2d0, 0x00071981, 0x00071981, 0x00071981, 0x00071982}, - {0x0000a2d8, 0xf999a83a, 0xf999a83a, 0xf999a83a, 0xf999a83a}, + {0x0000a2d0, 0x00041983, 0x00041983, 0x00041982, 0x00041982}, + {0x0000a2d8, 0x7999a83a, 0x7999a83a, 0x7999a83a, 0x7999a83a}, {0x0000a358, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000a830, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, - {0x0000ae04, 0x00180000, 0x00180000, 0x00180000, 0x00180000}, + {0x0000ae04, 0x001c0000, 0x001c0000, 0x001c0000, 0x001c0000}, {0x0000ae18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000ae1c, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, {0x0000ae20, 0x000001b5, 0x000001b5, 0x000001ce, 0x000001ce}, @@ -289,21 +268,21 @@ 
static const u32 ar9340_1p0_baseband_postamble[][5] = { }; static const u32 ar9340_1p0_baseband_core[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x00009800, 0xafe68e30}, {0x00009804, 0xfd14e000}, {0x00009808, 0x9c0a9f6b}, {0x0000980c, 0x04900000}, - {0x00009814, 0xb280c00a}, + {0x00009814, 0x3280c00a}, {0x00009818, 0x00000000}, {0x0000981c, 0x00020028}, - {0x00009834, 0x5f3ca3de}, + {0x00009834, 0x6400a190}, {0x00009838, 0x0108ecff}, - {0x0000983c, 0x14750600}, + {0x0000983c, 0x14000600}, {0x00009880, 0x201fff00}, {0x00009884, 0x00001042}, {0x000098a4, 0x00200400}, - {0x000098b0, 0x52440bbe}, + {0x000098b0, 0x32840bbe}, {0x000098d0, 0x004b6a8e}, {0x000098d4, 0x00000820}, {0x000098dc, 0x00000000}, @@ -329,7 +308,6 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x00009e30, 0x06336f77}, {0x00009e34, 0x6af6532f}, {0x00009e38, 0x0cc80c00}, - {0x00009e3c, 0xcf946222}, {0x00009e40, 0x0d261820}, {0x00009e4c, 0x00001004}, {0x00009e50, 0x00ff03f1}, @@ -342,8 +320,6 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x0000a220, 0x00000000}, {0x0000a224, 0x00000000}, {0x0000a228, 0x10002310}, - {0x0000a22c, 0x01036a1e}, - {0x0000a234, 0x10000fff}, {0x0000a23c, 0x00000000}, {0x0000a244, 0x0c000000}, {0x0000a2a0, 0x00000001}, @@ -351,10 +327,6 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x0000a2c8, 0x00000000}, {0x0000a2cc, 0x18c43433}, {0x0000a2d4, 0x00000000}, - {0x0000a2dc, 0x00000000}, - {0x0000a2e0, 0x00000000}, - {0x0000a2e4, 0x00000000}, - {0x0000a2e8, 0x00000000}, {0x0000a2ec, 0x00000000}, {0x0000a2f0, 0x00000000}, {0x0000a2f4, 0x00000000}, @@ -385,7 +357,7 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x0000a3e8, 0x20202020}, {0x0000a3ec, 0x20202020}, {0x0000a3f0, 0x00000000}, - {0x0000a3f4, 0x00000246}, + {0x0000a3f4, 0x00000000}, {0x0000a3f8, 0x0cdbd380}, {0x0000a3fc, 0x000f0f01}, {0x0000a400, 0x8fa91f01}, @@ -402,33 +374,17 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x0000a430, 0x1ce739ce}, {0x0000a434, 0x00000000}, {0x0000a438, 0x00001801}, - {0x0000a43c, 0x00000000}, + {0x0000a43c, 0x00100000}, {0x0000a440, 0x00000000}, {0x0000a444, 0x00000000}, - {0x0000a448, 0x04000080}, + {0x0000a448, 0x05000080}, {0x0000a44c, 0x00000001}, {0x0000a450, 0x00010000}, {0x0000a458, 0x00000000}, - {0x0000a600, 0x00000000}, - {0x0000a604, 0x00000000}, - {0x0000a608, 0x00000000}, - {0x0000a60c, 0x00000000}, - {0x0000a610, 0x00000000}, - {0x0000a614, 0x00000000}, - {0x0000a618, 0x00000000}, - {0x0000a61c, 0x00000000}, - {0x0000a620, 0x00000000}, - {0x0000a624, 0x00000000}, - {0x0000a628, 0x00000000}, - {0x0000a62c, 0x00000000}, - {0x0000a630, 0x00000000}, - {0x0000a634, 0x00000000}, - {0x0000a638, 0x00000000}, - {0x0000a63c, 0x00000000}, {0x0000a640, 0x00000000}, {0x0000a644, 0x3fad9d74}, {0x0000a648, 0x0048060a}, - {0x0000a64c, 0x00000637}, + {0x0000a64c, 0x00003c37}, {0x0000a670, 0x03020100}, {0x0000a674, 0x09080504}, {0x0000a678, 0x0d0c0b0a}, @@ -451,10 +407,6 @@ static const u32 ar9340_1p0_baseband_core[][2] = { {0x0000a8f4, 0x00000000}, {0x0000b2d0, 0x00000080}, {0x0000b2d4, 0x00000000}, - {0x0000b2dc, 0x00000000}, - {0x0000b2e0, 0x00000000}, - {0x0000b2e4, 0x00000000}, - {0x0000b2e8, 0x00000000}, {0x0000b2ec, 0x00000000}, {0x0000b2f0, 0x00000000}, {0x0000b2f4, 0x00000000}, @@ -465,80 +417,108 @@ static const u32 ar9340_1p0_baseband_core[][2] = { }; static const u32 ar9340Modes_high_power_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 
0x03aaa352}, + {0x0000a2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000a2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, + {0x0000a618, 0x01404501, 0x01404501, 0x01404501, 0x01404501}, + {0x0000a61c, 0x02008802, 0x02008802, 0x02008501, 0x02008501}, + {0x0000a620, 0x0300cc03, 0x0300cc03, 0x0280ca03, 0x0280ca03}, + {0x0000a624, 0x0300cc03, 0x0300cc03, 0x03010c04, 0x03010c04}, + {0x0000a628, 0x0300cc03, 0x0300cc03, 0x04014c04, 0x04014c04}, + {0x0000a62c, 0x03810c03, 0x03810c03, 0x04015005, 0x04015005}, + {0x0000a630, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a634, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a638, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a63c, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000b2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, + {0x0000b2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000b2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x0000a410, 0x000050d8, 0x000050d8, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00002220, 0x00002220, 0x00000000, 0x00000000}, - {0x0000a504, 0x04002222, 0x04002222, 0x04000002, 0x04000002}, - {0x0000a508, 0x09002421, 0x09002421, 0x08000004, 0x08000004}, - {0x0000a50c, 0x0d002621, 0x0d002621, 0x0b000200, 0x0b000200}, - {0x0000a510, 0x13004620, 0x13004620, 0x0f000202, 0x0f000202}, - {0x0000a514, 0x19004a20, 0x19004a20, 0x11000400, 0x11000400}, - {0x0000a518, 0x1d004e20, 0x1d004e20, 0x15000402, 0x15000402}, - {0x0000a51c, 0x21005420, 0x21005420, 0x19000404, 0x19000404}, - {0x0000a520, 0x26005e20, 0x26005e20, 0x1b000603, 0x1b000603}, - {0x0000a524, 0x2b005e40, 0x2b005e40, 0x1f000a02, 0x1f000a02}, - {0x0000a528, 0x2f005e42, 0x2f005e42, 0x23000a04, 0x23000a04}, - {0x0000a52c, 0x33005e44, 0x33005e44, 0x26000a20, 0x26000a20}, - {0x0000a530, 0x38005e65, 0x38005e65, 0x2a000e20, 0x2a000e20}, - {0x0000a534, 0x3c005e69, 0x3c005e69, 0x2e000e22, 0x2e000e22}, - {0x0000a538, 0x40005e6b, 0x40005e6b, 0x31000e24, 0x31000e24}, - {0x0000a53c, 0x44005e6d, 0x44005e6d, 0x34001640, 0x34001640}, - {0x0000a540, 0x49005e72, 0x49005e72, 0x38001660, 0x38001660}, - {0x0000a544, 0x4e005eb2, 0x4e005eb2, 0x3b001861, 0x3b001861}, - {0x0000a548, 0x53005f12, 0x53005f12, 0x3e001a81, 0x3e001a81}, - {0x0000a54c, 0x59025eb5, 0x59025eb5, 0x42001a83, 0x42001a83}, - {0x0000a550, 0x5e025f12, 0x5e025f12, 0x44001c84, 0x44001c84}, - {0x0000a554, 0x61027f12, 0x61027f12, 0x48001ce3, 0x48001ce3}, - {0x0000a558, 0x6702bf12, 0x6702bf12, 0x4c001ce5, 0x4c001ce5}, - {0x0000a55c, 0x6b02bf14, 0x6b02bf14, 0x50001ce9, 0x50001ce9}, - {0x0000a560, 0x6f02bf16, 0x6f02bf16, 0x54001ceb, 0x54001ceb}, - {0x0000a564, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a568, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a56c, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a570, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a574, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a578, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a57c, 
0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, + {0x0000a504, 0x04002222, 0x04002222, 0x02000001, 0x02000001}, + {0x0000a508, 0x09002421, 0x09002421, 0x05000003, 0x05000003}, + {0x0000a50c, 0x0d002621, 0x0d002621, 0x0a000005, 0x0a000005}, + {0x0000a510, 0x13004620, 0x13004620, 0x0e000201, 0x0e000201}, + {0x0000a514, 0x19004a20, 0x19004a20, 0x11000203, 0x11000203}, + {0x0000a518, 0x1d004e20, 0x1d004e20, 0x14000401, 0x14000401}, + {0x0000a51c, 0x21005420, 0x21005420, 0x18000403, 0x18000403}, + {0x0000a520, 0x26005e20, 0x26005e20, 0x1b000602, 0x1b000602}, + {0x0000a524, 0x2b005e40, 0x2b005e40, 0x1f000802, 0x1f000802}, + {0x0000a528, 0x2f005e42, 0x2f005e42, 0x21000620, 0x21000620}, + {0x0000a52c, 0x33005e44, 0x33005e44, 0x25000820, 0x25000820}, + {0x0000a530, 0x38005e65, 0x38005e65, 0x29000822, 0x29000822}, + {0x0000a534, 0x3c005e69, 0x3c005e69, 0x2d000824, 0x2d000824}, + {0x0000a538, 0x40005e6b, 0x40005e6b, 0x30000828, 0x30000828}, + {0x0000a53c, 0x44005e6d, 0x44005e6d, 0x3400082a, 0x3400082a}, + {0x0000a540, 0x49005e72, 0x49005e72, 0x38000849, 0x38000849}, + {0x0000a544, 0x4e005eb2, 0x4e005eb2, 0x3b000a2c, 0x3b000a2c}, + {0x0000a548, 0x53005f12, 0x53005f12, 0x3e000e2b, 0x3e000e2b}, + {0x0000a54c, 0x59025eb5, 0x59025eb5, 0x42000e2d, 0x42000e2d}, + {0x0000a550, 0x5e025f12, 0x5e025f12, 0x4500124a, 0x4500124a}, + {0x0000a554, 0x61027f12, 0x61027f12, 0x4900124c, 0x4900124c}, + {0x0000a558, 0x6702bf12, 0x6702bf12, 0x4c00126c, 0x4c00126c}, + {0x0000a55c, 0x6b02bf14, 0x6b02bf14, 0x4f00128c, 0x4f00128c}, + {0x0000a560, 0x6f02bf16, 0x6f02bf16, 0x52001290, 0x52001290}, + {0x0000a564, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a568, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a56c, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a570, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a574, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a578, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, + {0x0000a57c, 0x6f02bf16, 0x6f02bf16, 0x56001292, 0x56001292}, {0x0000a580, 0x00802220, 0x00802220, 0x00800000, 0x00800000}, - {0x0000a584, 0x04802222, 0x04802222, 0x04800002, 0x04800002}, - {0x0000a588, 0x09802421, 0x09802421, 0x08800004, 0x08800004}, - {0x0000a58c, 0x0d802621, 0x0d802621, 0x0b800200, 0x0b800200}, - {0x0000a590, 0x13804620, 0x13804620, 0x0f800202, 0x0f800202}, - {0x0000a594, 0x19804a20, 0x19804a20, 0x11800400, 0x11800400}, - {0x0000a598, 0x1d804e20, 0x1d804e20, 0x15800402, 0x15800402}, - {0x0000a59c, 0x21805420, 0x21805420, 0x19800404, 0x19800404}, - {0x0000a5a0, 0x26805e20, 0x26805e20, 0x1b800603, 0x1b800603}, - {0x0000a5a4, 0x2b805e40, 0x2b805e40, 0x1f800a02, 0x1f800a02}, - {0x0000a5a8, 0x2f805e42, 0x2f805e42, 0x23800a04, 0x23800a04}, - {0x0000a5ac, 0x33805e44, 0x33805e44, 0x26800a20, 0x26800a20}, - {0x0000a5b0, 0x38805e65, 0x38805e65, 0x2a800e20, 0x2a800e20}, - {0x0000a5b4, 0x3c805e69, 0x3c805e69, 0x2e800e22, 0x2e800e22}, - {0x0000a5b8, 0x40805e6b, 0x40805e6b, 0x31800e24, 0x31800e24}, - {0x0000a5bc, 0x44805e6d, 0x44805e6d, 0x34801640, 0x34801640}, - {0x0000a5c0, 0x49805e72, 0x49805e72, 0x38801660, 0x38801660}, - {0x0000a5c4, 0x4e805eb2, 0x4e805eb2, 0x3b801861, 0x3b801861}, - {0x0000a5c8, 0x53805f12, 0x53805f12, 0x3e801a81, 0x3e801a81}, - {0x0000a5cc, 0x59825eb2, 0x59825eb2, 0x42801a83, 0x42801a83}, - {0x0000a5d0, 0x5e825f12, 0x5e825f12, 0x44801c84, 0x44801c84}, - {0x0000a5d4, 0x61827f12, 0x61827f12, 0x48801ce3, 0x48801ce3}, - {0x0000a5d8, 0x6782bf12, 0x6782bf12, 0x4c801ce5, 0x4c801ce5}, - {0x0000a5dc, 0x6b82bf14, 0x6b82bf14, 0x50801ce9, 
0x50801ce9}, - {0x0000a5e0, 0x6f82bf16, 0x6f82bf16, 0x54801ceb, 0x54801ceb}, - {0x0000a5e4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5e8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5ec, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f0, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5fc, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x00016044, 0x056db2db, 0x056db2db, 0x056db2db, 0x056db2db}, + {0x0000a584, 0x04802222, 0x04802222, 0x02800001, 0x02800001}, + {0x0000a588, 0x09802421, 0x09802421, 0x05800003, 0x05800003}, + {0x0000a58c, 0x0d802621, 0x0d802621, 0x0a800005, 0x0a800005}, + {0x0000a590, 0x13804620, 0x13804620, 0x0e800201, 0x0e800201}, + {0x0000a594, 0x19804a20, 0x19804a20, 0x11800203, 0x11800203}, + {0x0000a598, 0x1d804e20, 0x1d804e20, 0x14800401, 0x14800401}, + {0x0000a59c, 0x21805420, 0x21805420, 0x18800403, 0x18800403}, + {0x0000a5a0, 0x26805e20, 0x26805e20, 0x1b800602, 0x1b800602}, + {0x0000a5a4, 0x2b805e40, 0x2b805e40, 0x1f800802, 0x1f800802}, + {0x0000a5a8, 0x2f805e42, 0x2f805e42, 0x21800620, 0x21800620}, + {0x0000a5ac, 0x33805e44, 0x33805e44, 0x25800820, 0x25800820}, + {0x0000a5b0, 0x38805e65, 0x38805e65, 0x29800822, 0x29800822}, + {0x0000a5b4, 0x3c805e69, 0x3c805e69, 0x2d800824, 0x2d800824}, + {0x0000a5b8, 0x40805e6b, 0x40805e6b, 0x30800828, 0x30800828}, + {0x0000a5bc, 0x44805e6d, 0x44805e6d, 0x3480082a, 0x3480082a}, + {0x0000a5c0, 0x49805e72, 0x49805e72, 0x38800849, 0x38800849}, + {0x0000a5c4, 0x4e805eb2, 0x4e805eb2, 0x3b800a2c, 0x3b800a2c}, + {0x0000a5c8, 0x53805f12, 0x53805f12, 0x3e800e2b, 0x3e800e2b}, + {0x0000a5cc, 0x59825eb2, 0x59825eb2, 0x42800e2d, 0x42800e2d}, + {0x0000a5d0, 0x5e825f12, 0x5e825f12, 0x4580124a, 0x4580124a}, + {0x0000a5d4, 0x61827f12, 0x61827f12, 0x4980124c, 0x4980124c}, + {0x0000a5d8, 0x6782bf12, 0x6782bf12, 0x4c80126c, 0x4c80126c}, + {0x0000a5dc, 0x6b82bf14, 0x6b82bf14, 0x4f80128c, 0x4f80128c}, + {0x0000a5e0, 0x6f82bf16, 0x6f82bf16, 0x52801290, 0x52801290}, + {0x0000a5e4, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5e8, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5ec, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5f0, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5f4, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5f8, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x0000a5fc, 0x6f82bf16, 0x6f82bf16, 0x56801292, 0x56801292}, + {0x00016044, 0x056db2db, 0x056db2db, 0x022492db, 0x022492db}, {0x00016048, 0x24925266, 0x24925266, 0x24925266, 0x24925266}, - {0x00016444, 0x056db2db, 0x056db2db, 0x056db2db, 0x056db2db}, + {0x00016444, 0x056db2db, 0x056db2db, 0x022492db, 0x022492db}, {0x00016448, 0x24925266, 0x24925266, 0x24925266, 0x24925266}, }; static const u32 ar9340Modes_high_ob_db_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0x01feee00, 0x01feee00, 0x03aaa352, 0x03aaa352}, + {0x0000a2e0, 0x0000f000, 0x0000f000, 0x03ccc584, 0x03ccc584}, + {0x0000a2e4, 0x01ff0000, 0x01ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x0000a410, 0x000050d8, 0x000050d8, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00002220, 0x00002220, 0x00000000, 0x00000000}, {0x0000a504, 0x04002222, 0x04002222, 0x04000002, 0x04000002}, @@ -559,7 +539,7 @@ static const u32 
ar9340Modes_high_ob_db_tx_gain_table_1p0[][5] = { {0x0000a540, 0x49005e72, 0x49005e72, 0x38001660, 0x38001660}, {0x0000a544, 0x4e005eb2, 0x4e005eb2, 0x3b001861, 0x3b001861}, {0x0000a548, 0x53005f12, 0x53005f12, 0x3e001a81, 0x3e001a81}, - {0x0000a54c, 0x59025eb5, 0x59025eb5, 0x42001a83, 0x42001a83}, + {0x0000a54c, 0x59025eb2, 0x59025eb2, 0x42001a83, 0x42001a83}, {0x0000a550, 0x5e025f12, 0x5e025f12, 0x44001c84, 0x44001c84}, {0x0000a554, 0x61027f12, 0x61027f12, 0x48001ce3, 0x48001ce3}, {0x0000a558, 0x6702bf12, 0x6702bf12, 0x4c001ce5, 0x4c001ce5}, @@ -604,13 +584,43 @@ static const u32 ar9340Modes_high_ob_db_tx_gain_table_1p0[][5] = { {0x0000a5f4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, {0x0000a5f8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, {0x0000a5fc, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00804000, 0x00804000, 0x00000000, 0x00000000}, + {0x0000a614, 0x00804201, 0x00804201, 0x01404000, 0x01404000}, + {0x0000a618, 0x0280c802, 0x0280c802, 0x01404501, 0x01404501}, + {0x0000a61c, 0x0280ca03, 0x0280ca03, 0x02008501, 0x02008501}, + {0x0000a620, 0x04c15104, 0x04c15104, 0x0280ca03, 0x0280ca03}, + {0x0000a624, 0x04c15305, 0x04c15305, 0x03010c04, 0x03010c04}, + {0x0000a628, 0x04c15305, 0x04c15305, 0x04014c04, 0x04014c04}, + {0x0000a62c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a630, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a634, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a638, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a63c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000b2dc, 0x01feee00, 0x01feee00, 0x03aaa352, 0x03aaa352}, + {0x0000b2e0, 0x0000f000, 0x0000f000, 0x03ccc584, 0x03ccc584}, + {0x0000b2e4, 0x01ff0000, 0x01ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x00016044, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4}, - {0x00016048, 0x8e481266, 0x8e481266, 0x8e481266, 0x8e481266}, + {0x00016048, 0x8e481666, 0x8e481666, 0x8e481266, 0x8e481266}, + {0x00016280, 0x01000015, 0x01000015, 0x01001015, 0x01001015}, {0x00016444, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4}, - {0x00016448, 0x8e481266, 0x8e481266, 0x8e481266, 0x8e481266}, + {0x00016448, 0x8e481666, 0x8e481666, 0x8e481266, 0x8e481266}, }; + static const u32 ar9340Modes_ub124_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8005, 0xd00a8005}, + {0x00009820, 0x206a022e, 0x206a022e, 0x206a00ae, 0x206a00ae}, + {0x00009830, 0x0000059c, 0x0000059c, 0x0000059c, 0x0000059c}, + {0x00009e10, 0x7ec88d2e, 0x7ec88d2e, 0x7ec82d2e, 0x7ec82d2e}, + {0x0000a2dc, 0xfef5d402, 0xfef5d402, 0xfdab5b52, 0xfdab5b52}, + {0x0000a2e0, 0xfe896600, 0xfe896600, 0xfd339c84, 0xfd339c84}, + {0x0000a2e4, 0xff01f800, 0xff01f800, 0xfec3e000, 0xfec3e000}, + {0x0000a2e8, 0xfffe0000, 0xfffe0000, 0xfffc0000, 0xfffc0000}, {0x0000a410, 0x000050d8, 0x000050d8, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00002220, 0x00002220, 0x00000000, 0x00000000}, {0x0000a504, 0x04002222, 0x04002222, 0x04000002, 0x04000002}, @@ -676,15 +686,34 @@ static const u32 ar9340Modes_ub124_tx_gain_table_1p0[][5] = { {0x0000a5f4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, 
{0x0000a5f8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, {0x0000a5fc, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x00016044, 0x036db2db, 0x036db2db, 0x036db2db, 0x036db2db}, - {0x00016048, 0x69b65266, 0x69b65266, 0x69b65266, 0x69b65266}, - {0x00016444, 0x036db2db, 0x036db2db, 0x036db2db, 0x036db2db}, - {0x00016448, 0x69b65266, 0x69b65266, 0x69b65266, 0x69b65266}, + {0x00016044, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4}, + {0x00016048, 0x8e480086, 0x8e480086, 0x8e480086, 0x8e480086}, + {0x00016444, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4, 0x03b6d2e4}, + {0x00016448, 0x8e480086, 0x8e480086, 0x8e480086, 0x8e480086}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00804000, 0x00804000, 0x00000000, 0x00000000}, + {0x0000a614, 0x00804201, 0x00804201, 0x01404000, 0x01404000}, + {0x0000a618, 0x0280c802, 0x0280c802, 0x01404501, 0x01404501}, + {0x0000a61c, 0x0280ca03, 0x0280ca03, 0x02008501, 0x02008501}, + {0x0000a620, 0x04c15104, 0x04c15104, 0x0280ca03, 0x0280ca03}, + {0x0000a624, 0x04c15305, 0x04c15305, 0x03010c04, 0x03010c04}, + {0x0000a628, 0x04c15305, 0x04c15305, 0x04014c04, 0x04014c04}, + {0x0000a62c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a630, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a634, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a638, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000a63c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, + {0x0000b2dc, 0xfef5d402, 0xfef5d402, 0xfdab5b52, 0xfdab5b52}, + {0x0000b2e0, 0xfe896600, 0xfe896600, 0xfd339c84, 0xfd339c84}, + {0x0000b2e4, 0xff01f800, 0xff01f800, 0xfec3e000, 0xfec3e000}, + {0x0000b2e8, 0xfffe0000, 0xfffe0000, 0xfffc0000, 0xfffc0000}, }; - static const u32 ar9340Common_rx_gain_table_1p0[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x0000a000, 0x00010000}, {0x0000a004, 0x00030002}, {0x0000a008, 0x00050004}, @@ -845,14 +874,14 @@ static const u32 ar9340Common_rx_gain_table_1p0[][2] = { {0x0000b074, 0x00000000}, {0x0000b078, 0x00000000}, {0x0000b07c, 0x00000000}, - {0x0000b080, 0x32323232}, - {0x0000b084, 0x2f2f3232}, - {0x0000b088, 0x23282a2d}, - {0x0000b08c, 0x1c1e2123}, - {0x0000b090, 0x14171919}, - {0x0000b094, 0x0e0e1214}, - {0x0000b098, 0x03050707}, - {0x0000b09c, 0x00030303}, + {0x0000b080, 0x23232323}, + {0x0000b084, 0x21232323}, + {0x0000b088, 0x19191c1e}, + {0x0000b08c, 0x12141417}, + {0x0000b090, 0x07070e0e}, + {0x0000b094, 0x03030305}, + {0x0000b098, 0x00000003}, + {0x0000b09c, 0x00000000}, {0x0000b0a0, 0x00000000}, {0x0000b0a4, 0x00000000}, {0x0000b0a8, 0x00000000}, @@ -944,7 +973,11 @@ static const u32 ar9340Common_rx_gain_table_1p0[][2] = { }; static const u32 ar9340Modes_low_ob_db_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, + {0x0000a2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000a2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000a504, 0x06000003, 0x06000003, 0x04000002, 0x04000002}, @@ -952,8 +985,8 @@ static const u32 ar9340Modes_low_ob_db_tx_gain_table_1p0[][5] = { 
{0x0000a50c, 0x10000023, 0x10000023, 0x0b000200, 0x0b000200}, {0x0000a510, 0x16000220, 0x16000220, 0x0f000202, 0x0f000202}, {0x0000a514, 0x1c000223, 0x1c000223, 0x12000400, 0x12000400}, - {0x0000a518, 0x21020220, 0x21020220, 0x16000402, 0x16000402}, - {0x0000a51c, 0x27020223, 0x27020223, 0x19000404, 0x19000404}, + {0x0000a518, 0x21002220, 0x21002220, 0x16000402, 0x16000402}, + {0x0000a51c, 0x27002223, 0x27002223, 0x19000404, 0x19000404}, {0x0000a520, 0x2b022220, 0x2b022220, 0x1c000603, 0x1c000603}, {0x0000a524, 0x2f022222, 0x2f022222, 0x21000a02, 0x21000a02}, {0x0000a528, 0x34022225, 0x34022225, 0x25000a04, 0x25000a04}, @@ -965,19 +998,19 @@ static const u32 ar9340Modes_low_ob_db_tx_gain_table_1p0[][5] = { {0x0000a540, 0x4e02246c, 0x4e02246c, 0x3c001660, 0x3c001660}, {0x0000a544, 0x5302266c, 0x5302266c, 0x3f001861, 0x3f001861}, {0x0000a548, 0x5702286c, 0x5702286c, 0x43001a81, 0x43001a81}, - {0x0000a54c, 0x5c04286b, 0x5c04286b, 0x47001a83, 0x47001a83}, - {0x0000a550, 0x61042a6c, 0x61042a6c, 0x4a001c84, 0x4a001c84}, - {0x0000a554, 0x66062a6c, 0x66062a6c, 0x4e001ce3, 0x4e001ce3}, - {0x0000a558, 0x6b062e6c, 0x6b062e6c, 0x52001ce5, 0x52001ce5}, - {0x0000a55c, 0x7006308c, 0x7006308c, 0x56001ce9, 0x56001ce9}, - {0x0000a560, 0x730a308a, 0x730a308a, 0x5a001ceb, 0x5a001ceb}, - {0x0000a564, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a568, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a56c, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a570, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a574, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a578, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, - {0x0000a57c, 0x770a308c, 0x770a308c, 0x5d001eec, 0x5d001eec}, + {0x0000a54c, 0x5c02486b, 0x5c02486b, 0x47001a83, 0x47001a83}, + {0x0000a550, 0x61024a6c, 0x61024a6c, 0x4a001c84, 0x4a001c84}, + {0x0000a554, 0x66026a6c, 0x66026a6c, 0x4e001ce3, 0x4e001ce3}, + {0x0000a558, 0x6b026e6c, 0x6b026e6c, 0x52001ce5, 0x52001ce5}, + {0x0000a55c, 0x7002708c, 0x7002708c, 0x56001ce9, 0x56001ce9}, + {0x0000a560, 0x7302b08a, 0x7302b08a, 0x5a001ceb, 0x5a001ceb}, + {0x0000a564, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a568, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a56c, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a570, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a574, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a578, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, + {0x0000a57c, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, {0x0000a580, 0x00800000, 0x00800000, 0x00800000, 0x00800000}, {0x0000a584, 0x06800003, 0x06800003, 0x04800002, 0x04800002}, {0x0000a588, 0x0a800020, 0x0a800020, 0x08800004, 0x08800004}, @@ -1010,14 +1043,40 @@ static const u32 ar9340Modes_low_ob_db_tx_gain_table_1p0[][5] = { {0x0000a5f4, 0x778a308c, 0x778a308c, 0x5d801eec, 0x5d801eec}, {0x0000a5f8, 0x778a308c, 0x778a308c, 0x5d801eec, 0x5d801eec}, {0x0000a5fc, 0x778a308c, 0x778a308c, 0x5d801eec, 0x5d801eec}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, + {0x0000a618, 0x01404501, 0x01404501, 0x01404501, 0x01404501}, + {0x0000a61c, 0x02008802, 0x02008802, 0x02008501, 0x02008501}, + {0x0000a620, 
0x0300cc03, 0x0300cc03, 0x0280ca03, 0x0280ca03}, + {0x0000a624, 0x0300cc03, 0x0300cc03, 0x03010c04, 0x03010c04}, + {0x0000a628, 0x0300cc03, 0x0300cc03, 0x04014c04, 0x04014c04}, + {0x0000a62c, 0x03810c03, 0x03810c03, 0x04015005, 0x04015005}, + {0x0000a630, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a634, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a638, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a63c, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000b2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, + {0x0000b2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000b2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x00016044, 0x056db2db, 0x056db2db, 0x056db2db, 0x056db2db}, - {0x00016048, 0x24925266, 0x24925266, 0x24925266, 0x24925266}, + {0x00016048, 0x24925666, 0x24925666, 0x24925266, 0x24925266}, + {0x00016280, 0x01000015, 0x01000015, 0x01001015, 0x01001015}, + {0x00016288, 0xf0318000, 0xf0318000, 0xf0318000, 0xf0318000}, {0x00016444, 0x056db2db, 0x056db2db, 0x056db2db, 0x056db2db}, - {0x00016448, 0x24925266, 0x24925266, 0x24925266, 0x24925266}, + {0x00016448, 0x24925666, 0x24925666, 0x24925266, 0x24925266}, }; static const u32 ar9340Modes_mixed_ob_db_tx_gain_table_1p0[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, + {0x0000a2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000a2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d9, 0x000050d9}, {0x0000a500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000a504, 0x06000003, 0x06000003, 0x04000002, 0x04000002}, @@ -1025,8 +1084,8 @@ static const u32 ar9340Modes_mixed_ob_db_tx_gain_table_1p0[][5] = { {0x0000a50c, 0x10000023, 0x10000023, 0x0b000200, 0x0b000200}, {0x0000a510, 0x16000220, 0x16000220, 0x0f000202, 0x0f000202}, {0x0000a514, 0x1c000223, 0x1c000223, 0x11000400, 0x11000400}, - {0x0000a518, 0x21020220, 0x21020220, 0x15000402, 0x15000402}, - {0x0000a51c, 0x27020223, 0x27020223, 0x19000404, 0x19000404}, + {0x0000a518, 0x21002220, 0x21002220, 0x15000402, 0x15000402}, + {0x0000a51c, 0x27002223, 0x27002223, 0x19000404, 0x19000404}, {0x0000a520, 0x2b022220, 0x2b022220, 0x1b000603, 0x1b000603}, {0x0000a524, 0x2f022222, 0x2f022222, 0x1f000a02, 0x1f000a02}, {0x0000a528, 0x34022225, 0x34022225, 0x23000a04, 0x23000a04}, @@ -1038,19 +1097,19 @@ static const u32 ar9340Modes_mixed_ob_db_tx_gain_table_1p0[][5] = { {0x0000a540, 0x4e02246c, 0x4e02246c, 0x38001660, 0x38001660}, {0x0000a544, 0x5302266c, 0x5302266c, 0x3b001861, 0x3b001861}, {0x0000a548, 0x5702286c, 0x5702286c, 0x3e001a81, 0x3e001a81}, - {0x0000a54c, 0x5c04286b, 0x5c04286b, 0x42001a83, 0x42001a83}, - {0x0000a550, 0x61042a6c, 0x61042a6c, 0x44001c84, 0x44001c84}, - {0x0000a554, 0x66062a6c, 0x66062a6c, 0x48001ce3, 0x48001ce3}, - {0x0000a558, 0x6b062e6c, 0x6b062e6c, 0x4c001ce5, 0x4c001ce5}, - {0x0000a55c, 0x7006308c, 0x7006308c, 0x50001ce9, 0x50001ce9}, - {0x0000a560, 0x730a308a, 0x730a308a, 0x54001ceb, 0x54001ceb}, - {0x0000a564, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, - {0x0000a568, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, - {0x0000a56c, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, - {0x0000a570, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, - {0x0000a574, 0x770a308c, 0x770a308c, 0x56001eec, 
0x56001eec}, - {0x0000a578, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, - {0x0000a57c, 0x770a308c, 0x770a308c, 0x56001eec, 0x56001eec}, + {0x0000a54c, 0x5c02486b, 0x5c02486b, 0x42001a83, 0x42001a83}, + {0x0000a550, 0x61024a6c, 0x61024a6c, 0x44001c84, 0x44001c84}, + {0x0000a554, 0x66026a6c, 0x66026a6c, 0x48001ce3, 0x48001ce3}, + {0x0000a558, 0x6b026e6c, 0x6b026e6c, 0x4c001ce5, 0x4c001ce5}, + {0x0000a55c, 0x7002708c, 0x7002708c, 0x50001ce9, 0x50001ce9}, + {0x0000a560, 0x7302b08a, 0x7302b08a, 0x54001ceb, 0x54001ceb}, + {0x0000a564, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a568, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a56c, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a570, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a574, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a578, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, + {0x0000a57c, 0x7702b08c, 0x7702b08c, 0x56001eec, 0x56001eec}, {0x0000a580, 0x00800000, 0x00800000, 0x00800000, 0x00800000}, {0x0000a584, 0x06800003, 0x06800003, 0x04800002, 0x04800002}, {0x0000a588, 0x0a800020, 0x0a800020, 0x08800004, 0x08800004}, @@ -1083,14 +1142,36 @@ static const u32 ar9340Modes_mixed_ob_db_tx_gain_table_1p0[][5] = { {0x0000a5f4, 0x778a308c, 0x778a308c, 0x56801eec, 0x56801eec}, {0x0000a5f8, 0x778a308c, 0x778a308c, 0x56801eec, 0x56801eec}, {0x0000a5fc, 0x778a308c, 0x778a308c, 0x56801eec, 0x56801eec}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a614, 0x01404000, 0x01404000, 0x01404000, 0x01404000}, + {0x0000a618, 0x01404501, 0x01404501, 0x01404501, 0x01404501}, + {0x0000a61c, 0x02008802, 0x02008802, 0x02008501, 0x02008501}, + {0x0000a620, 0x0300cc03, 0x0300cc03, 0x0280ca03, 0x0280ca03}, + {0x0000a624, 0x0300cc03, 0x0300cc03, 0x03010c04, 0x03010c04}, + {0x0000a628, 0x0300cc03, 0x0300cc03, 0x04014c04, 0x04014c04}, + {0x0000a62c, 0x03810c03, 0x03810c03, 0x04015005, 0x04015005}, + {0x0000a630, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a634, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a638, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000a63c, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, + {0x0000b2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, + {0x0000b2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, + {0x0000b2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, + {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, {0x00016044, 0x056db2db, 0x056db2db, 0x03b6d2e4, 0x03b6d2e4}, - {0x00016048, 0x24927266, 0x24927266, 0x8e483266, 0x8e483266}, + {0x00016048, 0x24925666, 0x24925666, 0x8e481266, 0x8e481266}, + {0x00016280, 0x01000015, 0x01000015, 0x01001015, 0x01001015}, + {0x00016288, 0x30318000, 0x30318000, 0x00318000, 0x00318000}, {0x00016444, 0x056db2db, 0x056db2db, 0x03b6d2e4, 0x03b6d2e4}, - {0x00016448, 0x24927266, 0x24927266, 0x8e482266, 0x8e482266}, + {0x00016448, 0x24925666, 0x24925666, 0x8e481266, 0x8e481266}, }; static const u32 ar9340_1p0_mac_core[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x00000008, 0x00000000}, {0x00000030, 0x00020085}, {0x00000034, 0x00000005}, @@ -1119,6 +1200,7 @@ static const u32 ar9340_1p0_mac_core[][2] = { {0x00008004, 0x00000000}, {0x00008008, 0x00000000}, {0x0000800c, 0x00000000}, + 
{0x00008010, 0x00080800}, {0x00008018, 0x00000000}, {0x00008020, 0x00000000}, {0x00008038, 0x00000000}, @@ -1146,7 +1228,7 @@ static const u32 ar9340_1p0_mac_core[][2] = { {0x000080bc, 0x00000000}, {0x000080c0, 0x2a800000}, {0x000080c4, 0x06900168}, - {0x000080c8, 0x13881c20}, + {0x000080c8, 0x13881c22}, {0x000080cc, 0x01f40000}, {0x000080d0, 0x00252500}, {0x000080d4, 0x00a00000}, @@ -1250,276 +1332,17 @@ static const u32 ar9340_1p0_mac_core[][2] = { {0x000083c4, 0x00000000}, {0x000083c8, 0x00000000}, {0x000083cc, 0x00000200}, - {0x000083d0, 0x000301ff}, + {0x000083d0, 0x000101ff}, }; -static const u32 ar9340Common_wo_xlna_rx_gain_table_1p0[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00010000}, - {0x0000a004, 0x00030002}, - {0x0000a008, 0x00050004}, - {0x0000a00c, 0x00810080}, - {0x0000a010, 0x00830082}, - {0x0000a014, 0x01810180}, - {0x0000a018, 0x01830182}, - {0x0000a01c, 0x01850184}, - {0x0000a020, 0x01890188}, - {0x0000a024, 0x018b018a}, - {0x0000a028, 0x018d018c}, - {0x0000a02c, 0x03820190}, - {0x0000a030, 0x03840383}, - {0x0000a034, 0x03880385}, - {0x0000a038, 0x038a0389}, - {0x0000a03c, 0x038c038b}, - {0x0000a040, 0x0390038d}, - {0x0000a044, 0x03920391}, - {0x0000a048, 0x03940393}, - {0x0000a04c, 0x03960395}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - {0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - {0x0000a07c, 0x00000000}, - {0x0000a080, 0x29292929}, - {0x0000a084, 0x29292929}, - {0x0000a088, 0x29292929}, - {0x0000a08c, 0x29292929}, - {0x0000a090, 0x22292929}, - {0x0000a094, 0x1d1d2222}, - {0x0000a098, 0x0c111117}, - {0x0000a09c, 0x00030303}, - {0x0000a0a0, 0x00000000}, - {0x0000a0a4, 0x00000000}, - {0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x001f0000}, - {0x0000a0c4, 0x01000101}, - {0x0000a0c8, 0x011e011f}, - {0x0000a0cc, 0x011c011d}, - {0x0000a0d0, 0x02030204}, - {0x0000a0d4, 0x02010202}, - {0x0000a0d8, 0x021f0200}, - {0x0000a0dc, 0x0302021e}, - {0x0000a0e0, 0x03000301}, - {0x0000a0e4, 0x031e031f}, - {0x0000a0e8, 0x0402031d}, - {0x0000a0ec, 0x04000401}, - {0x0000a0f0, 0x041e041f}, - {0x0000a0f4, 0x0502041d}, - {0x0000a0f8, 0x05000501}, - {0x0000a0fc, 0x051e051f}, - {0x0000a100, 0x06010602}, - {0x0000a104, 0x061f0600}, - {0x0000a108, 0x061d061e}, - {0x0000a10c, 0x07020703}, - {0x0000a110, 0x07000701}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x01000101}, - {0x0000a148, 0x011e011f}, - {0x0000a14c, 0x011c011d}, - {0x0000a150, 0x02030204}, - {0x0000a154, 0x02010202}, - {0x0000a158, 0x021f0200}, - {0x0000a15c, 0x0302021e}, - {0x0000a160, 0x03000301}, - {0x0000a164, 0x031e031f}, - {0x0000a168, 0x0402031d}, - {0x0000a16c, 0x04000401}, - {0x0000a170, 0x041e041f}, - {0x0000a174, 0x0502041d}, - {0x0000a178, 0x05000501}, - {0x0000a17c, 0x051e051f}, - {0x0000a180, 0x06010602}, - {0x0000a184, 0x061f0600}, - {0x0000a188, 0x061d061e}, - {0x0000a18c, 0x07020703}, - {0x0000a190, 0x07000701}, - {0x0000a194, 0x00000000}, - {0x0000a198, 
0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, - {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000196}, - {0x0000b000, 0x00010000}, - {0x0000b004, 0x00030002}, - {0x0000b008, 0x00050004}, - {0x0000b00c, 0x00810080}, - {0x0000b010, 0x00830082}, - {0x0000b014, 0x01810180}, - {0x0000b018, 0x01830182}, - {0x0000b01c, 0x01850184}, - {0x0000b020, 0x02810280}, - {0x0000b024, 0x02830282}, - {0x0000b028, 0x02850284}, - {0x0000b02c, 0x02890288}, - {0x0000b030, 0x028b028a}, - {0x0000b034, 0x0388028c}, - {0x0000b038, 0x038a0389}, - {0x0000b03c, 0x038c038b}, - {0x0000b040, 0x0390038d}, - {0x0000b044, 0x03920391}, - {0x0000b048, 0x03940393}, - {0x0000b04c, 0x03960395}, - {0x0000b050, 0x00000000}, - {0x0000b054, 0x00000000}, - {0x0000b058, 0x00000000}, - {0x0000b05c, 0x00000000}, - {0x0000b060, 0x00000000}, - {0x0000b064, 0x00000000}, - {0x0000b068, 0x00000000}, - {0x0000b06c, 0x00000000}, - {0x0000b070, 0x00000000}, - {0x0000b074, 0x00000000}, - {0x0000b078, 0x00000000}, - {0x0000b07c, 0x00000000}, - {0x0000b080, 0x32323232}, - {0x0000b084, 0x2f2f3232}, - {0x0000b088, 0x23282a2d}, - {0x0000b08c, 0x1c1e2123}, - {0x0000b090, 0x14171919}, - {0x0000b094, 0x0e0e1214}, - {0x0000b098, 0x03050707}, - {0x0000b09c, 0x00030303}, - {0x0000b0a0, 0x00000000}, - {0x0000b0a4, 0x00000000}, - {0x0000b0a8, 0x00000000}, - {0x0000b0ac, 0x00000000}, - {0x0000b0b0, 0x00000000}, - {0x0000b0b4, 0x00000000}, - {0x0000b0b8, 0x00000000}, - {0x0000b0bc, 0x00000000}, - {0x0000b0c0, 0x003f0020}, - {0x0000b0c4, 0x00400041}, - {0x0000b0c8, 0x0140005f}, - {0x0000b0cc, 0x0160015f}, - {0x0000b0d0, 0x017e017f}, - {0x0000b0d4, 0x02410242}, - {0x0000b0d8, 0x025f0240}, - {0x0000b0dc, 0x027f0260}, - {0x0000b0e0, 0x0341027e}, - {0x0000b0e4, 0x035f0340}, - {0x0000b0e8, 0x037f0360}, - {0x0000b0ec, 0x04400441}, - {0x0000b0f0, 0x0460045f}, - {0x0000b0f4, 0x0541047f}, - {0x0000b0f8, 0x055f0540}, - {0x0000b0fc, 0x057f0560}, - {0x0000b100, 0x06400641}, - {0x0000b104, 0x0660065f}, - {0x0000b108, 0x067e067f}, - {0x0000b10c, 0x07410742}, - {0x0000b110, 0x075f0740}, - {0x0000b114, 0x077f0760}, - {0x0000b118, 0x07800781}, - {0x0000b11c, 0x07a0079f}, - {0x0000b120, 0x07c107bf}, - {0x0000b124, 0x000007c0}, - {0x0000b128, 0x00000000}, - {0x0000b12c, 0x00000000}, - {0x0000b130, 0x00000000}, - {0x0000b134, 0x00000000}, - {0x0000b138, 0x00000000}, - {0x0000b13c, 0x00000000}, - {0x0000b140, 0x003f0020}, - {0x0000b144, 0x00400041}, - {0x0000b148, 0x0140005f}, - {0x0000b14c, 0x0160015f}, - {0x0000b150, 0x017e017f}, - {0x0000b154, 0x02410242}, - {0x0000b158, 0x025f0240}, - {0x0000b15c, 0x027f0260}, - {0x0000b160, 0x0341027e}, - {0x0000b164, 0x035f0340}, - {0x0000b168, 0x037f0360}, - {0x0000b16c, 0x04400441}, - {0x0000b170, 0x0460045f}, - {0x0000b174, 0x0541047f}, - {0x0000b178, 0x055f0540}, - {0x0000b17c, 0x057f0560}, - {0x0000b180, 0x06400641}, - {0x0000b184, 0x0660065f}, - {0x0000b188, 0x067e067f}, - {0x0000b18c, 0x07410742}, - {0x0000b190, 0x075f0740}, - 
{0x0000b194, 0x077f0760}, - {0x0000b198, 0x07800781}, - {0x0000b19c, 0x07a0079f}, - {0x0000b1a0, 0x07c107bf}, - {0x0000b1a4, 0x000007c0}, - {0x0000b1a8, 0x00000000}, - {0x0000b1ac, 0x00000000}, - {0x0000b1b0, 0x00000000}, - {0x0000b1b4, 0x00000000}, - {0x0000b1b8, 0x00000000}, - {0x0000b1bc, 0x00000000}, - {0x0000b1c0, 0x00000000}, - {0x0000b1c4, 0x00000000}, - {0x0000b1c8, 0x00000000}, - {0x0000b1cc, 0x00000000}, - {0x0000b1d0, 0x00000000}, - {0x0000b1d4, 0x00000000}, - {0x0000b1d8, 0x00000000}, - {0x0000b1dc, 0x00000000}, - {0x0000b1e0, 0x00000000}, - {0x0000b1e4, 0x00000000}, - {0x0000b1e8, 0x00000000}, - {0x0000b1ec, 0x00000000}, - {0x0000b1f0, 0x00000396}, - {0x0000b1f4, 0x00000396}, - {0x0000b1f8, 0x00000396}, - {0x0000b1fc, 0x00000196}, -}; +#define ar9340Common_wo_xlna_rx_gain_table_1p0 ar9300Common_wo_xlna_rx_gain_table_2p2 static const u32 ar9340_1p0_soc_preamble[][2] = { - /* Addr allmodes */ - {0x000040a4, 0x00a0c1c9}, + /* Addr allmodes */ {0x00007008, 0x00000000}, {0x00007020, 0x00000000}, {0x00007034, 0x00000002}, {0x00007038, 0x000004c2}, }; -#endif +#endif /* INITVALS_9340_H */ diff --git a/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h b/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h index 1d6658e139b5..4ef7dcccaa2f 100644 --- a/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h @@ -1,5 +1,6 @@ /* - * Copyright (c) 2010 Atheros Communications Inc. + * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -52,7 +53,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = { {0x00009e04, 0x001c2020, 0x001c2020, 0x001c2020, 0x001c2020}, {0x00009e0c, 0x6c4000e2, 0x6d4000e2, 0x6d4000e2, 0x6c4000d8}, {0x00009e10, 0x92c88d2e, 0x7ec88d2e, 0x7ec84d2e, 0x7ec86d2e}, - {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3376605e, 0x33795d5e}, + {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3376605e, 0x32395d5e}, {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, @@ -61,7 +62,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = { {0x00009e44, 0x62321e27, 0x62321e27, 0xfe291e27, 0xfe291e27}, {0x00009e48, 0x5030201a, 0x5030201a, 0x50302012, 0x50302012}, {0x00009fc8, 0x0003f000, 0x0003f000, 0x0001a000, 0x0001a000}, - {0x0000a204, 0x013187c0, 0x013187c4, 0x013187c4, 0x013187c0}, + {0x0000a204, 0x01318fc0, 0x01318fc4, 0x01318fc4, 0x01318fc0}, {0x0000a208, 0x00000104, 0x00000104, 0x00000004, 0x00000004}, {0x0000a22c, 0x01026a2f, 0x01026a27, 0x01026a2f, 0x01026a2f}, {0x0000a230, 0x0000400a, 0x00004014, 0x00004016, 0x0000400b}, @@ -958,7 +959,7 @@ static const u32 ar9462_2p0_radio_core[][2] = { {0x0001604c, 0x2699e04f}, {0x00016050, 0x6db6db6c}, {0x00016058, 0x6c200000}, - {0x00016080, 0x00040000}, + {0x00016080, 0x000c0000}, {0x00016084, 0x9a68048c}, {0x00016088, 0x54214514}, {0x0001608c, 0x1203040b}, @@ -981,7 +982,7 @@ static const u32 ar9462_2p0_radio_core[][2] = { {0x00016144, 0x02084080}, {0x00016148, 0x000080c0}, {0x00016280, 0x050a0001}, - {0x00016284, 0x3d841400}, + {0x00016284, 0x3d841418}, {0x00016288, 0x00000000}, {0x0001628c, 0xe3000000}, {0x00016290, 0xa1005080}, @@ -1007,6 +1008,7 @@ static const u32 ar9462_2p0_radio_core[][2] = { static const u32 ar9462_2p0_soc_preamble[][2] = { /* Addr 
allmodes */ + {0x000040a4, 0x00a0c1c9}, {0x00007020, 0x00000000}, {0x00007034, 0x00000002}, {0x00007038, 0x000004c2}, diff --git a/drivers/net/wireless/ath/ath9k/ar9485_initvals.h b/drivers/net/wireless/ath/ath9k/ar9485_initvals.h index d16d029f81a9..fb4497fc7a3d 100644 --- a/drivers/net/wireless/ath/ath9k/ar9485_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9485_initvals.h @@ -1,5 +1,6 @@ /* * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -17,360 +18,151 @@ #ifndef INITVALS_9485_H #define INITVALS_9485_H -static const u32 ar9485_1_1_mac_core[][2] = { - /* Addr allmodes */ - {0x00000008, 0x00000000}, - {0x00000030, 0x00020085}, - {0x00000034, 0x00000005}, - {0x00000040, 0x00000000}, - {0x00000044, 0x00000000}, - {0x00000048, 0x00000008}, - {0x0000004c, 0x00000010}, - {0x00000050, 0x00000000}, - {0x00001040, 0x002ffc0f}, - {0x00001044, 0x002ffc0f}, - {0x00001048, 0x002ffc0f}, - {0x0000104c, 0x002ffc0f}, - {0x00001050, 0x002ffc0f}, - {0x00001054, 0x002ffc0f}, - {0x00001058, 0x002ffc0f}, - {0x0000105c, 0x002ffc0f}, - {0x00001060, 0x002ffc0f}, - {0x00001064, 0x002ffc0f}, - {0x000010f0, 0x00000100}, - {0x00001270, 0x00000000}, - {0x000012b0, 0x00000000}, - {0x000012f0, 0x00000000}, - {0x0000143c, 0x00000000}, - {0x0000147c, 0x00000000}, - {0x00008000, 0x00000000}, - {0x00008004, 0x00000000}, - {0x00008008, 0x00000000}, - {0x0000800c, 0x00000000}, - {0x00008018, 0x00000000}, - {0x00008020, 0x00000000}, - {0x00008038, 0x00000000}, - {0x0000803c, 0x00000000}, - {0x00008040, 0x00000000}, - {0x00008044, 0x00000000}, - {0x00008048, 0x00000000}, - {0x0000804c, 0xffffffff}, - {0x00008054, 0x00000000}, - {0x00008058, 0x00000000}, - {0x0000805c, 0x000fc78f}, - {0x00008060, 0x0000000f}, - {0x00008064, 0x00000000}, - {0x00008070, 0x00000310}, - {0x00008074, 0x00000020}, - {0x00008078, 0x00000000}, - {0x0000809c, 0x0000000f}, - {0x000080a0, 0x00000000}, - {0x000080a4, 0x02ff0000}, - {0x000080a8, 0x0e070605}, - {0x000080ac, 0x0000000d}, - {0x000080b0, 0x00000000}, - {0x000080b4, 0x00000000}, - {0x000080b8, 0x00000000}, - {0x000080bc, 0x00000000}, - {0x000080c0, 0x2a800000}, - {0x000080c4, 0x06900168}, - {0x000080c8, 0x13881c22}, - {0x000080cc, 0x01f40000}, - {0x000080d0, 0x00252500}, - {0x000080d4, 0x00a00000}, - {0x000080d8, 0x00400000}, - {0x000080dc, 0x00000000}, - {0x000080e0, 0xffffffff}, - {0x000080e4, 0x0000ffff}, - {0x000080e8, 0x3f3f3f3f}, - {0x000080ec, 0x00000000}, - {0x000080f0, 0x00000000}, - {0x000080f4, 0x00000000}, - {0x000080fc, 0x00020000}, - {0x00008100, 0x00000000}, - {0x00008108, 0x00000052}, - {0x0000810c, 0x00000000}, - {0x00008110, 0x00000000}, - {0x00008114, 0x000007ff}, - {0x00008118, 0x000000aa}, - {0x0000811c, 0x00003210}, - {0x00008124, 0x00000000}, - {0x00008128, 0x00000000}, - {0x0000812c, 0x00000000}, - {0x00008130, 0x00000000}, - {0x00008134, 0x00000000}, - {0x00008138, 0x00000000}, - {0x0000813c, 0x0000ffff}, - {0x00008144, 0xffffffff}, - {0x00008168, 0x00000000}, - {0x0000816c, 0x00000000}, - {0x00008170, 0x18486200}, - {0x00008174, 0x33332210}, - {0x00008178, 0x00000000}, - {0x0000817c, 0x00020000}, - {0x000081c0, 0x00000000}, - {0x000081c4, 0x33332210}, - {0x000081d4, 0x00000000}, - {0x000081ec, 0x00000000}, - {0x000081f0, 0x00000000}, - {0x000081f4, 0x00000000}, - {0x000081f8, 0x00000000}, - {0x000081fc, 0x00000000}, - {0x00008240, 0x00100000}, - {0x00008244, 
0x0010f400}, - {0x00008248, 0x00000800}, - {0x0000824c, 0x0001e800}, - {0x00008250, 0x00000000}, - {0x00008254, 0x00000000}, - {0x00008258, 0x00000000}, - {0x0000825c, 0x40000000}, - {0x00008260, 0x00080922}, - {0x00008264, 0x9ca00010}, - {0x00008268, 0xffffffff}, - {0x0000826c, 0x0000ffff}, - {0x00008270, 0x00000000}, - {0x00008274, 0x40000000}, - {0x00008278, 0x003e4180}, - {0x0000827c, 0x00000004}, - {0x00008284, 0x0000002c}, - {0x00008288, 0x0000002c}, - {0x0000828c, 0x000000ff}, - {0x00008294, 0x00000000}, - {0x00008298, 0x00000000}, - {0x0000829c, 0x00000000}, - {0x00008300, 0x00000140}, - {0x00008314, 0x00000000}, - {0x0000831c, 0x0000010d}, - {0x00008328, 0x00000000}, - {0x0000832c, 0x00000007}, - {0x00008330, 0x00000302}, - {0x00008334, 0x00000700}, - {0x00008338, 0x00ff0000}, - {0x0000833c, 0x02400000}, - {0x00008340, 0x000107ff}, - {0x00008344, 0xa248105b}, - {0x00008348, 0x008f0000}, - {0x0000835c, 0x00000000}, - {0x00008360, 0xffffffff}, - {0x00008364, 0xffffffff}, - {0x00008368, 0x00000000}, - {0x00008370, 0x00000000}, - {0x00008374, 0x000000ff}, - {0x00008378, 0x00000000}, - {0x0000837c, 0x00000000}, - {0x00008380, 0xffffffff}, - {0x00008384, 0xffffffff}, - {0x00008390, 0xffffffff}, - {0x00008394, 0xffffffff}, - {0x00008398, 0x00000000}, - {0x0000839c, 0x00000000}, - {0x000083a0, 0x00000000}, - {0x000083a4, 0x0000fa14}, - {0x000083a8, 0x000f0c00}, - {0x000083ac, 0x33332210}, - {0x000083b0, 0x33332210}, - {0x000083b4, 0x33332210}, - {0x000083b8, 0x33332210}, - {0x000083bc, 0x00000000}, - {0x000083c0, 0x00000000}, - {0x000083c4, 0x00000000}, - {0x000083c8, 0x00000000}, - {0x000083cc, 0x00000200}, - {0x000083d0, 0x000301ff}, -}; +/* AR9485 1.0 */ -static const u32 ar9485_1_1_baseband_core[][2] = { - /* Addr allmodes */ - {0x00009800, 0xafe68e30}, - {0x00009804, 0xfd14e000}, - {0x00009808, 0x9c0a8f6b}, - {0x0000980c, 0x04800000}, - {0x00009814, 0x9280c00a}, - {0x00009818, 0x00000000}, - {0x0000981c, 0x00020028}, - {0x00009834, 0x5f3ca3de}, - {0x00009838, 0x0108ecff}, - {0x0000983c, 0x14750600}, - {0x00009880, 0x201fff00}, - {0x00009884, 0x00001042}, - {0x000098a4, 0x00200400}, - {0x000098b0, 0x52440bbe}, - {0x000098d0, 0x004b6a8e}, - {0x000098d4, 0x00000820}, - {0x000098dc, 0x00000000}, - {0x000098f0, 0x00000000}, - {0x000098f4, 0x00000000}, - {0x00009c04, 0x00000000}, - {0x00009c08, 0x03200000}, - {0x00009c0c, 0x00000000}, - {0x00009c10, 0x00000000}, - {0x00009c14, 0x00046384}, - {0x00009c18, 0x05b6b440}, - {0x00009c1c, 0x00b6b440}, - {0x00009d00, 0xc080a333}, - {0x00009d04, 0x40206c10}, - {0x00009d08, 0x009c4060}, - {0x00009d0c, 0x1883800a}, - {0x00009d10, 0x01834061}, - {0x00009d14, 0x00c00400}, - {0x00009d18, 0x00000000}, - {0x00009d1c, 0x00000000}, - {0x00009e08, 0x0038233c}, - {0x00009e24, 0x9927b515}, - {0x00009e28, 0x12ef0200}, - {0x00009e30, 0x06336f77}, - {0x00009e34, 0x6af6532f}, - {0x00009e38, 0x0cc80c00}, - {0x00009e40, 0x0d261820}, - {0x00009e4c, 0x00001004}, - {0x00009e50, 0x00ff03f1}, - {0x00009fc0, 0x80be4788}, - {0x00009fc4, 0x0001efb5}, - {0x00009fcc, 0x40000014}, - {0x0000a20c, 0x00000000}, - {0x0000a210, 0x00000000}, - {0x0000a220, 0x00000000}, - {0x0000a224, 0x00000000}, - {0x0000a228, 0x10002310}, - {0x0000a23c, 0x00000000}, - {0x0000a244, 0x0c000000}, - {0x0000a2a0, 0x00000001}, - {0x0000a2c0, 0x00000001}, - {0x0000a2c8, 0x00000000}, - {0x0000a2cc, 0x18c43433}, - {0x0000a2d4, 0x00000000}, - {0x0000a2dc, 0x00000000}, - {0x0000a2e0, 0x00000000}, - {0x0000a2e4, 0x00000000}, - {0x0000a2e8, 0x00000000}, - {0x0000a2ec, 0x00000000}, - {0x0000a2f0, 0x00000000}, - 
{0x0000a2f4, 0x00000000}, - {0x0000a2f8, 0x00000000}, - {0x0000a344, 0x00000000}, - {0x0000a34c, 0x00000000}, - {0x0000a350, 0x0000a000}, - {0x0000a364, 0x00000000}, - {0x0000a370, 0x00000000}, - {0x0000a390, 0x00000001}, - {0x0000a394, 0x00000444}, - {0x0000a398, 0x001f0e0f}, - {0x0000a39c, 0x0075393f}, - {0x0000a3a0, 0xb79f6427}, - {0x0000a3a4, 0x000000ff}, - {0x0000a3a8, 0x3b3b3b3b}, - {0x0000a3ac, 0x2f2f2f2f}, - {0x0000a3c0, 0x20202020}, - {0x0000a3c4, 0x22222220}, - {0x0000a3c8, 0x20200020}, - {0x0000a3cc, 0x20202020}, - {0x0000a3d0, 0x20202020}, - {0x0000a3d4, 0x20202020}, - {0x0000a3d8, 0x20202020}, - {0x0000a3dc, 0x20202020}, - {0x0000a3e0, 0x20202020}, - {0x0000a3e4, 0x20202020}, - {0x0000a3e8, 0x20202020}, - {0x0000a3ec, 0x20202020}, - {0x0000a3f0, 0x00000000}, - {0x0000a3f4, 0x00000006}, - {0x0000a3f8, 0x0cdbd380}, - {0x0000a3fc, 0x000f0f01}, - {0x0000a400, 0x8fa91f01}, - {0x0000a404, 0x00000000}, - {0x0000a408, 0x0e79e5c6}, - {0x0000a40c, 0x00820820}, - {0x0000a414, 0x1ce739cf}, - {0x0000a418, 0x2d0019ce}, - {0x0000a41c, 0x1ce739ce}, - {0x0000a420, 0x000001ce}, - {0x0000a424, 0x1ce739ce}, - {0x0000a428, 0x000001ce}, - {0x0000a42c, 0x1ce739ce}, - {0x0000a430, 0x1ce739ce}, - {0x0000a434, 0x00000000}, - {0x0000a438, 0x00001801}, - {0x0000a43c, 0x00000000}, - {0x0000a440, 0x00000000}, - {0x0000a444, 0x00000000}, - {0x0000a448, 0x04000000}, - {0x0000a44c, 0x00000001}, - {0x0000a450, 0x00010000}, - {0x0000a5c4, 0xbfad9d74}, - {0x0000a5c8, 0x0048060a}, - {0x0000a5cc, 0x00000637}, - {0x0000a760, 0x03020100}, - {0x0000a764, 0x09080504}, - {0x0000a768, 0x0d0c0b0a}, - {0x0000a76c, 0x13121110}, - {0x0000a770, 0x31301514}, - {0x0000a774, 0x35343332}, - {0x0000a778, 0x00000036}, - {0x0000a780, 0x00000838}, - {0x0000a7c0, 0x00000000}, - {0x0000a7c4, 0xfffffffc}, - {0x0000a7c8, 0x00000000}, - {0x0000a7cc, 0x00000000}, - {0x0000a7d0, 0x00000000}, - {0x0000a7d4, 0x00000004}, - {0x0000a7dc, 0x00000000}, -}; +#define ar9485_1_1_mac_postamble ar9300_2p2_mac_postamble -static const u32 ar9485Common_1_1[][2] = { - /* Addr allmodes */ - {0x00007010, 0x00000022}, - {0x00007020, 0x00000000}, - {0x00007034, 0x00000002}, - {0x00007038, 0x000004c2}, +static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_disable_L1[][2] = { + /* Addr allmodes */ + {0x00018c00, 0x18012e5e}, + {0x00018c04, 0x000801d8}, + {0x00018c08, 0x0000080c}, }; -static const u32 ar9485_1_1_baseband_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8005, 0xd00a8005}, - {0x00009820, 0x206a002e, 0x206a002e, 0x206a002e, 0x206a002e}, - {0x00009824, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0}, - {0x00009828, 0x06903081, 0x06903081, 0x06903881, 0x06903881}, - {0x0000982c, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4}, - {0x00009830, 0x0000059c, 0x0000059c, 0x0000059c, 0x0000059c}, - {0x00009c00, 0x00000044, 0x00000044, 0x00000044, 0x00000044}, - {0x00009e00, 0x0372161e, 0x0372161e, 0x037216a0, 0x037216a0}, - {0x00009e04, 0x00182020, 0x00182020, 0x00182020, 0x00182020}, - {0x00009e0c, 0x6c4000e2, 0x6d4000e2, 0x6d4000e2, 0x6c4000e2}, - {0x00009e10, 0x7ec88d2e, 0x7ec88d2e, 0x7ec80d2e, 0x7ec80d2e}, - {0x00009e14, 0x31395d5e, 0x3139605e, 0x3139605e, 0x31395d5e}, - {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, - {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, - {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021}, - {0x00009e3c, 0xcf946220, 0xcf946220, 0xcf946222, 0xcf946222}, - 
{0x00009e44, 0x02321e27, 0x02321e27, 0x02282324, 0x02282324}, - {0x00009e48, 0x5030201a, 0x5030201a, 0x50302010, 0x50302010}, - {0x00009fc8, 0x0003f000, 0x0003f000, 0x0001a000, 0x0001a000}, - {0x0000a204, 0x01303fc0, 0x01303fc4, 0x01303fc4, 0x01303fc0}, - {0x0000a208, 0x00000104, 0x00000104, 0x00000004, 0x00000004}, - {0x0000a230, 0x0000400a, 0x00004014, 0x00004016, 0x0000400b}, - {0x0000a234, 0x10000fff, 0x10000fff, 0x10000fff, 0x10000fff}, - {0x0000a238, 0xffb81018, 0xffb81018, 0xffb81018, 0xffb81018}, - {0x0000a250, 0x00000000, 0x00000000, 0x00000210, 0x00000108}, - {0x0000a254, 0x000007d0, 0x00000fa0, 0x00001130, 0x00000898}, - {0x0000a258, 0x02020002, 0x02020002, 0x02020002, 0x02020002}, - {0x0000a25c, 0x01000e0e, 0x01000e0e, 0x01000e0e, 0x01000e0e}, - {0x0000a260, 0x3a021501, 0x3a021501, 0x3a021501, 0x3a021501}, - {0x0000a264, 0x00000e0e, 0x00000e0e, 0x00000e0e, 0x00000e0e}, - {0x0000a280, 0x00000007, 0x00000007, 0x0000000b, 0x0000000b}, - {0x0000a284, 0x00000000, 0x00000000, 0x000002a0, 0x000002a0}, - {0x0000a288, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a28c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a2c4, 0x00158d18, 0x00158d18, 0x00158d18, 0x00158d18}, - {0x0000a2d0, 0x00071981, 0x00071981, 0x00071982, 0x00071982}, - {0x0000a2d8, 0xf999a83a, 0xf999a83a, 0xf999a83a, 0xf999a83a}, - {0x0000a358, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000be04, 0x00802020, 0x00802020, 0x00802020, 0x00802020}, - {0x0000be18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, +static const u32 ar9485Common_wo_xlna_rx_gain_1_1[][2] = { + /* Addr allmodes */ + {0x0000a000, 0x00060005}, + {0x0000a004, 0x00810080}, + {0x0000a008, 0x00830082}, + {0x0000a00c, 0x00850084}, + {0x0000a010, 0x01820181}, + {0x0000a014, 0x01840183}, + {0x0000a018, 0x01880185}, + {0x0000a01c, 0x018a0189}, + {0x0000a020, 0x02850284}, + {0x0000a024, 0x02890288}, + {0x0000a028, 0x028b028a}, + {0x0000a02c, 0x03850384}, + {0x0000a030, 0x03890388}, + {0x0000a034, 0x038b038a}, + {0x0000a038, 0x038d038c}, + {0x0000a03c, 0x03910390}, + {0x0000a040, 0x03930392}, + {0x0000a044, 0x03950394}, + {0x0000a048, 0x00000396}, + {0x0000a04c, 0x00000000}, + {0x0000a050, 0x00000000}, + {0x0000a054, 0x00000000}, + {0x0000a058, 0x00000000}, + {0x0000a05c, 0x00000000}, + {0x0000a060, 0x00000000}, + {0x0000a064, 0x00000000}, + {0x0000a068, 0x00000000}, + {0x0000a06c, 0x00000000}, + {0x0000a070, 0x00000000}, + {0x0000a074, 0x00000000}, + {0x0000a078, 0x00000000}, + {0x0000a07c, 0x00000000}, + {0x0000a080, 0x28282828}, + {0x0000a084, 0x28282828}, + {0x0000a088, 0x28282828}, + {0x0000a08c, 0x28282828}, + {0x0000a090, 0x28282828}, + {0x0000a094, 0x24242428}, + {0x0000a098, 0x171e1e1e}, + {0x0000a09c, 0x02020b0b}, + {0x0000a0a0, 0x02020202}, + {0x0000a0a4, 0x00000000}, + {0x0000a0a8, 0x00000000}, + {0x0000a0ac, 0x00000000}, + {0x0000a0b0, 0x00000000}, + {0x0000a0b4, 0x00000000}, + {0x0000a0b8, 0x00000000}, + {0x0000a0bc, 0x00000000}, + {0x0000a0c0, 0x22072208}, + {0x0000a0c4, 0x22052206}, + {0x0000a0c8, 0x22032204}, + {0x0000a0cc, 0x22012202}, + {0x0000a0d0, 0x221f2200}, + {0x0000a0d4, 0x221d221e}, + {0x0000a0d8, 0x33023303}, + {0x0000a0dc, 0x33003301}, + {0x0000a0e0, 0x331e331f}, + {0x0000a0e4, 0x4402331d}, + {0x0000a0e8, 0x44004401}, + {0x0000a0ec, 0x441e441f}, + {0x0000a0f0, 0x55025503}, + {0x0000a0f4, 0x55005501}, + {0x0000a0f8, 0x551e551f}, + {0x0000a0fc, 0x6602551d}, + {0x0000a100, 0x66006601}, + {0x0000a104, 0x661e661f}, + {0x0000a108, 0x7703661d}, + {0x0000a10c, 0x77017702}, + {0x0000a110, 0x00007700}, + 
{0x0000a114, 0x00000000}, + {0x0000a118, 0x00000000}, + {0x0000a11c, 0x00000000}, + {0x0000a120, 0x00000000}, + {0x0000a124, 0x00000000}, + {0x0000a128, 0x00000000}, + {0x0000a12c, 0x00000000}, + {0x0000a130, 0x00000000}, + {0x0000a134, 0x00000000}, + {0x0000a138, 0x00000000}, + {0x0000a13c, 0x00000000}, + {0x0000a140, 0x001f0000}, + {0x0000a144, 0x111f1100}, + {0x0000a148, 0x111d111e}, + {0x0000a14c, 0x111b111c}, + {0x0000a150, 0x22032204}, + {0x0000a154, 0x22012202}, + {0x0000a158, 0x221f2200}, + {0x0000a15c, 0x221d221e}, + {0x0000a160, 0x33013302}, + {0x0000a164, 0x331f3300}, + {0x0000a168, 0x4402331e}, + {0x0000a16c, 0x44004401}, + {0x0000a170, 0x441e441f}, + {0x0000a174, 0x55015502}, + {0x0000a178, 0x551f5500}, + {0x0000a17c, 0x6602551e}, + {0x0000a180, 0x66006601}, + {0x0000a184, 0x661e661f}, + {0x0000a188, 0x7703661d}, + {0x0000a18c, 0x77017702}, + {0x0000a190, 0x00007700}, + {0x0000a194, 0x00000000}, + {0x0000a198, 0x00000000}, + {0x0000a19c, 0x00000000}, + {0x0000a1a0, 0x00000000}, + {0x0000a1a4, 0x00000000}, + {0x0000a1a8, 0x00000000}, + {0x0000a1ac, 0x00000000}, + {0x0000a1b0, 0x00000000}, + {0x0000a1b4, 0x00000000}, + {0x0000a1b8, 0x00000000}, + {0x0000a1bc, 0x00000000}, + {0x0000a1c0, 0x00000000}, + {0x0000a1c4, 0x00000000}, + {0x0000a1c8, 0x00000000}, + {0x0000a1cc, 0x00000000}, + {0x0000a1d0, 0x00000000}, + {0x0000a1d4, 0x00000000}, + {0x0000a1d8, 0x00000000}, + {0x0000a1dc, 0x00000000}, + {0x0000a1e0, 0x00000000}, + {0x0000a1e4, 0x00000000}, + {0x0000a1e8, 0x00000000}, + {0x0000a1ec, 0x00000000}, + {0x0000a1f0, 0x00000396}, + {0x0000a1f4, 0x00000396}, + {0x0000a1f8, 0x00000396}, + {0x0000a1fc, 0x00000296}, }; -static const u32 ar9485Modes_high_ob_db_tx_gain_1_1[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ +static const u32 ar9485Modes_high_power_tx_gain_1_1[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ {0x000098bc, 0x00000002, 0x00000002, 0x00000002, 0x00000002}, {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d8, 0x000050d8}, {0x0000a458, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, @@ -442,102 +234,34 @@ static const u32 ar9485Modes_high_ob_db_tx_gain_1_1[][5] = { {0x00016048, 0x6c924260, 0x6c924260, 0x6c924260, 0x6c924260}, }; -static const u32 ar9485_modes_lowest_ob_db_tx_gain_1_1[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x000098bc, 0x00000002, 0x00000002, 0x00000002, 0x00000002}, - {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d8, 0x000050d8}, - {0x0000a458, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x21000603, 0x21000603}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x25000605, 0x25000605}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2a000a03, 0x2a000a03}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x2c000a04, 0x2c000a04}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x34000e20, 0x34000e20}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x35000e21, 0x35000e21}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x43000e62, 0x43000e62}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x45000e63, 0x45000e63}, - {0x0000a540, 0x5f027fc9, 
0x5f027fc9, 0x49000e65, 0x49000e65}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4b000e66, 0x4b000e66}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x4d001645, 0x4d001645}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x51001865, 0x51001865}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x55001a86, 0x55001a86}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x57001ce9, 0x57001ce9}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5a001ceb, 0x5a001ceb}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000b500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b504, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b508, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b50c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b510, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b514, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b518, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b51c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b520, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b524, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b528, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b52c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b530, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b534, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b538, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b53c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b540, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b544, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b548, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b54c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b550, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b554, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b558, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b55c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b560, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b564, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b568, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b56c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b570, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b574, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b578, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b57c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x00016044, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db}, - {0x00016048, 0x6c924260, 0x6c924260, 0x6c924260, 0x6c924260}, -}; +#define ar9485Modes_high_ob_db_tx_gain_1_1 ar9485Modes_high_power_tx_gain_1_1 -static const u32 ar9485_1_1_radio_postamble[][2] = { - /* Addr allmodes */ - {0x0001609c, 0x0b283f31}, - {0x000160ac, 0x24611800}, - {0x000160b0, 0x03284f3e}, - {0x0001610c, 0x00170000}, - {0x00016140, 0x50804008}, -}; +#define ar9485Modes_low_ob_db_tx_gain_1_1 ar9485Modes_high_ob_db_tx_gain_1_1 
-static const u32 ar9485_1_1_mac_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, - {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, - {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, - {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, - {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, - {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, - {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, - {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, +#define ar9485_modes_lowest_ob_db_tx_gain_1_1 ar9485Modes_low_ob_db_tx_gain_1_1 + +static const u32 ar9485_1_1[][2] = { + /* Addr allmodes */ + {0x0000a580, 0x00000000}, + {0x0000a584, 0x00000000}, + {0x0000a588, 0x00000000}, + {0x0000a58c, 0x00000000}, + {0x0000a590, 0x00000000}, + {0x0000a594, 0x00000000}, + {0x0000a598, 0x00000000}, + {0x0000a59c, 0x00000000}, + {0x0000a5a0, 0x00000000}, + {0x0000a5a4, 0x00000000}, + {0x0000a5a8, 0x00000000}, + {0x0000a5ac, 0x00000000}, + {0x0000a5b0, 0x00000000}, + {0x0000a5b4, 0x00000000}, + {0x0000a5b8, 0x00000000}, + {0x0000a5bc, 0x00000000}, }; static const u32 ar9485_1_1_radio_core[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x00016000, 0x36db6db6}, {0x00016004, 0x6db6db40}, {0x00016008, 0x73800000}, @@ -601,294 +325,145 @@ static const u32 ar9485_1_1_radio_core[][2] = { {0x00016c44, 0x12000000}, }; -static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_enable_L1[][2] = { - /* Addr allmodes */ - {0x00018c00, 0x18052e5e}, - {0x00018c04, 0x000801d8}, - {0x00018c08, 0x0000080c}, -}; - -static const u32 ar9485Modes_high_power_tx_gain_1_1[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x000098bc, 0x00000002, 0x00000002, 0x00000002, 0x00000002}, - {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d8, 0x000050d8}, - {0x0000a458, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x21000603, 0x21000603}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x25000605, 0x25000605}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2a000a03, 0x2a000a03}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x2c000a04, 0x2c000a04}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x34000e20, 0x34000e20}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x35000e21, 0x35000e21}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x43000e62, 0x43000e62}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x45000e63, 0x45000e63}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x49000e65, 0x49000e65}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4b000e66, 0x4b000e66}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x4d001645, 0x4d001645}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x51001865, 0x51001865}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x55001a86, 0x55001a86}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x57001ce9, 0x57001ce9}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5a001ceb, 0x5a001ceb}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x5e001eeb, 
0x5e001eeb}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000b500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b504, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b508, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b50c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b510, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b514, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b518, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b51c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b520, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b524, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b528, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b52c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b530, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b534, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b538, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b53c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b540, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b544, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b548, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b54c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b550, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b554, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b558, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b55c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b560, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b564, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b568, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b56c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b570, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b574, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b578, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b57c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x00016044, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db}, - {0x00016048, 0x6c924260, 0x6c924260, 0x6c924260, 0x6c924260}, -}; - -static const u32 ar9485_1_1[][2] = { - /* Addr allmodes */ - {0x0000a580, 0x00000000}, - {0x0000a584, 0x00000000}, - {0x0000a588, 0x00000000}, - {0x0000a58c, 0x00000000}, - {0x0000a590, 0x00000000}, - {0x0000a594, 0x00000000}, - {0x0000a598, 0x00000000}, - {0x0000a59c, 0x00000000}, - {0x0000a5a0, 0x00000000}, - {0x0000a5a4, 0x00000000}, - {0x0000a5a8, 0x00000000}, - {0x0000a5ac, 0x00000000}, - {0x0000a5b0, 0x00000000}, - {0x0000a5b4, 0x00000000}, - {0x0000a5b8, 0x00000000}, - {0x0000a5bc, 0x00000000}, -}; - -static const u32 ar9485_modes_green_ob_db_tx_gain_1_1[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x000098bc, 0x00000003, 0x00000003, 0x00000003, 0x00000003}, - {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d8, 0x000050d8}, - {0x0000a458, 0x80000000, 0x80000000, 0x80000000, 0x80000000}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000006, 0x00000006}, - {0x0000a504, 0x05062002, 
0x05062002, 0x03000201, 0x03000201}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x06000203, 0x06000203}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0a000401, 0x0a000401}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x0e000403, 0x0e000403}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x12000405, 0x12000405}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x15000604, 0x15000604}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x18000605, 0x18000605}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x1c000a04, 0x1c000a04}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x21000a06, 0x21000a06}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x29000a24, 0x29000a24}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x2f000e21, 0x2f000e21}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x31000e20, 0x31000e20}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x33000e20, 0x33000e20}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x43000e62, 0x43000e62}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x45000e63, 0x45000e63}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x49000e65, 0x49000e65}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4b000e66, 0x4b000e66}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x4d001645, 0x4d001645}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x51001865, 0x51001865}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x55001a86, 0x55001a86}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x57001ce9, 0x57001ce9}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5a001ceb, 0x5a001ceb}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000b500, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b504, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b508, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b50c, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b510, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b514, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b518, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b51c, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b520, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b524, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b528, 0x0000001a, 0x0000001a, 0x0000001a, 0x0000001a}, - {0x0000b52c, 0x0000002a, 0x0000002a, 0x0000002a, 0x0000002a}, - {0x0000b530, 0x0000003a, 0x0000003a, 0x0000003a, 0x0000003a}, - {0x0000b534, 0x0000004a, 0x0000004a, 0x0000004a, 0x0000004a}, - {0x0000b538, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b53c, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b540, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b544, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b548, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b54c, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b550, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b554, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b558, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b55c, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b560, 0x0000005b, 0x0000005b, 0x0000005b, 
0x0000005b}, - {0x0000b564, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b568, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b56c, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b570, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b574, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b578, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x0000b57c, 0x0000005b, 0x0000005b, 0x0000005b, 0x0000005b}, - {0x00016044, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db}, - {0x00016048, 0x6c924260, 0x6c924260, 0x6c924260, 0x6c924260}, -}; - -static const u32 ar9485_1_1_pcie_phy_clkreq_disable_L1[][2] = { - /* Addr allmodes */ - {0x00018c00, 0x18013e5e}, - {0x00018c04, 0x000801d8}, - {0x00018c08, 0x0000080c}, -}; - -static const u32 ar9485_1_1_soc_preamble[][2] = { - /* Addr allmodes */ - {0x00004014, 0xba280400}, - {0x00004090, 0x00aa10aa}, - {0x000040a4, 0x00a0c9c9}, - {0x00007010, 0x00000022}, - {0x00007020, 0x00000000}, - {0x00007034, 0x00000002}, - {0x00007038, 0x000004c2}, - {0x00007048, 0x00000002}, -}; - -static const u32 ar9485_1_1_baseband_core_txfir_coeff_japan_2484[][2] = { - /* Addr allmodes */ - {0x0000a398, 0x00000000}, - {0x0000a39c, 0x6f7f0301}, - {0x0000a3a0, 0xca9228ee}, -}; - -static const u32 ar9485Modes_low_ob_db_tx_gain_1_1[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x000098bc, 0x00000002, 0x00000002, 0x00000002, 0x00000002}, - {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d8, 0x000050d8}, - {0x0000a458, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a500, 0x00022200, 0x00022200, 0x00000000, 0x00000000}, - {0x0000a504, 0x05062002, 0x05062002, 0x04000002, 0x04000002}, - {0x0000a508, 0x0c002e00, 0x0c002e00, 0x08000004, 0x08000004}, - {0x0000a50c, 0x11062202, 0x11062202, 0x0d000200, 0x0d000200}, - {0x0000a510, 0x17022e00, 0x17022e00, 0x11000202, 0x11000202}, - {0x0000a514, 0x1d000ec2, 0x1d000ec2, 0x15000400, 0x15000400}, - {0x0000a518, 0x25020ec0, 0x25020ec0, 0x19000402, 0x19000402}, - {0x0000a51c, 0x2b020ec3, 0x2b020ec3, 0x1d000404, 0x1d000404}, - {0x0000a520, 0x2f001f04, 0x2f001f04, 0x21000603, 0x21000603}, - {0x0000a524, 0x35001fc4, 0x35001fc4, 0x25000605, 0x25000605}, - {0x0000a528, 0x3c022f04, 0x3c022f04, 0x2a000a03, 0x2a000a03}, - {0x0000a52c, 0x41023e85, 0x41023e85, 0x2c000a04, 0x2c000a04}, - {0x0000a530, 0x48023ec6, 0x48023ec6, 0x34000e20, 0x34000e20}, - {0x0000a534, 0x4d023f01, 0x4d023f01, 0x35000e21, 0x35000e21}, - {0x0000a538, 0x53023f4b, 0x53023f4b, 0x43000e62, 0x43000e62}, - {0x0000a53c, 0x5a027f09, 0x5a027f09, 0x45000e63, 0x45000e63}, - {0x0000a540, 0x5f027fc9, 0x5f027fc9, 0x49000e65, 0x49000e65}, - {0x0000a544, 0x6502feca, 0x6502feca, 0x4b000e66, 0x4b000e66}, - {0x0000a548, 0x6b02ff4a, 0x6b02ff4a, 0x4d001645, 0x4d001645}, - {0x0000a54c, 0x7203feca, 0x7203feca, 0x51001865, 0x51001865}, - {0x0000a550, 0x7703ff0b, 0x7703ff0b, 0x55001a86, 0x55001a86}, - {0x0000a554, 0x7d06ffcb, 0x7d06ffcb, 0x57001ce9, 0x57001ce9}, - {0x0000a558, 0x8407ff0b, 0x8407ff0b, 0x5a001ceb, 0x5a001ceb}, - {0x0000a55c, 0x8907ffcb, 0x8907ffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a560, 0x900fff0b, 0x900fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a564, 0x960fffcb, 0x960fffcb, 0x5e001eeb, 0x5e001eeb}, - {0x0000a568, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a56c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a570, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a574, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000a578, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - 
{0x0000a57c, 0x9c1fff0b, 0x9c1fff0b, 0x5e001eeb, 0x5e001eeb}, - {0x0000b500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b504, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b508, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b50c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b510, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b514, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b518, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b51c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b520, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b524, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b528, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b52c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b530, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b534, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b538, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b53c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b540, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b544, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b548, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b54c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b550, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b554, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b558, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b55c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b560, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b564, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b568, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b56c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b570, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b574, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b578, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000b57c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x00016044, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db, 0x05d6b2db}, - {0x00016048, 0x6c924260, 0x6c924260, 0x6c924260, 0x6c924260}, -}; - -static const u32 ar9485_fast_clock_1_1_baseband_postamble[][3] = { - /* Addr 5G_HT2 5G_HT40 */ - {0x00009e00, 0x03721821, 0x03721821}, - {0x0000a230, 0x0000400b, 0x00004016}, - {0x0000a254, 0x00000898, 0x00001130}, -}; - -static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_disable_L1[][2] = { - /* Addr allmodes */ - {0x00018c00, 0x18012e5e}, - {0x00018c04, 0x000801d8}, - {0x00018c08, 0x0000080c}, +static const u32 ar9485_1_1_baseband_core[][2] = { + /* Addr allmodes */ + {0x00009800, 0xafe68e30}, + {0x00009804, 0xfd14e000}, + {0x00009808, 0x9c0a8f6b}, + {0x0000980c, 0x04800000}, + {0x00009814, 0x9280c00a}, + {0x00009818, 0x00000000}, + {0x0000981c, 0x00020028}, + {0x00009834, 0x5f3ca3de}, + {0x00009838, 0x0108ecff}, + {0x0000983c, 0x14750600}, + {0x00009880, 0x201fff00}, + {0x00009884, 0x00001042}, + {0x000098a4, 0x00200400}, + {0x000098b0, 0x52440bbe}, + {0x000098d0, 0x004b6a8e}, + {0x000098d4, 0x00000820}, + {0x000098dc, 0x00000000}, + {0x000098f0, 0x00000000}, + {0x000098f4, 0x00000000}, + {0x00009c04, 0x00000000}, + {0x00009c08, 0x03200000}, + {0x00009c0c, 0x00000000}, + {0x00009c10, 0x00000000}, + {0x00009c14, 0x00046384}, + {0x00009c18, 0x05b6b440}, + {0x00009c1c, 0x00b6b440}, + {0x00009d00, 0xc080a333}, + {0x00009d04, 0x40206c10}, + {0x00009d08, 0x009c4060}, + {0x00009d0c, 
0x1883800a}, + {0x00009d10, 0x01834061}, + {0x00009d14, 0x00c00400}, + {0x00009d18, 0x00000000}, + {0x00009d1c, 0x00000000}, + {0x00009e08, 0x0038233c}, + {0x00009e24, 0x9927b515}, + {0x00009e28, 0x12ef0200}, + {0x00009e30, 0x06336f77}, + {0x00009e34, 0x6af6532f}, + {0x00009e38, 0x0cc80c00}, + {0x00009e40, 0x0d261820}, + {0x00009e4c, 0x00001004}, + {0x00009e50, 0x00ff03f1}, + {0x00009fc0, 0x80be4788}, + {0x00009fc4, 0x0001efb5}, + {0x00009fcc, 0x40000014}, + {0x0000a20c, 0x00000000}, + {0x0000a210, 0x00000000}, + {0x0000a220, 0x00000000}, + {0x0000a224, 0x00000000}, + {0x0000a228, 0x10002310}, + {0x0000a23c, 0x00000000}, + {0x0000a244, 0x0c000000}, + {0x0000a2a0, 0x00000001}, + {0x0000a2c0, 0x00000001}, + {0x0000a2c8, 0x00000000}, + {0x0000a2cc, 0x18c43433}, + {0x0000a2d4, 0x00000000}, + {0x0000a2dc, 0x00000000}, + {0x0000a2e0, 0x00000000}, + {0x0000a2e4, 0x00000000}, + {0x0000a2e8, 0x00000000}, + {0x0000a2ec, 0x00000000}, + {0x0000a2f0, 0x00000000}, + {0x0000a2f4, 0x00000000}, + {0x0000a2f8, 0x00000000}, + {0x0000a344, 0x00000000}, + {0x0000a34c, 0x00000000}, + {0x0000a350, 0x0000a000}, + {0x0000a364, 0x00000000}, + {0x0000a370, 0x00000000}, + {0x0000a390, 0x00000001}, + {0x0000a394, 0x00000444}, + {0x0000a398, 0x001f0e0f}, + {0x0000a39c, 0x0075393f}, + {0x0000a3a0, 0xb79f6427}, + {0x0000a3a4, 0x000000ff}, + {0x0000a3a8, 0x3b3b3b3b}, + {0x0000a3ac, 0x2f2f2f2f}, + {0x0000a3c0, 0x20202020}, + {0x0000a3c4, 0x22222220}, + {0x0000a3c8, 0x20200020}, + {0x0000a3cc, 0x20202020}, + {0x0000a3d0, 0x20202020}, + {0x0000a3d4, 0x20202020}, + {0x0000a3d8, 0x20202020}, + {0x0000a3dc, 0x20202020}, + {0x0000a3e0, 0x20202020}, + {0x0000a3e4, 0x20202020}, + {0x0000a3e8, 0x20202020}, + {0x0000a3ec, 0x20202020}, + {0x0000a3f0, 0x00000000}, + {0x0000a3f4, 0x00000006}, + {0x0000a3f8, 0x0cdbd380}, + {0x0000a3fc, 0x000f0f01}, + {0x0000a400, 0x8fa91f01}, + {0x0000a404, 0x00000000}, + {0x0000a408, 0x0e79e5c6}, + {0x0000a40c, 0x00820820}, + {0x0000a414, 0x1ce739cf}, + {0x0000a418, 0x2d0019ce}, + {0x0000a41c, 0x1ce739ce}, + {0x0000a420, 0x000001ce}, + {0x0000a424, 0x1ce739ce}, + {0x0000a428, 0x000001ce}, + {0x0000a42c, 0x1ce739ce}, + {0x0000a430, 0x1ce739ce}, + {0x0000a434, 0x00000000}, + {0x0000a438, 0x00001801}, + {0x0000a43c, 0x00000000}, + {0x0000a440, 0x00000000}, + {0x0000a444, 0x00000000}, + {0x0000a448, 0x04000000}, + {0x0000a44c, 0x00000001}, + {0x0000a450, 0x00010000}, + {0x0000a5c4, 0xbfad9d74}, + {0x0000a5c8, 0x0048060a}, + {0x0000a5cc, 0x00000637}, + {0x0000a760, 0x03020100}, + {0x0000a764, 0x09080504}, + {0x0000a768, 0x0d0c0b0a}, + {0x0000a76c, 0x13121110}, + {0x0000a770, 0x31301514}, + {0x0000a774, 0x35343332}, + {0x0000a778, 0x00000036}, + {0x0000a780, 0x00000838}, + {0x0000a7c0, 0x00000000}, + {0x0000a7c4, 0xfffffffc}, + {0x0000a7c8, 0x00000000}, + {0x0000a7cc, 0x00000000}, + {0x0000a7d0, 0x00000000}, + {0x0000a7d4, 0x00000004}, + {0x0000a7dc, 0x00000000}, }; static const u32 ar9485_common_rx_gain_1_1[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x0000a000, 0x00010000}, {0x0000a004, 0x00030002}, {0x0000a008, 0x00050004}, @@ -1019,143 +594,260 @@ static const u32 ar9485_common_rx_gain_1_1[][2] = { {0x0000a1fc, 0x00000296}, }; +static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_enable_L1[][2] = { + /* Addr allmodes */ + {0x00018c00, 0x18052e5e}, + {0x00018c04, 0x000801d8}, + {0x00018c08, 0x0000080c}, +}; + static const u32 ar9485_1_1_pcie_phy_clkreq_enable_L1[][2] = { - /* Addr allmodes */ + /* Addr allmodes */ {0x00018c00, 0x18053e5e}, {0x00018c04, 0x000801d8}, {0x00018c08, 0x0000080c}, }; 
-static const u32 ar9485Common_wo_xlna_rx_gain_1_1[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00060005}, - {0x0000a004, 0x00810080}, - {0x0000a008, 0x00830082}, - {0x0000a00c, 0x00850084}, - {0x0000a010, 0x01820181}, - {0x0000a014, 0x01840183}, - {0x0000a018, 0x01880185}, - {0x0000a01c, 0x018a0189}, - {0x0000a020, 0x02850284}, - {0x0000a024, 0x02890288}, - {0x0000a028, 0x028b028a}, - {0x0000a02c, 0x03850384}, - {0x0000a030, 0x03890388}, - {0x0000a034, 0x038b038a}, - {0x0000a038, 0x038d038c}, - {0x0000a03c, 0x03910390}, - {0x0000a040, 0x03930392}, - {0x0000a044, 0x03950394}, - {0x0000a048, 0x00000396}, - {0x0000a04c, 0x00000000}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - {0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - {0x0000a07c, 0x00000000}, - {0x0000a080, 0x28282828}, - {0x0000a084, 0x28282828}, - {0x0000a088, 0x28282828}, - {0x0000a08c, 0x28282828}, - {0x0000a090, 0x28282828}, - {0x0000a094, 0x24242428}, - {0x0000a098, 0x171e1e1e}, - {0x0000a09c, 0x02020b0b}, - {0x0000a0a0, 0x02020202}, - {0x0000a0a4, 0x00000000}, - {0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x22072208}, - {0x0000a0c4, 0x22052206}, - {0x0000a0c8, 0x22032204}, - {0x0000a0cc, 0x22012202}, - {0x0000a0d0, 0x221f2200}, - {0x0000a0d4, 0x221d221e}, - {0x0000a0d8, 0x33023303}, - {0x0000a0dc, 0x33003301}, - {0x0000a0e0, 0x331e331f}, - {0x0000a0e4, 0x4402331d}, - {0x0000a0e8, 0x44004401}, - {0x0000a0ec, 0x441e441f}, - {0x0000a0f0, 0x55025503}, - {0x0000a0f4, 0x55005501}, - {0x0000a0f8, 0x551e551f}, - {0x0000a0fc, 0x6602551d}, - {0x0000a100, 0x66006601}, - {0x0000a104, 0x661e661f}, - {0x0000a108, 0x7703661d}, - {0x0000a10c, 0x77017702}, - {0x0000a110, 0x00007700}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x111f1100}, - {0x0000a148, 0x111d111e}, - {0x0000a14c, 0x111b111c}, - {0x0000a150, 0x22032204}, - {0x0000a154, 0x22012202}, - {0x0000a158, 0x221f2200}, - {0x0000a15c, 0x221d221e}, - {0x0000a160, 0x33013302}, - {0x0000a164, 0x331f3300}, - {0x0000a168, 0x4402331e}, - {0x0000a16c, 0x44004401}, - {0x0000a170, 0x441e441f}, - {0x0000a174, 0x55015502}, - {0x0000a178, 0x551f5500}, - {0x0000a17c, 0x6602551e}, - {0x0000a180, 0x66006601}, - {0x0000a184, 0x661e661f}, - {0x0000a188, 0x7703661d}, - {0x0000a18c, 0x77017702}, - {0x0000a190, 0x00007700}, - {0x0000a194, 0x00000000}, - {0x0000a198, 0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, 
- {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000296}, +static const u32 ar9485_1_1_soc_preamble[][2] = { + /* Addr allmodes */ + {0x00004014, 0xba280400}, + {0x00004090, 0x00aa10aa}, + {0x000040a4, 0x00a0c9c9}, + {0x00007010, 0x00000022}, + {0x00007020, 0x00000000}, + {0x00007034, 0x00000002}, + {0x00007038, 0x000004c2}, + {0x00007048, 0x00000002}, +}; + +static const u32 ar9485_fast_clock_1_1_baseband_postamble[][3] = { + /* Addr 5G_HT20 5G_HT40 */ + {0x00009e00, 0x03721821, 0x03721821}, + {0x0000a230, 0x0000400b, 0x00004016}, + {0x0000a254, 0x00000898, 0x00001130}, +}; + +static const u32 ar9485_1_1_baseband_postamble[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8005, 0xd00a8005}, + {0x00009820, 0x206a002e, 0x206a002e, 0x206a002e, 0x206a002e}, + {0x00009824, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0}, + {0x00009828, 0x06903081, 0x06903081, 0x06903881, 0x06903881}, + {0x0000982c, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4}, + {0x00009830, 0x0000059c, 0x0000059c, 0x0000059c, 0x0000059c}, + {0x00009c00, 0x00000044, 0x00000044, 0x00000044, 0x00000044}, + {0x00009e00, 0x0372161e, 0x0372161e, 0x037216a0, 0x037216a0}, + {0x00009e04, 0x00182020, 0x00182020, 0x00182020, 0x00182020}, + {0x00009e0c, 0x6c4000e2, 0x6d4000e2, 0x6d4000e2, 0x6c4000e2}, + {0x00009e10, 0x7ec88d2e, 0x7ec88d2e, 0x7ec80d2e, 0x7ec80d2e}, + {0x00009e14, 0x31395d5e, 0x3139605e, 0x3139605e, 0x31395d5e}, + {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, + {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, + {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021}, + {0x00009e3c, 0xcf946220, 0xcf946220, 0xcf946222, 0xcf946222}, + {0x00009e44, 0x02321e27, 0x02321e27, 0x02282324, 0x02282324}, + {0x00009e48, 0x5030201a, 0x5030201a, 0x50302010, 0x50302010}, + {0x00009fc8, 0x0003f000, 0x0003f000, 0x0001a000, 0x0001a000}, + {0x0000a204, 0x01303fc0, 0x01303fc4, 0x01303fc4, 0x01303fc0}, + {0x0000a208, 0x00000104, 0x00000104, 0x00000004, 0x00000004}, + {0x0000a230, 0x0000400a, 0x00004014, 0x00004016, 0x0000400b}, + {0x0000a234, 0x10000fff, 0x10000fff, 0x10000fff, 0x10000fff}, + {0x0000a238, 0xffb81018, 0xffb81018, 0xffb81018, 0xffb81018}, + {0x0000a250, 0x00000000, 0x00000000, 0x00000210, 0x00000108}, + {0x0000a254, 0x000007d0, 0x00000fa0, 0x00001130, 0x00000898}, + {0x0000a258, 0x02020002, 0x02020002, 0x02020002, 0x02020002}, + {0x0000a25c, 0x01000e0e, 0x01000e0e, 0x01000e0e, 0x01000e0e}, + {0x0000a260, 0x3a021501, 0x3a021501, 0x3a021501, 0x3a021501}, + {0x0000a264, 0x00000e0e, 0x00000e0e, 0x00000e0e, 0x00000e0e}, + {0x0000a280, 0x00000007, 0x00000007, 0x0000000b, 0x0000000b}, + {0x0000a284, 0x00000000, 0x00000000, 0x000002a0, 0x000002a0}, + {0x0000a288, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a28c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a2c4, 0x00158d18, 0x00158d18, 0x00158d18, 0x00158d18}, + {0x0000a2d0, 0x00071981, 0x00071981, 0x00071982, 0x00071982}, + {0x0000a2d8, 0xf999a83a, 0xf999a83a, 0xf999a83a, 0xf999a83a}, + {0x0000a358, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000be04, 0x00802020, 0x00802020, 0x00802020, 0x00802020}, + {0x0000be18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, +}; + +static const u32 ar9485_1_1_pcie_phy_clkreq_disable_L1[][2] = { + /* Addr allmodes */ + {0x00018c00, 0x18013e5e}, + {0x00018c04, 0x000801d8}, + {0x00018c08, 0x0000080c}, +}; + +static 
const u32 ar9485_1_1_radio_postamble[][2] = { + /* Addr allmodes */ + {0x0001609c, 0x0b283f31}, + {0x000160ac, 0x24611800}, + {0x000160b0, 0x03284f3e}, + {0x0001610c, 0x00170000}, + {0x00016140, 0x50804008}, +}; + +static const u32 ar9485_1_1_mac_core[][2] = { + /* Addr allmodes */ + {0x00000008, 0x00000000}, + {0x00000030, 0x00020085}, + {0x00000034, 0x00000005}, + {0x00000040, 0x00000000}, + {0x00000044, 0x00000000}, + {0x00000048, 0x00000008}, + {0x0000004c, 0x00000010}, + {0x00000050, 0x00000000}, + {0x00001040, 0x002ffc0f}, + {0x00001044, 0x002ffc0f}, + {0x00001048, 0x002ffc0f}, + {0x0000104c, 0x002ffc0f}, + {0x00001050, 0x002ffc0f}, + {0x00001054, 0x002ffc0f}, + {0x00001058, 0x002ffc0f}, + {0x0000105c, 0x002ffc0f}, + {0x00001060, 0x002ffc0f}, + {0x00001064, 0x002ffc0f}, + {0x000010f0, 0x00000100}, + {0x00001270, 0x00000000}, + {0x000012b0, 0x00000000}, + {0x000012f0, 0x00000000}, + {0x0000143c, 0x00000000}, + {0x0000147c, 0x00000000}, + {0x00008000, 0x00000000}, + {0x00008004, 0x00000000}, + {0x00008008, 0x00000000}, + {0x0000800c, 0x00000000}, + {0x00008018, 0x00000000}, + {0x00008020, 0x00000000}, + {0x00008038, 0x00000000}, + {0x0000803c, 0x00000000}, + {0x00008040, 0x00000000}, + {0x00008044, 0x00000000}, + {0x00008048, 0x00000000}, + {0x0000804c, 0xffffffff}, + {0x00008054, 0x00000000}, + {0x00008058, 0x00000000}, + {0x0000805c, 0x000fc78f}, + {0x00008060, 0x0000000f}, + {0x00008064, 0x00000000}, + {0x00008070, 0x00000310}, + {0x00008074, 0x00000020}, + {0x00008078, 0x00000000}, + {0x0000809c, 0x0000000f}, + {0x000080a0, 0x00000000}, + {0x000080a4, 0x02ff0000}, + {0x000080a8, 0x0e070605}, + {0x000080ac, 0x0000000d}, + {0x000080b0, 0x00000000}, + {0x000080b4, 0x00000000}, + {0x000080b8, 0x00000000}, + {0x000080bc, 0x00000000}, + {0x000080c0, 0x2a800000}, + {0x000080c4, 0x06900168}, + {0x000080c8, 0x13881c22}, + {0x000080cc, 0x01f40000}, + {0x000080d0, 0x00252500}, + {0x000080d4, 0x00a00000}, + {0x000080d8, 0x00400000}, + {0x000080dc, 0x00000000}, + {0x000080e0, 0xffffffff}, + {0x000080e4, 0x0000ffff}, + {0x000080e8, 0x3f3f3f3f}, + {0x000080ec, 0x00000000}, + {0x000080f0, 0x00000000}, + {0x000080f4, 0x00000000}, + {0x000080fc, 0x00020000}, + {0x00008100, 0x00000000}, + {0x00008108, 0x00000052}, + {0x0000810c, 0x00000000}, + {0x00008110, 0x00000000}, + {0x00008114, 0x000007ff}, + {0x00008118, 0x000000aa}, + {0x0000811c, 0x00003210}, + {0x00008124, 0x00000000}, + {0x00008128, 0x00000000}, + {0x0000812c, 0x00000000}, + {0x00008130, 0x00000000}, + {0x00008134, 0x00000000}, + {0x00008138, 0x00000000}, + {0x0000813c, 0x0000ffff}, + {0x00008144, 0xffffffff}, + {0x00008168, 0x00000000}, + {0x0000816c, 0x00000000}, + {0x00008170, 0x18486200}, + {0x00008174, 0x33332210}, + {0x00008178, 0x00000000}, + {0x0000817c, 0x00020000}, + {0x000081c0, 0x00000000}, + {0x000081c4, 0x33332210}, + {0x000081d4, 0x00000000}, + {0x000081ec, 0x00000000}, + {0x000081f0, 0x00000000}, + {0x000081f4, 0x00000000}, + {0x000081f8, 0x00000000}, + {0x000081fc, 0x00000000}, + {0x00008240, 0x00100000}, + {0x00008244, 0x0010f400}, + {0x00008248, 0x00000800}, + {0x0000824c, 0x0001e800}, + {0x00008250, 0x00000000}, + {0x00008254, 0x00000000}, + {0x00008258, 0x00000000}, + {0x0000825c, 0x40000000}, + {0x00008260, 0x00080922}, + {0x00008264, 0x9ca00010}, + {0x00008268, 0xffffffff}, + {0x0000826c, 0x0000ffff}, + {0x00008270, 0x00000000}, + {0x00008274, 0x40000000}, + {0x00008278, 0x003e4180}, + {0x0000827c, 0x00000004}, + {0x00008284, 0x0000002c}, + {0x00008288, 0x0000002c}, + {0x0000828c, 0x000000ff}, + {0x00008294, 
0x00000000}, + {0x00008298, 0x00000000}, + {0x0000829c, 0x00000000}, + {0x00008300, 0x00000140}, + {0x00008314, 0x00000000}, + {0x0000831c, 0x0000010d}, + {0x00008328, 0x00000000}, + {0x0000832c, 0x00000007}, + {0x00008330, 0x00000302}, + {0x00008334, 0x00000700}, + {0x00008338, 0x00ff0000}, + {0x0000833c, 0x02400000}, + {0x00008340, 0x000107ff}, + {0x00008344, 0xa248105b}, + {0x00008348, 0x008f0000}, + {0x0000835c, 0x00000000}, + {0x00008360, 0xffffffff}, + {0x00008364, 0xffffffff}, + {0x00008368, 0x00000000}, + {0x00008370, 0x00000000}, + {0x00008374, 0x000000ff}, + {0x00008378, 0x00000000}, + {0x0000837c, 0x00000000}, + {0x00008380, 0xffffffff}, + {0x00008384, 0xffffffff}, + {0x00008390, 0xffffffff}, + {0x00008394, 0xffffffff}, + {0x00008398, 0x00000000}, + {0x0000839c, 0x00000000}, + {0x000083a0, 0x00000000}, + {0x000083a4, 0x0000fa14}, + {0x000083a8, 0x000f0c00}, + {0x000083ac, 0x33332210}, + {0x000083b0, 0x33332210}, + {0x000083b4, 0x33332210}, + {0x000083b8, 0x33332210}, + {0x000083bc, 0x00000000}, + {0x000083c0, 0x00000000}, + {0x000083c4, 0x00000000}, + {0x000083c8, 0x00000000}, + {0x000083cc, 0x00000200}, + {0x000083d0, 0x000301ff}, }; -#endif +#endif /* INITVALS_9485_H */ diff --git a/drivers/net/wireless/ath/ath9k/ar955x_1p0_initvals.h b/drivers/net/wireless/ath/ath9k/ar955x_1p0_initvals.h new file mode 100644 index 000000000000..df97f21c52dc --- /dev/null +++ b/drivers/net/wireless/ath/ath9k/ar955x_1p0_initvals.h @@ -0,0 +1,1284 @@ +/* + * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
+ */ + +#ifndef INITVALS_955X_1P0_H +#define INITVALS_955X_1P0_H + +/* AR955X 1.0 */ + +static const u32 ar955x_1p0_radio_postamble[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00016098, 0xd2dd5554, 0xd2dd5554, 0xd28b3330, 0xd28b3330}, + {0x0001609c, 0x0a566f3a, 0x0a566f3a, 0x06345f2a, 0x06345f2a}, + {0x000160ac, 0xa4647c00, 0xa4647c00, 0xa4646800, 0xa4646800}, + {0x000160b0, 0x01885f52, 0x01885f52, 0x04accf3a, 0x04accf3a}, + {0x00016104, 0xb7a00001, 0xb7a00001, 0xb7a00001, 0xb7a00001}, + {0x0001610c, 0xc0000000, 0xc0000000, 0xc0000000, 0xc0000000}, + {0x00016140, 0x10804008, 0x10804008, 0x10804008, 0x10804008}, + {0x00016504, 0xb7a00001, 0xb7a00001, 0xb7a00001, 0xb7a00001}, + {0x0001650c, 0xc0000000, 0xc0000000, 0xc0000000, 0xc0000000}, + {0x00016540, 0x10804008, 0x10804008, 0x10804008, 0x10804008}, + {0x00016904, 0xb7a00001, 0xb7a00001, 0xb7a00001, 0xb7a00001}, + {0x0001690c, 0xc0000000, 0xc0000000, 0xc0000000, 0xc0000000}, + {0x00016940, 0x10804008, 0x10804008, 0x10804008, 0x10804008}, +}; + +static const u32 ar955x_1p0_baseband_core_txfir_coeff_japan_2484[][2] = { + /* Addr allmodes */ + {0x0000a398, 0x00000000}, + {0x0000a39c, 0x6f7f0301}, + {0x0000a3a0, 0xca9228ee}, +}; + +static const u32 ar955x_1p0_baseband_postamble[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00009810, 0xd00a8005, 0xd00a8005, 0xd00a8011, 0xd00a8011}, + {0x00009820, 0x206a022e, 0x206a022e, 0x206a012e, 0x206a012e}, + {0x00009824, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0, 0x5ac640d0}, + {0x00009828, 0x06903081, 0x06903081, 0x06903881, 0x06903881}, + {0x0000982c, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4, 0x05eea6d4}, + {0x00009830, 0x0000059c, 0x0000059c, 0x0000119c, 0x0000119c}, + {0x00009c00, 0x000000c4, 0x000000c4, 0x000000c4, 0x000000c4}, + {0x00009e00, 0x0372111a, 0x0372111a, 0x037216a0, 0x037216a0}, + {0x00009e04, 0x001c2020, 0x001c2020, 0x001c2020, 0x001c2020}, + {0x00009e0c, 0x6c4000e2, 0x6d4000e2, 0x6d4000e2, 0x6c4000e2}, + {0x00009e10, 0x7ec88d2e, 0x7ec88d2e, 0x7ec84d2e, 0x7ec84d2e}, + {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3379605e, 0x33795d5e}, + {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, + {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, + {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021}, + {0x00009e3c, 0xcfa10820, 0xcfa10820, 0xcfa10822, 0xcfa10822}, + {0x00009e44, 0xfe321e27, 0xfe321e27, 0xfe291e27, 0xfe291e27}, + {0x00009e48, 0x5030201a, 0x5030201a, 0x50302012, 0x50302012}, + {0x00009fc8, 0x0003f000, 0x0003f000, 0x0001a000, 0x0001a000}, + {0x0000a204, 0x005c0ec0, 0x005c0ec4, 0x005c0ec4, 0x005c0ec0}, + {0x0000a208, 0x00000104, 0x00000104, 0x00000004, 0x00000004}, + {0x0000a22c, 0x07e26a2f, 0x07e26a2f, 0x01026a2f, 0x01026a2f}, + {0x0000a230, 0x0000000a, 0x00000014, 0x00000016, 0x0000000b}, + {0x0000a234, 0x00000fff, 0x10000fff, 0x10000fff, 0x00000fff}, + {0x0000a238, 0xffb01018, 0xffb01018, 0xffb01018, 0xffb01018}, + {0x0000a250, 0x00000000, 0x00000000, 0x00000210, 0x00000108}, + {0x0000a254, 0x000007d0, 0x00000fa0, 0x00001130, 0x00000898}, + {0x0000a258, 0x02020002, 0x02020002, 0x02020002, 0x02020002}, + {0x0000a25c, 0x01000e0e, 0x01000e0e, 0x01000e0e, 0x01000e0e}, + {0x0000a260, 0x0a021501, 0x0a021501, 0x3a021501, 0x3a021501}, + {0x0000a264, 0x00000e0e, 0x00000e0e, 0x00000e0e, 0x00000e0e}, + {0x0000a280, 0x00000007, 0x00000007, 0x0000000b, 0x0000000b}, + {0x0000a284, 0x00000000, 0x00000000, 0x00000010, 0x00000010}, + {0x0000a288, 0x00000110, 0x00000110, 0x00000110, 0x00000110}, + 
{0x0000a28c, 0x00022222, 0x00022222, 0x00022222, 0x00022222}, + {0x0000a2c4, 0x00158d18, 0x00158d18, 0x00158d18, 0x00158d18}, + {0x0000a2cc, 0x18c50033, 0x18c43433, 0x18c41033, 0x18c44c33}, + {0x0000a2d0, 0x00041982, 0x00041982, 0x00041982, 0x00041982}, + {0x0000a2d8, 0x7999a83b, 0x7999a83b, 0x7999a83b, 0x7999a83b}, + {0x0000a358, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a830, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, + {0x0000ae04, 0x001c0000, 0x001c0000, 0x001c0000, 0x001c0000}, + {0x0000ae18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000ae1c, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, + {0x0000ae20, 0x000001b5, 0x000001b5, 0x000001ce, 0x000001ce}, + {0x0000b284, 0x00000000, 0x00000000, 0x00000010, 0x00000010}, + {0x0000b830, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, + {0x0000be04, 0x001c0000, 0x001c0000, 0x001c0000, 0x001c0000}, + {0x0000be18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000be1c, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, + {0x0000be20, 0x000001b5, 0x000001b5, 0x000001ce, 0x000001ce}, + {0x0000c284, 0x00000000, 0x00000000, 0x00000010, 0x00000010}, +}; + +static const u32 ar955x_1p0_radio_core[][2] = { + /* Addr allmodes */ + {0x00016000, 0x36db6db6}, + {0x00016004, 0x6db6db40}, + {0x00016008, 0x73f00000}, + {0x0001600c, 0x00000000}, + {0x00016040, 0x7f80fff8}, + {0x0001604c, 0x76d005b5}, + {0x00016050, 0x557cf031}, + {0x00016054, 0x13449440}, + {0x00016058, 0x0c51c92c}, + {0x0001605c, 0x3db7fffc}, + {0x00016060, 0xfffffffc}, + {0x00016064, 0x000f0278}, + {0x00016068, 0x6db6db6c}, + {0x0001606c, 0x6db60000}, + {0x00016080, 0x00080000}, + {0x00016084, 0x0e48048c}, + {0x00016088, 0x14214514}, + {0x0001608c, 0x119f101e}, + {0x00016090, 0x24926490}, + {0x00016094, 0x00000000}, + {0x000160a0, 0x0a108ffe}, + {0x000160a4, 0x812fc370}, + {0x000160a8, 0x423c8000}, + {0x000160b4, 0x92480080}, + {0x000160c0, 0x006db6d0}, + {0x000160c4, 0x6db6db60}, + {0x000160c8, 0x6db6db6c}, + {0x000160cc, 0x01e6c000}, + {0x00016100, 0x11999601}, + {0x00016108, 0x00080010}, + {0x00016144, 0x02084080}, + {0x00016148, 0x000080c0}, + {0x00016280, 0x01800804}, + {0x00016284, 0x00038dc5}, + {0x00016288, 0x00000000}, + {0x0001628c, 0x00000040}, + {0x00016380, 0x00000000}, + {0x00016384, 0x00000000}, + {0x00016388, 0x00400705}, + {0x0001638c, 0x00800700}, + {0x00016390, 0x00800700}, + {0x00016394, 0x00000000}, + {0x00016398, 0x00000000}, + {0x0001639c, 0x00000000}, + {0x000163a0, 0x00000001}, + {0x000163a4, 0x00000001}, + {0x000163a8, 0x00000000}, + {0x000163ac, 0x00000000}, + {0x000163b0, 0x00000000}, + {0x000163b4, 0x00000000}, + {0x000163b8, 0x00000000}, + {0x000163bc, 0x00000000}, + {0x000163c0, 0x000000a0}, + {0x000163c4, 0x000c0000}, + {0x000163c8, 0x14021402}, + {0x000163cc, 0x00001402}, + {0x000163d0, 0x00000000}, + {0x000163d4, 0x00000000}, + {0x00016400, 0x36db6db6}, + {0x00016404, 0x6db6db40}, + {0x00016408, 0x73f00000}, + {0x0001640c, 0x00000000}, + {0x00016440, 0x7f80fff8}, + {0x0001644c, 0x76d005b5}, + {0x00016450, 0x557cf031}, + {0x00016454, 0x13449440}, + {0x00016458, 0x0c51c92c}, + {0x0001645c, 0x3db7fffc}, + {0x00016460, 0xfffffffc}, + {0x00016464, 0x000f0278}, + {0x00016468, 0x6db6db6c}, + {0x0001646c, 0x6db60000}, + {0x00016500, 0x11999601}, + {0x00016508, 0x00080010}, + {0x00016544, 0x02084080}, + {0x00016548, 0x000080c0}, + {0x00016780, 0x00000000}, + {0x00016784, 0x00000000}, + {0x00016788, 0x00400705}, + {0x0001678c, 0x00800700}, + {0x00016790, 0x00800700}, + {0x00016794, 0x00000000}, + {0x00016798, 0x00000000}, + 
{0x0001679c, 0x00000000}, + {0x000167a0, 0x00000001}, + {0x000167a4, 0x00000001}, + {0x000167a8, 0x00000000}, + {0x000167ac, 0x00000000}, + {0x000167b0, 0x00000000}, + {0x000167b4, 0x00000000}, + {0x000167b8, 0x00000000}, + {0x000167bc, 0x00000000}, + {0x000167c0, 0x000000a0}, + {0x000167c4, 0x000c0000}, + {0x000167c8, 0x14021402}, + {0x000167cc, 0x00001402}, + {0x000167d0, 0x00000000}, + {0x000167d4, 0x00000000}, + {0x00016800, 0x36db6db6}, + {0x00016804, 0x6db6db40}, + {0x00016808, 0x73f00000}, + {0x0001680c, 0x00000000}, + {0x00016840, 0x7f80fff8}, + {0x0001684c, 0x76d005b5}, + {0x00016850, 0x557cf031}, + {0x00016854, 0x13449440}, + {0x00016858, 0x0c51c92c}, + {0x0001685c, 0x3db7fffc}, + {0x00016860, 0xfffffffc}, + {0x00016864, 0x000f0278}, + {0x00016868, 0x6db6db6c}, + {0x0001686c, 0x6db60000}, + {0x00016900, 0x11999601}, + {0x00016908, 0x00080010}, + {0x00016944, 0x02084080}, + {0x00016948, 0x000080c0}, + {0x00016b80, 0x00000000}, + {0x00016b84, 0x00000000}, + {0x00016b88, 0x00400705}, + {0x00016b8c, 0x00800700}, + {0x00016b90, 0x00800700}, + {0x00016b94, 0x00000000}, + {0x00016b98, 0x00000000}, + {0x00016b9c, 0x00000000}, + {0x00016ba0, 0x00000001}, + {0x00016ba4, 0x00000001}, + {0x00016ba8, 0x00000000}, + {0x00016bac, 0x00000000}, + {0x00016bb0, 0x00000000}, + {0x00016bb4, 0x00000000}, + {0x00016bb8, 0x00000000}, + {0x00016bbc, 0x00000000}, + {0x00016bc0, 0x000000a0}, + {0x00016bc4, 0x000c0000}, + {0x00016bc8, 0x14021402}, + {0x00016bcc, 0x00001402}, + {0x00016bd0, 0x00000000}, + {0x00016bd4, 0x00000000}, +}; + +static const u32 ar955x_1p0_modes_xpa_tx_gain_table[][9] = { + /* Addr 5G_HT20_L 5G_HT40_L 5G_HT20_M 5G_HT40_M 5G_HT20_H 5G_HT40_H 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xfffd5aaa, 0xfffd5aaa}, + {0x0000a2e0, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000a2e4, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xffffe0f0, 0xffffe0f0}, + {0x0000a2e8, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xfffcff00, 0xfffcff00}, + {0x0000a410, 0x000050de, 0x000050de, 0x000050de, 0x000050de, 0x000050de, 0x000050de, 0x000050da, 0x000050da}, + {0x0000a500, 0x00000003, 0x00000003, 0x00000003, 0x00000003, 0x00000003, 0x00000003, 0x00000000, 0x00000000}, + {0x0000a504, 0x04000005, 0x04000005, 0x04000005, 0x04000005, 0x04000005, 0x04000005, 0x04000002, 0x04000002}, + {0x0000a508, 0x08000009, 0x08000009, 0x08000009, 0x08000009, 0x08000009, 0x08000009, 0x08000004, 0x08000004}, + {0x0000a50c, 0x0c00000b, 0x0c00000b, 0x0c00000b, 0x0c00000b, 0x0c00000b, 0x0c00000b, 0x0c000006, 0x0c000006}, + {0x0000a510, 0x1000000d, 0x1000000d, 0x1000000d, 0x1000000d, 0x1000000d, 0x1000000d, 0x0f00000a, 0x0f00000a}, + {0x0000a514, 0x14000011, 0x14000011, 0x14000011, 0x14000011, 0x14000011, 0x14000011, 0x1300000c, 0x1300000c}, + {0x0000a518, 0x19004008, 0x19004008, 0x19004008, 0x19004008, 0x18004008, 0x18004008, 0x1700000e, 0x1700000e}, + {0x0000a51c, 0x1d00400a, 0x1d00400a, 0x1d00400a, 0x1d00400a, 0x1c00400a, 0x1c00400a, 0x1b000064, 0x1b000064}, + {0x0000a520, 0x230020a2, 0x230020a2, 0x210020a2, 0x210020a2, 0x200020a2, 0x200020a2, 0x1f000242, 0x1f000242}, + {0x0000a524, 0x2500006e, 0x2500006e, 0x2500006e, 0x2500006e, 0x2400006e, 0x2400006e, 0x23000229, 0x23000229}, + {0x0000a528, 0x29022221, 0x29022221, 0x28022221, 0x28022221, 0x27022221, 0x27022221, 0x270002a2, 0x270002a2}, + {0x0000a52c, 0x2d00062a, 0x2d00062a, 0x2c00062a, 0x2c00062a, 
0x2a00062a, 0x2a00062a, 0x2c001203, 0x2c001203}, + {0x0000a530, 0x340220a5, 0x340220a5, 0x320220a5, 0x320220a5, 0x2f0220a5, 0x2f0220a5, 0x30001803, 0x30001803}, + {0x0000a534, 0x380022c5, 0x380022c5, 0x350022c5, 0x350022c5, 0x320022c5, 0x320022c5, 0x33000881, 0x33000881}, + {0x0000a538, 0x3b002486, 0x3b002486, 0x39002486, 0x39002486, 0x36002486, 0x36002486, 0x38001809, 0x38001809}, + {0x0000a53c, 0x3f00248a, 0x3f00248a, 0x3d00248a, 0x3d00248a, 0x3a00248a, 0x3a00248a, 0x3a000814, 0x3a000814}, + {0x0000a540, 0x4202242c, 0x4202242c, 0x4102242c, 0x4102242c, 0x3f02242c, 0x3f02242c, 0x3f001a0c, 0x3f001a0c}, + {0x0000a544, 0x490044c6, 0x490044c6, 0x460044c6, 0x460044c6, 0x420044c6, 0x420044c6, 0x43001a0e, 0x43001a0e}, + {0x0000a548, 0x4d024485, 0x4d024485, 0x4a024485, 0x4a024485, 0x46024485, 0x46024485, 0x46001812, 0x46001812}, + {0x0000a54c, 0x51044483, 0x51044483, 0x4e044483, 0x4e044483, 0x4a044483, 0x4a044483, 0x49001884, 0x49001884}, + {0x0000a550, 0x5404a40c, 0x5404a40c, 0x5204a40c, 0x5204a40c, 0x4d04a40c, 0x4d04a40c, 0x4d001e84, 0x4d001e84}, + {0x0000a554, 0x57024632, 0x57024632, 0x55024632, 0x55024632, 0x52024632, 0x52024632, 0x50001e69, 0x50001e69}, + {0x0000a558, 0x5c00a634, 0x5c00a634, 0x5900a634, 0x5900a634, 0x5600a634, 0x5600a634, 0x550006f4, 0x550006f4}, + {0x0000a55c, 0x5f026832, 0x5f026832, 0x5d026832, 0x5d026832, 0x5a026832, 0x5a026832, 0x59000ad3, 0x59000ad3}, + {0x0000a560, 0x6602b012, 0x6602b012, 0x6202b012, 0x6202b012, 0x5d02b012, 0x5d02b012, 0x5e000ad5, 0x5e000ad5}, + {0x0000a564, 0x6e02d0e1, 0x6e02d0e1, 0x6802d0e1, 0x6802d0e1, 0x6002d0e1, 0x6002d0e1, 0x61001ced, 0x61001ced}, + {0x0000a568, 0x7202b4c4, 0x7202b4c4, 0x6c02b4c4, 0x6c02b4c4, 0x6502b4c4, 0x6502b4c4, 0x660018d4, 0x660018d4}, + {0x0000a56c, 0x75007894, 0x75007894, 0x70007894, 0x70007894, 0x6b007894, 0x6b007894, 0x660018d4, 0x660018d4}, + {0x0000a570, 0x7b025c74, 0x7b025c74, 0x75025c74, 0x75025c74, 0x70025c74, 0x70025c74, 0x660018d4, 0x660018d4}, + {0x0000a574, 0x8300bcb5, 0x8300bcb5, 0x7a00bcb5, 0x7a00bcb5, 0x7600bcb5, 0x7600bcb5, 0x660018d4, 0x660018d4}, + {0x0000a578, 0x8a04dc74, 0x8a04dc74, 0x7f04dc74, 0x7f04dc74, 0x7c04dc74, 0x7c04dc74, 0x660018d4, 0x660018d4}, + {0x0000a57c, 0x8a04dc74, 0x8a04dc74, 0x7f04dc74, 0x7f04dc74, 0x7c04dc74, 0x7c04dc74, 0x660018d4, 0x660018d4}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x03804000, 0x03804000}, + {0x0000a610, 0x04c08c01, 0x04c08c01, 0x04808b01, 0x04808b01, 0x04808a01, 0x04808a01, 0x0300ca02, 0x0300ca02}, + {0x0000a614, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00000e04, 0x00000e04}, + {0x0000a618, 0x04010c01, 0x04010c01, 0x03c10b01, 0x03c10b01, 0x03810a01, 0x03810a01, 0x03014000, 0x03014000}, + {0x0000a61c, 0x03814e05, 0x03814e05, 0x03414d05, 0x03414d05, 0x03414d05, 0x03414d05, 0x00000000, 0x00000000}, + {0x0000a620, 0x04010303, 0x04010303, 0x03c10303, 0x03c10303, 0x03810303, 0x03810303, 0x00000000, 0x00000000}, + {0x0000a624, 0x03814e05, 0x03814e05, 0x03414d05, 0x03414d05, 0x03414d05, 0x03414d05, 0x03014000, 0x03014000}, + {0x0000a628, 0x00c0c000, 0x00c0c000, 0x00c0c000, 0x00c0c000, 0x00c0c000, 0x00c0c000, 0x03804c05, 0x03804c05}, + {0x0000a62c, 0x00c0c303, 
0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x00c0c303, 0x0701de06, 0x0701de06}, + {0x0000a630, 0x03418000, 0x03418000, 0x03018000, 0x03018000, 0x02c18000, 0x02c18000, 0x07819c07, 0x07819c07}, + {0x0000a634, 0x03815004, 0x03815004, 0x03414f04, 0x03414f04, 0x03414e04, 0x03414e04, 0x0701dc07, 0x0701dc07}, + {0x0000a638, 0x03005302, 0x03005302, 0x02c05202, 0x02c05202, 0x02805202, 0x02805202, 0x0701dc07, 0x0701dc07}, + {0x0000a63c, 0x04c09302, 0x04c09302, 0x04809202, 0x04809202, 0x04809202, 0x04809202, 0x0701dc07, 0x0701dc07}, + {0x0000b2dc, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xfffd5aaa, 0xfffd5aaa}, + {0x0000b2e0, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000b2e4, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xffffe0f0, 0xffffe0f0}, + {0x0000b2e8, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xfffcff00, 0xfffcff00}, + {0x0000c2dc, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xffffaaaa, 0xfffd5aaa, 0xfffd5aaa}, + {0x0000c2e0, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xffffcccc, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000c2e4, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xfffff0f0, 0xffffe0f0, 0xffffe0f0}, + {0x0000c2e8, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00, 0xfffcff00, 0xfffcff00}, + {0x00016044, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x010002d4, 0x010002d4}, + {0x00016048, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x66482401, 0x66482401}, + {0x00016280, 0x01801e84, 0x01801e84, 0x01801e84, 0x01801e84, 0x01801e84, 0x01801e84, 0x01808e84, 0x01808e84}, + {0x00016444, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x010002d4, 0x010002d4}, + {0x00016448, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x66482401, 0x66482401}, + {0x00016844, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x010002d4, 0x010002d4}, + {0x00016848, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x62482401, 0x66482401, 0x66482401}, +}; + +static const u32 ar955x_1p0_mac_core[][2] = { + /* Addr allmodes */ + {0x00000008, 0x00000000}, + {0x00000030, 0x00020085}, + {0x00000034, 0x00000005}, + {0x00000040, 0x00000000}, + {0x00000044, 0x00000000}, + {0x00000048, 0x00000008}, + {0x0000004c, 0x00000010}, + {0x00000050, 0x00000000}, + {0x00001040, 0x002ffc0f}, + {0x00001044, 0x002ffc0f}, + {0x00001048, 0x002ffc0f}, + {0x0000104c, 0x002ffc0f}, + {0x00001050, 0x002ffc0f}, + {0x00001054, 0x002ffc0f}, + {0x00001058, 0x002ffc0f}, + {0x0000105c, 0x002ffc0f}, + {0x00001060, 0x002ffc0f}, + {0x00001064, 0x002ffc0f}, + {0x000010f0, 0x00000100}, + {0x00001270, 0x00000000}, + {0x000012b0, 0x00000000}, + {0x000012f0, 0x00000000}, + {0x0000143c, 0x00000000}, + {0x0000147c, 0x00000000}, + {0x00008000, 0x00000000}, + {0x00008004, 0x00000000}, + {0x00008008, 0x00000000}, + {0x0000800c, 0x00000000}, + {0x00008018, 0x00000000}, + {0x00008020, 0x00000000}, + {0x00008038, 0x00000000}, + {0x0000803c, 0x00000000}, + {0x00008040, 0x00000000}, + {0x00008044, 0x00000000}, + {0x00008048, 0x00000000}, + {0x0000804c, 0xffffffff}, + {0x00008054, 0x00000000}, + {0x00008058, 0x00000000}, + {0x0000805c, 0x000fc78f}, + {0x00008060, 0x0000000f}, + {0x00008064, 0x00000000}, + {0x00008070, 0x00000310}, + {0x00008074, 0x00000020}, + {0x00008078, 0x00000000}, + {0x0000809c, 0x0000000f}, + 
{0x000080a0, 0x00000000}, + {0x000080a4, 0x02ff0000}, + {0x000080a8, 0x0e070605}, + {0x000080ac, 0x0000000d}, + {0x000080b0, 0x00000000}, + {0x000080b4, 0x00000000}, + {0x000080b8, 0x00000000}, + {0x000080bc, 0x00000000}, + {0x000080c0, 0x2a800000}, + {0x000080c4, 0x06900168}, + {0x000080c8, 0x13881c22}, + {0x000080cc, 0x01f40000}, + {0x000080d0, 0x00252500}, + {0x000080d4, 0x00a00000}, + {0x000080d8, 0x00400000}, + {0x000080dc, 0x00000000}, + {0x000080e0, 0xffffffff}, + {0x000080e4, 0x0000ffff}, + {0x000080e8, 0x3f3f3f3f}, + {0x000080ec, 0x00000000}, + {0x000080f0, 0x00000000}, + {0x000080f4, 0x00000000}, + {0x000080fc, 0x00020000}, + {0x00008100, 0x00000000}, + {0x00008108, 0x00000052}, + {0x0000810c, 0x00000000}, + {0x00008110, 0x00000000}, + {0x00008114, 0x000007ff}, + {0x00008118, 0x000000aa}, + {0x0000811c, 0x00003210}, + {0x00008124, 0x00000000}, + {0x00008128, 0x00000000}, + {0x0000812c, 0x00000000}, + {0x00008130, 0x00000000}, + {0x00008134, 0x00000000}, + {0x00008138, 0x00000000}, + {0x0000813c, 0x0000ffff}, + {0x00008140, 0x000000fe}, + {0x00008144, 0xffffffff}, + {0x00008168, 0x00000000}, + {0x0000816c, 0x00000000}, + {0x000081c0, 0x00000000}, + {0x000081c4, 0x33332210}, + {0x000081ec, 0x00000000}, + {0x000081f0, 0x00000000}, + {0x000081f4, 0x00000000}, + {0x000081f8, 0x00000000}, + {0x000081fc, 0x00000000}, + {0x00008240, 0x00100000}, + {0x00008244, 0x0010f400}, + {0x00008248, 0x00000800}, + {0x0000824c, 0x0001e800}, + {0x00008250, 0x00000000}, + {0x00008254, 0x00000000}, + {0x00008258, 0x00000000}, + {0x0000825c, 0x40000000}, + {0x00008260, 0x00080922}, + {0x00008264, 0x9d400010}, + {0x00008268, 0xffffffff}, + {0x0000826c, 0x0000ffff}, + {0x00008270, 0x00000000}, + {0x00008274, 0x40000000}, + {0x00008278, 0x003e4180}, + {0x0000827c, 0x00000004}, + {0x00008284, 0x0000002c}, + {0x00008288, 0x0000002c}, + {0x0000828c, 0x000000ff}, + {0x00008294, 0x00000000}, + {0x00008298, 0x00000000}, + {0x0000829c, 0x00000000}, + {0x00008300, 0x00001d40}, + {0x00008314, 0x00000000}, + {0x0000831c, 0x0000010d}, + {0x00008328, 0x00000000}, + {0x0000832c, 0x0000001f}, + {0x00008330, 0x00000302}, + {0x00008334, 0x00000700}, + {0x00008338, 0xffff0000}, + {0x0000833c, 0x02400000}, + {0x00008340, 0x000107ff}, + {0x00008344, 0xaa48107b}, + {0x00008348, 0x008f0000}, + {0x0000835c, 0x00000000}, + {0x00008360, 0xffffffff}, + {0x00008364, 0xffffffff}, + {0x00008368, 0x00000000}, + {0x00008370, 0x00000000}, + {0x00008374, 0x000000ff}, + {0x00008378, 0x00000000}, + {0x0000837c, 0x00000000}, + {0x00008380, 0xffffffff}, + {0x00008384, 0xffffffff}, + {0x00008390, 0xffffffff}, + {0x00008394, 0xffffffff}, + {0x00008398, 0x00000000}, + {0x0000839c, 0x00000000}, + {0x000083a0, 0x00000000}, + {0x000083a4, 0x0000fa14}, + {0x000083a8, 0x000f0c00}, + {0x000083ac, 0x33332210}, + {0x000083b0, 0x33332210}, + {0x000083b4, 0x33332210}, + {0x000083b8, 0x33332210}, + {0x000083bc, 0x00000000}, + {0x000083c0, 0x00000000}, + {0x000083c4, 0x00000000}, + {0x000083c8, 0x00000000}, + {0x000083cc, 0x00000200}, + {0x000083d0, 0x8c7901ff}, +}; + +static const u32 ar955x_1p0_common_rx_gain_table[][2] = { + /* Addr allmodes */ + {0x0000a000, 0x00010000}, + {0x0000a004, 0x00030002}, + {0x0000a008, 0x00050004}, + {0x0000a00c, 0x00810080}, + {0x0000a010, 0x00830082}, + {0x0000a014, 0x01810180}, + {0x0000a018, 0x01830182}, + {0x0000a01c, 0x01850184}, + {0x0000a020, 0x01890188}, + {0x0000a024, 0x018b018a}, + {0x0000a028, 0x018d018c}, + {0x0000a02c, 0x01910190}, + {0x0000a030, 0x01930192}, + {0x0000a034, 0x01950194}, + {0x0000a038, 
0x038a0196}, + {0x0000a03c, 0x038c038b}, + {0x0000a040, 0x0390038d}, + {0x0000a044, 0x03920391}, + {0x0000a048, 0x03940393}, + {0x0000a04c, 0x03960395}, + {0x0000a050, 0x00000000}, + {0x0000a054, 0x00000000}, + {0x0000a058, 0x00000000}, + {0x0000a05c, 0x00000000}, + {0x0000a060, 0x00000000}, + {0x0000a064, 0x00000000}, + {0x0000a068, 0x00000000}, + {0x0000a06c, 0x00000000}, + {0x0000a070, 0x00000000}, + {0x0000a074, 0x00000000}, + {0x0000a078, 0x00000000}, + {0x0000a07c, 0x00000000}, + {0x0000a080, 0x22222229}, + {0x0000a084, 0x1d1d1d1d}, + {0x0000a088, 0x1d1d1d1d}, + {0x0000a08c, 0x1d1d1d1d}, + {0x0000a090, 0x171d1d1d}, + {0x0000a094, 0x11111717}, + {0x0000a098, 0x00030311}, + {0x0000a09c, 0x00000000}, + {0x0000a0a0, 0x00000000}, + {0x0000a0a4, 0x00000000}, + {0x0000a0a8, 0x00000000}, + {0x0000a0ac, 0x00000000}, + {0x0000a0b0, 0x00000000}, + {0x0000a0b4, 0x00000000}, + {0x0000a0b8, 0x00000000}, + {0x0000a0bc, 0x00000000}, + {0x0000a0c0, 0x001f0000}, + {0x0000a0c4, 0x01000101}, + {0x0000a0c8, 0x011e011f}, + {0x0000a0cc, 0x011c011d}, + {0x0000a0d0, 0x02030204}, + {0x0000a0d4, 0x02010202}, + {0x0000a0d8, 0x021f0200}, + {0x0000a0dc, 0x0302021e}, + {0x0000a0e0, 0x03000301}, + {0x0000a0e4, 0x031e031f}, + {0x0000a0e8, 0x0402031d}, + {0x0000a0ec, 0x04000401}, + {0x0000a0f0, 0x041e041f}, + {0x0000a0f4, 0x0502041d}, + {0x0000a0f8, 0x05000501}, + {0x0000a0fc, 0x051e051f}, + {0x0000a100, 0x06010602}, + {0x0000a104, 0x061f0600}, + {0x0000a108, 0x061d061e}, + {0x0000a10c, 0x07020703}, + {0x0000a110, 0x07000701}, + {0x0000a114, 0x00000000}, + {0x0000a118, 0x00000000}, + {0x0000a11c, 0x00000000}, + {0x0000a120, 0x00000000}, + {0x0000a124, 0x00000000}, + {0x0000a128, 0x00000000}, + {0x0000a12c, 0x00000000}, + {0x0000a130, 0x00000000}, + {0x0000a134, 0x00000000}, + {0x0000a138, 0x00000000}, + {0x0000a13c, 0x00000000}, + {0x0000a140, 0x001f0000}, + {0x0000a144, 0x01000101}, + {0x0000a148, 0x011e011f}, + {0x0000a14c, 0x011c011d}, + {0x0000a150, 0x02030204}, + {0x0000a154, 0x02010202}, + {0x0000a158, 0x021f0200}, + {0x0000a15c, 0x0302021e}, + {0x0000a160, 0x03000301}, + {0x0000a164, 0x031e031f}, + {0x0000a168, 0x0402031d}, + {0x0000a16c, 0x04000401}, + {0x0000a170, 0x041e041f}, + {0x0000a174, 0x0502041d}, + {0x0000a178, 0x05000501}, + {0x0000a17c, 0x051e051f}, + {0x0000a180, 0x06010602}, + {0x0000a184, 0x061f0600}, + {0x0000a188, 0x061d061e}, + {0x0000a18c, 0x07020703}, + {0x0000a190, 0x07000701}, + {0x0000a194, 0x00000000}, + {0x0000a198, 0x00000000}, + {0x0000a19c, 0x00000000}, + {0x0000a1a0, 0x00000000}, + {0x0000a1a4, 0x00000000}, + {0x0000a1a8, 0x00000000}, + {0x0000a1ac, 0x00000000}, + {0x0000a1b0, 0x00000000}, + {0x0000a1b4, 0x00000000}, + {0x0000a1b8, 0x00000000}, + {0x0000a1bc, 0x00000000}, + {0x0000a1c0, 0x00000000}, + {0x0000a1c4, 0x00000000}, + {0x0000a1c8, 0x00000000}, + {0x0000a1cc, 0x00000000}, + {0x0000a1d0, 0x00000000}, + {0x0000a1d4, 0x00000000}, + {0x0000a1d8, 0x00000000}, + {0x0000a1dc, 0x00000000}, + {0x0000a1e0, 0x00000000}, + {0x0000a1e4, 0x00000000}, + {0x0000a1e8, 0x00000000}, + {0x0000a1ec, 0x00000000}, + {0x0000a1f0, 0x00000396}, + {0x0000a1f4, 0x00000396}, + {0x0000a1f8, 0x00000396}, + {0x0000a1fc, 0x00000196}, + {0x0000b000, 0x00010000}, + {0x0000b004, 0x00030002}, + {0x0000b008, 0x00050004}, + {0x0000b00c, 0x00810080}, + {0x0000b010, 0x00830082}, + {0x0000b014, 0x01810180}, + {0x0000b018, 0x01830182}, + {0x0000b01c, 0x01850184}, + {0x0000b020, 0x02810280}, + {0x0000b024, 0x02830282}, + {0x0000b028, 0x02850284}, + {0x0000b02c, 0x02890288}, + {0x0000b030, 0x028b028a}, + 
{0x0000b034, 0x0388028c}, + {0x0000b038, 0x038a0389}, + {0x0000b03c, 0x038c038b}, + {0x0000b040, 0x0390038d}, + {0x0000b044, 0x03920391}, + {0x0000b048, 0x03940393}, + {0x0000b04c, 0x03960395}, + {0x0000b050, 0x00000000}, + {0x0000b054, 0x00000000}, + {0x0000b058, 0x00000000}, + {0x0000b05c, 0x00000000}, + {0x0000b060, 0x00000000}, + {0x0000b064, 0x00000000}, + {0x0000b068, 0x00000000}, + {0x0000b06c, 0x00000000}, + {0x0000b070, 0x00000000}, + {0x0000b074, 0x00000000}, + {0x0000b078, 0x00000000}, + {0x0000b07c, 0x00000000}, + {0x0000b080, 0x23232323}, + {0x0000b084, 0x21232323}, + {0x0000b088, 0x19191c1e}, + {0x0000b08c, 0x12141417}, + {0x0000b090, 0x07070e0e}, + {0x0000b094, 0x03030305}, + {0x0000b098, 0x00000003}, + {0x0000b09c, 0x00000000}, + {0x0000b0a0, 0x00000000}, + {0x0000b0a4, 0x00000000}, + {0x0000b0a8, 0x00000000}, + {0x0000b0ac, 0x00000000}, + {0x0000b0b0, 0x00000000}, + {0x0000b0b4, 0x00000000}, + {0x0000b0b8, 0x00000000}, + {0x0000b0bc, 0x00000000}, + {0x0000b0c0, 0x003f0020}, + {0x0000b0c4, 0x00400041}, + {0x0000b0c8, 0x0140005f}, + {0x0000b0cc, 0x0160015f}, + {0x0000b0d0, 0x017e017f}, + {0x0000b0d4, 0x02410242}, + {0x0000b0d8, 0x025f0240}, + {0x0000b0dc, 0x027f0260}, + {0x0000b0e0, 0x0341027e}, + {0x0000b0e4, 0x035f0340}, + {0x0000b0e8, 0x037f0360}, + {0x0000b0ec, 0x04400441}, + {0x0000b0f0, 0x0460045f}, + {0x0000b0f4, 0x0541047f}, + {0x0000b0f8, 0x055f0540}, + {0x0000b0fc, 0x057f0560}, + {0x0000b100, 0x06400641}, + {0x0000b104, 0x0660065f}, + {0x0000b108, 0x067e067f}, + {0x0000b10c, 0x07410742}, + {0x0000b110, 0x075f0740}, + {0x0000b114, 0x077f0760}, + {0x0000b118, 0x07800781}, + {0x0000b11c, 0x07a0079f}, + {0x0000b120, 0x07c107bf}, + {0x0000b124, 0x000007c0}, + {0x0000b128, 0x00000000}, + {0x0000b12c, 0x00000000}, + {0x0000b130, 0x00000000}, + {0x0000b134, 0x00000000}, + {0x0000b138, 0x00000000}, + {0x0000b13c, 0x00000000}, + {0x0000b140, 0x003f0020}, + {0x0000b144, 0x00400041}, + {0x0000b148, 0x0140005f}, + {0x0000b14c, 0x0160015f}, + {0x0000b150, 0x017e017f}, + {0x0000b154, 0x02410242}, + {0x0000b158, 0x025f0240}, + {0x0000b15c, 0x027f0260}, + {0x0000b160, 0x0341027e}, + {0x0000b164, 0x035f0340}, + {0x0000b168, 0x037f0360}, + {0x0000b16c, 0x04400441}, + {0x0000b170, 0x0460045f}, + {0x0000b174, 0x0541047f}, + {0x0000b178, 0x055f0540}, + {0x0000b17c, 0x057f0560}, + {0x0000b180, 0x06400641}, + {0x0000b184, 0x0660065f}, + {0x0000b188, 0x067e067f}, + {0x0000b18c, 0x07410742}, + {0x0000b190, 0x075f0740}, + {0x0000b194, 0x077f0760}, + {0x0000b198, 0x07800781}, + {0x0000b19c, 0x07a0079f}, + {0x0000b1a0, 0x07c107bf}, + {0x0000b1a4, 0x000007c0}, + {0x0000b1a8, 0x00000000}, + {0x0000b1ac, 0x00000000}, + {0x0000b1b0, 0x00000000}, + {0x0000b1b4, 0x00000000}, + {0x0000b1b8, 0x00000000}, + {0x0000b1bc, 0x00000000}, + {0x0000b1c0, 0x00000000}, + {0x0000b1c4, 0x00000000}, + {0x0000b1c8, 0x00000000}, + {0x0000b1cc, 0x00000000}, + {0x0000b1d0, 0x00000000}, + {0x0000b1d4, 0x00000000}, + {0x0000b1d8, 0x00000000}, + {0x0000b1dc, 0x00000000}, + {0x0000b1e0, 0x00000000}, + {0x0000b1e4, 0x00000000}, + {0x0000b1e8, 0x00000000}, + {0x0000b1ec, 0x00000000}, + {0x0000b1f0, 0x00000396}, + {0x0000b1f4, 0x00000396}, + {0x0000b1f8, 0x00000396}, + {0x0000b1fc, 0x00000196}, +}; + +static const u32 ar955x_1p0_baseband_core[][2] = { + /* Addr allmodes */ + {0x00009800, 0xafe68e30}, + {0x00009804, 0xfd14e000}, + {0x00009808, 0x9c0a9f6b}, + {0x0000980c, 0x04900000}, + {0x00009814, 0x0280c00a}, + {0x00009818, 0x00000000}, + {0x0000981c, 0x00020028}, + {0x00009834, 0x6400a190}, + {0x00009838, 0x0108ecff}, + 
{0x0000983c, 0x14000600}, + {0x00009880, 0x201fff00}, + {0x00009884, 0x00001042}, + {0x000098a4, 0x00200400}, + {0x000098b0, 0x32840bbe}, + {0x000098bc, 0x00000002}, + {0x000098d0, 0x004b6a8e}, + {0x000098d4, 0x00000820}, + {0x000098dc, 0x00000000}, + {0x000098f0, 0x00000000}, + {0x000098f4, 0x00000000}, + {0x00009c04, 0xff55ff55}, + {0x00009c08, 0x0320ff55}, + {0x00009c0c, 0x00000000}, + {0x00009c10, 0x00000000}, + {0x00009c14, 0x00046384}, + {0x00009c18, 0x05b6b440}, + {0x00009c1c, 0x00b6b440}, + {0x00009d00, 0xc080a333}, + {0x00009d04, 0x40206c10}, + {0x00009d08, 0x009c4060}, + {0x00009d0c, 0x9883800a}, + {0x00009d10, 0x01834061}, + {0x00009d14, 0x00c0040b}, + {0x00009d18, 0x00000000}, + {0x00009e08, 0x0038230c}, + {0x00009e24, 0x990bb515}, + {0x00009e28, 0x0c6f0000}, + {0x00009e30, 0x06336f77}, + {0x00009e34, 0x6af6532f}, + {0x00009e38, 0x0cc80c00}, + {0x00009e40, 0x0d261820}, + {0x00009e4c, 0x00001004}, + {0x00009e50, 0x00ff03f1}, + {0x00009fc0, 0x813e4788}, + {0x00009fc4, 0x0001efb5}, + {0x00009fcc, 0x40000014}, + {0x00009fd0, 0x01193b93}, + {0x0000a20c, 0x00000000}, + {0x0000a220, 0x00000000}, + {0x0000a224, 0x00000000}, + {0x0000a228, 0x10002310}, + {0x0000a23c, 0x00000000}, + {0x0000a244, 0x0c000000}, + {0x0000a248, 0x00000140}, + {0x0000a2a0, 0x00000007}, + {0x0000a2c0, 0x00000007}, + {0x0000a2c8, 0x00000000}, + {0x0000a2d4, 0x00000000}, + {0x0000a2ec, 0x00000000}, + {0x0000a2f0, 0x00000000}, + {0x0000a2f4, 0x00000000}, + {0x0000a2f8, 0x00000000}, + {0x0000a344, 0x00000000}, + {0x0000a34c, 0x00000000}, + {0x0000a350, 0x0000a000}, + {0x0000a364, 0x00000000}, + {0x0000a370, 0x00000000}, + {0x0000a390, 0x00000001}, + {0x0000a394, 0x00000444}, + {0x0000a398, 0x1f020503}, + {0x0000a39c, 0x29180c03}, + {0x0000a3a0, 0x9a8b6844}, + {0x0000a3a4, 0x00000000}, + {0x0000a3a8, 0xaaaaaaaa}, + {0x0000a3ac, 0x3c466478}, + {0x0000a3c0, 0x20202020}, + {0x0000a3c4, 0x22222220}, + {0x0000a3c8, 0x20200020}, + {0x0000a3cc, 0x20202020}, + {0x0000a3d0, 0x20202020}, + {0x0000a3d4, 0x20202020}, + {0x0000a3d8, 0x20202020}, + {0x0000a3dc, 0x20202020}, + {0x0000a3e0, 0x20202020}, + {0x0000a3e4, 0x20202020}, + {0x0000a3e8, 0x20202020}, + {0x0000a3ec, 0x20202020}, + {0x0000a3f0, 0x00000000}, + {0x0000a3f4, 0x00000000}, + {0x0000a3f8, 0x0c9bd380}, + {0x0000a3fc, 0x000f0f01}, + {0x0000a400, 0x8fa91f01}, + {0x0000a404, 0x00000000}, + {0x0000a408, 0x0e79e5c6}, + {0x0000a40c, 0x00820820}, + {0x0000a414, 0x1ce739ce}, + {0x0000a418, 0x2d001dce}, + {0x0000a41c, 0x1ce739ce}, + {0x0000a420, 0x000001ce}, + {0x0000a424, 0x1ce739ce}, + {0x0000a428, 0x000001ce}, + {0x0000a42c, 0x1ce739ce}, + {0x0000a430, 0x1ce739ce}, + {0x0000a434, 0x00000000}, + {0x0000a438, 0x00001801}, + {0x0000a43c, 0x00100000}, + {0x0000a444, 0x00000000}, + {0x0000a448, 0x05000080}, + {0x0000a44c, 0x00000001}, + {0x0000a450, 0x00010000}, + {0x0000a458, 0x00000000}, + {0x0000a644, 0x3fad9d74}, + {0x0000a648, 0x0048060a}, + {0x0000a64c, 0x00003c37}, + {0x0000a670, 0x03020100}, + {0x0000a674, 0x09080504}, + {0x0000a678, 0x0d0c0b0a}, + {0x0000a67c, 0x13121110}, + {0x0000a680, 0x31301514}, + {0x0000a684, 0x35343332}, + {0x0000a688, 0x00000036}, + {0x0000a690, 0x00000838}, + {0x0000a7cc, 0x00000000}, + {0x0000a7d0, 0x00000000}, + {0x0000a7d4, 0x00000004}, + {0x0000a7dc, 0x00000000}, + {0x0000a8d0, 0x004b6a8e}, + {0x0000a8d4, 0x00000820}, + {0x0000a8dc, 0x00000000}, + {0x0000a8f0, 0x00000000}, + {0x0000a8f4, 0x00000000}, + {0x0000b2d0, 0x00000080}, + {0x0000b2d4, 0x00000000}, + {0x0000b2ec, 0x00000000}, + {0x0000b2f0, 0x00000000}, + {0x0000b2f4, 0x00000000}, 
+ {0x0000b2f8, 0x00000000}, + {0x0000b408, 0x0e79e5c0}, + {0x0000b40c, 0x00820820}, + {0x0000b420, 0x00000000}, + {0x0000b8d0, 0x004b6a8e}, + {0x0000b8d4, 0x00000820}, + {0x0000b8dc, 0x00000000}, + {0x0000b8f0, 0x00000000}, + {0x0000b8f4, 0x00000000}, + {0x0000c2d0, 0x00000080}, + {0x0000c2d4, 0x00000000}, + {0x0000c2ec, 0x00000000}, + {0x0000c2f0, 0x00000000}, + {0x0000c2f4, 0x00000000}, + {0x0000c2f8, 0x00000000}, + {0x0000c408, 0x0e79e5c0}, + {0x0000c40c, 0x00820820}, + {0x0000c420, 0x00000000}, +}; + +static const u32 ar955x_1p0_common_wo_xlna_rx_gain_table[][2] = { + /* Addr allmodes */ + {0x0000a000, 0x00010000}, + {0x0000a004, 0x00030002}, + {0x0000a008, 0x00050004}, + {0x0000a00c, 0x00810080}, + {0x0000a010, 0x00830082}, + {0x0000a014, 0x01810180}, + {0x0000a018, 0x01830182}, + {0x0000a01c, 0x01850184}, + {0x0000a020, 0x01890188}, + {0x0000a024, 0x018b018a}, + {0x0000a028, 0x018d018c}, + {0x0000a02c, 0x03820190}, + {0x0000a030, 0x03840383}, + {0x0000a034, 0x03880385}, + {0x0000a038, 0x038a0389}, + {0x0000a03c, 0x038c038b}, + {0x0000a040, 0x0390038d}, + {0x0000a044, 0x03920391}, + {0x0000a048, 0x03940393}, + {0x0000a04c, 0x03960395}, + {0x0000a050, 0x00000000}, + {0x0000a054, 0x00000000}, + {0x0000a058, 0x00000000}, + {0x0000a05c, 0x00000000}, + {0x0000a060, 0x00000000}, + {0x0000a064, 0x00000000}, + {0x0000a068, 0x00000000}, + {0x0000a06c, 0x00000000}, + {0x0000a070, 0x00000000}, + {0x0000a074, 0x00000000}, + {0x0000a078, 0x00000000}, + {0x0000a07c, 0x00000000}, + {0x0000a080, 0x29292929}, + {0x0000a084, 0x29292929}, + {0x0000a088, 0x29292929}, + {0x0000a08c, 0x29292929}, + {0x0000a090, 0x22292929}, + {0x0000a094, 0x1d1d2222}, + {0x0000a098, 0x0c111117}, + {0x0000a09c, 0x00030303}, + {0x0000a0a0, 0x00000000}, + {0x0000a0a4, 0x00000000}, + {0x0000a0a8, 0x00000000}, + {0x0000a0ac, 0x00000000}, + {0x0000a0b0, 0x00000000}, + {0x0000a0b4, 0x00000000}, + {0x0000a0b8, 0x00000000}, + {0x0000a0bc, 0x00000000}, + {0x0000a0c0, 0x001f0000}, + {0x0000a0c4, 0x01000101}, + {0x0000a0c8, 0x011e011f}, + {0x0000a0cc, 0x011c011d}, + {0x0000a0d0, 0x02030204}, + {0x0000a0d4, 0x02010202}, + {0x0000a0d8, 0x021f0200}, + {0x0000a0dc, 0x0302021e}, + {0x0000a0e0, 0x03000301}, + {0x0000a0e4, 0x031e031f}, + {0x0000a0e8, 0x0402031d}, + {0x0000a0ec, 0x04000401}, + {0x0000a0f0, 0x041e041f}, + {0x0000a0f4, 0x0502041d}, + {0x0000a0f8, 0x05000501}, + {0x0000a0fc, 0x051e051f}, + {0x0000a100, 0x06010602}, + {0x0000a104, 0x061f0600}, + {0x0000a108, 0x061d061e}, + {0x0000a10c, 0x07020703}, + {0x0000a110, 0x07000701}, + {0x0000a114, 0x00000000}, + {0x0000a118, 0x00000000}, + {0x0000a11c, 0x00000000}, + {0x0000a120, 0x00000000}, + {0x0000a124, 0x00000000}, + {0x0000a128, 0x00000000}, + {0x0000a12c, 0x00000000}, + {0x0000a130, 0x00000000}, + {0x0000a134, 0x00000000}, + {0x0000a138, 0x00000000}, + {0x0000a13c, 0x00000000}, + {0x0000a140, 0x001f0000}, + {0x0000a144, 0x01000101}, + {0x0000a148, 0x011e011f}, + {0x0000a14c, 0x011c011d}, + {0x0000a150, 0x02030204}, + {0x0000a154, 0x02010202}, + {0x0000a158, 0x021f0200}, + {0x0000a15c, 0x0302021e}, + {0x0000a160, 0x03000301}, + {0x0000a164, 0x031e031f}, + {0x0000a168, 0x0402031d}, + {0x0000a16c, 0x04000401}, + {0x0000a170, 0x041e041f}, + {0x0000a174, 0x0502041d}, + {0x0000a178, 0x05000501}, + {0x0000a17c, 0x051e051f}, + {0x0000a180, 0x06010602}, + {0x0000a184, 0x061f0600}, + {0x0000a188, 0x061d061e}, + {0x0000a18c, 0x07020703}, + {0x0000a190, 0x07000701}, + {0x0000a194, 0x00000000}, + {0x0000a198, 0x00000000}, + {0x0000a19c, 0x00000000}, + {0x0000a1a0, 0x00000000}, + {0x0000a1a4, 
0x00000000}, + {0x0000a1a8, 0x00000000}, + {0x0000a1ac, 0x00000000}, + {0x0000a1b0, 0x00000000}, + {0x0000a1b4, 0x00000000}, + {0x0000a1b8, 0x00000000}, + {0x0000a1bc, 0x00000000}, + {0x0000a1c0, 0x00000000}, + {0x0000a1c4, 0x00000000}, + {0x0000a1c8, 0x00000000}, + {0x0000a1cc, 0x00000000}, + {0x0000a1d0, 0x00000000}, + {0x0000a1d4, 0x00000000}, + {0x0000a1d8, 0x00000000}, + {0x0000a1dc, 0x00000000}, + {0x0000a1e0, 0x00000000}, + {0x0000a1e4, 0x00000000}, + {0x0000a1e8, 0x00000000}, + {0x0000a1ec, 0x00000000}, + {0x0000a1f0, 0x00000396}, + {0x0000a1f4, 0x00000396}, + {0x0000a1f8, 0x00000396}, + {0x0000a1fc, 0x00000196}, + {0x0000b000, 0x00010000}, + {0x0000b004, 0x00030002}, + {0x0000b008, 0x00050004}, + {0x0000b00c, 0x00810080}, + {0x0000b010, 0x00830082}, + {0x0000b014, 0x01810180}, + {0x0000b018, 0x01830182}, + {0x0000b01c, 0x01850184}, + {0x0000b020, 0x02810280}, + {0x0000b024, 0x02830282}, + {0x0000b028, 0x02850284}, + {0x0000b02c, 0x02890288}, + {0x0000b030, 0x028b028a}, + {0x0000b034, 0x0388028c}, + {0x0000b038, 0x038a0389}, + {0x0000b03c, 0x038c038b}, + {0x0000b040, 0x0390038d}, + {0x0000b044, 0x03920391}, + {0x0000b048, 0x03940393}, + {0x0000b04c, 0x03960395}, + {0x0000b050, 0x00000000}, + {0x0000b054, 0x00000000}, + {0x0000b058, 0x00000000}, + {0x0000b05c, 0x00000000}, + {0x0000b060, 0x00000000}, + {0x0000b064, 0x00000000}, + {0x0000b068, 0x00000000}, + {0x0000b06c, 0x00000000}, + {0x0000b070, 0x00000000}, + {0x0000b074, 0x00000000}, + {0x0000b078, 0x00000000}, + {0x0000b07c, 0x00000000}, + {0x0000b080, 0x32323232}, + {0x0000b084, 0x2f2f3232}, + {0x0000b088, 0x23282a2d}, + {0x0000b08c, 0x1c1e2123}, + {0x0000b090, 0x14171919}, + {0x0000b094, 0x0e0e1214}, + {0x0000b098, 0x03050707}, + {0x0000b09c, 0x00030303}, + {0x0000b0a0, 0x00000000}, + {0x0000b0a4, 0x00000000}, + {0x0000b0a8, 0x00000000}, + {0x0000b0ac, 0x00000000}, + {0x0000b0b0, 0x00000000}, + {0x0000b0b4, 0x00000000}, + {0x0000b0b8, 0x00000000}, + {0x0000b0bc, 0x00000000}, + {0x0000b0c0, 0x003f0020}, + {0x0000b0c4, 0x00400041}, + {0x0000b0c8, 0x0140005f}, + {0x0000b0cc, 0x0160015f}, + {0x0000b0d0, 0x017e017f}, + {0x0000b0d4, 0x02410242}, + {0x0000b0d8, 0x025f0240}, + {0x0000b0dc, 0x027f0260}, + {0x0000b0e0, 0x0341027e}, + {0x0000b0e4, 0x035f0340}, + {0x0000b0e8, 0x037f0360}, + {0x0000b0ec, 0x04400441}, + {0x0000b0f0, 0x0460045f}, + {0x0000b0f4, 0x0541047f}, + {0x0000b0f8, 0x055f0540}, + {0x0000b0fc, 0x057f0560}, + {0x0000b100, 0x06400641}, + {0x0000b104, 0x0660065f}, + {0x0000b108, 0x067e067f}, + {0x0000b10c, 0x07410742}, + {0x0000b110, 0x075f0740}, + {0x0000b114, 0x077f0760}, + {0x0000b118, 0x07800781}, + {0x0000b11c, 0x07a0079f}, + {0x0000b120, 0x07c107bf}, + {0x0000b124, 0x000007c0}, + {0x0000b128, 0x00000000}, + {0x0000b12c, 0x00000000}, + {0x0000b130, 0x00000000}, + {0x0000b134, 0x00000000}, + {0x0000b138, 0x00000000}, + {0x0000b13c, 0x00000000}, + {0x0000b140, 0x003f0020}, + {0x0000b144, 0x00400041}, + {0x0000b148, 0x0140005f}, + {0x0000b14c, 0x0160015f}, + {0x0000b150, 0x017e017f}, + {0x0000b154, 0x02410242}, + {0x0000b158, 0x025f0240}, + {0x0000b15c, 0x027f0260}, + {0x0000b160, 0x0341027e}, + {0x0000b164, 0x035f0340}, + {0x0000b168, 0x037f0360}, + {0x0000b16c, 0x04400441}, + {0x0000b170, 0x0460045f}, + {0x0000b174, 0x0541047f}, + {0x0000b178, 0x055f0540}, + {0x0000b17c, 0x057f0560}, + {0x0000b180, 0x06400641}, + {0x0000b184, 0x0660065f}, + {0x0000b188, 0x067e067f}, + {0x0000b18c, 0x07410742}, + {0x0000b190, 0x075f0740}, + {0x0000b194, 0x077f0760}, + {0x0000b198, 0x07800781}, + {0x0000b19c, 0x07a0079f}, + 
{0x0000b1a0, 0x07c107bf}, + {0x0000b1a4, 0x000007c0}, + {0x0000b1a8, 0x00000000}, + {0x0000b1ac, 0x00000000}, + {0x0000b1b0, 0x00000000}, + {0x0000b1b4, 0x00000000}, + {0x0000b1b8, 0x00000000}, + {0x0000b1bc, 0x00000000}, + {0x0000b1c0, 0x00000000}, + {0x0000b1c4, 0x00000000}, + {0x0000b1c8, 0x00000000}, + {0x0000b1cc, 0x00000000}, + {0x0000b1d0, 0x00000000}, + {0x0000b1d4, 0x00000000}, + {0x0000b1d8, 0x00000000}, + {0x0000b1dc, 0x00000000}, + {0x0000b1e0, 0x00000000}, + {0x0000b1e4, 0x00000000}, + {0x0000b1e8, 0x00000000}, + {0x0000b1ec, 0x00000000}, + {0x0000b1f0, 0x00000396}, + {0x0000b1f4, 0x00000396}, + {0x0000b1f8, 0x00000396}, + {0x0000b1fc, 0x00000196}, +}; + +static const u32 ar955x_1p0_soc_preamble[][2] = { + /* Addr allmodes */ + {0x00007000, 0x00000000}, + {0x00007004, 0x00000000}, + {0x00007008, 0x00000000}, + {0x0000700c, 0x00000000}, + {0x0000701c, 0x00000000}, + {0x00007020, 0x00000000}, + {0x00007024, 0x00000000}, + {0x00007028, 0x00000000}, + {0x0000702c, 0x00000000}, + {0x00007030, 0x00000000}, + {0x00007034, 0x00000002}, + {0x00007038, 0x000004c2}, + {0x00007048, 0x00000000}, +}; + +static const u32 ar955x_1p0_common_wo_xlna_rx_gain_bounds[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00009e44, 0xfe321e27, 0xfe321e27, 0xfe291e27, 0xfe291e27}, + {0x00009e48, 0x5030201a, 0x5030201a, 0x50302012, 0x50302012}, +}; + +static const u32 ar955x_1p0_mac_postamble[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, + {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, + {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, + {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, + {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, + {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, + {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, + {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, +}; + +static const u32 ar955x_1p0_common_rx_gain_bounds[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00009e44, 0xfe321e27, 0xfe321e27, 0xfe291e27, 0xfe291e27}, + {0x00009e48, 0x5030201a, 0x5030201a, 0x50302018, 0x50302018}, +}; + +static const u32 ar955x_1p0_modes_no_xpa_tx_gain_table[][9] = { + /* Addr 5G_HT20_L 5G_HT40_L 5G_HT20_M 5G_HT40_M 5G_HT20_H 5G_HT40_H 2G_HT40 2G_HT20 */ + {0x0000a2dc, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0xfffe5aaa, 0xfffe5aaa}, + {0x0000a2e0, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000a2e4, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0xffffe0f0, 0xffffe0f0}, + {0x0000a2e8, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0xffffef00, 0xffffef00}, + {0x0000a410, 0x000050d8, 0x000050d8, 0x000050d8, 0x000050d8, 0x000050d8, 0x000050d8, 0x000050d7, 0x000050d7}, + {0x0000a500, 0x00002220, 0x00002220, 0x00002220, 0x00002220, 0x00002220, 0x00002220, 0x00000000, 0x00000000}, + {0x0000a504, 0x04002222, 0x04002222, 0x04002222, 0x04002222, 0x04002222, 0x04002222, 0x04000002, 0x04000002}, + {0x0000a508, 0x09002421, 0x09002421, 0x09002421, 0x09002421, 0x09002421, 0x09002421, 0x08000004, 0x08000004}, + {0x0000a50c, 0x0d002621, 0x0d002621, 0x0d002621, 0x0d002621, 0x0d002621, 0x0d002621, 0x0b000006, 0x0b000006}, + {0x0000a510, 0x13004620, 0x13004620, 0x13004620, 0x13004620, 0x13004620, 0x13004620, 0x0f00000a, 0x0f00000a}, + {0x0000a514, 0x19004a20, 0x19004a20, 0x19004a20, 
0x19004a20, 0x19004a20, 0x19004a20, 0x1300000c, 0x1300000c}, + {0x0000a518, 0x1d004e20, 0x1d004e20, 0x1d004e20, 0x1d004e20, 0x1d004e20, 0x1d004e20, 0x1700000e, 0x1700000e}, + {0x0000a51c, 0x21005420, 0x21005420, 0x21005420, 0x21005420, 0x21005420, 0x21005420, 0x1b000012, 0x1b000012}, + {0x0000a520, 0x26005e20, 0x26005e20, 0x26005e20, 0x26005e20, 0x26005e20, 0x26005e20, 0x1f00004a, 0x1f00004a}, + {0x0000a524, 0x2b005e40, 0x2b005e40, 0x2b005e40, 0x2b005e40, 0x2b005e40, 0x2b005e40, 0x23000244, 0x23000244}, + {0x0000a528, 0x2f005e42, 0x2f005e42, 0x2f005e42, 0x2f005e42, 0x2f005e42, 0x2f005e42, 0x2700022b, 0x2700022b}, + {0x0000a52c, 0x33005e44, 0x33005e44, 0x33005e44, 0x33005e44, 0x33005e44, 0x33005e44, 0x2b000625, 0x2b000625}, + {0x0000a530, 0x38005e65, 0x38005e65, 0x38005e65, 0x38005e65, 0x38005e65, 0x38005e65, 0x2f001006, 0x2f001006}, + {0x0000a534, 0x3c005e69, 0x3c005e69, 0x3c005e69, 0x3c005e69, 0x3c005e69, 0x3c005e69, 0x330008a0, 0x330008a0}, + {0x0000a538, 0x40005e6b, 0x40005e6b, 0x40005e6b, 0x40005e6b, 0x40005e6b, 0x40005e6b, 0x37000a2a, 0x37000a2a}, + {0x0000a53c, 0x44005e6d, 0x44005e6d, 0x44005e6d, 0x44005e6d, 0x44005e6d, 0x44005e6d, 0x3b001c23, 0x3b001c23}, + {0x0000a540, 0x49005e72, 0x49005e72, 0x49005e72, 0x49005e72, 0x49005e72, 0x49005e72, 0x3f0014a0, 0x3f0014a0}, + {0x0000a544, 0x4e005eb2, 0x4e005eb2, 0x4e005eb2, 0x4e005eb2, 0x4e005eb2, 0x4e005eb2, 0x43001882, 0x43001882}, + {0x0000a548, 0x53005f12, 0x53005f12, 0x53005f12, 0x53005f12, 0x53005f12, 0x53005f12, 0x47001ca2, 0x47001ca2}, + {0x0000a54c, 0x59025eb2, 0x59025eb2, 0x59025eb2, 0x59025eb2, 0x59025eb2, 0x59025eb2, 0x4b001ec3, 0x4b001ec3}, + {0x0000a550, 0x5e025f12, 0x5e025f12, 0x5e025f12, 0x5e025f12, 0x5e025f12, 0x5e025f12, 0x4f00148c, 0x4f00148c}, + {0x0000a554, 0x61027f12, 0x61027f12, 0x61027f12, 0x61027f12, 0x61027f12, 0x61027f12, 0x53001c6e, 0x53001c6e}, + {0x0000a558, 0x6702bf12, 0x6702bf12, 0x6702bf12, 0x6702bf12, 0x6702bf12, 0x6702bf12, 0x57001c92, 0x57001c92}, + {0x0000a55c, 0x6b02bf14, 0x6b02bf14, 0x6b02bf14, 0x6b02bf14, 0x6b02bf14, 0x6b02bf14, 0x5c001af6, 0x5c001af6}, + {0x0000a560, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a564, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a568, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a56c, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a570, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a574, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a578, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a57c, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x6f02bf16, 0x5c001af6, 0x5c001af6}, + {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, + {0x0000a610, 0x00804000, 0x00804000, 0x00804000, 0x00804000, 0x00804000, 0x00804000, 0x04005001, 0x04005001}, + {0x0000a614, 
0x00804201, 0x00804201, 0x00804201, 0x00804201, 0x00804201, 0x00804201, 0x03808e02, 0x03808e02}, + {0x0000a618, 0x0280c802, 0x0280c802, 0x0280c802, 0x0280c802, 0x0280c802, 0x0280c802, 0x0300c000, 0x0300c000}, + {0x0000a61c, 0x0280ca03, 0x0280ca03, 0x0280ca03, 0x0280ca03, 0x0280ca03, 0x0280ca03, 0x03808e02, 0x03808e02}, + {0x0000a620, 0x04c15104, 0x04c15104, 0x04c15104, 0x04c15104, 0x04c15104, 0x04c15104, 0x03410c03, 0x03410c03}, + {0x0000a624, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04014c03, 0x04014c03}, + {0x0000a628, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x05818d04, 0x05818d04}, + {0x0000a62c, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x0801cd04, 0x0801cd04}, + {0x0000a630, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x0801e007, 0x0801e007}, + {0x0000a634, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x0801e007, 0x0801e007}, + {0x0000a638, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x0801e007, 0x0801e007}, + {0x0000a63c, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x04c15305, 0x0801e007, 0x0801e007}, + {0x0000b2dc, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0xfffe5aaa, 0xfffe5aaa}, + {0x0000b2e0, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000b2e4, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0xffffe0f0, 0xffffe0f0}, + {0x0000b2e8, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0xffffef00, 0xffffef00}, + {0x0000c2dc, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0x01feee00, 0xfffe5aaa, 0xfffe5aaa}, + {0x0000c2e0, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0x0000f000, 0xfffe9ccc, 0xfffe9ccc}, + {0x0000c2e4, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0x01ff0000, 0xffffe0f0, 0xffffe0f0}, + {0x0000c2e8, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0xffffef00, 0xffffef00}, + {0x00016044, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x054922d4, 0x054922d4}, + {0x00016048, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401}, + {0x00016444, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x054922d4, 0x054922d4}, + {0x00016448, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401}, + {0x00016844, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x056db2d4, 0x054922d4, 0x054922d4}, + {0x00016848, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401, 0x66482401}, +}; + +static const u32 ar955x_1p0_soc_postamble[][5] = { + /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ + {0x00007010, 0x00000023, 0x00000023, 0x00000023, 0x00000023}, +}; + +static const u32 ar955x_1p0_modes_fast_clock[][3] = { + /* Addr 5G_HT20 5G_HT40 */ + {0x00001030, 0x00000268, 0x000004d0}, + {0x00001070, 0x0000018c, 0x00000318}, + {0x000010b0, 0x00000fd0, 0x00001fa0}, + {0x00008014, 0x044c044c, 0x08980898}, + {0x0000801c, 0x148ec02b, 0x148ec057}, + {0x00008318, 0x000044c0, 0x00008980}, + {0x00009e00, 0x0372131c, 0x0372131c}, + {0x0000a230, 0x0000000b, 0x00000016}, + {0x0000a254, 0x00000898, 0x00001130}, +}; + +#endif /* INITVALS_955X_1P0_H */ diff --git a/drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h 
b/drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h index 06b3f0df9fad..6e1915aee712 100644 --- a/drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h @@ -1,5 +1,6 @@ /* - * Copyright (c) 2010 Atheros Communications Inc. + * Copyright (c) 2010-2011 Atheros Communications Inc. + * Copyright (c) 2011-2012 Qualcomm Atheros Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -19,18 +20,7 @@ /* AR9580 1.0 */ -static const u32 ar9580_1p0_modes_fast_clock[][3] = { - /* Addr 5G_HT20 5G_HT40 */ - {0x00001030, 0x00000268, 0x000004d0}, - {0x00001070, 0x0000018c, 0x00000318}, - {0x000010b0, 0x00000fd0, 0x00001fa0}, - {0x00008014, 0x044c044c, 0x08980898}, - {0x0000801c, 0x148ec02b, 0x148ec057}, - {0x00008318, 0x000044c0, 0x00008980}, - {0x00009e00, 0x0372131c, 0x0372131c}, - {0x0000a230, 0x0000000b, 0x00000016}, - {0x0000a254, 0x00000898, 0x00001130}, -}; +#define ar9580_1p0_modes_fast_clock ar9300Modes_fast_clock_2p2 static const u32 ar9580_1p0_radio_postamble[][5] = { /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ @@ -208,17 +198,7 @@ static const u32 ar9580_1p0_baseband_core[][2] = { {0x0000c420, 0x00000000}, }; -static const u32 ar9580_1p0_mac_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00001030, 0x00000230, 0x00000460, 0x000002c0, 0x00000160}, - {0x00001070, 0x00000168, 0x000002d0, 0x00000318, 0x0000018c}, - {0x000010b0, 0x00000e60, 0x00001cc0, 0x00007c70, 0x00003e38}, - {0x00008014, 0x03e803e8, 0x07d007d0, 0x10801600, 0x08400b00}, - {0x0000801c, 0x128d8027, 0x128d804f, 0x12e00057, 0x12e0002b}, - {0x00008120, 0x08f04800, 0x08f04800, 0x08f04810, 0x08f04810}, - {0x000081d0, 0x00003210, 0x00003210, 0x0000320a, 0x0000320a}, - {0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440}, -}; +#define ar9580_1p0_mac_postamble ar9300_2p2_mac_postamble static const u32 ar9580_1p0_low_ob_db_tx_gain_table[][5] = { /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ @@ -326,111 +306,7 @@ static const u32 ar9580_1p0_low_ob_db_tx_gain_table[][5] = { {0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, }; -static const u32 ar9580_1p0_high_power_tx_gain_table[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x0000a2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, - {0x0000a2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, - {0x0000a2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x0000a410, 0x000050d9, 0x000050d9, 0x000050d9, 0x000050d9}, - {0x0000a500, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a504, 0x06000003, 0x06000003, 0x04000002, 0x04000002}, - {0x0000a508, 0x0a000020, 0x0a000020, 0x08000004, 0x08000004}, - {0x0000a50c, 0x10000023, 0x10000023, 0x0b000200, 0x0b000200}, - {0x0000a510, 0x16000220, 0x16000220, 0x0f000202, 0x0f000202}, - {0x0000a514, 0x1c000223, 0x1c000223, 0x12000400, 0x12000400}, - {0x0000a518, 0x21002220, 0x21002220, 0x16000402, 0x16000402}, - {0x0000a51c, 0x27002223, 0x27002223, 0x19000404, 0x19000404}, - {0x0000a520, 0x2b022220, 0x2b022220, 0x1c000603, 0x1c000603}, - {0x0000a524, 0x2f022222, 0x2f022222, 0x21000a02, 0x21000a02}, - {0x0000a528, 0x34022225, 0x34022225, 0x25000a04, 0x25000a04}, - {0x0000a52c, 0x3a02222a, 0x3a02222a, 0x28000a20, 0x28000a20}, - {0x0000a530, 0x3e02222c, 0x3e02222c, 0x2c000e20, 0x2c000e20}, - {0x0000a534, 0x4202242a, 0x4202242a, 0x30000e22, 0x30000e22}, - {0x0000a538, 
0x4702244a, 0x4702244a, 0x34000e24, 0x34000e24}, - {0x0000a53c, 0x4b02244c, 0x4b02244c, 0x38001640, 0x38001640}, - {0x0000a540, 0x4e02246c, 0x4e02246c, 0x3c001660, 0x3c001660}, - {0x0000a544, 0x5302266c, 0x5302266c, 0x3f001861, 0x3f001861}, - {0x0000a548, 0x5702286c, 0x5702286c, 0x43001a81, 0x43001a81}, - {0x0000a54c, 0x5c02486b, 0x5c02486b, 0x47001a83, 0x47001a83}, - {0x0000a550, 0x61024a6c, 0x61024a6c, 0x4a001c84, 0x4a001c84}, - {0x0000a554, 0x66026a6c, 0x66026a6c, 0x4e001ce3, 0x4e001ce3}, - {0x0000a558, 0x6b026e6c, 0x6b026e6c, 0x52001ce5, 0x52001ce5}, - {0x0000a55c, 0x7002708c, 0x7002708c, 0x56001ce9, 0x56001ce9}, - {0x0000a560, 0x7302b08a, 0x7302b08a, 0x5a001ceb, 0x5a001ceb}, - {0x0000a564, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a568, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a56c, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a570, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a574, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a578, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a57c, 0x7702b08c, 0x7702b08c, 0x5d001eec, 0x5d001eec}, - {0x0000a580, 0x00800000, 0x00800000, 0x00800000, 0x00800000}, - {0x0000a584, 0x06800003, 0x06800003, 0x04800002, 0x04800002}, - {0x0000a588, 0x0a800020, 0x0a800020, 0x08800004, 0x08800004}, - {0x0000a58c, 0x10800023, 0x10800023, 0x0b800200, 0x0b800200}, - {0x0000a590, 0x16800220, 0x16800220, 0x0f800202, 0x0f800202}, - {0x0000a594, 0x1c800223, 0x1c800223, 0x12800400, 0x12800400}, - {0x0000a598, 0x21802220, 0x21802220, 0x16800402, 0x16800402}, - {0x0000a59c, 0x27802223, 0x27802223, 0x19800404, 0x19800404}, - {0x0000a5a0, 0x2b822220, 0x2b822220, 0x1c800603, 0x1c800603}, - {0x0000a5a4, 0x2f822222, 0x2f822222, 0x21800a02, 0x21800a02}, - {0x0000a5a8, 0x34822225, 0x34822225, 0x25800a04, 0x25800a04}, - {0x0000a5ac, 0x3a82222a, 0x3a82222a, 0x28800a20, 0x28800a20}, - {0x0000a5b0, 0x3e82222c, 0x3e82222c, 0x2c800e20, 0x2c800e20}, - {0x0000a5b4, 0x4282242a, 0x4282242a, 0x30800e22, 0x30800e22}, - {0x0000a5b8, 0x4782244a, 0x4782244a, 0x34800e24, 0x34800e24}, - {0x0000a5bc, 0x4b82244c, 0x4b82244c, 0x38801640, 0x38801640}, - {0x0000a5c0, 0x4e82246c, 0x4e82246c, 0x3c801660, 0x3c801660}, - {0x0000a5c4, 0x5382266c, 0x5382266c, 0x3f801861, 0x3f801861}, - {0x0000a5c8, 0x5782286c, 0x5782286c, 0x43801a81, 0x43801a81}, - {0x0000a5cc, 0x5c82486b, 0x5c82486b, 0x47801a83, 0x47801a83}, - {0x0000a5d0, 0x61824a6c, 0x61824a6c, 0x4a801c84, 0x4a801c84}, - {0x0000a5d4, 0x66826a6c, 0x66826a6c, 0x4e801ce3, 0x4e801ce3}, - {0x0000a5d8, 0x6b826e6c, 0x6b826e6c, 0x52801ce5, 0x52801ce5}, - {0x0000a5dc, 0x7082708c, 0x7082708c, 0x56801ce9, 0x56801ce9}, - {0x0000a5e0, 0x7382b08a, 0x7382b08a, 0x5a801ceb, 0x5a801ceb}, - {0x0000a5e4, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5e8, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5ec, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5f0, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5f4, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5f8, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a5fc, 0x7782b08c, 0x7782b08c, 0x5d801eec, 0x5d801eec}, - {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a610, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a614, 0x01404000, 0x01404000, 
0x01404000, 0x01404000}, - {0x0000a618, 0x01404501, 0x01404501, 0x01404501, 0x01404501}, - {0x0000a61c, 0x02008802, 0x02008802, 0x02008501, 0x02008501}, - {0x0000a620, 0x0300cc03, 0x0300cc03, 0x0280ca03, 0x0280ca03}, - {0x0000a624, 0x0300cc03, 0x0300cc03, 0x03010c04, 0x03010c04}, - {0x0000a628, 0x0300cc03, 0x0300cc03, 0x04014c04, 0x04014c04}, - {0x0000a62c, 0x03810c03, 0x03810c03, 0x04015005, 0x04015005}, - {0x0000a630, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, - {0x0000a634, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, - {0x0000a638, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, - {0x0000a63c, 0x03810e04, 0x03810e04, 0x04015005, 0x04015005}, - {0x0000b2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, - {0x0000b2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, - {0x0000b2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x0000c2dc, 0x0380c7fc, 0x0380c7fc, 0x03aaa352, 0x03aaa352}, - {0x0000c2e0, 0x0000f800, 0x0000f800, 0x03ccc584, 0x03ccc584}, - {0x0000c2e4, 0x03ff0000, 0x03ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000c2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x00016044, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4}, - {0x00016048, 0x66480001, 0x66480001, 0x66480001, 0x66480001}, - {0x00016068, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, - {0x00016444, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4}, - {0x00016448, 0x66480001, 0x66480001, 0x66480001, 0x66480001}, - {0x00016468, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, - {0x00016844, 0x012492d4, 0x012492d4, 0x012492d4, 0x012492d4}, - {0x00016848, 0x66480001, 0x66480001, 0x66480001, 0x66480001}, - {0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, -}; +#define ar9580_1p0_high_power_tx_gain_table ar9580_1p0_low_ob_db_tx_gain_table static const u32 ar9580_1p0_lowest_ob_db_tx_gain_table[][5] = { /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ @@ -538,12 +414,7 @@ static const u32 ar9580_1p0_lowest_ob_db_tx_gain_table[][5] = { {0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, }; -static const u32 ar9580_1p0_baseband_core_txfir_coeff_japan_2484[][2] = { - /* Addr allmodes */ - {0x0000a398, 0x00000000}, - {0x0000a39c, 0x6f7f0301}, - {0x0000a3a0, 0xca9228ee}, -}; +#define ar9580_1p0_baseband_core_txfir_coeff_japan_2484 ar9462_2p0_baseband_core_txfir_coeff_japan_2484 static const u32 ar9580_1p0_mac_core[][2] = { /* Addr allmodes */ @@ -808,376 +679,11 @@ static const u32 ar9580_1p0_mixed_ob_db_tx_gain_table[][5] = { {0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, }; -static const u32 ar9580_1p0_wo_xlna_rx_gain_table[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00010000}, - {0x0000a004, 0x00030002}, - {0x0000a008, 0x00050004}, - {0x0000a00c, 0x00810080}, - {0x0000a010, 0x00830082}, - {0x0000a014, 0x01810180}, - {0x0000a018, 0x01830182}, - {0x0000a01c, 0x01850184}, - {0x0000a020, 0x01890188}, - {0x0000a024, 0x018b018a}, - {0x0000a028, 0x018d018c}, - {0x0000a02c, 0x03820190}, - {0x0000a030, 0x03840383}, - {0x0000a034, 0x03880385}, - {0x0000a038, 0x038a0389}, - {0x0000a03c, 0x038c038b}, - {0x0000a040, 0x0390038d}, - {0x0000a044, 0x03920391}, - {0x0000a048, 0x03940393}, - {0x0000a04c, 0x03960395}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - {0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - 
{0x0000a07c, 0x00000000}, - {0x0000a080, 0x29292929}, - {0x0000a084, 0x29292929}, - {0x0000a088, 0x29292929}, - {0x0000a08c, 0x29292929}, - {0x0000a090, 0x22292929}, - {0x0000a094, 0x1d1d2222}, - {0x0000a098, 0x0c111117}, - {0x0000a09c, 0x00030303}, - {0x0000a0a0, 0x00000000}, - {0x0000a0a4, 0x00000000}, - {0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x001f0000}, - {0x0000a0c4, 0x01000101}, - {0x0000a0c8, 0x011e011f}, - {0x0000a0cc, 0x011c011d}, - {0x0000a0d0, 0x02030204}, - {0x0000a0d4, 0x02010202}, - {0x0000a0d8, 0x021f0200}, - {0x0000a0dc, 0x0302021e}, - {0x0000a0e0, 0x03000301}, - {0x0000a0e4, 0x031e031f}, - {0x0000a0e8, 0x0402031d}, - {0x0000a0ec, 0x04000401}, - {0x0000a0f0, 0x041e041f}, - {0x0000a0f4, 0x0502041d}, - {0x0000a0f8, 0x05000501}, - {0x0000a0fc, 0x051e051f}, - {0x0000a100, 0x06010602}, - {0x0000a104, 0x061f0600}, - {0x0000a108, 0x061d061e}, - {0x0000a10c, 0x07020703}, - {0x0000a110, 0x07000701}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x01000101}, - {0x0000a148, 0x011e011f}, - {0x0000a14c, 0x011c011d}, - {0x0000a150, 0x02030204}, - {0x0000a154, 0x02010202}, - {0x0000a158, 0x021f0200}, - {0x0000a15c, 0x0302021e}, - {0x0000a160, 0x03000301}, - {0x0000a164, 0x031e031f}, - {0x0000a168, 0x0402031d}, - {0x0000a16c, 0x04000401}, - {0x0000a170, 0x041e041f}, - {0x0000a174, 0x0502041d}, - {0x0000a178, 0x05000501}, - {0x0000a17c, 0x051e051f}, - {0x0000a180, 0x06010602}, - {0x0000a184, 0x061f0600}, - {0x0000a188, 0x061d061e}, - {0x0000a18c, 0x07020703}, - {0x0000a190, 0x07000701}, - {0x0000a194, 0x00000000}, - {0x0000a198, 0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, - {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000196}, - {0x0000b000, 0x00010000}, - {0x0000b004, 0x00030002}, - {0x0000b008, 0x00050004}, - {0x0000b00c, 0x00810080}, - {0x0000b010, 0x00830082}, - {0x0000b014, 0x01810180}, - {0x0000b018, 0x01830182}, - {0x0000b01c, 0x01850184}, - {0x0000b020, 0x02810280}, - {0x0000b024, 0x02830282}, - {0x0000b028, 0x02850284}, - {0x0000b02c, 0x02890288}, - {0x0000b030, 0x028b028a}, - {0x0000b034, 0x0388028c}, - {0x0000b038, 0x038a0389}, - {0x0000b03c, 0x038c038b}, - {0x0000b040, 0x0390038d}, - {0x0000b044, 0x03920391}, - {0x0000b048, 0x03940393}, - {0x0000b04c, 0x03960395}, - {0x0000b050, 0x00000000}, - {0x0000b054, 0x00000000}, - {0x0000b058, 0x00000000}, - {0x0000b05c, 0x00000000}, - {0x0000b060, 0x00000000}, - {0x0000b064, 0x00000000}, - {0x0000b068, 0x00000000}, - {0x0000b06c, 0x00000000}, - {0x0000b070, 0x00000000}, - {0x0000b074, 0x00000000}, 
- {0x0000b078, 0x00000000}, - {0x0000b07c, 0x00000000}, - {0x0000b080, 0x32323232}, - {0x0000b084, 0x2f2f3232}, - {0x0000b088, 0x23282a2d}, - {0x0000b08c, 0x1c1e2123}, - {0x0000b090, 0x14171919}, - {0x0000b094, 0x0e0e1214}, - {0x0000b098, 0x03050707}, - {0x0000b09c, 0x00030303}, - {0x0000b0a0, 0x00000000}, - {0x0000b0a4, 0x00000000}, - {0x0000b0a8, 0x00000000}, - {0x0000b0ac, 0x00000000}, - {0x0000b0b0, 0x00000000}, - {0x0000b0b4, 0x00000000}, - {0x0000b0b8, 0x00000000}, - {0x0000b0bc, 0x00000000}, - {0x0000b0c0, 0x003f0020}, - {0x0000b0c4, 0x00400041}, - {0x0000b0c8, 0x0140005f}, - {0x0000b0cc, 0x0160015f}, - {0x0000b0d0, 0x017e017f}, - {0x0000b0d4, 0x02410242}, - {0x0000b0d8, 0x025f0240}, - {0x0000b0dc, 0x027f0260}, - {0x0000b0e0, 0x0341027e}, - {0x0000b0e4, 0x035f0340}, - {0x0000b0e8, 0x037f0360}, - {0x0000b0ec, 0x04400441}, - {0x0000b0f0, 0x0460045f}, - {0x0000b0f4, 0x0541047f}, - {0x0000b0f8, 0x055f0540}, - {0x0000b0fc, 0x057f0560}, - {0x0000b100, 0x06400641}, - {0x0000b104, 0x0660065f}, - {0x0000b108, 0x067e067f}, - {0x0000b10c, 0x07410742}, - {0x0000b110, 0x075f0740}, - {0x0000b114, 0x077f0760}, - {0x0000b118, 0x07800781}, - {0x0000b11c, 0x07a0079f}, - {0x0000b120, 0x07c107bf}, - {0x0000b124, 0x000007c0}, - {0x0000b128, 0x00000000}, - {0x0000b12c, 0x00000000}, - {0x0000b130, 0x00000000}, - {0x0000b134, 0x00000000}, - {0x0000b138, 0x00000000}, - {0x0000b13c, 0x00000000}, - {0x0000b140, 0x003f0020}, - {0x0000b144, 0x00400041}, - {0x0000b148, 0x0140005f}, - {0x0000b14c, 0x0160015f}, - {0x0000b150, 0x017e017f}, - {0x0000b154, 0x02410242}, - {0x0000b158, 0x025f0240}, - {0x0000b15c, 0x027f0260}, - {0x0000b160, 0x0341027e}, - {0x0000b164, 0x035f0340}, - {0x0000b168, 0x037f0360}, - {0x0000b16c, 0x04400441}, - {0x0000b170, 0x0460045f}, - {0x0000b174, 0x0541047f}, - {0x0000b178, 0x055f0540}, - {0x0000b17c, 0x057f0560}, - {0x0000b180, 0x06400641}, - {0x0000b184, 0x0660065f}, - {0x0000b188, 0x067e067f}, - {0x0000b18c, 0x07410742}, - {0x0000b190, 0x075f0740}, - {0x0000b194, 0x077f0760}, - {0x0000b198, 0x07800781}, - {0x0000b19c, 0x07a0079f}, - {0x0000b1a0, 0x07c107bf}, - {0x0000b1a4, 0x000007c0}, - {0x0000b1a8, 0x00000000}, - {0x0000b1ac, 0x00000000}, - {0x0000b1b0, 0x00000000}, - {0x0000b1b4, 0x00000000}, - {0x0000b1b8, 0x00000000}, - {0x0000b1bc, 0x00000000}, - {0x0000b1c0, 0x00000000}, - {0x0000b1c4, 0x00000000}, - {0x0000b1c8, 0x00000000}, - {0x0000b1cc, 0x00000000}, - {0x0000b1d0, 0x00000000}, - {0x0000b1d4, 0x00000000}, - {0x0000b1d8, 0x00000000}, - {0x0000b1dc, 0x00000000}, - {0x0000b1e0, 0x00000000}, - {0x0000b1e4, 0x00000000}, - {0x0000b1e8, 0x00000000}, - {0x0000b1ec, 0x00000000}, - {0x0000b1f0, 0x00000396}, - {0x0000b1f4, 0x00000396}, - {0x0000b1f8, 0x00000396}, - {0x0000b1fc, 0x00000196}, -}; +#define ar9580_1p0_wo_xlna_rx_gain_table ar9300Common_wo_xlna_rx_gain_table_2p2 -static const u32 ar9580_1p0_soc_postamble[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x00007010, 0x00000023, 0x00000023, 0x00000023, 0x00000023}, -}; +#define ar9580_1p0_soc_postamble ar9300_2p2_soc_postamble -static const u32 ar9580_1p0_high_ob_db_tx_gain_table[][5] = { - /* Addr 5G_HT20 5G_HT40 2G_HT40 2G_HT20 */ - {0x0000a2dc, 0x01feee00, 0x01feee00, 0x03aaa352, 0x03aaa352}, - {0x0000a2e0, 0x0000f000, 0x0000f000, 0x03ccc584, 0x03ccc584}, - {0x0000a2e4, 0x01ff0000, 0x01ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000a2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x0000a410, 0x000050d8, 0x000050d8, 0x000050d9, 0x000050d9}, - {0x0000a500, 0x00002220, 0x00002220, 0x00000000, 0x00000000}, - 
{0x0000a504, 0x04002222, 0x04002222, 0x04000002, 0x04000002}, - {0x0000a508, 0x09002421, 0x09002421, 0x08000004, 0x08000004}, - {0x0000a50c, 0x0d002621, 0x0d002621, 0x0b000200, 0x0b000200}, - {0x0000a510, 0x13004620, 0x13004620, 0x0f000202, 0x0f000202}, - {0x0000a514, 0x19004a20, 0x19004a20, 0x11000400, 0x11000400}, - {0x0000a518, 0x1d004e20, 0x1d004e20, 0x15000402, 0x15000402}, - {0x0000a51c, 0x21005420, 0x21005420, 0x19000404, 0x19000404}, - {0x0000a520, 0x26005e20, 0x26005e20, 0x1b000603, 0x1b000603}, - {0x0000a524, 0x2b005e40, 0x2b005e40, 0x1f000a02, 0x1f000a02}, - {0x0000a528, 0x2f005e42, 0x2f005e42, 0x23000a04, 0x23000a04}, - {0x0000a52c, 0x33005e44, 0x33005e44, 0x26000a20, 0x26000a20}, - {0x0000a530, 0x38005e65, 0x38005e65, 0x2a000e20, 0x2a000e20}, - {0x0000a534, 0x3c005e69, 0x3c005e69, 0x2e000e22, 0x2e000e22}, - {0x0000a538, 0x40005e6b, 0x40005e6b, 0x31000e24, 0x31000e24}, - {0x0000a53c, 0x44005e6d, 0x44005e6d, 0x34001640, 0x34001640}, - {0x0000a540, 0x49005e72, 0x49005e72, 0x38001660, 0x38001660}, - {0x0000a544, 0x4e005eb2, 0x4e005eb2, 0x3b001861, 0x3b001861}, - {0x0000a548, 0x53005f12, 0x53005f12, 0x3e001a81, 0x3e001a81}, - {0x0000a54c, 0x59025eb2, 0x59025eb2, 0x42001a83, 0x42001a83}, - {0x0000a550, 0x5e025f12, 0x5e025f12, 0x44001c84, 0x44001c84}, - {0x0000a554, 0x61027f12, 0x61027f12, 0x48001ce3, 0x48001ce3}, - {0x0000a558, 0x6702bf12, 0x6702bf12, 0x4c001ce5, 0x4c001ce5}, - {0x0000a55c, 0x6b02bf14, 0x6b02bf14, 0x50001ce9, 0x50001ce9}, - {0x0000a560, 0x6f02bf16, 0x6f02bf16, 0x54001ceb, 0x54001ceb}, - {0x0000a564, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a568, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a56c, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a570, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a574, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a578, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a57c, 0x6f02bf16, 0x6f02bf16, 0x56001eec, 0x56001eec}, - {0x0000a580, 0x00802220, 0x00802220, 0x00800000, 0x00800000}, - {0x0000a584, 0x04802222, 0x04802222, 0x04800002, 0x04800002}, - {0x0000a588, 0x09802421, 0x09802421, 0x08800004, 0x08800004}, - {0x0000a58c, 0x0d802621, 0x0d802621, 0x0b800200, 0x0b800200}, - {0x0000a590, 0x13804620, 0x13804620, 0x0f800202, 0x0f800202}, - {0x0000a594, 0x19804a20, 0x19804a20, 0x11800400, 0x11800400}, - {0x0000a598, 0x1d804e20, 0x1d804e20, 0x15800402, 0x15800402}, - {0x0000a59c, 0x21805420, 0x21805420, 0x19800404, 0x19800404}, - {0x0000a5a0, 0x26805e20, 0x26805e20, 0x1b800603, 0x1b800603}, - {0x0000a5a4, 0x2b805e40, 0x2b805e40, 0x1f800a02, 0x1f800a02}, - {0x0000a5a8, 0x2f805e42, 0x2f805e42, 0x23800a04, 0x23800a04}, - {0x0000a5ac, 0x33805e44, 0x33805e44, 0x26800a20, 0x26800a20}, - {0x0000a5b0, 0x38805e65, 0x38805e65, 0x2a800e20, 0x2a800e20}, - {0x0000a5b4, 0x3c805e69, 0x3c805e69, 0x2e800e22, 0x2e800e22}, - {0x0000a5b8, 0x40805e6b, 0x40805e6b, 0x31800e24, 0x31800e24}, - {0x0000a5bc, 0x44805e6d, 0x44805e6d, 0x34801640, 0x34801640}, - {0x0000a5c0, 0x49805e72, 0x49805e72, 0x38801660, 0x38801660}, - {0x0000a5c4, 0x4e805eb2, 0x4e805eb2, 0x3b801861, 0x3b801861}, - {0x0000a5c8, 0x53805f12, 0x53805f12, 0x3e801a81, 0x3e801a81}, - {0x0000a5cc, 0x59825eb2, 0x59825eb2, 0x42801a83, 0x42801a83}, - {0x0000a5d0, 0x5e825f12, 0x5e825f12, 0x44801c84, 0x44801c84}, - {0x0000a5d4, 0x61827f12, 0x61827f12, 0x48801ce3, 0x48801ce3}, - {0x0000a5d8, 0x6782bf12, 0x6782bf12, 0x4c801ce5, 0x4c801ce5}, - {0x0000a5dc, 0x6b82bf14, 0x6b82bf14, 0x50801ce9, 0x50801ce9}, - {0x0000a5e0, 0x6f82bf16, 
0x6f82bf16, 0x54801ceb, 0x54801ceb}, - {0x0000a5e4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5e8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5ec, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f0, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f4, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5f8, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a5fc, 0x6f82bf16, 0x6f82bf16, 0x56801eec, 0x56801eec}, - {0x0000a600, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a604, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a608, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a60c, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, - {0x0000a610, 0x00804000, 0x00804000, 0x00000000, 0x00000000}, - {0x0000a614, 0x00804201, 0x00804201, 0x01404000, 0x01404000}, - {0x0000a618, 0x0280c802, 0x0280c802, 0x01404501, 0x01404501}, - {0x0000a61c, 0x0280ca03, 0x0280ca03, 0x02008501, 0x02008501}, - {0x0000a620, 0x04c15104, 0x04c15104, 0x0280ca03, 0x0280ca03}, - {0x0000a624, 0x04c15305, 0x04c15305, 0x03010c04, 0x03010c04}, - {0x0000a628, 0x04c15305, 0x04c15305, 0x04014c04, 0x04014c04}, - {0x0000a62c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, - {0x0000a630, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, - {0x0000a634, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, - {0x0000a638, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, - {0x0000a63c, 0x04c15305, 0x04c15305, 0x04015005, 0x04015005}, - {0x0000b2dc, 0x01feee00, 0x01feee00, 0x03aaa352, 0x03aaa352}, - {0x0000b2e0, 0x0000f000, 0x0000f000, 0x03ccc584, 0x03ccc584}, - {0x0000b2e4, 0x01ff0000, 0x01ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000b2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x0000c2dc, 0x01feee00, 0x01feee00, 0x03aaa352, 0x03aaa352}, - {0x0000c2e0, 0x0000f000, 0x0000f000, 0x03ccc584, 0x03ccc584}, - {0x0000c2e4, 0x01ff0000, 0x01ff0000, 0x03f0f800, 0x03f0f800}, - {0x0000c2e8, 0x00000000, 0x00000000, 0x03ff0000, 0x03ff0000}, - {0x00016044, 0x056db2e4, 0x056db2e4, 0x056db2e4, 0x056db2e4}, - {0x00016048, 0x8e480001, 0x8e480001, 0x8e480001, 0x8e480001}, - {0x00016068, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, - {0x00016444, 0x056db2e4, 0x056db2e4, 0x056db2e4, 0x056db2e4}, - {0x00016448, 0x8e480001, 0x8e480001, 0x8e480001, 0x8e480001}, - {0x00016468, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, - {0x00016844, 0x056db2e4, 0x056db2e4, 0x056db2e4, 0x056db2e4}, - {0x00016848, 0x8e480001, 0x8e480001, 0x8e480001, 0x8e480001}, - {0x00016868, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c, 0x6db6db6c}, -}; +#define ar9580_1p0_high_ob_db_tx_gain_table ar9300Modes_high_ob_db_tx_gain_table_2p2 static const u32 ar9580_1p0_soc_preamble[][2] = { /* Addr allmodes */ @@ -1189,265 +695,7 @@ static const u32 ar9580_1p0_soc_preamble[][2] = { {0x00007048, 0x00000008}, }; -static const u32 ar9580_1p0_rx_gain_table[][2] = { - /* Addr allmodes */ - {0x0000a000, 0x00010000}, - {0x0000a004, 0x00030002}, - {0x0000a008, 0x00050004}, - {0x0000a00c, 0x00810080}, - {0x0000a010, 0x00830082}, - {0x0000a014, 0x01810180}, - {0x0000a018, 0x01830182}, - {0x0000a01c, 0x01850184}, - {0x0000a020, 0x01890188}, - {0x0000a024, 0x018b018a}, - {0x0000a028, 0x018d018c}, - {0x0000a02c, 0x01910190}, - {0x0000a030, 0x01930192}, - {0x0000a034, 0x01950194}, - {0x0000a038, 0x038a0196}, - {0x0000a03c, 0x038c038b}, - {0x0000a040, 0x0390038d}, - {0x0000a044, 0x03920391}, - {0x0000a048, 0x03940393}, - {0x0000a04c, 0x03960395}, - {0x0000a050, 0x00000000}, - {0x0000a054, 0x00000000}, - 
{0x0000a058, 0x00000000}, - {0x0000a05c, 0x00000000}, - {0x0000a060, 0x00000000}, - {0x0000a064, 0x00000000}, - {0x0000a068, 0x00000000}, - {0x0000a06c, 0x00000000}, - {0x0000a070, 0x00000000}, - {0x0000a074, 0x00000000}, - {0x0000a078, 0x00000000}, - {0x0000a07c, 0x00000000}, - {0x0000a080, 0x22222229}, - {0x0000a084, 0x1d1d1d1d}, - {0x0000a088, 0x1d1d1d1d}, - {0x0000a08c, 0x1d1d1d1d}, - {0x0000a090, 0x171d1d1d}, - {0x0000a094, 0x11111717}, - {0x0000a098, 0x00030311}, - {0x0000a09c, 0x00000000}, - {0x0000a0a0, 0x00000000}, - {0x0000a0a4, 0x00000000}, - {0x0000a0a8, 0x00000000}, - {0x0000a0ac, 0x00000000}, - {0x0000a0b0, 0x00000000}, - {0x0000a0b4, 0x00000000}, - {0x0000a0b8, 0x00000000}, - {0x0000a0bc, 0x00000000}, - {0x0000a0c0, 0x001f0000}, - {0x0000a0c4, 0x01000101}, - {0x0000a0c8, 0x011e011f}, - {0x0000a0cc, 0x011c011d}, - {0x0000a0d0, 0x02030204}, - {0x0000a0d4, 0x02010202}, - {0x0000a0d8, 0x021f0200}, - {0x0000a0dc, 0x0302021e}, - {0x0000a0e0, 0x03000301}, - {0x0000a0e4, 0x031e031f}, - {0x0000a0e8, 0x0402031d}, - {0x0000a0ec, 0x04000401}, - {0x0000a0f0, 0x041e041f}, - {0x0000a0f4, 0x0502041d}, - {0x0000a0f8, 0x05000501}, - {0x0000a0fc, 0x051e051f}, - {0x0000a100, 0x06010602}, - {0x0000a104, 0x061f0600}, - {0x0000a108, 0x061d061e}, - {0x0000a10c, 0x07020703}, - {0x0000a110, 0x07000701}, - {0x0000a114, 0x00000000}, - {0x0000a118, 0x00000000}, - {0x0000a11c, 0x00000000}, - {0x0000a120, 0x00000000}, - {0x0000a124, 0x00000000}, - {0x0000a128, 0x00000000}, - {0x0000a12c, 0x00000000}, - {0x0000a130, 0x00000000}, - {0x0000a134, 0x00000000}, - {0x0000a138, 0x00000000}, - {0x0000a13c, 0x00000000}, - {0x0000a140, 0x001f0000}, - {0x0000a144, 0x01000101}, - {0x0000a148, 0x011e011f}, - {0x0000a14c, 0x011c011d}, - {0x0000a150, 0x02030204}, - {0x0000a154, 0x02010202}, - {0x0000a158, 0x021f0200}, - {0x0000a15c, 0x0302021e}, - {0x0000a160, 0x03000301}, - {0x0000a164, 0x031e031f}, - {0x0000a168, 0x0402031d}, - {0x0000a16c, 0x04000401}, - {0x0000a170, 0x041e041f}, - {0x0000a174, 0x0502041d}, - {0x0000a178, 0x05000501}, - {0x0000a17c, 0x051e051f}, - {0x0000a180, 0x06010602}, - {0x0000a184, 0x061f0600}, - {0x0000a188, 0x061d061e}, - {0x0000a18c, 0x07020703}, - {0x0000a190, 0x07000701}, - {0x0000a194, 0x00000000}, - {0x0000a198, 0x00000000}, - {0x0000a19c, 0x00000000}, - {0x0000a1a0, 0x00000000}, - {0x0000a1a4, 0x00000000}, - {0x0000a1a8, 0x00000000}, - {0x0000a1ac, 0x00000000}, - {0x0000a1b0, 0x00000000}, - {0x0000a1b4, 0x00000000}, - {0x0000a1b8, 0x00000000}, - {0x0000a1bc, 0x00000000}, - {0x0000a1c0, 0x00000000}, - {0x0000a1c4, 0x00000000}, - {0x0000a1c8, 0x00000000}, - {0x0000a1cc, 0x00000000}, - {0x0000a1d0, 0x00000000}, - {0x0000a1d4, 0x00000000}, - {0x0000a1d8, 0x00000000}, - {0x0000a1dc, 0x00000000}, - {0x0000a1e0, 0x00000000}, - {0x0000a1e4, 0x00000000}, - {0x0000a1e8, 0x00000000}, - {0x0000a1ec, 0x00000000}, - {0x0000a1f0, 0x00000396}, - {0x0000a1f4, 0x00000396}, - {0x0000a1f8, 0x00000396}, - {0x0000a1fc, 0x00000196}, - {0x0000b000, 0x00010000}, - {0x0000b004, 0x00030002}, - {0x0000b008, 0x00050004}, - {0x0000b00c, 0x00810080}, - {0x0000b010, 0x00830082}, - {0x0000b014, 0x01810180}, - {0x0000b018, 0x01830182}, - {0x0000b01c, 0x01850184}, - {0x0000b020, 0x02810280}, - {0x0000b024, 0x02830282}, - {0x0000b028, 0x02850284}, - {0x0000b02c, 0x02890288}, - {0x0000b030, 0x028b028a}, - {0x0000b034, 0x0388028c}, - {0x0000b038, 0x038a0389}, - {0x0000b03c, 0x038c038b}, - {0x0000b040, 0x0390038d}, - {0x0000b044, 0x03920391}, - {0x0000b048, 0x03940393}, - {0x0000b04c, 0x03960395}, - {0x0000b050, 0x00000000}, 
- {0x0000b054, 0x00000000}, - {0x0000b058, 0x00000000}, - {0x0000b05c, 0x00000000}, - {0x0000b060, 0x00000000}, - {0x0000b064, 0x00000000}, - {0x0000b068, 0x00000000}, - {0x0000b06c, 0x00000000}, - {0x0000b070, 0x00000000}, - {0x0000b074, 0x00000000}, - {0x0000b078, 0x00000000}, - {0x0000b07c, 0x00000000}, - {0x0000b080, 0x2a2d2f32}, - {0x0000b084, 0x21232328}, - {0x0000b088, 0x19191c1e}, - {0x0000b08c, 0x12141417}, - {0x0000b090, 0x07070e0e}, - {0x0000b094, 0x03030305}, - {0x0000b098, 0x00000003}, - {0x0000b09c, 0x00000000}, - {0x0000b0a0, 0x00000000}, - {0x0000b0a4, 0x00000000}, - {0x0000b0a8, 0x00000000}, - {0x0000b0ac, 0x00000000}, - {0x0000b0b0, 0x00000000}, - {0x0000b0b4, 0x00000000}, - {0x0000b0b8, 0x00000000}, - {0x0000b0bc, 0x00000000}, - {0x0000b0c0, 0x003f0020}, - {0x0000b0c4, 0x00400041}, - {0x0000b0c8, 0x0140005f}, - {0x0000b0cc, 0x0160015f}, - {0x0000b0d0, 0x017e017f}, - {0x0000b0d4, 0x02410242}, - {0x0000b0d8, 0x025f0240}, - {0x0000b0dc, 0x027f0260}, - {0x0000b0e0, 0x0341027e}, - {0x0000b0e4, 0x035f0340}, - {0x0000b0e8, 0x037f0360}, - {0x0000b0ec, 0x04400441}, - {0x0000b0f0, 0x0460045f}, - {0x0000b0f4, 0x0541047f}, - {0x0000b0f8, 0x055f0540}, - {0x0000b0fc, 0x057f0560}, - {0x0000b100, 0x06400641}, - {0x0000b104, 0x0660065f}, - {0x0000b108, 0x067e067f}, - {0x0000b10c, 0x07410742}, - {0x0000b110, 0x075f0740}, - {0x0000b114, 0x077f0760}, - {0x0000b118, 0x07800781}, - {0x0000b11c, 0x07a0079f}, - {0x0000b120, 0x07c107bf}, - {0x0000b124, 0x000007c0}, - {0x0000b128, 0x00000000}, - {0x0000b12c, 0x00000000}, - {0x0000b130, 0x00000000}, - {0x0000b134, 0x00000000}, - {0x0000b138, 0x00000000}, - {0x0000b13c, 0x00000000}, - {0x0000b140, 0x003f0020}, - {0x0000b144, 0x00400041}, - {0x0000b148, 0x0140005f}, - {0x0000b14c, 0x0160015f}, - {0x0000b150, 0x017e017f}, - {0x0000b154, 0x02410242}, - {0x0000b158, 0x025f0240}, - {0x0000b15c, 0x027f0260}, - {0x0000b160, 0x0341027e}, - {0x0000b164, 0x035f0340}, - {0x0000b168, 0x037f0360}, - {0x0000b16c, 0x04400441}, - {0x0000b170, 0x0460045f}, - {0x0000b174, 0x0541047f}, - {0x0000b178, 0x055f0540}, - {0x0000b17c, 0x057f0560}, - {0x0000b180, 0x06400641}, - {0x0000b184, 0x0660065f}, - {0x0000b188, 0x067e067f}, - {0x0000b18c, 0x07410742}, - {0x0000b190, 0x075f0740}, - {0x0000b194, 0x077f0760}, - {0x0000b198, 0x07800781}, - {0x0000b19c, 0x07a0079f}, - {0x0000b1a0, 0x07c107bf}, - {0x0000b1a4, 0x000007c0}, - {0x0000b1a8, 0x00000000}, - {0x0000b1ac, 0x00000000}, - {0x0000b1b0, 0x00000000}, - {0x0000b1b4, 0x00000000}, - {0x0000b1b8, 0x00000000}, - {0x0000b1bc, 0x00000000}, - {0x0000b1c0, 0x00000000}, - {0x0000b1c4, 0x00000000}, - {0x0000b1c8, 0x00000000}, - {0x0000b1cc, 0x00000000}, - {0x0000b1d0, 0x00000000}, - {0x0000b1d4, 0x00000000}, - {0x0000b1d8, 0x00000000}, - {0x0000b1dc, 0x00000000}, - {0x0000b1e0, 0x00000000}, - {0x0000b1e4, 0x00000000}, - {0x0000b1e8, 0x00000000}, - {0x0000b1ec, 0x00000000}, - {0x0000b1f0, 0x00000396}, - {0x0000b1f4, 0x00000396}, - {0x0000b1f8, 0x00000396}, - {0x0000b1fc, 0x00000196}, -}; +#define ar9580_1p0_rx_gain_table ar9462_common_rx_gain_table_2p0 static const u32 ar9580_1p0_radio_core[][2] = { /* Addr allmodes */ diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h index 4866550ddd96..b09285c36c4a 100644 --- a/drivers/net/wireless/ath/ath9k/ath9k.h +++ b/drivers/net/wireless/ath/ath9k/ath9k.h @@ -297,6 +297,8 @@ struct ath_tx { struct ath_txq txq[ATH9K_NUM_TX_QUEUES]; struct ath_descdma txdma; struct ath_txq *txq_map[WME_NUM_AC]; + u32 txq_max_pending[WME_NUM_AC]; + u16 
max_aggr_framelen[WME_NUM_AC][4][32]; }; struct ath_rx_edma { @@ -308,6 +310,7 @@ struct ath_rx { u8 defant; u8 rxotherant; u32 *rxlink; + u32 num_pkts; unsigned int rxfilter; spinlock_t rxbuflock; struct list_head rxbuf; @@ -326,6 +329,9 @@ int ath_rx_init(struct ath_softc *sc, int nbufs); void ath_rx_cleanup(struct ath_softc *sc); int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp); struct ath_txq *ath_txq_setup(struct ath_softc *sc, int qtype, int subtype); +void ath_txq_lock(struct ath_softc *sc, struct ath_txq *txq); +void ath_txq_unlock(struct ath_softc *sc, struct ath_txq *txq); +void ath_txq_unlock_complete(struct ath_softc *sc, struct ath_txq *txq); void ath_tx_cleanupq(struct ath_softc *sc, struct ath_txq *txq); bool ath_drain_all_txq(struct ath_softc *sc, bool retry_tx); void ath_draintxq(struct ath_softc *sc, @@ -337,6 +343,7 @@ int ath_tx_init(struct ath_softc *sc, int nbufs); void ath_tx_cleanup(struct ath_softc *sc); int ath_txq_update(struct ath_softc *sc, int qnum, struct ath9k_tx_queue_info *q); +void ath_update_max_aggr_framelen(struct ath_softc *sc, int queue, int txop); int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb, struct ath_tx_control *txctl); void ath_tx_tasklet(struct ath_softc *sc); @@ -356,7 +363,7 @@ void ath_tx_aggr_sleep(struct ieee80211_sta *sta, struct ath_softc *sc, struct ath_vif { int av_bslot; - bool is_bslot_active, primary_sta_vif; + bool primary_sta_vif; __le64 tsf_adjust; /* TSF adjustment for staggered beacons */ struct ath_buf *av_bcbuf; }; @@ -382,6 +389,7 @@ struct ath_beacon_config { u16 dtim_period; u16 bmiss_timeout; u8 dtim_count; + bool enable_beacon; }; struct ath_beacon { @@ -393,7 +401,6 @@ struct ath_beacon { u32 beaconq; u32 bmisscnt; - u32 ast_be_xmit; u32 bc_tstamp; struct ieee80211_vif *bslot[ATH_BCBUF]; int slottime; @@ -407,17 +414,19 @@ struct ath_beacon { bool tx_last; }; -void ath_beacon_tasklet(unsigned long data); -void ath_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif); -int ath_beacon_alloc(struct ath_softc *sc, struct ieee80211_vif *vif); -void ath_beacon_return(struct ath_softc *sc, struct ath_vif *avp); -int ath_beaconq_config(struct ath_softc *sc); -void ath_set_beacon(struct ath_softc *sc); +void ath9k_beacon_tasklet(unsigned long data); +bool ath9k_allow_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif); +void ath9k_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif, + u32 changed); +void ath9k_beacon_assign_slot(struct ath_softc *sc, struct ieee80211_vif *vif); +void ath9k_beacon_remove_slot(struct ath_softc *sc, struct ieee80211_vif *vif); +void ath9k_set_tsfadjust(struct ath_softc *sc, struct ieee80211_vif *vif); +void ath9k_set_beacon(struct ath_softc *sc); void ath9k_set_beaconing_status(struct ath_softc *sc, bool status); -/*******/ -/* ANI */ -/*******/ +/*******************/ +/* Link Monitoring */ +/*******************/ #define ATH_STA_SHORT_CALINTERVAL 1000 /* 1 second */ #define ATH_AP_SHORT_CALINTERVAL 100 /* 100 ms */ @@ -428,7 +437,9 @@ void ath9k_set_beaconing_status(struct ath_softc *sc, bool status); #define ATH_RESTART_CALINTERVAL 1200000 /* 20 minutes */ #define ATH_PAPRD_TIMEOUT 100 /* msecs */ +#define ATH_PLL_WORK_INTERVAL 100 +void ath_tx_complete_poll_work(struct work_struct *work); void ath_reset_work(struct work_struct *work); void ath_hw_check(struct work_struct *work); void ath_hw_pll_work(struct work_struct *work); @@ -436,23 +447,35 @@ void ath_rx_poll(unsigned long data); void ath_start_rx_poll(struct ath_softc *sc, u8 
nbeacon); void ath_paprd_calibrate(struct work_struct *work); void ath_ani_calibrate(unsigned long data); -void ath_start_ani(struct ath_common *common); +void ath_start_ani(struct ath_softc *sc); +void ath_stop_ani(struct ath_softc *sc); +void ath_check_ani(struct ath_softc *sc); +int ath_update_survey_stats(struct ath_softc *sc); +void ath_update_survey_nf(struct ath_softc *sc, int channel); +void ath9k_queue_reset(struct ath_softc *sc, enum ath_reset_type type); /**********/ /* BTCOEX */ /**********/ +enum bt_op_flags { + BT_OP_PRIORITY_DETECTED, + BT_OP_SCAN, +}; + struct ath_btcoex { bool hw_timer_enabled; spinlock_t btcoex_lock; struct timer_list period_timer; /* Timer for BT period */ u32 bt_priority_cnt; unsigned long bt_priority_time; + unsigned long op_flags; int bt_stomp_type; /* Types of BT stomping */ u32 btcoex_no_stomp; /* in usec */ u32 btcoex_period; /* in usec */ u32 btscan_no_stomp; /* in usec */ u32 duty_cycle; + u32 bt_wait_time; struct ath_gen_timer *no_stomp_timer; /* Timer for no BT stomping */ struct ath_mci_profile mci; }; @@ -466,6 +489,7 @@ void ath9k_btcoex_timer_resume(struct ath_softc *sc); void ath9k_btcoex_timer_pause(struct ath_softc *sc); void ath9k_btcoex_handle_interrupt(struct ath_softc *sc, u32 status); u16 ath9k_btcoex_aggr_limit(struct ath_softc *sc, u32 max_4ms_framelen); +void ath9k_btcoex_stop_gen_timer(struct ath_softc *sc); #else static inline int ath9k_init_btcoex(struct ath_softc *sc) { @@ -489,8 +513,17 @@ static inline u16 ath9k_btcoex_aggr_limit(struct ath_softc *sc, { return 0; } +static inline void ath9k_btcoex_stop_gen_timer(struct ath_softc *sc) +{ +} #endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */ +struct ath9k_wow_pattern { + u8 pattern_bytes[MAX_PATTERN_SIZE]; + u8 mask_bytes[MAX_PATTERN_SIZE]; + u32 pattern_len; +}; + /********************/ /* LED Control */ /********************/ @@ -514,8 +547,10 @@ static inline void ath_deinit_leds(struct ath_softc *sc) } #endif - +/*******************************/ /* Antenna diversity/combining */ +/*******************************/ + #define ATH_ANT_RX_CURRENT_SHIFT 4 #define ATH_ANT_RX_MAIN_SHIFT 2 #define ATH_ANT_RX_MASK 0x3 @@ -568,6 +603,9 @@ struct ath_ant_comb { unsigned long scan_start_time; }; +void ath_ant_comb_scan(struct ath_softc *sc, struct ath_rx_status *rs); +void ath_ant_comb_update(struct ath_softc *sc); + /********************/ /* Main driver core */ /********************/ @@ -585,15 +623,14 @@ struct ath_ant_comb { #define ATH_TXPOWER_MAX 100 /* .5 dBm units */ #define ATH_RATE_DUMMY_MARKER 0 -#define SC_OP_INVALID BIT(0) -#define SC_OP_BEACONS BIT(1) -#define SC_OP_OFFCHANNEL BIT(2) -#define SC_OP_RXFLUSH BIT(3) -#define SC_OP_TSF_RESET BIT(4) -#define SC_OP_BT_PRIORITY_DETECTED BIT(5) -#define SC_OP_BT_SCAN BIT(6) -#define SC_OP_ANI_RUN BIT(7) -#define SC_OP_PRIM_STA_VIF BIT(8) +enum sc_op_flags { + SC_OP_INVALID, + SC_OP_BEACONS, + SC_OP_RXFLUSH, + SC_OP_ANI_RUN, + SC_OP_PRIM_STA_VIF, + SC_OP_HW_RESET, +}; /* Powersave flags */ #define PS_WAIT_FOR_BEACON BIT(0) @@ -639,9 +676,9 @@ struct ath_softc { struct completion paprd_complete; unsigned int hw_busy_count; + unsigned long sc_flags; u32 intrstatus; - u32 sc_flags; /* SC_OP_* */ u16 ps_flags; /* PS_* */ u16 curtxpow; bool ps_enabled; @@ -679,6 +716,7 @@ struct ath_softc { #ifdef CONFIG_ATH9K_BTCOEX_SUPPORT struct ath_btcoex btcoex; struct ath_mci_coex mci_coex; + struct work_struct mci_work; #endif struct ath_descdma txsdma; @@ -686,6 +724,13 @@ struct ath_softc { struct ath_ant_comb ant_comb; u8 ant_tx, ant_rx; struct 
dfs_pattern_detector *dfs_detector; + u32 wow_enabled; + +#ifdef CONFIG_PM_SLEEP + atomic_t wow_got_bmiss_intr; + atomic_t wow_sleep_proc_intr; /* in the middle of WoW sleep ? */ + u32 wow_intr_before_sleep; +#endif }; void ath9k_tasklet(unsigned long data); @@ -701,6 +746,7 @@ extern int ath9k_modparam_nohwcrypt; extern int led_blink; extern bool is_ath9k_unloaded; +u8 ath9k_parse_mpdudensity(u8 mpdudensity); irqreturn_t ath_isr(int irq, void *dev); int ath9k_init_device(u16 devid, struct ath_softc *sc, const struct ath_bus_ops *bus_ops); @@ -737,5 +783,4 @@ void ath9k_calculate_iter_data(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ath9k_vif_iter_data *iter_data); - #endif /* ATH9K_H */ diff --git a/drivers/net/wireless/ath/ath9k/beacon.c b/drivers/net/wireless/ath/ath9k/beacon.c index 11bc55e3d697..76f07d8c272d 100644 --- a/drivers/net/wireless/ath/ath9k/beacon.c +++ b/drivers/net/wireless/ath/ath9k/beacon.c @@ -30,7 +30,7 @@ static void ath9k_reset_beacon_status(struct ath_softc *sc) * the operating mode of the station (AP or AdHoc). Parameters are AIFS * settings and channel width min/max */ -int ath_beaconq_config(struct ath_softc *sc) +static void ath9k_beaconq_config(struct ath_softc *sc) { struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); @@ -38,6 +38,7 @@ int ath_beaconq_config(struct ath_softc *sc) struct ath_txq *txq; ath9k_hw_get_txq_props(ah, sc->beacon.beaconq, &qi); + if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) { /* Always burst out beacon and CAB traffic. */ qi.tqi_aifs = 1; @@ -48,17 +49,17 @@ int ath_beaconq_config(struct ath_softc *sc) txq = sc->tx.txq_map[WME_AC_BE]; ath9k_hw_get_txq_props(ah, txq->axq_qnum, &qi_be); qi.tqi_aifs = qi_be.tqi_aifs; - qi.tqi_cwmin = 4*qi_be.tqi_cwmin; + if (ah->slottime == ATH9K_SLOT_TIME_20) + qi.tqi_cwmin = 2*qi_be.tqi_cwmin; + else + qi.tqi_cwmin = 4*qi_be.tqi_cwmin; qi.tqi_cwmax = qi_be.tqi_cwmax; } if (!ath9k_hw_set_txq_props(ah, sc->beacon.beaconq, &qi)) { - ath_err(common, - "Unable to update h/w beacon queue parameters\n"); - return 0; + ath_err(common, "Unable to update h/w beacon queue parameters\n"); } else { ath9k_hw_resettxqueue(ah, sc->beacon.beaconq); - return 1; } } @@ -67,7 +68,7 @@ int ath_beaconq_config(struct ath_softc *sc) * up rate codes, and channel flags. Beacons are always sent out at the * lowest rate, and are not retried. 
*/ -static void ath_beacon_setup(struct ath_softc *sc, struct ieee80211_vif *vif, +static void ath9k_beacon_setup(struct ath_softc *sc, struct ieee80211_vif *vif, struct ath_buf *bf, int rateidx) { struct sk_buff *skb = bf->bf_mpdu; @@ -78,8 +79,6 @@ static void ath_beacon_setup(struct ath_softc *sc, struct ieee80211_vif *vif, u8 chainmask = ah->txchainmask; u8 rate = 0; - ath9k_reset_beacon_status(sc); - sband = &sc->sbands[common->hw->conf.channel->band]; rate = sband->bitrates[rateidx].hw_value; if (vif->bss_conf.use_short_preamble) @@ -108,7 +107,7 @@ static void ath_beacon_setup(struct ath_softc *sc, struct ieee80211_vif *vif, ath9k_hw_set_txdesc(ah, bf->bf_desc, &info); } -static void ath_tx_cabq(struct ieee80211_hw *hw, struct sk_buff *skb) +static void ath9k_tx_cabq(struct ieee80211_hw *hw, struct sk_buff *skb) { struct ath_softc *sc = hw->priv; struct ath_common *common = ath9k_hw_common(sc->sc_ah); @@ -125,28 +124,22 @@ static void ath_tx_cabq(struct ieee80211_hw *hw, struct sk_buff *skb) } } -static struct ath_buf *ath_beacon_generate(struct ieee80211_hw *hw, - struct ieee80211_vif *vif) +static struct ath_buf *ath9k_beacon_generate(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) { struct ath_softc *sc = hw->priv; struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ath_buf *bf; - struct ath_vif *avp; + struct ath_vif *avp = (void *)vif->drv_priv; struct sk_buff *skb; - struct ath_txq *cabq; + struct ath_txq *cabq = sc->beacon.cabq; struct ieee80211_tx_info *info; + struct ieee80211_mgmt *mgmt_hdr; int cabq_depth; - ath9k_reset_beacon_status(sc); - - avp = (void *)vif->drv_priv; - cabq = sc->beacon.cabq; - - if ((avp->av_bcbuf == NULL) || !avp->is_bslot_active) + if (avp->av_bcbuf == NULL) return NULL; - /* Release the old beacon first */ - bf = avp->av_bcbuf; skb = bf->bf_mpdu; if (skb) { @@ -156,14 +149,14 @@ static struct ath_buf *ath_beacon_generate(struct ieee80211_hw *hw, bf->bf_buf_addr = 0; } - /* Get a new beacon from mac80211 */ - skb = ieee80211_beacon_get(hw, vif); - bf->bf_mpdu = skb; if (skb == NULL) return NULL; - ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp = - avp->tsf_adjust; + + bf->bf_mpdu = skb; + + mgmt_hdr = (struct ieee80211_mgmt *)skb->data; + mgmt_hdr->u.beacon.timestamp = avp->tsf_adjust; info = IEEE80211_SKB_CB(skb); if (info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ) { @@ -209,61 +202,52 @@ static struct ath_buf *ath_beacon_generate(struct ieee80211_hw *hw, } } - ath_beacon_setup(sc, vif, bf, info->control.rates[0].idx); + ath9k_beacon_setup(sc, vif, bf, info->control.rates[0].idx); while (skb) { - ath_tx_cabq(hw, skb); + ath9k_tx_cabq(hw, skb); skb = ieee80211_get_buffered_bc(hw, vif); } return bf; } -int ath_beacon_alloc(struct ath_softc *sc, struct ieee80211_vif *vif) +void ath9k_beacon_assign_slot(struct ath_softc *sc, struct ieee80211_vif *vif) { struct ath_common *common = ath9k_hw_common(sc->sc_ah); - struct ath_vif *avp; - struct ath_buf *bf; - struct sk_buff *skb; - struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; - __le64 tstamp; - - avp = (void *)vif->drv_priv; - - /* Allocate a beacon descriptor if we haven't done so. */ - if (!avp->av_bcbuf) { - /* Allocate beacon state for hostap/ibss. We know - * a buffer is available. */ - avp->av_bcbuf = list_first_entry(&sc->beacon.bbuf, - struct ath_buf, list); - list_del(&avp->av_bcbuf->list); - - if (ath9k_uses_beacons(vif->type)) { - int slot; - /* - * Assign the vif to a beacon xmit slot. As - * above, this cannot fail to find one. 
- */ - avp->av_bslot = 0; - for (slot = 0; slot < ATH_BCBUF; slot++) - if (sc->beacon.bslot[slot] == NULL) { - avp->av_bslot = slot; - avp->is_bslot_active = false; - - /* NB: keep looking for a double slot */ - if (slot == 0 || !sc->beacon.bslot[slot-1]) - break; - } - BUG_ON(sc->beacon.bslot[avp->av_bslot] != NULL); - sc->beacon.bslot[avp->av_bslot] = vif; - sc->nbcnvifs++; + struct ath_vif *avp = (void *)vif->drv_priv; + int slot; + + avp->av_bcbuf = list_first_entry(&sc->beacon.bbuf, struct ath_buf, list); + list_del(&avp->av_bcbuf->list); + + for (slot = 0; slot < ATH_BCBUF; slot++) { + if (sc->beacon.bslot[slot] == NULL) { + avp->av_bslot = slot; + break; } } - /* release the previous beacon frame, if it already exists. */ - bf = avp->av_bcbuf; - if (bf->bf_mpdu != NULL) { - skb = bf->bf_mpdu; + sc->beacon.bslot[avp->av_bslot] = vif; + sc->nbcnvifs++; + + ath_dbg(common, CONFIG, "Added interface at beacon slot: %d\n", + avp->av_bslot); +} + +void ath9k_beacon_remove_slot(struct ath_softc *sc, struct ieee80211_vif *vif) +{ + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + struct ath_vif *avp = (void *)vif->drv_priv; + struct ath_buf *bf = avp->av_bcbuf; + + ath_dbg(common, CONFIG, "Removing interface at beacon slot: %d\n", + avp->av_bslot); + + tasklet_disable(&sc->bcon_tasklet); + + if (bf && bf->bf_mpdu) { + struct sk_buff *skb = bf->bf_mpdu; dma_unmap_single(sc->dev, bf->bf_buf_addr, skb->len, DMA_TO_DEVICE); dev_kfree_skb_any(skb); @@ -271,99 +255,74 @@ int ath_beacon_alloc(struct ath_softc *sc, struct ieee80211_vif *vif) bf->bf_buf_addr = 0; } - /* NB: the beacon data buffer must be 32-bit aligned. */ - skb = ieee80211_beacon_get(sc->hw, vif); - if (skb == NULL) - return -ENOMEM; + avp->av_bcbuf = NULL; + sc->beacon.bslot[avp->av_bslot] = NULL; + sc->nbcnvifs--; + list_add_tail(&bf->list, &sc->beacon.bbuf); - tstamp = ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp; - sc->beacon.bc_tstamp = (u32) le64_to_cpu(tstamp); - /* Calculate a TSF adjustment factor required for staggered beacons. */ - if (avp->av_bslot > 0) { - u64 tsfadjust; - int intval; - - intval = cur_conf->beacon_interval ? : ATH_DEFAULT_BINTVAL; + tasklet_enable(&sc->bcon_tasklet); +} - /* - * Calculate the TSF offset for this beacon slot, i.e., the - * number of usecs that need to be added to the timestamp field - * in Beacon and Probe Response frames. Beacon slot 0 is - * processed at the correct offset, so it does not require TSF - * adjustment. Other slots are adjusted to get the timestamp - * close to the TBTT for the BSS. - */ - tsfadjust = TU_TO_USEC(intval * avp->av_bslot) / ATH_BCBUF; - avp->tsf_adjust = cpu_to_le64(tsfadjust); +static int ath9k_beacon_choose_slot(struct ath_softc *sc) +{ + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; + u16 intval; + u32 tsftu; + u64 tsf; + int slot; - ath_dbg(common, BEACON, - "stagger beacons, bslot %d intval %u tsfadjust %llu\n", - avp->av_bslot, intval, (unsigned long long)tsfadjust); + if (sc->sc_ah->opmode != NL80211_IFTYPE_AP) { + ath_dbg(common, BEACON, "slot 0, tsf: %llu\n", + ath9k_hw_gettsf64(sc->sc_ah)); + return 0; + } - ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp = - avp->tsf_adjust; - } else - avp->tsf_adjust = cpu_to_le64(0); + intval = cur_conf->beacon_interval ? 
: ATH_DEFAULT_BINTVAL; + tsf = ath9k_hw_gettsf64(sc->sc_ah); + tsf += TU_TO_USEC(sc->sc_ah->config.sw_beacon_response_time); + tsftu = TSF_TO_TU((tsf * ATH_BCBUF) >>32, tsf * ATH_BCBUF); + slot = (tsftu % (intval * ATH_BCBUF)) / intval; - bf->bf_mpdu = skb; - bf->bf_buf_addr = dma_map_single(sc->dev, skb->data, - skb->len, DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(sc->dev, bf->bf_buf_addr))) { - dev_kfree_skb_any(skb); - bf->bf_mpdu = NULL; - bf->bf_buf_addr = 0; - ath_err(common, "dma_mapping_error on beacon alloc\n"); - return -ENOMEM; - } - avp->is_bslot_active = true; + ath_dbg(common, BEACON, "slot: %d tsf: %llu tsftu: %u\n", + slot, tsf, tsftu / ATH_BCBUF); - return 0; + return slot; } -void ath_beacon_return(struct ath_softc *sc, struct ath_vif *avp) +void ath9k_set_tsfadjust(struct ath_softc *sc, struct ieee80211_vif *vif) { - if (avp->av_bcbuf != NULL) { - struct ath_buf *bf; - - avp->is_bslot_active = false; - if (avp->av_bslot != -1) { - sc->beacon.bslot[avp->av_bslot] = NULL; - sc->nbcnvifs--; - avp->av_bslot = -1; - } + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; + struct ath_vif *avp = (void *)vif->drv_priv; + u64 tsfadjust; - bf = avp->av_bcbuf; - if (bf->bf_mpdu != NULL) { - struct sk_buff *skb = bf->bf_mpdu; - dma_unmap_single(sc->dev, bf->bf_buf_addr, - skb->len, DMA_TO_DEVICE); - dev_kfree_skb_any(skb); - bf->bf_mpdu = NULL; - bf->bf_buf_addr = 0; - } - list_add_tail(&bf->list, &sc->beacon.bbuf); + if (avp->av_bslot == 0) + return; - avp->av_bcbuf = NULL; - } + tsfadjust = cur_conf->beacon_interval * avp->av_bslot / ATH_BCBUF; + avp->tsf_adjust = cpu_to_le64(TU_TO_USEC(tsfadjust)); + + ath_dbg(common, CONFIG, "tsfadjust is: %llu for bslot: %d\n", + (unsigned long long)tsfadjust, avp->av_bslot); } -void ath_beacon_tasklet(unsigned long data) +void ath9k_beacon_tasklet(unsigned long data) { struct ath_softc *sc = (struct ath_softc *)data; - struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); struct ath_buf *bf = NULL; struct ieee80211_vif *vif; bool edma = !!(ah->caps.hw_caps & ATH9K_HW_CAP_EDMA); int slot; - u32 bfaddr, bc = 0; - if (work_pending(&sc->hw_reset_work)) { + if (test_bit(SC_OP_HW_RESET, &sc->sc_flags)) { ath_dbg(common, RESET, "reset work is pending, skip beaconing now\n"); return; } + /* * Check if the previous beacon has gone out. If * not don't try to post another, skip this period @@ -387,55 +346,25 @@ void ath_beacon_tasklet(unsigned long data) } else if (sc->beacon.bmisscnt >= BSTUCK_THRESH) { ath_dbg(common, BSTUCK, "beacon is officially stuck\n"); sc->beacon.bmisscnt = 0; - sc->sc_flags |= SC_OP_TSF_RESET; - ieee80211_queue_work(sc->hw, &sc->hw_reset_work); + ath9k_queue_reset(sc, RESET_TYPE_BEACON_STUCK); } return; } - /* - * Generate beacon frames. we are sending frames - * staggered so calculate the slot for this frame based - * on the tsf to safeguard against missing an swba. - */ - - - if (ah->opmode == NL80211_IFTYPE_AP) { - u16 intval; - u32 tsftu; - u64 tsf; - - intval = cur_conf->beacon_interval ? 
: ATH_DEFAULT_BINTVAL; - tsf = ath9k_hw_gettsf64(ah); - tsf += TU_TO_USEC(ah->config.sw_beacon_response_time); - tsftu = TSF_TO_TU((tsf * ATH_BCBUF) >>32, tsf * ATH_BCBUF); - slot = (tsftu % (intval * ATH_BCBUF)) / intval; - vif = sc->beacon.bslot[slot]; - - ath_dbg(common, BEACON, - "slot %d [tsf %llu tsftu %u intval %u] vif %p\n", - slot, tsf, tsftu / ATH_BCBUF, intval, vif); - } else { - slot = 0; - vif = sc->beacon.bslot[slot]; - } + slot = ath9k_beacon_choose_slot(sc); + vif = sc->beacon.bslot[slot]; + if (!vif || !vif->bss_conf.enable_beacon) + return; - bfaddr = 0; - if (vif) { - bf = ath_beacon_generate(sc->hw, vif); - if (bf != NULL) { - bfaddr = bf->bf_daddr; - bc = 1; - } + bf = ath9k_beacon_generate(sc->hw, vif); + WARN_ON(!bf); - if (sc->beacon.bmisscnt != 0) { - ath_dbg(common, BSTUCK, - "resume beacon xmit after %u misses\n", - sc->beacon.bmisscnt); - sc->beacon.bmisscnt = 0; - } + if (sc->beacon.bmisscnt != 0) { + ath_dbg(common, BSTUCK, "resume beacon xmit after %u misses\n", + sc->beacon.bmisscnt); + sc->beacon.bmisscnt = 0; } /* @@ -455,39 +384,40 @@ void ath_beacon_tasklet(unsigned long data) * set to ATH_BCBUF so this check is a noop. */ if (sc->beacon.updateslot == UPDATE) { - sc->beacon.updateslot = COMMIT; /* commit next beacon */ + sc->beacon.updateslot = COMMIT; sc->beacon.slotupdate = slot; - } else if (sc->beacon.updateslot == COMMIT && sc->beacon.slotupdate == slot) { + } else if (sc->beacon.updateslot == COMMIT && + sc->beacon.slotupdate == slot) { ah->slottime = sc->beacon.slottime; ath9k_hw_init_global_settings(ah); sc->beacon.updateslot = OK; } - if (bfaddr != 0) { + + if (bf) { + ath9k_reset_beacon_status(sc); + + ath_dbg(common, BEACON, + "Transmitting beacon for slot: %d\n", slot); + /* NB: cabq traffic should already be queued and primed */ - ath9k_hw_puttxbuf(ah, sc->beacon.beaconq, bfaddr); + ath9k_hw_puttxbuf(ah, sc->beacon.beaconq, bf->bf_daddr); if (!edma) ath9k_hw_txstart(ah, sc->beacon.beaconq); - - sc->beacon.ast_be_xmit += bc; /* XXX per-vif? */ } } -static void ath9k_beacon_init(struct ath_softc *sc, - u32 next_beacon, - u32 beacon_period) +static void ath9k_beacon_init(struct ath_softc *sc, u32 nexttbtt, u32 intval) { - if (sc->sc_flags & SC_OP_TSF_RESET) { - ath9k_ps_wakeup(sc); - ath9k_hw_reset_tsf(sc->sc_ah); - } - - ath9k_hw_beaconinit(sc->sc_ah, next_beacon, beacon_period); + struct ath_hw *ah = sc->sc_ah; - if (sc->sc_flags & SC_OP_TSF_RESET) { - ath9k_ps_restore(sc); - sc->sc_flags &= ~SC_OP_TSF_RESET; - } + ath9k_hw_disable_interrupts(ah); + ath9k_hw_reset_tsf(ah); + ath9k_beaconq_config(sc); + ath9k_hw_beaconinit(ah, nexttbtt, intval); + sc->beacon.bmisscnt = 0; + ath9k_hw_set_interrupts(ah); + ath9k_hw_enable_interrupts(ah); } /* @@ -495,32 +425,27 @@ static void ath9k_beacon_init(struct ath_softc *sc, * burst together. For the former arrange for the SWBA to be delivered for each * slot. Slots that are not occupied will generate nothing. */ -static void ath_beacon_config_ap(struct ath_softc *sc, - struct ath_beacon_config *conf) +static void ath9k_beacon_config_ap(struct ath_softc *sc, + struct ath_beacon_config *conf) { struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); u32 nexttbtt, intval; /* NB: the beacon interval is kept internally in TU's */ intval = TU_TO_USEC(conf->beacon_interval); - intval /= ATH_BCBUF; /* for staggered beacons */ + intval /= ATH_BCBUF; nexttbtt = intval; - /* - * In AP mode we enable the beacon timers and SWBA interrupts to - * prepare beacon frames. 
- */ - ah->imask |= ATH9K_INT_SWBA; - ath_beaconq_config(sc); + if (conf->enable_beacon) + ah->imask |= ATH9K_INT_SWBA; + else + ah->imask &= ~ATH9K_INT_SWBA; - /* Set the computed AP beacon timers */ + ath_dbg(common, BEACON, "AP nexttbtt: %u intval: %u conf_intval: %u\n", + nexttbtt, intval, conf->beacon_interval); - ath9k_hw_disable_interrupts(ah); - sc->sc_flags |= SC_OP_TSF_RESET; ath9k_beacon_init(sc, nexttbtt, intval); - sc->beacon.bmisscnt = 0; - ath9k_hw_set_interrupts(ah); - ath9k_hw_enable_interrupts(ah); } /* @@ -531,8 +456,8 @@ static void ath_beacon_config_ap(struct ath_softc *sc, * we'll receive a BMISS interrupt when we stop seeing beacons from the AP * we've associated with. */ -static void ath_beacon_config_sta(struct ath_softc *sc, - struct ath_beacon_config *conf) +static void ath9k_beacon_config_sta(struct ath_softc *sc, + struct ath_beacon_config *conf) { struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); @@ -544,7 +469,7 @@ static void ath_beacon_config_sta(struct ath_softc *sc, int num_beacons, offset, dtim_dec_count, cfp_dec_count; /* No need to configure beacon if we are not associated */ - if (!common->curaid) { + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) { ath_dbg(common, BEACON, "STA is not yet associated..skipping beacon config\n"); return; @@ -651,97 +576,65 @@ static void ath_beacon_config_sta(struct ath_softc *sc, ath9k_hw_enable_interrupts(ah); } -static void ath_beacon_config_adhoc(struct ath_softc *sc, - struct ath_beacon_config *conf) +static void ath9k_beacon_config_adhoc(struct ath_softc *sc, + struct ath_beacon_config *conf) { struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); - u32 tsf, intval, nexttbtt; + u32 intval, nexttbtt; ath9k_reset_beacon_status(sc); - if (!(sc->sc_flags & SC_OP_BEACONS)) - ath9k_hw_settsf64(ah, sc->beacon.bc_tstamp); intval = TU_TO_USEC(conf->beacon_interval); - tsf = roundup(ath9k_hw_gettsf32(ah) + TU_TO_USEC(FUDGE), intval); - nexttbtt = tsf + intval; - - ath_dbg(common, BEACON, "IBSS nexttbtt %u intval %u (%u)\n", - nexttbtt, intval, conf->beacon_interval); - - /* - * In IBSS mode enable the beacon timers but only enable SWBA interrupts - * if we need to manually prepare beacon frames. Otherwise we use a - * self-linked tx descriptor and let the hardware deal with things. 
- */ - ah->imask |= ATH9K_INT_SWBA; + nexttbtt = intval; - ath_beaconq_config(sc); + if (conf->enable_beacon) + ah->imask |= ATH9K_INT_SWBA; + else + ah->imask &= ~ATH9K_INT_SWBA; - /* Set the computed ADHOC beacon timers */ + ath_dbg(common, BEACON, "IBSS nexttbtt: %u intval: %u conf_intval: %u\n", + nexttbtt, intval, conf->beacon_interval); - ath9k_hw_disable_interrupts(ah); ath9k_beacon_init(sc, nexttbtt, intval); - sc->beacon.bmisscnt = 0; - - ath9k_hw_set_interrupts(ah); - ath9k_hw_enable_interrupts(ah); } -static bool ath9k_allow_beacon_config(struct ath_softc *sc, - struct ieee80211_vif *vif) +bool ath9k_allow_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif) { - struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; struct ath_common *common = ath9k_hw_common(sc->sc_ah); - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; struct ath_vif *avp = (void *)vif->drv_priv; - /* - * Can not have different beacon interval on multiple - * AP interface case - */ - if ((sc->sc_ah->opmode == NL80211_IFTYPE_AP) && - (sc->nbcnvifs > 1) && - (vif->type == NL80211_IFTYPE_AP) && - (cur_conf->beacon_interval != bss_conf->beacon_int)) { - ath_dbg(common, CONFIG, - "Changing beacon interval of multiple AP interfaces !\n"); - return false; - } - /* - * Can not configure station vif's beacon config - * while on AP opmode - */ - if ((sc->sc_ah->opmode == NL80211_IFTYPE_AP) && - (vif->type != NL80211_IFTYPE_AP)) { - ath_dbg(common, CONFIG, - "STA vif's beacon not allowed on AP mode\n"); - return false; + if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) { + if ((vif->type != NL80211_IFTYPE_AP) || + (sc->nbcnvifs > 1)) { + ath_dbg(common, CONFIG, + "An AP interface is already present !\n"); + return false; + } } - /* - * Do not allow beacon config if HW was already configured - * with another STA vif - */ - if ((sc->sc_ah->opmode == NL80211_IFTYPE_STATION) && - (vif->type == NL80211_IFTYPE_STATION) && - (sc->sc_flags & SC_OP_BEACONS) && - !avp->primary_sta_vif) { - ath_dbg(common, CONFIG, - "Beacon already configured for a station interface\n"); - return false; + + if (sc->sc_ah->opmode == NL80211_IFTYPE_STATION) { + if ((vif->type == NL80211_IFTYPE_STATION) && + test_bit(SC_OP_BEACONS, &sc->sc_flags) && + !avp->primary_sta_vif) { + ath_dbg(common, CONFIG, + "Beacon already configured for a station interface\n"); + return false; + } } + return true; } -void ath_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif) +static void ath9k_cache_beacon_config(struct ath_softc *sc, + struct ieee80211_bss_conf *bss_conf) { + struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; - if (!ath9k_allow_beacon_config(sc, vif)) - return; + ath_dbg(common, BEACON, + "Caching beacon data for BSS: %pM\n", bss_conf->bssid); - /* Setup the beacon configuration parameters */ cur_conf->beacon_interval = bss_conf->beacon_int; cur_conf->dtim_period = bss_conf->dtim_period; cur_conf->listen_interval = 1; @@ -766,73 +659,62 @@ void ath_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif) if (cur_conf->dtim_period == 0) cur_conf->dtim_period = 1; - ath_set_beacon(sc); } -static bool ath_has_valid_bslot(struct ath_softc *sc) +void ath9k_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif, + u32 changed) { - struct ath_vif *avp; - int slot; - bool found = false; + struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; - 
for (slot = 0; slot < ATH_BCBUF; slot++) { - if (sc->beacon.bslot[slot]) { - avp = (void *)sc->beacon.bslot[slot]->drv_priv; - if (avp->is_bslot_active) { - found = true; - break; + if (sc->sc_ah->opmode == NL80211_IFTYPE_STATION) { + ath9k_cache_beacon_config(sc, bss_conf); + ath9k_set_beacon(sc); + set_bit(SC_OP_BEACONS, &sc->sc_flags); + } else { + /* + * Take care of multiple interfaces when + * enabling/disabling SWBA. + */ + if (changed & BSS_CHANGED_BEACON_ENABLED) { + if (!bss_conf->enable_beacon && + (sc->nbcnvifs <= 1)) { + cur_conf->enable_beacon = false; + } else if (bss_conf->enable_beacon) { + cur_conf->enable_beacon = true; + ath9k_cache_beacon_config(sc, bss_conf); } } + + if (cur_conf->beacon_interval) { + ath9k_set_beacon(sc); + + if (cur_conf->enable_beacon) + set_bit(SC_OP_BEACONS, &sc->sc_flags); + else + clear_bit(SC_OP_BEACONS, &sc->sc_flags); + } } - return found; } - -void ath_set_beacon(struct ath_softc *sc) +void ath9k_set_beacon(struct ath_softc *sc) { struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; switch (sc->sc_ah->opmode) { case NL80211_IFTYPE_AP: - if (ath_has_valid_bslot(sc)) - ath_beacon_config_ap(sc, cur_conf); + ath9k_beacon_config_ap(sc, cur_conf); break; case NL80211_IFTYPE_ADHOC: case NL80211_IFTYPE_MESH_POINT: - ath_beacon_config_adhoc(sc, cur_conf); + ath9k_beacon_config_adhoc(sc, cur_conf); break; case NL80211_IFTYPE_STATION: - ath_beacon_config_sta(sc, cur_conf); + ath9k_beacon_config_sta(sc, cur_conf); break; default: ath_dbg(common, CONFIG, "Unsupported beaconing mode\n"); return; } - - sc->sc_flags |= SC_OP_BEACONS; -} - -void ath9k_set_beaconing_status(struct ath_softc *sc, bool status) -{ - struct ath_hw *ah = sc->sc_ah; - - if (!ath_has_valid_bslot(sc)) { - sc->sc_flags &= ~SC_OP_BEACONS; - return; - } - - ath9k_ps_wakeup(sc); - if (status) { - /* Re-enable beaconing */ - ah->imask |= ATH9K_INT_SWBA; - ath9k_hw_set_interrupts(ah); - } else { - /* Disable SWBA interrupt */ - ah->imask &= ~ATH9K_INT_SWBA; - ath9k_hw_set_interrupts(ah); - tasklet_kill(&sc->bcon_tasklet); - ath9k_hw_stop_dma_queue(ah, sc->beacon.beaconq); - } - ath9k_ps_restore(sc); } diff --git a/drivers/net/wireless/ath/ath9k/btcoex.c b/drivers/net/wireless/ath/ath9k/btcoex.c index 1ca6da80d4ad..acd437384fe4 100644 --- a/drivers/net/wireless/ath/ath9k/btcoex.c +++ b/drivers/net/wireless/ath/ath9k/btcoex.c @@ -336,10 +336,16 @@ static void ar9003_btcoex_bt_stomp(struct ath_hw *ah, enum ath_stomp_type stomp_type) { struct ath_btcoex_hw *btcoex = &ah->btcoex_hw; - const u32 *weight = AR_SREV_9462(ah) ? 
ar9003_wlan_weights[stomp_type] : - ar9462_wlan_weights[stomp_type]; + const u32 *weight = ar9003_wlan_weights[stomp_type]; int i; + if (AR_SREV_9462(ah)) { + if ((stomp_type == ATH_BTCOEX_STOMP_LOW) && + btcoex->mci.stomp_ftp) + stomp_type = ATH_BTCOEX_STOMP_LOW_FTP; + weight = ar9462_wlan_weights[stomp_type]; + } + for (i = 0; i < AR9300_NUM_WLAN_WEIGHTS; i++) { btcoex->bt_weight[i] = AR9300_BT_WGHT; btcoex->wlan_weight[i] = weight[i]; diff --git a/drivers/net/wireless/ath/ath9k/btcoex.h b/drivers/net/wireless/ath/ath9k/btcoex.h index 3a1e1cfabd5e..20092f98658f 100644 --- a/drivers/net/wireless/ath/ath9k/btcoex.h +++ b/drivers/net/wireless/ath/ath9k/btcoex.h @@ -36,6 +36,9 @@ #define ATH_BT_CNT_THRESHOLD 3 #define ATH_BT_CNT_SCAN_THRESHOLD 15 +#define ATH_BTCOEX_RX_WAIT_TIME 100 +#define ATH_BTCOEX_STOMP_FTP_THRESH 5 + #define AR9300_NUM_BT_WEIGHTS 4 #define AR9300_NUM_WLAN_WEIGHTS 4 /* Defines the BT AR_BT_COEX_WGHT used */ @@ -80,6 +83,7 @@ struct ath9k_hw_mci { u8 bt_ver_major; u8 bt_ver_minor; u8 bt_state; + u8 stomp_ftp; }; struct ath_btcoex_hw { diff --git a/drivers/net/wireless/ath/ath9k/calib.h b/drivers/net/wireless/ath/ath9k/calib.h index 3b33996d97df..1060c19a5012 100644 --- a/drivers/net/wireless/ath/ath9k/calib.h +++ b/drivers/net/wireless/ath/ath9k/calib.h @@ -30,10 +30,10 @@ struct ar5416IniArray { u32 ia_columns; }; -#define INIT_INI_ARRAY(iniarray, array, rows, columns) do { \ +#define INIT_INI_ARRAY(iniarray, array) do { \ (iniarray)->ia_array = (u32 *)(array); \ - (iniarray)->ia_rows = (rows); \ - (iniarray)->ia_columns = (columns); \ + (iniarray)->ia_rows = ARRAY_SIZE(array); \ + (iniarray)->ia_columns = ARRAY_SIZE(array[0]); \ } while (0) #define INI_RA(iniarray, row, column) \ diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c index fde700c4e490..68b643c8943c 100644 --- a/drivers/net/wireless/ath/ath9k/debug.c +++ b/drivers/net/wireless/ath/ath9k/debug.c @@ -205,11 +205,10 @@ static ssize_t write_file_disable_ani(struct file *file, common->disable_ani = !!disable_ani; if (disable_ani) { - sc->sc_flags &= ~SC_OP_ANI_RUN; - del_timer_sync(&common->ani.timer); + clear_bit(SC_OP_ANI_RUN, &sc->sc_flags); + ath_stop_ani(sc); } else { - sc->sc_flags |= SC_OP_ANI_RUN; - ath_start_ani(common); + ath_check_ani(sc); } return count; @@ -348,8 +347,6 @@ void ath_debug_stat_interrupt(struct ath_softc *sc, enum ath9k_int status) sc->debug.stats.istats.txok++; if (status & ATH9K_INT_TXURN) sc->debug.stats.istats.txurn++; - if (status & ATH9K_INT_MIB) - sc->debug.stats.istats.mib++; if (status & ATH9K_INT_RXPHY) sc->debug.stats.istats.rxphyerr++; if (status & ATH9K_INT_RXKCM) @@ -374,6 +371,8 @@ void ath_debug_stat_interrupt(struct ath_softc *sc, enum ath9k_int status) sc->debug.stats.istats.dtim++; if (status & ATH9K_INT_TSFOOR) sc->debug.stats.istats.tsfoor++; + if (status & ATH9K_INT_MCI) + sc->debug.stats.istats.mci++; } static ssize_t read_file_interrupt(struct file *file, char __user *user_buf, @@ -418,6 +417,7 @@ static ssize_t read_file_interrupt(struct file *file, char __user *user_buf, PR_IS("DTIMSYNC", dtimsync); PR_IS("DTIM", dtim); PR_IS("TSFOOR", tsfoor); + PR_IS("MCI", mci); PR_IS("TOTAL", total); len += snprintf(buf + len, mxlen - len, @@ -1318,7 +1318,7 @@ static int open_file_bb_mac_samps(struct inode *inode, struct file *file) u8 chainmask = (ah->rxchainmask << 3) | ah->rxchainmask; u8 nread; - if (sc->sc_flags & SC_OP_INVALID) + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) return -EAGAIN; buf = vmalloc(size); @@ -1555,6 
+1555,14 @@ int ath9k_init_debug(struct ath_hw *ah) &fops_interrupt); debugfs_create_file("xmit", S_IRUSR, sc->debug.debugfs_phy, sc, &fops_xmit); + debugfs_create_u32("qlen_bk", S_IRUSR | S_IWUSR, sc->debug.debugfs_phy, + &sc->tx.txq_max_pending[WME_AC_BK]); + debugfs_create_u32("qlen_be", S_IRUSR | S_IWUSR, sc->debug.debugfs_phy, + &sc->tx.txq_max_pending[WME_AC_BE]); + debugfs_create_u32("qlen_vi", S_IRUSR | S_IWUSR, sc->debug.debugfs_phy, + &sc->tx.txq_max_pending[WME_AC_VI]); + debugfs_create_u32("qlen_vo", S_IRUSR | S_IWUSR, sc->debug.debugfs_phy, + &sc->tx.txq_max_pending[WME_AC_VO]); debugfs_create_file("stations", S_IRUSR, sc->debug.debugfs_phy, sc, &fops_stations); debugfs_create_file("misc", S_IRUSR, sc->debug.debugfs_phy, sc, diff --git a/drivers/net/wireless/ath/ath9k/debug.h b/drivers/net/wireless/ath/ath9k/debug.h index c34da09d9103..8b9d080d89da 100644 --- a/drivers/net/wireless/ath/ath9k/debug.h +++ b/drivers/net/wireless/ath/ath9k/debug.h @@ -32,6 +32,19 @@ struct ath_buf; #define RESET_STAT_INC(sc, type) do { } while (0) #endif +enum ath_reset_type { + RESET_TYPE_BB_HANG, + RESET_TYPE_BB_WATCHDOG, + RESET_TYPE_FATAL_INT, + RESET_TYPE_TX_ERROR, + RESET_TYPE_TX_HANG, + RESET_TYPE_PLL_HANG, + RESET_TYPE_MAC_HANG, + RESET_TYPE_BEACON_STUCK, + RESET_TYPE_MCI, + __RESET_TYPE_MAX +}; + #ifdef CONFIG_ATH9K_DEBUGFS /** @@ -86,6 +99,7 @@ struct ath_interrupt_stats { u32 dtim; u32 bb_watchdog; u32 tsfoor; + u32 mci; /* Sync-cause stats */ u32 sync_cause_all; @@ -208,17 +222,6 @@ struct ath_rx_stats { u32 rx_frags; }; -enum ath_reset_type { - RESET_TYPE_BB_HANG, - RESET_TYPE_BB_WATCHDOG, - RESET_TYPE_FATAL_INT, - RESET_TYPE_TX_ERROR, - RESET_TYPE_TX_HANG, - RESET_TYPE_PLL_HANG, - RESET_TYPE_MAC_HANG, - __RESET_TYPE_MAX -}; - struct ath_stats { struct ath_interrupt_stats istats; struct ath_tx_stats txstats[ATH9K_NUM_TX_QUEUES]; diff --git a/drivers/net/wireless/ath/ath9k/eeprom.h b/drivers/net/wireless/ath/ath9k/eeprom.h index 33acb920ed3f..484b31305906 100644 --- a/drivers/net/wireless/ath/ath9k/eeprom.h +++ b/drivers/net/wireless/ath/ath9k/eeprom.h @@ -241,16 +241,12 @@ enum eeprom_param { EEP_TEMPSENSE_SLOPE, EEP_TEMPSENSE_SLOPE_PAL_ON, EEP_PWR_TABLE_OFFSET, - EEP_DRIVE_STRENGTH, - EEP_INTERNAL_REGULATOR, - EEP_SWREG, EEP_PAPRD, EEP_MODAL_VER, EEP_ANT_DIV_CTL1, EEP_CHAIN_MASK_REDUCE, EEP_ANTENNA_GAIN_2G, EEP_ANTENNA_GAIN_5G, - EEP_QUICK_DROP }; enum ar5416_rates { diff --git a/drivers/net/wireless/ath/ath9k/eeprom_4k.c b/drivers/net/wireless/ath/ath9k/eeprom_4k.c index 4322ac80c203..7d075105a85d 100644 --- a/drivers/net/wireless/ath/ath9k/eeprom_4k.c +++ b/drivers/net/wireless/ath/ath9k/eeprom_4k.c @@ -135,7 +135,7 @@ static u32 ath9k_hw_4k_dump_eeprom(struct ath_hw *ah, bool dump_base_hdr, if (!dump_base_hdr) { len += snprintf(buf + len, size - len, "%20s :\n", "2GHz modal Header"); - len += ath9k_dump_4k_modal_eeprom(buf, len, size, + len = ath9k_dump_4k_modal_eeprom(buf, len, size, &eep->modalHeader); goto out; } @@ -188,8 +188,7 @@ static int ath9k_hw_4k_check_eeprom(struct ath_hw *ah) { #define EEPROM_4K_SIZE (sizeof(struct ar5416_eeprom_4k) / sizeof(u16)) struct ath_common *common = ath9k_hw_common(ah); - struct ar5416_eeprom_4k *eep = - (struct ar5416_eeprom_4k *) &ah->eeprom.map4k; + struct ar5416_eeprom_4k *eep = &ah->eeprom.map4k; u16 *eepdata, temp, magic, magic2; u32 sum = 0, el; bool need_swap = false; diff --git a/drivers/net/wireless/ath/ath9k/eeprom_9287.c b/drivers/net/wireless/ath/ath9k/eeprom_9287.c index aa614767adff..cd742fb944c2 100644 --- 
a/drivers/net/wireless/ath/ath9k/eeprom_9287.c +++ b/drivers/net/wireless/ath/ath9k/eeprom_9287.c @@ -132,7 +132,7 @@ static u32 ath9k_hw_ar9287_dump_eeprom(struct ath_hw *ah, bool dump_base_hdr, if (!dump_base_hdr) { len += snprintf(buf + len, size - len, "%20s :\n", "2GHz modal Header"); - len += ar9287_dump_modal_eeprom(buf, len, size, + len = ar9287_dump_modal_eeprom(buf, len, size, &eep->modalHeader); goto out; } diff --git a/drivers/net/wireless/ath/ath9k/eeprom_def.c b/drivers/net/wireless/ath/ath9k/eeprom_def.c index b5fba8b18b8b..a8ac30a00720 100644 --- a/drivers/net/wireless/ath/ath9k/eeprom_def.c +++ b/drivers/net/wireless/ath/ath9k/eeprom_def.c @@ -211,11 +211,11 @@ static u32 ath9k_hw_def_dump_eeprom(struct ath_hw *ah, bool dump_base_hdr, if (!dump_base_hdr) { len += snprintf(buf + len, size - len, "%20s :\n", "2GHz modal Header"); - len += ath9k_def_dump_modal_eeprom(buf, len, size, + len = ath9k_def_dump_modal_eeprom(buf, len, size, &eep->modalHeader[0]); len += snprintf(buf + len, size - len, "%20s :\n", "5GHz modal Header"); - len += ath9k_def_dump_modal_eeprom(buf, len, size, + len = ath9k_def_dump_modal_eeprom(buf, len, size, &eep->modalHeader[1]); goto out; } @@ -264,8 +264,7 @@ static u32 ath9k_hw_def_dump_eeprom(struct ath_hw *ah, bool dump_base_hdr, static int ath9k_hw_def_check_eeprom(struct ath_hw *ah) { - struct ar5416_eeprom_def *eep = - (struct ar5416_eeprom_def *) &ah->eeprom.def; + struct ar5416_eeprom_def *eep = &ah->eeprom.def; struct ath_common *common = ath9k_hw_common(ah); u16 *eepdata, temp, magic, magic2; u32 sum = 0, el; diff --git a/drivers/net/wireless/ath/ath9k/gpio.c b/drivers/net/wireless/ath/ath9k/gpio.c index 281a9af0f1b6..bacdb8fb4ef4 100644 --- a/drivers/net/wireless/ath/ath9k/gpio.c +++ b/drivers/net/wireless/ath/ath9k/gpio.c @@ -132,17 +132,18 @@ static void ath_detect_bt_priority(struct ath_softc *sc) if (time_after(jiffies, btcoex->bt_priority_time + msecs_to_jiffies(ATH_BT_PRIORITY_TIME_THRESHOLD))) { - sc->sc_flags &= ~(SC_OP_BT_PRIORITY_DETECTED | SC_OP_BT_SCAN); + clear_bit(BT_OP_PRIORITY_DETECTED, &btcoex->op_flags); + clear_bit(BT_OP_SCAN, &btcoex->op_flags); /* Detect if colocated bt started scanning */ if (btcoex->bt_priority_cnt >= ATH_BT_CNT_SCAN_THRESHOLD) { ath_dbg(ath9k_hw_common(sc->sc_ah), BTCOEX, "BT scan detected\n"); - sc->sc_flags |= (SC_OP_BT_SCAN | - SC_OP_BT_PRIORITY_DETECTED); + set_bit(BT_OP_PRIORITY_DETECTED, &btcoex->op_flags); + set_bit(BT_OP_SCAN, &btcoex->op_flags); } else if (btcoex->bt_priority_cnt >= ATH_BT_CNT_THRESHOLD) { ath_dbg(ath9k_hw_common(sc->sc_ah), BTCOEX, "BT priority traffic detected\n"); - sc->sc_flags |= SC_OP_BT_PRIORITY_DETECTED; + set_bit(BT_OP_PRIORITY_DETECTED, &btcoex->op_flags); } btcoex->bt_priority_cnt = 0; @@ -190,13 +191,34 @@ static void ath_btcoex_period_timer(unsigned long data) struct ath_softc *sc = (struct ath_softc *) data; struct ath_hw *ah = sc->sc_ah; struct ath_btcoex *btcoex = &sc->btcoex; + struct ath_mci_profile *mci = &btcoex->mci; u32 timer_period; bool is_btscan; + unsigned long flags; + + spin_lock_irqsave(&sc->sc_pm_lock, flags); + if (sc->sc_ah->power_mode == ATH9K_PM_NETWORK_SLEEP) { + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); + goto skip_hw_wakeup; + } + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); ath9k_ps_wakeup(sc); if (!(ah->caps.hw_caps & ATH9K_HW_CAP_MCI)) ath_detect_bt_priority(sc); - is_btscan = sc->sc_flags & SC_OP_BT_SCAN; + is_btscan = test_bit(BT_OP_SCAN, &btcoex->op_flags); + + btcoex->bt_wait_time += btcoex->btcoex_period; + if 
(btcoex->bt_wait_time > ATH_BTCOEX_RX_WAIT_TIME) { + if (ar9003_mci_state(ah, MCI_STATE_NEED_FTP_STOMP) && + (mci->num_pan || mci->num_other_acl)) + ah->btcoex_hw.mci.stomp_ftp = + (sc->rx.num_pkts < ATH_BTCOEX_STOMP_FTP_THRESH); + else + ah->btcoex_hw.mci.stomp_ftp = false; + btcoex->bt_wait_time = 0; + sc->rx.num_pkts = 0; + } spin_lock_bh(&btcoex->btcoex_lock); @@ -218,9 +240,9 @@ static void ath_btcoex_period_timer(unsigned long data) } ath9k_ps_restore(sc); - timer_period = btcoex->btcoex_period / 1000; - mod_timer(&btcoex->period_timer, jiffies + - msecs_to_jiffies(timer_period)); +skip_hw_wakeup: + timer_period = btcoex->btcoex_period; + mod_timer(&btcoex->period_timer, jiffies + msecs_to_jiffies(timer_period)); } /* @@ -233,14 +255,14 @@ static void ath_btcoex_no_stomp_timer(void *arg) struct ath_hw *ah = sc->sc_ah; struct ath_btcoex *btcoex = &sc->btcoex; struct ath_common *common = ath9k_hw_common(ah); - bool is_btscan = sc->sc_flags & SC_OP_BT_SCAN; ath_dbg(common, BTCOEX, "no stomp timer running\n"); ath9k_ps_wakeup(sc); spin_lock_bh(&btcoex->btcoex_lock); - if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_LOW || is_btscan) + if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_LOW || + test_bit(BT_OP_SCAN, &btcoex->op_flags)) ath9k_hw_btcoex_bt_stomp(ah, ATH_BTCOEX_STOMP_NONE); else if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_ALL) ath9k_hw_btcoex_bt_stomp(ah, ATH_BTCOEX_STOMP_LOW); @@ -254,10 +276,10 @@ static int ath_init_btcoex_timer(struct ath_softc *sc) { struct ath_btcoex *btcoex = &sc->btcoex; - btcoex->btcoex_period = ATH_BTCOEX_DEF_BT_PERIOD * 1000; - btcoex->btcoex_no_stomp = (100 - ATH_BTCOEX_DEF_DUTY_CYCLE) * + btcoex->btcoex_period = ATH_BTCOEX_DEF_BT_PERIOD; + btcoex->btcoex_no_stomp = (100 - ATH_BTCOEX_DEF_DUTY_CYCLE) * 1000 * btcoex->btcoex_period / 100; - btcoex->btscan_no_stomp = (100 - ATH_BTCOEX_BTSCAN_DUTY_CYCLE) * + btcoex->btscan_no_stomp = (100 - ATH_BTCOEX_BTSCAN_DUTY_CYCLE) * 1000 * btcoex->btcoex_period / 100; setup_timer(&btcoex->period_timer, ath_btcoex_period_timer, @@ -292,7 +314,8 @@ void ath9k_btcoex_timer_resume(struct ath_softc *sc) btcoex->bt_priority_cnt = 0; btcoex->bt_priority_time = jiffies; - sc->sc_flags &= ~(SC_OP_BT_PRIORITY_DETECTED | SC_OP_BT_SCAN); + clear_bit(BT_OP_PRIORITY_DETECTED, &btcoex->op_flags); + clear_bit(BT_OP_SCAN, &btcoex->op_flags); mod_timer(&btcoex->period_timer, jiffies); } @@ -314,14 +337,22 @@ void ath9k_btcoex_timer_pause(struct ath_softc *sc) btcoex->hw_timer_enabled = false; } +void ath9k_btcoex_stop_gen_timer(struct ath_softc *sc) +{ + struct ath_btcoex *btcoex = &sc->btcoex; + + ath9k_gen_timer_stop(sc->sc_ah, btcoex->no_stomp_timer); +} + u16 ath9k_btcoex_aggr_limit(struct ath_softc *sc, u32 max_4ms_framelen) { + struct ath_btcoex *btcoex = &sc->btcoex; struct ath_mci_profile *mci = &sc->btcoex.mci; u16 aggr_limit = 0; if ((sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_MCI) && mci->aggr_limit) aggr_limit = (max_4ms_framelen * mci->aggr_limit) >> 4; - else if (sc->sc_flags & SC_OP_BT_PRIORITY_DETECTED) + else if (test_bit(BT_OP_PRIORITY_DETECTED, &btcoex->op_flags)) aggr_limit = min((max_4ms_framelen * 3) / 8, (u32)ATH_AMPDU_LIMIT_MAX); @@ -362,9 +393,9 @@ void ath9k_stop_btcoex(struct ath_softc *sc) if (ah->btcoex_hw.enabled && ath9k_hw_get_btcoex_scheme(ah) != ATH_BTCOEX_CFG_NONE) { - ath9k_hw_btcoex_disable(ah); if (ath9k_hw_get_btcoex_scheme(ah) == ATH_BTCOEX_CFG_3WIRE) ath9k_btcoex_timer_pause(sc); + ath9k_hw_btcoex_disable(ah); if (AR_SREV_9462(ah)) ath_mci_flush_profile(&sc->btcoex.mci); } @@ -372,11 +403,13 @@ 
void ath9k_stop_btcoex(struct ath_softc *sc) void ath9k_deinit_btcoex(struct ath_softc *sc) { + struct ath_hw *ah = sc->sc_ah; + if ((sc->btcoex.no_stomp_timer) && ath9k_hw_get_btcoex_scheme(sc->sc_ah) == ATH_BTCOEX_CFG_3WIRE) ath_gen_timer_free(sc->sc_ah, sc->btcoex.no_stomp_timer); - if (AR_SREV_9462(sc->sc_ah)) + if (ath9k_hw_mci_is_enabled(ah)) ath_mci_cleanup(sc); } @@ -402,7 +435,7 @@ int ath9k_init_btcoex(struct ath_softc *sc) txq = sc->tx.txq_map[WME_AC_BE]; ath9k_hw_init_btcoex_hw(sc->sc_ah, txq->axq_qnum); sc->btcoex.bt_stomp_type = ATH_BTCOEX_STOMP_LOW; - if (AR_SREV_9462(ah)) { + if (ath9k_hw_mci_is_enabled(ah)) { sc->btcoex.duty_cycle = ATH_BTCOEX_DEF_DUTY_CYCLE; INIT_LIST_HEAD(&sc->btcoex.mci.info); diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h index 135795257d95..936e920fb88e 100644 --- a/drivers/net/wireless/ath/ath9k/htc.h +++ b/drivers/net/wireless/ath/ath9k/htc.h @@ -453,7 +453,6 @@ struct ath9k_htc_priv { u8 num_sta_assoc_vif; u8 num_ap_vif; - u16 op_flags; u16 curtxpow; u16 txpowlimit; u16 nvifs; @@ -461,6 +460,7 @@ struct ath9k_htc_priv { bool rearm_ani; bool reconfig_beacon; unsigned int rxfilter; + unsigned long op_flags; struct ath9k_hw_cal_data caldata; struct ieee80211_supported_band sbands[IEEE80211_NUM_BANDS]; @@ -572,8 +572,6 @@ bool ath9k_htc_setpower(struct ath9k_htc_priv *priv, void ath9k_start_rfkill_poll(struct ath9k_htc_priv *priv); void ath9k_htc_rfkill_poll_state(struct ieee80211_hw *hw); -void ath9k_htc_radio_enable(struct ieee80211_hw *hw); -void ath9k_htc_radio_disable(struct ieee80211_hw *hw); #ifdef CONFIG_MAC80211_LEDS void ath9k_init_leds(struct ath9k_htc_priv *priv); diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c index 2eadffb7971c..77d541feb910 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c @@ -207,9 +207,9 @@ static void ath9k_htc_beacon_config_ap(struct ath9k_htc_priv *priv, else priv->ah->config.sw_beacon_response_time = MIN_SWBA_RESPONSE; - if (priv->op_flags & OP_TSF_RESET) { + if (test_bit(OP_TSF_RESET, &priv->op_flags)) { ath9k_hw_reset_tsf(priv->ah); - priv->op_flags &= ~OP_TSF_RESET; + clear_bit(OP_TSF_RESET, &priv->op_flags); } else { /* * Pull nexttbtt forward to reflect the current TSF. 
@@ -221,7 +221,7 @@ static void ath9k_htc_beacon_config_ap(struct ath9k_htc_priv *priv, } while (nexttbtt < tsftu); } - if (priv->op_flags & OP_ENABLE_BEACON) + if (test_bit(OP_ENABLE_BEACON, &priv->op_flags)) imask |= ATH9K_INT_SWBA; ath_dbg(common, CONFIG, @@ -269,7 +269,7 @@ static void ath9k_htc_beacon_config_adhoc(struct ath9k_htc_priv *priv, else priv->ah->config.sw_beacon_response_time = MIN_SWBA_RESPONSE; - if (priv->op_flags & OP_ENABLE_BEACON) + if (test_bit(OP_ENABLE_BEACON, &priv->op_flags)) imask |= ATH9K_INT_SWBA; ath_dbg(common, CONFIG, @@ -365,7 +365,7 @@ static void ath9k_htc_send_beacon(struct ath9k_htc_priv *priv, vif = priv->cur_beacon_conf.bslot[slot]; avp = (struct ath9k_htc_vif *)vif->drv_priv; - if (unlikely(priv->op_flags & OP_SCANNING)) { + if (unlikely(test_bit(OP_SCANNING, &priv->op_flags))) { spin_unlock_bh(&priv->beacon_lock); return; } diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_gpio.c b/drivers/net/wireless/ath/ath9k/htc_drv_gpio.c index 1c10e2e5c237..07df279c8d46 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_gpio.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_gpio.c @@ -37,17 +37,18 @@ static void ath_detect_bt_priority(struct ath9k_htc_priv *priv) if (time_after(jiffies, btcoex->bt_priority_time + msecs_to_jiffies(ATH_BT_PRIORITY_TIME_THRESHOLD))) { - priv->op_flags &= ~(OP_BT_PRIORITY_DETECTED | OP_BT_SCAN); + clear_bit(OP_BT_PRIORITY_DETECTED, &priv->op_flags); + clear_bit(OP_BT_SCAN, &priv->op_flags); /* Detect if colocated bt started scanning */ if (btcoex->bt_priority_cnt >= ATH_BT_CNT_SCAN_THRESHOLD) { ath_dbg(ath9k_hw_common(ah), BTCOEX, "BT scan detected\n"); - priv->op_flags |= (OP_BT_SCAN | - OP_BT_PRIORITY_DETECTED); + set_bit(OP_BT_PRIORITY_DETECTED, &priv->op_flags); + set_bit(OP_BT_SCAN, &priv->op_flags); } else if (btcoex->bt_priority_cnt >= ATH_BT_CNT_THRESHOLD) { ath_dbg(ath9k_hw_common(ah), BTCOEX, "BT priority traffic detected\n"); - priv->op_flags |= OP_BT_PRIORITY_DETECTED; + set_bit(OP_BT_PRIORITY_DETECTED, &priv->op_flags); } btcoex->bt_priority_cnt = 0; @@ -67,26 +68,23 @@ static void ath_btcoex_period_work(struct work_struct *work) struct ath_btcoex *btcoex = &priv->btcoex; struct ath_common *common = ath9k_hw_common(priv->ah); u32 timer_period; - bool is_btscan; int ret; ath_detect_bt_priority(priv); - is_btscan = !!(priv->op_flags & OP_BT_SCAN); - ret = ath9k_htc_update_cap_target(priv, - !!(priv->op_flags & OP_BT_PRIORITY_DETECTED)); + test_bit(OP_BT_PRIORITY_DETECTED, &priv->op_flags)); if (ret) { ath_err(common, "Unable to set BTCOEX parameters\n"); return; } - ath9k_hw_btcoex_bt_stomp(priv->ah, is_btscan ? ATH_BTCOEX_STOMP_ALL : - btcoex->bt_stomp_type); + ath9k_hw_btcoex_bt_stomp(priv->ah, test_bit(OP_BT_SCAN, &priv->op_flags) ? + ATH_BTCOEX_STOMP_ALL : btcoex->bt_stomp_type); ath9k_hw_btcoex_enable(priv->ah); - timer_period = is_btscan ? btcoex->btscan_no_stomp : - btcoex->btcoex_no_stomp; + timer_period = test_bit(OP_BT_SCAN, &priv->op_flags) ? 
+ btcoex->btscan_no_stomp : btcoex->btcoex_no_stomp; ieee80211_queue_delayed_work(priv->hw, &priv->duty_cycle_work, msecs_to_jiffies(timer_period)); ieee80211_queue_delayed_work(priv->hw, &priv->coex_period_work, @@ -104,14 +102,15 @@ static void ath_btcoex_duty_cycle_work(struct work_struct *work) struct ath_hw *ah = priv->ah; struct ath_btcoex *btcoex = &priv->btcoex; struct ath_common *common = ath9k_hw_common(ah); - bool is_btscan = priv->op_flags & OP_BT_SCAN; ath_dbg(common, BTCOEX, "time slice work for bt and wlan\n"); - if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_LOW || is_btscan) + if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_LOW || + test_bit(OP_BT_SCAN, &priv->op_flags)) ath9k_hw_btcoex_bt_stomp(ah, ATH_BTCOEX_STOMP_NONE); else if (btcoex->bt_stomp_type == ATH_BTCOEX_STOMP_ALL) ath9k_hw_btcoex_bt_stomp(ah, ATH_BTCOEX_STOMP_LOW); + ath9k_hw_btcoex_enable(priv->ah); } @@ -141,7 +140,8 @@ static void ath_htc_resume_btcoex_work(struct ath9k_htc_priv *priv) btcoex->bt_priority_cnt = 0; btcoex->bt_priority_time = jiffies; - priv->op_flags &= ~(OP_BT_PRIORITY_DETECTED | OP_BT_SCAN); + clear_bit(OP_BT_PRIORITY_DETECTED, &priv->op_flags); + clear_bit(OP_BT_SCAN, &priv->op_flags); ieee80211_queue_delayed_work(priv->hw, &priv->coex_period_work, 0); } @@ -310,95 +310,3 @@ void ath9k_start_rfkill_poll(struct ath9k_htc_priv *priv) if (priv->ah->caps.hw_caps & ATH9K_HW_CAP_RFSILENT) wiphy_rfkill_start_polling(priv->hw->wiphy); } - -void ath9k_htc_radio_enable(struct ieee80211_hw *hw) -{ - struct ath9k_htc_priv *priv = hw->priv; - struct ath_hw *ah = priv->ah; - struct ath_common *common = ath9k_hw_common(ah); - int ret; - u8 cmd_rsp; - - if (!ah->curchan) - ah->curchan = ath9k_cmn_get_curchannel(hw, ah); - - /* Reset the HW */ - ret = ath9k_hw_reset(ah, ah->curchan, ah->caldata, false); - if (ret) { - ath_err(common, - "Unable to reset hardware; reset status %d (freq %u MHz)\n", - ret, ah->curchan->channel); - } - - ath9k_cmn_update_txpow(ah, priv->curtxpow, priv->txpowlimit, - &priv->curtxpow); - - /* Start RX */ - WMI_CMD(WMI_START_RECV_CMDID); - ath9k_host_rx_init(priv); - - /* Start TX */ - htc_start(priv->htc); - spin_lock_bh(&priv->tx.tx_lock); - priv->tx.flags &= ~ATH9K_HTC_OP_TX_QUEUES_STOP; - spin_unlock_bh(&priv->tx.tx_lock); - ieee80211_wake_queues(hw); - - WMI_CMD(WMI_ENABLE_INTR_CMDID); - - /* Enable LED */ - ath9k_hw_cfg_output(ah, ah->led_pin, - AR_GPIO_OUTPUT_MUX_AS_OUTPUT); - ath9k_hw_set_gpio(ah, ah->led_pin, 0); -} - -void ath9k_htc_radio_disable(struct ieee80211_hw *hw) -{ - struct ath9k_htc_priv *priv = hw->priv; - struct ath_hw *ah = priv->ah; - struct ath_common *common = ath9k_hw_common(ah); - int ret; - u8 cmd_rsp; - - ath9k_htc_ps_wakeup(priv); - - /* Disable LED */ - ath9k_hw_set_gpio(ah, ah->led_pin, 1); - ath9k_hw_cfg_gpio_input(ah, ah->led_pin); - - WMI_CMD(WMI_DISABLE_INTR_CMDID); - - /* Stop TX */ - ieee80211_stop_queues(hw); - ath9k_htc_tx_drain(priv); - WMI_CMD(WMI_DRAIN_TXQ_ALL_CMDID); - - /* Stop RX */ - WMI_CMD(WMI_STOP_RECV_CMDID); - - /* Clear the WMI event queue */ - ath9k_wmi_event_drain(priv); - - /* - * The MIB counters have to be disabled here, - * since the target doesn't do it. 
- */ - ath9k_hw_disable_mib_counters(ah); - - if (!ah->curchan) - ah->curchan = ath9k_cmn_get_curchannel(hw, ah); - - /* Reset the HW */ - ret = ath9k_hw_reset(ah, ah->curchan, ah->caldata, false); - if (ret) { - ath_err(common, - "Unable to reset hardware; reset status %d (freq %u MHz)\n", - ret, ah->curchan->channel); - } - - /* Disable the PHY */ - ath9k_hw_phy_disable(ah); - - ath9k_htc_ps_restore(priv); - ath9k_htc_setpower(priv, ATH9K_PM_FULL_SLEEP); -} diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c index 25213d521bc2..a035a380d669 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c @@ -611,7 +611,7 @@ static int ath9k_init_priv(struct ath9k_htc_priv *priv, struct ath_common *common; int i, ret = 0, csz = 0; - priv->op_flags |= OP_INVALID; + set_bit(OP_INVALID, &priv->op_flags); ah = kzalloc(sizeof(struct ath_hw), GFP_KERNEL); if (!ah) @@ -718,7 +718,7 @@ static void ath9k_set_hw_capab(struct ath9k_htc_priv *priv, hw->queues = 4; hw->channel_change_time = 5000; - hw->max_listen_interval = 10; + hw->max_listen_interval = 1; hw->vif_data_size = sizeof(struct ath9k_htc_vif); hw->sta_data_size = sizeof(struct ath9k_htc_sta); diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_main.c b/drivers/net/wireless/ath/ath9k/htc_drv_main.c index abbd6effd60d..c785129692ff 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_main.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_main.c @@ -75,14 +75,19 @@ unlock: void ath9k_htc_ps_restore(struct ath9k_htc_priv *priv) { + bool reset; + mutex_lock(&priv->htc_pm_lock); if (--priv->ps_usecount != 0) goto unlock; - if (priv->ps_idle) + if (priv->ps_idle) { + ath9k_hw_setrxabort(priv->ah, true); + ath9k_hw_stopdmarecv(priv->ah, &reset); ath9k_hw_setpower(priv->ah, ATH9K_PM_FULL_SLEEP); - else if (priv->ps_enabled) + } else if (priv->ps_enabled) { ath9k_hw_setpower(priv->ah, ATH9K_PM_NETWORK_SLEEP); + } unlock: mutex_unlock(&priv->htc_pm_lock); @@ -250,7 +255,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv, u8 cmd_rsp; int ret; - if (priv->op_flags & OP_INVALID) + if (test_bit(OP_INVALID, &priv->op_flags)) return -EIO; fastcc = !!(hw->conf.flags & IEEE80211_CONF_OFFCHANNEL); @@ -304,7 +309,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv, htc_start(priv->htc); - if (!(priv->op_flags & OP_SCANNING) && + if (!test_bit(OP_SCANNING, &priv->op_flags) && !(hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)) ath9k_htc_vif_reconfig(priv); @@ -750,7 +755,7 @@ void ath9k_htc_start_ani(struct ath9k_htc_priv *priv) common->ani.shortcal_timer = timestamp; common->ani.checkani_timer = timestamp; - priv->op_flags |= OP_ANI_RUNNING; + set_bit(OP_ANI_RUNNING, &priv->op_flags); ieee80211_queue_delayed_work(common->hw, &priv->ani_work, msecs_to_jiffies(ATH_ANI_POLLINTERVAL)); @@ -759,7 +764,7 @@ void ath9k_htc_start_ani(struct ath9k_htc_priv *priv) void ath9k_htc_stop_ani(struct ath9k_htc_priv *priv) { cancel_delayed_work_sync(&priv->ani_work); - priv->op_flags &= ~OP_ANI_RUNNING; + clear_bit(OP_ANI_RUNNING, &priv->op_flags); } void ath9k_htc_ani_work(struct work_struct *work) @@ -944,7 +949,7 @@ static int ath9k_htc_start(struct ieee80211_hw *hw) ath_dbg(common, CONFIG, "Failed to update capability in target\n"); - priv->op_flags &= ~OP_INVALID; + clear_bit(OP_INVALID, &priv->op_flags); htc_start(priv->htc); spin_lock_bh(&priv->tx.tx_lock); @@ -973,7 +978,7 @@ static void ath9k_htc_stop(struct ieee80211_hw *hw) 
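/*
 * A minimal sketch of the flag-handling idiom the conversions in this
 * series move to (sc->sc_flags, btcoex->op_flags, priv->op_flags); the
 * enum, struct and function names here are illustrative only, not from
 * the patch.  A u16/u32 bitmask updated with |= and &= ~ becomes an
 * unsigned long holding bit numbers, which is what the atomic
 * set_bit()/clear_bit()/test_bit() helpers require, so tasklets,
 * workqueues and mac80211 callbacks can flip flags without an extra
 * lock.
 */
#include <linux/bitops.h>
#include <linux/types.h>

enum example_op_flags {				/* bit numbers, not BIT() masks */
	EXAMPLE_OP_INVALID,
	EXAMPLE_OP_SCANNING,
	EXAMPLE_OP_ENABLE_BEACON,
};

struct example_state {
	unsigned long op_flags;			/* was: u16 op_flags */
};

static void example_scan_start(struct example_state *st)
{
	set_bit(EXAMPLE_OP_SCANNING, &st->op_flags);	/* was: op_flags |= OP_SCANNING */
}

static void example_scan_done(struct example_state *st)
{
	clear_bit(EXAMPLE_OP_SCANNING, &st->op_flags);	/* was: op_flags &= ~OP_SCANNING */
}

static bool example_may_start_tx(const struct example_state *st)
{
	/* was: !(st->op_flags & OP_INVALID) */
	return !test_bit(EXAMPLE_OP_INVALID, &st->op_flags);
}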
mutex_lock(&priv->mutex); - if (priv->op_flags & OP_INVALID) { + if (test_bit(OP_INVALID, &priv->op_flags)) { ath_dbg(common, ANY, "Device not present\n"); mutex_unlock(&priv->mutex); return; @@ -1015,7 +1020,7 @@ static void ath9k_htc_stop(struct ieee80211_hw *hw) ath9k_htc_ps_restore(priv); ath9k_htc_setpower(priv, ATH9K_PM_FULL_SLEEP); - priv->op_flags |= OP_INVALID; + set_bit(OP_INVALID, &priv->op_flags); ath_dbg(common, CONFIG, "Driver halt\n"); mutex_unlock(&priv->mutex); @@ -1105,8 +1110,8 @@ static int ath9k_htc_add_interface(struct ieee80211_hw *hw, ath9k_htc_set_opmode(priv); if ((priv->ah->opmode == NL80211_IFTYPE_AP) && - !(priv->op_flags & OP_ANI_RUNNING)) { - ath9k_hw_set_tsfadjust(priv->ah, 1); + !test_bit(OP_ANI_RUNNING, &priv->op_flags)) { + ath9k_hw_set_tsfadjust(priv->ah, true); ath9k_htc_start_ani(priv); } @@ -1178,24 +1183,20 @@ static int ath9k_htc_config(struct ieee80211_hw *hw, u32 changed) struct ath9k_htc_priv *priv = hw->priv; struct ath_common *common = ath9k_hw_common(priv->ah); struct ieee80211_conf *conf = &hw->conf; + bool chip_reset = false; + int ret = 0; mutex_lock(&priv->mutex); + ath9k_htc_ps_wakeup(priv); if (changed & IEEE80211_CONF_CHANGE_IDLE) { - bool enable_radio = false; - bool idle = !!(conf->flags & IEEE80211_CONF_IDLE); - mutex_lock(&priv->htc_pm_lock); - if (!idle && priv->ps_idle) - enable_radio = true; - priv->ps_idle = idle; - mutex_unlock(&priv->htc_pm_lock); - if (enable_radio) { - ath_dbg(common, CONFIG, "not-idle: enabling radio\n"); - ath9k_htc_setpower(priv, ATH9K_PM_AWAKE); - ath9k_htc_radio_enable(hw); - } + priv->ps_idle = !!(conf->flags & IEEE80211_CONF_IDLE); + if (priv->ps_idle) + chip_reset = true; + + mutex_unlock(&priv->htc_pm_lock); } /* @@ -1210,7 +1211,7 @@ static int ath9k_htc_config(struct ieee80211_hw *hw, u32 changed) ath9k_htc_remove_monitor_interface(priv); } - if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { + if ((changed & IEEE80211_CONF_CHANGE_CHANNEL) || chip_reset) { struct ieee80211_channel *curchan = hw->conf.channel; int pos = curchan->hw_value; @@ -1223,8 +1224,8 @@ static int ath9k_htc_config(struct ieee80211_hw *hw, u32 changed) if (ath9k_htc_set_channel(priv, hw, &priv->ah->channels[pos]) < 0) { ath_err(common, "Unable to set channel\n"); - mutex_unlock(&priv->mutex); - return -EINVAL; + ret = -EINVAL; + goto out; } } @@ -1246,21 +1247,10 @@ static int ath9k_htc_config(struct ieee80211_hw *hw, u32 changed) priv->txpowlimit, &priv->curtxpow); } - if (changed & IEEE80211_CONF_CHANGE_IDLE) { - mutex_lock(&priv->htc_pm_lock); - if (!priv->ps_idle) { - mutex_unlock(&priv->htc_pm_lock); - goto out; - } - mutex_unlock(&priv->htc_pm_lock); - - ath_dbg(common, CONFIG, "idle: disabling radio\n"); - ath9k_htc_radio_disable(hw); - } - out: + ath9k_htc_ps_restore(priv); mutex_unlock(&priv->mutex); - return 0; + return ret; } #define SUPPORTED_FILTERS \ @@ -1285,7 +1275,7 @@ static void ath9k_htc_configure_filter(struct ieee80211_hw *hw, changed_flags &= SUPPORTED_FILTERS; *total_flags &= SUPPORTED_FILTERS; - if (priv->op_flags & OP_INVALID) { + if (test_bit(OP_INVALID, &priv->op_flags)) { ath_dbg(ath9k_hw_common(priv->ah), ANY, "Unable to configure filter on invalid state\n"); mutex_unlock(&priv->mutex); @@ -1361,7 +1351,7 @@ static int ath9k_htc_conf_tx(struct ieee80211_hw *hw, qi.tqi_aifs = params->aifs; qi.tqi_cwmin = params->cw_min; qi.tqi_cwmax = params->cw_max; - qi.tqi_burstTime = params->txop; + qi.tqi_burstTime = params->txop * 32; qnum = get_hw_qnum(queue, priv->hwq_map); @@ -1516,7 +1506,7 @@ static void 
ath9k_htc_bss_info_changed(struct ieee80211_hw *hw, ath_dbg(common, CONFIG, "Beacon enabled for BSS: %pM\n", bss_conf->bssid); ath9k_htc_set_tsfadjust(priv, vif); - priv->op_flags |= OP_ENABLE_BEACON; + set_bit(OP_ENABLE_BEACON, &priv->op_flags); ath9k_htc_beacon_config(priv, vif); } @@ -1529,7 +1519,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw, ath_dbg(common, CONFIG, "Beacon disabled for BSS: %pM\n", bss_conf->bssid); - priv->op_flags &= ~OP_ENABLE_BEACON; + clear_bit(OP_ENABLE_BEACON, &priv->op_flags); ath9k_htc_beacon_config(priv, vif); } } @@ -1542,7 +1532,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw, (priv->nvifs == 1) && (priv->num_ap_vif == 1) && (vif->type == NL80211_IFTYPE_AP)) { - priv->op_flags |= OP_TSF_RESET; + set_bit(OP_TSF_RESET, &priv->op_flags); } ath_dbg(common, CONFIG, "Beacon interval changed for BSS: %pM\n", @@ -1654,7 +1644,7 @@ static void ath9k_htc_sw_scan_start(struct ieee80211_hw *hw) mutex_lock(&priv->mutex); spin_lock_bh(&priv->beacon_lock); - priv->op_flags |= OP_SCANNING; + set_bit(OP_SCANNING, &priv->op_flags); spin_unlock_bh(&priv->beacon_lock); cancel_work_sync(&priv->ps_work); ath9k_htc_stop_ani(priv); @@ -1667,7 +1657,7 @@ static void ath9k_htc_sw_scan_complete(struct ieee80211_hw *hw) mutex_lock(&priv->mutex); spin_lock_bh(&priv->beacon_lock); - priv->op_flags &= ~OP_SCANNING; + clear_bit(OP_SCANNING, &priv->op_flags); spin_unlock_bh(&priv->beacon_lock); ath9k_htc_ps_wakeup(priv); ath9k_htc_vif_reconfig(priv); diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c index 3e40a6461512..47e61d0da33b 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c @@ -916,7 +916,7 @@ void ath9k_host_rx_init(struct ath9k_htc_priv *priv) { ath9k_hw_rxena(priv->ah); ath9k_htc_opmode_init(priv); - ath9k_hw_startpcureceive(priv->ah, (priv->op_flags & OP_SCANNING)); + ath9k_hw_startpcureceive(priv->ah, test_bit(OP_SCANNING, &priv->op_flags)); priv->rx.last_rssi = ATH_RSSI_DUMMY_MARKER; } diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c index 995ca8e1302e..cfa91ab7acf8 100644 --- a/drivers/net/wireless/ath/ath9k/hw.c +++ b/drivers/net/wireless/ath/ath9k/hw.c @@ -342,6 +342,9 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah) val = REG_READ(ah, AR_SREV); ah->hw_version.macRev = MS(val, AR_SREV_REVISION2); return; + case AR9300_DEVID_QCA955X: + ah->hw_version.macVersion = AR_SREV_VERSION_9550; + return; } val = REG_READ(ah, AR_SREV) & AR_SREV_ID; @@ -390,14 +393,6 @@ static void ath9k_hw_disablepcie(struct ath_hw *ah) REG_WRITE(ah, AR_PCIE_SERDES2, 0x00000000); } -static void ath9k_hw_aspm_init(struct ath_hw *ah) -{ - struct ath_common *common = ath9k_hw_common(ah); - - if (common->bus_ops->aspm_init) - common->bus_ops->aspm_init(common); -} - /* This should work for all families including legacy */ static bool ath9k_hw_chip_test(struct ath_hw *ah) { @@ -654,6 +649,7 @@ static int __ath9k_hw_init(struct ath_hw *ah) case AR_SREV_VERSION_9485: case AR_SREV_VERSION_9340: case AR_SREV_VERSION_9462: + case AR_SREV_VERSION_9550: break; default: ath_err(common, @@ -663,7 +659,7 @@ static int __ath9k_hw_init(struct ath_hw *ah) } if (AR_SREV_9271(ah) || AR_SREV_9100(ah) || AR_SREV_9340(ah) || - AR_SREV_9330(ah)) + AR_SREV_9330(ah) || AR_SREV_9550(ah)) ah->is_pciexpress = false; ah->hw_version.phyRev = REG_READ(ah, AR_PHY_CHIP_ID); @@ -675,10 +671,6 @@ static int __ath9k_hw_init(struct 
ath_hw *ah) if (!AR_SREV_9300_20_OR_LATER(ah)) ah->ani_function &= ~ATH9K_ANI_MRC_CCK; - /* disable ANI for 9340 */ - if (AR_SREV_9340(ah)) - ah->config.enable_ani = false; - ath9k_hw_init_mode_regs(ah); if (!ah->is_pciexpress) @@ -693,9 +685,6 @@ static int __ath9k_hw_init(struct ath_hw *ah) if (r) return r; - if (ah->is_pciexpress) - ath9k_hw_aspm_init(ah); - r = ath9k_hw_init_macaddr(ah); if (r) { ath_err(common, "Failed to initialize MAC address\n"); @@ -738,6 +727,7 @@ int ath9k_hw_init(struct ath_hw *ah) case AR9300_DEVID_AR9485_PCIE: case AR9300_DEVID_AR9330: case AR9300_DEVID_AR9340: + case AR9300_DEVID_QCA955X: case AR9300_DEVID_AR9580: case AR9300_DEVID_AR9462: break; @@ -876,7 +866,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah, /* program BB PLL phase_shift */ REG_RMW_FIELD(ah, AR_CH0_BB_DPLL3, AR_CH0_BB_DPLL3_PHASE_SHIFT, 0x1); - } else if (AR_SREV_9340(ah)) { + } else if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) { u32 regval, pll2_divint, pll2_divfrac, refdiv; REG_WRITE(ah, AR_RTC_PLL_CONTROL, 0x1142c); @@ -890,9 +880,15 @@ static void ath9k_hw_init_pll(struct ath_hw *ah, pll2_divfrac = 0x1eb85; refdiv = 3; } else { - pll2_divint = 88; - pll2_divfrac = 0; - refdiv = 5; + if (AR_SREV_9340(ah)) { + pll2_divint = 88; + pll2_divfrac = 0; + refdiv = 5; + } else { + pll2_divint = 0x11; + pll2_divfrac = 0x26666; + refdiv = 1; + } } regval = REG_READ(ah, AR_PHY_PLL_MODE); @@ -905,8 +901,12 @@ static void ath9k_hw_init_pll(struct ath_hw *ah, udelay(100); regval = REG_READ(ah, AR_PHY_PLL_MODE); - regval = (regval & 0x80071fff) | (0x1 << 30) | (0x1 << 13) | - (0x4 << 26) | (0x18 << 19); + if (AR_SREV_9340(ah)) + regval = (regval & 0x80071fff) | (0x1 << 30) | + (0x1 << 13) | (0x4 << 26) | (0x18 << 19); + else + regval = (regval & 0x80071fff) | (0x3 << 30) | + (0x1 << 13) | (0x4 << 26) | (0x60 << 19); REG_WRITE(ah, AR_PHY_PLL_MODE, regval); REG_WRITE(ah, AR_PHY_PLL_MODE, REG_READ(ah, AR_PHY_PLL_MODE) & 0xfffeffff); @@ -917,7 +917,8 @@ static void ath9k_hw_init_pll(struct ath_hw *ah, REG_WRITE(ah, AR_RTC_PLL_CONTROL, pll); - if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah)) + if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah) || + AR_SREV_9550(ah)) udelay(1000); /* Switch the core clock for ar9271 to 117Mhz */ @@ -930,7 +931,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah, REG_WRITE(ah, AR_RTC_SLEEP_CLK, AR_RTC_FORCE_DERIVED_CLK); - if (AR_SREV_9340(ah)) { + if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) { if (ah->is_clk_25mhz) { REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1); REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7); @@ -954,7 +955,7 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah, AR_IMR_RXORN | AR_IMR_BCNMISC; - if (AR_SREV_9340(ah)) + if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) sync_default &= ~AR_INTR_SYNC_HOST1_FATAL; if (AR_SREV_9300_20_OR_LATER(ah)) { @@ -1371,6 +1372,9 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type) } } + if (ath9k_hw_mci_is_enabled(ah)) + ar9003_mci_check_gpm_offset(ah); + REG_WRITE(ah, AR_RTC_RC, rst_flags); REGWRITE_BUFFER_FLUSH(ah); @@ -1455,9 +1459,6 @@ static bool ath9k_hw_set_reset_reg(struct ath_hw *ah, u32 type) break; } - if (ah->caps.hw_caps & ATH9K_HW_CAP_MCI) - REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2); - return ret; } @@ -1733,8 +1734,8 @@ static int ath9k_hw_do_fastcc(struct ath_hw *ah, struct ath9k_channel *chan) ath9k_hw_loadnf(ah, ah->curchan); ath9k_hw_start_nfcal(ah, true); - if ((ah->caps.hw_caps & ATH9K_HW_CAP_MCI) && ar9003_mci_is_ready(ah)) - ar9003_mci_2g5g_switch(ah, true); + if 
(ath9k_hw_mci_is_enabled(ah)) + ar9003_mci_2g5g_switch(ah, false); if (AR_SREV_9271(ah)) ar9002_hw_load_ani_reg(ah, chan); @@ -1754,10 +1755,9 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, u64 tsf = 0; int i, r; bool start_mci_reset = false; - bool mci = !!(ah->caps.hw_caps & ATH9K_HW_CAP_MCI); bool save_fullsleep = ah->chip_fullsleep; - if (mci) { + if (ath9k_hw_mci_is_enabled(ah)) { start_mci_reset = ar9003_mci_start_reset(ah, chan); if (start_mci_reset) return 0; @@ -1786,7 +1786,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, return r; } - if (mci) + if (ath9k_hw_mci_is_enabled(ah)) ar9003_mci_stop_bt(ah, save_fullsleep); saveDefAntenna = REG_READ(ah, AR_DEF_ANTENNA); @@ -1844,7 +1844,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, if (r) return r; - if (mci) + if (ath9k_hw_mci_is_enabled(ah)) ar9003_mci_reset(ah, false, IS_CHAN_2GHZ(chan), save_fullsleep); /* @@ -1939,7 +1939,8 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, ath9k_hw_set_dma(ah); - REG_WRITE(ah, AR_OBS, 8); + if (!ath9k_hw_mci_is_enabled(ah)) + REG_WRITE(ah, AR_OBS, 8); if (ah->config.rx_intr_mitigation) { REG_RMW_FIELD(ah, AR_RIMT, AR_RIMT_LAST, 500); @@ -1960,10 +1961,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, if (!ath9k_hw_init_cal(ah, chan)) return -EIO; - ath9k_hw_loadnf(ah, chan); - ath9k_hw_start_nfcal(ah, true); - - if (mci && ar9003_mci_end_reset(ah, chan, caldata)) + if (ath9k_hw_mci_is_enabled(ah) && ar9003_mci_end_reset(ah, chan, caldata)) return -EIO; ENABLE_REGWRITE_BUFFER(ah); @@ -1998,7 +1996,8 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, REG_WRITE(ah, AR_CFG, AR_CFG_SWTD | AR_CFG_SWRD); } #ifdef __BIG_ENDIAN - else if (AR_SREV_9330(ah) || AR_SREV_9340(ah)) + else if (AR_SREV_9330(ah) || AR_SREV_9340(ah) || + AR_SREV_9550(ah)) REG_RMW(ah, AR_CFG, AR_CFG_SWRB | AR_CFG_SWTB, 0); else REG_WRITE(ah, AR_CFG, AR_CFG_SWTD | AR_CFG_SWRD); @@ -2008,9 +2007,12 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan, if (ath9k_hw_btcoex_is_enabled(ah)) ath9k_hw_btcoex_enable(ah); - if (mci) + if (ath9k_hw_mci_is_enabled(ah)) ar9003_mci_check_bt(ah); + ath9k_hw_loadnf(ah, chan); + ath9k_hw_start_nfcal(ah, true); + if (AR_SREV_9300_20_OR_LATER(ah)) { ar9003_hw_bb_watchdog_config(ah); @@ -2031,39 +2033,35 @@ EXPORT_SYMBOL(ath9k_hw_reset); * Notify Power Mgt is disabled in self-generated frames. * If requested, force chip to sleep. */ -static void ath9k_set_power_sleep(struct ath_hw *ah, int setChip) +static void ath9k_set_power_sleep(struct ath_hw *ah) { REG_SET_BIT(ah, AR_STA_ID1, AR_STA_ID1_PWR_SAV); - if (setChip) { - if (AR_SREV_9462(ah)) { - REG_WRITE(ah, AR_TIMER_MODE, - REG_READ(ah, AR_TIMER_MODE) & 0xFFFFFF00); - REG_WRITE(ah, AR_NDP2_TIMER_MODE, REG_READ(ah, - AR_NDP2_TIMER_MODE) & 0xFFFFFF00); - REG_WRITE(ah, AR_SLP32_INC, - REG_READ(ah, AR_SLP32_INC) & 0xFFF00000); - /* xxx Required for WLAN only case ? */ - REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_EN, 0); - udelay(100); - } - /* - * Clear the RTC force wake bit to allow the - * mac to go to sleep. - */ - REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN); + if (AR_SREV_9462(ah)) { + REG_CLR_BIT(ah, AR_TIMER_MODE, 0xff); + REG_CLR_BIT(ah, AR_NDP2_TIMER_MODE, 0xff); + REG_CLR_BIT(ah, AR_SLP32_INC, 0xfffff); + /* xxx Required for WLAN only case ? 
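
The PLL2 divider values added just above for the QCA955x (divint 0x11, divfrac 0x26666, refdiv 1 on a 40 MHz reference) sit next to the existing AR9340 set (88, 0, 5). Assuming the usual fractional-N relation with an 18-bit fraction — an assumption, the diff does not state the fraction width — both settings land on roughly the same 704 MHz rate; a quick check:

        #include <stdio.h>

        /* f_out = f_ref / refdiv * (divint + divfrac / 2^18), fraction width assumed */
        static double pll_out_mhz(double ref_mhz, unsigned refdiv,
                                  unsigned divint, unsigned divfrac)
        {
                return ref_mhz / refdiv * (divint + divfrac / 262144.0);
        }

        int main(void)
        {
                /* AR9340, 40 MHz reference */
                printf("AR9340: %.1f MHz\n", pll_out_mhz(40.0, 5, 88, 0));
                /* QCA955x/AR9550, 40 MHz reference, values from the hunk above */
                printf("AR9550: %.1f MHz\n", pll_out_mhz(40.0, 1, 0x11, 0x26666));
                return 0;
        }
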
*/ + REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_EN, 0); + udelay(100); + } - if (AR_SREV_9462(ah)) - udelay(100); + /* + * Clear the RTC force wake bit to allow the + * mac to go to sleep. + */ + REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN); - if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah)) - REG_WRITE(ah, AR_RC, AR_RC_AHB | AR_RC_HOSTIF); + if (ath9k_hw_mci_is_enabled(ah)) + udelay(100); - /* Shutdown chip. Active low */ - if (!AR_SREV_5416(ah) && !AR_SREV_9271(ah)) { - REG_CLR_BIT(ah, AR_RTC_RESET, AR_RTC_RESET_EN); - udelay(2); - } + if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah)) + REG_WRITE(ah, AR_RC, AR_RC_AHB | AR_RC_HOSTIF); + + /* Shutdown chip. Active low */ + if (!AR_SREV_5416(ah) && !AR_SREV_9271(ah)) { + REG_CLR_BIT(ah, AR_RTC_RESET, AR_RTC_RESET_EN); + udelay(2); } /* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */ @@ -2076,44 +2074,38 @@ static void ath9k_set_power_sleep(struct ath_hw *ah, int setChip) * frames. If request, set power mode of chip to * auto/normal. Duration in units of 128us (1/8 TU). */ -static void ath9k_set_power_network_sleep(struct ath_hw *ah, int setChip) +static void ath9k_set_power_network_sleep(struct ath_hw *ah) { - u32 val; + struct ath9k_hw_capabilities *pCap = &ah->caps; REG_SET_BIT(ah, AR_STA_ID1, AR_STA_ID1_PWR_SAV); - if (setChip) { - struct ath9k_hw_capabilities *pCap = &ah->caps; - if (!(pCap->hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) { - /* Set WakeOnInterrupt bit; clear ForceWake bit */ - REG_WRITE(ah, AR_RTC_FORCE_WAKE, - AR_RTC_FORCE_WAKE_ON_INT); - } else { + if (!(pCap->hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) { + /* Set WakeOnInterrupt bit; clear ForceWake bit */ + REG_WRITE(ah, AR_RTC_FORCE_WAKE, + AR_RTC_FORCE_WAKE_ON_INT); + } else { - /* When chip goes into network sleep, it could be waken - * up by MCI_INT interrupt caused by BT's HW messages - * (LNA_xxx, CONT_xxx) which chould be in a very fast - * rate (~100us). This will cause chip to leave and - * re-enter network sleep mode frequently, which in - * consequence will have WLAN MCI HW to generate lots of - * SYS_WAKING and SYS_SLEEPING messages which will make - * BT CPU to busy to process. - */ - if (AR_SREV_9462(ah)) { - val = REG_READ(ah, AR_MCI_INTERRUPT_RX_MSG_EN) & - ~AR_MCI_INTERRUPT_RX_HW_MSG_MASK; - REG_WRITE(ah, AR_MCI_INTERRUPT_RX_MSG_EN, val); - } - /* - * Clear the RTC force wake bit to allow the - * mac to go to sleep. - */ - REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, - AR_RTC_FORCE_WAKE_EN); - - if (AR_SREV_9462(ah)) - udelay(30); - } + /* When chip goes into network sleep, it could be waken + * up by MCI_INT interrupt caused by BT's HW messages + * (LNA_xxx, CONT_xxx) which chould be in a very fast + * rate (~100us). This will cause chip to leave and + * re-enter network sleep mode frequently, which in + * consequence will have WLAN MCI HW to generate lots of + * SYS_WAKING and SYS_SLEEPING messages which will make + * BT CPU to busy to process. + */ + if (ath9k_hw_mci_is_enabled(ah)) + REG_CLR_BIT(ah, AR_MCI_INTERRUPT_RX_MSG_EN, + AR_MCI_INTERRUPT_RX_HW_MSG_MASK); + /* + * Clear the RTC force wake bit to allow the + * mac to go to sleep. + */ + REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN); + + if (ath9k_hw_mci_is_enabled(ah)) + udelay(30); } /* Clear Bit 14 of AR_WA after putting chip into Net Sleep mode. 
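
The power-save hunks above also replace several open-coded REG_WRITE(REG_READ() & ~mask) sequences with REG_CLR_BIT(), which keeps the intent visible at the call site. The accessor pair modelled over a fake register file (sketch only, not the driver's MMIO layer):

        #include <stdio.h>

        static unsigned int regs[4];                        /* stand-in register file */

        static unsigned int reg_read(unsigned r)            { return regs[r]; }
        static void reg_write(unsigned r, unsigned int v)   { regs[r] = v; }

        static void reg_clr_bit(unsigned r, unsigned int mask)
        {
                reg_write(r, reg_read(r) & ~mask);
        }

        int main(void)
        {
                enum { TIMER_MODE = 0 };

                regs[TIMER_MODE] = 0x1234aaff;
                reg_clr_bit(TIMER_MODE, 0xff);              /* like the hunk above */
                printf("0x%08x\n", regs[TIMER_MODE]);       /* 0x1234aa00 */
                return 0;
        }
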
*/ @@ -2121,7 +2113,7 @@ static void ath9k_set_power_network_sleep(struct ath_hw *ah, int setChip) REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE); } -static bool ath9k_hw_set_power_awake(struct ath_hw *ah, int setChip) +static bool ath9k_hw_set_power_awake(struct ath_hw *ah) { u32 val; int i; @@ -2132,37 +2124,38 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah, int setChip) udelay(10); } - if (setChip) { - if ((REG_READ(ah, AR_RTC_STATUS) & - AR_RTC_STATUS_M) == AR_RTC_STATUS_SHUTDOWN) { - if (!ath9k_hw_set_reset_reg(ah, ATH9K_RESET_POWER_ON)) { - return false; - } - if (!AR_SREV_9300_20_OR_LATER(ah)) - ath9k_hw_init_pll(ah, NULL); + if ((REG_READ(ah, AR_RTC_STATUS) & + AR_RTC_STATUS_M) == AR_RTC_STATUS_SHUTDOWN) { + if (!ath9k_hw_set_reset_reg(ah, ATH9K_RESET_POWER_ON)) { + return false; } - if (AR_SREV_9100(ah)) - REG_SET_BIT(ah, AR_RTC_RESET, - AR_RTC_RESET_EN); + if (!AR_SREV_9300_20_OR_LATER(ah)) + ath9k_hw_init_pll(ah, NULL); + } + if (AR_SREV_9100(ah)) + REG_SET_BIT(ah, AR_RTC_RESET, + AR_RTC_RESET_EN); + REG_SET_BIT(ah, AR_RTC_FORCE_WAKE, + AR_RTC_FORCE_WAKE_EN); + udelay(50); + + if (ath9k_hw_mci_is_enabled(ah)) + ar9003_mci_set_power_awake(ah); + + for (i = POWER_UP_TIME / 50; i > 0; i--) { + val = REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M; + if (val == AR_RTC_STATUS_ON) + break; + udelay(50); REG_SET_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN); - udelay(50); - - for (i = POWER_UP_TIME / 50; i > 0; i--) { - val = REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M; - if (val == AR_RTC_STATUS_ON) - break; - udelay(50); - REG_SET_BIT(ah, AR_RTC_FORCE_WAKE, - AR_RTC_FORCE_WAKE_EN); - } - if (i == 0) { - ath_err(ath9k_hw_common(ah), - "Failed to wakeup in %uus\n", - POWER_UP_TIME / 20); - return false; - } + } + if (i == 0) { + ath_err(ath9k_hw_common(ah), + "Failed to wakeup in %uus\n", + POWER_UP_TIME / 20); + return false; } REG_CLR_BIT(ah, AR_STA_ID1, AR_STA_ID1_PWR_SAV); @@ -2173,7 +2166,7 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah, int setChip) bool ath9k_hw_setpower(struct ath_hw *ah, enum ath9k_power_mode mode) { struct ath_common *common = ath9k_hw_common(ah); - int status = true, setChip = true; + int status = true; static const char *modes[] = { "AWAKE", "FULL-SLEEP", @@ -2189,25 +2182,17 @@ bool ath9k_hw_setpower(struct ath_hw *ah, enum ath9k_power_mode mode) switch (mode) { case ATH9K_PM_AWAKE: - status = ath9k_hw_set_power_awake(ah, setChip); - - if (ah->caps.hw_caps & ATH9K_HW_CAP_MCI) - REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2); - + status = ath9k_hw_set_power_awake(ah); break; case ATH9K_PM_FULL_SLEEP: - if (ah->caps.hw_caps & ATH9K_HW_CAP_MCI) + if (ath9k_hw_mci_is_enabled(ah)) ar9003_mci_set_full_sleep(ah); - ath9k_set_power_sleep(ah, setChip); + ath9k_set_power_sleep(ah); ah->chip_fullsleep = true; break; case ATH9K_PM_NETWORK_SLEEP: - - if (ah->caps.hw_caps & ATH9K_HW_CAP_MCI) - REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2); - - ath9k_set_power_network_sleep(ah, setChip); + ath9k_set_power_network_sleep(ah); break; default: ath_err(common, "Unknown power mode %u\n", mode); @@ -2600,6 +2585,14 @@ int ath9k_hw_fill_cap_info(struct ath_hw *ah) } + if (AR_SREV_9280_20_OR_LATER(ah)) { + pCap->hw_caps |= ATH9K_HW_WOW_DEVICE_CAPABLE | + ATH9K_HW_WOW_PATTERN_MATCH_EXACT; + + if (AR_SREV_9280(ah)) + pCap->hw_caps |= ATH9K_HW_WOW_PATTERN_MATCH_DWORD; + } + return 0; } @@ -2777,6 +2770,9 @@ EXPORT_SYMBOL(ath9k_hw_setrxfilter); bool ath9k_hw_phy_disable(struct ath_hw *ah) { + if (ath9k_hw_mci_is_enabled(ah)) + ar9003_mci_bt_gain_ctrl(ah); + if 
(!ath9k_hw_set_reset_reg(ah, ATH9K_RESET_WARM)) return false; @@ -2916,9 +2912,9 @@ void ath9k_hw_reset_tsf(struct ath_hw *ah) } EXPORT_SYMBOL(ath9k_hw_reset_tsf); -void ath9k_hw_set_tsfadjust(struct ath_hw *ah, u32 setting) +void ath9k_hw_set_tsfadjust(struct ath_hw *ah, bool set) { - if (setting) + if (set) ah->misc_mode |= AR_PCU_TX_ADD_TSF; else ah->misc_mode &= ~AR_PCU_TX_ADD_TSF; @@ -3162,6 +3158,7 @@ static struct { { AR_SREV_VERSION_9340, "9340" }, { AR_SREV_VERSION_9485, "9485" }, { AR_SREV_VERSION_9462, "9462" }, + { AR_SREV_VERSION_9550, "9550" }, }; /* For devices with external radios */ diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h index b620c557c2a6..dd0c146d81dc 100644 --- a/drivers/net/wireless/ath/ath9k/hw.h +++ b/drivers/net/wireless/ath/ath9k/hw.h @@ -48,6 +48,7 @@ #define AR9300_DEVID_AR9580 0x0033 #define AR9300_DEVID_AR9462 0x0034 #define AR9300_DEVID_AR9330 0x0035 +#define AR9300_DEVID_QCA955X 0x0038 #define AR5416_AR9100_DEVID 0x000b @@ -179,6 +180,37 @@ #define PAPRD_TABLE_SZ 24 #define PAPRD_IDEAL_AGC2_PWR_RANGE 0xe0 +/* + * Wake on Wireless + */ + +/* Keep Alive Frame */ +#define KAL_FRAME_LEN 28 +#define KAL_FRAME_TYPE 0x2 /* data frame */ +#define KAL_FRAME_SUB_TYPE 0x4 /* null data frame */ +#define KAL_DURATION_ID 0x3d +#define KAL_NUM_DATA_WORDS 6 +#define KAL_NUM_DESC_WORDS 12 +#define KAL_ANTENNA_MODE 1 +#define KAL_TO_DS 1 +#define KAL_DELAY 4 /*delay of 4ms between 2 KAL frames */ +#define KAL_TIMEOUT 900 + +#define MAX_PATTERN_SIZE 256 +#define MAX_PATTERN_MASK_SIZE 32 +#define MAX_NUM_PATTERN 8 +#define MAX_NUM_USER_PATTERN 6 /* deducting the disassociate and + deauthenticate packets */ + +/* + * WoW trigger mapping to hardware code + */ + +#define AH_WOW_USER_PATTERN_EN BIT(0) +#define AH_WOW_MAGIC_PATTERN_EN BIT(1) +#define AH_WOW_LINK_CHANGE BIT(2) +#define AH_WOW_BEACON_MISS BIT(3) + enum ath_hw_txq_subtype { ATH_TXQ_AC_BE = 0, ATH_TXQ_AC_BK = 1, @@ -211,8 +243,22 @@ enum ath9k_hw_caps { ATH9K_HW_CAP_RTT = BIT(14), ATH9K_HW_CAP_MCI = BIT(15), ATH9K_HW_CAP_DFS = BIT(16), + ATH9K_HW_WOW_DEVICE_CAPABLE = BIT(17), + ATH9K_HW_WOW_PATTERN_MATCH_EXACT = BIT(18), + ATH9K_HW_WOW_PATTERN_MATCH_DWORD = BIT(19), }; +/* + * WoW device capabilities + * @ATH9K_HW_WOW_DEVICE_CAPABLE: device revision is capable of WoW. + * @ATH9K_HW_WOW_PATTERN_MATCH_EXACT: device is capable of matching + * an exact user defined pattern or de-authentication/disassoc pattern. + * @ATH9K_HW_WOW_PATTERN_MATCH_DWORD: device requires the first four + * bytes of the pattern for user defined pattern, de-authentication and + * disassociation patterns for all types of possible frames recieved + * of those types. 
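
The new Wake-on-Wireless constants above pair a 256-byte maximum pattern with a 32-byte mask, i.e. one mask bit per pattern byte. The matcher below only illustrates that data layout; it is not the chip's pattern-match engine:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_PATTERN_SIZE        256
        #define MAX_PATTERN_MASK_SIZE   32      /* MAX_PATTERN_SIZE / 8 */

        static bool pattern_matches(const uint8_t *frame, size_t frame_len,
                                    const uint8_t *pattern, const uint8_t *mask,
                                    size_t pattern_len)
        {
                size_t i;

                if (pattern_len > frame_len || pattern_len > MAX_PATTERN_SIZE)
                        return false;

                for (i = 0; i < pattern_len; i++) {
                        bool care = mask[i / 8] & (1u << (i % 8));

                        if (care && frame[i] != pattern[i])
                                return false;
                }
                return true;
        }

        int main(void)
        {
                uint8_t frame[] = { 0xde, 0xad, 0xbe, 0xef };
                uint8_t patt[]  = { 0xde, 0x00, 0xbe, 0xef };
                uint8_t mask[MAX_PATTERN_MASK_SIZE] = { 0x0d }; /* care: bytes 0, 2, 3 */

                printf("match: %d\n", pattern_matches(frame, sizeof(frame),
                                                      patt, mask, sizeof(patt)));
                return 0;
        }
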
+ */ + struct ath9k_hw_capabilities { u32 hw_caps; /* ATH9K_HW_CAP_* from ath9k_hw_caps */ u16 rts_aggr_limit; @@ -814,17 +860,20 @@ struct ath_hw { struct ar5416IniArray iniBank7; struct ar5416IniArray iniAddac; struct ar5416IniArray iniPcieSerdes; +#ifdef CONFIG_PM_SLEEP + struct ar5416IniArray iniPcieSerdesWow; +#endif struct ar5416IniArray iniPcieSerdesLowPower; struct ar5416IniArray iniModesFastClock; struct ar5416IniArray iniAdditional; struct ar5416IniArray iniModesRxGain; + struct ar5416IniArray ini_modes_rx_gain_bounds; struct ar5416IniArray iniModesTxGain; struct ar5416IniArray iniCckfirNormal; struct ar5416IniArray iniCckfirJapan2484; struct ar5416IniArray ini_japan2484; struct ar5416IniArray iniModes_9271_ANI_reg; struct ar5416IniArray ini_radio_post_sys2ant; - struct ar5416IniArray ini_BTCOEX_MAX_TXPWR; struct ar5416IniArray iniMac[ATH_INI_NUM_SPLIT]; struct ar5416IniArray iniBB[ATH_INI_NUM_SPLIT]; @@ -862,6 +911,9 @@ struct ath_hw { /* Enterprise mode cap */ u32 ent_mode; +#ifdef CONFIG_PM_SLEEP + u32 wow_event_mask; +#endif bool is_clk_25mhz; int (*get_mac_revision)(void); int (*external_reset)(void); @@ -942,7 +994,7 @@ u32 ath9k_hw_gettsf32(struct ath_hw *ah); u64 ath9k_hw_gettsf64(struct ath_hw *ah); void ath9k_hw_settsf64(struct ath_hw *ah, u64 tsf64); void ath9k_hw_reset_tsf(struct ath_hw *ah); -void ath9k_hw_set_tsfadjust(struct ath_hw *ah, u32 setting); +void ath9k_hw_set_tsfadjust(struct ath_hw *ah, bool set); void ath9k_hw_init_global_settings(struct ath_hw *ah); u32 ar9003_get_pll_sqsum_dvc(struct ath_hw *ah); void ath9k_hw_set11nmac2040(struct ath_hw *ah); @@ -1020,16 +1072,8 @@ void ar9002_hw_attach_ops(struct ath_hw *ah); void ar9003_hw_attach_ops(struct ath_hw *ah); void ar9002_hw_load_ani_reg(struct ath_hw *ah, struct ath9k_channel *chan); -/* - * ANI work can be shared between all families but a next - * generation implementation of ANI will be used only for AR9003 only - * for now as the other families still need to be tested with the same - * next generation ANI. Feel free to start testing it though for the - * older families (AR5008, AR9001, AR9002) by using modparam_force_new_ani. 
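
struct ath_hw above only grows its WoW state (wow_event_mask, the extra SerDes table) when CONFIG_PM_SLEEP is enabled, so non-suspend builds carry none of it. The shape of that pattern in a compilable miniature; CONFIG_PM_SLEEP here is just a stand-in macro, not wired to Kconfig:

        #include <stdio.h>

        /* #define CONFIG_PM_SLEEP 1 */         /* normally set by the build system */

        struct hw_sketch {
                int always_present;
        #ifdef CONFIG_PM_SLEEP
                unsigned int wow_event_mask;    /* suspend/resume-only state */
        #endif
        };

        int main(void)
        {
                printf("sizeof(struct hw_sketch) = %zu\n", sizeof(struct hw_sketch));
                return 0;
        }
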
- */ -extern int modparam_force_new_ani; + void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning); -void ath9k_hw_proc_mib_event(struct ath_hw *ah); void ath9k_hw_ani_monitor(struct ath_hw *ah, struct ath9k_channel *chan); #ifdef CONFIG_ATH9K_BTCOEX_SUPPORT @@ -1037,6 +1081,12 @@ static inline bool ath9k_hw_btcoex_is_enabled(struct ath_hw *ah) { return ah->btcoex_hw.enabled; } +static inline bool ath9k_hw_mci_is_enabled(struct ath_hw *ah) +{ + return ah->common.btcoex_enabled && + (ah->caps.hw_caps & ATH9K_HW_CAP_MCI); + +} void ath9k_hw_btcoex_enable(struct ath_hw *ah); static inline enum ath_btcoex_scheme ath9k_hw_get_btcoex_scheme(struct ath_hw *ah) @@ -1048,6 +1098,10 @@ static inline bool ath9k_hw_btcoex_is_enabled(struct ath_hw *ah) { return false; } +static inline bool ath9k_hw_mci_is_enabled(struct ath_hw *ah) +{ + return false; +} static inline void ath9k_hw_btcoex_enable(struct ath_hw *ah) { } @@ -1058,6 +1112,37 @@ ath9k_hw_get_btcoex_scheme(struct ath_hw *ah) } #endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */ + +#ifdef CONFIG_PM_SLEEP +const char *ath9k_hw_wow_event_to_string(u32 wow_event); +void ath9k_hw_wow_apply_pattern(struct ath_hw *ah, u8 *user_pattern, + u8 *user_mask, int pattern_count, + int pattern_len); +u32 ath9k_hw_wow_wakeup(struct ath_hw *ah); +void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable); +#else +static inline const char *ath9k_hw_wow_event_to_string(u32 wow_event) +{ + return NULL; +} +static inline void ath9k_hw_wow_apply_pattern(struct ath_hw *ah, + u8 *user_pattern, + u8 *user_mask, + int pattern_count, + int pattern_len) +{ +} +static inline u32 ath9k_hw_wow_wakeup(struct ath_hw *ah) +{ + return 0; +} +static inline void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable) +{ +} +#endif + + + #define ATH9K_CLOCK_RATE_CCK 22 #define ATH9K_CLOCK_RATE_5GHZ_OFDM 40 #define ATH9K_CLOCK_RATE_2GHZ_OFDM 44 diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c index dee9e092449a..f33712140fa5 100644 --- a/drivers/net/wireless/ath/ath9k/init.c +++ b/drivers/net/wireless/ath/ath9k/init.c @@ -434,6 +434,7 @@ static int ath9k_init_queues(struct ath_softc *sc) for (i = 0; i < WME_NUM_AC; i++) { sc->tx.txq_map[i] = ath_txq_setup(sc, ATH9K_TX_QUEUE_DATA, i); sc->tx.txq_map[i]->mac80211_qnum = i; + sc->tx.txq_max_pending[i] = ATH_MAX_QDEPTH; } return 0; } @@ -489,6 +490,7 @@ static void ath9k_init_misc(struct ath_softc *sc) setup_timer(&common->ani.timer, ath_ani_calibrate, (unsigned long)sc); + sc->last_rssi = ATH_RSSI_DUMMY_MARKER; sc->config.txpowlimit = ATH_TXPOWER_MAX; memcpy(common->bssidmask, ath_bcast_mac, ETH_ALEN); sc->beacon.slottime = ATH9K_SLOT_TIME_9; @@ -557,9 +559,15 @@ static int ath9k_init_softc(u16 devid, struct ath_softc *sc, spin_lock_init(&sc->debug.samp_lock); #endif tasklet_init(&sc->intr_tq, ath9k_tasklet, (unsigned long)sc); - tasklet_init(&sc->bcon_tasklet, ath_beacon_tasklet, + tasklet_init(&sc->bcon_tasklet, ath9k_beacon_tasklet, (unsigned long)sc); + INIT_WORK(&sc->hw_reset_work, ath_reset_work); + INIT_WORK(&sc->hw_check_work, ath_hw_check); + INIT_WORK(&sc->paprd_work, ath_paprd_calibrate); + INIT_DELAYED_WORK(&sc->hw_pll_work, ath_hw_pll_work); + setup_timer(&sc->rx_poll_timer, ath_rx_poll, (unsigned long)sc); + /* * Cache line size is used to size and align various * structures used to communicate with the hardware. 
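
hw.h above introduces ath9k_hw_mci_is_enabled(), and the reset and power paths earlier in the diff call it instead of caching a local mci flag. Because the helper also folds in the runtime btcoex_enabled switch, MCI work is skipped when coexistence is turned off even on MCI-capable silicon. A reduced model of that predicate (standalone, CAP_MCI mirrors the BIT(15) value in the header):

        #include <stdbool.h>
        #include <stdio.h>

        #define CAP_MCI (1u << 15)              /* mirrors ATH9K_HW_CAP_MCI */

        struct hw_model {
                unsigned int hw_caps;
                bool btcoex_enabled;
        };

        static bool mci_is_enabled(const struct hw_model *hw)
        {
                return hw->btcoex_enabled && (hw->hw_caps & CAP_MCI);
        }

        int main(void)
        {
                struct hw_model hw = { .hw_caps = CAP_MCI, .btcoex_enabled = false };

                printf("MCI used: %d\n", mci_is_enabled(&hw));  /* 0: coex disabled */
                hw.btcoex_enabled = true;
                printf("MCI used: %d\n", mci_is_enabled(&hw));  /* 1 */
                return 0;
        }
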
@@ -590,6 +598,9 @@ static int ath9k_init_softc(u16 devid, struct ath_softc *sc, ath9k_cmn_init_crypto(sc->sc_ah); ath9k_init_misc(sc); + if (common->bus_ops->aspm_init) + common->bus_ops->aspm_init(common); + return 0; err_btcoex: @@ -703,6 +714,24 @@ void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw) hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS; hw->wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL; +#ifdef CONFIG_PM_SLEEP + + if ((ah->caps.hw_caps & ATH9K_HW_WOW_DEVICE_CAPABLE) && + device_can_wakeup(sc->dev)) { + + hw->wiphy->wowlan.flags = WIPHY_WOWLAN_MAGIC_PKT | + WIPHY_WOWLAN_DISCONNECT; + hw->wiphy->wowlan.n_patterns = MAX_NUM_USER_PATTERN; + hw->wiphy->wowlan.pattern_min_len = 1; + hw->wiphy->wowlan.pattern_max_len = MAX_PATTERN_SIZE; + + } + + atomic_set(&sc->wow_sleep_proc_intr, -1); + atomic_set(&sc->wow_got_bmiss_intr, -1); + +#endif + hw->queues = 4; hw->max_rates = 4; hw->channel_change_time = 5000; @@ -782,11 +811,6 @@ int ath9k_init_device(u16 devid, struct ath_softc *sc, ARRAY_SIZE(ath9k_tpt_blink)); #endif - INIT_WORK(&sc->hw_reset_work, ath_reset_work); - INIT_WORK(&sc->hw_check_work, ath_hw_check); - INIT_WORK(&sc->paprd_work, ath_paprd_calibrate); - INIT_DELAYED_WORK(&sc->hw_pll_work, ath_hw_pll_work); - /* Register with mac80211 */ error = ieee80211_register_hw(hw); if (error) @@ -805,9 +829,6 @@ int ath9k_init_device(u16 devid, struct ath_softc *sc, goto error_world; } - setup_timer(&sc->rx_poll_timer, ath_rx_poll, (unsigned long)sc); - sc->last_rssi = ATH_RSSI_DUMMY_MARKER; - ath_init_leds(sc); ath_start_rfkill_poll(sc); diff --git a/drivers/net/wireless/ath/ath9k/link.c b/drivers/net/wireless/ath/ath9k/link.c new file mode 100644 index 000000000000..d4549e9aac5c --- /dev/null +++ b/drivers/net/wireless/ath/ath9k/link.c @@ -0,0 +1,555 @@ +/* + * Copyright (c) 2012 Qualcomm Atheros, Inc. + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +#include "ath9k.h" + +/* + * TX polling - checks if the TX engine is stuck somewhere + * and issues a chip reset if so. 
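
ath9k_set_hw_capab() above advertises WoW triggers only when the hardware revision is WoW-capable and the platform reports it can wake the system. A generic sketch of that double guard; the struct fields here are illustrative, not mac80211's wiphy layout:

        #include <stdbool.h>
        #include <stdio.h>

        struct wowlan_sketch {
                bool magic_pkt;
                bool disconnect;
                int n_patterns;
        };

        static bool advertise_wow(bool hw_capable, bool can_wakeup,
                                  struct wowlan_sketch *out)
        {
                if (!hw_capable || !can_wakeup)
                        return false;           /* leave WoW unadvertised */

                out->magic_pkt = true;
                out->disconnect = true;
                out->n_patterns = 6;            /* MAX_NUM_USER_PATTERN above */
                return true;
        }

        int main(void)
        {
                struct wowlan_sketch w = { 0 };

                printf("advertised: %d\n", advertise_wow(true, false, &w));
                printf("advertised: %d\n", advertise_wow(true, true, &w));
                return 0;
        }
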
+ */ +void ath_tx_complete_poll_work(struct work_struct *work) +{ + struct ath_softc *sc = container_of(work, struct ath_softc, + tx_complete_work.work); + struct ath_txq *txq; + int i; + bool needreset = false; +#ifdef CONFIG_ATH9K_DEBUGFS + sc->tx_complete_poll_work_seen++; +#endif + + for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) + if (ATH_TXQ_SETUP(sc, i)) { + txq = &sc->tx.txq[i]; + ath_txq_lock(sc, txq); + if (txq->axq_depth) { + if (txq->axq_tx_inprogress) { + needreset = true; + ath_txq_unlock(sc, txq); + break; + } else { + txq->axq_tx_inprogress = true; + } + } + ath_txq_unlock_complete(sc, txq); + } + + if (needreset) { + ath_dbg(ath9k_hw_common(sc->sc_ah), RESET, + "tx hung, resetting the chip\n"); + ath9k_queue_reset(sc, RESET_TYPE_TX_HANG); + return; + } + + ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work, + msecs_to_jiffies(ATH_TX_COMPLETE_POLL_INT)); +} + +/* + * Checks if the BB/MAC is hung. + */ +void ath_hw_check(struct work_struct *work) +{ + struct ath_softc *sc = container_of(work, struct ath_softc, hw_check_work); + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + unsigned long flags; + int busy; + u8 is_alive, nbeacon = 1; + enum ath_reset_type type; + + ath9k_ps_wakeup(sc); + is_alive = ath9k_hw_check_alive(sc->sc_ah); + + if (is_alive && !AR_SREV_9300(sc->sc_ah)) + goto out; + else if (!is_alive && AR_SREV_9300(sc->sc_ah)) { + ath_dbg(common, RESET, + "DCU stuck is detected. Schedule chip reset\n"); + type = RESET_TYPE_MAC_HANG; + goto sched_reset; + } + + spin_lock_irqsave(&common->cc_lock, flags); + busy = ath_update_survey_stats(sc); + spin_unlock_irqrestore(&common->cc_lock, flags); + + ath_dbg(common, RESET, "Possible baseband hang, busy=%d (try %d)\n", + busy, sc->hw_busy_count + 1); + if (busy >= 99) { + if (++sc->hw_busy_count >= 3) { + type = RESET_TYPE_BB_HANG; + goto sched_reset; + } + } else if (busy >= 0) { + sc->hw_busy_count = 0; + nbeacon = 3; + } + + ath_start_rx_poll(sc, nbeacon); + goto out; + +sched_reset: + ath9k_queue_reset(sc, type); +out: + ath9k_ps_restore(sc); +} + +/* + * PLL-WAR for AR9485/AR9340 + */ +static bool ath_hw_pll_rx_hang_check(struct ath_softc *sc, u32 pll_sqsum) +{ + static int count; + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + + if (pll_sqsum >= 0x40000) { + count++; + if (count == 3) { + ath_dbg(common, RESET, "PLL WAR, resetting the chip\n"); + ath9k_queue_reset(sc, RESET_TYPE_PLL_HANG); + count = 0; + return true; + } + } else { + count = 0; + } + + return false; +} + +void ath_hw_pll_work(struct work_struct *work) +{ + u32 pll_sqsum; + struct ath_softc *sc = container_of(work, struct ath_softc, + hw_pll_work.work); + /* + * ensure that the PLL WAR is executed only + * after the STA is associated (or) if the + * beaconing had started in interfaces that + * uses beacons. + */ + if (!test_bit(SC_OP_BEACONS, &sc->sc_flags)) + return; + + ath9k_ps_wakeup(sc); + pll_sqsum = ar9003_get_pll_sqsum_dvc(sc->sc_ah); + ath9k_ps_restore(sc); + if (ath_hw_pll_rx_hang_check(sc, pll_sqsum)) + return; + + ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work, + msecs_to_jiffies(ATH_PLL_WORK_INTERVAL)); +} + +/* + * RX Polling - monitors baseband hangs. 
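
ath_tx_complete_poll_work() above (now in link.c) is a two-pass watchdog: the first poll that finds a non-empty hardware queue only marks it in-progress, and if the mark is still set one poll later the queue made no forward progress, so a chip reset is queued. The core of that logic, standalone — in the driver the mark is cleared by TX completions, here simply when the queue drains:

        #include <stdbool.h>
        #include <stdio.h>

        struct txq_sketch {
                int depth;                /* frames queued in hardware */
                bool tx_inprogress;       /* set by the previous poll */
        };

        /* returns true when a hang is detected */
        static bool poll_txq(struct txq_sketch *q)
        {
                if (!q->depth) {
                        q->tx_inprogress = false;
                        return false;
                }
                if (q->tx_inprogress)
                        return true;      /* no completion since last poll: hung */

                q->tx_inprogress = true;  /* re-check on the next poll */
                return false;
        }

        int main(void)
        {
                struct txq_sketch q = { .depth = 5, .tx_inprogress = false };

                printf("poll 1: hang=%d\n", poll_txq(&q));  /* arms the check */
                printf("poll 2: hang=%d\n", poll_txq(&q));  /* still stuck -> hang */
                return 0;
        }
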
+ */ +void ath_start_rx_poll(struct ath_softc *sc, u8 nbeacon) +{ + if (!AR_SREV_9300(sc->sc_ah)) + return; + + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) + return; + + mod_timer(&sc->rx_poll_timer, jiffies + msecs_to_jiffies + (nbeacon * sc->cur_beacon_conf.beacon_interval)); +} + +void ath_rx_poll(unsigned long data) +{ + struct ath_softc *sc = (struct ath_softc *)data; + + ieee80211_queue_work(sc->hw, &sc->hw_check_work); +} + +/* + * PA Pre-distortion. + */ +static void ath_paprd_activate(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath9k_hw_cal_data *caldata = ah->caldata; + int chain; + + if (!caldata || !caldata->paprd_done) + return; + + ath9k_ps_wakeup(sc); + ar9003_paprd_enable(ah, false); + for (chain = 0; chain < AR9300_MAX_CHAINS; chain++) { + if (!(ah->txchainmask & BIT(chain))) + continue; + + ar9003_paprd_populate_single_table(ah, caldata, chain); + } + + ar9003_paprd_enable(ah, true); + ath9k_ps_restore(sc); +} + +static bool ath_paprd_send_frame(struct ath_softc *sc, struct sk_buff *skb, int chain) +{ + struct ieee80211_hw *hw = sc->hw; + struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + struct ath_tx_control txctl; + int time_left; + + memset(&txctl, 0, sizeof(txctl)); + txctl.txq = sc->tx.txq_map[WME_AC_BE]; + + memset(tx_info, 0, sizeof(*tx_info)); + tx_info->band = hw->conf.channel->band; + tx_info->flags |= IEEE80211_TX_CTL_NO_ACK; + tx_info->control.rates[0].idx = 0; + tx_info->control.rates[0].count = 1; + tx_info->control.rates[0].flags = IEEE80211_TX_RC_MCS; + tx_info->control.rates[1].idx = -1; + + init_completion(&sc->paprd_complete); + txctl.paprd = BIT(chain); + + if (ath_tx_start(hw, skb, &txctl) != 0) { + ath_dbg(common, CALIBRATE, "PAPRD TX failed\n"); + dev_kfree_skb_any(skb); + return false; + } + + time_left = wait_for_completion_timeout(&sc->paprd_complete, + msecs_to_jiffies(ATH_PAPRD_TIMEOUT)); + + if (!time_left) + ath_dbg(common, CALIBRATE, + "Timeout waiting for paprd training on TX chain %d\n", + chain); + + return !!time_left; +} + +void ath_paprd_calibrate(struct work_struct *work) +{ + struct ath_softc *sc = container_of(work, struct ath_softc, paprd_work); + struct ieee80211_hw *hw = sc->hw; + struct ath_hw *ah = sc->sc_ah; + struct ieee80211_hdr *hdr; + struct sk_buff *skb = NULL; + struct ath9k_hw_cal_data *caldata = ah->caldata; + struct ath_common *common = ath9k_hw_common(ah); + int ftype; + int chain_ok = 0; + int chain; + int len = 1800; + + if (!caldata) + return; + + ath9k_ps_wakeup(sc); + + if (ar9003_paprd_init_table(ah) < 0) + goto fail_paprd; + + skb = alloc_skb(len, GFP_KERNEL); + if (!skb) + goto fail_paprd; + + skb_put(skb, len); + memset(skb->data, 0, len); + hdr = (struct ieee80211_hdr *)skb->data; + ftype = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_NULLFUNC; + hdr->frame_control = cpu_to_le16(ftype); + hdr->duration_id = cpu_to_le16(10); + memcpy(hdr->addr1, hw->wiphy->perm_addr, ETH_ALEN); + memcpy(hdr->addr2, hw->wiphy->perm_addr, ETH_ALEN); + memcpy(hdr->addr3, hw->wiphy->perm_addr, ETH_ALEN); + + for (chain = 0; chain < AR9300_MAX_CHAINS; chain++) { + if (!(ah->txchainmask & BIT(chain))) + continue; + + chain_ok = 0; + + ath_dbg(common, CALIBRATE, + "Sending PAPRD frame for thermal measurement on chain %d\n", + chain); + if (!ath_paprd_send_frame(sc, skb, chain)) + goto fail_paprd; + + ar9003_paprd_setup_gain_table(ah, chain); + + ath_dbg(common, CALIBRATE, + "Sending PAPRD training frame on chain 
%d\n", chain); + if (!ath_paprd_send_frame(sc, skb, chain)) + goto fail_paprd; + + if (!ar9003_paprd_is_done(ah)) { + ath_dbg(common, CALIBRATE, + "PAPRD not yet done on chain %d\n", chain); + break; + } + + if (ar9003_paprd_create_curve(ah, caldata, chain)) { + ath_dbg(common, CALIBRATE, + "PAPRD create curve failed on chain %d\n", + chain); + break; + } + + chain_ok = 1; + } + kfree_skb(skb); + + if (chain_ok) { + caldata->paprd_done = true; + ath_paprd_activate(sc); + } + +fail_paprd: + ath9k_ps_restore(sc); +} + +/* + * ANI performs periodic noise floor calibration + * that is used to adjust and optimize the chip performance. This + * takes environmental changes (location, temperature) into account. + * When the task is complete, it reschedules itself depending on the + * appropriate interval that was calculated. + */ +void ath_ani_calibrate(unsigned long data) +{ + struct ath_softc *sc = (struct ath_softc *)data; + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + bool longcal = false; + bool shortcal = false; + bool aniflag = false; + unsigned int timestamp = jiffies_to_msecs(jiffies); + u32 cal_interval, short_cal_interval, long_cal_interval; + unsigned long flags; + + if (ah->caldata && ah->caldata->nfcal_interference) + long_cal_interval = ATH_LONG_CALINTERVAL_INT; + else + long_cal_interval = ATH_LONG_CALINTERVAL; + + short_cal_interval = (ah->opmode == NL80211_IFTYPE_AP) ? + ATH_AP_SHORT_CALINTERVAL : ATH_STA_SHORT_CALINTERVAL; + + /* Only calibrate if awake */ + if (sc->sc_ah->power_mode != ATH9K_PM_AWAKE) + goto set_timer; + + ath9k_ps_wakeup(sc); + + /* Long calibration runs independently of short calibration. */ + if ((timestamp - common->ani.longcal_timer) >= long_cal_interval) { + longcal = true; + common->ani.longcal_timer = timestamp; + } + + /* Short calibration applies only while caldone is false */ + if (!common->ani.caldone) { + if ((timestamp - common->ani.shortcal_timer) >= short_cal_interval) { + shortcal = true; + common->ani.shortcal_timer = timestamp; + common->ani.resetcal_timer = timestamp; + } + } else { + if ((timestamp - common->ani.resetcal_timer) >= + ATH_RESTART_CALINTERVAL) { + common->ani.caldone = ath9k_hw_reset_calvalid(ah); + if (common->ani.caldone) + common->ani.resetcal_timer = timestamp; + } + } + + /* Verify whether we must check ANI */ + if (sc->sc_ah->config.enable_ani + && (timestamp - common->ani.checkani_timer) >= + ah->config.ani_poll_interval) { + aniflag = true; + common->ani.checkani_timer = timestamp; + } + + /* Call ANI routine if necessary */ + if (aniflag) { + spin_lock_irqsave(&common->cc_lock, flags); + ath9k_hw_ani_monitor(ah, ah->curchan); + ath_update_survey_stats(sc); + spin_unlock_irqrestore(&common->cc_lock, flags); + } + + /* Perform calibration if necessary */ + if (longcal || shortcal) { + common->ani.caldone = + ath9k_hw_calibrate(ah, ah->curchan, + ah->rxchainmask, longcal); + } + + ath_dbg(common, ANI, + "Calibration @%lu finished: %s %s %s, caldone: %s\n", + jiffies, + longcal ? "long" : "", shortcal ? "short" : "", + aniflag ? "ani" : "", common->ani.caldone ? "true" : "false"); + + ath9k_debug_samp_bb_mac(sc); + ath9k_ps_restore(sc); + +set_timer: + /* + * Set timer interval based on previous results. + * The interval must be the shortest necessary to satisfy ANI, + * short calibration and long calibration. 
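
ath_ani_calibrate() above decides on each pass whether the long calibration, the short calibration and the ANI check are due by comparing millisecond timestamps against per-task intervals. A compact model of that gating; the interval values are typical ones, not taken from this diff:

        #include <stdbool.h>
        #include <stdio.h>

        static bool due(unsigned int now_ms, unsigned int *last_ms,
                        unsigned int interval_ms)
        {
                if (now_ms - *last_ms < interval_ms)
                        return false;
                *last_ms = now_ms;              /* rearm for the next interval */
                return true;
        }

        int main(void)
        {
                unsigned int longcal_timer = 0, shortcal_timer = 0;
                unsigned int now = 30500;       /* pretend 30.5 s of uptime */

                printf("long  cal due: %d\n", due(now, &longcal_timer, 30000));
                printf("short cal due: %d\n", due(now, &shortcal_timer, 100));
                return 0;
        }
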
+ */ + cal_interval = ATH_LONG_CALINTERVAL; + if (sc->sc_ah->config.enable_ani) + cal_interval = min(cal_interval, + (u32)ah->config.ani_poll_interval); + if (!common->ani.caldone) + cal_interval = min(cal_interval, (u32)short_cal_interval); + + mod_timer(&common->ani.timer, jiffies + msecs_to_jiffies(cal_interval)); + if ((sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_PAPRD) && ah->caldata) { + if (!ah->caldata->paprd_done) + ieee80211_queue_work(sc->hw, &sc->paprd_work); + else if (!ah->paprd_table_write_done) + ath_paprd_activate(sc); + } +} + +void ath_start_ani(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + unsigned long timestamp = jiffies_to_msecs(jiffies); + + if (common->disable_ani || + !test_bit(SC_OP_ANI_RUN, &sc->sc_flags) || + (sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)) + return; + + common->ani.longcal_timer = timestamp; + common->ani.shortcal_timer = timestamp; + common->ani.checkani_timer = timestamp; + + ath_dbg(common, ANI, "Starting ANI\n"); + mod_timer(&common->ani.timer, + jiffies + msecs_to_jiffies((u32)ah->config.ani_poll_interval)); +} + +void ath_stop_ani(struct ath_softc *sc) +{ + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + + ath_dbg(common, ANI, "Stopping ANI\n"); + del_timer_sync(&common->ani.timer); +} + +void ath_check_ani(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf; + + /* + * Check for the various conditions in which ANI has to + * be stopped. + */ + if (ah->opmode == NL80211_IFTYPE_ADHOC) { + if (!cur_conf->enable_beacon) + goto stop_ani; + } else if (ah->opmode == NL80211_IFTYPE_AP) { + if (!cur_conf->enable_beacon) { + /* + * Disable ANI only when there are no + * associated stations. + */ + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) + goto stop_ani; + } + } else if (ah->opmode == NL80211_IFTYPE_STATION) { + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) + goto stop_ani; + } + + if (!test_bit(SC_OP_ANI_RUN, &sc->sc_flags)) { + set_bit(SC_OP_ANI_RUN, &sc->sc_flags); + ath_start_ani(sc); + } + + return; + +stop_ani: + clear_bit(SC_OP_ANI_RUN, &sc->sc_flags); + ath_stop_ani(sc); +} + +void ath_update_survey_nf(struct ath_softc *sc, int channel) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath9k_channel *chan = &ah->channels[channel]; + struct survey_info *survey = &sc->survey[channel]; + + if (chan->noisefloor) { + survey->filled |= SURVEY_INFO_NOISE_DBM; + survey->noise = ath9k_hw_getchan_noise(ah, chan); + } +} + +/* + * Updates the survey statistics and returns the busy time since last + * update in %, if the measurement duration was long enough for the + * result to be useful, -1 otherwise. 
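
The set_timer block above then rearms the calibration timer with the shortest interval any still-active task requires, so one timer serves ANI, short calibration and long calibration. The selection on its own (interval values are representative, not quoted from the driver):

        #include <stdio.h>

        static unsigned int min_u32(unsigned int a, unsigned int b)
        {
                return a < b ? a : b;
        }

        int main(void)
        {
                unsigned int cal_interval = 30000;      /* long-cal interval, ms */
                int ani_enabled = 1, caldone = 0;
                unsigned int ani_poll_interval = 1000;  /* representative ANI poll, ms */
                unsigned int short_cal_interval = 100;  /* representative short cal, ms */

                if (ani_enabled)
                        cal_interval = min_u32(cal_interval, ani_poll_interval);
                if (!caldone)
                        cal_interval = min_u32(cal_interval, short_cal_interval);

                printf("next timer in %u ms\n", cal_interval);
                return 0;
        }
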
+ */ +int ath_update_survey_stats(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + int pos = ah->curchan - &ah->channels[0]; + struct survey_info *survey = &sc->survey[pos]; + struct ath_cycle_counters *cc = &common->cc_survey; + unsigned int div = common->clockrate * 1000; + int ret = 0; + + if (!ah->curchan) + return -1; + + if (ah->power_mode == ATH9K_PM_AWAKE) + ath_hw_cycle_counters_update(common); + + if (cc->cycles > 0) { + survey->filled |= SURVEY_INFO_CHANNEL_TIME | + SURVEY_INFO_CHANNEL_TIME_BUSY | + SURVEY_INFO_CHANNEL_TIME_RX | + SURVEY_INFO_CHANNEL_TIME_TX; + survey->channel_time += cc->cycles / div; + survey->channel_time_busy += cc->rx_busy / div; + survey->channel_time_rx += cc->rx_frame / div; + survey->channel_time_tx += cc->tx_frame / div; + } + + if (cc->cycles < div) + return -1; + + if (cc->cycles > 0) + ret = cc->rx_busy * 100 / cc->cycles; + + memset(cc, 0, sizeof(*cc)); + + ath_update_survey_nf(sc, pos); + + return ret; +} diff --git a/drivers/net/wireless/ath/ath9k/mac.c b/drivers/net/wireless/ath/ath9k/mac.c index 04ef775ccee1..7990cd55599c 100644 --- a/drivers/net/wireless/ath/ath9k/mac.c +++ b/drivers/net/wireless/ath/ath9k/mac.c @@ -810,7 +810,7 @@ void ath9k_hw_enable_interrupts(struct ath_hw *ah) return; } - if (AR_SREV_9340(ah)) + if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) sync_default &= ~AR_INTR_SYNC_HOST1_FATAL; async_mask = AR_INTR_MAC_IRQ; diff --git a/drivers/net/wireless/ath/ath9k/mac.h b/drivers/net/wireless/ath/ath9k/mac.h index 21c955609e6c..0eba36dca6f8 100644 --- a/drivers/net/wireless/ath/ath9k/mac.h +++ b/drivers/net/wireless/ath/ath9k/mac.h @@ -646,6 +646,7 @@ enum ath9k_rx_filter { ATH9K_RX_FILTER_PHYRADAR = 0x00002000, ATH9K_RX_FILTER_MCAST_BCAST_ALL = 0x00008000, ATH9K_RX_FILTER_CONTROL_WRAPPER = 0x00080000, + ATH9K_RX_FILTER_4ADDRESS = 0x00100000, }; #define ATH9K_RATESERIES_RTS_CTS 0x0001 diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c index dac1a2709e3c..6049d8b82855 100644 --- a/drivers/net/wireless/ath/ath9k/main.c +++ b/drivers/net/wireless/ath/ath9k/main.c @@ -19,7 +19,10 @@ #include "ath9k.h" #include "btcoex.h" -static u8 parse_mpdudensity(u8 mpdudensity) +static void ath9k_set_assoc_state(struct ath_softc *sc, + struct ieee80211_vif *vif); + +u8 ath9k_parse_mpdudensity(u8 mpdudensity) { /* * 802.11n D2.0 defined values for "Minimum MPDU Start Spacing": @@ -101,6 +104,7 @@ void ath9k_ps_wakeup(struct ath_softc *sc) spin_lock(&common->cc_lock); ath_hw_cycle_counters_update(common); memset(&common->cc_survey, 0, sizeof(common->cc_survey)); + memset(&common->cc_ani, 0, sizeof(common->cc_ani)); spin_unlock(&common->cc_lock); } @@ -129,6 +133,8 @@ void ath9k_ps_restore(struct ath_softc *sc) PS_WAIT_FOR_PSPOLL_DATA | PS_WAIT_FOR_TX_ACK))) { mode = ATH9K_PM_NETWORK_SLEEP; + if (ath9k_hw_btcoex_is_enabled(sc->sc_ah)) + ath9k_btcoex_stop_gen_timer(sc); } else { goto unlock; } @@ -143,90 +149,17 @@ void ath9k_ps_restore(struct ath_softc *sc) spin_unlock_irqrestore(&sc->sc_pm_lock, flags); } -void ath_start_ani(struct ath_common *common) -{ - struct ath_hw *ah = common->ah; - unsigned long timestamp = jiffies_to_msecs(jiffies); - struct ath_softc *sc = (struct ath_softc *) common->priv; - - if (!(sc->sc_flags & SC_OP_ANI_RUN)) - return; - - if (sc->sc_flags & SC_OP_OFFCHANNEL) - return; - - common->ani.longcal_timer = timestamp; - common->ani.shortcal_timer = timestamp; - common->ani.checkani_timer = timestamp; - - 
mod_timer(&common->ani.timer, - jiffies + - msecs_to_jiffies((u32)ah->config.ani_poll_interval)); -} - -static void ath_update_survey_nf(struct ath_softc *sc, int channel) -{ - struct ath_hw *ah = sc->sc_ah; - struct ath9k_channel *chan = &ah->channels[channel]; - struct survey_info *survey = &sc->survey[channel]; - - if (chan->noisefloor) { - survey->filled |= SURVEY_INFO_NOISE_DBM; - survey->noise = ath9k_hw_getchan_noise(ah, chan); - } -} - -/* - * Updates the survey statistics and returns the busy time since last - * update in %, if the measurement duration was long enough for the - * result to be useful, -1 otherwise. - */ -static int ath_update_survey_stats(struct ath_softc *sc) -{ - struct ath_hw *ah = sc->sc_ah; - struct ath_common *common = ath9k_hw_common(ah); - int pos = ah->curchan - &ah->channels[0]; - struct survey_info *survey = &sc->survey[pos]; - struct ath_cycle_counters *cc = &common->cc_survey; - unsigned int div = common->clockrate * 1000; - int ret = 0; - - if (!ah->curchan) - return -1; - - if (ah->power_mode == ATH9K_PM_AWAKE) - ath_hw_cycle_counters_update(common); - - if (cc->cycles > 0) { - survey->filled |= SURVEY_INFO_CHANNEL_TIME | - SURVEY_INFO_CHANNEL_TIME_BUSY | - SURVEY_INFO_CHANNEL_TIME_RX | - SURVEY_INFO_CHANNEL_TIME_TX; - survey->channel_time += cc->cycles / div; - survey->channel_time_busy += cc->rx_busy / div; - survey->channel_time_rx += cc->rx_frame / div; - survey->channel_time_tx += cc->tx_frame / div; - } - - if (cc->cycles < div) - return -1; - - if (cc->cycles > 0) - ret = cc->rx_busy * 100 / cc->cycles; - - memset(cc, 0, sizeof(*cc)); - - ath_update_survey_nf(sc, pos); - - return ret; -} - static void __ath_cancel_work(struct ath_softc *sc) { cancel_work_sync(&sc->paprd_work); cancel_work_sync(&sc->hw_check_work); cancel_delayed_work_sync(&sc->tx_complete_work); cancel_delayed_work_sync(&sc->hw_pll_work); + +#ifdef CONFIG_ATH9K_BTCOEX_SUPPORT + if (ath9k_hw_mci_is_enabled(sc->sc_ah)) + cancel_work_sync(&sc->mci_work); +#endif } static void ath_cancel_work(struct ath_softc *sc) @@ -235,16 +168,28 @@ static void ath_cancel_work(struct ath_softc *sc) cancel_work_sync(&sc->hw_reset_work); } +static void ath_restart_work(struct ath_softc *sc) +{ + ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work, 0); + + if (AR_SREV_9340(sc->sc_ah) || AR_SREV_9485(sc->sc_ah) || + AR_SREV_9550(sc->sc_ah)) + ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work, + msecs_to_jiffies(ATH_PLL_WORK_INTERVAL)); + + ath_start_rx_poll(sc, 3); + ath_start_ani(sc); +} + static bool ath_prepare_reset(struct ath_softc *sc, bool retry_tx, bool flush) { struct ath_hw *ah = sc->sc_ah; - struct ath_common *common = ath9k_hw_common(ah); bool ret = true; ieee80211_stop_queues(sc->hw); sc->hw_busy_count = 0; - del_timer_sync(&common->ani.timer); + ath_stop_ani(sc); del_timer_sync(&sc->rx_poll_timer); ath9k_debug_samp_bb_mac(sc); @@ -271,6 +216,7 @@ static bool ath_complete_reset(struct ath_softc *sc, bool start) { struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); + unsigned long flags; if (ath_startrecv(sc) != 0) { ath_err(common, "Unable to restart recv logic\n"); @@ -279,36 +225,30 @@ static bool ath_complete_reset(struct ath_softc *sc, bool start) ath9k_cmn_update_txpow(ah, sc->curtxpow, sc->config.txpowlimit, &sc->curtxpow); + + clear_bit(SC_OP_HW_RESET, &sc->sc_flags); ath9k_hw_set_interrupts(ah); ath9k_hw_enable_interrupts(ah); - if (!(sc->sc_flags & (SC_OP_OFFCHANNEL)) && start) { - if (sc->sc_flags & SC_OP_BEACONS) - ath_set_beacon(sc); - 
- ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work, 0); - ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work, HZ/2); - ath_start_rx_poll(sc, 3); - if (!common->disable_ani) - ath_start_ani(common); - } - - if ((ah->caps.hw_caps & ATH9K_HW_CAP_ANT_DIV_COMB) && sc->ant_rx != 3) { - struct ath_hw_antcomb_conf div_ant_conf; - u8 lna_conf; - - ath9k_hw_antdiv_comb_conf_get(ah, &div_ant_conf); + if (!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL) && start) { + if (!test_bit(SC_OP_BEACONS, &sc->sc_flags)) + goto work; - if (sc->ant_rx == 1) - lna_conf = ATH_ANT_DIV_COMB_LNA1; - else - lna_conf = ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.main_lna_conf = lna_conf; - div_ant_conf.alt_lna_conf = lna_conf; + ath9k_set_beacon(sc); - ath9k_hw_antdiv_comb_conf_set(ah, &div_ant_conf); + if (ah->opmode == NL80211_IFTYPE_STATION && + test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) { + spin_lock_irqsave(&sc->sc_pm_lock, flags); + sc->ps_flags |= PS_BEACON_SYNC | PS_WAIT_FOR_BEACON; + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); + } + work: + ath_restart_work(sc); } + if ((ah->caps.hw_caps & ATH9K_HW_CAP_ANT_DIV_COMB) && sc->ant_rx != 3) + ath_ant_comb_update(sc); + ieee80211_wake_queues(sc->hw); return true; @@ -328,7 +268,7 @@ static int ath_reset_internal(struct ath_softc *sc, struct ath9k_channel *hchan, spin_lock_bh(&sc->sc_pcu_lock); - if (!(sc->sc_flags & SC_OP_OFFCHANNEL)) { + if (!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)) { fastcc = false; caldata = &sc->caldata; } @@ -371,7 +311,7 @@ static int ath_set_channel(struct ath_softc *sc, struct ieee80211_hw *hw, { int r; - if (sc->sc_flags & SC_OP_INVALID) + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) return -EIO; r = ath_reset_internal(sc, hchan, false); @@ -379,262 +319,11 @@ static int ath_set_channel(struct ath_softc *sc, struct ieee80211_hw *hw, return r; } -static void ath_paprd_activate(struct ath_softc *sc) -{ - struct ath_hw *ah = sc->sc_ah; - struct ath9k_hw_cal_data *caldata = ah->caldata; - int chain; - - if (!caldata || !caldata->paprd_done) - return; - - ath9k_ps_wakeup(sc); - ar9003_paprd_enable(ah, false); - for (chain = 0; chain < AR9300_MAX_CHAINS; chain++) { - if (!(ah->txchainmask & BIT(chain))) - continue; - - ar9003_paprd_populate_single_table(ah, caldata, chain); - } - - ar9003_paprd_enable(ah, true); - ath9k_ps_restore(sc); -} - -static bool ath_paprd_send_frame(struct ath_softc *sc, struct sk_buff *skb, int chain) -{ - struct ieee80211_hw *hw = sc->hw; - struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); - struct ath_hw *ah = sc->sc_ah; - struct ath_common *common = ath9k_hw_common(ah); - struct ath_tx_control txctl; - int time_left; - - memset(&txctl, 0, sizeof(txctl)); - txctl.txq = sc->tx.txq_map[WME_AC_BE]; - - memset(tx_info, 0, sizeof(*tx_info)); - tx_info->band = hw->conf.channel->band; - tx_info->flags |= IEEE80211_TX_CTL_NO_ACK; - tx_info->control.rates[0].idx = 0; - tx_info->control.rates[0].count = 1; - tx_info->control.rates[0].flags = IEEE80211_TX_RC_MCS; - tx_info->control.rates[1].idx = -1; - - init_completion(&sc->paprd_complete); - txctl.paprd = BIT(chain); - - if (ath_tx_start(hw, skb, &txctl) != 0) { - ath_dbg(common, CALIBRATE, "PAPRD TX failed\n"); - dev_kfree_skb_any(skb); - return false; - } - - time_left = wait_for_completion_timeout(&sc->paprd_complete, - msecs_to_jiffies(ATH_PAPRD_TIMEOUT)); - - if (!time_left) - ath_dbg(common, CALIBRATE, - "Timeout waiting for paprd training on TX chain %d\n", - chain); - - return !!time_left; -} - -void ath_paprd_calibrate(struct work_struct 
*work) -{ - struct ath_softc *sc = container_of(work, struct ath_softc, paprd_work); - struct ieee80211_hw *hw = sc->hw; - struct ath_hw *ah = sc->sc_ah; - struct ieee80211_hdr *hdr; - struct sk_buff *skb = NULL; - struct ath9k_hw_cal_data *caldata = ah->caldata; - struct ath_common *common = ath9k_hw_common(ah); - int ftype; - int chain_ok = 0; - int chain; - int len = 1800; - - if (!caldata) - return; - - ath9k_ps_wakeup(sc); - - if (ar9003_paprd_init_table(ah) < 0) - goto fail_paprd; - - skb = alloc_skb(len, GFP_KERNEL); - if (!skb) - goto fail_paprd; - - skb_put(skb, len); - memset(skb->data, 0, len); - hdr = (struct ieee80211_hdr *)skb->data; - ftype = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_NULLFUNC; - hdr->frame_control = cpu_to_le16(ftype); - hdr->duration_id = cpu_to_le16(10); - memcpy(hdr->addr1, hw->wiphy->perm_addr, ETH_ALEN); - memcpy(hdr->addr2, hw->wiphy->perm_addr, ETH_ALEN); - memcpy(hdr->addr3, hw->wiphy->perm_addr, ETH_ALEN); - - for (chain = 0; chain < AR9300_MAX_CHAINS; chain++) { - if (!(ah->txchainmask & BIT(chain))) - continue; - - chain_ok = 0; - - ath_dbg(common, CALIBRATE, - "Sending PAPRD frame for thermal measurement on chain %d\n", - chain); - if (!ath_paprd_send_frame(sc, skb, chain)) - goto fail_paprd; - - ar9003_paprd_setup_gain_table(ah, chain); - - ath_dbg(common, CALIBRATE, - "Sending PAPRD training frame on chain %d\n", chain); - if (!ath_paprd_send_frame(sc, skb, chain)) - goto fail_paprd; - - if (!ar9003_paprd_is_done(ah)) { - ath_dbg(common, CALIBRATE, - "PAPRD not yet done on chain %d\n", chain); - break; - } - - if (ar9003_paprd_create_curve(ah, caldata, chain)) { - ath_dbg(common, CALIBRATE, - "PAPRD create curve failed on chain %d\n", - chain); - break; - } - - chain_ok = 1; - } - kfree_skb(skb); - - if (chain_ok) { - caldata->paprd_done = true; - ath_paprd_activate(sc); - } - -fail_paprd: - ath9k_ps_restore(sc); -} - -/* - * This routine performs the periodic noise floor calibration function - * that is used to adjust and optimize the chip performance. This - * takes environmental changes (location, temperature) into account. - * When the task is complete, it reschedules itself depending on the - * appropriate interval that was calculated. - */ -void ath_ani_calibrate(unsigned long data) -{ - struct ath_softc *sc = (struct ath_softc *)data; - struct ath_hw *ah = sc->sc_ah; - struct ath_common *common = ath9k_hw_common(ah); - bool longcal = false; - bool shortcal = false; - bool aniflag = false; - unsigned int timestamp = jiffies_to_msecs(jiffies); - u32 cal_interval, short_cal_interval, long_cal_interval; - unsigned long flags; - - if (ah->caldata && ah->caldata->nfcal_interference) - long_cal_interval = ATH_LONG_CALINTERVAL_INT; - else - long_cal_interval = ATH_LONG_CALINTERVAL; - - short_cal_interval = (ah->opmode == NL80211_IFTYPE_AP) ? - ATH_AP_SHORT_CALINTERVAL : ATH_STA_SHORT_CALINTERVAL; - - /* Only calibrate if awake */ - if (sc->sc_ah->power_mode != ATH9K_PM_AWAKE) - goto set_timer; - - ath9k_ps_wakeup(sc); - - /* Long calibration runs independently of short calibration. 
*/ - if ((timestamp - common->ani.longcal_timer) >= long_cal_interval) { - longcal = true; - common->ani.longcal_timer = timestamp; - } - - /* Short calibration applies only while caldone is false */ - if (!common->ani.caldone) { - if ((timestamp - common->ani.shortcal_timer) >= short_cal_interval) { - shortcal = true; - common->ani.shortcal_timer = timestamp; - common->ani.resetcal_timer = timestamp; - } - } else { - if ((timestamp - common->ani.resetcal_timer) >= - ATH_RESTART_CALINTERVAL) { - common->ani.caldone = ath9k_hw_reset_calvalid(ah); - if (common->ani.caldone) - common->ani.resetcal_timer = timestamp; - } - } - - /* Verify whether we must check ANI */ - if (sc->sc_ah->config.enable_ani - && (timestamp - common->ani.checkani_timer) >= - ah->config.ani_poll_interval) { - aniflag = true; - common->ani.checkani_timer = timestamp; - } - - /* Call ANI routine if necessary */ - if (aniflag) { - spin_lock_irqsave(&common->cc_lock, flags); - ath9k_hw_ani_monitor(ah, ah->curchan); - ath_update_survey_stats(sc); - spin_unlock_irqrestore(&common->cc_lock, flags); - } - - /* Perform calibration if necessary */ - if (longcal || shortcal) { - common->ani.caldone = - ath9k_hw_calibrate(ah, ah->curchan, - ah->rxchainmask, longcal); - } - - ath_dbg(common, ANI, - "Calibration @%lu finished: %s %s %s, caldone: %s\n", - jiffies, - longcal ? "long" : "", shortcal ? "short" : "", - aniflag ? "ani" : "", common->ani.caldone ? "true" : "false"); - - ath9k_ps_restore(sc); - -set_timer: - /* - * Set timer interval based on previous results. - * The interval must be the shortest necessary to satisfy ANI, - * short calibration and long calibration. - */ - ath9k_debug_samp_bb_mac(sc); - cal_interval = ATH_LONG_CALINTERVAL; - if (sc->sc_ah->config.enable_ani) - cal_interval = min(cal_interval, - (u32)ah->config.ani_poll_interval); - if (!common->ani.caldone) - cal_interval = min(cal_interval, (u32)short_cal_interval); - - mod_timer(&common->ani.timer, jiffies + msecs_to_jiffies(cal_interval)); - if ((sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_PAPRD) && ah->caldata) { - if (!ah->caldata->paprd_done) - ieee80211_queue_work(sc->hw, &sc->paprd_work); - else if (!ah->paprd_table_write_done) - ath_paprd_activate(sc); - } -} - static void ath_node_attach(struct ath_softc *sc, struct ieee80211_sta *sta, struct ieee80211_vif *vif) { struct ath_node *an; + u8 density; an = (struct ath_node *)sta->drv_priv; #ifdef CONFIG_ATH9K_DEBUGFS @@ -649,7 +338,8 @@ static void ath_node_attach(struct ath_softc *sc, struct ieee80211_sta *sta, ath_tx_node_init(sc, an); an->maxampdu = 1 << (IEEE80211_HT_MAX_AMPDU_FACTOR + sta->ht_cap.ampdu_factor); - an->mpdudensity = parse_mpdudensity(sta->ht_cap.ampdu_density); + density = ath9k_parse_mpdudensity(sta->ht_cap.ampdu_density); + an->mpdudensity = density; } } @@ -668,13 +358,13 @@ static void ath_node_detach(struct ath_softc *sc, struct ieee80211_sta *sta) ath_tx_node_cleanup(sc, an); } - void ath9k_tasklet(unsigned long data) { struct ath_softc *sc = (struct ath_softc *)data; struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); - + enum ath_reset_type type; + unsigned long flags; u32 status = sc->intrstatus; u32 rxmask; @@ -683,20 +373,17 @@ void ath9k_tasklet(unsigned long data) if ((status & ATH9K_INT_FATAL) || (status & ATH9K_INT_BB_WATCHDOG)) { -#ifdef CONFIG_ATH9K_DEBUGFS - enum ath_reset_type type; if (status & ATH9K_INT_FATAL) type = RESET_TYPE_FATAL_INT; else type = RESET_TYPE_BB_WATCHDOG; - RESET_STAT_INC(sc, type); -#endif - 
ieee80211_queue_work(sc->hw, &sc->hw_reset_work); + ath9k_queue_reset(sc, type); goto out; } + spin_lock_irqsave(&sc->sc_pm_lock, flags); if ((status & ATH9K_INT_TSFOOR) && sc->ps_enabled) { /* * TSF sync does not look correct; remain awake to sync with @@ -705,6 +392,7 @@ void ath9k_tasklet(unsigned long data) ath_dbg(common, PS, "TSFOOR - Sync with next Beacon\n"); sc->ps_flags |= PS_WAIT_FOR_BEACON | PS_BEACON_SYNC; } + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); if (ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) rxmask = (ATH9K_INT_RXHP | ATH9K_INT_RXLP | ATH9K_INT_RXEOL | @@ -766,15 +454,17 @@ irqreturn_t ath_isr(int irq, void *dev) * touch anything. Note this can happen early * on if the IRQ is shared. */ - if (sc->sc_flags & SC_OP_INVALID) + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) return IRQ_NONE; - /* shared irq, not for us */ if (!ath9k_hw_intrpend(ah)) return IRQ_NONE; + if(test_bit(SC_OP_HW_RESET, &sc->sc_flags)) + return IRQ_HANDLED; + /* * Figure out the reason(s) for the interrupt. Note * that the hal returns a pseudo-ISR that may include @@ -797,6 +487,17 @@ irqreturn_t ath_isr(int irq, void *dev) if (status & SCHED_INTR) sched = true; +#ifdef CONFIG_PM_SLEEP + if (status & ATH9K_INT_BMISS) { + if (atomic_read(&sc->wow_sleep_proc_intr) == 0) { + ath_dbg(common, ANY, "during WoW we got a BMISS\n"); + atomic_inc(&sc->wow_got_bmiss_intr); + atomic_dec(&sc->wow_sleep_proc_intr); + } + ath_dbg(common, INTERRUPT, "beacon miss interrupt\n"); + } +#endif + /* * If a FATAL or RXORN interrupt is received, we have to reset the * chip immediately. @@ -827,24 +528,6 @@ irqreturn_t ath_isr(int irq, void *dev) ath9k_hw_set_interrupts(ah); } - if (status & ATH9K_INT_MIB) { - /* - * Disable interrupts until we service the MIB - * interrupt; otherwise it will continue to - * fire. - */ - ath9k_hw_disable_interrupts(ah); - /* - * Let the hal handle the event. We assume - * it will clear whatever condition caused - * the interrupt. - */ - spin_lock(&common->cc_lock); - ath9k_hw_proc_mib_event(ah); - spin_unlock(&common->cc_lock); - ath9k_hw_enable_interrupts(ah); - } - if (!(ah->caps.hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) if (status & ATH9K_INT_TIM_TIMER) { if (ATH_DBG_WARN_ON_ONCE(sc->ps_idle)) @@ -852,8 +535,10 @@ irqreturn_t ath_isr(int irq, void *dev) /* Clear RxAbort bit so that we can * receive frames */ ath9k_setpower(sc, ATH9K_PM_AWAKE); + spin_lock(&sc->sc_pm_lock); ath9k_hw_setrxabort(sc->sc_ah, 0); sc->ps_flags |= PS_WAIT_FOR_BEACON; + spin_unlock(&sc->sc_pm_lock); } chip_reset: @@ -895,101 +580,20 @@ static int ath_reset(struct ath_softc *sc, bool retry_tx) return r; } -void ath_reset_work(struct work_struct *work) +void ath9k_queue_reset(struct ath_softc *sc, enum ath_reset_type type) { - struct ath_softc *sc = container_of(work, struct ath_softc, hw_reset_work); - - ath_reset(sc, true); -} - -void ath_hw_check(struct work_struct *work) -{ - struct ath_softc *sc = container_of(work, struct ath_softc, hw_check_work); - struct ath_common *common = ath9k_hw_common(sc->sc_ah); - unsigned long flags; - int busy; - u8 is_alive, nbeacon = 1; - - ath9k_ps_wakeup(sc); - is_alive = ath9k_hw_check_alive(sc->sc_ah); - - if (is_alive && !AR_SREV_9300(sc->sc_ah)) - goto out; - else if (!is_alive && AR_SREV_9300(sc->sc_ah)) { - ath_dbg(common, RESET, - "DCU stuck is detected. 
Schedule chip reset\n"); - RESET_STAT_INC(sc, RESET_TYPE_MAC_HANG); - goto sched_reset; - } - - spin_lock_irqsave(&common->cc_lock, flags); - busy = ath_update_survey_stats(sc); - spin_unlock_irqrestore(&common->cc_lock, flags); - - ath_dbg(common, RESET, "Possible baseband hang, busy=%d (try %d)\n", - busy, sc->hw_busy_count + 1); - if (busy >= 99) { - if (++sc->hw_busy_count >= 3) { - RESET_STAT_INC(sc, RESET_TYPE_BB_HANG); - goto sched_reset; - } - } else if (busy >= 0) { - sc->hw_busy_count = 0; - nbeacon = 3; - } - - ath_start_rx_poll(sc, nbeacon); - goto out; - -sched_reset: +#ifdef CONFIG_ATH9K_DEBUGFS + RESET_STAT_INC(sc, type); +#endif + set_bit(SC_OP_HW_RESET, &sc->sc_flags); ieee80211_queue_work(sc->hw, &sc->hw_reset_work); -out: - ath9k_ps_restore(sc); -} - -static void ath_hw_pll_rx_hang_check(struct ath_softc *sc, u32 pll_sqsum) -{ - static int count; - struct ath_common *common = ath9k_hw_common(sc->sc_ah); - - if (pll_sqsum >= 0x40000) { - count++; - if (count == 3) { - /* Rx is hung for more than 500ms. Reset it */ - ath_dbg(common, RESET, "Possible RX hang, resetting\n"); - RESET_STAT_INC(sc, RESET_TYPE_PLL_HANG); - ieee80211_queue_work(sc->hw, &sc->hw_reset_work); - count = 0; - } - } else - count = 0; } -void ath_hw_pll_work(struct work_struct *work) +void ath_reset_work(struct work_struct *work) { - struct ath_softc *sc = container_of(work, struct ath_softc, - hw_pll_work.work); - u32 pll_sqsum; - - /* - * ensure that the PLL WAR is executed only - * after the STA is associated (or) if the - * beaconing had started in interfaces that - * uses beacons. - */ - if (!(sc->sc_flags & SC_OP_BEACONS)) - return; - - if (AR_SREV_9485(sc->sc_ah)) { - - ath9k_ps_wakeup(sc); - pll_sqsum = ar9003_get_pll_sqsum_dvc(sc->sc_ah); - ath9k_ps_restore(sc); - - ath_hw_pll_rx_hang_check(sc, pll_sqsum); + struct ath_softc *sc = container_of(work, struct ath_softc, hw_reset_work); - ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work, HZ/5); - } + ath_reset(sc, true); } /**********************/ @@ -1054,10 +658,9 @@ static int ath9k_start(struct ieee80211_hw *hw) if (ah->caps.hw_caps & ATH9K_HW_CAP_HT) ah->imask |= ATH9K_INT_CST; - if (ah->caps.hw_caps & ATH9K_HW_CAP_MCI) - ah->imask |= ATH9K_INT_MCI; + ath_mci_enable(sc); - sc->sc_flags &= ~SC_OP_INVALID; + clear_bit(SC_OP_INVALID, &sc->sc_flags); sc->sc_ah->is_monitoring = false; if (!ath_complete_reset(sc, false)) { @@ -1080,8 +683,6 @@ static int ath9k_start(struct ieee80211_hw *hw) spin_unlock_bh(&sc->sc_pcu_lock); - ath9k_start_btcoex(sc); - if (ah->caps.pcie_lcr_extsync_en && common->bus_ops->extn_synch_en) common->bus_ops->extn_synch_en(common); @@ -1099,6 +700,7 @@ static void ath9k_tx(struct ieee80211_hw *hw, struct sk_buff *skb) struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ath_tx_control txctl; struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; + unsigned long flags; if (sc->ps_enabled) { /* @@ -1121,6 +723,7 @@ static void ath9k_tx(struct ieee80211_hw *hw, struct sk_buff *skb) * completed and if needed, also for RX of buffered frames. */ ath9k_ps_wakeup(sc); + spin_lock_irqsave(&sc->sc_pm_lock, flags); if (!(sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) ath9k_hw_setrxabort(sc->sc_ah, 0); if (ieee80211_is_pspoll(hdr->frame_control)) { @@ -1136,6 +739,7 @@ static void ath9k_tx(struct ieee80211_hw *hw, struct sk_buff *skb) * the ps_flags bit is cleared. We are just dropping * the ps_usecount here. 
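The hunks above (the tasklet TSFOOR handling, the TIM_TIMER path in ath_isr and this ath9k_tx wake-up path) all start taking sc->sc_pm_lock around updates of sc->ps_flags, which were previously unlocked. A minimal sketch of that pattern, assuming only that ps_flags is shared between process context, the tasklet and the hard IRQ handler; the function name below is made up for illustration:

    /* Sketch of the locking this commit adds around sc->ps_flags. */
    static void example_request_beacon_sync(struct ath_softc *sc)
    {
            unsigned long flags;

            /* process or tasklet context: use the IRQ-safe variant */
            spin_lock_irqsave(&sc->sc_pm_lock, flags);
            sc->ps_flags |= PS_BEACON_SYNC | PS_WAIT_FOR_BEACON;
            spin_unlock_irqrestore(&sc->sc_pm_lock, flags);
    }

    /*
     * In hard-IRQ context (ath_isr) the plain spin_lock()/spin_unlock()
     * pair is enough, since interrupts are already disabled there,
     * which is exactly what the TIM_TIMER hunk does.
     */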
*/ + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); ath9k_ps_restore(sc); } @@ -1176,7 +780,7 @@ static void ath9k_stop(struct ieee80211_hw *hw) ath_cancel_work(sc); del_timer_sync(&sc->rx_poll_timer); - if (sc->sc_flags & SC_OP_INVALID) { + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) { ath_dbg(common, ANY, "Device not present\n"); mutex_unlock(&sc->mutex); return; @@ -1185,8 +789,6 @@ static void ath9k_stop(struct ieee80211_hw *hw) /* Ensure HW is awake when we try to shut it down. */ ath9k_ps_wakeup(sc); - ath9k_stop_btcoex(sc); - spin_lock_bh(&sc->sc_pcu_lock); /* prevent tasklets to enable interrupts once we disable them */ @@ -1233,7 +835,7 @@ static void ath9k_stop(struct ieee80211_hw *hw) ath9k_ps_restore(sc); - sc->sc_flags |= SC_OP_INVALID; + set_bit(SC_OP_INVALID, &sc->sc_flags); sc->ps_idle = prev_idle; mutex_unlock(&sc->mutex); @@ -1253,16 +855,6 @@ bool ath9k_uses_beacons(int type) } } -static void ath9k_reclaim_beacon(struct ath_softc *sc, - struct ieee80211_vif *vif) -{ - struct ath_vif *avp = (void *)vif->drv_priv; - - ath9k_set_beaconing_status(sc, false); - ath_beacon_return(sc, avp); - ath9k_set_beaconing_status(sc, true); -} - static void ath9k_vif_iter(void *data, u8 *mac, struct ieee80211_vif *vif) { struct ath9k_vif_iter_data *iter_data = data; @@ -1294,6 +886,18 @@ static void ath9k_vif_iter(void *data, u8 *mac, struct ieee80211_vif *vif) } } +static void ath9k_sta_vif_iter(void *data, u8 *mac, struct ieee80211_vif *vif) +{ + struct ath_softc *sc = data; + struct ath_vif *avp = (void *)vif->drv_priv; + + if (vif->type != NL80211_IFTYPE_STATION) + return; + + if (avp->primary_sta_vif) + ath9k_set_assoc_state(sc, vif); +} + /* Called with sc->mutex held. */ void ath9k_calculate_iter_data(struct ieee80211_hw *hw, struct ieee80211_vif *vif, @@ -1327,21 +931,18 @@ static void ath9k_calculate_summary_state(struct ieee80211_hw *hw, struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); struct ath9k_vif_iter_data iter_data; + enum nl80211_iftype old_opmode = ah->opmode; ath9k_calculate_iter_data(hw, vif, &iter_data); - /* Set BSSID mask. */ memcpy(common->bssidmask, iter_data.mask, ETH_ALEN); ath_hw_setbssidmask(common); - /* Set op-mode & TSF */ if (iter_data.naps > 0) { - ath9k_hw_set_tsfadjust(ah, 1); - sc->sc_flags |= SC_OP_TSF_RESET; + ath9k_hw_set_tsfadjust(ah, true); ah->opmode = NL80211_IFTYPE_AP; } else { - ath9k_hw_set_tsfadjust(ah, 0); - sc->sc_flags &= ~SC_OP_TSF_RESET; + ath9k_hw_set_tsfadjust(ah, false); if (iter_data.nmeshes) ah->opmode = NL80211_IFTYPE_MESH_POINT; @@ -1353,70 +954,27 @@ static void ath9k_calculate_summary_state(struct ieee80211_hw *hw, ah->opmode = NL80211_IFTYPE_STATION; } - /* - * Enable MIB interrupts when there are hardware phy counters. - */ - if ((iter_data.nstations + iter_data.nadhocs + iter_data.nmeshes) > 0) { - if (ah->config.enable_ani) - ah->imask |= ATH9K_INT_MIB; + ath9k_hw_setopmode(ah); + + if ((iter_data.nstations + iter_data.nadhocs + iter_data.nmeshes) > 0) ah->imask |= ATH9K_INT_TSFOOR; - } else { - ah->imask &= ~ATH9K_INT_MIB; + else ah->imask &= ~ATH9K_INT_TSFOOR; - } ath9k_hw_set_interrupts(ah); - /* Set up ANI */ - if (iter_data.naps > 0) { - sc->sc_ah->stats.avgbrssi = ATH_RSSI_DUMMY_MARKER; - - if (!common->disable_ani) { - sc->sc_flags |= SC_OP_ANI_RUN; - ath_start_ani(common); - } - - } else { - sc->sc_flags &= ~SC_OP_ANI_RUN; - del_timer_sync(&common->ani.timer); - } -} - -/* Called with sc->mutex held, vif counts set up properly. 
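Throughout this file the commit also converts sc->sc_flags from open-coded mask arithmetic to the kernel's atomic bitops, which implies the SC_OP_* values are now bit numbers on an unsigned long rather than masks. A small sketch of the conversion; ath9k_has_device() is not a real driver function, it just frames the idiom:

    /* old style:  sc->sc_flags |= SC_OP_INVALID;
     *             if (sc->sc_flags & SC_OP_INVALID) ...
     * new style:  SC_OP_* are bit indices, accesses are atomic, so the
     *             flag word needs no extra lock between ISR, tasklet
     *             and process context. */
    static bool ath9k_has_device(struct ath_softc *sc)
    {
            return !test_bit(SC_OP_INVALID, &sc->sc_flags);
    }

    /* probe marks the device invalid until ath9k_start() clears it:
     *     set_bit(SC_OP_INVALID, &sc->sc_flags);    (ath_pci_probe)
     *     clear_bit(SC_OP_INVALID, &sc->sc_flags);  (ath9k_start)
     */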
*/ -static void ath9k_do_vif_add_setup(struct ieee80211_hw *hw, - struct ieee80211_vif *vif) -{ - struct ath_softc *sc = hw->priv; - - ath9k_calculate_summary_state(hw, vif); - - if (ath9k_uses_beacons(vif->type)) { - /* Reserve a beacon slot for the vif */ - ath9k_set_beaconing_status(sc, false); - ath_beacon_alloc(sc, vif); - ath9k_set_beaconing_status(sc, true); + /* + * If we are changing the opmode to STATION, + * a beacon sync needs to be done. + */ + if (ah->opmode == NL80211_IFTYPE_STATION && + old_opmode == NL80211_IFTYPE_AP && + test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) { + ieee80211_iterate_active_interfaces_atomic(sc->hw, + ath9k_sta_vif_iter, sc); } } -void ath_start_rx_poll(struct ath_softc *sc, u8 nbeacon) -{ - if (!AR_SREV_9300(sc->sc_ah)) - return; - - if (!(sc->sc_flags & SC_OP_PRIM_STA_VIF)) - return; - - mod_timer(&sc->rx_poll_timer, jiffies + msecs_to_jiffies - (nbeacon * sc->cur_beacon_conf.beacon_interval)); -} - -void ath_rx_poll(unsigned long data) -{ - struct ath_softc *sc = (struct ath_softc *)data; - - ieee80211_queue_work(sc->hw, &sc->hw_check_work); -} - static int ath9k_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { @@ -1456,7 +1014,10 @@ static int ath9k_add_interface(struct ieee80211_hw *hw, sc->nvifs++; - ath9k_do_vif_add_setup(hw, vif); + ath9k_calculate_summary_state(hw, vif); + if (ath9k_uses_beacons(vif->type)) + ath9k_beacon_assign_slot(sc, vif); + out: mutex_unlock(&sc->mutex); ath9k_ps_restore(sc); @@ -1473,6 +1034,7 @@ static int ath9k_change_interface(struct ieee80211_hw *hw, int ret = 0; ath_dbg(common, CONFIG, "Change Interface\n"); + mutex_lock(&sc->mutex); ath9k_ps_wakeup(sc); @@ -1485,15 +1047,16 @@ static int ath9k_change_interface(struct ieee80211_hw *hw, } } - /* Clean up old vif stuff */ if (ath9k_uses_beacons(vif->type)) - ath9k_reclaim_beacon(sc, vif); + ath9k_beacon_remove_slot(sc, vif); - /* Add new settings */ vif->type = new_type; vif->p2p = p2p; - ath9k_do_vif_add_setup(hw, vif); + ath9k_calculate_summary_state(hw, vif); + if (ath9k_uses_beacons(vif->type)) + ath9k_beacon_assign_slot(sc, vif); + out: ath9k_ps_restore(sc); mutex_unlock(&sc->mutex); @@ -1513,9 +1076,8 @@ static void ath9k_remove_interface(struct ieee80211_hw *hw, sc->nvifs--; - /* Reclaim beacon resources */ if (ath9k_uses_beacons(vif->type)) - ath9k_reclaim_beacon(sc, vif); + ath9k_beacon_remove_slot(sc, vif); ath9k_calculate_summary_state(hw, NULL); @@ -1573,14 +1135,17 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed) if (changed & IEEE80211_CONF_CHANGE_IDLE) { sc->ps_idle = !!(conf->flags & IEEE80211_CONF_IDLE); - if (sc->ps_idle) + if (sc->ps_idle) { ath_cancel_work(sc); - else + ath9k_stop_btcoex(sc); + } else { + ath9k_start_btcoex(sc); /* * The chip needs a reset to properly wake up from * full sleep */ reset_channel = ah->chip_fullsleep; + } } /* @@ -1618,11 +1183,6 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed) if (ah->curchan) old_pos = ah->curchan - &ah->channels[0]; - if (hw->conf.flags & IEEE80211_CONF_OFFCHANNEL) - sc->sc_flags |= SC_OP_OFFCHANNEL; - else - sc->sc_flags &= ~SC_OP_OFFCHANNEL; - ath_dbg(common, CONFIG, "Set channel: %d MHz type: %d\n", curchan->center_freq, conf->channel_type); @@ -1664,6 +1224,7 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed) if (ath_set_channel(sc, hw, &sc->sc_ah->channels[pos]) < 0) { ath_err(common, "Unable to set channel\n"); mutex_unlock(&sc->mutex); + ath9k_ps_restore(sc); return -EINVAL; } @@ -1813,21 +1374,18 @@ static int 
ath9k_conf_tx(struct ieee80211_hw *hw, qi.tqi_aifs = params->aifs; qi.tqi_cwmin = params->cw_min; qi.tqi_cwmax = params->cw_max; - qi.tqi_burstTime = params->txop; + qi.tqi_burstTime = params->txop * 32; ath_dbg(common, CONFIG, "Configure tx [queue/halq] [%d/%d], aifs: %d, cw_min: %d, cw_max: %d, txop: %d\n", queue, txq->axq_qnum, params->aifs, params->cw_min, params->cw_max, params->txop); + ath_update_max_aggr_framelen(sc, queue, qi.tqi_burstTime); ret = ath_txq_update(sc, txq->axq_qnum, &qi); if (ret) ath_err(common, "TXQ Update failed\n"); - if (sc->sc_ah->opmode == NL80211_IFTYPE_ADHOC) - if (queue == WME_AC_BE && !ret) - ath_beaconq_config(sc); - mutex_unlock(&sc->mutex); ath9k_ps_restore(sc); @@ -1896,83 +1454,53 @@ static int ath9k_set_key(struct ieee80211_hw *hw, return ret; } -static void ath9k_bss_iter(void *data, u8 *mac, struct ieee80211_vif *vif) + +static void ath9k_set_assoc_state(struct ath_softc *sc, + struct ieee80211_vif *vif) { - struct ath_softc *sc = data; struct ath_common *common = ath9k_hw_common(sc->sc_ah); - struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; struct ath_vif *avp = (void *)vif->drv_priv; + struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + unsigned long flags; + + set_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags); + avp->primary_sta_vif = true; /* - * Skip iteration if primary station vif's bss info - * was not changed + * Set the AID, BSSID and do beacon-sync only when + * the HW opmode is STATION. + * + * But the primary bit is set above in any case. */ - if (sc->sc_flags & SC_OP_PRIM_STA_VIF) + if (sc->sc_ah->opmode != NL80211_IFTYPE_STATION) return; - if (bss_conf->assoc) { - sc->sc_flags |= SC_OP_PRIM_STA_VIF; - avp->primary_sta_vif = true; - memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN); - common->curaid = bss_conf->aid; - ath9k_hw_write_associd(sc->sc_ah); - ath_dbg(common, CONFIG, "Bss Info ASSOC %d, bssid: %pM\n", - bss_conf->aid, common->curbssid); - ath_beacon_config(sc, vif); - /* - * Request a re-configuration of Beacon related timers - * on the receipt of the first Beacon frame (i.e., - * after time sync with the AP). 
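In the ath9k_conf_tx() hunk above, tqi_burstTime now stores params->txop * 32 and is also fed to ath_update_max_aggr_framelen(). mac80211 reports the TXOP limit in units of 32 microseconds, so the multiplication converts it to plain microseconds before it is used to bound aggregate airtime. A standalone illustration of the arithmetic; txop_to_usec() is a made-up helper, not driver code:

    #include <stdint.h>
    #include <stdio.h>

    /* Convert a mac80211 TXOP value (units of 32 us) to microseconds. */
    static uint32_t txop_to_usec(uint16_t txop)
    {
            return (uint32_t)txop * 32;
    }

    int main(void)
    {
            /* e.g. the WMM default TXOP for AC_VI is 94 -> 3008 us */
            printf("txop=94 -> %u us\n", txop_to_usec(94));
            return 0;
    }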
- */ - sc->ps_flags |= PS_BEACON_SYNC | PS_WAIT_FOR_BEACON; - /* Reset rssi stats */ - sc->last_rssi = ATH_RSSI_DUMMY_MARKER; - sc->sc_ah->stats.avgbrssi = ATH_RSSI_DUMMY_MARKER; + memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN); + common->curaid = bss_conf->aid; + ath9k_hw_write_associd(sc->sc_ah); - ath_start_rx_poll(sc, 3); + sc->last_rssi = ATH_RSSI_DUMMY_MARKER; + sc->sc_ah->stats.avgbrssi = ATH_RSSI_DUMMY_MARKER; - if (!common->disable_ani) { - sc->sc_flags |= SC_OP_ANI_RUN; - ath_start_ani(common); - } + spin_lock_irqsave(&sc->sc_pm_lock, flags); + sc->ps_flags |= PS_BEACON_SYNC | PS_WAIT_FOR_BEACON; + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); - } + ath_dbg(common, CONFIG, + "Primary Station interface: %pM, BSSID: %pM\n", + vif->addr, common->curbssid); } -static void ath9k_config_bss(struct ath_softc *sc, struct ieee80211_vif *vif) +static void ath9k_bss_assoc_iter(void *data, u8 *mac, struct ieee80211_vif *vif) { - struct ath_common *common = ath9k_hw_common(sc->sc_ah); + struct ath_softc *sc = data; struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; - struct ath_vif *avp = (void *)vif->drv_priv; - if (sc->sc_ah->opmode != NL80211_IFTYPE_STATION) + if (test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) return; - /* Reconfigure bss info */ - if (avp->primary_sta_vif && !bss_conf->assoc) { - ath_dbg(common, CONFIG, "Bss Info DISASSOC %d, bssid %pM\n", - common->curaid, common->curbssid); - sc->sc_flags &= ~(SC_OP_PRIM_STA_VIF | SC_OP_BEACONS); - avp->primary_sta_vif = false; - memset(common->curbssid, 0, ETH_ALEN); - common->curaid = 0; - } - - ieee80211_iterate_active_interfaces_atomic( - sc->hw, ath9k_bss_iter, sc); - - /* - * None of station vifs are associated. - * Clear bssid & aid - */ - if (!(sc->sc_flags & SC_OP_PRIM_STA_VIF)) { - ath9k_hw_write_associd(sc->sc_ah); - /* Stop ANI */ - sc->sc_flags &= ~SC_OP_ANI_RUN; - del_timer_sync(&common->ani.timer); - del_timer_sync(&sc->rx_poll_timer); - memset(&sc->caldata, 0, sizeof(sc->caldata)); - } + if (bss_conf->assoc) + ath9k_set_assoc_state(sc, vif); } static void ath9k_bss_info_changed(struct ieee80211_hw *hw, @@ -1980,6 +1508,11 @@ static void ath9k_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_bss_conf *bss_conf, u32 changed) { +#define CHECK_ANI \ + (BSS_CHANGED_ASSOC | \ + BSS_CHANGED_IBSS | \ + BSS_CHANGED_BEACON_ENABLED) + struct ath_softc *sc = hw->priv; struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); @@ -1990,53 +1523,41 @@ static void ath9k_bss_info_changed(struct ieee80211_hw *hw, mutex_lock(&sc->mutex); if (changed & BSS_CHANGED_ASSOC) { - ath9k_config_bss(sc, vif); + ath_dbg(common, CONFIG, "BSSID %pM Changed ASSOC %d\n", + bss_conf->bssid, bss_conf->assoc); + + if (avp->primary_sta_vif && !bss_conf->assoc) { + clear_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags); + avp->primary_sta_vif = false; - ath_dbg(common, CONFIG, "BSSID: %pM aid: 0x%x\n", - common->curbssid, common->curaid); + if (ah->opmode == NL80211_IFTYPE_STATION) + clear_bit(SC_OP_BEACONS, &sc->sc_flags); + } + + ieee80211_iterate_active_interfaces_atomic(sc->hw, + ath9k_bss_assoc_iter, sc); + + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags) && + ah->opmode == NL80211_IFTYPE_STATION) { + memset(common->curbssid, 0, ETH_ALEN); + common->curaid = 0; + ath9k_hw_write_associd(sc->sc_ah); + } } if (changed & BSS_CHANGED_IBSS) { - /* There can be only one vif available */ memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN); common->curaid = bss_conf->aid; ath9k_hw_write_associd(sc->sc_ah); - - if 
(bss_conf->ibss_joined) { - sc->sc_ah->stats.avgbrssi = ATH_RSSI_DUMMY_MARKER; - - if (!common->disable_ani) { - sc->sc_flags |= SC_OP_ANI_RUN; - ath_start_ani(common); - } - - } else { - sc->sc_flags &= ~SC_OP_ANI_RUN; - del_timer_sync(&common->ani.timer); - del_timer_sync(&sc->rx_poll_timer); - } } - /* - * In case of AP mode, the HW TSF has to be reset - * when the beacon interval changes. - */ - if ((changed & BSS_CHANGED_BEACON_INT) && - (vif->type == NL80211_IFTYPE_AP)) - sc->sc_flags |= SC_OP_TSF_RESET; - - /* Configure beaconing (AP, IBSS, MESH) */ - if (ath9k_uses_beacons(vif->type) && - ((changed & BSS_CHANGED_BEACON) || - (changed & BSS_CHANGED_BEACON_ENABLED) || - (changed & BSS_CHANGED_BEACON_INT))) { - ath9k_set_beaconing_status(sc, false); - if (bss_conf->enable_beacon) - ath_beacon_alloc(sc, vif); - else - avp->is_bslot_active = false; - ath_beacon_config(sc, vif); - ath9k_set_beaconing_status(sc, true); + if ((changed & BSS_CHANGED_BEACON_ENABLED) || + (changed & BSS_CHANGED_BEACON_INT)) { + if (ah->opmode == NL80211_IFTYPE_AP && + bss_conf->enable_beacon) + ath9k_set_tsfadjust(sc, vif); + if (ath9k_allow_beacon_config(sc, vif)) + ath9k_beacon_config(sc, vif, changed); } if (changed & BSS_CHANGED_ERP_SLOT) { @@ -2058,8 +1579,13 @@ static void ath9k_bss_info_changed(struct ieee80211_hw *hw, } } + if (changed & CHECK_ANI) + ath_check_ani(sc); + mutex_unlock(&sc->mutex); ath9k_ps_restore(sc); + +#undef CHECK_ANI } static u64 ath9k_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif) @@ -2215,7 +1741,7 @@ static void ath9k_flush(struct ieee80211_hw *hw, bool drop) return; } - if (sc->sc_flags & SC_OP_INVALID) { + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) { ath_dbg(common, ANY, "Device not present\n"); mutex_unlock(&sc->mutex); return; @@ -2288,10 +1814,11 @@ static int ath9k_tx_last_beacon(struct ieee80211_hw *hw) if (!vif) return 0; - avp = (void *)vif->drv_priv; - if (!avp->is_bslot_active) + if (!vif->bss_conf.enable_beacon) return 0; + avp = (void *)vif->drv_priv; + if (!sc->beacon.tx_processed && !edma) { tasklet_disable(&sc->bcon_tasklet); @@ -2345,12 +1872,29 @@ static u32 fill_chainmask(u32 cap, u32 new) return filled; } +static bool validate_antenna_mask(struct ath_hw *ah, u32 val) +{ + switch (val & 0x7) { + case 0x1: + case 0x3: + case 0x7: + return true; + case 0x2: + return (ah->caps.rx_chainmask == 1); + default: + return false; + } +} + static int ath9k_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant) { struct ath_softc *sc = hw->priv; struct ath_hw *ah = sc->sc_ah; - if (!rx_ant || !tx_ant) + if (ah->caps.rx_chainmask != 1) + rx_ant |= tx_ant; + + if (!validate_antenna_mask(ah, rx_ant) || !tx_ant) return -EINVAL; sc->ant_rx = rx_ant; @@ -2380,6 +1924,490 @@ static int ath9k_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant) return 0; } +#ifdef CONFIG_ATH9K_DEBUGFS + +/* Ethtool support for get-stats */ + +#define AMKSTR(nm) #nm "_BE", #nm "_BK", #nm "_VI", #nm "_VO" +static const char ath9k_gstrings_stats[][ETH_GSTRING_LEN] = { + "tx_pkts_nic", + "tx_bytes_nic", + "rx_pkts_nic", + "rx_bytes_nic", + AMKSTR(d_tx_pkts), + AMKSTR(d_tx_bytes), + AMKSTR(d_tx_mpdus_queued), + AMKSTR(d_tx_mpdus_completed), + AMKSTR(d_tx_mpdu_xretries), + AMKSTR(d_tx_aggregates), + AMKSTR(d_tx_ampdus_queued_hw), + AMKSTR(d_tx_ampdus_queued_sw), + AMKSTR(d_tx_ampdus_completed), + AMKSTR(d_tx_ampdu_retries), + AMKSTR(d_tx_ampdu_xretries), + AMKSTR(d_tx_fifo_underrun), + AMKSTR(d_tx_op_exceeded), + AMKSTR(d_tx_timer_expiry), + AMKSTR(d_tx_desc_cfg_err), + 
AMKSTR(d_tx_data_underrun), + AMKSTR(d_tx_delim_underrun), + + "d_rx_decrypt_crc_err", + "d_rx_phy_err", + "d_rx_mic_err", + "d_rx_pre_delim_crc_err", + "d_rx_post_delim_crc_err", + "d_rx_decrypt_busy_err", + + "d_rx_phyerr_radar", + "d_rx_phyerr_ofdm_timing", + "d_rx_phyerr_cck_timing", + +}; +#define ATH9K_SSTATS_LEN ARRAY_SIZE(ath9k_gstrings_stats) + +static void ath9k_get_et_strings(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + u32 sset, u8 *data) +{ + if (sset == ETH_SS_STATS) + memcpy(data, *ath9k_gstrings_stats, + sizeof(ath9k_gstrings_stats)); +} + +static int ath9k_get_et_sset_count(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, int sset) +{ + if (sset == ETH_SS_STATS) + return ATH9K_SSTATS_LEN; + return 0; +} + +#define PR_QNUM(_n) (sc->tx.txq_map[_n]->axq_qnum) +#define AWDATA(elem) \ + do { \ + data[i++] = sc->debug.stats.txstats[PR_QNUM(WME_AC_BE)].elem; \ + data[i++] = sc->debug.stats.txstats[PR_QNUM(WME_AC_BK)].elem; \ + data[i++] = sc->debug.stats.txstats[PR_QNUM(WME_AC_VI)].elem; \ + data[i++] = sc->debug.stats.txstats[PR_QNUM(WME_AC_VO)].elem; \ + } while (0) + +#define AWDATA_RX(elem) \ + do { \ + data[i++] = sc->debug.stats.rxstats.elem; \ + } while (0) + +static void ath9k_get_et_stats(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ethtool_stats *stats, u64 *data) +{ + struct ath_softc *sc = hw->priv; + int i = 0; + + data[i++] = (sc->debug.stats.txstats[PR_QNUM(WME_AC_BE)].tx_pkts_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_BK)].tx_pkts_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_VI)].tx_pkts_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_VO)].tx_pkts_all); + data[i++] = (sc->debug.stats.txstats[PR_QNUM(WME_AC_BE)].tx_bytes_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_BK)].tx_bytes_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_VI)].tx_bytes_all + + sc->debug.stats.txstats[PR_QNUM(WME_AC_VO)].tx_bytes_all); + AWDATA_RX(rx_pkts_all); + AWDATA_RX(rx_bytes_all); + + AWDATA(tx_pkts_all); + AWDATA(tx_bytes_all); + AWDATA(queued); + AWDATA(completed); + AWDATA(xretries); + AWDATA(a_aggr); + AWDATA(a_queued_hw); + AWDATA(a_queued_sw); + AWDATA(a_completed); + AWDATA(a_retries); + AWDATA(a_xretries); + AWDATA(fifo_underrun); + AWDATA(xtxop); + AWDATA(timer_exp); + AWDATA(desc_cfg_err); + AWDATA(data_underrun); + AWDATA(delim_underrun); + + AWDATA_RX(decrypt_crc_err); + AWDATA_RX(phy_err); + AWDATA_RX(mic_err); + AWDATA_RX(pre_delim_crc_err); + AWDATA_RX(post_delim_crc_err); + AWDATA_RX(decrypt_busy_err); + + AWDATA_RX(phy_err_stats[ATH9K_PHYERR_RADAR]); + AWDATA_RX(phy_err_stats[ATH9K_PHYERR_OFDM_TIMING]); + AWDATA_RX(phy_err_stats[ATH9K_PHYERR_CCK_TIMING]); + + WARN_ON(i != ATH9K_SSTATS_LEN); +} + +/* End of ethtool get-stats functions */ + +#endif + + +#ifdef CONFIG_PM_SLEEP + +static void ath9k_wow_map_triggers(struct ath_softc *sc, + struct cfg80211_wowlan *wowlan, + u32 *wow_triggers) +{ + if (wowlan->disconnect) + *wow_triggers |= AH_WOW_LINK_CHANGE | + AH_WOW_BEACON_MISS; + if (wowlan->magic_pkt) + *wow_triggers |= AH_WOW_MAGIC_PATTERN_EN; + + if (wowlan->n_patterns) + *wow_triggers |= AH_WOW_USER_PATTERN_EN; + + sc->wow_enabled = *wow_triggers; + +} + +static void ath9k_wow_add_disassoc_deauth_pattern(struct ath_softc *sc) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + struct ath9k_hw_capabilities *pcaps = &ah->caps; + int pattern_count = 0; + int i, byte_cnt; + u8 dis_deauth_pattern[MAX_PATTERN_SIZE]; + u8 dis_deauth_mask[MAX_PATTERN_SIZE]; + + memset(dis_deauth_pattern, 0, 
MAX_PATTERN_SIZE); + memset(dis_deauth_mask, 0, MAX_PATTERN_SIZE); + + /* + * Create Disassociate / Deauthenticate packet filter + * + * 2 bytes 2 byte 6 bytes 6 bytes 6 bytes + * +--------------+----------+---------+--------+--------+---- + * + Frame Control+ Duration + DA + SA + BSSID + + * +--------------+----------+---------+--------+--------+---- + * + * The above is the management frame format for disassociate/ + * deauthenticate pattern, from this we need to match the first byte + * of 'Frame Control' and DA, SA, and BSSID fields + * (skipping 2nd byte of FC and Duration field). + * + * Disassociate pattern + * -------------------- + * Frame control = 00 00 1010 + * DA, SA, BSSID = x:x:x:x:x:x + * Pattern will be A0000000 | x:x:x:x:x:x | x:x:x:x:x:x + * | x:x:x:x:x:x -- 22 bytes + * + * Deauthenticate pattern + * ---------------------- + * Frame control = 00 00 1100 + * DA, SA, BSSID = x:x:x:x:x:x + * Pattern will be C0000000 | x:x:x:x:x:x | x:x:x:x:x:x + * | x:x:x:x:x:x -- 22 bytes + */ + + /* Create Disassociate Pattern first */ + + byte_cnt = 0; + + /* Fill out the mask with all FF's */ + + for (i = 0; i < MAX_PATTERN_MASK_SIZE; i++) + dis_deauth_mask[i] = 0xff; + + /* copy the first byte of frame control field */ + dis_deauth_pattern[byte_cnt] = 0xa0; + byte_cnt++; + + /* skip 2nd byte of frame control and Duration field */ + byte_cnt += 3; + + /* + * need not match the destination mac address, it can be a broadcast + * mac address or a unicast to this station + */ + byte_cnt += 6; + + /* copy the source mac address */ + memcpy((dis_deauth_pattern + byte_cnt), common->curbssid, ETH_ALEN); + + byte_cnt += 6; + + /* copy the bssid, it's the same as the source mac address */ + + memcpy((dis_deauth_pattern + byte_cnt), common->curbssid, ETH_ALEN); + + /* Create Disassociate pattern mask */ + + if (pcaps->hw_caps & ATH9K_HW_WOW_PATTERN_MATCH_EXACT) { + + if (pcaps->hw_caps & ATH9K_HW_WOW_PATTERN_MATCH_DWORD) { + /* + * for AR9280, because of hardware limitation, the + * first 4 bytes have to be matched for all patterns. + * the mask for disassociation and de-auth pattern + * matching needs to enable the first 4 bytes. + * also the duration field needs to be filled. + */ + dis_deauth_mask[0] = 0xf0; + + /* + * fill in duration field + FIXME: what is the exact value ?
+ */ + dis_deauth_pattern[2] = 0xff; + dis_deauth_pattern[3] = 0xff; + } else { + dis_deauth_mask[0] = 0xfe; + } + + dis_deauth_mask[1] = 0x03; + dis_deauth_mask[2] = 0xc0; + } else { + dis_deauth_mask[0] = 0xef; + dis_deauth_mask[1] = 0x3f; + dis_deauth_mask[2] = 0x00; + dis_deauth_mask[3] = 0xfc; + } + + ath_dbg(common, WOW, "Adding disassoc/deauth patterns for WoW\n"); + + ath9k_hw_wow_apply_pattern(ah, dis_deauth_pattern, dis_deauth_mask, + pattern_count, byte_cnt); + + pattern_count++; + /* + * for de-authenticate pattern, only the first byte of the frame + * control field gets changed from 0xA0 to 0xC0 + */ + dis_deauth_pattern[0] = 0xC0; + + ath9k_hw_wow_apply_pattern(ah, dis_deauth_pattern, dis_deauth_mask, + pattern_count, byte_cnt); + +} + +static void ath9k_wow_add_pattern(struct ath_softc *sc, + struct cfg80211_wowlan *wowlan) +{ + struct ath_hw *ah = sc->sc_ah; + struct ath9k_wow_pattern *wow_pattern = NULL; + struct cfg80211_wowlan_trig_pkt_pattern *patterns = wowlan->patterns; + int mask_len; + s8 i = 0; + + if (!wowlan->n_patterns) + return; + + /* + * Add the new user configured patterns + */ + for (i = 0; i < wowlan->n_patterns; i++) { + + wow_pattern = kzalloc(sizeof(*wow_pattern), GFP_KERNEL); + + if (!wow_pattern) + return; + + /* + * TODO: convert the generic user space pattern to + * appropriate chip specific/802.11 pattern. + */ + + mask_len = DIV_ROUND_UP(wowlan->patterns[i].pattern_len, 8); + memset(wow_pattern->pattern_bytes, 0, MAX_PATTERN_SIZE); + memset(wow_pattern->mask_bytes, 0, MAX_PATTERN_SIZE); + memcpy(wow_pattern->pattern_bytes, patterns[i].pattern, + patterns[i].pattern_len); + memcpy(wow_pattern->mask_bytes, patterns[i].mask, mask_len); + wow_pattern->pattern_len = patterns[i].pattern_len; + + /* + * just need to take care of deauth and disssoc pattern, + * make sure we don't overwrite them. + */ + + ath9k_hw_wow_apply_pattern(ah, wow_pattern->pattern_bytes, + wow_pattern->mask_bytes, + i + 2, + wow_pattern->pattern_len); + kfree(wow_pattern); + + } + +} + +static int ath9k_suspend(struct ieee80211_hw *hw, + struct cfg80211_wowlan *wowlan) +{ + struct ath_softc *sc = hw->priv; + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + u32 wow_triggers_enabled = 0; + int ret = 0; + + mutex_lock(&sc->mutex); + + ath_cancel_work(sc); + del_timer_sync(&common->ani.timer); + del_timer_sync(&sc->rx_poll_timer); + + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) { + ath_dbg(common, ANY, "Device not present\n"); + ret = -EINVAL; + goto fail_wow; + } + + if (WARN_ON(!wowlan)) { + ath_dbg(common, WOW, "None of the WoW triggers enabled\n"); + ret = -EINVAL; + goto fail_wow; + } + + if (!device_can_wakeup(sc->dev)) { + ath_dbg(common, WOW, "device_can_wakeup failed, WoW is not enabled\n"); + ret = 1; + goto fail_wow; + } + + /* + * none of the sta vifs are associated + * and we are not currently handling multivif + * cases, for instance we have to seperately + * configure 'keep alive frame' for each + * STA. 
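The ath9k_wow_add_disassoc_deauth_pattern() code above builds a 22-byte pattern that matches the first byte of Frame Control plus the SA and BSSID of disassociation/deauthentication frames while ignoring Duration and DA. A standalone sketch of the same layout; the byte-wise mask here is only to show which offsets are matched, since the driver then encodes the mask in the hardware's own per-dword format:

    #include <stdint.h>
    #include <string.h>

    #define PATTERN_LEN 22  /* FC(2) + Duration(2) + DA(6) + SA(6) + BSSID(6) */

    /* Build a disassoc wake pattern the way the driver does: match FC
     * byte 0 (0xa0; deauth would be 0xc0), skip FC byte 1, Duration and
     * DA, then match SA and BSSID (both are the current BSSID, because
     * the AP is the sender). */
    static void build_disassoc_pattern(const uint8_t bssid[6],
                                       uint8_t pattern[PATTERN_LEN],
                                       uint8_t mask[PATTERN_LEN])
    {
            memset(pattern, 0, PATTERN_LEN);
            memset(mask, 0, PATTERN_LEN);

            pattern[0] = 0xa0;              /* mgmt frame, subtype disassoc */
            mask[0] = 0xff;                 /* match the first FC byte only */
            /* bytes 1..3 (FC byte 2 + Duration) and 4..9 (DA) stay unmatched */
            memcpy(&pattern[10], bssid, 6); /* SA */
            memcpy(&pattern[16], bssid, 6); /* BSSID */
            memset(&mask[10], 0xff, 12);    /* match SA + BSSID */
    }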
+ */ + + if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) { + ath_dbg(common, WOW, "None of the STA vifs are associated\n"); + ret = 1; + goto fail_wow; + } + + if (sc->nvifs > 1) { + ath_dbg(common, WOW, "WoW for multivif is not yet supported\n"); + ret = 1; + goto fail_wow; + } + + ath9k_wow_map_triggers(sc, wowlan, &wow_triggers_enabled); + + ath_dbg(common, WOW, "WoW triggers enabled 0x%x\n", + wow_triggers_enabled); + + ath9k_ps_wakeup(sc); + + ath9k_stop_btcoex(sc); + + /* + * Enable wake up on recieving disassoc/deauth + * frame by default. + */ + ath9k_wow_add_disassoc_deauth_pattern(sc); + + if (wow_triggers_enabled & AH_WOW_USER_PATTERN_EN) + ath9k_wow_add_pattern(sc, wowlan); + + spin_lock_bh(&sc->sc_pcu_lock); + /* + * To avoid false wake, we enable beacon miss interrupt only + * when we go to sleep. We save the current interrupt mask + * so we can restore it after the system wakes up + */ + sc->wow_intr_before_sleep = ah->imask; + ah->imask &= ~ATH9K_INT_GLOBAL; + ath9k_hw_disable_interrupts(ah); + ah->imask = ATH9K_INT_BMISS | ATH9K_INT_GLOBAL; + ath9k_hw_set_interrupts(ah); + ath9k_hw_enable_interrupts(ah); + + spin_unlock_bh(&sc->sc_pcu_lock); + + /* + * we can now sync irq and kill any running tasklets, since we already + * disabled interrupts and not holding a spin lock + */ + synchronize_irq(sc->irq); + tasklet_kill(&sc->intr_tq); + + ath9k_hw_wow_enable(ah, wow_triggers_enabled); + + ath9k_ps_restore(sc); + ath_dbg(common, ANY, "WoW enabled in ath9k\n"); + atomic_inc(&sc->wow_sleep_proc_intr); + +fail_wow: + mutex_unlock(&sc->mutex); + return ret; +} + +static int ath9k_resume(struct ieee80211_hw *hw) +{ + struct ath_softc *sc = hw->priv; + struct ath_hw *ah = sc->sc_ah; + struct ath_common *common = ath9k_hw_common(ah); + u32 wow_status; + + mutex_lock(&sc->mutex); + + ath9k_ps_wakeup(sc); + + spin_lock_bh(&sc->sc_pcu_lock); + + ath9k_hw_disable_interrupts(ah); + ah->imask = sc->wow_intr_before_sleep; + ath9k_hw_set_interrupts(ah); + ath9k_hw_enable_interrupts(ah); + + spin_unlock_bh(&sc->sc_pcu_lock); + + wow_status = ath9k_hw_wow_wakeup(ah); + + if (atomic_read(&sc->wow_got_bmiss_intr) == 0) { + /* + * some devices may not pick beacon miss + * as the reason they woke up so we add + * that here for that shortcoming. 
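ath9k_suspend() above parks the chip with only the beacon-miss interrupt armed after saving the previous mask, and ath9k_resume() restores that mask and folds a beacon miss seen during sleep (tracked via the wow_sleep_proc_intr / wow_got_bmiss_intr counters in ath_isr) back into the wake-up status. A condensed sketch of just the mask bookkeeping; the two function names are invented, the statements are the driver's own, and the PCU lock is assumed to be required as in the original code:

    /* Condensed from ath9k_suspend()/ath9k_resume() above -- sketch only. */
    static void wow_park_interrupts(struct ath_softc *sc, struct ath_hw *ah)
    {
            spin_lock_bh(&sc->sc_pcu_lock);
            sc->wow_intr_before_sleep = ah->imask;  /* remember active mask */
            ath9k_hw_disable_interrupts(ah);
            ah->imask = ATH9K_INT_BMISS | ATH9K_INT_GLOBAL;
            ath9k_hw_set_interrupts(ah);
            ath9k_hw_enable_interrupts(ah);
            spin_unlock_bh(&sc->sc_pcu_lock);
    }

    static void wow_restore_interrupts(struct ath_softc *sc, struct ath_hw *ah)
    {
            spin_lock_bh(&sc->sc_pcu_lock);
            ath9k_hw_disable_interrupts(ah);
            ah->imask = sc->wow_intr_before_sleep;  /* put the old mask back */
            ath9k_hw_set_interrupts(ah);
            ath9k_hw_enable_interrupts(ah);
            spin_unlock_bh(&sc->sc_pcu_lock);
    }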
+ */ + wow_status |= AH_WOW_BEACON_MISS; + atomic_dec(&sc->wow_got_bmiss_intr); + ath_dbg(common, ANY, "Beacon miss interrupt picked up during WoW sleep\n"); + } + + atomic_dec(&sc->wow_sleep_proc_intr); + + if (wow_status) { + ath_dbg(common, ANY, "Waking up due to WoW triggers %s with WoW status = %x\n", + ath9k_hw_wow_event_to_string(wow_status), wow_status); + } + + ath_restart_work(sc); + ath9k_start_btcoex(sc); + + ath9k_ps_restore(sc); + mutex_unlock(&sc->mutex); + + return 0; +} + +static void ath9k_set_wakeup(struct ieee80211_hw *hw, bool enabled) +{ + struct ath_softc *sc = hw->priv; + + mutex_lock(&sc->mutex); + device_init_wakeup(sc->dev, 1); + device_set_wakeup_enable(sc->dev, enabled); + mutex_unlock(&sc->mutex); +} + +#endif + struct ieee80211_ops ath9k_ops = { .tx = ath9k_tx, .start = ath9k_start, @@ -2408,4 +2436,16 @@ struct ieee80211_ops ath9k_ops = { .get_stats = ath9k_get_stats, .set_antenna = ath9k_set_antenna, .get_antenna = ath9k_get_antenna, + +#ifdef CONFIG_PM_SLEEP + .suspend = ath9k_suspend, + .resume = ath9k_resume, + .set_wakeup = ath9k_set_wakeup, +#endif + +#ifdef CONFIG_ATH9K_DEBUGFS + .get_et_sset_count = ath9k_get_et_sset_count, + .get_et_stats = ath9k_get_et_stats, + .get_et_strings = ath9k_get_et_strings, +#endif }; diff --git a/drivers/net/wireless/ath/ath9k/mci.c b/drivers/net/wireless/ath/ath9k/mci.c index 29fe52d69973..fb536e7e661b 100644 --- a/drivers/net/wireless/ath/ath9k/mci.c +++ b/drivers/net/wireless/ath/ath9k/mci.c @@ -20,7 +20,7 @@ #include "ath9k.h" #include "mci.h" -static const u8 ath_mci_duty_cycle[] = { 0, 50, 60, 70, 80, 85, 90, 95, 98 }; +static const u8 ath_mci_duty_cycle[] = { 55, 50, 60, 70, 80, 85, 90, 95, 98 }; static struct ath_mci_profile_info* ath_mci_find_profile(struct ath_mci_profile *mci, @@ -28,11 +28,14 @@ ath_mci_find_profile(struct ath_mci_profile *mci, { struct ath_mci_profile_info *entry; + if (list_empty(&mci->info)) + return NULL; + list_for_each_entry(entry, &mci->info, list) { if (entry->conn_handle == info->conn_handle) - break; + return entry; } - return entry; + return NULL; } static bool ath_mci_add_profile(struct ath_common *common, @@ -49,31 +52,21 @@ static bool ath_mci_add_profile(struct ath_common *common, (info->type != MCI_GPM_COEX_PROFILE_VOICE)) return false; - entry = ath_mci_find_profile(mci, info); + entry = kzalloc(sizeof(*entry), GFP_ATOMIC); + if (!entry) + return false; - if (entry) { - memcpy(entry, info, 10); - } else { - entry = kzalloc(sizeof(*entry), GFP_KERNEL); - if (!entry) - return false; - - memcpy(entry, info, 10); - INC_PROF(mci, info); - list_add_tail(&info->list, &mci->info); - } + memcpy(entry, info, 10); + INC_PROF(mci, info); + list_add_tail(&entry->list, &mci->info); return true; } static void ath_mci_del_profile(struct ath_common *common, struct ath_mci_profile *mci, - struct ath_mci_profile_info *info) + struct ath_mci_profile_info *entry) { - struct ath_mci_profile_info *entry; - - entry = ath_mci_find_profile(mci, info); - if (!entry) return; @@ -86,12 +79,16 @@ void ath_mci_flush_profile(struct ath_mci_profile *mci) { struct ath_mci_profile_info *info, *tinfo; + mci->aggr_limit = 0; + + if (list_empty(&mci->info)) + return; + list_for_each_entry_safe(info, tinfo, &mci->info, list) { list_del(&info->list); DEC_PROF(mci, info); kfree(info); } - mci->aggr_limit = 0; } static void ath_mci_adjust_aggr_limit(struct ath_btcoex *btcoex) @@ -116,42 +113,60 @@ static void ath_mci_update_scheme(struct ath_softc *sc) struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct 
ath_btcoex *btcoex = &sc->btcoex; struct ath_mci_profile *mci = &btcoex->mci; + struct ath9k_hw_mci *mci_hw = &sc->sc_ah->btcoex_hw.mci; struct ath_mci_profile_info *info; u32 num_profile = NUM_PROF(mci); + if (mci_hw->config & ATH_MCI_CONFIG_DISABLE_TUNING) + goto skip_tuning; + + btcoex->duty_cycle = ath_mci_duty_cycle[num_profile]; + if (num_profile == 1) { info = list_first_entry(&mci->info, struct ath_mci_profile_info, list); - if (mci->num_sco && info->T == 12) { - mci->aggr_limit = 8; + if (mci->num_sco) { + if (info->T == 12) + mci->aggr_limit = 8; + else if (info->T == 6) { + mci->aggr_limit = 6; + btcoex->duty_cycle = 30; + } ath_dbg(common, MCI, - "Single SCO, aggregation limit 2 ms\n"); - } else if ((info->type == MCI_GPM_COEX_PROFILE_BNEP) && - !info->master) { - btcoex->btcoex_period = 60; + "Single SCO, aggregation limit %d 1/4 ms\n", + mci->aggr_limit); + } else if (mci->num_pan || mci->num_other_acl) { + /* + * For single PAN/FTP profile, allocate 35% for BT + * to improve WLAN throughput. + */ + btcoex->duty_cycle = 35; + btcoex->btcoex_period = 53; ath_dbg(common, MCI, - "Single slave PAN/FTP, bt period 60 ms\n"); - } else if ((info->type == MCI_GPM_COEX_PROFILE_HID) && - (info->T > 0 && info->T < 50) && - (info->A > 1 || info->W > 1)) { + "Single PAN/FTP bt period %d ms dutycycle %d\n", + btcoex->duty_cycle, btcoex->btcoex_period); + } else if (mci->num_hid) { btcoex->duty_cycle = 30; - mci->aggr_limit = 8; + mci->aggr_limit = 6; ath_dbg(common, MCI, "Multiple attempt/timeout single HID " - "aggregation limit 2 ms dutycycle 30%%\n"); + "aggregation limit 1.5 ms dutycycle 30%%\n"); } - } else if ((num_profile == 2) && (mci->num_hid == 2)) { - btcoex->duty_cycle = 30; - mci->aggr_limit = 8; - ath_dbg(common, MCI, - "Two HIDs aggregation limit 2 ms dutycycle 30%%\n"); - } else if (num_profile > 3) { + } else if (num_profile == 2) { + if (mci->num_hid == 2) + btcoex->duty_cycle = 30; mci->aggr_limit = 6; ath_dbg(common, MCI, - "Three or more profiles aggregation limit 1.5 ms\n"); + "Two BT profiles aggr limit 1.5 ms dutycycle %d%%\n", + btcoex->duty_cycle); + } else if (num_profile >= 3) { + mci->aggr_limit = 4; + ath_dbg(common, MCI, + "Three or more profiles aggregation limit 1 ms\n"); } +skip_tuning: if (IS_CHAN_2GHZ(sc->sc_ah->curchan)) { if (IS_CHAN_HT(sc->sc_ah->curchan)) ath_mci_adjust_aggr_limit(btcoex); @@ -159,18 +174,17 @@ static void ath_mci_update_scheme(struct ath_softc *sc) btcoex->btcoex_period >>= 1; } - ath9k_hw_btcoex_disable(sc->sc_ah); ath9k_btcoex_timer_pause(sc); + ath9k_hw_btcoex_disable(sc->sc_ah); if (IS_CHAN_5GHZ(sc->sc_ah->curchan)) return; - btcoex->duty_cycle += (mci->num_bdr ? ATH_MCI_MAX_DUTY_CYCLE : 0); + btcoex->duty_cycle += (mci->num_bdr ? 
ATH_MCI_BDR_DUTY_CYCLE : 0); if (btcoex->duty_cycle > ATH_MCI_MAX_DUTY_CYCLE) btcoex->duty_cycle = ATH_MCI_MAX_DUTY_CYCLE; - btcoex->btcoex_period *= 1000; - btcoex->btcoex_no_stomp = btcoex->btcoex_period * + btcoex->btcoex_no_stomp = btcoex->btcoex_period * 1000 * (100 - btcoex->duty_cycle) / 100; ath9k_hw_btcoex_enable(sc->sc_ah); @@ -181,20 +195,16 @@ static void ath_mci_cal_msg(struct ath_softc *sc, u8 opcode, u8 *rx_payload) { struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); + struct ath9k_hw_mci *mci_hw = &ah->btcoex_hw.mci; u32 payload[4] = {0, 0, 0, 0}; switch (opcode) { case MCI_GPM_BT_CAL_REQ: - if (ar9003_mci_state(ah, MCI_STATE_BT, NULL) == MCI_BT_AWAKE) { - ar9003_mci_state(ah, MCI_STATE_SET_BT_CAL_START, NULL); - ieee80211_queue_work(sc->hw, &sc->hw_reset_work); - } else { - ath_dbg(common, MCI, "MCI State mismatch: %d\n", - ar9003_mci_state(ah, MCI_STATE_BT, NULL)); + if (mci_hw->bt_state == MCI_BT_AWAKE) { + ar9003_mci_state(ah, MCI_STATE_SET_BT_CAL_START); + ath9k_queue_reset(sc, RESET_TYPE_MCI); } - break; - case MCI_GPM_BT_CAL_DONE: - ar9003_mci_state(ah, MCI_STATE_BT, NULL); + ath_dbg(common, MCI, "MCI State : %d\n", mci_hw->bt_state); break; case MCI_GPM_BT_CAL_GRANT: MCI_GPM_SET_CAL_TYPE(payload, MCI_GPM_WLAN_CAL_DONE); @@ -207,32 +217,55 @@ static void ath_mci_cal_msg(struct ath_softc *sc, u8 opcode, u8 *rx_payload) } } +static void ath9k_mci_work(struct work_struct *work) +{ + struct ath_softc *sc = container_of(work, struct ath_softc, mci_work); + + ath_mci_update_scheme(sc); +} + static void ath_mci_process_profile(struct ath_softc *sc, struct ath_mci_profile_info *info) { struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ath_btcoex *btcoex = &sc->btcoex; struct ath_mci_profile *mci = &btcoex->mci; + struct ath_mci_profile_info *entry = NULL; + + entry = ath_mci_find_profile(mci, info); + if (entry) { + /* + * Two MCI interrupts are generated while connecting to + * headset and A2DP profile, but only one MCI interrupt + * is generated with last added profile type while disconnecting + * both profiles. + * So while adding second profile type decrement + * the first one. + */ + if (entry->type != info->type) { + DEC_PROF(mci, entry); + INC_PROF(mci, info); + } + memcpy(entry, info, 10); + } if (info->start) { - if (!ath_mci_add_profile(common, mci, info)) + if (!entry && !ath_mci_add_profile(common, mci, info)) return; } else - ath_mci_del_profile(common, mci, info); + ath_mci_del_profile(common, mci, entry); btcoex->btcoex_period = ATH_MCI_DEF_BT_PERIOD; mci->aggr_limit = mci->num_sco ? 6 : 0; - if (NUM_PROF(mci)) { + btcoex->duty_cycle = ath_mci_duty_cycle[NUM_PROF(mci)]; + if (NUM_PROF(mci)) btcoex->bt_stomp_type = ATH_BTCOEX_STOMP_LOW; - btcoex->duty_cycle = ath_mci_duty_cycle[NUM_PROF(mci)]; - } else { + else btcoex->bt_stomp_type = mci->num_mgmt ? 
ATH_BTCOEX_STOMP_ALL : ATH_BTCOEX_STOMP_LOW; - btcoex->duty_cycle = ATH_BTCOEX_DEF_DUTY_CYCLE; - } - ath_mci_update_scheme(sc); + ieee80211_queue_work(sc->hw, &sc->mci_work); } static void ath_mci_process_status(struct ath_softc *sc, @@ -247,8 +280,6 @@ static void ath_mci_process_status(struct ath_softc *sc, if (status->is_link) return; - memset(&info, 0, sizeof(struct ath_mci_profile_info)); - info.conn_handle = status->conn_handle; if (ath_mci_find_profile(mci, &info)) return; @@ -268,7 +299,7 @@ static void ath_mci_process_status(struct ath_softc *sc, } while (++i < ATH_MCI_MAX_PROFILE); if (old_num_mgmt != mci->num_mgmt) - ath_mci_update_scheme(sc); + ieee80211_queue_work(sc->hw, &sc->mci_work); } static void ath_mci_msg(struct ath_softc *sc, u8 opcode, u8 *rx_payload) @@ -277,25 +308,20 @@ static void ath_mci_msg(struct ath_softc *sc, u8 opcode, u8 *rx_payload) struct ath_mci_profile_info profile_info; struct ath_mci_profile_status profile_status; struct ath_common *common = ath9k_hw_common(sc->sc_ah); - u32 version; - u8 major; - u8 minor; + u8 major, minor; u32 seq_num; switch (opcode) { case MCI_GPM_COEX_VERSION_QUERY: - version = ar9003_mci_state(ah, MCI_STATE_SEND_WLAN_COEX_VERSION, - NULL); + ar9003_mci_state(ah, MCI_STATE_SEND_WLAN_COEX_VERSION); break; case MCI_GPM_COEX_VERSION_RESPONSE: major = *(rx_payload + MCI_GPM_COEX_B_MAJOR_VERSION); minor = *(rx_payload + MCI_GPM_COEX_B_MINOR_VERSION); - version = (major << 8) + minor; - version = ar9003_mci_state(ah, MCI_STATE_SET_BT_COEX_VERSION, - &version); + ar9003_mci_set_bt_version(ah, major, minor); break; case MCI_GPM_COEX_STATUS_QUERY: - ar9003_mci_state(ah, MCI_STATE_SEND_WLAN_CHANNELS, NULL); + ar9003_mci_send_wlan_channels(ah); break; case MCI_GPM_COEX_BT_PROFILE_INFO: memcpy(&profile_info, @@ -322,7 +348,7 @@ static void ath_mci_msg(struct ath_softc *sc, u8 opcode, u8 *rx_payload) seq_num = *((u32 *)(rx_payload + 12)); ath_dbg(common, MCI, - "BT_Status_Update: is_link=%d, linkId=%d, state=%d, SEQ=%d\n", + "BT_Status_Update: is_link=%d, linkId=%d, state=%d, SEQ=%u\n", profile_status.is_link, profile_status.conn_handle, profile_status.is_critical, seq_num); @@ -362,6 +388,7 @@ int ath_mci_setup(struct ath_softc *sc) mci->gpm_buf.bf_addr, (mci->gpm_buf.bf_len >> 4), mci->sched_buf.bf_paddr); + INIT_WORK(&sc->mci_work, ath9k_mci_work); ath_dbg(common, MCI, "MCI Initialized\n"); return 0; @@ -389,6 +416,7 @@ void ath_mci_intr(struct ath_softc *sc) struct ath_mci_coex *mci = &sc->mci_coex; struct ath_hw *ah = sc->sc_ah; struct ath_common *common = ath9k_hw_common(ah); + struct ath9k_hw_mci *mci_hw = &ah->btcoex_hw.mci; u32 mci_int, mci_int_rxmsg; u32 offset, subtype, opcode; u32 *pgpm; @@ -397,8 +425,8 @@ void ath_mci_intr(struct ath_softc *sc) ar9003_mci_get_interrupt(sc->sc_ah, &mci_int, &mci_int_rxmsg); - if (ar9003_mci_state(ah, MCI_STATE_ENABLE, NULL) == 0) { - ar9003_mci_state(ah, MCI_STATE_INIT_GPM_OFFSET, NULL); + if (ar9003_mci_state(ah, MCI_STATE_ENABLE) == 0) { + ar9003_mci_get_next_gpm_offset(ah, true, NULL); return; } @@ -417,46 +445,41 @@ void ath_mci_intr(struct ath_softc *sc) NULL, 0, true, false); mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE; - ar9003_mci_state(ah, MCI_STATE_RESET_REQ_WAKE, NULL); + ar9003_mci_state(ah, MCI_STATE_RESET_REQ_WAKE); /* * always do this for recovery and 2G/5G toggling and LNA_TRANS */ - ar9003_mci_state(ah, MCI_STATE_SET_BT_AWAKE, NULL); + ar9003_mci_state(ah, MCI_STATE_SET_BT_AWAKE); } if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING) { mci_int_rxmsg &= 
~AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING; - if (ar9003_mci_state(ah, MCI_STATE_BT, NULL) == MCI_BT_SLEEP) { - if (ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP, NULL) != - MCI_BT_SLEEP) - ar9003_mci_state(ah, MCI_STATE_SET_BT_AWAKE, - NULL); - } + if ((mci_hw->bt_state == MCI_BT_SLEEP) && + (ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP) != + MCI_BT_SLEEP)) + ar9003_mci_state(ah, MCI_STATE_SET_BT_AWAKE); } if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING) { mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING; - if (ar9003_mci_state(ah, MCI_STATE_BT, NULL) == MCI_BT_AWAKE) { - if (ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP, NULL) != - MCI_BT_AWAKE) - ar9003_mci_state(ah, MCI_STATE_SET_BT_SLEEP, - NULL); - } + if ((mci_hw->bt_state == MCI_BT_AWAKE) && + (ar9003_mci_state(ah, MCI_STATE_REMOTE_SLEEP) != + MCI_BT_AWAKE)) + mci_hw->bt_state = MCI_BT_SLEEP; } if ((mci_int & AR_MCI_INTERRUPT_RX_INVALID_HDR) || (mci_int & AR_MCI_INTERRUPT_CONT_INFO_TIMEOUT)) { - ar9003_mci_state(ah, MCI_STATE_RECOVER_RX, NULL); + ar9003_mci_state(ah, MCI_STATE_RECOVER_RX); skip_gpm = true; } if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_SCHD_INFO) { mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_SCHD_INFO; - offset = ar9003_mci_state(ah, MCI_STATE_LAST_SCHD_MSG_OFFSET, - NULL); + offset = ar9003_mci_state(ah, MCI_STATE_LAST_SCHD_MSG_OFFSET); } if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_GPM) { @@ -465,8 +488,8 @@ void ath_mci_intr(struct ath_softc *sc) while (more_data == MCI_GPM_MORE) { pgpm = mci->gpm_buf.bf_addr; - offset = ar9003_mci_state(ah, MCI_STATE_NEXT_GPM_OFFSET, - &more_data); + offset = ar9003_mci_get_next_gpm_offset(ah, false, + &more_data); if (offset == MCI_GPM_INVALID) break; @@ -507,23 +530,17 @@ void ath_mci_intr(struct ath_softc *sc) mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_LNA_INFO; if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_CONT_INFO) { - int value_dbm = ar9003_mci_state(ah, - MCI_STATE_CONT_RSSI_POWER, NULL); + int value_dbm = MS(mci_hw->cont_status, + AR_MCI_CONT_RSSI_POWER); mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_CONT_INFO; - if (ar9003_mci_state(ah, MCI_STATE_CONT_TXRX, NULL)) - ath_dbg(common, MCI, - "MCI CONT_INFO: (tx) pri = %d, pwr = %d dBm\n", - ar9003_mci_state(ah, - MCI_STATE_CONT_PRIORITY, NULL), - value_dbm); - else - ath_dbg(common, MCI, - "MCI CONT_INFO: (rx) pri = %d,pwr = %d dBm\n", - ar9003_mci_state(ah, - MCI_STATE_CONT_PRIORITY, NULL), - value_dbm); + ath_dbg(common, MCI, + "MCI CONT_INFO: (%s) pri = %d pwr = %d dBm\n", + MS(mci_hw->cont_status, AR_MCI_CONT_TXRX) ? 
+ "tx" : "rx", + MS(mci_hw->cont_status, AR_MCI_CONT_PRIORITY), + value_dbm); } if (mci_int_rxmsg & AR_MCI_INTERRUPT_RX_MSG_CONT_NACK) @@ -538,3 +555,14 @@ void ath_mci_intr(struct ath_softc *sc) mci_int &= ~(AR_MCI_INTERRUPT_RX_INVALID_HDR | AR_MCI_INTERRUPT_CONT_INFO_TIMEOUT); } + +void ath_mci_enable(struct ath_softc *sc) +{ + struct ath_common *common = ath9k_hw_common(sc->sc_ah); + + if (!common->btcoex_enabled) + return; + + if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_MCI) + sc->sc_ah->imask |= ATH9K_INT_MCI; +} diff --git a/drivers/net/wireless/ath/ath9k/mci.h b/drivers/net/wireless/ath/ath9k/mci.h index c841444f53c2..fc14eea034eb 100644 --- a/drivers/net/wireless/ath/ath9k/mci.h +++ b/drivers/net/wireless/ath/ath9k/mci.h @@ -130,4 +130,13 @@ void ath_mci_flush_profile(struct ath_mci_profile *mci); int ath_mci_setup(struct ath_softc *sc); void ath_mci_cleanup(struct ath_softc *sc); void ath_mci_intr(struct ath_softc *sc); -#endif + +#ifdef CONFIG_ATH9K_BTCOEX_SUPPORT +void ath_mci_enable(struct ath_softc *sc); +#else +static inline void ath_mci_enable(struct ath_softc *sc) +{ +} +#endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */ + +#endif /* MCI_H*/ diff --git a/drivers/net/wireless/ath/ath9k/pci.c b/drivers/net/wireless/ath/ath9k/pci.c index a856b51255f4..87b89d55e637 100644 --- a/drivers/net/wireless/ath/ath9k/pci.c +++ b/drivers/net/wireless/ath/ath9k/pci.c @@ -115,6 +115,9 @@ static void ath_pci_aspm_init(struct ath_common *common) int pos; u8 aspm; + if (!ah->is_pciexpress) + return; + pos = pci_pcie_cap(pdev); if (!pos) return; @@ -138,6 +141,7 @@ static void ath_pci_aspm_init(struct ath_common *common) aspm &= ~(PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1); pci_write_config_byte(parent, pos + PCI_EXP_LNKCTL, aspm); + ath_info(common, "Disabling ASPM since BTCOEX is enabled\n"); return; } @@ -147,6 +151,7 @@ static void ath_pci_aspm_init(struct ath_common *common) ah->aspm_enabled = true; /* Initialize PCIe PM and SERDES registers. */ ath9k_hw_configpcipowersave(ah, false); + ath_info(common, "ASPM enabled: 0x%x\n", aspm); } } @@ -246,7 +251,7 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) sc->mem = mem; /* Will be cleared in ath9k_start() */ - sc->sc_flags |= SC_OP_INVALID; + set_bit(SC_OP_INVALID, &sc->sc_flags); ret = request_irq(pdev->irq, ath_isr, IRQF_SHARED, "ath9k", sc); if (ret) { @@ -308,6 +313,9 @@ static int ath_pci_suspend(struct device *device) struct ieee80211_hw *hw = pci_get_drvdata(pdev); struct ath_softc *sc = hw->priv; + if (sc->wow_enabled) + return 0; + /* The device has to be moved to FULLSLEEP forcibly. * Otherwise the chip never moved to full sleep, * when no interface is up. 
diff --git a/drivers/net/wireless/ath/ath9k/rc.c b/drivers/net/wireless/ath/ath9k/rc.c index 92a6c0a87f89..e034add9cd5a 100644 --- a/drivers/net/wireless/ath/ath9k/rc.c +++ b/drivers/net/wireless/ath/ath9k/rc.c @@ -770,7 +770,7 @@ static void ath_get_rate(void *priv, struct ieee80211_sta *sta, void *priv_sta, struct ieee80211_tx_rate *rates = tx_info->control.rates; struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; __le16 fc = hdr->frame_control; - u8 try_per_rate, i = 0, rix, high_rix; + u8 try_per_rate, i = 0, rix; int is_probe = 0; if (rate_control_send_low(sta, priv_sta, txrc)) @@ -791,7 +791,6 @@ static void ath_get_rate(void *priv, struct ieee80211_sta *sta, void *priv_sta, rate_table = ath_rc_priv->rate_table; rix = ath_rc_get_highest_rix(sc, ath_rc_priv, rate_table, &is_probe, false); - high_rix = rix; /* * If we're in HT mode and both us and our peer supports LDPC. @@ -839,16 +838,16 @@ static void ath_get_rate(void *priv, struct ieee80211_sta *sta, void *priv_sta, try_per_rate = 8; /* - * Use a legacy rate as last retry to ensure that the frame - * is tried in both MCS and legacy rates. + * If the last rate in the rate series is MCS and has + * more than 80% of per thresh, then use a legacy rate + * as last retry to ensure that the frame is tried in both + * MCS and legacy rate. */ - if ((rates[2].flags & IEEE80211_TX_RC_MCS) && - (!(tx_info->flags & IEEE80211_TX_CTL_AMPDU) || - (ath_rc_priv->per[high_rix] > 45))) + ath_rc_get_lower_rix(rate_table, ath_rc_priv, rix, &rix); + if (WLAN_RC_PHY_HT(rate_table->info[rix].phy) && + (ath_rc_priv->per[rix] > 45)) rix = ath_rc_get_highest_rix(sc, ath_rc_priv, rate_table, &is_probe, true); - else - ath_rc_get_lower_rix(rate_table, ath_rc_priv, rix, &rix); /* All other rates in the series have RTS enabled */ ath_rc_rate_set_series(rate_table, &rates[i], txrc, diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c index 0735aeb3b26c..12aca02228c2 100644 --- a/drivers/net/wireless/ath/ath9k/recv.c +++ b/drivers/net/wireless/ath/ath9k/recv.c @@ -20,43 +20,6 @@ #define SKB_CB_ATHBUF(__skb) (*((struct ath_buf **)__skb->cb)) -static inline bool ath_is_alt_ant_ratio_better(int alt_ratio, int maxdelta, - int mindelta, int main_rssi_avg, - int alt_rssi_avg, int pkt_count) -{ - return (((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && - (alt_rssi_avg > main_rssi_avg + maxdelta)) || - (alt_rssi_avg > main_rssi_avg + mindelta)) && (pkt_count > 50); -} - -static inline bool ath_ant_div_comb_alt_check(u8 div_group, int alt_ratio, - int curr_main_set, int curr_alt_set, - int alt_rssi_avg, int main_rssi_avg) -{ - bool result = false; - switch (div_group) { - case 0: - if (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO) - result = true; - break; - case 1: - case 2: - if ((((curr_main_set == ATH_ANT_DIV_COMB_LNA2) && - (curr_alt_set == ATH_ANT_DIV_COMB_LNA1) && - (alt_rssi_avg >= (main_rssi_avg - 5))) || - ((curr_main_set == ATH_ANT_DIV_COMB_LNA1) && - (curr_alt_set == ATH_ANT_DIV_COMB_LNA2) && - (alt_rssi_avg >= (main_rssi_avg - 2)))) && - (alt_rssi_avg >= 4)) - result = true; - else - result = false; - break; - } - - return result; -} - static inline bool ath9k_check_auto_sleep(struct ath_softc *sc) { return sc->ps_enabled && @@ -303,7 +266,7 @@ static void ath_edma_start_recv(struct ath_softc *sc) ath_opmode_init(sc); - ath9k_hw_startpcureceive(sc->sc_ah, (sc->sc_flags & SC_OP_OFFCHANNEL)); + ath9k_hw_startpcureceive(sc->sc_ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)); 
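The rc.c hunk above changes how the last entry of the rate series is picked: the driver now steps down to the lowest rate it would use and only swaps in a legacy rate for the final retry when that rate is an MCS whose packet error rate exceeds 45%. A simplified, self-contained model of the decision; the structure and helpers below are stand-ins, not the driver's rate-table code:

    #include <stdbool.h>

    #define N_RATES 4

    /* Stand-in model of per-rate state; not ath9k structures. */
    struct rate_state {
            bool is_mcs[N_RATES];   /* true if the index is an HT (MCS) rate */
            int per[N_RATES];       /* packet error rate estimate, percent */
    };

    /* Index used for the final retry: keep the lowest rate of the series
     * unless it is an MCS rate with PER > 45%, in which case fall back
     * to a legacy rate (modelled here as index 0). */
    static int last_retry_index(const struct rate_state *rs, int lowest_rix)
    {
            if (rs->is_mcs[lowest_rix] && rs->per[lowest_rix] > 45)
                    return 0;
            return lowest_rix;
    }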
spin_unlock_bh(&sc->rx.rxbuflock); } @@ -322,8 +285,8 @@ int ath_rx_init(struct ath_softc *sc, int nbufs) int error = 0; spin_lock_init(&sc->sc_pcu_lock); - sc->sc_flags &= ~SC_OP_RXFLUSH; spin_lock_init(&sc->rx.rxbuflock); + clear_bit(SC_OP_RXFLUSH, &sc->sc_flags); common->rx_bufsize = IEEE80211_MAX_MPDU_LEN / 2 + sc->sc_ah->caps.rx_status_len; @@ -467,6 +430,9 @@ u32 ath_calcrxfilter(struct ath_softc *sc) rfilt |= ATH9K_RX_FILTER_MCAST_BCAST_ALL; } + if (AR_SREV_9550(sc->sc_ah)) + rfilt |= ATH9K_RX_FILTER_4ADDRESS; + return rfilt; } @@ -500,7 +466,7 @@ int ath_startrecv(struct ath_softc *sc) start_recv: ath_opmode_init(sc); - ath9k_hw_startpcureceive(ah, (sc->sc_flags & SC_OP_OFFCHANNEL)); + ath9k_hw_startpcureceive(ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)); spin_unlock_bh(&sc->rx.rxbuflock); @@ -535,11 +501,11 @@ bool ath_stoprecv(struct ath_softc *sc) void ath_flushrecv(struct ath_softc *sc) { - sc->sc_flags |= SC_OP_RXFLUSH; + set_bit(SC_OP_RXFLUSH, &sc->sc_flags); if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) ath_rx_tasklet(sc, 1, true); ath_rx_tasklet(sc, 1, false); - sc->sc_flags &= ~SC_OP_RXFLUSH; + clear_bit(SC_OP_RXFLUSH, &sc->sc_flags); } static bool ath_beacon_dtim_pending_cab(struct sk_buff *skb) @@ -587,7 +553,7 @@ static void ath_rx_ps_beacon(struct ath_softc *sc, struct sk_buff *skb) sc->ps_flags &= ~PS_BEACON_SYNC; ath_dbg(common, PS, "Reconfigure Beacon timers based on timestamp from the AP\n"); - ath_set_beacon(sc); + ath9k_set_beacon(sc); } if (ath_beacon_dtim_pending_cab(skb)) { @@ -624,13 +590,13 @@ static void ath_rx_ps(struct ath_softc *sc, struct sk_buff *skb, bool mybeacon) /* Process Beacon and CAB receive in PS state */ if (((sc->ps_flags & PS_WAIT_FOR_BEACON) || ath9k_check_auto_sleep(sc)) - && mybeacon) + && mybeacon) { ath_rx_ps_beacon(sc, skb); - else if ((sc->ps_flags & PS_WAIT_FOR_CAB) && - (ieee80211_is_data(hdr->frame_control) || - ieee80211_is_action(hdr->frame_control)) && - is_multicast_ether_addr(hdr->addr1) && - !ieee80211_has_moredata(hdr->frame_control)) { + } else if ((sc->ps_flags & PS_WAIT_FOR_CAB) && + (ieee80211_is_data(hdr->frame_control) || + ieee80211_is_action(hdr->frame_control)) && + is_multicast_ether_addr(hdr->addr1) && + !ieee80211_has_moredata(hdr->frame_control)) { /* * No more broadcast/multicast frames to be received at this * point. 
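
Several hunks above replace plain sc_flags bit arithmetic with set_bit()/clear_bit()/test_bit(), which are atomic on the kernel side and therefore safe against concurrent updaters of the same word. A rough userspace equivalent using C11 atomics; the flag names mirror the driver's, the helpers are illustrative:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum { SC_OP_INVALID, SC_OP_HW_RESET, SC_OP_RXFLUSH };

static atomic_ulong sc_flags;

static void flag_set(int bit)   { atomic_fetch_or(&sc_flags, 1UL << bit); }
static void flag_clear(int bit) { atomic_fetch_and(&sc_flags, ~(1UL << bit)); }
static bool flag_test(int bit)  { return atomic_load(&sc_flags) & (1UL << bit); }

/* Same shape as ath_flushrecv(): mark the flush, drain, unmark. */
static void flush_recv(void)
{
        flag_set(SC_OP_RXFLUSH);
        /* ... drain the rx queue; the rx path checks flag_test(SC_OP_RXFLUSH) ... */
        flag_clear(SC_OP_RXFLUSH);
}

int main(void)
{
        flag_set(SC_OP_INVALID);
        flush_recv();
        printf("INVALID=%d RXFLUSH=%d\n",
               flag_test(SC_OP_INVALID), flag_test(SC_OP_RXFLUSH));
        return 0;
}
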
@@ -1068,709 +1034,6 @@ static void ath9k_rx_skb_postprocess(struct ath_common *common, rxs->flag &= ~RX_FLAG_DECRYPTED; } -static void ath_lnaconf_alt_good_scan(struct ath_ant_comb *antcomb, - struct ath_hw_antcomb_conf ant_conf, - int main_rssi_avg) -{ - antcomb->quick_scan_cnt = 0; - - if (ant_conf.main_lna_conf == ATH_ANT_DIV_COMB_LNA2) - antcomb->rssi_lna2 = main_rssi_avg; - else if (ant_conf.main_lna_conf == ATH_ANT_DIV_COMB_LNA1) - antcomb->rssi_lna1 = main_rssi_avg; - - switch ((ant_conf.main_lna_conf << 4) | ant_conf.alt_lna_conf) { - case 0x10: /* LNA2 A-B */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA1; - break; - case 0x20: /* LNA1 A-B */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA2; - break; - case 0x21: /* LNA1 LNA2 */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA2; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->second_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - break; - case 0x12: /* LNA2 LNA1 */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->second_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - break; - case 0x13: /* LNA2 A+B */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA1; - break; - case 0x23: /* LNA1 A+B */ - antcomb->main_conf = ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - antcomb->first_quick_scan_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - antcomb->second_quick_scan_conf = ATH_ANT_DIV_COMB_LNA2; - break; - default: - break; - } -} - -static void ath_select_ant_div_from_quick_scan(struct ath_ant_comb *antcomb, - struct ath_hw_antcomb_conf *div_ant_conf, - int main_rssi_avg, int alt_rssi_avg, - int alt_ratio) -{ - /* alt_good */ - switch (antcomb->quick_scan_cnt) { - case 0: - /* set alt to main, and alt to first conf */ - div_ant_conf->main_lna_conf = antcomb->main_conf; - div_ant_conf->alt_lna_conf = antcomb->first_quick_scan_conf; - break; - case 1: - /* set alt to main, and alt to first conf */ - div_ant_conf->main_lna_conf = antcomb->main_conf; - div_ant_conf->alt_lna_conf = antcomb->second_quick_scan_conf; - antcomb->rssi_first = main_rssi_avg; - antcomb->rssi_second = alt_rssi_avg; - - if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) { - /* main is LNA1 */ - if (ath_is_alt_ant_ratio_better(alt_ratio, - ATH_ANT_DIV_COMB_LNA1_DELTA_HI, - ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, - main_rssi_avg, alt_rssi_avg, - antcomb->total_pkt_count)) - antcomb->first_ratio = true; - else - antcomb->first_ratio = false; - } else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) { - if (ath_is_alt_ant_ratio_better(alt_ratio, - ATH_ANT_DIV_COMB_LNA1_DELTA_MID, - ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, - main_rssi_avg, alt_rssi_avg, - antcomb->total_pkt_count)) - antcomb->first_ratio = true; - else - antcomb->first_ratio = false; - } else { - if ((((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && - (alt_rssi_avg > main_rssi_avg + - ATH_ANT_DIV_COMB_LNA1_DELTA_HI)) || - (alt_rssi_avg > main_rssi_avg)) && - (antcomb->total_pkt_count > 50)) - antcomb->first_ratio = true; - else - antcomb->first_ratio = false; - } - break; - case 2: - antcomb->alt_good = false; - 
antcomb->scan_not_start = false; - antcomb->scan = false; - antcomb->rssi_first = main_rssi_avg; - antcomb->rssi_third = alt_rssi_avg; - - if (antcomb->second_quick_scan_conf == ATH_ANT_DIV_COMB_LNA1) - antcomb->rssi_lna1 = alt_rssi_avg; - else if (antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA2) - antcomb->rssi_lna2 = alt_rssi_avg; - else if (antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2) { - if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) - antcomb->rssi_lna2 = main_rssi_avg; - else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) - antcomb->rssi_lna1 = main_rssi_avg; - } - - if (antcomb->rssi_lna2 > antcomb->rssi_lna1 + - ATH_ANT_DIV_COMB_LNA1_LNA2_SWITCH_DELTA) - div_ant_conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA2; - else - div_ant_conf->main_lna_conf = ATH_ANT_DIV_COMB_LNA1; - - if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) { - if (ath_is_alt_ant_ratio_better(alt_ratio, - ATH_ANT_DIV_COMB_LNA1_DELTA_HI, - ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, - main_rssi_avg, alt_rssi_avg, - antcomb->total_pkt_count)) - antcomb->second_ratio = true; - else - antcomb->second_ratio = false; - } else if (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2) { - if (ath_is_alt_ant_ratio_better(alt_ratio, - ATH_ANT_DIV_COMB_LNA1_DELTA_MID, - ATH_ANT_DIV_COMB_LNA1_DELTA_LOW, - main_rssi_avg, alt_rssi_avg, - antcomb->total_pkt_count)) - antcomb->second_ratio = true; - else - antcomb->second_ratio = false; - } else { - if ((((alt_ratio >= ATH_ANT_DIV_COMB_ALT_ANT_RATIO2) && - (alt_rssi_avg > main_rssi_avg + - ATH_ANT_DIV_COMB_LNA1_DELTA_HI)) || - (alt_rssi_avg > main_rssi_avg)) && - (antcomb->total_pkt_count > 50)) - antcomb->second_ratio = true; - else - antcomb->second_ratio = false; - } - - /* set alt to the conf with maximun ratio */ - if (antcomb->first_ratio && antcomb->second_ratio) { - if (antcomb->rssi_second > antcomb->rssi_third) { - /* first alt*/ - if ((antcomb->first_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA1) || - (antcomb->first_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA2)) - /* Set alt LNA1 or LNA2*/ - if (div_ant_conf->main_lna_conf == - ATH_ANT_DIV_COMB_LNA2) - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - else - /* Set alt to A+B or A-B */ - div_ant_conf->alt_lna_conf = - antcomb->first_quick_scan_conf; - } else if ((antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA1) || - (antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA2)) { - /* Set alt LNA1 or LNA2 */ - if (div_ant_conf->main_lna_conf == - ATH_ANT_DIV_COMB_LNA2) - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - } else { - /* Set alt to A+B or A-B */ - div_ant_conf->alt_lna_conf = - antcomb->second_quick_scan_conf; - } - } else if (antcomb->first_ratio) { - /* first alt */ - if ((antcomb->first_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA1) || - (antcomb->first_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA2)) - /* Set alt LNA1 or LNA2 */ - if (div_ant_conf->main_lna_conf == - ATH_ANT_DIV_COMB_LNA2) - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - else - /* Set alt to A+B or A-B */ - div_ant_conf->alt_lna_conf = - antcomb->first_quick_scan_conf; - } else if (antcomb->second_ratio) { - /* second alt */ - if ((antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA1) || - (antcomb->second_quick_scan_conf == - ATH_ANT_DIV_COMB_LNA2)) - /* Set alt LNA1 or LNA2 */ - if 
(div_ant_conf->main_lna_conf == - ATH_ANT_DIV_COMB_LNA2) - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - else - /* Set alt to A+B or A-B */ - div_ant_conf->alt_lna_conf = - antcomb->second_quick_scan_conf; - } else { - /* main is largest */ - if ((antcomb->main_conf == ATH_ANT_DIV_COMB_LNA1) || - (antcomb->main_conf == ATH_ANT_DIV_COMB_LNA2)) - /* Set alt LNA1 or LNA2 */ - if (div_ant_conf->main_lna_conf == - ATH_ANT_DIV_COMB_LNA2) - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else - div_ant_conf->alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - else - /* Set alt to A+B or A-B */ - div_ant_conf->alt_lna_conf = antcomb->main_conf; - } - break; - default: - break; - } -} - -static void ath_ant_div_conf_fast_divbias(struct ath_hw_antcomb_conf *ant_conf, - struct ath_ant_comb *antcomb, int alt_ratio) -{ - if (ant_conf->div_group == 0) { - /* Adjust the fast_div_bias based on main and alt lna conf */ - switch ((ant_conf->main_lna_conf << 4) | - ant_conf->alt_lna_conf) { - case 0x01: /* A-B LNA2 */ - ant_conf->fast_div_bias = 0x3b; - break; - case 0x02: /* A-B LNA1 */ - ant_conf->fast_div_bias = 0x3d; - break; - case 0x03: /* A-B A+B */ - ant_conf->fast_div_bias = 0x1; - break; - case 0x10: /* LNA2 A-B */ - ant_conf->fast_div_bias = 0x7; - break; - case 0x12: /* LNA2 LNA1 */ - ant_conf->fast_div_bias = 0x2; - break; - case 0x13: /* LNA2 A+B */ - ant_conf->fast_div_bias = 0x7; - break; - case 0x20: /* LNA1 A-B */ - ant_conf->fast_div_bias = 0x6; - break; - case 0x21: /* LNA1 LNA2 */ - ant_conf->fast_div_bias = 0x0; - break; - case 0x23: /* LNA1 A+B */ - ant_conf->fast_div_bias = 0x6; - break; - case 0x30: /* A+B A-B */ - ant_conf->fast_div_bias = 0x1; - break; - case 0x31: /* A+B LNA2 */ - ant_conf->fast_div_bias = 0x3b; - break; - case 0x32: /* A+B LNA1 */ - ant_conf->fast_div_bias = 0x3d; - break; - default: - break; - } - } else if (ant_conf->div_group == 1) { - /* Adjust the fast_div_bias based on main and alt_lna_conf */ - switch ((ant_conf->main_lna_conf << 4) | - ant_conf->alt_lna_conf) { - case 0x01: /* A-B LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x02: /* A-B LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x03: /* A-B A+B */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x10: /* LNA2 A-B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x3f; - else - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x12: /* LNA2 LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x13: /* LNA2 A+B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x3f; - else - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x20: /* LNA1 A-B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x3f; - else - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x21: /* LNA1 LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x23: /* LNA1 A+B */ - if (!(antcomb->scan) && - (alt_ratio > 
ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x3f; - else - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x30: /* A+B A-B */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x31: /* A+B LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x32: /* A+B LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - default: - break; - } - } else if (ant_conf->div_group == 2) { - /* Adjust the fast_div_bias based on main and alt_lna_conf */ - switch ((ant_conf->main_lna_conf << 4) | - ant_conf->alt_lna_conf) { - case 0x01: /* A-B LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x02: /* A-B LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x03: /* A-B A+B */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x10: /* LNA2 A-B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x1; - else - ant_conf->fast_div_bias = 0x2; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x12: /* LNA2 LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x13: /* LNA2 A+B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x1; - else - ant_conf->fast_div_bias = 0x2; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x20: /* LNA1 A-B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x1; - else - ant_conf->fast_div_bias = 0x2; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x21: /* LNA1 LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x23: /* LNA1 A+B */ - if (!(antcomb->scan) && - (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO)) - ant_conf->fast_div_bias = 0x1; - else - ant_conf->fast_div_bias = 0x2; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x30: /* A+B A-B */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x31: /* A+B LNA2 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - case 0x32: /* A+B LNA1 */ - ant_conf->fast_div_bias = 0x1; - ant_conf->main_gaintb = 0; - ant_conf->alt_gaintb = 0; - break; - default: - break; - } - } -} - -/* Antenna diversity and combining */ -static void ath_ant_comb_scan(struct ath_softc *sc, struct ath_rx_status *rs) -{ - struct ath_hw_antcomb_conf div_ant_conf; - struct ath_ant_comb *antcomb = &sc->ant_comb; - int alt_ratio = 0, alt_rssi_avg = 0, main_rssi_avg = 0, curr_alt_set; - int curr_main_set; - int main_rssi = rs->rs_rssi_ctl0; - int alt_rssi = rs->rs_rssi_ctl1; - int rx_ant_conf, main_ant_conf; - bool short_scan = false; - - rx_ant_conf = (rs->rs_rssi_ctl2 >> ATH_ANT_RX_CURRENT_SHIFT) & - ATH_ANT_RX_MASK; - main_ant_conf = (rs->rs_rssi_ctl2 >> ATH_ANT_RX_MAIN_SHIFT) & - ATH_ANT_RX_MASK; - - /* Record packet only when both main_rssi and alt_rssi is positive */ - if (main_rssi > 0 && alt_rssi > 0) { - antcomb->total_pkt_count++; - antcomb->main_total_rssi += 
main_rssi; - antcomb->alt_total_rssi += alt_rssi; - if (main_ant_conf == rx_ant_conf) - antcomb->main_recv_cnt++; - else - antcomb->alt_recv_cnt++; - } - - /* Short scan check */ - if (antcomb->scan && antcomb->alt_good) { - if (time_after(jiffies, antcomb->scan_start_time + - msecs_to_jiffies(ATH_ANT_DIV_COMB_SHORT_SCAN_INTR))) - short_scan = true; - else - if (antcomb->total_pkt_count == - ATH_ANT_DIV_COMB_SHORT_SCAN_PKTCOUNT) { - alt_ratio = ((antcomb->alt_recv_cnt * 100) / - antcomb->total_pkt_count); - if (alt_ratio < ATH_ANT_DIV_COMB_ALT_ANT_RATIO) - short_scan = true; - } - } - - if (((antcomb->total_pkt_count < ATH_ANT_DIV_COMB_MAX_PKTCOUNT) || - rs->rs_moreaggr) && !short_scan) - return; - - if (antcomb->total_pkt_count) { - alt_ratio = ((antcomb->alt_recv_cnt * 100) / - antcomb->total_pkt_count); - main_rssi_avg = (antcomb->main_total_rssi / - antcomb->total_pkt_count); - alt_rssi_avg = (antcomb->alt_total_rssi / - antcomb->total_pkt_count); - } - - - ath9k_hw_antdiv_comb_conf_get(sc->sc_ah, &div_ant_conf); - curr_alt_set = div_ant_conf.alt_lna_conf; - curr_main_set = div_ant_conf.main_lna_conf; - - antcomb->count++; - - if (antcomb->count == ATH_ANT_DIV_COMB_MAX_COUNT) { - if (alt_ratio > ATH_ANT_DIV_COMB_ALT_ANT_RATIO) { - ath_lnaconf_alt_good_scan(antcomb, div_ant_conf, - main_rssi_avg); - antcomb->alt_good = true; - } else { - antcomb->alt_good = false; - } - - antcomb->count = 0; - antcomb->scan = true; - antcomb->scan_not_start = true; - } - - if (!antcomb->scan) { - if (ath_ant_div_comb_alt_check(div_ant_conf.div_group, - alt_ratio, curr_main_set, curr_alt_set, - alt_rssi_avg, main_rssi_avg)) { - if (curr_alt_set == ATH_ANT_DIV_COMB_LNA2) { - /* Switch main and alt LNA */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - } else if (curr_alt_set == ATH_ANT_DIV_COMB_LNA1) { - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - } - - goto div_comb_done; - } else if ((curr_alt_set != ATH_ANT_DIV_COMB_LNA1) && - (curr_alt_set != ATH_ANT_DIV_COMB_LNA2)) { - /* Set alt to another LNA */ - if (curr_main_set == ATH_ANT_DIV_COMB_LNA2) - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - else if (curr_main_set == ATH_ANT_DIV_COMB_LNA1) - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - - goto div_comb_done; - } - - if ((alt_rssi_avg < (main_rssi_avg + - div_ant_conf.lna1_lna2_delta))) - goto div_comb_done; - } - - if (!antcomb->scan_not_start) { - switch (curr_alt_set) { - case ATH_ANT_DIV_COMB_LNA2: - antcomb->rssi_lna2 = alt_rssi_avg; - antcomb->rssi_lna1 = main_rssi_avg; - antcomb->scan = true; - /* set to A+B */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - break; - case ATH_ANT_DIV_COMB_LNA1: - antcomb->rssi_lna1 = alt_rssi_avg; - antcomb->rssi_lna2 = main_rssi_avg; - antcomb->scan = true; - /* set to A+B */ - div_ant_conf.main_lna_conf = ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - break; - case ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2: - antcomb->rssi_add = alt_rssi_avg; - antcomb->scan = true; - /* set to A-B */ - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - break; - case ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2: - antcomb->rssi_sub = alt_rssi_avg; - antcomb->scan = false; - if (antcomb->rssi_lna2 > - (antcomb->rssi_lna1 + - ATH_ANT_DIV_COMB_LNA1_LNA2_SWITCH_DELTA)) { - /* use LNA2 as main LNA */ - if ((antcomb->rssi_add > 
antcomb->rssi_lna1) && - (antcomb->rssi_add > antcomb->rssi_sub)) { - /* set to A+B */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - } else if (antcomb->rssi_sub > - antcomb->rssi_lna1) { - /* set to A-B */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - } else { - /* set to LNA1 */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - } - } else { - /* use LNA1 as main LNA */ - if ((antcomb->rssi_add > antcomb->rssi_lna2) && - (antcomb->rssi_add > antcomb->rssi_sub)) { - /* set to A+B */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_PLUS_LNA2; - } else if (antcomb->rssi_sub > - antcomb->rssi_lna1) { - /* set to A-B */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1_MINUS_LNA2; - } else { - /* set to LNA2 */ - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - } - } - break; - default: - break; - } - } else { - if (!antcomb->alt_good) { - antcomb->scan_not_start = false; - /* Set alt to another LNA */ - if (curr_main_set == ATH_ANT_DIV_COMB_LNA2) { - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - } else if (curr_main_set == ATH_ANT_DIV_COMB_LNA1) { - div_ant_conf.main_lna_conf = - ATH_ANT_DIV_COMB_LNA1; - div_ant_conf.alt_lna_conf = - ATH_ANT_DIV_COMB_LNA2; - } - goto div_comb_done; - } - } - - ath_select_ant_div_from_quick_scan(antcomb, &div_ant_conf, - main_rssi_avg, alt_rssi_avg, - alt_ratio); - - antcomb->quick_scan_cnt++; - -div_comb_done: - ath_ant_div_conf_fast_divbias(&div_ant_conf, antcomb, alt_ratio); - ath9k_hw_antdiv_comb_conf_set(sc->sc_ah, &div_ant_conf); - - antcomb->scan_start_time = jiffies; - antcomb->total_pkt_count = 0; - antcomb->main_total_rssi = 0; - antcomb->alt_total_rssi = 0; - antcomb->main_recv_cnt = 0; - antcomb->alt_recv_cnt = 0; -} - int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp) { struct ath_buf *bf; @@ -1804,7 +1067,7 @@ int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp) do { /* If handling rx interrupt and flush is in progress => exit */ - if ((sc->sc_flags & SC_OP_RXFLUSH) && (flush == 0)) + if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags) && (flush == 0)) break; memset(&rs, 0, sizeof(rs)); @@ -1842,13 +1105,14 @@ int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp) else rs.is_mybeacon = false; + sc->rx.num_pkts++; ath_debug_stat_rx(sc, &rs); /* * If we're asked to flush receive queue, directly * chain it back at the queue without processing it. 
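
The large block removed above implemented the LNA diversity-combining scan. Its core bookkeeping was simple: count packets received on the alternate versus the main antenna configuration, average the RSSI seen on each, and prefer the alternate chain when it wins by a receive ratio or an RSSI margin. A condensed, single-threaded sketch of that bookkeeping; the struct is a simplification and the ratio threshold is an assumed value standing in for ATH_ANT_DIV_COMB_ALT_ANT_RATIO:

#include <stdbool.h>
#include <stdio.h>

#define ALT_ANT_RATIO 30   /* assumed stand-in for ATH_ANT_DIV_COMB_ALT_ANT_RATIO */

struct ant_stats {
        int total_pkt_count;
        int main_recv_cnt, alt_recv_cnt;
        long main_total_rssi, alt_total_rssi;
};

static void record_packet(struct ant_stats *s, int main_rssi, int alt_rssi,
                          bool rx_on_main)
{
        if (main_rssi <= 0 || alt_rssi <= 0)
                return;                 /* only count valid RSSI samples */
        s->total_pkt_count++;
        s->main_total_rssi += main_rssi;
        s->alt_total_rssi  += alt_rssi;
        if (rx_on_main)
                s->main_recv_cnt++;
        else
                s->alt_recv_cnt++;
}

/* True when the alternate antenna configuration looks better than the main one. */
static bool alt_is_better(const struct ant_stats *s)
{
        int alt_ratio, main_avg, alt_avg;

        if (!s->total_pkt_count)
                return false;
        alt_ratio = s->alt_recv_cnt * 100 / s->total_pkt_count;
        main_avg  = s->main_total_rssi / s->total_pkt_count;
        alt_avg   = s->alt_total_rssi / s->total_pkt_count;
        return alt_ratio > ALT_ANT_RATIO || alt_avg > main_avg;
}

int main(void)
{
        struct ant_stats s = { 0 };

        for (int i = 0; i < 100; i++)
                record_packet(&s, 20, 25, i % 3 == 0);
        printf("switch to alt config: %s\n", alt_is_better(&s) ? "yes" : "no");
        return 0;
}
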
*/ - if (sc->sc_flags & SC_OP_RXFLUSH) { + if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags)) { RX_STAT_INC(rx_drop_rxflush); goto requeue_drop_frag; } @@ -1969,7 +1233,6 @@ int ath_rx_tasklet(struct ath_softc *sc, int flush, bool hp) skb_trim(skb, skb->len - 8); spin_lock_irqsave(&sc->sc_pm_lock, flags); - if ((sc->ps_flags & (PS_WAIT_FOR_BEACON | PS_WAIT_FOR_CAB | PS_WAIT_FOR_PSPOLL_DATA)) || diff --git a/drivers/net/wireless/ath/ath9k/reg.h b/drivers/net/wireless/ath/ath9k/reg.h index 458f81b4a7cb..87cac8eb7834 100644 --- a/drivers/net/wireless/ath/ath9k/reg.h +++ b/drivers/net/wireless/ath/ath9k/reg.h @@ -696,9 +696,12 @@ #define AR_WA_BIT7 (1 << 7) #define AR_WA_BIT23 (1 << 23) #define AR_WA_D3_L1_DISABLE (1 << 14) +#define AR_WA_UNTIE_RESET_EN (1 << 15) /* Enable PCI Reset + to POR (power-on-reset) */ #define AR_WA_D3_TO_L1_DISABLE_REAL (1 << 16) #define AR_WA_ASPM_TIMER_BASED_DISABLE (1 << 17) -#define AR_WA_RESET_EN (1 << 18) /* Sw Control to enable PCI-Reset to POR (bit 15) */ +#define AR_WA_RESET_EN (1 << 18) /* Enable PCI-Reset to + POR (bit 15) */ #define AR_WA_ANALOG_SHIFT (1 << 20) #define AR_WA_POR_SHORT (1 << 21) /* PCI-E Phy reset control */ #define AR_WA_BIT22 (1 << 22) @@ -798,6 +801,7 @@ #define AR_SREV_REVISION_9580_10 4 /* AR9580 1.0 */ #define AR_SREV_VERSION_9462 0x280 #define AR_SREV_REVISION_9462_20 2 +#define AR_SREV_VERSION_9550 0x400 #define AR_SREV_5416(_ah) \ (((_ah)->hw_version.macVersion == AR_SREV_VERSION_5416_PCI) || \ @@ -905,6 +909,9 @@ (((_ah)->hw_version.macVersion == AR_SREV_VERSION_9462) && \ ((_ah)->hw_version.macRev >= AR_SREV_REVISION_9462_20)) +#define AR_SREV_9550(_ah) \ + (((_ah)->hw_version.macVersion == AR_SREV_VERSION_9550)) + #define AR_SREV_9580(_ah) \ (((_ah)->hw_version.macVersion == AR_SREV_VERSION_9580) && \ ((_ah)->hw_version.macRev >= AR_SREV_REVISION_9580_10)) @@ -1028,6 +1035,8 @@ enum { #define AR_PCIE_PM_CTRL (AR_SREV_9340(ah) ? 
0x4004 : 0x4014) #define AR_PCIE_PM_CTRL_ENA 0x00080000 +#define AR_PCIE_PHY_REG3 0x18c08 + #define AR_NUM_GPIO 14 #define AR928X_NUM_GPIO 10 #define AR9285_NUM_GPIO 12 @@ -1231,6 +1240,8 @@ enum { #define AR_RTC_PLL_CLKSEL 0x00000300 #define AR_RTC_PLL_CLKSEL_S 8 #define AR_RTC_PLL_BYPASS 0x00010000 +#define AR_RTC_PLL_NOPWD 0x00040000 +#define AR_RTC_PLL_NOPWD_S 18 #define PLL3 0x16188 #define PLL3_DO_MEAS_MASK 0x40000000 @@ -1643,11 +1654,11 @@ enum { #define AR_TPC 0x80e8 #define AR_TPC_ACK 0x0000003f -#define AR_TPC_ACK_S 0x00 +#define AR_TPC_ACK_S 0 #define AR_TPC_CTS 0x00003f00 -#define AR_TPC_CTS_S 0x08 +#define AR_TPC_CTS_S 8 #define AR_TPC_CHIRP 0x003f0000 -#define AR_TPC_CHIRP_S 0x16 +#define AR_TPC_CHIRP_S 16 #define AR_QUIET1 0x80fc #define AR_QUIET1_NEXT_QUIET_S 0 @@ -1883,6 +1894,8 @@ enum { #define AR_PCU_MISC_MODE2_HWWAR2 0x02000000 #define AR_PCU_MISC_MODE2_RESERVED2 0xFFFE0000 +#define AR_PCU_MISC_MODE3 0x83d0 + #define AR_MAC_PCU_ASYNC_FIFO_REG3 0x8358 #define AR_MAC_PCU_ASYNC_FIFO_REG3_DATAPATH_SEL 0x00000400 #define AR_MAC_PCU_ASYNC_FIFO_REG3_SOFT_RESET 0x80000000 @@ -1905,6 +1918,140 @@ enum { #define AR_RATE_DURATION_32 0x8780 #define AR_RATE_DURATION(_n) (AR_RATE_DURATION_0 + ((_n)<<2)) +/* WoW - Wake On Wireless */ + +#define AR_PMCTRL_AUX_PWR_DET 0x10000000 /* Puts Chip in L2 state */ +#define AR_PMCTRL_D3COLD_VAUX 0x00800000 +#define AR_PMCTRL_HOST_PME_EN 0x00400000 /* Send OOB WAKE_L on WoW + event */ +#define AR_PMCTRL_WOW_PME_CLR 0x00200000 /* Clear WoW event */ +#define AR_PMCTRL_PWR_STATE_MASK 0x0f000000 /* Power State Mask */ +#define AR_PMCTRL_PWR_STATE_D1D3 0x0f000000 /* Activate D1 and D3 */ +#define AR_PMCTRL_PWR_STATE_D1D3_REAL 0x0f000000 /* Activate D1 and D3 */ +#define AR_PMCTRL_PWR_STATE_D0 0x08000000 /* Activate D0 */ +#define AR_PMCTRL_PWR_PM_CTRL_ENA 0x00008000 /* Enable power mgmt */ + +#define AR_WOW_BEACON_TIMO_MAX 0xffffffff + +/* + * MAC WoW Registers + */ + +#define AR_WOW_PATTERN 0x825C +#define AR_WOW_COUNT 0x8260 +#define AR_WOW_BCN_EN 0x8270 +#define AR_WOW_BCN_TIMO 0x8274 +#define AR_WOW_KEEP_ALIVE_TIMO 0x8278 +#define AR_WOW_KEEP_ALIVE 0x827c +#define AR_WOW_US_SCALAR 0x8284 +#define AR_WOW_KEEP_ALIVE_DELAY 0x8288 +#define AR_WOW_PATTERN_MATCH 0x828c +#define AR_WOW_PATTERN_OFF1 0x8290 /* pattern bytes 0 -> 3 */ +#define AR_WOW_PATTERN_OFF2 0x8294 /* pattern bytes 4 -> 7 */ + +/* for AR9285 or later version of chips */ +#define AR_WOW_EXACT 0x829c +#define AR_WOW_LENGTH1 0x8360 +#define AR_WOW_LENGTH2 0X8364 +/* register to enable match for less than 256 bytes packets */ +#define AR_WOW_PATTERN_MATCH_LT_256B 0x8368 + +#define AR_SW_WOW_CONTROL 0x20018 +#define AR_SW_WOW_ENABLE 0x1 +#define AR_SWITCH_TO_REFCLK 0x2 +#define AR_RESET_CONTROL 0x4 +#define AR_RESET_VALUE_MASK 0x8 +#define AR_HW_WOW_DISABLE 0x10 +#define AR_CLR_MAC_INTERRUPT 0x20 +#define AR_CLR_KA_INTERRUPT 0x40 + +/* AR_WOW_PATTERN register values */ +#define AR_WOW_BACK_OFF_SHIFT(x) ((x & 0xf) << 28) /* in usecs */ +#define AR_WOW_MAC_INTR_EN 0x00040000 +#define AR_WOW_MAGIC_EN 0x00010000 +#define AR_WOW_PATTERN_EN(x) (x & 0xff) +#define AR_WOW_PAT_FOUND_SHIFT 8 +#define AR_WOW_PATTERN_FOUND(x) (x & (0xff << AR_WOW_PAT_FOUND_SHIFT)) +#define AR_WOW_PATTERN_FOUND_MASK ((0xff) << AR_WOW_PAT_FOUND_SHIFT) +#define AR_WOW_MAGIC_PAT_FOUND 0x00020000 +#define AR_WOW_MAC_INTR 0x00080000 +#define AR_WOW_KEEP_ALIVE_FAIL 0x00100000 +#define AR_WOW_BEACON_FAIL 0x00200000 + +#define AR_WOW_STATUS(x) (x & (AR_WOW_PATTERN_FOUND_MASK | \ + AR_WOW_MAGIC_PAT_FOUND | \ + 
AR_WOW_KEEP_ALIVE_FAIL | \ + AR_WOW_BEACON_FAIL)) +#define AR_WOW_CLEAR_EVENTS(x) (x & ~(AR_WOW_PATTERN_EN(0xff) | \ + AR_WOW_MAGIC_EN | \ + AR_WOW_MAC_INTR_EN | \ + AR_WOW_BEACON_FAIL | \ + AR_WOW_KEEP_ALIVE_FAIL)) + +/* AR_WOW_COUNT register values */ +#define AR_WOW_AIFS_CNT(x) (x & 0xff) +#define AR_WOW_SLOT_CNT(x) ((x & 0xff) << 8) +#define AR_WOW_KEEP_ALIVE_CNT(x) ((x & 0xff) << 16) + +/* AR_WOW_BCN_EN register */ +#define AR_WOW_BEACON_FAIL_EN 0x00000001 + +/* AR_WOW_BCN_TIMO rgister */ +#define AR_WOW_BEACON_TIMO 0x40000000 /* valid if BCN_EN is set */ + +/* AR_WOW_KEEP_ALIVE_TIMO register */ +#define AR_WOW_KEEP_ALIVE_TIMO_VALUE +#define AR_WOW_KEEP_ALIVE_NEVER 0xffffffff + +/* AR_WOW_KEEP_ALIVE register */ +#define AR_WOW_KEEP_ALIVE_AUTO_DIS 0x00000001 +#define AR_WOW_KEEP_ALIVE_FAIL_DIS 0x00000002 + +/* AR_WOW_KEEP_ALIVE_DELAY register */ +#define AR_WOW_KEEP_ALIVE_DELAY_VALUE 0x000003e8 /* 1 msec */ + + +/* + * keep it long for beacon workaround - ensure no false alarm + */ +#define AR_WOW_BMISSTHRESHOLD 0x20 + +/* AR_WOW_PATTERN_MATCH register */ +#define AR_WOW_PAT_END_OF_PKT(x) (x & 0xf) +#define AR_WOW_PAT_OFF_MATCH(x) ((x & 0xf) << 8) + +/* + * default values for Wow Configuration for backoff, aifs, slot, keep-alive + * to be programmed into various registers. + */ +#define AR_WOW_PAT_BACKOFF 0x00000004 /* AR_WOW_PATTERN_REG */ +#define AR_WOW_CNT_AIFS_CNT 0x00000022 /* AR_WOW_COUNT_REG */ +#define AR_WOW_CNT_SLOT_CNT 0x00000009 /* AR_WOW_COUNT_REG */ +/* + * Keepalive count applicable for AR9280 2.0 and above. + */ +#define AR_WOW_CNT_KA_CNT 0x00000008 /* AR_WOW_COUNT register */ + +/* WoW - Transmit buffer for keep alive frames */ +#define AR_WOW_TRANSMIT_BUFFER 0xe000 /* E000 - EFFC */ + +#define AR_WOW_TXBUF(i) (AR_WOW_TRANSMIT_BUFFER + ((i) << 2)) + +#define AR_WOW_KA_DESC_WORD2 0xe000 + +#define AR_WOW_KA_DATA_WORD0 0xe030 + +/* WoW Transmit Buffer for patterns */ +#define AR_WOW_TB_PATTERN(i) (0xe100 + (i << 8)) +#define AR_WOW_TB_MASK(i) (0xec00 + (i << 5)) + +/* Currently Pattern 0-7 are supported - so bit 0-7 are set */ +#define AR_WOW_PATTERN_SUPPORTED 0xff +#define AR_WOW_LENGTH_MAX 0xff +#define AR_WOW_LEN1_SHIFT(_i) ((0x3 - ((_i) & 0x3)) << 0x3) +#define AR_WOW_LENGTH1_MASK(_i) (AR_WOW_LENGTH_MAX << AR_WOW_LEN1_SHIFT(_i)) +#define AR_WOW_LEN2_SHIFT(_i) ((0x7 - ((_i) & 0x7)) << 0x3) +#define AR_WOW_LENGTH2_MASK(_i) (AR_WOW_LENGTH_MAX << AR_WOW_LEN2_SHIFT(_i)) #define AR9271_CORE_CLOCK 117 /* clock to 117Mhz */ #define AR9271_TARGET_BAUD_RATE 19200 /* 115200 */ @@ -2077,12 +2224,6 @@ enum { AR_MCI_INTERRUPT_RX_MSG_REMOTE_RESET| \ AR_MCI_INTERRUPT_RX_MSG_SYS_WAKING | \ AR_MCI_INTERRUPT_RX_MSG_SYS_SLEEPING| \ - AR_MCI_INTERRUPT_RX_MSG_SCHD_INFO | \ - AR_MCI_INTERRUPT_RX_MSG_LNA_CONTROL | \ - AR_MCI_INTERRUPT_RX_MSG_LNA_INFO | \ - AR_MCI_INTERRUPT_RX_MSG_CONT_NACK | \ - AR_MCI_INTERRUPT_RX_MSG_CONT_INFO | \ - AR_MCI_INTERRUPT_RX_MSG_CONT_RST | \ AR_MCI_INTERRUPT_RX_MSG_REQ_WAKE) #define AR_MCI_CPU_INT 0x1840 @@ -2098,8 +2239,8 @@ enum { #define AR_MCI_CONT_STATUS 0x1848 #define AR_MCI_CONT_RSSI_POWER 0x000000FF #define AR_MCI_CONT_RSSI_POWER_S 0 -#define AR_MCI_CONT_RRIORITY 0x0000FF00 -#define AR_MCI_CONT_RRIORITY_S 8 +#define AR_MCI_CONT_PRIORITY 0x0000FF00 +#define AR_MCI_CONT_PRIORITY_S 8 #define AR_MCI_CONT_TXRX 0x00010000 #define AR_MCI_CONT_TXRX_S 16 @@ -2162,10 +2303,6 @@ enum { #define AR_BTCOEX_CTRL_SPDT_POLARITY 0x80000000 #define AR_BTCOEX_CTRL_SPDT_POLARITY_S 31 -#define AR_BTCOEX_WL_WEIGHTS0 0x18b0 -#define AR_BTCOEX_WL_WEIGHTS1 0x18b4 -#define 
AR_BTCOEX_WL_WEIGHTS2 0x18b8 -#define AR_BTCOEX_WL_WEIGHTS3 0x18bc #define AR_BTCOEX_MAX_TXPWR(_x) (0x18c0 + ((_x) << 2)) #define AR_BTCOEX_WL_LNA 0x1940 #define AR_BTCOEX_RFGAIN_CTRL 0x1944 @@ -2211,5 +2348,7 @@ enum { #define AR_BTCOEX_CTRL3_CONT_INFO_TIMEOUT 0x00000fff #define AR_BTCOEX_CTRL3_CONT_INFO_TIMEOUT_S 0 +#define AR_GLB_SWREG_DISCONT_MODE 0x2002c +#define AR_GLB_SWREG_DISCONT_EN_BT_WLAN 0x3 #endif diff --git a/drivers/net/wireless/ath/ath9k/wow.c b/drivers/net/wireless/ath/ath9k/wow.c new file mode 100644 index 000000000000..44a08eb53c62 --- /dev/null +++ b/drivers/net/wireless/ath/ath9k/wow.c @@ -0,0 +1,532 @@ +/* + * Copyright (c) 2012 Qualcomm Atheros, Inc. + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +#include <linux/export.h> +#include "ath9k.h" +#include "reg.h" +#include "hw-ops.h" + +const char *ath9k_hw_wow_event_to_string(u32 wow_event) +{ + if (wow_event & AH_WOW_MAGIC_PATTERN_EN) + return "Magic pattern"; + if (wow_event & AH_WOW_USER_PATTERN_EN) + return "User pattern"; + if (wow_event & AH_WOW_LINK_CHANGE) + return "Link change"; + if (wow_event & AH_WOW_BEACON_MISS) + return "Beacon miss"; + + return "unknown reason"; +} +EXPORT_SYMBOL(ath9k_hw_wow_event_to_string); + +static void ath9k_hw_config_serdes_wow_sleep(struct ath_hw *ah) +{ + int i; + + for (i = 0; i < ah->iniPcieSerdesWow.ia_rows; i++) + REG_WRITE(ah, INI_RA(&ah->iniPcieSerdesWow, i, 0), + INI_RA(&ah->iniPcieSerdesWow, i, 1)); + + usleep_range(1000, 1500); +} + +static void ath9k_hw_set_powermode_wow_sleep(struct ath_hw *ah) +{ + struct ath_common *common = ath9k_hw_common(ah); + + REG_SET_BIT(ah, AR_STA_ID1, AR_STA_ID1_PWR_SAV); + + /* set rx disable bit */ + REG_WRITE(ah, AR_CR, AR_CR_RXD); + + if (!ath9k_hw_wait(ah, AR_CR, AR_CR_RXE, 0, AH_WAIT_TIMEOUT)) { + ath_err(common, "Failed to stop Rx DMA in 10ms AR_CR=0x%08x AR_DIAG_SW=0x%08x\n", + REG_READ(ah, AR_CR), REG_READ(ah, AR_DIAG_SW)); + return; + } else { + if (!AR_SREV_9300_20_OR_LATER(ah)) + REG_WRITE(ah, AR_RXDP, 0x0); + } + + /* AR9280 WoW has sleep issue, do not set it to sleep */ + if (AR_SREV_9280_20(ah)) + return; + + REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_ON_INT); +} + +static void ath9k_wow_create_keep_alive_pattern(struct ath_hw *ah) +{ + struct ath_common *common = ath9k_hw_common(ah); + u8 sta_mac_addr[ETH_ALEN], ap_mac_addr[ETH_ALEN]; + u32 ctl[13] = {0}; + u32 data_word[KAL_NUM_DATA_WORDS]; + u8 i; + u32 wow_ka_data_word0; + + memcpy(sta_mac_addr, common->macaddr, ETH_ALEN); + memcpy(ap_mac_addr, common->curbssid, ETH_ALEN); + + /* set the transmit buffer */ + ctl[0] = (KAL_FRAME_LEN | (MAX_RATE_POWER << 16)); + + if (!(AR_SREV_9300_20_OR_LATER(ah))) + ctl[0] += (KAL_ANTENNA_MODE << 25); + + ctl[1] = 0; + ctl[3] = 0xb; /* OFDM_6M hardware value for this rate */ + ctl[4] = 0; + ctl[7] = (ah->txchainmask) << 2; + + if 
(AR_SREV_9300_20_OR_LATER(ah)) + ctl[2] = 0xf << 16; /* tx_tries 0 */ + else + ctl[2] = 0x7 << 16; /* tx_tries 0 */ + + + for (i = 0; i < KAL_NUM_DESC_WORDS; i++) + REG_WRITE(ah, (AR_WOW_KA_DESC_WORD2 + i * 4), ctl[i]); + + /* for AR9300 family 13 descriptor words */ + if (AR_SREV_9300_20_OR_LATER(ah)) + REG_WRITE(ah, (AR_WOW_KA_DESC_WORD2 + i * 4), ctl[i]); + + data_word[0] = (KAL_FRAME_TYPE << 2) | (KAL_FRAME_SUB_TYPE << 4) | + (KAL_TO_DS << 8) | (KAL_DURATION_ID << 16); + data_word[1] = (ap_mac_addr[3] << 24) | (ap_mac_addr[2] << 16) | + (ap_mac_addr[1] << 8) | (ap_mac_addr[0]); + data_word[2] = (sta_mac_addr[1] << 24) | (sta_mac_addr[0] << 16) | + (ap_mac_addr[5] << 8) | (ap_mac_addr[4]); + data_word[3] = (sta_mac_addr[5] << 24) | (sta_mac_addr[4] << 16) | + (sta_mac_addr[3] << 8) | (sta_mac_addr[2]); + data_word[4] = (ap_mac_addr[3] << 24) | (ap_mac_addr[2] << 16) | + (ap_mac_addr[1] << 8) | (ap_mac_addr[0]); + data_word[5] = (ap_mac_addr[5] << 8) | (ap_mac_addr[4]); + + if (AR_SREV_9462_20_OR_LATER(ah)) { + /* AR9462 2.0 has an extra descriptor word (time based + * discard) compared to other chips */ + REG_WRITE(ah, (AR_WOW_KA_DESC_WORD2 + (12 * 4)), 0); + wow_ka_data_word0 = AR_WOW_TXBUF(13); + } else { + wow_ka_data_word0 = AR_WOW_TXBUF(12); + } + + for (i = 0; i < KAL_NUM_DATA_WORDS; i++) + REG_WRITE(ah, (wow_ka_data_word0 + i*4), data_word[i]); + +} + +void ath9k_hw_wow_apply_pattern(struct ath_hw *ah, u8 *user_pattern, + u8 *user_mask, int pattern_count, + int pattern_len) +{ + int i; + u32 pattern_val, mask_val; + u32 set, clr; + + /* FIXME: should check count by querying the hardware capability */ + if (pattern_count >= MAX_NUM_PATTERN) + return; + + REG_SET_BIT(ah, AR_WOW_PATTERN, BIT(pattern_count)); + + /* set the registers for pattern */ + for (i = 0; i < MAX_PATTERN_SIZE; i += 4) { + memcpy(&pattern_val, user_pattern, 4); + REG_WRITE(ah, (AR_WOW_TB_PATTERN(pattern_count) + i), + pattern_val); + user_pattern += 4; + } + + /* set the registers for mask */ + for (i = 0; i < MAX_PATTERN_MASK_SIZE; i += 4) { + memcpy(&mask_val, user_mask, 4); + REG_WRITE(ah, (AR_WOW_TB_MASK(pattern_count) + i), mask_val); + user_mask += 4; + } + + /* set the pattern length to be matched + * + * AR_WOW_LENGTH1_REG1 + * bit 31:24 pattern 0 length + * bit 23:16 pattern 1 length + * bit 15:8 pattern 2 length + * bit 7:0 pattern 3 length + * + * AR_WOW_LENGTH1_REG2 + * bit 31:24 pattern 4 length + * bit 23:16 pattern 5 length + * bit 15:8 pattern 6 length + * bit 7:0 pattern 7 length + * + * the below logic writes out the new + * pattern length for the corresponding + * pattern_count, while masking out the + * other fields + */ + + ah->wow_event_mask |= BIT(pattern_count + AR_WOW_PAT_FOUND_SHIFT); + + if (!AR_SREV_9285_12_OR_LATER(ah)) + return; + + if (pattern_count < 4) { + /* Pattern 0-3 uses AR_WOW_LENGTH1 register */ + set = (pattern_len & AR_WOW_LENGTH_MAX) << + AR_WOW_LEN1_SHIFT(pattern_count); + clr = AR_WOW_LENGTH1_MASK(pattern_count); + REG_RMW(ah, AR_WOW_LENGTH1, set, clr); + } else { + /* Pattern 4-7 uses AR_WOW_LENGTH2 register */ + set = (pattern_len & AR_WOW_LENGTH_MAX) << + AR_WOW_LEN2_SHIFT(pattern_count); + clr = AR_WOW_LENGTH2_MASK(pattern_count); + REG_RMW(ah, AR_WOW_LENGTH2, set, clr); + } + +} +EXPORT_SYMBOL(ath9k_hw_wow_apply_pattern); + +u32 ath9k_hw_wow_wakeup(struct ath_hw *ah) +{ + u32 wow_status = 0; + u32 val = 0, rval; + /* + * read the WoW status register to know + * the wakeup reason + */ + rval = REG_READ(ah, AR_WOW_PATTERN); + val = AR_WOW_STATUS(rval); + + /* + * 
mask only the WoW events that we have enabled. Sometimes + * we have spurious WoW events from the AR_WOW_PATTERN + * register. This mask will clean it up. + */ + + val &= ah->wow_event_mask; + + if (val) { + + if (val & AR_WOW_MAGIC_PAT_FOUND) + wow_status |= AH_WOW_MAGIC_PATTERN_EN; + + if (AR_WOW_PATTERN_FOUND(val)) + wow_status |= AH_WOW_USER_PATTERN_EN; + + if (val & AR_WOW_KEEP_ALIVE_FAIL) + wow_status |= AH_WOW_LINK_CHANGE; + + if (val & AR_WOW_BEACON_FAIL) + wow_status |= AH_WOW_BEACON_MISS; + + } + + /* + * set and clear WOW_PME_CLEAR registers for the chip to + * generate next wow signal. + * disable D3 before accessing other registers ? + */ + + /* do we need to check the bit value 0x01000000 (7-10) ?? */ + REG_RMW(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_WOW_PME_CLR, + AR_PMCTRL_PWR_STATE_D1D3); + + /* + * clear all events + */ + REG_WRITE(ah, AR_WOW_PATTERN, + AR_WOW_CLEAR_EVENTS(REG_READ(ah, AR_WOW_PATTERN))); + + /* + * tie reset register for AR9002 family of chipsets + * NB: not tieing it back might have some repurcussions. + */ + + if (!AR_SREV_9300_20_OR_LATER(ah)) { + REG_SET_BIT(ah, AR_WA, AR_WA_UNTIE_RESET_EN | + AR_WA_POR_SHORT | AR_WA_RESET_EN); + } + + + /* + * restore the beacon threshold to init value + */ + REG_WRITE(ah, AR_RSSI_THR, INIT_RSSI_THR); + + /* + * Restore the way the PCI-E reset, Power-On-Reset, external + * PCIE_POR_SHORT pins are tied to its original value. + * Previously just before WoW sleep, we untie the PCI-E + * reset to our Chip's Power On Reset so that any PCI-E + * reset from the bus will not reset our chip + */ + + if (AR_SREV_9280_20_OR_LATER(ah) && ah->is_pciexpress) + ath9k_hw_configpcipowersave(ah, false); + + ah->wow_event_mask = 0; + + return wow_status; +} +EXPORT_SYMBOL(ath9k_hw_wow_wakeup); + +void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable) +{ + u32 wow_event_mask; + u32 set, clr; + + /* + * wow_event_mask is a mask to the AR_WOW_PATTERN register to + * indicate which WoW events we have enabled. The WoW events + * are from the 'pattern_enable' in this function and + * 'pattern_count' of ath9k_hw_wow_apply_pattern() + */ + + wow_event_mask = ah->wow_event_mask; + + /* + * Untie Power-on-Reset from the PCI-E-Reset. When we are in + * WOW sleep, we do want the Reset from the PCI-E to disturb + * our hw state + */ + + if (ah->is_pciexpress) { + + /* + * we need to untie the internal POR (power-on-reset) + * to the external PCI-E reset. We also need to tie + * the PCI-E Phy reset to the PCI-E reset. + */ + + if (AR_SREV_9300_20_OR_LATER(ah)) { + set = AR_WA_RESET_EN | AR_WA_POR_SHORT; + clr = AR_WA_UNTIE_RESET_EN | AR_WA_D3_L1_DISABLE; + REG_RMW(ah, AR_WA, set, clr); + } else { + if (AR_SREV_9285(ah) || AR_SREV_9287(ah)) + set = AR9285_WA_DEFAULT; + else + set = AR9280_WA_DEFAULT; + + /* + * In AR9280 and AR9285, bit 14 in WA register + * (disable L1) should only be set when device + * enters D3 state and be cleared when device + * comes back to D0 + */ + + if (ah->config.pcie_waen & AR_WA_D3_L1_DISABLE) + set |= AR_WA_D3_L1_DISABLE; + + clr = AR_WA_UNTIE_RESET_EN; + set |= AR_WA_RESET_EN | AR_WA_POR_SHORT; + REG_RMW(ah, AR_WA, set, clr); + + /* + * for WoW sleep, we reprogram the SerDes so that the + * PLL and CLK REQ are both enabled. This uses more + * power but otherwise WoW sleep is unstable and the + * chip may disappear. 
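
ath9k_hw_wow_apply_pattern() above packs eight 8-bit pattern lengths into two 32-bit registers, four per register, and updates a single field with a read-modify-write built from the AR_WOW_LEN*_SHIFT/MASK helpers. The shift arithmetic is easy to get wrong, so here is a standalone sketch of just that packing, with the register I/O replaced by plain variables:

#include <stdint.h>
#include <stdio.h>

#define WOW_LENGTH_MAX     0xffu
/* Pattern 0 lives in bits 31:24, pattern 3 in bits 7:0 (same layout as AR_WOW_LENGTH1). */
#define LEN1_SHIFT(i)      ((0x3 - ((i) & 0x3)) << 0x3)
#define LEN1_MASK(i)       (WOW_LENGTH_MAX << LEN1_SHIFT(i))
#define LEN2_SHIFT(i)      ((0x7 - ((i) & 0x7)) << 0x3)
#define LEN2_MASK(i)       (WOW_LENGTH_MAX << LEN2_SHIFT(i))

/* Read-modify-write one 8-bit length field, leaving the other three intact. */
static void set_pattern_len(uint32_t *len1, uint32_t *len2,
                            unsigned int pattern, unsigned int len)
{
        if (pattern < 4)
                *len1 = (*len1 & ~LEN1_MASK(pattern)) |
                        ((len & WOW_LENGTH_MAX) << LEN1_SHIFT(pattern));
        else
                *len2 = (*len2 & ~LEN2_MASK(pattern)) |
                        ((len & WOW_LENGTH_MAX) << LEN2_SHIFT(pattern));
}

int main(void)
{
        uint32_t len1 = 0, len2 = 0;

        set_pattern_len(&len1, &len2, 0, 0x28);   /* pattern 0 -> bits 31:24 of LENGTH1 */
        set_pattern_len(&len1, &len2, 3, 0x10);   /* pattern 3 -> bits  7:0 of LENGTH1 */
        set_pattern_len(&len1, &len2, 5, 0x3c);   /* pattern 5 -> LENGTH2 */
        printf("LENGTH1=0x%08x LENGTH2=0x%08x\n", (unsigned)len1, (unsigned)len2);
        return 0;
}
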
+ */ + + if (AR_SREV_9285_12_OR_LATER(ah)) + ath9k_hw_config_serdes_wow_sleep(ah); + + } + } + + /* + * set the power states appropriately and enable PME + */ + set = AR_PMCTRL_HOST_PME_EN | AR_PMCTRL_PWR_PM_CTRL_ENA | + AR_PMCTRL_AUX_PWR_DET | AR_PMCTRL_WOW_PME_CLR; + + /* + * set and clear WOW_PME_CLEAR registers for the chip + * to generate next wow signal. + */ + REG_SET_BIT(ah, AR_PCIE_PM_CTRL, set); + clr = AR_PMCTRL_WOW_PME_CLR; + REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, clr); + + /* + * Setup for: + * - beacon misses + * - magic pattern + * - keep alive timeout + * - pattern matching + */ + + /* + * Program default values for pattern backoff, aifs/slot/KAL count, + * beacon miss timeout, KAL timeout, etc. + */ + + set = AR_WOW_BACK_OFF_SHIFT(AR_WOW_PAT_BACKOFF); + REG_SET_BIT(ah, AR_WOW_PATTERN, set); + + set = AR_WOW_AIFS_CNT(AR_WOW_CNT_AIFS_CNT) | + AR_WOW_SLOT_CNT(AR_WOW_CNT_SLOT_CNT) | + AR_WOW_KEEP_ALIVE_CNT(AR_WOW_CNT_KA_CNT); + REG_SET_BIT(ah, AR_WOW_COUNT, set); + + if (pattern_enable & AH_WOW_BEACON_MISS) + set = AR_WOW_BEACON_TIMO; + /* We are not using beacon miss, program a large value */ + else + set = AR_WOW_BEACON_TIMO_MAX; + + REG_WRITE(ah, AR_WOW_BCN_TIMO, set); + + /* + * Keep alive timo in ms except AR9280 + */ + if (!pattern_enable || AR_SREV_9280(ah)) + set = AR_WOW_KEEP_ALIVE_NEVER; + else + set = KAL_TIMEOUT * 32; + + REG_WRITE(ah, AR_WOW_KEEP_ALIVE_TIMO, set); + + /* + * Keep alive delay in us. based on 'power on clock', + * therefore in usec + */ + set = KAL_DELAY * 1000; + REG_WRITE(ah, AR_WOW_KEEP_ALIVE_DELAY, set); + + /* + * Create keep alive pattern to respond to beacons + */ + ath9k_wow_create_keep_alive_pattern(ah); + + /* + * Configure MAC WoW Registers + */ + + set = 0; + /* Send keep alive timeouts anyway */ + clr = AR_WOW_KEEP_ALIVE_AUTO_DIS; + + if (pattern_enable & AH_WOW_LINK_CHANGE) + wow_event_mask |= AR_WOW_KEEP_ALIVE_FAIL; + else + set = AR_WOW_KEEP_ALIVE_FAIL_DIS; + + /* + * FIXME: For now disable keep alive frame + * failure. This seems to sometimes trigger + * unnecessary wake up with AR9485 chipsets. + */ + set = AR_WOW_KEEP_ALIVE_FAIL_DIS; + + REG_RMW(ah, AR_WOW_KEEP_ALIVE, set, clr); + + + /* + * we are relying on a bmiss failure. ensure we have + * enough threshold to prevent false positives + */ + REG_RMW_FIELD(ah, AR_RSSI_THR, AR_RSSI_THR_BM_THR, + AR_WOW_BMISSTHRESHOLD); + + set = 0; + clr = 0; + + if (pattern_enable & AH_WOW_BEACON_MISS) { + set = AR_WOW_BEACON_FAIL_EN; + wow_event_mask |= AR_WOW_BEACON_FAIL; + } else { + clr = AR_WOW_BEACON_FAIL_EN; + } + + REG_RMW(ah, AR_WOW_BCN_EN, set, clr); + + set = 0; + clr = 0; + /* + * Enable the magic packet registers + */ + if (pattern_enable & AH_WOW_MAGIC_PATTERN_EN) { + set = AR_WOW_MAGIC_EN; + wow_event_mask |= AR_WOW_MAGIC_PAT_FOUND; + } else { + clr = AR_WOW_MAGIC_EN; + } + set |= AR_WOW_MAC_INTR_EN; + REG_RMW(ah, AR_WOW_PATTERN, set, clr); + + /* + * For AR9285 and later version of chipsets + * enable WoW pattern match for packets less + * than 256 bytes for all patterns + */ + if (AR_SREV_9285_12_OR_LATER(ah)) + REG_WRITE(ah, AR_WOW_PATTERN_MATCH_LT_256B, + AR_WOW_PATTERN_SUPPORTED); + + /* + * Set the power states appropriately and enable PME + */ + clr = 0; + set = AR_PMCTRL_PWR_STATE_D1D3 | AR_PMCTRL_HOST_PME_EN | + AR_PMCTRL_PWR_PM_CTRL_ENA; + /* + * This is needed for AR9300 chipsets to wake-up + * the host. 
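
ath9k_hw_wow_enable() above accumulates ah->wow_event_mask from whichever wake triggers are armed, and ath9k_hw_wow_wakeup() later ANDs the AR_WOW_PATTERN status against that mask so spurious status bits are ignored when reporting the wake reason. A compact userspace model of that enable/decode pair; the status bit values are taken from the reg.h definitions added above, while the trigger flags are simplified stand-ins for the AH_WOW_* enum:

#include <stdint.h>
#include <stdio.h>

/* Status/event bits as defined in reg.h above. */
#define WOW_PAT_FOUND_SHIFT   8
#define WOW_MAGIC_PAT_FOUND   0x00020000u
#define WOW_KEEP_ALIVE_FAIL   0x00100000u
#define WOW_BEACON_FAIL       0x00200000u

/* Simplified trigger flags (stand-ins for AH_WOW_*). */
#define TRIG_MAGIC   0x1u
#define TRIG_BEACON  0x2u
#define TRIG_LINK    0x4u

static uint32_t build_event_mask(uint32_t triggers, uint32_t pattern_bitmap)
{
        uint32_t mask = pattern_bitmap << WOW_PAT_FOUND_SHIFT;

        if (triggers & TRIG_MAGIC)
                mask |= WOW_MAGIC_PAT_FOUND;
        if (triggers & TRIG_BEACON)
                mask |= WOW_BEACON_FAIL;
        if (triggers & TRIG_LINK)
                mask |= WOW_KEEP_ALIVE_FAIL;
        return mask;
}

static const char *wakeup_reason(uint32_t status_reg, uint32_t event_mask)
{
        uint32_t val = status_reg & event_mask;   /* drop spurious status bits */

        if (val & WOW_MAGIC_PAT_FOUND)
                return "magic packet";
        if (val & (0xffu << WOW_PAT_FOUND_SHIFT))
                return "user pattern";
        if (val & WOW_KEEP_ALIVE_FAIL)
                return "link change";
        if (val & WOW_BEACON_FAIL)
                return "beacon miss";
        return "unknown";
}

int main(void)
{
        uint32_t mask = build_event_mask(TRIG_MAGIC | TRIG_BEACON, 0x3);

        printf("reason: %s\n", wakeup_reason(WOW_BEACON_FAIL, mask));
        printf("reason: %s\n", wakeup_reason(WOW_KEEP_ALIVE_FAIL, mask)); /* masked out */
        return 0;
}
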
+ */ + if (AR_SREV_9300_20_OR_LATER(ah)) + clr = AR_PCIE_PM_CTRL_ENA; + + REG_RMW(ah, AR_PCIE_PM_CTRL, set, clr); + + if (AR_SREV_9462(ah)) { + /* + * this is needed to prevent the chip waking up + * the host within 3-4 seconds with certain + * platform/BIOS. The fix is to enable + * D1 & D3 to match original definition and + * also match the OTP value. Anyway this + * is more related to SW WOW. + */ + clr = AR_PMCTRL_PWR_STATE_D1D3; + REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, clr); + + set = AR_PMCTRL_PWR_STATE_D1D3_REAL; + REG_SET_BIT(ah, AR_PCIE_PM_CTRL, set); + } + + + + REG_CLR_BIT(ah, AR_STA_ID1, AR_STA_ID1_PRESERVE_SEQNUM); + + if (AR_SREV_9300_20_OR_LATER(ah)) { + /* to bring down WOW power low margin */ + set = BIT(13); + REG_SET_BIT(ah, AR_PCIE_PHY_REG3, set); + /* HW WoW */ + clr = BIT(5); + REG_CLR_BIT(ah, AR_PCU_MISC_MODE3, clr); + } + + ath9k_hw_set_powermode_wow_sleep(ah); + ah->wow_event_mask = wow_event_mask; +} +EXPORT_SYMBOL(ath9k_hw_wow_enable); diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c index 4d571394c7a8..2c9da6b2ecb1 100644 --- a/drivers/net/wireless/ath/ath9k/xmit.c +++ b/drivers/net/wireless/ath/ath9k/xmit.c @@ -29,6 +29,8 @@ #define HT_LTF(_ns) (4 * (_ns)) #define SYMBOL_TIME(_ns) ((_ns) << 2) /* ns * 4 us */ #define SYMBOL_TIME_HALFGI(_ns) (((_ns) * 18 + 4) / 5) /* ns * 3.6 us */ +#define TIME_SYMBOLS(t) ((t) >> 2) +#define TIME_SYMBOLS_HALFGI(t) (((t) * 5 - 4) / 18) #define NUM_SYMBOLS_PER_USEC(_usec) (_usec >> 2) #define NUM_SYMBOLS_PER_USEC_HALFGI(_usec) (((_usec*5)-4)/18) @@ -74,50 +76,23 @@ enum { MCS_HT40_SGI, }; -static int ath_max_4ms_framelen[4][32] = { - [MCS_HT20] = { - 3212, 6432, 9648, 12864, 19300, 25736, 28952, 32172, - 6424, 12852, 19280, 25708, 38568, 51424, 57852, 64280, - 9628, 19260, 28896, 38528, 57792, 65532, 65532, 65532, - 12828, 25656, 38488, 51320, 65532, 65532, 65532, 65532, - }, - [MCS_HT20_SGI] = { - 3572, 7144, 10720, 14296, 21444, 28596, 32172, 35744, - 7140, 14284, 21428, 28568, 42856, 57144, 64288, 65532, - 10700, 21408, 32112, 42816, 64228, 65532, 65532, 65532, - 14256, 28516, 42780, 57040, 65532, 65532, 65532, 65532, - }, - [MCS_HT40] = { - 6680, 13360, 20044, 26724, 40092, 53456, 60140, 65532, - 13348, 26700, 40052, 53400, 65532, 65532, 65532, 65532, - 20004, 40008, 60016, 65532, 65532, 65532, 65532, 65532, - 26644, 53292, 65532, 65532, 65532, 65532, 65532, 65532, - }, - [MCS_HT40_SGI] = { - 7420, 14844, 22272, 29696, 44544, 59396, 65532, 65532, - 14832, 29668, 44504, 59340, 65532, 65532, 65532, 65532, - 22232, 44464, 65532, 65532, 65532, 65532, 65532, 65532, - 29616, 59232, 65532, 65532, 65532, 65532, 65532, 65532, - } -}; - /*********************/ /* Aggregation logic */ /*********************/ -static void ath_txq_lock(struct ath_softc *sc, struct ath_txq *txq) +void ath_txq_lock(struct ath_softc *sc, struct ath_txq *txq) __acquires(&txq->axq_lock) { spin_lock_bh(&txq->axq_lock); } -static void ath_txq_unlock(struct ath_softc *sc, struct ath_txq *txq) +void ath_txq_unlock(struct ath_softc *sc, struct ath_txq *txq) __releases(&txq->axq_lock) { spin_unlock_bh(&txq->axq_lock); } -static void ath_txq_unlock_complete(struct ath_softc *sc, struct ath_txq *txq) +void ath_txq_unlock_complete(struct ath_softc *sc, struct ath_txq *txq) __releases(&txq->axq_lock) { struct sk_buff_head q; @@ -614,10 +589,8 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq, rcu_read_unlock(); - if (needreset) { - RESET_STAT_INC(sc, RESET_TYPE_TX_ERROR); - ieee80211_queue_work(sc->hw, 
&sc->hw_reset_work); - } + if (needreset) + ath9k_queue_reset(sc, RESET_TYPE_TX_ERROR); } static bool ath_lookup_legacy(struct ath_buf *bf) @@ -650,6 +623,7 @@ static u32 ath_lookup_rate(struct ath_softc *sc, struct ath_buf *bf, struct ieee80211_tx_rate *rates; u32 max_4ms_framelen, frmlen; u16 aggr_limit, bt_aggr_limit, legacy = 0; + int q = tid->ac->txq->mac80211_qnum; int i; skb = bf->bf_mpdu; @@ -658,8 +632,7 @@ static u32 ath_lookup_rate(struct ath_softc *sc, struct ath_buf *bf, /* * Find the lowest frame length among the rate series that will have a - * 4ms transmit duration. - * TODO - TXOP limit needs to be considered. + * 4ms (or TXOP limited) transmit duration. */ max_4ms_framelen = ATH_AMPDU_LIMIT_MAX; @@ -682,7 +655,7 @@ static u32 ath_lookup_rate(struct ath_softc *sc, struct ath_buf *bf, if (rates[i].flags & IEEE80211_TX_RC_SHORT_GI) modeidx++; - frmlen = ath_max_4ms_framelen[modeidx][rates[i].idx]; + frmlen = sc->tx.max_aggr_framelen[q][modeidx][rates[i].idx]; max_4ms_framelen = min(max_4ms_framelen, frmlen); } @@ -929,6 +902,44 @@ static u32 ath_pkt_duration(struct ath_softc *sc, u8 rix, int pktlen, return duration; } +static int ath_max_framelen(int usec, int mcs, bool ht40, bool sgi) +{ + int streams = HT_RC_2_STREAMS(mcs); + int symbols, bits; + int bytes = 0; + + symbols = sgi ? TIME_SYMBOLS_HALFGI(usec) : TIME_SYMBOLS(usec); + bits = symbols * bits_per_symbol[mcs % 8][ht40] * streams; + bits -= OFDM_PLCP_BITS; + bytes = bits / 8; + bytes -= L_STF + L_LTF + L_SIG + HT_SIG + HT_STF + HT_LTF(streams); + if (bytes > 65532) + bytes = 65532; + + return bytes; +} + +void ath_update_max_aggr_framelen(struct ath_softc *sc, int queue, int txop) +{ + u16 *cur_ht20, *cur_ht20_sgi, *cur_ht40, *cur_ht40_sgi; + int mcs; + + /* 4ms is the default (and maximum) duration */ + if (!txop || txop > 4096) + txop = 4096; + + cur_ht20 = sc->tx.max_aggr_framelen[queue][MCS_HT20]; + cur_ht20_sgi = sc->tx.max_aggr_framelen[queue][MCS_HT20_SGI]; + cur_ht40 = sc->tx.max_aggr_framelen[queue][MCS_HT40]; + cur_ht40_sgi = sc->tx.max_aggr_framelen[queue][MCS_HT40_SGI]; + for (mcs = 0; mcs < 32; mcs++) { + cur_ht20[mcs] = ath_max_framelen(txop, mcs, false, false); + cur_ht20_sgi[mcs] = ath_max_framelen(txop, mcs, false, true); + cur_ht40[mcs] = ath_max_framelen(txop, mcs, true, false); + cur_ht40_sgi[mcs] = ath_max_framelen(txop, mcs, true, true); + } +} + static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf, struct ath_tx_info *info, int len) { @@ -1165,6 +1176,7 @@ int ath_tx_aggr_start(struct ath_softc *sc, struct ieee80211_sta *sta, { struct ath_atx_tid *txtid; struct ath_node *an; + u8 density; an = (struct ath_node *)sta->drv_priv; txtid = ATH_AN_2_TID(an, tid); @@ -1172,6 +1184,17 @@ int ath_tx_aggr_start(struct ath_softc *sc, struct ieee80211_sta *sta, if (txtid->state & (AGGR_CLEANUP | AGGR_ADDBA_COMPLETE)) return -EAGAIN; + /* update ampdu factor/density, they may have changed. This may happen + * in HT IBSS when a beacon with HT-info is received after the station + * has already been added. 
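
The xmit.c hunk above replaces the static ath_max_4ms_framelen table with ath_max_framelen(), which derives the largest aggregate that fits in a queue's TXOP from the number of OFDM symbols that fit in the allowed airtime and the per-MCS bits-per-symbol. A simplified standalone version of that calculation; it ignores the driver's preamble correction, and the bits-per-symbol values are the standard 802.11n single-stream numbers scaled by stream count:

#include <stdio.h>

#define OFDM_PLCP_BITS  22
#define MAX_AMPDU_BYTES 65532

/* Data bits per 4us OFDM symbol for MCS0-7, [mcs][ht40], single spatial stream. */
static const int bits_per_symbol[8][2] = {
        {  26,  54 }, {  52, 108 }, {  78, 162 }, { 104, 216 },
        { 156, 324 }, { 208, 432 }, { 234, 486 }, { 260, 540 },
};

/* Largest frame (bytes) that fits into 'usec' of airtime at a given MCS.
 * Simplified: training-field/preamble overhead is not subtracted here. */
static int max_framelen(int usec, int mcs, int ht40, int sgi)
{
        int streams = mcs / 8 + 1;
        int symbols = sgi ? (usec * 5 - 4) / 18   /* 3.6us symbols (short GI) */
                          : usec / 4;             /* 4us symbols */
        int bits  = symbols * bits_per_symbol[mcs % 8][ht40] * streams
                    - OFDM_PLCP_BITS;
        int bytes = bits / 8;

        return bytes > MAX_AMPDU_BYTES ? MAX_AMPDU_BYTES : bytes;
}

int main(void)
{
        /* Default 4ms TXOP: MCS7 HT40 short-GI vs. MCS0 HT20 long-GI. */
        printf("MCS7/HT40/SGI: %d bytes\n", max_framelen(4096, 7, 1, 1));
        printf("MCS0/HT20:     %d bytes\n", max_framelen(4096, 0, 0, 0));
        return 0;
}
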
+ */ + if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_HT) { + an->maxampdu = 1 << (IEEE80211_HT_MAX_AMPDU_FACTOR + + sta->ht_cap.ampdu_factor); + density = ath9k_parse_mpdudensity(sta->ht_cap.ampdu_density); + an->mpdudensity = density; + } + txtid->state |= AGGR_ADDBA_PROGRESS; txtid->paused = true; *ssn = txtid->seq_start = txtid->seq_next; @@ -1391,16 +1414,6 @@ int ath_txq_update(struct ath_softc *sc, int qnum, int error = 0; struct ath9k_tx_queue_info qi; - if (qnum == sc->beacon.beaconq) { - /* - * XXX: for beacon queue, we just save the parameter. - * It will be picked up by ath_beaconq_config when - * it's necessary. - */ - sc->beacon.beacon_qi = *qinfo; - return 0; - } - BUG_ON(sc->tx.txq[qnum].axq_qnum != qnum); ath9k_hw_get_txq_props(ah, qnum, &qi); @@ -1526,7 +1539,7 @@ bool ath_drain_all_txq(struct ath_softc *sc, bool retry_tx) int i; u32 npend = 0; - if (sc->sc_flags & SC_OP_INVALID) + if (test_bit(SC_OP_INVALID, &sc->sc_flags)) return true; ath9k_hw_abort_tx_dma(ah); @@ -1574,7 +1587,8 @@ void ath_txq_schedule(struct ath_softc *sc, struct ath_txq *txq) struct ath_atx_ac *ac, *ac_tmp, *last_ac; struct ath_atx_tid *tid, *last_tid; - if (work_pending(&sc->hw_reset_work) || list_empty(&txq->axq_acq) || + if (test_bit(SC_OP_HW_RESET, &sc->sc_flags) || + list_empty(&txq->axq_acq) || txq->axq_ampdu_depth >= ATH_AGGR_MIN_QDEPTH) return; @@ -1976,7 +1990,8 @@ int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb, ath_txq_lock(sc, txq); if (txq == sc->tx.txq_map[q] && - ++txq->pending_frames > ATH_MAX_QDEPTH && !txq->stopped) { + ++txq->pending_frames > sc->tx.txq_max_pending[q] && + !txq->stopped) { ieee80211_stop_queue(sc->hw, q); txq->stopped = true; } @@ -1999,6 +2014,7 @@ static void ath_tx_complete(struct ath_softc *sc, struct sk_buff *skb, struct ath_common *common = ath9k_hw_common(sc->sc_ah); struct ieee80211_hdr * hdr = (struct ieee80211_hdr *)skb->data; int q, padpos, padsize; + unsigned long flags; ath_dbg(common, XMIT, "TX complete: skb: %p\n", skb); @@ -2017,6 +2033,7 @@ static void ath_tx_complete(struct ath_softc *sc, struct sk_buff *skb, skb_pull(skb, padsize); } + spin_lock_irqsave(&sc->sc_pm_lock, flags); if ((sc->ps_flags & PS_WAIT_FOR_TX_ACK) && !txq->axq_depth) { sc->ps_flags &= ~PS_WAIT_FOR_TX_ACK; ath_dbg(common, PS, @@ -2026,13 +2043,15 @@ static void ath_tx_complete(struct ath_softc *sc, struct sk_buff *skb, PS_WAIT_FOR_PSPOLL_DATA | PS_WAIT_FOR_TX_ACK)); } + spin_unlock_irqrestore(&sc->sc_pm_lock, flags); q = skb_get_queue_mapping(skb); if (txq == sc->tx.txq_map[q]) { if (WARN_ON(--txq->pending_frames < 0)) txq->pending_frames = 0; - if (txq->stopped && txq->pending_frames < ATH_MAX_QDEPTH) { + if (txq->stopped && + txq->pending_frames < sc->tx.txq_max_pending[q]) { ieee80211_wake_queue(sc->hw, q); txq->stopped = false; } @@ -2176,7 +2195,7 @@ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq) ath_txq_lock(sc, txq); for (;;) { - if (work_pending(&sc->hw_reset_work)) + if (test_bit(SC_OP_HW_RESET, &sc->sc_flags)) break; if (list_empty(&txq->axq_q)) { @@ -2236,46 +2255,6 @@ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq) ath_txq_unlock_complete(sc, txq); } -static void ath_tx_complete_poll_work(struct work_struct *work) -{ - struct ath_softc *sc = container_of(work, struct ath_softc, - tx_complete_work.work); - struct ath_txq *txq; - int i; - bool needreset = false; -#ifdef CONFIG_ATH9K_DEBUGFS - sc->tx_complete_poll_work_seen++; -#endif - - for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) - if (ATH_TXQ_SETUP(sc, i)) { - 
txq = &sc->tx.txq[i]; - ath_txq_lock(sc, txq); - if (txq->axq_depth) { - if (txq->axq_tx_inprogress) { - needreset = true; - ath_txq_unlock(sc, txq); - break; - } else { - txq->axq_tx_inprogress = true; - } - } - ath_txq_unlock_complete(sc, txq); - } - - if (needreset) { - ath_dbg(ath9k_hw_common(sc->sc_ah), RESET, - "tx hung, resetting the chip\n"); - RESET_STAT_INC(sc, RESET_TYPE_TX_HANG); - ieee80211_queue_work(sc->hw, &sc->hw_reset_work); - } - - ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work, - msecs_to_jiffies(ATH_TX_COMPLETE_POLL_INT)); -} - - - void ath_tx_tasklet(struct ath_softc *sc) { struct ath_hw *ah = sc->sc_ah; @@ -2299,7 +2278,7 @@ void ath_tx_edma_tasklet(struct ath_softc *sc) int status; for (;;) { - if (work_pending(&sc->hw_reset_work)) + if (test_bit(SC_OP_HW_RESET, &sc->sc_flags)) break; status = ath9k_hw_txprocdesc(ah, NULL, (void *)&ts); diff --git a/drivers/net/wireless/ath/carl9170/carl9170.h b/drivers/net/wireless/ath/carl9170/carl9170.h index 0cea20e3e250..376be11161c0 100644 --- a/drivers/net/wireless/ath/carl9170/carl9170.h +++ b/drivers/net/wireless/ath/carl9170/carl9170.h @@ -289,6 +289,7 @@ struct ar9170 { unsigned int mem_block_size; unsigned int rx_size; unsigned int tx_seq_table; + bool ba_filter; } fw; /* interface configuration combinations */ @@ -425,6 +426,10 @@ struct ar9170 { struct sk_buff *rx_failover; int rx_failover_missing; + /* FIFO for collecting outstanding BlockAckRequest */ + struct list_head bar_list[__AR9170_NUM_TXQ]; + spinlock_t bar_list_lock[__AR9170_NUM_TXQ]; + #ifdef CONFIG_CARL9170_WPC struct { bool pbc_state; @@ -468,6 +473,12 @@ enum carl9170_ps_off_override_reasons { PS_OFF_BCN = BIT(1), }; +struct carl9170_bar_list_entry { + struct list_head list; + struct rcu_head head; + struct sk_buff *skb; +}; + struct carl9170_ba_stats { u8 ampdu_len; u8 ampdu_ack_len; diff --git a/drivers/net/wireless/ath/carl9170/cmd.c b/drivers/net/wireless/ath/carl9170/cmd.c index 195dc6538110..39a63874b275 100644 --- a/drivers/net/wireless/ath/carl9170/cmd.c +++ b/drivers/net/wireless/ath/carl9170/cmd.c @@ -138,7 +138,7 @@ int carl9170_reboot(struct ar9170 *ar) if (!cmd) return -ENOMEM; - err = __carl9170_exec_cmd(ar, (struct carl9170_cmd *)cmd, true); + err = __carl9170_exec_cmd(ar, cmd, true); return err; } diff --git a/drivers/net/wireless/ath/carl9170/fw.c b/drivers/net/wireless/ath/carl9170/fw.c index 5c73c03872f3..c5ca6f1f5836 100644 --- a/drivers/net/wireless/ath/carl9170/fw.c +++ b/drivers/net/wireless/ath/carl9170/fw.c @@ -307,6 +307,9 @@ static int carl9170_fw(struct ar9170 *ar, const __u8 *data, size_t len) if (SUPP(CARL9170FW_WOL)) device_set_wakeup_enable(&ar->udev->dev, true); + if (SUPP(CARL9170FW_RX_BA_FILTER)) + ar->fw.ba_filter = true; + if_comb_types = BIT(NL80211_IFTYPE_STATION) | BIT(NL80211_IFTYPE_P2P_CLIENT); diff --git a/drivers/net/wireless/ath/carl9170/fwdesc.h b/drivers/net/wireless/ath/carl9170/fwdesc.h index 6d9c0891ce7f..66848d47c88e 100644 --- a/drivers/net/wireless/ath/carl9170/fwdesc.h +++ b/drivers/net/wireless/ath/carl9170/fwdesc.h @@ -78,6 +78,9 @@ enum carl9170fw_feature_list { /* HW (ANI, CCA, MIB) tally counters */ CARL9170FW_HW_COUNTERS, + /* Firmware will pass BA when BARs are queued */ + CARL9170FW_RX_BA_FILTER, + /* KEEP LAST */ __CARL9170FW_FEATURE_NUM }; diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c index 8d2523b3f722..858e58dfc4dc 100644 --- a/drivers/net/wireless/ath/carl9170/main.c +++ b/drivers/net/wireless/ath/carl9170/main.c @@ -949,6 
+949,9 @@ static void carl9170_op_configure_filter(struct ieee80211_hw *hw, if (ar->fw.rx_filter && changed_flags & ar->rx_filter_caps) { u32 rx_filter = 0; + if (!ar->fw.ba_filter) + rx_filter |= CARL9170_RX_FILTER_CTL_OTHER; + if (!(*new_flags & (FIF_FCSFAIL | FIF_PLCPFAIL))) rx_filter |= CARL9170_RX_FILTER_BAD; @@ -1753,6 +1756,9 @@ void *carl9170_alloc(size_t priv_size) for (i = 0; i < ar->hw->queues; i++) { skb_queue_head_init(&ar->tx_status[i]); skb_queue_head_init(&ar->tx_pending[i]); + + INIT_LIST_HEAD(&ar->bar_list[i]); + spin_lock_init(&ar->bar_list_lock[i]); } INIT_WORK(&ar->ps_work, carl9170_ps_work); INIT_WORK(&ar->ping_work, carl9170_ping_work); diff --git a/drivers/net/wireless/ath/carl9170/rx.c b/drivers/net/wireless/ath/carl9170/rx.c index 84b22eec7abd..6f6a34155667 100644 --- a/drivers/net/wireless/ath/carl9170/rx.c +++ b/drivers/net/wireless/ath/carl9170/rx.c @@ -161,7 +161,7 @@ static void carl9170_cmd_callback(struct ar9170 *ar, u32 len, void *buffer) void carl9170_handle_command_response(struct ar9170 *ar, void *buf, u32 len) { - struct carl9170_rsp *cmd = (void *) buf; + struct carl9170_rsp *cmd = buf; struct ieee80211_vif *vif; if (carl9170_check_sequence(ar, cmd->hdr.seq)) @@ -520,7 +520,7 @@ static u8 *carl9170_find_ie(u8 *data, unsigned int len, u8 ie) */ static void carl9170_ps_beacon(struct ar9170 *ar, void *data, unsigned int len) { - struct ieee80211_hdr *hdr = (void *) data; + struct ieee80211_hdr *hdr = data; struct ieee80211_tim_ie *tim_ie; u8 *tim; u8 tim_len; @@ -576,6 +576,53 @@ static void carl9170_ps_beacon(struct ar9170 *ar, void *data, unsigned int len) } } +static void carl9170_ba_check(struct ar9170 *ar, void *data, unsigned int len) +{ + struct ieee80211_bar *bar = (void *) data; + struct carl9170_bar_list_entry *entry; + unsigned int queue; + + if (likely(!ieee80211_is_back(bar->frame_control))) + return; + + if (len <= sizeof(*bar) + FCS_LEN) + return; + + queue = TID_TO_WME_AC(((le16_to_cpu(bar->control) & + IEEE80211_BAR_CTRL_TID_INFO_MASK) >> + IEEE80211_BAR_CTRL_TID_INFO_SHIFT) & 7); + + rcu_read_lock(); + list_for_each_entry_rcu(entry, &ar->bar_list[queue], list) { + struct sk_buff *entry_skb = entry->skb; + struct _carl9170_tx_superframe *super = (void *)entry_skb->data; + struct ieee80211_bar *entry_bar = (void *)super->frame_data; + +#define TID_CHECK(a, b) ( \ + ((a) & cpu_to_le16(IEEE80211_BAR_CTRL_TID_INFO_MASK)) == \ + ((b) & cpu_to_le16(IEEE80211_BAR_CTRL_TID_INFO_MASK))) \ + + if (bar->start_seq_num == entry_bar->start_seq_num && + TID_CHECK(bar->control, entry_bar->control) && + compare_ether_addr(bar->ra, entry_bar->ta) == 0 && + compare_ether_addr(bar->ta, entry_bar->ra) == 0) { + struct ieee80211_tx_info *tx_info; + + tx_info = IEEE80211_SKB_CB(entry_skb); + tx_info->flags |= IEEE80211_TX_STAT_ACK; + + spin_lock_bh(&ar->bar_list_lock[queue]); + list_del_rcu(&entry->list); + spin_unlock_bh(&ar->bar_list_lock[queue]); + kfree_rcu(entry, head); + break; + } + } + rcu_read_unlock(); + +#undef TID_CHECK +} + static bool carl9170_ampdu_check(struct ar9170 *ar, u8 *buf, u8 ms) { __le16 fc; @@ -738,6 +785,8 @@ static void carl9170_handle_mpdu(struct ar9170 *ar, u8 *buf, int len) carl9170_ps_beacon(ar, buf, mpdu_len); + carl9170_ba_check(ar, buf, mpdu_len); + skb = carl9170_rx_copy_data(buf, mpdu_len); if (!skb) goto drop; diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c index aed305177af6..6a8681407a1d 100644 --- a/drivers/net/wireless/ath/carl9170/tx.c +++ 
b/drivers/net/wireless/ath/carl9170/tx.c @@ -277,11 +277,11 @@ static void carl9170_tx_release(struct kref *ref) return; BUILD_BUG_ON( - offsetof(struct ieee80211_tx_info, status.ampdu_ack_len) != 23); + offsetof(struct ieee80211_tx_info, status.ack_signal) != 20); - memset(&txinfo->status.ampdu_ack_len, 0, + memset(&txinfo->status.ack_signal, 0, sizeof(struct ieee80211_tx_info) - - offsetof(struct ieee80211_tx_info, status.ampdu_ack_len)); + offsetof(struct ieee80211_tx_info, status.ack_signal)); if (atomic_read(&ar->tx_total_queued)) ar->tx_schedule = true; @@ -436,6 +436,45 @@ out_rcu: rcu_read_unlock(); } +static void carl9170_tx_bar_status(struct ar9170 *ar, struct sk_buff *skb, + struct ieee80211_tx_info *tx_info) +{ + struct _carl9170_tx_superframe *super = (void *) skb->data; + struct ieee80211_bar *bar = (void *) super->frame_data; + + /* + * Unlike all other frames, the status report for BARs does + * not directly come from the hardware as it is incapable of + * matching a BA to a previously send BAR. + * Instead the RX-path will scan for incoming BAs and set the + * IEEE80211_TX_STAT_ACK if it sees one that was likely + * caused by a BAR from us. + */ + + if (unlikely(ieee80211_is_back_req(bar->frame_control)) && + !(tx_info->flags & IEEE80211_TX_STAT_ACK)) { + struct carl9170_bar_list_entry *entry; + int queue = skb_get_queue_mapping(skb); + + rcu_read_lock(); + list_for_each_entry_rcu(entry, &ar->bar_list[queue], list) { + if (entry->skb == skb) { + spin_lock_bh(&ar->bar_list_lock[queue]); + list_del_rcu(&entry->list); + spin_unlock_bh(&ar->bar_list_lock[queue]); + kfree_rcu(entry, head); + goto out; + } + } + + WARN(1, "bar not found in %d - ra:%pM ta:%pM c:%x ssn:%x\n", + queue, bar->ra, bar->ta, bar->control, + bar->start_seq_num); +out: + rcu_read_unlock(); + } +} + void carl9170_tx_status(struct ar9170 *ar, struct sk_buff *skb, const bool success) { @@ -445,6 +484,8 @@ void carl9170_tx_status(struct ar9170 *ar, struct sk_buff *skb, txinfo = IEEE80211_SKB_CB(skb); + carl9170_tx_bar_status(ar, skb, txinfo); + if (success) txinfo->flags |= IEEE80211_TX_STAT_ACK; else @@ -1265,6 +1306,26 @@ out_rcu: return false; } +static void carl9170_bar_check(struct ar9170 *ar, struct sk_buff *skb) +{ + struct _carl9170_tx_superframe *super = (void *) skb->data; + struct ieee80211_bar *bar = (void *) super->frame_data; + + if (unlikely(ieee80211_is_back_req(bar->frame_control)) && + skb->len >= sizeof(struct ieee80211_bar)) { + struct carl9170_bar_list_entry *entry; + unsigned int queue = skb_get_queue_mapping(skb); + + entry = kmalloc(sizeof(*entry), GFP_ATOMIC); + if (!WARN_ON_ONCE(!entry)) { + entry->skb = skb; + spin_lock_bh(&ar->bar_list_lock[queue]); + list_add_tail_rcu(&entry->list, &ar->bar_list[queue]); + spin_unlock_bh(&ar->bar_list_lock[queue]); + } + } +} + static void carl9170_tx(struct ar9170 *ar) { struct sk_buff *skb; @@ -1287,6 +1348,8 @@ static void carl9170_tx(struct ar9170 *ar) if (unlikely(carl9170_tx_ps_drop(ar, skb))) continue; + carl9170_bar_check(ar, skb); + atomic_inc(&ar->tx_total_pending); q = __carl9170_get_queue(ar, i); diff --git a/drivers/net/wireless/ath/carl9170/version.h b/drivers/net/wireless/ath/carl9170/version.h index e651db856344..2ec3e9191e4d 100644 --- a/drivers/net/wireless/ath/carl9170/version.h +++ b/drivers/net/wireless/ath/carl9170/version.h @@ -1,7 +1,7 @@ #ifndef __CARL9170_SHARED_VERSION_H #define __CARL9170_SHARED_VERSION_H -#define CARL9170FW_VERSION_YEAR 11 -#define CARL9170FW_VERSION_MONTH 8 -#define CARL9170FW_VERSION_DAY 15 -#define 
CARL9170FW_VERSION_GIT "1.9.4" +#define CARL9170FW_VERSION_YEAR 12 +#define CARL9170FW_VERSION_MONTH 7 +#define CARL9170FW_VERSION_DAY 7 +#define CARL9170FW_VERSION_GIT "1.9.6" #endif /* __CARL9170_SHARED_VERSION_H */ diff --git a/drivers/net/wireless/atmel.c b/drivers/net/wireless/atmel.c index d07c0301da6a..4a4e98f71807 100644 --- a/drivers/net/wireless/atmel.c +++ b/drivers/net/wireless/atmel.c @@ -2952,10 +2952,10 @@ static void send_association_request(struct atmel_private *priv, int is_reassoc) /* current AP address - only in reassoc frame */ if (is_reassoc) { memcpy(body.ap, priv->CurrentBSSID, 6); - ssid_el_p = (u8 *)&body.ssid_el_id; + ssid_el_p = &body.ssid_el_id; bodysize = 18 + priv->SSID_size; } else { - ssid_el_p = (u8 *)&body.ap[0]; + ssid_el_p = &body.ap[0]; bodysize = 12 + priv->SSID_size; } diff --git a/drivers/net/wireless/b43/b43.h b/drivers/net/wireless/b43/b43.h index c06b6cb5c91e..7c899fc7ddd0 100644 --- a/drivers/net/wireless/b43/b43.h +++ b/drivers/net/wireless/b43/b43.h @@ -870,13 +870,6 @@ struct b43_wl { * handler, only. This basically is just the IRQ mask register. */ spinlock_t hardirq_lock; - /* The number of queues that were registered with the mac80211 subsystem - * initially. This is a backup copy of hw->queues in case hw->queues has - * to be dynamically lowered at runtime (Firmware does not support QoS). - * hw->queues has to be restored to the original value before unregistering - * from the mac80211 subsystem. */ - u16 mac80211_initially_registered_queues; - /* Set this if we call ieee80211_register_hw() and check if we call * ieee80211_unregister_hw(). */ bool hw_registred; diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c index 1b988f26bdf1..b80352b308d5 100644 --- a/drivers/net/wireless/b43/main.c +++ b/drivers/net/wireless/b43/main.c @@ -2359,6 +2359,8 @@ static int b43_try_request_fw(struct b43_request_fw_context *ctx) if (err) goto err_load; + fw->opensource = (ctx->req_type == B43_FWTYPE_OPENSOURCE); + return 0; err_no_ucode: @@ -2434,6 +2436,10 @@ static void b43_request_firmware(struct work_struct *work) goto out; start_ieee80211: + wl->hw->queues = B43_QOS_QUEUE_NUM; + if (!modparam_qos || dev->fw.opensource) + wl->hw->queues = 1; + err = ieee80211_register_hw(wl->hw); if (err) goto err_one_core_detach; @@ -2537,11 +2543,9 @@ static int b43_upload_microcode(struct b43_wldev *dev) dev->fw.hdr_format = B43_FW_HDR_410; else dev->fw.hdr_format = B43_FW_HDR_351; - dev->fw.opensource = (fwdate == 0xFFFF); + WARN_ON(dev->fw.opensource != (fwdate == 0xFFFF)); - /* Default to use-all-queues. */ - dev->wl->hw->queues = dev->wl->mac80211_initially_registered_queues; - dev->qos_enabled = !!modparam_qos; + dev->qos_enabled = dev->wl->hw->queues > 1; /* Default to firmware/hardware crypto acceleration. */ dev->hwcrypto_enabled = true; @@ -2559,14 +2563,8 @@ static int b43_upload_microcode(struct b43_wldev *dev) /* Disable hardware crypto and fall back to software crypto. */ dev->hwcrypto_enabled = false; } - if (!(fwcapa & B43_FWCAPA_QOS)) { - b43info(dev->wl, "QoS not supported by firmware\n"); - /* Disable QoS. Tweak hw->queues to 1. It will be restored before - * ieee80211_unregister to make sure the networking core can - * properly free possible resources. 
*/ - dev->wl->hw->queues = 1; - dev->qos_enabled = false; - } + /* adding QoS support should use an offline discovery mechanism */ + WARN(fwcapa & B43_FWCAPA_QOS, "QoS in OpenFW not supported\n"); } else { b43info(dev->wl, "Loading firmware version %u.%u " "(20%.2i-%.2i-%.2i %.2i:%.2i:%.2i)\n", @@ -5298,8 +5296,6 @@ static struct b43_wl *b43_wireless_init(struct b43_bus_dev *dev) hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN; - hw->queues = modparam_qos ? B43_QOS_QUEUE_NUM : 1; - wl->mac80211_initially_registered_queues = hw->queues; wl->hw_registred = false; hw->max_rates = 2; SET_IEEE80211_DEV(hw, dev->dev); @@ -5374,10 +5370,6 @@ static void b43_bcma_remove(struct bcma_device *core) B43_WARN_ON(!wl); if (wl->current_dev == wldev && wl->hw_registred) { - /* Restore the queues count before unregistering, because firmware detect - * might have modified it. Restoring is important, so the networking - * stack can properly free resources. */ - wl->hw->queues = wl->mac80211_initially_registered_queues; b43_leds_stop(wldev); ieee80211_unregister_hw(wl->hw); } @@ -5452,10 +5444,6 @@ static void b43_ssb_remove(struct ssb_device *sdev) B43_WARN_ON(!wl); if (wl->current_dev == wldev && wl->hw_registred) { - /* Restore the queues count before unregistering, because firmware detect - * might have modified it. Restoring is important, so the networking - * stack can properly free resources. */ - wl->hw->queues = wl->mac80211_initially_registered_queues; b43_leds_stop(wldev); ieee80211_unregister_hw(wl->hw); } diff --git a/drivers/net/wireless/b43/phy_n.c b/drivers/net/wireless/b43/phy_n.c index 108118820b36..b92bb9c92ad1 100644 --- a/drivers/net/wireless/b43/phy_n.c +++ b/drivers/net/wireless/b43/phy_n.c @@ -1369,7 +1369,7 @@ static void b43_nphy_rev3_rssi_cal(struct b43_wldev *dev) i << 2); b43_nphy_poll_rssi(dev, 2, results[i], 8); } - for (i = 0; i < 4; i++) { + for (i = 0; i < 4; i += 2) { s32 curr; s32 mind = 40; s32 minpoll = 249; @@ -1415,14 +1415,15 @@ static void b43_nphy_rev3_rssi_cal(struct b43_wldev *dev) b43_nphy_scale_offset_rssi(dev, 0, 0, core + 1, 1, i); b43_nphy_poll_rssi(dev, i, poll_results, 8); for (j = 0; j < 4; j++) { - if (j / 2 == core) + if (j / 2 == core) { offset[j] = 232 - poll_results[j]; - if (offset[j] < 0) - offset[j] = -(abs(offset[j] + 4) / 8); - else - offset[j] = (offset[j] + 4) / 8; - b43_nphy_scale_offset_rssi(dev, 0, - offset[2 * core], core + 1, j % 2, i); + if (offset[j] < 0) + offset[j] = -(abs(offset[j] + 4) / 8); + else + offset[j] = (offset[j] + 4) / 8; + b43_nphy_scale_offset_rssi(dev, 0, + offset[2 * core], core + 1, j % 2, i); + } } } } diff --git a/drivers/net/wireless/b43/xmit.c b/drivers/net/wireless/b43/xmit.c index b31ccc02fa21..136510edf3cf 100644 --- a/drivers/net/wireless/b43/xmit.c +++ b/drivers/net/wireless/b43/xmit.c @@ -663,7 +663,7 @@ void b43_rx(struct b43_wldev *dev, struct sk_buff *skb, const void *_rxhdr) u32 uninitialized_var(macstat); u16 chanid; u16 phytype; - int padding; + int padding, rate_idx; memset(&status, 0, sizeof(status)); @@ -766,16 +766,17 @@ void b43_rx(struct b43_wldev *dev, struct sk_buff *skb, const void *_rxhdr) } if (phystat0 & B43_RX_PHYST0_OFDM) - status.rate_idx = b43_plcp_get_bitrate_idx_ofdm(plcp, + rate_idx = b43_plcp_get_bitrate_idx_ofdm(plcp, phytype == B43_PHYTYPE_A); else - status.rate_idx = b43_plcp_get_bitrate_idx_cck(plcp); - if (unlikely(status.rate_idx == -1)) { + rate_idx = b43_plcp_get_bitrate_idx_cck(plcp); + if (unlikely(rate_idx == -1)) { /* PLCP seems to be corrupted. 
* Drop the frame, if we are not interested in corrupted frames. */ if (!(dev->wl->filter_flags & FIF_PLCPFAIL)) goto drop; } + status.rate_idx = rate_idx; status.antenna = !!(phystat0 & B43_RX_PHYST0_ANT); /* diff --git a/drivers/net/wireless/b43legacy/dma.c b/drivers/net/wireless/b43legacy/dma.c index c8baf020c20f..2d3c6644f82d 100644 --- a/drivers/net/wireless/b43legacy/dma.c +++ b/drivers/net/wireless/b43legacy/dma.c @@ -52,7 +52,7 @@ struct b43legacy_dmadesc32 *op32_idx2desc(struct b43legacy_dmaring *ring, desc = ring->descbase; desc = &(desc[slot]); - return (struct b43legacy_dmadesc32 *)desc; + return desc; } static void op32_fill_descriptor(struct b43legacy_dmaring *ring, diff --git a/drivers/net/wireless/b43legacy/main.c b/drivers/net/wireless/b43legacy/main.c index eae691e2f7dd..8156135a0590 100644 --- a/drivers/net/wireless/b43legacy/main.c +++ b/drivers/net/wireless/b43legacy/main.c @@ -1508,7 +1508,7 @@ static void b43legacy_release_firmware(struct b43legacy_wldev *dev) static void b43legacy_print_fw_helptext(struct b43legacy_wl *wl) { - b43legacyerr(wl, "You must go to http://linuxwireless.org/en/users/" + b43legacyerr(wl, "You must go to http://wireless.kernel.org/en/users/" "Drivers/b43#devicefirmware " "and download the correct firmware (version 3).\n"); } diff --git a/drivers/net/wireless/b43legacy/xmit.c b/drivers/net/wireless/b43legacy/xmit.c index a8012f2749ee..b8ffea6f5c64 100644 --- a/drivers/net/wireless/b43legacy/xmit.c +++ b/drivers/net/wireless/b43legacy/xmit.c @@ -269,8 +269,7 @@ static int generate_txhdr_fw3(struct b43legacy_wldev *dev, b43legacy_generate_plcp_hdr((struct b43legacy_plcp_hdr4 *) (&txhdr->plcp), plcp_fragment_len, rate); - b43legacy_generate_plcp_hdr((struct b43legacy_plcp_hdr4 *) - (&txhdr->plcp_fb), plcp_fragment_len, + b43legacy_generate_plcp_hdr(&txhdr->plcp_fb, plcp_fragment_len, rate_fb->hw_value); /* PHY TX Control word */ @@ -340,8 +339,7 @@ static int generate_txhdr_fw3(struct b43legacy_wldev *dev, b43legacy_generate_plcp_hdr((struct b43legacy_plcp_hdr4 *) (&txhdr->rts_plcp), len, rts_rate); - b43legacy_generate_plcp_hdr((struct b43legacy_plcp_hdr4 *) - (&txhdr->rts_plcp_fb), + b43legacy_generate_plcp_hdr(&txhdr->rts_plcp_fb, len, rts_rate_fb); hdr = (struct ieee80211_hdr *)(&txhdr->rts_frame); txhdr->rts_dur_fb = hdr->duration_id; diff --git a/drivers/net/wireless/brcm80211/brcmfmac/Makefile b/drivers/net/wireless/brcm80211/brcmfmac/Makefile index abb48032753b..9d5170b6df50 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/Makefile +++ b/drivers/net/wireless/brcm80211/brcmfmac/Makefile @@ -34,3 +34,5 @@ brcmfmac-$(CONFIG_BRCMFMAC_SDIO) += \ sdio_chip.o brcmfmac-$(CONFIG_BRCMFMAC_USB) += \ usb.o +brcmfmac-$(CONFIG_BRCMDBG) += \ + dhd_dbg.o
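Stepping back to the carl9170 hunks earlier in this patch: the comment added in carl9170_tx_bar_status() explains that the hardware cannot match a BlockAck to a previously sent BAR, so the driver remembers every queued BAR on a per-queue RCU-protected list (carl9170_bar_check on TX) and completes it when the RX path spots a plausible matching BlockAck (carl9170_ba_check). The snippet below is an editorial sketch of that add-on-TX / match-and-remove-on-RX pattern, not part of the patch; bar_entry, bar_pending, bar_lock, bar_track and bar_complete are invented names, and the match is reduced to pointer equality where the driver compares RA/TA, TID and the starting sequence number.

/*
 * Editorial sketch only: the RCU list pattern the carl9170 hunks use
 * for outstanding BARs.  All identifiers here are invented.
 */
#include <linux/rculist.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct bar_entry {
	struct list_head list;
	struct rcu_head head;
	struct sk_buff *skb;		/* the queued BAR frame */
};

static LIST_HEAD(bar_pending);
static DEFINE_SPINLOCK(bar_lock);

/* TX path: remember the BAR before it is handed to the hardware. */
static int bar_track(struct sk_buff *skb)
{
	struct bar_entry *e = kmalloc(sizeof(*e), GFP_ATOMIC);

	if (!e)
		return -ENOMEM;
	e->skb = skb;
	spin_lock_bh(&bar_lock);
	list_add_tail_rcu(&e->list, &bar_pending);
	spin_unlock_bh(&bar_lock);
	return 0;
}

/* RX path: a matching BlockAck was seen, complete and forget the BAR. */
static void bar_complete(struct sk_buff *skb)
{
	struct bar_entry *e;

	rcu_read_lock();
	list_for_each_entry_rcu(e, &bar_pending, list) {
		if (e->skb == skb) {	/* the driver matches RA/TA/TID/SSN here */
			spin_lock_bh(&bar_lock);
			list_del_rcu(&e->list);
			spin_unlock_bh(&bar_lock);
			kfree_rcu(e, head);
			break;
		}
	}
	rcu_read_unlock();
}

kfree_rcu() defers the actual free until lockless readers of the list have finished, which is why the entry can be unlinked under the spinlock while other CPUs may still be walking the list.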
\ No newline at end of file diff --git a/drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c b/drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c index 82f51dbd0d66..49765d34b4e0 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c @@ -44,6 +44,7 @@ #define SDIO_DEVICE_ID_BROADCOM_4329 0x4329 #define SDIO_DEVICE_ID_BROADCOM_4330 0x4330 +#define SDIO_DEVICE_ID_BROADCOM_4334 0x4334 #define SDIO_FUNC1_BLOCKSIZE 64 #define SDIO_FUNC2_BLOCKSIZE 512 @@ -52,6 +53,7 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = { {SDIO_DEVICE(SDIO_VENDOR_ID_BROADCOM, SDIO_DEVICE_ID_BROADCOM_4329)}, {SDIO_DEVICE(SDIO_VENDOR_ID_BROADCOM, SDIO_DEVICE_ID_BROADCOM_4330)}, + {SDIO_DEVICE(SDIO_VENDOR_ID_BROADCOM, SDIO_DEVICE_ID_BROADCOM_4334)}, { /* end: all zeroes */ }, }; MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids); diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd.h b/drivers/net/wireless/brcm80211/brcmfmac/dhd.h index 9f637014486e..a11fe54f5950 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd.h +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd.h @@ -613,6 +613,9 @@ struct brcmf_pub { struct work_struct multicast_work; u8 macvalue[ETH_ALEN]; atomic_t pend_8021x_cnt; +#ifdef DEBUG + struct dentry *dbgfs_dir; +#endif }; struct brcmf_if_event { diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h b/drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h index 366916494be4..537f499cc5d2 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h @@ -36,6 +36,13 @@ struct dngl_stats { unsigned long multicast; /* multicast packets received */ }; +struct brcmf_bus_dcmd { + char *name; + char *param; + int param_len; + struct list_head list; +}; + /* interface structure between common and bus layer */ struct brcmf_bus { u8 type; /* bus type */ @@ -50,6 +57,7 @@ struct brcmf_bus { unsigned long tx_realloc; /* Tx packets realloced for headroom */ struct dngl_stats dstats; /* Stats for dongle-based data */ u8 align; /* bus alignment requirement */ + struct list_head dcmd_list; /* interface functions pointers */ /* Stop bus module: clear pending frames, disable data flow */ diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_common.c b/drivers/net/wireless/brcm80211/brcmfmac/dhd_common.c index 236cb9fa460c..2621dd3d7dcd 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_common.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_common.c @@ -800,13 +800,13 @@ int brcmf_c_preinit_dcmds(struct brcmf_pub *drvr) char iovbuf[BRCMF_EVENTING_MASK_LEN + 12]; /* Room for "event_msgs" + '\0' + bitvec */ char buf[128], *ptr; - u32 dongle_align = drvr->bus_if->align; - u32 glom = 0; u32 roaming = 1; uint bcn_timeout = 3; int scan_assoc_time = 40; int scan_unassoc_time = 40; int i; + struct brcmf_bus_dcmd *cmdlst; + struct list_head *cur, *q; mutex_lock(&drvr->proto_block); @@ -827,17 +827,6 @@ int brcmf_c_preinit_dcmds(struct brcmf_pub *drvr) /* Print fw version info */ brcmf_dbg(ERROR, "Firmware version = %s\n", buf); - /* Match Host and Dongle rx alignment */ - brcmf_c_mkiovar("bus:txglomalign", (char *)&dongle_align, 4, iovbuf, - sizeof(iovbuf)); - brcmf_proto_cdc_set_dcmd(drvr, 0, BRCMF_C_SET_VAR, iovbuf, - sizeof(iovbuf)); - - /* disable glom option per default */ - brcmf_c_mkiovar("bus:txglom", (char *)&glom, 4, iovbuf, sizeof(iovbuf)); - brcmf_proto_cdc_set_dcmd(drvr, 0, BRCMF_C_SET_VAR, iovbuf, - sizeof(iovbuf)); - /* Setup timeout if Beacons are lost and 
roam is off to report link down */ brcmf_c_mkiovar("bcn_timeout", (char *)&bcn_timeout, 4, iovbuf, @@ -874,6 +863,20 @@ int brcmf_c_preinit_dcmds(struct brcmf_pub *drvr) 0, true); } + /* set bus specific command if there is any */ + list_for_each_safe(cur, q, &drvr->bus_if->dcmd_list) { + cmdlst = list_entry(cur, struct brcmf_bus_dcmd, list); + if (cmdlst->name && cmdlst->param && cmdlst->param_len) { + brcmf_c_mkiovar(cmdlst->name, cmdlst->param, + cmdlst->param_len, iovbuf, + sizeof(iovbuf)); + brcmf_proto_cdc_set_dcmd(drvr, 0, BRCMF_C_SET_VAR, + iovbuf, sizeof(iovbuf)); + } + list_del(cur); + kfree(cmdlst); + } + mutex_unlock(&drvr->proto_block); return 0; diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.c b/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.c new file mode 100644 index 000000000000..7f89540b56da --- /dev/null +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.c @@ -0,0 +1,126 @@ +/* + * Copyright (c) 2012 Broadcom Corporation + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ +#include <linux/debugfs.h> +#include <linux/if_ether.h> +#include <linux/if.h> +#include <linux/ieee80211.h> +#include <linux/module.h> + +#include <defs.h> +#include <brcmu_wifi.h> +#include <brcmu_utils.h> +#include "dhd.h" +#include "dhd_bus.h" +#include "dhd_dbg.h" + +static struct dentry *root_folder; + +void brcmf_debugfs_init(void) +{ + root_folder = debugfs_create_dir(KBUILD_MODNAME, NULL); + if (IS_ERR(root_folder)) + root_folder = NULL; +} + +void brcmf_debugfs_exit(void) +{ + if (!root_folder) + return; + + debugfs_remove_recursive(root_folder); + root_folder = NULL; +} + +int brcmf_debugfs_attach(struct brcmf_pub *drvr) +{ + if (!root_folder) + return -ENODEV; + + drvr->dbgfs_dir = debugfs_create_dir(dev_name(drvr->dev), root_folder); + return PTR_RET(drvr->dbgfs_dir); +} + +void brcmf_debugfs_detach(struct brcmf_pub *drvr) +{ + if (!IS_ERR_OR_NULL(drvr->dbgfs_dir)) + debugfs_remove_recursive(drvr->dbgfs_dir); +} + +struct dentry *brcmf_debugfs_get_devdir(struct brcmf_pub *drvr) +{ + return drvr->dbgfs_dir; +} + +static +ssize_t brcmf_debugfs_sdio_counter_read(struct file *f, char __user *data, + size_t count, loff_t *ppos) +{ + struct brcmf_sdio_count *sdcnt = f->private_data; + char buf[750]; + int res; + + /* only allow read from start */ + if (*ppos > 0) + return 0; + + res = scnprintf(buf, sizeof(buf), + "intrcount: %u\nlastintrs: %u\n" + "pollcnt: %u\nregfails: %u\n" + "tx_sderrs: %u\nfcqueued: %u\n" + "rxrtx: %u\nrx_toolong: %u\n" + "rxc_errors: %u\nrx_hdrfail: %u\n" + "rx_badhdr: %u\nrx_badseq: %u\n" + "fc_rcvd: %u\nfc_xoff: %u\n" + "fc_xon: %u\nrxglomfail: %u\n" + "rxglomframes: %u\nrxglompkts: %u\n" + "f2rxhdrs: %u\nf2rxdata: %u\n" + "f2txdata: %u\nf1regdata: %u\n" + "tickcnt: %u\ntx_ctlerrs: %lu\n" + "tx_ctlpkts: %lu\nrx_ctlerrs: %lu\n" + "rx_ctlpkts: %lu\nrx_readahead: %lu\n", 
+ sdcnt->intrcount, sdcnt->lastintrs, + sdcnt->pollcnt, sdcnt->regfails, + sdcnt->tx_sderrs, sdcnt->fcqueued, + sdcnt->rxrtx, sdcnt->rx_toolong, + sdcnt->rxc_errors, sdcnt->rx_hdrfail, + sdcnt->rx_badhdr, sdcnt->rx_badseq, + sdcnt->fc_rcvd, sdcnt->fc_xoff, + sdcnt->fc_xon, sdcnt->rxglomfail, + sdcnt->rxglomframes, sdcnt->rxglompkts, + sdcnt->f2rxhdrs, sdcnt->f2rxdata, + sdcnt->f2txdata, sdcnt->f1regdata, + sdcnt->tickcnt, sdcnt->tx_ctlerrs, + sdcnt->tx_ctlpkts, sdcnt->rx_ctlerrs, + sdcnt->rx_ctlpkts, sdcnt->rx_readahead_cnt); + + return simple_read_from_buffer(data, count, ppos, buf, res); +} + +static const struct file_operations brcmf_debugfs_sdio_counter_ops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = brcmf_debugfs_sdio_counter_read +}; + +void brcmf_debugfs_create_sdio_count(struct brcmf_pub *drvr, + struct brcmf_sdio_count *sdcnt) +{ + struct dentry *dentry = drvr->dbgfs_dir; + + if (!IS_ERR_OR_NULL(dentry)) + debugfs_create_file("counters", S_IRUGO, dentry, + sdcnt, &brcmf_debugfs_sdio_counter_ops); +} diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.h b/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.h index a2c4576cf9ff..b784920532d3 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.h +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.h @@ -76,4 +76,63 @@ do { \ extern int brcmf_msg_level; +/* + * hold counter variables used in brcmfmac sdio driver. + */ +struct brcmf_sdio_count { + uint intrcount; /* Count of device interrupt callbacks */ + uint lastintrs; /* Count as of last watchdog timer */ + uint pollcnt; /* Count of active polls */ + uint regfails; /* Count of R_REG failures */ + uint tx_sderrs; /* Count of tx attempts with sd errors */ + uint fcqueued; /* Tx packets that got queued */ + uint rxrtx; /* Count of rtx requests (NAK to dongle) */ + uint rx_toolong; /* Receive frames too long to receive */ + uint rxc_errors; /* SDIO errors when reading control frames */ + uint rx_hdrfail; /* SDIO errors on header reads */ + uint rx_badhdr; /* Bad received headers (roosync?) 
*/ + uint rx_badseq; /* Mismatched rx sequence number */ + uint fc_rcvd; /* Number of flow-control events received */ + uint fc_xoff; /* Number which turned on flow-control */ + uint fc_xon; /* Number which turned off flow-control */ + uint rxglomfail; /* Failed deglom attempts */ + uint rxglomframes; /* Number of glom frames (superframes) */ + uint rxglompkts; /* Number of packets from glom frames */ + uint f2rxhdrs; /* Number of header reads */ + uint f2rxdata; /* Number of frame data reads */ + uint f2txdata; /* Number of f2 frame writes */ + uint f1regdata; /* Number of f1 register accesses */ + uint tickcnt; /* Number of watchdog been schedule */ + ulong tx_ctlerrs; /* Err of sending ctrl frames */ + ulong tx_ctlpkts; /* Ctrl frames sent to dongle */ + ulong rx_ctlerrs; /* Err of processing rx ctrl frames */ + ulong rx_ctlpkts; /* Ctrl frames processed from dongle */ + ulong rx_readahead_cnt; /* packets where header read-ahead was used */ +}; + +struct brcmf_pub; +#ifdef DEBUG +void brcmf_debugfs_init(void); +void brcmf_debugfs_exit(void); +int brcmf_debugfs_attach(struct brcmf_pub *drvr); +void brcmf_debugfs_detach(struct brcmf_pub *drvr); +struct dentry *brcmf_debugfs_get_devdir(struct brcmf_pub *drvr); +void brcmf_debugfs_create_sdio_count(struct brcmf_pub *drvr, + struct brcmf_sdio_count *sdcnt); +#else +static inline void brcmf_debugfs_init(void) +{ +} +static inline void brcmf_debugfs_exit(void) +{ +} +static inline int brcmf_debugfs_attach(struct brcmf_pub *drvr) +{ + return 0; +} +static inline void brcmf_debugfs_detach(struct brcmf_pub *drvr) +{ +} +#endif + #endif /* _BRCMF_DBG_H_ */ diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c b/drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c index 8933f9b31a9a..57bf1d7ee80f 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c @@ -1007,6 +1007,9 @@ int brcmf_attach(uint bus_hdrlen, struct device *dev) drvr->bus_if->drvr = drvr; drvr->dev = dev; + /* create device debugfs folder */ + brcmf_debugfs_attach(drvr); + /* Attach and link in the protocol */ ret = brcmf_proto_attach(drvr); if (ret != 0) { @@ -1017,6 +1020,8 @@ int brcmf_attach(uint bus_hdrlen, struct device *dev) INIT_WORK(&drvr->setmacaddr_work, _brcmf_set_mac_address); INIT_WORK(&drvr->multicast_work, _brcmf_set_multicast_list); + INIT_LIST_HEAD(&drvr->bus_if->dcmd_list); + return ret; fail: @@ -1123,6 +1128,7 @@ void brcmf_detach(struct device *dev) brcmf_proto_detach(drvr); } + brcmf_debugfs_detach(drvr); bus_if->drvr = NULL; kfree(drvr); } @@ -1192,6 +1198,8 @@ exit: static void brcmf_driver_init(struct work_struct *work) { + brcmf_debugfs_init(); + #ifdef CONFIG_BRCMFMAC_SDIO brcmf_sdio_init(); #endif @@ -1219,6 +1227,7 @@ static void __exit brcmfmac_module_exit(void) #ifdef CONFIG_BRCMFMAC_USB brcmf_usb_exit(); #endif + brcmf_debugfs_exit(); } module_init(brcmfmac_module_init); diff --git a/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c b/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c index 1dbf2be478c8..472f2ef5c652 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c @@ -31,6 +31,8 @@ #include <linux/firmware.h> #include <linux/module.h> #include <linux/bcma/bcma.h> +#include <linux/debugfs.h> +#include <linux/vmalloc.h> #include <asm/unaligned.h> #include <defs.h> #include <brcmu_wifi.h> @@ -48,6 +50,9 @@ #define CBUF_LEN (128) +/* Device console log buffer state */ +#define CONSOLE_BUFFER_MAX 2024 + 
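The brcmfmac hunks above (dhd_dbg.c and dhd_dbg.h) gather the SDIO statistics into struct brcmf_sdio_count and publish them through a read-only debugfs file built from simple_open(), scnprintf() and simple_read_from_buffer(). Below is a condensed sketch of that pattern, not the driver's code; my_counters, my_debugfs_read, my_debugfs_ops and my_debugfs_create are made-up names, while the debugfs and libfs helpers are the same ones the patch uses.

/*
 * Editorial sketch: expose a counters struct via a read-only debugfs file.
 */
#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/module.h>

struct my_counters {
	unsigned int rx_frames;
	unsigned int tx_errors;
};

static ssize_t my_debugfs_read(struct file *f, char __user *data,
			       size_t count, loff_t *ppos)
{
	struct my_counters *c = f->private_data;
	char buf[64];
	int res;

	res = scnprintf(buf, sizeof(buf), "rx_frames: %u\ntx_errors: %u\n",
			c->rx_frames, c->tx_errors);
	return simple_read_from_buffer(data, count, ppos, buf, res);
}

static const struct file_operations my_debugfs_ops = {
	.owner = THIS_MODULE,
	.open = simple_open,	/* hands inode->i_private to f->private_data */
	.read = my_debugfs_read,
};

/* dir is a directory previously created with debugfs_create_dir(). */
static void my_debugfs_create(struct dentry *dir, struct my_counters *c)
{
	debugfs_create_file("counters", S_IRUGO, dir, c, &my_debugfs_ops);
}

Passing the counters pointer as the data argument of debugfs_create_file() is what lets simple_open() hand it back in file->private_data, so the read handler needs no global state.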
struct rte_log_le { __le32 buf; /* Can't be pointer on (64-bit) hosts */ __le32 buf_size; @@ -281,7 +286,7 @@ struct rte_console { * Shared structure between dongle and the host. * The structure contains pointers to trap or assert information. */ -#define SDPCM_SHARED_VERSION 0x0002 +#define SDPCM_SHARED_VERSION 0x0003 #define SDPCM_SHARED_VERSION_MASK 0x00FF #define SDPCM_SHARED_ASSERT_BUILT 0x0100 #define SDPCM_SHARED_ASSERT 0x0200 @@ -428,6 +433,29 @@ struct brcmf_console { u8 *buf; /* Log buffer (host copy) */ uint last; /* Last buffer read index */ }; + +struct brcmf_trap_info { + __le32 type; + __le32 epc; + __le32 cpsr; + __le32 spsr; + __le32 r0; /* a1 */ + __le32 r1; /* a2 */ + __le32 r2; /* a3 */ + __le32 r3; /* a4 */ + __le32 r4; /* v1 */ + __le32 r5; /* v2 */ + __le32 r6; /* v3 */ + __le32 r7; /* v4 */ + __le32 r8; /* v5 */ + __le32 r9; /* sb/v6 */ + __le32 r10; /* sl/v7 */ + __le32 r11; /* fp/v8 */ + __le32 r12; /* ip */ + __le32 r13; /* sp */ + __le32 r14; /* lr */ + __le32 pc; /* r15 */ +}; #endif /* DEBUG */ struct sdpcm_shared { @@ -439,6 +467,7 @@ struct sdpcm_shared { u32 console_addr; /* Address of struct rte_console */ u32 msgtrace_addr; u8 tag[32]; + u32 brpt_addr; }; struct sdpcm_shared_le { @@ -450,6 +479,7 @@ struct sdpcm_shared_le { __le32 console_addr; /* Address of struct rte_console */ __le32 msgtrace_addr; u8 tag[32]; + __le32 brpt_addr; }; @@ -502,12 +532,9 @@ struct brcmf_sdio { bool intr; /* Use interrupts */ bool poll; /* Use polling */ bool ipend; /* Device interrupt is pending */ - uint intrcount; /* Count of device interrupt callbacks */ - uint lastintrs; /* Count as of last watchdog timer */ uint spurious; /* Count of spurious interrupts */ uint pollrate; /* Ticks between device polls */ uint polltick; /* Tick counter */ - uint pollcnt; /* Count of active polls */ #ifdef DEBUG uint console_interval; @@ -515,8 +542,6 @@ struct brcmf_sdio { uint console_addr; /* Console address from shared struct */ #endif /* DEBUG */ - uint regfails; /* Count of R_REG failures */ - uint clkstate; /* State of sd and backplane clock(s) */ bool activity; /* Activity flag for clock down */ s32 idletime; /* Control for activity timeout */ @@ -531,33 +556,6 @@ struct brcmf_sdio { /* Field to decide if rx of control frames happen in rxbuf or lb-pool */ bool usebufpool; - /* Some additional counters */ - uint tx_sderrs; /* Count of tx attempts with sd errors */ - uint fcqueued; /* Tx packets that got queued */ - uint rxrtx; /* Count of rtx requests (NAK to dongle) */ - uint rx_toolong; /* Receive frames too long to receive */ - uint rxc_errors; /* SDIO errors when reading control frames */ - uint rx_hdrfail; /* SDIO errors on header reads */ - uint rx_badhdr; /* Bad received headers (roosync?) 
*/ - uint rx_badseq; /* Mismatched rx sequence number */ - uint fc_rcvd; /* Number of flow-control events received */ - uint fc_xoff; /* Number which turned on flow-control */ - uint fc_xon; /* Number which turned off flow-control */ - uint rxglomfail; /* Failed deglom attempts */ - uint rxglomframes; /* Number of glom frames (superframes) */ - uint rxglompkts; /* Number of packets from glom frames */ - uint f2rxhdrs; /* Number of header reads */ - uint f2rxdata; /* Number of frame data reads */ - uint f2txdata; /* Number of f2 frame writes */ - uint f1regdata; /* Number of f1 register accesses */ - uint tickcnt; /* Number of watchdog been schedule */ - unsigned long tx_ctlerrs; /* Err of sending ctrl frames */ - unsigned long tx_ctlpkts; /* Ctrl frames sent to dongle */ - unsigned long rx_ctlerrs; /* Err of processing rx ctrl frames */ - unsigned long rx_ctlpkts; /* Ctrl frames processed from dongle */ - unsigned long rx_readahead_cnt; /* Number of packets where header - * read-ahead was used. */ - u8 *ctrl_frame_buf; u32 ctrl_frame_len; bool ctrl_frame_stat; @@ -583,6 +581,7 @@ struct brcmf_sdio { u32 fw_ptr; bool txoff; /* Transmit flow-controlled */ + struct brcmf_sdio_count sdcnt; }; /* clkstate */ @@ -945,7 +944,7 @@ static u32 brcmf_sdbrcm_hostmail(struct brcmf_sdio *bus) if (ret == 0) w_sdreg32(bus, SMB_INT_ACK, offsetof(struct sdpcmd_regs, tosbmailbox)); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; /* Dongle recomposed rx frames, accept them again */ if (hmb_data & HMB_DATA_NAKHANDLED) { @@ -984,12 +983,12 @@ static u32 brcmf_sdbrcm_hostmail(struct brcmf_sdio *bus) HMB_DATA_FCDATA_SHIFT; if (fcbits & ~bus->flowcontrol) - bus->fc_xoff++; + bus->sdcnt.fc_xoff++; if (bus->flowcontrol & ~fcbits) - bus->fc_xon++; + bus->sdcnt.fc_xon++; - bus->fc_rcvd++; + bus->sdcnt.fc_rcvd++; bus->flowcontrol = fcbits; } @@ -1021,7 +1020,7 @@ static void brcmf_sdbrcm_rxfail(struct brcmf_sdio *bus, bool abort, bool rtx) brcmf_sdio_regwb(bus->sdiodev, SBSDIO_FUNC1_FRAMECTRL, SFC_RF_TERM, &err); - bus->f1regdata++; + bus->sdcnt.f1regdata++; /* Wait until the packet has been flushed (device/FIFO stable) */ for (lastrbc = retries = 0xffff; retries > 0; retries--) { @@ -1029,7 +1028,7 @@ static void brcmf_sdbrcm_rxfail(struct brcmf_sdio *bus, bool abort, bool rtx) SBSDIO_FUNC1_RFRAMEBCHI, &err); lo = brcmf_sdio_regrb(bus->sdiodev, SBSDIO_FUNC1_RFRAMEBCLO, &err); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; if ((hi == 0) && (lo == 0)) break; @@ -1047,11 +1046,11 @@ static void brcmf_sdbrcm_rxfail(struct brcmf_sdio *bus, bool abort, bool rtx) brcmf_dbg(INFO, "flush took %d iterations\n", 0xffff - retries); if (rtx) { - bus->rxrtx++; + bus->sdcnt.rxrtx++; err = w_sdreg32(bus, SMB_NAK, offsetof(struct sdpcmd_regs, tosbmailbox)); - bus->f1regdata++; + bus->sdcnt.f1regdata++; if (err == 0) bus->rxskip = true; } @@ -1243,7 +1242,7 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) dlen); errcode = -1; } - bus->f2rxdata++; + bus->sdcnt.f2rxdata++; /* On failure, kill the superframe, allow a couple retries */ if (errcode < 0) { @@ -1256,7 +1255,7 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) } else { bus->glomerr = 0; brcmf_sdbrcm_rxfail(bus, true, false); - bus->rxglomfail++; + bus->sdcnt.rxglomfail++; brcmf_sdbrcm_free_glom(bus); } return 0; @@ -1312,7 +1311,7 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) if (rxseq != seq) { brcmf_dbg(INFO, "(superframe) rx_seq %d, expected %d\n", seq, rxseq); - bus->rx_badseq++; + bus->sdcnt.rx_badseq++; 
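The flow-control bookkeeping in the brcmf_sdbrcm_hostmail hunk above derives the XOFF and XON events by comparing the bitmap just reported by the dongle against the stored bus->flowcontrol value. A tiny illustrative helper follows (fc_account, prev and cur are invented names); like the driver, it counts one event per report rather than one per changed bit.

/*
 * Editorial sketch, not driver code: count flow-control transitions
 * between the previously stored bitmap and the one just reported.
 */
#include <linux/types.h>

static void fc_account(u8 prev, u8 cur, unsigned int *xoff, unsigned int *xon)
{
	if (cur & ~prev)	/* some precedence became flow-controlled */
		(*xoff)++;
	if (prev & ~cur)	/* some precedence was released again */
		(*xon)++;
}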
rxseq = seq; } @@ -1376,7 +1375,7 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) } else { bus->glomerr = 0; brcmf_sdbrcm_rxfail(bus, true, false); - bus->rxglomfail++; + bus->sdcnt.rxglomfail++; brcmf_sdbrcm_free_glom(bus); } bus->nextlen = 0; @@ -1402,7 +1401,7 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) if (rxseq != seq) { brcmf_dbg(GLOM, "rx_seq %d, expected %d\n", seq, rxseq); - bus->rx_badseq++; + bus->sdcnt.rx_badseq++; rxseq = seq; } rxseq++; @@ -1441,8 +1440,8 @@ static u8 brcmf_sdbrcm_rxglom(struct brcmf_sdio *bus, u8 rxseq) down(&bus->sdsem); } - bus->rxglomframes++; - bus->rxglompkts += bus->glom.qlen; + bus->sdcnt.rxglomframes++; + bus->sdcnt.rxglompkts += bus->glom.qlen; } return num; } @@ -1526,7 +1525,7 @@ brcmf_sdbrcm_read_control(struct brcmf_sdio *bus, u8 *hdr, uint len, uint doff) brcmf_dbg(ERROR, "%d-byte ctl frame (%d-byte ctl data) exceeds %d-byte limit\n", len, len - doff, bus->sdiodev->bus_if->maxctl); bus->sdiodev->bus_if->dstats.rx_errors++; - bus->rx_toolong++; + bus->sdcnt.rx_toolong++; brcmf_sdbrcm_rxfail(bus, false, false); goto done; } @@ -1536,13 +1535,13 @@ brcmf_sdbrcm_read_control(struct brcmf_sdio *bus, u8 *hdr, uint len, uint doff) bus->sdiodev->sbwad, SDIO_FUNC_2, F2SYNC, (bus->rxctl + BRCMF_FIRSTREAD), rdlen); - bus->f2rxdata++; + bus->sdcnt.f2rxdata++; /* Control frame failures need retransmission */ if (sdret < 0) { brcmf_dbg(ERROR, "read %d control bytes failed: %d\n", rdlen, sdret); - bus->rxc_errors++; + bus->sdcnt.rxc_errors++; brcmf_sdbrcm_rxfail(bus, true, true); goto done; } @@ -1589,7 +1588,7 @@ brcmf_alloc_pkt_and_read(struct brcmf_sdio *bus, u16 rdlen, /* Read the entire frame */ sdret = brcmf_sdcard_recv_pkt(bus->sdiodev, bus->sdiodev->sbwad, SDIO_FUNC_2, F2SYNC, *pkt); - bus->f2rxdata++; + bus->sdcnt.f2rxdata++; if (sdret < 0) { brcmf_dbg(ERROR, "(nextlen): read %d bytes failed: %d\n", @@ -1630,7 +1629,7 @@ brcmf_check_rxbuf(struct brcmf_sdio *bus, struct sk_buff *pkt, u8 *rxbuf, if ((u16)~(*len ^ check)) { brcmf_dbg(ERROR, "(nextlen): HW hdr error: nextlen/len/check 0x%04x/0x%04x/0x%04x\n", nextlen, *len, check); - bus->rx_badhdr++; + bus->sdcnt.rx_badhdr++; brcmf_sdbrcm_rxfail(bus, false, false); goto fail; } @@ -1746,7 +1745,7 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) bus->nextlen = 0; } - bus->rx_readahead_cnt++; + bus->sdcnt.rx_readahead_cnt++; /* Handle Flow Control */ fcbits = SDPCM_FCMASK_VALUE( @@ -1754,12 +1753,12 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) if (bus->flowcontrol != fcbits) { if (~bus->flowcontrol & fcbits) - bus->fc_xoff++; + bus->sdcnt.fc_xoff++; if (bus->flowcontrol & ~fcbits) - bus->fc_xon++; + bus->sdcnt.fc_xon++; - bus->fc_rcvd++; + bus->sdcnt.fc_rcvd++; bus->flowcontrol = fcbits; } @@ -1767,7 +1766,7 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) if (rxseq != seq) { brcmf_dbg(INFO, "(nextlen): rx_seq %d, expected %d\n", seq, rxseq); - bus->rx_badseq++; + bus->sdcnt.rx_badseq++; rxseq = seq; } @@ -1814,11 +1813,11 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) sdret = brcmf_sdcard_recv_buf(bus->sdiodev, bus->sdiodev->sbwad, SDIO_FUNC_2, F2SYNC, bus->rxhdr, BRCMF_FIRSTREAD); - bus->f2rxhdrs++; + bus->sdcnt.f2rxhdrs++; if (sdret < 0) { brcmf_dbg(ERROR, "RXHEADER FAILED: %d\n", sdret); - bus->rx_hdrfail++; + bus->sdcnt.rx_hdrfail++; brcmf_sdbrcm_rxfail(bus, true, true); continue; } @@ -1840,7 +1839,7 @@ 
brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) if ((u16) ~(len ^ check)) { brcmf_dbg(ERROR, "HW hdr err: len/check 0x%04x/0x%04x\n", len, check); - bus->rx_badhdr++; + bus->sdcnt.rx_badhdr++; brcmf_sdbrcm_rxfail(bus, false, false); continue; } @@ -1861,7 +1860,7 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) if ((doff < SDPCM_HDRLEN) || (doff > len)) { brcmf_dbg(ERROR, "Bad data offset %d: HW len %d, min %d seq %d\n", doff, len, SDPCM_HDRLEN, seq); - bus->rx_badhdr++; + bus->sdcnt.rx_badhdr++; brcmf_sdbrcm_rxfail(bus, false, false); continue; } @@ -1880,19 +1879,19 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) if (bus->flowcontrol != fcbits) { if (~bus->flowcontrol & fcbits) - bus->fc_xoff++; + bus->sdcnt.fc_xoff++; if (bus->flowcontrol & ~fcbits) - bus->fc_xon++; + bus->sdcnt.fc_xon++; - bus->fc_rcvd++; + bus->sdcnt.fc_rcvd++; bus->flowcontrol = fcbits; } /* Check and update sequence number */ if (rxseq != seq) { brcmf_dbg(INFO, "rx_seq %d, expected %d\n", seq, rxseq); - bus->rx_badseq++; + bus->sdcnt.rx_badseq++; rxseq = seq; } @@ -1937,7 +1936,7 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) brcmf_dbg(ERROR, "too long: len %d rdlen %d\n", len, rdlen); bus->sdiodev->bus_if->dstats.rx_errors++; - bus->rx_toolong++; + bus->sdcnt.rx_toolong++; brcmf_sdbrcm_rxfail(bus, false, false); continue; } @@ -1960,7 +1959,7 @@ brcmf_sdbrcm_readframes(struct brcmf_sdio *bus, uint maxframes, bool *finished) /* Read the remaining frame data */ sdret = brcmf_sdcard_recv_pkt(bus->sdiodev, bus->sdiodev->sbwad, SDIO_FUNC_2, F2SYNC, pkt); - bus->f2rxdata++; + bus->sdcnt.f2rxdata++; if (sdret < 0) { brcmf_dbg(ERROR, "read %d %s bytes failed: %d\n", rdlen, @@ -2147,18 +2146,18 @@ static int brcmf_sdbrcm_txpkt(struct brcmf_sdio *bus, struct sk_buff *pkt, ret = brcmf_sdcard_send_pkt(bus->sdiodev, bus->sdiodev->sbwad, SDIO_FUNC_2, F2SYNC, pkt); - bus->f2txdata++; + bus->sdcnt.f2txdata++; if (ret < 0) { /* On failure, abort the command and terminate the frame */ brcmf_dbg(INFO, "sdio error %d, abort command and terminate frame\n", ret); - bus->tx_sderrs++; + bus->sdcnt.tx_sderrs++; brcmf_sdcard_abort(bus->sdiodev, SDIO_FUNC_2); brcmf_sdio_regwb(bus->sdiodev, SBSDIO_FUNC1_FRAMECTRL, SFC_WF_TERM, NULL); - bus->f1regdata++; + bus->sdcnt.f1regdata++; for (i = 0; i < 3; i++) { u8 hi, lo; @@ -2166,7 +2165,7 @@ static int brcmf_sdbrcm_txpkt(struct brcmf_sdio *bus, struct sk_buff *pkt, SBSDIO_FUNC1_WFRAMEBCHI, NULL); lo = brcmf_sdio_regrb(bus->sdiodev, SBSDIO_FUNC1_WFRAMEBCLO, NULL); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; if ((hi == 0) && (lo == 0)) break; } @@ -2224,7 +2223,7 @@ static uint brcmf_sdbrcm_sendfromq(struct brcmf_sdio *bus, uint maxframes) ret = r_sdreg32(bus, &intstatus, offsetof(struct sdpcmd_regs, intstatus)); - bus->f2txdata++; + bus->sdcnt.f2txdata++; if (ret != 0) break; if (intstatus & bus->hostintmask) @@ -2417,7 +2416,7 @@ static bool brcmf_sdbrcm_dpc(struct brcmf_sdio *bus) bus->ipend = false; err = r_sdreg32(bus, &newstatus, offsetof(struct sdpcmd_regs, intstatus)); - bus->f1regdata++; + bus->sdcnt.f1regdata++; if (err != 0) newstatus = 0; newstatus &= bus->hostintmask; @@ -2426,7 +2425,7 @@ static bool brcmf_sdbrcm_dpc(struct brcmf_sdio *bus) err = w_sdreg32(bus, newstatus, offsetof(struct sdpcmd_regs, intstatus)); - bus->f1regdata++; + bus->sdcnt.f1regdata++; } } @@ -2445,7 +2444,7 @@ static bool brcmf_sdbrcm_dpc(struct brcmf_sdio *bus) err = 
r_sdreg32(bus, &newstatus, offsetof(struct sdpcmd_regs, intstatus)); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; bus->fcstate = !!(newstatus & (I_HMB_FC_STATE | I_HMB_FC_CHANGE)); intstatus |= (newstatus & bus->hostintmask); @@ -2502,7 +2501,7 @@ clkwait: int ret, i; ret = brcmf_sdcard_send_buf(bus->sdiodev, bus->sdiodev->sbwad, - SDIO_FUNC_2, F2SYNC, (u8 *) bus->ctrl_frame_buf, + SDIO_FUNC_2, F2SYNC, bus->ctrl_frame_buf, (u32) bus->ctrl_frame_len); if (ret < 0) { @@ -2510,13 +2509,13 @@ clkwait: terminate the frame */ brcmf_dbg(INFO, "sdio error %d, abort command and terminate frame\n", ret); - bus->tx_sderrs++; + bus->sdcnt.tx_sderrs++; brcmf_sdcard_abort(bus->sdiodev, SDIO_FUNC_2); brcmf_sdio_regwb(bus->sdiodev, SBSDIO_FUNC1_FRAMECTRL, SFC_WF_TERM, &err); - bus->f1regdata++; + bus->sdcnt.f1regdata++; for (i = 0; i < 3; i++) { u8 hi, lo; @@ -2526,7 +2525,7 @@ clkwait: lo = brcmf_sdio_regrb(bus->sdiodev, SBSDIO_FUNC1_WFRAMEBCLO, &err); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; if ((hi == 0) && (lo == 0)) break; } @@ -2657,7 +2656,7 @@ static int brcmf_sdbrcm_bus_txdata(struct device *dev, struct sk_buff *pkt) /* Check for existing queue, current flow-control, pending event, or pending clock */ brcmf_dbg(TRACE, "deferring pktq len %d\n", pktq_len(&bus->txq)); - bus->fcqueued++; + bus->sdcnt.fcqueued++; /* Priority based enq */ spin_lock_bh(&bus->txqlock); @@ -2845,13 +2844,13 @@ static int brcmf_tx_frame(struct brcmf_sdio *bus, u8 *frame, u16 len) /* On failure, abort the command and terminate the frame */ brcmf_dbg(INFO, "sdio error %d, abort command and terminate frame\n", ret); - bus->tx_sderrs++; + bus->sdcnt.tx_sderrs++; brcmf_sdcard_abort(bus->sdiodev, SDIO_FUNC_2); brcmf_sdio_regwb(bus->sdiodev, SBSDIO_FUNC1_FRAMECTRL, SFC_WF_TERM, NULL); - bus->f1regdata++; + bus->sdcnt.f1regdata++; for (i = 0; i < 3; i++) { u8 hi, lo; @@ -2859,7 +2858,7 @@ static int brcmf_tx_frame(struct brcmf_sdio *bus, u8 *frame, u16 len) SBSDIO_FUNC1_WFRAMEBCHI, NULL); lo = brcmf_sdio_regrb(bus->sdiodev, SBSDIO_FUNC1_WFRAMEBCLO, NULL); - bus->f1regdata += 2; + bus->sdcnt.f1regdata += 2; if (hi == 0 && lo == 0) break; } @@ -2976,13 +2975,324 @@ brcmf_sdbrcm_bus_txctl(struct device *dev, unsigned char *msg, uint msglen) up(&bus->sdsem); if (ret) - bus->tx_ctlerrs++; + bus->sdcnt.tx_ctlerrs++; else - bus->tx_ctlpkts++; + bus->sdcnt.tx_ctlpkts++; return ret ? -EIO : 0; } +#ifdef DEBUG +static inline bool brcmf_sdio_valid_shared_address(u32 addr) +{ + return !(addr == 0 || ((~addr >> 16) & 0xffff) == (addr & 0xffff)); +} + +static int brcmf_sdio_readshared(struct brcmf_sdio *bus, + struct sdpcm_shared *sh) +{ + u32 addr; + int rv; + u32 shaddr = 0; + struct sdpcm_shared_le sh_le; + __le32 addr_le; + + shaddr = bus->ramsize - 4; + + /* + * Read last word in socram to determine + * address of sdpcm_shared structure + */ + rv = brcmf_sdbrcm_membytes(bus, false, shaddr, + (u8 *)&addr_le, 4); + if (rv < 0) + return rv; + + addr = le32_to_cpu(addr_le); + + brcmf_dbg(INFO, "sdpcm_shared address 0x%08X\n", addr); + + /* + * Check if addr is valid. + * NVRAM length at the end of memory should have been overwritten. 
+ */ + if (!brcmf_sdio_valid_shared_address(addr)) { + brcmf_dbg(ERROR, "invalid sdpcm_shared address 0x%08X\n", + addr); + return -EINVAL; + } + + /* Read hndrte_shared structure */ + rv = brcmf_sdbrcm_membytes(bus, false, addr, (u8 *)&sh_le, + sizeof(struct sdpcm_shared_le)); + if (rv < 0) + return rv; + + /* Endianness */ + sh->flags = le32_to_cpu(sh_le.flags); + sh->trap_addr = le32_to_cpu(sh_le.trap_addr); + sh->assert_exp_addr = le32_to_cpu(sh_le.assert_exp_addr); + sh->assert_file_addr = le32_to_cpu(sh_le.assert_file_addr); + sh->assert_line = le32_to_cpu(sh_le.assert_line); + sh->console_addr = le32_to_cpu(sh_le.console_addr); + sh->msgtrace_addr = le32_to_cpu(sh_le.msgtrace_addr); + + if ((sh->flags & SDPCM_SHARED_VERSION_MASK) != SDPCM_SHARED_VERSION) { + brcmf_dbg(ERROR, + "sdpcm_shared version mismatch: dhd %d dongle %d\n", + SDPCM_SHARED_VERSION, + sh->flags & SDPCM_SHARED_VERSION_MASK); + return -EPROTO; + } + + return 0; +} + +static int brcmf_sdio_dump_console(struct brcmf_sdio *bus, + struct sdpcm_shared *sh, char __user *data, + size_t count) +{ + u32 addr, console_ptr, console_size, console_index; + char *conbuf = NULL; + __le32 sh_val; + int rv; + loff_t pos = 0; + int nbytes = 0; + + /* obtain console information from device memory */ + addr = sh->console_addr + offsetof(struct rte_console, log_le); + rv = brcmf_sdbrcm_membytes(bus, false, addr, + (u8 *)&sh_val, sizeof(u32)); + if (rv < 0) + return rv; + console_ptr = le32_to_cpu(sh_val); + + addr = sh->console_addr + offsetof(struct rte_console, log_le.buf_size); + rv = brcmf_sdbrcm_membytes(bus, false, addr, + (u8 *)&sh_val, sizeof(u32)); + if (rv < 0) + return rv; + console_size = le32_to_cpu(sh_val); + + addr = sh->console_addr + offsetof(struct rte_console, log_le.idx); + rv = brcmf_sdbrcm_membytes(bus, false, addr, + (u8 *)&sh_val, sizeof(u32)); + if (rv < 0) + return rv; + console_index = le32_to_cpu(sh_val); + + /* allocate buffer for console data */ + if (console_size <= CONSOLE_BUFFER_MAX) + conbuf = vzalloc(console_size+1); + + if (!conbuf) + return -ENOMEM; + + /* obtain the console data from device */ + conbuf[console_size] = '\0'; + rv = brcmf_sdbrcm_membytes(bus, false, console_ptr, (u8 *)conbuf, + console_size); + if (rv < 0) + goto done; + + rv = simple_read_from_buffer(data, count, &pos, + conbuf + console_index, + console_size - console_index); + if (rv < 0) + goto done; + + nbytes = rv; + if (console_index > 0) { + pos = 0; + rv = simple_read_from_buffer(data+nbytes, count, &pos, + conbuf, console_index - 1); + if (rv < 0) + goto done; + rv += nbytes; + } +done: + vfree(conbuf); + return rv; +} + +static int brcmf_sdio_trap_info(struct brcmf_sdio *bus, struct sdpcm_shared *sh, + char __user *data, size_t count) +{ + int error, res; + char buf[350]; + struct brcmf_trap_info tr; + int nbytes; + loff_t pos = 0; + + if ((sh->flags & SDPCM_SHARED_TRAP) == 0) + return 0; + + error = brcmf_sdbrcm_membytes(bus, false, sh->trap_addr, (u8 *)&tr, + sizeof(struct brcmf_trap_info)); + if (error < 0) + return error; + + nbytes = brcmf_sdio_dump_console(bus, sh, data, count); + if (nbytes < 0) + return nbytes; + + res = scnprintf(buf, sizeof(buf), + "dongle trap info: type 0x%x @ epc 0x%08x\n" + " cpsr 0x%08x spsr 0x%08x sp 0x%08x\n" + " lr 0x%08x pc 0x%08x offset 0x%x\n" + " r0 0x%08x r1 0x%08x r2 0x%08x r3 0x%08x\n" + " r4 0x%08x r5 0x%08x r6 0x%08x r7 0x%08x\n", + le32_to_cpu(tr.type), le32_to_cpu(tr.epc), + le32_to_cpu(tr.cpsr), le32_to_cpu(tr.spsr), + le32_to_cpu(tr.r13), le32_to_cpu(tr.r14), + 
le32_to_cpu(tr.pc), sh->trap_addr, + le32_to_cpu(tr.r0), le32_to_cpu(tr.r1), + le32_to_cpu(tr.r2), le32_to_cpu(tr.r3), + le32_to_cpu(tr.r4), le32_to_cpu(tr.r5), + le32_to_cpu(tr.r6), le32_to_cpu(tr.r7)); + + error = simple_read_from_buffer(data+nbytes, count, &pos, buf, res); + if (error < 0) + return error; + + nbytes += error; + return nbytes; +} + +static int brcmf_sdio_assert_info(struct brcmf_sdio *bus, + struct sdpcm_shared *sh, char __user *data, + size_t count) +{ + int error = 0; + char buf[200]; + char file[80] = "?"; + char expr[80] = "<???>"; + int res; + loff_t pos = 0; + + if ((sh->flags & SDPCM_SHARED_ASSERT_BUILT) == 0) { + brcmf_dbg(INFO, "firmware not built with -assert\n"); + return 0; + } else if ((sh->flags & SDPCM_SHARED_ASSERT) == 0) { + brcmf_dbg(INFO, "no assert in dongle\n"); + return 0; + } + + if (sh->assert_file_addr != 0) { + error = brcmf_sdbrcm_membytes(bus, false, sh->assert_file_addr, + (u8 *)file, 80); + if (error < 0) + return error; + } + if (sh->assert_exp_addr != 0) { + error = brcmf_sdbrcm_membytes(bus, false, sh->assert_exp_addr, + (u8 *)expr, 80); + if (error < 0) + return error; + } + + res = scnprintf(buf, sizeof(buf), + "dongle assert: %s:%d: assert(%s)\n", + file, sh->assert_line, expr); + return simple_read_from_buffer(data, count, &pos, buf, res); +} + +static int brcmf_sdbrcm_checkdied(struct brcmf_sdio *bus) +{ + int error; + struct sdpcm_shared sh; + + down(&bus->sdsem); + error = brcmf_sdio_readshared(bus, &sh); + up(&bus->sdsem); + + if (error < 0) + return error; + + if ((sh.flags & SDPCM_SHARED_ASSERT_BUILT) == 0) + brcmf_dbg(INFO, "firmware not built with -assert\n"); + else if (sh.flags & SDPCM_SHARED_ASSERT) + brcmf_dbg(ERROR, "assertion in dongle\n"); + + if (sh.flags & SDPCM_SHARED_TRAP) + brcmf_dbg(ERROR, "firmware trap in dongle\n"); + + return 0; +} + +static int brcmf_sdbrcm_died_dump(struct brcmf_sdio *bus, char __user *data, + size_t count, loff_t *ppos) +{ + int error = 0; + struct sdpcm_shared sh; + int nbytes = 0; + loff_t pos = *ppos; + + if (pos != 0) + return 0; + + down(&bus->sdsem); + error = brcmf_sdio_readshared(bus, &sh); + if (error < 0) + goto done; + + error = brcmf_sdio_assert_info(bus, &sh, data, count); + if (error < 0) + goto done; + + nbytes = error; + error = brcmf_sdio_trap_info(bus, &sh, data, count); + if (error < 0) + goto done; + + error += nbytes; + *ppos += error; +done: + up(&bus->sdsem); + return error; +} + +static ssize_t brcmf_sdio_forensic_read(struct file *f, char __user *data, + size_t count, loff_t *ppos) +{ + struct brcmf_sdio *bus = f->private_data; + int res; + + res = brcmf_sdbrcm_died_dump(bus, data, count, ppos); + if (res > 0) + *ppos += res; + return (ssize_t)res; +} + +static const struct file_operations brcmf_sdio_forensic_ops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = brcmf_sdio_forensic_read +}; + +static void brcmf_sdio_debugfs_create(struct brcmf_sdio *bus) +{ + struct brcmf_pub *drvr = bus->sdiodev->bus_if->drvr; + struct dentry *dentry = brcmf_debugfs_get_devdir(drvr); + + if (IS_ERR_OR_NULL(dentry)) + return; + + debugfs_create_file("forensics", S_IRUGO, dentry, bus, + &brcmf_sdio_forensic_ops); + brcmf_debugfs_create_sdio_count(drvr, &bus->sdcnt); +} +#else +static int brcmf_sdbrcm_checkdied(struct brcmf_sdio *bus) +{ + return 0; +} + +static void brcmf_sdio_debugfs_create(struct brcmf_sdio *bus) +{ +} +#endif /* DEBUG */ + static int brcmf_sdbrcm_bus_rxctl(struct device *dev, unsigned char *msg, uint msglen) { @@ -3009,60 +3319,27 @@ 
brcmf_sdbrcm_bus_rxctl(struct device *dev, unsigned char *msg, uint msglen) rxlen, msglen); } else if (timeleft == 0) { brcmf_dbg(ERROR, "resumed on timeout\n"); + brcmf_sdbrcm_checkdied(bus); } else if (pending) { brcmf_dbg(CTL, "cancelled\n"); return -ERESTARTSYS; } else { brcmf_dbg(CTL, "resumed for unknown reason?\n"); + brcmf_sdbrcm_checkdied(bus); } if (rxlen) - bus->rx_ctlpkts++; + bus->sdcnt.rx_ctlpkts++; else - bus->rx_ctlerrs++; + bus->sdcnt.rx_ctlerrs++; return rxlen ? (int)rxlen : -ETIMEDOUT; } -static int brcmf_sdbrcm_downloadvars(struct brcmf_sdio *bus, void *arg, int len) -{ - int bcmerror = 0; - - brcmf_dbg(TRACE, "Enter\n"); - - /* Basic sanity checks */ - if (bus->sdiodev->bus_if->drvr_up) { - bcmerror = -EISCONN; - goto err; - } - if (!len) { - bcmerror = -EOVERFLOW; - goto err; - } - - /* Free the old ones and replace with passed variables */ - kfree(bus->vars); - - bus->vars = kmalloc(len, GFP_ATOMIC); - bus->varsz = bus->vars ? len : 0; - if (bus->vars == NULL) { - bcmerror = -ENOMEM; - goto err; - } - - /* Copy the passed variables, which should include the - terminating double-null */ - memcpy(bus->vars, arg, bus->varsz); -err: - return bcmerror; -} - static int brcmf_sdbrcm_write_vars(struct brcmf_sdio *bus) { int bcmerror = 0; - u32 varsize; u32 varaddr; - u8 *vbuffer; u32 varsizew; __le32 varsizew_le; #ifdef DEBUG @@ -3071,56 +3348,44 @@ static int brcmf_sdbrcm_write_vars(struct brcmf_sdio *bus) /* Even if there are no vars are to be written, we still need to set the ramsize. */ - varsize = bus->varsz ? roundup(bus->varsz, 4) : 0; - varaddr = (bus->ramsize - 4) - varsize; + varaddr = (bus->ramsize - 4) - bus->varsz; if (bus->vars) { - vbuffer = kzalloc(varsize, GFP_ATOMIC); - if (!vbuffer) - return -ENOMEM; - - memcpy(vbuffer, bus->vars, bus->varsz); - /* Write the vars list */ - bcmerror = - brcmf_sdbrcm_membytes(bus, true, varaddr, vbuffer, varsize); + bcmerror = brcmf_sdbrcm_membytes(bus, true, varaddr, + bus->vars, bus->varsz); #ifdef DEBUG /* Verify NVRAM bytes */ - brcmf_dbg(INFO, "Compare NVRAM dl & ul; varsize=%d\n", varsize); - nvram_ularray = kmalloc(varsize, GFP_ATOMIC); - if (!nvram_ularray) { - kfree(vbuffer); + brcmf_dbg(INFO, "Compare NVRAM dl & ul; varsize=%d\n", + bus->varsz); + nvram_ularray = kmalloc(bus->varsz, GFP_ATOMIC); + if (!nvram_ularray) return -ENOMEM; - } /* Upload image to verify downloaded contents. 
*/ - memset(nvram_ularray, 0xaa, varsize); + memset(nvram_ularray, 0xaa, bus->varsz); /* Read the vars list to temp buffer for comparison */ - bcmerror = - brcmf_sdbrcm_membytes(bus, false, varaddr, nvram_ularray, - varsize); + bcmerror = brcmf_sdbrcm_membytes(bus, false, varaddr, + nvram_ularray, bus->varsz); if (bcmerror) { brcmf_dbg(ERROR, "error %d on reading %d nvram bytes at 0x%08x\n", - bcmerror, varsize, varaddr); + bcmerror, bus->varsz, varaddr); } /* Compare the org NVRAM with the one read from RAM */ - if (memcmp(vbuffer, nvram_ularray, varsize)) + if (memcmp(bus->vars, nvram_ularray, bus->varsz)) brcmf_dbg(ERROR, "Downloaded NVRAM image is corrupted\n"); else brcmf_dbg(ERROR, "Download/Upload/Compare of NVRAM ok\n"); kfree(nvram_ularray); #endif /* DEBUG */ - - kfree(vbuffer); } /* adjust to the user specified RAM */ brcmf_dbg(INFO, "Physical memory size: %d\n", bus->ramsize); brcmf_dbg(INFO, "Vars are at %d, orig varsize is %d\n", - varaddr, varsize); - varsize = ((bus->ramsize - 4) - varaddr); + varaddr, bus->varsz); /* * Determine the length token: @@ -3131,13 +3396,13 @@ static int brcmf_sdbrcm_write_vars(struct brcmf_sdio *bus) varsizew = 0; varsizew_le = cpu_to_le32(0); } else { - varsizew = varsize / 4; + varsizew = bus->varsz / 4; varsizew = (~varsizew << 16) | (varsizew & 0x0000FFFF); varsizew_le = cpu_to_le32(varsizew); } brcmf_dbg(INFO, "New varsize is %d, length token=0x%08x\n", - varsize, varsizew); + bus->varsz, varsizew); /* Write the length token to the last word */ bcmerror = brcmf_sdbrcm_membytes(bus, true, (bus->ramsize - 4), @@ -3261,13 +3526,21 @@ err: * by two NULs. */ -static uint brcmf_process_nvram_vars(char *varbuf, uint len) +static int brcmf_process_nvram_vars(struct brcmf_sdio *bus) { + char *varbuf; char *dp; bool findNewline; int column; - uint buf_len, n; + int ret = 0; + uint buf_len, n, len; + len = bus->firmware->size; + varbuf = vmalloc(len); + if (!varbuf) + return -ENOMEM; + + memcpy(varbuf, bus->firmware->data, len); dp = varbuf; findNewline = false; @@ -3296,56 +3569,44 @@ static uint brcmf_process_nvram_vars(char *varbuf, uint len) column++; } buf_len = dp - varbuf; - while (dp < varbuf + n) *dp++ = 0; - return buf_len; + kfree(bus->vars); + /* roundup needed for download to device */ + bus->varsz = roundup(buf_len + 1, 4); + bus->vars = kmalloc(bus->varsz, GFP_KERNEL); + if (bus->vars == NULL) { + bus->varsz = 0; + ret = -ENOMEM; + goto err; + } + + /* copy the processed variables and add null termination */ + memcpy(bus->vars, varbuf, buf_len); + bus->vars[buf_len] = 0; +err: + vfree(varbuf); + return ret; } static int brcmf_sdbrcm_download_nvram(struct brcmf_sdio *bus) { - uint len; - char *memblock = NULL; - char *bufp; int ret; + if (bus->sdiodev->bus_if->drvr_up) + return -EISCONN; + ret = request_firmware(&bus->firmware, BRCMF_SDIO_NV_NAME, &bus->sdiodev->func[2]->dev); if (ret) { brcmf_dbg(ERROR, "Fail to request nvram %d\n", ret); return ret; } - bus->fw_ptr = 0; - - memblock = kmalloc(MEMBLOCK, GFP_ATOMIC); - if (memblock == NULL) { - ret = -ENOMEM; - goto err; - } - - len = brcmf_sdbrcm_get_image(memblock, MEMBLOCK, bus); - - if (len > 0 && len < MEMBLOCK) { - bufp = (char *)memblock; - bufp[len] = 0; - len = brcmf_process_nvram_vars(bufp, len); - bufp += len; - *bufp++ = 0; - if (len) - ret = brcmf_sdbrcm_downloadvars(bus, memblock, len + 1); - if (ret) - brcmf_dbg(ERROR, "error downloading vars: %d\n", ret); - } else { - brcmf_dbg(ERROR, "error reading nvram file: %d\n", len); - ret = -EIO; - } -err: - kfree(memblock); + ret 
= brcmf_process_nvram_vars(bus); release_firmware(bus->firmware); - bus->fw_ptr = 0; return ret; } @@ -3419,7 +3680,7 @@ static int brcmf_sdbrcm_bus_init(struct device *dev) return 0; /* Start the watchdog timer */ - bus->tickcnt = 0; + bus->sdcnt.tickcnt = 0; brcmf_sdbrcm_wd_timer(bus, BRCMF_WD_POLL_MS); down(&bus->sdsem); @@ -3512,7 +3773,7 @@ void brcmf_sdbrcm_isr(void *arg) return; } /* Count the interrupt call */ - bus->intrcount++; + bus->sdcnt.intrcount++; bus->ipend = true; /* Shouldn't get this interrupt if we're sleeping? */ @@ -3554,7 +3815,8 @@ static bool brcmf_sdbrcm_bus_watchdog(struct brcmf_sdio *bus) bus->polltick = 0; /* Check device if no interrupts */ - if (!bus->intr || (bus->intrcount == bus->lastintrs)) { + if (!bus->intr || + (bus->sdcnt.intrcount == bus->sdcnt.lastintrs)) { if (!bus->dpc_sched) { u8 devpend; @@ -3569,7 +3831,7 @@ static bool brcmf_sdbrcm_bus_watchdog(struct brcmf_sdio *bus) /* If there is something, make like the ISR and schedule the DPC */ if (intstatus) { - bus->pollcnt++; + bus->sdcnt.pollcnt++; bus->ipend = true; bus->dpc_sched = true; @@ -3581,7 +3843,7 @@ static bool brcmf_sdbrcm_bus_watchdog(struct brcmf_sdio *bus) } /* Update interrupt tracking */ - bus->lastintrs = bus->intrcount; + bus->sdcnt.lastintrs = bus->sdcnt.intrcount; } #ifdef DEBUG /* Poll for console output periodically */ @@ -3623,6 +3885,8 @@ static bool brcmf_sdbrcm_chipmatch(u16 chipid) return true; if (chipid == BCM4330_CHIP_ID) return true; + if (chipid == BCM4334_CHIP_ID) + return true; return false; } @@ -3793,7 +4057,7 @@ brcmf_sdbrcm_watchdog_thread(void *data) if (!wait_for_completion_interruptible(&bus->watchdog_wait)) { brcmf_sdbrcm_bus_watchdog(bus); /* Count the tick for reference */ - bus->tickcnt++; + bus->sdcnt.tickcnt++; } else break; } @@ -3856,6 +4120,10 @@ void *brcmf_sdbrcm_probe(u32 regsva, struct brcmf_sdio_dev *sdiodev) { int ret; struct brcmf_sdio *bus; + struct brcmf_bus_dcmd *dlst; + u32 dngl_txglom; + u32 dngl_txglomalign; + u8 idx; brcmf_dbg(TRACE, "Enter\n"); @@ -3938,8 +4206,29 @@ void *brcmf_sdbrcm_probe(u32 regsva, struct brcmf_sdio_dev *sdiodev) goto fail; } + brcmf_sdio_debugfs_create(bus); brcmf_dbg(INFO, "completed!!\n"); + /* sdio bus core specific dcmd */ + idx = brcmf_sdio_chip_getinfidx(bus->ci, BCMA_CORE_SDIO_DEV); + dlst = kzalloc(sizeof(struct brcmf_bus_dcmd), GFP_KERNEL); + if (dlst) { + if (bus->ci->c_inf[idx].rev < 12) { + /* for sdio core rev < 12, disable txgloming */ + dngl_txglom = 0; + dlst->name = "bus:txglom"; + dlst->param = (char *)&dngl_txglom; + dlst->param_len = sizeof(u32); + } else { + /* otherwise, set txglomalign */ + dngl_txglomalign = bus->sdiodev->bus_if->align; + dlst->name = "bus:txglomalign"; + dlst->param = (char *)&dngl_txglomalign; + dlst->param_len = sizeof(u32); + } + list_add(&dlst->list, &bus->sdiodev->bus_if->dcmd_list); + } + /* if firmware path present try to download and bring up bus */ ret = brcmf_bus_start(bus->sdiodev->dev); if (ret != 0) { diff --git a/drivers/net/wireless/brcm80211/brcmfmac/sdio_chip.c b/drivers/net/wireless/brcm80211/brcmfmac/sdio_chip.c index f8e1f1c84d08..58155e23d220 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/sdio_chip.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/sdio_chip.c @@ -403,6 +403,23 @@ static int brcmf_sdio_chip_recognition(struct brcmf_sdio_dev *sdiodev, ci->c_inf[3].cib = 0x03004211; ci->ramsize = 0x48000; break; + case BCM4334_CHIP_ID: + ci->c_inf[0].wrapbase = 0x18100000; + ci->c_inf[0].cib = 0x29004211; + ci->c_inf[1].id = BCMA_CORE_SDIO_DEV; + 
ci->c_inf[1].base = 0x18002000; + ci->c_inf[1].wrapbase = 0x18102000; + ci->c_inf[1].cib = 0x0d004211; + ci->c_inf[2].id = BCMA_CORE_INTERNAL_MEM; + ci->c_inf[2].base = 0x18004000; + ci->c_inf[2].wrapbase = 0x18104000; + ci->c_inf[2].cib = 0x13080401; + ci->c_inf[3].id = BCMA_CORE_ARM_CM3; + ci->c_inf[3].base = 0x18003000; + ci->c_inf[3].wrapbase = 0x18103000; + ci->c_inf[3].cib = 0x07004211; + ci->ramsize = 0x80000; + break; default: brcmf_dbg(ERROR, "chipid 0x%x is not supported\n", ci->chip); return -ENODEV; diff --git a/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c b/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c index d13ae9c299f2..28c5fbb4af26 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c @@ -691,9 +691,10 @@ scan_out: } static s32 -brcmf_cfg80211_scan(struct wiphy *wiphy, struct net_device *ndev, +brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) { + struct net_device *ndev = request->wdev->netdev; s32 err = 0; WL_TRACE("Enter\n"); @@ -919,9 +920,7 @@ brcmf_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *ndev, set_bit(WL_STATUS_CONNECTING, &cfg_priv->status); if (params->bssid) - WL_CONN("BSSID: %02X %02X %02X %02X %02X %02X\n", - params->bssid[0], params->bssid[1], params->bssid[2], - params->bssid[3], params->bssid[4], params->bssid[5]); + WL_CONN("BSSID: %pM\n", params->bssid); else WL_CONN("No BSSID specified\n"); diff --git a/drivers/net/wireless/brcm80211/brcmsmac/aiutils.c b/drivers/net/wireless/brcm80211/brcmsmac/aiutils.c index 6d8b7213643a..8c9345dd37d2 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/aiutils.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/aiutils.c @@ -318,10 +318,6 @@ #define IS_SIM(chippkg) \ ((chippkg == HDLSIM_PKG_ID) || (chippkg == HWSIM_PKG_ID)) -#define PCIE(sih) (ai_get_buscoretype(sih) == PCIE_CORE_ID) - -#define PCI_FORCEHT(sih) (PCIE(sih) && (ai_get_chip_id(sih) == BCM4716_CHIP_ID)) - #ifdef DEBUG #define SI_MSG(fmt, ...) 
pr_debug(fmt, ##__VA_ARGS__) #else @@ -473,9 +469,6 @@ ai_buscore_setup(struct si_info *sii, struct bcma_device *cc) sii->pub.pmurev = sii->pub.pmucaps & PCAP_REV_MASK; } - /* figure out buscore */ - sii->buscore = ai_findcore(&sii->pub, PCIE_CORE_ID, 0); - return true; } @@ -483,11 +476,7 @@ static struct si_info *ai_doattach(struct si_info *sii, struct bcma_bus *pbus) { struct si_pub *sih = &sii->pub; - u32 w, savewin; struct bcma_device *cc; - struct ssb_sprom *sprom = &pbus->sprom; - - savewin = 0; sii->icbus = pbus; sii->pcibus = pbus->host_pci; @@ -510,47 +499,7 @@ static struct si_info *ai_doattach(struct si_info *sii, /* PMU specific initializations */ if (ai_get_cccaps(sih) & CC_CAP_PMU) { - si_pmu_init(sih); (void)si_pmu_measure_alpclk(sih); - si_pmu_res_init(sih); - } - - /* setup the GPIO based LED powersave register */ - w = (sprom->leddc_on_time << BCMA_CC_GPIOTIMER_ONTIME_SHIFT) | - (sprom->leddc_off_time << BCMA_CC_GPIOTIMER_OFFTIME_SHIFT); - if (w == 0) - w = DEFAULT_GPIOTIMERVAL; - ai_cc_reg(sih, offsetof(struct chipcregs, gpiotimerval), - ~0, w); - - if (ai_get_chip_id(sih) == BCM43224_CHIP_ID) { - /* - * enable 12 mA drive strenth for 43224 and - * set chipControl register bit 15 - */ - if (ai_get_chiprev(sih) == 0) { - SI_MSG("Applying 43224A0 WARs\n"); - ai_cc_reg(sih, offsetof(struct chipcregs, chipcontrol), - CCTRL43224_GPIO_TOGGLE, - CCTRL43224_GPIO_TOGGLE); - si_pmu_chipcontrol(sih, 0, CCTRL_43224A0_12MA_LED_DRIVE, - CCTRL_43224A0_12MA_LED_DRIVE); - } - if (ai_get_chiprev(sih) >= 1) { - SI_MSG("Applying 43224B0+ WARs\n"); - si_pmu_chipcontrol(sih, 0, CCTRL_43224B0_12MA_LED_DRIVE, - CCTRL_43224B0_12MA_LED_DRIVE); - } - } - - if (ai_get_chip_id(sih) == BCM4313_CHIP_ID) { - /* - * enable 12 mA drive strenth for 4313 and - * set chipControl register bit 1 - */ - SI_MSG("Applying 4313 WARs\n"); - si_pmu_chipcontrol(sih, 0, CCTRL_4313_12MA_LED_DRIVE, - CCTRL_4313_12MA_LED_DRIVE); } return sii; @@ -589,7 +538,7 @@ void ai_detach(struct si_pub *sih) struct si_pub *si_local = NULL; memcpy(&si_local, &sih, sizeof(struct si_pub **)); - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); if (sii == NULL) return; @@ -597,27 +546,6 @@ void ai_detach(struct si_pub *sih) kfree(sii); } -/* return index of coreid or BADIDX if not found */ -struct bcma_device *ai_findcore(struct si_pub *sih, u16 coreid, u16 coreunit) -{ - struct bcma_device *core; - struct si_info *sii; - uint found; - - sii = (struct si_info *)sih; - - found = 0; - - list_for_each_entry(core, &sii->icbus->cores, list) - if (core->id.id == coreid) { - if (found == coreunit) - return core; - found++; - } - - return NULL; -} - /* * read/modify chipcommon core register. 
*/ @@ -627,13 +555,12 @@ uint ai_cc_reg(struct si_pub *sih, uint regoff, u32 mask, u32 val) u32 w; struct si_info *sii; - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); cc = sii->icbus->drv_cc.core; /* mask and set */ - if (mask || val) { + if (mask || val) bcma_maskset32(cc, regoff, ~mask, val); - } /* readback */ w = bcma_read32(cc, regoff); @@ -694,12 +621,13 @@ ai_clkctl_setdelay(struct si_pub *sih, struct bcma_device *cc) /* initialize power control delay registers */ void ai_clkctl_init(struct si_pub *sih) { + struct si_info *sii = container_of(sih, struct si_info, pub); struct bcma_device *cc; if (!(ai_get_cccaps(sih) & CC_CAP_PWR_CTL)) return; - cc = ai_findcore(sih, BCMA_CORE_CHIPCOMMON, 0); + cc = sii->icbus->drv_cc.core; if (cc == NULL) return; @@ -721,7 +649,7 @@ u16 ai_clkctl_fast_pwrup_delay(struct si_pub *sih) uint slowminfreq; u16 fpdelay; - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); if (ai_get_cccaps(sih) & CC_CAP_PMU) { fpdelay = si_pmu_fast_pwrup_delay(sih); return fpdelay; @@ -731,7 +659,7 @@ u16 ai_clkctl_fast_pwrup_delay(struct si_pub *sih) return 0; fpdelay = 0; - cc = ai_findcore(sih, CC_CORE_ID, 0); + cc = sii->icbus->drv_cc.core; if (cc) { slowminfreq = ai_slowclk_freq(sih, false, cc); fpdelay = (((bcma_read32(cc, CHIPCREGOFFS(pll_on_delay)) + 2) @@ -753,12 +681,9 @@ bool ai_clkctl_cc(struct si_pub *sih, enum bcma_clkmode mode) struct si_info *sii; struct bcma_device *cc; - sii = (struct si_info *)sih; - - if (PCI_FORCEHT(sih)) - return mode == BCMA_CLKMODE_FAST; + sii = container_of(sih, struct si_info, pub); - cc = ai_findcore(&sii->pub, BCMA_CORE_CHIPCOMMON, 0); + cc = sii->icbus->drv_cc.core; bcma_core_set_clockmode(cc, mode); return mode == BCMA_CLKMODE_FAST; } @@ -766,16 +691,10 @@ bool ai_clkctl_cc(struct si_pub *sih, enum bcma_clkmode mode) void ai_pci_up(struct si_pub *sih) { struct si_info *sii; - struct bcma_device *cc; - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); - if (PCI_FORCEHT(sih)) { - cc = ai_findcore(&sii->pub, BCMA_CORE_CHIPCOMMON, 0); - bcma_core_set_clockmode(cc, BCMA_CLKMODE_FAST); - } - - if (PCIE(sih)) + if (sii->icbus->hosttype == BCMA_HOSTTYPE_PCI) bcma_core_pci_extend_L1timer(&sii->icbus->drv_pci, true); } @@ -783,26 +702,20 @@ void ai_pci_up(struct si_pub *sih) void ai_pci_down(struct si_pub *sih) { struct si_info *sii; - struct bcma_device *cc; - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); - /* release FORCEHT since chip is going to "down" state */ - if (PCI_FORCEHT(sih)) { - cc = ai_findcore(&sii->pub, BCMA_CORE_CHIPCOMMON, 0); - bcma_core_set_clockmode(cc, BCMA_CLKMODE_DYNAMIC); - } - - if (PCIE(sih)) + if (sii->icbus->hosttype == BCMA_HOSTTYPE_PCI) bcma_core_pci_extend_L1timer(&sii->icbus->drv_pci, false); } /* Enable BT-COEX & Ex-PA for 4313 */ void ai_epa_4313war(struct si_pub *sih) { + struct si_info *sii = container_of(sih, struct si_info, pub); struct bcma_device *cc; - cc = ai_findcore(sih, CC_CORE_ID, 0); + cc = sii->icbus->drv_cc.core; /* EPA Fix */ bcma_set32(cc, CHIPCREGOFFS(gpiocontrol), GPIO_CTRL_EPA_EN_MASK); @@ -814,7 +727,7 @@ bool ai_deviceremoved(struct si_pub *sih) u32 w; struct si_info *sii; - sii = (struct si_info *)sih; + sii = container_of(sih, struct si_info, pub); if (sii->icbus->hosttype != BCMA_HOSTTYPE_PCI) return false; @@ -825,15 +738,3 @@ bool ai_deviceremoved(struct si_pub *sih) return false; } - -uint ai_get_buscoretype(struct si_pub *sih) -{ - struct si_info *sii = 
(struct si_info *)sih; - return sii->buscore->id.id; -} - -uint ai_get_buscorerev(struct si_pub *sih) -{ - struct si_info *sii = (struct si_info *)sih; - return sii->buscore->id.rev; -} diff --git a/drivers/net/wireless/brcm80211/brcmsmac/aiutils.h b/drivers/net/wireless/brcm80211/brcmsmac/aiutils.h index d9f04a683bdb..89562c1fbf49 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/aiutils.h +++ b/drivers/net/wireless/brcm80211/brcmsmac/aiutils.h @@ -88,16 +88,6 @@ #define CLKD_OTP 0x000f0000 #define CLKD_OTP_SHIFT 16 -/* Package IDs */ -#define BCM4717_PKG_ID 9 /* 4717 package id */ -#define BCM4718_PKG_ID 10 /* 4718 package id */ -#define BCM43224_FAB_SMIC 0xa /* the chip is manufactured by SMIC */ - -/* these are router chips */ -#define BCM4716_CHIP_ID 0x4716 /* 4716 chipcommon chipid */ -#define BCM47162_CHIP_ID 47162 /* 47162 chipcommon chipid */ -#define BCM4748_CHIP_ID 0x4748 /* 4716 chipcommon chipid (OTP, RBBU) */ - /* dynamic clock control defines */ #define LPOMINFREQ 25000 /* low power oscillator min */ #define LPOMAXFREQ 43000 /* low power oscillator max */ @@ -168,7 +158,6 @@ struct si_info { struct si_pub pub; /* back plane public state (must be first) */ struct bcma_bus *icbus; /* handle to soc interconnect bus */ struct pci_dev *pcibus; /* handle to pci bus */ - struct bcma_device *buscore; u32 chipst; /* chip status */ }; @@ -183,8 +172,6 @@ struct si_info { /* AMBA Interconnect exported externs */ -extern struct bcma_device *ai_findcore(struct si_pub *sih, - u16 coreid, u16 coreunit); extern u32 ai_core_cflags(struct bcma_device *core, u32 mask, u32 val); /* === exported functions === */ @@ -193,7 +180,7 @@ extern void ai_detach(struct si_pub *sih); extern uint ai_cc_reg(struct si_pub *sih, uint regoff, u32 mask, u32 val); extern void ai_clkctl_init(struct si_pub *sih); extern u16 ai_clkctl_fast_pwrup_delay(struct si_pub *sih); -extern bool ai_clkctl_cc(struct si_pub *sih, uint mode); +extern bool ai_clkctl_cc(struct si_pub *sih, enum bcma_clkmode mode); extern bool ai_deviceremoved(struct si_pub *sih); extern void ai_pci_down(struct si_pub *sih); @@ -202,9 +189,6 @@ extern void ai_pci_up(struct si_pub *sih); /* Enable Ex-PA for 4313 */ extern void ai_epa_4313war(struct si_pub *sih); -extern uint ai_get_buscoretype(struct si_pub *sih); -extern uint ai_get_buscorerev(struct si_pub *sih); - static inline u32 ai_get_cccaps(struct si_pub *sih) { return sih->cccaps; diff --git a/drivers/net/wireless/brcm80211/brcmsmac/ampdu.c b/drivers/net/wireless/brcm80211/brcmsmac/ampdu.c index 95b5902bc4b3..be5bcfb9153b 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/ampdu.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/ampdu.c @@ -663,9 +663,6 @@ brcms_c_sendampdu(struct ampdu_info *ampdu, struct brcms_txq_info *qi, /* patch the first MPDU */ if (count == 1) { u8 plcp0, plcp3, is40, sgi; - struct ieee80211_sta *sta; - - sta = tx_info->control.sta; if (rr) { plcp0 = plcp[0]; @@ -735,10 +732,8 @@ brcms_c_sendampdu(struct ampdu_info *ampdu, struct brcms_txq_info *qi, * a candidate for aggregation */ p = pktq_ppeek(&qi->q, prec); - /* tx_info must be checked with current p */ - tx_info = IEEE80211_SKB_CB(p); - if (p) { + tx_info = IEEE80211_SKB_CB(p); if ((tx_info->flags & IEEE80211_TX_CTL_AMPDU) && ((u8) (p->priority) == tid)) { plen = p->len + AMPDU_MAX_MPDU_OVERHEAD; @@ -759,6 +754,7 @@ brcms_c_sendampdu(struct ampdu_info *ampdu, struct brcms_txq_info *qi, p = NULL; continue; } + /* next packet fit for aggregation so dequeue */ p = brcmu_pktq_pdeq(&qi->q, prec); } else { p = NULL; 
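As an aside to the aiutils.c/aiutils.h hunks above: they replace open-coded casts such as "(struct si_info *)sih" with container_of(sih, struct si_info, pub), recovering the private struct si_info from its embedded public struct si_pub. A minimal standalone sketch of that idiom follows (not patch content; the struct members and the my_container_of() helper are illustrative stand-ins, only the container_of pattern itself mirrors the patch):

    #include <stddef.h>

    struct si_pub {
            unsigned int cccaps;    /* public state, embedded below */
    };

    struct si_info {
            struct si_pub pub;      /* public state embedded in private struct */
            void *icbus;            /* private handle (placeholder) */
    };

    /* standalone equivalent of the kernel's container_of() */
    #define my_container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    static struct si_info *to_si_info(struct si_pub *sih)
    {
            /* map the public handle back to the enclosing private struct */
            return my_container_of(sih, struct si_info, pub);
    }

Unlike the plain cast, which is only valid while pub sits at offset zero, this form keeps working no matter where the embedded member is placed inside struct si_info.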
@@ -1196,8 +1192,8 @@ static bool cb_del_ampdu_pkt(struct sk_buff *mpdu, void *arg_a) bool rc; rc = tx_info->flags & IEEE80211_TX_CTL_AMPDU ? true : false; - rc = rc && (tx_info->control.sta == NULL || ampdu_pars->sta == NULL || - tx_info->control.sta == ampdu_pars->sta); + rc = rc && (tx_info->rate_driver_data[0] == NULL || ampdu_pars->sta == NULL || + tx_info->rate_driver_data[0] == ampdu_pars->sta); rc = rc && ((u8)(mpdu->priority) == ampdu_pars->tid); return rc; } @@ -1211,8 +1207,8 @@ static void dma_cb_fn_ampdu(void *txi, void *arg_a) struct ieee80211_tx_info *tx_info = (struct ieee80211_tx_info *)txi; if ((tx_info->flags & IEEE80211_TX_CTL_AMPDU) && - (tx_info->control.sta == sta || sta == NULL)) - tx_info->control.sta = NULL; + (tx_info->rate_driver_data[0] == sta || sta == NULL)) + tx_info->rate_driver_data[0] = NULL; } /* diff --git a/drivers/net/wireless/brcm80211/brcmsmac/channel.c b/drivers/net/wireless/brcm80211/brcmsmac/channel.c index eb77ac3cfb6b..9a4c63f927cb 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/channel.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/channel.c @@ -15,7 +15,9 @@ */ #include <linux/types.h> +#include <net/cfg80211.h> #include <net/mac80211.h> +#include <net/regulatory.h> #include <defs.h> #include "pub.h" @@ -23,73 +25,17 @@ #include "main.h" #include "stf.h" #include "channel.h" +#include "mac80211_if.h" /* QDB() macro takes a dB value and converts to a quarter dB value */ #define QDB(n) ((n) * BRCMS_TXPWR_DB_FACTOR) -#define LOCALE_CHAN_01_11 (1<<0) -#define LOCALE_CHAN_12_13 (1<<1) -#define LOCALE_CHAN_14 (1<<2) -#define LOCALE_SET_5G_LOW_JP1 (1<<3) /* 34-48, step 2 */ -#define LOCALE_SET_5G_LOW_JP2 (1<<4) /* 34-46, step 4 */ -#define LOCALE_SET_5G_LOW1 (1<<5) /* 36-48, step 4 */ -#define LOCALE_SET_5G_LOW2 (1<<6) /* 52 */ -#define LOCALE_SET_5G_LOW3 (1<<7) /* 56-64, step 4 */ -#define LOCALE_SET_5G_MID1 (1<<8) /* 100-116, step 4 */ -#define LOCALE_SET_5G_MID2 (1<<9) /* 120-124, step 4 */ -#define LOCALE_SET_5G_MID3 (1<<10) /* 128 */ -#define LOCALE_SET_5G_HIGH1 (1<<11) /* 132-140, step 4 */ -#define LOCALE_SET_5G_HIGH2 (1<<12) /* 149-161, step 4 */ -#define LOCALE_SET_5G_HIGH3 (1<<13) /* 165 */ -#define LOCALE_CHAN_52_140_ALL (1<<14) -#define LOCALE_SET_5G_HIGH4 (1<<15) /* 184-216 */ - -#define LOCALE_CHAN_36_64 (LOCALE_SET_5G_LOW1 | \ - LOCALE_SET_5G_LOW2 | \ - LOCALE_SET_5G_LOW3) -#define LOCALE_CHAN_52_64 (LOCALE_SET_5G_LOW2 | LOCALE_SET_5G_LOW3) -#define LOCALE_CHAN_100_124 (LOCALE_SET_5G_MID1 | LOCALE_SET_5G_MID2) -#define LOCALE_CHAN_100_140 (LOCALE_SET_5G_MID1 | LOCALE_SET_5G_MID2 | \ - LOCALE_SET_5G_MID3 | LOCALE_SET_5G_HIGH1) -#define LOCALE_CHAN_149_165 (LOCALE_SET_5G_HIGH2 | LOCALE_SET_5G_HIGH3) -#define LOCALE_CHAN_184_216 LOCALE_SET_5G_HIGH4 - -#define LOCALE_CHAN_01_14 (LOCALE_CHAN_01_11 | \ - LOCALE_CHAN_12_13 | \ - LOCALE_CHAN_14) - -#define LOCALE_RADAR_SET_NONE 0 -#define LOCALE_RADAR_SET_1 1 - -#define LOCALE_RESTRICTED_NONE 0 -#define LOCALE_RESTRICTED_SET_2G_SHORT 1 -#define LOCALE_RESTRICTED_CHAN_165 2 -#define LOCALE_CHAN_ALL_5G 3 -#define LOCALE_RESTRICTED_JAPAN_LEGACY 4 -#define LOCALE_RESTRICTED_11D_2G 5 -#define LOCALE_RESTRICTED_11D_5G 6 -#define LOCALE_RESTRICTED_LOW_HI 7 -#define LOCALE_RESTRICTED_12_13_14 8 - -#define LOCALE_2G_IDX_i 0 -#define LOCALE_5G_IDX_11 0 #define LOCALE_MIMO_IDX_bn 0 #define LOCALE_MIMO_IDX_11n 0 -/* max of BAND_5G_PWR_LVLS and 6 for 2.4 GHz */ -#define BRCMS_MAXPWR_TBL_SIZE 6 /* max of BAND_5G_PWR_LVLS and 14 for 2.4 GHz */ #define BRCMS_MAXPWR_MIMO_TBL_SIZE 14 -/* power level in 
group of 2.4GHz band channels: - * maxpwr[0] - CCK channels [1] - * maxpwr[1] - CCK channels [2-10] - * maxpwr[2] - CCK channels [11-14] - * maxpwr[3] - OFDM channels [1] - * maxpwr[4] - OFDM channels [2-10] - * maxpwr[5] - OFDM channels [11-14] - */ - /* maxpwr mapping to 5GHz band channels: * maxpwr[0] - channels [34-48] * maxpwr[1] - channels [52-60] @@ -101,16 +47,8 @@ #define LC(id) LOCALE_MIMO_IDX_ ## id -#define LC_2G(id) LOCALE_2G_IDX_ ## id - -#define LC_5G(id) LOCALE_5G_IDX_ ## id - -#define LOCALES(band2, band5, mimo2, mimo5) \ - {LC_2G(band2), LC_5G(band5), LC(mimo2), LC(mimo5)} - -/* macro to get 2.4 GHz channel group index for tx power */ -#define CHANNEL_POWER_IDX_2G_CCK(c) (((c) < 2) ? 0 : (((c) < 11) ? 1 : 2)) -#define CHANNEL_POWER_IDX_2G_OFDM(c) (((c) < 2) ? 3 : (((c) < 11) ? 4 : 5)) +#define LOCALES(mimo2, mimo5) \ + {LC(mimo2), LC(mimo5)} /* macro to get 5 GHz channel group index for tx power */ #define CHANNEL_POWER_IDX_5G(c) (((c) < 52) ? 0 : \ @@ -118,18 +56,37 @@ (((c) < 100) ? 2 : \ (((c) < 149) ? 3 : 4)))) -#define ISDFS_EU(fl) (((fl) & BRCMS_DFS_EU) == BRCMS_DFS_EU) - -struct brcms_cm_band { - /* struct locale_info flags */ - u8 locale_flags; - /* List of valid channels in the country */ - struct brcms_chanvec valid_channels; - /* List of restricted use channels */ - const struct brcms_chanvec *restricted_channels; - /* List of radar sensitive channels */ - const struct brcms_chanvec *radar_channels; - u8 PAD[8]; +#define BRCM_2GHZ_2412_2462 REG_RULE(2412-10, 2462+10, 40, 0, 19, 0) +#define BRCM_2GHZ_2467_2472 REG_RULE(2467-10, 2472+10, 20, 0, 19, \ + NL80211_RRF_PASSIVE_SCAN | \ + NL80211_RRF_NO_IBSS) + +#define BRCM_5GHZ_5180_5240 REG_RULE(5180-10, 5240+10, 40, 0, 21, \ + NL80211_RRF_PASSIVE_SCAN | \ + NL80211_RRF_NO_IBSS) +#define BRCM_5GHZ_5260_5320 REG_RULE(5260-10, 5320+10, 40, 0, 21, \ + NL80211_RRF_PASSIVE_SCAN | \ + NL80211_RRF_DFS | \ + NL80211_RRF_NO_IBSS) +#define BRCM_5GHZ_5500_5700 REG_RULE(5500-10, 5700+10, 40, 0, 21, \ + NL80211_RRF_PASSIVE_SCAN | \ + NL80211_RRF_DFS | \ + NL80211_RRF_NO_IBSS) +#define BRCM_5GHZ_5745_5825 REG_RULE(5745-10, 5825+10, 40, 0, 21, \ + NL80211_RRF_PASSIVE_SCAN | \ + NL80211_RRF_NO_IBSS) + +static const struct ieee80211_regdomain brcms_regdom_x2 = { + .n_reg_rules = 7, + .alpha2 = "X2", + .reg_rules = { + BRCM_2GHZ_2412_2462, + BRCM_2GHZ_2467_2472, + BRCM_5GHZ_5180_5240, + BRCM_5GHZ_5260_5320, + BRCM_5GHZ_5500_5700, + BRCM_5GHZ_5745_5825, + } }; /* locale per-channel tx power limits for MIMO frames @@ -141,337 +98,23 @@ struct locale_mimo_info { s8 maxpwr20[BRCMS_MAXPWR_MIMO_TBL_SIZE]; /* tx 40 MHz power limits, qdBm units */ s8 maxpwr40[BRCMS_MAXPWR_MIMO_TBL_SIZE]; - u8 flags; }; /* Country names and abbreviations with locale defined from ISO 3166 */ struct country_info { - const u8 locale_2G; /* 2.4G band locale */ - const u8 locale_5G; /* 5G band locale */ const u8 locale_mimo_2G; /* 2.4G mimo info */ const u8 locale_mimo_5G; /* 5G mimo info */ }; +struct brcms_regd { + struct country_info country; + const struct ieee80211_regdomain *regdomain; +}; + struct brcms_cm_info { struct brcms_pub *pub; struct brcms_c_info *wlc; - char srom_ccode[BRCM_CNTRY_BUF_SZ]; /* Country Code in SROM */ - uint srom_regrev; /* Regulatory Rev for the SROM ccode */ - const struct country_info *country; /* current country def */ - char ccode[BRCM_CNTRY_BUF_SZ]; /* current internal Country Code */ - uint regrev; /* current Regulatory Revision */ - char country_abbrev[BRCM_CNTRY_BUF_SZ]; /* current advertised ccode */ - /* per-band state 
(one per phy/radio) */ - struct brcms_cm_band bandstate[MAXBANDS]; - /* quiet channels currently for radar sensitivity or 11h support */ - /* channels on which we cannot transmit */ - struct brcms_chanvec quiet_channels; -}; - -/* locale channel and power info. */ -struct locale_info { - u32 valid_channels; - /* List of radar sensitive channels */ - u8 radar_channels; - /* List of channels used only if APs are detected */ - u8 restricted_channels; - /* Max tx pwr in qdBm for each sub-band */ - s8 maxpwr[BRCMS_MAXPWR_TBL_SIZE]; - /* Country IE advertised max tx pwr in dBm per sub-band */ - s8 pub_maxpwr[BAND_5G_PWR_LVLS]; - u8 flags; -}; - -/* Regulatory Matrix Spreadsheet (CLM) MIMO v3.7.9 */ - -/* - * Some common channel sets - */ - -/* No channels */ -static const struct brcms_chanvec chanvec_none = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* All 2.4 GHz HW channels */ -static const struct brcms_chanvec chanvec_all_2G = { - {0xfe, 0x7f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* All 5 GHz HW channels */ -static const struct brcms_chanvec chanvec_all_5G = { - {0x00, 0x00, 0x00, 0x00, 0x54, 0x55, 0x11, 0x11, - 0x01, 0x00, 0x00, 0x00, 0x10, 0x11, 0x11, 0x11, - 0x11, 0x11, 0x20, 0x22, 0x22, 0x00, 0x00, 0x11, - 0x11, 0x11, 0x11, 0x01} -}; - -/* - * Radar channel sets - */ - -/* Channels 52 - 64, 100 - 140 */ -static const struct brcms_chanvec radar_set1 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x11, /* 52 - 60 */ - 0x01, 0x00, 0x00, 0x00, 0x10, 0x11, 0x11, 0x11, /* 64, 100 - 124 */ - 0x11, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 128 - 140 */ - 0x00, 0x00, 0x00, 0x00} -}; - -/* - * Restricted channel sets - */ - -/* Channels 34, 38, 42, 46 */ -static const struct brcms_chanvec restricted_set_japan_legacy = { - {0x00, 0x00, 0x00, 0x00, 0x44, 0x44, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* Channels 12, 13 */ -static const struct brcms_chanvec restricted_set_2g_short = { - {0x00, 0x30, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* Channel 165 */ -static const struct brcms_chanvec restricted_chan_165 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* Channels 36 - 48 & 149 - 165 */ -static const struct brcms_chanvec restricted_low_hi = { - {0x00, 0x00, 0x00, 0x00, 0x10, 0x11, 0x01, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x20, 0x22, 0x22, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* Channels 12 - 14 */ -static const struct brcms_chanvec restricted_set_12_13_14 = { - {0x00, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -/* global memory to provide working buffer for expanded locale */ - -static const struct brcms_chanvec *g_table_radar_set[] = { - &chanvec_none, - &radar_set1 -}; - -static const struct brcms_chanvec *g_table_restricted_chan[] = { - &chanvec_none, /* restricted_set_none */ 
- &restricted_set_2g_short, - &restricted_chan_165, - &chanvec_all_5G, - &restricted_set_japan_legacy, - &chanvec_all_2G, /* restricted_set_11d_2G */ - &chanvec_all_5G, /* restricted_set_11d_5G */ - &restricted_low_hi, - &restricted_set_12_13_14 -}; - -static const struct brcms_chanvec locale_2g_01_11 = { - {0xfe, 0x0f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_2g_12_13 = { - {0x00, 0x30, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_2g_14 = { - {0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_LOW_JP1 = { - {0x00, 0x00, 0x00, 0x00, 0x54, 0x55, 0x01, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_LOW_JP2 = { - {0x00, 0x00, 0x00, 0x00, 0x44, 0x44, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_LOW1 = { - {0x00, 0x00, 0x00, 0x00, 0x10, 0x11, 0x01, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_LOW2 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_LOW3 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, - 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_MID1 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x10, 0x11, 0x11, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_MID2 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_MID3 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_HIGH1 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x10, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_HIGH2 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x20, 0x22, 0x02, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_HIGH3 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, - 0x00, 
0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_52_140_ALL = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x11, - 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, - 0x11, 0x11, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00} -}; - -static const struct brcms_chanvec locale_5g_HIGH4 = { - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, - 0x11, 0x11, 0x11, 0x11} -}; - -static const struct brcms_chanvec *g_table_locale_base[] = { - &locale_2g_01_11, - &locale_2g_12_13, - &locale_2g_14, - &locale_5g_LOW_JP1, - &locale_5g_LOW_JP2, - &locale_5g_LOW1, - &locale_5g_LOW2, - &locale_5g_LOW3, - &locale_5g_MID1, - &locale_5g_MID2, - &locale_5g_MID3, - &locale_5g_HIGH1, - &locale_5g_HIGH2, - &locale_5g_HIGH3, - &locale_5g_52_140_ALL, - &locale_5g_HIGH4 -}; - -static void brcms_c_locale_add_channels(struct brcms_chanvec *target, - const struct brcms_chanvec *channels) -{ - u8 i; - for (i = 0; i < sizeof(struct brcms_chanvec); i++) - target->vec[i] |= channels->vec[i]; -} - -static void brcms_c_locale_get_channels(const struct locale_info *locale, - struct brcms_chanvec *channels) -{ - u8 i; - - memset(channels, 0, sizeof(struct brcms_chanvec)); - - for (i = 0; i < ARRAY_SIZE(g_table_locale_base); i++) { - if (locale->valid_channels & (1 << i)) - brcms_c_locale_add_channels(channels, - g_table_locale_base[i]); - } -} - -/* - * Locale Definitions - 2.4 GHz - */ -static const struct locale_info locale_i = { /* locale i. channel 1 - 13 */ - LOCALE_CHAN_01_11 | LOCALE_CHAN_12_13, - LOCALE_RADAR_SET_NONE, - LOCALE_RESTRICTED_SET_2G_SHORT, - {QDB(19), QDB(19), QDB(19), - QDB(19), QDB(19), QDB(19)}, - {20, 20, 20, 0}, - BRCMS_EIRP -}; - -/* - * Locale Definitions - 5 GHz - */ -static const struct locale_info locale_11 = { - /* locale 11. 
channel 36 - 48, 52 - 64, 100 - 140, 149 - 165 */ - LOCALE_CHAN_36_64 | LOCALE_CHAN_100_140 | LOCALE_CHAN_149_165, - LOCALE_RADAR_SET_1, - LOCALE_RESTRICTED_NONE, - {QDB(21), QDB(21), QDB(21), QDB(21), QDB(21)}, - {23, 23, 23, 30, 30}, - BRCMS_EIRP | BRCMS_DFS_EU -}; - -static const struct locale_info *g_locale_2g_table[] = { - &locale_i -}; - -static const struct locale_info *g_locale_5g_table[] = { - &locale_11 + const struct brcms_regd *world_regd; }; /* @@ -484,7 +127,6 @@ static const struct locale_mimo_info locale_bn = { {0, 0, QDB(13), QDB(13), QDB(13), QDB(13), QDB(13), QDB(13), QDB(13), QDB(13), QDB(13), 0, 0}, - 0 }; static const struct locale_mimo_info *g_mimo_2g_table[] = { @@ -497,114 +139,20 @@ static const struct locale_mimo_info *g_mimo_2g_table[] = { static const struct locale_mimo_info locale_11n = { { /* 12.5 dBm */ 50, 50, 50, QDB(15), QDB(15)}, {QDB(14), QDB(15), QDB(15), QDB(15), QDB(15)}, - 0 }; static const struct locale_mimo_info *g_mimo_5g_table[] = { &locale_11n }; -static const struct { - char abbrev[BRCM_CNTRY_BUF_SZ]; /* country abbreviation */ - struct country_info country; -} cntry_locales[] = { +static const struct brcms_regd cntry_locales[] = { + /* Worldwide RoW 2, must always be at index 0 */ { - "X2", LOCALES(i, 11, bn, 11n)}, /* Worldwide RoW 2 */ -}; - -#ifdef SUPPORT_40MHZ -/* 20MHz channel info for 40MHz pairing support */ -struct chan20_info { - u8 sb; - u8 adj_sbs; + .country = LOCALES(bn, 11n), + .regdomain = &brcms_regdom_x2, + }, }; -/* indicates adjacent channels that are allowed for a 40 Mhz channel and - * those that permitted by the HT - */ -struct chan20_info chan20_info[] = { - /* 11b/11g */ -/* 0 */ {1, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 1 */ {2, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 2 */ {3, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 3 */ {4, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 4 */ {5, (CH_UPPER_SB | CH_LOWER_SB | CH_EWA_VALID)}, -/* 5 */ {6, (CH_UPPER_SB | CH_LOWER_SB | CH_EWA_VALID)}, -/* 6 */ {7, (CH_UPPER_SB | CH_LOWER_SB | CH_EWA_VALID)}, -/* 7 */ {8, (CH_UPPER_SB | CH_LOWER_SB | CH_EWA_VALID)}, -/* 8 */ {9, (CH_UPPER_SB | CH_LOWER_SB | CH_EWA_VALID)}, -/* 9 */ {10, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 10 */ {11, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 11 */ {12, (CH_LOWER_SB)}, -/* 12 */ {13, (CH_LOWER_SB)}, -/* 13 */ {14, (CH_LOWER_SB)}, - -/* 11a japan high */ -/* 14 */ {34, (CH_UPPER_SB)}, -/* 15 */ {38, (CH_LOWER_SB)}, -/* 16 */ {42, (CH_LOWER_SB)}, -/* 17 */ {46, (CH_LOWER_SB)}, - -/* 11a usa low */ -/* 18 */ {36, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 19 */ {40, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 20 */ {44, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 21 */ {48, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 22 */ {52, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 23 */ {56, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 24 */ {60, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 25 */ {64, (CH_LOWER_SB | CH_EWA_VALID)}, - -/* 11a Europe */ -/* 26 */ {100, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 27 */ {104, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 28 */ {108, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 29 */ {112, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 30 */ {116, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 31 */ {120, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 32 */ {124, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 33 */ {128, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 34 */ {132, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 35 */ {136, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 36 */ {140, (CH_LOWER_SB)}, - -/* 11a usa high, ref5 only */ -/* The 0x80 bit in pdiv means these are REF5, other entries are REF20 */ -/* 37 */ {149, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 38 */ {153, 
(CH_LOWER_SB | CH_EWA_VALID)}, -/* 39 */ {157, (CH_UPPER_SB | CH_EWA_VALID)}, -/* 40 */ {161, (CH_LOWER_SB | CH_EWA_VALID)}, -/* 41 */ {165, (CH_LOWER_SB)}, - -/* 11a japan */ -/* 42 */ {184, (CH_UPPER_SB)}, -/* 43 */ {188, (CH_LOWER_SB)}, -/* 44 */ {192, (CH_UPPER_SB)}, -/* 45 */ {196, (CH_LOWER_SB)}, -/* 46 */ {200, (CH_UPPER_SB)}, -/* 47 */ {204, (CH_LOWER_SB)}, -/* 48 */ {208, (CH_UPPER_SB)}, -/* 49 */ {212, (CH_LOWER_SB)}, -/* 50 */ {216, (CH_LOWER_SB)} -}; -#endif /* SUPPORT_40MHZ */ - -static const struct locale_info *brcms_c_get_locale_2g(u8 locale_idx) -{ - if (locale_idx >= ARRAY_SIZE(g_locale_2g_table)) - return NULL; /* error condition */ - - return g_locale_2g_table[locale_idx]; -} - -static const struct locale_info *brcms_c_get_locale_5g(u8 locale_idx) -{ - if (locale_idx >= ARRAY_SIZE(g_locale_5g_table)) - return NULL; /* error condition */ - - return g_locale_5g_table[locale_idx]; -} - static const struct locale_mimo_info *brcms_c_get_mimo_2g(u8 locale_idx) { if (locale_idx >= ARRAY_SIZE(g_mimo_2g_table)) @@ -621,13 +169,6 @@ static const struct locale_mimo_info *brcms_c_get_mimo_5g(u8 locale_idx) return g_mimo_5g_table[locale_idx]; } -static int -brcms_c_country_aggregate_map(struct brcms_cm_info *wlc_cm, const char *ccode, - char *mapped_ccode, uint *mapped_regrev) -{ - return false; -} - /* * Indicates whether the country provided is valid to pass * to cfg80211 or not. @@ -662,155 +203,24 @@ static bool brcms_c_country_valid(const char *ccode) return true; } -/* Lookup a country info structure from a null terminated country - * abbreviation and regrev directly with no translation. - */ -static const struct country_info * -brcms_c_country_lookup_direct(const char *ccode, uint regrev) +static const struct brcms_regd *brcms_world_regd(const char *regdom, int len) { - uint size, i; - - /* Should just return 0 for single locale driver. */ - /* Keep it this way in case we add more locales. (for now anyway) */ - - /* - * all other country def arrays are for regrev == 0, so if - * regrev is non-zero, fail - */ - if (regrev > 0) - return NULL; - - /* find matched table entry from country code */ - size = ARRAY_SIZE(cntry_locales); - for (i = 0; i < size; i++) { - if (strcmp(ccode, cntry_locales[i].abbrev) == 0) - return &cntry_locales[i].country; - } - return NULL; -} - -static const struct country_info * -brcms_c_countrycode_map(struct brcms_cm_info *wlc_cm, const char *ccode, - char *mapped_ccode, uint *mapped_regrev) -{ - struct brcms_c_info *wlc = wlc_cm->wlc; - const struct country_info *country; - uint srom_regrev = wlc_cm->srom_regrev; - const char *srom_ccode = wlc_cm->srom_ccode; - int mapped; - - /* check for currently supported ccode size */ - if (strlen(ccode) > (BRCM_CNTRY_BUF_SZ - 1)) { - wiphy_err(wlc->wiphy, "wl%d: %s: ccode \"%s\" too long for " - "match\n", wlc->pub->unit, __func__, ccode); - return NULL; - } - - /* default mapping is the given ccode and regrev 0 */ - strncpy(mapped_ccode, ccode, BRCM_CNTRY_BUF_SZ); - *mapped_regrev = 0; - - /* If the desired country code matches the srom country code, - * then the mapped country is the srom regulatory rev. - * Otherwise look for an aggregate mapping. 
- */ - if (!strcmp(srom_ccode, ccode)) { - *mapped_regrev = srom_regrev; - mapped = 0; - wiphy_err(wlc->wiphy, "srom_code == ccode %s\n", __func__); - } else { - mapped = - brcms_c_country_aggregate_map(wlc_cm, ccode, mapped_ccode, - mapped_regrev); - } - - /* find the matching built-in country definition */ - country = brcms_c_country_lookup_direct(mapped_ccode, *mapped_regrev); - - /* if there is not an exact rev match, default to rev zero */ - if (country == NULL && *mapped_regrev != 0) { - *mapped_regrev = 0; - country = - brcms_c_country_lookup_direct(mapped_ccode, *mapped_regrev); - } - - return country; -} - -/* Lookup a country info structure from a null terminated country code - * The lookup is case sensitive. - */ -static const struct country_info * -brcms_c_country_lookup(struct brcms_c_info *wlc, const char *ccode) -{ - const struct country_info *country; - char mapped_ccode[BRCM_CNTRY_BUF_SZ]; - uint mapped_regrev; - - /* - * map the country code to a built-in country code, regrev, and - * country_info struct - */ - country = brcms_c_countrycode_map(wlc->cmi, ccode, mapped_ccode, - &mapped_regrev); - - return country; -} - -/* - * reset the quiet channels vector to the union - * of the restricted and radar channel sets - */ -static void brcms_c_quiet_channels_reset(struct brcms_cm_info *wlc_cm) -{ - struct brcms_c_info *wlc = wlc_cm->wlc; - uint i, j; - struct brcms_band *band; - const struct brcms_chanvec *chanvec; - - memset(&wlc_cm->quiet_channels, 0, sizeof(struct brcms_chanvec)); - - band = wlc->band; - for (i = 0; i < wlc->pub->_nbands; - i++, band = wlc->bandstate[OTHERBANDUNIT(wlc)]) { - - /* initialize quiet channels for restricted channels */ - chanvec = wlc_cm->bandstate[band->bandunit].restricted_channels; - for (j = 0; j < sizeof(struct brcms_chanvec); j++) - wlc_cm->quiet_channels.vec[j] |= chanvec->vec[j]; + const struct brcms_regd *regd = NULL; + int i; + for (i = 0; i < ARRAY_SIZE(cntry_locales); i++) { + if (!strncmp(regdom, cntry_locales[i].regdomain->alpha2, len)) { + regd = &cntry_locales[i]; + break; + } } -} - -/* Is the channel valid for the current locale and current band? */ -static bool brcms_c_valid_channel20(struct brcms_cm_info *wlc_cm, uint val) -{ - struct brcms_c_info *wlc = wlc_cm->wlc; - return ((val < MAXCHANNEL) && - isset(wlc_cm->bandstate[wlc->band->bandunit].valid_channels.vec, - val)); + return regd; } -/* Is the channel valid for the current locale and specified band? */ -static bool brcms_c_valid_channel20_in_band(struct brcms_cm_info *wlc_cm, - uint bandunit, uint val) -{ - return ((val < MAXCHANNEL) - && isset(wlc_cm->bandstate[bandunit].valid_channels.vec, val)); -} - -/* Is the channel valid for the current locale? 
(but don't consider channels not - * available due to bandlocking) - */ -static bool brcms_c_valid_channel20_db(struct brcms_cm_info *wlc_cm, uint val) +static const struct brcms_regd *brcms_default_world_regd(void) { - struct brcms_c_info *wlc = wlc_cm->wlc; - - return brcms_c_valid_channel20(wlc->cmi, val) || - (!wlc->bandlocked - && brcms_c_valid_channel20_in_band(wlc->cmi, - OTHERBANDUNIT(wlc), val)); + return &cntry_locales[0]; } /* JP, J1 - J10 are Japan ccodes */ @@ -820,12 +230,6 @@ static bool brcms_c_japan_ccode(const char *ccode) (ccode[1] == 'P' || (ccode[1] >= '1' && ccode[1] <= '9'))); } -/* Returns true if currently set country is Japan or variant */ -static bool brcms_c_japan(struct brcms_c_info *wlc) -{ - return brcms_c_japan_ccode(wlc->cmi->country_abbrev); -} - static void brcms_c_channel_min_txpower_limits_with_local_constraint( struct brcms_cm_info *wlc_cm, struct txpwr_limits *txpwr, @@ -901,140 +305,16 @@ brcms_c_channel_min_txpower_limits_with_local_constraint( } -/* Update the radio state (enable/disable) and tx power targets - * based on a new set of channel/regulatory information - */ -static void brcms_c_channels_commit(struct brcms_cm_info *wlc_cm) -{ - struct brcms_c_info *wlc = wlc_cm->wlc; - uint chan; - struct txpwr_limits txpwr; - - /* search for the existence of any valid channel */ - for (chan = 0; chan < MAXCHANNEL; chan++) { - if (brcms_c_valid_channel20_db(wlc->cmi, chan)) - break; - } - if (chan == MAXCHANNEL) - chan = INVCHANNEL; - - /* - * based on the channel search above, set or - * clear WL_RADIO_COUNTRY_DISABLE. - */ - if (chan == INVCHANNEL) { - /* - * country/locale with no valid channels, set - * the radio disable bit - */ - mboolset(wlc->pub->radio_disabled, WL_RADIO_COUNTRY_DISABLE); - wiphy_err(wlc->wiphy, "wl%d: %s: no valid channel for \"%s\" " - "nbands %d bandlocked %d\n", wlc->pub->unit, - __func__, wlc_cm->country_abbrev, wlc->pub->_nbands, - wlc->bandlocked); - } else if (mboolisset(wlc->pub->radio_disabled, - WL_RADIO_COUNTRY_DISABLE)) { - /* - * country/locale with valid channel, clear - * the radio disable bit - */ - mboolclr(wlc->pub->radio_disabled, WL_RADIO_COUNTRY_DISABLE); - } - - /* - * Now that the country abbreviation is set, if the radio supports 2G, - * then set channel 14 restrictions based on the new locale. - */ - if (wlc->pub->_nbands > 1 || wlc->band->bandtype == BRCM_BAND_2G) - wlc_phy_chanspec_ch14_widefilter_set(wlc->band->pi, - brcms_c_japan(wlc) ? true : - false); - - if (wlc->pub->up && chan != INVCHANNEL) { - brcms_c_channel_reg_limits(wlc_cm, wlc->chanspec, &txpwr); - brcms_c_channel_min_txpower_limits_with_local_constraint(wlc_cm, - &txpwr, BRCMS_TXPWR_MAX); - wlc_phy_txpower_limit_set(wlc->band->pi, &txpwr, wlc->chanspec); - } -} - -static int -brcms_c_channels_init(struct brcms_cm_info *wlc_cm, - const struct country_info *country) -{ - struct brcms_c_info *wlc = wlc_cm->wlc; - uint i, j; - struct brcms_band *band; - const struct locale_info *li; - struct brcms_chanvec sup_chan; - const struct locale_mimo_info *li_mimo; - - band = wlc->band; - for (i = 0; i < wlc->pub->_nbands; - i++, band = wlc->bandstate[OTHERBANDUNIT(wlc)]) { - - li = (band->bandtype == BRCM_BAND_5G) ? - brcms_c_get_locale_5g(country->locale_5G) : - brcms_c_get_locale_2g(country->locale_2G); - wlc_cm->bandstate[band->bandunit].locale_flags = li->flags; - li_mimo = (band->bandtype == BRCM_BAND_5G) ? 
- brcms_c_get_mimo_5g(country->locale_mimo_5G) : - brcms_c_get_mimo_2g(country->locale_mimo_2G); - - /* merge the mimo non-mimo locale flags */ - wlc_cm->bandstate[band->bandunit].locale_flags |= - li_mimo->flags; - - wlc_cm->bandstate[band->bandunit].restricted_channels = - g_table_restricted_chan[li->restricted_channels]; - wlc_cm->bandstate[band->bandunit].radar_channels = - g_table_radar_set[li->radar_channels]; - - /* - * set the channel availability, masking out the channels - * that may not be supported on this phy. - */ - wlc_phy_chanspec_band_validch(band->pi, band->bandtype, - &sup_chan); - brcms_c_locale_get_channels(li, - &wlc_cm->bandstate[band->bandunit]. - valid_channels); - for (j = 0; j < sizeof(struct brcms_chanvec); j++) - wlc_cm->bandstate[band->bandunit].valid_channels. - vec[j] &= sup_chan.vec[j]; - } - - brcms_c_quiet_channels_reset(wlc_cm); - brcms_c_channels_commit(wlc_cm); - - return 0; -} - /* * set the driver's current country and regulatory information * using a country code as the source. Look up built in country * information found with the country code. */ static void -brcms_c_set_country_common(struct brcms_cm_info *wlc_cm, - const char *country_abbrev, - const char *ccode, uint regrev, - const struct country_info *country) +brcms_c_set_country(struct brcms_cm_info *wlc_cm, + const struct brcms_regd *regd) { - const struct locale_info *locale; struct brcms_c_info *wlc = wlc_cm->wlc; - char prev_country_abbrev[BRCM_CNTRY_BUF_SZ]; - - /* save current country state */ - wlc_cm->country = country; - - memset(&prev_country_abbrev, 0, BRCM_CNTRY_BUF_SZ); - strncpy(prev_country_abbrev, wlc_cm->country_abbrev, - BRCM_CNTRY_BUF_SZ - 1); - - strncpy(wlc_cm->country_abbrev, country_abbrev, BRCM_CNTRY_BUF_SZ - 1); - strncpy(wlc_cm->ccode, ccode, BRCM_CNTRY_BUF_SZ - 1); - wlc_cm->regrev = regrev; if ((wlc->pub->_n_enab & SUPPORT_11N) != wlc->protection->nmode_user) @@ -1042,75 +322,19 @@ brcms_c_set_country_common(struct brcms_cm_info *wlc_cm, brcms_c_stf_ss_update(wlc, wlc->bandstate[BAND_2G_INDEX]); brcms_c_stf_ss_update(wlc, wlc->bandstate[BAND_5G_INDEX]); - /* set or restore gmode as required by regulatory */ - locale = brcms_c_get_locale_2g(country->locale_2G); - if (locale && (locale->flags & BRCMS_NO_OFDM)) - brcms_c_set_gmode(wlc, GMODE_LEGACY_B, false); - else - brcms_c_set_gmode(wlc, wlc->protection->gmode_user, false); - brcms_c_channels_init(wlc_cm, country); + brcms_c_set_gmode(wlc, wlc->protection->gmode_user, false); return; } -static int -brcms_c_set_countrycode_rev(struct brcms_cm_info *wlc_cm, - const char *country_abbrev, - const char *ccode, int regrev) -{ - const struct country_info *country; - char mapped_ccode[BRCM_CNTRY_BUF_SZ]; - uint mapped_regrev; - - /* if regrev is -1, lookup the mapped country code, - * otherwise use the ccode and regrev directly - */ - if (regrev == -1) { - /* - * map the country code to a built-in country - * code, regrev, and country_info - */ - country = - brcms_c_countrycode_map(wlc_cm, ccode, mapped_ccode, - &mapped_regrev); - } else { - /* find the matching built-in country definition */ - country = brcms_c_country_lookup_direct(ccode, regrev); - strncpy(mapped_ccode, ccode, BRCM_CNTRY_BUF_SZ); - mapped_regrev = regrev; - } - - if (country == NULL) - return -EINVAL; - - /* set the driver state for the country */ - brcms_c_set_country_common(wlc_cm, country_abbrev, mapped_ccode, - mapped_regrev, country); - - return 0; -} - -/* - * set the driver's current country and regulatory information using - * a country code as 
the source. Lookup built in country information - * found with the country code. - */ -static int -brcms_c_set_countrycode(struct brcms_cm_info *wlc_cm, const char *ccode) -{ - char country_abbrev[BRCM_CNTRY_BUF_SZ]; - strncpy(country_abbrev, ccode, BRCM_CNTRY_BUF_SZ); - return brcms_c_set_countrycode_rev(wlc_cm, country_abbrev, ccode, -1); -} - struct brcms_cm_info *brcms_c_channel_mgr_attach(struct brcms_c_info *wlc) { struct brcms_cm_info *wlc_cm; - char country_abbrev[BRCM_CNTRY_BUF_SZ]; - const struct country_info *country; struct brcms_pub *pub = wlc->pub; struct ssb_sprom *sprom = &wlc->hw->d11core->bus->sprom; + const char *ccode = sprom->alpha2; + int ccode_len = sizeof(sprom->alpha2); BCMMSG(wlc->wiphy, "wl%d\n", wlc->pub->unit); @@ -1122,24 +346,27 @@ struct brcms_cm_info *brcms_c_channel_mgr_attach(struct brcms_c_info *wlc) wlc->cmi = wlc_cm; /* store the country code for passing up as a regulatory hint */ - if (sprom->alpha2 && brcms_c_country_valid(sprom->alpha2)) - strncpy(wlc->pub->srom_ccode, sprom->alpha2, sizeof(sprom->alpha2)); + wlc_cm->world_regd = brcms_world_regd(ccode, ccode_len); + if (brcms_c_country_valid(ccode)) + strncpy(wlc->pub->srom_ccode, ccode, ccode_len); /* - * internal country information which must match - * regulatory constraints in firmware + * If no custom world domain is found in the SROM, use the + * default "X2" domain. */ - memset(country_abbrev, 0, BRCM_CNTRY_BUF_SZ); - strncpy(country_abbrev, "X2", sizeof(country_abbrev) - 1); - country = brcms_c_country_lookup(wlc, country_abbrev); + if (!wlc_cm->world_regd) { + wlc_cm->world_regd = brcms_default_world_regd(); + ccode = wlc_cm->world_regd->regdomain->alpha2; + ccode_len = BRCM_CNTRY_BUF_SZ - 1; + } /* save default country for exiting 11d regulatory mode */ - strncpy(wlc->country_default, country_abbrev, BRCM_CNTRY_BUF_SZ - 1); + strncpy(wlc->country_default, ccode, ccode_len); /* initialize autocountry_default to driver default */ - strncpy(wlc->autocountry_default, "X2", BRCM_CNTRY_BUF_SZ - 1); + strncpy(wlc->autocountry_default, ccode, ccode_len); - brcms_c_set_countrycode(wlc_cm, country_abbrev); + brcms_c_set_country(wlc_cm, wlc_cm->world_regd); return wlc_cm; } @@ -1149,31 +376,15 @@ void brcms_c_channel_mgr_detach(struct brcms_cm_info *wlc_cm) kfree(wlc_cm); } -u8 -brcms_c_channel_locale_flags_in_band(struct brcms_cm_info *wlc_cm, - uint bandunit) -{ - return wlc_cm->bandstate[bandunit].locale_flags; -} - -static bool -brcms_c_quiet_chanspec(struct brcms_cm_info *wlc_cm, u16 chspec) -{ - return (wlc_cm->wlc->pub->_n_enab & SUPPORT_11N) && - CHSPEC_IS40(chspec) ? 
- (isset(wlc_cm->quiet_channels.vec, - lower_20_sb(CHSPEC_CHANNEL(chspec))) || - isset(wlc_cm->quiet_channels.vec, - upper_20_sb(CHSPEC_CHANNEL(chspec)))) : - isset(wlc_cm->quiet_channels.vec, CHSPEC_CHANNEL(chspec)); -} - void brcms_c_channel_set_chanspec(struct brcms_cm_info *wlc_cm, u16 chanspec, u8 local_constraint_qdbm) { struct brcms_c_info *wlc = wlc_cm->wlc; + struct ieee80211_channel *ch = wlc->pub->ieee_hw->conf.channel; + const struct ieee80211_reg_rule *reg_rule; struct txpwr_limits txpwr; + int ret; brcms_c_channel_reg_limits(wlc_cm, chanspec, &txpwr); @@ -1181,8 +392,15 @@ brcms_c_channel_set_chanspec(struct brcms_cm_info *wlc_cm, u16 chanspec, wlc_cm, &txpwr, local_constraint_qdbm ); + /* set or restore gmode as required by regulatory */ + ret = freq_reg_info(wlc->wiphy, ch->center_freq, 0, ®_rule); + if (!ret && (reg_rule->flags & NL80211_RRF_NO_OFDM)) + brcms_c_set_gmode(wlc, GMODE_LEGACY_B, false); + else + brcms_c_set_gmode(wlc, wlc->protection->gmode_user, false); + brcms_b_set_chanspec(wlc->hw, chanspec, - (brcms_c_quiet_chanspec(wlc_cm, chanspec) != 0), + !!(ch->flags & IEEE80211_CHAN_PASSIVE_SCAN), &txpwr); } @@ -1191,15 +409,14 @@ brcms_c_channel_reg_limits(struct brcms_cm_info *wlc_cm, u16 chanspec, struct txpwr_limits *txpwr) { struct brcms_c_info *wlc = wlc_cm->wlc; + struct ieee80211_channel *ch = wlc->pub->ieee_hw->conf.channel; uint i; uint chan; int maxpwr; int delta; const struct country_info *country; struct brcms_band *band; - const struct locale_info *li; int conducted_max = BRCMS_TXPWR_MAX; - int conducted_ofdm_max = BRCMS_TXPWR_MAX; const struct locale_mimo_info *li_mimo; int maxpwr20, maxpwr40; int maxpwr_idx; @@ -1207,67 +424,35 @@ brcms_c_channel_reg_limits(struct brcms_cm_info *wlc_cm, u16 chanspec, memset(txpwr, 0, sizeof(struct txpwr_limits)); - if (!brcms_c_valid_chanspec_db(wlc_cm, chanspec)) { - country = brcms_c_country_lookup(wlc, wlc->autocountry_default); - if (country == NULL) - return; - } else { - country = wlc_cm->country; - } + if (WARN_ON(!ch)) + return; + + country = &wlc_cm->world_regd->country; chan = CHSPEC_CHANNEL(chanspec); band = wlc->bandstate[chspec_bandunit(chanspec)]; - li = (band->bandtype == BRCM_BAND_5G) ? - brcms_c_get_locale_5g(country->locale_5G) : - brcms_c_get_locale_2g(country->locale_2G); - li_mimo = (band->bandtype == BRCM_BAND_5G) ? 
brcms_c_get_mimo_5g(country->locale_mimo_5G) : brcms_c_get_mimo_2g(country->locale_mimo_2G); - if (li->flags & BRCMS_EIRP) { - delta = band->antgain; - } else { - delta = 0; - if (band->antgain > QDB(6)) - delta = band->antgain - QDB(6); /* Excess over 6 dB */ - } + delta = band->antgain; - if (li == &locale_i) { + if (band->bandtype == BRCM_BAND_2G) conducted_max = QDB(22); - conducted_ofdm_max = QDB(22); - } + + maxpwr = QDB(ch->max_power) - delta; + maxpwr = max(maxpwr, 0); + maxpwr = min(maxpwr, conducted_max); /* CCK txpwr limits for 2.4G band */ if (band->bandtype == BRCM_BAND_2G) { - maxpwr = li->maxpwr[CHANNEL_POWER_IDX_2G_CCK(chan)]; - - maxpwr = maxpwr - delta; - maxpwr = max(maxpwr, 0); - maxpwr = min(maxpwr, conducted_max); - for (i = 0; i < BRCMS_NUM_RATES_CCK; i++) txpwr->cck[i] = (u8) maxpwr; } - /* OFDM txpwr limits for 2.4G or 5G bands */ - if (band->bandtype == BRCM_BAND_2G) - maxpwr = li->maxpwr[CHANNEL_POWER_IDX_2G_OFDM(chan)]; - else - maxpwr = li->maxpwr[CHANNEL_POWER_IDX_5G(chan)]; - - maxpwr = maxpwr - delta; - maxpwr = max(maxpwr, 0); - maxpwr = min(maxpwr, conducted_ofdm_max); - - /* Keep OFDM lmit below CCK limit */ - if (band->bandtype == BRCM_BAND_2G) - maxpwr = min_t(int, maxpwr, txpwr->cck[0]); - - for (i = 0; i < BRCMS_NUM_RATES_OFDM; i++) + for (i = 0; i < BRCMS_NUM_RATES_OFDM; i++) { txpwr->ofdm[i] = (u8) maxpwr; - for (i = 0; i < BRCMS_NUM_RATES_OFDM; i++) { /* * OFDM 40 MHz SISO has the same power as the corresponding * MCS0-7 rate unless overriden by the locale specific code. @@ -1282,14 +467,9 @@ brcms_c_channel_reg_limits(struct brcms_cm_info *wlc_cm, u16 chanspec, txpwr->ofdm_40_cdd[i] = 0; } - /* MIMO/HT specific limits */ - if (li_mimo->flags & BRCMS_EIRP) { - delta = band->antgain; - } else { - delta = 0; - if (band->antgain > QDB(6)) - delta = band->antgain - QDB(6); /* Excess over 6 dB */ - } + delta = 0; + if (band->antgain > QDB(6)) + delta = band->antgain - QDB(6); /* Excess over 6 dB */ if (band->bandtype == BRCM_BAND_2G) maxpwr_idx = (chan - 1); @@ -1431,8 +611,7 @@ static bool brcms_c_chspec_malformed(u16 chanspec) * and they are also a legal HT combination */ static bool -brcms_c_valid_chanspec_ext(struct brcms_cm_info *wlc_cm, u16 chspec, - bool dualband) +brcms_c_valid_chanspec_ext(struct brcms_cm_info *wlc_cm, u16 chspec) { struct brcms_c_info *wlc = wlc_cm->wlc; u8 channel = CHSPEC_CHANNEL(chspec); @@ -1448,59 +627,163 @@ brcms_c_valid_chanspec_ext(struct brcms_cm_info *wlc_cm, u16 chspec, chspec_bandunit(chspec)) return false; - /* Check a 20Mhz channel */ - if (CHSPEC_IS20(chspec)) { - if (dualband) - return brcms_c_valid_channel20_db(wlc_cm->wlc->cmi, - channel); - else - return brcms_c_valid_channel20(wlc_cm->wlc->cmi, - channel); + return true; +} + +bool brcms_c_valid_chanspec_db(struct brcms_cm_info *wlc_cm, u16 chspec) +{ + return brcms_c_valid_chanspec_ext(wlc_cm, chspec); +} + +static bool brcms_is_radar_freq(u16 center_freq) +{ + return center_freq >= 5260 && center_freq <= 5700; +} + +static void brcms_reg_apply_radar_flags(struct wiphy *wiphy) +{ + struct ieee80211_supported_band *sband; + struct ieee80211_channel *ch; + int i; + + sband = wiphy->bands[IEEE80211_BAND_5GHZ]; + if (!sband) + return; + + for (i = 0; i < sband->n_channels; i++) { + ch = &sband->channels[i]; + + if (!brcms_is_radar_freq(ch->center_freq)) + continue; + + /* + * All channels in this range should be passive and have + * DFS enabled. 
+ */ + if (!(ch->flags & IEEE80211_CHAN_DISABLED)) + ch->flags |= IEEE80211_CHAN_RADAR | + IEEE80211_CHAN_NO_IBSS | + IEEE80211_CHAN_PASSIVE_SCAN; } -#ifdef SUPPORT_40MHZ - /* - * We know we are now checking a 40MHZ channel, so we should - * only be here for NPHYS - */ - if (BRCMS_ISNPHY(wlc->band) || BRCMS_ISSSLPNPHY(wlc->band)) { - u8 upper_sideband = 0, idx; - u8 num_ch20_entries = - sizeof(chan20_info) / sizeof(struct chan20_info); - - if (!VALID_40CHANSPEC_IN_BAND(wlc, chspec_bandunit(chspec))) - return false; - - if (dualband) { - if (!brcms_c_valid_channel20_db(wlc->cmi, - lower_20_sb(channel)) || - !brcms_c_valid_channel20_db(wlc->cmi, - upper_20_sb(channel))) - return false; - } else { - if (!brcms_c_valid_channel20(wlc->cmi, - lower_20_sb(channel)) || - !brcms_c_valid_channel20(wlc->cmi, - upper_20_sb(channel))) - return false; +} + +static void +brcms_reg_apply_beaconing_flags(struct wiphy *wiphy, + enum nl80211_reg_initiator initiator) +{ + struct ieee80211_supported_band *sband; + struct ieee80211_channel *ch; + const struct ieee80211_reg_rule *rule; + int band, i, ret; + + for (band = 0; band < IEEE80211_NUM_BANDS; band++) { + sband = wiphy->bands[band]; + if (!sband) + continue; + + for (i = 0; i < sband->n_channels; i++) { + ch = &sband->channels[i]; + + if (ch->flags & + (IEEE80211_CHAN_DISABLED | IEEE80211_CHAN_RADAR)) + continue; + + if (initiator == NL80211_REGDOM_SET_BY_COUNTRY_IE) { + ret = freq_reg_info(wiphy, ch->center_freq, + 0, &rule); + if (ret) + continue; + + if (!(rule->flags & NL80211_RRF_NO_IBSS)) + ch->flags &= ~IEEE80211_CHAN_NO_IBSS; + if (!(rule->flags & NL80211_RRF_PASSIVE_SCAN)) + ch->flags &= + ~IEEE80211_CHAN_PASSIVE_SCAN; + } else if (ch->beacon_found) { + ch->flags &= ~(IEEE80211_CHAN_NO_IBSS | + IEEE80211_CHAN_PASSIVE_SCAN); + } } + } +} - /* find the lower sideband info in the sideband array */ - for (idx = 0; idx < num_ch20_entries; idx++) { - if (chan20_info[idx].sb == lower_20_sb(channel)) - upper_sideband = chan20_info[idx].adj_sbs; +static int brcms_reg_notifier(struct wiphy *wiphy, + struct regulatory_request *request) +{ + struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy); + struct brcms_info *wl = hw->priv; + struct brcms_c_info *wlc = wl->wlc; + struct ieee80211_supported_band *sband; + struct ieee80211_channel *ch; + int band, i; + bool ch_found = false; + + brcms_reg_apply_radar_flags(wiphy); + + if (request->initiator == NL80211_REGDOM_SET_BY_COUNTRY_IE) + brcms_reg_apply_beaconing_flags(wiphy, request->initiator); + + /* Disable radio if all channels disallowed by regulatory */ + for (band = 0; !ch_found && band < IEEE80211_NUM_BANDS; band++) { + sband = wiphy->bands[band]; + if (!sband) + continue; + + for (i = 0; !ch_found && i < sband->n_channels; i++) { + ch = &sband->channels[i]; + + if (!(ch->flags & IEEE80211_CHAN_DISABLED)) + ch_found = true; } - /* check that the lower sideband allows an upper sideband */ - if ((upper_sideband & (CH_UPPER_SB | CH_EWA_VALID)) == - (CH_UPPER_SB | CH_EWA_VALID)) - return true; - return false; } -#endif /* 40 MHZ */ - return false; + if (ch_found) { + mboolclr(wlc->pub->radio_disabled, WL_RADIO_COUNTRY_DISABLE); + } else { + mboolset(wlc->pub->radio_disabled, WL_RADIO_COUNTRY_DISABLE); + wiphy_err(wlc->wiphy, "wl%d: %s: no valid channel for \"%s\"\n", + wlc->pub->unit, __func__, request->alpha2); + } + + if (wlc->pub->_nbands > 1 || wlc->band->bandtype == BRCM_BAND_2G) + wlc_phy_chanspec_ch14_widefilter_set(wlc->band->pi, + brcms_c_japan_ccode(request->alpha2)); + + return 0; } -bool 
brcms_c_valid_chanspec_db(struct brcms_cm_info *wlc_cm, u16 chspec) +void brcms_c_regd_init(struct brcms_c_info *wlc) { - return brcms_c_valid_chanspec_ext(wlc_cm, chspec, true); + struct wiphy *wiphy = wlc->wiphy; + const struct brcms_regd *regd = wlc->cmi->world_regd; + struct ieee80211_supported_band *sband; + struct ieee80211_channel *ch; + struct brcms_chanvec sup_chan; + struct brcms_band *band; + int band_idx, i; + + /* Disable any channels not supported by the phy */ + for (band_idx = 0; band_idx < wlc->pub->_nbands; band_idx++) { + band = wlc->bandstate[band_idx]; + + wlc_phy_chanspec_band_validch(band->pi, band->bandtype, + &sup_chan); + + if (band_idx == BAND_2G_INDEX) + sband = wiphy->bands[IEEE80211_BAND_2GHZ]; + else + sband = wiphy->bands[IEEE80211_BAND_5GHZ]; + + for (i = 0; i < sband->n_channels; i++) { + ch = &sband->channels[i]; + if (!isset(sup_chan.vec, ch->hw_value)) + ch->flags |= IEEE80211_CHAN_DISABLED; + } + } + + wlc->wiphy->reg_notifier = brcms_reg_notifier; + wlc->wiphy->flags |= WIPHY_FLAG_CUSTOM_REGULATORY | + WIPHY_FLAG_STRICT_REGULATORY; + wiphy_apply_custom_regulatory(wlc->wiphy, regd->regdomain); + brcms_reg_apply_beaconing_flags(wiphy, NL80211_REGDOM_SET_BY_DRIVER); } diff --git a/drivers/net/wireless/brcm80211/brcmsmac/channel.h b/drivers/net/wireless/brcm80211/brcmsmac/channel.h index 808cb4fbfbe7..006483a0abe6 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/channel.h +++ b/drivers/net/wireless/brcm80211/brcmsmac/channel.h @@ -37,9 +37,6 @@ brcms_c_channel_mgr_attach(struct brcms_c_info *wlc); extern void brcms_c_channel_mgr_detach(struct brcms_cm_info *wlc_cm); -extern u8 brcms_c_channel_locale_flags_in_band(struct brcms_cm_info *wlc_cm, - uint bandunit); - extern bool brcms_c_valid_chanspec_db(struct brcms_cm_info *wlc_cm, u16 chspec); @@ -49,5 +46,6 @@ extern void brcms_c_channel_reg_limits(struct brcms_cm_info *wlc_cm, extern void brcms_c_channel_set_chanspec(struct brcms_cm_info *wlc_cm, u16 chanspec, u8 local_constraint_qdbm); +extern void brcms_c_regd_init(struct brcms_c_info *wlc); #endif /* _WLC_CHANNEL_H */ diff --git a/drivers/net/wireless/brcm80211/brcmsmac/dma.c b/drivers/net/wireless/brcm80211/brcmsmac/dma.c index 11054ae9d4f6..5e53305bd9a9 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/dma.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/dma.c @@ -573,6 +573,7 @@ struct dma_pub *dma_attach(char *name, struct si_pub *sih, struct dma_info *di; u8 rev = core->id.rev; uint size; + struct si_info *sii = container_of(sih, struct si_info, pub); /* allocate private info structure */ di = kzalloc(sizeof(struct dma_info), GFP_ATOMIC); @@ -633,16 +634,20 @@ struct dma_pub *dma_attach(char *name, struct si_pub *sih, */ di->ddoffsetlow = 0; di->dataoffsetlow = 0; - /* add offset for pcie with DMA64 bus */ - di->ddoffsetlow = 0; - di->ddoffsethigh = SI_PCIE_DMA_H32; + /* for pci bus, add offset */ + if (sii->icbus->hosttype == BCMA_HOSTTYPE_PCI) { + /* add offset for pcie with DMA64 bus */ + di->ddoffsetlow = 0; + di->ddoffsethigh = SI_PCIE_DMA_H32; + } di->dataoffsetlow = di->ddoffsetlow; di->dataoffsethigh = di->ddoffsethigh; + /* WAR64450 : DMACtl.Addr ext fields are not supported in SDIOD core. 
*/ - if ((core->id.id == SDIOD_CORE_ID) + if ((core->id.id == BCMA_CORE_SDIO_DEV) && ((rev > 0) && (rev <= 2))) di->addrext = false; - else if ((core->id.id == I2S_CORE_ID) && + else if ((core->id.id == BCMA_CORE_I2S) && ((rev == 0) || (rev == 1))) di->addrext = false; else @@ -1433,7 +1438,7 @@ void dma_walk_packets(struct dma_pub *dmah, void (*callback_fnc) struct ieee80211_tx_info *tx_info; while (i != end) { - skb = (struct sk_buff *)di->txp[i]; + skb = di->txp[i]; if (skb != NULL) { tx_info = (struct ieee80211_tx_info *)skb->cb; (callback_fnc)(tx_info, arg_a); diff --git a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c index 50f92a0b7c41..9e79d47e077f 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c @@ -267,6 +267,7 @@ static void brcms_set_basic_rate(struct brcm_rateset *rs, u16 rate, bool is_br) static void brcms_ops_tx(struct ieee80211_hw *hw, struct sk_buff *skb) { struct brcms_info *wl = hw->priv; + struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); spin_lock_bh(&wl->lock); if (!wl->pub->up) { @@ -275,6 +276,7 @@ static void brcms_ops_tx(struct ieee80211_hw *hw, struct sk_buff *skb) goto done; } brcms_c_sendpkt_mac80211(wl->wlc, skb, hw); + tx_info->rate_driver_data[0] = tx_info->control.sta; done: spin_unlock_bh(&wl->lock); } @@ -319,8 +321,7 @@ static void brcms_ops_stop(struct ieee80211_hw *hw) return; spin_lock_bh(&wl->lock); - status = brcms_c_chipmatch(wl->wlc->hw->vendorid, - wl->wlc->hw->deviceid); + status = brcms_c_chipmatch(wl->wlc->hw->d11core); spin_unlock_bh(&wl->lock); if (!status) { wiphy_err(wl->wiphy, @@ -721,14 +722,6 @@ static const struct ieee80211_ops brcms_ops = { .flush = brcms_ops_flush, }; -/* - * is called in brcms_bcma_probe() context, therefore no locking required. 
- */ -static int brcms_set_hint(struct brcms_info *wl, char *abbrev) -{ - return regulatory_hint(wl->pub->ieee_hw->wiphy, abbrev); -} - void brcms_dpc(unsigned long data) { struct brcms_info *wl; @@ -1058,6 +1051,8 @@ static struct brcms_info *brcms_attach(struct bcma_device *pdev) goto fail; } + brcms_c_regd_init(wl->wlc); + memcpy(perm, &wl->pub->cur_etheraddr, ETH_ALEN); if (WARN_ON(!is_valid_ether_addr(perm))) goto fail; @@ -1068,9 +1063,9 @@ static struct brcms_info *brcms_attach(struct bcma_device *pdev) wiphy_err(wl->wiphy, "%s: ieee80211_register_hw failed, status" "%d\n", __func__, err); - if (wl->pub->srom_ccode[0] && brcms_set_hint(wl, wl->pub->srom_ccode)) - wiphy_err(wl->wiphy, "%s: regulatory_hint failed, status %d\n", - __func__, err); + if (wl->pub->srom_ccode[0] && + regulatory_hint(wl->wiphy, wl->pub->srom_ccode)) + wiphy_err(wl->wiphy, "%s: regulatory hint failed\n", __func__); n_adapters_found++; return wl; diff --git a/drivers/net/wireless/brcm80211/brcmsmac/main.c b/drivers/net/wireless/brcm80211/brcmsmac/main.c index 19db4052c44c..03ca65324845 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/main.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/main.c @@ -18,6 +18,7 @@ #include <linux/pci_ids.h> #include <linux/if_ether.h> +#include <net/cfg80211.h> #include <net/mac80211.h> #include <brcm_hw_ids.h> #include <aiutils.h> @@ -268,7 +269,7 @@ struct brcms_c_bit_desc { */ /* Starting corerev for the fifo size table */ -#define XMTFIFOTBL_STARTREV 20 +#define XMTFIFOTBL_STARTREV 17 struct d11init { __le16 addr; @@ -332,6 +333,12 @@ const u8 wlc_prio2prec_map[] = { }; static const u16 xmtfifo_sz[][NFIFO] = { + /* corerev 17: 5120, 49152, 49152, 5376, 4352, 1280 */ + {20, 192, 192, 21, 17, 5}, + /* corerev 18: */ + {0, 0, 0, 0, 0, 0}, + /* corerev 19: */ + {0, 0, 0, 0, 0, 0}, /* corerev 20: 5120, 49152, 49152, 5376, 4352, 1280 */ {20, 192, 192, 21, 17, 5}, /* corerev 21: 2304, 14848, 5632, 3584, 3584, 1280 */ @@ -342,6 +349,14 @@ static const u16 xmtfifo_sz[][NFIFO] = { {20, 192, 192, 21, 17, 5}, /* corerev 24: 2304, 14848, 5632, 3584, 3584, 1280 */ {9, 58, 22, 14, 14, 5}, + /* corerev 25: */ + {0, 0, 0, 0, 0, 0}, + /* corerev 26: */ + {0, 0, 0, 0, 0, 0}, + /* corerev 27: */ + {0, 0, 0, 0, 0, 0}, + /* corerev 28: 2304, 14848, 5632, 3584, 3584, 1280 */ + {9, 58, 22, 14, 14, 5}, }; #ifdef DEBUG @@ -878,7 +893,7 @@ brcms_c_dotxstatus(struct brcms_c_info *wlc, struct tx_status *txs) tx_info = IEEE80211_SKB_CB(p); h = (struct ieee80211_hdr *)((u8 *) (txh + 1) + D11_PHY_HDR_LEN); - if (tx_info->control.sta) + if (tx_info->rate_driver_data[0]) scb = &wlc->pri_scb; if (tx_info->flags & IEEE80211_TX_CTL_AMPDU) { @@ -1941,7 +1956,8 @@ static bool brcms_b_radio_read_hwdisabled(struct brcms_hardware *wlc_hw) * accesses phyreg throughput mac. This can be skipped since * only mac reg is accessed below */ - flags |= SICF_PCLKE; + if (D11REV_GE(wlc_hw->corerev, 18)) + flags |= SICF_PCLKE; /* * TODO: test suspend/resume @@ -2022,7 +2038,8 @@ void brcms_b_corereset(struct brcms_hardware *wlc_hw, u32 flags) * phyreg throughput mac, AND phy_reset is skipped at early stage when * band->pi is invalid. 
need to enable PHY CLK */ - flags |= SICF_PCLKE; + if (D11REV_GE(wlc_hw->corerev, 18)) + flags |= SICF_PCLKE; /* * reset the core @@ -2125,8 +2142,8 @@ void brcms_b_switch_macfreq(struct brcms_hardware *wlc_hw, u8 spurmode) { struct bcma_device *core = wlc_hw->d11core; - if ((ai_get_chip_id(wlc_hw->sih) == BCM43224_CHIP_ID) || - (ai_get_chip_id(wlc_hw->sih) == BCM43225_CHIP_ID)) { + if ((ai_get_chip_id(wlc_hw->sih) == BCMA_CHIP_ID_BCM43224) || + (ai_get_chip_id(wlc_hw->sih) == BCMA_CHIP_ID_BCM43225)) { if (spurmode == WL_SPURAVOID_ON2) { /* 126Mhz */ bcma_write16(core, D11REGOFFS(tsf_clk_frac_l), 0x2082); bcma_write16(core, D11REGOFFS(tsf_clk_frac_h), 0x8); @@ -2790,7 +2807,7 @@ void brcms_b_core_phypll_ctl(struct brcms_hardware *wlc_hw, bool on) tmp = 0; if (on) { - if ((ai_get_chip_id(wlc_hw->sih) == BCM4313_CHIP_ID)) { + if ((ai_get_chip_id(wlc_hw->sih) == BCMA_CHIP_ID_BCM4313)) { bcma_set32(core, D11REGOFFS(clk_ctl_st), CCS_ERSRC_REQ_HT | CCS_ERSRC_REQ_D11PLL | @@ -3139,20 +3156,6 @@ void brcms_c_reset(struct brcms_c_info *wlc) brcms_b_reset(wlc->hw); } -/* Return the channel the driver should initialize during brcms_c_init. - * the channel may have to be changed from the currently configured channel - * if other configurations are in conflict (bandlocked, 11n mode disabled, - * invalid channel for current country, etc.) - */ -static u16 brcms_c_init_chanspec(struct brcms_c_info *wlc) -{ - u16 chanspec = - 1 | WL_CHANSPEC_BW_20 | WL_CHANSPEC_CTL_SB_NONE | - WL_CHANSPEC_BAND_2G; - - return chanspec; -} - void brcms_c_init_scb(struct scb *scb) { int i; @@ -4231,9 +4234,8 @@ static void brcms_c_radio_timer(void *arg) } /* common low-level watchdog code */ -static void brcms_b_watchdog(void *arg) +static void brcms_b_watchdog(struct brcms_c_info *wlc) { - struct brcms_c_info *wlc = (struct brcms_c_info *) arg; struct brcms_hardware *wlc_hw = wlc->hw; BCMMSG(wlc->wiphy, "wl%d\n", wlc_hw->unit); @@ -4254,10 +4256,8 @@ static void brcms_b_watchdog(void *arg) } /* common watchdog code */ -static void brcms_c_watchdog(void *arg) +static void brcms_c_watchdog(struct brcms_c_info *wlc) { - struct brcms_c_info *wlc = (struct brcms_c_info *) arg; - BCMMSG(wlc->wiphy, "wl%d\n", wlc->pub->unit); if (!wlc->pub->up) @@ -4297,7 +4297,9 @@ static void brcms_c_watchdog(void *arg) static void brcms_c_watchdog_by_timer(void *arg) { - brcms_c_watchdog(arg); + struct brcms_c_info *wlc = (struct brcms_c_info *) arg; + + brcms_c_watchdog(wlc); } static bool brcms_c_timers_init(struct brcms_c_info *wlc, int unit) @@ -4467,11 +4469,9 @@ static int brcms_b_attach(struct brcms_c_info *wlc, struct bcma_device *core, } /* verify again the device is supported */ - if (core->bus->hosttype == BCMA_HOSTTYPE_PCI && - !brcms_c_chipmatch(pcidev->vendor, pcidev->device)) { - wiphy_err(wiphy, "wl%d: brcms_b_attach: Unsupported " - "vendor/device (0x%x/0x%x)\n", - unit, pcidev->vendor, pcidev->device); + if (!brcms_c_chipmatch(core)) { + wiphy_err(wiphy, "wl%d: brcms_b_attach: Unsupported device\n", + unit); err = 12; goto fail; } @@ -4541,7 +4541,7 @@ static int brcms_b_attach(struct brcms_c_info *wlc, struct bcma_device *core, else wlc_hw->_nbands = 1; - if ((ai_get_chip_id(wlc_hw->sih) == BCM43225_CHIP_ID)) + if ((ai_get_chip_id(wlc_hw->sih) == BCMA_CHIP_ID_BCM43225)) wlc_hw->_nbands = 1; /* BMAC_NOTE: remove init of pub values when brcms_c_attach() @@ -4608,8 +4608,12 @@ static int brcms_b_attach(struct brcms_c_info *wlc, struct bcma_device *core, wlc_hw->machwcap_backup = wlc_hw->machwcap; /* init tx fifo size */ + 
WARN_ON((wlc_hw->corerev - XMTFIFOTBL_STARTREV) < 0 || + (wlc_hw->corerev - XMTFIFOTBL_STARTREV) > + ARRAY_SIZE(xmtfifo_sz)); wlc_hw->xmtfifo_sz = xmtfifo_sz[(wlc_hw->corerev - XMTFIFOTBL_STARTREV)]; + WARN_ON(!wlc_hw->xmtfifo_sz[0]); /* Get a phy for this band */ wlc_hw->band->pi = @@ -5049,7 +5053,7 @@ static void brcms_b_hw_up(struct brcms_hardware *wlc_hw) wlc_hw->wlc->pub->hw_up = true; if ((wlc_hw->boardflags & BFL_FEM) - && (ai_get_chip_id(wlc_hw->sih) == BCM4313_CHIP_ID)) { + && (ai_get_chip_id(wlc_hw->sih) == BCMA_CHIP_ID_BCM4313)) { if (! (wlc_hw->boardrev >= 0x1250 && (wlc_hw->boardflags & BFL_FEM_BT))) @@ -5129,6 +5133,8 @@ static void brcms_c_wme_retries_write(struct brcms_c_info *wlc) /* make interface operational */ int brcms_c_up(struct brcms_c_info *wlc) { + struct ieee80211_channel *ch; + BCMMSG(wlc->wiphy, "wl%d\n", wlc->pub->unit); /* HW is turned off so don't try to access it */ @@ -5141,7 +5147,7 @@ int brcms_c_up(struct brcms_c_info *wlc) } if ((wlc->pub->boardflags & BFL_FEM) - && (ai_get_chip_id(wlc->hw->sih) == BCM4313_CHIP_ID)) { + && (ai_get_chip_id(wlc->hw->sih) == BCMA_CHIP_ID_BCM4313)) { if (wlc->pub->boardrev >= 0x1250 && (wlc->pub->boardflags & BFL_FEM_BT)) brcms_b_mhf(wlc->hw, MHF5, MHF5_4313_GPIOCTRL, @@ -5195,8 +5201,9 @@ int brcms_c_up(struct brcms_c_info *wlc) wlc->pub->up = true; if (wlc->bandinit_pending) { + ch = wlc->pub->ieee_hw->conf.channel; brcms_c_suspend_mac_and_wait(wlc); - brcms_c_set_chanspec(wlc, wlc->default_bss->chanspec); + brcms_c_set_chanspec(wlc, ch20mhz_chspec(ch->hw_value)); wlc->bandinit_pending = false; brcms_c_enable_mac(wlc); } @@ -5397,11 +5404,6 @@ int brcms_c_set_gmode(struct brcms_c_info *wlc, u8 gmode, bool config) else return -EINVAL; - /* Legacy or bust when no OFDM is supported by regulatory */ - if ((brcms_c_channel_locale_flags_in_band(wlc->cmi, band->bandunit) & - BRCMS_NO_OFDM) && (gmode != GMODE_LEGACY_B)) - return -EINVAL; - /* update configuration value */ if (config) brcms_c_protection_upd(wlc, BRCMS_PROT_G_USER, gmode); @@ -5782,8 +5784,12 @@ void brcms_c_print_txstatus(struct tx_status *txs) (txs->ackphyrxsh & PRXS1_SQ_MASK) >> PRXS1_SQ_SHIFT); } -bool brcms_c_chipmatch(u16 vendor, u16 device) +static bool brcms_c_chipmatch_pci(struct bcma_device *core) { + struct pci_dev *pcidev = core->bus->host_pci; + u16 vendor = pcidev->vendor; + u16 device = pcidev->device; + if (vendor != PCI_VENDOR_ID_BROADCOM) { pr_err("unknown vendor id %04x\n", vendor); return false; @@ -5802,6 +5808,30 @@ bool brcms_c_chipmatch(u16 vendor, u16 device) return false; } +static bool brcms_c_chipmatch_soc(struct bcma_device *core) +{ + struct bcma_chipinfo *chipinfo = &core->bus->chipinfo; + + if (chipinfo->id == BCMA_CHIP_ID_BCM4716) + return true; + + pr_err("unknown chip id %04x\n", chipinfo->id); + return false; +} + +bool brcms_c_chipmatch(struct bcma_device *core) +{ + switch (core->bus->hosttype) { + case BCMA_HOSTTYPE_PCI: + return brcms_c_chipmatch_pci(core); + case BCMA_HOSTTYPE_SOC: + return brcms_c_chipmatch_soc(core); + default: + pr_err("unknown host type: %i\n", core->bus->hosttype); + return false; + } +} + #if defined(DEBUG) void brcms_c_print_txdesc(struct d11txh *txh) { @@ -8201,19 +8231,12 @@ bool brcms_c_dpc(struct brcms_c_info *wlc, bool bounded) void brcms_c_init(struct brcms_c_info *wlc, bool mute_tx) { struct bcma_device *core = wlc->hw->d11core; + struct ieee80211_channel *ch = wlc->pub->ieee_hw->conf.channel; u16 chanspec; BCMMSG(wlc->wiphy, "wl%d\n", wlc->pub->unit); - /* - * This will happen if a 
big-hammer was executed. In - * that case, we want to go back to the channel that - * we were on and not new channel - */ - if (wlc->pub->associated) - chanspec = wlc->home_chanspec; - else - chanspec = brcms_c_init_chanspec(wlc); + chanspec = ch20mhz_chspec(ch->hw_value); brcms_b_init(wlc->hw, chanspec); @@ -8318,7 +8341,7 @@ brcms_c_attach(struct brcms_info *wl, struct bcma_device *core, uint unit, struct brcms_pub *pub; /* allocate struct brcms_c_info state and its substructures */ - wlc = (struct brcms_c_info *) brcms_c_attach_malloc(unit, &err, 0); + wlc = brcms_c_attach_malloc(unit, &err, 0); if (wlc == NULL) goto fail; wlc->wiphy = wl->wiphy; diff --git a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c index 264f8c4c703d..91937c5025ce 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_cmn.c @@ -198,6 +198,8 @@ u16 read_radio_reg(struct brcms_phy *pi, u16 addr) void write_radio_reg(struct brcms_phy *pi, u16 addr, u16 val) { + struct si_info *sii = container_of(pi->sh->sih, struct si_info, pub); + if ((D11REV_GE(pi->sh->corerev, 24)) || (D11REV_IS(pi->sh->corerev, 22) && (pi->pubpi.phy_type != PHY_TYPE_SSN))) { @@ -209,7 +211,8 @@ void write_radio_reg(struct brcms_phy *pi, u16 addr, u16 val) bcma_write16(pi->d11core, D11REGOFFS(phy4wdatalo), val); } - if (++pi->phy_wreg >= pi->phy_wreg_limit) { + if ((sii->icbus->hosttype == BCMA_HOSTTYPE_PCI) && + (++pi->phy_wreg >= pi->phy_wreg_limit)) { (void)bcma_read32(pi->d11core, D11REGOFFS(maccontrol)); pi->phy_wreg = 0; } @@ -292,10 +295,13 @@ void write_phy_reg(struct brcms_phy *pi, u16 addr, u16 val) bcma_wflush16(pi->d11core, D11REGOFFS(phyregaddr), addr); bcma_write16(pi->d11core, D11REGOFFS(phyregdata), val); if (addr == 0x72) - (void)bcma_read16(pi->d11core, D11REGOFFS(phyversion)); + (void)bcma_read16(pi->d11core, D11REGOFFS(phyregdata)); #else + struct si_info *sii = container_of(pi->sh->sih, struct si_info, pub); + bcma_write32(pi->d11core, D11REGOFFS(phyregaddr), addr | (val << 16)); - if (++pi->phy_wreg >= pi->phy_wreg_limit) { + if ((sii->icbus->hosttype == BCMA_HOSTTYPE_PCI) && + (++pi->phy_wreg >= pi->phy_wreg_limit)) { pi->phy_wreg = 0; (void)bcma_read16(pi->d11core, D11REGOFFS(phyversion)); } @@ -837,7 +843,7 @@ wlc_phy_table_addr(struct brcms_phy *pi, uint tbl_id, uint tbl_offset, pi->tbl_data_hi = tblDataHi; pi->tbl_data_lo = tblDataLo; - if (pi->sh->chip == BCM43224_CHIP_ID && + if (pi->sh->chip == BCMA_CHIP_ID_BCM43224 && pi->sh->chiprev == 1) { pi->tbl_addr = tblAddr; pi->tbl_save_id = tbl_id; @@ -847,7 +853,7 @@ wlc_phy_table_addr(struct brcms_phy *pi, uint tbl_id, uint tbl_offset, void wlc_phy_table_data_write(struct brcms_phy *pi, uint width, u32 val) { - if ((pi->sh->chip == BCM43224_CHIP_ID) && + if ((pi->sh->chip == BCMA_CHIP_ID_BCM43224) && (pi->sh->chiprev == 1) && (pi->tbl_save_id == NPHY_TBL_ID_ANTSWCTRLLUT)) { read_phy_reg(pi, pi->tbl_data_lo); @@ -881,7 +887,7 @@ wlc_phy_write_table(struct brcms_phy *pi, const struct phytbl_info *ptbl_info, for (idx = 0; idx < ptbl_info->tbl_len; idx++) { - if ((pi->sh->chip == BCM43224_CHIP_ID) && + if ((pi->sh->chip == BCMA_CHIP_ID_BCM43224) && (pi->sh->chiprev == 1) && (tbl_id == NPHY_TBL_ID_ANTSWCTRLLUT)) { read_phy_reg(pi, tblDataLo); @@ -918,7 +924,7 @@ wlc_phy_read_table(struct brcms_phy *pi, const struct phytbl_info *ptbl_info, for (idx = 0; idx < ptbl_info->tbl_len; idx++) { - if ((pi->sh->chip == BCM43224_CHIP_ID) && + if ((pi->sh->chip 
== BCMA_CHIP_ID_BCM43224) && (pi->sh->chiprev == 1)) { (void)read_phy_reg(pi, tblDataLo); @@ -2894,7 +2900,7 @@ const u8 *wlc_phy_get_ofdm_rate_lookup(void) void wlc_lcnphy_epa_switch(struct brcms_phy *pi, bool mode) { - if ((pi->sh->chip == BCM4313_CHIP_ID) && + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4313) && (pi->sh->boardflags & BFL_FEM)) { if (mode) { u16 txant = 0; diff --git a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c index 13b261517cce..65db9b7458dc 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/phy/phy_n.c @@ -14358,7 +14358,7 @@ void wlc_phy_nphy_tkip_rifs_war(struct brcms_phy *pi, u8 rifs) wlc_phy_write_txmacreg_nphy(pi, holdoff, delay); - if (pi && pi->sh && (pi->sh->_rifs_phy != rifs)) + if (pi->sh && (pi->sh->_rifs_phy != rifs)) pi->sh->_rifs_phy = rifs; } @@ -17893,6 +17893,8 @@ static u32 *wlc_phy_get_ipa_gaintbl_nphy(struct brcms_phy *pi) nphy_tpc_txgain_ipa_2g_2057rev7; } else if (NREV_IS(pi->pubpi.phy_rev, 6)) { tx_pwrctrl_tbl = nphy_tpc_txgain_ipa_rev6; + if (pi->sh->chip == BCMA_CHIP_ID_BCM47162) + tx_pwrctrl_tbl = nphy_tpc_txgain_ipa_rev5; } else if (NREV_IS(pi->pubpi.phy_rev, 5)) { tx_pwrctrl_tbl = nphy_tpc_txgain_ipa_rev5; } else { @@ -19254,8 +19256,14 @@ static void wlc_phy_spurwar_nphy(struct brcms_phy *pi) case 38: case 102: case 118: - nphy_adj_tone_id_buf[0] = 0; - nphy_adj_noise_var_buf[0] = 0x0; + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4716) && + (pi->sh->chippkg == BCMA_PKG_ID_BCM4717)) { + nphy_adj_tone_id_buf[0] = 32; + nphy_adj_noise_var_buf[0] = 0x21f; + } else { + nphy_adj_tone_id_buf[0] = 0; + nphy_adj_noise_var_buf[0] = 0x0; + } break; case 134: nphy_adj_tone_id_buf[0] = 32; @@ -19309,8 +19317,8 @@ void wlc_phy_init_nphy(struct brcms_phy *pi) pi->measure_hold |= PHY_HOLD_FOR_NOT_ASSOC; if ((ISNPHY(pi)) && (NREV_GE(pi->pubpi.phy_rev, 5)) && - ((pi->sh->chippkg == BCM4717_PKG_ID) || - (pi->sh->chippkg == BCM4718_PKG_ID))) { + ((pi->sh->chippkg == BCMA_PKG_ID_BCM4717) || + (pi->sh->chippkg == BCMA_PKG_ID_BCM4718))) { if ((pi->sh->boardflags & BFL_EXTLNA) && (CHSPEC_IS2G(pi->radio_chanspec))) ai_cc_reg(pi->sh->sih, @@ -19318,6 +19326,10 @@ void wlc_phy_init_nphy(struct brcms_phy *pi) 0x40, 0x40); } + if ((!PHY_IPA(pi)) && (pi->sh->chip == BCMA_CHIP_ID_BCM5357)) + si_pmu_chipcontrol(pi->sh->sih, 1, CCTRL5357_EXTPA, + CCTRL5357_EXTPA); + if ((pi->nphy_gband_spurwar2_en) && CHSPEC_IS2G(pi->radio_chanspec) && CHSPEC_IS40(pi->radio_chanspec)) { @@ -20695,12 +20707,22 @@ wlc_phy_chanspec_radio2056_setup(struct brcms_phy *pi, write_radio_reg(pi, RADIO_2056_SYN_PLL_LOOPFILTER2 | RADIO_2056_SYN, 0x1f); - write_radio_reg(pi, - RADIO_2056_SYN_PLL_LOOPFILTER4 | - RADIO_2056_SYN, 0xb); - write_radio_reg(pi, - RADIO_2056_SYN_PLL_CP2 | - RADIO_2056_SYN, 0x14); + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4716) || + (pi->sh->chip == BCMA_CHIP_ID_BCM47162)) { + write_radio_reg(pi, + RADIO_2056_SYN_PLL_LOOPFILTER4 | + RADIO_2056_SYN, 0x14); + write_radio_reg(pi, + RADIO_2056_SYN_PLL_CP2 | + RADIO_2056_SYN, 0x00); + } else { + write_radio_reg(pi, + RADIO_2056_SYN_PLL_LOOPFILTER4 | + RADIO_2056_SYN, 0xb); + write_radio_reg(pi, + RADIO_2056_SYN_PLL_CP2 | + RADIO_2056_SYN, 0x14); + } } } @@ -20747,24 +20769,30 @@ wlc_phy_chanspec_radio2056_setup(struct brcms_phy *pi, WRITE_RADIO_REG2(pi, RADIO_2056, TX, core, PADG_IDAC, 0xcc); - bias = 0x25; - cascbias = 0x20; + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4716) || + (pi->sh->chip == BCMA_CHIP_ID_BCM47162)) { + bias = 
0x40; + cascbias = 0x45; + pag_boost_tune = 0x5; + pgag_boost_tune = 0x33; + padg_boost_tune = 0x77; + mixg_boost_tune = 0x55; + } else { + bias = 0x25; + cascbias = 0x20; - if ((pi->sh->chip == - BCM43224_CHIP_ID) - || (pi->sh->chip == - BCM43225_CHIP_ID)) { - if (pi->sh->chippkg == - BCM43224_FAB_SMIC) { + if ((pi->sh->chip == BCMA_CHIP_ID_BCM43224 || + pi->sh->chip == BCMA_CHIP_ID_BCM43225) && + pi->sh->chippkg == BCMA_PKG_ID_BCM43224_FAB_SMIC) { bias = 0x2a; cascbias = 0x38; } - } - pag_boost_tune = 0x4; - pgag_boost_tune = 0x03; - padg_boost_tune = 0x77; - mixg_boost_tune = 0x65; + pag_boost_tune = 0x4; + pgag_boost_tune = 0x03; + padg_boost_tune = 0x77; + mixg_boost_tune = 0x65; + } WRITE_RADIO_REG2(pi, RADIO_2056, TX, core, INTPAG_IMAIN_STAT, bias); @@ -20863,11 +20891,10 @@ wlc_phy_chanspec_radio2056_setup(struct brcms_phy *pi, cascbias = 0x30; - if ((pi->sh->chip == BCM43224_CHIP_ID) || - (pi->sh->chip == BCM43225_CHIP_ID)) { - if (pi->sh->chippkg == BCM43224_FAB_SMIC) - cascbias = 0x35; - } + if ((pi->sh->chip == BCMA_CHIP_ID_BCM43224 || + pi->sh->chip == BCMA_CHIP_ID_BCM43225) && + pi->sh->chippkg == BCMA_PKG_ID_BCM43224_FAB_SMIC) + cascbias = 0x35; pabias = (pi->phy_pabias == 0) ? 0x30 : pi->phy_pabias; @@ -21106,6 +21133,7 @@ wlc_phy_chanspec_nphy_setup(struct brcms_phy *pi, u16 chanspec, const struct nphy_sfo_cfg *ci) { u16 val; + struct si_info *sii = container_of(pi->sh->sih, struct si_info, pub); val = read_phy_reg(pi, 0x09) & NPHY_BandControl_currentBand; if (CHSPEC_IS5G(chanspec) && !val) { @@ -21178,22 +21206,32 @@ wlc_phy_chanspec_nphy_setup(struct brcms_phy *pi, u16 chanspec, } else if (NREV_GE(pi->pubpi.phy_rev, 7)) { if (val == 54) spuravoid = 1; - } else { - if (pi->nphy_aband_spurwar_en && - ((val == 38) || (val == 102) - || (val == 118))) + } else if (pi->nphy_aband_spurwar_en && + ((val == 38) || (val == 102) || (val == 118))) { + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4716) + && (pi->sh->chippkg == BCMA_PKG_ID_BCM4717)) { + spuravoid = 0; + } else { spuravoid = 1; + } } if (pi->phy_spuravoid == SPURAVOID_FORCEON) spuravoid = 1; - wlapi_bmac_core_phypll_ctl(pi->sh->physhim, false); - si_pmu_spuravoid_pllupdate(pi->sh->sih, spuravoid); - wlapi_bmac_core_phypll_ctl(pi->sh->physhim, true); + if ((pi->sh->chip == BCMA_CHIP_ID_BCM4716) || + (pi->sh->chip == BCMA_CHIP_ID_BCM43225)) { + bcma_pmu_spuravoid_pllupdate(&sii->icbus->drv_cc, + spuravoid); + } else { + wlapi_bmac_core_phypll_ctl(pi->sh->physhim, false); + bcma_pmu_spuravoid_pllupdate(&sii->icbus->drv_cc, + spuravoid); + wlapi_bmac_core_phypll_ctl(pi->sh->physhim, true); + } - if ((pi->sh->chip == BCM43224_CHIP_ID) || - (pi->sh->chip == BCM43225_CHIP_ID)) { + if ((pi->sh->chip == BCMA_CHIP_ID_BCM43224) || + (pi->sh->chip == BCMA_CHIP_ID_BCM43225)) { if (spuravoid == 1) { bcma_write16(pi->d11core, D11REGOFFS(tsf_clk_frac_l), @@ -21209,7 +21247,9 @@ wlc_phy_chanspec_nphy_setup(struct brcms_phy *pi, u16 chanspec, } } - wlapi_bmac_core_phypll_reset(pi->sh->physhim); + if (!((pi->sh->chip == BCMA_CHIP_ID_BCM4716) || + (pi->sh->chip == BCMA_CHIP_ID_BCM47162))) + wlapi_bmac_core_phypll_reset(pi->sh->physhim); mod_phy_reg(pi, 0x01, (0x1 << 15), ((spuravoid > 0) ? 
(0x1 << 15) : 0)); @@ -22171,9 +22211,15 @@ s16 wlc_phy_tempsense_nphy(struct brcms_phy *pi) wlc_phy_table_write_nphy(pi, NPHY_TBL_ID_AFECTRL, 1, 0x03, 16, &auxADC_rssi_ctrlH_save); - radio_temp[0] = (179 * (radio_temp[1] + radio_temp2[1]) - + 82 * (auxADC_Vl) - 28861 + - 128) / 256; + if (pi->sh->chip == BCMA_CHIP_ID_BCM5357) { + radio_temp[0] = (193 * (radio_temp[1] + radio_temp2[1]) + + 88 * (auxADC_Vl) - 27111 + + 128) / 256; + } else { + radio_temp[0] = (179 * (radio_temp[1] + radio_temp2[1]) + + 82 * (auxADC_Vl) - 28861 + + 128) / 256; + } offset = (s16) pi->phy_tempsense_offset; @@ -24923,14 +24969,16 @@ wlc_phy_a2_nphy(struct brcms_phy *pi, struct nphy_ipa_txcalgains *txgains, if (txgains->useindex) { phy_a4 = 15 - ((txgains->index) >> 3); if (CHSPEC_IS2G(pi->radio_chanspec)) { - if (NREV_GE(pi->pubpi.phy_rev, 6)) + if (NREV_GE(pi->pubpi.phy_rev, 6) && + pi->sh->chip == BCMA_CHIP_ID_BCM47162) { + phy_a5 = 0x10f7 | (phy_a4 << 8); + } else if (NREV_GE(pi->pubpi.phy_rev, 6)) { phy_a5 = 0x00f7 | (phy_a4 << 8); - - else - if (NREV_IS(pi->pubpi.phy_rev, 5)) + } else if (NREV_IS(pi->pubpi.phy_rev, 5)) { phy_a5 = 0x10f7 | (phy_a4 << 8); - else + } else { phy_a5 = 0x50f7 | (phy_a4 << 8); + } } else { phy_a5 = 0x70f7 | (phy_a4 << 8); } diff --git a/drivers/net/wireless/brcm80211/brcmsmac/pmu.c b/drivers/net/wireless/brcm80211/brcmsmac/pmu.c index 4931d29d077b..7e9df566c733 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/pmu.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/pmu.c @@ -74,16 +74,6 @@ * PMU<rev>_PLL<num>_XX where <rev> is PMU corerev and <num> is an arbitrary * number to differentiate different PLLs controlled by the same PMU rev. */ -/* pllcontrol registers: - * ndiv_pwrdn, pwrdn_ch<x>, refcomp_pwrdn, dly_ch<x>, - * p1div, p2div, _bypass_sdmod - */ -#define PMU1_PLL0_PLLCTL0 0 -#define PMU1_PLL0_PLLCTL1 1 -#define PMU1_PLL0_PLLCTL2 2 -#define PMU1_PLL0_PLLCTL3 3 -#define PMU1_PLL0_PLLCTL4 4 -#define PMU1_PLL0_PLLCTL5 5 /* pmu XtalFreqRatio */ #define PMU_XTALFREQ_REG_ILPCTR_MASK 0x00001FFF @@ -108,118 +98,14 @@ #define RES4313_HT_AVAIL_RSRC 14 #define RES4313_MACPHY_CLK_AVAIL_RSRC 15 -/* Determine min/max rsrc masks. Value 0 leaves hardware at default. */ -static void si_pmu_res_masks(struct si_pub *sih, u32 * pmin, u32 * pmax) -{ - u32 min_mask = 0, max_mask = 0; - uint rsrcs; - - /* # resources */ - rsrcs = (ai_get_pmucaps(sih) & PCAP_RC_MASK) >> PCAP_RC_SHIFT; - - /* determine min/max rsrc masks */ - switch (ai_get_chip_id(sih)) { - case BCM43224_CHIP_ID: - case BCM43225_CHIP_ID: - /* ??? 
*/ - break; - - case BCM4313_CHIP_ID: - min_mask = PMURES_BIT(RES4313_BB_PU_RSRC) | - PMURES_BIT(RES4313_XTAL_PU_RSRC) | - PMURES_BIT(RES4313_ALP_AVAIL_RSRC) | - PMURES_BIT(RES4313_BB_PLL_PWRSW_RSRC); - max_mask = 0xffff; - break; - default: - break; - } - - *pmin = min_mask; - *pmax = max_mask; -} - -void si_pmu_spuravoid_pllupdate(struct si_pub *sih, u8 spuravoid) -{ - u32 tmp = 0; - struct bcma_device *core; - - /* switch to chipc */ - core = ai_findcore(sih, BCMA_CORE_CHIPCOMMON, 0); - - switch (ai_get_chip_id(sih)) { - case BCM43224_CHIP_ID: - case BCM43225_CHIP_ID: - if (spuravoid == 1) { - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL0); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x11500010); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL1); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x000C0C06); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL2); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x0F600a08); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL3); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x00000000); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL4); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x2001E920); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL5); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x88888815); - } else { - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL0); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x11100010); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL1); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x000c0c06); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL2); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x03000a08); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL3); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x00000000); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL4); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x200005c0); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_addr), - PMU1_PLL0_PLLCTL5); - bcma_write32(core, CHIPCREGOFFS(pllcontrol_data), - 0x88888815); - } - tmp = 1 << 10; - break; - - default: - /* bail out */ - return; - } - - bcma_set32(core, CHIPCREGOFFS(pmucontrol), tmp); -} - u16 si_pmu_fast_pwrup_delay(struct si_pub *sih) { uint delay = PMU_MAX_TRANSITION_DLY; switch (ai_get_chip_id(sih)) { - case BCM43224_CHIP_ID: - case BCM43225_CHIP_ID: - case BCM4313_CHIP_ID: + case BCMA_CHIP_ID_BCM43224: + case BCMA_CHIP_ID_BCM43225: + case BCMA_CHIP_ID_BCM4313: delay = 3700; break; default: @@ -270,9 +156,9 @@ u32 si_pmu_alp_clock(struct si_pub *sih) return clock; switch (ai_get_chip_id(sih)) { - case BCM43224_CHIP_ID: - case BCM43225_CHIP_ID: - case BCM4313_CHIP_ID: + case BCMA_CHIP_ID_BCM43224: + case BCMA_CHIP_ID_BCM43225: + case BCMA_CHIP_ID_BCM4313: /* always 20Mhz */ clock = 20000 * 1000; break; @@ -283,51 +169,9 @@ u32 si_pmu_alp_clock(struct si_pub *sih) return clock; } -/* initialize PMU */ -void si_pmu_init(struct si_pub *sih) -{ - struct bcma_device *core; - - /* select chipc */ - core = ai_findcore(sih, BCMA_CORE_CHIPCOMMON, 0); - - if (ai_get_pmurev(sih) == 1) - bcma_mask32(core, CHIPCREGOFFS(pmucontrol), - ~PCTL_NOILP_ON_WAIT); - else if (ai_get_pmurev(sih) >= 2) - bcma_set32(core, CHIPCREGOFFS(pmucontrol), PCTL_NOILP_ON_WAIT); -} - -/* initialize PMU resources */ 
-void si_pmu_res_init(struct si_pub *sih) -{ - struct bcma_device *core; - u32 min_mask = 0, max_mask = 0; - - /* select to chipc */ - core = ai_findcore(sih, BCMA_CORE_CHIPCOMMON, 0); - - /* Determine min/max rsrc masks */ - si_pmu_res_masks(sih, &min_mask, &max_mask); - - /* It is required to program max_mask first and then min_mask */ - - /* Program max resource mask */ - - if (max_mask) - bcma_write32(core, CHIPCREGOFFS(max_res_mask), max_mask); - - /* Program min resource mask */ - - if (min_mask) - bcma_write32(core, CHIPCREGOFFS(min_res_mask), min_mask); - - /* Add some delay; allow resources to come up and settle. */ - mdelay(2); -} - u32 si_pmu_measure_alpclk(struct si_pub *sih) { + struct si_info *sii = container_of(sih, struct si_info, pub); struct bcma_device *core; u32 alp_khz; @@ -335,7 +179,7 @@ u32 si_pmu_measure_alpclk(struct si_pub *sih) return 0; /* Remember original core before switch to chipc */ - core = ai_findcore(sih, BCMA_CORE_CHIPCOMMON, 0); + core = sii->icbus->drv_cc.core; if (bcma_read32(core, CHIPCREGOFFS(pmustatus)) & PST_EXTLPOAVAIL) { u32 ilp_ctr, alp_hz; diff --git a/drivers/net/wireless/brcm80211/brcmsmac/pmu.h b/drivers/net/wireless/brcm80211/brcmsmac/pmu.h index 3e39c5e0f9ff..f7cff873578b 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/pmu.h +++ b/drivers/net/wireless/brcm80211/brcmsmac/pmu.h @@ -26,10 +26,7 @@ extern u32 si_pmu_chipcontrol(struct si_pub *sih, uint reg, u32 mask, u32 val); extern u32 si_pmu_regcontrol(struct si_pub *sih, uint reg, u32 mask, u32 val); extern u32 si_pmu_alp_clock(struct si_pub *sih); extern void si_pmu_pllupd(struct si_pub *sih); -extern void si_pmu_spuravoid_pllupdate(struct si_pub *sih, u8 spuravoid); extern u32 si_pmu_pllcontrol(struct si_pub *sih, uint reg, u32 mask, u32 val); -extern void si_pmu_init(struct si_pub *sih); -extern void si_pmu_res_init(struct si_pub *sih); extern u32 si_pmu_measure_alpclk(struct si_pub *sih); #endif /* _BRCM_PMU_H_ */ diff --git a/drivers/net/wireless/brcm80211/brcmsmac/pub.h b/drivers/net/wireless/brcm80211/brcmsmac/pub.h index aa5d67f8d874..5855f4fd16dc 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/pub.h +++ b/drivers/net/wireless/brcm80211/brcmsmac/pub.h @@ -311,7 +311,7 @@ extern uint brcms_c_detach(struct brcms_c_info *wlc); extern int brcms_c_up(struct brcms_c_info *wlc); extern uint brcms_c_down(struct brcms_c_info *wlc); -extern bool brcms_c_chipmatch(u16 vendor, u16 device); +extern bool brcms_c_chipmatch(struct bcma_device *core); extern void brcms_c_init(struct brcms_c_info *wlc, bool mute_tx); extern void brcms_c_reset(struct brcms_c_info *wlc); diff --git a/drivers/net/wireless/brcm80211/brcmutil/utils.c b/drivers/net/wireless/brcm80211/brcmutil/utils.c index b45ab34cdfdc..3e6405e06ac0 100644 --- a/drivers/net/wireless/brcm80211/brcmutil/utils.c +++ b/drivers/net/wireless/brcm80211/brcmutil/utils.c @@ -43,6 +43,8 @@ EXPORT_SYMBOL(brcmu_pkt_buf_get_skb); /* Free the driver packet. 
Free the tag if present */ void brcmu_pkt_buf_free_skb(struct sk_buff *skb) { + if (!skb) + return; WARN_ON(skb->next); if (skb->destructor) /* cannot kfree_skb() on hard IRQ (net/core/skbuff.c) if diff --git a/drivers/net/wireless/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/brcm80211/include/brcm_hw_ids.h index 333193f20e1c..bcc79b4e3267 100644 --- a/drivers/net/wireless/brcm80211/include/brcm_hw_ids.h +++ b/drivers/net/wireless/brcm80211/include/brcm_hw_ids.h @@ -37,5 +37,6 @@ #define BCM4329_CHIP_ID 0x4329 #define BCM4330_CHIP_ID 0x4330 #define BCM4331_CHIP_ID 0x4331 +#define BCM4334_CHIP_ID 0x4334 #endif /* _BRCM_HW_IDS_H_ */ diff --git a/drivers/net/wireless/brcm80211/include/soc.h b/drivers/net/wireless/brcm80211/include/soc.h index 4e9b7e4827ea..123cfa854a0d 100644 --- a/drivers/net/wireless/brcm80211/include/soc.h +++ b/drivers/net/wireless/brcm80211/include/soc.h @@ -19,68 +19,6 @@ #define SI_ENUM_BASE 0x18000000 /* Enumeration space base */ -/* core codes */ -#define NODEV_CORE_ID 0x700 /* Invalid coreid */ -#define CC_CORE_ID 0x800 /* chipcommon core */ -#define ILINE20_CORE_ID 0x801 /* iline20 core */ -#define SRAM_CORE_ID 0x802 /* sram core */ -#define SDRAM_CORE_ID 0x803 /* sdram core */ -#define PCI_CORE_ID 0x804 /* pci core */ -#define MIPS_CORE_ID 0x805 /* mips core */ -#define ENET_CORE_ID 0x806 /* enet mac core */ -#define CODEC_CORE_ID 0x807 /* v90 codec core */ -#define USB_CORE_ID 0x808 /* usb 1.1 host/device core */ -#define ADSL_CORE_ID 0x809 /* ADSL core */ -#define ILINE100_CORE_ID 0x80a /* iline100 core */ -#define IPSEC_CORE_ID 0x80b /* ipsec core */ -#define UTOPIA_CORE_ID 0x80c /* utopia core */ -#define PCMCIA_CORE_ID 0x80d /* pcmcia core */ -#define SOCRAM_CORE_ID 0x80e /* internal memory core */ -#define MEMC_CORE_ID 0x80f /* memc sdram core */ -#define OFDM_CORE_ID 0x810 /* OFDM phy core */ -#define EXTIF_CORE_ID 0x811 /* external interface core */ -#define D11_CORE_ID 0x812 /* 802.11 MAC core */ -#define APHY_CORE_ID 0x813 /* 802.11a phy core */ -#define BPHY_CORE_ID 0x814 /* 802.11b phy core */ -#define GPHY_CORE_ID 0x815 /* 802.11g phy core */ -#define MIPS33_CORE_ID 0x816 /* mips3302 core */ -#define USB11H_CORE_ID 0x817 /* usb 1.1 host core */ -#define USB11D_CORE_ID 0x818 /* usb 1.1 device core */ -#define USB20H_CORE_ID 0x819 /* usb 2.0 host core */ -#define USB20D_CORE_ID 0x81a /* usb 2.0 device core */ -#define SDIOH_CORE_ID 0x81b /* sdio host core */ -#define ROBO_CORE_ID 0x81c /* roboswitch core */ -#define ATA100_CORE_ID 0x81d /* parallel ATA core */ -#define SATAXOR_CORE_ID 0x81e /* serial ATA & XOR DMA core */ -#define GIGETH_CORE_ID 0x81f /* gigabit ethernet core */ -#define PCIE_CORE_ID 0x820 /* pci express core */ -#define NPHY_CORE_ID 0x821 /* 802.11n 2x2 phy core */ -#define SRAMC_CORE_ID 0x822 /* SRAM controller core */ -#define MINIMAC_CORE_ID 0x823 /* MINI MAC/phy core */ -#define ARM11_CORE_ID 0x824 /* ARM 1176 core */ -#define ARM7S_CORE_ID 0x825 /* ARM7tdmi-s core */ -#define LPPHY_CORE_ID 0x826 /* 802.11a/b/g phy core */ -#define PMU_CORE_ID 0x827 /* PMU core */ -#define SSNPHY_CORE_ID 0x828 /* 802.11n single-stream phy core */ -#define SDIOD_CORE_ID 0x829 /* SDIO device core */ -#define ARMCM3_CORE_ID 0x82a /* ARM Cortex M3 core */ -#define HTPHY_CORE_ID 0x82b /* 802.11n 4x4 phy core */ -#define MIPS74K_CORE_ID 0x82c /* mips 74k core */ -#define GMAC_CORE_ID 0x82d /* Gigabit MAC core */ -#define DMEMC_CORE_ID 0x82e /* DDR1/2 memory controller core */ -#define PCIERC_CORE_ID 0x82f /* PCIE Root Complex core */ -#define 
OCP_CORE_ID 0x830 /* OCP2OCP bridge core */ -#define SC_CORE_ID 0x831 /* shared common core */ -#define AHB_CORE_ID 0x832 /* OCP2AHB bridge core */ -#define SPIH_CORE_ID 0x833 /* SPI host core */ -#define I2S_CORE_ID 0x834 /* I2S core */ -#define DMEMS_CORE_ID 0x835 /* SDR/DDR1 memory controller core */ -#define DEF_SHIM_COMP 0x837 /* SHIM component in ubus/6362 */ -#define OOB_ROUTER_CORE_ID 0x367 /* OOB router core ID */ -#define DEF_AI_COMP 0xfff /* Default component, in ai chips it - * maps all unused address ranges - */ - /* Common core control flags */ #define SICF_BIST_EN 0x8000 #define SICF_PME_EN 0x4000 diff --git a/drivers/net/wireless/hostap/hostap_proc.c b/drivers/net/wireless/hostap/hostap_proc.c index 75ef8f04aabe..dc447c1b5abe 100644 --- a/drivers/net/wireless/hostap/hostap_proc.c +++ b/drivers/net/wireless/hostap/hostap_proc.c @@ -58,8 +58,7 @@ static int prism2_stats_proc_read(char *page, char **start, off_t off, { char *p = page; local_info_t *local = (local_info_t *) data; - struct comm_tallies_sums *sums = (struct comm_tallies_sums *) - &local->comm_tallies; + struct comm_tallies_sums *sums = &local->comm_tallies; if (off != 0) { *eof = 1; diff --git a/drivers/net/wireless/ipw2x00/ipw2200.c b/drivers/net/wireless/ipw2x00/ipw2200.c index 0036737fe8e3..0df459147394 100644 --- a/drivers/net/wireless/ipw2x00/ipw2200.c +++ b/drivers/net/wireless/ipw2x00/ipw2200.c @@ -2701,6 +2701,20 @@ static void eeprom_parse_mac(struct ipw_priv *priv, u8 * mac) memcpy(mac, &priv->eeprom[EEPROM_MAC_ADDRESS], 6); } +static void ipw_read_eeprom(struct ipw_priv *priv) +{ + int i; + __le16 *eeprom = (__le16 *) priv->eeprom; + + IPW_DEBUG_TRACE(">>\n"); + + /* read entire contents of eeprom into private buffer */ + for (i = 0; i < 128; i++) + eeprom[i] = cpu_to_le16(eeprom_read_u16(priv, (u8) i)); + + IPW_DEBUG_TRACE("<<\n"); +} + /* * Either the device driver (i.e. the host) or the firmware can * load eeprom data into the designated region in SRAM. If neither @@ -2712,14 +2726,9 @@ static void eeprom_parse_mac(struct ipw_priv *priv, u8 * mac) static void ipw_eeprom_init_sram(struct ipw_priv *priv) { int i; - __le16 *eeprom = (__le16 *) priv->eeprom; IPW_DEBUG_TRACE(">>\n"); - /* read entire contents of eeprom into private buffer */ - for (i = 0; i < 128; i++) - eeprom[i] = cpu_to_le16(eeprom_read_u16(priv, (u8) i)); - /* If the data looks correct, then copy it to our private copy. 
Otherwise let the firmware know to perform the operation @@ -3643,8 +3652,10 @@ static int ipw_load(struct ipw_priv *priv) /* ack fw init done interrupt */ ipw_write32(priv, IPW_INTA_RW, IPW_INTA_BIT_FW_INITIALIZATION_DONE); - /* read eeprom data and initialize the eeprom region of sram */ + /* read eeprom data */ priv->eeprom_delay = 1; + ipw_read_eeprom(priv); + /* initialize the eeprom region of sram */ ipw_eeprom_init_sram(priv); /* enable interrupts */ @@ -7069,9 +7080,7 @@ static int ipw_qos_activate(struct ipw_priv *priv, } IPW_DEBUG_QOS("QoS sending IPW_CMD_QOS_PARAMETERS\n"); - err = ipw_send_qos_params_command(priv, - (struct libipw_qos_parameters *) - &(qos_parameters[0])); + err = ipw_send_qos_params_command(priv, &qos_parameters[0]); if (err) IPW_DEBUG_QOS("QoS IPW_CMD_QOS_PARAMETERS failed\n"); diff --git a/drivers/net/wireless/iwlegacy/3945-rs.c b/drivers/net/wireless/iwlegacy/3945-rs.c index 4b10157d8686..d4fd29ad90dc 100644 --- a/drivers/net/wireless/iwlegacy/3945-rs.c +++ b/drivers/net/wireless/iwlegacy/3945-rs.c @@ -946,7 +946,7 @@ il3945_rate_scale_init(struct ieee80211_hw *hw, s32 sta_id) case IEEE80211_BAND_5GHZ: rs_sta->expected_tpt = il3945_expected_tpt_a; break; - case IEEE80211_NUM_BANDS: + default: BUG(); break; } diff --git a/drivers/net/wireless/iwlegacy/4965-mac.c b/drivers/net/wireless/iwlegacy/4965-mac.c index ff5d689e13f3..34f61a0581a2 100644 --- a/drivers/net/wireless/iwlegacy/4965-mac.c +++ b/drivers/net/wireless/iwlegacy/4965-mac.c @@ -5724,7 +5724,8 @@ il4965_mac_setup_register(struct il_priv *il, u32 max_probe_length) BIT(NL80211_IFTYPE_STATION) | BIT(NL80211_IFTYPE_ADHOC); hw->wiphy->flags |= - WIPHY_FLAG_CUSTOM_REGULATORY | WIPHY_FLAG_DISABLE_BEACON_HINTS; + WIPHY_FLAG_CUSTOM_REGULATORY | WIPHY_FLAG_DISABLE_BEACON_HINTS | + WIPHY_FLAG_IBSS_RSN; /* * For now, disable PS by default because it affects @@ -5873,6 +5874,16 @@ il4965_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, return -EOPNOTSUPP; } + /* + * To support IBSS RSN, don't program group keys in IBSS, the + * hardware will then not attempt to decrypt the frames. + */ + if (vif->type == NL80211_IFTYPE_ADHOC && + !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) { + D_MAC80211("leave - ad-hoc group key\n"); + return -EOPNOTSUPP; + } + sta_id = il_sta_id_or_broadcast(il, sta); if (sta_id == IL_INVALID_STATION) return -EINVAL; diff --git a/drivers/net/wireless/iwlegacy/common.c b/drivers/net/wireless/iwlegacy/common.c index 5d4807c2b56d..0370403fd0bd 100644 --- a/drivers/net/wireless/iwlegacy/common.c +++ b/drivers/net/wireless/iwlegacy/common.c @@ -4717,10 +4717,11 @@ il_check_stuck_queue(struct il_priv *il, int cnt) struct il_tx_queue *txq = &il->txq[cnt]; struct il_queue *q = &txq->q; unsigned long timeout; + unsigned long now = jiffies; int ret; if (q->read_ptr == q->write_ptr) { - txq->time_stamp = jiffies; + txq->time_stamp = now; return 0; } @@ -4728,9 +4729,9 @@ il_check_stuck_queue(struct il_priv *il, int cnt) txq->time_stamp + msecs_to_jiffies(il->cfg->wd_timeout); - if (time_after(jiffies, timeout)) { + if (time_after(now, timeout)) { IL_ERR("Queue %d stuck for %u ms.\n", q->id, - il->cfg->wd_timeout); + jiffies_to_msecs(now - txq->time_stamp)); ret = il_force_reset(il, false); return (ret == -EAGAIN) ? 
0 : 1; } @@ -5358,7 +5359,7 @@ il_mac_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif, if (changes & BSS_CHANGED_ASSOC) { D_MAC80211("ASSOC %d\n", bss_conf->assoc); if (bss_conf->assoc) { - il->timestamp = bss_conf->last_tsf; + il->timestamp = bss_conf->sync_tsf; if (!il_is_rfkill(il)) il->ops->post_associate(il); diff --git a/drivers/net/wireless/iwlwifi/Kconfig b/drivers/net/wireless/iwlwifi/Kconfig index 2463c0626438..727fbb5db9da 100644 --- a/drivers/net/wireless/iwlwifi/Kconfig +++ b/drivers/net/wireless/iwlwifi/Kconfig @@ -6,6 +6,7 @@ config IWLWIFI select LEDS_CLASS select LEDS_TRIGGERS select MAC80211_LEDS + select IWLDVM ---help--- Select to build the driver supporting the: @@ -41,6 +42,10 @@ config IWLWIFI say M here and read <file:Documentation/kbuild/modules.txt>. The module will be called iwlwifi. +config IWLDVM + tristate "Intel Wireless WiFi" + depends on IWLWIFI + menu "Debugging Options" depends on IWLWIFI diff --git a/drivers/net/wireless/iwlwifi/Makefile b/drivers/net/wireless/iwlwifi/Makefile index d615eacbf050..170ec330d2a9 100644 --- a/drivers/net/wireless/iwlwifi/Makefile +++ b/drivers/net/wireless/iwlwifi/Makefile @@ -1,27 +1,19 @@ -# WIFI +# common obj-$(CONFIG_IWLWIFI) += iwlwifi.o -iwlwifi-objs := iwl-agn.o iwl-agn-rs.o iwl-mac80211.o -iwlwifi-objs += iwl-ucode.o iwl-agn-tx.o iwl-debug.o -iwlwifi-objs += iwl-agn-lib.o iwl-agn-calib.o iwl-io.o -iwlwifi-objs += iwl-agn-tt.o iwl-agn-sta.o iwl-agn-rx.o - -iwlwifi-objs += iwl-eeprom.o iwl-power.o -iwlwifi-objs += iwl-scan.o iwl-led.o -iwlwifi-objs += iwl-agn-rxon.o iwl-agn-devices.o -iwlwifi-objs += iwl-5000.o -iwlwifi-objs += iwl-6000.o -iwlwifi-objs += iwl-1000.o -iwlwifi-objs += iwl-2000.o -iwlwifi-objs += iwl-pci.o +iwlwifi-objs += iwl-io.o iwlwifi-objs += iwl-drv.o +iwlwifi-objs += iwl-debug.o iwlwifi-objs += iwl-notif-wait.o -iwlwifi-objs += iwl-trans-pcie.o iwl-trans-pcie-rx.o iwl-trans-pcie-tx.o - +iwlwifi-objs += iwl-eeprom-read.o iwl-eeprom-parse.o +iwlwifi-objs += pcie/drv.o pcie/rx.o pcie/tx.o pcie/trans.o +iwlwifi-objs += pcie/1000.o pcie/2000.o pcie/5000.o pcie/6000.o -iwlwifi-$(CONFIG_IWLWIFI_DEBUGFS) += iwl-debugfs.o iwlwifi-$(CONFIG_IWLWIFI_DEVICE_TRACING) += iwl-devtrace.o -iwlwifi-$(CONFIG_IWLWIFI_DEVICE_TESTMODE) += iwl-testmode.o +iwlwifi-$(CONFIG_IWLWIFI_DEVICE_TESTMODE) += iwl-test.o -CFLAGS_iwl-devtrace.o := -I$(src) +ccflags-y += -D__CHECK_ENDIAN__ -I$(src) -ccflags-y += -D__CHECK_ENDIAN__ + +obj-$(CONFIG_IWLDVM) += dvm/ + +CFLAGS_iwl-devtrace.o := -I$(src) diff --git a/drivers/net/wireless/iwlwifi/dvm/Makefile b/drivers/net/wireless/iwlwifi/dvm/Makefile new file mode 100644 index 000000000000..5ff76b204141 --- /dev/null +++ b/drivers/net/wireless/iwlwifi/dvm/Makefile @@ -0,0 +1,13 @@ +# DVM +obj-$(CONFIG_IWLDVM) += iwldvm.o +iwldvm-objs += main.o rs.o mac80211.o ucode.o tx.o +iwldvm-objs += lib.o calib.o tt.o sta.o rx.o + +iwldvm-objs += power.o +iwldvm-objs += scan.o led.o +iwldvm-objs += rxon.o devices.o + +iwldvm-$(CONFIG_IWLWIFI_DEBUGFS) += debugfs.o +iwldvm-$(CONFIG_IWLWIFI_DEVICE_TESTMODE) += testmode.o + +ccflags-y += -D__CHECK_ENDIAN__ -I$(src)/../ diff --git a/drivers/net/wireless/iwlwifi/iwl-agn.h b/drivers/net/wireless/iwlwifi/dvm/agn.h index 79c0fe06f4db..9bb16bdf6d26 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn.h +++ b/drivers/net/wireless/iwlwifi/dvm/agn.h @@ -63,9 +63,10 @@ #ifndef __iwl_agn_h__ #define __iwl_agn_h__ -#include "iwl-dev.h" #include "iwl-config.h" +#include "dev.h" + /* The first 11 queues (0-10) are used otherwise */ #define 
IWLAGN_FIRST_AMPDU_QUEUE 11 @@ -91,7 +92,6 @@ extern struct iwl_lib_ops iwl6030_lib; #define STATUS_CT_KILL 1 #define STATUS_ALIVE 2 #define STATUS_READY 3 -#define STATUS_GEO_CONFIGURED 4 #define STATUS_EXIT_PENDING 5 #define STATUS_STATISTICS 6 #define STATUS_SCANNING 7 @@ -101,6 +101,7 @@ extern struct iwl_lib_ops iwl6030_lib; #define STATUS_CHANNEL_SWITCH_PENDING 11 #define STATUS_SCAN_COMPLETE 12 #define STATUS_POWER_PMI 13 +#define STATUS_SCAN_ROC_EXPIRED 14 struct iwl_ucode_capabilities; @@ -255,6 +256,10 @@ int __must_check iwl_scan_initiate(struct iwl_priv *priv, enum iwl_scan_type scan_type, enum ieee80211_band band); +void iwl_scan_roc_expired(struct iwl_priv *priv); +void iwl_scan_offchannel_skb(struct iwl_priv *priv); +void iwl_scan_offchannel_skb_status(struct iwl_priv *priv); + /* For faster active scanning, scan will move to the next channel if fewer than * PLCP_QUIET_THRESH packets are heard on this channel within * ACTIVE_QUIET_TIME after sending probe request. This shortens the dwell @@ -264,7 +269,7 @@ int __must_check iwl_scan_initiate(struct iwl_priv *priv, #define IWL_ACTIVE_QUIET_TIME cpu_to_le16(10) /* msec */ #define IWL_PLCP_QUIET_THRESH cpu_to_le16(1) /* packets */ -#define IWL_SCAN_CHECK_WATCHDOG (HZ * 7) +#define IWL_SCAN_CHECK_WATCHDOG (HZ * 15) /* bt coex */ @@ -390,8 +395,10 @@ static inline __le32 iwl_hw_set_rate_n_flags(u8 rate, u32 flags) } extern int iwl_alive_start(struct iwl_priv *priv); -/* svtool */ + +/* testmode support */ #ifdef CONFIG_IWLWIFI_DEVICE_TESTMODE + extern int iwlagn_mac_testmode_cmd(struct ieee80211_hw *hw, void *data, int len); extern int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, @@ -399,13 +406,16 @@ extern int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, struct netlink_callback *cb, void *data, int len); extern void iwl_testmode_init(struct iwl_priv *priv); -extern void iwl_testmode_cleanup(struct iwl_priv *priv); +extern void iwl_testmode_free(struct iwl_priv *priv); + #else + static inline int iwlagn_mac_testmode_cmd(struct ieee80211_hw *hw, void *data, int len) { return -ENOSYS; } + static inline int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, struct sk_buff *skb, struct netlink_callback *cb, @@ -413,12 +423,12 @@ int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, struct sk_buff *skb, { return -ENOSYS; } -static inline -void iwl_testmode_init(struct iwl_priv *priv) + +static inline void iwl_testmode_init(struct iwl_priv *priv) { } -static inline -void iwl_testmode_cleanup(struct iwl_priv *priv) + +static inline void iwl_testmode_free(struct iwl_priv *priv) { } #endif @@ -437,10 +447,8 @@ static inline void iwl_print_rx_config_cmd(struct iwl_priv *priv, static inline int iwl_is_ready(struct iwl_priv *priv) { - /* The adapter is 'ready' if READY and GEO_CONFIGURED bits are - * set but EXIT_PENDING is not */ + /* The adapter is 'ready' if READY EXIT_PENDING is not set */ return test_bit(STATUS_READY, &priv->status) && - test_bit(STATUS_GEO_CONFIGURED, &priv->status) && !test_bit(STATUS_EXIT_PENDING, &priv->status); } @@ -518,85 +526,4 @@ static inline const char *iwl_dvm_get_cmd_string(u8 cmd) return s; return "UNKNOWN"; } - -/* API method exported for mvm hybrid state */ -void iwl_setup_deferred_work(struct iwl_priv *priv); -int iwl_send_wimax_coex(struct iwl_priv *priv); -int iwl_send_bt_env(struct iwl_priv *priv, u8 action, u8 type); -void iwl_option_config(struct iwl_priv *priv); -void iwl_set_hw_params(struct iwl_priv *priv); -void iwl_init_context(struct iwl_priv *priv, u32 ucode_flags); -int 
iwl_init_drv(struct iwl_priv *priv); -void iwl_uninit_drv(struct iwl_priv *priv); -void iwl_send_bt_config(struct iwl_priv *priv); -void iwl_rf_kill_ct_config(struct iwl_priv *priv); -int iwl_setup_interface(struct iwl_priv *priv, struct iwl_rxon_context *ctx); -void iwl_teardown_interface(struct iwl_priv *priv, - struct ieee80211_vif *vif, - bool mode_change); -int iwl_full_rxon_required(struct iwl_priv *priv, struct iwl_rxon_context *ctx); -void iwlagn_update_qos(struct iwl_priv *priv, struct iwl_rxon_context *ctx); -void iwlagn_check_needed_chains(struct iwl_priv *priv, - struct iwl_rxon_context *ctx, - struct ieee80211_bss_conf *bss_conf); -void iwlagn_chain_noise_reset(struct iwl_priv *priv); -int iwlagn_update_beacon(struct iwl_priv *priv, - struct ieee80211_vif *vif); -void iwl_tt_handler(struct iwl_priv *priv); -void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode); -void iwl_stop_sw_queue(struct iwl_op_mode *op_mode, int queue); -void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state); -void iwl_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb); -void iwl_nic_error(struct iwl_op_mode *op_mode); -void iwl_cmd_queue_full(struct iwl_op_mode *op_mode); -void iwl_nic_config(struct iwl_op_mode *op_mode); -int iwlagn_mac_set_tim(struct ieee80211_hw *hw, - struct ieee80211_sta *sta, bool set); -void iwlagn_mac_rssi_callback(struct ieee80211_hw *hw, - enum ieee80211_rssi_event rssi_event); -int iwlagn_mac_cancel_remain_on_channel(struct ieee80211_hw *hw); -int iwlagn_mac_tx_last_beacon(struct ieee80211_hw *hw); -void iwlagn_mac_flush(struct ieee80211_hw *hw, bool drop); -void iwl_wake_sw_queue(struct iwl_op_mode *op_mode, int queue); -void iwlagn_mac_channel_switch(struct ieee80211_hw *hw, - struct ieee80211_channel_switch *ch_switch); -int iwlagn_mac_sta_state(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - enum ieee80211_sta_state old_state, - enum ieee80211_sta_state new_state); -int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - enum ieee80211_ampdu_mlme_action action, - struct ieee80211_sta *sta, u16 tid, u16 *ssn, - u8 buf_size); -int iwlagn_mac_hw_scan(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct cfg80211_scan_request *req); -void iwlagn_mac_sta_notify(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - enum sta_notify_cmd cmd, - struct ieee80211_sta *sta); -void iwlagn_configure_filter(struct ieee80211_hw *hw, - unsigned int changed_flags, - unsigned int *total_flags, - u64 multicast); -int iwlagn_mac_conf_tx(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, u16 queue, - const struct ieee80211_tx_queue_params *params); -void iwlagn_mac_set_rekey_data(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct cfg80211_gtk_rekey_data *data); -void iwlagn_mac_update_tkip_key(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_key_conf *keyconf, - struct ieee80211_sta *sta, - u32 iv32, u16 *phase1key); -int iwlagn_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - struct ieee80211_key_conf *key); -void iwlagn_mac_stop(struct ieee80211_hw *hw); -void iwlagn_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb); -int iwlagn_mac_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan); #endif /* __iwl_agn_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-calib.c b/drivers/net/wireless/iwlwifi/dvm/calib.c index 95f27f1a423b..f2dd671d7dc8 100644 --- 
a/drivers/net/wireless/iwlwifi/iwl-agn-calib.c +++ b/drivers/net/wireless/iwlwifi/dvm/calib.c @@ -63,10 +63,11 @@ #include <linux/slab.h> #include <net/mac80211.h> -#include "iwl-dev.h" -#include "iwl-agn-calib.h" #include "iwl-trans.h" -#include "iwl-agn.h" + +#include "dev.h" +#include "calib.h" +#include "agn.h" /***************************************************************************** * INIT calibrations framework @@ -832,14 +833,14 @@ static void iwl_find_disconn_antenna(struct iwl_priv *priv, u32* average_sig, * To be safe, simply mask out any chains that we know * are not on the device. */ - active_chains &= priv->hw_params.valid_rx_ant; + active_chains &= priv->eeprom_data->valid_rx_ant; num_tx_chains = 0; for (i = 0; i < NUM_RX_CHAINS; i++) { /* loops on all the bits of * priv->hw_setting.valid_tx_ant */ u8 ant_msk = (1 << i); - if (!(priv->hw_params.valid_tx_ant & ant_msk)) + if (!(priv->eeprom_data->valid_tx_ant & ant_msk)) continue; num_tx_chains++; @@ -853,7 +854,7 @@ static void iwl_find_disconn_antenna(struct iwl_priv *priv, u32* average_sig, * connect the first valid tx chain */ first_chain = - find_first_chain(priv->hw_params.valid_tx_ant); + find_first_chain(priv->eeprom_data->valid_tx_ant); data->disconn_array[first_chain] = 0; active_chains |= BIT(first_chain); IWL_DEBUG_CALIB(priv, @@ -863,13 +864,13 @@ static void iwl_find_disconn_antenna(struct iwl_priv *priv, u32* average_sig, } } - if (active_chains != priv->hw_params.valid_rx_ant && + if (active_chains != priv->eeprom_data->valid_rx_ant && active_chains != priv->chain_noise_data.active_chains) IWL_DEBUG_CALIB(priv, "Detected that not all antennas are connected! " "Connected: %#x, valid: %#x.\n", active_chains, - priv->hw_params.valid_rx_ant); + priv->eeprom_data->valid_rx_ant); /* Save for use within RXON, TX, SCAN commands, etc. */ data->active_chains = active_chains; @@ -1054,7 +1055,7 @@ void iwl_chain_noise_calibration(struct iwl_priv *priv) priv->cfg->bt_params->advanced_bt_coexist) { /* Disable disconnected antenna algorithm for advanced bt coex, assuming valid antennas are connected */ - data->active_chains = priv->hw_params.valid_rx_ant; + data->active_chains = priv->eeprom_data->valid_rx_ant; for (i = 0; i < NUM_RX_CHAINS; i++) if (!(data->active_chains & (1<<i))) data->disconn_array[i] = 1; @@ -1083,8 +1084,9 @@ void iwl_chain_noise_calibration(struct iwl_priv *priv) IWL_DEBUG_CALIB(priv, "min_average_noise = %d, antenna %d\n", min_average_noise, min_average_noise_antenna_i); - iwlagn_gain_computation(priv, average_noise, - find_first_chain(priv->hw_params.valid_rx_ant)); + iwlagn_gain_computation( + priv, average_noise, + find_first_chain(priv->eeprom_data->valid_rx_ant)); /* Some power changes may have been made during the calibration. 
* Update and commit the RXON diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-calib.h b/drivers/net/wireless/iwlwifi/dvm/calib.h index dbe13787f272..2349f393cc42 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-calib.h +++ b/drivers/net/wireless/iwlwifi/dvm/calib.h @@ -62,8 +62,8 @@ #ifndef __iwl_calib_h__ #define __iwl_calib_h__ -#include "iwl-dev.h" -#include "iwl-commands.h" +#include "dev.h" +#include "commands.h" void iwl_chain_noise_calibration(struct iwl_priv *priv); void iwl_sensitivity_calibration(struct iwl_priv *priv); diff --git a/drivers/net/wireless/iwlwifi/iwl-commands.h b/drivers/net/wireless/iwlwifi/dvm/commands.h index 9af6a239b384..4a361c55c543 100644 --- a/drivers/net/wireless/iwlwifi/iwl-commands.h +++ b/drivers/net/wireless/iwlwifi/dvm/commands.h @@ -61,9 +61,9 @@ * *****************************************************************************/ /* - * Please use this file (iwl-commands.h) only for uCode API definitions. + * Please use this file (commands.h) only for uCode API definitions. * Please use iwl-xxxx-hw.h for hardware-related definitions. - * Please use iwl-dev.h for driver implementation definitions. + * Please use dev.h for driver implementation definitions. */ #ifndef __iwl_commands_h__ @@ -190,6 +190,44 @@ enum { REPLY_MAX = 0xff }; +/* + * Minimum number of queues. MAX_NUM is defined in hw specific files. + * Set the minimum to accommodate + * - 4 standard TX queues + * - the command queue + * - 4 PAN TX queues + * - the PAN multicast queue, and + * - the AUX (TX during scan dwell) queue. + */ +#define IWL_MIN_NUM_QUEUES 11 + +/* + * Command queue depends on iPAN support. + */ +#define IWL_DEFAULT_CMD_QUEUE_NUM 4 +#define IWL_IPAN_CMD_QUEUE_NUM 9 + +#define IWL_TX_FIFO_BK 0 /* shared */ +#define IWL_TX_FIFO_BE 1 +#define IWL_TX_FIFO_VI 2 /* shared */ +#define IWL_TX_FIFO_VO 3 +#define IWL_TX_FIFO_BK_IPAN IWL_TX_FIFO_BK +#define IWL_TX_FIFO_BE_IPAN 4 +#define IWL_TX_FIFO_VI_IPAN IWL_TX_FIFO_VI +#define IWL_TX_FIFO_VO_IPAN 5 +/* re-uses the VO FIFO, uCode will properly flush/schedule */ +#define IWL_TX_FIFO_AUX 5 +#define IWL_TX_FIFO_UNUSED 255 + +#define IWLAGN_CMD_FIFO_NUM 7 + +/* + * This queue number is required for proper operation + * because the ucode will stop/start the scheduler as + * required. 
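The comment block above fixes IWL_MIN_NUM_QUEUES at 11. A standalone sketch of that budget, with every name invented for illustration rather than taken from the driver, assuming one queue per listed role:

#include <assert.h>

/* Illustration only: the queue budget spelled out in the comment above. */
enum {
	STD_TX_QUEUES    = 4,	/* one per AC: BK/BE/VI/VO */
	CMD_QUEUES       = 1,	/* host command queue */
	PAN_TX_QUEUES    = 4,	/* PAN copies of the four ACs */
	PAN_MCAST_QUEUES = 1,	/* IWL_IPAN_MCAST_QUEUE */
	AUX_QUEUES       = 1,	/* TX during scan dwell */
	MIN_NUM_QUEUES   = STD_TX_QUEUES + CMD_QUEUES + PAN_TX_QUEUES +
			   PAN_MCAST_QUEUES + AUX_QUEUES,
};

/* Compile-time check that the sum matches IWL_MIN_NUM_QUEUES (11). */
static_assert(MIN_NUM_QUEUES == 11, "queue budget must add up to 11");

The defines themselves simply move from dev.h into commands.h later in this patch; the values are unchanged.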
+ */ +#define IWL_IPAN_MCAST_QUEUE 8 + /****************************************************************************** * (0) * Commonly used structures and definitions: @@ -197,9 +235,6 @@ enum { * *****************************************************************************/ -/* iwl_cmd_header flags value */ -#define IWL_CMD_FAILED_MSK 0x40 - /** * iwlagn rate_n_flags bit fields * @@ -758,8 +793,6 @@ struct iwl_qosparam_cmd { #define IWLAGN_BROADCAST_ID 15 #define IWLAGN_STATION_COUNT 16 -#define IWL_INVALID_STATION 255 -#define IWL_MAX_TID_COUNT 8 #define IWL_TID_NON_QOS IWL_MAX_TID_COUNT #define STA_FLG_TX_RATE_MSK cpu_to_le32(1 << 2) @@ -1872,6 +1905,7 @@ struct iwl_bt_cmd { #define IWLAGN_BT_PRIO_BOOST_MAX 0xFF #define IWLAGN_BT_PRIO_BOOST_MIN 0x00 #define IWLAGN_BT_PRIO_BOOST_DEFAULT 0xF0 +#define IWLAGN_BT_PRIO_BOOST_DEFAULT32 0xF0F0F0F0 #define IWLAGN_BT_MAX_KILL_DEFAULT 5 diff --git a/drivers/net/wireless/iwlwifi/iwl-debugfs.c b/drivers/net/wireless/iwlwifi/dvm/debugfs.c index 7f97dec8534d..46782f1102ac 100644 --- a/drivers/net/wireless/iwlwifi/iwl-debugfs.c +++ b/drivers/net/wireless/iwlwifi/dvm/debugfs.c @@ -30,16 +30,12 @@ #include <linux/kernel.h> #include <linux/module.h> #include <linux/debugfs.h> - #include <linux/ieee80211.h> #include <net/mac80211.h> - - -#include "iwl-dev.h" #include "iwl-debug.h" #include "iwl-io.h" -#include "iwl-agn.h" -#include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" /* create and remove of files */ #define DEBUGFS_ADD_FILE(name, parent, mode) do { \ @@ -87,7 +83,7 @@ static ssize_t iwl_dbgfs_##name##_write(struct file *file, \ #define DEBUGFS_READ_FILE_OPS(name) \ DEBUGFS_READ_FUNC(name); \ static const struct file_operations iwl_dbgfs_##name##_ops = { \ - .read = iwl_dbgfs_##name##_read, \ + .read = iwl_dbgfs_##name##_read, \ .open = simple_open, \ .llseek = generic_file_llseek, \ }; @@ -307,13 +303,13 @@ static ssize_t iwl_dbgfs_nvm_read(struct file *file, const u8 *ptr; char *buf; u16 eeprom_ver; - size_t eeprom_len = priv->cfg->base_params->eeprom_size; + size_t eeprom_len = priv->eeprom_blob_size; buf_size = 4 * eeprom_len + 256; if (eeprom_len % 16) return -ENODATA; - ptr = priv->eeprom; + ptr = priv->eeprom_blob; if (!ptr) return -ENOMEM; @@ -322,11 +318,9 @@ static ssize_t iwl_dbgfs_nvm_read(struct file *file, if (!buf) return -ENOMEM; - eeprom_ver = iwl_eeprom_query16(priv, EEPROM_VERSION); - pos += scnprintf(buf + pos, buf_size - pos, "NVM Type: %s, " - "version: 0x%x\n", - (priv->nvm_device_type == NVM_DEVICE_TYPE_OTP) - ? 
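The nvm debugfs handler in this hunk now dumps priv->eeprom_blob sixteen bytes per line. A minimal userspace sketch of the same layout, using plain stdio in place of the kernel's hex_dump_to_buffer (the sample buffer is made up):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Print "0xOFFSET xxxx xxxx ..." with sixteen bytes per row, grouped as
 * 16-bit little-endian words, roughly what the debugfs file shows.
 * Assumes len is a multiple of 16, as the handler enforces via the
 * "eeprom_len % 16" check. */
static void dump_blob(const uint8_t *blob, size_t len)
{
	for (size_t ofs = 0; ofs < len; ofs += 16) {
		printf("0x%.4zx ", ofs);
		for (size_t i = 0; i < 16; i += 2)
			printf("%02x%02x ", blob[ofs + i + 1], blob[ofs + i]);
		printf("\n");
	}
}

int main(void)
{
	uint8_t fake_eeprom[32];	/* stand-in for priv->eeprom_blob */

	for (size_t i = 0; i < sizeof(fake_eeprom); i++)
		fake_eeprom[i] = (uint8_t)i;
	dump_blob(fake_eeprom, sizeof(fake_eeprom));
	return 0;
}

The interesting part of the change is not the formatting but the source: the raw blob is kept only for debugfs/testmode, while the rest of the driver now goes through the parsed priv->eeprom_data.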
"OTP" : "EEPROM", eeprom_ver); + eeprom_ver = priv->eeprom_data->eeprom_version; + pos += scnprintf(buf + pos, buf_size - pos, + "NVM version: 0x%x\n", eeprom_ver); for (ofs = 0 ; ofs < eeprom_len ; ofs += 16) { pos += scnprintf(buf + pos, buf_size - pos, "0x%.4x ", ofs); hex_dump_to_buffer(ptr + ofs, 16 , 16, 2, buf + pos, @@ -351,9 +345,6 @@ static ssize_t iwl_dbgfs_channels_read(struct file *file, char __user *user_buf, char *buf; ssize_t ret; - if (!test_bit(STATUS_GEO_CONFIGURED, &priv->status)) - return -EAGAIN; - buf = kzalloc(bufsz, GFP_KERNEL); if (!buf) return -ENOMEM; @@ -426,8 +417,6 @@ static ssize_t iwl_dbgfs_status_read(struct file *file, test_bit(STATUS_ALIVE, &priv->status)); pos += scnprintf(buf + pos, bufsz - pos, "STATUS_READY:\t\t %d\n", test_bit(STATUS_READY, &priv->status)); - pos += scnprintf(buf + pos, bufsz - pos, "STATUS_GEO_CONFIGURED:\t %d\n", - test_bit(STATUS_GEO_CONFIGURED, &priv->status)); pos += scnprintf(buf + pos, bufsz - pos, "STATUS_EXIT_PENDING:\t %d\n", test_bit(STATUS_EXIT_PENDING, &priv->status)); pos += scnprintf(buf + pos, bufsz - pos, "STATUS_STATISTICS:\t %d\n", @@ -1341,17 +1330,17 @@ static ssize_t iwl_dbgfs_ucode_tx_stats_read(struct file *file, if (tx->tx_power.ant_a || tx->tx_power.ant_b || tx->tx_power.ant_c) { pos += scnprintf(buf + pos, bufsz - pos, "tx power: (1/2 dB step)\n"); - if ((priv->hw_params.valid_tx_ant & ANT_A) && + if ((priv->eeprom_data->valid_tx_ant & ANT_A) && tx->tx_power.ant_a) pos += scnprintf(buf + pos, bufsz - pos, fmt_hex, "antenna A:", tx->tx_power.ant_a); - if ((priv->hw_params.valid_tx_ant & ANT_B) && + if ((priv->eeprom_data->valid_tx_ant & ANT_B) && tx->tx_power.ant_b) pos += scnprintf(buf + pos, bufsz - pos, fmt_hex, "antenna B:", tx->tx_power.ant_b); - if ((priv->hw_params.valid_tx_ant & ANT_C) && + if ((priv->eeprom_data->valid_tx_ant & ANT_C) && tx->tx_power.ant_c) pos += scnprintf(buf + pos, bufsz - pos, fmt_hex, "antenna C:", @@ -2266,6 +2255,10 @@ static ssize_t iwl_dbgfs_log_event_write(struct file *file, char buf[8]; int buf_size; + /* check that the interface is up */ + if (!iwl_is_ready(priv)) + return -EAGAIN; + memset(buf, 0, sizeof(buf)); buf_size = min(count, sizeof(buf) - 1); if (copy_from_user(buf, user_buf, buf_size)) diff --git a/drivers/net/wireless/iwlwifi/iwl-dev.h b/drivers/net/wireless/iwlwifi/dvm/dev.h index 70062379d0ec..054f728f6266 100644 --- a/drivers/net/wireless/iwlwifi/iwl-dev.h +++ b/drivers/net/wireless/iwlwifi/dvm/dev.h @@ -24,8 +24,8 @@ * *****************************************************************************/ /* - * Please use this file (iwl-dev.h) for driver implementation definitions. - * Please use iwl-commands.h for uCode API definitions. + * Please use this file (dev.h) for driver implementation definitions. + * Please use commands.h for uCode API definitions. 
*/ #ifndef __iwl_dev_h__ @@ -39,17 +39,20 @@ #include <linux/mutex.h> #include "iwl-fw.h" -#include "iwl-eeprom.h" +#include "iwl-eeprom-parse.h" #include "iwl-csr.h" #include "iwl-debug.h" #include "iwl-agn-hw.h" -#include "iwl-led.h" -#include "iwl-power.h" -#include "iwl-agn-rs.h" -#include "iwl-agn-tt.h" -#include "iwl-trans.h" #include "iwl-op-mode.h" #include "iwl-notif-wait.h" +#include "iwl-trans.h" + +#include "led.h" +#include "power.h" +#include "rs.h" +#include "tt.h" + +#include "iwl-test.h" /* CT-KILL constants */ #define CT_KILL_THRESHOLD_LEGACY 110 /* in Celsius */ @@ -87,49 +90,6 @@ #define IWL_NUM_SCAN_RATES (2) -/* - * One for each channel, holds all channel setup data - * Some of the fields (e.g. eeprom and flags/max_power_avg) are redundant - * with one another! - */ -struct iwl_channel_info { - struct iwl_eeprom_channel eeprom; /* EEPROM regulatory limit */ - struct iwl_eeprom_channel ht40_eeprom; /* EEPROM regulatory limit for - * HT40 channel */ - - u8 channel; /* channel number */ - u8 flags; /* flags copied from EEPROM */ - s8 max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */ - s8 curr_txpow; /* (dBm) regulatory/spectrum/user (not h/w) limit */ - s8 min_power; /* always 0 */ - s8 scan_power; /* (dBm) regul. eeprom, direct scans, any rate */ - - u8 group_index; /* 0-4, maps channel to group1/2/3/4/5 */ - u8 band_index; /* 0-4, maps channel to band1/2/3/4/5 */ - enum ieee80211_band band; - - /* HT40 channel info */ - s8 ht40_max_power_avg; /* (dBm) regul. eeprom, normal Tx, any rate */ - u8 ht40_flags; /* flags copied from EEPROM */ - u8 ht40_extension_channel; /* HT_IE_EXT_CHANNEL_* */ -}; - -/* - * Minimum number of queues. MAX_NUM is defined in hw specific files. - * Set the minimum to accommodate - * - 4 standard TX queues - * - the command queue - * - 4 PAN TX queues - * - the PAN multicast queue, and - * - the AUX (TX during scan dwell) queue. - */ -#define IWL_MIN_NUM_QUEUES 11 - -/* - * Command queue depends on iPAN support. 
- */ -#define IWL_DEFAULT_CMD_QUEUE_NUM 4 -#define IWL_IPAN_CMD_QUEUE_NUM 9 #define IEEE80211_DATA_LEN 2304 #define IEEE80211_4ADDR_LEN 30 @@ -153,29 +113,6 @@ union iwl_ht_rate_supp { }; }; -#define CFG_HT_RX_AMPDU_FACTOR_8K (0x0) -#define CFG_HT_RX_AMPDU_FACTOR_16K (0x1) -#define CFG_HT_RX_AMPDU_FACTOR_32K (0x2) -#define CFG_HT_RX_AMPDU_FACTOR_64K (0x3) -#define CFG_HT_RX_AMPDU_FACTOR_DEF CFG_HT_RX_AMPDU_FACTOR_64K -#define CFG_HT_RX_AMPDU_FACTOR_MAX CFG_HT_RX_AMPDU_FACTOR_64K -#define CFG_HT_RX_AMPDU_FACTOR_MIN CFG_HT_RX_AMPDU_FACTOR_8K - -/* - * Maximal MPDU density for TX aggregation - * 4 - 2us density - * 5 - 4us density - * 6 - 8us density - * 7 - 16us density - */ -#define CFG_HT_MPDU_DENSITY_2USEC (0x4) -#define CFG_HT_MPDU_DENSITY_4USEC (0x5) -#define CFG_HT_MPDU_DENSITY_8USEC (0x6) -#define CFG_HT_MPDU_DENSITY_16USEC (0x7) -#define CFG_HT_MPDU_DENSITY_DEF CFG_HT_MPDU_DENSITY_4USEC -#define CFG_HT_MPDU_DENSITY_MAX CFG_HT_MPDU_DENSITY_16USEC -#define CFG_HT_MPDU_DENSITY_MIN (0x1) - struct iwl_ht_config { bool single_chain_sufficient; enum ieee80211_smps_mode smps; /* current smps mode */ @@ -445,23 +382,6 @@ enum { MEASUREMENT_ACTIVE = (1 << 1), }; -enum iwl_nvm_type { - NVM_DEVICE_TYPE_EEPROM = 0, - NVM_DEVICE_TYPE_OTP, -}; - -/* - * Two types of OTP memory access modes - * IWL_OTP_ACCESS_ABSOLUTE - absolute address mode, - * based on physical memory addressing - * IWL_OTP_ACCESS_RELATIVE - relative address mode, - * based on logical memory addressing - */ -enum iwl_access_mode { - IWL_OTP_ACCESS_ABSOLUTE, - IWL_OTP_ACCESS_RELATIVE, -}; - /* reply_tx_statistics (for _agn devices) */ struct reply_tx_error_statistics { u32 pp_delay; @@ -632,10 +552,6 @@ enum iwl_scan_type { * * @tx_chains_num: Number of TX chains * @rx_chains_num: Number of RX chains - * @valid_tx_ant: usable antennas for TX - * @valid_rx_ant: usable antennas for RX - * @ht40_channel: is 40MHz width possible: BIT(IEEE80211_BAND_XXX) - * @sku: sku read from EEPROM * @ct_kill_threshold: temperature threshold - in hw dependent unit * @ct_kill_exit_threshold: when to reeable the device - in hw dependent unit * relevant for 1000, 6000 and up @@ -645,11 +561,7 @@ enum iwl_scan_type { struct iwl_hw_params { u8 tx_chains_num; u8 rx_chains_num; - u8 valid_tx_ant; - u8 valid_rx_ant; - u8 ht40_channel; bool use_rts_for_aggregation; - u16 sku; u32 ct_kill_threshold; u32 ct_kill_exit_threshold; @@ -664,31 +576,10 @@ struct iwl_lib_ops { /* device specific configuration */ void (*nic_config)(struct iwl_priv *priv); - /* eeprom operations (as defined in iwl-eeprom.h) */ - struct iwl_eeprom_ops eeprom_ops; - /* temperature */ void (*temperature)(struct iwl_priv *priv); }; -#ifdef CONFIG_IWLWIFI_DEVICE_TESTMODE -struct iwl_testmode_trace { - u32 buff_size; - u32 total_size; - u32 num_chunks; - u8 *cpu_addr; - u8 *trace_addr; - dma_addr_t dma_addr; - bool trace_enabled; -}; -struct iwl_testmode_mem { - u32 buff_size; - u32 num_chunks; - u8 *buff_addr; - bool read_in_progress; -}; -#endif - struct iwl_wipan_noa_data { struct rcu_head rcu_head; u32 length; @@ -735,8 +626,6 @@ struct iwl_priv { /* ieee device used by generic ieee processing code */ struct ieee80211_hw *hw; - struct ieee80211_channel *ieee_channels; - struct ieee80211_rate *ieee_rates; struct list_head calib_results; @@ -747,16 +636,12 @@ struct iwl_priv { enum ieee80211_band band; u8 valid_contexts; - void (*pre_rx_handler)(struct iwl_priv *priv, - struct iwl_rx_cmd_buffer *rxb); int (*rx_handlers[REPLY_MAX])(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, 
struct iwl_device_cmd *cmd); struct iwl_notif_wait_data notif_wait; - struct ieee80211_supported_band bands[IEEE80211_NUM_BANDS]; - /* spectrum measurement report caching */ struct iwl_spectrum_notification measure_report; u8 measurement_status; @@ -787,11 +672,6 @@ struct iwl_priv { bool ucode_loaded; bool init_ucode_run; /* Don't run init uCode again */ - /* we allocate array of iwl_channel_info for NIC's valid channels. - * Access via channel # using indirect index array */ - struct iwl_channel_info *channel_info; /* channel info array */ - u8 channel_count; /* # of channels */ - u8 plcp_delta_threshold; /* thermal calibration */ @@ -846,6 +726,7 @@ struct iwl_priv { struct iwl_station_entry stations[IWLAGN_STATION_COUNT]; unsigned long ucode_key_table; struct iwl_tid_data tid_data[IWLAGN_STATION_COUNT][IWL_MAX_TID_COUNT]; + atomic_t num_aux_in_flight; u8 mac80211_registered; @@ -950,10 +831,8 @@ struct iwl_priv { struct delayed_work scan_check; - /* TX Power */ + /* TX Power settings */ s8 tx_power_user_lmt; - s8 tx_power_device_lmt; - s8 tx_power_lmt_in_half_dbm; /* max tx power in half-dBm format */ s8 tx_power_next; #ifdef CONFIG_IWLWIFI_DEBUGFS @@ -964,9 +843,10 @@ struct iwl_priv { void *wowlan_sram; #endif /* CONFIG_IWLWIFI_DEBUGFS */ - /* eeprom -- this is in the card's little endian byte order */ - u8 *eeprom; - enum iwl_nvm_type nvm_device_type; + struct iwl_eeprom_data *eeprom_data; + /* eeprom blob for debugfs/testmode */ + u8 *eeprom_blob; + size_t eeprom_blob_size; struct work_struct txpower_work; u32 calib_disabled; @@ -979,9 +859,9 @@ struct iwl_priv { struct led_classdev led; unsigned long blink_on, blink_off; bool led_registered; + #ifdef CONFIG_IWLWIFI_DEVICE_TESTMODE - struct iwl_testmode_trace testmode_trace; - struct iwl_testmode_mem testmode_mem; + struct iwl_test tst; u32 tm_fixed_rate; #endif @@ -1001,8 +881,6 @@ struct iwl_priv { enum iwl_ucode_type cur_ucode; }; /*iwl_priv */ -extern struct kmem_cache *iwl_tx_cmd_pool; - static inline struct iwl_rxon_context * iwl_rxon_ctx_from_vif(struct ieee80211_vif *vif) { @@ -1036,36 +914,4 @@ static inline int iwl_is_any_associated(struct iwl_priv *priv) return false; } -static inline int is_channel_valid(const struct iwl_channel_info *ch_info) -{ - if (ch_info == NULL) - return 0; - return (ch_info->flags & EEPROM_CHANNEL_VALID) ? 1 : 0; -} - -static inline int is_channel_radar(const struct iwl_channel_info *ch_info) -{ - return (ch_info->flags & EEPROM_CHANNEL_RADAR) ? 1 : 0; -} - -static inline u8 is_channel_a_band(const struct iwl_channel_info *ch_info) -{ - return ch_info->band == IEEE80211_BAND_5GHZ; -} - -static inline u8 is_channel_bg_band(const struct iwl_channel_info *ch_info) -{ - return ch_info->band == IEEE80211_BAND_2GHZ; -} - -static inline int is_channel_passive(const struct iwl_channel_info *ch) -{ - return (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) ? 1 : 0; -} - -static inline int is_channel_ibss(const struct iwl_channel_info *ch) -{ - return ((ch->flags & EEPROM_CHANNEL_IBSS)) ? 
1 : 0; -} - #endif /* __iwl_dev_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c b/drivers/net/wireless/iwlwifi/dvm/devices.c index 48533b3a0f9a..349c205d5f62 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-devices.c +++ b/drivers/net/wireless/iwlwifi/dvm/devices.c @@ -27,11 +27,14 @@ /* * DVM device-specific data & functions */ -#include "iwl-agn.h" -#include "iwl-dev.h" -#include "iwl-commands.h" #include "iwl-io.h" #include "iwl-prph.h" +#include "iwl-eeprom-parse.h" + +#include "agn.h" +#include "dev.h" +#include "commands.h" + /* * 1000 series @@ -58,11 +61,6 @@ static void iwl1000_set_ct_threshold(struct iwl_priv *priv) /* NIC configuration for 1000 series */ static void iwl1000_nic_config(struct iwl_priv *priv) { - /* set CSR_HW_CONFIG_REG for uCode use */ - iwl_set_bit(priv->trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI | - CSR_HW_IF_CONFIG_REG_BIT_MAC_SI); - /* Setting digital SVR for 1000 card to 1.32V */ /* locking is acquired in iwl_set_bits_mask_prph() function */ iwl_set_bits_mask_prph(priv->trans, APMG_DIGITAL_SVR_REG, @@ -170,16 +168,6 @@ static const struct iwl_sensitivity_ranges iwl1000_sensitivity = { static void iwl1000_hw_set_hw_params(struct iwl_priv *priv) { - priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ); - - priv->hw_params.tx_chains_num = - num_of_ant(priv->hw_params.valid_tx_ant); - if (priv->cfg->rx_with_siso_diversity) - priv->hw_params.rx_chains_num = 1; - else - priv->hw_params.rx_chains_num = - num_of_ant(priv->hw_params.valid_rx_ant); - iwl1000_set_ct_threshold(priv); /* Set initial sensitivity parameters */ @@ -189,17 +177,6 @@ static void iwl1000_hw_set_hw_params(struct iwl_priv *priv) struct iwl_lib_ops iwl1000_lib = { .set_hw_params = iwl1000_hw_set_hw_params, .nic_config = iwl1000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_REG_BAND_24_HT40_CHANNELS, - EEPROM_REGULATORY_BAND_NO_HT40, - }, - }, .temperature = iwlagn_temperature, }; @@ -219,8 +196,6 @@ static void iwl2000_set_ct_threshold(struct iwl_priv *priv) /* NIC configuration for 2000 series */ static void iwl2000_nic_config(struct iwl_priv *priv) { - iwl_rf_config(priv); - iwl_set_bit(priv->trans, CSR_GP_DRIVER_REG, CSR_GP_DRIVER_REG_BIT_RADIO_IQ_INVER); } @@ -251,16 +226,6 @@ static const struct iwl_sensitivity_ranges iwl2000_sensitivity = { static void iwl2000_hw_set_hw_params(struct iwl_priv *priv) { - priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ); - - priv->hw_params.tx_chains_num = - num_of_ant(priv->hw_params.valid_tx_ant); - if (priv->cfg->rx_with_siso_diversity) - priv->hw_params.rx_chains_num = 1; - else - priv->hw_params.rx_chains_num = - num_of_ant(priv->hw_params.valid_rx_ant); - iwl2000_set_ct_threshold(priv); /* Set initial sensitivity parameters */ @@ -270,36 +235,12 @@ static void iwl2000_hw_set_hw_params(struct iwl_priv *priv) struct iwl_lib_ops iwl2000_lib = { .set_hw_params = iwl2000_hw_set_hw_params, .nic_config = iwl2000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_6000_REG_BAND_24_HT40_CHANNELS, - EEPROM_REGULATORY_BAND_NO_HT40, - }, - .enhanced_txpower = true, - }, .temperature = iwlagn_temperature, }; struct iwl_lib_ops iwl2030_lib = { .set_hw_params = iwl2000_hw_set_hw_params, 
.nic_config = iwl2000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_6000_REG_BAND_24_HT40_CHANNELS, - EEPROM_REGULATORY_BAND_NO_HT40, - }, - .enhanced_txpower = true, - }, .temperature = iwlagn_temperature, }; @@ -309,19 +250,6 @@ struct iwl_lib_ops iwl2030_lib = { */ /* NIC configuration for 5000 series */ -static void iwl5000_nic_config(struct iwl_priv *priv) -{ - iwl_rf_config(priv); - - /* W/A : NIC is stuck in a reset state after Early PCIe power off - * (PCIe power is lost before PERST# is asserted), - * causing ME FW to lose ownership and not being able to obtain it back. - */ - iwl_set_bits_mask_prph(priv->trans, APMG_PS_CTRL_REG, - APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS, - ~APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS); -} - static const struct iwl_sensitivity_ranges iwl5000_sensitivity = { .min_nrg_cck = 100, .auto_corr_min_ofdm = 90, @@ -376,11 +304,9 @@ static struct iwl_sensitivity_ranges iwl5150_sensitivity = { static s32 iwl_temp_calib_to_offset(struct iwl_priv *priv) { u16 temperature, voltage; - __le16 *temp_calib = (__le16 *)iwl_eeprom_query_addr(priv, - EEPROM_KELVIN_TEMPERATURE); - temperature = le16_to_cpu(temp_calib[0]); - voltage = le16_to_cpu(temp_calib[1]); + temperature = le16_to_cpu(priv->eeprom_data->kelvin_temperature); + voltage = le16_to_cpu(priv->eeprom_data->kelvin_voltage); /* offset = temp - volt / coeff */ return (s32)(temperature - @@ -404,14 +330,6 @@ static void iwl5000_set_ct_threshold(struct iwl_priv *priv) static void iwl5000_hw_set_hw_params(struct iwl_priv *priv) { - priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) | - BIT(IEEE80211_BAND_5GHZ); - - priv->hw_params.tx_chains_num = - num_of_ant(priv->hw_params.valid_tx_ant); - priv->hw_params.rx_chains_num = - num_of_ant(priv->hw_params.valid_rx_ant); - iwl5000_set_ct_threshold(priv); /* Set initial sensitivity parameters */ @@ -420,14 +338,6 @@ static void iwl5000_hw_set_hw_params(struct iwl_priv *priv) static void iwl5150_hw_set_hw_params(struct iwl_priv *priv) { - priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) | - BIT(IEEE80211_BAND_5GHZ); - - priv->hw_params.tx_chains_num = - num_of_ant(priv->hw_params.valid_tx_ant); - priv->hw_params.rx_chains_num = - num_of_ant(priv->hw_params.valid_rx_ant); - iwl5150_set_ct_threshold(priv); /* Set initial sensitivity parameters */ @@ -455,7 +365,6 @@ static int iwl5000_hw_channel_switch(struct iwl_priv *priv, */ struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; struct iwl5000_channel_switch_cmd cmd; - const struct iwl_channel_info *ch_info; u32 switch_time_in_usec, ucode_switch_time; u16 ch; u32 tsf_low; @@ -505,14 +414,7 @@ static int iwl5000_hw_channel_switch(struct iwl_priv *priv, } IWL_DEBUG_11H(priv, "uCode time for the switch is 0x%x\n", cmd.switch_time); - ch_info = iwl_get_channel_info(priv, priv->band, ch); - if (ch_info) - cmd.expect_beacon = is_channel_radar(ch_info); - else { - IWL_ERR(priv, "invalid channel switch from %u to %u\n", - ctx->active.channel, ch); - return -EFAULT; - } + cmd.expect_beacon = ch_switch->channel->flags & IEEE80211_CHAN_RADAR; return iwl_dvm_send_cmd(priv, &hcmd); } @@ -520,36 +422,12 @@ static int iwl5000_hw_channel_switch(struct iwl_priv *priv, struct iwl_lib_ops iwl5000_lib = { .set_hw_params = iwl5000_hw_set_hw_params, .set_channel_switch = iwl5000_hw_channel_switch, - .nic_config = iwl5000_nic_config, - .eeprom_ops = { - 
.regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_REG_BAND_24_HT40_CHANNELS, - EEPROM_REG_BAND_52_HT40_CHANNELS - }, - }, .temperature = iwlagn_temperature, }; struct iwl_lib_ops iwl5150_lib = { .set_hw_params = iwl5150_hw_set_hw_params, .set_channel_switch = iwl5000_hw_channel_switch, - .nic_config = iwl5000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_REG_BAND_24_HT40_CHANNELS, - EEPROM_REG_BAND_52_HT40_CHANNELS - }, - }, .temperature = iwl5150_temperature, }; @@ -570,8 +448,6 @@ static void iwl6000_set_ct_threshold(struct iwl_priv *priv) /* NIC configuration for 6000 series */ static void iwl6000_nic_config(struct iwl_priv *priv) { - iwl_rf_config(priv); - switch (priv->cfg->device_family) { case IWL_DEVICE_FAMILY_6005: case IWL_DEVICE_FAMILY_6030: @@ -584,13 +460,13 @@ static void iwl6000_nic_config(struct iwl_priv *priv) break; case IWL_DEVICE_FAMILY_6050: /* Indicate calibration version to uCode. */ - if (iwl_eeprom_calib_version(priv) >= 6) + if (priv->eeprom_data->calib_version >= 6) iwl_set_bit(priv->trans, CSR_GP_DRIVER_REG, CSR_GP_DRIVER_REG_BIT_CALIB_VERSION6); break; case IWL_DEVICE_FAMILY_6150: /* Indicate calibration version to uCode. */ - if (iwl_eeprom_calib_version(priv) >= 6) + if (priv->eeprom_data->calib_version >= 6) iwl_set_bit(priv->trans, CSR_GP_DRIVER_REG, CSR_GP_DRIVER_REG_BIT_CALIB_VERSION6); iwl_set_bit(priv->trans, CSR_GP_DRIVER_REG, @@ -627,17 +503,6 @@ static const struct iwl_sensitivity_ranges iwl6000_sensitivity = { static void iwl6000_hw_set_hw_params(struct iwl_priv *priv) { - priv->hw_params.ht40_channel = BIT(IEEE80211_BAND_2GHZ) | - BIT(IEEE80211_BAND_5GHZ); - - priv->hw_params.tx_chains_num = - num_of_ant(priv->hw_params.valid_tx_ant); - if (priv->cfg->rx_with_siso_diversity) - priv->hw_params.rx_chains_num = 1; - else - priv->hw_params.rx_chains_num = - num_of_ant(priv->hw_params.valid_rx_ant); - iwl6000_set_ct_threshold(priv); /* Set initial sensitivity parameters */ @@ -654,7 +519,6 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv, */ struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; struct iwl6000_channel_switch_cmd cmd; - const struct iwl_channel_info *ch_info; u32 switch_time_in_usec, ucode_switch_time; u16 ch; u32 tsf_low; @@ -704,14 +568,7 @@ static int iwl6000_hw_channel_switch(struct iwl_priv *priv, } IWL_DEBUG_11H(priv, "uCode time for the switch is 0x%x\n", cmd.switch_time); - ch_info = iwl_get_channel_info(priv, priv->band, ch); - if (ch_info) - cmd.expect_beacon = is_channel_radar(ch_info); - else { - IWL_ERR(priv, "invalid channel switch from %u to %u\n", - ctx->active.channel, ch); - return -EFAULT; - } + cmd.expect_beacon = ch_switch->channel->flags & IEEE80211_CHAN_RADAR; return iwl_dvm_send_cmd(priv, &hcmd); } @@ -720,18 +577,6 @@ struct iwl_lib_ops iwl6000_lib = { .set_hw_params = iwl6000_hw_set_hw_params, .set_channel_switch = iwl6000_hw_channel_switch, .nic_config = iwl6000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_6000_REG_BAND_24_HT40_CHANNELS, - EEPROM_REG_BAND_52_HT40_CHANNELS - }, - .enhanced_txpower = true, - }, .temperature = 
iwlagn_temperature, }; @@ -739,17 +584,5 @@ struct iwl_lib_ops iwl6030_lib = { .set_hw_params = iwl6000_hw_set_hw_params, .set_channel_switch = iwl6000_hw_channel_switch, .nic_config = iwl6000_nic_config, - .eeprom_ops = { - .regulatory_bands = { - EEPROM_REG_BAND_1_CHANNELS, - EEPROM_REG_BAND_2_CHANNELS, - EEPROM_REG_BAND_3_CHANNELS, - EEPROM_REG_BAND_4_CHANNELS, - EEPROM_REG_BAND_5_CHANNELS, - EEPROM_6000_REG_BAND_24_HT40_CHANNELS, - EEPROM_REG_BAND_52_HT40_CHANNELS - }, - .enhanced_txpower = true, - }, .temperature = iwlagn_temperature, }; diff --git a/drivers/net/wireless/iwlwifi/iwl-led.c b/drivers/net/wireless/iwlwifi/dvm/led.c index 47000419f916..bf479f709091 100644 --- a/drivers/net/wireless/iwlwifi/iwl-led.c +++ b/drivers/net/wireless/iwlwifi/dvm/led.c @@ -34,12 +34,11 @@ #include <net/mac80211.h> #include <linux/etherdevice.h> #include <asm/unaligned.h> - -#include "iwl-dev.h" -#include "iwl-agn.h" #include "iwl-io.h" #include "iwl-trans.h" #include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" /* Throughput OFF time(ms) ON time (ms) * >300 25 25 diff --git a/drivers/net/wireless/iwlwifi/iwl-led.h b/drivers/net/wireless/iwlwifi/dvm/led.h index b02a853103d3..b02a853103d3 100644 --- a/drivers/net/wireless/iwlwifi/iwl-led.h +++ b/drivers/net/wireless/iwlwifi/dvm/led.h diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-lib.c b/drivers/net/wireless/iwlwifi/dvm/lib.c index e55ec6c8a920..bef88c1a2c9b 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-lib.c +++ b/drivers/net/wireless/iwlwifi/dvm/lib.c @@ -33,13 +33,14 @@ #include <linux/sched.h> #include <net/mac80211.h> -#include "iwl-dev.h" #include "iwl-io.h" #include "iwl-agn-hw.h" -#include "iwl-agn.h" #include "iwl-trans.h" #include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" + int iwlagn_hw_valid_rtc_data_addr(u32 addr) { return (addr >= IWLAGN_RTC_DATA_LOWER_BOUND) && @@ -58,8 +59,7 @@ int iwlagn_send_tx_power(struct iwl_priv *priv) /* half dBm need to multiply */ tx_power_cmd.global_lmt = (s8)(2 * priv->tx_power_user_lmt); - if (priv->tx_power_lmt_in_half_dbm && - priv->tx_power_lmt_in_half_dbm < tx_power_cmd.global_lmt) { + if (tx_power_cmd.global_lmt > priv->eeprom_data->max_tx_pwr_half_dbm) { /* * For the newer devices which using enhanced/extend tx power * table in EEPROM, the format is in half dBm. 
driver need to @@ -71,7 +71,8 @@ int iwlagn_send_tx_power(struct iwl_priv *priv) * "tx_power_user_lmt" is higher than EEPROM value (in * half-dBm format), lower the tx power based on EEPROM */ - tx_power_cmd.global_lmt = priv->tx_power_lmt_in_half_dbm; + tx_power_cmd.global_lmt = + priv->eeprom_data->max_tx_pwr_half_dbm; } tx_power_cmd.flags = IWLAGN_TX_POWER_NO_CLOSED; tx_power_cmd.srv_chan_lmt = IWLAGN_TX_POWER_AUTO; @@ -159,7 +160,7 @@ int iwlagn_txfifo_flush(struct iwl_priv *priv, u16 flush_control) IWL_PAN_SCD_BK_MSK | IWL_PAN_SCD_MGMT_MSK | IWL_PAN_SCD_MULTICAST_MSK; - if (priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE) + if (priv->eeprom_data->sku & EEPROM_SKU_CAP_11N_ENABLE) flush_cmd.fifo_control |= IWL_AGG_TX_QUEUE_MSK; IWL_DEBUG_INFO(priv, "fifo queue control: 0X%x\n", @@ -264,6 +265,8 @@ void iwlagn_send_advance_bt_config(struct iwl_priv *priv) bt_cmd_v2.tx_prio_boost = 0; bt_cmd_v2.rx_prio_boost = 0; } else { + /* older version only has 8 bits */ + WARN_ON(priv->cfg->bt_params->bt_prio_boost & ~0xFF); bt_cmd_v1.prio_boost = priv->cfg->bt_params->bt_prio_boost; bt_cmd_v1.tx_prio_boost = 0; @@ -617,6 +620,11 @@ static bool iwlagn_fill_txpower_mode(struct iwl_priv *priv, struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; int ave_rssi; + if (!ctx->vif || (ctx->vif->type != NL80211_IFTYPE_STATION)) { + IWL_DEBUG_INFO(priv, "BSS ctx not active or not in sta mode\n"); + return false; + } + ave_rssi = ieee80211_ave_rssi(ctx->vif); if (!ave_rssi) { /* no rssi data, no changes to reduce tx power */ @@ -818,7 +826,7 @@ void iwlagn_set_rxon_chain(struct iwl_priv *priv, struct iwl_rxon_context *ctx) if (priv->chain_noise_data.active_chains) active_chains = priv->chain_noise_data.active_chains; else - active_chains = priv->hw_params.valid_rx_ant; + active_chains = priv->eeprom_data->valid_rx_ant; if (priv->cfg->bt_params && priv->cfg->bt_params->advanced_bt_coexist && @@ -1259,7 +1267,7 @@ int iwl_dvm_send_cmd(struct iwl_priv *priv, struct iwl_host_cmd *cmd) * the mutex, this ensures we don't try to send two * (or more) synchronous commands at a time. */ - if (cmd->flags & CMD_SYNC) + if (!(cmd->flags & CMD_ASYNC)) lockdep_assert_held(&priv->mutex); if (priv->ucode_owner == IWL_OWNERSHIP_TM && diff --git a/drivers/net/wireless/iwlwifi/iwl-mac80211.c b/drivers/net/wireless/iwlwifi/dvm/mac80211.c index 013680332f07..a5f7bce96325 100644 --- a/drivers/net/wireless/iwlwifi/iwl-mac80211.c +++ b/drivers/net/wireless/iwlwifi/dvm/mac80211.c @@ -38,19 +38,20 @@ #include <linux/etherdevice.h> #include <linux/if_arp.h> +#include <net/ieee80211_radiotap.h> #include <net/mac80211.h> #include <asm/div64.h> -#include "iwl-eeprom.h" -#include "iwl-dev.h" #include "iwl-io.h" -#include "iwl-agn-calib.h" -#include "iwl-agn.h" #include "iwl-trans.h" #include "iwl-op-mode.h" #include "iwl-modparams.h" +#include "dev.h" +#include "calib.h" +#include "agn.h" + /***************************************************************************** * * mac80211 entry point functions @@ -154,6 +155,7 @@ int iwlagn_mac_setup_register(struct iwl_priv *priv, IEEE80211_HW_SCAN_WHILE_IDLE; hw->offchannel_tx_hw_queue = IWL_AUX_QUEUE; + hw->radiotap_mcs_details |= IEEE80211_RADIOTAP_MCS_HAVE_FMT; /* * Including the following line will crash some AP's. 
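The comment in iwlagn_send_tx_power above explains the unit mismatch: the user limit is in dBm while the enhanced EEPROM tx-power table stores half-dBm steps. A standalone sketch of that conversion and clamp, with invented field names and example values:

#include <stdio.h>

/* dBm -> half-dBm, clamped to the EEPROM maximum (both in half-dBm). */
static signed char tx_power_limit_half_dbm(signed char user_lmt_dbm,
					   signed char eeprom_max_half_dbm)
{
	signed char lmt = (signed char)(2 * user_lmt_dbm);

	if (lmt > eeprom_max_half_dbm)
		lmt = eeprom_max_half_dbm;	/* never exceed the EEPROM limit */
	return lmt;
}

int main(void)
{
	/* user asks for 16 dBm, EEPROM caps the device at 15.5 dBm (31 half-dBm) */
	printf("limit sent to uCode: %d (half-dBm)\n",
	       tx_power_limit_half_dbm(16, 31));
	return 0;
}

The rewrite also drops the cached tx_power_lmt_in_half_dbm field from struct iwl_priv; the limit is read from eeprom_data->max_tx_pwr_half_dbm at the point of use.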
This @@ -162,7 +164,7 @@ int iwlagn_mac_setup_register(struct iwl_priv *priv, hw->max_tx_aggregation_subframes = LINK_QUAL_AGG_FRAME_LIMIT_DEF; */ - if (priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE) + if (priv->eeprom_data->sku & EEPROM_SKU_CAP_11N_ENABLE) hw->flags |= IEEE80211_HW_SUPPORTS_DYNAMIC_SMPS | IEEE80211_HW_SUPPORTS_STATIC_SMPS; @@ -237,12 +239,12 @@ int iwlagn_mac_setup_register(struct iwl_priv *priv, hw->max_listen_interval = IWL_CONN_MAX_LISTEN_INTERVAL; - if (priv->bands[IEEE80211_BAND_2GHZ].n_channels) + if (priv->eeprom_data->bands[IEEE80211_BAND_2GHZ].n_channels) priv->hw->wiphy->bands[IEEE80211_BAND_2GHZ] = - &priv->bands[IEEE80211_BAND_2GHZ]; - if (priv->bands[IEEE80211_BAND_5GHZ].n_channels) + &priv->eeprom_data->bands[IEEE80211_BAND_2GHZ]; + if (priv->eeprom_data->bands[IEEE80211_BAND_5GHZ].n_channels) priv->hw->wiphy->bands[IEEE80211_BAND_5GHZ] = - &priv->bands[IEEE80211_BAND_5GHZ]; + &priv->eeprom_data->bands[IEEE80211_BAND_5GHZ]; hw->wiphy->hw_version = priv->trans->hw_id; @@ -341,7 +343,7 @@ static int iwlagn_mac_start(struct ieee80211_hw *hw) return 0; } -void iwlagn_mac_stop(struct ieee80211_hw *hw) +static void iwlagn_mac_stop(struct ieee80211_hw *hw) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -369,9 +371,9 @@ void iwlagn_mac_stop(struct ieee80211_hw *hw) IWL_DEBUG_MAC80211(priv, "leave\n"); } -void iwlagn_mac_set_rekey_data(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct cfg80211_gtk_rekey_data *data) +static void iwlagn_mac_set_rekey_data(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct cfg80211_gtk_rekey_data *data) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -397,7 +399,8 @@ void iwlagn_mac_set_rekey_data(struct ieee80211_hw *hw, #ifdef CONFIG_PM_SLEEP -int iwlagn_mac_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan) +static int iwlagn_mac_suspend(struct ieee80211_hw *hw, + struct cfg80211_wowlan *wowlan) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; @@ -420,8 +423,6 @@ int iwlagn_mac_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan) if (ret) goto error; - device_set_wakeup_enable(priv->trans->dev, true); - iwl_trans_wowlan_suspend(priv->trans); goto out; @@ -475,7 +476,7 @@ static int iwlagn_mac_resume(struct ieee80211_hw *hw) } if (priv->wowlan_sram) - _iwl_read_targ_mem_words( + _iwl_read_targ_mem_dwords( priv->trans, 0x800000, priv->wowlan_sram, img->sec[IWL_UCODE_SECTION_DATA].len / 4); @@ -488,8 +489,6 @@ static int iwlagn_mac_resume(struct ieee80211_hw *hw) priv->wowlan = false; - device_set_wakeup_enable(priv->trans->dev, false); - iwlagn_prepare_restart(priv); memset((void *)&ctx->active, 0, sizeof(ctx->active)); @@ -504,9 +503,15 @@ static int iwlagn_mac_resume(struct ieee80211_hw *hw) return 1; } +static void iwlagn_mac_set_wakeup(struct ieee80211_hw *hw, bool enabled) +{ + struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); + + device_set_wakeup_enable(priv->trans->dev, enabled); +} #endif -void iwlagn_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb) +static void iwlagn_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -517,21 +522,21 @@ void iwlagn_mac_tx(struct ieee80211_hw *hw, struct sk_buff *skb) dev_kfree_skb_any(skb); } -void iwlagn_mac_update_tkip_key(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_key_conf *keyconf, - struct ieee80211_sta *sta, - u32 iv32, u16 *phase1key) +static void 
iwlagn_mac_update_tkip_key(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_key_conf *keyconf, + struct ieee80211_sta *sta, + u32 iv32, u16 *phase1key) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); iwl_update_tkip_key(priv, vif, keyconf, sta, iv32, phase1key); } -int iwlagn_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - struct ieee80211_key_conf *key) +static int iwlagn_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); struct iwl_vif_priv *vif_priv = (void *)vif->drv_priv; @@ -631,11 +636,11 @@ int iwlagn_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, return ret; } -int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - enum ieee80211_ampdu_mlme_action action, - struct ieee80211_sta *sta, u16 tid, u16 *ssn, - u8 buf_size) +static int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + enum ieee80211_ampdu_mlme_action action, + struct ieee80211_sta *sta, u16 tid, u16 *ssn, + u8 buf_size) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); int ret = -EINVAL; @@ -644,7 +649,7 @@ int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, IWL_DEBUG_HT(priv, "A-MPDU action on addr %pM tid %d\n", sta->addr, tid); - if (!(priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE)) + if (!(priv->eeprom_data->sku & EEPROM_SKU_CAP_11N_ENABLE)) return -EACCES; IWL_DEBUG_MAC80211(priv, "enter\n"); @@ -662,7 +667,7 @@ int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, ret = iwl_sta_rx_agg_stop(priv, sta, tid); break; case IEEE80211_AMPDU_TX_START: - if (!priv->trans->ops->tx_agg_setup) + if (!priv->trans->ops->txq_enable) break; if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_TXAGG) break; @@ -757,11 +762,11 @@ static int iwlagn_mac_sta_remove(struct ieee80211_hw *hw, return ret; } -int iwlagn_mac_sta_state(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - enum ieee80211_sta_state old_state, - enum ieee80211_sta_state new_state) +static int iwlagn_mac_sta_state(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + enum ieee80211_sta_state old_state, + enum ieee80211_sta_state new_state) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); struct iwl_vif_priv *vif_priv = (void *)vif->drv_priv; @@ -852,11 +857,10 @@ int iwlagn_mac_sta_state(struct ieee80211_hw *hw, return ret; } -void iwlagn_mac_channel_switch(struct ieee80211_hw *hw, - struct ieee80211_channel_switch *ch_switch) +static void iwlagn_mac_channel_switch(struct ieee80211_hw *hw, + struct ieee80211_channel_switch *ch_switch) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - const struct iwl_channel_info *ch_info; struct ieee80211_conf *conf = &hw->conf; struct ieee80211_channel *channel = ch_switch->channel; struct iwl_ht_config *ht_conf = &priv->current_ht_config; @@ -893,12 +897,6 @@ void iwlagn_mac_channel_switch(struct ieee80211_hw *hw, if (le16_to_cpu(ctx->active.channel) == ch) goto out; - ch_info = iwl_get_channel_info(priv, channel->band, ch); - if (!is_channel_valid(ch_info)) { - IWL_DEBUG_MAC80211(priv, "invalid channel\n"); - goto out; - } - priv->current_ht_config.smps = conf->smps_mode; /* Configure HT40 channels */ @@ -947,10 +945,10 @@ void iwl_chswitch_done(struct iwl_priv *priv, bool is_success) ieee80211_chswitch_done(ctx->vif, is_success); } -void 
iwlagn_configure_filter(struct ieee80211_hw *hw, - unsigned int changed_flags, - unsigned int *total_flags, - u64 multicast) +static void iwlagn_configure_filter(struct ieee80211_hw *hw, + unsigned int changed_flags, + unsigned int *total_flags, + u64 multicast) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); __le32 filter_or = 0, filter_nand = 0; @@ -997,7 +995,7 @@ void iwlagn_configure_filter(struct ieee80211_hw *hw, FIF_BCN_PRBRESP_PROMISC | FIF_CONTROL; } -void iwlagn_mac_flush(struct ieee80211_hw *hw, bool drop) +static void iwlagn_mac_flush(struct ieee80211_hw *hw, bool drop) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -1050,8 +1048,18 @@ static int iwlagn_mac_remain_on_channel(struct ieee80211_hw *hw, mutex_lock(&priv->mutex); if (test_bit(STATUS_SCAN_HW, &priv->status)) { - err = -EBUSY; - goto out; + /* mac80211 should not scan while ROC or ROC while scanning */ + if (WARN_ON_ONCE(priv->scan_type != IWL_SCAN_RADIO_RESET)) { + err = -EBUSY; + goto out; + } + + iwl_scan_cancel_timeout(priv, 100); + + if (test_bit(STATUS_SCAN_HW, &priv->status)) { + err = -EBUSY; + goto out; + } } priv->hw_roc_channel = channel; @@ -1124,7 +1132,7 @@ static int iwlagn_mac_remain_on_channel(struct ieee80211_hw *hw, return err; } -int iwlagn_mac_cancel_remain_on_channel(struct ieee80211_hw *hw) +static int iwlagn_mac_cancel_remain_on_channel(struct ieee80211_hw *hw) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -1141,8 +1149,8 @@ int iwlagn_mac_cancel_remain_on_channel(struct ieee80211_hw *hw) return 0; } -void iwlagn_mac_rssi_callback(struct ieee80211_hw *hw, - enum ieee80211_rssi_event rssi_event) +static void iwlagn_mac_rssi_callback(struct ieee80211_hw *hw, + enum ieee80211_rssi_event rssi_event) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -1166,8 +1174,8 @@ void iwlagn_mac_rssi_callback(struct ieee80211_hw *hw, IWL_DEBUG_MAC80211(priv, "leave\n"); } -int iwlagn_mac_set_tim(struct ieee80211_hw *hw, - struct ieee80211_sta *sta, bool set) +static int iwlagn_mac_set_tim(struct ieee80211_hw *hw, + struct ieee80211_sta *sta, bool set) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -1176,9 +1184,9 @@ int iwlagn_mac_set_tim(struct ieee80211_hw *hw, return 0; } -int iwlagn_mac_conf_tx(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, u16 queue, - const struct ieee80211_tx_queue_params *params) +static int iwlagn_mac_conf_tx(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, u16 queue, + const struct ieee80211_tx_queue_params *params) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); struct iwl_vif_priv *vif_priv = (void *)vif->drv_priv; @@ -1220,7 +1228,7 @@ int iwlagn_mac_conf_tx(struct ieee80211_hw *hw, return 0; } -int iwlagn_mac_tx_last_beacon(struct ieee80211_hw *hw) +static int iwlagn_mac_tx_last_beacon(struct ieee80211_hw *hw) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); @@ -1236,7 +1244,8 @@ static int iwl_set_mode(struct iwl_priv *priv, struct iwl_rxon_context *ctx) return iwlagn_commit_rxon(priv, ctx); } -int iwl_setup_interface(struct iwl_priv *priv, struct iwl_rxon_context *ctx) +static int iwl_setup_interface(struct iwl_priv *priv, + struct iwl_rxon_context *ctx) { struct ieee80211_vif *vif = ctx->vif; int err, ac; @@ -1356,9 +1365,9 @@ static int iwlagn_mac_add_interface(struct ieee80211_hw *hw, return err; } -void iwl_teardown_interface(struct iwl_priv *priv, - struct ieee80211_vif *vif, - bool mode_change) +static void iwl_teardown_interface(struct iwl_priv *priv, + struct ieee80211_vif *vif, + bool mode_change) { struct 
iwl_rxon_context *ctx = iwl_rxon_ctx_from_vif(vif); @@ -1414,13 +1423,11 @@ static void iwlagn_mac_remove_interface(struct ieee80211_hw *hw, } static int iwlagn_mac_change_interface(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - enum nl80211_iftype newtype, bool newp2p) + struct ieee80211_vif *vif, + enum nl80211_iftype newtype, bool newp2p) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - struct iwl_rxon_context *ctx = iwl_rxon_ctx_from_vif(vif); - struct iwl_rxon_context *bss_ctx = &priv->contexts[IWL_RXON_CTX_BSS]; - struct iwl_rxon_context *tmp; + struct iwl_rxon_context *ctx, *tmp; enum nl80211_iftype newviftype = newtype; u32 interface_modes; int err; @@ -1431,6 +1438,18 @@ static int iwlagn_mac_change_interface(struct ieee80211_hw *hw, mutex_lock(&priv->mutex); + ctx = iwl_rxon_ctx_from_vif(vif); + + /* + * To simplify this code, only support changes on the + * BSS context. The PAN context is usually reassigned + * by creating/removing P2P interfaces anyway. + */ + if (ctx->ctxid != IWL_RXON_CTX_BSS) { + err = -EBUSY; + goto out; + } + if (!ctx->vif || !iwl_is_ready_rf(priv)) { /* * Huh? But wait ... this can maybe happen when @@ -1440,32 +1459,19 @@ static int iwlagn_mac_change_interface(struct ieee80211_hw *hw, goto out; } + /* Check if the switch is supported in the same context */ interface_modes = ctx->interface_modes | ctx->exclusive_interface_modes; - if (!(interface_modes & BIT(newtype))) { err = -EBUSY; goto out; } - /* - * Refuse a change that should be done by moving from the PAN - * context to the BSS context instead, if the BSS context is - * available and can support the new interface type. - */ - if (ctx->ctxid == IWL_RXON_CTX_PAN && !bss_ctx->vif && - (bss_ctx->interface_modes & BIT(newtype) || - bss_ctx->exclusive_interface_modes & BIT(newtype))) { - BUILD_BUG_ON(NUM_IWL_RXON_CTX != 2); - err = -EBUSY; - goto out; - } - if (ctx->exclusive_interface_modes & BIT(newtype)) { for_each_context(priv, tmp) { if (ctx == tmp) continue; - if (!tmp->vif) + if (!tmp->is_active) continue; /* @@ -1499,9 +1505,9 @@ static int iwlagn_mac_change_interface(struct ieee80211_hw *hw, return err; } -int iwlagn_mac_hw_scan(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct cfg80211_scan_request *req) +static int iwlagn_mac_hw_scan(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct cfg80211_scan_request *req) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); int ret; @@ -1556,10 +1562,10 @@ static void iwl_sta_modify_ps_wake(struct iwl_priv *priv, int sta_id) iwl_send_add_sta(priv, &cmd, CMD_ASYNC); } -void iwlagn_mac_sta_notify(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - enum sta_notify_cmd cmd, - struct ieee80211_sta *sta) +static void iwlagn_mac_sta_notify(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + enum sta_notify_cmd cmd, + struct ieee80211_sta *sta) { struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); struct iwl_station_priv *sta_priv = (void *)sta->drv_priv; @@ -1596,6 +1602,7 @@ struct ieee80211_ops iwlagn_hw_ops = { #ifdef CONFIG_PM_SLEEP .suspend = iwlagn_mac_suspend, .resume = iwlagn_mac_resume, + .set_wakeup = iwlagn_mac_set_wakeup, #endif .add_interface = iwlagn_mac_add_interface, .remove_interface = iwlagn_mac_remove_interface, diff --git a/drivers/net/wireless/iwlwifi/iwl-agn.c b/drivers/net/wireless/iwlwifi/dvm/main.c index ec36e2b020b6..84d3db5aa506 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn.c +++ b/drivers/net/wireless/iwlwifi/dvm/main.c @@ -44,15 +44,19 @@ #include <asm/div64.h> -#include 
"iwl-eeprom.h" -#include "iwl-dev.h" +#include "iwl-eeprom-read.h" +#include "iwl-eeprom-parse.h" #include "iwl-io.h" -#include "iwl-agn-calib.h" -#include "iwl-agn.h" #include "iwl-trans.h" #include "iwl-op-mode.h" #include "iwl-drv.h" #include "iwl-modparams.h" +#include "iwl-prph.h" + +#include "dev.h" +#include "calib.h" +#include "agn.h" + /****************************************************************************** * @@ -78,7 +82,8 @@ MODULE_DESCRIPTION(DRV_DESCRIPTION); MODULE_VERSION(DRV_VERSION); MODULE_AUTHOR(DRV_COPYRIGHT " " DRV_AUTHOR); MODULE_LICENSE("GPL"); -MODULE_ALIAS("iwlagn"); + +static const struct iwl_op_mode_ops iwl_dvm_ops; void iwl_update_chain_flags(struct iwl_priv *priv) { @@ -180,7 +185,7 @@ int iwlagn_send_beacon_cmd(struct iwl_priv *priv) rate = info->control.rates[0].idx; priv->mgmt_tx_ant = iwl_toggle_tx_ant(priv, priv->mgmt_tx_ant, - priv->hw_params.valid_tx_ant); + priv->eeprom_data->valid_tx_ant); rate_flags = iwl_ant_idx_to_flags(priv->mgmt_tx_ant); /* In mac80211, rates for 5 GHz start at 0 */ @@ -403,7 +408,7 @@ static void iwl_continuous_event_trace(struct iwl_priv *priv) base = priv->device_pointers.log_event_table; if (iwlagn_hw_valid_rtc_data_addr(base)) { - iwl_read_targ_mem_words(priv->trans, base, &read, sizeof(read)); + iwl_read_targ_mem_bytes(priv->trans, base, &read, sizeof(read)); capacity = read.capacity; mode = read.mode; num_wraps = read.wrap_counter; @@ -513,49 +518,6 @@ static void iwl_bg_tx_flush(struct work_struct *work) * queue/FIFO/AC mapping definitions */ -#define IWL_TX_FIFO_BK 0 /* shared */ -#define IWL_TX_FIFO_BE 1 -#define IWL_TX_FIFO_VI 2 /* shared */ -#define IWL_TX_FIFO_VO 3 -#define IWL_TX_FIFO_BK_IPAN IWL_TX_FIFO_BK -#define IWL_TX_FIFO_BE_IPAN 4 -#define IWL_TX_FIFO_VI_IPAN IWL_TX_FIFO_VI -#define IWL_TX_FIFO_VO_IPAN 5 -/* re-uses the VO FIFO, uCode will properly flush/schedule */ -#define IWL_TX_FIFO_AUX 5 -#define IWL_TX_FIFO_UNUSED -1 - -#define IWLAGN_CMD_FIFO_NUM 7 - -/* - * This queue number is required for proper operation - * because the ucode will stop/start the scheduler as - * required. 
- */ -#define IWL_IPAN_MCAST_QUEUE 8 - -static const u8 iwlagn_default_queue_to_tx_fifo[] = { - IWL_TX_FIFO_VO, - IWL_TX_FIFO_VI, - IWL_TX_FIFO_BE, - IWL_TX_FIFO_BK, - IWLAGN_CMD_FIFO_NUM, -}; - -static const u8 iwlagn_ipan_queue_to_tx_fifo[] = { - IWL_TX_FIFO_VO, - IWL_TX_FIFO_VI, - IWL_TX_FIFO_BE, - IWL_TX_FIFO_BK, - IWL_TX_FIFO_BK_IPAN, - IWL_TX_FIFO_BE_IPAN, - IWL_TX_FIFO_VI_IPAN, - IWL_TX_FIFO_VO_IPAN, - IWL_TX_FIFO_BE_IPAN, - IWLAGN_CMD_FIFO_NUM, - IWL_TX_FIFO_AUX, -}; - static const u8 iwlagn_bss_ac_to_fifo[] = { IWL_TX_FIFO_VO, IWL_TX_FIFO_VI, @@ -578,7 +540,7 @@ static const u8 iwlagn_pan_ac_to_queue[] = { 7, 6, 5, 4, }; -void iwl_init_context(struct iwl_priv *priv, u32 ucode_flags) +static void iwl_init_context(struct iwl_priv *priv, u32 ucode_flags) { int i; @@ -645,7 +607,7 @@ void iwl_init_context(struct iwl_priv *priv, u32 ucode_flags) BUILD_BUG_ON(NUM_IWL_RXON_CTX != 2); } -void iwl_rf_kill_ct_config(struct iwl_priv *priv) +static void iwl_rf_kill_ct_config(struct iwl_priv *priv) { struct iwl_ct_kill_config cmd; struct iwl_ct_kill_throttling_config adv_cmd; @@ -726,7 +688,7 @@ static int iwlagn_send_tx_ant_config(struct iwl_priv *priv, u8 valid_tx_ant) } } -void iwl_send_bt_config(struct iwl_priv *priv) +static void iwl_send_bt_config(struct iwl_priv *priv) { struct iwl_bt_cmd bt_cmd = { .lead_time = BT_LEAD_TIME_DEF, @@ -814,7 +776,7 @@ int iwl_alive_start(struct iwl_priv *priv) ieee80211_wake_queues(priv->hw); /* Configure Tx antenna selection based on H/W config */ - iwlagn_send_tx_ant_config(priv, priv->hw_params.valid_tx_ant); + iwlagn_send_tx_ant_config(priv, priv->eeprom_data->valid_tx_ant); if (iwl_is_associated_ctx(ctx) && !priv->wowlan) { struct iwl_rxon_cmd *active_rxon = @@ -932,11 +894,12 @@ void iwl_down(struct iwl_priv *priv) priv->ucode_loaded = false; iwl_trans_stop_device(priv->trans); + /* Set num_aux_in_flight must be done after the transport is stopped */ + atomic_set(&priv->num_aux_in_flight, 0); + /* Clear out all status bits but a few that are stable across reset */ priv->status &= test_bit(STATUS_RF_KILL_HW, &priv->status) << STATUS_RF_KILL_HW | - test_bit(STATUS_GEO_CONFIGURED, &priv->status) << - STATUS_GEO_CONFIGURED | test_bit(STATUS_FW_ERROR, &priv->status) << STATUS_FW_ERROR | test_bit(STATUS_EXIT_PENDING, &priv->status) << @@ -1078,7 +1041,7 @@ static void iwlagn_disable_roc_work(struct work_struct *work) * *****************************************************************************/ -void iwl_setup_deferred_work(struct iwl_priv *priv) +static void iwl_setup_deferred_work(struct iwl_priv *priv) { priv->workqueue = create_singlethread_workqueue(DRV_NAME); @@ -1123,224 +1086,14 @@ void iwl_cancel_deferred_work(struct iwl_priv *priv) del_timer_sync(&priv->ucode_trace); } -static void iwl_init_hw_rates(struct ieee80211_rate *rates) -{ - int i; - - for (i = 0; i < IWL_RATE_COUNT_LEGACY; i++) { - rates[i].bitrate = iwl_rates[i].ieee * 5; - rates[i].hw_value = i; /* Rate scaling will work on indexes */ - rates[i].hw_value_short = i; - rates[i].flags = 0; - if ((i >= IWL_FIRST_CCK_RATE) && (i <= IWL_LAST_CCK_RATE)) { - /* - * If CCK != 1M then set short preamble rate flag. - */ - rates[i].flags |= - (iwl_rates[i].plcp == IWL_RATE_1M_PLCP) ? 
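The legacy rate table setup being deleted in this hunk multiplies iwl_rates[i].ieee by 5 because the 802.11 supported-rates encoding is in units of 500 kb/s while mac80211's bitrate field uses units of 100 kb/s; it is no longer needed here now that the per-band channel and rate arrays come from priv->eeprom_data->bands. A small standalone sketch of just that conversion, with a made-up rate table:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Made-up subset of a legacy rate table: .ieee is the 802.11
 * supported-rates value in units of 500 kb/s. */
struct legacy_rate {
	uint8_t ieee;
	const char *name;
};

static const struct legacy_rate rates[] = {
	{ 2,   "1M"  },	/* CCK */
	{ 22,  "11M" },	/* CCK */
	{ 12,  "6M"  },	/* OFDM */
	{ 108, "54M" },	/* OFDM */
};

int main(void)
{
	for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
		/* mac80211 wants units of 100 kb/s, hence the "* 5" */
		int bitrate = rates[i].ieee * 5;

		printf("%-4s -> bitrate field %d (= %d.%d Mb/s)\n",
		       rates[i].name, bitrate, bitrate / 10, bitrate % 10);
	}
	return 0;
}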
- 0 : IEEE80211_RATE_SHORT_PREAMBLE; - } - } -} - -#define MAX_BIT_RATE_40_MHZ 150 /* Mbps */ -#define MAX_BIT_RATE_20_MHZ 72 /* Mbps */ -static void iwl_init_ht_hw_capab(const struct iwl_priv *priv, - struct ieee80211_sta_ht_cap *ht_info, - enum ieee80211_band band) -{ - u16 max_bit_rate = 0; - u8 rx_chains_num = priv->hw_params.rx_chains_num; - u8 tx_chains_num = priv->hw_params.tx_chains_num; - - ht_info->cap = 0; - memset(&ht_info->mcs, 0, sizeof(ht_info->mcs)); - - ht_info->ht_supported = true; - - if (priv->cfg->ht_params && - priv->cfg->ht_params->ht_greenfield_support) - ht_info->cap |= IEEE80211_HT_CAP_GRN_FLD; - ht_info->cap |= IEEE80211_HT_CAP_SGI_20; - max_bit_rate = MAX_BIT_RATE_20_MHZ; - if (priv->hw_params.ht40_channel & BIT(band)) { - ht_info->cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40; - ht_info->cap |= IEEE80211_HT_CAP_SGI_40; - ht_info->mcs.rx_mask[4] = 0x01; - max_bit_rate = MAX_BIT_RATE_40_MHZ; - } - - if (iwlwifi_mod_params.amsdu_size_8K) - ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU; - - ht_info->ampdu_factor = CFG_HT_RX_AMPDU_FACTOR_DEF; - ht_info->ampdu_density = CFG_HT_MPDU_DENSITY_DEF; - - ht_info->mcs.rx_mask[0] = 0xFF; - if (rx_chains_num >= 2) - ht_info->mcs.rx_mask[1] = 0xFF; - if (rx_chains_num >= 3) - ht_info->mcs.rx_mask[2] = 0xFF; - - /* Highest supported Rx data rate */ - max_bit_rate *= rx_chains_num; - WARN_ON(max_bit_rate & ~IEEE80211_HT_MCS_RX_HIGHEST_MASK); - ht_info->mcs.rx_highest = cpu_to_le16(max_bit_rate); - - /* Tx MCS capabilities */ - ht_info->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED; - if (tx_chains_num != rx_chains_num) { - ht_info->mcs.tx_params |= IEEE80211_HT_MCS_TX_RX_DIFF; - ht_info->mcs.tx_params |= ((tx_chains_num - 1) << - IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT); - } -} - -/** - * iwl_init_geos - Initialize mac80211's geo/channel info based from eeprom - */ -static int iwl_init_geos(struct iwl_priv *priv) -{ - struct iwl_channel_info *ch; - struct ieee80211_supported_band *sband; - struct ieee80211_channel *channels; - struct ieee80211_channel *geo_ch; - struct ieee80211_rate *rates; - int i = 0; - s8 max_tx_power = IWLAGN_TX_POWER_TARGET_POWER_MIN; - - if (priv->bands[IEEE80211_BAND_2GHZ].n_bitrates || - priv->bands[IEEE80211_BAND_5GHZ].n_bitrates) { - IWL_DEBUG_INFO(priv, "Geography modes already initialized.\n"); - set_bit(STATUS_GEO_CONFIGURED, &priv->status); - return 0; - } - - channels = kcalloc(priv->channel_count, - sizeof(struct ieee80211_channel), GFP_KERNEL); - if (!channels) - return -ENOMEM; - - rates = kcalloc(IWL_RATE_COUNT_LEGACY, sizeof(struct ieee80211_rate), - GFP_KERNEL); - if (!rates) { - kfree(channels); - return -ENOMEM; - } - - /* 5.2GHz channels start after the 2.4GHz channels */ - sband = &priv->bands[IEEE80211_BAND_5GHZ]; - sband->channels = &channels[ARRAY_SIZE(iwl_eeprom_band_1)]; - /* just OFDM */ - sband->bitrates = &rates[IWL_FIRST_OFDM_RATE]; - sband->n_bitrates = IWL_RATE_COUNT_LEGACY - IWL_FIRST_OFDM_RATE; - - if (priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE) - iwl_init_ht_hw_capab(priv, &sband->ht_cap, - IEEE80211_BAND_5GHZ); - - sband = &priv->bands[IEEE80211_BAND_2GHZ]; - sband->channels = channels; - /* OFDM & CCK */ - sband->bitrates = rates; - sband->n_bitrates = IWL_RATE_COUNT_LEGACY; - - if (priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE) - iwl_init_ht_hw_capab(priv, &sband->ht_cap, - IEEE80211_BAND_2GHZ); - - priv->ieee_channels = channels; - priv->ieee_rates = rates; - - for (i = 0; i < priv->channel_count; i++) { - ch = &priv->channel_info[i]; - - /* FIXME: might be removed if scan 
is OK */ - if (!is_channel_valid(ch)) - continue; - - sband = &priv->bands[ch->band]; - - geo_ch = &sband->channels[sband->n_channels++]; - - geo_ch->center_freq = - ieee80211_channel_to_frequency(ch->channel, ch->band); - geo_ch->max_power = ch->max_power_avg; - geo_ch->max_antenna_gain = 0xff; - geo_ch->hw_value = ch->channel; - - if (is_channel_valid(ch)) { - if (!(ch->flags & EEPROM_CHANNEL_IBSS)) - geo_ch->flags |= IEEE80211_CHAN_NO_IBSS; - - if (!(ch->flags & EEPROM_CHANNEL_ACTIVE)) - geo_ch->flags |= IEEE80211_CHAN_PASSIVE_SCAN; - - if (ch->flags & EEPROM_CHANNEL_RADAR) - geo_ch->flags |= IEEE80211_CHAN_RADAR; - - geo_ch->flags |= ch->ht40_extension_channel; - - if (ch->max_power_avg > max_tx_power) - max_tx_power = ch->max_power_avg; - } else { - geo_ch->flags |= IEEE80211_CHAN_DISABLED; - } - - IWL_DEBUG_INFO(priv, "Channel %d Freq=%d[%sGHz] %s flag=0x%X\n", - ch->channel, geo_ch->center_freq, - is_channel_a_band(ch) ? "5.2" : "2.4", - geo_ch->flags & IEEE80211_CHAN_DISABLED ? - "restricted" : "valid", - geo_ch->flags); - } - - priv->tx_power_device_lmt = max_tx_power; - priv->tx_power_user_lmt = max_tx_power; - priv->tx_power_next = max_tx_power; - - if ((priv->bands[IEEE80211_BAND_5GHZ].n_channels == 0) && - priv->hw_params.sku & EEPROM_SKU_CAP_BAND_52GHZ) { - IWL_INFO(priv, "Incorrectly detected BG card as ABG. " - "Please send your %s to maintainer.\n", - priv->trans->hw_id_str); - priv->hw_params.sku &= ~EEPROM_SKU_CAP_BAND_52GHZ; - } - - if (iwlwifi_mod_params.disable_5ghz) - priv->bands[IEEE80211_BAND_5GHZ].n_channels = 0; - - IWL_INFO(priv, "Tunable channels: %d 802.11bg, %d 802.11a channels\n", - priv->bands[IEEE80211_BAND_2GHZ].n_channels, - priv->bands[IEEE80211_BAND_5GHZ].n_channels); - - set_bit(STATUS_GEO_CONFIGURED, &priv->status); - - return 0; -} - -/* - * iwl_free_geos - undo allocations in iwl_init_geos - */ -static void iwl_free_geos(struct iwl_priv *priv) -{ - kfree(priv->ieee_channels); - kfree(priv->ieee_rates); - clear_bit(STATUS_GEO_CONFIGURED, &priv->status); -} - -int iwl_init_drv(struct iwl_priv *priv) +static int iwl_init_drv(struct iwl_priv *priv) { - int ret; - spin_lock_init(&priv->sta_lock); mutex_init(&priv->mutex); INIT_LIST_HEAD(&priv->calib_results); - priv->ieee_channels = NULL; - priv->ieee_rates = NULL; priv->band = IEEE80211_BAND_2GHZ; priv->plcp_delta_threshold = @@ -1371,31 +1124,11 @@ int iwl_init_drv(struct iwl_priv *priv) priv->dynamic_frag_thresh = BT_FRAG_THRESHOLD_DEF; } - ret = iwl_init_channel_map(priv); - if (ret) { - IWL_ERR(priv, "initializing regulatory failed: %d\n", ret); - goto err; - } - - ret = iwl_init_geos(priv); - if (ret) { - IWL_ERR(priv, "initializing geos failed: %d\n", ret); - goto err_free_channel_map; - } - iwl_init_hw_rates(priv->ieee_rates); - return 0; - -err_free_channel_map: - iwl_free_channel_map(priv); -err: - return ret; } -void iwl_uninit_drv(struct iwl_priv *priv) +static void iwl_uninit_drv(struct iwl_priv *priv) { - iwl_free_geos(priv); - iwl_free_channel_map(priv); kfree(priv->scan_cmd); kfree(priv->beacon_cmd); kfree(rcu_dereference_raw(priv->noa_data)); @@ -1405,15 +1138,12 @@ void iwl_uninit_drv(struct iwl_priv *priv) #endif } -void iwl_set_hw_params(struct iwl_priv *priv) +static void iwl_set_hw_params(struct iwl_priv *priv) { if (priv->cfg->ht_params) priv->hw_params.use_rts_for_aggregation = priv->cfg->ht_params->use_rts_for_aggregation; - if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_ALL) - priv->hw_params.sku &= ~EEPROM_SKU_CAP_11N_ENABLE; - /* Device-specific setup */ 
priv->lib->set_hw_params(priv); } @@ -1421,7 +1151,7 @@ void iwl_set_hw_params(struct iwl_priv *priv) /* show what optional capabilities we have */ -void iwl_option_config(struct iwl_priv *priv) +static void iwl_option_config(struct iwl_priv *priv) { #ifdef CONFIG_IWLWIFI_DEBUG IWL_INFO(priv, "CONFIG_IWLWIFI_DEBUG enabled\n"); @@ -1454,6 +1184,42 @@ void iwl_option_config(struct iwl_priv *priv) #endif } +static int iwl_eeprom_init_hw_params(struct iwl_priv *priv) +{ + u16 radio_cfg; + + priv->eeprom_data->sku = priv->eeprom_data->sku; + + if (priv->eeprom_data->sku & EEPROM_SKU_CAP_11N_ENABLE && + !priv->cfg->ht_params) { + IWL_ERR(priv, "Invalid 11n configuration\n"); + return -EINVAL; + } + + if (!priv->eeprom_data->sku) { + IWL_ERR(priv, "Invalid device sku\n"); + return -EINVAL; + } + + IWL_INFO(priv, "Device SKU: 0x%X\n", priv->eeprom_data->sku); + + radio_cfg = priv->eeprom_data->radio_cfg; + + priv->hw_params.tx_chains_num = + num_of_ant(priv->eeprom_data->valid_tx_ant); + if (priv->cfg->rx_with_siso_diversity) + priv->hw_params.rx_chains_num = 1; + else + priv->hw_params.rx_chains_num = + num_of_ant(priv->eeprom_data->valid_rx_ant); + + IWL_INFO(priv, "Valid Tx ant: 0x%X, Valid Rx ant: 0x%X\n", + priv->eeprom_data->valid_tx_ant, + priv->eeprom_data->valid_rx_ant); + + return 0; +} + static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, const struct iwl_fw *fw) @@ -1466,7 +1232,6 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, struct iwl_trans_config trans_cfg; static const u8 no_reclaim_cmds[] = { REPLY_RX_PHY_CMD, - REPLY_RX, REPLY_RX_MPDU_CMD, REPLY_COMPRESSED_BA, STATISTICS_NOTIFICATION, @@ -1539,8 +1304,12 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, trans_cfg.queue_watchdog_timeout = priv->cfg->base_params->wd_timeout; else - trans_cfg.queue_watchdog_timeout = IWL_WATCHHDOG_DISABLED; + trans_cfg.queue_watchdog_timeout = IWL_WATCHDOG_DISABLED; trans_cfg.command_names = iwl_dvm_cmd_strings; + trans_cfg.cmd_fifo = IWLAGN_CMD_FIFO_NUM; + + WARN_ON(sizeof(priv->transport_queue_stop) * BITS_PER_BYTE < + priv->cfg->base_params->num_of_queues); ucode_flags = fw->ucode_capa.flags; @@ -1551,15 +1320,9 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, if (ucode_flags & IWL_UCODE_TLV_FLAGS_PAN) { priv->sta_key_max_num = STA_KEY_MAX_NUM_PAN; trans_cfg.cmd_queue = IWL_IPAN_CMD_QUEUE_NUM; - trans_cfg.queue_to_fifo = iwlagn_ipan_queue_to_tx_fifo; - trans_cfg.n_queue_to_fifo = - ARRAY_SIZE(iwlagn_ipan_queue_to_tx_fifo); } else { priv->sta_key_max_num = STA_KEY_MAX_NUM; trans_cfg.cmd_queue = IWL_DEFAULT_CMD_QUEUE_NUM; - trans_cfg.queue_to_fifo = iwlagn_default_queue_to_tx_fifo; - trans_cfg.n_queue_to_fifo = - ARRAY_SIZE(iwlagn_default_queue_to_tx_fifo); } /* Configure transport layer */ @@ -1599,25 +1362,33 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, goto out_free_hw; /* Read the EEPROM */ - if (iwl_eeprom_init(priv, priv->trans->hw_rev)) { + if (iwl_read_eeprom(priv->trans, &priv->eeprom_blob, + &priv->eeprom_blob_size)) { IWL_ERR(priv, "Unable to init EEPROM\n"); goto out_free_hw; } + /* Reset chip to save power until we load uCode during "up". 
*/ iwl_trans_stop_hw(priv->trans, false); - if (iwl_eeprom_check_version(priv)) + priv->eeprom_data = iwl_parse_eeprom_data(priv->trans->dev, priv->cfg, + priv->eeprom_blob, + priv->eeprom_blob_size); + if (!priv->eeprom_data) + goto out_free_eeprom_blob; + + if (iwl_eeprom_check_version(priv->eeprom_data, priv->trans)) goto out_free_eeprom; if (iwl_eeprom_init_hw_params(priv)) goto out_free_eeprom; /* extract MAC Address */ - iwl_eeprom_get_mac(priv, priv->addresses[0].addr); + memcpy(priv->addresses[0].addr, priv->eeprom_data->hw_addr, ETH_ALEN); IWL_DEBUG_INFO(priv, "MAC address: %pM\n", priv->addresses[0].addr); priv->hw->wiphy->addresses = priv->addresses; priv->hw->wiphy->n_addresses = 1; - num_mac = iwl_eeprom_query16(priv, EEPROM_NUM_MAC_ADDRESS); + num_mac = priv->eeprom_data->n_hw_addrs; if (num_mac > 1) { memcpy(priv->addresses[1].addr, priv->addresses[0].addr, ETH_ALEN); @@ -1630,7 +1401,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, ************************/ iwl_set_hw_params(priv); - if (!(priv->hw_params.sku & EEPROM_SKU_CAP_IPAN_ENABLE)) { + if (!(priv->eeprom_data->sku & EEPROM_SKU_CAP_IPAN_ENABLE)) { IWL_DEBUG_INFO(priv, "Your EEPROM disabled PAN"); ucode_flags &= ~IWL_UCODE_TLV_FLAGS_PAN; /* @@ -1640,9 +1411,6 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, ucode_flags &= ~IWL_UCODE_TLV_FLAGS_P2P; priv->sta_key_max_num = STA_KEY_MAX_NUM; trans_cfg.cmd_queue = IWL_DEFAULT_CMD_QUEUE_NUM; - trans_cfg.queue_to_fifo = iwlagn_default_queue_to_tx_fifo; - trans_cfg.n_queue_to_fifo = - ARRAY_SIZE(iwlagn_default_queue_to_tx_fifo); /* Configure transport layer again*/ iwl_trans_configure(priv->trans, &trans_cfg); @@ -1660,9 +1428,6 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, atomic_set(&priv->queue_stop_count[i], 0); } - WARN_ON(trans_cfg.queue_to_fifo[trans_cfg.cmd_queue] != - IWLAGN_CMD_FIFO_NUM); - if (iwl_init_drv(priv)) goto out_free_eeprom; @@ -1711,8 +1476,10 @@ out_destroy_workqueue: destroy_workqueue(priv->workqueue); priv->workqueue = NULL; iwl_uninit_drv(priv); +out_free_eeprom_blob: + kfree(priv->eeprom_blob); out_free_eeprom: - iwl_eeprom_free(priv); + iwl_free_eeprom_data(priv->eeprom_data); out_free_hw: ieee80211_free_hw(priv->hw); out: @@ -1720,7 +1487,7 @@ out: return op_mode; } -void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode) +static void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); @@ -1728,7 +1495,7 @@ void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode) iwl_dbgfs_unregister(priv); - iwl_testmode_cleanup(priv); + iwl_testmode_free(priv); iwlagn_mac_unregister(priv); iwl_tt_exit(priv); @@ -1737,7 +1504,8 @@ void iwl_op_mode_dvm_stop(struct iwl_op_mode *op_mode) priv->ucode_loaded = false; iwl_trans_stop_device(priv->trans); - iwl_eeprom_free(priv); + kfree(priv->eeprom_blob); + iwl_free_eeprom_data(priv->eeprom_data); /*netif_stop_queue(dev); */ flush_workqueue(priv->workqueue); @@ -1850,7 +1618,7 @@ static void iwl_dump_nic_error_log(struct iwl_priv *priv) } /*TODO: Update dbgfs with ISR error stats obtained below */ - iwl_read_targ_mem_words(trans, base, &table, sizeof(table)); + iwl_read_targ_mem_bytes(trans, base, &table, sizeof(table)); if (ERROR_START_OFFSET <= table.valid * ERROR_ELEM_SIZE) { IWL_ERR(trans, "Start IWL Error Log Dump:\n"); @@ -2185,7 +1953,7 @@ static void iwlagn_fw_error(struct iwl_priv *priv, bool ondemand) } } -void iwl_nic_error(struct iwl_op_mode *op_mode) +static void 
iwl_nic_error(struct iwl_op_mode *op_mode) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); @@ -2198,7 +1966,7 @@ void iwl_nic_error(struct iwl_op_mode *op_mode) iwlagn_fw_error(priv, false); } -void iwl_cmd_queue_full(struct iwl_op_mode *op_mode) +static void iwl_cmd_queue_full(struct iwl_op_mode *op_mode) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); @@ -2208,11 +1976,60 @@ void iwl_cmd_queue_full(struct iwl_op_mode *op_mode) } } -void iwl_nic_config(struct iwl_op_mode *op_mode) +#define EEPROM_RF_CONFIG_TYPE_MAX 0x3 + +static void iwl_nic_config(struct iwl_op_mode *op_mode) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); + u16 radio_cfg = priv->eeprom_data->radio_cfg; + + /* SKU Control */ + iwl_set_bits_mask(priv->trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_MSK_MAC_DASH | + CSR_HW_IF_CONFIG_REG_MSK_MAC_STEP, + (CSR_HW_REV_STEP(priv->trans->hw_rev) << + CSR_HW_IF_CONFIG_REG_POS_MAC_STEP) | + (CSR_HW_REV_DASH(priv->trans->hw_rev) << + CSR_HW_IF_CONFIG_REG_POS_MAC_DASH)); + + /* write radio config values to register */ + if (EEPROM_RF_CFG_TYPE_MSK(radio_cfg) <= EEPROM_RF_CONFIG_TYPE_MAX) { + u32 reg_val = + EEPROM_RF_CFG_TYPE_MSK(radio_cfg) << + CSR_HW_IF_CONFIG_REG_POS_PHY_TYPE | + EEPROM_RF_CFG_STEP_MSK(radio_cfg) << + CSR_HW_IF_CONFIG_REG_POS_PHY_STEP | + EEPROM_RF_CFG_DASH_MSK(radio_cfg) << + CSR_HW_IF_CONFIG_REG_POS_PHY_DASH; + + iwl_set_bits_mask(priv->trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_MSK_PHY_TYPE | + CSR_HW_IF_CONFIG_REG_MSK_PHY_STEP | + CSR_HW_IF_CONFIG_REG_MSK_PHY_DASH, reg_val); + + IWL_INFO(priv, "Radio type=0x%x-0x%x-0x%x\n", + EEPROM_RF_CFG_TYPE_MSK(radio_cfg), + EEPROM_RF_CFG_STEP_MSK(radio_cfg), + EEPROM_RF_CFG_DASH_MSK(radio_cfg)); + } else { + WARN_ON(1); + } - priv->lib->nic_config(priv); + /* set CSR_HW_CONFIG_REG for uCode use */ + iwl_set_bit(priv->trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI | + CSR_HW_IF_CONFIG_REG_BIT_MAC_SI); + + /* W/A : NIC is stuck in a reset state after Early PCIe power off + * (PCIe power is lost before PERST# is asserted), + * causing ME FW to lose ownership and not being able to obtain it back. 
+ */ + iwl_set_bits_mask_prph(priv->trans, APMG_PS_CTRL_REG, + APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS, + ~APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS); + + if (priv->lib->nic_config) + priv->lib->nic_config(priv); } static void iwl_wimax_active(struct iwl_op_mode *op_mode) @@ -2223,7 +2040,7 @@ static void iwl_wimax_active(struct iwl_op_mode *op_mode) IWL_ERR(priv, "RF is used by WiMAX\n"); } -void iwl_stop_sw_queue(struct iwl_op_mode *op_mode, int queue) +static void iwl_stop_sw_queue(struct iwl_op_mode *op_mode, int queue) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); int mq = priv->queue_to_mac80211[queue]; @@ -2242,7 +2059,7 @@ void iwl_stop_sw_queue(struct iwl_op_mode *op_mode, int queue) ieee80211_stop_queue(priv->hw, mq); } -void iwl_wake_sw_queue(struct iwl_op_mode *op_mode, int queue) +static void iwl_wake_sw_queue(struct iwl_op_mode *op_mode, int queue) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); int mq = priv->queue_to_mac80211[queue]; @@ -2282,16 +2099,17 @@ void iwlagn_lift_passive_no_rx(struct iwl_priv *priv) priv->passive_no_rx = false; } -void iwl_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb) +static void iwl_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb) { + struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); struct ieee80211_tx_info *info; info = IEEE80211_SKB_CB(skb); - kmem_cache_free(iwl_tx_cmd_pool, (info->driver_data[1])); + iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]); dev_kfree_skb_any(skb); } -void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) +static void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) { struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); @@ -2303,7 +2121,7 @@ void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) wiphy_rfkill_set_hw_state(priv->hw->wiphy, state); } -const struct iwl_op_mode_ops iwl_dvm_ops = { +static const struct iwl_op_mode_ops iwl_dvm_ops = { .start = iwl_op_mode_dvm_start, .stop = iwl_op_mode_dvm_stop, .rx = iwl_rx_dispatch, @@ -2322,9 +2140,6 @@ const struct iwl_op_mode_ops iwl_dvm_ops = { * driver and module entry point * *****************************************************************************/ - -struct kmem_cache *iwl_tx_cmd_pool; - static int __init iwl_init(void) { @@ -2332,36 +2147,25 @@ static int __init iwl_init(void) pr_info(DRV_DESCRIPTION ", " DRV_VERSION "\n"); pr_info(DRV_COPYRIGHT "\n"); - iwl_tx_cmd_pool = kmem_cache_create("iwl_dev_cmd", - sizeof(struct iwl_device_cmd), - sizeof(void *), 0, NULL); - if (!iwl_tx_cmd_pool) - return -ENOMEM; - ret = iwlagn_rate_control_register(); if (ret) { pr_err("Unable to register rate control algorithm: %d\n", ret); - goto error_rc_register; + return ret; } - ret = iwl_pci_register_driver(); - if (ret) - goto error_pci_register; - return ret; + ret = iwl_opmode_register("iwldvm", &iwl_dvm_ops); + if (ret) { + pr_err("Unable to register op_mode: %d\n", ret); + iwlagn_rate_control_unregister(); + } -error_pci_register: - iwlagn_rate_control_unregister(); -error_rc_register: - kmem_cache_destroy(iwl_tx_cmd_pool); return ret; } +module_init(iwl_init); static void __exit iwl_exit(void) { - iwl_pci_unregister_driver(); + iwl_opmode_deregister("iwldvm"); iwlagn_rate_control_unregister(); - kmem_cache_destroy(iwl_tx_cmd_pool); } - module_exit(iwl_exit); -module_init(iwl_init); diff --git a/drivers/net/wireless/iwlwifi/iwl-power.c b/drivers/net/wireless/iwlwifi/dvm/power.c index 544ddf17f5bd..518cf3715809 100644 --- a/drivers/net/wireless/iwlwifi/iwl-power.c +++ 
b/drivers/net/wireless/iwlwifi/dvm/power.c @@ -31,18 +31,15 @@ #include <linux/module.h> #include <linux/slab.h> #include <linux/init.h> - #include <net/mac80211.h> - -#include "iwl-eeprom.h" -#include "iwl-dev.h" -#include "iwl-agn.h" #include "iwl-io.h" -#include "iwl-commands.h" #include "iwl-debug.h" -#include "iwl-power.h" #include "iwl-trans.h" #include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" +#include "commands.h" +#include "power.h" /* * Setting power level allows the card to go to sleep when not busy. diff --git a/drivers/net/wireless/iwlwifi/iwl-power.h b/drivers/net/wireless/iwlwifi/dvm/power.h index 21afc92efacb..a2cee7f04848 100644 --- a/drivers/net/wireless/iwlwifi/iwl-power.h +++ b/drivers/net/wireless/iwlwifi/dvm/power.h @@ -28,7 +28,7 @@ #ifndef __iwl_power_setting_h__ #define __iwl_power_setting_h__ -#include "iwl-commands.h" +#include "commands.h" struct iwl_power_mgr { struct iwl_powertable_cmd sleep_cmd; diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-rs.c b/drivers/net/wireless/iwlwifi/dvm/rs.c index 8cebd7c363fc..6fddd2785e6e 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-rs.c +++ b/drivers/net/wireless/iwlwifi/dvm/rs.c @@ -35,10 +35,8 @@ #include <linux/workqueue.h> -#include "iwl-dev.h" -#include "iwl-agn.h" -#include "iwl-op-mode.h" -#include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" #define RS_NAME "iwl-agn-rs" @@ -819,7 +817,7 @@ static u32 rs_get_lower_rate(struct iwl_lq_sta *lq_sta, if (num_of_ant(tbl->ant_type) > 1) tbl->ant_type = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); tbl->is_ht40 = 0; tbl->is_SGI = 0; @@ -1447,7 +1445,7 @@ static int rs_move_legacy_other(struct iwl_priv *priv, u32 sz = (sizeof(struct iwl_scale_tbl_info) - (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT)); u8 start_action; - u8 valid_tx_ant = priv->hw_params.valid_tx_ant; + u8 valid_tx_ant = priv->eeprom_data->valid_tx_ant; u8 tx_chains_num = priv->hw_params.tx_chains_num; int ret = 0; u8 update_search_tbl_counter = 0; @@ -1465,7 +1463,7 @@ static int rs_move_legacy_other(struct iwl_priv *priv, case IWL_BT_COEX_TRAFFIC_LOAD_CONTINUOUS: /* avoid antenna B and MIMO */ valid_tx_ant = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); if (tbl->action >= IWL_LEGACY_SWITCH_ANTENNA2 && tbl->action != IWL_LEGACY_SWITCH_SISO) tbl->action = IWL_LEGACY_SWITCH_SISO; @@ -1489,7 +1487,7 @@ static int rs_move_legacy_other(struct iwl_priv *priv, else if (tbl->action >= IWL_LEGACY_SWITCH_ANTENNA2) tbl->action = IWL_LEGACY_SWITCH_SISO; valid_tx_ant = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); } start_action = tbl->action; @@ -1623,7 +1621,7 @@ static int rs_move_siso_to_other(struct iwl_priv *priv, u32 sz = (sizeof(struct iwl_scale_tbl_info) - (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT)); u8 start_action; - u8 valid_tx_ant = priv->hw_params.valid_tx_ant; + u8 valid_tx_ant = priv->eeprom_data->valid_tx_ant; u8 tx_chains_num = priv->hw_params.tx_chains_num; u8 update_search_tbl_counter = 0; int ret; @@ -1641,7 +1639,7 @@ static int rs_move_siso_to_other(struct iwl_priv *priv, case IWL_BT_COEX_TRAFFIC_LOAD_CONTINUOUS: /* avoid antenna B and MIMO */ valid_tx_ant = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); if (tbl->action != IWL_SISO_SWITCH_ANTENNA1) tbl->action = IWL_SISO_SWITCH_ANTENNA1; break; @@ -1659,7 +1657,7 @@ static int rs_move_siso_to_other(struct 
iwl_priv *priv, /* configure as 1x1 if bt full concurrency */ if (priv->bt_full_concurrent) { valid_tx_ant = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); if (tbl->action >= IWL_LEGACY_SWITCH_ANTENNA2) tbl->action = IWL_SISO_SWITCH_ANTENNA1; } @@ -1795,7 +1793,7 @@ static int rs_move_mimo2_to_other(struct iwl_priv *priv, u32 sz = (sizeof(struct iwl_scale_tbl_info) - (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT)); u8 start_action; - u8 valid_tx_ant = priv->hw_params.valid_tx_ant; + u8 valid_tx_ant = priv->eeprom_data->valid_tx_ant; u8 tx_chains_num = priv->hw_params.tx_chains_num; u8 update_search_tbl_counter = 0; int ret; @@ -1965,7 +1963,7 @@ static int rs_move_mimo3_to_other(struct iwl_priv *priv, u32 sz = (sizeof(struct iwl_scale_tbl_info) - (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT)); u8 start_action; - u8 valid_tx_ant = priv->hw_params.valid_tx_ant; + u8 valid_tx_ant = priv->eeprom_data->valid_tx_ant; u8 tx_chains_num = priv->hw_params.tx_chains_num; int ret; u8 update_search_tbl_counter = 0; @@ -2699,7 +2697,7 @@ static void rs_initialize_lq(struct iwl_priv *priv, i = lq_sta->last_txrate_idx; - valid_tx_ant = priv->hw_params.valid_tx_ant; + valid_tx_ant = priv->eeprom_data->valid_tx_ant; if (!lq_sta->search_better_tbl) active_tbl = lq_sta->active_tbl; @@ -2893,15 +2891,15 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta, u8 sta_i /* These values will be overridden later */ lq_sta->lq.general_params.single_stream_ant_msk = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); lq_sta->lq.general_params.dual_stream_ant_msk = - priv->hw_params.valid_tx_ant & - ~first_antenna(priv->hw_params.valid_tx_ant); + priv->eeprom_data->valid_tx_ant & + ~first_antenna(priv->eeprom_data->valid_tx_ant); if (!lq_sta->lq.general_params.dual_stream_ant_msk) { lq_sta->lq.general_params.dual_stream_ant_msk = ANT_AB; - } else if (num_of_ant(priv->hw_params.valid_tx_ant) == 2) { + } else if (num_of_ant(priv->eeprom_data->valid_tx_ant) == 2) { lq_sta->lq.general_params.dual_stream_ant_msk = - priv->hw_params.valid_tx_ant; + priv->eeprom_data->valid_tx_ant; } /* as default allow aggregation for all tids */ @@ -2947,7 +2945,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv, if (priv && priv->bt_full_concurrent) { /* 1x1 only */ tbl_type.ant_type = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); } /* How many times should we repeat the initial rate? */ @@ -2979,7 +2977,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv, if (priv->bt_full_concurrent) valid_tx_ant = ANT_A; else - valid_tx_ant = priv->hw_params.valid_tx_ant; + valid_tx_ant = priv->eeprom_data->valid_tx_ant; } /* Fill rest of rate table */ @@ -3013,7 +3011,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv, if (priv && priv->bt_full_concurrent) { /* 1x1 only */ tbl_type.ant_type = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); } /* Indicate to uCode which entries might be MIMO. 
@@ -3100,7 +3098,7 @@ static void rs_dbgfs_set_mcs(struct iwl_lq_sta *lq_sta, u8 ant_sel_tx; priv = lq_sta->drv; - valid_tx_ant = priv->hw_params.valid_tx_ant; + valid_tx_ant = priv->eeprom_data->valid_tx_ant; if (lq_sta->dbg_fixed_rate) { ant_sel_tx = ((lq_sta->dbg_fixed_rate & RATE_MCS_ANT_ABC_MSK) @@ -3171,9 +3169,9 @@ static ssize_t rs_sta_dbgfs_scale_table_read(struct file *file, desc += sprintf(buff+desc, "fixed rate 0x%X\n", lq_sta->dbg_fixed_rate); desc += sprintf(buff+desc, "valid_tx_ant %s%s%s\n", - (priv->hw_params.valid_tx_ant & ANT_A) ? "ANT_A," : "", - (priv->hw_params.valid_tx_ant & ANT_B) ? "ANT_B," : "", - (priv->hw_params.valid_tx_ant & ANT_C) ? "ANT_C" : ""); + (priv->eeprom_data->valid_tx_ant & ANT_A) ? "ANT_A," : "", + (priv->eeprom_data->valid_tx_ant & ANT_B) ? "ANT_B," : "", + (priv->eeprom_data->valid_tx_ant & ANT_C) ? "ANT_C" : ""); desc += sprintf(buff+desc, "lq type %s\n", (is_legacy(tbl->lq_type)) ? "legacy" : "HT"); if (is_Ht(tbl->lq_type)) { diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-rs.h b/drivers/net/wireless/iwlwifi/dvm/rs.h index 82d02e1ae89f..ad3aea8f626a 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-rs.h +++ b/drivers/net/wireless/iwlwifi/dvm/rs.h @@ -29,9 +29,10 @@ #include <net/mac80211.h> -#include "iwl-commands.h" #include "iwl-config.h" +#include "commands.h" + struct iwl_rate_info { u8 plcp; /* uCode API: IWL_RATE_6M_PLCP, etc. */ u8 plcp_siso; /* uCode API: IWL_RATE_SISO_6M_PLCP, etc. */ diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-rx.c b/drivers/net/wireless/iwlwifi/dvm/rx.c index 403de96f9747..fee5cffa1669 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-rx.c +++ b/drivers/net/wireless/iwlwifi/dvm/rx.c @@ -32,12 +32,10 @@ #include <linux/sched.h> #include <net/mac80211.h> #include <asm/unaligned.h> -#include "iwl-eeprom.h" -#include "iwl-dev.h" #include "iwl-io.h" -#include "iwl-agn-calib.h" -#include "iwl-agn.h" -#include "iwl-modparams.h" +#include "dev.h" +#include "calib.h" +#include "agn.h" #define IWL_CMD_ENTRY(x) [x] = #x @@ -90,7 +88,6 @@ const char *iwl_dvm_cmd_strings[REPLY_MAX] = { IWL_CMD_ENTRY(REPLY_PHY_CALIBRATION_CMD), IWL_CMD_ENTRY(REPLY_RX_PHY_CMD), IWL_CMD_ENTRY(REPLY_RX_MPDU_CMD), - IWL_CMD_ENTRY(REPLY_RX), IWL_CMD_ENTRY(REPLY_COMPRESSED_BA), IWL_CMD_ENTRY(CALIBRATION_CFG_CMD), IWL_CMD_ENTRY(CALIBRATION_RES_NOTIFICATION), @@ -897,8 +894,7 @@ static int iwlagn_calc_rssi(struct iwl_priv *priv, return max_rssi - agc - IWLAGN_RSSI_OFFSET; } -/* Called for REPLY_RX (legacy ABG frames), or - * REPLY_RX_MPDU_CMD (HT high-throughput N frames). */ +/* Called for REPLY_RX_MPDU_CMD */ static int iwlagn_rx_reply_rx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, struct iwl_device_cmd *cmd) @@ -913,37 +909,17 @@ static int iwlagn_rx_reply_rx(struct iwl_priv *priv, u32 ampdu_status; u32 rate_n_flags; - /** - * REPLY_RX and REPLY_RX_MPDU_CMD are handled differently. - * REPLY_RX: physical layer info is in this buffer - * REPLY_RX_MPDU_CMD: physical layer info was sent in separate - * command and cached in priv->last_phy_res - * - * Here we set up local variables depending on which command is - * received. 
- */ - if (pkt->hdr.cmd == REPLY_RX) { - phy_res = (struct iwl_rx_phy_res *)pkt->data; - header = (struct ieee80211_hdr *)(pkt->data + sizeof(*phy_res) - + phy_res->cfg_phy_cnt); - - len = le16_to_cpu(phy_res->byte_count); - rx_pkt_status = *(__le32 *)(pkt->data + sizeof(*phy_res) + - phy_res->cfg_phy_cnt + len); - ampdu_status = le32_to_cpu(rx_pkt_status); - } else { - if (!priv->last_phy_res_valid) { - IWL_ERR(priv, "MPDU frame without cached PHY data\n"); - return 0; - } - phy_res = &priv->last_phy_res; - amsdu = (struct iwl_rx_mpdu_res_start *)pkt->data; - header = (struct ieee80211_hdr *)(pkt->data + sizeof(*amsdu)); - len = le16_to_cpu(amsdu->byte_count); - rx_pkt_status = *(__le32 *)(pkt->data + sizeof(*amsdu) + len); - ampdu_status = iwlagn_translate_rx_status(priv, - le32_to_cpu(rx_pkt_status)); + if (!priv->last_phy_res_valid) { + IWL_ERR(priv, "MPDU frame without cached PHY data\n"); + return 0; } + phy_res = &priv->last_phy_res; + amsdu = (struct iwl_rx_mpdu_res_start *)pkt->data; + header = (struct ieee80211_hdr *)(pkt->data + sizeof(*amsdu)); + len = le16_to_cpu(amsdu->byte_count); + rx_pkt_status = *(__le32 *)(pkt->data + sizeof(*amsdu) + len); + ampdu_status = iwlagn_translate_rx_status(priv, + le32_to_cpu(rx_pkt_status)); if ((unlikely(phy_res->cfg_phy_cnt > 20))) { IWL_DEBUG_DROP(priv, "dsp size out of range [0,20]: %d\n", @@ -1012,6 +988,8 @@ static int iwlagn_rx_reply_rx(struct iwl_priv *priv, rx_status.flag |= RX_FLAG_40MHZ; if (rate_n_flags & RATE_MCS_SGI_MSK) rx_status.flag |= RX_FLAG_SHORT_GI; + if (rate_n_flags & RATE_MCS_GF_MSK) + rx_status.flag |= RX_FLAG_HT_GF; iwlagn_pass_packet_to_mac80211(priv, header, len, ampdu_status, rxb, &rx_status); @@ -1124,8 +1102,6 @@ int iwl_rx_dispatch(struct iwl_op_mode *op_mode, struct iwl_rx_cmd_buffer *rxb, { struct iwl_rx_packet *pkt = rxb_addr(rxb); struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); - void (*pre_rx_handler)(struct iwl_priv *, - struct iwl_rx_cmd_buffer *); int err = 0; /* @@ -1135,19 +1111,19 @@ int iwl_rx_dispatch(struct iwl_op_mode *op_mode, struct iwl_rx_cmd_buffer *rxb, */ iwl_notification_wait_notify(&priv->notif_wait, pkt); - /* RX data may be forwarded to userspace (using pre_rx_handler) in one - * of two cases: the first, that the user owns the uCode through - * testmode - in such case the pre_rx_handler is set and no further - * processing takes place. The other case is when the user want to - * monitor the rx w/o affecting the regular flow - the pre_rx_handler - * will be set but the ownership flag != IWL_OWNERSHIP_TM and the flow +#ifdef CONFIG_IWLWIFI_DEVICE_TESTMODE + /* + * RX data may be forwarded to userspace in one + * of two cases: the user owns the fw through testmode or when + * the user requested to monitor the rx w/o affecting the regular flow. + * In these cases the iwl_test object will handle forwarding the rx + * data to user space. + * Note that if the ownership flag != IWL_OWNERSHIP_TM the flow * continues. - * We need to use ACCESS_ONCE to prevent a case where the handler - * changes between the check and the call. 
*/ - pre_rx_handler = ACCESS_ONCE(priv->pre_rx_handler); - if (pre_rx_handler) - pre_rx_handler(priv, rxb); + iwl_test_rx(&priv->tst, rxb); +#endif + if (priv->ucode_owner != IWL_OWNERSHIP_TM) { /* Based on type of command response or notification, * handle those that need handling via function in diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-rxon.c b/drivers/net/wireless/iwlwifi/dvm/rxon.c index 0a3aa7c83003..10896393e5a0 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-rxon.c +++ b/drivers/net/wireless/iwlwifi/dvm/rxon.c @@ -25,11 +25,11 @@ *****************************************************************************/ #include <linux/etherdevice.h> -#include "iwl-dev.h" -#include "iwl-agn.h" -#include "iwl-agn-calib.h" #include "iwl-trans.h" #include "iwl-modparams.h" +#include "dev.h" +#include "agn.h" +#include "calib.h" /* * initialize rxon structure with default values from eeprom @@ -37,8 +37,6 @@ void iwl_connection_init_rx_config(struct iwl_priv *priv, struct iwl_rxon_context *ctx) { - const struct iwl_channel_info *ch_info; - memset(&ctx->staging, 0, sizeof(ctx->staging)); if (!ctx->vif) { @@ -80,14 +78,8 @@ void iwl_connection_init_rx_config(struct iwl_priv *priv, ctx->staging.flags |= RXON_FLG_SHORT_PREAMBLE_MSK; #endif - ch_info = iwl_get_channel_info(priv, priv->band, - le16_to_cpu(ctx->active.channel)); - - if (!ch_info) - ch_info = &priv->channel_info[0]; - - ctx->staging.channel = cpu_to_le16(ch_info->channel); - priv->band = ch_info->band; + ctx->staging.channel = cpu_to_le16(priv->hw->conf.channel->hw_value); + priv->band = priv->hw->conf.channel->band; iwl_set_flags_for_band(priv, ctx, priv->band, ctx->vif); @@ -175,7 +167,8 @@ static int iwlagn_disconn_pan(struct iwl_priv *priv, return ret; } -void iwlagn_update_qos(struct iwl_priv *priv, struct iwl_rxon_context *ctx) +static void iwlagn_update_qos(struct iwl_priv *priv, + struct iwl_rxon_context *ctx) { int ret; @@ -202,8 +195,8 @@ void iwlagn_update_qos(struct iwl_priv *priv, struct iwl_rxon_context *ctx) IWL_DEBUG_QUIET_RFKILL(priv, "Failed to update QoS\n"); } -int iwlagn_update_beacon(struct iwl_priv *priv, - struct ieee80211_vif *vif) +static int iwlagn_update_beacon(struct iwl_priv *priv, + struct ieee80211_vif *vif) { lockdep_assert_held(&priv->mutex); @@ -215,7 +208,7 @@ int iwlagn_update_beacon(struct iwl_priv *priv, } static int iwlagn_send_rxon_assoc(struct iwl_priv *priv, - struct iwl_rxon_context *ctx) + struct iwl_rxon_context *ctx) { int ret = 0; struct iwl_rxon_assoc_cmd rxon_assoc; @@ -427,10 +420,10 @@ static int iwl_set_tx_power(struct iwl_priv *priv, s8 tx_power, bool force) return -EINVAL; } - if (tx_power > priv->tx_power_device_lmt) { + if (tx_power > DIV_ROUND_UP(priv->eeprom_data->max_tx_pwr_half_dbm, 2)) { IWL_WARN(priv, "Requested user TXPOWER %d above upper limit %d.\n", - tx_power, priv->tx_power_device_lmt); + tx_power, priv->eeprom_data->max_tx_pwr_half_dbm); return -EINVAL; } @@ -863,8 +856,8 @@ static int iwl_check_rxon_cmd(struct iwl_priv *priv, * or is clearing the RXON_FILTER_ASSOC_MSK, then return 1 to indicate that * a new tune (full RXON command, rather than RXON_ASSOC cmd) is required. 
*/ -int iwl_full_rxon_required(struct iwl_priv *priv, - struct iwl_rxon_context *ctx) +static int iwl_full_rxon_required(struct iwl_priv *priv, + struct iwl_rxon_context *ctx) { const struct iwl_rxon_cmd *staging = &ctx->staging; const struct iwl_rxon_cmd *active = &ctx->active; @@ -1189,7 +1182,6 @@ int iwlagn_mac_config(struct ieee80211_hw *hw, u32 changed) struct iwl_rxon_context *ctx; struct ieee80211_conf *conf = &hw->conf; struct ieee80211_channel *channel = conf->channel; - const struct iwl_channel_info *ch_info; int ret = 0; IWL_DEBUG_MAC80211(priv, "enter: changed %#x\n", changed); @@ -1223,14 +1215,6 @@ int iwlagn_mac_config(struct ieee80211_hw *hw, u32 changed) } if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { - ch_info = iwl_get_channel_info(priv, channel->band, - channel->hw_value); - if (!is_channel_valid(ch_info)) { - IWL_DEBUG_MAC80211(priv, "leave - invalid channel\n"); - ret = -EINVAL; - goto out; - } - for_each_context(priv, ctx) { /* Configure HT40 channels */ if (ctx->ht.enabled != conf_is_ht(conf)) @@ -1294,9 +1278,9 @@ int iwlagn_mac_config(struct ieee80211_hw *hw, u32 changed) return ret; } -void iwlagn_check_needed_chains(struct iwl_priv *priv, - struct iwl_rxon_context *ctx, - struct ieee80211_bss_conf *bss_conf) +static void iwlagn_check_needed_chains(struct iwl_priv *priv, + struct iwl_rxon_context *ctx, + struct ieee80211_bss_conf *bss_conf) { struct ieee80211_vif *vif = ctx->vif; struct iwl_rxon_context *tmp; @@ -1388,7 +1372,7 @@ void iwlagn_check_needed_chains(struct iwl_priv *priv, ht_conf->single_chain_sufficient = !need_multiple; } -void iwlagn_chain_noise_reset(struct iwl_priv *priv) +static void iwlagn_chain_noise_reset(struct iwl_priv *priv) { struct iwl_chain_noise_data *data = &priv->chain_noise_data; int ret; @@ -1463,7 +1447,7 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw, if (changes & BSS_CHANGED_ASSOC) { if (bss_conf->assoc) { - priv->timestamp = bss_conf->last_tsf; + priv->timestamp = bss_conf->sync_tsf; ctx->staging.filter_flags |= RXON_FILTER_ASSOC_MSK; } else { /* diff --git a/drivers/net/wireless/iwlwifi/iwl-scan.c b/drivers/net/wireless/iwlwifi/dvm/scan.c index 031d8e21f82f..e3467fa86899 100644 --- a/drivers/net/wireless/iwlwifi/iwl-scan.c +++ b/drivers/net/wireless/iwlwifi/dvm/scan.c @@ -30,11 +30,8 @@ #include <linux/etherdevice.h> #include <net/mac80211.h> -#include "iwl-eeprom.h" -#include "iwl-dev.h" -#include "iwl-io.h" -#include "iwl-agn.h" -#include "iwl-trans.h" +#include "dev.h" +#include "agn.h" /* For active scan, listen ACTIVE_DWELL_TIME (msec) on each channel after * sending probe req. 
This should be set long enough to hear probe responses @@ -54,6 +51,9 @@ #define IWL_CHANNEL_TUNE_TIME 5 #define MAX_SCAN_CHANNEL 50 +/* For reset radio, need minimal dwell time only */ +#define IWL_RADIO_RESET_DWELL_TIME 5 + static int iwl_send_scan_abort(struct iwl_priv *priv) { int ret; @@ -67,7 +67,6 @@ static int iwl_send_scan_abort(struct iwl_priv *priv) * to receive scan abort command or it does not perform * hardware scan currently */ if (!test_bit(STATUS_READY, &priv->status) || - !test_bit(STATUS_GEO_CONFIGURED, &priv->status) || !test_bit(STATUS_SCAN_HW, &priv->status) || test_bit(STATUS_FW_ERROR, &priv->status)) return -EIO; @@ -101,11 +100,8 @@ static void iwl_complete_scan(struct iwl_priv *priv, bool aborted) ieee80211_scan_completed(priv->hw, aborted); } - if (priv->scan_type == IWL_SCAN_ROC) { - ieee80211_remain_on_channel_expired(priv->hw); - priv->hw_roc_channel = NULL; - schedule_delayed_work(&priv->hw_roc_disable_work, 10 * HZ); - } + if (priv->scan_type == IWL_SCAN_ROC) + iwl_scan_roc_expired(priv); priv->scan_type = IWL_SCAN_NORMAL; priv->scan_vif = NULL; @@ -134,11 +130,8 @@ static void iwl_process_scan_complete(struct iwl_priv *priv) goto out_settings; } - if (priv->scan_type == IWL_SCAN_ROC) { - ieee80211_remain_on_channel_expired(priv->hw); - priv->hw_roc_channel = NULL; - schedule_delayed_work(&priv->hw_roc_disable_work, 10 * HZ); - } + if (priv->scan_type == IWL_SCAN_ROC) + iwl_scan_roc_expired(priv); if (priv->scan_type != IWL_SCAN_NORMAL && !aborted) { int err; @@ -403,15 +396,21 @@ static u16 iwl_get_active_dwell_time(struct iwl_priv *priv, static u16 iwl_limit_dwell(struct iwl_priv *priv, u16 dwell_time) { struct iwl_rxon_context *ctx; + int limits[NUM_IWL_RXON_CTX] = {}; + int n_active = 0; + u16 limit; + + BUILD_BUG_ON(NUM_IWL_RXON_CTX != 2); /* * If we're associated, we clamp the dwell time 98% - * of the smallest beacon interval (minus 2 * channel - * tune time) + * of the beacon interval (minus 2 * channel tune time) + * If both contexts are active, we have to restrict to + * 1/2 of the minimum of them, because they might be in + * lock-step with the time inbetween only half of what + * time we'd have in each of them. 
*/ for_each_context(priv, ctx) { - u16 value; - switch (ctx->staging.dev_type) { case RXON_DEV_TYPE_P2P: /* no timing constraints */ @@ -431,14 +430,25 @@ static u16 iwl_limit_dwell(struct iwl_priv *priv, u16 dwell_time) break; } - value = ctx->beacon_int; - if (!value) - value = IWL_PASSIVE_DWELL_BASE; - value = (value * 98) / 100 - IWL_CHANNEL_TUNE_TIME * 2; - dwell_time = min(value, dwell_time); + limits[n_active++] = ctx->beacon_int ?: IWL_PASSIVE_DWELL_BASE; } - return dwell_time; + switch (n_active) { + case 0: + return dwell_time; + case 2: + limit = (limits[1] * 98) / 100 - IWL_CHANNEL_TUNE_TIME * 2; + limit /= 2; + dwell_time = min(limit, dwell_time); + /* fall through to limit further */ + case 1: + limit = (limits[0] * 98) / 100 - IWL_CHANNEL_TUNE_TIME * 2; + limit /= n_active; + return min(limit, dwell_time); + default: + WARN_ON_ONCE(1); + return dwell_time; + } } static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, @@ -453,27 +463,17 @@ static u16 iwl_get_passive_dwell_time(struct iwl_priv *priv, /* Return valid, unused, channel for a passive scan to reset the RF */ static u8 iwl_get_single_channel_number(struct iwl_priv *priv, - enum ieee80211_band band) + enum ieee80211_band band) { - const struct iwl_channel_info *ch_info; - int i; - u8 channel = 0; - u8 min, max; + struct ieee80211_supported_band *sband = priv->hw->wiphy->bands[band]; struct iwl_rxon_context *ctx; + int i; - if (band == IEEE80211_BAND_5GHZ) { - min = 14; - max = priv->channel_count; - } else { - min = 0; - max = 14; - } - - for (i = min; i < max; i++) { + for (i = 0; i < sband->n_channels; i++) { bool busy = false; for_each_context(priv, ctx) { - busy = priv->channel_info[i].channel == + busy = sband->channels[i].hw_value == le16_to_cpu(ctx->staging.channel); if (busy) break; @@ -482,54 +482,46 @@ static u8 iwl_get_single_channel_number(struct iwl_priv *priv, if (busy) continue; - channel = priv->channel_info[i].channel; - ch_info = iwl_get_channel_info(priv, band, channel); - if (is_channel_valid(ch_info)) - break; + if (!(sband->channels[i].flags & IEEE80211_CHAN_DISABLED)) + return sband->channels[i].hw_value; } - return channel; + return 0; } -static int iwl_get_single_channel_for_scan(struct iwl_priv *priv, - struct ieee80211_vif *vif, - enum ieee80211_band band, - struct iwl_scan_channel *scan_ch) +static int iwl_get_channel_for_reset_scan(struct iwl_priv *priv, + struct ieee80211_vif *vif, + enum ieee80211_band band, + struct iwl_scan_channel *scan_ch) { const struct ieee80211_supported_band *sband; - u16 passive_dwell = 0; - u16 active_dwell = 0; - int added = 0; - u16 channel = 0; + u16 channel; sband = iwl_get_hw_mode(priv, band); if (!sband) { IWL_ERR(priv, "invalid band\n"); - return added; + return 0; } - active_dwell = iwl_get_active_dwell_time(priv, band, 0); - passive_dwell = iwl_get_passive_dwell_time(priv, band); - - if (passive_dwell <= active_dwell) - passive_dwell = active_dwell + 1; - channel = iwl_get_single_channel_number(priv, band); if (channel) { scan_ch->channel = cpu_to_le16(channel); scan_ch->type = SCAN_CHANNEL_TYPE_PASSIVE; - scan_ch->active_dwell = cpu_to_le16(active_dwell); - scan_ch->passive_dwell = cpu_to_le16(passive_dwell); + scan_ch->active_dwell = + cpu_to_le16(IWL_RADIO_RESET_DWELL_TIME); + scan_ch->passive_dwell = + cpu_to_le16(IWL_RADIO_RESET_DWELL_TIME); /* Set txpower levels to defaults */ scan_ch->dsp_atten = 110; if (band == IEEE80211_BAND_5GHZ) scan_ch->tx_gain = ((1 << 5) | (3 << 3)) | 3; else scan_ch->tx_gain = ((1 << 5) | (5 << 3)); - added++; - } 
else - IWL_ERR(priv, "no valid channel found\n"); - return added; + return 1; + } + + IWL_ERR(priv, "no valid channel found\n"); + return 0; } static int iwl_get_channels_for_scan(struct iwl_priv *priv, @@ -540,7 +532,6 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, { struct ieee80211_channel *chan; const struct ieee80211_supported_band *sband; - const struct iwl_channel_info *ch_info; u16 passive_dwell = 0; u16 active_dwell = 0; int added, i; @@ -565,16 +556,7 @@ static int iwl_get_channels_for_scan(struct iwl_priv *priv, channel = chan->hw_value; scan_ch->channel = cpu_to_le16(channel); - ch_info = iwl_get_channel_info(priv, band, channel); - if (!is_channel_valid(ch_info)) { - IWL_DEBUG_SCAN(priv, - "Channel %d is INVALID for this band.\n", - channel); - continue; - } - - if (!is_active || is_channel_passive(ch_info) || - (chan->flags & IEEE80211_CHAN_PASSIVE_SCAN)) + if (!is_active || (chan->flags & IEEE80211_CHAN_PASSIVE_SCAN)) scan_ch->type = SCAN_CHANNEL_TYPE_PASSIVE; else scan_ch->type = SCAN_CHANNEL_TYPE_ACTIVE; @@ -678,12 +660,12 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) u16 rx_chain = 0; enum ieee80211_band band; u8 n_probes = 0; - u8 rx_ant = priv->hw_params.valid_rx_ant; + u8 rx_ant = priv->eeprom_data->valid_rx_ant; u8 rate; bool is_active = false; int chan_mod; u8 active_chains; - u8 scan_tx_antennas = priv->hw_params.valid_tx_ant; + u8 scan_tx_antennas = priv->eeprom_data->valid_tx_ant; int ret; int scan_cmd_size = sizeof(struct iwl_scan_cmd) + MAX_SCAN_CHANNEL * sizeof(struct iwl_scan_channel) + @@ -755,6 +737,12 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) switch (priv->scan_type) { case IWL_SCAN_RADIO_RESET: IWL_DEBUG_SCAN(priv, "Start internal passive scan.\n"); + /* + * Override quiet time as firmware checks that active + * dwell is >= quiet; since we use passive scan it'll + * not actually be used. 
+ */ + scan->quiet_time = cpu_to_le16(IWL_RADIO_RESET_DWELL_TIME); break; case IWL_SCAN_NORMAL: if (priv->scan_request->n_ssids) { @@ -893,7 +881,7 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) /* MIMO is not used here, but value is required */ rx_chain |= - priv->hw_params.valid_rx_ant << RXON_RX_CHAIN_VALID_POS; + priv->eeprom_data->valid_rx_ant << RXON_RX_CHAIN_VALID_POS; rx_chain |= rx_ant << RXON_RX_CHAIN_FORCE_MIMO_SEL_POS; rx_chain |= rx_ant << RXON_RX_CHAIN_FORCE_SEL_POS; rx_chain |= 0x1 << RXON_RX_CHAIN_DRIVER_FORCE_POS; @@ -928,7 +916,7 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) switch (priv->scan_type) { case IWL_SCAN_RADIO_RESET: scan->channel_count = - iwl_get_single_channel_for_scan(priv, vif, band, + iwl_get_channel_for_reset_scan(priv, vif, band, (void *)&scan->data[cmd_len]); break; case IWL_SCAN_NORMAL: @@ -994,8 +982,10 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) set_bit(STATUS_SCAN_HW, &priv->status); ret = iwlagn_set_pan_params(priv); - if (ret) + if (ret) { + clear_bit(STATUS_SCAN_HW, &priv->status); return ret; + } ret = iwl_dvm_send_cmd(priv, &cmd); if (ret) { @@ -1008,7 +998,7 @@ static int iwlagn_request_scan(struct iwl_priv *priv, struct ieee80211_vif *vif) void iwl_init_scan_params(struct iwl_priv *priv) { - u8 ant_idx = fls(priv->hw_params.valid_tx_ant) - 1; + u8 ant_idx = fls(priv->eeprom_data->valid_tx_ant) - 1; if (!priv->scan_tx_ant[IEEE80211_BAND_5GHZ]) priv->scan_tx_ant[IEEE80211_BAND_5GHZ] = ant_idx; if (!priv->scan_tx_ant[IEEE80211_BAND_2GHZ]) @@ -1158,3 +1148,40 @@ void iwl_cancel_scan_deferred_work(struct iwl_priv *priv) mutex_unlock(&priv->mutex); } } + +void iwl_scan_roc_expired(struct iwl_priv *priv) +{ + /* + * The status bit should be set here, to prevent a race + * where the atomic_read returns 1, but before the execution continues + * iwl_scan_offchannel_skb_status() checks if the status bit is set + */ + set_bit(STATUS_SCAN_ROC_EXPIRED, &priv->status); + + if (atomic_read(&priv->num_aux_in_flight) == 0) { + ieee80211_remain_on_channel_expired(priv->hw); + priv->hw_roc_channel = NULL; + schedule_delayed_work(&priv->hw_roc_disable_work, + 10 * HZ); + + clear_bit(STATUS_SCAN_ROC_EXPIRED, &priv->status); + } else { + IWL_DEBUG_SCAN(priv, "ROC done with %d frames in aux\n", + atomic_read(&priv->num_aux_in_flight)); + } +} + +void iwl_scan_offchannel_skb(struct iwl_priv *priv) +{ + WARN_ON(!priv->hw_roc_start_notified); + atomic_inc(&priv->num_aux_in_flight); +} + +void iwl_scan_offchannel_skb_status(struct iwl_priv *priv) +{ + if (atomic_dec_return(&priv->num_aux_in_flight) == 0 && + test_bit(STATUS_SCAN_ROC_EXPIRED, &priv->status)) { + IWL_DEBUG_SCAN(priv, "0 aux frames. 
Calling ROC expired\n"); + iwl_scan_roc_expired(priv); + } +} diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-sta.c b/drivers/net/wireless/iwlwifi/dvm/sta.c index eb6a8eaf42fc..b29b798f7550 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-sta.c +++ b/drivers/net/wireless/iwlwifi/dvm/sta.c @@ -28,10 +28,9 @@ *****************************************************************************/ #include <linux/etherdevice.h> #include <net/mac80211.h> - -#include "iwl-dev.h" -#include "iwl-agn.h" #include "iwl-trans.h" +#include "dev.h" +#include "agn.h" const u8 iwl_bcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; @@ -171,26 +170,6 @@ int iwl_send_add_sta(struct iwl_priv *priv, return cmd.handler_status; } -static bool iwl_is_channel_extension(struct iwl_priv *priv, - enum ieee80211_band band, - u16 channel, u8 extension_chan_offset) -{ - const struct iwl_channel_info *ch_info; - - ch_info = iwl_get_channel_info(priv, band, channel); - if (!is_channel_valid(ch_info)) - return false; - - if (extension_chan_offset == IEEE80211_HT_PARAM_CHA_SEC_ABOVE) - return !(ch_info->ht40_extension_channel & - IEEE80211_CHAN_NO_HT40PLUS); - else if (extension_chan_offset == IEEE80211_HT_PARAM_CHA_SEC_BELOW) - return !(ch_info->ht40_extension_channel & - IEEE80211_CHAN_NO_HT40MINUS); - - return false; -} - bool iwl_is_ht40_tx_allowed(struct iwl_priv *priv, struct iwl_rxon_context *ctx, struct ieee80211_sta_ht_cap *ht_cap) @@ -198,21 +177,25 @@ bool iwl_is_ht40_tx_allowed(struct iwl_priv *priv, if (!ctx->ht.enabled || !ctx->ht.is_40mhz) return false; +#ifdef CONFIG_IWLWIFI_DEBUGFS + if (priv->disable_ht40) + return false; +#endif + /* - * We do not check for IEEE80211_HT_CAP_SUP_WIDTH_20_40 - * the bit will not set if it is pure 40MHz case + * Remainder of this function checks ht_cap, but if it's + * NULL then we can do HT40 (special case for RXON) */ - if (ht_cap && !ht_cap->ht_supported) + if (!ht_cap) + return true; + + if (!ht_cap->ht_supported) return false; -#ifdef CONFIG_IWLWIFI_DEBUGFS - if (priv->disable_ht40) + if (!(ht_cap->cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40)) return false; -#endif - return iwl_is_channel_extension(priv, priv->band, - le16_to_cpu(ctx->staging.channel), - ctx->ht.extension_chan_offset); + return true; } static void iwl_sta_calc_ht_flags(struct iwl_priv *priv, @@ -236,6 +219,7 @@ static void iwl_sta_calc_ht_flags(struct iwl_priv *priv, mimo_ps_mode = (sta_ht_inf->cap & IEEE80211_HT_CAP_SM_PS) >> 2; IWL_DEBUG_INFO(priv, "STA %pM SM PS mode: %s\n", + sta->addr, (mimo_ps_mode == WLAN_HT_CAP_SM_PS_STATIC) ? "static" : (mimo_ps_mode == WLAN_HT_CAP_SM_PS_DYNAMIC) ? 
@@ -649,23 +633,23 @@ static void iwl_sta_fill_lq(struct iwl_priv *priv, struct iwl_rxon_context *ctx, if (r >= IWL_FIRST_CCK_RATE && r <= IWL_LAST_CCK_RATE) rate_flags |= RATE_MCS_CCK_MSK; - rate_flags |= first_antenna(priv->hw_params.valid_tx_ant) << + rate_flags |= first_antenna(priv->eeprom_data->valid_tx_ant) << RATE_MCS_ANT_POS; rate_n_flags = iwl_hw_set_rate_n_flags(iwl_rates[r].plcp, rate_flags); for (i = 0; i < LINK_QUAL_MAX_RETRY_NUM; i++) link_cmd->rs_table[i].rate_n_flags = rate_n_flags; link_cmd->general_params.single_stream_ant_msk = - first_antenna(priv->hw_params.valid_tx_ant); + first_antenna(priv->eeprom_data->valid_tx_ant); link_cmd->general_params.dual_stream_ant_msk = - priv->hw_params.valid_tx_ant & - ~first_antenna(priv->hw_params.valid_tx_ant); + priv->eeprom_data->valid_tx_ant & + ~first_antenna(priv->eeprom_data->valid_tx_ant); if (!link_cmd->general_params.dual_stream_ant_msk) { link_cmd->general_params.dual_stream_ant_msk = ANT_AB; - } else if (num_of_ant(priv->hw_params.valid_tx_ant) == 2) { + } else if (num_of_ant(priv->eeprom_data->valid_tx_ant) == 2) { link_cmd->general_params.dual_stream_ant_msk = - priv->hw_params.valid_tx_ant; + priv->eeprom_data->valid_tx_ant; } link_cmd->agg_params.agg_dis_start_th = diff --git a/drivers/net/wireless/iwlwifi/dvm/testmode.c b/drivers/net/wireless/iwlwifi/dvm/testmode.c new file mode 100644 index 000000000000..57b918ce3b5f --- /dev/null +++ b/drivers/net/wireless/iwlwifi/dvm/testmode.c @@ -0,0 +1,471 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + *****************************************************************************/ + +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/dma-mapping.h> +#include <net/net_namespace.h> +#include <linux/netdevice.h> +#include <net/cfg80211.h> +#include <net/mac80211.h> +#include <net/netlink.h> + +#include "iwl-debug.h" +#include "iwl-trans.h" +#include "dev.h" +#include "agn.h" +#include "iwl-test.h" +#include "iwl-testmode.h" + +static int iwl_testmode_send_cmd(struct iwl_op_mode *op_mode, + struct iwl_host_cmd *cmd) +{ + struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); + return iwl_dvm_send_cmd(priv, cmd); +} + +static bool iwl_testmode_valid_hw_addr(u32 addr) +{ + if (iwlagn_hw_valid_rtc_data_addr(addr)) + return true; + + if (IWLAGN_RTC_INST_LOWER_BOUND <= addr && + addr < IWLAGN_RTC_INST_UPPER_BOUND) + return true; + + return false; +} + +static u32 iwl_testmode_get_fw_ver(struct iwl_op_mode *op_mode) +{ + struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); + return priv->fw->ucode_ver; +} + +static struct sk_buff* +iwl_testmode_alloc_reply(struct iwl_op_mode *op_mode, int len) +{ + struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); + return cfg80211_testmode_alloc_reply_skb(priv->hw->wiphy, len); +} + +static int iwl_testmode_reply(struct iwl_op_mode *op_mode, struct sk_buff *skb) +{ + return cfg80211_testmode_reply(skb); +} + +static struct sk_buff *iwl_testmode_alloc_event(struct iwl_op_mode *op_mode, + int len) +{ + struct iwl_priv *priv = IWL_OP_MODE_GET_DVM(op_mode); + return cfg80211_testmode_alloc_event_skb(priv->hw->wiphy, len, + GFP_ATOMIC); +} + +static void iwl_testmode_event(struct iwl_op_mode *op_mode, struct sk_buff *skb) +{ + return cfg80211_testmode_event(skb, GFP_ATOMIC); +} + +static struct iwl_test_ops tst_ops = { + .send_cmd = iwl_testmode_send_cmd, + .valid_hw_addr = iwl_testmode_valid_hw_addr, + .get_fw_ver = iwl_testmode_get_fw_ver, + .alloc_reply = iwl_testmode_alloc_reply, + .reply = iwl_testmode_reply, + .alloc_event = iwl_testmode_alloc_event, + .event = iwl_testmode_event, +}; + +void iwl_testmode_init(struct iwl_priv *priv) +{ + iwl_test_init(&priv->tst, priv->trans, &tst_ops); +} + +void iwl_testmode_free(struct iwl_priv *priv) +{ + iwl_test_free(&priv->tst); +} + +static int iwl_testmode_cfg_init_calib(struct iwl_priv *priv) +{ + struct iwl_notification_wait calib_wait; + static const u8 calib_complete[] = { + CALIBRATION_COMPLETE_NOTIFICATION + }; + int ret; + + iwl_init_notification_wait(&priv->notif_wait, 
&calib_wait, + calib_complete, ARRAY_SIZE(calib_complete), + NULL, NULL); + ret = iwl_init_alive_start(priv); + if (ret) { + IWL_ERR(priv, "Fail init calibration: %d\n", ret); + goto cfg_init_calib_error; + } + + ret = iwl_wait_notification(&priv->notif_wait, &calib_wait, 2 * HZ); + if (ret) + IWL_ERR(priv, "Error detecting" + " CALIBRATION_COMPLETE_NOTIFICATION: %d\n", ret); + return ret; + +cfg_init_calib_error: + iwl_remove_notification(&priv->notif_wait, &calib_wait); + return ret; +} + +/* + * This function handles the user application commands for driver. + * + * It retrieves command ID carried with IWL_TM_ATTR_COMMAND and calls to the + * handlers respectively. + * + * If it's an unknown commdn ID, -ENOSYS is replied; otherwise, the returned + * value of the actual command execution is replied to the user application. + * + * If there's any message responding to the user space, IWL_TM_ATTR_SYNC_RSP + * is used for carry the message while IWL_TM_ATTR_COMMAND must set to + * IWL_TM_CMD_DEV2APP_SYNC_RSP. + * + * @hw: ieee80211_hw object that represents the device + * @tb: gnl message fields from the user space + */ +static int iwl_testmode_driver(struct ieee80211_hw *hw, struct nlattr **tb) +{ + struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); + struct iwl_trans *trans = priv->trans; + struct sk_buff *skb; + unsigned char *rsp_data_ptr = NULL; + int status = 0, rsp_data_len = 0; + u32 inst_size = 0, data_size = 0; + const struct fw_img *img; + + switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { + case IWL_TM_CMD_APP2DEV_GET_DEVICENAME: + rsp_data_ptr = (unsigned char *)priv->cfg->name; + rsp_data_len = strlen(priv->cfg->name); + skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, + rsp_data_len + 20); + if (!skb) { + IWL_ERR(priv, "Memory allocation fail\n"); + return -ENOMEM; + } + if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, + IWL_TM_CMD_DEV2APP_SYNC_RSP) || + nla_put(skb, IWL_TM_ATTR_SYNC_RSP, + rsp_data_len, rsp_data_ptr)) + goto nla_put_failure; + status = cfg80211_testmode_reply(skb); + if (status < 0) + IWL_ERR(priv, "Error sending msg : %d\n", status); + break; + + case IWL_TM_CMD_APP2DEV_LOAD_INIT_FW: + status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_INIT); + if (status) + IWL_ERR(priv, "Error loading init ucode: %d\n", status); + break; + + case IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB: + iwl_testmode_cfg_init_calib(priv); + priv->ucode_loaded = false; + iwl_trans_stop_device(trans); + break; + + case IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW: + status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_REGULAR); + if (status) { + IWL_ERR(priv, + "Error loading runtime ucode: %d\n", status); + break; + } + status = iwl_alive_start(priv); + if (status) + IWL_ERR(priv, + "Error starting the device: %d\n", status); + break; + + case IWL_TM_CMD_APP2DEV_LOAD_WOWLAN_FW: + iwl_scan_cancel_timeout(priv, 200); + priv->ucode_loaded = false; + iwl_trans_stop_device(trans); + status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_WOWLAN); + if (status) { + IWL_ERR(priv, + "Error loading WOWLAN ucode: %d\n", status); + break; + } + status = iwl_alive_start(priv); + if (status) + IWL_ERR(priv, + "Error starting the device: %d\n", status); + break; + + case IWL_TM_CMD_APP2DEV_GET_EEPROM: + if (priv->eeprom_blob) { + skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, + priv->eeprom_blob_size + 20); + if (!skb) { + IWL_ERR(priv, "Memory allocation fail\n"); + return -ENOMEM; + } + if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, + IWL_TM_CMD_DEV2APP_EEPROM_RSP) || + nla_put(skb, IWL_TM_ATTR_EEPROM, + priv->eeprom_blob_size, + 
priv->eeprom_blob)) + goto nla_put_failure; + status = cfg80211_testmode_reply(skb); + if (status < 0) + IWL_ERR(priv, "Error sending msg : %d\n", + status); + } else + return -ENODATA; + break; + + case IWL_TM_CMD_APP2DEV_FIXRATE_REQ: + if (!tb[IWL_TM_ATTR_FIXRATE]) { + IWL_ERR(priv, "Missing fixrate setting\n"); + return -ENOMSG; + } + priv->tm_fixed_rate = nla_get_u32(tb[IWL_TM_ATTR_FIXRATE]); + break; + + case IWL_TM_CMD_APP2DEV_GET_FW_INFO: + skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, 20 + 8); + if (!skb) { + IWL_ERR(priv, "Memory allocation fail\n"); + return -ENOMEM; + } + if (!priv->ucode_loaded) { + IWL_ERR(priv, "No uCode has not been loaded\n"); + return -EINVAL; + } else { + img = &priv->fw->img[priv->cur_ucode]; + inst_size = img->sec[IWL_UCODE_SECTION_INST].len; + data_size = img->sec[IWL_UCODE_SECTION_DATA].len; + } + if (nla_put_u32(skb, IWL_TM_ATTR_FW_TYPE, priv->cur_ucode) || + nla_put_u32(skb, IWL_TM_ATTR_FW_INST_SIZE, inst_size) || + nla_put_u32(skb, IWL_TM_ATTR_FW_DATA_SIZE, data_size)) + goto nla_put_failure; + status = cfg80211_testmode_reply(skb); + if (status < 0) + IWL_ERR(priv, "Error sending msg : %d\n", status); + break; + + default: + IWL_ERR(priv, "Unknown testmode driver command ID\n"); + return -ENOSYS; + } + return status; + +nla_put_failure: + kfree_skb(skb); + return -EMSGSIZE; +} + +/* + * This function handles the user application switch ucode ownership. + * + * It retrieves the mandatory fields IWL_TM_ATTR_UCODE_OWNER and + * decide who the current owner of the uCode + * + * If the current owner is OWNERSHIP_TM, then the only host command + * can deliver to uCode is from testmode, all the other host commands + * will dropped. + * + * default driver is the owner of uCode in normal operational mode + * + * @hw: ieee80211_hw object that represents the device + * @tb: gnl message fields from the user space + */ +static int iwl_testmode_ownership(struct ieee80211_hw *hw, struct nlattr **tb) +{ + struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); + u8 owner; + + if (!tb[IWL_TM_ATTR_UCODE_OWNER]) { + IWL_ERR(priv, "Missing ucode owner\n"); + return -ENOMSG; + } + + owner = nla_get_u8(tb[IWL_TM_ATTR_UCODE_OWNER]); + if (owner == IWL_OWNERSHIP_DRIVER) { + priv->ucode_owner = owner; + iwl_test_enable_notifications(&priv->tst, false); + } else if (owner == IWL_OWNERSHIP_TM) { + priv->ucode_owner = owner; + iwl_test_enable_notifications(&priv->tst, true); + } else { + IWL_ERR(priv, "Invalid owner\n"); + return -EINVAL; + } + return 0; +} + +/* The testmode gnl message handler that takes the gnl message from the + * user space and parses it per the policy iwl_testmode_gnl_msg_policy, then + * invoke the corresponding handlers. + * + * This function is invoked when there is user space application sending + * gnl message through the testmode tunnel NL80211_CMD_TESTMODE regulated + * by nl80211. + * + * It retrieves the mandatory field, IWL_TM_ATTR_COMMAND, before + * dispatching it to the corresponding handler. + * + * If IWL_TM_ATTR_COMMAND is missing, -ENOMSG is replied to user application; + * -ENOSYS is replied to the user application if the command is unknown; + * Otherwise, the command is dispatched to the respective handler. 
+ * + * @hw: ieee80211_hw object that represents the device + * @data: pointer to user space message + * @len: length in byte of @data + */ +int iwlagn_mac_testmode_cmd(struct ieee80211_hw *hw, void *data, int len) +{ + struct nlattr *tb[IWL_TM_ATTR_MAX]; + struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); + int result; + + result = iwl_test_parse(&priv->tst, tb, data, len); + if (result) + return result; + + /* in case multiple accesses to the device happens */ + mutex_lock(&priv->mutex); + switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { + case IWL_TM_CMD_APP2DEV_UCODE: + case IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32: + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32: + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8: + case IWL_TM_CMD_APP2DEV_BEGIN_TRACE: + case IWL_TM_CMD_APP2DEV_END_TRACE: + case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_READ: + case IWL_TM_CMD_APP2DEV_NOTIFICATIONS: + case IWL_TM_CMD_APP2DEV_GET_FW_VERSION: + case IWL_TM_CMD_APP2DEV_GET_DEVICE_ID: + case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_WRITE: + result = iwl_test_handle_cmd(&priv->tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_GET_DEVICENAME: + case IWL_TM_CMD_APP2DEV_LOAD_INIT_FW: + case IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB: + case IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW: + case IWL_TM_CMD_APP2DEV_GET_EEPROM: + case IWL_TM_CMD_APP2DEV_FIXRATE_REQ: + case IWL_TM_CMD_APP2DEV_LOAD_WOWLAN_FW: + case IWL_TM_CMD_APP2DEV_GET_FW_INFO: + IWL_DEBUG_INFO(priv, "testmode cmd to driver\n"); + result = iwl_testmode_driver(hw, tb); + break; + + case IWL_TM_CMD_APP2DEV_OWNERSHIP: + IWL_DEBUG_INFO(priv, "testmode change uCode ownership\n"); + result = iwl_testmode_ownership(hw, tb); + break; + + default: + IWL_ERR(priv, "Unknown testmode command\n"); + result = -ENOSYS; + break; + } + mutex_unlock(&priv->mutex); + + if (result) + IWL_ERR(priv, "Test cmd failed result=%d\n", result); + return result; +} + +int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, struct sk_buff *skb, + struct netlink_callback *cb, + void *data, int len) +{ + struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); + int result; + u32 cmd; + + if (cb->args[3]) { + /* offset by 1 since commands start at 0 */ + cmd = cb->args[3] - 1; + } else { + struct nlattr *tb[IWL_TM_ATTR_MAX]; + + result = iwl_test_parse(&priv->tst, tb, data, len); + if (result) + return result; + + cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); + cb->args[3] = cmd + 1; + } + + /* in case multiple accesses to the device happens */ + mutex_lock(&priv->mutex); + result = iwl_test_dump(&priv->tst, cmd, skb, cb); + mutex_unlock(&priv->mutex); + return result; +} diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-tt.c b/drivers/net/wireless/iwlwifi/dvm/tt.c index a5cfe0aceedb..eb864433e59d 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-tt.c +++ b/drivers/net/wireless/iwlwifi/dvm/tt.c @@ -31,17 +31,14 @@ #include <linux/module.h> #include <linux/slab.h> #include <linux/init.h> - #include <net/mac80211.h> - -#include "iwl-agn.h" -#include "iwl-eeprom.h" -#include "iwl-dev.h" #include "iwl-io.h" -#include "iwl-commands.h" -#include "iwl-debug.h" -#include "iwl-agn-tt.h" #include "iwl-modparams.h" +#include "iwl-debug.h" +#include "agn.h" +#include "dev.h" +#include "commands.h" +#include "tt.h" /* default Thermal Throttling transaction table * Current state | Throttling Down | Throttling Up diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-tt.h b/drivers/net/wireless/iwlwifi/dvm/tt.h index 86bbf47501c1..44c7c8f30a2d 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-tt.h +++ b/drivers/net/wireless/iwlwifi/dvm/tt.h @@ -28,7 +28,7 
@@ #ifndef __iwl_tt_setting_h__ #define __iwl_tt_setting_h__ -#include "iwl-commands.h" +#include "commands.h" #define IWL_ABSOLUTE_ZERO 0 #define IWL_ABSOLUTE_MAX 0xFFFFFFFF diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-tx.c b/drivers/net/wireless/iwlwifi/dvm/tx.c index 3366e2e2f00f..5971a23aa47d 100644 --- a/drivers/net/wireless/iwlwifi/iwl-agn-tx.c +++ b/drivers/net/wireless/iwlwifi/dvm/tx.c @@ -32,12 +32,11 @@ #include <linux/init.h> #include <linux/sched.h> #include <linux/ieee80211.h> - -#include "iwl-dev.h" #include "iwl-io.h" -#include "iwl-agn-hw.h" -#include "iwl-agn.h" #include "iwl-trans.h" +#include "iwl-agn-hw.h" +#include "dev.h" +#include "agn.h" static const u8 tid_to_ac[] = { IEEE80211_AC_BE, @@ -187,7 +186,8 @@ static void iwlagn_tx_cmd_build_rate(struct iwl_priv *priv, rate_idx = info->control.rates[0].idx; if (info->control.rates[0].flags & IEEE80211_TX_RC_MCS || (rate_idx < 0) || (rate_idx > IWL_RATE_COUNT_LEGACY)) - rate_idx = rate_lowest_index(&priv->bands[info->band], + rate_idx = rate_lowest_index( + &priv->eeprom_data->bands[info->band], info->control.sta); /* For 5 GHZ band, remap mac80211 rate indices into driver indices */ if (info->band == IEEE80211_BAND_5GHZ) @@ -207,10 +207,11 @@ static void iwlagn_tx_cmd_build_rate(struct iwl_priv *priv, priv->bt_full_concurrent) { /* operated as 1x1 in full concurrency mode */ priv->mgmt_tx_ant = iwl_toggle_tx_ant(priv, priv->mgmt_tx_ant, - first_antenna(priv->hw_params.valid_tx_ant)); + first_antenna(priv->eeprom_data->valid_tx_ant)); } else - priv->mgmt_tx_ant = iwl_toggle_tx_ant(priv, priv->mgmt_tx_ant, - priv->hw_params.valid_tx_ant); + priv->mgmt_tx_ant = iwl_toggle_tx_ant( + priv, priv->mgmt_tx_ant, + priv->eeprom_data->valid_tx_ant); rate_flags |= iwl_ant_idx_to_flags(priv->mgmt_tx_ant); /* Set the rate in the TX cmd */ @@ -296,7 +297,7 @@ int iwlagn_tx_skb(struct iwl_priv *priv, struct sk_buff *skb) struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); struct iwl_station_priv *sta_priv = NULL; struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; - struct iwl_device_cmd *dev_cmd = NULL; + struct iwl_device_cmd *dev_cmd; struct iwl_tx_cmd *tx_cmd; __le16 fc; u8 hdr_len; @@ -378,7 +379,7 @@ int iwlagn_tx_skb(struct iwl_priv *priv, struct sk_buff *skb) if (info->flags & IEEE80211_TX_CTL_AMPDU) is_agg = true; - dev_cmd = kmem_cache_alloc(iwl_tx_cmd_pool, GFP_ATOMIC); + dev_cmd = iwl_trans_alloc_tx_cmd(priv->trans); if (unlikely(!dev_cmd)) goto drop_unlock_priv; @@ -402,6 +403,7 @@ int iwlagn_tx_skb(struct iwl_priv *priv, struct sk_buff *skb) info->driver_data[0] = ctx; info->driver_data[1] = dev_cmd; + /* From now on, we cannot access info->control */ spin_lock(&priv->sta_lock); @@ -486,11 +488,14 @@ int iwlagn_tx_skb(struct iwl_priv *priv, struct sk_buff *skb) if (sta_priv && sta_priv->client && !is_agg) atomic_inc(&sta_priv->pending_frames); + if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) + iwl_scan_offchannel_skb(priv); + return 0; drop_unlock_sta: if (dev_cmd) - kmem_cache_free(iwl_tx_cmd_pool, dev_cmd); + iwl_trans_free_tx_cmd(priv->trans, dev_cmd); spin_unlock(&priv->sta_lock); drop_unlock_priv: return -1; @@ -597,7 +602,7 @@ turn_off: * time, or we hadn't time to drain the AC queues. 
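/*
 * Minimal standalone sketch of round-robin antenna selection over a
 * validity bitmask, in the spirit of the iwl_toggle_tx_ant() calls in
 * the hunk above.  Illustration only: the driver's real helper is not
 * shown in this diff and may differ in detail.
 */
#include <stdio.h>
#include <stdint.h>

#define ANT_A (1 << 0)
#define ANT_B (1 << 1)
#define ANT_C (1 << 2)
#define NUM_ANTS 3

/* return the next antenna index (0..2) after 'cur' that is set in 'valid' */
static uint8_t toggle_tx_ant(uint8_t cur, uint8_t valid)
{
	uint8_t i, next = cur;

	if (!valid)
		return cur;

	for (i = 0; i < NUM_ANTS; i++) {
		next = (next + 1) % NUM_ANTS;
		if (valid & (1 << next))
			return next;
	}
	return cur;
}

int main(void)
{
	uint8_t ant = 0;

	/* full concurrency mode would instead pin to first_antenna(valid) */
	for (int i = 0; i < 4; i++) {
		ant = toggle_tx_ant(ant, ANT_A | ANT_C);
		printf("TX antenna: %c\n", 'A' + ant);	/* C A C A */
	}
	return 0;
}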
*/ if (agg_state == IWL_AGG_ON) - iwl_trans_tx_agg_disable(priv->trans, txq_id); + iwl_trans_txq_disable(priv->trans, txq_id); else IWL_DEBUG_TX_QUEUES(priv, "Don't disable tx agg: %d\n", agg_state); @@ -686,9 +691,8 @@ int iwlagn_tx_agg_oper(struct iwl_priv *priv, struct ieee80211_vif *vif, fifo = ctx->ac_to_fifo[tid_to_ac[tid]]; - iwl_trans_tx_agg_setup(priv->trans, q, fifo, - sta_priv->sta_id, tid, - buf_size, ssn); + iwl_trans_txq_enable(priv->trans, q, fifo, sta_priv->sta_id, tid, + buf_size, ssn); /* * If the limit is 0, then it wasn't initialised yet, @@ -753,8 +757,8 @@ static void iwlagn_check_ratid_empty(struct iwl_priv *priv, int sta_id, u8 tid) IWL_DEBUG_TX_QUEUES(priv, "Can continue DELBA flow ssn = next_recl =" " %d", tid_data->next_reclaimed); - iwl_trans_tx_agg_disable(priv->trans, - tid_data->agg.txq_id); + iwl_trans_txq_disable(priv->trans, + tid_data->agg.txq_id); iwlagn_dealloc_agg_txq(priv, tid_data->agg.txq_id); tid_data->agg.state = IWL_AGG_OFF; ieee80211_stop_tx_ba_cb_irqsafe(vif, addr, tid); @@ -1136,6 +1140,7 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, struct sk_buff *skb; struct iwl_rxon_context *ctx; bool is_agg = (txq_id >= IWLAGN_FIRST_AMPDU_QUEUE); + bool is_offchannel_skb; tid = (tx_resp->ra_tid & IWLAGN_TX_RES_TID_MSK) >> IWLAGN_TX_RES_TID_POS; @@ -1149,6 +1154,8 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, __skb_queue_head_init(&skbs); + is_offchannel_skb = false; + if (tx_resp->frame_count == 1) { u16 next_reclaimed = le16_to_cpu(tx_resp->seq_ctl); next_reclaimed = SEQ_TO_SN(next_reclaimed + 0x10); @@ -1176,7 +1183,8 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, } /*we can free until ssn % q.n_bd not inclusive */ - WARN_ON(iwl_reclaim(priv, sta_id, tid, txq_id, ssn, &skbs)); + WARN_ON_ONCE(iwl_reclaim(priv, sta_id, tid, + txq_id, ssn, &skbs)); iwlagn_check_ratid_empty(priv, sta_id, tid); freed = 0; @@ -1189,8 +1197,8 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, info = IEEE80211_SKB_CB(skb); ctx = info->driver_data[0]; - kmem_cache_free(iwl_tx_cmd_pool, - (info->driver_data[1])); + iwl_trans_free_tx_cmd(priv->trans, + info->driver_data[1]); memset(&info->status, 0, sizeof(info->status)); @@ -1225,10 +1233,19 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, if (!is_agg) iwlagn_non_agg_tx_status(priv, ctx, hdr->addr1); + is_offchannel_skb = + (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN); freed++; } WARN_ON(!is_agg && freed != 1); + + /* + * An offchannel frame can be send only on the AUX queue, where + * there is no aggregation (and reordering) so it only is single + * skb is expected to be processed. 
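/*
 * Illustration (not from the patch) of the reclaim arithmetic used in
 * the reply-TX path above.  SEQ_TO_SN in this driver strips the 4-bit
 * fragment field from the 802.11 sequence-control word, so
 * "seq_ctl + 0x10" advances the sequence number by one before the
 * conversion; reclaiming then frees ring entries from the current tail
 * up to, but not including, ssn % n_bd.
 */
#include <stdio.h>
#include <stdint.h>

#define SEQ_TO_SN(seq)	(((seq) & 0xfff0) >> 4)

/* number of descriptors that can be freed in a ring of n_bd entries */
static int reclaim_count(int tail, int ssn, int n_bd)
{
	return (ssn - tail + n_bd) % n_bd;	/* 'ssn' itself is not freed */
}

int main(void)
{
	uint16_t seq_ctl = 0x0530;		/* SN 0x53, fragment 0 */
	int next = SEQ_TO_SN(seq_ctl + 0x10);	/* -> 0x54 */

	printf("next_reclaimed = 0x%x\n", next);
	printf("can free %d entries\n", reclaim_count(0x50, next, 256));
	return 0;
}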
+ */ + WARN_ON(is_offchannel_skb && freed != 1); } iwl_check_abort_status(priv, tx_resp->frame_count, status); @@ -1239,6 +1256,9 @@ int iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb, ieee80211_tx_status(priv->hw, skb); } + if (is_offchannel_skb) + iwl_scan_offchannel_skb_status(priv); + return 0; } @@ -1341,7 +1361,7 @@ int iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv, WARN_ON_ONCE(1); info = IEEE80211_SKB_CB(skb); - kmem_cache_free(iwl_tx_cmd_pool, (info->driver_data[1])); + iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]); if (freed == 1) { /* this is the first skb we deliver in this batch */ diff --git a/drivers/net/wireless/iwlwifi/iwl-ucode.c b/drivers/net/wireless/iwlwifi/dvm/ucode.c index bc40dc68b0f4..6d8d6dd7943f 100644 --- a/drivers/net/wireless/iwlwifi/iwl-ucode.c +++ b/drivers/net/wireless/iwlwifi/dvm/ucode.c @@ -30,15 +30,16 @@ #include <linux/kernel.h> #include <linux/init.h> -#include "iwl-dev.h" #include "iwl-io.h" #include "iwl-agn-hw.h" -#include "iwl-agn.h" -#include "iwl-agn-calib.h" #include "iwl-trans.h" #include "iwl-fh.h" #include "iwl-op-mode.h" +#include "dev.h" +#include "agn.h" +#include "calib.h" + /****************************************************************************** * * uCode download functions @@ -60,8 +61,7 @@ iwl_get_ucode_image(struct iwl_priv *priv, enum iwl_ucode_type ucode_type) static int iwl_set_Xtal_calib(struct iwl_priv *priv) { struct iwl_calib_xtal_freq_cmd cmd; - __le16 *xtal_calib = - (__le16 *)iwl_eeprom_query_addr(priv, EEPROM_XTAL); + __le16 *xtal_calib = priv->eeprom_data->xtal_calib; iwl_set_calib_hdr(&cmd.hdr, IWL_PHY_CALIBRATE_CRYSTAL_FRQ_CMD); cmd.cap_pin1 = le16_to_cpu(xtal_calib[0]); @@ -72,12 +72,10 @@ static int iwl_set_Xtal_calib(struct iwl_priv *priv) static int iwl_set_temperature_offset_calib(struct iwl_priv *priv) { struct iwl_calib_temperature_offset_cmd cmd; - __le16 *offset_calib = - (__le16 *)iwl_eeprom_query_addr(priv, EEPROM_RAW_TEMPERATURE); memset(&cmd, 0, sizeof(cmd)); iwl_set_calib_hdr(&cmd.hdr, IWL_PHY_CALIBRATE_TEMP_OFFSET_CMD); - memcpy(&cmd.radio_sensor_offset, offset_calib, sizeof(*offset_calib)); + cmd.radio_sensor_offset = priv->eeprom_data->raw_temperature; if (!(cmd.radio_sensor_offset)) cmd.radio_sensor_offset = DEFAULT_RADIO_SENSOR_OFFSET; @@ -89,27 +87,17 @@ static int iwl_set_temperature_offset_calib(struct iwl_priv *priv) static int iwl_set_temperature_offset_calib_v2(struct iwl_priv *priv) { struct iwl_calib_temperature_offset_v2_cmd cmd; - __le16 *offset_calib_high = (__le16 *)iwl_eeprom_query_addr(priv, - EEPROM_KELVIN_TEMPERATURE); - __le16 *offset_calib_low = - (__le16 *)iwl_eeprom_query_addr(priv, EEPROM_RAW_TEMPERATURE); - struct iwl_eeprom_calib_hdr *hdr; memset(&cmd, 0, sizeof(cmd)); iwl_set_calib_hdr(&cmd.hdr, IWL_PHY_CALIBRATE_TEMP_OFFSET_CMD); - hdr = (struct iwl_eeprom_calib_hdr *)iwl_eeprom_query_addr(priv, - EEPROM_CALIB_ALL); - memcpy(&cmd.radio_sensor_offset_high, offset_calib_high, - sizeof(*offset_calib_high)); - memcpy(&cmd.radio_sensor_offset_low, offset_calib_low, - sizeof(*offset_calib_low)); - if (!(cmd.radio_sensor_offset_low)) { + cmd.radio_sensor_offset_high = priv->eeprom_data->kelvin_temperature; + cmd.radio_sensor_offset_low = priv->eeprom_data->raw_temperature; + if (!cmd.radio_sensor_offset_low) { IWL_DEBUG_CALIB(priv, "no info in EEPROM, use default\n"); cmd.radio_sensor_offset_low = DEFAULT_RADIO_SENSOR_OFFSET; cmd.radio_sensor_offset_high = DEFAULT_RADIO_SENSOR_OFFSET; } - memcpy(&cmd.burntVoltageRef, &hdr->voltage, - 
sizeof(hdr->voltage)); + cmd.burntVoltageRef = priv->eeprom_data->calib_voltage; IWL_DEBUG_CALIB(priv, "Radio sensor offset high: %d\n", le16_to_cpu(cmd.radio_sensor_offset_high)); @@ -177,7 +165,7 @@ int iwl_init_alive_start(struct iwl_priv *priv) return 0; } -int iwl_send_wimax_coex(struct iwl_priv *priv) +static int iwl_send_wimax_coex(struct iwl_priv *priv) { struct iwl_wimax_coex_cmd coex_cmd; @@ -238,13 +226,50 @@ int iwl_send_bt_env(struct iwl_priv *priv, u8 action, u8 type) return ret; } +static const u8 iwlagn_default_queue_to_tx_fifo[] = { + IWL_TX_FIFO_VO, + IWL_TX_FIFO_VI, + IWL_TX_FIFO_BE, + IWL_TX_FIFO_BK, +}; + +static const u8 iwlagn_ipan_queue_to_tx_fifo[] = { + IWL_TX_FIFO_VO, + IWL_TX_FIFO_VI, + IWL_TX_FIFO_BE, + IWL_TX_FIFO_BK, + IWL_TX_FIFO_BK_IPAN, + IWL_TX_FIFO_BE_IPAN, + IWL_TX_FIFO_VI_IPAN, + IWL_TX_FIFO_VO_IPAN, + IWL_TX_FIFO_BE_IPAN, + IWL_TX_FIFO_UNUSED, + IWL_TX_FIFO_AUX, +}; static int iwl_alive_notify(struct iwl_priv *priv) { + const u8 *queue_to_txf; + u8 n_queues; int ret; + int i; iwl_trans_fw_alive(priv->trans); + if (priv->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_PAN && + priv->eeprom_data->sku & EEPROM_SKU_CAP_IPAN_ENABLE) { + n_queues = ARRAY_SIZE(iwlagn_ipan_queue_to_tx_fifo); + queue_to_txf = iwlagn_ipan_queue_to_tx_fifo; + } else { + n_queues = ARRAY_SIZE(iwlagn_default_queue_to_tx_fifo); + queue_to_txf = iwlagn_default_queue_to_tx_fifo; + } + + for (i = 0; i < n_queues; i++) + if (queue_to_txf[i] != IWL_TX_FIFO_UNUSED) + iwl_trans_ac_txq_enable(priv->trans, i, + queue_to_txf[i]); + priv->passive_no_rx = false; priv->transport_queue_stop = 0; diff --git a/drivers/net/wireless/iwlwifi/iwl-config.h b/drivers/net/wireless/iwlwifi/iwl-config.h index 67b28aa7f9be..87f465a49df1 100644 --- a/drivers/net/wireless/iwlwifi/iwl-config.h +++ b/drivers/net/wireless/iwlwifi/iwl-config.h @@ -113,7 +113,7 @@ enum iwl_led_mode { #define IWL_MAX_PLCP_ERR_THRESHOLD_DISABLE 0 /* TX queue watchdog timeouts in mSecs */ -#define IWL_WATCHHDOG_DISABLED 0 +#define IWL_WATCHDOG_DISABLED 0 #define IWL_DEF_WD_TIMEOUT 2000 #define IWL_LONG_WD_TIMEOUT 10000 #define IWL_MAX_WD_TIMEOUT 120000 @@ -143,7 +143,7 @@ enum iwl_led_mode { * @chain_noise_scale: default chain noise scale used for gain computation * @wd_timeout: TX queues watchdog timeout * @max_event_log_size: size of event log buffer size for ucode event logging - * @shadow_reg_enable: HW shadhow register bit + * @shadow_reg_enable: HW shadow register support * @hd_v2: v2 of enhanced sensitivity value, used for 2000 series and up * @no_idle_support: do not support idle mode */ @@ -177,18 +177,39 @@ struct iwl_base_params { struct iwl_bt_params { bool advanced_bt_coexist; u8 bt_init_traffic_load; - u8 bt_prio_boost; + u32 bt_prio_boost; u16 agg_time_limit; bool bt_sco_disable; bool bt_session_2; }; + /* * @use_rts_for_aggregation: use rts/cts protection for HT traffic + * @ht40_bands: bitmap of bands (using %IEEE80211_BAND_*) that support HT40 */ struct iwl_ht_params { + enum ieee80211_smps_mode smps_mode; const bool ht_greenfield_support; /* if used set to true */ bool use_rts_for_aggregation; - enum ieee80211_smps_mode smps_mode; + u8 ht40_bands; +}; + +/* + * information on how to parse the EEPROM + */ +#define EEPROM_REG_BAND_1_CHANNELS 0x08 +#define EEPROM_REG_BAND_2_CHANNELS 0x26 +#define EEPROM_REG_BAND_3_CHANNELS 0x42 +#define EEPROM_REG_BAND_4_CHANNELS 0x5C +#define EEPROM_REG_BAND_5_CHANNELS 0x74 +#define EEPROM_REG_BAND_24_HT40_CHANNELS 0x82 +#define EEPROM_REG_BAND_52_HT40_CHANNELS 0x92 +#define 
EEPROM_6000_REG_BAND_24_HT40_CHANNELS 0x80 +#define EEPROM_REGULATORY_BAND_NO_HT40 0 + +struct iwl_eeprom_params { + const u8 regulatory_bands[7]; + bool enhanced_txpower; }; /** @@ -243,6 +264,7 @@ struct iwl_cfg { /* params likely to change within a device family */ const struct iwl_ht_params *ht_params; const struct iwl_bt_params *bt_params; + const struct iwl_eeprom_params *eeprom_params; const bool need_temp_offset_calib; /* if used set to true */ const bool no_xtal_calib; enum iwl_led_mode led_mode; diff --git a/drivers/net/wireless/iwlwifi/iwl-csr.h b/drivers/net/wireless/iwlwifi/iwl-csr.h index 59750543fce7..34a5287dfc2f 100644 --- a/drivers/net/wireless/iwlwifi/iwl-csr.h +++ b/drivers/net/wireless/iwlwifi/iwl-csr.h @@ -97,13 +97,10 @@ /* * Hardware revision info * Bit fields: - * 31-8: Reserved - * 7-4: Type of device: see CSR_HW_REV_TYPE_xxx definitions + * 31-16: Reserved + * 15-4: Type of device: see CSR_HW_REV_TYPE_xxx definitions * 3-2: Revision step: 0 = A, 1 = B, 2 = C, 3 = D * 1-0: "Dash" (-) value, as in A-1, etc. - * - * NOTE: Revision step affects calculation of CCK txpower for 4965. - * NOTE: See also CSR_HW_REV_WA_REG (work-around for bug in 4965). */ #define CSR_HW_REV (CSR_BASE+0x028) @@ -155,9 +152,21 @@ #define CSR_DBG_LINK_PWR_MGMT_REG (CSR_BASE+0x250) /* Bits for CSR_HW_IF_CONFIG_REG */ -#define CSR_HW_IF_CONFIG_REG_MSK_BOARD_VER (0x00000C00) -#define CSR_HW_IF_CONFIG_REG_BIT_MAC_SI (0x00000100) +#define CSR_HW_IF_CONFIG_REG_MSK_MAC_DASH (0x00000003) +#define CSR_HW_IF_CONFIG_REG_MSK_MAC_STEP (0x0000000C) +#define CSR_HW_IF_CONFIG_REG_MSK_BOARD_VER (0x000000C0) +#define CSR_HW_IF_CONFIG_REG_BIT_MAC_SI (0x00000100) #define CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI (0x00000200) +#define CSR_HW_IF_CONFIG_REG_MSK_PHY_TYPE (0x00000C00) +#define CSR_HW_IF_CONFIG_REG_MSK_PHY_DASH (0x00003000) +#define CSR_HW_IF_CONFIG_REG_MSK_PHY_STEP (0x0000C000) + +#define CSR_HW_IF_CONFIG_REG_POS_MAC_DASH (0) +#define CSR_HW_IF_CONFIG_REG_POS_MAC_STEP (2) +#define CSR_HW_IF_CONFIG_REG_POS_BOARD_VER (6) +#define CSR_HW_IF_CONFIG_REG_POS_PHY_TYPE (10) +#define CSR_HW_IF_CONFIG_REG_POS_PHY_DASH (12) +#define CSR_HW_IF_CONFIG_REG_POS_PHY_STEP (14) #define CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A (0x00080000) #define CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM (0x00200000) @@ -270,7 +279,10 @@ /* HW REV */ -#define CSR_HW_REV_TYPE_MSK (0x00001F0) +#define CSR_HW_REV_DASH(_val) (((_val) & 0x0000003) >> 0) +#define CSR_HW_REV_STEP(_val) (((_val) & 0x000000C) >> 2) + +#define CSR_HW_REV_TYPE_MSK (0x000FFF0) #define CSR_HW_REV_TYPE_5300 (0x0000020) #define CSR_HW_REV_TYPE_5350 (0x0000030) #define CSR_HW_REV_TYPE_5100 (0x0000050) diff --git a/drivers/net/wireless/iwlwifi/iwl-debug.c b/drivers/net/wireless/iwlwifi/iwl-debug.c index 2d1b42847b9b..87535a67de76 100644 --- a/drivers/net/wireless/iwlwifi/iwl-debug.c +++ b/drivers/net/wireless/iwlwifi/iwl-debug.c @@ -61,7 +61,11 @@ * *****************************************************************************/ +#define DEBUG + +#include <linux/device.h> #include <linux/interrupt.h> +#include <linux/export.h> #include "iwl-debug.h" #include "iwl-devtrace.h" @@ -81,8 +85,11 @@ void __iwl_ ##fn(struct device *dev, const char *fmt, ...) \ } __iwl_fn(warn) +EXPORT_SYMBOL_GPL(__iwl_warn); __iwl_fn(info) +EXPORT_SYMBOL_GPL(__iwl_info); __iwl_fn(crit) +EXPORT_SYMBOL_GPL(__iwl_crit); void __iwl_err(struct device *dev, bool rfkill_prefix, bool trace_only, const char *fmt, ...) 
@@ -103,6 +110,7 @@ void __iwl_err(struct device *dev, bool rfkill_prefix, bool trace_only, trace_iwlwifi_err(&vaf); va_end(args); } +EXPORT_SYMBOL_GPL(__iwl_err); #if defined(CONFIG_IWLWIFI_DEBUG) || defined(CONFIG_IWLWIFI_DEVICE_TRACING) void __iwl_dbg(struct device *dev, @@ -119,10 +127,11 @@ void __iwl_dbg(struct device *dev, #ifdef CONFIG_IWLWIFI_DEBUG if (iwl_have_debug_level(level) && (!limit || net_ratelimit())) - dev_err(dev, "%c %s %pV", in_interrupt() ? 'I' : 'U', + dev_dbg(dev, "%c %s %pV", in_interrupt() ? 'I' : 'U', function, &vaf); #endif trace_iwlwifi_dbg(level, in_interrupt(), function, &vaf); va_end(args); } +EXPORT_SYMBOL_GPL(__iwl_dbg); #endif diff --git a/drivers/net/wireless/iwlwifi/iwl-debug.h b/drivers/net/wireless/iwlwifi/iwl-debug.h index 8376b842bdba..42b20b0e83bc 100644 --- a/drivers/net/wireless/iwlwifi/iwl-debug.h +++ b/drivers/net/wireless/iwlwifi/iwl-debug.h @@ -38,13 +38,14 @@ static inline bool iwl_have_debug_level(u32 level) } void __iwl_err(struct device *dev, bool rfkill_prefix, bool only_trace, - const char *fmt, ...); -void __iwl_warn(struct device *dev, const char *fmt, ...); -void __iwl_info(struct device *dev, const char *fmt, ...); -void __iwl_crit(struct device *dev, const char *fmt, ...); + const char *fmt, ...) __printf(4, 5); +void __iwl_warn(struct device *dev, const char *fmt, ...) __printf(2, 3); +void __iwl_info(struct device *dev, const char *fmt, ...) __printf(2, 3); +void __iwl_crit(struct device *dev, const char *fmt, ...) __printf(2, 3); /* No matter what is m (priv, bus, trans), this will work */ #define IWL_ERR(m, f, a...) __iwl_err((m)->dev, false, false, f, ## a) +#define IWL_ERR_DEV(d, f, a...) __iwl_err((d), false, false, f, ## a) #define IWL_WARN(m, f, a...) __iwl_warn((m)->dev, f, ## a) #define IWL_INFO(m, f, a...) __iwl_info((m)->dev, f, ## a) #define IWL_CRIT(m, f, a...) __iwl_crit((m)->dev, f, ## a) @@ -52,9 +53,9 @@ void __iwl_crit(struct device *dev, const char *fmt, ...); #if defined(CONFIG_IWLWIFI_DEBUG) || defined(CONFIG_IWLWIFI_DEVICE_TRACING) void __iwl_dbg(struct device *dev, u32 level, bool limit, const char *function, - const char *fmt, ...); + const char *fmt, ...) __printf(5, 6); #else -static inline void +__printf(5, 6) static inline void __iwl_dbg(struct device *dev, u32 level, bool limit, const char *function, const char *fmt, ...) @@ -69,6 +70,8 @@ do { \ #define IWL_DEBUG(m, level, fmt, args...) \ __iwl_dbg((m)->dev, level, false, __func__, fmt, ##args) +#define IWL_DEBUG_DEV(dev, level, fmt, args...) \ + __iwl_dbg((dev), level, false, __func__, fmt, ##args) #define IWL_DEBUG_LIMIT(m, level, fmt, args...) \ __iwl_dbg((m)->dev, level, true, __func__, fmt, ##args) @@ -153,7 +156,7 @@ do { \ #define IWL_DEBUG_LED(p, f, a...) IWL_DEBUG(p, IWL_DL_LED, f, ## a) #define IWL_DEBUG_WEP(p, f, a...) IWL_DEBUG(p, IWL_DL_WEP, f, ## a) #define IWL_DEBUG_HC(p, f, a...) IWL_DEBUG(p, IWL_DL_HCMD, f, ## a) -#define IWL_DEBUG_EEPROM(p, f, a...) IWL_DEBUG(p, IWL_DL_EEPROM, f, ## a) +#define IWL_DEBUG_EEPROM(d, f, a...) IWL_DEBUG_DEV(d, IWL_DL_EEPROM, f, ## a) #define IWL_DEBUG_CALIB(p, f, a...) IWL_DEBUG(p, IWL_DL_CALIB, f, ## a) #define IWL_DEBUG_FW(p, f, a...) IWL_DEBUG(p, IWL_DL_FW, f, ## a) #define IWL_DEBUG_RF_KILL(p, f, a...) 
IWL_DEBUG(p, IWL_DL_RF_KILL, f, ## a) diff --git a/drivers/net/wireless/iwlwifi/iwl-devtrace.c b/drivers/net/wireless/iwlwifi/iwl-devtrace.c index 91f45e71e0a2..70191ddbd8f6 100644 --- a/drivers/net/wireless/iwlwifi/iwl-devtrace.c +++ b/drivers/net/wireless/iwlwifi/iwl-devtrace.c @@ -42,4 +42,9 @@ EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dev_ucode_event); EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dev_ucode_error); EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dev_ucode_cont_event); EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dev_ucode_wrap_event); +EXPORT_TRACEPOINT_SYMBOL(iwlwifi_info); +EXPORT_TRACEPOINT_SYMBOL(iwlwifi_warn); +EXPORT_TRACEPOINT_SYMBOL(iwlwifi_crit); +EXPORT_TRACEPOINT_SYMBOL(iwlwifi_err); +EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dbg); #endif diff --git a/drivers/net/wireless/iwlwifi/iwl-devtrace.h b/drivers/net/wireless/iwlwifi/iwl-devtrace.h index 06203d6a1d86..06ca505bb2cc 100644 --- a/drivers/net/wireless/iwlwifi/iwl-devtrace.h +++ b/drivers/net/wireless/iwlwifi/iwl-devtrace.h @@ -28,6 +28,7 @@ #define __IWLWIFI_DEVICE_TRACE #include <linux/tracepoint.h> +#include <linux/device.h> #if !defined(CONFIG_IWLWIFI_DEVICE_TRACING) || defined(__CHECKER__) @@ -175,7 +176,7 @@ TRACE_EVENT(iwlwifi_dev_ucode_wrap_event, #undef TRACE_SYSTEM #define TRACE_SYSTEM iwlwifi_msg -#define MAX_MSG_LEN 100 +#define MAX_MSG_LEN 110 DECLARE_EVENT_CLASS(iwlwifi_msg_event, TP_PROTO(struct va_format *vaf), @@ -188,7 +189,7 @@ DECLARE_EVENT_CLASS(iwlwifi_msg_event, MAX_MSG_LEN, vaf->fmt, *vaf->va) >= MAX_MSG_LEN); ), - TP_printk("%s", (char *)__get_dynamic_array(msg)) + TP_printk("%s", __get_str(msg)) ); DEFINE_EVENT(iwlwifi_msg_event, iwlwifi_err, diff --git a/drivers/net/wireless/iwlwifi/iwl-drv.c b/drivers/net/wireless/iwlwifi/iwl-drv.c index fac67a526a30..cc41cfaedfbd 100644 --- a/drivers/net/wireless/iwlwifi/iwl-drv.c +++ b/drivers/net/wireless/iwlwifi/iwl-drv.c @@ -77,8 +77,33 @@ /* private includes */ #include "iwl-fw-file.h" +/****************************************************************************** + * + * module boiler plate + * + ******************************************************************************/ + +/* + * module name, copyright, version, etc. + */ +#define DRV_DESCRIPTION "Intel(R) Wireless WiFi driver for Linux" + +#ifdef CONFIG_IWLWIFI_DEBUG +#define VD "d" +#else +#define VD +#endif + +#define DRV_VERSION IWLWIFI_VERSION VD + +MODULE_DESCRIPTION(DRV_DESCRIPTION); +MODULE_VERSION(DRV_VERSION); +MODULE_AUTHOR(DRV_COPYRIGHT " " DRV_AUTHOR); +MODULE_LICENSE("GPL"); + /** * struct iwl_drv - drv common data + * @list: list of drv structures using this opmode * @fw: the iwl_fw structure * @op_mode: the running op_mode * @trans: transport layer @@ -89,6 +114,7 @@ * @request_firmware_complete: the firmware has been obtained from user space */ struct iwl_drv { + struct list_head list; struct iwl_fw fw; struct iwl_op_mode *op_mode; @@ -102,7 +128,19 @@ struct iwl_drv { struct completion request_firmware_complete; }; - +#define DVM_OP_MODE 0 +#define MVM_OP_MODE 1 + +/* Protects the table contents, i.e. the ops pointer & drv list */ +static struct mutex iwlwifi_opmode_table_mtx; +static struct iwlwifi_opmode_table { + const char *name; /* name: iwldvm, iwlmvm, etc */ + const struct iwl_op_mode_ops *ops; /* pointer to op_mode ops */ + struct list_head drv; /* list of devices using this op_mode */ +} iwlwifi_opmode_table[] = { /* ops set when driver is initialized */ + { .name = "iwldvm", .ops = NULL }, + { .name = "iwlmvm", .ops = NULL }, +}; /* * struct fw_sec: Just for the image parsing proccess. 
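/*
 * Self-contained sketch (illustrative only, user-space, pthread mutex
 * instead of the kernel mutex) of the deferred op_mode binding pattern
 * that the iwlwifi_opmode_table above sets up: devices queue themselves
 * on the table entry, and whichever side registers last - the device or
 * the op_mode ops - triggers start().  All names here are made up.
 */
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct ops { void (*start)(const char *dev_name); };

struct device {
	const char *name;
	struct device *next;
	int started;
};

static struct {
	const char *name;
	const struct ops *ops;
	struct device *devs;
	pthread_mutex_t lock;
} table = { "iwldvm", NULL, NULL, PTHREAD_MUTEX_INITIALIZER };

static void dvm_start(const char *dev_name)
{
	printf("starting op_mode for %s\n", dev_name);
}

static const struct ops dvm_ops = { .start = dvm_start };

/* called when firmware for a device is ready (cf. iwl_ucode_callback) */
static void device_ready(struct device *dev)
{
	pthread_mutex_lock(&table.lock);
	dev->next = table.devs;
	table.devs = dev;
	if (table.ops) {		/* op_mode already registered */
		table.ops->start(dev->name);
		dev->started = 1;
	}	/* else: the real driver defers request_module() until after unlock */
	pthread_mutex_unlock(&table.lock);
}

/* called from the op_mode module's init (cf. iwl_opmode_register) */
static void opmode_register(const struct ops *ops)
{
	pthread_mutex_lock(&table.lock);
	table.ops = ops;
	for (struct device *d = table.devs; d; d = d->next)
		if (!d->started) {
			ops->start(d->name);
			d->started = 1;
		}
	pthread_mutex_unlock(&table.lock);
}

int main(void)
{
	struct device wifi = { .name = "0000:02:00.0" };

	device_ready(&wifi);		/* op_mode module not loaded yet */
	opmode_register(&dvm_ops);	/* module loads, starts pending device */
	return 0;
}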
@@ -721,7 +759,6 @@ static int validate_sec_sizes(struct iwl_drv *drv, return 0; } - /** * iwl_ucode_callback - callback when firmware was loaded * @@ -733,6 +770,7 @@ static void iwl_ucode_callback(const struct firmware *ucode_raw, void *context) struct iwl_drv *drv = context; struct iwl_fw *fw = &drv->fw; struct iwl_ucode_header *ucode; + struct iwlwifi_opmode_table *op; int err; struct iwl_firmware_pieces pieces; const unsigned int api_max = drv->cfg->ucode_api_max; @@ -740,6 +778,7 @@ static void iwl_ucode_callback(const struct firmware *ucode_raw, void *context) const unsigned int api_min = drv->cfg->ucode_api_min; u32 api_ver; int i; + bool load_module = false; fw->ucode_capa.max_probe_length = 200; fw->ucode_capa.standard_phy_calibration_size = @@ -862,10 +901,24 @@ static void iwl_ucode_callback(const struct firmware *ucode_raw, void *context) /* We have our copies now, allow OS release its copies */ release_firmware(ucode_raw); - drv->op_mode = iwl_dvm_ops.start(drv->trans, drv->cfg, &drv->fw); + mutex_lock(&iwlwifi_opmode_table_mtx); + op = &iwlwifi_opmode_table[DVM_OP_MODE]; - if (!drv->op_mode) - goto out_unbind; + /* add this device to the list of devices using this op_mode */ + list_add_tail(&drv->list, &op->drv); + + if (op->ops) { + const struct iwl_op_mode_ops *ops = op->ops; + drv->op_mode = ops->start(drv->trans, drv->cfg, &drv->fw); + + if (!drv->op_mode) { + mutex_unlock(&iwlwifi_opmode_table_mtx); + goto out_unbind; + } + } else { + load_module = true; + } + mutex_unlock(&iwlwifi_opmode_table_mtx); /* * Complete the firmware request last so that @@ -873,6 +926,14 @@ static void iwl_ucode_callback(const struct firmware *ucode_raw, void *context) * are doing the start() above. */ complete(&drv->request_firmware_complete); + + /* + * Load the module last so we don't block anything + * else from proceeding if the module fails to load + * or hangs loading. + */ + if (load_module) + request_module("%s", op->name); return; try_again: @@ -906,6 +967,7 @@ struct iwl_drv *iwl_drv_start(struct iwl_trans *trans, drv->cfg = cfg; init_completion(&drv->request_firmware_complete); + INIT_LIST_HEAD(&drv->list); ret = iwl_request_firmware(drv, true); @@ -928,6 +990,16 @@ void iwl_drv_stop(struct iwl_drv *drv) iwl_dealloc_ucode(drv); + mutex_lock(&iwlwifi_opmode_table_mtx); + /* + * List is empty (this item wasn't added) + * when firmware loading failed -- in that + * case we can't remove it from any list. 
+ */ + if (!list_empty(&drv->list)) + list_del(&drv->list); + mutex_unlock(&iwlwifi_opmode_table_mtx); + kfree(drv); } @@ -941,8 +1013,78 @@ struct iwl_mod_params iwlwifi_mod_params = { .power_level = IWL_POWER_INDEX_1, .bt_ch_announce = true, .auto_agg = true, + .wd_disable = true, /* the rest are 0 by default */ }; +EXPORT_SYMBOL_GPL(iwlwifi_mod_params); + +int iwl_opmode_register(const char *name, const struct iwl_op_mode_ops *ops) +{ + int i; + struct iwl_drv *drv; + + mutex_lock(&iwlwifi_opmode_table_mtx); + for (i = 0; i < ARRAY_SIZE(iwlwifi_opmode_table); i++) { + if (strcmp(iwlwifi_opmode_table[i].name, name)) + continue; + iwlwifi_opmode_table[i].ops = ops; + list_for_each_entry(drv, &iwlwifi_opmode_table[i].drv, list) + drv->op_mode = ops->start(drv->trans, drv->cfg, + &drv->fw); + mutex_unlock(&iwlwifi_opmode_table_mtx); + return 0; + } + mutex_unlock(&iwlwifi_opmode_table_mtx); + return -EIO; +} +EXPORT_SYMBOL_GPL(iwl_opmode_register); + +void iwl_opmode_deregister(const char *name) +{ + int i; + struct iwl_drv *drv; + + mutex_lock(&iwlwifi_opmode_table_mtx); + for (i = 0; i < ARRAY_SIZE(iwlwifi_opmode_table); i++) { + if (strcmp(iwlwifi_opmode_table[i].name, name)) + continue; + iwlwifi_opmode_table[i].ops = NULL; + + /* call the stop routine for all devices */ + list_for_each_entry(drv, &iwlwifi_opmode_table[i].drv, list) { + if (drv->op_mode) { + iwl_op_mode_stop(drv->op_mode); + drv->op_mode = NULL; + } + } + mutex_unlock(&iwlwifi_opmode_table_mtx); + return; + } + mutex_unlock(&iwlwifi_opmode_table_mtx); +} +EXPORT_SYMBOL_GPL(iwl_opmode_deregister); + +static int __init iwl_drv_init(void) +{ + int i; + + mutex_init(&iwlwifi_opmode_table_mtx); + + for (i = 0; i < ARRAY_SIZE(iwlwifi_opmode_table); i++) + INIT_LIST_HEAD(&iwlwifi_opmode_table[i].drv); + + pr_info(DRV_DESCRIPTION ", " DRV_VERSION "\n"); + pr_info(DRV_COPYRIGHT "\n"); + + return iwl_pci_register_driver(); +} +module_init(iwl_drv_init); + +static void __exit iwl_drv_exit(void) +{ + iwl_pci_unregister_driver(); +} +module_exit(iwl_drv_exit); #ifdef CONFIG_IWLWIFI_DEBUG module_param_named(debug, iwlwifi_mod_params.debug_level, uint, diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c new file mode 100644 index 000000000000..f10170fe8799 --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.c @@ -0,0 +1,903 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. 
+ * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ *****************************************************************************/ +#include <linux/types.h> +#include <linux/slab.h> +#include <linux/export.h> +#include "iwl-modparams.h" +#include "iwl-eeprom-parse.h" + +/* EEPROM offset definitions */ + +/* indirect access definitions */ +#define ADDRESS_MSK 0x0000FFFF +#define INDIRECT_TYPE_MSK 0x000F0000 +#define INDIRECT_HOST 0x00010000 +#define INDIRECT_GENERAL 0x00020000 +#define INDIRECT_REGULATORY 0x00030000 +#define INDIRECT_CALIBRATION 0x00040000 +#define INDIRECT_PROCESS_ADJST 0x00050000 +#define INDIRECT_OTHERS 0x00060000 +#define INDIRECT_TXP_LIMIT 0x00070000 +#define INDIRECT_TXP_LIMIT_SIZE 0x00080000 +#define INDIRECT_ADDRESS 0x00100000 + +/* corresponding link offsets in EEPROM */ +#define EEPROM_LINK_HOST (2*0x64) +#define EEPROM_LINK_GENERAL (2*0x65) +#define EEPROM_LINK_REGULATORY (2*0x66) +#define EEPROM_LINK_CALIBRATION (2*0x67) +#define EEPROM_LINK_PROCESS_ADJST (2*0x68) +#define EEPROM_LINK_OTHERS (2*0x69) +#define EEPROM_LINK_TXP_LIMIT (2*0x6a) +#define EEPROM_LINK_TXP_LIMIT_SIZE (2*0x6b) + +/* General */ +#define EEPROM_DEVICE_ID (2*0x08) /* 2 bytes */ +#define EEPROM_SUBSYSTEM_ID (2*0x0A) /* 2 bytes */ +#define EEPROM_MAC_ADDRESS (2*0x15) /* 6 bytes */ +#define EEPROM_BOARD_REVISION (2*0x35) /* 2 bytes */ +#define EEPROM_BOARD_PBA_NUMBER (2*0x3B+1) /* 9 bytes */ +#define EEPROM_VERSION (2*0x44) /* 2 bytes */ +#define EEPROM_SKU_CAP (2*0x45) /* 2 bytes */ +#define EEPROM_OEM_MODE (2*0x46) /* 2 bytes */ +#define EEPROM_RADIO_CONFIG (2*0x48) /* 2 bytes */ +#define EEPROM_NUM_MAC_ADDRESS (2*0x4C) /* 2 bytes */ + +/* calibration */ +struct iwl_eeprom_calib_hdr { + u8 version; + u8 pa_type; + __le16 voltage; +} __packed; + +#define EEPROM_CALIB_ALL (INDIRECT_ADDRESS | INDIRECT_CALIBRATION) +#define EEPROM_XTAL ((2*0x128) | EEPROM_CALIB_ALL) + +/* temperature */ +#define EEPROM_KELVIN_TEMPERATURE ((2*0x12A) | EEPROM_CALIB_ALL) +#define EEPROM_RAW_TEMPERATURE ((2*0x12B) | EEPROM_CALIB_ALL) + +/* + * EEPROM bands + * These are the channel numbers from each band in the order + * that they are stored in the EEPROM band information. Note + * that EEPROM bands aren't the same as mac80211 bands, and + * there are even special "ht40 bands" in the EEPROM. 
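/*
 * Standalone illustration of the indirect-address scheme defined above:
 * the INDIRECT_* type bits select a "link" word stored in the EEPROM,
 * and that word (counted in 16-bit words) is turned into a byte offset
 * and added to the low 16 bits of the requested address.  The tiny
 * EEPROM image below is fabricated just for the example; only the
 * calibration link is handled.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define ADDRESS_MSK		0x0000FFFF
#define INDIRECT_TYPE_MSK	0x000F0000
#define INDIRECT_CALIBRATION	0x00040000
#define INDIRECT_ADDRESS	0x00100000
#define EEPROM_LINK_CALIBRATION	(2*0x67)

static uint16_t eeprom_read16(const uint8_t *eeprom, unsigned int offset)
{
	return (uint16_t)(eeprom[offset] | (eeprom[offset + 1] << 8));
}

static unsigned int resolve(const uint8_t *eeprom, uint32_t address)
{
	uint16_t link;

	if (!(address & INDIRECT_ADDRESS))
		return address;

	if ((address & INDIRECT_TYPE_MSK) != INDIRECT_CALIBRATION)
		return address & ADDRESS_MSK;

	link = eeprom_read16(eeprom, EEPROM_LINK_CALIBRATION);
	/* link is in 16-bit words: shift left by one to get a byte offset */
	return (address & ADDRESS_MSK) + (link << 1);
}

int main(void)
{
	uint8_t eeprom[0x400];
	uint32_t xtal = (2*0x128) | INDIRECT_ADDRESS | INDIRECT_CALIBRATION;

	memset(eeprom, 0, sizeof(eeprom));
	eeprom[EEPROM_LINK_CALIBRATION] = 0x80;	/* calib block at word 0x80 */

	printf("EEPROM_XTAL resolves to byte offset 0x%x\n",
	       resolve(eeprom, xtal));		/* 0x250 + 0x100 = 0x350 */
	return 0;
}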
+ */ +static const u8 iwl_eeprom_band_1[14] = { /* 2.4 GHz */ + 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 +}; + +static const u8 iwl_eeprom_band_2[] = { /* 4915-5080MHz */ + 183, 184, 185, 187, 188, 189, 192, 196, 7, 8, 11, 12, 16 +}; + +static const u8 iwl_eeprom_band_3[] = { /* 5170-5320MHz */ + 34, 36, 38, 40, 42, 44, 46, 48, 52, 56, 60, 64 +}; + +static const u8 iwl_eeprom_band_4[] = { /* 5500-5700MHz */ + 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140 +}; + +static const u8 iwl_eeprom_band_5[] = { /* 5725-5825MHz */ + 145, 149, 153, 157, 161, 165 +}; + +static const u8 iwl_eeprom_band_6[] = { /* 2.4 ht40 channel */ + 1, 2, 3, 4, 5, 6, 7 +}; + +static const u8 iwl_eeprom_band_7[] = { /* 5.2 ht40 channel */ + 36, 44, 52, 60, 100, 108, 116, 124, 132, 149, 157 +}; + +#define IWL_NUM_CHANNELS (ARRAY_SIZE(iwl_eeprom_band_1) + \ + ARRAY_SIZE(iwl_eeprom_band_2) + \ + ARRAY_SIZE(iwl_eeprom_band_3) + \ + ARRAY_SIZE(iwl_eeprom_band_4) + \ + ARRAY_SIZE(iwl_eeprom_band_5)) + +/* rate data (static) */ +static struct ieee80211_rate iwl_cfg80211_rates[] = { + { .bitrate = 1 * 10, .hw_value = 0, .hw_value_short = 0, }, + { .bitrate = 2 * 10, .hw_value = 1, .hw_value_short = 1, + .flags = IEEE80211_RATE_SHORT_PREAMBLE, }, + { .bitrate = 5.5 * 10, .hw_value = 2, .hw_value_short = 2, + .flags = IEEE80211_RATE_SHORT_PREAMBLE, }, + { .bitrate = 11 * 10, .hw_value = 3, .hw_value_short = 3, + .flags = IEEE80211_RATE_SHORT_PREAMBLE, }, + { .bitrate = 6 * 10, .hw_value = 4, .hw_value_short = 4, }, + { .bitrate = 9 * 10, .hw_value = 5, .hw_value_short = 5, }, + { .bitrate = 12 * 10, .hw_value = 6, .hw_value_short = 6, }, + { .bitrate = 18 * 10, .hw_value = 7, .hw_value_short = 7, }, + { .bitrate = 24 * 10, .hw_value = 8, .hw_value_short = 8, }, + { .bitrate = 36 * 10, .hw_value = 9, .hw_value_short = 9, }, + { .bitrate = 48 * 10, .hw_value = 10, .hw_value_short = 10, }, + { .bitrate = 54 * 10, .hw_value = 11, .hw_value_short = 11, }, +}; +#define RATES_24_OFFS 0 +#define N_RATES_24 ARRAY_SIZE(iwl_cfg80211_rates) +#define RATES_52_OFFS 4 +#define N_RATES_52 (N_RATES_24 - RATES_52_OFFS) + +/* EEPROM reading functions */ + +static u16 iwl_eeprom_query16(const u8 *eeprom, size_t eeprom_size, int offset) +{ + if (WARN_ON(offset + sizeof(u16) > eeprom_size)) + return 0; + return le16_to_cpup((__le16 *)(eeprom + offset)); +} + +static u32 eeprom_indirect_address(const u8 *eeprom, size_t eeprom_size, + u32 address) +{ + u16 offset = 0; + + if ((address & INDIRECT_ADDRESS) == 0) + return address; + + switch (address & INDIRECT_TYPE_MSK) { + case INDIRECT_HOST: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_HOST); + break; + case INDIRECT_GENERAL: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_GENERAL); + break; + case INDIRECT_REGULATORY: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_REGULATORY); + break; + case INDIRECT_TXP_LIMIT: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_TXP_LIMIT); + break; + case INDIRECT_TXP_LIMIT_SIZE: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_TXP_LIMIT_SIZE); + break; + case INDIRECT_CALIBRATION: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_CALIBRATION); + break; + case INDIRECT_PROCESS_ADJST: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_PROCESS_ADJST); + break; + case INDIRECT_OTHERS: + offset = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_LINK_OTHERS); + break; + default: + WARN_ON(1); + break; + } + + /* translate the offset from words to 
byte */ + return (address & ADDRESS_MSK) + (offset << 1); +} + +static const u8 *iwl_eeprom_query_addr(const u8 *eeprom, size_t eeprom_size, + u32 offset) +{ + u32 address = eeprom_indirect_address(eeprom, eeprom_size, offset); + + if (WARN_ON(address >= eeprom_size)) + return NULL; + + return &eeprom[address]; +} + +static int iwl_eeprom_read_calib(const u8 *eeprom, size_t eeprom_size, + struct iwl_eeprom_data *data) +{ + struct iwl_eeprom_calib_hdr *hdr; + + hdr = (void *)iwl_eeprom_query_addr(eeprom, eeprom_size, + EEPROM_CALIB_ALL); + if (!hdr) + return -ENODATA; + data->calib_version = hdr->version; + data->calib_voltage = hdr->voltage; + + return 0; +} + +/** + * enum iwl_eeprom_channel_flags - channel flags in EEPROM + * @EEPROM_CHANNEL_VALID: channel is usable for this SKU/geo + * @EEPROM_CHANNEL_IBSS: usable as an IBSS channel + * @EEPROM_CHANNEL_ACTIVE: active scanning allowed + * @EEPROM_CHANNEL_RADAR: radar detection required + * @EEPROM_CHANNEL_WIDE: 20 MHz channel okay (?) + * @EEPROM_CHANNEL_DFS: dynamic freq selection candidate + */ +enum iwl_eeprom_channel_flags { + EEPROM_CHANNEL_VALID = BIT(0), + EEPROM_CHANNEL_IBSS = BIT(1), + EEPROM_CHANNEL_ACTIVE = BIT(3), + EEPROM_CHANNEL_RADAR = BIT(4), + EEPROM_CHANNEL_WIDE = BIT(5), + EEPROM_CHANNEL_DFS = BIT(7), +}; + +/** + * struct iwl_eeprom_channel - EEPROM channel data + * @flags: %EEPROM_CHANNEL_* flags + * @max_power_avg: max power (in dBm) on this channel, at most 31 dBm + */ +struct iwl_eeprom_channel { + u8 flags; + s8 max_power_avg; +} __packed; + + +enum iwl_eeprom_enhanced_txpwr_flags { + IWL_EEPROM_ENH_TXP_FL_VALID = BIT(0), + IWL_EEPROM_ENH_TXP_FL_BAND_52G = BIT(1), + IWL_EEPROM_ENH_TXP_FL_OFDM = BIT(2), + IWL_EEPROM_ENH_TXP_FL_40MHZ = BIT(3), + IWL_EEPROM_ENH_TXP_FL_HT_AP = BIT(4), + IWL_EEPROM_ENH_TXP_FL_RES1 = BIT(5), + IWL_EEPROM_ENH_TXP_FL_RES2 = BIT(6), + IWL_EEPROM_ENH_TXP_FL_COMMON_TYPE = BIT(7), +}; + +/** + * iwl_eeprom_enhanced_txpwr structure + * @flags: entry flags + * @channel: channel number + * @chain_a_max_pwr: chain a max power in 1/2 dBm + * @chain_b_max_pwr: chain b max power in 1/2 dBm + * @chain_c_max_pwr: chain c max power in 1/2 dBm + * @delta_20_in_40: 20-in-40 deltas (hi/lo) + * @mimo2_max_pwr: mimo2 max power in 1/2 dBm + * @mimo3_max_pwr: mimo3 max power in 1/2 dBm + * + * This structure presents the enhanced regulatory tx power limit layout + * in an EEPROM image. 
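/*
 * Standalone restatement of the per-chain maximum selection used by the
 * enhanced-txpower parsing below: take the largest half-dBm value among
 * the chains (and MIMO combinations) the device actually has, then
 * round up to whole dBm as the caller does with DIV_ROUND_UP(..., 2).
 * Simplified sketch: the driver also accepts the B+C and A+C antenna
 * pairs for MIMO2, this keeps only A+B.
 */
#include <stdio.h>

#define ANT_A	(1 << 0)
#define ANT_B	(1 << 1)
#define ANT_C	(1 << 2)
#define ANT_AB	(ANT_A | ANT_B)
#define ANT_ABC	(ANT_A | ANT_B | ANT_C)

struct enh_txp {		/* values in 1/2 dBm */
	signed char chain_a, chain_b, chain_c, mimo2, mimo3;
};

static signed char max_txpwr_half_dbm(unsigned char valid_tx_ant,
				      const struct enh_txp *txp)
{
	signed char result = 0;

	if ((valid_tx_ant & ANT_A) && txp->chain_a > result)
		result = txp->chain_a;
	if ((valid_tx_ant & ANT_B) && txp->chain_b > result)
		result = txp->chain_b;
	if ((valid_tx_ant & ANT_C) && txp->chain_c > result)
		result = txp->chain_c;
	if (valid_tx_ant == ANT_AB && txp->mimo2 > result)
		result = txp->mimo2;	/* two-chain device may use MIMO2 */
	if (valid_tx_ant == ANT_ABC && txp->mimo3 > result)
		result = txp->mimo3;	/* three-chain device may use MIMO3 */
	return result;
}

int main(void)
{
	struct enh_txp txp = { .chain_a = 30, .chain_b = 28, .mimo2 = 33 };
	signed char half = max_txpwr_half_dbm(ANT_AB, &txp);

	/* DIV_ROUND_UP(33, 2) = 17 dBm */
	printf("max tx power: %d dBm\n", (half + 1) / 2);
	return 0;
}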
+ */ +struct iwl_eeprom_enhanced_txpwr { + u8 flags; + u8 channel; + s8 chain_a_max; + s8 chain_b_max; + s8 chain_c_max; + u8 delta_20_in_40; + s8 mimo2_max; + s8 mimo3_max; +} __packed; + +static s8 iwl_get_max_txpwr_half_dbm(const struct iwl_eeprom_data *data, + struct iwl_eeprom_enhanced_txpwr *txp) +{ + s8 result = 0; /* (.5 dBm) */ + + /* Take the highest tx power from any valid chains */ + if (data->valid_tx_ant & ANT_A && txp->chain_a_max > result) + result = txp->chain_a_max; + + if (data->valid_tx_ant & ANT_B && txp->chain_b_max > result) + result = txp->chain_b_max; + + if (data->valid_tx_ant & ANT_C && txp->chain_c_max > result) + result = txp->chain_c_max; + + if ((data->valid_tx_ant == ANT_AB || + data->valid_tx_ant == ANT_BC || + data->valid_tx_ant == ANT_AC) && txp->mimo2_max > result) + result = txp->mimo2_max; + + if (data->valid_tx_ant == ANT_ABC && txp->mimo3_max > result) + result = txp->mimo3_max; + + return result; +} + +#define EEPROM_TXP_OFFS (0x00 | INDIRECT_ADDRESS | INDIRECT_TXP_LIMIT) +#define EEPROM_TXP_ENTRY_LEN sizeof(struct iwl_eeprom_enhanced_txpwr) +#define EEPROM_TXP_SZ_OFFS (0x00 | INDIRECT_ADDRESS | INDIRECT_TXP_LIMIT_SIZE) + +#define TXP_CHECK_AND_PRINT(x) \ + ((txp->flags & IWL_EEPROM_ENH_TXP_FL_##x) ? # x " " : "") + +static void +iwl_eeprom_enh_txp_read_element(struct iwl_eeprom_data *data, + struct iwl_eeprom_enhanced_txpwr *txp, + int n_channels, s8 max_txpower_avg) +{ + int ch_idx; + enum ieee80211_band band; + + band = txp->flags & IWL_EEPROM_ENH_TXP_FL_BAND_52G ? + IEEE80211_BAND_5GHZ : IEEE80211_BAND_2GHZ; + + for (ch_idx = 0; ch_idx < n_channels; ch_idx++) { + struct ieee80211_channel *chan = &data->channels[ch_idx]; + + /* update matching channel or from common data only */ + if (txp->channel != 0 && chan->hw_value != txp->channel) + continue; + + /* update matching band only */ + if (band != chan->band) + continue; + + if (chan->max_power < max_txpower_avg && + !(txp->flags & IWL_EEPROM_ENH_TXP_FL_40MHZ)) + chan->max_power = max_txpower_avg; + } +} + +static void iwl_eeprom_enhanced_txpower(struct device *dev, + struct iwl_eeprom_data *data, + const u8 *eeprom, size_t eeprom_size, + int n_channels) +{ + struct iwl_eeprom_enhanced_txpwr *txp_array, *txp; + int idx, entries; + __le16 *txp_len; + s8 max_txp_avg_halfdbm; + + BUILD_BUG_ON(sizeof(struct iwl_eeprom_enhanced_txpwr) != 8); + + /* the length is in 16-bit words, but we want entries */ + txp_len = (__le16 *)iwl_eeprom_query_addr(eeprom, eeprom_size, + EEPROM_TXP_SZ_OFFS); + entries = le16_to_cpup(txp_len) * 2 / EEPROM_TXP_ENTRY_LEN; + + txp_array = (void *)iwl_eeprom_query_addr(eeprom, eeprom_size, + EEPROM_TXP_OFFS); + + for (idx = 0; idx < entries; idx++) { + txp = &txp_array[idx]; + /* skip invalid entries */ + if (!(txp->flags & IWL_EEPROM_ENH_TXP_FL_VALID)) + continue; + + IWL_DEBUG_EEPROM(dev, "%s %d:\t %s%s%s%s%s%s%s%s (0x%02x)\n", + (txp->channel && (txp->flags & + IWL_EEPROM_ENH_TXP_FL_COMMON_TYPE)) ? + "Common " : (txp->channel) ? 
+ "Channel" : "Common", + (txp->channel), + TXP_CHECK_AND_PRINT(VALID), + TXP_CHECK_AND_PRINT(BAND_52G), + TXP_CHECK_AND_PRINT(OFDM), + TXP_CHECK_AND_PRINT(40MHZ), + TXP_CHECK_AND_PRINT(HT_AP), + TXP_CHECK_AND_PRINT(RES1), + TXP_CHECK_AND_PRINT(RES2), + TXP_CHECK_AND_PRINT(COMMON_TYPE), + txp->flags); + IWL_DEBUG_EEPROM(dev, + "\t\t chain_A: 0x%02x chain_B: 0X%02x chain_C: 0X%02x\n", + txp->chain_a_max, txp->chain_b_max, + txp->chain_c_max); + IWL_DEBUG_EEPROM(dev, + "\t\t MIMO2: 0x%02x MIMO3: 0x%02x High 20_on_40: 0x%02x Low 20_on_40: 0x%02x\n", + txp->mimo2_max, txp->mimo3_max, + ((txp->delta_20_in_40 & 0xf0) >> 4), + (txp->delta_20_in_40 & 0x0f)); + + max_txp_avg_halfdbm = iwl_get_max_txpwr_half_dbm(data, txp); + + iwl_eeprom_enh_txp_read_element(data, txp, n_channels, + DIV_ROUND_UP(max_txp_avg_halfdbm, 2)); + + if (max_txp_avg_halfdbm > data->max_tx_pwr_half_dbm) + data->max_tx_pwr_half_dbm = max_txp_avg_halfdbm; + } +} + +static void iwl_init_band_reference(const struct iwl_cfg *cfg, + const u8 *eeprom, size_t eeprom_size, + int eeprom_band, int *eeprom_ch_count, + const struct iwl_eeprom_channel **ch_info, + const u8 **eeprom_ch_array) +{ + u32 offset = cfg->eeprom_params->regulatory_bands[eeprom_band - 1]; + + offset |= INDIRECT_ADDRESS | INDIRECT_REGULATORY; + + *ch_info = (void *)iwl_eeprom_query_addr(eeprom, eeprom_size, offset); + + switch (eeprom_band) { + case 1: /* 2.4GHz band */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_1); + *eeprom_ch_array = iwl_eeprom_band_1; + break; + case 2: /* 4.9GHz band */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_2); + *eeprom_ch_array = iwl_eeprom_band_2; + break; + case 3: /* 5.2GHz band */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_3); + *eeprom_ch_array = iwl_eeprom_band_3; + break; + case 4: /* 5.5GHz band */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_4); + *eeprom_ch_array = iwl_eeprom_band_4; + break; + case 5: /* 5.7GHz band */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_5); + *eeprom_ch_array = iwl_eeprom_band_5; + break; + case 6: /* 2.4GHz ht40 channels */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_6); + *eeprom_ch_array = iwl_eeprom_band_6; + break; + case 7: /* 5 GHz ht40 channels */ + *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_7); + *eeprom_ch_array = iwl_eeprom_band_7; + break; + default: + *eeprom_ch_count = 0; + *eeprom_ch_array = NULL; + WARN_ON(1); + } +} + +#define CHECK_AND_PRINT(x) \ + ((eeprom_ch->flags & EEPROM_CHANNEL_##x) ? # x " " : "") + +static void iwl_mod_ht40_chan_info(struct device *dev, + struct iwl_eeprom_data *data, int n_channels, + enum ieee80211_band band, u16 channel, + const struct iwl_eeprom_channel *eeprom_ch, + u8 clear_ht40_extension_channel) +{ + struct ieee80211_channel *chan = NULL; + int i; + + for (i = 0; i < n_channels; i++) { + if (data->channels[i].band != band) + continue; + if (data->channels[i].hw_value != channel) + continue; + chan = &data->channels[i]; + break; + } + + if (!chan) + return; + + IWL_DEBUG_EEPROM(dev, + "HT40 Ch. %d [%sGHz] %s%s%s%s%s(0x%02x %ddBm): Ad-Hoc %ssupported\n", + channel, + band == IEEE80211_BAND_5GHZ ? "5.2" : "2.4", + CHECK_AND_PRINT(IBSS), + CHECK_AND_PRINT(ACTIVE), + CHECK_AND_PRINT(RADAR), + CHECK_AND_PRINT(WIDE), + CHECK_AND_PRINT(DFS), + eeprom_ch->flags, + eeprom_ch->max_power_avg, + ((eeprom_ch->flags & EEPROM_CHANNEL_IBSS) && + !(eeprom_ch->flags & EEPROM_CHANNEL_RADAR)) ? 
"" + : "not "); + + if (eeprom_ch->flags & EEPROM_CHANNEL_VALID) + chan->flags &= ~clear_ht40_extension_channel; +} + +#define CHECK_AND_PRINT_I(x) \ + ((eeprom_ch_info[ch_idx].flags & EEPROM_CHANNEL_##x) ? # x " " : "") + +static int iwl_init_channel_map(struct device *dev, const struct iwl_cfg *cfg, + struct iwl_eeprom_data *data, + const u8 *eeprom, size_t eeprom_size) +{ + int band, ch_idx; + const struct iwl_eeprom_channel *eeprom_ch_info; + const u8 *eeprom_ch_array; + int eeprom_ch_count; + int n_channels = 0; + + /* + * Loop through the 5 EEPROM bands and add them to the parse list + */ + for (band = 1; band <= 5; band++) { + struct ieee80211_channel *channel; + + iwl_init_band_reference(cfg, eeprom, eeprom_size, band, + &eeprom_ch_count, &eeprom_ch_info, + &eeprom_ch_array); + + /* Loop through each band adding each of the channels */ + for (ch_idx = 0; ch_idx < eeprom_ch_count; ch_idx++) { + const struct iwl_eeprom_channel *eeprom_ch; + + eeprom_ch = &eeprom_ch_info[ch_idx]; + + if (!(eeprom_ch->flags & EEPROM_CHANNEL_VALID)) { + IWL_DEBUG_EEPROM(dev, + "Ch. %d Flags %x [%sGHz] - No traffic\n", + eeprom_ch_array[ch_idx], + eeprom_ch_info[ch_idx].flags, + (band != 1) ? "5.2" : "2.4"); + continue; + } + + channel = &data->channels[n_channels]; + n_channels++; + + channel->hw_value = eeprom_ch_array[ch_idx]; + channel->band = (band == 1) ? IEEE80211_BAND_2GHZ + : IEEE80211_BAND_5GHZ; + channel->center_freq = + ieee80211_channel_to_frequency( + channel->hw_value, channel->band); + + /* set no-HT40, will enable as appropriate later */ + channel->flags = IEEE80211_CHAN_NO_HT40; + + if (!(eeprom_ch->flags & EEPROM_CHANNEL_IBSS)) + channel->flags |= IEEE80211_CHAN_NO_IBSS; + + if (!(eeprom_ch->flags & EEPROM_CHANNEL_ACTIVE)) + channel->flags |= IEEE80211_CHAN_PASSIVE_SCAN; + + if (eeprom_ch->flags & EEPROM_CHANNEL_RADAR) + channel->flags |= IEEE80211_CHAN_RADAR; + + /* Initialize regulatory-based run-time data */ + channel->max_power = + eeprom_ch_info[ch_idx].max_power_avg; + IWL_DEBUG_EEPROM(dev, + "Ch. %d [%sGHz] %s%s%s%s%s%s(0x%02x %ddBm): Ad-Hoc %ssupported\n", + channel->hw_value, + (band != 1) ? "5.2" : "2.4", + CHECK_AND_PRINT_I(VALID), + CHECK_AND_PRINT_I(IBSS), + CHECK_AND_PRINT_I(ACTIVE), + CHECK_AND_PRINT_I(RADAR), + CHECK_AND_PRINT_I(WIDE), + CHECK_AND_PRINT_I(DFS), + eeprom_ch_info[ch_idx].flags, + eeprom_ch_info[ch_idx].max_power_avg, + ((eeprom_ch_info[ch_idx].flags & + EEPROM_CHANNEL_IBSS) && + !(eeprom_ch_info[ch_idx].flags & + EEPROM_CHANNEL_RADAR)) + ? 
"" : "not "); + } + } + + if (cfg->eeprom_params->enhanced_txpower) { + /* + * for newer device (6000 series and up) + * EEPROM contain enhanced tx power information + * driver need to process addition information + * to determine the max channel tx power limits + */ + iwl_eeprom_enhanced_txpower(dev, data, eeprom, eeprom_size, + n_channels); + } else { + /* All others use data from channel map */ + int i; + + data->max_tx_pwr_half_dbm = -128; + + for (i = 0; i < n_channels; i++) + data->max_tx_pwr_half_dbm = + max_t(s8, data->max_tx_pwr_half_dbm, + data->channels[i].max_power * 2); + } + + /* Check if we do have HT40 channels */ + if (cfg->eeprom_params->regulatory_bands[5] == + EEPROM_REGULATORY_BAND_NO_HT40 && + cfg->eeprom_params->regulatory_bands[6] == + EEPROM_REGULATORY_BAND_NO_HT40) + return n_channels; + + /* Two additional EEPROM bands for 2.4 and 5 GHz HT40 channels */ + for (band = 6; band <= 7; band++) { + enum ieee80211_band ieeeband; + + iwl_init_band_reference(cfg, eeprom, eeprom_size, band, + &eeprom_ch_count, &eeprom_ch_info, + &eeprom_ch_array); + + /* EEPROM band 6 is 2.4, band 7 is 5 GHz */ + ieeeband = (band == 6) ? IEEE80211_BAND_2GHZ + : IEEE80211_BAND_5GHZ; + + /* Loop through each band adding each of the channels */ + for (ch_idx = 0; ch_idx < eeprom_ch_count; ch_idx++) { + /* Set up driver's info for lower half */ + iwl_mod_ht40_chan_info(dev, data, n_channels, ieeeband, + eeprom_ch_array[ch_idx], + &eeprom_ch_info[ch_idx], + IEEE80211_CHAN_NO_HT40PLUS); + + /* Set up driver's info for upper half */ + iwl_mod_ht40_chan_info(dev, data, n_channels, ieeeband, + eeprom_ch_array[ch_idx] + 4, + &eeprom_ch_info[ch_idx], + IEEE80211_CHAN_NO_HT40MINUS); + } + } + + return n_channels; +} + +static int iwl_init_sband_channels(struct iwl_eeprom_data *data, + struct ieee80211_supported_band *sband, + int n_channels, enum ieee80211_band band) +{ + struct ieee80211_channel *chan = &data->channels[0]; + int n = 0, idx = 0; + + while (chan->band != band && idx < n_channels) + chan = &data->channels[++idx]; + + sband->channels = &data->channels[idx]; + + while (chan->band == band && idx < n_channels) { + chan = &data->channels[++idx]; + n++; + } + + sband->n_channels = n; + + return n; +} + +#define MAX_BIT_RATE_40_MHZ 150 /* Mbps */ +#define MAX_BIT_RATE_20_MHZ 72 /* Mbps */ + +static void iwl_init_ht_hw_capab(const struct iwl_cfg *cfg, + struct iwl_eeprom_data *data, + struct ieee80211_sta_ht_cap *ht_info, + enum ieee80211_band band) +{ + int max_bit_rate = 0; + u8 rx_chains; + u8 tx_chains; + + tx_chains = hweight8(data->valid_tx_ant); + if (cfg->rx_with_siso_diversity) + rx_chains = 1; + else + rx_chains = hweight8(data->valid_rx_ant); + + if (!(data->sku & EEPROM_SKU_CAP_11N_ENABLE) || !cfg->ht_params) { + ht_info->ht_supported = false; + return; + } + + ht_info->ht_supported = true; + ht_info->cap = 0; + + if (iwlwifi_mod_params.amsdu_size_8K) + ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU; + + ht_info->ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K; + ht_info->ampdu_density = IEEE80211_HT_MPDU_DENSITY_4; + + ht_info->mcs.rx_mask[0] = 0xFF; + if (rx_chains >= 2) + ht_info->mcs.rx_mask[1] = 0xFF; + if (rx_chains >= 3) + ht_info->mcs.rx_mask[2] = 0xFF; + + if (cfg->ht_params->ht_greenfield_support) + ht_info->cap |= IEEE80211_HT_CAP_GRN_FLD; + ht_info->cap |= IEEE80211_HT_CAP_SGI_20; + + max_bit_rate = MAX_BIT_RATE_20_MHZ; + + if (cfg->ht_params->ht40_bands & BIT(band)) { + ht_info->cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40; + ht_info->cap |= IEEE80211_HT_CAP_SGI_40; + 
ht_info->mcs.rx_mask[4] = 0x01; + max_bit_rate = MAX_BIT_RATE_40_MHZ; + } + + /* Highest supported Rx data rate */ + max_bit_rate *= rx_chains; + WARN_ON(max_bit_rate & ~IEEE80211_HT_MCS_RX_HIGHEST_MASK); + ht_info->mcs.rx_highest = cpu_to_le16(max_bit_rate); + + /* Tx MCS capabilities */ + ht_info->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED; + if (tx_chains != rx_chains) { + ht_info->mcs.tx_params |= IEEE80211_HT_MCS_TX_RX_DIFF; + ht_info->mcs.tx_params |= ((tx_chains - 1) << + IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT); + } +} + +static void iwl_init_sbands(struct device *dev, const struct iwl_cfg *cfg, + struct iwl_eeprom_data *data, + const u8 *eeprom, size_t eeprom_size) +{ + int n_channels = iwl_init_channel_map(dev, cfg, data, + eeprom, eeprom_size); + int n_used = 0; + struct ieee80211_supported_band *sband; + + sband = &data->bands[IEEE80211_BAND_2GHZ]; + sband->band = IEEE80211_BAND_2GHZ; + sband->bitrates = &iwl_cfg80211_rates[RATES_24_OFFS]; + sband->n_bitrates = N_RATES_24; + n_used += iwl_init_sband_channels(data, sband, n_channels, + IEEE80211_BAND_2GHZ); + iwl_init_ht_hw_capab(cfg, data, &sband->ht_cap, IEEE80211_BAND_2GHZ); + + sband = &data->bands[IEEE80211_BAND_5GHZ]; + sband->band = IEEE80211_BAND_5GHZ; + sband->bitrates = &iwl_cfg80211_rates[RATES_52_OFFS]; + sband->n_bitrates = N_RATES_52; + n_used += iwl_init_sband_channels(data, sband, n_channels, + IEEE80211_BAND_5GHZ); + iwl_init_ht_hw_capab(cfg, data, &sband->ht_cap, IEEE80211_BAND_5GHZ); + + if (n_channels != n_used) + IWL_ERR_DEV(dev, "EEPROM: used only %d of %d channels\n", + n_used, n_channels); +} + +/* EEPROM data functions */ + +struct iwl_eeprom_data * +iwl_parse_eeprom_data(struct device *dev, const struct iwl_cfg *cfg, + const u8 *eeprom, size_t eeprom_size) +{ + struct iwl_eeprom_data *data; + const void *tmp; + + if (WARN_ON(!cfg || !cfg->eeprom_params)) + return NULL; + + data = kzalloc(sizeof(*data) + + sizeof(struct ieee80211_channel) * IWL_NUM_CHANNELS, + GFP_KERNEL); + if (!data) + return NULL; + + /* get MAC address(es) */ + tmp = iwl_eeprom_query_addr(eeprom, eeprom_size, EEPROM_MAC_ADDRESS); + if (!tmp) + goto err_free; + memcpy(data->hw_addr, tmp, ETH_ALEN); + data->n_hw_addrs = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_NUM_MAC_ADDRESS); + + if (iwl_eeprom_read_calib(eeprom, eeprom_size, data)) + goto err_free; + + tmp = iwl_eeprom_query_addr(eeprom, eeprom_size, EEPROM_XTAL); + if (!tmp) + goto err_free; + memcpy(data->xtal_calib, tmp, sizeof(data->xtal_calib)); + + tmp = iwl_eeprom_query_addr(eeprom, eeprom_size, + EEPROM_RAW_TEMPERATURE); + if (!tmp) + goto err_free; + data->raw_temperature = *(__le16 *)tmp; + + tmp = iwl_eeprom_query_addr(eeprom, eeprom_size, + EEPROM_KELVIN_TEMPERATURE); + if (!tmp) + goto err_free; + data->kelvin_temperature = *(__le16 *)tmp; + data->kelvin_voltage = *((__le16 *)tmp + 1); + + data->radio_cfg = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_RADIO_CONFIG); + data->sku = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_SKU_CAP); + if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_ALL) + data->sku &= ~EEPROM_SKU_CAP_11N_ENABLE; + + data->eeprom_version = iwl_eeprom_query16(eeprom, eeprom_size, + EEPROM_VERSION); + + data->valid_tx_ant = EEPROM_RF_CFG_TX_ANT_MSK(data->radio_cfg); + data->valid_rx_ant = EEPROM_RF_CFG_RX_ANT_MSK(data->radio_cfg); + + /* check overrides (some devices have wrong EEPROM) */ + if (cfg->valid_tx_ant) + data->valid_tx_ant = cfg->valid_tx_ant; + if (cfg->valid_rx_ant) + data->valid_rx_ant = cfg->valid_rx_ant; + + if 
(!data->valid_tx_ant || !data->valid_rx_ant) { + IWL_ERR_DEV(dev, "invalid antennas (0x%x, 0x%x)\n", + data->valid_tx_ant, data->valid_rx_ant); + goto err_free; + } + + iwl_init_sbands(dev, cfg, data, eeprom, eeprom_size); + + return data; + err_free: + kfree(data); + return NULL; +} +EXPORT_SYMBOL_GPL(iwl_parse_eeprom_data); + +/* helper functions */ +int iwl_eeprom_check_version(struct iwl_eeprom_data *data, + struct iwl_trans *trans) +{ + if (data->eeprom_version >= trans->cfg->eeprom_ver || + data->calib_version >= trans->cfg->eeprom_calib_ver) { + IWL_INFO(trans, "device EEPROM VER=0x%x, CALIB=0x%x\n", + data->eeprom_version, data->calib_version); + return 0; + } + + IWL_ERR(trans, + "Unsupported (too old) EEPROM VER=0x%x < 0x%x CALIB=0x%x < 0x%x\n", + data->eeprom_version, trans->cfg->eeprom_ver, + data->calib_version, trans->cfg->eeprom_calib_ver); + return -EINVAL; +} +EXPORT_SYMBOL_GPL(iwl_eeprom_check_version); diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.h b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.h new file mode 100644 index 000000000000..9c07c670a1ce --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-eeprom-parse.h @@ -0,0 +1,138 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + *****************************************************************************/ +#ifndef __iwl_eeprom_parse_h__ +#define __iwl_eeprom_parse_h__ + +#include <linux/types.h> +#include <linux/if_ether.h> +#include "iwl-trans.h" + +/* SKU Capabilities (actual values from EEPROM definition) */ +#define EEPROM_SKU_CAP_BAND_24GHZ (1 << 4) +#define EEPROM_SKU_CAP_BAND_52GHZ (1 << 5) +#define EEPROM_SKU_CAP_11N_ENABLE (1 << 6) +#define EEPROM_SKU_CAP_AMT_ENABLE (1 << 7) +#define EEPROM_SKU_CAP_IPAN_ENABLE (1 << 8) + +/* radio config bits (actual values from EEPROM definition) */ +#define EEPROM_RF_CFG_TYPE_MSK(x) (x & 0x3) /* bits 0-1 */ +#define EEPROM_RF_CFG_STEP_MSK(x) ((x >> 2) & 0x3) /* bits 2-3 */ +#define EEPROM_RF_CFG_DASH_MSK(x) ((x >> 4) & 0x3) /* bits 4-5 */ +#define EEPROM_RF_CFG_PNUM_MSK(x) ((x >> 6) & 0x3) /* bits 6-7 */ +#define EEPROM_RF_CFG_TX_ANT_MSK(x) ((x >> 8) & 0xF) /* bits 8-11 */ +#define EEPROM_RF_CFG_RX_ANT_MSK(x) ((x >> 12) & 0xF) /* bits 12-15 */ + +struct iwl_eeprom_data { + int n_hw_addrs; + u8 hw_addr[ETH_ALEN]; + + u16 radio_config; + + u8 calib_version; + __le16 calib_voltage; + + __le16 raw_temperature; + __le16 kelvin_temperature; + __le16 kelvin_voltage; + __le16 xtal_calib[2]; + + u16 sku; + u16 radio_cfg; + u16 eeprom_version; + s8 max_tx_pwr_half_dbm; + + u8 valid_tx_ant, valid_rx_ant; + + struct ieee80211_supported_band bands[IEEE80211_NUM_BANDS]; + struct ieee80211_channel channels[]; +}; + +/** + * iwl_parse_eeprom_data - parse EEPROM data and return values + * + * @dev: device pointer we're parsing for, for debug only + * @cfg: device configuration for parsing and overrides + * @eeprom: the EEPROM data + * @eeprom_size: length of the EEPROM data + * + * This function parses all EEPROM values we need and then + * returns a (newly allocated) struct containing all the + * relevant values for driver use. The struct must be freed + * later with iwl_free_eeprom_data(). + */ +struct iwl_eeprom_data * +iwl_parse_eeprom_data(struct device *dev, const struct iwl_cfg *cfg, + const u8 *eeprom, size_t eeprom_size); + +/** + * iwl_free_eeprom_data - free EEPROM data + * @data: the data to free + */ +static inline void iwl_free_eeprom_data(struct iwl_eeprom_data *data) +{ + kfree(data); +} + +int iwl_eeprom_check_version(struct iwl_eeprom_data *data, + struct iwl_trans *trans); + +#endif /* __iwl_eeprom_parse_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom-read.c b/drivers/net/wireless/iwlwifi/iwl-eeprom-read.c new file mode 100644 index 000000000000..27c7da3c6ed1 --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-eeprom-read.c @@ -0,0 +1,463 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + *****************************************************************************/ +#include <linux/types.h> +#include <linux/slab.h> +#include <linux/export.h> + +#include "iwl-debug.h" +#include "iwl-eeprom-read.h" +#include "iwl-io.h" +#include "iwl-prph.h" +#include "iwl-csr.h" + +/* + * EEPROM access time values: + * + * Driver initiates EEPROM read by writing byte address << 1 to CSR_EEPROM_REG. + * Driver then polls CSR_EEPROM_REG for CSR_EEPROM_REG_READ_VALID_MSK (0x1). + * When polling, wait 10 uSec between polling loops, up to a maximum 5000 uSec. + * Driver reads 16-bit value from bits 31-16 of CSR_EEPROM_REG. 
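The comment above spells out the raw EEPROM read handshake step by step. As a minimal sketch of that sequence, assuming the iwl_write32()/iwl_poll_bit()/iwl_read32() accessors used by the functions later in this file, and with an invented wrapper name that is not part of the patch, a single 16-bit read could look like:

static int iwl_eeprom_read_word_sketch(struct iwl_trans *trans, u16 addr,
				       u16 *word)
{
	int ret;

	/* write the byte address, shifted left by one, to CSR_EEPROM_REG */
	iwl_write32(trans, CSR_EEPROM_REG,
		    CSR_EEPROM_REG_MSK_ADDR & (addr << 1));

	/* poll for CSR_EEPROM_REG_READ_VALID_MSK, up to 5000 uSec */
	ret = iwl_poll_bit(trans, CSR_EEPROM_REG,
			   CSR_EEPROM_REG_READ_VALID_MSK,
			   CSR_EEPROM_REG_READ_VALID_MSK,
			   IWL_EEPROM_ACCESS_TIMEOUT);
	if (ret < 0)
		return ret;

	/* the 16-bit value comes back in bits 31-16 of CSR_EEPROM_REG */
	*word = iwl_read32(trans, CSR_EEPROM_REG) >> 16;
	return 0;
}

The OTP path below (iwl_read_otp_word) layers ECC status checks on top of this same sequence.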
+ */ +#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */ + +#define IWL_EEPROM_SEM_TIMEOUT 10 /* microseconds */ +#define IWL_EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ + + +/* + * The device's EEPROM semaphore prevents conflicts between driver and uCode + * when accessing the EEPROM; each access is a series of pulses to/from the + * EEPROM chip, not a single event, so even reads could conflict if they + * weren't arbitrated by the semaphore. + */ + +#define EEPROM_SEM_TIMEOUT 10 /* milliseconds */ +#define EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ + +static int iwl_eeprom_acquire_semaphore(struct iwl_trans *trans) +{ + u16 count; + int ret; + + for (count = 0; count < EEPROM_SEM_RETRY_LIMIT; count++) { + /* Request semaphore */ + iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); + + /* See if we got it */ + ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, + EEPROM_SEM_TIMEOUT); + if (ret >= 0) { + IWL_DEBUG_EEPROM(trans->dev, + "Acquired semaphore after %d tries.\n", + count+1); + return ret; + } + } + + return ret; +} + +static void iwl_eeprom_release_semaphore(struct iwl_trans *trans) +{ + iwl_clear_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); +} + +static int iwl_eeprom_verify_signature(struct iwl_trans *trans, bool nvm_is_otp) +{ + u32 gp = iwl_read32(trans, CSR_EEPROM_GP) & CSR_EEPROM_GP_VALID_MSK; + + IWL_DEBUG_EEPROM(trans->dev, "EEPROM signature=0x%08x\n", gp); + + switch (gp) { + case CSR_EEPROM_GP_BAD_SIG_EEP_GOOD_SIG_OTP: + if (!nvm_is_otp) { + IWL_ERR(trans, "EEPROM with bad signature: 0x%08x\n", + gp); + return -ENOENT; + } + return 0; + case CSR_EEPROM_GP_GOOD_SIG_EEP_LESS_THAN_4K: + case CSR_EEPROM_GP_GOOD_SIG_EEP_MORE_THAN_4K: + if (nvm_is_otp) { + IWL_ERR(trans, "OTP with bad signature: 0x%08x\n", gp); + return -ENOENT; + } + return 0; + case CSR_EEPROM_GP_BAD_SIGNATURE_BOTH_EEP_AND_OTP: + default: + IWL_ERR(trans, + "bad EEPROM/OTP signature, type=%s, EEPROM_GP=0x%08x\n", + nvm_is_otp ? 
"OTP" : "EEPROM", gp); + return -ENOENT; + } +} + +/****************************************************************************** + * + * OTP related functions + * +******************************************************************************/ + +static void iwl_set_otp_access_absolute(struct iwl_trans *trans) +{ + iwl_read32(trans, CSR_OTP_GP_REG); + + iwl_clear_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_OTP_ACCESS_MODE); +} + +static int iwl_nvm_is_otp(struct iwl_trans *trans) +{ + u32 otpgp; + + /* OTP only valid for CP/PP and after */ + switch (trans->hw_rev & CSR_HW_REV_TYPE_MSK) { + case CSR_HW_REV_TYPE_NONE: + IWL_ERR(trans, "Unknown hardware type\n"); + return -EIO; + case CSR_HW_REV_TYPE_5300: + case CSR_HW_REV_TYPE_5350: + case CSR_HW_REV_TYPE_5100: + case CSR_HW_REV_TYPE_5150: + return 0; + default: + otpgp = iwl_read32(trans, CSR_OTP_GP_REG); + if (otpgp & CSR_OTP_GP_REG_DEVICE_SELECT) + return 1; + return 0; + } +} + +static int iwl_init_otp_access(struct iwl_trans *trans) +{ + int ret; + + /* Enable 40MHz radio clock */ + iwl_write32(trans, CSR_GP_CNTRL, + iwl_read32(trans, CSR_GP_CNTRL) | + CSR_GP_CNTRL_REG_FLAG_INIT_DONE); + + /* wait for clock to be ready */ + ret = iwl_poll_bit(trans, CSR_GP_CNTRL, + CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, + CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, + 25000); + if (ret < 0) { + IWL_ERR(trans, "Time out access OTP\n"); + } else { + iwl_set_bits_prph(trans, APMG_PS_CTRL_REG, + APMG_PS_CTRL_VAL_RESET_REQ); + udelay(5); + iwl_clear_bits_prph(trans, APMG_PS_CTRL_REG, + APMG_PS_CTRL_VAL_RESET_REQ); + + /* + * CSR auto clock gate disable bit - + * this is only applicable for HW with OTP shadow RAM + */ + if (trans->cfg->base_params->shadow_ram_support) + iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG, + CSR_RESET_LINK_PWR_MGMT_DISABLED); + } + return ret; +} + +static int iwl_read_otp_word(struct iwl_trans *trans, u16 addr, + __le16 *eeprom_data) +{ + int ret = 0; + u32 r; + u32 otpgp; + + iwl_write32(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); + ret = iwl_poll_bit(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_READ_VALID_MSK, + CSR_EEPROM_REG_READ_VALID_MSK, + IWL_EEPROM_ACCESS_TIMEOUT); + if (ret < 0) { + IWL_ERR(trans, "Time out reading OTP[%d]\n", addr); + return ret; + } + r = iwl_read32(trans, CSR_EEPROM_REG); + /* check for ECC errors: */ + otpgp = iwl_read32(trans, CSR_OTP_GP_REG); + if (otpgp & CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK) { + /* stop in this case */ + /* set the uncorrectable OTP ECC bit for acknowledgement */ + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); + IWL_ERR(trans, "Uncorrectable OTP ECC error, abort OTP read\n"); + return -EINVAL; + } + if (otpgp & CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK) { + /* continue in this case */ + /* set the correctable OTP ECC bit for acknowledgement */ + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK); + IWL_ERR(trans, "Correctable OTP ECC error, continue read\n"); + } + *eeprom_data = cpu_to_le16(r >> 16); + return 0; +} + +/* + * iwl_is_otp_empty: check for empty OTP + */ +static bool iwl_is_otp_empty(struct iwl_trans *trans) +{ + u16 next_link_addr = 0; + __le16 link_value; + bool is_empty = false; + + /* locate the beginning of OTP link list */ + if (!iwl_read_otp_word(trans, next_link_addr, &link_value)) { + if (!link_value) { + IWL_ERR(trans, "OTP is empty\n"); + is_empty = true; + } + } else { + IWL_ERR(trans, "Unable to read first block of OTP list.\n"); + is_empty = true; + } + + return is_empty; +} + + +/* + * 
iwl_find_otp_image: find EEPROM image in OTP + * finding the OTP block that contains the EEPROM image. + * the last valid block on the link list (the block _before_ the last block) + * is the block we should read and used to configure the device. + * If all the available OTP blocks are full, the last block will be the block + * we should read and used to configure the device. + * only perform this operation if shadow RAM is disabled + */ +static int iwl_find_otp_image(struct iwl_trans *trans, + u16 *validblockaddr) +{ + u16 next_link_addr = 0, valid_addr; + __le16 link_value = 0; + int usedblocks = 0; + + /* set addressing mode to absolute to traverse the link list */ + iwl_set_otp_access_absolute(trans); + + /* checking for empty OTP or error */ + if (iwl_is_otp_empty(trans)) + return -EINVAL; + + /* + * start traverse link list + * until reach the max number of OTP blocks + * different devices have different number of OTP blocks + */ + do { + /* save current valid block address + * check for more block on the link list + */ + valid_addr = next_link_addr; + next_link_addr = le16_to_cpu(link_value) * sizeof(u16); + IWL_DEBUG_EEPROM(trans->dev, "OTP blocks %d addr 0x%x\n", + usedblocks, next_link_addr); + if (iwl_read_otp_word(trans, next_link_addr, &link_value)) + return -EINVAL; + if (!link_value) { + /* + * reach the end of link list, return success and + * set address point to the starting address + * of the image + */ + *validblockaddr = valid_addr; + /* skip first 2 bytes (link list pointer) */ + *validblockaddr += 2; + return 0; + } + /* more in the link list, continue */ + usedblocks++; + } while (usedblocks <= trans->cfg->base_params->max_ll_items); + + /* OTP has no valid blocks */ + IWL_DEBUG_EEPROM(trans->dev, "OTP has no valid blocks\n"); + return -EINVAL; +} + +/** + * iwl_read_eeprom - read EEPROM contents + * + * Load the EEPROM contents from adapter and return it + * and its size. + * + * NOTE: This routine uses the non-debug IO access functions. 
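The kernel-doc above, together with iwl_parse_eeprom_data(), iwl_eeprom_check_version() and iwl_free_eeprom_data() from iwl-eeprom-parse.h, implies a read-then-parse calling sequence. The following is only a sketch of how a caller might combine them; the function name and the surrounding error handling are illustrative, not code from this patch:

static struct iwl_eeprom_data *sketch_load_eeprom(struct iwl_trans *trans)
{
	struct iwl_eeprom_data *data;
	size_t eeprom_size;
	u8 *eeprom;

	if (iwl_read_eeprom(trans, &eeprom, &eeprom_size))
		return NULL;

	data = iwl_parse_eeprom_data(trans->dev, trans->cfg,
				     eeprom, eeprom_size);
	/* assuming the parsed struct is self-contained, as its fields
	 * suggest, the raw image is no longer needed after parsing */
	kfree(eeprom);

	if (data && iwl_eeprom_check_version(data, trans)) {
		iwl_free_eeprom_data(data);
		data = NULL;
	}

	return data;
}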
+ */ +int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size) +{ + __le16 *e; + u32 gp = iwl_read32(trans, CSR_EEPROM_GP); + int sz; + int ret; + u16 addr; + u16 validblockaddr = 0; + u16 cache_addr = 0; + int nvm_is_otp; + + if (!eeprom || !eeprom_size) + return -EINVAL; + + nvm_is_otp = iwl_nvm_is_otp(trans); + if (nvm_is_otp < 0) + return nvm_is_otp; + + sz = trans->cfg->base_params->eeprom_size; + IWL_DEBUG_EEPROM(trans->dev, "NVM size = %d\n", sz); + + e = kmalloc(sz, GFP_KERNEL); + if (!e) + return -ENOMEM; + + ret = iwl_eeprom_verify_signature(trans, nvm_is_otp); + if (ret < 0) { + IWL_ERR(trans, "EEPROM not found, EEPROM_GP=0x%08x\n", gp); + goto err_free; + } + + /* Make sure driver (instead of uCode) is allowed to read EEPROM */ + ret = iwl_eeprom_acquire_semaphore(trans); + if (ret < 0) { + IWL_ERR(trans, "Failed to acquire EEPROM semaphore.\n"); + goto err_free; + } + + if (nvm_is_otp) { + ret = iwl_init_otp_access(trans); + if (ret) { + IWL_ERR(trans, "Failed to initialize OTP access.\n"); + goto err_unlock; + } + + iwl_write32(trans, CSR_EEPROM_GP, + iwl_read32(trans, CSR_EEPROM_GP) & + ~CSR_EEPROM_GP_IF_OWNER_MSK); + + iwl_set_bit(trans, CSR_OTP_GP_REG, + CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK | + CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); + /* traversing the linked list if no shadow ram supported */ + if (!trans->cfg->base_params->shadow_ram_support) { + ret = iwl_find_otp_image(trans, &validblockaddr); + if (ret) + goto err_unlock; + } + for (addr = validblockaddr; addr < validblockaddr + sz; + addr += sizeof(u16)) { + __le16 eeprom_data; + + ret = iwl_read_otp_word(trans, addr, &eeprom_data); + if (ret) + goto err_unlock; + e[cache_addr / 2] = eeprom_data; + cache_addr += sizeof(u16); + } + } else { + /* eeprom is an array of 16bit values */ + for (addr = 0; addr < sz; addr += sizeof(u16)) { + u32 r; + + iwl_write32(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); + + ret = iwl_poll_bit(trans, CSR_EEPROM_REG, + CSR_EEPROM_REG_READ_VALID_MSK, + CSR_EEPROM_REG_READ_VALID_MSK, + IWL_EEPROM_ACCESS_TIMEOUT); + if (ret < 0) { + IWL_ERR(trans, + "Time out reading EEPROM[%d]\n", addr); + goto err_unlock; + } + r = iwl_read32(trans, CSR_EEPROM_REG); + e[addr / 2] = cpu_to_le16(r >> 16); + } + } + + IWL_DEBUG_EEPROM(trans->dev, "NVM Type: %s\n", + nvm_is_otp ? "OTP" : "EEPROM"); + + iwl_eeprom_release_semaphore(trans); + + *eeprom_size = sz; + *eeprom = (u8 *)e; + return 0; + + err_unlock: + iwl_eeprom_release_semaphore(trans); + err_free: + kfree(e); + + return ret; +} +EXPORT_SYMBOL_GPL(iwl_read_eeprom); diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom-read.h b/drivers/net/wireless/iwlwifi/iwl-eeprom-read.h new file mode 100644 index 000000000000..1337c9d36fee --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-eeprom-read.h @@ -0,0 +1,70 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + *****************************************************************************/ + +#ifndef __iwl_eeprom_h__ +#define __iwl_eeprom_h__ + +#include "iwl-trans.h" + +int iwl_read_eeprom(struct iwl_trans *trans, u8 **eeprom, size_t *eeprom_size); + +#endif /* __iwl_eeprom_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom.c b/drivers/net/wireless/iwlwifi/iwl-eeprom.c deleted file mode 100644 index b8e2b223ac36..000000000000 --- a/drivers/net/wireless/iwlwifi/iwl-eeprom.c +++ /dev/null @@ -1,1148 +0,0 @@ -/****************************************************************************** - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, - * USA - * - * The full GNU General Public License is included in this distribution - * in the file called LICENSE.GPL. - * - * Contact Information: - * Intel Linux Wireless <ilw@linux.intel.com> - * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 - * - * BSD LICENSE - * - * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - *****************************************************************************/ - - -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/slab.h> -#include <linux/init.h> - -#include <net/mac80211.h> - -#include "iwl-dev.h" -#include "iwl-debug.h" -#include "iwl-agn.h" -#include "iwl-eeprom.h" -#include "iwl-io.h" -#include "iwl-prph.h" - -/************************** EEPROM BANDS **************************** - * - * The iwl_eeprom_band definitions below provide the mapping from the - * EEPROM contents to the specific channel number supported for each - * band. - * - * For example, iwl_priv->eeprom.band_3_channels[4] from the band_3 - * definition below maps to physical channel 42 in the 5.2GHz spectrum. - * The specific geography and calibration information for that channel - * is contained in the eeprom map itself. - * - * During init, we copy the eeprom information and channel map - * information into priv->channel_info_24/52 and priv->channel_map_24/52 - * - * channel_map_24/52 provides the index in the channel_info array for a - * given channel. We have to have two separate maps as there is channel - * overlap with the 2.4GHz and 5.2GHz spectrum as seen in band_1 and - * band_2 - * - * A value of 0xff stored in the channel_map indicates that the channel - * is not supported by the hardware at all. 
- * - * A value of 0xfe in the channel_map indicates that the channel is not - * valid for Tx with the current hardware. This means that - * while the system can tune and receive on a given channel, it may not - * be able to associate or transmit any frames on that - * channel. There is no corresponding channel information for that - * entry. - * - *********************************************************************/ - -/* 2.4 GHz */ -const u8 iwl_eeprom_band_1[14] = { - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 -}; - -/* 5.2 GHz bands */ -static const u8 iwl_eeprom_band_2[] = { /* 4915-5080MHz */ - 183, 184, 185, 187, 188, 189, 192, 196, 7, 8, 11, 12, 16 -}; - -static const u8 iwl_eeprom_band_3[] = { /* 5170-5320MHz */ - 34, 36, 38, 40, 42, 44, 46, 48, 52, 56, 60, 64 -}; - -static const u8 iwl_eeprom_band_4[] = { /* 5500-5700MHz */ - 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140 -}; - -static const u8 iwl_eeprom_band_5[] = { /* 5725-5825MHz */ - 145, 149, 153, 157, 161, 165 -}; - -static const u8 iwl_eeprom_band_6[] = { /* 2.4 ht40 channel */ - 1, 2, 3, 4, 5, 6, 7 -}; - -static const u8 iwl_eeprom_band_7[] = { /* 5.2 ht40 channel */ - 36, 44, 52, 60, 100, 108, 116, 124, 132, 149, 157 -}; - -/****************************************************************************** - * - * generic NVM functions - * -******************************************************************************/ - -/* - * The device's EEPROM semaphore prevents conflicts between driver and uCode - * when accessing the EEPROM; each access is a series of pulses to/from the - * EEPROM chip, not a single event, so even reads could conflict if they - * weren't arbitrated by the semaphore. - */ - -#define EEPROM_SEM_TIMEOUT 10 /* milliseconds */ -#define EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ - -static int iwl_eeprom_acquire_semaphore(struct iwl_trans *trans) -{ - u16 count; - int ret; - - for (count = 0; count < EEPROM_SEM_RETRY_LIMIT; count++) { - /* Request semaphore */ - iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); - - /* See if we got it */ - ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM, - EEPROM_SEM_TIMEOUT); - if (ret >= 0) { - IWL_DEBUG_EEPROM(trans, - "Acquired semaphore after %d tries.\n", - count+1); - return ret; - } - } - - return ret; -} - -static void iwl_eeprom_release_semaphore(struct iwl_trans *trans) -{ - iwl_clear_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_EEPROM_OWN_SEM); - -} - -static int iwl_eeprom_verify_signature(struct iwl_priv *priv) -{ - u32 gp = iwl_read32(priv->trans, CSR_EEPROM_GP) & - CSR_EEPROM_GP_VALID_MSK; - int ret = 0; - - IWL_DEBUG_EEPROM(priv, "EEPROM signature=0x%08x\n", gp); - switch (gp) { - case CSR_EEPROM_GP_BAD_SIG_EEP_GOOD_SIG_OTP: - if (priv->nvm_device_type != NVM_DEVICE_TYPE_OTP) { - IWL_ERR(priv, "EEPROM with bad signature: 0x%08x\n", - gp); - ret = -ENOENT; - } - break; - case CSR_EEPROM_GP_GOOD_SIG_EEP_LESS_THAN_4K: - case CSR_EEPROM_GP_GOOD_SIG_EEP_MORE_THAN_4K: - if (priv->nvm_device_type != NVM_DEVICE_TYPE_EEPROM) { - IWL_ERR(priv, "OTP with bad signature: 0x%08x\n", gp); - ret = -ENOENT; - } - break; - case CSR_EEPROM_GP_BAD_SIGNATURE_BOTH_EEP_AND_OTP: - default: - IWL_ERR(priv, "bad EEPROM/OTP signature, type=%s, " - "EEPROM_GP=0x%08x\n", - (priv->nvm_device_type == NVM_DEVICE_TYPE_OTP) - ? 
"OTP" : "EEPROM", gp); - ret = -ENOENT; - break; - } - return ret; -} - -u16 iwl_eeprom_query16(struct iwl_priv *priv, size_t offset) -{ - if (!priv->eeprom) - return 0; - return (u16)priv->eeprom[offset] | ((u16)priv->eeprom[offset + 1] << 8); -} - -int iwl_eeprom_check_version(struct iwl_priv *priv) -{ - u16 eeprom_ver; - u16 calib_ver; - - eeprom_ver = iwl_eeprom_query16(priv, EEPROM_VERSION); - calib_ver = iwl_eeprom_calib_version(priv); - - if (eeprom_ver < priv->cfg->eeprom_ver || - calib_ver < priv->cfg->eeprom_calib_ver) - goto err; - - IWL_INFO(priv, "device EEPROM VER=0x%x, CALIB=0x%x\n", - eeprom_ver, calib_ver); - - return 0; -err: - IWL_ERR(priv, "Unsupported (too old) EEPROM VER=0x%x < 0x%x " - "CALIB=0x%x < 0x%x\n", - eeprom_ver, priv->cfg->eeprom_ver, - calib_ver, priv->cfg->eeprom_calib_ver); - return -EINVAL; - -} - -int iwl_eeprom_init_hw_params(struct iwl_priv *priv) -{ - u16 radio_cfg; - - priv->hw_params.sku = iwl_eeprom_query16(priv, EEPROM_SKU_CAP); - if (priv->hw_params.sku & EEPROM_SKU_CAP_11N_ENABLE && - !priv->cfg->ht_params) { - IWL_ERR(priv, "Invalid 11n configuration\n"); - return -EINVAL; - } - - if (!priv->hw_params.sku) { - IWL_ERR(priv, "Invalid device sku\n"); - return -EINVAL; - } - - IWL_INFO(priv, "Device SKU: 0x%X\n", priv->hw_params.sku); - - radio_cfg = iwl_eeprom_query16(priv, EEPROM_RADIO_CONFIG); - - priv->hw_params.valid_tx_ant = EEPROM_RF_CFG_TX_ANT_MSK(radio_cfg); - priv->hw_params.valid_rx_ant = EEPROM_RF_CFG_RX_ANT_MSK(radio_cfg); - - /* check overrides (some devices have wrong EEPROM) */ - if (priv->cfg->valid_tx_ant) - priv->hw_params.valid_tx_ant = priv->cfg->valid_tx_ant; - if (priv->cfg->valid_rx_ant) - priv->hw_params.valid_rx_ant = priv->cfg->valid_rx_ant; - - if (!priv->hw_params.valid_tx_ant || !priv->hw_params.valid_rx_ant) { - IWL_ERR(priv, "Invalid chain (0x%X, 0x%X)\n", - priv->hw_params.valid_tx_ant, - priv->hw_params.valid_rx_ant); - return -EINVAL; - } - - IWL_INFO(priv, "Valid Tx ant: 0x%X, Valid Rx ant: 0x%X\n", - priv->hw_params.valid_tx_ant, priv->hw_params.valid_rx_ant); - - return 0; -} - -u16 iwl_eeprom_calib_version(struct iwl_priv *priv) -{ - struct iwl_eeprom_calib_hdr *hdr; - - hdr = (struct iwl_eeprom_calib_hdr *)iwl_eeprom_query_addr(priv, - EEPROM_CALIB_ALL); - return hdr->version; -} - -static u32 eeprom_indirect_address(struct iwl_priv *priv, u32 address) -{ - u16 offset = 0; - - if ((address & INDIRECT_ADDRESS) == 0) - return address; - - switch (address & INDIRECT_TYPE_MSK) { - case INDIRECT_HOST: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_HOST); - break; - case INDIRECT_GENERAL: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_GENERAL); - break; - case INDIRECT_REGULATORY: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_REGULATORY); - break; - case INDIRECT_TXP_LIMIT: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_TXP_LIMIT); - break; - case INDIRECT_TXP_LIMIT_SIZE: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_TXP_LIMIT_SIZE); - break; - case INDIRECT_CALIBRATION: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_CALIBRATION); - break; - case INDIRECT_PROCESS_ADJST: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_PROCESS_ADJST); - break; - case INDIRECT_OTHERS: - offset = iwl_eeprom_query16(priv, EEPROM_LINK_OTHERS); - break; - default: - IWL_ERR(priv, "illegal indirect type: 0x%X\n", - address & INDIRECT_TYPE_MSK); - break; - } - - /* translate the offset from words to byte */ - return (address & ADDRESS_MSK) + (offset << 1); -} - -const u8 *iwl_eeprom_query_addr(struct iwl_priv *priv, size_t 
offset) -{ - u32 address = eeprom_indirect_address(priv, offset); - BUG_ON(address >= priv->cfg->base_params->eeprom_size); - return &priv->eeprom[address]; -} - -void iwl_eeprom_get_mac(struct iwl_priv *priv, u8 *mac) -{ - const u8 *addr = iwl_eeprom_query_addr(priv, - EEPROM_MAC_ADDRESS); - memcpy(mac, addr, ETH_ALEN); -} - -/****************************************************************************** - * - * OTP related functions - * -******************************************************************************/ - -static void iwl_set_otp_access(struct iwl_trans *trans, - enum iwl_access_mode mode) -{ - iwl_read32(trans, CSR_OTP_GP_REG); - - if (mode == IWL_OTP_ACCESS_ABSOLUTE) - iwl_clear_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_OTP_ACCESS_MODE); - else - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_OTP_ACCESS_MODE); -} - -static int iwl_get_nvm_type(struct iwl_trans *trans, u32 hw_rev) -{ - u32 otpgp; - int nvm_type; - - /* OTP only valid for CP/PP and after */ - switch (hw_rev & CSR_HW_REV_TYPE_MSK) { - case CSR_HW_REV_TYPE_NONE: - IWL_ERR(trans, "Unknown hardware type\n"); - return -ENOENT; - case CSR_HW_REV_TYPE_5300: - case CSR_HW_REV_TYPE_5350: - case CSR_HW_REV_TYPE_5100: - case CSR_HW_REV_TYPE_5150: - nvm_type = NVM_DEVICE_TYPE_EEPROM; - break; - default: - otpgp = iwl_read32(trans, CSR_OTP_GP_REG); - if (otpgp & CSR_OTP_GP_REG_DEVICE_SELECT) - nvm_type = NVM_DEVICE_TYPE_OTP; - else - nvm_type = NVM_DEVICE_TYPE_EEPROM; - break; - } - return nvm_type; -} - -static int iwl_init_otp_access(struct iwl_trans *trans) -{ - int ret; - - /* Enable 40MHz radio clock */ - iwl_write32(trans, CSR_GP_CNTRL, - iwl_read32(trans, CSR_GP_CNTRL) | - CSR_GP_CNTRL_REG_FLAG_INIT_DONE); - - /* wait for clock to be ready */ - ret = iwl_poll_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, - CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, - 25000); - if (ret < 0) - IWL_ERR(trans, "Time out access OTP\n"); - else { - iwl_set_bits_prph(trans, APMG_PS_CTRL_REG, - APMG_PS_CTRL_VAL_RESET_REQ); - udelay(5); - iwl_clear_bits_prph(trans, APMG_PS_CTRL_REG, - APMG_PS_CTRL_VAL_RESET_REQ); - - /* - * CSR auto clock gate disable bit - - * this is only applicable for HW with OTP shadow RAM - */ - if (trans->cfg->base_params->shadow_ram_support) - iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG, - CSR_RESET_LINK_PWR_MGMT_DISABLED); - } - return ret; -} - -static int iwl_read_otp_word(struct iwl_trans *trans, u16 addr, - __le16 *eeprom_data) -{ - int ret = 0; - u32 r; - u32 otpgp; - - iwl_write32(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); - ret = iwl_poll_bit(trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_READ_VALID_MSK, - CSR_EEPROM_REG_READ_VALID_MSK, - IWL_EEPROM_ACCESS_TIMEOUT); - if (ret < 0) { - IWL_ERR(trans, "Time out reading OTP[%d]\n", addr); - return ret; - } - r = iwl_read32(trans, CSR_EEPROM_REG); - /* check for ECC errors: */ - otpgp = iwl_read32(trans, CSR_OTP_GP_REG); - if (otpgp & CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK) { - /* stop in this case */ - /* set the uncorrectable OTP ECC bit for acknowledgement */ - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); - IWL_ERR(trans, "Uncorrectable OTP ECC error, abort OTP read\n"); - return -EINVAL; - } - if (otpgp & CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK) { - /* continue in this case */ - /* set the correctable OTP ECC bit for acknowledgement */ - iwl_set_bit(trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK); - IWL_ERR(trans, "Correctable OTP ECC error, continue read\n"); - } - *eeprom_data = 
cpu_to_le16(r >> 16); - return 0; -} - -/* - * iwl_is_otp_empty: check for empty OTP - */ -static bool iwl_is_otp_empty(struct iwl_trans *trans) -{ - u16 next_link_addr = 0; - __le16 link_value; - bool is_empty = false; - - /* locate the beginning of OTP link list */ - if (!iwl_read_otp_word(trans, next_link_addr, &link_value)) { - if (!link_value) { - IWL_ERR(trans, "OTP is empty\n"); - is_empty = true; - } - } else { - IWL_ERR(trans, "Unable to read first block of OTP list.\n"); - is_empty = true; - } - - return is_empty; -} - - -/* - * iwl_find_otp_image: find EEPROM image in OTP - * finding the OTP block that contains the EEPROM image. - * the last valid block on the link list (the block _before_ the last block) - * is the block we should read and used to configure the device. - * If all the available OTP blocks are full, the last block will be the block - * we should read and used to configure the device. - * only perform this operation if shadow RAM is disabled - */ -static int iwl_find_otp_image(struct iwl_trans *trans, - u16 *validblockaddr) -{ - u16 next_link_addr = 0, valid_addr; - __le16 link_value = 0; - int usedblocks = 0; - - /* set addressing mode to absolute to traverse the link list */ - iwl_set_otp_access(trans, IWL_OTP_ACCESS_ABSOLUTE); - - /* checking for empty OTP or error */ - if (iwl_is_otp_empty(trans)) - return -EINVAL; - - /* - * start traverse link list - * until reach the max number of OTP blocks - * different devices have different number of OTP blocks - */ - do { - /* save current valid block address - * check for more block on the link list - */ - valid_addr = next_link_addr; - next_link_addr = le16_to_cpu(link_value) * sizeof(u16); - IWL_DEBUG_EEPROM(trans, "OTP blocks %d addr 0x%x\n", - usedblocks, next_link_addr); - if (iwl_read_otp_word(trans, next_link_addr, &link_value)) - return -EINVAL; - if (!link_value) { - /* - * reach the end of link list, return success and - * set address point to the starting address - * of the image - */ - *validblockaddr = valid_addr; - /* skip first 2 bytes (link list pointer) */ - *validblockaddr += 2; - return 0; - } - /* more in the link list, continue */ - usedblocks++; - } while (usedblocks <= trans->cfg->base_params->max_ll_items); - - /* OTP has no valid blocks */ - IWL_DEBUG_EEPROM(trans, "OTP has no valid blocks\n"); - return -EINVAL; -} - -/****************************************************************************** - * - * Tx Power related functions - * -******************************************************************************/ -/** - * iwl_get_max_txpower_avg - get the highest tx power from all chains. 
- * find the highest tx power from all chains for the channel - */ -static s8 iwl_get_max_txpower_avg(struct iwl_priv *priv, - struct iwl_eeprom_enhanced_txpwr *enhanced_txpower, - int element, s8 *max_txpower_in_half_dbm) -{ - s8 max_txpower_avg = 0; /* (dBm) */ - - /* Take the highest tx power from any valid chains */ - if ((priv->hw_params.valid_tx_ant & ANT_A) && - (enhanced_txpower[element].chain_a_max > max_txpower_avg)) - max_txpower_avg = enhanced_txpower[element].chain_a_max; - if ((priv->hw_params.valid_tx_ant & ANT_B) && - (enhanced_txpower[element].chain_b_max > max_txpower_avg)) - max_txpower_avg = enhanced_txpower[element].chain_b_max; - if ((priv->hw_params.valid_tx_ant & ANT_C) && - (enhanced_txpower[element].chain_c_max > max_txpower_avg)) - max_txpower_avg = enhanced_txpower[element].chain_c_max; - if (((priv->hw_params.valid_tx_ant == ANT_AB) | - (priv->hw_params.valid_tx_ant == ANT_BC) | - (priv->hw_params.valid_tx_ant == ANT_AC)) && - (enhanced_txpower[element].mimo2_max > max_txpower_avg)) - max_txpower_avg = enhanced_txpower[element].mimo2_max; - if ((priv->hw_params.valid_tx_ant == ANT_ABC) && - (enhanced_txpower[element].mimo3_max > max_txpower_avg)) - max_txpower_avg = enhanced_txpower[element].mimo3_max; - - /* - * max. tx power in EEPROM is in 1/2 dBm format - * convert from 1/2 dBm to dBm (round-up convert) - * but we also do not want to loss 1/2 dBm resolution which - * will impact performance - */ - *max_txpower_in_half_dbm = max_txpower_avg; - return (max_txpower_avg & 0x01) + (max_txpower_avg >> 1); -} - -static void -iwl_eeprom_enh_txp_read_element(struct iwl_priv *priv, - struct iwl_eeprom_enhanced_txpwr *txp, - s8 max_txpower_avg) -{ - int ch_idx; - bool is_ht40 = txp->flags & IWL_EEPROM_ENH_TXP_FL_40MHZ; - enum ieee80211_band band; - - band = txp->flags & IWL_EEPROM_ENH_TXP_FL_BAND_52G ? - IEEE80211_BAND_5GHZ : IEEE80211_BAND_2GHZ; - - for (ch_idx = 0; ch_idx < priv->channel_count; ch_idx++) { - struct iwl_channel_info *ch_info = &priv->channel_info[ch_idx]; - - /* update matching channel or from common data only */ - if (txp->channel != 0 && ch_info->channel != txp->channel) - continue; - - /* update matching band only */ - if (band != ch_info->band) - continue; - - if (ch_info->max_power_avg < max_txpower_avg && !is_ht40) { - ch_info->max_power_avg = max_txpower_avg; - ch_info->curr_txpow = max_txpower_avg; - ch_info->scan_power = max_txpower_avg; - } - - if (is_ht40 && ch_info->ht40_max_power_avg < max_txpower_avg) - ch_info->ht40_max_power_avg = max_txpower_avg; - } -} - -#define EEPROM_TXP_OFFS (0x00 | INDIRECT_ADDRESS | INDIRECT_TXP_LIMIT) -#define EEPROM_TXP_ENTRY_LEN sizeof(struct iwl_eeprom_enhanced_txpwr) -#define EEPROM_TXP_SZ_OFFS (0x00 | INDIRECT_ADDRESS | INDIRECT_TXP_LIMIT_SIZE) - -#define TXP_CHECK_AND_PRINT(x) ((txp->flags & IWL_EEPROM_ENH_TXP_FL_##x) \ - ? 
# x " " : "") - -static void iwl_eeprom_enhanced_txpower(struct iwl_priv *priv) -{ - struct iwl_eeprom_enhanced_txpwr *txp_array, *txp; - int idx, entries; - __le16 *txp_len; - s8 max_txp_avg, max_txp_avg_halfdbm; - - BUILD_BUG_ON(sizeof(struct iwl_eeprom_enhanced_txpwr) != 8); - - /* the length is in 16-bit words, but we want entries */ - txp_len = (__le16 *) iwl_eeprom_query_addr(priv, EEPROM_TXP_SZ_OFFS); - entries = le16_to_cpup(txp_len) * 2 / EEPROM_TXP_ENTRY_LEN; - - txp_array = (void *) iwl_eeprom_query_addr(priv, EEPROM_TXP_OFFS); - - for (idx = 0; idx < entries; idx++) { - txp = &txp_array[idx]; - /* skip invalid entries */ - if (!(txp->flags & IWL_EEPROM_ENH_TXP_FL_VALID)) - continue; - - IWL_DEBUG_EEPROM(priv, "%s %d:\t %s%s%s%s%s%s%s%s (0x%02x)\n", - (txp->channel && (txp->flags & - IWL_EEPROM_ENH_TXP_FL_COMMON_TYPE)) ? - "Common " : (txp->channel) ? - "Channel" : "Common", - (txp->channel), - TXP_CHECK_AND_PRINT(VALID), - TXP_CHECK_AND_PRINT(BAND_52G), - TXP_CHECK_AND_PRINT(OFDM), - TXP_CHECK_AND_PRINT(40MHZ), - TXP_CHECK_AND_PRINT(HT_AP), - TXP_CHECK_AND_PRINT(RES1), - TXP_CHECK_AND_PRINT(RES2), - TXP_CHECK_AND_PRINT(COMMON_TYPE), - txp->flags); - IWL_DEBUG_EEPROM(priv, "\t\t chain_A: 0x%02x " - "chain_B: 0X%02x chain_C: 0X%02x\n", - txp->chain_a_max, txp->chain_b_max, - txp->chain_c_max); - IWL_DEBUG_EEPROM(priv, "\t\t MIMO2: 0x%02x " - "MIMO3: 0x%02x High 20_on_40: 0x%02x " - "Low 20_on_40: 0x%02x\n", - txp->mimo2_max, txp->mimo3_max, - ((txp->delta_20_in_40 & 0xf0) >> 4), - (txp->delta_20_in_40 & 0x0f)); - - max_txp_avg = iwl_get_max_txpower_avg(priv, txp_array, idx, - &max_txp_avg_halfdbm); - - /* - * Update the user limit values values to the highest - * power supported by any channel - */ - if (max_txp_avg > priv->tx_power_user_lmt) - priv->tx_power_user_lmt = max_txp_avg; - if (max_txp_avg_halfdbm > priv->tx_power_lmt_in_half_dbm) - priv->tx_power_lmt_in_half_dbm = max_txp_avg_halfdbm; - - iwl_eeprom_enh_txp_read_element(priv, txp, max_txp_avg); - } -} - -/** - * iwl_eeprom_init - read EEPROM contents - * - * Load the EEPROM contents from adapter into priv->eeprom - * - * NOTE: This routine uses the non-debug IO access functions. 
- */ -int iwl_eeprom_init(struct iwl_priv *priv, u32 hw_rev) -{ - __le16 *e; - u32 gp = iwl_read32(priv->trans, CSR_EEPROM_GP); - int sz; - int ret; - u16 addr; - u16 validblockaddr = 0; - u16 cache_addr = 0; - - priv->nvm_device_type = iwl_get_nvm_type(priv->trans, hw_rev); - if (priv->nvm_device_type == -ENOENT) - return -ENOENT; - /* allocate eeprom */ - sz = priv->cfg->base_params->eeprom_size; - IWL_DEBUG_EEPROM(priv, "NVM size = %d\n", sz); - priv->eeprom = kzalloc(sz, GFP_KERNEL); - if (!priv->eeprom) { - ret = -ENOMEM; - goto alloc_err; - } - e = (__le16 *)priv->eeprom; - - ret = iwl_eeprom_verify_signature(priv); - if (ret < 0) { - IWL_ERR(priv, "EEPROM not found, EEPROM_GP=0x%08x\n", gp); - ret = -ENOENT; - goto err; - } - - /* Make sure driver (instead of uCode) is allowed to read EEPROM */ - ret = iwl_eeprom_acquire_semaphore(priv->trans); - if (ret < 0) { - IWL_ERR(priv, "Failed to acquire EEPROM semaphore.\n"); - ret = -ENOENT; - goto err; - } - - if (priv->nvm_device_type == NVM_DEVICE_TYPE_OTP) { - - ret = iwl_init_otp_access(priv->trans); - if (ret) { - IWL_ERR(priv, "Failed to initialize OTP access.\n"); - ret = -ENOENT; - goto done; - } - iwl_write32(priv->trans, CSR_EEPROM_GP, - iwl_read32(priv->trans, CSR_EEPROM_GP) & - ~CSR_EEPROM_GP_IF_OWNER_MSK); - - iwl_set_bit(priv->trans, CSR_OTP_GP_REG, - CSR_OTP_GP_REG_ECC_CORR_STATUS_MSK | - CSR_OTP_GP_REG_ECC_UNCORR_STATUS_MSK); - /* traversing the linked list if no shadow ram supported */ - if (!priv->cfg->base_params->shadow_ram_support) { - if (iwl_find_otp_image(priv->trans, &validblockaddr)) { - ret = -ENOENT; - goto done; - } - } - for (addr = validblockaddr; addr < validblockaddr + sz; - addr += sizeof(u16)) { - __le16 eeprom_data; - - ret = iwl_read_otp_word(priv->trans, addr, - &eeprom_data); - if (ret) - goto done; - e[cache_addr / 2] = eeprom_data; - cache_addr += sizeof(u16); - } - } else { - /* eeprom is an array of 16bit values */ - for (addr = 0; addr < sz; addr += sizeof(u16)) { - u32 r; - - iwl_write32(priv->trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_MSK_ADDR & (addr << 1)); - - ret = iwl_poll_bit(priv->trans, CSR_EEPROM_REG, - CSR_EEPROM_REG_READ_VALID_MSK, - CSR_EEPROM_REG_READ_VALID_MSK, - IWL_EEPROM_ACCESS_TIMEOUT); - if (ret < 0) { - IWL_ERR(priv, - "Time out reading EEPROM[%d]\n", addr); - goto done; - } - r = iwl_read32(priv->trans, CSR_EEPROM_REG); - e[addr / 2] = cpu_to_le16(r >> 16); - } - } - - IWL_DEBUG_EEPROM(priv, "NVM Type: %s, version: 0x%x\n", - (priv->nvm_device_type == NVM_DEVICE_TYPE_OTP) - ? 
"OTP" : "EEPROM", - iwl_eeprom_query16(priv, EEPROM_VERSION)); - - ret = 0; -done: - iwl_eeprom_release_semaphore(priv->trans); - -err: - if (ret) - iwl_eeprom_free(priv); -alloc_err: - return ret; -} - -void iwl_eeprom_free(struct iwl_priv *priv) -{ - kfree(priv->eeprom); - priv->eeprom = NULL; -} - -static void iwl_init_band_reference(struct iwl_priv *priv, - int eep_band, int *eeprom_ch_count, - const struct iwl_eeprom_channel **eeprom_ch_info, - const u8 **eeprom_ch_index) -{ - u32 offset = priv->lib-> - eeprom_ops.regulatory_bands[eep_band - 1]; - switch (eep_band) { - case 1: /* 2.4GHz band */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_1); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_1; - break; - case 2: /* 4.9GHz band */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_2); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_2; - break; - case 3: /* 5.2GHz band */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_3); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_3; - break; - case 4: /* 5.5GHz band */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_4); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_4; - break; - case 5: /* 5.7GHz band */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_5); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_5; - break; - case 6: /* 2.4GHz ht40 channels */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_6); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_6; - break; - case 7: /* 5 GHz ht40 channels */ - *eeprom_ch_count = ARRAY_SIZE(iwl_eeprom_band_7); - *eeprom_ch_info = (struct iwl_eeprom_channel *) - iwl_eeprom_query_addr(priv, offset); - *eeprom_ch_index = iwl_eeprom_band_7; - break; - default: - BUG(); - return; - } -} - -#define CHECK_AND_PRINT(x) ((eeprom_ch->flags & EEPROM_CHANNEL_##x) \ - ? # x " " : "") -/** - * iwl_mod_ht40_chan_info - Copy ht40 channel info into driver's priv. - * - * Does not set up a command, or touch hardware. - */ -static int iwl_mod_ht40_chan_info(struct iwl_priv *priv, - enum ieee80211_band band, u16 channel, - const struct iwl_eeprom_channel *eeprom_ch, - u8 clear_ht40_extension_channel) -{ - struct iwl_channel_info *ch_info; - - ch_info = (struct iwl_channel_info *) - iwl_get_channel_info(priv, band, channel); - - if (!is_channel_valid(ch_info)) - return -1; - - IWL_DEBUG_EEPROM(priv, "HT40 Ch. %d [%sGHz] %s%s%s%s%s(0x%02x %ddBm):" - " Ad-Hoc %ssupported\n", - ch_info->channel, - is_channel_a_band(ch_info) ? - "5.2" : "2.4", - CHECK_AND_PRINT(IBSS), - CHECK_AND_PRINT(ACTIVE), - CHECK_AND_PRINT(RADAR), - CHECK_AND_PRINT(WIDE), - CHECK_AND_PRINT(DFS), - eeprom_ch->flags, - eeprom_ch->max_power_avg, - ((eeprom_ch->flags & EEPROM_CHANNEL_IBSS) - && !(eeprom_ch->flags & EEPROM_CHANNEL_RADAR)) ? - "" : "not "); - - ch_info->ht40_eeprom = *eeprom_ch; - ch_info->ht40_max_power_avg = eeprom_ch->max_power_avg; - ch_info->ht40_flags = eeprom_ch->flags; - if (eeprom_ch->flags & EEPROM_CHANNEL_VALID) - ch_info->ht40_extension_channel &= ~clear_ht40_extension_channel; - - return 0; -} - -#define CHECK_AND_PRINT_I(x) ((eeprom_ch_info[ch].flags & EEPROM_CHANNEL_##x) \ - ? 
# x " " : "") - -/** - * iwl_init_channel_map - Set up driver's info for all possible channels - */ -int iwl_init_channel_map(struct iwl_priv *priv) -{ - int eeprom_ch_count = 0; - const u8 *eeprom_ch_index = NULL; - const struct iwl_eeprom_channel *eeprom_ch_info = NULL; - int band, ch; - struct iwl_channel_info *ch_info; - - if (priv->channel_count) { - IWL_DEBUG_EEPROM(priv, "Channel map already initialized.\n"); - return 0; - } - - IWL_DEBUG_EEPROM(priv, "Initializing regulatory info from EEPROM\n"); - - priv->channel_count = - ARRAY_SIZE(iwl_eeprom_band_1) + - ARRAY_SIZE(iwl_eeprom_band_2) + - ARRAY_SIZE(iwl_eeprom_band_3) + - ARRAY_SIZE(iwl_eeprom_band_4) + - ARRAY_SIZE(iwl_eeprom_band_5); - - IWL_DEBUG_EEPROM(priv, "Parsing data for %d channels.\n", - priv->channel_count); - - priv->channel_info = kcalloc(priv->channel_count, - sizeof(struct iwl_channel_info), - GFP_KERNEL); - if (!priv->channel_info) { - IWL_ERR(priv, "Could not allocate channel_info\n"); - priv->channel_count = 0; - return -ENOMEM; - } - - ch_info = priv->channel_info; - - /* Loop through the 5 EEPROM bands adding them in order to the - * channel map we maintain (that contains additional information than - * what just in the EEPROM) */ - for (band = 1; band <= 5; band++) { - - iwl_init_band_reference(priv, band, &eeprom_ch_count, - &eeprom_ch_info, &eeprom_ch_index); - - /* Loop through each band adding each of the channels */ - for (ch = 0; ch < eeprom_ch_count; ch++) { - ch_info->channel = eeprom_ch_index[ch]; - ch_info->band = (band == 1) ? IEEE80211_BAND_2GHZ : - IEEE80211_BAND_5GHZ; - - /* permanently store EEPROM's channel regulatory flags - * and max power in channel info database. */ - ch_info->eeprom = eeprom_ch_info[ch]; - - /* Copy the run-time flags so they are there even on - * invalid channels */ - ch_info->flags = eeprom_ch_info[ch].flags; - /* First write that ht40 is not enabled, and then enable - * one by one */ - ch_info->ht40_extension_channel = - IEEE80211_CHAN_NO_HT40; - - if (!(is_channel_valid(ch_info))) { - IWL_DEBUG_EEPROM(priv, - "Ch. %d Flags %x [%sGHz] - " - "No traffic\n", - ch_info->channel, - ch_info->flags, - is_channel_a_band(ch_info) ? - "5.2" : "2.4"); - ch_info++; - continue; - } - - /* Initialize regulatory-based run-time data */ - ch_info->max_power_avg = ch_info->curr_txpow = - eeprom_ch_info[ch].max_power_avg; - ch_info->scan_power = eeprom_ch_info[ch].max_power_avg; - ch_info->min_power = 0; - - IWL_DEBUG_EEPROM(priv, "Ch. %d [%sGHz] " - "%s%s%s%s%s%s(0x%02x %ddBm):" - " Ad-Hoc %ssupported\n", - ch_info->channel, - is_channel_a_band(ch_info) ? - "5.2" : "2.4", - CHECK_AND_PRINT_I(VALID), - CHECK_AND_PRINT_I(IBSS), - CHECK_AND_PRINT_I(ACTIVE), - CHECK_AND_PRINT_I(RADAR), - CHECK_AND_PRINT_I(WIDE), - CHECK_AND_PRINT_I(DFS), - eeprom_ch_info[ch].flags, - eeprom_ch_info[ch].max_power_avg, - ((eeprom_ch_info[ch]. - flags & EEPROM_CHANNEL_IBSS) - && !(eeprom_ch_info[ch]. - flags & EEPROM_CHANNEL_RADAR)) - ? "" : "not "); - - ch_info++; - } - } - - /* Check if we do have HT40 channels */ - if (priv->lib->eeprom_ops.regulatory_bands[5] == - EEPROM_REGULATORY_BAND_NO_HT40 && - priv->lib->eeprom_ops.regulatory_bands[6] == - EEPROM_REGULATORY_BAND_NO_HT40) - return 0; - - /* Two additional EEPROM bands for 2.4 and 5 GHz HT40 channels */ - for (band = 6; band <= 7; band++) { - enum ieee80211_band ieeeband; - - iwl_init_band_reference(priv, band, &eeprom_ch_count, - &eeprom_ch_info, &eeprom_ch_index); - - /* EEPROM band 6 is 2.4, band 7 is 5 GHz */ - ieeeband = - (band == 6) ? 
IEEE80211_BAND_2GHZ : IEEE80211_BAND_5GHZ; - - /* Loop through each band adding each of the channels */ - for (ch = 0; ch < eeprom_ch_count; ch++) { - /* Set up driver's info for lower half */ - iwl_mod_ht40_chan_info(priv, ieeeband, - eeprom_ch_index[ch], - &eeprom_ch_info[ch], - IEEE80211_CHAN_NO_HT40PLUS); - - /* Set up driver's info for upper half */ - iwl_mod_ht40_chan_info(priv, ieeeband, - eeprom_ch_index[ch] + 4, - &eeprom_ch_info[ch], - IEEE80211_CHAN_NO_HT40MINUS); - } - } - - /* for newer device (6000 series and up) - * EEPROM contain enhanced tx power information - * driver need to process addition information - * to determine the max channel tx power limits - */ - if (priv->lib->eeprom_ops.enhanced_txpower) - iwl_eeprom_enhanced_txpower(priv); - - return 0; -} - -/* - * iwl_free_channel_map - undo allocations in iwl_init_channel_map - */ -void iwl_free_channel_map(struct iwl_priv *priv) -{ - kfree(priv->channel_info); - priv->channel_count = 0; -} - -/** - * iwl_get_channel_info - Find driver's private channel info - * - * Based on band and channel number. - */ -const struct iwl_channel_info *iwl_get_channel_info(const struct iwl_priv *priv, - enum ieee80211_band band, u16 channel) -{ - int i; - - switch (band) { - case IEEE80211_BAND_5GHZ: - for (i = 14; i < priv->channel_count; i++) { - if (priv->channel_info[i].channel == channel) - return &priv->channel_info[i]; - } - break; - case IEEE80211_BAND_2GHZ: - if (channel >= 1 && channel <= 14) - return &priv->channel_info[channel - 1]; - break; - default: - BUG(); - } - - return NULL; -} - -void iwl_rf_config(struct iwl_priv *priv) -{ - u16 radio_cfg; - - radio_cfg = iwl_eeprom_query16(priv, EEPROM_RADIO_CONFIG); - - /* write radio config values to register */ - if (EEPROM_RF_CFG_TYPE_MSK(radio_cfg) <= EEPROM_RF_CONFIG_TYPE_MAX) { - iwl_set_bit(priv->trans, CSR_HW_IF_CONFIG_REG, - EEPROM_RF_CFG_TYPE_MSK(radio_cfg) | - EEPROM_RF_CFG_STEP_MSK(radio_cfg) | - EEPROM_RF_CFG_DASH_MSK(radio_cfg)); - IWL_INFO(priv, "Radio type=0x%x-0x%x-0x%x\n", - EEPROM_RF_CFG_TYPE_MSK(radio_cfg), - EEPROM_RF_CFG_STEP_MSK(radio_cfg), - EEPROM_RF_CFG_DASH_MSK(radio_cfg)); - } else - WARN_ON(1); - - /* set CSR_HW_CONFIG_REG for uCode use */ - iwl_set_bit(priv->trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_RADIO_SI | - CSR_HW_IF_CONFIG_REG_BIT_MAC_SI); -} diff --git a/drivers/net/wireless/iwlwifi/iwl-eeprom.h b/drivers/net/wireless/iwlwifi/iwl-eeprom.h deleted file mode 100644 index 64bfd947caeb..000000000000 --- a/drivers/net/wireless/iwlwifi/iwl-eeprom.h +++ /dev/null @@ -1,269 +0,0 @@ -/****************************************************************************** - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2008 - 2012 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, - * USA - * - * The full GNU General Public License is included in this distribution - * in the file called LICENSE.GPL. - * - * Contact Information: - * Intel Linux Wireless <ilw@linux.intel.com> - * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 - * - * BSD LICENSE - * - * Copyright(c) 2005 - 2012 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - *****************************************************************************/ - -#ifndef __iwl_eeprom_h__ -#define __iwl_eeprom_h__ - -#include <net/mac80211.h> - -struct iwl_priv; - -/* - * EEPROM access time values: - * - * Driver initiates EEPROM read by writing byte address << 1 to CSR_EEPROM_REG. - * Driver then polls CSR_EEPROM_REG for CSR_EEPROM_REG_READ_VALID_MSK (0x1). - * When polling, wait 10 uSec between polling loops, up to a maximum 5000 uSec. - * Driver reads 16-bit value from bits 31-16 of CSR_EEPROM_REG. - */ -#define IWL_EEPROM_ACCESS_TIMEOUT 5000 /* uSec */ - -#define IWL_EEPROM_SEM_TIMEOUT 10 /* microseconds */ -#define IWL_EEPROM_SEM_RETRY_LIMIT 1000 /* number of attempts (not time) */ - - -/* - * Regulatory channel usage flags in EEPROM struct iwl4965_eeprom_channel.flags. - * - * IBSS and/or AP operation is allowed *only* on those channels with - * (VALID && IBSS && ACTIVE && !RADAR). This restriction is in place because - * RADAR detection is not supported by the 4965 driver, but is a - * requirement for establishing a new network for legal operation on channels - * requiring RADAR detection or restricting ACTIVE scanning. - * - * NOTE: "WIDE" flag does not indicate anything about "HT40" 40 MHz channels. - * It only indicates that 20 MHz channel use is supported; HT40 channel - * usage is indicated by a separate set of regulatory flags for each - * HT40 channel pair. 
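The comment above restricts IBSS/AP operation to channels flagged (VALID && IBSS && ACTIVE && !RADAR); the flag bits themselves are defined in the enum that follows. A standalone sketch of that predicate, with flag values copied from that enum and made-up sample channel data:

#include <stdbool.h>
#include <stdio.h>

/* Flag bits as defined in the EEPROM channel flags enum below. */
#define EEPROM_CHANNEL_VALID	(1 << 0)
#define EEPROM_CHANNEL_IBSS	(1 << 1)
#define EEPROM_CHANNEL_ACTIVE	(1 << 3)
#define EEPROM_CHANNEL_RADAR	(1 << 4)

/* IBSS/AP use is allowed only on VALID && IBSS && ACTIVE && !RADAR. */
static bool eeprom_channel_allows_ibss(unsigned char flags)
{
	return (flags & EEPROM_CHANNEL_VALID) &&
	       (flags & EEPROM_CHANNEL_IBSS) &&
	       (flags & EEPROM_CHANNEL_ACTIVE) &&
	       !(flags & EEPROM_CHANNEL_RADAR);
}

int main(void)
{
	/* Made-up sample flag bytes, purely for illustration. */
	unsigned char ch1  = EEPROM_CHANNEL_VALID | EEPROM_CHANNEL_IBSS |
			     EEPROM_CHANNEL_ACTIVE;
	unsigned char ch52 = EEPROM_CHANNEL_VALID | EEPROM_CHANNEL_IBSS |
			     EEPROM_CHANNEL_ACTIVE | EEPROM_CHANNEL_RADAR;

	printf("ch 1:  Ad-Hoc %ssupported\n",
	       eeprom_channel_allows_ibss(ch1) ? "" : "not ");
	printf("ch 52: Ad-Hoc %ssupported\n",
	       eeprom_channel_allows_ibss(ch52) ? "" : "not ");
	return 0;
}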
- * - * NOTE: Using a channel inappropriately will result in a uCode error! - */ -#define IWL_NUM_TX_CALIB_GROUPS 5 -enum { - EEPROM_CHANNEL_VALID = (1 << 0), /* usable for this SKU/geo */ - EEPROM_CHANNEL_IBSS = (1 << 1), /* usable as an IBSS channel */ - /* Bit 2 Reserved */ - EEPROM_CHANNEL_ACTIVE = (1 << 3), /* active scanning allowed */ - EEPROM_CHANNEL_RADAR = (1 << 4), /* radar detection required */ - EEPROM_CHANNEL_WIDE = (1 << 5), /* 20 MHz channel okay */ - /* Bit 6 Reserved (was Narrow Channel) */ - EEPROM_CHANNEL_DFS = (1 << 7), /* dynamic freq selection candidate */ -}; - -/* SKU Capabilities */ -#define EEPROM_SKU_CAP_BAND_24GHZ (1 << 4) -#define EEPROM_SKU_CAP_BAND_52GHZ (1 << 5) -#define EEPROM_SKU_CAP_11N_ENABLE (1 << 6) -#define EEPROM_SKU_CAP_AMT_ENABLE (1 << 7) -#define EEPROM_SKU_CAP_IPAN_ENABLE (1 << 8) - -/* *regulatory* channel data format in eeprom, one for each channel. - * There are separate entries for HT40 (40 MHz) vs. normal (20 MHz) channels. */ -struct iwl_eeprom_channel { - u8 flags; /* EEPROM_CHANNEL_* flags copied from EEPROM */ - s8 max_power_avg; /* max power (dBm) on this chnl, limit 31 */ -} __packed; - -enum iwl_eeprom_enhanced_txpwr_flags { - IWL_EEPROM_ENH_TXP_FL_VALID = BIT(0), - IWL_EEPROM_ENH_TXP_FL_BAND_52G = BIT(1), - IWL_EEPROM_ENH_TXP_FL_OFDM = BIT(2), - IWL_EEPROM_ENH_TXP_FL_40MHZ = BIT(3), - IWL_EEPROM_ENH_TXP_FL_HT_AP = BIT(4), - IWL_EEPROM_ENH_TXP_FL_RES1 = BIT(5), - IWL_EEPROM_ENH_TXP_FL_RES2 = BIT(6), - IWL_EEPROM_ENH_TXP_FL_COMMON_TYPE = BIT(7), -}; - -/** - * iwl_eeprom_enhanced_txpwr structure - * This structure presents the enhanced regulatory tx power limit layout - * in eeprom image - * Enhanced regulatory tx power portion of eeprom image can be broken down - * into individual structures; each one is 8 bytes in size and contain the - * following information - * @flags: entry flags - * @channel: channel number - * @chain_a_max_pwr: chain a max power in 1/2 dBm - * @chain_b_max_pwr: chain b max power in 1/2 dBm - * @chain_c_max_pwr: chain c max power in 1/2 dBm - * @delta_20_in_40: 20-in-40 deltas (hi/lo) - * @mimo2_max_pwr: mimo2 max power in 1/2 dBm - * @mimo3_max_pwr: mimo3 max power in 1/2 dBm - * - */ -struct iwl_eeprom_enhanced_txpwr { - u8 flags; - u8 channel; - s8 chain_a_max; - s8 chain_b_max; - s8 chain_c_max; - u8 delta_20_in_40; - s8 mimo2_max; - s8 mimo3_max; -} __packed; - -/* calibration */ -struct iwl_eeprom_calib_hdr { - u8 version; - u8 pa_type; - __le16 voltage; -} __packed; - -#define EEPROM_CALIB_ALL (INDIRECT_ADDRESS | INDIRECT_CALIBRATION) -#define EEPROM_XTAL ((2*0x128) | EEPROM_CALIB_ALL) - -/* temperature */ -#define EEPROM_KELVIN_TEMPERATURE ((2*0x12A) | EEPROM_CALIB_ALL) -#define EEPROM_RAW_TEMPERATURE ((2*0x12B) | EEPROM_CALIB_ALL) - - -/* agn links */ -#define EEPROM_LINK_HOST (2*0x64) -#define EEPROM_LINK_GENERAL (2*0x65) -#define EEPROM_LINK_REGULATORY (2*0x66) -#define EEPROM_LINK_CALIBRATION (2*0x67) -#define EEPROM_LINK_PROCESS_ADJST (2*0x68) -#define EEPROM_LINK_OTHERS (2*0x69) -#define EEPROM_LINK_TXP_LIMIT (2*0x6a) -#define EEPROM_LINK_TXP_LIMIT_SIZE (2*0x6b) - -/* agn regulatory - indirect access */ -#define EEPROM_REG_BAND_1_CHANNELS ((0x08)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 28 bytes */ -#define EEPROM_REG_BAND_2_CHANNELS ((0x26)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 26 bytes */ -#define EEPROM_REG_BAND_3_CHANNELS ((0x42)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 24 bytes */ -#define EEPROM_REG_BAND_4_CHANNELS ((0x5C)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) 
/* 22 bytes */ -#define EEPROM_REG_BAND_5_CHANNELS ((0x74)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 12 bytes */ -#define EEPROM_REG_BAND_24_HT40_CHANNELS ((0x82)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 14 bytes */ -#define EEPROM_REG_BAND_52_HT40_CHANNELS ((0x92)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 22 bytes */ - -/* 6000 regulatory - indirect access */ -#define EEPROM_6000_REG_BAND_24_HT40_CHANNELS ((0x80)\ - | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 14 bytes */ -/* 2.4 GHz */ -extern const u8 iwl_eeprom_band_1[14]; - -#define ADDRESS_MSK 0x0000FFFF -#define INDIRECT_TYPE_MSK 0x000F0000 -#define INDIRECT_HOST 0x00010000 -#define INDIRECT_GENERAL 0x00020000 -#define INDIRECT_REGULATORY 0x00030000 -#define INDIRECT_CALIBRATION 0x00040000 -#define INDIRECT_PROCESS_ADJST 0x00050000 -#define INDIRECT_OTHERS 0x00060000 -#define INDIRECT_TXP_LIMIT 0x00070000 -#define INDIRECT_TXP_LIMIT_SIZE 0x00080000 -#define INDIRECT_ADDRESS 0x00100000 - -/* General */ -#define EEPROM_DEVICE_ID (2*0x08) /* 2 bytes */ -#define EEPROM_SUBSYSTEM_ID (2*0x0A) /* 2 bytes */ -#define EEPROM_MAC_ADDRESS (2*0x15) /* 6 bytes */ -#define EEPROM_BOARD_REVISION (2*0x35) /* 2 bytes */ -#define EEPROM_BOARD_PBA_NUMBER (2*0x3B+1) /* 9 bytes */ -#define EEPROM_VERSION (2*0x44) /* 2 bytes */ -#define EEPROM_SKU_CAP (2*0x45) /* 2 bytes */ -#define EEPROM_OEM_MODE (2*0x46) /* 2 bytes */ -#define EEPROM_RADIO_CONFIG (2*0x48) /* 2 bytes */ -#define EEPROM_NUM_MAC_ADDRESS (2*0x4C) /* 2 bytes */ - -/* The following masks are to be applied on EEPROM_RADIO_CONFIG */ -#define EEPROM_RF_CFG_TYPE_MSK(x) (x & 0x3) /* bits 0-1 */ -#define EEPROM_RF_CFG_STEP_MSK(x) ((x >> 2) & 0x3) /* bits 2-3 */ -#define EEPROM_RF_CFG_DASH_MSK(x) ((x >> 4) & 0x3) /* bits 4-5 */ -#define EEPROM_RF_CFG_PNUM_MSK(x) ((x >> 6) & 0x3) /* bits 6-7 */ -#define EEPROM_RF_CFG_TX_ANT_MSK(x) ((x >> 8) & 0xF) /* bits 8-11 */ -#define EEPROM_RF_CFG_RX_ANT_MSK(x) ((x >> 12) & 0xF) /* bits 12-15 */ - -#define EEPROM_RF_CONFIG_TYPE_MAX 0x3 - -#define EEPROM_REGULATORY_BAND_NO_HT40 (0) - -struct iwl_eeprom_ops { - const u32 regulatory_bands[7]; - bool enhanced_txpower; -}; - - -int iwl_eeprom_init(struct iwl_priv *priv, u32 hw_rev); -void iwl_eeprom_free(struct iwl_priv *priv); -int iwl_eeprom_check_version(struct iwl_priv *priv); -int iwl_eeprom_init_hw_params(struct iwl_priv *priv); -u16 iwl_eeprom_calib_version(struct iwl_priv *priv); -const u8 *iwl_eeprom_query_addr(struct iwl_priv *priv, size_t offset); -u16 iwl_eeprom_query16(struct iwl_priv *priv, size_t offset); -void iwl_eeprom_get_mac(struct iwl_priv *priv, u8 *mac); -int iwl_init_channel_map(struct iwl_priv *priv); -void iwl_free_channel_map(struct iwl_priv *priv); -const struct iwl_channel_info *iwl_get_channel_info( - const struct iwl_priv *priv, - enum ieee80211_band band, u16 channel); -void iwl_rf_config(struct iwl_priv *priv); - -#endif /* __iwl_eeprom_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-fh.h b/drivers/net/wireless/iwlwifi/iwl-fh.h index 74bce97a8600..806046641747 100644 --- a/drivers/net/wireless/iwlwifi/iwl-fh.h +++ b/drivers/net/wireless/iwlwifi/iwl-fh.h @@ -421,6 +421,8 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl) (FH_SRVC_LOWER_BOUND + ((_chnl) - 9) * 0x4) #define FH_TX_CHICKEN_BITS_REG (FH_MEM_LOWER_BOUND + 0xE98) +#define FH_TX_TRB_REG(_chan) (FH_MEM_LOWER_BOUND + 0x958 + (_chan) * 4) + /* Instruct FH to increment the retry count of a packet when * it is brought from the memory to TX-FIFO */ diff --git 
a/drivers/net/wireless/iwlwifi/iwl-io.c b/drivers/net/wireless/iwlwifi/iwl-io.c index 081dd34d2387..66c873399aba 100644 --- a/drivers/net/wireless/iwlwifi/iwl-io.c +++ b/drivers/net/wireless/iwlwifi/iwl-io.c @@ -27,6 +27,7 @@ *****************************************************************************/ #include <linux/delay.h> #include <linux/device.h> +#include <linux/export.h> #include "iwl-io.h" #include"iwl-csr.h" @@ -52,6 +53,7 @@ void iwl_set_bit(struct iwl_trans *trans, u32 reg, u32 mask) __iwl_set_bit(trans, reg, mask); spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_set_bit); void iwl_clear_bit(struct iwl_trans *trans, u32 reg, u32 mask) { @@ -61,6 +63,25 @@ void iwl_clear_bit(struct iwl_trans *trans, u32 reg, u32 mask) __iwl_clear_bit(trans, reg, mask); spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_clear_bit); + +void iwl_set_bits_mask(struct iwl_trans *trans, u32 reg, u32 mask, u32 value) +{ + unsigned long flags; + u32 v; + +#ifdef CONFIG_IWLWIFI_DEBUG + WARN_ON_ONCE(value & ~mask); +#endif + + spin_lock_irqsave(&trans->reg_lock, flags); + v = iwl_read32(trans, reg); + v &= ~mask; + v |= value; + iwl_write32(trans, reg, v); + spin_unlock_irqrestore(&trans->reg_lock, flags); +} +EXPORT_SYMBOL_GPL(iwl_set_bits_mask); int iwl_poll_bit(struct iwl_trans *trans, u32 addr, u32 bits, u32 mask, int timeout) @@ -76,6 +97,7 @@ int iwl_poll_bit(struct iwl_trans *trans, u32 addr, return -ETIMEDOUT; } +EXPORT_SYMBOL_GPL(iwl_poll_bit); int iwl_grab_nic_access_silent(struct iwl_trans *trans) { @@ -117,6 +139,7 @@ int iwl_grab_nic_access_silent(struct iwl_trans *trans) return 0; } +EXPORT_SYMBOL_GPL(iwl_grab_nic_access_silent); bool iwl_grab_nic_access(struct iwl_trans *trans) { @@ -130,6 +153,7 @@ bool iwl_grab_nic_access(struct iwl_trans *trans) return true; } +EXPORT_SYMBOL_GPL(iwl_grab_nic_access); void iwl_release_nic_access(struct iwl_trans *trans) { @@ -144,6 +168,7 @@ void iwl_release_nic_access(struct iwl_trans *trans) */ mmiowb(); } +EXPORT_SYMBOL_GPL(iwl_release_nic_access); u32 iwl_read_direct32(struct iwl_trans *trans, u32 reg) { @@ -158,6 +183,7 @@ u32 iwl_read_direct32(struct iwl_trans *trans, u32 reg) return value; } +EXPORT_SYMBOL_GPL(iwl_read_direct32); void iwl_write_direct32(struct iwl_trans *trans, u32 reg, u32 value) { @@ -170,6 +196,7 @@ void iwl_write_direct32(struct iwl_trans *trans, u32 reg, u32 value) } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_write_direct32); int iwl_poll_direct_bit(struct iwl_trans *trans, u32 addr, u32 mask, int timeout) @@ -185,6 +212,7 @@ int iwl_poll_direct_bit(struct iwl_trans *trans, u32 addr, u32 mask, return -ETIMEDOUT; } +EXPORT_SYMBOL_GPL(iwl_poll_direct_bit); static inline u32 __iwl_read_prph(struct iwl_trans *trans, u32 reg) { @@ -211,6 +239,7 @@ u32 iwl_read_prph(struct iwl_trans *trans, u32 reg) spin_unlock_irqrestore(&trans->reg_lock, flags); return val; } +EXPORT_SYMBOL_GPL(iwl_read_prph); void iwl_write_prph(struct iwl_trans *trans, u32 addr, u32 val) { @@ -223,6 +252,7 @@ void iwl_write_prph(struct iwl_trans *trans, u32 addr, u32 val) } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_write_prph); void iwl_set_bits_prph(struct iwl_trans *trans, u32 reg, u32 mask) { @@ -236,6 +266,7 @@ void iwl_set_bits_prph(struct iwl_trans *trans, u32 reg, u32 mask) } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_set_bits_prph); void iwl_set_bits_mask_prph(struct iwl_trans *trans, u32 reg, u32 bits, u32 mask) 
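The new iwl_set_bits_mask() helper added above does a locked read-modify-write: it clears every bit selected by mask and then ORs in value, warning in debug builds if value carries bits outside mask. A standalone model of that update rule, with made-up register contents for illustration:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same update rule as iwl_set_bits_mask(): only the bits selected by
 * mask change; value must not carry bits outside mask. */
static uint32_t set_bits_mask(uint32_t reg, uint32_t mask, uint32_t value)
{
	assert((value & ~mask) == 0);	/* mirrors the WARN_ON_ONCE above */
	reg &= ~mask;
	reg |= value;
	return reg;
}

int main(void)
{
	uint32_t reg = 0xdeadbeef;	/* made-up register contents */

	/* Rewrite bits 4..7 to 0x5 and leave everything else untouched. */
	reg = set_bits_mask(reg, 0x000000f0, 0x00000050);
	printf("0x%08x\n", (unsigned)reg);	/* prints 0xdeadbe5f */
	return 0;
}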
@@ -250,6 +281,7 @@ void iwl_set_bits_mask_prph(struct iwl_trans *trans, u32 reg, } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_set_bits_mask_prph); void iwl_clear_bits_prph(struct iwl_trans *trans, u32 reg, u32 mask) { @@ -264,9 +296,10 @@ void iwl_clear_bits_prph(struct iwl_trans *trans, u32 reg, u32 mask) } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(iwl_clear_bits_prph); -void _iwl_read_targ_mem_words(struct iwl_trans *trans, u32 addr, - void *buf, int words) +void _iwl_read_targ_mem_dwords(struct iwl_trans *trans, u32 addr, + void *buf, int dwords) { unsigned long flags; int offs; @@ -275,24 +308,26 @@ void _iwl_read_targ_mem_words(struct iwl_trans *trans, u32 addr, spin_lock_irqsave(&trans->reg_lock, flags); if (likely(iwl_grab_nic_access(trans))) { iwl_write32(trans, HBUS_TARG_MEM_RADDR, addr); - for (offs = 0; offs < words; offs++) + for (offs = 0; offs < dwords; offs++) vals[offs] = iwl_read32(trans, HBUS_TARG_MEM_RDAT); iwl_release_nic_access(trans); } spin_unlock_irqrestore(&trans->reg_lock, flags); } +EXPORT_SYMBOL_GPL(_iwl_read_targ_mem_dwords); u32 iwl_read_targ_mem(struct iwl_trans *trans, u32 addr) { u32 value; - _iwl_read_targ_mem_words(trans, addr, &value, 1); + _iwl_read_targ_mem_dwords(trans, addr, &value, 1); return value; } +EXPORT_SYMBOL_GPL(iwl_read_targ_mem); -int _iwl_write_targ_mem_words(struct iwl_trans *trans, u32 addr, - void *buf, int words) +int _iwl_write_targ_mem_dwords(struct iwl_trans *trans, u32 addr, + void *buf, int dwords) { unsigned long flags; int offs, result = 0; @@ -301,7 +336,7 @@ int _iwl_write_targ_mem_words(struct iwl_trans *trans, u32 addr, spin_lock_irqsave(&trans->reg_lock, flags); if (likely(iwl_grab_nic_access(trans))) { iwl_write32(trans, HBUS_TARG_MEM_WADDR, addr); - for (offs = 0; offs < words; offs++) + for (offs = 0; offs < dwords; offs++) iwl_write32(trans, HBUS_TARG_MEM_WDAT, vals[offs]); iwl_release_nic_access(trans); } else @@ -310,8 +345,10 @@ int _iwl_write_targ_mem_words(struct iwl_trans *trans, u32 addr, return result; } +EXPORT_SYMBOL_GPL(_iwl_write_targ_mem_dwords); int iwl_write_targ_mem(struct iwl_trans *trans, u32 addr, u32 val) { - return _iwl_write_targ_mem_words(trans, addr, &val, 1); + return _iwl_write_targ_mem_dwords(trans, addr, &val, 1); } +EXPORT_SYMBOL_GPL(iwl_write_targ_mem); diff --git a/drivers/net/wireless/iwlwifi/iwl-io.h b/drivers/net/wireless/iwlwifi/iwl-io.h index abb3250164ba..50d3819739d1 100644 --- a/drivers/net/wireless/iwlwifi/iwl-io.h +++ b/drivers/net/wireless/iwlwifi/iwl-io.h @@ -54,6 +54,8 @@ static inline u32 iwl_read32(struct iwl_trans *trans, u32 ofs) void iwl_set_bit(struct iwl_trans *trans, u32 reg, u32 mask); void iwl_clear_bit(struct iwl_trans *trans, u32 reg, u32 mask); +void iwl_set_bits_mask(struct iwl_trans *trans, u32 reg, u32 mask, u32 value); + int iwl_poll_bit(struct iwl_trans *trans, u32 addr, u32 bits, u32 mask, int timeout); int iwl_poll_direct_bit(struct iwl_trans *trans, u32 addr, u32 mask, @@ -74,18 +76,18 @@ void iwl_set_bits_mask_prph(struct iwl_trans *trans, u32 reg, u32 bits, u32 mask); void iwl_clear_bits_prph(struct iwl_trans *trans, u32 reg, u32 mask); -void _iwl_read_targ_mem_words(struct iwl_trans *trans, u32 addr, - void *buf, int words); +void _iwl_read_targ_mem_dwords(struct iwl_trans *trans, u32 addr, + void *buf, int dwords); -#define iwl_read_targ_mem_words(trans, addr, buf, bufsize) \ +#define iwl_read_targ_mem_bytes(trans, addr, buf, bufsize) \ do { \ BUILD_BUG_ON((bufsize) % sizeof(u32)); \ - 
_iwl_read_targ_mem_words(trans, addr, buf, \ - (bufsize) / sizeof(u32));\ + _iwl_read_targ_mem_dwords(trans, addr, buf, \ + (bufsize) / sizeof(u32));\ } while (0) -int _iwl_write_targ_mem_words(struct iwl_trans *trans, u32 addr, - void *buf, int words); +int _iwl_write_targ_mem_dwords(struct iwl_trans *trans, u32 addr, + void *buf, int dwords); u32 iwl_read_targ_mem(struct iwl_trans *trans, u32 addr); int iwl_write_targ_mem(struct iwl_trans *trans, u32 addr, u32 val); diff --git a/drivers/net/wireless/iwlwifi/iwl-notif-wait.c b/drivers/net/wireless/iwlwifi/iwl-notif-wait.c index 0066b899fe5c..c61f2070f15a 100644 --- a/drivers/net/wireless/iwlwifi/iwl-notif-wait.c +++ b/drivers/net/wireless/iwlwifi/iwl-notif-wait.c @@ -61,6 +61,7 @@ * *****************************************************************************/ #include <linux/sched.h> +#include <linux/export.h> #include "iwl-notif-wait.h" @@ -71,6 +72,7 @@ void iwl_notification_wait_init(struct iwl_notif_wait_data *notif_wait) INIT_LIST_HEAD(¬if_wait->notif_waits); init_waitqueue_head(¬if_wait->notif_waitq); } +EXPORT_SYMBOL_GPL(iwl_notification_wait_init); void iwl_notification_wait_notify(struct iwl_notif_wait_data *notif_wait, struct iwl_rx_packet *pkt) @@ -115,20 +117,20 @@ void iwl_notification_wait_notify(struct iwl_notif_wait_data *notif_wait, if (triggered) wake_up_all(¬if_wait->notif_waitq); } +EXPORT_SYMBOL_GPL(iwl_notification_wait_notify); void iwl_abort_notification_waits(struct iwl_notif_wait_data *notif_wait) { - unsigned long flags; struct iwl_notification_wait *wait_entry; - spin_lock_irqsave(¬if_wait->notif_wait_lock, flags); + spin_lock(¬if_wait->notif_wait_lock); list_for_each_entry(wait_entry, ¬if_wait->notif_waits, list) wait_entry->aborted = true; - spin_unlock_irqrestore(¬if_wait->notif_wait_lock, flags); + spin_unlock(¬if_wait->notif_wait_lock); wake_up_all(¬if_wait->notif_waitq); } - +EXPORT_SYMBOL_GPL(iwl_abort_notification_waits); void iwl_init_notification_wait(struct iwl_notif_wait_data *notif_wait, @@ -152,6 +154,7 @@ iwl_init_notification_wait(struct iwl_notif_wait_data *notif_wait, list_add(&wait_entry->list, ¬if_wait->notif_waits); spin_unlock_bh(¬if_wait->notif_wait_lock); } +EXPORT_SYMBOL_GPL(iwl_init_notification_wait); int iwl_wait_notification(struct iwl_notif_wait_data *notif_wait, struct iwl_notification_wait *wait_entry, @@ -175,6 +178,7 @@ int iwl_wait_notification(struct iwl_notif_wait_data *notif_wait, return -ETIMEDOUT; return 0; } +EXPORT_SYMBOL_GPL(iwl_wait_notification); void iwl_remove_notification(struct iwl_notif_wait_data *notif_wait, struct iwl_notification_wait *wait_entry) @@ -183,3 +187,4 @@ void iwl_remove_notification(struct iwl_notif_wait_data *notif_wait, list_del(&wait_entry->list); spin_unlock_bh(¬if_wait->notif_wait_lock); } +EXPORT_SYMBOL_GPL(iwl_remove_notification); diff --git a/drivers/net/wireless/iwlwifi/iwl-op-mode.h b/drivers/net/wireless/iwlwifi/iwl-op-mode.h index 4ef742b28e08..64886f95664f 100644 --- a/drivers/net/wireless/iwlwifi/iwl-op-mode.h +++ b/drivers/net/wireless/iwlwifi/iwl-op-mode.h @@ -111,22 +111,25 @@ struct iwl_cfg; * May sleep * @rx: Rx notification to the op_mode. rxb is the Rx buffer itself. Cmd is the * HCMD the this Rx responds to. - * Must be atomic. + * Must be atomic and called with BH disabled. * @queue_full: notifies that a HW queue is full. - * Must be atomic + * Must be atomic and called with BH disabled. * @queue_not_full: notifies that a HW queue is not full any more. - * Must be atomic + * Must be atomic and called with BH disabled. 
* @hw_rf_kill:notifies of a change in the HW rf kill switch. True means that * the radio is killed. Must be atomic. * @free_skb: allows the transport layer to free skbs that haven't been * reclaimed by the op_mode. This can happen when the driver is freed and * there are Tx packets pending in the transport layer. * Must be atomic - * @nic_error: error notification. Must be atomic - * @cmd_queue_full: Called when the command queue gets full. Must be atomic. + * @nic_error: error notification. Must be atomic and must be called with BH + * disabled. + * @cmd_queue_full: Called when the command queue gets full. Must be atomic and + * called with BH disabled. * @nic_config: configure NIC, called before firmware is started. * May sleep - * @wimax_active: invoked when WiMax becomes active. Must be atomic. + * @wimax_active: invoked when WiMax becomes active. Must be atomic and called + * with BH disabled. */ struct iwl_op_mode_ops { struct iwl_op_mode *(*start)(struct iwl_trans *trans, @@ -145,6 +148,9 @@ struct iwl_op_mode_ops { void (*wimax_active)(struct iwl_op_mode *op_mode); }; +int iwl_opmode_register(const char *name, const struct iwl_op_mode_ops *ops); +void iwl_opmode_deregister(const char *name); + /** * struct iwl_op_mode - operational mode * @@ -162,7 +168,6 @@ struct iwl_op_mode { static inline void iwl_op_mode_stop(struct iwl_op_mode *op_mode) { might_sleep(); - op_mode->ops->stop(op_mode); } @@ -218,9 +223,4 @@ static inline void iwl_op_mode_wimax_active(struct iwl_op_mode *op_mode) op_mode->ops->wimax_active(op_mode); } -/***************************************************** -* Op mode layers implementations -******************************************************/ -extern const struct iwl_op_mode_ops iwl_dvm_ops; - #endif /* __iwl_op_mode_h__ */ diff --git a/drivers/net/wireless/iwlwifi/iwl-prph.h b/drivers/net/wireless/iwlwifi/iwl-prph.h index dfd54662e3e6..9253ef1dba72 100644 --- a/drivers/net/wireless/iwlwifi/iwl-prph.h +++ b/drivers/net/wireless/iwlwifi/iwl-prph.h @@ -187,7 +187,7 @@ #define SCD_QUEUE_STTS_REG_POS_ACTIVE (3) #define SCD_QUEUE_STTS_REG_POS_WSL (4) #define SCD_QUEUE_STTS_REG_POS_SCD_ACT_EN (19) -#define SCD_QUEUE_STTS_REG_MSK (0x00FF0000) +#define SCD_QUEUE_STTS_REG_MSK (0x017F0000) #define SCD_QUEUE_CTX_REG1_CREDIT_POS (8) #define SCD_QUEUE_CTX_REG1_CREDIT_MSK (0x00FFFF00) diff --git a/drivers/net/wireless/iwlwifi/iwl-test.c b/drivers/net/wireless/iwlwifi/iwl-test.c new file mode 100644 index 000000000000..81e8c7126d72 --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-test.c @@ -0,0 +1,856 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + *****************************************************************************/ + +#include <linux/export.h> +#include <net/netlink.h> + +#include "iwl-io.h" +#include "iwl-fh.h" +#include "iwl-prph.h" +#include "iwl-trans.h" +#include "iwl-test.h" +#include "iwl-csr.h" +#include "iwl-testmode.h" + +/* + * Periphery registers absolute lower bound. This is used in order to + * differentiate registery access through HBUS_TARG_PRPH_* and + * HBUS_TARG_MEM_* accesses. + */ +#define IWL_ABS_PRPH_START (0xA00000) + +/* + * The TLVs used in the gnl message policy between the kernel module and + * user space application. iwl_testmode_gnl_msg_policy is to be carried + * through the NL80211_CMD_TESTMODE channel regulated by nl80211. 
+ * See iwl-testmode.h + */ +static +struct nla_policy iwl_testmode_gnl_msg_policy[IWL_TM_ATTR_MAX] = { + [IWL_TM_ATTR_COMMAND] = { .type = NLA_U32, }, + + [IWL_TM_ATTR_UCODE_CMD_ID] = { .type = NLA_U8, }, + [IWL_TM_ATTR_UCODE_CMD_DATA] = { .type = NLA_UNSPEC, }, + + [IWL_TM_ATTR_REG_OFFSET] = { .type = NLA_U32, }, + [IWL_TM_ATTR_REG_VALUE8] = { .type = NLA_U8, }, + [IWL_TM_ATTR_REG_VALUE32] = { .type = NLA_U32, }, + + [IWL_TM_ATTR_SYNC_RSP] = { .type = NLA_UNSPEC, }, + [IWL_TM_ATTR_UCODE_RX_PKT] = { .type = NLA_UNSPEC, }, + + [IWL_TM_ATTR_EEPROM] = { .type = NLA_UNSPEC, }, + + [IWL_TM_ATTR_TRACE_ADDR] = { .type = NLA_UNSPEC, }, + [IWL_TM_ATTR_TRACE_DUMP] = { .type = NLA_UNSPEC, }, + [IWL_TM_ATTR_TRACE_SIZE] = { .type = NLA_U32, }, + + [IWL_TM_ATTR_FIXRATE] = { .type = NLA_U32, }, + + [IWL_TM_ATTR_UCODE_OWNER] = { .type = NLA_U8, }, + + [IWL_TM_ATTR_MEM_ADDR] = { .type = NLA_U32, }, + [IWL_TM_ATTR_BUFFER_SIZE] = { .type = NLA_U32, }, + [IWL_TM_ATTR_BUFFER_DUMP] = { .type = NLA_UNSPEC, }, + + [IWL_TM_ATTR_FW_VERSION] = { .type = NLA_U32, }, + [IWL_TM_ATTR_DEVICE_ID] = { .type = NLA_U32, }, + [IWL_TM_ATTR_FW_TYPE] = { .type = NLA_U32, }, + [IWL_TM_ATTR_FW_INST_SIZE] = { .type = NLA_U32, }, + [IWL_TM_ATTR_FW_DATA_SIZE] = { .type = NLA_U32, }, + + [IWL_TM_ATTR_ENABLE_NOTIFICATION] = {.type = NLA_FLAG, }, +}; + +static inline void iwl_test_trace_clear(struct iwl_test *tst) +{ + memset(&tst->trace, 0, sizeof(struct iwl_test_trace)); +} + +static void iwl_test_trace_stop(struct iwl_test *tst) +{ + if (!tst->trace.enabled) + return; + + if (tst->trace.cpu_addr && tst->trace.dma_addr) + dma_free_coherent(tst->trans->dev, + tst->trace.tsize, + tst->trace.cpu_addr, + tst->trace.dma_addr); + + iwl_test_trace_clear(tst); +} + +static inline void iwl_test_mem_clear(struct iwl_test *tst) +{ + memset(&tst->mem, 0, sizeof(struct iwl_test_mem)); +} + +static inline void iwl_test_mem_stop(struct iwl_test *tst) +{ + if (!tst->mem.in_read) + return; + + iwl_test_mem_clear(tst); +} + +/* + * Initializes the test object + * During the lifetime of the test object it is assumed that the transport is + * started. The test object should be stopped before the transport is stopped. 
+ */ +void iwl_test_init(struct iwl_test *tst, struct iwl_trans *trans, + struct iwl_test_ops *ops) +{ + tst->trans = trans; + tst->ops = ops; + + iwl_test_trace_clear(tst); + iwl_test_mem_clear(tst); +} +EXPORT_SYMBOL_GPL(iwl_test_init); + +/* + * Stop the test object + */ +void iwl_test_free(struct iwl_test *tst) +{ + iwl_test_mem_stop(tst); + iwl_test_trace_stop(tst); +} +EXPORT_SYMBOL_GPL(iwl_test_free); + +static inline int iwl_test_send_cmd(struct iwl_test *tst, + struct iwl_host_cmd *cmd) +{ + return tst->ops->send_cmd(tst->trans->op_mode, cmd); +} + +static inline bool iwl_test_valid_hw_addr(struct iwl_test *tst, u32 addr) +{ + return tst->ops->valid_hw_addr(addr); +} + +static inline u32 iwl_test_fw_ver(struct iwl_test *tst) +{ + return tst->ops->get_fw_ver(tst->trans->op_mode); +} + +static inline struct sk_buff* +iwl_test_alloc_reply(struct iwl_test *tst, int len) +{ + return tst->ops->alloc_reply(tst->trans->op_mode, len); +} + +static inline int iwl_test_reply(struct iwl_test *tst, struct sk_buff *skb) +{ + return tst->ops->reply(tst->trans->op_mode, skb); +} + +static inline struct sk_buff* +iwl_test_alloc_event(struct iwl_test *tst, int len) +{ + return tst->ops->alloc_event(tst->trans->op_mode, len); +} + +static inline void +iwl_test_event(struct iwl_test *tst, struct sk_buff *skb) +{ + return tst->ops->event(tst->trans->op_mode, skb); +} + +/* + * This function handles the user application commands to the fw. The fw + * commands are sent in a synchronuous manner. In case that the user requested + * to get commands response, it is send to the user. + */ +static int iwl_test_fw_cmd(struct iwl_test *tst, struct nlattr **tb) +{ + struct iwl_host_cmd cmd; + struct iwl_rx_packet *pkt; + struct sk_buff *skb; + void *reply_buf; + u32 reply_len; + int ret; + bool cmd_want_skb; + + memset(&cmd, 0, sizeof(struct iwl_host_cmd)); + + if (!tb[IWL_TM_ATTR_UCODE_CMD_ID] || + !tb[IWL_TM_ATTR_UCODE_CMD_DATA]) { + IWL_ERR(tst->trans, "Missing fw command mandatory fields\n"); + return -ENOMSG; + } + + cmd.flags = CMD_ON_DEMAND | CMD_SYNC; + cmd_want_skb = nla_get_flag(tb[IWL_TM_ATTR_UCODE_CMD_SKB]); + if (cmd_want_skb) + cmd.flags |= CMD_WANT_SKB; + + cmd.id = nla_get_u8(tb[IWL_TM_ATTR_UCODE_CMD_ID]); + cmd.data[0] = nla_data(tb[IWL_TM_ATTR_UCODE_CMD_DATA]); + cmd.len[0] = nla_len(tb[IWL_TM_ATTR_UCODE_CMD_DATA]); + cmd.dataflags[0] = IWL_HCMD_DFL_NOCOPY; + IWL_DEBUG_INFO(tst->trans, "test fw cmd=0x%x, flags 0x%x, len %d\n", + cmd.id, cmd.flags, cmd.len[0]); + + ret = iwl_test_send_cmd(tst, &cmd); + if (ret) { + IWL_ERR(tst->trans, "Failed to send hcmd\n"); + return ret; + } + if (!cmd_want_skb) + return ret; + + /* Handling return of SKB to the user */ + pkt = cmd.resp_pkt; + if (!pkt) { + IWL_ERR(tst->trans, "HCMD received a null response packet\n"); + return ret; + } + + reply_len = le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_FRAME_SIZE_MSK; + skb = iwl_test_alloc_reply(tst, reply_len + 20); + reply_buf = kmalloc(reply_len, GFP_KERNEL); + if (!skb || !reply_buf) { + kfree_skb(skb); + kfree(reply_buf); + return -ENOMEM; + } + + /* The reply is in a page, that we cannot send to user space. 
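In iwl_test_fw_cmd() above, the size of the firmware response is recovered by masking the packet's little-endian len_n_flags word with FH_RSCSR_FRAME_SIZE_MSK; the Rx multicast path later in this file adds sizeof(__le32) on top, because that length does not cover the len_n_flags field itself. A standalone model of the arithmetic -- the mask width used here is an assumption, the authoritative constant lives in iwl-fh.h:

#include <stdint.h>
#include <stdio.h>

/* Assumed 14-bit frame-size field of len_n_flags; see iwl-fh.h for the
 * real FH_RSCSR_FRAME_SIZE_MSK definition. */
#define FRAME_SIZE_MSK	0x00003fffu

int main(void)
{
	/* Made-up len_n_flags word: flags in the high bits, frame size
	 * (here 0x1a4 bytes) in the low bits. */
	uint32_t len_n_flags = 0x800001a4u;

	uint32_t reply_len = len_n_flags & FRAME_SIZE_MSK;
	uint32_t event_len = reply_len + sizeof(uint32_t); /* + len_n_flags itself */

	printf("reply payload: %u bytes, event payload: %u bytes\n",
	       (unsigned)reply_len, (unsigned)event_len);
	return 0;
}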
*/ + memcpy(reply_buf, &(pkt->hdr), reply_len); + iwl_free_resp(&cmd); + + if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, + IWL_TM_CMD_DEV2APP_UCODE_RX_PKT) || + nla_put(skb, IWL_TM_ATTR_UCODE_RX_PKT, reply_len, reply_buf)) + goto nla_put_failure; + return iwl_test_reply(tst, skb); + +nla_put_failure: + IWL_DEBUG_INFO(tst->trans, "Failed creating NL attributes\n"); + kfree(reply_buf); + kfree_skb(skb); + return -ENOMSG; +} + +/* + * Handles the user application commands for register access. + */ +static int iwl_test_reg(struct iwl_test *tst, struct nlattr **tb) +{ + u32 ofs, val32, cmd; + u8 val8; + struct sk_buff *skb; + int status = 0; + struct iwl_trans *trans = tst->trans; + + if (!tb[IWL_TM_ATTR_REG_OFFSET]) { + IWL_ERR(trans, "Missing reg offset\n"); + return -ENOMSG; + } + + ofs = nla_get_u32(tb[IWL_TM_ATTR_REG_OFFSET]); + IWL_DEBUG_INFO(trans, "test reg access cmd offset=0x%x\n", ofs); + + cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); + + /* + * Allow access only to FH/CSR/HBUS in direct mode. + * Since we don't have the upper bounds for the CSR and HBUS segments, + * we will use only the upper bound of FH for sanity check. + */ + if (ofs >= FH_MEM_UPPER_BOUND) { + IWL_ERR(trans, "offset out of segment (0x0 - 0x%x)\n", + FH_MEM_UPPER_BOUND); + return -EINVAL; + } + + switch (cmd) { + case IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32: + val32 = iwl_read_direct32(tst->trans, ofs); + IWL_DEBUG_INFO(trans, "32 value to read 0x%x\n", val32); + + skb = iwl_test_alloc_reply(tst, 20); + if (!skb) { + IWL_ERR(trans, "Memory allocation fail\n"); + return -ENOMEM; + } + if (nla_put_u32(skb, IWL_TM_ATTR_REG_VALUE32, val32)) + goto nla_put_failure; + status = iwl_test_reply(tst, skb); + if (status < 0) + IWL_ERR(trans, "Error sending msg : %d\n", status); + break; + + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32: + if (!tb[IWL_TM_ATTR_REG_VALUE32]) { + IWL_ERR(trans, "Missing value to write\n"); + return -ENOMSG; + } else { + val32 = nla_get_u32(tb[IWL_TM_ATTR_REG_VALUE32]); + IWL_DEBUG_INFO(trans, "32b write val=0x%x\n", val32); + iwl_write_direct32(tst->trans, ofs, val32); + } + break; + + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8: + if (!tb[IWL_TM_ATTR_REG_VALUE8]) { + IWL_ERR(trans, "Missing value to write\n"); + return -ENOMSG; + } else { + val8 = nla_get_u8(tb[IWL_TM_ATTR_REG_VALUE8]); + IWL_DEBUG_INFO(trans, "8b write val=0x%x\n", val8); + iwl_write8(tst->trans, ofs, val8); + } + break; + + default: + IWL_ERR(trans, "Unknown test register cmd ID\n"); + return -ENOMSG; + } + + return status; + +nla_put_failure: + kfree_skb(skb); + return -EMSGSIZE; +} + +/* + * Handles the request to start FW tracing. Allocates of the trace buffer + * and sends a reply to user space with the address of the allocated buffer. 
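iwl_test_trace_begin(), whose body follows, pads the requested size, aligns the usable pointer of the DMA allocation up to a 0x100 boundary, and pre-computes how many DUMP_CHUNK_SIZE pieces the later dump will need, with the last chunk possibly shorter. A standalone sketch of that bookkeeping; the padding, chunk size and addresses below are placeholders, not the driver's constants:

#include <stdint.h>
#include <stdio.h>

/* Placeholders; the driver takes TRACE_BUFF_PADD and DUMP_CHUNK_SIZE
 * from its own headers. */
#define PADD		0x100u
#define CHUNK_SIZE	2048u

int main(void)
{
	uint32_t size  = 5000;			/* requested trace size (example) */
	uint32_t tsize = size + PADD;		/* what is actually DMA-allocated */
	uintptr_t cpu_addr = 0x12345678;	/* made-up allocation address */

	/* Align the usable trace pointer up to a 0x100 boundary, as
	 * PTR_ALIGN(cpu_addr, 0x100) does. */
	uintptr_t trace_addr = (cpu_addr + 0xff) & ~(uintptr_t)0xff;

	/* Number of dump chunks, last one possibly partial. */
	uint32_t nchunks  = (size + CHUNK_SIZE - 1) / CHUNK_SIZE;
	uint32_t last_len = size % CHUNK_SIZE ? size % CHUNK_SIZE : CHUNK_SIZE;

	printf("alloc %u bytes, trace at +0x%lx, %u chunks (last %u bytes)\n",
	       (unsigned)tsize, (unsigned long)(trace_addr - cpu_addr),
	       (unsigned)nchunks, (unsigned)last_len);
	return 0;
}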
+ */ +static int iwl_test_trace_begin(struct iwl_test *tst, struct nlattr **tb) +{ + struct sk_buff *skb; + int status = 0; + + if (tst->trace.enabled) + return -EBUSY; + + if (!tb[IWL_TM_ATTR_TRACE_SIZE]) + tst->trace.size = TRACE_BUFF_SIZE_DEF; + else + tst->trace.size = + nla_get_u32(tb[IWL_TM_ATTR_TRACE_SIZE]); + + if (!tst->trace.size) + return -EINVAL; + + if (tst->trace.size < TRACE_BUFF_SIZE_MIN || + tst->trace.size > TRACE_BUFF_SIZE_MAX) + return -EINVAL; + + tst->trace.tsize = tst->trace.size + TRACE_BUFF_PADD; + tst->trace.cpu_addr = dma_alloc_coherent(tst->trans->dev, + tst->trace.tsize, + &tst->trace.dma_addr, + GFP_KERNEL); + if (!tst->trace.cpu_addr) + return -ENOMEM; + + tst->trace.enabled = true; + tst->trace.trace_addr = (u8 *)PTR_ALIGN(tst->trace.cpu_addr, 0x100); + + memset(tst->trace.trace_addr, 0x03B, tst->trace.size); + + skb = iwl_test_alloc_reply(tst, sizeof(tst->trace.dma_addr) + 20); + if (!skb) { + IWL_ERR(tst->trans, "Memory allocation fail\n"); + iwl_test_trace_stop(tst); + return -ENOMEM; + } + + if (nla_put(skb, IWL_TM_ATTR_TRACE_ADDR, + sizeof(tst->trace.dma_addr), + (u64 *)&tst->trace.dma_addr)) + goto nla_put_failure; + + status = iwl_test_reply(tst, skb); + if (status < 0) + IWL_ERR(tst->trans, "Error sending msg : %d\n", status); + + tst->trace.nchunks = DIV_ROUND_UP(tst->trace.size, + DUMP_CHUNK_SIZE); + + return status; + +nla_put_failure: + kfree_skb(skb); + if (nla_get_u32(tb[IWL_TM_ATTR_COMMAND]) == + IWL_TM_CMD_APP2DEV_BEGIN_TRACE) + iwl_test_trace_stop(tst); + return -EMSGSIZE; +} + +/* + * Handles indirect read from the periphery or the SRAM. The read is performed + * to a temporary buffer. The user space application should later issue a dump + */ +static int iwl_test_indirect_read(struct iwl_test *tst, u32 addr, u32 size) +{ + struct iwl_trans *trans = tst->trans; + unsigned long flags; + int i; + + if (size & 0x3) + return -EINVAL; + + tst->mem.size = size; + tst->mem.addr = kmalloc(tst->mem.size, GFP_KERNEL); + if (tst->mem.addr == NULL) + return -ENOMEM; + + /* Hard-coded periphery absolute address */ + if (IWL_ABS_PRPH_START <= addr && + addr < IWL_ABS_PRPH_START + PRPH_END) { + spin_lock_irqsave(&trans->reg_lock, flags); + iwl_grab_nic_access(trans); + iwl_write32(trans, HBUS_TARG_PRPH_RADDR, + addr | (3 << 24)); + for (i = 0; i < size; i += 4) + *(u32 *)(tst->mem.addr + i) = + iwl_read32(trans, HBUS_TARG_PRPH_RDAT); + iwl_release_nic_access(trans); + spin_unlock_irqrestore(&trans->reg_lock, flags); + } else { /* target memory (SRAM) */ + _iwl_read_targ_mem_dwords(trans, addr, + tst->mem.addr, + tst->mem.size / 4); + } + + tst->mem.nchunks = + DIV_ROUND_UP(tst->mem.size, DUMP_CHUNK_SIZE); + tst->mem.in_read = true; + return 0; + +} + +/* + * Handles indirect write to the periphery or SRAM. The is performed to a + * temporary buffer. 
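iwl_test_indirect_read() above picks the access method from the target address alone: anything in [IWL_ABS_PRPH_START, IWL_ABS_PRPH_START + PRPH_END) goes through the HBUS_TARG_PRPH_* registers, everything else is treated as SRAM and copied as 32-bit words. A standalone model of that dispatch; PRPH_END is a placeholder value here, the real one comes from iwl-prph.h:

#include <stdint.h>
#include <stdio.h>

#define ABS_PRPH_START	0xA00000u	/* as defined above */
#define PRPH_END	0x10000u	/* placeholder; see iwl-prph.h */

static const char *classify(uint32_t addr, uint32_t size)
{
	if (size & 0x3)
		return "rejected (size must be a multiple of 4)";
	if (addr >= ABS_PRPH_START && addr < ABS_PRPH_START + PRPH_END)
		return "periphery (HBUS_TARG_PRPH_* access)";
	return "SRAM (HBUS_TARG_MEM_* access)";
}

int main(void)
{
	printf("0xA01000/16 -> %s\n", classify(0xA01000, 16));
	printf("0x800000/64 -> %s\n", classify(0x800000, 64));
	printf("0x800000/7  -> %s\n", classify(0x800000, 7));
	return 0;
}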
+ */ +static int iwl_test_indirect_write(struct iwl_test *tst, u32 addr, + u32 size, unsigned char *buf) +{ + struct iwl_trans *trans = tst->trans; + u32 val, i; + unsigned long flags; + + if (IWL_ABS_PRPH_START <= addr && + addr < IWL_ABS_PRPH_START + PRPH_END) { + /* Periphery writes can be 1-3 bytes long, or DWORDs */ + if (size < 4) { + memcpy(&val, buf, size); + spin_lock_irqsave(&trans->reg_lock, flags); + iwl_grab_nic_access(trans); + iwl_write32(trans, HBUS_TARG_PRPH_WADDR, + (addr & 0x0000FFFF) | + ((size - 1) << 24)); + iwl_write32(trans, HBUS_TARG_PRPH_WDAT, val); + iwl_release_nic_access(trans); + /* needed after consecutive writes w/o read */ + mmiowb(); + spin_unlock_irqrestore(&trans->reg_lock, flags); + } else { + if (size % 4) + return -EINVAL; + for (i = 0; i < size; i += 4) + iwl_write_prph(trans, addr+i, + *(u32 *)(buf+i)); + } + } else if (iwl_test_valid_hw_addr(tst, addr)) { + _iwl_write_targ_mem_dwords(trans, addr, buf, size / 4); + } else { + return -EINVAL; + } + return 0; +} + +/* + * Handles the user application commands for indirect read/write + * to/from the periphery or the SRAM. + */ +static int iwl_test_indirect_mem(struct iwl_test *tst, struct nlattr **tb) +{ + u32 addr, size, cmd; + unsigned char *buf; + + /* Both read and write should be blocked, for atomicity */ + if (tst->mem.in_read) + return -EBUSY; + + cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); + if (!tb[IWL_TM_ATTR_MEM_ADDR]) { + IWL_ERR(tst->trans, "Error finding memory offset address\n"); + return -ENOMSG; + } + addr = nla_get_u32(tb[IWL_TM_ATTR_MEM_ADDR]); + if (!tb[IWL_TM_ATTR_BUFFER_SIZE]) { + IWL_ERR(tst->trans, "Error finding size for memory reading\n"); + return -ENOMSG; + } + size = nla_get_u32(tb[IWL_TM_ATTR_BUFFER_SIZE]); + + if (cmd == IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_READ) { + return iwl_test_indirect_read(tst, addr, size); + } else { + if (!tb[IWL_TM_ATTR_BUFFER_DUMP]) + return -EINVAL; + buf = (unsigned char *)nla_data(tb[IWL_TM_ATTR_BUFFER_DUMP]); + return iwl_test_indirect_write(tst, addr, size, buf); + } +} + +/* + * Enable notifications to user space + */ +static int iwl_test_notifications(struct iwl_test *tst, + struct nlattr **tb) +{ + tst->notify = nla_get_flag(tb[IWL_TM_ATTR_ENABLE_NOTIFICATION]); + return 0; +} + +/* + * Handles the request to get the device id + */ +static int iwl_test_get_dev_id(struct iwl_test *tst, struct nlattr **tb) +{ + u32 devid = tst->trans->hw_id; + struct sk_buff *skb; + int status; + + IWL_DEBUG_INFO(tst->trans, "hw version: 0x%x\n", devid); + + skb = iwl_test_alloc_reply(tst, 20); + if (!skb) { + IWL_ERR(tst->trans, "Memory allocation fail\n"); + return -ENOMEM; + } + + if (nla_put_u32(skb, IWL_TM_ATTR_DEVICE_ID, devid)) + goto nla_put_failure; + status = iwl_test_reply(tst, skb); + if (status < 0) + IWL_ERR(tst->trans, "Error sending msg : %d\n", status); + + return 0; + +nla_put_failure: + kfree_skb(skb); + return -EMSGSIZE; +} + +/* + * Handles the request to get the FW version + */ +static int iwl_test_get_fw_ver(struct iwl_test *tst, struct nlattr **tb) +{ + struct sk_buff *skb; + int status; + u32 ver = iwl_test_fw_ver(tst); + + IWL_DEBUG_INFO(tst->trans, "uCode version raw: 0x%x\n", ver); + + skb = iwl_test_alloc_reply(tst, 20); + if (!skb) { + IWL_ERR(tst->trans, "Memory allocation fail\n"); + return -ENOMEM; + } + + if (nla_put_u32(skb, IWL_TM_ATTR_FW_VERSION, ver)) + goto nla_put_failure; + + status = iwl_test_reply(tst, skb); + if (status < 0) + IWL_ERR(tst->trans, "Error sending msg : %d\n", status); + + return 0; + 
+nla_put_failure: + kfree_skb(skb); + return -EMSGSIZE; +} + +/* + * Parse the netlink message and validate that the IWL_TM_ATTR_CMD exists + */ +int iwl_test_parse(struct iwl_test *tst, struct nlattr **tb, + void *data, int len) +{ + int result; + + result = nla_parse(tb, IWL_TM_ATTR_MAX - 1, data, len, + iwl_testmode_gnl_msg_policy); + if (result) { + IWL_ERR(tst->trans, "Fail parse gnl msg: %d\n", result); + return result; + } + + /* IWL_TM_ATTR_COMMAND is absolutely mandatory */ + if (!tb[IWL_TM_ATTR_COMMAND]) { + IWL_ERR(tst->trans, "Missing testmode command type\n"); + return -ENOMSG; + } + return 0; +} +EXPORT_SYMBOL_GPL(iwl_test_parse); + +/* + * Handle test commands. + * Returns 1 for unknown commands (not handled by the test object); negative + * value in case of error. + */ +int iwl_test_handle_cmd(struct iwl_test *tst, struct nlattr **tb) +{ + int result; + + switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { + case IWL_TM_CMD_APP2DEV_UCODE: + IWL_DEBUG_INFO(tst->trans, "test cmd to uCode\n"); + result = iwl_test_fw_cmd(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32: + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32: + case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8: + IWL_DEBUG_INFO(tst->trans, "test cmd to register\n"); + result = iwl_test_reg(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_BEGIN_TRACE: + IWL_DEBUG_INFO(tst->trans, "test uCode trace cmd to driver\n"); + result = iwl_test_trace_begin(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_END_TRACE: + iwl_test_trace_stop(tst); + result = 0; + break; + + case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_READ: + case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_WRITE: + IWL_DEBUG_INFO(tst->trans, "test indirect memory cmd\n"); + result = iwl_test_indirect_mem(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_NOTIFICATIONS: + IWL_DEBUG_INFO(tst->trans, "test notifications cmd\n"); + result = iwl_test_notifications(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_GET_FW_VERSION: + IWL_DEBUG_INFO(tst->trans, "test get FW ver cmd\n"); + result = iwl_test_get_fw_ver(tst, tb); + break; + + case IWL_TM_CMD_APP2DEV_GET_DEVICE_ID: + IWL_DEBUG_INFO(tst->trans, "test Get device ID cmd\n"); + result = iwl_test_get_dev_id(tst, tb); + break; + + default: + IWL_DEBUG_INFO(tst->trans, "Unknown test command\n"); + result = 1; + break; + } + return result; +} +EXPORT_SYMBOL_GPL(iwl_test_handle_cmd); + +static int iwl_test_trace_dump(struct iwl_test *tst, struct sk_buff *skb, + struct netlink_callback *cb) +{ + int idx, length; + + if (!tst->trace.enabled || !tst->trace.trace_addr) + return -EFAULT; + + idx = cb->args[4]; + if (idx >= tst->trace.nchunks) + return -ENOENT; + + length = DUMP_CHUNK_SIZE; + if (((idx + 1) == tst->trace.nchunks) && + (tst->trace.size % DUMP_CHUNK_SIZE)) + length = tst->trace.size % + DUMP_CHUNK_SIZE; + + if (nla_put(skb, IWL_TM_ATTR_TRACE_DUMP, length, + tst->trace.trace_addr + (DUMP_CHUNK_SIZE * idx))) + goto nla_put_failure; + + cb->args[4] = ++idx; + return 0; + + nla_put_failure: + return -ENOBUFS; +} + +static int iwl_test_buffer_dump(struct iwl_test *tst, struct sk_buff *skb, + struct netlink_callback *cb) +{ + int idx, length; + + if (!tst->mem.in_read) + return -EFAULT; + + idx = cb->args[4]; + if (idx >= tst->mem.nchunks) { + iwl_test_mem_stop(tst); + return -ENOENT; + } + + length = DUMP_CHUNK_SIZE; + if (((idx + 1) == tst->mem.nchunks) && + (tst->mem.size % DUMP_CHUNK_SIZE)) + length = tst->mem.size % DUMP_CHUNK_SIZE; + + if (nla_put(skb, IWL_TM_ATTR_BUFFER_DUMP, length, + tst->mem.addr + (DUMP_CHUNK_SIZE * idx))) + goto 
nla_put_failure; + + cb->args[4] = ++idx; + return 0; + + nla_put_failure: + return -ENOBUFS; +} + +/* + * Handle dump commands. + * Returns 1 for unknown commands (not handled by the test object); negative + * value in case of error. + */ +int iwl_test_dump(struct iwl_test *tst, u32 cmd, struct sk_buff *skb, + struct netlink_callback *cb) +{ + int result; + + switch (cmd) { + case IWL_TM_CMD_APP2DEV_READ_TRACE: + IWL_DEBUG_INFO(tst->trans, "uCode trace cmd\n"); + result = iwl_test_trace_dump(tst, skb, cb); + break; + + case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_DUMP: + IWL_DEBUG_INFO(tst->trans, "testmode sram dump cmd\n"); + result = iwl_test_buffer_dump(tst, skb, cb); + break; + + default: + result = 1; + break; + } + return result; +} +EXPORT_SYMBOL_GPL(iwl_test_dump); + +/* + * Multicast a spontaneous messages from the device to the user space. + */ +static void iwl_test_send_rx(struct iwl_test *tst, + struct iwl_rx_cmd_buffer *rxb) +{ + struct sk_buff *skb; + struct iwl_rx_packet *data; + int length; + + data = rxb_addr(rxb); + length = le32_to_cpu(data->len_n_flags) & FH_RSCSR_FRAME_SIZE_MSK; + + /* the length doesn't include len_n_flags field, so add it manually */ + length += sizeof(__le32); + + skb = iwl_test_alloc_event(tst, length + 20); + if (skb == NULL) { + IWL_ERR(tst->trans, "Out of memory for message to user\n"); + return; + } + + if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, + IWL_TM_CMD_DEV2APP_UCODE_RX_PKT) || + nla_put(skb, IWL_TM_ATTR_UCODE_RX_PKT, length, data)) + goto nla_put_failure; + + iwl_test_event(tst, skb); + return; + +nla_put_failure: + kfree_skb(skb); + IWL_ERR(tst->trans, "Ouch, overran buffer, check allocation!\n"); +} + +/* + * Called whenever a Rx frames is recevied from the device. If notifications to + * the user space are requested, sends the frames to the user. + */ +void iwl_test_rx(struct iwl_test *tst, struct iwl_rx_cmd_buffer *rxb) +{ + if (tst->notify) + iwl_test_send_rx(tst, rxb); +} +EXPORT_SYMBOL_GPL(iwl_test_rx); diff --git a/drivers/net/wireless/iwlwifi/iwl-test.h b/drivers/net/wireless/iwlwifi/iwl-test.h new file mode 100644 index 000000000000..e13ffa8acc02 --- /dev/null +++ b/drivers/net/wireless/iwlwifi/iwl-test.h @@ -0,0 +1,161 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, + * USA + * + * The full GNU General Public License is included in this distribution + * in the file called LICENSE.GPL. + * + * Contact Information: + * Intel Linux Wireless <ilw@linux.intel.com> + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright(c) 2010 - 2012 Intel Corporation. 
All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + *****************************************************************************/ + +#ifndef __IWL_TEST_H__ +#define __IWL_TEST_H__ + +#include <linux/types.h> +#include "iwl-trans.h" + +struct iwl_test_trace { + u32 size; + u32 tsize; + u32 nchunks; + u8 *cpu_addr; + u8 *trace_addr; + dma_addr_t dma_addr; + bool enabled; +}; + +struct iwl_test_mem { + u32 size; + u32 nchunks; + u8 *addr; + bool in_read; +}; + +/* + * struct iwl_test_ops: callback to the op mode + * + * The structure defines the callbacks that the op_mode should handle, + * inorder to handle logic that is out of the scope of iwl_test. The + * op_mode must set all the callbacks. + + * @send_cmd: handler that is used by the test object to request the + * op_mode to send a command to the fw. + * + * @valid_hw_addr: handler that is used by the test object to request the + * op_mode to check if the given address is a valid address. + * + * @get_fw_ver: handler used to get the FW version. + * + * @alloc_reply: handler used by the test object to request the op_mode + * to allocate an skb for sending a reply to the user, and initialize + * the skb. It is assumed that the test object only fills the required + * attributes. + * + * @reply: handler used by the test object to request the op_mode to reply + * to a request. The skb is an skb previously allocated by the the + * alloc_reply callback. + I + * @alloc_event: handler used by the test object to request the op_mode + * to allocate an skb for sending an event, and initialize + * the skb. It is assumed that the test object only fills the required + * attributes. + * + * @reply: handler used by the test object to request the op_mode to send + * an event. The skb is an skb previously allocated by the the + * alloc_event callback. 
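The struct iwl_test_ops documented above (and defined right below) is what an op_mode fills in before handing its test object to iwl_test_init(); iwl_test_free() undoes it before the transport stops. The following is only a shape sketch of that wiring, not the dvm/testmode implementation: the my_* names are hypothetical, the callback bodies are placeholders, and it builds only inside the driver tree against its headers.

/* Hypothetical op_mode glue; names and bodies are illustrative only. */
#include <linux/errno.h>
#include "iwl-test.h"

static int my_send_cmd(struct iwl_op_mode *op_mode, struct iwl_host_cmd *cmd)
{
	return -ENOSYS;		/* placeholder: forward to the op_mode's hcmd path */
}

static bool my_valid_hw_addr(u32 addr)
{
	return false;		/* placeholder: accept nothing */
}

static u32 my_get_fw_ver(struct iwl_op_mode *op_mode)
{
	return 0;		/* placeholder */
}

static struct sk_buff *my_alloc_reply(struct iwl_op_mode *op_mode, int len)
{
	return NULL;		/* placeholder: e.g. a testmode reply skb */
}

static int my_reply(struct iwl_op_mode *op_mode, struct sk_buff *skb)
{
	return -ENOSYS;		/* placeholder */
}

static struct sk_buff *my_alloc_event(struct iwl_op_mode *op_mode, int len)
{
	return NULL;		/* placeholder */
}

static void my_event(struct iwl_op_mode *op_mode, struct sk_buff *skb)
{
}

static struct iwl_test_ops my_test_ops = {
	.send_cmd	= my_send_cmd,
	.valid_hw_addr	= my_valid_hw_addr,
	.get_fw_ver	= my_get_fw_ver,
	.alloc_reply	= my_alloc_reply,
	.reply		= my_reply,
	.alloc_event	= my_alloc_event,
	.event		= my_event,
};

/* During op_mode start, once the transport is up:
 *	iwl_test_init(&priv->tst, trans, &my_test_ops);
 * and before the transport goes down:
 *	iwl_test_free(&priv->tst);
 */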
+ */ +struct iwl_test_ops { + int (*send_cmd)(struct iwl_op_mode *op_modes, + struct iwl_host_cmd *cmd); + bool (*valid_hw_addr)(u32 addr); + u32 (*get_fw_ver)(struct iwl_op_mode *op_mode); + + struct sk_buff *(*alloc_reply)(struct iwl_op_mode *op_mode, int len); + int (*reply)(struct iwl_op_mode *op_mode, struct sk_buff *skb); + struct sk_buff* (*alloc_event)(struct iwl_op_mode *op_mode, int len); + void (*event)(struct iwl_op_mode *op_mode, struct sk_buff *skb); +}; + +struct iwl_test { + struct iwl_trans *trans; + struct iwl_test_ops *ops; + struct iwl_test_trace trace; + struct iwl_test_mem mem; + bool notify; +}; + +void iwl_test_init(struct iwl_test *tst, struct iwl_trans *trans, + struct iwl_test_ops *ops); + +void iwl_test_free(struct iwl_test *tst); + +int iwl_test_parse(struct iwl_test *tst, struct nlattr **tb, + void *data, int len); + +int iwl_test_handle_cmd(struct iwl_test *tst, struct nlattr **tb); + +int iwl_test_dump(struct iwl_test *tst, u32 cmd, struct sk_buff *skb, + struct netlink_callback *cb); + +void iwl_test_rx(struct iwl_test *tst, struct iwl_rx_cmd_buffer *rxb); + +static inline void iwl_test_enable_notifications(struct iwl_test *tst, + bool enable) +{ + tst->notify = enable; +} + +#endif diff --git a/drivers/net/wireless/iwlwifi/iwl-testmode.c b/drivers/net/wireless/iwlwifi/iwl-testmode.c deleted file mode 100644 index 060aac3e22f1..000000000000 --- a/drivers/net/wireless/iwlwifi/iwl-testmode.c +++ /dev/null @@ -1,1114 +0,0 @@ -/****************************************************************************** - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, - * USA - * - * The full GNU General Public License is included in this distribution - * in the file called LICENSE.GPL. - * - * Contact Information: - * Intel Linux Wireless <ilw@linux.intel.com> - * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 - * - * BSD LICENSE - * - * Copyright(c) 2010 - 2012 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. 
- * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - *****************************************************************************/ -#include <linux/init.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/dma-mapping.h> -#include <net/net_namespace.h> -#include <linux/netdevice.h> -#include <net/cfg80211.h> -#include <net/mac80211.h> -#include <net/netlink.h> - -#include "iwl-dev.h" -#include "iwl-debug.h" -#include "iwl-io.h" -#include "iwl-agn.h" -#include "iwl-testmode.h" -#include "iwl-trans.h" -#include "iwl-fh.h" -#include "iwl-prph.h" - - -/* Periphery registers absolute lower bound. This is used in order to - * differentiate registery access through HBUS_TARG_PRPH_* and - * HBUS_TARG_MEM_* accesses. - */ -#define IWL_TM_ABS_PRPH_START (0xA00000) - -/* The TLVs used in the gnl message policy between the kernel module and - * user space application. iwl_testmode_gnl_msg_policy is to be carried - * through the NL80211_CMD_TESTMODE channel regulated by nl80211. 
- * See iwl-testmode.h - */ -static -struct nla_policy iwl_testmode_gnl_msg_policy[IWL_TM_ATTR_MAX] = { - [IWL_TM_ATTR_COMMAND] = { .type = NLA_U32, }, - - [IWL_TM_ATTR_UCODE_CMD_ID] = { .type = NLA_U8, }, - [IWL_TM_ATTR_UCODE_CMD_DATA] = { .type = NLA_UNSPEC, }, - - [IWL_TM_ATTR_REG_OFFSET] = { .type = NLA_U32, }, - [IWL_TM_ATTR_REG_VALUE8] = { .type = NLA_U8, }, - [IWL_TM_ATTR_REG_VALUE32] = { .type = NLA_U32, }, - - [IWL_TM_ATTR_SYNC_RSP] = { .type = NLA_UNSPEC, }, - [IWL_TM_ATTR_UCODE_RX_PKT] = { .type = NLA_UNSPEC, }, - - [IWL_TM_ATTR_EEPROM] = { .type = NLA_UNSPEC, }, - - [IWL_TM_ATTR_TRACE_ADDR] = { .type = NLA_UNSPEC, }, - [IWL_TM_ATTR_TRACE_DUMP] = { .type = NLA_UNSPEC, }, - [IWL_TM_ATTR_TRACE_SIZE] = { .type = NLA_U32, }, - - [IWL_TM_ATTR_FIXRATE] = { .type = NLA_U32, }, - - [IWL_TM_ATTR_UCODE_OWNER] = { .type = NLA_U8, }, - - [IWL_TM_ATTR_MEM_ADDR] = { .type = NLA_U32, }, - [IWL_TM_ATTR_BUFFER_SIZE] = { .type = NLA_U32, }, - [IWL_TM_ATTR_BUFFER_DUMP] = { .type = NLA_UNSPEC, }, - - [IWL_TM_ATTR_FW_VERSION] = { .type = NLA_U32, }, - [IWL_TM_ATTR_DEVICE_ID] = { .type = NLA_U32, }, - [IWL_TM_ATTR_FW_TYPE] = { .type = NLA_U32, }, - [IWL_TM_ATTR_FW_INST_SIZE] = { .type = NLA_U32, }, - [IWL_TM_ATTR_FW_DATA_SIZE] = { .type = NLA_U32, }, - - [IWL_TM_ATTR_ENABLE_NOTIFICATION] = {.type = NLA_FLAG, }, -}; - -/* - * See the struct iwl_rx_packet in iwl-commands.h for the format of the - * received events from the device - */ -static inline int get_event_length(struct iwl_rx_cmd_buffer *rxb) -{ - struct iwl_rx_packet *pkt = rxb_addr(rxb); - if (pkt) - return le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_FRAME_SIZE_MSK; - else - return 0; -} - - -/* - * This function multicasts the spontaneous messages from the device to the - * user space. It is invoked whenever there is a received messages - * from the device. This function is called within the ISR of the rx handlers - * in iwlagn driver. - * - * The parsing of the message content is left to the user space application, - * The message content is treated as unattacked raw data and is encapsulated - * with IWL_TM_ATTR_UCODE_RX_PKT multicasting to the user space. - * - * @priv: the instance of iwlwifi device - * @rxb: pointer to rx data content received by the ISR - * - * See the message policies and TLVs in iwl_testmode_gnl_msg_policy[]. 
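For reference, the removed get_event_length() above sizes a received packet by masking len_n_flags with FH_RSCSR_FRAME_SIZE_MSK (bits 0-13), the same idiom the host-command reply handling further down uses. Given a received struct iwl_rx_packet *pkt, the extraction is simply:

	u32 len = le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_FRAME_SIZE_MSK;
	/* len counts from the command header onward; the leading len_n_flags
	 * word is not included, which is why iwl_testmode_ucode_rx_pkt() below
	 * adds sizeof(__le32) back before forwarding the event to user space */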
- * For the messages multicasting to the user application, the mandatory - * TLV fields are : - * IWL_TM_ATTR_COMMAND must be IWL_TM_CMD_DEV2APP_UCODE_RX_PKT - * IWL_TM_ATTR_UCODE_RX_PKT for carrying the message content - */ - -static void iwl_testmode_ucode_rx_pkt(struct iwl_priv *priv, - struct iwl_rx_cmd_buffer *rxb) -{ - struct ieee80211_hw *hw = priv->hw; - struct sk_buff *skb; - void *data; - int length; - - data = (void *)rxb_addr(rxb); - length = get_event_length(rxb); - - if (!data || length == 0) - return; - - skb = cfg80211_testmode_alloc_event_skb(hw->wiphy, 20 + length, - GFP_ATOMIC); - if (skb == NULL) { - IWL_ERR(priv, - "Run out of memory for messages to user space ?\n"); - return; - } - if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, IWL_TM_CMD_DEV2APP_UCODE_RX_PKT) || - /* the length doesn't include len_n_flags field, so add it manually */ - nla_put(skb, IWL_TM_ATTR_UCODE_RX_PKT, length + sizeof(__le32), data)) - goto nla_put_failure; - cfg80211_testmode_event(skb, GFP_ATOMIC); - return; - -nla_put_failure: - kfree_skb(skb); - IWL_ERR(priv, "Ouch, overran buffer, check allocation!\n"); -} - -void iwl_testmode_init(struct iwl_priv *priv) -{ - priv->pre_rx_handler = NULL; - priv->testmode_trace.trace_enabled = false; - priv->testmode_mem.read_in_progress = false; -} - -static void iwl_mem_cleanup(struct iwl_priv *priv) -{ - if (priv->testmode_mem.read_in_progress) { - kfree(priv->testmode_mem.buff_addr); - priv->testmode_mem.buff_addr = NULL; - priv->testmode_mem.buff_size = 0; - priv->testmode_mem.num_chunks = 0; - priv->testmode_mem.read_in_progress = false; - } -} - -static void iwl_trace_cleanup(struct iwl_priv *priv) -{ - if (priv->testmode_trace.trace_enabled) { - if (priv->testmode_trace.cpu_addr && - priv->testmode_trace.dma_addr) - dma_free_coherent(priv->trans->dev, - priv->testmode_trace.total_size, - priv->testmode_trace.cpu_addr, - priv->testmode_trace.dma_addr); - priv->testmode_trace.trace_enabled = false; - priv->testmode_trace.cpu_addr = NULL; - priv->testmode_trace.trace_addr = NULL; - priv->testmode_trace.dma_addr = 0; - priv->testmode_trace.buff_size = 0; - priv->testmode_trace.total_size = 0; - } -} - - -void iwl_testmode_cleanup(struct iwl_priv *priv) -{ - iwl_trace_cleanup(priv); - iwl_mem_cleanup(priv); -} - - -/* - * This function handles the user application commands to the ucode. - * - * It retrieves the mandatory fields IWL_TM_ATTR_UCODE_CMD_ID and - * IWL_TM_ATTR_UCODE_CMD_DATA and calls to the handler to send the - * host command to the ucode. - * - * If any mandatory field is missing, -ENOMSG is replied to the user space - * application; otherwise, waits for the host command to be sent and checks - * the return code. In case or error, it is returned, otherwise a reply is - * allocated and the reply RX packet - * is returned. 
- * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_ucode(struct ieee80211_hw *hw, struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - struct iwl_host_cmd cmd; - struct iwl_rx_packet *pkt; - struct sk_buff *skb; - void *reply_buf; - u32 reply_len; - int ret; - bool cmd_want_skb; - - memset(&cmd, 0, sizeof(struct iwl_host_cmd)); - - if (!tb[IWL_TM_ATTR_UCODE_CMD_ID] || - !tb[IWL_TM_ATTR_UCODE_CMD_DATA]) { - IWL_ERR(priv, "Missing ucode command mandatory fields\n"); - return -ENOMSG; - } - - cmd.flags = CMD_ON_DEMAND | CMD_SYNC; - cmd_want_skb = nla_get_flag(tb[IWL_TM_ATTR_UCODE_CMD_SKB]); - if (cmd_want_skb) - cmd.flags |= CMD_WANT_SKB; - - cmd.id = nla_get_u8(tb[IWL_TM_ATTR_UCODE_CMD_ID]); - cmd.data[0] = nla_data(tb[IWL_TM_ATTR_UCODE_CMD_DATA]); - cmd.len[0] = nla_len(tb[IWL_TM_ATTR_UCODE_CMD_DATA]); - cmd.dataflags[0] = IWL_HCMD_DFL_NOCOPY; - IWL_DEBUG_INFO(priv, "testmode ucode command ID 0x%x, flags 0x%x," - " len %d\n", cmd.id, cmd.flags, cmd.len[0]); - - ret = iwl_dvm_send_cmd(priv, &cmd); - if (ret) { - IWL_ERR(priv, "Failed to send hcmd\n"); - return ret; - } - if (!cmd_want_skb) - return ret; - - /* Handling return of SKB to the user */ - pkt = cmd.resp_pkt; - if (!pkt) { - IWL_ERR(priv, "HCMD received a null response packet\n"); - return ret; - } - - reply_len = le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_FRAME_SIZE_MSK; - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, reply_len + 20); - reply_buf = kmalloc(reply_len, GFP_KERNEL); - if (!skb || !reply_buf) { - kfree_skb(skb); - kfree(reply_buf); - return -ENOMEM; - } - - /* The reply is in a page, that we cannot send to user space. */ - memcpy(reply_buf, &(pkt->hdr), reply_len); - iwl_free_resp(&cmd); - - if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, IWL_TM_CMD_DEV2APP_UCODE_RX_PKT) || - nla_put(skb, IWL_TM_ATTR_UCODE_RX_PKT, reply_len, reply_buf)) - goto nla_put_failure; - return cfg80211_testmode_reply(skb); - -nla_put_failure: - IWL_DEBUG_INFO(priv, "Failed creating NL attributes\n"); - return -ENOMSG; -} - - -/* - * This function handles the user application commands for register access. - * - * It retrieves command ID carried with IWL_TM_ATTR_COMMAND and calls to the - * handlers respectively. - * - * If it's an unknown commdn ID, -ENOSYS is returned; or -ENOMSG if the - * mandatory fields(IWL_TM_ATTR_REG_OFFSET,IWL_TM_ATTR_REG_VALUE32, - * IWL_TM_ATTR_REG_VALUE8) are missing; Otherwise 0 is replied indicating - * the success of the command execution. - * - * If IWL_TM_ATTR_COMMAND is IWL_TM_CMD_APP2DEV_REG_READ32, the register read - * value is returned with IWL_TM_ATTR_REG_VALUE32. - * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_reg(struct ieee80211_hw *hw, struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - u32 ofs, val32, cmd; - u8 val8; - struct sk_buff *skb; - int status = 0; - - if (!tb[IWL_TM_ATTR_REG_OFFSET]) { - IWL_ERR(priv, "Missing register offset\n"); - return -ENOMSG; - } - ofs = nla_get_u32(tb[IWL_TM_ATTR_REG_OFFSET]); - IWL_INFO(priv, "testmode register access command offset 0x%x\n", ofs); - - /* Allow access only to FH/CSR/HBUS in direct mode. - Since we don't have the upper bounds for the CSR and HBUS segments, - we will use only the upper bound of FH for sanity check. 
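The CMD_WANT_SKB path in iwl_testmode_ucode() above copies the reply out of the RX page (memcpy from pkt->hdr) before releasing it with iwl_free_resp(). Callers of this pattern elsewhere in the driver usually also check the failure bit in the reply header before trusting the payload, using the IWL_CMD_FAILED_MSK definition added to iwl-trans.h by this series (see that hunk below). A minimal sketch, reusing the cmd/pkt variables from the function above:

	pkt = cmd.resp_pkt;
	if (pkt->hdr.flags & IWL_CMD_FAILED_MSK) {
		/* firmware flagged the command as failed */
		iwl_free_resp(&cmd);
		return -EIO;
	}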
*/ - cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); - if ((cmd == IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32 || - cmd == IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32 || - cmd == IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8) && - (ofs >= FH_MEM_UPPER_BOUND)) { - IWL_ERR(priv, "offset out of segment (0x0 - 0x%x)\n", - FH_MEM_UPPER_BOUND); - return -EINVAL; - } - - switch (cmd) { - case IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32: - val32 = iwl_read_direct32(priv->trans, ofs); - IWL_INFO(priv, "32bit value to read 0x%x\n", val32); - - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (nla_put_u32(skb, IWL_TM_ATTR_REG_VALUE32, val32)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", status); - break; - case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32: - if (!tb[IWL_TM_ATTR_REG_VALUE32]) { - IWL_ERR(priv, "Missing value to write\n"); - return -ENOMSG; - } else { - val32 = nla_get_u32(tb[IWL_TM_ATTR_REG_VALUE32]); - IWL_INFO(priv, "32bit value to write 0x%x\n", val32); - iwl_write_direct32(priv->trans, ofs, val32); - } - break; - case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8: - if (!tb[IWL_TM_ATTR_REG_VALUE8]) { - IWL_ERR(priv, "Missing value to write\n"); - return -ENOMSG; - } else { - val8 = nla_get_u8(tb[IWL_TM_ATTR_REG_VALUE8]); - IWL_INFO(priv, "8bit value to write 0x%x\n", val8); - iwl_write8(priv->trans, ofs, val8); - } - break; - default: - IWL_ERR(priv, "Unknown testmode register command ID\n"); - return -ENOSYS; - } - - return status; - -nla_put_failure: - kfree_skb(skb); - return -EMSGSIZE; -} - - -static int iwl_testmode_cfg_init_calib(struct iwl_priv *priv) -{ - struct iwl_notification_wait calib_wait; - static const u8 calib_complete[] = { - CALIBRATION_COMPLETE_NOTIFICATION - }; - int ret; - - iwl_init_notification_wait(&priv->notif_wait, &calib_wait, - calib_complete, ARRAY_SIZE(calib_complete), - NULL, NULL); - ret = iwl_init_alive_start(priv); - if (ret) { - IWL_ERR(priv, "Fail init calibration: %d\n", ret); - goto cfg_init_calib_error; - } - - ret = iwl_wait_notification(&priv->notif_wait, &calib_wait, 2 * HZ); - if (ret) - IWL_ERR(priv, "Error detecting" - " CALIBRATION_COMPLETE_NOTIFICATION: %d\n", ret); - return ret; - -cfg_init_calib_error: - iwl_remove_notification(&priv->notif_wait, &calib_wait); - return ret; -} - -/* - * This function handles the user application commands for driver. - * - * It retrieves command ID carried with IWL_TM_ATTR_COMMAND and calls to the - * handlers respectively. - * - * If it's an unknown commdn ID, -ENOSYS is replied; otherwise, the returned - * value of the actual command execution is replied to the user application. - * - * If there's any message responding to the user space, IWL_TM_ATTR_SYNC_RSP - * is used for carry the message while IWL_TM_ATTR_COMMAND must set to - * IWL_TM_CMD_DEV2APP_SYNC_RSP. 
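The sync-response convention described just above is what each handler in iwl_testmode_driver() below implements by hand: allocate a testmode reply skb, tag it with IWL_TM_CMD_DEV2APP_SYNC_RSP, attach the payload as IWL_TM_ATTR_SYNC_RSP and send it. Condensed to its essentials (error handling elided; data and len stand for whatever the handler produces):

	skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, len + 20);
	if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, IWL_TM_CMD_DEV2APP_SYNC_RSP) ||
	    nla_put(skb, IWL_TM_ATTR_SYNC_RSP, len, data))
		goto nla_put_failure;
	status = cfg80211_testmode_reply(skb);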
- * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_driver(struct ieee80211_hw *hw, struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - struct iwl_trans *trans = priv->trans; - struct sk_buff *skb; - unsigned char *rsp_data_ptr = NULL; - int status = 0, rsp_data_len = 0; - u32 devid, inst_size = 0, data_size = 0; - const struct fw_img *img; - - switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { - case IWL_TM_CMD_APP2DEV_GET_DEVICENAME: - rsp_data_ptr = (unsigned char *)priv->cfg->name; - rsp_data_len = strlen(priv->cfg->name); - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, - rsp_data_len + 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, - IWL_TM_CMD_DEV2APP_SYNC_RSP) || - nla_put(skb, IWL_TM_ATTR_SYNC_RSP, - rsp_data_len, rsp_data_ptr)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_LOAD_INIT_FW: - status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_INIT); - if (status) - IWL_ERR(priv, "Error loading init ucode: %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB: - iwl_testmode_cfg_init_calib(priv); - priv->ucode_loaded = false; - iwl_trans_stop_device(trans); - break; - - case IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW: - status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_REGULAR); - if (status) { - IWL_ERR(priv, - "Error loading runtime ucode: %d\n", status); - break; - } - status = iwl_alive_start(priv); - if (status) - IWL_ERR(priv, - "Error starting the device: %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_LOAD_WOWLAN_FW: - iwl_scan_cancel_timeout(priv, 200); - priv->ucode_loaded = false; - iwl_trans_stop_device(trans); - status = iwl_load_ucode_wait_alive(priv, IWL_UCODE_WOWLAN); - if (status) { - IWL_ERR(priv, - "Error loading WOWLAN ucode: %d\n", status); - break; - } - status = iwl_alive_start(priv); - if (status) - IWL_ERR(priv, - "Error starting the device: %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_GET_EEPROM: - if (priv->eeprom) { - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, - priv->cfg->base_params->eeprom_size + 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (nla_put_u32(skb, IWL_TM_ATTR_COMMAND, - IWL_TM_CMD_DEV2APP_EEPROM_RSP) || - nla_put(skb, IWL_TM_ATTR_EEPROM, - priv->cfg->base_params->eeprom_size, - priv->eeprom)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", - status); - } else - return -EFAULT; - break; - - case IWL_TM_CMD_APP2DEV_FIXRATE_REQ: - if (!tb[IWL_TM_ATTR_FIXRATE]) { - IWL_ERR(priv, "Missing fixrate setting\n"); - return -ENOMSG; - } - priv->tm_fixed_rate = nla_get_u32(tb[IWL_TM_ATTR_FIXRATE]); - break; - - case IWL_TM_CMD_APP2DEV_GET_FW_VERSION: - IWL_INFO(priv, "uCode version raw: 0x%x\n", - priv->fw->ucode_ver); - - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (nla_put_u32(skb, IWL_TM_ATTR_FW_VERSION, - priv->fw->ucode_ver)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_GET_DEVICE_ID: - devid = priv->trans->hw_id; - IWL_INFO(priv, "hw version: 0x%x\n", devid); - 
- skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (nla_put_u32(skb, IWL_TM_ATTR_DEVICE_ID, devid)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", status); - break; - - case IWL_TM_CMD_APP2DEV_GET_FW_INFO: - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, 20 + 8); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - return -ENOMEM; - } - if (!priv->ucode_loaded) { - IWL_ERR(priv, "No uCode has not been loaded\n"); - return -EINVAL; - } else { - img = &priv->fw->img[priv->cur_ucode]; - inst_size = img->sec[IWL_UCODE_SECTION_INST].len; - data_size = img->sec[IWL_UCODE_SECTION_DATA].len; - } - if (nla_put_u32(skb, IWL_TM_ATTR_FW_TYPE, priv->cur_ucode) || - nla_put_u32(skb, IWL_TM_ATTR_FW_INST_SIZE, inst_size) || - nla_put_u32(skb, IWL_TM_ATTR_FW_DATA_SIZE, data_size)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) - IWL_ERR(priv, "Error sending msg : %d\n", status); - break; - - default: - IWL_ERR(priv, "Unknown testmode driver command ID\n"); - return -ENOSYS; - } - return status; - -nla_put_failure: - kfree_skb(skb); - return -EMSGSIZE; -} - - -/* - * This function handles the user application commands for uCode trace - * - * It retrieves command ID carried with IWL_TM_ATTR_COMMAND and calls to the - * handlers respectively. - * - * If it's an unknown commdn ID, -ENOSYS is replied; otherwise, the returned - * value of the actual command execution is replied to the user application. - * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_trace(struct ieee80211_hw *hw, struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - struct sk_buff *skb; - int status = 0; - struct device *dev = priv->trans->dev; - - switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { - case IWL_TM_CMD_APP2DEV_BEGIN_TRACE: - if (priv->testmode_trace.trace_enabled) - return -EBUSY; - - if (!tb[IWL_TM_ATTR_TRACE_SIZE]) - priv->testmode_trace.buff_size = TRACE_BUFF_SIZE_DEF; - else - priv->testmode_trace.buff_size = - nla_get_u32(tb[IWL_TM_ATTR_TRACE_SIZE]); - if (!priv->testmode_trace.buff_size) - return -EINVAL; - if (priv->testmode_trace.buff_size < TRACE_BUFF_SIZE_MIN || - priv->testmode_trace.buff_size > TRACE_BUFF_SIZE_MAX) - return -EINVAL; - - priv->testmode_trace.total_size = - priv->testmode_trace.buff_size + TRACE_BUFF_PADD; - priv->testmode_trace.cpu_addr = - dma_alloc_coherent(dev, - priv->testmode_trace.total_size, - &priv->testmode_trace.dma_addr, - GFP_KERNEL); - if (!priv->testmode_trace.cpu_addr) - return -ENOMEM; - priv->testmode_trace.trace_enabled = true; - priv->testmode_trace.trace_addr = (u8 *)PTR_ALIGN( - priv->testmode_trace.cpu_addr, 0x100); - memset(priv->testmode_trace.trace_addr, 0x03B, - priv->testmode_trace.buff_size); - skb = cfg80211_testmode_alloc_reply_skb(hw->wiphy, - sizeof(priv->testmode_trace.dma_addr) + 20); - if (!skb) { - IWL_ERR(priv, "Memory allocation fail\n"); - iwl_trace_cleanup(priv); - return -ENOMEM; - } - if (nla_put(skb, IWL_TM_ATTR_TRACE_ADDR, - sizeof(priv->testmode_trace.dma_addr), - (u64 *)&priv->testmode_trace.dma_addr)) - goto nla_put_failure; - status = cfg80211_testmode_reply(skb); - if (status < 0) { - IWL_ERR(priv, "Error sending msg : %d\n", status); - } - priv->testmode_trace.num_chunks = - DIV_ROUND_UP(priv->testmode_trace.buff_size, - 
DUMP_CHUNK_SIZE); - break; - - case IWL_TM_CMD_APP2DEV_END_TRACE: - iwl_trace_cleanup(priv); - break; - default: - IWL_ERR(priv, "Unknown testmode mem command ID\n"); - return -ENOSYS; - } - return status; - -nla_put_failure: - kfree_skb(skb); - if (nla_get_u32(tb[IWL_TM_ATTR_COMMAND]) == - IWL_TM_CMD_APP2DEV_BEGIN_TRACE) - iwl_trace_cleanup(priv); - return -EMSGSIZE; -} - -static int iwl_testmode_trace_dump(struct ieee80211_hw *hw, - struct sk_buff *skb, - struct netlink_callback *cb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - int idx, length; - - if (priv->testmode_trace.trace_enabled && - priv->testmode_trace.trace_addr) { - idx = cb->args[4]; - if (idx >= priv->testmode_trace.num_chunks) - return -ENOENT; - length = DUMP_CHUNK_SIZE; - if (((idx + 1) == priv->testmode_trace.num_chunks) && - (priv->testmode_trace.buff_size % DUMP_CHUNK_SIZE)) - length = priv->testmode_trace.buff_size % - DUMP_CHUNK_SIZE; - - if (nla_put(skb, IWL_TM_ATTR_TRACE_DUMP, length, - priv->testmode_trace.trace_addr + - (DUMP_CHUNK_SIZE * idx))) - goto nla_put_failure; - idx++; - cb->args[4] = idx; - return 0; - } else - return -EFAULT; - - nla_put_failure: - return -ENOBUFS; -} - -/* - * This function handles the user application switch ucode ownership. - * - * It retrieves the mandatory fields IWL_TM_ATTR_UCODE_OWNER and - * decide who the current owner of the uCode - * - * If the current owner is OWNERSHIP_TM, then the only host command - * can deliver to uCode is from testmode, all the other host commands - * will dropped. - * - * default driver is the owner of uCode in normal operational mode - * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_ownership(struct ieee80211_hw *hw, struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - u8 owner; - - if (!tb[IWL_TM_ATTR_UCODE_OWNER]) { - IWL_ERR(priv, "Missing ucode owner\n"); - return -ENOMSG; - } - - owner = nla_get_u8(tb[IWL_TM_ATTR_UCODE_OWNER]); - if (owner == IWL_OWNERSHIP_DRIVER) { - priv->ucode_owner = owner; - priv->pre_rx_handler = NULL; - } else if (owner == IWL_OWNERSHIP_TM) { - priv->pre_rx_handler = iwl_testmode_ucode_rx_pkt; - priv->ucode_owner = owner; - } else { - IWL_ERR(priv, "Invalid owner\n"); - return -EINVAL; - } - return 0; -} - -static int iwl_testmode_indirect_read(struct iwl_priv *priv, u32 addr, u32 size) -{ - struct iwl_trans *trans = priv->trans; - unsigned long flags; - int i; - - if (size & 0x3) - return -EINVAL; - priv->testmode_mem.buff_size = size; - priv->testmode_mem.buff_addr = - kmalloc(priv->testmode_mem.buff_size, GFP_KERNEL); - if (priv->testmode_mem.buff_addr == NULL) - return -ENOMEM; - - /* Hard-coded periphery absolute address */ - if (IWL_TM_ABS_PRPH_START <= addr && - addr < IWL_TM_ABS_PRPH_START + PRPH_END) { - spin_lock_irqsave(&trans->reg_lock, flags); - iwl_grab_nic_access(trans); - iwl_write32(trans, HBUS_TARG_PRPH_RADDR, - addr | (3 << 24)); - for (i = 0; i < size; i += 4) - *(u32 *)(priv->testmode_mem.buff_addr + i) = - iwl_read32(trans, HBUS_TARG_PRPH_RDAT); - iwl_release_nic_access(trans); - spin_unlock_irqrestore(&trans->reg_lock, flags); - } else { /* target memory (SRAM) */ - _iwl_read_targ_mem_words(trans, addr, - priv->testmode_mem.buff_addr, - priv->testmode_mem.buff_size / 4); - } - - priv->testmode_mem.num_chunks = - DIV_ROUND_UP(priv->testmode_mem.buff_size, DUMP_CHUNK_SIZE); - priv->testmode_mem.read_in_progress = true; - return 0; - -} - -static int 
iwl_testmode_indirect_write(struct iwl_priv *priv, u32 addr, - u32 size, unsigned char *buf) -{ - struct iwl_trans *trans = priv->trans; - u32 val, i; - unsigned long flags; - - if (IWL_TM_ABS_PRPH_START <= addr && - addr < IWL_TM_ABS_PRPH_START + PRPH_END) { - /* Periphery writes can be 1-3 bytes long, or DWORDs */ - if (size < 4) { - memcpy(&val, buf, size); - spin_lock_irqsave(&trans->reg_lock, flags); - iwl_grab_nic_access(trans); - iwl_write32(trans, HBUS_TARG_PRPH_WADDR, - (addr & 0x0000FFFF) | - ((size - 1) << 24)); - iwl_write32(trans, HBUS_TARG_PRPH_WDAT, val); - iwl_release_nic_access(trans); - /* needed after consecutive writes w/o read */ - mmiowb(); - spin_unlock_irqrestore(&trans->reg_lock, flags); - } else { - if (size % 4) - return -EINVAL; - for (i = 0; i < size; i += 4) - iwl_write_prph(trans, addr+i, - *(u32 *)(buf+i)); - } - } else if (iwlagn_hw_valid_rtc_data_addr(addr) || - (IWLAGN_RTC_INST_LOWER_BOUND <= addr && - addr < IWLAGN_RTC_INST_UPPER_BOUND)) { - _iwl_write_targ_mem_words(trans, addr, buf, size/4); - } else - return -EINVAL; - return 0; -} - -/* - * This function handles the user application commands for SRAM data dump - * - * It retrieves the mandatory fields IWL_TM_ATTR_SRAM_ADDR and - * IWL_TM_ATTR_SRAM_SIZE to decide the memory area for SRAM data reading - * - * Several error will be retured, -EBUSY if the SRAM data retrieved by - * previous command has not been delivered to userspace, or -ENOMSG if - * the mandatory fields (IWL_TM_ATTR_SRAM_ADDR,IWL_TM_ATTR_SRAM_SIZE) - * are missing, or -ENOMEM if the buffer allocation fails. - * - * Otherwise 0 is replied indicating the success of the SRAM reading. - * - * @hw: ieee80211_hw object that represents the device - * @tb: gnl message fields from the user space - */ -static int iwl_testmode_indirect_mem(struct ieee80211_hw *hw, - struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - u32 addr, size, cmd; - unsigned char *buf; - - /* Both read and write should be blocked, for atomicity */ - if (priv->testmode_mem.read_in_progress) - return -EBUSY; - - cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); - if (!tb[IWL_TM_ATTR_MEM_ADDR]) { - IWL_ERR(priv, "Error finding memory offset address\n"); - return -ENOMSG; - } - addr = nla_get_u32(tb[IWL_TM_ATTR_MEM_ADDR]); - if (!tb[IWL_TM_ATTR_BUFFER_SIZE]) { - IWL_ERR(priv, "Error finding size for memory reading\n"); - return -ENOMSG; - } - size = nla_get_u32(tb[IWL_TM_ATTR_BUFFER_SIZE]); - - if (cmd == IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_READ) - return iwl_testmode_indirect_read(priv, addr, size); - else { - if (!tb[IWL_TM_ATTR_BUFFER_DUMP]) - return -EINVAL; - buf = (unsigned char *) nla_data(tb[IWL_TM_ATTR_BUFFER_DUMP]); - return iwl_testmode_indirect_write(priv, addr, size, buf); - } -} - -static int iwl_testmode_buffer_dump(struct ieee80211_hw *hw, - struct sk_buff *skb, - struct netlink_callback *cb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - int idx, length; - - if (priv->testmode_mem.read_in_progress) { - idx = cb->args[4]; - if (idx >= priv->testmode_mem.num_chunks) { - iwl_mem_cleanup(priv); - return -ENOENT; - } - length = DUMP_CHUNK_SIZE; - if (((idx + 1) == priv->testmode_mem.num_chunks) && - (priv->testmode_mem.buff_size % DUMP_CHUNK_SIZE)) - length = priv->testmode_mem.buff_size % - DUMP_CHUNK_SIZE; - - if (nla_put(skb, IWL_TM_ATTR_BUFFER_DUMP, length, - priv->testmode_mem.buff_addr + - (DUMP_CHUNK_SIZE * idx))) - goto nla_put_failure; - idx++; - cb->args[4] = idx; - return 0; - } else - return -EFAULT; - - nla_put_failure: - 
return -ENOBUFS; -} - -static int iwl_testmode_notifications(struct ieee80211_hw *hw, - struct nlattr **tb) -{ - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - bool enable; - - enable = nla_get_flag(tb[IWL_TM_ATTR_ENABLE_NOTIFICATION]); - if (enable) - priv->pre_rx_handler = iwl_testmode_ucode_rx_pkt; - else - priv->pre_rx_handler = NULL; - return 0; -} - - -/* The testmode gnl message handler that takes the gnl message from the - * user space and parses it per the policy iwl_testmode_gnl_msg_policy, then - * invoke the corresponding handlers. - * - * This function is invoked when there is user space application sending - * gnl message through the testmode tunnel NL80211_CMD_TESTMODE regulated - * by nl80211. - * - * It retrieves the mandatory field, IWL_TM_ATTR_COMMAND, before - * dispatching it to the corresponding handler. - * - * If IWL_TM_ATTR_COMMAND is missing, -ENOMSG is replied to user application; - * -ENOSYS is replied to the user application if the command is unknown; - * Otherwise, the command is dispatched to the respective handler. - * - * @hw: ieee80211_hw object that represents the device - * @data: pointer to user space message - * @len: length in byte of @data - */ -int iwlagn_mac_testmode_cmd(struct ieee80211_hw *hw, void *data, int len) -{ - struct nlattr *tb[IWL_TM_ATTR_MAX]; - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - int result; - - result = nla_parse(tb, IWL_TM_ATTR_MAX - 1, data, len, - iwl_testmode_gnl_msg_policy); - if (result != 0) { - IWL_ERR(priv, "Error parsing the gnl message : %d\n", result); - return result; - } - - /* IWL_TM_ATTR_COMMAND is absolutely mandatory */ - if (!tb[IWL_TM_ATTR_COMMAND]) { - IWL_ERR(priv, "Missing testmode command type\n"); - return -ENOMSG; - } - /* in case multiple accesses to the device happens */ - mutex_lock(&priv->mutex); - - switch (nla_get_u32(tb[IWL_TM_ATTR_COMMAND])) { - case IWL_TM_CMD_APP2DEV_UCODE: - IWL_DEBUG_INFO(priv, "testmode cmd to uCode\n"); - result = iwl_testmode_ucode(hw, tb); - break; - case IWL_TM_CMD_APP2DEV_DIRECT_REG_READ32: - case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE32: - case IWL_TM_CMD_APP2DEV_DIRECT_REG_WRITE8: - IWL_DEBUG_INFO(priv, "testmode cmd to register\n"); - result = iwl_testmode_reg(hw, tb); - break; - case IWL_TM_CMD_APP2DEV_GET_DEVICENAME: - case IWL_TM_CMD_APP2DEV_LOAD_INIT_FW: - case IWL_TM_CMD_APP2DEV_CFG_INIT_CALIB: - case IWL_TM_CMD_APP2DEV_LOAD_RUNTIME_FW: - case IWL_TM_CMD_APP2DEV_GET_EEPROM: - case IWL_TM_CMD_APP2DEV_FIXRATE_REQ: - case IWL_TM_CMD_APP2DEV_LOAD_WOWLAN_FW: - case IWL_TM_CMD_APP2DEV_GET_FW_VERSION: - case IWL_TM_CMD_APP2DEV_GET_DEVICE_ID: - case IWL_TM_CMD_APP2DEV_GET_FW_INFO: - IWL_DEBUG_INFO(priv, "testmode cmd to driver\n"); - result = iwl_testmode_driver(hw, tb); - break; - - case IWL_TM_CMD_APP2DEV_BEGIN_TRACE: - case IWL_TM_CMD_APP2DEV_END_TRACE: - case IWL_TM_CMD_APP2DEV_READ_TRACE: - IWL_DEBUG_INFO(priv, "testmode uCode trace cmd to driver\n"); - result = iwl_testmode_trace(hw, tb); - break; - - case IWL_TM_CMD_APP2DEV_OWNERSHIP: - IWL_DEBUG_INFO(priv, "testmode change uCode ownership\n"); - result = iwl_testmode_ownership(hw, tb); - break; - - case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_READ: - case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_WRITE: - IWL_DEBUG_INFO(priv, "testmode indirect memory cmd " - "to driver\n"); - result = iwl_testmode_indirect_mem(hw, tb); - break; - - case IWL_TM_CMD_APP2DEV_NOTIFICATIONS: - IWL_DEBUG_INFO(priv, "testmode notifications cmd " - "to driver\n"); - result = iwl_testmode_notifications(hw, tb); - break; - - default: - 
IWL_ERR(priv, "Unknown testmode command\n"); - result = -ENOSYS; - break; - } - - mutex_unlock(&priv->mutex); - return result; -} - -int iwlagn_mac_testmode_dump(struct ieee80211_hw *hw, struct sk_buff *skb, - struct netlink_callback *cb, - void *data, int len) -{ - struct nlattr *tb[IWL_TM_ATTR_MAX]; - struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw); - int result; - u32 cmd; - - if (cb->args[3]) { - /* offset by 1 since commands start at 0 */ - cmd = cb->args[3] - 1; - } else { - result = nla_parse(tb, IWL_TM_ATTR_MAX - 1, data, len, - iwl_testmode_gnl_msg_policy); - if (result) { - IWL_ERR(priv, - "Error parsing the gnl message : %d\n", result); - return result; - } - - /* IWL_TM_ATTR_COMMAND is absolutely mandatory */ - if (!tb[IWL_TM_ATTR_COMMAND]) { - IWL_ERR(priv, "Missing testmode command type\n"); - return -ENOMSG; - } - cmd = nla_get_u32(tb[IWL_TM_ATTR_COMMAND]); - cb->args[3] = cmd + 1; - } - - /* in case multiple accesses to the device happens */ - mutex_lock(&priv->mutex); - switch (cmd) { - case IWL_TM_CMD_APP2DEV_READ_TRACE: - IWL_DEBUG_INFO(priv, "uCode trace cmd to driver\n"); - result = iwl_testmode_trace_dump(hw, skb, cb); - break; - case IWL_TM_CMD_APP2DEV_INDIRECT_BUFFER_DUMP: - IWL_DEBUG_INFO(priv, "testmode sram dump cmd to driver\n"); - result = iwl_testmode_buffer_dump(hw, skb, cb); - break; - default: - result = -EINVAL; - break; - } - - mutex_unlock(&priv->mutex); - return result; -} diff --git a/drivers/net/wireless/iwlwifi/iwl-trans.h b/drivers/net/wireless/iwlwifi/iwl-trans.h index 79a1e7ae4995..92576a3e84ef 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/iwlwifi/iwl-trans.h @@ -154,6 +154,9 @@ struct iwl_cmd_header { __le16 sequence; } __packed; +/* iwl_cmd_header flags value */ +#define IWL_CMD_FAILED_MSK 0x40 + #define FH_RSCSR_FRAME_SIZE_MSK 0x00003FFF /* bits 0-13 */ #define FH_RSCSR_FRAME_INVALID 0x55550000 @@ -280,21 +283,24 @@ static inline struct page *rxb_steal_page(struct iwl_rx_cmd_buffer *r) #define MAX_NO_RECLAIM_CMDS 6 +#define IWL_MASK(lo, hi) ((1 << (hi)) | ((1 << (hi)) - (1 << (lo)))) + /* * Maximum number of HW queues the transport layer * currently supports */ #define IWL_MAX_HW_QUEUES 32 +#define IWL_INVALID_STATION 255 +#define IWL_MAX_TID_COUNT 8 +#define IWL_FRAME_LIMIT 64 /** * struct iwl_trans_config - transport configuration * * @op_mode: pointer to the upper layer. - * @queue_to_fifo: queue to FIFO mapping to set up by - * default - * @n_queue_to_fifo: number of queues to set up * @cmd_queue: the index of the command queue. * Must be set before start_fw. + * @cmd_fifo: the fifo for host commands * @no_reclaim_cmds: Some devices erroneously don't set the * SEQ_RX_FRAME bit on some notifications, this is the * list of such notifications to filter. Max length is @@ -309,10 +315,9 @@ static inline struct page *rxb_steal_page(struct iwl_rx_cmd_buffer *r) */ struct iwl_trans_config { struct iwl_op_mode *op_mode; - const u8 *queue_to_fifo; - u8 n_queue_to_fifo; u8 cmd_queue; + u8 cmd_fifo; const u8 *no_reclaim_cmds; int n_no_reclaim_cmds; @@ -350,10 +355,10 @@ struct iwl_trans; * Must be atomic * @reclaim: free packet until ssn. Returns a list of freed packets. * Must be atomic - * @tx_agg_setup: setup a tx queue for AMPDU - will be called once the HW is - * ready and a successful ADDBA response has been received. - * May sleep - * @tx_agg_disable: de-configure a Tx queue to send AMPDUs + * @txq_enable: setup a queue. To setup an AC queue, use the + * iwl_trans_ac_txq_enable wrapper. 
fw_alive must have been called before + * this one. The op_mode must not configure the HCMD queue. May sleep. + * @txq_disable: de-configure a Tx queue to send AMPDUs * Must be atomic * @wait_tx_queue_empty: wait until all tx queues are empty * May sleep @@ -386,9 +391,9 @@ struct iwl_trans_ops { void (*reclaim)(struct iwl_trans *trans, int queue, int ssn, struct sk_buff_head *skbs); - void (*tx_agg_setup)(struct iwl_trans *trans, int queue, int fifo, - int sta_id, int tid, int frame_limit, u16 ssn); - void (*tx_agg_disable)(struct iwl_trans *trans, int queue); + void (*txq_enable)(struct iwl_trans *trans, int queue, int fifo, + int sta_id, int tid, int frame_limit, u16 ssn); + void (*txq_disable)(struct iwl_trans *trans, int queue); int (*dbgfs_register)(struct iwl_trans *trans, struct dentry* dir); int (*wait_tx_queue_empty)(struct iwl_trans *trans); @@ -428,6 +433,11 @@ enum iwl_trans_state { * @hw_id_str: a string with info about HW ID. Set during transport allocation. * @pm_support: set to true in start_hw if link pm is supported * @wait_command_queue: the wait_queue for SYNC host commands + * @dev_cmd_pool: pool for Tx cmd allocation - for internal use only. + * The user should use iwl_trans_{alloc,free}_tx_cmd. + * @dev_cmd_headroom: room needed for the transport's private use before the + * device_cmd for Tx - for internal use only + * The user should use iwl_trans_{alloc,free}_tx_cmd. */ struct iwl_trans { const struct iwl_trans_ops *ops; @@ -445,6 +455,11 @@ struct iwl_trans { wait_queue_head_t wait_command_queue; + /* The following fields are internal only */ + struct kmem_cache *dev_cmd_pool; + size_t dev_cmd_headroom; + char dev_cmd_pool_name[50]; + /* pointer to trans specific struct */ /*Ensure that this pointer will always be aligned to sizeof pointer */ char trans_specific[0] __aligned(sizeof(void *)); @@ -483,9 +498,9 @@ static inline void iwl_trans_fw_alive(struct iwl_trans *trans) { might_sleep(); - trans->ops->fw_alive(trans); - trans->state = IWL_TRANS_FW_ALIVE; + + trans->ops->fw_alive(trans); } static inline int iwl_trans_start_fw(struct iwl_trans *trans, @@ -520,6 +535,26 @@ static inline int iwl_trans_send_cmd(struct iwl_trans *trans, return trans->ops->send_cmd(trans, cmd); } +static inline struct iwl_device_cmd * +iwl_trans_alloc_tx_cmd(struct iwl_trans *trans) +{ + u8 *dev_cmd_ptr = kmem_cache_alloc(trans->dev_cmd_pool, GFP_ATOMIC); + + if (unlikely(dev_cmd_ptr == NULL)) + return NULL; + + return (struct iwl_device_cmd *) + (dev_cmd_ptr + trans->dev_cmd_headroom); +} + +static inline void iwl_trans_free_tx_cmd(struct iwl_trans *trans, + struct iwl_device_cmd *dev_cmd) +{ + u8 *dev_cmd_ptr = (u8 *)dev_cmd - trans->dev_cmd_headroom; + + kmem_cache_free(trans->dev_cmd_pool, dev_cmd_ptr); +} + static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, struct iwl_device_cmd *dev_cmd, int queue) { @@ -538,27 +573,34 @@ static inline void iwl_trans_reclaim(struct iwl_trans *trans, int queue, trans->ops->reclaim(trans, queue, ssn, skbs); } -static inline void iwl_trans_tx_agg_disable(struct iwl_trans *trans, int queue) +static inline void iwl_trans_txq_disable(struct iwl_trans *trans, int queue) { WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, "%s bad state = %d", __func__, trans->state); - trans->ops->tx_agg_disable(trans, queue); + trans->ops->txq_disable(trans, queue); } -static inline void iwl_trans_tx_agg_setup(struct iwl_trans *trans, int queue, - int fifo, int sta_id, int tid, - int frame_limit, u16 ssn) +static inline void 
iwl_trans_txq_enable(struct iwl_trans *trans, int queue, + int fifo, int sta_id, int tid, + int frame_limit, u16 ssn) { might_sleep(); WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, "%s bad state = %d", __func__, trans->state); - trans->ops->tx_agg_setup(trans, queue, fifo, sta_id, tid, + trans->ops->txq_enable(trans, queue, fifo, sta_id, tid, frame_limit, ssn); } +static inline void iwl_trans_ac_txq_enable(struct iwl_trans *trans, int queue, + int fifo) +{ + iwl_trans_txq_enable(trans, queue, fifo, IWL_INVALID_STATION, + IWL_MAX_TID_COUNT, IWL_FRAME_LIMIT, 0); +} + static inline int iwl_trans_wait_tx_queue_empty(struct iwl_trans *trans) { WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, diff --git a/drivers/net/wireless/iwlwifi/iwl-1000.c b/drivers/net/wireless/iwlwifi/pcie/1000.c index 2629a6602dfa..81b83f484f08 100644 --- a/drivers/net/wireless/iwlwifi/iwl-1000.c +++ b/drivers/net/wireless/iwlwifi/pcie/1000.c @@ -27,9 +27,9 @@ #include <linux/module.h> #include <linux/stringify.h> #include "iwl-config.h" -#include "iwl-cfg.h" #include "iwl-csr.h" #include "iwl-agn-hw.h" +#include "cfg.h" /* Highest firmware API version supported */ #define IWL1000_UCODE_API_MAX 5 @@ -64,13 +64,26 @@ static const struct iwl_base_params iwl1000_base_params = { .support_ct_kill_exit = true, .plcp_delta_threshold = IWL_MAX_PLCP_ERR_EXT_LONG_THRESHOLD_DEF, .chain_noise_scale = 1000, - .wd_timeout = IWL_WATCHHDOG_DISABLED, + .wd_timeout = IWL_WATCHDOG_DISABLED, .max_event_log_size = 128, }; static const struct iwl_ht_params iwl1000_ht_params = { .ht_greenfield_support = true, .use_rts_for_aggregation = true, /* use rts/cts protection */ + .ht40_bands = BIT(IEEE80211_BAND_2GHZ), +}; + +static const struct iwl_eeprom_params iwl1000_eeprom_params = { + .regulatory_bands = { + EEPROM_REG_BAND_1_CHANNELS, + EEPROM_REG_BAND_2_CHANNELS, + EEPROM_REG_BAND_3_CHANNELS, + EEPROM_REG_BAND_4_CHANNELS, + EEPROM_REG_BAND_5_CHANNELS, + EEPROM_REG_BAND_24_HT40_CHANNELS, + EEPROM_REGULATORY_BAND_NO_HT40, + } }; #define IWL_DEVICE_1000 \ @@ -84,6 +97,7 @@ static const struct iwl_ht_params iwl1000_ht_params = { .eeprom_ver = EEPROM_1000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_1000_TX_POWER_VERSION, \ .base_params = &iwl1000_base_params, \ + .eeprom_params = &iwl1000_eeprom_params, \ .led_mode = IWL_LED_BLINK const struct iwl_cfg iwl1000_bgn_cfg = { @@ -108,6 +122,7 @@ const struct iwl_cfg iwl1000_bg_cfg = { .eeprom_ver = EEPROM_1000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_1000_TX_POWER_VERSION, \ .base_params = &iwl1000_base_params, \ + .eeprom_params = &iwl1000_eeprom_params, \ .led_mode = IWL_LED_RF_STATE, \ .rx_with_siso_diversity = true diff --git a/drivers/net/wireless/iwlwifi/iwl-2000.c b/drivers/net/wireless/iwlwifi/pcie/2000.c index 8133105ac645..9fbde32f7559 100644 --- a/drivers/net/wireless/iwlwifi/iwl-2000.c +++ b/drivers/net/wireless/iwlwifi/pcie/2000.c @@ -27,9 +27,9 @@ #include <linux/module.h> #include <linux/stringify.h> #include "iwl-config.h" -#include "iwl-cfg.h" #include "iwl-agn-hw.h" -#include "iwl-commands.h" /* needed for BT for now */ +#include "cfg.h" +#include "dvm/commands.h" /* needed for BT for now */ /* Highest firmware API version supported */ #define IWL2030_UCODE_API_MAX 6 @@ -104,6 +104,7 @@ static const struct iwl_base_params iwl2030_base_params = { static const struct iwl_ht_params iwl2000_ht_params = { .ht_greenfield_support = true, .use_rts_for_aggregation = true, /* use rts/cts protection */ + .ht40_bands = BIT(IEEE80211_BAND_2GHZ), }; static const struct iwl_bt_params 
iwl2030_bt_params = { @@ -111,11 +112,24 @@ static const struct iwl_bt_params iwl2030_bt_params = { .advanced_bt_coexist = true, .agg_time_limit = BT_AGG_THRESHOLD_DEF, .bt_init_traffic_load = IWL_BT_COEX_TRAFFIC_LOAD_NONE, - .bt_prio_boost = IWLAGN_BT_PRIO_BOOST_DEFAULT, + .bt_prio_boost = IWLAGN_BT_PRIO_BOOST_DEFAULT32, .bt_sco_disable = true, .bt_session_2 = true, }; +static const struct iwl_eeprom_params iwl20x0_eeprom_params = { + .regulatory_bands = { + EEPROM_REG_BAND_1_CHANNELS, + EEPROM_REG_BAND_2_CHANNELS, + EEPROM_REG_BAND_3_CHANNELS, + EEPROM_REG_BAND_4_CHANNELS, + EEPROM_REG_BAND_5_CHANNELS, + EEPROM_6000_REG_BAND_24_HT40_CHANNELS, + EEPROM_REGULATORY_BAND_NO_HT40, + }, + .enhanced_txpower = true, +}; + #define IWL_DEVICE_2000 \ .fw_name_pre = IWL2000_FW_PRE, \ .ucode_api_max = IWL2000_UCODE_API_MAX, \ @@ -127,6 +141,7 @@ static const struct iwl_bt_params iwl2030_bt_params = { .eeprom_ver = EEPROM_2000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ .base_params = &iwl2000_base_params, \ + .eeprom_params = &iwl20x0_eeprom_params, \ .need_temp_offset_calib = true, \ .temp_offset_v2 = true, \ .led_mode = IWL_LED_RF_STATE @@ -155,6 +170,7 @@ const struct iwl_cfg iwl2000_2bgn_d_cfg = { .eeprom_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ .base_params = &iwl2030_base_params, \ .bt_params = &iwl2030_bt_params, \ + .eeprom_params = &iwl20x0_eeprom_params, \ .need_temp_offset_calib = true, \ .temp_offset_v2 = true, \ .led_mode = IWL_LED_RF_STATE, \ @@ -177,6 +193,7 @@ const struct iwl_cfg iwl2030_2bgn_cfg = { .eeprom_ver = EEPROM_2000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ .base_params = &iwl2000_base_params, \ + .eeprom_params = &iwl20x0_eeprom_params, \ .need_temp_offset_calib = true, \ .temp_offset_v2 = true, \ .led_mode = IWL_LED_RF_STATE, \ @@ -207,6 +224,7 @@ const struct iwl_cfg iwl105_bgn_d_cfg = { .eeprom_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ .base_params = &iwl2030_base_params, \ .bt_params = &iwl2030_bt_params, \ + .eeprom_params = &iwl20x0_eeprom_params, \ .need_temp_offset_calib = true, \ .temp_offset_v2 = true, \ .led_mode = IWL_LED_RF_STATE, \ diff --git a/drivers/net/wireless/iwlwifi/iwl-5000.c b/drivers/net/wireless/iwlwifi/pcie/5000.c index 8e26bc825f23..d1665fa6d15a 100644 --- a/drivers/net/wireless/iwlwifi/iwl-5000.c +++ b/drivers/net/wireless/iwlwifi/pcie/5000.c @@ -27,9 +27,9 @@ #include <linux/module.h> #include <linux/stringify.h> #include "iwl-config.h" -#include "iwl-cfg.h" #include "iwl-agn-hw.h" #include "iwl-csr.h" +#include "cfg.h" /* Highest firmware API version supported */ #define IWL5000_UCODE_API_MAX 5 @@ -62,13 +62,26 @@ static const struct iwl_base_params iwl5000_base_params = { .led_compensation = 51, .plcp_delta_threshold = IWL_MAX_PLCP_ERR_LONG_THRESHOLD_DEF, .chain_noise_scale = 1000, - .wd_timeout = IWL_WATCHHDOG_DISABLED, + .wd_timeout = IWL_WATCHDOG_DISABLED, .max_event_log_size = 512, .no_idle_support = true, }; static const struct iwl_ht_params iwl5000_ht_params = { .ht_greenfield_support = true, + .ht40_bands = BIT(IEEE80211_BAND_2GHZ) | BIT(IEEE80211_BAND_5GHZ), +}; + +static const struct iwl_eeprom_params iwl5000_eeprom_params = { + .regulatory_bands = { + EEPROM_REG_BAND_1_CHANNELS, + EEPROM_REG_BAND_2_CHANNELS, + EEPROM_REG_BAND_3_CHANNELS, + EEPROM_REG_BAND_4_CHANNELS, + EEPROM_REG_BAND_5_CHANNELS, + EEPROM_REG_BAND_24_HT40_CHANNELS, + EEPROM_REG_BAND_52_HT40_CHANNELS + }, }; #define IWL_DEVICE_5000 \ @@ -82,6 +95,7 @@ static const struct iwl_ht_params iwl5000_ht_params = { 
.eeprom_ver = EEPROM_5000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_5000_TX_POWER_VERSION, \ .base_params = &iwl5000_base_params, \ + .eeprom_params = &iwl5000_eeprom_params, \ .led_mode = IWL_LED_BLINK const struct iwl_cfg iwl5300_agn_cfg = { @@ -128,6 +142,7 @@ const struct iwl_cfg iwl5350_agn_cfg = { .eeprom_ver = EEPROM_5050_EEPROM_VERSION, .eeprom_calib_ver = EEPROM_5050_TX_POWER_VERSION, .base_params = &iwl5000_base_params, + .eeprom_params = &iwl5000_eeprom_params, .ht_params = &iwl5000_ht_params, .led_mode = IWL_LED_BLINK, .internal_wimax_coex = true, @@ -144,6 +159,7 @@ const struct iwl_cfg iwl5350_agn_cfg = { .eeprom_ver = EEPROM_5050_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_5050_TX_POWER_VERSION, \ .base_params = &iwl5000_base_params, \ + .eeprom_params = &iwl5000_eeprom_params, \ .no_xtal_calib = true, \ .led_mode = IWL_LED_BLINK, \ .internal_wimax_coex = true diff --git a/drivers/net/wireless/iwlwifi/iwl-6000.c b/drivers/net/wireless/iwlwifi/pcie/6000.c index e5e8ada4aaf6..4a57624afc40 100644 --- a/drivers/net/wireless/iwlwifi/iwl-6000.c +++ b/drivers/net/wireless/iwlwifi/pcie/6000.c @@ -27,9 +27,9 @@ #include <linux/module.h> #include <linux/stringify.h> #include "iwl-config.h" -#include "iwl-cfg.h" #include "iwl-agn-hw.h" -#include "iwl-commands.h" /* needed for BT for now */ +#include "cfg.h" +#include "dvm/commands.h" /* needed for BT for now */ /* Highest firmware API version supported */ #define IWL6000_UCODE_API_MAX 6 @@ -127,6 +127,7 @@ static const struct iwl_base_params iwl6000_g2_base_params = { static const struct iwl_ht_params iwl6000_ht_params = { .ht_greenfield_support = true, .use_rts_for_aggregation = true, /* use rts/cts protection */ + .ht40_bands = BIT(IEEE80211_BAND_2GHZ) | BIT(IEEE80211_BAND_5GHZ), }; static const struct iwl_bt_params iwl6000_bt_params = { @@ -138,6 +139,19 @@ static const struct iwl_bt_params iwl6000_bt_params = { .bt_sco_disable = true, }; +static const struct iwl_eeprom_params iwl6000_eeprom_params = { + .regulatory_bands = { + EEPROM_REG_BAND_1_CHANNELS, + EEPROM_REG_BAND_2_CHANNELS, + EEPROM_REG_BAND_3_CHANNELS, + EEPROM_REG_BAND_4_CHANNELS, + EEPROM_REG_BAND_5_CHANNELS, + EEPROM_6000_REG_BAND_24_HT40_CHANNELS, + EEPROM_REG_BAND_52_HT40_CHANNELS + }, + .enhanced_txpower = true, +}; + #define IWL_DEVICE_6005 \ .fw_name_pre = IWL6005_FW_PRE, \ .ucode_api_max = IWL6000G2_UCODE_API_MAX, \ @@ -149,6 +163,7 @@ static const struct iwl_bt_params iwl6000_bt_params = { .eeprom_ver = EEPROM_6005_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_6005_TX_POWER_VERSION, \ .base_params = &iwl6000_g2_base_params, \ + .eeprom_params = &iwl6000_eeprom_params, \ .need_temp_offset_calib = true, \ .led_mode = IWL_LED_RF_STATE @@ -204,6 +219,7 @@ const struct iwl_cfg iwl6005_2agn_mow2_cfg = { .eeprom_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ .base_params = &iwl6000_g2_base_params, \ .bt_params = &iwl6000_bt_params, \ + .eeprom_params = &iwl6000_eeprom_params, \ .need_temp_offset_calib = true, \ .led_mode = IWL_LED_RF_STATE, \ .adv_pm = true \ @@ -242,6 +258,7 @@ const struct iwl_cfg iwl6030_2bg_cfg = { .eeprom_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ .base_params = &iwl6000_g2_base_params, \ .bt_params = &iwl6000_bt_params, \ + .eeprom_params = &iwl6000_eeprom_params, \ .need_temp_offset_calib = true, \ .led_mode = IWL_LED_RF_STATE, \ .adv_pm = true @@ -292,6 +309,7 @@ const struct iwl_cfg iwl130_bg_cfg = { .eeprom_ver = EEPROM_6000_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_6000_TX_POWER_VERSION, \ .base_params = &iwl6000_base_params, \ + 
.eeprom_params = &iwl6000_eeprom_params, \ .led_mode = IWL_LED_BLINK const struct iwl_cfg iwl6000i_2agn_cfg = { @@ -322,6 +340,7 @@ const struct iwl_cfg iwl6000i_2bg_cfg = { .eeprom_ver = EEPROM_6050_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_6050_TX_POWER_VERSION, \ .base_params = &iwl6050_base_params, \ + .eeprom_params = &iwl6000_eeprom_params, \ .led_mode = IWL_LED_BLINK, \ .internal_wimax_coex = true @@ -346,6 +365,7 @@ const struct iwl_cfg iwl6050_2abg_cfg = { .eeprom_ver = EEPROM_6150_EEPROM_VERSION, \ .eeprom_calib_ver = EEPROM_6150_TX_POWER_VERSION, \ .base_params = &iwl6050_base_params, \ + .eeprom_params = &iwl6000_eeprom_params, \ .led_mode = IWL_LED_BLINK, \ .internal_wimax_coex = true @@ -372,6 +392,7 @@ const struct iwl_cfg iwl6000_3agn_cfg = { .eeprom_ver = EEPROM_6000_EEPROM_VERSION, .eeprom_calib_ver = EEPROM_6000_TX_POWER_VERSION, .base_params = &iwl6000_base_params, + .eeprom_params = &iwl6000_eeprom_params, .ht_params = &iwl6000_ht_params, .led_mode = IWL_LED_BLINK, }; diff --git a/drivers/net/wireless/iwlwifi/iwl-cfg.h b/drivers/net/wireless/iwlwifi/pcie/cfg.h index 82152311d73b..82152311d73b 100644 --- a/drivers/net/wireless/iwlwifi/iwl-cfg.h +++ b/drivers/net/wireless/iwlwifi/pcie/cfg.h diff --git a/drivers/net/wireless/iwlwifi/iwl-pci.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index 0c8a1c2d8847..f4c3500b68c6 100644 --- a/drivers/net/wireless/iwlwifi/iwl-pci.c +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -68,10 +68,11 @@ #include <linux/pci-aspm.h> #include "iwl-trans.h" -#include "iwl-cfg.h" #include "iwl-drv.h" #include "iwl-trans.h" -#include "iwl-trans-pcie-int.h" + +#include "cfg.h" +#include "internal.h" #define IWL_PCI_DEVICE(dev, subdev, cfg) \ .vendor = PCI_VENDOR_ID_INTEL, .device = (dev), \ diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-int.h b/drivers/net/wireless/iwlwifi/pcie/internal.h index e959207c630a..d9694c58208c 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-int.h +++ b/drivers/net/wireless/iwlwifi/pcie/internal.h @@ -269,10 +269,9 @@ struct iwl_trans_pcie { wait_queue_head_t ucode_write_waitq; unsigned long status; u8 cmd_queue; + u8 cmd_fifo; u8 n_no_reclaim_cmds; u8 no_reclaim_cmds[MAX_NO_RECLAIM_CMDS]; - u8 setup_q_to_fifo[IWL_MAX_HW_QUEUES]; - u8 n_q_to_fifo; bool rx_buf_size_8k; u32 rx_page_order; @@ -313,7 +312,7 @@ void iwl_bg_rx_replenish(struct work_struct *data); void iwl_irq_tasklet(struct iwl_trans *trans); void iwlagn_rx_replenish(struct iwl_trans *trans); void iwl_rx_queue_update_write_ptr(struct iwl_trans *trans, - struct iwl_rx_queue *q); + struct iwl_rx_queue *q); /***************************************************** * ICT @@ -328,7 +327,7 @@ irqreturn_t iwl_isr_ict(int irq, void *data); * TX / HCMD ******************************************************/ void iwl_txq_update_write_ptr(struct iwl_trans *trans, - struct iwl_tx_queue *txq); + struct iwl_tx_queue *txq); int iwlagn_txq_attach_buf_to_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, dma_addr_t addr, u16 len, u8 reset); @@ -337,17 +336,13 @@ int iwl_trans_pcie_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); void iwl_tx_cmd_complete(struct iwl_trans *trans, struct iwl_rx_cmd_buffer *rxb, int handler_status); void iwl_trans_txq_update_byte_cnt_tbl(struct iwl_trans *trans, - struct iwl_tx_queue *txq, - u16 byte_cnt); -void iwl_trans_pcie_tx_agg_disable(struct iwl_trans *trans, int queue); -void iwl_trans_set_wr_ptrs(struct iwl_trans *trans, int txq_id, u32 index); -void iwl_trans_tx_queue_set_status(struct iwl_trans *trans, - 
struct iwl_tx_queue *txq, - int tx_fifo_id, bool active); -void iwl_trans_pcie_tx_agg_setup(struct iwl_trans *trans, int queue, int fifo, - int sta_id, int tid, int frame_limit, u16 ssn); -void iwlagn_txq_free_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, - enum dma_data_direction dma_dir); + struct iwl_tx_queue *txq, + u16 byte_cnt); +void iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, int fifo, + int sta_id, int tid, int frame_limit, u16 ssn); +void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int queue); +void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, + enum dma_data_direction dma_dir); int iwl_tx_queue_reclaim(struct iwl_trans *trans, int txq_id, int index, struct sk_buff_head *skbs); int iwl_queue_space(const struct iwl_queue *q); diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-rx.c b/drivers/net/wireless/iwlwifi/pcie/rx.c index 08517d3c80bb..39a6ca1f009c 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-rx.c +++ b/drivers/net/wireless/iwlwifi/pcie/rx.c @@ -32,7 +32,7 @@ #include "iwl-prph.h" #include "iwl-io.h" -#include "iwl-trans-pcie-int.h" +#include "internal.h" #include "iwl-op-mode.h" #ifdef CONFIG_IWLWIFI_IDI @@ -130,7 +130,7 @@ static int iwl_rx_queue_space(const struct iwl_rx_queue *q) * iwl_rx_queue_update_write_ptr - Update the write pointer for the RX queue */ void iwl_rx_queue_update_write_ptr(struct iwl_trans *trans, - struct iwl_rx_queue *q) + struct iwl_rx_queue *q) { unsigned long flags; u32 reg; @@ -201,9 +201,7 @@ static inline __le32 iwlagn_dma_addr2rbd_ptr(dma_addr_t dma_addr) */ static void iwlagn_rx_queue_restock(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); - + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; struct list_head *element; struct iwl_rx_mem_buffer *rxb; @@ -253,9 +251,7 @@ static void iwlagn_rx_queue_restock(struct iwl_trans *trans) */ static void iwlagn_rx_allocate(struct iwl_trans *trans, gfp_t priority) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); - + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; struct list_head *element; struct iwl_rx_mem_buffer *rxb; @@ -278,8 +274,7 @@ static void iwlagn_rx_allocate(struct iwl_trans *trans, gfp_t priority) gfp_mask |= __GFP_COMP; /* Alloc a new receive buffer */ - page = alloc_pages(gfp_mask, - trans_pcie->rx_page_order); + page = alloc_pages(gfp_mask, trans_pcie->rx_page_order); if (!page) { if (net_ratelimit()) IWL_DEBUG_INFO(trans, "alloc_pages failed, " @@ -315,9 +310,10 @@ static void iwlagn_rx_allocate(struct iwl_trans *trans, gfp_t priority) BUG_ON(rxb->page); rxb->page = page; /* Get physical address of the RB */ - rxb->page_dma = dma_map_page(trans->dev, page, 0, - PAGE_SIZE << trans_pcie->rx_page_order, - DMA_FROM_DEVICE); + rxb->page_dma = + dma_map_page(trans->dev, page, 0, + PAGE_SIZE << trans_pcie->rx_page_order, + DMA_FROM_DEVICE); /* dma address must be no more than 36 bits */ BUG_ON(rxb->page_dma & ~DMA_BIT_MASK(36)); /* and also 256 byte aligned! 
*/ @@ -465,8 +461,8 @@ static void iwl_rx_handle_rxbuf(struct iwl_trans *trans, if (rxb->page != NULL) { rxb->page_dma = dma_map_page(trans->dev, rxb->page, 0, - PAGE_SIZE << trans_pcie->rx_page_order, - DMA_FROM_DEVICE); + PAGE_SIZE << trans_pcie->rx_page_order, + DMA_FROM_DEVICE); list_add_tail(&rxb->list, &rxq->rx_free); rxq->free_count++; } else @@ -497,7 +493,7 @@ static void iwl_rx_handle(struct iwl_trans *trans) /* Rx interrupt, but nothing sent from uCode */ if (i == r) - IWL_DEBUG_RX(trans, "r = %d, i = %d\n", r, i); + IWL_DEBUG_RX(trans, "HW = SW = %d\n", r); /* calculate total frames need to be restock after handling RX */ total_empty = r - rxq->write_actual; @@ -513,8 +509,8 @@ static void iwl_rx_handle(struct iwl_trans *trans) rxb = rxq->queue[i]; rxq->queue[i] = NULL; - IWL_DEBUG_RX(trans, "rxbuf: r = %d, i = %d (%p)\n", rxb); - + IWL_DEBUG_RX(trans, "rxbuf: HW = %d, SW = %d (%p)\n", + r, i, rxb); iwl_rx_handle_rxbuf(trans, rxb); i = (i + 1) & RX_QUEUE_MASK; @@ -546,12 +542,12 @@ static void iwl_irq_handle_error(struct iwl_trans *trans) /* W/A for WiFi/WiMAX coex and WiMAX own the RF */ if (trans->cfg->internal_wimax_coex && (!(iwl_read_prph(trans, APMG_CLK_CTRL_REG) & - APMS_CLK_VAL_MRB_FUNC_MODE) || + APMS_CLK_VAL_MRB_FUNC_MODE) || (iwl_read_prph(trans, APMG_PS_CTRL_REG) & - APMG_PS_CTRL_VAL_RESET_REQ))) { - struct iwl_trans_pcie *trans_pcie; + APMG_PS_CTRL_VAL_RESET_REQ))) { + struct iwl_trans_pcie *trans_pcie = + IWL_TRANS_GET_PCIE_TRANS(trans); - trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); clear_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status); iwl_op_mode_wimax_active(trans->op_mode); wake_up(&trans->wait_command_queue); @@ -567,6 +563,8 @@ static void iwl_irq_handle_error(struct iwl_trans *trans) /* tasklet for iwlagn interrupt */ void iwl_irq_tasklet(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct isr_statistics *isr_stats = &trans_pcie->isr_stats; u32 inta = 0; u32 handled = 0; unsigned long flags; @@ -575,10 +573,6 @@ void iwl_irq_tasklet(struct iwl_trans *trans) u32 inta_mask; #endif - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct isr_statistics *isr_stats = &trans_pcie->isr_stats; - - spin_lock_irqsave(&trans_pcie->irq_lock, flags); /* Ack/clear/reset pending uCode interrupts. @@ -593,7 +587,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) * interrupt coalescing can still be achieved. */ iwl_write32(trans, CSR_INT, - trans_pcie->inta | ~trans_pcie->inta_mask); + trans_pcie->inta | ~trans_pcie->inta_mask); inta = trans_pcie->inta; @@ -602,7 +596,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) /* just for debug */ inta_mask = iwl_read32(trans, CSR_INT_MASK); IWL_DEBUG_ISR(trans, "inta 0x%08x, enabled 0x%08x\n", - inta, inta_mask); + inta, inta_mask); } #endif @@ -651,7 +645,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) hw_rfkill = iwl_is_rfkill_set(trans); IWL_WARN(trans, "RF_KILL bit toggled to %s.\n", - hw_rfkill ? "disable radio" : "enable radio"); + hw_rfkill ? 
"disable radio" : "enable radio"); isr_stats->rfkill++; @@ -693,7 +687,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) * Rx "responses" (frame-received notification), and other * notifications from uCode come through here*/ if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX | - CSR_INT_BIT_RX_PERIODIC)) { + CSR_INT_BIT_RX_PERIODIC)) { IWL_DEBUG_ISR(trans, "Rx interrupt\n"); if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) { handled |= (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX); @@ -733,7 +727,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) */ if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) iwl_write8(trans, CSR_INT_PERIODIC_REG, - CSR_INT_PERIODIC_ENA); + CSR_INT_PERIODIC_ENA); isr_stats->rx++; } @@ -782,8 +776,7 @@ void iwl_irq_tasklet(struct iwl_trans *trans) /* Free dram table */ void iwl_free_isr_ict(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); if (trans_pcie->ict_tbl) { dma_free_coherent(trans->dev, ICT_SIZE, @@ -802,8 +795,7 @@ void iwl_free_isr_ict(struct iwl_trans *trans) */ int iwl_alloc_isr_ict(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); trans_pcie->ict_tbl = dma_alloc_coherent(trans->dev, ICT_SIZE, @@ -837,10 +829,9 @@ int iwl_alloc_isr_ict(struct iwl_trans *trans) */ void iwl_reset_ict(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 val; unsigned long flags; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); if (!trans_pcie->ict_tbl) return; @@ -868,9 +859,7 @@ void iwl_reset_ict(struct iwl_trans *trans) /* Device is going down disable ict interrupt usage */ void iwl_disable_ict(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); - + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); unsigned long flags; spin_lock_irqsave(&trans_pcie->irq_lock, flags); @@ -878,23 +867,19 @@ void iwl_disable_ict(struct iwl_trans *trans) spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); } +/* legacy (non-ICT) ISR. Assumes that trans_pcie->irq_lock is held */ static irqreturn_t iwl_isr(int irq, void *data) { struct iwl_trans *trans = data; - struct iwl_trans_pcie *trans_pcie; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 inta, inta_mask; - unsigned long flags; #ifdef CONFIG_IWLWIFI_DEBUG u32 inta_fh; #endif - if (!trans) - return IRQ_NONE; - trace_iwlwifi_dev_irq(trans->dev); + lockdep_assert_held(&trans_pcie->irq_lock); - trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - - spin_lock_irqsave(&trans_pcie->irq_lock, flags); + trace_iwlwifi_dev_irq(trans->dev); /* Disable (but don't clear!) interrupts here to avoid * back-to-back ISRs and sporadic interrupts from our NIC. @@ -918,7 +903,7 @@ static irqreturn_t iwl_isr(int irq, void *data) /* Hardware disappeared. It might have already raised * an interrupt */ IWL_WARN(trans, "HARDWARE GONE?? 
INTA == 0x%08x\n", inta); - goto unplugged; + return IRQ_HANDLED; } #ifdef CONFIG_IWLWIFI_DEBUG @@ -934,21 +919,16 @@ static irqreturn_t iwl_isr(int irq, void *data) if (likely(inta)) tasklet_schedule(&trans_pcie->irq_tasklet); else if (test_bit(STATUS_INT_ENABLED, &trans_pcie->status) && - !trans_pcie->inta) + !trans_pcie->inta) iwl_enable_interrupts(trans); - unplugged: - spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); - return IRQ_HANDLED; - - none: +none: /* re-enable interrupts here since we don't have anything to service. */ /* only Re-enable if disabled by irq and no schedules tasklet. */ if (test_bit(STATUS_INT_ENABLED, &trans_pcie->status) && - !trans_pcie->inta) + !trans_pcie->inta) iwl_enable_interrupts(trans); - spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); return IRQ_NONE; } @@ -974,15 +954,19 @@ irqreturn_t iwl_isr_ict(int irq, void *data) trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + spin_lock_irqsave(&trans_pcie->irq_lock, flags); + /* dram interrupt table not set yet, * use legacy interrupt. */ - if (!trans_pcie->use_ict) - return iwl_isr(irq, data); + if (unlikely(!trans_pcie->use_ict)) { + irqreturn_t ret = iwl_isr(irq, data); + spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); + return ret; + } trace_iwlwifi_dev_irq(trans->dev); - spin_lock_irqsave(&trans_pcie->irq_lock, flags); /* Disable (but don't clear!) interrupts here to avoid * back-to-back ISRs and sporadic interrupts from our NIC. @@ -1036,7 +1020,7 @@ irqreturn_t iwl_isr_ict(int irq, void *data) inta = (0xff & val) | ((0xff00 & val) << 16); IWL_DEBUG_ISR(trans, "ISR inta 0x%08x, enabled 0x%08x ict 0x%08x\n", - inta, inta_mask, val); + inta, inta_mask, val); inta &= trans_pcie->inta_mask; trans_pcie->inta |= inta; diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c b/drivers/net/wireless/iwlwifi/pcie/trans.c index 79c6b91417f9..939c2f78df58 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c +++ b/drivers/net/wireless/iwlwifi/pcie/trans.c @@ -70,15 +70,12 @@ #include "iwl-drv.h" #include "iwl-trans.h" -#include "iwl-trans-pcie-int.h" #include "iwl-csr.h" #include "iwl-prph.h" -#include "iwl-eeprom.h" #include "iwl-agn-hw.h" +#include "internal.h" /* FIXME: need to abstract out TX command (once we know what it looks like) */ -#include "iwl-commands.h" - -#define IWL_MASK(lo, hi) ((1 << (hi)) | ((1 << (hi)) - (1 << (lo)))) +#include "dvm/commands.h" #define SCD_QUEUECHAIN_SEL_ALL(trans, trans_pcie) \ (((1<<trans->cfg->base_params->num_of_queues) - 1) &\ @@ -86,8 +83,7 @@ static int iwl_trans_rx_alloc(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; struct device *dev = trans->dev; @@ -114,7 +110,7 @@ static int iwl_trans_rx_alloc(struct iwl_trans *trans) err_rb_stts: dma_free_coherent(dev, sizeof(__le32) * RX_QUEUE_SIZE, - rxq->bd, rxq->bd_dma); + rxq->bd, rxq->bd_dma); memset(&rxq->bd_dma, 0, sizeof(rxq->bd_dma)); rxq->bd = NULL; err_bd: @@ -123,8 +119,7 @@ err_bd: static void iwl_trans_rxq_free_rx_bufs(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; int i; @@ -134,8 +129,8 @@ static void iwl_trans_rxq_free_rx_bufs(struct iwl_trans *trans) * to an SKB, so we need to unmap and free potential storage */ if (rxq->pool[i].page != NULL) { 
dma_unmap_page(trans->dev, rxq->pool[i].page_dma, - PAGE_SIZE << trans_pcie->rx_page_order, - DMA_FROM_DEVICE); + PAGE_SIZE << trans_pcie->rx_page_order, + DMA_FROM_DEVICE); __free_pages(rxq->pool[i].page, trans_pcie->rx_page_order); rxq->pool[i].page = NULL; @@ -193,8 +188,7 @@ static void iwl_trans_rx_hw_init(struct iwl_trans *trans, static int iwl_rx_init(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; int i, err; @@ -236,10 +230,8 @@ static int iwl_rx_init(struct iwl_trans *trans) static void iwl_trans_pcie_rx_free(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; - unsigned long flags; /*if rxq->bd is NULL, it means that nothing has been allocated, @@ -274,11 +266,11 @@ static int iwl_trans_rx_stop(struct iwl_trans *trans) /* stop Rx DMA */ iwl_write_direct32(trans, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0); return iwl_poll_direct_bit(trans, FH_MEM_RSSR_RX_STATUS_REG, - FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE, 1000); + FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE, 1000); } -static inline int iwlagn_alloc_dma_ptr(struct iwl_trans *trans, - struct iwl_dma_ptr *ptr, size_t size) +static int iwlagn_alloc_dma_ptr(struct iwl_trans *trans, + struct iwl_dma_ptr *ptr, size_t size) { if (WARN_ON(ptr->addr)) return -EINVAL; @@ -291,8 +283,8 @@ static inline int iwlagn_alloc_dma_ptr(struct iwl_trans *trans, return 0; } -static inline void iwlagn_free_dma_ptr(struct iwl_trans *trans, - struct iwl_dma_ptr *ptr) +static void iwlagn_free_dma_ptr(struct iwl_trans *trans, + struct iwl_dma_ptr *ptr) { if (unlikely(!ptr->addr)) return; @@ -304,8 +296,13 @@ static inline void iwlagn_free_dma_ptr(struct iwl_trans *trans, static void iwl_trans_pcie_queue_stuck_timer(unsigned long data) { struct iwl_tx_queue *txq = (void *)data; + struct iwl_queue *q = &txq->q; struct iwl_trans_pcie *trans_pcie = txq->trans_pcie; struct iwl_trans *trans = iwl_trans_pcie_get_trans(trans_pcie); + u32 scd_sram_addr = trans_pcie->scd_base_addr + + SCD_TX_STTS_MEM_LOWER_BOUND + (16 * txq->q.id); + u8 buf[16]; + int i; spin_lock(&txq->lock); /* check if triggered erroneously */ @@ -315,26 +312,59 @@ static void iwl_trans_pcie_queue_stuck_timer(unsigned long data) } spin_unlock(&txq->lock); - IWL_ERR(trans, "Queue %d stuck for %u ms.\n", txq->q.id, jiffies_to_msecs(trans_pcie->wd_timeout)); IWL_ERR(trans, "Current SW read_ptr %d write_ptr %d\n", txq->q.read_ptr, txq->q.write_ptr); - IWL_ERR(trans, "Current HW read_ptr %d write_ptr %d\n", - iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq->q.id)) - & (TFD_QUEUE_SIZE_MAX - 1), - iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq->q.id))); + + iwl_read_targ_mem_bytes(trans, scd_sram_addr, buf, sizeof(buf)); + + iwl_print_hex_error(trans, buf, sizeof(buf)); + + for (i = 0; i < FH_TCSR_CHNL_NUM; i++) + IWL_ERR(trans, "FH TRBs(%d) = 0x%08x\n", i, + iwl_read_direct32(trans, FH_TX_TRB_REG(i))); + + for (i = 0; i < trans->cfg->base_params->num_of_queues; i++) { + u32 status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(i)); + u8 fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7; + bool active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE)); + u32 tbl_dw = + iwl_read_targ_mem(trans, + trans_pcie->scd_base_addr + + SCD_TRANS_TBL_OFFSET_QUEUE(i)); + + if (i & 0x1) + tbl_dw = (tbl_dw & 0xFFFF0000) >> 16; + 
else + tbl_dw = tbl_dw & 0x0000FFFF; + + IWL_ERR(trans, + "Q %d is %sactive and mapped to fifo %d ra_tid 0x%04x [%d,%d]\n", + i, active ? "" : "in", fifo, tbl_dw, + iwl_read_prph(trans, + SCD_QUEUE_RDPTR(i)) & (txq->q.n_bd - 1), + iwl_read_prph(trans, SCD_QUEUE_WRPTR(i))); + } + + for (i = q->read_ptr; i != q->write_ptr; + i = iwl_queue_inc_wrap(i, q->n_bd)) { + struct iwl_tx_cmd *tx_cmd = + (struct iwl_tx_cmd *)txq->entries[i].cmd->payload; + IWL_ERR(trans, "scratch %d = 0x%08x\n", i, + get_unaligned_le32(&tx_cmd->scratch)); + } iwl_op_mode_nic_error(trans->op_mode); } static int iwl_trans_txq_alloc(struct iwl_trans *trans, - struct iwl_tx_queue *txq, int slots_num, - u32 txq_id) + struct iwl_tx_queue *txq, int slots_num, + u32 txq_id) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); size_t tfd_sz = sizeof(struct iwl_tfd) * TFD_QUEUE_SIZE_MAX; int i; - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); if (WARN_ON(txq->entries || txq->tfds)) return -EINVAL; @@ -435,7 +465,7 @@ static void iwl_tx_queue_unmap(struct iwl_trans *trans, int txq_id) spin_lock_bh(&txq->lock); while (q->write_ptr != q->read_ptr) { - iwlagn_txq_free_tfd(trans, txq, dma_dir); + iwl_txq_free_tfd(trans, txq, dma_dir); q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd); } spin_unlock_bh(&txq->lock); @@ -455,6 +485,7 @@ static void iwl_tx_queue_free(struct iwl_trans *trans, int txq_id) struct iwl_tx_queue *txq = &trans_pcie->txq[txq_id]; struct device *dev = trans->dev; int i; + if (WARN_ON(!txq)) return; @@ -574,11 +605,11 @@ error: } static int iwl_tx_init(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int ret; int txq_id, slots_num; unsigned long flags; bool alloc = false; - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); if (!trans_pcie->txq) { ret = iwl_trans_tx_alloc(trans); @@ -643,10 +674,9 @@ static void iwl_set_pwr_vmain(struct iwl_trans *trans) static u16 iwl_pciexp_link_ctrl(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int pos; u16 pci_lnk_ctl; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); struct pci_dev *pci_dev = trans_pcie->pci_dev; @@ -700,14 +730,14 @@ static int iwl_apm_init(struct iwl_trans *trans) /* Disable L0S exit timer (platform NMI Work/Around) */ iwl_set_bit(trans, CSR_GIO_CHICKEN_BITS, - CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER); + CSR_GIO_CHICKEN_BITS_REG_BIT_DIS_L0S_EXIT_TIMER); /* * Disable L0s without affecting L1; * don't wait for ICH L0s (ICH bug W/A) */ iwl_set_bit(trans, CSR_GIO_CHICKEN_BITS, - CSR_GIO_CHICKEN_BITS_REG_BIT_L1A_NO_L0S_RX); + CSR_GIO_CHICKEN_BITS_REG_BIT_L1A_NO_L0S_RX); /* Set FH wait threshold to maximum (HW error during stress W/A) */ iwl_set_bit(trans, CSR_DBG_HPET_MEM_REG, CSR_DBG_HPET_MEM_REG_VAL); @@ -717,7 +747,7 @@ static int iwl_apm_init(struct iwl_trans *trans) * wake device's PCI Express link L1a -> L0s */ iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A); + CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A); iwl_apm_config(trans); @@ -738,8 +768,8 @@ static int iwl_apm_init(struct iwl_trans *trans) * and accesses to uCode SRAM. 
*/ ret = iwl_poll_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, - CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000); + CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, + CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000); if (ret < 0) { IWL_DEBUG_INFO(trans, "Failed to init the card\n"); goto out; @@ -773,8 +803,8 @@ static int iwl_apm_stop_master(struct iwl_trans *trans) iwl_set_bit(trans, CSR_RESET, CSR_RESET_REG_FLAG_STOP_MASTER); ret = iwl_poll_bit(trans, CSR_RESET, - CSR_RESET_REG_FLAG_MASTER_DISABLED, - CSR_RESET_REG_FLAG_MASTER_DISABLED, 100); + CSR_RESET_REG_FLAG_MASTER_DISABLED, + CSR_RESET_REG_FLAG_MASTER_DISABLED, 100); if (ret) IWL_WARN(trans, "Master Disable Timed Out, 100 usec\n"); @@ -816,8 +846,7 @@ static int iwl_nic_init(struct iwl_trans *trans) iwl_apm_init(trans); /* Set interrupt coalescing calibration timer to default (512 usecs) */ - iwl_write8(trans, CSR_INT_COALESCING, - IWL_HOST_INT_CALIB_TIMEOUT_DEF); + iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_CALIB_TIMEOUT_DEF); spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); @@ -836,8 +865,8 @@ static int iwl_nic_init(struct iwl_trans *trans) if (trans->cfg->base_params->shadow_reg_enable) { /* enable shadow regs in HW */ - iwl_set_bit(trans, CSR_MAC_SHADOW_REG_CTRL, - 0x800FFFFF); + iwl_set_bit(trans, CSR_MAC_SHADOW_REG_CTRL, 0x800FFFFF); + IWL_DEBUG_INFO(trans, "Enabling shadow registers in device\n"); } return 0; @@ -851,13 +880,13 @@ static int iwl_set_hw_ready(struct iwl_trans *trans) int ret; iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_NIC_READY); + CSR_HW_IF_CONFIG_REG_BIT_NIC_READY); /* See if we got it */ ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_BIT_NIC_READY, - CSR_HW_IF_CONFIG_REG_BIT_NIC_READY, - HW_READY_TIMEOUT); + CSR_HW_IF_CONFIG_REG_BIT_NIC_READY, + CSR_HW_IF_CONFIG_REG_BIT_NIC_READY, + HW_READY_TIMEOUT); IWL_DEBUG_INFO(trans, "hardware%s ready\n", ret < 0 ? 
" not" : ""); return ret; @@ -877,11 +906,11 @@ static int iwl_prepare_card_hw(struct iwl_trans *trans) /* If HW is not ready, prepare the conditions to check again */ iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_PREPARE); + CSR_HW_IF_CONFIG_REG_PREPARE); ret = iwl_poll_bit(trans, CSR_HW_IF_CONFIG_REG, - ~CSR_HW_IF_CONFIG_REG_BIT_NIC_PREPARE_DONE, - CSR_HW_IF_CONFIG_REG_BIT_NIC_PREPARE_DONE, 150000); + ~CSR_HW_IF_CONFIG_REG_BIT_NIC_PREPARE_DONE, + CSR_HW_IF_CONFIG_REG_BIT_NIC_PREPARE_DONE, 150000); if (ret < 0) return ret; @@ -908,32 +937,33 @@ static int iwl_load_section(struct iwl_trans *trans, u8 section_num, trans_pcie->ucode_write_complete = false; iwl_write_direct32(trans, - FH_TCSR_CHNL_TX_CONFIG_REG(FH_SRVC_CHNL), - FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE); + FH_TCSR_CHNL_TX_CONFIG_REG(FH_SRVC_CHNL), + FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE); iwl_write_direct32(trans, - FH_SRVC_CHNL_SRAM_ADDR_REG(FH_SRVC_CHNL), dst_addr); + FH_SRVC_CHNL_SRAM_ADDR_REG(FH_SRVC_CHNL), + dst_addr); iwl_write_direct32(trans, FH_TFDIB_CTRL0_REG(FH_SRVC_CHNL), phy_addr & FH_MEM_TFDIB_DRAM_ADDR_LSB_MSK); iwl_write_direct32(trans, - FH_TFDIB_CTRL1_REG(FH_SRVC_CHNL), - (iwl_get_dma_hi_addr(phy_addr) - << FH_MEM_TFDIB_REG1_ADDR_BITSHIFT) | byte_cnt); + FH_TFDIB_CTRL1_REG(FH_SRVC_CHNL), + (iwl_get_dma_hi_addr(phy_addr) + << FH_MEM_TFDIB_REG1_ADDR_BITSHIFT) | byte_cnt); iwl_write_direct32(trans, - FH_TCSR_CHNL_TX_BUF_STS_REG(FH_SRVC_CHNL), - 1 << FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_NUM | - 1 << FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_IDX | - FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_VALID); + FH_TCSR_CHNL_TX_BUF_STS_REG(FH_SRVC_CHNL), + 1 << FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_NUM | + 1 << FH_TCSR_CHNL_TX_BUF_STS_REG_POS_TB_IDX | + FH_TCSR_CHNL_TX_BUF_STS_REG_VAL_TFDB_VALID); iwl_write_direct32(trans, - FH_TCSR_CHNL_TX_CONFIG_REG(FH_SRVC_CHNL), - FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE | - FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE | - FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_ENDTFD); + FH_TCSR_CHNL_TX_CONFIG_REG(FH_SRVC_CHNL), + FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE | + FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE | + FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_ENDTFD); IWL_DEBUG_FW(trans, "[%d] uCode section being loaded...\n", section_num); @@ -1016,15 +1046,12 @@ static int iwl_trans_pcie_start_fw(struct iwl_trans *trans, /* * Activate/Deactivate Tx DMA/FIFO channels according tx fifos mask - * must be called under the irq lock and with MAC access */ static void iwl_trans_txq_set_sched(struct iwl_trans *trans, u32 mask) { struct iwl_trans_pcie __maybe_unused *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - lockdep_assert_held(&trans_pcie->irq_lock); - iwl_write_prph(trans, SCD_TXFACT, mask); } @@ -1032,11 +1059,12 @@ static void iwl_tx_start(struct iwl_trans *trans) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 a; - unsigned long flags; - int i, chan; + int chan; u32 reg_val; - spin_lock_irqsave(&trans_pcie->irq_lock, flags); + /* make sure all queue are not stopped/used */ + memset(trans_pcie->queue_stopped, 0, sizeof(trans_pcie->queue_stopped)); + memset(trans_pcie->queue_used, 0, sizeof(trans_pcie->queue_used)); trans_pcie->scd_base_addr = iwl_read_prph(trans, SCD_SRAM_BASE_ADDR); @@ -1063,64 +1091,26 @@ static void iwl_tx_start(struct iwl_trans *trans) */ iwl_write_prph(trans, SCD_CHAINEXT_EN, 0); + iwl_trans_ac_txq_enable(trans, trans_pcie->cmd_queue, + trans_pcie->cmd_fifo); + + /* Activate all Tx DMA/FIFO channels */ + iwl_trans_txq_set_sched(trans, IWL_MASK(0, 7)); 
+ /* Enable DMA channel */ for (chan = 0; chan < FH_TCSR_CHNL_NUM ; chan++) iwl_write_direct32(trans, FH_TCSR_CHNL_TX_CONFIG_REG(chan), - FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE | - FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE); + FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE | + FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_ENABLE); /* Update FH chicken bits */ reg_val = iwl_read_direct32(trans, FH_TX_CHICKEN_BITS_REG); iwl_write_direct32(trans, FH_TX_CHICKEN_BITS_REG, reg_val | FH_TX_CHICKEN_BITS_SCD_AUTO_RETRY_EN); - iwl_write_prph(trans, SCD_QUEUECHAIN_SEL, - SCD_QUEUECHAIN_SEL_ALL(trans, trans_pcie)); - iwl_write_prph(trans, SCD_AGGR_SEL, 0); - - /* initiate the queues */ - for (i = 0; i < trans->cfg->base_params->num_of_queues; i++) { - iwl_write_prph(trans, SCD_QUEUE_RDPTR(i), 0); - iwl_write_direct32(trans, HBUS_TARG_WRPTR, 0 | (i << 8)); - iwl_write_targ_mem(trans, trans_pcie->scd_base_addr + - SCD_CONTEXT_QUEUE_OFFSET(i), 0); - iwl_write_targ_mem(trans, trans_pcie->scd_base_addr + - SCD_CONTEXT_QUEUE_OFFSET(i) + - sizeof(u32), - ((SCD_WIN_SIZE << - SCD_QUEUE_CTX_REG2_WIN_SIZE_POS) & - SCD_QUEUE_CTX_REG2_WIN_SIZE_MSK) | - ((SCD_FRAME_LIMIT << - SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS) & - SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK)); - } - - iwl_write_prph(trans, SCD_INTERRUPT_MASK, - IWL_MASK(0, trans->cfg->base_params->num_of_queues)); - - /* Activate all Tx DMA/FIFO channels */ - iwl_trans_txq_set_sched(trans, IWL_MASK(0, 7)); - - iwl_trans_set_wr_ptrs(trans, trans_pcie->cmd_queue, 0); - - /* make sure all queue are not stopped/used */ - memset(trans_pcie->queue_stopped, 0, sizeof(trans_pcie->queue_stopped)); - memset(trans_pcie->queue_used, 0, sizeof(trans_pcie->queue_used)); - - for (i = 0; i < trans_pcie->n_q_to_fifo; i++) { - int fifo = trans_pcie->setup_q_to_fifo[i]; - - set_bit(i, trans_pcie->queue_used); - - iwl_trans_tx_queue_set_status(trans, &trans_pcie->txq[i], - fifo, true); - } - - spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); - /* Enable L1-Active */ iwl_clear_bits_prph(trans, APMG_PCIDEV_STT_REG, - APMG_PCIDEV_STT_VAL_L1_ACT_DIS); + APMG_PCIDEV_STT_VAL_L1_ACT_DIS); } static void iwl_trans_pcie_fw_alive(struct iwl_trans *trans) @@ -1134,9 +1124,9 @@ static void iwl_trans_pcie_fw_alive(struct iwl_trans *trans) */ static int iwl_trans_tx_stop(struct iwl_trans *trans) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int ch, txq_id, ret; unsigned long flags; - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); /* Turn off all Tx DMA fifos */ spin_lock_irqsave(&trans_pcie->irq_lock, flags); @@ -1148,18 +1138,19 @@ static int iwl_trans_tx_stop(struct iwl_trans *trans) iwl_write_direct32(trans, FH_TCSR_CHNL_TX_CONFIG_REG(ch), 0x0); ret = iwl_poll_direct_bit(trans, FH_TSSR_TX_STATUS_REG, - FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE(ch), - 1000); + FH_TSSR_TX_STATUS_REG_MSK_CHNL_IDLE(ch), 1000); if (ret < 0) - IWL_ERR(trans, "Failing on timeout while stopping" - " DMA channel %d [0x%08x]", ch, - iwl_read_direct32(trans, - FH_TSSR_TX_STATUS_REG)); + IWL_ERR(trans, + "Failing on timeout while stopping DMA channel %d [0x%08x]\n", + ch, + iwl_read_direct32(trans, + FH_TSSR_TX_STATUS_REG)); } spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); if (!trans_pcie->txq) { - IWL_WARN(trans, "Stopping tx queues that aren't allocated..."); + IWL_WARN(trans, + "Stopping tx queues that aren't allocated...\n"); return 0; } @@ -1173,8 +1164,8 @@ static int iwl_trans_tx_stop(struct iwl_trans *trans) static void iwl_trans_pcie_stop_device(struct iwl_trans *trans) { - 
unsigned long flags; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + unsigned long flags; /* tell the device to stop sending interrupts */ spin_lock_irqsave(&trans_pcie->irq_lock, flags); @@ -1204,7 +1195,7 @@ static void iwl_trans_pcie_stop_device(struct iwl_trans *trans) /* Make sure (redundant) we've released our request to stay awake */ iwl_clear_bit(trans, CSR_GP_CNTRL, - CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); + CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ); /* Stop the device, and put it in low power state */ iwl_apm_stop(trans); @@ -1273,13 +1264,27 @@ static int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, spin_lock(&txq->lock); + /* In AGG mode, the index in the ring must correspond to the WiFi + * sequence number. This is a HW requirements to help the SCD to parse + * the BA. + * Check here that the packets are in the right place on the ring. + */ +#ifdef CONFIG_IWLWIFI_DEBUG + wifi_seq = SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl)); + WARN_ONCE((iwl_read_prph(trans, SCD_AGGR_SEL) & BIT(txq_id)) && + ((wifi_seq & 0xff) != q->write_ptr), + "Q: %d WiFi Seq %d tfdNum %d", + txq_id, wifi_seq, q->write_ptr); +#endif + /* Set up driver data for this TFD */ txq->entries[q->write_ptr].skb = skb; txq->entries[q->write_ptr].cmd = dev_cmd; dev_cmd->hdr.cmd = REPLY_TX; - dev_cmd->hdr.sequence = cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) | - INDEX_TO_SEQ(q->write_ptr))); + dev_cmd->hdr.sequence = + cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) | + INDEX_TO_SEQ(q->write_ptr))); /* Set up first empty entry in queue's array of Tx/cmd buffers */ out_meta = &txq->entries[q->write_ptr].meta; @@ -1344,7 +1349,7 @@ static int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, /* take back ownership of DMA buffer to enable update */ dma_sync_single_for_cpu(trans->dev, txcmd_phys, firstlen, - DMA_BIDIRECTIONAL); + DMA_BIDIRECTIONAL); tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys); tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys); @@ -1356,16 +1361,17 @@ static int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, iwl_trans_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len)); dma_sync_single_for_device(trans->dev, txcmd_phys, firstlen, - DMA_BIDIRECTIONAL); + DMA_BIDIRECTIONAL); trace_iwlwifi_dev_tx(trans->dev, - &((struct iwl_tfd *)txq->tfds)[txq->q.write_ptr], + &txq->tfds[txq->q.write_ptr], sizeof(struct iwl_tfd), &dev_cmd->hdr, firstlen, skb->data + hdr_len, secondlen); /* start timer if queue currently empty */ - if (q->read_ptr == q->write_ptr && trans_pcie->wd_timeout) + if (txq->need_update && q->read_ptr == q->write_ptr && + trans_pcie->wd_timeout) mod_timer(&txq->stuck_timer, jiffies + trans_pcie->wd_timeout); /* Tell device the write index *just past* this latest filled TFD */ @@ -1395,8 +1401,7 @@ static int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int err; bool hw_rfkill; @@ -1409,7 +1414,7 @@ static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) iwl_alloc_isr_ict(trans); err = request_irq(trans_pcie->irq, iwl_isr_ict, IRQF_SHARED, - DRV_NAME, trans); + DRV_NAME, trans); if (err) { IWL_ERR(trans, "Error allocating IRQ %d\n", trans_pcie->irq); @@ -1422,7 +1427,7 @@ static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) err = iwl_prepare_card_hw(trans); if (err) { - IWL_ERR(trans, "Error while 
preparing HW: %d", err); + IWL_ERR(trans, "Error while preparing HW: %d\n", err); goto err_free_irq; } @@ -1447,9 +1452,9 @@ error: static void iwl_trans_pcie_stop_hw(struct iwl_trans *trans, bool op_mode_leaving) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); bool hw_rfkill; unsigned long flags; - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); iwl_apm_stop(trans); @@ -1520,6 +1525,7 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans, struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); trans_pcie->cmd_queue = trans_cfg->cmd_queue; + trans_pcie->cmd_fifo = trans_cfg->cmd_fifo; if (WARN_ON(trans_cfg->n_no_reclaim_cmds > MAX_NO_RECLAIM_CMDS)) trans_pcie->n_no_reclaim_cmds = 0; else @@ -1528,17 +1534,6 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans, memcpy(trans_pcie->no_reclaim_cmds, trans_cfg->no_reclaim_cmds, trans_pcie->n_no_reclaim_cmds * sizeof(u8)); - trans_pcie->n_q_to_fifo = trans_cfg->n_queue_to_fifo; - - if (WARN_ON(trans_pcie->n_q_to_fifo > IWL_MAX_HW_QUEUES)) - trans_pcie->n_q_to_fifo = IWL_MAX_HW_QUEUES; - - /* at least the command queue must be mapped */ - WARN_ON(!trans_pcie->n_q_to_fifo); - - memcpy(trans_pcie->setup_q_to_fifo, trans_cfg->queue_to_fifo, - trans_pcie->n_q_to_fifo * sizeof(u8)); - trans_pcie->rx_buf_size_8k = trans_cfg->rx_buf_size_8k; if (trans_pcie->rx_buf_size_8k) trans_pcie->rx_page_order = get_order(8 * 1024); @@ -1553,8 +1548,7 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans, void iwl_trans_pcie_free(struct iwl_trans *trans) { - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); iwl_trans_pcie_tx_free(trans); #ifndef CONFIG_IWLWIFI_IDI @@ -1569,6 +1563,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans) iounmap(trans_pcie->hw_base); pci_release_regions(trans_pcie->pci_dev); pci_disable_device(trans_pcie->pci_dev); + kmem_cache_destroy(trans->dev_cmd_pool); kfree(trans); } @@ -1816,8 +1811,8 @@ static const struct file_operations iwl_dbgfs_##name##_ops = { \ }; static ssize_t iwl_dbgfs_tx_queue_read(struct file *file, - char __user *user_buf, - size_t count, loff_t *ppos) + char __user *user_buf, + size_t count, loff_t *ppos) { struct iwl_trans *trans = file->private_data; struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); @@ -1853,11 +1848,11 @@ static ssize_t iwl_dbgfs_tx_queue_read(struct file *file, } static ssize_t iwl_dbgfs_rx_queue_read(struct file *file, - char __user *user_buf, - size_t count, loff_t *ppos) { + char __user *user_buf, + size_t count, loff_t *ppos) +{ struct iwl_trans *trans = file->private_data; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_rx_queue *rxq = &trans_pcie->rxq; char buf[256]; int pos = 0; @@ -1881,11 +1876,10 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file, static ssize_t iwl_dbgfs_interrupt_read(struct file *file, char __user *user_buf, - size_t count, loff_t *ppos) { - + size_t count, loff_t *ppos) +{ struct iwl_trans *trans = file->private_data; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct isr_statistics *isr_stats = &trans_pcie->isr_stats; int pos = 0; @@ -1943,8 +1937,7 @@ static ssize_t iwl_dbgfs_interrupt_write(struct file *file, size_t count, loff_t *ppos) { struct 
iwl_trans *trans = file->private_data; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct isr_statistics *isr_stats = &trans_pcie->isr_stats; char buf[8]; @@ -1964,8 +1957,8 @@ static ssize_t iwl_dbgfs_interrupt_write(struct file *file, } static ssize_t iwl_dbgfs_csr_write(struct file *file, - const char __user *user_buf, - size_t count, loff_t *ppos) + const char __user *user_buf, + size_t count, loff_t *ppos) { struct iwl_trans *trans = file->private_data; char buf[8]; @@ -1985,8 +1978,8 @@ static ssize_t iwl_dbgfs_csr_write(struct file *file, } static ssize_t iwl_dbgfs_fh_reg_read(struct file *file, - char __user *user_buf, - size_t count, loff_t *ppos) + char __user *user_buf, + size_t count, loff_t *ppos) { struct iwl_trans *trans = file->private_data; char *buf; @@ -2012,7 +2005,9 @@ static ssize_t iwl_dbgfs_fw_restart_write(struct file *file, if (!trans->op_mode) return -EAGAIN; + local_bh_disable(); iwl_op_mode_nic_error(trans->op_mode); + local_bh_enable(); return count; } @@ -2029,7 +2024,7 @@ DEBUGFS_WRITE_FILE_OPS(fw_restart); * */ static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans, - struct dentry *dir) + struct dentry *dir) { DEBUGFS_ADD_FILE(rx_queue, dir, S_IRUSR); DEBUGFS_ADD_FILE(tx_queue, dir, S_IRUSR); @@ -2041,9 +2036,10 @@ static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans, } #else static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans, - struct dentry *dir) -{ return 0; } - + struct dentry *dir) +{ + return 0; +} #endif /*CONFIG_IWLWIFI_DEBUGFS */ static const struct iwl_trans_ops trans_ops_pcie = { @@ -2060,8 +2056,8 @@ static const struct iwl_trans_ops trans_ops_pcie = { .tx = iwl_trans_pcie_tx, .reclaim = iwl_trans_pcie_reclaim, - .tx_agg_disable = iwl_trans_pcie_tx_agg_disable, - .tx_agg_setup = iwl_trans_pcie_tx_agg_setup, + .txq_disable = iwl_trans_pcie_txq_disable, + .txq_enable = iwl_trans_pcie_txq_enable, .dbgfs_register = iwl_trans_pcie_dbgfs_register, @@ -2088,7 +2084,7 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, int err; trans = kzalloc(sizeof(struct iwl_trans) + - sizeof(struct iwl_trans_pcie), GFP_KERNEL); + sizeof(struct iwl_trans_pcie), GFP_KERNEL); if (WARN_ON(!trans)) return NULL; @@ -2104,7 +2100,7 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, /* W/A - seems to solve weird behavior. We need to remove this if we * don't want to stay in L1 all the time. 
This wastes a lot of power */ pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1 | - PCIE_LINK_STATE_CLKPM); + PCIE_LINK_STATE_CLKPM); if (pci_enable_device(pdev)) { err = -ENODEV; @@ -2120,7 +2116,7 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); if (!err) err = pci_set_consistent_dma_mask(pdev, - DMA_BIT_MASK(32)); + DMA_BIT_MASK(32)); /* both attempts failed: */ if (err) { dev_printk(KERN_ERR, &pdev->dev, @@ -2131,25 +2127,26 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, err = pci_request_regions(pdev, DRV_NAME); if (err) { - dev_printk(KERN_ERR, &pdev->dev, "pci_request_regions failed"); + dev_printk(KERN_ERR, &pdev->dev, + "pci_request_regions failed\n"); goto out_pci_disable_device; } trans_pcie->hw_base = pci_ioremap_bar(pdev, 0); if (!trans_pcie->hw_base) { - dev_printk(KERN_ERR, &pdev->dev, "pci_ioremap_bar failed"); + dev_printk(KERN_ERR, &pdev->dev, "pci_ioremap_bar failed\n"); err = -ENODEV; goto out_pci_release_regions; } dev_printk(KERN_INFO, &pdev->dev, - "pci_resource_len = 0x%08llx\n", - (unsigned long long) pci_resource_len(pdev, 0)); + "pci_resource_len = 0x%08llx\n", + (unsigned long long) pci_resource_len(pdev, 0)); dev_printk(KERN_INFO, &pdev->dev, - "pci_resource_base = %p\n", trans_pcie->hw_base); + "pci_resource_base = %p\n", trans_pcie->hw_base); dev_printk(KERN_INFO, &pdev->dev, - "HW Revision ID = 0x%X\n", pdev->revision); + "HW Revision ID = 0x%X\n", pdev->revision); /* We disable the RETRY_TIMEOUT register (0x41) to keep * PCI Tx retries from interfering with C3 CPU state */ @@ -2158,7 +2155,7 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, err = pci_enable_msi(pdev); if (err) dev_printk(KERN_ERR, &pdev->dev, - "pci_enable_msi failed(0X%x)", err); + "pci_enable_msi failed(0X%x)\n", err); trans->dev = &pdev->dev; trans_pcie->irq = pdev->irq; @@ -2180,8 +2177,25 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, init_waitqueue_head(&trans->wait_command_queue); spin_lock_init(&trans->reg_lock); + snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name), + "iwl_cmd_pool:%s", dev_name(trans->dev)); + + trans->dev_cmd_headroom = 0; + trans->dev_cmd_pool = + kmem_cache_create(trans->dev_cmd_pool_name, + sizeof(struct iwl_device_cmd) + + trans->dev_cmd_headroom, + sizeof(void *), + SLAB_HWCACHE_ALIGN, + NULL); + + if (!trans->dev_cmd_pool) + goto out_pci_disable_msi; + return trans; +out_pci_disable_msi: + pci_disable_msi(pdev); out_pci_release_regions: pci_release_regions(pdev); out_pci_disable_device: @@ -2190,4 +2204,3 @@ out_no_pci: kfree(trans); return NULL; } - diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-tx.c b/drivers/net/wireless/iwlwifi/pcie/tx.c index a8750238ee09..6baf8deef519 100644 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie-tx.c +++ b/drivers/net/wireless/iwlwifi/pcie/tx.c @@ -34,11 +34,10 @@ #include "iwl-csr.h" #include "iwl-prph.h" #include "iwl-io.h" -#include "iwl-agn-hw.h" #include "iwl-op-mode.h" -#include "iwl-trans-pcie-int.h" +#include "internal.h" /* FIXME: need to abstract out TX command (once we know what it looks like) */ -#include "iwl-commands.h" +#include "dvm/commands.h" #define IWL_TX_CRC_SIZE 4 #define IWL_TX_DELIMITER_SIZE 4 @@ -47,12 +46,11 @@ * iwl_trans_txq_update_byte_cnt_tbl - Set up entry in Tx byte-count array */ void iwl_trans_txq_update_byte_cnt_tbl(struct iwl_trans *trans, - struct iwl_tx_queue *txq, - u16 byte_cnt) + struct iwl_tx_queue *txq, + u16 byte_cnt) { 
struct iwlagn_scd_bc_tbl *scd_bc_tbl; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); int write_ptr = txq->q.write_ptr; int txq_id = txq->q.id; u8 sec_ctl = 0; @@ -178,8 +176,8 @@ static inline u8 iwl_tfd_get_num_tbs(struct iwl_tfd *tfd) return tfd->num_tbs & 0x1f; } -static void iwlagn_unmap_tfd(struct iwl_trans *trans, struct iwl_cmd_meta *meta, - struct iwl_tfd *tfd, enum dma_data_direction dma_dir) +static void iwl_unmap_tfd(struct iwl_trans *trans, struct iwl_cmd_meta *meta, + struct iwl_tfd *tfd, enum dma_data_direction dma_dir) { int i; int num_tbs; @@ -209,7 +207,7 @@ static void iwlagn_unmap_tfd(struct iwl_trans *trans, struct iwl_cmd_meta *meta, } /** - * iwlagn_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr] + * iwl_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr] * @trans - transport private data * @txq - tx queue * @dma_dir - the direction of the DMA mapping @@ -217,8 +215,8 @@ static void iwlagn_unmap_tfd(struct iwl_trans *trans, struct iwl_cmd_meta *meta, * Does NOT advance any TFD circular buffer read/write indexes * Does NOT free the TFD itself (which is within circular buffer) */ -void iwlagn_txq_free_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, - enum dma_data_direction dma_dir) +void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, + enum dma_data_direction dma_dir) { struct iwl_tfd *tfd_tmp = txq->tfds; @@ -229,8 +227,8 @@ void iwlagn_txq_free_tfd(struct iwl_trans *trans, struct iwl_tx_queue *txq, lockdep_assert_held(&txq->lock); /* We have only q->n_window txq->entries, but we use q->n_bd tfds */ - iwlagn_unmap_tfd(trans, &txq->entries[idx].meta, - &tfd_tmp[rd_ptr], dma_dir); + iwl_unmap_tfd(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr], + dma_dir); /* free SKB */ if (txq->entries) { @@ -270,7 +268,7 @@ int iwlagn_txq_attach_buf_to_tfd(struct iwl_trans *trans, /* Each TFD can point to a maximum 20 Tx buffers */ if (num_tbs >= IWL_NUM_OF_TBS) { IWL_ERR(trans, "Error can not send more than %d chunks\n", - IWL_NUM_OF_TBS); + IWL_NUM_OF_TBS); return -EINVAL; } @@ -279,7 +277,7 @@ int iwlagn_txq_attach_buf_to_tfd(struct iwl_trans *trans, if (unlikely(addr & ~IWL_TX_DMA_MASK)) IWL_ERR(trans, "Unaligned address = %llx\n", - (unsigned long long)addr); + (unsigned long long)addr); iwl_tfd_set_tb(tfd, num_tbs, addr, len); @@ -382,16 +380,14 @@ static void iwlagn_txq_inval_byte_cnt_tbl(struct iwl_trans *trans, tfd_offset[TFD_QUEUE_SIZE_MAX + read_ptr] = bc_ent; } -static int iwlagn_tx_queue_set_q2ratid(struct iwl_trans *trans, u16 ra_tid, - u16 txq_id) +static int iwl_txq_set_ratid_map(struct iwl_trans *trans, u16 ra_tid, + u16 txq_id) { + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); u32 tbl_dw_addr; u32 tbl_dw; u16 scd_q2ratid; - struct iwl_trans_pcie *trans_pcie = - IWL_TRANS_GET_PCIE_TRANS(trans); - scd_q2ratid = ra_tid & SCD_QUEUE_RA_TID_MAP_RATID_MSK; tbl_dw_addr = trans_pcie->scd_base_addr + @@ -409,7 +405,7 @@ static int iwlagn_tx_queue_set_q2ratid(struct iwl_trans *trans, u16 ra_tid, return 0; } -static void iwlagn_tx_queue_stop_scheduler(struct iwl_trans *trans, u16 txq_id) +static inline void iwl_txq_set_inactive(struct iwl_trans *trans, u16 txq_id) { /* Simply stop the queue, but don't change any configuration; * the SCD_ACT_EN bit is the write-enable mask for the ACTIVE bit. 
*/ @@ -419,102 +415,87 @@ static void iwlagn_tx_queue_stop_scheduler(struct iwl_trans *trans, u16 txq_id) (1 << SCD_QUEUE_STTS_REG_POS_SCD_ACT_EN)); } -void iwl_trans_set_wr_ptrs(struct iwl_trans *trans, - int txq_id, u32 index) -{ - IWL_DEBUG_TX_QUEUES(trans, "Q %d WrPtr: %d\n", txq_id, index & 0xff); - iwl_write_direct32(trans, HBUS_TARG_WRPTR, - (index & 0xff) | (txq_id << 8)); - iwl_write_prph(trans, SCD_QUEUE_RDPTR(txq_id), index); -} - -void iwl_trans_tx_queue_set_status(struct iwl_trans *trans, - struct iwl_tx_queue *txq, - int tx_fifo_id, bool active) -{ - int txq_id = txq->q.id; - - iwl_write_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id), - (active << SCD_QUEUE_STTS_REG_POS_ACTIVE) | - (tx_fifo_id << SCD_QUEUE_STTS_REG_POS_TXF) | - (1 << SCD_QUEUE_STTS_REG_POS_WSL) | - SCD_QUEUE_STTS_REG_MSK); - - if (active) - IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d on FIFO %d\n", - txq_id, tx_fifo_id); - else - IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", txq_id); -} - -void iwl_trans_pcie_tx_agg_setup(struct iwl_trans *trans, int txq_id, int fifo, - int sta_id, int tid, int frame_limit, u16 ssn) +void iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, int fifo, + int sta_id, int tid, int frame_limit, u16 ssn) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - unsigned long flags; - u16 ra_tid = BUILD_RAxTID(sta_id, tid); if (test_and_set_bit(txq_id, trans_pcie->queue_used)) WARN_ONCE(1, "queue %d already used - expect issues", txq_id); - spin_lock_irqsave(&trans_pcie->irq_lock, flags); - /* Stop this Tx queue before configuring it */ - iwlagn_tx_queue_stop_scheduler(trans, txq_id); + iwl_txq_set_inactive(trans, txq_id); - /* Map receiver-address / traffic-ID to this queue */ - iwlagn_tx_queue_set_q2ratid(trans, ra_tid, txq_id); + /* Set this queue as a chain-building queue unless it is CMD queue */ + if (txq_id != trans_pcie->cmd_queue) + iwl_set_bits_prph(trans, SCD_QUEUECHAIN_SEL, BIT(txq_id)); - /* Set this queue as a chain-building queue */ - iwl_set_bits_prph(trans, SCD_QUEUECHAIN_SEL, BIT(txq_id)); + /* If this queue is mapped to a certain station: it is an AGG queue */ + if (sta_id != IWL_INVALID_STATION) { + u16 ra_tid = BUILD_RAxTID(sta_id, tid); - /* enable aggregations for the queue */ - iwl_set_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id)); + /* Map receiver-address / traffic-ID to this queue */ + iwl_txq_set_ratid_map(trans, ra_tid, txq_id); + + /* enable aggregations for the queue */ + iwl_set_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id)); + } else { + /* + * disable aggregations for the queue, this will also make the + * ra_tid mapping configuration irrelevant since it is now a + * non-AGG queue. + */ + iwl_clear_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id)); + } /* Place first TFD at index corresponding to start sequence number. 
* Assumes that ssn_idx is valid (!= 0xFFF) */ trans_pcie->txq[txq_id].q.read_ptr = (ssn & 0xff); trans_pcie->txq[txq_id].q.write_ptr = (ssn & 0xff); - iwl_trans_set_wr_ptrs(trans, txq_id, ssn); + + iwl_write_direct32(trans, HBUS_TARG_WRPTR, + (ssn & 0xff) | (txq_id << 8)); + iwl_write_prph(trans, SCD_QUEUE_RDPTR(txq_id), ssn); /* Set up Tx window size and frame limit for this queue */ iwl_write_targ_mem(trans, trans_pcie->scd_base_addr + + SCD_CONTEXT_QUEUE_OFFSET(txq_id), 0); + iwl_write_targ_mem(trans, trans_pcie->scd_base_addr + SCD_CONTEXT_QUEUE_OFFSET(txq_id) + sizeof(u32), ((frame_limit << SCD_QUEUE_CTX_REG2_WIN_SIZE_POS) & SCD_QUEUE_CTX_REG2_WIN_SIZE_MSK) | ((frame_limit << SCD_QUEUE_CTX_REG2_FRAME_LIMIT_POS) & SCD_QUEUE_CTX_REG2_FRAME_LIMIT_MSK)); - iwl_set_bits_prph(trans, SCD_INTERRUPT_MASK, (1 << txq_id)); - /* Set up Status area in SRAM, map to Tx DMA/FIFO, activate the queue */ - iwl_trans_tx_queue_set_status(trans, &trans_pcie->txq[txq_id], - fifo, true); - - spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); + iwl_write_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id), + (1 << SCD_QUEUE_STTS_REG_POS_ACTIVE) | + (fifo << SCD_QUEUE_STTS_REG_POS_TXF) | + (1 << SCD_QUEUE_STTS_REG_POS_WSL) | + SCD_QUEUE_STTS_REG_MSK); + IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d on FIFO %d WrPtr: %d\n", + txq_id, fifo, ssn & 0xff); } -void iwl_trans_pcie_tx_agg_disable(struct iwl_trans *trans, int txq_id) +void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + u16 rd_ptr, wr_ptr; + int n_bd = trans_pcie->txq[txq_id].q.n_bd; if (!test_and_clear_bit(txq_id, trans_pcie->queue_used)) { WARN_ONCE(1, "queue %d not used", txq_id); return; } - iwlagn_tx_queue_stop_scheduler(trans, txq_id); - - iwl_clear_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id)); + rd_ptr = iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq_id)) & (n_bd - 1); + wr_ptr = iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq_id)); - trans_pcie->txq[txq_id].q.read_ptr = 0; - trans_pcie->txq[txq_id].q.write_ptr = 0; - iwl_trans_set_wr_ptrs(trans, txq_id, 0); + WARN_ONCE(rd_ptr != wr_ptr, "queue %d isn't empty: [%d,%d]", + txq_id, rd_ptr, wr_ptr); - iwl_clear_bits_prph(trans, SCD_INTERRUPT_MASK, BIT(txq_id)); - - iwl_trans_tx_queue_set_status(trans, &trans_pcie->txq[txq_id], - 0, false); + iwl_txq_set_inactive(trans, txq_id); + IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", txq_id); } /*************** HOST COMMAND QUEUE FUNCTIONS *****/ @@ -615,13 +596,13 @@ static int iwl_enqueue_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) } IWL_DEBUG_HC(trans, - "Sending command %s (#%x), seq: 0x%04X, %d bytes at %d[%d]:%d\n", - trans_pcie_get_cmd_string(trans_pcie, out_cmd->hdr.cmd), - out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), cmd_size, - q->write_ptr, idx, trans_pcie->cmd_queue); + "Sending command %s (#%x), seq: 0x%04X, %d bytes at %d[%d]:%d\n", + trans_pcie_get_cmd_string(trans_pcie, out_cmd->hdr.cmd), + out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), + cmd_size, q->write_ptr, idx, trans_pcie->cmd_queue); phys_addr = dma_map_single(trans->dev, &out_cmd->hdr, copy_size, - DMA_BIDIRECTIONAL); + DMA_BIDIRECTIONAL); if (unlikely(dma_mapping_error(trans->dev, phys_addr))) { idx = -ENOMEM; goto out; @@ -630,8 +611,7 @@ static int iwl_enqueue_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) dma_unmap_addr_set(out_meta, mapping, phys_addr); dma_unmap_len_set(out_meta, len, copy_size); - iwlagn_txq_attach_buf_to_tfd(trans, txq, - phys_addr, copy_size, 1); + 
iwlagn_txq_attach_buf_to_tfd(trans, txq, phys_addr, copy_size, 1); #ifdef CONFIG_IWLWIFI_DEVICE_TRACING trace_bufs[0] = &out_cmd->hdr; trace_lens[0] = copy_size; @@ -643,13 +623,12 @@ static int iwl_enqueue_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd) continue; if (!(cmd->dataflags[i] & IWL_HCMD_DFL_NOCOPY)) continue; - phys_addr = dma_map_single(trans->dev, - (void *)cmd->data[i], + phys_addr = dma_map_single(trans->dev, (void *)cmd->data[i], cmd->len[i], DMA_BIDIRECTIONAL); if (dma_mapping_error(trans->dev, phys_addr)) { - iwlagn_unmap_tfd(trans, out_meta, - &txq->tfds[q->write_ptr], - DMA_BIDIRECTIONAL); + iwl_unmap_tfd(trans, out_meta, + &txq->tfds[q->write_ptr], + DMA_BIDIRECTIONAL); idx = -ENOMEM; goto out; } @@ -723,9 +702,10 @@ static void iwl_hcmd_queue_reclaim(struct iwl_trans *trans, int txq_id, lockdep_assert_held(&txq->lock); if ((idx >= q->n_bd) || (iwl_queue_used(q, idx) == 0)) { - IWL_ERR(trans, "%s: Read index for DMA queue txq id (%d), " - "index %d is out of range [0-%d] %d %d.\n", __func__, - txq_id, idx, q->n_bd, q->write_ptr, q->read_ptr); + IWL_ERR(trans, + "%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n", + __func__, txq_id, idx, q->n_bd, + q->write_ptr, q->read_ptr); return; } @@ -733,8 +713,8 @@ static void iwl_hcmd_queue_reclaim(struct iwl_trans *trans, int txq_id, q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd)) { if (nfreed++ > 0) { - IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n", idx, - q->write_ptr, q->read_ptr); + IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n", + idx, q->write_ptr, q->read_ptr); iwl_op_mode_nic_error(trans->op_mode); } @@ -771,9 +751,9 @@ void iwl_tx_cmd_complete(struct iwl_trans *trans, struct iwl_rx_cmd_buffer *rxb, * in the queue management code. */ if (WARN(txq_id != trans_pcie->cmd_queue, "wrong command queue %d (should be %d), sequence 0x%X readp=%d writep=%d\n", - txq_id, trans_pcie->cmd_queue, sequence, - trans_pcie->txq[trans_pcie->cmd_queue].q.read_ptr, - trans_pcie->txq[trans_pcie->cmd_queue].q.write_ptr)) { + txq_id, trans_pcie->cmd_queue, sequence, + trans_pcie->txq[trans_pcie->cmd_queue].q.read_ptr, + trans_pcie->txq[trans_pcie->cmd_queue].q.write_ptr)) { iwl_print_hex_error(trans, pkt, 32); return; } @@ -784,8 +764,7 @@ void iwl_tx_cmd_complete(struct iwl_trans *trans, struct iwl_rx_cmd_buffer *rxb, cmd = txq->entries[cmd_index].cmd; meta = &txq->entries[cmd_index].meta; - iwlagn_unmap_tfd(trans, meta, &txq->tfds[index], - DMA_BIDIRECTIONAL); + iwl_unmap_tfd(trans, meta, &txq->tfds[index], DMA_BIDIRECTIONAL); /* Input error checking is done when commands are added to queue. 
*/ if (meta->flags & CMD_WANT_SKB) { @@ -870,8 +849,9 @@ static int iwl_send_cmd_sync(struct iwl_trans *trans, struct iwl_host_cmd *cmd) } ret = wait_event_timeout(trans->wait_command_queue, - !test_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status), - HOST_COMPLETE_TIMEOUT); + !test_bit(STATUS_HCMD_ACTIVE, + &trans_pcie->status), + HOST_COMPLETE_TIMEOUT); if (!ret) { if (test_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status)) { struct iwl_tx_queue *txq = @@ -956,10 +936,10 @@ int iwl_tx_queue_reclaim(struct iwl_trans *trans, int txq_id, int index, if ((index >= q->n_bd) || (iwl_queue_used(q, last_to_free) == 0)) { - IWL_ERR(trans, "%s: Read index for DMA queue txq id (%d), " - "last_to_free %d is out of range [0-%d] %d %d.\n", - __func__, txq_id, last_to_free, q->n_bd, - q->write_ptr, q->read_ptr); + IWL_ERR(trans, + "%s: Read index for DMA queue txq id (%d), last_to_free %d is out of range [0-%d] %d %d.\n", + __func__, txq_id, last_to_free, q->n_bd, + q->write_ptr, q->read_ptr); return 0; } @@ -979,7 +959,7 @@ int iwl_tx_queue_reclaim(struct iwl_trans *trans, int txq_id, int index, iwlagn_txq_inval_byte_cnt_tbl(trans, txq); - iwlagn_txq_free_tfd(trans, txq, DMA_TO_DEVICE); + iwl_txq_free_tfd(trans, txq, DMA_TO_DEVICE); freed++; } diff --git a/drivers/net/wireless/iwmc3200wifi/Kconfig b/drivers/net/wireless/iwmc3200wifi/Kconfig deleted file mode 100644 index 7107ce53d4d4..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/Kconfig +++ /dev/null @@ -1,39 +0,0 @@ -config IWM - tristate "Intel Wireless Multicomm 3200 WiFi driver (EXPERIMENTAL)" - depends on MMC && EXPERIMENTAL - depends on CFG80211 - select FW_LOADER - select IWMC3200TOP - help - The Intel Wireless Multicomm 3200 hardware is a combo - card with GPS, Bluetooth, WiMax and 802.11 radios. It - runs over SDIO and is typically found on Moorestown - based platform. This driver takes care of the 802.11 - part, which is a fullmac one. - - If you choose to build it as a module, it'll be called - iwmc3200wifi.ko. - -config IWM_DEBUG - bool "Enable full debugging output in iwmc3200wifi" - depends on IWM && DEBUG_FS - help - This option will enable debug tracing and setting for iwm - - You can set the debug level and module through debugfs. By - default all modules are set to the IWL_DL_ERR level. - To see the list of debug modules and levels, see iwm/debug.h - - For example, if you want the full MLME debug output: - echo 0xff > /sys/kernel/debug/iwm/phyN/debug/mlme - - Or, if you want the full debug, for all modules: - echo 0xff > /sys/kernel/debug/iwm/phyN/debug/level - echo 0xff > /sys/kernel/debug/iwm/phyN/debug/modules - -config IWM_TRACING - bool "Enable event tracing for iwmc3200wifi" - depends on IWM && EVENT_TRACING - help - Say Y here to trace all the commands and responses between - the driver and firmware (including TX/RX frames) with ftrace. 
diff --git a/drivers/net/wireless/iwmc3200wifi/Makefile b/drivers/net/wireless/iwmc3200wifi/Makefile deleted file mode 100644 index cdc7e07ba113..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/Makefile +++ /dev/null @@ -1,10 +0,0 @@ -obj-$(CONFIG_IWM) := iwmc3200wifi.o -iwmc3200wifi-objs += main.o netdev.o rx.o tx.o sdio.o hal.o fw.o -iwmc3200wifi-objs += commands.o cfg80211.o eeprom.o - -iwmc3200wifi-$(CONFIG_IWM_DEBUG) += debugfs.o -iwmc3200wifi-$(CONFIG_IWM_TRACING) += trace.o - -CFLAGS_trace.o := -I$(src) - -ccflags-y += -D__CHECK_ENDIAN__ diff --git a/drivers/net/wireless/iwmc3200wifi/bus.h b/drivers/net/wireless/iwmc3200wifi/bus.h deleted file mode 100644 index 62edd5888a7b..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/bus.h +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - */ - -#ifndef __IWM_BUS_H__ -#define __IWM_BUS_H__ - -#include "iwm.h" - -struct iwm_if_ops { - int (*enable)(struct iwm_priv *iwm); - int (*disable)(struct iwm_priv *iwm); - int (*send_chunk)(struct iwm_priv *iwm, u8* buf, int count); - - void (*debugfs_init)(struct iwm_priv *iwm, struct dentry *parent_dir); - void (*debugfs_exit)(struct iwm_priv *iwm); - - const char *umac_name; - const char *calib_lmac_name; - const char *lmac_name; -}; - -static inline int iwm_bus_send_chunk(struct iwm_priv *iwm, u8 *buf, int count) -{ - return iwm->bus_ops->send_chunk(iwm, buf, count); -} - -static inline int iwm_bus_enable(struct iwm_priv *iwm) -{ - return iwm->bus_ops->enable(iwm); -} - -static inline int iwm_bus_disable(struct iwm_priv *iwm) -{ - return iwm->bus_ops->disable(iwm); -} - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/cfg80211.c b/drivers/net/wireless/iwmc3200wifi/cfg80211.c deleted file mode 100644 index 48e8218fd23b..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/cfg80211.c +++ /dev/null @@ -1,882 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - */ - -#include <linux/kernel.h> -#include <linux/netdevice.h> -#include <linux/sched.h> -#include <linux/etherdevice.h> -#include <linux/wireless.h> -#include <linux/ieee80211.h> -#include <linux/slab.h> -#include <net/cfg80211.h> - -#include "iwm.h" -#include "commands.h" -#include "cfg80211.h" -#include "debug.h" - -#define RATETAB_ENT(_rate, _rateid, _flags) \ - { \ - .bitrate = (_rate), \ - .hw_value = (_rateid), \ - .flags = (_flags), \ - } - -#define CHAN2G(_channel, _freq, _flags) { \ - .band = IEEE80211_BAND_2GHZ, \ - .center_freq = (_freq), \ - .hw_value = (_channel), \ - .flags = (_flags), \ - .max_antenna_gain = 0, \ - .max_power = 30, \ -} - -#define CHAN5G(_channel, _flags) { \ - .band = IEEE80211_BAND_5GHZ, \ - .center_freq = 5000 + (5 * (_channel)), \ - .hw_value = (_channel), \ - .flags = (_flags), \ - .max_antenna_gain = 0, \ - .max_power = 30, \ -} - -static struct ieee80211_rate iwm_rates[] = { - RATETAB_ENT(10, 0x1, 0), - RATETAB_ENT(20, 0x2, 0), - RATETAB_ENT(55, 0x4, 0), - RATETAB_ENT(110, 0x8, 0), - RATETAB_ENT(60, 0x10, 0), - RATETAB_ENT(90, 0x20, 0), - RATETAB_ENT(120, 0x40, 0), - RATETAB_ENT(180, 0x80, 0), - RATETAB_ENT(240, 0x100, 0), - RATETAB_ENT(360, 0x200, 0), - RATETAB_ENT(480, 0x400, 0), - RATETAB_ENT(540, 0x800, 0), -}; - -#define iwm_a_rates (iwm_rates + 4) -#define iwm_a_rates_size 8 -#define iwm_g_rates (iwm_rates + 0) -#define iwm_g_rates_size 12 - -static struct ieee80211_channel iwm_2ghz_channels[] = { - CHAN2G(1, 2412, 0), - CHAN2G(2, 2417, 0), - CHAN2G(3, 2422, 0), - CHAN2G(4, 2427, 0), - CHAN2G(5, 2432, 0), - CHAN2G(6, 2437, 0), - CHAN2G(7, 2442, 0), - CHAN2G(8, 2447, 0), - CHAN2G(9, 2452, 0), - CHAN2G(10, 2457, 0), - CHAN2G(11, 2462, 0), - CHAN2G(12, 2467, 0), - CHAN2G(13, 2472, 0), - CHAN2G(14, 2484, 0), -}; - -static struct ieee80211_channel iwm_5ghz_a_channels[] = { - CHAN5G(34, 0), CHAN5G(36, 0), - CHAN5G(38, 0), CHAN5G(40, 0), - CHAN5G(42, 0), CHAN5G(44, 0), - CHAN5G(46, 0), CHAN5G(48, 0), - CHAN5G(52, 0), CHAN5G(56, 0), - CHAN5G(60, 0), CHAN5G(64, 0), - CHAN5G(100, 0), CHAN5G(104, 0), - CHAN5G(108, 0), CHAN5G(112, 0), - CHAN5G(116, 0), CHAN5G(120, 0), - CHAN5G(124, 0), CHAN5G(128, 0), - CHAN5G(132, 0), CHAN5G(136, 0), - CHAN5G(140, 0), CHAN5G(149, 0), - CHAN5G(153, 0), CHAN5G(157, 0), - CHAN5G(161, 0), CHAN5G(165, 0), - CHAN5G(184, 0), CHAN5G(188, 0), - CHAN5G(192, 0), CHAN5G(196, 0), - CHAN5G(200, 0), CHAN5G(204, 0), - CHAN5G(208, 0), CHAN5G(212, 0), - CHAN5G(216, 0), -}; - -static struct ieee80211_supported_band iwm_band_2ghz = { - .channels = iwm_2ghz_channels, - .n_channels = ARRAY_SIZE(iwm_2ghz_channels), - .bitrates = iwm_g_rates, - .n_bitrates = iwm_g_rates_size, -}; - -static struct ieee80211_supported_band iwm_band_5ghz = { - .channels = iwm_5ghz_a_channels, - .n_channels = ARRAY_SIZE(iwm_5ghz_a_channels), - .bitrates = iwm_a_rates, - .n_bitrates = iwm_a_rates_size, -}; - -static int iwm_key_init(struct iwm_key *key, u8 key_index, - const u8 *mac_addr, struct key_params *params) -{ - key->hdr.key_idx = key_index; - if (!mac_addr || is_broadcast_ether_addr(mac_addr)) { - key->hdr.multicast = 1; - memset(key->hdr.mac, 0xff, ETH_ALEN); - } else { - key->hdr.multicast = 0; - memcpy(key->hdr.mac, mac_addr, ETH_ALEN); - } - - if (params) { - if (params->key_len > WLAN_MAX_KEY_LEN || - params->seq_len > 
IW_ENCODE_SEQ_MAX_SIZE) - return -EINVAL; - - key->cipher = params->cipher; - key->key_len = params->key_len; - key->seq_len = params->seq_len; - memcpy(key->key, params->key, key->key_len); - memcpy(key->seq, params->seq, key->seq_len); - } - - return 0; -} - -static int iwm_cfg80211_add_key(struct wiphy *wiphy, struct net_device *ndev, - u8 key_index, bool pairwise, const u8 *mac_addr, - struct key_params *params) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - struct iwm_key *key; - int ret; - - IWM_DBG_WEXT(iwm, DBG, "Adding key for %pM\n", mac_addr); - - if (key_index >= IWM_NUM_KEYS) - return -ENOENT; - - key = &iwm->keys[key_index]; - memset(key, 0, sizeof(struct iwm_key)); - ret = iwm_key_init(key, key_index, mac_addr, params); - if (ret < 0) { - IWM_ERR(iwm, "Invalid key_params\n"); - return ret; - } - - return iwm_set_key(iwm, 0, key); -} - -static int iwm_cfg80211_get_key(struct wiphy *wiphy, struct net_device *ndev, - u8 key_index, bool pairwise, const u8 *mac_addr, - void *cookie, - void (*callback)(void *cookie, - struct key_params*)) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - struct iwm_key *key; - struct key_params params; - - IWM_DBG_WEXT(iwm, DBG, "Getting key %d\n", key_index); - - if (key_index >= IWM_NUM_KEYS) - return -ENOENT; - - memset(&params, 0, sizeof(params)); - - key = &iwm->keys[key_index]; - params.cipher = key->cipher; - params.key_len = key->key_len; - params.seq_len = key->seq_len; - params.seq = key->seq; - params.key = key->key; - - callback(cookie, &params); - - return key->key_len ? 0 : -ENOENT; -} - - -static int iwm_cfg80211_del_key(struct wiphy *wiphy, struct net_device *ndev, - u8 key_index, bool pairwise, const u8 *mac_addr) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - struct iwm_key *key; - - if (key_index >= IWM_NUM_KEYS) - return -ENOENT; - - key = &iwm->keys[key_index]; - if (!iwm->keys[key_index].key_len) { - IWM_DBG_WEXT(iwm, DBG, "Key %d not used\n", key_index); - return 0; - } - - if (key_index == iwm->default_key) - iwm->default_key = -1; - - return iwm_set_key(iwm, 1, key); -} - -static int iwm_cfg80211_set_default_key(struct wiphy *wiphy, - struct net_device *ndev, - u8 key_index, bool unicast, - bool multicast) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - - IWM_DBG_WEXT(iwm, DBG, "Default key index is: %d\n", key_index); - - if (key_index >= IWM_NUM_KEYS) - return -ENOENT; - - if (!iwm->keys[key_index].key_len) { - IWM_ERR(iwm, "Key %d not used\n", key_index); - return -EINVAL; - } - - iwm->default_key = key_index; - - return iwm_set_tx_key(iwm, key_index); -} - -static int iwm_cfg80211_get_station(struct wiphy *wiphy, - struct net_device *ndev, - u8 *mac, struct station_info *sinfo) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - - if (memcmp(mac, iwm->bssid, ETH_ALEN)) - return -ENOENT; - - sinfo->filled |= STATION_INFO_TX_BITRATE; - sinfo->txrate.legacy = iwm->rate * 10; - - if (test_bit(IWM_STATUS_ASSOCIATED, &iwm->status)) { - sinfo->filled |= STATION_INFO_SIGNAL; - sinfo->signal = iwm->wstats.qual.level; - } - - return 0; -} - - -int iwm_cfg80211_inform_bss(struct iwm_priv *iwm) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct iwm_bss_info *bss; - struct iwm_umac_notif_bss_info *umac_bss; - struct ieee80211_mgmt *mgmt; - struct ieee80211_channel *channel; - struct ieee80211_supported_band *band; - s32 signal; - int freq; - - list_for_each_entry(bss, &iwm->bss_list, node) { - umac_bss = bss->bss; - mgmt = (struct ieee80211_mgmt *)(umac_bss->frame_buf); - - if (umac_bss->band == UMAC_BAND_2GHZ) - band =
wiphy->bands[IEEE80211_BAND_2GHZ]; - else if (umac_bss->band == UMAC_BAND_5GHZ) - band = wiphy->bands[IEEE80211_BAND_5GHZ]; - else { - IWM_ERR(iwm, "Invalid band: %d\n", umac_bss->band); - return -EINVAL; - } - - freq = ieee80211_channel_to_frequency(umac_bss->channel, - band->band); - channel = ieee80211_get_channel(wiphy, freq); - signal = umac_bss->rssi * 100; - - if (!cfg80211_inform_bss_frame(wiphy, channel, mgmt, - le16_to_cpu(umac_bss->frame_len), - signal, GFP_KERNEL)) - return -EINVAL; - } - - return 0; -} - -static int iwm_cfg80211_change_iface(struct wiphy *wiphy, - struct net_device *ndev, - enum nl80211_iftype type, u32 *flags, - struct vif_params *params) -{ - struct wireless_dev *wdev; - struct iwm_priv *iwm; - u32 old_mode; - - wdev = ndev->ieee80211_ptr; - iwm = ndev_to_iwm(ndev); - old_mode = iwm->conf.mode; - - switch (type) { - case NL80211_IFTYPE_STATION: - iwm->conf.mode = UMAC_MODE_BSS; - break; - case NL80211_IFTYPE_ADHOC: - iwm->conf.mode = UMAC_MODE_IBSS; - break; - default: - return -EOPNOTSUPP; - } - - wdev->iftype = type; - - if ((old_mode == iwm->conf.mode) || !iwm->umac_profile) - return 0; - - iwm->umac_profile->mode = cpu_to_le32(iwm->conf.mode); - - if (iwm->umac_profile_active) - iwm_invalidate_mlme_profile(iwm); - - return 0; -} - -static int iwm_cfg80211_scan(struct wiphy *wiphy, struct net_device *ndev, - struct cfg80211_scan_request *request) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - int ret; - - if (!test_bit(IWM_STATUS_READY, &iwm->status)) { - IWM_ERR(iwm, "Scan while device is not ready\n"); - return -EIO; - } - - if (test_bit(IWM_STATUS_SCANNING, &iwm->status)) { - IWM_ERR(iwm, "Scanning already\n"); - return -EAGAIN; - } - - if (test_bit(IWM_STATUS_SCAN_ABORTING, &iwm->status)) { - IWM_ERR(iwm, "Scanning being aborted\n"); - return -EAGAIN; - } - - set_bit(IWM_STATUS_SCANNING, &iwm->status); - - ret = iwm_scan_ssids(iwm, request->ssids, request->n_ssids); - if (ret) { - clear_bit(IWM_STATUS_SCANNING, &iwm->status); - return ret; - } - - iwm->scan_request = request; - return 0; -} - -static int iwm_cfg80211_set_wiphy_params(struct wiphy *wiphy, u32 changed) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - if (changed & WIPHY_PARAM_RTS_THRESHOLD && - (iwm->conf.rts_threshold != wiphy->rts_threshold)) { - int ret; - - iwm->conf.rts_threshold = wiphy->rts_threshold; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_RTS_THRESHOLD, - iwm->conf.rts_threshold); - if (ret < 0) - return ret; - } - - if (changed & WIPHY_PARAM_FRAG_THRESHOLD && - (iwm->conf.frag_threshold != wiphy->frag_threshold)) { - int ret; - - iwm->conf.frag_threshold = wiphy->frag_threshold; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_FA_CFG_FIX, - CFG_FRAG_THRESHOLD, - iwm->conf.frag_threshold); - if (ret < 0) - return ret; - } - - return 0; -} - -static int iwm_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev, - struct cfg80211_ibss_params *params) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - struct ieee80211_channel *chan = params->channel; - - if (!test_bit(IWM_STATUS_READY, &iwm->status)) - return -EIO; - - /* UMAC doesn't support creating or joining an IBSS network - * with specified bssid. 
*/ - if (params->bssid) - return -EOPNOTSUPP; - - iwm->channel = ieee80211_frequency_to_channel(chan->center_freq); - iwm->umac_profile->ibss.band = chan->band; - iwm->umac_profile->ibss.channel = iwm->channel; - iwm->umac_profile->ssid.ssid_len = params->ssid_len; - memcpy(iwm->umac_profile->ssid.ssid, params->ssid, params->ssid_len); - - return iwm_send_mlme_profile(iwm); -} - -static int iwm_cfg80211_leave_ibss(struct wiphy *wiphy, struct net_device *dev) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - if (iwm->umac_profile_active) - return iwm_invalidate_mlme_profile(iwm); - - return 0; -} - -static int iwm_set_auth_type(struct iwm_priv *iwm, - enum nl80211_auth_type sme_auth_type) -{ - u8 *auth_type = &iwm->umac_profile->sec.auth_type; - - switch (sme_auth_type) { - case NL80211_AUTHTYPE_AUTOMATIC: - case NL80211_AUTHTYPE_OPEN_SYSTEM: - IWM_DBG_WEXT(iwm, DBG, "OPEN auth\n"); - *auth_type = UMAC_AUTH_TYPE_OPEN; - break; - case NL80211_AUTHTYPE_SHARED_KEY: - if (iwm->umac_profile->sec.flags & - (UMAC_SEC_FLG_WPA_ON_MSK | UMAC_SEC_FLG_RSNA_ON_MSK)) { - IWM_DBG_WEXT(iwm, DBG, "WPA auth alg\n"); - *auth_type = UMAC_AUTH_TYPE_RSNA_PSK; - } else { - IWM_DBG_WEXT(iwm, DBG, "WEP shared key auth alg\n"); - *auth_type = UMAC_AUTH_TYPE_LEGACY_PSK; - } - - break; - default: - IWM_ERR(iwm, "Unsupported auth alg: 0x%x\n", sme_auth_type); - return -ENOTSUPP; - } - - return 0; -} - -static int iwm_set_wpa_version(struct iwm_priv *iwm, u32 wpa_version) -{ - IWM_DBG_WEXT(iwm, DBG, "wpa_version: %d\n", wpa_version); - - if (!wpa_version) { - iwm->umac_profile->sec.flags = UMAC_SEC_FLG_LEGACY_PROFILE; - return 0; - } - - if (wpa_version & NL80211_WPA_VERSION_1) - iwm->umac_profile->sec.flags = UMAC_SEC_FLG_WPA_ON_MSK; - - if (wpa_version & NL80211_WPA_VERSION_2) - iwm->umac_profile->sec.flags = UMAC_SEC_FLG_RSNA_ON_MSK; - - return 0; -} - -static int iwm_set_cipher(struct iwm_priv *iwm, u32 cipher, bool ucast) -{ - u8 *profile_cipher = ucast ? &iwm->umac_profile->sec.ucast_cipher : - &iwm->umac_profile->sec.mcast_cipher; - - if (!cipher) { - *profile_cipher = UMAC_CIPHER_TYPE_NONE; - return 0; - } - - IWM_DBG_WEXT(iwm, DBG, "%ccast cipher is 0x%x\n", ucast ? 
'u' : 'm', - cipher); - - switch (cipher) { - case IW_AUTH_CIPHER_NONE: - *profile_cipher = UMAC_CIPHER_TYPE_NONE; - break; - case WLAN_CIPHER_SUITE_WEP40: - *profile_cipher = UMAC_CIPHER_TYPE_WEP_40; - break; - case WLAN_CIPHER_SUITE_WEP104: - *profile_cipher = UMAC_CIPHER_TYPE_WEP_104; - break; - case WLAN_CIPHER_SUITE_TKIP: - *profile_cipher = UMAC_CIPHER_TYPE_TKIP; - break; - case WLAN_CIPHER_SUITE_CCMP: - *profile_cipher = UMAC_CIPHER_TYPE_CCMP; - break; - default: - IWM_ERR(iwm, "Unsupported cipher: 0x%x\n", cipher); - return -ENOTSUPP; - } - - return 0; -} - -static int iwm_set_key_mgt(struct iwm_priv *iwm, u32 key_mgt) -{ - u8 *auth_type = &iwm->umac_profile->sec.auth_type; - - IWM_DBG_WEXT(iwm, DBG, "key_mgt: 0x%x\n", key_mgt); - - if (key_mgt == WLAN_AKM_SUITE_8021X) - *auth_type = UMAC_AUTH_TYPE_8021X; - else if (key_mgt == WLAN_AKM_SUITE_PSK) { - if (iwm->umac_profile->sec.flags & - (UMAC_SEC_FLG_WPA_ON_MSK | UMAC_SEC_FLG_RSNA_ON_MSK)) - *auth_type = UMAC_AUTH_TYPE_RSNA_PSK; - else - *auth_type = UMAC_AUTH_TYPE_LEGACY_PSK; - } else { - IWM_ERR(iwm, "Invalid key mgt: 0x%x\n", key_mgt); - return -EINVAL; - } - - return 0; -} - - -static int iwm_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev, - struct cfg80211_connect_params *sme) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - struct ieee80211_channel *chan = sme->channel; - struct key_params key_param; - int ret; - - if (!test_bit(IWM_STATUS_READY, &iwm->status)) - return -EIO; - - if (!sme->ssid) - return -EINVAL; - - if (iwm->umac_profile_active) { - ret = iwm_invalidate_mlme_profile(iwm); - if (ret) { - IWM_ERR(iwm, "Couldn't invalidate profile\n"); - return ret; - } - } - - if (chan) - iwm->channel = - ieee80211_frequency_to_channel(chan->center_freq); - - iwm->umac_profile->ssid.ssid_len = sme->ssid_len; - memcpy(iwm->umac_profile->ssid.ssid, sme->ssid, sme->ssid_len); - - if (sme->bssid) { - IWM_DBG_WEXT(iwm, DBG, "BSSID: %pM\n", sme->bssid); - memcpy(&iwm->umac_profile->bssid[0], sme->bssid, ETH_ALEN); - iwm->umac_profile->bss_num = 1; - } else { - memset(&iwm->umac_profile->bssid[0], 0, ETH_ALEN); - iwm->umac_profile->bss_num = 0; - } - - ret = iwm_set_wpa_version(iwm, sme->crypto.wpa_versions); - if (ret < 0) - return ret; - - ret = iwm_set_auth_type(iwm, sme->auth_type); - if (ret < 0) - return ret; - - if (sme->crypto.n_ciphers_pairwise) { - ret = iwm_set_cipher(iwm, sme->crypto.ciphers_pairwise[0], - true); - if (ret < 0) - return ret; - } - - ret = iwm_set_cipher(iwm, sme->crypto.cipher_group, false); - if (ret < 0) - return ret; - - if (sme->crypto.n_akm_suites) { - ret = iwm_set_key_mgt(iwm, sme->crypto.akm_suites[0]); - if (ret < 0) - return ret; - } - - /* - * We save the WEP key in case we want to do shared authentication. - * We have to do it so because UMAC will assert whenever it gets a - * key before a profile. - */ - if (sme->key) { - key_param.key = kmemdup(sme->key, sme->key_len, GFP_KERNEL); - if (key_param.key == NULL) - return -ENOMEM; - key_param.key_len = sme->key_len; - key_param.seq_len = 0; - key_param.cipher = sme->crypto.ciphers_pairwise[0]; - - ret = iwm_key_init(&iwm->keys[sme->key_idx], sme->key_idx, - NULL, &key_param); - kfree(key_param.key); - if (ret < 0) { - IWM_ERR(iwm, "Invalid key_params\n"); - return ret; - } - - iwm->default_key = sme->key_idx; - } - - /* WPA and open AUTH type from wpa_s means WPS (a.k.a. 
WSC) */ - if ((iwm->umac_profile->sec.flags & - (UMAC_SEC_FLG_WPA_ON_MSK | UMAC_SEC_FLG_RSNA_ON_MSK)) && - iwm->umac_profile->sec.auth_type == UMAC_AUTH_TYPE_OPEN) { - iwm->umac_profile->sec.flags = UMAC_SEC_FLG_WSC_ON_MSK; - } - - ret = iwm_send_mlme_profile(iwm); - - if (iwm->umac_profile->sec.auth_type != UMAC_AUTH_TYPE_LEGACY_PSK || - sme->key == NULL) - return ret; - - /* - * We want to do shared auth. - * We need to actually set the key we previously cached, - * and then tell the UMAC it's the default one. - * That will trigger the auth+assoc UMAC machinery, and again, - * this must be done after setting the profile. - */ - ret = iwm_set_key(iwm, 0, &iwm->keys[sme->key_idx]); - if (ret < 0) - return ret; - - return iwm_set_tx_key(iwm, iwm->default_key); -} - -static int iwm_cfg80211_disconnect(struct wiphy *wiphy, struct net_device *dev, - u16 reason_code) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - IWM_DBG_WEXT(iwm, DBG, "Active: %d\n", iwm->umac_profile_active); - - if (iwm->umac_profile_active) - iwm_invalidate_mlme_profile(iwm); - - return 0; -} - -static int iwm_cfg80211_set_txpower(struct wiphy *wiphy, - enum nl80211_tx_power_setting type, int mbm) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - int ret; - - switch (type) { - case NL80211_TX_POWER_AUTOMATIC: - return 0; - case NL80211_TX_POWER_FIXED: - if (mbm < 0 || (mbm % 100)) - return -EOPNOTSUPP; - - if (!test_bit(IWM_STATUS_READY, &iwm->status)) - return 0; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_TX_PWR_LIMIT_USR, - MBM_TO_DBM(mbm) * 2); - if (ret < 0) - return ret; - - return iwm_tx_power_trigger(iwm); - default: - IWM_ERR(iwm, "Unsupported power type: %d\n", type); - return -EOPNOTSUPP; - } - - return 0; -} - -static int iwm_cfg80211_get_txpower(struct wiphy *wiphy, int *dbm) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - *dbm = iwm->txpower >> 1; - - return 0; -} - -static int iwm_cfg80211_set_power_mgmt(struct wiphy *wiphy, - struct net_device *dev, - bool enabled, int timeout) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - u32 power_index; - - if (enabled) - power_index = IWM_POWER_INDEX_DEFAULT; - else - power_index = IWM_POWER_INDEX_MIN; - - if (power_index == iwm->conf.power_index) - return 0; - - iwm->conf.power_index = power_index; - - return iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_POWER_INDEX, iwm->conf.power_index); -} - -static int iwm_cfg80211_set_pmksa(struct wiphy *wiphy, - struct net_device *netdev, - struct cfg80211_pmksa *pmksa) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - return iwm_send_pmkid_update(iwm, pmksa, IWM_CMD_PMKID_ADD); -} - -static int iwm_cfg80211_del_pmksa(struct wiphy *wiphy, - struct net_device *netdev, - struct cfg80211_pmksa *pmksa) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - - return iwm_send_pmkid_update(iwm, pmksa, IWM_CMD_PMKID_DEL); -} - -static int iwm_cfg80211_flush_pmksa(struct wiphy *wiphy, - struct net_device *netdev) -{ - struct iwm_priv *iwm = wiphy_to_iwm(wiphy); - struct cfg80211_pmksa pmksa; - - memset(&pmksa, 0, sizeof(struct cfg80211_pmksa)); - - return iwm_send_pmkid_update(iwm, &pmksa, IWM_CMD_PMKID_FLUSH); -} - - -static struct cfg80211_ops iwm_cfg80211_ops = { - .change_virtual_intf = iwm_cfg80211_change_iface, - .add_key = iwm_cfg80211_add_key, - .get_key = iwm_cfg80211_get_key, - .del_key = iwm_cfg80211_del_key, - .set_default_key = iwm_cfg80211_set_default_key, - .get_station = iwm_cfg80211_get_station, - .scan = iwm_cfg80211_scan, - .set_wiphy_params = 
iwm_cfg80211_set_wiphy_params, - .connect = iwm_cfg80211_connect, - .disconnect = iwm_cfg80211_disconnect, - .join_ibss = iwm_cfg80211_join_ibss, - .leave_ibss = iwm_cfg80211_leave_ibss, - .set_tx_power = iwm_cfg80211_set_txpower, - .get_tx_power = iwm_cfg80211_get_txpower, - .set_power_mgmt = iwm_cfg80211_set_power_mgmt, - .set_pmksa = iwm_cfg80211_set_pmksa, - .del_pmksa = iwm_cfg80211_del_pmksa, - .flush_pmksa = iwm_cfg80211_flush_pmksa, -}; - -static const u32 cipher_suites[] = { - WLAN_CIPHER_SUITE_WEP40, - WLAN_CIPHER_SUITE_WEP104, - WLAN_CIPHER_SUITE_TKIP, - WLAN_CIPHER_SUITE_CCMP, -}; - -struct wireless_dev *iwm_wdev_alloc(int sizeof_bus, struct device *dev) -{ - int ret = 0; - struct wireless_dev *wdev; - - /* - * We're trying to have the following memory - * layout: - * - * +-------------------------+ - * | struct wiphy | - * +-------------------------+ - * | struct iwm_priv | - * +-------------------------+ - * | bus private data | - * | (e.g. iwm_priv_sdio) | - * +-------------------------+ - * - */ - - wdev = kzalloc(sizeof(struct wireless_dev), GFP_KERNEL); - if (!wdev) { - dev_err(dev, "Couldn't allocate wireless device\n"); - return ERR_PTR(-ENOMEM); - } - - wdev->wiphy = wiphy_new(&iwm_cfg80211_ops, - sizeof(struct iwm_priv) + sizeof_bus); - if (!wdev->wiphy) { - dev_err(dev, "Couldn't allocate wiphy device\n"); - ret = -ENOMEM; - goto out_err_new; - } - - set_wiphy_dev(wdev->wiphy, dev); - wdev->wiphy->max_scan_ssids = UMAC_WIFI_IF_PROBE_OPTION_MAX; - wdev->wiphy->max_num_pmkids = UMAC_MAX_NUM_PMKIDS; - wdev->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) | - BIT(NL80211_IFTYPE_ADHOC); - wdev->wiphy->bands[IEEE80211_BAND_2GHZ] = &iwm_band_2ghz; - wdev->wiphy->bands[IEEE80211_BAND_5GHZ] = &iwm_band_5ghz; - wdev->wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM; - - wdev->wiphy->cipher_suites = cipher_suites; - wdev->wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites); - - ret = wiphy_register(wdev->wiphy); - if (ret < 0) { - dev_err(dev, "Couldn't register wiphy device\n"); - goto out_err_register; - } - - return wdev; - - out_err_register: - wiphy_free(wdev->wiphy); - - out_err_new: - kfree(wdev); - - return ERR_PTR(ret); -} - -void iwm_wdev_free(struct iwm_priv *iwm) -{ - struct wireless_dev *wdev = iwm_to_wdev(iwm); - - if (!wdev) - return; - - wiphy_unregister(wdev->wiphy); - wiphy_free(wdev->wiphy); - kfree(wdev); -} diff --git a/drivers/net/wireless/iwmc3200wifi/cfg80211.h b/drivers/net/wireless/iwmc3200wifi/cfg80211.h deleted file mode 100644 index 56a34145acbf..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/cfg80211.h +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
- * - */ - -#ifndef __IWM_CFG80211_H__ -#define __IWM_CFG80211_H__ - -int iwm_cfg80211_inform_bss(struct iwm_priv *iwm); -struct wireless_dev *iwm_wdev_alloc(int sizeof_bus, struct device *dev); -void iwm_wdev_free(struct iwm_priv *iwm); - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/commands.c b/drivers/net/wireless/iwmc3200wifi/commands.c deleted file mode 100644 index bd75078c454b..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/commands.c +++ /dev/null @@ -1,1002 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#include <linux/kernel.h> -#include <linux/wireless.h> -#include <linux/etherdevice.h> -#include <linux/ieee80211.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/moduleparam.h> - -#include "iwm.h" -#include "bus.h" -#include "hal.h" -#include "umac.h" -#include "commands.h" -#include "debug.h" - -static int iwm_send_lmac_ptrough_cmd(struct iwm_priv *iwm, - u8 lmac_cmd_id, - const void *lmac_payload, - u16 lmac_payload_size, - u8 resp) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_LMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_lmac_cmd lmac_cmd; - - lmac_cmd.id = lmac_cmd_id; - - umac_cmd.id = UMAC_CMD_OPCODE_WIFI_PASS_THROUGH; - umac_cmd.resp = resp; - - return iwm_hal_send_host_cmd(iwm, &udma_cmd, &umac_cmd, &lmac_cmd, - lmac_payload, lmac_payload_size); -} - -int iwm_send_wifi_if_cmd(struct iwm_priv *iwm, void *payload, u16 payload_size, - bool resp) -{ - struct iwm_umac_wifi_if *hdr = (struct iwm_umac_wifi_if *)payload; - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - int ret; - u8 oid = hdr->oid; - - if (!test_bit(IWM_STATUS_READY, &iwm->status)) { - IWM_ERR(iwm, "Interface is not ready yet"); - return -EAGAIN; - } - - umac_cmd.id = UMAC_CMD_OPCODE_WIFI_IF_WRAPPER; - umac_cmd.resp = resp; - - ret = iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, - payload, payload_size); - - if (resp) { - ret = wait_event_interruptible_timeout(iwm->wifi_ntfy_queue, - test_and_clear_bit(oid, &iwm->wifi_ntfy[0]), - 3 * HZ); - - return ret ? 0 : -EBUSY; - } - - return ret; -} - -static int modparam_wiwi = COEX_MODE_CM; -module_param_named(wiwi, modparam_wiwi, int, 0644); -MODULE_PARM_DESC(wiwi, "Wifi-WiMAX coexistence: 1=SA, 2=XOR, 3=CM (default)"); - -static struct coex_event iwm_sta_xor_prio_tbl[COEX_EVENTS_NUM] = -{ - {4, 3, 0, COEX_UNASSOC_IDLE_FLAGS}, - {4, 3, 0, COEX_UNASSOC_MANUAL_SCAN_FLAGS}, - {4, 3, 0, COEX_UNASSOC_AUTO_SCAN_FLAGS}, - {4, 3, 0, COEX_CALIBRATION_FLAGS}, - {4, 3, 0, COEX_PERIODIC_CALIBRATION_FLAGS}, - {4, 3, 0, COEX_CONNECTION_ESTAB_FLAGS}, - {4, 3, 0, COEX_ASSOCIATED_IDLE_FLAGS}, - {4, 3, 0, COEX_ASSOC_MANUAL_SCAN_FLAGS}, - {4, 3, 0, COEX_ASSOC_AUTO_SCAN_FLAGS}, - {4, 3, 0, COEX_ASSOC_ACTIVE_LEVEL_FLAGS}, - {6, 3, 0, COEX_XOR_RF_ON_FLAGS}, - {4, 3, 0, COEX_RF_OFF_FLAGS}, - {6, 6, 0, COEX_STAND_ALONE_DEBUG_FLAGS}, - {4, 3, 0, COEX_IPAN_ASSOC_LEVEL_FLAGS}, - {4, 3, 0, COEX_RSRVD1_FLAGS}, - {4, 3, 0, COEX_RSRVD2_FLAGS} -}; - -static struct coex_event iwm_sta_cm_prio_tbl[COEX_EVENTS_NUM] = -{ - {1, 1, 0, COEX_UNASSOC_IDLE_FLAGS}, - {4, 4, 0, COEX_UNASSOC_MANUAL_SCAN_FLAGS}, - {3, 3, 0, COEX_UNASSOC_AUTO_SCAN_FLAGS}, - {6, 6, 0, COEX_CALIBRATION_FLAGS}, - {3, 3, 0, COEX_PERIODIC_CALIBRATION_FLAGS}, - {6, 5, 0, COEX_CONNECTION_ESTAB_FLAGS}, - {4, 4, 0, COEX_ASSOCIATED_IDLE_FLAGS}, - {4, 4, 0, COEX_ASSOC_MANUAL_SCAN_FLAGS}, - {4, 4, 0, COEX_ASSOC_AUTO_SCAN_FLAGS}, - {4, 4, 0, COEX_ASSOC_ACTIVE_LEVEL_FLAGS}, - {1, 1, 0, COEX_RF_ON_FLAGS}, - {1, 1, 0, COEX_RF_OFF_FLAGS}, - {7, 7, 0, COEX_STAND_ALONE_DEBUG_FLAGS}, - {5, 4, 0, COEX_IPAN_ASSOC_LEVEL_FLAGS}, - {1, 1, 0, COEX_RSRVD1_FLAGS}, - {1, 1, 0, COEX_RSRVD2_FLAGS} -}; - -int iwm_send_prio_table(struct iwm_priv *iwm) -{ - struct iwm_coex_prio_table_cmd coex_table_cmd; - u32 coex_enabled, mode_enabled; - - memset(&coex_table_cmd, 0, sizeof(struct iwm_coex_prio_table_cmd)); - - coex_table_cmd.flags = COEX_FLAGS_STA_TABLE_VALID_MSK; - - 
switch (modparam_wiwi) { - case COEX_MODE_XOR: - case COEX_MODE_CM: - coex_enabled = 1; - break; - default: - coex_enabled = 0; - break; - } - - switch (iwm->conf.mode) { - case UMAC_MODE_BSS: - case UMAC_MODE_IBSS: - mode_enabled = 1; - break; - default: - mode_enabled = 0; - break; - } - - if (coex_enabled && mode_enabled) { - coex_table_cmd.flags |= COEX_FLAGS_COEX_ENABLE_MSK | - COEX_FLAGS_ASSOC_WAKEUP_UMASK_MSK | - COEX_FLAGS_UNASSOC_WAKEUP_UMASK_MSK; - - switch (modparam_wiwi) { - case COEX_MODE_XOR: - memcpy(coex_table_cmd.sta_prio, iwm_sta_xor_prio_tbl, - sizeof(iwm_sta_xor_prio_tbl)); - break; - case COEX_MODE_CM: - memcpy(coex_table_cmd.sta_prio, iwm_sta_cm_prio_tbl, - sizeof(iwm_sta_cm_prio_tbl)); - break; - default: - IWM_ERR(iwm, "Invalid coex_mode 0x%x\n", - modparam_wiwi); - break; - } - } else - IWM_WARN(iwm, "coexistense disabled\n"); - - return iwm_send_lmac_ptrough_cmd(iwm, COEX_PRIORITY_TABLE_CMD, - &coex_table_cmd, - sizeof(struct iwm_coex_prio_table_cmd), 0); -} - -int iwm_send_init_calib_cfg(struct iwm_priv *iwm, u8 calib_requested) -{ - struct iwm_lmac_cal_cfg_cmd cal_cfg_cmd; - - memset(&cal_cfg_cmd, 0, sizeof(struct iwm_lmac_cal_cfg_cmd)); - - cal_cfg_cmd.ucode_cfg.init.enable = cpu_to_le32(calib_requested); - cal_cfg_cmd.ucode_cfg.init.start = cpu_to_le32(calib_requested); - cal_cfg_cmd.ucode_cfg.init.send_res = cpu_to_le32(calib_requested); - cal_cfg_cmd.ucode_cfg.flags = - cpu_to_le32(CALIB_CFG_FLAG_SEND_COMPLETE_NTFY_AFTER_MSK); - - return iwm_send_lmac_ptrough_cmd(iwm, CALIBRATION_CFG_CMD, &cal_cfg_cmd, - sizeof(struct iwm_lmac_cal_cfg_cmd), 1); -} - -int iwm_send_periodic_calib_cfg(struct iwm_priv *iwm, u8 calib_requested) -{ - struct iwm_lmac_cal_cfg_cmd cal_cfg_cmd; - - memset(&cal_cfg_cmd, 0, sizeof(struct iwm_lmac_cal_cfg_cmd)); - - cal_cfg_cmd.ucode_cfg.periodic.enable = cpu_to_le32(calib_requested); - cal_cfg_cmd.ucode_cfg.periodic.start = cpu_to_le32(calib_requested); - - return iwm_send_lmac_ptrough_cmd(iwm, CALIBRATION_CFG_CMD, &cal_cfg_cmd, - sizeof(struct iwm_lmac_cal_cfg_cmd), 0); -} - -int iwm_store_rxiq_calib_result(struct iwm_priv *iwm) -{ - struct iwm_calib_rxiq *rxiq; - u8 *eeprom_rxiq = iwm_eeprom_access(iwm, IWM_EEPROM_CALIB_RXIQ); - int grplen = sizeof(struct iwm_calib_rxiq_group); - - rxiq = kzalloc(sizeof(struct iwm_calib_rxiq), GFP_KERNEL); - if (!rxiq) { - IWM_ERR(iwm, "Couldn't alloc memory for RX IQ\n"); - return -ENOMEM; - } - - eeprom_rxiq = iwm_eeprom_access(iwm, IWM_EEPROM_CALIB_RXIQ); - if (IS_ERR(eeprom_rxiq)) { - IWM_ERR(iwm, "Couldn't access EEPROM RX IQ entry\n"); - kfree(rxiq); - return PTR_ERR(eeprom_rxiq); - } - - iwm->calib_res[SHILOH_PHY_CALIBRATE_RX_IQ_CMD].buf = (u8 *)rxiq; - iwm->calib_res[SHILOH_PHY_CALIBRATE_RX_IQ_CMD].size = sizeof(*rxiq); - - rxiq->hdr.opcode = SHILOH_PHY_CALIBRATE_RX_IQ_CMD; - rxiq->hdr.first_grp = 0; - rxiq->hdr.grp_num = 1; - rxiq->hdr.all_data_valid = 1; - - memcpy(&rxiq->group[0], eeprom_rxiq, 4 * grplen); - memcpy(&rxiq->group[4], eeprom_rxiq + 6 * grplen, grplen); - - return 0; -} - -int iwm_send_calib_results(struct iwm_priv *iwm) -{ - int i, ret = 0; - - for (i = PHY_CALIBRATE_OPCODES_NUM; i < CALIBRATION_CMD_NUM; i++) { - if (test_bit(i - PHY_CALIBRATE_OPCODES_NUM, - &iwm->calib_done_map)) { - IWM_DBG_CMD(iwm, DBG, - "Send calibration %d result\n", i); - ret |= iwm_send_lmac_ptrough_cmd(iwm, - REPLY_PHY_CALIBRATION_CMD, - iwm->calib_res[i].buf, - iwm->calib_res[i].size, 0); - - kfree(iwm->calib_res[i].buf); - iwm->calib_res[i].buf = NULL; - iwm->calib_res[i].size = 0; - } - } - - 
return ret; -} - -int iwm_send_ct_kill_cfg(struct iwm_priv *iwm, u8 entry, u8 exit) -{ - struct iwm_ct_kill_cfg_cmd cmd; - - cmd.entry_threshold = entry; - cmd.exit_threshold = exit; - - return iwm_send_lmac_ptrough_cmd(iwm, REPLY_CT_KILL_CONFIG_CMD, &cmd, - sizeof(struct iwm_ct_kill_cfg_cmd), 0); -} - -int iwm_send_umac_reset(struct iwm_priv *iwm, __le32 reset_flags, bool resp) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_reset reset; - - reset.flags = reset_flags; - - umac_cmd.id = UMAC_CMD_OPCODE_RESET; - umac_cmd.resp = resp; - - return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, &reset, - sizeof(struct iwm_umac_cmd_reset)); -} - -int iwm_umac_set_config_fix(struct iwm_priv *iwm, u16 tbl, u16 key, u32 value) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_set_param_fix param; - - if ((tbl != UMAC_PARAM_TBL_CFG_FIX) && - (tbl != UMAC_PARAM_TBL_FA_CFG_FIX)) - return -EINVAL; - - umac_cmd.id = UMAC_CMD_OPCODE_SET_PARAM_FIX; - umac_cmd.resp = 0; - - param.tbl = cpu_to_le16(tbl); - param.key = cpu_to_le16(key); - param.value = cpu_to_le32(value); - - return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, &param, - sizeof(struct iwm_umac_cmd_set_param_fix)); -} - -int iwm_umac_set_config_var(struct iwm_priv *iwm, u16 key, - void *payload, u16 payload_size) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_set_param_var *param_hdr; - u8 *param; - int ret; - - param = kzalloc(payload_size + - sizeof(struct iwm_umac_cmd_set_param_var), GFP_KERNEL); - if (!param) { - IWM_ERR(iwm, "Couldn't allocate param\n"); - return -ENOMEM; - } - - param_hdr = (struct iwm_umac_cmd_set_param_var *)param; - - umac_cmd.id = UMAC_CMD_OPCODE_SET_PARAM_VAR; - umac_cmd.resp = 0; - - param_hdr->tbl = cpu_to_le16(UMAC_PARAM_TBL_CFG_VAR); - param_hdr->key = cpu_to_le16(key); - param_hdr->len = cpu_to_le16(payload_size); - memcpy(param + sizeof(struct iwm_umac_cmd_set_param_var), - payload, payload_size); - - ret = iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, param, - sizeof(struct iwm_umac_cmd_set_param_var) + - payload_size); - kfree(param); - - return ret; -} - -int iwm_send_umac_config(struct iwm_priv *iwm, __le32 reset_flags) -{ - int ret; - - /* Use UMAC default values */ - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_POWER_INDEX, iwm->conf.power_index); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_FA_CFG_FIX, - CFG_FRAG_THRESHOLD, - iwm->conf.frag_threshold); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_RTS_THRESHOLD, - iwm->conf.rts_threshold); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_CTS_TO_SELF, iwm->conf.cts_to_self); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_WIRELESS_MODE, - iwm->conf.wireless_mode); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_COEX_MODE, modparam_wiwi); - if (ret < 0) - return ret; - - /- - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_ASSOCIATION_TIMEOUT, - iwm->conf.assoc_timeout); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_ROAM_TIMEOUT, - iwm->conf.roam_timeout); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, 
UMAC_PARAM_TBL_CFG_FIX, - CFG_WIRELESS_MODE, - WIRELESS_MODE_11A | WIRELESS_MODE_11G); - if (ret < 0) - return ret; - */ - - ret = iwm_umac_set_config_var(iwm, CFG_NET_ADDR, - iwm_to_ndev(iwm)->dev_addr, ETH_ALEN); - if (ret < 0) - return ret; - - /* UMAC PM static configurations */ - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_PM_LEGACY_RX_TIMEOUT, 0x12C); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_PM_LEGACY_TX_TIMEOUT, 0x15E); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_PM_CTRL_FLAGS, 0x1); - if (ret < 0) - return ret; - - ret = iwm_umac_set_config_fix(iwm, UMAC_PARAM_TBL_CFG_FIX, - CFG_PM_KEEP_ALIVE_IN_BEACONS, 0x80); - if (ret < 0) - return ret; - - /* reset UMAC */ - ret = iwm_send_umac_reset(iwm, reset_flags, 1); - if (ret < 0) - return ret; - - ret = iwm_notif_handle(iwm, UMAC_CMD_OPCODE_RESET, IWM_SRC_UMAC, - WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Wait for UMAC RESET timeout\n"); - return ret; - } - - return ret; -} - -int iwm_send_packet(struct iwm_priv *iwm, struct sk_buff *skb, int pool_id) -{ - struct iwm_udma_wifi_cmd udma_cmd; - struct iwm_umac_cmd umac_cmd; - struct iwm_tx_info *tx_info = skb_to_tx_info(skb); - - udma_cmd.eop = 1; /* always set eop for non-concatenated Tx */ - udma_cmd.credit_group = pool_id; - udma_cmd.ra_tid = tx_info->sta << 4 | tx_info->tid; - udma_cmd.lmac_offset = 0; - - umac_cmd.id = REPLY_TX; - umac_cmd.color = tx_info->color; - umac_cmd.resp = 0; - - return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, - skb->data, skb->len); -} - -static int iwm_target_read(struct iwm_priv *iwm, __le32 address, - u8 *response, u32 resp_size) -{ - struct iwm_udma_nonwifi_cmd target_cmd; - struct iwm_nonwifi_cmd *cmd; - u16 seq_num; - int ret = 0; - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_READ; - target_cmd.addr = address; - target_cmd.op1_sz = cpu_to_le32(resp_size); - target_cmd.op2 = 0; - target_cmd.handle_by_hw = 0; - target_cmd.resp = 1; - target_cmd.eop = 1; - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, NULL); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't send READ command\n"); - return ret; - } - - /* When succeeding, the send_target routine returns the seq number */ - seq_num = ret; - - ret = wait_event_interruptible_timeout(iwm->nonwifi_queue, - (cmd = iwm_get_pending_nonwifi_cmd(iwm, seq_num, - UMAC_HDI_OUT_OPCODE_READ)) != NULL, - 2 * HZ); - - if (!ret) { - IWM_ERR(iwm, "Didn't receive a target READ answer\n"); - return ret; - } - - memcpy(response, cmd->buf.hdr + sizeof(struct iwm_udma_in_hdr), - resp_size); - - kfree(cmd); - - return 0; -} - -int iwm_read_mac(struct iwm_priv *iwm, u8 *mac) -{ - int ret; - u8 mac_align[ALIGN(ETH_ALEN, 8)]; - - ret = iwm_target_read(iwm, cpu_to_le32(WICO_MAC_ADDRESS_ADDR), - mac_align, sizeof(mac_align)); - if (ret) - return ret; - - if (is_valid_ether_addr(mac_align)) - memcpy(mac, mac_align, ETH_ALEN); - else { - IWM_ERR(iwm, "Invalid EEPROM MAC\n"); - memcpy(mac, iwm->conf.mac_addr, ETH_ALEN); - get_random_bytes(&mac[3], 3); - } - - return 0; -} - -static int iwm_check_profile(struct iwm_priv *iwm) -{ - if (!iwm->umac_profile_active) - return -EAGAIN; - - if (iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_WEP_40 && - iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_WEP_104 && - iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_TKIP && - iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_CCMP) { - IWM_ERR(iwm, "Wrong unicast cipher: 0x%x\n", - 
iwm->umac_profile->sec.ucast_cipher); - return -EAGAIN; - } - - if (iwm->umac_profile->sec.mcast_cipher != UMAC_CIPHER_TYPE_WEP_40 && - iwm->umac_profile->sec.mcast_cipher != UMAC_CIPHER_TYPE_WEP_104 && - iwm->umac_profile->sec.mcast_cipher != UMAC_CIPHER_TYPE_TKIP && - iwm->umac_profile->sec.mcast_cipher != UMAC_CIPHER_TYPE_CCMP) { - IWM_ERR(iwm, "Wrong multicast cipher: 0x%x\n", - iwm->umac_profile->sec.mcast_cipher); - return -EAGAIN; - } - - if ((iwm->umac_profile->sec.ucast_cipher == UMAC_CIPHER_TYPE_WEP_40 || - iwm->umac_profile->sec.ucast_cipher == UMAC_CIPHER_TYPE_WEP_104) && - (iwm->umac_profile->sec.ucast_cipher != - iwm->umac_profile->sec.mcast_cipher)) { - IWM_ERR(iwm, "Unicast and multicast ciphers differ for WEP\n"); - } - - return 0; -} - -int iwm_set_tx_key(struct iwm_priv *iwm, u8 key_idx) -{ - struct iwm_umac_tx_key_id tx_key_id; - int ret; - - ret = iwm_check_profile(iwm); - if (ret < 0) - return ret; - - /* UMAC only allows to set default key for WEP and auth type is - * NOT 802.1X or RSNA. */ - if ((iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_WEP_40 && - iwm->umac_profile->sec.ucast_cipher != UMAC_CIPHER_TYPE_WEP_104) || - iwm->umac_profile->sec.auth_type == UMAC_AUTH_TYPE_8021X || - iwm->umac_profile->sec.auth_type == UMAC_AUTH_TYPE_RSNA_PSK) - return 0; - - tx_key_id.hdr.oid = UMAC_WIFI_IF_CMD_GLOBAL_TX_KEY_ID; - tx_key_id.hdr.buf_size = cpu_to_le16(sizeof(struct iwm_umac_tx_key_id) - - sizeof(struct iwm_umac_wifi_if)); - - tx_key_id.key_idx = key_idx; - - return iwm_send_wifi_if_cmd(iwm, &tx_key_id, sizeof(tx_key_id), 1); -} - -int iwm_set_key(struct iwm_priv *iwm, bool remove, struct iwm_key *key) -{ - int ret = 0; - u8 cmd[64], *sta_addr, *key_data, key_len; - s8 key_idx; - u16 cmd_size = 0; - struct iwm_umac_key_hdr *key_hdr = &key->hdr; - struct iwm_umac_key_wep40 *wep40 = (struct iwm_umac_key_wep40 *)cmd; - struct iwm_umac_key_wep104 *wep104 = (struct iwm_umac_key_wep104 *)cmd; - struct iwm_umac_key_tkip *tkip = (struct iwm_umac_key_tkip *)cmd; - struct iwm_umac_key_ccmp *ccmp = (struct iwm_umac_key_ccmp *)cmd; - - if (!remove) { - ret = iwm_check_profile(iwm); - if (ret < 0) - return ret; - } - - sta_addr = key->hdr.mac; - key_data = key->key; - key_len = key->key_len; - key_idx = key->hdr.key_idx; - - if (!remove) { - u8 auth_type = iwm->umac_profile->sec.auth_type; - - IWM_DBG_WEXT(iwm, DBG, "key_idx:%d\n", key_idx); - IWM_DBG_WEXT(iwm, DBG, "key_len:%d\n", key_len); - IWM_DBG_WEXT(iwm, DBG, "MAC:%pM, idx:%d, multicast:%d\n", - key_hdr->mac, key_hdr->key_idx, key_hdr->multicast); - - IWM_DBG_WEXT(iwm, DBG, "profile: mcast:0x%x, ucast:0x%x\n", - iwm->umac_profile->sec.mcast_cipher, - iwm->umac_profile->sec.ucast_cipher); - IWM_DBG_WEXT(iwm, DBG, "profile: auth_type:0x%x, flags:0x%x\n", - iwm->umac_profile->sec.auth_type, - iwm->umac_profile->sec.flags); - - switch (key->cipher) { - case WLAN_CIPHER_SUITE_WEP40: - wep40->hdr.oid = UMAC_WIFI_IF_CMD_ADD_WEP40_KEY; - wep40->hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_key_wep40) - - sizeof(struct iwm_umac_wifi_if)); - - memcpy(&wep40->key_hdr, key_hdr, - sizeof(struct iwm_umac_key_hdr)); - memcpy(wep40->key, key_data, key_len); - wep40->static_key = - !!((auth_type != UMAC_AUTH_TYPE_8021X) && - (auth_type != UMAC_AUTH_TYPE_RSNA_PSK)); - - cmd_size = sizeof(struct iwm_umac_key_wep40); - break; - - case WLAN_CIPHER_SUITE_WEP104: - wep104->hdr.oid = UMAC_WIFI_IF_CMD_ADD_WEP104_KEY; - wep104->hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_key_wep104) - - sizeof(struct iwm_umac_wifi_if)); - - 
memcpy(&wep104->key_hdr, key_hdr, - sizeof(struct iwm_umac_key_hdr)); - memcpy(wep104->key, key_data, key_len); - wep104->static_key = - !!((auth_type != UMAC_AUTH_TYPE_8021X) && - (auth_type != UMAC_AUTH_TYPE_RSNA_PSK)); - - cmd_size = sizeof(struct iwm_umac_key_wep104); - break; - - case WLAN_CIPHER_SUITE_CCMP: - key_hdr->key_idx++; - ccmp->hdr.oid = UMAC_WIFI_IF_CMD_ADD_CCMP_KEY; - ccmp->hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_key_ccmp) - - sizeof(struct iwm_umac_wifi_if)); - - memcpy(&ccmp->key_hdr, key_hdr, - sizeof(struct iwm_umac_key_hdr)); - - memcpy(ccmp->key, key_data, key_len); - - if (key->seq_len) - memcpy(ccmp->iv_count, key->seq, key->seq_len); - - cmd_size = sizeof(struct iwm_umac_key_ccmp); - break; - - case WLAN_CIPHER_SUITE_TKIP: - key_hdr->key_idx++; - tkip->hdr.oid = UMAC_WIFI_IF_CMD_ADD_TKIP_KEY; - tkip->hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_key_tkip) - - sizeof(struct iwm_umac_wifi_if)); - - memcpy(&tkip->key_hdr, key_hdr, - sizeof(struct iwm_umac_key_hdr)); - - memcpy(tkip->tkip_key, key_data, IWM_TKIP_KEY_SIZE); - memcpy(tkip->mic_tx_key, key_data + IWM_TKIP_KEY_SIZE, - IWM_TKIP_MIC_SIZE); - memcpy(tkip->mic_rx_key, - key_data + IWM_TKIP_KEY_SIZE + IWM_TKIP_MIC_SIZE, - IWM_TKIP_MIC_SIZE); - - if (key->seq_len) - memcpy(ccmp->iv_count, key->seq, key->seq_len); - - cmd_size = sizeof(struct iwm_umac_key_tkip); - break; - - default: - return -ENOTSUPP; - } - - if ((key->cipher == WLAN_CIPHER_SUITE_TKIP) || - (key->cipher == WLAN_CIPHER_SUITE_CCMP)) - /* - * UGLY_UGLY_UGLY - * Copied HACK from the MWG driver. - * Without it, the key is set before the second - * EAPOL frame is sent, and the latter is thus - * encrypted. - */ - schedule_timeout_interruptible(usecs_to_jiffies(300)); - - ret = iwm_send_wifi_if_cmd(iwm, cmd, cmd_size, 1); - } else { - struct iwm_umac_key_remove key_remove; - - IWM_DBG_WEXT(iwm, ERR, "Removing key_idx:%d\n", key_idx); - - key_remove.hdr.oid = UMAC_WIFI_IF_CMD_REMOVE_KEY; - key_remove.hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_key_remove) - - sizeof(struct iwm_umac_wifi_if)); - memcpy(&key_remove.key_hdr, key_hdr, - sizeof(struct iwm_umac_key_hdr)); - - ret = iwm_send_wifi_if_cmd(iwm, &key_remove, - sizeof(struct iwm_umac_key_remove), - 1); - if (ret) - return ret; - - iwm->keys[key_idx].key_len = 0; - } - - return ret; -} - - -int iwm_send_mlme_profile(struct iwm_priv *iwm) -{ - int ret; - struct iwm_umac_profile profile; - - memcpy(&profile, iwm->umac_profile, sizeof(profile)); - - profile.hdr.oid = UMAC_WIFI_IF_CMD_SET_PROFILE; - profile.hdr.buf_size = cpu_to_le16(sizeof(struct iwm_umac_profile) - - sizeof(struct iwm_umac_wifi_if)); - - ret = iwm_send_wifi_if_cmd(iwm, &profile, sizeof(profile), 1); - if (ret) { - IWM_ERR(iwm, "Send profile command failed\n"); - return ret; - } - - set_bit(IWM_STATUS_SME_CONNECTING, &iwm->status); - return 0; -} - -int __iwm_invalidate_mlme_profile(struct iwm_priv *iwm) -{ - struct iwm_umac_invalidate_profile invalid; - - invalid.hdr.oid = UMAC_WIFI_IF_CMD_INVALIDATE_PROFILE; - invalid.hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_invalidate_profile) - - sizeof(struct iwm_umac_wifi_if)); - - invalid.reason = WLAN_REASON_UNSPECIFIED; - - return iwm_send_wifi_if_cmd(iwm, &invalid, sizeof(invalid), 1); -} - -int iwm_invalidate_mlme_profile(struct iwm_priv *iwm) -{ - int ret; - - ret = __iwm_invalidate_mlme_profile(iwm); - if (ret) - return ret; - - ret = wait_event_interruptible_timeout(iwm->mlme_queue, - (iwm->umac_profile_active == 0), 5 * HZ); - - return ret ? 
0 : -EBUSY; -} - -int iwm_tx_power_trigger(struct iwm_priv *iwm) -{ - struct iwm_umac_pwr_trigger pwr_trigger; - - pwr_trigger.hdr.oid = UMAC_WIFI_IF_CMD_TX_PWR_TRIGGER; - pwr_trigger.hdr.buf_size = - cpu_to_le16(sizeof(struct iwm_umac_pwr_trigger) - - sizeof(struct iwm_umac_wifi_if)); - - - return iwm_send_wifi_if_cmd(iwm, &pwr_trigger, sizeof(pwr_trigger), 1); -} - -int iwm_send_umac_stats_req(struct iwm_priv *iwm, u32 flags) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_stats_req stats_req; - - stats_req.flags = cpu_to_le32(flags); - - umac_cmd.id = UMAC_CMD_OPCODE_STATISTIC_REQUEST; - umac_cmd.resp = 0; - - return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, &stats_req, - sizeof(struct iwm_umac_cmd_stats_req)); -} - -int iwm_send_umac_channel_list(struct iwm_priv *iwm) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_get_channel_list *ch_list; - int size = sizeof(struct iwm_umac_cmd_get_channel_list) + - sizeof(struct iwm_umac_channel_info) * 4; - int ret; - - ch_list = kzalloc(size, GFP_KERNEL); - if (!ch_list) { - IWM_ERR(iwm, "Couldn't allocate channel list cmd\n"); - return -ENOMEM; - } - - ch_list->ch[0].band = UMAC_BAND_2GHZ; - ch_list->ch[0].type = UMAC_CHANNEL_WIDTH_20MHZ; - ch_list->ch[0].flags = UMAC_CHANNEL_FLAG_VALID; - - ch_list->ch[1].band = UMAC_BAND_5GHZ; - ch_list->ch[1].type = UMAC_CHANNEL_WIDTH_20MHZ; - ch_list->ch[1].flags = UMAC_CHANNEL_FLAG_VALID; - - ch_list->ch[2].band = UMAC_BAND_2GHZ; - ch_list->ch[2].type = UMAC_CHANNEL_WIDTH_20MHZ; - ch_list->ch[2].flags = UMAC_CHANNEL_FLAG_VALID | UMAC_CHANNEL_FLAG_IBSS; - - ch_list->ch[3].band = UMAC_BAND_5GHZ; - ch_list->ch[3].type = UMAC_CHANNEL_WIDTH_20MHZ; - ch_list->ch[3].flags = UMAC_CHANNEL_FLAG_VALID | UMAC_CHANNEL_FLAG_IBSS; - - ch_list->count = cpu_to_le16(4); - - umac_cmd.id = UMAC_CMD_OPCODE_GET_CHAN_INFO_LIST; - umac_cmd.resp = 1; - - ret = iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, ch_list, size); - - kfree(ch_list); - - return ret; -} - -int iwm_scan_ssids(struct iwm_priv *iwm, struct cfg80211_ssid *ssids, - int ssid_num) -{ - struct iwm_umac_cmd_scan_request req; - int i, ret; - - memset(&req, 0, sizeof(struct iwm_umac_cmd_scan_request)); - - req.hdr.oid = UMAC_WIFI_IF_CMD_SCAN_REQUEST; - req.hdr.buf_size = cpu_to_le16(sizeof(struct iwm_umac_cmd_scan_request) - - sizeof(struct iwm_umac_wifi_if)); - req.type = UMAC_WIFI_IF_SCAN_TYPE_USER; - req.timeout = 2; - req.seq_num = iwm->scan_id; - req.ssid_num = min(ssid_num, UMAC_WIFI_IF_PROBE_OPTION_MAX); - - for (i = 0; i < req.ssid_num; i++) { - memcpy(req.ssids[i].ssid, ssids[i].ssid, ssids[i].ssid_len); - req.ssids[i].ssid_len = ssids[i].ssid_len; - } - - ret = iwm_send_wifi_if_cmd(iwm, &req, sizeof(req), 0); - if (ret) { - IWM_ERR(iwm, "Couldn't send scan request\n"); - return ret; - } - - iwm->scan_id = (iwm->scan_id + 1) % IWM_SCAN_ID_MAX; - - return 0; -} - -int iwm_scan_one_ssid(struct iwm_priv *iwm, u8 *ssid, int ssid_len) -{ - struct cfg80211_ssid one_ssid; - - if (test_and_set_bit(IWM_STATUS_SCANNING, &iwm->status)) - return 0; - - one_ssid.ssid_len = min(ssid_len, IEEE80211_MAX_SSID_LEN); - memcpy(&one_ssid.ssid, ssid, one_ssid.ssid_len); - - return iwm_scan_ssids(iwm, &one_ssid, 1); -} - -int iwm_target_reset(struct iwm_priv *iwm) -{ - struct iwm_udma_nonwifi_cmd target_cmd; - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_REBOOT; - target_cmd.addr = 0; - target_cmd.op1_sz = 0; - target_cmd.op2 = 0; - 
target_cmd.handle_by_hw = 0; - target_cmd.resp = 0; - target_cmd.eop = 1; - - return iwm_hal_send_target_cmd(iwm, &target_cmd, NULL); -} - -int iwm_send_umac_stop_resume_tx(struct iwm_priv *iwm, - struct iwm_umac_notif_stop_resume_tx *ntf) -{ - struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_stop_resume_tx stp_res_cmd; - struct iwm_sta_info *sta_info; - u8 sta_id = STA_ID_N_COLOR_ID(ntf->sta_id); - int i; - - sta_info = &iwm->sta_table[sta_id]; - if (!sta_info->valid) { - IWM_ERR(iwm, "Invalid STA: %d\n", sta_id); - return -EINVAL; - } - - umac_cmd.id = UMAC_CMD_OPCODE_STOP_RESUME_STA_TX; - umac_cmd.resp = 0; - - stp_res_cmd.flags = ntf->flags; - stp_res_cmd.sta_id = ntf->sta_id; - stp_res_cmd.stop_resume_tid_msk = ntf->stop_resume_tid_msk; - for (i = 0; i < IWM_UMAC_TID_NR; i++) - stp_res_cmd.last_seq_num[i] = - sta_info->tid_info[i].last_seq_num; - - return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd, &stp_res_cmd, - sizeof(struct iwm_umac_cmd_stop_resume_tx)); - -} - -int iwm_send_pmkid_update(struct iwm_priv *iwm, - struct cfg80211_pmksa *pmksa, u32 command) -{ - struct iwm_umac_pmkid_update update; - int ret; - - memset(&update, 0, sizeof(struct iwm_umac_pmkid_update)); - - update.hdr.oid = UMAC_WIFI_IF_CMD_PMKID_UPDATE; - update.hdr.buf_size = cpu_to_le16(sizeof(struct iwm_umac_pmkid_update) - - sizeof(struct iwm_umac_wifi_if)); - - update.command = cpu_to_le32(command); - if (pmksa->bssid) - memcpy(&update.bssid, pmksa->bssid, ETH_ALEN); - if (pmksa->pmkid) - memcpy(&update.pmkid, pmksa->pmkid, WLAN_PMKID_LEN); - - ret = iwm_send_wifi_if_cmd(iwm, &update, - sizeof(struct iwm_umac_pmkid_update), 0); - if (ret) { - IWM_ERR(iwm, "PMKID update command failed\n"); - return ret; - } - - return 0; -} diff --git a/drivers/net/wireless/iwmc3200wifi/commands.h b/drivers/net/wireless/iwmc3200wifi/commands.h deleted file mode 100644 index 6421689f5e8e..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/commands.h +++ /dev/null @@ -1,509 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_COMMANDS_H__ -#define __IWM_COMMANDS_H__ - -#include <linux/ieee80211.h> - -#define IWM_BARKER_REBOOT_NOTIFICATION 0xF -#define IWM_ACK_BARKER_NOTIFICATION 0x10 - -/* UMAC commands */ -#define UMAC_RST_CTRL_FLG_LARC_CLK_EN 0x0001 -#define UMAC_RST_CTRL_FLG_LARC_RESET 0x0002 -#define UMAC_RST_CTRL_FLG_FUNC_RESET 0x0004 -#define UMAC_RST_CTRL_FLG_DEV_RESET 0x0008 -#define UMAC_RST_CTRL_FLG_WIFI_CORE_EN 0x0010 -#define UMAC_RST_CTRL_FLG_WIFI_LINK_EN 0x0040 -#define UMAC_RST_CTRL_FLG_WIFI_MLME_EN 0x0080 -#define UMAC_RST_CTRL_FLG_NVM_RELOAD 0x0100 - -struct iwm_umac_cmd_reset { - __le32 flags; -} __packed; - -#define UMAC_PARAM_TBL_ORD_FIX 0x0 -#define UMAC_PARAM_TBL_ORD_VAR 0x1 -#define UMAC_PARAM_TBL_CFG_FIX 0x2 -#define UMAC_PARAM_TBL_CFG_VAR 0x3 -#define UMAC_PARAM_TBL_BSS_TRK 0x4 -#define UMAC_PARAM_TBL_FA_CFG_FIX 0x5 -#define UMAC_PARAM_TBL_STA 0x6 -#define UMAC_PARAM_TBL_CHN 0x7 -#define UMAC_PARAM_TBL_STATISTICS 0x8 - -/* fast access table */ -enum { - CFG_FRAG_THRESHOLD = 0, - CFG_FRAME_RETRY_LIMIT, - CFG_OS_QUEUE_UTIL_TH, - CFG_RX_FILTER, - /* <-- LAST --> */ - FAST_ACCESS_CFG_TBL_FIX_LAST -}; - -/* fixed size table */ -enum { - CFG_POWER_INDEX = 0, - CFG_PM_LEGACY_RX_TIMEOUT, - CFG_PM_LEGACY_TX_TIMEOUT, - CFG_PM_CTRL_FLAGS, - CFG_PM_KEEP_ALIVE_IN_BEACONS, - CFG_BT_ON_THRESHOLD, - CFG_RTS_THRESHOLD, - CFG_CTS_TO_SELF, - CFG_COEX_MODE, - CFG_WIRELESS_MODE, - CFG_ASSOCIATION_TIMEOUT, - CFG_ROAM_TIMEOUT, - CFG_CAPABILITY_SUPPORTED_RATES, - CFG_SCAN_ALLOWED_UNASSOC_FLAGS, - CFG_SCAN_ALLOWED_MAIN_ASSOC_FLAGS, - CFG_SCAN_ALLOWED_PAN_ASSOC_FLAGS, - CFG_SCAN_INTERNAL_PERIODIC_ENABLED, - CFG_SCAN_IMM_INTERNAL_PERIODIC_SCAN_ON_INIT, - CFG_SCAN_DEFAULT_PERIODIC_FREQ_SEC, - CFG_SCAN_NUM_PASSIVE_CHAN_PER_PARTIAL_SCAN, - CFG_TLC_SUPPORTED_TX_HT_RATES, - CFG_TLC_SUPPORTED_TX_RATES, - CFG_TLC_SPATIAL_STREAM_SUPPORTED, - CFG_TLC_RETRY_PER_RATE, - CFG_TLC_RETRY_PER_HT_RATE, - CFG_TLC_FIXED_MCS, - CFG_TLC_CONTROL_FLAGS, - CFG_TLC_SR_MIN_FAIL, - CFG_TLC_SR_MIN_PASS, - CFG_TLC_HT_STAY_IN_COL_PASS_THRESH, - CFG_TLC_HT_STAY_IN_COL_FAIL_THRESH, - CFG_TLC_LEGACY_STAY_IN_COL_PASS_THRESH, - CFG_TLC_LEGACY_STAY_IN_COL_FAIL_THRESH, - CFG_TLC_HT_FLUSH_STATS_PACKETS, - CFG_TLC_LEGACY_FLUSH_STATS_PACKETS, - CFG_TLC_LEGACY_FLUSH_STATS_MS, - CFG_TLC_HT_FLUSH_STATS_MS, - CFG_TLC_STAY_IN_COL_TIME_OUT, - CFG_TLC_AGG_SHORT_LIM, - CFG_TLC_AGG_LONG_LIM, - CFG_TLC_HT_SR_NO_DECREASE, - CFG_TLC_LEGACY_SR_NO_DECREASE, - CFG_TLC_SR_FORCE_DECREASE, - CFG_TLC_SR_ALLOW_INCREASE, - CFG_TLC_AGG_SET_LONG, - CFG_TLC_AUTO_AGGREGATION, - CFG_TLC_AGG_THRESHOLD, - CFG_TLC_TID_LOAD_THRESHOLD, - CFG_TLC_BLOCK_ACK_TIMEOUT, - CFG_TLC_NO_BA_COUNTED_AS_ONE, - CFG_TLC_NUM_BA_STREAMS_ALLOWED, - CFG_TLC_NUM_BA_STREAMS_PRESENT, - CFG_TLC_RENEW_ADDBA_DELAY, - CFG_TLC_NUM_OF_MULTISEC_TO_COUN_LOAD, - CFG_TLC_IS_STABLE_IN_HT, - CFG_TLC_SR_SIC_1ST_FAIL, - 
CFG_TLC_SR_SIC_1ST_PASS, - CFG_TLC_SR_SIC_TOTAL_FAIL, - CFG_TLC_SR_SIC_TOTAL_PASS, - CFG_RLC_CHAIN_CTRL, - CFG_TRK_TABLE_OP_MODE, - CFG_TRK_TABLE_RSSI_THRESHOLD, - CFG_TX_PWR_TARGET, /* Used By xVT */ - CFG_TX_PWR_LIMIT_USR, - CFG_TX_PWR_LIMIT_BSS, /* 11d limit */ - CFG_TX_PWR_LIMIT_BSS_CONSTRAINT, /* 11h constraint */ - CFG_TX_PWR_MODE, - CFG_MLME_DBG_NOTIF_BLOCK, - CFG_BT_OFF_BECONS_INTERVALS, - CFG_BT_FRAG_DURATION, - CFG_ACTIVE_CHAINS, - CFG_CALIB_CTRL, - CFG_CAPABILITY_SUPPORTED_HT_RATES, - CFG_HT_MAC_PARAM_INFO, - CFG_MIMO_PS_MODE, - CFG_HT_DEFAULT_CAPABILIES_INFO, - CFG_LED_SC_RESOLUTION_FACTOR, - CFG_PTAM_ENERGY_CCK_DET_DEFAULT, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MRC_DEFAULT, - CFG_PTAM_CORR40_4_TH_ADD_MIN_DEFAULT, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MRC_DEFAULT, - CFG_PTAM_CORR32_4_TH_ADD_MIN_DEFAULT, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MRC_DEFAULT, - CFG_PTAM_CORR32_1_TH_ADD_MIN_DEFAULT, - CFG_PTAM_ENERGY_CCK_DET_MIN_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MRC_MIN_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MIN_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MRC_MIN_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MIN_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MRC_MIN_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MIN_VAL, - CFG_PTAM_ENERGY_CCK_DET_MAX_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MRC_MAX_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MAX_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MRC_MAX_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MAX_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MRC_MAX_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MAX_VAL, - CFG_PTAM_ENERGY_CCK_DET_STEP_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_MRC_STEP_VAL, - CFG_PTAM_CORR40_4_TH_ADD_MIN_STEP_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_MRC_STEP_VAL, - CFG_PTAM_CORR32_4_TH_ADD_MIN_STEP_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_MRC_STEP_VAL, - CFG_PTAM_CORR32_1_TH_ADD_MIN_STEP_VAL, - CFG_PTAM_LINK_SENS_FA_OFDM_MAX, - CFG_PTAM_LINK_SENS_FA_OFDM_MIN, - CFG_PTAM_LINK_SENS_FA_CCK_MAX, - CFG_PTAM_LINK_SENS_FA_CCK_MIN, - CFG_PTAM_LINK_SENS_NRG_DIFF, - CFG_PTAM_LINK_SENS_NRG_MARGIN, - CFG_PTAM_LINK_SENS_MAX_NUMBER_OF_TIMES_IN_CCK_NO_FA, - CFG_PTAM_LINK_SENS_AUTO_CORR_MAX_TH_CCK, - CFG_AGG_MGG_TID_LOAD_ADDBA_THRESHOLD, - CFG_AGG_MGG_TID_LOAD_DELBA_THRESHOLD, - CFG_AGG_MGG_ADDBA_BUF_SIZE, - CFG_AGG_MGG_ADDBA_INACTIVE_TIMEOUT, - CFG_AGG_MGG_ADDBA_DEBUG_FLAGS, - CFG_SCAN_PERIODIC_RSSI_HIGH_THRESHOLD, - CFG_SCAN_PERIODIC_COEF_RSSI_HIGH, - CFG_11D_ENABLED, - CFG_11H_FEATURE_FLAGS, - - /* <-- LAST --> */ - CFG_TBL_FIX_LAST -}; - -/* variable size table */ -enum { - CFG_NET_ADDR = 0, - CFG_LED_PATTERN_TABLE, - - /* <-- LAST --> */ - CFG_TBL_VAR_LAST -}; - -struct iwm_umac_cmd_set_param_fix { - __le16 tbl; - __le16 key; - __le32 value; -} __packed; - -struct iwm_umac_cmd_set_param_var { - __le16 tbl; - __le16 key; - __le16 len; - __le16 reserved; -} __packed; - -struct iwm_umac_cmd_get_param { - __le16 tbl; - __le16 key; -} __packed; - -struct iwm_umac_cmd_get_param_resp { - __le16 tbl; - __le16 key; - __le16 len; - __le16 reserved; -} __packed; - -struct iwm_umac_cmd_eeprom_proxy_hdr { - __le32 type; - __le32 offset; - __le32 len; -} __packed; - -struct iwm_umac_cmd_eeprom_proxy { - struct iwm_umac_cmd_eeprom_proxy_hdr hdr; - u8 buf[0]; -} __packed; - -#define IWM_UMAC_CMD_EEPROM_TYPE_READ 0x1 -#define IWM_UMAC_CMD_EEPROM_TYPE_WRITE 0x2 - -#define UMAC_CHANNEL_FLAG_VALID BIT(0) -#define UMAC_CHANNEL_FLAG_IBSS BIT(1) -#define UMAC_CHANNEL_FLAG_ACTIVE BIT(3) -#define UMAC_CHANNEL_FLAG_RADAR BIT(4) -#define UMAC_CHANNEL_FLAG_DFS BIT(7) - -struct iwm_umac_channel_info { - u8 band; - u8 type; - u8 reserved; - u8 flags; - __le32 channels_mask; -} __packed; 
- -struct iwm_umac_cmd_get_channel_list { - __le16 count; - __le16 reserved; - struct iwm_umac_channel_info ch[0]; -} __packed; - - -/* UMAC WiFi interface commands */ - -/* Coexistence mode */ -#define COEX_MODE_SA 0x1 -#define COEX_MODE_XOR 0x2 -#define COEX_MODE_CM 0x3 -#define COEX_MODE_MAX 0x4 - -/* Wireless mode */ -#define WIRELESS_MODE_11A 0x1 -#define WIRELESS_MODE_11G 0x2 -#define WIRELESS_MODE_11N 0x4 - -#define UMAC_PROFILE_EX_IE_REQUIRED 0x1 -#define UMAC_PROFILE_QOS_ALLOWED 0x2 - -/* Scanning */ -#define UMAC_WIFI_IF_PROBE_OPTION_MAX 10 - -#define UMAC_WIFI_IF_SCAN_TYPE_USER 0x0 -#define UMAC_WIFI_IF_SCAN_TYPE_UMAC_RESERVED 0x1 -#define UMAC_WIFI_IF_SCAN_TYPE_HOST_PERIODIC 0x2 -#define UMAC_WIFI_IF_SCAN_TYPE_MAX 0x3 - -struct iwm_umac_ssid { - u8 ssid_len; - u8 ssid[IEEE80211_MAX_SSID_LEN]; - u8 reserved[3]; -} __packed; - -struct iwm_umac_cmd_scan_request { - struct iwm_umac_wifi_if hdr; - __le32 type; /* UMAC_WIFI_IF_SCAN_TYPE_* */ - u8 ssid_num; - u8 seq_num; - u8 timeout; /* In seconds */ - u8 reserved; - struct iwm_umac_ssid ssids[UMAC_WIFI_IF_PROBE_OPTION_MAX]; -} __packed; - -#define UMAC_CIPHER_TYPE_NONE 0xFF -#define UMAC_CIPHER_TYPE_USE_GROUPCAST 0x00 -#define UMAC_CIPHER_TYPE_WEP_40 0x01 -#define UMAC_CIPHER_TYPE_WEP_104 0x02 -#define UMAC_CIPHER_TYPE_TKIP 0x04 -#define UMAC_CIPHER_TYPE_CCMP 0x08 - -/* Supported authentication types - bitmap */ -#define UMAC_AUTH_TYPE_OPEN 0x00 -#define UMAC_AUTH_TYPE_LEGACY_PSK 0x01 -#define UMAC_AUTH_TYPE_8021X 0x02 -#define UMAC_AUTH_TYPE_RSNA_PSK 0x04 - -/* iwm_umac_security.flag is WPA supported -- bits[0:0] */ -#define UMAC_SEC_FLG_WPA_ON_POS 0 -#define UMAC_SEC_FLG_WPA_ON_SEED 1 -#define UMAC_SEC_FLG_WPA_ON_MSK (UMAC_SEC_FLG_WPA_ON_SEED << \ - UMAC_SEC_FLG_WPA_ON_POS) - -/* iwm_umac_security.flag is WPA2 supported -- bits [1:1] */ -#define UMAC_SEC_FLG_RSNA_ON_POS 1 -#define UMAC_SEC_FLG_RSNA_ON_SEED 1 -#define UMAC_SEC_FLG_RSNA_ON_MSK (UMAC_SEC_FLG_RSNA_ON_SEED << \ - UMAC_SEC_FLG_RSNA_ON_POS) - -/* iwm_umac_security.flag is WSC mode on -- bits [2:2] */ -#define UMAC_SEC_FLG_WSC_ON_POS 2 -#define UMAC_SEC_FLG_WSC_ON_SEED 1 -#define UMAC_SEC_FLG_WSC_ON_MSK (UMAC_SEC_FLG_WSC_ON_SEED << \ - UMAC_SEC_FLG_WSC_ON_POS) - - -/* Legacy profile can use only WEP40 and WEP104 for encryption and - * OPEN or PSK for authentication */ -#define UMAC_SEC_FLG_LEGACY_PROFILE 0 - -struct iwm_umac_security { - u8 auth_type; - u8 ucast_cipher; - u8 mcast_cipher; - u8 flags; -} __packed; - -struct iwm_umac_ibss { - u8 beacon_interval; /* in millisecond */ - u8 atim; /* in millisecond */ - s8 join_only; - u8 band; - u8 channel; - u8 reserved[3]; -} __packed; - -#define UMAC_MODE_BSS 0 -#define UMAC_MODE_IBSS 1 - -#define UMAC_BSSID_MAX 4 - -struct iwm_umac_profile { - struct iwm_umac_wifi_if hdr; - __le32 mode; - struct iwm_umac_ssid ssid; - u8 bssid[UMAC_BSSID_MAX][ETH_ALEN]; - struct iwm_umac_security sec; - struct iwm_umac_ibss ibss; - __le32 channel_2ghz; - __le32 channel_5ghz; - __le16 flags; - u8 wireless_mode; - u8 bss_num; -} __packed; - -struct iwm_umac_invalidate_profile { - struct iwm_umac_wifi_if hdr; - u8 reason; - u8 reserved[3]; -} __packed; - -/* Encryption key commands */ -struct iwm_umac_key_wep40 { - struct iwm_umac_wifi_if hdr; - struct iwm_umac_key_hdr key_hdr; - u8 key[WLAN_KEY_LEN_WEP40]; - u8 static_key; - u8 reserved[2]; -} __packed; - -struct iwm_umac_key_wep104 { - struct iwm_umac_wifi_if hdr; - struct iwm_umac_key_hdr key_hdr; - u8 key[WLAN_KEY_LEN_WEP104]; - u8 static_key; - u8 reserved[2]; -} __packed; - -#define 
IWM_TKIP_KEY_SIZE 16 -#define IWM_TKIP_MIC_SIZE 8 -struct iwm_umac_key_tkip { - struct iwm_umac_wifi_if hdr; - struct iwm_umac_key_hdr key_hdr; - u8 iv_count[6]; - u8 reserved[2]; - u8 tkip_key[IWM_TKIP_KEY_SIZE]; - u8 mic_rx_key[IWM_TKIP_MIC_SIZE]; - u8 mic_tx_key[IWM_TKIP_MIC_SIZE]; -} __packed; - -struct iwm_umac_key_ccmp { - struct iwm_umac_wifi_if hdr; - struct iwm_umac_key_hdr key_hdr; - u8 iv_count[6]; - u8 reserved[2]; - u8 key[WLAN_KEY_LEN_CCMP]; -} __packed; - -struct iwm_umac_key_remove { - struct iwm_umac_wifi_if hdr; - struct iwm_umac_key_hdr key_hdr; -} __packed; - -struct iwm_umac_tx_key_id { - struct iwm_umac_wifi_if hdr; - u8 key_idx; - u8 reserved[3]; -} __packed; - -struct iwm_umac_pwr_trigger { - struct iwm_umac_wifi_if hdr; - __le32 reseved; -} __packed; - -struct iwm_umac_cmd_stats_req { - __le32 flags; -} __packed; - -struct iwm_umac_cmd_stop_resume_tx { - u8 flags; - u8 sta_id; - __le16 stop_resume_tid_msk; - __le16 last_seq_num[IWM_UMAC_TID_NR]; - u16 reserved; -} __packed; - -#define IWM_CMD_PMKID_ADD 1 -#define IWM_CMD_PMKID_DEL 2 -#define IWM_CMD_PMKID_FLUSH 3 - -struct iwm_umac_pmkid_update { - struct iwm_umac_wifi_if hdr; - __le32 command; - u8 bssid[ETH_ALEN]; - __le16 reserved; - u8 pmkid[WLAN_PMKID_LEN]; -} __packed; - -/* LMAC commands */ -int iwm_read_mac(struct iwm_priv *iwm, u8 *mac); -int iwm_send_prio_table(struct iwm_priv *iwm); -int iwm_send_init_calib_cfg(struct iwm_priv *iwm, u8 calib_requested); -int iwm_send_periodic_calib_cfg(struct iwm_priv *iwm, u8 calib_requested); -int iwm_send_calib_results(struct iwm_priv *iwm); -int iwm_store_rxiq_calib_result(struct iwm_priv *iwm); -int iwm_send_ct_kill_cfg(struct iwm_priv *iwm, u8 entry, u8 exit); - -/* UMAC commands */ -int iwm_send_wifi_if_cmd(struct iwm_priv *iwm, void *payload, u16 payload_size, - bool resp); -int iwm_send_umac_reset(struct iwm_priv *iwm, __le32 reset_flags, bool resp); -int iwm_umac_set_config_fix(struct iwm_priv *iwm, u16 tbl, u16 key, u32 value); -int iwm_umac_set_config_var(struct iwm_priv *iwm, u16 key, - void *payload, u16 payload_size); -int iwm_send_umac_config(struct iwm_priv *iwm, __le32 reset_flags); -int iwm_send_mlme_profile(struct iwm_priv *iwm); -int __iwm_invalidate_mlme_profile(struct iwm_priv *iwm); -int iwm_invalidate_mlme_profile(struct iwm_priv *iwm); -int iwm_send_packet(struct iwm_priv *iwm, struct sk_buff *skb, int pool_id); -int iwm_set_tx_key(struct iwm_priv *iwm, u8 key_idx); -int iwm_set_key(struct iwm_priv *iwm, bool remove, struct iwm_key *key); -int iwm_tx_power_trigger(struct iwm_priv *iwm); -int iwm_send_umac_stats_req(struct iwm_priv *iwm, u32 flags); -int iwm_send_umac_channel_list(struct iwm_priv *iwm); -int iwm_scan_ssids(struct iwm_priv *iwm, struct cfg80211_ssid *ssids, - int ssid_num); -int iwm_scan_one_ssid(struct iwm_priv *iwm, u8 *ssid, int ssid_len); -int iwm_send_umac_stop_resume_tx(struct iwm_priv *iwm, - struct iwm_umac_notif_stop_resume_tx *ntf); -int iwm_send_pmkid_update(struct iwm_priv *iwm, - struct cfg80211_pmksa *pmksa, u32 command); - -/* UDMA commands */ -int iwm_target_reset(struct iwm_priv *iwm); -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/debug.h b/drivers/net/wireless/iwmc3200wifi/debug.h deleted file mode 100644 index a0c13a49ab3c..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/debug.h +++ /dev/null @@ -1,123 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - 
* - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - */ - -#ifndef __IWM_DEBUG_H__ -#define __IWM_DEBUG_H__ - -#define IWM_ERR(p, f, a...) dev_err(iwm_to_dev(p), f, ## a) -#define IWM_WARN(p, f, a...) dev_warn(iwm_to_dev(p), f, ## a) -#define IWM_INFO(p, f, a...) dev_info(iwm_to_dev(p), f, ## a) -#define IWM_CRIT(p, f, a...) dev_crit(iwm_to_dev(p), f, ## a) - -#ifdef CONFIG_IWM_DEBUG - -#define IWM_DEBUG_MODULE(i, level, module, f, a...) \ -do { \ - if (unlikely(i->dbg.dbg_module[IWM_DM_##module] >= (IWM_DL_##level)))\ - dev_printk(KERN_INFO, (iwm_to_dev(i)), \ - "%s " f, __func__ , ## a); \ -} while (0) - -#define IWM_HEXDUMP(i, level, module, pref, buf, len) \ -do { \ - if (unlikely(i->dbg.dbg_module[IWM_DM_##module] >= (IWM_DL_##level)))\ - print_hex_dump(KERN_INFO, pref, DUMP_PREFIX_OFFSET, \ - 16, 1, buf, len, 1); \ -} while (0) - -#else - -#define IWM_DEBUG_MODULE(i, level, module, f, a...) -#define IWM_HEXDUMP(i, level, module, pref, buf, len) - -#endif /* CONFIG_IWM_DEBUG */ - -/* Debug modules */ -enum iwm_debug_module_id { - IWM_DM_BOOT = 0, - IWM_DM_FW, - IWM_DM_SDIO, - IWM_DM_NTF, - IWM_DM_RX, - IWM_DM_TX, - IWM_DM_MLME, - IWM_DM_CMD, - IWM_DM_WEXT, - __IWM_DM_NR, -}; -#define IWM_DM_DEFAULT 0 - -#define IWM_DBG_BOOT(i, l, f, a...) IWM_DEBUG_MODULE(i, l, BOOT, f, ## a) -#define IWM_DBG_FW(i, l, f, a...) IWM_DEBUG_MODULE(i, l, FW, f, ## a) -#define IWM_DBG_SDIO(i, l, f, a...) IWM_DEBUG_MODULE(i, l, SDIO, f, ## a) -#define IWM_DBG_NTF(i, l, f, a...) IWM_DEBUG_MODULE(i, l, NTF, f, ## a) -#define IWM_DBG_RX(i, l, f, a...) IWM_DEBUG_MODULE(i, l, RX, f, ## a) -#define IWM_DBG_TX(i, l, f, a...) IWM_DEBUG_MODULE(i, l, TX, f, ## a) -#define IWM_DBG_MLME(i, l, f, a...) IWM_DEBUG_MODULE(i, l, MLME, f, ## a) -#define IWM_DBG_CMD(i, l, f, a...) IWM_DEBUG_MODULE(i, l, CMD, f, ## a) -#define IWM_DBG_WEXT(i, l, f, a...) 
IWM_DEBUG_MODULE(i, l, WEXT, f, ## a) - -/* Debug levels */ -enum iwm_debug_level { - IWM_DL_NONE = 0, - IWM_DL_ERR, - IWM_DL_WARN, - IWM_DL_INFO, - IWM_DL_DBG, -}; -#define IWM_DL_DEFAULT IWM_DL_ERR - -struct iwm_debugfs { - struct iwm_priv *iwm; - struct dentry *rootdir; - struct dentry *devdir; - struct dentry *dbgdir; - struct dentry *txdir; - struct dentry *rxdir; - struct dentry *busdir; - - u32 dbg_level; - struct dentry *dbg_level_dentry; - - unsigned long dbg_modules; - struct dentry *dbg_modules_dentry; - - u8 dbg_module[__IWM_DM_NR]; - struct dentry *dbg_module_dentries[__IWM_DM_NR]; - - struct dentry *txq_dentry; - struct dentry *tx_credit_dentry; - struct dentry *rx_ticket_dentry; - - struct dentry *fw_err_dentry; -}; - -#ifdef CONFIG_IWM_DEBUG -void iwm_debugfs_init(struct iwm_priv *iwm); -void iwm_debugfs_exit(struct iwm_priv *iwm); -#else -static inline void iwm_debugfs_init(struct iwm_priv *iwm) {} -static inline void iwm_debugfs_exit(struct iwm_priv *iwm) {} -#endif - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/debugfs.c b/drivers/net/wireless/iwmc3200wifi/debugfs.c deleted file mode 100644 index b6199d124bb9..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/debugfs.c +++ /dev/null @@ -1,488 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. 
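The IWM_DEBUG_MODULE()/IWM_DBG_* macros above gate every message on a per-module level kept in an array indexed by module id, so individual subsystems (boot, fw, sdio, tx, ...) can be made verbose independently of each other. Below is a minimal userspace sketch of the same gating pattern; the module names, level names and fprintf() back end are illustrative, not the driver's.

/* Minimal sketch of per-module debug gating (userspace, illustrative names). */
#include <stdio.h>

enum demo_module { DM_BOOT, DM_FW, DM_TX, __DM_NR };
enum demo_level  { DL_NONE, DL_ERR, DL_WARN, DL_INFO, DL_DBG };

static enum demo_level dbg_module[__DM_NR] = { DL_ERR, DL_ERR, DL_ERR };

#define DEMO_DBG(module, level, fmt, ...)                               \
    do {                                                                \
        if (dbg_module[DM_##module] >= DL_##level)                      \
            fprintf(stderr, "%s " fmt, __func__, ##__VA_ARGS__);        \
    } while (0)

int main(void)
{
    dbg_module[DM_TX] = DL_DBG;                  /* make only TX verbose */
    DEMO_DBG(TX, DBG, "queued %d frames\n", 3);  /* printed */
    DEMO_DBG(FW, DBG, "fw chunk sent\n");        /* suppressed */
    return 0;
}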
- * - */ - -#include <linux/slab.h> -#include <linux/kernel.h> -#include <linux/bitops.h> -#include <linux/debugfs.h> -#include <linux/export.h> - -#include "iwm.h" -#include "bus.h" -#include "rx.h" -#include "debug.h" - -static struct { - u8 id; - char *name; -} iwm_debug_module[__IWM_DM_NR] = { - {IWM_DM_BOOT, "boot"}, - {IWM_DM_FW, "fw"}, - {IWM_DM_SDIO, "sdio"}, - {IWM_DM_NTF, "ntf"}, - {IWM_DM_RX, "rx"}, - {IWM_DM_TX, "tx"}, - {IWM_DM_MLME, "mlme"}, - {IWM_DM_CMD, "cmd"}, - {IWM_DM_WEXT, "wext"}, -}; - -#define add_dbg_module(dbg, name, id, initlevel) \ -do { \ - dbg.dbg_module[id] = (initlevel); \ - dbg.dbg_module_dentries[id] = \ - debugfs_create_x8(name, 0600, \ - dbg.dbgdir, \ - &(dbg.dbg_module[id])); \ -} while (0) - -static int iwm_debugfs_u32_read(void *data, u64 *val) -{ - struct iwm_priv *iwm = data; - - *val = iwm->dbg.dbg_level; - return 0; -} - -static int iwm_debugfs_dbg_level_write(void *data, u64 val) -{ - struct iwm_priv *iwm = data; - int i; - - iwm->dbg.dbg_level = val; - - for (i = 0; i < __IWM_DM_NR; i++) - iwm->dbg.dbg_module[i] = val; - - return 0; -} -DEFINE_SIMPLE_ATTRIBUTE(fops_iwm_dbg_level, - iwm_debugfs_u32_read, iwm_debugfs_dbg_level_write, - "%llu\n"); - -static int iwm_debugfs_dbg_modules_write(void *data, u64 val) -{ - struct iwm_priv *iwm = data; - int i, bit; - - iwm->dbg.dbg_modules = val; - - for (i = 0; i < __IWM_DM_NR; i++) - iwm->dbg.dbg_module[i] = 0; - - for_each_set_bit(bit, &iwm->dbg.dbg_modules, __IWM_DM_NR) - iwm->dbg.dbg_module[bit] = iwm->dbg.dbg_level; - - return 0; -} -DEFINE_SIMPLE_ATTRIBUTE(fops_iwm_dbg_modules, - iwm_debugfs_u32_read, iwm_debugfs_dbg_modules_write, - "%llu\n"); - - -static ssize_t iwm_debugfs_txq_read(struct file *filp, char __user *buffer, - size_t count, loff_t *ppos) -{ - struct iwm_priv *iwm = filp->private_data; - char *buf; - int i, buf_len = 4096; - size_t len = 0; - ssize_t ret; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -ENOSPC; - - buf = kzalloc(buf_len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - for (i = 0; i < IWM_TX_QUEUES; i++) { - struct iwm_tx_queue *txq = &iwm->txq[i]; - struct sk_buff *skb; - int j; - unsigned long flags; - - spin_lock_irqsave(&txq->queue.lock, flags); - - skb = (struct sk_buff *)&txq->queue; - - len += snprintf(buf + len, buf_len - len, "TXQ #%d\n", i); - len += snprintf(buf + len, buf_len - len, "\tStopped: %d\n", - __netif_subqueue_stopped(iwm_to_ndev(iwm), - txq->id)); - len += snprintf(buf + len, buf_len - len, "\tConcat count:%d\n", - txq->concat_count); - len += snprintf(buf + len, buf_len - len, "\tQueue len: %d\n", - skb_queue_len(&txq->queue)); - for (j = 0; j < skb_queue_len(&txq->queue); j++) { - struct iwm_tx_info *tx_info; - - skb = skb->next; - tx_info = skb_to_tx_info(skb); - - len += snprintf(buf + len, buf_len - len, - "\tSKB #%d\n", j); - len += snprintf(buf + len, buf_len - len, - "\t\tsta: %d\n", tx_info->sta); - len += snprintf(buf + len, buf_len - len, - "\t\tcolor: %d\n", tx_info->color); - len += snprintf(buf + len, buf_len - len, - "\t\ttid: %d\n", tx_info->tid); - } - - spin_unlock_irqrestore(&txq->queue.lock, flags); - - spin_lock_irqsave(&txq->stopped_queue.lock, flags); - - len += snprintf(buf + len, buf_len - len, - "\tStopped Queue len: %d\n", - skb_queue_len(&txq->stopped_queue)); - for (j = 0; j < skb_queue_len(&txq->stopped_queue); j++) { - struct iwm_tx_info *tx_info; - - skb = skb->next; - tx_info = skb_to_tx_info(skb); - - len += snprintf(buf + len, buf_len - len, - "\tSKB #%d\n", j); - len += snprintf(buf + len, 
buf_len - len, - "\t\tsta: %d\n", tx_info->sta); - len += snprintf(buf + len, buf_len - len, - "\t\tcolor: %d\n", tx_info->color); - len += snprintf(buf + len, buf_len - len, - "\t\ttid: %d\n", tx_info->tid); - } - - spin_unlock_irqrestore(&txq->stopped_queue.lock, flags); - } - - ret = simple_read_from_buffer(buffer, len, ppos, buf, buf_len); - kfree(buf); - - return ret; -} - -static ssize_t iwm_debugfs_tx_credit_read(struct file *filp, - char __user *buffer, - size_t count, loff_t *ppos) -{ - struct iwm_priv *iwm = filp->private_data; - struct iwm_tx_credit *credit = &iwm->tx_credit; - char *buf; - int i, buf_len = 4096; - size_t len = 0; - ssize_t ret; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -ENOSPC; - - buf = kzalloc(buf_len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - len += snprintf(buf + len, buf_len - len, - "NR pools: %d\n", credit->pool_nr); - len += snprintf(buf + len, buf_len - len, - "pools map: 0x%lx\n", credit->full_pools_map); - - len += snprintf(buf + len, buf_len - len, "\n### POOLS ###\n"); - for (i = 0; i < IWM_MACS_OUT_GROUPS; i++) { - len += snprintf(buf + len, buf_len - len, - "pools entry #%d\n", i); - len += snprintf(buf + len, buf_len - len, - "\tid: %d\n", - credit->pools[i].id); - len += snprintf(buf + len, buf_len - len, - "\tsid: %d\n", - credit->pools[i].sid); - len += snprintf(buf + len, buf_len - len, - "\tmin_pages: %d\n", - credit->pools[i].min_pages); - len += snprintf(buf + len, buf_len - len, - "\tmax_pages: %d\n", - credit->pools[i].max_pages); - len += snprintf(buf + len, buf_len - len, - "\talloc_pages: %d\n", - credit->pools[i].alloc_pages); - len += snprintf(buf + len, buf_len - len, - "\tfreed_pages: %d\n", - credit->pools[i].total_freed_pages); - } - - len += snprintf(buf + len, buf_len - len, "\n### SPOOLS ###\n"); - for (i = 0; i < IWM_MACS_OUT_SGROUPS; i++) { - len += snprintf(buf + len, buf_len - len, - "spools entry #%d\n", i); - len += snprintf(buf + len, buf_len - len, - "\tid: %d\n", - credit->spools[i].id); - len += snprintf(buf + len, buf_len - len, - "\tmax_pages: %d\n", - credit->spools[i].max_pages); - len += snprintf(buf + len, buf_len - len, - "\talloc_pages: %d\n", - credit->spools[i].alloc_pages); - - } - - ret = simple_read_from_buffer(buffer, len, ppos, buf, buf_len); - kfree(buf); - - return ret; -} - -static ssize_t iwm_debugfs_rx_ticket_read(struct file *filp, - char __user *buffer, - size_t count, loff_t *ppos) -{ - struct iwm_priv *iwm = filp->private_data; - struct iwm_rx_ticket_node *ticket; - char *buf; - int buf_len = 4096, i; - size_t len = 0; - ssize_t ret; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -ENOSPC; - - buf = kzalloc(buf_len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - spin_lock(&iwm->ticket_lock); - list_for_each_entry(ticket, &iwm->rx_tickets, node) { - len += snprintf(buf + len, buf_len - len, "Ticket #%d\n", - ticket->ticket->id); - len += snprintf(buf + len, buf_len - len, "\taction: 0x%x\n", - ticket->ticket->action); - len += snprintf(buf + len, buf_len - len, "\tflags: 0x%x\n", - ticket->ticket->flags); - } - spin_unlock(&iwm->ticket_lock); - - for (i = 0; i < IWM_RX_ID_HASH; i++) { - struct iwm_rx_packet *packet; - struct list_head *pkt_list = &iwm->rx_packets[i]; - - if (!list_empty(pkt_list)) { - len += snprintf(buf + len, buf_len - len, - "Packet hash #%d\n", i); - spin_lock(&iwm->packet_lock[i]); - list_for_each_entry(packet, pkt_list, node) { - len += snprintf(buf + len, buf_len - len, - "\tPacket id: %d\n", - packet->id); - len += 
snprintf(buf + len, buf_len - len, - "\tPacket length: %lu\n", - packet->pkt_size); - } - spin_unlock(&iwm->packet_lock[i]); - } - } - - ret = simple_read_from_buffer(buffer, len, ppos, buf, buf_len); - kfree(buf); - - return ret; -} - -static ssize_t iwm_debugfs_fw_err_read(struct file *filp, - char __user *buffer, - size_t count, loff_t *ppos) -{ - - struct iwm_priv *iwm = filp->private_data; - char buf[512]; - int buf_len = 512; - size_t len = 0; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -ENOSPC; - - if (!iwm->last_fw_err) - return -ENOMEM; - - if (iwm->last_fw_err->line_num == 0) - goto out; - - len += snprintf(buf + len, buf_len - len, "%cMAC FW ERROR:\n", - (le32_to_cpu(iwm->last_fw_err->category) == UMAC_SYS_ERR_CAT_LMAC) - ? 'L' : 'U'); - len += snprintf(buf + len, buf_len - len, - "\tCategory: %d\n", - le32_to_cpu(iwm->last_fw_err->category)); - - len += snprintf(buf + len, buf_len - len, - "\tStatus: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->status)); - - len += snprintf(buf + len, buf_len - len, - "\tPC: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->pc)); - - len += snprintf(buf + len, buf_len - len, - "\tblink1: %d\n", - le32_to_cpu(iwm->last_fw_err->blink1)); - - len += snprintf(buf + len, buf_len - len, - "\tblink2: %d\n", - le32_to_cpu(iwm->last_fw_err->blink2)); - - len += snprintf(buf + len, buf_len - len, - "\tilink1: %d\n", - le32_to_cpu(iwm->last_fw_err->ilink1)); - - len += snprintf(buf + len, buf_len - len, - "\tilink2: %d\n", - le32_to_cpu(iwm->last_fw_err->ilink2)); - - len += snprintf(buf + len, buf_len - len, - "\tData1: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->data1)); - - len += snprintf(buf + len, buf_len - len, - "\tData2: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->data2)); - - len += snprintf(buf + len, buf_len - len, - "\tLine number: %d\n", - le32_to_cpu(iwm->last_fw_err->line_num)); - - len += snprintf(buf + len, buf_len - len, - "\tUMAC status: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->umac_status)); - - len += snprintf(buf + len, buf_len - len, - "\tLMAC status: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->lmac_status)); - - len += snprintf(buf + len, buf_len - len, - "\tSDIO status: 0x%x\n", - le32_to_cpu(iwm->last_fw_err->sdio_status)); - -out: - - return simple_read_from_buffer(buffer, len, ppos, buf, buf_len); -} - -static const struct file_operations iwm_debugfs_txq_fops = { - .owner = THIS_MODULE, - .open = simple_open, - .read = iwm_debugfs_txq_read, - .llseek = default_llseek, -}; - -static const struct file_operations iwm_debugfs_tx_credit_fops = { - .owner = THIS_MODULE, - .open = simple_open, - .read = iwm_debugfs_tx_credit_read, - .llseek = default_llseek, -}; - -static const struct file_operations iwm_debugfs_rx_ticket_fops = { - .owner = THIS_MODULE, - .open = simple_open, - .read = iwm_debugfs_rx_ticket_read, - .llseek = default_llseek, -}; - -static const struct file_operations iwm_debugfs_fw_err_fops = { - .owner = THIS_MODULE, - .open = simple_open, - .read = iwm_debugfs_fw_err_read, - .llseek = default_llseek, -}; - -void iwm_debugfs_init(struct iwm_priv *iwm) -{ - int i; - - iwm->dbg.rootdir = debugfs_create_dir(KBUILD_MODNAME, NULL); - iwm->dbg.devdir = debugfs_create_dir(wiphy_name(iwm_to_wiphy(iwm)), - iwm->dbg.rootdir); - iwm->dbg.dbgdir = debugfs_create_dir("debug", iwm->dbg.devdir); - iwm->dbg.rxdir = debugfs_create_dir("rx", iwm->dbg.devdir); - iwm->dbg.txdir = debugfs_create_dir("tx", iwm->dbg.devdir); - iwm->dbg.busdir = debugfs_create_dir("bus", iwm->dbg.devdir); - if (iwm->bus_ops->debugfs_init) - 
iwm->bus_ops->debugfs_init(iwm, iwm->dbg.busdir); - - iwm->dbg.dbg_level = IWM_DL_NONE; - iwm->dbg.dbg_level_dentry = - debugfs_create_file("level", 0200, iwm->dbg.dbgdir, iwm, - &fops_iwm_dbg_level); - - iwm->dbg.dbg_modules = IWM_DM_DEFAULT; - iwm->dbg.dbg_modules_dentry = - debugfs_create_file("modules", 0200, iwm->dbg.dbgdir, iwm, - &fops_iwm_dbg_modules); - - for (i = 0; i < __IWM_DM_NR; i++) - add_dbg_module(iwm->dbg, iwm_debug_module[i].name, - iwm_debug_module[i].id, IWM_DL_DEFAULT); - - iwm->dbg.txq_dentry = debugfs_create_file("queues", 0200, - iwm->dbg.txdir, iwm, - &iwm_debugfs_txq_fops); - iwm->dbg.tx_credit_dentry = debugfs_create_file("credits", 0200, - iwm->dbg.txdir, iwm, - &iwm_debugfs_tx_credit_fops); - iwm->dbg.rx_ticket_dentry = debugfs_create_file("tickets", 0200, - iwm->dbg.rxdir, iwm, - &iwm_debugfs_rx_ticket_fops); - iwm->dbg.fw_err_dentry = debugfs_create_file("last_fw_err", 0200, - iwm->dbg.dbgdir, iwm, - &iwm_debugfs_fw_err_fops); -} - -void iwm_debugfs_exit(struct iwm_priv *iwm) -{ - int i; - - for (i = 0; i < __IWM_DM_NR; i++) - debugfs_remove(iwm->dbg.dbg_module_dentries[i]); - - debugfs_remove(iwm->dbg.dbg_modules_dentry); - debugfs_remove(iwm->dbg.dbg_level_dentry); - debugfs_remove(iwm->dbg.txq_dentry); - debugfs_remove(iwm->dbg.tx_credit_dentry); - debugfs_remove(iwm->dbg.rx_ticket_dentry); - debugfs_remove(iwm->dbg.fw_err_dentry); - if (iwm->bus_ops->debugfs_exit) - iwm->bus_ops->debugfs_exit(iwm); - - debugfs_remove(iwm->dbg.busdir); - debugfs_remove(iwm->dbg.dbgdir); - debugfs_remove(iwm->dbg.txdir); - debugfs_remove(iwm->dbg.rxdir); - debugfs_remove(iwm->dbg.devdir); - debugfs_remove(iwm->dbg.rootdir); -} diff --git a/drivers/net/wireless/iwmc3200wifi/eeprom.c b/drivers/net/wireless/iwmc3200wifi/eeprom.c deleted file mode 100644 index e80e776b74f7..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/eeprom.c +++ /dev/null @@ -1,234 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
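Each of the debugfs read handlers above follows the same shape: allocate a scratch buffer, append formatted lines while tracking how much has been written, and hand the result back through simple_read_from_buffer(), which does the copy_to_user() and *ppos bookkeeping. The fragment below is a kernel-style sketch of that shape only; struct demo_stats and its fields are hypothetical, and this is not the driver's code.

/* Kernel-style fragment (illustrative): format into a scratch buffer,
 * then let simple_read_from_buffer() handle copy_to_user() and *ppos. */
static ssize_t demo_stats_read(struct file *filp, char __user *ubuf,
                               size_t count, loff_t *ppos)
{
    struct demo_stats *st = filp->private_data;   /* hypothetical data */
    const int buf_len = 4096;
    size_t len = 0;
    ssize_t ret;
    char *buf;

    buf = kzalloc(buf_len, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    len += scnprintf(buf + len, buf_len - len, "tx frames: %u\n", st->tx);
    len += scnprintf(buf + len, buf_len - len, "rx frames: %u\n", st->rx);

    ret = simple_read_from_buffer(ubuf, count, ppos, buf, len);
    kfree(buf);
    return ret;
}

Using scnprintf() here keeps len bounded by what was actually written into the buffer, and passing the caller's count together with len as the available length lets repeated reads terminate naturally through *ppos.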
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#include <linux/kernel.h> -#include <linux/slab.h> - -#include "iwm.h" -#include "umac.h" -#include "commands.h" -#include "eeprom.h" - -static struct iwm_eeprom_entry eeprom_map[] = { - [IWM_EEPROM_SIG] = - {"Signature", IWM_EEPROM_SIG_OFF, IWM_EEPROM_SIG_LEN}, - - [IWM_EEPROM_VERSION] = - {"Version", IWM_EEPROM_VERSION_OFF, IWM_EEPROM_VERSION_LEN}, - - [IWM_EEPROM_OEM_HW_VERSION] = - {"OEM HW version", IWM_EEPROM_OEM_HW_VERSION_OFF, - IWM_EEPROM_OEM_HW_VERSION_LEN}, - - [IWM_EEPROM_MAC_VERSION] = - {"MAC version", IWM_EEPROM_MAC_VERSION_OFF, IWM_EEPROM_MAC_VERSION_LEN}, - - [IWM_EEPROM_CARD_ID] = - {"Card ID", IWM_EEPROM_CARD_ID_OFF, IWM_EEPROM_CARD_ID_LEN}, - - [IWM_EEPROM_RADIO_CONF] = - {"Radio config", IWM_EEPROM_RADIO_CONF_OFF, IWM_EEPROM_RADIO_CONF_LEN}, - - [IWM_EEPROM_SKU_CAP] = - {"SKU capabilities", IWM_EEPROM_SKU_CAP_OFF, IWM_EEPROM_SKU_CAP_LEN}, - - [IWM_EEPROM_FAT_CHANNELS_CAP] = - {"HT channels capabilities", IWM_EEPROM_FAT_CHANNELS_CAP_OFF, - IWM_EEPROM_FAT_CHANNELS_CAP_LEN}, - - [IWM_EEPROM_CALIB_RXIQ_OFFSET] = - {"RX IQ offset", IWM_EEPROM_CALIB_RXIQ_OFF, IWM_EEPROM_INDIRECT_LEN}, - - [IWM_EEPROM_CALIB_RXIQ] = - {"Calib RX IQ", 0, IWM_EEPROM_CALIB_RXIQ_LEN}, -}; - - -static int iwm_eeprom_read(struct iwm_priv *iwm, u8 eeprom_id) -{ - int ret; - u32 entry_size, chunk_size, data_offset = 0, addr_offset = 0; - u32 addr; - struct iwm_udma_wifi_cmd udma_cmd; - struct iwm_umac_cmd umac_cmd; - struct iwm_umac_cmd_eeprom_proxy eeprom_cmd; - - if (eeprom_id > (IWM_EEPROM_LAST - 1)) - return -EINVAL; - - entry_size = eeprom_map[eeprom_id].length; - - if (eeprom_id >= IWM_EEPROM_INDIRECT_DATA) { - /* indirect data */ - u32 off_id = eeprom_id - IWM_EEPROM_INDIRECT_DATA + - IWM_EEPROM_INDIRECT_OFFSET; - - eeprom_map[eeprom_id].offset = - *(u16 *)(iwm->eeprom + eeprom_map[off_id].offset) << 1; - } - - addr = eeprom_map[eeprom_id].offset; - - udma_cmd.eop = 1; - udma_cmd.credit_group = 0x4; - udma_cmd.ra_tid = UMAC_HDI_ACT_TBL_IDX_HOST_CMD; - udma_cmd.lmac_offset = 0; - - umac_cmd.id = UMAC_CMD_OPCODE_EEPROM_PROXY; - umac_cmd.resp = 1; - - while (entry_size > 0) { - chunk_size = min_t(u32, entry_size, IWM_MAX_EEPROM_DATA_LEN); - - eeprom_cmd.hdr.type = - cpu_to_le32(IWM_UMAC_CMD_EEPROM_TYPE_READ); - eeprom_cmd.hdr.offset = cpu_to_le32(addr + addr_offset); - eeprom_cmd.hdr.len = cpu_to_le32(chunk_size); - - ret = iwm_hal_send_umac_cmd(iwm, &udma_cmd, - &umac_cmd, &eeprom_cmd, - sizeof(struct iwm_umac_cmd_eeprom_proxy)); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't read eeprom\n"); - return ret; - } - - ret = iwm_notif_handle(iwm, UMAC_CMD_OPCODE_EEPROM_PROXY, - IWM_SRC_UMAC, 2*HZ); - if (ret < 0) { - IWM_ERR(iwm, "Did not get any eeprom answer\n"); - return ret; - } - - data_offset += chunk_size; - addr_offset += chunk_size; - entry_size -= chunk_size; - } - - return 0; -} - -u8 *iwm_eeprom_access(struct iwm_priv *iwm, u8 eeprom_id) -{ - if (!iwm->eeprom) - return ERR_PTR(-ENODEV); - - return iwm->eeprom + eeprom_map[eeprom_id].offset; -} - -int iwm_eeprom_fat_channels(struct iwm_priv *iwm) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct ieee80211_supported_band *band; - u16 *channels, i; - - channels = (u16 *)iwm_eeprom_access(iwm, IWM_EEPROM_FAT_CHANNELS_CAP); - if (IS_ERR(channels)) - return PTR_ERR(channels); - - band = wiphy->bands[IEEE80211_BAND_2GHZ]; - band->ht_cap.ht_supported = true; - - for (i = 0; i < IWM_EEPROM_FAT_CHANNELS_24; i++) - 
if (!(channels[i] & IWM_EEPROM_FAT_CHANNEL_ENABLED)) - band->ht_cap.ht_supported = false; - - band = wiphy->bands[IEEE80211_BAND_5GHZ]; - band->ht_cap.ht_supported = true; - for (i = IWM_EEPROM_FAT_CHANNELS_24; i < IWM_EEPROM_FAT_CHANNELS; i++) - if (!(channels[i] & IWM_EEPROM_FAT_CHANNEL_ENABLED)) - band->ht_cap.ht_supported = false; - - return 0; -} - -u32 iwm_eeprom_wireless_mode(struct iwm_priv *iwm) -{ - u16 sku_cap; - u32 wireless_mode = 0; - - sku_cap = *((u16 *)iwm_eeprom_access(iwm, IWM_EEPROM_SKU_CAP)); - - if (sku_cap & IWM_EEPROM_SKU_CAP_BAND_24GHZ) - wireless_mode |= WIRELESS_MODE_11G; - - if (sku_cap & IWM_EEPROM_SKU_CAP_BAND_52GHZ) - wireless_mode |= WIRELESS_MODE_11A; - - if (sku_cap & IWM_EEPROM_SKU_CAP_11N_ENABLE) - wireless_mode |= WIRELESS_MODE_11N; - - return wireless_mode; -} - - -int iwm_eeprom_init(struct iwm_priv *iwm) -{ - int i, ret = 0; - char name[32]; - - iwm->eeprom = kzalloc(IWM_EEPROM_LEN, GFP_KERNEL); - if (!iwm->eeprom) - return -ENOMEM; - - for (i = IWM_EEPROM_FIRST; i < IWM_EEPROM_LAST; i++) { - ret = iwm_eeprom_read(iwm, i); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't read eeprom entry #%d: %s\n", - i, eeprom_map[i].name); - break; - } - } - - IWM_DBG_BOOT(iwm, DBG, "EEPROM dump:\n"); - for (i = IWM_EEPROM_FIRST; i < IWM_EEPROM_LAST; i++) { - memset(name, 0, 32); - sprintf(name, "%s: ", eeprom_map[i].name); - - IWM_HEXDUMP(iwm, DBG, BOOT, name, - iwm->eeprom + eeprom_map[i].offset, - eeprom_map[i].length); - } - - return ret; -} - -void iwm_eeprom_exit(struct iwm_priv *iwm) -{ - kfree(iwm->eeprom); -} diff --git a/drivers/net/wireless/iwmc3200wifi/eeprom.h b/drivers/net/wireless/iwmc3200wifi/eeprom.h deleted file mode 100644 index 4e3a3fdab0d3..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/eeprom.h +++ /dev/null @@ -1,127 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
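For the indirect EEPROM entries above, the offset recorded in eeprom_map[] is itself fetched from the EEPROM image: a 16-bit word index is read at the pointer entry's location and doubled to get the byte offset of the real data, before the chunked proxy reads start. The sketch below shows only that resolution step as standalone userspace code; the offsets used in main() are made up for the example.

/* Userspace sketch of indirect-offset resolution; example offsets are made up. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <endian.h>   /* le16toh() on glibc */

/* The map slot holds the EEPROM offset of a 16-bit word index; the entry's
 * byte offset is that index shifted left by one (words -> bytes). */
static uint32_t resolve_indirect(const uint8_t *eeprom, uint32_t ptr_off)
{
    uint16_t word_idx;

    memcpy(&word_idx, eeprom + ptr_off, sizeof(word_idx));
    return (uint32_t)le16toh(word_idx) << 1;
}

int main(void)
{
    uint8_t eeprom[0x800] = { 0 };

    eeprom[0xf8] = 0x40;   /* little-endian word index 0x0040 at 0xf8 */
    printf("byte offset: 0x%x\n", resolve_indirect(eeprom, 0xf8));
    return 0;
}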
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_EEPROM_H__ -#define __IWM_EEPROM_H__ - -enum { - IWM_EEPROM_SIG = 0, - IWM_EEPROM_FIRST = IWM_EEPROM_SIG, - IWM_EEPROM_VERSION, - IWM_EEPROM_OEM_HW_VERSION, - IWM_EEPROM_MAC_VERSION, - IWM_EEPROM_CARD_ID, - IWM_EEPROM_RADIO_CONF, - IWM_EEPROM_SKU_CAP, - IWM_EEPROM_FAT_CHANNELS_CAP, - - IWM_EEPROM_INDIRECT_OFFSET, - IWM_EEPROM_CALIB_RXIQ_OFFSET = IWM_EEPROM_INDIRECT_OFFSET, - - IWM_EEPROM_INDIRECT_DATA, - IWM_EEPROM_CALIB_RXIQ = IWM_EEPROM_INDIRECT_DATA, - - IWM_EEPROM_LAST, -}; - -#define IWM_EEPROM_SIG_OFF 0x00 -#define IWM_EEPROM_VERSION_OFF (0x54 << 1) -#define IWM_EEPROM_OEM_HW_VERSION_OFF (0x56 << 1) -#define IWM_EEPROM_MAC_VERSION_OFF (0x30 << 1) -#define IWM_EEPROM_CARD_ID_OFF (0x5d << 1) -#define IWM_EEPROM_RADIO_CONF_OFF (0x58 << 1) -#define IWM_EEPROM_SKU_CAP_OFF (0x55 << 1) -#define IWM_EEPROM_CALIB_CONFIG_OFF (0x7c << 1) -#define IWM_EEPROM_FAT_CHANNELS_CAP_OFF (0xde << 1) - -#define IWM_EEPROM_SIG_LEN 4 -#define IWM_EEPROM_VERSION_LEN 2 -#define IWM_EEPROM_OEM_HW_VERSION_LEN 2 -#define IWM_EEPROM_MAC_VERSION_LEN 1 -#define IWM_EEPROM_CARD_ID_LEN 2 -#define IWM_EEPROM_RADIO_CONF_LEN 2 -#define IWM_EEPROM_SKU_CAP_LEN 2 -#define IWM_EEPROM_FAT_CHANNELS_CAP_LEN 40 -#define IWM_EEPROM_INDIRECT_LEN 2 - -#define IWM_MAX_EEPROM_DATA_LEN 240 -#define IWM_EEPROM_LEN 0x800 - -#define IWM_EEPROM_MIN_ALLOWED_VERSION 0x0610 -#define IWM_EEPROM_MAX_ALLOWED_VERSION 0x0700 -#define IWM_EEPROM_CURRENT_VERSION 0x0612 - -#define IWM_EEPROM_SKU_CAP_BAND_24GHZ (1 << 4) -#define IWM_EEPROM_SKU_CAP_BAND_52GHZ (1 << 5) -#define IWM_EEPROM_SKU_CAP_11N_ENABLE (1 << 6) - -#define IWM_EEPROM_FAT_CHANNELS 20 -/* 2.4 gHz FAT primary channels: 1, 2, 3, 4, 5, 6, 7, 8, 9 */ -#define IWM_EEPROM_FAT_CHANNELS_24 9 -/* 5.2 gHz FAT primary channels: 36,44,52,60,100,108,116,124,132,149,157 */ -#define IWM_EEPROM_FAT_CHANNELS_52 11 - -#define IWM_EEPROM_FAT_CHANNEL_ENABLED (1 << 0) - -enum { - IWM_EEPROM_CALIB_CAL_HDR, - IWM_EEPROM_CALIB_TX_POWER, - IWM_EEPROM_CALIB_XTAL, - IWM_EEPROM_CALIB_TEMPERATURE, - IWM_EEPROM_CALIB_RX_BB_FILTER, - IWM_EEPROM_CALIB_RX_IQ, - IWM_EEPROM_CALIB_MAX, -}; - -#define IWM_EEPROM_CALIB_RXIQ_OFF (IWM_EEPROM_CALIB_CONFIG_OFF + \ - (IWM_EEPROM_CALIB_RX_IQ << 1)) -#define IWM_EEPROM_CALIB_RXIQ_LEN sizeof(struct iwm_lmac_calib_rxiq) - -struct iwm_eeprom_entry { - char *name; - u32 offset; - u32 length; -}; - -int iwm_eeprom_init(struct iwm_priv *iwm); -void iwm_eeprom_exit(struct iwm_priv *iwm); -u8 *iwm_eeprom_access(struct iwm_priv *iwm, u8 eeprom_id); -int iwm_eeprom_fat_channels(struct iwm_priv *iwm); -u32 iwm_eeprom_wireless_mode(struct iwm_priv *iwm); - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/fw.c b/drivers/net/wireless/iwmc3200wifi/fw.c deleted file mode 100644 index 6f1afe6bbc8c..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/fw.c +++ /dev/null @@ -1,416 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. 
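The SKU capability halfword defined above is decoded bit by bit into the driver's wireless-mode mask in iwm_eeprom_wireless_mode(): the 2.4 GHz band bit enables 11g, the 5.2 GHz bit enables 11a, and the 11n bit enables HT. A compact userspace sketch of the same decoding follows; the SKU bit positions mirror the header, while the MODE_* flag values are illustrative since WIRELESS_MODE_* is defined elsewhere in the driver.

/* Userspace sketch of SKU-capability decoding; MODE_* values are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define SKU_CAP_BAND_24GHZ  (1 << 4)
#define SKU_CAP_BAND_52GHZ  (1 << 5)
#define SKU_CAP_11N_ENABLE  (1 << 6)

#define MODE_11A  (1 << 0)
#define MODE_11G  (1 << 1)
#define MODE_11N  (1 << 2)

static uint32_t sku_to_modes(uint16_t sku_cap)
{
    uint32_t modes = 0;

    if (sku_cap & SKU_CAP_BAND_24GHZ)
        modes |= MODE_11G;
    if (sku_cap & SKU_CAP_BAND_52GHZ)
        modes |= MODE_11A;
    if (sku_cap & SKU_CAP_11N_ENABLE)
        modes |= MODE_11N;
    return modes;
}

int main(void)
{
    printf("modes: 0x%x\n",
           sku_to_modes(SKU_CAP_BAND_24GHZ | SKU_CAP_11N_ENABLE));
    return 0;
}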
- * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#include <linux/kernel.h> -#include <linux/firmware.h> - -#include "iwm.h" -#include "bus.h" -#include "hal.h" -#include "umac.h" -#include "debug.h" -#include "fw.h" -#include "commands.h" - -static const char fw_barker[] = "*WESTOPFORNOONE*"; - -/* - * @op_code: Op code we're looking for. - * @index: There can be several instances of the same opcode within - * the firmware. Index specifies which one we're looking for. 
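As the comment above describes, the loader scans the image for the Nth record carrying a given op code, skipping a fixed-size record header plus each record's payload as it goes. The function below is a standalone userspace sketch of that search over an (op_code, len, payload) stream; the record layout is a simplified stand-in for iwm_fw_hdr_rec and a little-endian host is assumed, as in the original.

/* Userspace sketch: find the Nth record with a given op code in an
 * (op_code, len, payload) stream. Record layout is a simplified stand-in;
 * a little-endian host is assumed. */
#include <stdint.h>
#include <stddef.h>

struct demo_rec {
    uint16_t op_code;
    uint16_t len;
    uint8_t  data[];
} __attribute__((packed));

#define DEMO_OP_INVALID 0x00

/* Returns the offset of the requested record's payload, or -1 if not found. */
long demo_op_offset(const uint8_t *img, size_t size,
                    uint16_t op_code, unsigned int index)
{
    size_t off = 0;
    unsigned int seen = 0;

    while (off + sizeof(struct demo_rec) <= size) {
        const struct demo_rec *rec = (const void *)(img + off);

        if (rec->op_code == DEMO_OP_INVALID)
            break;                       /* padding: end of meaningful records */
        if (rec->op_code == op_code && seen++ == index)
            return (long)(off + sizeof(struct demo_rec));

        off += sizeof(struct demo_rec) + rec->len;
    }
    return -1;
}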
- */ -static int iwm_fw_op_offset(struct iwm_priv *iwm, const struct firmware *fw, - u16 op_code, u32 index) -{ - int offset = -EINVAL, fw_offset; - u32 op_index = 0; - const u8 *fw_ptr; - struct iwm_fw_hdr_rec *rec; - - fw_offset = 0; - fw_ptr = fw->data; - - /* We first need to look for the firmware barker */ - if (memcmp(fw_ptr, fw_barker, IWM_HDR_BARKER_LEN)) { - IWM_ERR(iwm, "No barker string in this FW\n"); - return -EINVAL; - } - - if (fw->size < IWM_HDR_LEN) { - IWM_ERR(iwm, "FW is too small (%zu)\n", fw->size); - return -EINVAL; - } - - fw_offset += IWM_HDR_BARKER_LEN; - - while (fw_offset < fw->size) { - rec = (struct iwm_fw_hdr_rec *)(fw_ptr + fw_offset); - - IWM_DBG_FW(iwm, DBG, "FW: op_code: 0x%x, len: %d @ 0x%x\n", - rec->op_code, rec->len, fw_offset); - - if (rec->op_code == IWM_HDR_REC_OP_INVALID) { - IWM_DBG_FW(iwm, DBG, "Reached INVALID op code\n"); - break; - } - - if (rec->op_code == op_code) { - if (op_index == index) { - fw_offset += sizeof(struct iwm_fw_hdr_rec); - offset = fw_offset; - goto out; - } - op_index++; - } - - fw_offset += sizeof(struct iwm_fw_hdr_rec) + rec->len; - } - - out: - return offset; -} - -static int iwm_load_firmware_chunk(struct iwm_priv *iwm, - const struct firmware *fw, - struct iwm_fw_img_desc *img_desc) -{ - struct iwm_udma_nonwifi_cmd target_cmd; - u32 chunk_size; - const u8 *chunk_ptr; - int ret = 0; - - IWM_DBG_FW(iwm, INFO, "Loading FW chunk: %d bytes @ 0x%x\n", - img_desc->length, img_desc->address); - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_WRITE; - target_cmd.handle_by_hw = 1; - target_cmd.op2 = 0; - target_cmd.resp = 0; - target_cmd.eop = 1; - - chunk_size = img_desc->length; - chunk_ptr = fw->data + img_desc->offset; - - while (chunk_size > 0) { - u32 tmp_chunk_size; - - tmp_chunk_size = min_t(u32, chunk_size, - IWM_MAX_NONWIFI_CMD_BUFF_SIZE); - - target_cmd.addr = cpu_to_le32(img_desc->address + - (chunk_ptr - fw->data - img_desc->offset)); - target_cmd.op1_sz = cpu_to_le32(tmp_chunk_size); - - IWM_DBG_FW(iwm, DBG, "\t%d bytes @ 0x%x\n", - tmp_chunk_size, target_cmd.addr); - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, chunk_ptr); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't load FW chunk\n"); - break; - } - - chunk_size -= tmp_chunk_size; - chunk_ptr += tmp_chunk_size; - } - - return ret; -} -/* - * To load a fw image to the target, we basically go through the - * fw, looking for OP_MEM_DESC records. Once we found one, we - * pass it to iwm_load_firmware_chunk(). - * The OP_MEM_DESC records contain the actuall memory chunk to be - * sent, but also the destination address. 
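Each OP_MEM_DESC descriptor is turned into a series of bounded write commands: the loop clamps the remaining length to the per-command buffer limit and advances the source pointer and the target address in lockstep. Below is a userspace sketch of that chunking loop; send_write() is an illustrative stand-in for the bus write command and the chunk limit is a made-up value in the spirit of IWM_MAX_NONWIFI_CMD_BUFF_SIZE.

/* Userspace sketch of the bounded chunking loop used to download one image
 * descriptor; send_write() and DEMO_MAX_CHUNK are illustrative stand-ins. */
#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_CHUNK 2016   /* per-command payload limit, illustrative */

static int send_write(uint32_t dst_addr, const uint8_t *src, uint32_t len)
{
    printf("write %u bytes to 0x%08x\n", len, dst_addr);
    return 0;
}

static int load_chunked(uint32_t dst_addr, const uint8_t *src, uint32_t len)
{
    while (len > 0) {
        uint32_t n = len < DEMO_MAX_CHUNK ? len : DEMO_MAX_CHUNK;
        int ret = send_write(dst_addr, src, n);

        if (ret < 0)
            return ret;   /* abort this descriptor on a failed write */
        dst_addr += n;
        src += n;
        len -= n;
    }
    return 0;
}

int main(void)
{
    uint8_t image[5000] = { 0 };

    return load_chunked(0x40000000, image, sizeof(image));
}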
- */ -static int iwm_load_img(struct iwm_priv *iwm, const char *img_name) -{ - const struct firmware *fw; - struct iwm_fw_img_desc *img_desc; - struct iwm_fw_img_ver *ver; - int ret = 0, fw_offset; - u32 opcode_idx = 0, build_date; - char *build_tag; - - ret = request_firmware(&fw, img_name, iwm_to_dev(iwm)); - if (ret) { - IWM_ERR(iwm, "Request firmware failed"); - return ret; - } - - IWM_DBG_FW(iwm, INFO, "Start to load FW %s\n", img_name); - - while (1) { - fw_offset = iwm_fw_op_offset(iwm, fw, - IWM_HDR_REC_OP_MEM_DESC, - opcode_idx); - if (fw_offset < 0) - break; - - img_desc = (struct iwm_fw_img_desc *)(fw->data + fw_offset); - ret = iwm_load_firmware_chunk(iwm, fw, img_desc); - if (ret < 0) - goto err_release_fw; - opcode_idx++; - } - - /* Read firmware version */ - fw_offset = iwm_fw_op_offset(iwm, fw, IWM_HDR_REC_OP_SW_VER, 0); - if (fw_offset < 0) - goto err_release_fw; - - ver = (struct iwm_fw_img_ver *)(fw->data + fw_offset); - - /* Read build tag */ - fw_offset = iwm_fw_op_offset(iwm, fw, IWM_HDR_REC_OP_BUILD_TAG, 0); - if (fw_offset < 0) - goto err_release_fw; - - build_tag = (char *)(fw->data + fw_offset); - - /* Read build date */ - fw_offset = iwm_fw_op_offset(iwm, fw, IWM_HDR_REC_OP_BUILD_DATE, 0); - if (fw_offset < 0) - goto err_release_fw; - - build_date = *(u32 *)(fw->data + fw_offset); - - IWM_INFO(iwm, "%s:\n", img_name); - IWM_INFO(iwm, "\tVersion: %02X.%02X\n", ver->major, ver->minor); - IWM_INFO(iwm, "\tBuild tag: %s\n", build_tag); - IWM_INFO(iwm, "\tBuild date: %x-%x-%x\n", - IWM_BUILD_YEAR(build_date), IWM_BUILD_MONTH(build_date), - IWM_BUILD_DAY(build_date)); - - if (!strcmp(img_name, iwm->bus_ops->umac_name)) - sprintf(iwm->umac_version, "%02X.%02X", - ver->major, ver->minor); - - if (!strcmp(img_name, iwm->bus_ops->lmac_name)) - sprintf(iwm->lmac_version, "%02X.%02X", - ver->major, ver->minor); - - err_release_fw: - release_firmware(fw); - - return ret; -} - -static int iwm_load_umac(struct iwm_priv *iwm) -{ - struct iwm_udma_nonwifi_cmd target_cmd; - int ret; - - ret = iwm_load_img(iwm, iwm->bus_ops->umac_name); - if (ret < 0) - return ret; - - /* We've loaded the UMAC, we can tell the target to jump there */ - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_JUMP; - target_cmd.addr = cpu_to_le32(UMAC_MU_FW_INST_DATA_12_ADDR); - target_cmd.op1_sz = 0; - target_cmd.op2 = 0; - target_cmd.handle_by_hw = 0; - target_cmd.resp = 1 ; - target_cmd.eop = 1; - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, NULL); - if (ret < 0) - IWM_ERR(iwm, "Couldn't send JMP command\n"); - - return ret; -} - -static int iwm_load_lmac(struct iwm_priv *iwm, const char *img_name) -{ - int ret; - - ret = iwm_load_img(iwm, img_name); - if (ret < 0) - return ret; - - return iwm_send_umac_reset(iwm, - cpu_to_le32(UMAC_RST_CTRL_FLG_LARC_CLK_EN), 0); -} - -static int iwm_init_calib(struct iwm_priv *iwm, unsigned long cfg_bitmap, - unsigned long expected_bitmap, u8 rx_iq_cmd) -{ - /* Read RX IQ calibration result from EEPROM */ - if (test_bit(rx_iq_cmd, &cfg_bitmap)) { - iwm_store_rxiq_calib_result(iwm); - set_bit(PHY_CALIBRATE_RX_IQ_CMD, &iwm->calib_done_map); - } - - iwm_send_prio_table(iwm); - iwm_send_init_calib_cfg(iwm, cfg_bitmap); - - while (iwm->calib_done_map != expected_bitmap) { - if (iwm_notif_handle(iwm, CALIBRATION_RES_NOTIFICATION, - IWM_SRC_LMAC, WAIT_NOTIF_TIMEOUT)) { - IWM_DBG_FW(iwm, DBG, "Initial calibration timeout\n"); - return -ETIMEDOUT; - } - - IWM_DBG_FW(iwm, DBG, "Got calibration result. 
calib_done_map: " - "0x%lx, expected calibrations: 0x%lx\n", - iwm->calib_done_map, expected_bitmap); - } - - return 0; -} - -/* - * We currently have to load 3 FWs: - * 1) The UMAC (Upper MAC). - * 2) The calibration LMAC (Lower MAC). - * We then send the calibration init command, so that the device can - * run a first calibration round. - * 3) The operational LMAC, which replaces the calibration one when it's - * done with the first calibration round. - * - * Once those 3 FWs have been loaded, we send the periodic calibration - * command, and then the device is available for regular 802.11 operations. - */ -int iwm_load_fw(struct iwm_priv *iwm) -{ - unsigned long init_calib_map, periodic_calib_map; - unsigned long expected_calib_map; - int ret; - - /* We first start downloading the UMAC */ - ret = iwm_load_umac(iwm); - if (ret < 0) { - IWM_ERR(iwm, "UMAC loading failed\n"); - return ret; - } - - /* Handle UMAC_ALIVE notification */ - ret = iwm_notif_handle(iwm, UMAC_NOTIFY_OPCODE_ALIVE, IWM_SRC_UMAC, - WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Handle UMAC_ALIVE failed: %d\n", ret); - return ret; - } - - /* UMAC is alive, we can download the calibration LMAC */ - ret = iwm_load_lmac(iwm, iwm->bus_ops->calib_lmac_name); - if (ret) { - IWM_ERR(iwm, "Calibration LMAC loading failed\n"); - return ret; - } - - /* Handle UMAC_INIT_COMPLETE notification */ - ret = iwm_notif_handle(iwm, UMAC_NOTIFY_OPCODE_INIT_COMPLETE, - IWM_SRC_UMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Handle INIT_COMPLETE failed for calibration " - "LMAC: %d\n", ret); - return ret; - } - - /* Read EEPROM data */ - ret = iwm_eeprom_init(iwm); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't init eeprom array\n"); - return ret; - } - - init_calib_map = iwm->conf.calib_map & IWM_CALIB_MAP_INIT_MSK; - expected_calib_map = iwm->conf.expected_calib_map & - IWM_CALIB_MAP_INIT_MSK; - periodic_calib_map = IWM_CALIB_MAP_PER_LMAC(iwm->conf.calib_map); - - ret = iwm_init_calib(iwm, init_calib_map, expected_calib_map, - CALIB_CFG_RX_IQ_IDX); - if (ret < 0) { - /* Let's try the old way */ - ret = iwm_init_calib(iwm, expected_calib_map, - expected_calib_map, - PHY_CALIBRATE_RX_IQ_CMD); - if (ret < 0) { - IWM_ERR(iwm, "Calibration result timeout\n"); - goto out; - } - } - - /* Handle LMAC CALIBRATION_COMPLETE notification */ - ret = iwm_notif_handle(iwm, CALIBRATION_COMPLETE_NOTIFICATION, - IWM_SRC_LMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Wait for CALIBRATION_COMPLETE timeout\n"); - goto out; - } - - IWM_INFO(iwm, "LMAC calibration done: 0x%lx\n", iwm->calib_done_map); - - iwm_send_umac_reset(iwm, cpu_to_le32(UMAC_RST_CTRL_FLG_LARC_RESET), 1); - - ret = iwm_notif_handle(iwm, UMAC_CMD_OPCODE_RESET, IWM_SRC_UMAC, - WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Wait for UMAC RESET timeout\n"); - goto out; - } - - /* Download the operational LMAC */ - ret = iwm_load_lmac(iwm, iwm->bus_ops->lmac_name); - if (ret) { - IWM_ERR(iwm, "LMAC loading failed\n"); - goto out; - } - - ret = iwm_notif_handle(iwm, UMAC_NOTIFY_OPCODE_INIT_COMPLETE, - IWM_SRC_UMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Handle INIT_COMPLETE failed for LMAC: %d\n", ret); - goto out; - } - - iwm_send_prio_table(iwm); - iwm_send_calib_results(iwm); - iwm_send_periodic_calib_cfg(iwm, periodic_calib_map); - iwm_send_ct_kill_cfg(iwm, iwm->conf.ct_kill_entry, - iwm->conf.ct_kill_exit); - - return 0; - - out: - iwm_eeprom_exit(iwm); - return ret; -} diff --git a/drivers/net/wireless/iwmc3200wifi/fw.h b/drivers/net/wireless/iwmc3200wifi/fw.h 
deleted file mode 100644 index c70a3b40dad3..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/fw.h +++ /dev/null @@ -1,100 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_FW_H__ -#define __IWM_FW_H__ - -/** - * struct iwm_fw_hdr_rec - An iwm firmware image is a - * concatenation of various records. Each of them is - * defined by an ID (aka op code), a length, and the - * actual data. 
- * @op_code: The record ID, see IWM_HDR_REC_OP_* - * - * @len: The record payload length - * - * @buf: The record payload - */ -struct iwm_fw_hdr_rec { - u16 op_code; - u16 len; - u8 buf[0]; -}; - -/* Header's definitions */ -#define IWM_HDR_LEN (512) -#define IWM_HDR_BARKER_LEN (16) - -/* Header's opcodes */ -#define IWM_HDR_REC_OP_INVALID (0x00) -#define IWM_HDR_REC_OP_BUILD_DATE (0x01) -#define IWM_HDR_REC_OP_BUILD_TAG (0x02) -#define IWM_HDR_REC_OP_SW_VER (0x03) -#define IWM_HDR_REC_OP_HW_SKU (0x04) -#define IWM_HDR_REC_OP_BUILD_OPT (0x05) -#define IWM_HDR_REC_OP_MEM_DESC (0x06) -#define IWM_HDR_REC_USERDEFS (0x07) - -/* Header's records length (in bytes) */ -#define IWM_HDR_REC_LEN_BUILD_DATE (4) -#define IWM_HDR_REC_LEN_BUILD_TAG (64) -#define IWM_HDR_REC_LEN_SW_VER (4) -#define IWM_HDR_REC_LEN_HW_SKU (4) -#define IWM_HDR_REC_LEN_BUILD_OPT (4) -#define IWM_HDR_REC_LEN_MEM_DESC (12) -#define IWM_HDR_REC_LEN_USERDEF (64) - -#define IWM_BUILD_YEAR(date) ((date >> 16) & 0xffff) -#define IWM_BUILD_MONTH(date) ((date >> 8) & 0xff) -#define IWM_BUILD_DAY(date) (date & 0xff) - -struct iwm_fw_img_desc { - u32 offset; - u32 address; - u32 length; -}; - -struct iwm_fw_img_ver { - u8 minor; - u8 major; - u16 reserved; -}; - -int iwm_load_fw(struct iwm_priv *iwm); - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/hal.c b/drivers/net/wireless/iwmc3200wifi/hal.c deleted file mode 100644 index 1cabcb39643f..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/hal.c +++ /dev/null @@ -1,470 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -/* - * Hardware Abstraction Layer for iwm. - * - * This file mostly defines an abstraction API for - * sending various commands to the target. - * - * We have 2 types of commands: wifi and non-wifi ones. 
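The firmware build date in fw.h above is packed into a single 32-bit word (year in the top sixteen bits, then month and day in one byte each), and the IWM_BUILD_* macros simply shift and mask it back out; the driver prints the fields in hex. A one-screen userspace check of the same unpacking, using an example date word:

/* Userspace sketch: unpack a build-date word the same way the IWM_BUILD_*
 * macros do (year:16 | month:8 | day:8). The date value is an example. */
#include <stdint.h>
#include <stdio.h>

#define BUILD_YEAR(d)  (((d) >> 16) & 0xffff)
#define BUILD_MONTH(d) (((d) >> 8) & 0xff)
#define BUILD_DAY(d)   ((d) & 0xff)

int main(void)
{
    uint32_t date = 0x20090427;   /* example: fields read as hex digits */

    printf("%x-%x-%x\n", BUILD_YEAR(date), BUILD_MONTH(date), BUILD_DAY(date));
    return 0;
}

Parenthesizing the macro argument, as done here, avoids surprises if a caller ever passes an expression rather than a plain variable.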
- * - * - wifi commands: - * They are used for sending LMAC and UMAC commands, - * and thus are the most commonly used ones. - * There are 2 different wifi command types, the regular - * one and the LMAC one. The former is used to send - * UMAC commands (see UMAC_CMD_OPCODE_* from umac.h) - * while the latter is used for sending commands to the - * LMAC. If you look at LMAC commands you'll se that they - * are actually regular iwlwifi target commands encapsulated - * into a special UMAC command called UMAC passthrough. - * This is due to the fact the host talks exclusively - * to the UMAC and so there needs to be a special UMAC - * command for talking to the LMAC. - * This is how a wifi command is laid out: - * ------------------------ - * | iwm_udma_out_wifi_hdr | - * ------------------------ - * | SW meta_data (32 bits) | - * ------------------------ - * | iwm_dev_cmd_hdr | - * ------------------------ - * | payload | - * | .... | - * - * - non-wifi, or general commands: - * Those commands are handled by the device's bootrom, - * and are typically sent when the UMAC and the LMAC - * are not yet available. - * * This is how a non-wifi command is laid out: - * --------------------------- - * | iwm_udma_out_nonwifi_hdr | - * --------------------------- - * | payload | - * | .... | - - * - * All the commands start with a UDMA header, which is - * basically a 32 bits field. The 4 LSB there define - * an opcode that allows the target to differentiate - * between wifi (opcode is 0xf) and non-wifi commands - * (opcode is [0..0xe]). - * - * When a command (wifi or non-wifi) is supposed to receive - * an answer, we queue the command buffer. When we do receive - * a command response from the UMAC, we go through the list - * of pending command, and pass both the command and the answer - * to the rx handler. Each command is sent with a unique - * sequence id, and the answer is sent with the same one. This - * is how we're supposed to match an answer with its command. - * See rx.c:iwm_rx_handle_[non]wifi() and iwm_get_pending_[non]wifi() - * for the implementation details. 
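The response-matching scheme described in the comment above boils down to a pending list keyed by sequence number: a command that expects a reply is queued when it is sent, and the RX path removes and returns the entry whose sequence number matches the incoming response. The sketch below shows only that matching step as standalone userspace code, with a simple singly linked list standing in for the driver's pending lists and locking.

/* Userspace sketch of matching responses to pending commands by sequence
 * number; the list, types and locking are simplified stand-ins. */
#include <stdint.h>
#include <stddef.h>

struct demo_cmd {
    uint16_t seq_num;
    struct demo_cmd *next;
};

static struct demo_cmd *pending;

void queue_pending(struct demo_cmd *cmd)
{
    cmd->next = pending;
    pending = cmd;
}

/* Remove and return the pending command matching a response's seq number. */
struct demo_cmd *get_pending(uint16_t seq_num)
{
    struct demo_cmd **p = &pending;

    while (*p) {
        if ((*p)->seq_num == seq_num) {
            struct demo_cmd *cmd = *p;
            *p = cmd->next;              /* unlink the matched entry */
            return cmd;
        }
        p = &(*p)->next;
    }
    return NULL;   /* unsolicited or already-handled response */
}

int main(void)
{
    struct demo_cmd a = { .seq_num = 7 }, b = { .seq_num = 8 };

    queue_pending(&a);
    queue_pending(&b);
    return get_pending(7) == &a ? 0 : 1;   /* response 7 matches command a */
}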
- */ -#include <linux/kernel.h> -#include <linux/netdevice.h> -#include <linux/slab.h> - -#include "iwm.h" -#include "bus.h" -#include "hal.h" -#include "umac.h" -#include "debug.h" -#include "trace.h" - -static int iwm_nonwifi_cmd_init(struct iwm_priv *iwm, - struct iwm_nonwifi_cmd *cmd, - struct iwm_udma_nonwifi_cmd *udma_cmd) -{ - INIT_LIST_HEAD(&cmd->pending); - - spin_lock(&iwm->cmd_lock); - - cmd->resp_received = 0; - - cmd->seq_num = iwm->nonwifi_seq_num; - udma_cmd->seq_num = cpu_to_le16(cmd->seq_num); - - iwm->nonwifi_seq_num++; - iwm->nonwifi_seq_num %= UMAC_NONWIFI_SEQ_NUM_MAX; - - if (udma_cmd->resp) - list_add_tail(&cmd->pending, &iwm->nonwifi_pending_cmd); - - spin_unlock(&iwm->cmd_lock); - - cmd->buf.start = cmd->buf.payload; - cmd->buf.len = 0; - - memcpy(&cmd->udma_cmd, udma_cmd, sizeof(*udma_cmd)); - - return cmd->seq_num; -} - -u16 iwm_alloc_wifi_cmd_seq(struct iwm_priv *iwm) -{ - u16 seq_num = iwm->wifi_seq_num; - - iwm->wifi_seq_num++; - iwm->wifi_seq_num %= UMAC_WIFI_SEQ_NUM_MAX; - - return seq_num; -} - -static void iwm_wifi_cmd_init(struct iwm_priv *iwm, - struct iwm_wifi_cmd *cmd, - struct iwm_udma_wifi_cmd *udma_cmd, - struct iwm_umac_cmd *umac_cmd, - struct iwm_lmac_cmd *lmac_cmd, - u16 payload_size) -{ - INIT_LIST_HEAD(&cmd->pending); - - spin_lock(&iwm->cmd_lock); - - cmd->seq_num = iwm_alloc_wifi_cmd_seq(iwm); - umac_cmd->seq_num = cpu_to_le16(cmd->seq_num); - - if (umac_cmd->resp) - list_add_tail(&cmd->pending, &iwm->wifi_pending_cmd); - - spin_unlock(&iwm->cmd_lock); - - cmd->buf.start = cmd->buf.payload; - cmd->buf.len = 0; - - if (lmac_cmd) { - cmd->buf.start -= sizeof(struct iwm_lmac_hdr); - - lmac_cmd->seq_num = cpu_to_le16(cmd->seq_num); - lmac_cmd->count = cpu_to_le16(payload_size); - - memcpy(&cmd->lmac_cmd, lmac_cmd, sizeof(*lmac_cmd)); - - umac_cmd->count = cpu_to_le16(sizeof(struct iwm_lmac_hdr)); - } else - umac_cmd->count = 0; - - umac_cmd->count = cpu_to_le16(payload_size + - le16_to_cpu(umac_cmd->count)); - udma_cmd->count = cpu_to_le16(sizeof(struct iwm_umac_fw_cmd_hdr) + - le16_to_cpu(umac_cmd->count)); - - memcpy(&cmd->udma_cmd, udma_cmd, sizeof(*udma_cmd)); - memcpy(&cmd->umac_cmd, umac_cmd, sizeof(*umac_cmd)); -} - -void iwm_cmd_flush(struct iwm_priv *iwm) -{ - struct iwm_wifi_cmd *wcmd, *wnext; - struct iwm_nonwifi_cmd *nwcmd, *nwnext; - - list_for_each_entry_safe(wcmd, wnext, &iwm->wifi_pending_cmd, pending) { - list_del(&wcmd->pending); - kfree(wcmd); - } - - list_for_each_entry_safe(nwcmd, nwnext, &iwm->nonwifi_pending_cmd, - pending) { - list_del(&nwcmd->pending); - kfree(nwcmd); - } -} - -struct iwm_wifi_cmd *iwm_get_pending_wifi_cmd(struct iwm_priv *iwm, u16 seq_num) -{ - struct iwm_wifi_cmd *cmd; - - list_for_each_entry(cmd, &iwm->wifi_pending_cmd, pending) - if (cmd->seq_num == seq_num) { - list_del(&cmd->pending); - return cmd; - } - - return NULL; -} - -struct iwm_nonwifi_cmd *iwm_get_pending_nonwifi_cmd(struct iwm_priv *iwm, - u8 seq_num, u8 cmd_opcode) -{ - struct iwm_nonwifi_cmd *cmd; - - list_for_each_entry(cmd, &iwm->nonwifi_pending_cmd, pending) - if ((cmd->seq_num == seq_num) && - (cmd->udma_cmd.opcode == cmd_opcode) && - (cmd->resp_received)) { - list_del(&cmd->pending); - return cmd; - } - - return NULL; -} - -static void iwm_build_udma_nonwifi_hdr(struct iwm_priv *iwm, - struct iwm_udma_out_nonwifi_hdr *hdr, - struct iwm_udma_nonwifi_cmd *cmd) -{ - memset(hdr, 0, sizeof(*hdr)); - - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_OPCODE, cmd->opcode); - SET_VAL32(hdr->cmd, UDMA_HDI_OUT_NW_CMD_RESP, cmd->resp); - 
SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_EOT, 1); - SET_VAL32(hdr->cmd, UDMA_HDI_OUT_NW_CMD_HANDLE_BY_HW, - cmd->handle_by_hw); - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_SIGNATURE, UMAC_HDI_OUT_SIGNATURE); - SET_VAL32(hdr->cmd, UDMA_HDI_OUT_CMD_NON_WIFI_HW_SEQ_NUM, - le16_to_cpu(cmd->seq_num)); - - hdr->addr = cmd->addr; - hdr->op1_sz = cmd->op1_sz; - hdr->op2 = cmd->op2; -} - -static int iwm_send_udma_nonwifi_cmd(struct iwm_priv *iwm, - struct iwm_nonwifi_cmd *cmd) -{ - struct iwm_udma_out_nonwifi_hdr *udma_hdr; - struct iwm_nonwifi_cmd_buff *buf; - struct iwm_udma_nonwifi_cmd *udma_cmd = &cmd->udma_cmd; - - buf = &cmd->buf; - - buf->start -= sizeof(struct iwm_umac_nonwifi_out_hdr); - buf->len += sizeof(struct iwm_umac_nonwifi_out_hdr); - - udma_hdr = (struct iwm_udma_out_nonwifi_hdr *)(buf->start); - - iwm_build_udma_nonwifi_hdr(iwm, udma_hdr, udma_cmd); - - IWM_DBG_CMD(iwm, DBG, - "Send UDMA nonwifi cmd: opcode = 0x%x, resp = 0x%x, " - "hw = 0x%x, seqnum = %d, addr = 0x%x, op1_sz = 0x%x, " - "op2 = 0x%x\n", udma_cmd->opcode, udma_cmd->resp, - udma_cmd->handle_by_hw, cmd->seq_num, udma_cmd->addr, - udma_cmd->op1_sz, udma_cmd->op2); - - trace_iwm_tx_nonwifi_cmd(iwm, udma_hdr); - return iwm_bus_send_chunk(iwm, buf->start, buf->len); -} - -void iwm_udma_wifi_hdr_set_eop(struct iwm_priv *iwm, u8 *buf, u8 eop) -{ - struct iwm_udma_out_wifi_hdr *hdr = (struct iwm_udma_out_wifi_hdr *)buf; - - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_EOT, eop); -} - -void iwm_build_udma_wifi_hdr(struct iwm_priv *iwm, - struct iwm_udma_out_wifi_hdr *hdr, - struct iwm_udma_wifi_cmd *cmd) -{ - memset(hdr, 0, sizeof(*hdr)); - - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_OPCODE, UMAC_HDI_OUT_OPCODE_WIFI); - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_EOT, cmd->eop); - SET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_SIGNATURE, UMAC_HDI_OUT_SIGNATURE); - - SET_VAL32(hdr->meta_data, UMAC_HDI_OUT_BYTE_COUNT, - le16_to_cpu(cmd->count)); - SET_VAL32(hdr->meta_data, UMAC_HDI_OUT_CREDIT_GRP, cmd->credit_group); - SET_VAL32(hdr->meta_data, UMAC_HDI_OUT_RATID, cmd->ra_tid); - SET_VAL32(hdr->meta_data, UMAC_HDI_OUT_LMAC_OFFSET, cmd->lmac_offset); -} - -void iwm_build_umac_hdr(struct iwm_priv *iwm, - struct iwm_umac_fw_cmd_hdr *hdr, - struct iwm_umac_cmd *cmd) -{ - memset(hdr, 0, sizeof(*hdr)); - - SET_VAL32(hdr->meta_data, UMAC_FW_CMD_BYTE_COUNT, - le16_to_cpu(cmd->count)); - SET_VAL32(hdr->meta_data, UMAC_FW_CMD_TX_STA_COLOR, cmd->color); - SET_VAL8(hdr->cmd.flags, UMAC_DEV_CMD_FLAGS_RESP_REQ, cmd->resp); - - hdr->cmd.cmd = cmd->id; - hdr->cmd.seq_num = cmd->seq_num; -} - -static int iwm_send_udma_wifi_cmd(struct iwm_priv *iwm, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_wifi_out_hdr *umac_hdr; - struct iwm_wifi_cmd_buff *buf; - struct iwm_udma_wifi_cmd *udma_cmd = &cmd->udma_cmd; - struct iwm_umac_cmd *umac_cmd = &cmd->umac_cmd; - int ret; - - buf = &cmd->buf; - - buf->start -= sizeof(struct iwm_umac_wifi_out_hdr); - buf->len += sizeof(struct iwm_umac_wifi_out_hdr); - - umac_hdr = (struct iwm_umac_wifi_out_hdr *)(buf->start); - - iwm_build_udma_wifi_hdr(iwm, &umac_hdr->hw_hdr, udma_cmd); - iwm_build_umac_hdr(iwm, &umac_hdr->sw_hdr, umac_cmd); - - IWM_DBG_CMD(iwm, DBG, - "Send UDMA wifi cmd: opcode = 0x%x, UMAC opcode = 0x%x, " - "eop = 0x%x, count = 0x%x, credit_group = 0x%x, " - "ra_tid = 0x%x, lmac_offset = 0x%x, seqnum = %d\n", - UMAC_HDI_OUT_OPCODE_WIFI, umac_cmd->id, - udma_cmd->eop, udma_cmd->count, udma_cmd->credit_group, - udma_cmd->ra_tid, udma_cmd->lmac_offset, cmd->seq_num); - - if (umac_cmd->id == UMAC_CMD_OPCODE_WIFI_PASS_THROUGH) - 
IWM_DBG_CMD(iwm, DBG, "\tLMAC opcode: 0x%x\n", - cmd->lmac_cmd.id); - - ret = iwm_tx_credit_alloc(iwm, udma_cmd->credit_group, buf->len); - - /* We keep sending UMAC reset regardless of the command credits. - * The UMAC is supposed to be reset anyway and the Tx credits are - * reinitialized afterwards. If we are lucky, the reset could - * still be done even though we have run out of credits for the - * command pool at this moment.*/ - if (ret && (umac_cmd->id != UMAC_CMD_OPCODE_RESET)) { - IWM_DBG_TX(iwm, DBG, "Failed to alloc tx credit for cmd %d\n", - umac_cmd->id); - return ret; - } - - trace_iwm_tx_wifi_cmd(iwm, umac_hdr); - return iwm_bus_send_chunk(iwm, buf->start, buf->len); -} - -/* target_cmd a.k.a udma_nonwifi_cmd can be sent when UMAC is not available */ -int iwm_hal_send_target_cmd(struct iwm_priv *iwm, - struct iwm_udma_nonwifi_cmd *udma_cmd, - const void *payload) -{ - struct iwm_nonwifi_cmd *cmd; - int ret, seq_num; - - cmd = kzalloc(sizeof(struct iwm_nonwifi_cmd), GFP_KERNEL); - if (!cmd) { - IWM_ERR(iwm, "Couldn't alloc memory for hal cmd\n"); - return -ENOMEM; - } - - seq_num = iwm_nonwifi_cmd_init(iwm, cmd, udma_cmd); - - if (cmd->udma_cmd.opcode == UMAC_HDI_OUT_OPCODE_WRITE || - cmd->udma_cmd.opcode == UMAC_HDI_OUT_OPCODE_WRITE_PERSISTENT) { - cmd->buf.len = le32_to_cpu(cmd->udma_cmd.op1_sz); - memcpy(&cmd->buf.payload, payload, cmd->buf.len); - } - - ret = iwm_send_udma_nonwifi_cmd(iwm, cmd); - - if (!udma_cmd->resp) - kfree(cmd); - - if (ret < 0) - return ret; - - return seq_num; -} - -static void iwm_build_lmac_hdr(struct iwm_priv *iwm, struct iwm_lmac_hdr *hdr, - struct iwm_lmac_cmd *cmd) -{ - memset(hdr, 0, sizeof(*hdr)); - - hdr->id = cmd->id; - hdr->flags = 0; /* Is this ever used? */ - hdr->seq_num = cmd->seq_num; -} - -/* - * iwm_hal_send_host_cmd(): sends commands to the UMAC or the LMAC. - * Sending command to the LMAC is equivalent to sending a - * regular UMAC command with the LMAC passthrough or the LMAC - * wrapper UMAC command IDs. - */ -int iwm_hal_send_host_cmd(struct iwm_priv *iwm, - struct iwm_udma_wifi_cmd *udma_cmd, - struct iwm_umac_cmd *umac_cmd, - struct iwm_lmac_cmd *lmac_cmd, - const void *payload, u16 payload_size) -{ - struct iwm_wifi_cmd *cmd; - struct iwm_lmac_hdr *hdr; - int lmac_hdr_len = 0; - int ret; - - cmd = kzalloc(sizeof(struct iwm_wifi_cmd), GFP_KERNEL); - if (!cmd) { - IWM_ERR(iwm, "Couldn't alloc memory for wifi hal cmd\n"); - return -ENOMEM; - } - - iwm_wifi_cmd_init(iwm, cmd, udma_cmd, umac_cmd, lmac_cmd, payload_size); - - if (lmac_cmd) { - hdr = (struct iwm_lmac_hdr *)(cmd->buf.start); - - iwm_build_lmac_hdr(iwm, hdr, &cmd->lmac_cmd); - lmac_hdr_len = sizeof(struct iwm_lmac_hdr); - } - - memcpy(cmd->buf.payload, payload, payload_size); - cmd->buf.len = le16_to_cpu(umac_cmd->count); - - ret = iwm_send_udma_wifi_cmd(iwm, cmd); - - /* We free the cmd if we're not expecting any response */ - if (!umac_cmd->resp) - kfree(cmd); - return ret; -} - -/* - * iwm_hal_send_umac_cmd(): This is a special case for - * iwm_hal_send_host_cmd() to send direct UMAC cmd (without - * LMAC involved). 
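The TX path above normally refuses to send a command when the credit allocation fails, but makes a deliberate exception for the UMAC reset opcode: the reset has to go out regardless, and it reinitializes the credit accounting anyway. Below is a compact userspace sketch of that policy; the opcodes and the boolean credit check are illustrative stand-ins for the per-pool accounting.

/* Userspace sketch of the "always let the reset through" credit policy;
 * opcodes and the credit check are illustrative stand-ins. */
#include <stdbool.h>
#include <stdio.h>

#define DEMO_OP_RESET  0x10
#define DEMO_OP_SCAN   0x20

static bool credits_available;   /* stands in for the per-pool accounting */

static int send_cmd(int opcode)
{
    if (!credits_available && opcode != DEMO_OP_RESET) {
        printf("cmd 0x%x deferred: no tx credits\n", opcode);
        return -1;
    }
    /* The reset is sent even without credits: it will reinitialize them. */
    printf("cmd 0x%x sent\n", opcode);
    return 0;
}

int main(void)
{
    credits_available = false;
    send_cmd(DEMO_OP_SCAN);    /* refused */
    send_cmd(DEMO_OP_RESET);   /* still goes out */
    return 0;
}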
- */ -int iwm_hal_send_umac_cmd(struct iwm_priv *iwm, - struct iwm_udma_wifi_cmd *udma_cmd, - struct iwm_umac_cmd *umac_cmd, - const void *payload, u16 payload_size) -{ - return iwm_hal_send_host_cmd(iwm, udma_cmd, umac_cmd, NULL, - payload, payload_size); -} diff --git a/drivers/net/wireless/iwmc3200wifi/hal.h b/drivers/net/wireless/iwmc3200wifi/hal.h deleted file mode 100644 index c20936d9b6b7..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/hal.h +++ /dev/null @@ -1,237 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
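The hal.c hunk above funnels every UMAC command through iwm_hal_send_host_cmd() and its iwm_hal_send_umac_cmd() wrapper. A rough, kernel-context sketch of how a caller elsewhere in the driver (commands.c, which is not part of this hunk) would drive that API follows. The UDMA_UMAC_INIT initializer and struct iwm_umac_cmd come from hal.h further down in this diff; UMAC_CMD_OPCODE_RESET is only referenced in hal.c above, and struct iwm_umac_cmd_reset (assumed to be a single __le32 flags word in umac.h) is an assumption, not something shown here.

/* Sketch only, reconstructed from the prototypes visible in this diff.
 * struct iwm_umac_cmd_reset is an assumption about umac.h, which is
 * outside this hunk. */
static int example_send_umac_reset(struct iwm_priv *iwm, __le32 reset_flags)
{
	struct iwm_udma_wifi_cmd udma_cmd = UDMA_UMAC_INIT;	/* hal.h below */
	struct iwm_umac_cmd umac_cmd = {
		.id = UMAC_CMD_OPCODE_RESET,
		.resp = 0,				/* don't wait for an answer */
	};
	struct iwm_umac_cmd_reset reset = {
		.flags = reset_flags,
	};

	return iwm_hal_send_umac_cmd(iwm, &udma_cmd, &umac_cmd,
				     &reset, sizeof(reset));
}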
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef _IWM_HAL_H_ -#define _IWM_HAL_H_ - -#include "umac.h" - -#define GET_VAL8(s, name) ((s >> name##_POS) & name##_SEED) -#define GET_VAL16(s, name) ((le16_to_cpu(s) >> name##_POS) & name##_SEED) -#define GET_VAL32(s, name) ((le32_to_cpu(s) >> name##_POS) & name##_SEED) - -#define SET_VAL8(s, name, val) \ -do { \ - s = (s & ~(name##_SEED << name##_POS)) | \ - ((val & name##_SEED) << name##_POS); \ -} while (0) - -#define SET_VAL16(s, name, val) \ -do { \ - s = cpu_to_le16((le16_to_cpu(s) & ~(name##_SEED << name##_POS)) | \ - ((val & name##_SEED) << name##_POS)); \ -} while (0) - -#define SET_VAL32(s, name, val) \ -do { \ - s = cpu_to_le32((le32_to_cpu(s) & ~(name##_SEED << name##_POS)) | \ - ((val & name##_SEED) << name##_POS)); \ -} while (0) - - -#define UDMA_UMAC_INIT { .eop = 1, \ - .credit_group = 0x4, \ - .ra_tid = UMAC_HDI_ACT_TBL_IDX_HOST_CMD, \ - .lmac_offset = 0 } -#define UDMA_LMAC_INIT { .eop = 1, \ - .credit_group = 0x4, \ - .ra_tid = UMAC_HDI_ACT_TBL_IDX_HOST_CMD, \ - .lmac_offset = 4 } - - -/* UDMA IN OP CODE -- cmd bits [3:0] */ -#define UDMA_HDI_IN_NW_CMD_OPCODE_POS 0 -#define UDMA_HDI_IN_NW_CMD_OPCODE_SEED 0xF - -#define UDMA_IN_OPCODE_GENERAL_RESP 0x0 -#define UDMA_IN_OPCODE_READ_RESP 0x1 -#define UDMA_IN_OPCODE_WRITE_RESP 0x2 -#define UDMA_IN_OPCODE_PERS_WRITE_RESP 0x5 -#define UDMA_IN_OPCODE_PERS_READ_RESP 0x6 -#define UDMA_IN_OPCODE_RD_MDFY_WR_RESP 0x7 -#define UDMA_IN_OPCODE_EP_MNGMT_MSG 0x8 -#define UDMA_IN_OPCODE_CRDT_CHNG_MSG 0x9 -#define UDMA_IN_OPCODE_CNTRL_DATABASE_MSG 0xA -#define UDMA_IN_OPCODE_SW_MSG 0xB -#define UDMA_IN_OPCODE_WIFI 0xF -#define UDMA_IN_OPCODE_WIFI_LMAC 0x1F -#define UDMA_IN_OPCODE_WIFI_UMAC 0x2F - -/* HW API: udma_hdi_nonwifi API (OUT and IN) */ - -/* iwm_udma_nonwifi_cmd request response -- bits [9:9] */ -#define UDMA_HDI_OUT_NW_CMD_RESP_POS 9 -#define UDMA_HDI_OUT_NW_CMD_RESP_SEED 0x1 - -/* iwm_udma_nonwifi_cmd handle by HW -- bits [11:11] */ -#define UDMA_HDI_OUT_NW_CMD_HANDLE_BY_HW_POS 11 -#define UDMA_HDI_OUT_NW_CMD_HANDLE_BY_HW_SEED 0x1 - -/* iwm_udma_nonwifi_cmd sequence-number -- bits [12:15] */ -#define UDMA_HDI_OUT_NW_CMD_SEQ_NUM_POS 12 -#define UDMA_HDI_OUT_NW_CMD_SEQ_NUM_SEED 0xF - -/* UDMA IN Non-WIFI HW sequence number -- bits [12:15] */ -#define UDMA_IN_NW_HW_SEQ_NUM_POS 12 -#define UDMA_IN_NW_HW_SEQ_NUM_SEED 0xF - -/* UDMA IN Non-WIFI HW signature -- bits [16:31] */ -#define UDMA_IN_NW_HW_SIG_POS 16 -#define UDMA_IN_NW_HW_SIG_SEED 0xFFFF - -/* fixed signature */ -#define UDMA_IN_NW_HW_SIG 0xCBBC - -/* UDMA IN Non-WIFI HW block length -- bits [32:35] */ -#define UDMA_IN_NW_HW_LENGTH_SEED 0xF -#define UDMA_IN_NW_HW_LENGTH_POS 32 - -/* End of HW API: udma_hdi_nonwifi API (OUT and IN) */ - -#define IWM_SDIO_FW_MAX_CHUNK_SIZE 2032 -#define IWM_MAX_WIFI_HEADERS_SIZE 32 -#define IWM_MAX_NONWIFI_HEADERS_SIZE 16 -#define IWM_MAX_NONWIFI_CMD_BUFF_SIZE (IWM_SDIO_FW_MAX_CHUNK_SIZE - \ - IWM_MAX_NONWIFI_HEADERS_SIZE) -#define IWM_MAX_WIFI_CMD_BUFF_SIZE (IWM_SDIO_FW_MAX_CHUNK_SIZE - \ - IWM_MAX_WIFI_HEADERS_SIZE) - -#define IWM_HAL_CONCATENATE_BUF_SIZE (32 * 1024) - -struct iwm_wifi_cmd_buff { - u16 len; - u8 *start; - u8 hdr[IWM_MAX_WIFI_HEADERS_SIZE]; - u8 payload[IWM_MAX_WIFI_CMD_BUFF_SIZE]; -}; - -struct iwm_nonwifi_cmd_buff { - u16 len; - u8 *start; - u8 hdr[IWM_MAX_NONWIFI_HEADERS_SIZE]; - u8 payload[IWM_MAX_NONWIFI_CMD_BUFF_SIZE]; -}; - -struct iwm_udma_nonwifi_cmd { - u8 opcode; - u8 eop; - u8 
resp; - u8 handle_by_hw; - __le32 addr; - __le32 op1_sz; - __le32 op2; - __le16 seq_num; -}; - -struct iwm_udma_wifi_cmd { - __le16 count; - u8 eop; - u8 credit_group; - u8 ra_tid; - u8 lmac_offset; -}; - -struct iwm_umac_cmd { - u8 id; - __le16 count; - u8 resp; - __le16 seq_num; - u8 color; -}; - -struct iwm_lmac_cmd { - u8 id; - __le16 count; - u8 resp; - __le16 seq_num; -}; - -struct iwm_nonwifi_cmd { - u16 seq_num; - bool resp_received; - struct list_head pending; - struct iwm_udma_nonwifi_cmd udma_cmd; - struct iwm_umac_cmd umac_cmd; - struct iwm_lmac_cmd lmac_cmd; - struct iwm_nonwifi_cmd_buff buf; - u32 flags; -}; - -struct iwm_wifi_cmd { - u16 seq_num; - struct list_head pending; - struct iwm_udma_wifi_cmd udma_cmd; - struct iwm_umac_cmd umac_cmd; - struct iwm_lmac_cmd lmac_cmd; - struct iwm_wifi_cmd_buff buf; - u32 flags; -}; - -void iwm_cmd_flush(struct iwm_priv *iwm); - -struct iwm_wifi_cmd *iwm_get_pending_wifi_cmd(struct iwm_priv *iwm, - u16 seq_num); -struct iwm_nonwifi_cmd *iwm_get_pending_nonwifi_cmd(struct iwm_priv *iwm, - u8 seq_num, u8 cmd_opcode); - - -int iwm_hal_send_target_cmd(struct iwm_priv *iwm, - struct iwm_udma_nonwifi_cmd *ucmd, - const void *payload); - -int iwm_hal_send_host_cmd(struct iwm_priv *iwm, - struct iwm_udma_wifi_cmd *udma_cmd, - struct iwm_umac_cmd *umac_cmd, - struct iwm_lmac_cmd *lmac_cmd, - const void *payload, u16 payload_size); - -int iwm_hal_send_umac_cmd(struct iwm_priv *iwm, - struct iwm_udma_wifi_cmd *udma_cmd, - struct iwm_umac_cmd *umac_cmd, - const void *payload, u16 payload_size); - -u16 iwm_alloc_wifi_cmd_seq(struct iwm_priv *iwm); - -void iwm_udma_wifi_hdr_set_eop(struct iwm_priv *iwm, u8 *buf, u8 eop); -void iwm_build_udma_wifi_hdr(struct iwm_priv *iwm, - struct iwm_udma_out_wifi_hdr *hdr, - struct iwm_udma_wifi_cmd *cmd); -void iwm_build_umac_hdr(struct iwm_priv *iwm, - struct iwm_umac_fw_cmd_hdr *hdr, - struct iwm_umac_cmd *cmd); -#endif /* _IWM_HAL_H_ */ diff --git a/drivers/net/wireless/iwmc3200wifi/iwm.h b/drivers/net/wireless/iwmc3200wifi/iwm.h deleted file mode 100644 index 51d7efa15ae6..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/iwm.h +++ /dev/null @@ -1,367 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
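The hal.h hunk above builds all of its descriptor packing on NAME##_POS / NAME##_SEED pairs consumed by the GET_VAL*/SET_VAL* helpers: SEED is the field mask, POS its bit offset. A minimal standalone illustration of that pattern, using the non-wifi opcode field (cmd bits [3:0]) declared in that header; the real 32-bit helpers additionally wrap the word in cpu_to_le32()/le32_to_cpu(), which is dropped here so the example builds in user space.

#include <stdint.h>
#include <stdio.h>

/* Field definition in the hal.h style: non-wifi opcode lives in cmd bits [3:0]. */
#define UDMA_HDI_IN_NW_CMD_OPCODE_POS	0
#define UDMA_HDI_IN_NW_CMD_OPCODE_SEED	0xF

/* Host-endian versions of the accessors (endianness conversion dropped). */
#define GET_VAL32(s, name) (((s) >> name##_POS) & name##_SEED)
#define SET_VAL32(s, name, val)					\
do {								\
	(s) = ((s) & ~(name##_SEED << name##_POS)) |		\
	      (((val) & name##_SEED) << name##_POS);		\
} while (0)

int main(void)
{
	uint32_t cmd = 0;

	SET_VAL32(cmd, UDMA_HDI_IN_NW_CMD_OPCODE, 0xB);	/* UDMA_IN_OPCODE_SW_MSG */
	printf("cmd = 0x%08x, opcode = 0x%x\n", (unsigned)cmd,
	       (unsigned)GET_VAL32(cmd, UDMA_HDI_IN_NW_CMD_OPCODE));
	return 0;
}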
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_H__ -#define __IWM_H__ - -#include <linux/netdevice.h> -#include <linux/wireless.h> -#include <net/cfg80211.h> - -#include "debug.h" -#include "hal.h" -#include "umac.h" -#include "lmac.h" -#include "eeprom.h" -#include "trace.h" - -#define IWM_COPYRIGHT "Copyright(c) 2009 Intel Corporation" -#define IWM_AUTHOR "<ilw@linux.intel.com>" - -#define IWM_SRC_LMAC UMAC_HDI_IN_SOURCE_FHRX -#define IWM_SRC_UDMA UMAC_HDI_IN_SOURCE_UDMA -#define IWM_SRC_UMAC UMAC_HDI_IN_SOURCE_FW -#define IWM_SRC_NUM 3 - -#define IWM_POWER_INDEX_MIN 0 -#define IWM_POWER_INDEX_MAX 5 -#define IWM_POWER_INDEX_DEFAULT 3 - -struct iwm_conf { - u32 sdio_ior_timeout; - unsigned long calib_map; - unsigned long expected_calib_map; - u8 ct_kill_entry; - u8 ct_kill_exit; - bool reset_on_fatal_err; - bool auto_connect; - bool wimax_not_present; - bool enable_qos; - u32 mode; - - u32 power_index; - u32 frag_threshold; - u32 rts_threshold; - bool cts_to_self; - - u32 assoc_timeout; - u32 roam_timeout; - u32 wireless_mode; - - u8 ibss_band; - u8 ibss_channel; - - u8 mac_addr[ETH_ALEN]; -}; - -enum { - COEX_MODE_SA = 1, - COEX_MODE_XOR, - COEX_MODE_CM, - COEX_MODE_MAX, -}; - -struct iwm_if_ops; -struct iwm_wifi_cmd; - -struct pool_entry { - int id; /* group id */ - int sid; /* super group id */ - int min_pages; /* min capacity in pages */ - int max_pages; /* max capacity in pages */ - int alloc_pages; /* allocated # of pages. incresed by driver */ - int total_freed_pages; /* total freed # of pages. 
incresed by UMAC */ -}; - -struct spool_entry { - int id; - int max_pages; - int alloc_pages; -}; - -struct iwm_tx_credit { - spinlock_t lock; - int pool_nr; - unsigned long full_pools_map; /* bitmap for # of filled tx pools */ - struct pool_entry pools[IWM_MACS_OUT_GROUPS]; - struct spool_entry spools[IWM_MACS_OUT_SGROUPS]; -}; - -struct iwm_notif { - struct list_head pending; - u32 cmd_id; - void *cmd; - u8 src; - void *buf; - unsigned long buf_size; -}; - -struct iwm_tid_info { - __le16 last_seq_num; - bool stopped; - struct mutex mutex; -}; - -struct iwm_sta_info { - u8 addr[ETH_ALEN]; - bool valid; - bool qos; - u8 color; - struct iwm_tid_info tid_info[IWM_UMAC_TID_NR]; -}; - -struct iwm_tx_info { - u8 sta; - u8 color; - u8 tid; -}; - -struct iwm_rx_info { - unsigned long rx_size; - unsigned long rx_buf_size; -}; - -#define IWM_NUM_KEYS 4 - -struct iwm_umac_key_hdr { - u8 mac[ETH_ALEN]; - u8 key_idx; - u8 multicast; /* BCast encrypt & BCast decrypt of frames FROM mac */ -} __packed; - -struct iwm_key { - struct iwm_umac_key_hdr hdr; - u32 cipher; - u8 key[WLAN_MAX_KEY_LEN]; - u8 seq[IW_ENCODE_SEQ_MAX_SIZE]; - int key_len; - int seq_len; -}; - -#define IWM_RX_ID_HASH 0xff -#define IWM_RX_ID_GET_HASH(id) ((id) % IWM_RX_ID_HASH) - -#define IWM_STA_TABLE_NUM 16 -#define IWM_TX_LIST_SIZE 64 -#define IWM_RX_LIST_SIZE 256 - -#define IWM_SCAN_ID_MAX 0xff - -#define IWM_STATUS_READY 0 -#define IWM_STATUS_SCANNING 1 -#define IWM_STATUS_SCAN_ABORTING 2 -#define IWM_STATUS_SME_CONNECTING 3 -#define IWM_STATUS_ASSOCIATED 4 -#define IWM_STATUS_RESETTING 5 - -struct iwm_tx_queue { - int id; - struct sk_buff_head queue; - struct sk_buff_head stopped_queue; - spinlock_t lock; - struct workqueue_struct *wq; - struct work_struct worker; - u8 concat_buf[IWM_HAL_CONCATENATE_BUF_SIZE]; - int concat_count; - u8 *concat_ptr; -}; - -/* Queues 0 ~ 3 for AC data, 5 for iPAN */ -#define IWM_TX_QUEUES 5 -#define IWM_TX_DATA_QUEUES 4 -#define IWM_TX_CMD_QUEUE 4 - -struct iwm_bss_info { - struct list_head node; - struct cfg80211_bss *cfg_bss; - struct iwm_umac_notif_bss_info *bss; -}; - -typedef int (*iwm_handler)(struct iwm_priv *priv, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd); - -#define IWM_WATCHDOG_PERIOD (6 * HZ) - -struct iwm_priv { - struct wireless_dev *wdev; - struct iwm_if_ops *bus_ops; - - struct iwm_conf conf; - - unsigned long status; - - struct list_head pending_notif; - wait_queue_head_t notif_queue; - - wait_queue_head_t nonwifi_queue; - - unsigned long calib_done_map; - struct { - u8 *buf; - u32 size; - } calib_res[CALIBRATION_CMD_NUM]; - - struct iwm_umac_profile *umac_profile; - bool umac_profile_active; - - u8 bssid[ETH_ALEN]; - u8 channel; - u16 rate; - u32 txpower; - - struct iwm_sta_info sta_table[IWM_STA_TABLE_NUM]; - struct list_head bss_list; - - void (*nonwifi_rx_handlers[UMAC_HDI_IN_OPCODE_NONWIFI_MAX]) - (struct iwm_priv *priv, u8 *buf, unsigned long buf_size); - - const iwm_handler *umac_handlers; - const iwm_handler *lmac_handlers; - DECLARE_BITMAP(lmac_handler_map, LMAC_COMMAND_ID_NUM); - DECLARE_BITMAP(umac_handler_map, LMAC_COMMAND_ID_NUM); - DECLARE_BITMAP(udma_handler_map, LMAC_COMMAND_ID_NUM); - - struct list_head wifi_pending_cmd; - struct list_head nonwifi_pending_cmd; - u16 wifi_seq_num; - u8 nonwifi_seq_num; - spinlock_t cmd_lock; - - u32 core_enabled; - - u8 scan_id; - struct cfg80211_scan_request *scan_request; - - struct sk_buff_head rx_list; - struct list_head rx_tickets; - spinlock_t ticket_lock; - struct list_head rx_packets[IWM_RX_ID_HASH]; - 
spinlock_t packet_lock[IWM_RX_ID_HASH]; - struct workqueue_struct *rx_wq; - struct work_struct rx_worker; - - struct iwm_tx_credit tx_credit; - struct iwm_tx_queue txq[IWM_TX_QUEUES]; - - struct iwm_key keys[IWM_NUM_KEYS]; - s8 default_key; - - DECLARE_BITMAP(wifi_ntfy, WIFI_IF_NTFY_MAX); - wait_queue_head_t wifi_ntfy_queue; - - wait_queue_head_t mlme_queue; - - struct iw_statistics wstats; - struct delayed_work stats_request; - struct delayed_work disconnect; - struct delayed_work ct_kill_delay; - - struct iwm_debugfs dbg; - - u8 *eeprom; - struct timer_list watchdog; - struct work_struct reset_worker; - struct work_struct auth_retry_worker; - struct mutex mutex; - - u8 *req_ie; - int req_ie_len; - u8 *resp_ie; - int resp_ie_len; - - struct iwm_fw_error_hdr *last_fw_err; - char umac_version[8]; - char lmac_version[8]; - - char private[0] __attribute__((__aligned__(NETDEV_ALIGN))); -}; - -static inline void *iwm_private(struct iwm_priv *iwm) -{ - BUG_ON(!iwm); - return &iwm->private; -} - -#define hw_to_iwm(h) (h->iwm) -#define iwm_to_dev(i) (wiphy_dev(i->wdev->wiphy)) -#define iwm_to_wiphy(i) (i->wdev->wiphy) -#define wiphy_to_iwm(w) (struct iwm_priv *)(wiphy_priv(w)) -#define iwm_to_wdev(i) (i->wdev) -#define wdev_to_iwm(w) (struct iwm_priv *)(wdev_priv(w)) -#define iwm_to_ndev(i) (i->wdev->netdev) -#define ndev_to_iwm(n) (wdev_to_iwm(n->ieee80211_ptr)) -#define skb_to_rx_info(s) ((struct iwm_rx_info *)(s->cb)) -#define skb_to_tx_info(s) ((struct iwm_tx_info *)s->cb) - -void *iwm_if_alloc(int sizeof_bus, struct device *dev, - struct iwm_if_ops *if_ops); -void iwm_if_free(struct iwm_priv *iwm); -int iwm_if_add(struct iwm_priv *iwm); -void iwm_if_remove(struct iwm_priv *iwm); -int iwm_mode_to_nl80211_iftype(int mode); -int iwm_priv_init(struct iwm_priv *iwm); -void iwm_priv_deinit(struct iwm_priv *iwm); -void iwm_reset(struct iwm_priv *iwm); -void iwm_resetting(struct iwm_priv *iwm); -void iwm_tx_credit_init_pools(struct iwm_priv *iwm, - struct iwm_umac_notif_alive *alive); -int iwm_tx_credit_alloc(struct iwm_priv *iwm, int id, int nb); -int iwm_notif_send(struct iwm_priv *iwm, struct iwm_wifi_cmd *cmd, - u8 cmd_id, u8 source, u8 *buf, unsigned long buf_size); -int iwm_notif_handle(struct iwm_priv *iwm, u32 cmd, u8 source, long timeout); -void iwm_init_default_profile(struct iwm_priv *iwm, - struct iwm_umac_profile *profile); -void iwm_link_on(struct iwm_priv *iwm); -void iwm_link_off(struct iwm_priv *iwm); -int iwm_up(struct iwm_priv *iwm); -int iwm_down(struct iwm_priv *iwm); - -/* TX API */ -int iwm_tid_to_queue(u16 tid); -void iwm_tx_credit_inc(struct iwm_priv *iwm, int id, int total_freed_pages); -void iwm_tx_worker(struct work_struct *work); -int iwm_xmit_frame(struct sk_buff *skb, struct net_device *netdev); - -/* RX API */ -void iwm_rx_setup_handlers(struct iwm_priv *iwm); -int iwm_rx_handle(struct iwm_priv *iwm, u8 *buf, unsigned long buf_size); -int iwm_rx_handle_resp(struct iwm_priv *iwm, u8 *buf, unsigned long buf_size, - struct iwm_wifi_cmd *cmd); -void iwm_rx_free(struct iwm_priv *iwm); - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/lmac.h b/drivers/net/wireless/iwmc3200wifi/lmac.h deleted file mode 100644 index 5ddcdf8c70c0..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/lmac.h +++ /dev/null @@ -1,484 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. 
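struct iwm_priv in the iwm.h hunk above ends with a zero-length, NETDEV_ALIGN-aligned private[] array, and iwm_if_alloc() takes the size of the bus layer's own state so that the allocation can be extended behind the core structure; iwm_private() then returns a pointer to that area. A kernel-context sketch of how a bus back-end (for example the SDIO glue, which is not part of this hunk) would plug into that pattern; the if_example_priv type and its fields are invented purely for illustration.

/* Illustrative only: if_example_priv is made up; the real SDIO back-end
 * lives in sdio.c, outside this diff. */
struct if_example_priv {
	void *bus_handle;		/* e.g. the underlying sdio_func */
	unsigned int ior_timeout;
};

static struct if_example_priv *iwm_to_example(struct iwm_priv *iwm)
{
	return iwm_private(iwm);	/* &iwm->private[0], NETDEV_ALIGN aligned */
}

static int if_example_probe(struct device *dev, struct iwm_if_ops *ops)
{
	struct iwm_priv *iwm;

	/* The core is expected to size the allocation so that these extra
	 * bytes land at iwm->private[]. */
	iwm = iwm_if_alloc(sizeof(struct if_example_priv), dev, ops);
	if (IS_ERR(iwm))
		return PTR_ERR(iwm);

	iwm_to_example(iwm)->ior_timeout = iwm->conf.sdio_ior_timeout;
	return iwm_if_add(iwm);
}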
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_LMAC_H__ -#define __IWM_LMAC_H__ - -struct iwm_lmac_hdr { - u8 id; - u8 flags; - __le16 seq_num; -} __packed; - -/* LMAC commands */ -#define CALIB_CFG_FLAG_SEND_COMPLETE_NTFY_AFTER_MSK 0x1 - -struct iwm_lmac_cal_cfg_elt { - __le32 enable; /* 1 means LMAC needs to do something */ - __le32 start; /* 1 to start calibration, 0 to stop */ - __le32 send_res; /* 1 for sending back results */ - __le32 apply_res; /* 1 for applying calibration results to HW */ - __le32 reserved; -} __packed; - -struct iwm_lmac_cal_cfg_status { - struct iwm_lmac_cal_cfg_elt init; - struct iwm_lmac_cal_cfg_elt periodic; - __le32 flags; /* CALIB_CFG_FLAG_SEND_COMPLETE_NTFY_AFTER_MSK */ -} __packed; - -struct iwm_lmac_cal_cfg_cmd { - struct iwm_lmac_cal_cfg_status ucode_cfg; - struct iwm_lmac_cal_cfg_status driver_cfg; - __le32 reserved; -} __packed; - -struct iwm_lmac_cal_cfg_resp { - __le32 status; -} __packed; - -#define IWM_CARD_STATE_SW_HW_ENABLED 0x00 -#define IWM_CARD_STATE_HW_DISABLED 0x01 -#define IWM_CARD_STATE_SW_DISABLED 0x02 -#define IWM_CARD_STATE_CTKILL_DISABLED 0x04 -#define IWM_CARD_STATE_IS_RXON 0x10 - -struct iwm_lmac_card_state { - __le32 flags; -} __packed; - -/** - * COEX_PRIORITY_TABLE_CMD - * - * Priority entry for each state - * Will keep two tables, for STA and WIPAN - */ -enum { - /* UN-ASSOCIATION PART */ - COEX_UNASSOC_IDLE = 0, - COEX_UNASSOC_MANUAL_SCAN, - COEX_UNASSOC_AUTO_SCAN, - - /* CALIBRATION */ - COEX_CALIBRATION, - COEX_PERIODIC_CALIBRATION, - - /* CONNECTION */ - COEX_CONNECTION_ESTAB, - - /* ASSOCIATION PART */ - COEX_ASSOCIATED_IDLE, - COEX_ASSOC_MANUAL_SCAN, - COEX_ASSOC_AUTO_SCAN, - COEX_ASSOC_ACTIVE_LEVEL, - - /* RF ON/OFF */ - COEX_RF_ON, - COEX_RF_OFF, - COEX_STAND_ALONE_DEBUG, - - /* IPNN */ - COEX_IPAN_ASSOC_LEVEL, - - /* RESERVED */ - COEX_RSRVD1, - COEX_RSRVD2, - - COEX_EVENTS_NUM -}; - -#define COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK 
0x1 -#define COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK 0x2 -#define COEX_EVT_FLAG_DELAY_MEDIUM_FREE_NTFY_MSK 0x4 - -struct coex_event { - u8 req_prio; - u8 win_med_prio; - u8 reserved; - u8 flags; -} __packed; - -#define COEX_FLAGS_STA_TABLE_VALID_MSK 0x1 -#define COEX_FLAGS_UNASSOC_WAKEUP_UMASK_MSK 0x4 -#define COEX_FLAGS_ASSOC_WAKEUP_UMASK_MSK 0x8 -#define COEX_FLAGS_COEX_ENABLE_MSK 0x80 - -struct iwm_coex_prio_table_cmd { - u8 flags; - u8 reserved[3]; - struct coex_event sta_prio[COEX_EVENTS_NUM]; -} __packed; - -/* Coexistence definitions - * - * Constants to fill in the Priorities' Tables - * RP - Requested Priority - * WP - Win Medium Priority: priority assigned when the contention has been won - * FLAGS - Combination of COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK and - * COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK - */ - -#define COEX_UNASSOC_IDLE_FLAGS 0 -#define COEX_UNASSOC_MANUAL_SCAN_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_UNASSOC_AUTO_SCAN_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_CALIBRATION_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_PERIODIC_CALIBRATION_FLAGS 0 -/* COEX_CONNECTION_ESTAB: we need DELAY_MEDIUM_FREE_NTFY to let WiMAX - * disconnect from network. */ -#define COEX_CONNECTION_ESTAB_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK | \ - COEX_EVT_FLAG_DELAY_MEDIUM_FREE_NTFY_MSK) -#define COEX_ASSOCIATED_IDLE_FLAGS 0 -#define COEX_ASSOC_MANUAL_SCAN_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_ASSOC_AUTO_SCAN_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_ASSOC_ACTIVE_LEVEL_FLAGS 0 -#define COEX_RF_ON_FLAGS 0 -#define COEX_RF_OFF_FLAGS 0 -#define COEX_STAND_ALONE_DEBUG_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK) -#define COEX_IPAN_ASSOC_LEVEL_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK | \ - COEX_EVT_FLAG_DELAY_MEDIUM_FREE_NTFY_MSK) -#define COEX_RSRVD1_FLAGS 0 -#define COEX_RSRVD2_FLAGS 0 -/* XOR_RF_ON is the event wrapping all radio ownership. We need - * DELAY_MEDIUM_FREE_NTFY to let WiMAX disconnect from network. 
*/ -#define COEX_XOR_RF_ON_FLAGS (COEX_EVT_FLAG_MEDIUM_FREE_NTFY_MSK | \ - COEX_EVT_FLAG_MEDIUM_ACTV_NTFY_MSK | \ - COEX_EVT_FLAG_DELAY_MEDIUM_FREE_NTFY_MSK) - -/* CT kill config command */ -struct iwm_ct_kill_cfg_cmd { - u32 exit_threshold; - u32 reserved; - u32 entry_threshold; -} __packed; - - -/* LMAC OP CODES */ -#define REPLY_PAD 0x0 -#define REPLY_ALIVE 0x1 -#define REPLY_ERROR 0x2 -#define REPLY_ECHO 0x3 -#define REPLY_HALT 0x6 - -/* RXON state commands */ -#define REPLY_RX_ON 0x10 -#define REPLY_RX_ON_ASSOC 0x11 -#define REPLY_RX_OFF 0x12 -#define REPLY_QOS_PARAM 0x13 -#define REPLY_RX_ON_TIMING 0x14 -#define REPLY_INTERNAL_QOS_PARAM 0x15 -#define REPLY_RX_INT_TIMEOUT_CNFG 0x16 -#define REPLY_NULL 0x17 - -/* Multi-Station support */ -#define REPLY_ADD_STA 0x18 -#define REPLY_REMOVE_STA 0x19 -#define REPLY_RESET_ALL_STA 0x1a - -/* RX, TX */ -#define REPLY_ALM_RX 0x1b -#define REPLY_TX 0x1c -#define REPLY_TXFIFO_FLUSH 0x1e - -/* MISC commands */ -#define REPLY_MGMT_MCAST_KEY 0x1f -#define REPLY_WEPKEY 0x20 -#define REPLY_INIT_IV 0x21 -#define REPLY_WRITE_MIB 0x22 -#define REPLY_READ_MIB 0x23 -#define REPLY_RADIO_FE 0x24 -#define REPLY_TXFIFO_CFG 0x25 -#define REPLY_WRITE_READ 0x26 -#define REPLY_INSTALL_SEC_KEY 0x27 - - -#define REPLY_RATE_SCALE 0x47 -#define REPLY_LEDS_CMD 0x48 -#define REPLY_TX_LINK_QUALITY_CMD 0x4e -#define REPLY_ANA_MIB_OVERRIDE_CMD 0x4f -#define REPLY_WRITE2REG_CMD 0x50 - -/* winfi-wifi coexistence */ -#define COEX_PRIORITY_TABLE_CMD 0x5a -#define COEX_MEDIUM_NOTIFICATION 0x5b -#define COEX_EVENT_CMD 0x5c - -/* more Protocol and Protocol-test commands */ -#define REPLY_MAX_SLEEP_TIME_CMD 0x61 -#define CALIBRATION_CFG_CMD 0x65 -#define CALIBRATION_RES_NOTIFICATION 0x66 -#define CALIBRATION_COMPLETE_NOTIFICATION 0x67 - -/* Measurements */ -#define REPLY_QUIET_CMD 0x71 -#define REPLY_CHANNEL_SWITCH 0x72 -#define CHANNEL_SWITCH_NOTIFICATION 0x73 - -#define REPLY_SPECTRUM_MEASUREMENT_CMD 0x74 -#define SPECTRUM_MEASURE_NOTIFICATION 0x75 -#define REPLY_MEASUREMENT_ABORT_CMD 0x76 - -/* Power Management */ -#define POWER_TABLE_CMD 0x77 -#define SAVE_RESTORE_ADDRESS_CMD 0x78 -#define REPLY_WATERMARK_CMD 0x79 -#define PM_DEBUG_STATISTIC_NOTIFIC 0x7B -#define PD_FLUSH_N_NOTIFICATION 0x7C - -/* Scan commands and notifications */ -#define REPLY_SCAN_REQUEST_CMD 0x80 -#define REPLY_SCAN_ABORT_CMD 0x81 -#define SCAN_START_NOTIFICATION 0x82 -#define SCAN_RESULTS_NOTIFICATION 0x83 -#define SCAN_COMPLETE_NOTIFICATION 0x84 - -/* Continuous TX commands */ -#define REPLY_CONT_TX_CMD 0x85 -#define END_OF_CONT_TX_NOTIFICATION 0x86 - -/* Timer/Eeprom commands */ -#define TIMER_CMD 0x87 -#define EEPROM_WRITE_CMD 0x88 - -/* PAPD commands */ -#define FEEDBACK_REQUEST_NOTIFICATION 0x8b -#define REPLY_CW_CMD 0x8c - -/* IBSS/AP commands Continue */ -#define BEACON_NOTIFICATION 0x90 -#define REPLY_TX_BEACON 0x91 -#define REPLY_REQUEST_ATIM 0x93 -#define WHO_IS_AWAKE_NOTIFICATION 0x94 -#define TX_PWR_DBM_LIMIT_CMD 0x95 -#define QUIET_NOTIFICATION 0x96 -#define TX_PWR_TABLE_CMD 0x97 -#define TX_ANT_CONFIGURATION_CMD 0x98 -#define MEASURE_ABORT_NOTIFICATION 0x99 -#define REPLY_CALIBRATION_TUNE 0x9a - -/* bt config command */ -#define REPLY_BT_CONFIG 0x9b -#define REPLY_STATISTICS_CMD 0x9c -#define STATISTICS_NOTIFICATION 0x9d - -/* RF-KILL commands and notifications */ -#define REPLY_CARD_STATE_CMD 0xa0 -#define CARD_STATE_NOTIFICATION 0xa1 - -/* Missed beacons notification */ -#define MISSED_BEACONS_NOTIFICATION 0xa2 -#define MISSED_BEACONS_NOTIFICATION_TH_CMD 0xa3 - -#define 
REPLY_CT_KILL_CONFIG_CMD 0xa4 - -/* HD commands and notifications */ -#define REPLY_HD_PARAMS_CMD 0xa6 -#define HD_PARAMS_NOTIFICATION 0xa7 -#define SENSITIVITY_CMD 0xa8 -#define U_APSD_PARAMS_CMD 0xa9 -#define NOISY_PLATFORM_CMD 0xaa -#define ILLEGAL_CMD 0xac -#define REPLY_PHY_CALIBRATION_CMD 0xb0 -#define REPLAY_RX_GAIN_CALIB_CMD 0xb1 - -/* WiPAN commands */ -#define REPLY_WIPAN_PARAMS_CMD 0xb2 -#define REPLY_WIPAN_RX_ON_CMD 0xb3 -#define REPLY_WIPAN_RX_ON_TIMING 0xb4 -#define REPLY_WIPAN_TX_PWR_TABLE_CMD 0xb5 -#define REPLY_WIPAN_RXON_ASSOC_CMD 0xb6 -#define REPLY_WIPAN_QOS_PARAM 0xb7 -#define WIPAN_REPLY_WEPKEY 0xb8 - -/* BeamForming commands */ -#define BEAMFORMER_CFG_CMD 0xba -#define BEAMFORMEE_NOTIFICATION 0xbb - -/* TGn new Commands */ -#define REPLY_RX_PHY_CMD 0xc0 -#define REPLY_RX_MPDU_CMD 0xc1 -#define REPLY_MULTICAST_HASH 0xc2 -#define REPLY_KDR_RX 0xc3 -#define REPLY_RX_DSP_EXT_INFO 0xc4 -#define REPLY_COMPRESSED_BA 0xc5 - -/* PNC commands */ -#define PNC_CONFIG_CMD 0xc8 -#define PNC_UPDATE_TABLE_CMD 0xc9 -#define XVT_GENERAL_CTRL_CMD 0xca -#define REPLY_LEGACY_RADIO_FE 0xdd - -/* WoWLAN commands */ -#define WOWLAN_PATTERNS 0xe0 -#define WOWLAN_WAKEUP_FILTER 0xe1 -#define WOWLAN_TSC_RSC_PARAM 0xe2 -#define WOWLAN_TKIP_PARAM 0xe3 -#define WOWLAN_KEK_KCK_MATERIAL 0xe4 -#define WOWLAN_GET_STATUSES 0xe5 -#define WOWLAN_TX_POWER_PER_DB 0xe6 -#define REPLY_WOWLAN_GET_STATUSES WOWLAN_GET_STATUSES - -#define REPLY_DEBUG_CMD 0xf0 -#define REPLY_DSP_DEBUG_CMD 0xf1 -#define REPLY_DEBUG_MONITOR_CMD 0xf2 -#define REPLY_DEBUG_XVT_CMD 0xf3 -#define REPLY_DEBUG_DC_CALIB 0xf4 -#define REPLY_DYNAMIC_BP 0xf5 - -/* General purpose Commands */ -#define REPLY_GP1_CMD 0xfa -#define REPLY_GP2_CMD 0xfb -#define REPLY_GP3_CMD 0xfc -#define REPLY_GP4_CMD 0xfd -#define REPLY_REPLAY_WRAPPER 0xfe -#define REPLY_FRAME_DURATION_CALC_CMD 0xff - -#define LMAC_COMMAND_ID_MAX 0xff -#define LMAC_COMMAND_ID_NUM (LMAC_COMMAND_ID_MAX + 1) - - -/* Calibration */ - -enum { - PHY_CALIBRATE_DC_CMD = 0, - PHY_CALIBRATE_LO_CMD = 1, - PHY_CALIBRATE_RX_BB_CMD = 2, - PHY_CALIBRATE_TX_IQ_CMD = 3, - PHY_CALIBRATE_RX_IQ_CMD = 4, - PHY_CALIBRATION_NOISE_CMD = 5, - PHY_CALIBRATE_AGC_TABLE_CMD = 6, - PHY_CALIBRATE_CRYSTAL_FRQ_CMD = 7, - PHY_CALIBRATE_OPCODES_NUM, - SHILOH_PHY_CALIBRATE_DC_CMD = 8, - SHILOH_PHY_CALIBRATE_LO_CMD = 9, - SHILOH_PHY_CALIBRATE_RX_BB_CMD = 10, - SHILOH_PHY_CALIBRATE_TX_IQ_CMD = 11, - SHILOH_PHY_CALIBRATE_RX_IQ_CMD = 12, - SHILOH_PHY_CALIBRATION_NOISE_CMD = 13, - SHILOH_PHY_CALIBRATE_AGC_TABLE_CMD = 14, - SHILOH_PHY_CALIBRATE_CRYSTAL_FRQ_CMD = 15, - SHILOH_PHY_CALIBRATE_BASE_BAND_CMD = 16, - SHILOH_PHY_CALIBRATE_TXIQ_PERIODIC_CMD = 17, - CALIBRATION_CMD_NUM, -}; - -enum { - CALIB_CFG_RX_BB_IDX = 0, - CALIB_CFG_DC_IDX = 1, - CALIB_CFG_LO_IDX = 2, - CALIB_CFG_TX_IQ_IDX = 3, - CALIB_CFG_RX_IQ_IDX = 4, - CALIB_CFG_NOISE_IDX = 5, - CALIB_CFG_CRYSTAL_IDX = 6, - CALIB_CFG_TEMPERATURE_IDX = 7, - CALIB_CFG_PAPD_IDX = 8, - CALIB_CFG_LAST_IDX = CALIB_CFG_PAPD_IDX, - CALIB_CFG_MODULE_NUM, -}; - -#define IWM_CALIB_MAP_INIT_MSK 0xFFFF -#define IWM_CALIB_MAP_PER_LMAC(m) ((m & 0xFF0000) >> 16) -#define IWM_CALIB_MAP_PER_UMAC(m) ((m & 0xFF000000) >> 24) -#define IWM_CALIB_OPCODE_TO_INDEX(op) (op - PHY_CALIBRATE_OPCODES_NUM) - -struct iwm_lmac_calib_hdr { - u8 opcode; - u8 first_grp; - u8 grp_num; - u8 all_data_valid; -} __packed; - -#define IWM_LMAC_CALIB_FREQ_GROUPS_NR 7 -#define IWM_CALIB_FREQ_GROUPS_NR 5 -#define IWM_CALIB_DC_MODES_NR 12 - -struct iwm_calib_rxiq_entry { - u16 ptam_postdist_ars; - u16 
ptam_postdist_arc; -} __packed; - -struct iwm_calib_rxiq_group { - struct iwm_calib_rxiq_entry mode[IWM_CALIB_DC_MODES_NR]; -} __packed; - -struct iwm_lmac_calib_rxiq { - struct iwm_calib_rxiq_group group[IWM_LMAC_CALIB_FREQ_GROUPS_NR]; -} __packed; - -struct iwm_calib_rxiq { - struct iwm_lmac_calib_hdr hdr; - struct iwm_calib_rxiq_group group[IWM_CALIB_FREQ_GROUPS_NR]; -} __packed; - -#define LMAC_STA_ID_SEED 0x0f -#define LMAC_STA_ID_POS 0 - -#define LMAC_STA_COLOR_SEED 0x7 -#define LMAC_STA_COLOR_POS 4 - -struct iwm_lmac_power_report { - u8 pa_status; - u8 pa_integ_res_A[3]; - u8 pa_integ_res_B[3]; - u8 pa_integ_res_C[3]; -} __packed; - -struct iwm_lmac_tx_resp { - u8 frame_cnt; /* 1-no aggregation, greater then 1 - aggregation */ - u8 bt_kill_cnt; - __le16 retry_cnt; - __le32 initial_tx_rate; - __le16 wireless_media_time; - struct iwm_lmac_power_report power_report; - __le32 tfd_info; - __le16 seq_ctl; - __le16 byte_cnt; - u8 tlc_rate_info; - u8 ra_tid; - __le16 frame_ctl; - __le32 status; -} __packed; - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/main.c b/drivers/net/wireless/iwmc3200wifi/main.c deleted file mode 100644 index 1f868b166d10..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/main.c +++ /dev/null @@ -1,847 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
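The lmac.h hunk above keeps three views of the calibration bitmap in a single 32-bit word: bits [15:0] are the init mask, bits [23:16] the per-LMAC map and bits [31:24] the per-UMAC map, extracted by the IWM_CALIB_MAP_* macros. A small standalone check of those shifts, with the map value chosen arbitrarily for illustration.

#include <stdint.h>
#include <stdio.h>

#define IWM_CALIB_MAP_INIT_MSK		0xFFFF
#define IWM_CALIB_MAP_PER_LMAC(m)	(((m) & 0xFF0000) >> 16)
#define IWM_CALIB_MAP_PER_UMAC(m)	(((m) & 0xFF000000) >> 24)

int main(void)
{
	uint32_t map = 0x0D0A0003;	/* arbitrary example value */

	printf("init map: 0x%04x\n", (unsigned)(map & IWM_CALIB_MAP_INIT_MSK));	/* 0x0003 */
	printf("per LMAC: 0x%02x\n", (unsigned)IWM_CALIB_MAP_PER_LMAC(map));	/* 0x0a */
	printf("per UMAC: 0x%02x\n", (unsigned)IWM_CALIB_MAP_PER_UMAC(map));	/* 0x0d */
	return 0;
}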
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#include <linux/kernel.h> -#include <linux/netdevice.h> -#include <linux/sched.h> -#include <linux/ieee80211.h> -#include <linux/wireless.h> -#include <linux/slab.h> -#include <linux/moduleparam.h> - -#include "iwm.h" -#include "debug.h" -#include "bus.h" -#include "umac.h" -#include "commands.h" -#include "hal.h" -#include "fw.h" -#include "rx.h" - -static struct iwm_conf def_iwm_conf = { - - .sdio_ior_timeout = 5000, - .calib_map = BIT(CALIB_CFG_DC_IDX) | - BIT(CALIB_CFG_LO_IDX) | - BIT(CALIB_CFG_TX_IQ_IDX) | - BIT(CALIB_CFG_RX_IQ_IDX) | - BIT(SHILOH_PHY_CALIBRATE_BASE_BAND_CMD), - .expected_calib_map = BIT(PHY_CALIBRATE_DC_CMD) | - BIT(PHY_CALIBRATE_LO_CMD) | - BIT(PHY_CALIBRATE_TX_IQ_CMD) | - BIT(PHY_CALIBRATE_RX_IQ_CMD) | - BIT(SHILOH_PHY_CALIBRATE_BASE_BAND_CMD), - .ct_kill_entry = 110, - .ct_kill_exit = 110, - .reset_on_fatal_err = 1, - .auto_connect = 1, - .enable_qos = 1, - .mode = UMAC_MODE_BSS, - - /* UMAC configuration */ - .power_index = 0, - .frag_threshold = IEEE80211_MAX_FRAG_THRESHOLD, - .rts_threshold = IEEE80211_MAX_RTS_THRESHOLD, - .cts_to_self = 0, - - .assoc_timeout = 2, - .roam_timeout = 10, - .wireless_mode = WIRELESS_MODE_11A | WIRELESS_MODE_11G | - WIRELESS_MODE_11N, - - /* IBSS */ - .ibss_band = UMAC_BAND_2GHZ, - .ibss_channel = 1, - - .mac_addr = {0x00, 0x02, 0xb3, 0x01, 0x02, 0x03}, -}; - -static bool modparam_reset; -module_param_named(reset, modparam_reset, bool, 0644); -MODULE_PARM_DESC(reset, "reset on firmware errors (default 0 [not reset])"); - -static bool modparam_wimax_enable = true; -module_param_named(wimax_enable, modparam_wimax_enable, bool, 0644); -MODULE_PARM_DESC(wimax_enable, "Enable wimax core (default 1 [wimax enabled])"); - -int iwm_mode_to_nl80211_iftype(int mode) -{ - switch (mode) { - case UMAC_MODE_BSS: - return NL80211_IFTYPE_STATION; - case UMAC_MODE_IBSS: - return NL80211_IFTYPE_ADHOC; - default: - return NL80211_IFTYPE_UNSPECIFIED; - } - - return 0; -} - -static void iwm_statistics_request(struct work_struct *work) -{ - struct iwm_priv *iwm = - container_of(work, struct iwm_priv, stats_request.work); - - iwm_send_umac_stats_req(iwm, 0); -} - -static void iwm_disconnect_work(struct work_struct *work) -{ - struct iwm_priv *iwm = - container_of(work, struct iwm_priv, disconnect.work); - - if (iwm->umac_profile_active) - iwm_invalidate_mlme_profile(iwm); - - clear_bit(IWM_STATUS_ASSOCIATED, &iwm->status); - iwm->umac_profile_active = false; - memset(iwm->bssid, 0, ETH_ALEN); - iwm->channel = 0; - - iwm_link_off(iwm); - - wake_up_interruptible(&iwm->mlme_queue); - - cfg80211_disconnected(iwm_to_ndev(iwm), 0, NULL, 0, GFP_KERNEL); -} - -static void iwm_ct_kill_work(struct work_struct *work) -{ - struct iwm_priv *iwm = - container_of(work, struct iwm_priv, ct_kill_delay.work); - struct wiphy *wiphy = iwm_to_wiphy(iwm); - - IWM_INFO(iwm, "CT kill delay timeout\n"); - - wiphy_rfkill_set_hw_state(wiphy, false); -} - -static int __iwm_up(struct iwm_priv *iwm); -static int __iwm_down(struct iwm_priv *iwm); - -static void iwm_reset_worker(struct work_struct *work) -{ - struct iwm_priv *iwm; - struct iwm_umac_profile *profile = NULL; - int uninitialized_var(ret), retry = 0; - - iwm = container_of(work, struct iwm_priv, reset_worker); - - /* - * XXX: The iwm->mutex is introduced purely for this reset work, - * because the other users for iwm_up and iwm_down are only netdev - * ndo_open and ndo_stop which are 
already protected by rtnl. - * Please remove iwm->mutex together if iwm_reset_worker() is not - * required in the future. - */ - if (!mutex_trylock(&iwm->mutex)) { - IWM_WARN(iwm, "We are in the middle of interface bringing " - "UP/DOWN. Skip driver resetting.\n"); - return; - } - - if (iwm->umac_profile_active) { - profile = kmalloc(sizeof(struct iwm_umac_profile), GFP_KERNEL); - if (profile) - memcpy(profile, iwm->umac_profile, sizeof(*profile)); - else - IWM_ERR(iwm, "Couldn't alloc memory for profile\n"); - } - - __iwm_down(iwm); - - while (retry++ < 3) { - ret = __iwm_up(iwm); - if (!ret) - break; - - schedule_timeout_uninterruptible(10 * HZ); - } - - if (ret) { - IWM_WARN(iwm, "iwm_up() failed: %d\n", ret); - - kfree(profile); - goto out; - } - - if (profile) { - IWM_DBG_MLME(iwm, DBG, "Resend UMAC profile\n"); - memcpy(iwm->umac_profile, profile, sizeof(*profile)); - iwm_send_mlme_profile(iwm); - kfree(profile); - } else - clear_bit(IWM_STATUS_RESETTING, &iwm->status); - - out: - mutex_unlock(&iwm->mutex); -} - -static void iwm_auth_retry_worker(struct work_struct *work) -{ - struct iwm_priv *iwm; - int i, ret; - - iwm = container_of(work, struct iwm_priv, auth_retry_worker); - if (iwm->umac_profile_active) { - ret = iwm_invalidate_mlme_profile(iwm); - if (ret < 0) - return; - } - - iwm->umac_profile->sec.auth_type = UMAC_AUTH_TYPE_LEGACY_PSK; - - ret = iwm_send_mlme_profile(iwm); - if (ret < 0) - return; - - for (i = 0; i < IWM_NUM_KEYS; i++) - if (iwm->keys[i].key_len) - iwm_set_key(iwm, 0, &iwm->keys[i]); - - iwm_set_tx_key(iwm, iwm->default_key); -} - - - -static void iwm_watchdog(unsigned long data) -{ - struct iwm_priv *iwm = (struct iwm_priv *)data; - - IWM_WARN(iwm, "Watchdog expired: UMAC stalls!\n"); - - if (modparam_reset) - iwm_resetting(iwm); -} - -int iwm_priv_init(struct iwm_priv *iwm) -{ - int i, j; - char name[32]; - - iwm->status = 0; - INIT_LIST_HEAD(&iwm->pending_notif); - init_waitqueue_head(&iwm->notif_queue); - init_waitqueue_head(&iwm->nonwifi_queue); - init_waitqueue_head(&iwm->wifi_ntfy_queue); - init_waitqueue_head(&iwm->mlme_queue); - memcpy(&iwm->conf, &def_iwm_conf, sizeof(struct iwm_conf)); - spin_lock_init(&iwm->tx_credit.lock); - INIT_LIST_HEAD(&iwm->wifi_pending_cmd); - INIT_LIST_HEAD(&iwm->nonwifi_pending_cmd); - iwm->wifi_seq_num = UMAC_WIFI_SEQ_NUM_BASE; - iwm->nonwifi_seq_num = UMAC_NONWIFI_SEQ_NUM_BASE; - spin_lock_init(&iwm->cmd_lock); - iwm->scan_id = 1; - INIT_DELAYED_WORK(&iwm->stats_request, iwm_statistics_request); - INIT_DELAYED_WORK(&iwm->disconnect, iwm_disconnect_work); - INIT_DELAYED_WORK(&iwm->ct_kill_delay, iwm_ct_kill_work); - INIT_WORK(&iwm->reset_worker, iwm_reset_worker); - INIT_WORK(&iwm->auth_retry_worker, iwm_auth_retry_worker); - INIT_LIST_HEAD(&iwm->bss_list); - - skb_queue_head_init(&iwm->rx_list); - INIT_LIST_HEAD(&iwm->rx_tickets); - spin_lock_init(&iwm->ticket_lock); - for (i = 0; i < IWM_RX_ID_HASH; i++) { - INIT_LIST_HEAD(&iwm->rx_packets[i]); - spin_lock_init(&iwm->packet_lock[i]); - } - - INIT_WORK(&iwm->rx_worker, iwm_rx_worker); - - iwm->rx_wq = create_singlethread_workqueue(KBUILD_MODNAME "_rx"); - if (!iwm->rx_wq) - return -EAGAIN; - - for (i = 0; i < IWM_TX_QUEUES; i++) { - INIT_WORK(&iwm->txq[i].worker, iwm_tx_worker); - snprintf(name, 32, KBUILD_MODNAME "_tx_%d", i); - iwm->txq[i].id = i; - iwm->txq[i].wq = create_singlethread_workqueue(name); - if (!iwm->txq[i].wq) - return -EAGAIN; - - skb_queue_head_init(&iwm->txq[i].queue); - skb_queue_head_init(&iwm->txq[i].stopped_queue); - 
spin_lock_init(&iwm->txq[i].lock); - } - - for (i = 0; i < IWM_NUM_KEYS; i++) - memset(&iwm->keys[i], 0, sizeof(struct iwm_key)); - - iwm->default_key = -1; - - for (i = 0; i < IWM_STA_TABLE_NUM; i++) - for (j = 0; j < IWM_UMAC_TID_NR; j++) { - mutex_init(&iwm->sta_table[i].tid_info[j].mutex); - iwm->sta_table[i].tid_info[j].stopped = false; - } - - init_timer(&iwm->watchdog); - iwm->watchdog.function = iwm_watchdog; - iwm->watchdog.data = (unsigned long)iwm; - mutex_init(&iwm->mutex); - - iwm->last_fw_err = kzalloc(sizeof(struct iwm_fw_error_hdr), - GFP_KERNEL); - if (iwm->last_fw_err == NULL) - return -ENOMEM; - - return 0; -} - -void iwm_priv_deinit(struct iwm_priv *iwm) -{ - int i; - - for (i = 0; i < IWM_TX_QUEUES; i++) - destroy_workqueue(iwm->txq[i].wq); - - destroy_workqueue(iwm->rx_wq); - kfree(iwm->last_fw_err); -} - -/* - * We reset all the structures, and we reset the UMAC. - * After calling this routine, you're expected to reload - * the firmware. - */ -void iwm_reset(struct iwm_priv *iwm) -{ - struct iwm_notif *notif, *next; - - if (test_bit(IWM_STATUS_READY, &iwm->status)) - iwm_target_reset(iwm); - - if (test_bit(IWM_STATUS_RESETTING, &iwm->status)) { - iwm->status = 0; - set_bit(IWM_STATUS_RESETTING, &iwm->status); - } else - iwm->status = 0; - iwm->scan_id = 1; - - list_for_each_entry_safe(notif, next, &iwm->pending_notif, pending) { - list_del(¬if->pending); - kfree(notif->buf); - kfree(notif); - } - - iwm_cmd_flush(iwm); - - flush_workqueue(iwm->rx_wq); - - iwm_link_off(iwm); -} - -void iwm_resetting(struct iwm_priv *iwm) -{ - set_bit(IWM_STATUS_RESETTING, &iwm->status); - - schedule_work(&iwm->reset_worker); -} - -/* - * Notification code: - * - * We're faced with the following issue: Any host command can - * have an answer or not, and if there's an answer to expect, - * it can be treated synchronously or asynchronously. - * To work around the synchronous answer case, we implemented - * our notification mechanism. - * When a code path needs to wait for a command response - * synchronously, it calls notif_handle(), which waits for the - * right notification to show up, and then process it. Before - * starting to wait, it registered as a waiter for this specific - * answer (by toggling a bit in on of the handler_map), so that - * the rx code knows that it needs to send a notification to the - * waiting processes. It does so by calling iwm_notif_send(), - * which adds the notification to the pending notifications list, - * and then wakes the waiting processes up. 
- */ -int iwm_notif_send(struct iwm_priv *iwm, struct iwm_wifi_cmd *cmd, - u8 cmd_id, u8 source, u8 *buf, unsigned long buf_size) -{ - struct iwm_notif *notif; - - notif = kzalloc(sizeof(struct iwm_notif), GFP_KERNEL); - if (!notif) { - IWM_ERR(iwm, "Couldn't alloc memory for notification\n"); - return -ENOMEM; - } - - INIT_LIST_HEAD(¬if->pending); - notif->cmd = cmd; - notif->cmd_id = cmd_id; - notif->src = source; - notif->buf = kzalloc(buf_size, GFP_KERNEL); - if (!notif->buf) { - IWM_ERR(iwm, "Couldn't alloc notification buffer\n"); - kfree(notif); - return -ENOMEM; - } - notif->buf_size = buf_size; - memcpy(notif->buf, buf, buf_size); - list_add_tail(¬if->pending, &iwm->pending_notif); - - wake_up_interruptible(&iwm->notif_queue); - - return 0; -} - -static struct iwm_notif *iwm_notif_find(struct iwm_priv *iwm, u32 cmd, - u8 source) -{ - struct iwm_notif *notif; - - list_for_each_entry(notif, &iwm->pending_notif, pending) { - if ((notif->cmd_id == cmd) && (notif->src == source)) { - list_del(¬if->pending); - return notif; - } - } - - return NULL; -} - -static struct iwm_notif *iwm_notif_wait(struct iwm_priv *iwm, u32 cmd, - u8 source, long timeout) -{ - int ret; - struct iwm_notif *notif; - unsigned long *map = NULL; - - switch (source) { - case IWM_SRC_LMAC: - map = &iwm->lmac_handler_map[0]; - break; - case IWM_SRC_UMAC: - map = &iwm->umac_handler_map[0]; - break; - case IWM_SRC_UDMA: - map = &iwm->udma_handler_map[0]; - break; - } - - set_bit(cmd, map); - - ret = wait_event_interruptible_timeout(iwm->notif_queue, - ((notif = iwm_notif_find(iwm, cmd, source)) != NULL), - timeout); - clear_bit(cmd, map); - - if (!ret) - return NULL; - - return notif; -} - -int iwm_notif_handle(struct iwm_priv *iwm, u32 cmd, u8 source, long timeout) -{ - int ret; - struct iwm_notif *notif; - - notif = iwm_notif_wait(iwm, cmd, source, timeout); - if (!notif) - return -ETIME; - - ret = iwm_rx_handle_resp(iwm, notif->buf, notif->buf_size, notif->cmd); - kfree(notif->buf); - kfree(notif); - - return ret; -} - -static int iwm_config_boot_params(struct iwm_priv *iwm) -{ - struct iwm_udma_nonwifi_cmd target_cmd; - int ret; - - /* check Wimax is off and config debug monitor */ - if (!modparam_wimax_enable) { - u32 data1 = 0x1f; - u32 addr1 = 0x606BE258; - - u32 data2_set = 0x0; - u32 data2_clr = 0x1; - u32 addr2 = 0x606BE100; - - u32 data3 = 0x1; - u32 addr3 = 0x606BEC00; - - target_cmd.resp = 0; - target_cmd.handle_by_hw = 0; - target_cmd.eop = 1; - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_WRITE; - target_cmd.addr = cpu_to_le32(addr1); - target_cmd.op1_sz = cpu_to_le32(sizeof(u32)); - target_cmd.op2 = 0; - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, &data1); - if (ret < 0) { - IWM_ERR(iwm, "iwm_hal_send_target_cmd failed\n"); - return ret; - } - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_READ_MODIFY_WRITE; - target_cmd.addr = cpu_to_le32(addr2); - target_cmd.op1_sz = cpu_to_le32(data2_set); - target_cmd.op2 = cpu_to_le32(data2_clr); - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, &data1); - if (ret < 0) { - IWM_ERR(iwm, "iwm_hal_send_target_cmd failed\n"); - return ret; - } - - target_cmd.opcode = UMAC_HDI_OUT_OPCODE_WRITE; - target_cmd.addr = cpu_to_le32(addr3); - target_cmd.op1_sz = cpu_to_le32(sizeof(u32)); - target_cmd.op2 = 0; - - ret = iwm_hal_send_target_cmd(iwm, &target_cmd, &data3); - if (ret < 0) { - IWM_ERR(iwm, "iwm_hal_send_target_cmd failed\n"); - return ret; - } - } - - return 0; -} - -void iwm_init_default_profile(struct iwm_priv *iwm, - struct iwm_umac_profile *profile) -{ - 
memset(profile, 0, sizeof(struct iwm_umac_profile)); - - profile->sec.auth_type = UMAC_AUTH_TYPE_OPEN; - profile->sec.flags = UMAC_SEC_FLG_LEGACY_PROFILE; - profile->sec.ucast_cipher = UMAC_CIPHER_TYPE_NONE; - profile->sec.mcast_cipher = UMAC_CIPHER_TYPE_NONE; - - if (iwm->conf.enable_qos) - profile->flags |= cpu_to_le16(UMAC_PROFILE_QOS_ALLOWED); - - profile->wireless_mode = iwm->conf.wireless_mode; - profile->mode = cpu_to_le32(iwm->conf.mode); - - profile->ibss.atim = 0; - profile->ibss.beacon_interval = 100; - profile->ibss.join_only = 0; - profile->ibss.band = iwm->conf.ibss_band; - profile->ibss.channel = iwm->conf.ibss_channel; -} - -void iwm_link_on(struct iwm_priv *iwm) -{ - netif_carrier_on(iwm_to_ndev(iwm)); - netif_tx_wake_all_queues(iwm_to_ndev(iwm)); - - iwm_send_umac_stats_req(iwm, 0); -} - -void iwm_link_off(struct iwm_priv *iwm) -{ - struct iw_statistics *wstats = &iwm->wstats; - int i; - - netif_tx_stop_all_queues(iwm_to_ndev(iwm)); - netif_carrier_off(iwm_to_ndev(iwm)); - - for (i = 0; i < IWM_TX_QUEUES; i++) { - skb_queue_purge(&iwm->txq[i].queue); - skb_queue_purge(&iwm->txq[i].stopped_queue); - - iwm->txq[i].concat_count = 0; - iwm->txq[i].concat_ptr = iwm->txq[i].concat_buf; - - flush_workqueue(iwm->txq[i].wq); - } - - iwm_rx_free(iwm); - - cancel_delayed_work_sync(&iwm->stats_request); - memset(wstats, 0, sizeof(struct iw_statistics)); - wstats->qual.updated = IW_QUAL_ALL_INVALID; - - kfree(iwm->req_ie); - iwm->req_ie = NULL; - iwm->req_ie_len = 0; - kfree(iwm->resp_ie); - iwm->resp_ie = NULL; - iwm->resp_ie_len = 0; - - del_timer_sync(&iwm->watchdog); -} - -static void iwm_bss_list_clean(struct iwm_priv *iwm) -{ - struct iwm_bss_info *bss, *next; - - list_for_each_entry_safe(bss, next, &iwm->bss_list, node) { - list_del(&bss->node); - kfree(bss->bss); - kfree(bss); - } -} - -static int iwm_channels_init(struct iwm_priv *iwm) -{ - int ret; - - ret = iwm_send_umac_channel_list(iwm); - if (ret) { - IWM_ERR(iwm, "Send channel list failed\n"); - return ret; - } - - ret = iwm_notif_handle(iwm, UMAC_CMD_OPCODE_GET_CHAN_INFO_LIST, - IWM_SRC_UMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Didn't get a channel list notification\n"); - return ret; - } - - return 0; -} - -static int __iwm_up(struct iwm_priv *iwm) -{ - int ret; - struct iwm_notif *notif_reboot, *notif_ack = NULL; - struct wiphy *wiphy = iwm_to_wiphy(iwm); - u32 wireless_mode; - - ret = iwm_bus_enable(iwm); - if (ret) { - IWM_ERR(iwm, "Couldn't enable function\n"); - return ret; - } - - iwm_rx_setup_handlers(iwm); - - /* Wait for initial BARKER_REBOOT from hardware */ - notif_reboot = iwm_notif_wait(iwm, IWM_BARKER_REBOOT_NOTIFICATION, - IWM_SRC_UDMA, 2 * HZ); - if (!notif_reboot) { - IWM_ERR(iwm, "Wait for REBOOT_BARKER timeout\n"); - goto err_disable; - } - - /* We send the barker back */ - ret = iwm_bus_send_chunk(iwm, notif_reboot->buf, 16); - if (ret) { - IWM_ERR(iwm, "REBOOT barker response failed\n"); - kfree(notif_reboot); - goto err_disable; - } - - kfree(notif_reboot->buf); - kfree(notif_reboot); - - /* Wait for ACK_BARKER from hardware */ - notif_ack = iwm_notif_wait(iwm, IWM_ACK_BARKER_NOTIFICATION, - IWM_SRC_UDMA, 2 * HZ); - if (!notif_ack) { - IWM_ERR(iwm, "Wait for ACK_BARKER timeout\n"); - goto err_disable; - } - - kfree(notif_ack->buf); - kfree(notif_ack); - - /* We start to config static boot parameters */ - ret = iwm_config_boot_params(iwm); - if (ret) { - IWM_ERR(iwm, "Config boot parameters failed\n"); - goto err_disable; - } - - ret = iwm_read_mac(iwm, iwm_to_ndev(iwm)->dev_addr); - 
if (ret) { - IWM_ERR(iwm, "MAC reading failed\n"); - goto err_disable; - } - memcpy(iwm_to_ndev(iwm)->perm_addr, iwm_to_ndev(iwm)->dev_addr, - ETH_ALEN); - - /* We can load the FWs */ - ret = iwm_load_fw(iwm); - if (ret) { - IWM_ERR(iwm, "FW loading failed\n"); - goto err_disable; - } - - ret = iwm_eeprom_fat_channels(iwm); - if (ret) { - IWM_ERR(iwm, "Couldnt read HT channels EEPROM entries\n"); - goto err_fw; - } - - /* - * Read our SKU capabilities. - * If it's valid, we AND the configured wireless mode with the - * device EEPROM value as the current profile wireless mode. - */ - wireless_mode = iwm_eeprom_wireless_mode(iwm); - if (wireless_mode) { - iwm->conf.wireless_mode &= wireless_mode; - if (iwm->umac_profile) - iwm->umac_profile->wireless_mode = - iwm->conf.wireless_mode; - } else - IWM_ERR(iwm, "Wrong SKU capabilities: 0x%x\n", - *((u16 *)iwm_eeprom_access(iwm, IWM_EEPROM_SKU_CAP))); - - snprintf(wiphy->fw_version, sizeof(wiphy->fw_version), "L%s_U%s", - iwm->lmac_version, iwm->umac_version); - - /* We configure the UMAC and enable the wifi module */ - ret = iwm_send_umac_config(iwm, - cpu_to_le32(UMAC_RST_CTRL_FLG_WIFI_CORE_EN) | - cpu_to_le32(UMAC_RST_CTRL_FLG_WIFI_LINK_EN) | - cpu_to_le32(UMAC_RST_CTRL_FLG_WIFI_MLME_EN)); - if (ret) { - IWM_ERR(iwm, "UMAC config failed\n"); - goto err_fw; - } - - ret = iwm_notif_handle(iwm, UMAC_NOTIFY_OPCODE_WIFI_CORE_STATUS, - IWM_SRC_UMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Didn't get a wifi core status notification\n"); - goto err_fw; - } - - if (iwm->core_enabled != (UMAC_NTFY_WIFI_CORE_STATUS_LINK_EN | - UMAC_NTFY_WIFI_CORE_STATUS_MLME_EN)) { - IWM_DBG_BOOT(iwm, DBG, "Not all cores enabled:0x%x\n", - iwm->core_enabled); - ret = iwm_notif_handle(iwm, UMAC_NOTIFY_OPCODE_WIFI_CORE_STATUS, - IWM_SRC_UMAC, WAIT_NOTIF_TIMEOUT); - if (ret) { - IWM_ERR(iwm, "Didn't get a core status notification\n"); - goto err_fw; - } - - if (iwm->core_enabled != (UMAC_NTFY_WIFI_CORE_STATUS_LINK_EN | - UMAC_NTFY_WIFI_CORE_STATUS_MLME_EN)) { - IWM_ERR(iwm, "Not all cores enabled: 0x%x\n", - iwm->core_enabled); - goto err_fw; - } else { - IWM_INFO(iwm, "All cores enabled\n"); - } - } - - ret = iwm_channels_init(iwm); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't init channels\n"); - goto err_fw; - } - - /* Set the READY bit to indicate interface is brought up successfully */ - set_bit(IWM_STATUS_READY, &iwm->status); - - return 0; - - err_fw: - iwm_eeprom_exit(iwm); - - err_disable: - ret = iwm_bus_disable(iwm); - if (ret < 0) - IWM_ERR(iwm, "Couldn't disable function\n"); - - return -EIO; -} - -int iwm_up(struct iwm_priv *iwm) -{ - int ret; - - mutex_lock(&iwm->mutex); - ret = __iwm_up(iwm); - mutex_unlock(&iwm->mutex); - - return ret; -} - -static int __iwm_down(struct iwm_priv *iwm) -{ - int ret; - - /* The interface is already down */ - if (!test_bit(IWM_STATUS_READY, &iwm->status)) - return 0; - - if (iwm->scan_request) { - cfg80211_scan_done(iwm->scan_request, true); - iwm->scan_request = NULL; - } - - clear_bit(IWM_STATUS_READY, &iwm->status); - - iwm_eeprom_exit(iwm); - iwm_bss_list_clean(iwm); - iwm_init_default_profile(iwm, iwm->umac_profile); - iwm->umac_profile_active = false; - iwm->default_key = -1; - iwm->core_enabled = 0; - - ret = iwm_bus_disable(iwm); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't disable function\n"); - return ret; - } - - return 0; -} - -int iwm_down(struct iwm_priv *iwm) -{ - int ret; - - mutex_lock(&iwm->mutex); - ret = __iwm_down(iwm); - mutex_unlock(&iwm->mutex); - - return ret; -} diff --git 
a/drivers/net/wireless/iwmc3200wifi/netdev.c b/drivers/net/wireless/iwmc3200wifi/netdev.c deleted file mode 100644 index 5091d77e02ce..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/netdev.c +++ /dev/null @@ -1,191 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License version - * 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA. - * - */ - -/* - * This is the netdev related hooks for iwm. - * - * Some interesting code paths: - * - * iwm_open() (Called at netdev interface bringup time) - * -> iwm_up() (main.c) - * -> iwm_bus_enable() - * -> if_sdio_enable() (In case of an SDIO bus) - * -> sdio_enable_func() - * -> iwm_notif_wait(BARKER_REBOOT) (wait for reboot barker) - * -> iwm_notif_wait(ACK_BARKER) (wait for ACK barker) - * -> iwm_load_fw() (fw.c) - * -> iwm_load_umac() - * -> iwm_load_lmac() (Calibration LMAC) - * -> iwm_load_lmac() (Operational LMAC) - * -> iwm_send_umac_config() - * - * iwm_stop() (Called at netdev interface bringdown time) - * -> iwm_down() - * -> iwm_bus_disable() - * -> if_sdio_disable() (In case of an SDIO bus) - * -> sdio_disable_func() - */ -#include <linux/netdevice.h> -#include <linux/slab.h> - -#include "iwm.h" -#include "commands.h" -#include "cfg80211.h" -#include "debug.h" - -static int iwm_open(struct net_device *ndev) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - - return iwm_up(iwm); -} - -static int iwm_stop(struct net_device *ndev) -{ - struct iwm_priv *iwm = ndev_to_iwm(ndev); - - return iwm_down(iwm); -} - -/* - * iwm AC to queue mapping - * - * AC_VO -> queue 3 - * AC_VI -> queue 2 - * AC_BE -> queue 1 - * AC_BK -> queue 0 - */ -static const u16 iwm_1d_to_queue[8] = { 1, 0, 0, 1, 2, 2, 3, 3 }; - -int iwm_tid_to_queue(u16 tid) -{ - if (tid > IWM_UMAC_TID_NR - 2) - return -EINVAL; - - return iwm_1d_to_queue[tid]; -} - -static u16 iwm_select_queue(struct net_device *dev, struct sk_buff *skb) -{ - skb->priority = cfg80211_classify8021d(skb); - - return iwm_1d_to_queue[skb->priority]; -} - -static const struct net_device_ops iwm_netdev_ops = { - .ndo_open = iwm_open, - .ndo_stop = iwm_stop, - .ndo_start_xmit = iwm_xmit_frame, - .ndo_select_queue = iwm_select_queue, -}; - -void *iwm_if_alloc(int sizeof_bus, struct device *dev, - struct iwm_if_ops *if_ops) -{ - struct net_device *ndev; - struct wireless_dev *wdev; - struct iwm_priv *iwm; - int ret = 0; - - wdev = iwm_wdev_alloc(sizeof_bus, dev); - if (IS_ERR(wdev)) - return wdev; - - iwm = wdev_to_iwm(wdev); - iwm->bus_ops = if_ops; - iwm->wdev = wdev; - - ret = iwm_priv_init(iwm); - if (ret) { - dev_err(dev, "failed to init iwm_priv\n"); - goto out_wdev; - } - - wdev->iftype = iwm_mode_to_nl80211_iftype(iwm->conf.mode); - - ndev = alloc_netdev_mq(0, "wlan%d", ether_setup, IWM_TX_QUEUES); - if (!ndev) { - dev_err(dev, "no memory for network device 
instance\n"); - ret = -ENOMEM; - goto out_priv; - } - - ndev->netdev_ops = &iwm_netdev_ops; - ndev->ieee80211_ptr = wdev; - SET_NETDEV_DEV(ndev, wiphy_dev(wdev->wiphy)); - wdev->netdev = ndev; - - iwm->umac_profile = kmalloc(sizeof(struct iwm_umac_profile), - GFP_KERNEL); - if (!iwm->umac_profile) { - dev_err(dev, "Couldn't alloc memory for profile\n"); - ret = -ENOMEM; - goto out_profile; - } - - iwm_init_default_profile(iwm, iwm->umac_profile); - - return iwm; - - out_profile: - free_netdev(ndev); - - out_priv: - iwm_priv_deinit(iwm); - - out_wdev: - iwm_wdev_free(iwm); - return ERR_PTR(ret); -} - -void iwm_if_free(struct iwm_priv *iwm) -{ - if (!iwm_to_ndev(iwm)) - return; - - cancel_delayed_work_sync(&iwm->ct_kill_delay); - free_netdev(iwm_to_ndev(iwm)); - iwm_priv_deinit(iwm); - kfree(iwm->umac_profile); - iwm->umac_profile = NULL; - iwm_wdev_free(iwm); -} - -int iwm_if_add(struct iwm_priv *iwm) -{ - struct net_device *ndev = iwm_to_ndev(iwm); - int ret; - - ret = register_netdev(ndev); - if (ret < 0) { - dev_err(&ndev->dev, "Failed to register netdev: %d\n", ret); - return ret; - } - - return 0; -} - -void iwm_if_remove(struct iwm_priv *iwm) -{ - unregister_netdev(iwm_to_ndev(iwm)); -} diff --git a/drivers/net/wireless/iwmc3200wifi/rx.c b/drivers/net/wireless/iwmc3200wifi/rx.c deleted file mode 100644 index 7d708f4395f3..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/rx.c +++ /dev/null @@ -1,1701 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#include <linux/kernel.h> -#include <linux/netdevice.h> -#include <linux/sched.h> -#include <linux/etherdevice.h> -#include <linux/wireless.h> -#include <linux/ieee80211.h> -#include <linux/if_arp.h> -#include <linux/list.h> -#include <linux/slab.h> -#include <net/iw_handler.h> - -#include "iwm.h" -#include "debug.h" -#include "hal.h" -#include "umac.h" -#include "lmac.h" -#include "commands.h" -#include "rx.h" -#include "cfg80211.h" -#include "eeprom.h" - -static int iwm_rx_check_udma_hdr(struct iwm_udma_in_hdr *hdr) -{ - if ((le32_to_cpu(hdr->cmd) == UMAC_PAD_TERMINAL) || - (le32_to_cpu(hdr->size) == UMAC_PAD_TERMINAL)) - return -EINVAL; - - return 0; -} - -static inline int iwm_rx_resp_size(struct iwm_udma_in_hdr *hdr) -{ - return ALIGN(le32_to_cpu(hdr->size) + sizeof(struct iwm_udma_in_hdr), - 16); -} - -/* - * Notification handlers: - * - * For every possible notification we can receive from the - * target, we have a handler. - * When we get a target notification, and there is no one - * waiting for it, it's just processed through the rx code - * path: - * - * iwm_rx_handle() - * -> iwm_rx_handle_umac() - * -> iwm_rx_handle_wifi() - * -> iwm_rx_handle_resp() - * -> iwm_ntf_*() - * - * OR - * - * -> iwm_rx_handle_non_wifi() - * - * If there are processes waiting for this notification, then - * iwm_rx_handle_wifi() just wakes those processes up and they - * grab the pending notification. - */ -static int iwm_ntf_error(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_error *error; - struct iwm_fw_error_hdr *fw_err; - - error = (struct iwm_umac_notif_error *)buf; - fw_err = &error->err; - - memcpy(iwm->last_fw_err, fw_err, sizeof(struct iwm_fw_error_hdr)); - - IWM_ERR(iwm, "%cMAC FW ERROR:\n", - (le32_to_cpu(fw_err->category) == UMAC_SYS_ERR_CAT_LMAC) ? 
'L' : 'U'); - IWM_ERR(iwm, "\tCategory: %d\n", le32_to_cpu(fw_err->category)); - IWM_ERR(iwm, "\tStatus: 0x%x\n", le32_to_cpu(fw_err->status)); - IWM_ERR(iwm, "\tPC: 0x%x\n", le32_to_cpu(fw_err->pc)); - IWM_ERR(iwm, "\tblink1: %d\n", le32_to_cpu(fw_err->blink1)); - IWM_ERR(iwm, "\tblink2: %d\n", le32_to_cpu(fw_err->blink2)); - IWM_ERR(iwm, "\tilink1: %d\n", le32_to_cpu(fw_err->ilink1)); - IWM_ERR(iwm, "\tilink2: %d\n", le32_to_cpu(fw_err->ilink2)); - IWM_ERR(iwm, "\tData1: 0x%x\n", le32_to_cpu(fw_err->data1)); - IWM_ERR(iwm, "\tData2: 0x%x\n", le32_to_cpu(fw_err->data2)); - IWM_ERR(iwm, "\tLine number: %d\n", le32_to_cpu(fw_err->line_num)); - IWM_ERR(iwm, "\tUMAC status: 0x%x\n", le32_to_cpu(fw_err->umac_status)); - IWM_ERR(iwm, "\tLMAC status: 0x%x\n", le32_to_cpu(fw_err->lmac_status)); - IWM_ERR(iwm, "\tSDIO status: 0x%x\n", le32_to_cpu(fw_err->sdio_status)); - - iwm_resetting(iwm); - - return 0; -} - -static int iwm_ntf_umac_alive(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_alive *alive_resp = - (struct iwm_umac_notif_alive *)(buf); - u16 status = le16_to_cpu(alive_resp->status); - - if (status == UMAC_NTFY_ALIVE_STATUS_ERR) { - IWM_ERR(iwm, "Receive error UMAC_ALIVE\n"); - return -EIO; - } - - iwm_tx_credit_init_pools(iwm, alive_resp); - - return 0; -} - -static int iwm_ntf_init_complete(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct iwm_umac_notif_init_complete *init_complete = - (struct iwm_umac_notif_init_complete *)(buf); - u16 status = le16_to_cpu(init_complete->status); - bool blocked = (status == UMAC_NTFY_INIT_COMPLETE_STATUS_ERR); - - if (blocked) - IWM_DBG_NTF(iwm, DBG, "Hardware rf kill is on (radio off)\n"); - else - IWM_DBG_NTF(iwm, DBG, "Hardware rf kill is off (radio on)\n"); - - wiphy_rfkill_set_hw_state(wiphy, blocked); - - return 0; -} - -static int iwm_ntf_tx_credit_update(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - int pool_nr, total_freed_pages; - unsigned long pool_map; - int i, id; - struct iwm_umac_notif_page_dealloc *dealloc = - (struct iwm_umac_notif_page_dealloc *)buf; - - pool_nr = GET_VAL32(dealloc->changes, UMAC_DEALLOC_NTFY_CHANGES_CNT); - pool_map = GET_VAL32(dealloc->changes, UMAC_DEALLOC_NTFY_CHANGES_MSK); - - IWM_DBG_TX(iwm, DBG, "UMAC dealloc notification: pool nr %d, " - "update map 0x%lx\n", pool_nr, pool_map); - - spin_lock(&iwm->tx_credit.lock); - - for (i = 0; i < pool_nr; i++) { - id = GET_VAL32(dealloc->grp_info[i], - UMAC_DEALLOC_NTFY_GROUP_NUM); - if (test_bit(id, &pool_map)) { - total_freed_pages = GET_VAL32(dealloc->grp_info[i], - UMAC_DEALLOC_NTFY_PAGE_CNT); - iwm_tx_credit_inc(iwm, id, total_freed_pages); - } - } - - spin_unlock(&iwm->tx_credit.lock); - - return 0; -} - -static int iwm_ntf_umac_reset(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - IWM_DBG_NTF(iwm, DBG, "UMAC RESET done\n"); - - return 0; -} - -static int iwm_ntf_lmac_version(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - IWM_DBG_NTF(iwm, INFO, "LMAC Version: %x.%x\n", buf[9], buf[8]); - - return 0; -} - -static int iwm_ntf_tx(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_lmac_tx_resp *tx_resp; - struct iwm_umac_wifi_in_hdr *hdr; - - tx_resp = (struct iwm_lmac_tx_resp *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - hdr = (struct 
iwm_umac_wifi_in_hdr *)buf; - - IWM_DBG_TX(iwm, DBG, "REPLY_TX, buf size: %lu\n", buf_size); - - IWM_DBG_TX(iwm, DBG, "Seqnum: %d\n", - le16_to_cpu(hdr->sw_hdr.cmd.seq_num)); - IWM_DBG_TX(iwm, DBG, "\tFrame cnt: %d\n", tx_resp->frame_cnt); - IWM_DBG_TX(iwm, DBG, "\tRetry cnt: %d\n", - le16_to_cpu(tx_resp->retry_cnt)); - IWM_DBG_TX(iwm, DBG, "\tSeq ctl: %d\n", le16_to_cpu(tx_resp->seq_ctl)); - IWM_DBG_TX(iwm, DBG, "\tByte cnt: %d\n", - le16_to_cpu(tx_resp->byte_cnt)); - IWM_DBG_TX(iwm, DBG, "\tStatus: 0x%x\n", le32_to_cpu(tx_resp->status)); - - return 0; -} - - -static int iwm_ntf_calib_res(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - u8 opcode; - u8 *calib_buf; - struct iwm_lmac_calib_hdr *hdr = (struct iwm_lmac_calib_hdr *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - - opcode = hdr->opcode; - - BUG_ON(opcode >= CALIBRATION_CMD_NUM || - opcode < PHY_CALIBRATE_OPCODES_NUM); - - IWM_DBG_NTF(iwm, DBG, "Store calibration result for opcode: %d\n", - opcode); - - buf_size -= sizeof(struct iwm_umac_wifi_in_hdr); - calib_buf = iwm->calib_res[opcode].buf; - - if (!calib_buf || (iwm->calib_res[opcode].size < buf_size)) { - kfree(calib_buf); - calib_buf = kzalloc(buf_size, GFP_KERNEL); - if (!calib_buf) { - IWM_ERR(iwm, "Memory allocation failed: calib_res\n"); - return -ENOMEM; - } - iwm->calib_res[opcode].buf = calib_buf; - iwm->calib_res[opcode].size = buf_size; - } - - memcpy(calib_buf, hdr, buf_size); - set_bit(opcode - PHY_CALIBRATE_OPCODES_NUM, &iwm->calib_done_map); - - return 0; -} - -static int iwm_ntf_calib_complete(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - IWM_DBG_NTF(iwm, DBG, "Calibration completed\n"); - - return 0; -} - -static int iwm_ntf_calib_cfg(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_lmac_cal_cfg_resp *cal_resp; - - cal_resp = (struct iwm_lmac_cal_cfg_resp *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - - IWM_DBG_NTF(iwm, DBG, "Calibration CFG command status: %d\n", - le32_to_cpu(cal_resp->status)); - - return 0; -} - -static int iwm_ntf_wifi_status(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_wifi_status *status = - (struct iwm_umac_notif_wifi_status *)buf; - - iwm->core_enabled |= le16_to_cpu(status->status); - - return 0; -} - -static struct iwm_rx_ticket_node * -iwm_rx_ticket_node_alloc(struct iwm_priv *iwm, struct iwm_rx_ticket *ticket) -{ - struct iwm_rx_ticket_node *ticket_node; - - ticket_node = kzalloc(sizeof(struct iwm_rx_ticket_node), GFP_KERNEL); - if (!ticket_node) { - IWM_ERR(iwm, "Couldn't allocate ticket node\n"); - return ERR_PTR(-ENOMEM); - } - - ticket_node->ticket = kmemdup(ticket, sizeof(struct iwm_rx_ticket), - GFP_KERNEL); - if (!ticket_node->ticket) { - IWM_ERR(iwm, "Couldn't allocate RX ticket\n"); - kfree(ticket_node); - return ERR_PTR(-ENOMEM); - } - - INIT_LIST_HEAD(&ticket_node->node); - - return ticket_node; -} - -static void iwm_rx_ticket_node_free(struct iwm_rx_ticket_node *ticket_node) -{ - kfree(ticket_node->ticket); - kfree(ticket_node); -} - -static struct iwm_rx_packet *iwm_rx_packet_get(struct iwm_priv *iwm, u16 id) -{ - u8 id_hash = IWM_RX_ID_GET_HASH(id); - struct iwm_rx_packet *packet; - - spin_lock(&iwm->packet_lock[id_hash]); - list_for_each_entry(packet, &iwm->rx_packets[id_hash], node) - if (packet->id == id) { - list_del(&packet->node); - spin_unlock(&iwm->packet_lock[id_hash]); - return packet; - } - - 
spin_unlock(&iwm->packet_lock[id_hash]); - return NULL; -} - -static struct iwm_rx_packet *iwm_rx_packet_alloc(struct iwm_priv *iwm, u8 *buf, - u32 size, u16 id) -{ - struct iwm_rx_packet *packet; - - packet = kzalloc(sizeof(struct iwm_rx_packet), GFP_KERNEL); - if (!packet) { - IWM_ERR(iwm, "Couldn't allocate packet\n"); - return ERR_PTR(-ENOMEM); - } - - packet->skb = dev_alloc_skb(size); - if (!packet->skb) { - IWM_ERR(iwm, "Couldn't allocate packet SKB\n"); - kfree(packet); - return ERR_PTR(-ENOMEM); - } - - packet->pkt_size = size; - - skb_put(packet->skb, size); - memcpy(packet->skb->data, buf, size); - INIT_LIST_HEAD(&packet->node); - packet->id = id; - - return packet; -} - -void iwm_rx_free(struct iwm_priv *iwm) -{ - struct iwm_rx_ticket_node *ticket, *nt; - struct iwm_rx_packet *packet, *np; - int i; - - spin_lock(&iwm->ticket_lock); - list_for_each_entry_safe(ticket, nt, &iwm->rx_tickets, node) { - list_del(&ticket->node); - iwm_rx_ticket_node_free(ticket); - } - spin_unlock(&iwm->ticket_lock); - - for (i = 0; i < IWM_RX_ID_HASH; i++) { - spin_lock(&iwm->packet_lock[i]); - list_for_each_entry_safe(packet, np, &iwm->rx_packets[i], - node) { - list_del(&packet->node); - kfree_skb(packet->skb); - kfree(packet); - } - spin_unlock(&iwm->packet_lock[i]); - } -} - -static int iwm_ntf_rx_ticket(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_rx_ticket *ntf_rx_ticket = - (struct iwm_umac_notif_rx_ticket *)buf; - struct iwm_rx_ticket *ticket = - (struct iwm_rx_ticket *)ntf_rx_ticket->tickets; - int i, schedule_rx = 0; - - for (i = 0; i < ntf_rx_ticket->num_tickets; i++) { - struct iwm_rx_ticket_node *ticket_node; - - switch (le16_to_cpu(ticket->action)) { - case IWM_RX_TICKET_RELEASE: - case IWM_RX_TICKET_DROP: - /* We can push the packet to the stack */ - ticket_node = iwm_rx_ticket_node_alloc(iwm, ticket); - if (IS_ERR(ticket_node)) - return PTR_ERR(ticket_node); - - IWM_DBG_RX(iwm, DBG, "TICKET %s(%d)\n", - __le16_to_cpu(ticket->action) == - IWM_RX_TICKET_RELEASE ? - "RELEASE" : "DROP", - ticket->id); - spin_lock(&iwm->ticket_lock); - list_add_tail(&ticket_node->node, &iwm->rx_tickets); - spin_unlock(&iwm->ticket_lock); - - /* - * We received an Rx ticket, most likely there's - * a packet pending for it, it's not worth going - * through the packet hash list to double check. - * Let's just fire the rx worker.. 
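iwm_rx_packet_get() above keeps received packets in hash buckets keyed by the sequence id, so a lookup only walks one short list and unlinks the match. A simplified, lock-free model of that store; the bucket count and hash function are illustrative.

#include <stdio.h>

#define RX_ID_HASH 8
#define RX_ID_GET_HASH(id) ((id) % RX_ID_HASH)

struct rx_packet {
    unsigned short id;
    struct rx_packet *next;
};

static struct rx_packet *buckets[RX_ID_HASH];

static void packet_add(struct rx_packet *pkt)
{
    unsigned int h = RX_ID_GET_HASH(pkt->id);

    pkt->next = buckets[h];
    buckets[h] = pkt;
}

/* find and unlink the packet matching @id, or return NULL */
static struct rx_packet *packet_get(unsigned short id)
{
    struct rx_packet **pp;

    for (pp = &buckets[RX_ID_GET_HASH(id)]; *pp; pp = &(*pp)->next) {
        if ((*pp)->id == id) {
            struct rx_packet *found = *pp;

            *pp = found->next;
            return found;
        }
    }
    return NULL;
}

int main(void)
{
    struct rx_packet a = { .id = 42 }, b = { .id = 7 };

    packet_add(&a);
    packet_add(&b);
    printf("lookup 42: %s\n", packet_get(42) ? "hit" : "miss");
    printf("lookup 42 again: %s\n", packet_get(42) ? "hit" : "miss");
    return 0;
}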
- */ - schedule_rx = 1; - - break; - - default: - IWM_ERR(iwm, "Invalid RX ticket action: 0x%x\n", - ticket->action); - } - - ticket++; - } - - if (schedule_rx) - queue_work(iwm->rx_wq, &iwm->rx_worker); - - return 0; -} - -static int iwm_ntf_rx_packet(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_wifi_in_hdr *wifi_hdr; - struct iwm_rx_packet *packet; - u16 id, buf_offset; - u32 packet_size; - u8 id_hash; - - IWM_DBG_RX(iwm, DBG, "\n"); - - wifi_hdr = (struct iwm_umac_wifi_in_hdr *)buf; - id = le16_to_cpu(wifi_hdr->sw_hdr.cmd.seq_num); - buf_offset = sizeof(struct iwm_umac_wifi_in_hdr); - packet_size = buf_size - sizeof(struct iwm_umac_wifi_in_hdr); - - IWM_DBG_RX(iwm, DBG, "CMD:0x%x, seqnum: %d, packet size: %d\n", - wifi_hdr->sw_hdr.cmd.cmd, id, packet_size); - IWM_DBG_RX(iwm, DBG, "Packet id: %d\n", id); - IWM_HEXDUMP(iwm, DBG, RX, "PACKET: ", buf + buf_offset, packet_size); - - packet = iwm_rx_packet_alloc(iwm, buf + buf_offset, packet_size, id); - if (IS_ERR(packet)) - return PTR_ERR(packet); - - id_hash = IWM_RX_ID_GET_HASH(id); - spin_lock(&iwm->packet_lock[id_hash]); - list_add_tail(&packet->node, &iwm->rx_packets[id_hash]); - spin_unlock(&iwm->packet_lock[id_hash]); - - /* We might (unlikely) have received the packet _after_ the ticket */ - queue_work(iwm->rx_wq, &iwm->rx_worker); - - return 0; -} - -/* MLME handlers */ -static int iwm_mlme_assoc_start(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_assoc_start *start; - - start = (struct iwm_umac_notif_assoc_start *)buf; - - IWM_DBG_MLME(iwm, INFO, "Association with %pM Started, reason: %d\n", - start->bssid, le32_to_cpu(start->roam_reason)); - - wake_up_interruptible(&iwm->mlme_queue); - - return 0; -} - -static u8 iwm_is_open_wep_profile(struct iwm_priv *iwm) -{ - if ((iwm->umac_profile->sec.ucast_cipher == UMAC_CIPHER_TYPE_WEP_40 || - iwm->umac_profile->sec.ucast_cipher == UMAC_CIPHER_TYPE_WEP_104) && - (iwm->umac_profile->sec.ucast_cipher == - iwm->umac_profile->sec.mcast_cipher) && - (iwm->umac_profile->sec.auth_type == UMAC_AUTH_TYPE_OPEN)) - return 1; - - return 0; -} - -static int iwm_mlme_assoc_complete(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct ieee80211_channel *chan; - struct iwm_umac_notif_assoc_complete *complete = - (struct iwm_umac_notif_assoc_complete *)buf; - - IWM_DBG_MLME(iwm, INFO, "Association with %pM completed, status: %d\n", - complete->bssid, complete->status); - - switch (le32_to_cpu(complete->status)) { - case UMAC_ASSOC_COMPLETE_SUCCESS: - chan = ieee80211_get_channel(wiphy, - ieee80211_channel_to_frequency(complete->channel, - complete->band == UMAC_BAND_2GHZ ? - IEEE80211_BAND_2GHZ : - IEEE80211_BAND_5GHZ)); - if (!chan || chan->flags & IEEE80211_CHAN_DISABLED) { - /* Associated to a unallowed channel, disassociate. */ - __iwm_invalidate_mlme_profile(iwm); - IWM_WARN(iwm, "Couldn't associate with %pM due to " - "channel %d is disabled. Check your local " - "regulatory setting.\n", - complete->bssid, complete->channel); - goto failure; - } - - set_bit(IWM_STATUS_ASSOCIATED, &iwm->status); - memcpy(iwm->bssid, complete->bssid, ETH_ALEN); - iwm->channel = complete->channel; - - /* Internal roaming state, avoid notifying SME. 
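The association-complete handler above converts the reported (band, channel) pair into a frequency and rejects channels that are regulatory-disabled. A standalone version of the channel-to-frequency arithmetic performed by ieee80211_channel_to_frequency(): 2.4 GHz channels step 5 MHz from 2407 MHz with channel 14 special-cased, 5 GHz channels step 5 MHz from 5000 MHz.

#include <stdio.h>

enum band { BAND_2GHZ, BAND_5GHZ };

static int channel_to_freq(int chan, enum band band)
{
    if (band == BAND_2GHZ) {
        if (chan == 14)
            return 2484;
        if (chan >= 1 && chan <= 13)
            return 2407 + chan * 5;
        return -1;              /* invalid 2.4 GHz channel */
    }
    return 5000 + chan * 5;     /* 5 GHz, e.g. channel 36 -> 5180 MHz */
}

int main(void)
{
    printf("2.4 GHz ch 6  -> %d MHz\n", channel_to_freq(6, BAND_2GHZ));
    printf("2.4 GHz ch 14 -> %d MHz\n", channel_to_freq(14, BAND_2GHZ));
    printf("5 GHz   ch 36 -> %d MHz\n", channel_to_freq(36, BAND_5GHZ));
    return 0;
}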
*/ - if (!test_and_clear_bit(IWM_STATUS_SME_CONNECTING, &iwm->status) - && iwm->conf.mode == UMAC_MODE_BSS) { - cancel_delayed_work(&iwm->disconnect); - cfg80211_roamed(iwm_to_ndev(iwm), NULL, - complete->bssid, - iwm->req_ie, iwm->req_ie_len, - iwm->resp_ie, iwm->resp_ie_len, - GFP_KERNEL); - break; - } - - iwm_link_on(iwm); - - if (iwm->conf.mode == UMAC_MODE_IBSS) - goto ibss; - - if (!test_bit(IWM_STATUS_RESETTING, &iwm->status)) - cfg80211_connect_result(iwm_to_ndev(iwm), - complete->bssid, - iwm->req_ie, iwm->req_ie_len, - iwm->resp_ie, iwm->resp_ie_len, - WLAN_STATUS_SUCCESS, - GFP_KERNEL); - else - cfg80211_roamed(iwm_to_ndev(iwm), NULL, - complete->bssid, - iwm->req_ie, iwm->req_ie_len, - iwm->resp_ie, iwm->resp_ie_len, - GFP_KERNEL); - break; - case UMAC_ASSOC_COMPLETE_FAILURE: - failure: - clear_bit(IWM_STATUS_ASSOCIATED, &iwm->status); - memset(iwm->bssid, 0, ETH_ALEN); - iwm->channel = 0; - - /* Internal roaming state, avoid notifying SME. */ - if (!test_and_clear_bit(IWM_STATUS_SME_CONNECTING, &iwm->status) - && iwm->conf.mode == UMAC_MODE_BSS) { - cancel_delayed_work(&iwm->disconnect); - break; - } - - iwm_link_off(iwm); - - if (iwm->conf.mode == UMAC_MODE_IBSS) - goto ibss; - - if (!test_bit(IWM_STATUS_RESETTING, &iwm->status)) - if (!iwm_is_open_wep_profile(iwm)) { - cfg80211_connect_result(iwm_to_ndev(iwm), - complete->bssid, - NULL, 0, NULL, 0, - WLAN_STATUS_UNSPECIFIED_FAILURE, - GFP_KERNEL); - } else { - /* Let's try shared WEP auth */ - IWM_ERR(iwm, "Trying WEP shared auth\n"); - schedule_work(&iwm->auth_retry_worker); - } - else - cfg80211_disconnected(iwm_to_ndev(iwm), 0, NULL, 0, - GFP_KERNEL); - break; - default: - break; - } - - clear_bit(IWM_STATUS_RESETTING, &iwm->status); - return 0; - - ibss: - cfg80211_ibss_joined(iwm_to_ndev(iwm), iwm->bssid, GFP_KERNEL); - clear_bit(IWM_STATUS_RESETTING, &iwm->status); - return 0; -} - -static int iwm_mlme_profile_invalidate(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_profile_invalidate *invalid; - u32 reason; - - invalid = (struct iwm_umac_notif_profile_invalidate *)buf; - reason = le32_to_cpu(invalid->reason); - - IWM_DBG_MLME(iwm, INFO, "Profile Invalidated. 
Reason: %d\n", reason); - - if (reason != UMAC_PROFILE_INVALID_REQUEST && - test_bit(IWM_STATUS_SME_CONNECTING, &iwm->status)) - cfg80211_connect_result(iwm_to_ndev(iwm), NULL, NULL, 0, NULL, - 0, WLAN_STATUS_UNSPECIFIED_FAILURE, - GFP_KERNEL); - - clear_bit(IWM_STATUS_SME_CONNECTING, &iwm->status); - clear_bit(IWM_STATUS_ASSOCIATED, &iwm->status); - - iwm->umac_profile_active = false; - memset(iwm->bssid, 0, ETH_ALEN); - iwm->channel = 0; - - iwm_link_off(iwm); - - wake_up_interruptible(&iwm->mlme_queue); - - return 0; -} - -#define IWM_DISCONNECT_INTERVAL (5 * HZ) - -static int iwm_mlme_connection_terminated(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - IWM_DBG_MLME(iwm, DBG, "Connection terminated\n"); - - schedule_delayed_work(&iwm->disconnect, IWM_DISCONNECT_INTERVAL); - - return 0; -} - -static int iwm_mlme_scan_complete(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - int ret; - struct iwm_umac_notif_scan_complete *scan_complete = - (struct iwm_umac_notif_scan_complete *)buf; - u32 result = le32_to_cpu(scan_complete->result); - - IWM_DBG_MLME(iwm, INFO, "type:0x%x result:0x%x seq:%d\n", - le32_to_cpu(scan_complete->type), - le32_to_cpu(scan_complete->result), - scan_complete->seq_num); - - if (!test_and_clear_bit(IWM_STATUS_SCANNING, &iwm->status)) { - IWM_ERR(iwm, "Scan complete while device not scanning\n"); - return -EIO; - } - if (!iwm->scan_request) - return 0; - - ret = iwm_cfg80211_inform_bss(iwm); - - cfg80211_scan_done(iwm->scan_request, - (result & UMAC_SCAN_RESULT_ABORTED) ? 1 : !!ret); - iwm->scan_request = NULL; - - return ret; -} - -static int iwm_mlme_update_sta_table(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_sta_info *umac_sta = - (struct iwm_umac_notif_sta_info *)buf; - struct iwm_sta_info *sta; - int i; - - switch (le32_to_cpu(umac_sta->opcode)) { - case UMAC_OPCODE_ADD_MODIFY: - sta = &iwm->sta_table[GET_VAL8(umac_sta->sta_id, LMAC_STA_ID)]; - - IWM_DBG_MLME(iwm, INFO, "%s STA: ID = %d, Color = %d, " - "addr = %pM, qos = %d\n", - sta->valid ? 
"Modify" : "Add", - GET_VAL8(umac_sta->sta_id, LMAC_STA_ID), - GET_VAL8(umac_sta->sta_id, LMAC_STA_COLOR), - umac_sta->mac_addr, - umac_sta->flags & UMAC_STA_FLAG_QOS); - - sta->valid = true; - sta->qos = umac_sta->flags & UMAC_STA_FLAG_QOS; - sta->color = GET_VAL8(umac_sta->sta_id, LMAC_STA_COLOR); - memcpy(sta->addr, umac_sta->mac_addr, ETH_ALEN); - break; - case UMAC_OPCODE_REMOVE: - IWM_DBG_MLME(iwm, INFO, "Remove STA: ID = %d, Color = %d, " - "addr = %pM\n", - GET_VAL8(umac_sta->sta_id, LMAC_STA_ID), - GET_VAL8(umac_sta->sta_id, LMAC_STA_COLOR), - umac_sta->mac_addr); - - sta = &iwm->sta_table[GET_VAL8(umac_sta->sta_id, LMAC_STA_ID)]; - - if (!memcmp(sta->addr, umac_sta->mac_addr, ETH_ALEN)) - sta->valid = false; - - break; - case UMAC_OPCODE_CLEAR_ALL: - for (i = 0; i < IWM_STA_TABLE_NUM; i++) - iwm->sta_table[i].valid = false; - - break; - default: - break; - } - - return 0; -} - -static int iwm_mlme_medium_lost(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - - IWM_DBG_NTF(iwm, DBG, "WiFi/WiMax coexistence radio is OFF\n"); - - wiphy_rfkill_set_hw_state(wiphy, true); - - return 0; -} - -static int iwm_mlme_update_bss_table(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct ieee80211_mgmt *mgmt; - struct iwm_umac_notif_bss_info *umac_bss = - (struct iwm_umac_notif_bss_info *)buf; - struct ieee80211_channel *channel; - struct ieee80211_supported_band *band; - struct iwm_bss_info *bss; - s32 signal; - int freq; - u16 frame_len = le16_to_cpu(umac_bss->frame_len); - size_t bss_len = sizeof(struct iwm_umac_notif_bss_info) + frame_len; - - mgmt = (struct ieee80211_mgmt *)(umac_bss->frame_buf); - - IWM_DBG_MLME(iwm, DBG, "New BSS info entry: %pM\n", mgmt->bssid); - IWM_DBG_MLME(iwm, DBG, "\tType: 0x%x\n", le32_to_cpu(umac_bss->type)); - IWM_DBG_MLME(iwm, DBG, "\tTimestamp: %d\n", - le32_to_cpu(umac_bss->timestamp)); - IWM_DBG_MLME(iwm, DBG, "\tTable Index: %d\n", - le16_to_cpu(umac_bss->table_idx)); - IWM_DBG_MLME(iwm, DBG, "\tBand: %d\n", umac_bss->band); - IWM_DBG_MLME(iwm, DBG, "\tChannel: %d\n", umac_bss->channel); - IWM_DBG_MLME(iwm, DBG, "\tRSSI: %d\n", umac_bss->rssi); - IWM_DBG_MLME(iwm, DBG, "\tFrame Length: %d\n", frame_len); - - list_for_each_entry(bss, &iwm->bss_list, node) - if (bss->bss->table_idx == umac_bss->table_idx) - break; - - if (&bss->node != &iwm->bss_list) { - /* Remove the old BSS entry, we will add it back later. 
*/ - list_del(&bss->node); - kfree(bss->bss); - } else { - /* New BSS entry */ - - bss = kzalloc(sizeof(struct iwm_bss_info), GFP_KERNEL); - if (!bss) { - IWM_ERR(iwm, "Couldn't allocate bss_info\n"); - return -ENOMEM; - } - } - - bss->bss = kzalloc(bss_len, GFP_KERNEL); - if (!bss->bss) { - kfree(bss); - IWM_ERR(iwm, "Couldn't allocate bss\n"); - return -ENOMEM; - } - - INIT_LIST_HEAD(&bss->node); - memcpy(bss->bss, umac_bss, bss_len); - - if (umac_bss->band == UMAC_BAND_2GHZ) - band = wiphy->bands[IEEE80211_BAND_2GHZ]; - else if (umac_bss->band == UMAC_BAND_5GHZ) - band = wiphy->bands[IEEE80211_BAND_5GHZ]; - else { - IWM_ERR(iwm, "Invalid band: %d\n", umac_bss->band); - goto err; - } - - freq = ieee80211_channel_to_frequency(umac_bss->channel, band->band); - channel = ieee80211_get_channel(wiphy, freq); - signal = umac_bss->rssi * 100; - - bss->cfg_bss = cfg80211_inform_bss_frame(wiphy, channel, - mgmt, frame_len, - signal, GFP_KERNEL); - if (!bss->cfg_bss) - goto err; - - list_add_tail(&bss->node, &iwm->bss_list); - - return 0; - err: - kfree(bss->bss); - kfree(bss); - - return -EINVAL; -} - -static int iwm_mlme_remove_bss(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_bss_removed *bss_rm = - (struct iwm_umac_notif_bss_removed *)buf; - struct iwm_bss_info *bss, *next; - u16 table_idx; - int i; - - for (i = 0; i < le32_to_cpu(bss_rm->count); i++) { - table_idx = le16_to_cpu(bss_rm->entries[i]) & - IWM_BSS_REMOVE_INDEX_MSK; - list_for_each_entry_safe(bss, next, &iwm->bss_list, node) - if (bss->bss->table_idx == cpu_to_le16(table_idx)) { - struct ieee80211_mgmt *mgmt; - - mgmt = (struct ieee80211_mgmt *) - (bss->bss->frame_buf); - IWM_DBG_MLME(iwm, ERR, "BSS removed: %pM\n", - mgmt->bssid); - list_del(&bss->node); - kfree(bss->bss); - kfree(bss); - } - } - - return 0; -} - -static int iwm_mlme_mgt_frame(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_mgt_frame *mgt_frame = - (struct iwm_umac_notif_mgt_frame *)buf; - struct ieee80211_mgmt *mgt = (struct ieee80211_mgmt *)mgt_frame->frame; - - IWM_HEXDUMP(iwm, DBG, MLME, "MGT: ", mgt_frame->frame, - le16_to_cpu(mgt_frame->len)); - - if (ieee80211_is_assoc_req(mgt->frame_control)) { - iwm->req_ie_len = le16_to_cpu(mgt_frame->len) - - offsetof(struct ieee80211_mgmt, - u.assoc_req.variable); - kfree(iwm->req_ie); - iwm->req_ie = kmemdup(mgt->u.assoc_req.variable, - iwm->req_ie_len, GFP_KERNEL); - } else if (ieee80211_is_reassoc_req(mgt->frame_control)) { - iwm->req_ie_len = le16_to_cpu(mgt_frame->len) - - offsetof(struct ieee80211_mgmt, - u.reassoc_req.variable); - kfree(iwm->req_ie); - iwm->req_ie = kmemdup(mgt->u.reassoc_req.variable, - iwm->req_ie_len, GFP_KERNEL); - } else if (ieee80211_is_assoc_resp(mgt->frame_control)) { - iwm->resp_ie_len = le16_to_cpu(mgt_frame->len) - - offsetof(struct ieee80211_mgmt, - u.assoc_resp.variable); - kfree(iwm->resp_ie); - iwm->resp_ie = kmemdup(mgt->u.assoc_resp.variable, - iwm->resp_ie_len, GFP_KERNEL); - } else if (ieee80211_is_reassoc_resp(mgt->frame_control)) { - iwm->resp_ie_len = le16_to_cpu(mgt_frame->len) - - offsetof(struct ieee80211_mgmt, - u.reassoc_resp.variable); - kfree(iwm->resp_ie); - iwm->resp_ie = kmemdup(mgt->u.reassoc_resp.variable, - iwm->resp_ie_len, GFP_KERNEL); - } else { - IWM_ERR(iwm, "Unsupported management frame: 0x%x", - le16_to_cpu(mgt->frame_control)); - return 0; - } - - return 0; -} - -static int iwm_ntf_mlme(struct iwm_priv *iwm, u8 *buf, - unsigned 
long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_wifi_if *notif = - (struct iwm_umac_notif_wifi_if *)buf; - - switch (notif->status) { - case WIFI_IF_NTFY_ASSOC_START: - return iwm_mlme_assoc_start(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_ASSOC_COMPLETE: - return iwm_mlme_assoc_complete(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_PROFILE_INVALIDATE_COMPLETE: - return iwm_mlme_profile_invalidate(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_CONNECTION_TERMINATED: - return iwm_mlme_connection_terminated(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_SCAN_COMPLETE: - return iwm_mlme_scan_complete(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_STA_TABLE_CHANGE: - return iwm_mlme_update_sta_table(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_EXTENDED_IE_REQUIRED: - IWM_DBG_MLME(iwm, DBG, "Extended IE required\n"); - break; - case WIFI_IF_NTFY_RADIO_PREEMPTION: - return iwm_mlme_medium_lost(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_BSS_TRK_TABLE_CHANGED: - return iwm_mlme_update_bss_table(iwm, buf, buf_size, cmd); - case WIFI_IF_NTFY_BSS_TRK_ENTRIES_REMOVED: - return iwm_mlme_remove_bss(iwm, buf, buf_size, cmd); - break; - case WIFI_IF_NTFY_MGMT_FRAME: - return iwm_mlme_mgt_frame(iwm, buf, buf_size, cmd); - case WIFI_DBG_IF_NTFY_SCAN_SUPER_JOB_START: - case WIFI_DBG_IF_NTFY_SCAN_SUPER_JOB_COMPLETE: - case WIFI_DBG_IF_NTFY_SCAN_CHANNEL_START: - case WIFI_DBG_IF_NTFY_SCAN_CHANNEL_RESULT: - case WIFI_DBG_IF_NTFY_SCAN_MINI_JOB_START: - case WIFI_DBG_IF_NTFY_SCAN_MINI_JOB_COMPLETE: - case WIFI_DBG_IF_NTFY_CNCT_ATC_START: - case WIFI_DBG_IF_NTFY_COEX_NOTIFICATION: - case WIFI_DBG_IF_NTFY_COEX_HANDLE_ENVELOP: - case WIFI_DBG_IF_NTFY_COEX_HANDLE_RELEASE_ENVELOP: - IWM_DBG_MLME(iwm, DBG, "MLME debug notification: 0x%x\n", - notif->status); - break; - default: - IWM_ERR(iwm, "Unhandled notification: 0x%x\n", notif->status); - break; - } - - return 0; -} - -#define IWM_STATS_UPDATE_INTERVAL (2 * HZ) - -static int iwm_ntf_statistics(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_stats *stats = (struct iwm_umac_notif_stats *)buf; - struct iw_statistics *wstats = &iwm->wstats; - u16 max_rate = 0; - int i; - - IWM_DBG_MLME(iwm, DBG, "Statistics notification received\n"); - - if (test_bit(IWM_STATUS_ASSOCIATED, &iwm->status)) { - for (i = 0; i < UMAC_NTF_RATE_SAMPLE_NR; i++) { - max_rate = max_t(u16, max_rate, - max(le16_to_cpu(stats->tx_rate[i]), - le16_to_cpu(stats->rx_rate[i]))); - } - /* UMAC passes rate info multiplies by 2 */ - iwm->rate = max_rate >> 1; - } - iwm->txpower = le32_to_cpu(stats->tx_power); - - wstats->status = 0; - - wstats->discard.nwid = le32_to_cpu(stats->rx_drop_other_bssid); - wstats->discard.code = le32_to_cpu(stats->rx_drop_decode); - wstats->discard.fragment = le32_to_cpu(stats->rx_drop_reassembly); - wstats->discard.retries = le32_to_cpu(stats->tx_drop_max_retry); - - wstats->miss.beacon = le32_to_cpu(stats->missed_beacons); - - /* according to cfg80211 */ - if (stats->rssi_dbm < -110) - wstats->qual.qual = 0; - else if (stats->rssi_dbm > -40) - wstats->qual.qual = 70; - else - wstats->qual.qual = stats->rssi_dbm + 110; - - wstats->qual.level = stats->rssi_dbm; - wstats->qual.noise = stats->noise_dbm; - wstats->qual.updated = IW_QUAL_ALL_UPDATED | IW_QUAL_DBM; - - schedule_delayed_work(&iwm->stats_request, IWM_STATS_UPDATE_INTERVAL); - - mod_timer(&iwm->watchdog, round_jiffies(jiffies + IWM_WATCHDOG_PERIOD)); - - return 0; -} - -static int iwm_ntf_eeprom_proxy(struct iwm_priv *iwm, u8 *buf, - 
unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_cmd_eeprom_proxy *eeprom_proxy = - (struct iwm_umac_cmd_eeprom_proxy *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - struct iwm_umac_cmd_eeprom_proxy_hdr *hdr = &eeprom_proxy->hdr; - u32 hdr_offset = le32_to_cpu(hdr->offset); - u32 hdr_len = le32_to_cpu(hdr->len); - u32 hdr_type = le32_to_cpu(hdr->type); - - IWM_DBG_NTF(iwm, DBG, "type: 0x%x, len: %d, offset: 0x%x\n", - hdr_type, hdr_len, hdr_offset); - - if ((hdr_offset + hdr_len) > IWM_EEPROM_LEN) - return -EINVAL; - - switch (hdr_type) { - case IWM_UMAC_CMD_EEPROM_TYPE_READ: - memcpy(iwm->eeprom + hdr_offset, eeprom_proxy->buf, hdr_len); - break; - case IWM_UMAC_CMD_EEPROM_TYPE_WRITE: - default: - return -ENOTSUPP; - } - - return 0; -} - -static int iwm_ntf_channel_info_list(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_cmd_get_channel_list *ch_list = - (struct iwm_umac_cmd_get_channel_list *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct ieee80211_supported_band *band; - int i; - - band = wiphy->bands[IEEE80211_BAND_2GHZ]; - - for (i = 0; i < band->n_channels; i++) { - unsigned long ch_mask_0 = - le32_to_cpu(ch_list->ch[0].channels_mask); - unsigned long ch_mask_2 = - le32_to_cpu(ch_list->ch[2].channels_mask); - - if (!test_bit(i, &ch_mask_0)) - band->channels[i].flags |= IEEE80211_CHAN_DISABLED; - - if (!test_bit(i, &ch_mask_2)) - band->channels[i].flags |= IEEE80211_CHAN_NO_IBSS; - } - - band = wiphy->bands[IEEE80211_BAND_5GHZ]; - - for (i = 0; i < min(band->n_channels, 32); i++) { - unsigned long ch_mask_1 = - le32_to_cpu(ch_list->ch[1].channels_mask); - unsigned long ch_mask_3 = - le32_to_cpu(ch_list->ch[3].channels_mask); - - if (!test_bit(i, &ch_mask_1)) - band->channels[i].flags |= IEEE80211_CHAN_DISABLED; - - if (!test_bit(i, &ch_mask_3)) - band->channels[i].flags |= IEEE80211_CHAN_NO_IBSS; - } - - return 0; -} - -static int iwm_ntf_stop_resume_tx(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_notif_stop_resume_tx *stp_res_tx = - (struct iwm_umac_notif_stop_resume_tx *)buf; - struct iwm_sta_info *sta_info; - struct iwm_tid_info *tid_info; - u8 sta_id = STA_ID_N_COLOR_ID(stp_res_tx->sta_id); - u16 tid_msk = le16_to_cpu(stp_res_tx->stop_resume_tid_msk); - int bit, ret = 0; - bool stop = false; - - IWM_DBG_NTF(iwm, DBG, "stop/resume notification:\n" - "\tflags: 0x%x\n" - "\tSTA id: %d\n" - "\tTID bitmask: 0x%x\n", - stp_res_tx->flags, stp_res_tx->sta_id, - stp_res_tx->stop_resume_tid_msk); - - if (stp_res_tx->flags & UMAC_STOP_TX_FLAG) - stop = true; - - sta_info = &iwm->sta_table[sta_id]; - if (!sta_info->valid) { - IWM_ERR(iwm, "Stoping an invalid STA: %d %d\n", - sta_id, stp_res_tx->sta_id); - return -EINVAL; - } - - for_each_set_bit(bit, (unsigned long *)&tid_msk, IWM_UMAC_TID_NR) { - tid_info = &sta_info->tid_info[bit]; - - mutex_lock(&tid_info->mutex); - tid_info->stopped = stop; - mutex_unlock(&tid_info->mutex); - - if (!stop) { - struct iwm_tx_queue *txq; - int queue = iwm_tid_to_queue(bit); - - if (queue < 0) - continue; - - txq = &iwm->txq[queue]; - /* - * If we resume, we have to move our SKBs - * back to the tx queue and queue some work. 
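The stop/resume handler here receives a 16-bit TID bitmask from the firmware and visits every set bit, either marking the TID stopped or waking its TX queue back up. A plain-C sketch of that walk; the TID count mirrors the driver's IWM_UMAC_TID_NR and is an assumption here.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TID_NR 9                /* assumption: stands in for IWM_UMAC_TID_NR */

static void apply_tid_mask(uint16_t tid_msk, bool stop)
{
    for (int tid = 0; tid < TID_NR; tid++) {
        if (!(tid_msk & (1u << tid)))
            continue;
        /* the driver flips sta->tid_info[tid].stopped here and, on
         * resume, requeues the held SKBs and kicks the queue worker */
        printf("TID %d: %s TX\n", tid, stop ? "stop" : "resume");
    }
}

int main(void)
{
    apply_tid_mask(0x0005, true);       /* stop TIDs 0 and 2 */
    apply_tid_mask(0x0005, false);      /* resume them */
    return 0;
}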
- */ - spin_lock_bh(&txq->lock); - skb_queue_splice_init(&txq->queue, &txq->stopped_queue); - spin_unlock_bh(&txq->lock); - - queue_work(txq->wq, &txq->worker); - } - - } - - /* We send an ACK only for the stop case */ - if (stop) - ret = iwm_send_umac_stop_resume_tx(iwm, stp_res_tx); - - return ret; -} - -static int iwm_ntf_wifi_if_wrapper(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - struct iwm_umac_wifi_if *hdr; - - if (cmd == NULL) { - IWM_ERR(iwm, "Couldn't find expected wifi command\n"); - return -EINVAL; - } - - hdr = (struct iwm_umac_wifi_if *)cmd->buf.payload; - - IWM_DBG_NTF(iwm, DBG, "WIFI_IF_WRAPPER cmd is delivered to UMAC: " - "oid is 0x%x\n", hdr->oid); - - set_bit(hdr->oid, &iwm->wifi_ntfy[0]); - wake_up_interruptible(&iwm->wifi_ntfy_queue); - - switch (hdr->oid) { - case UMAC_WIFI_IF_CMD_SET_PROFILE: - iwm->umac_profile_active = true; - break; - default: - break; - } - - return 0; -} - -#define CT_KILL_DELAY (30 * HZ) -static int iwm_ntf_card_state(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size, struct iwm_wifi_cmd *cmd) -{ - struct wiphy *wiphy = iwm_to_wiphy(iwm); - struct iwm_lmac_card_state *state = (struct iwm_lmac_card_state *) - (buf + sizeof(struct iwm_umac_wifi_in_hdr)); - u32 flags = le32_to_cpu(state->flags); - - IWM_INFO(iwm, "HW RF Kill %s, CT Kill %s\n", - flags & IWM_CARD_STATE_HW_DISABLED ? "ON" : "OFF", - flags & IWM_CARD_STATE_CTKILL_DISABLED ? "ON" : "OFF"); - - if (flags & IWM_CARD_STATE_CTKILL_DISABLED) { - /* - * We got a CTKILL event: We bring the interface down in - * oder to cool the device down, and try to bring it up - * 30 seconds later. If it's still too hot, we'll go through - * this code path again. - */ - cancel_delayed_work_sync(&iwm->ct_kill_delay); - schedule_delayed_work(&iwm->ct_kill_delay, CT_KILL_DELAY); - } - - wiphy_rfkill_set_hw_state(wiphy, flags & - (IWM_CARD_STATE_HW_DISABLED | - IWM_CARD_STATE_CTKILL_DISABLED)); - - return 0; -} - -static int iwm_rx_handle_wifi(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size) -{ - struct iwm_umac_wifi_in_hdr *wifi_hdr; - struct iwm_wifi_cmd *cmd; - u8 source, cmd_id; - u16 seq_num; - u32 count; - - wifi_hdr = (struct iwm_umac_wifi_in_hdr *)buf; - cmd_id = wifi_hdr->sw_hdr.cmd.cmd; - source = GET_VAL32(wifi_hdr->hw_hdr.cmd, UMAC_HDI_IN_CMD_SOURCE); - if (source >= IWM_SRC_NUM) { - IWM_CRIT(iwm, "invalid source %d\n", source); - return -EINVAL; - } - - if (cmd_id == REPLY_RX_MPDU_CMD) - trace_iwm_rx_packet(iwm, buf, buf_size); - else if ((cmd_id == UMAC_NOTIFY_OPCODE_RX_TICKET) && - (source == UMAC_HDI_IN_SOURCE_FW)) - trace_iwm_rx_ticket(iwm, buf, buf_size); - else - trace_iwm_rx_wifi_cmd(iwm, wifi_hdr); - - count = GET_VAL32(wifi_hdr->sw_hdr.meta_data, UMAC_FW_CMD_BYTE_COUNT); - count += sizeof(struct iwm_umac_wifi_in_hdr) - - sizeof(struct iwm_dev_cmd_hdr); - if (count > buf_size) { - IWM_CRIT(iwm, "count %d, buf size:%ld\n", count, buf_size); - return -EINVAL; - } - - seq_num = le16_to_cpu(wifi_hdr->sw_hdr.cmd.seq_num); - - IWM_DBG_RX(iwm, DBG, "CMD:0x%x, source: 0x%x, seqnum: %d\n", - cmd_id, source, seq_num); - - /* - * If this is a response to a previously sent command, there must - * be a pending command for this sequence number. - */ - cmd = iwm_get_pending_wifi_cmd(iwm, seq_num); - - /* Notify the caller only for sync commands. 
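iwm_ntf_statistics() above maps the firmware-reported RSSI in dBm onto the 0..70 wireless-extensions quality scale, saturating below -110 dBm and above -40 dBm (it also halves the reported rate, which the UMAC passes multiplied by two). The clamp in isolation:

#include <stdio.h>

static int rssi_dbm_to_qual(int rssi_dbm)
{
    if (rssi_dbm < -110)
        return 0;
    if (rssi_dbm > -40)
        return 70;
    return rssi_dbm + 110;
}

int main(void)
{
    int samples[] = { -120, -110, -75, -40, -30 };

    for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%d dBm -> qual %d\n", samples[i], rssi_dbm_to_qual(samples[i]));
    return 0;
}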
*/ - switch (source) { - case UMAC_HDI_IN_SOURCE_FHRX: - if (iwm->lmac_handlers[cmd_id] && - test_bit(cmd_id, &iwm->lmac_handler_map[0])) - return iwm_notif_send(iwm, cmd, cmd_id, source, - buf, count); - break; - case UMAC_HDI_IN_SOURCE_FW: - if (iwm->umac_handlers[cmd_id] && - test_bit(cmd_id, &iwm->umac_handler_map[0])) - return iwm_notif_send(iwm, cmd, cmd_id, source, - buf, count); - break; - case UMAC_HDI_IN_SOURCE_UDMA: - break; - } - - return iwm_rx_handle_resp(iwm, buf, count, cmd); -} - -int iwm_rx_handle_resp(struct iwm_priv *iwm, u8 *buf, unsigned long buf_size, - struct iwm_wifi_cmd *cmd) -{ - u8 source, cmd_id; - struct iwm_umac_wifi_in_hdr *wifi_hdr; - int ret = 0; - - wifi_hdr = (struct iwm_umac_wifi_in_hdr *)buf; - cmd_id = wifi_hdr->sw_hdr.cmd.cmd; - - source = GET_VAL32(wifi_hdr->hw_hdr.cmd, UMAC_HDI_IN_CMD_SOURCE); - - IWM_DBG_RX(iwm, DBG, "CMD:0x%x, source: 0x%x\n", cmd_id, source); - - switch (source) { - case UMAC_HDI_IN_SOURCE_FHRX: - if (iwm->lmac_handlers[cmd_id]) - ret = iwm->lmac_handlers[cmd_id] - (iwm, buf, buf_size, cmd); - break; - case UMAC_HDI_IN_SOURCE_FW: - if (iwm->umac_handlers[cmd_id]) - ret = iwm->umac_handlers[cmd_id] - (iwm, buf, buf_size, cmd); - break; - case UMAC_HDI_IN_SOURCE_UDMA: - ret = -EINVAL; - break; - } - - kfree(cmd); - - return ret; -} - -static int iwm_rx_handle_nonwifi(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size) -{ - u8 seq_num; - struct iwm_udma_in_hdr *hdr = (struct iwm_udma_in_hdr *)buf; - struct iwm_nonwifi_cmd *cmd; - - trace_iwm_rx_nonwifi_cmd(iwm, buf, buf_size); - seq_num = GET_VAL32(hdr->cmd, UDMA_HDI_IN_CMD_NON_WIFI_HW_SEQ_NUM); - - /* - * We received a non wifi answer. - * Let's check if there's a pending command for it, and if so - * replace the command payload with the buffer, and then wake the - * callers up. - * That means we only support synchronised non wifi command response - * schemes. - */ - list_for_each_entry(cmd, &iwm->nonwifi_pending_cmd, pending) - if (cmd->seq_num == seq_num) { - cmd->resp_received = true; - cmd->buf.len = buf_size; - memcpy(cmd->buf.hdr, buf, buf_size); - wake_up_interruptible(&iwm->nonwifi_queue); - } - - return 0; -} - -static int iwm_rx_handle_umac(struct iwm_priv *iwm, u8 *buf, - unsigned long buf_size) -{ - int ret = 0; - u8 op_code; - unsigned long buf_offset = 0; - struct iwm_udma_in_hdr *hdr; - - /* - * To allow for a more efficient bus usage, UMAC - * messages are encapsulated into UDMA ones. This - * way we can have several UMAC messages in one bus - * transfer. - * A UDMA frame size is always aligned on 16 bytes, - * and a UDMA frame must not start with a UMAC_PAD_TERMINAL - * word. This is how we parse a bus frame into several - * UDMA ones. 
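As the comment above explains, one bus transfer can carry several UMAC messages encapsulated in UDMA sub-frames, each padded to a 16-byte boundary, with parsing stopping at a pad-terminal marker. A standalone sketch of that walk; the header layout and the 0xADADADAD terminator value are illustrative, not the real UMAC constants.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FRAME_ALIGN  16
#define PAD_TERMINAL 0xADADADADu        /* illustrative terminator value */

struct udma_hdr {
    uint32_t cmd;
    uint32_t size;                      /* payload size, excluding this header */
};

/* total span of one sub-frame, padded to the 16-byte boundary */
static size_t frame_span(const struct udma_hdr *hdr)
{
    size_t raw = sizeof(*hdr) + hdr->size;

    return (raw + FRAME_ALIGN - 1) & ~(size_t)(FRAME_ALIGN - 1);
}

static void walk_frames(const uint8_t *buf, size_t len)
{
    size_t off = 0;

    while (off + sizeof(struct udma_hdr) <= len) {
        struct udma_hdr hdr;

        memcpy(&hdr, buf + off, sizeof(hdr));
        if (hdr.cmd == PAD_TERMINAL || hdr.size == PAD_TERMINAL) {
            printf("end of frame at offset %zu\n", off);
            break;
        }
        printf("sub-frame at offset %zu, payload %u bytes\n",
               off, (unsigned int)hdr.size);
        off += frame_span(&hdr);
    }
}

int main(void)
{
    uint8_t buf[64] = { 0 };
    struct udma_hdr h = { .cmd = 1, .size = 5 };

    memcpy(buf, &h, sizeof(h));         /* one small sub-frame ... */
    h.cmd = PAD_TERMINAL;
    memcpy(buf + 16, &h, sizeof(h));    /* ... then the terminator */
    walk_frames(buf, sizeof(buf));
    return 0;
}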
- */ - while (buf_offset < buf_size) { - - hdr = (struct iwm_udma_in_hdr *)(buf + buf_offset); - - if (iwm_rx_check_udma_hdr(hdr) < 0) { - IWM_DBG_RX(iwm, DBG, "End of frame\n"); - break; - } - - op_code = GET_VAL32(hdr->cmd, UMAC_HDI_IN_CMD_OPCODE); - - IWM_DBG_RX(iwm, DBG, "Op code: 0x%x\n", op_code); - - if (op_code == UMAC_HDI_IN_OPCODE_WIFI) { - ret |= iwm_rx_handle_wifi(iwm, buf + buf_offset, - buf_size - buf_offset); - } else if (op_code < UMAC_HDI_IN_OPCODE_NONWIFI_MAX) { - if (GET_VAL32(hdr->cmd, - UDMA_HDI_IN_CMD_NON_WIFI_HW_SIG) != - UDMA_HDI_IN_CMD_NON_WIFI_HW_SIG) { - IWM_ERR(iwm, "Incorrect hw signature\n"); - return -EINVAL; - } - ret |= iwm_rx_handle_nonwifi(iwm, buf + buf_offset, - buf_size - buf_offset); - } else { - IWM_ERR(iwm, "Invalid RX opcode: 0x%x\n", op_code); - ret |= -EINVAL; - } - - buf_offset += iwm_rx_resp_size(hdr); - } - - return ret; -} - -int iwm_rx_handle(struct iwm_priv *iwm, u8 *buf, unsigned long buf_size) -{ - struct iwm_udma_in_hdr *hdr; - - hdr = (struct iwm_udma_in_hdr *)buf; - - switch (le32_to_cpu(hdr->cmd)) { - case UMAC_REBOOT_BARKER: - if (test_bit(IWM_STATUS_READY, &iwm->status)) { - IWM_ERR(iwm, "Unexpected BARKER\n"); - - schedule_work(&iwm->reset_worker); - - return 0; - } - - return iwm_notif_send(iwm, NULL, IWM_BARKER_REBOOT_NOTIFICATION, - IWM_SRC_UDMA, buf, buf_size); - case UMAC_ACK_BARKER: - return iwm_notif_send(iwm, NULL, IWM_ACK_BARKER_NOTIFICATION, - IWM_SRC_UDMA, NULL, 0); - default: - IWM_DBG_RX(iwm, DBG, "Received cmd: 0x%x\n", hdr->cmd); - return iwm_rx_handle_umac(iwm, buf, buf_size); - } - - return 0; -} - -static const iwm_handler iwm_umac_handlers[] = -{ - [UMAC_NOTIFY_OPCODE_ERROR] = iwm_ntf_error, - [UMAC_NOTIFY_OPCODE_ALIVE] = iwm_ntf_umac_alive, - [UMAC_NOTIFY_OPCODE_INIT_COMPLETE] = iwm_ntf_init_complete, - [UMAC_NOTIFY_OPCODE_WIFI_CORE_STATUS] = iwm_ntf_wifi_status, - [UMAC_NOTIFY_OPCODE_WIFI_IF_WRAPPER] = iwm_ntf_mlme, - [UMAC_NOTIFY_OPCODE_PAGE_DEALLOC] = iwm_ntf_tx_credit_update, - [UMAC_NOTIFY_OPCODE_RX_TICKET] = iwm_ntf_rx_ticket, - [UMAC_CMD_OPCODE_RESET] = iwm_ntf_umac_reset, - [UMAC_NOTIFY_OPCODE_STATS] = iwm_ntf_statistics, - [UMAC_CMD_OPCODE_EEPROM_PROXY] = iwm_ntf_eeprom_proxy, - [UMAC_CMD_OPCODE_GET_CHAN_INFO_LIST] = iwm_ntf_channel_info_list, - [UMAC_CMD_OPCODE_STOP_RESUME_STA_TX] = iwm_ntf_stop_resume_tx, - [REPLY_RX_MPDU_CMD] = iwm_ntf_rx_packet, - [UMAC_CMD_OPCODE_WIFI_IF_WRAPPER] = iwm_ntf_wifi_if_wrapper, -}; - -static const iwm_handler iwm_lmac_handlers[] = -{ - [REPLY_TX] = iwm_ntf_tx, - [REPLY_ALIVE] = iwm_ntf_lmac_version, - [CALIBRATION_RES_NOTIFICATION] = iwm_ntf_calib_res, - [CALIBRATION_COMPLETE_NOTIFICATION] = iwm_ntf_calib_complete, - [CALIBRATION_CFG_CMD] = iwm_ntf_calib_cfg, - [REPLY_RX_MPDU_CMD] = iwm_ntf_rx_packet, - [CARD_STATE_NOTIFICATION] = iwm_ntf_card_state, -}; - -void iwm_rx_setup_handlers(struct iwm_priv *iwm) -{ - iwm->umac_handlers = (iwm_handler *) iwm_umac_handlers; - iwm->lmac_handlers = (iwm_handler *) iwm_lmac_handlers; -} - -static void iwm_remove_iv(struct sk_buff *skb, u32 hdr_total_len) -{ - struct ieee80211_hdr *hdr; - unsigned int hdr_len; - - hdr = (struct ieee80211_hdr *)skb->data; - - if (!ieee80211_has_protected(hdr->frame_control)) - return; - - hdr_len = ieee80211_hdrlen(hdr->frame_control); - if (hdr_total_len <= hdr_len) - return; - - memmove(skb->data + (hdr_total_len - hdr_len), skb->data, hdr_len); - skb_pull(skb, (hdr_total_len - hdr_len)); -} - -static void iwm_rx_adjust_packet(struct iwm_priv *iwm, - struct iwm_rx_packet *packet, - struct 
iwm_rx_ticket_node *ticket_node) -{ - u32 payload_offset = 0, payload_len; - struct iwm_rx_ticket *ticket = ticket_node->ticket; - struct iwm_rx_mpdu_hdr *mpdu_hdr; - struct ieee80211_hdr *hdr; - - mpdu_hdr = (struct iwm_rx_mpdu_hdr *)packet->skb->data; - payload_offset += sizeof(struct iwm_rx_mpdu_hdr); - /* Padding is 0 or 2 bytes */ - payload_len = le16_to_cpu(mpdu_hdr->len) + - (le16_to_cpu(ticket->flags) & IWM_RX_TICKET_PAD_SIZE_MSK); - payload_len -= ticket->tail_len; - - IWM_DBG_RX(iwm, DBG, "Packet adjusted, len:%d, offset:%d, " - "ticket offset:%d ticket tail len:%d\n", - payload_len, payload_offset, ticket->payload_offset, - ticket->tail_len); - - IWM_HEXDUMP(iwm, DBG, RX, "RAW: ", packet->skb->data, packet->skb->len); - - skb_pull(packet->skb, payload_offset); - skb_trim(packet->skb, payload_len); - - iwm_remove_iv(packet->skb, ticket->payload_offset); - - hdr = (struct ieee80211_hdr *) packet->skb->data; - if (ieee80211_is_data_qos(hdr->frame_control)) { - /* UMAC handed QOS_DATA frame with 2 padding bytes appended - * to the qos_ctl field in IEEE 802.11 headers. */ - memmove(packet->skb->data + IEEE80211_QOS_CTL_LEN + 2, - packet->skb->data, - ieee80211_hdrlen(hdr->frame_control) - - IEEE80211_QOS_CTL_LEN); - hdr = (struct ieee80211_hdr *) skb_pull(packet->skb, - IEEE80211_QOS_CTL_LEN + 2); - hdr->frame_control &= ~cpu_to_le16(IEEE80211_STYPE_QOS_DATA); - } - - IWM_HEXDUMP(iwm, DBG, RX, "ADJUSTED: ", - packet->skb->data, packet->skb->len); -} - -static void classify8023(struct sk_buff *skb) -{ - struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; - - if (ieee80211_is_data_qos(hdr->frame_control)) { - u8 *qc = ieee80211_get_qos_ctl(hdr); - /* frame has qos control */ - skb->priority = *qc & IEEE80211_QOS_CTL_TID_MASK; - } else { - skb->priority = 0; - } -} - -static void iwm_rx_process_amsdu(struct iwm_priv *iwm, struct sk_buff *skb) -{ - struct wireless_dev *wdev = iwm_to_wdev(iwm); - struct net_device *ndev = iwm_to_ndev(iwm); - struct sk_buff_head list; - struct sk_buff *frame; - - IWM_HEXDUMP(iwm, DBG, RX, "A-MSDU: ", skb->data, skb->len); - - __skb_queue_head_init(&list); - ieee80211_amsdu_to_8023s(skb, &list, ndev->dev_addr, wdev->iftype, 0, - true); - - while ((frame = __skb_dequeue(&list))) { - ndev->stats.rx_packets++; - ndev->stats.rx_bytes += frame->len; - - frame->protocol = eth_type_trans(frame, ndev); - frame->ip_summed = CHECKSUM_NONE; - memset(frame->cb, 0, sizeof(frame->cb)); - - if (netif_rx_ni(frame) == NET_RX_DROP) { - IWM_ERR(iwm, "Packet dropped\n"); - ndev->stats.rx_dropped++; - } - } -} - -static void iwm_rx_process_packet(struct iwm_priv *iwm, - struct iwm_rx_packet *packet, - struct iwm_rx_ticket_node *ticket_node) -{ - int ret; - struct sk_buff *skb = packet->skb; - struct wireless_dev *wdev = iwm_to_wdev(iwm); - struct net_device *ndev = iwm_to_ndev(iwm); - - IWM_DBG_RX(iwm, DBG, "Processing packet ID %d\n", packet->id); - - switch (le16_to_cpu(ticket_node->ticket->action)) { - case IWM_RX_TICKET_RELEASE: - IWM_DBG_RX(iwm, DBG, "RELEASE packet\n"); - - iwm_rx_adjust_packet(iwm, packet, ticket_node); - skb->dev = iwm_to_ndev(iwm); - classify8023(skb); - - if (le16_to_cpu(ticket_node->ticket->flags) & - IWM_RX_TICKET_AMSDU_MSK) { - iwm_rx_process_amsdu(iwm, skb); - break; - } - - ret = ieee80211_data_to_8023(skb, ndev->dev_addr, wdev->iftype); - if (ret < 0) { - IWM_DBG_RX(iwm, DBG, "Couldn't convert 802.11 header - " - "%d\n", ret); - kfree_skb(packet->skb); - break; - } - - IWM_HEXDUMP(iwm, DBG, RX, "802.3: ", skb->data, skb->len); - - 
ndev->stats.rx_packets++; - ndev->stats.rx_bytes += skb->len; - - skb->protocol = eth_type_trans(skb, ndev); - skb->ip_summed = CHECKSUM_NONE; - memset(skb->cb, 0, sizeof(skb->cb)); - - if (netif_rx_ni(skb) == NET_RX_DROP) { - IWM_ERR(iwm, "Packet dropped\n"); - ndev->stats.rx_dropped++; - } - break; - case IWM_RX_TICKET_DROP: - IWM_DBG_RX(iwm, DBG, "DROP packet: 0x%x\n", - le16_to_cpu(ticket_node->ticket->flags)); - kfree_skb(packet->skb); - break; - default: - IWM_ERR(iwm, "Unknown ticket action: %d\n", - le16_to_cpu(ticket_node->ticket->action)); - kfree_skb(packet->skb); - } - - kfree(packet); - iwm_rx_ticket_node_free(ticket_node); -} - -/* - * Rx data processing: - * - * We're receiving Rx packet from the LMAC, and Rx ticket from - * the UMAC. - * To forward a target data packet upstream (i.e. to the - * kernel network stack), we must have received an Rx ticket - * that tells us we're allowed to release this packet (ticket - * action is IWM_RX_TICKET_RELEASE). The Rx ticket also indicates, - * among other things, where valid data actually starts in the Rx - * packet. - */ -void iwm_rx_worker(struct work_struct *work) -{ - struct iwm_priv *iwm; - struct iwm_rx_ticket_node *ticket, *next; - - iwm = container_of(work, struct iwm_priv, rx_worker); - - /* - * We go through the tickets list and if there is a pending - * packet for it, we push it upstream. - * We stop whenever a ticket is missing its packet, as we're - * supposed to send the packets in order. - */ - spin_lock(&iwm->ticket_lock); - list_for_each_entry_safe(ticket, next, &iwm->rx_tickets, node) { - struct iwm_rx_packet *packet = - iwm_rx_packet_get(iwm, le16_to_cpu(ticket->ticket->id)); - - if (!packet) { - IWM_DBG_RX(iwm, DBG, "Skip rx_work: Wait for ticket %d " - "to be handled first\n", - le16_to_cpu(ticket->ticket->id)); - break; - } - - list_del(&ticket->node); - iwm_rx_process_packet(iwm, packet, ticket); - } - spin_unlock(&iwm->ticket_lock); -} - diff --git a/drivers/net/wireless/iwmc3200wifi/rx.h b/drivers/net/wireless/iwmc3200wifi/rx.h deleted file mode 100644 index da0db91cee59..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/rx.h +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
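The Rx worker above pairs UMAC tickets with LMAC data packets and stops at the first ticket whose packet has not arrived, so frames always go upstream in ticket order. A minimal model of that rendezvous; locking and the real hash lookup are omitted.

#include <stdbool.h>
#include <stdio.h>

#define MAX_IDS 8

static bool packet_arrived[MAX_IDS];
static int tickets[] = { 0, 1, 2 };
static int next_ticket;
static const int ticket_count = sizeof(tickets) / sizeof(tickets[0]);

static void rx_worker(void)
{
    while (next_ticket < ticket_count) {
        int id = tickets[next_ticket];

        if (!packet_arrived[id]) {
            printf("ticket %d: packet not here yet, stop\n", id);
            return;
        }
        printf("ticket %d: deliver packet upstream\n", id);
        packet_arrived[id] = false;
        next_ticket++;
    }
}

int main(void)
{
    packet_arrived[1] = true;   /* packet 1 arrives before packet 0 */
    rx_worker();                /* nothing delivered: ticket 0 blocks */

    packet_arrived[0] = true;
    rx_worker();                /* now 0 and 1 go out, in order */
    return 0;
}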
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_RX_H__ -#define __IWM_RX_H__ - -#include <linux/skbuff.h> - -#include "umac.h" - -struct iwm_rx_ticket_node { - struct list_head node; - struct iwm_rx_ticket *ticket; -}; - -struct iwm_rx_packet { - struct list_head node; - u16 id; - struct sk_buff *skb; - unsigned long pkt_size; -}; - -void iwm_rx_worker(struct work_struct *work); - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/sdio.c b/drivers/net/wireless/iwmc3200wifi/sdio.c deleted file mode 100644 index 0042f204b07f..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/sdio.c +++ /dev/null @@ -1,509 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -/* - * This is the SDIO bus specific hooks for iwm. - * It also is the module's entry point. 
- * - * Interesting code paths: - * iwm_sdio_probe() (Called by an SDIO bus scan) - * -> iwm_if_alloc() (netdev.c) - * -> iwm_wdev_alloc() (cfg80211.c, allocates and register our wiphy) - * -> wiphy_new() - * -> wiphy_register() - * -> alloc_netdev_mq() - * -> register_netdev() - * - * iwm_sdio_remove() - * -> iwm_if_free() (netdev.c) - * -> unregister_netdev() - * -> iwm_wdev_free() (cfg80211.c) - * -> wiphy_unregister() - * -> wiphy_free() - * - * iwm_sdio_isr() (called in process context from the SDIO core code) - * -> queue_work(.., isr_worker) - * -- [async] --> iwm_sdio_isr_worker() - * -> iwm_rx_handle() - */ - -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/slab.h> -#include <linux/netdevice.h> -#include <linux/debugfs.h> -#include <linux/mmc/sdio_ids.h> -#include <linux/mmc/sdio.h> -#include <linux/mmc/sdio_func.h> - -#include "iwm.h" -#include "debug.h" -#include "bus.h" -#include "sdio.h" - -static void iwm_sdio_isr_worker(struct work_struct *work) -{ - struct iwm_sdio_priv *hw; - struct iwm_priv *iwm; - struct iwm_rx_info *rx_info; - struct sk_buff *skb; - u8 *rx_buf; - unsigned long rx_size; - - hw = container_of(work, struct iwm_sdio_priv, isr_worker); - iwm = hw_to_iwm(hw); - - while (!skb_queue_empty(&iwm->rx_list)) { - skb = skb_dequeue(&iwm->rx_list); - rx_info = skb_to_rx_info(skb); - rx_size = rx_info->rx_size; - rx_buf = skb->data; - - IWM_HEXDUMP(iwm, DBG, SDIO, "RX: ", rx_buf, rx_size); - if (iwm_rx_handle(iwm, rx_buf, rx_size) < 0) - IWM_WARN(iwm, "RX error\n"); - - kfree_skb(skb); - } -} - -static void iwm_sdio_isr(struct sdio_func *func) -{ - struct iwm_priv *iwm; - struct iwm_sdio_priv *hw; - struct iwm_rx_info *rx_info; - struct sk_buff *skb; - unsigned long buf_size, read_size; - int ret; - u8 val; - - hw = sdio_get_drvdata(func); - iwm = hw_to_iwm(hw); - - buf_size = hw->blk_size; - - /* We're checking the status */ - val = sdio_readb(func, IWM_SDIO_INTR_STATUS_ADDR, &ret); - if (val == 0 || ret < 0) { - IWM_ERR(iwm, "Wrong INTR_STATUS\n"); - return; - } - - /* See if we have free buffers */ - if (skb_queue_len(&iwm->rx_list) > IWM_RX_LIST_SIZE) { - IWM_ERR(iwm, "No buffer for more Rx frames\n"); - return; - } - - /* We first read the transaction size */ - read_size = sdio_readb(func, IWM_SDIO_INTR_GET_SIZE_ADDR + 1, &ret); - read_size = read_size << 8; - - if (ret < 0) { - IWM_ERR(iwm, "Couldn't read the xfer size\n"); - return; - } - - /* We need to clear the INT register */ - sdio_writeb(func, 1, IWM_SDIO_INTR_CLEAR_ADDR, &ret); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't clear the INT register\n"); - return; - } - - while (buf_size < read_size) - buf_size <<= 1; - - skb = dev_alloc_skb(buf_size); - if (!skb) { - IWM_ERR(iwm, "Couldn't alloc RX skb\n"); - return; - } - rx_info = skb_to_rx_info(skb); - rx_info->rx_size = read_size; - rx_info->rx_buf_size = buf_size; - - /* Now we can read the actual buffer */ - ret = sdio_memcpy_fromio(func, skb_put(skb, read_size), - IWM_SDIO_DATA_ADDR, read_size); - - /* The skb is put on a driver's specific Rx SKB list */ - skb_queue_tail(&iwm->rx_list, skb); - - /* We can now schedule the actual worker */ - queue_work(hw->isr_wq, &hw->isr_worker); -} - -static void iwm_sdio_rx_free(struct iwm_sdio_priv *hw) -{ - struct iwm_priv *iwm = hw_to_iwm(hw); - - flush_workqueue(hw->isr_wq); - - skb_queue_purge(&iwm->rx_list); -} - -/* Bus ops */ -static int if_sdio_enable(struct iwm_priv *iwm) -{ - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - int ret; - - sdio_claim_host(hw->func); - - ret = 
sdio_enable_func(hw->func); - if (ret) { - IWM_ERR(iwm, "Couldn't enable the device: is TOP driver " - "loaded and functional?\n"); - goto release_host; - } - - iwm_reset(iwm); - - ret = sdio_claim_irq(hw->func, iwm_sdio_isr); - if (ret) { - IWM_ERR(iwm, "Failed to claim irq: %d\n", ret); - goto release_host; - } - - sdio_writeb(hw->func, 1, IWM_SDIO_INTR_ENABLE_ADDR, &ret); - if (ret < 0) { - IWM_ERR(iwm, "Couldn't enable INTR: %d\n", ret); - goto release_irq; - } - - sdio_release_host(hw->func); - - IWM_DBG_SDIO(iwm, INFO, "IWM SDIO enable\n"); - - return 0; - - release_irq: - sdio_release_irq(hw->func); - release_host: - sdio_release_host(hw->func); - - return ret; -} - -static int if_sdio_disable(struct iwm_priv *iwm) -{ - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - int ret; - - sdio_claim_host(hw->func); - sdio_writeb(hw->func, 0, IWM_SDIO_INTR_ENABLE_ADDR, &ret); - if (ret < 0) - IWM_WARN(iwm, "Couldn't disable INTR: %d\n", ret); - - sdio_release_irq(hw->func); - sdio_disable_func(hw->func); - sdio_release_host(hw->func); - - iwm_sdio_rx_free(hw); - - iwm_reset(iwm); - - IWM_DBG_SDIO(iwm, INFO, "IWM SDIO disable\n"); - - return 0; -} - -static int if_sdio_send_chunk(struct iwm_priv *iwm, u8 *buf, int count) -{ - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - int aligned_count = ALIGN(count, hw->blk_size); - int ret; - - if ((unsigned long)buf & 0x3) { - IWM_ERR(iwm, "buf <%p> is not dword aligned\n", buf); - /* TODO: Is this a hardware limitation? use get_unligned */ - return -EINVAL; - } - - sdio_claim_host(hw->func); - ret = sdio_memcpy_toio(hw->func, IWM_SDIO_DATA_ADDR, buf, - aligned_count); - sdio_release_host(hw->func); - - return ret; -} - -static ssize_t iwm_debugfs_sdio_read(struct file *filp, char __user *buffer, - size_t count, loff_t *ppos) -{ - struct iwm_priv *iwm = filp->private_data; - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - char *buf; - u8 cccr; - int buf_len = 4096, ret; - size_t len = 0; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -ENOSPC; - - buf = kzalloc(buf_len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - sdio_claim_host(hw->func); - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_IOEx, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_IOEx\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_IOEx: 0x%x\n", cccr); - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_IORx, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_IORx\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_IORx: 0x%x\n", cccr); - - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_IENx, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_IENx\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_IENx: 0x%x\n", cccr); - - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_INTx, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_INTx\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_INTx: 0x%x\n", cccr); - - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_ABORT, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_ABORTx\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_ABORT: 0x%x\n", cccr); - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_IF, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_IF\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_IF: 0x%x\n", cccr); - - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_CAPS, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_CAPS\n"); - goto err; - } - len 
+= snprintf(buf + len, buf_len - len, "CCCR_CAPS: 0x%x\n", cccr); - - cccr = sdio_f0_readb(hw->func, SDIO_CCCR_CIS, &ret); - if (ret) { - IWM_ERR(iwm, "Could not read SDIO_CCCR_CIS\n"); - goto err; - } - len += snprintf(buf + len, buf_len - len, "CCCR_CIS: 0x%x\n", cccr); - - ret = simple_read_from_buffer(buffer, len, ppos, buf, buf_len); -err: - sdio_release_host(hw->func); - - kfree(buf); - - return ret; -} - -static const struct file_operations iwm_debugfs_sdio_fops = { - .owner = THIS_MODULE, - .open = simple_open, - .read = iwm_debugfs_sdio_read, - .llseek = default_llseek, -}; - -static void if_sdio_debugfs_init(struct iwm_priv *iwm, struct dentry *parent_dir) -{ - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - - hw->cccr_dentry = debugfs_create_file("cccr", 0200, - parent_dir, iwm, - &iwm_debugfs_sdio_fops); -} - -static void if_sdio_debugfs_exit(struct iwm_priv *iwm) -{ - struct iwm_sdio_priv *hw = iwm_to_if_sdio(iwm); - - debugfs_remove(hw->cccr_dentry); -} - -static struct iwm_if_ops if_sdio_ops = { - .enable = if_sdio_enable, - .disable = if_sdio_disable, - .send_chunk = if_sdio_send_chunk, - .debugfs_init = if_sdio_debugfs_init, - .debugfs_exit = if_sdio_debugfs_exit, - .umac_name = "iwmc3200wifi-umac-sdio.bin", - .calib_lmac_name = "iwmc3200wifi-calib-sdio.bin", - .lmac_name = "iwmc3200wifi-lmac-sdio.bin", -}; -MODULE_FIRMWARE("iwmc3200wifi-umac-sdio.bin"); -MODULE_FIRMWARE("iwmc3200wifi-calib-sdio.bin"); -MODULE_FIRMWARE("iwmc3200wifi-lmac-sdio.bin"); - -static int iwm_sdio_probe(struct sdio_func *func, - const struct sdio_device_id *id) -{ - struct iwm_priv *iwm; - struct iwm_sdio_priv *hw; - struct device *dev = &func->dev; - int ret; - - /* check if TOP has already initialized the card */ - sdio_claim_host(func); - ret = sdio_enable_func(func); - if (ret) { - dev_err(dev, "wait for TOP to enable the device\n"); - sdio_release_host(func); - return ret; - } - - ret = sdio_set_block_size(func, IWM_SDIO_BLK_SIZE); - - sdio_disable_func(func); - sdio_release_host(func); - - if (ret < 0) { - dev_err(dev, "Failed to set block size: %d\n", ret); - return ret; - } - - iwm = iwm_if_alloc(sizeof(struct iwm_sdio_priv), dev, &if_sdio_ops); - if (IS_ERR(iwm)) { - dev_err(dev, "allocate SDIO interface failed\n"); - return PTR_ERR(iwm); - } - - hw = iwm_private(iwm); - hw->iwm = iwm; - - iwm_debugfs_init(iwm); - - sdio_set_drvdata(func, hw); - - hw->func = func; - hw->blk_size = IWM_SDIO_BLK_SIZE; - - hw->isr_wq = create_singlethread_workqueue(KBUILD_MODNAME "_sdio"); - if (!hw->isr_wq) { - ret = -ENOMEM; - goto debugfs_exit; - } - - INIT_WORK(&hw->isr_worker, iwm_sdio_isr_worker); - - ret = iwm_if_add(iwm); - if (ret) { - dev_err(dev, "add SDIO interface failed\n"); - goto destroy_wq; - } - - dev_info(dev, "IWM SDIO probe\n"); - - return 0; - - destroy_wq: - destroy_workqueue(hw->isr_wq); - debugfs_exit: - iwm_debugfs_exit(iwm); - iwm_if_free(iwm); - return ret; -} - -static void iwm_sdio_remove(struct sdio_func *func) -{ - struct iwm_sdio_priv *hw = sdio_get_drvdata(func); - struct iwm_priv *iwm = hw_to_iwm(hw); - struct device *dev = &func->dev; - - iwm_if_remove(iwm); - destroy_workqueue(hw->isr_wq); - iwm_debugfs_exit(iwm); - iwm_if_free(iwm); - - sdio_set_drvdata(func, NULL); - - dev_info(dev, "IWM SDIO remove\n"); -} - -static const struct sdio_device_id iwm_sdio_ids[] = { - /* Global/AGN SKU */ - { SDIO_DEVICE(SDIO_VENDOR_ID_INTEL, 0x1403) }, - /* BGN SKU */ - { SDIO_DEVICE(SDIO_VENDOR_ID_INTEL, 0x1408) }, - { /* end: all zeroes */ }, -}; -MODULE_DEVICE_TABLE(sdio, 
iwm_sdio_ids); - -static struct sdio_driver iwm_sdio_driver = { - .name = "iwm_sdio", - .id_table = iwm_sdio_ids, - .probe = iwm_sdio_probe, - .remove = iwm_sdio_remove, -}; - -static int __init iwm_sdio_init_module(void) -{ - return sdio_register_driver(&iwm_sdio_driver); -} - -static void __exit iwm_sdio_exit_module(void) -{ - sdio_unregister_driver(&iwm_sdio_driver); -} - -module_init(iwm_sdio_init_module); -module_exit(iwm_sdio_exit_module); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR(IWM_COPYRIGHT " " IWM_AUTHOR); diff --git a/drivers/net/wireless/iwmc3200wifi/sdio.h b/drivers/net/wireless/iwmc3200wifi/sdio.h deleted file mode 100644 index aab6b6892e45..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/sdio.h +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_SDIO_H__ -#define __IWM_SDIO_H__ - -#define IWM_SDIO_DATA_ADDR 0x0 -#define IWM_SDIO_INTR_ENABLE_ADDR 0x14 -#define IWM_SDIO_INTR_STATUS_ADDR 0x13 -#define IWM_SDIO_INTR_CLEAR_ADDR 0x13 -#define IWM_SDIO_INTR_GET_SIZE_ADDR 0x2C - -#define IWM_SDIO_BLK_SIZE 256 - -#define iwm_to_if_sdio(i) (struct iwm_sdio_priv *)(iwm->private) - -struct iwm_sdio_priv { - struct sdio_func *func; - struct iwm_priv *iwm; - - struct workqueue_struct *isr_wq; - struct work_struct isr_worker; - - struct dentry *cccr_dentry; - - unsigned int blk_size; -}; - -#endif diff --git a/drivers/net/wireless/iwmc3200wifi/trace.c b/drivers/net/wireless/iwmc3200wifi/trace.c deleted file mode 100644 index 904d36f22311..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/trace.c +++ /dev/null @@ -1,3 +0,0 @@ -#include "iwm.h" -#define CREATE_TRACE_POINTS -#include "trace.h" diff --git a/drivers/net/wireless/iwmc3200wifi/trace.h b/drivers/net/wireless/iwmc3200wifi/trace.h deleted file mode 100644 index f5f7070b7e22..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/trace.h +++ /dev/null @@ -1,283 +0,0 @@ -#if !defined(__IWM_TRACE_H__) || defined(TRACE_HEADER_MULTI_READ) -#define __IWM_TRACE_H__ - -#include <linux/tracepoint.h> - -#if !defined(CONFIG_IWM_TRACING) -#undef TRACE_EVENT -#define TRACE_EVENT(name, proto, ...) \ -static inline void trace_ ## name(proto) {} -#endif - -#undef TRACE_SYSTEM -#define TRACE_SYSTEM iwm - -#define IWM_ENTRY __array(char, ndev_name, 16) -#define IWM_ASSIGN strlcpy(__entry->ndev_name, iwm_to_ndev(iwm)->name, 16) -#define IWM_PR_FMT "%s" -#define IWM_PR_ARG __entry->ndev_name - -TRACE_EVENT(iwm_tx_nonwifi_cmd, - TP_PROTO(struct iwm_priv *iwm, struct iwm_udma_out_nonwifi_hdr *hdr), - - TP_ARGS(iwm, hdr), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, opcode) - __field(u8, resp) - __field(u8, eot) - __field(u8, hw) - __field(u16, seq) - __field(u32, addr) - __field(u32, op1) - __field(u32, op2) - ), - - TP_fast_assign( - IWM_ASSIGN; - __entry->opcode = GET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_OPCODE); - __entry->resp = GET_VAL32(hdr->cmd, UDMA_HDI_OUT_NW_CMD_RESP); - __entry->eot = GET_VAL32(hdr->cmd, UMAC_HDI_OUT_CMD_EOT); - __entry->hw = GET_VAL32(hdr->cmd, UDMA_HDI_OUT_NW_CMD_HANDLE_BY_HW); - __entry->seq = GET_VAL32(hdr->cmd, UDMA_HDI_OUT_CMD_NON_WIFI_HW_SEQ_NUM); - __entry->addr = le32_to_cpu(hdr->addr); - __entry->op1 = le32_to_cpu(hdr->op1_sz); - __entry->op2 = le32_to_cpu(hdr->op2); - ), - - TP_printk( - IWM_PR_FMT " Tx TARGET CMD: opcode 0x%x, resp %d, eot %d, " - "hw %d, seq 0x%x, addr 0x%x, op1 0x%x, op2 0x%x", - IWM_PR_ARG, __entry->opcode, __entry->resp, __entry->eot, - __entry->hw, __entry->seq, __entry->addr, __entry->op1, - __entry->op2 - ) -); - -TRACE_EVENT(iwm_tx_wifi_cmd, - TP_PROTO(struct iwm_priv *iwm, struct iwm_umac_wifi_out_hdr *hdr), - - TP_ARGS(iwm, hdr), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, opcode) - __field(u8, lmac) - __field(u8, resp) - __field(u8, eot) - __field(u8, ra_tid) - __field(u8, credit_group) - __field(u8, color) - __field(u16, seq) - ), - - TP_fast_assign( - IWM_ASSIGN; - __entry->opcode = hdr->sw_hdr.cmd.cmd; - __entry->lmac = 0; - __entry->seq = __le16_to_cpu(hdr->sw_hdr.cmd.seq_num); - __entry->resp = GET_VAL8(hdr->sw_hdr.cmd.flags, UMAC_DEV_CMD_FLAGS_RESP_REQ); - __entry->color = GET_VAL32(hdr->sw_hdr.meta_data, UMAC_FW_CMD_TX_STA_COLOR); - __entry->eot = GET_VAL32(hdr->hw_hdr.cmd, 
UMAC_HDI_OUT_CMD_EOT); - __entry->ra_tid = GET_VAL32(hdr->hw_hdr.meta_data, UMAC_HDI_OUT_RATID); - __entry->credit_group = GET_VAL32(hdr->hw_hdr.meta_data, UMAC_HDI_OUT_CREDIT_GRP); - if (__entry->opcode == UMAC_CMD_OPCODE_WIFI_PASS_THROUGH || - __entry->opcode == UMAC_CMD_OPCODE_WIFI_IF_WRAPPER) { - __entry->lmac = 1; - __entry->opcode = ((struct iwm_lmac_hdr *)(hdr + 1))->id; - } - ), - - TP_printk( - IWM_PR_FMT " Tx %cMAC CMD: opcode 0x%x, resp %d, eot %d, " - "seq 0x%x, sta_color 0x%x, ra_tid 0x%x, credit_group 0x%x", - IWM_PR_ARG, __entry->lmac ? 'L' : 'U', __entry->opcode, - __entry->resp, __entry->eot, __entry->seq, __entry->color, - __entry->ra_tid, __entry->credit_group - ) -); - -TRACE_EVENT(iwm_tx_packets, - TP_PROTO(struct iwm_priv *iwm, u8 *buf, int len), - - TP_ARGS(iwm, buf, len), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, eot) - __field(u8, ra_tid) - __field(u8, credit_group) - __field(u8, color) - __field(u16, seq) - __field(u8, npkt) - __field(u32, bytes) - ), - - TP_fast_assign( - struct iwm_umac_wifi_out_hdr *hdr = - (struct iwm_umac_wifi_out_hdr *)buf; - - IWM_ASSIGN; - __entry->eot = GET_VAL32(hdr->hw_hdr.cmd, UMAC_HDI_OUT_CMD_EOT); - __entry->ra_tid = GET_VAL32(hdr->hw_hdr.meta_data, UMAC_HDI_OUT_RATID); - __entry->credit_group = GET_VAL32(hdr->hw_hdr.meta_data, UMAC_HDI_OUT_CREDIT_GRP); - __entry->color = GET_VAL32(hdr->sw_hdr.meta_data, UMAC_FW_CMD_TX_STA_COLOR); - __entry->seq = __le16_to_cpu(hdr->sw_hdr.cmd.seq_num); - __entry->npkt = 1; - __entry->bytes = len; - - if (!__entry->eot) { - int count; - u8 *ptr = buf; - - __entry->npkt = 0; - while (ptr < buf + len) { - count = GET_VAL32(hdr->sw_hdr.meta_data, - UMAC_FW_CMD_BYTE_COUNT); - ptr += ALIGN(sizeof(*hdr) + count, 16); - hdr = (struct iwm_umac_wifi_out_hdr *)ptr; - __entry->npkt++; - } - } - ), - - TP_printk( - IWM_PR_FMT " Tx %spacket: eot %d, seq 0x%x, sta_color 0x%x, " - "ra_tid 0x%x, credit_group 0x%x, embedded_packets %d, %d bytes", - IWM_PR_ARG, !__entry->eot ? "concatenated " : "", - __entry->eot, __entry->seq, __entry->color, __entry->ra_tid, - __entry->credit_group, __entry->npkt, __entry->bytes - ) -); - -TRACE_EVENT(iwm_rx_nonwifi_cmd, - TP_PROTO(struct iwm_priv *iwm, void *buf, int len), - - TP_ARGS(iwm, buf, len), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, opcode) - __field(u16, seq) - __field(u32, len) - ), - - TP_fast_assign( - struct iwm_udma_in_hdr *hdr = buf; - - IWM_ASSIGN; - __entry->opcode = GET_VAL32(hdr->cmd, UDMA_HDI_IN_NW_CMD_OPCODE); - __entry->seq = GET_VAL32(hdr->cmd, UDMA_HDI_IN_CMD_NON_WIFI_HW_SEQ_NUM); - __entry->len = len; - ), - - TP_printk( - IWM_PR_FMT " Rx TARGET RESP: opcode 0x%x, seq 0x%x, len 0x%x", - IWM_PR_ARG, __entry->opcode, __entry->seq, __entry->len - ) -); - -TRACE_EVENT(iwm_rx_wifi_cmd, - TP_PROTO(struct iwm_priv *iwm, struct iwm_umac_wifi_in_hdr *hdr), - - TP_ARGS(iwm, hdr), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, cmd) - __field(u8, source) - __field(u16, seq) - __field(u32, count) - ), - - TP_fast_assign( - IWM_ASSIGN; - __entry->cmd = hdr->sw_hdr.cmd.cmd; - __entry->source = GET_VAL32(hdr->hw_hdr.cmd, UMAC_HDI_IN_CMD_SOURCE); - __entry->count = GET_VAL32(hdr->sw_hdr.meta_data, UMAC_FW_CMD_BYTE_COUNT); - __entry->seq = le16_to_cpu(hdr->sw_hdr.cmd.seq_num); - ), - - TP_printk( - IWM_PR_FMT " Rx %s RESP: cmd 0x%x, seq 0x%x, count 0x%x", - IWM_PR_ARG, __entry->source == UMAC_HDI_IN_SOURCE_FHRX ? "LMAC" : - __entry->source == UMAC_HDI_IN_SOURCE_FW ? 
"UMAC" : "UDMA", - __entry->cmd, __entry->seq, __entry->count - ) -); - -#define iwm_ticket_action_symbol \ - { IWM_RX_TICKET_DROP, "DROP" }, \ - { IWM_RX_TICKET_RELEASE, "RELEASE" }, \ - { IWM_RX_TICKET_SNIFFER, "SNIFFER" }, \ - { IWM_RX_TICKET_ENQUEUE, "ENQUEUE" } - -TRACE_EVENT(iwm_rx_ticket, - TP_PROTO(struct iwm_priv *iwm, void *buf, int len), - - TP_ARGS(iwm, buf, len), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, action) - __field(u8, reason) - __field(u16, id) - __field(u16, flags) - ), - - TP_fast_assign( - struct iwm_rx_ticket *ticket = - ((struct iwm_umac_notif_rx_ticket *)buf)->tickets; - - IWM_ASSIGN; - __entry->id = le16_to_cpu(ticket->id); - __entry->action = le16_to_cpu(ticket->action); - __entry->flags = le16_to_cpu(ticket->flags); - __entry->reason = (__entry->flags & IWM_RX_TICKET_DROP_REASON_MSK) >> IWM_RX_TICKET_DROP_REASON_POS; - ), - - TP_printk( - IWM_PR_FMT " Rx ticket: id 0x%x, action %s, %s 0x%x%s", - IWM_PR_ARG, __entry->id, - __print_symbolic(__entry->action, iwm_ticket_action_symbol), - __entry->reason ? "reason" : "flags", - __entry->reason ? __entry->reason : __entry->flags, - __entry->flags & IWM_RX_TICKET_AMSDU_MSK ? ", AMSDU frame" : "" - ) -); - -TRACE_EVENT(iwm_rx_packet, - TP_PROTO(struct iwm_priv *iwm, void *buf, int len), - - TP_ARGS(iwm, buf, len), - - TP_STRUCT__entry( - IWM_ENTRY - __field(u8, source) - __field(u16, id) - __field(u32, len) - ), - - TP_fast_assign( - struct iwm_umac_wifi_in_hdr *hdr = buf; - - IWM_ASSIGN; - __entry->source = GET_VAL32(hdr->hw_hdr.cmd, UMAC_HDI_IN_CMD_SOURCE); - __entry->id = le16_to_cpu(hdr->sw_hdr.cmd.seq_num); - __entry->len = len - sizeof(*hdr); - ), - - TP_printk( - IWM_PR_FMT " Rx %s packet: id 0x%x, %d bytes", - IWM_PR_ARG, __entry->source == UMAC_HDI_IN_SOURCE_FHRX ? - "LMAC" : "UMAC", __entry->id, __entry->len - ) -); -#endif - -#undef TRACE_INCLUDE_PATH -#define TRACE_INCLUDE_PATH . -#undef TRACE_INCLUDE_FILE -#define TRACE_INCLUDE_FILE trace -#include <trace/define_trace.h> diff --git a/drivers/net/wireless/iwmc3200wifi/tx.c b/drivers/net/wireless/iwmc3200wifi/tx.c deleted file mode 100644 index be98074c0608..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/tx.c +++ /dev/null @@ -1,529 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -/* - * iwm Tx theory of operation: - * - * 1) We receive a 802.3 frame from the stack - * 2) We convert it to a 802.11 frame [iwm_xmit_frame] - * 3) We queue it to its corresponding tx queue [iwm_xmit_frame] - * 4) We schedule the tx worker. There is one worker per tx - * queue. [iwm_xmit_frame] - * 5) The tx worker is scheduled - * 6) We go through every queued skb on the tx queue, and for each - * and every one of them: [iwm_tx_worker] - * a) We check if we have enough Tx credits (see below for a Tx - * credits description) for the frame length. [iwm_tx_worker] - * b) If we do, we aggregate the Tx frame into a UDMA one, by - * concatenating one REPLY_TX command per Tx frame. [iwm_tx_worker] - * c) When we run out of credits, or when we reach the maximum - * concatenation size, we actually send the concatenated UDMA - * frame. [iwm_tx_worker] - * - * When we run out of Tx credits, the skbs are filling the tx queue, - * and eventually we will stop the netdev queue. [iwm_tx_worker] - * The tx queue is emptied as we're getting new tx credits, by - * scheduling the tx_worker. [iwm_tx_credit_inc] - * The netdev queue is started again when we have enough tx credits, - * and when our tx queue has some reasonable amout of space available - * (i.e. half of the max size). [iwm_tx_worker] - */ - -#include <linux/slab.h> -#include <linux/skbuff.h> -#include <linux/netdevice.h> -#include <linux/ieee80211.h> - -#include "iwm.h" -#include "debug.h" -#include "commands.h" -#include "hal.h" -#include "umac.h" -#include "bus.h" - -#define IWM_UMAC_PAGE_ALLOC_WRAP 0xffff - -#define BYTES_TO_PAGES(n) (1 + ((n) >> ilog2(IWM_UMAC_PAGE_SIZE)) - \ - (((n) & (IWM_UMAC_PAGE_SIZE - 1)) == 0)) - -#define pool_id_to_queue(id) ((id < IWM_TX_CMD_QUEUE) ? id : id - 1) -#define queue_to_pool_id(q) ((q < IWM_TX_CMD_QUEUE) ? 
q : q + 1) - -/* require to hold tx_credit lock */ -static int iwm_tx_credit_get(struct iwm_tx_credit *tx_credit, int id) -{ - struct pool_entry *pool = &tx_credit->pools[id]; - struct spool_entry *spool = &tx_credit->spools[pool->sid]; - int spool_pages; - - /* number of pages can be taken from spool by this pool */ - spool_pages = spool->max_pages - spool->alloc_pages + - max(pool->min_pages - pool->alloc_pages, 0); - - return min(pool->max_pages - pool->alloc_pages, spool_pages); -} - -static bool iwm_tx_credit_ok(struct iwm_priv *iwm, int id, int nb) -{ - u32 npages = BYTES_TO_PAGES(nb); - - if (npages <= iwm_tx_credit_get(&iwm->tx_credit, id)) - return 1; - - set_bit(id, &iwm->tx_credit.full_pools_map); - - IWM_DBG_TX(iwm, DBG, "LINK: stop txq[%d], available credit: %d\n", - pool_id_to_queue(id), - iwm_tx_credit_get(&iwm->tx_credit, id)); - - return 0; -} - -void iwm_tx_credit_inc(struct iwm_priv *iwm, int id, int total_freed_pages) -{ - struct pool_entry *pool; - struct spool_entry *spool; - int freed_pages; - int queue; - - BUG_ON(id >= IWM_MACS_OUT_GROUPS); - - pool = &iwm->tx_credit.pools[id]; - spool = &iwm->tx_credit.spools[pool->sid]; - - freed_pages = total_freed_pages - pool->total_freed_pages; - IWM_DBG_TX(iwm, DBG, "Free %d pages for pool[%d]\n", freed_pages, id); - - if (!freed_pages) { - IWM_DBG_TX(iwm, DBG, "No pages are freed by UMAC\n"); - return; - } else if (freed_pages < 0) - freed_pages += IWM_UMAC_PAGE_ALLOC_WRAP + 1; - - if (pool->alloc_pages > pool->min_pages) { - int spool_pages = pool->alloc_pages - pool->min_pages; - spool_pages = min(spool_pages, freed_pages); - spool->alloc_pages -= spool_pages; - } - - pool->alloc_pages -= freed_pages; - pool->total_freed_pages = total_freed_pages; - - IWM_DBG_TX(iwm, DBG, "Pool[%d] pages alloc: %d, total_freed: %d, " - "Spool[%d] pages alloc: %d\n", id, pool->alloc_pages, - pool->total_freed_pages, pool->sid, spool->alloc_pages); - - if (test_bit(id, &iwm->tx_credit.full_pools_map) && - (pool->alloc_pages < pool->max_pages / 2)) { - clear_bit(id, &iwm->tx_credit.full_pools_map); - - queue = pool_id_to_queue(id); - - IWM_DBG_TX(iwm, DBG, "LINK: start txq[%d], available " - "credit: %d\n", queue, - iwm_tx_credit_get(&iwm->tx_credit, id)); - queue_work(iwm->txq[queue].wq, &iwm->txq[queue].worker); - } -} - -static void iwm_tx_credit_dec(struct iwm_priv *iwm, int id, int alloc_pages) -{ - struct pool_entry *pool; - struct spool_entry *spool; - int spool_pages; - - IWM_DBG_TX(iwm, DBG, "Allocate %d pages for pool[%d]\n", - alloc_pages, id); - - BUG_ON(id >= IWM_MACS_OUT_GROUPS); - - pool = &iwm->tx_credit.pools[id]; - spool = &iwm->tx_credit.spools[pool->sid]; - - spool_pages = pool->alloc_pages + alloc_pages - pool->min_pages; - - if (pool->alloc_pages >= pool->min_pages) - spool->alloc_pages += alloc_pages; - else if (spool_pages > 0) - spool->alloc_pages += spool_pages; - - pool->alloc_pages += alloc_pages; - - IWM_DBG_TX(iwm, DBG, "Pool[%d] pages alloc: %d, total_freed: %d, " - "Spool[%d] pages alloc: %d\n", id, pool->alloc_pages, - pool->total_freed_pages, pool->sid, spool->alloc_pages); -} - -int iwm_tx_credit_alloc(struct iwm_priv *iwm, int id, int nb) -{ - u32 npages = BYTES_TO_PAGES(nb); - int ret = 0; - - spin_lock(&iwm->tx_credit.lock); - - if (!iwm_tx_credit_ok(iwm, id, nb)) { - IWM_DBG_TX(iwm, DBG, "No credit available for pool[%d]\n", id); - ret = -ENOSPC; - goto out; - } - - iwm_tx_credit_dec(iwm, id, npages); - - out: - spin_unlock(&iwm->tx_credit.lock); - return ret; -} - -/* - * Since we're on an SDIO or USB 
bus, we are not sharing memory - * for storing to be transmitted frames. The host needs to push - * them upstream. As a consequence there needs to be a way for - * the target to let us know if it can actually take more TX frames - * or not. This is what Tx credits are for. - * - * For each Tx HW queue, we have a Tx pool, and then we have one - * unique super pool (spool), which is actually a global pool of - * all the UMAC pages. - * For each Tx pool we have a min_pages, a max_pages fields, and a - * alloc_pages fields. The alloc_pages tracks the number of pages - * currently allocated from the tx pool. - * Here are the rules to check if given a tx frame we have enough - * tx credits for it: - * 1) We translate the frame length into a number of UMAC pages. - * Let's call them n_pages. - * 2) For the corresponding tx pool, we check if n_pages + - * pool->alloc_pages is higher than pool->min_pages. min_pages - * represent a set of pre-allocated pages on the tx pool. If - * that's the case, then we need to allocate those pages from - * the spool. We can do so until we reach spool->max_pages. - * 3) Each tx pool is not allowed to allocate more than pool->max_pages - * from the spool, so once we're over min_pages, we can allocate - * pages from the spool, but not more than max_pages. - * - * When the tx code path needs to send a tx frame, it checks first - * if it has enough tx credits, following those rules. [iwm_tx_credit_get] - * If it does, it then updates the pool and spool counters and - * then send the frame. [iwm_tx_credit_alloc and iwm_tx_credit_dec] - * On the other side, when the UMAC is done transmitting frames, it - * will send a credit update notification to the host. This is when - * the pool and spool counters gets to be decreased. [iwm_tx_credit_inc, - * called from rx.c:iwm_ntf_tx_credit_update] - * - */ -void iwm_tx_credit_init_pools(struct iwm_priv *iwm, - struct iwm_umac_notif_alive *alive) -{ - int i, sid, pool_pages; - - spin_lock(&iwm->tx_credit.lock); - - iwm->tx_credit.pool_nr = le16_to_cpu(alive->page_grp_count); - iwm->tx_credit.full_pools_map = 0; - memset(&iwm->tx_credit.spools[0], 0, sizeof(struct spool_entry)); - - IWM_DBG_TX(iwm, DBG, "Pools number is %d\n", iwm->tx_credit.pool_nr); - - for (i = 0; i < iwm->tx_credit.pool_nr; i++) { - __le32 page_grp_state = alive->page_grp_state[i]; - - iwm->tx_credit.pools[i].id = GET_VAL32(page_grp_state, - UMAC_ALIVE_PAGE_STS_GRP_NUM); - iwm->tx_credit.pools[i].sid = GET_VAL32(page_grp_state, - UMAC_ALIVE_PAGE_STS_SGRP_NUM); - iwm->tx_credit.pools[i].min_pages = GET_VAL32(page_grp_state, - UMAC_ALIVE_PAGE_STS_GRP_MIN_SIZE); - iwm->tx_credit.pools[i].max_pages = GET_VAL32(page_grp_state, - UMAC_ALIVE_PAGE_STS_GRP_MAX_SIZE); - iwm->tx_credit.pools[i].alloc_pages = 0; - iwm->tx_credit.pools[i].total_freed_pages = 0; - - sid = iwm->tx_credit.pools[i].sid; - pool_pages = iwm->tx_credit.pools[i].min_pages; - - if (iwm->tx_credit.spools[sid].max_pages == 0) { - iwm->tx_credit.spools[sid].id = sid; - iwm->tx_credit.spools[sid].max_pages = - GET_VAL32(page_grp_state, - UMAC_ALIVE_PAGE_STS_SGRP_MAX_SIZE); - iwm->tx_credit.spools[sid].alloc_pages = 0; - } - - iwm->tx_credit.spools[sid].alloc_pages += pool_pages; - - IWM_DBG_TX(iwm, DBG, "Pool idx: %d, id: %d, sid: %d, capacity " - "min: %d, max: %d, pool alloc: %d, total_free: %d, " - "super poll alloc: %d\n", - i, iwm->tx_credit.pools[i].id, - iwm->tx_credit.pools[i].sid, - iwm->tx_credit.pools[i].min_pages, - iwm->tx_credit.pools[i].max_pages, - iwm->tx_credit.pools[i].alloc_pages, - 
iwm->tx_credit.pools[i].total_freed_pages, - iwm->tx_credit.spools[sid].alloc_pages); - } - - spin_unlock(&iwm->tx_credit.lock); -} - -#define IWM_UDMA_HDR_LEN sizeof(struct iwm_umac_wifi_out_hdr) - -static __le16 iwm_tx_build_packet(struct iwm_priv *iwm, struct sk_buff *skb, - int pool_id, u8 *buf) -{ - struct iwm_umac_wifi_out_hdr *hdr = (struct iwm_umac_wifi_out_hdr *)buf; - struct iwm_udma_wifi_cmd udma_cmd; - struct iwm_umac_cmd umac_cmd; - struct iwm_tx_info *tx_info = skb_to_tx_info(skb); - - udma_cmd.count = cpu_to_le16(skb->len + - sizeof(struct iwm_umac_fw_cmd_hdr)); - /* set EOP to 0 here. iwm_udma_wifi_hdr_set_eop() will be - * called later to set EOP for the last packet. */ - udma_cmd.eop = 0; - udma_cmd.credit_group = pool_id; - udma_cmd.ra_tid = tx_info->sta << 4 | tx_info->tid; - udma_cmd.lmac_offset = 0; - - umac_cmd.id = REPLY_TX; - umac_cmd.count = cpu_to_le16(skb->len); - umac_cmd.color = tx_info->color; - umac_cmd.resp = 0; - umac_cmd.seq_num = cpu_to_le16(iwm_alloc_wifi_cmd_seq(iwm)); - - iwm_build_udma_wifi_hdr(iwm, &hdr->hw_hdr, &udma_cmd); - iwm_build_umac_hdr(iwm, &hdr->sw_hdr, &umac_cmd); - - memcpy(buf + sizeof(*hdr), skb->data, skb->len); - - return umac_cmd.seq_num; -} - -static int iwm_tx_send_concat_packets(struct iwm_priv *iwm, - struct iwm_tx_queue *txq) -{ - int ret; - - if (!txq->concat_count) - return 0; - - IWM_DBG_TX(iwm, DBG, "Send concatenated Tx: queue %d, %d bytes\n", - txq->id, txq->concat_count); - - /* mark EOP for the last packet */ - iwm_udma_wifi_hdr_set_eop(iwm, txq->concat_ptr, 1); - - trace_iwm_tx_packets(iwm, txq->concat_buf, txq->concat_count); - ret = iwm_bus_send_chunk(iwm, txq->concat_buf, txq->concat_count); - - txq->concat_count = 0; - txq->concat_ptr = txq->concat_buf; - - return ret; -} - -void iwm_tx_worker(struct work_struct *work) -{ - struct iwm_priv *iwm; - struct iwm_tx_info *tx_info = NULL; - struct sk_buff *skb; - struct iwm_tx_queue *txq; - struct iwm_sta_info *sta_info; - struct iwm_tid_info *tid_info; - int cmdlen, ret, pool_id; - - txq = container_of(work, struct iwm_tx_queue, worker); - iwm = container_of(txq, struct iwm_priv, txq[txq->id]); - - pool_id = queue_to_pool_id(txq->id); - - while (!test_bit(pool_id, &iwm->tx_credit.full_pools_map) && - !skb_queue_empty(&txq->queue)) { - - spin_lock_bh(&txq->lock); - skb = skb_dequeue(&txq->queue); - spin_unlock_bh(&txq->lock); - - tx_info = skb_to_tx_info(skb); - sta_info = &iwm->sta_table[tx_info->sta]; - if (!sta_info->valid) { - IWM_ERR(iwm, "Trying to send a frame to unknown STA\n"); - kfree_skb(skb); - continue; - } - - tid_info = &sta_info->tid_info[tx_info->tid]; - - mutex_lock(&tid_info->mutex); - - /* - * If the RAxTID is stopped, we queue the skb to the stopped - * queue. - * Whenever we'll get a UMAC notification to resume the tx flow - * for this RAxTID, we'll merge back the stopped queue into the - * regular queue. See iwm_ntf_stop_resume_tx() from rx.c. 
- */ - if (tid_info->stopped) { - IWM_DBG_TX(iwm, DBG, "%dx%d stopped\n", - tx_info->sta, tx_info->tid); - spin_lock_bh(&txq->lock); - skb_queue_tail(&txq->stopped_queue, skb); - spin_unlock_bh(&txq->lock); - - mutex_unlock(&tid_info->mutex); - continue; - } - - cmdlen = IWM_UDMA_HDR_LEN + skb->len; - - IWM_DBG_TX(iwm, DBG, "Tx frame on queue %d: skb: 0x%p, sta: " - "%d, color: %d\n", txq->id, skb, tx_info->sta, - tx_info->color); - - if (txq->concat_count + cmdlen > IWM_HAL_CONCATENATE_BUF_SIZE) - iwm_tx_send_concat_packets(iwm, txq); - - ret = iwm_tx_credit_alloc(iwm, pool_id, cmdlen); - if (ret) { - IWM_DBG_TX(iwm, DBG, "not enough tx_credit for queue " - "%d, Tx worker stopped\n", txq->id); - spin_lock_bh(&txq->lock); - skb_queue_head(&txq->queue, skb); - spin_unlock_bh(&txq->lock); - - mutex_unlock(&tid_info->mutex); - break; - } - - txq->concat_ptr = txq->concat_buf + txq->concat_count; - tid_info->last_seq_num = - iwm_tx_build_packet(iwm, skb, pool_id, txq->concat_ptr); - txq->concat_count += ALIGN(cmdlen, 16); - - mutex_unlock(&tid_info->mutex); - - kfree_skb(skb); - } - - iwm_tx_send_concat_packets(iwm, txq); - - if (__netif_subqueue_stopped(iwm_to_ndev(iwm), txq->id) && - !test_bit(pool_id, &iwm->tx_credit.full_pools_map) && - (skb_queue_len(&txq->queue) < IWM_TX_LIST_SIZE / 2)) { - IWM_DBG_TX(iwm, DBG, "LINK: start netif_subqueue[%d]", txq->id); - netif_wake_subqueue(iwm_to_ndev(iwm), txq->id); - } -} - -int iwm_xmit_frame(struct sk_buff *skb, struct net_device *netdev) -{ - struct iwm_priv *iwm = ndev_to_iwm(netdev); - struct wireless_dev *wdev = iwm_to_wdev(iwm); - struct iwm_tx_info *tx_info; - struct iwm_tx_queue *txq; - struct iwm_sta_info *sta_info; - u8 *dst_addr, sta_id; - u16 queue; - int ret; - - - if (!test_bit(IWM_STATUS_ASSOCIATED, &iwm->status)) { - IWM_DBG_TX(iwm, DBG, "LINK: stop netif_all_queues: " - "not associated\n"); - netif_tx_stop_all_queues(netdev); - goto drop; - } - - queue = skb_get_queue_mapping(skb); - BUG_ON(queue >= IWM_TX_DATA_QUEUES); /* no iPAN yet */ - - txq = &iwm->txq[queue]; - - /* No free space for Tx, tx_worker is too slow */ - if ((skb_queue_len(&txq->queue) > IWM_TX_LIST_SIZE) || - (skb_queue_len(&txq->stopped_queue) > IWM_TX_LIST_SIZE)) { - IWM_DBG_TX(iwm, DBG, "LINK: stop netif_subqueue[%d]\n", queue); - netif_stop_subqueue(netdev, queue); - return NETDEV_TX_BUSY; - } - - ret = ieee80211_data_from_8023(skb, netdev->dev_addr, wdev->iftype, - iwm->bssid, 0); - if (ret) { - IWM_ERR(iwm, "build wifi header failed\n"); - goto drop; - } - - dst_addr = ((struct ieee80211_hdr *)(skb->data))->addr1; - - for (sta_id = 0; sta_id < IWM_STA_TABLE_NUM; sta_id++) { - sta_info = &iwm->sta_table[sta_id]; - if (sta_info->valid && - !memcmp(dst_addr, sta_info->addr, ETH_ALEN)) - break; - } - - if (sta_id == IWM_STA_TABLE_NUM) { - IWM_ERR(iwm, "STA %pM not found in sta_table, Tx ignored\n", - dst_addr); - goto drop; - } - - tx_info = skb_to_tx_info(skb); - tx_info->sta = sta_id; - tx_info->color = sta_info->color; - /* UMAC uses TID 8 (vs. 
0) for non QoS packets */ - if (sta_info->qos) - tx_info->tid = skb->priority; - else - tx_info->tid = IWM_UMAC_MGMT_TID; - - spin_lock_bh(&iwm->txq[queue].lock); - skb_queue_tail(&iwm->txq[queue].queue, skb); - spin_unlock_bh(&iwm->txq[queue].lock); - - queue_work(iwm->txq[queue].wq, &iwm->txq[queue].worker); - - netdev->stats.tx_packets++; - netdev->stats.tx_bytes += skb->len; - return NETDEV_TX_OK; - - drop: - netdev->stats.tx_dropped++; - dev_kfree_skb_any(skb); - return NETDEV_TX_OK; -} diff --git a/drivers/net/wireless/iwmc3200wifi/umac.h b/drivers/net/wireless/iwmc3200wifi/umac.h deleted file mode 100644 index 4a137d334a42..000000000000 --- a/drivers/net/wireless/iwmc3200wifi/umac.h +++ /dev/null @@ -1,789 +0,0 @@ -/* - * Intel Wireless Multicomm 3200 WiFi driver - * - * Copyright (C) 2009 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - * - * Intel Corporation <ilw@linux.intel.com> - * Samuel Ortiz <samuel.ortiz@intel.com> - * Zhu Yi <yi.zhu@intel.com> - * - */ - -#ifndef __IWM_UMAC_H__ -#define __IWM_UMAC_H__ - -struct iwm_udma_in_hdr { - __le32 cmd; - __le32 size; -} __packed; - -struct iwm_udma_out_nonwifi_hdr { - __le32 cmd; - __le32 addr; - __le32 op1_sz; - __le32 op2; -} __packed; - -struct iwm_udma_out_wifi_hdr { - __le32 cmd; - __le32 meta_data; -} __packed; - -/* Sequence numbering */ -#define UMAC_WIFI_SEQ_NUM_BASE 1 -#define UMAC_WIFI_SEQ_NUM_MAX 0x4000 -#define UMAC_NONWIFI_SEQ_NUM_BASE 1 -#define UMAC_NONWIFI_SEQ_NUM_MAX 0x10 - -/* MAC address address */ -#define WICO_MAC_ADDRESS_ADDR 0x604008F8 - -/* RA / TID */ -#define UMAC_HDI_ACT_TBL_IDX_TID_POS 0 -#define UMAC_HDI_ACT_TBL_IDX_TID_SEED 0xF - -#define UMAC_HDI_ACT_TBL_IDX_RA_POS 4 -#define UMAC_HDI_ACT_TBL_IDX_RA_SEED 0xF - -#define UMAC_HDI_ACT_TBL_IDX_RA_UMAC 0xF -#define UMAC_HDI_ACT_TBL_IDX_TID_UMAC 0x9 -#define UMAC_HDI_ACT_TBL_IDX_TID_LMAC 0xA - -#define UMAC_HDI_ACT_TBL_IDX_HOST_CMD \ - ((UMAC_HDI_ACT_TBL_IDX_RA_UMAC << UMAC_HDI_ACT_TBL_IDX_RA_POS) |\ - (UMAC_HDI_ACT_TBL_IDX_TID_UMAC << UMAC_HDI_ACT_TBL_IDX_TID_POS)) -#define UMAC_HDI_ACT_TBL_IDX_UMAC_CMD \ - ((UMAC_HDI_ACT_TBL_IDX_RA_UMAC << UMAC_HDI_ACT_TBL_IDX_RA_POS) |\ - (UMAC_HDI_ACT_TBL_IDX_TID_LMAC << UMAC_HDI_ACT_TBL_IDX_TID_POS)) - -/* STA ID and color */ -#define STA_ID_SEED (0x0f) -#define STA_ID_POS (0) -#define STA_ID_MSK (STA_ID_SEED << STA_ID_POS) - -#define STA_COLOR_SEED (0x7) -#define STA_COLOR_POS (4) -#define STA_COLOR_MSK (STA_COLOR_SEED << STA_COLOR_POS) - -#define STA_ID_N_COLOR_COLOR(id_n_color) \ - (((id_n_color) & STA_COLOR_MSK) >> STA_COLOR_POS) -#define STA_ID_N_COLOR_ID(id_n_color) \ - (((id_n_color) & STA_ID_MSK) >> STA_ID_POS) - -/* iwm_umac_notif_alive.page_grp_state Group number -- bits [3:0] */ -#define UMAC_ALIVE_PAGE_STS_GRP_NUM_POS 0 -#define UMAC_ALIVE_PAGE_STS_GRP_NUM_SEED 0xF - -/* iwm_umac_notif_alive.page_grp_state Super group number -- bits [7:4] */ -#define UMAC_ALIVE_PAGE_STS_SGRP_NUM_POS 4 -#define UMAC_ALIVE_PAGE_STS_SGRP_NUM_SEED 0xF - -/* iwm_umac_notif_alive.page_grp_state Group min size -- bits [15:8] */ -#define UMAC_ALIVE_PAGE_STS_GRP_MIN_SIZE_POS 8 -#define UMAC_ALIVE_PAGE_STS_GRP_MIN_SIZE_SEED 0xFF - -/* iwm_umac_notif_alive.page_grp_state Group max size -- bits [23:16] */ -#define UMAC_ALIVE_PAGE_STS_GRP_MAX_SIZE_POS 16 -#define UMAC_ALIVE_PAGE_STS_GRP_MAX_SIZE_SEED 0xFF - -/* iwm_umac_notif_alive.page_grp_state Super group max size -- bits [31:24] */ -#define UMAC_ALIVE_PAGE_STS_SGRP_MAX_SIZE_POS 24 -#define UMAC_ALIVE_PAGE_STS_SGRP_MAX_SIZE_SEED 0xFF - -/* Barkers */ -#define UMAC_REBOOT_BARKER 0xdeadbeef -#define UMAC_ACK_BARKER 0xfeedbabe -#define UMAC_PAD_TERMINAL 0xadadadad - -/* UMAC JMP address */ -#define UMAC_MU_FW_INST_DATA_12_ADDR 0xBF0000 - -/* iwm_umac_hdi_out_hdr.cmd OP code -- bits [3:0] */ -#define UMAC_HDI_OUT_CMD_OPCODE_POS 0 -#define UMAC_HDI_OUT_CMD_OPCODE_SEED 0xF - -/* iwm_umac_hdi_out_hdr.cmd End-Of-Transfer -- bits [10:10] */ -#define UMAC_HDI_OUT_CMD_EOT_POS 10 -#define UMAC_HDI_OUT_CMD_EOT_SEED 0x1 - -/* iwm_umac_hdi_out_hdr.cmd UTFD only usage -- bits [11:11] */ -#define UMAC_HDI_OUT_CMD_UTFD_ONLY_POS 11 -#define UMAC_HDI_OUT_CMD_UTFD_ONLY_SEED 0x1 - -/* iwm_umac_hdi_out_hdr.cmd Non-WiFi HW sequence number -- bits [12:15] */ -#define UDMA_HDI_OUT_CMD_NON_WIFI_HW_SEQ_NUM_POS 12 -#define UDMA_HDI_OUT_CMD_NON_WIFI_HW_SEQ_NUM_SEED 0xF - -/* iwm_umac_hdi_out_hdr.cmd Signature -- bits [31:16] */ -#define 
UMAC_HDI_OUT_CMD_SIGNATURE_POS 16 -#define UMAC_HDI_OUT_CMD_SIGNATURE_SEED 0xFFFF - -/* iwm_umac_hdi_out_hdr.meta_data Byte count -- bits [11:0] */ -#define UMAC_HDI_OUT_BYTE_COUNT_POS 0 -#define UMAC_HDI_OUT_BYTE_COUNT_SEED 0xFFF - -/* iwm_umac_hdi_out_hdr.meta_data Credit group -- bits [15:12] */ -#define UMAC_HDI_OUT_CREDIT_GRP_POS 12 -#define UMAC_HDI_OUT_CREDIT_GRP_SEED 0xF - -/* iwm_umac_hdi_out_hdr.meta_data RA/TID -- bits [23:16] */ -#define UMAC_HDI_OUT_RATID_POS 16 -#define UMAC_HDI_OUT_RATID_SEED 0xFF - -/* iwm_umac_hdi_out_hdr.meta_data LMAC offset -- bits [31:24] */ -#define UMAC_HDI_OUT_LMAC_OFFSET_POS 24 -#define UMAC_HDI_OUT_LMAC_OFFSET_SEED 0xFF - -/* Signature */ -#define UMAC_HDI_OUT_SIGNATURE 0xCBBC - -/* buffer alignment */ -#define UMAC_HDI_BUF_ALIGN_MSK 0xF - -/* iwm_umac_hdi_in_hdr.cmd OP code -- bits [3:0] */ -#define UMAC_HDI_IN_CMD_OPCODE_POS 0 -#define UMAC_HDI_IN_CMD_OPCODE_SEED 0xF - -/* iwm_umac_hdi_in_hdr.cmd Non-WiFi API response -- bits [6:4] */ -#define UMAC_HDI_IN_CMD_NON_WIFI_RESP_POS 4 -#define UMAC_HDI_IN_CMD_NON_WIFI_RESP_SEED 0x7 - -/* iwm_umac_hdi_in_hdr.cmd WiFi API source -- bits [5:4] */ -#define UMAC_HDI_IN_CMD_SOURCE_POS 4 -#define UMAC_HDI_IN_CMD_SOURCE_SEED 0x3 - -/* iwm_umac_hdi_in_hdr.cmd WiFi API EOT -- bits [6:6] */ -#define UMAC_HDI_IN_CMD_EOT_POS 6 -#define UMAC_HDI_IN_CMD_EOT_SEED 0x1 - -/* iwm_umac_hdi_in_hdr.cmd timestamp present -- bits [7:7] */ -#define UMAC_HDI_IN_CMD_TIME_STAMP_PRESENT_POS 7 -#define UMAC_HDI_IN_CMD_TIME_STAMP_PRESENT_SEED 0x1 - -/* iwm_umac_hdi_in_hdr.cmd WiFi Non-last AMSDU -- bits [8:8] */ -#define UMAC_HDI_IN_CMD_NON_LAST_AMSDU_POS 8 -#define UMAC_HDI_IN_CMD_NON_LAST_AMSDU_SEED 0x1 - -/* iwm_umac_hdi_in_hdr.cmd WiFi HW sequence number -- bits [31:9] */ -#define UMAC_HDI_IN_CMD_HW_SEQ_NUM_POS 9 -#define UMAC_HDI_IN_CMD_HW_SEQ_NUM_SEED 0x7FFFFF - -/* iwm_umac_hdi_in_hdr.cmd Non-WiFi HW sequence number -- bits [12:15] */ -#define UDMA_HDI_IN_CMD_NON_WIFI_HW_SEQ_NUM_POS 12 -#define UDMA_HDI_IN_CMD_NON_WIFI_HW_SEQ_NUM_SEED 0xF - -/* iwm_umac_hdi_in_hdr.cmd Non-WiFi HW signature -- bits [16:31] */ -#define UDMA_HDI_IN_CMD_NON_WIFI_HW_SIG_POS 16 -#define UDMA_HDI_IN_CMD_NON_WIFI_HW_SIG_SEED 0xFFFF - -/* Fixed Non-WiFi signature */ -#define UDMA_HDI_IN_CMD_NON_WIFI_HW_SIG 0xCBBC - -/* IN NTFY op-codes */ -#define UMAC_NOTIFY_OPCODE_ALIVE 0xA1 -#define UMAC_NOTIFY_OPCODE_INIT_COMPLETE 0xA2 -#define UMAC_NOTIFY_OPCODE_WIFI_CORE_STATUS 0xA3 -#define UMAC_NOTIFY_OPCODE_ERROR 0xA4 -#define UMAC_NOTIFY_OPCODE_DEBUG 0xA5 -#define UMAC_NOTIFY_OPCODE_WIFI_IF_WRAPPER 0xB0 -#define UMAC_NOTIFY_OPCODE_STATS 0xB1 -#define UMAC_NOTIFY_OPCODE_PAGE_DEALLOC 0xB3 -#define UMAC_NOTIFY_OPCODE_RX_TICKET 0xB4 -#define UMAC_NOTIFY_OPCODE_MAX (UMAC_NOTIFY_OPCODE_RX_TICKET -\ - UMAC_NOTIFY_OPCODE_ALIVE + 1) -#define UMAC_NOTIFY_OPCODE_FIRST (UMAC_NOTIFY_OPCODE_ALIVE) - -/* HDI OUT OP CODE */ -#define UMAC_HDI_OUT_OPCODE_PING 0x0 -#define UMAC_HDI_OUT_OPCODE_READ 0x1 -#define UMAC_HDI_OUT_OPCODE_WRITE 0x2 -#define UMAC_HDI_OUT_OPCODE_JUMP 0x3 -#define UMAC_HDI_OUT_OPCODE_REBOOT 0x4 -#define UMAC_HDI_OUT_OPCODE_WRITE_PERSISTENT 0x5 -#define UMAC_HDI_OUT_OPCODE_READ_PERSISTENT 0x6 -#define UMAC_HDI_OUT_OPCODE_READ_MODIFY_WRITE 0x7 -/* #define UMAC_HDI_OUT_OPCODE_RESERVED 0x8..0xA */ -#define UMAC_HDI_OUT_OPCODE_WRITE_AUX_REG 0xB -#define UMAC_HDI_OUT_OPCODE_WIFI 0xF - -/* HDI IN OP CODE -- Non WiFi*/ -#define UMAC_HDI_IN_OPCODE_PING 0x0 -#define UMAC_HDI_IN_OPCODE_READ 0x1 -#define UMAC_HDI_IN_OPCODE_WRITE 0x2 -#define 
UMAC_HDI_IN_OPCODE_WRITE_PERSISTENT 0x5 -#define UMAC_HDI_IN_OPCODE_READ_PERSISTENT 0x6 -#define UMAC_HDI_IN_OPCODE_READ_MODIFY_WRITE 0x7 -#define UMAC_HDI_IN_OPCODE_EP_MGMT 0x8 -#define UMAC_HDI_IN_OPCODE_CREDIT_CHANGE 0x9 -#define UMAC_HDI_IN_OPCODE_CTRL_DATABASE 0xA -#define UMAC_HDI_IN_OPCODE_WRITE_AUX_REG 0xB -#define UMAC_HDI_IN_OPCODE_NONWIFI_MAX \ - (UMAC_HDI_IN_OPCODE_WRITE_AUX_REG + 1) -#define UMAC_HDI_IN_OPCODE_WIFI 0xF - -/* HDI IN SOURCE */ -#define UMAC_HDI_IN_SOURCE_FHRX 0x0 -#define UMAC_HDI_IN_SOURCE_UDMA 0x1 -#define UMAC_HDI_IN_SOURCE_FW 0x2 -#define UMAC_HDI_IN_SOURCE_RESERVED 0x3 - -/* OUT CMD op-codes */ -#define UMAC_CMD_OPCODE_ECHO 0x01 -#define UMAC_CMD_OPCODE_HALT 0x02 -#define UMAC_CMD_OPCODE_RESET 0x03 -#define UMAC_CMD_OPCODE_BULK_EP_INACT_TIMEOUT 0x09 -#define UMAC_CMD_OPCODE_URB_CANCEL_ACK 0x0A -#define UMAC_CMD_OPCODE_DCACHE_FLUSH 0x0B -#define UMAC_CMD_OPCODE_EEPROM_PROXY 0x0C -#define UMAC_CMD_OPCODE_TX_ECHO 0x0D -#define UMAC_CMD_OPCODE_DBG_MON 0x0E -#define UMAC_CMD_OPCODE_INTERNAL_TX 0x0F -#define UMAC_CMD_OPCODE_SET_PARAM_FIX 0x10 -#define UMAC_CMD_OPCODE_SET_PARAM_VAR 0x11 -#define UMAC_CMD_OPCODE_GET_PARAM 0x12 -#define UMAC_CMD_OPCODE_DBG_EVENT_WRAPPER 0x13 -#define UMAC_CMD_OPCODE_TARGET 0x14 -#define UMAC_CMD_OPCODE_STATISTIC_REQUEST 0x15 -#define UMAC_CMD_OPCODE_GET_CHAN_INFO_LIST 0x16 -#define UMAC_CMD_OPCODE_SET_PARAM_LIST 0x17 -#define UMAC_CMD_OPCODE_GET_PARAM_LIST 0x18 -#define UMAC_CMD_OPCODE_STOP_RESUME_STA_TX 0x19 -#define UMAC_CMD_OPCODE_TEST_BLOCK_ACK 0x1A - -#define UMAC_CMD_OPCODE_BASE_WRAPPER 0xFA -#define UMAC_CMD_OPCODE_LMAC_WRAPPER 0xFB -#define UMAC_CMD_OPCODE_HW_TEST_WRAPPER 0xFC -#define UMAC_CMD_OPCODE_WIFI_IF_WRAPPER 0xFD -#define UMAC_CMD_OPCODE_WIFI_WRAPPER 0xFE -#define UMAC_CMD_OPCODE_WIFI_PASS_THROUGH 0xFF - -/* UMAC WiFi interface op-codes */ -#define UMAC_WIFI_IF_CMD_SET_PROFILE 0x11 -#define UMAC_WIFI_IF_CMD_INVALIDATE_PROFILE 0x12 -#define UMAC_WIFI_IF_CMD_SET_EXCLUDE_LIST 0x13 -#define UMAC_WIFI_IF_CMD_SCAN_REQUEST 0x14 -#define UMAC_WIFI_IF_CMD_SCAN_CONFIG 0x15 -#define UMAC_WIFI_IF_CMD_ADD_WEP40_KEY 0x16 -#define UMAC_WIFI_IF_CMD_ADD_WEP104_KEY 0x17 -#define UMAC_WIFI_IF_CMD_ADD_TKIP_KEY 0x18 -#define UMAC_WIFI_IF_CMD_ADD_CCMP_KEY 0x19 -#define UMAC_WIFI_IF_CMD_REMOVE_KEY 0x1A -#define UMAC_WIFI_IF_CMD_GLOBAL_TX_KEY_ID 0x1B -#define UMAC_WIFI_IF_CMD_SET_HOST_EXTENDED_IE 0x1C -#define UMAC_WIFI_IF_CMD_GET_SUPPORTED_CHANNELS 0x1E -#define UMAC_WIFI_IF_CMD_PMKID_UPDATE 0x1F -#define UMAC_WIFI_IF_CMD_TX_PWR_TRIGGER 0x20 - -/* UMAC WiFi interface ports */ -#define UMAC_WIFI_IF_FLG_PORT_DEF 0x00 -#define UMAC_WIFI_IF_FLG_PORT_PAN 0x01 -#define UMAC_WIFI_IF_FLG_PORT_PAN_INVALID WIFI_IF_FLG_PORT_DEF - -/* UMAC WiFi interface actions */ -#define UMAC_WIFI_IF_FLG_ACT_GET 0x10 -#define UMAC_WIFI_IF_FLG_ACT_SET 0x20 - -/* iwm_umac_fw_cmd_hdr.meta_data byte count -- bits [11:0] */ -#define UMAC_FW_CMD_BYTE_COUNT_POS 0 -#define UMAC_FW_CMD_BYTE_COUNT_SEED 0xFFF - -/* iwm_umac_fw_cmd_hdr.meta_data status -- bits [15:12] */ -#define UMAC_FW_CMD_STATUS_POS 12 -#define UMAC_FW_CMD_STATUS_SEED 0xF - -/* iwm_umac_fw_cmd_hdr.meta_data full TX command by Driver -- bits [16:16] */ -#define UMAC_FW_CMD_TX_DRV_FULL_CMD_POS 16 -#define UMAC_FW_CMD_TX_DRV_FULL_CMD_SEED 0x1 - -/* iwm_umac_fw_cmd_hdr.meta_data TX command by FW -- bits [17:17] */ -#define UMAC_FW_CMD_TX_FW_CMD_POS 17 -#define UMAC_FW_CMD_TX_FW_CMD_SEED 0x1 - -/* iwm_umac_fw_cmd_hdr.meta_data TX plaintext mode -- bits [18:18] */ -#define UMAC_FW_CMD_TX_PLAINTEXT_POS 18 
-#define UMAC_FW_CMD_TX_PLAINTEXT_SEED 0x1 - -/* iwm_umac_fw_cmd_hdr.meta_data STA color -- bits [22:20] */ -#define UMAC_FW_CMD_TX_STA_COLOR_POS 20 -#define UMAC_FW_CMD_TX_STA_COLOR_SEED 0x7 - -/* iwm_umac_fw_cmd_hdr.meta_data TX life time (TU) -- bits [31:24] */ -#define UMAC_FW_CMD_TX_LIFETIME_TU_POS 24 -#define UMAC_FW_CMD_TX_LIFETIME_TU_SEED 0xFF - -/* iwm_dev_cmd_hdr.flags Response required -- bits [5:5] */ -#define UMAC_DEV_CMD_FLAGS_RESP_REQ_POS 5 -#define UMAC_DEV_CMD_FLAGS_RESP_REQ_SEED 0x1 - -/* iwm_dev_cmd_hdr.flags Aborted command -- bits [6:6] */ -#define UMAC_DEV_CMD_FLAGS_ABORT_POS 6 -#define UMAC_DEV_CMD_FLAGS_ABORT_SEED 0x1 - -/* iwm_dev_cmd_hdr.flags Internal command -- bits [7:7] */ -#define DEV_CMD_FLAGS_FLD_INTERNAL_POS 7 -#define DEV_CMD_FLAGS_FLD_INTERNAL_SEED 0x1 - -/* Rx */ -/* Rx actions */ -#define IWM_RX_TICKET_DROP 0x0 -#define IWM_RX_TICKET_RELEASE 0x1 -#define IWM_RX_TICKET_SNIFFER 0x2 -#define IWM_RX_TICKET_ENQUEUE 0x3 - -/* Rx flags */ -#define IWM_RX_TICKET_PAD_SIZE_MSK 0x2 -#define IWM_RX_TICKET_SPECIAL_SNAP_MSK 0x4 -#define IWM_RX_TICKET_AMSDU_MSK 0x8 -#define IWM_RX_TICKET_DROP_REASON_POS 4 -#define IWM_RX_TICKET_DROP_REASON_MSK (0x1F << IWM_RX_TICKET_DROP_REASON_POS) - -#define IWM_RX_DROP_NO_DROP 0x0 -#define IWM_RX_DROP_BAD_CRC 0x1 -/* L2P no address match */ -#define IWM_RX_DROP_LMAC_ADDR_FILTER 0x2 -/* Multicast address not in list */ -#define IWM_RX_DROP_MCAST_ADDR_FILTER 0x3 -/* Control frames are not sent to the driver */ -#define IWM_RX_DROP_CTL_FRAME 0x4 -/* Our frame is back */ -#define IWM_RX_DROP_OUR_TX 0x5 -/* Association class filtering */ -#define IWM_RX_DROP_CLASS_FILTER 0x6 -/* Duplicated frame */ -#define IWM_RX_DROP_DUPLICATE_FILTER 0x7 -/* Decryption error */ -#define IWM_RX_DROP_SEC_ERR 0x8 -/* Unencrypted frame while encryption is on */ -#define IWM_RX_DROP_SEC_NO_ENCRYPTION 0x9 -/* Replay check failure */ -#define IWM_RX_DROP_SEC_REPLAY_ERR 0xa -/* uCode and FW key color mismatch, check before replay */ -#define IWM_RX_DROP_SEC_KEY_COLOR_MISMATCH 0xb -#define IWM_RX_DROP_SEC_TKIP_COUNTER_MEASURE 0xc -/* No fragmentations Db is found */ -#define IWM_RX_DROP_FRAG_NO_RESOURCE 0xd -/* Fragmention Db has seqCtl mismatch Vs. 
non-1st frag */ -#define IWM_RX_DROP_FRAG_ERR 0xe -#define IWM_RX_DROP_FRAG_LOST 0xf -#define IWM_RX_DROP_FRAG_COMPLETE 0x10 -/* Should be handled by UMAC */ -#define IWM_RX_DROP_MANAGEMENT 0x11 -/* STA not found by UMAC */ -#define IWM_RX_DROP_NO_STATION 0x12 -/* NULL or QoS NULL */ -#define IWM_RX_DROP_NULL_DATA 0x13 -#define IWM_RX_DROP_BA_REORDER_OLD_SEQCTL 0x14 -#define IWM_RX_DROP_BA_REORDER_DUPLICATE 0x15 - -struct iwm_rx_ticket { - __le16 action; - __le16 id; - __le16 flags; - u8 payload_offset; /* includes: MAC header, pad, IV */ - u8 tail_len; /* includes: MIC, ICV, CRC (w/o STATUS) */ -} __packed; - -struct iwm_rx_mpdu_hdr { - __le16 len; - __le16 reserved; -} __packed; - -/* UMAC SW WIFI API */ - -struct iwm_dev_cmd_hdr { - u8 cmd; - u8 flags; - __le16 seq_num; -} __packed; - -struct iwm_umac_fw_cmd_hdr { - __le32 meta_data; - struct iwm_dev_cmd_hdr cmd; -} __packed; - -struct iwm_umac_wifi_out_hdr { - struct iwm_udma_out_wifi_hdr hw_hdr; - struct iwm_umac_fw_cmd_hdr sw_hdr; -} __packed; - -struct iwm_umac_nonwifi_out_hdr { - struct iwm_udma_out_nonwifi_hdr hw_hdr; -} __packed; - -struct iwm_umac_wifi_in_hdr { - struct iwm_udma_in_hdr hw_hdr; - struct iwm_umac_fw_cmd_hdr sw_hdr; -} __packed; - -struct iwm_umac_nonwifi_in_hdr { - struct iwm_udma_in_hdr hw_hdr; - __le32 time_stamp; -} __packed; - -#define IWM_UMAC_PAGE_SIZE 0x200 - -/* Notify structures */ -struct iwm_fw_version { - u8 minor; - u8 major; - __le16 id; -}; - -struct iwm_fw_build { - u8 type; - u8 subtype; - u8 platform; - u8 opt; -}; - -struct iwm_fw_alive_hdr { - struct iwm_fw_version ver; - struct iwm_fw_build build; - __le32 os_build; - __le32 log_hdr_addr; - __le32 log_buf_addr; - __le32 sys_timer_addr; -}; - -#define WAIT_NOTIF_TIMEOUT (2 * HZ) -#define SCAN_COMPLETE_TIMEOUT (3 * HZ) - -#define UMAC_NTFY_ALIVE_STATUS_ERR 0xDEAD -#define UMAC_NTFY_ALIVE_STATUS_OK 0xCAFE - -#define UMAC_NTFY_INIT_COMPLETE_STATUS_ERR 0xDEAD -#define UMAC_NTFY_INIT_COMPLETE_STATUS_OK 0xCAFE - -#define UMAC_NTFY_WIFI_CORE_STATUS_LINK_EN 0x40 -#define UMAC_NTFY_WIFI_CORE_STATUS_MLME_EN 0x80 - -#define IWM_MACS_OUT_GROUPS 6 -#define IWM_MACS_OUT_SGROUPS 1 - - -#define WIFI_IF_NTFY_ASSOC_START 0x80 -#define WIFI_IF_NTFY_ASSOC_COMPLETE 0x81 -#define WIFI_IF_NTFY_PROFILE_INVALIDATE_COMPLETE 0x82 -#define WIFI_IF_NTFY_CONNECTION_TERMINATED 0x83 -#define WIFI_IF_NTFY_SCAN_COMPLETE 0x84 -#define WIFI_IF_NTFY_STA_TABLE_CHANGE 0x85 -#define WIFI_IF_NTFY_EXTENDED_IE_REQUIRED 0x86 -#define WIFI_IF_NTFY_RADIO_PREEMPTION 0x87 -#define WIFI_IF_NTFY_BSS_TRK_TABLE_CHANGED 0x88 -#define WIFI_IF_NTFY_BSS_TRK_ENTRIES_REMOVED 0x89 -#define WIFI_IF_NTFY_LINK_QUALITY_STATISTICS 0x8A -#define WIFI_IF_NTFY_MGMT_FRAME 0x8B - -/* DEBUG INDICATIONS */ -#define WIFI_DBG_IF_NTFY_SCAN_SUPER_JOB_START 0xE0 -#define WIFI_DBG_IF_NTFY_SCAN_SUPER_JOB_COMPLETE 0xE1 -#define WIFI_DBG_IF_NTFY_SCAN_CHANNEL_START 0xE2 -#define WIFI_DBG_IF_NTFY_SCAN_CHANNEL_RESULT 0xE3 -#define WIFI_DBG_IF_NTFY_SCAN_MINI_JOB_START 0xE4 -#define WIFI_DBG_IF_NTFY_SCAN_MINI_JOB_COMPLETE 0xE5 -#define WIFI_DBG_IF_NTFY_CNCT_ATC_START 0xE6 -#define WIFI_DBG_IF_NTFY_COEX_NOTIFICATION 0xE7 -#define WIFI_DBG_IF_NTFY_COEX_HANDLE_ENVELOP 0xE8 -#define WIFI_DBG_IF_NTFY_COEX_HANDLE_RELEASE_ENVELOP 0xE9 - -#define WIFI_IF_NTFY_MAX 0xff - -/* Notification structures */ -struct iwm_umac_notif_wifi_if { - struct iwm_umac_wifi_in_hdr hdr; - u8 status; - u8 flags; - __le16 buf_size; -} __packed; - -#define UMAC_ROAM_REASON_FIRST_SELECTION 0x1 -#define UMAC_ROAM_REASON_AP_DEAUTH 0x2 -#define 
UMAC_ROAM_REASON_AP_CONNECT_LOST 0x3 -#define UMAC_ROAM_REASON_RSSI 0x4 -#define UMAC_ROAM_REASON_AP_ASSISTED_ROAM 0x5 -#define UMAC_ROAM_REASON_IBSS_COALESCING 0x6 - -struct iwm_umac_notif_assoc_start { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 roam_reason; - u8 bssid[ETH_ALEN]; - u8 reserved[2]; -} __packed; - -#define UMAC_ASSOC_COMPLETE_SUCCESS 0x0 -#define UMAC_ASSOC_COMPLETE_FAILURE 0x1 - -struct iwm_umac_notif_assoc_complete { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 status; - u8 bssid[ETH_ALEN]; - u8 band; - u8 channel; -} __packed; - -#define UMAC_PROFILE_INVALID_ASSOC_TIMEOUT 0x0 -#define UMAC_PROFILE_INVALID_ROAM_TIMEOUT 0x1 -#define UMAC_PROFILE_INVALID_REQUEST 0x2 -#define UMAC_PROFILE_INVALID_RF_PREEMPTED 0x3 - -struct iwm_umac_notif_profile_invalidate { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 reason; -} __packed; - -#define UMAC_SCAN_RESULT_SUCCESS 0x0 -#define UMAC_SCAN_RESULT_ABORTED 0x1 -#define UMAC_SCAN_RESULT_REJECTED 0x2 -#define UMAC_SCAN_RESULT_FAILED 0x3 - -struct iwm_umac_notif_scan_complete { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 type; - __le32 result; - u8 seq_num; -} __packed; - -#define UMAC_OPCODE_ADD_MODIFY 0x0 -#define UMAC_OPCODE_REMOVE 0x1 -#define UMAC_OPCODE_CLEAR_ALL 0x2 - -#define UMAC_STA_FLAG_QOS 0x1 - -struct iwm_umac_notif_sta_info { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 opcode; - u8 mac_addr[ETH_ALEN]; - u8 sta_id; /* bits 0-3: station ID, bits 4-7: station color */ - u8 flags; -} __packed; - -#define UMAC_BAND_2GHZ 0 -#define UMAC_BAND_5GHZ 1 - -#define UMAC_CHANNEL_WIDTH_20MHZ 0 -#define UMAC_CHANNEL_WIDTH_40MHZ 1 - -struct iwm_umac_notif_bss_info { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 type; - __le32 timestamp; - __le16 table_idx; - __le16 frame_len; - u8 band; - u8 channel; - s8 rssi; - u8 reserved; - u8 frame_buf[1]; -} __packed; - -#define IWM_BSS_REMOVE_INDEX_MSK 0x0fff -#define IWM_BSS_REMOVE_FLAGS_MSK 0xfc00 - -#define IWM_BSS_REMOVE_FLG_AGE 0x1000 -#define IWM_BSS_REMOVE_FLG_TIMEOUT 0x2000 -#define IWM_BSS_REMOVE_FLG_TABLE_FULL 0x4000 - -struct iwm_umac_notif_bss_removed { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le32 count; - __le16 entries[0]; -} __packed; - -struct iwm_umac_notif_mgt_frame { - struct iwm_umac_notif_wifi_if mlme_hdr; - __le16 len; - u8 frame[1]; -} __packed; - -struct iwm_umac_notif_alive { - struct iwm_umac_wifi_in_hdr hdr; - __le16 status; - __le16 reserved1; - struct iwm_fw_alive_hdr alive_data; - __le16 reserved2; - __le16 page_grp_count; - __le32 page_grp_state[IWM_MACS_OUT_GROUPS]; -} __packed; - -struct iwm_umac_notif_init_complete { - struct iwm_umac_wifi_in_hdr hdr; - __le16 status; - __le16 reserved; -} __packed; - -/* error categories */ -enum { - UMAC_SYS_ERR_CAT_NONE = 0, - UMAC_SYS_ERR_CAT_BOOT, - UMAC_SYS_ERR_CAT_UMAC, - UMAC_SYS_ERR_CAT_UAXM, - UMAC_SYS_ERR_CAT_LMAC, - UMAC_SYS_ERR_CAT_MAX -}; - -struct iwm_fw_error_hdr { - __le32 category; - __le32 status; - __le32 pc; - __le32 blink1; - __le32 blink2; - __le32 ilink1; - __le32 ilink2; - __le32 data1; - __le32 data2; - __le32 line_num; - __le32 umac_status; - __le32 lmac_status; - __le32 sdio_status; - __le32 dbm_sample_ctrl; - __le32 dbm_buf_base; - __le32 dbm_buf_end; - __le32 dbm_buf_write_ptr; - __le32 dbm_buf_cycle_cnt; -} __packed; - -struct iwm_umac_notif_error { - struct iwm_umac_wifi_in_hdr hdr; - struct iwm_fw_error_hdr err; -} __packed; - -#define UMAC_DEALLOC_NTFY_CHANGES_CNT_POS 0 -#define UMAC_DEALLOC_NTFY_CHANGES_CNT_SEED 0xff -#define 
UMAC_DEALLOC_NTFY_CHANGES_MSK_POS 8 -#define UMAC_DEALLOC_NTFY_CHANGES_MSK_SEED 0xffffff -#define UMAC_DEALLOC_NTFY_PAGE_CNT_POS 0 -#define UMAC_DEALLOC_NTFY_PAGE_CNT_SEED 0xffffff -#define UMAC_DEALLOC_NTFY_GROUP_NUM_POS 24 -#define UMAC_DEALLOC_NTFY_GROUP_NUM_SEED 0xf - -struct iwm_umac_notif_page_dealloc { - struct iwm_umac_wifi_in_hdr hdr; - __le32 changes; - __le32 grp_info[IWM_MACS_OUT_GROUPS]; -} __packed; - -struct iwm_umac_notif_wifi_status { - struct iwm_umac_wifi_in_hdr hdr; - __le16 status; - __le16 reserved; -} __packed; - -struct iwm_umac_notif_rx_ticket { - struct iwm_umac_wifi_in_hdr hdr; - u8 num_tickets; - u8 reserved[3]; - struct iwm_rx_ticket tickets[1]; -} __packed; - -/* Tx/Rx rates window (number of max of last update window per second) */ -#define UMAC_NTF_RATE_SAMPLE_NR 4 - -/* Max numbers of bits required to go through all antennae in bitmasks */ -#define UMAC_PHY_NUM_CHAINS 3 - -#define IWM_UMAC_MGMT_TID 8 -#define IWM_UMAC_TID_NR 9 /* 8 TIDs + MGMT */ - -struct iwm_umac_notif_stats { - struct iwm_umac_wifi_in_hdr hdr; - __le32 flags; - __le32 timestamp; - __le16 tid_load[IWM_UMAC_TID_NR + 1]; /* 1 non-QoS + 1 dword align */ - __le16 tx_rate[UMAC_NTF_RATE_SAMPLE_NR]; - __le16 rx_rate[UMAC_NTF_RATE_SAMPLE_NR]; - __le32 chain_energy[UMAC_PHY_NUM_CHAINS]; - s32 rssi_dbm; - s32 noise_dbm; - __le32 supp_rates; - __le32 supp_ht_rates; - __le32 missed_beacons; - __le32 rx_beacons; - __le32 rx_dir_pkts; - __le32 rx_nondir_pkts; - __le32 rx_multicast; - __le32 rx_errors; - __le32 rx_drop_other_bssid; - __le32 rx_drop_decode; - __le32 rx_drop_reassembly; - __le32 rx_drop_bad_len; - __le32 rx_drop_overflow; - __le32 rx_drop_crc; - __le32 rx_drop_missed; - __le32 tx_dir_pkts; - __le32 tx_nondir_pkts; - __le32 tx_failure; - __le32 tx_errors; - __le32 tx_drop_max_retry; - __le32 tx_err_abort; - __le32 tx_err_carrier; - __le32 rx_bytes; - __le32 tx_bytes; - __le32 tx_power; - __le32 tx_max_power; - __le32 roam_threshold; - __le32 ap_assoc_nr; - __le32 scan_full; - __le32 scan_abort; - __le32 ap_nr; - __le32 roam_nr; - __le32 roam_missed_beacons; - __le32 roam_rssi; - __le32 roam_unassoc; - __le32 roam_deauth; - __le32 roam_ap_loadblance; -} __packed; - -#define UMAC_STOP_TX_FLAG 0x1 -#define UMAC_RESUME_TX_FLAG 0x2 - -#define LAST_SEQ_NUM_INVALID 0xFFFF - -struct iwm_umac_notif_stop_resume_tx { - struct iwm_umac_wifi_in_hdr hdr; - u8 flags; /* UMAC_*_TX_FLAG_* */ - u8 sta_id; - __le16 stop_resume_tid_msk; /* tid bitmask */ -} __packed; - -#define UMAC_MAX_NUM_PMKIDS 4 - -/* WiFi interface wrapper header */ -struct iwm_umac_wifi_if { - u8 oid; - u8 flags; - __le16 buf_size; -} __packed; - -#define IWM_SEQ_NUM_HOST_MSK 0x0000 -#define IWM_SEQ_NUM_UMAC_MSK 0x4000 -#define IWM_SEQ_NUM_LMAC_MSK 0x8000 -#define IWM_SEQ_NUM_MSK 0xC000 - -#endif diff --git a/drivers/net/wireless/libertas/cfg.c b/drivers/net/wireless/libertas/cfg.c index 2fa879b015b6..eb5de800ed90 100644 --- a/drivers/net/wireless/libertas/cfg.c +++ b/drivers/net/wireless/libertas/cfg.c @@ -435,24 +435,40 @@ static int lbs_add_wpa_tlv(u8 *tlv, const u8 *ie, u8 ie_len) * Set Channel */ -static int lbs_cfg_set_channel(struct wiphy *wiphy, - struct net_device *netdev, - struct ieee80211_channel *channel, - enum nl80211_channel_type channel_type) +static int lbs_cfg_set_monitor_channel(struct wiphy *wiphy, + struct ieee80211_channel *channel, + enum nl80211_channel_type channel_type) { struct lbs_private *priv = wiphy_priv(wiphy); int ret = -ENOTSUPP; - lbs_deb_enter_args(LBS_DEB_CFG80211, "iface %s freq %d, type %d", - 
netdev_name(netdev), channel->center_freq, channel_type); + lbs_deb_enter_args(LBS_DEB_CFG80211, "freq %d, type %d", + channel->center_freq, channel_type); if (channel_type != NL80211_CHAN_NO_HT) goto out; - if (netdev == priv->mesh_dev) - ret = lbs_mesh_set_channel(priv, channel->hw_value); - else - ret = lbs_set_channel(priv, channel->hw_value); + ret = lbs_set_channel(priv, channel->hw_value); + + out: + lbs_deb_leave_args(LBS_DEB_CFG80211, "ret %d", ret); + return ret; +} + +static int lbs_cfg_set_mesh_channel(struct wiphy *wiphy, + struct net_device *netdev, + struct ieee80211_channel *channel) +{ + struct lbs_private *priv = wiphy_priv(wiphy); + int ret = -ENOTSUPP; + + lbs_deb_enter_args(LBS_DEB_CFG80211, "iface %s freq %d", + netdev_name(netdev), channel->center_freq); + + if (netdev != priv->mesh_dev) + goto out; + + ret = lbs_mesh_set_channel(priv, channel->hw_value); out: lbs_deb_leave_args(LBS_DEB_CFG80211, "ret %d", ret); @@ -789,7 +805,6 @@ void lbs_scan_done(struct lbs_private *priv) } static int lbs_cfg_scan(struct wiphy *wiphy, - struct net_device *dev, struct cfg80211_scan_request *request) { struct lbs_private *priv = wiphy_priv(wiphy); @@ -2029,7 +2044,8 @@ static int lbs_leave_ibss(struct wiphy *wiphy, struct net_device *dev) */ static struct cfg80211_ops lbs_cfg80211_ops = { - .set_channel = lbs_cfg_set_channel, + .set_monitor_channel = lbs_cfg_set_monitor_channel, + .libertas_set_mesh_channel = lbs_cfg_set_mesh_channel, .scan = lbs_cfg_scan, .connect = lbs_cfg_connect, .disconnect = lbs_cfg_disconnect, @@ -2164,13 +2180,15 @@ int lbs_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request) { struct lbs_private *priv = wiphy_priv(wiphy); - int ret; + int ret = 0; lbs_deb_enter_args(LBS_DEB_CFG80211, "cfg80211 regulatory domain " "callback for domain %c%c\n", request->alpha2[0], request->alpha2[1]); - ret = lbs_set_11d_domain_info(priv, request, wiphy->bands); + memcpy(priv->country_code, request->alpha2, sizeof(request->alpha2)); + if (lbs_iface_active(priv)) + ret = lbs_set_11d_domain_info(priv); lbs_deb_leave(LBS_DEB_CFG80211); return ret; diff --git a/drivers/net/wireless/libertas/cmd.c b/drivers/net/wireless/libertas/cmd.c index d798bcc0d83a..26e68326710b 100644 --- a/drivers/net/wireless/libertas/cmd.c +++ b/drivers/net/wireless/libertas/cmd.c @@ -733,15 +733,13 @@ int lbs_get_rssi(struct lbs_private *priv, s8 *rssi, s8 *nf) * to the firmware * * @priv: pointer to &struct lbs_private - * @request: cfg80211 regulatory request structure - * @bands: the device's supported bands and channels * * returns: 0 on success, error code on failure */ -int lbs_set_11d_domain_info(struct lbs_private *priv, - struct regulatory_request *request, - struct ieee80211_supported_band **bands) +int lbs_set_11d_domain_info(struct lbs_private *priv) { + struct wiphy *wiphy = priv->wdev->wiphy; + struct ieee80211_supported_band **bands = wiphy->bands; struct cmd_ds_802_11d_domain_info cmd; struct mrvl_ie_domain_param_set *domain = &cmd.domain; struct ieee80211_country_ie_triplet *t; @@ -752,21 +750,23 @@ int lbs_set_11d_domain_info(struct lbs_private *priv, u8 first_channel = 0, next_chan = 0, max_pwr = 0; u8 i, flag = 0; size_t triplet_size; - int ret; + int ret = 0; lbs_deb_enter(LBS_DEB_11D); + if (!priv->country_code[0]) + goto out; memset(&cmd, 0, sizeof(cmd)); cmd.action = cpu_to_le16(CMD_ACT_SET); lbs_deb_11d("Setting country code '%c%c'\n", - request->alpha2[0], request->alpha2[1]); + priv->country_code[0], priv->country_code[1]); domain->header.type = 
cpu_to_le16(TLV_TYPE_DOMAIN); /* Set country code */ - domain->country_code[0] = request->alpha2[0]; - domain->country_code[1] = request->alpha2[1]; + domain->country_code[0] = priv->country_code[0]; + domain->country_code[1] = priv->country_code[1]; domain->country_code[2] = ' '; /* Now set up the channel triplets; firmware is somewhat picky here @@ -848,6 +848,7 @@ int lbs_set_11d_domain_info(struct lbs_private *priv, ret = lbs_cmd_with_response(priv, CMD_802_11D_DOMAIN_INFO, &cmd); +out: lbs_deb_leave_args(LBS_DEB_11D, "ret %d", ret); return ret; } @@ -1019,9 +1020,9 @@ static void lbs_submit_command(struct lbs_private *priv, if (ret) { netdev_info(priv->dev, "DNLD_CMD: hw_host_to_card failed: %d\n", ret); - /* Let the timer kick in and retry, and potentially reset - the whole thing if the condition persists */ - timeo = HZ/4; + /* Reset dnld state machine, report failure */ + priv->dnld_sent = DNLD_RES_RECEIVED; + lbs_complete_command(priv, cmdnode, ret); } if (command == CMD_802_11_DEEP_SLEEP) { diff --git a/drivers/net/wireless/libertas/cmd.h b/drivers/net/wireless/libertas/cmd.h index b280ef7a0aea..ab07608e13d0 100644 --- a/drivers/net/wireless/libertas/cmd.h +++ b/drivers/net/wireless/libertas/cmd.h @@ -128,9 +128,7 @@ int lbs_set_monitor_mode(struct lbs_private *priv, int enable); int lbs_get_rssi(struct lbs_private *priv, s8 *snr, s8 *nf); -int lbs_set_11d_domain_info(struct lbs_private *priv, - struct regulatory_request *request, - struct ieee80211_supported_band **bands); +int lbs_set_11d_domain_info(struct lbs_private *priv); int lbs_get_reg(struct lbs_private *priv, u16 reg, u16 offset, u32 *value); diff --git a/drivers/net/wireless/libertas/debugfs.c b/drivers/net/wireless/libertas/debugfs.c index a06cc283e23d..668dd27616a0 100644 --- a/drivers/net/wireless/libertas/debugfs.c +++ b/drivers/net/wireless/libertas/debugfs.c @@ -483,7 +483,7 @@ static ssize_t lbs_rdmac_write(struct file *file, res = -EFAULT; goto out_unlock; } - priv->mac_offset = simple_strtoul((char *)buf, NULL, 16); + priv->mac_offset = simple_strtoul(buf, NULL, 16); res = count; out_unlock: free_page(addr); @@ -565,7 +565,7 @@ static ssize_t lbs_rdbbp_write(struct file *file, res = -EFAULT; goto out_unlock; } - priv->bbp_offset = simple_strtoul((char *)buf, NULL, 16); + priv->bbp_offset = simple_strtoul(buf, NULL, 16); res = count; out_unlock: free_page(addr); diff --git a/drivers/net/wireless/libertas/dev.h b/drivers/net/wireless/libertas/dev.h index 672005430aca..6bd1608992b0 100644 --- a/drivers/net/wireless/libertas/dev.h +++ b/drivers/net/wireless/libertas/dev.h @@ -49,6 +49,7 @@ struct lbs_private { bool wiphy_registered; struct cfg80211_scan_request *scan_req; u8 assoc_bss[ETH_ALEN]; + u8 country_code[IEEE80211_COUNTRY_STRING_LEN]; u8 disassoc_reason; /* Mesh */ @@ -58,6 +59,7 @@ struct lbs_private { uint16_t mesh_tlv; u8 mesh_ssid[IEEE80211_MAX_SSID_LEN + 1]; u8 mesh_ssid_len; + u8 mesh_channel; #endif /* Debugfs */ diff --git a/drivers/net/wireless/libertas/firmware.c b/drivers/net/wireless/libertas/firmware.c index 601f2075355e..c0f9e7e862f6 100644 --- a/drivers/net/wireless/libertas/firmware.c +++ b/drivers/net/wireless/libertas/firmware.c @@ -4,9 +4,7 @@ #include <linux/sched.h> #include <linux/firmware.h> -#include <linux/firmware.h> #include <linux/module.h> -#include <linux/sched.h> #include "dev.h" #include "decl.h" diff --git a/drivers/net/wireless/libertas/host.h b/drivers/net/wireless/libertas/host.h index 2e2dbfa2ee50..96726f79a1dd 100644 --- a/drivers/net/wireless/libertas/host.h +++ 
b/drivers/net/wireless/libertas/host.h @@ -68,7 +68,6 @@ #define CMD_802_11_BEACON_STOP 0x0049 #define CMD_802_11_MAC_ADDRESS 0x004d #define CMD_802_11_LED_GPIO_CTRL 0x004e -#define CMD_802_11_EEPROM_ACCESS 0x0059 #define CMD_802_11_BAND_CONFIG 0x0058 #define CMD_GSPI_BUS_CONFIG 0x005a #define CMD_802_11D_DOMAIN_INFO 0x005b diff --git a/drivers/net/wireless/libertas/if_usb.c b/drivers/net/wireless/libertas/if_usb.c index cd3b0d400618..55a77e41170a 100644 --- a/drivers/net/wireless/libertas/if_usb.c +++ b/drivers/net/wireless/libertas/if_usb.c @@ -302,14 +302,13 @@ error: static void if_usb_disconnect(struct usb_interface *intf) { struct if_usb_card *cardp = usb_get_intfdata(intf); - struct lbs_private *priv = (struct lbs_private *) cardp->priv; + struct lbs_private *priv = cardp->priv; lbs_deb_enter(LBS_DEB_MAIN); cardp->surprise_removed = 1; if (priv) { - priv->surpriseremoved = 1; lbs_stop_card(priv); lbs_remove_card(priv); } diff --git a/drivers/net/wireless/libertas/main.c b/drivers/net/wireless/libertas/main.c index e96ee0aa8439..58048189bd24 100644 --- a/drivers/net/wireless/libertas/main.c +++ b/drivers/net/wireless/libertas/main.c @@ -152,6 +152,12 @@ int lbs_start_iface(struct lbs_private *priv) goto err; } + ret = lbs_set_11d_domain_info(priv); + if (ret) { + lbs_deb_net("set 11d domain info failed\n"); + goto err; + } + lbs_update_channel(priv); priv->iface_running = true; diff --git a/drivers/net/wireless/libertas/mesh.c b/drivers/net/wireless/libertas/mesh.c index e87c031b298f..97807751ebcf 100644 --- a/drivers/net/wireless/libertas/mesh.c +++ b/drivers/net/wireless/libertas/mesh.c @@ -131,16 +131,13 @@ static int lbs_mesh_config(struct lbs_private *priv, uint16_t action, int lbs_mesh_set_channel(struct lbs_private *priv, u8 channel) { + priv->mesh_channel = channel; return lbs_mesh_config(priv, CMD_ACT_MESH_CONFIG_START, channel); } static uint16_t lbs_mesh_get_channel(struct lbs_private *priv) { - struct wireless_dev *mesh_wdev = priv->mesh_dev->ieee80211_ptr; - if (mesh_wdev->channel) - return mesh_wdev->channel->hw_value; - else - return 1; + return priv->mesh_channel ?: 1; } /*************************************************************************** diff --git a/drivers/net/wireless/libertas_tf/if_usb.c b/drivers/net/wireless/libertas_tf/if_usb.c index 19a5a92dd779..d576dd6665d3 100644 --- a/drivers/net/wireless/libertas_tf/if_usb.c +++ b/drivers/net/wireless/libertas_tf/if_usb.c @@ -253,7 +253,7 @@ lbtf_deb_leave(LBTF_DEB_MAIN); static void if_usb_disconnect(struct usb_interface *intf) { struct if_usb_card *cardp = usb_get_intfdata(intf); - struct lbtf_private *priv = (struct lbtf_private *) cardp->priv; + struct lbtf_private *priv = cardp->priv; lbtf_deb_enter(LBTF_DEB_MAIN); diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index a0b7cfd34685..643f968b05ee 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -292,7 +292,7 @@ struct mac80211_hwsim_data { struct list_head list; struct ieee80211_hw *hw; struct device *dev; - struct ieee80211_supported_band bands[2]; + struct ieee80211_supported_band bands[IEEE80211_NUM_BANDS]; struct ieee80211_channel channels_2ghz[ARRAY_SIZE(hwsim_channels_2ghz)]; struct ieee80211_channel channels_5ghz[ARRAY_SIZE(hwsim_channels_5ghz)]; struct ieee80211_rate rates[ARRAY_SIZE(hwsim_rates)]; @@ -571,7 +571,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw, skb_dequeue(&data->pending); } - skb = genlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + 
skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_ATOMIC); if (skb == NULL) goto nla_put_failure; @@ -678,8 +678,7 @@ static bool mac80211_hwsim_tx_frame_no_nl(struct ieee80211_hw *hw, continue; if (data2->idle || !data2->started || - !hwsim_ps_rx_ok(data2, skb) || - !data->channel || !data2->channel || + !hwsim_ps_rx_ok(data2, skb) || !data2->channel || data->channel->center_freq != data2->channel->center_freq || !(data->group & data2->group)) continue; @@ -1083,6 +1082,8 @@ enum hwsim_testmode_attr { enum hwsim_testmode_cmd { HWSIM_TM_CMD_SET_PS = 0, HWSIM_TM_CMD_GET_PS = 1, + HWSIM_TM_CMD_STOP_QUEUES = 2, + HWSIM_TM_CMD_WAKE_QUEUES = 3, }; static const struct nla_policy hwsim_testmode_policy[HWSIM_TM_ATTR_MAX + 1] = { @@ -1122,6 +1123,12 @@ static int mac80211_hwsim_testmode_cmd(struct ieee80211_hw *hw, if (nla_put_u32(skb, HWSIM_TM_ATTR_PS, hwsim->ps)) goto nla_put_failure; return cfg80211_testmode_reply(skb); + case HWSIM_TM_CMD_STOP_QUEUES: + ieee80211_stop_queues(hw); + return 0; + case HWSIM_TM_CMD_WAKE_QUEUES: + ieee80211_wake_queues(hw); + return 0; default: return -EOPNOTSUPP; } @@ -1486,7 +1493,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, struct mac80211_hwsim_data *data2; struct ieee80211_tx_info *txi; struct hwsim_tx_rate *tx_attempts; - struct sk_buff __user *ret_skb; + unsigned long ret_skb_ptr; struct sk_buff *skb, *tmp; struct mac_address *src; unsigned int hwsim_flags; @@ -1504,8 +1511,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER]); hwsim_flags = nla_get_u32(info->attrs[HWSIM_ATTR_FLAGS]); - ret_skb = (struct sk_buff __user *) - (unsigned long) nla_get_u64(info->attrs[HWSIM_ATTR_COOKIE]); + ret_skb_ptr = nla_get_u64(info->attrs[HWSIM_ATTR_COOKIE]); data2 = get_hwsim_data_ref_from_addr(src); @@ -1514,7 +1520,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, /* look for the skb matching the cookie passed back from user */ skb_queue_walk_safe(&data2->pending, skb, tmp) { - if (skb == ret_skb) { + if ((unsigned long)skb == ret_skb_ptr) { skb_unlink(skb, &data2->pending); found = true; break; @@ -1534,11 +1540,6 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, /* now send back TX status */ txi = IEEE80211_SKB_CB(skb); - if (txi->control.vif) - hwsim_check_magic(txi->control.vif); - if (txi->control.sta) - hwsim_check_sta_magic(txi->control.sta); - ieee80211_tx_info_clear_status(txi); for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) { @@ -1857,7 +1858,7 @@ static int __init init_mac80211_hwsim(void) sband->n_bitrates = ARRAY_SIZE(hwsim_rates) - 4; break; default: - break; + continue; } sband->ht_cap.ht_supported = true; diff --git a/drivers/net/wireless/mwifiex/11n.c b/drivers/net/wireless/mwifiex/11n.c index fe8ebfebcc0e..e535c937628b 100644 --- a/drivers/net/wireless/mwifiex/11n.c +++ b/drivers/net/wireless/mwifiex/11n.c @@ -101,8 +101,7 @@ int mwifiex_ret_11n_delba(struct mwifiex_private *priv, { int tid; struct mwifiex_tx_ba_stream_tbl *tx_ba_tbl; - struct host_cmd_ds_11n_delba *del_ba = - (struct host_cmd_ds_11n_delba *) &resp->params.del_ba; + struct host_cmd_ds_11n_delba *del_ba = &resp->params.del_ba; uint16_t del_ba_param_set = le16_to_cpu(del_ba->del_ba_param_set); tid = del_ba_param_set >> DELBA_TID_POS; @@ -147,8 +146,7 @@ int mwifiex_ret_11n_addba_req(struct mwifiex_private *priv, struct host_cmd_ds_command *resp) { int tid; - struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = - (struct host_cmd_ds_11n_addba_rsp *) &resp->params.add_ba_rsp; + 
struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = &resp->params.add_ba_rsp; struct mwifiex_tx_ba_stream_tbl *tx_ba_tbl; add_ba_rsp->ssn = cpu_to_le16((le16_to_cpu(add_ba_rsp->ssn)) @@ -412,7 +410,7 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv, memcpy((u8 *) bss_co_2040 + sizeof(struct mwifiex_ie_types_header), - (u8 *) bss_desc->bcn_bss_co_2040 + + bss_desc->bcn_bss_co_2040 + sizeof(struct ieee_types_header), le16_to_cpu(bss_co_2040->header.len)); @@ -426,10 +424,8 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv, ext_cap->header.type = cpu_to_le16(WLAN_EID_EXT_CAPABILITY); ext_cap->header.len = cpu_to_le16(sizeof(ext_cap->ext_cap)); - memcpy((u8 *) ext_cap + - sizeof(struct mwifiex_ie_types_header), - (u8 *) bss_desc->bcn_ext_cap + - sizeof(struct ieee_types_header), + memcpy((u8 *)ext_cap + sizeof(struct mwifiex_ie_types_header), + bss_desc->bcn_ext_cap + sizeof(struct ieee_types_header), le16_to_cpu(ext_cap->header.len)); *buffer += sizeof(struct mwifiex_ie_types_extcap); diff --git a/drivers/net/wireless/mwifiex/11n.h b/drivers/net/wireless/mwifiex/11n.h index 77646d777dce..28366e9211fb 100644 --- a/drivers/net/wireless/mwifiex/11n.h +++ b/drivers/net/wireless/mwifiex/11n.h @@ -105,8 +105,7 @@ static inline u8 mwifiex_space_avail_for_new_ba_stream( priv = adapter->priv[i]; if (priv) ba_stream_num += mwifiex_wmm_list_len( - (struct list_head *) - &priv->tx_ba_stream_tbl_ptr); + &priv->tx_ba_stream_tbl_ptr); } return ((ba_stream_num < diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.c b/drivers/net/wireless/mwifiex/11n_rxreorder.c index 900ee129e825..591ccd33f83c 100644 --- a/drivers/net/wireless/mwifiex/11n_rxreorder.c +++ b/drivers/net/wireless/mwifiex/11n_rxreorder.c @@ -297,9 +297,7 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta, */ int mwifiex_cmd_11n_addba_req(struct host_cmd_ds_command *cmd, void *data_buf) { - struct host_cmd_ds_11n_addba_req *add_ba_req = - (struct host_cmd_ds_11n_addba_req *) - &cmd->params.add_ba_req; + struct host_cmd_ds_11n_addba_req *add_ba_req = &cmd->params.add_ba_req; cmd->command = cpu_to_le16(HostCmd_CMD_11N_ADDBA_REQ); cmd->size = cpu_to_le16(sizeof(*add_ba_req) + S_DS_GEN); @@ -321,9 +319,7 @@ int mwifiex_cmd_11n_addba_rsp_gen(struct mwifiex_private *priv, struct host_cmd_ds_11n_addba_req *cmd_addba_req) { - struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = - (struct host_cmd_ds_11n_addba_rsp *) - &cmd->params.add_ba_rsp; + struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = &cmd->params.add_ba_rsp; u8 tid; int win_size; uint16_t block_ack_param_set; @@ -368,8 +364,7 @@ int mwifiex_cmd_11n_addba_rsp_gen(struct mwifiex_private *priv, */ int mwifiex_cmd_11n_delba(struct host_cmd_ds_command *cmd, void *data_buf) { - struct host_cmd_ds_11n_delba *del_ba = (struct host_cmd_ds_11n_delba *) - &cmd->params.del_ba; + struct host_cmd_ds_11n_delba *del_ba = &cmd->params.del_ba; cmd->command = cpu_to_le16(HostCmd_CMD_11N_DELBA); cmd->size = cpu_to_le16(sizeof(*del_ba) + S_DS_GEN); @@ -399,8 +394,7 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv, int start_win, end_win, win_size; u16 pkt_index; - tbl = mwifiex_11n_get_rx_reorder_tbl((struct mwifiex_private *) priv, - tid, ta); + tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta); if (!tbl) { if (pkt_type != PKT_TYPE_BAR) mwifiex_process_rx_packet(priv->adapter, payload); @@ -521,9 +515,7 @@ mwifiex_del_ba_tbl(struct mwifiex_private *priv, int tid, u8 *peer_mac, int mwifiex_ret_11n_addba_resp(struct mwifiex_private *priv, struct host_cmd_ds_command 
*resp) { - struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = - (struct host_cmd_ds_11n_addba_rsp *) - &resp->params.add_ba_rsp; + struct host_cmd_ds_11n_addba_rsp *add_ba_rsp = &resp->params.add_ba_rsp; int tid, win_size; struct mwifiex_rx_reorder_tbl *tbl; uint16_t block_ack_param_set; diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c index 5c7fd185373c..fe42137384da 100644 --- a/drivers/net/wireless/mwifiex/cfg80211.c +++ b/drivers/net/wireless/mwifiex/cfg80211.c @@ -48,10 +48,9 @@ static const struct ieee80211_iface_combination mwifiex_iface_comb_ap_sta = { * Others -> IEEE80211_HT_PARAM_CHA_SEC_NONE */ static u8 -mwifiex_cfg80211_channel_type_to_sec_chan_offset(enum nl80211_channel_type - channel_type) +mwifiex_chan_type_to_sec_chan_offset(enum nl80211_channel_type chan_type) { - switch (channel_type) { + switch (chan_type) { case NL80211_CHAN_NO_HT: case NL80211_CHAN_HT20: return IEEE80211_HT_PARAM_CHA_SEC_NONE; @@ -170,7 +169,9 @@ mwifiex_cfg80211_set_default_key(struct wiphy *wiphy, struct net_device *netdev, if (!priv->sec_info.wep_enabled) return 0; - if (mwifiex_set_encode(priv, NULL, 0, key_index, NULL, 0)) { + if (priv->bss_type == MWIFIEX_BSS_TYPE_UAP) { + priv->wep_key_curr_index = key_index; + } else if (mwifiex_set_encode(priv, NULL, 0, key_index, NULL, 0)) { wiphy_err(wiphy, "set default Tx key index\n"); return -EFAULT; } @@ -187,9 +188,25 @@ mwifiex_cfg80211_add_key(struct wiphy *wiphy, struct net_device *netdev, struct key_params *params) { struct mwifiex_private *priv = mwifiex_netdev_get_priv(netdev); + struct mwifiex_wep_key *wep_key; const u8 bc_mac[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; const u8 *peer_mac = pairwise ? mac_addr : bc_mac; + if (GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_UAP && + (params->cipher == WLAN_CIPHER_SUITE_WEP40 || + params->cipher == WLAN_CIPHER_SUITE_WEP104)) { + if (params->key && params->key_len) { + wep_key = &priv->wep_key[key_index]; + memset(wep_key, 0, sizeof(struct mwifiex_wep_key)); + memcpy(wep_key->key_material, params->key, + params->key_len); + wep_key->key_index = key_index; + wep_key->key_length = params->key_len; + priv->sec_info.wep_enabled = 1; + } + return 0; + } + if (mwifiex_set_encode(priv, params->key, params->key_len, key_index, peer_mac, 0)) { wiphy_err(wiphy, "crypto keys added\n"); @@ -242,13 +259,13 @@ static int mwifiex_send_domain_info_cmd_fw(struct wiphy *wiphy) flag = 1; first_chan = (u32) ch->hw_value; next_chan = first_chan; - max_pwr = ch->max_power; + max_pwr = ch->max_reg_power; no_of_parsed_chan = 1; continue; } if (ch->hw_value == next_chan + 1 && - ch->max_power == max_pwr) { + ch->max_reg_power == max_pwr) { next_chan++; no_of_parsed_chan++; } else { @@ -259,7 +276,7 @@ static int mwifiex_send_domain_info_cmd_fw(struct wiphy *wiphy) no_of_triplet++; first_chan = (u32) ch->hw_value; next_chan = first_chan; - max_pwr = ch->max_power; + max_pwr = ch->max_reg_power; no_of_parsed_chan = 1; } } @@ -321,79 +338,6 @@ static int mwifiex_reg_notifier(struct wiphy *wiphy, } /* - * This function sets the RF channel. - * - * This function creates multiple IOCTL requests, populates them accordingly - * and issues them to set the band/channel and frequency. 
- */ -static int -mwifiex_set_rf_channel(struct mwifiex_private *priv, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type) -{ - struct mwifiex_chan_freq_power cfp; - u32 config_bands = 0; - struct wiphy *wiphy = priv->wdev->wiphy; - struct mwifiex_adapter *adapter = priv->adapter; - - if (chan) { - /* Set appropriate bands */ - if (chan->band == IEEE80211_BAND_2GHZ) { - if (channel_type == NL80211_CHAN_NO_HT) - if (priv->adapter->config_bands == BAND_B || - priv->adapter->config_bands == BAND_G) - config_bands = - priv->adapter->config_bands; - else - config_bands = BAND_B | BAND_G; - else - config_bands = BAND_B | BAND_G | BAND_GN; - } else { - if (channel_type == NL80211_CHAN_NO_HT) - config_bands = BAND_A; - else - config_bands = BAND_AN | BAND_A; - } - - if (!((config_bands | adapter->fw_bands) & - ~adapter->fw_bands)) { - adapter->config_bands = config_bands; - if (priv->bss_mode == NL80211_IFTYPE_ADHOC) { - adapter->adhoc_start_band = config_bands; - if ((config_bands & BAND_GN) || - (config_bands & BAND_AN)) - adapter->adhoc_11n_enabled = true; - else - adapter->adhoc_11n_enabled = false; - } - } - adapter->sec_chan_offset = - mwifiex_cfg80211_channel_type_to_sec_chan_offset - (channel_type); - adapter->channel_type = channel_type; - - mwifiex_send_domain_info_cmd_fw(wiphy); - } - - wiphy_dbg(wiphy, "info: setting band %d, chan offset %d, mode %d\n", - config_bands, adapter->sec_chan_offset, priv->bss_mode); - if (!chan) - return 0; - - memset(&cfp, 0, sizeof(cfp)); - cfp.freq = chan->center_freq; - cfp.channel = ieee80211_frequency_to_channel(chan->center_freq); - - if (mwifiex_bss_set_channel(priv, &cfp)) - return -EFAULT; - - if (priv->bss_type == MWIFIEX_BSS_TYPE_STA) - return mwifiex_drv_change_adhoc_chan(priv, cfp.channel); - else - return mwifiex_uap_set_channel(priv, cfp.channel); -} - -/* * This function sets the fragmentation threshold. * * The fragmentation threshold value must lie between MWIFIEX_FRAG_MIN_VALUE @@ -608,7 +552,7 @@ static int mwifiex_dump_station_info(struct mwifiex_private *priv, struct station_info *sinfo) { - struct mwifiex_rate_cfg rate; + u32 rate; sinfo->filled = STATION_INFO_RX_BYTES | STATION_INFO_TX_BYTES | STATION_INFO_RX_PACKETS | STATION_INFO_TX_PACKETS | @@ -634,9 +578,9 @@ mwifiex_dump_station_info(struct mwifiex_private *priv, /* * Bit 0 in tx_htinfo indicates that current Tx rate is 11n rate. Valid - * MCS index values for us are 0 to 7. + * MCS index values for us are 0 to 15. */ - if ((priv->tx_htinfo & BIT(0)) && (priv->tx_rate < 8)) { + if ((priv->tx_htinfo & BIT(0)) && (priv->tx_rate < 16)) { sinfo->txrate.mcs = priv->tx_rate; sinfo->txrate.flags |= RATE_INFO_FLAGS_MCS; /* 40MHz rate */ @@ -654,7 +598,7 @@ mwifiex_dump_station_info(struct mwifiex_private *priv, sinfo->tx_packets = priv->stats.tx_packets; sinfo->signal = priv->bcn_rssi_avg; /* bit rate is in 500 kb/s units. Convert it to 100kb/s units */ - sinfo->txrate.legacy = rate.rate * 5; + sinfo->txrate.legacy = rate * 5; if (priv->bss_mode == NL80211_IFTYPE_STATION) { sinfo->filled |= STATION_INFO_BSS_PARAM; @@ -809,8 +753,8 @@ static const u32 mwifiex_cipher_suites[] = { /* * CFG802.11 operation handler for setting bit rates. * - * Function selects legacy bang B/G/BG from corresponding bitrates selection. - * Currently only 2.4GHz band is supported. + * Function configures data rates to firmware using bitrate mask + * provided by cfg80211. 
*/ static int mwifiex_cfg80211_set_bitrate_mask(struct wiphy *wiphy, struct net_device *dev, @@ -818,43 +762,36 @@ static int mwifiex_cfg80211_set_bitrate_mask(struct wiphy *wiphy, const struct cfg80211_bitrate_mask *mask) { struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); - int index = 0, mode = 0, i; - struct mwifiex_adapter *adapter = priv->adapter; + u16 bitmap_rates[MAX_BITMAP_RATES_SIZE]; + enum ieee80211_band band; - /* Currently only 2.4GHz is supported */ - for (i = 0; i < mwifiex_band_2ghz.n_bitrates; i++) { - /* - * Rates below 6 Mbps in the table are CCK rates; 802.11b - * and from 6 they are OFDM; 802.11G - */ - if (mwifiex_rates[i].bitrate == 60) { - index = 1 << i; - break; - } + if (!priv->media_connected) { + dev_err(priv->adapter->dev, + "Can not set Tx data rate in disconnected state\n"); + return -EINVAL; } - if (mask->control[IEEE80211_BAND_2GHZ].legacy < index) { - mode = BAND_B; - } else { - mode = BAND_G; - if (mask->control[IEEE80211_BAND_2GHZ].legacy % index) - mode |= BAND_B; - } + band = mwifiex_band_to_radio_type(priv->curr_bss_params.band); - if (!((mode | adapter->fw_bands) & ~adapter->fw_bands)) { - adapter->config_bands = mode; - if (priv->bss_mode == NL80211_IFTYPE_ADHOC) { - adapter->adhoc_start_band = mode; - adapter->adhoc_11n_enabled = false; - } - } - adapter->sec_chan_offset = IEEE80211_HT_PARAM_CHA_SEC_NONE; - adapter->channel_type = NL80211_CHAN_NO_HT; + memset(bitmap_rates, 0, sizeof(bitmap_rates)); - wiphy_debug(wiphy, "info: device configured in 802.11%s%s mode\n", - (mode & BAND_B) ? "b" : "", (mode & BAND_G) ? "g" : ""); + /* Fill HR/DSSS rates. */ + if (band == IEEE80211_BAND_2GHZ) + bitmap_rates[0] = mask->control[band].legacy & 0x000f; - return 0; + /* Fill OFDM rates */ + if (band == IEEE80211_BAND_2GHZ) + bitmap_rates[1] = (mask->control[band].legacy & 0x0ff0) >> 4; + else + bitmap_rates[1] = mask->control[band].legacy; + + /* Fill MCS rates */ + bitmap_rates[2] = mask->control[band].mcs[0]; + if (priv->adapter->hw_dev_mcs_support == HT_STREAM_2X2) + bitmap_rates[2] |= mask->control[band].mcs[1] << 8; + + return mwifiex_send_cmd_sync(priv, HostCmd_CMD_TX_RATE_CFG, + HostCmd_ACT_GEN_SET, 0, bitmap_rates); } /* @@ -896,6 +833,69 @@ static int mwifiex_cfg80211_set_cqm_rssi_config(struct wiphy *wiphy, return 0; } +/* cfg80211 operation handler for change_beacon. + * Function retrieves and sets modified management IEs to FW. + */ +static int mwifiex_cfg80211_change_beacon(struct wiphy *wiphy, + struct net_device *dev, + struct cfg80211_beacon_data *data) +{ + struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); + + if (priv->bss_type != MWIFIEX_BSS_TYPE_UAP) { + wiphy_err(wiphy, "%s: bss_type mismatched\n", __func__); + return -EINVAL; + } + + if (!priv->bss_started) { + wiphy_err(wiphy, "%s: bss not started\n", __func__); + return -EINVAL; + } + + if (mwifiex_set_mgmt_ies(priv, data)) { + wiphy_err(wiphy, "%s: setting mgmt ies failed\n", __func__); + return -EFAULT; + } + + return 0; +} + +static int +mwifiex_cfg80211_set_antenna(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant) +{ + struct mwifiex_adapter *adapter = mwifiex_cfg80211_get_adapter(wiphy); + struct mwifiex_private *priv = mwifiex_get_priv(adapter, + MWIFIEX_BSS_ROLE_ANY); + struct mwifiex_ds_ant_cfg ant_cfg; + + if (!tx_ant || !rx_ant) + return -EOPNOTSUPP; + + if (adapter->hw_dev_mcs_support != HT_STREAM_2X2) { + /* Not a MIMO chip. 
User should provide specific antenna number + * for Tx/Rx path or enable all antennas for diversity + */ + if (tx_ant != rx_ant) + return -EOPNOTSUPP; + + if ((tx_ant & (tx_ant - 1)) && + (tx_ant != BIT(adapter->number_of_antenna) - 1)) + return -EOPNOTSUPP; + + if ((tx_ant == BIT(adapter->number_of_antenna) - 1) && + (priv->adapter->number_of_antenna > 1)) { + tx_ant = RF_ANTENNA_AUTO; + rx_ant = RF_ANTENNA_AUTO; + } + } + + ant_cfg.tx_ant = tx_ant; + ant_cfg.rx_ant = rx_ant; + + return mwifiex_send_cmd_sync(priv, HostCmd_CMD_RF_ANTENNA, + HostCmd_ACT_GEN_SET, 0, &ant_cfg); +} + /* cfg80211 operation handler for stop ap. * Function stops BSS running at uAP interface. */ @@ -926,10 +926,11 @@ static int mwifiex_cfg80211_start_ap(struct wiphy *wiphy, { struct mwifiex_uap_bss_param *bss_cfg; struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); + u8 config_bands = 0; if (priv->bss_type != MWIFIEX_BSS_TYPE_UAP) return -1; - if (mwifiex_set_mgmt_ies(priv, params)) + if (mwifiex_set_mgmt_ies(priv, ¶ms->beacon)) return -1; bss_cfg = kzalloc(sizeof(struct mwifiex_uap_bss_param), GFP_KERNEL); @@ -962,12 +963,37 @@ static int mwifiex_cfg80211_start_ap(struct wiphy *wiphy, return -EINVAL; } + bss_cfg->channel = + (u8)ieee80211_frequency_to_channel(params->channel->center_freq); + bss_cfg->band_cfg = BAND_CONFIG_MANUAL; + + /* Set appropriate bands */ + if (params->channel->band == IEEE80211_BAND_2GHZ) { + if (params->channel_type == NL80211_CHAN_NO_HT) + config_bands = BAND_B | BAND_G; + else + config_bands = BAND_B | BAND_G | BAND_GN; + } else { + if (params->channel_type == NL80211_CHAN_NO_HT) + config_bands = BAND_A; + else + config_bands = BAND_AN | BAND_A; + } + + if (!((config_bands | priv->adapter->fw_bands) & + ~priv->adapter->fw_bands)) + priv->adapter->config_bands = config_bands; + + mwifiex_send_domain_info_cmd_fw(wiphy); + if (mwifiex_set_secure_params(priv, bss_cfg, params)) { kfree(bss_cfg); wiphy_err(wiphy, "Failed to parse secuirty parameters!\n"); return -1; } + mwifiex_set_ht_params(priv, bss_cfg, params); + if (mwifiex_send_cmd_sync(priv, HostCmd_CMD_UAP_BSS_STOP, HostCmd_ACT_GEN_SET, 0, NULL)) { wiphy_err(wiphy, "Failed to stop the BSS\n"); @@ -991,6 +1017,16 @@ static int mwifiex_cfg80211_start_ap(struct wiphy *wiphy, return -1; } + if (priv->sec_info.wep_enabled) + priv->curr_pkt_filter |= HostCmd_ACT_MAC_WEP_ENABLE; + else + priv->curr_pkt_filter &= ~HostCmd_ACT_MAC_WEP_ENABLE; + + if (mwifiex_send_cmd_sync(priv, HostCmd_CMD_MAC_CONTROL, + HostCmd_ACT_GEN_SET, 0, + &priv->curr_pkt_filter)) + return -1; + return 0; } @@ -1083,7 +1119,7 @@ mwifiex_cfg80211_assoc(struct mwifiex_private *priv, size_t ssid_len, u8 *ssid, struct cfg80211_ssid req_ssid; int ret, auth_type = 0; struct cfg80211_bss *bss = NULL; - u8 is_scanning_required = 0; + u8 is_scanning_required = 0, config_bands = 0; memset(&req_ssid, 0, sizeof(struct cfg80211_ssid)); @@ -1102,9 +1138,19 @@ mwifiex_cfg80211_assoc(struct mwifiex_private *priv, size_t ssid_len, u8 *ssid, /* disconnect before try to associate */ mwifiex_deauthenticate(priv, NULL); - if (channel) - ret = mwifiex_set_rf_channel(priv, channel, - priv->adapter->channel_type); + if (channel) { + if (mode == NL80211_IFTYPE_STATION) { + if (channel->band == IEEE80211_BAND_2GHZ) + config_bands = BAND_B | BAND_G | BAND_GN; + else + config_bands = BAND_A | BAND_AN; + + if (!((config_bands | priv->adapter->fw_bands) & + ~priv->adapter->fw_bands)) + priv->adapter->config_bands = config_bands; + } + mwifiex_send_domain_info_cmd_fw(priv->wdev->wiphy); + } 
/* As this is new association, clear locally stored * keys and security related flags */ @@ -1269,6 +1315,76 @@ done: } /* + * This function sets following parameters for ibss network. + * - channel + * - start band + * - 11n flag + * - secondary channel offset + */ +static int mwifiex_set_ibss_params(struct mwifiex_private *priv, + struct cfg80211_ibss_params *params) +{ + struct wiphy *wiphy = priv->wdev->wiphy; + struct mwifiex_adapter *adapter = priv->adapter; + int index = 0, i; + u8 config_bands = 0; + + if (params->channel->band == IEEE80211_BAND_2GHZ) { + if (!params->basic_rates) { + config_bands = BAND_B | BAND_G; + } else { + for (i = 0; i < mwifiex_band_2ghz.n_bitrates; i++) { + /* + * Rates below 6 Mbps in the table are CCK + * rates; 802.11b and from 6 they are OFDM; + * 802.11G + */ + if (mwifiex_rates[i].bitrate == 60) { + index = 1 << i; + break; + } + } + + if (params->basic_rates < index) { + config_bands = BAND_B; + } else { + config_bands = BAND_G; + if (params->basic_rates % index) + config_bands |= BAND_B; + } + } + + if (params->channel_type != NL80211_CHAN_NO_HT) + config_bands |= BAND_GN; + } else { + if (params->channel_type == NL80211_CHAN_NO_HT) + config_bands = BAND_A; + else + config_bands = BAND_AN | BAND_A; + } + + if (!((config_bands | adapter->fw_bands) & ~adapter->fw_bands)) { + adapter->config_bands = config_bands; + adapter->adhoc_start_band = config_bands; + + if ((config_bands & BAND_GN) || (config_bands & BAND_AN)) + adapter->adhoc_11n_enabled = true; + else + adapter->adhoc_11n_enabled = false; + } + + adapter->sec_chan_offset = + mwifiex_chan_type_to_sec_chan_offset(params->channel_type); + priv->adhoc_channel = + ieee80211_frequency_to_channel(params->channel->center_freq); + + wiphy_dbg(wiphy, "info: set ibss band %d, chan %d, chan offset %d\n", + config_bands, priv->adhoc_channel, adapter->sec_chan_offset); + + return 0; +} + +/* * CFG802.11 operation handler to join an IBSS. * * This function does not work in any mode other than Ad-Hoc, or if @@ -1290,6 +1406,8 @@ mwifiex_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev, wiphy_dbg(wiphy, "info: trying to join to %s and bssid %pM\n", (char *) params->ssid, params->bssid); + mwifiex_set_ibss_params(priv, params); + ret = mwifiex_cfg80211_assoc(priv, params->ssid_len, params->ssid, params->bssid, priv->bss_mode, params->channel, NULL, params->privacy); @@ -1336,9 +1454,10 @@ mwifiex_cfg80211_leave_ibss(struct wiphy *wiphy, struct net_device *dev) * it also informs the results. 
*/ static int -mwifiex_cfg80211_scan(struct wiphy *wiphy, struct net_device *dev, +mwifiex_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) { + struct net_device *dev = request->wdev->netdev; struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); int i; struct ieee80211_channel *chan; @@ -1382,7 +1501,7 @@ mwifiex_cfg80211_scan(struct wiphy *wiphy, struct net_device *dev, priv->user_scan_cfg->chan_list[i].scan_time = 0; } - if (mwifiex_set_user_scan_ioctl(priv, priv->user_scan_cfg)) + if (mwifiex_scan_networks(priv, priv->user_scan_cfg)) return -EFAULT; if (request->ie && request->ie_len) { @@ -1472,11 +1591,11 @@ mwifiex_setup_ht_caps(struct ieee80211_sta_ht_cap *ht_info, /* * create a new virtual interface with the given name */ -struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy, - char *name, - enum nl80211_iftype type, - u32 *flags, - struct vif_params *params) +struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy, + char *name, + enum nl80211_iftype type, + u32 *flags, + struct vif_params *params) { struct mwifiex_adapter *adapter = mwifiex_cfg80211_get_adapter(wiphy); struct mwifiex_private *priv; @@ -1597,16 +1716,16 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy, #ifdef CONFIG_DEBUG_FS mwifiex_dev_debugfs_init(priv); #endif - return dev; + return wdev; } EXPORT_SYMBOL_GPL(mwifiex_add_virtual_intf); /* * del_virtual_intf: remove the virtual interface determined by dev */ -int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct net_device *dev) +int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev) { - struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); + struct mwifiex_private *priv = mwifiex_netdev_get_priv(wdev->netdev); #ifdef CONFIG_DEBUG_FS mwifiex_dev_debugfs_remove(priv); @@ -1618,11 +1737,11 @@ int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct net_device *dev) if (netif_carrier_ok(priv->netdev)) netif_carrier_off(priv->netdev); - if (dev->reg_state == NETREG_REGISTERED) - unregister_netdevice(dev); + if (wdev->netdev->reg_state == NETREG_REGISTERED) + unregister_netdevice(wdev->netdev); - if (dev->reg_state == NETREG_UNREGISTERED) - free_netdev(dev); + if (wdev->netdev->reg_state == NETREG_UNREGISTERED) + free_netdev(wdev->netdev); /* Clear the priv in adapter */ priv->netdev = NULL; @@ -1656,7 +1775,9 @@ static struct cfg80211_ops mwifiex_cfg80211_ops = { .set_bitrate_mask = mwifiex_cfg80211_set_bitrate_mask, .start_ap = mwifiex_cfg80211_start_ap, .stop_ap = mwifiex_cfg80211_stop_ap, + .change_beacon = mwifiex_cfg80211_change_beacon, .set_cqm_rssi_config = mwifiex_cfg80211_set_cqm_rssi_config, + .set_antenna = mwifiex_cfg80211_set_antenna, }; /* @@ -1703,7 +1824,16 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter) memcpy(wiphy->perm_addr, priv->curr_addr, ETH_ALEN); wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM; - wiphy->flags |= WIPHY_FLAG_HAVE_AP_SME | WIPHY_FLAG_CUSTOM_REGULATORY; + wiphy->flags |= WIPHY_FLAG_HAVE_AP_SME | + WIPHY_FLAG_AP_PROBE_RESP_OFFLOAD; + + wiphy->probe_resp_offload = NL80211_PROBE_RESP_OFFLOAD_SUPPORT_WPS | + NL80211_PROBE_RESP_OFFLOAD_SUPPORT_WPS2; + + wiphy->available_antennas_tx = BIT(adapter->number_of_antenna) - 1; + wiphy->available_antennas_rx = BIT(adapter->number_of_antenna) - 1; + + wiphy->features = NL80211_FEATURE_HT_IBSS; /* Reserve space for mwifiex specific private data for BSS */ wiphy->bss_priv_size = sizeof(struct mwifiex_bss_priv); @@ -1714,7 +1844,7 @@ int mwifiex_register_cfg80211(struct 
mwifiex_adapter *adapter) wdev_priv = wiphy_priv(wiphy); *(unsigned long *)wdev_priv = (unsigned long)adapter; - set_wiphy_dev(wiphy, (struct device *)priv->adapter->dev); + set_wiphy_dev(wiphy, priv->adapter->dev); ret = wiphy_register(wiphy); if (ret < 0) { diff --git a/drivers/net/wireless/mwifiex/cfp.c b/drivers/net/wireless/mwifiex/cfp.c index 560871b0e236..f69300f93f42 100644 --- a/drivers/net/wireless/mwifiex/cfp.c +++ b/drivers/net/wireless/mwifiex/cfp.c @@ -167,23 +167,6 @@ u32 mwifiex_index_to_data_rate(struct mwifiex_private *priv, u8 index, } /* - * This function maps a data rate value into corresponding index in supported - * rates table. - */ -u8 mwifiex_data_rate_to_index(u32 rate) -{ - u16 *ptr; - - if (rate) { - ptr = memchr(mwifiex_data_rates, rate, - sizeof(mwifiex_data_rates)); - if (ptr) - return (u8) (ptr - mwifiex_data_rates); - } - return 0; -} - -/* * This function returns the current active data rates. * * The result may vary depending upon connection status. @@ -277,20 +260,6 @@ mwifiex_is_rate_auto(struct mwifiex_private *priv) } /* - * This function converts rate bitmap into rate index. - */ -int mwifiex_get_rate_index(u16 *rate_bitmap, int size) -{ - int i; - - for (i = 0; i < size * 8; i++) - if (rate_bitmap[i / 16] & (1 << (i % 16))) - return i; - - return 0; -} - -/* * This function gets the supported data rates. * * The function works in both Ad-Hoc and infra mode by printing the diff --git a/drivers/net/wireless/mwifiex/cmdevt.c b/drivers/net/wireless/mwifiex/cmdevt.c index 51e023ec1de4..c68adec3cc8b 100644 --- a/drivers/net/wireless/mwifiex/cmdevt.c +++ b/drivers/net/wireless/mwifiex/cmdevt.c @@ -578,6 +578,7 @@ int mwifiex_send_cmd_async(struct mwifiex_private *priv, uint16_t cmd_no, } else { adapter->cmd_queued = cmd_node; mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true); + queue_work(adapter->workqueue, &adapter->main_work); } return ret; @@ -1102,7 +1103,8 @@ int mwifiex_ret_802_11_hs_cfg(struct mwifiex_private *priv, &resp->params.opt_hs_cfg; uint32_t conditions = le32_to_cpu(phs_cfg->params.hs_config.conditions); - if (phs_cfg->action == cpu_to_le16(HS_ACTIVATE)) { + if (phs_cfg->action == cpu_to_le16(HS_ACTIVATE) && + adapter->iface_type == MWIFIEX_SDIO) { mwifiex_hs_activated_event(priv, true); return 0; } else { @@ -1114,6 +1116,9 @@ int mwifiex_ret_802_11_hs_cfg(struct mwifiex_private *priv, } if (conditions != HOST_SLEEP_CFG_CANCEL) { adapter->is_hs_configured = true; + if (adapter->iface_type == MWIFIEX_USB || + adapter->iface_type == MWIFIEX_PCIE) + mwifiex_hs_activated_event(priv, true); } else { adapter->is_hs_configured = false; if (adapter->hs_activated) diff --git a/drivers/net/wireless/mwifiex/decl.h b/drivers/net/wireless/mwifiex/decl.h index f918f66e5e27..070ef25f5186 100644 --- a/drivers/net/wireless/mwifiex/decl.h +++ b/drivers/net/wireless/mwifiex/decl.h @@ -41,16 +41,7 @@ #define MWIFIEX_AMPDU_DEF_RXWINSIZE 16 #define MWIFIEX_DEFAULT_BLOCK_ACK_TIMEOUT 0xffff -#define MWIFIEX_RATE_INDEX_HRDSSS0 0 -#define MWIFIEX_RATE_INDEX_HRDSSS3 3 -#define MWIFIEX_RATE_INDEX_OFDM0 4 -#define MWIFIEX_RATE_INDEX_OFDM7 11 -#define MWIFIEX_RATE_INDEX_MCS0 12 - -#define MWIFIEX_RATE_BITMAP_OFDM0 16 -#define MWIFIEX_RATE_BITMAP_OFDM7 23 #define MWIFIEX_RATE_BITMAP_MCS0 32 -#define MWIFIEX_RATE_BITMAP_MCS127 159 #define MWIFIEX_RX_DATA_BUF_SIZE (4 * 1024) #define MWIFIEX_RX_CMD_BUF_SIZE (2 * 1024) diff --git a/drivers/net/wireless/mwifiex/fw.h b/drivers/net/wireless/mwifiex/fw.h index 561452a5c818..e831b440a24a 100644 --- 
a/drivers/net/wireless/mwifiex/fw.h +++ b/drivers/net/wireless/mwifiex/fw.h @@ -124,6 +124,7 @@ enum MWIFIEX_802_11_PRIVACY_FILTER { #define TLV_TYPE_UAP_DTIM_PERIOD (PROPRIETARY_TLV_BASE_ID + 45) #define TLV_TYPE_UAP_BCAST_SSID (PROPRIETARY_TLV_BASE_ID + 48) #define TLV_TYPE_UAP_RTS_THRESHOLD (PROPRIETARY_TLV_BASE_ID + 51) +#define TLV_TYPE_UAP_WEP_KEY (PROPRIETARY_TLV_BASE_ID + 59) #define TLV_TYPE_UAP_WPA_PASSPHRASE (PROPRIETARY_TLV_BASE_ID + 60) #define TLV_TYPE_UAP_ENCRY_PROTOCOL (PROPRIETARY_TLV_BASE_ID + 64) #define TLV_TYPE_UAP_AKMP (PROPRIETARY_TLV_BASE_ID + 65) @@ -162,6 +163,12 @@ enum MWIFIEX_802_11_PRIVACY_FILTER { #define ISSUPP_11NENABLED(FwCapInfo) (FwCapInfo & BIT(11)) +#define MWIFIEX_DEF_HT_CAP (IEEE80211_HT_CAP_DSSSCCK40 | \ + (1 << IEEE80211_HT_CAP_RX_STBC_SHIFT) | \ + IEEE80211_HT_CAP_SM_PS) + +#define MWIFIEX_DEF_AMPDU IEEE80211_HT_AMPDU_PARM_FACTOR + /* dev_cap bitmap * BIT * 0-16 reserved @@ -218,7 +225,8 @@ enum MWIFIEX_802_11_PRIVACY_FILTER { #define HostCmd_CMD_BBP_REG_ACCESS 0x001a #define HostCmd_CMD_RF_REG_ACCESS 0x001b #define HostCmd_CMD_PMIC_REG_ACCESS 0x00ad -#define HostCmd_CMD_802_11_RF_CHANNEL 0x001d +#define HostCmd_CMD_RF_TX_PWR 0x001e +#define HostCmd_CMD_RF_ANTENNA 0x0020 #define HostCmd_CMD_802_11_DEAUTHENTICATE 0x0024 #define HostCmd_CMD_MAC_CONTROL 0x0028 #define HostCmd_CMD_802_11_AD_HOC_START 0x002b @@ -314,6 +322,12 @@ enum ENH_PS_MODES { #define HostCmd_BSS_TYPE_MASK 0xf000 +#define HostCmd_ACT_SET_RX 0x0001 +#define HostCmd_ACT_SET_TX 0x0002 +#define HostCmd_ACT_SET_BOTH 0x0003 + +#define RF_ANTENNA_AUTO 0xFFFF + #define HostCmd_SET_SEQ_NO_BSS_INFO(seq, num, type) { \ (((seq) & 0x00ff) | \ (((num) & 0x000f) << 8)) | \ @@ -869,6 +883,25 @@ struct host_cmd_ds_txpwr_cfg { __le32 mode; } __packed; +struct host_cmd_ds_rf_tx_pwr { + __le16 action; + __le16 cur_level; + u8 max_power; + u8 min_power; +} __packed; + +struct host_cmd_ds_rf_ant_mimo { + __le16 action_tx; + __le16 tx_ant_mode; + __le16 action_rx; + __le16 rx_ant_mode; +}; + +struct host_cmd_ds_rf_ant_siso { + __le16 action; + __le16 ant_mode; +}; + struct mwifiex_bcn_param { u8 bssid[ETH_ALEN]; u8 rssi; @@ -1195,6 +1228,13 @@ struct host_cmd_tlv_passphrase { u8 passphrase[0]; } __packed; +struct host_cmd_tlv_wep_key { + struct host_cmd_tlv tlv; + u8 key_index; + u8 is_default; + u8 key[1]; +}; + struct host_cmd_tlv_auth_type { struct host_cmd_tlv tlv; u8 auth_type; @@ -1251,14 +1291,6 @@ struct host_cmd_tlv_channel_band { u8 channel; } __packed; -struct host_cmd_ds_802_11_rf_channel { - __le16 action; - __le16 current_channel; - __le16 rf_type; - __le16 reserved; - u8 reserved_1[32]; -} __packed; - struct host_cmd_ds_version_ext { u8 version_str_sel; char version_str[128]; @@ -1343,10 +1375,12 @@ struct host_cmd_ds_command { struct host_cmd_ds_802_11_rssi_info rssi_info; struct host_cmd_ds_802_11_rssi_info_rsp rssi_info_rsp; struct host_cmd_ds_802_11_snmp_mib smib; - struct host_cmd_ds_802_11_rf_channel rf_channel; struct host_cmd_ds_tx_rate_query tx_rate; struct host_cmd_ds_tx_rate_cfg tx_rate_cfg; struct host_cmd_ds_txpwr_cfg txp_cfg; + struct host_cmd_ds_rf_tx_pwr txp; + struct host_cmd_ds_rf_ant_mimo ant_mimo; + struct host_cmd_ds_rf_ant_siso ant_siso; struct host_cmd_ds_802_11_ps_mode_enh psmode_enh; struct host_cmd_ds_802_11_hs_cfg_enh opt_hs_cfg; struct host_cmd_ds_802_11_scan scan; diff --git a/drivers/net/wireless/mwifiex/ie.c b/drivers/net/wireless/mwifiex/ie.c index 383820a52beb..1d8dd003e396 100644 --- a/drivers/net/wireless/mwifiex/ie.c +++ 
b/drivers/net/wireless/mwifiex/ie.c @@ -51,8 +51,7 @@ mwifiex_ie_get_autoidx(struct mwifiex_private *priv, u16 subtype_mask, for (i = 0; i < priv->adapter->max_mgmt_ie_index; i++) { mask = le16_to_cpu(priv->mgmt_ie[i].mgmt_subtype_mask); - len = le16_to_cpu(priv->mgmt_ie[i].ie_length) + - le16_to_cpu(ie->ie_length); + len = le16_to_cpu(ie->ie_length); if (mask == MWIFIEX_AUTO_IDX_MASK) continue; @@ -108,10 +107,8 @@ mwifiex_update_autoindex_ies(struct mwifiex_private *priv, return -1; tmp = (u8 *)&priv->mgmt_ie[index].ie_buffer; - tmp += le16_to_cpu(priv->mgmt_ie[index].ie_length); memcpy(tmp, &ie->ie_buffer, le16_to_cpu(ie->ie_length)); - le16_add_cpu(&priv->mgmt_ie[index].ie_length, - le16_to_cpu(ie->ie_length)); + priv->mgmt_ie[index].ie_length = ie->ie_length; priv->mgmt_ie[index].ie_index = cpu_to_le16(index); priv->mgmt_ie[index].mgmt_subtype_mask = cpu_to_le16(mask); @@ -217,92 +214,63 @@ mwifiex_update_uap_custom_ie(struct mwifiex_private *priv, return ret; } -/* This function parses different IEs- Tail IEs, beacon IEs, probe response IEs, - * association response IEs from cfg80211_ap_settings function and sets these IE - * to FW. +/* This function checks if WPS IE is present in passed buffer and copies it to + * mwifiex_ie structure. + * Function takes pointer to struct mwifiex_ie pointer as argument. + * If WPS IE is present memory is allocated for mwifiex_ie pointer and filled + * in with WPS IE. Caller should take care of freeing this memory. */ -int mwifiex_set_mgmt_ies(struct mwifiex_private *priv, - struct cfg80211_ap_settings *params) +static int mwifiex_update_wps_ie(const u8 *ies, int ies_len, + struct mwifiex_ie **ie_ptr, u16 mask) { - struct mwifiex_ie *beacon_ie = NULL, *pr_ie = NULL; - struct mwifiex_ie *ar_ie = NULL, *rsn_ie = NULL; - struct ieee_types_header *ie = NULL; - u16 beacon_idx = MWIFIEX_AUTO_IDX_MASK, pr_idx = MWIFIEX_AUTO_IDX_MASK; - u16 ar_idx = MWIFIEX_AUTO_IDX_MASK, rsn_idx = MWIFIEX_AUTO_IDX_MASK; - u16 mask; - int ret = 0; - - if (params->beacon.tail && params->beacon.tail_len) { - ie = (void *)cfg80211_find_ie(WLAN_EID_RSN, params->beacon.tail, - params->beacon.tail_len); - if (ie) { - rsn_ie = kmalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); - if (!rsn_ie) - return -ENOMEM; - - rsn_ie->ie_index = cpu_to_le16(rsn_idx); - mask = MGMT_MASK_BEACON | MGMT_MASK_PROBE_RESP | - MGMT_MASK_ASSOC_RESP; - rsn_ie->mgmt_subtype_mask = cpu_to_le16(mask); - rsn_ie->ie_length = cpu_to_le16(ie->len + 2); - memcpy(rsn_ie->ie_buffer, ie, ie->len + 2); - - if (mwifiex_update_uap_custom_ie(priv, rsn_ie, &rsn_idx, - NULL, NULL, - NULL, NULL)) { - ret = -1; - goto done; - } + struct ieee_types_header *wps_ie; + struct mwifiex_ie *ie = NULL; + const u8 *vendor_ie; + + vendor_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT, + WLAN_OUI_TYPE_MICROSOFT_WPS, + ies, ies_len); + if (vendor_ie) { + ie = kmalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); + if (!ie) + return -ENOMEM; - priv->rsn_idx = rsn_idx; - } + wps_ie = (struct ieee_types_header *)vendor_ie; + memcpy(ie->ie_buffer, wps_ie, wps_ie->len + 2); + ie->ie_length = cpu_to_le16(wps_ie->len + 2); + ie->mgmt_subtype_mask = cpu_to_le16(mask); + ie->ie_index = cpu_to_le16(MWIFIEX_AUTO_IDX_MASK); } - if (params->beacon.beacon_ies && params->beacon.beacon_ies_len) { - beacon_ie = kmalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); - if (!beacon_ie) { - ret = -ENOMEM; - goto done; - } - - beacon_ie->ie_index = cpu_to_le16(beacon_idx); - beacon_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON); - beacon_ie->ie_length = - 
cpu_to_le16(params->beacon.beacon_ies_len); - memcpy(beacon_ie->ie_buffer, params->beacon.beacon_ies, - params->beacon.beacon_ies_len); - } + *ie_ptr = ie; + return 0; +} - if (params->beacon.proberesp_ies && params->beacon.proberesp_ies_len) { - pr_ie = kmalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); - if (!pr_ie) { - ret = -ENOMEM; - goto done; - } +/* This function parses beacon IEs, probe response IEs, association response IEs + * from cfg80211_ap_settings->beacon and sets these IE to FW. + */ +static int mwifiex_set_mgmt_beacon_data_ies(struct mwifiex_private *priv, + struct cfg80211_beacon_data *data) +{ + struct mwifiex_ie *beacon_ie = NULL, *pr_ie = NULL, *ar_ie = NULL; + u16 beacon_idx = MWIFIEX_AUTO_IDX_MASK, pr_idx = MWIFIEX_AUTO_IDX_MASK; + u16 ar_idx = MWIFIEX_AUTO_IDX_MASK; + int ret = 0; - pr_ie->ie_index = cpu_to_le16(pr_idx); - pr_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_PROBE_RESP); - pr_ie->ie_length = - cpu_to_le16(params->beacon.proberesp_ies_len); - memcpy(pr_ie->ie_buffer, params->beacon.proberesp_ies, - params->beacon.proberesp_ies_len); - } + if (data->beacon_ies && data->beacon_ies_len) + mwifiex_update_wps_ie(data->beacon_ies, data->beacon_ies_len, + &beacon_ie, MGMT_MASK_BEACON); - if (params->beacon.assocresp_ies && params->beacon.assocresp_ies_len) { - ar_ie = kmalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); - if (!ar_ie) { - ret = -ENOMEM; - goto done; - } + if (data->proberesp_ies && data->proberesp_ies_len) + mwifiex_update_wps_ie(data->proberesp_ies, + data->proberesp_ies_len, &pr_ie, + MGMT_MASK_PROBE_RESP); - ar_ie->ie_index = cpu_to_le16(ar_idx); - mask = MGMT_MASK_ASSOC_RESP | MGMT_MASK_REASSOC_RESP; - ar_ie->mgmt_subtype_mask = cpu_to_le16(mask); - ar_ie->ie_length = - cpu_to_le16(params->beacon.assocresp_ies_len); - memcpy(ar_ie->ie_buffer, params->beacon.assocresp_ies, - params->beacon.assocresp_ies_len); - } + if (data->assocresp_ies && data->assocresp_ies_len) + mwifiex_update_wps_ie(data->assocresp_ies, + data->assocresp_ies_len, &ar_ie, + MGMT_MASK_ASSOC_RESP | + MGMT_MASK_REASSOC_RESP); if (beacon_ie || pr_ie || ar_ie) { ret = mwifiex_update_uap_custom_ie(priv, beacon_ie, @@ -320,11 +288,67 @@ done: kfree(beacon_ie); kfree(pr_ie); kfree(ar_ie); - kfree(rsn_ie); return ret; } +/* This function parses different IEs-tail IEs, beacon IEs, probe response IEs, + * association response IEs from cfg80211_ap_settings function and sets these IE + * to FW. 
+ */ +int mwifiex_set_mgmt_ies(struct mwifiex_private *priv, + struct cfg80211_beacon_data *info) +{ + struct mwifiex_ie *gen_ie; + struct ieee_types_header *rsn_ie, *wpa_ie = NULL; + u16 rsn_idx = MWIFIEX_AUTO_IDX_MASK, ie_len = 0; + const u8 *vendor_ie; + + if (info->tail && info->tail_len) { + gen_ie = kzalloc(sizeof(struct mwifiex_ie), GFP_KERNEL); + if (!gen_ie) + return -ENOMEM; + gen_ie->ie_index = cpu_to_le16(rsn_idx); + gen_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON | + MGMT_MASK_PROBE_RESP | + MGMT_MASK_ASSOC_RESP); + + rsn_ie = (void *)cfg80211_find_ie(WLAN_EID_RSN, + info->tail, info->tail_len); + if (rsn_ie) { + memcpy(gen_ie->ie_buffer, rsn_ie, rsn_ie->len + 2); + ie_len = rsn_ie->len + 2; + gen_ie->ie_length = cpu_to_le16(ie_len); + } + + vendor_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT, + WLAN_OUI_TYPE_MICROSOFT_WPA, + info->tail, + info->tail_len); + if (vendor_ie) { + wpa_ie = (struct ieee_types_header *)vendor_ie; + memcpy(gen_ie->ie_buffer + ie_len, + wpa_ie, wpa_ie->len + 2); + ie_len += wpa_ie->len + 2; + gen_ie->ie_length = cpu_to_le16(ie_len); + } + + if (rsn_ie || wpa_ie) { + if (mwifiex_update_uap_custom_ie(priv, gen_ie, &rsn_idx, + NULL, NULL, + NULL, NULL)) { + kfree(gen_ie); + return -1; + } + priv->rsn_idx = rsn_idx; + } + + kfree(gen_ie); + } + + return mwifiex_set_mgmt_beacon_data_ies(priv, info); +} + /* This function removes management IE set */ int mwifiex_del_mgmt_ies(struct mwifiex_private *priv) { diff --git a/drivers/net/wireless/mwifiex/init.c b/drivers/net/wireless/mwifiex/init.c index c1cb004db913..21fdc6c02775 100644 --- a/drivers/net/wireless/mwifiex/init.c +++ b/drivers/net/wireless/mwifiex/init.c @@ -57,6 +57,69 @@ static int mwifiex_add_bss_prio_tbl(struct mwifiex_private *priv) return 0; } +static void scan_delay_timer_fn(unsigned long data) +{ + struct mwifiex_private *priv = (struct mwifiex_private *)data; + struct mwifiex_adapter *adapter = priv->adapter; + struct cmd_ctrl_node *cmd_node, *tmp_node; + unsigned long flags; + + if (!mwifiex_wmm_lists_empty(adapter)) { + if (adapter->scan_delay_cnt == MWIFIEX_MAX_SCAN_DELAY_CNT) { + /* + * Abort scan operation by cancelling all pending scan + * command + */ + spin_lock_irqsave(&adapter->scan_pending_q_lock, flags); + list_for_each_entry_safe(cmd_node, tmp_node, + &adapter->scan_pending_q, + list) { + list_del(&cmd_node->list); + cmd_node->wait_q_enabled = false; + mwifiex_insert_cmd_to_free_q(adapter, cmd_node); + } + spin_unlock_irqrestore(&adapter->scan_pending_q_lock, + flags); + + spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags); + adapter->scan_processing = false; + spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, + flags); + + if (priv->user_scan_cfg) { + dev_dbg(priv->adapter->dev, + "info: %s: scan aborted\n", __func__); + cfg80211_scan_done(priv->scan_request, 1); + priv->scan_request = NULL; + kfree(priv->user_scan_cfg); + priv->user_scan_cfg = NULL; + } + } else { + /* + * Tx data queue is still not empty, delay scan + * operation further by 20msec. + */ + mod_timer(&priv->scan_delay_timer, jiffies + + msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC)); + adapter->scan_delay_cnt++; + } + queue_work(priv->adapter->workqueue, &priv->adapter->main_work); + } else { + /* + * Tx data queue is empty. 
Get scan command from scan_pending_q + * and put to cmd_pending_q to resume scan operation + */ + adapter->scan_delay_cnt = 0; + spin_lock_irqsave(&adapter->scan_pending_q_lock, flags); + cmd_node = list_first_entry(&adapter->scan_pending_q, + struct cmd_ctrl_node, list); + list_del(&cmd_node->list); + spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags); + + mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true); + } +} + /* * This function initializes the private structure and sets default * values to the members. @@ -136,6 +199,9 @@ static int mwifiex_init_priv(struct mwifiex_private *priv) priv->scan_block = false; + setup_timer(&priv->scan_delay_timer, scan_delay_timer_fn, + (unsigned long)priv); + return mwifiex_add_bss_prio_tbl(priv); } @@ -278,7 +344,6 @@ static void mwifiex_init_adapter(struct mwifiex_adapter *adapter) adapter->adhoc_awake_period = 0; memset(&adapter->arp_filter, 0, sizeof(adapter->arp_filter)); adapter->arp_filter_size = 0; - adapter->channel_type = NL80211_CHAN_HT20; adapter->max_mgmt_ie_index = MAX_MGMT_IE_INDEX; } diff --git a/drivers/net/wireless/mwifiex/ioctl.h b/drivers/net/wireless/mwifiex/ioctl.h index e6be6ee75951..50191539bb32 100644 --- a/drivers/net/wireless/mwifiex/ioctl.h +++ b/drivers/net/wireless/mwifiex/ioctl.h @@ -21,6 +21,7 @@ #define _MWIFIEX_IOCTL_H_ #include <net/mac80211.h> +#include <net/lib80211.h> enum { MWIFIEX_SCAN_TYPE_UNCHANGED = 0, @@ -71,6 +72,13 @@ struct wpa_param { u8 passphrase[MWIFIEX_WPA_PASSHPHRASE_LEN]; }; +struct wep_key { + u8 key_index; + u8 is_default; + u16 length; + u8 key[WLAN_KEY_LEN_WEP104]; +}; + #define KEY_MGMT_ON_HOST 0x03 #define MWIFIEX_AUTH_MODE_AUTO 0xFF #define BAND_CONFIG_MANUAL 0x00 @@ -90,6 +98,8 @@ struct mwifiex_uap_bss_param { u16 key_mgmt; u16 key_mgmt_operation; struct wpa_param wpa_cfg; + struct wep_key wep_cfg[NUM_WEP_KEYS]; + struct ieee80211_ht_cap ht_cap; }; enum { @@ -215,12 +225,6 @@ struct mwifiex_ds_encrypt_key { u8 wapi_rxpn[WAPI_RXPN_LEN]; }; -struct mwifiex_rate_cfg { - u32 action; - u32 is_rate_auto; - u32 rate; -}; - struct mwifiex_power_cfg { u32 is_power_auto; u32 power_level; @@ -267,6 +271,11 @@ struct mwifiex_ds_11n_amsdu_aggr_ctrl { u16 curr_buf_size; }; +struct mwifiex_ds_ant_cfg { + u32 tx_ant; + u32 rx_ant; +}; + #define MWIFIEX_NUM_OF_CMD_BUFFER 20 #define MWIFIEX_SIZE_OF_CMD_BUFFER 2048 diff --git a/drivers/net/wireless/mwifiex/join.c b/drivers/net/wireless/mwifiex/join.c index d6b4fb04011f..82e63cee1e97 100644 --- a/drivers/net/wireless/mwifiex/join.c +++ b/drivers/net/wireless/mwifiex/join.c @@ -1349,22 +1349,16 @@ static int mwifiex_deauthenticate_infra(struct mwifiex_private *priv, u8 *mac) { u8 mac_address[ETH_ALEN]; int ret; - u8 zero_mac[ETH_ALEN] = { 0, 0, 0, 0, 0, 0 }; - if (mac) { - if (!memcmp(mac, zero_mac, sizeof(zero_mac))) - memcpy((u8 *) &mac_address, - (u8 *) &priv->curr_bss_params.bss_descriptor. - mac_address, ETH_ALEN); - else - memcpy((u8 *) &mac_address, (u8 *) mac, ETH_ALEN); - } else { - memcpy((u8 *) &mac_address, (u8 *) &priv->curr_bss_params. 
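The scan_delay_timer_fn() added above implements a bounded back-off: while the WMM transmit queues are busy the scan is pushed back by 20 ms, and after MWIFIEX_MAX_SCAN_DELAY_CNT (50) deferrals it is aborted; once the queues drain, the pending scan command is resumed. A compact userspace model of that decision (scan_delay_tick() and the enum are hypothetical names, not the driver's):

#include <stdbool.h>
#include <stdio.h>

#define MAX_SCAN_DELAY_CNT 50	/* mirrors MWIFIEX_MAX_SCAN_DELAY_CNT */
#define SCAN_DELAY_MSEC    20	/* mirrors MWIFIEX_SCAN_DELAY_MSEC */

enum scan_action { SCAN_RESUME, SCAN_DEFER, SCAN_ABORT };

/*
 * Decide what to do each time the delay timer fires:
 *  - queues empty            -> resume the pending scan, reset the counter
 *  - queues busy, budget left-> defer another SCAN_DELAY_MSEC and count it
 *  - queues busy, no budget  -> abort the scan
 */
static enum scan_action scan_delay_tick(bool tx_queues_empty, unsigned int *delay_cnt)
{
	if (tx_queues_empty) {
		*delay_cnt = 0;
		return SCAN_RESUME;
	}
	if (*delay_cnt == MAX_SCAN_DELAY_CNT)
		return SCAN_ABORT;
	(*delay_cnt)++;
	return SCAN_DEFER;
}

int main(void)
{
	unsigned int cnt = 0;

	/* Traffic never drains: 50 deferrals of 20 ms each, then an abort. */
	for (;;) {
		if (scan_delay_tick(false, &cnt) == SCAN_ABORT) {
			printf("scan aborted after %u ms of deferral\n",
			       cnt * SCAN_DELAY_MSEC);
			return 0;
		}
	}
}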
- bss_descriptor.mac_address, ETH_ALEN); - } + if (!mac || is_zero_ether_addr(mac)) + memcpy(mac_address, + priv->curr_bss_params.bss_descriptor.mac_address, + ETH_ALEN); + else + memcpy(mac_address, mac, ETH_ALEN); ret = mwifiex_send_cmd_sync(priv, HostCmd_CMD_802_11_DEAUTHENTICATE, - HostCmd_ACT_GEN_SET, 0, &mac_address); + HostCmd_ACT_GEN_SET, 0, mac_address); return ret; } diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c index 3192855c31c0..46803621d015 100644 --- a/drivers/net/wireless/mwifiex/main.c +++ b/drivers/net/wireless/mwifiex/main.c @@ -190,7 +190,8 @@ process_start: adapter->tx_lock_flag) break; - if (adapter->scan_processing || adapter->data_sent || + if ((adapter->scan_processing && + !adapter->scan_delay_cnt) || adapter->data_sent || mwifiex_wmm_lists_empty(adapter)) { if (adapter->cmd_sent || adapter->curr_cmd || (!is_command_pending(adapter))) @@ -244,8 +245,8 @@ process_start: } } - if (!adapter->scan_processing && !adapter->data_sent && - !mwifiex_wmm_lists_empty(adapter)) { + if ((!adapter->scan_processing || adapter->scan_delay_cnt) && + !adapter->data_sent && !mwifiex_wmm_lists_empty(adapter)) { mwifiex_wmm_process_tx(adapter); if (adapter->hs_activated) { adapter->is_hs_configured = false; @@ -376,7 +377,7 @@ static void mwifiex_fw_dpc(const struct firmware *firmware, void *context) goto done; err_add_intf: - mwifiex_del_virtual_intf(adapter->wiphy, priv->netdev); + mwifiex_del_virtual_intf(adapter->wiphy, priv->wdev); rtnl_unlock(); err_init_fw: pr_debug("info: %s: unregister device\n", __func__); @@ -843,7 +844,7 @@ int mwifiex_remove_card(struct mwifiex_adapter *adapter, struct semaphore *sem) rtnl_lock(); if (priv->wdev && priv->netdev) - mwifiex_del_virtual_intf(adapter->wiphy, priv->netdev); + mwifiex_del_virtual_intf(adapter->wiphy, priv->wdev); rtnl_unlock(); } diff --git a/drivers/net/wireless/mwifiex/main.h b/drivers/net/wireless/mwifiex/main.h index bd3b0bf94b9e..e7c2a82fd610 100644 --- a/drivers/net/wireless/mwifiex/main.h +++ b/drivers/net/wireless/mwifiex/main.h @@ -79,14 +79,17 @@ enum { #define SCAN_BEACON_ENTRY_PAD 6 -#define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 200 -#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME 200 -#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME 110 +#define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 110 +#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME 30 +#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME 30 #define SCAN_RSSI(RSSI) (0x100 - ((u8)(RSSI))) #define MWIFIEX_MAX_TOTAL_SCAN_TIME (MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S) +#define MWIFIEX_MAX_SCAN_DELAY_CNT 50 +#define MWIFIEX_SCAN_DELAY_MSEC 20 + #define RSN_GTK_OUI_OFFSET 2 #define MWIFIEX_OUI_NOT_PRESENT 0 @@ -482,6 +485,7 @@ struct mwifiex_private { u16 proberesp_idx; u16 assocresp_idx; u16 rsn_idx; + struct timer_list scan_delay_timer; }; enum mwifiex_ba_status { @@ -674,7 +678,6 @@ struct mwifiex_adapter { u8 hw_dev_mcs_support; u8 adhoc_11n_enabled; u8 sec_chan_offset; - enum nl80211_channel_type channel_type; struct mwifiex_dbg dbg; u8 arp_filter[ARP_FILTER_MAX_BUF_SIZE]; u32 arp_filter_size; @@ -686,6 +689,7 @@ struct mwifiex_adapter { struct completion fw_load; u8 country_code[IEEE80211_COUNTRY_STRING_LEN]; u16 max_mgmt_ie_index; + u8 scan_delay_cnt; }; int mwifiex_init_lock_list(struct mwifiex_adapter *adapter); @@ -819,9 +823,7 @@ int mwifiex_cmd_append_vsie_tlv(struct mwifiex_private *priv, u16 vsie_mask, u32 mwifiex_get_active_data_rates(struct mwifiex_private *priv, u8 *rates); u32 mwifiex_get_supported_rates(struct mwifiex_private *priv, u8 *rates); -u8 
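The mwifiex_deauthenticate_infra() cleanup above reduces the address selection to one rule: a NULL or all-zero MAC from the caller means "the currently associated BSS". A trivial illustration of that rule, where is_zero_addr() merely stands in for the kernel's is_zero_ether_addr():

#include <stdbool.h>
#include <string.h>
#include <stdint.h>

#define ETH_ALEN 6

static bool is_zero_addr(const uint8_t *a)
{
	static const uint8_t zero[ETH_ALEN];
	return memcmp(a, zero, ETH_ALEN) == 0;
}

/* Pick the deauth target: the caller's MAC, or the current BSSID as fallback. */
static void pick_deauth_target(const uint8_t *mac, const uint8_t *curr_bssid,
			       uint8_t out[ETH_ALEN])
{
	if (!mac || is_zero_addr(mac))
		memcpy(out, curr_bssid, ETH_ALEN);
	else
		memcpy(out, mac, ETH_ALEN);
}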
mwifiex_data_rate_to_index(u32 rate); u8 mwifiex_is_rate_auto(struct mwifiex_private *priv); -int mwifiex_get_rate_index(u16 *rateBitmap, int size); extern u16 region_code_index[MWIFIEX_MAX_REGION_CODE]; void mwifiex_save_curr_bcn(struct mwifiex_private *priv); void mwifiex_free_curr_bcn(struct mwifiex_private *priv); @@ -835,6 +837,9 @@ void mwifiex_init_priv_params(struct mwifiex_private *priv, int mwifiex_set_secure_params(struct mwifiex_private *priv, struct mwifiex_uap_bss_param *bss_config, struct cfg80211_ap_settings *params); +void mwifiex_set_ht_params(struct mwifiex_private *priv, + struct mwifiex_uap_bss_param *bss_cfg, + struct cfg80211_ap_settings *params); /* * This function checks if the queuing is RA based or not. @@ -937,16 +942,13 @@ int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss, int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type); int mwifiex_enable_hs(struct mwifiex_adapter *adapter); int mwifiex_disable_auto_ds(struct mwifiex_private *priv); -int mwifiex_drv_get_data_rate(struct mwifiex_private *priv, - struct mwifiex_rate_cfg *rate); +int mwifiex_drv_get_data_rate(struct mwifiex_private *priv, u32 *rate); int mwifiex_request_scan(struct mwifiex_private *priv, struct cfg80211_ssid *req_ssid); -int mwifiex_set_user_scan_ioctl(struct mwifiex_private *priv, - struct mwifiex_user_scan_cfg *scan_req); +int mwifiex_scan_networks(struct mwifiex_private *priv, + const struct mwifiex_user_scan_cfg *user_scan_in); int mwifiex_set_radio(struct mwifiex_private *priv, u8 option); -int mwifiex_drv_change_adhoc_chan(struct mwifiex_private *priv, u16 channel); - int mwifiex_set_encode(struct mwifiex_private *priv, const u8 *key, int key_len, u8 key_index, const u8 *mac_addr, int disable); @@ -985,9 +987,6 @@ int mwifiex_set_tx_power(struct mwifiex_private *priv, int mwifiex_main_process(struct mwifiex_adapter *); -int mwifiex_uap_set_channel(struct mwifiex_private *priv, int channel); -int mwifiex_bss_set_channel(struct mwifiex_private *, - struct mwifiex_chan_freq_power *cfp); int mwifiex_get_bss_info(struct mwifiex_private *, struct mwifiex_bss_info *); int mwifiex_fill_new_bss_desc(struct mwifiex_private *priv, @@ -998,15 +997,17 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, int mwifiex_check_network_compatibility(struct mwifiex_private *priv, struct mwifiex_bssdescriptor *bss_desc); -struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy, - char *name, enum nl80211_iftype type, - u32 *flags, struct vif_params *params); -int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct net_device *dev); +struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy, + char *name, + enum nl80211_iftype type, + u32 *flags, + struct vif_params *params); +int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev); void mwifiex_set_sys_config_invalid_data(struct mwifiex_uap_bss_param *config); int mwifiex_set_mgmt_ies(struct mwifiex_private *priv, - struct cfg80211_ap_settings *params); + struct cfg80211_beacon_data *data); int mwifiex_del_mgmt_ies(struct mwifiex_private *priv); u8 *mwifiex_11d_code_2_region(u8 code); diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c index 74f045715723..04dc7ca4ac22 100644 --- a/drivers/net/wireless/mwifiex/scan.c +++ b/drivers/net/wireless/mwifiex/scan.c @@ -28,7 +28,10 @@ /* The maximum number of channels the firmware can scan per command */ #define MWIFIEX_MAX_CHANNELS_PER_SPECIFIC_SCAN 14 -#define 
MWIFIEX_CHANNELS_PER_SCAN_CMD 4 +#define MWIFIEX_DEF_CHANNELS_PER_SCAN_CMD 4 +#define MWIFIEX_LIMIT_1_CHANNEL_PER_SCAN_CMD 15 +#define MWIFIEX_LIMIT_2_CHANNELS_PER_SCAN_CMD 27 +#define MWIFIEX_LIMIT_3_CHANNELS_PER_SCAN_CMD 35 /* Memory needed to store a max sized Channel List TLV for a firmware scan */ #define CHAN_TLV_MAX_SIZE (sizeof(struct mwifiex_ie_types_header) \ @@ -471,7 +474,7 @@ mwifiex_is_network_compatible(struct mwifiex_private *priv, * This routine is used for any scan that is not provided with a * specific channel list to scan. */ -static void +static int mwifiex_scan_create_channel_list(struct mwifiex_private *priv, const struct mwifiex_user_scan_cfg *user_scan_in, @@ -528,6 +531,7 @@ mwifiex_scan_create_channel_list(struct mwifiex_private *priv, } } + return chan_idx; } /* @@ -727,6 +731,7 @@ mwifiex_config_scan(struct mwifiex_private *priv, u32 num_probes; u32 ssid_len; u32 chan_idx; + u32 chan_num; u32 scan_type; u16 scan_dur; u8 channel; @@ -850,7 +855,7 @@ mwifiex_config_scan(struct mwifiex_private *priv, if (*filtered_scan) *max_chan_per_scan = MWIFIEX_MAX_CHANNELS_PER_SPECIFIC_SCAN; else - *max_chan_per_scan = MWIFIEX_CHANNELS_PER_SCAN_CMD; + *max_chan_per_scan = MWIFIEX_DEF_CHANNELS_PER_SCAN_CMD; /* If the input config or adapter has the number of Probes set, add tlv */ @@ -962,13 +967,28 @@ mwifiex_config_scan(struct mwifiex_private *priv, dev_dbg(adapter->dev, "info: Scan: Scanning current channel only\n"); } - + chan_num = chan_idx; } else { dev_dbg(adapter->dev, "info: Scan: Creating full region channel list\n"); - mwifiex_scan_create_channel_list(priv, user_scan_in, - scan_chan_list, - *filtered_scan); + chan_num = mwifiex_scan_create_channel_list(priv, user_scan_in, + scan_chan_list, + *filtered_scan); + } + + /* + * In associated state we will reduce the number of channels scanned per + * scan command to avoid any traffic delay/loss. This number is decided + * based on total number of channels to be scanned due to constraints + * of command buffers. 
+ */ + if (priv->media_connected) { + if (chan_num < MWIFIEX_LIMIT_1_CHANNEL_PER_SCAN_CMD) + *max_chan_per_scan = 1; + else if (chan_num < MWIFIEX_LIMIT_2_CHANNELS_PER_SCAN_CMD) + *max_chan_per_scan = 2; + else if (chan_num < MWIFIEX_LIMIT_3_CHANNELS_PER_SCAN_CMD) + *max_chan_per_scan = 3; } } @@ -1014,14 +1034,12 @@ mwifiex_ret_802_11_scan_get_tlv_ptrs(struct mwifiex_adapter *adapter, case TLV_TYPE_TSFTIMESTAMP: dev_dbg(adapter->dev, "info: SCAN_RESP: TSF " "timestamp TLV, len = %d\n", tlv_len); - *tlv_data = (struct mwifiex_ie_types_data *) - current_tlv; + *tlv_data = current_tlv; break; case TLV_TYPE_CHANNELBANDLIST: dev_dbg(adapter->dev, "info: SCAN_RESP: channel" " band list TLV, len = %d\n", tlv_len); - *tlv_data = (struct mwifiex_ie_types_data *) - current_tlv; + *tlv_data = current_tlv; break; default: dev_err(adapter->dev, @@ -1226,15 +1244,15 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, bss_entry->beacon_buf); break; case WLAN_EID_BSS_COEX_2040: - bss_entry->bcn_bss_co_2040 = (u8 *) (current_ptr + - sizeof(struct ieee_types_header)); + bss_entry->bcn_bss_co_2040 = current_ptr + + sizeof(struct ieee_types_header); bss_entry->bss_co_2040_offset = (u16) (current_ptr + sizeof(struct ieee_types_header) - bss_entry->beacon_buf); break; case WLAN_EID_EXT_CAPABILITY: - bss_entry->bcn_ext_cap = (u8 *) (current_ptr + - sizeof(struct ieee_types_header)); + bss_entry->bcn_ext_cap = current_ptr + + sizeof(struct ieee_types_header); bss_entry->ext_cap_offset = (u16) (current_ptr + sizeof(struct ieee_types_header) - bss_entry->beacon_buf); @@ -1276,8 +1294,8 @@ mwifiex_radio_type_to_band(u8 radio_type) * order to send the appropriate scan commands to firmware to populate or * update the internal driver scan table. */ -static int mwifiex_scan_networks(struct mwifiex_private *priv, - const struct mwifiex_user_scan_cfg *user_scan_in) +int mwifiex_scan_networks(struct mwifiex_private *priv, + const struct mwifiex_user_scan_cfg *user_scan_in) { int ret = 0; struct mwifiex_adapter *adapter = priv->adapter; @@ -1342,6 +1360,7 @@ static int mwifiex_scan_networks(struct mwifiex_private *priv, adapter->cmd_queued = cmd_node; mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true); + queue_work(adapter->workqueue, &adapter->main_work); } else { spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags); @@ -1358,26 +1377,6 @@ static int mwifiex_scan_networks(struct mwifiex_private *priv, } /* - * Sends IOCTL request to start a scan with user configurations. - * - * This function allocates the IOCTL request buffer, fills it - * with requisite parameters and calls the IOCTL handler. - * - * Upon completion, it also generates a wireless event to notify - * applications. - */ -int mwifiex_set_user_scan_ioctl(struct mwifiex_private *priv, - struct mwifiex_user_scan_cfg *scan_req) -{ - int status; - - status = mwifiex_scan_networks(priv, scan_req); - queue_work(priv->adapter->workqueue, &priv->adapter->main_work); - - return status; -} - -/* * This function prepares a scan command to be sent to the firmware. 
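The associated-state throttling above caps how many channels go into each firmware scan command so data traffic is not starved: one channel per command for small channel lists, stepping up to two and three as the total grows, and the default of four otherwise. A minimal sketch of that selection follows; the thresholds mirror the new scan.c #defines, the helper name is hypothetical, and filtered specific scans (which use a different, larger cap) are ignored here.

#define DEF_CHANNELS_PER_SCAN_CMD      4
#define LIMIT_1_CHANNEL_PER_SCAN_CMD  15
#define LIMIT_2_CHANNELS_PER_SCAN_CMD 27
#define LIMIT_3_CHANNELS_PER_SCAN_CMD 35

/*
 * When associated, scan fewer channels per command so TX traffic keeps
 * flowing between scan commands; when idle, use the default of 4.
 */
static unsigned int max_chan_per_scan(unsigned int chan_num, int connected)
{
	if (!connected)
		return DEF_CHANNELS_PER_SCAN_CMD;
	if (chan_num < LIMIT_1_CHANNEL_PER_SCAN_CMD)
		return 1;
	if (chan_num < LIMIT_2_CHANNELS_PER_SCAN_CMD)
		return 2;
	if (chan_num < LIMIT_3_CHANNELS_PER_SCAN_CMD)
		return 3;
	return DEF_CHANNELS_PER_SCAN_CMD;
}

The trade-off is latency versus throughput: smaller batches mean more scan commands and a longer total scan, but the firmware returns to serving data traffic between each batch.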
* * This uses the scan command configuration sent to the command processing @@ -1683,8 +1682,7 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv, goto done; } if (element_id == WLAN_EID_DS_PARAMS) { - channel = *(u8 *) (current_ptr + - sizeof(struct ieee_types_header)); + channel = *(current_ptr + sizeof(struct ieee_types_header)); break; } @@ -1772,14 +1770,23 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv, priv->user_scan_cfg = NULL; } } else { - /* Get scan command from scan_pending_q and put to - cmd_pending_q */ - cmd_node = list_first_entry(&adapter->scan_pending_q, - struct cmd_ctrl_node, list); - list_del(&cmd_node->list); - spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags); - - mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true); + if (!mwifiex_wmm_lists_empty(adapter)) { + spin_unlock_irqrestore(&adapter->scan_pending_q_lock, + flags); + adapter->scan_delay_cnt = 1; + mod_timer(&priv->scan_delay_timer, jiffies + + msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC)); + } else { + /* Get scan command from scan_pending_q and put to + cmd_pending_q */ + cmd_node = list_first_entry(&adapter->scan_pending_q, + struct cmd_ctrl_node, list); + list_del(&cmd_node->list); + spin_unlock_irqrestore(&adapter->scan_pending_q_lock, + flags); + mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, + true); + } } done: @@ -2010,12 +2017,11 @@ mwifiex_save_curr_bcn(struct mwifiex_private *priv) if (curr_bss->bcn_bss_co_2040) curr_bss->bcn_bss_co_2040 = - (u8 *) (curr_bss->beacon_buf + - curr_bss->bss_co_2040_offset); + (curr_bss->beacon_buf + curr_bss->bss_co_2040_offset); if (curr_bss->bcn_ext_cap) - curr_bss->bcn_ext_cap = (u8 *) (curr_bss->beacon_buf + - curr_bss->ext_cap_offset); + curr_bss->bcn_ext_cap = curr_bss->beacon_buf + + curr_bss->ext_cap_offset; } /* diff --git a/drivers/net/wireless/mwifiex/sta_cmd.c b/drivers/net/wireless/mwifiex/sta_cmd.c index 40e025da6bc2..df3a33c530cf 100644 --- a/drivers/net/wireless/mwifiex/sta_cmd.c +++ b/drivers/net/wireless/mwifiex/sta_cmd.c @@ -260,6 +260,56 @@ static int mwifiex_cmd_tx_power_cfg(struct host_cmd_ds_command *cmd, } /* + * This function prepares command to get RF Tx power. + */ +static int mwifiex_cmd_rf_tx_power(struct mwifiex_private *priv, + struct host_cmd_ds_command *cmd, + u16 cmd_action, void *data_buf) +{ + struct host_cmd_ds_rf_tx_pwr *txp = &cmd->params.txp; + + cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_rf_tx_pwr) + + S_DS_GEN); + cmd->command = cpu_to_le16(HostCmd_CMD_RF_TX_PWR); + txp->action = cpu_to_le16(cmd_action); + + return 0; +} + +/* + * This function prepares command to set rf antenna. 
+ */ +static int mwifiex_cmd_rf_antenna(struct mwifiex_private *priv, + struct host_cmd_ds_command *cmd, + u16 cmd_action, + struct mwifiex_ds_ant_cfg *ant_cfg) +{ + struct host_cmd_ds_rf_ant_mimo *ant_mimo = &cmd->params.ant_mimo; + struct host_cmd_ds_rf_ant_siso *ant_siso = &cmd->params.ant_siso; + + cmd->command = cpu_to_le16(HostCmd_CMD_RF_ANTENNA); + + if (cmd_action != HostCmd_ACT_GEN_SET) + return 0; + + if (priv->adapter->hw_dev_mcs_support == HT_STREAM_2X2) { + cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_rf_ant_mimo) + + S_DS_GEN); + ant_mimo->action_tx = cpu_to_le16(HostCmd_ACT_SET_TX); + ant_mimo->tx_ant_mode = cpu_to_le16((u16)ant_cfg->tx_ant); + ant_mimo->action_rx = cpu_to_le16(HostCmd_ACT_SET_RX); + ant_mimo->rx_ant_mode = cpu_to_le16((u16)ant_cfg->rx_ant); + } else { + cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_rf_ant_siso) + + S_DS_GEN); + ant_siso->action = cpu_to_le16(HostCmd_ACT_SET_BOTH); + ant_siso->ant_mode = cpu_to_le16((u16)ant_cfg->tx_ant); + } + + return 0; +} + +/* * This function prepares command to set Host Sleep configuration. * * Preparation includes - @@ -695,40 +745,6 @@ static int mwifiex_cmd_802_11d_domain_info(struct mwifiex_private *priv, } /* - * This function prepares command to set/get RF channel. - * - * Preparation includes - - * - Setting command ID, action and proper size - * - Setting RF type and current RF channel (for SET only) - * - Ensuring correct endian-ness - */ -static int mwifiex_cmd_802_11_rf_channel(struct mwifiex_private *priv, - struct host_cmd_ds_command *cmd, - u16 cmd_action, u16 *channel) -{ - struct host_cmd_ds_802_11_rf_channel *rf_chan = - &cmd->params.rf_channel; - uint16_t rf_type = le16_to_cpu(rf_chan->rf_type); - - cmd->command = cpu_to_le16(HostCmd_CMD_802_11_RF_CHANNEL); - cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_802_11_rf_channel) - + S_DS_GEN); - - if (cmd_action == HostCmd_ACT_GEN_SET) { - if ((priv->adapter->adhoc_start_band & BAND_A) || - (priv->adapter->adhoc_start_band & BAND_AN)) - rf_chan->rf_type = - cpu_to_le16(HostCmd_SCAN_RADIO_TYPE_A); - - rf_type = le16_to_cpu(rf_chan->rf_type); - SET_SECONDARYCHAN(rf_type, priv->adapter->sec_chan_offset); - rf_chan->current_channel = cpu_to_le16(*channel); - } - rf_chan->action = cpu_to_le16(cmd_action); - return 0; -} - -/* * This function prepares command to set/get IBSS coalescing status. 
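mwifiex_cmd_rf_antenna() above fills one of two command layouts depending on hardware capability: 2x2 MIMO chips get separate TX and RX antenna-mode fields, single-stream chips get one combined mode. The shape of that choice, modelled with plain structs and placeholder action codes (none of these names are the driver's):

#include <stdint.h>

/* Placeholder action codes -- stand-ins for HostCmd_ACT_SET_TX/RX/BOTH. */
#define ACT_SET_TX   1
#define ACT_SET_RX   2
#define ACT_SET_BOTH 3

struct ant_mimo_cmd {	/* 2x2 devices: independent TX and RX antenna modes */
	uint16_t tx_action, tx_ant_mode;
	uint16_t rx_action, rx_ant_mode;
};

struct ant_siso_cmd {	/* 1x1 devices: one mode covers both paths */
	uint16_t action, ant_mode;
};

/* Fill whichever layout matches the hardware, as mwifiex_cmd_rf_antenna() does. */
static void build_ant_cmd(int hw_is_2x2, uint16_t tx_ant, uint16_t rx_ant,
			  struct ant_mimo_cmd *mimo, struct ant_siso_cmd *siso)
{
	if (hw_is_2x2) {
		mimo->tx_action = ACT_SET_TX;
		mimo->tx_ant_mode = tx_ant;
		mimo->rx_action = ACT_SET_RX;
		mimo->rx_ant_mode = rx_ant;
	} else {
		siso->action = ACT_SET_BOTH;
		siso->ant_mode = tx_ant;	/* single setting for TX and RX */
	}
}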
* * Preparation includes - @@ -793,8 +809,7 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, struct host_cmd_ds_mac_reg_access *mac_reg; cmd->size = cpu_to_le16(sizeof(*mac_reg) + S_DS_GEN); - mac_reg = (struct host_cmd_ds_mac_reg_access *) &cmd-> - params.mac_reg; + mac_reg = &cmd->params.mac_reg; mac_reg->action = cpu_to_le16(cmd_action); mac_reg->offset = cpu_to_le16((u16) le32_to_cpu(reg_rw->offset)); @@ -806,8 +821,7 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, struct host_cmd_ds_bbp_reg_access *bbp_reg; cmd->size = cpu_to_le16(sizeof(*bbp_reg) + S_DS_GEN); - bbp_reg = (struct host_cmd_ds_bbp_reg_access *) - &cmd->params.bbp_reg; + bbp_reg = &cmd->params.bbp_reg; bbp_reg->action = cpu_to_le16(cmd_action); bbp_reg->offset = cpu_to_le16((u16) le32_to_cpu(reg_rw->offset)); @@ -819,8 +833,7 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, struct host_cmd_ds_rf_reg_access *rf_reg; cmd->size = cpu_to_le16(sizeof(*rf_reg) + S_DS_GEN); - rf_reg = (struct host_cmd_ds_rf_reg_access *) - &cmd->params.rf_reg; + rf_reg = &cmd->params.rf_reg; rf_reg->action = cpu_to_le16(cmd_action); rf_reg->offset = cpu_to_le16((u16) le32_to_cpu(reg_rw->offset)); rf_reg->value = (u8) le32_to_cpu(reg_rw->value); @@ -831,8 +844,7 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, struct host_cmd_ds_pmic_reg_access *pmic_reg; cmd->size = cpu_to_le16(sizeof(*pmic_reg) + S_DS_GEN); - pmic_reg = (struct host_cmd_ds_pmic_reg_access *) &cmd-> - params.pmic_reg; + pmic_reg = &cmd->params.pmic_reg; pmic_reg->action = cpu_to_le16(cmd_action); pmic_reg->offset = cpu_to_le16((u16) le32_to_cpu(reg_rw->offset)); @@ -844,8 +856,7 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, struct host_cmd_ds_rf_reg_access *cau_reg; cmd->size = cpu_to_le16(sizeof(*cau_reg) + S_DS_GEN); - cau_reg = (struct host_cmd_ds_rf_reg_access *) - &cmd->params.rf_reg; + cau_reg = &cmd->params.rf_reg; cau_reg->action = cpu_to_le16(cmd_action); cau_reg->offset = cpu_to_le16((u16) le32_to_cpu(reg_rw->offset)); @@ -856,7 +867,6 @@ static int mwifiex_cmd_reg_access(struct host_cmd_ds_command *cmd, { struct mwifiex_ds_read_eeprom *rd_eeprom = data_buf; struct host_cmd_ds_802_11_eeprom_access *cmd_eeprom = - (struct host_cmd_ds_802_11_eeprom_access *) &cmd->params.eeprom; cmd->size = cpu_to_le16(sizeof(*cmd_eeprom) + S_DS_GEN); @@ -1055,6 +1065,14 @@ int mwifiex_sta_prepare_cmd(struct mwifiex_private *priv, uint16_t cmd_no, ret = mwifiex_cmd_tx_power_cfg(cmd_ptr, cmd_action, data_buf); break; + case HostCmd_CMD_RF_TX_PWR: + ret = mwifiex_cmd_rf_tx_power(priv, cmd_ptr, cmd_action, + data_buf); + break; + case HostCmd_CMD_RF_ANTENNA: + ret = mwifiex_cmd_rf_antenna(priv, cmd_ptr, cmd_action, + data_buf); + break; case HostCmd_CMD_802_11_PS_MODE_ENH: ret = mwifiex_cmd_enh_power_mode(priv, cmd_ptr, cmd_action, (uint16_t)cmd_oid, data_buf); @@ -1117,10 +1135,6 @@ int mwifiex_sta_prepare_cmd(struct mwifiex_private *priv, uint16_t cmd_no, S_DS_GEN); ret = 0; break; - case HostCmd_CMD_802_11_RF_CHANNEL: - ret = mwifiex_cmd_802_11_rf_channel(priv, cmd_ptr, cmd_action, - data_buf); - break; case HostCmd_CMD_FUNC_INIT: if (priv->adapter->hw_status == MWIFIEX_HW_STATUS_RESET) priv->adapter->hw_status = MWIFIEX_HW_STATUS_READY; @@ -1283,7 +1297,7 @@ int mwifiex_sta_init_cmd(struct mwifiex_private *priv, u8 first_sta) priv->data_rate = 0; /* get tx power */ - ret = mwifiex_send_cmd_async(priv, HostCmd_CMD_TXPWR_CFG, + ret = mwifiex_send_cmd_async(priv, 
HostCmd_CMD_RF_TX_PWR, HostCmd_ACT_GEN_GET, 0, NULL); if (ret) return -1; diff --git a/drivers/net/wireless/mwifiex/sta_cmdresp.c b/drivers/net/wireless/mwifiex/sta_cmdresp.c index a79ed9bd9695..0b09004ebb25 100644 --- a/drivers/net/wireless/mwifiex/sta_cmdresp.c +++ b/drivers/net/wireless/mwifiex/sta_cmdresp.c @@ -227,7 +227,7 @@ static int mwifiex_ret_get_log(struct mwifiex_private *priv, struct mwifiex_ds_get_stats *stats) { struct host_cmd_ds_802_11_get_log *get_log = - (struct host_cmd_ds_802_11_get_log *) &resp->params.get_log; + &resp->params.get_log; if (stats) { stats->mcast_tx_frame = le32_to_cpu(get_log->mcast_tx_frame); @@ -267,12 +267,10 @@ static int mwifiex_ret_get_log(struct mwifiex_private *priv, * * Based on the new rate bitmaps, the function re-evaluates if * auto data rate has been activated. If not, it sends another - * query to the firmware to get the current Tx data rate and updates - * the driver value. + * query to the firmware to get the current Tx data rate. */ static int mwifiex_ret_tx_rate_cfg(struct mwifiex_private *priv, - struct host_cmd_ds_command *resp, - struct mwifiex_rate_cfg *ds_rate) + struct host_cmd_ds_command *resp) { struct host_cmd_ds_tx_rate_cfg *rate_cfg = &resp->params.tx_rate_cfg; struct mwifiex_rate_scope *rate_scope; @@ -280,9 +278,8 @@ static int mwifiex_ret_tx_rate_cfg(struct mwifiex_private *priv, u16 tlv, tlv_buf_len; u8 *tlv_buf; u32 i; - int ret = 0; - tlv_buf = (u8 *) ((u8 *) rate_cfg) + + tlv_buf = ((u8 *)rate_cfg) + sizeof(struct host_cmd_ds_tx_rate_cfg); tlv_buf_len = *(u16 *) (tlv_buf + sizeof(u16)); @@ -318,33 +315,11 @@ static int mwifiex_ret_tx_rate_cfg(struct mwifiex_private *priv, if (priv->is_data_rate_auto) priv->data_rate = 0; else - ret = mwifiex_send_cmd_async(priv, - HostCmd_CMD_802_11_TX_RATE_QUERY, - HostCmd_ACT_GEN_GET, 0, NULL); - - if (!ds_rate) - return ret; - - if (le16_to_cpu(rate_cfg->action) == HostCmd_ACT_GEN_GET) { - if (priv->is_data_rate_auto) { - ds_rate->is_rate_auto = 1; - return ret; - } - ds_rate->rate = mwifiex_get_rate_index(priv->bitmap_rates, - sizeof(priv->bitmap_rates)); - - if (ds_rate->rate >= MWIFIEX_RATE_BITMAP_OFDM0 && - ds_rate->rate <= MWIFIEX_RATE_BITMAP_OFDM7) - ds_rate->rate -= (MWIFIEX_RATE_BITMAP_OFDM0 - - MWIFIEX_RATE_INDEX_OFDM0); - - if (ds_rate->rate >= MWIFIEX_RATE_BITMAP_MCS0 && - ds_rate->rate <= MWIFIEX_RATE_BITMAP_MCS127) - ds_rate->rate -= (MWIFIEX_RATE_BITMAP_MCS0 - - MWIFIEX_RATE_INDEX_MCS0); - } + return mwifiex_send_cmd_async(priv, + HostCmd_CMD_802_11_TX_RATE_QUERY, + HostCmd_ACT_GEN_GET, 0, NULL); - return ret; + return 0; } /* @@ -451,6 +426,57 @@ static int mwifiex_ret_tx_power_cfg(struct mwifiex_private *priv, } /* + * This function handles the command response of get RF Tx power. 
+ */ +static int mwifiex_ret_rf_tx_power(struct mwifiex_private *priv, + struct host_cmd_ds_command *resp) +{ + struct host_cmd_ds_rf_tx_pwr *txp = &resp->params.txp; + u16 action = le16_to_cpu(txp->action); + + priv->tx_power_level = le16_to_cpu(txp->cur_level); + + if (action == HostCmd_ACT_GEN_GET) { + priv->max_tx_power_level = txp->max_power; + priv->min_tx_power_level = txp->min_power; + } + + dev_dbg(priv->adapter->dev, + "Current TxPower Level=%d, Max Power=%d, Min Power=%d\n", + priv->tx_power_level, priv->max_tx_power_level, + priv->min_tx_power_level); + + return 0; +} + +/* + * This function handles the command response of set rf antenna + */ +static int mwifiex_ret_rf_antenna(struct mwifiex_private *priv, + struct host_cmd_ds_command *resp) +{ + struct host_cmd_ds_rf_ant_mimo *ant_mimo = &resp->params.ant_mimo; + struct host_cmd_ds_rf_ant_siso *ant_siso = &resp->params.ant_siso; + struct mwifiex_adapter *adapter = priv->adapter; + + if (adapter->hw_dev_mcs_support == HT_STREAM_2X2) + dev_dbg(adapter->dev, + "RF_ANT_RESP: Tx action = 0x%x, Tx Mode = 0x%04x" + " Rx action = 0x%x, Rx Mode = 0x%04x\n", + le16_to_cpu(ant_mimo->action_tx), + le16_to_cpu(ant_mimo->tx_ant_mode), + le16_to_cpu(ant_mimo->action_rx), + le16_to_cpu(ant_mimo->rx_ant_mode)); + else + dev_dbg(adapter->dev, + "RF_ANT_RESP: action = 0x%x, Mode = 0x%04x\n", + le16_to_cpu(ant_siso->action), + le16_to_cpu(ant_siso->ant_mode)); + + return 0; +} + +/* * This function handles the command response of set/get MAC address. * * Handling includes saving the MAC address in driver. @@ -605,34 +631,6 @@ static int mwifiex_ret_802_11d_domain_info(struct mwifiex_private *priv, } /* - * This function handles the command response of get RF channel. - * - * Handling includes changing the header fields into CPU format - * and saving the new channel in driver. - */ -static int mwifiex_ret_802_11_rf_channel(struct mwifiex_private *priv, - struct host_cmd_ds_command *resp, - u16 *data_buf) -{ - struct host_cmd_ds_802_11_rf_channel *rf_channel = - &resp->params.rf_channel; - u16 new_channel = le16_to_cpu(rf_channel->current_channel); - - if (priv->curr_bss_params.bss_descriptor.channel != new_channel) { - dev_dbg(priv->adapter->dev, "cmd: Channel Switch: %d to %d\n", - priv->curr_bss_params.bss_descriptor.channel, - new_channel); - /* Update the channel again */ - priv->curr_bss_params.bss_descriptor.channel = new_channel; - } - - if (data_buf) - *data_buf = new_channel; - - return 0; -} - -/* * This function handles the command response of get extended version. 
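The mwifiex_ret_rf_tx_power() handler above always records the current power level reported by firmware and, for a GET response, also caches the minimum and maximum levels. A small model of that bookkeeping; the struct layout and field names are illustrative, not the firmware's wire format.

#include <stdint.h>

#define ACT_GET 0	/* placeholder for HostCmd_ACT_GEN_GET */

struct txpwr_resp {	/* simplified firmware response */
	uint16_t action;
	uint16_t cur_level;
	uint8_t  max_power;
	uint8_t  min_power;
};

struct txpwr_state {
	uint16_t level;
	uint8_t  max, min;
};

/* Record the current level; on a GET response also refresh the min/max bounds. */
static void handle_txpwr_resp(struct txpwr_state *st, const struct txpwr_resp *r)
{
	st->level = r->cur_level;
	if (r->action == ACT_GET) {
		st->max = r->max_power;
		st->min = r->min_power;
	}
}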
* * Handling includes forming the extended version string and sending it @@ -679,39 +677,33 @@ static int mwifiex_ret_reg_access(u16 type, struct host_cmd_ds_command *resp, eeprom = data_buf; switch (type) { case HostCmd_CMD_MAC_REG_ACCESS: - r.mac = (struct host_cmd_ds_mac_reg_access *) - &resp->params.mac_reg; + r.mac = &resp->params.mac_reg; reg_rw->offset = cpu_to_le32((u32) le16_to_cpu(r.mac->offset)); reg_rw->value = r.mac->value; break; case HostCmd_CMD_BBP_REG_ACCESS: - r.bbp = (struct host_cmd_ds_bbp_reg_access *) - &resp->params.bbp_reg; + r.bbp = &resp->params.bbp_reg; reg_rw->offset = cpu_to_le32((u32) le16_to_cpu(r.bbp->offset)); reg_rw->value = cpu_to_le32((u32) r.bbp->value); break; case HostCmd_CMD_RF_REG_ACCESS: - r.rf = (struct host_cmd_ds_rf_reg_access *) - &resp->params.rf_reg; + r.rf = &resp->params.rf_reg; reg_rw->offset = cpu_to_le32((u32) le16_to_cpu(r.rf->offset)); reg_rw->value = cpu_to_le32((u32) r.bbp->value); break; case HostCmd_CMD_PMIC_REG_ACCESS: - r.pmic = (struct host_cmd_ds_pmic_reg_access *) - &resp->params.pmic_reg; + r.pmic = &resp->params.pmic_reg; reg_rw->offset = cpu_to_le32((u32) le16_to_cpu(r.pmic->offset)); reg_rw->value = cpu_to_le32((u32) r.pmic->value); break; case HostCmd_CMD_CAU_REG_ACCESS: - r.rf = (struct host_cmd_ds_rf_reg_access *) - &resp->params.rf_reg; + r.rf = &resp->params.rf_reg; reg_rw->offset = cpu_to_le32((u32) le16_to_cpu(r.rf->offset)); reg_rw->value = cpu_to_le32((u32) r.rf->value); break; case HostCmd_CMD_802_11_EEPROM_ACCESS: - r.eeprom = (struct host_cmd_ds_802_11_eeprom_access *) - &resp->params.eeprom; + r.eeprom = &resp->params.eeprom; pr_debug("info: EEPROM read len=%x\n", r.eeprom->byte_count); if (le16_to_cpu(eeprom->byte_count) < le16_to_cpu(r.eeprom->byte_count)) { @@ -787,7 +779,7 @@ static int mwifiex_ret_subsc_evt(struct mwifiex_private *priv, struct mwifiex_ds_misc_subsc_evt *sub_event) { struct host_cmd_ds_802_11_subsc_evt *cmd_sub_event = - (struct host_cmd_ds_802_11_subsc_evt *)&resp->params.subsc_evt; + &resp->params.subsc_evt; /* For every subscribe event command (Get/Set/Clear), FW reports the * current set of subscribed events*/ @@ -833,7 +825,7 @@ int mwifiex_process_sta_cmdresp(struct mwifiex_private *priv, u16 cmdresp_no, ret = mwifiex_ret_mac_multicast_adr(priv, resp); break; case HostCmd_CMD_TX_RATE_CFG: - ret = mwifiex_ret_tx_rate_cfg(priv, resp, data_buf); + ret = mwifiex_ret_tx_rate_cfg(priv, resp); break; case HostCmd_CMD_802_11_SCAN: ret = mwifiex_ret_802_11_scan(priv, resp); @@ -847,6 +839,12 @@ int mwifiex_process_sta_cmdresp(struct mwifiex_private *priv, u16 cmdresp_no, case HostCmd_CMD_TXPWR_CFG: ret = mwifiex_ret_tx_power_cfg(priv, resp); break; + case HostCmd_CMD_RF_TX_PWR: + ret = mwifiex_ret_rf_tx_power(priv, resp); + break; + case HostCmd_CMD_RF_ANTENNA: + ret = mwifiex_ret_rf_antenna(priv, resp); + break; case HostCmd_CMD_802_11_PS_MODE_ENH: ret = mwifiex_ret_enh_power_mode(priv, resp, data_buf); break; @@ -878,9 +876,6 @@ int mwifiex_process_sta_cmdresp(struct mwifiex_private *priv, u16 cmdresp_no, case HostCmd_CMD_802_11_TX_RATE_QUERY: ret = mwifiex_ret_802_11_tx_rate_query(priv, resp); break; - case HostCmd_CMD_802_11_RF_CHANNEL: - ret = mwifiex_ret_802_11_rf_channel(priv, resp, data_buf); - break; case HostCmd_CMD_VERSION_EXT: ret = mwifiex_ret_ver_ext(priv, resp, data_buf); break; diff --git a/drivers/net/wireless/mwifiex/sta_event.c b/drivers/net/wireless/mwifiex/sta_event.c index 11e731f3581c..b8614a825460 100644 --- a/drivers/net/wireless/mwifiex/sta_event.c +++ 
b/drivers/net/wireless/mwifiex/sta_event.c @@ -422,7 +422,7 @@ int mwifiex_process_sta_event(struct mwifiex_private *priv) if (len != -1) { sinfo.filled = STATION_INFO_ASSOC_REQ_IES; - sinfo.assoc_req_ies = (u8 *)&event->data[len]; + sinfo.assoc_req_ies = &event->data[len]; len = (u8 *)sinfo.assoc_req_ies - (u8 *)&event->frame_control; sinfo.assoc_req_ies_len = diff --git a/drivers/net/wireless/mwifiex/sta_ioctl.c b/drivers/net/wireless/mwifiex/sta_ioctl.c index 106c449477b2..fb2136089a22 100644 --- a/drivers/net/wireless/mwifiex/sta_ioctl.c +++ b/drivers/net/wireless/mwifiex/sta_ioctl.c @@ -66,9 +66,6 @@ int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter) dev_dbg(adapter->dev, "cmd pending\n"); atomic_inc(&adapter->cmd_pending); - /* Status pending, wake up main process */ - queue_work(adapter->workqueue, &adapter->main_work); - /* Wait for completion */ wait_event_interruptible(adapter->cmd_wait_q.wait, *(cmd_queued->condition)); @@ -500,297 +497,24 @@ int mwifiex_disable_auto_ds(struct mwifiex_private *priv) EXPORT_SYMBOL_GPL(mwifiex_disable_auto_ds); /* - * IOCTL request handler to set/get active channel. - * - * This function performs validity checking on channel/frequency - * compatibility and returns failure if not valid. - */ -int mwifiex_bss_set_channel(struct mwifiex_private *priv, - struct mwifiex_chan_freq_power *chan) -{ - struct mwifiex_adapter *adapter = priv->adapter; - struct mwifiex_chan_freq_power *cfp = NULL; - - if (!chan) - return -1; - - if (!chan->channel && !chan->freq) - return -1; - if (adapter->adhoc_start_band & BAND_AN) - adapter->adhoc_start_band = BAND_G | BAND_B | BAND_GN; - else if (adapter->adhoc_start_band & BAND_A) - adapter->adhoc_start_band = BAND_G | BAND_B; - if (chan->channel) { - if (chan->channel <= MAX_CHANNEL_BAND_BG) - cfp = mwifiex_get_cfp(priv, 0, (u16) chan->channel, 0); - if (!cfp) { - cfp = mwifiex_get_cfp(priv, BAND_A, - (u16) chan->channel, 0); - if (cfp) { - if (adapter->adhoc_11n_enabled) - adapter->adhoc_start_band = BAND_A - | BAND_AN; - else - adapter->adhoc_start_band = BAND_A; - } - } - } else { - if (chan->freq <= MAX_FREQUENCY_BAND_BG) - cfp = mwifiex_get_cfp(priv, 0, 0, chan->freq); - if (!cfp) { - cfp = mwifiex_get_cfp(priv, BAND_A, 0, chan->freq); - if (cfp) { - if (adapter->adhoc_11n_enabled) - adapter->adhoc_start_band = BAND_A - | BAND_AN; - else - adapter->adhoc_start_band = BAND_A; - } - } - } - if (!cfp || !cfp->channel) { - dev_err(adapter->dev, "invalid channel/freq\n"); - return -1; - } - priv->adhoc_channel = (u8) cfp->channel; - chan->channel = cfp->channel; - chan->freq = cfp->freq; - - return 0; -} - -/* - * IOCTL request handler to set/get Ad-Hoc channel. - * - * This function prepares the correct firmware command and - * issues it to set or get the ad-hoc channel. - */ -static int mwifiex_bss_ioctl_ibss_channel(struct mwifiex_private *priv, - u16 action, u16 *channel) -{ - if (action == HostCmd_ACT_GEN_GET) { - if (!priv->media_connected) { - *channel = priv->adhoc_channel; - return 0; - } - } else { - priv->adhoc_channel = (u8) *channel; - } - - return mwifiex_send_cmd_sync(priv, HostCmd_CMD_802_11_RF_CHANNEL, - action, 0, channel); -} - -/* - * IOCTL request handler to change Ad-Hoc channel. - * - * This function allocates the IOCTL request buffer, fills it - * with requisite parameters and calls the IOCTL handler. 
- * - * The function follows the following steps to perform the change - - * - Get current IBSS information - * - Get current channel - * - If no change is required, return - * - If not connected, change channel and return - * - If connected, - * - Disconnect - * - Change channel - * - Perform specific SSID scan with same SSID - * - Start/Join the IBSS - */ -int -mwifiex_drv_change_adhoc_chan(struct mwifiex_private *priv, u16 channel) -{ - int ret; - struct mwifiex_bss_info bss_info; - struct mwifiex_ssid_bssid ssid_bssid; - u16 curr_chan = 0; - struct cfg80211_bss *bss = NULL; - struct ieee80211_channel *chan; - enum ieee80211_band band; - - memset(&bss_info, 0, sizeof(bss_info)); - - /* Get BSS information */ - if (mwifiex_get_bss_info(priv, &bss_info)) - return -1; - - /* Get current channel */ - ret = mwifiex_bss_ioctl_ibss_channel(priv, HostCmd_ACT_GEN_GET, - &curr_chan); - - if (curr_chan == channel) { - ret = 0; - goto done; - } - dev_dbg(priv->adapter->dev, "cmd: updating channel from %d to %d\n", - curr_chan, channel); - - if (!bss_info.media_connected) { - ret = 0; - goto done; - } - - /* Do disonnect */ - memset(&ssid_bssid, 0, ETH_ALEN); - ret = mwifiex_deauthenticate(priv, ssid_bssid.bssid); - - ret = mwifiex_bss_ioctl_ibss_channel(priv, HostCmd_ACT_GEN_SET, - &channel); - - /* Do specific SSID scanning */ - if (mwifiex_request_scan(priv, &bss_info.ssid)) { - ret = -1; - goto done; - } - - band = mwifiex_band_to_radio_type(priv->curr_bss_params.band); - chan = __ieee80211_get_channel(priv->wdev->wiphy, - ieee80211_channel_to_frequency(channel, - band)); - - /* Find the BSS we want using available scan results */ - bss = cfg80211_get_bss(priv->wdev->wiphy, chan, bss_info.bssid, - bss_info.ssid.ssid, bss_info.ssid.ssid_len, - WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS); - if (!bss) - wiphy_warn(priv->wdev->wiphy, "assoc: bss %pM not in scan results\n", - bss_info.bssid); - - ret = mwifiex_bss_start(priv, bss, &bss_info.ssid); -done: - return ret; -} - -/* - * IOCTL request handler to get rate. - * - * This function prepares the correct firmware command and - * issues it to get the current rate if it is connected, - * otherwise, the function returns the lowest supported rate - * for the band. - */ -static int mwifiex_rate_ioctl_get_rate_value(struct mwifiex_private *priv, - struct mwifiex_rate_cfg *rate_cfg) -{ - rate_cfg->is_rate_auto = priv->is_data_rate_auto; - return mwifiex_send_cmd_sync(priv, HostCmd_CMD_802_11_TX_RATE_QUERY, - HostCmd_ACT_GEN_GET, 0, NULL); -} - -/* - * IOCTL request handler to set rate. - * - * This function prepares the correct firmware command and - * issues it to set the current rate. - * - * The function also performs validation checking on the supplied value. 
- */ -static int mwifiex_rate_ioctl_set_rate_value(struct mwifiex_private *priv, - struct mwifiex_rate_cfg *rate_cfg) -{ - u8 rates[MWIFIEX_SUPPORTED_RATES]; - u8 *rate; - int rate_index, ret; - u16 bitmap_rates[MAX_BITMAP_RATES_SIZE]; - u32 i; - struct mwifiex_adapter *adapter = priv->adapter; - - if (rate_cfg->is_rate_auto) { - memset(bitmap_rates, 0, sizeof(bitmap_rates)); - /* Support all HR/DSSS rates */ - bitmap_rates[0] = 0x000F; - /* Support all OFDM rates */ - bitmap_rates[1] = 0x00FF; - /* Support all HT-MCSs rate */ - for (i = 0; i < ARRAY_SIZE(priv->bitmap_rates) - 3; i++) - bitmap_rates[i + 2] = 0xFFFF; - bitmap_rates[9] = 0x3FFF; - } else { - memset(rates, 0, sizeof(rates)); - mwifiex_get_active_data_rates(priv, rates); - rate = rates; - for (i = 0; (rate[i] && i < MWIFIEX_SUPPORTED_RATES); i++) { - dev_dbg(adapter->dev, "info: rate=%#x wanted=%#x\n", - rate[i], rate_cfg->rate); - if ((rate[i] & 0x7f) == (rate_cfg->rate & 0x7f)) - break; - } - if ((i == MWIFIEX_SUPPORTED_RATES) || !rate[i]) { - dev_err(adapter->dev, "fixed data rate %#x is out " - "of range\n", rate_cfg->rate); - return -1; - } - memset(bitmap_rates, 0, sizeof(bitmap_rates)); - - rate_index = mwifiex_data_rate_to_index(rate_cfg->rate); - - /* Only allow b/g rates to be set */ - if (rate_index >= MWIFIEX_RATE_INDEX_HRDSSS0 && - rate_index <= MWIFIEX_RATE_INDEX_HRDSSS3) { - bitmap_rates[0] = 1 << rate_index; - } else { - rate_index -= 1; /* There is a 0x00 in the table */ - if (rate_index >= MWIFIEX_RATE_INDEX_OFDM0 && - rate_index <= MWIFIEX_RATE_INDEX_OFDM7) - bitmap_rates[1] = 1 << (rate_index - - MWIFIEX_RATE_INDEX_OFDM0); - } - } - - ret = mwifiex_send_cmd_sync(priv, HostCmd_CMD_TX_RATE_CFG, - HostCmd_ACT_GEN_SET, 0, bitmap_rates); - - return ret; -} - -/* - * IOCTL request handler to set/get rate. - * - * This function can be used to set/get either the rate value or the - * rate index. - */ -static int mwifiex_rate_ioctl_cfg(struct mwifiex_private *priv, - struct mwifiex_rate_cfg *rate_cfg) -{ - int status; - - if (!rate_cfg) - return -1; - - if (rate_cfg->action == HostCmd_ACT_GEN_GET) - status = mwifiex_rate_ioctl_get_rate_value(priv, rate_cfg); - else - status = mwifiex_rate_ioctl_set_rate_value(priv, rate_cfg); - - return status; -} - -/* * Sends IOCTL request to get the data rate. * * This function allocates the IOCTL request buffer, fills it * with requisite parameters and calls the IOCTL handler. 
*/ -int mwifiex_drv_get_data_rate(struct mwifiex_private *priv, - struct mwifiex_rate_cfg *rate) +int mwifiex_drv_get_data_rate(struct mwifiex_private *priv, u32 *rate) { int ret; - memset(rate, 0, sizeof(struct mwifiex_rate_cfg)); - rate->action = HostCmd_ACT_GEN_GET; - ret = mwifiex_rate_ioctl_cfg(priv, rate); + ret = mwifiex_send_cmd_sync(priv, HostCmd_CMD_802_11_TX_RATE_QUERY, + HostCmd_ACT_GEN_GET, 0, NULL); if (!ret) { - if (rate->is_rate_auto) - rate->rate = mwifiex_index_to_data_rate(priv, - priv->tx_rate, - priv->tx_htinfo - ); + if (priv->is_data_rate_auto) + *rate = mwifiex_index_to_data_rate(priv, priv->tx_rate, + priv->tx_htinfo); else - rate->rate = priv->data_rate; - } else { - ret = -1; + *rate = priv->data_rate; } return ret; diff --git a/drivers/net/wireless/mwifiex/uap_cmd.c b/drivers/net/wireless/mwifiex/uap_cmd.c index 89f9a2a45de3..f40e93fe894a 100644 --- a/drivers/net/wireless/mwifiex/uap_cmd.c +++ b/drivers/net/wireless/mwifiex/uap_cmd.c @@ -26,6 +26,7 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, struct mwifiex_uap_bss_param *bss_config, struct cfg80211_ap_settings *params) { int i; + struct mwifiex_wep_key wep_key; if (!params->privacy) { bss_config->protocol = PROTOCOL_NO_SECURITY; @@ -65,7 +66,7 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, } if (params->crypto.wpa_versions & NL80211_WPA_VERSION_2) { - bss_config->protocol = PROTOCOL_WPA2; + bss_config->protocol |= PROTOCOL_WPA2; bss_config->key_mgmt = KEY_MGMT_EAP; } break; @@ -77,7 +78,7 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, } if (params->crypto.wpa_versions & NL80211_WPA_VERSION_2) { - bss_config->protocol = PROTOCOL_WPA2; + bss_config->protocol |= PROTOCOL_WPA2; bss_config->key_mgmt = KEY_MGMT_PSK; } break; @@ -91,10 +92,19 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, case WLAN_CIPHER_SUITE_WEP104: break; case WLAN_CIPHER_SUITE_TKIP: - bss_config->wpa_cfg.pairwise_cipher_wpa = CIPHER_TKIP; + if (params->crypto.wpa_versions & NL80211_WPA_VERSION_1) + bss_config->wpa_cfg.pairwise_cipher_wpa |= + CIPHER_TKIP; + if (params->crypto.wpa_versions & NL80211_WPA_VERSION_2) + bss_config->wpa_cfg.pairwise_cipher_wpa2 |= + CIPHER_TKIP; break; case WLAN_CIPHER_SUITE_CCMP: - bss_config->wpa_cfg.pairwise_cipher_wpa2 = + if (params->crypto.wpa_versions & NL80211_WPA_VERSION_1) + bss_config->wpa_cfg.pairwise_cipher_wpa |= + CIPHER_AES_CCMP; + if (params->crypto.wpa_versions & NL80211_WPA_VERSION_2) + bss_config->wpa_cfg.pairwise_cipher_wpa2 |= CIPHER_AES_CCMP; default: break; @@ -104,6 +114,27 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, switch (params->crypto.cipher_group) { case WLAN_CIPHER_SUITE_WEP40: case WLAN_CIPHER_SUITE_WEP104: + if (priv->sec_info.wep_enabled) { + bss_config->protocol = PROTOCOL_STATIC_WEP; + bss_config->key_mgmt = KEY_MGMT_NONE; + bss_config->wpa_cfg.length = 0; + + for (i = 0; i < NUM_WEP_KEYS; i++) { + wep_key = priv->wep_key[i]; + bss_config->wep_cfg[i].key_index = i; + + if (priv->wep_key_curr_index == i) + bss_config->wep_cfg[i].is_default = 1; + else + bss_config->wep_cfg[i].is_default = 0; + + bss_config->wep_cfg[i].length = + wep_key.key_length; + memcpy(&bss_config->wep_cfg[i].key, + &wep_key.key_material, + wep_key.key_length); + } + } break; case WLAN_CIPHER_SUITE_TKIP: bss_config->wpa_cfg.group_cipher = CIPHER_TKIP; @@ -118,6 +149,33 @@ int mwifiex_set_secure_params(struct mwifiex_private *priv, return 0; } +/* This function updates 11n related parameters from IE and sets them into + * 
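The mwifiex_set_secure_params() changes above OR PROTOCOL_WPA2 into the protocol field instead of overwriting it, and set the TKIP/CCMP pairwise-cipher bits separately for each requested WPA version, so asking for WPA and WPA2 together no longer drops one of them. A reduced model of that accumulation; the bit values and names below are placeholders, not the driver's constants.

#include <stdint.h>

#define PROTO_WPA   0x01
#define PROTO_WPA2  0x02

struct sec_cfg {
	uint16_t protocol;
	uint16_t pairwise_wpa;	/* ciphers advertised for WPA  */
	uint16_t pairwise_wpa2;	/* ciphers advertised for WPA2 */
};

/*
 * Accumulate (OR) rather than assign, so requesting both WPA versions with
 * several ciphers yields a mixed-mode configuration instead of "last one wins".
 */
static void add_cipher(struct sec_cfg *cfg, int wpa1, int wpa2, uint16_t cipher_bit)
{
	if (wpa1) {
		cfg->protocol     |= PROTO_WPA;
		cfg->pairwise_wpa |= cipher_bit;
	}
	if (wpa2) {
		cfg->protocol      |= PROTO_WPA2;
		cfg->pairwise_wpa2 |= cipher_bit;
	}
}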
bss_config structure. + */ +void +mwifiex_set_ht_params(struct mwifiex_private *priv, + struct mwifiex_uap_bss_param *bss_cfg, + struct cfg80211_ap_settings *params) +{ + const u8 *ht_ie; + + if (!ISSUPP_11NENABLED(priv->adapter->fw_cap_info)) + return; + + ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, params->beacon.tail, + params->beacon.tail_len); + if (ht_ie) { + memcpy(&bss_cfg->ht_cap, ht_ie + 2, + sizeof(struct ieee80211_ht_cap)); + } else { + memset(&bss_cfg->ht_cap , 0, sizeof(struct ieee80211_ht_cap)); + bss_cfg->ht_cap.cap_info = cpu_to_le16(MWIFIEX_DEF_HT_CAP); + bss_cfg->ht_cap.ampdu_params_info = MWIFIEX_DEF_AMPDU; + } + + return; +} + /* This function initializes some of mwifiex_uap_bss_param variables. * This helps FW in ignoring invalid values. These values may or may not * be get updated to valid ones at later stage. @@ -135,6 +193,120 @@ void mwifiex_set_sys_config_invalid_data(struct mwifiex_uap_bss_param *config) } /* This function parses BSS related parameters from structure + * and prepares TLVs specific to WPA/WPA2 security. + * These TLVs are appended to command buffer. + */ +static void +mwifiex_uap_bss_wpa(u8 **tlv_buf, void *cmd_buf, u16 *param_size) +{ + struct host_cmd_tlv_pwk_cipher *pwk_cipher; + struct host_cmd_tlv_gwk_cipher *gwk_cipher; + struct host_cmd_tlv_passphrase *passphrase; + struct host_cmd_tlv_akmp *tlv_akmp; + struct mwifiex_uap_bss_param *bss_cfg = cmd_buf; + u16 cmd_size = *param_size; + u8 *tlv = *tlv_buf; + + tlv_akmp = (struct host_cmd_tlv_akmp *)tlv; + tlv_akmp->tlv.type = cpu_to_le16(TLV_TYPE_UAP_AKMP); + tlv_akmp->tlv.len = cpu_to_le16(sizeof(struct host_cmd_tlv_akmp) - + sizeof(struct host_cmd_tlv)); + tlv_akmp->key_mgmt_operation = cpu_to_le16(bss_cfg->key_mgmt_operation); + tlv_akmp->key_mgmt = cpu_to_le16(bss_cfg->key_mgmt); + cmd_size += sizeof(struct host_cmd_tlv_akmp); + tlv += sizeof(struct host_cmd_tlv_akmp); + + if (bss_cfg->wpa_cfg.pairwise_cipher_wpa & VALID_CIPHER_BITMAP) { + pwk_cipher = (struct host_cmd_tlv_pwk_cipher *)tlv; + pwk_cipher->tlv.type = cpu_to_le16(TLV_TYPE_PWK_CIPHER); + pwk_cipher->tlv.len = + cpu_to_le16(sizeof(struct host_cmd_tlv_pwk_cipher) - + sizeof(struct host_cmd_tlv)); + pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA); + pwk_cipher->cipher = bss_cfg->wpa_cfg.pairwise_cipher_wpa; + cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher); + tlv += sizeof(struct host_cmd_tlv_pwk_cipher); + } + + if (bss_cfg->wpa_cfg.pairwise_cipher_wpa2 & VALID_CIPHER_BITMAP) { + pwk_cipher = (struct host_cmd_tlv_pwk_cipher *)tlv; + pwk_cipher->tlv.type = cpu_to_le16(TLV_TYPE_PWK_CIPHER); + pwk_cipher->tlv.len = + cpu_to_le16(sizeof(struct host_cmd_tlv_pwk_cipher) - + sizeof(struct host_cmd_tlv)); + pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA2); + pwk_cipher->cipher = bss_cfg->wpa_cfg.pairwise_cipher_wpa2; + cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher); + tlv += sizeof(struct host_cmd_tlv_pwk_cipher); + } + + if (bss_cfg->wpa_cfg.group_cipher & VALID_CIPHER_BITMAP) { + gwk_cipher = (struct host_cmd_tlv_gwk_cipher *)tlv; + gwk_cipher->tlv.type = cpu_to_le16(TLV_TYPE_GWK_CIPHER); + gwk_cipher->tlv.len = + cpu_to_le16(sizeof(struct host_cmd_tlv_gwk_cipher) - + sizeof(struct host_cmd_tlv)); + gwk_cipher->cipher = bss_cfg->wpa_cfg.group_cipher; + cmd_size += sizeof(struct host_cmd_tlv_gwk_cipher); + tlv += sizeof(struct host_cmd_tlv_gwk_cipher); + } + + if (bss_cfg->wpa_cfg.length) { + passphrase = (struct host_cmd_tlv_passphrase *)tlv; + passphrase->tlv.type = cpu_to_le16(TLV_TYPE_UAP_WPA_PASSPHRASE); + 
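mwifiex_set_ht_params() above looks for an HT Capabilities element in the beacon tail and copies its body (skipping the 2-byte element header) into the BSS config, falling back to driver defaults when the element is absent. The same find-or-default pattern over a raw IE buffer, sketched with a hypothetical find_ie() in place of cfg80211_find_ie() and a stand-in struct for ieee80211_ht_cap (the extra length check is defensive and not in the driver):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define EID_HT_CAPABILITY 45

/* Walk a buffer of (id, len, payload) elements; return the one matching 'eid'. */
static const uint8_t *find_ie(uint8_t eid, const uint8_t *ies, size_t len)
{
	while (len >= 2 && (size_t)ies[1] + 2 <= len) {
		if (ies[0] == eid)
			return ies;
		len -= (size_t)ies[1] + 2;
		ies += (size_t)ies[1] + 2;
	}
	return NULL;
}

struct ht_cap_model { uint8_t body[26]; };	/* stand-in for ieee80211_ht_cap */

static void set_ht_params(struct ht_cap_model *out,
			  const uint8_t *tail, size_t tail_len,
			  const struct ht_cap_model *defaults)
{
	const uint8_t *ie = find_ie(EID_HT_CAPABILITY, tail, tail_len);

	if (ie && ie[1] >= sizeof(out->body))
		memcpy(out->body, ie + 2, sizeof(out->body));	/* skip ID + length */
	else
		*out = *defaults;				/* driver defaults  */
}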
passphrase->tlv.len = cpu_to_le16(bss_cfg->wpa_cfg.length); + memcpy(passphrase->passphrase, bss_cfg->wpa_cfg.passphrase, + bss_cfg->wpa_cfg.length); + cmd_size += sizeof(struct host_cmd_tlv) + + bss_cfg->wpa_cfg.length; + tlv += sizeof(struct host_cmd_tlv) + bss_cfg->wpa_cfg.length; + } + + *param_size = cmd_size; + *tlv_buf = tlv; + + return; +} + +/* This function parses BSS related parameters from structure + * and prepares TLVs specific to WEP encryption. + * These TLVs are appended to command buffer. + */ +static void +mwifiex_uap_bss_wep(u8 **tlv_buf, void *cmd_buf, u16 *param_size) +{ + struct host_cmd_tlv_wep_key *wep_key; + u16 cmd_size = *param_size; + int i; + u8 *tlv = *tlv_buf; + struct mwifiex_uap_bss_param *bss_cfg = cmd_buf; + + for (i = 0; i < NUM_WEP_KEYS; i++) { + if (bss_cfg->wep_cfg[i].length && + (bss_cfg->wep_cfg[i].length == WLAN_KEY_LEN_WEP40 || + bss_cfg->wep_cfg[i].length == WLAN_KEY_LEN_WEP104)) { + wep_key = (struct host_cmd_tlv_wep_key *)tlv; + wep_key->tlv.type = cpu_to_le16(TLV_TYPE_UAP_WEP_KEY); + wep_key->tlv.len = + cpu_to_le16(bss_cfg->wep_cfg[i].length + 2); + wep_key->key_index = bss_cfg->wep_cfg[i].key_index; + wep_key->is_default = bss_cfg->wep_cfg[i].is_default; + memcpy(wep_key->key, bss_cfg->wep_cfg[i].key, + bss_cfg->wep_cfg[i].length); + cmd_size += sizeof(struct host_cmd_tlv) + 2 + + bss_cfg->wep_cfg[i].length; + tlv += sizeof(struct host_cmd_tlv) + 2 + + bss_cfg->wep_cfg[i].length; + } + } + + *param_size = cmd_size; + *tlv_buf = tlv; + + return; +} + +/* This function parses BSS related parameters from structure * and prepares TLVs. These TLVs are appended to command buffer. */ static int @@ -148,12 +320,9 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size) struct host_cmd_tlv_frag_threshold *frag_threshold; struct host_cmd_tlv_rts_threshold *rts_threshold; struct host_cmd_tlv_retry_limit *retry_limit; - struct host_cmd_tlv_pwk_cipher *pwk_cipher; - struct host_cmd_tlv_gwk_cipher *gwk_cipher; struct host_cmd_tlv_encrypt_protocol *encrypt_protocol; struct host_cmd_tlv_auth_type *auth_type; - struct host_cmd_tlv_passphrase *passphrase; - struct host_cmd_tlv_akmp *tlv_akmp; + struct mwifiex_ie_types_htcap *htcap; struct mwifiex_uap_bss_param *bss_cfg = cmd_buf; u16 cmd_size = *param_size; @@ -243,70 +412,11 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size) } if ((bss_cfg->protocol & PROTOCOL_WPA) || (bss_cfg->protocol & PROTOCOL_WPA2) || - (bss_cfg->protocol & PROTOCOL_EAP)) { - tlv_akmp = (struct host_cmd_tlv_akmp *)tlv; - tlv_akmp->tlv.type = cpu_to_le16(TLV_TYPE_UAP_AKMP); - tlv_akmp->tlv.len = - cpu_to_le16(sizeof(struct host_cmd_tlv_akmp) - - sizeof(struct host_cmd_tlv)); - tlv_akmp->key_mgmt_operation = - cpu_to_le16(bss_cfg->key_mgmt_operation); - tlv_akmp->key_mgmt = cpu_to_le16(bss_cfg->key_mgmt); - cmd_size += sizeof(struct host_cmd_tlv_akmp); - tlv += sizeof(struct host_cmd_tlv_akmp); - - if (bss_cfg->wpa_cfg.pairwise_cipher_wpa & - VALID_CIPHER_BITMAP) { - pwk_cipher = (struct host_cmd_tlv_pwk_cipher *)tlv; - pwk_cipher->tlv.type = - cpu_to_le16(TLV_TYPE_PWK_CIPHER); - pwk_cipher->tlv.len = cpu_to_le16( - sizeof(struct host_cmd_tlv_pwk_cipher) - - sizeof(struct host_cmd_tlv)); - pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA); - pwk_cipher->cipher = - bss_cfg->wpa_cfg.pairwise_cipher_wpa; - cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher); - tlv += sizeof(struct host_cmd_tlv_pwk_cipher); - } - if (bss_cfg->wpa_cfg.pairwise_cipher_wpa2 & - VALID_CIPHER_BITMAP) { - pwk_cipher = (struct 
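Both new helpers above (mwifiex_uap_bss_wpa() and mwifiex_uap_bss_wep()) follow the same pattern: cast the running cursor to the next TLV type, fill in a type/length header plus payload, then advance both the cursor and the accumulated command size. A generic sketch of that append step, with a simplified header and no endian conversion (the real code goes through cpu_to_le16()); append_tlv() is a name invented for the example.

#include <stdint.h>
#include <string.h>

struct tlv_hdr {	/* simplified TLV header: type + payload length */
	uint16_t type;
	uint16_t len;
};

/*
 * Append one TLV at '*cursor', then bump the cursor and the running command
 * size.  The caller is assumed to have sized the command buffer up front.
 */
static void append_tlv(uint8_t **cursor, uint16_t *cmd_size,
		       uint16_t type, const void *payload, uint16_t payload_len)
{
	struct tlv_hdr hdr = { .type = type, .len = payload_len };

	memcpy(*cursor, &hdr, sizeof(hdr));
	memcpy(*cursor + sizeof(hdr), payload, payload_len);
	*cursor += sizeof(hdr) + payload_len;
	*cmd_size += sizeof(hdr) + payload_len;
}

Factoring the WPA and WEP TLVs into their own helpers keeps mwifiex_uap_bss_param_prepare() to the high-level ordering of TLVs rather than the per-field packing.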
host_cmd_tlv_pwk_cipher *)tlv; - pwk_cipher->tlv.type = cpu_to_le16(TLV_TYPE_PWK_CIPHER); - pwk_cipher->tlv.len = cpu_to_le16( - sizeof(struct host_cmd_tlv_pwk_cipher) - - sizeof(struct host_cmd_tlv)); - pwk_cipher->proto = cpu_to_le16(PROTOCOL_WPA2); - pwk_cipher->cipher = - bss_cfg->wpa_cfg.pairwise_cipher_wpa2; - cmd_size += sizeof(struct host_cmd_tlv_pwk_cipher); - tlv += sizeof(struct host_cmd_tlv_pwk_cipher); - } - if (bss_cfg->wpa_cfg.group_cipher & VALID_CIPHER_BITMAP) { - gwk_cipher = (struct host_cmd_tlv_gwk_cipher *)tlv; - gwk_cipher->tlv.type = cpu_to_le16(TLV_TYPE_GWK_CIPHER); - gwk_cipher->tlv.len = cpu_to_le16( - sizeof(struct host_cmd_tlv_gwk_cipher) - - sizeof(struct host_cmd_tlv)); - gwk_cipher->cipher = bss_cfg->wpa_cfg.group_cipher; - cmd_size += sizeof(struct host_cmd_tlv_gwk_cipher); - tlv += sizeof(struct host_cmd_tlv_gwk_cipher); - } - if (bss_cfg->wpa_cfg.length) { - passphrase = (struct host_cmd_tlv_passphrase *)tlv; - passphrase->tlv.type = - cpu_to_le16(TLV_TYPE_UAP_WPA_PASSPHRASE); - passphrase->tlv.len = - cpu_to_le16(bss_cfg->wpa_cfg.length); - memcpy(passphrase->passphrase, - bss_cfg->wpa_cfg.passphrase, - bss_cfg->wpa_cfg.length); - cmd_size += sizeof(struct host_cmd_tlv) + - bss_cfg->wpa_cfg.length; - tlv += sizeof(struct host_cmd_tlv) + - bss_cfg->wpa_cfg.length; - } - } + (bss_cfg->protocol & PROTOCOL_EAP)) + mwifiex_uap_bss_wpa(&tlv, cmd_buf, &cmd_size); + else + mwifiex_uap_bss_wep(&tlv, cmd_buf, &cmd_size); + if ((bss_cfg->auth_mode <= WLAN_AUTH_SHARED_KEY) || (bss_cfg->auth_mode == MWIFIEX_AUTH_MODE_AUTO)) { auth_type = (struct host_cmd_tlv_auth_type *)tlv; @@ -330,6 +440,25 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size) tlv += sizeof(struct host_cmd_tlv_encrypt_protocol); } + if (bss_cfg->ht_cap.cap_info) { + htcap = (struct mwifiex_ie_types_htcap *)tlv; + htcap->header.type = cpu_to_le16(WLAN_EID_HT_CAPABILITY); + htcap->header.len = + cpu_to_le16(sizeof(struct ieee80211_ht_cap)); + htcap->ht_cap.cap_info = bss_cfg->ht_cap.cap_info; + htcap->ht_cap.ampdu_params_info = + bss_cfg->ht_cap.ampdu_params_info; + memcpy(&htcap->ht_cap.mcs, &bss_cfg->ht_cap.mcs, + sizeof(struct ieee80211_mcs_info)); + htcap->ht_cap.extended_ht_cap_info = + bss_cfg->ht_cap.extended_ht_cap_info; + htcap->ht_cap.tx_BF_cap_info = bss_cfg->ht_cap.tx_BF_cap_info; + htcap->ht_cap.antenna_selection_info = + bss_cfg->ht_cap.antenna_selection_info; + cmd_size += sizeof(struct mwifiex_ie_types_htcap); + tlv += sizeof(struct mwifiex_ie_types_htcap); + } + *param_size = cmd_size; return 0; @@ -421,33 +550,3 @@ int mwifiex_uap_prepare_cmd(struct mwifiex_private *priv, u16 cmd_no, return 0; } - -/* This function sets the RF channel for AP. - * - * This function populates channel information in AP config structure - * and sends command to configure channel information in AP. 
- */ -int mwifiex_uap_set_channel(struct mwifiex_private *priv, int channel) -{ - struct mwifiex_uap_bss_param *bss_cfg; - struct wiphy *wiphy = priv->wdev->wiphy; - - bss_cfg = kzalloc(sizeof(struct mwifiex_uap_bss_param), GFP_KERNEL); - if (!bss_cfg) - return -ENOMEM; - - mwifiex_set_sys_config_invalid_data(bss_cfg); - bss_cfg->band_cfg = BAND_CONFIG_MANUAL; - bss_cfg->channel = channel; - - if (mwifiex_send_cmd_async(priv, HostCmd_CMD_UAP_SYS_CONFIG, - HostCmd_ACT_GEN_SET, - UAP_BSS_PARAMS_I, bss_cfg)) { - wiphy_err(wiphy, "Failed to set the uAP channel\n"); - kfree(bss_cfg); - return -1; - } - - kfree(bss_cfg); - return 0; -} diff --git a/drivers/net/wireless/mwl8k.c b/drivers/net/wireless/mwl8k.c index cf7bdc66f822..224e03ade145 100644 --- a/drivers/net/wireless/mwl8k.c +++ b/drivers/net/wireless/mwl8k.c @@ -1665,7 +1665,9 @@ mwl8k_txq_reclaim(struct ieee80211_hw *hw, int index, int limit, int force) info = IEEE80211_SKB_CB(skb); if (ieee80211_is_data(wh->frame_control)) { - sta = info->control.sta; + rcu_read_lock(); + sta = ieee80211_find_sta_by_ifaddr(hw, wh->addr1, + wh->addr2); if (sta) { sta_info = MWL8K_STA(sta); BUG_ON(sta_info == NULL); @@ -1682,6 +1684,7 @@ mwl8k_txq_reclaim(struct ieee80211_hw *hw, int index, int limit, int force) sta_info->is_ampdu_allowed = true; } } + rcu_read_unlock(); } ieee80211_tx_info_clear_status(info); diff --git a/drivers/net/wireless/orinoco/cfg.c b/drivers/net/wireless/orinoco/cfg.c index f7b15b8934fa..7b751fba7e1f 100644 --- a/drivers/net/wireless/orinoco/cfg.c +++ b/drivers/net/wireless/orinoco/cfg.c @@ -138,7 +138,7 @@ static int orinoco_change_vif(struct wiphy *wiphy, struct net_device *dev, return err; } -static int orinoco_scan(struct wiphy *wiphy, struct net_device *dev, +static int orinoco_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) { struct orinoco_private *priv = wiphy_priv(wiphy); @@ -160,10 +160,9 @@ static int orinoco_scan(struct wiphy *wiphy, struct net_device *dev, return err; } -static int orinoco_set_channel(struct wiphy *wiphy, - struct net_device *netdev, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type) +static int orinoco_set_monitor_channel(struct wiphy *wiphy, + struct ieee80211_channel *chan, + enum nl80211_channel_type channel_type) { struct orinoco_private *priv = wiphy_priv(wiphy); int err = 0; @@ -286,7 +285,7 @@ static int orinoco_set_wiphy_params(struct wiphy *wiphy, u32 changed) const struct cfg80211_ops orinoco_cfg_ops = { .change_virtual_intf = orinoco_change_vif, - .set_channel = orinoco_set_channel, + .set_monitor_channel = orinoco_set_monitor_channel, .scan = orinoco_scan, .set_wiphy_params = orinoco_set_wiphy_params, }; diff --git a/drivers/net/wireless/p54/eeprom.c b/drivers/net/wireless/p54/eeprom.c index fa8ce5104781..14037092ba89 100644 --- a/drivers/net/wireless/p54/eeprom.c +++ b/drivers/net/wireless/p54/eeprom.c @@ -857,7 +857,7 @@ good_eeprom: wiphy_warn(dev->wiphy, "Invalid hwaddr! 
Using randomly generated MAC addr\n"); - random_ether_addr(perm_addr); + eth_random_addr(perm_addr); SET_IEEE80211_PERM_ADDR(dev, perm_addr); } @@ -905,7 +905,7 @@ int p54_read_eeprom(struct ieee80211_hw *dev) while (eeprom_size) { blocksize = min(eeprom_size, maxblocksize); - ret = p54_download_eeprom(priv, (void *) (eeprom + offset), + ret = p54_download_eeprom(priv, eeprom + offset, offset, blocksize); if (unlikely(ret)) goto free; diff --git a/drivers/net/wireless/p54/fwio.c b/drivers/net/wireless/p54/fwio.c index 18e82b31afa6..9ba85106eec0 100644 --- a/drivers/net/wireless/p54/fwio.c +++ b/drivers/net/wireless/p54/fwio.c @@ -478,7 +478,7 @@ int p54_scan(struct p54_common *priv, u16 mode, u16 dwell) if (priv->rxhw == PDR_SYNTH_FRONTEND_LONGBOW) { memcpy(&body->longbow.curve_data, - (void *) entry + sizeof(__le16), + entry + sizeof(__le16), priv->curve_data->entry_size); } else { struct p54_scan_body *chan = &body->normal; diff --git a/drivers/net/wireless/p54/txrx.c b/drivers/net/wireless/p54/txrx.c index 82a1cac920bd..f38786e02623 100644 --- a/drivers/net/wireless/p54/txrx.c +++ b/drivers/net/wireless/p54/txrx.c @@ -422,11 +422,11 @@ static void p54_rx_frame_sent(struct p54_common *priv, struct sk_buff *skb) * Clear manually, ieee80211_tx_info_clear_status would * clear the counts too and we need them. */ - memset(&info->status.ampdu_ack_len, 0, + memset(&info->status.ack_signal, 0, sizeof(struct ieee80211_tx_info) - - offsetof(struct ieee80211_tx_info, status.ampdu_ack_len)); + offsetof(struct ieee80211_tx_info, status.ack_signal)); BUILD_BUG_ON(offsetof(struct ieee80211_tx_info, - status.ampdu_ack_len) != 23); + status.ack_signal) != 20); if (entry_hdr->flags & cpu_to_le16(P54_HDR_FLAG_DATA_ALIGN)) pad = entry_data->align[0]; diff --git a/drivers/net/wireless/prism54/islpci_eth.c b/drivers/net/wireless/prism54/islpci_eth.c index 266d45bf86f5..799e148d0370 100644 --- a/drivers/net/wireless/prism54/islpci_eth.c +++ b/drivers/net/wireless/prism54/islpci_eth.c @@ -455,7 +455,7 @@ islpci_eth_receive(islpci_private *priv) "Error mapping DMA address\n"); /* free the skbuf structure before aborting */ - dev_kfree_skb_irq((struct sk_buff *) skb); + dev_kfree_skb_irq(skb); skb = NULL; break; } diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c index 86a738bf591c..598ca1cafb95 100644 --- a/drivers/net/wireless/ray_cs.c +++ b/drivers/net/wireless/ray_cs.c @@ -1849,7 +1849,7 @@ static irqreturn_t ray_interrupt(int irq, void *dev_id) pr_debug("ray_cs: interrupt for *dev=%p\n", dev); local = netdev_priv(dev); - link = (struct pcmcia_device *)local->finder; + link = local->finder; if (!pcmcia_dev_present(link)) { pr_debug( "ray_cs interrupt from device not present or suspended.\n"); diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c index dfcd02ab6cae..241162e8111d 100644 --- a/drivers/net/wireless/rndis_wlan.c +++ b/drivers/net/wireless/rndis_wlan.c @@ -484,7 +484,7 @@ static int rndis_change_virtual_intf(struct wiphy *wiphy, enum nl80211_iftype type, u32 *flags, struct vif_params *params); -static int rndis_scan(struct wiphy *wiphy, struct net_device *dev, +static int rndis_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request); static int rndis_set_wiphy_params(struct wiphy *wiphy, u32 changed); @@ -1941,9 +1941,10 @@ static int rndis_get_tx_power(struct wiphy *wiphy, int *dbm) } #define SCAN_DELAY_JIFFIES (6 * HZ) -static int rndis_scan(struct wiphy *wiphy, struct net_device *dev, +static int rndis_scan(struct wiphy *wiphy, 
struct cfg80211_scan_request *request) { + struct net_device *dev = request->wdev->netdev; struct usbnet *usbdev = netdev_priv(dev); struct rndis_wlan_private *priv = get_rndis_wlan_priv(usbdev); int ret; diff --git a/drivers/net/wireless/rt2x00/Kconfig b/drivers/net/wireless/rt2x00/Kconfig index 299c3879582d..c7548da6573d 100644 --- a/drivers/net/wireless/rt2x00/Kconfig +++ b/drivers/net/wireless/rt2x00/Kconfig @@ -99,6 +99,14 @@ config RT2800PCI_RT53XX rt2800pci driver. Supported chips: RT5390 +config RT2800PCI_RT3290 + bool "rt2800pci - Include support for rt3290 devices (EXPERIMENTAL)" + depends on EXPERIMENTAL + default y + ---help--- + This adds support for rt3290 wireless chipset family to the + rt2800pci driver. + Supported chips: RT3290 endif config RT2500USB diff --git a/drivers/net/wireless/rt2x00/rt2400pci.c b/drivers/net/wireless/rt2x00/rt2400pci.c index 5e6b50143165..8b9dbd76a252 100644 --- a/drivers/net/wireless/rt2x00/rt2400pci.c +++ b/drivers/net/wireless/rt2x00/rt2400pci.c @@ -1455,7 +1455,7 @@ static int rt2400pci_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c index 136b849f11b5..d2cf8a4bc8b5 100644 --- a/drivers/net/wireless/rt2x00/rt2500pci.c +++ b/drivers/net/wireless/rt2x00/rt2500pci.c @@ -1585,7 +1585,7 @@ static int rt2500pci_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c index 669aecdb411d..3aae36bb0a9e 100644 --- a/drivers/net/wireless/rt2x00/rt2500usb.c +++ b/drivers/net/wireless/rt2x00/rt2500usb.c @@ -1352,7 +1352,7 @@ static int rt2500usb_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h index 9348521e0832..e252e9bafd0e 100644 --- a/drivers/net/wireless/rt2x00/rt2800.h +++ b/drivers/net/wireless/rt2x00/rt2800.h @@ -51,6 +51,7 @@ * RF3320 2.4G 1T1R(RT3350/RT3370/RT3390) * RF3322 2.4G 2T2R(RT3352/RT3371/RT3372/RT3391/RT3392) * RF3053 2.4G/5G 3T3R(RT3883/RT3563/RT3573/RT3593/RT3662) + * RF5360 2.4G 1T1R * RF5370 2.4G 1T1R * RF5390 2.4G 1T1R */ @@ -67,9 +68,12 @@ #define RF3320 0x000b #define RF3322 0x000c #define RF3053 0x000d +#define RF3290 0x3290 +#define RF5360 0x5360 #define RF5370 0x5370 #define RF5372 0x5372 #define RF5390 0x5390 +#define RF5392 0x5392 /* * Chipset revisions. @@ -114,6 +118,12 @@ * Registers. */ + +/* + * MAC_CSR0_3290: MAC_CSR0 for RT3290 to identity MAC version number. + */ +#define MAC_CSR0_3290 0x0000 + /* * E2PROM_CSR: PCI EEPROM control register. * RELOAD: Write 1 to reload eeprom content. 
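/*
 * Illustrative sketch, not part of the patch: the fallback behind the
 * random_ether_addr() -> eth_random_addr() conversions in the EEPROM
 * validation hunks above.  eth_random_addr() fills in a random MAC with
 * the locally-administered bit set and the multicast bit cleared, so it
 * is a safe substitute when the EEPROM does not hold a valid address.
 * The helper name below is an assumption for this sketch.
 */
#include <linux/etherdevice.h>

static void fixup_perm_addr(u8 *mac)    /* mac points at ETH_ALEN bytes */
{
        if (!is_valid_ether_addr(mac)) {
                eth_random_addr(mac);
                pr_info("using randomly generated MAC %pM\n", mac);
        }
}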
@@ -130,6 +140,150 @@ #define E2PROM_CSR_RELOAD FIELD32(0x00000080) /* + * CMB_CTRL_CFG + */ +#define CMB_CTRL 0x0020 +#define AUX_OPT_BIT0 FIELD32(0x00000001) +#define AUX_OPT_BIT1 FIELD32(0x00000002) +#define AUX_OPT_BIT2 FIELD32(0x00000004) +#define AUX_OPT_BIT3 FIELD32(0x00000008) +#define AUX_OPT_BIT4 FIELD32(0x00000010) +#define AUX_OPT_BIT5 FIELD32(0x00000020) +#define AUX_OPT_BIT6 FIELD32(0x00000040) +#define AUX_OPT_BIT7 FIELD32(0x00000080) +#define AUX_OPT_BIT8 FIELD32(0x00000100) +#define AUX_OPT_BIT9 FIELD32(0x00000200) +#define AUX_OPT_BIT10 FIELD32(0x00000400) +#define AUX_OPT_BIT11 FIELD32(0x00000800) +#define AUX_OPT_BIT12 FIELD32(0x00001000) +#define AUX_OPT_BIT13 FIELD32(0x00002000) +#define AUX_OPT_BIT14 FIELD32(0x00004000) +#define AUX_OPT_BIT15 FIELD32(0x00008000) +#define LDO25_LEVEL FIELD32(0x00030000) +#define LDO25_LARGEA FIELD32(0x00040000) +#define LDO25_FRC_ON FIELD32(0x00080000) +#define CMB_RSV FIELD32(0x00300000) +#define XTAL_RDY FIELD32(0x00400000) +#define PLL_LD FIELD32(0x00800000) +#define LDO_CORE_LEVEL FIELD32(0x0F000000) +#define LDO_BGSEL FIELD32(0x30000000) +#define LDO3_EN FIELD32(0x40000000) +#define LDO0_EN FIELD32(0x80000000) + +/* + * EFUSE_CSR_3290: RT3290 EEPROM + */ +#define EFUSE_CTRL_3290 0x0024 + +/* + * EFUSE_DATA3 of 3290 + */ +#define EFUSE_DATA3_3290 0x0028 + +/* + * EFUSE_DATA2 of 3290 + */ +#define EFUSE_DATA2_3290 0x002c + +/* + * EFUSE_DATA1 of 3290 + */ +#define EFUSE_DATA1_3290 0x0030 + +/* + * EFUSE_DATA0 of 3290 + */ +#define EFUSE_DATA0_3290 0x0034 + +/* + * OSC_CTRL_CFG + * Ring oscillator configuration + */ +#define OSC_CTRL 0x0038 +#define OSC_REF_CYCLE FIELD32(0x00001fff) +#define OSC_RSV FIELD32(0x0000e000) +#define OSC_CAL_CNT FIELD32(0x0fff0000) +#define OSC_CAL_ACK FIELD32(0x10000000) +#define OSC_CLK_32K_VLD FIELD32(0x20000000) +#define OSC_CAL_REQ FIELD32(0x40000000) +#define OSC_ROSC_EN FIELD32(0x80000000) + +/* + * COEX_CFG_0 + */ +#define COEX_CFG0 0x0040 +#define COEX_CFG_ANT FIELD32(0xff000000) +/* + * COEX_CFG_1 + */ +#define COEX_CFG1 0x0044 + +/* + * COEX_CFG_2 + */ +#define COEX_CFG2 0x0048 +#define BT_COEX_CFG1 FIELD32(0xff000000) +#define BT_COEX_CFG0 FIELD32(0x00ff0000) +#define WL_COEX_CFG1 FIELD32(0x0000ff00) +#define WL_COEX_CFG0 FIELD32(0x000000ff) +/* + * PLL_CTRL_CFG + * PLL configuration register + */ +#define PLL_CTRL 0x0050 +#define PLL_RESERVED_INPUT1 FIELD32(0x000000ff) +#define PLL_RESERVED_INPUT2 FIELD32(0x0000ff00) +#define PLL_CONTROL FIELD32(0x00070000) +#define PLL_LPF_R1 FIELD32(0x00080000) +#define PLL_LPF_C1_CTRL FIELD32(0x00300000) +#define PLL_LPF_C2_CTRL FIELD32(0x00c00000) +#define PLL_CP_CURRENT_CTRL FIELD32(0x03000000) +#define PLL_PFD_DELAY_CTRL FIELD32(0x0c000000) +#define PLL_LOCK_CTRL FIELD32(0x70000000) +#define PLL_VBGBK_EN FIELD32(0x80000000) + + +/* + * WLAN_CTRL_CFG + * RT3290 wlan configuration + */ +#define WLAN_FUN_CTRL 0x0080 +#define WLAN_EN FIELD32(0x00000001) +#define WLAN_CLK_EN FIELD32(0x00000002) +#define WLAN_RSV1 FIELD32(0x00000004) +#define WLAN_RESET FIELD32(0x00000008) +#define PCIE_APP0_CLK_REQ FIELD32(0x00000010) +#define FRC_WL_ANT_SET FIELD32(0x00000020) +#define INV_TR_SW0 FIELD32(0x00000040) +#define WLAN_GPIO_IN_BIT0 FIELD32(0x00000100) +#define WLAN_GPIO_IN_BIT1 FIELD32(0x00000200) +#define WLAN_GPIO_IN_BIT2 FIELD32(0x00000400) +#define WLAN_GPIO_IN_BIT3 FIELD32(0x00000800) +#define WLAN_GPIO_IN_BIT4 FIELD32(0x00001000) +#define WLAN_GPIO_IN_BIT5 FIELD32(0x00002000) +#define WLAN_GPIO_IN_BIT6 FIELD32(0x00004000) +#define WLAN_GPIO_IN_BIT7 
FIELD32(0x00008000) +#define WLAN_GPIO_IN_BIT_ALL FIELD32(0x0000ff00) +#define WLAN_GPIO_OUT_BIT0 FIELD32(0x00010000) +#define WLAN_GPIO_OUT_BIT1 FIELD32(0x00020000) +#define WLAN_GPIO_OUT_BIT2 FIELD32(0x00040000) +#define WLAN_GPIO_OUT_BIT3 FIELD32(0x00050000) +#define WLAN_GPIO_OUT_BIT4 FIELD32(0x00100000) +#define WLAN_GPIO_OUT_BIT5 FIELD32(0x00200000) +#define WLAN_GPIO_OUT_BIT6 FIELD32(0x00400000) +#define WLAN_GPIO_OUT_BIT7 FIELD32(0x00800000) +#define WLAN_GPIO_OUT_BIT_ALL FIELD32(0x00ff0000) +#define WLAN_GPIO_OUT_OE_BIT0 FIELD32(0x01000000) +#define WLAN_GPIO_OUT_OE_BIT1 FIELD32(0x02000000) +#define WLAN_GPIO_OUT_OE_BIT2 FIELD32(0x04000000) +#define WLAN_GPIO_OUT_OE_BIT3 FIELD32(0x08000000) +#define WLAN_GPIO_OUT_OE_BIT4 FIELD32(0x10000000) +#define WLAN_GPIO_OUT_OE_BIT5 FIELD32(0x20000000) +#define WLAN_GPIO_OUT_OE_BIT6 FIELD32(0x40000000) +#define WLAN_GPIO_OUT_OE_BIT7 FIELD32(0x80000000) +#define WLAN_GPIO_OUT_OE_BIT_ALL FIELD32(0xff000000) + +/* * AUX_CTRL: Aux/PCI-E related configuration */ #define AUX_CTRL 0x10c @@ -1760,9 +1914,11 @@ struct mac_iveiv_entry { /* * BBP 3: RX Antenna */ -#define BBP3_RX_ADC FIELD8(0x03) +#define BBP3_RX_ADC FIELD8(0x03) #define BBP3_RX_ANTENNA FIELD8(0x18) #define BBP3_HT40_MINUS FIELD8(0x20) +#define BBP3_ADC_MODE_SWITCH FIELD8(0x40) +#define BBP3_ADC_INIT_MODE FIELD8(0x80) /* * BBP 4: Bandwidth @@ -1772,6 +1928,14 @@ struct mac_iveiv_entry { #define BBP4_MAC_IF_CTRL FIELD8(0x40) /* + * BBP 47: Bandwidth + */ +#define BBP47_TSSI_REPORT_SEL FIELD8(0x03) +#define BBP47_TSSI_UPDATE_REQ FIELD8(0x04) +#define BBP47_TSSI_TSSI_MODE FIELD8(0x18) +#define BBP47_TSSI_ADC6 FIELD8(0x80) + +/* * BBP 109 */ #define BBP109_TX0_POWER FIELD8(0x0f) @@ -1914,6 +2078,16 @@ struct mac_iveiv_entry { #define RFCSR27_R4 FIELD8(0x40) /* + * RFCSR 29: + */ +#define RFCSR29_ADC6_TEST FIELD8(0x01) +#define RFCSR29_ADC6_INT_TEST FIELD8(0x02) +#define RFCSR29_RSSI_RESET FIELD8(0x04) +#define RFCSR29_RSSI_ON FIELD8(0x08) +#define RFCSR29_RSSI_RIP_CTRL FIELD8(0x30) +#define RFCSR29_RSSI_GAIN FIELD8(0xc0) + +/* * RFCSR 30: */ #define RFCSR30_TX_H20M FIELD8(0x02) @@ -1944,6 +2118,11 @@ struct mac_iveiv_entry { #define RFCSR49_TX FIELD8(0x3f) /* + * RFCSR 50: + */ +#define RFCSR50_TX FIELD8(0x3f) + +/* * RF registers */ diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c index dfc90d34be6d..88455b1b9fe0 100644 --- a/drivers/net/wireless/rt2x00/rt2800lib.c +++ b/drivers/net/wireless/rt2x00/rt2800lib.c @@ -354,16 +354,15 @@ int rt2800_check_firmware(struct rt2x00_dev *rt2x00dev, * of 4kb. Certain USB chipsets however require different firmware, * which Ralink only provides attached to the original firmware * file. Thus for USB devices, firmware files have a length - * which is a multiple of 4kb. + * which is a multiple of 4kb. The firmware for rt3290 chip also + * have a length which is a multiple of 4kb. 
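/*
 * Illustrative sketch, not part of the patch: the length rule stated in
 * the firmware comment above -- USB devices and the RT3290 use firmware
 * built in 4 KiB units, the other PCI chips use 8 KiB, and the loaded
 * blob must be a whole multiple of that unit.  The helper name is an
 * assumption for this sketch.
 */
static bool fw_len_is_valid(struct rt2x00_dev *rt2x00dev, size_t len)
{
        size_t unit = (rt2x00_is_usb(rt2x00dev) ||
                       rt2x00_rt(rt2x00dev, RT3290)) ? 4096 : 8192;

        return len && (len % unit) == 0;
}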
*/ - if (rt2x00_is_usb(rt2x00dev)) { + if (rt2x00_is_usb(rt2x00dev) || rt2x00_rt(rt2x00dev, RT3290)) fw_len = 4096; - multiple = true; - } else { + else fw_len = 8192; - multiple = true; - } + multiple = true; /* * Validate the firmware length */ @@ -415,7 +414,8 @@ int rt2800_load_firmware(struct rt2x00_dev *rt2x00dev, return -EBUSY; if (rt2x00_is_pci(rt2x00dev)) { - if (rt2x00_rt(rt2x00dev, RT3572) || + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT3572) || rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) { rt2800_register_read(rt2x00dev, AUX_CTRL, ®); @@ -851,8 +851,13 @@ int rt2800_rfkill_poll(struct rt2x00_dev *rt2x00dev) { u32 reg; - rt2800_register_read(rt2x00dev, GPIO_CTRL_CFG, ®); - return rt2x00_get_field32(reg, GPIO_CTRL_CFG_BIT2); + if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_register_read(rt2x00dev, WLAN_FUN_CTRL, ®); + return rt2x00_get_field32(reg, WLAN_GPIO_IN_BIT0); + } else { + rt2800_register_read(rt2x00dev, GPIO_CTRL_CFG, ®); + return rt2x00_get_field32(reg, GPIO_CTRL_CFG_BIT2); + } } EXPORT_SYMBOL_GPL(rt2800_rfkill_poll); @@ -1935,8 +1940,50 @@ static void rt2800_config_channel_rf3052(struct rt2x00_dev *rt2x00dev, rt2800_rfcsr_write(rt2x00dev, 7, rfcsr); } -#define RT5390_POWER_BOUND 0x27 -#define RT5390_FREQ_OFFSET_BOUND 0x5f +#define POWER_BOUND 0x27 +#define FREQ_OFFSET_BOUND 0x5f + +static void rt2800_config_channel_rf3290(struct rt2x00_dev *rt2x00dev, + struct ieee80211_conf *conf, + struct rf_channel *rf, + struct channel_info *info) +{ + u8 rfcsr; + + rt2800_rfcsr_write(rt2x00dev, 8, rf->rf1); + rt2800_rfcsr_write(rt2x00dev, 9, rf->rf3); + rt2800_rfcsr_read(rt2x00dev, 11, &rfcsr); + rt2x00_set_field8(&rfcsr, RFCSR11_R, rf->rf2); + rt2800_rfcsr_write(rt2x00dev, 11, rfcsr); + + rt2800_rfcsr_read(rt2x00dev, 49, &rfcsr); + if (info->default_power1 > POWER_BOUND) + rt2x00_set_field8(&rfcsr, RFCSR49_TX, POWER_BOUND); + else + rt2x00_set_field8(&rfcsr, RFCSR49_TX, info->default_power1); + rt2800_rfcsr_write(rt2x00dev, 49, rfcsr); + + rt2800_rfcsr_read(rt2x00dev, 17, &rfcsr); + if (rt2x00dev->freq_offset > FREQ_OFFSET_BOUND) + rt2x00_set_field8(&rfcsr, RFCSR17_CODE, FREQ_OFFSET_BOUND); + else + rt2x00_set_field8(&rfcsr, RFCSR17_CODE, rt2x00dev->freq_offset); + rt2800_rfcsr_write(rt2x00dev, 17, rfcsr); + + if (rf->channel <= 14) { + if (rf->channel == 6) + rt2800_bbp_write(rt2x00dev, 68, 0x0c); + else + rt2800_bbp_write(rt2x00dev, 68, 0x0b); + + if (rf->channel >= 1 && rf->channel <= 6) + rt2800_bbp_write(rt2x00dev, 59, 0x0f); + else if (rf->channel >= 7 && rf->channel <= 11) + rt2800_bbp_write(rt2x00dev, 59, 0x0e); + else if (rf->channel >= 12 && rf->channel <= 14) + rt2800_bbp_write(rt2x00dev, 59, 0x0d); + } +} static void rt2800_config_channel_rf53xx(struct rt2x00_dev *rt2x00dev, struct ieee80211_conf *conf, @@ -1952,13 +1999,27 @@ static void rt2800_config_channel_rf53xx(struct rt2x00_dev *rt2x00dev, rt2800_rfcsr_write(rt2x00dev, 11, rfcsr); rt2800_rfcsr_read(rt2x00dev, 49, &rfcsr); - if (info->default_power1 > RT5390_POWER_BOUND) - rt2x00_set_field8(&rfcsr, RFCSR49_TX, RT5390_POWER_BOUND); + if (info->default_power1 > POWER_BOUND) + rt2x00_set_field8(&rfcsr, RFCSR49_TX, POWER_BOUND); else rt2x00_set_field8(&rfcsr, RFCSR49_TX, info->default_power1); rt2800_rfcsr_write(rt2x00dev, 49, rfcsr); + if (rt2x00_rt(rt2x00dev, RT5392)) { + rt2800_rfcsr_read(rt2x00dev, 50, &rfcsr); + if (info->default_power1 > POWER_BOUND) + rt2x00_set_field8(&rfcsr, RFCSR50_TX, POWER_BOUND); + else + rt2x00_set_field8(&rfcsr, RFCSR50_TX, + info->default_power2); + 
rt2800_rfcsr_write(rt2x00dev, 50, rfcsr); + } + rt2800_rfcsr_read(rt2x00dev, 1, &rfcsr); + if (rt2x00_rt(rt2x00dev, RT5392)) { + rt2x00_set_field8(&rfcsr, RFCSR1_RX1_PD, 1); + rt2x00_set_field8(&rfcsr, RFCSR1_TX1_PD, 1); + } rt2x00_set_field8(&rfcsr, RFCSR1_RF_BLOCK_EN, 1); rt2x00_set_field8(&rfcsr, RFCSR1_PLL_PD, 1); rt2x00_set_field8(&rfcsr, RFCSR1_RX0_PD, 1); @@ -1966,9 +2027,8 @@ static void rt2800_config_channel_rf53xx(struct rt2x00_dev *rt2x00dev, rt2800_rfcsr_write(rt2x00dev, 1, rfcsr); rt2800_rfcsr_read(rt2x00dev, 17, &rfcsr); - if (rt2x00dev->freq_offset > RT5390_FREQ_OFFSET_BOUND) - rt2x00_set_field8(&rfcsr, RFCSR17_CODE, - RT5390_FREQ_OFFSET_BOUND); + if (rt2x00dev->freq_offset > FREQ_OFFSET_BOUND) + rt2x00_set_field8(&rfcsr, RFCSR17_CODE, FREQ_OFFSET_BOUND); else rt2x00_set_field8(&rfcsr, RFCSR17_CODE, rt2x00dev->freq_offset); rt2800_rfcsr_write(rt2x00dev, 17, rfcsr); @@ -2021,15 +2081,6 @@ static void rt2800_config_channel_rf53xx(struct rt2x00_dev *rt2x00dev, } } } - - rt2800_rfcsr_read(rt2x00dev, 30, &rfcsr); - rt2x00_set_field8(&rfcsr, RFCSR30_TX_H20M, 0); - rt2x00_set_field8(&rfcsr, RFCSR30_RX_H20M, 0); - rt2800_rfcsr_write(rt2x00dev, 30, rfcsr); - - rt2800_rfcsr_read(rt2x00dev, 3, &rfcsr); - rt2x00_set_field8(&rfcsr, RFCSR30_RF_CALIBRATION, 1); - rt2800_rfcsr_write(rt2x00dev, 3, rfcsr); } static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev, @@ -2039,7 +2090,7 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev, { u32 reg; unsigned int tx_pin; - u8 bbp; + u8 bbp, rfcsr; if (rf->channel <= 14) { info->default_power1 = TXPOWER_G_TO_DEV(info->default_power1); @@ -2060,15 +2111,36 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev, case RF3052: rt2800_config_channel_rf3052(rt2x00dev, conf, rf, info); break; + case RF3290: + rt2800_config_channel_rf3290(rt2x00dev, conf, rf, info); + break; + case RF5360: case RF5370: case RF5372: case RF5390: + case RF5392: rt2800_config_channel_rf53xx(rt2x00dev, conf, rf, info); break; default: rt2800_config_channel_rf2xxx(rt2x00dev, conf, rf, info); } + if (rt2x00_rf(rt2x00dev, RF3290) || + rt2x00_rf(rt2x00dev, RF5360) || + rt2x00_rf(rt2x00dev, RF5370) || + rt2x00_rf(rt2x00dev, RF5372) || + rt2x00_rf(rt2x00dev, RF5390) || + rt2x00_rf(rt2x00dev, RF5392)) { + rt2800_rfcsr_read(rt2x00dev, 30, &rfcsr); + rt2x00_set_field8(&rfcsr, RFCSR30_TX_H20M, 0); + rt2x00_set_field8(&rfcsr, RFCSR30_RX_H20M, 0); + rt2800_rfcsr_write(rt2x00dev, 30, rfcsr); + + rt2800_rfcsr_read(rt2x00dev, 3, &rfcsr); + rt2x00_set_field8(&rfcsr, RFCSR30_RF_CALIBRATION, 1); + rt2800_rfcsr_write(rt2x00dev, 3, rfcsr); + } + /* * Change BBP settings */ @@ -2549,9 +2621,12 @@ void rt2800_vco_calibration(struct rt2x00_dev *rt2x00dev) rt2x00_set_field8(&rfcsr, RFCSR7_RF_TUNING, 1); rt2800_rfcsr_write(rt2x00dev, 7, rfcsr); break; + case RF3290: + case RF5360: case RF5370: case RF5372: case RF5390: + case RF5392: rt2800_rfcsr_read(rt2x00dev, 3, &rfcsr); rt2x00_set_field8(&rfcsr, RFCSR30_RF_CALIBRATION, 1); rt2800_rfcsr_write(rt2x00dev, 3, rfcsr); @@ -2682,6 +2757,7 @@ static u8 rt2800_get_default_vgc(struct rt2x00_dev *rt2x00dev) if (rt2x00_rt(rt2x00dev, RT3070) || rt2x00_rt(rt2x00dev, RT3071) || rt2x00_rt(rt2x00dev, RT3090) || + rt2x00_rt(rt2x00dev, RT3290) || rt2x00_rt(rt2x00dev, RT3390) || rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) @@ -2778,10 +2854,54 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev) rt2x00_set_field32(®, BKOFF_SLOT_CFG_CC_DELAY_TIME, 2); rt2800_register_write(rt2x00dev, BKOFF_SLOT_CFG, reg); + 
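/*
 * Illustrative sketch, not part of the patch: the read-modify-write idiom
 * behind the rt2x00_set_field32()/rt2x00_get_field32() calls used on the
 * new RT3290 registers (CMB_CTRL, WLAN_FUN_CTRL, OSC_CTRL, ...).  The
 * real driver wraps each mask in a FIELD32() descriptor; the helpers
 * below are simplified stand-ins taking a plain non-zero mask, with
 * __ffs() from <linux/bitops.h> supplying the shift.
 */
static inline u32 field32_get(u32 reg, u32 mask)
{
        return (reg & mask) >> __ffs(mask);     /* shift the value down to bit 0 */
}

static inline void field32_set(u32 *reg, u32 mask, u32 val)
{
        *reg = (*reg & ~mask) | ((val << __ffs(mask)) & mask);
}

/* Typical update shape, mirroring the RT3290 LDO setup just below: */
static void example_ldo_setup(struct rt2x00_dev *rt2x00dev)
{
        u32 reg;

        rt2800_register_read(rt2x00dev, CMB_CTRL, &reg);
        field32_set(&reg, 0x80000000, 1);       /* LDO0_EN bit */
        field32_set(&reg, 0x30000000, 3);       /* LDO_BGSEL bits */
        rt2800_register_write(rt2x00dev, CMB_CTRL, reg);
}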
if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_register_read(rt2x00dev, WLAN_FUN_CTRL, ®); + if (rt2x00_get_field32(reg, WLAN_EN) == 1) { + rt2x00_set_field32(®, PCIE_APP0_CLK_REQ, 1); + rt2800_register_write(rt2x00dev, WLAN_FUN_CTRL, reg); + } + + rt2800_register_read(rt2x00dev, CMB_CTRL, ®); + if (!(rt2x00_get_field32(reg, LDO0_EN) == 1)) { + rt2x00_set_field32(®, LDO0_EN, 1); + rt2x00_set_field32(®, LDO_BGSEL, 3); + rt2800_register_write(rt2x00dev, CMB_CTRL, reg); + } + + rt2800_register_read(rt2x00dev, OSC_CTRL, ®); + rt2x00_set_field32(®, OSC_ROSC_EN, 1); + rt2x00_set_field32(®, OSC_CAL_REQ, 1); + rt2x00_set_field32(®, OSC_REF_CYCLE, 0x27); + rt2800_register_write(rt2x00dev, OSC_CTRL, reg); + + rt2800_register_read(rt2x00dev, COEX_CFG0, ®); + rt2x00_set_field32(®, COEX_CFG_ANT, 0x5e); + rt2800_register_write(rt2x00dev, COEX_CFG0, reg); + + rt2800_register_read(rt2x00dev, COEX_CFG2, ®); + rt2x00_set_field32(®, BT_COEX_CFG1, 0x00); + rt2x00_set_field32(®, BT_COEX_CFG0, 0x17); + rt2x00_set_field32(®, WL_COEX_CFG1, 0x93); + rt2x00_set_field32(®, WL_COEX_CFG0, 0x7f); + rt2800_register_write(rt2x00dev, COEX_CFG2, reg); + + rt2800_register_read(rt2x00dev, PLL_CTRL, ®); + rt2x00_set_field32(®, PLL_CONTROL, 1); + rt2800_register_write(rt2x00dev, PLL_CTRL, reg); + } + if (rt2x00_rt(rt2x00dev, RT3071) || rt2x00_rt(rt2x00dev, RT3090) || + rt2x00_rt(rt2x00dev, RT3290) || rt2x00_rt(rt2x00dev, RT3390)) { - rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000400); + + if (rt2x00_rt(rt2x00dev, RT3290)) + rt2800_register_write(rt2x00dev, TX_SW_CFG0, + 0x00000404); + else + rt2800_register_write(rt2x00dev, TX_SW_CFG0, + 0x00000400); + rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x00000000); if (rt2x00_rt_rev_lt(rt2x00dev, RT3071, REV_RT3071E) || rt2x00_rt_rev_lt(rt2x00dev, RT3090, REV_RT3090E) || @@ -3190,14 +3310,16 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2800_wait_bbp_ready(rt2x00dev))) return -EACCES; - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) { + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) { rt2800_bbp_read(rt2x00dev, 4, &value); rt2x00_set_field8(&value, BBP4_MAC_IF_CTRL, 1); rt2800_bbp_write(rt2x00dev, 4, value); } if (rt2800_is_305x_soc(rt2x00dev) || + rt2x00_rt(rt2x00dev, RT3290) || rt2x00_rt(rt2x00dev, RT3572) || rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) @@ -3206,20 +3328,26 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2800_bbp_write(rt2x00dev, 65, 0x2c); rt2800_bbp_write(rt2x00dev, 66, 0x38); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 68, 0x0b); if (rt2x00_rt_rev(rt2x00dev, RT2860, REV_RT2860C)) { rt2800_bbp_write(rt2x00dev, 69, 0x16); rt2800_bbp_write(rt2x00dev, 73, 0x12); - } else if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) { + } else if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) { rt2800_bbp_write(rt2x00dev, 69, 0x12); rt2800_bbp_write(rt2x00dev, 73, 0x13); rt2800_bbp_write(rt2x00dev, 75, 0x46); rt2800_bbp_write(rt2x00dev, 76, 0x28); - rt2800_bbp_write(rt2x00dev, 77, 0x59); + + if (rt2x00_rt(rt2x00dev, RT3290)) + rt2800_bbp_write(rt2x00dev, 77, 0x58); + else + rt2800_bbp_write(rt2x00dev, 77, 0x59); } else { rt2800_bbp_write(rt2x00dev, 69, 0x12); rt2800_bbp_write(rt2x00dev, 73, 0x10); @@ -3244,23 +3372,33 @@ static 
int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2800_bbp_write(rt2x00dev, 81, 0x37); } + if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_bbp_write(rt2x00dev, 74, 0x0b); + rt2800_bbp_write(rt2x00dev, 79, 0x18); + rt2800_bbp_write(rt2x00dev, 80, 0x09); + rt2800_bbp_write(rt2x00dev, 81, 0x33); + } + rt2800_bbp_write(rt2x00dev, 82, 0x62); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 83, 0x7a); else rt2800_bbp_write(rt2x00dev, 83, 0x6a); if (rt2x00_rt_rev(rt2x00dev, RT2860, REV_RT2860D)) rt2800_bbp_write(rt2x00dev, 84, 0x19); - else if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + else if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 84, 0x9a); else rt2800_bbp_write(rt2x00dev, 84, 0x99); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 86, 0x38); else rt2800_bbp_write(rt2x00dev, 86, 0x00); @@ -3270,8 +3408,9 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2800_bbp_write(rt2x00dev, 91, 0x04); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 92, 0x02); else rt2800_bbp_write(rt2x00dev, 92, 0x00); @@ -3285,6 +3424,7 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2x00_rt_rev_gte(rt2x00dev, RT3071, REV_RT3071E) || rt2x00_rt_rev_gte(rt2x00dev, RT3090, REV_RT3090E) || rt2x00_rt_rev_gte(rt2x00dev, RT3390, REV_RT3390E) || + rt2x00_rt(rt2x00dev, RT3290) || rt2x00_rt(rt2x00dev, RT3572) || rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392) || @@ -3293,27 +3433,32 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) else rt2800_bbp_write(rt2x00dev, 103, 0x00); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 104, 0x92); if (rt2800_is_305x_soc(rt2x00dev)) rt2800_bbp_write(rt2x00dev, 105, 0x01); + else if (rt2x00_rt(rt2x00dev, RT3290)) + rt2800_bbp_write(rt2x00dev, 105, 0x1c); else if (rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 105, 0x3c); else rt2800_bbp_write(rt2x00dev, 105, 0x05); - if (rt2x00_rt(rt2x00dev, RT5390)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390)) rt2800_bbp_write(rt2x00dev, 106, 0x03); else if (rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 106, 0x12); else rt2800_bbp_write(rt2x00dev, 106, 0x35); - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) rt2800_bbp_write(rt2x00dev, 128, 0x12); if (rt2x00_rt(rt2x00dev, RT5392)) { @@ -3338,6 +3483,29 @@ static int rt2800_init_bbp(struct rt2x00_dev *rt2x00dev) rt2800_bbp_write(rt2x00dev, 138, value); } + if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_bbp_write(rt2x00dev, 67, 0x24); + rt2800_bbp_write(rt2x00dev, 143, 0x04); + rt2800_bbp_write(rt2x00dev, 142, 0x99); + rt2800_bbp_write(rt2x00dev, 150, 0x30); + rt2800_bbp_write(rt2x00dev, 151, 0x2e); + rt2800_bbp_write(rt2x00dev, 152, 0x20); + 
rt2800_bbp_write(rt2x00dev, 153, 0x34); + rt2800_bbp_write(rt2x00dev, 154, 0x40); + rt2800_bbp_write(rt2x00dev, 155, 0x3b); + rt2800_bbp_write(rt2x00dev, 253, 0x04); + + rt2800_bbp_read(rt2x00dev, 47, &value); + rt2x00_set_field8(&value, BBP47_TSSI_ADC6, 1); + rt2800_bbp_write(rt2x00dev, 47, value); + + /* Use 5-bit ADC for Acquisition and 8-bit ADC for data */ + rt2800_bbp_read(rt2x00dev, 3, &value); + rt2x00_set_field8(&value, BBP3_ADC_MODE_SWITCH, 1); + rt2x00_set_field8(&value, BBP3_ADC_INIT_MODE, 1); + rt2800_bbp_write(rt2x00dev, 3, value); + } + if (rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) { int ant, div_mode; @@ -3470,6 +3638,7 @@ static int rt2800_init_rfcsr(struct rt2x00_dev *rt2x00dev) if (!rt2x00_rt(rt2x00dev, RT3070) && !rt2x00_rt(rt2x00dev, RT3071) && !rt2x00_rt(rt2x00dev, RT3090) && + !rt2x00_rt(rt2x00dev, RT3290) && !rt2x00_rt(rt2x00dev, RT3390) && !rt2x00_rt(rt2x00dev, RT3572) && !rt2x00_rt(rt2x00dev, RT5390) && @@ -3480,8 +3649,9 @@ static int rt2800_init_rfcsr(struct rt2x00_dev *rt2x00dev) /* * Init RF calibration. */ - if (rt2x00_rt(rt2x00dev, RT5390) || - rt2x00_rt(rt2x00dev, RT5392)) { + if (rt2x00_rt(rt2x00dev, RT3290) || + rt2x00_rt(rt2x00dev, RT5390) || + rt2x00_rt(rt2x00dev, RT5392)) { rt2800_rfcsr_read(rt2x00dev, 2, &rfcsr); rt2x00_set_field8(&rfcsr, RFCSR2_RESCAL_EN, 1); rt2800_rfcsr_write(rt2x00dev, 2, rfcsr); @@ -3519,6 +3689,53 @@ static int rt2800_init_rfcsr(struct rt2x00_dev *rt2x00dev) rt2800_rfcsr_write(rt2x00dev, 24, 0x16); rt2800_rfcsr_write(rt2x00dev, 25, 0x01); rt2800_rfcsr_write(rt2x00dev, 29, 0x1f); + } else if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_rfcsr_write(rt2x00dev, 1, 0x0f); + rt2800_rfcsr_write(rt2x00dev, 2, 0x80); + rt2800_rfcsr_write(rt2x00dev, 3, 0x08); + rt2800_rfcsr_write(rt2x00dev, 4, 0x00); + rt2800_rfcsr_write(rt2x00dev, 6, 0xa0); + rt2800_rfcsr_write(rt2x00dev, 8, 0xf3); + rt2800_rfcsr_write(rt2x00dev, 9, 0x02); + rt2800_rfcsr_write(rt2x00dev, 10, 0x53); + rt2800_rfcsr_write(rt2x00dev, 11, 0x4a); + rt2800_rfcsr_write(rt2x00dev, 12, 0x46); + rt2800_rfcsr_write(rt2x00dev, 13, 0x9f); + rt2800_rfcsr_write(rt2x00dev, 18, 0x02); + rt2800_rfcsr_write(rt2x00dev, 22, 0x20); + rt2800_rfcsr_write(rt2x00dev, 25, 0x83); + rt2800_rfcsr_write(rt2x00dev, 26, 0x82); + rt2800_rfcsr_write(rt2x00dev, 27, 0x09); + rt2800_rfcsr_write(rt2x00dev, 29, 0x10); + rt2800_rfcsr_write(rt2x00dev, 30, 0x10); + rt2800_rfcsr_write(rt2x00dev, 31, 0x80); + rt2800_rfcsr_write(rt2x00dev, 32, 0x80); + rt2800_rfcsr_write(rt2x00dev, 33, 0x00); + rt2800_rfcsr_write(rt2x00dev, 34, 0x05); + rt2800_rfcsr_write(rt2x00dev, 35, 0x12); + rt2800_rfcsr_write(rt2x00dev, 36, 0x00); + rt2800_rfcsr_write(rt2x00dev, 38, 0x85); + rt2800_rfcsr_write(rt2x00dev, 39, 0x1b); + rt2800_rfcsr_write(rt2x00dev, 40, 0x0b); + rt2800_rfcsr_write(rt2x00dev, 41, 0xbb); + rt2800_rfcsr_write(rt2x00dev, 42, 0xd5); + rt2800_rfcsr_write(rt2x00dev, 43, 0x7b); + rt2800_rfcsr_write(rt2x00dev, 44, 0x0e); + rt2800_rfcsr_write(rt2x00dev, 45, 0xa2); + rt2800_rfcsr_write(rt2x00dev, 46, 0x73); + rt2800_rfcsr_write(rt2x00dev, 47, 0x00); + rt2800_rfcsr_write(rt2x00dev, 48, 0x10); + rt2800_rfcsr_write(rt2x00dev, 49, 0x98); + rt2800_rfcsr_write(rt2x00dev, 52, 0x38); + rt2800_rfcsr_write(rt2x00dev, 53, 0x00); + rt2800_rfcsr_write(rt2x00dev, 54, 0x78); + rt2800_rfcsr_write(rt2x00dev, 55, 0x43); + rt2800_rfcsr_write(rt2x00dev, 56, 0x02); + rt2800_rfcsr_write(rt2x00dev, 57, 0x80); + rt2800_rfcsr_write(rt2x00dev, 58, 0x7f); + rt2800_rfcsr_write(rt2x00dev, 59, 0x09); + rt2800_rfcsr_write(rt2x00dev, 60, 
0x45); + rt2800_rfcsr_write(rt2x00dev, 61, 0xc1); } else if (rt2x00_rt(rt2x00dev, RT3390)) { rt2800_rfcsr_write(rt2x00dev, 0, 0xa0); rt2800_rfcsr_write(rt2x00dev, 1, 0xe1); @@ -3927,6 +4144,12 @@ static int rt2800_init_rfcsr(struct rt2x00_dev *rt2x00dev) rt2800_rfcsr_write(rt2x00dev, 27, rfcsr); } + if (rt2x00_rt(rt2x00dev, RT3290)) { + rt2800_rfcsr_read(rt2x00dev, 29, &rfcsr); + rt2x00_set_field8(&rfcsr, RFCSR29_RSSI_GAIN, 3); + rt2800_rfcsr_write(rt2x00dev, 29, rfcsr); + } + if (rt2x00_rt(rt2x00dev, RT5390) || rt2x00_rt(rt2x00dev, RT5392)) { rt2800_rfcsr_read(rt2x00dev, 38, &rfcsr); @@ -4033,9 +4256,14 @@ EXPORT_SYMBOL_GPL(rt2800_disable_radio); int rt2800_efuse_detect(struct rt2x00_dev *rt2x00dev) { u32 reg; + u16 efuse_ctrl_reg; - rt2800_register_read(rt2x00dev, EFUSE_CTRL, ®); + if (rt2x00_rt(rt2x00dev, RT3290)) + efuse_ctrl_reg = EFUSE_CTRL_3290; + else + efuse_ctrl_reg = EFUSE_CTRL; + rt2800_register_read(rt2x00dev, efuse_ctrl_reg, ®); return rt2x00_get_field32(reg, EFUSE_CTRL_PRESENT); } EXPORT_SYMBOL_GPL(rt2800_efuse_detect); @@ -4043,27 +4271,44 @@ EXPORT_SYMBOL_GPL(rt2800_efuse_detect); static void rt2800_efuse_read(struct rt2x00_dev *rt2x00dev, unsigned int i) { u32 reg; - + u16 efuse_ctrl_reg; + u16 efuse_data0_reg; + u16 efuse_data1_reg; + u16 efuse_data2_reg; + u16 efuse_data3_reg; + + if (rt2x00_rt(rt2x00dev, RT3290)) { + efuse_ctrl_reg = EFUSE_CTRL_3290; + efuse_data0_reg = EFUSE_DATA0_3290; + efuse_data1_reg = EFUSE_DATA1_3290; + efuse_data2_reg = EFUSE_DATA2_3290; + efuse_data3_reg = EFUSE_DATA3_3290; + } else { + efuse_ctrl_reg = EFUSE_CTRL; + efuse_data0_reg = EFUSE_DATA0; + efuse_data1_reg = EFUSE_DATA1; + efuse_data2_reg = EFUSE_DATA2; + efuse_data3_reg = EFUSE_DATA3; + } mutex_lock(&rt2x00dev->csr_mutex); - rt2800_register_read_lock(rt2x00dev, EFUSE_CTRL, ®); + rt2800_register_read_lock(rt2x00dev, efuse_ctrl_reg, ®); rt2x00_set_field32(®, EFUSE_CTRL_ADDRESS_IN, i); rt2x00_set_field32(®, EFUSE_CTRL_MODE, 0); rt2x00_set_field32(®, EFUSE_CTRL_KICK, 1); - rt2800_register_write_lock(rt2x00dev, EFUSE_CTRL, reg); + rt2800_register_write_lock(rt2x00dev, efuse_ctrl_reg, reg); /* Wait until the EEPROM has been loaded */ - rt2800_regbusy_read(rt2x00dev, EFUSE_CTRL, EFUSE_CTRL_KICK, ®); - + rt2800_regbusy_read(rt2x00dev, efuse_ctrl_reg, EFUSE_CTRL_KICK, ®); /* Apparently the data is read from end to start */ - rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3, ®); + rt2800_register_read_lock(rt2x00dev, efuse_data3_reg, ®); /* The returned value is in CPU order, but eeprom is le */ *(u32 *)&rt2x00dev->eeprom[i] = cpu_to_le32(reg); - rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2, ®); + rt2800_register_read_lock(rt2x00dev, efuse_data2_reg, ®); *(u32 *)&rt2x00dev->eeprom[i + 2] = cpu_to_le32(reg); - rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1, ®); + rt2800_register_read_lock(rt2x00dev, efuse_data1_reg, ®); *(u32 *)&rt2x00dev->eeprom[i + 4] = cpu_to_le32(reg); - rt2800_register_read_lock(rt2x00dev, EFUSE_DATA0, ®); + rt2800_register_read_lock(rt2x00dev, efuse_data0_reg, ®); *(u32 *)&rt2x00dev->eeprom[i + 6] = cpu_to_le32(reg); mutex_unlock(&rt2x00dev->csr_mutex); @@ -4090,7 +4335,7 @@ int rt2800_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } @@ -4225,9 +4470,14 @@ int rt2800_init_eeprom(struct rt2x00_dev *rt2x00dev) * RT28xx/RT30xx: defined in "EEPROM_NIC_CONF0_RF_TYPE" field * RT53xx: defined in 
"EEPROM_CHIP_ID" field */ - rt2800_register_read(rt2x00dev, MAC_CSR0, ®); - if (rt2x00_get_field32(reg, MAC_CSR0_CHIPSET) == RT5390 || - rt2x00_get_field32(reg, MAC_CSR0_CHIPSET) == RT5392) + if (rt2x00_rt(rt2x00dev, RT3290)) + rt2800_register_read(rt2x00dev, MAC_CSR0_3290, ®); + else + rt2800_register_read(rt2x00dev, MAC_CSR0, ®); + + if (rt2x00_get_field32(reg, MAC_CSR0_CHIPSET) == RT3290 || + rt2x00_get_field32(reg, MAC_CSR0_CHIPSET) == RT5390 || + rt2x00_get_field32(reg, MAC_CSR0_CHIPSET) == RT5392) rt2x00_eeprom_read(rt2x00dev, EEPROM_CHIP_ID, &value); else value = rt2x00_get_field16(eeprom, EEPROM_NIC_CONF0_RF_TYPE); @@ -4242,6 +4492,7 @@ int rt2800_init_eeprom(struct rt2x00_dev *rt2x00dev) case RT3070: case RT3071: case RT3090: + case RT3290: case RT3390: case RT3572: case RT5390: @@ -4262,10 +4513,13 @@ int rt2800_init_eeprom(struct rt2x00_dev *rt2x00dev) case RF3021: case RF3022: case RF3052: + case RF3290: case RF3320: + case RF5360: case RF5370: case RF5372: case RF5390: + case RF5392: break; default: ERROR(rt2x00dev, "Invalid RF chipset 0x%04x detected.\n", @@ -4576,10 +4830,13 @@ int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev) rt2x00_rf(rt2x00dev, RF2020) || rt2x00_rf(rt2x00dev, RF3021) || rt2x00_rf(rt2x00dev, RF3022) || + rt2x00_rf(rt2x00dev, RF3290) || rt2x00_rf(rt2x00dev, RF3320) || + rt2x00_rf(rt2x00dev, RF5360) || rt2x00_rf(rt2x00dev, RF5370) || rt2x00_rf(rt2x00dev, RF5372) || - rt2x00_rf(rt2x00dev, RF5390)) { + rt2x00_rf(rt2x00dev, RF5390) || + rt2x00_rf(rt2x00dev, RF5392)) { spec->num_channels = 14; spec->channels = rf_vals_3x; } else if (rt2x00_rf(rt2x00dev, RF3052)) { @@ -4662,9 +4919,12 @@ int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev) case RF3022: case RF3320: case RF3052: + case RF3290: + case RF5360: case RF5370: case RF5372: case RF5390: + case RF5392: __set_bit(CAPABILITY_VCO_RECALIBRATION, &rt2x00dev->cap_flags); break; } diff --git a/drivers/net/wireless/rt2x00/rt2800pci.c b/drivers/net/wireless/rt2x00/rt2800pci.c index cad25bfebd7a..235376e9cb04 100644 --- a/drivers/net/wireless/rt2x00/rt2800pci.c +++ b/drivers/net/wireless/rt2x00/rt2800pci.c @@ -280,7 +280,13 @@ static void rt2800pci_stop_queue(struct data_queue *queue) */ static char *rt2800pci_get_firmware_name(struct rt2x00_dev *rt2x00dev) { - return FIRMWARE_RT2860; + /* + * Chip rt3290 use specific 4KB firmware named rt3290.bin. + */ + if (rt2x00_rt(rt2x00dev, RT3290)) + return FIRMWARE_RT3290; + else + return FIRMWARE_RT2860; } static int rt2800pci_write_firmware(struct rt2x00_dev *rt2x00dev, @@ -974,6 +980,66 @@ static int rt2800pci_validate_eeprom(struct rt2x00_dev *rt2x00dev) return rt2800_validate_eeprom(rt2x00dev); } +static int rt2800_enable_wlan_rt3290(struct rt2x00_dev *rt2x00dev) +{ + u32 reg; + int i, count; + + rt2800_register_read(rt2x00dev, WLAN_FUN_CTRL, ®); + if (rt2x00_get_field32(reg, WLAN_EN)) + return 0; + + rt2x00_set_field32(®, WLAN_GPIO_OUT_OE_BIT_ALL, 0xff); + rt2x00_set_field32(®, FRC_WL_ANT_SET, 1); + rt2x00_set_field32(®, WLAN_CLK_EN, 0); + rt2x00_set_field32(®, WLAN_EN, 1); + rt2800_register_write(rt2x00dev, WLAN_FUN_CTRL, reg); + + udelay(REGISTER_BUSY_DELAY); + + count = 0; + do { + /* + * Check PLL_LD & XTAL_RDY. 
+ */ + for (i = 0; i < REGISTER_BUSY_COUNT; i++) { + rt2800_register_read(rt2x00dev, CMB_CTRL, ®); + if (rt2x00_get_field32(reg, PLL_LD) && + rt2x00_get_field32(reg, XTAL_RDY)) + break; + udelay(REGISTER_BUSY_DELAY); + } + + if (i >= REGISTER_BUSY_COUNT) { + + if (count >= 10) + return -EIO; + + rt2800_register_write(rt2x00dev, 0x58, 0x018); + udelay(REGISTER_BUSY_DELAY); + rt2800_register_write(rt2x00dev, 0x58, 0x418); + udelay(REGISTER_BUSY_DELAY); + rt2800_register_write(rt2x00dev, 0x58, 0x618); + udelay(REGISTER_BUSY_DELAY); + count++; + } else { + count = 0; + } + + rt2800_register_read(rt2x00dev, WLAN_FUN_CTRL, ®); + rt2x00_set_field32(®, PCIE_APP0_CLK_REQ, 0); + rt2x00_set_field32(®, WLAN_CLK_EN, 1); + rt2x00_set_field32(®, WLAN_RESET, 1); + rt2800_register_write(rt2x00dev, WLAN_FUN_CTRL, reg); + udelay(10); + rt2x00_set_field32(®, WLAN_RESET, 0); + rt2800_register_write(rt2x00dev, WLAN_FUN_CTRL, reg); + udelay(10); + rt2800_register_write(rt2x00dev, INT_SOURCE_CSR, 0x7fffffff); + } while (count != 0); + + return 0; +} static int rt2800pci_probe_hw(struct rt2x00_dev *rt2x00dev) { int retval; @@ -997,6 +1063,17 @@ static int rt2800pci_probe_hw(struct rt2x00_dev *rt2x00dev) return retval; /* + * In probe phase call rt2800_enable_wlan_rt3290 to enable wlan + * clk for rt3290. That avoid the MCU fail in start phase. + */ + if (rt2x00_rt(rt2x00dev, RT3290)) { + retval = rt2800_enable_wlan_rt3290(rt2x00dev); + + if (retval) + return retval; + } + + /* * This device has multiple filters for control frames * and has a separate filter for PS Poll frames. */ @@ -1175,6 +1252,9 @@ static DEFINE_PCI_DEVICE_TABLE(rt2800pci_device_table) = { { PCI_DEVICE(0x1432, 0x7768) }, { PCI_DEVICE(0x1462, 0x891a) }, { PCI_DEVICE(0x1a3b, 0x1059) }, +#ifdef CONFIG_RT2800PCI_RT3290 + { PCI_DEVICE(0x1814, 0x3290) }, +#endif #ifdef CONFIG_RT2800PCI_RT33XX { PCI_DEVICE(0x1814, 0x3390) }, #endif @@ -1188,6 +1268,7 @@ static DEFINE_PCI_DEVICE_TABLE(rt2800pci_device_table) = { { PCI_DEVICE(0x1814, 0x3593) }, #endif #ifdef CONFIG_RT2800PCI_RT53XX + { PCI_DEVICE(0x1814, 0x5360) }, { PCI_DEVICE(0x1814, 0x5362) }, { PCI_DEVICE(0x1814, 0x5390) }, { PCI_DEVICE(0x1814, 0x5392) }, diff --git a/drivers/net/wireless/rt2x00/rt2800pci.h b/drivers/net/wireless/rt2x00/rt2800pci.h index 70e050d904c8..ab22a087c50d 100644 --- a/drivers/net/wireless/rt2x00/rt2800pci.h +++ b/drivers/net/wireless/rt2x00/rt2800pci.h @@ -47,6 +47,7 @@ * 8051 firmware image. 
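/*
 * Illustrative sketch, not part of the patch: the bounded-poll pattern
 * used by rt2800_enable_wlan_rt3290() above, which waits for PLL_LD and
 * XTAL_RDY in CMB_CTRL and, after repeated timeouts, gives up with -EIO.
 * The helper name and poll constants here are assumptions; the real
 * function also retries a short register recovery sequence between polls.
 */
static int wait_bits_set(struct rt2x00_dev *rt2x00dev, unsigned int offset,
                         u32 bits)
{
        u32 reg;
        int i;

        for (i = 0; i < 100; i++) {             /* bounded, never spins forever */
                rt2800_register_read(rt2x00dev, offset, &reg);
                if ((reg & bits) == bits)
                        return 0;               /* all requested bits are up */
                udelay(10);
        }

        return -ETIMEDOUT;
}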
*/ #define FIRMWARE_RT2860 "rt2860.bin" +#define FIRMWARE_RT3290 "rt3290.bin" #define FIRMWARE_IMAGE_BASE 0x2000 /* diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c index bf78317a6adb..6cf336595e25 100644 --- a/drivers/net/wireless/rt2x00/rt2800usb.c +++ b/drivers/net/wireless/rt2x00/rt2800usb.c @@ -971,6 +971,7 @@ static struct usb_device_id rt2800usb_device_table[] = { { USB_DEVICE(0x0411, 0x015d) }, { USB_DEVICE(0x0411, 0x016f) }, { USB_DEVICE(0x0411, 0x01a2) }, + { USB_DEVICE(0x0411, 0x01ee) }, /* Corega */ { USB_DEVICE(0x07aa, 0x002f) }, { USB_DEVICE(0x07aa, 0x003c) }, @@ -1137,6 +1138,8 @@ static struct usb_device_id rt2800usb_device_table[] = { #ifdef CONFIG_RT2800USB_RT33XX /* Belkin */ { USB_DEVICE(0x050d, 0x945b) }, + /* D-Link */ + { USB_DEVICE(0x2001, 0x3c17) }, /* Panasonic */ { USB_DEVICE(0x083a, 0xb511) }, /* Philips */ @@ -1237,7 +1240,6 @@ static struct usb_device_id rt2800usb_device_table[] = { /* D-Link */ { USB_DEVICE(0x07d1, 0x3c0b) }, { USB_DEVICE(0x07d1, 0x3c17) }, - { USB_DEVICE(0x2001, 0x3c17) }, /* Encore */ { USB_DEVICE(0x203d, 0x14a1) }, /* Gemtek */ diff --git a/drivers/net/wireless/rt2x00/rt2x00.h b/drivers/net/wireless/rt2x00/rt2x00.h index 8f754025b06e..8afb546c2b2d 100644 --- a/drivers/net/wireless/rt2x00/rt2x00.h +++ b/drivers/net/wireless/rt2x00/rt2x00.h @@ -187,6 +187,7 @@ struct rt2x00_chip { #define RT3070 0x3070 #define RT3071 0x3071 #define RT3090 0x3090 /* 2.4GHz PCIe */ +#define RT3290 0x3290 #define RT3390 0x3390 #define RT3572 0x3572 #define RT3593 0x3593 diff --git a/drivers/net/wireless/rt2x00/rt2x00config.c b/drivers/net/wireless/rt2x00/rt2x00config.c index e7361d913e8e..49a63e973934 100644 --- a/drivers/net/wireless/rt2x00/rt2x00config.c +++ b/drivers/net/wireless/rt2x00/rt2x00config.c @@ -102,7 +102,7 @@ void rt2x00lib_config_erp(struct rt2x00_dev *rt2x00dev, /* Update the AID, this is needed for dynamic PS support */ rt2x00dev->aid = bss_conf->assoc ? bss_conf->aid : 0; - rt2x00dev->last_beacon = bss_conf->last_tsf; + rt2x00dev->last_beacon = bss_conf->sync_tsf; /* Update global beacon interval time, this is needed for PS support */ rt2x00dev->beacon_int = bss_conf->beacon_int; diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c index e5404e576251..a6b88bd4a1a5 100644 --- a/drivers/net/wireless/rt2x00/rt2x00dev.c +++ b/drivers/net/wireless/rt2x00/rt2x00dev.c @@ -1161,6 +1161,8 @@ int rt2x00lib_probe_dev(struct rt2x00_dev *rt2x00dev) BIT(NL80211_IFTYPE_MESH_POINT) | BIT(NL80211_IFTYPE_WDS); + rt2x00dev->hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN; + /* * Initialize work. */ diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c index dd24b2663b5e..4ff26c2159bf 100644 --- a/drivers/net/wireless/rt2x00/rt2x00mac.c +++ b/drivers/net/wireless/rt2x00/rt2x00mac.c @@ -506,9 +506,19 @@ int rt2x00mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags)) return 0; - else if (!test_bit(CAPABILITY_HW_CRYPTO, &rt2x00dev->cap_flags)) + + if (!test_bit(CAPABILITY_HW_CRYPTO, &rt2x00dev->cap_flags)) + return -EOPNOTSUPP; + + /* + * To support IBSS RSN, don't program group keys in IBSS, the + * hardware will then not attempt to decrypt the frames. 
+ */ + if (vif->type == NL80211_IFTYPE_ADHOC && + !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) return -EOPNOTSUPP; - else if (key->keylen > 32) + + if (key->keylen > 32) return -ENOSPC; memset(&crypto, 0, sizeof(crypto)); diff --git a/drivers/net/wireless/rt2x00/rt2x00pci.c b/drivers/net/wireless/rt2x00/rt2x00pci.c index 0a4653a92cab..a0c8caef3b0a 100644 --- a/drivers/net/wireless/rt2x00/rt2x00pci.c +++ b/drivers/net/wireless/rt2x00/rt2x00pci.c @@ -256,6 +256,7 @@ int rt2x00pci_probe(struct pci_dev *pci_dev, const struct rt2x00_ops *ops) struct ieee80211_hw *hw; struct rt2x00_dev *rt2x00dev; int retval; + u16 chip; retval = pci_enable_device(pci_dev); if (retval) { @@ -305,6 +306,14 @@ int rt2x00pci_probe(struct pci_dev *pci_dev, const struct rt2x00_ops *ops) if (retval) goto exit_free_device; + /* + * Because rt3290 chip use different efuse offset to read efuse data. + * So before read efuse it need to indicate it is the + * rt3290 or not. + */ + pci_read_config_word(pci_dev, PCI_DEVICE_ID, &chip); + rt2x00dev->chip.rt = chip; + retval = rt2x00lib_probe_dev(rt2x00dev); if (retval) goto exit_free_reg; diff --git a/drivers/net/wireless/rt2x00/rt2x00queue.c b/drivers/net/wireless/rt2x00/rt2x00queue.c index 2fd830103415..f7e74a0a7759 100644 --- a/drivers/net/wireless/rt2x00/rt2x00queue.c +++ b/drivers/net/wireless/rt2x00/rt2x00queue.c @@ -774,9 +774,7 @@ int rt2x00queue_update_beacon(struct rt2x00_dev *rt2x00dev, bool rt2x00queue_for_each_entry(struct data_queue *queue, enum queue_index start, enum queue_index end, - void *data, - bool (*fn)(struct queue_entry *entry, - void *data)) + bool (*fn)(struct queue_entry *entry)) { unsigned long irqflags; unsigned int index_start; @@ -807,17 +805,17 @@ bool rt2x00queue_for_each_entry(struct data_queue *queue, */ if (index_start < index_end) { for (i = index_start; i < index_end; i++) { - if (fn(&queue->entries[i], data)) + if (fn(&queue->entries[i])) return true; } } else { for (i = index_start; i < queue->limit; i++) { - if (fn(&queue->entries[i], data)) + if (fn(&queue->entries[i])) return true; } for (i = 0; i < index_end; i++) { - if (fn(&queue->entries[i], data)) + if (fn(&queue->entries[i])) return true; } } diff --git a/drivers/net/wireless/rt2x00/rt2x00queue.h b/drivers/net/wireless/rt2x00/rt2x00queue.h index 5f1392c72673..9b8c10a86dee 100644 --- a/drivers/net/wireless/rt2x00/rt2x00queue.h +++ b/drivers/net/wireless/rt2x00/rt2x00queue.h @@ -584,7 +584,6 @@ struct data_queue_desc { * @queue: Pointer to @data_queue * @start: &enum queue_index Pointer to start index * @end: &enum queue_index Pointer to end index - * @data: Data to pass to the callback function * @fn: The function to call for each &struct queue_entry * * This will walk through all entries in the queue, in chronological @@ -597,9 +596,7 @@ struct data_queue_desc { bool rt2x00queue_for_each_entry(struct data_queue *queue, enum queue_index start, enum queue_index end, - void *data, - bool (*fn)(struct queue_entry *entry, - void *data)); + bool (*fn)(struct queue_entry *entry)); /** * rt2x00queue_empty - Check if the queue is empty. 
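/*
 * Illustrative sketch, not part of the patch: the simplified iteration
 * contract after the rt2x00queue_for_each_entry() change above -- the
 * callback now takes only the queue entry (returning true stops the walk
 * early) and reaches per-device state through entry->queue->rt2x00dev
 * instead of a passed-in "void *data".  The callback name is assumed.
 */
static bool example_kick_entry(struct queue_entry *entry)
{
        struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev;

        /* ... hand the entry to the hardware; return true to abort the walk ... */
        (void)rt2x00dev;
        return false;
}

/*
 * Walking the done-to-pending range of a TX queue then looks like:
 *      rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX,
 *                                 example_kick_entry);
 */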
diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.c b/drivers/net/wireless/rt2x00/rt2x00usb.c index 74ecc33fdd90..40ea80725a96 100644 --- a/drivers/net/wireless/rt2x00/rt2x00usb.c +++ b/drivers/net/wireless/rt2x00/rt2x00usb.c @@ -285,7 +285,7 @@ static void rt2x00usb_interrupt_txdone(struct urb *urb) queue_work(rt2x00dev->workqueue, &rt2x00dev->txdone_work); } -static bool rt2x00usb_kick_tx_entry(struct queue_entry *entry, void* data) +static bool rt2x00usb_kick_tx_entry(struct queue_entry *entry) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); @@ -390,7 +390,7 @@ static void rt2x00usb_interrupt_rxdone(struct urb *urb) queue_work(rt2x00dev->workqueue, &rt2x00dev->rxdone_work); } -static bool rt2x00usb_kick_rx_entry(struct queue_entry *entry, void* data) +static bool rt2x00usb_kick_rx_entry(struct queue_entry *entry) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); @@ -427,18 +427,12 @@ void rt2x00usb_kick_queue(struct data_queue *queue) case QID_AC_BE: case QID_AC_BK: if (!rt2x00queue_empty(queue)) - rt2x00queue_for_each_entry(queue, - Q_INDEX_DONE, - Q_INDEX, - NULL, + rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX, rt2x00usb_kick_tx_entry); break; case QID_RX: if (!rt2x00queue_full(queue)) - rt2x00queue_for_each_entry(queue, - Q_INDEX, - Q_INDEX_DONE, - NULL, + rt2x00queue_for_each_entry(queue, Q_INDEX, Q_INDEX_DONE, rt2x00usb_kick_rx_entry); break; default: @@ -447,7 +441,7 @@ void rt2x00usb_kick_queue(struct data_queue *queue) } EXPORT_SYMBOL_GPL(rt2x00usb_kick_queue); -static bool rt2x00usb_flush_entry(struct queue_entry *entry, void* data) +static bool rt2x00usb_flush_entry(struct queue_entry *entry) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct queue_entry_priv_usb *entry_priv = entry->priv_data; @@ -474,7 +468,7 @@ void rt2x00usb_flush_queue(struct data_queue *queue, bool drop) unsigned int i; if (drop) - rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX, NULL, + rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX, rt2x00usb_flush_entry); /* @@ -565,7 +559,7 @@ void rt2x00usb_clear_entry(struct queue_entry *entry) entry->flags = 0; if (entry->queue->qid == QID_RX) - rt2x00usb_kick_rx_entry(entry, NULL); + rt2x00usb_kick_rx_entry(entry); } EXPORT_SYMBOL_GPL(rt2x00usb_clear_entry); diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c index ee22bd74579d..f32259686b45 100644 --- a/drivers/net/wireless/rt2x00/rt61pci.c +++ b/drivers/net/wireless/rt2x00/rt61pci.c @@ -2415,7 +2415,7 @@ static int rt61pci_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c index 77ccbbc7da41..ba6e434b859d 100644 --- a/drivers/net/wireless/rt2x00/rt73usb.c +++ b/drivers/net/wireless/rt2x00/rt73usb.c @@ -1770,7 +1770,7 @@ static int rt73usb_validate_eeprom(struct rt2x00_dev *rt2x00dev) */ mac = rt2x00_eeprom_addr(rt2x00dev, EEPROM_MAC_ADDR_0); if (!is_valid_ether_addr(mac)) { - random_ether_addr(mac); + eth_random_addr(mac); EEPROM(rt2x00dev, "MAC: %pM\n", mac); } diff --git a/drivers/net/wireless/rtl818x/rtl8180/dev.c b/drivers/net/wireless/rtl818x/rtl8180/dev.c index 2bebcb71a1e9..aceaf689f737 100644 --- 
a/drivers/net/wireless/rtl818x/rtl8180/dev.c +++ b/drivers/net/wireless/rtl818x/rtl8180/dev.c @@ -47,6 +47,8 @@ static DEFINE_PCI_DEVICE_TABLE(rtl8180_table) = { { PCI_DEVICE(0x1799, 0x6001) }, { PCI_DEVICE(0x1799, 0x6020) }, { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x3300) }, + { PCI_DEVICE(0x1186, 0x3301) }, + { PCI_DEVICE(0x1432, 0x7106) }, { } }; @@ -1076,7 +1078,7 @@ static int __devinit rtl8180_probe(struct pci_dev *pdev, if (!is_valid_ether_addr(mac_addr)) { printk(KERN_WARNING "%s (rtl8180): Invalid hwaddr! Using" " randomly generated MAC addr\n", pci_name(pdev)); - random_ether_addr(mac_addr); + eth_random_addr(mac_addr); } SET_IEEE80211_PERM_ADDR(dev, mac_addr); diff --git a/drivers/net/wireless/rtl818x/rtl8187/dev.c b/drivers/net/wireless/rtl818x/rtl8187/dev.c index 4fb1ca1b86b9..71a30b026089 100644 --- a/drivers/net/wireless/rtl818x/rtl8187/dev.c +++ b/drivers/net/wireless/rtl818x/rtl8187/dev.c @@ -1486,7 +1486,7 @@ static int __devinit rtl8187_probe(struct usb_interface *intf, if (!is_valid_ether_addr(mac_addr)) { printk(KERN_WARNING "rtl8187: Invalid hwaddr! Using randomly " "generated MAC address\n"); - random_ether_addr(mac_addr); + eth_random_addr(mac_addr); } SET_IEEE80211_PERM_ADDR(dev, mac_addr); diff --git a/drivers/net/wireless/rtlwifi/base.c b/drivers/net/wireless/rtlwifi/base.c index f4c852c6749b..942e56b77b60 100644 --- a/drivers/net/wireless/rtlwifi/base.c +++ b/drivers/net/wireless/rtlwifi/base.c @@ -167,7 +167,7 @@ static const u8 tid_to_ac[] = { 0, /* IEEE80211_AC_VO */ }; -u8 rtl_tid_to_ac(struct ieee80211_hw *hw, u8 tid) +u8 rtl_tid_to_ac(u8 tid) { return tid_to_ac[tid]; } @@ -907,7 +907,7 @@ bool rtl_action_proc(struct ieee80211_hw *hw, struct sk_buff *skb, u8 is_tx) struct ieee80211_hdr *hdr = rtl_get_hdr(skb); struct rtl_priv *rtlpriv = rtl_priv(hw); __le16 fc = hdr->frame_control; - u8 *act = (u8 *) (((u8 *) skb->data + MAC80211_3ADDR_LEN)); + u8 *act = (u8 *)skb->data + MAC80211_3ADDR_LEN; u8 category; if (!ieee80211_is_action(fc)) diff --git a/drivers/net/wireless/rtlwifi/base.h b/drivers/net/wireless/rtlwifi/base.h index 5a23a6d0f49d..f35af0fdaaf0 100644 --- a/drivers/net/wireless/rtlwifi/base.h +++ b/drivers/net/wireless/rtlwifi/base.h @@ -138,7 +138,7 @@ int rtl_send_smps_action(struct ieee80211_hw *hw, enum ieee80211_smps_mode smps); u8 *rtl_find_ie(u8 *data, unsigned int len, u8 ie); void rtl_recognize_peer(struct ieee80211_hw *hw, u8 *data, unsigned int len); -u8 rtl_tid_to_ac(struct ieee80211_hw *hw, u8 tid); +u8 rtl_tid_to_ac(u8 tid); extern struct attribute_group rtl_attribute_group; int rtlwifi_rate_mapping(struct ieee80211_hw *hw, bool isht, u8 desc_rate, bool first_ampdu); diff --git a/drivers/net/wireless/rtlwifi/cam.c b/drivers/net/wireless/rtlwifi/cam.c index 3d8cc4a0c86d..5b4b4d4eaf9e 100644 --- a/drivers/net/wireless/rtlwifi/cam.c +++ b/drivers/net/wireless/rtlwifi/cam.c @@ -128,7 +128,7 @@ u8 rtl_cam_add_one_entry(struct ieee80211_hw *hw, u8 *mac_addr, u32 us_config; struct rtl_priv *rtlpriv = rtl_priv(hw); - RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG, + RT_TRACE(rtlpriv, COMP_SEC, DBG_LOUD, "EntryNo:%x, ulKeyId=%x, ulEncAlg=%x, ulUseDK=%x MacAddr %pM\n", ul_entry_idx, ul_key_id, ul_enc_alg, ul_default_key, mac_addr); @@ -146,7 +146,7 @@ u8 rtl_cam_add_one_entry(struct ieee80211_hw *hw, u8 *mac_addr, } rtl_cam_program_entry(hw, ul_entry_idx, mac_addr, - (u8 *) key_content, us_config); + key_content, us_config); RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG, "<===\n"); @@ -342,7 +342,8 @@ void rtl_cam_del_entry(struct ieee80211_hw *hw, u8 *sta_addr) /* 
Remove from HW Security CAM */ memset(rtlpriv->sec.hwsec_cam_sta_addr[i], 0, ETH_ALEN); rtlpriv->sec.hwsec_cam_bitmap &= ~(BIT(0) << i); - pr_info("&&&&&&&&&del entry %d\n", i); + RT_TRACE(rtlpriv, COMP_SEC, DBG_LOUD, + "del CAM entry %d\n", i); } } return; diff --git a/drivers/net/wireless/rtlwifi/core.c b/drivers/net/wireless/rtlwifi/core.c index 278e9f957e0d..a18ad2a98938 100644 --- a/drivers/net/wireless/rtlwifi/core.c +++ b/drivers/net/wireless/rtlwifi/core.c @@ -680,7 +680,7 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw, mac->short_preamble = bss_conf->use_short_preamble; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_ACK_PREAMBLE, - (u8 *) (&mac->short_preamble)); + &mac->short_preamble); } if (changed & BSS_CHANGED_ERP_SLOT) { @@ -693,7 +693,7 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw, mac->slot_time = RTL_SLOT_TIME_20; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SLOT_TIME, - (u8 *) (&mac->slot_time)); + &mac->slot_time); } if (changed & BSS_CHANGED_HT) { @@ -713,7 +713,7 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw, rcu_read_unlock(); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SHORTGI_DENSITY, - (u8 *) (&mac->max_mss_density)); + &mac->max_mss_density); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AMPDU_FACTOR, &mac->current_ampdu_factor); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AMPDU_MIN_SPACE, @@ -801,7 +801,7 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw, u8 mstatus = RT_MEDIA_CONNECT; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_H2C_FW_JOINBSSRPT, - (u8 *) (&mstatus)); + &mstatus); ppsc->report_linked = true; } } else { @@ -809,7 +809,7 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw, u8 mstatus = RT_MEDIA_DISCONNECT; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_H2C_FW_JOINBSSRPT, - (u8 *)(&mstatus)); + &mstatus); ppsc->report_linked = false; } } @@ -836,7 +836,7 @@ static void rtl_op_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u8 bibss = (mac->opmode == NL80211_IFTYPE_ADHOC) ? 
1 : 0; mac->tsf = tsf; - rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_CORRECT_TSF, (u8 *) (&bibss)); + rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_CORRECT_TSF, &bibss); } static void rtl_op_reset_tsf(struct ieee80211_hw *hw, @@ -845,7 +845,7 @@ static void rtl_op_reset_tsf(struct ieee80211_hw *hw, struct rtl_priv *rtlpriv = rtl_priv(hw); u8 tmp = 0; - rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_DUAL_TSF_RST, (u8 *) (&tmp)); + rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_DUAL_TSF_RST, &tmp); } static void rtl_op_sta_notify(struct ieee80211_hw *hw, diff --git a/drivers/net/wireless/rtlwifi/efuse.c b/drivers/net/wireless/rtlwifi/efuse.c index 1f143800a8d7..8e2f9afb125a 100644 --- a/drivers/net/wireless/rtlwifi/efuse.c +++ b/drivers/net/wireless/rtlwifi/efuse.c @@ -352,7 +352,7 @@ void read_efuse(struct ieee80211_hw *hw, u16 _offset, u16 _size_byte, u8 *pbuf) rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_EFUSE_BYTES, (u8 *)&efuse_utilized); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_EFUSE_USAGE, - (u8 *)&efuse_usage); + &efuse_usage); done: for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++) kfree(efuse_word[i]); @@ -409,7 +409,7 @@ void efuse_shadow_read(struct ieee80211_hw *hw, u8 type, else if (type == 2) efuse_shadow_read_2byte(hw, offset, (u16 *) value); else if (type == 4) - efuse_shadow_read_4byte(hw, offset, (u32 *) value); + efuse_shadow_read_4byte(hw, offset, value); } diff --git a/drivers/net/wireless/rtlwifi/pci.c b/drivers/net/wireless/rtlwifi/pci.c index 2062ea1d7c80..80f75d3ba84a 100644 --- a/drivers/net/wireless/rtlwifi/pci.c +++ b/drivers/net/wireless/rtlwifi/pci.c @@ -480,7 +480,7 @@ static void _rtl_pci_tx_chk_waitq(struct ieee80211_hw *hw) /* we juse use em for BE/BK/VI/VO */ for (tid = 7; tid >= 0; tid--) { - u8 hw_queue = ac_to_hwq[rtl_tid_to_ac(hw, tid)]; + u8 hw_queue = ac_to_hwq[rtl_tid_to_ac(tid)]; struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[hw_queue]; while (!mac->act_scanning && rtlpriv->psc.rfpwr_state == ERFON) { @@ -756,10 +756,10 @@ done: if (index == rtlpci->rxringcount - 1) rtlpriv->cfg->ops->set_desc((u8 *)pdesc, false, HW_DESC_RXERO, - (u8 *)&tmp_one); + &tmp_one); rtlpriv->cfg->ops->set_desc((u8 *)pdesc, false, HW_DESC_RXOWN, - (u8 *)&tmp_one); + &tmp_one); index = (index + 1) % rtlpci->rxringcount; } @@ -934,7 +934,7 @@ static void _rtl_pci_prepare_bcn_tasklet(struct ieee80211_hw *hw) __skb_queue_tail(&ring->queue, pskb); rtlpriv->cfg->ops->set_desc((u8 *) pdesc, true, HW_DESC_OWN, - (u8 *)&temp_one); + &temp_one); return; } @@ -1126,11 +1126,11 @@ static int _rtl_pci_init_rx_ring(struct ieee80211_hw *hw) rxbuffersize); rtlpriv->cfg->ops->set_desc((u8 *) entry, false, HW_DESC_RXOWN, - (u8 *)&tmp_one); + &tmp_one); } rtlpriv->cfg->ops->set_desc((u8 *) entry, false, - HW_DESC_RXERO, (u8 *)&tmp_one); + HW_DESC_RXERO, &tmp_one); } return 0; } @@ -1263,7 +1263,7 @@ int rtl_pci_reset_trx_ring(struct ieee80211_hw *hw) rtlpriv->cfg->ops->set_desc((u8 *) entry, false, HW_DESC_RXOWN, - (u8 *)&tmp_one); + &tmp_one); } rtlpci->rx_ring[rx_queue_idx].idx = 0; } @@ -1273,17 +1273,18 @@ int rtl_pci_reset_trx_ring(struct ieee80211_hw *hw) *after reset, release previous pending packet, *and force the tx idx to the first one */ - spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags); for (i = 0; i < RTL_PCI_MAX_TX_QUEUE_COUNT; i++) { if (rtlpci->tx_ring[i].desc) { struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[i]; while (skb_queue_len(&ring->queue)) { - struct rtl_tx_desc *entry = - &ring->desc[ring->idx]; - struct sk_buff *skb = - __skb_dequeue(&ring->queue); + struct rtl_tx_desc *entry; + struct 
sk_buff *skb; + spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, + flags); + entry = &ring->desc[ring->idx]; + skb = __skb_dequeue(&ring->queue); pci_unmap_single(rtlpci->pdev, rtlpriv->cfg->ops-> get_desc((u8 *) @@ -1291,15 +1292,15 @@ int rtl_pci_reset_trx_ring(struct ieee80211_hw *hw) true, HW_DESC_TXBUFF_ADDR), skb->len, PCI_DMA_TODEVICE); - kfree_skb(skb); ring->idx = (ring->idx + 1) % ring->entries; + spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, + flags); + kfree_skb(skb); } ring->idx = 0; } } - spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags); - return 0; } @@ -1422,7 +1423,7 @@ static int rtl_pci_tx(struct ieee80211_hw *hw, struct sk_buff *skb, __skb_queue_tail(&ring->queue, skb); rtlpriv->cfg->ops->set_desc((u8 *)pdesc, true, - HW_DESC_OWN, (u8 *)&temp_one); + HW_DESC_OWN, &temp_one); if ((ring->entries - skb_queue_len(&ring->queue)) < 2 && diff --git a/drivers/net/wireless/rtlwifi/ps.c b/drivers/net/wireless/rtlwifi/ps.c index 5ae26647f340..13ad33e85577 100644 --- a/drivers/net/wireless/rtlwifi/ps.c +++ b/drivers/net/wireless/rtlwifi/ps.c @@ -333,10 +333,10 @@ static void rtl_lps_set_psmode(struct ieee80211_hw *hw, u8 rt_psmode) rpwm_val = 0x0C; /* RF on */ fw_pwrmode = FW_PS_ACTIVE_MODE; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SET_RPWM, - (u8 *) (&rpwm_val)); + &rpwm_val); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_H2C_FW_PWRMODE, - (u8 *) (&fw_pwrmode)); + &fw_pwrmode); fw_current_inps = false; rtlpriv->cfg->ops->set_hw_reg(hw, @@ -356,11 +356,11 @@ static void rtl_lps_set_psmode(struct ieee80211_hw *hw, u8 rt_psmode) (u8 *) (&fw_current_inps)); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_H2C_FW_PWRMODE, - (u8 *) (&ppsc->fwctrl_psmode)); + &ppsc->fwctrl_psmode); rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SET_RPWM, - (u8 *) (&rpwm_val)); + &rpwm_val); } else { /* Reset the power save related parameters. 
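/*
 * Illustrative sketch, not part of this patch: the rtl_pci_reset_trx_ring()
 * hunk above stops holding irq_th_lock across the whole drain loop; it now
 * takes the lock per dequeued descriptor and calls kfree_skb() only after
 * unlocking.  Below is a plain userspace model of that shape (a pthread
 * mutex stands in for the spinlock, free() for kfree_skb(); all names are
 * made up).
 */
#include <pthread.h>
#include <stdlib.h>

struct node { struct node *next; void *payload; };

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *ring_head;

/* Hold the lock only while unlinking one element, then release it before
 * the (potentially slower) free of the payload. */
static void drain_ring(void)
{
	for (;;) {
		struct node *n;

		pthread_mutex_lock(&ring_lock);
		n = ring_head;
		if (n)
			ring_head = n->next;	/* advance the ring under the lock */
		pthread_mutex_unlock(&ring_lock);

		if (!n)
			break;
		free(n->payload);		/* kfree_skb() equivalent, outside the lock */
		free(n);
	}
}

int main(void)
{
	struct node *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));

	a->payload = malloc(16); a->next = b;
	b->payload = malloc(16); b->next = NULL;
	ring_head = a;
	drain_ring();
	return 0;
}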
*/ ppsc->dot11_psmode = EACTIVE; @@ -446,7 +446,7 @@ void rtl_swlps_beacon(struct ieee80211_hw *hw, void *data, unsigned int len) { struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); - struct ieee80211_hdr *hdr = (void *) data; + struct ieee80211_hdr *hdr = data; struct ieee80211_tim_ie *tim_ie; u8 *tim; u8 tim_len; diff --git a/drivers/net/wireless/rtlwifi/rtl8192c/dm_common.c b/drivers/net/wireless/rtlwifi/rtl8192c/dm_common.c index f7f48c7ac854..a45afda8259c 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192c/dm_common.c +++ b/drivers/net/wireless/rtlwifi/rtl8192c/dm_common.c @@ -656,9 +656,8 @@ static void rtl92c_dm_check_edca_turbo(struct ieee80211_hw *hw) } else { if (rtlpriv->dm.current_turbo_edca) { u8 tmp = AC0_BE; - rtlpriv->cfg->ops->set_hw_reg(hw, - HW_VAR_AC_PARAM, - (u8 *) (&tmp)); + rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, + &tmp); rtlpriv->dm.current_turbo_edca = false; } } diff --git a/drivers/net/wireless/rtlwifi/rtl8192c/fw_common.c b/drivers/net/wireless/rtlwifi/rtl8192c/fw_common.c index 692c8ef5ee89..44febfde9493 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192c/fw_common.c +++ b/drivers/net/wireless/rtlwifi/rtl8192c/fw_common.c @@ -168,7 +168,7 @@ static void _rtl92c_write_fw(struct ieee80211_hw *hw, { struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); - u8 *bufferPtr = (u8 *) buffer; + u8 *bufferPtr = buffer; RT_TRACE(rtlpriv, COMP_FW, DBG_TRACE, "FW size is %d bytes\n", size); @@ -262,7 +262,7 @@ int rtl92c_download_fw(struct ieee80211_hw *hw) return 1; pfwheader = (struct rtl92c_firmware_header *)rtlhal->pfirmware; - pfwdata = (u8 *) rtlhal->pfirmware; + pfwdata = rtlhal->pfirmware; fwsize = rtlhal->fwsize; if (IS_FW_HEADER_EXIST(pfwheader)) { diff --git a/drivers/net/wireless/rtlwifi/rtl8192ce/hw.c b/drivers/net/wireless/rtlwifi/rtl8192ce/hw.c index 5c4d9bc040f1..bd0da7ef290b 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192ce/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192ce/hw.c @@ -214,13 +214,13 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) for (e_aci = 0; e_aci < AC_MAX; e_aci++) { rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *) (&e_aci)); + &e_aci); } break; } case HW_VAR_ACK_PREAMBLE:{ u8 reg_tmp; - u8 short_preamble = (bool) (*(u8 *) val); + u8 short_preamble = (bool)*val; reg_tmp = (mac->cur_40_prime_sc) << 5; if (short_preamble) reg_tmp |= 0x80; @@ -232,7 +232,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) u8 min_spacing_to_set; u8 sec_min_space; - min_spacing_to_set = *((u8 *) val); + min_spacing_to_set = *val; if (min_spacing_to_set <= 7) { sec_min_space = 0; @@ -257,7 +257,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) case HW_VAR_SHORTGI_DENSITY:{ u8 density_to_set; - density_to_set = *((u8 *) val); + density_to_set = *val; mac->min_space_cfg |= (density_to_set << 3); RT_TRACE(rtlpriv, COMP_MLME, DBG_LOUD, @@ -284,7 +284,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) else p_regtoset = regtoset_normal; - factor_toset = *((u8 *) val); + factor_toset = *(val); if (factor_toset <= 3) { factor_toset = (1 << (factor_toset + 2)); if (factor_toset > 0xf) @@ -316,17 +316,17 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_AC_PARAM:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *(val); rtl92c_dm_init_edca_turbo(hw); if (rtlpci->acm_method != eAcmWay2_SW) rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_ACM_CTRL, - (u8 
*) (&e_aci)); + (&e_aci)); break; } case HW_VAR_ACM_CTRL:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *(val); union aci_aifsn *p_aci_aifsn = (union aci_aifsn *)(&(mac->ac[0].aifs)); u8 acm = p_aci_aifsn->f.acm; @@ -382,7 +382,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_RETRY_LIMIT:{ - u8 retry_limit = ((u8 *) (val))[0]; + u8 retry_limit = val[0]; rtl_write_word(rtlpriv, REG_RL, retry_limit << RETRY_LIMIT_SHORT_SHIFT | @@ -396,13 +396,13 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtlefuse->efuse_usedbytes = *((u16 *) val); break; case HW_VAR_EFUSE_USAGE: - rtlefuse->efuse_usedpercentage = *((u8 *) val); + rtlefuse->efuse_usedpercentage = *val; break; case HW_VAR_IO_CMD: rtl92c_phy_set_io_cmd(hw, (*(enum io_type *)val)); break; case HW_VAR_WPA_CONFIG: - rtl_write_byte(rtlpriv, REG_SECCFG, *((u8 *) val)); + rtl_write_byte(rtlpriv, REG_SECCFG, *val); break; case HW_VAR_SET_RPWM:{ u8 rpwm_val; @@ -411,31 +411,30 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) udelay(1); if (rpwm_val & BIT(7)) { - rtl_write_byte(rtlpriv, REG_PCIE_HRPWM, - (*(u8 *) val)); + rtl_write_byte(rtlpriv, REG_PCIE_HRPWM, *val); } else { rtl_write_byte(rtlpriv, REG_PCIE_HRPWM, - ((*(u8 *) val) | BIT(7))); + *val | BIT(7)); } break; } case HW_VAR_H2C_FW_PWRMODE:{ - u8 psmode = (*(u8 *) val); + u8 psmode = *val; if ((psmode != FW_PS_ACTIVE_MODE) && (!IS_92C_SERIAL(rtlhal->version))) { rtl92c_dm_rf_saving(hw, true); } - rtl92c_set_fw_pwrmode_cmd(hw, (*(u8 *) val)); + rtl92c_set_fw_pwrmode_cmd(hw, *val); break; } case HW_VAR_FW_PSMODE_STATUS: ppsc->fw_current_inpsmode = *((bool *) val); break; case HW_VAR_H2C_FW_JOINBSSRPT:{ - u8 mstatus = (*(u8 *) val); + u8 mstatus = *val; u8 tmp_regcr, tmp_reg422; bool recover = false; @@ -472,7 +471,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtl_write_byte(rtlpriv, REG_CR + 1, (tmp_regcr & ~(BIT(0)))); } - rtl92c_set_fw_joinbss_report_cmd(hw, (*(u8 *) val)); + rtl92c_set_fw_joinbss_report_cmd(hw, *val); break; } @@ -486,7 +485,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_CORRECT_TSF:{ - u8 btype_ibss = ((u8 *) (val))[0]; + u8 btype_ibss = val[0]; if (btype_ibss) _rtl92ce_stop_tx_beacon(hw); @@ -1589,10 +1588,10 @@ static void _rtl92ce_read_adapter_info(struct ieee80211_hw *hw) rtlefuse->autoload_failflag, hwinfo); - rtlefuse->eeprom_channelplan = *(u8 *)&hwinfo[EEPROM_CHANNELPLAN]; + rtlefuse->eeprom_channelplan = *&hwinfo[EEPROM_CHANNELPLAN]; rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION]; rtlefuse->txpwr_fromeprom = true; - rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMER_ID]; + rtlefuse->eeprom_oemid = *&hwinfo[EEPROM_CUSTOMER_ID]; RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid); @@ -1939,7 +1938,7 @@ void rtl92ce_update_channel_access_setting(struct ieee80211_hw *hw) u16 sifs_timer; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SLOT_TIME, - (u8 *)&mac->slot_time); + &mac->slot_time); if (!mac->ht_enable) sifs_timer = 0x0a0a; else diff --git a/drivers/net/wireless/rtlwifi/rtl8192ce/trx.c b/drivers/net/wireless/rtlwifi/rtl8192ce/trx.c index 3af874e69595..52166640f167 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192ce/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8192ce/trx.c @@ -605,7 +605,7 @@ void rtl92ce_tx_fill_desc(struct ieee80211_hw *hw, struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw)); bool defaultadapter = true; struct 
ieee80211_sta *sta; - u8 *pdesc = (u8 *) pdesc_tx; + u8 *pdesc = pdesc_tx; u16 seq_number; __le16 fc = hdr->frame_control; u8 fw_qsel = _rtl92ce_map_hwqueue_to_fwqueue(skb, hw_queue); @@ -806,7 +806,7 @@ void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw, SET_TX_DESC_OWN(pdesc, 1); - SET_TX_DESC_PKT_SIZE((u8 *) pdesc, (u16) (skb->len)); + SET_TX_DESC_PKT_SIZE(pdesc, (u16) (skb->len)); SET_TX_DESC_FIRST_SEG(pdesc, 1); SET_TX_DESC_LAST_SEG(pdesc, 1); diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c index 0c74d4f2eeb4..4bbb711a36c5 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c @@ -381,11 +381,11 @@ static void _rtl92cu_read_adapter_info(struct ieee80211_hw *hw) rtlefuse->eeprom_did = le16_to_cpu(*(__le16 *)&hwinfo[EEPROM_DID]); RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, " VID = 0x%02x PID = 0x%02x\n", rtlefuse->eeprom_vid, rtlefuse->eeprom_did); - rtlefuse->eeprom_channelplan = *(u8 *)&hwinfo[EEPROM_CHANNELPLAN]; + rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN]; rtlefuse->eeprom_version = le16_to_cpu(*(__le16 *)&hwinfo[EEPROM_VERSION]); rtlefuse->txpwr_fromeprom = true; - rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMER_ID]; + rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID]; RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid); if (rtlhal->oem_id == RT_CID_DEFAULT) { @@ -1660,7 +1660,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) for (e_aci = 0; e_aci < AC_MAX; e_aci++) rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *)(&e_aci)); + &e_aci); } else { u8 sifstime = 0; u8 u1bAIFS; @@ -1685,7 +1685,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) } case HW_VAR_ACK_PREAMBLE:{ u8 reg_tmp; - u8 short_preamble = (bool) (*(u8 *) val); + u8 short_preamble = (bool)*val; reg_tmp = 0; if (short_preamble) reg_tmp |= 0x80; @@ -1696,7 +1696,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) u8 min_spacing_to_set; u8 sec_min_space; - min_spacing_to_set = *((u8 *) val); + min_spacing_to_set = *val; if (min_spacing_to_set <= 7) { switch (rtlpriv->sec.pairwise_enc_algorithm) { case NO_ENCRYPTION: @@ -1729,7 +1729,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) case HW_VAR_SHORTGI_DENSITY:{ u8 density_to_set; - density_to_set = *((u8 *) val); + density_to_set = *val; density_to_set &= 0x1f; mac->min_space_cfg &= 0x07; mac->min_space_cfg |= (density_to_set << 3); @@ -1747,7 +1747,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) u8 index = 0; p_regtoset = regtoset_normal; - factor_toset = *((u8 *) val); + factor_toset = *val; if (factor_toset <= 3) { factor_toset = (1 << (factor_toset + 2)); if (factor_toset > 0xf) @@ -1774,7 +1774,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_AC_PARAM:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; u32 u4b_ac_param; u16 cw_min = le16_to_cpu(mac->ac[e_aci].cw_min); u16 cw_max = le16_to_cpu(mac->ac[e_aci].cw_max); @@ -1814,11 +1814,11 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) } if (rtlusb->acm_method != eAcmWay2_SW) rtlpriv->cfg->ops->set_hw_reg(hw, - HW_VAR_ACM_CTRL, (u8 *)(&e_aci)); + HW_VAR_ACM_CTRL, &e_aci); break; } case HW_VAR_ACM_CTRL:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; union aci_aifsn *p_aci_aifsn = (union aci_aifsn *) (&(mac->ac[0].aifs)); u8 acm = 
p_aci_aifsn->f.acm; @@ -1874,7 +1874,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_RETRY_LIMIT:{ - u8 retry_limit = ((u8 *) (val))[0]; + u8 retry_limit = val[0]; rtl_write_word(rtlpriv, REG_RL, retry_limit << RETRY_LIMIT_SHORT_SHIFT | @@ -1891,39 +1891,38 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtlefuse->efuse_usedbytes = *((u16 *) val); break; case HW_VAR_EFUSE_USAGE: - rtlefuse->efuse_usedpercentage = *((u8 *) val); + rtlefuse->efuse_usedpercentage = *val; break; case HW_VAR_IO_CMD: rtl92c_phy_set_io_cmd(hw, (*(enum io_type *)val)); break; case HW_VAR_WPA_CONFIG: - rtl_write_byte(rtlpriv, REG_SECCFG, *((u8 *) val)); + rtl_write_byte(rtlpriv, REG_SECCFG, *val); break; case HW_VAR_SET_RPWM:{ u8 rpwm_val = rtl_read_byte(rtlpriv, REG_USB_HRPWM); if (rpwm_val & BIT(7)) - rtl_write_byte(rtlpriv, REG_USB_HRPWM, - (*(u8 *)val)); + rtl_write_byte(rtlpriv, REG_USB_HRPWM, *val); else rtl_write_byte(rtlpriv, REG_USB_HRPWM, - ((*(u8 *)val) | BIT(7))); + *val | BIT(7)); break; } case HW_VAR_H2C_FW_PWRMODE:{ - u8 psmode = (*(u8 *) val); + u8 psmode = *val; if ((psmode != FW_PS_ACTIVE_MODE) && (!IS_92C_SERIAL(rtlhal->version))) rtl92c_dm_rf_saving(hw, true); - rtl92c_set_fw_pwrmode_cmd(hw, (*(u8 *) val)); + rtl92c_set_fw_pwrmode_cmd(hw, (*val)); break; } case HW_VAR_FW_PSMODE_STATUS: ppsc->fw_current_inpsmode = *((bool *) val); break; case HW_VAR_H2C_FW_JOINBSSRPT:{ - u8 mstatus = (*(u8 *) val); + u8 mstatus = *val; u8 tmp_reg422; bool recover = false; @@ -1948,7 +1947,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) tmp_reg422 | BIT(6)); rtl_write_byte(rtlpriv, REG_CR + 1, 0x02); } - rtl92c_set_fw_joinbss_report_cmd(hw, (*(u8 *) val)); + rtl92c_set_fw_joinbss_report_cmd(hw, (*val)); break; } case HW_VAR_AID:{ @@ -1961,7 +1960,7 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_CORRECT_TSF:{ - u8 btype_ibss = ((u8 *) (val))[0]; + u8 btype_ibss = val[0]; if (btype_ibss) _rtl92cu_stop_tx_beacon(hw); @@ -2184,7 +2183,7 @@ void rtl92cu_update_channel_access_setting(struct ieee80211_hw *hw) u16 sifs_timer; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SLOT_TIME, - (u8 *)&mac->slot_time); + &mac->slot_time); if (!mac->ht_enable) sifs_timer = 0x0a0a; else diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c b/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c index 21bc827c5fa6..2e6eb356a93e 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c @@ -668,7 +668,7 @@ void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw, SET_TX_DESC_RATE_ID(pdesc, 7); SET_TX_DESC_MACID(pdesc, 0); SET_TX_DESC_OWN(pdesc, 1); - SET_TX_DESC_PKT_SIZE((u8 *) pdesc, (u16) (skb->len)); + SET_TX_DESC_PKT_SIZE(pdesc, (u16)skb->len); SET_TX_DESC_FIRST_SEG(pdesc, 1); SET_TX_DESC_LAST_SEG(pdesc, 1); SET_TX_DESC_OFFSET(pdesc, 0x20); diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/dm.c b/drivers/net/wireless/rtlwifi/rtl8192de/dm.c index a7d63a84551a..c0201ed69dd7 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192de/dm.c +++ b/drivers/net/wireless/rtlwifi/rtl8192de/dm.c @@ -696,7 +696,7 @@ static void rtl92d_dm_check_edca_turbo(struct ieee80211_hw *hw) if (rtlpriv->dm.current_turbo_edca) { u8 tmp = AC0_BE; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *) (&tmp)); + &tmp); rtlpriv->dm.current_turbo_edca = false; } } diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/fw.c b/drivers/net/wireless/rtlwifi/rtl8192de/fw.c index 
f548a8d0068d..895ae6c1f354 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192de/fw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192de/fw.c @@ -120,7 +120,7 @@ static void _rtl92d_write_fw(struct ieee80211_hw *hw, { struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); - u8 *bufferPtr = (u8 *) buffer; + u8 *bufferPtr = buffer; u32 pagenums, remainSize; u32 page, offset; @@ -256,8 +256,8 @@ int rtl92d_download_fw(struct ieee80211_hw *hw) if (rtlpriv->max_fw_size == 0 || !rtlhal->pfirmware) return 1; fwsize = rtlhal->fwsize; - pfwheader = (u8 *) rtlhal->pfirmware; - pfwdata = (u8 *) rtlhal->pfirmware; + pfwheader = rtlhal->pfirmware; + pfwdata = rtlhal->pfirmware; rtlhal->fw_version = (u16) GET_FIRMWARE_HDR_VERSION(pfwheader); rtlhal->fw_subversion = (u16) GET_FIRMWARE_HDR_SUB_VER(pfwheader); RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/hw.c b/drivers/net/wireless/rtlwifi/rtl8192de/hw.c index b338d526c422..f4051f4f0390 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192de/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192de/hw.c @@ -235,12 +235,12 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) for (e_aci = 0; e_aci < AC_MAX; e_aci++) rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *) (&e_aci)); + (&e_aci)); break; } case HW_VAR_ACK_PREAMBLE: { u8 reg_tmp; - u8 short_preamble = (bool) (*(u8 *) val); + u8 short_preamble = (bool) (*val); reg_tmp = (mac->cur_40_prime_sc) << 5; if (short_preamble) @@ -252,7 +252,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) u8 min_spacing_to_set; u8 sec_min_space; - min_spacing_to_set = *((u8 *) val); + min_spacing_to_set = *val; if (min_spacing_to_set <= 7) { sec_min_space = 0; if (min_spacing_to_set < sec_min_space) @@ -271,7 +271,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) case HW_VAR_SHORTGI_DENSITY: { u8 density_to_set; - density_to_set = *((u8 *) val); + density_to_set = *val; mac->min_space_cfg = rtlpriv->rtlhal.minspace_cfg; mac->min_space_cfg |= (density_to_set << 3); RT_TRACE(rtlpriv, COMP_MLME, DBG_LOUD, @@ -293,7 +293,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) regtoSet = 0x66626641; else regtoSet = 0xb972a841; - factor_toset = *((u8 *) val); + factor_toset = *val; if (factor_toset <= 3) { factor_toset = (1 << (factor_toset + 2)); if (factor_toset > 0xf) @@ -316,15 +316,15 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_AC_PARAM: { - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; rtl92d_dm_init_edca_turbo(hw); if (rtlpci->acm_method != eAcmWay2_SW) rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_ACM_CTRL, - (u8 *) (&e_aci)); + &e_aci); break; } case HW_VAR_ACM_CTRL: { - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; union aci_aifsn *p_aci_aifsn = (union aci_aifsn *)(&(mac->ac[0].aifs)); u8 acm = p_aci_aifsn->f.acm; @@ -376,7 +376,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtlpci->receive_config = ((u32 *) (val))[0]; break; case HW_VAR_RETRY_LIMIT: { - u8 retry_limit = ((u8 *) (val))[0]; + u8 retry_limit = val[0]; rtl_write_word(rtlpriv, REG_RL, retry_limit << RETRY_LIMIT_SHORT_SHIFT | @@ -390,16 +390,16 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtlefuse->efuse_usedbytes = *((u16 *) val); break; case HW_VAR_EFUSE_USAGE: - rtlefuse->efuse_usedpercentage = *((u8 *) val); + rtlefuse->efuse_usedpercentage = *val; break; case HW_VAR_IO_CMD: 
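/*
 * Illustrative sketch, not part of this patch: the HW_VAR_RETRY_LIMIT case
 * above now reads val[0] directly and programs the retry count into REG_RL
 * with rtl_write_word(); the second shifted operand of that expression is
 * cut off in the hunk as rendered here.  The demo below only shows the
 * general idea of packing the same count into two fields of one 16-bit
 * register -- the DEMO_* shift values are invented, the real ones live in
 * the chip headers.
 */
#include <stdint.h>
#include <stdio.h>

#define DEMO_RETRY_SHORT_SHIFT 8
#define DEMO_RETRY_LONG_SHIFT  0

static uint16_t pack_retry_limit(uint8_t retry_limit)
{
	/* Same value placed in both retry fields of one register word. */
	return (uint16_t)(retry_limit << DEMO_RETRY_SHORT_SHIFT |
			  retry_limit << DEMO_RETRY_LONG_SHIFT);
}

int main(void)
{
	printf("0x%04x\n", (unsigned)pack_retry_limit(0x30));	/* -> 0x3030 */
	return 0;
}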
rtl92d_phy_set_io_cmd(hw, (*(enum io_type *)val)); break; case HW_VAR_WPA_CONFIG: - rtl_write_byte(rtlpriv, REG_SECCFG, *((u8 *) val)); + rtl_write_byte(rtlpriv, REG_SECCFG, *val); break; case HW_VAR_SET_RPWM: - rtl92d_fill_h2c_cmd(hw, H2C_PWRM, 1, (u8 *) (val)); + rtl92d_fill_h2c_cmd(hw, H2C_PWRM, 1, (val)); break; case HW_VAR_H2C_FW_PWRMODE: break; @@ -407,7 +407,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) ppsc->fw_current_inpsmode = *((bool *) val); break; case HW_VAR_H2C_FW_JOINBSSRPT: { - u8 mstatus = (*(u8 *) val); + u8 mstatus = (*val); u8 tmp_regcr, tmp_reg422; bool recover = false; @@ -435,7 +435,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) rtl_write_byte(rtlpriv, REG_CR + 1, (tmp_regcr & ~(BIT(0)))); } - rtl92d_set_fw_joinbss_report_cmd(hw, (*(u8 *) val)); + rtl92d_set_fw_joinbss_report_cmd(hw, (*val)); break; } case HW_VAR_AID: { @@ -447,7 +447,7 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_CORRECT_TSF: { - u8 btype_ibss = ((u8 *) (val))[0]; + u8 btype_ibss = val[0]; if (btype_ibss) _rtl92de_stop_tx_beacon(hw); @@ -1794,7 +1794,7 @@ static void _rtl92de_read_adapter_info(struct ieee80211_hw *hw) "RTL819X Not boot from eeprom, check it !!\n"); return; } - rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMER_ID]; + rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID]; _rtl92de_read_macphymode_and_bandtype(hw, hwinfo); /* VID, DID SE 0xA-D */ @@ -2115,7 +2115,7 @@ void rtl92de_update_channel_access_setting(struct ieee80211_hw *hw) u16 sifs_timer; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SLOT_TIME, - (u8 *)&mac->slot_time); + &mac->slot_time); if (!mac->ht_enable) sifs_timer = 0x0a0a; else diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/rtlwifi/rtl8192de/phy.c index 18380a7829f1..442031256bce 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192de/phy.c +++ b/drivers/net/wireless/rtlwifi/rtl8192de/phy.c @@ -3345,21 +3345,21 @@ void rtl92d_phy_config_macphymode_info(struct ieee80211_hw *hw) switch (rtlhal->macphymode) { case DUALMAC_SINGLEPHY: rtlphy->rf_type = RF_2T2R; - rtlhal->version |= CHIP_92D_SINGLEPHY; + rtlhal->version |= RF_TYPE_2T2R; rtlhal->bandset = BAND_ON_BOTH; rtlhal->current_bandtype = BAND_ON_2_4G; break; case SINGLEMAC_SINGLEPHY: rtlphy->rf_type = RF_2T2R; - rtlhal->version |= CHIP_92D_SINGLEPHY; + rtlhal->version |= RF_TYPE_2T2R; rtlhal->bandset = BAND_ON_BOTH; rtlhal->current_bandtype = BAND_ON_2_4G; break; case DUALMAC_DUALPHY: rtlphy->rf_type = RF_1T1R; - rtlhal->version &= (~CHIP_92D_SINGLEPHY); + rtlhal->version &= RF_TYPE_1T1R; /* Now we let MAC0 run on 5G band. 
*/ if (rtlhal->interfaceindex == 0) { rtlhal->bandset = BAND_ON_5G; diff --git a/drivers/net/wireless/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/rtlwifi/rtl8192de/trx.c index 1666ef7fd87b..f80690d82c11 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192de/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8192de/trx.c @@ -560,7 +560,7 @@ void rtl92de_tx_fill_desc(struct ieee80211_hw *hw, struct rtl_hal *rtlhal = rtl_hal(rtlpriv); struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw)); struct ieee80211_sta *sta = info->control.sta; - u8 *pdesc = (u8 *) pdesc_tx; + u8 *pdesc = pdesc_tx; u16 seq_number; __le16 fc = hdr->frame_control; unsigned int buf_len = 0; @@ -761,11 +761,11 @@ void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw, SET_TX_DESC_QUEUE_SEL(pdesc, fw_queue); SET_TX_DESC_FIRST_SEG(pdesc, 1); SET_TX_DESC_LAST_SEG(pdesc, 1); - SET_TX_DESC_TX_BUFFER_SIZE(pdesc, (u16) (skb->len)); + SET_TX_DESC_TX_BUFFER_SIZE(pdesc, (u16)skb->len); SET_TX_DESC_TX_BUFFER_ADDRESS(pdesc, mapping); SET_TX_DESC_RATE_ID(pdesc, 7); SET_TX_DESC_MACID(pdesc, 0); - SET_TX_DESC_PKT_SIZE((u8 *) pdesc, (u16) (skb->len)); + SET_TX_DESC_PKT_SIZE(pdesc, (u16) (skb->len)); SET_TX_DESC_FIRST_SEG(pdesc, 1); SET_TX_DESC_LAST_SEG(pdesc, 1); SET_TX_DESC_OFFSET(pdesc, 0x20); diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/dm.c b/drivers/net/wireless/rtlwifi/rtl8192se/dm.c index 2e1158026fb7..465f58157101 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/dm.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/dm.c @@ -146,7 +146,7 @@ static void _rtl92s_dm_check_edca_turbo(struct ieee80211_hw *hw) if (rtlpriv->dm.current_turbo_edca) { u8 tmp = AC0_BE; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *)(&tmp)); + &tmp); rtlpriv->dm.current_turbo_edca = false; } } diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/hw.c b/drivers/net/wireless/rtlwifi/rtl8192se/hw.c index b141c35bf926..4542e6952b97 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/hw.c @@ -145,13 +145,13 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) for (e_aci = 0; e_aci < AC_MAX; e_aci++) { rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_AC_PARAM, - (u8 *)(&e_aci)); + (&e_aci)); } break; } case HW_VAR_ACK_PREAMBLE:{ u8 reg_tmp; - u8 short_preamble = (bool) (*(u8 *) val); + u8 short_preamble = (bool) (*val); reg_tmp = (mac->cur_40_prime_sc) << 5; if (short_preamble) reg_tmp |= 0x80; @@ -163,7 +163,7 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) u8 min_spacing_to_set; u8 sec_min_space; - min_spacing_to_set = *((u8 *)val); + min_spacing_to_set = *val; if (min_spacing_to_set <= 7) { if (rtlpriv->sec.pairwise_enc_algorithm == NO_ENCRYPTION) @@ -194,7 +194,7 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) case HW_VAR_SHORTGI_DENSITY:{ u8 density_to_set; - density_to_set = *((u8 *) val); + density_to_set = *val; mac->min_space_cfg = rtlpriv->rtlhal.minspace_cfg; mac->min_space_cfg |= (density_to_set << 3); @@ -216,7 +216,7 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) 15, 15, 15, 15, 0}; u8 index = 0; - factor_toset = *((u8 *) val); + factor_toset = *val; if (factor_toset <= 3) { factor_toset = (1 << (factor_toset + 2)); if (factor_toset > 0xf) @@ -248,17 +248,17 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_AC_PARAM:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; rtl92s_dm_init_edca_turbo(hw); if (rtlpci->acm_method != eAcmWay2_SW) rtlpriv->cfg->ops->set_hw_reg(hw, 
HW_VAR_ACM_CTRL, - (u8 *)(&e_aci)); + &e_aci); break; } case HW_VAR_ACM_CTRL:{ - u8 e_aci = *((u8 *) val); + u8 e_aci = *val; union aci_aifsn *p_aci_aifsn = (union aci_aifsn *)(&( mac->ac[0].aifs)); u8 acm = p_aci_aifsn->f.acm; @@ -313,7 +313,7 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_RETRY_LIMIT:{ - u8 retry_limit = ((u8 *) (val))[0]; + u8 retry_limit = val[0]; rtl_write_word(rtlpriv, RETRY_LIMIT, retry_limit << RETRY_LIMIT_SHORT_SHIFT | @@ -328,14 +328,14 @@ void rtl92se_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val) break; } case HW_VAR_EFUSE_USAGE: { - rtlefuse->efuse_usedpercentage = *((u8 *) val); + rtlefuse->efuse_usedpercentage = *val; break; } case HW_VAR_IO_CMD: { break; } case HW_VAR_WPA_CONFIG: { - rtl_write_byte(rtlpriv, REG_SECR, *((u8 *) val)); + rtl_write_byte(rtlpriv, REG_SECR, *val); break; } case HW_VAR_SET_RPWM:{ @@ -1813,8 +1813,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) else index = 2; - tempval = (*(u8 *)&hwinfo[EEPROM_TX_PWR_HT20_DIFF + - index]) & 0xff; + tempval = hwinfo[EEPROM_TX_PWR_HT20_DIFF + index] & 0xff; rtlefuse->txpwr_ht20diff[RF90_PATH_A][i] = (tempval & 0xF); rtlefuse->txpwr_ht20diff[RF90_PATH_B][i] = ((tempval >> 4) & 0xF); @@ -1830,14 +1829,13 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) else index = 1; - tempval = (*(u8 *)&hwinfo[EEPROM_TX_PWR_OFDM_DIFF + index]) - & 0xff; + tempval = hwinfo[EEPROM_TX_PWR_OFDM_DIFF + index] & 0xff; rtlefuse->txpwr_legacyhtdiff[RF90_PATH_A][i] = (tempval & 0xF); rtlefuse->txpwr_legacyhtdiff[RF90_PATH_B][i] = ((tempval >> 4) & 0xF); - tempval = (*(u8 *)&hwinfo[TX_PWR_SAFETY_CHK]); + tempval = hwinfo[TX_PWR_SAFETY_CHK]; rtlefuse->txpwr_safetyflag = (tempval & 0x01); } @@ -1876,7 +1874,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) /* Read RF-indication and Tx Power gain * index diff of legacy to HT OFDM rate. */ - tempval = (*(u8 *)&hwinfo[EEPROM_RFIND_POWERDIFF]) & 0xff; + tempval = hwinfo[EEPROM_RFIND_POWERDIFF] & 0xff; rtlefuse->eeprom_txpowerdiff = tempval; rtlefuse->legacy_httxpowerdiff = rtlefuse->txpwr_legacyhtdiff[RF90_PATH_A][0]; @@ -1887,7 +1885,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) /* Get TSSI value for each path. 
*/ usvalue = *(u16 *)&hwinfo[EEPROM_TSSI_A]; rtlefuse->eeprom_tssi[RF90_PATH_A] = (u8)((usvalue & 0xff00) >> 8); - usvalue = *(u8 *)&hwinfo[EEPROM_TSSI_B]; + usvalue = hwinfo[EEPROM_TSSI_B]; rtlefuse->eeprom_tssi[RF90_PATH_B] = (u8)(usvalue & 0xff); RTPRINT(rtlpriv, FINIT, INIT_TxPower, "TSSI_A = 0x%x, TSSI_B = 0x%x\n", @@ -1896,7 +1894,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) /* Read antenna tx power offset of B/C/D to A from EEPROM */ /* and read ThermalMeter from EEPROM */ - tempval = *(u8 *)&hwinfo[EEPROM_THERMALMETER]; + tempval = hwinfo[EEPROM_THERMALMETER]; rtlefuse->eeprom_thermalmeter = tempval; RTPRINT(rtlpriv, FINIT, INIT_TxPower, "thermalmeter = 0x%x\n", rtlefuse->eeprom_thermalmeter); @@ -1906,20 +1904,20 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) rtlefuse->tssi_13dbm = rtlefuse->eeprom_thermalmeter * 100; /* Read CrystalCap from EEPROM */ - tempval = (*(u8 *)&hwinfo[EEPROM_CRYSTALCAP]) >> 4; + tempval = hwinfo[EEPROM_CRYSTALCAP] >> 4; rtlefuse->eeprom_crystalcap = tempval; /* CrystalCap, BIT(12)~15 */ rtlefuse->crystalcap = rtlefuse->eeprom_crystalcap; /* Read IC Version && Channel Plan */ /* Version ID, Channel plan */ - rtlefuse->eeprom_channelplan = *(u8 *)&hwinfo[EEPROM_CHANNELPLAN]; + rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN]; rtlefuse->txpwr_fromeprom = true; RTPRINT(rtlpriv, FINIT, INIT_TxPower, "EEPROM ChannelPlan = 0x%4x\n", rtlefuse->eeprom_channelplan); /* Read Customer ID or Board Type!!! */ - tempval = *(u8 *)&hwinfo[EEPROM_BOARDTYPE]; + tempval = hwinfo[EEPROM_BOARDTYPE]; /* Change RF type definition */ if (tempval == 0) rtlphy->rf_type = RF_2T2R; @@ -1941,7 +1939,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw) } } rtlefuse->b1ss_support = rtlefuse->b1x1_recvcombine; - rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMID]; + rtlefuse->eeprom_oemid = *&hwinfo[EEPROM_CUSTOMID]; RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x", rtlefuse->eeprom_oemid); @@ -2251,7 +2249,7 @@ void rtl92se_update_channel_access_setting(struct ieee80211_hw *hw) u16 sifs_timer; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SLOT_TIME, - (u8 *)&mac->slot_time); + &mac->slot_time); sifs_timer = 0x0e0e; rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_SIFS, (u8 *)&sifs_timer); diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/phy.c b/drivers/net/wireless/rtlwifi/rtl8192se/phy.c index 8d7099bc472c..b917a2a3caf7 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/phy.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/phy.c @@ -1247,6 +1247,9 @@ static void _rtl92s_phy_get_txpower_index(struct ieee80211_hw *hw, u8 channel, /* Read HT 40 OFDM TX power */ ofdmpowerLevel[0] = rtlefuse->txpwrlevel_ht40_2s[0][index]; ofdmpowerLevel[1] = rtlefuse->txpwrlevel_ht40_2s[1][index]; + } else { + ofdmpowerLevel[0] = 0; + ofdmpowerLevel[1] = 0; } } diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/sw.c b/drivers/net/wireless/rtlwifi/rtl8192se/sw.c index 730bcc919529..ad4b4803482d 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/sw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/sw.c @@ -29,7 +29,6 @@ #include "../wifi.h" #include "../core.h" -#include "../pci.h" #include "../base.h" #include "../pci.h" #include "reg.h" diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/trx.c b/drivers/net/wireless/rtlwifi/rtl8192se/trx.c index 812b5858f14a..36d1cb3aef8a 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/trx.c @@ -599,7 +599,7 @@ void 
rtl92se_tx_fill_desc(struct ieee80211_hw *hw, struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); struct ieee80211_sta *sta = info->control.sta; - u8 *pdesc = (u8 *) pdesc_tx; + u8 *pdesc = pdesc_tx; u16 seq_number; __le16 fc = hdr->frame_control; u8 reserved_macid = 0; diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c index a6049d7d51b3..aa970fc18a21 100644 --- a/drivers/net/wireless/rtlwifi/usb.c +++ b/drivers/net/wireless/rtlwifi/usb.c @@ -131,15 +131,19 @@ static u32 _usb_read_sync(struct rtl_priv *rtlpriv, u32 addr, u16 len) u8 request; u16 wvalue; u16 index; - __le32 *data = &rtlpriv->usb_data[rtlpriv->usb_data_index]; + __le32 *data; + unsigned long flags; + spin_lock_irqsave(&rtlpriv->locks.usb_lock, flags); + if (++rtlpriv->usb_data_index >= RTL_USB_MAX_RX_COUNT) + rtlpriv->usb_data_index = 0; + data = &rtlpriv->usb_data[rtlpriv->usb_data_index]; + spin_unlock_irqrestore(&rtlpriv->locks.usb_lock, flags); request = REALTEK_USB_VENQT_CMD_REQ; index = REALTEK_USB_VENQT_CMD_IDX; /* n/a */ wvalue = (u16)addr; _usbctrl_vendorreq_sync_read(udev, request, wvalue, index, data, len); - if (++rtlpriv->usb_data_index >= RTL_USB_MAX_RX_COUNT) - rtlpriv->usb_data_index = 0; return le32_to_cpu(*data); } @@ -951,6 +955,10 @@ int __devinit rtl_usb_probe(struct usb_interface *intf, GFP_KERNEL); if (!rtlpriv->usb_data) return -ENOMEM; + + /* this spin lock must be initialized early */ + spin_lock_init(&rtlpriv->locks.usb_lock); + rtlpriv->usb_data_index = 0; init_completion(&rtlpriv->firmware_loading_complete); SET_IEEE80211_DEV(hw, &intf->dev); diff --git a/drivers/net/wireless/rtlwifi/wifi.h b/drivers/net/wireless/rtlwifi/wifi.h index bd816aef26dc..cdaa21f29710 100644 --- a/drivers/net/wireless/rtlwifi/wifi.h +++ b/drivers/net/wireless/rtlwifi/wifi.h @@ -1555,6 +1555,7 @@ struct rtl_locks { spinlock_t rf_ps_lock; spinlock_t rf_lock; spinlock_t waitq_lock; + spinlock_t usb_lock; /*Dual mac*/ spinlock_t cck_and_rw_pagea_lock; diff --git a/drivers/net/wireless/ti/Kconfig b/drivers/net/wireless/ti/Kconfig index 1a72932e2213..be800119d0a3 100644 --- a/drivers/net/wireless/ti/Kconfig +++ b/drivers/net/wireless/ti/Kconfig @@ -8,6 +8,7 @@ menuconfig WL_TI if WL_TI source "drivers/net/wireless/ti/wl1251/Kconfig" source "drivers/net/wireless/ti/wl12xx/Kconfig" +source "drivers/net/wireless/ti/wl18xx/Kconfig" # keep last for automatic dependencies source "drivers/net/wireless/ti/wlcore/Kconfig" diff --git a/drivers/net/wireless/ti/Makefile b/drivers/net/wireless/ti/Makefile index 0a565622d4a4..4d6823983c04 100644 --- a/drivers/net/wireless/ti/Makefile +++ b/drivers/net/wireless/ti/Makefile @@ -2,3 +2,4 @@ obj-$(CONFIG_WLCORE) += wlcore/ obj-$(CONFIG_WL12XX) += wl12xx/ obj-$(CONFIG_WL12XX_PLATFORM_DATA) += wlcore/ obj-$(CONFIG_WL1251) += wl1251/ +obj-$(CONFIG_WL18XX) += wl18xx/ diff --git a/drivers/net/wireless/ti/wl1251/cmd.c b/drivers/net/wireless/ti/wl1251/cmd.c index d14d69d733a0..6822b845efc1 100644 --- a/drivers/net/wireless/ti/wl1251/cmd.c +++ b/drivers/net/wireless/ti/wl1251/cmd.c @@ -277,15 +277,6 @@ int wl1251_cmd_join(struct wl1251 *wl, u8 bss_type, u8 channel, join->rx_config_options = wl->rx_config; join->rx_filter_options = wl->rx_filter; - /* - * FIXME: disable temporarily all filters because after commit - * 9cef8737 "mac80211: fix managed mode BSSID handling" broke - * association. The filter logic needs to be implemented properly - * and once that is done, this hack can be removed. 
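/*
 * Illustrative sketch, not part of this patch: _usb_read_sync() above now
 * advances usb_data_index under the newly added usb_lock *before* handing
 * the slot to the control transfer, and rtl_usb_probe() initializes that
 * lock early.  This appears to keep concurrent readers from handing the
 * same buffer slot to two outstanding requests.  Userspace model below; the
 * names and DEMO_MAX_RX_COUNT are made up.
 */
#include <pthread.h>
#include <stdint.h>

#define DEMO_MAX_RX_COUNT 64	/* stands in for RTL_USB_MAX_RX_COUNT */

static pthread_mutex_t usb_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t usb_data[DEMO_MAX_RX_COUNT];
static unsigned int usb_data_index;

/* Claim a private slot before issuing the (slow) synchronous read. */
static uint32_t *claim_read_slot(void)
{
	uint32_t *slot;

	pthread_mutex_lock(&usb_lock);
	if (++usb_data_index >= DEMO_MAX_RX_COUNT)
		usb_data_index = 0;
	slot = &usb_data[usb_data_index];
	pthread_mutex_unlock(&usb_lock);

	return slot;
}

int main(void)
{
	*claim_read_slot() = 0x1234;	/* pretend the control transfer filled it */
	return 0;
}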
- */ - join->rx_config_options = 0; - join->rx_filter_options = WL1251_DEFAULT_RX_FILTER; - join->basic_rate_set = RATE_MASK_1MBPS | RATE_MASK_2MBPS | RATE_MASK_5_5MBPS | RATE_MASK_11MBPS; diff --git a/drivers/net/wireless/ti/wl1251/main.c b/drivers/net/wireless/ti/wl1251/main.c index d1afb8e3b2ef..3118c425bcf1 100644 --- a/drivers/net/wireless/ti/wl1251/main.c +++ b/drivers/net/wireless/ti/wl1251/main.c @@ -334,6 +334,12 @@ static int wl1251_join(struct wl1251 *wl, u8 bss_type, u8 channel, if (ret < 0) goto out; + /* + * Join command applies filters, and if we are not associated, + * BSSID filter must be disabled for association to work. + */ + if (is_zero_ether_addr(wl->bssid)) + wl->rx_config &= ~CFG_BSSID_FILTER_EN; ret = wl1251_cmd_join(wl, bss_type, channel, beacon_interval, dtim_period); @@ -348,33 +354,6 @@ out: return ret; } -static void wl1251_filter_work(struct work_struct *work) -{ - struct wl1251 *wl = - container_of(work, struct wl1251, filter_work); - int ret; - - mutex_lock(&wl->mutex); - - if (wl->state == WL1251_STATE_OFF) - goto out; - - ret = wl1251_ps_elp_wakeup(wl); - if (ret < 0) - goto out; - - ret = wl1251_join(wl, wl->bss_type, wl->channel, wl->beacon_int, - wl->dtim_period); - if (ret < 0) - goto out_sleep; - -out_sleep: - wl1251_ps_elp_sleep(wl); - -out: - mutex_unlock(&wl->mutex); -} - static void wl1251_op_tx(struct ieee80211_hw *hw, struct sk_buff *skb) { struct wl1251 *wl = hw->priv; @@ -478,7 +457,6 @@ static void wl1251_op_stop(struct ieee80211_hw *hw) cancel_work_sync(&wl->irq_work); cancel_work_sync(&wl->tx_work); - cancel_work_sync(&wl->filter_work); cancel_delayed_work_sync(&wl->elp_work); mutex_lock(&wl->mutex); @@ -681,13 +659,15 @@ out: FIF_FCSFAIL | \ FIF_BCN_PRBRESP_PROMISC | \ FIF_CONTROL | \ - FIF_OTHER_BSS) + FIF_OTHER_BSS | \ + FIF_PROBE_REQ) static void wl1251_op_configure_filter(struct ieee80211_hw *hw, unsigned int changed, unsigned int *total,u64 multicast) { struct wl1251 *wl = hw->priv; + int ret; wl1251_debug(DEBUG_MAC80211, "mac80211 configure filter"); @@ -698,7 +678,7 @@ static void wl1251_op_configure_filter(struct ieee80211_hw *hw, /* no filters which we support changed */ return; - /* FIXME: wl->rx_config and wl->rx_filter are not protected */ + mutex_lock(&wl->mutex); wl->rx_config = WL1251_DEFAULT_RX_CONFIG; wl->rx_filter = WL1251_DEFAULT_RX_FILTER; @@ -721,15 +701,25 @@ static void wl1251_op_configure_filter(struct ieee80211_hw *hw, } if (*total & FIF_CONTROL) wl->rx_filter |= CFG_RX_CTL_EN; - if (*total & FIF_OTHER_BSS) - wl->rx_filter &= ~CFG_BSSID_FILTER_EN; + if (*total & FIF_OTHER_BSS || is_zero_ether_addr(wl->bssid)) + wl->rx_config &= ~CFG_BSSID_FILTER_EN; + if (*total & FIF_PROBE_REQ) + wl->rx_filter |= CFG_RX_PREQ_EN; - /* - * FIXME: workqueues need to be properly cancelled on stop(), for - * now let's just disable changing the filter settings. They will - * be updated any on config(). 
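/*
 * Illustrative sketch, not part of this patch: wl1251_join() above clears
 * CFG_BSSID_FILTER_EN while wl->bssid is still all-zero, since the join
 * command applies the RX filters and a BSSID filter would drop the very
 * frames association needs.  The helper below restates that check; the
 * filter bit value is a placeholder, not the real CFG_BSSID_FILTER_EN.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DEMO_BSSID_FILTER_EN (1u << 10)	/* placeholder bit */

static bool demo_is_zero_ether_addr(const uint8_t addr[6])
{
	static const uint8_t zero[6];

	return memcmp(addr, zero, 6) == 0;
}

/* Disable BSSID filtering until we actually have a BSSID to filter on. */
static uint32_t demo_rx_config_for_join(uint32_t rx_config, const uint8_t bssid[6])
{
	if (demo_is_zero_ether_addr(bssid))
		rx_config &= ~DEMO_BSSID_FILTER_EN;
	return rx_config;
}

int main(void)
{
	const uint8_t bssid[6] = { 0 };

	printf("0x%08x\n", demo_rx_config_for_join(0xffffffffu, bssid));
	return 0;
}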
- */ - /* schedule_work(&wl->filter_work); */ + if (wl->state == WL1251_STATE_OFF) + goto out; + + ret = wl1251_ps_elp_wakeup(wl); + if (ret < 0) + goto out; + + /* send filters to firmware */ + wl1251_acx_rx_config(wl, wl->rx_config, wl->rx_filter); + + wl1251_ps_elp_sleep(wl); + +out: + mutex_unlock(&wl->mutex); } /* HW encryption */ @@ -1390,7 +1380,6 @@ struct ieee80211_hw *wl1251_alloc_hw(void) skb_queue_head_init(&wl->tx_queue); - INIT_WORK(&wl->filter_work, wl1251_filter_work); INIT_DELAYED_WORK(&wl->elp_work, wl1251_elp_work); wl->channel = WL1251_DEFAULT_CHANNEL; wl->scanning = false; diff --git a/drivers/net/wireless/ti/wl1251/wl1251.h b/drivers/net/wireless/ti/wl1251/wl1251.h index 9d8f5816c6f9..fd02060038de 100644 --- a/drivers/net/wireless/ti/wl1251/wl1251.h +++ b/drivers/net/wireless/ti/wl1251/wl1251.h @@ -315,7 +315,6 @@ struct wl1251 { bool tx_queue_stopped; struct work_struct tx_work; - struct work_struct filter_work; /* Pending TX frames */ struct sk_buff *tx_frames[16]; diff --git a/drivers/net/wireless/ti/wl12xx/Makefile b/drivers/net/wireless/ti/wl12xx/Makefile index 87f64b14db35..da509aa7d009 100644 --- a/drivers/net/wireless/ti/wl12xx/Makefile +++ b/drivers/net/wireless/ti/wl12xx/Makefile @@ -1,3 +1,3 @@ -wl12xx-objs = main.o cmd.o acx.o +wl12xx-objs = main.o cmd.o acx.o debugfs.o obj-$(CONFIG_WL12XX) += wl12xx.o diff --git a/drivers/net/wireless/ti/wl12xx/acx.h b/drivers/net/wireless/ti/wl12xx/acx.h index d1f5aba0afce..2a26868b837d 100644 --- a/drivers/net/wireless/ti/wl12xx/acx.h +++ b/drivers/net/wireless/ti/wl12xx/acx.h @@ -24,6 +24,21 @@ #define __WL12XX_ACX_H__ #include "../wlcore/wlcore.h" +#include "../wlcore/acx.h" + +#define WL12XX_ACX_ALL_EVENTS_VECTOR (WL1271_ACX_INTR_WATCHDOG | \ + WL1271_ACX_INTR_INIT_COMPLETE | \ + WL1271_ACX_INTR_EVENT_A | \ + WL1271_ACX_INTR_EVENT_B | \ + WL1271_ACX_INTR_CMD_COMPLETE | \ + WL1271_ACX_INTR_HW_AVAILABLE | \ + WL1271_ACX_INTR_DATA) + +#define WL12XX_INTR_MASK (WL1271_ACX_INTR_WATCHDOG | \ + WL1271_ACX_INTR_EVENT_A | \ + WL1271_ACX_INTR_EVENT_B | \ + WL1271_ACX_INTR_HW_AVAILABLE | \ + WL1271_ACX_INTR_DATA) struct wl1271_acx_host_config_bitmap { struct acx_header header; @@ -31,6 +46,228 @@ struct wl1271_acx_host_config_bitmap { __le32 host_cfg_bitmap; } __packed; +struct wl12xx_acx_tx_statistics { + __le32 internal_desc_overflow; +} __packed; + +struct wl12xx_acx_rx_statistics { + __le32 out_of_mem; + __le32 hdr_overflow; + __le32 hw_stuck; + __le32 dropped; + __le32 fcs_err; + __le32 xfr_hint_trig; + __le32 path_reset; + __le32 reset_counter; +} __packed; + +struct wl12xx_acx_dma_statistics { + __le32 rx_requested; + __le32 rx_errors; + __le32 tx_requested; + __le32 tx_errors; +} __packed; + +struct wl12xx_acx_isr_statistics { + /* host command complete */ + __le32 cmd_cmplt; + + /* fiqisr() */ + __le32 fiqs; + + /* (INT_STS_ND & INT_TRIG_RX_HEADER) */ + __le32 rx_headers; + + /* (INT_STS_ND & INT_TRIG_RX_CMPLT) */ + __le32 rx_completes; + + /* (INT_STS_ND & INT_TRIG_NO_RX_BUF) */ + __le32 rx_mem_overflow; + + /* (INT_STS_ND & INT_TRIG_S_RX_RDY) */ + __le32 rx_rdys; + + /* irqisr() */ + __le32 irqs; + + /* (INT_STS_ND & INT_TRIG_TX_PROC) */ + __le32 tx_procs; + + /* (INT_STS_ND & INT_TRIG_DECRYPT_DONE) */ + __le32 decrypt_done; + + /* (INT_STS_ND & INT_TRIG_DMA0) */ + __le32 dma0_done; + + /* (INT_STS_ND & INT_TRIG_DMA1) */ + __le32 dma1_done; + + /* (INT_STS_ND & INT_TRIG_TX_EXC_CMPLT) */ + __le32 tx_exch_complete; + + /* (INT_STS_ND & INT_TRIG_COMMAND) */ + __le32 commands; + + /* (INT_STS_ND & INT_TRIG_RX_PROC) */ + 
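/*
 * Illustrative sketch, not part of this patch: the wl12xx_acx_*_statistics
 * structures below are __packed blocks of __le32 counters laid out exactly
 * as the firmware reports them; the kernel converts each field with
 * le32_to_cpu() when it exposes it.  Minimal userspace model with a
 * hand-rolled little-endian decode (two counters only, names made up).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct demo_mic_stats {
	uint32_t rx_pkts;
	uint32_t calc_failure;
} __attribute__((packed));

/* Equivalent of le32_to_cpu(): the firmware always writes little-endian,
 * whatever the host byte order is. */
static uint32_t demo_le32_to_cpu(uint32_t le)
{
	uint8_t b[4];

	memcpy(b, &le, 4);
	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

int main(void)
{
	struct demo_mic_stats s;
	const uint8_t wire[8] = { 0x2a, 0, 0, 0, 3, 0, 0, 0 };

	memcpy(&s, wire, sizeof(s));
	printf("rx_pkts=%u calc_failure=%u\n",
	       demo_le32_to_cpu(s.rx_pkts), demo_le32_to_cpu(s.calc_failure));
	return 0;
}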
__le32 rx_procs; + + /* (INT_STS_ND & INT_TRIG_PM_802) */ + __le32 hw_pm_mode_changes; + + /* (INT_STS_ND & INT_TRIG_ACKNOWLEDGE) */ + __le32 host_acknowledges; + + /* (INT_STS_ND & INT_TRIG_PM_PCI) */ + __le32 pci_pm; + + /* (INT_STS_ND & INT_TRIG_ACM_WAKEUP) */ + __le32 wakeups; + + /* (INT_STS_ND & INT_TRIG_LOW_RSSI) */ + __le32 low_rssi; +} __packed; + +struct wl12xx_acx_wep_statistics { + /* WEP address keys configured */ + __le32 addr_key_count; + + /* default keys configured */ + __le32 default_key_count; + + __le32 reserved; + + /* number of times that WEP key not found on lookup */ + __le32 key_not_found; + + /* number of times that WEP key decryption failed */ + __le32 decrypt_fail; + + /* WEP packets decrypted */ + __le32 packets; + + /* WEP decrypt interrupts */ + __le32 interrupt; +} __packed; + +#define ACX_MISSED_BEACONS_SPREAD 10 + +struct wl12xx_acx_pwr_statistics { + /* the amount of enters into power save mode (both PD & ELP) */ + __le32 ps_enter; + + /* the amount of enters into ELP mode */ + __le32 elp_enter; + + /* the amount of missing beacon interrupts to the host */ + __le32 missing_bcns; + + /* the amount of wake on host-access times */ + __le32 wake_on_host; + + /* the amount of wake on timer-expire */ + __le32 wake_on_timer_exp; + + /* the number of packets that were transmitted with PS bit set */ + __le32 tx_with_ps; + + /* the number of packets that were transmitted with PS bit clear */ + __le32 tx_without_ps; + + /* the number of received beacons */ + __le32 rcvd_beacons; + + /* the number of entering into PowerOn (power save off) */ + __le32 power_save_off; + + /* the number of entries into power save mode */ + __le16 enable_ps; + + /* + * the number of exits from power save, not including failed PS + * transitions + */ + __le16 disable_ps; + + /* + * the number of times the TSF counter was adjusted because + * of drift + */ + __le32 fix_tsf_ps; + + /* Gives statistics about the spread continuous missed beacons. + * The 16 LSB are dedicated for the PS mode. + * The 16 MSB are dedicated for the PS mode. + * cont_miss_bcns_spread[0] - single missed beacon. + * cont_miss_bcns_spread[1] - two continuous missed beacons. + * cont_miss_bcns_spread[2] - three continuous missed beacons. + * ... + * cont_miss_bcns_spread[9] - ten and more continuous missed beacons. 
+ */ + __le32 cont_miss_bcns_spread[ACX_MISSED_BEACONS_SPREAD]; + + /* the number of beacons in awake mode */ + __le32 rcvd_awake_beacons; +} __packed; + +struct wl12xx_acx_mic_statistics { + __le32 rx_pkts; + __le32 calc_failure; +} __packed; + +struct wl12xx_acx_aes_statistics { + __le32 encrypt_fail; + __le32 decrypt_fail; + __le32 encrypt_packets; + __le32 decrypt_packets; + __le32 encrypt_interrupt; + __le32 decrypt_interrupt; +} __packed; + +struct wl12xx_acx_event_statistics { + __le32 heart_beat; + __le32 calibration; + __le32 rx_mismatch; + __le32 rx_mem_empty; + __le32 rx_pool; + __le32 oom_late; + __le32 phy_transmit_error; + __le32 tx_stuck; +} __packed; + +struct wl12xx_acx_ps_statistics { + __le32 pspoll_timeouts; + __le32 upsd_timeouts; + __le32 upsd_max_sptime; + __le32 upsd_max_apturn; + __le32 pspoll_max_apturn; + __le32 pspoll_utilization; + __le32 upsd_utilization; +} __packed; + +struct wl12xx_acx_rxpipe_statistics { + __le32 rx_prep_beacon_drop; + __le32 descr_host_int_trig_rx_data; + __le32 beacon_buffer_thres_host_int_trig_rx_data; + __le32 missed_beacon_host_int_trig_rx_data; + __le32 tx_xfr_host_int_trig_rx_data; +} __packed; + +struct wl12xx_acx_statistics { + struct acx_header header; + + struct wl12xx_acx_tx_statistics tx; + struct wl12xx_acx_rx_statistics rx; + struct wl12xx_acx_dma_statistics dma; + struct wl12xx_acx_isr_statistics isr; + struct wl12xx_acx_wep_statistics wep; + struct wl12xx_acx_pwr_statistics pwr; + struct wl12xx_acx_aes_statistics aes; + struct wl12xx_acx_mic_statistics mic; + struct wl12xx_acx_event_statistics event; + struct wl12xx_acx_ps_statistics ps; + struct wl12xx_acx_rxpipe_statistics rxpipe; +} __packed; + int wl1271_acx_host_if_cfg_bitmap(struct wl1271 *wl, u32 host_cfg_bitmap); #endif /* __WL12XX_ACX_H__ */ diff --git a/drivers/net/wireless/ti/wl12xx/cmd.c b/drivers/net/wireless/ti/wl12xx/cmd.c index 8ffaeb5f2147..622206241e83 100644 --- a/drivers/net/wireless/ti/wl12xx/cmd.c +++ b/drivers/net/wireless/ti/wl12xx/cmd.c @@ -65,6 +65,7 @@ int wl1271_cmd_general_parms(struct wl1271 *wl) struct wl1271_general_parms_cmd *gen_parms; struct wl1271_ini_general_params *gp = &((struct wl1271_nvs_file *)wl->nvs)->general_params; + struct wl12xx_priv *priv = wl->priv; bool answer = false; int ret; @@ -84,11 +85,15 @@ int wl1271_cmd_general_parms(struct wl1271 *wl) memcpy(&gen_parms->general_params, gp, sizeof(*gp)); - if (gp->tx_bip_fem_auto_detect) + /* If we started in PLT FEM_DETECT mode, force auto detect */ + if (wl->plt_mode == PLT_FEM_DETECT) + gen_parms->general_params.tx_bip_fem_auto_detect = true; + + if (gen_parms->general_params.tx_bip_fem_auto_detect) answer = true; /* Override the REF CLK from the NVS with the one from platform data */ - gen_parms->general_params.ref_clock = wl->ref_clock; + gen_parms->general_params.ref_clock = priv->ref_clock; ret = wl1271_cmd_test(wl, gen_parms, sizeof(*gen_parms), answer); if (ret < 0) { @@ -105,8 +110,17 @@ int wl1271_cmd_general_parms(struct wl1271 *wl) goto out; } + /* If we are in calibrator based fem auto detect - save fem nr */ + if (wl->plt_mode == PLT_FEM_DETECT) + wl->fem_manuf = gp->tx_bip_fem_manufacturer; + wl1271_debug(DEBUG_CMD, "FEM autodetect: %s, manufacturer: %d\n", - answer ? "auto" : "manual", gp->tx_bip_fem_manufacturer); + answer == false ? + "manual" : + wl->plt_mode == PLT_FEM_DETECT ? 
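/*
 * Illustrative sketch, not part of this patch: the nested ternary in the
 * debug message above picks "manual", "calibrator_fem_detect" or "auto"
 * depending on whether FEM auto-detection was requested and whether the
 * driver was started in PLT_FEM_DETECT mode.  The helper below restates
 * that selection; the enum values are invented for the demo.
 */
#include <stdbool.h>
#include <stdio.h>

enum demo_plt_mode { DEMO_PLT_OFF, DEMO_PLT_ON, DEMO_PLT_FEM_DETECT };

static const char *demo_fem_detect_mode(bool answer, enum demo_plt_mode plt_mode)
{
	if (!answer)
		return "manual";	/* NVS did not ask for auto detection */
	return plt_mode == DEMO_PLT_FEM_DETECT ? "calibrator_fem_detect" : "auto";
}

int main(void)
{
	printf("%s\n", demo_fem_detect_mode(true, DEMO_PLT_FEM_DETECT));
	return 0;
}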
+ "calibrator_fem_detect" : + "auto", + gp->tx_bip_fem_manufacturer); out: kfree(gen_parms); @@ -118,6 +132,7 @@ int wl128x_cmd_general_parms(struct wl1271 *wl) struct wl128x_general_parms_cmd *gen_parms; struct wl128x_ini_general_params *gp = &((struct wl128x_nvs_file *)wl->nvs)->general_params; + struct wl12xx_priv *priv = wl->priv; bool answer = false; int ret; @@ -137,12 +152,16 @@ int wl128x_cmd_general_parms(struct wl1271 *wl) memcpy(&gen_parms->general_params, gp, sizeof(*gp)); - if (gp->tx_bip_fem_auto_detect) + /* If we started in PLT FEM_DETECT mode, force auto detect */ + if (wl->plt_mode == PLT_FEM_DETECT) + gen_parms->general_params.tx_bip_fem_auto_detect = true; + + if (gen_parms->general_params.tx_bip_fem_auto_detect) answer = true; /* Replace REF and TCXO CLKs with the ones from platform data */ - gen_parms->general_params.ref_clock = wl->ref_clock; - gen_parms->general_params.tcxo_ref_clock = wl->tcxo_clock; + gen_parms->general_params.ref_clock = priv->ref_clock; + gen_parms->general_params.tcxo_ref_clock = priv->tcxo_clock; ret = wl1271_cmd_test(wl, gen_parms, sizeof(*gen_parms), answer); if (ret < 0) { @@ -159,8 +178,17 @@ int wl128x_cmd_general_parms(struct wl1271 *wl) goto out; } + /* If we are in calibrator based fem auto detect - save fem nr */ + if (wl->plt_mode == PLT_FEM_DETECT) + wl->fem_manuf = gp->tx_bip_fem_manufacturer; + wl1271_debug(DEBUG_CMD, "FEM autodetect: %s, manufacturer: %d\n", - answer ? "auto" : "manual", gp->tx_bip_fem_manufacturer); + answer == false ? + "manual" : + wl->plt_mode == PLT_FEM_DETECT ? + "calibrator_fem_detect" : + "auto", + gp->tx_bip_fem_manufacturer); out: kfree(gen_parms); @@ -172,7 +200,7 @@ int wl1271_cmd_radio_parms(struct wl1271 *wl) struct wl1271_nvs_file *nvs = (struct wl1271_nvs_file *)wl->nvs; struct wl1271_radio_parms_cmd *radio_parms; struct wl1271_ini_general_params *gp = &nvs->general_params; - int ret; + int ret, fem_idx; if (!wl->nvs) return -ENODEV; @@ -183,11 +211,13 @@ int wl1271_cmd_radio_parms(struct wl1271 *wl) radio_parms->test.id = TEST_CMD_INI_FILE_RADIO_PARAM; + fem_idx = WL12XX_FEM_TO_NVS_ENTRY(gp->tx_bip_fem_manufacturer); + /* 2.4GHz parameters */ memcpy(&radio_parms->static_params_2, &nvs->stat_radio_params_2, sizeof(struct wl1271_ini_band_params_2)); memcpy(&radio_parms->dyn_params_2, - &nvs->dyn_radio_params_2[gp->tx_bip_fem_manufacturer].params, + &nvs->dyn_radio_params_2[fem_idx].params, sizeof(struct wl1271_ini_fem_params_2)); /* 5GHz parameters */ @@ -195,7 +225,7 @@ int wl1271_cmd_radio_parms(struct wl1271 *wl) &nvs->stat_radio_params_5, sizeof(struct wl1271_ini_band_params_5)); memcpy(&radio_parms->dyn_params_5, - &nvs->dyn_radio_params_5[gp->tx_bip_fem_manufacturer].params, + &nvs->dyn_radio_params_5[fem_idx].params, sizeof(struct wl1271_ini_fem_params_5)); wl1271_dump(DEBUG_CMD, "TEST_CMD_INI_FILE_RADIO_PARAM: ", @@ -214,7 +244,7 @@ int wl128x_cmd_radio_parms(struct wl1271 *wl) struct wl128x_nvs_file *nvs = (struct wl128x_nvs_file *)wl->nvs; struct wl128x_radio_parms_cmd *radio_parms; struct wl128x_ini_general_params *gp = &nvs->general_params; - int ret; + int ret, fem_idx; if (!wl->nvs) return -ENODEV; @@ -225,11 +255,13 @@ int wl128x_cmd_radio_parms(struct wl1271 *wl) radio_parms->test.id = TEST_CMD_INI_FILE_RADIO_PARAM; + fem_idx = WL12XX_FEM_TO_NVS_ENTRY(gp->tx_bip_fem_manufacturer); + /* 2.4GHz parameters */ memcpy(&radio_parms->static_params_2, &nvs->stat_radio_params_2, sizeof(struct wl128x_ini_band_params_2)); memcpy(&radio_parms->dyn_params_2, - 
&nvs->dyn_radio_params_2[gp->tx_bip_fem_manufacturer].params, + &nvs->dyn_radio_params_2[fem_idx].params, sizeof(struct wl128x_ini_fem_params_2)); /* 5GHz parameters */ @@ -237,7 +269,7 @@ int wl128x_cmd_radio_parms(struct wl1271 *wl) &nvs->stat_radio_params_5, sizeof(struct wl128x_ini_band_params_5)); memcpy(&radio_parms->dyn_params_5, - &nvs->dyn_radio_params_5[gp->tx_bip_fem_manufacturer].params, + &nvs->dyn_radio_params_5[fem_idx].params, sizeof(struct wl128x_ini_fem_params_5)); radio_parms->fem_vendor_and_options = nvs->fem_vendor_and_options; diff --git a/drivers/net/wireless/ti/wl12xx/debugfs.c b/drivers/net/wireless/ti/wl12xx/debugfs.c new file mode 100644 index 000000000000..0521cbf858cf --- /dev/null +++ b/drivers/net/wireless/ti/wl12xx/debugfs.c @@ -0,0 +1,243 @@ +/* + * This file is part of wl12xx + * + * Copyright (C) 2009 Nokia Corporation + * Copyright (C) 2011-2012 Texas Instruments + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include "../wlcore/debugfs.h" +#include "../wlcore/wlcore.h" + +#include "wl12xx.h" +#include "acx.h" +#include "debugfs.h" + +#define WL12XX_DEBUGFS_FWSTATS_FILE(a, b, c) \ + DEBUGFS_FWSTATS_FILE(a, b, c, wl12xx_acx_statistics) + +WL12XX_DEBUGFS_FWSTATS_FILE(tx, internal_desc_overflow, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(rx, out_of_mem, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, hdr_overflow, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, hw_stuck, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, dropped, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, fcs_err, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, xfr_hint_trig, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, path_reset, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rx, reset_counter, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(dma, rx_requested, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(dma, rx_errors, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(dma, tx_requested, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(dma, tx_errors, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(isr, cmd_cmplt, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, fiqs, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, rx_headers, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, rx_mem_overflow, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, rx_rdys, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, irqs, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, tx_procs, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, decrypt_done, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, dma0_done, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, dma1_done, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, tx_exch_complete, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, commands, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, rx_procs, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, hw_pm_mode_changes, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, host_acknowledges, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, pci_pm, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, wakeups, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(isr, low_rssi, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(wep, addr_key_count, "%u"); 
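/*
 * Illustrative sketch, not part of this patch: WL12XX_DEBUGFS_FWSTATS_FILE
 * below only pins the wl12xx_acx_statistics type onto the shared wlcore
 * DEBUGFS_FWSTATS_FILE macro (defined in wlcore/debugfs.h, which is not in
 * this diff).  The userspace demo shows the general token-pasting pattern
 * such per-counter generators use -- it is not the wlcore macro itself.
 */
#include <stdio.h>

struct demo_fw_stats {
	struct { unsigned int out_of_mem, fcs_err; } rx;
	struct { unsigned int irqs, wakeups; } isr;
};

/* One invocation per counter expands to an accessor named after the field. */
#define DEMO_FWSTATS_FILE(sub, field, fmt)				\
static void demo_show_##sub##_##field(const struct demo_fw_stats *s)	\
{									\
	printf(#sub "/" #field ": " fmt "\n", s->sub.field);		\
}

DEMO_FWSTATS_FILE(rx, out_of_mem, "%u")
DEMO_FWSTATS_FILE(rx, fcs_err, "%u")
DEMO_FWSTATS_FILE(isr, irqs, "%u")

int main(void)
{
	struct demo_fw_stats s = { .rx = { 3, 7 }, .isr = { 42, 0 } };

	demo_show_rx_out_of_mem(&s);
	demo_show_rx_fcs_err(&s);
	demo_show_isr_irqs(&s);
	return 0;
}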
+WL12XX_DEBUGFS_FWSTATS_FILE(wep, default_key_count, "%u"); +/* skipping wep.reserved */ +WL12XX_DEBUGFS_FWSTATS_FILE(wep, key_not_found, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(wep, decrypt_fail, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(wep, packets, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(wep, interrupt, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, ps_enter, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, elp_enter, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, missing_bcns, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, wake_on_host, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, wake_on_timer_exp, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, tx_with_ps, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, tx_without_ps, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, rcvd_beacons, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, power_save_off, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, enable_ps, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, disable_ps, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, fix_tsf_ps, "%u"); +/* skipping cont_miss_bcns_spread for now */ +WL12XX_DEBUGFS_FWSTATS_FILE(pwr, rcvd_awake_beacons, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(mic, rx_pkts, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(mic, calc_failure, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(aes, encrypt_fail, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(aes, decrypt_fail, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(aes, encrypt_packets, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(aes, decrypt_packets, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(aes, encrypt_interrupt, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(aes, decrypt_interrupt, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(event, heart_beat, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, calibration, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, rx_mismatch, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, rx_mem_empty, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, rx_pool, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, oom_late, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, phy_transmit_error, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(event, tx_stuck, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(ps, pspoll_timeouts, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, upsd_timeouts, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, upsd_max_sptime, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, upsd_max_apturn, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, pspoll_max_apturn, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, pspoll_utilization, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(ps, upsd_utilization, "%u"); + +WL12XX_DEBUGFS_FWSTATS_FILE(rxpipe, rx_prep_beacon_drop, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rxpipe, descr_host_int_trig_rx_data, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rxpipe, beacon_buffer_thres_host_int_trig_rx_data, + "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rxpipe, missed_beacon_host_int_trig_rx_data, "%u"); +WL12XX_DEBUGFS_FWSTATS_FILE(rxpipe, tx_xfr_host_int_trig_rx_data, "%u"); + +int wl12xx_debugfs_add_files(struct wl1271 *wl, + struct dentry *rootdir) +{ + int ret = 0; + struct dentry *entry, *stats, *moddir; + + moddir = debugfs_create_dir(KBUILD_MODNAME, rootdir); + if (!moddir || IS_ERR(moddir)) { + entry = moddir; + goto err; + } + + stats = debugfs_create_dir("fw_stats", moddir); + if (!stats || IS_ERR(stats)) { + entry = stats; + goto err; + } + + DEBUGFS_FWSTATS_ADD(tx, internal_desc_overflow); + + DEBUGFS_FWSTATS_ADD(rx, out_of_mem); + DEBUGFS_FWSTATS_ADD(rx, hdr_overflow); + DEBUGFS_FWSTATS_ADD(rx, hw_stuck); + DEBUGFS_FWSTATS_ADD(rx, dropped); + DEBUGFS_FWSTATS_ADD(rx, fcs_err); + DEBUGFS_FWSTATS_ADD(rx, xfr_hint_trig); + DEBUGFS_FWSTATS_ADD(rx, path_reset); + DEBUGFS_FWSTATS_ADD(rx, reset_counter); + + DEBUGFS_FWSTATS_ADD(dma, rx_requested); + 
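/*
 * Illustrative sketch, not part of this patch: wl12xx_debugfs_add_files()
 * above funnels every failed debugfs_create_dir()/file into one err label
 * and reports either PTR_ERR(entry) or -ENOMEM.  The userspace model below
 * mimics the kernel's error-pointer convention in simplified form; the
 * DEMO_* names and the toy call logic are made up.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_ERRNO 4095

static void *demo_err_ptr(long err) { return (void *)err; }
static int demo_is_err(const void *p)
{
	/* Small negative errno values are smuggled through the pointer. */
	return (uintptr_t)p >= (uintptr_t)-DEMO_MAX_ERRNO;
}
static long demo_ptr_err(const void *p) { return (long)p; }

static int demo_add_files(void *parent_dir)
{
	void *entry = parent_dir ? demo_err_ptr(-ENODEV) : NULL;

	/* Same shape as the err: path above: a real error pointer keeps its
	 * errno, a NULL return is reported as -ENOMEM. */
	if (demo_is_err(entry))
		return (int)demo_ptr_err(entry);
	return entry ? 0 : -ENOMEM;
}

int main(void)
{
	printf("%d %d\n", demo_add_files((void *)1), demo_add_files(NULL));
	return 0;
}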
DEBUGFS_FWSTATS_ADD(dma, rx_errors); + DEBUGFS_FWSTATS_ADD(dma, tx_requested); + DEBUGFS_FWSTATS_ADD(dma, tx_errors); + + DEBUGFS_FWSTATS_ADD(isr, cmd_cmplt); + DEBUGFS_FWSTATS_ADD(isr, fiqs); + DEBUGFS_FWSTATS_ADD(isr, rx_headers); + DEBUGFS_FWSTATS_ADD(isr, rx_mem_overflow); + DEBUGFS_FWSTATS_ADD(isr, rx_rdys); + DEBUGFS_FWSTATS_ADD(isr, irqs); + DEBUGFS_FWSTATS_ADD(isr, tx_procs); + DEBUGFS_FWSTATS_ADD(isr, decrypt_done); + DEBUGFS_FWSTATS_ADD(isr, dma0_done); + DEBUGFS_FWSTATS_ADD(isr, dma1_done); + DEBUGFS_FWSTATS_ADD(isr, tx_exch_complete); + DEBUGFS_FWSTATS_ADD(isr, commands); + DEBUGFS_FWSTATS_ADD(isr, rx_procs); + DEBUGFS_FWSTATS_ADD(isr, hw_pm_mode_changes); + DEBUGFS_FWSTATS_ADD(isr, host_acknowledges); + DEBUGFS_FWSTATS_ADD(isr, pci_pm); + DEBUGFS_FWSTATS_ADD(isr, wakeups); + DEBUGFS_FWSTATS_ADD(isr, low_rssi); + + DEBUGFS_FWSTATS_ADD(wep, addr_key_count); + DEBUGFS_FWSTATS_ADD(wep, default_key_count); + /* skipping wep.reserved */ + DEBUGFS_FWSTATS_ADD(wep, key_not_found); + DEBUGFS_FWSTATS_ADD(wep, decrypt_fail); + DEBUGFS_FWSTATS_ADD(wep, packets); + DEBUGFS_FWSTATS_ADD(wep, interrupt); + + DEBUGFS_FWSTATS_ADD(pwr, ps_enter); + DEBUGFS_FWSTATS_ADD(pwr, elp_enter); + DEBUGFS_FWSTATS_ADD(pwr, missing_bcns); + DEBUGFS_FWSTATS_ADD(pwr, wake_on_host); + DEBUGFS_FWSTATS_ADD(pwr, wake_on_timer_exp); + DEBUGFS_FWSTATS_ADD(pwr, tx_with_ps); + DEBUGFS_FWSTATS_ADD(pwr, tx_without_ps); + DEBUGFS_FWSTATS_ADD(pwr, rcvd_beacons); + DEBUGFS_FWSTATS_ADD(pwr, power_save_off); + DEBUGFS_FWSTATS_ADD(pwr, enable_ps); + DEBUGFS_FWSTATS_ADD(pwr, disable_ps); + DEBUGFS_FWSTATS_ADD(pwr, fix_tsf_ps); + /* skipping cont_miss_bcns_spread for now */ + DEBUGFS_FWSTATS_ADD(pwr, rcvd_awake_beacons); + + DEBUGFS_FWSTATS_ADD(mic, rx_pkts); + DEBUGFS_FWSTATS_ADD(mic, calc_failure); + + DEBUGFS_FWSTATS_ADD(aes, encrypt_fail); + DEBUGFS_FWSTATS_ADD(aes, decrypt_fail); + DEBUGFS_FWSTATS_ADD(aes, encrypt_packets); + DEBUGFS_FWSTATS_ADD(aes, decrypt_packets); + DEBUGFS_FWSTATS_ADD(aes, encrypt_interrupt); + DEBUGFS_FWSTATS_ADD(aes, decrypt_interrupt); + + DEBUGFS_FWSTATS_ADD(event, heart_beat); + DEBUGFS_FWSTATS_ADD(event, calibration); + DEBUGFS_FWSTATS_ADD(event, rx_mismatch); + DEBUGFS_FWSTATS_ADD(event, rx_mem_empty); + DEBUGFS_FWSTATS_ADD(event, rx_pool); + DEBUGFS_FWSTATS_ADD(event, oom_late); + DEBUGFS_FWSTATS_ADD(event, phy_transmit_error); + DEBUGFS_FWSTATS_ADD(event, tx_stuck); + + DEBUGFS_FWSTATS_ADD(ps, pspoll_timeouts); + DEBUGFS_FWSTATS_ADD(ps, upsd_timeouts); + DEBUGFS_FWSTATS_ADD(ps, upsd_max_sptime); + DEBUGFS_FWSTATS_ADD(ps, upsd_max_apturn); + DEBUGFS_FWSTATS_ADD(ps, pspoll_max_apturn); + DEBUGFS_FWSTATS_ADD(ps, pspoll_utilization); + DEBUGFS_FWSTATS_ADD(ps, upsd_utilization); + + DEBUGFS_FWSTATS_ADD(rxpipe, rx_prep_beacon_drop); + DEBUGFS_FWSTATS_ADD(rxpipe, descr_host_int_trig_rx_data); + DEBUGFS_FWSTATS_ADD(rxpipe, beacon_buffer_thres_host_int_trig_rx_data); + DEBUGFS_FWSTATS_ADD(rxpipe, missed_beacon_host_int_trig_rx_data); + DEBUGFS_FWSTATS_ADD(rxpipe, tx_xfr_host_int_trig_rx_data); + + return 0; + +err: + if (IS_ERR(entry)) + ret = PTR_ERR(entry); + else + ret = -ENOMEM; + + return ret; +} diff --git a/drivers/net/wireless/ti/wl12xx/debugfs.h b/drivers/net/wireless/ti/wl12xx/debugfs.h new file mode 100644 index 000000000000..96898e291b78 --- /dev/null +++ b/drivers/net/wireless/ti/wl12xx/debugfs.h @@ -0,0 +1,28 @@ +/* + * This file is part of wl12xx + * + * Copyright (C) 2012 Texas Instruments. All rights reserved. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL12XX_DEBUGFS_H__ +#define __WL12XX_DEBUGFS_H__ + +int wl12xx_debugfs_add_files(struct wl1271 *wl, + struct dentry *rootdir); + +#endif /* __WL12XX_DEBUGFS_H__ */ diff --git a/drivers/net/wireless/ti/wl12xx/main.c b/drivers/net/wireless/ti/wl12xx/main.c index d7dd3def07b5..f429fc110cb0 100644 --- a/drivers/net/wireless/ti/wl12xx/main.c +++ b/drivers/net/wireless/ti/wl12xx/main.c @@ -39,6 +39,10 @@ #include "reg.h" #include "cmd.h" #include "acx.h" +#include "debugfs.h" + +static char *fref_param; +static char *tcxo_param; static struct wlcore_conf wl12xx_conf = { .sg = { @@ -212,7 +216,7 @@ static struct wlcore_conf wl12xx_conf = { .suspend_wake_up_event = CONF_WAKE_UP_EVENT_N_DTIM, .suspend_listen_interval = 3, .bcn_filt_mode = CONF_BCN_FILT_MODE_ENABLED, - .bcn_filt_ie_count = 2, + .bcn_filt_ie_count = 3, .bcn_filt_ie = { [0] = { .ie = WLAN_EID_CHANNEL_SWITCH, @@ -222,9 +226,13 @@ static struct wlcore_conf wl12xx_conf = { .ie = WLAN_EID_HT_OPERATION, .rule = CONF_BCN_RULE_PASS_ON_CHANGE, }, + [2] = { + .ie = WLAN_EID_ERP_INFO, + .rule = CONF_BCN_RULE_PASS_ON_CHANGE, + }, }, - .synch_fail_thold = 10, - .bss_lose_timeout = 100, + .synch_fail_thold = 12, + .bss_lose_timeout = 400, .beacon_rx_timeout = 10000, .broadcast_timeout = 20000, .rx_broadcast_in_ps = 1, @@ -234,10 +242,11 @@ static struct wlcore_conf wl12xx_conf = { .psm_entry_retries = 8, .psm_exit_retries = 16, .psm_entry_nullfunc_retries = 3, - .dynamic_ps_timeout = 40, + .dynamic_ps_timeout = 1500, .forced_ps = false, .keep_alive_interval = 55000, .max_listen_interval = 20, + .sta_sleep_auth = WL1271_PSM_ILLEGAL, }, .itrim = { .enable = false, @@ -245,7 +254,7 @@ static struct wlcore_conf wl12xx_conf = { }, .pm_config = { .host_clk_settling_time = 5000, - .host_fast_wakeup_support = false + .host_fast_wakeup_support = CONF_FAST_WAKEUP_DISABLE, }, .roam_trigger = { .trigger_pacing = 1, @@ -305,8 +314,8 @@ static struct wlcore_conf wl12xx_conf = { .swallow_period = 5, .n_divider_fref_set_1 = 0xff, /* default */ .n_divider_fref_set_2 = 12, - .m_divider_fref_set_1 = 148, - .m_divider_fref_set_2 = 0xffff, /* default */ + .m_divider_fref_set_1 = 0xffff, + .m_divider_fref_set_2 = 148, /* default */ .coex_pll_stabilization_time = 0xffffffff, /* default */ .ldo_stabilization_time = 0xffff, /* default */ .fm_disturbed_band_margin = 0xff, /* default */ @@ -581,19 +590,21 @@ static const int wl12xx_rtable[REG_TABLE_LEN] = { }; /* TODO: maybe move to a new header file? 
*/ -#define WL127X_FW_NAME_MULTI "ti-connectivity/wl127x-fw-4-mr.bin" -#define WL127X_FW_NAME_SINGLE "ti-connectivity/wl127x-fw-4-sr.bin" -#define WL127X_PLT_FW_NAME "ti-connectivity/wl127x-fw-4-plt.bin" +#define WL127X_FW_NAME_MULTI "ti-connectivity/wl127x-fw-5-mr.bin" +#define WL127X_FW_NAME_SINGLE "ti-connectivity/wl127x-fw-5-sr.bin" +#define WL127X_PLT_FW_NAME "ti-connectivity/wl127x-fw-5-plt.bin" -#define WL128X_FW_NAME_MULTI "ti-connectivity/wl128x-fw-4-mr.bin" -#define WL128X_FW_NAME_SINGLE "ti-connectivity/wl128x-fw-4-sr.bin" -#define WL128X_PLT_FW_NAME "ti-connectivity/wl128x-fw-4-plt.bin" +#define WL128X_FW_NAME_MULTI "ti-connectivity/wl128x-fw-5-mr.bin" +#define WL128X_FW_NAME_SINGLE "ti-connectivity/wl128x-fw-5-sr.bin" +#define WL128X_PLT_FW_NAME "ti-connectivity/wl128x-fw-5-plt.bin" -static void wl127x_prepare_read(struct wl1271 *wl, u32 rx_desc, u32 len) +static int wl127x_prepare_read(struct wl1271 *wl, u32 rx_desc, u32 len) { + int ret; + if (wl->chip.id != CHIP_ID_1283_PG20) { struct wl1271_acx_mem_map *wl_mem_map = wl->target_mem_map; - struct wl1271_rx_mem_pool_addr rx_mem_addr; + struct wl127x_rx_mem_pool_addr rx_mem_addr; /* * Choose the block we want to read @@ -607,9 +618,13 @@ static void wl127x_prepare_read(struct wl1271 *wl, u32 rx_desc, u32 len) rx_mem_addr.addr_extra = rx_mem_addr.addr + 4; - wl1271_write(wl, WL1271_SLV_REG_DATA, - &rx_mem_addr, sizeof(rx_mem_addr), false); + ret = wlcore_write(wl, WL1271_SLV_REG_DATA, &rx_mem_addr, + sizeof(rx_mem_addr), false); + if (ret < 0) + return ret; } + + return 0; } static int wl12xx_identify_chip(struct wl1271 *wl) @@ -621,10 +636,9 @@ static int wl12xx_identify_chip(struct wl1271 *wl) wl1271_warning("chip id 0x%x (1271 PG10) support is obsolete", wl->chip.id); - /* clear the alignment quirk, since we don't support it */ - wl->quirks &= ~WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN; - - wl->quirks |= WLCORE_QUIRK_LEGACY_NVS; + wl->quirks |= WLCORE_QUIRK_LEGACY_NVS | + WLCORE_QUIRK_DUAL_PROBE_TMPL | + WLCORE_QUIRK_TKIP_HEADER_SPACE; wl->sr_fw_name = WL127X_FW_NAME_SINGLE; wl->mr_fw_name = WL127X_FW_NAME_MULTI; memcpy(&wl->conf.mem, &wl12xx_default_priv_conf.mem_wl127x, @@ -633,16 +647,18 @@ static int wl12xx_identify_chip(struct wl1271 *wl) /* read data preparation is only needed by wl127x */ wl->ops->prepare_read = wl127x_prepare_read; + wlcore_set_min_fw_ver(wl, WL127X_CHIP_VER, WL127X_IFTYPE_VER, + WL127X_MAJOR_VER, WL127X_SUBTYPE_VER, + WL127X_MINOR_VER); break; case CHIP_ID_1271_PG20: wl1271_debug(DEBUG_BOOT, "chip id 0x%x (1271 PG20)", wl->chip.id); - /* clear the alignment quirk, since we don't support it */ - wl->quirks &= ~WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN; - - wl->quirks |= WLCORE_QUIRK_LEGACY_NVS; + wl->quirks |= WLCORE_QUIRK_LEGACY_NVS | + WLCORE_QUIRK_DUAL_PROBE_TMPL | + WLCORE_QUIRK_TKIP_HEADER_SPACE; wl->plt_fw_name = WL127X_PLT_FW_NAME; wl->sr_fw_name = WL127X_FW_NAME_SINGLE; wl->mr_fw_name = WL127X_FW_NAME_MULTI; @@ -652,6 +668,9 @@ static int wl12xx_identify_chip(struct wl1271 *wl) /* read data preparation is only needed by wl127x */ wl->ops->prepare_read = wl127x_prepare_read; + wlcore_set_min_fw_ver(wl, WL127X_CHIP_VER, WL127X_IFTYPE_VER, + WL127X_MAJOR_VER, WL127X_SUBTYPE_VER, + WL127X_MINOR_VER); break; case CHIP_ID_1283_PG20: @@ -660,6 +679,15 @@ static int wl12xx_identify_chip(struct wl1271 *wl) wl->plt_fw_name = WL128X_PLT_FW_NAME; wl->sr_fw_name = WL128X_FW_NAME_SINGLE; wl->mr_fw_name = WL128X_FW_NAME_MULTI; + + /* wl128x requires TX blocksize alignment */ + wl->quirks |= WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN | + 
WLCORE_QUIRK_DUAL_PROBE_TMPL | + WLCORE_QUIRK_TKIP_HEADER_SPACE; + + wlcore_set_min_fw_ver(wl, WL128X_CHIP_VER, WL128X_IFTYPE_VER, + WL128X_MAJOR_VER, WL128X_SUBTYPE_VER, + WL128X_MINOR_VER); break; case CHIP_ID_1283_PG10: default: @@ -672,64 +700,95 @@ out: return ret; } -static void wl12xx_top_reg_write(struct wl1271 *wl, int addr, u16 val) +static int __must_check wl12xx_top_reg_write(struct wl1271 *wl, int addr, + u16 val) { + int ret; + /* write address >> 1 + 0x30000 to OCP_POR_CTR */ addr = (addr >> 1) + 0x30000; - wl1271_write32(wl, WL12XX_OCP_POR_CTR, addr); + ret = wlcore_write32(wl, WL12XX_OCP_POR_CTR, addr); + if (ret < 0) + goto out; /* write value to OCP_POR_WDATA */ - wl1271_write32(wl, WL12XX_OCP_DATA_WRITE, val); + ret = wlcore_write32(wl, WL12XX_OCP_DATA_WRITE, val); + if (ret < 0) + goto out; /* write 1 to OCP_CMD */ - wl1271_write32(wl, WL12XX_OCP_CMD, OCP_CMD_WRITE); + ret = wlcore_write32(wl, WL12XX_OCP_CMD, OCP_CMD_WRITE); + if (ret < 0) + goto out; + +out: + return ret; } -static u16 wl12xx_top_reg_read(struct wl1271 *wl, int addr) +static int __must_check wl12xx_top_reg_read(struct wl1271 *wl, int addr, + u16 *out) { u32 val; int timeout = OCP_CMD_LOOP; + int ret; /* write address >> 1 + 0x30000 to OCP_POR_CTR */ addr = (addr >> 1) + 0x30000; - wl1271_write32(wl, WL12XX_OCP_POR_CTR, addr); + ret = wlcore_write32(wl, WL12XX_OCP_POR_CTR, addr); + if (ret < 0) + return ret; /* write 2 to OCP_CMD */ - wl1271_write32(wl, WL12XX_OCP_CMD, OCP_CMD_READ); + ret = wlcore_write32(wl, WL12XX_OCP_CMD, OCP_CMD_READ); + if (ret < 0) + return ret; /* poll for data ready */ do { - val = wl1271_read32(wl, WL12XX_OCP_DATA_READ); + ret = wlcore_read32(wl, WL12XX_OCP_DATA_READ, &val); + if (ret < 0) + return ret; } while (!(val & OCP_READY_MASK) && --timeout); if (!timeout) { wl1271_warning("Top register access timed out."); - return 0xffff; + return -ETIMEDOUT; } /* check data status and return if OK */ - if ((val & OCP_STATUS_MASK) == OCP_STATUS_OK) - return val & 0xffff; - else { + if ((val & OCP_STATUS_MASK) != OCP_STATUS_OK) { wl1271_warning("Top register access returned error."); - return 0xffff; + return -EIO; } + + if (out) + *out = val & 0xffff; + + return 0; } static int wl128x_switch_tcxo_to_fref(struct wl1271 *wl) { u16 spare_reg; + int ret; /* Mask bits [2] & [8:4] in the sys_clk_cfg register */ - spare_reg = wl12xx_top_reg_read(wl, WL_SPARE_REG); + ret = wl12xx_top_reg_read(wl, WL_SPARE_REG, &spare_reg); + if (ret < 0) + return ret; + if (spare_reg == 0xFFFF) return -EFAULT; spare_reg |= (BIT(3) | BIT(5) | BIT(6)); - wl12xx_top_reg_write(wl, WL_SPARE_REG, spare_reg); + ret = wl12xx_top_reg_write(wl, WL_SPARE_REG, spare_reg); + if (ret < 0) + return ret; /* Enable FREF_CLK_REQ & mux MCS and coex PLLs to FREF */ - wl12xx_top_reg_write(wl, SYS_CLK_CFG_REG, - WL_CLK_REQ_TYPE_PG2 | MCS_PLL_CLK_SEL_FREF); + ret = wl12xx_top_reg_write(wl, SYS_CLK_CFG_REG, + WL_CLK_REQ_TYPE_PG2 | MCS_PLL_CLK_SEL_FREF); + if (ret < 0) + return ret; /* Delay execution for 15msec, to let the HW settle */ mdelay(15); @@ -740,8 +799,12 @@ static int wl128x_switch_tcxo_to_fref(struct wl1271 *wl) static bool wl128x_is_tcxo_valid(struct wl1271 *wl) { u16 tcxo_detection; + int ret; + + ret = wl12xx_top_reg_read(wl, TCXO_CLK_DETECT_REG, &tcxo_detection); + if (ret < 0) + return false; - tcxo_detection = wl12xx_top_reg_read(wl, TCXO_CLK_DETECT_REG); if (tcxo_detection & TCXO_DET_FAILED) return false; @@ -751,8 +814,12 @@ static bool wl128x_is_tcxo_valid(struct wl1271 *wl) static bool 
wl128x_is_fref_valid(struct wl1271 *wl) { u16 fref_detection; + int ret; + + ret = wl12xx_top_reg_read(wl, FREF_CLK_DETECT_REG, &fref_detection); + if (ret < 0) + return false; - fref_detection = wl12xx_top_reg_read(wl, FREF_CLK_DETECT_REG); if (fref_detection & FREF_CLK_DETECT_FAIL) return false; @@ -761,11 +828,21 @@ static bool wl128x_is_fref_valid(struct wl1271 *wl) static int wl128x_manually_configure_mcs_pll(struct wl1271 *wl) { - wl12xx_top_reg_write(wl, MCS_PLL_M_REG, MCS_PLL_M_REG_VAL); - wl12xx_top_reg_write(wl, MCS_PLL_N_REG, MCS_PLL_N_REG_VAL); - wl12xx_top_reg_write(wl, MCS_PLL_CONFIG_REG, MCS_PLL_CONFIG_REG_VAL); + int ret; - return 0; + ret = wl12xx_top_reg_write(wl, MCS_PLL_M_REG, MCS_PLL_M_REG_VAL); + if (ret < 0) + goto out; + + ret = wl12xx_top_reg_write(wl, MCS_PLL_N_REG, MCS_PLL_N_REG_VAL); + if (ret < 0) + goto out; + + ret = wl12xx_top_reg_write(wl, MCS_PLL_CONFIG_REG, + MCS_PLL_CONFIG_REG_VAL); + +out: + return ret; } static int wl128x_configure_mcs_pll(struct wl1271 *wl, int clk) @@ -773,30 +850,40 @@ static int wl128x_configure_mcs_pll(struct wl1271 *wl, int clk) u16 spare_reg; u16 pll_config; u8 input_freq; + struct wl12xx_priv *priv = wl->priv; + int ret; /* Mask bits [3:1] in the sys_clk_cfg register */ - spare_reg = wl12xx_top_reg_read(wl, WL_SPARE_REG); + ret = wl12xx_top_reg_read(wl, WL_SPARE_REG, &spare_reg); + if (ret < 0) + return ret; + if (spare_reg == 0xFFFF) return -EFAULT; spare_reg |= BIT(2); - wl12xx_top_reg_write(wl, WL_SPARE_REG, spare_reg); + ret = wl12xx_top_reg_write(wl, WL_SPARE_REG, spare_reg); + if (ret < 0) + return ret; /* Handle special cases of the TCXO clock */ - if (wl->tcxo_clock == WL12XX_TCXOCLOCK_16_8 || - wl->tcxo_clock == WL12XX_TCXOCLOCK_33_6) + if (priv->tcxo_clock == WL12XX_TCXOCLOCK_16_8 || + priv->tcxo_clock == WL12XX_TCXOCLOCK_33_6) return wl128x_manually_configure_mcs_pll(wl); /* Set the input frequency according to the selected clock source */ input_freq = (clk & 1) + 1; - pll_config = wl12xx_top_reg_read(wl, MCS_PLL_CONFIG_REG); + ret = wl12xx_top_reg_read(wl, MCS_PLL_CONFIG_REG, &pll_config); + if (ret < 0) + return ret; + if (pll_config == 0xFFFF) return -EFAULT; pll_config |= (input_freq << MCS_SEL_IN_FREQ_SHIFT); pll_config |= MCS_PLL_ENABLE_HP; - wl12xx_top_reg_write(wl, MCS_PLL_CONFIG_REG, pll_config); + ret = wl12xx_top_reg_write(wl, MCS_PLL_CONFIG_REG, pll_config); - return 0; + return ret; } /* @@ -808,26 +895,31 @@ static int wl128x_configure_mcs_pll(struct wl1271 *wl, int clk) */ static int wl128x_boot_clk(struct wl1271 *wl, int *selected_clock) { + struct wl12xx_priv *priv = wl->priv; u16 sys_clk_cfg; + int ret; /* For XTAL-only modes, FREF will be used after switching from TCXO */ - if (wl->ref_clock == WL12XX_REFCLOCK_26_XTAL || - wl->ref_clock == WL12XX_REFCLOCK_38_XTAL) { + if (priv->ref_clock == WL12XX_REFCLOCK_26_XTAL || + priv->ref_clock == WL12XX_REFCLOCK_38_XTAL) { if (!wl128x_switch_tcxo_to_fref(wl)) return -EINVAL; goto fref_clk; } /* Query the HW, to determine which clock source we should use */ - sys_clk_cfg = wl12xx_top_reg_read(wl, SYS_CLK_CFG_REG); + ret = wl12xx_top_reg_read(wl, SYS_CLK_CFG_REG, &sys_clk_cfg); + if (ret < 0) + return ret; + if (sys_clk_cfg == 0xFFFF) return -EINVAL; if (sys_clk_cfg & PRCM_CM_EN_MUX_WLAN_FREF) goto fref_clk; /* If TCXO is either 32.736MHz or 16.368MHz, switch to FREF */ - if (wl->tcxo_clock == WL12XX_TCXOCLOCK_16_368 || - wl->tcxo_clock == WL12XX_TCXOCLOCK_32_736) { + if (priv->tcxo_clock == WL12XX_TCXOCLOCK_16_368 || + priv->tcxo_clock == 
WL12XX_TCXOCLOCK_32_736) { if (!wl128x_switch_tcxo_to_fref(wl)) return -EINVAL; goto fref_clk; @@ -836,14 +928,14 @@ static int wl128x_boot_clk(struct wl1271 *wl, int *selected_clock) /* TCXO clock is selected */ if (!wl128x_is_tcxo_valid(wl)) return -EINVAL; - *selected_clock = wl->tcxo_clock; + *selected_clock = priv->tcxo_clock; goto config_mcs_pll; fref_clk: /* FREF clock is selected */ if (!wl128x_is_fref_valid(wl)) return -EINVAL; - *selected_clock = wl->ref_clock; + *selected_clock = priv->ref_clock; config_mcs_pll: return wl128x_configure_mcs_pll(wl, *selected_clock); @@ -851,69 +943,98 @@ config_mcs_pll: static int wl127x_boot_clk(struct wl1271 *wl) { + struct wl12xx_priv *priv = wl->priv; u32 pause; u32 clk; + int ret; if (WL127X_PG_GET_MAJOR(wl->hw_pg_ver) < 3) wl->quirks |= WLCORE_QUIRK_END_OF_TRANSACTION; - if (wl->ref_clock == CONF_REF_CLK_19_2_E || - wl->ref_clock == CONF_REF_CLK_38_4_E || - wl->ref_clock == CONF_REF_CLK_38_4_M_XTAL) + if (priv->ref_clock == CONF_REF_CLK_19_2_E || + priv->ref_clock == CONF_REF_CLK_38_4_E || + priv->ref_clock == CONF_REF_CLK_38_4_M_XTAL) /* ref clk: 19.2/38.4/38.4-XTAL */ clk = 0x3; - else if (wl->ref_clock == CONF_REF_CLK_26_E || - wl->ref_clock == CONF_REF_CLK_52_E) + else if (priv->ref_clock == CONF_REF_CLK_26_E || + priv->ref_clock == CONF_REF_CLK_26_M_XTAL || + priv->ref_clock == CONF_REF_CLK_52_E) /* ref clk: 26/52 */ clk = 0x5; else return -EINVAL; - if (wl->ref_clock != CONF_REF_CLK_19_2_E) { + if (priv->ref_clock != CONF_REF_CLK_19_2_E) { u16 val; /* Set clock type (open drain) */ - val = wl12xx_top_reg_read(wl, OCP_REG_CLK_TYPE); + ret = wl12xx_top_reg_read(wl, OCP_REG_CLK_TYPE, &val); + if (ret < 0) + goto out; + val &= FREF_CLK_TYPE_BITS; - wl12xx_top_reg_write(wl, OCP_REG_CLK_TYPE, val); + ret = wl12xx_top_reg_write(wl, OCP_REG_CLK_TYPE, val); + if (ret < 0) + goto out; /* Set clock pull mode (no pull) */ - val = wl12xx_top_reg_read(wl, OCP_REG_CLK_PULL); + ret = wl12xx_top_reg_read(wl, OCP_REG_CLK_PULL, &val); + if (ret < 0) + goto out; + val |= NO_PULL; - wl12xx_top_reg_write(wl, OCP_REG_CLK_PULL, val); + ret = wl12xx_top_reg_write(wl, OCP_REG_CLK_PULL, val); + if (ret < 0) + goto out; } else { u16 val; /* Set clock polarity */ - val = wl12xx_top_reg_read(wl, OCP_REG_CLK_POLARITY); + ret = wl12xx_top_reg_read(wl, OCP_REG_CLK_POLARITY, &val); + if (ret < 0) + goto out; + val &= FREF_CLK_POLARITY_BITS; val |= CLK_REQ_OUTN_SEL; - wl12xx_top_reg_write(wl, OCP_REG_CLK_POLARITY, val); + ret = wl12xx_top_reg_write(wl, OCP_REG_CLK_POLARITY, val); + if (ret < 0) + goto out; } - wl1271_write32(wl, WL12XX_PLL_PARAMETERS, clk); + ret = wlcore_write32(wl, WL12XX_PLL_PARAMETERS, clk); + if (ret < 0) + goto out; - pause = wl1271_read32(wl, WL12XX_PLL_PARAMETERS); + ret = wlcore_read32(wl, WL12XX_PLL_PARAMETERS, &pause); + if (ret < 0) + goto out; wl1271_debug(DEBUG_BOOT, "pause1 0x%x", pause); pause &= ~(WU_COUNTER_PAUSE_VAL); pause |= WU_COUNTER_PAUSE_VAL; - wl1271_write32(wl, WL12XX_WU_COUNTER_PAUSE, pause); + ret = wlcore_write32(wl, WL12XX_WU_COUNTER_PAUSE, pause); - return 0; +out: + return ret; } static int wl1271_boot_soft_reset(struct wl1271 *wl) { unsigned long timeout; u32 boot_data; + int ret = 0; /* perform soft reset */ - wl1271_write32(wl, WL12XX_SLV_SOFT_RESET, ACX_SLV_SOFT_RESET_BIT); + ret = wlcore_write32(wl, WL12XX_SLV_SOFT_RESET, ACX_SLV_SOFT_RESET_BIT); + if (ret < 0) + goto out; /* SOFT_RESET is self clearing */ timeout = jiffies + usecs_to_jiffies(SOFT_RESET_MAX_TIME); while (1) { - boot_data = wl1271_read32(wl, 
WL12XX_SLV_SOFT_RESET); + ret = wlcore_read32(wl, WL12XX_SLV_SOFT_RESET, &boot_data); + if (ret < 0) + goto out; + wl1271_debug(DEBUG_BOOT, "soft reset bootdata 0x%x", boot_data); if ((boot_data & ACX_SLV_SOFT_RESET_BIT) == 0) break; @@ -929,16 +1050,20 @@ static int wl1271_boot_soft_reset(struct wl1271 *wl) } /* disable Rx/Tx */ - wl1271_write32(wl, WL12XX_ENABLE, 0x0); + ret = wlcore_write32(wl, WL12XX_ENABLE, 0x0); + if (ret < 0) + goto out; /* disable auto calibration on start*/ - wl1271_write32(wl, WL12XX_SPARE_A2, 0xffff); + ret = wlcore_write32(wl, WL12XX_SPARE_A2, 0xffff); - return 0; +out: + return ret; } static int wl12xx_pre_boot(struct wl1271 *wl) { + struct wl12xx_priv *priv = wl->priv; int ret = 0; u32 clk; int selected_clock = -1; @@ -954,30 +1079,43 @@ static int wl12xx_pre_boot(struct wl1271 *wl) } /* Continue the ELP wake up sequence */ - wl1271_write32(wl, WL12XX_WELP_ARM_COMMAND, WELP_ARM_COMMAND_VAL); + ret = wlcore_write32(wl, WL12XX_WELP_ARM_COMMAND, WELP_ARM_COMMAND_VAL); + if (ret < 0) + goto out; + udelay(500); - wlcore_set_partition(wl, &wl->ptable[PART_DRPW]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_DRPW]); + if (ret < 0) + goto out; /* Read-modify-write DRPW_SCRATCH_START register (see next state) to be used by DRPw FW. The RTRIM value will be added by the FW before taking DRPw out of reset */ - clk = wl1271_read32(wl, WL12XX_DRPW_SCRATCH_START); + ret = wlcore_read32(wl, WL12XX_DRPW_SCRATCH_START, &clk); + if (ret < 0) + goto out; wl1271_debug(DEBUG_BOOT, "clk2 0x%x", clk); if (wl->chip.id == CHIP_ID_1283_PG20) clk |= ((selected_clock & 0x3) << 1) << 4; else - clk |= (wl->ref_clock << 1) << 4; + clk |= (priv->ref_clock << 1) << 4; - wl1271_write32(wl, WL12XX_DRPW_SCRATCH_START, clk); + ret = wlcore_write32(wl, WL12XX_DRPW_SCRATCH_START, clk); + if (ret < 0) + goto out; - wlcore_set_partition(wl, &wl->ptable[PART_WORK]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_WORK]); + if (ret < 0) + goto out; /* Disable interrupts */ - wlcore_write_reg(wl, REG_INTERRUPT_MASK, WL1271_ACX_INTR_ALL); + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, WL1271_ACX_INTR_ALL); + if (ret < 0) + goto out; ret = wl1271_boot_soft_reset(wl); if (ret < 0) @@ -987,47 +1125,72 @@ out: return ret; } -static void wl12xx_pre_upload(struct wl1271 *wl) +static int wl12xx_pre_upload(struct wl1271 *wl) { u32 tmp; + u16 polarity; + int ret; /* write firmware's last address (ie. it's length) to * ACX_EEPROMLESS_IND_REG */ wl1271_debug(DEBUG_BOOT, "ACX_EEPROMLESS_IND_REG"); - wl1271_write32(wl, WL12XX_EEPROMLESS_IND, WL12XX_EEPROMLESS_IND); + ret = wlcore_write32(wl, WL12XX_EEPROMLESS_IND, WL12XX_EEPROMLESS_IND); + if (ret < 0) + goto out; - tmp = wlcore_read_reg(wl, REG_CHIP_ID_B); + ret = wlcore_read_reg(wl, REG_CHIP_ID_B, &tmp); + if (ret < 0) + goto out; wl1271_debug(DEBUG_BOOT, "chip id 0x%x", tmp); /* 6. 
read the EEPROM parameters */ - tmp = wl1271_read32(wl, WL12XX_SCR_PAD2); + ret = wlcore_read32(wl, WL12XX_SCR_PAD2, &tmp); + if (ret < 0) + goto out; /* WL1271: The reference driver skips steps 7 to 10 (jumps directly * to upload_fw) */ - if (wl->chip.id == CHIP_ID_1283_PG20) - wl12xx_top_reg_write(wl, SDIO_IO_DS, HCI_IO_DS_6MA); -} - -static void wl12xx_enable_interrupts(struct wl1271 *wl) -{ - u32 polarity; + if (wl->chip.id == CHIP_ID_1283_PG20) { + ret = wl12xx_top_reg_write(wl, SDIO_IO_DS, HCI_IO_DS_6MA); + if (ret < 0) + goto out; + } - polarity = wl12xx_top_reg_read(wl, OCP_REG_POLARITY); + /* polarity must be set before the firmware is loaded */ + ret = wl12xx_top_reg_read(wl, OCP_REG_POLARITY, &polarity); + if (ret < 0) + goto out; /* We use HIGH polarity, so unset the LOW bit */ polarity &= ~POLARITY_LOW; - wl12xx_top_reg_write(wl, OCP_REG_POLARITY, polarity); + ret = wl12xx_top_reg_write(wl, OCP_REG_POLARITY, polarity); + +out: + return ret; +} - wlcore_write_reg(wl, REG_INTERRUPT_MASK, WL1271_ACX_ALL_EVENTS_VECTOR); +static int wl12xx_enable_interrupts(struct wl1271 *wl) +{ + int ret; + + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, + WL12XX_ACX_ALL_EVENTS_VECTOR); + if (ret < 0) + goto out; wlcore_enable_interrupts(wl); - wlcore_write_reg(wl, REG_INTERRUPT_MASK, - WL1271_ACX_INTR_ALL & ~(WL1271_INTR_MASK)); + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, + WL1271_ACX_INTR_ALL & ~(WL12XX_INTR_MASK)); + if (ret < 0) + goto out; + + ret = wlcore_write32(wl, WL12XX_HI_CFG, HI_CFG_DEF_VAL); - wl1271_write32(wl, WL12XX_HI_CFG, HI_CFG_DEF_VAL); +out: + return ret; } static int wl12xx_boot(struct wl1271 *wl) @@ -1042,7 +1205,9 @@ static int wl12xx_boot(struct wl1271 *wl) if (ret < 0) goto out; - wl12xx_pre_upload(wl); + ret = wl12xx_pre_upload(wl); + if (ret < 0) + goto out; ret = wlcore_boot_upload_firmware(wl); if (ret < 0) @@ -1052,22 +1217,30 @@ static int wl12xx_boot(struct wl1271 *wl) if (ret < 0) goto out; - wl12xx_enable_interrupts(wl); + ret = wl12xx_enable_interrupts(wl); out: return ret; } -static void wl12xx_trigger_cmd(struct wl1271 *wl, int cmd_box_addr, +static int wl12xx_trigger_cmd(struct wl1271 *wl, int cmd_box_addr, void *buf, size_t len) { - wl1271_write(wl, cmd_box_addr, buf, len, false); - wlcore_write_reg(wl, REG_INTERRUPT_TRIG, WL12XX_INTR_TRIG_CMD); + int ret; + + ret = wlcore_write(wl, cmd_box_addr, buf, len, false); + if (ret < 0) + return ret; + + ret = wlcore_write_reg(wl, REG_INTERRUPT_TRIG, WL12XX_INTR_TRIG_CMD); + + return ret; } -static void wl12xx_ack_event(struct wl1271 *wl) +static int wl12xx_ack_event(struct wl1271 *wl) { - wlcore_write_reg(wl, REG_INTERRUPT_TRIG, WL12XX_INTR_TRIG_EVENT_ACK); + return wlcore_write_reg(wl, REG_INTERRUPT_TRIG, + WL12XX_INTR_TRIG_EVENT_ACK); } static u32 wl12xx_calc_tx_blocks(struct wl1271 *wl, u32 len, u32 spare_blks) @@ -1147,12 +1320,13 @@ static u32 wl12xx_get_rx_packet_len(struct wl1271 *wl, void *rx_data, return data_len - sizeof(*desc) - desc->pad_len; } -static void wl12xx_tx_delayed_compl(struct wl1271 *wl) +static int wl12xx_tx_delayed_compl(struct wl1271 *wl) { - if (wl->fw_status->tx_results_counter == (wl->tx_results_count & 0xff)) - return; + if (wl->fw_status_1->tx_results_counter == + (wl->tx_results_count & 0xff)) + return 0; - wl1271_tx_complete(wl); + return wlcore_tx_complete(wl); } static int wl12xx_hw_init(struct wl1271 *wl) @@ -1165,6 +1339,14 @@ static int wl12xx_hw_init(struct wl1271 *wl) ret = wl128x_cmd_general_parms(wl); if (ret < 0) goto out; + + /* + * If we are in calibrator based 
auto detect then we got the FEM nr + * in wl->fem_manuf. No need to continue further + */ + if (wl->plt_mode == PLT_FEM_DETECT) + goto out; + ret = wl128x_cmd_radio_parms(wl); if (ret < 0) goto out; @@ -1181,6 +1363,14 @@ static int wl12xx_hw_init(struct wl1271 *wl) ret = wl1271_cmd_general_parms(wl); if (ret < 0) goto out; + + /* + * If we are in calibrator based auto detect then we got the FEM nr + * in wl->fem_manuf. No need to continue further + */ + if (wl->plt_mode == PLT_FEM_DETECT) + goto out; + ret = wl1271_cmd_radio_parms(wl); if (ret < 0) goto out; @@ -1253,45 +1443,151 @@ static bool wl12xx_mac_in_fuse(struct wl1271 *wl) return supported; } -static void wl12xx_get_fuse_mac(struct wl1271 *wl) +static int wl12xx_get_fuse_mac(struct wl1271 *wl) { u32 mac1, mac2; + int ret; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_DRPW]); + if (ret < 0) + goto out; - wlcore_set_partition(wl, &wl->ptable[PART_DRPW]); + ret = wlcore_read32(wl, WL12XX_REG_FUSE_BD_ADDR_1, &mac1); + if (ret < 0) + goto out; - mac1 = wl1271_read32(wl, WL12XX_REG_FUSE_BD_ADDR_1); - mac2 = wl1271_read32(wl, WL12XX_REG_FUSE_BD_ADDR_2); + ret = wlcore_read32(wl, WL12XX_REG_FUSE_BD_ADDR_2, &mac2); + if (ret < 0) + goto out; /* these are the two parts of the BD_ADDR */ wl->fuse_oui_addr = ((mac2 & 0xffff) << 8) + ((mac1 & 0xff000000) >> 24); wl->fuse_nic_addr = mac1 & 0xffffff; - wlcore_set_partition(wl, &wl->ptable[PART_DOWN]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_DOWN]); + +out: + return ret; } -static s8 wl12xx_get_pg_ver(struct wl1271 *wl) +static int wl12xx_get_pg_ver(struct wl1271 *wl, s8 *ver) { - u32 die_info; + u16 die_info; + int ret; if (wl->chip.id == CHIP_ID_1283_PG20) - die_info = wl12xx_top_reg_read(wl, WL128X_REG_FUSE_DATA_2_1); + ret = wl12xx_top_reg_read(wl, WL128X_REG_FUSE_DATA_2_1, + &die_info); else - die_info = wl12xx_top_reg_read(wl, WL127X_REG_FUSE_DATA_2_1); + ret = wl12xx_top_reg_read(wl, WL127X_REG_FUSE_DATA_2_1, + &die_info); - return (s8) (die_info & PG_VER_MASK) >> PG_VER_OFFSET; + if (ret >= 0 && ver) + *ver = (s8)((die_info & PG_VER_MASK) >> PG_VER_OFFSET); + + return ret; } -static void wl12xx_get_mac(struct wl1271 *wl) +static int wl12xx_get_mac(struct wl1271 *wl) { if (wl12xx_mac_in_fuse(wl)) - wl12xx_get_fuse_mac(wl); + return wl12xx_get_fuse_mac(wl); + + return 0; +} + +static void wl12xx_set_tx_desc_csum(struct wl1271 *wl, + struct wl1271_tx_hw_descr *desc, + struct sk_buff *skb) +{ + desc->wl12xx_reserved = 0; +} + +static int wl12xx_plt_init(struct wl1271 *wl) +{ + int ret; + + ret = wl->ops->boot(wl); + if (ret < 0) + goto out; + + ret = wl->ops->hw_init(wl); + if (ret < 0) + goto out_irq_disable; + + /* + * If we are in calibrator based auto detect then we got the FEM nr + * in wl->fem_manuf. No need to continue further + */ + if (wl->plt_mode == PLT_FEM_DETECT) + goto out; + + ret = wl1271_acx_init_mem_config(wl); + if (ret < 0) + goto out_irq_disable; + + ret = wl12xx_acx_mem_cfg(wl); + if (ret < 0) + goto out_free_memmap; + + /* Enable data path */ + ret = wl1271_cmd_data_path(wl, 1); + if (ret < 0) + goto out_free_memmap; + + /* Configure for CAM power saving (ie. 
always active) */ + ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); + if (ret < 0) + goto out_free_memmap; + + /* configure PM */ + ret = wl1271_acx_pm_config(wl); + if (ret < 0) + goto out_free_memmap; + + goto out; + +out_free_memmap: + kfree(wl->target_mem_map); + wl->target_mem_map = NULL; + +out_irq_disable: + mutex_unlock(&wl->mutex); + /* Unlocking the mutex in the middle of handling is + inherently unsafe. In this case we deem it safe to do, + because we need to let any possibly pending IRQ out of + the system (and while we are WL1271_STATE_OFF the IRQ + work function will not do anything.) Also, any other + possible concurrent operations will fail due to the + current state, hence the wl1271 struct should be safe. */ + wlcore_disable_interrupts(wl); + mutex_lock(&wl->mutex); +out: + return ret; +} + +static int wl12xx_get_spare_blocks(struct wl1271 *wl, bool is_gem) +{ + if (is_gem) + return WL12XX_TX_HW_BLOCK_GEM_SPARE; + + return WL12XX_TX_HW_BLOCK_SPARE_DEFAULT; +} + +static int wl12xx_set_key(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + return wlcore_set_key(wl, cmd, vif, sta, key_conf); } static struct wlcore_ops wl12xx_ops = { .identify_chip = wl12xx_identify_chip, .identify_fw = wl12xx_identify_fw, .boot = wl12xx_boot, + .plt_init = wl12xx_plt_init, .trigger_cmd = wl12xx_trigger_cmd, .ack_event = wl12xx_ack_event, .calc_tx_blocks = wl12xx_calc_tx_blocks, @@ -1306,6 +1602,13 @@ static struct wlcore_ops wl12xx_ops = { .sta_get_ap_rate_mask = wl12xx_sta_get_ap_rate_mask, .get_pg_ver = wl12xx_get_pg_ver, .get_mac = wl12xx_get_mac, + .set_tx_desc_csum = wl12xx_set_tx_desc_csum, + .set_rx_csum = NULL, + .ap_get_mimo_wide_rate_mask = NULL, + .debugfs_init = wl12xx_debugfs_add_files, + .get_spare_blocks = wl12xx_get_spare_blocks, + .set_key = wl12xx_set_key, + .pre_pkt_send = NULL, }; static struct ieee80211_sta_ht_cap wl12xx_ht_cap = { @@ -1323,6 +1626,7 @@ static struct ieee80211_sta_ht_cap wl12xx_ht_cap = { static int __devinit wl12xx_probe(struct platform_device *pdev) { + struct wl12xx_platform_data *pdata = pdev->dev.platform_data; struct wl1271 *wl; struct ieee80211_hw *hw; struct wl12xx_priv *priv; @@ -1334,19 +1638,63 @@ static int __devinit wl12xx_probe(struct platform_device *pdev) } wl = hw->priv; + priv = wl->priv; wl->ops = &wl12xx_ops; wl->ptable = wl12xx_ptable; wl->rtable = wl12xx_rtable; wl->num_tx_desc = 16; - wl->normal_tx_spare = WL12XX_TX_HW_BLOCK_SPARE_DEFAULT; - wl->gem_tx_spare = WL12XX_TX_HW_BLOCK_GEM_SPARE; + wl->num_rx_desc = 8; wl->band_rate_to_idx = wl12xx_band_rate_to_idx; wl->hw_tx_rate_tbl_size = WL12XX_CONF_HW_RXTX_RATE_MAX; wl->hw_min_ht_rate = WL12XX_CONF_HW_RXTX_RATE_MCS0; wl->fw_status_priv_len = 0; - memcpy(&wl->ht_cap, &wl12xx_ht_cap, sizeof(wl12xx_ht_cap)); + wl->stats.fw_stats_len = sizeof(struct wl12xx_acx_statistics); + wlcore_set_ht_cap(wl, IEEE80211_BAND_2GHZ, &wl12xx_ht_cap); + wlcore_set_ht_cap(wl, IEEE80211_BAND_5GHZ, &wl12xx_ht_cap); wl12xx_conf_init(wl); + if (!fref_param) { + priv->ref_clock = pdata->board_ref_clock; + } else { + if (!strcmp(fref_param, "19.2")) + priv->ref_clock = WL12XX_REFCLOCK_19; + else if (!strcmp(fref_param, "26")) + priv->ref_clock = WL12XX_REFCLOCK_26; + else if (!strcmp(fref_param, "26x")) + priv->ref_clock = WL12XX_REFCLOCK_26_XTAL; + else if (!strcmp(fref_param, "38.4")) + priv->ref_clock = WL12XX_REFCLOCK_38; + else if (!strcmp(fref_param, "38.4x")) + priv->ref_clock = WL12XX_REFCLOCK_38_XTAL; + else if 
(!strcmp(fref_param, "52")) + priv->ref_clock = WL12XX_REFCLOCK_52; + else + wl1271_error("Invalid fref parameter %s", fref_param); + } + + if (!tcxo_param) { + priv->tcxo_clock = pdata->board_tcxo_clock; + } else { + if (!strcmp(tcxo_param, "19.2")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_19_2; + else if (!strcmp(tcxo_param, "26")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_26; + else if (!strcmp(tcxo_param, "38.4")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_38_4; + else if (!strcmp(tcxo_param, "52")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_52; + else if (!strcmp(tcxo_param, "16.368")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_16_368; + else if (!strcmp(tcxo_param, "32.736")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_32_736; + else if (!strcmp(tcxo_param, "16.8")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_16_8; + else if (!strcmp(tcxo_param, "33.6")) + priv->tcxo_clock = WL12XX_TCXOCLOCK_33_6; + else + wl1271_error("Invalid tcxo parameter %s", tcxo_param); + } + return wlcore_probe(wl, pdev); } @@ -1378,6 +1726,13 @@ static void __exit wl12xx_exit(void) } module_exit(wl12xx_exit); +module_param_named(fref, fref_param, charp, 0); +MODULE_PARM_DESC(fref, "FREF clock: 19.2, 26, 26x, 38.4, 38.4x, 52"); + +module_param_named(tcxo, tcxo_param, charp, 0); +MODULE_PARM_DESC(tcxo, + "TCXO clock: 19.2, 26, 38.4, 52, 16.368, 32.736, 16.8, 33.6"); + MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Luciano Coelho <coelho@ti.com>"); MODULE_FIRMWARE(WL127X_FW_NAME_SINGLE); diff --git a/drivers/net/wireless/ti/wl12xx/wl12xx.h b/drivers/net/wireless/ti/wl12xx/wl12xx.h index 74cd332e23ef..26990fb4edea 100644 --- a/drivers/net/wireless/ti/wl12xx/wl12xx.h +++ b/drivers/net/wireless/ti/wl12xx/wl12xx.h @@ -24,8 +24,30 @@ #include "conf.h" +/* minimum FW required for driver for wl127x */ +#define WL127X_CHIP_VER 6 +#define WL127X_IFTYPE_VER 3 +#define WL127X_MAJOR_VER 10 +#define WL127X_SUBTYPE_VER 2 +#define WL127X_MINOR_VER 115 + +/* minimum FW required for driver for wl128x */ +#define WL128X_CHIP_VER 7 +#define WL128X_IFTYPE_VER 3 +#define WL128X_MAJOR_VER 10 +#define WL128X_SUBTYPE_VER 2 +#define WL128X_MINOR_VER 115 + +struct wl127x_rx_mem_pool_addr { + u32 addr; + u32 addr_extra; +}; + struct wl12xx_priv { struct wl12xx_priv_conf conf; + + int ref_clock; + int tcxo_clock; }; #endif /* __WL12XX_PRIV_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/Kconfig b/drivers/net/wireless/ti/wl18xx/Kconfig new file mode 100644 index 000000000000..1cfdb2548821 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/Kconfig @@ -0,0 +1,7 @@ +config WL18XX + tristate "TI wl18xx support" + depends on MAC80211 + select WLCORE + ---help--- + This module adds support for wireless adapters based on TI + WiLink 8 chipsets. diff --git a/drivers/net/wireless/ti/wl18xx/Makefile b/drivers/net/wireless/ti/wl18xx/Makefile new file mode 100644 index 000000000000..67c098734c7f --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/Makefile @@ -0,0 +1,3 @@ +wl18xx-objs = main.o acx.o tx.o io.o debugfs.o + +obj-$(CONFIG_WL18XX) += wl18xx.o diff --git a/drivers/net/wireless/ti/wl18xx/acx.c b/drivers/net/wireless/ti/wl18xx/acx.c new file mode 100644 index 000000000000..72840e23bf59 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/acx.c @@ -0,0 +1,111 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include "../wlcore/cmd.h" +#include "../wlcore/debug.h" +#include "../wlcore/acx.h" + +#include "acx.h" + +int wl18xx_acx_host_if_cfg_bitmap(struct wl1271 *wl, u32 host_cfg_bitmap, + u32 sdio_blk_size, u32 extra_mem_blks, + u32 len_field_size) +{ + struct wl18xx_acx_host_config_bitmap *bitmap_conf; + int ret; + + wl1271_debug(DEBUG_ACX, "acx cfg bitmap %d blk %d spare %d field %d", + host_cfg_bitmap, sdio_blk_size, extra_mem_blks, + len_field_size); + + bitmap_conf = kzalloc(sizeof(*bitmap_conf), GFP_KERNEL); + if (!bitmap_conf) { + ret = -ENOMEM; + goto out; + } + + bitmap_conf->host_cfg_bitmap = cpu_to_le32(host_cfg_bitmap); + bitmap_conf->host_sdio_block_size = cpu_to_le32(sdio_blk_size); + bitmap_conf->extra_mem_blocks = cpu_to_le32(extra_mem_blks); + bitmap_conf->length_field_size = cpu_to_le32(len_field_size); + + ret = wl1271_cmd_configure(wl, ACX_HOST_IF_CFG_BITMAP, + bitmap_conf, sizeof(*bitmap_conf)); + if (ret < 0) { + wl1271_warning("wl1271 bitmap config opt failed: %d", ret); + goto out; + } + +out: + kfree(bitmap_conf); + + return ret; +} + +int wl18xx_acx_set_checksum_state(struct wl1271 *wl) +{ + struct wl18xx_acx_checksum_state *acx; + int ret; + + wl1271_debug(DEBUG_ACX, "acx checksum state"); + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + acx->checksum_state = CHECKSUM_OFFLOAD_ENABLED; + + ret = wl1271_cmd_configure(wl, ACX_CHECKSUM_CONFIG, acx, sizeof(*acx)); + if (ret < 0) { + wl1271_warning("failed to set Tx checksum state: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} + +int wl18xx_acx_clear_statistics(struct wl1271 *wl) +{ + struct wl18xx_acx_clear_statistics *acx; + int ret = 0; + + wl1271_debug(DEBUG_ACX, "acx clear statistics"); + + acx = kzalloc(sizeof(*acx), GFP_KERNEL); + if (!acx) { + ret = -ENOMEM; + goto out; + } + + ret = wl1271_cmd_configure(wl, ACX_CLEAR_STATISTICS, acx, sizeof(*acx)); + if (ret < 0) { + wl1271_warning("failed to clear firmware statistics: %d", ret); + goto out; + } + +out: + kfree(acx); + return ret; +} diff --git a/drivers/net/wireless/ti/wl18xx/acx.h b/drivers/net/wireless/ti/wl18xx/acx.h new file mode 100644 index 000000000000..e2609a6b7341 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/acx.h @@ -0,0 +1,287 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_ACX_H__ +#define __WL18XX_ACX_H__ + +#include "../wlcore/wlcore.h" +#include "../wlcore/acx.h" + +enum { + ACX_CLEAR_STATISTICS = 0x0047, +}; + +/* numbers of bits the length field takes (add 1 for the actual number) */ +#define WL18XX_HOST_IF_LEN_SIZE_FIELD 15 + +#define WL18XX_ACX_EVENTS_VECTOR (WL1271_ACX_INTR_WATCHDOG | \ + WL1271_ACX_INTR_INIT_COMPLETE | \ + WL1271_ACX_INTR_EVENT_A | \ + WL1271_ACX_INTR_EVENT_B | \ + WL1271_ACX_INTR_CMD_COMPLETE | \ + WL1271_ACX_INTR_HW_AVAILABLE | \ + WL1271_ACX_INTR_DATA | \ + WL1271_ACX_SW_INTR_WATCHDOG) + +#define WL18XX_INTR_MASK (WL1271_ACX_INTR_WATCHDOG | \ + WL1271_ACX_INTR_EVENT_A | \ + WL1271_ACX_INTR_EVENT_B | \ + WL1271_ACX_INTR_HW_AVAILABLE | \ + WL1271_ACX_INTR_DATA | \ + WL1271_ACX_SW_INTR_WATCHDOG) + +struct wl18xx_acx_host_config_bitmap { + struct acx_header header; + + __le32 host_cfg_bitmap; + + __le32 host_sdio_block_size; + + /* extra mem blocks per frame in TX. */ + __le32 extra_mem_blocks; + + /* + * number of bits of the length field in the first TX word + * (up to 15 - for using the entire 16 bits). + */ + __le32 length_field_size; + +} __packed; + +enum { + CHECKSUM_OFFLOAD_DISABLED = 0, + CHECKSUM_OFFLOAD_ENABLED = 1, + CHECKSUM_OFFLOAD_FAKE_RX = 2, + CHECKSUM_OFFLOAD_INVALID = 0xFF +}; + +struct wl18xx_acx_checksum_state { + struct acx_header header; + + /* enum acx_checksum_state */ + u8 checksum_state; + u8 pad[3]; +} __packed; + + +struct wl18xx_acx_error_stats { + u32 error_frame; + u32 error_null_Frame_tx_start; + u32 error_numll_frame_cts_start; + u32 error_bar_retry; + u32 error_frame_cts_nul_flid; +} __packed; + +struct wl18xx_acx_debug_stats { + u32 debug1; + u32 debug2; + u32 debug3; + u32 debug4; + u32 debug5; + u32 debug6; +} __packed; + +struct wl18xx_acx_ring_stats { + u32 prepared_descs; + u32 tx_cmplt; +} __packed; + +struct wl18xx_acx_tx_stats { + u32 tx_prepared_descs; + u32 tx_cmplt; + u32 tx_template_prepared; + u32 tx_data_prepared; + u32 tx_template_programmed; + u32 tx_data_programmed; + u32 tx_burst_programmed; + u32 tx_starts; + u32 tx_imm_resp; + u32 tx_start_templates; + u32 tx_start_int_templates; + u32 tx_start_fw_gen; + u32 tx_start_data; + u32 tx_start_null_frame; + u32 tx_exch; + u32 tx_retry_template; + u32 tx_retry_data; + u32 tx_exch_pending; + u32 tx_exch_expiry; + u32 tx_done_template; + u32 tx_done_data; + u32 tx_done_int_template; + u32 tx_frame_checksum; + u32 tx_checksum_result; + u32 frag_called; + u32 frag_mpdu_alloc_failed; + u32 frag_init_called; + u32 frag_in_process_called; + u32 frag_tkip_called; + u32 frag_key_not_found; + u32 frag_need_fragmentation; + u32 frag_bad_mblk_num; + u32 frag_failed; + u32 frag_cache_hit; + u32 frag_cache_miss; +} __packed; + +struct wl18xx_acx_rx_stats { + u32 rx_beacon_early_term; + u32 rx_out_of_mpdu_nodes; + u32 rx_hdr_overflow; + u32 rx_dropped_frame; + u32 rx_done_stage; + u32 rx_done; + u32 rx_defrag; + u32 rx_defrag_end; + u32 rx_cmplt; + u32 rx_pre_complt; + u32 rx_cmplt_task; + u32 rx_phy_hdr; + u32 rx_timeout; + u32 rx_timeout_wa; + u32 rx_wa_density_dropped_frame; + u32 rx_wa_ba_not_expected; + u32 rx_frame_checksum; + u32 rx_checksum_result; + u32 defrag_called; + u32 defrag_init_called; + u32 defrag_in_process_called; + u32 defrag_tkip_called; + u32 defrag_need_defrag; + u32 
defrag_decrypt_failed; + u32 decrypt_key_not_found; + u32 defrag_need_decrypt; + u32 rx_tkip_replays; +} __packed; + +struct wl18xx_acx_isr_stats { + u32 irqs; +} __packed; + +#define PWR_STAT_MAX_CONT_MISSED_BCNS_SPREAD 10 + +struct wl18xx_acx_pwr_stats { + u32 missing_bcns_cnt; + u32 rcvd_bcns_cnt; + u32 connection_out_of_sync; + u32 cont_miss_bcns_spread[PWR_STAT_MAX_CONT_MISSED_BCNS_SPREAD]; + u32 rcvd_awake_bcns_cnt; +} __packed; + +struct wl18xx_acx_event_stats { + u32 calibration; + u32 rx_mismatch; + u32 rx_mem_empty; +} __packed; + +struct wl18xx_acx_ps_poll_stats { + u32 ps_poll_timeouts; + u32 upsd_timeouts; + u32 upsd_max_ap_turn; + u32 ps_poll_max_ap_turn; + u32 ps_poll_utilization; + u32 upsd_utilization; +} __packed; + +struct wl18xx_acx_rx_filter_stats { + u32 beacon_filter; + u32 arp_filter; + u32 mc_filter; + u32 dup_filter; + u32 data_filter; + u32 ibss_filter; + u32 protection_filter; + u32 accum_arp_pend_requests; + u32 max_arp_queue_dep; +} __packed; + +struct wl18xx_acx_rx_rate_stats { + u32 rx_frames_per_rates[50]; +} __packed; + +#define AGGR_STATS_TX_AGG 16 +#define AGGR_STATS_TX_RATE 16 +#define AGGR_STATS_RX_SIZE_LEN 16 + +struct wl18xx_acx_aggr_stats { + u32 tx_agg_vs_rate[AGGR_STATS_TX_AGG * AGGR_STATS_TX_RATE]; + u32 rx_size[AGGR_STATS_RX_SIZE_LEN]; +} __packed; + +#define PIPE_STATS_HW_FIFO 11 + +struct wl18xx_acx_pipeline_stats { + u32 hs_tx_stat_fifo_int; + u32 hs_rx_stat_fifo_int; + u32 tcp_tx_stat_fifo_int; + u32 tcp_rx_stat_fifo_int; + u32 enc_tx_stat_fifo_int; + u32 enc_rx_stat_fifo_int; + u32 rx_complete_stat_fifo_int; + u32 pre_proc_swi; + u32 post_proc_swi; + u32 sec_frag_swi; + u32 pre_to_defrag_swi; + u32 defrag_to_csum_swi; + u32 csum_to_rx_xfer_swi; + u32 dec_packet_in; + u32 dec_packet_in_fifo_full; + u32 dec_packet_out; + u32 cs_rx_packet_in; + u32 cs_rx_packet_out; + u16 pipeline_fifo_full[PIPE_STATS_HW_FIFO]; +} __packed; + +struct wl18xx_acx_mem_stats { + u32 rx_free_mem_blks; + u32 tx_free_mem_blks; + u32 fwlog_free_mem_blks; + u32 fw_gen_free_mem_blks; +} __packed; + +struct wl18xx_acx_statistics { + struct acx_header header; + + struct wl18xx_acx_error_stats error; + struct wl18xx_acx_debug_stats debug; + struct wl18xx_acx_tx_stats tx; + struct wl18xx_acx_rx_stats rx; + struct wl18xx_acx_isr_stats isr; + struct wl18xx_acx_pwr_stats pwr; + struct wl18xx_acx_ps_poll_stats ps_poll; + struct wl18xx_acx_rx_filter_stats rx_filter; + struct wl18xx_acx_rx_rate_stats rx_rate; + struct wl18xx_acx_aggr_stats aggr_size; + struct wl18xx_acx_pipeline_stats pipeline; + struct wl18xx_acx_mem_stats mem; +} __packed; + +struct wl18xx_acx_clear_statistics { + struct acx_header header; +}; + +int wl18xx_acx_host_if_cfg_bitmap(struct wl1271 *wl, u32 host_cfg_bitmap, + u32 sdio_blk_size, u32 extra_mem_blks, + u32 len_field_size); +int wl18xx_acx_set_checksum_state(struct wl1271 *wl); +int wl18xx_acx_clear_statistics(struct wl1271 *wl); + +#endif /* __WL18XX_ACX_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/conf.h b/drivers/net/wireless/ti/wl18xx/conf.h new file mode 100644 index 000000000000..4d426cc20274 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/conf.h @@ -0,0 +1,111 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_CONF_H__ +#define __WL18XX_CONF_H__ + +#define WL18XX_CONF_MAGIC 0x10e100ca +#define WL18XX_CONF_VERSION (WLCORE_CONF_VERSION | 0x0003) +#define WL18XX_CONF_MASK 0x0000ffff +#define WL18XX_CONF_SIZE (WLCORE_CONF_SIZE + \ + sizeof(struct wl18xx_priv_conf)) + +#define NUM_OF_CHANNELS_11_ABG 150 +#define NUM_OF_CHANNELS_11_P 7 +#define WL18XX_NUM_OF_SUB_BANDS 9 +#define SRF_TABLE_LEN 16 +#define PIN_MUXING_SIZE 2 + +struct wl18xx_mac_and_phy_params { + u8 phy_standalone; + u8 rdl; + u8 enable_clpc; + u8 enable_tx_low_pwr_on_siso_rdl; + u8 auto_detect; + u8 dedicated_fem; + + u8 low_band_component; + + /* Bit 0: One Hot, Bit 1: Control Enable, Bit 2: 1.8V, Bit 3: 3V */ + u8 low_band_component_type; + + u8 high_band_component; + + /* Bit 0: One Hot, Bit 1: Control Enable, Bit 2: 1.8V, Bit 3: 3V */ + u8 high_band_component_type; + u8 number_of_assembled_ant2_4; + u8 number_of_assembled_ant5; + u8 pin_muxing_platform_options[PIN_MUXING_SIZE]; + u8 external_pa_dc2dc; + u8 tcxo_ldo_voltage; + u8 xtal_itrim_val; + u8 srf_state; + u8 srf1[SRF_TABLE_LEN]; + u8 srf2[SRF_TABLE_LEN]; + u8 srf3[SRF_TABLE_LEN]; + u8 io_configuration; + u8 sdio_configuration; + u8 settings; + u8 rx_profile; + u8 per_chan_pwr_limit_arr_11abg[NUM_OF_CHANNELS_11_ABG]; + u8 pwr_limit_reference_11_abg; + u8 per_chan_pwr_limit_arr_11p[NUM_OF_CHANNELS_11_P]; + u8 pwr_limit_reference_11p; + u8 per_sub_band_tx_trace_loss[WL18XX_NUM_OF_SUB_BANDS]; + u8 per_sub_band_rx_trace_loss[WL18XX_NUM_OF_SUB_BANDS]; + u8 primary_clock_setting_time; + u8 clock_valid_on_wake_up; + u8 secondary_clock_setting_time; + u8 board_type; + /* enable point saturation */ + u8 psat; + /* low/medium/high Tx power in dBm */ + s8 low_power_val; + s8 med_power_val; + s8 high_power_val; + u8 padding[1]; +} __packed; + +enum wl18xx_ht_mode { + /* Default - use MIMO, fallback to SISO20 */ + HT_MODE_DEFAULT = 0, + + /* Wide - use SISO40 */ + HT_MODE_WIDE = 1, + + /* Use SISO20 */ + HT_MODE_SISO20 = 2, +}; + +struct wl18xx_ht_settings { + /* DEFAULT / WIDE / SISO20 */ + u8 mode; +} __packed; + +struct wl18xx_priv_conf { + /* Module params structures */ + struct wl18xx_ht_settings ht; + + /* this structure is copied wholesale to FW */ + struct wl18xx_mac_and_phy_params phy; +} __packed; + +#endif /* __WL18XX_CONF_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.c b/drivers/net/wireless/ti/wl18xx/debugfs.c new file mode 100644 index 000000000000..3ce6f1039af3 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/debugfs.c @@ -0,0 +1,403 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2009 Nokia Corporation + * Copyright (C) 2011-2012 Texas Instruments + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include "../wlcore/debugfs.h" +#include "../wlcore/wlcore.h" + +#include "wl18xx.h" +#include "acx.h" +#include "debugfs.h" + +#define WL18XX_DEBUGFS_FWSTATS_FILE(a, b, c) \ + DEBUGFS_FWSTATS_FILE(a, b, c, wl18xx_acx_statistics) +#define WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(a, b, c) \ + DEBUGFS_FWSTATS_FILE_ARRAY(a, b, c, wl18xx_acx_statistics) + + +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug1, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug2, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug3, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug4, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug5, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(debug, debug6, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(error, error_frame, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(error, error_null_Frame_tx_start, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(error, error_numll_frame_cts_start, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(error, error_bar_retry, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(error, error_frame_cts_nul_flid, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_prepared_descs, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_cmplt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_template_prepared, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_data_prepared, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_template_programmed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_data_programmed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_burst_programmed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_starts, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_imm_resp, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_start_templates, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_start_int_templates, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_start_fw_gen, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_start_data, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_start_null_frame, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_exch, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_retry_template, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_retry_data, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_exch_pending, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_exch_expiry, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_done_template, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_done_data, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_done_int_template, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_frame_checksum, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, tx_checksum_result, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_mpdu_alloc_failed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_init_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_in_process_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_tkip_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_key_not_found, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_need_fragmentation, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_bad_mblk_num, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_failed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_cache_hit, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(tx, frag_cache_miss, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_beacon_early_term, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_out_of_mpdu_nodes, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_hdr_overflow, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_dropped_frame, "%u"); 
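Editor's note: the counters exposed by these wl18xx files are reported by the firmware and can be cleared through the ACX helpers added earlier in this patch (wl18xx_acx_set_checksum_state(), wl18xx_acx_clear_statistics()), which all share the same allocate/fill/configure/free shape with a single cleanup path. The sketch below only mirrors that shape in userspace; the struct layout, the command id, and send_configure() (standing in for wl1271_cmd_configure()) are invented for the example.

/*
 * Userspace sketch of the allocate/fill/send/free shape used by the wl18xx
 * ACX helpers. All names and the command id are placeholders.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_ACX_CHECKSUM_CONFIG 1	/* placeholder id, not the real ACX value */

struct demo_acx_checksum_state {
	unsigned char checksum_state;
	unsigned char pad[3];
};

/* Stand-in for wl1271_cmd_configure(): pretend the firmware accepts it. */
static int send_configure(int id, const void *buf, size_t len)
{
	printf("configure id=%d len=%zu\n", id, len);
	return 0;
}

static int demo_set_checksum_state(void)
{
	struct demo_acx_checksum_state *acx;
	int ret;

	acx = calloc(1, sizeof(*acx));		/* kzalloc() in the driver */
	if (!acx)
		return -ENOMEM;

	acx->checksum_state = 1;		/* "offload enabled" */

	ret = send_configure(DEMO_ACX_CHECKSUM_CONFIG, acx, sizeof(*acx));
	if (ret < 0)
		fprintf(stderr, "failed to set checksum state: %d\n", ret);

	free(acx);				/* freed on every path */
	return ret;
}

int main(void)
{
	return demo_set_checksum_state() < 0;
}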
+WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_done, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_defrag, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_defrag_end, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_cmplt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_pre_complt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_cmplt_task, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_phy_hdr, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_timeout, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_timeout_wa, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_wa_density_dropped_frame, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_wa_ba_not_expected, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_frame_checksum, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_checksum_result, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_init_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_in_process_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_tkip_called, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_need_defrag, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_decrypt_failed, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, decrypt_key_not_found, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, defrag_need_decrypt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx, rx_tkip_replays, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(isr, irqs, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(pwr, missing_bcns_cnt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pwr, rcvd_bcns_cnt, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pwr, connection_out_of_sync, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(pwr, cont_miss_bcns_spread, + PWR_STAT_MAX_CONT_MISSED_BCNS_SPREAD); +WL18XX_DEBUGFS_FWSTATS_FILE(pwr, rcvd_awake_bcns_cnt, "%u"); + + +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, ps_poll_timeouts, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, upsd_timeouts, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, upsd_max_ap_turn, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, ps_poll_max_ap_turn, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, ps_poll_utilization, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(ps_poll, upsd_utilization, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, beacon_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, arp_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, mc_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, dup_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, data_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, ibss_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, protection_filter, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, accum_arp_pend_requests, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, max_arp_queue_dep, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE(rx_rate, rx_frames_per_rates, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, tx_agg_vs_rate, + AGGR_STATS_TX_AGG*AGGR_STATS_TX_RATE); +WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, rx_size, + AGGR_STATS_RX_SIZE_LEN); + +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, hs_tx_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, tcp_tx_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, tcp_rx_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, enc_tx_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, enc_rx_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, rx_complete_stat_fifo_int, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, pre_proc_swi, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, post_proc_swi, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, sec_frag_swi, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, pre_to_defrag_swi, "%u"); 
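Editor's note: besides single counters, wl18xx exposes whole arrays (cont_miss_bcns_spread, tx_agg_vs_rate, rx_size, pipeline_fifo_full above) through WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY. A rough standalone sketch of what an array-valued read boils down to — formatting every element into one text buffer, one value per line. The field name, sizes, and output format here are invented; the wlcore helper's exact formatting and buffer handling may differ.

/*
 * Rough sketch of formatting an array statistic into a single text buffer,
 * one element per line, as an ..._FILE_ARRAY style debugfs read might.
 */
#include <stdio.h>

#define DEMO_SPREAD_LEN 10

static size_t format_array(const unsigned int *vals, size_t n,
			   char *buf, size_t len)
{
	size_t pos = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		int ret = snprintf(buf + pos, len - pos, "[%zu] = %u\n",
				   i, vals[i]);
		if (ret < 0 || (size_t)ret >= len - pos) {
			pos = len - 1;	/* output truncated */
			break;
		}
		pos += ret;
	}

	return pos;
}

int main(void)
{
	unsigned int spread[DEMO_SPREAD_LEN] = { 5, 3, 1 };
	char buf[256];

	format_array(spread, DEMO_SPREAD_LEN, buf, sizeof(buf));
	fputs(buf, stdout);
	return 0;
}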
+WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, defrag_to_csum_swi, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, csum_to_rx_xfer_swi, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, dec_packet_in, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, dec_packet_in_fifo_full, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, dec_packet_out, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, cs_rx_packet_in, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(pipeline, cs_rx_packet_out, "%u"); + +WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(pipeline, pipeline_fifo_full, + PIPE_STATS_HW_FIFO); + +WL18XX_DEBUGFS_FWSTATS_FILE(mem, rx_free_mem_blks, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(mem, tx_free_mem_blks, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(mem, fwlog_free_mem_blks, "%u"); +WL18XX_DEBUGFS_FWSTATS_FILE(mem, fw_gen_free_mem_blks, "%u"); + +static ssize_t conf_read(struct file *file, char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + struct wl18xx_priv *priv = wl->priv; + struct wlcore_conf_header header; + char *buf, *pos; + size_t len; + int ret; + + len = WL18XX_CONF_SIZE; + buf = kmalloc(len, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + header.magic = cpu_to_le32(WL18XX_CONF_MAGIC); + header.version = cpu_to_le32(WL18XX_CONF_VERSION); + header.checksum = 0; + + mutex_lock(&wl->mutex); + + pos = buf; + memcpy(pos, &header, sizeof(header)); + pos += sizeof(header); + memcpy(pos, &wl->conf, sizeof(wl->conf)); + pos += sizeof(wl->conf); + memcpy(pos, &priv->conf, sizeof(priv->conf)); + + mutex_unlock(&wl->mutex); + + ret = simple_read_from_buffer(user_buf, count, ppos, buf, len); + + kfree(buf); + return ret; +} + +static const struct file_operations conf_ops = { + .read = conf_read, + .open = simple_open, + .llseek = default_llseek, +}; + +static ssize_t clear_fw_stats_write(struct file *file, + const char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + int ret; + + mutex_lock(&wl->mutex); + + if (wl->state == WL1271_STATE_OFF) + goto out; + + ret = wl18xx_acx_clear_statistics(wl); + if (ret < 0) { + count = ret; + goto out; + } +out: + mutex_unlock(&wl->mutex); + return count; +} + +static const struct file_operations clear_fw_stats_ops = { + .write = clear_fw_stats_write, + .open = simple_open, + .llseek = default_llseek, +}; + +int wl18xx_debugfs_add_files(struct wl1271 *wl, + struct dentry *rootdir) +{ + int ret = 0; + struct dentry *entry, *stats, *moddir; + + moddir = debugfs_create_dir(KBUILD_MODNAME, rootdir); + if (!moddir || IS_ERR(moddir)) { + entry = moddir; + goto err; + } + + stats = debugfs_create_dir("fw_stats", moddir); + if (!stats || IS_ERR(stats)) { + entry = stats; + goto err; + } + + DEBUGFS_ADD(clear_fw_stats, stats); + + DEBUGFS_FWSTATS_ADD(debug, debug1); + DEBUGFS_FWSTATS_ADD(debug, debug2); + DEBUGFS_FWSTATS_ADD(debug, debug3); + DEBUGFS_FWSTATS_ADD(debug, debug4); + DEBUGFS_FWSTATS_ADD(debug, debug5); + DEBUGFS_FWSTATS_ADD(debug, debug6); + + DEBUGFS_FWSTATS_ADD(error, error_frame); + DEBUGFS_FWSTATS_ADD(error, error_null_Frame_tx_start); + DEBUGFS_FWSTATS_ADD(error, error_numll_frame_cts_start); + DEBUGFS_FWSTATS_ADD(error, error_bar_retry); + DEBUGFS_FWSTATS_ADD(error, error_frame_cts_nul_flid); + + DEBUGFS_FWSTATS_ADD(tx, tx_prepared_descs); + DEBUGFS_FWSTATS_ADD(tx, tx_cmplt); + DEBUGFS_FWSTATS_ADD(tx, tx_template_prepared); + DEBUGFS_FWSTATS_ADD(tx, tx_data_prepared); + DEBUGFS_FWSTATS_ADD(tx, tx_template_programmed); + DEBUGFS_FWSTATS_ADD(tx, tx_data_programmed); + DEBUGFS_FWSTATS_ADD(tx, tx_burst_programmed); + 
DEBUGFS_FWSTATS_ADD(tx, tx_starts); + DEBUGFS_FWSTATS_ADD(tx, tx_imm_resp); + DEBUGFS_FWSTATS_ADD(tx, tx_start_templates); + DEBUGFS_FWSTATS_ADD(tx, tx_start_int_templates); + DEBUGFS_FWSTATS_ADD(tx, tx_start_fw_gen); + DEBUGFS_FWSTATS_ADD(tx, tx_start_data); + DEBUGFS_FWSTATS_ADD(tx, tx_start_null_frame); + DEBUGFS_FWSTATS_ADD(tx, tx_exch); + DEBUGFS_FWSTATS_ADD(tx, tx_retry_template); + DEBUGFS_FWSTATS_ADD(tx, tx_retry_data); + DEBUGFS_FWSTATS_ADD(tx, tx_exch_pending); + DEBUGFS_FWSTATS_ADD(tx, tx_exch_expiry); + DEBUGFS_FWSTATS_ADD(tx, tx_done_template); + DEBUGFS_FWSTATS_ADD(tx, tx_done_data); + DEBUGFS_FWSTATS_ADD(tx, tx_done_int_template); + DEBUGFS_FWSTATS_ADD(tx, tx_frame_checksum); + DEBUGFS_FWSTATS_ADD(tx, tx_checksum_result); + DEBUGFS_FWSTATS_ADD(tx, frag_called); + DEBUGFS_FWSTATS_ADD(tx, frag_mpdu_alloc_failed); + DEBUGFS_FWSTATS_ADD(tx, frag_init_called); + DEBUGFS_FWSTATS_ADD(tx, frag_in_process_called); + DEBUGFS_FWSTATS_ADD(tx, frag_tkip_called); + DEBUGFS_FWSTATS_ADD(tx, frag_key_not_found); + DEBUGFS_FWSTATS_ADD(tx, frag_need_fragmentation); + DEBUGFS_FWSTATS_ADD(tx, frag_bad_mblk_num); + DEBUGFS_FWSTATS_ADD(tx, frag_failed); + DEBUGFS_FWSTATS_ADD(tx, frag_cache_hit); + DEBUGFS_FWSTATS_ADD(tx, frag_cache_miss); + + DEBUGFS_FWSTATS_ADD(rx, rx_beacon_early_term); + DEBUGFS_FWSTATS_ADD(rx, rx_out_of_mpdu_nodes); + DEBUGFS_FWSTATS_ADD(rx, rx_hdr_overflow); + DEBUGFS_FWSTATS_ADD(rx, rx_dropped_frame); + DEBUGFS_FWSTATS_ADD(rx, rx_done); + DEBUGFS_FWSTATS_ADD(rx, rx_defrag); + DEBUGFS_FWSTATS_ADD(rx, rx_defrag_end); + DEBUGFS_FWSTATS_ADD(rx, rx_cmplt); + DEBUGFS_FWSTATS_ADD(rx, rx_pre_complt); + DEBUGFS_FWSTATS_ADD(rx, rx_cmplt_task); + DEBUGFS_FWSTATS_ADD(rx, rx_phy_hdr); + DEBUGFS_FWSTATS_ADD(rx, rx_timeout); + DEBUGFS_FWSTATS_ADD(rx, rx_timeout_wa); + DEBUGFS_FWSTATS_ADD(rx, rx_wa_density_dropped_frame); + DEBUGFS_FWSTATS_ADD(rx, rx_wa_ba_not_expected); + DEBUGFS_FWSTATS_ADD(rx, rx_frame_checksum); + DEBUGFS_FWSTATS_ADD(rx, rx_checksum_result); + DEBUGFS_FWSTATS_ADD(rx, defrag_called); + DEBUGFS_FWSTATS_ADD(rx, defrag_init_called); + DEBUGFS_FWSTATS_ADD(rx, defrag_in_process_called); + DEBUGFS_FWSTATS_ADD(rx, defrag_tkip_called); + DEBUGFS_FWSTATS_ADD(rx, defrag_need_defrag); + DEBUGFS_FWSTATS_ADD(rx, defrag_decrypt_failed); + DEBUGFS_FWSTATS_ADD(rx, decrypt_key_not_found); + DEBUGFS_FWSTATS_ADD(rx, defrag_need_decrypt); + DEBUGFS_FWSTATS_ADD(rx, rx_tkip_replays); + + DEBUGFS_FWSTATS_ADD(isr, irqs); + + DEBUGFS_FWSTATS_ADD(pwr, missing_bcns_cnt); + DEBUGFS_FWSTATS_ADD(pwr, rcvd_bcns_cnt); + DEBUGFS_FWSTATS_ADD(pwr, connection_out_of_sync); + DEBUGFS_FWSTATS_ADD(pwr, cont_miss_bcns_spread); + DEBUGFS_FWSTATS_ADD(pwr, rcvd_awake_bcns_cnt); + + DEBUGFS_FWSTATS_ADD(ps_poll, ps_poll_timeouts); + DEBUGFS_FWSTATS_ADD(ps_poll, upsd_timeouts); + DEBUGFS_FWSTATS_ADD(ps_poll, upsd_max_ap_turn); + DEBUGFS_FWSTATS_ADD(ps_poll, ps_poll_max_ap_turn); + DEBUGFS_FWSTATS_ADD(ps_poll, ps_poll_utilization); + DEBUGFS_FWSTATS_ADD(ps_poll, upsd_utilization); + + DEBUGFS_FWSTATS_ADD(rx_filter, beacon_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, arp_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, mc_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, dup_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, data_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, ibss_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, protection_filter); + DEBUGFS_FWSTATS_ADD(rx_filter, accum_arp_pend_requests); + DEBUGFS_FWSTATS_ADD(rx_filter, max_arp_queue_dep); + + DEBUGFS_FWSTATS_ADD(rx_rate, rx_frames_per_rates); + + DEBUGFS_FWSTATS_ADD(aggr_size, 
tx_agg_vs_rate); + DEBUGFS_FWSTATS_ADD(aggr_size, rx_size); + + DEBUGFS_FWSTATS_ADD(pipeline, hs_tx_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, tcp_tx_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, tcp_rx_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, enc_tx_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, enc_rx_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, rx_complete_stat_fifo_int); + DEBUGFS_FWSTATS_ADD(pipeline, pre_proc_swi); + DEBUGFS_FWSTATS_ADD(pipeline, post_proc_swi); + DEBUGFS_FWSTATS_ADD(pipeline, sec_frag_swi); + DEBUGFS_FWSTATS_ADD(pipeline, pre_to_defrag_swi); + DEBUGFS_FWSTATS_ADD(pipeline, defrag_to_csum_swi); + DEBUGFS_FWSTATS_ADD(pipeline, csum_to_rx_xfer_swi); + DEBUGFS_FWSTATS_ADD(pipeline, dec_packet_in); + DEBUGFS_FWSTATS_ADD(pipeline, dec_packet_in_fifo_full); + DEBUGFS_FWSTATS_ADD(pipeline, dec_packet_out); + DEBUGFS_FWSTATS_ADD(pipeline, cs_rx_packet_in); + DEBUGFS_FWSTATS_ADD(pipeline, cs_rx_packet_out); + DEBUGFS_FWSTATS_ADD(pipeline, pipeline_fifo_full); + + DEBUGFS_FWSTATS_ADD(mem, rx_free_mem_blks); + DEBUGFS_FWSTATS_ADD(mem, tx_free_mem_blks); + DEBUGFS_FWSTATS_ADD(mem, fwlog_free_mem_blks); + DEBUGFS_FWSTATS_ADD(mem, fw_gen_free_mem_blks); + + DEBUGFS_ADD(conf, moddir); + + return 0; + +err: + if (IS_ERR(entry)) + ret = PTR_ERR(entry); + else + ret = -ENOMEM; + + return ret; +} diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.h b/drivers/net/wireless/ti/wl18xx/debugfs.h new file mode 100644 index 000000000000..ed679bebf620 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/debugfs.h @@ -0,0 +1,28 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2012 Texas Instruments. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_DEBUGFS_H__ +#define __WL18XX_DEBUGFS_H__ + +int wl18xx_debugfs_add_files(struct wl1271 *wl, + struct dentry *rootdir); + +#endif /* __WL18XX_DEBUGFS_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/io.c b/drivers/net/wireless/ti/wl18xx/io.c new file mode 100644 index 000000000000..f0abf3ef2c95 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/io.c @@ -0,0 +1,75 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include "../wlcore/wlcore.h" +#include "../wlcore/io.h" + +#include "io.h" + +int wl18xx_top_reg_write(struct wl1271 *wl, int addr, u16 val) +{ + u32 tmp; + int ret; + + if (WARN_ON(addr % 2)) + return -EINVAL; + + if ((addr % 4) == 0) { + ret = wlcore_read32(wl, addr, &tmp); + if (ret < 0) + goto out; + + tmp = (tmp & 0xffff0000) | val; + ret = wlcore_write32(wl, addr, tmp); + } else { + ret = wlcore_read32(wl, addr - 2, &tmp); + if (ret < 0) + goto out; + + tmp = (tmp & 0xffff) | (val << 16); + ret = wlcore_write32(wl, addr - 2, tmp); + } + +out: + return ret; +} + +int wl18xx_top_reg_read(struct wl1271 *wl, int addr, u16 *out) +{ + u32 val = 0; + int ret; + + if (WARN_ON(addr % 2)) + return -EINVAL; + + if ((addr % 4) == 0) { + /* address is 4-bytes aligned */ + ret = wlcore_read32(wl, addr, &val); + if (ret >= 0 && out) + *out = val & 0xffff; + } else { + ret = wlcore_read32(wl, addr - 2, &val); + if (ret >= 0 && out) + *out = (val & 0xffff0000) >> 16; + } + + return ret; +} diff --git a/drivers/net/wireless/ti/wl18xx/io.h b/drivers/net/wireless/ti/wl18xx/io.h new file mode 100644 index 000000000000..c32ae30277df --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/io.h @@ -0,0 +1,28 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_IO_H__ +#define __WL18XX_IO_H__ + +int __must_check wl18xx_top_reg_write(struct wl1271 *wl, int addr, u16 val); +int __must_check wl18xx_top_reg_read(struct wl1271 *wl, int addr, u16 *out); + +#endif /* __WL18XX_IO_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/main.c b/drivers/net/wireless/ti/wl18xx/main.c new file mode 100644 index 000000000000..69042bb9a097 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/main.c @@ -0,0 +1,1610 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/ip.h> +#include <linux/firmware.h> + +#include "../wlcore/wlcore.h" +#include "../wlcore/debug.h" +#include "../wlcore/io.h" +#include "../wlcore/acx.h" +#include "../wlcore/tx.h" +#include "../wlcore/rx.h" +#include "../wlcore/io.h" +#include "../wlcore/boot.h" + +#include "reg.h" +#include "conf.h" +#include "acx.h" +#include "tx.h" +#include "wl18xx.h" +#include "io.h" +#include "debugfs.h" + +#define WL18XX_RX_CHECKSUM_MASK 0x40 + +static char *ht_mode_param = NULL; +static char *board_type_param = NULL; +static bool checksum_param = false; +static bool enable_11a_param = true; +static int num_rx_desc_param = -1; + +/* phy paramters */ +static int dc2dc_param = -1; +static int n_antennas_2_param = -1; +static int n_antennas_5_param = -1; +static int low_band_component_param = -1; +static int low_band_component_type_param = -1; +static int high_band_component_param = -1; +static int high_band_component_type_param = -1; +static int pwr_limit_reference_11_abg_param = -1; + +static const u8 wl18xx_rate_to_idx_2ghz[] = { + /* MCS rates are used only with 11n */ + 15, /* WL18XX_CONF_HW_RXTX_RATE_MCS15 */ + 14, /* WL18XX_CONF_HW_RXTX_RATE_MCS14 */ + 13, /* WL18XX_CONF_HW_RXTX_RATE_MCS13 */ + 12, /* WL18XX_CONF_HW_RXTX_RATE_MCS12 */ + 11, /* WL18XX_CONF_HW_RXTX_RATE_MCS11 */ + 10, /* WL18XX_CONF_HW_RXTX_RATE_MCS10 */ + 9, /* WL18XX_CONF_HW_RXTX_RATE_MCS9 */ + 8, /* WL18XX_CONF_HW_RXTX_RATE_MCS8 */ + 7, /* WL18XX_CONF_HW_RXTX_RATE_MCS7 */ + 6, /* WL18XX_CONF_HW_RXTX_RATE_MCS6 */ + 5, /* WL18XX_CONF_HW_RXTX_RATE_MCS5 */ + 4, /* WL18XX_CONF_HW_RXTX_RATE_MCS4 */ + 3, /* WL18XX_CONF_HW_RXTX_RATE_MCS3 */ + 2, /* WL18XX_CONF_HW_RXTX_RATE_MCS2 */ + 1, /* WL18XX_CONF_HW_RXTX_RATE_MCS1 */ + 0, /* WL18XX_CONF_HW_RXTX_RATE_MCS0 */ + + 11, /* WL18XX_CONF_HW_RXTX_RATE_54 */ + 10, /* WL18XX_CONF_HW_RXTX_RATE_48 */ + 9, /* WL18XX_CONF_HW_RXTX_RATE_36 */ + 8, /* WL18XX_CONF_HW_RXTX_RATE_24 */ + + /* TI-specific rate */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_22 */ + + 7, /* WL18XX_CONF_HW_RXTX_RATE_18 */ + 6, /* WL18XX_CONF_HW_RXTX_RATE_12 */ + 3, /* WL18XX_CONF_HW_RXTX_RATE_11 */ + 5, /* WL18XX_CONF_HW_RXTX_RATE_9 */ + 4, /* WL18XX_CONF_HW_RXTX_RATE_6 */ + 2, /* WL18XX_CONF_HW_RXTX_RATE_5_5 */ + 1, /* WL18XX_CONF_HW_RXTX_RATE_2 */ + 0 /* WL18XX_CONF_HW_RXTX_RATE_1 */ +}; + +static const u8 wl18xx_rate_to_idx_5ghz[] = { + /* MCS rates are used only with 11n */ + 15, /* WL18XX_CONF_HW_RXTX_RATE_MCS15 */ + 14, /* WL18XX_CONF_HW_RXTX_RATE_MCS14 */ + 13, /* WL18XX_CONF_HW_RXTX_RATE_MCS13 */ + 12, /* WL18XX_CONF_HW_RXTX_RATE_MCS12 */ + 11, /* WL18XX_CONF_HW_RXTX_RATE_MCS11 */ + 10, /* WL18XX_CONF_HW_RXTX_RATE_MCS10 */ + 9, /* WL18XX_CONF_HW_RXTX_RATE_MCS9 */ + 8, /* WL18XX_CONF_HW_RXTX_RATE_MCS8 */ + 7, /* WL18XX_CONF_HW_RXTX_RATE_MCS7 */ + 6, /* WL18XX_CONF_HW_RXTX_RATE_MCS6 */ + 5, /* WL18XX_CONF_HW_RXTX_RATE_MCS5 */ + 4, /* WL18XX_CONF_HW_RXTX_RATE_MCS4 */ + 3, /* WL18XX_CONF_HW_RXTX_RATE_MCS3 */ + 2, /* WL18XX_CONF_HW_RXTX_RATE_MCS2 */ + 1, /* WL18XX_CONF_HW_RXTX_RATE_MCS1 */ + 0, /* WL18XX_CONF_HW_RXTX_RATE_MCS0 */ + + 7, /* WL18XX_CONF_HW_RXTX_RATE_54 */ + 6, /* WL18XX_CONF_HW_RXTX_RATE_48 */ + 5, /* WL18XX_CONF_HW_RXTX_RATE_36 */ + 4, /* WL18XX_CONF_HW_RXTX_RATE_24 */ + + /* 
TI-specific rate */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_22 */ + + 3, /* WL18XX_CONF_HW_RXTX_RATE_18 */ + 2, /* WL18XX_CONF_HW_RXTX_RATE_12 */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_11 */ + 1, /* WL18XX_CONF_HW_RXTX_RATE_9 */ + 0, /* WL18XX_CONF_HW_RXTX_RATE_6 */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_5_5 */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_2 */ + CONF_HW_RXTX_RATE_UNSUPPORTED, /* WL18XX_CONF_HW_RXTX_RATE_1 */ +}; + +static const u8 *wl18xx_band_rate_to_idx[] = { + [IEEE80211_BAND_2GHZ] = wl18xx_rate_to_idx_2ghz, + [IEEE80211_BAND_5GHZ] = wl18xx_rate_to_idx_5ghz +}; + +enum wl18xx_hw_rates { + WL18XX_CONF_HW_RXTX_RATE_MCS15 = 0, + WL18XX_CONF_HW_RXTX_RATE_MCS14, + WL18XX_CONF_HW_RXTX_RATE_MCS13, + WL18XX_CONF_HW_RXTX_RATE_MCS12, + WL18XX_CONF_HW_RXTX_RATE_MCS11, + WL18XX_CONF_HW_RXTX_RATE_MCS10, + WL18XX_CONF_HW_RXTX_RATE_MCS9, + WL18XX_CONF_HW_RXTX_RATE_MCS8, + WL18XX_CONF_HW_RXTX_RATE_MCS7, + WL18XX_CONF_HW_RXTX_RATE_MCS6, + WL18XX_CONF_HW_RXTX_RATE_MCS5, + WL18XX_CONF_HW_RXTX_RATE_MCS4, + WL18XX_CONF_HW_RXTX_RATE_MCS3, + WL18XX_CONF_HW_RXTX_RATE_MCS2, + WL18XX_CONF_HW_RXTX_RATE_MCS1, + WL18XX_CONF_HW_RXTX_RATE_MCS0, + WL18XX_CONF_HW_RXTX_RATE_54, + WL18XX_CONF_HW_RXTX_RATE_48, + WL18XX_CONF_HW_RXTX_RATE_36, + WL18XX_CONF_HW_RXTX_RATE_24, + WL18XX_CONF_HW_RXTX_RATE_22, + WL18XX_CONF_HW_RXTX_RATE_18, + WL18XX_CONF_HW_RXTX_RATE_12, + WL18XX_CONF_HW_RXTX_RATE_11, + WL18XX_CONF_HW_RXTX_RATE_9, + WL18XX_CONF_HW_RXTX_RATE_6, + WL18XX_CONF_HW_RXTX_RATE_5_5, + WL18XX_CONF_HW_RXTX_RATE_2, + WL18XX_CONF_HW_RXTX_RATE_1, + WL18XX_CONF_HW_RXTX_RATE_MAX, +}; + +static struct wlcore_conf wl18xx_conf = { + .sg = { + .params = { + [CONF_SG_ACL_BT_MASTER_MIN_BR] = 10, + [CONF_SG_ACL_BT_MASTER_MAX_BR] = 180, + [CONF_SG_ACL_BT_SLAVE_MIN_BR] = 10, + [CONF_SG_ACL_BT_SLAVE_MAX_BR] = 180, + [CONF_SG_ACL_BT_MASTER_MIN_EDR] = 10, + [CONF_SG_ACL_BT_MASTER_MAX_EDR] = 80, + [CONF_SG_ACL_BT_SLAVE_MIN_EDR] = 10, + [CONF_SG_ACL_BT_SLAVE_MAX_EDR] = 80, + [CONF_SG_ACL_WLAN_PS_MASTER_BR] = 8, + [CONF_SG_ACL_WLAN_PS_SLAVE_BR] = 8, + [CONF_SG_ACL_WLAN_PS_MASTER_EDR] = 20, + [CONF_SG_ACL_WLAN_PS_SLAVE_EDR] = 20, + [CONF_SG_ACL_WLAN_ACTIVE_MASTER_MIN_BR] = 20, + [CONF_SG_ACL_WLAN_ACTIVE_MASTER_MAX_BR] = 35, + [CONF_SG_ACL_WLAN_ACTIVE_SLAVE_MIN_BR] = 16, + [CONF_SG_ACL_WLAN_ACTIVE_SLAVE_MAX_BR] = 35, + [CONF_SG_ACL_WLAN_ACTIVE_MASTER_MIN_EDR] = 32, + [CONF_SG_ACL_WLAN_ACTIVE_MASTER_MAX_EDR] = 50, + [CONF_SG_ACL_WLAN_ACTIVE_SLAVE_MIN_EDR] = 28, + [CONF_SG_ACL_WLAN_ACTIVE_SLAVE_MAX_EDR] = 50, + [CONF_SG_ACL_ACTIVE_SCAN_WLAN_BR] = 10, + [CONF_SG_ACL_ACTIVE_SCAN_WLAN_EDR] = 20, + [CONF_SG_ACL_PASSIVE_SCAN_BT_BR] = 75, + [CONF_SG_ACL_PASSIVE_SCAN_WLAN_BR] = 15, + [CONF_SG_ACL_PASSIVE_SCAN_BT_EDR] = 27, + [CONF_SG_ACL_PASSIVE_SCAN_WLAN_EDR] = 17, + /* active scan params */ + [CONF_SG_AUTO_SCAN_PROBE_REQ] = 170, + [CONF_SG_ACTIVE_SCAN_DURATION_FACTOR_HV3] = 50, + [CONF_SG_ACTIVE_SCAN_DURATION_FACTOR_A2DP] = 100, + /* passive scan params */ + [CONF_SG_PASSIVE_SCAN_DURATION_FACTOR_A2DP_BR] = 800, + [CONF_SG_PASSIVE_SCAN_DURATION_FACTOR_A2DP_EDR] = 200, + [CONF_SG_PASSIVE_SCAN_DURATION_FACTOR_HV3] = 200, + /* passive scan in dual antenna params */ + [CONF_SG_CONSECUTIVE_HV3_IN_PASSIVE_SCAN] = 0, + [CONF_SG_BCN_HV3_COLLISION_THRESH_IN_PASSIVE_SCAN] = 0, + [CONF_SG_TX_RX_PROTECTION_BWIDTH_IN_PASSIVE_SCAN] = 0, + /* general params */ + [CONF_SG_STA_FORCE_PS_IN_BT_SCO] = 1, + [CONF_SG_ANTENNA_CONFIGURATION] = 0, + [CONF_SG_BEACON_MISS_PERCENT] = 
60, + [CONF_SG_DHCP_TIME] = 5000, + [CONF_SG_RXT] = 1200, + [CONF_SG_TXT] = 1000, + [CONF_SG_ADAPTIVE_RXT_TXT] = 1, + [CONF_SG_GENERAL_USAGE_BIT_MAP] = 3, + [CONF_SG_HV3_MAX_SERVED] = 6, + [CONF_SG_PS_POLL_TIMEOUT] = 10, + [CONF_SG_UPSD_TIMEOUT] = 10, + [CONF_SG_CONSECUTIVE_CTS_THRESHOLD] = 2, + [CONF_SG_STA_RX_WINDOW_AFTER_DTIM] = 5, + [CONF_SG_STA_CONNECTION_PROTECTION_TIME] = 30, + /* AP params */ + [CONF_AP_BEACON_MISS_TX] = 3, + [CONF_AP_RX_WINDOW_AFTER_BEACON] = 10, + [CONF_AP_BEACON_WINDOW_INTERVAL] = 2, + [CONF_AP_CONNECTION_PROTECTION_TIME] = 0, + [CONF_AP_BT_ACL_VAL_BT_SERVE_TIME] = 25, + [CONF_AP_BT_ACL_VAL_WL_SERVE_TIME] = 25, + /* CTS Diluting params */ + [CONF_SG_CTS_DILUTED_BAD_RX_PACKETS_TH] = 0, + [CONF_SG_CTS_CHOP_IN_DUAL_ANT_SCO_MASTER] = 0, + }, + .state = CONF_SG_PROTECTIVE, + }, + .rx = { + .rx_msdu_life_time = 512000, + .packet_detection_threshold = 0, + .ps_poll_timeout = 15, + .upsd_timeout = 15, + .rts_threshold = IEEE80211_MAX_RTS_THRESHOLD, + .rx_cca_threshold = 0, + .irq_blk_threshold = 0xFFFF, + .irq_pkt_threshold = 0, + .irq_timeout = 600, + .queue_type = CONF_RX_QUEUE_TYPE_LOW_PRIORITY, + }, + .tx = { + .tx_energy_detection = 0, + .sta_rc_conf = { + .enabled_rates = 0, + .short_retry_limit = 10, + .long_retry_limit = 10, + .aflags = 0, + }, + .ac_conf_count = 4, + .ac_conf = { + [CONF_TX_AC_BE] = { + .ac = CONF_TX_AC_BE, + .cw_min = 15, + .cw_max = 63, + .aifsn = 3, + .tx_op_limit = 0, + }, + [CONF_TX_AC_BK] = { + .ac = CONF_TX_AC_BK, + .cw_min = 15, + .cw_max = 63, + .aifsn = 7, + .tx_op_limit = 0, + }, + [CONF_TX_AC_VI] = { + .ac = CONF_TX_AC_VI, + .cw_min = 15, + .cw_max = 63, + .aifsn = CONF_TX_AIFS_PIFS, + .tx_op_limit = 3008, + }, + [CONF_TX_AC_VO] = { + .ac = CONF_TX_AC_VO, + .cw_min = 15, + .cw_max = 63, + .aifsn = CONF_TX_AIFS_PIFS, + .tx_op_limit = 1504, + }, + }, + .max_tx_retries = 100, + .ap_aging_period = 300, + .tid_conf_count = 4, + .tid_conf = { + [CONF_TX_AC_BE] = { + .queue_id = CONF_TX_AC_BE, + .channel_type = CONF_CHANNEL_TYPE_EDCF, + .tsid = CONF_TX_AC_BE, + .ps_scheme = CONF_PS_SCHEME_LEGACY, + .ack_policy = CONF_ACK_POLICY_LEGACY, + .apsd_conf = {0, 0}, + }, + [CONF_TX_AC_BK] = { + .queue_id = CONF_TX_AC_BK, + .channel_type = CONF_CHANNEL_TYPE_EDCF, + .tsid = CONF_TX_AC_BK, + .ps_scheme = CONF_PS_SCHEME_LEGACY, + .ack_policy = CONF_ACK_POLICY_LEGACY, + .apsd_conf = {0, 0}, + }, + [CONF_TX_AC_VI] = { + .queue_id = CONF_TX_AC_VI, + .channel_type = CONF_CHANNEL_TYPE_EDCF, + .tsid = CONF_TX_AC_VI, + .ps_scheme = CONF_PS_SCHEME_LEGACY, + .ack_policy = CONF_ACK_POLICY_LEGACY, + .apsd_conf = {0, 0}, + }, + [CONF_TX_AC_VO] = { + .queue_id = CONF_TX_AC_VO, + .channel_type = CONF_CHANNEL_TYPE_EDCF, + .tsid = CONF_TX_AC_VO, + .ps_scheme = CONF_PS_SCHEME_LEGACY, + .ack_policy = CONF_ACK_POLICY_LEGACY, + .apsd_conf = {0, 0}, + }, + }, + .frag_threshold = IEEE80211_MAX_FRAG_THRESHOLD, + .tx_compl_timeout = 350, + .tx_compl_threshold = 10, + .basic_rate = CONF_HW_BIT_RATE_1MBPS, + .basic_rate_5 = CONF_HW_BIT_RATE_6MBPS, + .tmpl_short_retry_limit = 10, + .tmpl_long_retry_limit = 10, + .tx_watchdog_timeout = 5000, + }, + .conn = { + .wake_up_event = CONF_WAKE_UP_EVENT_DTIM, + .listen_interval = 1, + .suspend_wake_up_event = CONF_WAKE_UP_EVENT_N_DTIM, + .suspend_listen_interval = 3, + .bcn_filt_mode = CONF_BCN_FILT_MODE_ENABLED, + .bcn_filt_ie_count = 3, + .bcn_filt_ie = { + [0] = { + .ie = WLAN_EID_CHANNEL_SWITCH, + .rule = CONF_BCN_RULE_PASS_ON_APPEARANCE, + }, + [1] = { + .ie = WLAN_EID_HT_OPERATION, + .rule = CONF_BCN_RULE_PASS_ON_CHANGE, + }, + 
[2] = { + .ie = WLAN_EID_ERP_INFO, + .rule = CONF_BCN_RULE_PASS_ON_CHANGE, + }, + }, + .synch_fail_thold = 12, + .bss_lose_timeout = 400, + .beacon_rx_timeout = 10000, + .broadcast_timeout = 20000, + .rx_broadcast_in_ps = 1, + .ps_poll_threshold = 10, + .bet_enable = CONF_BET_MODE_ENABLE, + .bet_max_consecutive = 50, + .psm_entry_retries = 8, + .psm_exit_retries = 16, + .psm_entry_nullfunc_retries = 3, + .dynamic_ps_timeout = 1500, + .forced_ps = false, + .keep_alive_interval = 55000, + .max_listen_interval = 20, + .sta_sleep_auth = WL1271_PSM_ILLEGAL, + }, + .itrim = { + .enable = false, + .timeout = 50000, + }, + .pm_config = { + .host_clk_settling_time = 5000, + .host_fast_wakeup_support = CONF_FAST_WAKEUP_DISABLE, + }, + .roam_trigger = { + .trigger_pacing = 1, + .avg_weight_rssi_beacon = 20, + .avg_weight_rssi_data = 10, + .avg_weight_snr_beacon = 20, + .avg_weight_snr_data = 10, + }, + .scan = { + .min_dwell_time_active = 7500, + .max_dwell_time_active = 30000, + .min_dwell_time_passive = 100000, + .max_dwell_time_passive = 100000, + .num_probe_reqs = 2, + .split_scan_timeout = 50000, + }, + .sched_scan = { + /* + * Values are in TU/1000 but since sched scan FW command + * params are in TUs rounding up may occur. + */ + .base_dwell_time = 7500, + .max_dwell_time_delta = 22500, + /* based on 250bits per probe @1Mbps */ + .dwell_time_delta_per_probe = 2000, + /* based on 250bits per probe @6Mbps (plus a bit more) */ + .dwell_time_delta_per_probe_5 = 350, + .dwell_time_passive = 100000, + .dwell_time_dfs = 150000, + .num_probe_reqs = 2, + .rssi_threshold = -90, + .snr_threshold = 0, + }, + .ht = { + .rx_ba_win_size = 10, + .tx_ba_win_size = 64, + .inactivity_timeout = 10000, + .tx_ba_tid_bitmap = CONF_TX_BA_ENABLED_TID_BITMAP, + }, + .mem = { + .num_stations = 1, + .ssid_profiles = 1, + .rx_block_num = 40, + .tx_min_block_num = 40, + .dynamic_memory = 1, + .min_req_tx_blocks = 45, + .min_req_rx_blocks = 22, + .tx_min = 27, + }, + .fm_coex = { + .enable = true, + .swallow_period = 5, + .n_divider_fref_set_1 = 0xff, /* default */ + .n_divider_fref_set_2 = 12, + .m_divider_fref_set_1 = 0xffff, + .m_divider_fref_set_2 = 148, /* default */ + .coex_pll_stabilization_time = 0xffffffff, /* default */ + .ldo_stabilization_time = 0xffff, /* default */ + .fm_disturbed_band_margin = 0xff, /* default */ + .swallow_clk_diff = 0xff, /* default */ + }, + .rx_streaming = { + .duration = 150, + .queues = 0x1, + .interval = 20, + .always = 0, + }, + .fwlog = { + .mode = WL12XX_FWLOG_ON_DEMAND, + .mem_blocks = 2, + .severity = 0, + .timestamp = WL12XX_FWLOG_TIMESTAMP_DISABLED, + .output = WL12XX_FWLOG_OUTPUT_HOST, + .threshold = 0, + }, + .rate = { + .rate_retry_score = 32000, + .per_add = 8192, + .per_th1 = 2048, + .per_th2 = 4096, + .max_per = 8100, + .inverse_curiosity_factor = 5, + .tx_fail_low_th = 4, + .tx_fail_high_th = 10, + .per_alpha_shift = 4, + .per_add_shift = 13, + .per_beta1_shift = 10, + .per_beta2_shift = 8, + .rate_check_up = 2, + .rate_check_down = 12, + .rate_retry_policy = { + 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, + }, + }, + .hangover = { + .recover_time = 0, + .hangover_period = 20, + .dynamic_mode = 1, + .early_termination_mode = 1, + .max_period = 20, + .min_period = 1, + .increase_delta = 1, + .decrease_delta = 2, + .quiet_time = 4, + .increase_time = 1, + .window_size = 16, + }, +}; + +static struct wl18xx_priv_conf wl18xx_default_priv_conf = { + .ht = { + .mode = HT_MODE_DEFAULT, + }, + .phy = { + .phy_standalone = 0x00, + 
.primary_clock_setting_time = 0x05, + .clock_valid_on_wake_up = 0x00, + .secondary_clock_setting_time = 0x05, + .board_type = BOARD_TYPE_HDK_18XX, + .rdl = 0x01, + .auto_detect = 0x00, + .dedicated_fem = FEM_NONE, + .low_band_component = COMPONENT_2_WAY_SWITCH, + .low_band_component_type = 0x06, + .high_band_component = COMPONENT_2_WAY_SWITCH, + .high_band_component_type = 0x09, + .tcxo_ldo_voltage = 0x00, + .xtal_itrim_val = 0x04, + .srf_state = 0x00, + .io_configuration = 0x01, + .sdio_configuration = 0x00, + .settings = 0x00, + .enable_clpc = 0x00, + .enable_tx_low_pwr_on_siso_rdl = 0x00, + .rx_profile = 0x00, + .pwr_limit_reference_11_abg = 0xc8, + .psat = 0, + .low_power_val = 0x00, + .med_power_val = 0x0a, + .high_power_val = 0x1e, + .external_pa_dc2dc = 0, + .number_of_assembled_ant2_4 = 1, + .number_of_assembled_ant5 = 1, + }, +}; + +static const struct wlcore_partition_set wl18xx_ptable[PART_TABLE_LEN] = { + [PART_TOP_PRCM_ELP_SOC] = { + .mem = { .start = 0x00A02000, .size = 0x00010000 }, + .reg = { .start = 0x00807000, .size = 0x00005000 }, + .mem2 = { .start = 0x00800000, .size = 0x0000B000 }, + .mem3 = { .start = 0x00000000, .size = 0x00000000 }, + }, + [PART_DOWN] = { + .mem = { .start = 0x00000000, .size = 0x00014000 }, + .reg = { .start = 0x00810000, .size = 0x0000BFFF }, + .mem2 = { .start = 0x00000000, .size = 0x00000000 }, + .mem3 = { .start = 0x00000000, .size = 0x00000000 }, + }, + [PART_BOOT] = { + .mem = { .start = 0x00700000, .size = 0x0000030c }, + .reg = { .start = 0x00802000, .size = 0x00014578 }, + .mem2 = { .start = 0x00B00404, .size = 0x00001000 }, + .mem3 = { .start = 0x00C00000, .size = 0x00000400 }, + }, + [PART_WORK] = { + .mem = { .start = 0x00800000, .size = 0x000050FC }, + .reg = { .start = 0x00B00404, .size = 0x00001000 }, + .mem2 = { .start = 0x00C00000, .size = 0x00000400 }, + .mem3 = { .start = 0x00000000, .size = 0x00000000 }, + }, + [PART_PHY_INIT] = { + .mem = { .start = 0x80926000, + .size = sizeof(struct wl18xx_mac_and_phy_params) }, + .reg = { .start = 0x00000000, .size = 0x00000000 }, + .mem2 = { .start = 0x00000000, .size = 0x00000000 }, + .mem3 = { .start = 0x00000000, .size = 0x00000000 }, + }, +}; + +static const int wl18xx_rtable[REG_TABLE_LEN] = { + [REG_ECPU_CONTROL] = WL18XX_REG_ECPU_CONTROL, + [REG_INTERRUPT_NO_CLEAR] = WL18XX_REG_INTERRUPT_NO_CLEAR, + [REG_INTERRUPT_ACK] = WL18XX_REG_INTERRUPT_ACK, + [REG_COMMAND_MAILBOX_PTR] = WL18XX_REG_COMMAND_MAILBOX_PTR, + [REG_EVENT_MAILBOX_PTR] = WL18XX_REG_EVENT_MAILBOX_PTR, + [REG_INTERRUPT_TRIG] = WL18XX_REG_INTERRUPT_TRIG_H, + [REG_INTERRUPT_MASK] = WL18XX_REG_INTERRUPT_MASK, + [REG_PC_ON_RECOVERY] = WL18XX_SCR_PAD4, + [REG_CHIP_ID_B] = WL18XX_REG_CHIP_ID_B, + [REG_CMD_MBOX_ADDRESS] = WL18XX_CMD_MBOX_ADDRESS, + + /* data access memory addresses, used with partition translation */ + [REG_SLV_MEM_DATA] = WL18XX_SLV_MEM_DATA, + [REG_SLV_REG_DATA] = WL18XX_SLV_REG_DATA, + + /* raw data access memory addresses */ + [REG_RAW_FW_STATUS_ADDR] = WL18XX_FW_STATUS_ADDR, +}; + +static const struct wl18xx_clk_cfg wl18xx_clk_table[NUM_CLOCK_CONFIGS] = { + [CLOCK_CONFIG_16_2_M] = { 7, 104, 801, 4, true }, + [CLOCK_CONFIG_16_368_M] = { 9, 132, 3751, 4, true }, + [CLOCK_CONFIG_16_8_M] = { 7, 100, 0, 0, false }, + [CLOCK_CONFIG_19_2_M] = { 8, 100, 0, 0, false }, + [CLOCK_CONFIG_26_M] = { 13, 120, 0, 0, false }, + [CLOCK_CONFIG_32_736_M] = { 9, 132, 3751, 4, true }, + [CLOCK_CONFIG_33_6_M] = { 7, 100, 0, 0, false }, + [CLOCK_CONFIG_38_468_M] = { 8, 100, 0, 0, false }, + [CLOCK_CONFIG_52_M] = { 13, 120, 0, 0, 
false }, +}; + +/* TODO: maybe move to a new header file? */ +#define WL18XX_FW_NAME "ti-connectivity/wl18xx-fw.bin" + +static int wl18xx_identify_chip(struct wl1271 *wl) +{ + int ret = 0; + + switch (wl->chip.id) { + case CHIP_ID_185x_PG20: + wl1271_debug(DEBUG_BOOT, "chip id 0x%x (185x PG20)", + wl->chip.id); + wl->sr_fw_name = WL18XX_FW_NAME; + /* wl18xx uses the same firmware for PLT */ + wl->plt_fw_name = WL18XX_FW_NAME; + wl->quirks |= WLCORE_QUIRK_NO_ELP | + WLCORE_QUIRK_RX_BLOCKSIZE_ALIGN | + WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN | + WLCORE_QUIRK_NO_SCHED_SCAN_WHILE_CONN | + WLCORE_QUIRK_TX_PAD_LAST_FRAME; + + wlcore_set_min_fw_ver(wl, WL18XX_CHIP_VER, WL18XX_IFTYPE_VER, + WL18XX_MAJOR_VER, WL18XX_SUBTYPE_VER, + WL18XX_MINOR_VER); + break; + case CHIP_ID_185x_PG10: + wl1271_warning("chip id 0x%x (185x PG10) is deprecated", + wl->chip.id); + ret = -ENODEV; + goto out; + + default: + wl1271_warning("unsupported chip id: 0x%x", wl->chip.id); + ret = -ENODEV; + goto out; + } + +out: + return ret; +} + +static int wl18xx_set_clk(struct wl1271 *wl) +{ + u16 clk_freq; + int ret; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_TOP_PRCM_ELP_SOC]); + if (ret < 0) + goto out; + + /* TODO: PG2: apparently we need to read the clk type */ + + ret = wl18xx_top_reg_read(wl, PRIMARY_CLK_DETECT, &clk_freq); + if (ret < 0) + goto out; + + wl1271_debug(DEBUG_BOOT, "clock freq %d (%d, %d, %d, %d, %s)", clk_freq, + wl18xx_clk_table[clk_freq].n, wl18xx_clk_table[clk_freq].m, + wl18xx_clk_table[clk_freq].p, wl18xx_clk_table[clk_freq].q, + wl18xx_clk_table[clk_freq].swallow ? "swallow" : "spit"); + + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_N, + wl18xx_clk_table[clk_freq].n); + if (ret < 0) + goto out; + + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_M, + wl18xx_clk_table[clk_freq].m); + if (ret < 0) + goto out; + + if (wl18xx_clk_table[clk_freq].swallow) { + /* first the 16 lower bits */ + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_Q_FACTOR_CFG_1, + wl18xx_clk_table[clk_freq].q & + PLLSH_WCS_PLL_Q_FACTOR_CFG_1_MASK); + if (ret < 0) + goto out; + + /* then the 16 higher bits, masked out */ + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_Q_FACTOR_CFG_2, + (wl18xx_clk_table[clk_freq].q >> 16) & + PLLSH_WCS_PLL_Q_FACTOR_CFG_2_MASK); + if (ret < 0) + goto out; + + /* first the 16 lower bits */ + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_P_FACTOR_CFG_1, + wl18xx_clk_table[clk_freq].p & + PLLSH_WCS_PLL_P_FACTOR_CFG_1_MASK); + if (ret < 0) + goto out; + + /* then the 16 higher bits, masked out */ + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_P_FACTOR_CFG_2, + (wl18xx_clk_table[clk_freq].p >> 16) & + PLLSH_WCS_PLL_P_FACTOR_CFG_2_MASK); + } else { + ret = wl18xx_top_reg_write(wl, PLLSH_WCS_PLL_SWALLOW_EN, + PLLSH_WCS_PLL_SWALLOW_EN_VAL2); + } + +out: + return ret; +} + +static int wl18xx_boot_soft_reset(struct wl1271 *wl) +{ + int ret; + + /* disable Rx/Tx */ + ret = wlcore_write32(wl, WL18XX_ENABLE, 0x0); + if (ret < 0) + goto out; + + /* disable auto calibration on start*/ + ret = wlcore_write32(wl, WL18XX_SPARE_A2, 0xffff); + +out: + return ret; +} + +static int wl18xx_pre_boot(struct wl1271 *wl) +{ + int ret; + + ret = wl18xx_set_clk(wl); + if (ret < 0) + goto out; + + /* Continue the ELP wake up sequence */ + ret = wlcore_write32(wl, WL18XX_WELP_ARM_COMMAND, WELP_ARM_COMMAND_VAL); + if (ret < 0) + goto out; + + udelay(500); + + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + if (ret < 0) + goto out; + + /* Disable interrupts */ + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, 
WL1271_ACX_INTR_ALL); + if (ret < 0) + goto out; + + ret = wl18xx_boot_soft_reset(wl); + +out: + return ret; +} + +static int wl18xx_pre_upload(struct wl1271 *wl) +{ + u32 tmp; + int ret; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + if (ret < 0) + goto out; + + /* TODO: check if this is all needed */ + ret = wlcore_write32(wl, WL18XX_EEPROMLESS_IND, WL18XX_EEPROMLESS_IND); + if (ret < 0) + goto out; + + ret = wlcore_read_reg(wl, REG_CHIP_ID_B, &tmp); + if (ret < 0) + goto out; + + wl1271_debug(DEBUG_BOOT, "chip id 0x%x", tmp); + + ret = wlcore_read32(wl, WL18XX_SCR_PAD2, &tmp); + +out: + return ret; +} + +static int wl18xx_set_mac_and_phy(struct wl1271 *wl) +{ + struct wl18xx_priv *priv = wl->priv; + struct wl18xx_mac_and_phy_params *params; + int ret; + + params = kmemdup(&priv->conf.phy, sizeof(*params), GFP_KERNEL); + if (!params) { + ret = -ENOMEM; + goto out; + } + + ret = wlcore_set_partition(wl, &wl->ptable[PART_PHY_INIT]); + if (ret < 0) + goto out; + + ret = wlcore_write(wl, WL18XX_PHY_INIT_MEM_ADDR, params, + sizeof(*params), false); + +out: + kfree(params); + return ret; +} + +static int wl18xx_enable_interrupts(struct wl1271 *wl) +{ + u32 event_mask, intr_mask; + int ret; + + event_mask = WL18XX_ACX_EVENTS_VECTOR; + intr_mask = WL18XX_INTR_MASK; + + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, event_mask); + if (ret < 0) + goto out; + + wlcore_enable_interrupts(wl); + + ret = wlcore_write_reg(wl, REG_INTERRUPT_MASK, + WL1271_ACX_INTR_ALL & ~intr_mask); + +out: + return ret; +} + +static int wl18xx_boot(struct wl1271 *wl) +{ + int ret; + + ret = wl18xx_pre_boot(wl); + if (ret < 0) + goto out; + + ret = wl18xx_pre_upload(wl); + if (ret < 0) + goto out; + + ret = wlcore_boot_upload_firmware(wl); + if (ret < 0) + goto out; + + ret = wl18xx_set_mac_and_phy(wl); + if (ret < 0) + goto out; + + ret = wlcore_boot_run_firmware(wl); + if (ret < 0) + goto out; + + ret = wl18xx_enable_interrupts(wl); + +out: + return ret; +} + +static int wl18xx_trigger_cmd(struct wl1271 *wl, int cmd_box_addr, + void *buf, size_t len) +{ + struct wl18xx_priv *priv = wl->priv; + + memcpy(priv->cmd_buf, buf, len); + memset(priv->cmd_buf + len, 0, WL18XX_CMD_MAX_SIZE - len); + + return wlcore_write(wl, cmd_box_addr, priv->cmd_buf, + WL18XX_CMD_MAX_SIZE, false); +} + +static int wl18xx_ack_event(struct wl1271 *wl) +{ + return wlcore_write_reg(wl, REG_INTERRUPT_TRIG, + WL18XX_INTR_TRIG_EVENT_ACK); +} + +static u32 wl18xx_calc_tx_blocks(struct wl1271 *wl, u32 len, u32 spare_blks) +{ + u32 blk_size = WL18XX_TX_HW_BLOCK_SIZE; + return (len + blk_size - 1) / blk_size + spare_blks; +} + +static void +wl18xx_set_tx_desc_blocks(struct wl1271 *wl, struct wl1271_tx_hw_descr *desc, + u32 blks, u32 spare_blks) +{ + desc->wl18xx_mem.total_mem_blocks = blks; +} + +static void +wl18xx_set_tx_desc_data_len(struct wl1271 *wl, struct wl1271_tx_hw_descr *desc, + struct sk_buff *skb) +{ + desc->length = cpu_to_le16(skb->len); + + /* if only the last frame is to be padded, we unset this bit on Tx */ + if (wl->quirks & WLCORE_QUIRK_TX_PAD_LAST_FRAME) + desc->wl18xx_mem.ctrl = WL18XX_TX_CTRL_NOT_PADDED; + else + desc->wl18xx_mem.ctrl = 0; + + wl1271_debug(DEBUG_TX, "tx_fill_hdr: hlid: %d " + "len: %d life: %d mem: %d", desc->hlid, + le16_to_cpu(desc->length), + le16_to_cpu(desc->life_time), + desc->wl18xx_mem.total_mem_blocks); +} + +static enum wl_rx_buf_align +wl18xx_get_rx_buf_align(struct wl1271 *wl, u32 rx_desc) +{ + if (rx_desc & RX_BUF_PADDED_PAYLOAD) + return WLCORE_RX_BUF_PADDED; + + return 
WLCORE_RX_BUF_ALIGNED; +} + +static u32 wl18xx_get_rx_packet_len(struct wl1271 *wl, void *rx_data, + u32 data_len) +{ + struct wl1271_rx_descriptor *desc = rx_data; + + /* invalid packet */ + if (data_len < sizeof(*desc)) + return 0; + + return data_len - sizeof(*desc); +} + +static void wl18xx_tx_immediate_completion(struct wl1271 *wl) +{ + wl18xx_tx_immediate_complete(wl); +} + +static int wl18xx_set_host_cfg_bitmap(struct wl1271 *wl, u32 extra_mem_blk) +{ + int ret; + u32 sdio_align_size = 0; + u32 host_cfg_bitmap = HOST_IF_CFG_RX_FIFO_ENABLE | + HOST_IF_CFG_ADD_RX_ALIGNMENT; + + /* Enable Tx SDIO padding */ + if (wl->quirks & WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN) { + host_cfg_bitmap |= HOST_IF_CFG_TX_PAD_TO_SDIO_BLK; + sdio_align_size = WL12XX_BUS_BLOCK_SIZE; + } + + /* Enable Rx SDIO padding */ + if (wl->quirks & WLCORE_QUIRK_RX_BLOCKSIZE_ALIGN) { + host_cfg_bitmap |= HOST_IF_CFG_RX_PAD_TO_SDIO_BLK; + sdio_align_size = WL12XX_BUS_BLOCK_SIZE; + } + + ret = wl18xx_acx_host_if_cfg_bitmap(wl, host_cfg_bitmap, + sdio_align_size, extra_mem_blk, + WL18XX_HOST_IF_LEN_SIZE_FIELD); + if (ret < 0) + return ret; + + return 0; +} + +static int wl18xx_hw_init(struct wl1271 *wl) +{ + int ret; + struct wl18xx_priv *priv = wl->priv; + + /* (re)init private structures. Relevant on recovery as well. */ + priv->last_fw_rls_idx = 0; + priv->extra_spare_vif_count = 0; + + /* set the default amount of spare blocks in the bitmap */ + ret = wl18xx_set_host_cfg_bitmap(wl, WL18XX_TX_HW_BLOCK_SPARE); + if (ret < 0) + return ret; + + if (checksum_param) { + ret = wl18xx_acx_set_checksum_state(wl); + if (ret != 0) + return ret; + } + + return ret; +} + +static void wl18xx_set_tx_desc_csum(struct wl1271 *wl, + struct wl1271_tx_hw_descr *desc, + struct sk_buff *skb) +{ + u32 ip_hdr_offset; + struct iphdr *ip_hdr; + + if (!checksum_param) { + desc->wl18xx_checksum_data = 0; + return; + } + + if (skb->ip_summed != CHECKSUM_PARTIAL) { + desc->wl18xx_checksum_data = 0; + return; + } + + ip_hdr_offset = skb_network_header(skb) - skb_mac_header(skb); + if (WARN_ON(ip_hdr_offset >= (1<<7))) { + desc->wl18xx_checksum_data = 0; + return; + } + + desc->wl18xx_checksum_data = ip_hdr_offset << 1; + + /* FW is interested only in the LSB of the protocol TCP=0 UDP=1 */ + ip_hdr = (void *)skb_network_header(skb); + desc->wl18xx_checksum_data |= (ip_hdr->protocol & 0x01); +} + +static void wl18xx_set_rx_csum(struct wl1271 *wl, + struct wl1271_rx_descriptor *desc, + struct sk_buff *skb) +{ + if (desc->status & WL18XX_RX_CHECKSUM_MASK) + skb->ip_summed = CHECKSUM_UNNECESSARY; +} + +static bool wl18xx_is_mimo_supported(struct wl1271 *wl) +{ + struct wl18xx_priv *priv = wl->priv; + + return priv->conf.phy.number_of_assembled_ant2_4 >= 2; +} + +/* + * TODO: instead of having these two functions to get the rate mask, + * we should modify the wlvif->rate_set instead + */ +static u32 wl18xx_sta_get_ap_rate_mask(struct wl1271 *wl, + struct wl12xx_vif *wlvif) +{ + u32 hw_rate_set = wlvif->rate_set; + + if (wlvif->channel_type == NL80211_CHAN_HT40MINUS || + wlvif->channel_type == NL80211_CHAN_HT40PLUS) { + wl1271_debug(DEBUG_ACX, "using wide channel rate mask"); + hw_rate_set |= CONF_TX_RATE_USE_WIDE_CHAN; + + /* we don't support MIMO in wide-channel mode */ + hw_rate_set &= ~CONF_TX_MIMO_RATES; + } else if (wl18xx_is_mimo_supported(wl)) { + wl1271_debug(DEBUG_ACX, "using MIMO channel rate mask"); + hw_rate_set |= CONF_TX_MIMO_RATES; + } + + return hw_rate_set; +} + +static u32 wl18xx_ap_get_mimo_wide_rate_mask(struct wl1271 *wl, + struct 
wl12xx_vif *wlvif) +{ + if (wlvif->channel_type == NL80211_CHAN_HT40MINUS || + wlvif->channel_type == NL80211_CHAN_HT40PLUS) { + wl1271_debug(DEBUG_ACX, "using wide channel rate mask"); + + /* sanity check - we don't support this */ + if (WARN_ON(wlvif->band != IEEE80211_BAND_5GHZ)) + return 0; + + return CONF_TX_RATE_USE_WIDE_CHAN; + } else if (wl18xx_is_mimo_supported(wl) && + wlvif->band == IEEE80211_BAND_2GHZ) { + wl1271_debug(DEBUG_ACX, "using MIMO rate mask"); + /* + * we don't care about HT channel here - if a peer doesn't + * support MIMO, we won't enable it in its rates + */ + return CONF_TX_MIMO_RATES; + } else { + return 0; + } +} + +static int wl18xx_get_pg_ver(struct wl1271 *wl, s8 *ver) +{ + u32 fuse; + int ret; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_TOP_PRCM_ELP_SOC]); + if (ret < 0) + goto out; + + ret = wlcore_read32(wl, WL18XX_REG_FUSE_DATA_1_3, &fuse); + if (ret < 0) + goto out; + + if (ver) + *ver = (fuse & WL18XX_PG_VER_MASK) >> WL18XX_PG_VER_OFFSET; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + +out: + return ret; +} + +#define WL18XX_CONF_FILE_NAME "ti-connectivity/wl18xx-conf.bin" +static int wl18xx_conf_init(struct wl1271 *wl, struct device *dev) +{ + struct wl18xx_priv *priv = wl->priv; + struct wlcore_conf_file *conf_file; + const struct firmware *fw; + int ret; + + ret = request_firmware(&fw, WL18XX_CONF_FILE_NAME, dev); + if (ret < 0) { + wl1271_error("could not get configuration binary %s: %d", + WL18XX_CONF_FILE_NAME, ret); + goto out_fallback; + } + + if (fw->size != WL18XX_CONF_SIZE) { + wl1271_error("configuration binary file size is wrong, expected %zu got %zu", + WL18XX_CONF_SIZE, fw->size); + ret = -EINVAL; + goto out; + } + + conf_file = (struct wlcore_conf_file *) fw->data; + + if (conf_file->header.magic != cpu_to_le32(WL18XX_CONF_MAGIC)) { + wl1271_error("configuration binary file magic number mismatch, " + "expected 0x%0x got 0x%0x", WL18XX_CONF_MAGIC, + conf_file->header.magic); + ret = -EINVAL; + goto out; + } + + if (conf_file->header.version != cpu_to_le32(WL18XX_CONF_VERSION)) { + wl1271_error("configuration binary file version not supported, " + "expected 0x%08x got 0x%08x", + WL18XX_CONF_VERSION, conf_file->header.version); + ret = -EINVAL; + goto out; + } + + memcpy(&wl->conf, &conf_file->core, sizeof(wl18xx_conf)); + memcpy(&priv->conf, &conf_file->priv, sizeof(priv->conf)); + + goto out; + +out_fallback: + wl1271_warning("falling back to default config"); + + /* apply driver default configuration */ + memcpy(&wl->conf, &wl18xx_conf, sizeof(wl18xx_conf)); + /* apply default private configuration */ + memcpy(&priv->conf, &wl18xx_default_priv_conf, sizeof(priv->conf)); + + /* For now we just fallback */ + return 0; + +out: + release_firmware(fw); + return ret; +} + +static int wl18xx_plt_init(struct wl1271 *wl) +{ + int ret; + + /* calibrator based auto/fem detect not supported for 18xx */ + if (wl->plt_mode == PLT_FEM_DETECT) { + wl1271_error("wl18xx_plt_init: PLT FEM_DETECT not supported"); + return -EINVAL; + } + + ret = wlcore_write32(wl, WL18XX_SCR_PAD8, WL18XX_SCR_PAD8_PLT); + if (ret < 0) + return ret; + + return wl->ops->boot(wl); +} + +static int wl18xx_get_mac(struct wl1271 *wl) +{ + u32 mac1, mac2; + int ret; + + ret = wlcore_set_partition(wl, &wl->ptable[PART_TOP_PRCM_ELP_SOC]); + if (ret < 0) + goto out; + + ret = wlcore_read32(wl, WL18XX_REG_FUSE_BD_ADDR_1, &mac1); + if (ret < 0) + goto out; + + ret = wlcore_read32(wl, WL18XX_REG_FUSE_BD_ADDR_2, &mac2); + if (ret < 0) + goto out; + + /* these are 
the two parts of the BD_ADDR */ + wl->fuse_oui_addr = ((mac2 & 0xffff) << 8) + + ((mac1 & 0xff000000) >> 24); + wl->fuse_nic_addr = (mac1 & 0xffffff); + + ret = wlcore_set_partition(wl, &wl->ptable[PART_DOWN]); + +out: + return ret; +} + +static int wl18xx_handle_static_data(struct wl1271 *wl, + struct wl1271_static_data *static_data) +{ + struct wl18xx_static_data_priv *static_data_priv = + (struct wl18xx_static_data_priv *) static_data->priv; + + wl1271_info("PHY firmware version: %s", static_data_priv->phy_version); + + return 0; +} + +static int wl18xx_get_spare_blocks(struct wl1271 *wl, bool is_gem) +{ + struct wl18xx_priv *priv = wl->priv; + + /* If we have VIFs requiring extra spare, indulge them */ + if (priv->extra_spare_vif_count) + return WL18XX_TX_HW_EXTRA_BLOCK_SPARE; + + return WL18XX_TX_HW_BLOCK_SPARE; +} + +static int wl18xx_set_key(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + struct wl18xx_priv *priv = wl->priv; + bool change_spare = false; + int ret; + + /* + * when adding the first or removing the last GEM/TKIP interface, + * we have to adjust the number of spare blocks. + */ + change_spare = (key_conf->cipher == WL1271_CIPHER_SUITE_GEM || + key_conf->cipher == WLAN_CIPHER_SUITE_TKIP) && + ((priv->extra_spare_vif_count == 0 && cmd == SET_KEY) || + (priv->extra_spare_vif_count == 1 && cmd == DISABLE_KEY)); + + /* no need to change spare - just regular set_key */ + if (!change_spare) + return wlcore_set_key(wl, cmd, vif, sta, key_conf); + + /* + * stop the queues and flush to ensure the next packets are + * in sync with FW spare block accounting + */ + wlcore_stop_queues(wl, WLCORE_QUEUE_STOP_REASON_SPARE_BLK); + wl1271_tx_flush(wl); + + ret = wlcore_set_key(wl, cmd, vif, sta, key_conf); + if (ret < 0) + goto out; + + /* key is now set, change the spare blocks */ + if (cmd == SET_KEY) { + ret = wl18xx_set_host_cfg_bitmap(wl, + WL18XX_TX_HW_EXTRA_BLOCK_SPARE); + if (ret < 0) + goto out; + + priv->extra_spare_vif_count++; + } else { + ret = wl18xx_set_host_cfg_bitmap(wl, + WL18XX_TX_HW_BLOCK_SPARE); + if (ret < 0) + goto out; + + priv->extra_spare_vif_count--; + } + +out: + wlcore_wake_queues(wl, WLCORE_QUEUE_STOP_REASON_SPARE_BLK); + return ret; +} + +static u32 wl18xx_pre_pkt_send(struct wl1271 *wl, + u32 buf_offset, u32 last_len) +{ + if (wl->quirks & WLCORE_QUIRK_TX_PAD_LAST_FRAME) { + struct wl1271_tx_hw_descr *last_desc; + + /* get the last TX HW descriptor written to the aggr buf */ + last_desc = (struct wl1271_tx_hw_descr *)(wl->aggr_buf + + buf_offset - last_len); + + /* the last frame is padded up to an SDIO block */ + last_desc->wl18xx_mem.ctrl &= ~WL18XX_TX_CTRL_NOT_PADDED; + return ALIGN(buf_offset, WL12XX_BUS_BLOCK_SIZE); + } + + /* no modifications */ + return buf_offset; +} + +static struct wlcore_ops wl18xx_ops = { + .identify_chip = wl18xx_identify_chip, + .boot = wl18xx_boot, + .plt_init = wl18xx_plt_init, + .trigger_cmd = wl18xx_trigger_cmd, + .ack_event = wl18xx_ack_event, + .calc_tx_blocks = wl18xx_calc_tx_blocks, + .set_tx_desc_blocks = wl18xx_set_tx_desc_blocks, + .set_tx_desc_data_len = wl18xx_set_tx_desc_data_len, + .get_rx_buf_align = wl18xx_get_rx_buf_align, + .get_rx_packet_len = wl18xx_get_rx_packet_len, + .tx_immediate_compl = wl18xx_tx_immediate_completion, + .tx_delayed_compl = NULL, + .hw_init = wl18xx_hw_init, + .set_tx_desc_csum = wl18xx_set_tx_desc_csum, + .get_pg_ver = wl18xx_get_pg_ver, + .set_rx_csum = wl18xx_set_rx_csum, + 
.sta_get_ap_rate_mask = wl18xx_sta_get_ap_rate_mask, + .ap_get_mimo_wide_rate_mask = wl18xx_ap_get_mimo_wide_rate_mask, + .get_mac = wl18xx_get_mac, + .debugfs_init = wl18xx_debugfs_add_files, + .handle_static_data = wl18xx_handle_static_data, + .get_spare_blocks = wl18xx_get_spare_blocks, + .set_key = wl18xx_set_key, + .pre_pkt_send = wl18xx_pre_pkt_send, +}; + +/* HT cap appropriate for wide channels in 2Ghz */ +static struct ieee80211_sta_ht_cap wl18xx_siso40_ht_cap_2ghz = { + .cap = IEEE80211_HT_CAP_SGI_20 | IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_SUP_WIDTH_20_40 | IEEE80211_HT_CAP_DSSSCCK40, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_16K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(150), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +/* HT cap appropriate for wide channels in 5Ghz */ +static struct ieee80211_sta_ht_cap wl18xx_siso40_ht_cap_5ghz = { + .cap = IEEE80211_HT_CAP_SGI_20 | IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_SUP_WIDTH_20_40, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_16K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(150), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +/* HT cap appropriate for SISO 20 */ +static struct ieee80211_sta_ht_cap wl18xx_siso20_ht_cap = { + .cap = IEEE80211_HT_CAP_SGI_20, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_16K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(72), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +/* HT cap appropriate for MIMO rates in 20mhz channel */ +static struct ieee80211_sta_ht_cap wl18xx_mimo_ht_cap_2ghz = { + .cap = IEEE80211_HT_CAP_SGI_20, + .ht_supported = true, + .ampdu_factor = IEEE80211_HT_MAX_AMPDU_16K, + .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16, + .mcs = { + .rx_mask = { 0xff, 0xff, 0, 0, 0, 0, 0, 0, 0, 0, }, + .rx_highest = cpu_to_le16(144), + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, +}; + +static int __devinit wl18xx_probe(struct platform_device *pdev) +{ + struct wl1271 *wl; + struct ieee80211_hw *hw; + struct wl18xx_priv *priv; + int ret; + + hw = wlcore_alloc_hw(sizeof(*priv)); + if (IS_ERR(hw)) { + wl1271_error("can't allocate hw"); + ret = PTR_ERR(hw); + goto out; + } + + wl = hw->priv; + priv = wl->priv; + wl->ops = &wl18xx_ops; + wl->ptable = wl18xx_ptable; + wl->rtable = wl18xx_rtable; + wl->num_tx_desc = 32; + wl->num_rx_desc = 32; + wl->band_rate_to_idx = wl18xx_band_rate_to_idx; + wl->hw_tx_rate_tbl_size = WL18XX_CONF_HW_RXTX_RATE_MAX; + wl->hw_min_ht_rate = WL18XX_CONF_HW_RXTX_RATE_MCS0; + wl->fw_status_priv_len = sizeof(struct wl18xx_fw_status_priv); + wl->stats.fw_stats_len = sizeof(struct wl18xx_acx_statistics); + wl->static_data_priv_len = sizeof(struct wl18xx_static_data_priv); + + if (num_rx_desc_param != -1) + wl->num_rx_desc = num_rx_desc_param; + + ret = wl18xx_conf_init(wl, &pdev->dev); + if (ret < 0) + goto out_free; + + /* If the module param is set, update it in conf */ + if (board_type_param) { + if (!strcmp(board_type_param, "fpga")) { + priv->conf.phy.board_type = BOARD_TYPE_FPGA_18XX; + } else if (!strcmp(board_type_param, "hdk")) { + priv->conf.phy.board_type = BOARD_TYPE_HDK_18XX; + } else if (!strcmp(board_type_param, "dvp")) { + priv->conf.phy.board_type = BOARD_TYPE_DVP_18XX; + } else if 
(!strcmp(board_type_param, "evb")) { + priv->conf.phy.board_type = BOARD_TYPE_EVB_18XX; + } else if (!strcmp(board_type_param, "com8")) { + priv->conf.phy.board_type = BOARD_TYPE_COM8_18XX; + } else { + wl1271_error("invalid board type '%s'", + board_type_param); + ret = -EINVAL; + goto out_free; + } + } + + /* HACK! Just for now we hardcode COM8 and HDK to 0x06 */ + switch (priv->conf.phy.board_type) { + case BOARD_TYPE_HDK_18XX: + case BOARD_TYPE_COM8_18XX: + priv->conf.phy.low_band_component_type = 0x06; + break; + case BOARD_TYPE_FPGA_18XX: + case BOARD_TYPE_DVP_18XX: + case BOARD_TYPE_EVB_18XX: + priv->conf.phy.low_band_component_type = 0x05; + break; + default: + wl1271_error("invalid board type '%d'", + priv->conf.phy.board_type); + ret = -EINVAL; + goto out_free; + } + + if (low_band_component_param != -1) + priv->conf.phy.low_band_component = low_band_component_param; + if (low_band_component_type_param != -1) + priv->conf.phy.low_band_component_type = + low_band_component_type_param; + if (high_band_component_param != -1) + priv->conf.phy.high_band_component = high_band_component_param; + if (high_band_component_type_param != -1) + priv->conf.phy.high_band_component_type = + high_band_component_type_param; + if (pwr_limit_reference_11_abg_param != -1) + priv->conf.phy.pwr_limit_reference_11_abg = + pwr_limit_reference_11_abg_param; + if (n_antennas_2_param != -1) + priv->conf.phy.number_of_assembled_ant2_4 = n_antennas_2_param; + if (n_antennas_5_param != -1) + priv->conf.phy.number_of_assembled_ant5 = n_antennas_5_param; + if (dc2dc_param != -1) + priv->conf.phy.external_pa_dc2dc = dc2dc_param; + + if (ht_mode_param) { + if (!strcmp(ht_mode_param, "default")) + priv->conf.ht.mode = HT_MODE_DEFAULT; + else if (!strcmp(ht_mode_param, "wide")) + priv->conf.ht.mode = HT_MODE_WIDE; + else if (!strcmp(ht_mode_param, "siso20")) + priv->conf.ht.mode = HT_MODE_SISO20; + else { + wl1271_error("invalid ht_mode '%s'", ht_mode_param); + ret = -EINVAL; + goto out_free; + } + } + + if (priv->conf.ht.mode == HT_MODE_DEFAULT) { + /* + * Only support mimo with multiple antennas. Fall back to + * siso20. 
+ */ + if (wl18xx_is_mimo_supported(wl)) + wlcore_set_ht_cap(wl, IEEE80211_BAND_2GHZ, + &wl18xx_mimo_ht_cap_2ghz); + else + wlcore_set_ht_cap(wl, IEEE80211_BAND_2GHZ, + &wl18xx_siso20_ht_cap); + + /* 5Ghz is always wide */ + wlcore_set_ht_cap(wl, IEEE80211_BAND_5GHZ, + &wl18xx_siso40_ht_cap_5ghz); + } else if (priv->conf.ht.mode == HT_MODE_WIDE) { + wlcore_set_ht_cap(wl, IEEE80211_BAND_2GHZ, + &wl18xx_siso40_ht_cap_2ghz); + wlcore_set_ht_cap(wl, IEEE80211_BAND_5GHZ, + &wl18xx_siso40_ht_cap_5ghz); + } else if (priv->conf.ht.mode == HT_MODE_SISO20) { + wlcore_set_ht_cap(wl, IEEE80211_BAND_2GHZ, + &wl18xx_siso20_ht_cap); + wlcore_set_ht_cap(wl, IEEE80211_BAND_5GHZ, + &wl18xx_siso20_ht_cap); + } + + if (!checksum_param) { + wl18xx_ops.set_rx_csum = NULL; + wl18xx_ops.init_vif = NULL; + } + + wl->enable_11a = enable_11a_param; + + return wlcore_probe(wl, pdev); + +out_free: + wlcore_free_hw(wl); +out: + return ret; +} + +static const struct platform_device_id wl18xx_id_table[] __devinitconst = { + { "wl18xx", 0 }, + { } /* Terminating Entry */ +}; +MODULE_DEVICE_TABLE(platform, wl18xx_id_table); + +static struct platform_driver wl18xx_driver = { + .probe = wl18xx_probe, + .remove = __devexit_p(wlcore_remove), + .id_table = wl18xx_id_table, + .driver = { + .name = "wl18xx_driver", + .owner = THIS_MODULE, + } +}; + +static int __init wl18xx_init(void) +{ + return platform_driver_register(&wl18xx_driver); +} +module_init(wl18xx_init); + +static void __exit wl18xx_exit(void) +{ + platform_driver_unregister(&wl18xx_driver); +} +module_exit(wl18xx_exit); + +module_param_named(ht_mode, ht_mode_param, charp, S_IRUSR); +MODULE_PARM_DESC(ht_mode, "Force HT mode: wide or siso20"); + +module_param_named(board_type, board_type_param, charp, S_IRUSR); +MODULE_PARM_DESC(board_type, "Board type: fpga, hdk (default), evb, com8 or " + "dvp"); + +module_param_named(checksum, checksum_param, bool, S_IRUSR); +MODULE_PARM_DESC(checksum, "Enable TCP checksum: boolean (defaults to false)"); + +module_param_named(enable_11a, enable_11a_param, bool, S_IRUSR); +MODULE_PARM_DESC(enable_11a, "Enable 11a (5GHz): boolean (defaults to true)"); + +module_param_named(dc2dc, dc2dc_param, int, S_IRUSR); +MODULE_PARM_DESC(dc2dc, "External DC2DC: u8 (defaults to 0)"); + +module_param_named(n_antennas_2, n_antennas_2_param, int, S_IRUSR); +MODULE_PARM_DESC(n_antennas_2, + "Number of installed 2.4GHz antennas: 1 (default) or 2"); + +module_param_named(n_antennas_5, n_antennas_5_param, int, S_IRUSR); +MODULE_PARM_DESC(n_antennas_5, + "Number of installed 5GHz antennas: 1 (default) or 2"); + +module_param_named(low_band_component, low_band_component_param, int, + S_IRUSR); +MODULE_PARM_DESC(low_band_component, "Low band component: u8 " + "(default is 0x01)"); + +module_param_named(low_band_component_type, low_band_component_type_param, + int, S_IRUSR); +MODULE_PARM_DESC(low_band_component_type, "Low band component type: u8 " + "(default is 0x05 or 0x06 depending on the board_type)"); + +module_param_named(high_band_component, high_band_component_param, int, + S_IRUSR); +MODULE_PARM_DESC(high_band_component, "High band component: u8, " + "(default is 0x01)"); + +module_param_named(high_band_component_type, high_band_component_type_param, + int, S_IRUSR); +MODULE_PARM_DESC(high_band_component_type, "High band component type: u8 " + "(default is 0x09)"); + +module_param_named(pwr_limit_reference_11_abg, + pwr_limit_reference_11_abg_param, int, S_IRUSR); +MODULE_PARM_DESC(pwr_limit_reference_11_abg, "Power limit reference: u8 " + "(default 
is 0xc8)"); + +module_param_named(num_rx_desc, + num_rx_desc_param, int, S_IRUSR); +MODULE_PARM_DESC(num_rx_desc_param, + "Number of Rx descriptors: u8 (default is 32)"); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Luciano Coelho <coelho@ti.com>"); +MODULE_FIRMWARE(WL18XX_FW_NAME); diff --git a/drivers/net/wireless/ti/wl18xx/reg.h b/drivers/net/wireless/ti/wl18xx/reg.h new file mode 100644 index 000000000000..937b71d8783f --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/reg.h @@ -0,0 +1,191 @@ +/* + * This file is part of wlcore + * + * Copyright (C) 2011 Texas Instruments Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __REG_H__ +#define __REG_H__ + +#define WL18XX_REGISTERS_BASE 0x00800000 +#define WL18XX_CODE_BASE 0x00000000 +#define WL18XX_DATA_BASE 0x00400000 +#define WL18XX_DOUBLE_BUFFER_BASE 0x00600000 +#define WL18XX_MCU_KEY_SEARCH_BASE 0x00700000 +#define WL18XX_PHY_BASE 0x00900000 +#define WL18XX_TOP_OCP_BASE 0x00A00000 +#define WL18XX_PACKET_RAM_BASE 0x00B00000 +#define WL18XX_HOST_BASE 0x00C00000 + +#define WL18XX_REGISTERS_DOWN_SIZE 0x0000B000 + +#define WL18XX_REG_BOOT_PART_START 0x00802000 +#define WL18XX_REG_BOOT_PART_SIZE 0x00014578 + +#define WL18XX_PHY_INIT_MEM_ADDR 0x80926000 + +#define WL18XX_SDIO_WSPI_BASE (WL18XX_REGISTERS_BASE) +#define WL18XX_REG_CONFIG_BASE (WL18XX_REGISTERS_BASE + 0x02000) +#define WL18XX_WGCM_REGS_BASE (WL18XX_REGISTERS_BASE + 0x03000) +#define WL18XX_ENC_BASE (WL18XX_REGISTERS_BASE + 0x04000) +#define WL18XX_INTERRUPT_BASE (WL18XX_REGISTERS_BASE + 0x05000) +#define WL18XX_UART_BASE (WL18XX_REGISTERS_BASE + 0x06000) +#define WL18XX_WELP_BASE (WL18XX_REGISTERS_BASE + 0x07000) +#define WL18XX_TCP_CKSM_BASE (WL18XX_REGISTERS_BASE + 0x08000) +#define WL18XX_FIFO_BASE (WL18XX_REGISTERS_BASE + 0x09000) +#define WL18XX_OCP_BRIDGE_BASE (WL18XX_REGISTERS_BASE + 0x0A000) +#define WL18XX_PMAC_RX_BASE (WL18XX_REGISTERS_BASE + 0x14800) +#define WL18XX_PMAC_ACM_BASE (WL18XX_REGISTERS_BASE + 0x14C00) +#define WL18XX_PMAC_TX_BASE (WL18XX_REGISTERS_BASE + 0x15000) +#define WL18XX_PMAC_CSR_BASE (WL18XX_REGISTERS_BASE + 0x15400) + +#define WL18XX_REG_ECPU_CONTROL (WL18XX_REGISTERS_BASE + 0x02004) +#define WL18XX_REG_INTERRUPT_NO_CLEAR (WL18XX_REGISTERS_BASE + 0x050E8) +#define WL18XX_REG_INTERRUPT_ACK (WL18XX_REGISTERS_BASE + 0x050F0) +#define WL18XX_REG_INTERRUPT_TRIG (WL18XX_REGISTERS_BASE + 0x5074) +#define WL18XX_REG_INTERRUPT_TRIG_H (WL18XX_REGISTERS_BASE + 0x5078) +#define WL18XX_REG_INTERRUPT_MASK (WL18XX_REGISTERS_BASE + 0x0050DC) + +#define WL18XX_REG_CHIP_ID_B (WL18XX_REGISTERS_BASE + 0x01542C) + +#define WL18XX_SLV_MEM_DATA (WL18XX_HOST_BASE + 0x0018) +#define WL18XX_SLV_REG_DATA (WL18XX_HOST_BASE + 0x0008) + +/* Scratch Pad registers*/ +#define WL18XX_SCR_PAD0 (WL18XX_REGISTERS_BASE + 0x0154EC) +#define WL18XX_SCR_PAD1 (WL18XX_REGISTERS_BASE + 0x0154F0) +#define WL18XX_SCR_PAD2 (WL18XX_REGISTERS_BASE + 0x0154F4) 
+#define WL18XX_SCR_PAD3 (WL18XX_REGISTERS_BASE + 0x0154F8) +#define WL18XX_SCR_PAD4 (WL18XX_REGISTERS_BASE + 0x0154FC) +#define WL18XX_SCR_PAD4_SET (WL18XX_REGISTERS_BASE + 0x015504) +#define WL18XX_SCR_PAD4_CLR (WL18XX_REGISTERS_BASE + 0x015500) +#define WL18XX_SCR_PAD5 (WL18XX_REGISTERS_BASE + 0x015508) +#define WL18XX_SCR_PAD5_SET (WL18XX_REGISTERS_BASE + 0x015510) +#define WL18XX_SCR_PAD5_CLR (WL18XX_REGISTERS_BASE + 0x01550C) +#define WL18XX_SCR_PAD6 (WL18XX_REGISTERS_BASE + 0x015514) +#define WL18XX_SCR_PAD7 (WL18XX_REGISTERS_BASE + 0x015518) +#define WL18XX_SCR_PAD8 (WL18XX_REGISTERS_BASE + 0x01551C) +#define WL18XX_SCR_PAD9 (WL18XX_REGISTERS_BASE + 0x015520) + +/* Spare registers*/ +#define WL18XX_SPARE_A1 (WL18XX_REGISTERS_BASE + 0x002194) +#define WL18XX_SPARE_A2 (WL18XX_REGISTERS_BASE + 0x002198) +#define WL18XX_SPARE_A3 (WL18XX_REGISTERS_BASE + 0x00219C) +#define WL18XX_SPARE_A4 (WL18XX_REGISTERS_BASE + 0x0021A0) +#define WL18XX_SPARE_A5 (WL18XX_REGISTERS_BASE + 0x0021A4) +#define WL18XX_SPARE_A6 (WL18XX_REGISTERS_BASE + 0x0021A8) +#define WL18XX_SPARE_A7 (WL18XX_REGISTERS_BASE + 0x0021AC) +#define WL18XX_SPARE_A8 (WL18XX_REGISTERS_BASE + 0x0021B0) +#define WL18XX_SPARE_B1 (WL18XX_REGISTERS_BASE + 0x015524) +#define WL18XX_SPARE_B2 (WL18XX_REGISTERS_BASE + 0x015528) +#define WL18XX_SPARE_B3 (WL18XX_REGISTERS_BASE + 0x01552C) +#define WL18XX_SPARE_B4 (WL18XX_REGISTERS_BASE + 0x015530) +#define WL18XX_SPARE_B5 (WL18XX_REGISTERS_BASE + 0x015534) +#define WL18XX_SPARE_B6 (WL18XX_REGISTERS_BASE + 0x015538) +#define WL18XX_SPARE_B7 (WL18XX_REGISTERS_BASE + 0x01553C) +#define WL18XX_SPARE_B8 (WL18XX_REGISTERS_BASE + 0x015540) + +#define WL18XX_REG_COMMAND_MAILBOX_PTR (WL18XX_SCR_PAD0) +#define WL18XX_REG_EVENT_MAILBOX_PTR (WL18XX_SCR_PAD1) +#define WL18XX_EEPROMLESS_IND (WL18XX_SCR_PAD4) + +#define WL18XX_WELP_ARM_COMMAND (WL18XX_REGISTERS_BASE + 0x7100) +#define WL18XX_ENABLE (WL18XX_REGISTERS_BASE + 0x01543C) + +/* PRCM registers */ +#define PLATFORM_DETECTION 0xA0E3E0 +#define OCS_EN 0xA02080 +#define PRIMARY_CLK_DETECT 0xA020A6 +#define PLLSH_WCS_PLL_N 0xA02362 +#define PLLSH_WCS_PLL_M 0xA02360 +#define PLLSH_WCS_PLL_Q_FACTOR_CFG_1 0xA02364 +#define PLLSH_WCS_PLL_Q_FACTOR_CFG_2 0xA02366 +#define PLLSH_WCS_PLL_P_FACTOR_CFG_1 0xA02368 +#define PLLSH_WCS_PLL_P_FACTOR_CFG_2 0xA0236A +#define PLLSH_WCS_PLL_SWALLOW_EN 0xA0236C +#define PLLSH_WL_PLL_EN 0xA02392 + +#define PLLSH_WCS_PLL_Q_FACTOR_CFG_1_MASK 0xFFFF +#define PLLSH_WCS_PLL_Q_FACTOR_CFG_2_MASK 0x007F +#define PLLSH_WCS_PLL_P_FACTOR_CFG_1_MASK 0xFFFF +#define PLLSH_WCS_PLL_P_FACTOR_CFG_2_MASK 0x000F + +#define PLLSH_WCS_PLL_SWALLOW_EN_VAL1 0x1 +#define PLLSH_WCS_PLL_SWALLOW_EN_VAL2 0x12 + +#define WL18XX_REG_FUSE_DATA_1_3 0xA0260C +#define WL18XX_PG_VER_MASK 0x70 +#define WL18XX_PG_VER_OFFSET 4 + +#define WL18XX_REG_FUSE_BD_ADDR_1 0xA02602 +#define WL18XX_REG_FUSE_BD_ADDR_2 0xA02606 + +#define WL18XX_CMD_MBOX_ADDRESS 0xB007B4 + +#define WL18XX_FW_STATUS_ADDR 0x50F8 + +#define CHIP_ID_185x_PG10 (0x06030101) +#define CHIP_ID_185x_PG20 (0x06030111) + +/* + * Host Command Interrupt. Setting this bit masks + * the interrupt that the host issues to inform + * the FW that it has sent a command + * to the Wlan hardware Command Mailbox. + */ +#define WL18XX_INTR_TRIG_CMD BIT(28) + +/* + * Host Event Acknowlegde Interrupt. The host + * sets this bit to acknowledge that it received + * the unsolicited information from the event + * mailbox. 
+ */ +#define WL18XX_INTR_TRIG_EVENT_ACK BIT(29) + +/* + * To boot the firmware in PLT mode we need to write this value in + * SCR_PAD8 before starting. + */ +#define WL18XX_SCR_PAD8_PLT 0xBABABEBE + +enum { + COMPONENT_NO_SWITCH = 0x0, + COMPONENT_2_WAY_SWITCH = 0x1, + COMPONENT_3_WAY_SWITCH = 0x2, + COMPONENT_MATCHING = 0x3, +}; + +enum { + FEM_NONE = 0x0, + FEM_VENDOR_1 = 0x1, + FEM_VENDOR_2 = 0x2, + FEM_VENDOR_3 = 0x3, +}; + +enum { + BOARD_TYPE_EVB_18XX = 0, + BOARD_TYPE_DVP_18XX = 1, + BOARD_TYPE_HDK_18XX = 2, + BOARD_TYPE_FPGA_18XX = 3, + BOARD_TYPE_COM8_18XX = 4, + + NUM_BOARD_TYPES, +}; + +#endif /* __REG_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/tx.c b/drivers/net/wireless/ti/wl18xx/tx.c new file mode 100644 index 000000000000..5b1fb10d9fd7 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/tx.c @@ -0,0 +1,127 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#include "../wlcore/wlcore.h" +#include "../wlcore/cmd.h" +#include "../wlcore/debug.h" +#include "../wlcore/acx.h" +#include "../wlcore/tx.h" + +#include "wl18xx.h" +#include "tx.h" + +static void wl18xx_tx_complete_packet(struct wl1271 *wl, u8 tx_stat_byte) +{ + struct ieee80211_tx_info *info; + struct sk_buff *skb; + int id = tx_stat_byte & WL18XX_TX_STATUS_DESC_ID_MASK; + bool tx_success; + + /* check for id legality */ + if (unlikely(id >= wl->num_tx_desc || wl->tx_frames[id] == NULL)) { + wl1271_warning("illegal id in tx completion: %d", id); + return; + } + + /* a zero bit indicates Tx success */ + tx_success = !(tx_stat_byte & BIT(WL18XX_TX_STATUS_STAT_BIT_IDX)); + + + skb = wl->tx_frames[id]; + info = IEEE80211_SKB_CB(skb); + + if (wl12xx_is_dummy_packet(wl, skb)) { + wl1271_free_tx_id(wl, id); + return; + } + + /* update the TX status info */ + if (tx_success && !(info->flags & IEEE80211_TX_CTL_NO_ACK)) + info->flags |= IEEE80211_TX_STAT_ACK; + + /* no real data about Tx completion */ + info->status.rates[0].idx = -1; + info->status.rates[0].count = 0; + info->status.rates[0].flags = 0; + info->status.ack_signal = -1; + + if (!tx_success) + wl->stats.retry_count++; + + /* + * TODO: update sequence number for encryption? seems to be + * unsupported for now. needed for recovery with encryption. 
+ */ + + /* remove private header from packet */ + skb_pull(skb, sizeof(struct wl1271_tx_hw_descr)); + + /* remove TKIP header space if present */ + if ((wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && + info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) { + int hdrlen = ieee80211_get_hdrlen_from_skb(skb); + memmove(skb->data + WL1271_EXTRA_SPACE_TKIP, skb->data, hdrlen); + skb_pull(skb, WL1271_EXTRA_SPACE_TKIP); + } + + wl1271_debug(DEBUG_TX, "tx status id %u skb 0x%p success %d", + id, skb, tx_success); + + /* return the packet to the stack */ + skb_queue_tail(&wl->deferred_tx_queue, skb); + queue_work(wl->freezable_wq, &wl->netstack_work); + wl1271_free_tx_id(wl, id); +} + +void wl18xx_tx_immediate_complete(struct wl1271 *wl) +{ + struct wl18xx_fw_status_priv *status_priv = + (struct wl18xx_fw_status_priv *)wl->fw_status_2->priv; + struct wl18xx_priv *priv = wl->priv; + u8 i; + + /* nothing to do here */ + if (priv->last_fw_rls_idx == status_priv->fw_release_idx) + return; + + /* freed Tx descriptors */ + wl1271_debug(DEBUG_TX, "last released desc = %d, current idx = %d", + priv->last_fw_rls_idx, status_priv->fw_release_idx); + + if (status_priv->fw_release_idx >= WL18XX_FW_MAX_TX_STATUS_DESC) { + wl1271_error("invalid desc release index %d", + status_priv->fw_release_idx); + WARN_ON(1); + return; + } + + for (i = priv->last_fw_rls_idx; + i != status_priv->fw_release_idx; + i = (i + 1) % WL18XX_FW_MAX_TX_STATUS_DESC) { + wl18xx_tx_complete_packet(wl, + status_priv->released_tx_desc[i]); + + wl->tx_results_count++; + } + + priv->last_fw_rls_idx = status_priv->fw_release_idx; +} diff --git a/drivers/net/wireless/ti/wl18xx/tx.h b/drivers/net/wireless/ti/wl18xx/tx.h new file mode 100644 index 000000000000..ccddc548e44a --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/tx.h @@ -0,0 +1,46 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_TX_H__ +#define __WL18XX_TX_H__ + +#include "../wlcore/wlcore.h" + +#define WL18XX_TX_HW_BLOCK_SPARE 1 +/* for special cases - namely, TKIP and GEM */ +#define WL18XX_TX_HW_EXTRA_BLOCK_SPARE 2 +#define WL18XX_TX_HW_BLOCK_SIZE 268 + +#define WL18XX_TX_STATUS_DESC_ID_MASK 0x7F +#define WL18XX_TX_STATUS_STAT_BIT_IDX 7 + +/* Indicates this TX HW frame is not padded to SDIO block size */ +#define WL18XX_TX_CTRL_NOT_PADDED BIT(7) + +/* + * The FW uses a special bit to indicate a wide channel should be used in + * the rate policy. 
+ */ +#define CONF_TX_RATE_USE_WIDE_CHAN BIT(31) + +void wl18xx_tx_immediate_complete(struct wl1271 *wl); + +#endif /* __WL18XX_TX_H__ */ diff --git a/drivers/net/wireless/ti/wl18xx/wl18xx.h b/drivers/net/wireless/ti/wl18xx/wl18xx.h new file mode 100644 index 000000000000..6452396fa1d4 --- /dev/null +++ b/drivers/net/wireless/ti/wl18xx/wl18xx.h @@ -0,0 +1,95 @@ +/* + * This file is part of wl18xx + * + * Copyright (C) 2011 Texas Instruments Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA + * 02110-1301 USA + * + */ + +#ifndef __WL18XX_PRIV_H__ +#define __WL18XX_PRIV_H__ + +#include "conf.h" + +/* minimum FW required for driver */ +#define WL18XX_CHIP_VER 8 +#define WL18XX_IFTYPE_VER 2 +#define WL18XX_MAJOR_VER 0 +#define WL18XX_SUBTYPE_VER 0 +#define WL18XX_MINOR_VER 100 + +#define WL18XX_CMD_MAX_SIZE 740 + +struct wl18xx_priv { + /* buffer for sending commands to FW */ + u8 cmd_buf[WL18XX_CMD_MAX_SIZE]; + + struct wl18xx_priv_conf conf; + + /* Index of last released Tx desc in FW */ + u8 last_fw_rls_idx; + + /* number of VIFs requiring extra spare mem-blocks */ + int extra_spare_vif_count; +}; + +#define WL18XX_FW_MAX_TX_STATUS_DESC 33 + +struct wl18xx_fw_status_priv { + /* + * Index in released_tx_desc for first byte that holds + * released tx host desc + */ + u8 fw_release_idx; + + /* + * Array of host Tx descriptors, where fw_release_idx + * indicates the first released idx. 
+ */ + u8 released_tx_desc[WL18XX_FW_MAX_TX_STATUS_DESC]; + + u8 padding[2]; +}; + +#define WL18XX_PHY_VERSION_MAX_LEN 20 + +struct wl18xx_static_data_priv { + char phy_version[WL18XX_PHY_VERSION_MAX_LEN]; +}; + +struct wl18xx_clk_cfg { + u32 n; + u32 m; + u32 p; + u32 q; + bool swallow; +}; + +enum { + CLOCK_CONFIG_16_2_M = 1, + CLOCK_CONFIG_16_368_M, + CLOCK_CONFIG_16_8_M, + CLOCK_CONFIG_19_2_M, + CLOCK_CONFIG_26_M, + CLOCK_CONFIG_32_736_M, + CLOCK_CONFIG_33_6_M, + CLOCK_CONFIG_38_468_M, + CLOCK_CONFIG_52_M, + + NUM_CLOCK_CONFIGS, +}; + +#endif /* __WL18XX_PRIV_H__ */ diff --git a/drivers/net/wireless/ti/wlcore/acx.c b/drivers/net/wireless/ti/wlcore/acx.c index f3d6fa508269..ce108a736bd0 100644 --- a/drivers/net/wireless/ti/wlcore/acx.c +++ b/drivers/net/wireless/ti/wlcore/acx.c @@ -70,7 +70,7 @@ int wl1271_acx_sleep_auth(struct wl1271 *wl, u8 sleep_auth) struct acx_sleep_auth *auth; int ret; - wl1271_debug(DEBUG_ACX, "acx sleep auth"); + wl1271_debug(DEBUG_ACX, "acx sleep auth %d", sleep_auth); auth = kzalloc(sizeof(*auth), GFP_KERNEL); if (!auth) { @@ -81,11 +81,18 @@ int wl1271_acx_sleep_auth(struct wl1271 *wl, u8 sleep_auth) auth->sleep_auth = sleep_auth; ret = wl1271_cmd_configure(wl, ACX_SLEEP_AUTH, auth, sizeof(*auth)); + if (ret < 0) { + wl1271_error("could not configure sleep_auth to %d: %d", + sleep_auth, ret); + goto out; + } + wl->sleep_auth = sleep_auth; out: kfree(auth); return ret; } +EXPORT_SYMBOL_GPL(wl1271_acx_sleep_auth); int wl1271_acx_tx_power(struct wl1271 *wl, struct wl12xx_vif *wlvif, int power) @@ -708,14 +715,14 @@ out: return ret; } -int wl1271_acx_statistics(struct wl1271 *wl, struct acx_statistics *stats) +int wl1271_acx_statistics(struct wl1271 *wl, void *stats) { int ret; wl1271_debug(DEBUG_ACX, "acx statistics"); ret = wl1271_cmd_interrogate(wl, ACX_STATISTICS, stats, - sizeof(*stats)); + wl->stats.fw_stats_len); if (ret < 0) { wl1271_warning("acx statistics failed: %d", ret); return -ENOMEM; @@ -997,6 +1004,7 @@ out: kfree(mem_conf); return ret; } +EXPORT_SYMBOL_GPL(wl12xx_acx_mem_cfg); int wl1271_acx_init_mem_config(struct wl1271 *wl) { @@ -1027,6 +1035,7 @@ int wl1271_acx_init_mem_config(struct wl1271 *wl) return 0; } +EXPORT_SYMBOL_GPL(wl1271_acx_init_mem_config); int wl1271_acx_init_rx_interrupt(struct wl1271 *wl) { @@ -1150,6 +1159,7 @@ out: kfree(acx); return ret; } +EXPORT_SYMBOL_GPL(wl1271_acx_pm_config); int wl1271_acx_keep_alive_mode(struct wl1271 *wl, struct wl12xx_vif *wlvif, bool enable) diff --git a/drivers/net/wireless/ti/wlcore/acx.h b/drivers/net/wireless/ti/wlcore/acx.h index e6a74869a5ff..d03215d6b3bd 100644 --- a/drivers/net/wireless/ti/wlcore/acx.h +++ b/drivers/net/wireless/ti/wlcore/acx.h @@ -51,21 +51,18 @@ #define WL1271_ACX_INTR_TRACE_A BIT(7) /* Trace message on MBOX #B */ #define WL1271_ACX_INTR_TRACE_B BIT(8) +/* SW FW Initiated interrupt Watchdog timer expiration */ +#define WL1271_ACX_SW_INTR_WATCHDOG BIT(9) -#define WL1271_ACX_INTR_ALL 0xFFFFFFFF -#define WL1271_ACX_ALL_EVENTS_VECTOR (WL1271_ACX_INTR_WATCHDOG | \ - WL1271_ACX_INTR_INIT_COMPLETE | \ - WL1271_ACX_INTR_EVENT_A | \ - WL1271_ACX_INTR_EVENT_B | \ - WL1271_ACX_INTR_CMD_COMPLETE | \ - WL1271_ACX_INTR_HW_AVAILABLE | \ - WL1271_ACX_INTR_DATA) - -#define WL1271_INTR_MASK (WL1271_ACX_INTR_WATCHDOG | \ - WL1271_ACX_INTR_EVENT_A | \ - WL1271_ACX_INTR_EVENT_B | \ - WL1271_ACX_INTR_HW_AVAILABLE | \ - WL1271_ACX_INTR_DATA) +#define WL1271_ACX_INTR_ALL 0xFFFFFFFF + +/* all possible interrupts - only appropriate ones will be masked in */ +#define WLCORE_ALL_INTR_MASK 
(WL1271_ACX_INTR_WATCHDOG | \ + WL1271_ACX_INTR_EVENT_A | \ + WL1271_ACX_INTR_EVENT_B | \ + WL1271_ACX_INTR_HW_AVAILABLE | \ + WL1271_ACX_INTR_DATA | \ + WL1271_ACX_SW_INTR_WATCHDOG) /* Target's information element */ struct acx_header { @@ -121,6 +118,11 @@ enum wl1271_psm_mode { /* Extreme low power */ WL1271_PSM_ELP = 2, + + WL1271_PSM_MAX = WL1271_PSM_ELP, + + /* illegal out of band value of PSM mode */ + WL1271_PSM_ILLEGAL = 0xff }; struct acx_sleep_auth { @@ -417,228 +419,6 @@ struct acx_ctsprotect { u8 padding[2]; } __packed; -struct acx_tx_statistics { - __le32 internal_desc_overflow; -} __packed; - -struct acx_rx_statistics { - __le32 out_of_mem; - __le32 hdr_overflow; - __le32 hw_stuck; - __le32 dropped; - __le32 fcs_err; - __le32 xfr_hint_trig; - __le32 path_reset; - __le32 reset_counter; -} __packed; - -struct acx_dma_statistics { - __le32 rx_requested; - __le32 rx_errors; - __le32 tx_requested; - __le32 tx_errors; -} __packed; - -struct acx_isr_statistics { - /* host command complete */ - __le32 cmd_cmplt; - - /* fiqisr() */ - __le32 fiqs; - - /* (INT_STS_ND & INT_TRIG_RX_HEADER) */ - __le32 rx_headers; - - /* (INT_STS_ND & INT_TRIG_RX_CMPLT) */ - __le32 rx_completes; - - /* (INT_STS_ND & INT_TRIG_NO_RX_BUF) */ - __le32 rx_mem_overflow; - - /* (INT_STS_ND & INT_TRIG_S_RX_RDY) */ - __le32 rx_rdys; - - /* irqisr() */ - __le32 irqs; - - /* (INT_STS_ND & INT_TRIG_TX_PROC) */ - __le32 tx_procs; - - /* (INT_STS_ND & INT_TRIG_DECRYPT_DONE) */ - __le32 decrypt_done; - - /* (INT_STS_ND & INT_TRIG_DMA0) */ - __le32 dma0_done; - - /* (INT_STS_ND & INT_TRIG_DMA1) */ - __le32 dma1_done; - - /* (INT_STS_ND & INT_TRIG_TX_EXC_CMPLT) */ - __le32 tx_exch_complete; - - /* (INT_STS_ND & INT_TRIG_COMMAND) */ - __le32 commands; - - /* (INT_STS_ND & INT_TRIG_RX_PROC) */ - __le32 rx_procs; - - /* (INT_STS_ND & INT_TRIG_PM_802) */ - __le32 hw_pm_mode_changes; - - /* (INT_STS_ND & INT_TRIG_ACKNOWLEDGE) */ - __le32 host_acknowledges; - - /* (INT_STS_ND & INT_TRIG_PM_PCI) */ - __le32 pci_pm; - - /* (INT_STS_ND & INT_TRIG_ACM_WAKEUP) */ - __le32 wakeups; - - /* (INT_STS_ND & INT_TRIG_LOW_RSSI) */ - __le32 low_rssi; -} __packed; - -struct acx_wep_statistics { - /* WEP address keys configured */ - __le32 addr_key_count; - - /* default keys configured */ - __le32 default_key_count; - - __le32 reserved; - - /* number of times that WEP key not found on lookup */ - __le32 key_not_found; - - /* number of times that WEP key decryption failed */ - __le32 decrypt_fail; - - /* WEP packets decrypted */ - __le32 packets; - - /* WEP decrypt interrupts */ - __le32 interrupt; -} __packed; - -#define ACX_MISSED_BEACONS_SPREAD 10 - -struct acx_pwr_statistics { - /* the amount of enters into power save mode (both PD & ELP) */ - __le32 ps_enter; - - /* the amount of enters into ELP mode */ - __le32 elp_enter; - - /* the amount of missing beacon interrupts to the host */ - __le32 missing_bcns; - - /* the amount of wake on host-access times */ - __le32 wake_on_host; - - /* the amount of wake on timer-expire */ - __le32 wake_on_timer_exp; - - /* the number of packets that were transmitted with PS bit set */ - __le32 tx_with_ps; - - /* the number of packets that were transmitted with PS bit clear */ - __le32 tx_without_ps; - - /* the number of received beacons */ - __le32 rcvd_beacons; - - /* the number of entering into PowerOn (power save off) */ - __le32 power_save_off; - - /* the number of entries into power save mode */ - __le16 enable_ps; - - /* - * the number of exits from power save, not including failed PS - * 
transitions - */ - __le16 disable_ps; - - /* - * the number of times the TSF counter was adjusted because - * of drift - */ - __le32 fix_tsf_ps; - - /* Gives statistics about the spread continuous missed beacons. - * The 16 LSB are dedicated for the PS mode. - * The 16 MSB are dedicated for the PS mode. - * cont_miss_bcns_spread[0] - single missed beacon. - * cont_miss_bcns_spread[1] - two continuous missed beacons. - * cont_miss_bcns_spread[2] - three continuous missed beacons. - * ... - * cont_miss_bcns_spread[9] - ten and more continuous missed beacons. - */ - __le32 cont_miss_bcns_spread[ACX_MISSED_BEACONS_SPREAD]; - - /* the number of beacons in awake mode */ - __le32 rcvd_awake_beacons; -} __packed; - -struct acx_mic_statistics { - __le32 rx_pkts; - __le32 calc_failure; -} __packed; - -struct acx_aes_statistics { - __le32 encrypt_fail; - __le32 decrypt_fail; - __le32 encrypt_packets; - __le32 decrypt_packets; - __le32 encrypt_interrupt; - __le32 decrypt_interrupt; -} __packed; - -struct acx_event_statistics { - __le32 heart_beat; - __le32 calibration; - __le32 rx_mismatch; - __le32 rx_mem_empty; - __le32 rx_pool; - __le32 oom_late; - __le32 phy_transmit_error; - __le32 tx_stuck; -} __packed; - -struct acx_ps_statistics { - __le32 pspoll_timeouts; - __le32 upsd_timeouts; - __le32 upsd_max_sptime; - __le32 upsd_max_apturn; - __le32 pspoll_max_apturn; - __le32 pspoll_utilization; - __le32 upsd_utilization; -} __packed; - -struct acx_rxpipe_statistics { - __le32 rx_prep_beacon_drop; - __le32 descr_host_int_trig_rx_data; - __le32 beacon_buffer_thres_host_int_trig_rx_data; - __le32 missed_beacon_host_int_trig_rx_data; - __le32 tx_xfr_host_int_trig_rx_data; -} __packed; - -struct acx_statistics { - struct acx_header header; - - struct acx_tx_statistics tx; - struct acx_rx_statistics rx; - struct acx_dma_statistics dma; - struct acx_isr_statistics isr; - struct acx_wep_statistics wep; - struct acx_pwr_statistics pwr; - struct acx_aes_statistics aes; - struct acx_mic_statistics mic; - struct acx_event_statistics event; - struct acx_ps_statistics ps; - struct acx_rxpipe_statistics rxpipe; -} __packed; - struct acx_rate_class { __le32 enabled_rates; u8 short_retry_limit; @@ -828,6 +608,8 @@ struct wl1271_acx_keep_alive_config { #define HOST_IF_CFG_RX_FIFO_ENABLE BIT(0) #define HOST_IF_CFG_TX_EXTRA_BLKS_SWAP BIT(1) #define HOST_IF_CFG_TX_PAD_TO_SDIO_BLK BIT(3) +#define HOST_IF_CFG_RX_PAD_TO_SDIO_BLK BIT(4) +#define HOST_IF_CFG_ADD_RX_ALIGNMENT BIT(6) enum { WL1271_ACX_TRIG_TYPE_LEVEL = 0, @@ -946,7 +728,7 @@ struct wl1271_acx_ht_information { u8 padding[2]; } __packed; -#define RX_BA_MAX_SESSIONS 2 +#define RX_BA_MAX_SESSIONS 3 struct wl1271_acx_ba_initiator_policy { struct acx_header header; @@ -1243,6 +1025,7 @@ enum { ACX_CONFIG_HANGOVER = 0x0042, ACX_FEATURE_CFG = 0x0043, ACX_PROTECTION_CFG = 0x0044, + ACX_CHECKSUM_CONFIG = 0x0045, }; @@ -1281,7 +1064,7 @@ int wl1271_acx_set_preamble(struct wl1271 *wl, struct wl12xx_vif *wlvif, enum acx_preamble_type preamble); int wl1271_acx_cts_protect(struct wl1271 *wl, struct wl12xx_vif *wlvif, enum acx_ctsprotect_type ctsprotect); -int wl1271_acx_statistics(struct wl1271 *wl, struct acx_statistics *stats); +int wl1271_acx_statistics(struct wl1271 *wl, void *stats); int wl1271_acx_sta_rate_policies(struct wl1271 *wl, struct wl12xx_vif *wlvif); int wl1271_acx_ap_rate_policy(struct wl1271 *wl, struct conf_tx_rate_class *c, u8 idx); diff --git a/drivers/net/wireless/ti/wlcore/boot.c b/drivers/net/wireless/ti/wlcore/boot.c index 9b98230f84ce..375ea574eafb 
100644 --- a/drivers/net/wireless/ti/wlcore/boot.c +++ b/drivers/net/wireless/ti/wlcore/boot.c @@ -33,22 +33,35 @@ #include "rx.h" #include "hw_ops.h" -static void wl1271_boot_set_ecpu_ctrl(struct wl1271 *wl, u32 flag) +static int wl1271_boot_set_ecpu_ctrl(struct wl1271 *wl, u32 flag) { u32 cpu_ctrl; + int ret; /* 10.5.0 run the firmware (I) */ - cpu_ctrl = wlcore_read_reg(wl, REG_ECPU_CONTROL); + ret = wlcore_read_reg(wl, REG_ECPU_CONTROL, &cpu_ctrl); + if (ret < 0) + goto out; /* 10.5.1 run the firmware (II) */ cpu_ctrl |= flag; - wlcore_write_reg(wl, REG_ECPU_CONTROL, cpu_ctrl); + ret = wlcore_write_reg(wl, REG_ECPU_CONTROL, cpu_ctrl); + +out: + return ret; } -static int wlcore_parse_fw_ver(struct wl1271 *wl) +static int wlcore_boot_parse_fw_ver(struct wl1271 *wl, + struct wl1271_static_data *static_data) { int ret; + strncpy(wl->chip.fw_ver_str, static_data->fw_version, + sizeof(wl->chip.fw_ver_str)); + + /* make sure the string is NULL-terminated */ + wl->chip.fw_ver_str[sizeof(wl->chip.fw_ver_str) - 1] = '\0'; + ret = sscanf(wl->chip.fw_ver_str + 4, "%u.%u.%u.%u.%u", &wl->chip.fw_ver[0], &wl->chip.fw_ver[1], &wl->chip.fw_ver[2], &wl->chip.fw_ver[3], @@ -57,43 +70,96 @@ static int wlcore_parse_fw_ver(struct wl1271 *wl) if (ret != 5) { wl1271_warning("fw version incorrect value"); memset(wl->chip.fw_ver, 0, sizeof(wl->chip.fw_ver)); - return -EINVAL; + ret = -EINVAL; + goto out; } ret = wlcore_identify_fw(wl); if (ret < 0) - return ret; + goto out; +out: + return ret; +} + +static int wlcore_validate_fw_ver(struct wl1271 *wl) +{ + unsigned int *fw_ver = wl->chip.fw_ver; + unsigned int *min_ver = wl->min_fw_ver; + /* the chip must be exactly equal */ + if (min_ver[FW_VER_CHIP] != fw_ver[FW_VER_CHIP]) + goto fail; + + /* always check the next digit if all previous ones are equal */ + + if (min_ver[FW_VER_IF_TYPE] < fw_ver[FW_VER_IF_TYPE]) + goto out; + else if (min_ver[FW_VER_IF_TYPE] > fw_ver[FW_VER_IF_TYPE]) + goto fail; + + if (min_ver[FW_VER_MAJOR] < fw_ver[FW_VER_MAJOR]) + goto out; + else if (min_ver[FW_VER_MAJOR] > fw_ver[FW_VER_MAJOR]) + goto fail; + + if (min_ver[FW_VER_SUBTYPE] < fw_ver[FW_VER_SUBTYPE]) + goto out; + else if (min_ver[FW_VER_SUBTYPE] > fw_ver[FW_VER_SUBTYPE]) + goto fail; + + if (min_ver[FW_VER_MINOR] < fw_ver[FW_VER_MINOR]) + goto out; + else if (min_ver[FW_VER_MINOR] > fw_ver[FW_VER_MINOR]) + goto fail; + +out: return 0; + +fail: + wl1271_error("Your WiFi FW version (%u.%u.%u.%u.%u) is outdated.\n" + "Please use at least FW %u.%u.%u.%u.%u.\n" + "You can get more information at:\n" + "http://wireless.kernel.org/en/users/Drivers/wl12xx", + fw_ver[FW_VER_CHIP], fw_ver[FW_VER_IF_TYPE], + fw_ver[FW_VER_MAJOR], fw_ver[FW_VER_SUBTYPE], + fw_ver[FW_VER_MINOR], min_ver[FW_VER_CHIP], + min_ver[FW_VER_IF_TYPE], min_ver[FW_VER_MAJOR], + min_ver[FW_VER_SUBTYPE], min_ver[FW_VER_MINOR]); + return -EINVAL; } -static int wlcore_boot_fw_version(struct wl1271 *wl) +static int wlcore_boot_static_data(struct wl1271 *wl) { struct wl1271_static_data *static_data; + size_t len = sizeof(*static_data) + wl->static_data_priv_len; int ret; - static_data = kmalloc(sizeof(*static_data), GFP_KERNEL | GFP_DMA); + static_data = kmalloc(len, GFP_KERNEL); if (!static_data) { - wl1271_error("Couldn't allocate memory for static data!"); - return -ENOMEM; + ret = -ENOMEM; + goto out; } - wl1271_read(wl, wl->cmd_box_addr, static_data, sizeof(*static_data), - false); - - strncpy(wl->chip.fw_ver_str, static_data->fw_version, - sizeof(wl->chip.fw_ver_str)); + ret = wlcore_read(wl, wl->cmd_box_addr, 
static_data, len, false); + if (ret < 0) + goto out_free; - kfree(static_data); + ret = wlcore_boot_parse_fw_ver(wl, static_data); + if (ret < 0) + goto out_free; - /* make sure the string is NULL-terminated */ - wl->chip.fw_ver_str[sizeof(wl->chip.fw_ver_str) - 1] = '\0'; + ret = wlcore_validate_fw_ver(wl); + if (ret < 0) + goto out_free; - ret = wlcore_parse_fw_ver(wl); + ret = wlcore_handle_static_data(wl, static_data); if (ret < 0) - return ret; + goto out_free; - return 0; +out_free: + kfree(static_data); +out: + return ret; } static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, @@ -102,6 +168,7 @@ static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, struct wlcore_partition_set partition; int addr, chunk_num, partition_limit; u8 *p, *chunk; + int ret; /* whal_FwCtrl_LoadFwImageSm() */ @@ -123,7 +190,9 @@ static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, memcpy(&partition, &wl->ptable[PART_DOWN], sizeof(partition)); partition.mem.start = dest; - wlcore_set_partition(wl, &partition); + ret = wlcore_set_partition(wl, &partition); + if (ret < 0) + goto out; /* 10.1 set partition limit and chunk num */ chunk_num = 0; @@ -137,7 +206,9 @@ static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, partition_limit = chunk_num * CHUNK_SIZE + wl->ptable[PART_DOWN].mem.size; partition.mem.start = addr; - wlcore_set_partition(wl, &partition); + ret = wlcore_set_partition(wl, &partition); + if (ret < 0) + goto out; } /* 10.3 upload the chunk */ @@ -146,7 +217,9 @@ static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, memcpy(chunk, p, CHUNK_SIZE); wl1271_debug(DEBUG_BOOT, "uploading fw chunk 0x%p to 0x%x", p, addr); - wl1271_write(wl, addr, chunk, CHUNK_SIZE, false); + ret = wlcore_write(wl, addr, chunk, CHUNK_SIZE, false); + if (ret < 0) + goto out; chunk_num++; } @@ -157,10 +230,11 @@ static int wl1271_boot_upload_firmware_chunk(struct wl1271 *wl, void *buf, memcpy(chunk, p, fw_data_len % CHUNK_SIZE); wl1271_debug(DEBUG_BOOT, "uploading fw last chunk (%zd B) 0x%p to 0x%x", fw_data_len % CHUNK_SIZE, p, addr); - wl1271_write(wl, addr, chunk, fw_data_len % CHUNK_SIZE, false); + ret = wlcore_write(wl, addr, chunk, fw_data_len % CHUNK_SIZE, false); +out: kfree(chunk); - return 0; + return ret; } int wlcore_boot_upload_firmware(struct wl1271 *wl) @@ -203,9 +277,12 @@ int wlcore_boot_upload_nvs(struct wl1271 *wl) int i; u32 dest_addr, val; u8 *nvs_ptr, *nvs_aligned; + int ret; - if (wl->nvs == NULL) + if (wl->nvs == NULL) { + wl1271_error("NVS file is needed during boot"); return -ENODEV; + } if (wl->quirks & WLCORE_QUIRK_LEGACY_NVS) { struct wl1271_nvs_file *nvs = @@ -298,7 +375,9 @@ int wlcore_boot_upload_nvs(struct wl1271 *wl) wl1271_debug(DEBUG_BOOT, "nvs burst write 0x%x: 0x%x", dest_addr, val); - wl1271_write32(wl, dest_addr, val); + ret = wlcore_write32(wl, dest_addr, val); + if (ret < 0) + return ret; nvs_ptr += 4; dest_addr += 4; @@ -324,7 +403,9 @@ int wlcore_boot_upload_nvs(struct wl1271 *wl) nvs_len -= nvs_ptr - (u8 *)wl->nvs; /* Now we must set the partition correctly */ - wlcore_set_partition(wl, &wl->ptable[PART_WORK]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_WORK]); + if (ret < 0) + return ret; /* Copy the NVS tables to a new block to ensure alignment */ nvs_aligned = kmemdup(nvs_ptr, nvs_len, GFP_KERNEL); @@ -332,11 +413,11 @@ int wlcore_boot_upload_nvs(struct wl1271 *wl) return -ENOMEM; /* And finally we upload the NVS tables */ - wlcore_write_data(wl, 
REG_CMD_MBOX_ADDRESS, - nvs_aligned, nvs_len, false); + ret = wlcore_write_data(wl, REG_CMD_MBOX_ADDRESS, nvs_aligned, nvs_len, + false); kfree(nvs_aligned); - return 0; + return ret; out_badnvs: wl1271_error("nvs data is malformed"); @@ -350,11 +431,17 @@ int wlcore_boot_run_firmware(struct wl1271 *wl) u32 chip_id, intr; /* Make sure we have the boot partition */ - wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + if (ret < 0) + return ret; - wl1271_boot_set_ecpu_ctrl(wl, ECPU_CONTROL_HALT); + ret = wl1271_boot_set_ecpu_ctrl(wl, ECPU_CONTROL_HALT); + if (ret < 0) + return ret; - chip_id = wlcore_read_reg(wl, REG_CHIP_ID_B); + ret = wlcore_read_reg(wl, REG_CHIP_ID_B, &chip_id); + if (ret < 0) + return ret; wl1271_debug(DEBUG_BOOT, "chip id after firmware boot: 0x%x", chip_id); @@ -367,7 +454,9 @@ int wlcore_boot_run_firmware(struct wl1271 *wl) loop = 0; while (loop++ < INIT_LOOP) { udelay(INIT_LOOP_DELAY); - intr = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR); + ret = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR, &intr); + if (ret < 0) + return ret; if (intr == 0xffffffff) { wl1271_error("error reading hardware complete " @@ -376,8 +465,10 @@ int wlcore_boot_run_firmware(struct wl1271 *wl) } /* check that ACX_INTR_INIT_COMPLETE is enabled */ else if (intr & WL1271_ACX_INTR_INIT_COMPLETE) { - wlcore_write_reg(wl, REG_INTERRUPT_ACK, - WL1271_ACX_INTR_INIT_COMPLETE); + ret = wlcore_write_reg(wl, REG_INTERRUPT_ACK, + WL1271_ACX_INTR_INIT_COMPLETE); + if (ret < 0) + return ret; break; } } @@ -389,20 +480,25 @@ int wlcore_boot_run_firmware(struct wl1271 *wl) } /* get hardware config command mail box */ - wl->cmd_box_addr = wlcore_read_reg(wl, REG_COMMAND_MAILBOX_PTR); + ret = wlcore_read_reg(wl, REG_COMMAND_MAILBOX_PTR, &wl->cmd_box_addr); + if (ret < 0) + return ret; wl1271_debug(DEBUG_MAILBOX, "cmd_box_addr 0x%x", wl->cmd_box_addr); /* get hardware config event mail box */ - wl->mbox_ptr[0] = wlcore_read_reg(wl, REG_EVENT_MAILBOX_PTR); + ret = wlcore_read_reg(wl, REG_EVENT_MAILBOX_PTR, &wl->mbox_ptr[0]); + if (ret < 0) + return ret; + wl->mbox_ptr[1] = wl->mbox_ptr[0] + sizeof(struct event_mailbox); wl1271_debug(DEBUG_MAILBOX, "MBOX ptrs: 0x%x 0x%x", wl->mbox_ptr[0], wl->mbox_ptr[1]); - ret = wlcore_boot_fw_version(wl); + ret = wlcore_boot_static_data(wl); if (ret < 0) { - wl1271_error("couldn't boot firmware"); + wl1271_error("error getting static data"); return ret; } @@ -436,9 +532,9 @@ int wlcore_boot_run_firmware(struct wl1271 *wl) } /* set the working partition to its "running" mode offset */ - wlcore_set_partition(wl, &wl->ptable[PART_WORK]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_WORK]); /* firmware startup completed */ - return 0; + return ret; } EXPORT_SYMBOL_GPL(wlcore_boot_run_firmware); diff --git a/drivers/net/wireless/ti/wlcore/boot.h b/drivers/net/wireless/ti/wlcore/boot.h index 094981dd2227..a525225f990c 100644 --- a/drivers/net/wireless/ti/wlcore/boot.h +++ b/drivers/net/wireless/ti/wlcore/boot.h @@ -40,6 +40,7 @@ struct wl1271_static_data { u8 fw_version[WL1271_FW_VERSION_MAX_LEN]; u32 hw_version; u8 tx_power_table[WL1271_NO_SUBBANDS][WL1271_NO_POWER_LEVELS]; + u8 priv[0]; }; /* number of times we try to read the INIT interrupt */ diff --git a/drivers/net/wireless/ti/wlcore/cmd.c b/drivers/net/wireless/ti/wlcore/cmd.c index 5b128a971449..20e1bd923832 100644 --- a/drivers/net/wireless/ti/wlcore/cmd.c +++ b/drivers/net/wireless/ti/wlcore/cmd.c @@ -36,8 +36,10 @@ #include "cmd.h" #include "event.h" #include 
"tx.h" +#include "hw_ops.h" #define WL1271_CMD_FAST_POLL_COUNT 50 +#define WL1271_WAIT_EVENT_FAST_POLL_COUNT 20 /* * send command to firmware @@ -64,17 +66,24 @@ int wl1271_cmd_send(struct wl1271 *wl, u16 id, void *buf, size_t len, WARN_ON(len % 4 != 0); WARN_ON(test_bit(WL1271_FLAG_IN_ELP, &wl->flags)); - wl1271_write(wl, wl->cmd_box_addr, buf, len, false); + ret = wlcore_write(wl, wl->cmd_box_addr, buf, len, false); + if (ret < 0) + goto fail; /* * TODO: we just need this because one bit is in a different * place. Is there any better way? */ - wl->ops->trigger_cmd(wl, wl->cmd_box_addr, buf, len); + ret = wl->ops->trigger_cmd(wl, wl->cmd_box_addr, buf, len); + if (ret < 0) + goto fail; timeout = jiffies + msecs_to_jiffies(WL1271_COMMAND_TIMEOUT); - intr = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR); + ret = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR, &intr); + if (ret < 0) + goto fail; + while (!(intr & WL1271_ACX_INTR_CMD_COMPLETE)) { if (time_after(jiffies, timeout)) { wl1271_error("command complete timeout"); @@ -88,13 +97,18 @@ int wl1271_cmd_send(struct wl1271 *wl, u16 id, void *buf, size_t len, else msleep(1); - intr = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR); + ret = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR, &intr); + if (ret < 0) + goto fail; } /* read back the status code of the command */ if (res_len == 0) res_len = sizeof(struct wl1271_cmd_header); - wl1271_read(wl, wl->cmd_box_addr, cmd, res_len, false); + + ret = wlcore_read(wl, wl->cmd_box_addr, cmd, res_len, false); + if (ret < 0) + goto fail; status = le16_to_cpu(cmd->status); if (status != CMD_STATUS_SUCCESS) { @@ -103,11 +117,14 @@ int wl1271_cmd_send(struct wl1271 *wl, u16 id, void *buf, size_t len, goto fail; } - wlcore_write_reg(wl, REG_INTERRUPT_ACK, WL1271_ACX_INTR_CMD_COMPLETE); + ret = wlcore_write_reg(wl, REG_INTERRUPT_ACK, + WL1271_ACX_INTR_CMD_COMPLETE); + if (ret < 0) + goto fail; + return 0; fail: - WARN_ON(1); wl12xx_queue_recovery_work(wl); return ret; } @@ -116,35 +133,50 @@ fail: * Poll the mailbox event field until any of the bits in the mask is set or a * timeout occurs (WL1271_EVENT_TIMEOUT in msecs) */ -static int wl1271_cmd_wait_for_event_or_timeout(struct wl1271 *wl, u32 mask) +static int wl1271_cmd_wait_for_event_or_timeout(struct wl1271 *wl, + u32 mask, bool *timeout) { u32 *events_vector; u32 event; - unsigned long timeout; + unsigned long timeout_time; + u16 poll_count = 0; int ret = 0; + *timeout = false; + events_vector = kmalloc(sizeof(*events_vector), GFP_KERNEL | GFP_DMA); if (!events_vector) return -ENOMEM; - timeout = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT); + timeout_time = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT); do { - if (time_after(jiffies, timeout)) { + if (time_after(jiffies, timeout_time)) { wl1271_debug(DEBUG_CMD, "timeout waiting for event %d", (int)mask); - ret = -ETIMEDOUT; + *timeout = true; goto out; } - msleep(1); + poll_count++; + if (poll_count < WL1271_WAIT_EVENT_FAST_POLL_COUNT) + usleep_range(50, 51); + else + usleep_range(1000, 5000); /* read from both event fields */ - wl1271_read(wl, wl->mbox_ptr[0], events_vector, - sizeof(*events_vector), false); + ret = wlcore_read(wl, wl->mbox_ptr[0], events_vector, + sizeof(*events_vector), false); + if (ret < 0) + goto out; + event = *events_vector & mask; - wl1271_read(wl, wl->mbox_ptr[1], events_vector, - sizeof(*events_vector), false); + + ret = wlcore_read(wl, wl->mbox_ptr[1], events_vector, + sizeof(*events_vector), false); + if (ret < 0) + goto out; + event |= *events_vector & mask; } while 
(!event); @@ -156,9 +188,10 @@ out: static int wl1271_cmd_wait_for_event(struct wl1271 *wl, u32 mask) { int ret; + bool timeout = false; - ret = wl1271_cmd_wait_for_event_or_timeout(wl, mask); - if (ret != 0) { + ret = wl1271_cmd_wait_for_event_or_timeout(wl, mask, &timeout); + if (ret != 0 || timeout) { wl12xx_queue_recovery_work(wl); return ret; } @@ -291,6 +324,23 @@ static int wl12xx_get_new_session_id(struct wl1271 *wl, return wlvif->session_counter; } +static u8 wlcore_get_native_channel_type(u8 nl_channel_type) +{ + switch (nl_channel_type) { + case NL80211_CHAN_NO_HT: + return WLCORE_CHAN_NO_HT; + case NL80211_CHAN_HT20: + return WLCORE_CHAN_HT20; + case NL80211_CHAN_HT40MINUS: + return WLCORE_CHAN_HT40MINUS; + case NL80211_CHAN_HT40PLUS: + return WLCORE_CHAN_HT40PLUS; + default: + WARN_ON(1); + return WLCORE_CHAN_NO_HT; + } +} + static int wl12xx_cmd_role_start_dev(struct wl1271 *wl, struct wl12xx_vif *wlvif) { @@ -407,6 +457,7 @@ int wl12xx_cmd_role_start_sta(struct wl1271 *wl, struct wl12xx_vif *wlvif) memcpy(cmd->sta.ssid, wlvif->ssid, wlvif->ssid_len); memcpy(cmd->sta.bssid, vif->bss_conf.bssid, ETH_ALEN); cmd->sta.local_rates = cpu_to_le32(wlvif->rate_set); + cmd->channel_type = wlcore_get_native_channel_type(wlvif->channel_type); if (wlvif->sta.hlid == WL12XX_INVALID_LINK_ID) { ret = wl12xx_allocate_link(wl, wlvif, &wlvif->sta.hlid); @@ -446,6 +497,7 @@ int wl12xx_cmd_role_stop_sta(struct wl1271 *wl, struct wl12xx_vif *wlvif) { struct wl12xx_cmd_role_stop *cmd; int ret; + bool timeout = false; if (WARN_ON(wlvif->sta.hlid == WL12XX_INVALID_LINK_ID)) return -EINVAL; @@ -468,6 +520,17 @@ int wl12xx_cmd_role_stop_sta(struct wl1271 *wl, struct wl12xx_vif *wlvif) goto out_free; } + /* + * Sometimes the firmware doesn't send this event, so we just + * time out without failing. Queue recovery for other + * failures. 
+ */ + ret = wl1271_cmd_wait_for_event_or_timeout(wl, + ROLE_STOP_COMPLETE_EVENT_ID, + &timeout); + if (ret) + wl12xx_queue_recovery_work(wl); + wl12xx_free_link(wl, wlvif, &wlvif->sta.hlid); out_free: @@ -482,6 +545,7 @@ int wl12xx_cmd_role_start_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif) struct wl12xx_cmd_role_start *cmd; struct ieee80211_vif *vif = wl12xx_wlvif_to_vif(wlvif); struct ieee80211_bss_conf *bss_conf = &vif->bss_conf; + u32 supported_rates; int ret; wl1271_debug(DEBUG_CMD, "cmd role start ap %d", wlvif->role_id); @@ -519,6 +583,7 @@ int wl12xx_cmd_role_start_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif) /* FIXME: Change when adding DFS */ cmd->ap.reset_tsf = 1; /* By default reset AP TSF */ cmd->channel = wlvif->channel; + cmd->channel_type = wlcore_get_native_channel_type(wlvif->channel_type); if (!bss_conf->hidden_ssid) { /* take the SSID from the beacon for backward compatibility */ @@ -531,7 +596,13 @@ int wl12xx_cmd_role_start_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif) memcpy(cmd->ap.ssid, bss_conf->ssid, bss_conf->ssid_len); } - cmd->ap.local_rates = cpu_to_le32(0xffffffff); + supported_rates = CONF_TX_AP_ENABLED_RATES | CONF_TX_MCS_RATES | + wlcore_hw_ap_get_mimo_wide_rate_mask(wl, wlvif); + + wl1271_debug(DEBUG_CMD, "cmd role start ap with supported_rates 0x%08x", + supported_rates); + + cmd->ap.local_rates = cpu_to_le32(supported_rates); switch (wlvif->band) { case IEEE80211_BAND_2GHZ: @@ -797,6 +868,7 @@ out: kfree(cmd); return ret; } +EXPORT_SYMBOL_GPL(wl1271_cmd_data_path); int wl1271_cmd_ps_mode(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 ps_mode, u16 auto_ps_timeout) @@ -953,12 +1025,14 @@ out: int wl12xx_cmd_build_probe_req(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 role_id, u8 band, const u8 *ssid, size_t ssid_len, - const u8 *ie, size_t ie_len) + const u8 *ie, size_t ie_len, bool sched_scan) { struct ieee80211_vif *vif = wl12xx_wlvif_to_vif(wlvif); struct sk_buff *skb; int ret; u32 rate; + u16 template_id_2_4 = CMD_TEMPL_CFG_PROBE_REQ_2_4; + u16 template_id_5 = CMD_TEMPL_CFG_PROBE_REQ_5; skb = ieee80211_probereq_get(wl->hw, vif, ssid, ssid_len, ie, ie_len); @@ -969,14 +1043,20 @@ int wl12xx_cmd_build_probe_req(struct wl1271 *wl, struct wl12xx_vif *wlvif, wl1271_dump(DEBUG_SCAN, "PROBE REQ: ", skb->data, skb->len); + if (!sched_scan && + (wl->quirks & WLCORE_QUIRK_DUAL_PROBE_TMPL)) { + template_id_2_4 = CMD_TEMPL_APP_PROBE_REQ_2_4; + template_id_5 = CMD_TEMPL_APP_PROBE_REQ_5; + } + rate = wl1271_tx_min_rate_get(wl, wlvif->bitrate_masks[band]); if (band == IEEE80211_BAND_2GHZ) ret = wl1271_cmd_template_set(wl, role_id, - CMD_TEMPL_CFG_PROBE_REQ_2_4, + template_id_2_4, skb->data, skb->len, 0, rate); else ret = wl1271_cmd_template_set(wl, role_id, - CMD_TEMPL_CFG_PROBE_REQ_5, + template_id_5, skb->data, skb->len, 0, rate); out: @@ -1018,7 +1098,7 @@ out: int wl1271_cmd_build_arp_rsp(struct wl1271 *wl, struct wl12xx_vif *wlvif) { - int ret, extra; + int ret, extra = 0; u16 fc; struct ieee80211_vif *vif = wl12xx_wlvif_to_vif(wlvif); struct sk_buff *skb; @@ -1057,7 +1137,8 @@ int wl1271_cmd_build_arp_rsp(struct wl1271 *wl, struct wl12xx_vif *wlvif) /* encryption space */ switch (wlvif->encryption_type) { case KEY_TKIP: - extra = WL1271_EXTRA_SPACE_TKIP; + if (wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) + extra = WL1271_EXTRA_SPACE_TKIP; break; case KEY_AES: extra = WL1271_EXTRA_SPACE_AES; @@ -1346,13 +1427,18 @@ int wl12xx_cmd_add_peer(struct wl1271 *wl, struct wl12xx_vif *wlvif, for (i = 0; i < NUM_ACCESS_CATEGORIES_COPY; i++) if (sta->wme && 
(sta->uapsd_queues & BIT(i))) - cmd->psd_type[i] = WL1271_PSD_UPSD_TRIGGER; + cmd->psd_type[NUM_ACCESS_CATEGORIES_COPY-1-i] = + WL1271_PSD_UPSD_TRIGGER; else - cmd->psd_type[i] = WL1271_PSD_LEGACY; + cmd->psd_type[NUM_ACCESS_CATEGORIES_COPY-1-i] = + WL1271_PSD_LEGACY; + sta_rates = sta->supp_rates[wlvif->band]; if (sta->ht_cap.ht_supported) - sta_rates |= sta->ht_cap.mcs.rx_mask[0] << HW_HT_RATES_OFFSET; + sta_rates |= + (sta->ht_cap.mcs.rx_mask[0] << HW_HT_RATES_OFFSET) | + (sta->ht_cap.mcs.rx_mask[1] << HW_MIMO_RATES_OFFSET); cmd->supported_rates = cpu_to_le32(wl1271_tx_enabled_rates_get(wl, sta_rates, @@ -1378,6 +1464,7 @@ int wl12xx_cmd_remove_peer(struct wl1271 *wl, u8 hlid) { struct wl12xx_cmd_remove_peer *cmd; int ret; + bool timeout = false; wl1271_debug(DEBUG_CMD, "cmd remove peer %d", (int)hlid); @@ -1398,12 +1485,16 @@ int wl12xx_cmd_remove_peer(struct wl1271 *wl, u8 hlid) goto out_free; } + ret = wl1271_cmd_wait_for_event_or_timeout(wl, + PEER_REMOVE_COMPLETE_EVENT_ID, + &timeout); /* * We are ok with a timeout here. The event is sometimes not sent - * due to a firmware bug. + * due to a firmware bug. In case of another error (like SDIO timeout) + * queue a recovery. */ - wl1271_cmd_wait_for_event_or_timeout(wl, - PEER_REMOVE_COMPLETE_EVENT_ID); + if (ret) + wl12xx_queue_recovery_work(wl); out_free: kfree(cmd); @@ -1573,19 +1664,25 @@ out: int wl12xx_roc(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 role_id) { int ret = 0; + bool is_first_roc; if (WARN_ON(test_bit(role_id, wl->roc_map))) return 0; + is_first_roc = (find_first_bit(wl->roc_map, WL12XX_MAX_ROLES) >= + WL12XX_MAX_ROLES); + ret = wl12xx_cmd_roc(wl, wlvif, role_id); if (ret < 0) goto out; - ret = wl1271_cmd_wait_for_event(wl, - REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID); - if (ret < 0) { - wl1271_error("cmd roc event completion error"); - goto out; + if (is_first_roc) { + ret = wl1271_cmd_wait_for_event(wl, + REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID); + if (ret < 0) { + wl1271_error("cmd roc event completion error"); + goto out; + } } __set_bit(role_id, wl->roc_map); @@ -1714,7 +1811,9 @@ int wl12xx_stop_dev(struct wl1271 *wl, struct wl12xx_vif *wlvif) return -EINVAL; /* flush all pending packets */ - wl1271_tx_work_locked(wl); + ret = wlcore_tx_work_locked(wl); + if (ret < 0) + goto out; if (test_bit(wlvif->dev_role_id, wl->roc_map)) { ret = wl12xx_croc(wl, wlvif->dev_role_id); diff --git a/drivers/net/wireless/ti/wlcore/cmd.h b/drivers/net/wireless/ti/wlcore/cmd.h index a46ae07cb77e..4ef0b095f0d6 100644 --- a/drivers/net/wireless/ti/wlcore/cmd.h +++ b/drivers/net/wireless/ti/wlcore/cmd.h @@ -58,7 +58,7 @@ int wl1271_cmd_build_ps_poll(struct wl1271 *wl, struct wl12xx_vif *wlvif, int wl12xx_cmd_build_probe_req(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 role_id, u8 band, const u8 *ssid, size_t ssid_len, - const u8 *ie, size_t ie_len); + const u8 *ie, size_t ie_len, bool sched_scan); struct sk_buff *wl1271_cmd_build_ap_probe_req(struct wl1271 *wl, struct wl12xx_vif *wlvif, struct sk_buff *skb); @@ -172,8 +172,8 @@ enum cmd_templ { CMD_TEMPL_PS_POLL, CMD_TEMPL_KLV, CMD_TEMPL_DISCONNECT, - CMD_TEMPL_PROBE_REQ_2_4, /* for firmware internal use only */ - CMD_TEMPL_PROBE_REQ_5, /* for firmware internal use only */ + CMD_TEMPL_APP_PROBE_REQ_2_4, + CMD_TEMPL_APP_PROBE_REQ_5, CMD_TEMPL_BAR, /* for firmware internal use only */ CMD_TEMPL_CTS, /* * For CTS-to-self (FastCTS) mechanism @@ -192,7 +192,7 @@ enum cmd_templ { #define WL1271_COMMAND_TIMEOUT 2000 #define WL1271_CMD_TEMPL_DFLT_SIZE 252 #define WL1271_CMD_TEMPL_MAX_SIZE 512 
-#define WL1271_EVENT_TIMEOUT 750 +#define WL1271_EVENT_TIMEOUT 1500 struct wl1271_cmd_header { __le16 id; @@ -266,13 +266,22 @@ enum wlcore_band { WLCORE_BAND_MAX_RADIO = 0x7F, }; +enum wlcore_channel_type { + WLCORE_CHAN_NO_HT, + WLCORE_CHAN_HT20, + WLCORE_CHAN_HT40MINUS, + WLCORE_CHAN_HT40PLUS +}; + struct wl12xx_cmd_role_start { struct wl1271_cmd_header header; u8 role_id; u8 band; u8 channel; - u8 padding; + + /* enum wlcore_channel_type */ + u8 channel_type; union { struct { @@ -643,4 +652,25 @@ struct wl12xx_cmd_stop_channel_switch { struct wl1271_cmd_header header; } __packed; +/* Used to check radio status after calibration */ +#define MAX_TLV_LENGTH 500 +#define TEST_CMD_P2G_CAL 2 /* TX BiP */ + +struct wl1271_cmd_cal_p2g { + struct wl1271_cmd_header header; + + struct wl1271_cmd_test_header test; + + __le32 ver; + __le16 len; + u8 buf[MAX_TLV_LENGTH]; + u8 type; + u8 padding; + + __le16 radio_status; + + u8 sub_band_mask; + u8 padding2; +} __packed; + #endif /* __WL1271_CMD_H__ */ diff --git a/drivers/net/wireless/ti/wlcore/conf.h b/drivers/net/wireless/ti/wlcore/conf.h index fef0db4213bc..d77224f2ac6b 100644 --- a/drivers/net/wireless/ti/wlcore/conf.h +++ b/drivers/net/wireless/ti/wlcore/conf.h @@ -45,7 +45,15 @@ enum { CONF_HW_BIT_RATE_MCS_4 = BIT(17), CONF_HW_BIT_RATE_MCS_5 = BIT(18), CONF_HW_BIT_RATE_MCS_6 = BIT(19), - CONF_HW_BIT_RATE_MCS_7 = BIT(20) + CONF_HW_BIT_RATE_MCS_7 = BIT(20), + CONF_HW_BIT_RATE_MCS_8 = BIT(21), + CONF_HW_BIT_RATE_MCS_9 = BIT(22), + CONF_HW_BIT_RATE_MCS_10 = BIT(23), + CONF_HW_BIT_RATE_MCS_11 = BIT(24), + CONF_HW_BIT_RATE_MCS_12 = BIT(25), + CONF_HW_BIT_RATE_MCS_13 = BIT(26), + CONF_HW_BIT_RATE_MCS_14 = BIT(27), + CONF_HW_BIT_RATE_MCS_15 = BIT(28), }; enum { @@ -310,7 +318,7 @@ enum { struct conf_sg_settings { u32 params[CONF_SG_PARAMS_MAX]; u8 state; -}; +} __packed; enum conf_rx_queue_type { CONF_RX_QUEUE_TYPE_LOW_PRIORITY, /* All except the high priority */ @@ -394,7 +402,7 @@ struct conf_rx_settings { * Range: RX_QUEUE_TYPE_RX_LOW_PRIORITY, RX_QUEUE_TYPE_RX_HIGH_PRIORITY, */ u8 queue_type; -}; +} __packed; #define CONF_TX_MAX_RATE_CLASSES 10 @@ -435,6 +443,12 @@ struct conf_rx_settings { CONF_HW_BIT_RATE_MCS_5 | CONF_HW_BIT_RATE_MCS_6 | \ CONF_HW_BIT_RATE_MCS_7) +#define CONF_TX_MIMO_RATES (CONF_HW_BIT_RATE_MCS_8 | \ + CONF_HW_BIT_RATE_MCS_9 | CONF_HW_BIT_RATE_MCS_10 | \ + CONF_HW_BIT_RATE_MCS_11 | CONF_HW_BIT_RATE_MCS_12 | \ + CONF_HW_BIT_RATE_MCS_13 | CONF_HW_BIT_RATE_MCS_14 | \ + CONF_HW_BIT_RATE_MCS_15) + /* * Default rates for management traffic when operating in AP mode. This * should be configured according to the basic rate set of the AP @@ -487,7 +501,7 @@ struct conf_tx_rate_class { * the policy (0 - long preamble, 1 - short preamble. 
*/ u8 aflags; -}; +} __packed; #define CONF_TX_MAX_AC_COUNT 4 @@ -504,7 +518,7 @@ enum conf_tx_ac { CONF_TX_AC_VI = 2, /* video */ CONF_TX_AC_VO = 3, /* voice */ CONF_TX_AC_CTS2SELF = 4, /* fictitious AC, follows AC_VO */ - CONF_TX_AC_ANY_TID = 0x1f + CONF_TX_AC_ANY_TID = 0xff }; struct conf_tx_ac_category { @@ -544,7 +558,7 @@ struct conf_tx_ac_category { * Range: u16 */ u16 tx_op_limit; -}; +} __packed; #define CONF_TX_MAX_TID_COUNT 8 @@ -578,7 +592,7 @@ struct conf_tx_tid { u8 ps_scheme; u8 ack_policy; u32 apsd_conf[2]; -}; +} __packed; struct conf_tx_settings { /* @@ -664,7 +678,7 @@ struct conf_tx_settings { /* Time in ms for Tx watchdog timer to expire */ u32 tx_watchdog_timeout; -}; +} __packed; enum { CONF_WAKE_UP_EVENT_BEACON = 0x01, /* Wake on every Beacon*/ @@ -711,7 +725,7 @@ struct conf_bcn_filt_rule { * Version for the vendor specifie IE (221) */ u8 version[CONF_BCN_IE_VER_LEN]; -}; +} __packed; #define CONF_MAX_RSSI_SNR_TRIGGERS 8 @@ -762,7 +776,7 @@ struct conf_sig_weights { * Range: u8 */ u8 snr_pkt_avg_weight; -}; +} __packed; enum conf_bcn_filt_mode { CONF_BCN_FILT_MODE_DISABLED = 0, @@ -810,7 +824,7 @@ struct conf_conn_settings { * * Range: CONF_BCN_FILT_MODE_* */ - enum conf_bcn_filt_mode bcn_filt_mode; + u8 bcn_filt_mode; /* * Configure Beacon filter pass-thru rules. @@ -937,7 +951,13 @@ struct conf_conn_settings { * Range: u16 */ u8 max_listen_interval; -}; + + /* + * Default sleep authorization for a new STA interface. This determines + * whether we can go to ELP. + */ + u8 sta_sleep_auth; +} __packed; enum { CONF_REF_CLK_19_2_E, @@ -965,6 +985,11 @@ struct conf_itrim_settings { /* moderation timeout in microsecs from the last TX */ u32 timeout; +} __packed; + +enum conf_fast_wakeup { + CONF_FAST_WAKEUP_ENABLE, + CONF_FAST_WAKEUP_DISABLE, }; struct conf_pm_config_settings { @@ -978,10 +1003,10 @@ struct conf_pm_config_settings { /* * Host fast wakeup support * - * Range: true, false + * Range: enum conf_fast_wakeup */ - bool host_fast_wakeup_support; -}; + u8 host_fast_wakeup_support; +} __packed; struct conf_roam_trigger_settings { /* @@ -1018,7 +1043,7 @@ struct conf_roam_trigger_settings { * Range: 0 - 255 */ u8 avg_weight_snr_data; -}; +} __packed; struct conf_scan_settings { /* @@ -1064,7 +1089,7 @@ struct conf_scan_settings { * Range: u32 Microsecs */ u32 split_scan_timeout; -}; +} __packed; struct conf_sched_scan_settings { /* @@ -1102,7 +1127,7 @@ struct conf_sched_scan_settings { /* SNR threshold to be used for filtering */ s8 snr_threshold; -}; +} __packed; struct conf_ht_setting { u8 rx_ba_win_size; @@ -1111,7 +1136,7 @@ struct conf_ht_setting { /* bitmap of enabled TIDs for TX BA sessions */ u8 tx_ba_tid_bitmap; -}; +} __packed; struct conf_memory_settings { /* Number of stations supported in IBSS mode */ @@ -1151,7 +1176,7 @@ struct conf_memory_settings { * Range: 0-120 */ u8 tx_min; -}; +} __packed; struct conf_fm_coex { u8 enable; @@ -1164,7 +1189,7 @@ struct conf_fm_coex { u16 ldo_stabilization_time; u8 fm_disturbed_band_margin; u8 swallow_clk_diff; -}; +} __packed; struct conf_rx_streaming_settings { /* @@ -1193,7 +1218,7 @@ struct conf_rx_streaming_settings { * enable rx streaming also when there is no coex activity */ u8 always; -}; +} __packed; struct conf_fwlog { /* Continuous or on-demand */ @@ -1217,7 +1242,7 @@ struct conf_fwlog { /* Regulates the frequency of log messages */ u8 threshold; -}; +} __packed; #define ACX_RATE_MGMT_NUM_OF_RATES 13 struct conf_rate_policy_settings { @@ -1236,7 +1261,7 @@ struct conf_rate_policy_settings { u8 
rate_check_up; u8 rate_check_down; u8 rate_retry_policy[ACX_RATE_MGMT_NUM_OF_RATES]; -}; +} __packed; struct conf_hangover_settings { u32 recover_time; @@ -1250,7 +1275,23 @@ struct conf_hangover_settings { u8 quiet_time; u8 increase_time; u8 window_size; -}; +} __packed; + +/* + * The conf version consists of 4 bytes. The two MSB are the wlcore + * version, the two LSB are the lower driver's private conf + * version. + */ +#define WLCORE_CONF_VERSION (0x0002 << 16) +#define WLCORE_CONF_MASK 0xffff0000 +#define WLCORE_CONF_SIZE (sizeof(struct wlcore_conf_header) + \ + sizeof(struct wlcore_conf)) + +struct wlcore_conf_header { + __le32 magic; + __le32 version; + __le32 checksum; +} __packed; struct wlcore_conf { struct conf_sg_settings sg; @@ -1269,6 +1310,12 @@ struct wlcore_conf { struct conf_fwlog fwlog; struct conf_rate_policy_settings rate; struct conf_hangover_settings hangover; -}; +} __packed; + +struct wlcore_conf_file { + struct wlcore_conf_header header; + struct wlcore_conf core; + u8 priv[0]; +} __packed; #endif diff --git a/drivers/net/wireless/ti/wlcore/debugfs.c b/drivers/net/wireless/ti/wlcore/debugfs.c index d5aea1ff5ad1..80dbc5304fac 100644 --- a/drivers/net/wireless/ti/wlcore/debugfs.c +++ b/drivers/net/wireless/ti/wlcore/debugfs.c @@ -25,6 +25,7 @@ #include <linux/skbuff.h> #include <linux/slab.h> +#include <linux/module.h> #include "wlcore.h" #include "debug.h" @@ -32,14 +33,16 @@ #include "ps.h" #include "io.h" #include "tx.h" +#include "hw_ops.h" /* ms */ #define WL1271_DEBUGFS_STATS_LIFETIME 1000 +#define WLCORE_MAX_BLOCK_SIZE ((size_t)(4*PAGE_SIZE)) + /* debugfs macros idea from mac80211 */ -#define DEBUGFS_FORMAT_BUFFER_SIZE 100 -static int wl1271_format_buffer(char __user *userbuf, size_t count, - loff_t *ppos, char *fmt, ...) +int wl1271_format_buffer(char __user *userbuf, size_t count, + loff_t *ppos, char *fmt, ...) { va_list args; char buf[DEBUGFS_FORMAT_BUFFER_SIZE]; @@ -51,59 +54,9 @@ static int wl1271_format_buffer(char __user *userbuf, size_t count, return simple_read_from_buffer(userbuf, count, ppos, buf, res); } +EXPORT_SYMBOL_GPL(wl1271_format_buffer); -#define DEBUGFS_READONLY_FILE(name, fmt, value...) 
\ -static ssize_t name## _read(struct file *file, char __user *userbuf, \ - size_t count, loff_t *ppos) \ -{ \ - struct wl1271 *wl = file->private_data; \ - return wl1271_format_buffer(userbuf, count, ppos, \ - fmt "\n", ##value); \ -} \ - \ -static const struct file_operations name## _ops = { \ - .read = name## _read, \ - .open = simple_open, \ - .llseek = generic_file_llseek, \ -}; - -#define DEBUGFS_ADD(name, parent) \ - entry = debugfs_create_file(#name, 0400, parent, \ - wl, &name## _ops); \ - if (!entry || IS_ERR(entry)) \ - goto err; \ - -#define DEBUGFS_ADD_PREFIX(prefix, name, parent) \ - do { \ - entry = debugfs_create_file(#name, 0400, parent, \ - wl, &prefix## _## name## _ops); \ - if (!entry || IS_ERR(entry)) \ - goto err; \ - } while (0); - -#define DEBUGFS_FWSTATS_FILE(sub, name, fmt) \ -static ssize_t sub## _ ##name## _read(struct file *file, \ - char __user *userbuf, \ - size_t count, loff_t *ppos) \ -{ \ - struct wl1271 *wl = file->private_data; \ - \ - wl1271_debugfs_update_stats(wl); \ - \ - return wl1271_format_buffer(userbuf, count, ppos, fmt "\n", \ - wl->stats.fw_stats->sub.name); \ -} \ - \ -static const struct file_operations sub## _ ##name## _ops = { \ - .read = sub## _ ##name## _read, \ - .open = simple_open, \ - .llseek = generic_file_llseek, \ -}; - -#define DEBUGFS_FWSTATS_ADD(sub, name) \ - DEBUGFS_ADD(sub## _ ##name, stats) - -static void wl1271_debugfs_update_stats(struct wl1271 *wl) +void wl1271_debugfs_update_stats(struct wl1271 *wl) { int ret; @@ -125,97 +78,7 @@ static void wl1271_debugfs_update_stats(struct wl1271 *wl) out: mutex_unlock(&wl->mutex); } - -DEBUGFS_FWSTATS_FILE(tx, internal_desc_overflow, "%u"); - -DEBUGFS_FWSTATS_FILE(rx, out_of_mem, "%u"); -DEBUGFS_FWSTATS_FILE(rx, hdr_overflow, "%u"); -DEBUGFS_FWSTATS_FILE(rx, hw_stuck, "%u"); -DEBUGFS_FWSTATS_FILE(rx, dropped, "%u"); -DEBUGFS_FWSTATS_FILE(rx, fcs_err, "%u"); -DEBUGFS_FWSTATS_FILE(rx, xfr_hint_trig, "%u"); -DEBUGFS_FWSTATS_FILE(rx, path_reset, "%u"); -DEBUGFS_FWSTATS_FILE(rx, reset_counter, "%u"); - -DEBUGFS_FWSTATS_FILE(dma, rx_requested, "%u"); -DEBUGFS_FWSTATS_FILE(dma, rx_errors, "%u"); -DEBUGFS_FWSTATS_FILE(dma, tx_requested, "%u"); -DEBUGFS_FWSTATS_FILE(dma, tx_errors, "%u"); - -DEBUGFS_FWSTATS_FILE(isr, cmd_cmplt, "%u"); -DEBUGFS_FWSTATS_FILE(isr, fiqs, "%u"); -DEBUGFS_FWSTATS_FILE(isr, rx_headers, "%u"); -DEBUGFS_FWSTATS_FILE(isr, rx_mem_overflow, "%u"); -DEBUGFS_FWSTATS_FILE(isr, rx_rdys, "%u"); -DEBUGFS_FWSTATS_FILE(isr, irqs, "%u"); -DEBUGFS_FWSTATS_FILE(isr, tx_procs, "%u"); -DEBUGFS_FWSTATS_FILE(isr, decrypt_done, "%u"); -DEBUGFS_FWSTATS_FILE(isr, dma0_done, "%u"); -DEBUGFS_FWSTATS_FILE(isr, dma1_done, "%u"); -DEBUGFS_FWSTATS_FILE(isr, tx_exch_complete, "%u"); -DEBUGFS_FWSTATS_FILE(isr, commands, "%u"); -DEBUGFS_FWSTATS_FILE(isr, rx_procs, "%u"); -DEBUGFS_FWSTATS_FILE(isr, hw_pm_mode_changes, "%u"); -DEBUGFS_FWSTATS_FILE(isr, host_acknowledges, "%u"); -DEBUGFS_FWSTATS_FILE(isr, pci_pm, "%u"); -DEBUGFS_FWSTATS_FILE(isr, wakeups, "%u"); -DEBUGFS_FWSTATS_FILE(isr, low_rssi, "%u"); - -DEBUGFS_FWSTATS_FILE(wep, addr_key_count, "%u"); -DEBUGFS_FWSTATS_FILE(wep, default_key_count, "%u"); -/* skipping wep.reserved */ -DEBUGFS_FWSTATS_FILE(wep, key_not_found, "%u"); -DEBUGFS_FWSTATS_FILE(wep, decrypt_fail, "%u"); -DEBUGFS_FWSTATS_FILE(wep, packets, "%u"); -DEBUGFS_FWSTATS_FILE(wep, interrupt, "%u"); - -DEBUGFS_FWSTATS_FILE(pwr, ps_enter, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, elp_enter, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, missing_bcns, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, wake_on_host, 
"%u"); -DEBUGFS_FWSTATS_FILE(pwr, wake_on_timer_exp, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, tx_with_ps, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, tx_without_ps, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, rcvd_beacons, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, power_save_off, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, enable_ps, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, disable_ps, "%u"); -DEBUGFS_FWSTATS_FILE(pwr, fix_tsf_ps, "%u"); -/* skipping cont_miss_bcns_spread for now */ -DEBUGFS_FWSTATS_FILE(pwr, rcvd_awake_beacons, "%u"); - -DEBUGFS_FWSTATS_FILE(mic, rx_pkts, "%u"); -DEBUGFS_FWSTATS_FILE(mic, calc_failure, "%u"); - -DEBUGFS_FWSTATS_FILE(aes, encrypt_fail, "%u"); -DEBUGFS_FWSTATS_FILE(aes, decrypt_fail, "%u"); -DEBUGFS_FWSTATS_FILE(aes, encrypt_packets, "%u"); -DEBUGFS_FWSTATS_FILE(aes, decrypt_packets, "%u"); -DEBUGFS_FWSTATS_FILE(aes, encrypt_interrupt, "%u"); -DEBUGFS_FWSTATS_FILE(aes, decrypt_interrupt, "%u"); - -DEBUGFS_FWSTATS_FILE(event, heart_beat, "%u"); -DEBUGFS_FWSTATS_FILE(event, calibration, "%u"); -DEBUGFS_FWSTATS_FILE(event, rx_mismatch, "%u"); -DEBUGFS_FWSTATS_FILE(event, rx_mem_empty, "%u"); -DEBUGFS_FWSTATS_FILE(event, rx_pool, "%u"); -DEBUGFS_FWSTATS_FILE(event, oom_late, "%u"); -DEBUGFS_FWSTATS_FILE(event, phy_transmit_error, "%u"); -DEBUGFS_FWSTATS_FILE(event, tx_stuck, "%u"); - -DEBUGFS_FWSTATS_FILE(ps, pspoll_timeouts, "%u"); -DEBUGFS_FWSTATS_FILE(ps, upsd_timeouts, "%u"); -DEBUGFS_FWSTATS_FILE(ps, upsd_max_sptime, "%u"); -DEBUGFS_FWSTATS_FILE(ps, upsd_max_apturn, "%u"); -DEBUGFS_FWSTATS_FILE(ps, pspoll_max_apturn, "%u"); -DEBUGFS_FWSTATS_FILE(ps, pspoll_utilization, "%u"); -DEBUGFS_FWSTATS_FILE(ps, upsd_utilization, "%u"); - -DEBUGFS_FWSTATS_FILE(rxpipe, rx_prep_beacon_drop, "%u"); -DEBUGFS_FWSTATS_FILE(rxpipe, descr_host_int_trig_rx_data, "%u"); -DEBUGFS_FWSTATS_FILE(rxpipe, beacon_buffer_thres_host_int_trig_rx_data, "%u"); -DEBUGFS_FWSTATS_FILE(rxpipe, missed_beacon_host_int_trig_rx_data, "%u"); -DEBUGFS_FWSTATS_FILE(rxpipe, tx_xfr_host_int_trig_rx_data, "%u"); +EXPORT_SYMBOL_GPL(wl1271_debugfs_update_stats); DEBUGFS_READONLY_FILE(retry_count, "%u", wl->stats.retry_count); DEBUGFS_READONLY_FILE(excessive_retries, "%u", @@ -241,6 +104,89 @@ static const struct file_operations tx_queue_len_ops = { .llseek = default_llseek, }; +static void chip_op_handler(struct wl1271 *wl, unsigned long value, + void *arg) +{ + int ret; + int (*chip_op) (struct wl1271 *wl); + + if (!arg) { + wl1271_warning("debugfs chip_op_handler with no callback"); + return; + } + + ret = wl1271_ps_elp_wakeup(wl); + if (ret < 0) + return; + + chip_op = arg; + chip_op(wl); + + wl1271_ps_elp_sleep(wl); +} + + +static inline void no_write_handler(struct wl1271 *wl, + unsigned long value, + unsigned long param) +{ +} + +#define WL12XX_CONF_DEBUGFS(param, conf_sub_struct, \ + min_val, max_val, write_handler_locked, \ + write_handler_arg) \ + static ssize_t param##_read(struct file *file, \ + char __user *user_buf, \ + size_t count, loff_t *ppos) \ + { \ + struct wl1271 *wl = file->private_data; \ + return wl1271_format_buffer(user_buf, count, \ + ppos, "%d\n", \ + wl->conf.conf_sub_struct.param); \ + } \ + \ + static ssize_t param##_write(struct file *file, \ + const char __user *user_buf, \ + size_t count, loff_t *ppos) \ + { \ + struct wl1271 *wl = file->private_data; \ + unsigned long value; \ + int ret; \ + \ + ret = kstrtoul_from_user(user_buf, count, 10, &value); \ + if (ret < 0) { \ + wl1271_warning("illegal value for " #param); \ + return -EINVAL; \ + } \ + \ + if (value < min_val || value > max_val) { \ + wl1271_warning(#param " is 
not in valid range"); \ + return -ERANGE; \ + } \ + \ + mutex_lock(&wl->mutex); \ + wl->conf.conf_sub_struct.param = value; \ + \ + write_handler_locked(wl, value, write_handler_arg); \ + \ + mutex_unlock(&wl->mutex); \ + return count; \ + } \ + \ + static const struct file_operations param##_ops = { \ + .read = param##_read, \ + .write = param##_write, \ + .open = simple_open, \ + .llseek = default_llseek, \ + }; + +WL12XX_CONF_DEBUGFS(irq_pkt_threshold, rx, 0, 65535, + chip_op_handler, wl1271_acx_init_rx_interrupt) +WL12XX_CONF_DEBUGFS(irq_blk_threshold, rx, 0, 65535, + chip_op_handler, wl1271_acx_init_rx_interrupt) +WL12XX_CONF_DEBUGFS(irq_timeout, rx, 0, 100, + chip_op_handler, wl1271_acx_init_rx_interrupt) + static ssize_t gpio_power_read(struct file *file, char __user *user_buf, size_t count, loff_t *ppos) { @@ -535,8 +481,7 @@ static ssize_t driver_state_read(struct file *file, char __user *user_buf, DRIVER_STATE_PRINT_LHEX(ap_ps_map); DRIVER_STATE_PRINT_HEX(quirks); DRIVER_STATE_PRINT_HEX(irq); - DRIVER_STATE_PRINT_HEX(ref_clock); - DRIVER_STATE_PRINT_HEX(tcxo_clock); + /* TODO: ref_clock and tcxo_clock were moved to wl12xx priv */ DRIVER_STATE_PRINT_HEX(hw_pg_ver); DRIVER_STATE_PRINT_HEX(platform_quirks); DRIVER_STATE_PRINT_HEX(chip.id); @@ -647,7 +592,6 @@ static ssize_t vifs_state_read(struct file *file, char __user *user_buf, VIF_STATE_PRINT_INT(last_rssi_event); VIF_STATE_PRINT_INT(ba_support); VIF_STATE_PRINT_INT(ba_allowed); - VIF_STATE_PRINT_INT(is_gem); VIF_STATE_PRINT_LLHEX(tx_security_seq); VIF_STATE_PRINT_INT(tx_security_last_seq_lsb); } @@ -1002,108 +946,281 @@ static const struct file_operations beacon_filtering_ops = { .llseek = default_llseek, }; -static int wl1271_debugfs_add_files(struct wl1271 *wl, - struct dentry *rootdir) +static ssize_t fw_stats_raw_read(struct file *file, + char __user *userbuf, + size_t count, loff_t *ppos) { - int ret = 0; - struct dentry *entry, *stats, *streaming; + struct wl1271 *wl = file->private_data; - stats = debugfs_create_dir("fw-statistics", rootdir); - if (!stats || IS_ERR(stats)) { - entry = stats; - goto err; + wl1271_debugfs_update_stats(wl); + + return simple_read_from_buffer(userbuf, count, ppos, + wl->stats.fw_stats, + wl->stats.fw_stats_len); +} + +static const struct file_operations fw_stats_raw_ops = { + .read = fw_stats_raw_read, + .open = simple_open, + .llseek = default_llseek, +}; + +static ssize_t sleep_auth_read(struct file *file, char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + + return wl1271_format_buffer(user_buf, count, + ppos, "%d\n", + wl->sleep_auth); +} + +static ssize_t sleep_auth_write(struct file *file, + const char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + unsigned long value; + int ret; + + ret = kstrtoul_from_user(user_buf, count, 0, &value); + if (ret < 0) { + wl1271_warning("illegal value in sleep_auth"); + return -EINVAL; + } + + if (value < 0 || value > WL1271_PSM_MAX) { + wl1271_warning("sleep_auth must be between 0 and %d", + WL1271_PSM_MAX); + return -ERANGE; + } + + mutex_lock(&wl->mutex); + + wl->conf.conn.sta_sleep_auth = value; + + if (wl->state == WL1271_STATE_OFF) { + /* this will show up on "read" in case we are off */ + wl->sleep_auth = value; + goto out; } - DEBUGFS_FWSTATS_ADD(tx, internal_desc_overflow); - - DEBUGFS_FWSTATS_ADD(rx, out_of_mem); - DEBUGFS_FWSTATS_ADD(rx, hdr_overflow); - DEBUGFS_FWSTATS_ADD(rx, hw_stuck); - DEBUGFS_FWSTATS_ADD(rx, dropped); - 
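/*
 * WL12XX_CONF_DEBUGFS() above generates a read/write debugfs file that
 * updates one field of wl->conf under the mutex and then runs an optional
 * handler (chip_op_handler wakes the chip and calls the given ACX function).
 * Adding another bounded tunable is two lines; the example below is purely
 * illustrative and is not part of the patch - it would expose
 * conf.tx.tx_watchdog_timeout with no chip-side action on write.
 */
WL12XX_CONF_DEBUGFS(tx_watchdog_timeout, tx, 1, 60000,
		    no_write_handler, 0)

/* ...plus the matching registration inside wl1271_debugfs_add_files(): */
	DEBUGFS_ADD(tx_watchdog_timeout, rootdir);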
DEBUGFS_FWSTATS_ADD(rx, fcs_err); - DEBUGFS_FWSTATS_ADD(rx, xfr_hint_trig); - DEBUGFS_FWSTATS_ADD(rx, path_reset); - DEBUGFS_FWSTATS_ADD(rx, reset_counter); - - DEBUGFS_FWSTATS_ADD(dma, rx_requested); - DEBUGFS_FWSTATS_ADD(dma, rx_errors); - DEBUGFS_FWSTATS_ADD(dma, tx_requested); - DEBUGFS_FWSTATS_ADD(dma, tx_errors); - - DEBUGFS_FWSTATS_ADD(isr, cmd_cmplt); - DEBUGFS_FWSTATS_ADD(isr, fiqs); - DEBUGFS_FWSTATS_ADD(isr, rx_headers); - DEBUGFS_FWSTATS_ADD(isr, rx_mem_overflow); - DEBUGFS_FWSTATS_ADD(isr, rx_rdys); - DEBUGFS_FWSTATS_ADD(isr, irqs); - DEBUGFS_FWSTATS_ADD(isr, tx_procs); - DEBUGFS_FWSTATS_ADD(isr, decrypt_done); - DEBUGFS_FWSTATS_ADD(isr, dma0_done); - DEBUGFS_FWSTATS_ADD(isr, dma1_done); - DEBUGFS_FWSTATS_ADD(isr, tx_exch_complete); - DEBUGFS_FWSTATS_ADD(isr, commands); - DEBUGFS_FWSTATS_ADD(isr, rx_procs); - DEBUGFS_FWSTATS_ADD(isr, hw_pm_mode_changes); - DEBUGFS_FWSTATS_ADD(isr, host_acknowledges); - DEBUGFS_FWSTATS_ADD(isr, pci_pm); - DEBUGFS_FWSTATS_ADD(isr, wakeups); - DEBUGFS_FWSTATS_ADD(isr, low_rssi); - - DEBUGFS_FWSTATS_ADD(wep, addr_key_count); - DEBUGFS_FWSTATS_ADD(wep, default_key_count); - /* skipping wep.reserved */ - DEBUGFS_FWSTATS_ADD(wep, key_not_found); - DEBUGFS_FWSTATS_ADD(wep, decrypt_fail); - DEBUGFS_FWSTATS_ADD(wep, packets); - DEBUGFS_FWSTATS_ADD(wep, interrupt); - - DEBUGFS_FWSTATS_ADD(pwr, ps_enter); - DEBUGFS_FWSTATS_ADD(pwr, elp_enter); - DEBUGFS_FWSTATS_ADD(pwr, missing_bcns); - DEBUGFS_FWSTATS_ADD(pwr, wake_on_host); - DEBUGFS_FWSTATS_ADD(pwr, wake_on_timer_exp); - DEBUGFS_FWSTATS_ADD(pwr, tx_with_ps); - DEBUGFS_FWSTATS_ADD(pwr, tx_without_ps); - DEBUGFS_FWSTATS_ADD(pwr, rcvd_beacons); - DEBUGFS_FWSTATS_ADD(pwr, power_save_off); - DEBUGFS_FWSTATS_ADD(pwr, enable_ps); - DEBUGFS_FWSTATS_ADD(pwr, disable_ps); - DEBUGFS_FWSTATS_ADD(pwr, fix_tsf_ps); - /* skipping cont_miss_bcns_spread for now */ - DEBUGFS_FWSTATS_ADD(pwr, rcvd_awake_beacons); - - DEBUGFS_FWSTATS_ADD(mic, rx_pkts); - DEBUGFS_FWSTATS_ADD(mic, calc_failure); - - DEBUGFS_FWSTATS_ADD(aes, encrypt_fail); - DEBUGFS_FWSTATS_ADD(aes, decrypt_fail); - DEBUGFS_FWSTATS_ADD(aes, encrypt_packets); - DEBUGFS_FWSTATS_ADD(aes, decrypt_packets); - DEBUGFS_FWSTATS_ADD(aes, encrypt_interrupt); - DEBUGFS_FWSTATS_ADD(aes, decrypt_interrupt); - - DEBUGFS_FWSTATS_ADD(event, heart_beat); - DEBUGFS_FWSTATS_ADD(event, calibration); - DEBUGFS_FWSTATS_ADD(event, rx_mismatch); - DEBUGFS_FWSTATS_ADD(event, rx_mem_empty); - DEBUGFS_FWSTATS_ADD(event, rx_pool); - DEBUGFS_FWSTATS_ADD(event, oom_late); - DEBUGFS_FWSTATS_ADD(event, phy_transmit_error); - DEBUGFS_FWSTATS_ADD(event, tx_stuck); - - DEBUGFS_FWSTATS_ADD(ps, pspoll_timeouts); - DEBUGFS_FWSTATS_ADD(ps, upsd_timeouts); - DEBUGFS_FWSTATS_ADD(ps, upsd_max_sptime); - DEBUGFS_FWSTATS_ADD(ps, upsd_max_apturn); - DEBUGFS_FWSTATS_ADD(ps, pspoll_max_apturn); - DEBUGFS_FWSTATS_ADD(ps, pspoll_utilization); - DEBUGFS_FWSTATS_ADD(ps, upsd_utilization); - - DEBUGFS_FWSTATS_ADD(rxpipe, rx_prep_beacon_drop); - DEBUGFS_FWSTATS_ADD(rxpipe, descr_host_int_trig_rx_data); - DEBUGFS_FWSTATS_ADD(rxpipe, beacon_buffer_thres_host_int_trig_rx_data); - DEBUGFS_FWSTATS_ADD(rxpipe, missed_beacon_host_int_trig_rx_data); - DEBUGFS_FWSTATS_ADD(rxpipe, tx_xfr_host_int_trig_rx_data); + ret = wl1271_ps_elp_wakeup(wl); + if (ret < 0) + goto out; + + ret = wl1271_acx_sleep_auth(wl, value); + if (ret < 0) + goto out_sleep; + +out_sleep: + wl1271_ps_elp_sleep(wl); +out: + mutex_unlock(&wl->mutex); + return count; +} + +static const struct file_operations sleep_auth_ops = { + .read = 
sleep_auth_read, + .write = sleep_auth_write, + .open = simple_open, + .llseek = default_llseek, +}; + +static ssize_t dev_mem_read(struct file *file, + char __user *user_buf, size_t count, + loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + struct wlcore_partition_set part, old_part; + size_t bytes = count; + int ret; + char *buf; + + /* only requests of dword-aligned size and offset are supported */ + if (bytes % 4) + return -EINVAL; + + if (*ppos % 4) + return -EINVAL; + + /* function should return in reasonable time */ + bytes = min(bytes, WLCORE_MAX_BLOCK_SIZE); + + if (bytes == 0) + return -EINVAL; + + memset(&part, 0, sizeof(part)); + part.mem.start = file->f_pos; + part.mem.size = bytes; + + buf = kmalloc(bytes, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + mutex_lock(&wl->mutex); + + if (wl->state == WL1271_STATE_OFF) { + ret = -EFAULT; + goto skip_read; + } + + ret = wl1271_ps_elp_wakeup(wl); + if (ret < 0) + goto skip_read; + + /* store current partition and switch partition */ + memcpy(&old_part, &wl->curr_part, sizeof(old_part)); + ret = wlcore_set_partition(wl, &part); + if (ret < 0) + goto part_err; + + ret = wlcore_raw_read(wl, 0, buf, bytes, false); + if (ret < 0) + goto read_err; + +read_err: + /* recover partition */ + ret = wlcore_set_partition(wl, &old_part); + if (ret < 0) + goto part_err; + +part_err: + wl1271_ps_elp_sleep(wl); + +skip_read: + mutex_unlock(&wl->mutex); + + if (ret == 0) { + ret = copy_to_user(user_buf, buf, bytes); + if (ret < bytes) { + bytes -= ret; + *ppos += bytes; + ret = 0; + } else { + ret = -EFAULT; + } + } + + kfree(buf); + + return ((ret == 0) ? bytes : ret); +} + +static ssize_t dev_mem_write(struct file *file, const char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct wl1271 *wl = file->private_data; + struct wlcore_partition_set part, old_part; + size_t bytes = count; + int ret; + char *buf; + + /* only requests of dword-aligned size and offset are supported */ + if (bytes % 4) + return -EINVAL; + + if (*ppos % 4) + return -EINVAL; + + /* function should return in reasonable time */ + bytes = min(bytes, WLCORE_MAX_BLOCK_SIZE); + + if (bytes == 0) + return -EINVAL; + + memset(&part, 0, sizeof(part)); + part.mem.start = file->f_pos; + part.mem.size = bytes; + + buf = kmalloc(bytes, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + ret = copy_from_user(buf, user_buf, bytes); + if (ret) { + ret = -EFAULT; + goto err_out; + } + + mutex_lock(&wl->mutex); + + if (wl->state == WL1271_STATE_OFF) { + ret = -EFAULT; + goto skip_write; + } + + ret = wl1271_ps_elp_wakeup(wl); + if (ret < 0) + goto skip_write; + + /* store current partition and switch partition */ + memcpy(&old_part, &wl->curr_part, sizeof(old_part)); + ret = wlcore_set_partition(wl, &part); + if (ret < 0) + goto part_err; + + ret = wlcore_raw_write(wl, 0, buf, bytes, false); + if (ret < 0) + goto write_err; + +write_err: + /* recover partition */ + ret = wlcore_set_partition(wl, &old_part); + if (ret < 0) + goto part_err; + +part_err: + wl1271_ps_elp_sleep(wl); + +skip_write: + mutex_unlock(&wl->mutex); + + if (ret == 0) + *ppos += bytes; + +err_out: + kfree(buf); + + return ((ret == 0) ? 
bytes : ret); +} + +static loff_t dev_mem_seek(struct file *file, loff_t offset, int orig) +{ + loff_t ret; + + /* only requests of dword-aligned size and offset are supported */ + if (offset % 4) + return -EINVAL; + + switch (orig) { + case SEEK_SET: + file->f_pos = offset; + ret = file->f_pos; + break; + case SEEK_CUR: + file->f_pos += offset; + ret = file->f_pos; + break; + default: + ret = -EINVAL; + } + + return ret; +} + +static const struct file_operations dev_mem_ops = { + .open = simple_open, + .read = dev_mem_read, + .write = dev_mem_write, + .llseek = dev_mem_seek, +}; + +static int wl1271_debugfs_add_files(struct wl1271 *wl, + struct dentry *rootdir) +{ + int ret = 0; + struct dentry *entry, *streaming; DEBUGFS_ADD(tx_queue_len, rootdir); DEBUGFS_ADD(retry_count, rootdir); @@ -1120,6 +1237,11 @@ static int wl1271_debugfs_add_files(struct wl1271 *wl, DEBUGFS_ADD(dynamic_ps_timeout, rootdir); DEBUGFS_ADD(forced_ps, rootdir); DEBUGFS_ADD(split_scan_timeout, rootdir); + DEBUGFS_ADD(irq_pkt_threshold, rootdir); + DEBUGFS_ADD(irq_blk_threshold, rootdir); + DEBUGFS_ADD(irq_timeout, rootdir); + DEBUGFS_ADD(fw_stats_raw, rootdir); + DEBUGFS_ADD(sleep_auth, rootdir); streaming = debugfs_create_dir("rx_streaming", rootdir); if (!streaming || IS_ERR(streaming)) @@ -1128,6 +1250,7 @@ static int wl1271_debugfs_add_files(struct wl1271 *wl, DEBUGFS_ADD_PREFIX(rx_streaming, interval, streaming); DEBUGFS_ADD_PREFIX(rx_streaming, always, streaming); + DEBUGFS_ADD_PREFIX(dev, mem, rootdir); return 0; @@ -1145,7 +1268,7 @@ void wl1271_debugfs_reset(struct wl1271 *wl) if (!wl->stats.fw_stats) return; - memset(wl->stats.fw_stats, 0, sizeof(*wl->stats.fw_stats)); + memset(wl->stats.fw_stats, 0, wl->stats.fw_stats_len); wl->stats.retry_count = 0; wl->stats.excessive_retries = 0; } @@ -1160,34 +1283,34 @@ int wl1271_debugfs_init(struct wl1271 *wl) if (IS_ERR(rootdir)) { ret = PTR_ERR(rootdir); - goto err; + goto out; } - wl->stats.fw_stats = kzalloc(sizeof(*wl->stats.fw_stats), - GFP_KERNEL); - + wl->stats.fw_stats = kzalloc(wl->stats.fw_stats_len, GFP_KERNEL); if (!wl->stats.fw_stats) { ret = -ENOMEM; - goto err_fw; + goto out_remove; } wl->stats.fw_stats_update = jiffies; ret = wl1271_debugfs_add_files(wl, rootdir); + if (ret < 0) + goto out_exit; + ret = wlcore_debugfs_init(wl, rootdir); if (ret < 0) - goto err_file; + goto out_exit; - return 0; + goto out; -err_file: - kfree(wl->stats.fw_stats); - wl->stats.fw_stats = NULL; +out_exit: + wl1271_debugfs_exit(wl); -err_fw: +out_remove: debugfs_remove_recursive(rootdir); -err: +out: return ret; } diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h index a8d3aef011ff..f7381dd69009 100644 --- a/drivers/net/wireless/ti/wlcore/debugfs.h +++ b/drivers/net/wireless/ti/wlcore/debugfs.h @@ -26,8 +26,95 @@ #include "wlcore.h" +int wl1271_format_buffer(char __user *userbuf, size_t count, + loff_t *ppos, char *fmt, ...); + int wl1271_debugfs_init(struct wl1271 *wl); void wl1271_debugfs_exit(struct wl1271 *wl); void wl1271_debugfs_reset(struct wl1271 *wl); +void wl1271_debugfs_update_stats(struct wl1271 *wl); + +#define DEBUGFS_FORMAT_BUFFER_SIZE 256 + +#define DEBUGFS_READONLY_FILE(name, fmt, value...) 
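/*
 * The dev/mem file registered via DEBUGFS_ADD_PREFIX(dev, mem, rootdir) gives
 * raw, dword-aligned access to chip memory by temporarily switching the
 * partition around each transfer, capped at WLCORE_MAX_BLOCK_SIZE (4 pages)
 * per call. A minimal userspace reader might look like the sketch below; the
 * debugfs path is a placeholder, since the real location depends on where
 * debugfs is mounted and how the driver directory is named.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint32_t buf[64];	/* 256 bytes: size must be a multiple of 4 */
	/* placeholder path - adjust to the actual wlcore debugfs directory */
	int fd = open("/sys/kernel/debug/.../mem", O_RDONLY);

	if (fd < 0)
		return 1;

	/* the offset must be dword aligned as well */
	if (lseek(fd, 0x40, SEEK_SET) < 0 ||
	    read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
		close(fd);
		return 1;
	}

	printf("first word: 0x%08x\n", (unsigned)buf[0]);
	close(fd);
	return 0;
}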
\ +static ssize_t name## _read(struct file *file, char __user *userbuf, \ + size_t count, loff_t *ppos) \ +{ \ + struct wl1271 *wl = file->private_data; \ + return wl1271_format_buffer(userbuf, count, ppos, \ + fmt "\n", ##value); \ +} \ + \ +static const struct file_operations name## _ops = { \ + .read = name## _read, \ + .open = simple_open, \ + .llseek = generic_file_llseek, \ +}; + +#define DEBUGFS_ADD(name, parent) \ + do { \ + entry = debugfs_create_file(#name, 0400, parent, \ + wl, &name## _ops); \ + if (!entry || IS_ERR(entry)) \ + goto err; \ + } while (0); + + +#define DEBUGFS_ADD_PREFIX(prefix, name, parent) \ + do { \ + entry = debugfs_create_file(#name, 0400, parent, \ + wl, &prefix## _## name## _ops); \ + if (!entry || IS_ERR(entry)) \ + goto err; \ + } while (0); + +#define DEBUGFS_FWSTATS_FILE(sub, name, fmt, struct_type) \ +static ssize_t sub## _ ##name## _read(struct file *file, \ + char __user *userbuf, \ + size_t count, loff_t *ppos) \ +{ \ + struct wl1271 *wl = file->private_data; \ + struct struct_type *stats = wl->stats.fw_stats; \ + \ + wl1271_debugfs_update_stats(wl); \ + \ + return wl1271_format_buffer(userbuf, count, ppos, fmt "\n", \ + stats->sub.name); \ +} \ + \ +static const struct file_operations sub## _ ##name## _ops = { \ + .read = sub## _ ##name## _read, \ + .open = simple_open, \ + .llseek = generic_file_llseek, \ +}; + +#define DEBUGFS_FWSTATS_FILE_ARRAY(sub, name, len, struct_type) \ +static ssize_t sub## _ ##name## _read(struct file *file, \ + char __user *userbuf, \ + size_t count, loff_t *ppos) \ +{ \ + struct wl1271 *wl = file->private_data; \ + struct struct_type *stats = wl->stats.fw_stats; \ + char buf[DEBUGFS_FORMAT_BUFFER_SIZE] = ""; \ + int res, i; \ + \ + wl1271_debugfs_update_stats(wl); \ + \ + for (i = 0; i < len; i++) \ + res = snprintf(buf, sizeof(buf), "%s[%d] = %d\n", \ + buf, i, stats->sub.name[i]); \ + \ + return wl1271_format_buffer(userbuf, count, ppos, "%s", buf); \ +} \ + \ +static const struct file_operations sub## _ ##name## _ops = { \ + .read = sub## _ ##name## _read, \ + .open = simple_open, \ + .llseek = generic_file_llseek, \ +}; + +#define DEBUGFS_FWSTATS_ADD(sub, name) \ + DEBUGFS_ADD(sub## _ ##name, stats) + #endif /* WL1271_DEBUGFS_H */ diff --git a/drivers/net/wireless/ti/wlcore/event.c b/drivers/net/wireless/ti/wlcore/event.c index 28e2a633c3be..48907054d493 100644 --- a/drivers/net/wireless/ti/wlcore/event.c +++ b/drivers/net/wireless/ti/wlcore/event.c @@ -105,6 +105,7 @@ static int wl1271_event_process(struct wl1271 *wl) u32 vector; bool disconnect_sta = false; unsigned long sta_bitmap = 0; + int ret; wl1271_event_mbox_dump(mbox); @@ -148,15 +149,33 @@ static int wl1271_event_process(struct wl1271 *wl) int delay = wl->conf.conn.synch_fail_thold * wl->conf.conn.bss_lose_timeout; wl1271_info("Beacon loss detected."); - cancel_delayed_work_sync(&wl->connection_loss_work); + + /* + * if the work is already queued, it should take place. We + * don't want to delay the connection loss indication + * any more. 
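/*
 * The fw-stats macros now live in debugfs.h and take the statistics struct
 * type as a parameter, so each lower driver can describe its own firmware
 * statistics layout; DEBUGFS_FWSTATS_FILE_ARRAY additionally prints every
 * element of an array field. A lower driver would instantiate them roughly as
 * below - the struct name, field names and NUM_OF_RATES are hypothetical
 * placeholders, not taken from this patch.
 */
DEBUGFS_FWSTATS_FILE(rx, out_of_mem, "%u", wl18xx_acx_statistics);
DEBUGFS_FWSTATS_FILE_ARRAY(tx, tx_retry_per_rate,
			   NUM_OF_RATES, wl18xx_acx_statistics);

/* ...and register them from the lower driver's debugfs_init hook: */
	DEBUGFS_FWSTATS_ADD(rx, out_of_mem);
	DEBUGFS_FWSTATS_ADD(tx, tx_retry_per_rate);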
+ */ ieee80211_queue_delayed_work(wl->hw, &wl->connection_loss_work, - msecs_to_jiffies(delay)); + msecs_to_jiffies(delay)); + + wl12xx_for_each_wlvif_sta(wl, wlvif) { + vif = wl12xx_wlvif_to_vif(wlvif); + + ieee80211_cqm_rssi_notify( + vif, + NL80211_CQM_RSSI_BEACON_LOSS_EVENT, + GFP_KERNEL); + } } if (vector & REGAINED_BSS_EVENT_ID) { /* TODO: check for multi-role */ wl1271_info("Beacon regained."); - cancel_delayed_work_sync(&wl->connection_loss_work); + cancel_delayed_work(&wl->connection_loss_work); + + /* sanity check - we can't lose and gain the beacon together */ + WARN(vector & BSS_LOSE_EVENT_ID, + "Concurrent beacon loss and gain from FW"); } if (vector & RSSI_SNR_TRIGGER_0_EVENT_ID) { @@ -210,7 +229,9 @@ static int wl1271_event_process(struct wl1271 *wl) if ((vector & DUMMY_PACKET_EVENT_ID)) { wl1271_debug(DEBUG_EVENT, "DUMMY_PACKET_ID_EVENT_ID"); - wl1271_tx_dummy_packet(wl); + ret = wl1271_tx_dummy_packet(wl); + if (ret < 0) + return ret; } /* @@ -283,8 +304,10 @@ int wl1271_event_handle(struct wl1271 *wl, u8 mbox_num) return -EINVAL; /* first we read the mbox descriptor */ - wl1271_read(wl, wl->mbox_ptr[mbox_num], wl->mbox, - sizeof(*wl->mbox), false); + ret = wlcore_read(wl, wl->mbox_ptr[mbox_num], wl->mbox, + sizeof(*wl->mbox), false); + if (ret < 0) + return ret; /* process the descriptor */ ret = wl1271_event_process(wl); @@ -295,7 +318,7 @@ int wl1271_event_handle(struct wl1271 *wl, u8 mbox_num) * TODO: we just need this because one bit is in a different * place. Is there any better way? */ - wl->ops->ack_event(wl); + ret = wl->ops->ack_event(wl); - return 0; + return ret; } diff --git a/drivers/net/wireless/ti/wlcore/hw_ops.h b/drivers/net/wireless/ti/wlcore/hw_ops.h index 9384b4d56c24..2673d783ec1e 100644 --- a/drivers/net/wireless/ti/wlcore/hw_ops.h +++ b/drivers/net/wireless/ti/wlcore/hw_ops.h @@ -65,11 +65,13 @@ wlcore_hw_get_rx_buf_align(struct wl1271 *wl, u32 rx_desc) return wl->ops->get_rx_buf_align(wl, rx_desc); } -static inline void +static inline int wlcore_hw_prepare_read(struct wl1271 *wl, u32 rx_desc, u32 len) { if (wl->ops->prepare_read) - wl->ops->prepare_read(wl, rx_desc, len); + return wl->ops->prepare_read(wl, rx_desc, len); + + return 0; } static inline u32 @@ -81,10 +83,12 @@ wlcore_hw_get_rx_packet_len(struct wl1271 *wl, void *rx_data, u32 data_len) return wl->ops->get_rx_packet_len(wl, rx_data, data_len); } -static inline void wlcore_hw_tx_delayed_compl(struct wl1271 *wl) +static inline int wlcore_hw_tx_delayed_compl(struct wl1271 *wl) { if (wl->ops->tx_delayed_compl) - wl->ops->tx_delayed_compl(wl); + return wl->ops->tx_delayed_compl(wl); + + return 0; } static inline void wlcore_hw_tx_immediate_compl(struct wl1271 *wl) @@ -119,4 +123,82 @@ static inline int wlcore_identify_fw(struct wl1271 *wl) return 0; } +static inline void +wlcore_hw_set_tx_desc_csum(struct wl1271 *wl, + struct wl1271_tx_hw_descr *desc, + struct sk_buff *skb) +{ + if (!wl->ops->set_tx_desc_csum) + BUG_ON(1); + + wl->ops->set_tx_desc_csum(wl, desc, skb); +} + +static inline void +wlcore_hw_set_rx_csum(struct wl1271 *wl, + struct wl1271_rx_descriptor *desc, + struct sk_buff *skb) +{ + if (wl->ops->set_rx_csum) + wl->ops->set_rx_csum(wl, desc, skb); +} + +static inline u32 +wlcore_hw_ap_get_mimo_wide_rate_mask(struct wl1271 *wl, + struct wl12xx_vif *wlvif) +{ + if (wl->ops->ap_get_mimo_wide_rate_mask) + return wl->ops->ap_get_mimo_wide_rate_mask(wl, wlvif); + + return 0; +} + +static inline int +wlcore_debugfs_init(struct wl1271 *wl, struct dentry *rootdir) +{ + if 
(wl->ops->debugfs_init) + return wl->ops->debugfs_init(wl, rootdir); + + return 0; +} + +static inline int +wlcore_handle_static_data(struct wl1271 *wl, void *static_data) +{ + if (wl->ops->handle_static_data) + return wl->ops->handle_static_data(wl, static_data); + + return 0; +} + +static inline int +wlcore_hw_get_spare_blocks(struct wl1271 *wl, bool is_gem) +{ + if (!wl->ops->get_spare_blocks) + BUG_ON(1); + + return wl->ops->get_spare_blocks(wl, is_gem); +} + +static inline int +wlcore_hw_set_key(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ + if (!wl->ops->set_key) + BUG_ON(1); + + return wl->ops->set_key(wl, cmd, vif, sta, key_conf); +} + +static inline u32 +wlcore_hw_pre_pkt_send(struct wl1271 *wl, u32 buf_offset, u32 last_len) +{ + if (wl->ops->pre_pkt_send) + return wl->ops->pre_pkt_send(wl, buf_offset, last_len); + + return buf_offset; +} + #endif diff --git a/drivers/net/wireless/ti/wlcore/ini.h b/drivers/net/wireless/ti/wlcore/ini.h index 4cf9ecc56212..d24fe3bbc672 100644 --- a/drivers/net/wireless/ti/wlcore/ini.h +++ b/drivers/net/wireless/ti/wlcore/ini.h @@ -172,7 +172,19 @@ struct wl128x_ini_fem_params_5 { /* NVS data structure */ #define WL1271_INI_NVS_SECTION_SIZE 468 -#define WL1271_INI_FEM_MODULE_COUNT 2 + +/* We have four FEM module types: 0-RFMD, 1-TQS, 2-SKW, 3-TQS_HP */ +#define WL1271_INI_FEM_MODULE_COUNT 4 + +/* + * In NVS we only store two FEM module entries - + * FEM modules 0,2,3 are stored in entry 0 + * FEM module 1 is stored in entry 1 + */ +#define WL12XX_NVS_FEM_MODULE_COUNT 2 + +#define WL12XX_FEM_TO_NVS_ENTRY(ini_fem_module) \ + ((ini_fem_module) == 1 ? 1 : 0) #define WL1271_INI_LEGACY_NVS_FILE_SIZE 800 @@ -188,13 +200,13 @@ struct wl1271_nvs_file { struct { struct wl1271_ini_fem_params_2 params; u8 padding; - } dyn_radio_params_2[WL1271_INI_FEM_MODULE_COUNT]; + } dyn_radio_params_2[WL12XX_NVS_FEM_MODULE_COUNT]; struct wl1271_ini_band_params_5 stat_radio_params_5; u8 padding3; struct { struct wl1271_ini_fem_params_5 params; u8 padding; - } dyn_radio_params_5[WL1271_INI_FEM_MODULE_COUNT]; + } dyn_radio_params_5[WL12XX_NVS_FEM_MODULE_COUNT]; } __packed; struct wl128x_nvs_file { @@ -209,12 +221,12 @@ struct wl128x_nvs_file { struct { struct wl128x_ini_fem_params_2 params; u8 padding; - } dyn_radio_params_2[WL1271_INI_FEM_MODULE_COUNT]; + } dyn_radio_params_2[WL12XX_NVS_FEM_MODULE_COUNT]; struct wl128x_ini_band_params_5 stat_radio_params_5; u8 padding3; struct { struct wl128x_ini_fem_params_5 params; u8 padding; - } dyn_radio_params_5[WL1271_INI_FEM_MODULE_COUNT]; + } dyn_radio_params_5[WL12XX_NVS_FEM_MODULE_COUNT]; } __packed; #endif diff --git a/drivers/net/wireless/ti/wlcore/init.c b/drivers/net/wireless/ti/wlcore/init.c index 9f89255eb6e6..a3c867786df8 100644 --- a/drivers/net/wireless/ti/wlcore/init.c +++ b/drivers/net/wireless/ti/wlcore/init.c @@ -54,6 +54,22 @@ int wl1271_init_templates_config(struct wl1271 *wl) if (ret < 0) return ret; + if (wl->quirks & WLCORE_QUIRK_DUAL_PROBE_TMPL) { + ret = wl1271_cmd_template_set(wl, WL12XX_INVALID_ROLE_ID, + CMD_TEMPL_APP_PROBE_REQ_2_4, NULL, + WL1271_CMD_TEMPL_MAX_SIZE, + 0, WL1271_RATE_AUTOMATIC); + if (ret < 0) + return ret; + + ret = wl1271_cmd_template_set(wl, WL12XX_INVALID_ROLE_ID, + CMD_TEMPL_APP_PROBE_REQ_5, NULL, + WL1271_CMD_TEMPL_MAX_SIZE, + 0, WL1271_RATE_AUTOMATIC); + if (ret < 0) + return ret; + } + ret = wl1271_cmd_template_set(wl, WL12XX_INVALID_ROLE_ID, CMD_TEMPL_NULL_DATA, NULL, sizeof(struct 
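/*
 * The WL12XX_FEM_TO_NVS_ENTRY() macro above folds the four INI FEM module
 * types into the two entries that physically exist in the NVS: only module 1
 * (TQS) keeps its own slot, everything else shares entry 0.
 *
 *   INI FEM module 0 (RFMD)   -> NVS entry 0
 *   INI FEM module 1 (TQS)    -> NVS entry 1
 *   INI FEM module 2 (SKW)    -> NVS entry 0
 *   INI FEM module 3 (TQS_HP) -> NVS entry 0
 */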
wl12xx_null_data_template), @@ -460,6 +476,9 @@ int wl1271_init_ap_rates(struct wl1271 *wl, struct wl12xx_vif *wlvif) /* unconditionally enable HT rates */ supported_rates |= CONF_TX_MCS_RATES; + /* get extra MIMO or wide-chan rates where the HW supports it */ + supported_rates |= wlcore_hw_ap_get_mimo_wide_rate_mask(wl, wlvif); + /* configure unicast TX rate classes */ for (i = 0; i < wl->conf.tx.ac_conf_count; i++) { rc.enabled_rates = supported_rates; @@ -551,29 +570,28 @@ int wl1271_init_vif_specific(struct wl1271 *wl, struct ieee80211_vif *vif) bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); int ret, i; - /* - * consider all existing roles before configuring psm. - * TODO: reconfigure on interface removal. - */ - if (!wl->ap_count) { - if (is_ap) { - /* Configure for power always on */ + /* consider all existing roles before configuring psm. */ + + if (wl->ap_count == 0 && is_ap) { /* first AP */ + /* Configure for power always on */ + ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); + if (ret < 0) + return ret; + /* first STA, no APs */ + } else if (wl->sta_count == 0 && wl->ap_count == 0 && !is_ap) { + u8 sta_auth = wl->conf.conn.sta_sleep_auth; + /* Configure for power according to debugfs */ + if (sta_auth != WL1271_PSM_ILLEGAL) + ret = wl1271_acx_sleep_auth(wl, sta_auth); + /* Configure for power always on */ + else if (wl->quirks & WLCORE_QUIRK_NO_ELP) ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); - if (ret < 0) - return ret; - } else if (!wl->sta_count) { - if (wl->quirks & WLCORE_QUIRK_NO_ELP) { - /* Configure for power always on */ - ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); - if (ret < 0) - return ret; - } else { - /* Configure for ELP power saving */ - ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_ELP); - if (ret < 0) - return ret; - } - } + /* Configure for ELP power saving */ + else + ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_ELP); + + if (ret < 0) + return ret; } /* Mode specific init */ diff --git a/drivers/net/wireless/ti/wlcore/io.c b/drivers/net/wireless/ti/wlcore/io.c index 7cd0081aede5..68e74eefd296 100644 --- a/drivers/net/wireless/ti/wlcore/io.c +++ b/drivers/net/wireless/ti/wlcore/io.c @@ -48,12 +48,24 @@ void wlcore_disable_interrupts(struct wl1271 *wl) } EXPORT_SYMBOL_GPL(wlcore_disable_interrupts); +void wlcore_disable_interrupts_nosync(struct wl1271 *wl) +{ + disable_irq_nosync(wl->irq); +} +EXPORT_SYMBOL_GPL(wlcore_disable_interrupts_nosync); + void wlcore_enable_interrupts(struct wl1271 *wl) { enable_irq(wl->irq); } EXPORT_SYMBOL_GPL(wlcore_enable_interrupts); +void wlcore_synchronize_interrupts(struct wl1271 *wl) +{ + synchronize_irq(wl->irq); +} +EXPORT_SYMBOL_GPL(wlcore_synchronize_interrupts); + int wlcore_translate_addr(struct wl1271 *wl, int addr) { struct wlcore_partition_set *part = &wl->curr_part; @@ -122,9 +134,11 @@ EXPORT_SYMBOL_GPL(wlcore_translate_addr); * | | * */ -void wlcore_set_partition(struct wl1271 *wl, - const struct wlcore_partition_set *p) +int wlcore_set_partition(struct wl1271 *wl, + const struct wlcore_partition_set *p) { + int ret; + /* copy partition info */ memcpy(&wl->curr_part, p, sizeof(*p)); @@ -137,28 +151,41 @@ void wlcore_set_partition(struct wl1271 *wl, wl1271_debug(DEBUG_IO, "mem3_start %08X mem3_size %08X", p->mem3.start, p->mem3.size); - wl1271_raw_write32(wl, HW_PART0_START_ADDR, p->mem.start); - wl1271_raw_write32(wl, HW_PART0_SIZE_ADDR, p->mem.size); - wl1271_raw_write32(wl, HW_PART1_START_ADDR, p->reg.start); - wl1271_raw_write32(wl, HW_PART1_SIZE_ADDR, p->reg.size); - wl1271_raw_write32(wl, 
HW_PART2_START_ADDR, p->mem2.start); - wl1271_raw_write32(wl, HW_PART2_SIZE_ADDR, p->mem2.size); + ret = wlcore_raw_write32(wl, HW_PART0_START_ADDR, p->mem.start); + if (ret < 0) + goto out; + + ret = wlcore_raw_write32(wl, HW_PART0_SIZE_ADDR, p->mem.size); + if (ret < 0) + goto out; + + ret = wlcore_raw_write32(wl, HW_PART1_START_ADDR, p->reg.start); + if (ret < 0) + goto out; + + ret = wlcore_raw_write32(wl, HW_PART1_SIZE_ADDR, p->reg.size); + if (ret < 0) + goto out; + + ret = wlcore_raw_write32(wl, HW_PART2_START_ADDR, p->mem2.start); + if (ret < 0) + goto out; + + ret = wlcore_raw_write32(wl, HW_PART2_SIZE_ADDR, p->mem2.size); + if (ret < 0) + goto out; + /* * We don't need the size of the last partition, as it is * automatically calculated based on the total memory size and * the sizes of the previous partitions. */ - wl1271_raw_write32(wl, HW_PART3_START_ADDR, p->mem3.start); -} -EXPORT_SYMBOL_GPL(wlcore_set_partition); - -void wlcore_select_partition(struct wl1271 *wl, u8 part) -{ - wl1271_debug(DEBUG_IO, "setting partition %d", part); + ret = wlcore_raw_write32(wl, HW_PART3_START_ADDR, p->mem3.start); - wlcore_set_partition(wl, &wl->ptable[part]); +out: + return ret; } -EXPORT_SYMBOL_GPL(wlcore_select_partition); +EXPORT_SYMBOL_GPL(wlcore_set_partition); void wl1271_io_reset(struct wl1271 *wl) { diff --git a/drivers/net/wireless/ti/wlcore/io.h b/drivers/net/wireless/ti/wlcore/io.h index 8942954b56a0..259149f36fae 100644 --- a/drivers/net/wireless/ti/wlcore/io.h +++ b/drivers/net/wireless/ti/wlcore/io.h @@ -45,86 +45,122 @@ struct wl1271; void wlcore_disable_interrupts(struct wl1271 *wl); +void wlcore_disable_interrupts_nosync(struct wl1271 *wl); void wlcore_enable_interrupts(struct wl1271 *wl); +void wlcore_synchronize_interrupts(struct wl1271 *wl); void wl1271_io_reset(struct wl1271 *wl); void wl1271_io_init(struct wl1271 *wl); int wlcore_translate_addr(struct wl1271 *wl, int addr); /* Raw target IO, address is not translated */ -static inline void wl1271_raw_write(struct wl1271 *wl, int addr, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_raw_write(struct wl1271 *wl, int addr, + void *buf, size_t len, + bool fixed) { - wl->if_ops->write(wl->dev, addr, buf, len, fixed); + int ret; + + if (test_bit(WL1271_FLAG_IO_FAILED, &wl->flags)) + return -EIO; + + ret = wl->if_ops->write(wl->dev, addr, buf, len, fixed); + if (ret && wl->state != WL1271_STATE_OFF) + set_bit(WL1271_FLAG_IO_FAILED, &wl->flags); + + return ret; } -static inline void wl1271_raw_read(struct wl1271 *wl, int addr, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_raw_read(struct wl1271 *wl, int addr, + void *buf, size_t len, + bool fixed) { - wl->if_ops->read(wl->dev, addr, buf, len, fixed); + int ret; + + if (test_bit(WL1271_FLAG_IO_FAILED, &wl->flags)) + return -EIO; + + ret = wl->if_ops->read(wl->dev, addr, buf, len, fixed); + if (ret && wl->state != WL1271_STATE_OFF) + set_bit(WL1271_FLAG_IO_FAILED, &wl->flags); + + return ret; } -static inline void wlcore_raw_read_data(struct wl1271 *wl, int reg, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_raw_read_data(struct wl1271 *wl, int reg, + void *buf, size_t len, + bool fixed) { - wl1271_raw_read(wl, wl->rtable[reg], buf, len, fixed); + return wlcore_raw_read(wl, wl->rtable[reg], buf, len, fixed); } -static inline void wlcore_raw_write_data(struct wl1271 *wl, int reg, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_raw_write_data(struct wl1271 *wl, int reg, 
+ void *buf, size_t len, + bool fixed) { - wl1271_raw_write(wl, wl->rtable[reg], buf, len, fixed); + return wlcore_raw_write(wl, wl->rtable[reg], buf, len, fixed); } -static inline u32 wl1271_raw_read32(struct wl1271 *wl, int addr) +static inline int __must_check wlcore_raw_read32(struct wl1271 *wl, int addr, + u32 *val) { - wl1271_raw_read(wl, addr, &wl->buffer_32, - sizeof(wl->buffer_32), false); + int ret; + + ret = wlcore_raw_read(wl, addr, &wl->buffer_32, + sizeof(wl->buffer_32), false); + if (ret < 0) + return ret; + + if (val) + *val = le32_to_cpu(wl->buffer_32); - return le32_to_cpu(wl->buffer_32); + return 0; } -static inline void wl1271_raw_write32(struct wl1271 *wl, int addr, u32 val) +static inline int __must_check wlcore_raw_write32(struct wl1271 *wl, int addr, + u32 val) { wl->buffer_32 = cpu_to_le32(val); - wl1271_raw_write(wl, addr, &wl->buffer_32, - sizeof(wl->buffer_32), false); + return wlcore_raw_write(wl, addr, &wl->buffer_32, + sizeof(wl->buffer_32), false); } -static inline void wl1271_read(struct wl1271 *wl, int addr, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_read(struct wl1271 *wl, int addr, + void *buf, size_t len, bool fixed) { int physical; physical = wlcore_translate_addr(wl, addr); - wl1271_raw_read(wl, physical, buf, len, fixed); + return wlcore_raw_read(wl, physical, buf, len, fixed); } -static inline void wl1271_write(struct wl1271 *wl, int addr, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_write(struct wl1271 *wl, int addr, + void *buf, size_t len, bool fixed) { int physical; physical = wlcore_translate_addr(wl, addr); - wl1271_raw_write(wl, physical, buf, len, fixed); + return wlcore_raw_write(wl, physical, buf, len, fixed); } -static inline void wlcore_write_data(struct wl1271 *wl, int reg, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_write_data(struct wl1271 *wl, int reg, + void *buf, size_t len, + bool fixed) { - wl1271_write(wl, wl->rtable[reg], buf, len, fixed); + return wlcore_write(wl, wl->rtable[reg], buf, len, fixed); } -static inline void wlcore_read_data(struct wl1271 *wl, int reg, void *buf, - size_t len, bool fixed) +static inline int __must_check wlcore_read_data(struct wl1271 *wl, int reg, + void *buf, size_t len, + bool fixed) { - wl1271_read(wl, wl->rtable[reg], buf, len, fixed); + return wlcore_read(wl, wl->rtable[reg], buf, len, fixed); } -static inline void wl1271_read_hwaddr(struct wl1271 *wl, int hwaddr, - void *buf, size_t len, bool fixed) +static inline int __must_check wlcore_read_hwaddr(struct wl1271 *wl, int hwaddr, + void *buf, size_t len, + bool fixed) { int physical; int addr; @@ -134,34 +170,47 @@ static inline void wl1271_read_hwaddr(struct wl1271 *wl, int hwaddr, physical = wlcore_translate_addr(wl, addr); - wl1271_raw_read(wl, physical, buf, len, fixed); + return wlcore_raw_read(wl, physical, buf, len, fixed); } -static inline u32 wl1271_read32(struct wl1271 *wl, int addr) +static inline int __must_check wlcore_read32(struct wl1271 *wl, int addr, + u32 *val) { - return wl1271_raw_read32(wl, wlcore_translate_addr(wl, addr)); + return wlcore_raw_read32(wl, wlcore_translate_addr(wl, addr), val); } -static inline void wl1271_write32(struct wl1271 *wl, int addr, u32 val) +static inline int __must_check wlcore_write32(struct wl1271 *wl, int addr, + u32 val) { - wl1271_raw_write32(wl, wlcore_translate_addr(wl, addr), val); + return wlcore_raw_write32(wl, wlcore_translate_addr(wl, addr), val); } -static inline u32 
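/*
 * With the io.h conversion above every accessor returns an error code,
 * latches WL1271_FLAG_IO_FAILED on a bus failure, and is marked __must_check
 * so an unchecked call becomes a compiler warning. Callers throughout the
 * patch follow the same shape; the function below is only a schematic
 * example (REG_PC_ON_RECOVERY is borrowed from the recovery path in this
 * patch and stands for any entry in the register table).
 */
static int wlcore_example_read_pc(struct wl1271 *wl, u32 *pc)
{
	int ret;

	ret = wlcore_read_reg(wl, REG_PC_ON_RECOVERY, pc);
	if (ret < 0)
		return ret;	/* WL1271_FLAG_IO_FAILED was latched below */

	return 0;
}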
wlcore_read_reg(struct wl1271 *wl, int reg) +static inline int __must_check wlcore_read_reg(struct wl1271 *wl, int reg, + u32 *val) { - return wl1271_raw_read32(wl, - wlcore_translate_addr(wl, wl->rtable[reg])); + return wlcore_raw_read32(wl, + wlcore_translate_addr(wl, wl->rtable[reg]), + val); } -static inline void wlcore_write_reg(struct wl1271 *wl, int reg, u32 val) +static inline int __must_check wlcore_write_reg(struct wl1271 *wl, int reg, + u32 val) { - wl1271_raw_write32(wl, wlcore_translate_addr(wl, wl->rtable[reg]), val); + return wlcore_raw_write32(wl, + wlcore_translate_addr(wl, wl->rtable[reg]), + val); } static inline void wl1271_power_off(struct wl1271 *wl) { - wl->if_ops->power(wl->dev, false); - clear_bit(WL1271_FLAG_GPIO_POWER, &wl->flags); + int ret; + + if (!test_bit(WL1271_FLAG_GPIO_POWER, &wl->flags)) + return; + + ret = wl->if_ops->power(wl->dev, false); + if (!ret) + clear_bit(WL1271_FLAG_GPIO_POWER, &wl->flags); } static inline int wl1271_power_on(struct wl1271 *wl) @@ -173,8 +222,8 @@ static inline int wl1271_power_on(struct wl1271 *wl) return ret; } -void wlcore_set_partition(struct wl1271 *wl, - const struct wlcore_partition_set *p); +int wlcore_set_partition(struct wl1271 *wl, + const struct wlcore_partition_set *p); bool wl1271_set_block_size(struct wl1271 *wl); @@ -182,6 +231,4 @@ bool wl1271_set_block_size(struct wl1271 *wl); int wl1271_tx_dummy_packet(struct wl1271 *wl); -void wlcore_select_partition(struct wl1271 *wl, u8 part); - #endif diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c index acef93390d3d..72548609f711 100644 --- a/drivers/net/wireless/ti/wlcore/main.c +++ b/drivers/net/wireless/ti/wlcore/main.c @@ -62,7 +62,7 @@ static bool no_recovery; static void __wl1271_op_remove_interface(struct wl1271 *wl, struct ieee80211_vif *vif, bool reset_tx_queues); -static void wl1271_op_stop(struct ieee80211_hw *hw); +static void wlcore_op_stop_locked(struct wl1271 *wl); static void wl1271_free_ap_keys(struct wl1271 *wl, struct wl12xx_vif *wlvif); static int wl12xx_set_authorized(struct wl1271 *wl, @@ -320,46 +320,6 @@ static void wlcore_adjust_conf(struct wl1271 *wl) } } -static int wl1271_plt_init(struct wl1271 *wl) -{ - int ret; - - ret = wl->ops->hw_init(wl); - if (ret < 0) - return ret; - - ret = wl1271_acx_init_mem_config(wl); - if (ret < 0) - return ret; - - ret = wl12xx_acx_mem_cfg(wl); - if (ret < 0) - goto out_free_memmap; - - /* Enable data path */ - ret = wl1271_cmd_data_path(wl, 1); - if (ret < 0) - goto out_free_memmap; - - /* Configure for CAM power saving (ie. 
always active) */ - ret = wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); - if (ret < 0) - goto out_free_memmap; - - /* configure PM */ - ret = wl1271_acx_pm_config(wl); - if (ret < 0) - goto out_free_memmap; - - return 0; - - out_free_memmap: - kfree(wl->target_mem_map); - wl->target_mem_map = NULL; - - return ret; -} - static void wl12xx_irq_ps_regulate_link(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 hlid, u8 tx_pkts) @@ -387,7 +347,7 @@ static void wl12xx_irq_ps_regulate_link(struct wl1271 *wl, static void wl12xx_irq_update_links_status(struct wl1271 *wl, struct wl12xx_vif *wlvif, - struct wl_fw_status *status) + struct wl_fw_status_2 *status) { struct wl1271_link *lnk; u32 cur_fw_ps_map; @@ -418,8 +378,9 @@ static void wl12xx_irq_update_links_status(struct wl1271 *wl, } } -static void wl12xx_fw_status(struct wl1271 *wl, - struct wl_fw_status *status) +static int wlcore_fw_status(struct wl1271 *wl, + struct wl_fw_status_1 *status_1, + struct wl_fw_status_2 *status_2) { struct wl12xx_vif *wlvif; struct timespec ts; @@ -427,38 +388,42 @@ static void wl12xx_fw_status(struct wl1271 *wl, int avail, freed_blocks; int i; size_t status_len; + int ret; - status_len = sizeof(*status) + wl->fw_status_priv_len; + status_len = WLCORE_FW_STATUS_1_LEN(wl->num_rx_desc) + + sizeof(*status_2) + wl->fw_status_priv_len; - wlcore_raw_read_data(wl, REG_RAW_FW_STATUS_ADDR, status, - status_len, false); + ret = wlcore_raw_read_data(wl, REG_RAW_FW_STATUS_ADDR, status_1, + status_len, false); + if (ret < 0) + return ret; wl1271_debug(DEBUG_IRQ, "intr: 0x%x (fw_rx_counter = %d, " "drv_rx_counter = %d, tx_results_counter = %d)", - status->intr, - status->fw_rx_counter, - status->drv_rx_counter, - status->tx_results_counter); + status_1->intr, + status_1->fw_rx_counter, + status_1->drv_rx_counter, + status_1->tx_results_counter); for (i = 0; i < NUM_TX_QUEUES; i++) { /* prevent wrap-around in freed-packets counter */ wl->tx_allocated_pkts[i] -= - (status->counters.tx_released_pkts[i] - + (status_2->counters.tx_released_pkts[i] - wl->tx_pkts_freed[i]) & 0xff; - wl->tx_pkts_freed[i] = status->counters.tx_released_pkts[i]; + wl->tx_pkts_freed[i] = status_2->counters.tx_released_pkts[i]; } /* prevent wrap-around in total blocks counter */ if (likely(wl->tx_blocks_freed <= - le32_to_cpu(status->total_released_blks))) - freed_blocks = le32_to_cpu(status->total_released_blks) - + le32_to_cpu(status_2->total_released_blks))) + freed_blocks = le32_to_cpu(status_2->total_released_blks) - wl->tx_blocks_freed; else freed_blocks = 0x100000000LL - wl->tx_blocks_freed + - le32_to_cpu(status->total_released_blks); + le32_to_cpu(status_2->total_released_blks); - wl->tx_blocks_freed = le32_to_cpu(status->total_released_blks); + wl->tx_blocks_freed = le32_to_cpu(status_2->total_released_blks); wl->tx_allocated_blocks -= freed_blocks; @@ -474,7 +439,7 @@ static void wl12xx_fw_status(struct wl1271 *wl, cancel_delayed_work(&wl->tx_watchdog_work); } - avail = le32_to_cpu(status->tx_total) - wl->tx_allocated_blocks; + avail = le32_to_cpu(status_2->tx_total) - wl->tx_allocated_blocks; /* * The FW might change the total number of TX memblocks before @@ -493,13 +458,15 @@ static void wl12xx_fw_status(struct wl1271 *wl, /* for AP update num of allocated TX blocks per link and ps status */ wl12xx_for_each_wlvif_ap(wl, wlvif) { - wl12xx_irq_update_links_status(wl, wlvif, status); + wl12xx_irq_update_links_status(wl, wlvif, status_2); } /* update the host-chipset time offset */ getnstimeofday(&ts); wl->time_offset = (timespec_to_ns(&ts) >> 10) - 
- (s64)le32_to_cpu(status->fw_localtime); + (s64)le32_to_cpu(status_2->fw_localtime); + + return 0; } static void wl1271_flush_deferred_work(struct wl1271 *wl) @@ -527,20 +494,15 @@ static void wl1271_netstack_work(struct work_struct *work) #define WL1271_IRQ_MAX_LOOPS 256 -static irqreturn_t wl1271_irq(int irq, void *cookie) +static int wlcore_irq_locked(struct wl1271 *wl) { - int ret; + int ret = 0; u32 intr; int loopcount = WL1271_IRQ_MAX_LOOPS; - struct wl1271 *wl = (struct wl1271 *)cookie; bool done = false; unsigned int defer_count; unsigned long flags; - /* TX might be handled here, avoid redundant work */ - set_bit(WL1271_FLAG_TX_PENDING, &wl->flags); - cancel_work_sync(&wl->tx_work); - /* * In case edge triggered interrupt must be used, we cannot iterate * more than once without introducing race conditions with the hardirq. @@ -548,8 +510,6 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) if (wl->platform_quirks & WL12XX_PLATFORM_QUIRK_EDGE_IRQ) loopcount = 1; - mutex_lock(&wl->mutex); - wl1271_debug(DEBUG_IRQ, "IRQ work"); if (unlikely(wl->state == WL1271_STATE_OFF)) @@ -568,21 +528,33 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) clear_bit(WL1271_FLAG_IRQ_RUNNING, &wl->flags); smp_mb__after_clear_bit(); - wl12xx_fw_status(wl, wl->fw_status); + ret = wlcore_fw_status(wl, wl->fw_status_1, wl->fw_status_2); + if (ret < 0) + goto out; wlcore_hw_tx_immediate_compl(wl); - intr = le32_to_cpu(wl->fw_status->intr); - intr &= WL1271_INTR_MASK; + intr = le32_to_cpu(wl->fw_status_1->intr); + intr &= WLCORE_ALL_INTR_MASK; if (!intr) { done = true; continue; } if (unlikely(intr & WL1271_ACX_INTR_WATCHDOG)) { - wl1271_error("watchdog interrupt received! " + wl1271_error("HW watchdog interrupt received! starting recovery."); + wl->watchdog_recovery = true; + ret = -EIO; + + /* restarting the chip. ignore any other interrupt. */ + goto out; + } + + if (unlikely(intr & WL1271_ACX_SW_INTR_WATCHDOG)) { + wl1271_error("SW watchdog interrupt received! " "starting recovery."); - wl12xx_queue_recovery_work(wl); + wl->watchdog_recovery = true; + ret = -EIO; /* restarting the chip. ignore any other interrupt. */ goto out; @@ -591,7 +563,9 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) if (likely(intr & WL1271_ACX_INTR_DATA)) { wl1271_debug(DEBUG_IRQ, "WL1271_ACX_INTR_DATA"); - wl12xx_rx(wl, wl->fw_status); + ret = wlcore_rx(wl, wl->fw_status_1); + if (ret < 0) + goto out; /* Check if any tx blocks were freed */ spin_lock_irqsave(&wl->wl_lock, flags); @@ -602,13 +576,17 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) * In order to avoid starvation of the TX path, * call the work function directly. 
*/ - wl1271_tx_work_locked(wl); + ret = wlcore_tx_work_locked(wl); + if (ret < 0) + goto out; } else { spin_unlock_irqrestore(&wl->wl_lock, flags); } /* check for tx results */ - wlcore_hw_tx_delayed_compl(wl); + ret = wlcore_hw_tx_delayed_compl(wl); + if (ret < 0) + goto out; /* Make sure the deferred queues don't get too long */ defer_count = skb_queue_len(&wl->deferred_tx_queue) + @@ -619,12 +597,16 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) if (intr & WL1271_ACX_INTR_EVENT_A) { wl1271_debug(DEBUG_IRQ, "WL1271_ACX_INTR_EVENT_A"); - wl1271_event_handle(wl, 0); + ret = wl1271_event_handle(wl, 0); + if (ret < 0) + goto out; } if (intr & WL1271_ACX_INTR_EVENT_B) { wl1271_debug(DEBUG_IRQ, "WL1271_ACX_INTR_EVENT_B"); - wl1271_event_handle(wl, 1); + ret = wl1271_event_handle(wl, 1); + if (ret < 0) + goto out; } if (intr & WL1271_ACX_INTR_INIT_COMPLETE) @@ -638,6 +620,25 @@ static irqreturn_t wl1271_irq(int irq, void *cookie) wl1271_ps_elp_sleep(wl); out: + return ret; +} + +static irqreturn_t wlcore_irq(int irq, void *cookie) +{ + int ret; + unsigned long flags; + struct wl1271 *wl = cookie; + + /* TX might be handled here, avoid redundant work */ + set_bit(WL1271_FLAG_TX_PENDING, &wl->flags); + cancel_work_sync(&wl->tx_work); + + mutex_lock(&wl->mutex); + + ret = wlcore_irq_locked(wl); + if (ret) + wl12xx_queue_recovery_work(wl); + spin_lock_irqsave(&wl->wl_lock, flags); /* In case TX was not handled here, queue TX work */ clear_bit(WL1271_FLAG_TX_PENDING, &wl->flags); @@ -743,7 +744,7 @@ out: return ret; } -static int wl1271_fetch_nvs(struct wl1271 *wl) +static void wl1271_fetch_nvs(struct wl1271 *wl) { const struct firmware *fw; int ret; @@ -751,16 +752,15 @@ static int wl1271_fetch_nvs(struct wl1271 *wl) ret = request_firmware(&fw, WL12XX_NVS_NAME, wl->dev); if (ret < 0) { - wl1271_error("could not get nvs file %s: %d", WL12XX_NVS_NAME, - ret); - return ret; + wl1271_debug(DEBUG_BOOT, "could not get nvs file %s: %d", + WL12XX_NVS_NAME, ret); + return; } wl->nvs = kmemdup(fw->data, fw->size, GFP_KERNEL); if (!wl->nvs) { wl1271_error("could not allocate memory for the nvs file"); - ret = -ENOMEM; goto out; } @@ -768,14 +768,17 @@ static int wl1271_fetch_nvs(struct wl1271 *wl) out: release_firmware(fw); - - return ret; } void wl12xx_queue_recovery_work(struct wl1271 *wl) { - if (!test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags)) + WARN_ON(!test_bit(WL1271_FLAG_INTENDED_FW_RECOVERY, &wl->flags)); + + /* Avoid a recursive recovery */ + if (!test_and_set_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags)) { + wlcore_disable_interrupts_nosync(wl); ieee80211_queue_work(wl->hw, &wl->recovery_work); + } } size_t wl12xx_copy_fwlog(struct wl1271 *wl, u8 *memblock, size_t maxlen) @@ -801,14 +804,17 @@ size_t wl12xx_copy_fwlog(struct wl1271 *wl, u8 *memblock, size_t maxlen) return len; } +#define WLCORE_FW_LOG_END 0x2000000 + static void wl12xx_read_fwlog_panic(struct wl1271 *wl) { u32 addr; - u32 first_addr; + u32 offset; + u32 end_of_log; u8 *block; + int ret; if ((wl->quirks & WLCORE_QUIRK_FWLOG_NOT_IMPLEMENTED) || - (wl->conf.fwlog.mode != WL12XX_FWLOG_ON_DEMAND) || (wl->conf.fwlog.mem_blocks == 0)) return; @@ -820,34 +826,49 @@ static void wl12xx_read_fwlog_panic(struct wl1271 *wl) /* * Make sure the chip is awake and the logger isn't active. - * This might fail if the firmware hanged. + * Do not send a stop fwlog command if the fw is hanged. 
*/ - if (!wl1271_ps_elp_wakeup(wl)) + if (wl1271_ps_elp_wakeup(wl)) + goto out; + if (!wl->watchdog_recovery) wl12xx_cmd_stop_fwlog(wl); /* Read the first memory block address */ - wl12xx_fw_status(wl, wl->fw_status); - first_addr = le32_to_cpu(wl->fw_status->log_start_addr); - if (!first_addr) + ret = wlcore_fw_status(wl, wl->fw_status_1, wl->fw_status_2); + if (ret < 0) goto out; + addr = le32_to_cpu(wl->fw_status_2->log_start_addr); + if (!addr) + goto out; + + if (wl->conf.fwlog.mode == WL12XX_FWLOG_CONTINUOUS) { + offset = sizeof(addr) + sizeof(struct wl1271_rx_descriptor); + end_of_log = WLCORE_FW_LOG_END; + } else { + offset = sizeof(addr); + end_of_log = addr; + } + /* Traverse the memory blocks linked list */ - addr = first_addr; do { memset(block, 0, WL12XX_HW_BLOCK_SIZE); - wl1271_read_hwaddr(wl, addr, block, WL12XX_HW_BLOCK_SIZE, - false); + ret = wlcore_read_hwaddr(wl, addr, block, WL12XX_HW_BLOCK_SIZE, + false); + if (ret < 0) + goto out; /* * Memory blocks are linked to one another. The first 4 bytes * of each memory block hold the hardware address of the next - * one. The last memory block points to the first one. + * one. The last memory block points to the first one in + * on demand mode and is equal to 0x2000000 in continuous mode. */ addr = le32_to_cpup((__le32 *)block); - if (!wl12xx_copy_fwlog(wl, block + sizeof(addr), - WL12XX_HW_BLOCK_SIZE - sizeof(addr))) + if (!wl12xx_copy_fwlog(wl, block + offset, + WL12XX_HW_BLOCK_SIZE - offset)) break; - } while (addr && (addr != first_addr)); + } while (addr && (addr != end_of_log)); wake_up_interruptible(&wl->fwlog_waitq); @@ -855,6 +876,34 @@ out: kfree(block); } +static void wlcore_print_recovery(struct wl1271 *wl) +{ + u32 pc = 0; + u32 hint_sts = 0; + int ret; + + wl1271_info("Hardware recovery in progress. FW ver: %s", + wl->chip.fw_ver_str); + + /* change partitions momentarily so we can read the FW pc */ + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + if (ret < 0) + return; + + ret = wlcore_read_reg(wl, REG_PC_ON_RECOVERY, &pc); + if (ret < 0) + return; + + ret = wlcore_read_reg(wl, REG_INTERRUPT_NO_CLEAR, &hint_sts); + if (ret < 0) + return; + + wl1271_info("pc: 0x%x, hint_sts: 0x%08x", pc, hint_sts); + + wlcore_set_partition(wl, &wl->ptable[PART_WORK]); +} + + static void wl1271_recovery_work(struct work_struct *work) { struct wl1271 *wl = @@ -867,26 +916,19 @@ static void wl1271_recovery_work(struct work_struct *work) if (wl->state != WL1271_STATE_ON || wl->plt) goto out_unlock; - /* Avoid a recursive recovery */ - set_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags); - - wl12xx_read_fwlog_panic(wl); - - wl1271_info("Hardware recovery in progress. FW ver: %s pc: 0x%x", - wl->chip.fw_ver_str, - wlcore_read_reg(wl, REG_PC_ON_RECOVERY)); + if (!test_bit(WL1271_FLAG_INTENDED_FW_RECOVERY, &wl->flags)) { + wl12xx_read_fwlog_panic(wl); + wlcore_print_recovery(wl); + } BUG_ON(bug_on_recovery && !test_bit(WL1271_FLAG_INTENDED_FW_RECOVERY, &wl->flags)); if (no_recovery) { wl1271_info("No recovery (chosen on module load). Fw will remain stuck."); - clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags); goto out_unlock; } - BUG_ON(bug_on_recovery); - /* * Advance security sequence number to overcome potential progress * in the firmware during recovery. 
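/*
 * wl12xx_read_fwlog_panic() above now walks the log chain in both logger
 * modes; the only differences are the per-block header to skip and the
 * address that terminates the walk:
 *
 *   WL12XX_FWLOG_ON_DEMAND:  skip sizeof(addr) bytes per block,
 *                            stop when the chain loops back to the
 *                            first block
 *   WL12XX_FWLOG_CONTINUOUS: skip sizeof(addr) +
 *                            sizeof(struct wl1271_rx_descriptor),
 *                            stop when the next pointer equals
 *                            WLCORE_FW_LOG_END (0x2000000)
 */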
This doens't hurt if the network is @@ -900,7 +942,7 @@ static void wl1271_recovery_work(struct work_struct *work) } /* Prevent spurious TX during FW restart */ - ieee80211_stop_queues(wl->hw); + wlcore_stop_queues(wl, WLCORE_QUEUE_STOP_REASON_FW_RESTART); if (wl->sched_scanning) { ieee80211_sched_scan_stopped(wl->hw); @@ -914,10 +956,8 @@ static void wl1271_recovery_work(struct work_struct *work) vif = wl12xx_wlvif_to_vif(wlvif); __wl1271_op_remove_interface(wl, vif, false); } - mutex_unlock(&wl->mutex); - wl1271_op_stop(wl->hw); - clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags); + wlcore_op_stop_locked(wl); ieee80211_restart_hw(wl->hw); @@ -925,26 +965,34 @@ static void wl1271_recovery_work(struct work_struct *work) * Its safe to enable TX now - the queues are stopped after a request * to restart the HW. */ - ieee80211_wake_queues(wl->hw); - return; + wlcore_wake_queues(wl, WLCORE_QUEUE_STOP_REASON_FW_RESTART); + out_unlock: + wl->watchdog_recovery = false; + clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags); mutex_unlock(&wl->mutex); } -static void wl1271_fw_wakeup(struct wl1271 *wl) +static int wlcore_fw_wakeup(struct wl1271 *wl) { - wl1271_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_WAKE_UP); + return wlcore_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_WAKE_UP); } static int wl1271_setup(struct wl1271 *wl) { - wl->fw_status = kmalloc(sizeof(*wl->fw_status), GFP_KERNEL); - if (!wl->fw_status) + wl->fw_status_1 = kmalloc(WLCORE_FW_STATUS_1_LEN(wl->num_rx_desc) + + sizeof(*wl->fw_status_2) + + wl->fw_status_priv_len, GFP_KERNEL); + if (!wl->fw_status_1) return -ENOMEM; + wl->fw_status_2 = (struct wl_fw_status_2 *) + (((u8 *) wl->fw_status_1) + + WLCORE_FW_STATUS_1_LEN(wl->num_rx_desc)); + wl->tx_res_if = kmalloc(sizeof(*wl->tx_res_if), GFP_KERNEL); if (!wl->tx_res_if) { - kfree(wl->fw_status); + kfree(wl->fw_status_1); return -ENOMEM; } @@ -963,13 +1011,21 @@ static int wl12xx_set_power_on(struct wl1271 *wl) wl1271_io_reset(wl); wl1271_io_init(wl); - wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + ret = wlcore_set_partition(wl, &wl->ptable[PART_BOOT]); + if (ret < 0) + goto fail; /* ELP module wake up */ - wl1271_fw_wakeup(wl); + ret = wlcore_fw_wakeup(wl); + if (ret < 0) + goto fail; out: return ret; + +fail: + wl1271_power_off(wl); + return ret; } static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt) @@ -987,13 +1043,12 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt) * simplify the code and since the performance impact is * negligible, we use the same block size for all different * chip types. + * + * Check if the bus supports blocksize alignment and, if it + * doesn't, make sure we don't have the quirk. 
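/*
 * wl1271_setup() above carves a single allocation into the two firmware
 * status blocks: wl_fw_status_1, whose length WLCORE_FW_STATUS_1_LEN()
 * scales with the chip's number of RX descriptors, is followed immediately
 * by wl_fw_status_2 and then by any lower-driver private status area.
 *
 * One kmalloc() buffer, three regions:
 *   offset 0                                   : wl_fw_status_1  (= wl->fw_status_1)
 *   offset WLCORE_FW_STATUS_1_LEN(num_rx_desc) : wl_fw_status_2  (= wl->fw_status_2)
 *   offset ...  + sizeof(struct wl_fw_status_2): lower driver's fw_status_priv_len bytes
 */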
*/ - if (wl1271_set_block_size(wl)) - wl->quirks |= WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN; - - ret = wl->ops->identify_chip(wl); - if (ret < 0) - goto out; + if (!wl1271_set_block_size(wl)) + wl->quirks &= ~WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN; /* TODO: make sure the lower driver has set things up correctly */ @@ -1005,21 +1060,21 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt) if (ret < 0) goto out; - /* No NVS from netlink, try to get it from the filesystem */ - if (wl->nvs == NULL) { - ret = wl1271_fetch_nvs(wl); - if (ret < 0) - goto out; - } - out: return ret; } -int wl1271_plt_start(struct wl1271 *wl) +int wl1271_plt_start(struct wl1271 *wl, const enum plt_mode plt_mode) { int retries = WL1271_BOOT_RETRIES; struct wiphy *wiphy = wl->hw->wiphy; + + static const char* const PLT_MODE[] = { + "PLT_OFF", + "PLT_ON", + "PLT_FEM_DETECT" + }; + int ret; mutex_lock(&wl->mutex); @@ -1033,23 +1088,23 @@ int wl1271_plt_start(struct wl1271 *wl) goto out; } + /* Indicate to lower levels that we are now in PLT mode */ + wl->plt = true; + wl->plt_mode = plt_mode; + while (retries) { retries--; ret = wl12xx_chip_wakeup(wl, true); if (ret < 0) goto power_off; - ret = wl->ops->boot(wl); + ret = wl->ops->plt_init(wl); if (ret < 0) goto power_off; - ret = wl1271_plt_init(wl); - if (ret < 0) - goto irq_disable; - - wl->plt = true; wl->state = WL1271_STATE_ON; - wl1271_notice("firmware booted in PLT mode (%s)", + wl1271_notice("firmware booted in PLT mode %s (%s)", + PLT_MODE[plt_mode], wl->chip.fw_ver_str); /* update hw/fw version info in wiphy struct */ @@ -1059,23 +1114,13 @@ int wl1271_plt_start(struct wl1271 *wl) goto out; -irq_disable: - mutex_unlock(&wl->mutex); - /* Unlocking the mutex in the middle of handling is - inherently unsafe. In this case we deem it safe to do, - because we need to let any possibly pending IRQ out of - the system (and while we are WL1271_STATE_OFF the IRQ - work function will not do anything.) Also, any other - possible concurrent operations will fail due to the - current state, hence the wl1271 struct should be safe. */ - wlcore_disable_interrupts(wl); - wl1271_flush_deferred_work(wl); - cancel_work_sync(&wl->netstack_work); - mutex_lock(&wl->mutex); power_off: wl1271_power_off(wl); } + wl->plt = false; + wl->plt_mode = PLT_OFF; + wl1271_error("firmware boot in PLT mode failed despite %d retries", WL1271_BOOT_RETRIES); out: @@ -1125,8 +1170,10 @@ int wl1271_plt_stop(struct wl1271 *wl) mutex_lock(&wl->mutex); wl1271_power_off(wl); wl->flags = 0; + wl->sleep_auth = WL1271_PSM_ILLEGAL; wl->state = WL1271_STATE_OFF; wl->plt = false; + wl->plt_mode = PLT_OFF; wl->rx_counter = 0; mutex_unlock(&wl->mutex); @@ -1154,9 +1201,16 @@ static void wl1271_op_tx(struct ieee80211_hw *hw, struct sk_buff *skb) spin_lock_irqsave(&wl->wl_lock, flags); - /* queue the packet */ + /* + * drop the packet if the link is invalid or the queue is stopped + * for any reason but watermark. Watermark is a "soft"-stop so we + * allow these packets through. 
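
The watermark exception described above leans on the per-queue stop-reason bookkeeping introduced by this series (wlcore_stop_queue_locked(), wlcore_is_queue_stopped_by_reason() and friends). A rough stand-alone model of that reason-bitmask accounting, kept deliberately simple; the real driver tracks this per hardware queue under wl->wl_lock, and the enum values here are invented:

#include <stdbool.h>
#include <stdio.h>

/* invented reason names; the driver has its own WLCORE_QUEUE_STOP_REASON_* */
enum stop_reason {
	STOP_REASON_WATERMARK,
	STOP_REASON_FW_RESTART,
	STOP_REASON_FLUSH,
};

struct queue {
	unsigned long stop_reasons;	/* one bit per active stop reason */
};

static void stop_queue(struct queue *q, enum stop_reason r)
{
	q->stop_reasons |= 1ul << r;
}

static void wake_queue(struct queue *q, enum stop_reason r)
{
	q->stop_reasons &= ~(1ul << r);
}

static bool queue_stopped(const struct queue *q)
{
	return q->stop_reasons != 0;
}

static bool stopped_by(const struct queue *q, enum stop_reason r)
{
	return (q->stop_reasons & (1ul << r)) != 0;
}

int main(void)
{
	struct queue q = { 0 };

	stop_queue(&q, STOP_REASON_WATERMARK);
	stop_queue(&q, STOP_REASON_FW_RESTART);
	wake_queue(&q, STOP_REASON_FW_RESTART);

	/* still stopped: every reason must be cleared before the queue wakes,
	 * but callers can check for the soft watermark reason specifically */
	printf("stopped=%d stopped_by_watermark=%d\n",
	       queue_stopped(&q), stopped_by(&q, STOP_REASON_WATERMARK));
	return 0;
}

The point of the bitmask is that a queue stays stopped until every reason that stopped it is cleared, while callers can still ask whether a specific reason applies, which is what op_tx does for the soft watermark stop.
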
+ */ if (hlid == WL12XX_INVALID_LINK_ID || - (wlvif && !test_bit(hlid, wlvif->links_map))) { + (wlvif && !test_bit(hlid, wlvif->links_map)) || + (wlcore_is_queue_stopped(wl, q) && + !wlcore_is_queue_stopped_by_reason(wl, q, + WLCORE_QUEUE_STOP_REASON_WATERMARK))) { wl1271_debug(DEBUG_TX, "DROP skb hlid %d q %d", hlid, q); ieee80211_free_txskb(hw, skb); goto out; @@ -1172,10 +1226,12 @@ static void wl1271_op_tx(struct ieee80211_hw *hw, struct sk_buff *skb) * The workqueue is slow to process the tx_queue and we need stop * the queue here, otherwise the queue will get too long. */ - if (wl->tx_queue_count[q] >= WL1271_TX_QUEUE_HIGH_WATERMARK) { + if (wl->tx_queue_count[q] >= WL1271_TX_QUEUE_HIGH_WATERMARK && + !wlcore_is_queue_stopped_by_reason(wl, q, + WLCORE_QUEUE_STOP_REASON_WATERMARK)) { wl1271_debug(DEBUG_TX, "op_tx: stopping queues for q %d", q); - ieee80211_stop_queue(wl->hw, mapping); - set_bit(q, &wl->stopped_queues_map); + wlcore_stop_queue_locked(wl, q, + WLCORE_QUEUE_STOP_REASON_WATERMARK); } /* @@ -1209,7 +1265,7 @@ int wl1271_tx_dummy_packet(struct wl1271 *wl) /* The FW is low on RX memory blocks, so send the dummy packet asap */ if (!test_bit(WL1271_FLAG_FW_TX_BUSY, &wl->flags)) - wl1271_tx_work_locked(wl); + return wlcore_tx_work_locked(wl); /* * If the FW TX is busy, TX work will be scheduled by the threaded @@ -1476,8 +1532,15 @@ static int wl1271_configure_wowlan(struct wl1271 *wl, int i, ret; if (!wow || wow->any || !wow->n_patterns) { - wl1271_acx_default_rx_filter_enable(wl, 0, FILTER_SIGNAL); - wl1271_rx_filter_clear_all(wl); + ret = wl1271_acx_default_rx_filter_enable(wl, 0, + FILTER_SIGNAL); + if (ret) + goto out; + + ret = wl1271_rx_filter_clear_all(wl); + if (ret) + goto out; + return 0; } @@ -1493,8 +1556,13 @@ static int wl1271_configure_wowlan(struct wl1271 *wl, } } - wl1271_acx_default_rx_filter_enable(wl, 0, FILTER_SIGNAL); - wl1271_rx_filter_clear_all(wl); + ret = wl1271_acx_default_rx_filter_enable(wl, 0, FILTER_SIGNAL); + if (ret) + goto out; + + ret = wl1271_rx_filter_clear_all(wl); + if (ret) + goto out; /* Translate WoWLAN patterns into filters */ for (i = 0; i < wow->n_patterns; i++) { @@ -1532,11 +1600,20 @@ static int wl1271_configure_suspend_sta(struct wl1271 *wl, if (!test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags)) goto out; + if ((wl->conf.conn.suspend_wake_up_event == + wl->conf.conn.wake_up_event) && + (wl->conf.conn.suspend_listen_interval == + wl->conf.conn.listen_interval)) + goto out; + ret = wl1271_ps_elp_wakeup(wl); if (ret < 0) goto out; - wl1271_configure_wowlan(wl, wow); + ret = wl1271_configure_wowlan(wl, wow); + if (ret < 0) + goto out_sleep; + ret = wl1271_acx_wake_up_conditions(wl, wlvif, wl->conf.conn.suspend_wake_up_event, wl->conf.conn.suspend_listen_interval); @@ -1544,8 +1621,8 @@ static int wl1271_configure_suspend_sta(struct wl1271 *wl, if (ret < 0) wl1271_error("suspend: set wake up conditions failed: %d", ret); +out_sleep: wl1271_ps_elp_sleep(wl); - out: return ret; @@ -1592,6 +1669,13 @@ static void wl1271_configure_resume(struct wl1271 *wl, if ((!is_ap) && (!is_sta)) return; + if (is_sta && + ((wl->conf.conn.suspend_wake_up_event == + wl->conf.conn.wake_up_event) && + (wl->conf.conn.suspend_listen_interval == + wl->conf.conn.listen_interval))) + return; + ret = wl1271_ps_elp_wakeup(wl); if (ret < 0) return; @@ -1624,6 +1708,12 @@ static int wl1271_op_suspend(struct ieee80211_hw *hw, wl1271_debug(DEBUG_MAC80211, "mac80211 suspend wow=%d", !!wow); WARN_ON(!wow); + /* we want to perform the recovery before suspending */ + 
if (test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags)) { + wl1271_warning("postponing suspend to perform recovery"); + return -EBUSY; + } + wl1271_tx_flush(wl); mutex_lock(&wl->mutex); @@ -1664,7 +1754,8 @@ static int wl1271_op_resume(struct ieee80211_hw *hw) struct wl1271 *wl = hw->priv; struct wl12xx_vif *wlvif; unsigned long flags; - bool run_irq_work = false; + bool run_irq_work = false, pending_recovery; + int ret; wl1271_debug(DEBUG_MAC80211, "mac80211 resume wow=%d", wl->wow_enabled); @@ -1680,17 +1771,37 @@ static int wl1271_op_resume(struct ieee80211_hw *hw) run_irq_work = true; spin_unlock_irqrestore(&wl->wl_lock, flags); + mutex_lock(&wl->mutex); + + /* test the recovery flag before calling any SDIO functions */ + pending_recovery = test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, + &wl->flags); + if (run_irq_work) { wl1271_debug(DEBUG_MAC80211, "run postponed irq_work directly"); - wl1271_irq(0, wl); + + /* don't talk to the HW if recovery is pending */ + if (!pending_recovery) { + ret = wlcore_irq_locked(wl); + if (ret) + wl12xx_queue_recovery_work(wl); + } + wlcore_enable_interrupts(wl); } - mutex_lock(&wl->mutex); + if (pending_recovery) { + wl1271_warning("queuing forgotten recovery on resume"); + ieee80211_queue_work(wl->hw, &wl->recovery_work); + goto out; + } + wl12xx_for_each_wlvif(wl, wlvif) { wl1271_configure_resume(wl, wlvif); } + +out: wl->wow_enabled = false; mutex_unlock(&wl->mutex); @@ -1716,29 +1827,15 @@ static int wl1271_op_start(struct ieee80211_hw *hw) return 0; } -static void wl1271_op_stop(struct ieee80211_hw *hw) +static void wlcore_op_stop_locked(struct wl1271 *wl) { - struct wl1271 *wl = hw->priv; int i; - wl1271_debug(DEBUG_MAC80211, "mac80211 stop"); - - /* - * Interrupts must be disabled before setting the state to OFF. - * Otherwise, the interrupt handler might be called and exit without - * reading the interrupt status. - */ - wlcore_disable_interrupts(wl); - mutex_lock(&wl->mutex); if (wl->state == WL1271_STATE_OFF) { - mutex_unlock(&wl->mutex); + if (test_and_clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, + &wl->flags)) + wlcore_enable_interrupts(wl); - /* - * This will not necessarily enable interrupts as interrupts - * may have been disabled when op_stop was called. It will, - * however, balance the above call to disable_interrupts(). - */ - wlcore_enable_interrupts(wl); return; } @@ -1747,8 +1844,16 @@ static void wl1271_op_stop(struct ieee80211_hw *hw) * functions don't perform further work. */ wl->state = WL1271_STATE_OFF; + + /* + * Use the nosync variant to disable interrupts, so the mutex could be + * held while doing so without deadlocking. + */ + wlcore_disable_interrupts_nosync(wl); + mutex_unlock(&wl->mutex); + wlcore_synchronize_interrupts(wl); wl1271_flush_deferred_work(wl); cancel_delayed_work_sync(&wl->scan_complete_work); cancel_work_sync(&wl->netstack_work); @@ -1758,15 +1863,23 @@ static void wl1271_op_stop(struct ieee80211_hw *hw) cancel_delayed_work_sync(&wl->connection_loss_work); /* let's notify MAC80211 about the remaining pending TX frames */ - wl12xx_tx_reset(wl, true); + wl12xx_tx_reset(wl); mutex_lock(&wl->mutex); wl1271_power_off(wl); + /* + * In case a recovery was scheduled, interrupts were disabled to avoid + * an interrupt storm. 
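
The balancing mentioned in this comment matters because interrupt disabling nests: each disable must eventually be matched by exactly one enable, or the IRQ line stays masked. A toy depth counter showing the invariant (illustrative only, not kernel code):

#include <assert.h>
#include <stdio.h>

static int depth;			/* > 0 means the line is masked */

static void irq_disable(void)
{
	depth++;
}

static void irq_enable(void)
{
	assert(depth > 0);		/* never enable more than disabled */
	depth--;
}

int main(void)
{
	irq_disable();			/* e.g. when recovery is scheduled */
	irq_disable();			/* e.g. again on the stop path */

	irq_enable();			/* stop path balances its own disable */
	irq_enable();			/* recovery owner balances the rest */

	printf("still masked: %d\n", depth > 0);	/* 0, fully balanced */
	return 0;
}
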
Now that the power is down, it is safe to + * re-enable interrupts to balance the disable depth + */ + if (test_and_clear_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags)) + wlcore_enable_interrupts(wl); wl->band = IEEE80211_BAND_2GHZ; wl->rx_counter = 0; wl->power_level = WL1271_DEFAULT_POWER_LEVEL; + wl->channel_type = NL80211_CHAN_NO_HT; wl->tx_blocks_available = 0; wl->tx_allocated_blocks = 0; wl->tx_results_count = 0; @@ -1775,6 +1888,7 @@ static void wl1271_op_stop(struct ieee80211_hw *hw) wl->ap_fw_ps_map = 0; wl->ap_ps_map = 0; wl->sched_scanning = false; + wl->sleep_auth = WL1271_PSM_ILLEGAL; memset(wl->roles_map, 0, sizeof(wl->roles_map)); memset(wl->links_map, 0, sizeof(wl->links_map)); memset(wl->roc_map, 0, sizeof(wl->roc_map)); @@ -1799,12 +1913,24 @@ static void wl1271_op_stop(struct ieee80211_hw *hw) wl1271_debugfs_reset(wl); - kfree(wl->fw_status); - wl->fw_status = NULL; + kfree(wl->fw_status_1); + wl->fw_status_1 = NULL; + wl->fw_status_2 = NULL; kfree(wl->tx_res_if); wl->tx_res_if = NULL; kfree(wl->target_mem_map); wl->target_mem_map = NULL; +} + +static void wlcore_op_stop(struct ieee80211_hw *hw) +{ + struct wl1271 *wl = hw->priv; + + wl1271_debug(DEBUG_MAC80211, "mac80211 stop"); + + mutex_lock(&wl->mutex); + + wlcore_op_stop_locked(wl); mutex_unlock(&wl->mutex); } @@ -1894,6 +2020,9 @@ static int wl12xx_init_vif_data(struct wl1271 *wl, struct ieee80211_vif *vif) wl12xx_allocate_rate_policy(wl, &wlvif->sta.basic_rate_idx); wl12xx_allocate_rate_policy(wl, &wlvif->sta.ap_rate_idx); wl12xx_allocate_rate_policy(wl, &wlvif->sta.p2p_rate_idx); + wlvif->basic_rate_set = CONF_TX_RATE_MASK_BASIC; + wlvif->basic_rate = CONF_TX_RATE_MASK_BASIC; + wlvif->rate_set = CONF_TX_RATE_MASK_BASIC; } else { /* init ap data */ wlvif->ap.bcast_hlid = WL12XX_INVALID_LINK_ID; @@ -1903,13 +2032,19 @@ static int wl12xx_init_vif_data(struct wl1271 *wl, struct ieee80211_vif *vif) for (i = 0; i < CONF_TX_MAX_AC_COUNT; i++) wl12xx_allocate_rate_policy(wl, &wlvif->ap.ucast_rate_idx[i]); + wlvif->basic_rate_set = CONF_TX_AP_ENABLED_RATES; + /* + * TODO: check if basic_rate shouldn't be + * wl1271_tx_min_rate_get(wl, wlvif->basic_rate_set); + * instead (the same thing for STA above). + */ + wlvif->basic_rate = CONF_TX_AP_ENABLED_RATES; + /* TODO: this seems to be used only for STA, check it */ + wlvif->rate_set = CONF_TX_AP_ENABLED_RATES; } wlvif->bitrate_masks[IEEE80211_BAND_2GHZ] = wl->conf.tx.basic_rate; wlvif->bitrate_masks[IEEE80211_BAND_5GHZ] = wl->conf.tx.basic_rate_5; - wlvif->basic_rate_set = CONF_TX_RATE_MASK_BASIC; - wlvif->basic_rate = CONF_TX_RATE_MASK_BASIC; - wlvif->rate_set = CONF_TX_RATE_MASK_BASIC; wlvif->beacon_int = WL1271_DEFAULT_BEACON_INT; /* @@ -1919,6 +2054,7 @@ static int wl12xx_init_vif_data(struct wl1271 *wl, struct ieee80211_vif *vif) wlvif->band = wl->band; wlvif->channel = wl->channel; wlvif->power_level = wl->power_level; + wlvif->channel_type = wl->channel_type; INIT_WORK(&wlvif->rx_streaming_enable_work, wl1271_rx_streaming_enable_work); @@ -2170,6 +2306,7 @@ static void __wl1271_op_remove_interface(struct wl1271 *wl, { struct wl12xx_vif *wlvif = wl12xx_vif_to_data(vif); int i, ret; + bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); wl1271_debug(DEBUG_MAC80211, "mac80211 remove interface"); @@ -2250,11 +2387,33 @@ deinit: wlvif->role_id = WL12XX_INVALID_ROLE_ID; wlvif->dev_role_id = WL12XX_INVALID_ROLE_ID; - if (wlvif->bss_type == BSS_TYPE_AP_BSS) + if (is_ap) wl->ap_count--; else wl->sta_count--; + /* + * Last AP, have more stations. 
Configure sleep auth according to STA. + * Don't do thin on unintended recovery. + */ + if (test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags) && + !test_bit(WL1271_FLAG_INTENDED_FW_RECOVERY, &wl->flags)) + goto unlock; + + if (wl->ap_count == 0 && is_ap && wl->sta_count) { + u8 sta_auth = wl->conf.conn.sta_sleep_auth; + /* Configure for power according to debugfs */ + if (sta_auth != WL1271_PSM_ILLEGAL) + wl1271_acx_sleep_auth(wl, sta_auth); + /* Configure for power always on */ + else if (wl->quirks & WLCORE_QUIRK_NO_ELP) + wl1271_acx_sleep_auth(wl, WL1271_PSM_CAM); + /* Configure for ELP power saving */ + else + wl1271_acx_sleep_auth(wl, WL1271_PSM_ELP); + } + +unlock: mutex_unlock(&wl->mutex); del_timer_sync(&wlvif->rx_streaming_timer); @@ -2444,7 +2603,7 @@ static int wl1271_sta_handle_idle(struct wl1271 *wl, struct wl12xx_vif *wlvif, } else { /* The current firmware only supports sched_scan in idle */ if (wl->sched_scanning) { - wl1271_scan_sched_scan_stop(wl); + wl1271_scan_sched_scan_stop(wl, wlvif); ieee80211_sched_scan_stopped(wl->hw); } @@ -2469,13 +2628,24 @@ static int wl12xx_config_vif(struct wl1271 *wl, struct wl12xx_vif *wlvif, /* if the channel changes while joined, join again */ if (changed & IEEE80211_CONF_CHANGE_CHANNEL && ((wlvif->band != conf->channel->band) || - (wlvif->channel != channel))) { + (wlvif->channel != channel) || + (wlvif->channel_type != conf->channel_type))) { /* send all pending packets */ - wl1271_tx_work_locked(wl); + ret = wlcore_tx_work_locked(wl); + if (ret < 0) + return ret; + wlvif->band = conf->channel->band; wlvif->channel = channel; + wlvif->channel_type = conf->channel_type; - if (!is_ap) { + if (is_ap) { + wl1271_set_band_rate(wl, wlvif); + ret = wl1271_init_ap_rates(wl, wlvif); + if (ret < 0) + wl1271_error("AP rate policy change failed %d", + ret); + } else { /* * FIXME: the mac80211 should really provide a fixed * rate to use here. for now, just use the smallest @@ -2583,8 +2753,9 @@ static int wl1271_op_config(struct ieee80211_hw *hw, u32 changed) * frames, such as the deauth. To make sure those frames reach the air, * wait here until the TX queue is fully flushed. */ - if ((changed & IEEE80211_CONF_CHANGE_IDLE) && - (conf->flags & IEEE80211_CONF_IDLE)) + if ((changed & IEEE80211_CONF_CHANGE_CHANNEL) || + ((changed & IEEE80211_CONF_CHANGE_IDLE) && + (conf->flags & IEEE80211_CONF_IDLE))) wl1271_tx_flush(wl); mutex_lock(&wl->mutex); @@ -2593,6 +2764,7 @@ static int wl1271_op_config(struct ieee80211_hw *hw, u32 changed) if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { wl->band = conf->channel->band; wl->channel = channel; + wl->channel_type = conf->channel_type; } if (changed & IEEE80211_CONF_CHANGE_POWER) @@ -2825,17 +2997,6 @@ static int wl1271_set_key(struct wl1271 *wl, struct wl12xx_vif *wlvif, int ret; bool is_ap = (wlvif->bss_type == BSS_TYPE_AP_BSS); - /* - * A role set to GEM cipher requires different Tx settings (namely - * spare blocks). Note when we are in this mode so the HW can adjust. 
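
The GEM tracking removed in this hunk is not lost: the spare-block decision is routed through a chip-specific hook instead, wlcore_hw_get_spare_blocks(), which shows up in the tx.c hunk further down where wl1271_tx_allocate() gains an is_gem argument. A minimal model of pushing such a decision behind an ops table; the names and block counts are made up for illustration:

#include <stdbool.h>
#include <stdio.h>

struct chip;

struct chip_ops {
	/* chip decides how many spare TX blocks a frame needs */
	unsigned int (*get_spare_blocks)(struct chip *chip, bool is_gem);
};

struct chip {
	const struct chip_ops *ops;
};

/* example implementation: GEM-protected frames need extra spare blocks
 * (the counts are invented, not taken from the driver) */
static unsigned int demo_get_spare_blocks(struct chip *chip, bool is_gem)
{
	return is_gem ? 2 : 1;
}

static const struct chip_ops demo_ops = {
	.get_spare_blocks = demo_get_spare_blocks,
};

/* the core no longer tracks GEM state itself; it just asks the chip */
static unsigned int tx_allocate(struct chip *chip, bool is_gem)
{
	return chip->ops->get_spare_blocks(chip, is_gem);
}

int main(void)
{
	struct chip chip = { .ops = &demo_ops };

	printf("spare blocks: plain=%u gem=%u\n",
	       tx_allocate(&chip, false), tx_allocate(&chip, true));
	return 0;
}
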
- */ - if (key_type == KEY_GEM) { - if (action == KEY_ADD_OR_REPLACE) - wlvif->is_gem = true; - else if (action == KEY_REMOVE) - wlvif->is_gem = false; - } - if (is_ap) { struct wl1271_station *wl_sta; u8 hlid; @@ -2913,12 +3074,21 @@ static int wl1271_set_key(struct wl1271 *wl, struct wl12xx_vif *wlvif, return 0; } -static int wl1271_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, +static int wlcore_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, struct ieee80211_vif *vif, struct ieee80211_sta *sta, struct ieee80211_key_conf *key_conf) { struct wl1271 *wl = hw->priv; + + return wlcore_hw_set_key(wl, cmd, vif, sta, key_conf); +} + +int wlcore_set_key(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf) +{ struct wl12xx_vif *wlvif = wl12xx_vif_to_data(vif); int ret; u32 tx_seq_32 = 0; @@ -3029,6 +3199,7 @@ out_unlock: return ret; } +EXPORT_SYMBOL_GPL(wlcore_set_key); static int wl1271_op_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, @@ -3167,6 +3338,7 @@ static void wl1271_op_sched_scan_stop(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct wl1271 *wl = hw->priv; + struct wl12xx_vif *wlvif = wl12xx_vif_to_data(vif); int ret; wl1271_debug(DEBUG_MAC80211, "wl1271_op_sched_scan_stop"); @@ -3180,7 +3352,7 @@ static void wl1271_op_sched_scan_stop(struct ieee80211_hw *hw, if (ret < 0) goto out; - wl1271_scan_sched_scan_stop(wl); + wl1271_scan_sched_scan_stop(wl, wlvif); wl1271_ps_elp_sleep(wl); out: @@ -3316,8 +3488,15 @@ static int wl1271_ap_set_probe_resp_tmpl(struct wl1271 *wl, u32 rates, skb->data, skb->len, 0, rates); - dev_kfree_skb(skb); + + if (ret < 0) + goto out; + + wl1271_debug(DEBUG_AP, "probe response updated"); + set_bit(WLVIF_FLAG_AP_PROBE_RESP_SET, &wlvif->flags); + +out: return ret; } @@ -3422,6 +3601,87 @@ out: return ret; } +static int wlcore_set_beacon_template(struct wl1271 *wl, + struct ieee80211_vif *vif, + bool is_ap) +{ + struct wl12xx_vif *wlvif = wl12xx_vif_to_data(vif); + struct ieee80211_hdr *hdr; + u32 min_rate; + int ret; + int ieoffset = offsetof(struct ieee80211_mgmt, + u.beacon.variable); + struct sk_buff *beacon = ieee80211_beacon_get(wl->hw, vif); + u16 tmpl_id; + + if (!beacon) { + ret = -EINVAL; + goto out; + } + + wl1271_debug(DEBUG_MASTER, "beacon updated"); + + ret = wl1271_ssid_set(vif, beacon, ieoffset); + if (ret < 0) { + dev_kfree_skb(beacon); + goto out; + } + min_rate = wl1271_tx_min_rate_get(wl, wlvif->basic_rate_set); + tmpl_id = is_ap ? CMD_TEMPL_AP_BEACON : + CMD_TEMPL_BEACON; + ret = wl1271_cmd_template_set(wl, wlvif->role_id, tmpl_id, + beacon->data, + beacon->len, 0, + min_rate); + if (ret < 0) { + dev_kfree_skb(beacon); + goto out; + } + + /* + * In case we already have a probe-resp beacon set explicitly + * by usermode, don't use the beacon data. + */ + if (test_bit(WLVIF_FLAG_AP_PROBE_RESP_SET, &wlvif->flags)) + goto end_bcn; + + /* remove TIM ie from probe response */ + wl12xx_remove_ie(beacon, WLAN_EID_TIM, ieoffset); + + /* + * remove p2p ie from probe response. + * the fw reponds to probe requests that don't include + * the p2p ie. probe requests with p2p ie will be passed, + * and will be responded by the supplicant (the spec + * forbids including the p2p ie when responding to probe + * requests that didn't include it). 
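
The probe response template here is derived from the beacon by stripping elements that must not appear in it (the TIM, and the P2P vendor IE as the comment explains). A toy, stand-alone version of that kind of IE removal over an (id, length, payload) buffer; this is only a sketch of the idea, not the driver's wl12xx_remove_ie()/wl12xx_remove_vendor_ie() implementation:

#include <stdio.h>
#include <string.h>

/* drop the first information element with the given id from a buffer laid
 * out as a sequence of (id, len, payload) triplets; returns the new length */
static size_t remove_ie(unsigned char *buf, size_t len, unsigned char id)
{
	size_t pos = 0;

	while (pos + 2 <= len) {
		size_t elen = 2 + buf[pos + 1];

		if (pos + elen > len)
			break;			/* malformed element: stop */
		if (buf[pos] == id) {
			memmove(buf + pos, buf + pos + elen,
				len - pos - elen);
			return len - elen;
		}
		pos += elen;
	}
	return len;
}

int main(void)
{
	/* toy IEs: SSID(0)="ab", TIM(5) with 1 byte, rates(1) with 1 byte */
	unsigned char ies[] = { 0x00, 2, 'a', 'b',
				0x05, 1, 0x00,
				0x01, 1, 0x82 };
	size_t len = remove_ie(ies, sizeof(ies), 0x05);

	printf("length %zu -> %zu\n", sizeof(ies), len);	/* 10 -> 7 */
	return 0;
}
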
+ */ + wl12xx_remove_vendor_ie(beacon, WLAN_OUI_WFA, + WLAN_OUI_TYPE_WFA_P2P, ieoffset); + + hdr = (struct ieee80211_hdr *) beacon->data; + hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT | + IEEE80211_STYPE_PROBE_RESP); + if (is_ap) + ret = wl1271_ap_set_probe_resp_tmpl_legacy(wl, vif, + beacon->data, + beacon->len, + min_rate); + else + ret = wl1271_cmd_template_set(wl, wlvif->role_id, + CMD_TEMPL_PROBE_RESPONSE, + beacon->data, + beacon->len, 0, + min_rate); +end_bcn: + dev_kfree_skb(beacon); + if (ret < 0) + goto out; + +out: + return ret; +} + static int wl1271_bss_beacon_info_changed(struct wl1271 *wl, struct ieee80211_vif *vif, struct ieee80211_bss_conf *bss_conf, @@ -3440,81 +3700,12 @@ static int wl1271_bss_beacon_info_changed(struct wl1271 *wl, if ((changed & BSS_CHANGED_AP_PROBE_RESP) && is_ap) { u32 rate = wl1271_tx_min_rate_get(wl, wlvif->basic_rate_set); - if (!wl1271_ap_set_probe_resp_tmpl(wl, rate, vif)) { - wl1271_debug(DEBUG_AP, "probe response updated"); - set_bit(WLVIF_FLAG_AP_PROBE_RESP_SET, &wlvif->flags); - } + + wl1271_ap_set_probe_resp_tmpl(wl, rate, vif); } if ((changed & BSS_CHANGED_BEACON)) { - struct ieee80211_hdr *hdr; - u32 min_rate; - int ieoffset = offsetof(struct ieee80211_mgmt, - u.beacon.variable); - struct sk_buff *beacon = ieee80211_beacon_get(wl->hw, vif); - u16 tmpl_id; - - if (!beacon) { - ret = -EINVAL; - goto out; - } - - wl1271_debug(DEBUG_MASTER, "beacon updated"); - - ret = wl1271_ssid_set(vif, beacon, ieoffset); - if (ret < 0) { - dev_kfree_skb(beacon); - goto out; - } - min_rate = wl1271_tx_min_rate_get(wl, wlvif->basic_rate_set); - tmpl_id = is_ap ? CMD_TEMPL_AP_BEACON : - CMD_TEMPL_BEACON; - ret = wl1271_cmd_template_set(wl, wlvif->role_id, tmpl_id, - beacon->data, - beacon->len, 0, - min_rate); - if (ret < 0) { - dev_kfree_skb(beacon); - goto out; - } - - /* - * In case we already have a probe-resp beacon set explicitly - * by usermode, don't use the beacon data. - */ - if (test_bit(WLVIF_FLAG_AP_PROBE_RESP_SET, &wlvif->flags)) - goto end_bcn; - - /* remove TIM ie from probe response */ - wl12xx_remove_ie(beacon, WLAN_EID_TIM, ieoffset); - - /* - * remove p2p ie from probe response. - * the fw reponds to probe requests that don't include - * the p2p ie. probe requests with p2p ie will be passed, - * and will be responded by the supplicant (the spec - * forbids including the p2p ie when responding to probe - * requests that didn't include it). 
- */ - wl12xx_remove_vendor_ie(beacon, WLAN_OUI_WFA, - WLAN_OUI_TYPE_WFA_P2P, ieoffset); - - hdr = (struct ieee80211_hdr *) beacon->data; - hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT | - IEEE80211_STYPE_PROBE_RESP); - if (is_ap) - ret = wl1271_ap_set_probe_resp_tmpl_legacy(wl, vif, - beacon->data, - beacon->len, - min_rate); - else - ret = wl1271_cmd_template_set(wl, wlvif->role_id, - CMD_TEMPL_PROBE_RESPONSE, - beacon->data, - beacon->len, 0, - min_rate); -end_bcn: - dev_kfree_skb(beacon); + ret = wlcore_set_beacon_template(wl, vif, is_ap); if (ret < 0) goto out; } @@ -3551,6 +3742,14 @@ static void wl1271_bss_info_changed_ap(struct wl1271 *wl, ret = wl1271_ap_init_templates(wl, vif); if (ret < 0) goto out; + + ret = wl1271_ap_set_probe_resp_tmpl(wl, wlvif->basic_rate, vif); + if (ret < 0) + goto out; + + ret = wlcore_set_beacon_template(wl, vif, true); + if (ret < 0) + goto out; } ret = wl1271_bss_beacon_info_changed(wl, vif, bss_conf, changed); @@ -3691,7 +3890,8 @@ static void wl1271_bss_info_changed_sta(struct wl1271 *wl, sta_rate_set = sta->supp_rates[wl->hw->conf.channel->band]; if (sta->ht_cap.ht_supported) sta_rate_set |= - (sta->ht_cap.mcs.rx_mask[0] << HW_HT_RATES_OFFSET); + (sta->ht_cap.mcs.rx_mask[0] << HW_HT_RATES_OFFSET) | + (sta->ht_cap.mcs.rx_mask[1] << HW_MIMO_RATES_OFFSET); sta_ht_cap = sta->ht_cap; sta_exists = true; @@ -3704,13 +3904,11 @@ sta_not_found: u32 rates; int ieoffset; wlvif->aid = bss_conf->aid; + wlvif->channel_type = bss_conf->channel_type; wlvif->beacon_int = bss_conf->beacon_int; do_join = true; set_assoc = true; - /* Cancel connection_loss_work */ - cancel_delayed_work_sync(&wl->connection_loss_work); - /* * use basic rates from AP, and determine lowest rate * to use with control frames. @@ -3960,6 +4158,17 @@ static void wl1271_op_bss_info_changed(struct ieee80211_hw *hw, wl1271_debug(DEBUG_MAC80211, "mac80211 bss info changed 0x%x", (int)changed); + /* + * make sure to cancel pending disconnections if our association + * state changed + */ + if (!is_ap && (changed & BSS_CHANGED_ASSOC)) + cancel_delayed_work_sync(&wl->connection_loss_work); + + if (is_ap && (changed & BSS_CHANGED_BEACON_ENABLED) && + !bss_conf->enable_beacon) + wl1271_tx_flush(wl); + mutex_lock(&wl->mutex); if (unlikely(wl->state == WL1271_STATE_OFF)) @@ -4068,16 +4277,13 @@ out: static int wl1271_op_get_survey(struct ieee80211_hw *hw, int idx, struct survey_info *survey) { - struct wl1271 *wl = hw->priv; struct ieee80211_conf *conf = &hw->conf; if (idx != 0) return -ENOENT; survey->channel = conf->channel; - survey->filled = SURVEY_INFO_NOISE_DBM; - survey->noise = wl->noise; - + survey->filled = 0; return 0; } @@ -4343,9 +4549,14 @@ static int wl1271_op_ampdu_action(struct ieee80211_hw *hw, case IEEE80211_AMPDU_RX_STOP: if (!(*ba_bitmap & BIT(tid))) { - ret = -EINVAL; - wl1271_error("no active RX BA session on tid: %d", + /* + * this happens on reconfig - so only output a debug + * message for now, and don't fail the function. 
+ */ + wl1271_debug(DEBUG_MAC80211, + "no active RX BA session on tid: %d", tid); + ret = 0; break; } @@ -4394,7 +4605,7 @@ static int wl12xx_set_bitrate_mask(struct ieee80211_hw *hw, mutex_lock(&wl->mutex); - for (i = 0; i < IEEE80211_NUM_BANDS; i++) + for (i = 0; i < WLCORE_NUM_BANDS; i++) wlvif->bitrate_masks[i] = wl1271_tx_enabled_rates_get(wl, mask->control[i].legacy, @@ -4462,6 +4673,13 @@ out: mutex_unlock(&wl->mutex); } +static void wlcore_op_flush(struct ieee80211_hw *hw, bool drop) +{ + struct wl1271 *wl = hw->priv; + + wl1271_tx_flush(wl); +} + static bool wl1271_tx_frames_pending(struct ieee80211_hw *hw) { struct wl1271 *wl = hw->priv; @@ -4624,7 +4842,7 @@ static struct ieee80211_supported_band wl1271_band_5ghz = { static const struct ieee80211_ops wl1271_ops = { .start = wl1271_op_start, - .stop = wl1271_op_stop, + .stop = wlcore_op_stop, .add_interface = wl1271_op_add_interface, .remove_interface = wl1271_op_remove_interface, .change_interface = wl12xx_op_change_interface, @@ -4636,7 +4854,7 @@ static const struct ieee80211_ops wl1271_ops = { .prepare_multicast = wl1271_op_prepare_multicast, .configure_filter = wl1271_op_configure_filter, .tx = wl1271_op_tx, - .set_key = wl1271_op_set_key, + .set_key = wlcore_op_set_key, .hw_scan = wl1271_op_hw_scan, .cancel_hw_scan = wl1271_op_cancel_hw_scan, .sched_scan_start = wl1271_op_sched_scan_start, @@ -4652,6 +4870,7 @@ static const struct ieee80211_ops wl1271_ops = { .tx_frames_pending = wl1271_tx_frames_pending, .set_bitrate_mask = wl12xx_set_bitrate_mask, .channel_switch = wl12xx_op_channel_switch, + .flush = wlcore_op_flush, CFG80211_TESTMODE_CMD(wl1271_tm_cmd) }; @@ -4882,18 +5101,22 @@ static int wl12xx_get_hw_info(struct wl1271 *wl) if (ret < 0) goto out; - wl->chip.id = wlcore_read_reg(wl, REG_CHIP_ID_B); + ret = wlcore_read_reg(wl, REG_CHIP_ID_B, &wl->chip.id); + if (ret < 0) + goto out; wl->fuse_oui_addr = 0; wl->fuse_nic_addr = 0; - wl->hw_pg_ver = wl->ops->get_pg_ver(wl); + ret = wl->ops->get_pg_ver(wl, &wl->hw_pg_ver); + if (ret < 0) + goto out; if (wl->ops->get_mac) - wl->ops->get_mac(wl); + ret = wl->ops->get_mac(wl); - wl1271_power_off(wl); out: + wl1271_power_off(wl); return ret; } @@ -4905,14 +5128,8 @@ static int wl1271_register_hw(struct wl1271 *wl) if (wl->mac80211_registered) return 0; - ret = wl12xx_get_hw_info(wl); - if (ret < 0) { - wl1271_error("couldn't get hw info"); - goto out; - } - - ret = wl1271_fetch_nvs(wl); - if (ret == 0) { + wl1271_fetch_nvs(wl); + if (wl->nvs != NULL) { /* NOTE: The wl->nvs->nvs element must be first, in * order to simplify the casting, we assume it is at * the beginning of the wl->nvs structure. @@ -4960,6 +5177,29 @@ static void wl1271_unregister_hw(struct wl1271 *wl) } +static const struct ieee80211_iface_limit wlcore_iface_limits[] = { + { + .max = 2, + .types = BIT(NL80211_IFTYPE_STATION), + }, + { + .max = 1, + .types = BIT(NL80211_IFTYPE_AP) | + BIT(NL80211_IFTYPE_P2P_GO) | + BIT(NL80211_IFTYPE_P2P_CLIENT), + }, +}; + +static const struct ieee80211_iface_combination +wlcore_iface_combinations[] = { + { + .num_different_channels = 1, + .max_interfaces = 2, + .limits = wlcore_iface_limits, + .n_limits = ARRAY_SIZE(wlcore_iface_limits), + }, +}; + static int wl1271_init_ieee80211(struct wl1271 *wl) { static const u32 cipher_suites[] = { @@ -4970,9 +5210,11 @@ static int wl1271_init_ieee80211(struct wl1271 *wl) WL1271_CIPHER_SUITE_GEM, }; - /* The tx descriptor buffer and the TKIP space. 
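
On the headroom change that follows: TKIP header space is now reserved only for chips that declare WLCORE_QUIRK_TKIP_HEADER_SPACE, instead of unconditionally. A stand-alone sketch of that computation, with placeholder sizes rather than the driver's real values:

#include <stdio.h>

#define QUIRK_TKIP_HEADER_SPACE	(1u << 0)

/* placeholder sizes, not the real ones */
#define TX_HW_DESCR_LEN		32
#define EXTRA_SPACE_TKIP	4

static unsigned int extra_tx_headroom(unsigned int quirks)
{
	unsigned int headroom = TX_HW_DESCR_LEN;	/* always reserved */

	/* only chips that leave TKIP header handling to the host
	 * need the extra room */
	if (quirks & QUIRK_TKIP_HEADER_SPACE)
		headroom += EXTRA_SPACE_TKIP;

	return headroom;
}

int main(void)
{
	printf("with quirk: %u bytes, without: %u bytes\n",
	       extra_tx_headroom(QUIRK_TKIP_HEADER_SPACE),
	       extra_tx_headroom(0));
	return 0;
}
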
*/ - wl->hw->extra_tx_headroom = WL1271_EXTRA_SPACE_TKIP + - sizeof(struct wl1271_tx_hw_descr); + /* The tx descriptor buffer */ + wl->hw->extra_tx_headroom = sizeof(struct wl1271_tx_hw_descr); + + if (wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) + wl->hw->extra_tx_headroom += WL1271_EXTRA_SPACE_TKIP; /* unit us */ /* FIXME: find a proper value */ @@ -5025,12 +5267,14 @@ static int wl1271_init_ieee80211(struct wl1271 *wl) */ memcpy(&wl->bands[IEEE80211_BAND_2GHZ], &wl1271_band_2ghz, sizeof(wl1271_band_2ghz)); - memcpy(&wl->bands[IEEE80211_BAND_2GHZ].ht_cap, &wl->ht_cap, - sizeof(wl->ht_cap)); + memcpy(&wl->bands[IEEE80211_BAND_2GHZ].ht_cap, + &wl->ht_cap[IEEE80211_BAND_2GHZ], + sizeof(*wl->ht_cap)); memcpy(&wl->bands[IEEE80211_BAND_5GHZ], &wl1271_band_5ghz, sizeof(wl1271_band_5ghz)); - memcpy(&wl->bands[IEEE80211_BAND_5GHZ].ht_cap, &wl->ht_cap, - sizeof(wl->ht_cap)); + memcpy(&wl->bands[IEEE80211_BAND_5GHZ].ht_cap, + &wl->ht_cap[IEEE80211_BAND_5GHZ], + sizeof(*wl->ht_cap)); wl->hw->wiphy->bands[IEEE80211_BAND_2GHZ] = &wl->bands[IEEE80211_BAND_2GHZ]; @@ -5049,6 +5293,11 @@ static int wl1271_init_ieee80211(struct wl1271 *wl) NL80211_PROBE_RESP_OFFLOAD_SUPPORT_WPS2 | NL80211_PROBE_RESP_OFFLOAD_SUPPORT_P2P; + /* allowed interface combinations */ + wl->hw->wiphy->iface_combinations = wlcore_iface_combinations; + wl->hw->wiphy->n_iface_combinations = + ARRAY_SIZE(wlcore_iface_combinations); + SET_IEEE80211_DEV(wl->hw, wl->dev); wl->hw->sta_data_size = sizeof(struct wl1271_station); @@ -5117,8 +5366,10 @@ struct ieee80211_hw *wlcore_alloc_hw(size_t priv_size) wl->rx_counter = 0; wl->power_level = WL1271_DEFAULT_POWER_LEVEL; wl->band = IEEE80211_BAND_2GHZ; + wl->channel_type = NL80211_CHAN_NO_HT; wl->flags = 0; wl->sg_enabled = true; + wl->sleep_auth = WL1271_PSM_ILLEGAL; wl->hw_pg_ver = -1; wl->ap_ps_map = 0; wl->ap_fw_ps_map = 0; @@ -5142,6 +5393,7 @@ struct ieee80211_hw *wlcore_alloc_hw(size_t priv_size) wl->state = WL1271_STATE_OFF; wl->fw_type = WL12XX_FW_TYPE_NONE; mutex_init(&wl->mutex); + mutex_init(&wl->flush_mutex); order = get_order(WL1271_AGGR_BUFFER_SIZE); wl->aggr_buf = (u8 *)__get_free_pages(GFP_KERNEL, order); @@ -5222,7 +5474,7 @@ int wlcore_free_hw(struct wl1271 *wl) kfree(wl->nvs); wl->nvs = NULL; - kfree(wl->fw_status); + kfree(wl->fw_status_1); kfree(wl->tx_res_if); destroy_workqueue(wl->freezable_wq); @@ -5279,8 +5531,6 @@ int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev) wlcore_adjust_conf(wl); wl->irq = platform_get_irq(pdev, 0); - wl->ref_clock = pdata->board_ref_clock; - wl->tcxo_clock = pdata->board_tcxo_clock; wl->platform_quirks = pdata->platform_quirks; wl->set_power = pdata->set_power; wl->dev = &pdev->dev; @@ -5293,7 +5543,7 @@ int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev) else irqflags = IRQF_TRIGGER_HIGH | IRQF_ONESHOT; - ret = request_threaded_irq(wl->irq, wl12xx_hardirq, wl1271_irq, + ret = request_threaded_irq(wl->irq, wl12xx_hardirq, wlcore_irq, irqflags, pdev->name, wl); if (ret < 0) { @@ -5301,6 +5551,7 @@ int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev) goto out_free_hw; } +#ifdef CONFIG_PM ret = enable_irq_wake(wl->irq); if (!ret) { wl->irq_wake_enabled = true; @@ -5314,8 +5565,19 @@ int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev) WL1271_RX_FILTER_MAX_PATTERN_SIZE; } } +#endif disable_irq(wl->irq); + ret = wl12xx_get_hw_info(wl); + if (ret < 0) { + wl1271_error("couldn't get hw info"); + goto out_irq; + } + + ret = wl->ops->identify_chip(wl); + if 
(ret < 0) + goto out_irq; + ret = wl1271_init_ieee80211(wl); if (ret) goto out_irq; @@ -5328,7 +5590,7 @@ int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev) ret = device_create_file(wl->dev, &dev_attr_bt_coex_state); if (ret < 0) { wl1271_error("failed to create sysfs file bt_coex_state"); - goto out_irq; + goto out_unreg; } /* Create sysfs file to get HW PG version */ @@ -5353,6 +5615,9 @@ out_hw_pg_ver: out_bt_coex_state: device_remove_file(wl->dev, &dev_attr_bt_coex_state); +out_unreg: + wl1271_unregister_hw(wl); + out_irq: free_irq(wl->irq, wl); diff --git a/drivers/net/wireless/ti/wlcore/ps.c b/drivers/net/wireless/ti/wlcore/ps.c index 756eee2257b4..46d36fd30eba 100644 --- a/drivers/net/wireless/ti/wlcore/ps.c +++ b/drivers/net/wireless/ti/wlcore/ps.c @@ -28,11 +28,14 @@ #define WL1271_WAKEUP_TIMEOUT 500 +#define ELP_ENTRY_DELAY 5 + void wl1271_elp_work(struct work_struct *work) { struct delayed_work *dwork; struct wl1271 *wl; struct wl12xx_vif *wlvif; + int ret; dwork = container_of(work, struct delayed_work, work); wl = container_of(dwork, struct wl1271, elp_work); @@ -61,7 +64,12 @@ void wl1271_elp_work(struct work_struct *work) } wl1271_debug(DEBUG_PSM, "chip to elp"); - wl1271_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_SLEEP); + ret = wlcore_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_SLEEP); + if (ret < 0) { + wl12xx_queue_recovery_work(wl); + goto out; + } + set_bit(WL1271_FLAG_IN_ELP, &wl->flags); out: @@ -72,8 +80,9 @@ out: void wl1271_ps_elp_sleep(struct wl1271 *wl) { struct wl12xx_vif *wlvif; + u32 timeout; - if (wl->quirks & WLCORE_QUIRK_NO_ELP) + if (wl->sleep_auth != WL1271_PSM_ELP) return; /* we shouldn't get consecutive sleep requests */ @@ -89,8 +98,13 @@ void wl1271_ps_elp_sleep(struct wl1271 *wl) return; } + if (wl->conf.conn.forced_ps) + timeout = ELP_ENTRY_DELAY; + else + timeout = wl->conf.conn.dynamic_ps_timeout; + ieee80211_queue_delayed_work(wl->hw, &wl->elp_work, - msecs_to_jiffies(wl->conf.conn.dynamic_ps_timeout)); + msecs_to_jiffies(timeout)); } int wl1271_ps_elp_wakeup(struct wl1271 *wl) @@ -127,7 +141,11 @@ int wl1271_ps_elp_wakeup(struct wl1271 *wl) wl->elp_compl = &compl; spin_unlock_irqrestore(&wl->wl_lock, flags); - wl1271_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_WAKE_UP); + ret = wlcore_raw_write32(wl, HW_ACCESS_ELP_CTRL_REG, ELPCTRL_WAKE_UP); + if (ret < 0) { + wl12xx_queue_recovery_work(wl); + goto err; + } if (!pending) { ret = wait_for_completion_timeout( @@ -185,8 +203,12 @@ int wl1271_ps_set_mode(struct wl1271 *wl, struct wl12xx_vif *wlvif, set_bit(WLVIF_FLAG_IN_PS, &wlvif->flags); - /* enable beacon early termination. Not relevant for 5GHz */ - if (wlvif->band == IEEE80211_BAND_2GHZ) { + /* + * enable beacon early termination. + * Not relevant for 5GHz and for high rates. 
+ */ + if ((wlvif->band == IEEE80211_BAND_2GHZ) && + (wlvif->basic_rate < CONF_HW_BIT_RATE_9MBPS)) { ret = wl1271_acx_bet_enable(wl, wlvif, true); if (ret < 0) return ret; @@ -196,7 +218,8 @@ int wl1271_ps_set_mode(struct wl1271 *wl, struct wl12xx_vif *wlvif, wl1271_debug(DEBUG_PSM, "leaving psm"); /* disable beacon early termination */ - if (wlvif->band == IEEE80211_BAND_2GHZ) { + if ((wlvif->band == IEEE80211_BAND_2GHZ) && + (wlvif->basic_rate < CONF_HW_BIT_RATE_9MBPS)) { ret = wl1271_acx_bet_enable(wl, wlvif, false); if (ret < 0) return ret; diff --git a/drivers/net/wireless/ti/wlcore/rx.c b/drivers/net/wireless/ti/wlcore/rx.c index d6a3c6b07827..f55e2f9e7ac5 100644 --- a/drivers/net/wireless/ti/wlcore/rx.c +++ b/drivers/net/wireless/ti/wlcore/rx.c @@ -127,7 +127,7 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length, } if (rx_align == WLCORE_RX_BUF_UNALIGNED) - reserved = NET_IP_ALIGN; + reserved = RX_BUF_ALIGN; /* the data read starts with the descriptor */ desc = (struct wl1271_rx_descriptor *) data; @@ -175,7 +175,7 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length, */ memcpy(buf, data + sizeof(*desc), pkt_data_len); if (rx_align == WLCORE_RX_BUF_PADDED) - skb_pull(skb, NET_IP_ALIGN); + skb_pull(skb, RX_BUF_ALIGN); *hlid = desc->hlid; @@ -186,6 +186,7 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length, is_data = 1; wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon); + wlcore_hw_set_rx_csum(wl, desc, skb); seq_num = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4; wl1271_debug(DEBUG_RX, "rx skb 0x%p: %d B %s seq %d hlid %d", skb, @@ -199,17 +200,18 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length, return is_data; } -void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status) +int wlcore_rx(struct wl1271 *wl, struct wl_fw_status_1 *status) { unsigned long active_hlids[BITS_TO_LONGS(WL12XX_MAX_LINKS)] = {0}; u32 buf_size; - u32 fw_rx_counter = status->fw_rx_counter & NUM_RX_PKT_DESC_MOD_MASK; - u32 drv_rx_counter = wl->rx_counter & NUM_RX_PKT_DESC_MOD_MASK; + u32 fw_rx_counter = status->fw_rx_counter % wl->num_rx_desc; + u32 drv_rx_counter = wl->rx_counter % wl->num_rx_desc; u32 rx_counter; u32 pkt_len, align_pkt_len; u32 pkt_offset, des; u8 hlid; enum wl_rx_buf_align rx_align; + int ret = 0; while (drv_rx_counter != fw_rx_counter) { buf_size = 0; @@ -223,7 +225,7 @@ void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status) break; buf_size += align_pkt_len; rx_counter++; - rx_counter &= NUM_RX_PKT_DESC_MOD_MASK; + rx_counter %= wl->num_rx_desc; } if (buf_size == 0) { @@ -233,9 +235,14 @@ void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status) /* Read all available packets at once */ des = le32_to_cpu(status->rx_pkt_descs[drv_rx_counter]); - wlcore_hw_prepare_read(wl, des, buf_size); - wlcore_read_data(wl, REG_SLV_MEM_DATA, wl->aggr_buf, - buf_size, true); + ret = wlcore_hw_prepare_read(wl, des, buf_size); + if (ret < 0) + goto out; + + ret = wlcore_read_data(wl, REG_SLV_MEM_DATA, wl->aggr_buf, + buf_size, true); + if (ret < 0) + goto out; /* Split data into separate packets */ pkt_offset = 0; @@ -263,7 +270,7 @@ void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status) wl->rx_counter++; drv_rx_counter++; - drv_rx_counter &= NUM_RX_PKT_DESC_MOD_MASK; + drv_rx_counter %= wl->num_rx_desc; pkt_offset += wlcore_rx_get_align_buf_size(wl, pkt_len); } } @@ -272,11 +279,17 @@ void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status) * Write the driver's 
packet counter to the FW. This is only required * for older hardware revisions */ - if (wl->quirks & WLCORE_QUIRK_END_OF_TRANSACTION) - wl1271_write32(wl, WL12XX_REG_RX_DRIVER_COUNTER, - wl->rx_counter); + if (wl->quirks & WLCORE_QUIRK_END_OF_TRANSACTION) { + ret = wlcore_write32(wl, WL12XX_REG_RX_DRIVER_COUNTER, + wl->rx_counter); + if (ret < 0) + goto out; + } wl12xx_rearm_rx_streaming(wl, active_hlids); + +out: + return ret; } #ifdef CONFIG_PM @@ -305,14 +318,19 @@ int wl1271_rx_filter_enable(struct wl1271 *wl, return 0; } -void wl1271_rx_filter_clear_all(struct wl1271 *wl) +int wl1271_rx_filter_clear_all(struct wl1271 *wl) { - int i; + int i, ret = 0; for (i = 0; i < WL1271_MAX_RX_FILTERS; i++) { if (!wl->rx_filter_enabled[i]) continue; - wl1271_rx_filter_enable(wl, i, 0, NULL); + ret = wl1271_rx_filter_enable(wl, i, 0, NULL); + if (ret) + goto out; } + +out: + return ret; } #endif /* CONFIG_PM */ diff --git a/drivers/net/wireless/ti/wlcore/rx.h b/drivers/net/wireless/ti/wlcore/rx.h index e9a162a864ca..71eba1899915 100644 --- a/drivers/net/wireless/ti/wlcore/rx.h +++ b/drivers/net/wireless/ti/wlcore/rx.h @@ -38,8 +38,6 @@ #define RX_DESC_PACKETID_SHIFT 11 #define RX_MAX_PACKET_ID 3 -#define NUM_RX_PKT_DESC_MOD_MASK 7 - #define RX_DESC_VALID_FCS 0x0001 #define RX_DESC_MATCH_RXADDR1 0x0002 #define RX_DESC_MCAST 0x0004 @@ -102,6 +100,15 @@ /* If set, the start of IP payload is not 4 bytes aligned */ #define RX_BUF_UNALIGNED_PAYLOAD BIT(20) +/* If set, the buffer was padded by the FW to be 4 bytes aligned */ +#define RX_BUF_PADDED_PAYLOAD BIT(30) + +/* + * Account for the padding inserted by the FW in case of RX_ALIGNMENT + * or for fixing alignment in case the packet wasn't aligned. + */ +#define RX_BUF_ALIGN 2 + /* Describes the alignment state of a Rx buffer */ enum wl_rx_buf_align { WLCORE_RX_BUF_ALIGNED, @@ -136,11 +143,11 @@ struct wl1271_rx_descriptor { u8 reserved; } __packed; -void wl12xx_rx(struct wl1271 *wl, struct wl_fw_status *status); +int wlcore_rx(struct wl1271 *wl, struct wl_fw_status_1 *status); u8 wl1271_rate_to_idx(int rate, enum ieee80211_band band); int wl1271_rx_filter_enable(struct wl1271 *wl, int index, bool enable, struct wl12xx_rx_filter *filter); -void wl1271_rx_filter_clear_all(struct wl1271 *wl); +int wl1271_rx_filter_clear_all(struct wl1271 *wl); #endif diff --git a/drivers/net/wireless/ti/wlcore/scan.c b/drivers/net/wireless/ti/wlcore/scan.c index ade21a011c45..dbeca1bfbb2c 100644 --- a/drivers/net/wireless/ti/wlcore/scan.c +++ b/drivers/net/wireless/ti/wlcore/scan.c @@ -226,7 +226,7 @@ static int wl1271_scan_send(struct wl1271 *wl, struct ieee80211_vif *vif, cmd->params.role_id, band, wl->scan.ssid, wl->scan.ssid_len, wl->scan.req->ie, - wl->scan.req->ie_len); + wl->scan.req->ie_len, false); if (ret < 0) { wl1271_error("PROBE request template failed"); goto out; @@ -411,7 +411,8 @@ wl1271_scan_get_sched_scan_channels(struct wl1271 *wl, struct cfg80211_sched_scan_request *req, struct conn_scan_ch_params *channels, u32 band, bool radar, bool passive, - int start, int max_channels) + int start, int max_channels, + u8 *n_pactive_ch) { struct conf_sched_scan_settings *c = &wl->conf.sched_scan; int i, j; @@ -479,6 +480,23 @@ wl1271_scan_get_sched_scan_channels(struct wl1271 *wl, channels[j].tx_power_att = req->channels[i]->max_power; channels[j].channel = req->channels[i]->hw_value; + if ((band == IEEE80211_BAND_2GHZ) && + (channels[j].channel >= 12) && + (channels[j].channel <= 14) && + (flags & IEEE80211_CHAN_PASSIVE_SCAN) && + !force_passive) { + /* pactive 
channels treated as DFS */ + channels[j].flags = SCAN_CHANNEL_FLAGS_DFS; + + /* + * n_pactive_ch is counted down from the end of + * the passive channel list + */ + (*n_pactive_ch)++; + wl1271_debug(DEBUG_SCAN, "n_pactive_ch = %d", + *n_pactive_ch); + } + j++; } } @@ -491,38 +509,47 @@ wl1271_scan_sched_scan_channels(struct wl1271 *wl, struct cfg80211_sched_scan_request *req, struct wl1271_cmd_sched_scan_config *cfg) { + u8 n_pactive_ch = 0; + cfg->passive[0] = wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_2, IEEE80211_BAND_2GHZ, false, true, 0, - MAX_CHANNELS_2GHZ); + MAX_CHANNELS_2GHZ, + &n_pactive_ch); cfg->active[0] = wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_2, IEEE80211_BAND_2GHZ, false, false, cfg->passive[0], - MAX_CHANNELS_2GHZ); + MAX_CHANNELS_2GHZ, + &n_pactive_ch); cfg->passive[1] = wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5, IEEE80211_BAND_5GHZ, false, true, 0, - MAX_CHANNELS_5GHZ); + MAX_CHANNELS_5GHZ, + &n_pactive_ch); cfg->dfs = wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5, IEEE80211_BAND_5GHZ, true, true, cfg->passive[1], - MAX_CHANNELS_5GHZ); + MAX_CHANNELS_5GHZ, + &n_pactive_ch); cfg->active[1] = wl1271_scan_get_sched_scan_channels(wl, req, cfg->channels_5, IEEE80211_BAND_5GHZ, false, false, cfg->passive[1] + cfg->dfs, - MAX_CHANNELS_5GHZ); + MAX_CHANNELS_5GHZ, + &n_pactive_ch); /* 802.11j channels are not supported yet */ cfg->passive[2] = 0; cfg->active[2] = 0; + cfg->n_pactive_ch = n_pactive_ch; + wl1271_debug(DEBUG_SCAN, " 2.4GHz: active %d passive %d", cfg->active[0], cfg->passive[0]); wl1271_debug(DEBUG_SCAN, " 5GHz: active %d passive %d", @@ -537,6 +564,7 @@ wl1271_scan_sched_scan_channels(struct wl1271 *wl, /* Returns the scan type to be used or a negative value on error */ static int wl12xx_scan_sched_scan_ssid_list(struct wl1271 *wl, + struct wl12xx_vif *wlvif, struct cfg80211_sched_scan_request *req) { struct wl1271_cmd_sched_scan_ssid_list *cmd = NULL; @@ -565,6 +593,7 @@ wl12xx_scan_sched_scan_ssid_list(struct wl1271 *wl, goto out; } + cmd->role_id = wlvif->dev_role_id; if (!n_match_ssids) { /* No filter, with ssids */ type = SCAN_SSID_FILTER_DISABLED; @@ -603,7 +632,9 @@ wl12xx_scan_sched_scan_ssid_list(struct wl1271 *wl, continue; for (j = 0; j < cmd->n_ssids; j++) - if (!memcmp(req->ssids[i].ssid, + if ((req->ssids[i].ssid_len == + cmd->ssids[j].len) && + !memcmp(req->ssids[i].ssid, cmd->ssids[j].ssid, req->ssids[i].ssid_len)) { cmd->ssids[j].type = @@ -652,6 +683,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl, if (!cfg) return -ENOMEM; + cfg->role_id = wlvif->dev_role_id; cfg->rssi_threshold = c->rssi_threshold; cfg->snr_threshold = c->snr_threshold; cfg->n_probe_reqs = c->num_probe_reqs; @@ -669,7 +701,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl, cfg->intervals[i] = cpu_to_le32(req->interval); cfg->ssid_len = 0; - ret = wl12xx_scan_sched_scan_ssid_list(wl, req); + ret = wl12xx_scan_sched_scan_ssid_list(wl, wlvif, req); if (ret < 0) goto out; @@ -690,7 +722,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl, req->ssids[0].ssid, req->ssids[0].ssid_len, ies->ie[band], - ies->len[band]); + ies->len[band], true); if (ret < 0) { wl1271_error("2.4GHz PROBE request template failed"); goto out; @@ -704,7 +736,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl, req->ssids[0].ssid, req->ssids[0].ssid_len, ies->ie[band], - ies->len[band]); + ies->len[band], true); if (ret < 0) { wl1271_error("5GHz PROBE request template failed"); goto out; @@ -734,13 +766,15 @@ int 
wl1271_scan_sched_scan_start(struct wl1271 *wl, struct wl12xx_vif *wlvif) if (wlvif->bss_type != BSS_TYPE_STA_BSS) return -EOPNOTSUPP; - if (test_bit(WLVIF_FLAG_IN_USE, &wlvif->flags)) + if ((wl->quirks & WLCORE_QUIRK_NO_SCHED_SCAN_WHILE_CONN) && + test_bit(WLVIF_FLAG_IN_USE, &wlvif->flags)) return -EBUSY; start = kzalloc(sizeof(*start), GFP_KERNEL); if (!start) return -ENOMEM; + start->role_id = wlvif->dev_role_id; start->tag = WL1271_SCAN_DEFAULT_TAG; ret = wl1271_cmd_send(wl, CMD_START_PERIODIC_SCAN, start, @@ -762,7 +796,7 @@ void wl1271_scan_sched_scan_results(struct wl1271 *wl) ieee80211_sched_scan_results(wl->hw); } -void wl1271_scan_sched_scan_stop(struct wl1271 *wl) +void wl1271_scan_sched_scan_stop(struct wl1271 *wl, struct wl12xx_vif *wlvif) { struct wl1271_cmd_sched_scan_stop *stop; int ret = 0; @@ -776,6 +810,7 @@ void wl1271_scan_sched_scan_stop(struct wl1271 *wl) return; } + stop->role_id = wlvif->dev_role_id; stop->tag = WL1271_SCAN_DEFAULT_TAG; ret = wl1271_cmd_send(wl, CMD_STOP_PERIODIC_SCAN, stop, diff --git a/drivers/net/wireless/ti/wlcore/scan.h b/drivers/net/wireless/ti/wlcore/scan.h index 81ee36ac2078..29f3c8d6b046 100644 --- a/drivers/net/wireless/ti/wlcore/scan.h +++ b/drivers/net/wireless/ti/wlcore/scan.h @@ -40,7 +40,7 @@ int wl1271_scan_sched_scan_config(struct wl1271 *wl, struct cfg80211_sched_scan_request *req, struct ieee80211_sched_scan_ies *ies); int wl1271_scan_sched_scan_start(struct wl1271 *wl, struct wl12xx_vif *wlvif); -void wl1271_scan_sched_scan_stop(struct wl1271 *wl); +void wl1271_scan_sched_scan_stop(struct wl1271 *wl, struct wl12xx_vif *wlvif); void wl1271_scan_sched_scan_results(struct wl1271 *wl); #define WL1271_SCAN_MAX_CHANNELS 24 @@ -142,7 +142,8 @@ enum { SCAN_BSS_TYPE_ANY, }; -#define SCAN_CHANNEL_FLAGS_DFS BIT(0) +#define SCAN_CHANNEL_FLAGS_DFS BIT(0) /* channel is passive until an + activity is detected on it */ #define SCAN_CHANNEL_FLAGS_DFS_ENABLED BIT(1) struct conn_scan_ch_params { @@ -185,7 +186,10 @@ struct wl1271_cmd_sched_scan_config { u8 dfs; - u8 padding[3]; + u8 n_pactive_ch; /* number of pactive (passive until fw detects energy) + channels in BG band */ + u8 role_id; + u8 padding[1]; struct conn_scan_ch_params channels_2[MAX_CHANNELS_2GHZ]; struct conn_scan_ch_params channels_5[MAX_CHANNELS_5GHZ]; @@ -212,21 +216,24 @@ struct wl1271_cmd_sched_scan_ssid_list { u8 n_ssids; struct wl1271_ssid ssids[SCHED_SCAN_MAX_SSIDS]; - u8 padding[3]; + u8 role_id; + u8 padding[2]; } __packed; struct wl1271_cmd_sched_scan_start { struct wl1271_cmd_header header; u8 tag; - u8 padding[3]; + u8 role_id; + u8 padding[2]; } __packed; struct wl1271_cmd_sched_scan_stop { struct wl1271_cmd_header header; u8 tag; - u8 padding[3]; + u8 role_id; + u8 padding[2]; } __packed; diff --git a/drivers/net/wireless/ti/wlcore/sdio.c b/drivers/net/wireless/ti/wlcore/sdio.c index 0a72347cfc4c..73ace4b2604e 100644 --- a/drivers/net/wireless/ti/wlcore/sdio.c +++ b/drivers/net/wireless/ti/wlcore/sdio.c @@ -25,6 +25,7 @@ #include <linux/module.h> #include <linux/vmalloc.h> #include <linux/platform_device.h> +#include <linux/mmc/sdio.h> #include <linux/mmc/sdio_func.h> #include <linux/mmc/sdio_ids.h> #include <linux/mmc/card.h> @@ -32,6 +33,7 @@ #include <linux/gpio.h> #include <linux/wl12xx.h> #include <linux/pm_runtime.h> +#include <linux/printk.h> #include "wlcore.h" #include "wl12xx_80211.h" @@ -45,6 +47,8 @@ #define SDIO_DEVICE_ID_TI_WL1271 0x4076 #endif +static bool dump = false; + struct wl12xx_sdio_glue { struct device *dev; struct platform_device *core; @@ 
-67,8 +71,8 @@ static void wl1271_sdio_set_block_size(struct device *child, sdio_release_host(func); } -static void wl12xx_sdio_raw_read(struct device *child, int addr, void *buf, - size_t len, bool fixed) +static int __must_check wl12xx_sdio_raw_read(struct device *child, int addr, + void *buf, size_t len, bool fixed) { int ret; struct wl12xx_sdio_glue *glue = dev_get_drvdata(child->parent); @@ -76,6 +80,13 @@ static void wl12xx_sdio_raw_read(struct device *child, int addr, void *buf, sdio_claim_host(func); + if (unlikely(dump)) { + printk(KERN_DEBUG "wlcore_sdio: READ from 0x%04x\n", addr); + print_hex_dump(KERN_DEBUG, "wlcore_sdio: READ ", + DUMP_PREFIX_OFFSET, 16, 1, + buf, len, false); + } + if (unlikely(addr == HW_ACCESS_ELP_CTRL_REG)) { ((u8 *)buf)[0] = sdio_f0_readb(func, addr, &ret); dev_dbg(child->parent, "sdio read 52 addr 0x%x, byte 0x%02x\n", @@ -92,12 +103,14 @@ static void wl12xx_sdio_raw_read(struct device *child, int addr, void *buf, sdio_release_host(func); - if (ret) + if (WARN_ON(ret)) dev_err(child->parent, "sdio read failed (%d)\n", ret); + + return ret; } -static void wl12xx_sdio_raw_write(struct device *child, int addr, void *buf, - size_t len, bool fixed) +static int __must_check wl12xx_sdio_raw_write(struct device *child, int addr, + void *buf, size_t len, bool fixed) { int ret; struct wl12xx_sdio_glue *glue = dev_get_drvdata(child->parent); @@ -105,6 +118,13 @@ static void wl12xx_sdio_raw_write(struct device *child, int addr, void *buf, sdio_claim_host(func); + if (unlikely(dump)) { + printk(KERN_DEBUG "wlcore_sdio: WRITE to 0x%04x\n", addr); + print_hex_dump(KERN_DEBUG, "wlcore_sdio: WRITE ", + DUMP_PREFIX_OFFSET, 16, 1, + buf, len, false); + } + if (unlikely(addr == HW_ACCESS_ELP_CTRL_REG)) { sdio_f0_writeb(func, ((u8 *)buf)[0], addr, &ret); dev_dbg(child->parent, "sdio write 52 addr 0x%x, byte 0x%02x\n", @@ -121,25 +141,30 @@ static void wl12xx_sdio_raw_write(struct device *child, int addr, void *buf, sdio_release_host(func); - if (ret) + if (WARN_ON(ret)) dev_err(child->parent, "sdio write failed (%d)\n", ret); + + return ret; } static int wl12xx_sdio_power_on(struct wl12xx_sdio_glue *glue) { int ret; struct sdio_func *func = dev_to_sdio_func(glue->dev); + struct mmc_card *card = func->card; - /* If enabled, tell runtime PM not to power off the card */ - if (pm_runtime_enabled(&func->dev)) { - ret = pm_runtime_get_sync(&func->dev); - if (ret < 0) - goto out; - } else { - /* Runtime PM is disabled: power up the card manually */ - ret = mmc_power_restore_host(func->card->host); - if (ret < 0) + ret = pm_runtime_get_sync(&card->dev); + if (ret) { + /* + * Runtime PM might be temporarily disabled, or the device + * might have a positive reference counter. Make sure it is + * really powered on. + */ + ret = mmc_power_restore_host(card->host); + if (ret < 0) { + pm_runtime_put_sync(&card->dev); goto out; + } } sdio_claim_host(func); @@ -154,20 +179,21 @@ static int wl12xx_sdio_power_off(struct wl12xx_sdio_glue *glue) { int ret; struct sdio_func *func = dev_to_sdio_func(glue->dev); + struct mmc_card *card = func->card; sdio_claim_host(func); sdio_disable_func(func); sdio_release_host(func); - /* Power off the card manually, even if runtime PM is enabled. 
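
The power on/off rework around this hunk is about keeping the runtime-PM reference on the MMC card balanced: power-on takes a reference with pm_runtime_get_sync() and falls back to forcing the power state when that call did not actually power the card, while power-off saves power manually and then drops the reference. A toy model of the balance; the helpers below are stand-ins, not the MMC or runtime-PM API:

#include <stdio.h>

/* toy stand-in for the runtime-PM reference count on the card */
static int usage;

static int rpm_get_sync(void)
{
	/* a nonzero return stands in for "this get did not freshly power
	 * the card up" (already powered, or runtime PM not in charge) */
	return usage++ ? 1 : 0;
}

static void rpm_put_sync(void)
{
	usage--;
}

static int power_on(void)
{
	int ret = rpm_get_sync();

	if (ret) {
		/* make sure the card is really powered; on failure the
		 * reference would be dropped again before returning */
		printf("power_on: forcing card power\n");
	}
	return 0;
}

static void power_off(void)
{
	/* save power manually first, then tell runtime PM we are done */
	rpm_put_sync();
}

int main(void)
{
	power_on();
	power_off();
	printf("references held: %d\n", usage);	/* 0: get/put balanced */
	return 0;
}
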
*/ - ret = mmc_power_save_host(func->card->host); + /* Power off the card manually in case it wasn't powered off above */ + ret = mmc_power_save_host(card->host); if (ret < 0) - return ret; + goto out; - /* If enabled, let runtime PM know the card is powered off */ - if (pm_runtime_enabled(&func->dev)) - ret = pm_runtime_put_sync(&func->dev); + /* Let runtime PM know the card is powered off */ + pm_runtime_put_sync(&card->dev); +out: return ret; } @@ -196,6 +222,7 @@ static int __devinit wl1271_probe(struct sdio_func *func, struct resource res[1]; mmc_pm_flag_t mmcflags; int ret = -ENOMEM; + const char *chip_family; /* We are only able to handle the wlan function */ if (func->num != 0x02) @@ -236,7 +263,18 @@ static int __devinit wl1271_probe(struct sdio_func *func, /* Tell PM core that we don't need the card to be powered now */ pm_runtime_put_noidle(&func->dev); - glue->core = platform_device_alloc("wl12xx", -1); + /* + * Due to a hardware bug, we can't differentiate wl18xx from + * wl12xx, because both report the same device ID. The only + * way to differentiate is by checking the SDIO revision, + * which is 3.00 on the wl18xx chips. + */ + if (func->card->cccr.sdio_vsn == SDIO_SDIO_REV_3_00) + chip_family = "wl18xx"; + else + chip_family = "wl12xx"; + + glue->core = platform_device_alloc(chip_family, -1); if (!glue->core) { dev_err(glue->dev, "can't allocate platform_device"); ret = -ENOMEM; @@ -367,12 +405,9 @@ static void __exit wl1271_exit(void) module_init(wl1271_init); module_exit(wl1271_exit); +module_param(dump, bool, S_IRUSR | S_IWUSR); +MODULE_PARM_DESC(dump, "Enable sdio read/write dumps."); + MODULE_LICENSE("GPL"); MODULE_AUTHOR("Luciano Coelho <coelho@ti.com>"); MODULE_AUTHOR("Juuso Oikarinen <juuso.oikarinen@nokia.com>"); -MODULE_FIRMWARE(WL127X_FW_NAME_SINGLE); -MODULE_FIRMWARE(WL127X_FW_NAME_MULTI); -MODULE_FIRMWARE(WL127X_PLT_FW_NAME); -MODULE_FIRMWARE(WL128X_FW_NAME_SINGLE); -MODULE_FIRMWARE(WL128X_FW_NAME_MULTI); -MODULE_FIRMWARE(WL128X_PLT_FW_NAME); diff --git a/drivers/net/wireless/ti/wlcore/spi.c b/drivers/net/wireless/ti/wlcore/spi.c index 553cd3cbb98c..8da4ed243ebc 100644 --- a/drivers/net/wireless/ti/wlcore/spi.c +++ b/drivers/net/wireless/ti/wlcore/spi.c @@ -193,8 +193,8 @@ static int wl12xx_spi_read_busy(struct device *child) return -ETIMEDOUT; } -static void wl12xx_spi_raw_read(struct device *child, int addr, void *buf, - size_t len, bool fixed) +static int __must_check wl12xx_spi_raw_read(struct device *child, int addr, + void *buf, size_t len, bool fixed) { struct wl12xx_spi_glue *glue = dev_get_drvdata(child->parent); struct wl1271 *wl = dev_get_drvdata(child); @@ -238,7 +238,7 @@ static void wl12xx_spi_raw_read(struct device *child, int addr, void *buf, if (!(busy_buf[WL1271_BUSY_WORD_CNT - 1] & 0x1) && wl12xx_spi_read_busy(child)) { memset(buf, 0, chunk_len); - return; + return 0; } spi_message_init(&m); @@ -256,10 +256,12 @@ static void wl12xx_spi_raw_read(struct device *child, int addr, void *buf, buf += chunk_len; len -= chunk_len; } + + return 0; } -static void wl12xx_spi_raw_write(struct device *child, int addr, void *buf, - size_t len, bool fixed) +static int __must_check wl12xx_spi_raw_write(struct device *child, int addr, + void *buf, size_t len, bool fixed) { struct wl12xx_spi_glue *glue = dev_get_drvdata(child->parent); struct spi_transfer t[2 * WSPI_MAX_NUM_OF_CHUNKS]; @@ -304,6 +306,8 @@ static void wl12xx_spi_raw_write(struct device *child, int addr, void *buf, } spi_sync(to_spi_device(glue->dev), &m); + + return 0; } static struct 
wl1271_if_operations spi_ops = { @@ -431,10 +435,4 @@ module_exit(wl1271_exit); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Luciano Coelho <coelho@ti.com>"); MODULE_AUTHOR("Juuso Oikarinen <juuso.oikarinen@nokia.com>"); -MODULE_FIRMWARE(WL127X_FW_NAME_SINGLE); -MODULE_FIRMWARE(WL127X_FW_NAME_MULTI); -MODULE_FIRMWARE(WL127X_PLT_FW_NAME); -MODULE_FIRMWARE(WL128X_FW_NAME_SINGLE); -MODULE_FIRMWARE(WL128X_FW_NAME_MULTI); -MODULE_FIRMWARE(WL128X_PLT_FW_NAME); MODULE_ALIAS("spi:wl1271"); diff --git a/drivers/net/wireless/ti/wlcore/testmode.c b/drivers/net/wireless/ti/wlcore/testmode.c index 0e59ea2cdd39..49e5ee1525c9 100644 --- a/drivers/net/wireless/ti/wlcore/testmode.c +++ b/drivers/net/wireless/ti/wlcore/testmode.c @@ -40,7 +40,7 @@ enum wl1271_tm_commands { WL1271_TM_CMD_CONFIGURE, WL1271_TM_CMD_NVS_PUSH, /* Not in use. Keep to not break ABI */ WL1271_TM_CMD_SET_PLT_MODE, - WL1271_TM_CMD_RECOVER, + WL1271_TM_CMD_RECOVER, /* Not in use. Keep to not break ABI */ WL1271_TM_CMD_GET_MAC, __WL1271_TM_CMD_AFTER_LAST @@ -108,6 +108,20 @@ static int wl1271_tm_cmd_test(struct wl1271 *wl, struct nlattr *tb[]) } if (answer) { + /* If we got bip calibration answer print radio status */ + struct wl1271_cmd_cal_p2g *params = + (struct wl1271_cmd_cal_p2g *) buf; + + s16 radio_status = (s16) le16_to_cpu(params->radio_status); + + if (params->test.id == TEST_CMD_P2G_CAL && + radio_status < 0) + wl1271_warning("testmode cmd: radio status=%d", + radio_status); + else + wl1271_info("testmode cmd: radio status=%d", + radio_status); + len = nla_total_size(buf_len); skb = cfg80211_testmode_alloc_reply_skb(wl->hw->wiphy, len); if (!skb) { @@ -115,8 +129,12 @@ static int wl1271_tm_cmd_test(struct wl1271 *wl, struct nlattr *tb[]) goto out_sleep; } - if (nla_put(skb, WL1271_TM_ATTR_DATA, buf_len, buf)) - goto nla_put_failure; + if (nla_put(skb, WL1271_TM_ATTR_DATA, buf_len, buf)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out_sleep; + } + ret = cfg80211_testmode_reply(skb); if (ret < 0) goto out_sleep; @@ -128,11 +146,6 @@ out: mutex_unlock(&wl->mutex); return ret; - -nla_put_failure: - kfree_skb(skb); - ret = -EMSGSIZE; - goto out_sleep; } static int wl1271_tm_cmd_interrogate(struct wl1271 *wl, struct nlattr *tb[]) @@ -178,8 +191,12 @@ static int wl1271_tm_cmd_interrogate(struct wl1271 *wl, struct nlattr *tb[]) goto out_free; } - if (nla_put(skb, WL1271_TM_ATTR_DATA, sizeof(*cmd), cmd)) - goto nla_put_failure; + if (nla_put(skb, WL1271_TM_ATTR_DATA, sizeof(*cmd), cmd)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out_free; + } + ret = cfg80211_testmode_reply(skb); if (ret < 0) goto out_free; @@ -192,11 +209,6 @@ out: mutex_unlock(&wl->mutex); return ret; - -nla_put_failure: - kfree_skb(skb); - ret = -EMSGSIZE; - goto out_free; } static int wl1271_tm_cmd_configure(struct wl1271 *wl, struct nlattr *tb[]) @@ -231,6 +243,43 @@ static int wl1271_tm_cmd_configure(struct wl1271 *wl, struct nlattr *tb[]) return 0; } +static int wl1271_tm_detect_fem(struct wl1271 *wl, struct nlattr *tb[]) +{ + /* return FEM type */ + int ret, len; + struct sk_buff *skb; + + ret = wl1271_plt_start(wl, PLT_FEM_DETECT); + if (ret < 0) + goto out; + + mutex_lock(&wl->mutex); + + len = nla_total_size(sizeof(wl->fem_manuf)); + skb = cfg80211_testmode_alloc_reply_skb(wl->hw->wiphy, len); + if (!skb) { + ret = -ENOMEM; + goto out_mutex; + } + + if (nla_put(skb, WL1271_TM_ATTR_DATA, sizeof(wl->fem_manuf), + &wl->fem_manuf)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out_mutex; + } + + ret = cfg80211_testmode_reply(skb); + +out_mutex: + 
mutex_unlock(&wl->mutex); + + /* We always stop plt after DETECT mode */ + wl1271_plt_stop(wl); +out: + return ret; +} + static int wl1271_tm_cmd_set_plt_mode(struct wl1271 *wl, struct nlattr *tb[]) { u32 val; @@ -244,11 +293,14 @@ static int wl1271_tm_cmd_set_plt_mode(struct wl1271 *wl, struct nlattr *tb[]) val = nla_get_u32(tb[WL1271_TM_ATTR_PLT_MODE]); switch (val) { - case 0: + case PLT_OFF: ret = wl1271_plt_stop(wl); break; - case 1: - ret = wl1271_plt_start(wl); + case PLT_ON: + ret = wl1271_plt_start(wl, PLT_ON); + break; + case PLT_FEM_DETECT: + ret = wl1271_tm_detect_fem(wl, tb); break; default: ret = -EINVAL; @@ -258,15 +310,6 @@ static int wl1271_tm_cmd_set_plt_mode(struct wl1271 *wl, struct nlattr *tb[]) return ret; } -static int wl1271_tm_cmd_recover(struct wl1271 *wl, struct nlattr *tb[]) -{ - wl1271_debug(DEBUG_TESTMODE, "testmode cmd recover"); - - wl12xx_queue_recovery_work(wl); - - return 0; -} - static int wl12xx_tm_cmd_get_mac(struct wl1271 *wl, struct nlattr *tb[]) { struct sk_buff *skb; @@ -298,8 +341,12 @@ static int wl12xx_tm_cmd_get_mac(struct wl1271 *wl, struct nlattr *tb[]) goto out; } - if (nla_put(skb, WL1271_TM_ATTR_DATA, ETH_ALEN, mac_addr)) - goto nla_put_failure; + if (nla_put(skb, WL1271_TM_ATTR_DATA, ETH_ALEN, mac_addr)) { + kfree_skb(skb); + ret = -EMSGSIZE; + goto out; + } + ret = cfg80211_testmode_reply(skb); if (ret < 0) goto out; @@ -307,11 +354,6 @@ static int wl12xx_tm_cmd_get_mac(struct wl1271 *wl, struct nlattr *tb[]) out: mutex_unlock(&wl->mutex); return ret; - -nla_put_failure: - kfree_skb(skb); - ret = -EMSGSIZE; - goto out; } int wl1271_tm_cmd(struct ieee80211_hw *hw, void *data, int len) @@ -336,8 +378,6 @@ int wl1271_tm_cmd(struct ieee80211_hw *hw, void *data, int len) return wl1271_tm_cmd_configure(wl, tb); case WL1271_TM_CMD_SET_PLT_MODE: return wl1271_tm_cmd_set_plt_mode(wl, tb); - case WL1271_TM_CMD_RECOVER: - return wl1271_tm_cmd_recover(wl, tb); case WL1271_TM_CMD_GET_MAC: return wl12xx_tm_cmd_get_mac(wl, tb); default: diff --git a/drivers/net/wireless/ti/wlcore/tx.c b/drivers/net/wireless/ti/wlcore/tx.c index 6893bc207994..f0081f746482 100644 --- a/drivers/net/wireless/ti/wlcore/tx.c +++ b/drivers/net/wireless/ti/wlcore/tx.c @@ -72,7 +72,7 @@ static int wl1271_alloc_tx_id(struct wl1271 *wl, struct sk_buff *skb) return id; } -static void wl1271_free_tx_id(struct wl1271 *wl, int id) +void wl1271_free_tx_id(struct wl1271 *wl, int id) { if (__test_and_clear_bit(id, wl->tx_frames_map)) { if (unlikely(wl->tx_frames_cnt == wl->num_tx_desc)) @@ -82,6 +82,7 @@ static void wl1271_free_tx_id(struct wl1271 *wl, int id) wl->tx_frames_cnt--; } } +EXPORT_SYMBOL(wl1271_free_tx_id); static void wl1271_tx_ap_update_inconnection_sta(struct wl1271 *wl, struct sk_buff *skb) @@ -127,6 +128,7 @@ bool wl12xx_is_dummy_packet(struct wl1271 *wl, struct sk_buff *skb) { return wl->dummy_packet == skb; } +EXPORT_SYMBOL(wl12xx_is_dummy_packet); u8 wl12xx_tx_get_hlid_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif, struct sk_buff *skb) @@ -146,10 +148,10 @@ u8 wl12xx_tx_get_hlid_ap(struct wl1271 *wl, struct wl12xx_vif *wlvif, return wl->system_hlid; hdr = (struct ieee80211_hdr *)skb->data; - if (ieee80211_is_mgmt(hdr->frame_control)) - return wlvif->ap.global_hlid; - else + if (is_multicast_ether_addr(ieee80211_get_DA(hdr))) return wlvif->ap.bcast_hlid; + else + return wlvif->ap.global_hlid; } } @@ -176,37 +178,34 @@ u8 wl12xx_tx_get_hlid(struct wl1271 *wl, struct wl12xx_vif *wlvif, unsigned int wlcore_calc_packet_alignment(struct wl1271 *wl, unsigned int 
packet_length) { - if (wl->quirks & WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN) - return ALIGN(packet_length, WL12XX_BUS_BLOCK_SIZE); - else + if ((wl->quirks & WLCORE_QUIRK_TX_PAD_LAST_FRAME) || + !(wl->quirks & WLCORE_QUIRK_TX_BLOCKSIZE_ALIGN)) return ALIGN(packet_length, WL1271_TX_ALIGN_TO); + else + return ALIGN(packet_length, WL12XX_BUS_BLOCK_SIZE); } EXPORT_SYMBOL(wlcore_calc_packet_alignment); static int wl1271_tx_allocate(struct wl1271 *wl, struct wl12xx_vif *wlvif, struct sk_buff *skb, u32 extra, u32 buf_offset, - u8 hlid) + u8 hlid, bool is_gem) { struct wl1271_tx_hw_descr *desc; u32 total_len = skb->len + sizeof(struct wl1271_tx_hw_descr) + extra; u32 total_blocks; int id, ret = -EBUSY, ac; - u32 spare_blocks = wl->normal_tx_spare; - bool is_dummy = false; + u32 spare_blocks; if (buf_offset + total_len > WL1271_AGGR_BUFFER_SIZE) return -EAGAIN; + spare_blocks = wlcore_hw_get_spare_blocks(wl, is_gem); + /* allocate free identifier for the packet */ id = wl1271_alloc_tx_id(wl, skb); if (id < 0) return id; - if (unlikely(wl12xx_is_dummy_packet(wl, skb))) - is_dummy = true; - else if (wlvif->is_gem) - spare_blocks = wl->gem_tx_spare; - total_blocks = wlcore_hw_calc_tx_blocks(wl, total_len, spare_blocks); if (total_blocks <= wl->tx_blocks_available) { @@ -228,7 +227,7 @@ static int wl1271_tx_allocate(struct wl1271 *wl, struct wl12xx_vif *wlvif, ac = wl1271_tx_get_queue(skb_get_queue_mapping(skb)); wl->tx_allocated_pkts[ac]++; - if (!is_dummy && wlvif && + if (!wl12xx_is_dummy_packet(wl, skb) && wlvif && wlvif->bss_type == BSS_TYPE_AP_BSS && test_bit(hlid, wlvif->ap.sta_hlid_map)) wl->links[hlid].allocated_pkts++; @@ -268,6 +267,7 @@ static void wl1271_tx_fill_hdr(struct wl1271 *wl, struct wl12xx_vif *wlvif, if (extra) { int hdrlen = ieee80211_hdrlen(frame_control); memmove(frame_start, hdr, hdrlen); + skb_set_network_header(skb, skb_network_offset(skb) + extra); } /* configure packet life time */ @@ -305,19 +305,25 @@ static void wl1271_tx_fill_hdr(struct wl1271 *wl, struct wl12xx_vif *wlvif, if (is_dummy || !wlvif) rate_idx = 0; else if (wlvif->bss_type != BSS_TYPE_AP_BSS) { - /* if the packets are destined for AP (have a STA entry) - send them with AP rate policies, otherwise use default - basic rates */ - if (control->flags & IEEE80211_TX_CTL_NO_CCK_RATE) + /* + * if the packets are data packets + * send them with AP rate policies (EAPOLs are an exception), + * otherwise use default basic rates + */ + if (skb->protocol == cpu_to_be16(ETH_P_PAE)) + rate_idx = wlvif->sta.basic_rate_idx; + else if (control->flags & IEEE80211_TX_CTL_NO_CCK_RATE) rate_idx = wlvif->sta.p2p_rate_idx; - else if (control->control.sta) + else if (ieee80211_is_data(frame_control)) rate_idx = wlvif->sta.ap_rate_idx; else rate_idx = wlvif->sta.basic_rate_idx; } else { if (hlid == wlvif->ap.global_hlid) rate_idx = wlvif->ap.mgmt_rate_idx; - else if (hlid == wlvif->ap.bcast_hlid) + else if (hlid == wlvif->ap.bcast_hlid || + skb->protocol == cpu_to_be16(ETH_P_PAE)) + /* send AP bcast and EAPOLs using the min basic rate */ rate_idx = wlvif->ap.bcast_rate_idx; else rate_idx = wlvif->ap.ucast_rate_idx[ac]; @@ -330,9 +336,9 @@ static void wl1271_tx_fill_hdr(struct wl1271 *wl, struct wl12xx_vif *wlvif, ieee80211_has_protected(frame_control)) tx_attr |= TX_HW_ATTR_HOST_ENCRYPT; - desc->reserved = 0; desc->tx_attr = cpu_to_le16(tx_attr); + wlcore_hw_set_tx_desc_csum(wl, desc, skb); wlcore_hw_set_tx_desc_data_len(wl, desc, skb); } @@ -346,16 +352,20 @@ static int wl1271_prepare_tx_frame(struct wl1271 *wl, struct wl12xx_vif *wlvif, 
u32 total_len; u8 hlid; bool is_dummy; + bool is_gem = false; - if (!skb) + if (!skb) { + wl1271_error("discarding null skb"); return -EINVAL; + } info = IEEE80211_SKB_CB(skb); /* TODO: handle dummy packets on multi-vifs */ is_dummy = wl12xx_is_dummy_packet(wl, skb); - if (info->control.hw_key && + if ((wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) extra = WL1271_EXTRA_SPACE_TKIP; @@ -373,6 +383,8 @@ static int wl1271_prepare_tx_frame(struct wl1271 *wl, struct wl12xx_vif *wlvif, return ret; wlvif->default_key = idx; } + + is_gem = (cipher == WL1271_CIPHER_SUITE_GEM); } hlid = wl12xx_tx_get_hlid(wl, wlvif, skb); if (hlid == WL12XX_INVALID_LINK_ID) { @@ -380,7 +392,8 @@ static int wl1271_prepare_tx_frame(struct wl1271 *wl, struct wl12xx_vif *wlvif, return -EINVAL; } - ret = wl1271_tx_allocate(wl, wlvif, skb, extra, buf_offset, hlid); + ret = wl1271_tx_allocate(wl, wlvif, skb, extra, buf_offset, hlid, + is_gem); if (ret < 0) return ret; @@ -425,10 +438,10 @@ u32 wl1271_tx_enabled_rates_get(struct wl1271 *wl, u32 rate_set, rate_set >>= 1; } - /* MCS rates indication are on bits 16 - 23 */ + /* MCS rates indication are on bits 16 - 31 */ rate_set >>= HW_HT_RATES_OFFSET - band->n_bitrates; - for (bit = 0; bit < 8; bit++) { + for (bit = 0; bit < 16; bit++) { if (rate_set & 0x1) enabled_rates |= (CONF_HW_BIT_RATE_MCS_0 << bit); rate_set >>= 1; @@ -439,18 +452,15 @@ u32 wl1271_tx_enabled_rates_get(struct wl1271 *wl, u32 rate_set, void wl1271_handle_tx_low_watermark(struct wl1271 *wl) { - unsigned long flags; int i; for (i = 0; i < NUM_TX_QUEUES; i++) { - if (test_bit(i, &wl->stopped_queues_map) && + if (wlcore_is_queue_stopped_by_reason(wl, i, + WLCORE_QUEUE_STOP_REASON_WATERMARK) && wl->tx_queue_count[i] <= WL1271_TX_QUEUE_LOW_WATERMARK) { /* firmware buffer has space, restart queues */ - spin_lock_irqsave(&wl->wl_lock, flags); - ieee80211_wake_queue(wl->hw, - wl1271_tx_get_mac80211_queue(i)); - clear_bit(i, &wl->stopped_queues_map); - spin_unlock_irqrestore(&wl->wl_lock, flags); + wlcore_wake_queue(wl, i, + WLCORE_QUEUE_STOP_REASON_WATERMARK); } } } @@ -656,18 +666,29 @@ void wl12xx_rearm_rx_streaming(struct wl1271 *wl, unsigned long *active_hlids) } } -void wl1271_tx_work_locked(struct wl1271 *wl) +/* + * Returns failure values only in case of failed bus ops within this function. + * wl1271_prepare_tx_frame retvals won't be returned in order to avoid + * triggering recovery by higher layers when not necessary. + * In case a FW command fails within wl1271_prepare_tx_frame fails a recovery + * will be queued in wl1271_cmd_send. -EAGAIN/-EBUSY from prepare_tx_frame + * can occur and are legitimate so don't propagate. -EINVAL will emit a WARNING + * within prepare_tx_frame code but there's nothing we should do about those + * as well. + */ +int wlcore_tx_work_locked(struct wl1271 *wl) { struct wl12xx_vif *wlvif; struct sk_buff *skb; struct wl1271_tx_hw_descr *desc; - u32 buf_offset = 0; + u32 buf_offset = 0, last_len = 0; bool sent_packets = false; unsigned long active_hlids[BITS_TO_LONGS(WL12XX_MAX_LINKS)] = {0}; - int ret; + int ret = 0; + int bus_ret = 0; if (unlikely(wl->state == WL1271_STATE_OFF)) - return; + return 0; while ((skb = wl1271_skb_dequeue(wl))) { struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); @@ -685,8 +706,14 @@ void wl1271_tx_work_locked(struct wl1271 *wl) * Flush buffer and try again. 
*/ wl1271_skb_queue_head(wl, wlvif, skb); - wlcore_write_data(wl, REG_SLV_MEM_DATA, wl->aggr_buf, - buf_offset, true); + + buf_offset = wlcore_hw_pre_pkt_send(wl, buf_offset, + last_len); + bus_ret = wlcore_write_data(wl, REG_SLV_MEM_DATA, + wl->aggr_buf, buf_offset, true); + if (bus_ret < 0) + goto out; + sent_packets = true; buf_offset = 0; continue; @@ -710,7 +737,8 @@ void wl1271_tx_work_locked(struct wl1271 *wl) ieee80211_free_txskb(wl->hw, skb); goto out_ack; } - buf_offset += ret; + last_len = ret; + buf_offset += last_len; wl->tx_packets_count++; if (has_data) { desc = (struct wl1271_tx_hw_descr *) skb->data; @@ -720,8 +748,12 @@ void wl1271_tx_work_locked(struct wl1271 *wl) out_ack: if (buf_offset) { - wlcore_write_data(wl, REG_SLV_MEM_DATA, wl->aggr_buf, - buf_offset, true); + buf_offset = wlcore_hw_pre_pkt_send(wl, buf_offset, last_len); + bus_ret = wlcore_write_data(wl, REG_SLV_MEM_DATA, wl->aggr_buf, + buf_offset, true); + if (bus_ret < 0) + goto out; + sent_packets = true; } if (sent_packets) { @@ -729,13 +761,19 @@ out_ack: * Interrupt the firmware with the new packets. This is only * required for older hardware revisions */ - if (wl->quirks & WLCORE_QUIRK_END_OF_TRANSACTION) - wl1271_write32(wl, WL12XX_HOST_WR_ACCESS, - wl->tx_packets_count); + if (wl->quirks & WLCORE_QUIRK_END_OF_TRANSACTION) { + bus_ret = wlcore_write32(wl, WL12XX_HOST_WR_ACCESS, + wl->tx_packets_count); + if (bus_ret < 0) + goto out; + } wl1271_handle_tx_low_watermark(wl); } wl12xx_rearm_rx_streaming(wl, active_hlids); + +out: + return bus_ret; } void wl1271_tx_work(struct work_struct *work) @@ -748,7 +786,11 @@ void wl1271_tx_work(struct work_struct *work) if (ret < 0) goto out; - wl1271_tx_work_locked(wl); + ret = wlcore_tx_work_locked(wl); + if (ret < 0) { + wl12xx_queue_recovery_work(wl); + goto out; + } wl1271_ps_elp_sleep(wl); out: @@ -849,7 +891,8 @@ static void wl1271_tx_complete_packet(struct wl1271 *wl, skb_pull(skb, sizeof(struct wl1271_tx_hw_descr)); /* remove TKIP header space if present */ - if (info->control.hw_key && + if ((wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) { int hdrlen = ieee80211_get_hdrlen_from_skb(skb); memmove(skb->data + WL1271_EXTRA_SPACE_TKIP, skb->data, @@ -869,22 +912,27 @@ static void wl1271_tx_complete_packet(struct wl1271 *wl, } /* Called upon reception of a TX complete interrupt */ -void wl1271_tx_complete(struct wl1271 *wl) +int wlcore_tx_complete(struct wl1271 *wl) { - struct wl1271_acx_mem_map *memmap = - (struct wl1271_acx_mem_map *)wl->target_mem_map; + struct wl1271_acx_mem_map *memmap = wl->target_mem_map; u32 count, fw_counter; u32 i; + int ret; /* read the tx results from the chipset */ - wl1271_read(wl, le32_to_cpu(memmap->tx_result), - wl->tx_res_if, sizeof(*wl->tx_res_if), false); + ret = wlcore_read(wl, le32_to_cpu(memmap->tx_result), + wl->tx_res_if, sizeof(*wl->tx_res_if), false); + if (ret < 0) + goto out; + fw_counter = le32_to_cpu(wl->tx_res_if->tx_result_fw_counter); /* write host counter to chipset (to ack) */ - wl1271_write32(wl, le32_to_cpu(memmap->tx_result) + - offsetof(struct wl1271_tx_hw_res_if, - tx_result_host_counter), fw_counter); + ret = wlcore_write32(wl, le32_to_cpu(memmap->tx_result) + + offsetof(struct wl1271_tx_hw_res_if, + tx_result_host_counter), fw_counter); + if (ret < 0) + goto out; count = fw_counter - wl->tx_results_count; wl1271_debug(DEBUG_TX, "tx_complete received, packets: %d", count); @@ -904,8 +952,11 @@ void 
wl1271_tx_complete(struct wl1271 *wl) wl->tx_results_count++; } + +out: + return ret; } -EXPORT_SYMBOL(wl1271_tx_complete); +EXPORT_SYMBOL(wlcore_tx_complete); void wl1271_tx_reset_link_queues(struct wl1271 *wl, u8 hlid) { @@ -958,7 +1009,7 @@ void wl12xx_tx_reset_wlvif(struct wl1271 *wl, struct wl12xx_vif *wlvif) } /* caller must hold wl->mutex and TX must be stopped */ -void wl12xx_tx_reset(struct wl1271 *wl, bool reset_tx_queues) +void wl12xx_tx_reset(struct wl1271 *wl) { int i; struct sk_buff *skb; @@ -973,15 +1024,12 @@ void wl12xx_tx_reset(struct wl1271 *wl, bool reset_tx_queues) wl->tx_queue_count[i] = 0; } - wl->stopped_queues_map = 0; - /* * Make sure the driver is at a consistent state, in case this * function is called from a context other than interface removal. * This call will always wake the TX queues. */ - if (reset_tx_queues) - wl1271_handle_tx_low_watermark(wl); + wl1271_handle_tx_low_watermark(wl); for (i = 0; i < wl->num_tx_desc; i++) { if (wl->tx_frames[i] == NULL) @@ -998,7 +1046,8 @@ void wl12xx_tx_reset(struct wl1271 *wl, bool reset_tx_queues) */ info = IEEE80211_SKB_CB(skb); skb_pull(skb, sizeof(struct wl1271_tx_hw_descr)); - if (info->control.hw_key && + if ((wl->quirks & WLCORE_QUIRK_TKIP_HEADER_SPACE) && + info->control.hw_key && info->control.hw_key->cipher == WLAN_CIPHER_SUITE_TKIP) { int hdrlen = ieee80211_get_hdrlen_from_skb(skb); @@ -1024,6 +1073,11 @@ void wl1271_tx_flush(struct wl1271 *wl) int i; timeout = jiffies + usecs_to_jiffies(WL1271_TX_FLUSH_TIMEOUT); + /* only one flush should be in progress, for consistent queue state */ + mutex_lock(&wl->flush_mutex); + + wlcore_stop_queues(wl, WLCORE_QUEUE_STOP_REASON_FLUSH); + while (!time_after(jiffies, timeout)) { mutex_lock(&wl->mutex); wl1271_debug(DEBUG_TX, "flushing tx buffer: %d %d", @@ -1032,7 +1086,7 @@ void wl1271_tx_flush(struct wl1271 *wl) if ((wl->tx_frames_cnt == 0) && (wl1271_tx_total_queue_count(wl) == 0)) { mutex_unlock(&wl->mutex); - return; + goto out; } mutex_unlock(&wl->mutex); msleep(1); @@ -1045,7 +1099,12 @@ void wl1271_tx_flush(struct wl1271 *wl) for (i = 0; i < WL12XX_MAX_LINKS; i++) wl1271_tx_reset_link_queues(wl, i); mutex_unlock(&wl->mutex); + +out: + wlcore_wake_queues(wl, WLCORE_QUEUE_STOP_REASON_FLUSH); + mutex_unlock(&wl->flush_mutex); } +EXPORT_SYMBOL_GPL(wl1271_tx_flush); u32 wl1271_tx_min_rate_get(struct wl1271 *wl, u32 rate_set) { @@ -1054,3 +1113,96 @@ u32 wl1271_tx_min_rate_get(struct wl1271 *wl, u32 rate_set) return BIT(__ffs(rate_set)); } + +void wlcore_stop_queue_locked(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason) +{ + bool stopped = !!wl->queue_stop_reasons[queue]; + + /* queue should not be stopped for this reason */ + WARN_ON(test_and_set_bit(reason, &wl->queue_stop_reasons[queue])); + + if (stopped) + return; + + ieee80211_stop_queue(wl->hw, wl1271_tx_get_mac80211_queue(queue)); +} + +void wlcore_stop_queue(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason) +{ + unsigned long flags; + + spin_lock_irqsave(&wl->wl_lock, flags); + wlcore_stop_queue_locked(wl, queue, reason); + spin_unlock_irqrestore(&wl->wl_lock, flags); +} + +void wlcore_wake_queue(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason) +{ + unsigned long flags; + + spin_lock_irqsave(&wl->wl_lock, flags); + + /* queue should not be clear for this reason */ + WARN_ON(!test_and_clear_bit(reason, &wl->queue_stop_reasons[queue])); + + if (wl->queue_stop_reasons[queue]) + goto out; + + ieee80211_wake_queue(wl->hw, wl1271_tx_get_mac80211_queue(queue)); 
+ +out: + spin_unlock_irqrestore(&wl->wl_lock, flags); +} + +void wlcore_stop_queues(struct wl1271 *wl, + enum wlcore_queue_stop_reason reason) +{ + int i; + + for (i = 0; i < NUM_TX_QUEUES; i++) + wlcore_stop_queue(wl, i, reason); +} +EXPORT_SYMBOL_GPL(wlcore_stop_queues); + +void wlcore_wake_queues(struct wl1271 *wl, + enum wlcore_queue_stop_reason reason) +{ + int i; + + for (i = 0; i < NUM_TX_QUEUES; i++) + wlcore_wake_queue(wl, i, reason); +} +EXPORT_SYMBOL_GPL(wlcore_wake_queues); + +void wlcore_reset_stopped_queues(struct wl1271 *wl) +{ + int i; + unsigned long flags; + + spin_lock_irqsave(&wl->wl_lock, flags); + + for (i = 0; i < NUM_TX_QUEUES; i++) { + if (!wl->queue_stop_reasons[i]) + continue; + + wl->queue_stop_reasons[i] = 0; + ieee80211_wake_queue(wl->hw, + wl1271_tx_get_mac80211_queue(i)); + } + + spin_unlock_irqrestore(&wl->wl_lock, flags); +} + +bool wlcore_is_queue_stopped_by_reason(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason) +{ + return test_bit(reason, &wl->queue_stop_reasons[queue]); +} + +bool wlcore_is_queue_stopped(struct wl1271 *wl, u8 queue) +{ + return !!wl->queue_stop_reasons[queue]; +} diff --git a/drivers/net/wireless/ti/wlcore/tx.h b/drivers/net/wireless/ti/wlcore/tx.h index 2fd6e5dc6f75..1e939b016155 100644 --- a/drivers/net/wireless/ti/wlcore/tx.h +++ b/drivers/net/wireless/ti/wlcore/tx.h @@ -85,6 +85,19 @@ struct wl128x_tx_mem { u8 extra_bytes; } __packed; +struct wl18xx_tx_mem { + /* + * Total number of memory blocks allocated by the host for + * this packet. + */ + u8 total_mem_blocks; + + /* + * control bits + */ + u8 ctrl; +} __packed; + /* * On wl128x based devices, when TX packets are aggregated, each packet * size must be aligned to the SDIO block size. The maximum block size @@ -100,6 +113,7 @@ struct wl1271_tx_hw_descr { union { struct wl127x_tx_mem wl127x_mem; struct wl128x_tx_mem wl128x_mem; + struct wl18xx_tx_mem wl18xx_mem; } __packed; /* Device time (in us) when the packet arrived to the driver */ __le32 start_time; @@ -116,7 +130,16 @@ struct wl1271_tx_hw_descr { u8 tid; /* host link ID (HLID) */ u8 hlid; - u8 reserved; + + union { + u8 wl12xx_reserved; + + /* + * bit 0 -> 0 = udp, 1 = tcp + * bit 1:7 -> IP header offset + */ + u8 wl18xx_checksum_data; + } __packed; } __packed; enum wl1271_tx_hw_res_status { @@ -161,6 +184,13 @@ struct wl1271_tx_hw_res_if { struct wl1271_tx_hw_res_descr tx_results_queue[TX_HW_RESULT_QUEUE_LEN]; } __packed; +enum wlcore_queue_stop_reason { + WLCORE_QUEUE_STOP_REASON_WATERMARK, + WLCORE_QUEUE_STOP_REASON_FW_RESTART, + WLCORE_QUEUE_STOP_REASON_FLUSH, + WLCORE_QUEUE_STOP_REASON_SPARE_BLK, /* 18xx specific */ +}; + static inline int wl1271_tx_get_queue(int queue) { switch (queue) { @@ -204,10 +234,10 @@ static inline int wl1271_tx_total_queue_count(struct wl1271 *wl) } void wl1271_tx_work(struct work_struct *work); -void wl1271_tx_work_locked(struct wl1271 *wl); -void wl1271_tx_complete(struct wl1271 *wl); +int wlcore_tx_work_locked(struct wl1271 *wl); +int wlcore_tx_complete(struct wl1271 *wl); void wl12xx_tx_reset_wlvif(struct wl1271 *wl, struct wl12xx_vif *wlvif); -void wl12xx_tx_reset(struct wl1271 *wl, bool reset_tx_queues); +void wl12xx_tx_reset(struct wl1271 *wl); void wl1271_tx_flush(struct wl1271 *wl); u8 wlcore_rate_to_idx(struct wl1271 *wl, u8 rate, enum ieee80211_band band); u32 wl1271_tx_enabled_rates_get(struct wl1271 *wl, u32 rate_set, @@ -223,6 +253,21 @@ bool wl12xx_is_dummy_packet(struct wl1271 *wl, struct sk_buff *skb); void wl12xx_rearm_rx_streaming(struct wl1271 *wl, 
unsigned long *active_hlids); unsigned int wlcore_calc_packet_alignment(struct wl1271 *wl, unsigned int packet_length); +void wl1271_free_tx_id(struct wl1271 *wl, int id); +void wlcore_stop_queue_locked(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason); +void wlcore_stop_queue(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason); +void wlcore_wake_queue(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason); +void wlcore_stop_queues(struct wl1271 *wl, + enum wlcore_queue_stop_reason reason); +void wlcore_wake_queues(struct wl1271 *wl, + enum wlcore_queue_stop_reason reason); +void wlcore_reset_stopped_queues(struct wl1271 *wl); +bool wlcore_is_queue_stopped_by_reason(struct wl1271 *wl, u8 queue, + enum wlcore_queue_stop_reason reason); +bool wlcore_is_queue_stopped(struct wl1271 *wl, u8 queue); /* from main.c */ void wl1271_free_sta(struct wl1271 *wl, struct wl12xx_vif *wlvif, u8 hlid); diff --git a/drivers/net/wireless/ti/wlcore/wlcore.h b/drivers/net/wireless/ti/wlcore/wlcore.h index 0b3f0b586f4b..0ce7a8ebbd46 100644 --- a/drivers/net/wireless/ti/wlcore/wlcore.h +++ b/drivers/net/wireless/ti/wlcore/wlcore.h @@ -24,8 +24,9 @@ #include <linux/platform_device.h> -#include "wl12xx.h" +#include "wlcore_i.h" #include "event.h" +#include "boot.h" /* The maximum number of Tx descriptors in all chip families */ #define WLCORE_MAX_TX_DESCRIPTORS 32 @@ -33,14 +34,16 @@ /* forward declaration */ struct wl1271_tx_hw_descr; enum wl_rx_buf_align; +struct wl1271_rx_descriptor; struct wlcore_ops { int (*identify_chip)(struct wl1271 *wl); int (*identify_fw)(struct wl1271 *wl); int (*boot)(struct wl1271 *wl); - void (*trigger_cmd)(struct wl1271 *wl, int cmd_box_addr, - void *buf, size_t len); - void (*ack_event)(struct wl1271 *wl); + int (*plt_init)(struct wl1271 *wl); + int (*trigger_cmd)(struct wl1271 *wl, int cmd_box_addr, + void *buf, size_t len); + int (*ack_event)(struct wl1271 *wl); u32 (*calc_tx_blocks)(struct wl1271 *wl, u32 len, u32 spare_blks); void (*set_tx_desc_blocks)(struct wl1271 *wl, struct wl1271_tx_hw_descr *desc, @@ -50,17 +53,34 @@ struct wlcore_ops { struct sk_buff *skb); enum wl_rx_buf_align (*get_rx_buf_align)(struct wl1271 *wl, u32 rx_desc); - void (*prepare_read)(struct wl1271 *wl, u32 rx_desc, u32 len); + int (*prepare_read)(struct wl1271 *wl, u32 rx_desc, u32 len); u32 (*get_rx_packet_len)(struct wl1271 *wl, void *rx_data, u32 data_len); - void (*tx_delayed_compl)(struct wl1271 *wl); + int (*tx_delayed_compl)(struct wl1271 *wl); void (*tx_immediate_compl)(struct wl1271 *wl); int (*hw_init)(struct wl1271 *wl); int (*init_vif)(struct wl1271 *wl, struct wl12xx_vif *wlvif); u32 (*sta_get_ap_rate_mask)(struct wl1271 *wl, struct wl12xx_vif *wlvif); - s8 (*get_pg_ver)(struct wl1271 *wl); - void (*get_mac)(struct wl1271 *wl); + int (*get_pg_ver)(struct wl1271 *wl, s8 *ver); + int (*get_mac)(struct wl1271 *wl); + void (*set_tx_desc_csum)(struct wl1271 *wl, + struct wl1271_tx_hw_descr *desc, + struct sk_buff *skb); + void (*set_rx_csum)(struct wl1271 *wl, + struct wl1271_rx_descriptor *desc, + struct sk_buff *skb); + u32 (*ap_get_mimo_wide_rate_mask)(struct wl1271 *wl, + struct wl12xx_vif *wlvif); + int (*debugfs_init)(struct wl1271 *wl, struct dentry *rootdir); + int (*handle_static_data)(struct wl1271 *wl, + struct wl1271_static_data *static_data); + int (*get_spare_blocks)(struct wl1271 *wl, bool is_gem); + int (*set_key)(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct 
ieee80211_key_conf *key_conf); + u32 (*pre_pkt_send)(struct wl1271 *wl, u32 buf_offset, u32 last_len); }; enum wlcore_partitions { @@ -109,6 +129,15 @@ enum wlcore_registers { REG_TABLE_LEN, }; +struct wl1271_stats { + void *fw_stats; + unsigned long fw_stats_update; + size_t fw_stats_len; + + unsigned int retry_count; + unsigned int excessive_retries; +}; + struct wl1271 { struct ieee80211_hw *hw; bool mac80211_registered; @@ -121,13 +150,14 @@ struct wl1271 { void (*set_power)(bool enable); int irq; - int ref_clock; spinlock_t wl_lock; enum wl1271_state state; enum wl12xx_fw_type fw_type; bool plt; + enum plt_mode plt_mode; + u8 fem_manuf; u8 last_vif_count; struct mutex mutex; @@ -186,7 +216,7 @@ struct wl1271 { /* Frames scheduled for transmission, not handled yet */ int tx_queue_count[NUM_TX_QUEUES]; - long stopped_queues_map; + unsigned long queue_stop_reasons[NUM_TX_QUEUES]; /* Frames received, not handled yet by mac80211 */ struct sk_buff_head deferred_rx_queue; @@ -205,9 +235,6 @@ struct wl1271 { /* FW Rx counter */ u32 rx_counter; - /* Rx memory pool address */ - struct wl1271_rx_mem_pool_addr rx_mem_pool_addr; - /* Intermediate buffer, used for packet aggregation */ u8 *aggr_buf; @@ -228,6 +255,7 @@ struct wl1271 { /* Hardware recovery work */ struct work_struct recovery_work; + bool watchdog_recovery; /* Pointer that holds DMA-friendly block for the mailbox */ struct event_mailbox *mbox; @@ -263,7 +291,8 @@ struct wl1271 { u32 buffer_cmd; u32 buffer_busyword[WL1271_BUSY_WORD_CNT]; - struct wl_fw_status *fw_status; + struct wl_fw_status_1 *fw_status_1; + struct wl_fw_status_2 *fw_status_2; struct wl1271_tx_hw_res_if *tx_res_if; /* Current chipset configuration */ @@ -277,9 +306,7 @@ struct wl1271 { s8 noise; /* bands supported by this instance of wl12xx */ - struct ieee80211_supported_band bands[IEEE80211_NUM_BANDS]; - - int tcxo_clock; + struct ieee80211_supported_band bands[WLCORE_NUM_BANDS]; /* * wowlan trigger was configured during suspend. @@ -333,10 +360,8 @@ struct wl1271 { /* number of TX descriptors the HW supports. */ u32 num_tx_desc; - - /* spare Tx blocks for normal/GEM operating modes */ - u32 normal_tx_spare; - u32 gem_tx_spare; + /* number of RX descriptors the HW supports. 
*/ + u32 num_rx_desc; /* translate HW Tx rates to standard rate-indices */ const u8 **band_rate_to_idx; @@ -348,19 +373,57 @@ struct wl1271 { u8 hw_min_ht_rate; /* HW HT (11n) capabilities */ - struct ieee80211_sta_ht_cap ht_cap; + struct ieee80211_sta_ht_cap ht_cap[WLCORE_NUM_BANDS]; /* size of the private FW status data */ size_t fw_status_priv_len; /* RX Data filter rule state - enabled/disabled */ bool rx_filter_enabled[WL1271_MAX_RX_FILTERS]; + + /* size of the private static data */ + size_t static_data_priv_len; + + /* the current channel type */ + enum nl80211_channel_type channel_type; + + /* mutex for protecting the tx_flush function */ + struct mutex flush_mutex; + + /* sleep auth value currently configured to FW */ + int sleep_auth; + + /* the minimum FW version required for the driver to work */ + unsigned int min_fw_ver[NUM_FW_VER]; }; int __devinit wlcore_probe(struct wl1271 *wl, struct platform_device *pdev); int __devexit wlcore_remove(struct platform_device *pdev); struct ieee80211_hw *wlcore_alloc_hw(size_t priv_size); int wlcore_free_hw(struct wl1271 *wl); +int wlcore_set_key(struct wl1271 *wl, enum set_key_cmd cmd, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key_conf); + +static inline void +wlcore_set_ht_cap(struct wl1271 *wl, enum ieee80211_band band, + struct ieee80211_sta_ht_cap *ht_cap) +{ + memcpy(&wl->ht_cap[band], ht_cap, sizeof(*ht_cap)); +} + +static inline void +wlcore_set_min_fw_ver(struct wl1271 *wl, unsigned int chip, + unsigned int iftype, unsigned int major, + unsigned int subtype, unsigned int minor) +{ + wl->min_fw_ver[FW_VER_CHIP] = chip; + wl->min_fw_ver[FW_VER_IF_TYPE] = iftype; + wl->min_fw_ver[FW_VER_MAJOR] = major; + wl->min_fw_ver[FW_VER_SUBTYPE] = subtype; + wl->min_fw_ver[FW_VER_MINOR] = minor; +} /* Firmware image load chunk size */ #define CHUNK_SIZE 16384 @@ -385,6 +448,18 @@ int wlcore_free_hw(struct wl1271 *wl); /* Some firmwares may not support ELP */ #define WLCORE_QUIRK_NO_ELP BIT(6) +/* pad only the last frame in the aggregate buffer */ +#define WLCORE_QUIRK_TX_PAD_LAST_FRAME BIT(7) + +/* extra header space is required for TKIP */ +#define WLCORE_QUIRK_TKIP_HEADER_SPACE BIT(8) + +/* Some firmwares not support sched scans while connected */ +#define WLCORE_QUIRK_NO_SCHED_SCAN_WHILE_CONN BIT(9) + +/* separate probe response templates for one-shot and sched scans */ +#define WLCORE_QUIRK_DUAL_PROBE_TMPL BIT(10) + /* TODO: move to the lower drivers when all usages are abstracted */ #define CHIP_ID_1271_PG10 (0x4030101) #define CHIP_ID_1271_PG20 (0x4030111) diff --git a/drivers/net/wireless/ti/wlcore/wl12xx.h b/drivers/net/wireless/ti/wlcore/wlcore_i.h index f12bdf745180..c0505635bb00 100644 --- a/drivers/net/wireless/ti/wlcore/wl12xx.h +++ b/drivers/net/wireless/ti/wlcore/wlcore_i.h @@ -22,8 +22,8 @@ * */ -#ifndef __WL12XX_H__ -#define __WL12XX_H__ +#ifndef __WLCORE_I_H__ +#define __WLCORE_I_H__ #include <linux/mutex.h> #include <linux/completion.h> @@ -35,15 +35,6 @@ #include "conf.h" #include "ini.h" -#define WL127X_FW_NAME_MULTI "ti-connectivity/wl127x-fw-4-mr.bin" -#define WL127X_FW_NAME_SINGLE "ti-connectivity/wl127x-fw-4-sr.bin" - -#define WL128X_FW_NAME_MULTI "ti-connectivity/wl128x-fw-4-mr.bin" -#define WL128X_FW_NAME_SINGLE "ti-connectivity/wl128x-fw-4-sr.bin" - -#define WL127X_PLT_FW_NAME "ti-connectivity/wl127x-fw-4-plt.bin" -#define WL128X_PLT_FW_NAME "ti-connectivity/wl128x-fw-4-plt.bin" - /* * wl127x and wl128x are using the same NVS file name. 
However, the * ini parameters between them are different. The driver validates @@ -71,6 +62,9 @@ #define WL12XX_INVALID_ROLE_ID 0xff #define WL12XX_INVALID_LINK_ID 0xff +/* the driver supports the 2.4Ghz and 5Ghz bands */ +#define WLCORE_NUM_BANDS 2 + #define WL12XX_MAX_RATE_POLICIES 16 /* Defined by FW as 0. Will not be freed or allocated. */ @@ -89,7 +83,7 @@ #define WL1271_AP_BSS_INDEX 0 #define WL1271_AP_DEF_BEACON_EXP 20 -#define WL1271_AGGR_BUFFER_SIZE (4 * PAGE_SIZE) +#define WL1271_AGGR_BUFFER_SIZE (5 * PAGE_SIZE) enum wl1271_state { WL1271_STATE_OFF, @@ -132,16 +126,7 @@ struct wl1271_chip { unsigned int fw_ver[NUM_FW_VER]; }; -struct wl1271_stats { - struct acx_statistics *fw_stats; - unsigned long fw_stats_update; - - unsigned int retry_count; - unsigned int excessive_retries; -}; - #define NUM_TX_QUEUES 4 -#define NUM_RX_PKT_DESC 8 #define AP_MAX_STATIONS 8 @@ -159,13 +144,26 @@ struct wl_fw_packet_counters { } __packed; /* FW status registers */ -struct wl_fw_status { +struct wl_fw_status_1 { __le32 intr; u8 fw_rx_counter; u8 drv_rx_counter; u8 reserved; u8 tx_results_counter; - __le32 rx_pkt_descs[NUM_RX_PKT_DESC]; + __le32 rx_pkt_descs[0]; +} __packed; + +/* + * Each HW arch has a different number of Rx descriptors. + * The length of the status depends on it, since it holds an array + * of descriptors. + */ +#define WLCORE_FW_STATUS_1_LEN(num_rx_desc) \ + (sizeof(struct wl_fw_status_1) + \ + (sizeof(((struct wl_fw_status_1 *)0)->rx_pkt_descs[0])) * \ + num_rx_desc) + +struct wl_fw_status_2 { __le32 fw_localtime; /* @@ -194,11 +192,6 @@ struct wl_fw_status { u8 priv[0]; } __packed; -struct wl1271_rx_mem_pool_addr { - u32 addr; - u32 addr_extra; -}; - #define WL1271_MAX_CHANNELS 64 struct wl1271_scan { struct cfg80211_scan_request *req; @@ -210,10 +203,10 @@ struct wl1271_scan { }; struct wl1271_if_operations { - void (*read)(struct device *child, int addr, void *buf, size_t len, - bool fixed); - void (*write)(struct device *child, int addr, void *buf, size_t len, - bool fixed); + int __must_check (*read)(struct device *child, int addr, void *buf, + size_t len, bool fixed); + int __must_check (*write)(struct device *child, int addr, void *buf, + size_t len, bool fixed); void (*reset)(struct device *child); void (*init)(struct device *child); int (*power)(struct device *child, bool enable); @@ -248,6 +241,7 @@ enum wl12xx_flags { WL1271_FLAG_RECOVERY_IN_PROGRESS, WL1271_FLAG_VIF_CHANGE_IN_PROGRESS, WL1271_FLAG_INTENDED_FW_RECOVERY, + WL1271_FLAG_IO_FAILED, }; enum wl12xx_vif_flags { @@ -299,6 +293,12 @@ enum rx_filter_action { FILTER_FW_HANDLE = 2 }; +enum plt_mode { + PLT_OFF = 0, + PLT_ON = 1, + PLT_FEM_DETECT = 2, +}; + struct wl12xx_rx_filter_field { __le16 offset; u8 len; @@ -367,8 +367,9 @@ struct wl12xx_vif { /* The current band */ enum ieee80211_band band; int channel; + enum nl80211_channel_type channel_type; - u32 bitrate_masks[IEEE80211_NUM_BANDS]; + u32 bitrate_masks[WLCORE_NUM_BANDS]; u32 basic_rate_set; /* @@ -417,9 +418,6 @@ struct wl12xx_vif { struct work_struct rx_streaming_disable_work; struct timer_list rx_streaming_timer; - /* does the current role use GEM for encryption (AP or STA) */ - bool is_gem; - /* * This struct must be last! * data that has to be saved acrossed reconfigs (e.g. 
recovery) @@ -467,7 +465,7 @@ struct ieee80211_vif *wl12xx_wlvif_to_vif(struct wl12xx_vif *wlvif) #define wl12xx_for_each_wlvif_ap(wl, wlvif) \ wl12xx_for_each_wlvif_bss_type(wl, wlvif, BSS_TYPE_AP_BSS) -int wl1271_plt_start(struct wl1271 *wl); +int wl1271_plt_start(struct wl1271 *wl, const enum plt_mode plt_mode); int wl1271_plt_stop(struct wl1271 *wl); int wl1271_recalc_rx_streaming(struct wl1271 *wl, struct wl12xx_vif *wlvif); void wl12xx_queue_recovery_work(struct wl1271 *wl); @@ -501,7 +499,8 @@ void wl1271_rx_filter_flatten_fields(struct wl12xx_rx_filter *filter, /* Macros to handle wl1271.sta_rate_set */ #define HW_BG_RATES_MASK 0xffff #define HW_HT_RATES_OFFSET 16 +#define HW_MIMO_RATES_OFFSET 24 #define WL12XX_HW_BLOCK_SIZE 256 -#endif +#endif /* __WLCORE_I_H__ */ diff --git a/drivers/net/wireless/zd1211rw/zd_chip.h b/drivers/net/wireless/zd1211rw/zd_chip.h index 117c4123943c..7ab922209b25 100644 --- a/drivers/net/wireless/zd1211rw/zd_chip.h +++ b/drivers/net/wireless/zd1211rw/zd_chip.h @@ -827,7 +827,7 @@ int zd_ioread32v_locked(struct zd_chip *chip, u32 *values, static inline int zd_ioread32_locked(struct zd_chip *chip, u32 *value, const zd_addr_t addr) { - return zd_ioread32v_locked(chip, value, (const zd_addr_t *)&addr, 1); + return zd_ioread32v_locked(chip, value, &addr, 1); } static inline int zd_iowrite16_locked(struct zd_chip *chip, u16 value, diff --git a/drivers/net/wireless/zd1211rw/zd_usb.h b/drivers/net/wireless/zd1211rw/zd_usb.h index 99193b456a79..45e3bb28a01c 100644 --- a/drivers/net/wireless/zd1211rw/zd_usb.h +++ b/drivers/net/wireless/zd1211rw/zd_usb.h @@ -274,7 +274,7 @@ int zd_usb_ioread16v(struct zd_usb *usb, u16 *values, static inline int zd_usb_ioread16(struct zd_usb *usb, u16 *value, const zd_addr_t addr) { - return zd_usb_ioread16v(usb, value, (const zd_addr_t *)&addr, 1); + return zd_usb_ioread16v(usb, value, &addr, 1); } void zd_usb_iowrite16v_async_start(struct zd_usb *usb); diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index f4a6fcaeffb1..682633bfe00f 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -1363,8 +1363,6 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) INVALID_PENDING_IDX); } - __skb_queue_tail(&netbk->tx_queue, skb); - netbk->pending_cons++; request_gop = xen_netbk_get_requests(netbk, vif, @@ -1376,6 +1374,8 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) } gop = request_gop; + __skb_queue_tail(&netbk->tx_queue, skb); + vif->tx.req_cons = idx; xen_netbk_check_rx_xenvif(vif); diff --git a/drivers/nfc/nfcwilink.c b/drivers/nfc/nfcwilink.c index 1f74a77d040d..e7fd4938f9bc 100644 --- a/drivers/nfc/nfcwilink.c +++ b/drivers/nfc/nfcwilink.c @@ -535,9 +535,10 @@ static int nfcwilink_probe(struct platform_device *pdev) drv->pdev = pdev; protocols = NFC_PROTO_JEWEL_MASK - | NFC_PROTO_MIFARE_MASK | NFC_PROTO_FELICA_MASK - | NFC_PROTO_ISO14443_MASK - | NFC_PROTO_NFC_DEP_MASK; + | NFC_PROTO_MIFARE_MASK | NFC_PROTO_FELICA_MASK + | NFC_PROTO_ISO14443_MASK + | NFC_PROTO_ISO14443_B_MASK + | NFC_PROTO_NFC_DEP_MASK; drv->ndev = nci_allocate_device(&nfcwilink_ops, protocols, diff --git a/drivers/nfc/pn533.c b/drivers/nfc/pn533.c index 19110f0eb15f..d606f52fec84 100644 --- a/drivers/nfc/pn533.c +++ b/drivers/nfc/pn533.c @@ -38,13 +38,51 @@ #define SCM_VENDOR_ID 0x4E6 #define SCL3711_PRODUCT_ID 0x5591 +#define SONY_VENDOR_ID 0x054c +#define PASORI_PRODUCT_ID 0x02e1 + +#define PN533_QUIRKS_TYPE_A BIT(0) +#define PN533_QUIRKS_TYPE_F BIT(1) +#define 
PN533_QUIRKS_DEP BIT(2) +#define PN533_QUIRKS_RAW_EXCHANGE BIT(3) + +#define PN533_DEVICE_STD 0x1 +#define PN533_DEVICE_PASORI 0x2 + +#define PN533_ALL_PROTOCOLS (NFC_PROTO_JEWEL_MASK | NFC_PROTO_MIFARE_MASK |\ + NFC_PROTO_FELICA_MASK | NFC_PROTO_ISO14443_MASK |\ + NFC_PROTO_NFC_DEP_MASK |\ + NFC_PROTO_ISO14443_B_MASK) + +#define PN533_NO_TYPE_B_PROTOCOLS (NFC_PROTO_JEWEL_MASK | \ + NFC_PROTO_MIFARE_MASK | \ + NFC_PROTO_FELICA_MASK | \ + NFC_PROTO_ISO14443_MASK | \ + NFC_PROTO_NFC_DEP_MASK) + static const struct usb_device_id pn533_table[] = { - { USB_DEVICE(PN533_VENDOR_ID, PN533_PRODUCT_ID) }, - { USB_DEVICE(SCM_VENDOR_ID, SCL3711_PRODUCT_ID) }, + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = PN533_VENDOR_ID, + .idProduct = PN533_PRODUCT_ID, + .driver_info = PN533_DEVICE_STD, + }, + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = SCM_VENDOR_ID, + .idProduct = SCL3711_PRODUCT_ID, + .driver_info = PN533_DEVICE_STD, + }, + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE, + .idVendor = SONY_VENDOR_ID, + .idProduct = PASORI_PRODUCT_ID, + .driver_info = PN533_DEVICE_PASORI, + }, { } }; MODULE_DEVICE_TABLE(usb, pn533_table); +/* How much time we spend listening for initiators */ +#define PN533_LISTEN_TIME 2 + /* frame definitions */ #define PN533_FRAME_TAIL_SIZE 2 #define PN533_FRAME_SIZE(f) (sizeof(struct pn533_frame) + f->datalen + \ @@ -69,11 +107,16 @@ MODULE_DEVICE_TABLE(usb, pn533_table); #define PN533_CMD_GET_FIRMWARE_VERSION 0x02 #define PN533_CMD_RF_CONFIGURATION 0x32 #define PN533_CMD_IN_DATA_EXCHANGE 0x40 +#define PN533_CMD_IN_COMM_THRU 0x42 #define PN533_CMD_IN_LIST_PASSIVE_TARGET 0x4A #define PN533_CMD_IN_ATR 0x50 #define PN533_CMD_IN_RELEASE 0x52 #define PN533_CMD_IN_JUMP_FOR_DEP 0x56 +#define PN533_CMD_TG_INIT_AS_TARGET 0x8c +#define PN533_CMD_TG_GET_DATA 0x86 +#define PN533_CMD_TG_SET_DATA 0x8e + #define PN533_CMD_RESPONSE(cmd) (cmd + 1) /* PN533 Return codes */ @@ -81,6 +124,9 @@ MODULE_DEVICE_TABLE(usb, pn533_table); #define PN533_CMD_MI_MASK 0x40 #define PN533_CMD_RET_SUCCESS 0x00 +/* PN533 status codes */ +#define PN533_STATUS_TARGET_RELEASED 0x29 + struct pn533; typedef int (*pn533_cmd_complete_t) (struct pn533 *dev, void *arg, @@ -97,7 +143,14 @@ struct pn533_fw_version { }; /* PN533_CMD_RF_CONFIGURATION */ +#define PN533_CFGITEM_TIMING 0x02 #define PN533_CFGITEM_MAX_RETRIES 0x05 +#define PN533_CFGITEM_PASORI 0x82 + +#define PN533_CONFIG_TIMING_102 0xb +#define PN533_CONFIG_TIMING_204 0xc +#define PN533_CONFIG_TIMING_409 0xd +#define PN533_CONFIG_TIMING_819 0xe #define PN533_CONFIG_MAX_RETRIES_NO_RETRY 0x00 #define PN533_CONFIG_MAX_RETRIES_ENDLESS 0xFF @@ -108,6 +161,12 @@ struct pn533_config_max_retries { u8 mx_rty_passive_act; } __packed; +struct pn533_config_timing { + u8 rfu; + u8 atr_res_timeout; + u8 dep_timeout; +} __packed; + /* PN533_CMD_IN_LIST_PASSIVE_TARGET */ /* felica commands opcode */ @@ -144,6 +203,7 @@ enum { PN533_POLL_MOD_424KBPS_FELICA, PN533_POLL_MOD_106KBPS_JEWEL, PN533_POLL_MOD_847KBPS_B, + PN533_LISTEN_MOD, __PN533_POLL_MOD_AFTER_LAST, }; @@ -211,6 +271,9 @@ const struct pn533_poll_modulations poll_mod[] = { }, .len = 3, }, + [PN533_LISTEN_MOD] = { + .len = 0, + }, }; /* PN533_CMD_IN_ATR */ @@ -237,7 +300,7 @@ struct pn533_cmd_jump_dep { u8 active; u8 baud; u8 next; - u8 gt[]; + u8 data[]; } __packed; struct pn533_cmd_jump_dep_response { @@ -253,6 +316,29 @@ struct pn533_cmd_jump_dep_response { u8 gt[]; } __packed; + +/* PN533_TG_INIT_AS_TARGET */ +#define PN533_INIT_TARGET_PASSIVE 0x1 +#define PN533_INIT_TARGET_DEP 0x2 + 
+#define PN533_INIT_TARGET_RESP_FRAME_MASK 0x3 +#define PN533_INIT_TARGET_RESP_ACTIVE 0x1 +#define PN533_INIT_TARGET_RESP_DEP 0x4 + +struct pn533_cmd_init_target { + u8 mode; + u8 mifare[6]; + u8 felica[18]; + u8 nfcid3[10]; + u8 gb_len; + u8 gb[]; +} __packed; + +struct pn533_cmd_init_target_response { + u8 mode; + u8 cmd[]; +} __packed; + struct pn533 { struct usb_device *udev; struct usb_interface *interface; @@ -270,22 +356,33 @@ struct pn533 { struct workqueue_struct *wq; struct work_struct cmd_work; + struct work_struct poll_work; struct work_struct mi_work; + struct work_struct tg_work; + struct timer_list listen_timer; struct pn533_frame *wq_in_frame; int wq_in_error; + int cancel_listen; pn533_cmd_complete_t cmd_complete; void *cmd_complete_arg; - struct semaphore cmd_lock; + struct mutex cmd_lock; u8 cmd; struct pn533_poll_modulations *poll_mod_active[PN533_POLL_MOD_MAX + 1]; u8 poll_mod_count; u8 poll_mod_curr; u32 poll_protocols; + u32 listen_protocols; + + u8 *gb; + size_t gb_len; u8 tgt_available_prots; u8 tgt_active_prot; + u8 tgt_mode; + + u32 device_type; }; struct pn533_frame { @@ -405,7 +502,7 @@ static void pn533_wq_cmd_complete(struct work_struct *work) PN533_FRAME_CMD_PARAMS_LEN(in_frame)); if (rc != -EINPROGRESS) - up(&dev->cmd_lock); + mutex_unlock(&dev->cmd_lock); } static void pn533_recv_response(struct urb *urb) @@ -583,7 +680,7 @@ static int pn533_send_cmd_frame_async(struct pn533 *dev, nfc_dev_dbg(&dev->interface->dev, "%s", __func__); - if (down_trylock(&dev->cmd_lock)) + if (!mutex_trylock(&dev->cmd_lock)) return -EBUSY; rc = __pn533_send_cmd_frame_async(dev, out_frame, in_frame, @@ -593,7 +690,7 @@ static int pn533_send_cmd_frame_async(struct pn533 *dev, return 0; error: - up(&dev->cmd_lock); + mutex_unlock(&dev->cmd_lock); return rc; } @@ -892,7 +989,7 @@ static int pn533_target_found_type_b(struct nfc_target *nfc_tgt, u8 *tgt_data, if (!pn533_target_type_b_is_valid(tgt_type_b, tgt_data_len)) return -EPROTO; - nfc_tgt->supported_protocols = NFC_PROTO_ISO14443_MASK; + nfc_tgt->supported_protocols = NFC_PROTO_ISO14443_B_MASK; return 0; } @@ -963,6 +1060,11 @@ static int pn533_target_found(struct pn533 *dev, return 0; } +static inline void pn533_poll_next_mod(struct pn533 *dev) +{ + dev->poll_mod_curr = (dev->poll_mod_curr + 1) % dev->poll_mod_count; +} + static void pn533_poll_reset_mod_list(struct pn533 *dev) { dev->poll_mod_count = 0; @@ -975,102 +1077,283 @@ static void pn533_poll_add_mod(struct pn533 *dev, u8 mod_index) dev->poll_mod_count++; } -static void pn533_poll_create_mod_list(struct pn533 *dev, u32 protocols) +static void pn533_poll_create_mod_list(struct pn533 *dev, + u32 im_protocols, u32 tm_protocols) { pn533_poll_reset_mod_list(dev); - if (protocols & NFC_PROTO_MIFARE_MASK - || protocols & NFC_PROTO_ISO14443_MASK - || protocols & NFC_PROTO_NFC_DEP_MASK) + if (im_protocols & NFC_PROTO_MIFARE_MASK + || im_protocols & NFC_PROTO_ISO14443_MASK + || im_protocols & NFC_PROTO_NFC_DEP_MASK) pn533_poll_add_mod(dev, PN533_POLL_MOD_106KBPS_A); - if (protocols & NFC_PROTO_FELICA_MASK - || protocols & NFC_PROTO_NFC_DEP_MASK) { + if (im_protocols & NFC_PROTO_FELICA_MASK + || im_protocols & NFC_PROTO_NFC_DEP_MASK) { pn533_poll_add_mod(dev, PN533_POLL_MOD_212KBPS_FELICA); pn533_poll_add_mod(dev, PN533_POLL_MOD_424KBPS_FELICA); } - if (protocols & NFC_PROTO_JEWEL_MASK) + if (im_protocols & NFC_PROTO_JEWEL_MASK) pn533_poll_add_mod(dev, PN533_POLL_MOD_106KBPS_JEWEL); - if (protocols & NFC_PROTO_ISO14443_MASK) + if (im_protocols & NFC_PROTO_ISO14443_B_MASK) 
pn533_poll_add_mod(dev, PN533_POLL_MOD_847KBPS_B); + + if (tm_protocols) + pn533_poll_add_mod(dev, PN533_LISTEN_MOD); } -static void pn533_start_poll_frame(struct pn533_frame *frame, - struct pn533_poll_modulations *mod) +static int pn533_start_poll_complete(struct pn533 *dev, void *arg, + u8 *params, int params_len) { + struct pn533_poll_response *resp; + int rc; - pn533_tx_frame_init(frame, PN533_CMD_IN_LIST_PASSIVE_TARGET); + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); - memcpy(PN533_FRAME_CMD_PARAMS_PTR(frame), &mod->data, mod->len); - frame->datalen += mod->len; + resp = (struct pn533_poll_response *) params; + if (resp->nbtg) { + rc = pn533_target_found(dev, resp, params_len); + + /* We must stop the poll after a valid target found */ + if (rc == 0) { + pn533_poll_reset_mod_list(dev); + return 0; + } + } + + return -EAGAIN; +} + +static int pn533_init_target_frame(struct pn533_frame *frame, + u8 *gb, size_t gb_len) +{ + struct pn533_cmd_init_target *cmd; + size_t cmd_len; + u8 felica_params[18] = {0x1, 0xfe, /* DEP */ + 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, /* random */ + 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, + 0xff, 0xff}; /* System code */ + u8 mifare_params[6] = {0x1, 0x1, /* SENS_RES */ + 0x0, 0x0, 0x0, + 0x40}; /* SEL_RES for DEP */ + + cmd_len = sizeof(struct pn533_cmd_init_target) + gb_len + 1; + cmd = kzalloc(cmd_len, GFP_KERNEL); + if (cmd == NULL) + return -ENOMEM; + + pn533_tx_frame_init(frame, PN533_CMD_TG_INIT_AS_TARGET); + + /* DEP support only */ + cmd->mode |= PN533_INIT_TARGET_DEP; + + /* Felica params */ + memcpy(cmd->felica, felica_params, 18); + get_random_bytes(cmd->felica + 2, 6); + + /* NFCID3 */ + memset(cmd->nfcid3, 0, 10); + memcpy(cmd->nfcid3, cmd->felica, 8); + + /* MIFARE params */ + memcpy(cmd->mifare, mifare_params, 6); + + /* General bytes */ + cmd->gb_len = gb_len; + memcpy(cmd->gb, gb, gb_len); + + /* Len Tk */ + cmd->gb[gb_len] = 0; + + memcpy(PN533_FRAME_CMD_PARAMS_PTR(frame), cmd, cmd_len); + + frame->datalen += cmd_len; pn533_tx_frame_finish(frame); + + kfree(cmd); + + return 0; } -static int pn533_start_poll_complete(struct pn533 *dev, void *arg, - u8 *params, int params_len) +#define PN533_CMD_DATAEXCH_HEAD_LEN (sizeof(struct pn533_frame) + 3) +#define PN533_CMD_DATAEXCH_DATA_MAXLEN 262 +static int pn533_tm_get_data_complete(struct pn533 *dev, void *arg, + u8 *params, int params_len) { - struct pn533_poll_response *resp; - struct pn533_poll_modulations *next_mod; - int rc; + struct sk_buff *skb_resp = arg; + struct pn533_frame *in_frame = (struct pn533_frame *) skb_resp->data; nfc_dev_dbg(&dev->interface->dev, "%s", __func__); - if (params_len == -ENOENT) { - nfc_dev_dbg(&dev->interface->dev, "Polling operation has been" - " stopped"); - goto stop_poll; + if (params_len < 0) { + nfc_dev_err(&dev->interface->dev, + "Error %d when starting as a target", + params_len); + + return params_len; + } + + if (params_len > 0 && params[0] != 0) { + nfc_tm_deactivated(dev->nfc_dev); + + dev->tgt_mode = 0; + + kfree_skb(skb_resp); + return 0; } + skb_put(skb_resp, PN533_FRAME_SIZE(in_frame)); + skb_pull(skb_resp, PN533_CMD_DATAEXCH_HEAD_LEN); + skb_trim(skb_resp, skb_resp->len - PN533_FRAME_TAIL_SIZE); + + return nfc_tm_data_received(dev->nfc_dev, skb_resp); +} + +static void pn533_wq_tg_get_data(struct work_struct *work) +{ + struct pn533 *dev = container_of(work, struct pn533, tg_work); + struct pn533_frame *in_frame; + struct sk_buff *skb_resp; + size_t skb_resp_len; + + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + + skb_resp_len = 
PN533_CMD_DATAEXCH_HEAD_LEN + + PN533_CMD_DATAEXCH_DATA_MAXLEN + + PN533_FRAME_TAIL_SIZE; + + skb_resp = nfc_alloc_recv_skb(skb_resp_len, GFP_KERNEL); + if (!skb_resp) + return; + + in_frame = (struct pn533_frame *)skb_resp->data; + + pn533_tx_frame_init(dev->out_frame, PN533_CMD_TG_GET_DATA); + pn533_tx_frame_finish(dev->out_frame); + + pn533_send_cmd_frame_async(dev, dev->out_frame, in_frame, + skb_resp_len, + pn533_tm_get_data_complete, + skb_resp, GFP_KERNEL); + + return; +} + +#define ATR_REQ_GB_OFFSET 17 +static int pn533_init_target_complete(struct pn533 *dev, void *arg, + u8 *params, int params_len) +{ + struct pn533_cmd_init_target_response *resp; + u8 frame, comm_mode = NFC_COMM_PASSIVE, *gb; + size_t gb_len; + int rc; + + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + if (params_len < 0) { - nfc_dev_err(&dev->interface->dev, "Error %d when running poll", - params_len); - goto stop_poll; + nfc_dev_err(&dev->interface->dev, + "Error %d when starting as a target", + params_len); + + return params_len; } - resp = (struct pn533_poll_response *) params; - if (resp->nbtg) { - rc = pn533_target_found(dev, resp, params_len); + if (params_len < ATR_REQ_GB_OFFSET + 1) + return -EINVAL; - /* We must stop the poll after a valid target found */ - if (rc == 0) - goto stop_poll; + resp = (struct pn533_cmd_init_target_response *) params; + + nfc_dev_dbg(&dev->interface->dev, "Target mode 0x%x param len %d\n", + resp->mode, params_len); + + frame = resp->mode & PN533_INIT_TARGET_RESP_FRAME_MASK; + if (frame == PN533_INIT_TARGET_RESP_ACTIVE) + comm_mode = NFC_COMM_ACTIVE; - if (rc != -EAGAIN) - nfc_dev_err(&dev->interface->dev, "The target found is" - " not valid - continuing to poll"); + /* Again, only DEP */ + if ((resp->mode & PN533_INIT_TARGET_RESP_DEP) == 0) + return -EOPNOTSUPP; + + gb = resp->cmd + ATR_REQ_GB_OFFSET; + gb_len = params_len - (ATR_REQ_GB_OFFSET + 1); + + rc = nfc_tm_activated(dev->nfc_dev, NFC_PROTO_NFC_DEP_MASK, + comm_mode, gb, gb_len); + if (rc < 0) { + nfc_dev_err(&dev->interface->dev, + "Error when signaling target activation"); + return rc; } - dev->poll_mod_curr = (dev->poll_mod_curr + 1) % dev->poll_mod_count; + dev->tgt_mode = 1; - next_mod = dev->poll_mod_active[dev->poll_mod_curr]; + queue_work(dev->wq, &dev->tg_work); - nfc_dev_dbg(&dev->interface->dev, "Polling next modulation (0x%x)", - dev->poll_mod_curr); + return 0; +} - pn533_start_poll_frame(dev->out_frame, next_mod); +static void pn533_listen_mode_timer(unsigned long data) +{ + struct pn533 *dev = (struct pn533 *) data; - /* Don't need to down the semaphore again */ - rc = __pn533_send_cmd_frame_async(dev, dev->out_frame, dev->in_frame, - dev->in_maxlen, pn533_start_poll_complete, - NULL, GFP_ATOMIC); + nfc_dev_dbg(&dev->interface->dev, "Listen mode timeout"); + + /* An ack will cancel the last issued command (poll) */ + pn533_send_ack(dev, GFP_ATOMIC); + + dev->cancel_listen = 1; + + mutex_unlock(&dev->cmd_lock); + + pn533_poll_next_mod(dev); + + queue_work(dev->wq, &dev->poll_work); +} + +static int pn533_poll_complete(struct pn533 *dev, void *arg, + u8 *params, int params_len) +{ + struct pn533_poll_modulations *cur_mod; + int rc; + + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + + if (params_len == -ENOENT) { + if (dev->poll_mod_count != 0) + return 0; + + nfc_dev_err(&dev->interface->dev, + "Polling operation has been stopped"); - if (rc == -EPERM) { - nfc_dev_dbg(&dev->interface->dev, "Cannot poll next modulation" - " because poll has been stopped"); goto stop_poll; } - if (rc) { - 
nfc_dev_err(&dev->interface->dev, "Error %d when trying to poll" - " next modulation", rc); + if (params_len < 0) { + nfc_dev_err(&dev->interface->dev, + "Error %d when running poll", params_len); + goto stop_poll; } - /* Inform caller function to do not up the semaphore */ - return -EINPROGRESS; + cur_mod = dev->poll_mod_active[dev->poll_mod_curr]; + + if (cur_mod->len == 0) { + del_timer(&dev->listen_timer); + + return pn533_init_target_complete(dev, arg, params, params_len); + } else { + rc = pn533_start_poll_complete(dev, arg, params, params_len); + if (!rc) + return rc; + } + + pn533_poll_next_mod(dev); + + queue_work(dev->wq, &dev->poll_work); + + return 0; stop_poll: pn533_poll_reset_mod_list(dev); @@ -1078,61 +1361,104 @@ stop_poll: return 0; } -static int pn533_start_poll(struct nfc_dev *nfc_dev, u32 protocols) +static void pn533_build_poll_frame(struct pn533 *dev, + struct pn533_frame *frame, + struct pn533_poll_modulations *mod) { - struct pn533 *dev = nfc_get_drvdata(nfc_dev); - struct pn533_poll_modulations *start_mod; - int rc; + nfc_dev_dbg(&dev->interface->dev, "mod len %d\n", mod->len); - nfc_dev_dbg(&dev->interface->dev, "%s - protocols=0x%x", __func__, - protocols); + if (mod->len == 0) { + /* Listen mode */ + pn533_init_target_frame(frame, dev->gb, dev->gb_len); + } else { + /* Polling mode */ + pn533_tx_frame_init(frame, PN533_CMD_IN_LIST_PASSIVE_TARGET); - if (dev->poll_mod_count) { - nfc_dev_err(&dev->interface->dev, "Polling operation already" - " active"); - return -EBUSY; - } + memcpy(PN533_FRAME_CMD_PARAMS_PTR(frame), &mod->data, mod->len); + frame->datalen += mod->len; - if (dev->tgt_active_prot) { - nfc_dev_err(&dev->interface->dev, "Cannot poll with a target" - " already activated"); - return -EBUSY; + pn533_tx_frame_finish(frame); } +} + +static int pn533_send_poll_frame(struct pn533 *dev) +{ + struct pn533_poll_modulations *cur_mod; + int rc; - pn533_poll_create_mod_list(dev, protocols); + cur_mod = dev->poll_mod_active[dev->poll_mod_curr]; - if (!dev->poll_mod_count) { - nfc_dev_err(&dev->interface->dev, "No valid protocols" - " specified"); - rc = -EINVAL; - goto error; + pn533_build_poll_frame(dev, dev->out_frame, cur_mod); + + rc = pn533_send_cmd_frame_async(dev, dev->out_frame, dev->in_frame, + dev->in_maxlen, pn533_poll_complete, + NULL, GFP_KERNEL); + if (rc) + nfc_dev_err(&dev->interface->dev, "Polling loop error %d", rc); + + return rc; +} + +static void pn533_wq_poll(struct work_struct *work) +{ + struct pn533 *dev = container_of(work, struct pn533, poll_work); + struct pn533_poll_modulations *cur_mod; + int rc; + + cur_mod = dev->poll_mod_active[dev->poll_mod_curr]; + + nfc_dev_dbg(&dev->interface->dev, + "%s cancel_listen %d modulation len %d", + __func__, dev->cancel_listen, cur_mod->len); + + if (dev->cancel_listen == 1) { + dev->cancel_listen = 0; + usb_kill_urb(dev->in_urb); } - nfc_dev_dbg(&dev->interface->dev, "It will poll %d modulations types", - dev->poll_mod_count); + rc = pn533_send_poll_frame(dev); + if (rc) + return; - dev->poll_mod_curr = 0; - start_mod = dev->poll_mod_active[dev->poll_mod_curr]; + if (cur_mod->len == 0 && dev->poll_mod_count > 1) + mod_timer(&dev->listen_timer, jiffies + PN533_LISTEN_TIME * HZ); - pn533_start_poll_frame(dev->out_frame, start_mod); + return; +} - rc = pn533_send_cmd_frame_async(dev, dev->out_frame, dev->in_frame, - dev->in_maxlen, pn533_start_poll_complete, - NULL, GFP_KERNEL); +static int pn533_start_poll(struct nfc_dev *nfc_dev, + u32 im_protocols, u32 tm_protocols) +{ + struct pn533 *dev = 
nfc_get_drvdata(nfc_dev); - if (rc) { - nfc_dev_err(&dev->interface->dev, "Error %d when trying to" - " start poll", rc); - goto error; + nfc_dev_dbg(&dev->interface->dev, + "%s: im protocols 0x%x tm protocols 0x%x", + __func__, im_protocols, tm_protocols); + + if (dev->tgt_active_prot) { + nfc_dev_err(&dev->interface->dev, + "Cannot poll with a target already activated"); + return -EBUSY; } - dev->poll_protocols = protocols; + if (dev->tgt_mode) { + nfc_dev_err(&dev->interface->dev, + "Cannot poll while already being activated"); + return -EBUSY; + } - return 0; + if (tm_protocols) { + dev->gb = nfc_get_local_general_bytes(nfc_dev, &dev->gb_len); + if (dev->gb == NULL) + tm_protocols = 0; + } -error: - pn533_poll_reset_mod_list(dev); - return rc; + dev->poll_mod_curr = 0; + pn533_poll_create_mod_list(dev, im_protocols, tm_protocols); + dev->poll_protocols = im_protocols; + dev->listen_protocols = tm_protocols; + + return pn533_send_poll_frame(dev); } static void pn533_stop_poll(struct nfc_dev *nfc_dev) @@ -1141,6 +1467,8 @@ static void pn533_stop_poll(struct nfc_dev *nfc_dev) nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + del_timer(&dev->listen_timer); + if (!dev->poll_mod_count) { nfc_dev_dbg(&dev->interface->dev, "Polling operation was not" " running"); @@ -1152,6 +1480,8 @@ static void pn533_stop_poll(struct nfc_dev *nfc_dev) /* prevent pn533_start_poll_complete to issue a new poll meanwhile */ usb_kill_urb(dev->in_urb); + + pn533_poll_reset_mod_list(dev); } static int pn533_activate_target_nfcdep(struct pn533 *dev) @@ -1349,13 +1679,29 @@ static int pn533_in_dep_link_up_complete(struct pn533 *dev, void *arg, return 0; } +static int pn533_mod_to_baud(struct pn533 *dev) +{ + switch (dev->poll_mod_curr) { + case PN533_POLL_MOD_106KBPS_A: + return 0; + case PN533_POLL_MOD_212KBPS_FELICA: + return 1; + case PN533_POLL_MOD_424KBPS_FELICA: + return 2; + default: + return -EINVAL; + } +} + +#define PASSIVE_DATA_LEN 5 static int pn533_dep_link_up(struct nfc_dev *nfc_dev, struct nfc_target *target, u8 comm_mode, u8* gb, size_t gb_len) { struct pn533 *dev = nfc_get_drvdata(nfc_dev); struct pn533_cmd_jump_dep *cmd; - u8 cmd_len; - int rc; + u8 cmd_len, *data_ptr; + u8 passive_data[PASSIVE_DATA_LEN] = {0x00, 0xff, 0xff, 0x00, 0x3}; + int rc, baud; nfc_dev_dbg(&dev->interface->dev, "%s", __func__); @@ -1371,7 +1717,17 @@ static int pn533_dep_link_up(struct nfc_dev *nfc_dev, struct nfc_target *target, return -EBUSY; } + baud = pn533_mod_to_baud(dev); + if (baud < 0) { + nfc_dev_err(&dev->interface->dev, + "Invalid curr modulation %d", dev->poll_mod_curr); + return baud; + } + cmd_len = sizeof(struct pn533_cmd_jump_dep) + gb_len; + if (comm_mode == NFC_COMM_PASSIVE) + cmd_len += PASSIVE_DATA_LEN; + cmd = kzalloc(cmd_len, GFP_KERNEL); if (cmd == NULL) return -ENOMEM; @@ -1379,10 +1735,18 @@ static int pn533_dep_link_up(struct nfc_dev *nfc_dev, struct nfc_target *target, pn533_tx_frame_init(dev->out_frame, PN533_CMD_IN_JUMP_FOR_DEP); cmd->active = !comm_mode; - cmd->baud = 0; + cmd->next = 0; + cmd->baud = baud; + data_ptr = cmd->data; + if (comm_mode == NFC_COMM_PASSIVE && cmd->baud > 0) { + memcpy(data_ptr, passive_data, PASSIVE_DATA_LEN); + cmd->next |= 1; + data_ptr += PASSIVE_DATA_LEN; + } + if (gb != NULL && gb_len > 0) { - cmd->next = 4; /* We have some Gi */ - memcpy(cmd->gt, gb, gb_len); + cmd->next |= 4; /* We have some Gi */ + memcpy(data_ptr, gb, gb_len); } else { cmd->next = 0; } @@ -1407,15 +1771,25 @@ out: static int pn533_dep_link_down(struct nfc_dev *nfc_dev) { - 
pn533_deactivate_target(nfc_dev, 0); + struct pn533 *dev = nfc_get_drvdata(nfc_dev); + + pn533_poll_reset_mod_list(dev); + + if (dev->tgt_mode || dev->tgt_active_prot) { + pn533_send_ack(dev, GFP_KERNEL); + usb_kill_urb(dev->in_urb); + } + + dev->tgt_active_prot = 0; + dev->tgt_mode = 0; + + skb_queue_purge(&dev->resp_q); return 0; } -#define PN533_CMD_DATAEXCH_HEAD_LEN (sizeof(struct pn533_frame) + 3) -#define PN533_CMD_DATAEXCH_DATA_MAXLEN 262 - -static int pn533_data_exchange_tx_frame(struct pn533 *dev, struct sk_buff *skb) +static int pn533_build_tx_frame(struct pn533 *dev, struct sk_buff *skb, + bool target) { int payload_len = skb->len; struct pn533_frame *out_frame; @@ -1432,14 +1806,37 @@ static int pn533_data_exchange_tx_frame(struct pn533 *dev, struct sk_buff *skb) return -ENOSYS; } - skb_push(skb, PN533_CMD_DATAEXCH_HEAD_LEN); - out_frame = (struct pn533_frame *) skb->data; + if (target == true) { + switch (dev->device_type) { + case PN533_DEVICE_PASORI: + if (dev->tgt_active_prot == NFC_PROTO_FELICA) { + skb_push(skb, PN533_CMD_DATAEXCH_HEAD_LEN - 1); + out_frame = (struct pn533_frame *) skb->data; + pn533_tx_frame_init(out_frame, + PN533_CMD_IN_COMM_THRU); + + break; + } + + default: + skb_push(skb, PN533_CMD_DATAEXCH_HEAD_LEN); + out_frame = (struct pn533_frame *) skb->data; + pn533_tx_frame_init(out_frame, + PN533_CMD_IN_DATA_EXCHANGE); + tg = 1; + memcpy(PN533_FRAME_CMD_PARAMS_PTR(out_frame), + &tg, sizeof(u8)); + out_frame->datalen += sizeof(u8); + + break; + } - pn533_tx_frame_init(out_frame, PN533_CMD_IN_DATA_EXCHANGE); + } else { + skb_push(skb, PN533_CMD_DATAEXCH_HEAD_LEN - 1); + out_frame = (struct pn533_frame *) skb->data; + pn533_tx_frame_init(out_frame, PN533_CMD_TG_SET_DATA); + } - tg = 1; - memcpy(PN533_FRAME_CMD_PARAMS_PTR(out_frame), &tg, sizeof(u8)); - out_frame->datalen += sizeof(u8); /* The data is already in the out_frame, just update the datalen */ out_frame->datalen += payload_len; @@ -1550,9 +1947,9 @@ error: return 0; } -static int pn533_data_exchange(struct nfc_dev *nfc_dev, - struct nfc_target *target, struct sk_buff *skb, - data_exchange_cb_t cb, void *cb_context) +static int pn533_transceive(struct nfc_dev *nfc_dev, + struct nfc_target *target, struct sk_buff *skb, + data_exchange_cb_t cb, void *cb_context) { struct pn533 *dev = nfc_get_drvdata(nfc_dev); struct pn533_frame *out_frame, *in_frame; @@ -1570,7 +1967,7 @@ static int pn533_data_exchange(struct nfc_dev *nfc_dev, goto error; } - rc = pn533_data_exchange_tx_frame(dev, skb); + rc = pn533_build_tx_frame(dev, skb, true); if (rc) goto error; @@ -1618,6 +2015,63 @@ error: return rc; } +static int pn533_tm_send_complete(struct pn533 *dev, void *arg, + u8 *params, int params_len) +{ + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + + if (params_len < 0) { + nfc_dev_err(&dev->interface->dev, + "Error %d when sending data", + params_len); + + return params_len; + } + + if (params_len > 0 && params[0] != 0) { + nfc_tm_deactivated(dev->nfc_dev); + + dev->tgt_mode = 0; + + return 0; + } + + queue_work(dev->wq, &dev->tg_work); + + return 0; +} + +static int pn533_tm_send(struct nfc_dev *nfc_dev, struct sk_buff *skb) +{ + struct pn533 *dev = nfc_get_drvdata(nfc_dev); + struct pn533_frame *out_frame; + int rc; + + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + + rc = pn533_build_tx_frame(dev, skb, false); + if (rc) + goto error; + + out_frame = (struct pn533_frame *) skb->data; + + rc = pn533_send_cmd_frame_async(dev, out_frame, dev->in_frame, + dev->in_maxlen, pn533_tm_send_complete, + NULL, 
GFP_KERNEL); + if (rc) { + nfc_dev_err(&dev->interface->dev, + "Error %d when trying to send data", rc); + goto error; + } + + return 0; + +error: + kfree_skb(skb); + + return rc; +} + static void pn533_wq_mi_recv(struct work_struct *work) { struct pn533 *dev = container_of(work, struct pn533, mi_work); @@ -1638,7 +2092,7 @@ static void pn533_wq_mi_recv(struct work_struct *work) skb_reserve(skb_cmd, PN533_CMD_DATAEXCH_HEAD_LEN); - rc = pn533_data_exchange_tx_frame(dev, skb_cmd); + rc = pn533_build_tx_frame(dev, skb_cmd, true); if (rc) goto error_frame; @@ -1677,7 +2131,7 @@ error_cmd: kfree(arg); - up(&dev->cmd_lock); + mutex_unlock(&dev->cmd_lock); } static int pn533_set_configuration(struct pn533 *dev, u8 cfgitem, u8 *cfgdata, @@ -1703,7 +2157,28 @@ static int pn533_set_configuration(struct pn533 *dev, u8 cfgitem, u8 *cfgdata, return rc; } -struct nfc_ops pn533_nfc_ops = { +static int pn533_fw_reset(struct pn533 *dev) +{ + int rc; + u8 *params; + + nfc_dev_dbg(&dev->interface->dev, "%s", __func__); + + pn533_tx_frame_init(dev->out_frame, 0x18); + + params = PN533_FRAME_CMD_PARAMS_PTR(dev->out_frame); + params[0] = 0x1; + dev->out_frame->datalen += 1; + + pn533_tx_frame_finish(dev->out_frame); + + rc = pn533_send_cmd_frame_sync(dev, dev->out_frame, dev->in_frame, + dev->in_maxlen); + + return rc; +} + +static struct nfc_ops pn533_nfc_ops = { .dev_up = NULL, .dev_down = NULL, .dep_link_up = pn533_dep_link_up, @@ -1712,9 +2187,88 @@ struct nfc_ops pn533_nfc_ops = { .stop_poll = pn533_stop_poll, .activate_target = pn533_activate_target, .deactivate_target = pn533_deactivate_target, - .data_exchange = pn533_data_exchange, + .im_transceive = pn533_transceive, + .tm_send = pn533_tm_send, }; +static int pn533_setup(struct pn533 *dev) +{ + struct pn533_config_max_retries max_retries; + struct pn533_config_timing timing; + u8 pasori_cfg[3] = {0x08, 0x01, 0x08}; + int rc; + + switch (dev->device_type) { + case PN533_DEVICE_STD: + max_retries.mx_rty_atr = PN533_CONFIG_MAX_RETRIES_ENDLESS; + max_retries.mx_rty_psl = 2; + max_retries.mx_rty_passive_act = + PN533_CONFIG_MAX_RETRIES_NO_RETRY; + + timing.rfu = PN533_CONFIG_TIMING_102; + timing.atr_res_timeout = PN533_CONFIG_TIMING_204; + timing.dep_timeout = PN533_CONFIG_TIMING_409; + + break; + + case PN533_DEVICE_PASORI: + max_retries.mx_rty_atr = 0x2; + max_retries.mx_rty_psl = 0x1; + max_retries.mx_rty_passive_act = + PN533_CONFIG_MAX_RETRIES_NO_RETRY; + + timing.rfu = PN533_CONFIG_TIMING_102; + timing.atr_res_timeout = PN533_CONFIG_TIMING_102; + timing.dep_timeout = PN533_CONFIG_TIMING_204; + + break; + + default: + nfc_dev_err(&dev->interface->dev, "Unknown device type %d\n", + dev->device_type); + return -EINVAL; + } + + rc = pn533_set_configuration(dev, PN533_CFGITEM_MAX_RETRIES, + (u8 *)&max_retries, sizeof(max_retries)); + if (rc) { + nfc_dev_err(&dev->interface->dev, + "Error on setting MAX_RETRIES config"); + return rc; + } + + + rc = pn533_set_configuration(dev, PN533_CFGITEM_TIMING, + (u8 *)&timing, sizeof(timing)); + if (rc) { + nfc_dev_err(&dev->interface->dev, + "Error on setting RF timings"); + return rc; + } + + switch (dev->device_type) { + case PN533_DEVICE_STD: + break; + + case PN533_DEVICE_PASORI: + pn533_fw_reset(dev); + + rc = pn533_set_configuration(dev, PN533_CFGITEM_PASORI, + pasori_cfg, 3); + if (rc) { + nfc_dev_err(&dev->interface->dev, + "Error while settings PASORI config"); + return rc; + } + + pn533_fw_reset(dev); + + break; + } + + return 0; +} + static int pn533_probe(struct usb_interface *interface, const struct 
usb_device_id *id) { @@ -1722,7 +2276,6 @@ static int pn533_probe(struct usb_interface *interface, struct pn533 *dev; struct usb_host_interface *iface_desc; struct usb_endpoint_descriptor *endpoint; - struct pn533_config_max_retries max_retries; int in_endpoint = 0; int out_endpoint = 0; int rc = -ENOMEM; @@ -1735,7 +2288,7 @@ static int pn533_probe(struct usb_interface *interface, dev->udev = usb_get_dev(interface_to_usbdev(interface)); dev->interface = interface; - sema_init(&dev->cmd_lock, 1); + mutex_init(&dev->cmd_lock); iface_desc = interface->cur_altsetting; for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { @@ -1779,12 +2332,18 @@ static int pn533_probe(struct usb_interface *interface, INIT_WORK(&dev->cmd_work, pn533_wq_cmd_complete); INIT_WORK(&dev->mi_work, pn533_wq_mi_recv); + INIT_WORK(&dev->tg_work, pn533_wq_tg_get_data); + INIT_WORK(&dev->poll_work, pn533_wq_poll); dev->wq = alloc_workqueue("pn533", WQ_NON_REENTRANT | WQ_UNBOUND | WQ_MEM_RECLAIM, 1); if (dev->wq == NULL) goto error; + init_timer(&dev->listen_timer); + dev->listen_timer.data = (unsigned long) dev; + dev->listen_timer.function = pn533_listen_mode_timer; + skb_queue_head_init(&dev->resp_q); usb_set_intfdata(interface, dev); @@ -1802,10 +2361,22 @@ static int pn533_probe(struct usb_interface *interface, nfc_dev_info(&dev->interface->dev, "NXP PN533 firmware ver %d.%d now" " attached", fw_ver->ver, fw_ver->rev); - protocols = NFC_PROTO_JEWEL_MASK - | NFC_PROTO_MIFARE_MASK | NFC_PROTO_FELICA_MASK - | NFC_PROTO_ISO14443_MASK - | NFC_PROTO_NFC_DEP_MASK; + dev->device_type = id->driver_info; + switch (dev->device_type) { + case PN533_DEVICE_STD: + protocols = PN533_ALL_PROTOCOLS; + break; + + case PN533_DEVICE_PASORI: + protocols = PN533_NO_TYPE_B_PROTOCOLS; + break; + + default: + nfc_dev_err(&dev->interface->dev, "Unknown device type %d\n", + dev->device_type); + rc = -EINVAL; + goto destroy_wq; + } dev->nfc_dev = nfc_allocate_device(&pn533_nfc_ops, protocols, PN533_CMD_DATAEXCH_HEAD_LEN, @@ -1820,23 +2391,18 @@ static int pn533_probe(struct usb_interface *interface, if (rc) goto free_nfc_dev; - max_retries.mx_rty_atr = PN533_CONFIG_MAX_RETRIES_ENDLESS; - max_retries.mx_rty_psl = 2; - max_retries.mx_rty_passive_act = PN533_CONFIG_MAX_RETRIES_NO_RETRY; - - rc = pn533_set_configuration(dev, PN533_CFGITEM_MAX_RETRIES, - (u8 *) &max_retries, sizeof(max_retries)); - - if (rc) { - nfc_dev_err(&dev->interface->dev, "Error on setting MAX_RETRIES" - " config"); - goto free_nfc_dev; - } + rc = pn533_setup(dev); + if (rc) + goto unregister_nfc_dev; return 0; +unregister_nfc_dev: + nfc_unregister_device(dev->nfc_dev); + free_nfc_dev: nfc_free_device(dev->nfc_dev); + destroy_wq: destroy_workqueue(dev->wq); error: @@ -1865,6 +2431,8 @@ static void pn533_disconnect(struct usb_interface *interface) skb_queue_purge(&dev->resp_q); + del_timer(&dev->listen_timer); + kfree(dev->in_frame); usb_free_urb(dev->in_urb); kfree(dev->out_frame); diff --git a/drivers/nfc/pn544_hci.c b/drivers/nfc/pn544_hci.c index 281f18c2fb82..aa71807189ba 100644 --- a/drivers/nfc/pn544_hci.c +++ b/drivers/nfc/pn544_hci.c @@ -108,16 +108,22 @@ enum pn544_state { #define PN544_NFC_WI_MGMT_GATE 0xA1 -static u8 pn544_custom_gates[] = { - PN544_SYS_MGMT_GATE, - PN544_SWP_MGMT_GATE, - PN544_POLLING_LOOP_MGMT_GATE, - PN544_NFC_WI_MGMT_GATE, - PN544_RF_READER_F_GATE, - PN544_RF_READER_JEWEL_GATE, - PN544_RF_READER_ISO15693_GATE, - PN544_RF_READER_NFCIP1_INITIATOR_GATE, - PN544_RF_READER_NFCIP1_TARGET_GATE +static struct nfc_hci_gate pn544_gates[] = { + 
{NFC_HCI_ADMIN_GATE, NFC_HCI_INVALID_PIPE}, + {NFC_HCI_LOOPBACK_GATE, NFC_HCI_INVALID_PIPE}, + {NFC_HCI_ID_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {NFC_HCI_LINK_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {NFC_HCI_RF_READER_B_GATE, NFC_HCI_INVALID_PIPE}, + {NFC_HCI_RF_READER_A_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_SYS_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_SWP_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_POLLING_LOOP_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_NFC_WI_MGMT_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_RF_READER_F_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_RF_READER_JEWEL_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_RF_READER_ISO15693_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_RF_READER_NFCIP1_INITIATOR_GATE, NFC_HCI_INVALID_PIPE}, + {PN544_RF_READER_NFCIP1_TARGET_GATE, NFC_HCI_INVALID_PIPE} }; /* Largest headroom needed for outgoing custom commands */ @@ -377,6 +383,9 @@ static int pn544_hci_open(struct nfc_shdlc *shdlc) r = pn544_hci_enable(info, HCI_MODE); + if (r == 0) + info->state = PN544_ST_READY; + out: mutex_unlock(&info->info_lock); return r; @@ -393,6 +402,8 @@ static void pn544_hci_close(struct nfc_shdlc *shdlc) pn544_hci_disable(info); + info->state = PN544_ST_COLD; + out: mutex_unlock(&info->info_lock); } @@ -576,7 +587,8 @@ static int pn544_hci_xmit(struct nfc_shdlc *shdlc, struct sk_buff *skb) return pn544_hci_i2c_write(client, skb->data, skb->len); } -static int pn544_hci_start_poll(struct nfc_shdlc *shdlc, u32 protocols) +static int pn544_hci_start_poll(struct nfc_shdlc *shdlc, + u32 im_protocols, u32 tm_protocols) { struct nfc_hci_dev *hdev = nfc_shdlc_get_hci_dev(shdlc); u8 phases = 0; @@ -584,7 +596,8 @@ static int pn544_hci_start_poll(struct nfc_shdlc *shdlc, u32 protocols) u8 duration[2]; u8 activated; - pr_info(DRIVER_DESC ": %s protocols = %d\n", __func__, protocols); + pr_info(DRIVER_DESC ": %s protocols 0x%x 0x%x\n", + __func__, im_protocols, tm_protocols); r = nfc_hci_send_event(hdev, NFC_HCI_RF_READER_A_GATE, NFC_HCI_EVT_END_OPERATION, NULL, 0); @@ -604,10 +617,10 @@ static int pn544_hci_start_poll(struct nfc_shdlc *shdlc, u32 protocols) if (r < 0) return r; - if (protocols & (NFC_PROTO_ISO14443_MASK | NFC_PROTO_MIFARE_MASK | + if (im_protocols & (NFC_PROTO_ISO14443_MASK | NFC_PROTO_MIFARE_MASK | NFC_PROTO_JEWEL_MASK)) phases |= 1; /* Type A */ - if (protocols & NFC_PROTO_FELICA_MASK) { + if (im_protocols & NFC_PROTO_FELICA_MASK) { phases |= (1 << 2); /* Type F 212 */ phases |= (1 << 3); /* Type F 424 */ } @@ -842,10 +855,9 @@ static int __devinit pn544_hci_probe(struct i2c_client *client, goto err_rti; } - init_data.gate_count = ARRAY_SIZE(pn544_custom_gates); + init_data.gate_count = ARRAY_SIZE(pn544_gates); - memcpy(init_data.gates, pn544_custom_gates, - ARRAY_SIZE(pn544_custom_gates)); + memcpy(init_data.gates, pn544_gates, sizeof(pn544_gates)); /* * TODO: Session id must include the driver name + some bus addr @@ -857,6 +869,7 @@ static int __devinit pn544_hci_probe(struct i2c_client *client, NFC_PROTO_MIFARE_MASK | NFC_PROTO_FELICA_MASK | NFC_PROTO_ISO14443_MASK | + NFC_PROTO_ISO14443_B_MASK | NFC_PROTO_NFC_DEP_MASK; info->shdlc = nfc_shdlc_allocate(&pn544_shdlc_ops, diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c index 2574abde8d99..8e6c25f35040 100644 --- a/drivers/of/of_mdio.c +++ b/drivers/of/of_mdio.c @@ -57,6 +57,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) const __be32 *paddr; u32 addr; int len; + bool is_c45; /* A PHY must have a reg property in the range [0-31] */ paddr = of_get_property(child, "reg", &len); @@ -79,11 +80,18 @@ 
int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) mdio->irq[addr] = PHY_POLL; } - phy = get_phy_device(mdio, addr); + is_c45 = of_device_is_compatible(child, + "ethernet-phy-ieee802.3-c45"); + phy = get_phy_device(mdio, addr, is_c45); + if (!phy || IS_ERR(phy)) { - dev_err(&mdio->dev, "error probing PHY at address %i\n", - addr); - continue; + phy = phy_device_create(mdio, addr, 0, false, NULL); + if (!phy || IS_ERR(phy)) { + dev_err(&mdio->dev, + "error creating PHY at address %i\n", + addr); + continue; + } } /* Associate the OF node with the device structure so it diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index d86f645a76da..2db409330c21 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -645,7 +645,7 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card) } QETH_DBF_HEX(SETUP, 2, card->dev->dev_addr, OSA_ADDR_LEN); } else { - random_ether_addr(card->dev->dev_addr); + eth_random_addr(card->dev->dev_addr); memcpy(card->dev->dev_addr, vendor_pre, 3); } return 0; diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index f0045ca8a766..0cf706699a04 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -1471,7 +1471,7 @@ static int qeth_l3_iqd_read_initial_mac_cb(struct qeth_card *card, memcpy(card->dev->dev_addr, cmd->data.create_destroy_addr.unique_id, ETH_ALEN); else - random_ether_addr(card->dev->dev_addr); + eth_random_addr(card->dev->dev_addr); return 0; } @@ -2698,10 +2698,11 @@ int inline qeth_l3_get_cast_type(struct qeth_card *card, struct sk_buff *skb) rcu_read_lock(); dst = skb_dst(skb); if (dst) - n = dst_get_neighbour_noref(dst); + n = dst_neigh_lookup_skb(dst, skb); if (n) { cast_type = n->type; rcu_read_unlock(); + neigh_release(n); if ((cast_type == RTN_BROADCAST) || (cast_type == RTN_MULTICAST) || (cast_type == RTN_ANYCAST)) diff --git a/drivers/scsi/bnx2fc/bnx2fc.h b/drivers/scsi/bnx2fc/bnx2fc.h index 0578fa0dc14b..42969e8a45bd 100644 --- a/drivers/scsi/bnx2fc/bnx2fc.h +++ b/drivers/scsi/bnx2fc/bnx2fc.h @@ -59,6 +59,7 @@ #include "57xx_hsi_bnx2fc.h" #include "bnx2fc_debug.h" #include "../../net/ethernet/broadcom/cnic_if.h" +#include "../../net/ethernet/broadcom/bnx2x/bnx2x_mfw_req.h" #include "bnx2fc_constants.h" #define BNX2FC_NAME "bnx2fc" @@ -84,6 +85,8 @@ #define BNX2FC_NUM_MAX_SESS 1024 #define BNX2FC_NUM_MAX_SESS_LOG (ilog2(BNX2FC_NUM_MAX_SESS)) +#define BNX2FC_MAX_NPIV 256 + #define BNX2FC_MAX_OUTSTANDING_CMNDS 2048 #define BNX2FC_CAN_QUEUE BNX2FC_MAX_OUTSTANDING_CMNDS #define BNX2FC_ELSTM_XIDS BNX2FC_CAN_QUEUE @@ -206,6 +209,7 @@ struct bnx2fc_hba { struct fcoe_statistics_params *stats_buffer; dma_addr_t stats_buf_dma; struct completion stat_req_done; + struct fcoe_capabilities fcoe_cap; /*destroy handling */ struct timer_list destroy_timer; diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c index f52f668fd247..05fe6620b3f0 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c +++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c @@ -1326,6 +1326,7 @@ static void bnx2fc_hba_destroy(struct bnx2fc_hba *hba) static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic) { struct bnx2fc_hba *hba; + struct fcoe_capabilities *fcoe_cap; int rc; hba = kzalloc(sizeof(*hba), GFP_KERNEL); @@ -1361,6 +1362,21 @@ static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic) printk(KERN_ERR PFX "em_config:bnx2fc_cmd_mgr_alloc failed\n"); goto cmgr_err; } + fcoe_cap = &hba->fcoe_cap; + + 
fcoe_cap->capability1 = BNX2FC_TM_MAX_SQES << + FCOE_IOS_PER_CONNECTION_SHIFT; + fcoe_cap->capability1 |= BNX2FC_NUM_MAX_SESS << + FCOE_LOGINS_PER_PORT_SHIFT; + fcoe_cap->capability2 = BNX2FC_MAX_OUTSTANDING_CMNDS << + FCOE_NUMBER_OF_EXCHANGES_SHIFT; + fcoe_cap->capability2 |= BNX2FC_MAX_NPIV << + FCOE_NPIV_WWN_PER_PORT_SHIFT; + fcoe_cap->capability3 = BNX2FC_NUM_MAX_SESS << + FCOE_TARGETS_SUPPORTED_SHIFT; + fcoe_cap->capability3 |= BNX2FC_MAX_OUTSTANDING_CMNDS << + FCOE_OUTSTANDING_COMMANDS_SHIFT; + fcoe_cap->capability4 = FCOE_CAPABILITY4_STATEFUL; init_waitqueue_head(&hba->shutdown_wait); init_waitqueue_head(&hba->destroy_wait); @@ -1691,6 +1707,32 @@ static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba) hba->pcidev = NULL; } +/** + * bnx2fc_ulp_get_stats - cnic callback to populate FCoE stats + * + * @handle: transport handle pointing to adapter struture + */ +static int bnx2fc_ulp_get_stats(void *handle) +{ + struct bnx2fc_hba *hba = handle; + struct cnic_dev *cnic; + struct fcoe_stats_info *stats_addr; + + if (!hba) + return -EINVAL; + + cnic = hba->cnic; + stats_addr = &cnic->stats_addr->fcoe_stat; + if (!stats_addr) + return -EINVAL; + + strncpy(stats_addr->version, BNX2FC_VERSION, + sizeof(stats_addr->version)); + stats_addr->txq_size = BNX2FC_SQ_WQES_MAX; + stats_addr->rxq_size = BNX2FC_CQ_WQES_MAX; + + return 0; +} /** @@ -1944,6 +1986,7 @@ static void bnx2fc_ulp_init(struct cnic_dev *dev) adapter_count++; mutex_unlock(&bnx2fc_dev_lock); + dev->fcoe_cap = &hba->fcoe_cap; clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic); rc = dev->register_device(dev, CNIC_ULP_FCOE, (void *) hba); @@ -2643,4 +2686,5 @@ static struct cnic_ulp_ops bnx2fc_cnic_cb = { .cnic_stop = bnx2fc_ulp_stop, .indicate_kcqes = bnx2fc_indicate_kcqe, .indicate_netevent = bnx2fc_indicate_netevent, + .cnic_get_stats = bnx2fc_ulp_get_stats, }; diff --git a/drivers/scsi/bnx2i/57xx_iscsi_hsi.h b/drivers/scsi/bnx2i/57xx_iscsi_hsi.h index dc0a08e69c82..f2db5fe7bdc2 100644 --- a/drivers/scsi/bnx2i/57xx_iscsi_hsi.h +++ b/drivers/scsi/bnx2i/57xx_iscsi_hsi.h @@ -267,7 +267,13 @@ struct bnx2i_cmd_request { * task statistics for write response */ struct bnx2i_write_resp_task_stat { - u32 num_data_ins; +#if defined(__BIG_ENDIAN) + u16 num_r2ts; + u16 num_data_outs; +#elif defined(__LITTLE_ENDIAN) + u16 num_data_outs; + u16 num_r2ts; +#endif }; /* @@ -275,11 +281,11 @@ struct bnx2i_write_resp_task_stat { */ struct bnx2i_read_resp_task_stat { #if defined(__BIG_ENDIAN) - u16 num_data_outs; - u16 num_r2ts; + u16 reserved; + u16 num_data_ins; #elif defined(__LITTLE_ENDIAN) - u16 num_r2ts; - u16 num_data_outs; + u16 num_data_ins; + u16 reserved; #endif }; diff --git a/drivers/scsi/bnx2i/bnx2i.h b/drivers/scsi/bnx2i/bnx2i.h index 7e77cf620291..3f9e7061258e 100644 --- a/drivers/scsi/bnx2i/bnx2i.h +++ b/drivers/scsi/bnx2i/bnx2i.h @@ -44,6 +44,8 @@ #include "57xx_iscsi_hsi.h" #include "57xx_iscsi_constants.h" +#include "../../net/ethernet/broadcom/bnx2x/bnx2x_mfw_req.h" + #define BNX2_ISCSI_DRIVER_NAME "bnx2i" #define BNX2I_MAX_ADAPTERS 8 @@ -126,6 +128,43 @@ #define REG_WR(__hba, offset, val) \ writel(val, __hba->regview + offset) +#ifdef CONFIG_32BIT +#define GET_STATS_64(__hba, dst, field) \ + do { \ + spin_lock_bh(&__hba->stat_lock); \ + dst->field##_lo = __hba->stats.field##_lo; \ + dst->field##_hi = __hba->stats.field##_hi; \ + spin_unlock_bh(&__hba->stat_lock); \ + } while (0) + +#define ADD_STATS_64(__hba, field, len) \ + do { \ + if (spin_trylock(&__hba->stat_lock)) { \ + if (__hba->stats.field##_lo + len < \ + 
__hba->stats.field##_lo) \ + __hba->stats.field##_hi++; \ + __hba->stats.field##_lo += len; \ + spin_unlock(&__hba->stat_lock); \ + } \ + } while (0) + +#else +#define GET_STATS_64(__hba, dst, field) \ + do { \ + u64 val, *out; \ + \ + val = __hba->bnx2i_stats.field; \ + out = (u64 *)&__hba->stats.field##_lo; \ + *out = cpu_to_le64(val); \ + out = (u64 *)&dst->field##_lo; \ + *out = cpu_to_le64(val); \ + } while (0) + +#define ADD_STATS_64(__hba, field, len) \ + do { \ + __hba->bnx2i_stats.field += len; \ + } while (0) +#endif /** * struct generic_pdu_resc - login pdu resource structure @@ -288,6 +327,15 @@ struct iscsi_cid_queue { struct bnx2i_conn **conn_cid_tbl; }; + +struct bnx2i_stats_info { + u64 rx_pdus; + u64 rx_bytes; + u64 tx_pdus; + u64 tx_bytes; +}; + + /** * struct bnx2i_hba - bnx2i adapter structure * @@ -341,6 +389,8 @@ struct iscsi_cid_queue { * @ctx_ccell_tasks: captures number of ccells and tasks supported by * currently offloaded connection, used to decode * context memory + * @stat_lock: spin lock used by the statistic collector (32 bit) + * @stats: local iSCSI statistic collection place holder * * Adapter Data Structure */ @@ -427,6 +477,12 @@ struct bnx2i_hba { u32 num_sess_opened; u32 num_conn_opened; unsigned int ctx_ccell_tasks; + +#ifdef CONFIG_32BIT + spinlock_t stat_lock; +#endif + struct bnx2i_stats_info bnx2i_stats; + struct iscsi_stats_info stats; }; @@ -750,6 +806,8 @@ extern void bnx2i_ulp_init(struct cnic_dev *dev); extern void bnx2i_ulp_exit(struct cnic_dev *dev); extern void bnx2i_start(void *handle); extern void bnx2i_stop(void *handle); +extern int bnx2i_get_stats(void *handle); + extern struct bnx2i_hba *get_adapter_list_head(void); struct bnx2i_conn *bnx2i_get_conn_from_id(struct bnx2i_hba *hba, diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c index 86a12b48e477..33d6630529de 100644 --- a/drivers/scsi/bnx2i/bnx2i_hwi.c +++ b/drivers/scsi/bnx2i/bnx2i_hwi.c @@ -1350,6 +1350,7 @@ int bnx2i_process_scsi_cmd_resp(struct iscsi_session *session, struct cqe *cqe) { struct iscsi_conn *conn = bnx2i_conn->cls_conn->dd_data; + struct bnx2i_hba *hba = bnx2i_conn->hba; struct bnx2i_cmd_response *resp_cqe; struct bnx2i_cmd *bnx2i_cmd; struct iscsi_task *task; @@ -1367,16 +1368,26 @@ int bnx2i_process_scsi_cmd_resp(struct iscsi_session *session, if (bnx2i_cmd->req.op_attr & ISCSI_CMD_REQUEST_READ) { conn->datain_pdus_cnt += - resp_cqe->task_stat.read_stat.num_data_outs; + resp_cqe->task_stat.read_stat.num_data_ins; conn->rxdata_octets += bnx2i_cmd->req.total_data_transfer_length; + ADD_STATS_64(hba, rx_pdus, + resp_cqe->task_stat.read_stat.num_data_ins); + ADD_STATS_64(hba, rx_bytes, + bnx2i_cmd->req.total_data_transfer_length); } else { conn->dataout_pdus_cnt += - resp_cqe->task_stat.read_stat.num_data_outs; + resp_cqe->task_stat.write_stat.num_data_outs; conn->r2t_pdus_cnt += - resp_cqe->task_stat.read_stat.num_r2ts; + resp_cqe->task_stat.write_stat.num_r2ts; conn->txdata_octets += bnx2i_cmd->req.total_data_transfer_length; + ADD_STATS_64(hba, tx_pdus, + resp_cqe->task_stat.write_stat.num_data_outs); + ADD_STATS_64(hba, tx_bytes, + bnx2i_cmd->req.total_data_transfer_length); + ADD_STATS_64(hba, rx_pdus, + resp_cqe->task_stat.write_stat.num_r2ts); } bnx2i_iscsi_unmap_sg_list(bnx2i_cmd); @@ -1961,6 +1972,7 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn) { struct iscsi_conn *conn = bnx2i_conn->cls_conn->dd_data; struct iscsi_session *session = conn->session; + struct bnx2i_hba *hba = bnx2i_conn->hba; struct qp_info *qp; 
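
The GET_STATS_64/ADD_STATS_64 macros above exist because a 32-bit build cannot update a 64-bit counter in one atomic store, so the driver keeps lo/hi halves and carries into the high word when the low word wraps, with a spinlock guarding concurrent updaters. The same idea in a self-contained user-space sketch, without the locking:

#include <stdint.h>
#include <stdio.h>

/* 64-bit counter kept as two 32-bit words, as on the 32-bit build above. */
struct split_ctr {
    uint32_t lo;
    uint32_t hi;
};

/* Add 'len'; carry into the high word when the low word overflows. */
static void split_ctr_add(struct split_ctr *c, uint32_t len)
{
    if (c->lo + len < c->lo)    /* unsigned wraparound detected */
        c->hi++;
    c->lo += len;
}

static uint64_t split_ctr_read(const struct split_ctr *c)
{
    return ((uint64_t)c->hi << 32) | c->lo;
}

int main(void)
{
    struct split_ctr rx_bytes = { .lo = 0xfffffff0u, .hi = 0 };

    split_ctr_add(&rx_bytes, 0x20);     /* wraps the low word once */
    printf("rx_bytes = %llu\n",
           (unsigned long long)split_ctr_read(&rx_bytes));
    return 0;
}
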
struct bnx2i_nop_in_msg *nopin; int tgt_async_msg; @@ -1973,7 +1985,7 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn) if (!qp->cq_virt) { printk(KERN_ALERT "bnx2i (%s): cq resr freed in bh execution!", - bnx2i_conn->hba->netdev->name); + hba->netdev->name); goto out; } while (1) { @@ -1985,9 +1997,9 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn) if (nopin->op_code == ISCSI_OP_NOOP_IN && nopin->itt == (u16) RESERVED_ITT) { printk(KERN_ALERT "bnx2i: Unsolicited " - "NOP-In detected for suspended " - "connection dev=%s!\n", - bnx2i_conn->hba->netdev->name); + "NOP-In detected for suspended " + "connection dev=%s!\n", + hba->netdev->name); bnx2i_unsol_pdu_adjust_rq(bnx2i_conn); goto cqe_out; } @@ -2001,7 +2013,7 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn) /* Run the kthread engine only for data cmds All other cmds will be completed in this bh! */ bnx2i_queue_scsi_cmd_resp(session, bnx2i_conn, nopin); - break; + goto done; case ISCSI_OP_LOGIN_RSP: bnx2i_process_login_resp(session, bnx2i_conn, qp->cq_cons_qe); @@ -2044,11 +2056,15 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn) printk(KERN_ALERT "bnx2i: unknown opcode 0x%x\n", nopin->op_code); } + + ADD_STATS_64(hba, rx_pdus, 1); + ADD_STATS_64(hba, rx_bytes, nopin->data_length); +done: if (!tgt_async_msg) { if (!atomic_read(&bnx2i_conn->ep->num_active_cmds)) printk(KERN_ALERT "bnx2i (%s): no active cmd! " "op 0x%x\n", - bnx2i_conn->hba->netdev->name, + hba->netdev->name, nopin->op_code); else atomic_dec(&bnx2i_conn->ep->num_active_cmds); @@ -2692,6 +2708,7 @@ struct cnic_ulp_ops bnx2i_cnic_cb = { .cm_remote_close = bnx2i_cm_remote_close, .cm_remote_abort = bnx2i_cm_remote_abort, .iscsi_nl_send_msg = bnx2i_send_nl_mesg, + .cnic_get_stats = bnx2i_get_stats, .owner = THIS_MODULE }; diff --git a/drivers/scsi/bnx2i/bnx2i_init.c b/drivers/scsi/bnx2i/bnx2i_init.c index 8b6816706ee5..b17637aab9a7 100644 --- a/drivers/scsi/bnx2i/bnx2i_init.c +++ b/drivers/scsi/bnx2i/bnx2i_init.c @@ -381,6 +381,46 @@ void bnx2i_ulp_exit(struct cnic_dev *dev) /** + * bnx2i_get_stats - Retrieve various statistic from iSCSI offload + * @handle: bnx2i_hba + * + * function callback exported via bnx2i - cnic driver interface to + * retrieve various iSCSI offload related statistics. 
+ */ +int bnx2i_get_stats(void *handle) +{ + struct bnx2i_hba *hba = handle; + struct iscsi_stats_info *stats; + + if (!hba) + return -EINVAL; + + stats = (struct iscsi_stats_info *)hba->cnic->stats_addr; + + if (!stats) + return -ENOMEM; + + strlcpy(stats->version, DRV_MODULE_VERSION, sizeof(stats->version)); + memcpy(stats->mac_add1 + 2, hba->cnic->mac_addr, ETH_ALEN); + + stats->max_frame_size = hba->netdev->mtu; + stats->txq_size = hba->max_sqes; + stats->rxq_size = hba->max_cqes; + + stats->txq_avg_depth = 0; + stats->rxq_avg_depth = 0; + + GET_STATS_64(hba, stats, rx_pdus); + GET_STATS_64(hba, stats, rx_bytes); + + GET_STATS_64(hba, stats, tx_pdus); + GET_STATS_64(hba, stats, tx_bytes); + + return 0; +} + + +/** * bnx2i_percpu_thread_create - Create a receive thread for an * online CPU * diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c index 621538b8b544..3b34c13e2f02 100644 --- a/drivers/scsi/bnx2i/bnx2i_iscsi.c +++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c @@ -874,6 +874,11 @@ struct bnx2i_hba *bnx2i_alloc_hba(struct cnic_dev *cnic) hba->conn_ctx_destroy_tmo = 2 * HZ; } +#ifdef CONFIG_32BIT + spin_lock_init(&hba->stat_lock); +#endif + memset(&hba->stats, 0, sizeof(struct iscsi_stats_info)); + if (iscsi_host_add(shost, &hba->pcidev->dev)) goto free_dump_mem; return hba; @@ -1181,12 +1186,18 @@ static int bnx2i_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task) { struct bnx2i_conn *bnx2i_conn = conn->dd_data; + struct bnx2i_hba *hba = bnx2i_conn->hba; struct bnx2i_cmd *cmd = task->dd_data; memset(bnx2i_conn->gen_pdu.req_buf, 0, ISCSI_DEF_MAX_RECV_SEG_LEN); bnx2i_setup_cmd_wqe_template(cmd); bnx2i_conn->gen_pdu.req_buf_size = task->data_count; + + /* Tx PDU/data length count */ + ADD_STATS_64(hba, tx_pdus, 1); + ADD_STATS_64(hba, tx_bytes, task->data_count); + if (task->data_count) { memcpy(bnx2i_conn->gen_pdu.req_buf, task->data, task->data_count); diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c index 36739da8bc15..49692a1ac44a 100644 --- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c +++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c @@ -966,7 +966,8 @@ static int init_act_open(struct cxgbi_sock *csk) csk->saddr.sin_addr.s_addr = chba->ipv4addr; csk->rss_qid = 0; - csk->l2t = t3_l2t_get(t3dev, dst, ndev); + csk->l2t = t3_l2t_get(t3dev, dst, ndev, + &csk->daddr.sin_addr.s_addr); if (!csk->l2t) { pr_err("NO l2t available.\n"); return -EINVAL; diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c index 5a4a3bfc60cf..cc9a06897f34 100644 --- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c +++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c @@ -1142,7 +1142,7 @@ static int init_act_open(struct cxgbi_sock *csk) cxgbi_sock_set_flag(csk, CTPF_HAS_ATID); cxgbi_sock_get(csk); - n = dst_get_neighbour_noref(csk->dst); + n = dst_neigh_lookup(csk->dst, &csk->daddr.sin_addr.s_addr); if (!n) { pr_err("%s, can't get neighbour of csk->dst.\n", ndev->name); goto rel_resource; @@ -1182,9 +1182,12 @@ static int init_act_open(struct cxgbi_sock *csk) cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN); send_act_open_req(csk, skb, csk->l2t); + neigh_release(n); return 0; rel_resource: + if (n) + neigh_release(n); if (skb) __kfree_skb(skb); return -EINVAL; diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c index d9253db1d0e2..b44c1cff3114 100644 --- a/drivers/scsi/cxgbi/libcxgbi.c +++ b/drivers/scsi/cxgbi/libcxgbi.c @@ -494,7 +494,7 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr) goto err_out; } dst = 
&rt->dst; - n = dst_get_neighbour_noref(dst); + n = dst_neigh_lookup(dst, &daddr->sin_addr.s_addr); if (!n) { err = -ENODEV; goto rel_rt; @@ -506,7 +506,7 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr) &daddr->sin_addr.s_addr, ntohs(daddr->sin_port), ndev->name); err = -ENETUNREACH; - goto rel_rt; + goto rel_neigh; } if (ndev->flags & IFF_LOOPBACK) { @@ -521,7 +521,7 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr) pr_info("dst %pI4, %s, NOT cxgbi device.\n", &daddr->sin_addr.s_addr, ndev->name); err = -ENETUNREACH; - goto rel_rt; + goto rel_neigh; } log_debug(1 << CXGBI_DBG_SOCK, "route to %pI4 :%u, ndev p#%d,%s, cdev 0x%p.\n", @@ -531,7 +531,7 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr) csk = cxgbi_sock_create(cdev); if (!csk) { err = -ENOMEM; - goto rel_rt; + goto rel_neigh; } csk->cdev = cdev; csk->port_id = port; @@ -541,9 +541,13 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr) csk->daddr.sin_port = daddr->sin_port; csk->daddr.sin_family = daddr->sin_family; csk->saddr.sin_addr.s_addr = fl4.saddr; + neigh_release(n); return csk; +rel_neigh: + neigh_release(n); + rel_rt: ip_rt_put(rt); if (csk) diff --git a/drivers/scsi/scsi_netlink.c b/drivers/scsi/scsi_netlink.c index c77628afbf9f..8818dd681c19 100644 --- a/drivers/scsi/scsi_netlink.c +++ b/drivers/scsi/scsi_netlink.c @@ -486,6 +486,10 @@ void scsi_netlink_init(void) { int error; + struct netlink_kernel_cfg cfg = { + .input = scsi_nl_rcv_msg, + .groups = SCSI_NL_GRP_CNT, + }; INIT_LIST_HEAD(&scsi_nl_drivers); @@ -497,8 +501,7 @@ scsi_netlink_init(void) } scsi_nl_sock = netlink_kernel_create(&init_net, NETLINK_SCSITRANSPORT, - SCSI_NL_GRP_CNT, scsi_nl_rcv_msg, NULL, - THIS_MODULE); + THIS_MODULE, &cfg); if (!scsi_nl_sock) { printk(KERN_ERR "%s: register of receive handler failed\n", __func__); diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c index 1cf640e575da..6042954d8f3b 100644 --- a/drivers/scsi/scsi_transport_iscsi.c +++ b/drivers/scsi/scsi_transport_iscsi.c @@ -2936,7 +2936,10 @@ EXPORT_SYMBOL_GPL(iscsi_unregister_transport); static __init int iscsi_transport_init(void) { int err; - + struct netlink_kernel_cfg cfg = { + .groups = 1, + .input = iscsi_if_rx, + }; printk(KERN_INFO "Loading iSCSI transport class v%s.\n", ISCSI_TRANSPORT_VERSION); @@ -2966,8 +2969,8 @@ static __init int iscsi_transport_init(void) if (err) goto unregister_conn_class; - nls = netlink_kernel_create(&init_net, NETLINK_ISCSI, 1, iscsi_if_rx, - NULL, THIS_MODULE); + nls = netlink_kernel_create(&init_net, NETLINK_ISCSI, + THIS_MODULE, &cfg); if (!nls) { err = -ENOBUFS; goto unregister_session_class; diff --git a/drivers/ssb/b43_pci_bridge.c b/drivers/ssb/b43_pci_bridge.c index f551e5376147..266aa1648a02 100644 --- a/drivers/ssb/b43_pci_bridge.c +++ b/drivers/ssb/b43_pci_bridge.c @@ -36,6 +36,7 @@ static const struct pci_device_id b43_pci_bridge_tbl[] = { { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4328) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4329) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x432b) }, + { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x432c) }, { 0, }, }; MODULE_DEVICE_TABLE(pci, b43_pci_bridge_tbl); diff --git a/drivers/ssb/scan.c b/drivers/ssb/scan.c index 266c7c5c86dc..ab4627cf1114 100644 --- a/drivers/ssb/scan.c +++ b/drivers/ssb/scan.c @@ -90,6 +90,8 @@ const char *ssb_core_name(u16 coreid) return "ARM 1176"; case SSB_DEV_ARM_7TDMI: return "ARM 7TDMI"; + case SSB_DEV_ARM_CM3: + return "ARM Cortex M3"; } 
return "UNKNOWN"; } diff --git a/drivers/staging/gdm72xx/netlink_k.c b/drivers/staging/gdm72xx/netlink_k.c index 51665132c61b..87c3a07ed80e 100644 --- a/drivers/staging/gdm72xx/netlink_k.c +++ b/drivers/staging/gdm72xx/netlink_k.c @@ -88,13 +88,15 @@ struct sock *netlink_init(int unit, void (*cb)(struct net_device *dev, u16 type, void *msg, int len)) { struct sock *sock; + struct netlink_kernel_cfg cfg = { + .input = netlink_rcv, + }; #if !defined(DEFINE_MUTEX) init_MUTEX(&netlink_mutex); #endif - sock = netlink_kernel_create(&init_net, unit, 0, netlink_rcv, NULL, - THIS_MODULE); + sock = netlink_kernel_create(&init_net, unit, THIS_MODULE, &cfg); if (sock) rcv_cb = cb; @@ -127,8 +129,12 @@ int netlink_send(struct sock *sock, int group, u16 type, void *msg, int len) } seq++; - nlh = NLMSG_PUT(skb, 0, seq, type, len); - memcpy(NLMSG_DATA(nlh), msg, len); + nlh = nlmsg_put(skb, 0, seq, type, len, 0); + if (!nlh) { + kfree_skb(skb); + return -EMSGSIZE; + } + memcpy(nlmsg_data(nlh), msg, len); NETLINK_CB(skb).pid = 0; NETLINK_CB(skb).dst_group = 0; @@ -144,7 +150,5 @@ int netlink_send(struct sock *sock, int group, u16 type, void *msg, int len) } ret = 0; } - -nlmsg_failure: return ret; } diff --git a/drivers/usb/atm/xusbatm.c b/drivers/usb/atm/xusbatm.c index 14ec9f0c5924..b3b1bb78b2ef 100644 --- a/drivers/usb/atm/xusbatm.c +++ b/drivers/usb/atm/xusbatm.c @@ -20,7 +20,7 @@ ******************************************************************************/ #include <linux/module.h> -#include <linux/etherdevice.h> /* for random_ether_addr() */ +#include <linux/etherdevice.h> /* for eth_random_addr() */ #include "usbatm.h" @@ -163,7 +163,7 @@ static int xusbatm_atm_start(struct usbatm_data *usbatm, atm_dbg(usbatm, "%s entered\n", __func__); /* use random MAC as we've no way to get it from the device */ - random_ether_addr(atm_dev->esi); + eth_random_addr(atm_dev->esi); return 0; } diff --git a/drivers/usb/gadget/u_ether.c b/drivers/usb/gadget/u_ether.c index 47cf48b51c9d..b9e1925b2df0 100644 --- a/drivers/usb/gadget/u_ether.c +++ b/drivers/usb/gadget/u_ether.c @@ -724,7 +724,7 @@ static int get_ether_addr(const char *str, u8 *dev_addr) if (is_valid_ether_addr(dev_addr)) return 0; } - random_ether_addr(dev_addr); + eth_random_addr(dev_addr); return 1; } diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f82a7394756e..072cbbadbc36 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -823,14 +823,14 @@ static long vhost_net_ioctl(struct file *f, unsigned int ioctl, return -EFAULT; return vhost_net_set_backend(n, backend.index, backend.fd); case VHOST_GET_FEATURES: - features = VHOST_FEATURES; + features = VHOST_NET_FEATURES; if (copy_to_user(featurep, &features, sizeof features)) return -EFAULT; return 0; case VHOST_SET_FEATURES: if (copy_from_user(&features, featurep, sizeof features)) return -EFAULT; - if (features & ~VHOST_FEATURES) + if (features & ~VHOST_NET_FEATURES) return -EOPNOTSUPP; return vhost_net_set_features(n, features); case VHOST_RESET_OWNER: diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c index 3de00d9fae2e..91d6f060aade 100644 --- a/drivers/vhost/test.c +++ b/drivers/vhost/test.c @@ -261,14 +261,14 @@ static long vhost_test_ioctl(struct file *f, unsigned int ioctl, return -EFAULT; return vhost_test_run(n, test); case VHOST_GET_FEATURES: - features = VHOST_FEATURES; + features = VHOST_NET_FEATURES; if (copy_to_user(featurep, &features, sizeof features)) return -EFAULT; return 0; case VHOST_SET_FEATURES: if (copy_from_user(&features, featurep, sizeof 
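
The scsi_netlink, iSCSI transport and gdm72xx hunks above are the same mechanical conversion: the groups count and input callback move out of netlink_kernel_create()'s argument list into a struct netlink_kernel_cfg, and the old NLMSG_PUT()/nlmsg_failure goto pattern becomes a plain nlmsg_put() NULL check. A minimal module-style sketch of the new calling convention; the protocol number and all names here are placeholders, not taken from any of the drivers above.

#include <linux/module.h>
#include <linux/netlink.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>
#include <net/netlink.h>

static struct sock *example_nl_sock;

static void example_nl_rcv(struct sk_buff *skb)
{
    /* parse requests from user space here */
}

static int __init example_nl_init(void)
{
    struct netlink_kernel_cfg cfg = {
        .groups = 1,
        .input  = example_nl_rcv,
    };

    /* NETLINK_USERSOCK stands in for a real protocol number */
    example_nl_sock = netlink_kernel_create(&init_net, NETLINK_USERSOCK,
                                            THIS_MODULE, &cfg);
    return example_nl_sock ? 0 : -ENOMEM;
}

static void __exit example_nl_exit(void)
{
    netlink_kernel_release(example_nl_sock);
}

module_init(example_nl_init);
module_exit(example_nl_exit);
MODULE_LICENSE("GPL");
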
features)) return -EFAULT; - if (features & ~VHOST_FEATURES) + if (features & ~VHOST_NET_FEATURES) return -EOPNOTSUPP; return vhost_test_set_features(n, features); case VHOST_RESET_OWNER: diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 112156f68afb..ef82a0d18489 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -64,7 +64,7 @@ static int vhost_poll_wakeup(wait_queue_t *wait, unsigned mode, int sync, return 0; } -static void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn) +void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn) { INIT_LIST_HEAD(&work->node); work->fn = fn; @@ -137,8 +137,7 @@ void vhost_poll_flush(struct vhost_poll *poll) vhost_work_flush(poll->dev, &poll->work); } -static inline void vhost_work_queue(struct vhost_dev *dev, - struct vhost_work *work) +void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work) { unsigned long flags; diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index 8de1fd5b8efb..1125af3d27d1 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -43,6 +43,9 @@ struct vhost_poll { struct vhost_dev *dev; }; +void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn); +void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work); + void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn, unsigned long mask, struct vhost_dev *dev); void vhost_poll_start(struct vhost_poll *poll, struct file *file); @@ -201,7 +204,8 @@ enum { VHOST_FEATURES = (1ULL << VIRTIO_F_NOTIFY_ON_EMPTY) | (1ULL << VIRTIO_RING_F_INDIRECT_DESC) | (1ULL << VIRTIO_RING_F_EVENT_IDX) | - (1ULL << VHOST_F_LOG_ALL) | + (1ULL << VHOST_F_LOG_ALL), + VHOST_NET_FEATURES = VHOST_FEATURES | (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) | (1ULL << VIRTIO_NET_F_MRG_RXBUF), }; diff --git a/include/linux/bcma/bcma.h b/include/linux/bcma/bcma.h index 8deaf6d050c3..1954a4e305a3 100644 --- a/include/linux/bcma/bcma.h +++ b/include/linux/bcma/bcma.h @@ -7,6 +7,7 @@ #include <linux/bcma/bcma_driver_chipcommon.h> #include <linux/bcma/bcma_driver_pci.h> #include <linux/bcma/bcma_driver_mips.h> +#include <linux/bcma/bcma_driver_gmac_cmn.h> #include <linux/ssb/ssb.h> /* SPROM sharing */ #include "bcma_regs.h" @@ -70,6 +71,13 @@ struct bcma_host_ops { /* Core-ID values. 
*/ #define BCMA_CORE_OOB_ROUTER 0x367 /* Out of band */ +#define BCMA_CORE_4706_CHIPCOMMON 0x500 +#define BCMA_CORE_4706_SOC_RAM 0x50E +#define BCMA_CORE_4706_MAC_GBIT 0x52D +#define BCMA_CORE_AMEMC 0x52E /* DDR1/2 memory controller core */ +#define BCMA_CORE_ALTA 0x534 /* I2S core */ +#define BCMA_CORE_4706_MAC_GBIT_COMMON 0x5DC +#define BCMA_CORE_DDR23_PHY 0x5DD #define BCMA_CORE_INVALID 0x700 #define BCMA_CORE_CHIPCOMMON 0x800 #define BCMA_CORE_ILINE20 0x801 @@ -130,6 +138,36 @@ struct bcma_host_ops { #define BCMA_MAX_NR_CORES 16 +/* Chip IDs of PCIe devices */ +#define BCMA_CHIP_ID_BCM4313 0x4313 +#define BCMA_CHIP_ID_BCM43224 43224 +#define BCMA_PKG_ID_BCM43224_FAB_CSM 0x8 +#define BCMA_PKG_ID_BCM43224_FAB_SMIC 0xa +#define BCMA_CHIP_ID_BCM43225 43225 +#define BCMA_CHIP_ID_BCM43227 43227 +#define BCMA_CHIP_ID_BCM43228 43228 +#define BCMA_CHIP_ID_BCM43421 43421 +#define BCMA_CHIP_ID_BCM43428 43428 +#define BCMA_CHIP_ID_BCM43431 43431 +#define BCMA_CHIP_ID_BCM43460 43460 +#define BCMA_CHIP_ID_BCM4331 0x4331 +#define BCMA_CHIP_ID_BCM6362 0x6362 +#define BCMA_CHIP_ID_BCM4360 0x4360 +#define BCMA_CHIP_ID_BCM4352 0x4352 + +/* Chip IDs of SoCs */ +#define BCMA_CHIP_ID_BCM4706 0x5300 +#define BCMA_CHIP_ID_BCM4716 0x4716 +#define BCMA_PKG_ID_BCM4716 8 +#define BCMA_PKG_ID_BCM4717 9 +#define BCMA_PKG_ID_BCM4718 10 +#define BCMA_CHIP_ID_BCM47162 47162 +#define BCMA_CHIP_ID_BCM4748 0x4748 +#define BCMA_CHIP_ID_BCM4749 0x4749 +#define BCMA_CHIP_ID_BCM5356 0x5356 +#define BCMA_CHIP_ID_BCM5357 0x5357 +#define BCMA_CHIP_ID_BCM53572 53572 + struct bcma_device { struct bcma_bus *bus; struct bcma_device_id id; @@ -215,6 +253,7 @@ struct bcma_bus { struct bcma_drv_cc drv_cc; struct bcma_drv_pci drv_pci; struct bcma_drv_mips drv_mips; + struct bcma_drv_gmac_cmn drv_gmac_cmn; /* We decided to share SPROM struct with SSB as long as we do not need * any hacks for BCMA. This simplifies drivers code. */ diff --git a/include/linux/bcma/bcma_driver_chipcommon.h b/include/linux/bcma/bcma_driver_chipcommon.h index 8bbfe31fbac8..3c80885fa829 100644 --- a/include/linux/bcma/bcma_driver_chipcommon.h +++ b/include/linux/bcma/bcma_driver_chipcommon.h @@ -24,7 +24,7 @@ #define BCMA_CC_FLASHT_NONE 0x00000000 /* No flash */ #define BCMA_CC_FLASHT_STSER 0x00000100 /* ST serial flash */ #define BCMA_CC_FLASHT_ATSER 0x00000200 /* Atmel serial flash */ -#define BCMA_CC_FLASHT_NFLASH 0x00000200 +#define BCMA_CC_FLASHT_NFLASH 0x00000200 /* NAND flash */ #define BCMA_CC_FLASHT_PARA 0x00000700 /* Parallel flash */ #define BCMA_CC_CAP_PLLT 0x00038000 /* PLL Type */ #define BCMA_PLLTYPE_NONE 0x00000000 @@ -45,6 +45,7 @@ #define BCMA_CC_CAP_PMU 0x10000000 /* PMU available (rev >= 20) */ #define BCMA_CC_CAP_ECI 0x20000000 /* ECI available (rev >= 20) */ #define BCMA_CC_CAP_SPROM 0x40000000 /* SPROM present */ +#define BCMA_CC_CAP_NFLASH 0x80000000 /* NAND flash present (rev >= 35 or BCM4706?) 
*/ #define BCMA_CC_CORECTL 0x0008 #define BCMA_CC_CORECTL_UARTCLK0 0x00000001 /* Drive UART with internal clock */ #define BCMA_CC_CORECTL_SE 0x00000002 /* sync clk out enable (corerev >= 3) */ @@ -88,6 +89,11 @@ #define BCMA_CC_CHIPST_4313_OTP_PRESENT 2 #define BCMA_CC_CHIPST_4331_SPROM_PRESENT 2 #define BCMA_CC_CHIPST_4331_OTP_PRESENT 4 +#define BCMA_CC_CHIPST_4706_PKG_OPTION BIT(0) /* 0: full-featured package 1: low-cost package */ +#define BCMA_CC_CHIPST_4706_SFLASH_PRESENT BIT(1) /* 0: parallel, 1: serial flash is present */ +#define BCMA_CC_CHIPST_4706_SFLASH_TYPE BIT(2) /* 0: 8b-p/ST-s flash, 1: 16b-p/Atmal-s flash */ +#define BCMA_CC_CHIPST_4706_MIPS_BENDIAN BIT(3) /* 0: little, 1: big endian */ +#define BCMA_CC_CHIPST_4706_PCIE1_DISABLE BIT(5) /* PCIE1 enable strap pin */ #define BCMA_CC_JCMD 0x0030 /* Rev >= 10 only */ #define BCMA_CC_JCMD_START 0x80000000 #define BCMA_CC_JCMD_BUSY 0x80000000 @@ -117,10 +123,58 @@ #define BCMA_CC_JCTL_EXT_EN 2 /* Enable external targets */ #define BCMA_CC_JCTL_EN 1 /* Enable Jtag master */ #define BCMA_CC_FLASHCTL 0x0040 +/* Start/busy bit in flashcontrol */ +#define BCMA_CC_FLASHCTL_OPCODE 0x000000ff +#define BCMA_CC_FLASHCTL_ACTION 0x00000700 +#define BCMA_CC_FLASHCTL_CS_ACTIVE 0x00001000 /* Chip Select Active, rev >= 20 */ #define BCMA_CC_FLASHCTL_START 0x80000000 #define BCMA_CC_FLASHCTL_BUSY BCMA_CC_FLASHCTL_START +/* Flashcontrol action + opcodes for ST flashes */ +#define BCMA_CC_FLASHCTL_ST_WREN 0x0006 /* Write Enable */ +#define BCMA_CC_FLASHCTL_ST_WRDIS 0x0004 /* Write Disable */ +#define BCMA_CC_FLASHCTL_ST_RDSR 0x0105 /* Read Status Register */ +#define BCMA_CC_FLASHCTL_ST_WRSR 0x0101 /* Write Status Register */ +#define BCMA_CC_FLASHCTL_ST_READ 0x0303 /* Read Data Bytes */ +#define BCMA_CC_FLASHCTL_ST_PP 0x0302 /* Page Program */ +#define BCMA_CC_FLASHCTL_ST_SE 0x02d8 /* Sector Erase */ +#define BCMA_CC_FLASHCTL_ST_BE 0x00c7 /* Bulk Erase */ +#define BCMA_CC_FLASHCTL_ST_DP 0x00b9 /* Deep Power-down */ +#define BCMA_CC_FLASHCTL_ST_RES 0x03ab /* Read Electronic Signature */ +#define BCMA_CC_FLASHCTL_ST_CSA 0x1000 /* Keep chip select asserted */ +#define BCMA_CC_FLASHCTL_ST_SSE 0x0220 /* Sub-sector Erase */ +/* Flashcontrol action + opcodes for Atmel flashes */ +#define BCMA_CC_FLASHCTL_AT_READ 0x07e8 +#define BCMA_CC_FLASHCTL_AT_PAGE_READ 0x07d2 +#define BCMA_CC_FLASHCTL_AT_STATUS 0x01d7 +#define BCMA_CC_FLASHCTL_AT_BUF1_WRITE 0x0384 +#define BCMA_CC_FLASHCTL_AT_BUF2_WRITE 0x0387 +#define BCMA_CC_FLASHCTL_AT_BUF1_ERASE_PROGRAM 0x0283 +#define BCMA_CC_FLASHCTL_AT_BUF2_ERASE_PROGRAM 0x0286 +#define BCMA_CC_FLASHCTL_AT_BUF1_PROGRAM 0x0288 +#define BCMA_CC_FLASHCTL_AT_BUF2_PROGRAM 0x0289 +#define BCMA_CC_FLASHCTL_AT_PAGE_ERASE 0x0281 +#define BCMA_CC_FLASHCTL_AT_BLOCK_ERASE 0x0250 +#define BCMA_CC_FLASHCTL_AT_BUF1_WRITE_ERASE_PROGRAM 0x0382 +#define BCMA_CC_FLASHCTL_AT_BUF2_WRITE_ERASE_PROGRAM 0x0385 +#define BCMA_CC_FLASHCTL_AT_BUF1_LOAD 0x0253 +#define BCMA_CC_FLASHCTL_AT_BUF2_LOAD 0x0255 +#define BCMA_CC_FLASHCTL_AT_BUF1_COMPARE 0x0260 +#define BCMA_CC_FLASHCTL_AT_BUF2_COMPARE 0x0261 +#define BCMA_CC_FLASHCTL_AT_BUF1_REPROGRAM 0x0258 +#define BCMA_CC_FLASHCTL_AT_BUF2_REPROGRAM 0x0259 #define BCMA_CC_FLASHADDR 0x0044 #define BCMA_CC_FLASHDATA 0x0048 +/* Status register bits for ST flashes */ +#define BCMA_CC_FLASHDATA_ST_WIP 0x01 /* Write In Progress */ +#define BCMA_CC_FLASHDATA_ST_WEL 0x02 /* Write Enable Latch */ +#define BCMA_CC_FLASHDATA_ST_BP_MASK 0x1c /* Block Protect */ +#define BCMA_CC_FLASHDATA_ST_BP_SHIFT 2 +#define 
BCMA_CC_FLASHDATA_ST_SRWD 0x80 /* Status Register Write Disable */ +/* Status register bits for Atmel flashes */ +#define BCMA_CC_FLASHDATA_AT_READY 0x80 +#define BCMA_CC_FLASHDATA_AT_MISMATCH 0x40 +#define BCMA_CC_FLASHDATA_AT_ID_MASK 0x38 +#define BCMA_CC_FLASHDATA_AT_ID_SHIFT 3 #define BCMA_CC_BCAST_ADDR 0x0050 #define BCMA_CC_BCAST_DATA 0x0054 #define BCMA_CC_GPIOPULLUP 0x0058 /* Rev >= 20 only */ @@ -280,6 +334,15 @@ /* 4706 PMU */ #define BCMA_CC_PMU4706_MAINPLL_PLL0 0 +#define BCMA_CC_PMU6_4706_PROCPLL_OFF 4 /* The CPU PLL */ +#define BCMA_CC_PMU6_4706_PROC_P2DIV_MASK 0x000f0000 +#define BCMA_CC_PMU6_4706_PROC_P2DIV_SHIFT 16 +#define BCMA_CC_PMU6_4706_PROC_P1DIV_MASK 0x0000f000 +#define BCMA_CC_PMU6_4706_PROC_P1DIV_SHIFT 12 +#define BCMA_CC_PMU6_4706_PROC_NDIV_INT_MASK 0x00000ff8 +#define BCMA_CC_PMU6_4706_PROC_NDIV_INT_SHIFT 3 +#define BCMA_CC_PMU6_4706_PROC_NDIV_MODE_MASK 0x00000007 +#define BCMA_CC_PMU6_4706_PROC_NDIV_MODE_SHIFT 0 /* ALP clock on pre-PMU chips */ #define BCMA_CC_PMU_ALP_CLOCK 20000000 @@ -308,6 +371,19 @@ #define BCMA_CC_PPL_PCHI_OFF 5 #define BCMA_CC_PPL_PCHI_MASK 0x0000003f +#define BCMA_CC_PMU_PLL_CTL0 0 +#define BCMA_CC_PMU_PLL_CTL1 1 +#define BCMA_CC_PMU_PLL_CTL2 2 +#define BCMA_CC_PMU_PLL_CTL3 3 +#define BCMA_CC_PMU_PLL_CTL4 4 +#define BCMA_CC_PMU_PLL_CTL5 5 + +#define BCMA_CC_PMU1_PLL0_PC0_P1DIV_MASK 0x00f00000 +#define BCMA_CC_PMU1_PLL0_PC0_P1DIV_SHIFT 20 + +#define BCMA_CC_PMU1_PLL0_PC2_NDIV_INT_MASK 0x1ff00000 +#define BCMA_CC_PMU1_PLL0_PC2_NDIV_INT_SHIFT 20 + /* BCM4331 ChipControl numbers. */ #define BCMA_CHIPCTL_4331_BT_COEXIST BIT(0) /* 0 disable */ #define BCMA_CHIPCTL_4331_SECI BIT(1) /* 0 SECI is disabled (JATG functional) */ @@ -321,9 +397,18 @@ #define BCMA_CHIPCTL_4331_OVR_PIPEAUXPWRDOWN BIT(9) /* override core control on pipe_AuxPowerDown */ #define BCMA_CHIPCTL_4331_PCIE_AUXCLKEN BIT(10) /* pcie_auxclkenable */ #define BCMA_CHIPCTL_4331_PCIE_PIPE_PLLDOWN BIT(11) /* pcie_pipe_pllpowerdown */ +#define BCMA_CHIPCTL_4331_EXTPA_EN2 BIT(12) /* 0 ext pa disable, 1 ext pa enabled */ #define BCMA_CHIPCTL_4331_BT_SHD0_ON_GPIO4 BIT(16) /* enable bt_shd0 at gpio4 */ #define BCMA_CHIPCTL_4331_BT_SHD1_ON_GPIO5 BIT(17) /* enable bt_shd1 at gpio5 */ +/* 43224 chip-specific ChipControl register bits */ +#define BCMA_CCTRL_43224_GPIO_TOGGLE 0x8000 /* gpio[3:0] pins as btcoex or s/w gpio */ +#define BCMA_CCTRL_43224A0_12MA_LED_DRIVE 0x00F000F0 /* 12 mA drive strength */ +#define BCMA_CCTRL_43224B0_12MA_LED_DRIVE 0xF0 /* 12 mA drive strength for later 43224s */ + +/* 4313 Chip specific ChipControl register bits */ +#define BCMA_CCTRL_4313_12MA_LED_DRIVE 0x00000007 /* 12 mA drive strengh for later 4313 */ + /* Data for the PMU, if available. 
* Check availability with ((struct bcma_chipcommon)->capabilities & BCMA_CC_CAP_PMU) */ @@ -411,5 +496,6 @@ extern void bcma_chipco_chipctl_maskset(struct bcma_drv_cc *cc, u32 offset, u32 mask, u32 set); extern void bcma_chipco_regctl_maskset(struct bcma_drv_cc *cc, u32 offset, u32 mask, u32 set); +extern void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid); #endif /* LINUX_BCMA_DRIVER_CC_H_ */ diff --git a/include/linux/bcma/bcma_driver_gmac_cmn.h b/include/linux/bcma/bcma_driver_gmac_cmn.h new file mode 100644 index 000000000000..def894b83b0d --- /dev/null +++ b/include/linux/bcma/bcma_driver_gmac_cmn.h @@ -0,0 +1,100 @@ +#ifndef LINUX_BCMA_DRIVER_GMAC_CMN_H_ +#define LINUX_BCMA_DRIVER_GMAC_CMN_H_ + +#include <linux/types.h> + +#define BCMA_GMAC_CMN_STAG0 0x000 +#define BCMA_GMAC_CMN_STAG1 0x004 +#define BCMA_GMAC_CMN_STAG2 0x008 +#define BCMA_GMAC_CMN_STAG3 0x00C +#define BCMA_GMAC_CMN_PARSER_CTL 0x020 +#define BCMA_GMAC_CMN_MIB_MAX_LEN 0x024 +#define BCMA_GMAC_CMN_PHY_ACCESS 0x100 +#define BCMA_GMAC_CMN_PA_DATA_MASK 0x0000ffff +#define BCMA_GMAC_CMN_PA_ADDR_MASK 0x001f0000 +#define BCMA_GMAC_CMN_PA_ADDR_SHIFT 16 +#define BCMA_GMAC_CMN_PA_REG_MASK 0x1f000000 +#define BCMA_GMAC_CMN_PA_REG_SHIFT 24 +#define BCMA_GMAC_CMN_PA_WRITE 0x20000000 +#define BCMA_GMAC_CMN_PA_START 0x40000000 +#define BCMA_GMAC_CMN_PHY_CTL 0x104 +#define BCMA_GMAC_CMN_PC_EPA_MASK 0x0000001f +#define BCMA_GMAC_CMN_PC_MCT_MASK 0x007f0000 +#define BCMA_GMAC_CMN_PC_MCT_SHIFT 16 +#define BCMA_GMAC_CMN_PC_MTE 0x00800000 +#define BCMA_GMAC_CMN_GMAC0_RGMII_CTL 0x110 +#define BCMA_GMAC_CMN_CFP_ACCESS 0x200 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA0 0x210 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA1 0x214 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA2 0x218 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA3 0x21C +#define BCMA_GMAC_CMN_CFP_TCAM_DATA4 0x220 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA5 0x224 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA6 0x228 +#define BCMA_GMAC_CMN_CFP_TCAM_DATA7 0x22C +#define BCMA_GMAC_CMN_CFP_TCAM_MASK0 0x230 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK1 0x234 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK2 0x238 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK3 0x23C +#define BCMA_GMAC_CMN_CFP_TCAM_MASK4 0x240 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK5 0x244 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK6 0x248 +#define BCMA_GMAC_CMN_CFP_TCAM_MASK7 0x24C +#define BCMA_GMAC_CMN_CFP_ACTION_DATA 0x250 +#define BCMA_GMAC_CMN_TCAM_BIST_CTL 0x2A0 +#define BCMA_GMAC_CMN_TCAM_BIST_STATUS 0x2A4 +#define BCMA_GMAC_CMN_TCAM_CMP_STATUS 0x2A8 +#define BCMA_GMAC_CMN_TCAM_DISABLE 0x2AC +#define BCMA_GMAC_CMN_TCAM_TEST_CTL 0x2F0 +#define BCMA_GMAC_CMN_UDF_0_A3_A0 0x300 +#define BCMA_GMAC_CMN_UDF_0_A7_A4 0x304 +#define BCMA_GMAC_CMN_UDF_0_A8 0x308 +#define BCMA_GMAC_CMN_UDF_1_A3_A0 0x310 +#define BCMA_GMAC_CMN_UDF_1_A7_A4 0x314 +#define BCMA_GMAC_CMN_UDF_1_A8 0x318 +#define BCMA_GMAC_CMN_UDF_2_A3_A0 0x320 +#define BCMA_GMAC_CMN_UDF_2_A7_A4 0x324 +#define BCMA_GMAC_CMN_UDF_2_A8 0x328 +#define BCMA_GMAC_CMN_UDF_0_B3_B0 0x330 +#define BCMA_GMAC_CMN_UDF_0_B7_B4 0x334 +#define BCMA_GMAC_CMN_UDF_0_B8 0x338 +#define BCMA_GMAC_CMN_UDF_1_B3_B0 0x340 +#define BCMA_GMAC_CMN_UDF_1_B7_B4 0x344 +#define BCMA_GMAC_CMN_UDF_1_B8 0x348 +#define BCMA_GMAC_CMN_UDF_2_B3_B0 0x350 +#define BCMA_GMAC_CMN_UDF_2_B7_B4 0x354 +#define BCMA_GMAC_CMN_UDF_2_B8 0x358 +#define BCMA_GMAC_CMN_UDF_0_C3_C0 0x360 +#define BCMA_GMAC_CMN_UDF_0_C7_C4 0x364 +#define BCMA_GMAC_CMN_UDF_0_C8 0x368 +#define BCMA_GMAC_CMN_UDF_1_C3_C0 0x370 +#define BCMA_GMAC_CMN_UDF_1_C7_C4 0x374 +#define BCMA_GMAC_CMN_UDF_1_C8 0x378 
+#define BCMA_GMAC_CMN_UDF_2_C3_C0 0x380 +#define BCMA_GMAC_CMN_UDF_2_C7_C4 0x384 +#define BCMA_GMAC_CMN_UDF_2_C8 0x388 +#define BCMA_GMAC_CMN_UDF_0_D3_D0 0x390 +#define BCMA_GMAC_CMN_UDF_0_D7_D4 0x394 +#define BCMA_GMAC_CMN_UDF_0_D11_D8 0x394 + +struct bcma_drv_gmac_cmn { + struct bcma_device *core; + + /* Drivers accessing BCMA_GMAC_CMN_PHY_ACCESS and + * BCMA_GMAC_CMN_PHY_CTL need to take that mutex first. */ + struct mutex phy_mutex; +}; + +/* Register access */ +#define gmac_cmn_read16(gc, offset) bcma_read16((gc)->core, offset) +#define gmac_cmn_read32(gc, offset) bcma_read32((gc)->core, offset) +#define gmac_cmn_write16(gc, offset, val) bcma_write16((gc)->core, offset, val) +#define gmac_cmn_write32(gc, offset, val) bcma_write32((gc)->core, offset, val) + +#ifdef CONFIG_BCMA_DRIVER_GMAC_CMN +extern void __devinit bcma_core_gmac_cmn_init(struct bcma_drv_gmac_cmn *gc); +#else +static inline void bcma_core_gmac_cmn_init(struct bcma_drv_gmac_cmn *gc) { } +#endif + +#endif /* LINUX_BCMA_DRIVER_GMAC_CMN_H_ */ diff --git a/include/linux/can.h b/include/linux/can.h index 9a19bcb3eeaf..018055efc034 100644 --- a/include/linux/can.h +++ b/include/linux/can.h @@ -21,7 +21,7 @@ /* special address description flags for the CAN_ID */ #define CAN_EFF_FLAG 0x80000000U /* EFF/SFF is set in the MSB */ #define CAN_RTR_FLAG 0x40000000U /* remote transmission request */ -#define CAN_ERR_FLAG 0x20000000U /* error frame */ +#define CAN_ERR_FLAG 0x20000000U /* error message frame */ /* valid bits in CAN ID for frame formats */ #define CAN_SFF_MASK 0x000007FFU /* standard frame format (SFF) */ @@ -32,32 +32,84 @@ * Controller Area Network Identifier structure * * bit 0-28 : CAN identifier (11/29 bit) - * bit 29 : error frame flag (0 = data frame, 1 = error frame) + * bit 29 : error message frame flag (0 = data frame, 1 = error message) * bit 30 : remote transmission request flag (1 = rtr frame) * bit 31 : frame format flag (0 = standard 11 bit, 1 = extended 29 bit) */ typedef __u32 canid_t; +#define CAN_SFF_ID_BITS 11 +#define CAN_EFF_ID_BITS 29 + /* - * Controller Area Network Error Frame Mask structure + * Controller Area Network Error Message Frame Mask structure * * bit 0-28 : error class mask (see include/linux/can/error.h) * bit 29-31 : set to zero */ typedef __u32 can_err_mask_t; +/* CAN payload length and DLC definitions according to ISO 11898-1 */ +#define CAN_MAX_DLC 8 +#define CAN_MAX_DLEN 8 + +/* CAN FD payload length and DLC definitions according to ISO 11898-7 */ +#define CANFD_MAX_DLC 15 +#define CANFD_MAX_DLEN 64 + /** * struct can_frame - basic CAN frame structure - * @can_id: the CAN ID of the frame and CAN_*_FLAG flags, see above. - * @can_dlc: the data length field of the CAN frame - * @data: the CAN frame payload. + * @can_id: CAN ID of the frame and CAN_*_FLAG flags, see canid_t definition + * @can_dlc: frame payload length in byte (0 .. 8) aka data length code + * N.B. the DLC field from ISO 11898-1 Chapter 8.4.2.3 has a 1:1 + * mapping of the 'data length code' to the real payload length + * @data: CAN frame payload (up to 8 byte) */ struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ - __u8 can_dlc; /* data length code: 0 .. 8 */ - __u8 data[8] __attribute__((aligned(8))); + __u8 can_dlc; /* frame payload length in byte (0 .. 
CAN_MAX_DLEN) */ + __u8 data[CAN_MAX_DLEN] __attribute__((aligned(8))); +}; + +/* + * defined bits for canfd_frame.flags + * + * As the default for CAN FD should be to support the high data rate in the + * payload section of the frame (HDR) and to support up to 64 byte in the + * data section (EDL) the bits are only set in the non-default case. + * Btw. as long as there's no real implementation for CAN FD network driver + * these bits are only preliminary. + * + * RX: NOHDR/NOEDL - info about received CAN FD frame + * ESI - bit from originating CAN controller + * TX: NOHDR/NOEDL - control per-frame settings if supported by CAN controller + * ESI - bit is set by local CAN controller + */ +#define CANFD_NOHDR 0x01 /* frame without high data rate */ +#define CANFD_NOEDL 0x02 /* frame without extended data length */ +#define CANFD_ESI 0x04 /* error state indicator */ + +/** + * struct canfd_frame - CAN flexible data rate frame structure + * @can_id: CAN ID of the frame and CAN_*_FLAG flags, see canid_t definition + * @len: frame payload length in byte (0 .. CANFD_MAX_DLEN) + * @flags: additional flags for CAN FD + * @__res0: reserved / padding + * @__res1: reserved / padding + * @data: CAN FD frame payload (up to CANFD_MAX_DLEN byte) + */ +struct canfd_frame { + canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ + __u8 len; /* frame payload length in byte */ + __u8 flags; /* additional flags for CAN FD */ + __u8 __res0; /* reserved / padding */ + __u8 __res1; /* reserved / padding */ + __u8 data[CANFD_MAX_DLEN] __attribute__((aligned(8))); }; +#define CAN_MTU (sizeof(struct can_frame)) +#define CANFD_MTU (sizeof(struct canfd_frame)) + /* particular protocols of the protocol family PF_CAN */ #define CAN_RAW 1 /* RAW sockets */ #define CAN_BCM 2 /* Broadcast Manager */ @@ -97,7 +149,7 @@ struct sockaddr_can { * <received_can_id> & mask == can_id & mask * * The filter can be inverted (CAN_INV_FILTER bit set in can_id) or it can - * filter for error frames (CAN_ERR_FLAG bit set in mask). + * filter for error message frames (CAN_ERR_FLAG bit set in mask). */ struct can_filter { canid_t can_id; diff --git a/include/linux/can/core.h b/include/linux/can/core.h index 0ccc1cd28b95..78c6c52073ad 100644 --- a/include/linux/can/core.h +++ b/include/linux/can/core.h @@ -17,10 +17,10 @@ #include <linux/skbuff.h> #include <linux/netdevice.h> -#define CAN_VERSION "20090105" +#define CAN_VERSION "20120528" /* increment this number each time you change some user-space interface */ -#define CAN_ABI_VERSION "8" +#define CAN_ABI_VERSION "9" #define CAN_VERSION_STRING "rev " CAN_VERSION " abi " CAN_ABI_VERSION diff --git a/include/linux/can/dev.h b/include/linux/can/dev.h index 5d2efe7e3f1b..2b2fc345afca 100644 --- a/include/linux/can/dev.h +++ b/include/linux/can/dev.h @@ -33,7 +33,7 @@ struct can_priv { struct can_device_stats can_stats; struct can_bittiming bittiming; - struct can_bittiming_const *bittiming_const; + const struct can_bittiming_const *bittiming_const; struct can_clock clock; enum can_state state; @@ -61,23 +61,40 @@ struct can_priv { * To be used in the CAN netdriver receive path to ensure conformance with * ISO 11898-1 Chapter 8.4.2.3 (DLC field) */ -#define get_can_dlc(i) (min_t(__u8, (i), 8)) +#define get_can_dlc(i) (min_t(__u8, (i), CAN_MAX_DLC)) +#define get_canfd_dlc(i) (min_t(__u8, (i), CANFD_MAX_DLC)) /* Drop a given socketbuffer if it does not contain a valid CAN frame. 
*/ static inline int can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb) { - const struct can_frame *cf = (struct can_frame *)skb->data; - - if (unlikely(skb->len != sizeof(*cf) || cf->can_dlc > 8)) { - kfree_skb(skb); - dev->stats.tx_dropped++; - return 1; - } + const struct canfd_frame *cfd = (struct canfd_frame *)skb->data; + + if (skb->protocol == htons(ETH_P_CAN)) { + if (unlikely(skb->len != CAN_MTU || + cfd->len > CAN_MAX_DLEN)) + goto inval_skb; + } else if (skb->protocol == htons(ETH_P_CANFD)) { + if (unlikely(skb->len != CANFD_MTU || + cfd->len > CANFD_MAX_DLEN)) + goto inval_skb; + } else + goto inval_skb; return 0; + +inval_skb: + kfree_skb(skb); + dev->stats.tx_dropped++; + return 1; } +/* get data length from can_dlc with sanitized can_dlc */ +u8 can_dlc2len(u8 can_dlc); + +/* map the sanitized data length to an appropriate data length code */ +u8 can_len2dlc(u8 len); + struct net_device *alloc_candev(int sizeof_priv, unsigned int echo_skb_max); void free_candev(struct net_device *dev); diff --git a/include/linux/can/error.h b/include/linux/can/error.h index 63e855ea6b84..7b7148bded71 100644 --- a/include/linux/can/error.h +++ b/include/linux/can/error.h @@ -1,7 +1,7 @@ /* * linux/can/error.h * - * Definitions of the CAN error frame to be filtered and passed to the user. + * Definitions of the CAN error messages to be filtered and passed to the user. * * Author: Oliver Hartkopp <oliver.hartkopp@volkswagen.de> * Copyright (c) 2002-2007 Volkswagen Group Electronic Research @@ -12,7 +12,7 @@ #ifndef CAN_ERROR_H #define CAN_ERROR_H -#define CAN_ERR_DLC 8 /* dlc for error frames */ +#define CAN_ERR_DLC 8 /* dlc for error message frames */ /* error class (mask) in can_id */ #define CAN_ERR_TX_TIMEOUT 0x00000001U /* TX timeout (by netdevice driver) */ diff --git a/include/linux/can/raw.h b/include/linux/can/raw.h index 781f3a3701be..a814062b0719 100644 --- a/include/linux/can/raw.h +++ b/include/linux/can/raw.h @@ -23,7 +23,8 @@ enum { CAN_RAW_FILTER = 1, /* set 0 .. n can_filter(s) */ CAN_RAW_ERR_FILTER, /* set filter for error frames */ CAN_RAW_LOOPBACK, /* local loopback (default:on) */ - CAN_RAW_RECV_OWN_MSGS /* receive my own msgs (default:off) */ + CAN_RAW_RECV_OWN_MSGS, /* receive my own msgs (default:off) */ + CAN_RAW_FD_FRAMES, /* allow CAN FD frames (default:off) */ }; #endif diff --git a/include/linux/cpu_rmap.h b/include/linux/cpu_rmap.h index 473771a528c0..ac3bbb5b9502 100644 --- a/include/linux/cpu_rmap.h +++ b/include/linux/cpu_rmap.h @@ -1,3 +1,6 @@ +#ifndef __LINUX_CPU_RMAP_H +#define __LINUX_CPU_RMAP_H + /* * cpu_rmap.c: CPU affinity reverse-map support * Copyright 2011 Solarflare Communications Inc. @@ -71,3 +74,4 @@ extern void free_irq_cpu_rmap(struct cpu_rmap *rmap); extern int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq); #endif +#endif /* __LINUX_CPU_RMAP_H */ diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h index 3d406e0ede6d..d426336d92d9 100644 --- a/include/linux/etherdevice.h +++ b/include/linux/etherdevice.h @@ -124,17 +124,30 @@ static inline bool is_valid_ether_addr(const u8 *addr) } /** - * random_ether_addr - Generate software assigned random Ethernet address + * eth_random_addr - Generate software assigned random Ethernet address * @addr: Pointer to a six-byte array containing the Ethernet address * * Generate a random Ethernet address (MAC) that is not multicast * and has the local assigned bit set. 
*/ -static inline void random_ether_addr(u8 *addr) +static inline void eth_random_addr(u8 *addr) { - get_random_bytes (addr, ETH_ALEN); - addr [0] &= 0xfe; /* clear multicast bit */ - addr [0] |= 0x02; /* set local assignment bit (IEEE802) */ + get_random_bytes(addr, ETH_ALEN); + addr[0] &= 0xfe; /* clear multicast bit */ + addr[0] |= 0x02; /* set local assignment bit (IEEE802) */ +} + +#define random_ether_addr(addr) eth_random_addr(addr) + +/** + * eth_broadcast_addr - Assign broadcast address + * @addr: Pointer to a six-byte array containing the Ethernet address + * + * Assign the broadcast address to the given address array. + */ +static inline void eth_broadcast_addr(u8 *addr) +{ + memset(addr, 0xff, ETH_ALEN); } /** @@ -149,7 +162,7 @@ static inline void random_ether_addr(u8 *addr) static inline void eth_hw_addr_random(struct net_device *dev) { dev->addr_assign_type |= NET_ADDR_RANDOM; - random_ether_addr(dev->dev_addr); + eth_random_addr(dev->dev_addr); } /** diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index e17fa7140588..21eff418091b 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -137,6 +137,35 @@ struct ethtool_eeprom { }; /** + * struct ethtool_eee - Energy Efficient Ethernet information + * @cmd: ETHTOOL_{G,S}EEE + * @supported: Mask of %SUPPORTED_* flags for the speed/duplex combinations + * for which there is EEE support. + * @advertised: Mask of %ADVERTISED_* flags for the speed/duplex combinations + * advertised as eee capable. + * @lp_advertised: Mask of %ADVERTISED_* flags for the speed/duplex + * combinations advertised by the link partner as eee capable. + * @eee_active: Result of the eee auto negotiation. + * @eee_enabled: EEE configured mode (enabled/disabled). + * @tx_lpi_enabled: Whether the interface should assert its tx lpi, given + * that eee was negotiated. + * @tx_lpi_timer: Time in microseconds the interface delays prior to asserting + * its tx lpi (after reaching 'idle' state). Effective only when eee + * was negotiated and tx_lpi_enabled was set. + */ +struct ethtool_eee { + __u32 cmd; + __u32 supported; + __u32 advertised; + __u32 lp_advertised; + __u32 eee_active; + __u32 eee_enabled; + __u32 tx_lpi_enabled; + __u32 tx_lpi_timer; + __u32 reserved[2]; +}; + +/** * struct ethtool_modinfo - plugin module eeprom information * @cmd: %ETHTOOL_GMODULEINFO * @type: Standard the module information conforms to %ETH_MODULE_SFF_xxxx @@ -945,6 +974,8 @@ static inline u32 ethtool_rxfh_indir_default(u32 index, u32 n_rx_rings) * @get_module_info: Get the size and type of the eeprom contained within * a plug-in module. * @get_module_eeprom: Get the eeprom information from the plug-in module + * @get_eee: Get Energy-Efficient (EEE) supported and status. + * @set_eee: Set EEE status (enable/disable) as well as LPI timers. * * All operations are optional (i.e. the function pointer may be set * to %NULL) and callers must take this into account. 
Callers must @@ -1011,6 +1042,8 @@ struct ethtool_ops { struct ethtool_modinfo *); int (*get_module_eeprom)(struct net_device *, struct ethtool_eeprom *, u8 *); + int (*get_eee)(struct net_device *, struct ethtool_eee *); + int (*set_eee)(struct net_device *, struct ethtool_eee *); }; @@ -1089,6 +1122,8 @@ struct ethtool_ops { #define ETHTOOL_GET_TS_INFO 0x00000041 /* Get time stamping and PHC info */ #define ETHTOOL_GMODULEINFO 0x00000042 /* Get plug-in module information */ #define ETHTOOL_GMODULEEEPROM 0x00000043 /* Get plug-in module eeprom */ +#define ETHTOOL_GEEE 0x00000044 /* Get EEE settings */ +#define ETHTOOL_SEEE 0x00000045 /* Set EEE settings */ /* compatibility with older code */ #define SPARC_ETH_GSET ETHTOOL_GSET @@ -1118,6 +1153,10 @@ struct ethtool_ops { #define SUPPORTED_10000baseR_FEC (1 << 20) #define SUPPORTED_20000baseMLD2_Full (1 << 21) #define SUPPORTED_20000baseKR2_Full (1 << 22) +#define SUPPORTED_40000baseKR4_Full (1 << 23) +#define SUPPORTED_40000baseCR4_Full (1 << 24) +#define SUPPORTED_40000baseSR4_Full (1 << 25) +#define SUPPORTED_40000baseLR4_Full (1 << 26) /* Indicates what features are advertised by the interface. */ #define ADVERTISED_10baseT_Half (1 << 0) @@ -1143,6 +1182,10 @@ struct ethtool_ops { #define ADVERTISED_10000baseR_FEC (1 << 20) #define ADVERTISED_20000baseMLD2_Full (1 << 21) #define ADVERTISED_20000baseKR2_Full (1 << 22) +#define ADVERTISED_40000baseKR4_Full (1 << 23) +#define ADVERTISED_40000baseCR4_Full (1 << 24) +#define ADVERTISED_40000baseSR4_Full (1 << 25) +#define ADVERTISED_40000baseLR4_Full (1 << 26) /* The following are all involved in forcing a particular link * mode for the device for setting things. When getting the diff --git a/include/linux/genetlink.h b/include/linux/genetlink.h index 7a114016ac7d..5ab61c1eb6bf 100644 --- a/include/linux/genetlink.h +++ b/include/linux/genetlink.h @@ -85,7 +85,7 @@ enum { /* All generic netlink requests are serialized by a global lock. 
*/ extern void genl_lock(void); extern void genl_unlock(void); -#ifdef CONFIG_PROVE_LOCKING +#ifdef CONFIG_LOCKDEP extern int lockdep_genl_is_held(void); #endif diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h index ce9af8918514..e02fc682bb68 100644 --- a/include/linux/ieee80211.h +++ b/include/linux/ieee80211.h @@ -47,6 +47,7 @@ #define IEEE80211_FCTL_MOREDATA 0x2000 #define IEEE80211_FCTL_PROTECTED 0x4000 #define IEEE80211_FCTL_ORDER 0x8000 +#define IEEE80211_FCTL_CTL_EXT 0x0f00 #define IEEE80211_SCTL_FRAG 0x000F #define IEEE80211_SCTL_SEQ 0xFFF0 @@ -54,6 +55,7 @@ #define IEEE80211_FTYPE_MGMT 0x0000 #define IEEE80211_FTYPE_CTL 0x0004 #define IEEE80211_FTYPE_DATA 0x0008 +#define IEEE80211_FTYPE_EXT 0x000c /* management */ #define IEEE80211_STYPE_ASSOC_REQ 0x0000 @@ -70,6 +72,7 @@ #define IEEE80211_STYPE_ACTION 0x00D0 /* control */ +#define IEEE80211_STYPE_CTL_EXT 0x0060 #define IEEE80211_STYPE_BACK_REQ 0x0080 #define IEEE80211_STYPE_BACK 0x0090 #define IEEE80211_STYPE_PSPOLL 0x00A0 @@ -97,6 +100,18 @@ #define IEEE80211_STYPE_QOS_CFPOLL 0x00E0 #define IEEE80211_STYPE_QOS_CFACKPOLL 0x00F0 +/* extension, added by 802.11ad */ +#define IEEE80211_STYPE_DMG_BEACON 0x0000 + +/* control extension - for IEEE80211_FTYPE_CTL | IEEE80211_STYPE_CTL_EXT */ +#define IEEE80211_CTL_EXT_POLL 0x2000 +#define IEEE80211_CTL_EXT_SPR 0x3000 +#define IEEE80211_CTL_EXT_GRANT 0x4000 +#define IEEE80211_CTL_EXT_DMG_CTS 0x5000 +#define IEEE80211_CTL_EXT_DMG_DTS 0x6000 +#define IEEE80211_CTL_EXT_SSW 0x8000 +#define IEEE80211_CTL_EXT_SSW_FBACK 0x9000 +#define IEEE80211_CTL_EXT_SSW_ACK 0xa000 /* miscellaneous IEEE 802.11 constants */ #define IEEE80211_MAX_FRAG_THRESHOLD 2352 @@ -568,6 +583,26 @@ struct ieee80211s_hdr { #define MESH_FLAGS_PS_DEEP 0x4 /** + * enum ieee80211_preq_flags - mesh PREQ element flags + * + * @IEEE80211_PREQ_PROACTIVE_PREP_FLAG: proactive PREP subfield + */ +enum ieee80211_preq_flags { + IEEE80211_PREQ_PROACTIVE_PREP_FLAG = 1<<2, +}; + +/** + * enum ieee80211_preq_target_flags - mesh PREQ element per target flags + * + * @IEEE80211_PREQ_TO_FLAG: target only subfield + * @IEEE80211_PREQ_USN_FLAG: unknown target HWMP sequence number subfield + */ +enum ieee80211_preq_target_flags { + IEEE80211_PREQ_TO_FLAG = 1<<0, + IEEE80211_PREQ_USN_FLAG = 1<<2, +}; + +/** * struct ieee80211_quiet_ie * * This structure refers to "Quiet information element" @@ -1072,6 +1107,73 @@ struct ieee80211_ht_operation { #define WLAN_HT_SMPS_CONTROL_STATIC 1 #define WLAN_HT_SMPS_CONTROL_DYNAMIC 3 +#define VHT_MCS_SUPPORTED_SET_SIZE 8 + +struct ieee80211_vht_capabilities { + __le32 vht_capabilities_info; + u8 vht_supported_mcs_set[VHT_MCS_SUPPORTED_SET_SIZE]; +} __packed; + +struct ieee80211_vht_operation { + u8 vht_op_info_chwidth; + u8 vht_op_info_chan_center_freq_seg1_idx; + u8 vht_op_info_chan_center_freq_seg2_idx; + __le16 vht_basic_mcs_set; +} __packed; + +/** + * struct ieee80211_vht_mcs_info - VHT MCS information + * @rx_mcs_map: RX MCS map 2 bits for each stream, total 8 streams + * @rx_highest: Indicates highest long GI VHT PPDU data rate + * STA can receive. Rate expressed in units of 1 Mbps. + * If this field is 0 this value should not be used to + * consider the highest RX data rate supported. + * @tx_mcs_map: TX MCS map 2 bits for each stream, total 8 streams + * @tx_highest: Indicates highest long GI VHT PPDU data rate + * STA can transmit. Rate expressed in units of 1 Mbps. + * If this field is 0 this value should not be used to + * consider the highest TX data rate supported. 
+ */ +struct ieee80211_vht_mcs_info { + __le16 rx_mcs_map; + __le16 rx_highest; + __le16 tx_mcs_map; + __le16 tx_highest; +} __packed; + +#define IEEE80211_VHT_MCS_ZERO_TO_SEVEN_SUPPORT 0 +#define IEEE80211_VHT_MCS_ZERO_TO_EIGHT_SUPPORT 1 +#define IEEE80211_VHT_MCS_ZERO_TO_NINE_SUPPORT 2 +#define IEEE80211_VHT_MCS_NOT_SUPPORTED 3 + +/* 802.11ac VHT Capabilities */ +#define IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 0x00000000 +#define IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 0x00000001 +#define IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 0x00000002 +#define IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ 0x00000004 +#define IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ 0x00000008 +#define IEEE80211_VHT_CAP_RXLDPC 0x00000010 +#define IEEE80211_VHT_CAP_SHORT_GI_80 0x00000020 +#define IEEE80211_VHT_CAP_SHORT_GI_160 0x00000040 +#define IEEE80211_VHT_CAP_TXSTBC 0x00000080 +#define IEEE80211_VHT_CAP_RXSTBC_1 0x00000100 +#define IEEE80211_VHT_CAP_RXSTBC_2 0x00000200 +#define IEEE80211_VHT_CAP_RXSTBC_3 0x00000300 +#define IEEE80211_VHT_CAP_RXSTBC_4 0x00000400 +#define IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE 0x00000800 +#define IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE 0x00001000 +#define IEEE80211_VHT_CAP_BEAMFORMER_ANTENNAS_MAX 0x00006000 +#define IEEE80211_VHT_CAP_SOUNDING_DIMENTION_MAX 0x00030000 +#define IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE 0x00080000 +#define IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE 0x00100000 +#define IEEE80211_VHT_CAP_VHT_TXOP_PS 0x00200000 +#define IEEE80211_VHT_CAP_HTC_VHT 0x00400000 +#define IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT 0x00800000 +#define IEEE80211_VHT_CAP_VHT_LINK_ADAPTATION_VHT_UNSOL_MFB 0x08000000 +#define IEEE80211_VHT_CAP_VHT_LINK_ADAPTATION_VHT_MRQ_MFB 0x0c000000 +#define IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN 0x10000000 +#define IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN 0x20000000 + /* Authentication algorithms */ #define WLAN_AUTH_OPEN 0 #define WLAN_AUTH_SHARED_KEY 1 @@ -1104,6 +1206,21 @@ struct ieee80211_ht_operation { #define WLAN_CAPABILITY_QOS (1<<9) #define WLAN_CAPABILITY_SHORT_SLOT_TIME (1<<10) #define WLAN_CAPABILITY_DSSS_OFDM (1<<13) + +/* DMG (60gHz) 802.11ad */ +/* type - bits 0..1 */ +#define WLAN_CAPABILITY_DMG_TYPE_IBSS (1<<0) /* Tx by: STA */ +#define WLAN_CAPABILITY_DMG_TYPE_PBSS (2<<0) /* Tx by: PCP */ +#define WLAN_CAPABILITY_DMG_TYPE_AP (3<<0) /* Tx by: AP */ + +#define WLAN_CAPABILITY_DMG_CBAP_ONLY (1<<2) +#define WLAN_CAPABILITY_DMG_CBAP_SOURCE (1<<3) +#define WLAN_CAPABILITY_DMG_PRIVACY (1<<4) +#define WLAN_CAPABILITY_DMG_ECPAC (1<<5) + +#define WLAN_CAPABILITY_DMG_SPECTRUM_MGMT (1<<8) +#define WLAN_CAPABILITY_DMG_RADIO_MEASURE (1<<12) + /* measurement */ #define IEEE80211_SPCT_MSR_RPRT_MODE_LATE (1<<0) #define IEEE80211_SPCT_MSR_RPRT_MODE_INCAPABLE (1<<1) @@ -1113,7 +1230,6 @@ struct ieee80211_ht_operation { #define IEEE80211_SPCT_MSR_RPRT_TYPE_CCA 1 #define IEEE80211_SPCT_MSR_RPRT_TYPE_RPI 2 - /* 802.11g ERP information element */ #define WLAN_ERP_NON_ERP_PRESENT (1<<0) #define WLAN_ERP_USE_PROTECTION (1<<1) @@ -1125,6 +1241,16 @@ enum { WLAN_ERP_PREAMBLE_LONG = 1, }; +/* Band ID, 802.11ad #8.4.1.45 */ +enum { + IEEE80211_BANDID_TV_WS = 0, /* TV white spaces */ + IEEE80211_BANDID_SUB1 = 1, /* Sub-1 GHz (excluding TV white spaces) */ + IEEE80211_BANDID_2G = 2, /* 2.4 GHz */ + IEEE80211_BANDID_3G = 3, /* 3.6 GHz */ + IEEE80211_BANDID_5G = 4, /* 4.9 and 5 GHz */ + IEEE80211_BANDID_60G = 5, /* 60 GHz */ +}; + /* Status codes */ enum ieee80211_statuscode { WLAN_STATUS_SUCCESS = 0, @@ -1176,6 +1302,17 @@ enum ieee80211_statuscode { 
WLAN_STATUS_ANTI_CLOG_REQUIRED = 76, WLAN_STATUS_FCG_NOT_SUPP = 78, WLAN_STATUS_STA_NO_TBTT = 78, + /* 802.11ad */ + WLAN_STATUS_REJECTED_WITH_SUGGESTED_CHANGES = 39, + WLAN_STATUS_REJECTED_FOR_DELAY_PERIOD = 47, + WLAN_STATUS_REJECT_WITH_SCHEDULE = 83, + WLAN_STATUS_PENDING_ADMITTING_FST_SESSION = 86, + WLAN_STATUS_PERFORMING_FST_NOW = 87, + WLAN_STATUS_PENDING_GAP_IN_BA_WINDOW = 88, + WLAN_STATUS_REJECT_U_PID_SETTING = 89, + WLAN_STATUS_REJECT_DSE_BAND = 96, + WLAN_STATUS_DENIED_WITH_SUGGESTED_BAND_AND_CHANNEL = 99, + WLAN_STATUS_DENIED_DUE_TO_SPECTRUM_MANAGEMENT = 103, }; @@ -1332,6 +1469,43 @@ enum ieee80211_eid { WLAN_EID_DSE_REGISTERED_LOCATION = 58, WLAN_EID_SUPPORTED_REGULATORY_CLASSES = 59, WLAN_EID_EXT_CHANSWITCH_ANN = 60, + + WLAN_EID_VHT_CAPABILITY = 191, + WLAN_EID_VHT_OPERATION = 192, + + /* 802.11ad */ + WLAN_EID_NON_TX_BSSID_CAP = 83, + WLAN_EID_WAKEUP_SCHEDULE = 143, + WLAN_EID_EXT_SCHEDULE = 144, + WLAN_EID_STA_AVAILABILITY = 145, + WLAN_EID_DMG_TSPEC = 146, + WLAN_EID_DMG_AT = 147, + WLAN_EID_DMG_CAP = 148, + WLAN_EID_DMG_OPERATION = 151, + WLAN_EID_DMG_BSS_PARAM_CHANGE = 152, + WLAN_EID_DMG_BEAM_REFINEMENT = 153, + WLAN_EID_CHANNEL_MEASURE_FEEDBACK = 154, + WLAN_EID_AWAKE_WINDOW = 157, + WLAN_EID_MULTI_BAND = 158, + WLAN_EID_ADDBA_EXT = 159, + WLAN_EID_NEXT_PCP_LIST = 160, + WLAN_EID_PCP_HANDOVER = 161, + WLAN_EID_DMG_LINK_MARGIN = 162, + WLAN_EID_SWITCHING_STREAM = 163, + WLAN_EID_SESSION_TRANSITION = 164, + WLAN_EID_DYN_TONE_PAIRING_REPORT = 165, + WLAN_EID_CLUSTER_REPORT = 166, + WLAN_EID_RELAY_CAP = 167, + WLAN_EID_RELAY_XFER_PARAM_SET = 168, + WLAN_EID_BEAM_LINK_MAINT = 169, + WLAN_EID_MULTIPLE_MAC_ADDR = 170, + WLAN_EID_U_PID = 171, + WLAN_EID_DMG_LINK_ADAPT_ACK = 172, + WLAN_EID_QUIET_PERIOD_REQ = 175, + WLAN_EID_QUIET_PERIOD_RESP = 177, + WLAN_EID_EPAC_POLICY = 182, + WLAN_EID_CLISTER_TIME_OFF = 183, + WLAN_EID_ANTENNA_SECTOR_ID_PATTERN = 190, }; /* Action category code */ @@ -1348,7 +1522,10 @@ enum ieee80211_category { WLAN_CATEGORY_MESH_ACTION = 13, WLAN_CATEGORY_MULTIHOP_ACTION = 14, WLAN_CATEGORY_SELF_PROTECTED = 15, + WLAN_CATEGORY_DMG = 16, WLAN_CATEGORY_WMM = 17, + WLAN_CATEGORY_FST = 18, + WLAN_CATEGORY_UNPROT_DMG = 20, WLAN_CATEGORY_VENDOR_SPECIFIC_PROTECTED = 126, WLAN_CATEGORY_VENDOR_SPECIFIC = 127, }; @@ -1443,7 +1620,7 @@ enum ieee80211_tdls_actioncode { * * @IEEE80211_SYNC_METHOD_NEIGHBOR_OFFSET: the default synchronization method * @IEEE80211_SYNC_METHOD_VENDOR: a vendor specific synchronization method - * that will be specified in a vendor specific information element + * that will be specified in a vendor specific information element */ enum { IEEE80211_SYNC_METHOD_NEIGHBOR_OFFSET = 1, @@ -1455,7 +1632,7 @@ enum { * * @IEEE80211_PATH_PROTOCOL_HWMP: the default path selection protocol * @IEEE80211_PATH_PROTOCOL_VENDOR: a vendor specific protocol that will - * be specified in a vendor specific information element + * be specified in a vendor specific information element */ enum { IEEE80211_PATH_PROTOCOL_HWMP = 1, @@ -1467,13 +1644,35 @@ enum { * * @IEEE80211_PATH_METRIC_AIRTIME: the default path selection metric * @IEEE80211_PATH_METRIC_VENDOR: a vendor specific metric that will be - * specified in a vendor specific information element + * specified in a vendor specific information element */ enum { IEEE80211_PATH_METRIC_AIRTIME = 1, IEEE80211_PATH_METRIC_VENDOR = 255, }; +/** + * enum ieee80211_root_mode_identifier - root mesh STA mode identifier + * + * These attribute are used by dot11MeshHWMPRootMode to set root mesh STA mode + * + * 
@IEEE80211_ROOTMODE_NO_ROOT: the mesh STA is not a root mesh STA (default) + * @IEEE80211_ROOTMODE_ROOT: the mesh STA is a root mesh STA if greater than + * this value + * @IEEE80211_PROACTIVE_PREQ_NO_PREP: the mesh STA is a root mesh STA supports + * the proactive PREQ with proactive PREP subfield set to 0 + * @IEEE80211_PROACTIVE_PREQ_WITH_PREP: the mesh STA is a root mesh STA + * supports the proactive PREQ with proactive PREP subfield set to 1 + * @IEEE80211_PROACTIVE_RANN: the mesh STA is a root mesh STA supports + * the proactive RANN + */ +enum ieee80211_root_mode_identifier { + IEEE80211_ROOTMODE_NO_ROOT = 0, + IEEE80211_ROOTMODE_ROOT = 1, + IEEE80211_PROACTIVE_PREQ_NO_PREP = 2, + IEEE80211_PROACTIVE_PREQ_WITH_PREP = 3, + IEEE80211_PROACTIVE_RANN = 4, +}; /* * IEEE 802.11-2007 7.3.2.9 Country information element @@ -1574,6 +1773,7 @@ enum ieee80211_sa_query_action { #define WLAN_CIPHER_SUITE_CCMP 0x000FAC04 #define WLAN_CIPHER_SUITE_WEP104 0x000FAC05 #define WLAN_CIPHER_SUITE_AES_CMAC 0x000FAC06 +#define WLAN_CIPHER_SUITE_GCMP 0x000FAC08 #define WLAN_CIPHER_SUITE_SMS4 0x00147201 @@ -1589,6 +1789,10 @@ enum ieee80211_sa_query_action { #define WLAN_OUI_WFA 0x506f9a #define WLAN_OUI_TYPE_WFA_P2P 9 +#define WLAN_OUI_MICROSOFT 0x0050f2 +#define WLAN_OUI_TYPE_MICROSOFT_WPA 1 +#define WLAN_OUI_TYPE_MICROSOFT_WMM 2 +#define WLAN_OUI_TYPE_MICROSOFT_WPS 4 /* * WMM/802.11e Tspec Element diff --git a/include/linux/if.h b/include/linux/if.h index f995c663c493..1ec407b01e46 100644 --- a/include/linux/if.h +++ b/include/linux/if.h @@ -81,6 +81,8 @@ #define IFF_UNICAST_FLT 0x20000 /* Supports unicast filtering */ #define IFF_TEAM_PORT 0x40000 /* device used as team port */ #define IFF_SUPP_NOFCS 0x80000 /* device supports sending custom FCS */ +#define IFF_LIVE_ADDR_CHANGE 0x100000 /* device supports hardware address + * change when it's running */ #define IF_GET_IFACE 0x0001 /* for querying only */ diff --git a/include/linux/if_ether.h b/include/linux/if_ether.h index 56d907a2c804..167ce5b363d2 100644 --- a/include/linux/if_ether.h +++ b/include/linux/if_ether.h @@ -105,7 +105,8 @@ #define ETH_P_WAN_PPP 0x0007 /* Dummy type for WAN PPP frames*/ #define ETH_P_PPP_MP 0x0008 /* Dummy type for PPP MP frames */ #define ETH_P_LOCALTALK 0x0009 /* Localtalk pseudo type */ -#define ETH_P_CAN 0x000C /* Controller Area Network */ +#define ETH_P_CAN 0x000C /* CAN: Controller Area Network */ +#define ETH_P_CANFD 0x000D /* CANFD: CAN flexible data rate*/ #define ETH_P_PPPTALK 0x0010 /* Dummy type for Atalk over PPP*/ #define ETH_P_TR_802_2 0x0011 /* 802.2 frames */ #define ETH_P_MOBITEX 0x0015 /* Mobitex (kaz@cafe.net) */ diff --git a/include/linux/if_link.h b/include/linux/if_link.h index f715750d0b87..ac173bd2ab65 100644 --- a/include/linux/if_link.h +++ b/include/linux/if_link.h @@ -140,6 +140,8 @@ enum { IFLA_EXT_MASK, /* Extended info mask, VFs, etc */ IFLA_PROMISCUITY, /* Promiscuity count: > 0 means acts PROMISC */ #define IFLA_PROMISCUITY IFLA_PROMISCUITY + IFLA_NUM_TX_QUEUES, + IFLA_NUM_RX_QUEUES, __IFLA_MAX }; diff --git a/include/linux/if_team.h b/include/linux/if_team.h index 8185f57a9c7f..6960fc1841a7 100644 --- a/include/linux/if_team.h +++ b/include/linux/if_team.h @@ -13,6 +13,9 @@ #ifdef __KERNEL__ +#include <linux/netpoll.h> +#include <net/sch_generic.h> + struct team_pcpu_stats { u64 rx_packets; u64 rx_bytes; @@ -60,9 +63,54 @@ struct team_port { unsigned int mtu; } orig; - struct rcu_head rcu; +#ifdef CONFIG_NET_POLL_CONTROLLER + struct netpoll *np; +#endif + + long mode_priv[0]; }; +static 
inline bool team_port_enabled(struct team_port *port) +{ + return port->index != -1; +} + +static inline bool team_port_txable(struct team_port *port) +{ + return port->linkup && team_port_enabled(port); +} + +#ifdef CONFIG_NET_POLL_CONTROLLER +static inline void team_netpoll_send_skb(struct team_port *port, + struct sk_buff *skb) +{ + struct netpoll *np = port->np; + + if (np) + netpoll_send_skb(np, skb); +} +#else +static inline void team_netpoll_send_skb(struct team_port *port, + struct sk_buff *skb) +{ +} +#endif + +static inline int team_dev_queue_xmit(struct team *team, struct team_port *port, + struct sk_buff *skb) +{ + BUILD_BUG_ON(sizeof(skb->queue_mapping) != + sizeof(qdisc_skb_cb(skb)->slave_dev_queue_mapping)); + skb_set_queue_mapping(skb, qdisc_skb_cb(skb)->slave_dev_queue_mapping); + + skb->dev = port->dev; + if (unlikely(netpoll_tx_running(port->dev))) { + team_netpoll_send_skb(port, skb); + return 0; + } + return dev_queue_xmit(skb); +} + struct team_mode_ops { int (*init)(struct team *team); void (*exit)(struct team *team); @@ -73,6 +121,8 @@ struct team_mode_ops { int (*port_enter)(struct team *team, struct team_port *port); void (*port_leave)(struct team *team, struct team_port *port); void (*port_change_mac)(struct team *team, struct team_port *port); + void (*port_enabled)(struct team *team, struct team_port *port); + void (*port_disabled)(struct team *team, struct team_port *port); }; enum team_option_type { @@ -82,6 +132,11 @@ enum team_option_type { TEAM_OPTION_TYPE_BOOL, }; +struct team_option_inst_info { + u32 array_index; + struct team_port *port; /* != NULL if per-port */ +}; + struct team_gsetter_ctx { union { u32 u32_val; @@ -92,23 +147,28 @@ struct team_gsetter_ctx { } bin_val; bool bool_val; } data; - struct team_port *port; + struct team_option_inst_info *info; }; struct team_option { struct list_head list; const char *name; bool per_port; + unsigned int array_size; /* != 0 means the option is array */ enum team_option_type type; + int (*init)(struct team *team, struct team_option_inst_info *info); int (*getter)(struct team *team, struct team_gsetter_ctx *ctx); int (*setter)(struct team *team, struct team_gsetter_ctx *ctx); }; +extern void team_option_inst_set_change(struct team_option_inst_info *opt_inst_info); +extern void team_options_change_check(struct team *team); + struct team_mode { - struct list_head list; const char *kind; struct module *owner; size_t priv_size; + size_t port_priv_size; const struct team_mode_ops *ops; }; @@ -178,8 +238,11 @@ extern int team_options_register(struct team *team, extern void team_options_unregister(struct team *team, const struct team_option *option, size_t option_count); -extern int team_mode_register(struct team_mode *mode); -extern int team_mode_unregister(struct team_mode *mode); +extern int team_mode_register(const struct team_mode *mode); +extern void team_mode_unregister(const struct team_mode *mode); + +#define TEAM_DEFAULT_NUM_TX_QUEUES 16 +#define TEAM_DEFAULT_NUM_RX_QUEUES 16 #endif /* __KERNEL__ */ @@ -241,6 +304,7 @@ enum { TEAM_ATTR_OPTION_DATA, /* dynamic */ TEAM_ATTR_OPTION_REMOVED, /* flag */ TEAM_ATTR_OPTION_PORT_IFINDEX, /* u32 */ /* for per-port options */ + TEAM_ATTR_OPTION_ARRAY_INDEX, /* u32 */ /* for array options */ __TEAM_ATTR_OPTION_MAX, TEAM_ATTR_OPTION_MAX = __TEAM_ATTR_OPTION_MAX - 1, diff --git a/include/linux/if_tunnel.h b/include/linux/if_tunnel.h index 16b92d008bed..5efff60b6f56 100644 --- a/include/linux/if_tunnel.h +++ b/include/linux/if_tunnel.h @@ -80,4 +80,18 @@ enum { #define 
IFLA_GRE_MAX (__IFLA_GRE_MAX - 1) +/* VTI-mode i_flags */ +#define VTI_ISVTI 0x0001 + +enum { + IFLA_VTI_UNSPEC, + IFLA_VTI_LINK, + IFLA_VTI_IKEY, + IFLA_VTI_OKEY, + IFLA_VTI_LOCAL, + IFLA_VTI_REMOTE, + __IFLA_VTI_MAX, +}; + +#define IFLA_VTI_MAX (__IFLA_VTI_MAX - 1) #endif /* _IF_TUNNEL_H_ */ diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h index 597f4a9f3240..67f9ddacb70c 100644 --- a/include/linux/inetdevice.h +++ b/include/linux/inetdevice.h @@ -38,6 +38,7 @@ enum IPV4_DEVCONF_ACCEPT_LOCAL, IPV4_DEVCONF_SRC_VMARK, IPV4_DEVCONF_PROXY_ARP_PVLAN, + IPV4_DEVCONF_ROUTE_LOCALNET, __IPV4_DEVCONF_MAX }; @@ -131,6 +132,7 @@ static inline void ipv4_devconf_setall(struct in_device *in_dev) #define IN_DEV_PROMOTE_SECONDARIES(in_dev) \ IN_DEV_ORCONF((in_dev), \ PROMOTE_SECONDARIES) +#define IN_DEV_ROUTE_LOCALNET(in_dev) IN_DEV_ORCONF(in_dev, ROUTE_LOCALNET) #define IN_DEV_RX_REDIRECTS(in_dev) \ ((IN_DEV_FORWARD(in_dev) && \ diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h index 8260ef779762..379e433e15e0 100644 --- a/include/linux/ipv6.h +++ b/include/linux/ipv6.h @@ -299,9 +299,9 @@ struct ipv6_pinfo { struct in6_addr rcv_saddr; struct in6_addr daddr; struct in6_pktinfo sticky_pktinfo; - struct in6_addr *daddr_cache; + const struct in6_addr *daddr_cache; #ifdef CONFIG_IPV6_SUBTREES - struct in6_addr *saddr_cache; + const struct in6_addr *saddr_cache; #endif __be32 flow_label; @@ -410,6 +410,22 @@ struct tcp6_sock { extern int inet6_sk_rebuild_header(struct sock *sk); +struct inet6_timewait_sock { + struct in6_addr tw_v6_daddr; + struct in6_addr tw_v6_rcv_saddr; +}; + +struct tcp6_timewait_sock { + struct tcp_timewait_sock tcp6tw_tcp; + struct inet6_timewait_sock tcp6tw_inet6; +}; + +static inline struct inet6_timewait_sock *inet6_twsk(const struct sock *sk) +{ + return (struct inet6_timewait_sock *)(((u8 *)sk) + + inet_twsk(sk)->tw_ipv6_offset); +} + #if IS_ENABLED(CONFIG_IPV6) static inline struct ipv6_pinfo * inet6_sk(const struct sock *__sk) { @@ -459,28 +475,12 @@ static inline void inet_sk_copy_descendant(struct sock *sk_to, #define __ipv6_only_sock(sk) (inet6_sk(sk)->ipv6only) #define ipv6_only_sock(sk) ((sk)->sk_family == PF_INET6 && __ipv6_only_sock(sk)) -struct inet6_timewait_sock { - struct in6_addr tw_v6_daddr; - struct in6_addr tw_v6_rcv_saddr; -}; - -struct tcp6_timewait_sock { - struct tcp_timewait_sock tcp6tw_tcp; - struct inet6_timewait_sock tcp6tw_inet6; -}; - static inline u16 inet6_tw_offset(const struct proto *prot) { return prot->twsk_prot->twsk_obj_size - sizeof(struct inet6_timewait_sock); } -static inline struct inet6_timewait_sock *inet6_twsk(const struct sock *sk) -{ - return (struct inet6_timewait_sock *)(((u8 *)sk) + - inet_twsk(sk)->tw_ipv6_offset); -} - static inline struct in6_addr *__inet6_rcv_saddr(const struct sock *sk) { return likely(sk->sk_state != TCP_TIME_WAIT) ? diff --git a/include/linux/ks8851_mll.h b/include/linux/ks8851_mll.h new file mode 100644 index 000000000000..e9ccfb59ed30 --- /dev/null +++ b/include/linux/ks8851_mll.h @@ -0,0 +1,33 @@ +/* + * ks8861_mll platform data struct definition + * Copyright (c) 2012 BTicino S.p.A. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +#ifndef _LINUX_KS8851_MLL_H +#define _LINUX_KS8851_MLL_H + +#include <linux/if_ether.h> + +/** + * struct ks8851_mll_platform_data - Platform data of the KS8851_MLL network driver + * @macaddr: The MAC address of the device, set to all 0:s to use the on in + * the chip. + */ +struct ks8851_mll_platform_data { + u8 mac_addr[ETH_ALEN]; +}; + +#endif diff --git a/include/linux/mdio.h b/include/linux/mdio.h index dfb947959ec9..7cccafe50e7b 100644 --- a/include/linux/mdio.h +++ b/include/linux/mdio.h @@ -43,7 +43,11 @@ #define MDIO_PKGID2 15 #define MDIO_AN_ADVERTISE 16 /* AN advertising (base page) */ #define MDIO_AN_LPA 19 /* AN LP abilities (base page) */ +#define MDIO_PCS_EEE_ABLE 20 /* EEE Capability register */ +#define MDIO_PCS_EEE_WK_ERR 22 /* EEE wake error counter */ #define MDIO_PHYXS_LNSTAT 24 /* PHY XGXS lane state */ +#define MDIO_AN_EEE_ADV 60 /* EEE advertisement */ +#define MDIO_AN_EEE_LPABLE 61 /* EEE link partner ability */ /* Media-dependent registers. */ #define MDIO_PMA_10GBT_SWAPPOL 130 /* 10GBASE-T pair swap & polarity */ @@ -56,7 +60,6 @@ #define MDIO_PCS_10GBRT_STAT2 33 /* 10GBASE-R/-T PCS status 2 */ #define MDIO_AN_10GBT_CTRL 32 /* 10GBASE-T auto-negotiation control */ #define MDIO_AN_10GBT_STAT 33 /* 10GBASE-T auto-negotiation status */ -#define MDIO_AN_EEE_ADV 60 /* EEE advertisement */ /* LASI (Link Alarm Status Interrupt) registers, defined by XENPAK MSA. */ #define MDIO_PMA_LASI_RXCTRL 0x9000 /* RX_ALARM control */ @@ -82,6 +85,7 @@ #define MDIO_AN_CTRL1_RESTART BMCR_ANRESTART #define MDIO_AN_CTRL1_ENABLE BMCR_ANENABLE #define MDIO_AN_CTRL1_XNP 0x2000 /* Enable extended next page */ +#define MDIO_PCS_CTRL1_CLKSTOP_EN 0x400 /* Stop the clock during LPI */ /* 10 Gb/s */ #define MDIO_CTRL1_SPEED10G (MDIO_CTRL1_SPEEDSELEXT | 0x00) @@ -237,9 +241,25 @@ #define MDIO_AN_10GBT_STAT_MS 0x4000 /* Master/slave config */ #define MDIO_AN_10GBT_STAT_MSFLT 0x8000 /* Master/slave config fault */ -/* AN EEE Advertisement register. */ -#define MDIO_AN_EEE_ADV_100TX 0x0002 /* Advertise 100TX EEE cap */ -#define MDIO_AN_EEE_ADV_1000T 0x0004 /* Advertise 1000T EEE cap */ +/* EEE Supported/Advertisement/LP Advertisement registers. + * + * EEE capability Register (3.20), Advertisement (7.60) and + * Link partner ability (7.61) registers have and can use the same identical + * bit masks. + */ +#define MDIO_AN_EEE_ADV_100TX 0x0002 /* Advertise 100TX EEE cap */ +#define MDIO_AN_EEE_ADV_1000T 0x0004 /* Advertise 1000T EEE cap */ +/* Note: the two defines above can be potentially used by the user-land + * and cannot remove them now. + * So, we define the new generic MDIO_EEE_100TX and MDIO_EEE_1000T macros + * using the previous ones (that can be considered obsolete). + */ +#define MDIO_EEE_100TX MDIO_AN_EEE_ADV_100TX /* 100TX EEE cap */ +#define MDIO_EEE_1000T MDIO_AN_EEE_ADV_1000T /* 1000T EEE cap */ +#define MDIO_EEE_10GT 0x0008 /* 10GT EEE cap */ +#define MDIO_EEE_1000KX 0x0010 /* 1000KX EEE cap */ +#define MDIO_EEE_10GKX4 0x0020 /* 10G KX4 EEE cap */ +#define MDIO_EEE_10GKR 0x0040 /* 10G KR EEE cap */ /* LASI RX_ALARM control/status registers. 
*/ #define MDIO_PMA_LASI_RX_PHYXSLFLT 0x0001 /* PHY XS RX local fault */ diff --git a/include/linux/mii.h b/include/linux/mii.h index 2783eca629a0..8ef3a7a11592 100644 --- a/include/linux/mii.h +++ b/include/linux/mii.h @@ -21,6 +21,8 @@ #define MII_EXPANSION 0x06 /* Expansion register */ #define MII_CTRL1000 0x09 /* 1000BASE-T control */ #define MII_STAT1000 0x0a /* 1000BASE-T status */ +#define MII_MMD_CTRL 0x0d /* MMD Access Control Register */ +#define MII_MMD_DATA 0x0e /* MMD Access Data Register */ #define MII_ESTATUS 0x0f /* Extended Status */ #define MII_DCOUNTER 0x12 /* Disconnect counter */ #define MII_FCSCOUNTER 0x13 /* False carrier counter */ @@ -141,6 +143,13 @@ #define FLOW_CTRL_TX 0x01 #define FLOW_CTRL_RX 0x02 +/* MMD Access Control register fields */ +#define MII_MMD_CTRL_DEVAD_MASK 0x1f /* Mask MMD DEVAD*/ +#define MII_MMD_CTRL_ADDR 0x0000 /* Address */ +#define MII_MMD_CTRL_NOINCR 0x4000 /* no post increment */ +#define MII_MMD_CTRL_INCR_RDWT 0x8000 /* post increment on reads & writes */ +#define MII_MMD_CTRL_INCR_ON_WT 0xC000 /* post increment on writes only */ + /* This structure is used in all SIOCxMIIxxx ioctl calls */ struct mii_ioctl_data { __u16 phy_id; diff --git a/include/linux/mlx4/cmd.h b/include/linux/mlx4/cmd.h index 1f3860a8a109..260695186256 100644 --- a/include/linux/mlx4/cmd.h +++ b/include/linux/mlx4/cmd.h @@ -154,6 +154,10 @@ enum { /* set port opcode modifiers */ MLX4_SET_PORT_PRIO2TC = 0x8, MLX4_SET_PORT_SCHEDULER = 0x9, + + /* register/delete flow steering network rules */ + MLX4_QP_FLOW_STEERING_ATTACH = 0x65, + MLX4_QP_FLOW_STEERING_DETACH = 0x66, }; enum { diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index 6a8f002b8ed3..4d7761f8c3f6 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -36,6 +36,7 @@ #include <linux/pci.h> #include <linux/completion.h> #include <linux/radix-tree.h> +#include <linux/cpu_rmap.h> #include <linux/atomic.h> @@ -70,6 +71,36 @@ enum { MLX4_MFUNC_EQE_MASK = (MLX4_MFUNC_MAX_EQES - 1) }; +/* Driver supports 3 diffrent device methods to manage traffic steering: + * -device managed - High level API for ib and eth flow steering. FW is + * managing flow steering tables. + * - B0 steering mode - Common low level API for ib and (if supported) eth. + * - A0 steering mode - Limited low level API for eth. In case of IB, + * B0 mode is in use. 
+ */ +enum { + MLX4_STEERING_MODE_A0, + MLX4_STEERING_MODE_B0, + MLX4_STEERING_MODE_DEVICE_MANAGED +}; + +static inline const char *mlx4_steering_mode_str(int steering_mode) +{ + switch (steering_mode) { + case MLX4_STEERING_MODE_A0: + return "A0 steering"; + + case MLX4_STEERING_MODE_B0: + return "B0 steering"; + + case MLX4_STEERING_MODE_DEVICE_MANAGED: + return "Device managed flow steering"; + + default: + return "Unrecognize steering mode"; + } +} + enum { MLX4_DEV_CAP_FLAG_RC = 1LL << 0, MLX4_DEV_CAP_FLAG_UC = 1LL << 1, @@ -102,7 +133,8 @@ enum { enum { MLX4_DEV_CAP_FLAG2_RSS = 1LL << 0, MLX4_DEV_CAP_FLAG2_RSS_TOP = 1LL << 1, - MLX4_DEV_CAP_FLAG2_RSS_XOR = 1LL << 2 + MLX4_DEV_CAP_FLAG2_RSS_XOR = 1LL << 2, + MLX4_DEV_CAP_FLAG2_FS_EN = 1LL << 3 }; #define MLX4_ATTR_EXTENDED_PORT_INFO cpu_to_be16(0xff90) @@ -295,6 +327,8 @@ struct mlx4_caps { int num_amgms; int reserved_mcgs; int num_qp_per_mgm; + int steering_mode; + int fs_log_max_ucast_qp_range_size; int num_pds; int reserved_pds; int max_xrcds; @@ -509,6 +543,8 @@ struct mlx4_dev { u8 rev_id; char board_id[MLX4_BOARD_ID_LEN]; int num_vfs; + u64 regid_promisc_array[MLX4_MAX_PORTS + 1]; + u64 regid_allmulti_array[MLX4_MAX_PORTS + 1]; }; struct mlx4_init_port_param { @@ -623,9 +659,99 @@ int mlx4_unicast_attach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], int mlx4_unicast_detach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], enum mlx4_protocol prot); int mlx4_multicast_attach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], - int block_mcast_loopback, enum mlx4_protocol protocol); + u8 port, int block_mcast_loopback, + enum mlx4_protocol protocol, u64 *reg_id); int mlx4_multicast_detach(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], - enum mlx4_protocol protocol); + enum mlx4_protocol protocol, u64 reg_id); + +enum { + MLX4_DOMAIN_UVERBS = 0x1000, + MLX4_DOMAIN_ETHTOOL = 0x2000, + MLX4_DOMAIN_RFS = 0x3000, + MLX4_DOMAIN_NIC = 0x5000, +}; + +enum mlx4_net_trans_rule_id { + MLX4_NET_TRANS_RULE_ID_ETH = 0, + MLX4_NET_TRANS_RULE_ID_IB, + MLX4_NET_TRANS_RULE_ID_IPV6, + MLX4_NET_TRANS_RULE_ID_IPV4, + MLX4_NET_TRANS_RULE_ID_TCP, + MLX4_NET_TRANS_RULE_ID_UDP, + MLX4_NET_TRANS_RULE_NUM, /* should be last */ +}; + +enum mlx4_net_trans_promisc_mode { + MLX4_FS_PROMISC_NONE = 0, + MLX4_FS_PROMISC_UPLINK, + /* For future use. 
Not implemented yet */ + MLX4_FS_PROMISC_FUNCTION_PORT, + MLX4_FS_PROMISC_ALL_MULTI, +}; + +struct mlx4_spec_eth { + u8 dst_mac[6]; + u8 dst_mac_msk[6]; + u8 src_mac[6]; + u8 src_mac_msk[6]; + u8 ether_type_enable; + __be16 ether_type; + __be16 vlan_id_msk; + __be16 vlan_id; +}; + +struct mlx4_spec_tcp_udp { + __be16 dst_port; + __be16 dst_port_msk; + __be16 src_port; + __be16 src_port_msk; +}; + +struct mlx4_spec_ipv4 { + __be32 dst_ip; + __be32 dst_ip_msk; + __be32 src_ip; + __be32 src_ip_msk; +}; + +struct mlx4_spec_ib { + __be32 r_qpn; + __be32 qpn_msk; + u8 dst_gid[16]; + u8 dst_gid_msk[16]; +}; + +struct mlx4_spec_list { + struct list_head list; + enum mlx4_net_trans_rule_id id; + union { + struct mlx4_spec_eth eth; + struct mlx4_spec_ib ib; + struct mlx4_spec_ipv4 ipv4; + struct mlx4_spec_tcp_udp tcp_udp; + }; +}; + +enum mlx4_net_trans_hw_rule_queue { + MLX4_NET_TRANS_Q_FIFO, + MLX4_NET_TRANS_Q_LIFO, +}; + +struct mlx4_net_trans_rule { + struct list_head list; + enum mlx4_net_trans_hw_rule_queue queue_mode; + bool exclusive; + bool allow_loopback; + enum mlx4_net_trans_promisc_mode promisc_mode; + u8 port; + u16 priority; + u32 qpn; +}; + +int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port, u32 qpn, + enum mlx4_net_trans_promisc_mode mode); +int mlx4_flow_steer_promisc_remove(struct mlx4_dev *dev, u8 port, + enum mlx4_net_trans_promisc_mode mode); int mlx4_multicast_promisc_add(struct mlx4_dev *dev, u32 qpn, u8 port); int mlx4_multicast_promisc_remove(struct mlx4_dev *dev, u32 qpn, u8 port); int mlx4_unicast_promisc_add(struct mlx4_dev *dev, u32 qpn, u8 port); @@ -659,7 +785,8 @@ void mlx4_fmr_unmap(struct mlx4_dev *dev, struct mlx4_fmr *fmr, int mlx4_fmr_free(struct mlx4_dev *dev, struct mlx4_fmr *fmr); int mlx4_SYNC_TPT(struct mlx4_dev *dev); int mlx4_test_interrupts(struct mlx4_dev *dev); -int mlx4_assign_eq(struct mlx4_dev *dev, char* name , int* vector); +int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, + int *vector); void mlx4_release_eq(struct mlx4_dev *dev, int vec); int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port); @@ -668,4 +795,8 @@ int mlx4_wol_write(struct mlx4_dev *dev, u64 config, int port); int mlx4_counter_alloc(struct mlx4_dev *dev, u32 *idx); void mlx4_counter_free(struct mlx4_dev *dev, u32 idx); +int mlx4_flow_attach(struct mlx4_dev *dev, + struct mlx4_net_trans_rule *rule, u64 *reg_id); +int mlx4_flow_detach(struct mlx4_dev *dev, u64 reg_id); + #endif /* MLX4_DEVICE_H */ diff --git a/include/linux/mlx4/driver.h b/include/linux/mlx4/driver.h index 5f1298b1b5ef..8dc485febc6b 100644 --- a/include/linux/mlx4/driver.h +++ b/include/linux/mlx4/driver.h @@ -37,6 +37,8 @@ struct mlx4_dev; +#define MLX4_MAC_MASK 0xffffffffffffULL + enum mlx4_dev_event { MLX4_DEV_EVENT_CATASTROPHIC_ERROR, MLX4_DEV_EVENT_PORT_UP, diff --git a/include/linux/net.h b/include/linux/net.h index e9ac2df079ba..99276c3dc89a 100644 --- a/include/linux/net.h +++ b/include/linux/net.h @@ -72,6 +72,7 @@ struct net; #define SOCK_NOSPACE 2 #define SOCK_PASSCRED 3 #define SOCK_PASSSEC 4 +#define SOCK_EXTERNALLY_ALLOCATED 5 #ifndef ARCH_HAS_SOCKET_TYPES /** @@ -247,6 +248,7 @@ extern int sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, int flags); extern int sock_map_fd(struct socket *sock, int flags); extern struct socket *sockfd_lookup(int fd, int *err); +extern struct socket *sock_from_file(struct file *file, int *err); #define sockfd_put(sock) fput(sock->file) extern int net_ratelimit(void); diff --git a/include/linux/netdevice.h 
b/include/linux/netdevice.h index d94cb1431519..eb06e58bed0b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1046,10 +1046,9 @@ struct net_device { */ char name[IFNAMSIZ]; - struct pm_qos_request pm_qos_req; - - /* device name hash chain */ + /* device name hash chain, please keep it close to name[] */ struct hlist_node name_hlist; + /* snmp alias */ char *ifalias; @@ -1322,6 +1321,8 @@ struct net_device { /* group the device belongs to */ int group; + + struct pm_qos_request pm_qos_req; }; #define to_net_dev(d) container_of(d, struct net_device, dev) @@ -1626,6 +1627,7 @@ extern int dev_alloc_name(struct net_device *dev, const char *name); extern int dev_open(struct net_device *dev); extern int dev_close(struct net_device *dev); extern void dev_disable_lro(struct net_device *dev); +extern int dev_loopback_xmit(struct sk_buff *newskb); extern int dev_queue_xmit(struct sk_buff *skb); extern int register_netdevice(struct net_device *dev); extern void unregister_netdevice_queue(struct net_device *dev, @@ -2108,7 +2110,12 @@ static inline int netif_set_real_num_rx_queues(struct net_device *dev, static inline int netif_copy_real_num_queues(struct net_device *to_dev, const struct net_device *from_dev) { - netif_set_real_num_tx_queues(to_dev, from_dev->real_num_tx_queues); + int err; + + err = netif_set_real_num_tx_queues(to_dev, + from_dev->real_num_tx_queues); + if (err) + return err; #ifdef CONFIG_RPS return netif_set_real_num_rx_queues(to_dev, from_dev->real_num_rx_queues); @@ -2117,6 +2124,9 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev, #endif } +#define DEFAULT_MAX_NUM_RSS_QUEUES (8) +extern int netif_get_num_default_rss_queues(void); + /* Use this variant when it is known for sure that it * is executing from hardware interrupt context or with hardware interrupts * disabled. 
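To illustrate the RSS-queue helper that the netdevice.h hunk above introduces, here is a minimal, hypothetical driver-side sketch. It relies only on the declarations visible in this diff (netif_get_num_default_rss_queues(), DEFAULT_MAX_NUM_RSS_QUEUES) plus the long-standing alloc_etherdev_mqs() allocator; the foo_* names are invented for the example and are not part of the patch.

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Hypothetical probe-time helper: size the netdev's TX/RX queues by the
 * smaller of what the hardware offers and the system-wide RSS default
 * (which the core caps at DEFAULT_MAX_NUM_RSS_QUEUES). */
static struct net_device *foo_alloc_netdev(unsigned int hw_max_queues,
					   int priv_size)
{
	unsigned int nqueues;

	nqueues = min_t(unsigned int, hw_max_queues,
			netif_get_num_default_rss_queues());

	/* one TX and one RX queue per RSS channel */
	return alloc_etherdev_mqs(priv_size, nqueues, nqueues);
}

Keeping the cap inside a single core helper means the default can be tuned later without touching individual drivers.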
diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h index ff9c84c29b28..c613cf0d7884 100644 --- a/include/linux/netfilter.h +++ b/include/linux/netfilter.h @@ -94,6 +94,16 @@ static inline int nf_inet_addr_cmp(const union nf_inet_addr *a1, a1->all[3] == a2->all[3]; } +static inline void nf_inet_addr_mask(const union nf_inet_addr *a1, + union nf_inet_addr *result, + const union nf_inet_addr *mask) +{ + result->all[0] = a1->all[0] & mask->all[0]; + result->all[1] = a1->all[1] & mask->all[1]; + result->all[2] = a1->all[2] & mask->all[2]; + result->all[3] = a1->all[3] & mask->all[3]; +} + extern void netfilter_init(void); /* Largest hook number + 1 */ @@ -383,6 +393,22 @@ nf_nat_decode_session(struct sk_buff *skb, struct flowi *fl, u_int8_t family) extern void (*ip_ct_attach)(struct sk_buff *, struct sk_buff *) __rcu; extern void nf_ct_attach(struct sk_buff *, struct sk_buff *); extern void (*nf_ct_destroy)(struct nf_conntrack *) __rcu; + +struct nf_conn; +struct nlattr; + +struct nfq_ct_hook { + size_t (*build_size)(const struct nf_conn *ct); + int (*build)(struct sk_buff *skb, struct nf_conn *ct); + int (*parse)(const struct nlattr *attr, struct nf_conn *ct); +}; +extern struct nfq_ct_hook __rcu *nfq_ct_hook; + +struct nfq_ct_nat_hook { + void (*seq_adjust)(struct sk_buff *skb, struct nf_conn *ct, + u32 ctinfo, int off); +}; +extern struct nfq_ct_nat_hook __rcu *nfq_ct_nat_hook; #else static inline void nf_ct_attach(struct sk_buff *new, struct sk_buff *skb) {} #endif diff --git a/include/linux/netfilter/Kbuild b/include/linux/netfilter/Kbuild index 1697036336b6..874ae8f2706b 100644 --- a/include/linux/netfilter/Kbuild +++ b/include/linux/netfilter/Kbuild @@ -10,6 +10,7 @@ header-y += nfnetlink.h header-y += nfnetlink_acct.h header-y += nfnetlink_compat.h header-y += nfnetlink_conntrack.h +header-y += nfnetlink_cthelper.h header-y += nfnetlink_cttimeout.h header-y += nfnetlink_log.h header-y += nfnetlink_queue.h diff --git a/include/linux/netfilter/nf_conntrack_sip.h b/include/linux/netfilter/nf_conntrack_sip.h index 0ce91d56a5f2..0dfc8b7210a3 100644 --- a/include/linux/netfilter/nf_conntrack_sip.h +++ b/include/linux/netfilter/nf_conntrack_sip.h @@ -2,6 +2,8 @@ #define __NF_CONNTRACK_SIP_H__ #ifdef __KERNEL__ +#include <net/netfilter/nf_conntrack_expect.h> + #define SIP_PORT 5060 #define SIP_TIMEOUT 3600 diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h index a1048c1587d1..18341cdb2443 100644 --- a/include/linux/netfilter/nfnetlink.h +++ b/include/linux/netfilter/nfnetlink.h @@ -50,7 +50,8 @@ struct nfgenmsg { #define NFNL_SUBSYS_IPSET 6 #define NFNL_SUBSYS_ACCT 7 #define NFNL_SUBSYS_CTNETLINK_TIMEOUT 8 -#define NFNL_SUBSYS_COUNT 9 +#define NFNL_SUBSYS_CTHELPER 9 +#define NFNL_SUBSYS_COUNT 10 #ifdef __KERNEL__ diff --git a/include/linux/netfilter/nfnetlink_conntrack.h b/include/linux/netfilter/nfnetlink_conntrack.h index e58e4b93c108..f649f7423ca2 100644 --- a/include/linux/netfilter/nfnetlink_conntrack.h +++ b/include/linux/netfilter/nfnetlink_conntrack.h @@ -7,6 +7,8 @@ enum cntl_msg_types { IPCTNL_MSG_CT_GET, IPCTNL_MSG_CT_DELETE, IPCTNL_MSG_CT_GET_CTRZERO, + IPCTNL_MSG_CT_GET_STATS_CPU, + IPCTNL_MSG_CT_GET_STATS, IPCTNL_MSG_MAX }; @@ -15,6 +17,7 @@ enum ctnl_exp_msg_types { IPCTNL_MSG_EXP_NEW, IPCTNL_MSG_EXP_GET, IPCTNL_MSG_EXP_DELETE, + IPCTNL_MSG_EXP_GET_STATS_CPU, IPCTNL_MSG_EXP_MAX }; @@ -191,6 +194,7 @@ enum ctattr_expect_nat { enum ctattr_help { CTA_HELP_UNSPEC, CTA_HELP_NAME, + CTA_HELP_INFO, __CTA_HELP_MAX }; #define 
CTA_HELP_MAX (__CTA_HELP_MAX - 1) @@ -202,4 +206,39 @@ enum ctattr_secctx { }; #define CTA_SECCTX_MAX (__CTA_SECCTX_MAX - 1) +enum ctattr_stats_cpu { + CTA_STATS_UNSPEC, + CTA_STATS_SEARCHED, + CTA_STATS_FOUND, + CTA_STATS_NEW, + CTA_STATS_INVALID, + CTA_STATS_IGNORE, + CTA_STATS_DELETE, + CTA_STATS_DELETE_LIST, + CTA_STATS_INSERT, + CTA_STATS_INSERT_FAILED, + CTA_STATS_DROP, + CTA_STATS_EARLY_DROP, + CTA_STATS_ERROR, + CTA_STATS_SEARCH_RESTART, + __CTA_STATS_MAX, +}; +#define CTA_STATS_MAX (__CTA_STATS_MAX - 1) + +enum ctattr_stats_global { + CTA_STATS_GLOBAL_UNSPEC, + CTA_STATS_GLOBAL_ENTRIES, + __CTA_STATS_GLOBAL_MAX, +}; +#define CTA_STATS_GLOBAL_MAX (__CTA_STATS_GLOBAL_MAX - 1) + +enum ctattr_expect_stats { + CTA_STATS_EXP_UNSPEC, + CTA_STATS_EXP_NEW, + CTA_STATS_EXP_CREATE, + CTA_STATS_EXP_DELETE, + __CTA_STATS_EXP_MAX, +}; +#define CTA_STATS_EXP_MAX (__CTA_STATS_EXP_MAX - 1) + #endif /* _IPCONNTRACK_NETLINK_H */ diff --git a/include/linux/netfilter/nfnetlink_cthelper.h b/include/linux/netfilter/nfnetlink_cthelper.h new file mode 100644 index 000000000000..33659f6fad3e --- /dev/null +++ b/include/linux/netfilter/nfnetlink_cthelper.h @@ -0,0 +1,55 @@ +#ifndef _NFNL_CTHELPER_H_ +#define _NFNL_CTHELPER_H_ + +#define NFCT_HELPER_STATUS_DISABLED 0 +#define NFCT_HELPER_STATUS_ENABLED 1 + +enum nfnl_acct_msg_types { + NFNL_MSG_CTHELPER_NEW, + NFNL_MSG_CTHELPER_GET, + NFNL_MSG_CTHELPER_DEL, + NFNL_MSG_CTHELPER_MAX +}; + +enum nfnl_cthelper_type { + NFCTH_UNSPEC, + NFCTH_NAME, + NFCTH_TUPLE, + NFCTH_QUEUE_NUM, + NFCTH_POLICY, + NFCTH_PRIV_DATA_LEN, + NFCTH_STATUS, + __NFCTH_MAX +}; +#define NFCTH_MAX (__NFCTH_MAX - 1) + +enum nfnl_cthelper_policy_type { + NFCTH_POLICY_SET_UNSPEC, + NFCTH_POLICY_SET_NUM, + NFCTH_POLICY_SET, + NFCTH_POLICY_SET1 = NFCTH_POLICY_SET, + NFCTH_POLICY_SET2, + NFCTH_POLICY_SET3, + NFCTH_POLICY_SET4, + __NFCTH_POLICY_SET_MAX +}; +#define NFCTH_POLICY_SET_MAX (__NFCTH_POLICY_SET_MAX - 1) + +enum nfnl_cthelper_pol_type { + NFCTH_POLICY_UNSPEC, + NFCTH_POLICY_NAME, + NFCTH_POLICY_EXPECT_MAX, + NFCTH_POLICY_EXPECT_TIMEOUT, + __NFCTH_POLICY_MAX +}; +#define NFCTH_POLICY_MAX (__NFCTH_POLICY_MAX - 1) + +enum nfnl_cthelper_tuple_type { + NFCTH_TUPLE_UNSPEC, + NFCTH_TUPLE_L3PROTONUM, + NFCTH_TUPLE_L4PROTONUM, + __NFCTH_TUPLE_MAX, +}; +#define NFCTH_TUPLE_MAX (__NFCTH_TUPLE_MAX - 1) + +#endif /* _NFNL_CTHELPER_H */ diff --git a/include/linux/netfilter/nfnetlink_queue.h b/include/linux/netfilter/nfnetlink_queue.h index 24b32e6c009e..3b1c1360aedf 100644 --- a/include/linux/netfilter/nfnetlink_queue.h +++ b/include/linux/netfilter/nfnetlink_queue.h @@ -42,6 +42,8 @@ enum nfqnl_attr_type { NFQA_IFINDEX_PHYSOUTDEV, /* __u32 ifindex */ NFQA_HWADDR, /* nfqnl_msg_packet_hw */ NFQA_PAYLOAD, /* opaque data payload */ + NFQA_CT, /* nf_conntrack_netlink.h */ + NFQA_CT_INFO, /* enum ip_conntrack_info */ __NFQA_MAX }; @@ -84,8 +86,15 @@ enum nfqnl_attr_config { NFQA_CFG_CMD, /* nfqnl_msg_config_cmd */ NFQA_CFG_PARAMS, /* nfqnl_msg_config_params */ NFQA_CFG_QUEUE_MAXLEN, /* __u32 */ + NFQA_CFG_MASK, /* identify which flags to change */ + NFQA_CFG_FLAGS, /* value of these flags (__u32) */ __NFQA_CFG_MAX }; #define NFQA_CFG_MAX (__NFQA_CFG_MAX-1) +/* Flags for NFQA_CFG_FLAGS */ +#define NFQA_CFG_F_FAIL_OPEN (1 << 0) +#define NFQA_CFG_F_CONNTRACK (1 << 1) +#define NFQA_CFG_F_MAX (1 << 2) + #endif /* _NFNETLINK_QUEUE_H */ diff --git a/include/linux/netfilter/xt_connlimit.h b/include/linux/netfilter/xt_connlimit.h index d1366f05d1b2..f1656096121e 100644 --- a/include/linux/netfilter/xt_connlimit.h 
+++ b/include/linux/netfilter/xt_connlimit.h @@ -22,13 +22,8 @@ struct xt_connlimit_info { #endif }; unsigned int limit; - union { - /* revision 0 */ - unsigned int inverse; - - /* revision 1 */ - __u32 flags; - }; + /* revision 1 */ + __u32 flags; /* Used internally by the kernel */ struct xt_connlimit_data *data __attribute__((aligned(8))); diff --git a/include/linux/netfilter/xt_recent.h b/include/linux/netfilter/xt_recent.h index 83318e01425e..6ef36c113e89 100644 --- a/include/linux/netfilter/xt_recent.h +++ b/include/linux/netfilter/xt_recent.h @@ -32,4 +32,14 @@ struct xt_recent_mtinfo { __u8 side; }; +struct xt_recent_mtinfo_v1 { + __u32 seconds; + __u32 hit_count; + __u8 check_set; + __u8 invert; + char name[XT_RECENT_NAME_LEN]; + __u8 side; + union nf_inet_addr mask; +}; + #endif /* _LINUX_NETFILTER_XT_RECENT_H */ diff --git a/include/linux/netfilter_ipv4.h b/include/linux/netfilter_ipv4.h index fa0946c549d3..e2b12801378d 100644 --- a/include/linux/netfilter_ipv4.h +++ b/include/linux/netfilter_ipv4.h @@ -66,6 +66,7 @@ enum nf_ip_hook_priorities { NF_IP_PRI_SECURITY = 50, NF_IP_PRI_NAT_SRC = 100, NF_IP_PRI_SELINUX_LAST = 225, + NF_IP_PRI_CONNTRACK_HELPER = 300, NF_IP_PRI_CONNTRACK_CONFIRM = INT_MAX, NF_IP_PRI_LAST = INT_MAX, }; diff --git a/include/linux/netfilter_ipv4/Kbuild b/include/linux/netfilter_ipv4/Kbuild index c61b8fb1a9ef..8ba0c5b72ea9 100644 --- a/include/linux/netfilter_ipv4/Kbuild +++ b/include/linux/netfilter_ipv4/Kbuild @@ -5,7 +5,6 @@ header-y += ipt_LOG.h header-y += ipt_REJECT.h header-y += ipt_TTL.h header-y += ipt_ULOG.h -header-y += ipt_addrtype.h header-y += ipt_ah.h header-y += ipt_ecn.h header-y += ipt_ttl.h diff --git a/include/linux/netfilter_ipv4/ipt_addrtype.h b/include/linux/netfilter_ipv4/ipt_addrtype.h deleted file mode 100644 index 0da42237c8da..000000000000 --- a/include/linux/netfilter_ipv4/ipt_addrtype.h +++ /dev/null @@ -1,27 +0,0 @@ -#ifndef _IPT_ADDRTYPE_H -#define _IPT_ADDRTYPE_H - -#include <linux/types.h> - -enum { - IPT_ADDRTYPE_INVERT_SOURCE = 0x0001, - IPT_ADDRTYPE_INVERT_DEST = 0x0002, - IPT_ADDRTYPE_LIMIT_IFACE_IN = 0x0004, - IPT_ADDRTYPE_LIMIT_IFACE_OUT = 0x0008, -}; - -struct ipt_addrtype_info_v1 { - __u16 source; /* source-type mask */ - __u16 dest; /* dest-type mask */ - __u32 flags; -}; - -/* revision 0 */ -struct ipt_addrtype_info { - __u16 source; /* source-type mask */ - __u16 dest; /* dest-type mask */ - __u32 invert_source; - __u32 invert_dest; -}; - -#endif diff --git a/include/linux/netfilter_ipv6.h b/include/linux/netfilter_ipv6.h index 57c025127f1d..7c8a513ce7a3 100644 --- a/include/linux/netfilter_ipv6.h +++ b/include/linux/netfilter_ipv6.h @@ -71,6 +71,7 @@ enum nf_ip6_hook_priorities { NF_IP6_PRI_SECURITY = 50, NF_IP6_PRI_NAT_SRC = 100, NF_IP6_PRI_SELINUX_LAST = 225, + NF_IP6_PRI_CONNTRACK_HELPER = 300, NF_IP6_PRI_LAST = INT_MAX, }; diff --git a/include/linux/netlink.h b/include/linux/netlink.h index 0f628ffa420c..f74dd133788f 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -174,11 +174,17 @@ struct netlink_skb_parms { extern void netlink_table_grab(void); extern void netlink_table_ungrab(void); -extern struct sock *netlink_kernel_create(struct net *net, - int unit,unsigned int groups, - void (*input)(struct sk_buff *skb), - struct mutex *cb_mutex, - struct module *module); +/* optional Netlink kernel configuration parameters */ +struct netlink_kernel_cfg { + unsigned int groups; + void (*input)(struct sk_buff *skb); + struct mutex *cb_mutex; + void (*bind)(int group); +}; + +extern struct sock 
*netlink_kernel_create(struct net *net, int unit, + struct module *module, + struct netlink_kernel_cfg *cfg); extern void netlink_kernel_release(struct sock *sk); extern int __netlink_change_ngroups(struct sock *sk, unsigned int groups); extern int netlink_change_ngroups(struct sock *sk, unsigned int groups); @@ -241,14 +247,6 @@ struct netlink_notify { struct nlmsghdr * __nlmsg_put(struct sk_buff *skb, u32 pid, u32 seq, int type, int len, int flags); -#define NLMSG_NEW(skb, pid, seq, type, len, flags) \ -({ if (unlikely(skb_tailroom(skb) < (int)NLMSG_SPACE(len))) \ - goto nlmsg_failure; \ - __nlmsg_put(skb, pid, seq, type, len, flags); }) - -#define NLMSG_PUT(skb, pid, seq, type, len) \ - NLMSG_NEW(skb, pid, seq, type, len, 0) - struct netlink_dump_control { int (*dump)(struct sk_buff *skb, struct netlink_callback *); int (*done)(struct netlink_callback*); diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h index 5dfa091c3347..28f5389c924b 100644 --- a/include/linux/netpoll.h +++ b/include/linux/netpoll.h @@ -43,7 +43,7 @@ struct netpoll_info { void netpoll_send_udp(struct netpoll *np, const char *msg, int len); void netpoll_print_options(struct netpoll *np); int netpoll_parse_options(struct netpoll *np, char *opt); -int __netpoll_setup(struct netpoll *np); +int __netpoll_setup(struct netpoll *np, struct net_device *ndev); int netpoll_setup(struct netpoll *np); int netpoll_trap(void); void netpoll_set_trap(int trap); diff --git a/include/linux/nfc.h b/include/linux/nfc.h index 0ae9b5857c83..6189f27e305b 100644 --- a/include/linux/nfc.h +++ b/include/linux/nfc.h @@ -56,6 +56,10 @@ * %NFC_ATTR_PROTOCOLS) * @NFC_EVENT_DEVICE_REMOVED: event emitted when a device is removed * (it sends %NFC_ATTR_DEVICE_INDEX) + * @NFC_EVENT_TM_ACTIVATED: event emitted when the adapter is activated in + * target mode. + * @NFC_EVENT_DEVICE_DEACTIVATED: event emitted when the adapter is deactivated + * from target mode. 
*/ enum nfc_commands { NFC_CMD_UNSPEC, @@ -71,6 +75,8 @@ enum nfc_commands { NFC_EVENT_DEVICE_ADDED, NFC_EVENT_DEVICE_REMOVED, NFC_EVENT_TARGET_LOST, + NFC_EVENT_TM_ACTIVATED, + NFC_EVENT_TM_DEACTIVATED, /* private: internal use only */ __NFC_CMD_AFTER_LAST }; @@ -94,6 +100,8 @@ enum nfc_commands { * @NFC_ATTR_TARGET_SENSF_RES: NFC-F targets extra information, max 18 bytes * @NFC_ATTR_COMM_MODE: Passive or active mode * @NFC_ATTR_RF_MODE: Initiator or target + * @NFC_ATTR_IM_PROTOCOLS: Initiator mode protocols to poll for + * @NFC_ATTR_TM_PROTOCOLS: Target mode protocols to listen for */ enum nfc_attrs { NFC_ATTR_UNSPEC, @@ -109,6 +117,8 @@ enum nfc_attrs { NFC_ATTR_COMM_MODE, NFC_ATTR_RF_MODE, NFC_ATTR_DEVICE_POWERED, + NFC_ATTR_IM_PROTOCOLS, + NFC_ATTR_TM_PROTOCOLS, /* private: internal use only */ __NFC_ATTR_AFTER_LAST }; @@ -118,6 +128,7 @@ enum nfc_attrs { #define NFC_NFCID1_MAXSIZE 10 #define NFC_SENSB_RES_MAXSIZE 12 #define NFC_SENSF_RES_MAXSIZE 18 +#define NFC_GB_MAXSIZE 48 /* NFC protocols */ #define NFC_PROTO_JEWEL 1 @@ -125,8 +136,9 @@ enum nfc_attrs { #define NFC_PROTO_FELICA 3 #define NFC_PROTO_ISO14443 4 #define NFC_PROTO_NFC_DEP 5 +#define NFC_PROTO_ISO14443_B 6 -#define NFC_PROTO_MAX 6 +#define NFC_PROTO_MAX 7 /* NFC communication modes */ #define NFC_COMM_ACTIVE 0 @@ -135,13 +147,15 @@ enum nfc_attrs { /* NFC RF modes */ #define NFC_RF_INITIATOR 0 #define NFC_RF_TARGET 1 +#define NFC_RF_NONE 2 /* NFC protocols masks used in bitsets */ -#define NFC_PROTO_JEWEL_MASK (1 << NFC_PROTO_JEWEL) -#define NFC_PROTO_MIFARE_MASK (1 << NFC_PROTO_MIFARE) -#define NFC_PROTO_FELICA_MASK (1 << NFC_PROTO_FELICA) -#define NFC_PROTO_ISO14443_MASK (1 << NFC_PROTO_ISO14443) -#define NFC_PROTO_NFC_DEP_MASK (1 << NFC_PROTO_NFC_DEP) +#define NFC_PROTO_JEWEL_MASK (1 << NFC_PROTO_JEWEL) +#define NFC_PROTO_MIFARE_MASK (1 << NFC_PROTO_MIFARE) +#define NFC_PROTO_FELICA_MASK (1 << NFC_PROTO_FELICA) +#define NFC_PROTO_ISO14443_MASK (1 << NFC_PROTO_ISO14443) +#define NFC_PROTO_NFC_DEP_MASK (1 << NFC_PROTO_NFC_DEP) +#define NFC_PROTO_ISO14443_B_MASK (1 << NFC_PROTO_ISO14443_B) struct sockaddr_nfc { sa_family_t sa_family; diff --git a/include/linux/nl80211.h b/include/linux/nl80211.h index a6959f72745e..2f3878806403 100644 --- a/include/linux/nl80211.h +++ b/include/linux/nl80211.h @@ -170,6 +170,8 @@ * %NL80211_ATTR_CIPHER_GROUP, %NL80211_ATTR_WPA_VERSIONS, * %NL80211_ATTR_AKM_SUITES, %NL80211_ATTR_PRIVACY, * %NL80211_ATTR_AUTH_TYPE and %NL80211_ATTR_INACTIVITY_TIMEOUT. + * The channel to use can be set on the interface or be given using the + * %NL80211_ATTR_WIPHY_FREQ and %NL80211_ATTR_WIPHY_CHANNEL_TYPE attrs. * @NL80211_CMD_NEW_BEACON: old alias for %NL80211_CMD_START_AP * @NL80211_CMD_STOP_AP: Stop AP operation on the given interface * @NL80211_CMD_DEL_BEACON: old alias for %NL80211_CMD_STOP_AP @@ -275,6 +277,12 @@ * @NL80211_CMD_NEW_SURVEY_RESULTS: survey data notification (as a reply to * NL80211_CMD_GET_SURVEY and on the "scan" multicast group) * + * @NL80211_CMD_SET_PMKSA: Add a PMKSA cache entry, using %NL80211_ATTR_MAC + * (for the BSSID) and %NL80211_ATTR_PMKID. + * @NL80211_CMD_DEL_PMKSA: Delete a PMKSA cache entry, using %NL80211_ATTR_MAC + * (for the BSSID) and %NL80211_ATTR_PMKID. + * @NL80211_CMD_FLUSH_PMKSA: Flush all PMKSA cache entries. + * * @NL80211_CMD_REG_CHANGE: indicates to userspace the regulatory domain * has been changed and provides details of the request information * that caused the change such as who initiated the regulatory request @@ -454,6 +462,10 @@ * the frame. 
* @NL80211_CMD_ACTION_TX_STATUS: Alias for @NL80211_CMD_FRAME_TX_STATUS for * backward compatibility. + * + * @NL80211_CMD_SET_POWER_SAVE: Set powersave, using %NL80211_ATTR_PS_STATE + * @NL80211_CMD_GET_POWER_SAVE: Get powersave status in %NL80211_ATTR_PS_STATE + * * @NL80211_CMD_SET_CQM: Connection quality monitor configuration. This command * is used to configure connection quality monitoring notification trigger * levels. @@ -759,6 +771,9 @@ enum nl80211_commands { * @NL80211_ATTR_IFNAME: network interface name * @NL80211_ATTR_IFTYPE: type of virtual interface, see &enum nl80211_iftype * + * @NL80211_ATTR_WDEV: wireless device identifier, used for pseudo-devices + * that don't have a netdev (u64) + * * @NL80211_ATTR_MAC: MAC address (various uses) * * @NL80211_ATTR_KEY_DATA: (temporal) key data; for TKIP this consists of @@ -769,6 +784,13 @@ enum nl80211_commands { * section 7.3.2.25.1, e.g. 0x000FAC04) * @NL80211_ATTR_KEY_SEQ: transmit key sequence number (IV/PN) for TKIP and * CCMP keys, each six bytes in little endian + * @NL80211_ATTR_KEY_DEFAULT: Flag attribute indicating the key is default key + * @NL80211_ATTR_KEY_DEFAULT_MGMT: Flag attribute indicating the key is the + * default management key + * @NL80211_ATTR_CIPHER_SUITES_PAIRWISE: For crypto settings for connect or + * other commands, indicates which pairwise cipher suites are used + * @NL80211_ATTR_CIPHER_SUITE_GROUP: For crypto settings for connect or + * other commands, indicates which group cipher suite is used * * @NL80211_ATTR_BEACON_INTERVAL: beacon interval in TU * @NL80211_ATTR_DTIM_PERIOD: DTIM period for beaconing @@ -1004,6 +1026,8 @@ enum nl80211_commands { * @NL80211_ATTR_ACK: Flag attribute indicating that the frame was * acknowledged by the recipient. * + * @NL80211_ATTR_PS_STATE: powersave state, using &enum nl80211_ps_state values. + * * @NL80211_ATTR_CQM: connection quality monitor configuration in a * nested attribute with %NL80211_ATTR_CQM_* sub-attributes. * @@ -1061,7 +1085,7 @@ enum nl80211_commands { * flag isn't set, the frame will be rejected. This is also used as an * nl80211 capability flag. * - * @NL80211_ATTR_BSS_HTOPMODE: HT operation mode (u16) + * @NL80211_ATTR_BSS_HT_OPMODE: HT operation mode (u16) * * @NL80211_ATTR_KEY_DEFAULT_TYPES: A nested attribute containing flags * attributes, specifying what a key should be set as default as. @@ -1085,10 +1109,10 @@ enum nl80211_commands { * indicate which WoW triggers should be enabled. This is also * used by %NL80211_CMD_GET_WOWLAN to get the currently enabled WoWLAN * triggers. - + * * @NL80211_ATTR_SCHED_SCAN_INTERVAL: Interval between scheduled scan * cycles, in msecs. - + * * @NL80211_ATTR_SCHED_SCAN_MATCH: Nested attribute with one or more * sets of attributes to match during scheduled scans. Only BSSs * that match any of the sets will be reported. These are @@ -1115,7 +1139,7 @@ enum nl80211_commands { * are managed in software: interfaces of these types aren't subject to * any restrictions in their number or combinations. * - * @%NL80211_ATTR_REKEY_DATA: nested attribute containing the information + * @NL80211_ATTR_REKEY_DATA: nested attribute containing the information * necessary for GTK rekeying in the device, see &enum nl80211_rekey_data. * * @NL80211_ATTR_SCAN_SUPP_RATES: rates per to be advertised as supported in scan, @@ -1182,7 +1206,6 @@ enum nl80211_commands { * @NL80211_ATTR_FEATURE_FLAGS: This u32 attribute contains flags from * &enum nl80211_feature_flags and is advertised in wiphy information. 
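The last entry above, %NL80211_ATTR_FEATURE_FLAGS, is a plain u32 bitmap of &enum nl80211_feature_flags advertised with the wiphy. A small, hedged sketch of how a userspace tool might test for a capability after parsing a wiphy dump with libnl-3 (attribute parsing itself omitted):

#include <stdbool.h>
#include <netlink/attr.h>
#include <linux/nl80211.h>

static bool wiphy_has_feature(struct nlattr *tb[NL80211_ATTR_MAX + 1],
			      unsigned int feature)
{
	if (!tb[NL80211_ATTR_FEATURE_FLAGS])
		return false;	/* older kernel: no feature bitmap */

	return nla_get_u32(tb[NL80211_ATTR_FEATURE_FLAGS]) & feature;
}

/* e.g. wiphy_has_feature(tb, NL80211_FEATURE_CELL_BASE_REG_HINTS) */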
* @NL80211_ATTR_PROBE_RESP_OFFLOAD: Indicates that the HW responds to probe - * * requests while operating in AP-mode. * This attribute holds a bitmap of the supported protocols for * offloading (see &enum nl80211_probe_resp_offload_support_attr). @@ -1222,6 +1245,12 @@ enum nl80211_commands { * @NL80211_ATTR_BG_SCAN_PERIOD: Background scan period in seconds * or 0 to disable background scan. * + * @NL80211_ATTR_USER_REG_HINT_TYPE: type of regulatory hint passed from + * userspace. If unset it is assumed the hint comes directly from + * a user. If set code could specify exactly what type of source + * was used to provide the hint. For the different types of + * allowed user regulatory hints see nl80211_user_reg_hint_type. + * * @NL80211_ATTR_MAX: highest attribute number currently defined * @__NL80211_ATTR_AFTER_LAST: internal use */ @@ -1473,6 +1502,10 @@ enum nl80211_attrs { NL80211_ATTR_BG_SCAN_PERIOD, + NL80211_ATTR_WDEV, + + NL80211_ATTR_USER_REG_HINT_TYPE, + /* add attributes here, update the policy in nl80211.c */ __NL80211_ATTR_AFTER_LAST, @@ -1520,6 +1553,13 @@ enum nl80211_attrs { #define NL80211_MAX_NR_CIPHER_SUITES 5 #define NL80211_MAX_NR_AKM_SUITES 2 +#define NL80211_MIN_REMAIN_ON_CHANNEL_TIME 10 + +/* default RSSI threshold for scan results if none specified. */ +#define NL80211_SCAN_RSSI_THOLD_OFF -300 + +#define NL80211_CQM_TXE_MAX_INTVL 1800 + /** * enum nl80211_iftype - (virtual) interface types * @@ -1613,12 +1653,20 @@ struct nl80211_sta_flag_update { * * These attribute types are used with %NL80211_STA_INFO_TXRATE * when getting information about the bitrate of a station. + * There are 2 attributes for bitrate, a legacy one that represents + * a 16-bit value, and new one that represents a 32-bit value. + * If the rate value fits into 16 bit, both attributes are reported + * with the same value. If the rate is too high to fit into 16 bits + * (>6.5535Gbps) only 32-bit attribute is included. + * User space tools encouraged to use the 32-bit attribute and fall + * back to the 16-bit one for compatibility with older kernels. 
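The rate-info paragraph above spells out the fallback rule for the two bitrate attributes; a minimal userspace helper implementing it might look like this (libnl-3, with the nested rate-info attributes already parsed into an array; units are 100 kbit/s):

#include <stdint.h>
#include <netlink/attr.h>
#include <linux/nl80211.h>

static uint32_t rate_info_bitrate(struct nlattr *ri[NL80211_RATE_INFO_MAX + 1])
{
	if (ri[NL80211_RATE_INFO_BITRATE32])
		return nla_get_u32(ri[NL80211_RATE_INFO_BITRATE32]);

	if (ri[NL80211_RATE_INFO_BITRATE])	/* legacy 16-bit attribute */
		return nla_get_u16(ri[NL80211_RATE_INFO_BITRATE]);

	return 0;	/* bitrate not reported */
}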
* * @__NL80211_RATE_INFO_INVALID: attribute number 0 is reserved * @NL80211_RATE_INFO_BITRATE: total bitrate (u16, 100kbit/s) * @NL80211_RATE_INFO_MCS: mcs index for 802.11n (u8) * @NL80211_RATE_INFO_40_MHZ_WIDTH: 40 Mhz dualchannel bitrate * @NL80211_RATE_INFO_SHORT_GI: 400ns guard interval + * @NL80211_RATE_INFO_BITRATE32: total bitrate (u32, 100kbit/s) * @NL80211_RATE_INFO_MAX: highest rate_info number currently defined * @__NL80211_RATE_INFO_AFTER_LAST: internal use */ @@ -1628,6 +1676,7 @@ enum nl80211_rate_info { NL80211_RATE_INFO_MCS, NL80211_RATE_INFO_40_MHZ_WIDTH, NL80211_RATE_INFO_SHORT_GI, + NL80211_RATE_INFO_BITRATE32, /* keep last */ __NL80211_RATE_INFO_AFTER_LAST, @@ -1788,6 +1837,9 @@ enum nl80211_mpath_info { * @NL80211_BAND_ATTR_HT_CAPA: HT capabilities, as in the HT information IE * @NL80211_BAND_ATTR_HT_AMPDU_FACTOR: A-MPDU factor, as in 11n * @NL80211_BAND_ATTR_HT_AMPDU_DENSITY: A-MPDU density, as in 11n + * @NL80211_BAND_ATTR_VHT_MCS_SET: 32-byte attribute containing the MCS set as + * defined in 802.11ac + * @NL80211_BAND_ATTR_VHT_CAPA: VHT capabilities, as in the HT information IE * @NL80211_BAND_ATTR_MAX: highest band attribute currently defined * @__NL80211_BAND_ATTR_AFTER_LAST: internal use */ @@ -1801,6 +1853,9 @@ enum nl80211_band_attr { NL80211_BAND_ATTR_HT_AMPDU_FACTOR, NL80211_BAND_ATTR_HT_AMPDU_DENSITY, + NL80211_BAND_ATTR_VHT_MCS_SET, + NL80211_BAND_ATTR_VHT_CAPA, + /* keep last */ __NL80211_BAND_ATTR_AFTER_LAST, NL80211_BAND_ATTR_MAX = __NL80211_BAND_ATTR_AFTER_LAST - 1 @@ -1952,6 +2007,8 @@ enum nl80211_reg_rule_attr { * @__NL80211_SCHED_SCAN_MATCH_ATTR_INVALID: attribute number 0 is reserved * @NL80211_SCHED_SCAN_MATCH_ATTR_SSID: SSID to be used for matching, * only report BSS with matching SSID. + * @NL80211_SCHED_SCAN_MATCH_ATTR_RSSI: RSSI threshold (in dBm) for reporting a + * BSS in scan results. Filtering is turned off if not specified. * @NL80211_SCHED_SCAN_MATCH_ATTR_MAX: highest scheduled scan filter * attribute number currently defined * @__NL80211_SCHED_SCAN_MATCH_ATTR_AFTER_LAST: internal use @@ -1959,7 +2016,8 @@ enum nl80211_reg_rule_attr { enum nl80211_sched_scan_match_attr { __NL80211_SCHED_SCAN_MATCH_ATTR_INVALID, - NL80211_ATTR_SCHED_SCAN_MATCH_SSID, + NL80211_SCHED_SCAN_MATCH_ATTR_SSID, + NL80211_SCHED_SCAN_MATCH_ATTR_RSSI, /* keep last */ __NL80211_SCHED_SCAN_MATCH_ATTR_AFTER_LAST, @@ -1967,6 +2025,9 @@ enum nl80211_sched_scan_match_attr { __NL80211_SCHED_SCAN_MATCH_ATTR_AFTER_LAST - 1 }; +/* only for backward compatibility */ +#define NL80211_ATTR_SCHED_SCAN_MATCH_SSID NL80211_SCHED_SCAN_MATCH_ATTR_SSID + /** * enum nl80211_reg_rule_flags - regulatory rule flags * @@ -2008,6 +2069,26 @@ enum nl80211_dfs_regions { }; /** + * enum nl80211_user_reg_hint_type - type of user regulatory hint + * + * @NL80211_USER_REG_HINT_USER: a user sent the hint. This is always + * assumed if the attribute is not set. + * @NL80211_USER_REG_HINT_CELL_BASE: the hint comes from a cellular + * base station. Device drivers that have been tested to work + * properly to support this type of hint can enable these hints + * by setting the NL80211_FEATURE_CELL_BASE_REG_HINTS feature + * capability on the struct wiphy. The wireless core will + * ignore all cell base station hints until at least one device + * present has been registered with the wireless core that + * has listed NL80211_FEATURE_CELL_BASE_REG_HINTS as a + * supported feature. 
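As described above, a cell-base regulatory hint is only honoured once a registered wiphy advertises NL80211_FEATURE_CELL_BASE_REG_HINTS. A hedged sketch of the userspace side, tagging an NL80211_CMD_REQ_SET_REG request with the new hint type (libnl-3; generic netlink header setup omitted, %NL80211_ATTR_REG_ALPHA2 carries the country code as before):

#include <errno.h>
#include <netlink/msg.h>
#include <netlink/attr.h>
#include <linux/nl80211.h>

static int put_cell_base_reg_hint(struct nl_msg *msg, const char *alpha2)
{
	if (nla_put_string(msg, NL80211_ATTR_REG_ALPHA2, alpha2) ||
	    nla_put_u32(msg, NL80211_ATTR_USER_REG_HINT_TYPE,
			NL80211_USER_REG_HINT_CELL_BASE))
		return -ENOBUFS;

	return 0;
}

On the driver side the opt-in is a single bit set before registration, e.g. wiphy->features |= NL80211_FEATURE_CELL_BASE_REG_HINTS.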
+ */ +enum nl80211_user_reg_hint_type { + NL80211_USER_REG_HINT_USER = 0, + NL80211_USER_REG_HINT_CELL_BASE = 1, +}; + +/** * enum nl80211_survey_info - survey information * * These attribute types are used with %NL80211_ATTR_SURVEY_INFO @@ -2086,78 +2167,91 @@ enum nl80211_mntr_flags { * @__NL80211_MESHCONF_INVALID: internal use * * @NL80211_MESHCONF_RETRY_TIMEOUT: specifies the initial retry timeout in - * millisecond units, used by the Peer Link Open message + * millisecond units, used by the Peer Link Open message * * @NL80211_MESHCONF_CONFIRM_TIMEOUT: specifies the initial confirm timeout, in - * millisecond units, used by the peer link management to close a peer link + * millisecond units, used by the peer link management to close a peer link * * @NL80211_MESHCONF_HOLDING_TIMEOUT: specifies the holding timeout, in - * millisecond units + * millisecond units * * @NL80211_MESHCONF_MAX_PEER_LINKS: maximum number of peer links allowed - * on this mesh interface + * on this mesh interface * * @NL80211_MESHCONF_MAX_RETRIES: specifies the maximum number of peer link - * open retries that can be sent to establish a new peer link instance in a - * mesh + * open retries that can be sent to establish a new peer link instance in a + * mesh * * @NL80211_MESHCONF_TTL: specifies the value of TTL field set at a source mesh - * point. + * point. * * @NL80211_MESHCONF_AUTO_OPEN_PLINKS: whether we should automatically - * open peer links when we detect compatible mesh peers. + * open peer links when we detect compatible mesh peers. * * @NL80211_MESHCONF_HWMP_MAX_PREQ_RETRIES: the number of action frames - * containing a PREQ that an MP can send to a particular destination (path - * target) + * containing a PREQ that an MP can send to a particular destination (path + * target) * * @NL80211_MESHCONF_PATH_REFRESH_TIME: how frequently to refresh mesh paths - * (in milliseconds) + * (in milliseconds) * * @NL80211_MESHCONF_MIN_DISCOVERY_TIMEOUT: minimum length of time to wait - * until giving up on a path discovery (in milliseconds) + * until giving up on a path discovery (in milliseconds) * * @NL80211_MESHCONF_HWMP_ACTIVE_PATH_TIMEOUT: The time (in TUs) for which mesh - * points receiving a PREQ shall consider the forwarding information from the - * root to be valid. (TU = time unit) + * points receiving a PREQ shall consider the forwarding information from + * the root to be valid. (TU = time unit) * * @NL80211_MESHCONF_HWMP_PREQ_MIN_INTERVAL: The minimum interval of time (in - * TUs) during which an MP can send only one action frame containing a PREQ - * reference element + * TUs) during which an MP can send only one action frame containing a PREQ + * reference element * * @NL80211_MESHCONF_HWMP_NET_DIAM_TRVS_TIME: The interval of time (in TUs) - * that it takes for an HWMP information element to propagate across the mesh + * that it takes for an HWMP information element to propagate across the + * mesh * * @NL80211_MESHCONF_HWMP_ROOTMODE: whether root mode is enabled or not * * @NL80211_MESHCONF_ELEMENT_TTL: specifies the value of TTL field set at a - * source mesh point for path selection elements. + * source mesh point for path selection elements. * * @NL80211_MESHCONF_HWMP_RANN_INTERVAL: The interval of time (in TUs) between - * root announcements are transmitted. + * root announcements are transmitted. * * @NL80211_MESHCONF_GATE_ANNOUNCEMENTS: Advertise that this mesh station has - * access to a broader network beyond the MBSS. This is done via Root - * Announcement frames. 
+ * access to a broader network beyond the MBSS. This is done via Root + * Announcement frames. * * @NL80211_MESHCONF_HWMP_PERR_MIN_INTERVAL: The minimum interval of time (in - * TUs) during which a mesh STA can send only one Action frame containing a - * PERR element. + * TUs) during which a mesh STA can send only one Action frame containing a + * PERR element. * * @NL80211_MESHCONF_FORWARDING: set Mesh STA as forwarding or non-forwarding - * or forwarding entity (default is TRUE - forwarding entity) + * or forwarding entity (default is TRUE - forwarding entity) * * @NL80211_MESHCONF_RSSI_THRESHOLD: RSSI threshold in dBm. This specifies the - * threshold for average signal strength of candidate station to establish - * a peer link. - * - * @NL80211_MESHCONF_ATTR_MAX: highest possible mesh configuration attribute + * threshold for average signal strength of candidate station to establish + * a peer link. * * @NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR: maximum number of neighbors - * to synchronize to for 11s default synchronization method (see 11C.12.2.2) + * to synchronize to for 11s default synchronization method + * (see 11C.12.2.2) * * @NL80211_MESHCONF_HT_OPMODE: set mesh HT protection mode. * + * @NL80211_MESHCONF_ATTR_MAX: highest possible mesh configuration attribute + * + * @NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT: The time (in TUs) for + * which mesh STAs receiving a proactive PREQ shall consider the forwarding + * information to the root mesh STA to be valid. + * + * @NL80211_MESHCONF_HWMP_ROOT_INTERVAL: The interval of time (in TUs) between + * proactive PREQs are transmitted. + * + * @NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL: The minimum interval of time + * (in TUs) during which a mesh STA can send only one Action frame + * containing a PREQ element for root path confirmation. + * * @__NL80211_MESHCONF_ATTR_AFTER_LAST: internal use */ enum nl80211_meshconf_params { @@ -2184,6 +2278,9 @@ enum nl80211_meshconf_params { NL80211_MESHCONF_RSSI_THRESHOLD, NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR, NL80211_MESHCONF_HT_OPMODE, + NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT, + NL80211_MESHCONF_HWMP_ROOT_INTERVAL, + NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL, /* keep last */ __NL80211_MESHCONF_ATTR_AFTER_LAST, @@ -2199,34 +2296,36 @@ enum nl80211_meshconf_params { * @__NL80211_MESH_SETUP_INVALID: Internal use * * @NL80211_MESH_SETUP_ENABLE_VENDOR_PATH_SEL: Enable this option to use a - * vendor specific path selection algorithm or disable it to use the default - * HWMP. + * vendor specific path selection algorithm or disable it to use the + * default HWMP. * * @NL80211_MESH_SETUP_ENABLE_VENDOR_METRIC: Enable this option to use a - * vendor specific path metric or disable it to use the default Airtime - * metric. + * vendor specific path metric or disable it to use the default Airtime + * metric. * * @NL80211_MESH_SETUP_IE: Information elements for this mesh, for instance, a - * robust security network ie, or a vendor specific information element that - * vendors will use to identify the path selection methods and metrics in use. + * robust security network ie, or a vendor specific information element + * that vendors will use to identify the path selection methods and + * metrics in use. * * @NL80211_MESH_SETUP_USERSPACE_AUTH: Enable this option if an authentication - * daemon will be authenticating mesh candidates. + * daemon will be authenticating mesh candidates. 
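The mesh setup options above are passed nested under %NL80211_ATTR_MESH_SETUP in the join request. A hedged sketch of a userspace daemon asking to authenticate mesh candidates itself (libnl-3; the rest of the NL80211_CMD_JOIN_MESH message is omitted):

#include <errno.h>
#include <netlink/msg.h>
#include <netlink/attr.h>
#include <linux/nl80211.h>

static int put_mesh_userspace_auth(struct nl_msg *msg)
{
	struct nlattr *setup = nla_nest_start(msg, NL80211_ATTR_MESH_SETUP);

	if (!setup)
		return -ENOBUFS;

	/* flag attribute: its presence alone enables userspace auth */
	nla_put_flag(msg, NL80211_MESH_SETUP_USERSPACE_AUTH);

	nla_nest_end(msg, setup);
	return 0;
}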
* * @NL80211_MESH_SETUP_USERSPACE_AMPE: Enable this option if an authentication - * daemon will be securing peer link frames. AMPE is a secured version of Mesh - * Peering Management (MPM) and is implemented with the assistance of a - * userspace daemon. When this flag is set, the kernel will send peer - * management frames to a userspace daemon that will implement AMPE - * functionality (security capabilities selection, key confirmation, and key - * management). When the flag is unset (default), the kernel can autonomously - * complete (unsecured) mesh peering without the need of a userspace daemon. - * - * @NL80211_MESH_SETUP_ATTR_MAX: highest possible mesh setup attribute number + * daemon will be securing peer link frames. AMPE is a secured version of + * Mesh Peering Management (MPM) and is implemented with the assistance of + * a userspace daemon. When this flag is set, the kernel will send peer + * management frames to a userspace daemon that will implement AMPE + * functionality (security capabilities selection, key confirmation, and + * key management). When the flag is unset (default), the kernel can + * autonomously complete (unsecured) mesh peering without the need of a + * userspace daemon. * * @NL80211_MESH_SETUP_ENABLE_VENDOR_SYNC: Enable this option to use a - * vendor specific synchronization method or disable it to use the default - * neighbor offset synchronization + * vendor specific synchronization method or disable it to use the default + * neighbor offset synchronization + * + * @NL80211_MESH_SETUP_ATTR_MAX: highest possible mesh setup attribute number * * @__NL80211_MESH_SETUP_ATTR_AFTER_LAST: Internal use */ @@ -2490,12 +2589,19 @@ enum nl80211_tx_rate_attributes { * enum nl80211_band - Frequency band * @NL80211_BAND_2GHZ: 2.4 GHz ISM band * @NL80211_BAND_5GHZ: around 5 GHz band (4.9 - 5.7 GHz) + * @NL80211_BAND_60GHZ: around 60 GHz band (58.32 - 64.80 GHz) */ enum nl80211_band { NL80211_BAND_2GHZ, NL80211_BAND_5GHZ, + NL80211_BAND_60GHZ, }; +/** + * enum nl80211_ps_state - powersave state + * @NL80211_PS_DISABLED: powersave is disabled + * @NL80211_PS_ENABLED: powersave is enabled + */ enum nl80211_ps_state { NL80211_PS_DISABLED, NL80211_PS_ENABLED, @@ -2513,6 +2619,17 @@ enum nl80211_ps_state { * @NL80211_ATTR_CQM_RSSI_THRESHOLD_EVENT: RSSI threshold event * @NL80211_ATTR_CQM_PKT_LOSS_EVENT: a u32 value indicating that this many * consecutive packets were not acknowledged by the peer + * @NL80211_ATTR_CQM_TXE_RATE: TX error rate in %. Minimum % of TX failures + * during the given %NL80211_ATTR_CQM_TXE_INTVL before an + * %NL80211_CMD_NOTIFY_CQM with reported %NL80211_ATTR_CQM_TXE_RATE and + * %NL80211_ATTR_CQM_TXE_PKTS is generated. + * @NL80211_ATTR_CQM_TXE_PKTS: number of attempted packets in a given + * %NL80211_ATTR_CQM_TXE_INTVL before %NL80211_ATTR_CQM_TXE_RATE is + * checked. + * @NL80211_ATTR_CQM_TXE_INTVL: interval in seconds. Specifies the periodic + * interval in which %NL80211_ATTR_CQM_TXE_PKTS and + * %NL80211_ATTR_CQM_TXE_RATE must be satisfied before generating an + * %NL80211_CMD_NOTIFY_CQM. Set to 0 to turn off TX error reporting. 
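The three TXE attributes above are set together inside the nested %NL80211_ATTR_CQM attribute of an NL80211_CMD_SET_CQM request. A hedged libnl-3 sketch (message allocation and interface index omitted):

#include <errno.h>
#include <netlink/msg.h>
#include <netlink/attr.h>
#include <linux/nl80211.h>

static int put_cqm_txe(struct nl_msg *msg, unsigned int rate_pct,
		       unsigned int pkts, unsigned int intvl_secs)
{
	struct nlattr *cqm = nla_nest_start(msg, NL80211_ATTR_CQM);

	if (!cqm)
		return -ENOBUFS;

	nla_put_u32(msg, NL80211_ATTR_CQM_TXE_RATE, rate_pct);	  /* % failed  */
	nla_put_u32(msg, NL80211_ATTR_CQM_TXE_PKTS, pkts);	  /* samples   */
	nla_put_u32(msg, NL80211_ATTR_CQM_TXE_INTVL, intvl_secs); /* 0 = off   */

	nla_nest_end(msg, cqm);
	return 0;
}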
* @__NL80211_ATTR_CQM_AFTER_LAST: internal * @NL80211_ATTR_CQM_MAX: highest key attribute */ @@ -2522,6 +2639,9 @@ enum nl80211_attr_cqm { NL80211_ATTR_CQM_RSSI_HYST, NL80211_ATTR_CQM_RSSI_THRESHOLD_EVENT, NL80211_ATTR_CQM_PKT_LOSS_EVENT, + NL80211_ATTR_CQM_TXE_RATE, + NL80211_ATTR_CQM_TXE_PKTS, + NL80211_ATTR_CQM_TXE_INTVL, /* keep last */ __NL80211_ATTR_CQM_AFTER_LAST, @@ -2534,10 +2654,14 @@ enum nl80211_attr_cqm { * configured threshold * @NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH: The RSSI is higher than the * configured threshold + * @NL80211_CQM_RSSI_BEACON_LOSS_EVENT: The device experienced beacon loss. + * (Note that deauth/disassoc will still follow if the AP is not + * available. This event might get used as roaming event, etc.) */ enum nl80211_cqm_rssi_threshold_event { NL80211_CQM_RSSI_THRESHOLD_EVENT_LOW, NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH, + NL80211_CQM_RSSI_BEACON_LOSS_EVENT, }; @@ -2867,11 +2991,15 @@ enum nl80211_ap_sme_features { * @NL80211_FEATURE_HT_IBSS: This driver supports IBSS with HT datarates. * @NL80211_FEATURE_INACTIVITY_TIMER: This driver takes care of freeing up * the connected inactive stations in AP mode. + * @NL80211_FEATURE_CELL_BASE_REG_HINTS: This driver has been tested + * to work properly to suppport receiving regulatory hints from + * cellular base stations. */ enum nl80211_feature_flags { NL80211_FEATURE_SK_TX_STATUS = 1 << 0, NL80211_FEATURE_HT_IBSS = 1 << 1, NL80211_FEATURE_INACTIVITY_TIMER = 1 << 2, + NL80211_FEATURE_CELL_BASE_REG_HINTS = 1 << 3, }; /** diff --git a/include/linux/nl802154.h b/include/linux/nl802154.h index 5a3db3aa5f17..fd4f2d1cdf6c 100644 --- a/include/linux/nl802154.h +++ b/include/linux/nl802154.h @@ -130,18 +130,8 @@ enum { enum { __IEEE802154_DEV_INVALID = -1, - /* TODO: - * Nowadays three device types supported by this stack at linux-zigbee - * project: WPAN = 0, MONITOR = 1 and SMAC = 2. - * - * Since this stack implementation exists many years, it's definitely - * bad idea to change the assigned values due to they are already used - * by third-party userspace software like: iz-tools, wireshark... - * - * Currently only monitor device is added and initialized by '1' for - * compatibility. - */ - IEEE802154_DEV_MONITOR = 1, + IEEE802154_DEV_WPAN, + IEEE802154_DEV_MONITOR, __IEEE802154_DEV_MAX, }; diff --git a/include/linux/phy.h b/include/linux/phy.h index c291cae8ce32..93b3cf77f564 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -243,6 +243,15 @@ enum phy_state { PHY_RESUMING }; +/** + * struct phy_c45_device_ids - 802.3-c45 Device Identifiers + * @devices_in_package: Bit vector of devices present. + * @device_ids: The device identifer for each present device. + */ +struct phy_c45_device_ids { + u32 devices_in_package; + u32 device_ids[8]; +}; /* phy_device: An instance of a PHY * @@ -250,6 +259,8 @@ enum phy_state { * bus: Pointer to the bus this PHY is on * dev: driver model device structure for this PHY * phy_id: UID for this device found during discovery + * c45_ids: 802.3-c45 Device Identifers if is_c45. + * is_c45: Set to true if this phy uses clause 45 addressing. * state: state of the PHY for management purposes * dev_flags: Device-specific flags used by the PHY driver. 
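struct phy_c45_device_ids and the new is_c45/c45_ids members, together with the match_phy_device() hook added further down in this phy.h diff, let a driver bind on clause-45 identifiers rather than phy_id/phy_id_mask. A hedged sketch (the device id value is invented for illustration):

#include <linux/mdio.h>
#include <linux/phy.h>

#define EXAMPLE_C45_PMAPMD_ID	0x01410cc2	/* hypothetical identifier */

static int example_match_phy_device(struct phy_device *phydev)
{
	if (!phydev->is_c45)
		return 0;

	/* match on the PMA/PMD MMD identifier read during discovery */
	return phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] ==
	       EXAMPLE_C45_PMAPMD_ID;
}

A driver would point its .match_phy_device at such a helper; when the hook is NULL, matching falls back to phy_id and phy_id_mask as documented in the hunk below.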
* addr: Bus address of PHY @@ -285,6 +296,9 @@ struct phy_device { u32 phy_id; + struct phy_c45_device_ids c45_ids; + bool is_c45; + enum phy_state state; u32 dev_flags; @@ -412,6 +426,12 @@ struct phy_driver { /* Clears up any memory if needed */ void (*remove)(struct phy_device *phydev); + /* Returns true if this is a suitable driver for the given + * phydev. If NULL, matching is based on phy_id and + * phy_id_mask. + */ + int (*match_phy_device)(struct phy_device *phydev); + /* Handles ethtool queries for hardware time stamping. */ int (*ts_info)(struct phy_device *phydev, struct ethtool_ts_info *ti); @@ -480,7 +500,9 @@ static inline int phy_write(struct phy_device *phydev, u32 regnum, u16 val) return mdiobus_write(phydev->bus, phydev->addr, regnum, val); } -struct phy_device* get_phy_device(struct mii_bus *bus, int addr); +struct phy_device *phy_device_create(struct mii_bus *bus, int addr, int phy_id, + bool is_c45, struct phy_c45_device_ids *c45_ids); +struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45); int phy_device_register(struct phy_device *phy); int phy_init_hw(struct phy_device *phydev); struct phy_device * phy_attach(struct net_device *dev, @@ -511,7 +533,9 @@ int genphy_read_status(struct phy_device *phydev); int genphy_suspend(struct phy_device *phydev); int genphy_resume(struct phy_device *phydev); void phy_driver_unregister(struct phy_driver *drv); +void phy_drivers_unregister(struct phy_driver *drv, int n); int phy_driver_register(struct phy_driver *new_driver); +int phy_drivers_register(struct phy_driver *new_driver, int n); void phy_state_machine(struct work_struct *work); void phy_start_machine(struct phy_device *phydev, void (*handler)(struct net_device *)); @@ -532,6 +556,11 @@ int phy_register_fixup_for_uid(u32 phy_uid, u32 phy_uid_mask, int (*run)(struct phy_device *)); int phy_scan_fixups(struct phy_device *phydev); +int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable); +int phy_get_eee_err(struct phy_device *phydev); +int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data); +int phy_ethtool_get_eee(struct phy_device *phydev, struct ethtool_eee *data); + int __init mdio_bus_init(void); void mdio_bus_exit(void); diff --git a/include/linux/pkt_cls.h b/include/linux/pkt_cls.h index defbde203d07..082eafaf026b 100644 --- a/include/linux/pkt_cls.h +++ b/include/linux/pkt_cls.h @@ -451,8 +451,10 @@ enum { #define TCF_EM_U32 3 #define TCF_EM_META 4 #define TCF_EM_TEXT 5 -#define TCF_EM_VLAN 6 -#define TCF_EM_MAX 6 +#define TCF_EM_VLAN 6 +#define TCF_EM_CANID 7 +#define TCF_EM_IPSET 8 +#define TCF_EM_MAX 8 enum { TCF_EM_PROG_TC diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h index 2c1de8982c85..db71c4ad8624 100644 --- a/include/linux/rtnetlink.h +++ b/include/linux/rtnetlink.h @@ -612,12 +612,6 @@ struct tcamsg { #include <linux/mutex.h> #include <linux/netdevice.h> -static __inline__ int rtattr_strcmp(const struct rtattr *rta, const char *str) -{ - int len = strlen(str) + 1; - return len > rta->rta_len || memcmp(RTA_DATA(rta), str, len); -} - extern int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid, u32 group, int echo); extern int rtnl_unicast(struct sk_buff *skb, struct net *net, u32 pid); extern void rtnl_notify(struct sk_buff *skb, struct net *net, u32 pid, @@ -625,124 +619,7 @@ extern void rtnl_notify(struct sk_buff *skb, struct net *net, u32 pid, extern void rtnl_set_sk_err(struct net *net, u32 group, int error); extern int rtnetlink_put_metrics(struct sk_buff *skb, u32 
*metrics); extern int rtnl_put_cacheinfo(struct sk_buff *skb, struct dst_entry *dst, - u32 id, u32 ts, u32 tsage, long expires, - u32 error); - -extern void __rta_fill(struct sk_buff *skb, int attrtype, int attrlen, const void *data); - -#define RTA_PUT(skb, attrtype, attrlen, data) \ -({ if (unlikely(skb_tailroom(skb) < (int)RTA_SPACE(attrlen))) \ - goto rtattr_failure; \ - __rta_fill(skb, attrtype, attrlen, data); }) - -#define RTA_APPEND(skb, attrlen, data) \ -({ if (unlikely(skb_tailroom(skb) < (int)(attrlen))) \ - goto rtattr_failure; \ - memcpy(skb_put(skb, attrlen), data, attrlen); }) - -#define RTA_PUT_NOHDR(skb, attrlen, data) \ -({ RTA_APPEND(skb, RTA_ALIGN(attrlen), data); \ - memset(skb_tail_pointer(skb) - (RTA_ALIGN(attrlen) - attrlen), 0, \ - RTA_ALIGN(attrlen) - attrlen); }) - -#define RTA_PUT_U8(skb, attrtype, value) \ -({ u8 _tmp = (value); \ - RTA_PUT(skb, attrtype, sizeof(u8), &_tmp); }) - -#define RTA_PUT_U16(skb, attrtype, value) \ -({ u16 _tmp = (value); \ - RTA_PUT(skb, attrtype, sizeof(u16), &_tmp); }) - -#define RTA_PUT_U32(skb, attrtype, value) \ -({ u32 _tmp = (value); \ - RTA_PUT(skb, attrtype, sizeof(u32), &_tmp); }) - -#define RTA_PUT_U64(skb, attrtype, value) \ -({ u64 _tmp = (value); \ - RTA_PUT(skb, attrtype, sizeof(u64), &_tmp); }) - -#define RTA_PUT_SECS(skb, attrtype, value) \ - RTA_PUT_U64(skb, attrtype, (value) / HZ) - -#define RTA_PUT_MSECS(skb, attrtype, value) \ - RTA_PUT_U64(skb, attrtype, jiffies_to_msecs(value)) - -#define RTA_PUT_STRING(skb, attrtype, value) \ - RTA_PUT(skb, attrtype, strlen(value) + 1, value) - -#define RTA_PUT_FLAG(skb, attrtype) \ - RTA_PUT(skb, attrtype, 0, NULL); - -#define RTA_NEST(skb, type) \ -({ struct rtattr *__start = (struct rtattr *)skb_tail_pointer(skb); \ - RTA_PUT(skb, type, 0, NULL); \ - __start; }) - -#define RTA_NEST_END(skb, start) \ -({ (start)->rta_len = skb_tail_pointer(skb) - (unsigned char *)(start); \ - (skb)->len; }) - -#define RTA_NEST_COMPAT(skb, type, attrlen, data) \ -({ struct rtattr *__start = (struct rtattr *)skb_tail_pointer(skb); \ - RTA_PUT(skb, type, attrlen, data); \ - RTA_NEST(skb, type); \ - __start; }) - -#define RTA_NEST_COMPAT_END(skb, start) \ -({ struct rtattr *__nest = (void *)(start) + NLMSG_ALIGN((start)->rta_len); \ - (start)->rta_len = skb_tail_pointer(skb) - (unsigned char *)(start); \ - RTA_NEST_END(skb, __nest); \ - (skb)->len; }) - -#define RTA_NEST_CANCEL(skb, start) \ -({ if (start) \ - skb_trim(skb, (unsigned char *) (start) - (skb)->data); \ - -1; }) - -#define RTA_GET_U8(rta) \ -({ if (!rta || RTA_PAYLOAD(rta) < sizeof(u8)) \ - goto rtattr_failure; \ - *(u8 *) RTA_DATA(rta); }) - -#define RTA_GET_U16(rta) \ -({ if (!rta || RTA_PAYLOAD(rta) < sizeof(u16)) \ - goto rtattr_failure; \ - *(u16 *) RTA_DATA(rta); }) - -#define RTA_GET_U32(rta) \ -({ if (!rta || RTA_PAYLOAD(rta) < sizeof(u32)) \ - goto rtattr_failure; \ - *(u32 *) RTA_DATA(rta); }) - -#define RTA_GET_U64(rta) \ -({ u64 _tmp; \ - if (!rta || RTA_PAYLOAD(rta) < sizeof(u64)) \ - goto rtattr_failure; \ - memcpy(&_tmp, RTA_DATA(rta), sizeof(_tmp)); \ - _tmp; }) - -#define RTA_GET_FLAG(rta) (!!(rta)) - -#define RTA_GET_SECS(rta) ((unsigned long) RTA_GET_U64(rta) * HZ) -#define RTA_GET_MSECS(rta) (msecs_to_jiffies((unsigned long) RTA_GET_U64(rta))) - -static inline struct rtattr * -__rta_reserve(struct sk_buff *skb, int attrtype, int attrlen) -{ - struct rtattr *rta; - int size = RTA_LENGTH(attrlen); - - rta = (struct rtattr*)skb_put(skb, RTA_ALIGN(size)); - rta->rta_type = attrtype; - rta->rta_len = size; - 
memset(RTA_DATA(rta) + attrlen, 0, RTA_ALIGN(size) - size); - return rta; -} - -#define __RTA_PUT(skb, attrtype, attrlen) \ -({ if (unlikely(skb_tailroom(skb) < (int)RTA_SPACE(attrlen))) \ - goto rtattr_failure; \ - __rta_reserve(skb, attrtype, attrlen); }) + u32 id, long expires, u32 error); extern void rtmsg_ifinfo(int type, struct net_device *dev, unsigned change); @@ -794,13 +671,6 @@ extern void __rtnl_unlock(void); } \ } while(0) -static inline u32 rtm_get_table(struct rtattr **rta, u8 table) -{ - return RTA_GET_U32(rta[RTA_TABLE-1]); -rtattr_failure: - return table; -} - extern int ndo_dflt_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb, struct net_device *dev, diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 642cb7355df3..d205c4be7f5b 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1667,6 +1667,22 @@ static inline void skb_orphan(struct sk_buff *skb) } /** + * skb_orphan_frags - orphan the frags contained in a buffer + * @skb: buffer to orphan frags from + * @gfp_mask: allocation mask for replacement pages + * + * For each frag in the SKB which needs a destructor (i.e. has an + * owner) create a copy of that frag and release the original + * page by calling the destructor. + */ +static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask) +{ + if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY))) + return 0; + return skb_copy_ubufs(skb, gfp_mask); +} + +/** * __skb_queue_purge - empty a list * @list: list to empty * diff --git a/include/linux/snmp.h b/include/linux/snmp.h index 2e68f5ba0389..00bc189cb395 100644 --- a/include/linux/snmp.h +++ b/include/linux/snmp.h @@ -208,7 +208,6 @@ enum LINUX_MIB_TCPDSACKOFOSENT, /* TCPDSACKOfoSent */ LINUX_MIB_TCPDSACKRECV, /* TCPDSACKRecv */ LINUX_MIB_TCPDSACKOFORECV, /* TCPDSACKOfoRecv */ - LINUX_MIB_TCPABORTONSYN, /* TCPAbortOnSyn */ LINUX_MIB_TCPABORTONDATA, /* TCPAbortOnData */ LINUX_MIB_TCPABORTONCLOSE, /* TCPAbortOnClose */ LINUX_MIB_TCPABORTONMEMORY, /* TCPAbortOnMemory */ @@ -233,7 +232,13 @@ enum LINUX_MIB_TCPREQQFULLDOCOOKIES, /* TCPReqQFullDoCookies */ LINUX_MIB_TCPREQQFULLDROP, /* TCPReqQFullDrop */ LINUX_MIB_TCPRETRANSFAIL, /* TCPRetransFail */ - LINUX_MIB_TCPRCVCOALESCE, /* TCPRcvCoalesce */ + LINUX_MIB_TCPRCVCOALESCE, /* TCPRcvCoalesce */ + LINUX_MIB_TCPOFOQUEUE, /* TCPOFOQueue */ + LINUX_MIB_TCPOFODROP, /* TCPOFODrop */ + LINUX_MIB_TCPOFOMERGE, /* TCPOFOMerge */ + LINUX_MIB_TCPCHALLENGEACK, /* TCPChallengeACK */ + LINUX_MIB_TCPSYNCHALLENGE, /* TCPSYNChallenge */ + LINUX_MIB_TCPFASTOPENACTIVE, /* TCPFastOpenActive */ __LINUX_MIB_MAX }; diff --git a/include/linux/sock_diag.h b/include/linux/sock_diag.h index db4bae78bda9..e3e395acc2fd 100644 --- a/include/linux/sock_diag.h +++ b/include/linux/sock_diag.h @@ -18,6 +18,7 @@ enum { SK_MEMINFO_FWD_ALLOC, SK_MEMINFO_WMEM_QUEUED, SK_MEMINFO_OPTMEM, + SK_MEMINFO_BACKLOG, SK_MEMINFO_VARS, }; @@ -43,6 +44,5 @@ void sock_diag_save_cookie(void *sk, __u32 *cookie); int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attr); -extern struct sock *sock_diag_nlsk; #endif /* KERNEL */ #endif diff --git a/include/linux/socket.h b/include/linux/socket.h index 25d6322fb635..ba7b2e817cfa 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -268,6 +268,7 @@ struct ucred { #define MSG_SENDPAGE_NOTLAST 0x20000 /* sendpage() internal : not the last page */ #define MSG_EOF MSG_FIN +#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */ #define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exit for file 
descriptor received through SCM_RIGHTS */ diff --git a/include/linux/spi/at86rf230.h b/include/linux/spi/at86rf230.h new file mode 100644 index 000000000000..b2b1afbb3202 --- /dev/null +++ b/include/linux/spi/at86rf230.h @@ -0,0 +1,31 @@ +/* + * AT86RF230/RF231 driver + * + * Copyright (C) 2009-2012 Siemens AG + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + * + * Written by: + * Dmitry Eremin-Solenikov <dmitry.baryshkov@siemens.com> + */ +#ifndef AT86RF230_H +#define AT86RF230_H + +struct at86rf230_platform_data { + int rstn; + int slp_tr; + int dig2; +}; + +#endif diff --git a/include/linux/ssb/ssb.h b/include/linux/ssb/ssb.h index bc14bd738ade..bb674c02f306 100644 --- a/include/linux/ssb/ssb.h +++ b/include/linux/ssb/ssb.h @@ -243,6 +243,7 @@ struct ssb_bus_ops { #define SSB_DEV_MINI_MACPHY 0x823 #define SSB_DEV_ARM_1176 0x824 #define SSB_DEV_ARM_7TDMI 0x825 +#define SSB_DEV_ARM_CM3 0x82A /* Vendor-ID values */ #define SSB_VENDOR_BROADCOM 0x4243 diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 5f359dbfcdce..eb125a4c30b3 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -243,6 +243,16 @@ static inline unsigned int tcp_optlen(const struct sk_buff *skb) return (tcp_hdr(skb)->doff - 5) * 4; } +/* TCP Fast Open */ +#define TCP_FASTOPEN_COOKIE_MIN 4 /* Min Fast Open Cookie size in bytes */ +#define TCP_FASTOPEN_COOKIE_MAX 16 /* Max Fast Open Cookie size in bytes */ + +/* TCP Fast Open Cookie as stored in memory */ +struct tcp_fastopen_cookie { + s8 len; + u8 val[TCP_FASTOPEN_COOKIE_MAX]; +}; + /* This defines a selective acknowledgement block. */ struct tcp_sack_block_wire { __be32 start_seq; @@ -339,6 +349,9 @@ struct tcp_sock { u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */ u32 lsndtime; /* timestamp of last sent data packet (for restart window) */ + struct list_head tsq_node; /* anchor in tsq_tasklet.head list */ + unsigned long tsq_flags; + /* Data for direct copy to user */ struct { struct sk_buff_head prequeue; @@ -373,7 +386,9 @@ struct tcp_sock { unused : 1; u8 repair_queue; u8 do_early_retrans:1,/* Enable RFC5827 early-retransmit */ - early_retrans_delayed:1; /* Delayed ER timer installed */ + early_retrans_delayed:1, /* Delayed ER timer installed */ + syn_data:1, /* SYN includes data */ + syn_fastopen:1; /* SYN includes Fast Open option */ /* RTT measurement */ u32 srtt; /* smoothed round trip time << 3 */ @@ -478,6 +493,9 @@ struct tcp_sock { u32 probe_seq_start; u32 probe_seq_end; } mtu_probe; + u32 mtu_info; /* We received an ICMP_FRAG_NEEDED / ICMPV6_PKT_TOOBIG + * while socket was owned by user. 
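MSG_FASTOPEN (added to linux/socket.h above) and the tcp_fastopen_cookie bits in this tcp.h hunk implement the client side of TCP Fast Open. A hedged userspace sketch of one way to use the flag: a single sendto() on an unconnected TCP socket both initiates the connection and queues the payload so it can ride in the SYN once a cookie has been learned (address and port are placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

#ifndef MSG_FASTOPEN
#define MSG_FASTOPEN	0x20000000	/* value from the hunk above */
#endif

static int tfo_connect_and_send(const void *payload, size_t len)
{
	struct sockaddr_in dst;
	int fd;

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(80);			/* placeholder port */
	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);	/* placeholder addr */

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	/* connect + send in one call; falls back to a regular three-way
	 * handshake (data sent after SYN/ACK) when no cookie is cached */
	if (sendto(fd, payload, len, MSG_FASTOPEN,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
		/* error handling omitted */
	}

	return fd;
}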
+ */ #ifdef CONFIG_TCP_MD5SIG /* TCP AF-Specific parts; only used by MD5 Signature support so far */ @@ -487,6 +505,9 @@ struct tcp_sock { struct tcp_md5sig_info __rcu *md5sig_info; #endif +/* TCP fastopen related information */ + struct tcp_fastopen_request *fastopen_req; + /* When the cookie options are generated and exchanged, then this * object holds a reference to them (cookie_values->kref). Also * contains related tcp_cookie_transactions fields. @@ -494,6 +515,17 @@ struct tcp_sock { struct tcp_cookie_values *cookie_values; }; +enum tsq_flags { + TSQ_THROTTLED, + TSQ_QUEUED, + TCP_TSQ_DEFERRED, /* tcp_tasklet_func() found socket was owned */ + TCP_WRITE_TIMER_DEFERRED, /* tcp_write_timer() found socket was owned */ + TCP_DELACK_TIMER_DEFERRED, /* tcp_delack_timer() found socket was owned */ + TCP_MTU_REDUCED_DEFERRED, /* tcp_v{4|6}_err() could not call + * tcp_v{4|6}_mtu_reduced() + */ +}; + static inline struct tcp_sock *tcp_sk(const struct sock *sk) { return (struct tcp_sock *)sk; @@ -507,7 +539,7 @@ struct tcp_timewait_sock { u32 tw_ts_recent; long tw_ts_recent_stamp; #ifdef CONFIG_TCP_MD5SIG - struct tcp_md5sig_key *tw_md5_key; + struct tcp_md5sig_key *tw_md5_key; #endif /* Few sockets in timewait have cookies; in that case, then this * object holds a reference to them (tw_cookie_values->kref). diff --git a/include/linux/tipc_config.h b/include/linux/tipc_config.h index 9730b0e51e46..c98928420100 100644 --- a/include/linux/tipc_config.h +++ b/include/linux/tipc_config.h @@ -102,8 +102,8 @@ #define TIPC_CMD_SET_LINK_TOL 0x4107 /* tx link_config, rx none */ #define TIPC_CMD_SET_LINK_PRI 0x4108 /* tx link_config, rx none */ #define TIPC_CMD_SET_LINK_WINDOW 0x4109 /* tx link_config, rx none */ -#define TIPC_CMD_SET_LOG_SIZE 0x410A /* tx unsigned, rx none */ -#define TIPC_CMD_DUMP_LOG 0x410B /* tx none, rx ultra_string */ +#define TIPC_CMD_SET_LOG_SIZE 0x410A /* obsoleted */ +#define TIPC_CMD_DUMP_LOG 0x410B /* obsoleted */ #define TIPC_CMD_RESET_LINK_STATS 0x410C /* tx link_name, rx none */ /* diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h index 76f439647c4b..f87cf622317f 100644 --- a/include/linux/usb/usbnet.h +++ b/include/linux/usb/usbnet.h @@ -66,9 +66,8 @@ struct usbnet { # define EVENT_STS_SPLIT 3 # define EVENT_LINK_RESET 4 # define EVENT_RX_PAUSED 5 -# define EVENT_DEV_WAKING 6 -# define EVENT_DEV_ASLEEP 7 -# define EVENT_DEV_OPEN 8 +# define EVENT_DEV_ASLEEP 6 +# define EVENT_DEV_OPEN 7 }; static inline struct usb_driver *driver_of(struct usb_interface *intf) diff --git a/include/net/addrconf.h b/include/net/addrconf.h index f2b801c4b555..089a09d001d1 100644 --- a/include/net/addrconf.h +++ b/include/net/addrconf.h @@ -46,7 +46,8 @@ struct prefix_info { #include <net/if_inet6.h> #include <net/ipv6.h> -#define IN6_ADDR_HSIZE 16 +#define IN6_ADDR_HSIZE_SHIFT 4 +#define IN6_ADDR_HSIZE (1 << IN6_ADDR_HSIZE_SHIFT) extern int addrconf_init(void); extern void addrconf_cleanup(void); diff --git a/include/net/af_unix.h b/include/net/af_unix.h index 2ee33da36a7a..b5f8988e4283 100644 --- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -14,10 +14,11 @@ extern struct sock *unix_get_socket(struct file *filp); extern struct sock *unix_peer_get(struct sock *); #define UNIX_HASH_SIZE 256 +#define UNIX_HASH_BITS 8 extern unsigned int unix_tot_inflight; extern spinlock_t unix_table_lock; -extern struct hlist_head unix_socket_table[UNIX_HASH_SIZE + 1]; +extern struct hlist_head unix_socket_table[2 * UNIX_HASH_SIZE]; struct unix_address { atomic_t refcnt; diff --git 
a/include/net/arp.h b/include/net/arp.h index 4a1f3fb562eb..7f7df93f37cd 100644 --- a/include/net/arp.h +++ b/include/net/arp.h @@ -15,24 +15,31 @@ static inline u32 arp_hashfn(u32 key, const struct net_device *dev, u32 hash_rnd return val * hash_rnd; } -static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key) +static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key) { - struct neigh_hash_table *nht; + struct neigh_hash_table *nht = rcu_dereference_bh(arp_tbl.nht); struct neighbour *n; u32 hash_val; - rcu_read_lock_bh(); - nht = rcu_dereference_bh(arp_tbl.nht); hash_val = arp_hashfn(key, dev, nht->hash_rnd[0]) >> (32 - nht->hash_shift); for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]); n != NULL; n = rcu_dereference_bh(n->next)) { - if (n->dev == dev && *(u32 *)n->primary_key == key) { - if (!atomic_inc_not_zero(&n->refcnt)) - n = NULL; - break; - } + if (n->dev == dev && *(u32 *)n->primary_key == key) + return n; } + + return NULL; +} + +static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key) +{ + struct neighbour *n; + + rcu_read_lock_bh(); + n = __ipv4_neigh_lookup_noref(dev, key); + if (n && !atomic_inc_not_zero(&n->refcnt)) + n = NULL; rcu_read_unlock_bh(); return n; diff --git a/include/net/bluetooth/a2mp.h b/include/net/bluetooth/a2mp.h new file mode 100644 index 000000000000..6a76e0a0705e --- /dev/null +++ b/include/net/bluetooth/a2mp.h @@ -0,0 +1,126 @@ +/* + Copyright (c) 2010,2011 Code Aurora Forum. All rights reserved. + Copyright (c) 2011,2012 Intel Corp. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License version 2 and + only version 2 as published by the Free Software Foundation. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. 
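The arp.h change above splits the neighbour lookup so that callers already inside an RCU-BH read-side section can use __ipv4_neigh_lookup_noref() without taking a reference. A hedged sketch of such a caller (the function and its use are invented for illustration; the returned pointer is only valid while the lock is held):

#include <linux/netdevice.h>
#include <net/arp.h>
#include <net/neighbour.h>

static bool example_nexthop_resolved(struct net_device *dev, __be32 nexthop)
{
	struct neighbour *n;
	bool resolved;

	rcu_read_lock_bh();
	n = __ipv4_neigh_lookup_noref(dev, (__force u32)nexthop);
	resolved = n && (n->nud_state & NUD_VALID);
	rcu_read_unlock_bh();

	return resolved;
}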
+*/ + +#ifndef __A2MP_H +#define __A2MP_H + +#include <net/bluetooth/l2cap.h> + +#define A2MP_FEAT_EXT 0x8000 + +struct amp_mgr { + struct l2cap_conn *l2cap_conn; + struct l2cap_chan *a2mp_chan; + struct kref kref; + __u8 ident; + __u8 handle; + unsigned long flags; +}; + +struct a2mp_cmd { + __u8 code; + __u8 ident; + __le16 len; + __u8 data[0]; +} __packed; + +/* A2MP command codes */ +#define A2MP_COMMAND_REJ 0x01 +struct a2mp_cmd_rej { + __le16 reason; + __u8 data[0]; +} __packed; + +#define A2MP_DISCOVER_REQ 0x02 +struct a2mp_discov_req { + __le16 mtu; + __le16 ext_feat; +} __packed; + +struct a2mp_cl { + __u8 id; + __u8 type; + __u8 status; +} __packed; + +#define A2MP_DISCOVER_RSP 0x03 +struct a2mp_discov_rsp { + __le16 mtu; + __le16 ext_feat; + struct a2mp_cl cl[0]; +} __packed; + +#define A2MP_CHANGE_NOTIFY 0x04 +#define A2MP_CHANGE_RSP 0x05 + +#define A2MP_GETINFO_REQ 0x06 +struct a2mp_info_req { + __u8 id; +} __packed; + +#define A2MP_GETINFO_RSP 0x07 +struct a2mp_info_rsp { + __u8 id; + __u8 status; + __le32 total_bw; + __le32 max_bw; + __le32 min_latency; + __le16 pal_cap; + __le16 assoc_size; +} __packed; + +#define A2MP_GETAMPASSOC_REQ 0x08 +struct a2mp_amp_assoc_req { + __u8 id; +} __packed; + +#define A2MP_GETAMPASSOC_RSP 0x09 +struct a2mp_amp_assoc_rsp { + __u8 id; + __u8 status; + __u8 amp_assoc[0]; +} __packed; + +#define A2MP_CREATEPHYSLINK_REQ 0x0A +#define A2MP_DISCONNPHYSLINK_REQ 0x0C +struct a2mp_physlink_req { + __u8 local_id; + __u8 remote_id; + __u8 amp_assoc[0]; +} __packed; + +#define A2MP_CREATEPHYSLINK_RSP 0x0B +#define A2MP_DISCONNPHYSLINK_RSP 0x0D +struct a2mp_physlink_rsp { + __u8 local_id; + __u8 remote_id; + __u8 status; +} __packed; + +/* A2MP response status */ +#define A2MP_STATUS_SUCCESS 0x00 +#define A2MP_STATUS_INVALID_CTRL_ID 0x01 +#define A2MP_STATUS_UNABLE_START_LINK_CREATION 0x02 +#define A2MP_STATUS_NO_PHYSICAL_LINK_EXISTS 0x02 +#define A2MP_STATUS_COLLISION_OCCURED 0x03 +#define A2MP_STATUS_DISCONN_REQ_RECVD 0x04 +#define A2MP_STATUS_PHYS_LINK_EXISTS 0x05 +#define A2MP_STATUS_SECURITY_VIOLATION 0x06 + +void amp_mgr_get(struct amp_mgr *mgr); +int amp_mgr_put(struct amp_mgr *mgr); +struct l2cap_chan *a2mp_channel_create(struct l2cap_conn *conn, + struct sk_buff *skb); + +#endif /* __A2MP_H */ diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index 961669b648fd..565d4bee1e49 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -1,4 +1,4 @@ -/* +/* BlueZ - Bluetooth protocol stack for Linux Copyright (C) 2000-2001 Qualcomm Incorporated @@ -12,22 +12,19 @@ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) AND AUTHOR(S) BE LIABLE FOR ANY - CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES - WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- ALL LIABILITY, INCLUDING LIABILITY FOR INFRINGEMENT OF ANY PATENTS, - COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS, RELATING TO USE OF THIS + ALL LIABILITY, INCLUDING LIABILITY FOR INFRINGEMENT OF ANY PATENTS, + COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS, RELATING TO USE OF THIS SOFTWARE IS DISCLAIMED. */ #ifndef __BLUETOOTH_H #define __BLUETOOTH_H -#include <asm/types.h> -#include <asm/byteorder.h> -#include <linux/list.h> #include <linux/poll.h> #include <net/sock.h> @@ -168,8 +165,8 @@ typedef struct { #define BDADDR_LE_PUBLIC 0x01 #define BDADDR_LE_RANDOM 0x02 -#define BDADDR_ANY (&(bdaddr_t) {{0, 0, 0, 0, 0, 0}}) -#define BDADDR_LOCAL (&(bdaddr_t) {{0, 0, 0, 0xff, 0xff, 0xff}}) +#define BDADDR_ANY (&(bdaddr_t) {{0, 0, 0, 0, 0, 0} }) +#define BDADDR_LOCAL (&(bdaddr_t) {{0, 0, 0, 0xff, 0xff, 0xff} }) /* Copy, swap, convert BD Address */ static inline int bacmp(bdaddr_t *ba1, bdaddr_t *ba2) @@ -215,7 +212,7 @@ int bt_sock_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t len, int flags); int bt_sock_stream_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t len, int flags); -uint bt_sock_poll(struct file * file, struct socket *sock, poll_table *wait); +uint bt_sock_poll(struct file *file, struct socket *sock, poll_table *wait); int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); int bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo); @@ -225,12 +222,12 @@ struct sock *bt_accept_dequeue(struct sock *parent, struct socket *newsock); /* Skb helpers */ struct l2cap_ctrl { - unsigned int sframe : 1, - poll : 1, - final : 1, - fcs : 1, - sar : 2, - super : 2; + unsigned int sframe:1, + poll:1, + final:1, + fcs:1, + sar:2, + super:2; __u16 reqseq; __u16 txseq; __u8 retries; @@ -249,7 +246,8 @@ static inline struct sk_buff *bt_skb_alloc(unsigned int len, gfp_t how) { struct sk_buff *skb; - if ((skb = alloc_skb(len + BT_SKB_RESERVE, how))) { + skb = alloc_skb(len + BT_SKB_RESERVE, how); + if (skb) { skb_reserve(skb, BT_SKB_RESERVE); bt_cb(skb)->incoming = 0; } @@ -261,7 +259,8 @@ static inline struct sk_buff *bt_skb_send_alloc(struct sock *sk, { struct sk_buff *skb; - if ((skb = sock_alloc_send_skb(sk, len + BT_SKB_RESERVE, nb, err))) { + skb = sock_alloc_send_skb(sk, len + BT_SKB_RESERVE, nb, err); + if (skb) { skb_reserve(skb, BT_SKB_RESERVE); bt_cb(skb)->incoming = 0; } diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 3def64ba77fa..ccd723e0f783 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -30,6 +30,9 @@ #define HCI_MAX_EVENT_SIZE 260 #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4) +#define HCI_LINK_KEY_SIZE 16 +#define HCI_AMP_LINK_KEY_SIZE (2 * HCI_LINK_KEY_SIZE) + /* HCI dev events */ #define HCI_DEV_REG 1 #define HCI_DEV_UNREG 2 @@ -56,9 +59,12 @@ #define HCI_BREDR 0x00 #define HCI_AMP 0x01 +/* First BR/EDR Controller shall have ID = 0 */ +#define HCI_BREDR_ID 0 + /* HCI device quirks */ enum { - HCI_QUIRK_NO_RESET, + HCI_QUIRK_RESET_ON_CLOSE, HCI_QUIRK_RAW_DEVICE, HCI_QUIRK_FIXUP_BUFFER_SIZE }; @@ -133,13 +139,12 @@ enum { #define HCIINQUIRY _IOR('H', 240, int) /* HCI timeouts */ -#define HCI_CONNECT_TIMEOUT (40000) /* 40 seconds */ -#define HCI_DISCONN_TIMEOUT (2000) /* 2 seconds */ -#define HCI_PAIRING_TIMEOUT (60000) /* 60 seconds */ -#define HCI_IDLE_TIMEOUT (6000) /* 6 seconds */ -#define HCI_INIT_TIMEOUT (10000) /* 10 seconds */ -#define HCI_CMD_TIMEOUT (1000) /* 1 seconds */ -#define HCI_ACL_TX_TIMEOUT (45000) /* 45 seconds */ +#define 
HCI_DISCONN_TIMEOUT msecs_to_jiffies(2000) /* 2 seconds */ +#define HCI_PAIRING_TIMEOUT msecs_to_jiffies(60000) /* 60 seconds */ +#define HCI_INIT_TIMEOUT msecs_to_jiffies(10000) /* 10 seconds */ +#define HCI_CMD_TIMEOUT msecs_to_jiffies(1000) /* 1 second */ +#define HCI_ACL_TX_TIMEOUT msecs_to_jiffies(45000) /* 45 seconds */ +#define HCI_AUTO_OFF_TIMEOUT msecs_to_jiffies(2000) /* 2 seconds */ /* HCI data types */ #define HCI_COMMAND_PKT 0x01 @@ -371,7 +376,7 @@ struct hci_cp_reject_conn_req { #define HCI_OP_LINK_KEY_REPLY 0x040b struct hci_cp_link_key_reply { bdaddr_t bdaddr; - __u8 link_key[16]; + __u8 link_key[HCI_LINK_KEY_SIZE]; } __packed; #define HCI_OP_LINK_KEY_NEG_REPLY 0x040c @@ -523,6 +528,28 @@ struct hci_cp_io_capability_neg_reply { __u8 reason; } __packed; +#define HCI_OP_CREATE_PHY_LINK 0x0435 +struct hci_cp_create_phy_link { + __u8 phy_handle; + __u8 key_len; + __u8 key_type; + __u8 key[HCI_AMP_LINK_KEY_SIZE]; +} __packed; + +#define HCI_OP_ACCEPT_PHY_LINK 0x0436 +struct hci_cp_accept_phy_link { + __u8 phy_handle; + __u8 key_len; + __u8 key_type; + __u8 key[HCI_AMP_LINK_KEY_SIZE]; +} __packed; + +#define HCI_OP_DISCONN_PHY_LINK 0x0437 +struct hci_cp_disconn_phy_link { + __u8 phy_handle; + __u8 reason; +} __packed; + #define HCI_OP_SNIFF_MODE 0x0803 struct hci_cp_sniff_mode { __le16 handle; @@ -818,6 +845,31 @@ struct hci_rp_read_local_amp_info { __le32 be_flush_to; } __packed; +#define HCI_OP_READ_LOCAL_AMP_ASSOC 0x140a +struct hci_cp_read_local_amp_assoc { + __u8 phy_handle; + __le16 len_so_far; + __le16 max_len; +} __packed; +struct hci_rp_read_local_amp_assoc { + __u8 status; + __u8 phy_handle; + __le16 rem_len; + __u8 frag[0]; +} __packed; + +#define HCI_OP_WRITE_REMOTE_AMP_ASSOC 0x140b +struct hci_cp_write_remote_amp_assoc { + __u8 phy_handle; + __le16 len_so_far; + __le16 rem_len; + __u8 frag[0]; +} __packed; +struct hci_rp_write_remote_amp_assoc { + __u8 status; + __u8 phy_handle; +} __packed; + #define HCI_OP_LE_SET_EVENT_MASK 0x2001 struct hci_cp_le_set_event_mask { __u8 mask[8]; @@ -1048,7 +1100,7 @@ struct hci_ev_link_key_req { #define HCI_EV_LINK_KEY_NOTIFY 0x18 struct hci_ev_link_key_notify { bdaddr_t bdaddr; - __u8 link_key[16]; + __u8 link_key[HCI_LINK_KEY_SIZE]; __u8 key_type; } __packed; @@ -1196,6 +1248,39 @@ struct hci_ev_le_meta { __u8 subevent; } __packed; +#define HCI_EV_PHY_LINK_COMPLETE 0x40 +struct hci_ev_phy_link_complete { + __u8 status; + __u8 phy_handle; +} __packed; + +#define HCI_EV_CHANNEL_SELECTED 0x41 +struct hci_ev_channel_selected { + __u8 phy_handle; +} __packed; + +#define HCI_EV_DISCONN_PHY_LINK_COMPLETE 0x42 +struct hci_ev_disconn_phy_link_complete { + __u8 status; + __u8 phy_handle; + __u8 reason; +} __packed; + +#define HCI_EV_LOGICAL_LINK_COMPLETE 0x45 +struct hci_ev_logical_link_complete { + __u8 status; + __le16 handle; + __u8 phy_handle; + __u8 flow_spec_id; +} __packed; + +#define HCI_EV_DISCONN_LOGICAL_LINK_COMPLETE 0x46 +struct hci_ev_disconn_logical_link_complete { + __u8 status; + __le16 handle; + __u8 reason; +} __packed; + #define HCI_EV_NUM_COMP_BLOCKS 0x48 struct hci_comp_blocks_info { __le16 handle; @@ -1296,7 +1381,6 @@ struct hci_sco_hdr { __u8 dlen; } __packed; -#include <linux/skbuff.h> static inline struct hci_event_hdr *hci_event_hdr(const struct sk_buff *skb) { return (struct hci_event_hdr *) skb->data; @@ -1313,12 +1397,12 @@ static inline struct hci_sco_hdr *hci_sco_hdr(const struct sk_buff *skb) } /* Command opcode pack/unpack */ -#define hci_opcode_pack(ogf, ocf) (__u16) ((ocf & 0x03ff)|(ogf << 10)) +#define 
hci_opcode_pack(ogf, ocf) ((__u16) ((ocf & 0x03ff)|(ogf << 10))) #define hci_opcode_ogf(op) (op >> 10) #define hci_opcode_ocf(op) (op & 0x03ff) /* ACL handle and flags pack/unpack */ -#define hci_handle_pack(h, f) (__u16) ((h & 0x0fff)|(f << 12)) +#define hci_handle_pack(h, f) ((__u16) ((h & 0x0fff)|(f << 12))) #define hci_handle(h) (h & 0x0fff) #define hci_flags(h) (h >> 12) diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 9fc7728f94e4..475b8c04ba52 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -25,7 +25,6 @@ #ifndef __HCI_CORE_H #define __HCI_CORE_H -#include <linux/interrupt.h> #include <net/bluetooth/hci.h> /* HCI priority */ @@ -65,7 +64,7 @@ struct discovery_state { DISCOVERY_RESOLVING, DISCOVERY_STOPPING, } state; - struct list_head all; /* All devices found during inquiry */ + struct list_head all; /* All devices found during inquiry */ struct list_head unknown; /* Name state not known */ struct list_head resolve; /* Name needs to be resolved */ __u32 timestamp; @@ -105,7 +104,7 @@ struct link_key { struct list_head list; bdaddr_t bdaddr; u8 type; - u8 val[16]; + u8 val[HCI_LINK_KEY_SIZE]; u8 pin_len; }; @@ -333,6 +332,7 @@ struct hci_conn { void *l2cap_data; void *sco_data; void *smp_conn; + struct amp_mgr *amp_mgr; struct hci_conn *link; @@ -360,7 +360,8 @@ extern int l2cap_connect_cfm(struct hci_conn *hcon, u8 status); extern int l2cap_disconn_ind(struct hci_conn *hcon); extern int l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason); extern int l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt); -extern int l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags); +extern int l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, + u16 flags); extern int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr); extern int sco_connect_cfm(struct hci_conn *hcon, __u8 status); @@ -429,8 +430,8 @@ enum { static inline bool hci_conn_ssp_enabled(struct hci_conn *conn) { struct hci_dev *hdev = conn->hdev; - return (test_bit(HCI_SSP_ENABLED, &hdev->dev_flags) && - test_bit(HCI_CONN_SSP_ENABLED, &conn->flags)); + return test_bit(HCI_SSP_ENABLED, &hdev->dev_flags) && + test_bit(HCI_CONN_SSP_ENABLED, &conn->flags); } static inline void hci_conn_hash_init(struct hci_dev *hdev) @@ -586,18 +587,24 @@ void hci_conn_put_device(struct hci_conn *conn); static inline void hci_conn_hold(struct hci_conn *conn) { + BT_DBG("hcon %p refcnt %d -> %d", conn, atomic_read(&conn->refcnt), + atomic_read(&conn->refcnt) + 1); + atomic_inc(&conn->refcnt); cancel_delayed_work(&conn->disc_work); } static inline void hci_conn_put(struct hci_conn *conn) { + BT_DBG("hcon %p refcnt %d -> %d", conn, atomic_read(&conn->refcnt), + atomic_read(&conn->refcnt) - 1); + if (atomic_dec_and_test(&conn->refcnt)) { unsigned long timeo; if (conn->type == ACL_LINK || conn->type == LE_LINK) { del_timer(&conn->idle_timer); if (conn->state == BT_CONNECTED) { - timeo = msecs_to_jiffies(conn->disc_timeout); + timeo = conn->disc_timeout; if (!conn->out) timeo *= 2; } else { @@ -640,6 +647,19 @@ static inline void hci_set_drvdata(struct hci_dev *hdev, void *data) dev_set_drvdata(&hdev->dev, data); } +/* hci_dev_list shall be locked */ +static inline uint8_t __hci_num_ctrl(void) +{ + uint8_t count = 0; + struct list_head *p; + + list_for_each(p, &hci_dev_list) { + count++; + } + + return count; +} + struct hci_dev *hci_dev_get(int index); struct hci_dev *hci_get_route(bdaddr_t *src, bdaddr_t *dst); @@ -661,7 +681,8 @@ int 
hci_get_conn_info(struct hci_dev *hdev, void __user *arg); int hci_get_auth_info(struct hci_dev *hdev, void __user *arg); int hci_inquiry(void __user *arg); -struct bdaddr_list *hci_blacklist_lookup(struct hci_dev *hdev, bdaddr_t *bdaddr); +struct bdaddr_list *hci_blacklist_lookup(struct hci_dev *hdev, + bdaddr_t *bdaddr); int hci_blacklist_clear(struct hci_dev *hdev); int hci_blacklist_add(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 type); int hci_blacklist_del(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 type); diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h index 1c7d1cd5e679..a7679f8913d2 100644 --- a/include/net/bluetooth/l2cap.h +++ b/include/net/bluetooth/l2cap.h @@ -40,11 +40,11 @@ #define L2CAP_DEFAULT_MONITOR_TO 12000 /* 12 seconds */ #define L2CAP_DEFAULT_MAX_PDU_SIZE 1009 /* Sized for 3-DH5 packet */ #define L2CAP_DEFAULT_ACK_TO 200 -#define L2CAP_LE_DEFAULT_MTU 23 #define L2CAP_DEFAULT_MAX_SDU_SIZE 0xFFFF #define L2CAP_DEFAULT_SDU_ITIME 0xFFFFFFFF #define L2CAP_DEFAULT_ACC_LAT 0xFFFFFFFF #define L2CAP_BREDR_MAX_PAYLOAD 1019 /* 3-DH5 packet */ +#define L2CAP_LE_MIN_MTU 23 #define L2CAP_DISC_TIMEOUT msecs_to_jiffies(100) #define L2CAP_DISC_REJ_TIMEOUT msecs_to_jiffies(5000) @@ -52,6 +52,8 @@ #define L2CAP_CONN_TIMEOUT msecs_to_jiffies(40000) #define L2CAP_INFO_TIMEOUT msecs_to_jiffies(4000) +#define L2CAP_A2MP_DEFAULT_MTU 670 + /* L2CAP socket address */ struct sockaddr_l2 { sa_family_t l2_family; @@ -229,9 +231,14 @@ struct l2cap_conn_rsp { __le16 status; } __packed; +/* protocol/service multiplexer (PSM) */ +#define L2CAP_PSM_SDP 0x0001 +#define L2CAP_PSM_RFCOMM 0x0003 + /* channel indentifier */ #define L2CAP_CID_SIGNALING 0x0001 #define L2CAP_CID_CONN_LESS 0x0002 +#define L2CAP_CID_A2MP 0x0003 #define L2CAP_CID_LE_DATA 0x0004 #define L2CAP_CID_LE_SIGNALING 0x0005 #define L2CAP_CID_SMP 0x0006 @@ -271,6 +278,9 @@ struct l2cap_conf_rsp { #define L2CAP_CONF_PENDING 0x0004 #define L2CAP_CONF_EFS_REJECT 0x0005 +/* configuration req/rsp continuation flag */ +#define L2CAP_CONF_FLAG_CONTINUATION 0x0001 + struct l2cap_conf_opt { __u8 type; __u8 len; @@ -419,11 +429,6 @@ struct l2cap_seq_list { #define L2CAP_SEQ_LIST_CLEAR 0xFFFF #define L2CAP_SEQ_LIST_TAIL 0x8000 -struct srej_list { - __u16 tx_seq; - struct list_head list; -}; - struct l2cap_chan { struct sock *sk; @@ -459,6 +464,7 @@ struct l2cap_chan { __u16 tx_win; __u16 tx_win_max; + __u16 ack_win; __u8 max_tx; __u16 retrans_timeout; __u16 monitor_timeout; @@ -475,14 +481,12 @@ struct l2cap_chan { __u16 expected_ack_seq; __u16 expected_tx_seq; __u16 buffer_seq; - __u16 buffer_seq_srej; __u16 srej_save_reqseq; __u16 last_acked_seq; __u16 frames_sent; __u16 unacked_frames; __u8 retry_count; __u16 srej_queue_next; - __u8 num_acked; __u16 sdu_len; struct sk_buff *sdu; struct sk_buff *sdu_last_frag; @@ -515,7 +519,6 @@ struct l2cap_chan { struct sk_buff_head srej_q; struct l2cap_seq_list srej_list; struct l2cap_seq_list retrans_list; - struct list_head srej_l; struct list_head list; struct list_head global_l; @@ -528,10 +531,14 @@ struct l2cap_chan { struct l2cap_ops { char *name; - struct l2cap_chan *(*new_connection) (void *data); - int (*recv) (void *data, struct sk_buff *skb); - void (*close) (void *data); - void (*state_change) (void *data, int state); + struct l2cap_chan *(*new_connection) (struct l2cap_chan *chan); + int (*recv) (struct l2cap_chan * chan, + struct sk_buff *skb); + void (*teardown) (struct l2cap_chan *chan, int err); + void (*close) (struct l2cap_chan *chan); + void (*state_change) 
(struct l2cap_chan *chan, + int state); + void (*ready) (struct l2cap_chan *chan); struct sk_buff *(*alloc_skb) (struct l2cap_chan *chan, unsigned long len, int nb); }; @@ -575,6 +582,7 @@ struct l2cap_conn { #define L2CAP_CHAN_RAW 1 #define L2CAP_CHAN_CONN_LESS 2 #define L2CAP_CHAN_CONN_ORIENTED 3 +#define L2CAP_CHAN_CONN_FIX_A2MP 4 /* ----- L2CAP socket info ----- */ #define l2cap_pi(sk) ((struct l2cap_pinfo *) sk) @@ -597,6 +605,7 @@ enum { CONF_EWS_RECV, CONF_LOC_CONF_PEND, CONF_REM_CONF_PEND, + CONF_NOT_COMPLETE, }; #define L2CAP_CONF_MAX_CONF_REQ 2 @@ -664,11 +673,15 @@ enum { static inline void l2cap_chan_hold(struct l2cap_chan *c) { + BT_DBG("chan %p orig refcnt %d", c, atomic_read(&c->refcnt)); + atomic_inc(&c->refcnt); } static inline void l2cap_chan_put(struct l2cap_chan *c) { + BT_DBG("chan %p orig refcnt %d", c, atomic_read(&c->refcnt)); + if (atomic_dec_and_test(&c->refcnt)) kfree(c); } @@ -713,11 +726,7 @@ static inline bool l2cap_clear_timer(struct l2cap_chan *chan, #define __set_chan_timer(c, t) l2cap_set_timer(c, &c->chan_timer, (t)) #define __clear_chan_timer(c) l2cap_clear_timer(c, &c->chan_timer) -#define __set_retrans_timer(c) l2cap_set_timer(c, &c->retrans_timer, \ - msecs_to_jiffies(L2CAP_DEFAULT_RETRANS_TO)); #define __clear_retrans_timer(c) l2cap_clear_timer(c, &c->retrans_timer) -#define __set_monitor_timer(c) l2cap_set_timer(c, &c->monitor_timer, \ - msecs_to_jiffies(L2CAP_DEFAULT_MONITOR_TO)); #define __clear_monitor_timer(c) l2cap_clear_timer(c, &c->monitor_timer) #define __set_ack_timer(c) l2cap_set_timer(c, &chan->ack_timer, \ msecs_to_jiffies(L2CAP_DEFAULT_ACK_TO)); @@ -736,173 +745,17 @@ static inline __u16 __next_seq(struct l2cap_chan *chan, __u16 seq) return (seq + 1) % (chan->tx_win_max + 1); } -static inline int l2cap_tx_window_full(struct l2cap_chan *ch) -{ - int sub; - - sub = (ch->next_tx_seq - ch->expected_ack_seq) % 64; - - if (sub < 0) - sub += 64; - - return sub == ch->remote_tx_win; -} - -static inline __u16 __get_reqseq(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (ctrl & L2CAP_EXT_CTRL_REQSEQ) >> - L2CAP_EXT_CTRL_REQSEQ_SHIFT; - else - return (ctrl & L2CAP_CTRL_REQSEQ) >> L2CAP_CTRL_REQSEQ_SHIFT; -} - -static inline __u32 __set_reqseq(struct l2cap_chan *chan, __u32 reqseq) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (reqseq << L2CAP_EXT_CTRL_REQSEQ_SHIFT) & - L2CAP_EXT_CTRL_REQSEQ; - else - return (reqseq << L2CAP_CTRL_REQSEQ_SHIFT) & L2CAP_CTRL_REQSEQ; -} - -static inline __u16 __get_txseq(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (ctrl & L2CAP_EXT_CTRL_TXSEQ) >> - L2CAP_EXT_CTRL_TXSEQ_SHIFT; - else - return (ctrl & L2CAP_CTRL_TXSEQ) >> L2CAP_CTRL_TXSEQ_SHIFT; -} - -static inline __u32 __set_txseq(struct l2cap_chan *chan, __u32 txseq) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (txseq << L2CAP_EXT_CTRL_TXSEQ_SHIFT) & - L2CAP_EXT_CTRL_TXSEQ; - else - return (txseq << L2CAP_CTRL_TXSEQ_SHIFT) & L2CAP_CTRL_TXSEQ; -} - -static inline bool __is_sframe(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return ctrl & L2CAP_EXT_CTRL_FRAME_TYPE; - else - return ctrl & L2CAP_CTRL_FRAME_TYPE; -} - -static inline __u32 __set_sframe(struct l2cap_chan *chan) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return L2CAP_EXT_CTRL_FRAME_TYPE; - else - return L2CAP_CTRL_FRAME_TYPE; -} - -static inline __u8 __get_ctrl_sar(struct l2cap_chan *chan, __u32 ctrl) +static inline struct l2cap_chan 
*l2cap_chan_no_new_connection(struct l2cap_chan *chan) { - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (ctrl & L2CAP_EXT_CTRL_SAR) >> L2CAP_EXT_CTRL_SAR_SHIFT; - else - return (ctrl & L2CAP_CTRL_SAR) >> L2CAP_CTRL_SAR_SHIFT; + return NULL; } -static inline __u32 __set_ctrl_sar(struct l2cap_chan *chan, __u32 sar) +static inline void l2cap_chan_no_teardown(struct l2cap_chan *chan, int err) { - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (sar << L2CAP_EXT_CTRL_SAR_SHIFT) & L2CAP_EXT_CTRL_SAR; - else - return (sar << L2CAP_CTRL_SAR_SHIFT) & L2CAP_CTRL_SAR; } -static inline bool __is_sar_start(struct l2cap_chan *chan, __u32 ctrl) +static inline void l2cap_chan_no_ready(struct l2cap_chan *chan) { - return __get_ctrl_sar(chan, ctrl) == L2CAP_SAR_START; -} - -static inline __u32 __get_sar_mask(struct l2cap_chan *chan) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return L2CAP_EXT_CTRL_SAR; - else - return L2CAP_CTRL_SAR; -} - -static inline __u8 __get_ctrl_super(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (ctrl & L2CAP_EXT_CTRL_SUPERVISE) >> - L2CAP_EXT_CTRL_SUPER_SHIFT; - else - return (ctrl & L2CAP_CTRL_SUPERVISE) >> L2CAP_CTRL_SUPER_SHIFT; -} - -static inline __u32 __set_ctrl_super(struct l2cap_chan *chan, __u32 super) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return (super << L2CAP_EXT_CTRL_SUPER_SHIFT) & - L2CAP_EXT_CTRL_SUPERVISE; - else - return (super << L2CAP_CTRL_SUPER_SHIFT) & - L2CAP_CTRL_SUPERVISE; -} - -static inline __u32 __set_ctrl_final(struct l2cap_chan *chan) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return L2CAP_EXT_CTRL_FINAL; - else - return L2CAP_CTRL_FINAL; -} - -static inline bool __is_ctrl_final(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return ctrl & L2CAP_EXT_CTRL_FINAL; - else - return ctrl & L2CAP_CTRL_FINAL; -} - -static inline __u32 __set_ctrl_poll(struct l2cap_chan *chan) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return L2CAP_EXT_CTRL_POLL; - else - return L2CAP_CTRL_POLL; -} - -static inline bool __is_ctrl_poll(struct l2cap_chan *chan, __u32 ctrl) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return ctrl & L2CAP_EXT_CTRL_POLL; - else - return ctrl & L2CAP_CTRL_POLL; -} - -static inline __u32 __get_control(struct l2cap_chan *chan, void *p) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return get_unaligned_le32(p); - else - return get_unaligned_le16(p); -} - -static inline void __put_control(struct l2cap_chan *chan, __u32 control, - void *p) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return put_unaligned_le32(control, p); - else - return put_unaligned_le16(control, p); -} - -static inline __u8 __ctrl_size(struct l2cap_chan *chan) -{ - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - return L2CAP_EXT_HDR_SIZE - L2CAP_HDR_SIZE; - else - return L2CAP_ENH_HDR_SIZE - L2CAP_HDR_SIZE; } extern bool disable_ertm; @@ -926,5 +779,8 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len, void l2cap_chan_busy(struct l2cap_chan *chan, int busy); int l2cap_chan_check_security(struct l2cap_chan *chan); void l2cap_chan_set_defaults(struct l2cap_chan *chan); +int l2cap_ertm_init(struct l2cap_chan *chan); +void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan); +void l2cap_chan_del(struct l2cap_chan *chan, int err); #endif /* __L2CAP_H */ diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h index 23fd0546fccb..4348ee8bda69 100644 --- a/include/net/bluetooth/mgmt.h +++ 
b/include/net/bluetooth/mgmt.h @@ -444,7 +444,7 @@ struct mgmt_ev_auth_failed { struct mgmt_ev_device_found { struct mgmt_addr_info addr; __s8 rssi; - __u8 flags[4]; + __le32 flags; __le16 eir_len; __u8 eir[0]; } __packed; diff --git a/include/net/caif/caif_hsi.h b/include/net/caif/caif_hsi.h index 439dadc8102f..bcb9cc3ce98b 100644 --- a/include/net/caif/caif_hsi.h +++ b/include/net/caif/caif_hsi.h @@ -93,25 +93,25 @@ struct cfhsi_desc { #endif /* Structure implemented by the CAIF HSI driver. */ -struct cfhsi_drv { - void (*tx_done_cb) (struct cfhsi_drv *drv); - void (*rx_done_cb) (struct cfhsi_drv *drv); - void (*wake_up_cb) (struct cfhsi_drv *drv); - void (*wake_down_cb) (struct cfhsi_drv *drv); +struct cfhsi_cb_ops { + void (*tx_done_cb) (struct cfhsi_cb_ops *drv); + void (*rx_done_cb) (struct cfhsi_cb_ops *drv); + void (*wake_up_cb) (struct cfhsi_cb_ops *drv); + void (*wake_down_cb) (struct cfhsi_cb_ops *drv); }; /* Structure implemented by HSI device. */ -struct cfhsi_dev { - int (*cfhsi_up) (struct cfhsi_dev *dev); - int (*cfhsi_down) (struct cfhsi_dev *dev); - int (*cfhsi_tx) (u8 *ptr, int len, struct cfhsi_dev *dev); - int (*cfhsi_rx) (u8 *ptr, int len, struct cfhsi_dev *dev); - int (*cfhsi_wake_up) (struct cfhsi_dev *dev); - int (*cfhsi_wake_down) (struct cfhsi_dev *dev); - int (*cfhsi_get_peer_wake) (struct cfhsi_dev *dev, bool *status); - int (*cfhsi_fifo_occupancy)(struct cfhsi_dev *dev, size_t *occupancy); - int (*cfhsi_rx_cancel)(struct cfhsi_dev *dev); - struct cfhsi_drv *drv; +struct cfhsi_ops { + int (*cfhsi_up) (struct cfhsi_ops *dev); + int (*cfhsi_down) (struct cfhsi_ops *dev); + int (*cfhsi_tx) (u8 *ptr, int len, struct cfhsi_ops *dev); + int (*cfhsi_rx) (u8 *ptr, int len, struct cfhsi_ops *dev); + int (*cfhsi_wake_up) (struct cfhsi_ops *dev); + int (*cfhsi_wake_down) (struct cfhsi_ops *dev); + int (*cfhsi_get_peer_wake) (struct cfhsi_ops *dev, bool *status); + int (*cfhsi_fifo_occupancy) (struct cfhsi_ops *dev, size_t *occupancy); + int (*cfhsi_rx_cancel)(struct cfhsi_ops *dev); + struct cfhsi_cb_ops *cb_ops; }; /* Structure holds status of received CAIF frames processing */ @@ -132,17 +132,26 @@ enum { CFHSI_PRIO_LAST, }; +struct cfhsi_config { + u32 inactivity_timeout; + u32 aggregation_timeout; + u32 head_align; + u32 tail_align; + u32 q_high_mark; + u32 q_low_mark; +}; + /* Structure implemented by CAIF HSI drivers. */ struct cfhsi { struct caif_dev_common cfdev; struct net_device *ndev; struct platform_device *pdev; struct sk_buff_head qhead[CFHSI_PRIO_LAST]; - struct cfhsi_drv drv; - struct cfhsi_dev *dev; + struct cfhsi_cb_ops cb_ops; + struct cfhsi_ops *ops; int tx_state; struct cfhsi_rx_state rx_state; - unsigned long inactivity_timeout; + struct cfhsi_config cfg; int rx_len; u8 *rx_ptr; u8 *tx_buf; @@ -150,8 +159,6 @@ struct cfhsi { u8 *rx_flip_buf; spinlock_t lock; int flow_off_sent; - u32 q_low_mark; - u32 q_high_mark; struct list_head list; struct work_struct wake_up_work; struct work_struct wake_down_work; @@ -164,13 +171,31 @@ struct cfhsi { struct timer_list rx_slowpath_timer; /* TX aggregation */ - unsigned long aggregation_timeout; int aggregation_len; struct timer_list aggregation_timer; unsigned long bits; }; - extern struct platform_driver cfhsi_driver; +/** + * enum ifla_caif_hsi - CAIF HSI NetlinkRT parameters. + * @IFLA_CAIF_HSI_INACTIVITY_TOUT: Inactivity timeout before + * taking the HSI wakeline down, in milliseconds. 
+ * When using RT Netlink to create, destroy or configure a CAIF HSI interface, + * enum ifla_caif_hsi is used to specify the configuration attributes. + */ +enum ifla_caif_hsi { + __IFLA_CAIF_HSI_UNSPEC, + __IFLA_CAIF_HSI_INACTIVITY_TOUT, + __IFLA_CAIF_HSI_AGGREGATION_TOUT, + __IFLA_CAIF_HSI_HEAD_ALIGN, + __IFLA_CAIF_HSI_TAIL_ALIGN, + __IFLA_CAIF_HSI_QHIGH_WATERMARK, + __IFLA_CAIF_HSI_QLOW_WATERMARK, + __IFLA_CAIF_HSI_MAX +}; + +extern struct cfhsi_ops *cfhsi_get_ops(void); + #endif /* CAIF_HSI_H_ */ diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index 0289d4ce7070..493fa0c79005 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -70,11 +70,13 @@ * * @IEEE80211_BAND_2GHZ: 2.4GHz ISM band * @IEEE80211_BAND_5GHZ: around 5GHz band (4.9-5.7) + * @IEEE80211_BAND_60GHZ: around 60 GHz band (58.32 - 64.80 GHz) * @IEEE80211_NUM_BANDS: number of defined bands */ enum ieee80211_band { IEEE80211_BAND_2GHZ = NL80211_BAND_2GHZ, IEEE80211_BAND_5GHZ = NL80211_BAND_5GHZ, + IEEE80211_BAND_60GHZ = NL80211_BAND_60GHZ, /* keep last */ IEEE80211_NUM_BANDS @@ -211,6 +213,22 @@ struct ieee80211_sta_ht_cap { }; /** + * struct ieee80211_sta_vht_cap - STA's VHT capabilities + * + * This structure describes most essential parameters needed + * to describe 802.11ac VHT capabilities for an STA. + * + * @vht_supported: is VHT supported by the STA + * @cap: VHT capabilities map as described in 802.11ac spec + * @vht_mcs: Supported VHT MCS rates + */ +struct ieee80211_sta_vht_cap { + bool vht_supported; + u32 cap; /* use IEEE80211_VHT_CAP_ */ + struct ieee80211_vht_mcs_info vht_mcs; +}; + +/** * struct ieee80211_supported_band - frequency band definition * * This structure describes a frequency band a wiphy @@ -233,6 +251,7 @@ struct ieee80211_supported_band { int n_channels; int n_bitrates; struct ieee80211_sta_ht_cap ht_cap; + struct ieee80211_sta_vht_cap vht_cap; }; /* @@ -404,6 +423,8 @@ struct cfg80211_beacon_data { * * Used to configure an AP interface. * + * @channel: the channel to start the AP on + * @channel_type: the channel type to use * @beacon: beacon data * @beacon_interval: beacon interval * @dtim_period: DTIM period @@ -417,6 +438,9 @@ struct cfg80211_beacon_data { * @inactivity_timeout: time in seconds to determine station's inactivity. */ struct cfg80211_ap_settings { + struct ieee80211_channel *channel; + enum nl80211_channel_type channel_type; + struct cfg80211_beacon_data beacon; int beacon_interval, dtim_period; @@ -556,11 +580,13 @@ enum station_info_flags { * @RATE_INFO_FLAGS_MCS: @tx_bitrate_mcs filled * @RATE_INFO_FLAGS_40_MHZ_WIDTH: 40 Mhz width transmission * @RATE_INFO_FLAGS_SHORT_GI: 400ns guard interval + * @RATE_INFO_FLAGS_60G: 60gHz MCS */ enum rate_info_flags { RATE_INFO_FLAGS_MCS = 1<<0, RATE_INFO_FLAGS_40_MHZ_WIDTH = 1<<1, RATE_INFO_FLAGS_SHORT_GI = 1<<2, + RATE_INFO_FLAGS_60G = 1<<3, }; /** @@ -622,10 +648,10 @@ struct sta_bss_parameters { * @llid: mesh local link id * @plid: mesh peer link id * @plink_state: mesh peer link state - * @signal: the signal strength, type depends on the wiphy's signal_type - NOTE: For CFG80211_SIGNAL_TYPE_MBM, value is expressed in _dBm_. - * @signal_avg: avg signal strength, type depends on the wiphy's signal_type - NOTE: For CFG80211_SIGNAL_TYPE_MBM, value is expressed in _dBm_. + * @signal: The signal strength, type depends on the wiphy's signal_type. + * For CFG80211_SIGNAL_TYPE_MBM, value is expressed in _dBm_. + * @signal_avg: Average signal strength, type depends on the wiphy's signal_type. 
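The caif_hsi changes above replace struct cfhsi_drv and struct cfhsi_dev with cfhsi_cb_ops and cfhsi_ops: each side now hands the other a small table of function pointers, and every callback receives a pointer to that table rather than an opaque cookie. A common way to get from such a pointer back to the enclosing driver state is container_of(); the stand-alone user-space sketch below shows the pattern with purely hypothetical names (it is not the CAIF code itself).

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct demo_cb_ops {
    void (*tx_done_cb)(struct demo_cb_ops *ops);
};

/* Driver-private state with the callback table embedded in it. */
struct demo_driver {
    int tx_done_count;
    struct demo_cb_ops cb_ops;
};

static void demo_tx_done(struct demo_cb_ops *ops)
{
    /* Recover the enclosing driver object from the embedded member. */
    struct demo_driver *drv = container_of(ops, struct demo_driver, cb_ops);

    drv->tx_done_count++;
    printf("tx done, count=%d\n", drv->tx_done_count);
}

int main(void)
{
    struct demo_driver drv = { .cb_ops = { .tx_done_cb = demo_tx_done } };

    /* The lower layer only ever sees the cb_ops pointer. */
    drv.cb_ops.tx_done_cb(&drv.cb_ops);
    return 0;
}

The callback ends up updating state owned by the structure that embeds the ops table, which is exactly the kind of back-reference the renamed cb_ops member is there to support.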
+ * For CFG80211_SIGNAL_TYPE_MBM, value is expressed in _dBm_. * @txrate: current unicast bitrate from this station * @rxrate: current unicast bitrate to this station * @rx_packets: packets received from this station @@ -785,47 +811,101 @@ struct bss_parameters { int ht_opmode; }; -/* +/** * struct mesh_config - 802.11s mesh configuration * * These parameters can be changed while the mesh is active. + * + * @dot11MeshRetryTimeout: the initial retry timeout in millisecond units used + * by the Mesh Peering Open message + * @dot11MeshConfirmTimeout: the initial retry timeout in millisecond units + * used by the Mesh Peering Open message + * @dot11MeshHoldingTimeout: the confirm timeout in millisecond units used by + * the mesh peering management to close a mesh peering + * @dot11MeshMaxPeerLinks: the maximum number of peer links allowed on this + * mesh interface + * @dot11MeshMaxRetries: the maximum number of peer link open retries that can + * be sent to establish a new peer link instance in a mesh + * @dot11MeshTTL: the value of TTL field set at a source mesh STA + * @element_ttl: the value of TTL field set at a mesh STA for path selection + * elements + * @auto_open_plinks: whether we should automatically open peer links when we + * detect compatible mesh peers + * @dot11MeshNbrOffsetMaxNeighbor: the maximum number of neighbors to + * synchronize to for 11s default synchronization method + * @dot11MeshHWMPmaxPREQretries: the number of action frames containing a PREQ + * that an originator mesh STA can send to a particular path target + * @path_refresh_time: how frequently to refresh mesh paths in milliseconds + * @min_discovery_timeout: the minimum length of time to wait until giving up on + * a path discovery in milliseconds + * @dot11MeshHWMPactivePathTimeout: the time (in TUs) for which mesh STAs + * receiving a PREQ shall consider the forwarding information from the + * root to be valid. (TU = time unit) + * @dot11MeshHWMPpreqMinInterval: the minimum interval of time (in TUs) during + * which a mesh STA can send only one action frame containing a PREQ + * element + * @dot11MeshHWMPperrMinInterval: the minimum interval of time (in TUs) during + * which a mesh STA can send only one Action frame containing a PERR + * element + * @dot11MeshHWMPnetDiameterTraversalTime: the interval of time (in TUs) that + * it takes for an HWMP information element to propagate across the mesh + * @dot11MeshHWMPRootMode: the configuration of a mesh STA as root mesh STA + * @dot11MeshHWMPRannInterval: the interval of time (in TUs) between root + * announcements are transmitted + * @dot11MeshGateAnnouncementProtocol: whether to advertise that this mesh + * station has access to a broader network beyond the MBSS. (This is + * missnamed in draft 12.0: dot11MeshGateAnnouncementProtocol set to true + * only means that the station will announce others it's a mesh gate, but + * not necessarily using the gate announcement protocol. Still keeping the + * same nomenclature to be in sync with the spec) + * @dot11MeshForwarding: whether the Mesh STA is forwarding or non-forwarding + * entity (default is TRUE - forwarding entity) + * @rssi_threshold: the threshold for average signal strength of candidate + * station to establish a peer link + * @ht_opmode: mesh HT protection mode + * + * @dot11MeshHWMPactivePathToRootTimeout: The time (in TUs) for which mesh STAs + * receiving a proactive PREQ shall consider the forwarding information to + * the root mesh STA to be valid. 
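Several of the HWMP parameters documented above are expressed in TUs; in IEEE 802.11 one TU is 1024 microseconds, so code that drives timers from these values normally converts first. A tiny illustrative helper (not taken from the mesh code):

#include <stdint.h>
#include <stdio.h>

/* IEEE 802.11 time unit: 1 TU = 1024 microseconds. */
#define TU_TO_USEC(tu)  ((uint64_t)(tu) * 1024)
#define TU_TO_MSEC(tu)  (TU_TO_USEC(tu) / 1000)

int main(void)
{
    uint32_t preq_min_interval = 10;    /* example value in TUs */

    printf("%u TU = %llu us (~%llu ms)\n", preq_min_interval,
           (unsigned long long)TU_TO_USEC(preq_min_interval),
           (unsigned long long)TU_TO_MSEC(preq_min_interval));
    return 0;
}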
+ * + * @dot11MeshHWMProotInterval: The interval of time (in TUs) between proactive + * PREQs are transmitted. + * @dot11MeshHWMPconfirmationInterval: The minimum interval of time (in TUs) + * during which a mesh STA can send only one Action frame containing + * a PREQ element for root path confirmation. */ struct mesh_config { - /* Timeouts in ms */ - /* Mesh plink management parameters */ u16 dot11MeshRetryTimeout; u16 dot11MeshConfirmTimeout; u16 dot11MeshHoldingTimeout; u16 dot11MeshMaxPeerLinks; - u8 dot11MeshMaxRetries; - u8 dot11MeshTTL; - /* ttl used in path selection information elements */ - u8 element_ttl; + u8 dot11MeshMaxRetries; + u8 dot11MeshTTL; + u8 element_ttl; bool auto_open_plinks; - /* neighbor offset synchronization */ u32 dot11MeshNbrOffsetMaxNeighbor; - /* HWMP parameters */ - u8 dot11MeshHWMPmaxPREQretries; + u8 dot11MeshHWMPmaxPREQretries; u32 path_refresh_time; u16 min_discovery_timeout; u32 dot11MeshHWMPactivePathTimeout; u16 dot11MeshHWMPpreqMinInterval; u16 dot11MeshHWMPperrMinInterval; u16 dot11MeshHWMPnetDiameterTraversalTime; - u8 dot11MeshHWMPRootMode; + u8 dot11MeshHWMPRootMode; u16 dot11MeshHWMPRannInterval; - /* This is missnamed in draft 12.0: dot11MeshGateAnnouncementProtocol - * set to true only means that the station will announce others it's a - * mesh gate, but not necessarily using the gate announcement protocol. - * Still keeping the same nomenclature to be in sync with the spec. */ - bool dot11MeshGateAnnouncementProtocol; + bool dot11MeshGateAnnouncementProtocol; bool dot11MeshForwarding; s32 rssi_threshold; u16 ht_opmode; + u32 dot11MeshHWMPactivePathToRootTimeout; + u16 dot11MeshHWMProotInterval; + u16 dot11MeshHWMPconfirmationInterval; }; /** * struct mesh_setup - 802.11s mesh setup configuration + * @channel: the channel to start the mesh network on + * @channel_type: the channel type to use * @mesh_id: the mesh ID * @mesh_id_len: length of the mesh ID, at least 1 and at most 32 bytes * @sync_method: which synchronization method to use @@ -840,6 +920,8 @@ struct mesh_config { * These parameters are fixed when the mesh is created. */ struct mesh_setup { + struct ieee80211_channel *channel; + enum nl80211_channel_type channel_type; const u8 *mesh_id; u8 mesh_id_len; u8 sync_method; @@ -917,7 +999,7 @@ struct cfg80211_ssid { * @ie_len: length of ie in octets * @rates: bitmap of rates to advertise for each band * @wiphy: the wiphy this was for - * @dev: the interface + * @wdev: the wireless device to scan for * @aborted: (internal) scan request was notified as aborted * @no_cck: used to send probe requests at non CCK rate in 2GHz band */ @@ -930,9 +1012,10 @@ struct cfg80211_scan_request { u32 rates[IEEE80211_NUM_BANDS]; + struct wireless_dev *wdev; + /* internal */ struct wiphy *wiphy; - struct net_device *dev; bool aborted; bool no_cck; @@ -966,6 +1049,7 @@ struct cfg80211_match_set { * @wiphy: the wiphy this was for * @dev: the interface * @channels: channels to scan + * @rssi_thold: don't report scan results below this threshold (in s32 dBm) */ struct cfg80211_sched_scan_request { struct cfg80211_ssid *ssids; @@ -976,6 +1060,7 @@ struct cfg80211_sched_scan_request { size_t ie_len; struct cfg80211_match_set *match_sets; int n_match_sets; + s32 rssi_thold; /* internal */ struct wiphy *wiphy; @@ -1351,10 +1436,10 @@ struct cfg80211_gtk_rekey_data { * * @add_virtual_intf: create a new virtual interface with the given name, * must set the struct wireless_dev's iftype. Beware: You must create - * the new netdev in the wiphy's network namespace! 
Returns the netdev, - * or an ERR_PTR. + * the new netdev in the wiphy's network namespace! Returns the struct + * wireless_dev, or an ERR_PTR. * - * @del_virtual_intf: remove the virtual interface determined by ifindex. + * @del_virtual_intf: remove the virtual interface * * @change_virtual_intf: change type/configuration of virtual interface, * keep the struct wireless_dev's iftype updated. @@ -1411,14 +1496,14 @@ struct cfg80211_gtk_rekey_data { * * @set_txq_params: Set TX queue parameters * - * @set_channel: Set channel for a given wireless interface. Some devices - * may support multi-channel operation (by channel hopping) so cfg80211 - * doesn't verify much. Note, however, that the passed netdev may be - * %NULL as well if the user requested changing the channel for the - * device itself, or for a monitor interface. - * @get_channel: Get the current operating channel, should return %NULL if - * there's no single defined operating channel if for example the - * device implements channel hopping for multi-channel virtual interfaces. + * @libertas_set_mesh_channel: Only for backward compatibility for libertas, + * as it doesn't implement join_mesh and needs to set the channel to + * join the mesh instead. + * + * @set_monitor_channel: Set the monitor mode channel for the device. If other + * interfaces are active this callback should reject the configuration. + * If no interfaces are active or the device is down, the channel should + * be stored for when a monitor interface becomes active. * * @scan: Request to do a scan. If returning zero, the scan request is given * the driver, and will be valid until passed to cfg80211_scan_done(). @@ -1488,6 +1573,8 @@ struct cfg80211_gtk_rekey_data { * @set_power_mgmt: Configure WLAN power management. A timeout value of -1 * allows the driver to adjust the dynamic ps timeout value. * @set_cqm_rssi_config: Configure connection quality monitor RSSI threshold. + * @set_cqm_txe_config: Configure connection quality monitor TX error + * thresholds. * @sched_scan_start: Tell the driver to start a scheduled scan. * @sched_scan_stop: Tell the driver to stop an ongoing scheduled * scan. The driver_initiated flag specifies whether the driver @@ -1525,18 +1612,23 @@ struct cfg80211_gtk_rekey_data { * @get_et_strings: Ethtool API to get a set of strings to describe stats * and perhaps other supported types of ethtool data-sets. * See @ethtool_ops.get_strings + * + * @get_channel: Get the current operating channel for the virtual interface. + * For monitor interfaces, it should return %NULL unless there's a single + * current monitoring channel. 
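The reworked cfg80211_ops documentation above notes that add_virtual_intf() now returns the struct wireless_dev "or an ERR_PTR", i.e. callers learn about failure from the pointer value itself rather than a separate status code. The stand-alone sketch below mimics the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() convention with simplified user-space substitutes; the constructor name is made up for illustration.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel helpers: errors are encoded as
 * pointer values in the last, never-valid 4095 bytes of the address space. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)      { return (void *)error; }
static inline long PTR_ERR(const void *ptr)  { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical constructor in the style of add_virtual_intf(). */
static void *create_object(int fail)
{
    if (fail)
        return ERR_PTR(-ENOMEM);
    return malloc(16);
}

int main(void)
{
    void *obj = create_object(1);

    if (IS_ERR(obj))
        printf("creation failed: %ld\n", PTR_ERR(obj));
    else
        free(obj);
    return 0;
}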
*/ struct cfg80211_ops { int (*suspend)(struct wiphy *wiphy, struct cfg80211_wowlan *wow); int (*resume)(struct wiphy *wiphy); void (*set_wakeup)(struct wiphy *wiphy, bool enabled); - struct net_device * (*add_virtual_intf)(struct wiphy *wiphy, - char *name, - enum nl80211_iftype type, - u32 *flags, - struct vif_params *params); - int (*del_virtual_intf)(struct wiphy *wiphy, struct net_device *dev); + struct wireless_dev * (*add_virtual_intf)(struct wiphy *wiphy, + char *name, + enum nl80211_iftype type, + u32 *flags, + struct vif_params *params); + int (*del_virtual_intf)(struct wiphy *wiphy, + struct wireless_dev *wdev); int (*change_virtual_intf)(struct wiphy *wiphy, struct net_device *dev, enum nl80211_iftype type, u32 *flags, @@ -1605,11 +1697,15 @@ struct cfg80211_ops { int (*set_txq_params)(struct wiphy *wiphy, struct net_device *dev, struct ieee80211_txq_params *params); - int (*set_channel)(struct wiphy *wiphy, struct net_device *dev, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type); + int (*libertas_set_mesh_channel)(struct wiphy *wiphy, + struct net_device *dev, + struct ieee80211_channel *chan); + + int (*set_monitor_channel)(struct wiphy *wiphy, + struct ieee80211_channel *chan, + enum nl80211_channel_type channel_type); - int (*scan)(struct wiphy *wiphy, struct net_device *dev, + int (*scan)(struct wiphy *wiphy, struct cfg80211_scan_request *request); int (*auth)(struct wiphy *wiphy, struct net_device *dev, @@ -1663,23 +1759,23 @@ struct cfg80211_ops { int (*flush_pmksa)(struct wiphy *wiphy, struct net_device *netdev); int (*remain_on_channel)(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, u64 *cookie); int (*cancel_remain_on_channel)(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u64 cookie); - int (*mgmt_tx)(struct wiphy *wiphy, struct net_device *dev, + int (*mgmt_tx)(struct wiphy *wiphy, struct wireless_dev *wdev, struct ieee80211_channel *chan, bool offchan, enum nl80211_channel_type channel_type, bool channel_type_valid, unsigned int wait, const u8 *buf, size_t len, bool no_cck, bool dont_wait_for_ack, u64 *cookie); int (*mgmt_tx_cancel_wait)(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u64 cookie); int (*set_power_mgmt)(struct wiphy *wiphy, struct net_device *dev, @@ -1689,8 +1785,12 @@ struct cfg80211_ops { struct net_device *dev, s32 rssi_thold, u32 rssi_hyst); + int (*set_cqm_txe_config)(struct wiphy *wiphy, + struct net_device *dev, + u32 rate, u32 pkts, u32 intvl); + void (*mgmt_frame_register)(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u16 frame_type, bool reg); int (*set_antenna)(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant); @@ -1721,15 +1821,17 @@ struct cfg80211_ops { struct net_device *dev, u16 noack_map); - struct ieee80211_channel *(*get_channel)(struct wiphy *wiphy, - enum nl80211_channel_type *type); - int (*get_et_sset_count)(struct wiphy *wiphy, struct net_device *dev, int sset); void (*get_et_stats)(struct wiphy *wiphy, struct net_device *dev, struct ethtool_stats *stats, u64 *data); void (*get_et_strings)(struct wiphy *wiphy, struct net_device *dev, u32 sset, u8 *data); + + struct ieee80211_channel * + (*get_channel)(struct wiphy *wiphy, + struct wireless_dev *wdev, + enum nl80211_channel_type *type); }; /* @@ -2083,7 +2185,9 @@ struct wiphy { char fw_version[ETHTOOL_BUSINFO_LEN]; u32 hw_version; +#ifdef 
CONFIG_PM struct wiphy_wowlan_support wowlan; +#endif u16 max_remain_on_channel_duration; @@ -2250,20 +2354,31 @@ struct cfg80211_internal_bss; struct cfg80211_cached_keys; /** - * struct wireless_dev - wireless per-netdev state + * struct wireless_dev - wireless device state + * + * For netdevs, this structure must be allocated by the driver + * that uses the ieee80211_ptr field in struct net_device (this + * is intentional so it can be allocated along with the netdev.) + * It need not be registered then as netdev registration will + * be intercepted by cfg80211 to see the new wireless device. * - * This structure must be allocated by the driver/stack - * that uses the ieee80211_ptr field in struct net_device - * (this is intentional so it can be allocated along with - * the netdev.) + * For non-netdev uses, it must also be allocated by the driver + * in response to the cfg80211 callbacks that require it, as + * there's no netdev registration in that case it may not be + * allocated outside of callback operations that return it. * * @wiphy: pointer to hardware description * @iftype: interface type * @list: (private) Used to collect the interfaces - * @netdev: (private) Used to reference back to the netdev + * @netdev: (private) Used to reference back to the netdev, may be %NULL + * @identifier: (private) Identifier used in nl80211 to identify this + * wireless device if it has no netdev * @current_bss: (private) Used by the internal configuration code * @channel: (private) Used by the internal configuration code to track - * user-set AP, monitor and WDS channels for wireless extensions + * the user-set AP, monitor and WDS channel + * @preset_chan: (private) Used by the internal configuration code to + * track the channel to be used for AP later + * @preset_chantype: (private) the corresponding channel type * @bssid: (private) Used by the internal configuration code * @ssid: (private) Used by the internal configuration code * @ssid_len: (private) Used by the internal configuration code @@ -2289,6 +2404,8 @@ struct wireless_dev { struct list_head list; struct net_device *netdev; + u32 identifier; + struct list_head mgmt_registrations; spinlock_t mgmt_registrations_lock; @@ -2313,8 +2430,14 @@ struct wireless_dev { spinlock_t event_lock; struct cfg80211_internal_bss *current_bss; /* associated / joined */ + struct ieee80211_channel *preset_chan; + enum nl80211_channel_type preset_chantype; + + /* for AP and mesh channel tracking */ struct ieee80211_channel *channel; + bool ibss_fixed; + bool ps; int ps_timeout; @@ -3169,7 +3292,7 @@ void cfg80211_disconnected(struct net_device *dev, u16 reason, /** * cfg80211_ready_on_channel - notification of remain_on_channel start - * @dev: network device + * @wdev: wireless device * @cookie: the request cookie * @chan: The current channel (from remain_on_channel request) * @channel_type: Channel type @@ -3177,21 +3300,20 @@ void cfg80211_disconnected(struct net_device *dev, u16 reason, * channel * @gfp: allocation flags */ -void cfg80211_ready_on_channel(struct net_device *dev, u64 cookie, +void cfg80211_ready_on_channel(struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, gfp_t gfp); /** * cfg80211_remain_on_channel_expired - remain_on_channel duration expired - * @dev: network device + * @wdev: wireless device * @cookie: the request cookie * @chan: The current channel (from remain_on_channel request) * @channel_type: Channel type * @gfp: allocation flags */ -void 
cfg80211_remain_on_channel_expired(struct net_device *dev, - u64 cookie, +void cfg80211_remain_on_channel_expired(struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, gfp_t gfp); @@ -3219,7 +3341,7 @@ void cfg80211_del_sta(struct net_device *dev, const u8 *mac_addr, gfp_t gfp); /** * cfg80211_rx_mgmt - notification of received, unprocessed management frame - * @dev: network device + * @wdev: wireless device receiving the frame * @freq: Frequency on which the frame was received in MHz * @sig_dbm: signal strength in mBm, or 0 if unknown * @buf: Management frame (header + body) @@ -3234,12 +3356,12 @@ void cfg80211_del_sta(struct net_device *dev, const u8 *mac_addr, gfp_t gfp); * This function is called whenever an Action frame is received for a station * mode interface, but is not processed in kernel. */ -bool cfg80211_rx_mgmt(struct net_device *dev, int freq, int sig_dbm, +bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_dbm, const u8 *buf, size_t len, gfp_t gfp); /** * cfg80211_mgmt_tx_status - notification of TX status for management frame - * @dev: network device + * @wdev: wireless device receiving the frame * @cookie: Cookie returned by cfg80211_ops::mgmt_tx() * @buf: Management frame (header + body) * @len: length of the frame data @@ -3250,7 +3372,7 @@ bool cfg80211_rx_mgmt(struct net_device *dev, int freq, int sig_dbm, * transmitted with cfg80211_ops::mgmt_tx() to report the TX status of the * transmission attempt. */ -void cfg80211_mgmt_tx_status(struct net_device *dev, u64 cookie, +void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, const u8 *buf, size_t len, bool ack, gfp_t gfp); @@ -3280,6 +3402,21 @@ void cfg80211_cqm_pktloss_notify(struct net_device *dev, const u8 *peer, u32 num_packets, gfp_t gfp); /** + * cfg80211_cqm_txe_notify - TX error rate event + * @dev: network device + * @peer: peer's MAC address + * @num_packets: how many packets were lost + * @rate: % of packets which failed transmission + * @intvl: interval (in s) over which the TX failure threshold was breached. + * @gfp: context flags + * + * Notify userspace when configured % TX failures over number of packets in a + * given interval is exceeded. + */ +void cfg80211_cqm_txe_notify(struct net_device *dev, const u8 *peer, + u32 num_packets, u32 rate, u32 intvl, gfp_t gfp); + +/** * cfg80211_gtk_rekey_notify - notify userspace about driver rekeying * @dev: network device * @bssid: BSSID of AP (to avoid races) @@ -3359,11 +3496,14 @@ void cfg80211_report_obss_beacon(struct wiphy *wiphy, const u8 *frame, size_t len, int freq, int sig_dbm, gfp_t gfp); -/* +/** * cfg80211_can_beacon_sec_chan - test if ht40 on extension channel can be used * @wiphy: the wiphy * @chan: main channel * @channel_type: HT mode + * + * This function returns true if there is no secondary channel or the secondary + * channel can be used for beaconing (i.e. is not a radar channel etc.) */ bool cfg80211_can_beacon_sec_chan(struct wiphy *wiphy, struct ieee80211_channel *chan, @@ -3386,7 +3526,7 @@ void cfg80211_ch_switch_notify(struct net_device *dev, int freq, * * return 0 if MCS index >= 32 */ -u16 cfg80211_calculate_bitrate(struct rate_info *rate); +u32 cfg80211_calculate_bitrate(struct rate_info *rate); /* Logging, debugging and troubleshooting/diagnostic helpers. 
*/ diff --git a/include/net/dn_route.h b/include/net/dn_route.h index c507e05d172f..4f7d6a182381 100644 --- a/include/net/dn_route.h +++ b/include/net/dn_route.h @@ -67,6 +67,8 @@ extern void dn_rt_cache_flush(int delay); struct dn_route { struct dst_entry dst; + struct neighbour *n; + struct flowidn fld; __le16 rt_saddr; diff --git a/include/net/dst.h b/include/net/dst.h index 8197eadca819..baf597890064 100644 --- a/include/net/dst.h +++ b/include/net/dst.h @@ -42,16 +42,16 @@ struct dst_entry { struct dst_entry *from; }; struct dst_entry *path; - struct neighbour __rcu *_neighbour; + void *__pad0; #ifdef CONFIG_XFRM struct xfrm_state *xfrm; #else void *__pad1; #endif - int (*input)(struct sk_buff*); - int (*output)(struct sk_buff*); + int (*input)(struct sk_buff *); + int (*output)(struct sk_buff *); - int flags; + unsigned short flags; #define DST_HOST 0x0001 #define DST_NOXFRM 0x0002 #define DST_NOPOLICY 0x0004 @@ -62,8 +62,23 @@ struct dst_entry { #define DST_FAKE_RTABLE 0x0080 #define DST_XFRM_TUNNEL 0x0100 + unsigned short pending_confirm; + short error; + + /* A non-zero value of dst->obsolete forces by-hand validation + * of the route entry. Positive values are set by the generic + * dst layer to indicate that the entry has been forcefully + * destroyed. + * + * Negative values are used by the implementation layer code to + * force invocation of the dst_ops->check() method. + */ short obsolete; +#define DST_OBSOLETE_NONE 0 +#define DST_OBSOLETE_DEAD 2 +#define DST_OBSOLETE_FORCE_CHK -1 +#define DST_OBSOLETE_KILL -2 unsigned short header_len; /* more space at head required */ unsigned short trailer_len; /* space to reserve at tail */ #ifdef CONFIG_IP_ROUTE_CLASSID @@ -94,21 +109,6 @@ struct dst_entry { }; }; -static inline struct neighbour *dst_get_neighbour_noref(struct dst_entry *dst) -{ - return rcu_dereference(dst->_neighbour); -} - -static inline struct neighbour *dst_get_neighbour_noref_raw(struct dst_entry *dst) -{ - return rcu_dereference_raw(dst->_neighbour); -} - -static inline void dst_set_neighbour(struct dst_entry *dst, struct neighbour *neigh) -{ - rcu_assign_pointer(dst->_neighbour, neigh); -} - extern u32 *dst_cow_metrics_generic(struct dst_entry *dst, unsigned long old); extern const u32 dst_default_metrics[RTAX_MAX]; @@ -222,12 +222,6 @@ static inline unsigned long dst_metric_rtt(const struct dst_entry *dst, int metr return msecs_to_jiffies(dst_metric(dst, metric)); } -static inline void set_dst_metric_rtt(struct dst_entry *dst, int metric, - unsigned long rtt) -{ - dst_metric_set(dst, metric, jiffies_to_msecs(rtt)); -} - static inline u32 dst_allfrag(const struct dst_entry *dst) { @@ -241,7 +235,7 @@ dst_metric_locked(const struct dst_entry *dst, int metric) return dst_metric(dst, RTAX_LOCK) & (1<<metric); } -static inline void dst_hold(struct dst_entry * dst) +static inline void dst_hold(struct dst_entry *dst) { /* * If your kernel compilation stops here, please check @@ -264,8 +258,7 @@ static inline void dst_use_noref(struct dst_entry *dst, unsigned long time) dst->lastuse = time; } -static inline -struct dst_entry * dst_clone(struct dst_entry * dst) +static inline struct dst_entry *dst_clone(struct dst_entry *dst) { if (dst) atomic_inc(&dst->__refcnt); @@ -371,14 +364,15 @@ static inline struct dst_entry *skb_dst_pop(struct sk_buff *skb) } extern int dst_discard(struct sk_buff *skb); -extern void *dst_alloc(struct dst_ops * ops, struct net_device *dev, - int initial_ref, int initial_obsolete, int flags); -extern void __dst_free(struct dst_entry * dst); 
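The dst.h hunk above spells out the new convention for dst->obsolete: zero means the entry can be used as-is, positive values mark an entry the generic dst layer has forcibly killed, and negative values ask the owning protocol to revalidate it through dst_ops->check(). A simplified stand-alone sketch of how such a tri-state field might be consumed (hypothetical names, not the kernel code):

#include <stdbool.h>
#include <stdio.h>

#define DST_OBSOLETE_NONE       0
#define DST_OBSOLETE_DEAD       2
#define DST_OBSOLETE_FORCE_CHK  -1
#define DST_OBSOLETE_KILL       -2

struct demo_dst {
    short obsolete;
};

/* Hypothetical revalidation hook standing in for dst_ops->check(). */
static bool demo_check(struct demo_dst *dst)
{
    printf("re-checking entry %p\n", (void *)dst);
    return true;
}

static bool demo_dst_usable(struct demo_dst *dst)
{
    if (dst->obsolete == DST_OBSOLETE_NONE)
        return true;            /* nothing to do */
    if (dst->obsolete > 0)
        return false;           /* forcefully destroyed by the core */
    return demo_check(dst);     /* negative: implementation revalidates */
}

int main(void)
{
    struct demo_dst a = { .obsolete = DST_OBSOLETE_NONE };
    struct demo_dst b = { .obsolete = DST_OBSOLETE_DEAD };
    struct demo_dst c = { .obsolete = DST_OBSOLETE_FORCE_CHK };

    printf("a: %d, b: %d, c: %d\n",
           demo_dst_usable(&a), demo_dst_usable(&b), demo_dst_usable(&c));
    return 0;
}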
-extern struct dst_entry *dst_destroy(struct dst_entry * dst); +extern void *dst_alloc(struct dst_ops *ops, struct net_device *dev, + int initial_ref, int initial_obsolete, + unsigned short flags); +extern void __dst_free(struct dst_entry *dst); +extern struct dst_entry *dst_destroy(struct dst_entry *dst); -static inline void dst_free(struct dst_entry * dst) +static inline void dst_free(struct dst_entry *dst) { - if (dst->obsolete > 1) + if (dst->obsolete > 0) return; if (!atomic_read(&dst->__refcnt)) { dst = dst_destroy(dst); @@ -396,19 +390,35 @@ static inline void dst_rcu_free(struct rcu_head *head) static inline void dst_confirm(struct dst_entry *dst) { - if (dst) { - struct neighbour *n; + dst->pending_confirm = 1; +} - rcu_read_lock(); - n = dst_get_neighbour_noref(dst); - neigh_confirm(n); - rcu_read_unlock(); +static inline int dst_neigh_output(struct dst_entry *dst, struct neighbour *n, + struct sk_buff *skb) +{ + struct hh_cache *hh; + + if (unlikely(dst->pending_confirm)) { + n->confirmed = jiffies; + dst->pending_confirm = 0; } + + hh = &n->hh; + if ((n->nud_state & NUD_CONNECTED) && hh->hh_len) + return neigh_hh_output(hh, skb); + else + return n->output(n, skb); } static inline struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, const void *daddr) { - return dst->ops->neigh_lookup(dst, daddr); + return dst->ops->neigh_lookup(dst, NULL, daddr); +} + +static inline struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst, + struct sk_buff *skb) +{ + return dst->ops->neigh_lookup(dst, skb, NULL); } static inline void dst_link_failure(struct sk_buff *skb) diff --git a/include/net/dst_ops.h b/include/net/dst_ops.h index 3682a0a076c1..2f26dfb8450e 100644 --- a/include/net/dst_ops.h +++ b/include/net/dst_ops.h @@ -8,6 +8,7 @@ struct dst_entry; struct kmem_cachep; struct net_device; struct sk_buff; +struct sock; struct dst_ops { unsigned short family; @@ -24,9 +25,14 @@ struct dst_ops { struct net_device *dev, int how); struct dst_entry * (*negative_advice)(struct dst_entry *); void (*link_failure)(struct sk_buff *); - void (*update_pmtu)(struct dst_entry *dst, u32 mtu); + void (*update_pmtu)(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu); + void (*redirect)(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb); int (*local_out)(struct sk_buff *skb); - struct neighbour * (*neigh_lookup)(const struct dst_entry *dst, const void *daddr); + struct neighbour * (*neigh_lookup)(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr); struct kmem_cache *kmem_cachep; diff --git a/include/net/fib_rules.h b/include/net/fib_rules.h index 075f1e3a0fed..e361f4882426 100644 --- a/include/net/fib_rules.h +++ b/include/net/fib_rules.h @@ -52,6 +52,7 @@ struct fib_rules_ops { struct sk_buff *, struct fib_rule_hdr *, struct nlattr **); + void (*delete)(struct fib_rule *); int (*compare)(struct fib_rule *, struct fib_rule_hdr *, struct nlattr **); diff --git a/include/net/flow.h b/include/net/flow.h index 6c469dbdb917..e1dd5082ec7e 100644 --- a/include/net/flow.h +++ b/include/net/flow.h @@ -20,8 +20,7 @@ struct flowi_common { __u8 flowic_proto; __u8 flowic_flags; #define FLOWI_FLAG_ANYSRC 0x01 -#define FLOWI_FLAG_PRECOW_METRICS 0x02 -#define FLOWI_FLAG_CAN_SLEEP 0x04 +#define FLOWI_FLAG_CAN_SLEEP 0x02 __u32 flowic_secid; }; diff --git a/include/net/genetlink.h b/include/net/genetlink.h index ccb68880abf5..48905cd3884c 100644 --- a/include/net/genetlink.h +++ b/include/net/genetlink.h @@ -5,6 +5,8 @@ #include <net/netlink.h> 
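Also visible in the dst.h changes above: dst_confirm() no longer dereferences a neighbour at all, it merely sets dst->pending_confirm, and dst_neigh_output() applies the confirmation later on the transmit path, where a neighbour pointer is actually in hand. The plain C model below illustrates that record-now, apply-at-output idea; the names are hypothetical and everything is heavily simplified.

#include <stdio.h>
#include <time.h>

struct demo_neigh {
    time_t confirmed;           /* last time the peer proved reachable */
};

struct demo_dst {
    unsigned short pending_confirm;
};

/* Cheap hint: just mark the dst, no neighbour lookup needed here. */
static void demo_dst_confirm(struct demo_dst *dst)
{
    dst->pending_confirm = 1;
}

/* Output path: the neighbour is available, so apply the pending hint. */
static void demo_neigh_output(struct demo_dst *dst, struct demo_neigh *n)
{
    if (dst->pending_confirm) {
        n->confirmed = time(NULL);
        dst->pending_confirm = 0;
    }
    printf("transmit; neighbour confirmed at %ld\n", (long)n->confirmed);
}

int main(void)
{
    struct demo_dst dst = { 0 };
    struct demo_neigh neigh = { 0 };

    demo_dst_confirm(&dst);     /* e.g. after a successful reply */
    demo_neigh_output(&dst, &neigh);
    return 0;
}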
#include <net/net_namespace.h> +#define GENLMSG_DEFAULT_SIZE (NLMSG_DEFAULT_SIZE - GENL_HDRLEN) + /** * struct genl_multicast_group - generic netlink multicast group * @name: name of the multicast group, names are per-family diff --git a/include/net/inet6_connection_sock.h b/include/net/inet6_connection_sock.h index 1866a676c810..04642c920431 100644 --- a/include/net/inet6_connection_sock.h +++ b/include/net/inet6_connection_sock.h @@ -26,6 +26,7 @@ extern int inet6_csk_bind_conflict(const struct sock *sk, const struct inet_bind_bucket *tb, bool relax); extern struct dst_entry* inet6_csk_route_req(struct sock *sk, + struct flowi6 *fl6, const struct request_sock *req); extern struct request_sock *inet6_csk_search_req(const struct sock *sk, @@ -42,4 +43,6 @@ extern void inet6_csk_reqsk_queue_hash_add(struct sock *sk, extern void inet6_csk_addr2sockaddr(struct sock *sk, struct sockaddr *uaddr); extern int inet6_csk_xmit(struct sk_buff *skb, struct flowi *fl); + +extern struct dst_entry *inet6_csk_update_pmtu(struct sock *sk, u32 mtu); #endif /* _INET6_CONNECTION_SOCK_H */ diff --git a/include/net/inet_common.h b/include/net/inet_common.h index 22fac9892b16..234008782c8c 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -14,9 +14,11 @@ struct sockaddr; struct socket; extern int inet_release(struct socket *sock); -extern int inet_stream_connect(struct socket *sock, struct sockaddr * uaddr, +extern int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags); -extern int inet_dgram_connect(struct socket *sock, struct sockaddr * uaddr, +extern int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags); +extern int inet_dgram_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags); extern int inet_accept(struct socket *sock, struct socket *newsock, int flags); extern int inet_sendmsg(struct kiocb *iocb, struct socket *sock, diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h index 7d83f90f203f..5ee66f517b4f 100644 --- a/include/net/inet_connection_sock.h +++ b/include/net/inet_connection_sock.h @@ -43,7 +43,6 @@ struct inet_connection_sock_af_ops { struct sock *(*syn_recv_sock)(struct sock *sk, struct sk_buff *skb, struct request_sock *req, struct dst_entry *dst); - struct inet_peer *(*get_peer)(struct sock *sk, bool *release_it); u16 net_header_len; u16 net_frag_header_len; u16 sockaddr_len; @@ -337,4 +336,6 @@ extern int inet_csk_compat_getsockopt(struct sock *sk, int level, int optname, char __user *optval, int __user *optlen); extern int inet_csk_compat_setsockopt(struct sock *sk, int level, int optname, char __user *optval, unsigned int optlen); + +extern struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu); #endif /* _INET_CONNECTION_SOCK_H */ diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index 808fc5f76b03..54be0287eb98 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -379,10 +379,10 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo, const __be16 sport, const __be16 dport) { - struct sock *sk; + struct sock *sk = skb_steal_sock(skb); const struct iphdr *iph = ip_hdr(skb); - if (unlikely(sk = skb_steal_sock(skb))) + if (sk) return sk; else return __inet_lookup(dev_net(skb_dst(skb)->dev), hashinfo, diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index ae17e1352d7e..613cfa401672 100644 --- a/include/net/inet_sock.h +++ 
b/include/net/inet_sock.h @@ -172,6 +172,7 @@ struct inet_sock { int uc_index; int mc_index; __be32 mc_addr; + int rx_dst_ifindex; struct ip_mc_socklist __rcu *mc_list; struct inet_cork_full cork; }; @@ -245,8 +246,6 @@ static inline __u8 inet_sk_flowi_flags(const struct sock *sk) if (inet_sk(sk)->transparent || inet_sk(sk)->hdrincl) flags |= FLOWI_FLAG_ANYSRC; - if (sk->sk_protocol == IPPROTO_TCP) - flags |= FLOWI_FLAG_PRECOW_METRICS; return flags; } diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h index 2040bff945d4..53f464d7cddc 100644 --- a/include/net/inetpeer.h +++ b/include/net/inetpeer.h @@ -36,25 +36,19 @@ struct inet_peer { u32 metrics[RTAX_MAX]; u32 rate_tokens; /* rate limiting for ICMP */ unsigned long rate_last; - unsigned long pmtu_expires; - u32 pmtu_orig; - u32 pmtu_learned; - struct inetpeer_addr_base redirect_learned; union { struct list_head gc_list; struct rcu_head gc_rcu; }; /* * Once inet_peer is queued for deletion (refcnt == -1), following fields - * are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp + * are not available: rid, ip_id_count * We can share memory with rcu_head to help keep inet_peer small. */ union { struct { atomic_t rid; /* Frag reception counter */ atomic_t ip_id_count; /* IP ID for the next packet */ - __u32 tcp_ts; - __u32 tcp_ts_stamp; }; struct rcu_head rcu; struct inet_peer *gc_next; @@ -65,6 +59,69 @@ struct inet_peer { atomic_t refcnt; }; +struct inet_peer_base { + struct inet_peer __rcu *root; + seqlock_t lock; + u32 flush_seq; + int total; +}; + +#define INETPEER_BASE_BIT 0x1UL + +static inline struct inet_peer *inetpeer_ptr(unsigned long val) +{ + BUG_ON(val & INETPEER_BASE_BIT); + return (struct inet_peer *) val; +} + +static inline struct inet_peer_base *inetpeer_base_ptr(unsigned long val) +{ + if (!(val & INETPEER_BASE_BIT)) + return NULL; + val &= ~INETPEER_BASE_BIT; + return (struct inet_peer_base *) val; +} + +static inline bool inetpeer_ptr_is_peer(unsigned long val) +{ + return !(val & INETPEER_BASE_BIT); +} + +static inline void __inetpeer_ptr_set_peer(unsigned long *val, struct inet_peer *peer) +{ + /* This implicitly clears INETPEER_BASE_BIT */ + *val = (unsigned long) peer; +} + +static inline bool inetpeer_ptr_set_peer(unsigned long *ptr, struct inet_peer *peer) +{ + unsigned long val = (unsigned long) peer; + unsigned long orig = *ptr; + + if (!(orig & INETPEER_BASE_BIT) || + cmpxchg(ptr, orig, val) != orig) + return false; + return true; +} + +static inline void inetpeer_init_ptr(unsigned long *ptr, struct inet_peer_base *base) +{ + *ptr = (unsigned long) base | INETPEER_BASE_BIT; +} + +static inline void inetpeer_transfer_peer(unsigned long *to, unsigned long *from) +{ + unsigned long val = *from; + + *to = val; + if (inetpeer_ptr_is_peer(val)) { + struct inet_peer *peer = inetpeer_ptr(val); + atomic_inc(&peer->refcnt); + } +} + +extern void inet_peer_base_init(struct inet_peer_base *); + void inet_initpeers(void) __init; #define INETPEER_METRICS_NEW (~(u32) 0) @@ -75,31 +132,38 @@ static inline bool inet_metrics_new(const struct inet_peer *p) } /* can be called with or without local BH being disabled */ -struct inet_peer *inet_getpeer(const struct inetpeer_addr *daddr, int create); +struct inet_peer *inet_getpeer(struct inet_peer_base *base, + const struct inetpeer_addr *daddr, + int create); -static inline struct inet_peer *inet_getpeer_v4(__be32 v4daddr, int create) +static inline struct inet_peer *inet_getpeer_v4(struct inet_peer_base *base, + __be32 v4daddr, + int create) { struct inetpeer_addr 
daddr; daddr.addr.a4 = v4daddr; daddr.family = AF_INET; - return inet_getpeer(&daddr, create); + return inet_getpeer(base, &daddr, create); } -static inline struct inet_peer *inet_getpeer_v6(const struct in6_addr *v6daddr, int create) +static inline struct inet_peer *inet_getpeer_v6(struct inet_peer_base *base, + const struct in6_addr *v6daddr, + int create) { struct inetpeer_addr daddr; *(struct in6_addr *)daddr.addr.a6 = *v6daddr; daddr.family = AF_INET6; - return inet_getpeer(&daddr, create); + return inet_getpeer(base, &daddr, create); } /* can be called from BH context or outside */ extern void inet_putpeer(struct inet_peer *p); extern bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout); -extern void inetpeer_invalidate_tree(int family); +extern void inetpeer_invalidate_tree(struct inet_peer_base *); +extern void inetpeer_invalidate_family(int family); /* * temporary check to make sure we dont access rid, ip_id_count, tcp_ts, diff --git a/include/net/ip.h b/include/net/ip.h index 83e0619f59d0..bd5e444a19ce 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -158,8 +158,9 @@ static inline __u8 ip_reply_arg_flowi_flags(const struct ip_reply_arg *arg) return (arg->flags & IP_REPLY_ARG_NOSRCCHECK) ? FLOWI_FLAG_ANYSRC : 0; } -void ip_send_reply(struct sock *sk, struct sk_buff *skb, __be32 daddr, - const struct ip_reply_arg *arg, unsigned int len); +void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, + __be32 saddr, const struct ip_reply_arg *arg, + unsigned int len); struct ipv4_config { int log_martians; @@ -210,6 +211,9 @@ extern int inet_peer_threshold; extern int inet_peer_minttl; extern int inet_peer_maxttl; +/* From ip_input.c */ +extern int sysctl_ip_early_demux; + /* From ip_output.c */ extern int sysctl_ip_dynaddr; diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h index 0ae759a6c76e..0fedbd8d747a 100644 --- a/include/net/ip6_fib.h +++ b/include/net/ip6_fib.h @@ -86,6 +86,8 @@ struct fib6_table; struct rt6_info { struct dst_entry dst; + struct neighbour *n; + /* * Tail elements of dst_entry (__refcnt etc.) 
* and these elements (rarely used in hot path) are in @@ -107,7 +109,7 @@ struct rt6_info { u32 rt6i_peer_genid; struct inet6_dev *rt6i_idev; - struct inet_peer *rt6i_peer; + unsigned long _rt6i_peer; #ifdef CONFIG_XFRM u32 rt6i_flow_cache_genid; @@ -118,6 +120,36 @@ struct rt6_info { u8 rt6i_protocol; }; +static inline struct inet_peer *rt6_peer_ptr(struct rt6_info *rt) +{ + return inetpeer_ptr(rt->_rt6i_peer); +} + +static inline bool rt6_has_peer(struct rt6_info *rt) +{ + return inetpeer_ptr_is_peer(rt->_rt6i_peer); +} + +static inline void __rt6_set_peer(struct rt6_info *rt, struct inet_peer *peer) +{ + __inetpeer_ptr_set_peer(&rt->_rt6i_peer, peer); +} + +static inline bool rt6_set_peer(struct rt6_info *rt, struct inet_peer *peer) +{ + return inetpeer_ptr_set_peer(&rt->_rt6i_peer, peer); +} + +static inline void rt6_init_peer(struct rt6_info *rt, struct inet_peer_base *base) +{ + inetpeer_init_ptr(&rt->_rt6i_peer, base); +} + +static inline void rt6_transfer_peer(struct rt6_info *rt, struct rt6_info *ort) +{ + inetpeer_transfer_peer(&rt->_rt6i_peer, &ort->_rt6i_peer); +} + static inline struct inet6_dev *ip6_dst_idev(struct dst_entry *dst) { return ((struct rt6_info *)dst)->rt6i_idev; @@ -207,6 +239,7 @@ struct fib6_table { u32 tb6_id; rwlock_t tb6_lock; struct fib6_node tb6_root; + struct inet_peer_base tb6_peers; }; #define RT6_TABLE_UNSPEC RT_TABLE_UNSPEC diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h index 37c1a1ed82c1..5fa2af00634a 100644 --- a/include/net/ip6_route.h +++ b/include/net/ip6_route.h @@ -53,16 +53,25 @@ static inline unsigned int rt6_flags2srcprefs(int flags) return (flags >> 3) & 7; } -extern void rt6_bind_peer(struct rt6_info *rt, - int create); +extern void rt6_bind_peer(struct rt6_info *rt, int create); + +static inline struct inet_peer *__rt6_get_peer(struct rt6_info *rt, int create) +{ + if (rt6_has_peer(rt)) + return rt6_peer_ptr(rt); + + rt6_bind_peer(rt, create); + return (rt6_has_peer(rt) ? 
rt6_peer_ptr(rt) : NULL); +} static inline struct inet_peer *rt6_get_peer(struct rt6_info *rt) { - if (rt->rt6i_peer) - return rt->rt6i_peer; + return __rt6_get_peer(rt, 0); +} - rt6_bind_peer(rt, 0); - return rt->rt6i_peer; +static inline struct inet_peer *rt6_get_peer_create(struct rt6_info *rt) +{ + return __rt6_get_peer(rt, 1); } extern void ip6_route_input(struct sk_buff *skb); @@ -124,17 +133,12 @@ extern int rt6_route_rcv(struct net_device *dev, u8 *opt, int len, const struct in6_addr *gwaddr); -extern void rt6_redirect(const struct in6_addr *dest, - const struct in6_addr *src, - const struct in6_addr *saddr, - struct neighbour *neigh, - u8 *lladdr, - int on_link); - -extern void rt6_pmtu_discovery(const struct in6_addr *daddr, - const struct in6_addr *saddr, - struct net_device *dev, - u32 pmtu); +extern void ip6_update_pmtu(struct sk_buff *skb, struct net *net, __be32 mtu, + int oif, u32 mark); +extern void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, + __be32 mtu); +extern void ip6_redirect(struct sk_buff *skb, struct net *net, int oif, u32 mark); +extern void ip6_sk_redirect(struct sk_buff *skb, struct sock *sk); struct netlink_callback; @@ -154,7 +158,8 @@ extern void rt6_remove_prefsrc(struct inet6_ifaddr *ifp); * Store a destination cache entry in a socket */ static inline void __ip6_dst_store(struct sock *sk, struct dst_entry *dst, - struct in6_addr *daddr, struct in6_addr *saddr) + const struct in6_addr *daddr, + const struct in6_addr *saddr) { struct ipv6_pinfo *np = inet6_sk(sk); struct rt6_info *rt = (struct rt6_info *) dst; diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h index fc73e667b50e..358fb86f57eb 100644 --- a/include/net/ip6_tunnel.h +++ b/include/net/ip6_tunnel.h @@ -9,6 +9,8 @@ #define IP6_TNL_F_CAP_XMIT 0x10000 /* capable of receiving packets */ #define IP6_TNL_F_CAP_RCV 0x20000 +/* determine capability on a per-packet basis */ +#define IP6_TNL_F_CAP_PER_PACKET 0x40000 /* IPv6 tunnel */ diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h index 78df0866cc38..e69c3a47153d 100644 --- a/include/net/ip_fib.h +++ b/include/net/ip_fib.h @@ -18,7 +18,9 @@ #include <net/flow.h> #include <linux/seq_file.h> +#include <linux/rcupdate.h> #include <net/fib_rules.h> +#include <net/inetpeer.h> struct fib_config { u8 fc_dst_len; @@ -44,6 +46,23 @@ struct fib_config { }; struct fib_info; +struct rtable; + +struct fib_nh_exception { + struct fib_nh_exception __rcu *fnhe_next; + __be32 fnhe_daddr; + u32 fnhe_pmtu; + __be32 fnhe_gw; + unsigned long fnhe_expires; + unsigned long fnhe_stamp; +}; + +struct fnhe_hash_bucket { + struct fib_nh_exception __rcu *chain; +}; + +#define FNHE_HASH_SIZE 2048 +#define FNHE_RECLAIM_DEPTH 5 struct fib_nh { struct net_device *nh_dev; @@ -62,6 +81,9 @@ struct fib_nh { __be32 nh_gw; __be32 nh_saddr; int nh_saddr_genid; + struct rtable *nh_rth_output; + struct rtable *nh_rth_input; + struct fnhe_hash_bucket *nh_exceptions; }; /* @@ -105,12 +127,10 @@ struct fib_result { unsigned char nh_sel; unsigned char type; unsigned char scope; + u32 tclassid; struct fib_info *fi; struct fib_table *table; struct list_head *fa_head; -#ifdef CONFIG_IP_MULTIPLE_TABLES - struct fib_rule *r; -#endif }; struct fib_result_nl { @@ -157,11 +177,11 @@ extern __be32 fib_info_update_nh_saddr(struct net *net, struct fib_nh *nh); FIB_RES_SADDR(net, res)) struct fib_table { - struct hlist_node tb_hlist; - u32 tb_id; - int tb_default; - int tb_num_default; - unsigned long tb_data[0]; + struct hlist_node tb_hlist; + u32 tb_id; + int tb_default; + 
int tb_num_default; + unsigned long tb_data[0]; }; extern int fib_table_lookup(struct fib_table *tb, const struct flowi4 *flp, @@ -214,24 +234,55 @@ static inline int fib_lookup(struct net *net, const struct flowi4 *flp, extern int __net_init fib4_rules_init(struct net *net); extern void __net_exit fib4_rules_exit(struct net *net); -#ifdef CONFIG_IP_ROUTE_CLASSID -extern u32 fib_rules_tclass(const struct fib_result *res); -#endif - -extern int fib_lookup(struct net *n, struct flowi4 *flp, struct fib_result *res); - extern struct fib_table *fib_new_table(struct net *net, u32 id); extern struct fib_table *fib_get_table(struct net *net, u32 id); +extern int __fib_lookup(struct net *net, struct flowi4 *flp, + struct fib_result *res); + +static inline int fib_lookup(struct net *net, struct flowi4 *flp, + struct fib_result *res) +{ + if (!net->ipv4.fib_has_custom_rules) { + res->tclassid = 0; + if (net->ipv4.fib_local && + !fib_table_lookup(net->ipv4.fib_local, flp, res, + FIB_LOOKUP_NOREF)) + return 0; + if (net->ipv4.fib_main && + !fib_table_lookup(net->ipv4.fib_main, flp, res, + FIB_LOOKUP_NOREF)) + return 0; + if (net->ipv4.fib_default && + !fib_table_lookup(net->ipv4.fib_default, flp, res, + FIB_LOOKUP_NOREF)) + return 0; + return -ENETUNREACH; + } + return __fib_lookup(net, flp, res); +} + #endif /* CONFIG_IP_MULTIPLE_TABLES */ /* Exported by fib_frontend.c */ extern const struct nla_policy rtm_ipv4_policy[]; extern void ip_fib_init(void); +extern __be32 fib_compute_spec_dst(struct sk_buff *skb); extern int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, u8 tos, int oif, struct net_device *dev, - __be32 *spec_dst, u32 *itag); + struct in_device *idev, u32 *itag); extern void fib_select_default(struct fib_result *res); +#ifdef CONFIG_IP_ROUTE_CLASSID +static inline int fib_num_tclassid_users(struct net *net) +{ + return net->ipv4.fib_num_tclassid_users; +} +#else +static inline int fib_num_tclassid_users(struct net *net) +{ + return 0; +} +#endif /* Exported by fib_semantics.c */ extern int ip_fib_check_default(__be32 gw, struct net_device *dev); @@ -253,7 +304,7 @@ static inline void fib_combine_itag(u32 *itag, const struct fib_result *res) #endif *itag = FIB_RES_NH(*res).nh_tclassid<<16; #ifdef CONFIG_IP_MULTIPLE_TABLES - rtag = fib_rules_tclass(res); + rtag = res->tclassid; if (*itag == 0) *itag = (rtag<<16); *itag |= (rtag>>16); diff --git a/include/net/ipv6.h b/include/net/ipv6.h index aecf88436abf..01c34b363a34 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -251,6 +251,8 @@ static inline void fl6_sock_release(struct ip6_flowlabel *fl) atomic_dec(&fl->users); } +extern void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info); + extern int ip6_ra_control(struct sock *sk, int sel); extern int ipv6_parse_hopopts(struct sk_buff *skb); @@ -298,14 +300,23 @@ static inline int ipv6_addr_cmp(const struct in6_addr *a1, const struct in6_addr return memcmp(a1, a2, sizeof(struct in6_addr)); } -static inline int +static inline bool ipv6_masked_addr_cmp(const struct in6_addr *a1, const struct in6_addr *m, const struct in6_addr *a2) { +#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 + const unsigned long *ul1 = (const unsigned long *)a1; + const unsigned long *ulm = (const unsigned long *)m; + const unsigned long *ul2 = (const unsigned long *)a2; + + return !!(((ul1[0] ^ ul2[0]) & ulm[0]) | + ((ul1[1] ^ ul2[1]) & ulm[1])); +#else return !!(((a1->s6_addr32[0] ^ a2->s6_addr32[0]) & m->s6_addr32[0]) | ((a1->s6_addr32[1] ^ 
a2->s6_addr32[1]) & m->s6_addr32[1]) | ((a1->s6_addr32[2] ^ a2->s6_addr32[2]) & m->s6_addr32[2]) | ((a1->s6_addr32[3] ^ a2->s6_addr32[3]) & m->s6_addr32[3])); +#endif } static inline void ipv6_addr_prefix(struct in6_addr *pfx, @@ -335,10 +346,17 @@ static inline void ipv6_addr_set(struct in6_addr *addr, static inline bool ipv6_addr_equal(const struct in6_addr *a1, const struct in6_addr *a2) { +#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 + const unsigned long *ul1 = (const unsigned long *)a1; + const unsigned long *ul2 = (const unsigned long *)a2; + + return ((ul1[0] ^ ul2[0]) | (ul1[1] ^ ul2[1])) == 0UL; +#else return ((a1->s6_addr32[0] ^ a2->s6_addr32[0]) | (a1->s6_addr32[1] ^ a2->s6_addr32[1]) | (a1->s6_addr32[2] ^ a2->s6_addr32[2]) | (a1->s6_addr32[3] ^ a2->s6_addr32[3])) == 0; +#endif } static inline bool __ipv6_prefix_equal(const __be32 *a1, const __be32 *a2, @@ -391,8 +409,27 @@ bool ip6_frag_match(struct inet_frag_queue *q, void *a); static inline bool ipv6_addr_any(const struct in6_addr *a) { +#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 + const unsigned long *ul = (const unsigned long *)a; + + return (ul[0] | ul[1]) == 0UL; +#else return (a->s6_addr32[0] | a->s6_addr32[1] | a->s6_addr32[2] | a->s6_addr32[3]) == 0; +#endif +} + +static inline u32 ipv6_addr_hash(const struct in6_addr *a) +{ +#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 + const unsigned long *ul = (const unsigned long *)a; + unsigned long x = ul[0] ^ ul[1]; + + return (u32)(x ^ (x >> 32)); +#else + return (__force u32)(a->s6_addr32[0] ^ a->s6_addr32[1] ^ + a->s6_addr32[2] ^ a->s6_addr32[3]); +#endif } static inline bool ipv6_addr_loopback(const struct in6_addr *a) diff --git a/include/net/mac80211.h b/include/net/mac80211.h index 95e39b6a02ec..bb86aa6f98dd 100644 --- a/include/net/mac80211.h +++ b/include/net/mac80211.h @@ -233,8 +233,10 @@ enum ieee80211_rssi_event { * valid in station mode only while @assoc is true and if also * requested by %IEEE80211_HW_NEED_DTIM_PERIOD (cf. 
also hw conf * @ps_dtim_period) - * @last_tsf: last beacon's/probe response's TSF timestamp (could be old + * @sync_tsf: last beacon's/probe response's TSF timestamp (could be old * as it may have been received during scanning long ago) + * @sync_device_ts: the device timestamp corresponding to the sync_tsf, + * the driver/device can use this to calculate synchronisation * @beacon_int: beacon interval * @assoc_capability: capabilities taken from assoc resp * @basic_rates: bitmap of basic rates, each bit stands for an @@ -281,7 +283,8 @@ struct ieee80211_bss_conf { u8 dtim_period; u16 beacon_int; u16 assoc_capability; - u64 last_tsf; + u64 sync_tsf; + u32 sync_device_ts; u32 basic_rates; int mcast_rate[IEEE80211_NUM_BANDS]; u16 ht_operation_mode; @@ -475,7 +478,7 @@ enum mac80211_rate_control_flags { #define IEEE80211_TX_INFO_RATE_DRIVER_DATA_SIZE 24 /* maximum number of rate stages */ -#define IEEE80211_TX_MAX_RATES 5 +#define IEEE80211_TX_MAX_RATES 4 /** * struct ieee80211_tx_rate - rate selection/status @@ -563,11 +566,11 @@ struct ieee80211_tx_info { } control; struct { struct ieee80211_tx_rate rates[IEEE80211_TX_MAX_RATES]; - u8 ampdu_ack_len; int ack_signal; + u8 ampdu_ack_len; u8 ampdu_len; u8 antenna; - /* 14 bytes free */ + /* 21 bytes free */ } status; struct { struct ieee80211_tx_rate driver_rates[ @@ -634,7 +637,7 @@ ieee80211_tx_info_clear_status(struct ieee80211_tx_info *info) info->status.rates[i].count = 0; BUILD_BUG_ON( - offsetof(struct ieee80211_tx_info, status.ampdu_ack_len) != 23); + offsetof(struct ieee80211_tx_info, status.ack_signal) != 20); memset(&info->status.ampdu_ack_len, 0, sizeof(struct ieee80211_tx_info) - offsetof(struct ieee80211_tx_info, status.ampdu_ack_len)); @@ -696,6 +699,8 @@ enum mac80211_rx_flags { * * @mactime: value in microseconds of the 64-bit Time Synchronization Function * (TSF) timer when the first data symbol (MPDU) arrived at the hardware. + * @device_timestamp: arbitrary timestamp for the device, mac80211 doesn't use + * it but can store it and pass it back to the driver for synchronisation * @band: the active band when this frame was received * @freq: frequency the radio was tuned to when receiving this frame, in MHz * @signal: signal strength when receiving this frame, either in dBm, in dB or @@ -709,13 +714,14 @@ enum mac80211_rx_flags { */ struct ieee80211_rx_status { u64 mactime; - enum ieee80211_band band; - int freq; - int signal; - int antenna; - int rate_idx; - int flag; - unsigned int rx_flags; + u32 device_timestamp; + u16 flag; + u16 freq; + u8 rate_idx; + u8 rx_flags; + u8 band; + u8 antenna; + s8 signal; }; /** @@ -1297,6 +1303,10 @@ enum ieee80211_hw_flags { * reports, by default it is set to _MCS, _GI and _BW but doesn't * include _FMT. Use %IEEE80211_RADIOTAP_MCS_HAVE_* values, only * adding _BW is supported today. + * + * @netdev_features: netdev features to be set in each netdev created + * from this HW. Note only HW checksum features are currently + * compatible with mac80211. Other feature bits will be rejected. */ struct ieee80211_hw { struct ieee80211_conf conf; @@ -1319,6 +1329,7 @@ struct ieee80211_hw { u8 max_tx_aggregation_subframes; u8 offchannel_tx_hw_queue; u8 radiotap_mcs_details; + netdev_features_t netdev_features; }; /** @@ -1891,19 +1902,6 @@ enum ieee80211_rate_control_changed { * The low-level driver should send the frame out based on * configuration in the TX control data. This handler should, * preferably, never fail and stop queues appropriately. - * This must be implemented if @tx_frags is not. 
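The repacked struct ieee80211_rx_status above carries the same per-frame receive metadata in narrower fields, plus the new device_timestamp. A minimal sketch of how a driver might fill the new layout before handing a frame to mac80211 follows; drv_rx_frame() and the literal field values are illustrative stand-ins for whatever the real hardware descriptors provide, not part of this patch.

#include <net/mac80211.h>

/* Sketch only: populate the repacked ieee80211_rx_status and pass the
 * frame up.  Real drivers take these values from their RX descriptors.
 */
static void drv_rx_frame(struct ieee80211_hw *hw, struct sk_buff *skb)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

	memset(status, 0, sizeof(*status));
	status->mactime = 0;			/* u64 TSF, unchanged */
	status->device_timestamp = 0;		/* new u32 device clock */
	status->band = IEEE80211_BAND_2GHZ;	/* now u8 */
	status->freq = 2412;			/* now u16, MHz */
	status->signal = -42;			/* now s8, dBm */
	status->rate_idx = 0;			/* now u8 */
	status->flag |= RX_FLAG_MACTIME_MPDU;	/* flags now fit in u16 */

	ieee80211_rx(hw, skb);
}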
- * Must be atomic. - * - * @tx_frags: Called to transmit multiple fragments of a single MSDU. - * This handler must consume all fragments, sending out some of - * them only is useless and it can't ask for some of them to be - * queued again. If the frame is not fragmented the queue has a - * single SKB only. To avoid issues with the networking stack - * when TX status is reported the frames should be removed from - * the skb queue. - * If this is used, the tx_info @vif and @sta pointers will be - * invalid -- you must not use them in that case. - * This must be implemented if @tx isn't. * Must be atomic. * * @start: Called before the first netdevice attached to the hardware @@ -2183,7 +2181,10 @@ enum ieee80211_rate_control_changed { * offload. Frames to transmit on the off-channel channel are transmitted * normally except for the %IEEE80211_TX_CTL_TX_OFFCHAN flag. When the * duration (which will always be non-zero) expires, the driver must call - * ieee80211_remain_on_channel_expired(). This callback may sleep. + * ieee80211_remain_on_channel_expired(). + * Note that this callback may be called while the device is in IDLE and + * must be accepted in this case. + * This callback may sleep. * @cancel_remain_on_channel: Requests that an ongoing off-channel period is * aborted before it expires. This callback may sleep. * @@ -2246,11 +2247,24 @@ enum ieee80211_rate_control_changed { * @get_et_strings: Ethtool API to get a set of strings to describe stats * and perhaps other supported types of ethtool data-sets. * + * @get_rssi: Get current signal strength in dBm, the function is optional + * and can sleep. + * + * @mgd_prepare_tx: Prepare for transmitting a management frame for association + * before associated. In multi-channel scenarios, a virtual interface is + * bound to a channel before it is associated, but as it isn't associated + * yet it need not necessarily be given airtime, in particular since any + * transmission to a P2P GO needs to be synchronized against the GO's + * powersave state. mac80211 will call this function before transmitting a + * management frame prior to having successfully associated to allow the + * driver to give it channel time for the transmission, to get a response + * and to be able to synchronize with the GO. + * The callback will be called before each transmission and upon return + * mac80211 will transmit the frame right away. + * The callback is optional and can (should!) sleep. */ struct ieee80211_ops { void (*tx)(struct ieee80211_hw *hw, struct sk_buff *skb); - void (*tx_frags)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_sta *sta, struct sk_buff_head *skbs); int (*start)(struct ieee80211_hw *hw); void (*stop)(struct ieee80211_hw *hw); #ifdef CONFIG_PM @@ -2385,6 +2399,11 @@ struct ieee80211_ops { void (*get_et_strings)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 sset, u8 *data); + int (*get_rssi)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + struct ieee80211_sta *sta, s8 *rssi_dbm); + + void (*mgd_prepare_tx)(struct ieee80211_hw *hw, + struct ieee80211_vif *vif); }; /** @@ -3557,16 +3576,6 @@ void ieee80211_cqm_rssi_notify(struct ieee80211_vif *vif, gfp_t gfp); /** - * ieee80211_get_operstate - get the operstate of the vif - * - * @vif: &struct ieee80211_vif pointer from the add_interface callback. 
- * - * The driver might need to know the operstate of the net_device - * (specifically, whether the link is IF_OPER_UP after resume) - */ -unsigned char ieee80211_get_operstate(struct ieee80211_vif *vif); - -/** * ieee80211_chswitch_done - Complete channel switch process * @vif: &struct ieee80211_vif pointer from the add_interface callback. * @success: make the channel switch successful or not @@ -3589,22 +3598,6 @@ void ieee80211_request_smps(struct ieee80211_vif *vif, enum ieee80211_smps_mode smps_mode); /** - * ieee80211_key_removed - disable hw acceleration for key - * @key_conf: The key hw acceleration should be disabled for - * - * This allows drivers to indicate that the given key has been - * removed from hardware acceleration, due to a new key that - * was added. Don't use this if the key can continue to be used - * for TX, if the key restriction is on RX only it is permitted - * to keep the key for TX only and not call this function. - * - * Due to locking constraints, it may only be called during - * @set_key. This function must be allowed to sleep, and the - * key it tries to disable may still be used until it returns. - */ -void ieee80211_key_removed(struct ieee80211_key_conf *key_conf); - -/** * ieee80211_ready_on_channel - notification of remain-on-channel start * @hw: pointer as obtained from ieee80211_alloc_hw() */ @@ -3829,12 +3822,6 @@ void ieee80211_enable_rssi_reports(struct ieee80211_vif *vif, void ieee80211_disable_rssi_reports(struct ieee80211_vif *vif); -int ieee80211_add_srates_ie(struct ieee80211_vif *vif, - struct sk_buff *skb, bool need_basic); - -int ieee80211_add_ext_srates_ie(struct ieee80211_vif *vif, - struct sk_buff *skb, bool need_basic); - /** * ieee80211_ave_rssi - report the average rssi for the specified interface * diff --git a/include/net/mac802154.h b/include/net/mac802154.h index c9f8ab5cc687..d0d11df9cba1 100644 --- a/include/net/mac802154.h +++ b/include/net/mac802154.h @@ -21,6 +21,14 @@ #include <net/af_ieee802154.h> +/* General MAC frame format: + * 2 bytes: Frame Control + * 1 byte: Sequence Number + * 20 bytes: Addressing fields + * 14 bytes: Auxiliary Security Header + */ +#define MAC802154_FRAME_HARD_HEADER_LEN (2 + 1 + 20 + 14) + /* The following flags are used to indicate changed address settings from * the stack to the hardware. 
*/ diff --git a/include/net/ndisc.h b/include/net/ndisc.h index c02b6ad3f6c5..96a3b5c03e37 100644 --- a/include/net/ndisc.h +++ b/include/net/ndisc.h @@ -47,6 +47,8 @@ enum { #include <linux/icmpv6.h> #include <linux/in6.h> #include <linux/types.h> +#include <linux/if_arp.h> +#include <linux/netdevice.h> #include <net/neighbour.h> @@ -80,6 +82,54 @@ struct nd_opt_hdr { __u8 nd_opt_len; } __packed; +/* ND options */ +struct ndisc_options { + struct nd_opt_hdr *nd_opt_array[__ND_OPT_ARRAY_MAX]; +#ifdef CONFIG_IPV6_ROUTE_INFO + struct nd_opt_hdr *nd_opts_ri; + struct nd_opt_hdr *nd_opts_ri_end; +#endif + struct nd_opt_hdr *nd_useropts; + struct nd_opt_hdr *nd_useropts_end; +}; + +#define nd_opts_src_lladdr nd_opt_array[ND_OPT_SOURCE_LL_ADDR] +#define nd_opts_tgt_lladdr nd_opt_array[ND_OPT_TARGET_LL_ADDR] +#define nd_opts_pi nd_opt_array[ND_OPT_PREFIX_INFO] +#define nd_opts_pi_end nd_opt_array[__ND_OPT_PREFIX_INFO_END] +#define nd_opts_rh nd_opt_array[ND_OPT_REDIRECT_HDR] +#define nd_opts_mtu nd_opt_array[ND_OPT_MTU] + +#define NDISC_OPT_SPACE(len) (((len)+2+7)&~7) + +extern struct ndisc_options *ndisc_parse_options(u8 *opt, int opt_len, + struct ndisc_options *ndopts); + +/* + * Return the padding between the option length and the start of the + * link addr. Currently only IP-over-InfiniBand needs this, although + * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may + * also need a pad of 2. + */ +static int ndisc_addr_option_pad(unsigned short type) +{ + switch (type) { + case ARPHRD_INFINIBAND: return 2; + default: return 0; + } +} + +static inline u8 *ndisc_opt_addr_data(struct nd_opt_hdr *p, + struct net_device *dev) +{ + u8 *lladdr = (u8 *)(p + 1); + int lladdrlen = p->nd_opt_len << 3; + int prepad = ndisc_addr_option_pad(dev->type); + if (lladdrlen != NDISC_OPT_SPACE(dev->addr_len + prepad)) + return NULL; + return lladdr + prepad; +} + static inline u32 ndisc_hashfn(const void *pkey, const struct net_device *dev, __u32 *hash_rnd) { const u32 *p32 = pkey; diff --git a/include/net/neighbour.h b/include/net/neighbour.h index 6cdfeedb650b..344d8988842a 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h @@ -202,9 +202,16 @@ extern struct neighbour * neigh_lookup(struct neigh_table *tbl, extern struct neighbour * neigh_lookup_nodev(struct neigh_table *tbl, struct net *net, const void *pkey); -extern struct neighbour * neigh_create(struct neigh_table *tbl, +extern struct neighbour * __neigh_create(struct neigh_table *tbl, + const void *pkey, + struct net_device *dev, + bool want_ref); +static inline struct neighbour *neigh_create(struct neigh_table *tbl, const void *pkey, - struct net_device *dev); + struct net_device *dev) +{ + return __neigh_create(tbl, pkey, dev, true); +} extern void neigh_destroy(struct neighbour *neigh); extern int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb); extern int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new, @@ -302,12 +309,6 @@ static inline struct neighbour * neigh_clone(struct neighbour *neigh) #define neigh_hold(n) atomic_inc(&(n)->refcnt) -static inline void neigh_confirm(struct neighbour *neigh) -{ - if (neigh) - neigh->confirmed = jiffies; -} - static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) { unsigned long now = jiffies; @@ -351,15 +352,6 @@ static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb) return dev_queue_xmit(skb); } -static inline int neigh_output(struct neighbour *n, struct sk_buff *skb) -{ - struct hh_cache *hh = &n->hh; - if 
((n->nud_state & NUD_CONNECTED) && hh->hh_len) - return neigh_hh_output(hh, skb); - else - return n->output(n, skb); -} - static inline struct neighbour * __neigh_lookup(struct neigh_table *tbl, const void *pkey, struct net_device *dev, int creat) { diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h index ac9195e6a062..ae1cd6c9ba52 100644 --- a/include/net/net_namespace.h +++ b/include/net/net_namespace.h @@ -101,6 +101,7 @@ struct net { struct netns_xfrm xfrm; #endif struct netns_ipvs *ipvs; + struct sock *diag_nlsk; }; diff --git a/include/net/netevent.h b/include/net/netevent.h index 086f8a5b59dc..3ce4988c9c08 100644 --- a/include/net/netevent.h +++ b/include/net/netevent.h @@ -12,10 +12,14 @@ */ struct dst_entry; +struct neighbour; struct netevent_redirect { struct dst_entry *old; + struct neighbour *old_neigh; struct dst_entry *new; + struct neighbour *new_neigh; + const void *daddr; }; enum netevent_notif_type { diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h index cce7f6a798bf..f1494feba79f 100644 --- a/include/net/netfilter/nf_conntrack.h +++ b/include/net/netfilter/nf_conntrack.h @@ -39,36 +39,6 @@ union nf_conntrack_expect_proto { /* insert expect proto private data here */ }; -/* Add protocol helper include file here */ -#include <linux/netfilter/nf_conntrack_ftp.h> -#include <linux/netfilter/nf_conntrack_pptp.h> -#include <linux/netfilter/nf_conntrack_h323.h> -#include <linux/netfilter/nf_conntrack_sane.h> -#include <linux/netfilter/nf_conntrack_sip.h> - -/* per conntrack: application helper private data */ -union nf_conntrack_help { - /* insert conntrack helper private data (master) here */ -#if defined(CONFIG_NF_CONNTRACK_FTP) || defined(CONFIG_NF_CONNTRACK_FTP_MODULE) - struct nf_ct_ftp_master ct_ftp_info; -#endif -#if defined(CONFIG_NF_CONNTRACK_PPTP) || \ - defined(CONFIG_NF_CONNTRACK_PPTP_MODULE) - struct nf_ct_pptp_master ct_pptp_info; -#endif -#if defined(CONFIG_NF_CONNTRACK_H323) || \ - defined(CONFIG_NF_CONNTRACK_H323_MODULE) - struct nf_ct_h323_master ct_h323_info; -#endif -#if defined(CONFIG_NF_CONNTRACK_SANE) || \ - defined(CONFIG_NF_CONNTRACK_SANE_MODULE) - struct nf_ct_sane_master ct_sane_info; -#endif -#if defined(CONFIG_NF_CONNTRACK_SIP) || defined(CONFIG_NF_CONNTRACK_SIP_MODULE) - struct nf_ct_sip_master ct_sip_info; -#endif -}; - #include <linux/types.h> #include <linux/skbuff.h> #include <linux/timer.h> @@ -89,12 +59,13 @@ struct nf_conn_help { /* Helper. if any */ struct nf_conntrack_helper __rcu *helper; - union nf_conntrack_help help; - struct hlist_head expectations; /* Current number of expected connections */ u8 expecting[NF_CT_MAX_EXPECT_CLASSES]; + + /* private helper information. 
*/ + char data[]; }; #include <net/netfilter/ipv4/nf_conntrack_ipv4.h> diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h index aced085132e7..d8f5b9f52169 100644 --- a/include/net/netfilter/nf_conntrack_core.h +++ b/include/net/netfilter/nf_conntrack_core.h @@ -28,8 +28,8 @@ extern unsigned int nf_conntrack_in(struct net *net, extern int nf_conntrack_init(struct net *net); extern void nf_conntrack_cleanup(struct net *net); -extern int nf_conntrack_proto_init(void); -extern void nf_conntrack_proto_fini(void); +extern int nf_conntrack_proto_init(struct net *net); +extern void nf_conntrack_proto_fini(struct net *net); extern bool nf_ct_get_tuple(const struct sk_buff *skb, diff --git a/include/net/netfilter/nf_conntrack_expect.h b/include/net/netfilter/nf_conntrack_expect.h index 4619caadd9d1..983f00263243 100644 --- a/include/net/netfilter/nf_conntrack_expect.h +++ b/include/net/netfilter/nf_conntrack_expect.h @@ -59,10 +59,12 @@ static inline struct net *nf_ct_exp_net(struct nf_conntrack_expect *exp) return nf_ct_net(exp->master); } +#define NF_CT_EXP_POLICY_NAME_LEN 16 + struct nf_conntrack_expect_policy { unsigned int max_expected; unsigned int timeout; - const char *name; + char name[NF_CT_EXP_POLICY_NAME_LEN]; }; #define NF_CT_EXPECT_CLASS_DEFAULT 0 diff --git a/include/net/netfilter/nf_conntrack_extend.h b/include/net/netfilter/nf_conntrack_extend.h index 96755c3798a5..8b4d1fc29096 100644 --- a/include/net/netfilter/nf_conntrack_extend.h +++ b/include/net/netfilter/nf_conntrack_extend.h @@ -80,10 +80,13 @@ static inline void nf_ct_ext_free(struct nf_conn *ct) } /* Add this type, returns pointer to data or NULL. */ -void * -__nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp); +void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id, + size_t var_alloc_len, gfp_t gfp); + #define nf_ct_ext_add(ct, id, gfp) \ - ((id##_TYPE *)__nf_ct_ext_add((ct), (id), (gfp))) + ((id##_TYPE *)__nf_ct_ext_add_length((ct), (id), 0, (gfp))) +#define nf_ct_ext_add_length(ct, id, len, gfp) \ + ((id##_TYPE *)__nf_ct_ext_add_length((ct), (id), (len), (gfp))) #define NF_CT_EXT_F_PREALLOC 0x0001 diff --git a/include/net/netfilter/nf_conntrack_helper.h b/include/net/netfilter/nf_conntrack_helper.h index 1d1889409b9e..9aad956d1008 100644 --- a/include/net/netfilter/nf_conntrack_helper.h +++ b/include/net/netfilter/nf_conntrack_helper.h @@ -11,18 +11,27 @@ #define _NF_CONNTRACK_HELPER_H #include <net/netfilter/nf_conntrack.h> #include <net/netfilter/nf_conntrack_extend.h> +#include <net/netfilter/nf_conntrack_expect.h> struct module; +enum nf_ct_helper_flags { + NF_CT_HELPER_F_USERSPACE = (1 << 0), + NF_CT_HELPER_F_CONFIGURED = (1 << 1), +}; + #define NF_CT_HELPER_NAME_LEN 16 struct nf_conntrack_helper { struct hlist_node hnode; /* Internal use. */ - const char *name; /* name of the module */ + char name[NF_CT_HELPER_NAME_LEN]; /* name of the module */ struct module *me; /* pointer to self */ const struct nf_conntrack_expect_policy *expect_policy; + /* length of internal data, ie. 
sizeof(struct nf_ct_*_master) */ + size_t data_len; + /* Tuple of things we will help (compared against server response) */ struct nf_conntrack_tuple tuple; @@ -35,8 +44,12 @@ struct nf_conntrack_helper { void (*destroy)(struct nf_conn *ct); + int (*from_nlattr)(struct nlattr *attr, struct nf_conn *ct); int (*to_nlattr)(struct sk_buff *skb, const struct nf_conn *ct); unsigned int expect_class_max; + + unsigned int flags; + unsigned int queue_num; /* For user-space helpers. */ }; extern struct nf_conntrack_helper * @@ -48,7 +61,7 @@ nf_conntrack_helper_try_module_get(const char *name, u16 l3num, u8 protonum); extern int nf_conntrack_helper_register(struct nf_conntrack_helper *); extern void nf_conntrack_helper_unregister(struct nf_conntrack_helper *); -extern struct nf_conn_help *nf_ct_helper_ext_add(struct nf_conn *ct, gfp_t gfp); +extern struct nf_conn_help *nf_ct_helper_ext_add(struct nf_conn *ct, struct nf_conntrack_helper *helper, gfp_t gfp); extern int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl, gfp_t flags); @@ -60,6 +73,15 @@ static inline struct nf_conn_help *nfct_help(const struct nf_conn *ct) return nf_ct_ext_find(ct, NF_CT_EXT_HELPER); } +static inline void *nfct_help_data(const struct nf_conn *ct) +{ + struct nf_conn_help *help; + + help = nf_ct_ext_find(ct, NF_CT_EXT_HELPER); + + return (void *)help->data; +} + extern int nf_conntrack_helper_init(struct net *net); extern void nf_conntrack_helper_fini(struct net *net); @@ -82,4 +104,7 @@ nf_ct_helper_expectfn_find_by_name(const char *name); struct nf_ct_helper_expectfn * nf_ct_helper_expectfn_find_by_symbol(const void *symbol); +extern struct hlist_head *nf_ct_helper_hash; +extern unsigned int nf_ct_helper_hsize; + #endif /*_NF_CONNTRACK_HELPER_H*/ diff --git a/include/net/netfilter/nf_conntrack_l3proto.h b/include/net/netfilter/nf_conntrack_l3proto.h index 9699c028b74b..6f7c13f4ac03 100644 --- a/include/net/netfilter/nf_conntrack_l3proto.h +++ b/include/net/netfilter/nf_conntrack_l3proto.h @@ -64,11 +64,12 @@ struct nf_conntrack_l3proto { size_t nla_size; #ifdef CONFIG_SYSCTL - struct ctl_table_header *ctl_table_header; const char *ctl_table_path; - struct ctl_table *ctl_table; #endif /* CONFIG_SYSCTL */ + /* Init l3proto pernet data */ + int (*init_net)(struct net *net); + /* Module (if any) which this is connected to. */ struct module *me; }; @@ -76,8 +77,10 @@ struct nf_conntrack_l3proto { extern struct nf_conntrack_l3proto __rcu *nf_ct_l3protos[AF_MAX]; /* Protocol registration. 
*/ -extern int nf_conntrack_l3proto_register(struct nf_conntrack_l3proto *proto); -extern void nf_conntrack_l3proto_unregister(struct nf_conntrack_l3proto *proto); +extern int nf_conntrack_l3proto_register(struct net *net, + struct nf_conntrack_l3proto *proto); +extern void nf_conntrack_l3proto_unregister(struct net *net, + struct nf_conntrack_l3proto *proto); extern struct nf_conntrack_l3proto *nf_ct_l3proto_find_get(u_int16_t l3proto); extern void nf_ct_l3proto_put(struct nf_conntrack_l3proto *p); diff --git a/include/net/netfilter/nf_conntrack_l4proto.h b/include/net/netfilter/nf_conntrack_l4proto.h index 3b572bb20aa2..c3be4aef6bf7 100644 --- a/include/net/netfilter/nf_conntrack_l4proto.h +++ b/include/net/netfilter/nf_conntrack_l4proto.h @@ -12,6 +12,7 @@ #include <linux/netlink.h> #include <net/netlink.h> #include <net/netfilter/nf_conntrack.h> +#include <net/netns/generic.h> struct seq_file; @@ -86,23 +87,21 @@ struct nf_conntrack_l4proto { #if IS_ENABLED(CONFIG_NF_CT_NETLINK_TIMEOUT) struct { size_t obj_size; - int (*nlattr_to_obj)(struct nlattr *tb[], void *data); + int (*nlattr_to_obj)(struct nlattr *tb[], + struct net *net, void *data); int (*obj_to_nlattr)(struct sk_buff *skb, const void *data); unsigned int nlattr_max; const struct nla_policy *nla_policy; } ctnl_timeout; #endif + int *net_id; + /* Init l4proto pernet data */ + int (*init_net)(struct net *net, u_int16_t proto); + + /* Return the per-net protocol part. */ + struct nf_proto_net *(*get_net_proto)(struct net *net); -#ifdef CONFIG_SYSCTL - struct ctl_table_header **ctl_table_header; - struct ctl_table *ctl_table; - unsigned int *ctl_table_users; -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - struct ctl_table_header *ctl_compat_table_header; - struct ctl_table *ctl_compat_table; -#endif -#endif /* Protocol name */ const char *name; @@ -123,8 +122,18 @@ nf_ct_l4proto_find_get(u_int16_t l3proto, u_int8_t l4proto); extern void nf_ct_l4proto_put(struct nf_conntrack_l4proto *p); /* Protocol registration. 
*/ -extern int nf_conntrack_l4proto_register(struct nf_conntrack_l4proto *proto); -extern void nf_conntrack_l4proto_unregister(struct nf_conntrack_l4proto *proto); +extern int nf_conntrack_l4proto_register(struct net *net, + struct nf_conntrack_l4proto *proto); +extern void nf_conntrack_l4proto_unregister(struct net *net, + struct nf_conntrack_l4proto *proto); + +static inline void nf_ct_kfree_compat_sysctl_table(struct nf_proto_net *pn) +{ +#if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) + kfree(pn->ctl_compat_table); + pn->ctl_compat_table = NULL; +#endif +} /* Generic netlink helpers */ extern int nf_ct_port_tuple_to_nlattr(struct sk_buff *skb, diff --git a/include/net/netfilter/nf_nat_helper.h b/include/net/netfilter/nf_nat_helper.h index 02bb6c29dc3d..7d8fb7b46c44 100644 --- a/include/net/netfilter/nf_nat_helper.h +++ b/include/net/netfilter/nf_nat_helper.h @@ -54,4 +54,8 @@ extern void nf_nat_follow_master(struct nf_conn *ct, extern s16 nf_nat_get_offset(const struct nf_conn *ct, enum ip_conntrack_dir dir, u32 seq); + +extern void nf_nat_tcp_seq_adjust(struct sk_buff *skb, struct nf_conn *ct, + u32 dir, int off); + #endif diff --git a/include/net/netfilter/nfnetlink_queue.h b/include/net/netfilter/nfnetlink_queue.h new file mode 100644 index 000000000000..86267a529514 --- /dev/null +++ b/include/net/netfilter/nfnetlink_queue.h @@ -0,0 +1,43 @@ +#ifndef _NET_NFNL_QUEUE_H_ +#define _NET_NFNL_QUEUE_H_ + +#include <linux/netfilter/nf_conntrack_common.h> + +struct nf_conn; + +#ifdef CONFIG_NETFILTER_NETLINK_QUEUE_CT +struct nf_conn *nfqnl_ct_get(struct sk_buff *entskb, size_t *size, + enum ip_conntrack_info *ctinfo); +struct nf_conn *nfqnl_ct_parse(const struct sk_buff *skb, + const struct nlattr *attr, + enum ip_conntrack_info *ctinfo); +int nfqnl_ct_put(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo); +void nfqnl_ct_seq_adjust(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo, int diff); +#else +inline struct nf_conn * +nfqnl_ct_get(struct sk_buff *entskb, size_t *size, enum ip_conntrack_info *ctinfo) +{ + return NULL; +} + +inline struct nf_conn *nfqnl_ct_parse(const struct sk_buff *skb, + const struct nlattr *attr, + enum ip_conntrack_info *ctinfo) +{ + return NULL; +} + +inline int +nfqnl_ct_put(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo) +{ + return 0; +} + +inline void nfqnl_ct_seq_adjust(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo, int diff) +{ +} +#endif /* NF_CONNTRACK */ +#endif diff --git a/include/net/netns/conntrack.h b/include/net/netns/conntrack.h index a053a19870cf..3aecdc7a84fb 100644 --- a/include/net/netns/conntrack.h +++ b/include/net/netns/conntrack.h @@ -4,10 +4,64 @@ #include <linux/list.h> #include <linux/list_nulls.h> #include <linux/atomic.h> +#include <linux/netfilter/nf_conntrack_tcp.h> struct ctl_table_header; struct nf_conntrack_ecache; +struct nf_proto_net { +#ifdef CONFIG_SYSCTL + struct ctl_table_header *ctl_table_header; + struct ctl_table *ctl_table; +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + struct ctl_table_header *ctl_compat_header; + struct ctl_table *ctl_compat_table; +#endif +#endif + unsigned int users; +}; + +struct nf_generic_net { + struct nf_proto_net pn; + unsigned int timeout; +}; + +struct nf_tcp_net { + struct nf_proto_net pn; + unsigned int timeouts[TCP_CONNTRACK_TIMEOUT_MAX]; + unsigned int tcp_loose; + unsigned int tcp_be_liberal; + unsigned int tcp_max_retrans; +}; + +enum udp_conntrack { + UDP_CT_UNREPLIED, 
+ UDP_CT_REPLIED, + UDP_CT_MAX +}; + +struct nf_udp_net { + struct nf_proto_net pn; + unsigned int timeouts[UDP_CT_MAX]; +}; + +struct nf_icmp_net { + struct nf_proto_net pn; + unsigned int timeout; +}; + +struct nf_ip_net { + struct nf_generic_net generic; + struct nf_tcp_net tcp; + struct nf_udp_net udp; + struct nf_icmp_net icmp; + struct nf_icmp_net icmpv6; +#if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) + struct ctl_table_header *ctl_table_header; + struct ctl_table *ctl_table; +#endif +}; + struct netns_ct { atomic_t count; unsigned int expect_count; @@ -28,6 +82,7 @@ struct netns_ct { unsigned int sysctl_log_invalid; /* Log invalid packets */ int sysctl_auto_assign_helper; bool auto_assign_helper_warned; + struct nf_ip_net nf_ct_proto; #ifdef CONFIG_SYSCTL struct ctl_table_header *sysctl_header; struct ctl_table_header *acct_sysctl_header; diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h index bbd023a1c9b9..0ffb8e31f3cd 100644 --- a/include/net/netns/ipv4.h +++ b/include/net/netns/ipv4.h @@ -7,10 +7,12 @@ #include <net/inet_frag.h> +struct tcpm_hash_bucket; struct ctl_table_header; struct ipv4_devconf; struct fib_rules_ops; struct hlist_head; +struct fib_table; struct sock; struct netns_ipv4 { @@ -24,13 +26,21 @@ struct netns_ipv4 { struct ipv4_devconf *devconf_dflt; #ifdef CONFIG_IP_MULTIPLE_TABLES struct fib_rules_ops *rules_ops; + bool fib_has_custom_rules; + struct fib_table *fib_local; + struct fib_table *fib_main; + struct fib_table *fib_default; +#endif +#ifdef CONFIG_IP_ROUTE_CLASSID + int fib_num_tclassid_users; #endif struct hlist_head *fib_table_hash; struct sock *fibnl; struct sock **icmp_sk; - struct sock *tcp_sock; - + struct inet_peer_base *peers; + struct tcpm_hash_bucket *tcp_metrics_hash; + unsigned int tcp_metrics_hash_log; struct netns_frags frags; #ifdef CONFIG_NETFILTER struct xt_table *iptable_filter; diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h index b42be53587ba..df0a5456a3fd 100644 --- a/include/net/netns/ipv6.h +++ b/include/net/netns/ipv6.h @@ -33,6 +33,7 @@ struct netns_ipv6 { struct netns_sysctl_ipv6 sysctl; struct ipv6_devconf *devconf_all; struct ipv6_devconf *devconf_dflt; + struct inet_peer_base *peers; struct netns_frags frags; #ifdef CONFIG_NETFILTER struct xt_table *ip6table_filter; diff --git a/include/net/netprio_cgroup.h b/include/net/netprio_cgroup.h index d58fdec47597..2719dec6b5a8 100644 --- a/include/net/netprio_cgroup.h +++ b/include/net/netprio_cgroup.h @@ -35,7 +35,7 @@ struct cgroup_netprio_state { extern int net_prio_subsys_id; #endif -extern void sock_update_netprioidx(struct sock *sk); +extern void sock_update_netprioidx(struct sock *sk, struct task_struct *task); #if IS_BUILTIN(CONFIG_NETPRIO_CGROUP) @@ -82,7 +82,7 @@ static inline u32 task_netprioidx(struct task_struct *p) #endif /* CONFIG_NETPRIO_CGROUP */ #else -#define sock_update_netprioidx(sk) +#define sock_update_netprioidx(sk, task) #endif #endif /* _NET_CLS_CGROUP_H */ diff --git a/include/net/nfc/hci.h b/include/net/nfc/hci.h index 4467c9460857..f5169b04f082 100644 --- a/include/net/nfc/hci.h +++ b/include/net/nfc/hci.h @@ -31,7 +31,8 @@ struct nfc_hci_ops { void (*close) (struct nfc_hci_dev *hdev); int (*hci_ready) (struct nfc_hci_dev *hdev); int (*xmit) (struct nfc_hci_dev *hdev, struct sk_buff *skb); - int (*start_poll) (struct nfc_hci_dev *hdev, u32 protocols); + int (*start_poll) (struct nfc_hci_dev *hdev, + u32 im_protocols, u32 tm_protocols); int (*target_from_gate) (struct nfc_hci_dev *hdev, u8 gate, 
struct nfc_target *target); int (*complete_target_discovered) (struct nfc_hci_dev *hdev, u8 gate, @@ -43,10 +44,20 @@ struct nfc_hci_ops { struct nfc_target *target); }; -#define NFC_HCI_MAX_CUSTOM_GATES 15 +/* Pipes */ +#define NFC_HCI_INVALID_PIPE 0x80 +#define NFC_HCI_LINK_MGMT_PIPE 0x00 +#define NFC_HCI_ADMIN_PIPE 0x01 + +struct nfc_hci_gate { + u8 gate; + u8 pipe; +}; + +#define NFC_HCI_MAX_CUSTOM_GATES 50 struct nfc_hci_init_data { u8 gate_count; - u8 gates[NFC_HCI_MAX_CUSTOM_GATES]; + struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES]; char session_id[9]; }; @@ -111,6 +122,8 @@ void nfc_hci_unregister_device(struct nfc_hci_dev *hdev); void nfc_hci_set_clientdata(struct nfc_hci_dev *hdev, void *clientdata); void *nfc_hci_get_clientdata(struct nfc_hci_dev *hdev); +void nfc_hci_driver_failure(struct nfc_hci_dev *hdev, int err); + /* Host IDs */ #define NFC_HCI_HOST_CONTROLLER_ID 0x00 #define NFC_HCI_TERMINAL_HOST_ID 0x01 @@ -179,7 +192,8 @@ void nfc_hci_event_received(struct nfc_hci_dev *hdev, u8 pipe, u8 event, void nfc_hci_recv_frame(struct nfc_hci_dev *hdev, struct sk_buff *skb); /* connecting to gates and sending hci instructions */ -int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate); +int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate, + u8 pipe); int nfc_hci_disconnect_gate(struct nfc_hci_dev *hdev, u8 gate); int nfc_hci_disconnect_all_gates(struct nfc_hci_dev *hdev); int nfc_hci_get_param(struct nfc_hci_dev *hdev, u8 gate, u8 idx, diff --git a/include/net/nfc/nfc.h b/include/net/nfc/nfc.h index b7ca4a2a1d72..6431f5e39022 100644 --- a/include/net/nfc/nfc.h +++ b/include/net/nfc/nfc.h @@ -53,7 +53,8 @@ struct nfc_target; struct nfc_ops { int (*dev_up)(struct nfc_dev *dev); int (*dev_down)(struct nfc_dev *dev); - int (*start_poll)(struct nfc_dev *dev, u32 protocols); + int (*start_poll)(struct nfc_dev *dev, + u32 im_protocols, u32 tm_protocols); void (*stop_poll)(struct nfc_dev *dev); int (*dep_link_up)(struct nfc_dev *dev, struct nfc_target *target, u8 comm_mode, u8 *gb, size_t gb_len); @@ -62,9 +63,10 @@ struct nfc_ops { u32 protocol); void (*deactivate_target)(struct nfc_dev *dev, struct nfc_target *target); - int (*data_exchange)(struct nfc_dev *dev, struct nfc_target *target, + int (*im_transceive)(struct nfc_dev *dev, struct nfc_target *target, struct sk_buff *skb, data_exchange_cb_t cb, void *cb_context); + int (*tm_send)(struct nfc_dev *dev, struct sk_buff *skb); int (*check_presence)(struct nfc_dev *dev, struct nfc_target *target); }; @@ -99,10 +101,10 @@ struct nfc_dev { int targets_generation; struct device dev; bool dev_up; + u8 rf_mode; bool polling; struct nfc_target *active_target; bool dep_link_up; - u32 dep_rf_mode; struct nfc_genl_data genl_data; u32 supported_protocols; @@ -188,6 +190,7 @@ struct sk_buff *nfc_alloc_recv_skb(unsigned int size, gfp_t gfp); int nfc_set_remote_general_bytes(struct nfc_dev *dev, u8 *gt, u8 gt_len); +u8 *nfc_get_local_general_bytes(struct nfc_dev *dev, size_t *gb_len); int nfc_targets_found(struct nfc_dev *dev, struct nfc_target *targets, int ntargets); @@ -196,4 +199,11 @@ int nfc_target_lost(struct nfc_dev *dev, u32 target_idx); int nfc_dep_link_is_up(struct nfc_dev *dev, u32 target_idx, u8 comm_mode, u8 rf_mode); +int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode, + u8 *gb, size_t gb_len); +int nfc_tm_deactivated(struct nfc_dev *dev); +int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb); + +void nfc_driver_failure(struct nfc_dev *dev, int err); 
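The reworked struct nfc_ops above splits polling into separate initiator- and target-mode protocol masks and replaces data_exchange with im_transceive plus the new tm_send path. A rough sketch of how a driver could wire these up is shown below; every drv_*() function is a hypothetical placeholder, and the mandatory dev_up/dev_down/activate_target/deactivate_target hooks are omitted for brevity.

#include <net/nfc/nfc.h>

/* Sketch only: minimal ops table for the split initiator/target API. */
static int drv_start_poll(struct nfc_dev *dev,
			  u32 im_protocols, u32 tm_protocols)
{
	/* program the radio for initiator and/or target polling */
	return 0;
}

static int drv_im_transceive(struct nfc_dev *dev, struct nfc_target *target,
			     struct sk_buff *skb, data_exchange_cb_t cb,
			     void *cb_context)
{
	/* initiator-mode exchange, replaces the old data_exchange hook */
	return 0;
}

static int drv_tm_send(struct nfc_dev *dev, struct sk_buff *skb)
{
	/* target-mode transmit, new with this series */
	return 0;
}

static struct nfc_ops drv_nfc_ops = {
	.start_poll	= drv_start_poll,
	.im_transceive	= drv_im_transceive,
	.tm_send	= drv_tm_send,
};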
+ #endif /* __NET_NFC_H */ diff --git a/include/net/nfc/shdlc.h b/include/net/nfc/shdlc.h index ab06afd462da..35e930d2f638 100644 --- a/include/net/nfc/shdlc.h +++ b/include/net/nfc/shdlc.h @@ -27,7 +27,8 @@ struct nfc_shdlc_ops { void (*close) (struct nfc_shdlc *shdlc); int (*hci_ready) (struct nfc_shdlc *shdlc); int (*xmit) (struct nfc_shdlc *shdlc, struct sk_buff *skb); - int (*start_poll) (struct nfc_shdlc *shdlc, u32 protocols); + int (*start_poll) (struct nfc_shdlc *shdlc, + u32 im_protocols, u32 tm_protocols); int (*target_from_gate) (struct nfc_shdlc *shdlc, u8 gate, struct nfc_target *target); int (*complete_target_discovered) (struct nfc_shdlc *shdlc, u8 gate, diff --git a/include/net/protocol.h b/include/net/protocol.h index 875f4895b033..057f2d315567 100644 --- a/include/net/protocol.h +++ b/include/net/protocol.h @@ -29,11 +29,15 @@ #include <linux/ipv6.h> #endif -#define MAX_INET_PROTOS 256 /* Must be a power of 2 */ - +/* This is one larger than the largest protocol value that can be + * found in an ipv4 or ipv6 header. Since in both cases the protocol + * value is presented in a __u8, this is defined to be 256. + */ +#define MAX_INET_PROTOS 256 /* This is used to register protocols. */ struct net_protocol { + void (*early_demux)(struct sk_buff *skb); int (*handler)(struct sk_buff *skb); void (*err_handler)(struct sk_buff *skb, u32 info); int (*gso_send_check)(struct sk_buff *skb); diff --git a/include/net/regulatory.h b/include/net/regulatory.h index a5f79933e211..7dcaa2794fde 100644 --- a/include/net/regulatory.h +++ b/include/net/regulatory.h @@ -52,6 +52,10 @@ enum environment_cap { * DFS master operation on a known DFS region (NL80211_DFS_*), * dfs_region represents that region. Drivers can use this and the * @alpha2 to adjust their device's DFS parameters as required. + * @user_reg_hint_type: if the @initiator was of type + * %NL80211_REGDOM_SET_BY_USER, this classifies the type + * of hint passed. This could be any of the %NL80211_USER_REG_HINT_* + * types. * @intersect: indicates whether the wireless core should intersect * the requested regulatory domain with the presently set regulatory * domain. @@ -70,6 +74,7 @@ enum environment_cap { struct regulatory_request { int wiphy_idx; enum nl80211_reg_initiator initiator; + enum nl80211_user_reg_hint_type user_reg_hint_type; char alpha2[2]; u8 dfs_region; bool intersect; diff --git a/include/net/route.h b/include/net/route.h index 98705468ac03..c29ef2733f2d 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -40,45 +40,39 @@ #define RT_CONN_FLAGS(sk) (RT_TOS(inet_sk(sk)->tos) | sock_flag(sk, SOCK_LOCALROUTE)) struct fib_nh; -struct inet_peer; struct fib_info; struct rtable { struct dst_entry dst; - /* Lookup key. 
*/ - __be32 rt_key_dst; - __be32 rt_key_src; - int rt_genid; unsigned int rt_flags; __u16 rt_type; - __u8 rt_key_tos; + __u16 rt_is_input; - __be32 rt_dst; /* Path destination */ - __be32 rt_src; /* Path source */ - int rt_route_iif; int rt_iif; - int rt_oif; - __u32 rt_mark; /* Info on neighbour */ __be32 rt_gateway; /* Miscellaneous cached information */ - __be32 rt_spec_dst; /* RFC1122 specific destination */ - u32 rt_peer_genid; - struct inet_peer *peer; /* long-living peer info */ - struct fib_info *fi; /* for client ref to shared metrics */ + u32 rt_pmtu; }; static inline bool rt_is_input_route(const struct rtable *rt) { - return rt->rt_route_iif != 0; + return rt->rt_is_input != 0; } static inline bool rt_is_output_route(const struct rtable *rt) { - return rt->rt_route_iif == 0; + return rt->rt_is_input == 0; +} + +static inline __be32 rt_nexthop(const struct rtable *rt, __be32 daddr) +{ + if (rt->rt_gateway) + return rt->rt_gateway; + return daddr; } struct ip_rt_acct { @@ -111,10 +105,7 @@ extern struct ip_rt_acct __percpu *ip_rt_acct; struct in_device; extern int ip_rt_init(void); -extern void ip_rt_redirect(__be32 old_gw, __be32 dst, __be32 new_gw, - __be32 src, struct net_device *dev); extern void rt_cache_flush(struct net *net, int how); -extern void rt_cache_flush_batch(struct net *net); extern struct rtable *__ip_route_output_key(struct net *, struct flowi4 *flp); extern struct rtable *ip_route_output_flow(struct net *, struct flowi4 *flp, struct sock *sk); @@ -166,24 +157,16 @@ static inline struct rtable *ip_route_output_gre(struct net *net, struct flowi4 return ip_route_output_key(net, fl4); } -extern int ip_route_input_common(struct sk_buff *skb, __be32 dst, __be32 src, - u8 tos, struct net_device *devin, bool noref); - -static inline int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src, - u8 tos, struct net_device *devin) -{ - return ip_route_input_common(skb, dst, src, tos, devin, false); -} - -static inline int ip_route_input_noref(struct sk_buff *skb, __be32 dst, __be32 src, - u8 tos, struct net_device *devin) -{ - return ip_route_input_common(skb, dst, src, tos, devin, true); -} +extern int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src, + u8 tos, struct net_device *devin); -extern unsigned short ip_rt_frag_needed(struct net *net, const struct iphdr *iph, - unsigned short new_mtu, struct net_device *dev); -extern void ip_rt_send_redirect(struct sk_buff *skb); +extern void ipv4_update_pmtu(struct sk_buff *skb, struct net *net, u32 mtu, + int oif, u32 mark, u8 protocol, int flow_flags); +extern void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu); +extern void ipv4_redirect(struct sk_buff *skb, struct net *net, + int oif, u32 mark, u8 protocol, int flow_flags); +extern void ipv4_sk_redirect(struct sk_buff *skb, struct sock *sk); +extern void ip_rt_send_redirect(struct sk_buff *skb); extern unsigned int inet_addr_type(struct net *net, __be32 addr); extern unsigned int inet_dev_addr_type(struct net *net, const struct net_device *dev, __be32 addr); @@ -244,8 +227,6 @@ static inline void ip_route_connect_init(struct flowi4 *fl4, __be32 dst, __be32 if (inet_sk(sk)->transparent) flow_flags |= FLOWI_FLAG_ANYSRC; - if (protocol == IPPROTO_TCP) - flow_flags |= FLOWI_FLAG_PRECOW_METRICS; if (can_sleep) flow_flags |= FLOWI_FLAG_CAN_SLEEP; @@ -294,20 +275,13 @@ static inline struct rtable *ip_route_newports(struct flowi4 *fl4, struct rtable return rt; } -extern void rt_bind_peer(struct rtable *rt, __be32 daddr, int create); - -static 
inline struct inet_peer *rt_get_peer(struct rtable *rt, __be32 daddr) -{ - if (rt->peer) - return rt->peer; - - rt_bind_peer(rt, daddr, 0); - return rt->peer; -} - static inline int inet_iif(const struct sk_buff *skb) { - return skb_rtable(skb)->rt_iif; + int iif = skb_rtable(skb)->rt_iif; + + if (iif) + return iif; + return skb->skb_iif; } extern int sysctl_ip_default_ttl; diff --git a/include/net/rtnetlink.h b/include/net/rtnetlink.h index bbcfd0993432..6b00c4fc4291 100644 --- a/include/net/rtnetlink.h +++ b/include/net/rtnetlink.h @@ -44,8 +44,10 @@ static inline int rtnl_msg_family(const struct nlmsghdr *nlh) * @get_xstats_size: Function to calculate required room for dumping device * specific statistics * @fill_xstats: Function to dump device specific statistics - * @get_tx_queues: Function to determine number of transmit queues to create when - * creating a new device. + * @get_num_tx_queues: Function to determine number of transmit queues + * to create when creating a new device. + * @get_num_rx_queues: Function to determine number of receive queues + * to create when creating a new device. */ struct rtnl_link_ops { struct list_head list; @@ -77,8 +79,8 @@ struct rtnl_link_ops { size_t (*get_xstats_size)(const struct net_device *dev); int (*fill_xstats)(struct sk_buff *skb, const struct net_device *dev); - int (*get_tx_queues)(struct net *net, - struct nlattr *tb[]); + unsigned int (*get_num_tx_queues)(void); + unsigned int (*get_num_rx_queues)(void); }; extern int __rtnl_link_register(struct rtnl_link_ops *ops); diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index 9d7d54a00e63..d9611e032418 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -220,7 +220,7 @@ struct tcf_proto { struct qdisc_skb_cb { unsigned int pkt_len; - u16 bond_queue_mapping; + u16 slave_dev_queue_mapping; u16 _pad; unsigned char data[20]; }; diff --git a/include/net/sctp/constants.h b/include/net/sctp/constants.h index 942b864f6135..d053d2e99876 100644 --- a/include/net/sctp/constants.h +++ b/include/net/sctp/constants.h @@ -334,6 +334,7 @@ typedef enum { typedef enum { SCTP_TRANSPORT_UP, SCTP_TRANSPORT_DOWN, + SCTP_TRANSPORT_PF, } sctp_transport_cmd_t; /* These are the address scopes defined mainly for IPv4 addresses diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h index a2ef81466b00..ff499640528b 100644 --- a/include/net/sctp/sctp.h +++ b/include/net/sctp/sctp.h @@ -162,6 +162,8 @@ struct sock *sctp_err_lookup(int family, struct sk_buff *, void sctp_err_finish(struct sock *, struct sctp_association *); void sctp_icmp_frag_needed(struct sock *, struct sctp_association *, struct sctp_transport *t, __u32 pmtu); +void sctp_icmp_redirect(struct sock *, struct sctp_transport *, + struct sk_buff *); void sctp_icmp_proto_unreachable(struct sock *sk, struct sctp_association *asoc, struct sctp_transport *t); @@ -517,10 +519,10 @@ static inline int sctp_frag_point(const struct sctp_association *asoc, int pmtu) return frag; } -static inline void sctp_assoc_pending_pmtu(struct sctp_association *asoc) +static inline void sctp_assoc_pending_pmtu(struct sock *sk, struct sctp_association *asoc) { - sctp_assoc_sync_pmtu(asoc); + sctp_assoc_sync_pmtu(sk, asoc); asoc->pmtu_pending = 0; } diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h index fecdf31816f2..fc5e60016e37 100644 --- a/include/net/sctp/structs.h +++ b/include/net/sctp/structs.h @@ -161,6 +161,12 @@ extern struct sctp_globals { int max_retrans_path; int max_retrans_init; + /* 
Potentially-Failed.Max.Retrans sysctl value + * taken from: + * http://tools.ietf.org/html/draft-nishida-tsvwg-sctp-failover-05 + */ + int pf_retrans; + /* * Policy for preforming sctp/socket accounting * 0 - do socket level accounting, all assocs share sk_sndbuf @@ -258,6 +264,7 @@ extern struct sctp_globals { #define sctp_sndbuf_policy (sctp_globals.sndbuf_policy) #define sctp_rcvbuf_policy (sctp_globals.rcvbuf_policy) #define sctp_max_retrans_path (sctp_globals.max_retrans_path) +#define sctp_pf_retrans (sctp_globals.pf_retrans) #define sctp_max_retrans_init (sctp_globals.max_retrans_init) #define sctp_sack_timeout (sctp_globals.sack_timeout) #define sctp_hb_interval (sctp_globals.hb_interval) @@ -990,10 +997,15 @@ struct sctp_transport { /* This is the max_retrans value for the transport and will * be initialized from the assocs value. This can be changed - * using SCTP_SET_PEER_ADDR_PARAMS socket option. + * using the SCTP_SET_PEER_ADDR_PARAMS socket option. */ __u16 pathmaxrxt; + /* This is the partially failed retrans value for the transport + * and will be initialized from the assocs value. This can be changed + * using the SCTP_PEER_ADDR_THLDS socket option + */ + int pf_retrans; /* PMTU : The current known path MTU. */ __u32 pathmtu; @@ -1091,7 +1103,7 @@ void sctp_transport_burst_limited(struct sctp_transport *); void sctp_transport_burst_reset(struct sctp_transport *); unsigned long sctp_transport_timeout(struct sctp_transport *); void sctp_transport_reset(struct sctp_transport *); -void sctp_transport_update_pmtu(struct sctp_transport *, u32); +void sctp_transport_update_pmtu(struct sock *, struct sctp_transport *, u32); void sctp_transport_immediate_rtx(struct sctp_transport *); @@ -1664,6 +1676,12 @@ struct sctp_association { */ int max_retrans; + /* This is the partially failed retrans value for the transport + * and will be initialized from the assocs value. This can be + * changed using the SCTP_PEER_ADDR_THLDS socket option + */ + int pf_retrans; + /* Maximum number of times the endpoint will retransmit INIT */ __u16 max_init_attempts; @@ -2003,7 +2021,7 @@ void sctp_assoc_update(struct sctp_association *old, __u32 sctp_association_get_next_tsn(struct sctp_association *); -void sctp_assoc_sync_pmtu(struct sctp_association *); +void sctp_assoc_sync_pmtu(struct sock *, struct sctp_association *); void sctp_assoc_rwnd_increase(struct sctp_association *, unsigned int); void sctp_assoc_rwnd_decrease(struct sctp_association *, unsigned int); void sctp_assoc_set_primary(struct sctp_association *, diff --git a/include/net/sctp/user.h b/include/net/sctp/user.h index 0842ef00b2fe..1b02d7ad453b 100644 --- a/include/net/sctp/user.h +++ b/include/net/sctp/user.h @@ -93,6 +93,7 @@ typedef __s32 sctp_assoc_t; #define SCTP_GET_ASSOC_NUMBER 28 /* Read only */ #define SCTP_GET_ASSOC_ID_LIST 29 /* Read only */ #define SCTP_AUTO_ASCONF 30 +#define SCTP_PEER_ADDR_THLDS 31 /* Internal Socket Options. Some of the sctp library functions are * implemented using these socket options. 
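The new SCTP_PEER_ADDR_THLDS option above lets applications tune the Potentially-Failed threshold per peer address. A userspace sketch follows, using the struct sctp_paddrthlds layout that appears a little further down in this patch; it assumes a libc/lksctp header that already exposes the new option and structure, and error handling is elided.

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

/* Sketch only: set Path.Max.Retrans and the PF threshold for one peer. */
static int set_pf_threshold(int sd, const struct sockaddr_in *peer,
			    uint16_t pathmaxrxt, uint16_t pfthld)
{
	struct sctp_paddrthlds thlds;

	memset(&thlds, 0, sizeof(thlds));
	thlds.spt_assoc_id = 0;			/* current assoc on a 1-to-1 socket */
	memcpy(&thlds.spt_address, peer, sizeof(*peer));
	thlds.spt_pathmaxrxt = pathmaxrxt;	/* Path.Max.Retrans */
	thlds.spt_pathpfthld = pfthld;		/* Potentially-Failed threshold */

	return setsockopt(sd, IPPROTO_SCTP, SCTP_PEER_ADDR_THLDS,
			  &thlds, sizeof(thlds));
}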
@@ -649,6 +650,7 @@ struct sctp_paddrinfo { */ enum sctp_spinfo_state { SCTP_INACTIVE, + SCTP_PF, SCTP_ACTIVE, SCTP_UNCONFIRMED, SCTP_UNKNOWN = 0xffff /* Value used for transport state unknown */ @@ -741,4 +743,13 @@ typedef struct { int sd; } sctp_peeloff_arg_t; +/* + * Peer Address Thresholds socket option + */ +struct sctp_paddrthlds { + sctp_assoc_t spt_assoc_id; + struct sockaddr_storage spt_address; + __u16 spt_pathmaxrxt; + __u16 spt_pathpfthld; +}; #endif /* __net_sctp_user_h__ */ diff --git a/include/net/sock.h b/include/net/sock.h index 4a4521699563..e067f8c18f88 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -198,6 +198,7 @@ struct cg_proto; * @sk_lock: synchronizer * @sk_rcvbuf: size of receive buffer in bytes * @sk_wq: sock wait queue and async head + * @sk_rx_dst: receive input route used by early tcp demux * @sk_dst_cache: destination cache * @sk_dst_lock: destination cache lock * @sk_policy: flow policy @@ -317,6 +318,7 @@ struct sock { struct xfrm_policy *sk_policy[2]; #endif unsigned long sk_flags; + struct dst_entry *sk_rx_dst; struct dst_entry *sk_dst_cache; spinlock_t sk_dst_lock; atomic_t sk_wmem_alloc; @@ -856,6 +858,9 @@ struct proto { int (*backlog_rcv) (struct sock *sk, struct sk_buff *skb); + void (*release_cb)(struct sock *sk); + void (*mtu_reduced)(struct sock *sk); + /* Keeping track of sk's, looking them up, and port selection methods. */ void (*hash)(struct sock *sk); void (*unhash)(struct sock *sk); @@ -1426,6 +1431,7 @@ extern struct sk_buff *sock_rmalloc(struct sock *sk, gfp_t priority); extern void sock_wfree(struct sk_buff *skb); extern void sock_rfree(struct sk_buff *skb); +extern void sock_edemux(struct sk_buff *skb); extern int sock_setsockopt(struct socket *sock, int level, int op, char __user *optval, @@ -2152,7 +2158,7 @@ static inline void sk_change_net(struct sock *sk, struct net *net) static inline struct sock *skb_steal_sock(struct sk_buff *skb) { - if (unlikely(skb->sk)) { + if (skb->sk) { struct sock *sk = skb->sk; skb->destructor = NULL; diff --git a/include/net/tcp.h b/include/net/tcp.h index e79aa48d9fc1..e19124b84cd2 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -170,6 +170,11 @@ extern void tcp_time_wait(struct sock *sk, int state, int timeo); #define TCPOPT_TIMESTAMP 8 /* Better RTT estimations/PAWS */ #define TCPOPT_MD5SIG 19 /* MD5 Signature (RFC2385) */ #define TCPOPT_COOKIE 253 /* Cookie extension (experimental) */ +#define TCPOPT_EXP 254 /* Experimental */ +/* Magic number to be after the option value for sharing TCP + * experimental options. 
See draft-ietf-tcpm-experimental-options-00.txt + */ +#define TCPOPT_FASTOPEN_MAGIC 0xF989 /* * TCP option lengths @@ -180,6 +185,7 @@ extern void tcp_time_wait(struct sock *sk, int state, int timeo); #define TCPOLEN_SACK_PERM 2 #define TCPOLEN_TIMESTAMP 10 #define TCPOLEN_MD5SIG 18 +#define TCPOLEN_EXP_FASTOPEN_BASE 4 #define TCPOLEN_COOKIE_BASE 2 /* Cookie-less header extension */ #define TCPOLEN_COOKIE_PAIR 3 /* Cookie pair header extension */ #define TCPOLEN_COOKIE_MIN (TCPOLEN_COOKIE_BASE+TCP_COOKIE_MIN) @@ -206,6 +212,10 @@ extern void tcp_time_wait(struct sock *sk, int state, int timeo); /* TCP initial congestion window as per draft-hkchu-tcpm-initcwnd-01 */ #define TCP_INIT_CWND 10 +/* Bit Flags for sysctl_tcp_fastopen */ +#define TFO_CLIENT_ENABLE 1 +#define TFO_CLIENT_NO_COOKIE 4 /* Data in SYN w/o cookie option */ + extern struct inet_timewait_death_row tcp_death_row; /* sysctl variables for tcp */ @@ -222,6 +232,7 @@ extern int sysctl_tcp_retries1; extern int sysctl_tcp_retries2; extern int sysctl_tcp_orphan_retries; extern int sysctl_tcp_syncookies; +extern int sysctl_tcp_fastopen; extern int sysctl_tcp_retrans_collapse; extern int sysctl_tcp_stdurg; extern int sysctl_tcp_rfc1337; @@ -253,6 +264,8 @@ extern int sysctl_tcp_cookie_size; extern int sysctl_tcp_thin_linear_timeouts; extern int sysctl_tcp_thin_dupack; extern int sysctl_tcp_early_retrans; +extern int sysctl_tcp_limit_output_bytes; +extern int sysctl_tcp_challenge_ack_limit; extern atomic_long_t tcp_memory_allocated; extern struct percpu_counter tcp_sockets_allocated; @@ -321,19 +334,24 @@ extern struct proto tcp_prot; extern void tcp_init_mem(struct net *net); +extern void tcp_tasklet_init(void); + extern void tcp_v4_err(struct sk_buff *skb, u32); extern void tcp_shutdown (struct sock *sk, int how); +extern void tcp_v4_early_demux(struct sk_buff *skb); extern int tcp_v4_rcv(struct sk_buff *skb); -extern struct inet_peer *tcp_v4_get_peer(struct sock *sk, bool *release_it); -extern void *tcp_v4_tw_get_peer(struct sock *sk); +extern struct inet_peer *tcp_v4_get_peer(struct sock *sk); extern int tcp_v4_tw_remember_stamp(struct inet_timewait_sock *tw); extern int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, size_t size); extern int tcp_sendpage(struct sock *sk, struct page *page, int offset, size_t size, int flags); +extern void tcp_release_cb(struct sock *sk); +extern void tcp_write_timer_handler(struct sock *sk); +extern void tcp_delack_timer_handler(struct sock *sk); extern int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg); extern int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th, unsigned int len); @@ -388,6 +406,19 @@ extern void tcp_enter_frto(struct sock *sk); extern void tcp_enter_loss(struct sock *sk, int how); extern void tcp_clear_retrans(struct tcp_sock *tp); extern void tcp_update_metrics(struct sock *sk); +extern void tcp_init_metrics(struct sock *sk); +extern void tcp_metrics_init(void); +extern bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst, bool paws_check); +extern bool tcp_remember_stamp(struct sock *sk); +extern bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw); +extern void tcp_fastopen_cache_get(struct sock *sk, u16 *mss, + struct tcp_fastopen_cookie *cookie, + int *syn_loss, unsigned long *last_syn_loss); +extern void tcp_fastopen_cache_set(struct sock *sk, u16 mss, + struct tcp_fastopen_cookie *cookie, + bool syn_lost); +extern void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry 
*dst); +extern void tcp_disable_fack(struct tcp_sock *tp); extern void tcp_close(struct sock *sk, long timeout); extern void tcp_init_sock(struct sock *sk); extern unsigned int tcp_poll(struct file * file, struct socket *sock, @@ -406,7 +437,7 @@ extern int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, size_t len, int nonblock, int flags, int *addr_len); extern void tcp_parse_options(const struct sk_buff *skb, struct tcp_options_received *opt_rx, const u8 **hvpp, - int estab); + int estab, struct tcp_fastopen_cookie *foc); extern const u8 *tcp_parse_md5sig_option(const struct tcphdr *th); /* @@ -556,6 +587,8 @@ static inline u32 __tcp_set_rto(const struct tcp_sock *tp) return (tp->srtt >> 3) + tp->rttvar; } +extern void tcp_set_rto(struct sock *sk); + static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd) { tp->pred_flags = htonl((tp->tcp_header_len << 26) | @@ -1264,6 +1297,15 @@ extern int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff extern int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key); +struct tcp_fastopen_request { + /* Fast Open cookie. Size 0 means a cookie request */ + struct tcp_fastopen_cookie cookie; + struct msghdr *data; /* data in MSG_FASTOPEN */ + u16 copied; /* queued in tcp_connect() */ +}; + +void tcp_free_fastopen_req(struct tcp_sock *tp); + /* write queue abstraction */ static inline void tcp_write_queue_purge(struct sock *sk) { diff --git a/include/net/timewait_sock.h b/include/net/timewait_sock.h index 8d6689cb2c66..68f0ecad6c6e 100644 --- a/include/net/timewait_sock.h +++ b/include/net/timewait_sock.h @@ -22,7 +22,6 @@ struct timewait_sock_ops { int (*twsk_unique)(struct sock *sk, struct sock *sktw, void *twp); void (*twsk_destructor)(struct sock *sk); - void *(*twsk_getpeer)(struct sock *sk); }; static inline int twsk_unique(struct sock *sk, struct sock *sktw, void *twp) @@ -41,11 +40,4 @@ static inline void twsk_destructor(struct sock *sk) sk->sk_prot->twsk_prot->twsk_destructor(sk); } -static inline void *twsk_getpeer(struct sock *sk) -{ - if (sk->sk_prot->twsk_prot->twsk_getpeer) - return sk->sk_prot->twsk_prot->twsk_getpeer(sk); - return NULL; -} - #endif /* _TIMEWAIT_SOCK_H */ diff --git a/include/net/xfrm.h b/include/net/xfrm.h index e0a55df5bde8..d9509eb29b80 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1475,6 +1475,8 @@ extern int xfrm4_output(struct sk_buff *skb); extern int xfrm4_output_finish(struct sk_buff *skb); extern int xfrm4_tunnel_register(struct xfrm_tunnel *handler, unsigned short family); extern int xfrm4_tunnel_deregister(struct xfrm_tunnel *handler, unsigned short family); +extern int xfrm4_mode_tunnel_input_register(struct xfrm_tunnel *handler); +extern int xfrm4_mode_tunnel_input_deregister(struct xfrm_tunnel *handler); extern int xfrm6_extract_header(struct sk_buff *skb); extern int xfrm6_extract_input(struct xfrm_state *x, struct sk_buff *skb); extern int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi); @@ -1682,13 +1684,11 @@ static inline int xfrm_mark_get(struct nlattr **attrs, struct xfrm_mark *m) static inline int xfrm_mark_put(struct sk_buff *skb, const struct xfrm_mark *m) { - if ((m->m | m->v) && - nla_put(skb, XFRMA_MARK, sizeof(struct xfrm_mark), m)) - goto nla_put_failure; - return 0; + int ret = 0; -nla_put_failure: - return -1; + if (m->m | m->v) + ret = nla_put(skb, XFRMA_MARK, sizeof(struct xfrm_mark), m); + return ret; } #endif /* _NET_XFRM_H */ diff --git a/kernel/audit.c b/kernel/audit.c index 
1c7f2c61416b..4a3f28d2ca65 100644 --- a/kernel/audit.c +++ b/kernel/audit.c @@ -384,7 +384,7 @@ static void audit_hold_skb(struct sk_buff *skb) static void audit_printk_skb(struct sk_buff *skb) { struct nlmsghdr *nlh = nlmsg_hdr(skb); - char *data = NLMSG_DATA(nlh); + char *data = nlmsg_data(nlh); if (nlh->nlmsg_type != AUDIT_EOE) { if (printk_ratelimit()) @@ -516,14 +516,15 @@ struct sk_buff *audit_make_reply(int pid, int seq, int type, int done, if (!skb) return NULL; - nlh = NLMSG_NEW(skb, pid, seq, t, size, flags); - data = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, t, size, flags); + if (!nlh) + goto out_kfree_skb; + data = nlmsg_data(nlh); memcpy(data, payload, size); return skb; -nlmsg_failure: /* Used by NLMSG_NEW */ - if (skb) - kfree_skb(skb); +out_kfree_skb: + kfree_skb(skb); return NULL; } @@ -680,7 +681,7 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh) sessionid = audit_get_sessionid(current); security_task_getsecid(current, &sid); seq = nlh->nlmsg_seq; - data = NLMSG_DATA(nlh); + data = nlmsg_data(nlh); switch (msg_type) { case AUDIT_GET: @@ -961,14 +962,17 @@ static void audit_receive(struct sk_buff *skb) static int __init audit_init(void) { int i; + struct netlink_kernel_cfg cfg = { + .input = audit_receive, + }; if (audit_initialized == AUDIT_DISABLED) return 0; printk(KERN_INFO "audit: initializing netlink socket (%s)\n", audit_default ? "enabled" : "disabled"); - audit_sock = netlink_kernel_create(&init_net, NETLINK_AUDIT, 0, - audit_receive, NULL, THIS_MODULE); + audit_sock = netlink_kernel_create(&init_net, NETLINK_AUDIT, + THIS_MODULE, &cfg); if (!audit_sock) audit_panic("cannot initialize netlink socket"); else @@ -1060,13 +1064,15 @@ static struct audit_buffer * audit_buffer_alloc(struct audit_context *ctx, ab->skb = nlmsg_new(AUDIT_BUFSIZ, gfp_mask); if (!ab->skb) - goto nlmsg_failure; + goto err; - nlh = NLMSG_NEW(ab->skb, 0, 0, type, 0, 0); + nlh = nlmsg_put(ab->skb, 0, 0, type, 0, 0); + if (!nlh) + goto out_kfree_skb; return ab; -nlmsg_failure: /* Used by NLMSG_NEW */ +out_kfree_skb: kfree_skb(ab->skb); ab->skb = NULL; err: diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c index 1a91efa6d121..0401d2916d9f 100644 --- a/lib/kobject_uevent.c +++ b/lib/kobject_uevent.c @@ -373,13 +373,16 @@ EXPORT_SYMBOL_GPL(add_uevent_var); static int uevent_net_init(struct net *net) { struct uevent_sock *ue_sk; + struct netlink_kernel_cfg cfg = { + .groups = 1, + }; ue_sk = kzalloc(sizeof(*ue_sk), GFP_KERNEL); if (!ue_sk) return -ENOMEM; ue_sk->sk = netlink_kernel_create(net, NETLINK_KOBJECT_UEVENT, - 1, NULL, NULL, THIS_MODULE); + THIS_MODULE, &cfg); if (!ue_sk->sk) { printk(KERN_ERR "kobject_uevent: unable to create netlink socket!\n"); diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index da1bc9c3cf38..73a2a83ee2da 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -681,10 +681,7 @@ static int vlan_dev_netpoll_setup(struct net_device *dev, struct netpoll_info *n if (!netpoll) goto out; - netpoll->dev = real_dev; - strlcpy(netpoll->dev_name, real_dev->name, IFNAMSIZ); - - err = __netpoll_setup(netpoll); + err = __netpoll_setup(netpoll, real_dev); if (err) { kfree(netpoll); goto out; diff --git a/net/9p/client.c b/net/9p/client.c index a170893d70e0..8260f132b32e 100644 --- a/net/9p/client.c +++ b/net/9p/client.c @@ -1548,7 +1548,7 @@ p9_client_read(struct p9_fid *fid, char *data, char __user *udata, u64 offset, kernel_buf = 1; indata = data; } else - indata = (char *)udata; + indata = (__force char *)udata; /* * 
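The audit and kobject_uevent hunks above show the new calling convention for in-kernel netlink sockets: the separate groups/input arguments of netlink_kernel_create() are folded into struct netlink_kernel_cfg, and the old NLMSG_NEW/NLMSG_DATA macros with their hidden nlmsg_failure goto give way to nlmsg_put()/nlmsg_data() with an explicit NULL check. A condensed sketch of the resulting pattern; the NETLINK_TEST protocol number, message type and handler are made up for illustration:

#include <linux/module.h>
#include <linux/netlink.h>
#include <linux/string.h>
#include <net/net_namespace.h>
#include <net/netlink.h>

#define NETLINK_TEST   31       /* hypothetical protocol number */
#define TEST_MSG_REPLY 0x10     /* hypothetical message type */

static struct sock *test_nlsk;

static void test_nl_rcv(struct sk_buff *skb)
{
    struct nlmsghdr *nlh = nlmsg_hdr(skb);

    pr_info("test: got type %u, %u payload bytes\n",
            nlh->nlmsg_type, nlmsg_len(nlh));
}

static struct sk_buff *test_nl_reply(int pid, int seq, void *payload, int size)
{
    struct sk_buff *skb = nlmsg_new(size, GFP_KERNEL);
    struct nlmsghdr *nlh;

    if (!skb)
        return NULL;

    /* nlmsg_put() simply returns NULL on failure, no nlmsg_failure label. */
    nlh = nlmsg_put(skb, pid, seq, TEST_MSG_REPLY, size, 0);
    if (!nlh) {
        kfree_skb(skb);
        return NULL;
    }
    memcpy(nlmsg_data(nlh), payload, size);
    return skb;
}

static int __init test_nl_init(void)
{
    struct netlink_kernel_cfg cfg = {
        .groups = 1,
        .input  = test_nl_rcv,
    };

    /* Post-conversion signature: module pointer plus a cfg block instead of
     * the old groups/input/mutex argument list. */
    test_nlsk = netlink_kernel_create(&init_net, NETLINK_TEST,
                                      THIS_MODULE, &cfg);
    return test_nlsk ? 0 : -ENOMEM;
}
module_init(test_nl_init);
MODULE_LICENSE("GPL");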
response header len is 11 * PDU Header(7) + IO Size (4) diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c index 2a167658bb95..35b8911b1c8e 100644 --- a/net/9p/trans_virtio.c +++ b/net/9p/trans_virtio.c @@ -212,7 +212,7 @@ static int p9_virtio_cancel(struct p9_client *client, struct p9_req_t *req) * this takes a list of pages. * @sg: scatter/gather list to pack into * @start: which segment of the sg_list to start at - * @**pdata: a list of pages to add into sg. + * @pdata: a list of pages to add into sg. * @nr_pages: number of pages to pack into the scatter/gather list * @data: data to pack into scatter/gather list * @count: amount of data to pack into the scatter/gather list diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 86852963b7f7..33475291c9c1 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -129,8 +129,8 @@ found: /** * atalk_find_or_insert_socket - Try to find a socket matching ADDR - * @sk - socket to insert in the list if it is not there already - * @sat - address to search for + * @sk: socket to insert in the list if it is not there already + * @sat: address to search for * * Try to find a socket matching ADDR in the socket list, if found then return * it. If not, insert SK into the socket list. @@ -1066,8 +1066,8 @@ static int atalk_release(struct socket *sock) /** * atalk_pick_and_bind_port - Pick a source port when one is not given - * @sk - socket to insert into the tables - * @sat - address to search for + * @sk: socket to insert into the tables + * @sat: address to search for * * Pick a source port when one is not given. If we can find a suitable free * one, we insert the socket into the tables using it. diff --git a/net/atm/lec.c b/net/atm/lec.c index a7d172105c99..2e3d942e77f1 100644 --- a/net/atm/lec.c +++ b/net/atm/lec.c @@ -231,9 +231,11 @@ static netdev_tx_t lec_start_xmit(struct sk_buff *skb, if (skb_headroom(skb) < 2) { pr_debug("reallocating skb\n"); skb2 = skb_realloc_headroom(skb, LEC_HEADER_LEN); - kfree_skb(skb); - if (skb2 == NULL) + if (unlikely(!skb2)) { + kfree_skb(skb); return NETDEV_TX_OK; + } + consume_skb(skb); skb = skb2; } skb_push(skb, 2); @@ -1602,7 +1604,7 @@ static void lec_arp_expire_vcc(unsigned long data) { unsigned long flags; struct lec_arp_table *to_remove = (struct lec_arp_table *)data; - struct lec_priv *priv = (struct lec_priv *)to_remove->priv; + struct lec_priv *priv = to_remove->priv; del_timer(&to_remove->timer); diff --git a/net/atm/pppoatm.c b/net/atm/pppoatm.c index ce1e59fdae7b..226dca989448 100644 --- a/net/atm/pppoatm.c +++ b/net/atm/pppoatm.c @@ -283,7 +283,7 @@ static int pppoatm_send(struct ppp_channel *chan, struct sk_buff *skb) kfree_skb(n); goto nospace; } - kfree_skb(skb); + consume_skb(skb); skb = n; if (skb == NULL) return DROP_PACKET; diff --git a/net/ax25/ax25_addr.c b/net/ax25/ax25_addr.c index 9162409559cf..e7c9b0ea17a1 100644 --- a/net/ax25/ax25_addr.c +++ b/net/ax25/ax25_addr.c @@ -189,8 +189,10 @@ const unsigned char *ax25_addr_parse(const unsigned char *buf, int len, digi->ndigi = 0; while (!(buf[-1] & AX25_EBIT)) { - if (d >= AX25_MAX_DIGIS) return NULL; /* Max of 6 digis */ - if (len < 7) return NULL; /* Short packet */ + if (d >= AX25_MAX_DIGIS) + return NULL; + if (len < AX25_ADDR_LEN) + return NULL; memcpy(&digi->calls[d], buf, AX25_ADDR_LEN); digi->ndigi = d + 1; diff --git a/net/ax25/ax25_out.c b/net/ax25/ax25_out.c index be8a25e0db65..be2acab9be9d 100644 --- a/net/ax25/ax25_out.c +++ b/net/ax25/ax25_out.c @@ -350,7 +350,7 @@ void ax25_transmit_buffer(ax25_cb *ax25, 
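Several of the hunks above (lec, pppoatm, and the ax25 files that follow) replace kfree_skb() with consume_skb() on the path where an skb has been successfully superseded by a reallocated copy, so the free is no longer reported as a packet drop by the tracing/dropwatch machinery; kfree_skb() stays on the genuine failure path. A minimal sketch of that pattern, with the helper name and headroom handling invented for illustration:

#include <linux/skbuff.h>

/* Hypothetical helper: ensure @skb has at least @needed bytes of headroom,
 * reallocating it if necessary. Returns the usable skb or NULL on failure. */
static struct sk_buff *demo_ensure_headroom(struct sk_buff *skb,
                                            unsigned int needed)
{
    struct sk_buff *nskb;

    if (skb_headroom(skb) >= needed)
        return skb;

    nskb = skb_realloc_headroom(skb, needed);
    if (!nskb) {
        kfree_skb(skb);         /* real drop: shows up as such in tracing */
        return NULL;
    }

    consume_skb(skb);           /* old buffer consumed, not dropped */
    return nskb;
}

As in the ax25 hunks, a caller that cares about socket accounting would also transfer ownership to the new skb with skb_set_owner_w() before releasing the original.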
struct sk_buff *skb, int type) if (skb->sk != NULL) skb_set_owner_w(skbn, skb->sk); - kfree_skb(skb); + consume_skb(skb); skb = skbn; } diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c index a65588040b9e..d39097737e38 100644 --- a/net/ax25/ax25_route.c +++ b/net/ax25/ax25_route.c @@ -474,7 +474,7 @@ struct sk_buff *ax25_rt_build_path(struct sk_buff *skb, ax25_address *src, if (skb->sk != NULL) skb_set_owner_w(skbn, skb->sk); - kfree_skb(skb); + consume_skb(skb); skb = skbn; } diff --git a/net/batman-adv/Makefile b/net/batman-adv/Makefile index 6d5c1940667d..8676d2b1d574 100644 --- a/net/batman-adv/Makefile +++ b/net/batman-adv/Makefile @@ -19,11 +19,10 @@ # obj-$(CONFIG_BATMAN_ADV) += batman-adv.o -batman-adv-y += bat_debugfs.o batman-adv-y += bat_iv_ogm.o -batman-adv-y += bat_sysfs.o batman-adv-y += bitarray.o batman-adv-$(CONFIG_BATMAN_ADV_BLA) += bridge_loop_avoidance.o +batman-adv-y += debugfs.o batman-adv-y += gateway_client.o batman-adv-y += gateway_common.o batman-adv-y += hard-interface.o @@ -35,6 +34,7 @@ batman-adv-y += ring_buffer.o batman-adv-y += routing.o batman-adv-y += send.o batman-adv-y += soft-interface.o +batman-adv-y += sysfs.o batman-adv-y += translation-table.o batman-adv-y += unicast.o batman-adv-y += vis.o diff --git a/net/batman-adv/bat_algo.h b/net/batman-adv/bat_algo.h index 9852a688ba43..a0ba3bff9b36 100644 --- a/net/batman-adv/bat_algo.h +++ b/net/batman-adv/bat_algo.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2011-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2012 B.A.T.M.A.N. contributors: * * Marek Lindner * @@ -16,12 +15,11 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_BAT_ALGO_H_ #define _NET_BATMAN_ADV_BAT_ALGO_H_ -int bat_iv_init(void); +int batadv_iv_init(void); #endif /* _NET_BATMAN_ADV_BAT_ALGO_H_ */ diff --git a/net/batman-adv/bat_debugfs.c b/net/batman-adv/bat_debugfs.c deleted file mode 100644 index 3b588f86d770..000000000000 --- a/net/batman-adv/bat_debugfs.c +++ /dev/null @@ -1,388 +0,0 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: - * - * Marek Lindner - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of version 2 of the GNU General Public - * License as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA - * - */ - -#include "main.h" - -#include <linux/debugfs.h> - -#include "bat_debugfs.h" -#include "translation-table.h" -#include "originator.h" -#include "hard-interface.h" -#include "gateway_common.h" -#include "gateway_client.h" -#include "soft-interface.h" -#include "vis.h" -#include "icmp_socket.h" -#include "bridge_loop_avoidance.h" - -static struct dentry *bat_debugfs; - -#ifdef CONFIG_BATMAN_ADV_DEBUG -#define LOG_BUFF_MASK (log_buff_len-1) -#define LOG_BUFF(idx) (debug_log->log_buff[(idx) & LOG_BUFF_MASK]) - -static int log_buff_len = LOG_BUF_LEN; - -static void emit_log_char(struct debug_log *debug_log, char c) -{ - LOG_BUFF(debug_log->log_end) = c; - debug_log->log_end++; - - if (debug_log->log_end - debug_log->log_start > log_buff_len) - debug_log->log_start = debug_log->log_end - log_buff_len; -} - -__printf(2, 3) -static int fdebug_log(struct debug_log *debug_log, const char *fmt, ...) -{ - va_list args; - static char debug_log_buf[256]; - char *p; - - if (!debug_log) - return 0; - - spin_lock_bh(&debug_log->lock); - va_start(args, fmt); - vscnprintf(debug_log_buf, sizeof(debug_log_buf), fmt, args); - va_end(args); - - for (p = debug_log_buf; *p != 0; p++) - emit_log_char(debug_log, *p); - - spin_unlock_bh(&debug_log->lock); - - wake_up(&debug_log->queue_wait); - - return 0; -} - -int debug_log(struct bat_priv *bat_priv, const char *fmt, ...) -{ - va_list args; - char tmp_log_buf[256]; - - va_start(args, fmt); - vscnprintf(tmp_log_buf, sizeof(tmp_log_buf), fmt, args); - fdebug_log(bat_priv->debug_log, "[%10u] %s", - jiffies_to_msecs(jiffies), tmp_log_buf); - va_end(args); - - return 0; -} - -static int log_open(struct inode *inode, struct file *file) -{ - nonseekable_open(inode, file); - file->private_data = inode->i_private; - inc_module_count(); - return 0; -} - -static int log_release(struct inode *inode, struct file *file) -{ - dec_module_count(); - return 0; -} - -static ssize_t log_read(struct file *file, char __user *buf, - size_t count, loff_t *ppos) -{ - struct bat_priv *bat_priv = file->private_data; - struct debug_log *debug_log = bat_priv->debug_log; - int error, i = 0; - char c; - - if ((file->f_flags & O_NONBLOCK) && - !(debug_log->log_end - debug_log->log_start)) - return -EAGAIN; - - if (!buf) - return -EINVAL; - - if (count == 0) - return 0; - - if (!access_ok(VERIFY_WRITE, buf, count)) - return -EFAULT; - - error = wait_event_interruptible(debug_log->queue_wait, - (debug_log->log_start - debug_log->log_end)); - - if (error) - return error; - - spin_lock_bh(&debug_log->lock); - - while ((!error) && (i < count) && - (debug_log->log_start != debug_log->log_end)) { - c = LOG_BUFF(debug_log->log_start); - - debug_log->log_start++; - - spin_unlock_bh(&debug_log->lock); - - error = __put_user(c, buf); - - spin_lock_bh(&debug_log->lock); - - buf++; - i++; - - } - - spin_unlock_bh(&debug_log->lock); - - if (!error) - return i; - - return error; -} - -static unsigned int log_poll(struct file *file, poll_table *wait) -{ - struct bat_priv *bat_priv = file->private_data; - struct debug_log *debug_log = bat_priv->debug_log; - - poll_wait(file, &debug_log->queue_wait, wait); - - if (debug_log->log_end - debug_log->log_start) - return POLLIN | POLLRDNORM; - - return 0; -} - -static const struct file_operations log_fops = { - .open = log_open, - .release 
= log_release, - .read = log_read, - .poll = log_poll, - .llseek = no_llseek, -}; - -static int debug_log_setup(struct bat_priv *bat_priv) -{ - struct dentry *d; - - if (!bat_priv->debug_dir) - goto err; - - bat_priv->debug_log = kzalloc(sizeof(*bat_priv->debug_log), GFP_ATOMIC); - if (!bat_priv->debug_log) - goto err; - - spin_lock_init(&bat_priv->debug_log->lock); - init_waitqueue_head(&bat_priv->debug_log->queue_wait); - - d = debugfs_create_file("log", S_IFREG | S_IRUSR, - bat_priv->debug_dir, bat_priv, &log_fops); - if (d) - goto err; - - return 0; - -err: - return 1; -} - -static void debug_log_cleanup(struct bat_priv *bat_priv) -{ - kfree(bat_priv->debug_log); - bat_priv->debug_log = NULL; -} -#else /* CONFIG_BATMAN_ADV_DEBUG */ -static int debug_log_setup(struct bat_priv *bat_priv) -{ - bat_priv->debug_log = NULL; - return 0; -} - -static void debug_log_cleanup(struct bat_priv *bat_priv) -{ - return; -} -#endif - -static int bat_algorithms_open(struct inode *inode, struct file *file) -{ - return single_open(file, bat_algo_seq_print_text, NULL); -} - -static int originators_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, orig_seq_print_text, net_dev); -} - -static int gateways_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, gw_client_seq_print_text, net_dev); -} - -static int transtable_global_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, tt_global_seq_print_text, net_dev); -} - -#ifdef CONFIG_BATMAN_ADV_BLA -static int bla_claim_table_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, bla_claim_table_seq_print_text, net_dev); -} -#endif - -static int transtable_local_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, tt_local_seq_print_text, net_dev); -} - -static int vis_data_open(struct inode *inode, struct file *file) -{ - struct net_device *net_dev = (struct net_device *)inode->i_private; - return single_open(file, vis_seq_print_text, net_dev); -} - -struct bat_debuginfo { - struct attribute attr; - const struct file_operations fops; -}; - -#define BAT_DEBUGINFO(_name, _mode, _open) \ -struct bat_debuginfo bat_debuginfo_##_name = { \ - .attr = { .name = __stringify(_name), \ - .mode = _mode, }, \ - .fops = { .owner = THIS_MODULE, \ - .open = _open, \ - .read = seq_read, \ - .llseek = seq_lseek, \ - .release = single_release, \ - } \ -}; - -static BAT_DEBUGINFO(routing_algos, S_IRUGO, bat_algorithms_open); -static BAT_DEBUGINFO(originators, S_IRUGO, originators_open); -static BAT_DEBUGINFO(gateways, S_IRUGO, gateways_open); -static BAT_DEBUGINFO(transtable_global, S_IRUGO, transtable_global_open); -#ifdef CONFIG_BATMAN_ADV_BLA -static BAT_DEBUGINFO(bla_claim_table, S_IRUGO, bla_claim_table_open); -#endif -static BAT_DEBUGINFO(transtable_local, S_IRUGO, transtable_local_open); -static BAT_DEBUGINFO(vis_data, S_IRUGO, vis_data_open); - -static struct bat_debuginfo *mesh_debuginfos[] = { - &bat_debuginfo_originators, - &bat_debuginfo_gateways, - &bat_debuginfo_transtable_global, -#ifdef CONFIG_BATMAN_ADV_BLA - &bat_debuginfo_bla_claim_table, -#endif - &bat_debuginfo_transtable_local, - &bat_debuginfo_vis_data, 
- NULL, -}; - -void debugfs_init(void) -{ - struct bat_debuginfo *bat_debug; - struct dentry *file; - - bat_debugfs = debugfs_create_dir(DEBUGFS_BAT_SUBDIR, NULL); - if (bat_debugfs == ERR_PTR(-ENODEV)) - bat_debugfs = NULL; - - if (!bat_debugfs) - goto out; - - bat_debug = &bat_debuginfo_routing_algos; - file = debugfs_create_file(bat_debug->attr.name, - S_IFREG | bat_debug->attr.mode, - bat_debugfs, NULL, &bat_debug->fops); - if (!file) - pr_err("Can't add debugfs file: %s\n", bat_debug->attr.name); - -out: - return; -} - -void debugfs_destroy(void) -{ - if (bat_debugfs) { - debugfs_remove_recursive(bat_debugfs); - bat_debugfs = NULL; - } -} - -int debugfs_add_meshif(struct net_device *dev) -{ - struct bat_priv *bat_priv = netdev_priv(dev); - struct bat_debuginfo **bat_debug; - struct dentry *file; - - if (!bat_debugfs) - goto out; - - bat_priv->debug_dir = debugfs_create_dir(dev->name, bat_debugfs); - if (!bat_priv->debug_dir) - goto out; - - bat_socket_setup(bat_priv); - debug_log_setup(bat_priv); - - for (bat_debug = mesh_debuginfos; *bat_debug; ++bat_debug) { - file = debugfs_create_file(((*bat_debug)->attr).name, - S_IFREG | ((*bat_debug)->attr).mode, - bat_priv->debug_dir, - dev, &(*bat_debug)->fops); - if (!file) { - bat_err(dev, "Can't add debugfs file: %s/%s\n", - dev->name, ((*bat_debug)->attr).name); - goto rem_attr; - } - } - - return 0; -rem_attr: - debugfs_remove_recursive(bat_priv->debug_dir); - bat_priv->debug_dir = NULL; -out: -#ifdef CONFIG_DEBUG_FS - return -ENOMEM; -#else - return 0; -#endif /* CONFIG_DEBUG_FS */ -} - -void debugfs_del_meshif(struct net_device *dev) -{ - struct bat_priv *bat_priv = netdev_priv(dev); - - debug_log_cleanup(bat_priv); - - if (bat_debugfs) { - debugfs_remove_recursive(bat_priv->debug_dir); - bat_priv->debug_dir = NULL; - } -} diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c index dc53798ebb47..e877af8bdd1e 100644 --- a/net/batman-adv/bat_iv_ogm.c +++ b/net/batman-adv/bat_iv_ogm.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -30,15 +28,16 @@ #include "send.h" #include "bat_algo.h" -static struct neigh_node *bat_iv_ogm_neigh_new(struct hard_iface *hard_iface, - const uint8_t *neigh_addr, - struct orig_node *orig_node, - struct orig_node *orig_neigh, - uint32_t seqno) +static struct batadv_neigh_node * +batadv_iv_ogm_neigh_new(struct batadv_hard_iface *hard_iface, + const uint8_t *neigh_addr, + struct batadv_orig_node *orig_node, + struct batadv_orig_node *orig_neigh, __be32 seqno) { - struct neigh_node *neigh_node; + struct batadv_neigh_node *neigh_node; - neigh_node = batadv_neigh_node_new(hard_iface, neigh_addr, seqno); + neigh_node = batadv_neigh_node_new(hard_iface, neigh_addr, + ntohl(seqno)); if (!neigh_node) goto out; @@ -55,30 +54,30 @@ out: return neigh_node; } -static int bat_iv_ogm_iface_enable(struct hard_iface *hard_iface) +static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface) { - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet *batadv_ogm_packet; uint32_t random_seqno; - int res = -1; + int res = -ENOMEM; /* randomize initial seqno to avoid collision */ get_random_bytes(&random_seqno, sizeof(random_seqno)); atomic_set(&hard_iface->seqno, random_seqno); - hard_iface->packet_len = BATMAN_OGM_HLEN; + hard_iface->packet_len = BATADV_OGM_HLEN; hard_iface->packet_buff = kmalloc(hard_iface->packet_len, GFP_ATOMIC); if (!hard_iface->packet_buff) goto out; - batman_ogm_packet = (struct batman_ogm_packet *)hard_iface->packet_buff; - batman_ogm_packet->header.packet_type = BAT_IV_OGM; - batman_ogm_packet->header.version = COMPAT_VERSION; - batman_ogm_packet->header.ttl = 2; - batman_ogm_packet->flags = NO_FLAGS; - batman_ogm_packet->tq = TQ_MAX_VALUE; - batman_ogm_packet->tt_num_changes = 0; - batman_ogm_packet->ttvn = 0; + batadv_ogm_packet = (struct batadv_ogm_packet *)hard_iface->packet_buff; + batadv_ogm_packet->header.packet_type = BATADV_IV_OGM; + batadv_ogm_packet->header.version = BATADV_COMPAT_VERSION; + batadv_ogm_packet->header.ttl = 2; + batadv_ogm_packet->flags = BATADV_NO_FLAGS; + batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE; + batadv_ogm_packet->tt_num_changes = 0; + batadv_ogm_packet->ttvn = 0; res = 0; @@ -86,133 +85,152 @@ out: return res; } -static void bat_iv_ogm_iface_disable(struct hard_iface *hard_iface) +static void batadv_iv_ogm_iface_disable(struct batadv_hard_iface *hard_iface) { kfree(hard_iface->packet_buff); hard_iface->packet_buff = NULL; } -static void bat_iv_ogm_iface_update_mac(struct hard_iface *hard_iface) +static void batadv_iv_ogm_iface_update_mac(struct batadv_hard_iface *hard_iface) { - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet *batadv_ogm_packet; - batman_ogm_packet = (struct batman_ogm_packet *)hard_iface->packet_buff; - memcpy(batman_ogm_packet->orig, + batadv_ogm_packet = (struct batadv_ogm_packet *)hard_iface->packet_buff; + memcpy(batadv_ogm_packet->orig, hard_iface->net_dev->dev_addr, ETH_ALEN); - memcpy(batman_ogm_packet->prev_sender, + memcpy(batadv_ogm_packet->prev_sender, hard_iface->net_dev->dev_addr, ETH_ALEN); } -static void bat_iv_ogm_primary_iface_set(struct hard_iface *hard_iface) +static void +batadv_iv_ogm_primary_iface_set(struct batadv_hard_iface *hard_iface) { - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet 
*batadv_ogm_packet; - batman_ogm_packet = (struct batman_ogm_packet *)hard_iface->packet_buff; - batman_ogm_packet->flags = PRIMARIES_FIRST_HOP; - batman_ogm_packet->header.ttl = TTL; + batadv_ogm_packet = (struct batadv_ogm_packet *)hard_iface->packet_buff; + batadv_ogm_packet->flags = BATADV_PRIMARIES_FIRST_HOP; + batadv_ogm_packet->header.ttl = BATADV_TTL; } /* when do we schedule our own ogm to be sent */ -static unsigned long bat_iv_ogm_emit_send_time(const struct bat_priv *bat_priv) +static unsigned long +batadv_iv_ogm_emit_send_time(const struct batadv_priv *bat_priv) { - return jiffies + msecs_to_jiffies( - atomic_read(&bat_priv->orig_interval) - - JITTER + (random32() % 2*JITTER)); + unsigned int msecs; + + msecs = atomic_read(&bat_priv->orig_interval) - BATADV_JITTER; + msecs += (random32() % 2 * BATADV_JITTER); + + return jiffies + msecs_to_jiffies(msecs); } /* when do we schedule a ogm packet to be sent */ -static unsigned long bat_iv_ogm_fwd_send_time(void) +static unsigned long batadv_iv_ogm_fwd_send_time(void) { - return jiffies + msecs_to_jiffies(random32() % (JITTER/2)); + return jiffies + msecs_to_jiffies(random32() % (BATADV_JITTER / 2)); } /* apply hop penalty for a normal link */ -static uint8_t hop_penalty(uint8_t tq, const struct bat_priv *bat_priv) +static uint8_t batadv_hop_penalty(uint8_t tq, + const struct batadv_priv *bat_priv) { int hop_penalty = atomic_read(&bat_priv->hop_penalty); - return (tq * (TQ_MAX_VALUE - hop_penalty)) / (TQ_MAX_VALUE); + int new_tq; + + new_tq = tq * (BATADV_TQ_MAX_VALUE - hop_penalty); + new_tq /= BATADV_TQ_MAX_VALUE; + + return new_tq; } /* is there another aggregated packet here? */ -static int bat_iv_ogm_aggr_packet(int buff_pos, int packet_len, - int tt_num_changes) +static int batadv_iv_ogm_aggr_packet(int buff_pos, int packet_len, + int tt_num_changes) { - int next_buff_pos = buff_pos + BATMAN_OGM_HLEN + tt_len(tt_num_changes); + int next_buff_pos = 0; + + next_buff_pos += buff_pos + BATADV_OGM_HLEN; + next_buff_pos += batadv_tt_len(tt_num_changes); return (next_buff_pos <= packet_len) && - (next_buff_pos <= MAX_AGGREGATION_BYTES); + (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES); } /* send a batman ogm to a given interface */ -static void bat_iv_ogm_send_to_if(struct forw_packet *forw_packet, - struct hard_iface *hard_iface) +static void batadv_iv_ogm_send_to_if(struct batadv_forw_packet *forw_packet, + struct batadv_hard_iface *hard_iface) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); char *fwd_str; uint8_t packet_num; int16_t buff_pos; - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet *batadv_ogm_packet; struct sk_buff *skb; - if (hard_iface->if_status != IF_ACTIVE) + if (hard_iface->if_status != BATADV_IF_ACTIVE) return; packet_num = 0; buff_pos = 0; - batman_ogm_packet = (struct batman_ogm_packet *)forw_packet->skb->data; + batadv_ogm_packet = (struct batadv_ogm_packet *)forw_packet->skb->data; /* adjust all flags and log packets */ - while (bat_iv_ogm_aggr_packet(buff_pos, forw_packet->packet_len, - batman_ogm_packet->tt_num_changes)) { + while (batadv_iv_ogm_aggr_packet(buff_pos, forw_packet->packet_len, + batadv_ogm_packet->tt_num_changes)) { /* we might have aggregated direct link packets with an - * ordinary base packet */ + * ordinary base packet + */ if ((forw_packet->direct_link_flags & (1 << packet_num)) && (forw_packet->if_incoming == hard_iface)) - batman_ogm_packet->flags |= DIRECTLINK; + 
batadv_ogm_packet->flags |= BATADV_DIRECTLINK; else - batman_ogm_packet->flags &= ~DIRECTLINK; + batadv_ogm_packet->flags &= ~BATADV_DIRECTLINK; fwd_str = (packet_num > 0 ? "Forwarding" : (forw_packet->own ? "Sending own" : "Forwarding")); - bat_dbg(DBG_BATMAN, bat_priv, - "%s %spacket (originator %pM, seqno %u, TQ %d, TTL %d, IDF %s, ttvn %d) on interface %s [%pM]\n", - fwd_str, (packet_num > 0 ? "aggregated " : ""), - batman_ogm_packet->orig, - ntohl(batman_ogm_packet->seqno), - batman_ogm_packet->tq, batman_ogm_packet->header.ttl, - (batman_ogm_packet->flags & DIRECTLINK ? - "on" : "off"), - batman_ogm_packet->ttvn, hard_iface->net_dev->name, - hard_iface->net_dev->dev_addr); - - buff_pos += BATMAN_OGM_HLEN + - tt_len(batman_ogm_packet->tt_num_changes); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "%s %spacket (originator %pM, seqno %u, TQ %d, TTL %d, IDF %s, ttvn %d) on interface %s [%pM]\n", + fwd_str, (packet_num > 0 ? "aggregated " : ""), + batadv_ogm_packet->orig, + ntohl(batadv_ogm_packet->seqno), + batadv_ogm_packet->tq, batadv_ogm_packet->header.ttl, + (batadv_ogm_packet->flags & BATADV_DIRECTLINK ? + "on" : "off"), + batadv_ogm_packet->ttvn, hard_iface->net_dev->name, + hard_iface->net_dev->dev_addr); + + buff_pos += BATADV_OGM_HLEN; + buff_pos += batadv_tt_len(batadv_ogm_packet->tt_num_changes); packet_num++; - batman_ogm_packet = (struct batman_ogm_packet *) + batadv_ogm_packet = (struct batadv_ogm_packet *) (forw_packet->skb->data + buff_pos); } /* create clone because function is called more than once */ skb = skb_clone(forw_packet->skb, GFP_ATOMIC); - if (skb) - send_skb_packet(skb, hard_iface, broadcast_addr); + if (skb) { + batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_TX); + batadv_add_counter(bat_priv, BATADV_CNT_MGMT_TX_BYTES, + skb->len + ETH_HLEN); + batadv_send_skb_packet(skb, hard_iface, batadv_broadcast_addr); + } } /* send a batman ogm packet */ -static void bat_iv_ogm_emit(struct forw_packet *forw_packet) +static void batadv_iv_ogm_emit(struct batadv_forw_packet *forw_packet) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; struct net_device *soft_iface; - struct bat_priv *bat_priv; - struct hard_iface *primary_if = NULL; - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_priv *bat_priv; + struct batadv_hard_iface *primary_if = NULL; + struct batadv_ogm_packet *batadv_ogm_packet; unsigned char directlink; - batman_ogm_packet = (struct batman_ogm_packet *) + batadv_ogm_packet = (struct batadv_ogm_packet *) (forw_packet->skb->data); - directlink = (batman_ogm_packet->flags & DIRECTLINK ? 1 : 0); + directlink = (batadv_ogm_packet->flags & BATADV_DIRECTLINK ? 
1 : 0); if (!forw_packet->if_incoming) { pr_err("Error - can't forward packet: incoming iface not specified\n"); @@ -222,31 +240,33 @@ static void bat_iv_ogm_emit(struct forw_packet *forw_packet) soft_iface = forw_packet->if_incoming->soft_iface; bat_priv = netdev_priv(soft_iface); - if (forw_packet->if_incoming->if_status != IF_ACTIVE) + if (forw_packet->if_incoming->if_status != BATADV_IF_ACTIVE) goto out; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; - /* multihomed peer assumed */ - /* non-primary OGMs are only broadcasted on their interface */ - if ((directlink && (batman_ogm_packet->header.ttl == 1)) || + /* multihomed peer assumed + * non-primary OGMs are only broadcasted on their interface + */ + if ((directlink && (batadv_ogm_packet->header.ttl == 1)) || (forw_packet->own && (forw_packet->if_incoming != primary_if))) { /* FIXME: what about aggregated packets ? */ - bat_dbg(DBG_BATMAN, bat_priv, - "%s packet (originator %pM, seqno %u, TTL %d) on interface %s [%pM]\n", - (forw_packet->own ? "Sending own" : "Forwarding"), - batman_ogm_packet->orig, - ntohl(batman_ogm_packet->seqno), - batman_ogm_packet->header.ttl, - forw_packet->if_incoming->net_dev->name, - forw_packet->if_incoming->net_dev->dev_addr); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "%s packet (originator %pM, seqno %u, TTL %d) on interface %s [%pM]\n", + (forw_packet->own ? "Sending own" : "Forwarding"), + batadv_ogm_packet->orig, + ntohl(batadv_ogm_packet->seqno), + batadv_ogm_packet->header.ttl, + forw_packet->if_incoming->net_dev->name, + forw_packet->if_incoming->net_dev->dev_addr); /* skb is only used once and than forw_packet is free'd */ - send_skb_packet(forw_packet->skb, forw_packet->if_incoming, - broadcast_addr); + batadv_send_skb_packet(forw_packet->skb, + forw_packet->if_incoming, + batadv_broadcast_addr); forw_packet->skb = NULL; goto out; @@ -254,70 +274,70 @@ static void bat_iv_ogm_emit(struct forw_packet *forw_packet) /* broadcast on every interface */ rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { if (hard_iface->soft_iface != soft_iface) continue; - bat_iv_ogm_send_to_if(forw_packet, hard_iface); + batadv_iv_ogm_send_to_if(forw_packet, hard_iface); } rcu_read_unlock(); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } /* return true if new_packet can be aggregated with forw_packet */ -static bool bat_iv_ogm_can_aggregate(const struct batman_ogm_packet - *new_batman_ogm_packet, - struct bat_priv *bat_priv, - int packet_len, unsigned long send_time, - bool directlink, - const struct hard_iface *if_incoming, - const struct forw_packet *forw_packet) +static bool +batadv_iv_ogm_can_aggregate(const struct batadv_ogm_packet *new_bat_ogm_packet, + struct batadv_priv *bat_priv, + int packet_len, unsigned long send_time, + bool directlink, + const struct batadv_hard_iface *if_incoming, + const struct batadv_forw_packet *forw_packet) { - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet *batadv_ogm_packet; int aggregated_bytes = forw_packet->packet_len + packet_len; - struct hard_iface *primary_if = NULL; + struct batadv_hard_iface *primary_if = NULL; bool res = false; + unsigned long aggregation_end_time; - batman_ogm_packet = (struct batman_ogm_packet *)forw_packet->skb->data; + batadv_ogm_packet = (struct batadv_ogm_packet *)forw_packet->skb->data; + 
aggregation_end_time = send_time; + aggregation_end_time += msecs_to_jiffies(BATADV_MAX_AGGREGATION_MS); - /** - * we can aggregate the current packet to this aggregated packet + /* we can aggregate the current packet to this aggregated packet * if: * * - the send time is within our MAX_AGGREGATION_MS time * - the resulting packet wont be bigger than * MAX_AGGREGATION_BYTES */ - if (time_before(send_time, forw_packet->send_time) && - time_after_eq(send_time + msecs_to_jiffies(MAX_AGGREGATION_MS), - forw_packet->send_time) && - (aggregated_bytes <= MAX_AGGREGATION_BYTES)) { + time_after_eq(aggregation_end_time, forw_packet->send_time) && + (aggregated_bytes <= BATADV_MAX_AGGREGATION_BYTES)) { - /** - * check aggregation compatibility + /* check aggregation compatibility * -> direct link packets are broadcasted on * their interface only * -> aggregate packet if the current packet is * a "global" packet as well as the base * packet */ - - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; /* packets without direct link flag and high TTL - * are flooded through the net */ + * are flooded through the net + */ if ((!directlink) && - (!(batman_ogm_packet->flags & DIRECTLINK)) && - (batman_ogm_packet->header.ttl != 1) && + (!(batadv_ogm_packet->flags & BATADV_DIRECTLINK)) && + (batadv_ogm_packet->header.ttl != 1) && /* own packets originating non-primary - * interfaces leave only that interface */ + * interfaces leave only that interface + */ ((!forw_packet->own) || (forw_packet->if_incoming == primary_if))) { res = true; @@ -325,15 +345,17 @@ static bool bat_iv_ogm_can_aggregate(const struct batman_ogm_packet } /* if the incoming packet is sent via this one - * interface only - we still can aggregate */ + * interface only - we still can aggregate + */ if ((directlink) && - (new_batman_ogm_packet->header.ttl == 1) && + (new_bat_ogm_packet->header.ttl == 1) && (forw_packet->if_incoming == if_incoming) && /* packets from direct neighbors or * own secondary interface packets - * (= secondary interface packets in general) */ - (batman_ogm_packet->flags & DIRECTLINK || + * (= secondary interface packets in general) + */ + (batadv_ogm_packet->flags & BATADV_DIRECTLINK || (forw_packet->own && forw_packet->if_incoming != primary_if))) { res = true; @@ -343,29 +365,30 @@ static bool bat_iv_ogm_can_aggregate(const struct batman_ogm_packet out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return res; } /* create a new aggregated packet and add this packet to it */ -static void bat_iv_ogm_aggregate_new(const unsigned char *packet_buff, - int packet_len, unsigned long send_time, - bool direct_link, - struct hard_iface *if_incoming, - int own_packet) +static void batadv_iv_ogm_aggregate_new(const unsigned char *packet_buff, + int packet_len, unsigned long send_time, + bool direct_link, + struct batadv_hard_iface *if_incoming, + int own_packet) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); - struct forw_packet *forw_packet_aggr; + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_forw_packet *forw_packet_aggr; unsigned char *skb_buff; + unsigned int skb_size; if (!atomic_inc_not_zero(&if_incoming->refcount)) return; /* own packet should always be scheduled */ if (!own_packet) { - if (!atomic_dec_not_zero(&bat_priv->batman_queue_left)) { - bat_dbg(DBG_BATMAN, bat_priv, - "batman packet queue full\n"); + if 
(!batadv_atomic_dec_not_zero(&bat_priv->batman_queue_left)) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "batman packet queue full\n"); goto out; } } @@ -378,12 +401,12 @@ static void bat_iv_ogm_aggregate_new(const unsigned char *packet_buff, } if ((atomic_read(&bat_priv->aggregated_ogms)) && - (packet_len < MAX_AGGREGATION_BYTES)) - forw_packet_aggr->skb = dev_alloc_skb(MAX_AGGREGATION_BYTES + - ETH_HLEN); + (packet_len < BATADV_MAX_AGGREGATION_BYTES)) + skb_size = BATADV_MAX_AGGREGATION_BYTES + ETH_HLEN; else - forw_packet_aggr->skb = dev_alloc_skb(packet_len + ETH_HLEN); + skb_size = packet_len + ETH_HLEN; + forw_packet_aggr->skb = dev_alloc_skb(skb_size); if (!forw_packet_aggr->skb) { if (!own_packet) atomic_inc(&bat_priv->batman_queue_left); @@ -401,7 +424,7 @@ static void bat_iv_ogm_aggregate_new(const unsigned char *packet_buff, forw_packet_aggr->own = own_packet; forw_packet_aggr->if_incoming = if_incoming; forw_packet_aggr->num_packets = 0; - forw_packet_aggr->direct_link_flags = NO_FLAGS; + forw_packet_aggr->direct_link_flags = BATADV_NO_FLAGS; forw_packet_aggr->send_time = send_time; /* save packet direct link flag status */ @@ -415,20 +438,20 @@ static void bat_iv_ogm_aggregate_new(const unsigned char *packet_buff, /* start timer for this packet */ INIT_DELAYED_WORK(&forw_packet_aggr->delayed_work, - send_outstanding_bat_ogm_packet); - queue_delayed_work(bat_event_workqueue, + batadv_send_outstanding_bat_ogm_packet); + queue_delayed_work(batadv_event_workqueue, &forw_packet_aggr->delayed_work, send_time - jiffies); return; out: - hardif_free_ref(if_incoming); + batadv_hardif_free_ref(if_incoming); } /* aggregate a new packet into the existing ogm packet */ -static void bat_iv_ogm_aggregate(struct forw_packet *forw_packet_aggr, - const unsigned char *packet_buff, - int packet_len, bool direct_link) +static void batadv_iv_ogm_aggregate(struct batadv_forw_packet *forw_packet_aggr, + const unsigned char *packet_buff, + int packet_len, bool direct_link) { unsigned char *skb_buff; @@ -443,22 +466,25 @@ static void bat_iv_ogm_aggregate(struct forw_packet *forw_packet_aggr, (1 << forw_packet_aggr->num_packets); } -static void bat_iv_ogm_queue_add(struct bat_priv *bat_priv, - unsigned char *packet_buff, - int packet_len, struct hard_iface *if_incoming, - int own_packet, unsigned long send_time) +static void batadv_iv_ogm_queue_add(struct batadv_priv *bat_priv, + unsigned char *packet_buff, + int packet_len, + struct batadv_hard_iface *if_incoming, + int own_packet, unsigned long send_time) { - /** - * _aggr -> pointer to the packet we want to aggregate with + /* _aggr -> pointer to the packet we want to aggregate with * _pos -> pointer to the position in the queue */ - struct forw_packet *forw_packet_aggr = NULL, *forw_packet_pos = NULL; + struct batadv_forw_packet *forw_packet_aggr = NULL; + struct batadv_forw_packet *forw_packet_pos = NULL; struct hlist_node *tmp_node; - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_ogm_packet *batadv_ogm_packet; bool direct_link; + unsigned long max_aggregation_jiffies; - batman_ogm_packet = (struct batman_ogm_packet *)packet_buff; - direct_link = batman_ogm_packet->flags & DIRECTLINK ? 1 : 0; + batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff; + direct_link = batadv_ogm_packet->flags & BATADV_DIRECTLINK ? 
1 : 0; + max_aggregation_jiffies = msecs_to_jiffies(BATADV_MAX_AGGREGATION_MS); /* find position for the packet in the forward queue */ spin_lock_bh(&bat_priv->forw_bat_list_lock); @@ -466,11 +492,11 @@ static void bat_iv_ogm_queue_add(struct bat_priv *bat_priv, if ((atomic_read(&bat_priv->aggregated_ogms)) && (!own_packet)) { hlist_for_each_entry(forw_packet_pos, tmp_node, &bat_priv->forw_bat_list, list) { - if (bat_iv_ogm_can_aggregate(batman_ogm_packet, - bat_priv, packet_len, - send_time, direct_link, - if_incoming, - forw_packet_pos)) { + if (batadv_iv_ogm_can_aggregate(batadv_ogm_packet, + bat_priv, packet_len, + send_time, direct_link, + if_incoming, + forw_packet_pos)) { forw_packet_aggr = forw_packet_pos; break; } @@ -478,42 +504,41 @@ static void bat_iv_ogm_queue_add(struct bat_priv *bat_priv, } /* nothing to aggregate with - either aggregation disabled or no - * suitable aggregation packet found */ + * suitable aggregation packet found + */ if (!forw_packet_aggr) { /* the following section can run without the lock */ spin_unlock_bh(&bat_priv->forw_bat_list_lock); - /** - * if we could not aggregate this packet with one of the others + /* if we could not aggregate this packet with one of the others * we hold it back for a while, so that it might be aggregated * later on */ - if ((!own_packet) && - (atomic_read(&bat_priv->aggregated_ogms))) - send_time += msecs_to_jiffies(MAX_AGGREGATION_MS); + if (!own_packet && atomic_read(&bat_priv->aggregated_ogms)) + send_time += max_aggregation_jiffies; - bat_iv_ogm_aggregate_new(packet_buff, packet_len, - send_time, direct_link, - if_incoming, own_packet); + batadv_iv_ogm_aggregate_new(packet_buff, packet_len, + send_time, direct_link, + if_incoming, own_packet); } else { - bat_iv_ogm_aggregate(forw_packet_aggr, packet_buff, - packet_len, direct_link); + batadv_iv_ogm_aggregate(forw_packet_aggr, packet_buff, + packet_len, direct_link); spin_unlock_bh(&bat_priv->forw_bat_list_lock); } } -static void bat_iv_ogm_forward(struct orig_node *orig_node, - const struct ethhdr *ethhdr, - struct batman_ogm_packet *batman_ogm_packet, - bool is_single_hop_neigh, - bool is_from_best_next_hop, - struct hard_iface *if_incoming) +static void batadv_iv_ogm_forward(struct batadv_orig_node *orig_node, + const struct ethhdr *ethhdr, + struct batadv_ogm_packet *batadv_ogm_packet, + bool is_single_hop_neigh, + bool is_from_best_next_hop, + struct batadv_hard_iface *if_incoming) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); uint8_t tt_num_changes; - if (batman_ogm_packet->header.ttl <= 1) { - bat_dbg(DBG_BATMAN, bat_priv, "ttl exceeded\n"); + if (batadv_ogm_packet->header.ttl <= 1) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, "ttl exceeded\n"); return; } @@ -525,110 +550,113 @@ static void bat_iv_ogm_forward(struct orig_node *orig_node, * simply drop the ogm. 
*/ if (is_single_hop_neigh) - batman_ogm_packet->flags |= NOT_BEST_NEXT_HOP; + batadv_ogm_packet->flags |= BATADV_NOT_BEST_NEXT_HOP; else return; } - tt_num_changes = batman_ogm_packet->tt_num_changes; + tt_num_changes = batadv_ogm_packet->tt_num_changes; - batman_ogm_packet->header.ttl--; - memcpy(batman_ogm_packet->prev_sender, ethhdr->h_source, ETH_ALEN); + batadv_ogm_packet->header.ttl--; + memcpy(batadv_ogm_packet->prev_sender, ethhdr->h_source, ETH_ALEN); /* apply hop penalty */ - batman_ogm_packet->tq = hop_penalty(batman_ogm_packet->tq, bat_priv); + batadv_ogm_packet->tq = batadv_hop_penalty(batadv_ogm_packet->tq, + bat_priv); - bat_dbg(DBG_BATMAN, bat_priv, - "Forwarding packet: tq: %i, ttl: %i\n", - batman_ogm_packet->tq, batman_ogm_packet->header.ttl); - - batman_ogm_packet->seqno = htonl(batman_ogm_packet->seqno); - batman_ogm_packet->tt_crc = htons(batman_ogm_packet->tt_crc); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Forwarding packet: tq: %i, ttl: %i\n", + batadv_ogm_packet->tq, batadv_ogm_packet->header.ttl); /* switch of primaries first hop flag when forwarding */ - batman_ogm_packet->flags &= ~PRIMARIES_FIRST_HOP; + batadv_ogm_packet->flags &= ~BATADV_PRIMARIES_FIRST_HOP; if (is_single_hop_neigh) - batman_ogm_packet->flags |= DIRECTLINK; + batadv_ogm_packet->flags |= BATADV_DIRECTLINK; else - batman_ogm_packet->flags &= ~DIRECTLINK; + batadv_ogm_packet->flags &= ~BATADV_DIRECTLINK; - bat_iv_ogm_queue_add(bat_priv, (unsigned char *)batman_ogm_packet, - BATMAN_OGM_HLEN + tt_len(tt_num_changes), - if_incoming, 0, bat_iv_ogm_fwd_send_time()); + batadv_iv_ogm_queue_add(bat_priv, (unsigned char *)batadv_ogm_packet, + BATADV_OGM_HLEN + batadv_tt_len(tt_num_changes), + if_incoming, 0, batadv_iv_ogm_fwd_send_time()); } -static void bat_iv_ogm_schedule(struct hard_iface *hard_iface, - int tt_num_changes) +static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct batman_ogm_packet *batman_ogm_packet; - struct hard_iface *primary_if; - int vis_server; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_ogm_packet *batadv_ogm_packet; + struct batadv_hard_iface *primary_if; + int vis_server, tt_num_changes = 0; vis_server = atomic_read(&bat_priv->vis_mode); - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); + + if (hard_iface == primary_if) + tt_num_changes = batadv_tt_append_diff(bat_priv, + &hard_iface->packet_buff, + &hard_iface->packet_len, + BATADV_OGM_HLEN); - batman_ogm_packet = (struct batman_ogm_packet *)hard_iface->packet_buff; + batadv_ogm_packet = (struct batadv_ogm_packet *)hard_iface->packet_buff; /* change sequence number to network order */ - batman_ogm_packet->seqno = + batadv_ogm_packet->seqno = htonl((uint32_t)atomic_read(&hard_iface->seqno)); + atomic_inc(&hard_iface->seqno); - batman_ogm_packet->ttvn = atomic_read(&bat_priv->ttvn); - batman_ogm_packet->tt_crc = htons((uint16_t) - atomic_read(&bat_priv->tt_crc)); + batadv_ogm_packet->ttvn = atomic_read(&bat_priv->ttvn); + batadv_ogm_packet->tt_crc = htons(bat_priv->tt_crc); if (tt_num_changes >= 0) - batman_ogm_packet->tt_num_changes = tt_num_changes; + batadv_ogm_packet->tt_num_changes = tt_num_changes; - if (vis_server == VIS_TYPE_SERVER_SYNC) - batman_ogm_packet->flags |= VIS_SERVER; + if (vis_server == BATADV_VIS_TYPE_SERVER_SYNC) + batadv_ogm_packet->flags |= BATADV_VIS_SERVER; else - batman_ogm_packet->flags &= ~VIS_SERVER; + 
batadv_ogm_packet->flags &= ~BATADV_VIS_SERVER; if ((hard_iface == primary_if) && - (atomic_read(&bat_priv->gw_mode) == GW_MODE_SERVER)) - batman_ogm_packet->gw_flags = + (atomic_read(&bat_priv->gw_mode) == BATADV_GW_MODE_SERVER)) + batadv_ogm_packet->gw_flags = (uint8_t)atomic_read(&bat_priv->gw_bandwidth); else - batman_ogm_packet->gw_flags = NO_FLAGS; - - atomic_inc(&hard_iface->seqno); + batadv_ogm_packet->gw_flags = BATADV_NO_FLAGS; - slide_own_bcast_window(hard_iface); - bat_iv_ogm_queue_add(bat_priv, hard_iface->packet_buff, - hard_iface->packet_len, hard_iface, 1, - bat_iv_ogm_emit_send_time(bat_priv)); + batadv_slide_own_bcast_window(hard_iface); + batadv_iv_ogm_queue_add(bat_priv, hard_iface->packet_buff, + hard_iface->packet_len, hard_iface, 1, + batadv_iv_ogm_emit_send_time(bat_priv)); if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } -static void bat_iv_ogm_orig_update(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const struct ethhdr *ethhdr, - const struct batman_ogm_packet - *batman_ogm_packet, - struct hard_iface *if_incoming, - const unsigned char *tt_buff, - int is_duplicate) +static void +batadv_iv_ogm_orig_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const struct ethhdr *ethhdr, + const struct batadv_ogm_packet *batadv_ogm_packet, + struct batadv_hard_iface *if_incoming, + const unsigned char *tt_buff, + int is_duplicate) { - struct neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; - struct neigh_node *router = NULL; - struct orig_node *orig_node_tmp; + struct batadv_neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; + struct batadv_neigh_node *router = NULL; + struct batadv_orig_node *orig_node_tmp; struct hlist_node *node; uint8_t bcast_own_sum_orig, bcast_own_sum_neigh; + uint8_t *neigh_addr; - bat_dbg(DBG_BATMAN, bat_priv, - "update_originator(): Searching and updating originator entry of received packet\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "update_originator(): Searching and updating originator entry of received packet\n"); rcu_read_lock(); hlist_for_each_entry_rcu(tmp_neigh_node, node, &orig_node->neigh_list, list) { - if (compare_eth(tmp_neigh_node->addr, ethhdr->h_source) && - (tmp_neigh_node->if_incoming == if_incoming) && - atomic_inc_not_zero(&tmp_neigh_node->refcount)) { + neigh_addr = tmp_neigh_node->addr; + if (batadv_compare_eth(neigh_addr, ethhdr->h_source) && + tmp_neigh_node->if_incoming == if_incoming && + atomic_inc_not_zero(&tmp_neigh_node->refcount)) { if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); neigh_node = tmp_neigh_node; continue; } @@ -637,53 +665,55 @@ static void bat_iv_ogm_orig_update(struct bat_priv *bat_priv, continue; spin_lock_bh(&tmp_neigh_node->lq_update_lock); - ring_buffer_set(tmp_neigh_node->tq_recv, - &tmp_neigh_node->tq_index, 0); + batadv_ring_buffer_set(tmp_neigh_node->tq_recv, + &tmp_neigh_node->tq_index, 0); tmp_neigh_node->tq_avg = - ring_buffer_avg(tmp_neigh_node->tq_recv); + batadv_ring_buffer_avg(tmp_neigh_node->tq_recv); spin_unlock_bh(&tmp_neigh_node->lq_update_lock); } if (!neigh_node) { - struct orig_node *orig_tmp; + struct batadv_orig_node *orig_tmp; - orig_tmp = get_orig_node(bat_priv, ethhdr->h_source); + orig_tmp = batadv_get_orig_node(bat_priv, ethhdr->h_source); if (!orig_tmp) goto unlock; - neigh_node = bat_iv_ogm_neigh_new(if_incoming, ethhdr->h_source, - orig_node, orig_tmp, - batman_ogm_packet->seqno); + neigh_node = batadv_iv_ogm_neigh_new(if_incoming, + 
ethhdr->h_source, + orig_node, orig_tmp, + batadv_ogm_packet->seqno); - orig_node_free_ref(orig_tmp); + batadv_orig_node_free_ref(orig_tmp); if (!neigh_node) goto unlock; } else - bat_dbg(DBG_BATMAN, bat_priv, - "Updating existing last-hop neighbor of originator\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Updating existing last-hop neighbor of originator\n"); rcu_read_unlock(); - orig_node->flags = batman_ogm_packet->flags; + orig_node->flags = batadv_ogm_packet->flags; neigh_node->last_seen = jiffies; spin_lock_bh(&neigh_node->lq_update_lock); - ring_buffer_set(neigh_node->tq_recv, - &neigh_node->tq_index, - batman_ogm_packet->tq); - neigh_node->tq_avg = ring_buffer_avg(neigh_node->tq_recv); + batadv_ring_buffer_set(neigh_node->tq_recv, + &neigh_node->tq_index, + batadv_ogm_packet->tq); + neigh_node->tq_avg = batadv_ring_buffer_avg(neigh_node->tq_recv); spin_unlock_bh(&neigh_node->lq_update_lock); if (!is_duplicate) { - orig_node->last_ttl = batman_ogm_packet->header.ttl; - neigh_node->last_ttl = batman_ogm_packet->header.ttl; + orig_node->last_ttl = batadv_ogm_packet->header.ttl; + neigh_node->last_ttl = batadv_ogm_packet->header.ttl; } - bonding_candidate_add(orig_node, neigh_node); + batadv_bonding_candidate_add(orig_node, neigh_node); /* if this neighbor already is our next hop there is nothing - * to change */ - router = orig_node_get_router(orig_node); + * to change + */ + router = batadv_orig_node_get_router(orig_node); if (router == neigh_node) goto update_tt; @@ -692,7 +722,8 @@ static void bat_iv_ogm_orig_update(struct bat_priv *bat_priv, goto update_tt; /* if the TQ is the same and the link not more symmetric we - * won't consider it either */ + * won't consider it either + */ if (router && (neigh_node->tq_avg == router->tq_avg)) { orig_node_tmp = router->orig_node; spin_lock_bh(&orig_node_tmp->ogm_cnt_lock); @@ -710,30 +741,31 @@ static void bat_iv_ogm_orig_update(struct bat_priv *bat_priv, goto update_tt; } - update_route(bat_priv, orig_node, neigh_node); + batadv_update_route(bat_priv, orig_node, neigh_node); update_tt: /* I have to check for transtable changes only if the OGM has been - * sent through a primary interface */ - if (((batman_ogm_packet->orig != ethhdr->h_source) && - (batman_ogm_packet->header.ttl > 2)) || - (batman_ogm_packet->flags & PRIMARIES_FIRST_HOP)) - tt_update_orig(bat_priv, orig_node, tt_buff, - batman_ogm_packet->tt_num_changes, - batman_ogm_packet->ttvn, - batman_ogm_packet->tt_crc); + * sent through a primary interface + */ + if (((batadv_ogm_packet->orig != ethhdr->h_source) && + (batadv_ogm_packet->header.ttl > 2)) || + (batadv_ogm_packet->flags & BATADV_PRIMARIES_FIRST_HOP)) + batadv_tt_update_orig(bat_priv, orig_node, tt_buff, + batadv_ogm_packet->tt_num_changes, + batadv_ogm_packet->ttvn, + ntohs(batadv_ogm_packet->tt_crc)); - if (orig_node->gw_flags != batman_ogm_packet->gw_flags) - gw_node_update(bat_priv, orig_node, - batman_ogm_packet->gw_flags); + if (orig_node->gw_flags != batadv_ogm_packet->gw_flags) + batadv_gw_node_update(bat_priv, orig_node, + batadv_ogm_packet->gw_flags); - orig_node->gw_flags = batman_ogm_packet->gw_flags; + orig_node->gw_flags = batadv_ogm_packet->gw_flags; /* restart gateway selection if fast or late switching was enabled */ if ((orig_node->gw_flags) && - (atomic_read(&bat_priv->gw_mode) == GW_MODE_CLIENT) && + (atomic_read(&bat_priv->gw_mode) == BATADV_GW_MODE_CLIENT) && (atomic_read(&bat_priv->gw_sel_class) > 2)) - gw_check_election(bat_priv, orig_node); + batadv_gw_check_election(bat_priv, orig_node); 
goto out; @@ -741,29 +773,32 @@ unlock: rcu_read_unlock(); out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } -static int bat_iv_ogm_calc_tq(struct orig_node *orig_node, - struct orig_node *orig_neigh_node, - struct batman_ogm_packet *batman_ogm_packet, - struct hard_iface *if_incoming) +static int batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node, + struct batadv_orig_node *orig_neigh_node, + struct batadv_ogm_packet *batadv_ogm_packet, + struct batadv_hard_iface *if_incoming) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); - struct neigh_node *neigh_node = NULL, *tmp_neigh_node; + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_neigh_node *neigh_node = NULL, *tmp_neigh_node; struct hlist_node *node; uint8_t total_count; - uint8_t orig_eq_count, neigh_rq_count, tq_own; - int tq_asym_penalty, ret = 0; + uint8_t orig_eq_count, neigh_rq_count, neigh_rq_inv, tq_own; + unsigned int neigh_rq_inv_cube, neigh_rq_max_cube; + int tq_asym_penalty, inv_asym_penalty, ret = 0; + unsigned int combined_tq; /* find corresponding one hop neighbor */ rcu_read_lock(); hlist_for_each_entry_rcu(tmp_neigh_node, node, &orig_neigh_node->neigh_list, list) { - if (!compare_eth(tmp_neigh_node->addr, orig_neigh_node->orig)) + if (!batadv_compare_eth(tmp_neigh_node->addr, + orig_neigh_node->orig)) continue; if (tmp_neigh_node->if_incoming != if_incoming) @@ -778,11 +813,11 @@ static int bat_iv_ogm_calc_tq(struct orig_node *orig_node, rcu_read_unlock(); if (!neigh_node) - neigh_node = bat_iv_ogm_neigh_new(if_incoming, - orig_neigh_node->orig, - orig_neigh_node, - orig_neigh_node, - batman_ogm_packet->seqno); + neigh_node = batadv_iv_ogm_neigh_new(if_incoming, + orig_neigh_node->orig, + orig_neigh_node, + orig_neigh_node, + batadv_ogm_packet->seqno); if (!neigh_node) goto out; @@ -803,47 +838,52 @@ static int bat_iv_ogm_calc_tq(struct orig_node *orig_node, total_count = (orig_eq_count > neigh_rq_count ? neigh_rq_count : orig_eq_count); - /* if we have too few packets (too less data) we set tq_own to zero */ - /* if we receive too few packets it is not considered bidirectional */ - if ((total_count < TQ_LOCAL_BIDRECT_SEND_MINIMUM) || - (neigh_rq_count < TQ_LOCAL_BIDRECT_RECV_MINIMUM)) + /* if we have too few packets (too less data) we set tq_own to zero + * if we receive too few packets it is not considered bidirectional + */ + if (total_count < BATADV_TQ_LOCAL_BIDRECT_SEND_MINIMUM || + neigh_rq_count < BATADV_TQ_LOCAL_BIDRECT_RECV_MINIMUM) tq_own = 0; else /* neigh_node->real_packet_count is never zero as we * only purge old information when getting new - * information */ - tq_own = (TQ_MAX_VALUE * total_count) / neigh_rq_count; + * information + */ + tq_own = (BATADV_TQ_MAX_VALUE * total_count) / neigh_rq_count; /* 1 - ((1-x) ** 3), normalized to TQ_MAX_VALUE this does * affect the nearly-symmetric links only a little, but * punishes asymmetric links more. 
This will give a value * between 0 and TQ_MAX_VALUE */ - tq_asym_penalty = TQ_MAX_VALUE - (TQ_MAX_VALUE * - (TQ_LOCAL_WINDOW_SIZE - neigh_rq_count) * - (TQ_LOCAL_WINDOW_SIZE - neigh_rq_count) * - (TQ_LOCAL_WINDOW_SIZE - neigh_rq_count)) / - (TQ_LOCAL_WINDOW_SIZE * - TQ_LOCAL_WINDOW_SIZE * - TQ_LOCAL_WINDOW_SIZE); - - batman_ogm_packet->tq = ((batman_ogm_packet->tq * tq_own - * tq_asym_penalty) / - (TQ_MAX_VALUE * TQ_MAX_VALUE)); - - bat_dbg(DBG_BATMAN, bat_priv, - "bidirectional: orig = %-15pM neigh = %-15pM => own_bcast = %2i, real recv = %2i, local tq: %3i, asym_penalty: %3i, total tq: %3i\n", - orig_node->orig, orig_neigh_node->orig, total_count, - neigh_rq_count, tq_own, tq_asym_penalty, batman_ogm_packet->tq); + neigh_rq_inv = BATADV_TQ_LOCAL_WINDOW_SIZE - neigh_rq_count; + neigh_rq_inv_cube = neigh_rq_inv * neigh_rq_inv * neigh_rq_inv; + neigh_rq_max_cube = BATADV_TQ_LOCAL_WINDOW_SIZE * + BATADV_TQ_LOCAL_WINDOW_SIZE * + BATADV_TQ_LOCAL_WINDOW_SIZE; + inv_asym_penalty = BATADV_TQ_MAX_VALUE * neigh_rq_inv_cube; + inv_asym_penalty /= neigh_rq_max_cube; + tq_asym_penalty = BATADV_TQ_MAX_VALUE - inv_asym_penalty; + + combined_tq = batadv_ogm_packet->tq * tq_own * tq_asym_penalty; + combined_tq /= BATADV_TQ_MAX_VALUE * BATADV_TQ_MAX_VALUE; + batadv_ogm_packet->tq = combined_tq; + + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "bidirectional: orig = %-15pM neigh = %-15pM => own_bcast = %2i, real recv = %2i, local tq: %3i, asym_penalty: %3i, total tq: %3i\n", + orig_node->orig, orig_neigh_node->orig, total_count, + neigh_rq_count, tq_own, + tq_asym_penalty, batadv_ogm_packet->tq); /* if link has the minimum required transmission quality - * consider it bidirectional */ - if (batman_ogm_packet->tq >= TQ_TOTAL_BIDRECT_LIMIT) + * consider it bidirectional + */ + if (batadv_ogm_packet->tq >= BATADV_TQ_TOTAL_BIDRECT_LIMIT) ret = 1; out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); return ret; } @@ -855,90 +895,94 @@ out: * -1 the packet is old and has been received while the seqno window * was protected. Caller should drop it. */ -static int bat_iv_ogm_update_seqnos(const struct ethhdr *ethhdr, - const struct batman_ogm_packet - *batman_ogm_packet, - const struct hard_iface *if_incoming) +static int +batadv_iv_ogm_update_seqnos(const struct ethhdr *ethhdr, + const struct batadv_ogm_packet *batadv_ogm_packet, + const struct batadv_hard_iface *if_incoming) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); - struct orig_node *orig_node; - struct neigh_node *tmp_neigh_node; + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_orig_node *orig_node; + struct batadv_neigh_node *tmp_neigh_node; struct hlist_node *node; int is_duplicate = 0; int32_t seq_diff; int need_update = 0; int set_mark, ret = -1; + uint32_t seqno = ntohl(batadv_ogm_packet->seqno); + uint8_t *neigh_addr; - orig_node = get_orig_node(bat_priv, batman_ogm_packet->orig); + orig_node = batadv_get_orig_node(bat_priv, batadv_ogm_packet->orig); if (!orig_node) return 0; spin_lock_bh(&orig_node->ogm_cnt_lock); - seq_diff = batman_ogm_packet->seqno - orig_node->last_real_seqno; + seq_diff = seqno - orig_node->last_real_seqno; /* signalize caller that the packet is to be dropped. 
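The tq arithmetic above is easier to follow with concrete numbers. A worked example, assuming the usual constants BATADV_TQ_MAX_VALUE = 255 and BATADV_TQ_LOCAL_WINDOW_SIZE = 64, for a neighbor from which 48 of the last 64 OGMs were received (neigh_rq_count = 48), with 40 of our own OGMs echoed back by it (total_count = 40), and an incoming OGM carrying tq = 255 (all divisions are integer divisions, as in the code):

    tq_own           = 255 * 40 / 48                  = 212
    neigh_rq_inv     = 64 - 48                        = 16
    inv_asym_penalty = 255 * 16^3 / 64^3              = 3
    tq_asym_penalty  = 255 - 3                        = 252
    combined_tq      = 255 * 212 * 252 / (255 * 255)  = 209

The OGM therefore continues with tq = 209, well above BATADV_TQ_TOTAL_BIDRECT_LIMIT, so the link is treated as bidirectional.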
*/ if (!hlist_empty(&orig_node->neigh_list) && - window_protected(bat_priv, seq_diff, - &orig_node->batman_seqno_reset)) + batadv_window_protected(bat_priv, seq_diff, + &orig_node->batman_seqno_reset)) goto out; rcu_read_lock(); hlist_for_each_entry_rcu(tmp_neigh_node, node, &orig_node->neigh_list, list) { - is_duplicate |= bat_test_bit(tmp_neigh_node->real_bits, - orig_node->last_real_seqno, - batman_ogm_packet->seqno); + is_duplicate |= batadv_test_bit(tmp_neigh_node->real_bits, + orig_node->last_real_seqno, + seqno); - if (compare_eth(tmp_neigh_node->addr, ethhdr->h_source) && - (tmp_neigh_node->if_incoming == if_incoming)) + neigh_addr = tmp_neigh_node->addr; + if (batadv_compare_eth(neigh_addr, ethhdr->h_source) && + tmp_neigh_node->if_incoming == if_incoming) set_mark = 1; else set_mark = 0; /* if the window moved, set the update flag. */ - need_update |= bit_get_packet(bat_priv, - tmp_neigh_node->real_bits, - seq_diff, set_mark); + need_update |= batadv_bit_get_packet(bat_priv, + tmp_neigh_node->real_bits, + seq_diff, set_mark); tmp_neigh_node->real_packet_count = bitmap_weight(tmp_neigh_node->real_bits, - TQ_LOCAL_WINDOW_SIZE); + BATADV_TQ_LOCAL_WINDOW_SIZE); } rcu_read_unlock(); if (need_update) { - bat_dbg(DBG_BATMAN, bat_priv, - "updating last_seqno: old %u, new %u\n", - orig_node->last_real_seqno, batman_ogm_packet->seqno); - orig_node->last_real_seqno = batman_ogm_packet->seqno; + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "updating last_seqno: old %u, new %u\n", + orig_node->last_real_seqno, seqno); + orig_node->last_real_seqno = seqno; } ret = is_duplicate; out: spin_unlock_bh(&orig_node->ogm_cnt_lock); - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -static void bat_iv_ogm_process(const struct ethhdr *ethhdr, - struct batman_ogm_packet *batman_ogm_packet, - const unsigned char *tt_buff, - struct hard_iface *if_incoming) +static void batadv_iv_ogm_process(const struct ethhdr *ethhdr, + struct batadv_ogm_packet *batadv_ogm_packet, + const unsigned char *tt_buff, + struct batadv_hard_iface *if_incoming) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); - struct hard_iface *hard_iface; - struct orig_node *orig_neigh_node, *orig_node; - struct neigh_node *router = NULL, *router_router = NULL; - struct neigh_node *orig_neigh_router = NULL; + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_hard_iface *hard_iface; + struct batadv_orig_node *orig_neigh_node, *orig_node; + struct batadv_neigh_node *router = NULL, *router_router = NULL; + struct batadv_neigh_node *orig_neigh_router = NULL; int has_directlink_flag; int is_my_addr = 0, is_my_orig = 0, is_my_oldorig = 0; - int is_broadcast = 0, is_bidirectional; + int is_broadcast = 0, is_bidirect; bool is_single_hop_neigh = false; bool is_from_best_next_hop = false; - int is_duplicate; + int is_duplicate, sameseq, simlar_ttl; uint32_t if_incoming_seqno; + uint8_t *prev_sender; /* Silently drop when the batman packet is actually not a * correct packet. @@ -948,49 +992,53 @@ static void bat_iv_ogm_process(const struct ethhdr *ethhdr, * it as an additional length. * * TODO: A more sane solution would be to have a bit in the - * batman_ogm_packet to detect whether the packet is the last + * batadv_ogm_packet to detect whether the packet is the last * packet in an aggregation. 
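The duplicate check above maps each received seqno to a bit offset relative to orig_node->last_real_seqno before testing the neighbour's real_bits window. A standalone sketch of that index math, with the window size again assumed to be the usual 64 (the signed cast is what keeps uint32_t seqno wraparound working):

#include <stdint.h>
#include <stdio.h>

#define TQ_LOCAL_WINDOW_SIZE 64	/* assumed default window size */

/* mirrors the index math of the test_bit helper used above */
static int seqno_to_bit(uint32_t last_seqno, uint32_t curr_seqno)
{
	int32_t diff = (int32_t)(last_seqno - curr_seqno);

	if (diff < 0 || diff >= TQ_LOCAL_WINDOW_SIZE)
		return -1;	/* not covered by the window */
	return diff;		/* bit to test in the neighbor's real_bits */
}

int main(void)
{
	printf("%d\n", seqno_to_bit(1000, 1000));	/*  0: the last seqno itself */
	printf("%d\n", seqno_to_bit(1000, 970));	/* 30: old but still tracked */
	printf("%d\n", seqno_to_bit(1000, 1001));	/* -1: newer than last_real_seqno */
	printf("%d\n", seqno_to_bit(1000, 900));	/* -1: fell out of the window */
	return 0;
}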
Here we expect that the padding * is always zero (or not 0x01) */ - if (batman_ogm_packet->header.packet_type != BAT_IV_OGM) + if (batadv_ogm_packet->header.packet_type != BATADV_IV_OGM) return; /* could be changed by schedule_own_packet() */ if_incoming_seqno = atomic_read(&if_incoming->seqno); - has_directlink_flag = (batman_ogm_packet->flags & DIRECTLINK ? 1 : 0); + if (batadv_ogm_packet->flags & BATADV_DIRECTLINK) + has_directlink_flag = 1; + else + has_directlink_flag = 0; - if (compare_eth(ethhdr->h_source, batman_ogm_packet->orig)) + if (batadv_compare_eth(ethhdr->h_source, batadv_ogm_packet->orig)) is_single_hop_neigh = true; - bat_dbg(DBG_BATMAN, bat_priv, - "Received BATMAN packet via NB: %pM, IF: %s [%pM] (from OG: %pM, via prev OG: %pM, seqno %u, ttvn %u, crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", - ethhdr->h_source, if_incoming->net_dev->name, - if_incoming->net_dev->dev_addr, batman_ogm_packet->orig, - batman_ogm_packet->prev_sender, batman_ogm_packet->seqno, - batman_ogm_packet->ttvn, batman_ogm_packet->tt_crc, - batman_ogm_packet->tt_num_changes, batman_ogm_packet->tq, - batman_ogm_packet->header.ttl, - batman_ogm_packet->header.version, has_directlink_flag); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Received BATMAN packet via NB: %pM, IF: %s [%pM] (from OG: %pM, via prev OG: %pM, seqno %u, ttvn %u, crc %u, changes %u, td %d, TTL %d, V %d, IDF %d)\n", + ethhdr->h_source, if_incoming->net_dev->name, + if_incoming->net_dev->dev_addr, batadv_ogm_packet->orig, + batadv_ogm_packet->prev_sender, + ntohl(batadv_ogm_packet->seqno), batadv_ogm_packet->ttvn, + ntohs(batadv_ogm_packet->tt_crc), + batadv_ogm_packet->tt_num_changes, batadv_ogm_packet->tq, + batadv_ogm_packet->header.ttl, + batadv_ogm_packet->header.version, has_directlink_flag); rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { - if (hard_iface->if_status != IF_ACTIVE) + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { + if (hard_iface->if_status != BATADV_IF_ACTIVE) continue; if (hard_iface->soft_iface != if_incoming->soft_iface) continue; - if (compare_eth(ethhdr->h_source, - hard_iface->net_dev->dev_addr)) + if (batadv_compare_eth(ethhdr->h_source, + hard_iface->net_dev->dev_addr)) is_my_addr = 1; - if (compare_eth(batman_ogm_packet->orig, - hard_iface->net_dev->dev_addr)) + if (batadv_compare_eth(batadv_ogm_packet->orig, + hard_iface->net_dev->dev_addr)) is_my_orig = 1; - if (compare_eth(batman_ogm_packet->prev_sender, - hard_iface->net_dev->dev_addr)) + if (batadv_compare_eth(batadv_ogm_packet->prev_sender, + hard_iface->net_dev->dev_addr)) is_my_oldorig = 1; if (is_broadcast_ether_addr(ethhdr->h_source)) @@ -998,268 +1046,278 @@ static void bat_iv_ogm_process(const struct ethhdr *ethhdr, } rcu_read_unlock(); - if (batman_ogm_packet->header.version != COMPAT_VERSION) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: incompatible batman version (%i)\n", - batman_ogm_packet->header.version); + if (batadv_ogm_packet->header.version != BATADV_COMPAT_VERSION) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: incompatible batman version (%i)\n", + batadv_ogm_packet->header.version); return; } if (is_my_addr) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: received my own broadcast (sender: %pM)\n", - ethhdr->h_source); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: received my own broadcast (sender: %pM)\n", + ethhdr->h_source); return; } if (is_broadcast) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: ignoring all packets with broadcast 
source addr (sender: %pM)\n", - ethhdr->h_source); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: ignoring all packets with broadcast source addr (sender: %pM)\n", + ethhdr->h_source); return; } if (is_my_orig) { unsigned long *word; int offset; + int32_t bit_pos; + int16_t if_num; + uint8_t *weight; - orig_neigh_node = get_orig_node(bat_priv, ethhdr->h_source); + orig_neigh_node = batadv_get_orig_node(bat_priv, + ethhdr->h_source); if (!orig_neigh_node) return; /* neighbor has to indicate direct link and it has to - * come via the corresponding interface */ - /* save packet seqno for bidirectional check */ + * come via the corresponding interface + * save packet seqno for bidirectional check + */ if (has_directlink_flag && - compare_eth(if_incoming->net_dev->dev_addr, - batman_ogm_packet->orig)) { - offset = if_incoming->if_num * NUM_WORDS; + batadv_compare_eth(if_incoming->net_dev->dev_addr, + batadv_ogm_packet->orig)) { + if_num = if_incoming->if_num; + offset = if_num * BATADV_NUM_WORDS; spin_lock_bh(&orig_neigh_node->ogm_cnt_lock); word = &(orig_neigh_node->bcast_own[offset]); - bat_set_bit(word, - if_incoming_seqno - - batman_ogm_packet->seqno - 2); - orig_neigh_node->bcast_own_sum[if_incoming->if_num] = - bitmap_weight(word, TQ_LOCAL_WINDOW_SIZE); + bit_pos = if_incoming_seqno - 2; + bit_pos -= ntohl(batadv_ogm_packet->seqno); + batadv_set_bit(word, bit_pos); + weight = &orig_neigh_node->bcast_own_sum[if_num]; + *weight = bitmap_weight(word, + BATADV_TQ_LOCAL_WINDOW_SIZE); spin_unlock_bh(&orig_neigh_node->ogm_cnt_lock); } - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: originator packet from myself (via neighbor)\n"); - orig_node_free_ref(orig_neigh_node); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: originator packet from myself (via neighbor)\n"); + batadv_orig_node_free_ref(orig_neigh_node); return; } if (is_my_oldorig) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: ignoring all rebroadcast echos (sender: %pM)\n", - ethhdr->h_source); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: ignoring all rebroadcast echos (sender: %pM)\n", + ethhdr->h_source); return; } - if (batman_ogm_packet->flags & NOT_BEST_NEXT_HOP) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: ignoring all packets not forwarded from the best next hop (sender: %pM)\n", - ethhdr->h_source); + if (batadv_ogm_packet->flags & BATADV_NOT_BEST_NEXT_HOP) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: ignoring all packets not forwarded from the best next hop (sender: %pM)\n", + ethhdr->h_source); return; } - orig_node = get_orig_node(bat_priv, batman_ogm_packet->orig); + orig_node = batadv_get_orig_node(bat_priv, batadv_ogm_packet->orig); if (!orig_node) return; - is_duplicate = bat_iv_ogm_update_seqnos(ethhdr, batman_ogm_packet, - if_incoming); + is_duplicate = batadv_iv_ogm_update_seqnos(ethhdr, batadv_ogm_packet, + if_incoming); if (is_duplicate == -1) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: packet within seqno protection time (sender: %pM)\n", - ethhdr->h_source); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: packet within seqno protection time (sender: %pM)\n", + ethhdr->h_source); goto out; } - if (batman_ogm_packet->tq == 0) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: originator packet with tq equal 0\n"); + if (batadv_ogm_packet->tq == 0) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: originator packet with tq equal 0\n"); goto out; } - router = orig_node_get_router(orig_node); + router = 
batadv_orig_node_get_router(orig_node); if (router) - router_router = orig_node_get_router(router->orig_node); + router_router = batadv_orig_node_get_router(router->orig_node); if ((router && router->tq_avg != 0) && - (compare_eth(router->addr, ethhdr->h_source))) + (batadv_compare_eth(router->addr, ethhdr->h_source))) is_from_best_next_hop = true; + prev_sender = batadv_ogm_packet->prev_sender; /* avoid temporary routing loops */ if (router && router_router && - (compare_eth(router->addr, batman_ogm_packet->prev_sender)) && - !(compare_eth(batman_ogm_packet->orig, - batman_ogm_packet->prev_sender)) && - (compare_eth(router->addr, router_router->addr))) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: ignoring all rebroadcast packets that may make me loop (sender: %pM)\n", - ethhdr->h_source); + (batadv_compare_eth(router->addr, prev_sender)) && + !(batadv_compare_eth(batadv_ogm_packet->orig, prev_sender)) && + (batadv_compare_eth(router->addr, router_router->addr))) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: ignoring all rebroadcast packets that may make me loop (sender: %pM)\n", + ethhdr->h_source); goto out; } /* if sender is a direct neighbor the sender mac equals - * originator mac */ + * originator mac + */ orig_neigh_node = (is_single_hop_neigh ? orig_node : - get_orig_node(bat_priv, ethhdr->h_source)); + batadv_get_orig_node(bat_priv, ethhdr->h_source)); if (!orig_neigh_node) goto out; - orig_neigh_router = orig_node_get_router(orig_neigh_node); + orig_neigh_router = batadv_orig_node_get_router(orig_neigh_node); /* drop packet if sender is not a direct neighbor and if we - * don't route towards it */ + * don't route towards it + */ if (!is_single_hop_neigh && (!orig_neigh_router)) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: OGM via unknown neighbor!\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: OGM via unknown neighbor!\n"); goto out_neigh; } - is_bidirectional = bat_iv_ogm_calc_tq(orig_node, orig_neigh_node, - batman_ogm_packet, if_incoming); + is_bidirect = batadv_iv_ogm_calc_tq(orig_node, orig_neigh_node, + batadv_ogm_packet, if_incoming); - bonding_save_primary(orig_node, orig_neigh_node, batman_ogm_packet); + batadv_bonding_save_primary(orig_node, orig_neigh_node, + batadv_ogm_packet); /* update ranking if it is not a duplicate or has the same - * seqno and similar ttl as the non-duplicate */ - if (is_bidirectional && - (!is_duplicate || - ((orig_node->last_real_seqno == batman_ogm_packet->seqno) && - (orig_node->last_ttl - 3 <= batman_ogm_packet->header.ttl)))) - bat_iv_ogm_orig_update(bat_priv, orig_node, ethhdr, - batman_ogm_packet, if_incoming, - tt_buff, is_duplicate); + * seqno and similar ttl as the non-duplicate + */ + sameseq = orig_node->last_real_seqno == ntohl(batadv_ogm_packet->seqno); + simlar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl; + if (is_bidirect && (!is_duplicate || (sameseq && simlar_ttl))) + batadv_iv_ogm_orig_update(bat_priv, orig_node, ethhdr, + batadv_ogm_packet, if_incoming, + tt_buff, is_duplicate); /* is single hop (direct) neighbor */ if (is_single_hop_neigh) { /* mark direct link on incoming interface */ - bat_iv_ogm_forward(orig_node, ethhdr, batman_ogm_packet, - is_single_hop_neigh, is_from_best_next_hop, - if_incoming); + batadv_iv_ogm_forward(orig_node, ethhdr, batadv_ogm_packet, + is_single_hop_neigh, + is_from_best_next_hop, if_incoming); - bat_dbg(DBG_BATMAN, bat_priv, - "Forwarding packet: rebroadcast neighbor packet with direct link flag\n"); + batadv_dbg(BATADV_DBG_BATMAN, 
bat_priv, + "Forwarding packet: rebroadcast neighbor packet with direct link flag\n"); goto out_neigh; } /* multihop originator */ - if (!is_bidirectional) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: not received via bidirectional link\n"); + if (!is_bidirect) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: not received via bidirectional link\n"); goto out_neigh; } if (is_duplicate) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: duplicate packet received\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: duplicate packet received\n"); goto out_neigh; } - bat_dbg(DBG_BATMAN, bat_priv, - "Forwarding packet: rebroadcast originator packet\n"); - bat_iv_ogm_forward(orig_node, ethhdr, batman_ogm_packet, - is_single_hop_neigh, is_from_best_next_hop, - if_incoming); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Forwarding packet: rebroadcast originator packet\n"); + batadv_iv_ogm_forward(orig_node, ethhdr, batadv_ogm_packet, + is_single_hop_neigh, is_from_best_next_hop, + if_incoming); out_neigh: if ((orig_neigh_node) && (!is_single_hop_neigh)) - orig_node_free_ref(orig_neigh_node); + batadv_orig_node_free_ref(orig_neigh_node); out: if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (router_router) - neigh_node_free_ref(router_router); + batadv_neigh_node_free_ref(router_router); if (orig_neigh_router) - neigh_node_free_ref(orig_neigh_router); + batadv_neigh_node_free_ref(orig_neigh_router); - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } -static int bat_iv_ogm_receive(struct sk_buff *skb, - struct hard_iface *if_incoming) +static int batadv_iv_ogm_receive(struct sk_buff *skb, + struct batadv_hard_iface *if_incoming) { - struct bat_priv *bat_priv = netdev_priv(if_incoming->soft_iface); - struct batman_ogm_packet *batman_ogm_packet; + struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); + struct batadv_ogm_packet *batadv_ogm_packet; struct ethhdr *ethhdr; int buff_pos = 0, packet_len; unsigned char *tt_buff, *packet_buff; bool ret; - ret = check_management_packet(skb, if_incoming, BATMAN_OGM_HLEN); + ret = batadv_check_management_packet(skb, if_incoming, BATADV_OGM_HLEN); if (!ret) return NET_RX_DROP; /* did we receive a B.A.T.M.A.N. IV OGM packet on an interface * that does not have B.A.T.M.A.N. IV enabled ? 
*/ - if (bat_priv->bat_algo_ops->bat_ogm_emit != bat_iv_ogm_emit) + if (bat_priv->bat_algo_ops->bat_ogm_emit != batadv_iv_ogm_emit) return NET_RX_DROP; + batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_RX); + batadv_add_counter(bat_priv, BATADV_CNT_MGMT_RX_BYTES, + skb->len + ETH_HLEN); + packet_len = skb_headlen(skb); ethhdr = (struct ethhdr *)skb_mac_header(skb); packet_buff = skb->data; - batman_ogm_packet = (struct batman_ogm_packet *)packet_buff; + batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff; /* unpack the aggregated packets and process them one by one */ do { - /* network to host order for our 32bit seqno and the - orig_interval */ - batman_ogm_packet->seqno = ntohl(batman_ogm_packet->seqno); - batman_ogm_packet->tt_crc = ntohs(batman_ogm_packet->tt_crc); - - tt_buff = packet_buff + buff_pos + BATMAN_OGM_HLEN; + tt_buff = packet_buff + buff_pos + BATADV_OGM_HLEN; - bat_iv_ogm_process(ethhdr, batman_ogm_packet, - tt_buff, if_incoming); + batadv_iv_ogm_process(ethhdr, batadv_ogm_packet, tt_buff, + if_incoming); - buff_pos += BATMAN_OGM_HLEN + - tt_len(batman_ogm_packet->tt_num_changes); + buff_pos += BATADV_OGM_HLEN; + buff_pos += batadv_tt_len(batadv_ogm_packet->tt_num_changes); - batman_ogm_packet = (struct batman_ogm_packet *) + batadv_ogm_packet = (struct batadv_ogm_packet *) (packet_buff + buff_pos); - } while (bat_iv_ogm_aggr_packet(buff_pos, packet_len, - batman_ogm_packet->tt_num_changes)); + } while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len, + batadv_ogm_packet->tt_num_changes)); kfree_skb(skb); return NET_RX_SUCCESS; } -static struct bat_algo_ops batman_iv __read_mostly = { - .name = "BATMAN IV", - .bat_iface_enable = bat_iv_ogm_iface_enable, - .bat_iface_disable = bat_iv_ogm_iface_disable, - .bat_iface_update_mac = bat_iv_ogm_iface_update_mac, - .bat_primary_iface_set = bat_iv_ogm_primary_iface_set, - .bat_ogm_schedule = bat_iv_ogm_schedule, - .bat_ogm_emit = bat_iv_ogm_emit, +static struct batadv_algo_ops batadv_batman_iv __read_mostly = { + .name = "BATMAN_IV", + .bat_iface_enable = batadv_iv_ogm_iface_enable, + .bat_iface_disable = batadv_iv_ogm_iface_disable, + .bat_iface_update_mac = batadv_iv_ogm_iface_update_mac, + .bat_primary_iface_set = batadv_iv_ogm_primary_iface_set, + .bat_ogm_schedule = batadv_iv_ogm_schedule, + .bat_ogm_emit = batadv_iv_ogm_emit, }; -int __init bat_iv_init(void) +int __init batadv_iv_init(void) { int ret; /* batman originator packet */ - ret = recv_handler_register(BAT_IV_OGM, bat_iv_ogm_receive); + ret = batadv_recv_handler_register(BATADV_IV_OGM, + batadv_iv_ogm_receive); if (ret < 0) goto out; - ret = bat_algo_register(&batman_iv); + ret = batadv_algo_register(&batadv_batman_iv); if (ret < 0) goto handler_unregister; goto out; handler_unregister: - recv_handler_unregister(BAT_IV_OGM); + batadv_recv_handler_unregister(BATADV_IV_OGM); out: return ret; } diff --git a/net/batman-adv/bat_sysfs.c b/net/batman-adv/bat_sysfs.c deleted file mode 100644 index 5bc7b66d32dc..000000000000 --- a/net/batman-adv/bat_sysfs.c +++ /dev/null @@ -1,735 +0,0 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: - * - * Marek Lindner - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of version 2 of the GNU General Public - * License as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA - * 02110-1301, USA - * - */ - -#include "main.h" -#include "bat_sysfs.h" -#include "translation-table.h" -#include "originator.h" -#include "hard-interface.h" -#include "gateway_common.h" -#include "gateway_client.h" -#include "vis.h" - -static struct net_device *kobj_to_netdev(struct kobject *obj) -{ - struct device *dev = container_of(obj->parent, struct device, kobj); - return to_net_dev(dev); -} - -static struct bat_priv *kobj_to_batpriv(struct kobject *obj) -{ - struct net_device *net_dev = kobj_to_netdev(obj); - return netdev_priv(net_dev); -} - -#define UEV_TYPE_VAR "BATTYPE=" -#define UEV_ACTION_VAR "BATACTION=" -#define UEV_DATA_VAR "BATDATA=" - -static char *uev_action_str[] = { - "add", - "del", - "change" -}; - -static char *uev_type_str[] = { - "gw" -}; - -/* Use this, if you have customized show and store functions */ -#define BAT_ATTR(_name, _mode, _show, _store) \ -struct bat_attribute bat_attr_##_name = { \ - .attr = {.name = __stringify(_name), \ - .mode = _mode }, \ - .show = _show, \ - .store = _store, \ -}; - -#define BAT_ATTR_SIF_STORE_BOOL(_name, _post_func) \ -ssize_t store_##_name(struct kobject *kobj, struct attribute *attr, \ - char *buff, size_t count) \ -{ \ - struct net_device *net_dev = kobj_to_netdev(kobj); \ - struct bat_priv *bat_priv = netdev_priv(net_dev); \ - return __store_bool_attr(buff, count, _post_func, attr, \ - &bat_priv->_name, net_dev); \ -} - -#define BAT_ATTR_SIF_SHOW_BOOL(_name) \ -ssize_t show_##_name(struct kobject *kobj, \ - struct attribute *attr, char *buff) \ -{ \ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); \ - return sprintf(buff, "%s\n", \ - atomic_read(&bat_priv->_name) == 0 ? 
\ - "disabled" : "enabled"); \ -} \ - -/* Use this, if you are going to turn a [name] in the soft-interface - * (bat_priv) on or off */ -#define BAT_ATTR_SIF_BOOL(_name, _mode, _post_func) \ - static BAT_ATTR_SIF_STORE_BOOL(_name, _post_func) \ - static BAT_ATTR_SIF_SHOW_BOOL(_name) \ - static BAT_ATTR(_name, _mode, show_##_name, store_##_name) - - -#define BAT_ATTR_SIF_STORE_UINT(_name, _min, _max, _post_func) \ -ssize_t store_##_name(struct kobject *kobj, struct attribute *attr, \ - char *buff, size_t count) \ -{ \ - struct net_device *net_dev = kobj_to_netdev(kobj); \ - struct bat_priv *bat_priv = netdev_priv(net_dev); \ - return __store_uint_attr(buff, count, _min, _max, _post_func, \ - attr, &bat_priv->_name, net_dev); \ -} - -#define BAT_ATTR_SIF_SHOW_UINT(_name) \ -ssize_t show_##_name(struct kobject *kobj, \ - struct attribute *attr, char *buff) \ -{ \ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); \ - return sprintf(buff, "%i\n", atomic_read(&bat_priv->_name)); \ -} \ - -/* Use this, if you are going to set [name] in the soft-interface - * (bat_priv) to an unsigned integer value */ -#define BAT_ATTR_SIF_UINT(_name, _mode, _min, _max, _post_func) \ - static BAT_ATTR_SIF_STORE_UINT(_name, _min, _max, _post_func) \ - static BAT_ATTR_SIF_SHOW_UINT(_name) \ - static BAT_ATTR(_name, _mode, show_##_name, store_##_name) - - -#define BAT_ATTR_HIF_STORE_UINT(_name, _min, _max, _post_func) \ -ssize_t store_##_name(struct kobject *kobj, struct attribute *attr, \ - char *buff, size_t count) \ -{ \ - struct net_device *net_dev = kobj_to_netdev(kobj); \ - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); \ - ssize_t length; \ - \ - if (!hard_iface) \ - return 0; \ - \ - length = __store_uint_attr(buff, count, _min, _max, _post_func, \ - attr, &hard_iface->_name, net_dev); \ - \ - hardif_free_ref(hard_iface); \ - return length; \ -} - -#define BAT_ATTR_HIF_SHOW_UINT(_name) \ -ssize_t show_##_name(struct kobject *kobj, \ - struct attribute *attr, char *buff) \ -{ \ - struct net_device *net_dev = kobj_to_netdev(kobj); \ - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); \ - ssize_t length; \ - \ - if (!hard_iface) \ - return 0; \ - \ - length = sprintf(buff, "%i\n", atomic_read(&hard_iface->_name));\ - \ - hardif_free_ref(hard_iface); \ - return length; \ -} - -/* Use this, if you are going to set [name] in hard_iface to an - * unsigned integer value*/ -#define BAT_ATTR_HIF_UINT(_name, _mode, _min, _max, _post_func) \ - static BAT_ATTR_HIF_STORE_UINT(_name, _min, _max, _post_func) \ - static BAT_ATTR_HIF_SHOW_UINT(_name) \ - static BAT_ATTR(_name, _mode, show_##_name, store_##_name) - - -static int store_bool_attr(char *buff, size_t count, - struct net_device *net_dev, - const char *attr_name, atomic_t *attr) -{ - int enabled = -1; - - if (buff[count - 1] == '\n') - buff[count - 1] = '\0'; - - if ((strncmp(buff, "1", 2) == 0) || - (strncmp(buff, "enable", 7) == 0) || - (strncmp(buff, "enabled", 8) == 0)) - enabled = 1; - - if ((strncmp(buff, "0", 2) == 0) || - (strncmp(buff, "disable", 8) == 0) || - (strncmp(buff, "disabled", 9) == 0)) - enabled = 0; - - if (enabled < 0) { - bat_info(net_dev, - "%s: Invalid parameter received: %s\n", - attr_name, buff); - return -EINVAL; - } - - if (atomic_read(attr) == enabled) - return count; - - bat_info(net_dev, "%s: Changing from: %s to: %s\n", attr_name, - atomic_read(attr) == 1 ? "enabled" : "disabled", - enabled == 1 ? 
"enabled" : "disabled"); - - atomic_set(attr, (unsigned int)enabled); - return count; -} - -static inline ssize_t __store_bool_attr(char *buff, size_t count, - void (*post_func)(struct net_device *), - struct attribute *attr, - atomic_t *attr_store, struct net_device *net_dev) -{ - int ret; - - ret = store_bool_attr(buff, count, net_dev, attr->name, attr_store); - if (post_func && ret) - post_func(net_dev); - - return ret; -} - -static int store_uint_attr(const char *buff, size_t count, - struct net_device *net_dev, const char *attr_name, - unsigned int min, unsigned int max, atomic_t *attr) -{ - unsigned long uint_val; - int ret; - - ret = kstrtoul(buff, 10, &uint_val); - if (ret) { - bat_info(net_dev, - "%s: Invalid parameter received: %s\n", - attr_name, buff); - return -EINVAL; - } - - if (uint_val < min) { - bat_info(net_dev, "%s: Value is too small: %lu min: %u\n", - attr_name, uint_val, min); - return -EINVAL; - } - - if (uint_val > max) { - bat_info(net_dev, "%s: Value is too big: %lu max: %u\n", - attr_name, uint_val, max); - return -EINVAL; - } - - if (atomic_read(attr) == uint_val) - return count; - - bat_info(net_dev, "%s: Changing from: %i to: %lu\n", - attr_name, atomic_read(attr), uint_val); - - atomic_set(attr, uint_val); - return count; -} - -static inline ssize_t __store_uint_attr(const char *buff, size_t count, - int min, int max, - void (*post_func)(struct net_device *), - const struct attribute *attr, - atomic_t *attr_store, struct net_device *net_dev) -{ - int ret; - - ret = store_uint_attr(buff, count, net_dev, attr->name, - min, max, attr_store); - if (post_func && ret) - post_func(net_dev); - - return ret; -} - -static ssize_t show_vis_mode(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); - int vis_mode = atomic_read(&bat_priv->vis_mode); - - return sprintf(buff, "%s\n", - vis_mode == VIS_TYPE_CLIENT_UPDATE ? - "client" : "server"); -} - -static ssize_t store_vis_mode(struct kobject *kobj, struct attribute *attr, - char *buff, size_t count) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - struct bat_priv *bat_priv = netdev_priv(net_dev); - unsigned long val; - int ret, vis_mode_tmp = -1; - - ret = kstrtoul(buff, 10, &val); - - if (((count == 2) && (!ret) && (val == VIS_TYPE_CLIENT_UPDATE)) || - (strncmp(buff, "client", 6) == 0) || - (strncmp(buff, "off", 3) == 0)) - vis_mode_tmp = VIS_TYPE_CLIENT_UPDATE; - - if (((count == 2) && (!ret) && (val == VIS_TYPE_SERVER_SYNC)) || - (strncmp(buff, "server", 6) == 0)) - vis_mode_tmp = VIS_TYPE_SERVER_SYNC; - - if (vis_mode_tmp < 0) { - if (buff[count - 1] == '\n') - buff[count - 1] = '\0'; - - bat_info(net_dev, - "Invalid parameter for 'vis mode' setting received: %s\n", - buff); - return -EINVAL; - } - - if (atomic_read(&bat_priv->vis_mode) == vis_mode_tmp) - return count; - - bat_info(net_dev, "Changing vis mode from: %s to: %s\n", - atomic_read(&bat_priv->vis_mode) == VIS_TYPE_CLIENT_UPDATE ? - "client" : "server", vis_mode_tmp == VIS_TYPE_CLIENT_UPDATE ? 
- "client" : "server"); - - atomic_set(&bat_priv->vis_mode, (unsigned int)vis_mode_tmp); - return count; -} - -static ssize_t show_bat_algo(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); - return sprintf(buff, "%s\n", bat_priv->bat_algo_ops->name); -} - -static void post_gw_deselect(struct net_device *net_dev) -{ - struct bat_priv *bat_priv = netdev_priv(net_dev); - gw_deselect(bat_priv); -} - -static ssize_t show_gw_mode(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); - int bytes_written; - - switch (atomic_read(&bat_priv->gw_mode)) { - case GW_MODE_CLIENT: - bytes_written = sprintf(buff, "%s\n", GW_MODE_CLIENT_NAME); - break; - case GW_MODE_SERVER: - bytes_written = sprintf(buff, "%s\n", GW_MODE_SERVER_NAME); - break; - default: - bytes_written = sprintf(buff, "%s\n", GW_MODE_OFF_NAME); - break; - } - - return bytes_written; -} - -static ssize_t store_gw_mode(struct kobject *kobj, struct attribute *attr, - char *buff, size_t count) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - struct bat_priv *bat_priv = netdev_priv(net_dev); - char *curr_gw_mode_str; - int gw_mode_tmp = -1; - - if (buff[count - 1] == '\n') - buff[count - 1] = '\0'; - - if (strncmp(buff, GW_MODE_OFF_NAME, strlen(GW_MODE_OFF_NAME)) == 0) - gw_mode_tmp = GW_MODE_OFF; - - if (strncmp(buff, GW_MODE_CLIENT_NAME, - strlen(GW_MODE_CLIENT_NAME)) == 0) - gw_mode_tmp = GW_MODE_CLIENT; - - if (strncmp(buff, GW_MODE_SERVER_NAME, - strlen(GW_MODE_SERVER_NAME)) == 0) - gw_mode_tmp = GW_MODE_SERVER; - - if (gw_mode_tmp < 0) { - bat_info(net_dev, - "Invalid parameter for 'gw mode' setting received: %s\n", - buff); - return -EINVAL; - } - - if (atomic_read(&bat_priv->gw_mode) == gw_mode_tmp) - return count; - - switch (atomic_read(&bat_priv->gw_mode)) { - case GW_MODE_CLIENT: - curr_gw_mode_str = GW_MODE_CLIENT_NAME; - break; - case GW_MODE_SERVER: - curr_gw_mode_str = GW_MODE_SERVER_NAME; - break; - default: - curr_gw_mode_str = GW_MODE_OFF_NAME; - break; - } - - bat_info(net_dev, "Changing gw mode from: %s to: %s\n", - curr_gw_mode_str, buff); - - gw_deselect(bat_priv); - atomic_set(&bat_priv->gw_mode, (unsigned int)gw_mode_tmp); - return count; -} - -static ssize_t show_gw_bwidth(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct bat_priv *bat_priv = kobj_to_batpriv(kobj); - int down, up; - int gw_bandwidth = atomic_read(&bat_priv->gw_bandwidth); - - gw_bandwidth_to_kbit(gw_bandwidth, &down, &up); - return sprintf(buff, "%i%s/%i%s\n", - (down > 2048 ? down / 1024 : down), - (down > 2048 ? "MBit" : "KBit"), - (up > 2048 ? up / 1024 : up), - (up > 2048 ? 
"MBit" : "KBit")); -} - -static ssize_t store_gw_bwidth(struct kobject *kobj, struct attribute *attr, - char *buff, size_t count) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - - if (buff[count - 1] == '\n') - buff[count - 1] = '\0'; - - return gw_bandwidth_set(net_dev, buff, count); -} - -BAT_ATTR_SIF_BOOL(aggregated_ogms, S_IRUGO | S_IWUSR, NULL); -BAT_ATTR_SIF_BOOL(bonding, S_IRUGO | S_IWUSR, NULL); -#ifdef CONFIG_BATMAN_ADV_BLA -BAT_ATTR_SIF_BOOL(bridge_loop_avoidance, S_IRUGO | S_IWUSR, NULL); -#endif -BAT_ATTR_SIF_BOOL(fragmentation, S_IRUGO | S_IWUSR, update_min_mtu); -BAT_ATTR_SIF_BOOL(ap_isolation, S_IRUGO | S_IWUSR, NULL); -static BAT_ATTR(vis_mode, S_IRUGO | S_IWUSR, show_vis_mode, store_vis_mode); -static BAT_ATTR(routing_algo, S_IRUGO, show_bat_algo, NULL); -static BAT_ATTR(gw_mode, S_IRUGO | S_IWUSR, show_gw_mode, store_gw_mode); -BAT_ATTR_SIF_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * JITTER, INT_MAX, NULL); -BAT_ATTR_SIF_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, TQ_MAX_VALUE, NULL); -BAT_ATTR_SIF_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE, - post_gw_deselect); -static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth, - store_gw_bwidth); -#ifdef CONFIG_BATMAN_ADV_DEBUG -BAT_ATTR_SIF_UINT(log_level, S_IRUGO | S_IWUSR, 0, 15, NULL); -#endif - -static struct bat_attribute *mesh_attrs[] = { - &bat_attr_aggregated_ogms, - &bat_attr_bonding, -#ifdef CONFIG_BATMAN_ADV_BLA - &bat_attr_bridge_loop_avoidance, -#endif - &bat_attr_fragmentation, - &bat_attr_ap_isolation, - &bat_attr_vis_mode, - &bat_attr_routing_algo, - &bat_attr_gw_mode, - &bat_attr_orig_interval, - &bat_attr_hop_penalty, - &bat_attr_gw_sel_class, - &bat_attr_gw_bandwidth, -#ifdef CONFIG_BATMAN_ADV_DEBUG - &bat_attr_log_level, -#endif - NULL, -}; - -int sysfs_add_meshif(struct net_device *dev) -{ - struct kobject *batif_kobject = &dev->dev.kobj; - struct bat_priv *bat_priv = netdev_priv(dev); - struct bat_attribute **bat_attr; - int err; - - bat_priv->mesh_obj = kobject_create_and_add(SYSFS_IF_MESH_SUBDIR, - batif_kobject); - if (!bat_priv->mesh_obj) { - bat_err(dev, "Can't add sysfs directory: %s/%s\n", dev->name, - SYSFS_IF_MESH_SUBDIR); - goto out; - } - - for (bat_attr = mesh_attrs; *bat_attr; ++bat_attr) { - err = sysfs_create_file(bat_priv->mesh_obj, - &((*bat_attr)->attr)); - if (err) { - bat_err(dev, "Can't add sysfs file: %s/%s/%s\n", - dev->name, SYSFS_IF_MESH_SUBDIR, - ((*bat_attr)->attr).name); - goto rem_attr; - } - } - - return 0; - -rem_attr: - for (bat_attr = mesh_attrs; *bat_attr; ++bat_attr) - sysfs_remove_file(bat_priv->mesh_obj, &((*bat_attr)->attr)); - - kobject_put(bat_priv->mesh_obj); - bat_priv->mesh_obj = NULL; -out: - return -ENOMEM; -} - -void sysfs_del_meshif(struct net_device *dev) -{ - struct bat_priv *bat_priv = netdev_priv(dev); - struct bat_attribute **bat_attr; - - for (bat_attr = mesh_attrs; *bat_attr; ++bat_attr) - sysfs_remove_file(bat_priv->mesh_obj, &((*bat_attr)->attr)); - - kobject_put(bat_priv->mesh_obj); - bat_priv->mesh_obj = NULL; -} - -static ssize_t show_mesh_iface(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); - ssize_t length; - - if (!hard_iface) - return 0; - - length = sprintf(buff, "%s\n", hard_iface->if_status == IF_NOT_IN_USE ? 
- "none" : hard_iface->soft_iface->name); - - hardif_free_ref(hard_iface); - - return length; -} - -static ssize_t store_mesh_iface(struct kobject *kobj, struct attribute *attr, - char *buff, size_t count) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); - int status_tmp = -1; - int ret = count; - - if (!hard_iface) - return count; - - if (buff[count - 1] == '\n') - buff[count - 1] = '\0'; - - if (strlen(buff) >= IFNAMSIZ) { - pr_err("Invalid parameter for 'mesh_iface' setting received: interface name too long '%s'\n", - buff); - hardif_free_ref(hard_iface); - return -EINVAL; - } - - if (strncmp(buff, "none", 4) == 0) - status_tmp = IF_NOT_IN_USE; - else - status_tmp = IF_I_WANT_YOU; - - if (hard_iface->if_status == status_tmp) - goto out; - - if ((hard_iface->soft_iface) && - (strncmp(hard_iface->soft_iface->name, buff, IFNAMSIZ) == 0)) - goto out; - - if (!rtnl_trylock()) { - ret = -ERESTARTSYS; - goto out; - } - - if (status_tmp == IF_NOT_IN_USE) { - hardif_disable_interface(hard_iface); - goto unlock; - } - - /* if the interface already is in use */ - if (hard_iface->if_status != IF_NOT_IN_USE) - hardif_disable_interface(hard_iface); - - ret = hardif_enable_interface(hard_iface, buff); - -unlock: - rtnl_unlock(); -out: - hardif_free_ref(hard_iface); - return ret; -} - -static ssize_t show_iface_status(struct kobject *kobj, struct attribute *attr, - char *buff) -{ - struct net_device *net_dev = kobj_to_netdev(kobj); - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); - ssize_t length; - - if (!hard_iface) - return 0; - - switch (hard_iface->if_status) { - case IF_TO_BE_REMOVED: - length = sprintf(buff, "disabling\n"); - break; - case IF_INACTIVE: - length = sprintf(buff, "inactive\n"); - break; - case IF_ACTIVE: - length = sprintf(buff, "active\n"); - break; - case IF_TO_BE_ACTIVATED: - length = sprintf(buff, "enabling\n"); - break; - case IF_NOT_IN_USE: - default: - length = sprintf(buff, "not in use\n"); - break; - } - - hardif_free_ref(hard_iface); - - return length; -} - -static BAT_ATTR(mesh_iface, S_IRUGO | S_IWUSR, - show_mesh_iface, store_mesh_iface); -static BAT_ATTR(iface_status, S_IRUGO, show_iface_status, NULL); - -static struct bat_attribute *batman_attrs[] = { - &bat_attr_mesh_iface, - &bat_attr_iface_status, - NULL, -}; - -int sysfs_add_hardif(struct kobject **hardif_obj, struct net_device *dev) -{ - struct kobject *hardif_kobject = &dev->dev.kobj; - struct bat_attribute **bat_attr; - int err; - - *hardif_obj = kobject_create_and_add(SYSFS_IF_BAT_SUBDIR, - hardif_kobject); - - if (!*hardif_obj) { - bat_err(dev, "Can't add sysfs directory: %s/%s\n", dev->name, - SYSFS_IF_BAT_SUBDIR); - goto out; - } - - for (bat_attr = batman_attrs; *bat_attr; ++bat_attr) { - err = sysfs_create_file(*hardif_obj, &((*bat_attr)->attr)); - if (err) { - bat_err(dev, "Can't add sysfs file: %s/%s/%s\n", - dev->name, SYSFS_IF_BAT_SUBDIR, - ((*bat_attr)->attr).name); - goto rem_attr; - } - } - - return 0; - -rem_attr: - for (bat_attr = batman_attrs; *bat_attr; ++bat_attr) - sysfs_remove_file(*hardif_obj, &((*bat_attr)->attr)); -out: - return -ENOMEM; -} - -void sysfs_del_hardif(struct kobject **hardif_obj) -{ - kobject_put(*hardif_obj); - *hardif_obj = NULL; -} - -int throw_uevent(struct bat_priv *bat_priv, enum uev_type type, - enum uev_action action, const char *data) -{ - int ret = -1; - struct hard_iface *primary_if = NULL; - struct kobject *bat_kobj; - char *uevent_env[4] = { NULL, NULL, NULL, NULL }; - - 
primary_if = primary_if_get_selected(bat_priv); - if (!primary_if) - goto out; - - bat_kobj = &primary_if->soft_iface->dev.kobj; - - uevent_env[0] = kmalloc(strlen(UEV_TYPE_VAR) + - strlen(uev_type_str[type]) + 1, - GFP_ATOMIC); - if (!uevent_env[0]) - goto out; - - sprintf(uevent_env[0], "%s%s", UEV_TYPE_VAR, uev_type_str[type]); - - uevent_env[1] = kmalloc(strlen(UEV_ACTION_VAR) + - strlen(uev_action_str[action]) + 1, - GFP_ATOMIC); - if (!uevent_env[1]) - goto out; - - sprintf(uevent_env[1], "%s%s", UEV_ACTION_VAR, uev_action_str[action]); - - /* If the event is DEL, ignore the data field */ - if (action != UEV_DEL) { - uevent_env[2] = kmalloc(strlen(UEV_DATA_VAR) + - strlen(data) + 1, GFP_ATOMIC); - if (!uevent_env[2]) - goto out; - - sprintf(uevent_env[2], "%s%s", UEV_DATA_VAR, data); - } - - ret = kobject_uevent_env(bat_kobj, KOBJ_CHANGE, uevent_env); -out: - kfree(uevent_env[0]); - kfree(uevent_env[1]); - kfree(uevent_env[2]); - - if (primary_if) - hardif_free_ref(primary_if); - - if (ret) - bat_dbg(DBG_BATMAN, bat_priv, - "Impossible to send uevent for (%s,%s,%s) event (err: %d)\n", - uev_type_str[type], uev_action_str[action], - (action == UEV_DEL ? "NULL" : data), ret); - return ret; -} diff --git a/net/batman-adv/bitarray.c b/net/batman-adv/bitarray.c index 07ae6e1b8aca..aea174cdbfbd 100644 --- a/net/batman-adv/bitarray.c +++ b/net/batman-adv/bitarray.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -25,12 +23,12 @@ #include <linux/bitops.h> /* shift the packet array by n places. */ -static void bat_bitmap_shift_left(unsigned long *seq_bits, int32_t n) +static void batadv_bitmap_shift_left(unsigned long *seq_bits, int32_t n) { - if (n <= 0 || n >= TQ_LOCAL_WINDOW_SIZE) + if (n <= 0 || n >= BATADV_TQ_LOCAL_WINDOW_SIZE) return; - bitmap_shift_left(seq_bits, seq_bits, n, TQ_LOCAL_WINDOW_SIZE); + bitmap_shift_left(seq_bits, seq_bits, n, BATADV_TQ_LOCAL_WINDOW_SIZE); } @@ -40,58 +38,57 @@ static void bat_bitmap_shift_left(unsigned long *seq_bits, int32_t n) * 1 if the window was moved (either new or very old) * 0 if the window was not moved/shifted. */ -int bit_get_packet(void *priv, unsigned long *seq_bits, - int32_t seq_num_diff, int set_mark) +int batadv_bit_get_packet(void *priv, unsigned long *seq_bits, + int32_t seq_num_diff, int set_mark) { - struct bat_priv *bat_priv = priv; + struct batadv_priv *bat_priv = priv; /* sequence number is slightly older. We already got a sequence number - * higher than this one, so we just mark it. */ - - if ((seq_num_diff <= 0) && (seq_num_diff > -TQ_LOCAL_WINDOW_SIZE)) { + * higher than this one, so we just mark it. 
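Since the receive window is 64 bits wide by default, the shift-and-mark bookkeeping in this file can be pictured with a plain uint64_t standing in for the kernel bitmap. This is a simplification: the real code goes through bitmap_shift_left() and friends, so the window size is not tied to one machine word.

#include <stdint.h>
#include <stdio.h>

/* toy 64-bit receive window: bit 0 = newest seqno, bit 63 = oldest */
static uint64_t window;

/* a newer seqno arrived: slide the window and mark the new head */
static void window_advance(int seq_num_diff)
{
	window <<= seq_num_diff;	/* older packets move towards bit 63 */
	window |= 1ULL;			/* mark the packet that just arrived */
}

int main(void)
{
	window_advance(1);	/* seqno N     */
	window_advance(1);	/* seqno N + 1 */
	window_advance(3);	/* seqno N + 4; N+2 and N+3 were lost */
	printf("%#llx\n", (unsigned long long)window);	/* 0x19: bits 0, 3, 4 */
	return 0;
}

Bit 0 always stands for the newest sequence number seen; anything shifted past the top of the window simply falls out, which is what real_packet_count (a bitmap_weight over the window) then reflects.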
+ */ + if (seq_num_diff <= 0 && seq_num_diff > -BATADV_TQ_LOCAL_WINDOW_SIZE) { if (set_mark) - bat_set_bit(seq_bits, -seq_num_diff); + batadv_set_bit(seq_bits, -seq_num_diff); return 0; } /* sequence number is slightly newer, so we shift the window and - * set the mark if required */ - - if ((seq_num_diff > 0) && (seq_num_diff < TQ_LOCAL_WINDOW_SIZE)) { - bat_bitmap_shift_left(seq_bits, seq_num_diff); + * set the mark if required + */ + if (seq_num_diff > 0 && seq_num_diff < BATADV_TQ_LOCAL_WINDOW_SIZE) { + batadv_bitmap_shift_left(seq_bits, seq_num_diff); if (set_mark) - bat_set_bit(seq_bits, 0); + batadv_set_bit(seq_bits, 0); return 1; } /* sequence number is much newer, probably missed a lot of packets */ - - if ((seq_num_diff >= TQ_LOCAL_WINDOW_SIZE) && - (seq_num_diff < EXPECTED_SEQNO_RANGE)) { - bat_dbg(DBG_BATMAN, bat_priv, - "We missed a lot of packets (%i) !\n", - seq_num_diff - 1); - bitmap_zero(seq_bits, TQ_LOCAL_WINDOW_SIZE); + if (seq_num_diff >= BATADV_TQ_LOCAL_WINDOW_SIZE && + seq_num_diff < BATADV_EXPECTED_SEQNO_RANGE) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "We missed a lot of packets (%i) !\n", + seq_num_diff - 1); + bitmap_zero(seq_bits, BATADV_TQ_LOCAL_WINDOW_SIZE); if (set_mark) - bat_set_bit(seq_bits, 0); + batadv_set_bit(seq_bits, 0); return 1; } /* received a much older packet. The other host either restarted * or the old packet got delayed somewhere in the network. The * packet should be dropped without calling this function if the - * seqno window is protected. */ - - if ((seq_num_diff <= -TQ_LOCAL_WINDOW_SIZE) || - (seq_num_diff >= EXPECTED_SEQNO_RANGE)) { + * seqno window is protected. + */ + if (seq_num_diff <= -BATADV_TQ_LOCAL_WINDOW_SIZE || + seq_num_diff >= BATADV_EXPECTED_SEQNO_RANGE) { - bat_dbg(DBG_BATMAN, bat_priv, - "Other host probably restarted!\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Other host probably restarted!\n"); - bitmap_zero(seq_bits, TQ_LOCAL_WINDOW_SIZE); + bitmap_zero(seq_bits, BATADV_TQ_LOCAL_WINDOW_SIZE); if (set_mark) - bat_set_bit(seq_bits, 0); + batadv_set_bit(seq_bits, 0); return 1; } diff --git a/net/batman-adv/bitarray.h b/net/batman-adv/bitarray.h index 1835c15cda41..a081ce1c0514 100644 --- a/net/batman-adv/bitarray.h +++ b/net/batman-adv/bitarray.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2012 B.A.T.M.A.N. 
contributors: * * Simon Wunderlich, Marek Lindner * @@ -16,39 +15,40 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_BITARRAY_H_ #define _NET_BATMAN_ADV_BITARRAY_H_ /* returns true if the corresponding bit in the given seq_bits indicates true - * and curr_seqno is within range of last_seqno */ -static inline int bat_test_bit(const unsigned long *seq_bits, - uint32_t last_seqno, uint32_t curr_seqno) + * and curr_seqno is within range of last_seqno + */ +static inline int batadv_test_bit(const unsigned long *seq_bits, + uint32_t last_seqno, uint32_t curr_seqno) { int32_t diff; diff = last_seqno - curr_seqno; - if (diff < 0 || diff >= TQ_LOCAL_WINDOW_SIZE) + if (diff < 0 || diff >= BATADV_TQ_LOCAL_WINDOW_SIZE) return 0; else return test_bit(diff, seq_bits); } /* turn corresponding bit on, so we can remember that we got the packet */ -static inline void bat_set_bit(unsigned long *seq_bits, int32_t n) +static inline void batadv_set_bit(unsigned long *seq_bits, int32_t n) { /* if too old, just drop it */ - if (n < 0 || n >= TQ_LOCAL_WINDOW_SIZE) + if (n < 0 || n >= BATADV_TQ_LOCAL_WINDOW_SIZE) return; set_bit(n, seq_bits); /* turn the position on */ } /* receive and process one packet, returns 1 if received seq_num is considered - * new, 0 if old */ -int bit_get_packet(void *priv, unsigned long *seq_bits, - int32_t seq_num_diff, int set_mark); + * new, 0 if old + */ +int batadv_bit_get_packet(void *priv, unsigned long *seq_bits, + int32_t seq_num_diff, int set_mark); #endif /* _NET_BATMAN_ADV_BITARRAY_H_ */ diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c index c5863f499133..6705d35b17ce 100644 --- a/net/batman-adv/bridge_loop_avoidance.c +++ b/net/batman-adv/bridge_loop_avoidance.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2011-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2012 B.A.T.M.A.N. 
contributors: * * Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -33,14 +31,14 @@ #include <net/arp.h> #include <linux/if_vlan.h> -static const uint8_t announce_mac[4] = {0x43, 0x05, 0x43, 0x05}; +static const uint8_t batadv_announce_mac[4] = {0x43, 0x05, 0x43, 0x05}; -static void bla_periodic_work(struct work_struct *work); -static void bla_send_announce(struct bat_priv *bat_priv, - struct backbone_gw *backbone_gw); +static void batadv_bla_periodic_work(struct work_struct *work); +static void batadv_bla_send_announce(struct batadv_priv *bat_priv, + struct batadv_backbone_gw *backbone_gw); /* return the index of the claim */ -static inline uint32_t choose_claim(const void *data, uint32_t size) +static inline uint32_t batadv_choose_claim(const void *data, uint32_t size) { const unsigned char *key = data; uint32_t hash = 0; @@ -60,7 +58,8 @@ static inline uint32_t choose_claim(const void *data, uint32_t size) } /* return the index of the backbone gateway */ -static inline uint32_t choose_backbone_gw(const void *data, uint32_t size) +static inline uint32_t batadv_choose_backbone_gw(const void *data, + uint32_t size) { const unsigned char *key = data; uint32_t hash = 0; @@ -81,74 +80,75 @@ static inline uint32_t choose_backbone_gw(const void *data, uint32_t size) /* compares address and vid of two backbone gws */ -static int compare_backbone_gw(const struct hlist_node *node, const void *data2) +static int batadv_compare_backbone_gw(const struct hlist_node *node, + const void *data2) { - const void *data1 = container_of(node, struct backbone_gw, + const void *data1 = container_of(node, struct batadv_backbone_gw, hash_entry); return (memcmp(data1, data2, ETH_ALEN + sizeof(short)) == 0 ? 1 : 0); } /* compares address and vid of two claims */ -static int compare_claim(const struct hlist_node *node, const void *data2) +static int batadv_compare_claim(const struct hlist_node *node, + const void *data2) { - const void *data1 = container_of(node, struct claim, + const void *data1 = container_of(node, struct batadv_claim, hash_entry); return (memcmp(data1, data2, ETH_ALEN + sizeof(short)) == 0 ? 1 : 0); } /* free a backbone gw */ -static void backbone_gw_free_ref(struct backbone_gw *backbone_gw) +static void batadv_backbone_gw_free_ref(struct batadv_backbone_gw *backbone_gw) { if (atomic_dec_and_test(&backbone_gw->refcount)) kfree_rcu(backbone_gw, rcu); } /* finally deinitialize the claim */ -static void claim_free_rcu(struct rcu_head *rcu) +static void batadv_claim_free_rcu(struct rcu_head *rcu) { - struct claim *claim; + struct batadv_claim *claim; - claim = container_of(rcu, struct claim, rcu); + claim = container_of(rcu, struct batadv_claim, rcu); - backbone_gw_free_ref(claim->backbone_gw); + batadv_backbone_gw_free_ref(claim->backbone_gw); kfree(claim); } /* free a claim, call claim_free_rcu if its the last reference */ -static void claim_free_ref(struct claim *claim) +static void batadv_claim_free_ref(struct batadv_claim *claim) { if (atomic_dec_and_test(&claim->refcount)) - call_rcu(&claim->rcu, claim_free_rcu); + call_rcu(&claim->rcu, batadv_claim_free_rcu); } -/** - * @bat_priv: the bat priv with all the soft interface information +/* @bat_priv: the bat priv with all the soft interface information * @data: search data (may be local/static data) * * looks for a claim in the hash, and returns it if found * or NULL otherwise. 
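Both compare helpers above memcmp() the first ETH_ALEN + sizeof(short) bytes of an entry, which only works because the MAC address and the VLAN id sit back to back at the very start of the claim and backbone_gw structures. A hypothetical mirror of that key layout (the real structs live elsewhere in the module and carry many more members):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6

/* hypothetical stand-in for the leading members of the claim and
 * backbone_gw structs: 6 address bytes immediately followed by the
 * vid, so a single memcmp() covers the whole lookup key (offset 6 is
 * already 2-byte aligned, so no padding is inserted in between).
 */
struct bla_key {
	uint8_t addr[ETH_ALEN];
	short vid;
};

static int bla_key_equal(const struct bla_key *a, const struct bla_key *b)
{
	return memcmp(a, b, ETH_ALEN + sizeof(short)) == 0;
}

int main(void)
{
	struct bla_key a = { { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 }, 1 };
	struct bla_key b = a;

	printf("%d\n", bla_key_equal(&a, &b));	/* 1: same addr and vid */
	b.vid = 2;
	printf("%d\n", bla_key_equal(&a, &b));	/* 0: vid differs */
	return 0;
}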
*/ -static struct claim *claim_hash_find(struct bat_priv *bat_priv, - struct claim *data) +static struct batadv_claim *batadv_claim_hash_find(struct batadv_priv *bat_priv, + struct batadv_claim *data) { - struct hashtable_t *hash = bat_priv->claim_hash; + struct batadv_hashtable *hash = bat_priv->claim_hash; struct hlist_head *head; struct hlist_node *node; - struct claim *claim; - struct claim *claim_tmp = NULL; + struct batadv_claim *claim; + struct batadv_claim *claim_tmp = NULL; int index; if (!hash) return NULL; - index = choose_claim(data, hash->size); + index = batadv_choose_claim(data, hash->size); head = &hash->table[index]; rcu_read_lock(); hlist_for_each_entry_rcu(claim, node, head, hash_entry) { - if (!compare_claim(&claim->hash_entry, data)) + if (!batadv_compare_claim(&claim->hash_entry, data)) continue; if (!atomic_inc_not_zero(&claim->refcount)) @@ -163,21 +163,22 @@ static struct claim *claim_hash_find(struct bat_priv *bat_priv, } /** + * batadv_backbone_hash_find - looks for a claim in the hash * @bat_priv: the bat priv with all the soft interface information * @addr: the address of the originator * @vid: the VLAN ID * - * looks for a claim in the hash, and returns it if found - * or NULL otherwise. + * Returns claim if found or NULL otherwise. */ -static struct backbone_gw *backbone_hash_find(struct bat_priv *bat_priv, - uint8_t *addr, short vid) +static struct batadv_backbone_gw * +batadv_backbone_hash_find(struct batadv_priv *bat_priv, + uint8_t *addr, short vid) { - struct hashtable_t *hash = bat_priv->backbone_hash; + struct batadv_hashtable *hash = bat_priv->backbone_hash; struct hlist_head *head; struct hlist_node *node; - struct backbone_gw search_entry, *backbone_gw; - struct backbone_gw *backbone_gw_tmp = NULL; + struct batadv_backbone_gw search_entry, *backbone_gw; + struct batadv_backbone_gw *backbone_gw_tmp = NULL; int index; if (!hash) @@ -186,13 +187,13 @@ static struct backbone_gw *backbone_hash_find(struct bat_priv *bat_priv, memcpy(search_entry.orig, addr, ETH_ALEN); search_entry.vid = vid; - index = choose_backbone_gw(&search_entry, hash->size); + index = batadv_choose_backbone_gw(&search_entry, hash->size); head = &hash->table[index]; rcu_read_lock(); hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) { - if (!compare_backbone_gw(&backbone_gw->hash_entry, - &search_entry)) + if (!batadv_compare_backbone_gw(&backbone_gw->hash_entry, + &search_entry)) continue; if (!atomic_inc_not_zero(&backbone_gw->refcount)) @@ -207,12 +208,13 @@ static struct backbone_gw *backbone_hash_find(struct bat_priv *bat_priv, } /* delete all claims for a backbone */ -static void bla_del_backbone_claims(struct backbone_gw *backbone_gw) +static void +batadv_bla_del_backbone_claims(struct batadv_backbone_gw *backbone_gw) { - struct hashtable_t *hash; + struct batadv_hashtable *hash; struct hlist_node *node, *node_tmp; struct hlist_head *head; - struct claim *claim; + struct batadv_claim *claim; int i; spinlock_t *list_lock; /* protects write access to the hash lists */ @@ -231,36 +233,35 @@ static void bla_del_backbone_claims(struct backbone_gw *backbone_gw) if (claim->backbone_gw != backbone_gw) continue; - claim_free_ref(claim); + batadv_claim_free_ref(claim); hlist_del_rcu(node); } spin_unlock_bh(list_lock); } /* all claims gone, intialize CRC */ - backbone_gw->crc = BLA_CRC_INIT; + backbone_gw->crc = BATADV_BLA_CRC_INIT; } /** + * batadv_bla_send_claim - sends a claim frame according to the provided info * @bat_priv: the bat priv with all the soft interface 
information * @orig: the mac address to be announced within the claim * @vid: the VLAN ID * @claimtype: the type of the claim (CLAIM, UNCLAIM, ANNOUNCE, ...) - * - * sends a claim frame according to the provided info. */ -static void bla_send_claim(struct bat_priv *bat_priv, uint8_t *mac, - short vid, int claimtype) +static void batadv_bla_send_claim(struct batadv_priv *bat_priv, uint8_t *mac, + short vid, int claimtype) { struct sk_buff *skb; struct ethhdr *ethhdr; - struct hard_iface *primary_if; + struct batadv_hard_iface *primary_if; struct net_device *soft_iface; uint8_t *hw_src; - struct bla_claim_dst local_claim_dest; - uint32_t zeroip = 0; + struct batadv_bla_claim_dst local_claim_dest; + __be32 zeroip = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) return; @@ -294,40 +295,41 @@ static void bla_send_claim(struct bat_priv *bat_priv, uint8_t *mac, /* now we pretend that the client would have sent this ... */ switch (claimtype) { - case CLAIM_TYPE_ADD: + case BATADV_CLAIM_TYPE_ADD: /* normal claim frame * set Ethernet SRC to the clients mac */ memcpy(ethhdr->h_source, mac, ETH_ALEN); - bat_dbg(DBG_BLA, bat_priv, - "bla_send_claim(): CLAIM %pM on vid %d\n", mac, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_send_claim(): CLAIM %pM on vid %d\n", mac, vid); break; - case CLAIM_TYPE_DEL: + case BATADV_CLAIM_TYPE_DEL: /* unclaim frame * set HW SRC to the clients mac */ memcpy(hw_src, mac, ETH_ALEN); - bat_dbg(DBG_BLA, bat_priv, - "bla_send_claim(): UNCLAIM %pM on vid %d\n", mac, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_send_claim(): UNCLAIM %pM on vid %d\n", mac, + vid); break; - case CLAIM_TYPE_ANNOUNCE: + case BATADV_CLAIM_TYPE_ANNOUNCE: /* announcement frame * set HW SRC to the special mac containg the crc */ memcpy(hw_src, mac, ETH_ALEN); - bat_dbg(DBG_BLA, bat_priv, - "bla_send_claim(): ANNOUNCE of %pM on vid %d\n", - ethhdr->h_source, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_send_claim(): ANNOUNCE of %pM on vid %d\n", + ethhdr->h_source, vid); break; - case CLAIM_TYPE_REQUEST: + case BATADV_CLAIM_TYPE_REQUEST: /* request frame * set HW SRC to the special mac containg the crc */ memcpy(hw_src, mac, ETH_ALEN); memcpy(ethhdr->h_dest, mac, ETH_ALEN); - bat_dbg(DBG_BLA, bat_priv, - "bla_send_claim(): REQUEST of %pM to %pMon vid %d\n", - ethhdr->h_source, ethhdr->h_dest, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_send_claim(): REQUEST of %pM to %pMon vid %d\n", + ethhdr->h_source, ethhdr->h_dest, vid); break; } @@ -344,10 +346,11 @@ static void bla_send_claim(struct bat_priv *bat_priv, uint8_t *mac, netif_rx(skb); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } /** + * batadv_bla_get_backbone_gw * @bat_priv: the bat priv with all the soft interface information * @orig: the mac address of the originator * @vid: the VLAN ID @@ -355,21 +358,22 @@ out: * searches for the backbone gw or creates a new one if it could not * be found. 
*/ -static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv, - uint8_t *orig, short vid) +static struct batadv_backbone_gw * +batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, uint8_t *orig, + short vid) { - struct backbone_gw *entry; - struct orig_node *orig_node; + struct batadv_backbone_gw *entry; + struct batadv_orig_node *orig_node; int hash_added; - entry = backbone_hash_find(bat_priv, orig, vid); + entry = batadv_backbone_hash_find(bat_priv, orig, vid); if (entry) return entry; - bat_dbg(DBG_BLA, bat_priv, - "bla_get_backbone_gw(): not found (%pM, %d), creating new entry\n", - orig, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_get_backbone_gw(): not found (%pM, %d), creating new entry\n", + orig, vid); entry = kzalloc(sizeof(*entry), GFP_ATOMIC); if (!entry) @@ -377,7 +381,7 @@ static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv, entry->vid = vid; entry->lasttime = jiffies; - entry->crc = BLA_CRC_INIT; + entry->crc = BATADV_BLA_CRC_INIT; entry->bat_priv = bat_priv; atomic_set(&entry->request_sent, 0); memcpy(entry->orig, orig, ETH_ALEN); @@ -385,8 +389,10 @@ static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv, /* one for the hash, one for returning */ atomic_set(&entry->refcount, 2); - hash_added = hash_add(bat_priv->backbone_hash, compare_backbone_gw, - choose_backbone_gw, entry, &entry->hash_entry); + hash_added = batadv_hash_add(bat_priv->backbone_hash, + batadv_compare_backbone_gw, + batadv_choose_backbone_gw, entry, + &entry->hash_entry); if (unlikely(hash_added != 0)) { /* hash failed, free the structure */ @@ -395,11 +401,11 @@ static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv, } /* this is a gateway now, remove any tt entries */ - orig_node = orig_hash_find(bat_priv, orig); + orig_node = batadv_orig_hash_find(bat_priv, orig); if (orig_node) { - tt_global_del_orig(bat_priv, orig_node, - "became a backbone gateway"); - orig_node_free_ref(orig_node); + batadv_tt_global_del_orig(bat_priv, orig_node, + "became a backbone gateway"); + batadv_orig_node_free_ref(orig_node); } return entry; } @@ -407,43 +413,46 @@ static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv, /* update or add the own backbone gw to make sure we announce * where we receive other backbone gws */ -static void bla_update_own_backbone_gw(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - short vid) +static void +batadv_bla_update_own_backbone_gw(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + short vid) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; - backbone_gw = bla_get_backbone_gw(bat_priv, - primary_if->net_dev->dev_addr, vid); + backbone_gw = batadv_bla_get_backbone_gw(bat_priv, + primary_if->net_dev->dev_addr, + vid); if (unlikely(!backbone_gw)) return; backbone_gw->lasttime = jiffies; - backbone_gw_free_ref(backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); } -/** - * @bat_priv: the bat priv with all the soft interface information +/* @bat_priv: the bat priv with all the soft interface information * @vid: the vid where the request came on * * Repeat all of our own claims, and finally send an ANNOUNCE frame * to allow the requester another check if the CRC is correct now. 
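The lookup-or-create helper above starts a freshly created entry with a refcount of 2 because two references leave the function: the one the hash table keeps and the one handed back to the caller. A stripped-down userspace sketch of the same pattern (hypothetical names, a single slot instead of a hash, a plain int instead of the kernel's atomic_t, no RCU):

#include <stdlib.h>

/* hypothetical, simplified entry: just enough to show the refcounting */
struct gw_entry {
	int refcount;
};

static struct gw_entry *slot;	/* stand-in for the backbone hash */

static void gw_entry_put(struct gw_entry *entry)
{
	if (--entry->refcount == 0)
		free(entry);
}

static struct gw_entry *gw_entry_get_or_create(void)
{
	struct gw_entry *entry = slot;

	if (entry) {		/* found: take a reference for the caller */
		entry->refcount++;
		return entry;
	}

	entry = calloc(1, sizeof(*entry));
	if (!entry)
		return NULL;

	/* one reference for the hash slot, one for the caller */
	entry->refcount = 2;
	slot = entry;
	return entry;
}

int main(void)
{
	struct gw_entry *e = gw_entry_get_or_create();	/* refcount is 2 */

	gw_entry_put(e);	/* caller is done, the hash still holds it */
	gw_entry_put(slot);	/* hash drops it too: the entry is freed */
	slot = NULL;
	return 0;
}

In the kernel version the "found" path takes its reference via atomic_inc_not_zero() inside the hash lookup shown earlier, which is why bla_get_backbone_gw() can return that entry directly.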
*/ -static void bla_answer_request(struct bat_priv *bat_priv, - struct hard_iface *primary_if, short vid) +static void batadv_bla_answer_request(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + short vid) { struct hlist_node *node; struct hlist_head *head; - struct hashtable_t *hash; - struct claim *claim; - struct backbone_gw *backbone_gw; + struct batadv_hashtable *hash; + struct batadv_claim *claim; + struct batadv_backbone_gw *backbone_gw; int i; - bat_dbg(DBG_BLA, bat_priv, - "bla_answer_request(): received a claim request, send all of our own claims again\n"); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_answer_request(): received a claim request, send all of our own claims again\n"); - backbone_gw = backbone_hash_find(bat_priv, - primary_if->net_dev->dev_addr, vid); + backbone_gw = batadv_backbone_hash_find(bat_priv, + primary_if->net_dev->dev_addr, + vid); if (!backbone_gw) return; @@ -457,36 +466,34 @@ static void bla_answer_request(struct bat_priv *bat_priv, if (claim->backbone_gw != backbone_gw) continue; - bla_send_claim(bat_priv, claim->addr, claim->vid, - CLAIM_TYPE_ADD); + batadv_bla_send_claim(bat_priv, claim->addr, claim->vid, + BATADV_CLAIM_TYPE_ADD); } rcu_read_unlock(); } /* finally, send an announcement frame */ - bla_send_announce(bat_priv, backbone_gw); - backbone_gw_free_ref(backbone_gw); + batadv_bla_send_announce(bat_priv, backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); } -/** - * @backbone_gw: the backbone gateway from whom we are out of sync +/* @backbone_gw: the backbone gateway from whom we are out of sync * * When the crc is wrong, ask the backbone gateway for a full table update. * After the request, it will repeat all of his own claims and finally * send an announcement claim with which we can check again. */ -static void bla_send_request(struct backbone_gw *backbone_gw) +static void batadv_bla_send_request(struct batadv_backbone_gw *backbone_gw) { /* first, remove all old entries */ - bla_del_backbone_claims(backbone_gw); + batadv_bla_del_backbone_claims(backbone_gw); - bat_dbg(DBG_BLA, backbone_gw->bat_priv, - "Sending REQUEST to %pM\n", - backbone_gw->orig); + batadv_dbg(BATADV_DBG_BLA, backbone_gw->bat_priv, + "Sending REQUEST to %pM\n", backbone_gw->orig); /* send request */ - bla_send_claim(backbone_gw->bat_priv, backbone_gw->orig, - backbone_gw->vid, CLAIM_TYPE_REQUEST); + batadv_bla_send_claim(backbone_gw->bat_priv, backbone_gw->orig, + backbone_gw->vid, BATADV_CLAIM_TYPE_REQUEST); /* no local broadcasts should be sent or received, for now. */ if (!atomic_read(&backbone_gw->request_sent)) { @@ -495,45 +502,45 @@ static void bla_send_request(struct backbone_gw *backbone_gw) } } -/** - * @bat_priv: the bat priv with all the soft interface information +/* @bat_priv: the bat priv with all the soft interface information * @backbone_gw: our backbone gateway which should be announced * * This function sends an announcement. It is called from multiple * places. 
*/ -static void bla_send_announce(struct bat_priv *bat_priv, - struct backbone_gw *backbone_gw) +static void batadv_bla_send_announce(struct batadv_priv *bat_priv, + struct batadv_backbone_gw *backbone_gw) { uint8_t mac[ETH_ALEN]; - uint16_t crc; + __be16 crc; - memcpy(mac, announce_mac, 4); + memcpy(mac, batadv_announce_mac, 4); crc = htons(backbone_gw->crc); - memcpy(&mac[4], (uint8_t *)&crc, 2); + memcpy(&mac[4], &crc, 2); - bla_send_claim(bat_priv, mac, backbone_gw->vid, CLAIM_TYPE_ANNOUNCE); + batadv_bla_send_claim(bat_priv, mac, backbone_gw->vid, + BATADV_CLAIM_TYPE_ANNOUNCE); } /** + * batadv_bla_add_claim - Adds a claim in the claim hash * @bat_priv: the bat priv with all the soft interface information * @mac: the mac address of the claim * @vid: the VLAN ID of the frame * @backbone_gw: the backbone gateway which claims it - * - * Adds a claim in the claim hash. */ -static void bla_add_claim(struct bat_priv *bat_priv, const uint8_t *mac, - const short vid, struct backbone_gw *backbone_gw) +static void batadv_bla_add_claim(struct batadv_priv *bat_priv, + const uint8_t *mac, const short vid, + struct batadv_backbone_gw *backbone_gw) { - struct claim *claim; - struct claim search_claim; + struct batadv_claim *claim; + struct batadv_claim search_claim; int hash_added; memcpy(search_claim.addr, mac, ETH_ALEN); search_claim.vid = vid; - claim = claim_hash_find(bat_priv, &search_claim); + claim = batadv_claim_hash_find(bat_priv, &search_claim); /* create a new claim entry if it does not exist yet. */ if (!claim) { @@ -547,11 +554,13 @@ static void bla_add_claim(struct bat_priv *bat_priv, const uint8_t *mac, claim->backbone_gw = backbone_gw; atomic_set(&claim->refcount, 2); - bat_dbg(DBG_BLA, bat_priv, - "bla_add_claim(): adding new entry %pM, vid %d to hash ...\n", - mac, vid); - hash_added = hash_add(bat_priv->claim_hash, compare_claim, - choose_claim, claim, &claim->hash_entry); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_add_claim(): adding new entry %pM, vid %d to hash ...\n", + mac, vid); + hash_added = batadv_hash_add(bat_priv->claim_hash, + batadv_compare_claim, + batadv_choose_claim, claim, + &claim->hash_entry); if (unlikely(hash_added != 0)) { /* only local changes happened. */ @@ -564,13 +573,13 @@ static void bla_add_claim(struct bat_priv *bat_priv, const uint8_t *mac, /* no need to register a new backbone */ goto claim_free_ref; - bat_dbg(DBG_BLA, bat_priv, - "bla_add_claim(): changing ownership for %pM, vid %d\n", - mac, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_add_claim(): changing ownership for %pM, vid %d\n", + mac, vid); claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN); - backbone_gw_free_ref(claim->backbone_gw); + batadv_backbone_gw_free_ref(claim->backbone_gw); } /* set (new) backbone gw */ @@ -581,45 +590,48 @@ static void bla_add_claim(struct bat_priv *bat_priv, const uint8_t *mac, backbone_gw->lasttime = jiffies; claim_free_ref: - claim_free_ref(claim); + batadv_claim_free_ref(claim); } /* Delete a claim from the claim hash which has the * given mac address and vid. 
*/ -static void bla_del_claim(struct bat_priv *bat_priv, const uint8_t *mac, - const short vid) +static void batadv_bla_del_claim(struct batadv_priv *bat_priv, + const uint8_t *mac, const short vid) { - struct claim search_claim, *claim; + struct batadv_claim search_claim, *claim; memcpy(search_claim.addr, mac, ETH_ALEN); search_claim.vid = vid; - claim = claim_hash_find(bat_priv, &search_claim); + claim = batadv_claim_hash_find(bat_priv, &search_claim); if (!claim) return; - bat_dbg(DBG_BLA, bat_priv, "bla_del_claim(): %pM, vid %d\n", mac, vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, "bla_del_claim(): %pM, vid %d\n", + mac, vid); - hash_remove(bat_priv->claim_hash, compare_claim, choose_claim, claim); - claim_free_ref(claim); /* reference from the hash is gone */ + batadv_hash_remove(bat_priv->claim_hash, batadv_compare_claim, + batadv_choose_claim, claim); + batadv_claim_free_ref(claim); /* reference from the hash is gone */ claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN); /* don't need the reference from hash_find() anymore */ - claim_free_ref(claim); + batadv_claim_free_ref(claim); } /* check for ANNOUNCE frame, return 1 if handled */ -static int handle_announce(struct bat_priv *bat_priv, - uint8_t *an_addr, uint8_t *backbone_addr, short vid) +static int batadv_handle_announce(struct batadv_priv *bat_priv, + uint8_t *an_addr, uint8_t *backbone_addr, + short vid) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; uint16_t crc; - if (memcmp(an_addr, announce_mac, 4) != 0) + if (memcmp(an_addr, batadv_announce_mac, 4) != 0) return 0; - backbone_gw = bla_get_backbone_gw(bat_priv, backbone_addr, vid); + backbone_gw = batadv_bla_get_backbone_gw(bat_priv, backbone_addr, vid); if (unlikely(!backbone_gw)) return 1; @@ -627,19 +639,19 @@ static int handle_announce(struct bat_priv *bat_priv, /* handle as ANNOUNCE frame */ backbone_gw->lasttime = jiffies; - crc = ntohs(*((uint16_t *)(&an_addr[4]))); + crc = ntohs(*((__be16 *)(&an_addr[4]))); - bat_dbg(DBG_BLA, bat_priv, - "handle_announce(): ANNOUNCE vid %d (sent by %pM)... CRC = %04x\n", - vid, backbone_gw->orig, crc); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "handle_announce(): ANNOUNCE vid %d (sent by %pM)... CRC = %04x\n", + vid, backbone_gw->orig, crc); if (backbone_gw->crc != crc) { - bat_dbg(DBG_BLA, backbone_gw->bat_priv, - "handle_announce(): CRC FAILED for %pM/%d (my = %04x, sent = %04x)\n", - backbone_gw->orig, backbone_gw->vid, backbone_gw->crc, - crc); + batadv_dbg(BATADV_DBG_BLA, backbone_gw->bat_priv, + "handle_announce(): CRC FAILED for %pM/%d (my = %04x, sent = %04x)\n", + backbone_gw->orig, backbone_gw->vid, + backbone_gw->crc, crc); - bla_send_request(backbone_gw); + batadv_bla_send_request(backbone_gw); } else { /* if we have sent a request and the crc was OK, * we can allow traffic again. 
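As an aside for readers of this hunk (not part of the patch itself): batadv_bla_add_claim() and batadv_bla_del_claim() both apply "crc ^= crc16(0, claim->addr, ETH_ALEN)", so the per-gateway checksum is simply the XOR of crc16() over every currently claimed client MAC, and batadv_bla_send_announce()/batadv_handle_announce() ship that 16-bit value in the last two bytes of a special announce MAC. The userspace sketch below only illustrates that arithmetic: crc16() here is a plain CRC-16/ARC stand-in for the kernel's lib/crc16, and announce_prefix is a hypothetical placeholder for batadv_announce_mac.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>          /* htons()/ntohs() */

#define ETH_ALEN 6

/* stand-in for the kernel's crc16() (reflected polynomial 0xA001) */
static uint16_t crc16(uint16_t crc, const uint8_t *buf, size_t len)
{
    size_t i, bit;

    for (i = 0; i < len; i++) {
        crc ^= buf[i];
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

int main(void)
{
    const uint8_t client1[ETH_ALEN] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
    const uint8_t client2[ETH_ALEN] = { 0x02, 0x66, 0x77, 0x88, 0x99, 0xaa };
    const uint8_t announce_prefix[4] = { 0x43, 0x05, 0x43, 0x05 }; /* hypothetical */
    uint8_t announce_mac[ETH_ALEN];
    uint16_t crc = 0;           /* BATADV_BLA_CRC_INIT */
    uint16_t crc_be, decoded;

    crc ^= crc16(0, client1, ETH_ALEN);   /* claim client1 */
    crc ^= crc16(0, client2, ETH_ALEN);   /* claim client2 */
    crc ^= crc16(0, client1, ETH_ALEN);   /* unclaim client1: the XOR cancels out */
    /* crc is now exactly crc16(0, client2, ETH_ALEN) again */

    /* ANNOUNCE: 4 fixed prefix bytes, CRC in network order in bytes 4..5 */
    memcpy(announce_mac, announce_prefix, 4);
    crc_be = htons(crc);
    memcpy(&announce_mac[4], &crc_be, 2);

    /* receiver side, as batadv_handle_announce() does it */
    memcpy(&crc_be, &announce_mac[4], 2);
    decoded = ntohs(crc_be);
    printf("table crc %04x, announced crc %04x\n", crc, decoded);
    return 0;
}

A receiving backbone gateway compares the decoded value against its own XOR sum for that VLAN and, on a mismatch, falls back to a full table refresh via batadv_bla_send_request() as shown above.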
@@ -650,88 +662,92 @@ static int handle_announce(struct bat_priv *bat_priv, } } - backbone_gw_free_ref(backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); return 1; } /* check for REQUEST frame, return 1 if handled */ -static int handle_request(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - uint8_t *backbone_addr, - struct ethhdr *ethhdr, short vid) +static int batadv_handle_request(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + uint8_t *backbone_addr, + struct ethhdr *ethhdr, short vid) { /* check for REQUEST frame */ - if (!compare_eth(backbone_addr, ethhdr->h_dest)) + if (!batadv_compare_eth(backbone_addr, ethhdr->h_dest)) return 0; /* sanity check, this should not happen on a normal switch, * we ignore it in this case. */ - if (!compare_eth(ethhdr->h_dest, primary_if->net_dev->dev_addr)) + if (!batadv_compare_eth(ethhdr->h_dest, primary_if->net_dev->dev_addr)) return 1; - bat_dbg(DBG_BLA, bat_priv, - "handle_request(): REQUEST vid %d (sent by %pM)...\n", - vid, ethhdr->h_source); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "handle_request(): REQUEST vid %d (sent by %pM)...\n", + vid, ethhdr->h_source); - bla_answer_request(bat_priv, primary_if, vid); + batadv_bla_answer_request(bat_priv, primary_if, vid); return 1; } /* check for UNCLAIM frame, return 1 if handled */ -static int handle_unclaim(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - uint8_t *backbone_addr, - uint8_t *claim_addr, short vid) +static int batadv_handle_unclaim(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + uint8_t *backbone_addr, + uint8_t *claim_addr, short vid) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; /* unclaim in any case if it is our own */ - if (primary_if && compare_eth(backbone_addr, - primary_if->net_dev->dev_addr)) - bla_send_claim(bat_priv, claim_addr, vid, CLAIM_TYPE_DEL); + if (primary_if && batadv_compare_eth(backbone_addr, + primary_if->net_dev->dev_addr)) + batadv_bla_send_claim(bat_priv, claim_addr, vid, + BATADV_CLAIM_TYPE_DEL); - backbone_gw = backbone_hash_find(bat_priv, backbone_addr, vid); + backbone_gw = batadv_backbone_hash_find(bat_priv, backbone_addr, vid); if (!backbone_gw) return 1; /* this must be an UNCLAIM frame */ - bat_dbg(DBG_BLA, bat_priv, - "handle_unclaim(): UNCLAIM %pM on vid %d (sent by %pM)...\n", - claim_addr, vid, backbone_gw->orig); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "handle_unclaim(): UNCLAIM %pM on vid %d (sent by %pM)...\n", + claim_addr, vid, backbone_gw->orig); - bla_del_claim(bat_priv, claim_addr, vid); - backbone_gw_free_ref(backbone_gw); + batadv_bla_del_claim(bat_priv, claim_addr, vid); + batadv_backbone_gw_free_ref(backbone_gw); return 1; } /* check for CLAIM frame, return 1 if handled */ -static int handle_claim(struct bat_priv *bat_priv, - struct hard_iface *primary_if, uint8_t *backbone_addr, - uint8_t *claim_addr, short vid) +static int batadv_handle_claim(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + uint8_t *backbone_addr, uint8_t *claim_addr, + short vid) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; /* register the gateway if not yet available, and add the claim. 
*/ - backbone_gw = bla_get_backbone_gw(bat_priv, backbone_addr, vid); + backbone_gw = batadv_bla_get_backbone_gw(bat_priv, backbone_addr, vid); if (unlikely(!backbone_gw)) return 1; /* this must be a CLAIM frame */ - bla_add_claim(bat_priv, claim_addr, vid, backbone_gw); - if (compare_eth(backbone_addr, primary_if->net_dev->dev_addr)) - bla_send_claim(bat_priv, claim_addr, vid, CLAIM_TYPE_ADD); + batadv_bla_add_claim(bat_priv, claim_addr, vid, backbone_gw); + if (batadv_compare_eth(backbone_addr, primary_if->net_dev->dev_addr)) + batadv_bla_send_claim(bat_priv, claim_addr, vid, + BATADV_CLAIM_TYPE_ADD); /* TODO: we could call something like tt_local_del() here. */ - backbone_gw_free_ref(backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); return 1; } /** + * batadv_check_claim_group * @bat_priv: the bat priv with all the soft interface information * @hw_src: the Hardware source in the ARP Header * @hw_dst: the Hardware destination in the ARP Header @@ -746,16 +762,16 @@ static int handle_claim(struct bat_priv *bat_priv, * 1 - if is a claim packet from another group * 0 - if it is not a claim packet */ -static int check_claim_group(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - uint8_t *hw_src, uint8_t *hw_dst, - struct ethhdr *ethhdr) +static int batadv_check_claim_group(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + uint8_t *hw_src, uint8_t *hw_dst, + struct ethhdr *ethhdr) { uint8_t *backbone_addr; - struct orig_node *orig_node; - struct bla_claim_dst *bla_dst, *bla_dst_own; + struct batadv_orig_node *orig_node; + struct batadv_bla_claim_dst *bla_dst, *bla_dst_own; - bla_dst = (struct bla_claim_dst *)hw_dst; + bla_dst = (struct batadv_bla_claim_dst *)hw_dst; bla_dst_own = &bat_priv->claim_dest; /* check if it is a claim packet in general */ @@ -767,12 +783,12 @@ static int check_claim_group(struct bat_priv *bat_priv, * otherwise assume it is in the hw_src */ switch (bla_dst->type) { - case CLAIM_TYPE_ADD: + case BATADV_CLAIM_TYPE_ADD: backbone_addr = hw_src; break; - case CLAIM_TYPE_REQUEST: - case CLAIM_TYPE_ANNOUNCE: - case CLAIM_TYPE_DEL: + case BATADV_CLAIM_TYPE_REQUEST: + case BATADV_CLAIM_TYPE_ANNOUNCE: + case BATADV_CLAIM_TYPE_DEL: backbone_addr = ethhdr->h_source; break; default: @@ -780,7 +796,7 @@ static int check_claim_group(struct bat_priv *bat_priv, } /* don't accept claim frames from ourselves */ - if (compare_eth(backbone_addr, primary_if->net_dev->dev_addr)) + if (batadv_compare_eth(backbone_addr, primary_if->net_dev->dev_addr)) return 0; /* if its already the same group, it is fine. */ @@ -788,7 +804,7 @@ static int check_claim_group(struct bat_priv *bat_priv, return 2; /* lets see if this originator is in our mesh */ - orig_node = orig_hash_find(bat_priv, backbone_addr); + orig_node = batadv_orig_hash_find(bat_priv, backbone_addr); /* dont accept claims from gateways which are not in * the same mesh or group. @@ -798,20 +814,19 @@ static int check_claim_group(struct bat_priv *bat_priv, /* if our mesh friends mac is bigger, use it for ourselves. 
*/ if (ntohs(bla_dst->group) > ntohs(bla_dst_own->group)) { - bat_dbg(DBG_BLA, bat_priv, - "taking other backbones claim group: %04x\n", - ntohs(bla_dst->group)); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "taking other backbones claim group: %04x\n", + ntohs(bla_dst->group)); bla_dst_own->group = bla_dst->group; } - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return 2; } -/** - * @bat_priv: the bat priv with all the soft interface information +/* @bat_priv: the bat priv with all the soft interface information * @skb: the frame to be checked * * Check if this is a claim frame, and process it accordingly. @@ -819,15 +834,15 @@ static int check_claim_group(struct bat_priv *bat_priv, * returns 1 if it was a claim frame, otherwise return 0 to * tell the callee that it can use the frame on its own. */ -static int bla_process_claim(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - struct sk_buff *skb) +static int batadv_bla_process_claim(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + struct sk_buff *skb) { struct ethhdr *ethhdr; struct vlan_ethhdr *vhdr; struct arphdr *arphdr; uint8_t *hw_src, *hw_dst; - struct bla_claim_dst *bla_dst; + struct batadv_bla_claim_dst *bla_dst; uint16_t proto; int headlen; short vid = -1; @@ -860,7 +875,6 @@ static int bla_process_claim(struct bat_priv *bat_priv, /* Check whether the ARP frame carries a valid * IP information */ - if (arphdr->ar_hrd != htons(ARPHRD_ETHER)) return 0; if (arphdr->ar_pro != htons(ETH_P_IP)) @@ -872,59 +886,62 @@ static int bla_process_claim(struct bat_priv *bat_priv, hw_src = (uint8_t *)arphdr + sizeof(struct arphdr); hw_dst = hw_src + ETH_ALEN + 4; - bla_dst = (struct bla_claim_dst *)hw_dst; + bla_dst = (struct batadv_bla_claim_dst *)hw_dst; /* check if it is a claim frame. */ - ret = check_claim_group(bat_priv, primary_if, hw_src, hw_dst, ethhdr); + ret = batadv_check_claim_group(bat_priv, primary_if, hw_src, hw_dst, + ethhdr); if (ret == 1) - bat_dbg(DBG_BLA, bat_priv, - "bla_process_claim(): received a claim frame from another group. From: %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n", - ethhdr->h_source, vid, hw_src, hw_dst); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_process_claim(): received a claim frame from another group. From: %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n", + ethhdr->h_source, vid, hw_src, hw_dst); if (ret < 2) return ret; /* become a backbone gw ourselves on this vlan if not happened yet */ - bla_update_own_backbone_gw(bat_priv, primary_if, vid); + batadv_bla_update_own_backbone_gw(bat_priv, primary_if, vid); /* check for the different types of claim frames ... 
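For orientation (this is an illustration, not part of the patch): batadv_check_claim_group() treats the 6-byte hardware destination of the claim ARP as a small structure - a 3-byte magic (0xff:0x43:0x05, set up later in batadv_bla_init()), a claim type byte and a 2-byte group id in network order - and backbone gateways of the same mesh converge on one group by always adopting the numerically larger id. The sketch below restates just that rule; the struct and function names are made up for illustration, and the authoritative, packed definition lives in the kernel's packet header.

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

/* rough shape of the claim destination overlaid on the destination MAC */
struct claim_dst {
    uint8_t  magic[3];   /* 0xff 0x43 0x05 */
    uint8_t  type;       /* CLAIM, UNCLAIM, ANNOUNCE or REQUEST */
    uint16_t group;      /* claim group id, network byte order */
};

/* group merge rule from batadv_check_claim_group(): the larger id wins */
static void merge_group(struct claim_dst *own, const struct claim_dst *other)
{
    if (ntohs(other->group) > ntohs(own->group))
        own->group = other->group;
}

int main(void)
{
    struct claim_dst own   = { { 0xff, 0x43, 0x05 }, 0, htons(0x1234) };
    struct claim_dst other = { { 0xff, 0x43, 0x05 }, 0, htons(0xbeef) };

    merge_group(&own, &other);
    printf("own group is now %04x\n", ntohs(own.group));  /* 0xbeef */
    return 0;
}

Because each node seeds its id from crc16() of its primary interface MAC (see batadv_bla_init() further down), gateways usually boot with different ids, and this rule lets them settle on a common one without any extra negotiation.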
*/ switch (bla_dst->type) { - case CLAIM_TYPE_ADD: - if (handle_claim(bat_priv, primary_if, hw_src, - ethhdr->h_source, vid)) + case BATADV_CLAIM_TYPE_ADD: + if (batadv_handle_claim(bat_priv, primary_if, hw_src, + ethhdr->h_source, vid)) return 1; break; - case CLAIM_TYPE_DEL: - if (handle_unclaim(bat_priv, primary_if, - ethhdr->h_source, hw_src, vid)) + case BATADV_CLAIM_TYPE_DEL: + if (batadv_handle_unclaim(bat_priv, primary_if, + ethhdr->h_source, hw_src, vid)) return 1; break; - case CLAIM_TYPE_ANNOUNCE: - if (handle_announce(bat_priv, hw_src, ethhdr->h_source, vid)) + case BATADV_CLAIM_TYPE_ANNOUNCE: + if (batadv_handle_announce(bat_priv, hw_src, ethhdr->h_source, + vid)) return 1; break; - case CLAIM_TYPE_REQUEST: - if (handle_request(bat_priv, primary_if, hw_src, ethhdr, vid)) + case BATADV_CLAIM_TYPE_REQUEST: + if (batadv_handle_request(bat_priv, primary_if, hw_src, ethhdr, + vid)) return 1; break; } - bat_dbg(DBG_BLA, bat_priv, - "bla_process_claim(): ERROR - this looks like a claim frame, but is useless. eth src %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n", - ethhdr->h_source, vid, hw_src, hw_dst); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_process_claim(): ERROR - this looks like a claim frame, but is useless. eth src %pM on vid %d ...(hw_src %pM, hw_dst %pM)\n", + ethhdr->h_source, vid, hw_src, hw_dst); return 1; } /* Check when we last heard from other nodes, and remove them in case of * a time out, or clean all backbone gws if now is set. */ -static void bla_purge_backbone_gw(struct bat_priv *bat_priv, int now) +static void batadv_bla_purge_backbone_gw(struct batadv_priv *bat_priv, int now) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; struct hlist_node *node, *node_tmp; struct hlist_head *head; - struct hashtable_t *hash; + struct batadv_hashtable *hash; spinlock_t *list_lock; /* protects write access to the hash lists */ int i; @@ -941,29 +958,30 @@ static void bla_purge_backbone_gw(struct bat_priv *bat_priv, int now) head, hash_entry) { if (now) goto purge_now; - if (!has_timed_out(backbone_gw->lasttime, - BLA_BACKBONE_TIMEOUT)) + if (!batadv_has_timed_out(backbone_gw->lasttime, + BATADV_BLA_BACKBONE_TIMEOUT)) continue; - bat_dbg(DBG_BLA, backbone_gw->bat_priv, - "bla_purge_backbone_gw(): backbone gw %pM timed out\n", - backbone_gw->orig); + batadv_dbg(BATADV_DBG_BLA, backbone_gw->bat_priv, + "bla_purge_backbone_gw(): backbone gw %pM timed out\n", + backbone_gw->orig); purge_now: /* don't wait for the pending request anymore */ if (atomic_read(&backbone_gw->request_sent)) atomic_dec(&bat_priv->bla_num_requests); - bla_del_backbone_claims(backbone_gw); + batadv_bla_del_backbone_claims(backbone_gw); hlist_del_rcu(node); - backbone_gw_free_ref(backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); } spin_unlock_bh(list_lock); } } /** + * batadv_bla_purge_claims * @bat_priv: the bat priv with all the soft interface information * @primary_if: the selected primary interface, may be NULL if now is set * @now: whether the whole hash shall be wiped now @@ -971,13 +989,14 @@ purge_now: * Check when we heard last time from our own claims, and remove them in case of * a time out, or clean all claims if now is set */ -static void bla_purge_claims(struct bat_priv *bat_priv, - struct hard_iface *primary_if, int now) +static void batadv_bla_purge_claims(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + int now) { - struct claim *claim; + struct batadv_claim *claim; struct hlist_node *node; struct hlist_head *head; - struct 
hashtable_t *hash; + struct batadv_hashtable *hash; int i; hash = bat_priv->claim_hash; @@ -991,42 +1010,42 @@ static void bla_purge_claims(struct bat_priv *bat_priv, hlist_for_each_entry_rcu(claim, node, head, hash_entry) { if (now) goto purge_now; - if (!compare_eth(claim->backbone_gw->orig, - primary_if->net_dev->dev_addr)) + if (!batadv_compare_eth(claim->backbone_gw->orig, + primary_if->net_dev->dev_addr)) continue; - if (!has_timed_out(claim->lasttime, - BLA_CLAIM_TIMEOUT)) + if (!batadv_has_timed_out(claim->lasttime, + BATADV_BLA_CLAIM_TIMEOUT)) continue; - bat_dbg(DBG_BLA, bat_priv, - "bla_purge_claims(): %pM, vid %d, time out\n", - claim->addr, claim->vid); + batadv_dbg(BATADV_DBG_BLA, bat_priv, + "bla_purge_claims(): %pM, vid %d, time out\n", + claim->addr, claim->vid); purge_now: - handle_unclaim(bat_priv, primary_if, - claim->backbone_gw->orig, - claim->addr, claim->vid); + batadv_handle_unclaim(bat_priv, primary_if, + claim->backbone_gw->orig, + claim->addr, claim->vid); } rcu_read_unlock(); } } /** + * batadv_bla_update_orig_address * @bat_priv: the bat priv with all the soft interface information * @primary_if: the new selected primary_if * @oldif: the old primary interface, may be NULL * * Update the backbone gateways when the own orig address changes. - * */ -void bla_update_orig_address(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - struct hard_iface *oldif) +void batadv_bla_update_orig_address(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + struct batadv_hard_iface *oldif) { - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; struct hlist_node *node; struct hlist_head *head; - struct hashtable_t *hash; + struct batadv_hashtable *hash; int i; /* reset bridge loop avoidance group id */ @@ -1034,8 +1053,8 @@ void bla_update_orig_address(struct bat_priv *bat_priv, htons(crc16(0, primary_if->net_dev->dev_addr, ETH_ALEN)); if (!oldif) { - bla_purge_claims(bat_priv, NULL, 1); - bla_purge_backbone_gw(bat_priv, 1); + batadv_bla_purge_claims(bat_priv, NULL, 1); + batadv_bla_purge_backbone_gw(bat_priv, 1); return; } @@ -1049,8 +1068,8 @@ void bla_update_orig_address(struct bat_priv *bat_priv, rcu_read_lock(); hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) { /* own orig still holds the old value. */ - if (!compare_eth(backbone_gw->orig, - oldif->net_dev->dev_addr)) + if (!batadv_compare_eth(backbone_gw->orig, + oldif->net_dev->dev_addr)) continue; memcpy(backbone_gw->orig, @@ -1058,7 +1077,7 @@ void bla_update_orig_address(struct bat_priv *bat_priv, /* send an announce frame so others will ask for our * claims and update their tables. 
*/ - bla_send_announce(bat_priv, backbone_gw); + batadv_bla_send_announce(bat_priv, backbone_gw); } rcu_read_unlock(); } @@ -1067,36 +1086,36 @@ void bla_update_orig_address(struct bat_priv *bat_priv, /* (re)start the timer */ -static void bla_start_timer(struct bat_priv *bat_priv) +static void batadv_bla_start_timer(struct batadv_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->bla_work, bla_periodic_work); - queue_delayed_work(bat_event_workqueue, &bat_priv->bla_work, - msecs_to_jiffies(BLA_PERIOD_LENGTH)); + INIT_DELAYED_WORK(&bat_priv->bla_work, batadv_bla_periodic_work); + queue_delayed_work(batadv_event_workqueue, &bat_priv->bla_work, + msecs_to_jiffies(BATADV_BLA_PERIOD_LENGTH)); } /* periodic work to do: * * purge structures when they are too old * * send announcements */ -static void bla_periodic_work(struct work_struct *work) +static void batadv_bla_periodic_work(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, bla_work); + struct batadv_priv *bat_priv; struct hlist_node *node; struct hlist_head *head; - struct backbone_gw *backbone_gw; - struct hashtable_t *hash; - struct hard_iface *primary_if; + struct batadv_backbone_gw *backbone_gw; + struct batadv_hashtable *hash; + struct batadv_hard_iface *primary_if; int i; - primary_if = primary_if_get_selected(bat_priv); + bat_priv = container_of(delayed_work, struct batadv_priv, bla_work); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; - bla_purge_claims(bat_priv, primary_if, 0); - bla_purge_backbone_gw(bat_priv, 0); + batadv_bla_purge_claims(bat_priv, primary_if, 0); + batadv_bla_purge_backbone_gw(bat_priv, 0); if (!atomic_read(&bat_priv->bridge_loop_avoidance)) goto out; @@ -1110,67 +1129,81 @@ static void bla_periodic_work(struct work_struct *work) rcu_read_lock(); hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) { - if (!compare_eth(backbone_gw->orig, - primary_if->net_dev->dev_addr)) + if (!batadv_compare_eth(backbone_gw->orig, + primary_if->net_dev->dev_addr)) continue; backbone_gw->lasttime = jiffies; - bla_send_announce(bat_priv, backbone_gw); + batadv_bla_send_announce(bat_priv, backbone_gw); } rcu_read_unlock(); } out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); - bla_start_timer(bat_priv); + batadv_bla_start_timer(bat_priv); } +/* The hash for claim and backbone hash receive the same key because they + * are getting initialized by hash_new with the same key. 
Reinitializing + * them with to different keys to allow nested locking without generating + * lockdep warnings + */ +static struct lock_class_key batadv_claim_hash_lock_class_key; +static struct lock_class_key batadv_backbone_hash_lock_class_key; + /* initialize all bla structures */ -int bla_init(struct bat_priv *bat_priv) +int batadv_bla_init(struct batadv_priv *bat_priv) { int i; uint8_t claim_dest[ETH_ALEN] = {0xff, 0x43, 0x05, 0x00, 0x00, 0x00}; - struct hard_iface *primary_if; + struct batadv_hard_iface *primary_if; - bat_dbg(DBG_BLA, bat_priv, "bla hash registering\n"); + batadv_dbg(BATADV_DBG_BLA, bat_priv, "bla hash registering\n"); /* setting claim destination address */ memcpy(&bat_priv->claim_dest.magic, claim_dest, 3); bat_priv->claim_dest.type = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (primary_if) { bat_priv->claim_dest.group = htons(crc16(0, primary_if->net_dev->dev_addr, ETH_ALEN)); - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } else { bat_priv->claim_dest.group = 0; /* will be set later */ } /* initialize the duplicate list */ - for (i = 0; i < DUPLIST_SIZE; i++) + for (i = 0; i < BATADV_DUPLIST_SIZE; i++) bat_priv->bcast_duplist[i].entrytime = - jiffies - msecs_to_jiffies(DUPLIST_TIMEOUT); + jiffies - msecs_to_jiffies(BATADV_DUPLIST_TIMEOUT); bat_priv->bcast_duplist_curr = 0; if (bat_priv->claim_hash) - return 1; + return 0; - bat_priv->claim_hash = hash_new(128); - bat_priv->backbone_hash = hash_new(32); + bat_priv->claim_hash = batadv_hash_new(128); + bat_priv->backbone_hash = batadv_hash_new(32); if (!bat_priv->claim_hash || !bat_priv->backbone_hash) - return -1; + return -ENOMEM; - bat_dbg(DBG_BLA, bat_priv, "bla hashes initialized\n"); + batadv_hash_set_lock_class(bat_priv->claim_hash, + &batadv_claim_hash_lock_class_key); + batadv_hash_set_lock_class(bat_priv->backbone_hash, + &batadv_backbone_hash_lock_class_key); - bla_start_timer(bat_priv); - return 1; + batadv_dbg(BATADV_DBG_BLA, bat_priv, "bla hashes initialized\n"); + + batadv_bla_start_timer(bat_priv); + return 0; } /** + * batadv_bla_check_bcast_duplist * @bat_priv: the bat priv with all the soft interface information * @bcast_packet: originator mac address * @hdr_size: maximum length of the frame @@ -1183,17 +1216,15 @@ int bla_init(struct bat_priv *bat_priv) * with a good chance that it is the same packet. If it is furthermore * sent by another host, drop it. We allow equal packets from * the same host however as this might be intended. - * - **/ - -int bla_check_bcast_duplist(struct bat_priv *bat_priv, - struct bcast_packet *bcast_packet, - int hdr_size) + */ +int batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, + struct batadv_bcast_packet *bcast_packet, + int hdr_size) { int i, length, curr; uint8_t *content; uint16_t crc; - struct bcast_duplist_entry *entry; + struct batadv_bcast_duplist_entry *entry; length = hdr_size - sizeof(*bcast_packet); content = (uint8_t *)bcast_packet; @@ -1202,20 +1233,21 @@ int bla_check_bcast_duplist(struct bat_priv *bat_priv, /* calculate the crc ... 
*/ crc = crc16(0, content, length); - for (i = 0 ; i < DUPLIST_SIZE; i++) { - curr = (bat_priv->bcast_duplist_curr + i) % DUPLIST_SIZE; + for (i = 0; i < BATADV_DUPLIST_SIZE; i++) { + curr = (bat_priv->bcast_duplist_curr + i) % BATADV_DUPLIST_SIZE; entry = &bat_priv->bcast_duplist[curr]; /* we can stop searching if the entry is too old ; * later entries will be even older */ - if (has_timed_out(entry->entrytime, DUPLIST_TIMEOUT)) + if (batadv_has_timed_out(entry->entrytime, + BATADV_DUPLIST_TIMEOUT)) break; if (entry->crc != crc) continue; - if (compare_eth(entry->orig, bcast_packet->orig)) + if (batadv_compare_eth(entry->orig, bcast_packet->orig)) continue; /* this entry seems to match: same crc, not too old, @@ -1224,7 +1256,8 @@ int bla_check_bcast_duplist(struct bat_priv *bat_priv, return 1; } /* not found, add a new entry (overwrite the oldest entry) */ - curr = (bat_priv->bcast_duplist_curr + DUPLIST_SIZE - 1) % DUPLIST_SIZE; + curr = (bat_priv->bcast_duplist_curr + BATADV_DUPLIST_SIZE - 1); + curr %= BATADV_DUPLIST_SIZE; entry = &bat_priv->bcast_duplist[curr]; entry->crc = crc; entry->entrytime = jiffies; @@ -1237,22 +1270,19 @@ int bla_check_bcast_duplist(struct bat_priv *bat_priv, -/** - * @bat_priv: the bat priv with all the soft interface information +/* @bat_priv: the bat priv with all the soft interface information * @orig: originator mac address * * check if the originator is a gateway for any VLAN ID. * * returns 1 if it is found, 0 otherwise - * */ - -int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, uint8_t *orig) +int batadv_bla_is_backbone_gw_orig(struct batadv_priv *bat_priv, uint8_t *orig) { - struct hashtable_t *hash = bat_priv->backbone_hash; + struct batadv_hashtable *hash = bat_priv->backbone_hash; struct hlist_head *head; struct hlist_node *node; - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; int i; if (!atomic_read(&bat_priv->bridge_loop_avoidance)) @@ -1266,7 +1296,7 @@ int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, uint8_t *orig) rcu_read_lock(); hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) { - if (compare_eth(backbone_gw->orig, orig)) { + if (batadv_compare_eth(backbone_gw->orig, orig)) { rcu_read_unlock(); return 1; } @@ -1279,6 +1309,7 @@ int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, uint8_t *orig) /** + * batadv_bla_is_backbone_gw * @skb: the frame to be checked * @orig_node: the orig_node of the frame * @hdr_size: maximum length of the frame @@ -1286,14 +1317,13 @@ int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, uint8_t *orig) * bla_is_backbone_gw inspects the skb for the VLAN ID and returns 1 * if the orig_node is also a gateway on the soft interface, otherwise it * returns 0. 
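To summarize the duplicate check above (illustration only, not part of the patch): batadv_bla_check_bcast_duplist() keeps a small ring of recently seen broadcasts - a crc16 of the payload, the originator and a timestamp - walks it newest-first, stops at the first expired slot, and drops a packet only when the same payload arrives again from a different host. The hunk breaks off before the bookkeeping that records the originator and advances the ring index, so the sketch below fills that in the obvious way; the sizes, timeout and all names here are illustrative, not the kernel's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DUPLIST_SIZE    16      /* illustrative stand-in for BATADV_DUPLIST_SIZE */
#define DUPLIST_TIMEOUT 500     /* ms, illustrative */
#define ETH_ALEN        6

struct dup_entry {
    uint16_t crc;               /* checksum of the broadcast payload */
    uint8_t  orig[ETH_ALEN];    /* who sent this copy */
    uint64_t entrytime;         /* ms timestamp, stands in for jiffies */
};

struct dup_state {
    struct dup_entry entry[DUPLIST_SIZE];
    int curr;                   /* most recently written slot */
};

/* returns true if the broadcast was already forwarded by another host */
static bool bcast_is_duplicate(struct dup_state *s, uint16_t crc,
                               const uint8_t *orig, uint64_t now_ms)
{
    int i, idx;
    struct dup_entry *e;

    for (i = 0; i < DUPLIST_SIZE; i++) {
        idx = (s->curr + i) % DUPLIST_SIZE;
        e = &s->entry[idx];

        if (now_ms - e->entrytime > DUPLIST_TIMEOUT)
            break;                      /* later entries are even older */
        if (e->crc != crc)
            continue;
        if (memcmp(e->orig, orig, ETH_ALEN) == 0)
            continue;                   /* same host resending: allowed */
        return true;                    /* same payload from another host: drop */
    }

    /* not found: overwrite the oldest slot, which sits just behind curr */
    idx = (s->curr + DUPLIST_SIZE - 1) % DUPLIST_SIZE;
    e = &s->entry[idx];
    e->crc = crc;
    e->entrytime = now_ms;
    memcpy(e->orig, orig, ETH_ALEN);
    s->curr = idx;
    return false;
}

int main(void)
{
    struct dup_state s = { .curr = 0 };
    const uint8_t gw1[ETH_ALEN] = { 0x02, 1, 2, 3, 4, 5 };
    const uint8_t gw2[ETH_ALEN] = { 0x02, 9, 8, 7, 6, 5 };

    printf("%d\n", bcast_is_duplicate(&s, 0xabcd, gw1, 1000)); /* 0: recorded */
    printf("%d\n", bcast_is_duplicate(&s, 0xabcd, gw2, 1100)); /* 1: duplicate */
    return 0;
}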
- * */ -int bla_is_backbone_gw(struct sk_buff *skb, - struct orig_node *orig_node, int hdr_size) +int batadv_bla_is_backbone_gw(struct sk_buff *skb, + struct batadv_orig_node *orig_node, int hdr_size) { struct ethhdr *ethhdr; struct vlan_ethhdr *vhdr; - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; short vid = -1; if (!atomic_read(&orig_node->bat_priv->bridge_loop_avoidance)) @@ -1315,39 +1345,39 @@ int bla_is_backbone_gw(struct sk_buff *skb, } /* see if this originator is a backbone gw for this VLAN */ - - backbone_gw = backbone_hash_find(orig_node->bat_priv, - orig_node->orig, vid); + backbone_gw = batadv_backbone_hash_find(orig_node->bat_priv, + orig_node->orig, vid); if (!backbone_gw) return 0; - backbone_gw_free_ref(backbone_gw); + batadv_backbone_gw_free_ref(backbone_gw); return 1; } /* free all bla structures (for softinterface free or module unload) */ -void bla_free(struct bat_priv *bat_priv) +void batadv_bla_free(struct batadv_priv *bat_priv) { - struct hard_iface *primary_if; + struct batadv_hard_iface *primary_if; cancel_delayed_work_sync(&bat_priv->bla_work); - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (bat_priv->claim_hash) { - bla_purge_claims(bat_priv, primary_if, 1); - hash_destroy(bat_priv->claim_hash); + batadv_bla_purge_claims(bat_priv, primary_if, 1); + batadv_hash_destroy(bat_priv->claim_hash); bat_priv->claim_hash = NULL; } if (bat_priv->backbone_hash) { - bla_purge_backbone_gw(bat_priv, 1); - hash_destroy(bat_priv->backbone_hash); + batadv_bla_purge_backbone_gw(bat_priv, 1); + batadv_hash_destroy(bat_priv->backbone_hash); bat_priv->backbone_hash = NULL; } if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } /** + * batadv_bla_rx * @bat_priv: the bat priv with all the soft interface information * @skb: the frame to be checked * @vid: the VLAN ID of the frame @@ -1360,19 +1390,18 @@ void bla_free(struct bat_priv *bat_priv) * in these cases, the skb is further handled by this function and * returns 1, otherwise it returns 0 and the caller shall further * process the skb. - * */ -int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid, - bool is_bcast) +int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid, + bool is_bcast) { struct ethhdr *ethhdr; - struct claim search_claim, *claim = NULL; - struct hard_iface *primary_if; + struct batadv_claim search_claim, *claim = NULL; + struct batadv_hard_iface *primary_if; int ret; ethhdr = (struct ethhdr *)skb_mac_header(skb); - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto handled; @@ -1387,21 +1416,21 @@ int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid, memcpy(search_claim.addr, ethhdr->h_source, ETH_ALEN); search_claim.vid = vid; - claim = claim_hash_find(bat_priv, &search_claim); + claim = batadv_claim_hash_find(bat_priv, &search_claim); if (!claim) { /* possible optimization: race for a claim */ /* No claim exists yet, claim it for us! */ - handle_claim(bat_priv, primary_if, - primary_if->net_dev->dev_addr, - ethhdr->h_source, vid); + batadv_handle_claim(bat_priv, primary_if, + primary_if->net_dev->dev_addr, + ethhdr->h_source, vid); goto allow; } /* if it is our own claim ... */ - if (compare_eth(claim->backbone_gw->orig, - primary_if->net_dev->dev_addr)) { + if (batadv_compare_eth(claim->backbone_gw->orig, + primary_if->net_dev->dev_addr)) { /* ... 
allow it in any case */ claim->lasttime = jiffies; goto allow; @@ -1421,13 +1450,13 @@ int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid, * send a claim and update the claim table * immediately. */ - handle_claim(bat_priv, primary_if, - primary_if->net_dev->dev_addr, - ethhdr->h_source, vid); + batadv_handle_claim(bat_priv, primary_if, + primary_if->net_dev->dev_addr, + ethhdr->h_source, vid); goto allow; } allow: - bla_update_own_backbone_gw(bat_priv, primary_if, vid); + batadv_bla_update_own_backbone_gw(bat_priv, primary_if, vid); ret = 0; goto out; @@ -1437,13 +1466,14 @@ handled: out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (claim) - claim_free_ref(claim); + batadv_claim_free_ref(claim); return ret; } /** + * batadv_bla_tx * @bat_priv: the bat priv with all the soft interface information * @skb: the frame to be checked * @vid: the VLAN ID of the frame @@ -1455,16 +1485,15 @@ out: * in these cases, the skb is further handled by this function and * returns 1, otherwise it returns 0 and the caller shall further * process the skb. - * */ -int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid) +int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid) { struct ethhdr *ethhdr; - struct claim search_claim, *claim = NULL; - struct hard_iface *primary_if; + struct batadv_claim search_claim, *claim = NULL; + struct batadv_hard_iface *primary_if; int ret = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; @@ -1474,7 +1503,7 @@ int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid) /* in VLAN case, the mac header might not be set. */ skb_reset_mac_header(skb); - if (bla_process_claim(bat_priv, primary_if, skb)) + if (batadv_bla_process_claim(bat_priv, primary_if, skb)) goto handled; ethhdr = (struct ethhdr *)skb_mac_header(skb); @@ -1487,21 +1516,21 @@ int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid) memcpy(search_claim.addr, ethhdr->h_source, ETH_ALEN); search_claim.vid = vid; - claim = claim_hash_find(bat_priv, &search_claim); + claim = batadv_claim_hash_find(bat_priv, &search_claim); /* if no claim exists, allow it. */ if (!claim) goto allow; /* check if we are responsible. */ - if (compare_eth(claim->backbone_gw->orig, - primary_if->net_dev->dev_addr)) { + if (batadv_compare_eth(claim->backbone_gw->orig, + primary_if->net_dev->dev_addr)) { /* if yes, the client has roamed and we have * to unclaim it. 
*/ - handle_unclaim(bat_priv, primary_if, - primary_if->net_dev->dev_addr, - ethhdr->h_source, vid); + batadv_handle_unclaim(bat_priv, primary_if, + primary_if->net_dev->dev_addr, + ethhdr->h_source, vid); goto allow; } @@ -1518,33 +1547,34 @@ int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid) goto allow; } allow: - bla_update_own_backbone_gw(bat_priv, primary_if, vid); + batadv_bla_update_own_backbone_gw(bat_priv, primary_if, vid); ret = 0; goto out; handled: ret = 1; out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (claim) - claim_free_ref(claim); + batadv_claim_free_ref(claim); return ret; } -int bla_claim_table_seq_print_text(struct seq_file *seq, void *offset) +int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->claim_hash; - struct claim *claim; - struct hard_iface *primary_if; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hashtable *hash = bat_priv->claim_hash; + struct batadv_claim *claim; + struct batadv_hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; uint32_t i; bool is_own; int ret = 0; + uint8_t *primary_addr; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { ret = seq_printf(seq, "BATMAN mesh %s disabled - please specify interfaces to enable it\n", @@ -1552,16 +1582,17 @@ int bla_claim_table_seq_print_text(struct seq_file *seq, void *offset) goto out; } - if (primary_if->if_status != IF_ACTIVE) { + if (primary_if->if_status != BATADV_IF_ACTIVE) { ret = seq_printf(seq, "BATMAN mesh %s disabled - primary interface not active\n", net_dev->name); goto out; } + primary_addr = primary_if->net_dev->dev_addr; seq_printf(seq, "Claims announced for the mesh %s (orig %pM, group id %04x)\n", - net_dev->name, primary_if->net_dev->dev_addr, + net_dev->name, primary_addr, ntohs(bat_priv->claim_dest.group)); seq_printf(seq, " %-17s %-5s %-17s [o] (%-4s)\n", "Client", "VID", "Originator", "CRC"); @@ -1570,8 +1601,8 @@ int bla_claim_table_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(claim, node, head, hash_entry) { - is_own = compare_eth(claim->backbone_gw->orig, - primary_if->net_dev->dev_addr); + is_own = batadv_compare_eth(claim->backbone_gw->orig, + primary_addr); seq_printf(seq, " * %pM on % 5d by %pM [%c] (%04x)\n", claim->addr, claim->vid, claim->backbone_gw->orig, @@ -1582,6 +1613,6 @@ int bla_claim_table_seq_print_text(struct seq_file *seq, void *offset) } out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } diff --git a/net/batman-adv/bridge_loop_avoidance.h b/net/batman-adv/bridge_loop_avoidance.h index dc5227b398d4..563cfbf94a7f 100644 --- a/net/batman-adv/bridge_loop_avoidance.h +++ b/net/batman-adv/bridge_loop_avoidance.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2011-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2012 B.A.T.M.A.N. 
contributors: * * Simon Wunderlich * @@ -16,81 +15,84 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_BLA_H_ #define _NET_BATMAN_ADV_BLA_H_ #ifdef CONFIG_BATMAN_ADV_BLA -int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid, - bool is_bcast); -int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid); -int bla_is_backbone_gw(struct sk_buff *skb, - struct orig_node *orig_node, int hdr_size); -int bla_claim_table_seq_print_text(struct seq_file *seq, void *offset); -int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, uint8_t *orig); -int bla_check_bcast_duplist(struct bat_priv *bat_priv, - struct bcast_packet *bcast_packet, int hdr_size); -void bla_update_orig_address(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - struct hard_iface *oldif); -int bla_init(struct bat_priv *bat_priv); -void bla_free(struct bat_priv *bat_priv); +int batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid, + bool is_bcast); +int batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb, short vid); +int batadv_bla_is_backbone_gw(struct sk_buff *skb, + struct batadv_orig_node *orig_node, int hdr_size); +int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset); +int batadv_bla_is_backbone_gw_orig(struct batadv_priv *bat_priv, uint8_t *orig); +int batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, + struct batadv_bcast_packet *bcast_packet, + int hdr_size); +void batadv_bla_update_orig_address(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + struct batadv_hard_iface *oldif); +int batadv_bla_init(struct batadv_priv *bat_priv); +void batadv_bla_free(struct batadv_priv *bat_priv); -#define BLA_CRC_INIT 0 +#define BATADV_BLA_CRC_INIT 0 #else /* ifdef CONFIG_BATMAN_ADV_BLA */ -static inline int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, - short vid, bool is_bcast) +static inline int batadv_bla_rx(struct batadv_priv *bat_priv, + struct sk_buff *skb, short vid, + bool is_bcast) { return 0; } -static inline int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, - short vid) +static inline int batadv_bla_tx(struct batadv_priv *bat_priv, + struct sk_buff *skb, short vid) { return 0; } -static inline int bla_is_backbone_gw(struct sk_buff *skb, - struct orig_node *orig_node, - int hdr_size) +static inline int batadv_bla_is_backbone_gw(struct sk_buff *skb, + struct batadv_orig_node *orig_node, + int hdr_size) { return 0; } -static inline int bla_claim_table_seq_print_text(struct seq_file *seq, - void *offset) +static inline int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, + void *offset) { return 0; } -static inline int bla_is_backbone_gw_orig(struct bat_priv *bat_priv, - uint8_t *orig) +static inline int batadv_bla_is_backbone_gw_orig(struct batadv_priv *bat_priv, + uint8_t *orig) { return 0; } -static inline int bla_check_bcast_duplist(struct bat_priv *bat_priv, - struct bcast_packet *bcast_packet, - int hdr_size) +static inline int +batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, + struct batadv_bcast_packet *bcast_packet, + int hdr_size) { return 0; } -static inline void bla_update_orig_address(struct bat_priv *bat_priv, - struct hard_iface *primary_if, - struct hard_iface *oldif) +static inline void +batadv_bla_update_orig_address(struct batadv_priv *bat_priv, + struct batadv_hard_iface *primary_if, + struct batadv_hard_iface 
*oldif) { } -static inline int bla_init(struct bat_priv *bat_priv) +static inline int batadv_bla_init(struct batadv_priv *bat_priv) { return 1; } -static inline void bla_free(struct bat_priv *bat_priv) +static inline void batadv_bla_free(struct batadv_priv *bat_priv) { } diff --git a/net/batman-adv/debugfs.c b/net/batman-adv/debugfs.c new file mode 100644 index 000000000000..34fbb1667bcd --- /dev/null +++ b/net/batman-adv/debugfs.c @@ -0,0 +1,409 @@ +/* Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: + * + * Marek Lindner + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA + * 02110-1301, USA + */ + +#include "main.h" + +#include <linux/debugfs.h> + +#include "debugfs.h" +#include "translation-table.h" +#include "originator.h" +#include "hard-interface.h" +#include "gateway_common.h" +#include "gateway_client.h" +#include "soft-interface.h" +#include "vis.h" +#include "icmp_socket.h" +#include "bridge_loop_avoidance.h" + +static struct dentry *batadv_debugfs; + +#ifdef CONFIG_BATMAN_ADV_DEBUG +#define BATADV_LOG_BUFF_MASK (batadv_log_buff_len - 1) + +static const int batadv_log_buff_len = BATADV_LOG_BUF_LEN; + +static char *batadv_log_char_addr(struct batadv_debug_log *debug_log, + size_t idx) +{ + return &debug_log->log_buff[idx & BATADV_LOG_BUFF_MASK]; +} + +static void batadv_emit_log_char(struct batadv_debug_log *debug_log, char c) +{ + char *char_addr; + + char_addr = batadv_log_char_addr(debug_log, debug_log->log_end); + *char_addr = c; + debug_log->log_end++; + + if (debug_log->log_end - debug_log->log_start > batadv_log_buff_len) + debug_log->log_start = debug_log->log_end - batadv_log_buff_len; +} + +__printf(2, 3) +static int batadv_fdebug_log(struct batadv_debug_log *debug_log, + const char *fmt, ...) +{ + va_list args; + static char debug_log_buf[256]; + char *p; + + if (!debug_log) + return 0; + + spin_lock_bh(&debug_log->lock); + va_start(args, fmt); + vscnprintf(debug_log_buf, sizeof(debug_log_buf), fmt, args); + va_end(args); + + for (p = debug_log_buf; *p != 0; p++) + batadv_emit_log_char(debug_log, *p); + + spin_unlock_bh(&debug_log->lock); + + wake_up(&debug_log->queue_wait); + + return 0; +} + +int batadv_debug_log(struct batadv_priv *bat_priv, const char *fmt, ...) 
+{ + va_list args; + char tmp_log_buf[256]; + + va_start(args, fmt); + vscnprintf(tmp_log_buf, sizeof(tmp_log_buf), fmt, args); + batadv_fdebug_log(bat_priv->debug_log, "[%10u] %s", + jiffies_to_msecs(jiffies), tmp_log_buf); + va_end(args); + + return 0; +} + +static int batadv_log_open(struct inode *inode, struct file *file) +{ + nonseekable_open(inode, file); + file->private_data = inode->i_private; + batadv_inc_module_count(); + return 0; +} + +static int batadv_log_release(struct inode *inode, struct file *file) +{ + batadv_dec_module_count(); + return 0; +} + +static int batadv_log_empty(struct batadv_debug_log *debug_log) +{ + return !(debug_log->log_start - debug_log->log_end); +} + +static ssize_t batadv_log_read(struct file *file, char __user *buf, + size_t count, loff_t *ppos) +{ + struct batadv_priv *bat_priv = file->private_data; + struct batadv_debug_log *debug_log = bat_priv->debug_log; + int error, i = 0; + char *char_addr; + char c; + + if ((file->f_flags & O_NONBLOCK) && batadv_log_empty(debug_log)) + return -EAGAIN; + + if (!buf) + return -EINVAL; + + if (count == 0) + return 0; + + if (!access_ok(VERIFY_WRITE, buf, count)) + return -EFAULT; + + error = wait_event_interruptible(debug_log->queue_wait, + (!batadv_log_empty(debug_log))); + + if (error) + return error; + + spin_lock_bh(&debug_log->lock); + + while ((!error) && (i < count) && + (debug_log->log_start != debug_log->log_end)) { + char_addr = batadv_log_char_addr(debug_log, + debug_log->log_start); + c = *char_addr; + + debug_log->log_start++; + + spin_unlock_bh(&debug_log->lock); + + error = __put_user(c, buf); + + spin_lock_bh(&debug_log->lock); + + buf++; + i++; + + } + + spin_unlock_bh(&debug_log->lock); + + if (!error) + return i; + + return error; +} + +static unsigned int batadv_log_poll(struct file *file, poll_table *wait) +{ + struct batadv_priv *bat_priv = file->private_data; + struct batadv_debug_log *debug_log = bat_priv->debug_log; + + poll_wait(file, &debug_log->queue_wait, wait); + + if (!batadv_log_empty(debug_log)) + return POLLIN | POLLRDNORM; + + return 0; +} + +static const struct file_operations batadv_log_fops = { + .open = batadv_log_open, + .release = batadv_log_release, + .read = batadv_log_read, + .poll = batadv_log_poll, + .llseek = no_llseek, +}; + +static int batadv_debug_log_setup(struct batadv_priv *bat_priv) +{ + struct dentry *d; + + if (!bat_priv->debug_dir) + goto err; + + bat_priv->debug_log = kzalloc(sizeof(*bat_priv->debug_log), GFP_ATOMIC); + if (!bat_priv->debug_log) + goto err; + + spin_lock_init(&bat_priv->debug_log->lock); + init_waitqueue_head(&bat_priv->debug_log->queue_wait); + + d = debugfs_create_file("log", S_IFREG | S_IRUSR, + bat_priv->debug_dir, bat_priv, + &batadv_log_fops); + if (!d) + goto err; + + return 0; + +err: + return -ENOMEM; +} + +static void batadv_debug_log_cleanup(struct batadv_priv *bat_priv) +{ + kfree(bat_priv->debug_log); + bat_priv->debug_log = NULL; +} +#else /* CONFIG_BATMAN_ADV_DEBUG */ +static int batadv_debug_log_setup(struct batadv_priv *bat_priv) +{ + bat_priv->debug_log = NULL; + return 0; +} + +static void batadv_debug_log_cleanup(struct batadv_priv *bat_priv) +{ + return; +} +#endif + +static int batadv_algorithms_open(struct inode *inode, struct file *file) +{ + return single_open(file, batadv_algo_seq_print_text, NULL); +} + +static int batadv_originators_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_orig_seq_print_text, 
net_dev); +} + +static int batadv_gateways_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_gw_client_seq_print_text, net_dev); +} + +static int batadv_transtable_global_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_tt_global_seq_print_text, net_dev); +} + +#ifdef CONFIG_BATMAN_ADV_BLA +static int batadv_bla_claim_table_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_bla_claim_table_seq_print_text, + net_dev); +} +#endif + +static int batadv_transtable_local_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_tt_local_seq_print_text, net_dev); +} + +static int batadv_vis_data_open(struct inode *inode, struct file *file) +{ + struct net_device *net_dev = (struct net_device *)inode->i_private; + return single_open(file, batadv_vis_seq_print_text, net_dev); +} + +struct batadv_debuginfo { + struct attribute attr; + const struct file_operations fops; +}; + +#define BATADV_DEBUGINFO(_name, _mode, _open) \ +struct batadv_debuginfo batadv_debuginfo_##_name = { \ + .attr = { .name = __stringify(_name), \ + .mode = _mode, }, \ + .fops = { .owner = THIS_MODULE, \ + .open = _open, \ + .read = seq_read, \ + .llseek = seq_lseek, \ + .release = single_release, \ + } \ +}; + +static BATADV_DEBUGINFO(routing_algos, S_IRUGO, batadv_algorithms_open); +static BATADV_DEBUGINFO(originators, S_IRUGO, batadv_originators_open); +static BATADV_DEBUGINFO(gateways, S_IRUGO, batadv_gateways_open); +static BATADV_DEBUGINFO(transtable_global, S_IRUGO, + batadv_transtable_global_open); +#ifdef CONFIG_BATMAN_ADV_BLA +static BATADV_DEBUGINFO(bla_claim_table, S_IRUGO, batadv_bla_claim_table_open); +#endif +static BATADV_DEBUGINFO(transtable_local, S_IRUGO, + batadv_transtable_local_open); +static BATADV_DEBUGINFO(vis_data, S_IRUGO, batadv_vis_data_open); + +static struct batadv_debuginfo *batadv_mesh_debuginfos[] = { + &batadv_debuginfo_originators, + &batadv_debuginfo_gateways, + &batadv_debuginfo_transtable_global, +#ifdef CONFIG_BATMAN_ADV_BLA + &batadv_debuginfo_bla_claim_table, +#endif + &batadv_debuginfo_transtable_local, + &batadv_debuginfo_vis_data, + NULL, +}; + +void batadv_debugfs_init(void) +{ + struct batadv_debuginfo *bat_debug; + struct dentry *file; + + batadv_debugfs = debugfs_create_dir(BATADV_DEBUGFS_SUBDIR, NULL); + if (batadv_debugfs == ERR_PTR(-ENODEV)) + batadv_debugfs = NULL; + + if (!batadv_debugfs) + goto out; + + bat_debug = &batadv_debuginfo_routing_algos; + file = debugfs_create_file(bat_debug->attr.name, + S_IFREG | bat_debug->attr.mode, + batadv_debugfs, NULL, &bat_debug->fops); + if (!file) + pr_err("Can't add debugfs file: %s\n", bat_debug->attr.name); + +out: + return; +} + +void batadv_debugfs_destroy(void) +{ + if (batadv_debugfs) { + debugfs_remove_recursive(batadv_debugfs); + batadv_debugfs = NULL; + } +} + +int batadv_debugfs_add_meshif(struct net_device *dev) +{ + struct batadv_priv *bat_priv = netdev_priv(dev); + struct batadv_debuginfo **bat_debug; + struct dentry *file; + + if (!batadv_debugfs) + goto out; + + bat_priv->debug_dir = debugfs_create_dir(dev->name, batadv_debugfs); + if (!bat_priv->debug_dir) + goto out; + + if (batadv_socket_setup(bat_priv) < 0) + goto rem_attr; 
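Stepping back from this hunk for a moment (illustration only, not part of the patch): the "log" file wired up by batadv_debug_log_setup() is backed by the ring buffer driven by batadv_emit_log_char() and drained by batadv_log_read() above. Writer and reader keep counters that only ever grow; positions are reduced with a power-of-two mask on access, and when the writer laps the reader the oldest characters are silently discarded by pushing log_start forward. The standalone sketch below shows just that indexing scheme; the buffer size and names are illustrative, the real size is BATADV_LOG_BUF_LEN.

#include <stdio.h>

#define LOG_BUF_LEN  16                 /* must be a power of two */
#define LOG_BUF_MASK (LOG_BUF_LEN - 1)

struct debug_log {
    char buf[LOG_BUF_LEN];
    unsigned long log_start;            /* next char the reader consumes */
    unsigned long log_end;              /* next char the writer fills */
};

/* same scheme as batadv_emit_log_char() */
static void emit_log_char(struct debug_log *log, char c)
{
    log->buf[log->log_end & LOG_BUF_MASK] = c;
    log->log_end++;

    if (log->log_end - log->log_start > LOG_BUF_LEN)
        log->log_start = log->log_end - LOG_BUF_LEN;
}

int main(void)
{
    struct debug_log log = { .log_start = 0, .log_end = 0 };
    const char *msg = "a message long enough to overflow the tiny buffer";

    while (*msg)
        emit_log_char(&log, *msg++);

    /* drain what survived, the way batadv_log_read() does */
    while (log.log_start != log.log_end)
        putchar(log.buf[log.log_start++ & LOG_BUF_MASK]);
    putchar('\n');
    return 0;
}

Masking instead of wrapping the counters keeps the "buffer empty" test a plain equality check and the overflow handling a single subtraction, which is why the BATADV_LOG_BUFF_MASK trick requires a power-of-two buffer length.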
+ + if (batadv_debug_log_setup(bat_priv) < 0) + goto rem_attr; + + for (bat_debug = batadv_mesh_debuginfos; *bat_debug; ++bat_debug) { + file = debugfs_create_file(((*bat_debug)->attr).name, + S_IFREG | ((*bat_debug)->attr).mode, + bat_priv->debug_dir, + dev, &(*bat_debug)->fops); + if (!file) { + batadv_err(dev, "Can't add debugfs file: %s/%s\n", + dev->name, ((*bat_debug)->attr).name); + goto rem_attr; + } + } + + return 0; +rem_attr: + debugfs_remove_recursive(bat_priv->debug_dir); + bat_priv->debug_dir = NULL; +out: +#ifdef CONFIG_DEBUG_FS + return -ENOMEM; +#else + return 0; +#endif /* CONFIG_DEBUG_FS */ +} + +void batadv_debugfs_del_meshif(struct net_device *dev) +{ + struct batadv_priv *bat_priv = netdev_priv(dev); + + batadv_debug_log_cleanup(bat_priv); + + if (batadv_debugfs) { + debugfs_remove_recursive(bat_priv->debug_dir); + bat_priv->debug_dir = NULL; + } +} diff --git a/net/batman-adv/bat_debugfs.h b/net/batman-adv/debugfs.h index d605c6746428..3319e1f21f55 100644 --- a/net/batman-adv/bat_debugfs.h +++ b/net/batman-adv/debugfs.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: * * Marek Lindner * @@ -16,18 +15,16 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ - #ifndef _NET_BATMAN_ADV_DEBUGFS_H_ #define _NET_BATMAN_ADV_DEBUGFS_H_ -#define DEBUGFS_BAT_SUBDIR "batman_adv" +#define BATADV_DEBUGFS_SUBDIR "batman_adv" -void debugfs_init(void); -void debugfs_destroy(void); -int debugfs_add_meshif(struct net_device *dev); -void debugfs_del_meshif(struct net_device *dev); +void batadv_debugfs_init(void); +void batadv_debugfs_destroy(void); +int batadv_debugfs_add_meshif(struct net_device *dev); +void batadv_debugfs_del_meshif(struct net_device *dev); #endif /* _NET_BATMAN_ADV_DEBUGFS_H_ */ diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c index 47f7186dcefc..b421cc49d2cd 100644 --- a/net/batman-adv/gateway_client.c +++ b/net/batman-adv/gateway_client.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner * @@ -16,11 +15,10 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" -#include "bat_sysfs.h" +#include "sysfs.h" #include "gateway_client.h" #include "gateway_common.h" #include "hard-interface.h" @@ -33,19 +31,21 @@ #include <linux/if_vlan.h> /* This is the offset of the options field in a dhcp packet starting at - * the beginning of the dhcp header */ -#define DHCP_OPTIONS_OFFSET 240 -#define DHCP_REQUEST 3 + * the beginning of the dhcp header + */ +#define BATADV_DHCP_OPTIONS_OFFSET 240 +#define BATADV_DHCP_REQUEST 3 -static void gw_node_free_ref(struct gw_node *gw_node) +static void batadv_gw_node_free_ref(struct batadv_gw_node *gw_node) { if (atomic_dec_and_test(&gw_node->refcount)) kfree_rcu(gw_node, rcu); } -static struct gw_node *gw_get_selected_gw_node(struct bat_priv *bat_priv) +static struct batadv_gw_node * +batadv_gw_get_selected_gw_node(struct batadv_priv *bat_priv) { - struct gw_node *gw_node; + struct batadv_gw_node *gw_node; rcu_read_lock(); gw_node = rcu_dereference(bat_priv->curr_gw); @@ -60,12 +60,13 @@ out: return gw_node; } -struct orig_node *gw_get_selected_orig(struct bat_priv *bat_priv) +struct batadv_orig_node * +batadv_gw_get_selected_orig(struct batadv_priv *bat_priv) { - struct gw_node *gw_node; - struct orig_node *orig_node = NULL; + struct batadv_gw_node *gw_node; + struct batadv_orig_node *orig_node = NULL; - gw_node = gw_get_selected_gw_node(bat_priv); + gw_node = batadv_gw_get_selected_gw_node(bat_priv); if (!gw_node) goto out; @@ -81,13 +82,14 @@ unlock: rcu_read_unlock(); out: if (gw_node) - gw_node_free_ref(gw_node); + batadv_gw_node_free_ref(gw_node); return orig_node; } -static void gw_select(struct bat_priv *bat_priv, struct gw_node *new_gw_node) +static void batadv_gw_select(struct batadv_priv *bat_priv, + struct batadv_gw_node *new_gw_node) { - struct gw_node *curr_gw_node; + struct batadv_gw_node *curr_gw_node; spin_lock_bh(&bat_priv->gw_list_lock); @@ -98,31 +100,34 @@ static void gw_select(struct bat_priv *bat_priv, struct gw_node *new_gw_node) rcu_assign_pointer(bat_priv->curr_gw, new_gw_node); if (curr_gw_node) - gw_node_free_ref(curr_gw_node); + batadv_gw_node_free_ref(curr_gw_node); spin_unlock_bh(&bat_priv->gw_list_lock); } -void gw_deselect(struct bat_priv *bat_priv) +void batadv_gw_deselect(struct batadv_priv *bat_priv) { atomic_set(&bat_priv->gw_reselect, 1); } -static struct gw_node *gw_get_best_gw_node(struct bat_priv *bat_priv) +static struct batadv_gw_node * +batadv_gw_get_best_gw_node(struct batadv_priv *bat_priv) { - struct neigh_node *router; + struct batadv_neigh_node *router; struct hlist_node *node; - struct gw_node *gw_node, *curr_gw = NULL; + struct batadv_gw_node *gw_node, *curr_gw = NULL; uint32_t max_gw_factor = 0, tmp_gw_factor = 0; uint8_t max_tq = 0; int down, up; + struct batadv_orig_node *orig_node; rcu_read_lock(); hlist_for_each_entry_rcu(gw_node, node, &bat_priv->gw_list, list) { if (gw_node->deleted) continue; - router = orig_node_get_router(gw_node->orig_node); + orig_node = gw_node->orig_node; + router = batadv_orig_node_get_router(orig_node); if (!router) continue; @@ -131,35 +136,34 @@ static struct gw_node *gw_get_best_gw_node(struct bat_priv *bat_priv) switch (atomic_read(&bat_priv->gw_sel_class)) { case 1: /* fast connection */ - gw_bandwidth_to_kbit(gw_node->orig_node->gw_flags, - &down, &up); + batadv_gw_bandwidth_to_kbit(orig_node->gw_flags, + &down, 
&up); tmp_gw_factor = (router->tq_avg * router->tq_avg * down * 100 * 100) / - (TQ_LOCAL_WINDOW_SIZE * - TQ_LOCAL_WINDOW_SIZE * 64); + (BATADV_TQ_LOCAL_WINDOW_SIZE * + BATADV_TQ_LOCAL_WINDOW_SIZE * 64); if ((tmp_gw_factor > max_gw_factor) || ((tmp_gw_factor == max_gw_factor) && (router->tq_avg > max_tq))) { if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); curr_gw = gw_node; atomic_inc(&curr_gw->refcount); } break; - default: /** - * 2: stable connection (use best statistic) + default: /* 2: stable connection (use best statistic) * 3: fast-switch (use best statistic but change as * soon as a better gateway appears) * XX: late-switch (use best statistic but change as * soon as a better gateway appears which has * $routing_class more tq points) - **/ + */ if (router->tq_avg > max_tq) { if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); curr_gw = gw_node; atomic_inc(&curr_gw->refcount); } @@ -172,37 +176,36 @@ static struct gw_node *gw_get_best_gw_node(struct bat_priv *bat_priv) if (tmp_gw_factor > max_gw_factor) max_gw_factor = tmp_gw_factor; - gw_node_free_ref(gw_node); + batadv_gw_node_free_ref(gw_node); next: - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } rcu_read_unlock(); return curr_gw; } -void gw_election(struct bat_priv *bat_priv) +void batadv_gw_election(struct batadv_priv *bat_priv) { - struct gw_node *curr_gw = NULL, *next_gw = NULL; - struct neigh_node *router = NULL; + struct batadv_gw_node *curr_gw = NULL, *next_gw = NULL; + struct batadv_neigh_node *router = NULL; char gw_addr[18] = { '\0' }; - /** - * The batman daemon checks here if we already passed a full originator + /* The batman daemon checks here if we already passed a full originator * cycle in order to make sure we don't choose the first gateway we * hear about. This check is based on the daemon's uptime which we * don't have. 
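For gw_sel_class 1 ("fast connection"), batadv_gw_get_best_gw_node() above scores each gateway by tq_avg squared times its advertised downstream bandwidth, normalised by the TQ window, and keeps the highest factor with tq_avg as the tie breaker; classes 2 and 3 simply keep the best tq_avg. A small userspace sketch of the class 1 scoring, assuming the usual window size of 64 and using made-up gateway data:

```c
/*
 * Userspace sketch of the gw_sel_class 1 ("fast connection") scoring in
 * batadv_gw_get_best_gw_node() above.  The window size of 64 matches the
 * usual BATADV_TQ_LOCAL_WINDOW_SIZE but is an assumption here, and the
 * gateway data is made up; intermediates are widened to 64 bit purely to
 * keep the sketch overflow-safe.
 */
#include <stdint.h>
#include <stdio.h>

#define TQ_LOCAL_WINDOW_SIZE 64

struct gw_candidate {
	const char *name;
	uint8_t tq_avg;		/* link quality towards the gateway, 0..255 */
	int down_kbit;		/* advertised downstream bandwidth */
};

static uint64_t gw_factor(const struct gw_candidate *gw)
{
	uint64_t tmp = (uint64_t)gw->tq_avg * gw->tq_avg * gw->down_kbit;

	/* same shape as the kernel formula: tq^2 * down, normalised */
	return tmp * 100 * 100 /
	       (TQ_LOCAL_WINDOW_SIZE * TQ_LOCAL_WINDOW_SIZE * 64);
}

int main(void)
{
	struct gw_candidate gws[] = {
		{ "gw-a", 220, 6000 },
		{ "gw-b", 255, 2048 },
		{ "gw-c", 180, 16000 },
	};
	const struct gw_candidate *best = NULL;
	uint64_t best_factor = 0;
	uint8_t best_tq = 0;
	unsigned int i;

	for (i = 0; i < sizeof(gws) / sizeof(gws[0]); i++) {
		uint64_t factor = gw_factor(&gws[i]);

		printf("%s: factor %llu\n", gws[i].name,
		       (unsigned long long)factor);

		/* higher factor wins, tq_avg breaks ties, as in the kernel */
		if (factor > best_factor ||
		    (factor == best_factor && gws[i].tq_avg > best_tq)) {
			best = &gws[i];
			best_factor = factor;
			best_tq = gws[i].tq_avg;
		}
	}
	printf("selected: %s\n", best ? best->name : "none");
	return 0;
}
```

The quadratic tq term means a slightly worse link needs considerably more advertised bandwidth to win, which fits the "fast connection" policy; classes 2 and 3 ignore bandwidth entirely and just track the best tq_avg.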
- **/ - if (atomic_read(&bat_priv->gw_mode) != GW_MODE_CLIENT) + */ + if (atomic_read(&bat_priv->gw_mode) != BATADV_GW_MODE_CLIENT) goto out; - if (!atomic_dec_not_zero(&bat_priv->gw_reselect)) + if (!batadv_atomic_dec_not_zero(&bat_priv->gw_reselect)) goto out; - curr_gw = gw_get_selected_gw_node(bat_priv); + curr_gw = batadv_gw_get_selected_gw_node(bat_priv); - next_gw = gw_get_best_gw_node(bat_priv); + next_gw = batadv_gw_get_best_gw_node(bat_priv); if (curr_gw == next_gw) goto out; @@ -210,53 +213,57 @@ void gw_election(struct bat_priv *bat_priv) if (next_gw) { sprintf(gw_addr, "%pM", next_gw->orig_node->orig); - router = orig_node_get_router(next_gw->orig_node); + router = batadv_orig_node_get_router(next_gw->orig_node); if (!router) { - gw_deselect(bat_priv); + batadv_gw_deselect(bat_priv); goto out; } } if ((curr_gw) && (!next_gw)) { - bat_dbg(DBG_BATMAN, bat_priv, - "Removing selected gateway - no gateway in range\n"); - throw_uevent(bat_priv, UEV_GW, UEV_DEL, NULL); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Removing selected gateway - no gateway in range\n"); + batadv_throw_uevent(bat_priv, BATADV_UEV_GW, BATADV_UEV_DEL, + NULL); } else if ((!curr_gw) && (next_gw)) { - bat_dbg(DBG_BATMAN, bat_priv, - "Adding route to gateway %pM (gw_flags: %i, tq: %i)\n", - next_gw->orig_node->orig, next_gw->orig_node->gw_flags, - router->tq_avg); - throw_uevent(bat_priv, UEV_GW, UEV_ADD, gw_addr); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Adding route to gateway %pM (gw_flags: %i, tq: %i)\n", + next_gw->orig_node->orig, + next_gw->orig_node->gw_flags, router->tq_avg); + batadv_throw_uevent(bat_priv, BATADV_UEV_GW, BATADV_UEV_ADD, + gw_addr); } else { - bat_dbg(DBG_BATMAN, bat_priv, - "Changing route to gateway %pM (gw_flags: %i, tq: %i)\n", - next_gw->orig_node->orig, next_gw->orig_node->gw_flags, - router->tq_avg); - throw_uevent(bat_priv, UEV_GW, UEV_CHANGE, gw_addr); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Changing route to gateway %pM (gw_flags: %i, tq: %i)\n", + next_gw->orig_node->orig, + next_gw->orig_node->gw_flags, router->tq_avg); + batadv_throw_uevent(bat_priv, BATADV_UEV_GW, BATADV_UEV_CHANGE, + gw_addr); } - gw_select(bat_priv, next_gw); + batadv_gw_select(bat_priv, next_gw); out: if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); if (next_gw) - gw_node_free_ref(next_gw); + batadv_gw_node_free_ref(next_gw); if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } -void gw_check_election(struct bat_priv *bat_priv, struct orig_node *orig_node) +void batadv_gw_check_election(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node) { - struct orig_node *curr_gw_orig; - struct neigh_node *router_gw = NULL, *router_orig = NULL; + struct batadv_orig_node *curr_gw_orig; + struct batadv_neigh_node *router_gw = NULL, *router_orig = NULL; uint8_t gw_tq_avg, orig_tq_avg; - curr_gw_orig = gw_get_selected_orig(bat_priv); + curr_gw_orig = batadv_gw_get_selected_orig(bat_priv); if (!curr_gw_orig) goto deselect; - router_gw = orig_node_get_router(curr_gw_orig); + router_gw = batadv_orig_node_get_router(curr_gw_orig); if (!router_gw) goto deselect; @@ -264,7 +271,7 @@ void gw_check_election(struct bat_priv *bat_priv, struct orig_node *orig_node) if (curr_gw_orig == orig_node) goto out; - router_orig = orig_node_get_router(orig_node); + router_orig = batadv_orig_node_get_router(orig_node); if (!router_orig) goto out; @@ -275,35 +282,35 @@ void gw_check_election(struct bat_priv *bat_priv, struct orig_node *orig_node) if 
(orig_tq_avg < gw_tq_avg) goto out; - /** - * if the routing class is greater than 3 the value tells us how much + /* if the routing class is greater than 3 the value tells us how much * greater the TQ value of the new gateway must be - **/ + */ if ((atomic_read(&bat_priv->gw_sel_class) > 3) && (orig_tq_avg - gw_tq_avg < atomic_read(&bat_priv->gw_sel_class))) goto out; - bat_dbg(DBG_BATMAN, bat_priv, - "Restarting gateway selection: better gateway found (tq curr: %i, tq new: %i)\n", - gw_tq_avg, orig_tq_avg); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Restarting gateway selection: better gateway found (tq curr: %i, tq new: %i)\n", + gw_tq_avg, orig_tq_avg); deselect: - gw_deselect(bat_priv); + batadv_gw_deselect(bat_priv); out: if (curr_gw_orig) - orig_node_free_ref(curr_gw_orig); + batadv_orig_node_free_ref(curr_gw_orig); if (router_gw) - neigh_node_free_ref(router_gw); + batadv_neigh_node_free_ref(router_gw); if (router_orig) - neigh_node_free_ref(router_orig); + batadv_neigh_node_free_ref(router_orig); return; } -static void gw_node_add(struct bat_priv *bat_priv, - struct orig_node *orig_node, uint8_t new_gwflags) +static void batadv_gw_node_add(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + uint8_t new_gwflags) { - struct gw_node *gw_node; + struct batadv_gw_node *gw_node; int down, up; gw_node = kzalloc(sizeof(*gw_node), GFP_ATOMIC); @@ -318,47 +325,47 @@ static void gw_node_add(struct bat_priv *bat_priv, hlist_add_head_rcu(&gw_node->list, &bat_priv->gw_list); spin_unlock_bh(&bat_priv->gw_list_lock); - gw_bandwidth_to_kbit(new_gwflags, &down, &up); - bat_dbg(DBG_BATMAN, bat_priv, - "Found new gateway %pM -> gw_class: %i - %i%s/%i%s\n", - orig_node->orig, new_gwflags, - (down > 2048 ? down / 1024 : down), - (down > 2048 ? "MBit" : "KBit"), - (up > 2048 ? up / 1024 : up), - (up > 2048 ? "MBit" : "KBit")); + batadv_gw_bandwidth_to_kbit(new_gwflags, &down, &up); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Found new gateway %pM -> gw_class: %i - %i%s/%i%s\n", + orig_node->orig, new_gwflags, + (down > 2048 ? down / 1024 : down), + (down > 2048 ? "MBit" : "KBit"), + (up > 2048 ? up / 1024 : up), + (up > 2048 ? "MBit" : "KBit")); } -void gw_node_update(struct bat_priv *bat_priv, - struct orig_node *orig_node, uint8_t new_gwflags) +void batadv_gw_node_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + uint8_t new_gwflags) { struct hlist_node *node; - struct gw_node *gw_node, *curr_gw; + struct batadv_gw_node *gw_node, *curr_gw; - /** - * Note: We don't need a NULL check here, since curr_gw never gets + /* Note: We don't need a NULL check here, since curr_gw never gets * dereferenced. If curr_gw is NULL we also should not exit as we may * have this gateway in our list (duplication check!) even though we * have no currently selected gateway. 
*/ - curr_gw = gw_get_selected_gw_node(bat_priv); + curr_gw = batadv_gw_get_selected_gw_node(bat_priv); rcu_read_lock(); hlist_for_each_entry_rcu(gw_node, node, &bat_priv->gw_list, list) { if (gw_node->orig_node != orig_node) continue; - bat_dbg(DBG_BATMAN, bat_priv, - "Gateway class of originator %pM changed from %i to %i\n", - orig_node->orig, gw_node->orig_node->gw_flags, - new_gwflags); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Gateway class of originator %pM changed from %i to %i\n", + orig_node->orig, gw_node->orig_node->gw_flags, + new_gwflags); gw_node->deleted = 0; - if (new_gwflags == NO_FLAGS) { + if (new_gwflags == BATADV_NO_FLAGS) { gw_node->deleted = jiffies; - bat_dbg(DBG_BATMAN, bat_priv, - "Gateway %pM removed from gateway list\n", - orig_node->orig); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Gateway %pM removed from gateway list\n", + orig_node->orig); if (gw_node == curr_gw) goto deselect; @@ -367,34 +374,35 @@ void gw_node_update(struct bat_priv *bat_priv, goto unlock; } - if (new_gwflags == NO_FLAGS) + if (new_gwflags == BATADV_NO_FLAGS) goto unlock; - gw_node_add(bat_priv, orig_node, new_gwflags); + batadv_gw_node_add(bat_priv, orig_node, new_gwflags); goto unlock; deselect: - gw_deselect(bat_priv); + batadv_gw_deselect(bat_priv); unlock: rcu_read_unlock(); if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); } -void gw_node_delete(struct bat_priv *bat_priv, struct orig_node *orig_node) +void batadv_gw_node_delete(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node) { - gw_node_update(bat_priv, orig_node, 0); + batadv_gw_node_update(bat_priv, orig_node, 0); } -void gw_node_purge(struct bat_priv *bat_priv) +void batadv_gw_node_purge(struct batadv_priv *bat_priv) { - struct gw_node *gw_node, *curr_gw; + struct batadv_gw_node *gw_node, *curr_gw; struct hlist_node *node, *node_tmp; - unsigned long timeout = msecs_to_jiffies(2 * PURGE_TIMEOUT); + unsigned long timeout = msecs_to_jiffies(2 * BATADV_PURGE_TIMEOUT); int do_deselect = 0; - curr_gw = gw_get_selected_gw_node(bat_priv); + curr_gw = batadv_gw_get_selected_gw_node(bat_priv); spin_lock_bh(&bat_priv->gw_list_lock); @@ -402,43 +410,42 @@ void gw_node_purge(struct bat_priv *bat_priv) &bat_priv->gw_list, list) { if (((!gw_node->deleted) || (time_before(jiffies, gw_node->deleted + timeout))) && - atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) + atomic_read(&bat_priv->mesh_state) == BATADV_MESH_ACTIVE) continue; if (curr_gw == gw_node) do_deselect = 1; hlist_del_rcu(&gw_node->list); - gw_node_free_ref(gw_node); + batadv_gw_node_free_ref(gw_node); } spin_unlock_bh(&bat_priv->gw_list_lock); /* gw_deselect() needs to acquire the gw_list_lock */ if (do_deselect) - gw_deselect(bat_priv); + batadv_gw_deselect(bat_priv); if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); } -/** - * fails if orig_node has no router - */ -static int _write_buffer_text(struct bat_priv *bat_priv, struct seq_file *seq, - const struct gw_node *gw_node) +/* fails if orig_node has no router */ +static int batadv_write_buffer_text(struct batadv_priv *bat_priv, + struct seq_file *seq, + const struct batadv_gw_node *gw_node) { - struct gw_node *curr_gw; - struct neigh_node *router; + struct batadv_gw_node *curr_gw; + struct batadv_neigh_node *router; int down, up, ret = -1; - gw_bandwidth_to_kbit(gw_node->orig_node->gw_flags, &down, &up); + batadv_gw_bandwidth_to_kbit(gw_node->orig_node->gw_flags, &down, &up); - router = orig_node_get_router(gw_node->orig_node); + router = 
batadv_orig_node_get_router(gw_node->orig_node); if (!router) goto out; - curr_gw = gw_get_selected_gw_node(bat_priv); + curr_gw = batadv_gw_get_selected_gw_node(bat_priv); ret = seq_printf(seq, "%s %pM (%3i) %pM [%10s]: %3i - %i%s/%i%s\n", (curr_gw == gw_node ? "=>" : " "), @@ -451,23 +458,23 @@ static int _write_buffer_text(struct bat_priv *bat_priv, struct seq_file *seq, (up > 2048 ? up / 1024 : up), (up > 2048 ? "MBit" : "KBit")); - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); out: return ret; } -int gw_client_seq_print_text(struct seq_file *seq, void *offset) +int batadv_gw_client_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hard_iface *primary_if; - struct gw_node *gw_node; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hard_iface *primary_if; + struct batadv_gw_node *gw_node; struct hlist_node *node; int gw_count = 0, ret = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { ret = seq_printf(seq, "BATMAN mesh %s disabled - please specify interfaces to enable it\n", @@ -475,7 +482,7 @@ int gw_client_seq_print_text(struct seq_file *seq, void *offset) goto out; } - if (primary_if->if_status != IF_ACTIVE) { + if (primary_if->if_status != BATADV_IF_ACTIVE) { ret = seq_printf(seq, "BATMAN mesh %s disabled - primary interface not active\n", net_dev->name); @@ -484,8 +491,8 @@ int gw_client_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " %-12s (%s/%i) %17s [%10s]: gw_class ... [B.A.T.M.A.N. adv %s, MainIF/MAC: %s/%pM (%s)]\n", - "Gateway", "#", TQ_MAX_VALUE, "Nexthop", "outgoingIF", - SOURCE_VERSION, primary_if->net_dev->name, + "Gateway", "#", BATADV_TQ_MAX_VALUE, "Nexthop", "outgoingIF", + BATADV_SOURCE_VERSION, primary_if->net_dev->name, primary_if->net_dev->dev_addr, net_dev->name); rcu_read_lock(); @@ -494,7 +501,7 @@ int gw_client_seq_print_text(struct seq_file *seq, void *offset) continue; /* fails if orig_node has no router */ - if (_write_buffer_text(bat_priv, seq, gw_node) < 0) + if (batadv_write_buffer_text(bat_priv, seq, gw_node) < 0) continue; gw_count++; @@ -506,11 +513,11 @@ int gw_client_seq_print_text(struct seq_file *seq, void *offset) out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } -static bool is_type_dhcprequest(struct sk_buff *skb, int header_len) +static bool batadv_is_type_dhcprequest(struct sk_buff *skb, int header_len) { int ret = false; unsigned char *p; @@ -521,27 +528,29 @@ static bool is_type_dhcprequest(struct sk_buff *skb, int header_len) pkt_len = skb_headlen(skb); - if (pkt_len < header_len + DHCP_OPTIONS_OFFSET + 1) + if (pkt_len < header_len + BATADV_DHCP_OPTIONS_OFFSET + 1) goto out; - p = skb->data + header_len + DHCP_OPTIONS_OFFSET; - pkt_len -= header_len + DHCP_OPTIONS_OFFSET + 1; + p = skb->data + header_len + BATADV_DHCP_OPTIONS_OFFSET; + pkt_len -= header_len + BATADV_DHCP_OPTIONS_OFFSET + 1; /* Access the dhcp option lists. Each entry is made up by: * - octet 1: option type * - octet 2: option data len (only if type != 255 and 0) - * - octet 3: option data */ + * - octet 3: option data + */ while (*p != 255 && !ret) { /* p now points to the first octet: option type */ if (*p == 53) { /* type 53 is the message type option. 
- * Jump the len octet and go to the data octet */ + * Jump the len octet and go to the data octet + */ if (pkt_len < 2) goto out; p += 2; /* check if the message type is what we need */ - if (*p == DHCP_REQUEST) + if (*p == BATADV_DHCP_REQUEST) ret = true; break; } else if (*p == 0) { @@ -568,7 +577,7 @@ out: return ret; } -bool gw_is_dhcp_target(struct sk_buff *skb, unsigned int *header_len) +bool batadv_gw_is_dhcp_target(struct sk_buff *skb, unsigned int *header_len) { struct ethhdr *ethhdr; struct iphdr *iphdr; @@ -634,40 +643,41 @@ bool gw_is_dhcp_target(struct sk_buff *skb, unsigned int *header_len) return true; } -bool gw_out_of_range(struct bat_priv *bat_priv, - struct sk_buff *skb, struct ethhdr *ethhdr) +bool batadv_gw_out_of_range(struct batadv_priv *bat_priv, + struct sk_buff *skb, struct ethhdr *ethhdr) { - struct neigh_node *neigh_curr = NULL, *neigh_old = NULL; - struct orig_node *orig_dst_node = NULL; - struct gw_node *curr_gw = NULL; + struct batadv_neigh_node *neigh_curr = NULL, *neigh_old = NULL; + struct batadv_orig_node *orig_dst_node = NULL; + struct batadv_gw_node *curr_gw = NULL; bool ret, out_of_range = false; unsigned int header_len = 0; uint8_t curr_tq_avg; - ret = gw_is_dhcp_target(skb, &header_len); + ret = batadv_gw_is_dhcp_target(skb, &header_len); if (!ret) goto out; - orig_dst_node = transtable_search(bat_priv, ethhdr->h_source, - ethhdr->h_dest); + orig_dst_node = batadv_transtable_search(bat_priv, ethhdr->h_source, + ethhdr->h_dest); if (!orig_dst_node) goto out; if (!orig_dst_node->gw_flags) goto out; - ret = is_type_dhcprequest(skb, header_len); + ret = batadv_is_type_dhcprequest(skb, header_len); if (!ret) goto out; switch (atomic_read(&bat_priv->gw_mode)) { - case GW_MODE_SERVER: + case BATADV_GW_MODE_SERVER: /* If we are a GW then we are our best GW. 
We can artificially - * set the tq towards ourself as the maximum value */ - curr_tq_avg = TQ_MAX_VALUE; + * set the tq towards ourself as the maximum value + */ + curr_tq_avg = BATADV_TQ_MAX_VALUE; break; - case GW_MODE_CLIENT: - curr_gw = gw_get_selected_gw_node(bat_priv); + case BATADV_GW_MODE_CLIENT: + curr_gw = batadv_gw_get_selected_gw_node(bat_priv); if (!curr_gw) goto out; @@ -677,33 +687,35 @@ bool gw_out_of_range(struct bat_priv *bat_priv, /* If the dhcp packet has been sent to a different gw, * we have to evaluate whether the old gw is still - * reliable enough */ - neigh_curr = find_router(bat_priv, curr_gw->orig_node, NULL); + * reliable enough + */ + neigh_curr = batadv_find_router(bat_priv, curr_gw->orig_node, + NULL); if (!neigh_curr) goto out; curr_tq_avg = neigh_curr->tq_avg; break; - case GW_MODE_OFF: + case BATADV_GW_MODE_OFF: default: goto out; } - neigh_old = find_router(bat_priv, orig_dst_node, NULL); + neigh_old = batadv_find_router(bat_priv, orig_dst_node, NULL); if (!neigh_old) goto out; - if (curr_tq_avg - neigh_old->tq_avg > GW_THRESHOLD) + if (curr_tq_avg - neigh_old->tq_avg > BATADV_GW_THRESHOLD) out_of_range = true; out: if (orig_dst_node) - orig_node_free_ref(orig_dst_node); + batadv_orig_node_free_ref(orig_dst_node); if (curr_gw) - gw_node_free_ref(curr_gw); + batadv_gw_node_free_ref(curr_gw); if (neigh_old) - neigh_node_free_ref(neigh_old); + batadv_neigh_node_free_ref(neigh_old); if (neigh_curr) - neigh_node_free_ref(neigh_curr); + batadv_neigh_node_free_ref(neigh_curr); return out_of_range; } diff --git a/net/batman-adv/gateway_client.h b/net/batman-adv/gateway_client.h index bf56a5aea10b..f0d129e323c8 100644 --- a/net/batman-adv/gateway_client.h +++ b/net/batman-adv/gateway_client.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2012 B.A.T.M.A.N. 
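batadv_is_type_dhcprequest() above decides whether a candidate frame is a DHCPREQUEST by skipping the fixed 240-byte DHCP prefix (the 236-byte BOOTP header plus the 4-byte magic cookie) and walking the type/length/value option list until it finds option 53, the message type. A standalone userspace sketch of that walk, with a hand-built buffer as input (values made up for illustration):

```c
/*
 * Userspace sketch of the option walk in batadv_is_type_dhcprequest()
 * above: skip the 240 byte fixed part, then scan the type/length/value
 * options for option 53 (message type) and test it against
 * DHCPREQUEST (3).  The hand-built buffer below is made up.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DHCP_OPTIONS_OFFSET 240
#define DHCP_OPT_PAD          0
#define DHCP_OPT_MSG_TYPE    53
#define DHCP_OPT_END        255
#define DHCP_REQUEST          3

static bool is_dhcp_request(const uint8_t *dhcp, size_t len)
{
	size_t i = DHCP_OPTIONS_OFFSET;

	while (i < len && dhcp[i] != DHCP_OPT_END) {
		uint8_t type = dhcp[i];

		if (type == DHCP_OPT_PAD) {	/* single byte padding */
			i++;
			continue;
		}
		if (i + 1 >= len)		/* truncated option header */
			return false;
		if (type == DHCP_OPT_MSG_TYPE)	/* value byte follows the length */
			return i + 2 < len && dhcp[i + 2] == DHCP_REQUEST;
		i += 2 + dhcp[i + 1];		/* skip type, length and data */
	}
	return false;
}

int main(void)
{
	uint8_t pkt[300] = { 0 };

	/* minimal option list: message type = DHCPREQUEST, then end marker */
	pkt[DHCP_OPTIONS_OFFSET + 0] = DHCP_OPT_MSG_TYPE;
	pkt[DHCP_OPTIONS_OFFSET + 1] = 1;	/* option length */
	pkt[DHCP_OPTIONS_OFFSET + 2] = DHCP_REQUEST;
	pkt[DHCP_OPTIONS_OFFSET + 3] = DHCP_OPT_END;

	printf("DHCPREQUEST: %s\n",
	       is_dhcp_request(pkt, sizeof(pkt)) ? "yes" : "no");
	return 0;
}
```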
contributors: * * Marek Lindner * @@ -16,23 +15,26 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_GATEWAY_CLIENT_H_ #define _NET_BATMAN_ADV_GATEWAY_CLIENT_H_ -void gw_deselect(struct bat_priv *bat_priv); -void gw_election(struct bat_priv *bat_priv); -struct orig_node *gw_get_selected_orig(struct bat_priv *bat_priv); -void gw_check_election(struct bat_priv *bat_priv, struct orig_node *orig_node); -void gw_node_update(struct bat_priv *bat_priv, - struct orig_node *orig_node, uint8_t new_gwflags); -void gw_node_delete(struct bat_priv *bat_priv, struct orig_node *orig_node); -void gw_node_purge(struct bat_priv *bat_priv); -int gw_client_seq_print_text(struct seq_file *seq, void *offset); -bool gw_is_dhcp_target(struct sk_buff *skb, unsigned int *header_len); -bool gw_out_of_range(struct bat_priv *bat_priv, - struct sk_buff *skb, struct ethhdr *ethhdr); +void batadv_gw_deselect(struct batadv_priv *bat_priv); +void batadv_gw_election(struct batadv_priv *bat_priv); +struct batadv_orig_node * +batadv_gw_get_selected_orig(struct batadv_priv *bat_priv); +void batadv_gw_check_election(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node); +void batadv_gw_node_update(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + uint8_t new_gwflags); +void batadv_gw_node_delete(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node); +void batadv_gw_node_purge(struct batadv_priv *bat_priv); +int batadv_gw_client_seq_print_text(struct seq_file *seq, void *offset); +bool batadv_gw_is_dhcp_target(struct sk_buff *skb, unsigned int *header_len); +bool batadv_gw_out_of_range(struct batadv_priv *bat_priv, + struct sk_buff *skb, struct ethhdr *ethhdr); #endif /* _NET_BATMAN_ADV_GATEWAY_CLIENT_H_ */ diff --git a/net/batman-adv/gateway_common.c b/net/batman-adv/gateway_common.c index ca57ac7d73b2..9001208d1752 100644 --- a/net/batman-adv/gateway_common.c +++ b/net/batman-adv/gateway_common.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -24,7 +22,7 @@ #include "gateway_client.h" /* calculates the gateway class from kbit */ -static void kbit_to_gw_bandwidth(int down, int up, long *gw_srv_class) +static void batadv_kbit_to_gw_bandwidth(int down, int up, long *gw_srv_class) { int mdown = 0, tdown, tup, difference; uint8_t sbit, part; @@ -59,7 +57,7 @@ static void kbit_to_gw_bandwidth(int down, int up, long *gw_srv_class) } /* returns the up and downspeeds in kbit, calculated from the class */ -void gw_bandwidth_to_kbit(uint8_t gw_srv_class, int *down, int *up) +void batadv_gw_bandwidth_to_kbit(uint8_t gw_srv_class, int *down, int *up) { int sbit = (gw_srv_class & 0x80) >> 7; int dpart = (gw_srv_class & 0x78) >> 3; @@ -75,8 +73,8 @@ void gw_bandwidth_to_kbit(uint8_t gw_srv_class, int *down, int *up) *up = ((upart + 1) * (*down)) / 8; } -static bool parse_gw_bandwidth(struct net_device *net_dev, char *buff, - int *up, int *down) +static bool batadv_parse_gw_bandwidth(struct net_device *net_dev, char *buff, + int *up, int *down) { int ret, multi = 1; char *slash_ptr, *tmp_ptr; @@ -99,9 +97,9 @@ static bool parse_gw_bandwidth(struct net_device *net_dev, char *buff, ret = kstrtol(buff, 10, &ldown); if (ret) { - bat_err(net_dev, - "Download speed of gateway mode invalid: %s\n", - buff); + batadv_err(net_dev, + "Download speed of gateway mode invalid: %s\n", + buff); return false; } @@ -124,9 +122,9 @@ static bool parse_gw_bandwidth(struct net_device *net_dev, char *buff, ret = kstrtol(slash_ptr + 1, 10, &lup); if (ret) { - bat_err(net_dev, - "Upload speed of gateway mode invalid: %s\n", - slash_ptr + 1); + batadv_err(net_dev, + "Upload speed of gateway mode invalid: %s\n", + slash_ptr + 1); return false; } @@ -136,14 +134,15 @@ static bool parse_gw_bandwidth(struct net_device *net_dev, char *buff, return true; } -ssize_t gw_bandwidth_set(struct net_device *net_dev, char *buff, size_t count) +ssize_t batadv_gw_bandwidth_set(struct net_device *net_dev, char *buff, + size_t count) { - struct bat_priv *bat_priv = netdev_priv(net_dev); + struct batadv_priv *bat_priv = netdev_priv(net_dev); long gw_bandwidth_tmp = 0; int up = 0, down = 0; bool ret; - ret = parse_gw_bandwidth(net_dev, buff, &up, &down); + ret = batadv_parse_gw_bandwidth(net_dev, buff, &up, &down); if (!ret) goto end; @@ -153,23 +152,25 @@ ssize_t gw_bandwidth_set(struct net_device *net_dev, char *buff, size_t count) if (!up) up = down / 5; - kbit_to_gw_bandwidth(down, up, &gw_bandwidth_tmp); + batadv_kbit_to_gw_bandwidth(down, up, &gw_bandwidth_tmp); - /** - * the gw bandwidth we guessed above might not match the given + /* the gw bandwidth we guessed above might not match the given * speeds, hence we need to calculate it back to show the number * that is going to be propagated - **/ - gw_bandwidth_to_kbit((uint8_t)gw_bandwidth_tmp, &down, &up); - - gw_deselect(bat_priv); - bat_info(net_dev, - "Changing gateway bandwidth from: '%i' to: '%ld' (propagating: %d%s/%d%s)\n", - atomic_read(&bat_priv->gw_bandwidth), gw_bandwidth_tmp, - (down > 2048 ? down / 1024 : down), - (down > 2048 ? "MBit" : "KBit"), - (up > 2048 ? up / 1024 : up), - (up > 2048 ? 
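The gateway bandwidth still travels as a single gw_flags byte: the top bit and bits 6..3 select the downstream rate and bits 2..0 express the upstream rate as a fraction of it, as batadv_gw_bandwidth_to_kbit() above shows. A small userspace decoder for that byte; the upstream formula is taken from the hunk, while the downstream mapping of 32 * (sbit + 2) << dpart is the classic batman encoding and is assumed here, since that line is not visible in the excerpt.

```c
/*
 * Userspace decoder for the gateway class byte handled by
 * batadv_gw_bandwidth_to_kbit() above.  The downstream mapping is an
 * assumption (see the lead-in); the rest mirrors the hunk.
 */
#include <stdint.h>
#include <stdio.h>

static void gw_class_to_kbit(uint8_t gw_class, int *down, int *up)
{
	int sbit = (gw_class & 0x80) >> 7;	/* scale bit */
	int dpart = (gw_class & 0x78) >> 3;	/* downstream exponent */
	int upart = (gw_class & 0x07);		/* upstream as fraction of down */

	if (!gw_class) {			/* 0 means "not a gateway" */
		*down = 0;
		*up = 0;
		return;
	}

	*down = 32 * (sbit + 2) * (1 << dpart);		/* assumed mapping */
	*up = ((upart + 1) * (*down)) / 8;		/* as in the hunk */
}

int main(void)
{
	uint8_t classes[] = { 0x00, 0x2b, 0x49, 0xff };
	unsigned int i;

	for (i = 0; i < sizeof(classes) / sizeof(classes[0]); i++) {
		int down, up;

		gw_class_to_kbit(classes[i], &down, &up);
		printf("class 0x%02x -> %d kbit down / %d kbit up\n",
		       classes[i], down, up);
	}
	return 0;
}
```

Both rates fit in the single gw_flags byte, which is how a node's bandwidth offer travels along with its normal originator announcements.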
"MBit" : "KBit")); + */ + batadv_gw_bandwidth_to_kbit((uint8_t)gw_bandwidth_tmp, &down, &up); + + if (atomic_read(&bat_priv->gw_bandwidth) == gw_bandwidth_tmp) + return count; + + batadv_gw_deselect(bat_priv); + batadv_info(net_dev, + "Changing gateway bandwidth from: '%i' to: '%ld' (propagating: %d%s/%d%s)\n", + atomic_read(&bat_priv->gw_bandwidth), gw_bandwidth_tmp, + (down > 2048 ? down / 1024 : down), + (down > 2048 ? "MBit" : "KBit"), + (up > 2048 ? up / 1024 : up), + (up > 2048 ? "MBit" : "KBit")); atomic_set(&bat_priv->gw_bandwidth, gw_bandwidth_tmp); diff --git a/net/batman-adv/gateway_common.h b/net/batman-adv/gateway_common.h index b8fb11c4f927..13697f6e7113 100644 --- a/net/batman-adv/gateway_common.h +++ b/net/batman-adv/gateway_common.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: * * Marek Lindner * @@ -16,23 +15,23 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_GATEWAY_COMMON_H_ #define _NET_BATMAN_ADV_GATEWAY_COMMON_H_ -enum gw_modes { - GW_MODE_OFF, - GW_MODE_CLIENT, - GW_MODE_SERVER, +enum batadv_gw_modes { + BATADV_GW_MODE_OFF, + BATADV_GW_MODE_CLIENT, + BATADV_GW_MODE_SERVER, }; -#define GW_MODE_OFF_NAME "off" -#define GW_MODE_CLIENT_NAME "client" -#define GW_MODE_SERVER_NAME "server" +#define BATADV_GW_MODE_OFF_NAME "off" +#define BATADV_GW_MODE_CLIENT_NAME "client" +#define BATADV_GW_MODE_SERVER_NAME "server" -void gw_bandwidth_to_kbit(uint8_t gw_class, int *down, int *up); -ssize_t gw_bandwidth_set(struct net_device *net_dev, char *buff, size_t count); +void batadv_gw_bandwidth_to_kbit(uint8_t gw_class, int *down, int *up); +ssize_t batadv_gw_bandwidth_set(struct net_device *net_dev, char *buff, + size_t count); #endif /* _NET_BATMAN_ADV_GATEWAY_COMMON_H_ */ diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c index dc334fa89847..282bf6e9353e 100644 --- a/net/batman-adv/hard-interface.c +++ b/net/batman-adv/hard-interface.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -25,28 +23,29 @@ #include "send.h" #include "translation-table.h" #include "routing.h" -#include "bat_sysfs.h" +#include "sysfs.h" #include "originator.h" #include "hash.h" #include "bridge_loop_avoidance.h" #include <linux/if_arp.h> -void hardif_free_rcu(struct rcu_head *rcu) +void batadv_hardif_free_rcu(struct rcu_head *rcu) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; - hard_iface = container_of(rcu, struct hard_iface, rcu); + hard_iface = container_of(rcu, struct batadv_hard_iface, rcu); dev_put(hard_iface->net_dev); kfree(hard_iface); } -struct hard_iface *hardif_get_by_netdev(const struct net_device *net_dev) +struct batadv_hard_iface * +batadv_hardif_get_by_netdev(const struct net_device *net_dev) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { if (hard_iface->net_dev == net_dev && atomic_inc_not_zero(&hard_iface->refcount)) goto out; @@ -59,7 +58,7 @@ out: return hard_iface; } -static int is_valid_iface(const struct net_device *net_dev) +static int batadv_is_valid_iface(const struct net_device *net_dev) { if (net_dev->flags & IFF_LOOPBACK) return 0; @@ -71,26 +70,23 @@ static int is_valid_iface(const struct net_device *net_dev) return 0; /* no batman over batman */ - if (softif_is_valid(net_dev)) + if (batadv_softif_is_valid(net_dev)) return 0; - /* Device is being bridged */ - /* if (net_dev->priv_flags & IFF_BRIDGE_PORT) - return 0; */ - return 1; } -static struct hard_iface *hardif_get_active(const struct net_device *soft_iface) +static struct batadv_hard_iface * +batadv_hardif_get_active(const struct net_device *soft_iface) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { if (hard_iface->soft_iface != soft_iface) continue; - if (hard_iface->if_status == IF_ACTIVE && + if (hard_iface->if_status == BATADV_IF_ACTIVE && atomic_inc_not_zero(&hard_iface->refcount)) goto out; } @@ -102,32 +98,32 @@ out: return hard_iface; } -static void primary_if_update_addr(struct bat_priv *bat_priv, - struct hard_iface *oldif) +static void batadv_primary_if_update_addr(struct batadv_priv *bat_priv, + struct batadv_hard_iface *oldif) { - struct vis_packet *vis_packet; - struct hard_iface *primary_if; + struct batadv_vis_packet *vis_packet; + struct batadv_hard_iface *primary_if; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; - vis_packet = (struct vis_packet *) + vis_packet = (struct batadv_vis_packet *) bat_priv->my_vis_info->skb_packet->data; memcpy(vis_packet->vis_orig, primary_if->net_dev->dev_addr, ETH_ALEN); memcpy(vis_packet->sender_orig, primary_if->net_dev->dev_addr, ETH_ALEN); - bla_update_orig_address(bat_priv, primary_if, oldif); + batadv_bla_update_orig_address(bat_priv, primary_if, oldif); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } -static void primary_if_select(struct bat_priv *bat_priv, - struct hard_iface *new_hard_iface) +static void 
batadv_primary_if_select(struct batadv_priv *bat_priv, + struct batadv_hard_iface *new_hard_iface) { - struct hard_iface *curr_hard_iface; + struct batadv_hard_iface *curr_hard_iface; ASSERT_RTNL(); @@ -141,14 +137,15 @@ static void primary_if_select(struct bat_priv *bat_priv, goto out; bat_priv->bat_algo_ops->bat_primary_iface_set(new_hard_iface); - primary_if_update_addr(bat_priv, curr_hard_iface); + batadv_primary_if_update_addr(bat_priv, curr_hard_iface); out: if (curr_hard_iface) - hardif_free_ref(curr_hard_iface); + batadv_hardif_free_ref(curr_hard_iface); } -static bool hardif_is_iface_up(const struct hard_iface *hard_iface) +static bool +batadv_hardif_is_iface_up(const struct batadv_hard_iface *hard_iface) { if (hard_iface->net_dev->flags & IFF_UP) return true; @@ -156,21 +153,21 @@ static bool hardif_is_iface_up(const struct hard_iface *hard_iface) return false; } -static void check_known_mac_addr(const struct net_device *net_dev) +static void batadv_check_known_mac_addr(const struct net_device *net_dev) { - const struct hard_iface *hard_iface; + const struct batadv_hard_iface *hard_iface; rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { - if ((hard_iface->if_status != IF_ACTIVE) && - (hard_iface->if_status != IF_TO_BE_ACTIVATED)) + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { + if ((hard_iface->if_status != BATADV_IF_ACTIVE) && + (hard_iface->if_status != BATADV_IF_TO_BE_ACTIVATED)) continue; if (hard_iface->net_dev == net_dev) continue; - if (!compare_eth(hard_iface->net_dev->dev_addr, - net_dev->dev_addr)) + if (!batadv_compare_eth(hard_iface->net_dev->dev_addr, + net_dev->dev_addr)) continue; pr_warn("The newly added mac address (%pM) already exists on: %s\n", @@ -180,27 +177,29 @@ static void check_known_mac_addr(const struct net_device *net_dev) rcu_read_unlock(); } -int hardif_min_mtu(struct net_device *soft_iface) +int batadv_hardif_min_mtu(struct net_device *soft_iface) { - const struct bat_priv *bat_priv = netdev_priv(soft_iface); - const struct hard_iface *hard_iface; + const struct batadv_priv *bat_priv = netdev_priv(soft_iface); + const struct batadv_hard_iface *hard_iface; /* allow big frames if all devices are capable to do so - * (have MTU > 1500 + BAT_HEADER_LEN) */ + * (have MTU > 1500 + BAT_HEADER_LEN) + */ int min_mtu = ETH_DATA_LEN; if (atomic_read(&bat_priv->fragmentation)) goto out; rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { - if ((hard_iface->if_status != IF_ACTIVE) && - (hard_iface->if_status != IF_TO_BE_ACTIVATED)) + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { + if ((hard_iface->if_status != BATADV_IF_ACTIVE) && + (hard_iface->if_status != BATADV_IF_TO_BE_ACTIVATED)) continue; if (hard_iface->soft_iface != soft_iface) continue; - min_mtu = min_t(int, hard_iface->net_dev->mtu - BAT_HEADER_LEN, + min_mtu = min_t(int, + hard_iface->net_dev->mtu - BATADV_HEADER_LEN, min_mtu); } rcu_read_unlock(); @@ -209,68 +208,70 @@ out: } /* adjusts the MTU if a new interface with a smaller MTU appeared. 
*/ -void update_min_mtu(struct net_device *soft_iface) +void batadv_update_min_mtu(struct net_device *soft_iface) { int min_mtu; - min_mtu = hardif_min_mtu(soft_iface); + min_mtu = batadv_hardif_min_mtu(soft_iface); if (soft_iface->mtu != min_mtu) soft_iface->mtu = min_mtu; } -static void hardif_activate_interface(struct hard_iface *hard_iface) +static void +batadv_hardif_activate_interface(struct batadv_hard_iface *hard_iface) { - struct bat_priv *bat_priv; - struct hard_iface *primary_if = NULL; + struct batadv_priv *bat_priv; + struct batadv_hard_iface *primary_if = NULL; - if (hard_iface->if_status != IF_INACTIVE) + if (hard_iface->if_status != BATADV_IF_INACTIVE) goto out; bat_priv = netdev_priv(hard_iface->soft_iface); bat_priv->bat_algo_ops->bat_iface_update_mac(hard_iface); - hard_iface->if_status = IF_TO_BE_ACTIVATED; + hard_iface->if_status = BATADV_IF_TO_BE_ACTIVATED; - /** - * the first active interface becomes our primary interface or + /* the first active interface becomes our primary interface or * the next active interface after the old primary interface was removed */ - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) - primary_if_select(bat_priv, hard_iface); + batadv_primary_if_select(bat_priv, hard_iface); - bat_info(hard_iface->soft_iface, "Interface activated: %s\n", - hard_iface->net_dev->name); + batadv_info(hard_iface->soft_iface, "Interface activated: %s\n", + hard_iface->net_dev->name); - update_min_mtu(hard_iface->soft_iface); + batadv_update_min_mtu(hard_iface->soft_iface); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } -static void hardif_deactivate_interface(struct hard_iface *hard_iface) +static void +batadv_hardif_deactivate_interface(struct batadv_hard_iface *hard_iface) { - if ((hard_iface->if_status != IF_ACTIVE) && - (hard_iface->if_status != IF_TO_BE_ACTIVATED)) + if ((hard_iface->if_status != BATADV_IF_ACTIVE) && + (hard_iface->if_status != BATADV_IF_TO_BE_ACTIVATED)) return; - hard_iface->if_status = IF_INACTIVE; + hard_iface->if_status = BATADV_IF_INACTIVE; - bat_info(hard_iface->soft_iface, "Interface deactivated: %s\n", - hard_iface->net_dev->name); + batadv_info(hard_iface->soft_iface, "Interface deactivated: %s\n", + hard_iface->net_dev->name); - update_min_mtu(hard_iface->soft_iface); + batadv_update_min_mtu(hard_iface->soft_iface); } -int hardif_enable_interface(struct hard_iface *hard_iface, - const char *iface_name) +int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface, + const char *iface_name) { - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; struct net_device *soft_iface; + __be16 ethertype = __constant_htons(BATADV_ETH_P_BATMAN); int ret; - if (hard_iface->if_status != IF_NOT_IN_USE) + if (hard_iface->if_status != BATADV_IF_NOT_IN_USE) goto out; if (!atomic_inc_not_zero(&hard_iface->refcount)) @@ -284,7 +285,7 @@ int hardif_enable_interface(struct hard_iface *hard_iface, soft_iface = dev_get_by_name(&init_net, iface_name); if (!soft_iface) { - soft_iface = softif_create(iface_name); + soft_iface = batadv_softif_create(iface_name); if (!soft_iface) { ret = -ENOMEM; @@ -295,7 +296,7 @@ int hardif_enable_interface(struct hard_iface *hard_iface, dev_hold(soft_iface); } - if (!softif_is_valid(soft_iface)) { + if (!batadv_softif_is_valid(soft_iface)) { pr_err("Can't create batman mesh interface %s: already exists as regular interface\n", soft_iface->name); ret = -EINVAL; @@ -306,48 +307,46 @@ int 
hardif_enable_interface(struct hard_iface *hard_iface, bat_priv = netdev_priv(hard_iface->soft_iface); ret = bat_priv->bat_algo_ops->bat_iface_enable(hard_iface); - if (ret < 0) { - ret = -ENOMEM; + if (ret < 0) goto err_dev; - } hard_iface->if_num = bat_priv->num_ifaces; bat_priv->num_ifaces++; - hard_iface->if_status = IF_INACTIVE; - orig_hash_add_if(hard_iface, bat_priv->num_ifaces); + hard_iface->if_status = BATADV_IF_INACTIVE; + batadv_orig_hash_add_if(hard_iface, bat_priv->num_ifaces); - hard_iface->batman_adv_ptype.type = __constant_htons(ETH_P_BATMAN); - hard_iface->batman_adv_ptype.func = batman_skb_recv; + hard_iface->batman_adv_ptype.type = ethertype; + hard_iface->batman_adv_ptype.func = batadv_batman_skb_recv; hard_iface->batman_adv_ptype.dev = hard_iface->net_dev; dev_add_pack(&hard_iface->batman_adv_ptype); atomic_set(&hard_iface->frag_seqno, 1); - bat_info(hard_iface->soft_iface, "Adding interface: %s\n", - hard_iface->net_dev->name); - - if (atomic_read(&bat_priv->fragmentation) && hard_iface->net_dev->mtu < - ETH_DATA_LEN + BAT_HEADER_LEN) - bat_info(hard_iface->soft_iface, - "The MTU of interface %s is too small (%i) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to %zi would solve the problem.\n", - hard_iface->net_dev->name, hard_iface->net_dev->mtu, - ETH_DATA_LEN + BAT_HEADER_LEN); - - if (!atomic_read(&bat_priv->fragmentation) && hard_iface->net_dev->mtu < - ETH_DATA_LEN + BAT_HEADER_LEN) - bat_info(hard_iface->soft_iface, - "The MTU of interface %s is too small (%i) to handle the transport of batman-adv packets. If you experience problems getting traffic through try increasing the MTU to %zi.\n", - hard_iface->net_dev->name, hard_iface->net_dev->mtu, - ETH_DATA_LEN + BAT_HEADER_LEN); - - if (hardif_is_iface_up(hard_iface)) - hardif_activate_interface(hard_iface); + batadv_info(hard_iface->soft_iface, "Adding interface: %s\n", + hard_iface->net_dev->name); + + if (atomic_read(&bat_priv->fragmentation) && + hard_iface->net_dev->mtu < ETH_DATA_LEN + BATADV_HEADER_LEN) + batadv_info(hard_iface->soft_iface, + "The MTU of interface %s is too small (%i) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to %zi would solve the problem.\n", + hard_iface->net_dev->name, hard_iface->net_dev->mtu, + ETH_DATA_LEN + BATADV_HEADER_LEN); + + if (!atomic_read(&bat_priv->fragmentation) && + hard_iface->net_dev->mtu < ETH_DATA_LEN + BATADV_HEADER_LEN) + batadv_info(hard_iface->soft_iface, + "The MTU of interface %s is too small (%i) to handle the transport of batman-adv packets. 
If you experience problems getting traffic through try increasing the MTU to %zi.\n", + hard_iface->net_dev->name, hard_iface->net_dev->mtu, + ETH_DATA_LEN + BATADV_HEADER_LEN); + + if (batadv_hardif_is_iface_up(hard_iface)) + batadv_hardif_activate_interface(hard_iface); else - bat_err(hard_iface->soft_iface, - "Not using interface %s (retrying later): interface not active\n", - hard_iface->net_dev->name); + batadv_err(hard_iface->soft_iface, + "Not using interface %s (retrying later): interface not active\n", + hard_iface->net_dev->name); /* begin scheduling originator messages on that interface */ - schedule_bat_ogm(hard_iface); + batadv_schedule_bat_ogm(hard_iface); out: return 0; @@ -355,67 +354,68 @@ out: err_dev: dev_put(soft_iface); err: - hardif_free_ref(hard_iface); + batadv_hardif_free_ref(hard_iface); return ret; } -void hardif_disable_interface(struct hard_iface *hard_iface) +void batadv_hardif_disable_interface(struct batadv_hard_iface *hard_iface) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct hard_iface *primary_if = NULL; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_hard_iface *primary_if = NULL; - if (hard_iface->if_status == IF_ACTIVE) - hardif_deactivate_interface(hard_iface); + if (hard_iface->if_status == BATADV_IF_ACTIVE) + batadv_hardif_deactivate_interface(hard_iface); - if (hard_iface->if_status != IF_INACTIVE) + if (hard_iface->if_status != BATADV_IF_INACTIVE) goto out; - bat_info(hard_iface->soft_iface, "Removing interface: %s\n", - hard_iface->net_dev->name); + batadv_info(hard_iface->soft_iface, "Removing interface: %s\n", + hard_iface->net_dev->name); dev_remove_pack(&hard_iface->batman_adv_ptype); bat_priv->num_ifaces--; - orig_hash_del_if(hard_iface, bat_priv->num_ifaces); + batadv_orig_hash_del_if(hard_iface, bat_priv->num_ifaces); - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (hard_iface == primary_if) { - struct hard_iface *new_if; + struct batadv_hard_iface *new_if; - new_if = hardif_get_active(hard_iface->soft_iface); - primary_if_select(bat_priv, new_if); + new_if = batadv_hardif_get_active(hard_iface->soft_iface); + batadv_primary_if_select(bat_priv, new_if); if (new_if) - hardif_free_ref(new_if); + batadv_hardif_free_ref(new_if); } bat_priv->bat_algo_ops->bat_iface_disable(hard_iface); - hard_iface->if_status = IF_NOT_IN_USE; + hard_iface->if_status = BATADV_IF_NOT_IN_USE; /* delete all references to this hard_iface */ - purge_orig_ref(bat_priv); - purge_outstanding_packets(bat_priv, hard_iface); + batadv_purge_orig_ref(bat_priv); + batadv_purge_outstanding_packets(bat_priv, hard_iface); dev_put(hard_iface->soft_iface); /* nobody uses this interface anymore */ if (!bat_priv->num_ifaces) - softif_destroy(hard_iface->soft_iface); + batadv_softif_destroy(hard_iface->soft_iface); hard_iface->soft_iface = NULL; - hardif_free_ref(hard_iface); + batadv_hardif_free_ref(hard_iface); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } -static struct hard_iface *hardif_add_interface(struct net_device *net_dev) +static struct batadv_hard_iface * +batadv_hardif_add_interface(struct net_device *net_dev) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; int ret; ASSERT_RTNL(); - ret = is_valid_iface(net_dev); + ret = batadv_is_valid_iface(net_dev); if (ret != 1) goto out; @@ -425,23 +425,22 @@ static struct hard_iface *hardif_add_interface(struct net_device *net_dev) 
if (!hard_iface) goto release_dev; - ret = sysfs_add_hardif(&hard_iface->hardif_obj, net_dev); + ret = batadv_sysfs_add_hardif(&hard_iface->hardif_obj, net_dev); if (ret) goto free_if; hard_iface->if_num = -1; hard_iface->net_dev = net_dev; hard_iface->soft_iface = NULL; - hard_iface->if_status = IF_NOT_IN_USE; + hard_iface->if_status = BATADV_IF_NOT_IN_USE; INIT_LIST_HEAD(&hard_iface->list); /* extra reference for return */ atomic_set(&hard_iface->refcount, 2); - check_known_mac_addr(hard_iface->net_dev); - list_add_tail_rcu(&hard_iface->list, &hardif_list); + batadv_check_known_mac_addr(hard_iface->net_dev); + list_add_tail_rcu(&hard_iface->list, &batadv_hardif_list); - /** - * This can't be called via a bat_priv callback because + /* This can't be called via a bat_priv callback because * we have no bat_priv yet. */ atomic_set(&hard_iface->seqno, 1); @@ -457,102 +456,104 @@ out: return NULL; } -static void hardif_remove_interface(struct hard_iface *hard_iface) +static void batadv_hardif_remove_interface(struct batadv_hard_iface *hard_iface) { ASSERT_RTNL(); /* first deactivate interface */ - if (hard_iface->if_status != IF_NOT_IN_USE) - hardif_disable_interface(hard_iface); + if (hard_iface->if_status != BATADV_IF_NOT_IN_USE) + batadv_hardif_disable_interface(hard_iface); - if (hard_iface->if_status != IF_NOT_IN_USE) + if (hard_iface->if_status != BATADV_IF_NOT_IN_USE) return; - hard_iface->if_status = IF_TO_BE_REMOVED; - sysfs_del_hardif(&hard_iface->hardif_obj); - hardif_free_ref(hard_iface); + hard_iface->if_status = BATADV_IF_TO_BE_REMOVED; + batadv_sysfs_del_hardif(&hard_iface->hardif_obj); + batadv_hardif_free_ref(hard_iface); } -void hardif_remove_interfaces(void) +void batadv_hardif_remove_interfaces(void) { - struct hard_iface *hard_iface, *hard_iface_tmp; + struct batadv_hard_iface *hard_iface, *hard_iface_tmp; rtnl_lock(); list_for_each_entry_safe(hard_iface, hard_iface_tmp, - &hardif_list, list) { + &batadv_hardif_list, list) { list_del_rcu(&hard_iface->list); - hardif_remove_interface(hard_iface); + batadv_hardif_remove_interface(hard_iface); } rtnl_unlock(); } -static int hard_if_event(struct notifier_block *this, - unsigned long event, void *ptr) +static int batadv_hard_if_event(struct notifier_block *this, + unsigned long event, void *ptr) { struct net_device *net_dev = ptr; - struct hard_iface *hard_iface = hardif_get_by_netdev(net_dev); - struct hard_iface *primary_if = NULL; - struct bat_priv *bat_priv; + struct batadv_hard_iface *hard_iface; + struct batadv_hard_iface *primary_if = NULL; + struct batadv_priv *bat_priv; + hard_iface = batadv_hardif_get_by_netdev(net_dev); if (!hard_iface && event == NETDEV_REGISTER) - hard_iface = hardif_add_interface(net_dev); + hard_iface = batadv_hardif_add_interface(net_dev); if (!hard_iface) goto out; switch (event) { case NETDEV_UP: - hardif_activate_interface(hard_iface); + batadv_hardif_activate_interface(hard_iface); break; case NETDEV_GOING_DOWN: case NETDEV_DOWN: - hardif_deactivate_interface(hard_iface); + batadv_hardif_deactivate_interface(hard_iface); break; case NETDEV_UNREGISTER: list_del_rcu(&hard_iface->list); - hardif_remove_interface(hard_iface); + batadv_hardif_remove_interface(hard_iface); break; case NETDEV_CHANGEMTU: if (hard_iface->soft_iface) - update_min_mtu(hard_iface->soft_iface); + batadv_update_min_mtu(hard_iface->soft_iface); break; case NETDEV_CHANGEADDR: - if (hard_iface->if_status == IF_NOT_IN_USE) + if (hard_iface->if_status == BATADV_IF_NOT_IN_USE) goto hardif_put; - 
check_known_mac_addr(hard_iface->net_dev); + batadv_check_known_mac_addr(hard_iface->net_dev); bat_priv = netdev_priv(hard_iface->soft_iface); bat_priv->bat_algo_ops->bat_iface_update_mac(hard_iface); - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto hardif_put; if (hard_iface == primary_if) - primary_if_update_addr(bat_priv, NULL); + batadv_primary_if_update_addr(bat_priv, NULL); break; default: break; } hardif_put: - hardif_free_ref(hard_iface); + batadv_hardif_free_ref(hard_iface); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return NOTIFY_DONE; } /* This function returns true if the interface represented by ifindex is a - * 802.11 wireless device */ -bool is_wifi_iface(int ifindex) + * 802.11 wireless device + */ +bool batadv_is_wifi_iface(int ifindex) { struct net_device *net_device = NULL; bool ret = false; - if (ifindex == NULL_IFINDEX) + if (ifindex == BATADV_NULL_IFINDEX) goto out; net_device = dev_get_by_index(&init_net, ifindex); @@ -561,7 +562,8 @@ bool is_wifi_iface(int ifindex) #ifdef CONFIG_WIRELESS_EXT /* pre-cfg80211 drivers have to implement WEXT, so it is possible to - * check for wireless_handlers != NULL */ + * check for wireless_handlers != NULL + */ if (net_device->wireless_handlers) ret = true; else @@ -575,6 +577,6 @@ out: return ret; } -struct notifier_block hard_if_notifier = { - .notifier_call = hard_if_event, +struct notifier_block batadv_hard_if_notifier = { + .notifier_call = batadv_hard_if_event, }; diff --git a/net/batman-adv/hard-interface.h b/net/batman-adv/hard-interface.h index e68c5655e616..3732366e7445 100644 --- a/net/batman-adv/hard-interface.h +++ b/net/batman-adv/hard-interface.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
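hard-interface.c above relies on two standard registration hooks: dev_add_pack() attaches batadv_batman_skb_recv() to the batman-adv ethertype on a specific device, and batadv_hard_if_notifier reacts to NETDEV_* events so interfaces can be picked up and dropped at runtime. A minimal sketch of both hooks in one hypothetical out-of-tree module; the ethertype value and all names are placeholders, not batman-adv's.

```c
/*
 * Minimal sketch of the two registration hooks used above:
 * dev_add_pack() for a private ethertype and a netdevice notifier for
 * NETDEV_* events.  Hypothetical out-of-tree module for a 3.x kernel.
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>

#define DEMO_ETH_P 0x4305	/* placeholder ethertype */

static int demo_rcv(struct sk_buff *skb, struct net_device *dev,
		    struct packet_type *ptype, struct net_device *orig_dev)
{
	pr_info("demo: frame on %s, %u bytes\n", dev->name, skb->len);
	kfree_skb(skb);		/* consume the skb */
	return NET_RX_SUCCESS;
}

static struct packet_type demo_ptype = {
	.type = __constant_htons(DEMO_ETH_P),
	.func = demo_rcv,
};

static int demo_netdev_event(struct notifier_block *this,
			     unsigned long event, void *ptr)
{
	struct net_device *net_dev = ptr;	/* kernels of this era pass the device */

	switch (event) {
	case NETDEV_UP:
		pr_info("demo: %s is up\n", net_dev->name);
		break;
	case NETDEV_GOING_DOWN:
	case NETDEV_DOWN:
		pr_info("demo: %s going down\n", net_dev->name);
		break;
	default:
		break;
	}
	return NOTIFY_DONE;
}

static struct notifier_block demo_notifier = {
	.notifier_call = demo_netdev_event,
};

static int __init demo_init(void)
{
	int ret;

	dev_add_pack(&demo_ptype);
	ret = register_netdevice_notifier(&demo_notifier);
	if (ret)
		dev_remove_pack(&demo_ptype);
	return ret;
}

static void __exit demo_exit(void)
{
	unregister_netdevice_notifier(&demo_notifier);
	dev_remove_pack(&demo_ptype);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

In kernels of this era the notifier's void *ptr is the struct net_device itself, which is exactly how batadv_hard_if_event() above uses it.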
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,44 +15,44 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_HARD_INTERFACE_H_ #define _NET_BATMAN_ADV_HARD_INTERFACE_H_ -enum hard_if_state { - IF_NOT_IN_USE, - IF_TO_BE_REMOVED, - IF_INACTIVE, - IF_ACTIVE, - IF_TO_BE_ACTIVATED, - IF_I_WANT_YOU +enum batadv_hard_if_state { + BATADV_IF_NOT_IN_USE, + BATADV_IF_TO_BE_REMOVED, + BATADV_IF_INACTIVE, + BATADV_IF_ACTIVE, + BATADV_IF_TO_BE_ACTIVATED, + BATADV_IF_I_WANT_YOU, }; -extern struct notifier_block hard_if_notifier; +extern struct notifier_block batadv_hard_if_notifier; -struct hard_iface* -hardif_get_by_netdev(const struct net_device *net_dev); -int hardif_enable_interface(struct hard_iface *hard_iface, - const char *iface_name); -void hardif_disable_interface(struct hard_iface *hard_iface); -void hardif_remove_interfaces(void); -int hardif_min_mtu(struct net_device *soft_iface); -void update_min_mtu(struct net_device *soft_iface); -void hardif_free_rcu(struct rcu_head *rcu); -bool is_wifi_iface(int ifindex); +struct batadv_hard_iface* +batadv_hardif_get_by_netdev(const struct net_device *net_dev); +int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface, + const char *iface_name); +void batadv_hardif_disable_interface(struct batadv_hard_iface *hard_iface); +void batadv_hardif_remove_interfaces(void); +int batadv_hardif_min_mtu(struct net_device *soft_iface); +void batadv_update_min_mtu(struct net_device *soft_iface); +void batadv_hardif_free_rcu(struct rcu_head *rcu); +bool batadv_is_wifi_iface(int ifindex); -static inline void hardif_free_ref(struct hard_iface *hard_iface) +static inline void +batadv_hardif_free_ref(struct batadv_hard_iface *hard_iface) { if (atomic_dec_and_test(&hard_iface->refcount)) - call_rcu(&hard_iface->rcu, hardif_free_rcu); + call_rcu(&hard_iface->rcu, batadv_hardif_free_rcu); } -static inline struct hard_iface *primary_if_get_selected( - struct bat_priv *bat_priv) +static inline struct batadv_hard_iface * +batadv_primary_if_get_selected(struct batadv_priv *bat_priv) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; rcu_read_lock(); hard_iface = rcu_dereference(bat_priv->primary_if); diff --git a/net/batman-adv/hash.c b/net/batman-adv/hash.c index 117687bedf25..15a849c2d414 100644 --- a/net/batman-adv/hash.c +++ b/net/batman-adv/hash.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner * @@ -16,25 +15,24 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" #include "hash.h" /* clears the hash */ -static void hash_init(struct hashtable_t *hash) +static void batadv_hash_init(struct batadv_hashtable *hash) { uint32_t i; - for (i = 0 ; i < hash->size; i++) { + for (i = 0; i < hash->size; i++) { INIT_HLIST_HEAD(&hash->table[i]); spin_lock_init(&hash->list_locks[i]); } } /* free only the hashtable and the hash itself. 
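Almost every object touched in this series (gw_node, neigh_node, hard_iface) follows the lifetime rule visible in batadv_hardif_free_ref() above: lookups take a reference with atomic_inc_not_zero(), so a dying object can no longer be grabbed, and the last *_free_ref() call frees it, with call_rcu()/kfree_rcu() letting RCU readers finish first. A userspace sketch of just the counting discipline, with C11 atomics standing in for atomic_t and the RCU grace period deliberately omitted:

```c
/*
 * Userspace sketch of the reference counting discipline behind
 * batadv_hardif_free_ref() and atomic_inc_not_zero() above.  C11
 * atomics stand in for atomic_t; the kernel additionally defers the
 * actual free through call_rcu()/kfree_rcu(), which is omitted here.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct iface {
	atomic_int refcount;
	char name[16];
};

/* like atomic_inc_not_zero(): grab a reference only if one still exists */
static bool iface_get(struct iface *iface)
{
	int old = atomic_load(&iface->refcount);

	while (old != 0)
		if (atomic_compare_exchange_weak(&iface->refcount, &old, old + 1))
			return true;
	return false;
}

/* like batadv_hardif_free_ref(): free when the count drops to zero */
static void iface_put(struct iface *iface)
{
	if (atomic_fetch_sub(&iface->refcount, 1) == 1) {
		printf("freeing %s\n", iface->name);
		free(iface);
	}
}

int main(void)
{
	struct iface *iface = calloc(1, sizeof(*iface));

	if (!iface)
		return 1;
	atomic_init(&iface->refcount, 1);	/* reference held by the "list" */
	snprintf(iface->name, sizeof(iface->name), "eth0");

	if (iface_get(iface)) {			/* a lookup takes its own reference */
		printf("using %s\n", iface->name);
		iface_put(iface);		/* lookup done */
	}
	iface_put(iface);			/* drop the list reference: frees */
	return 0;
}
```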
*/ -void hash_destroy(struct hashtable_t *hash) +void batadv_hash_destroy(struct batadv_hashtable *hash) { kfree(hash->list_locks); kfree(hash->table); @@ -42,9 +40,9 @@ void hash_destroy(struct hashtable_t *hash) } /* allocates and clears the hash */ -struct hashtable_t *hash_new(uint32_t size) +struct batadv_hashtable *batadv_hash_new(uint32_t size) { - struct hashtable_t *hash; + struct batadv_hashtable *hash; hash = kmalloc(sizeof(*hash), GFP_ATOMIC); if (!hash) @@ -60,7 +58,7 @@ struct hashtable_t *hash_new(uint32_t size) goto free_table; hash->size = size; - hash_init(hash); + batadv_hash_init(hash); return hash; free_table: @@ -69,3 +67,12 @@ free_hash: kfree(hash); return NULL; } + +void batadv_hash_set_lock_class(struct batadv_hashtable *hash, + struct lock_class_key *key) +{ + uint32_t i; + + for (i = 0; i < hash->size; i++) + lockdep_set_class(&hash->list_locks[i], key); +} diff --git a/net/batman-adv/hash.h b/net/batman-adv/hash.h index d4bd7862719b..977de9c75fc2 100644 --- a/net/batman-adv/hash.h +++ b/net/batman-adv/hash.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2012 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_HASH_H_ @@ -24,35 +22,42 @@ #include <linux/list.h> -/* callback to a compare function. should - * compare 2 element datas for their keys, - * return 0 if same and not 0 if not - * same */ -typedef int (*hashdata_compare_cb)(const struct hlist_node *, const void *); +/* callback to a compare function. should compare 2 element datas for their + * keys, return 0 if same and not 0 if not same + */ +typedef int (*batadv_hashdata_compare_cb)(const struct hlist_node *, + const void *); /* the hashfunction, should return an index * based on the key in the data of the first - * argument and the size the second */ -typedef uint32_t (*hashdata_choose_cb)(const void *, uint32_t); -typedef void (*hashdata_free_cb)(struct hlist_node *, void *); + * argument and the size the second + */ +typedef uint32_t (*batadv_hashdata_choose_cb)(const void *, uint32_t); +typedef void (*batadv_hashdata_free_cb)(struct hlist_node *, void *); -struct hashtable_t { +struct batadv_hashtable { struct hlist_head *table; /* the hashtable itself with the buckets */ spinlock_t *list_locks; /* spinlock for each hash list entry */ uint32_t size; /* size of hashtable */ }; /* allocates and clears the hash */ -struct hashtable_t *hash_new(uint32_t size); +struct batadv_hashtable *batadv_hash_new(uint32_t size); + +/* set class key for all locks */ +void batadv_hash_set_lock_class(struct batadv_hashtable *hash, + struct lock_class_key *key); /* free only the hashtable and the hash itself. */ -void hash_destroy(struct hashtable_t *hash); +void batadv_hash_destroy(struct batadv_hashtable *hash); /* remove the hash structure. if hashdata_free_cb != NULL, this function will be * called to remove the elements inside of the hash. if you don't remove the - * elements, memory might be leaked. */ -static inline void hash_delete(struct hashtable_t *hash, - hashdata_free_cb free_cb, void *arg) + * elements, memory might be leaked. 
+ */ +static inline void batadv_hash_delete(struct batadv_hashtable *hash, + batadv_hashdata_free_cb free_cb, + void *arg) { struct hlist_head *head; struct hlist_node *node, *node_tmp; @@ -73,11 +78,11 @@ static inline void hash_delete(struct hashtable_t *hash, spin_unlock_bh(list_lock); } - hash_destroy(hash); + batadv_hash_destroy(hash); } /** - * hash_add - adds data to the hashtable + * batadv_hash_add - adds data to the hashtable * @hash: storage hash table * @compare: callback to determine if 2 hash elements are identical * @choose: callback calculating the hash index @@ -87,11 +92,11 @@ static inline void hash_delete(struct hashtable_t *hash, * Returns 0 on success, 1 if the element already is in the hash * and -1 on error. */ - -static inline int hash_add(struct hashtable_t *hash, - hashdata_compare_cb compare, - hashdata_choose_cb choose, - const void *data, struct hlist_node *data_node) +static inline int batadv_hash_add(struct batadv_hashtable *hash, + batadv_hashdata_compare_cb compare, + batadv_hashdata_choose_cb choose, + const void *data, + struct hlist_node *data_node) { uint32_t index; int ret = -1; @@ -106,26 +111,23 @@ static inline int hash_add(struct hashtable_t *hash, head = &hash->table[index]; list_lock = &hash->list_locks[index]; - rcu_read_lock(); - __hlist_for_each_rcu(node, head) { + spin_lock_bh(list_lock); + + hlist_for_each(node, head) { if (!compare(node, data)) continue; ret = 1; - goto err_unlock; + goto unlock; } - rcu_read_unlock(); /* no duplicate found in list, add new element */ - spin_lock_bh(list_lock); hlist_add_head_rcu(data_node, head); - spin_unlock_bh(list_lock); ret = 0; - goto out; -err_unlock: - rcu_read_unlock(); +unlock: + spin_unlock_bh(list_lock); out: return ret; } @@ -133,10 +135,12 @@ out: /* removes data from hash, if found. returns pointer do data on success, so you * can remove the used structure yourself, or NULL on error . data could be the * structure you use with just the key filled, we just need the key for - * comparing. */ -static inline void *hash_remove(struct hashtable_t *hash, - hashdata_compare_cb compare, - hashdata_choose_cb choose, void *data) + * comparing. + */ +static inline void *batadv_hash_remove(struct batadv_hashtable *hash, + batadv_hashdata_compare_cb compare, + batadv_hashdata_choose_cb choose, + void *data) { uint32_t index; struct hlist_node *node; diff --git a/net/batman-adv/icmp_socket.c b/net/batman-adv/icmp_socket.c index 2e98a57f3407..bde3cf747507 100644 --- a/net/batman-adv/icmp_socket.c +++ b/net/batman-adv/icmp_socket.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
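The hash helpers just shown keep one spinlock per bucket (the `list_locks` array next to `table` in `struct batadv_hashtable`), and after this patch `batadv_hash_add()` performs the duplicate check and the insertion under the same bucket lock rather than walking the bucket in an RCU read section and taking the lock only for the final `hlist_add_head_rcu()`, presumably to close the window between the check and the insert. The following standalone C sketch mirrors that structure in userspace with pthread mutexes; the bucket count and all names are invented for the example, only the 0/1/-1 return convention is taken from the comment in the patch.

```c
/* Illustrative userspace analogue of a bucketed hash with one lock per
 * bucket; the duplicate check and the insert happen under the same lock.
 * Not kernel code: names and sizes are made up for the example. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_BUCKETS 16

struct entry {
	struct entry *next;
	uint32_t key;
};

struct bucket_hash {
	struct entry *buckets[NUM_BUCKETS];  /* the buckets themselves */
	pthread_mutex_t locks[NUM_BUCKETS];  /* one lock per bucket */
};

static void hash_init(struct bucket_hash *hash)
{
	for (int i = 0; i < NUM_BUCKETS; i++) {
		hash->buckets[i] = NULL;
		pthread_mutex_init(&hash->locks[i], NULL);
	}
}

/* Returns 0 on success, 1 if the key is already present, -1 on error,
 * following the convention documented for the kernel helper. */
static int hash_add(struct bucket_hash *hash, uint32_t key)
{
	uint32_t idx = key % NUM_BUCKETS;
	struct entry *e;
	int ret = 1;

	pthread_mutex_lock(&hash->locks[idx]);
	for (e = hash->buckets[idx]; e; e = e->next)
		if (e->key == key)
			goto unlock;            /* duplicate found */

	e = malloc(sizeof(*e));
	if (!e) {
		ret = -1;
		goto unlock;
	}
	e->key = key;
	e->next = hash->buckets[idx];
	hash->buckets[idx] = e;
	ret = 0;
unlock:
	pthread_mutex_unlock(&hash->locks[idx]);
	return ret;
}

int main(void)
{
	struct bucket_hash hash;

	hash_init(&hash);
	printf("first add:  %d\n", hash_add(&hash, 42));  /* 0 */
	printf("second add: %d\n", hash_add(&hash, 42));  /* 1, duplicate */
	return 0;
}
```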
contributors: * * Marek Lindner * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -28,21 +26,21 @@ #include "originator.h" #include "hard-interface.h" -static struct socket_client *socket_client_hash[256]; +static struct batadv_socket_client *batadv_socket_client_hash[256]; -static void bat_socket_add_packet(struct socket_client *socket_client, - struct icmp_packet_rr *icmp_packet, - size_t icmp_len); +static void batadv_socket_add_packet(struct batadv_socket_client *socket_client, + struct batadv_icmp_packet_rr *icmp_packet, + size_t icmp_len); -void bat_socket_init(void) +void batadv_socket_init(void) { - memset(socket_client_hash, 0, sizeof(socket_client_hash)); + memset(batadv_socket_client_hash, 0, sizeof(batadv_socket_client_hash)); } -static int bat_socket_open(struct inode *inode, struct file *file) +static int batadv_socket_open(struct inode *inode, struct file *file) { unsigned int i; - struct socket_client *socket_client; + struct batadv_socket_client *socket_client; nonseekable_open(inode, file); @@ -51,14 +49,14 @@ static int bat_socket_open(struct inode *inode, struct file *file) if (!socket_client) return -ENOMEM; - for (i = 0; i < ARRAY_SIZE(socket_client_hash); i++) { - if (!socket_client_hash[i]) { - socket_client_hash[i] = socket_client; + for (i = 0; i < ARRAY_SIZE(batadv_socket_client_hash); i++) { + if (!batadv_socket_client_hash[i]) { + batadv_socket_client_hash[i] = socket_client; break; } } - if (i == ARRAY_SIZE(socket_client_hash)) { + if (i == ARRAY_SIZE(batadv_socket_client_hash)) { pr_err("Error - can't add another packet client: maximum number of clients reached\n"); kfree(socket_client); return -EXFULL; @@ -73,14 +71,14 @@ static int bat_socket_open(struct inode *inode, struct file *file) file->private_data = socket_client; - inc_module_count(); + batadv_inc_module_count(); return 0; } -static int bat_socket_release(struct inode *inode, struct file *file) +static int batadv_socket_release(struct inode *inode, struct file *file) { - struct socket_client *socket_client = file->private_data; - struct socket_packet *socket_packet; + struct batadv_socket_client *socket_client = file->private_data; + struct batadv_socket_packet *socket_packet; struct list_head *list_pos, *list_pos_tmp; spin_lock_bh(&socket_client->lock); @@ -88,33 +86,33 @@ static int bat_socket_release(struct inode *inode, struct file *file) /* for all packets in the queue ... 
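`batadv_socket_open()` above hands each opener of the debugfs socket a slot in a fixed 256-entry table, stores the slot index in the client, and refuses further opens with `-EXFULL` once every slot is taken; `batadv_socket_release()` simply clears the slot again. A minimal userspace sketch of that slot-table idea follows; the names and the absence of locking are simplifications for illustration, not a claim about the driver.

```c
/* Minimal sketch of a fixed-size client slot table: open() claims the
 * first free slot and remembers its index, close() clears it again. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_CLIENTS 256

struct client {
	unsigned int index;
	/* per-client state would live here */
};

static struct client *client_slots[MAX_CLIENTS];

static struct client *client_open(void)
{
	for (unsigned int i = 0; i < MAX_CLIENTS; i++) {
		if (!client_slots[i]) {
			struct client *c = calloc(1, sizeof(*c));

			if (!c)
				return NULL;
			c->index = i;
			client_slots[i] = c;
			return c;
		}
	}
	return NULL;    /* table full, the driver reports -EXFULL here */
}

static void client_close(struct client *c)
{
	client_slots[c->index] = NULL;
	free(c);
}

int main(void)
{
	struct client *c = client_open();

	printf("got slot %u\n", c->index);
	client_close(c);
	return 0;
}
```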
*/ list_for_each_safe(list_pos, list_pos_tmp, &socket_client->queue_list) { socket_packet = list_entry(list_pos, - struct socket_packet, list); + struct batadv_socket_packet, list); list_del(list_pos); kfree(socket_packet); } - socket_client_hash[socket_client->index] = NULL; + batadv_socket_client_hash[socket_client->index] = NULL; spin_unlock_bh(&socket_client->lock); kfree(socket_client); - dec_module_count(); + batadv_dec_module_count(); return 0; } -static ssize_t bat_socket_read(struct file *file, char __user *buf, - size_t count, loff_t *ppos) +static ssize_t batadv_socket_read(struct file *file, char __user *buf, + size_t count, loff_t *ppos) { - struct socket_client *socket_client = file->private_data; - struct socket_packet *socket_packet; + struct batadv_socket_client *socket_client = file->private_data; + struct batadv_socket_packet *socket_packet; size_t packet_len; int error; if ((file->f_flags & O_NONBLOCK) && (socket_client->queue_len == 0)) return -EAGAIN; - if ((!buf) || (count < sizeof(struct icmp_packet))) + if ((!buf) || (count < sizeof(struct batadv_icmp_packet))) return -EINVAL; if (!access_ok(VERIFY_WRITE, buf, count)) @@ -129,7 +127,7 @@ static ssize_t bat_socket_read(struct file *file, char __user *buf, spin_lock_bh(&socket_client->lock); socket_packet = list_first_entry(&socket_client->queue_list, - struct socket_packet, list); + struct batadv_socket_packet, list); list_del(&socket_packet->list); socket_client->queue_len--; @@ -146,34 +144,34 @@ static ssize_t bat_socket_read(struct file *file, char __user *buf, return packet_len; } -static ssize_t bat_socket_write(struct file *file, const char __user *buff, - size_t len, loff_t *off) +static ssize_t batadv_socket_write(struct file *file, const char __user *buff, + size_t len, loff_t *off) { - struct socket_client *socket_client = file->private_data; - struct bat_priv *bat_priv = socket_client->bat_priv; - struct hard_iface *primary_if = NULL; + struct batadv_socket_client *socket_client = file->private_data; + struct batadv_priv *bat_priv = socket_client->bat_priv; + struct batadv_hard_iface *primary_if = NULL; struct sk_buff *skb; - struct icmp_packet_rr *icmp_packet; + struct batadv_icmp_packet_rr *icmp_packet; - struct orig_node *orig_node = NULL; - struct neigh_node *neigh_node = NULL; - size_t packet_len = sizeof(struct icmp_packet); + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *neigh_node = NULL; + size_t packet_len = sizeof(struct batadv_icmp_packet); - if (len < sizeof(struct icmp_packet)) { - bat_dbg(DBG_BATMAN, bat_priv, - "Error - can't send packet from char device: invalid packet size\n"); + if (len < sizeof(struct batadv_icmp_packet)) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Error - can't send packet from char device: invalid packet size\n"); return -EINVAL; } - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { len = -EFAULT; goto out; } - if (len >= sizeof(struct icmp_packet_rr)) - packet_len = sizeof(struct icmp_packet_rr); + if (len >= sizeof(struct batadv_icmp_packet_rr)) + packet_len = sizeof(struct batadv_icmp_packet_rr); skb = dev_alloc_skb(packet_len + ETH_HLEN); if (!skb) { @@ -182,81 +180,82 @@ static ssize_t bat_socket_write(struct file *file, const char __user *buff, } skb_reserve(skb, ETH_HLEN); - icmp_packet = (struct icmp_packet_rr *)skb_put(skb, packet_len); + icmp_packet = (struct batadv_icmp_packet_rr *)skb_put(skb, packet_len); if (copy_from_user(icmp_packet, buff, 
packet_len)) { len = -EFAULT; goto free_skb; } - if (icmp_packet->header.packet_type != BAT_ICMP) { - bat_dbg(DBG_BATMAN, bat_priv, - "Error - can't send packet from char device: got bogus packet type (expected: BAT_ICMP)\n"); + if (icmp_packet->header.packet_type != BATADV_ICMP) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Error - can't send packet from char device: got bogus packet type (expected: BAT_ICMP)\n"); len = -EINVAL; goto free_skb; } - if (icmp_packet->msg_type != ECHO_REQUEST) { - bat_dbg(DBG_BATMAN, bat_priv, - "Error - can't send packet from char device: got bogus message type (expected: ECHO_REQUEST)\n"); + if (icmp_packet->msg_type != BATADV_ECHO_REQUEST) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Error - can't send packet from char device: got bogus message type (expected: ECHO_REQUEST)\n"); len = -EINVAL; goto free_skb; } icmp_packet->uid = socket_client->index; - if (icmp_packet->header.version != COMPAT_VERSION) { - icmp_packet->msg_type = PARAMETER_PROBLEM; - icmp_packet->header.version = COMPAT_VERSION; - bat_socket_add_packet(socket_client, icmp_packet, packet_len); + if (icmp_packet->header.version != BATADV_COMPAT_VERSION) { + icmp_packet->msg_type = BATADV_PARAMETER_PROBLEM; + icmp_packet->header.version = BATADV_COMPAT_VERSION; + batadv_socket_add_packet(socket_client, icmp_packet, + packet_len); goto free_skb; } - if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE) + if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE) goto dst_unreach; - orig_node = orig_hash_find(bat_priv, icmp_packet->dst); + orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->dst); if (!orig_node) goto dst_unreach; - neigh_node = orig_node_get_router(orig_node); + neigh_node = batadv_orig_node_get_router(orig_node); if (!neigh_node) goto dst_unreach; if (!neigh_node->if_incoming) goto dst_unreach; - if (neigh_node->if_incoming->if_status != IF_ACTIVE) + if (neigh_node->if_incoming->if_status != BATADV_IF_ACTIVE) goto dst_unreach; memcpy(icmp_packet->orig, primary_if->net_dev->dev_addr, ETH_ALEN); - if (packet_len == sizeof(struct icmp_packet_rr)) + if (packet_len == sizeof(struct batadv_icmp_packet_rr)) memcpy(icmp_packet->rr, neigh_node->if_incoming->net_dev->dev_addr, ETH_ALEN); - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); goto out; dst_unreach: - icmp_packet->msg_type = DESTINATION_UNREACHABLE; - bat_socket_add_packet(socket_client, icmp_packet, packet_len); + icmp_packet->msg_type = BATADV_DESTINATION_UNREACHABLE; + batadv_socket_add_packet(socket_client, icmp_packet, packet_len); free_skb: kfree_skb(skb); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return len; } -static unsigned int bat_socket_poll(struct file *file, poll_table *wait) +static unsigned int batadv_socket_poll(struct file *file, poll_table *wait) { - struct socket_client *socket_client = file->private_data; + struct batadv_socket_client *socket_client = file->private_data; poll_wait(file, &socket_client->queue_wait, wait); @@ -266,39 +265,39 @@ static unsigned int bat_socket_poll(struct file *file, poll_table *wait) return 0; } -static const struct file_operations fops = { +static const struct file_operations batadv_fops = { .owner = THIS_MODULE, - .open = bat_socket_open, - 
.release = bat_socket_release, - .read = bat_socket_read, - .write = bat_socket_write, - .poll = bat_socket_poll, + .open = batadv_socket_open, + .release = batadv_socket_release, + .read = batadv_socket_read, + .write = batadv_socket_write, + .poll = batadv_socket_poll, .llseek = no_llseek, }; -int bat_socket_setup(struct bat_priv *bat_priv) +int batadv_socket_setup(struct batadv_priv *bat_priv) { struct dentry *d; if (!bat_priv->debug_dir) goto err; - d = debugfs_create_file(ICMP_SOCKET, S_IFREG | S_IWUSR | S_IRUSR, - bat_priv->debug_dir, bat_priv, &fops); - if (d) + d = debugfs_create_file(BATADV_ICMP_SOCKET, S_IFREG | S_IWUSR | S_IRUSR, + bat_priv->debug_dir, bat_priv, &batadv_fops); + if (!d) goto err; return 0; err: - return 1; + return -ENOMEM; } -static void bat_socket_add_packet(struct socket_client *socket_client, - struct icmp_packet_rr *icmp_packet, - size_t icmp_len) +static void batadv_socket_add_packet(struct batadv_socket_client *socket_client, + struct batadv_icmp_packet_rr *icmp_packet, + size_t icmp_len) { - struct socket_packet *socket_packet; + struct batadv_socket_packet *socket_packet; socket_packet = kmalloc(sizeof(*socket_packet), GFP_ATOMIC); @@ -312,8 +311,9 @@ static void bat_socket_add_packet(struct socket_client *socket_client, spin_lock_bh(&socket_client->lock); /* while waiting for the lock the socket_client could have been - * deleted */ - if (!socket_client_hash[icmp_packet->uid]) { + * deleted + */ + if (!batadv_socket_client_hash[icmp_packet->uid]) { spin_unlock_bh(&socket_client->lock); kfree(socket_packet); return; @@ -324,7 +324,8 @@ static void bat_socket_add_packet(struct socket_client *socket_client, if (socket_client->queue_len > 100) { socket_packet = list_first_entry(&socket_client->queue_list, - struct socket_packet, list); + struct batadv_socket_packet, + list); list_del(&socket_packet->list); kfree(socket_packet); @@ -336,11 +337,12 @@ static void bat_socket_add_packet(struct socket_client *socket_client, wake_up(&socket_client->queue_wait); } -void bat_socket_receive_packet(struct icmp_packet_rr *icmp_packet, - size_t icmp_len) +void batadv_socket_receive_packet(struct batadv_icmp_packet_rr *icmp_packet, + size_t icmp_len) { - struct socket_client *hash = socket_client_hash[icmp_packet->uid]; + struct batadv_socket_client *hash; + hash = batadv_socket_client_hash[icmp_packet->uid]; if (hash) - bat_socket_add_packet(hash, icmp_packet, icmp_len); + batadv_socket_add_packet(hash, icmp_packet, icmp_len); } diff --git a/net/batman-adv/icmp_socket.h b/net/batman-adv/icmp_socket.h index 380ed4c2443a..29443a1dbb5c 100644 --- a/net/batman-adv/icmp_socket.h +++ b/net/batman-adv/icmp_socket.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
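`batadv_socket_add_packet()` above appends the received reply to the client's queue but caps the backlog: once `queue_len` exceeds 100 it unlinks and frees the oldest entry before waking the reader. The same "bounded FIFO, drop the oldest on overflow" idea in a self-contained userspace sketch; the limit of 100 mirrors the driver, everything else is invented for the example.

```c
/* Bounded FIFO that drops its oldest element once a limit is exceeded. */
#include <stdio.h>
#include <stdlib.h>

#define QUEUE_LIMIT 100

struct qitem {
	struct qitem *next;
	int value;
};

struct bounded_queue {
	struct qitem *head, *tail;
	unsigned int len;
};

static void queue_push(struct bounded_queue *q, int value)
{
	struct qitem *item = malloc(sizeof(*item));

	if (!item)
		return;
	item->value = value;
	item->next = NULL;

	if (q->tail)
		q->tail->next = item;
	else
		q->head = item;
	q->tail = item;
	q->len++;

	/* over the limit: unlink and free the oldest entry */
	if (q->len > QUEUE_LIMIT) {
		struct qitem *oldest = q->head;

		q->head = oldest->next;
		if (!q->head)
			q->tail = NULL;
		free(oldest);
		q->len--;
	}
}

int main(void)
{
	struct bounded_queue q = { 0 };

	for (int i = 0; i < 150; i++)
		queue_push(&q, i);
	printf("len=%u oldest=%d\n", q.len, q.head->value);  /* 100, 50 */
	return 0;
}
```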
contributors: * * Marek Lindner * @@ -16,17 +15,16 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_ICMP_SOCKET_H_ #define _NET_BATMAN_ADV_ICMP_SOCKET_H_ -#define ICMP_SOCKET "socket" +#define BATADV_ICMP_SOCKET "socket" -void bat_socket_init(void); -int bat_socket_setup(struct bat_priv *bat_priv); -void bat_socket_receive_packet(struct icmp_packet_rr *icmp_packet, - size_t icmp_len); +void batadv_socket_init(void); +int batadv_socket_setup(struct batadv_priv *bat_priv); +void batadv_socket_receive_packet(struct batadv_icmp_packet_rr *icmp_packet, + size_t icmp_len); #endif /* _NET_BATMAN_ADV_ICMP_SOCKET_H_ */ diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c index 083a2993efe4..13c88b25ab31 100644 --- a/net/batman-adv/main.c +++ b/net/batman-adv/main.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,12 +15,11 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" -#include "bat_sysfs.h" -#include "bat_debugfs.h" +#include "sysfs.h" +#include "debugfs.h" #include "routing.h" #include "send.h" #include "originator.h" @@ -37,61 +35,65 @@ /* List manipulations on hardif_list have to be rtnl_lock()'ed, - * list traversals just rcu-locked */ -struct list_head hardif_list; -static int (*recv_packet_handler[256])(struct sk_buff *, struct hard_iface *); -char bat_routing_algo[20] = "BATMAN IV"; -static struct hlist_head bat_algo_list; + * list traversals just rcu-locked + */ +struct list_head batadv_hardif_list; +static int (*batadv_rx_handler[256])(struct sk_buff *, + struct batadv_hard_iface *); +char batadv_routing_algo[20] = "BATMAN_IV"; +static struct hlist_head batadv_algo_list; -unsigned char broadcast_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; +unsigned char batadv_broadcast_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; -struct workqueue_struct *bat_event_workqueue; +struct workqueue_struct *batadv_event_workqueue; -static void recv_handler_init(void); +static void batadv_recv_handler_init(void); -static int __init batman_init(void) +static int __init batadv_init(void) { - INIT_LIST_HEAD(&hardif_list); - INIT_HLIST_HEAD(&bat_algo_list); + INIT_LIST_HEAD(&batadv_hardif_list); + INIT_HLIST_HEAD(&batadv_algo_list); - recv_handler_init(); + batadv_recv_handler_init(); - bat_iv_init(); + batadv_iv_init(); /* the name should not be longer than 10 chars - see - * http://lwn.net/Articles/23634/ */ - bat_event_workqueue = create_singlethread_workqueue("bat_events"); + * http://lwn.net/Articles/23634/ + */ + batadv_event_workqueue = create_singlethread_workqueue("bat_events"); - if (!bat_event_workqueue) + if (!batadv_event_workqueue) return -ENOMEM; - bat_socket_init(); - debugfs_init(); + batadv_socket_init(); + batadv_debugfs_init(); - register_netdevice_notifier(&hard_if_notifier); + register_netdevice_notifier(&batadv_hard_if_notifier); pr_info("B.A.T.M.A.N. 
advanced %s (compatibility version %i) loaded\n", - SOURCE_VERSION, COMPAT_VERSION); + BATADV_SOURCE_VERSION, BATADV_COMPAT_VERSION); return 0; } -static void __exit batman_exit(void) +static void __exit batadv_exit(void) { - debugfs_destroy(); - unregister_netdevice_notifier(&hard_if_notifier); - hardif_remove_interfaces(); + batadv_debugfs_destroy(); + unregister_netdevice_notifier(&batadv_hard_if_notifier); + batadv_hardif_remove_interfaces(); - flush_workqueue(bat_event_workqueue); - destroy_workqueue(bat_event_workqueue); - bat_event_workqueue = NULL; + flush_workqueue(batadv_event_workqueue); + destroy_workqueue(batadv_event_workqueue); + batadv_event_workqueue = NULL; rcu_barrier(); } -int mesh_init(struct net_device *soft_iface) +int batadv_mesh_init(struct net_device *soft_iface) { - struct bat_priv *bat_priv = netdev_priv(soft_iface); + struct batadv_priv *bat_priv = netdev_priv(soft_iface); + int ret; spin_lock_init(&bat_priv->forw_bat_list_lock); spin_lock_init(&bat_priv->forw_bcast_list_lock); @@ -110,72 +112,77 @@ int mesh_init(struct net_device *soft_iface) INIT_LIST_HEAD(&bat_priv->tt_req_list); INIT_LIST_HEAD(&bat_priv->tt_roam_list); - if (originator_init(bat_priv) < 1) + ret = batadv_originator_init(bat_priv); + if (ret < 0) goto err; - if (tt_init(bat_priv) < 1) + ret = batadv_tt_init(bat_priv); + if (ret < 0) goto err; - tt_local_add(soft_iface, soft_iface->dev_addr, NULL_IFINDEX); + batadv_tt_local_add(soft_iface, soft_iface->dev_addr, + BATADV_NULL_IFINDEX); - if (vis_init(bat_priv) < 1) + ret = batadv_vis_init(bat_priv); + if (ret < 0) goto err; - if (bla_init(bat_priv) < 1) + ret = batadv_bla_init(bat_priv); + if (ret < 0) goto err; atomic_set(&bat_priv->gw_reselect, 0); - atomic_set(&bat_priv->mesh_state, MESH_ACTIVE); - goto end; - -err: - mesh_free(soft_iface); - return -1; + atomic_set(&bat_priv->mesh_state, BATADV_MESH_ACTIVE); -end: return 0; + +err: + batadv_mesh_free(soft_iface); + return ret; } -void mesh_free(struct net_device *soft_iface) +void batadv_mesh_free(struct net_device *soft_iface) { - struct bat_priv *bat_priv = netdev_priv(soft_iface); + struct batadv_priv *bat_priv = netdev_priv(soft_iface); - atomic_set(&bat_priv->mesh_state, MESH_DEACTIVATING); + atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING); - purge_outstanding_packets(bat_priv, NULL); + batadv_purge_outstanding_packets(bat_priv, NULL); - vis_quit(bat_priv); + batadv_vis_quit(bat_priv); - gw_node_purge(bat_priv); - originator_free(bat_priv); + batadv_gw_node_purge(bat_priv); + batadv_originator_free(bat_priv); - tt_free(bat_priv); + batadv_tt_free(bat_priv); - bla_free(bat_priv); + batadv_bla_free(bat_priv); - atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); + free_percpu(bat_priv->bat_counters); + + atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE); } -void inc_module_count(void) +void batadv_inc_module_count(void) { try_module_get(THIS_MODULE); } -void dec_module_count(void) +void batadv_dec_module_count(void) { module_put(THIS_MODULE); } -int is_my_mac(const uint8_t *addr) +int batadv_is_my_mac(const uint8_t *addr) { - const struct hard_iface *hard_iface; + const struct batadv_hard_iface *hard_iface; rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { - if (hard_iface->if_status != IF_ACTIVE) + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { + if (hard_iface->if_status != BATADV_IF_ACTIVE) continue; - if (compare_eth(hard_iface->net_dev->dev_addr, addr)) { + if (batadv_compare_eth(hard_iface->net_dev->dev_addr, addr)) { 
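`batadv_mesh_init()` above is typical of the error-handling change running through this series: the sub-init functions now return 0 or a negative errno instead of 1/-1, the caller stores that value in `ret`, and a single `err:` label tears everything down via `batadv_mesh_free()` before propagating `ret`. A small standalone illustration of the same pattern, with placeholder resources that are not from the kernel sources:

```c
/* Sketch of the "0 / -errno plus single error label" init pattern. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
	int *table;
	FILE *log;
};

static int ctx_init(struct ctx *c)
{
	int ret;

	c->table = calloc(1024, sizeof(*c->table));
	if (!c->table) {
		ret = -ENOMEM;
		goto err;
	}

	c->log = fopen("/tmp/ctx.log", "w");
	if (!c->log) {
		ret = -errno;
		goto err;
	}

	return 0;            /* success is always 0 */

err:
	/* one teardown path frees whatever was already set up */
	free(c->table);
	c->table = NULL;
	return ret;          /* propagate the real errno, not -1 */
}

int main(void)
{
	struct ctx c = { 0 };
	int ret = ctx_init(&c);

	printf("ctx_init: %d\n", ret);
	if (!ret) {
		fclose(c.log);
		free(c.table);
	}
	return ret ? 1 : 0;
}
```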
rcu_read_unlock(); return 1; } @@ -184,8 +191,8 @@ int is_my_mac(const uint8_t *addr) return 0; } -static int recv_unhandled_packet(struct sk_buff *skb, - struct hard_iface *recv_if) +static int batadv_recv_unhandled_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { return NET_RX_DROP; } @@ -193,16 +200,18 @@ static int recv_unhandled_packet(struct sk_buff *skb, /* incoming packets with the batman ethertype received on any active hard * interface */ -int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, - struct packet_type *ptype, struct net_device *orig_dev) +int batadv_batman_skb_recv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *ptype, + struct net_device *orig_dev) { - struct bat_priv *bat_priv; - struct batman_ogm_packet *batman_ogm_packet; - struct hard_iface *hard_iface; + struct batadv_priv *bat_priv; + struct batadv_ogm_packet *batadv_ogm_packet; + struct batadv_hard_iface *hard_iface; uint8_t idx; int ret; - hard_iface = container_of(ptype, struct hard_iface, batman_adv_ptype); + hard_iface = container_of(ptype, struct batadv_hard_iface, + batman_adv_ptype); skb = skb_share_check(skb, GFP_ATOMIC); /* skb was released by skb_share_check() */ @@ -222,27 +231,27 @@ int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, bat_priv = netdev_priv(hard_iface->soft_iface); - if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE) + if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE) goto err_free; /* discard frames on not active interfaces */ - if (hard_iface->if_status != IF_ACTIVE) + if (hard_iface->if_status != BATADV_IF_ACTIVE) goto err_free; - batman_ogm_packet = (struct batman_ogm_packet *)skb->data; + batadv_ogm_packet = (struct batadv_ogm_packet *)skb->data; - if (batman_ogm_packet->header.version != COMPAT_VERSION) { - bat_dbg(DBG_BATMAN, bat_priv, - "Drop packet: incompatible batman version (%i)\n", - batman_ogm_packet->header.version); + if (batadv_ogm_packet->header.version != BATADV_COMPAT_VERSION) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Drop packet: incompatible batman version (%i)\n", + batadv_ogm_packet->header.version); goto err_free; } /* all receive handlers return whether they received or reused * the supplied skb. if not, we have to free the skb. 
*/ - idx = batman_ogm_packet->header.packet_type; - ret = (*recv_packet_handler[idx])(skb, hard_iface); + idx = batadv_ogm_packet->header.packet_type; + ret = (*batadv_rx_handler[idx])(skb, hard_iface); if (ret == NET_RX_DROP) kfree_skb(skb); @@ -259,51 +268,52 @@ err_out: return NET_RX_DROP; } -static void recv_handler_init(void) +static void batadv_recv_handler_init(void) { int i; - for (i = 0; i < ARRAY_SIZE(recv_packet_handler); i++) - recv_packet_handler[i] = recv_unhandled_packet; + for (i = 0; i < ARRAY_SIZE(batadv_rx_handler); i++) + batadv_rx_handler[i] = batadv_recv_unhandled_packet; /* batman icmp packet */ - recv_packet_handler[BAT_ICMP] = recv_icmp_packet; + batadv_rx_handler[BATADV_ICMP] = batadv_recv_icmp_packet; /* unicast packet */ - recv_packet_handler[BAT_UNICAST] = recv_unicast_packet; + batadv_rx_handler[BATADV_UNICAST] = batadv_recv_unicast_packet; /* fragmented unicast packet */ - recv_packet_handler[BAT_UNICAST_FRAG] = recv_ucast_frag_packet; + batadv_rx_handler[BATADV_UNICAST_FRAG] = batadv_recv_ucast_frag_packet; /* broadcast packet */ - recv_packet_handler[BAT_BCAST] = recv_bcast_packet; + batadv_rx_handler[BATADV_BCAST] = batadv_recv_bcast_packet; /* vis packet */ - recv_packet_handler[BAT_VIS] = recv_vis_packet; + batadv_rx_handler[BATADV_VIS] = batadv_recv_vis_packet; /* Translation table query (request or response) */ - recv_packet_handler[BAT_TT_QUERY] = recv_tt_query; + batadv_rx_handler[BATADV_TT_QUERY] = batadv_recv_tt_query; /* Roaming advertisement */ - recv_packet_handler[BAT_ROAM_ADV] = recv_roam_adv; + batadv_rx_handler[BATADV_ROAM_ADV] = batadv_recv_roam_adv; } -int recv_handler_register(uint8_t packet_type, - int (*recv_handler)(struct sk_buff *, - struct hard_iface *)) +int +batadv_recv_handler_register(uint8_t packet_type, + int (*recv_handler)(struct sk_buff *, + struct batadv_hard_iface *)) { - if (recv_packet_handler[packet_type] != &recv_unhandled_packet) + if (batadv_rx_handler[packet_type] != &batadv_recv_unhandled_packet) return -EBUSY; - recv_packet_handler[packet_type] = recv_handler; + batadv_rx_handler[packet_type] = recv_handler; return 0; } -void recv_handler_unregister(uint8_t packet_type) +void batadv_recv_handler_unregister(uint8_t packet_type) { - recv_packet_handler[packet_type] = recv_unhandled_packet; + batadv_rx_handler[packet_type] = batadv_recv_unhandled_packet; } -static struct bat_algo_ops *bat_algo_get(char *name) +static struct batadv_algo_ops *batadv_algo_get(char *name) { - struct bat_algo_ops *bat_algo_ops = NULL, *bat_algo_ops_tmp; + struct batadv_algo_ops *bat_algo_ops = NULL, *bat_algo_ops_tmp; struct hlist_node *node; - hlist_for_each_entry(bat_algo_ops_tmp, node, &bat_algo_list, list) { + hlist_for_each_entry(bat_algo_ops_tmp, node, &batadv_algo_list, list) { if (strcmp(bat_algo_ops_tmp->name, name) != 0) continue; @@ -314,15 +324,16 @@ static struct bat_algo_ops *bat_algo_get(char *name) return bat_algo_ops; } -int bat_algo_register(struct bat_algo_ops *bat_algo_ops) +int batadv_algo_register(struct batadv_algo_ops *bat_algo_ops) { - struct bat_algo_ops *bat_algo_ops_tmp; - int ret = -1; + struct batadv_algo_ops *bat_algo_ops_tmp; + int ret; - bat_algo_ops_tmp = bat_algo_get(bat_algo_ops->name); + bat_algo_ops_tmp = batadv_algo_get(bat_algo_ops->name); if (bat_algo_ops_tmp) { pr_info("Trying to register already registered routing algorithm: %s\n", bat_algo_ops->name); + ret = -EEXIST; goto out; } @@ -335,23 +346,24 @@ int bat_algo_register(struct bat_algo_ops *bat_algo_ops) !bat_algo_ops->bat_ogm_emit) { 
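The receive path above dispatches purely on the one-byte packet type: `batadv_recv_handler_init()` points all 256 slots at a stub that returns `NET_RX_DROP`, the known packet types are then wired to their handlers, and `batadv_recv_handler_register()` refuses to overwrite an occupied slot with `-EBUSY`; the caller frees the skb whenever the chosen handler reports a drop. A compact userspace model of that dispatch table, with invented handler names and payloads:

```c
/* Userspace model of a 256-entry handler table indexed by a one-byte
 * packet type, with a default "drop" handler and -EBUSY on double
 * registration.  Names are illustrative only. */
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum { RX_OK = 0, RX_DROP = 1 };

typedef int (*rx_handler_t)(const uint8_t *payload, size_t len);

static int rx_unhandled(const uint8_t *payload, size_t len)
{
	(void)payload; (void)len;
	return RX_DROP;                 /* unknown type: drop */
}

static rx_handler_t rx_handlers[256];

static void rx_handlers_init(void)
{
	for (unsigned int i = 0; i < 256; i++)
		rx_handlers[i] = rx_unhandled;
}

static int rx_handler_register(uint8_t type, rx_handler_t handler)
{
	if (rx_handlers[type] != rx_unhandled)
		return -EBUSY;          /* slot already taken */
	rx_handlers[type] = handler;
	return 0;
}

static int rx_echo(const uint8_t *payload, size_t len)
{
	(void)payload;
	printf("echo packet, %zu byte payload\n", len);
	return RX_OK;
}

int main(void)
{
	uint8_t frame[] = { 0x01, 0xde, 0xad };  /* type byte + payload */

	rx_handlers_init();
	rx_handler_register(0x01, rx_echo);

	/* dispatch on the first byte: exactly one table lookup */
	return rx_handlers[frame[0]](frame + 1, sizeof(frame) - 1);
}
```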
pr_info("Routing algo '%s' does not implement required ops\n", bat_algo_ops->name); + ret = -EINVAL; goto out; } INIT_HLIST_NODE(&bat_algo_ops->list); - hlist_add_head(&bat_algo_ops->list, &bat_algo_list); + hlist_add_head(&bat_algo_ops->list, &batadv_algo_list); ret = 0; out: return ret; } -int bat_algo_select(struct bat_priv *bat_priv, char *name) +int batadv_algo_select(struct batadv_priv *bat_priv, char *name) { - struct bat_algo_ops *bat_algo_ops; - int ret = -1; + struct batadv_algo_ops *bat_algo_ops; + int ret = -EINVAL; - bat_algo_ops = bat_algo_get(name); + bat_algo_ops = batadv_algo_get(name); if (!bat_algo_ops) goto out; @@ -362,50 +374,56 @@ out: return ret; } -int bat_algo_seq_print_text(struct seq_file *seq, void *offset) +int batadv_algo_seq_print_text(struct seq_file *seq, void *offset) { - struct bat_algo_ops *bat_algo_ops; + struct batadv_algo_ops *bat_algo_ops; struct hlist_node *node; seq_printf(seq, "Available routing algorithms:\n"); - hlist_for_each_entry(bat_algo_ops, node, &bat_algo_list, list) { + hlist_for_each_entry(bat_algo_ops, node, &batadv_algo_list, list) { seq_printf(seq, "%s\n", bat_algo_ops->name); } return 0; } -static int param_set_ra(const char *val, const struct kernel_param *kp) +static int batadv_param_set_ra(const char *val, const struct kernel_param *kp) { - struct bat_algo_ops *bat_algo_ops; + struct batadv_algo_ops *bat_algo_ops; + char *algo_name = (char *)val; + size_t name_len = strlen(algo_name); + + if (algo_name[name_len - 1] == '\n') + algo_name[name_len - 1] = '\0'; - bat_algo_ops = bat_algo_get((char *)val); + bat_algo_ops = batadv_algo_get(algo_name); if (!bat_algo_ops) { - pr_err("Routing algorithm '%s' is not supported\n", val); + pr_err("Routing algorithm '%s' is not supported\n", algo_name); return -EINVAL; } - return param_set_copystring(val, kp); + return param_set_copystring(algo_name, kp); } -static const struct kernel_param_ops param_ops_ra = { - .set = param_set_ra, +static const struct kernel_param_ops batadv_param_ops_ra = { + .set = batadv_param_set_ra, .get = param_get_string, }; -static struct kparam_string __param_string_ra = { - .maxlen = sizeof(bat_routing_algo), - .string = bat_routing_algo, +static struct kparam_string batadv_param_string_ra = { + .maxlen = sizeof(batadv_routing_algo), + .string = batadv_routing_algo, }; -module_param_cb(routing_algo, ¶m_ops_ra, &__param_string_ra, 0644); -module_init(batman_init); -module_exit(batman_exit); +module_param_cb(routing_algo, &batadv_param_ops_ra, &batadv_param_string_ra, + 0644); +module_init(batadv_init); +module_exit(batadv_exit); MODULE_LICENSE("GPL"); -MODULE_AUTHOR(DRIVER_AUTHOR); -MODULE_DESCRIPTION(DRIVER_DESC); -MODULE_SUPPORTED_DEVICE(DRIVER_DEVICE); -MODULE_VERSION(SOURCE_VERSION); +MODULE_AUTHOR(BATADV_DRIVER_AUTHOR); +MODULE_DESCRIPTION(BATADV_DRIVER_DESC); +MODULE_SUPPORTED_DEVICE(BATADV_DRIVER_DEVICE); +MODULE_VERSION(BATADV_SOURCE_VERSION); diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h index f4a3ec003479..5d8fa0757947 100644 --- a/net/batman-adv/main.h +++ b/net/batman-adv/main.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,100 +15,106 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_MAIN_H_ #define _NET_BATMAN_ADV_MAIN_H_ -#define DRIVER_AUTHOR "Marek Lindner <lindner_marek@yahoo.de>, " \ - "Simon Wunderlich <siwu@hrz.tu-chemnitz.de>" -#define DRIVER_DESC "B.A.T.M.A.N. advanced" -#define DRIVER_DEVICE "batman-adv" +#define BATADV_DRIVER_AUTHOR "Marek Lindner <lindner_marek@yahoo.de>, " \ + "Simon Wunderlich <siwu@hrz.tu-chemnitz.de>" +#define BATADV_DRIVER_DESC "B.A.T.M.A.N. advanced" +#define BATADV_DRIVER_DEVICE "batman-adv" -#ifndef SOURCE_VERSION -#define SOURCE_VERSION "2012.2.0" +#ifndef BATADV_SOURCE_VERSION +#define BATADV_SOURCE_VERSION "2012.3.0" #endif /* B.A.T.M.A.N. parameters */ -#define TQ_MAX_VALUE 255 -#define JITTER 20 +#define BATADV_TQ_MAX_VALUE 255 +#define BATADV_JITTER 20 - /* Time To Live of broadcast messages */ -#define TTL 50 +/* Time To Live of broadcast messages */ +#define BATADV_TTL 50 /* purge originators after time in seconds if no valid packet comes in - * -> TODO: check influence on TQ_LOCAL_WINDOW_SIZE */ -#define PURGE_TIMEOUT 200000 /* 200 seconds */ -#define TT_LOCAL_TIMEOUT 3600000 /* in miliseconds */ -#define TT_CLIENT_ROAM_TIMEOUT 600000 /* in miliseconds */ + * -> TODO: check influence on BATADV_TQ_LOCAL_WINDOW_SIZE + */ +#define BATADV_PURGE_TIMEOUT 200000 /* 200 seconds */ +#define BATADV_TT_LOCAL_TIMEOUT 3600000 /* in miliseconds */ +#define BATADV_TT_CLIENT_ROAM_TIMEOUT 600000 /* in miliseconds */ /* sliding packet range of received originator messages in sequence numbers - * (should be a multiple of our word size) */ -#define TQ_LOCAL_WINDOW_SIZE 64 -#define TT_REQUEST_TIMEOUT 3000 /* miliseconds we have to keep - * pending tt_req */ + * (should be a multiple of our word size) + */ +#define BATADV_TQ_LOCAL_WINDOW_SIZE 64 +/* miliseconds we have to keep pending tt_req */ +#define BATADV_TT_REQUEST_TIMEOUT 3000 -#define TQ_GLOBAL_WINDOW_SIZE 5 -#define TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 -#define TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 -#define TQ_TOTAL_BIDRECT_LIMIT 1 +#define BATADV_TQ_GLOBAL_WINDOW_SIZE 5 +#define BATADV_TQ_LOCAL_BIDRECT_SEND_MINIMUM 1 +#define BATADV_TQ_LOCAL_BIDRECT_RECV_MINIMUM 1 +#define BATADV_TQ_TOTAL_BIDRECT_LIMIT 1 -#define TT_OGM_APPEND_MAX 3 /* number of OGMs sent with the last tt diff */ +/* number of OGMs sent with the last tt diff */ +#define BATADV_TT_OGM_APPEND_MAX 3 -#define ROAMING_MAX_TIME 20000 /* Time in which a client can roam at most - * ROAMING_MAX_COUNT times in miliseconds*/ -#define ROAMING_MAX_COUNT 5 +/* Time in which a client can roam at most ROAMING_MAX_COUNT times in + * miliseconds + */ +#define BATADV_ROAMING_MAX_TIME 20000 +#define BATADV_ROAMING_MAX_COUNT 5 -#define NO_FLAGS 0 +#define BATADV_NO_FLAGS 0 -#define NULL_IFINDEX 0 /* dummy ifindex used to avoid iface checks */ +#define BATADV_NULL_IFINDEX 0 /* dummy ifindex used to avoid iface checks */ -#define NUM_WORDS BITS_TO_LONGS(TQ_LOCAL_WINDOW_SIZE) +#define BATADV_NUM_WORDS BITS_TO_LONGS(BATADV_TQ_LOCAL_WINDOW_SIZE) -#define LOG_BUF_LEN 8192 /* has to be a power of 2 */ +#define BATADV_LOG_BUF_LEN 8192 /* has to be a power of 2 */ -#define VIS_INTERVAL 5000 /* 5 seconds */ +#define BATADV_VIS_INTERVAL 5000 /* 5 seconds */ /* how much worse secondary interfaces may be to be considered as bonding - * candidates */ -#define BONDING_TQ_THRESHOLD 50 + * candidates + */ +#define 
BATADV_BONDING_TQ_THRESHOLD 50 /* should not be bigger than 512 bytes or change the size of - * forw_packet->direct_link_flags */ -#define MAX_AGGREGATION_BYTES 512 -#define MAX_AGGREGATION_MS 100 + * forw_packet->direct_link_flags + */ +#define BATADV_MAX_AGGREGATION_BYTES 512 +#define BATADV_MAX_AGGREGATION_MS 100 -#define BLA_PERIOD_LENGTH 10000 /* 10 seconds */ -#define BLA_BACKBONE_TIMEOUT (BLA_PERIOD_LENGTH * 3) -#define BLA_CLAIM_TIMEOUT (BLA_PERIOD_LENGTH * 10) +#define BATADV_BLA_PERIOD_LENGTH 10000 /* 10 seconds */ +#define BATADV_BLA_BACKBONE_TIMEOUT (BATADV_BLA_PERIOD_LENGTH * 3) +#define BATADV_BLA_CLAIM_TIMEOUT (BATADV_BLA_PERIOD_LENGTH * 10) -#define DUPLIST_SIZE 16 -#define DUPLIST_TIMEOUT 500 /* 500 ms */ +#define BATADV_DUPLIST_SIZE 16 +#define BATADV_DUPLIST_TIMEOUT 500 /* 500 ms */ /* don't reset again within 30 seconds */ -#define RESET_PROTECTION_MS 30000 -#define EXPECTED_SEQNO_RANGE 65536 +#define BATADV_RESET_PROTECTION_MS 30000 +#define BATADV_EXPECTED_SEQNO_RANGE 65536 -enum mesh_state { - MESH_INACTIVE, - MESH_ACTIVE, - MESH_DEACTIVATING +enum batadv_mesh_state { + BATADV_MESH_INACTIVE, + BATADV_MESH_ACTIVE, + BATADV_MESH_DEACTIVATING, }; -#define BCAST_QUEUE_LEN 256 -#define BATMAN_QUEUE_LEN 256 +#define BATADV_BCAST_QUEUE_LEN 256 +#define BATADV_BATMAN_QUEUE_LEN 256 -enum uev_action { - UEV_ADD = 0, - UEV_DEL, - UEV_CHANGE +enum batadv_uev_action { + BATADV_UEV_ADD = 0, + BATADV_UEV_DEL, + BATADV_UEV_CHANGE, }; -enum uev_type { - UEV_GW = 0 +enum batadv_uev_type { + BATADV_UEV_GW = 0, }; -#define GW_THRESHOLD 50 +#define BATADV_GW_THRESHOLD 50 /* Debug Messages */ #ifdef pr_fmt @@ -119,12 +124,12 @@ enum uev_type { #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt /* all messages related to routing / flooding / broadcasting / etc */ -enum dbg_level { - DBG_BATMAN = 1 << 0, - DBG_ROUTES = 1 << 1, /* route added / changed / deleted */ - DBG_TT = 1 << 2, /* translation table operations */ - DBG_BLA = 1 << 3, /* bridge loop avoidance */ - DBG_ALL = 15 +enum batadv_dbg_level { + BATADV_DBG_BATMAN = 1 << 0, + BATADV_DBG_ROUTES = 1 << 1, /* route added / changed / deleted */ + BATADV_DBG_TT = 1 << 2, /* translation table operations */ + BATADV_DBG_BLA = 1 << 3, /* bridge loop avoidance */ + BATADV_DBG_ALL = 15, }; /* Kernel headers */ @@ -138,73 +143,75 @@ enum dbg_level { #include <linux/kthread.h> /* kernel threads */ #include <linux/pkt_sched.h> /* schedule types */ #include <linux/workqueue.h> /* workqueue */ +#include <linux/percpu.h> #include <linux/slab.h> #include <net/sock.h> /* struct sock */ #include <linux/jiffies.h> #include <linux/seq_file.h> #include "types.h" -extern char bat_routing_algo[]; -extern struct list_head hardif_list; - -extern unsigned char broadcast_addr[]; -extern struct workqueue_struct *bat_event_workqueue; - -int mesh_init(struct net_device *soft_iface); -void mesh_free(struct net_device *soft_iface); -void inc_module_count(void); -void dec_module_count(void); -int is_my_mac(const uint8_t *addr); -int batman_skb_recv(struct sk_buff *skb, struct net_device *dev, - struct packet_type *ptype, struct net_device *orig_dev); -int recv_handler_register(uint8_t packet_type, - int (*recv_handler)(struct sk_buff *, - struct hard_iface *)); -void recv_handler_unregister(uint8_t packet_type); -int bat_algo_register(struct bat_algo_ops *bat_algo_ops); -int bat_algo_select(struct bat_priv *bat_priv, char *name); -int bat_algo_seq_print_text(struct seq_file *seq, void *offset); +extern char batadv_routing_algo[]; +extern struct list_head 
batadv_hardif_list; + +extern unsigned char batadv_broadcast_addr[]; +extern struct workqueue_struct *batadv_event_workqueue; + +int batadv_mesh_init(struct net_device *soft_iface); +void batadv_mesh_free(struct net_device *soft_iface); +void batadv_inc_module_count(void); +void batadv_dec_module_count(void); +int batadv_is_my_mac(const uint8_t *addr); +int batadv_batman_skb_recv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *ptype, + struct net_device *orig_dev); +int +batadv_recv_handler_register(uint8_t packet_type, + int (*recv_handler)(struct sk_buff *, + struct batadv_hard_iface *)); +void batadv_recv_handler_unregister(uint8_t packet_type); +int batadv_algo_register(struct batadv_algo_ops *bat_algo_ops); +int batadv_algo_select(struct batadv_priv *bat_priv, char *name); +int batadv_algo_seq_print_text(struct seq_file *seq, void *offset); #ifdef CONFIG_BATMAN_ADV_DEBUG -int debug_log(struct bat_priv *bat_priv, const char *fmt, ...) __printf(2, 3); +int batadv_debug_log(struct batadv_priv *bat_priv, const char *fmt, ...) +__printf(2, 3); -#define bat_dbg(type, bat_priv, fmt, arg...) \ +#define batadv_dbg(type, bat_priv, fmt, arg...) \ do { \ if (atomic_read(&bat_priv->log_level) & type) \ - debug_log(bat_priv, fmt, ## arg); \ + batadv_debug_log(bat_priv, fmt, ## arg);\ } \ while (0) #else /* !CONFIG_BATMAN_ADV_DEBUG */ __printf(3, 4) -static inline void bat_dbg(int type __always_unused, - struct bat_priv *bat_priv __always_unused, - const char *fmt __always_unused, ...) +static inline void batadv_dbg(int type __always_unused, + struct batadv_priv *bat_priv __always_unused, + const char *fmt __always_unused, ...) { } #endif -#define bat_info(net_dev, fmt, arg...) \ +#define batadv_info(net_dev, fmt, arg...) \ do { \ struct net_device *_netdev = (net_dev); \ - struct bat_priv *_batpriv = netdev_priv(_netdev); \ - bat_dbg(DBG_ALL, _batpriv, fmt, ## arg); \ + struct batadv_priv *_batpriv = netdev_priv(_netdev); \ + batadv_dbg(BATADV_DBG_ALL, _batpriv, fmt, ## arg); \ pr_info("%s: " fmt, _netdev->name, ## arg); \ } while (0) -#define bat_err(net_dev, fmt, arg...) \ +#define batadv_err(net_dev, fmt, arg...) \ do { \ struct net_device *_netdev = (net_dev); \ - struct bat_priv *_batpriv = netdev_priv(_netdev); \ - bat_dbg(DBG_ALL, _batpriv, fmt, ## arg); \ + struct batadv_priv *_batpriv = netdev_priv(_netdev); \ + batadv_dbg(BATADV_DBG_ALL, _batpriv, fmt, ## arg); \ pr_err("%s: " fmt, _netdev->name, ## arg); \ } while (0) -/** - * returns 1 if they are the same ethernet addr +/* returns 1 if they are the same ethernet addr * * note: can't use compare_ether_addr() as it requires aligned memory */ - -static inline int compare_eth(const void *data1, const void *data2) +static inline int batadv_compare_eth(const void *data1, const void *data2) { return (memcmp(data1, data2, ETH_ALEN) == 0 ? 
1 : 0); } @@ -216,15 +223,16 @@ static inline int compare_eth(const void *data1, const void *data2) * * Returns true if current time is after timestamp + timeout */ -static inline bool has_timed_out(unsigned long timestamp, unsigned int timeout) +static inline bool batadv_has_timed_out(unsigned long timestamp, + unsigned int timeout) { return time_is_before_jiffies(timestamp + msecs_to_jiffies(timeout)); } -#define atomic_dec_not_zero(v) atomic_add_unless((v), -1, 0) +#define batadv_atomic_dec_not_zero(v) atomic_add_unless((v), -1, 0) /* Returns the smallest signed integer in two's complement with the sizeof x */ -#define smallest_signed_int(x) (1u << (7u + 8u * (sizeof(x) - 1u))) +#define batadv_smallest_signed_int(x) (1u << (7u + 8u * (sizeof(x) - 1u))) /* Checks if a sequence number x is a predecessor/successor of y. * they handle overflows/underflows and can correctly check for a @@ -234,12 +242,39 @@ static inline bool has_timed_out(unsigned long timestamp, unsigned int timeout) * - when adding nothing - it is neither a predecessor nor a successor * - before adding more than 127 to the starting value - it is a predecessor, * - when adding 128 - it is neither a predecessor nor a successor, - * - after adding more than 127 to the starting value - it is a successor */ -#define seq_before(x, y) ({typeof(x) _d1 = (x); \ - typeof(y) _d2 = (y); \ - typeof(x) _dummy = (_d1 - _d2); \ - (void) (&_d1 == &_d2); \ - _dummy > smallest_signed_int(_dummy); }) -#define seq_after(x, y) seq_before(y, x) + * - after adding more than 127 to the starting value - it is a successor + */ +#define batadv_seq_before(x, y) ({typeof(x) _d1 = (x); \ + typeof(y) _d2 = (y); \ + typeof(x) _dummy = (_d1 - _d2); \ + (void) (&_d1 == &_d2); \ + _dummy > batadv_smallest_signed_int(_dummy); }) +#define batadv_seq_after(x, y) batadv_seq_before(y, x) + +/* Stop preemption on local cpu while incrementing the counter */ +static inline void batadv_add_counter(struct batadv_priv *bat_priv, size_t idx, + size_t count) +{ + int cpu = get_cpu(); + per_cpu_ptr(bat_priv->bat_counters, cpu)[idx] += count; + put_cpu(); +} + +#define batadv_inc_counter(b, i) batadv_add_counter(b, i, 1) + +/* Sum and return the cpu-local counters for index 'idx' */ +static inline uint64_t batadv_sum_counter(struct batadv_priv *bat_priv, + size_t idx) +{ + uint64_t *counters, sum = 0; + int cpu; + + for_each_possible_cpu(cpu) { + counters = per_cpu_ptr(bat_priv->bat_counters, cpu); + sum += counters[idx]; + } + + return sum; +} #endif /* _NET_BATMAN_ADV_MAIN_H_ */ diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c index 41147942ba53..ac9bdf8f80a6 100644 --- a/net/batman-adv/originator.c +++ b/net/batman-adv/originator.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2009-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2012 B.A.T.M.A.N. 
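The `batadv_seq_before()`/`batadv_seq_after()` macros above compare sequence numbers in a way that survives wraparound: the difference is computed in the unsigned type of the operands and then compared against the smallest negative value representable in that width, so "x is before y" holds for anything up to half the sequence space behind y, exactly as the comment's 8-bit walkthrough describes. The property is easiest to see with a small 16-bit demo; this is standalone code that mirrors the idea rather than reusing the macros verbatim.

```c
/* Wraparound-safe sequence number comparison, shown with 16-bit
 * sequence numbers for readability. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* "x is before y" iff the unsigned difference x - y, reinterpreted in
 * the same width, lands in the negative half of the range. */
static bool seq16_before(uint16_t x, uint16_t y)
{
	uint16_t diff = (uint16_t)(x - y);

	return diff > 0x8000u;   /* bit pattern of the smallest signed value */
}

int main(void)
{
	printf("%d\n", seq16_before(5, 10));     /* 1: plainly older */
	printf("%d\n", seq16_before(10, 5));     /* 0 */
	printf("%d\n", seq16_before(65530, 2));  /* 1: older across the wrap */
	printf("%d\n", seq16_before(0, 32768));  /* 0: exactly half apart */
	return 0;
}
```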
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -30,50 +28,52 @@ #include "soft-interface.h" #include "bridge_loop_avoidance.h" -static void purge_orig(struct work_struct *work); +static void batadv_purge_orig(struct work_struct *work); -static void start_purge_timer(struct bat_priv *bat_priv) +static void batadv_start_purge_timer(struct batadv_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->orig_work, purge_orig); - queue_delayed_work(bat_event_workqueue, + INIT_DELAYED_WORK(&bat_priv->orig_work, batadv_purge_orig); + queue_delayed_work(batadv_event_workqueue, &bat_priv->orig_work, msecs_to_jiffies(1000)); } /* returns 1 if they are the same originator */ -static int compare_orig(const struct hlist_node *node, const void *data2) +static int batadv_compare_orig(const struct hlist_node *node, const void *data2) { - const void *data1 = container_of(node, struct orig_node, hash_entry); + const void *data1 = container_of(node, struct batadv_orig_node, + hash_entry); return (memcmp(data1, data2, ETH_ALEN) == 0 ? 1 : 0); } -int originator_init(struct bat_priv *bat_priv) +int batadv_originator_init(struct batadv_priv *bat_priv) { if (bat_priv->orig_hash) - return 1; + return 0; - bat_priv->orig_hash = hash_new(1024); + bat_priv->orig_hash = batadv_hash_new(1024); if (!bat_priv->orig_hash) goto err; - start_purge_timer(bat_priv); - return 1; + batadv_start_purge_timer(bat_priv); + return 0; err: - return 0; + return -ENOMEM; } -void neigh_node_free_ref(struct neigh_node *neigh_node) +void batadv_neigh_node_free_ref(struct batadv_neigh_node *neigh_node) { if (atomic_dec_and_test(&neigh_node->refcount)) kfree_rcu(neigh_node, rcu); } /* increases the refcounter of a found router */ -struct neigh_node *orig_node_get_router(struct orig_node *orig_node) +struct batadv_neigh_node * +batadv_orig_node_get_router(struct batadv_orig_node *orig_node) { - struct neigh_node *router; + struct batadv_neigh_node *router; rcu_read_lock(); router = rcu_dereference(orig_node->router); @@ -85,12 +85,12 @@ struct neigh_node *orig_node_get_router(struct orig_node *orig_node) return router; } -struct neigh_node *batadv_neigh_node_new(struct hard_iface *hard_iface, - const uint8_t *neigh_addr, - uint32_t seqno) +struct batadv_neigh_node * +batadv_neigh_node_new(struct batadv_hard_iface *hard_iface, + const uint8_t *neigh_addr, uint32_t seqno) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct neigh_node *neigh_node; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_neigh_node *neigh_node; neigh_node = kzalloc(sizeof(*neigh_node), GFP_ATOMIC); if (!neigh_node) @@ -104,21 +104,21 @@ struct neigh_node *batadv_neigh_node_new(struct hard_iface *hard_iface, /* extra reference for return */ atomic_set(&neigh_node->refcount, 2); - bat_dbg(DBG_BATMAN, bat_priv, - "Creating new neighbor %pM, initial seqno %d\n", - neigh_addr, seqno); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Creating new neighbor %pM, initial seqno %d\n", + neigh_addr, seqno); out: return neigh_node; } -static void orig_node_free_rcu(struct rcu_head *rcu) +static void batadv_orig_node_free_rcu(struct rcu_head *rcu) { struct hlist_node *node, *node_tmp; - struct neigh_node *neigh_node, *tmp_neigh_node; - struct orig_node *orig_node; + struct batadv_neigh_node *neigh_node, *tmp_neigh_node; + struct 
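`batadv_orig_node_get_router()` above is the usual RCU lookup-plus-refcount shape: the router pointer is dereferenced inside an RCU read-side section and, per the comment, the refcounter of the found router is raised before it is handed out, while `batadv_neigh_node_free_ref()` drops a reference with `atomic_dec_and_test()` and defers the actual free to `kfree_rcu()`. The refcount half of that idiom can be sketched with plain C11 atomics; this is a simplified userspace analogue in which the RCU grace-period deferral is deliberately left out.

```c
/* Userspace sketch of "take a reference only if the object is still
 * live, free when the last reference is dropped".  C11 atomics stand in
 * for the kernel's atomic_t; RCU deferral of the free is omitted. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	atomic_int refcount;
	int id;
};

/* Returns the node with an extra reference, or NULL if it already hit 0. */
static struct node *node_get(struct node *n)
{
	int old = atomic_load(&n->refcount);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&n->refcount, &old, old + 1))
			return n;       /* "inc not zero" succeeded */
	}
	return NULL;
}

static void node_put(struct node *n)
{
	if (atomic_fetch_sub(&n->refcount, 1) == 1)
		free(n);                /* we dropped the last reference */
}

int main(void)
{
	struct node *n = malloc(sizeof(*n));

	atomic_init(&n->refcount, 1);
	n->id = 1;

	struct node *ref = node_get(n);

	printf("got node %d\n", ref ? ref->id : -1);
	node_put(ref);                  /* release the lookup reference */
	node_put(n);                    /* release the original reference */
	return 0;
}
```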
batadv_orig_node *orig_node; - orig_node = container_of(rcu, struct orig_node, rcu); + orig_node = container_of(rcu, struct batadv_orig_node, rcu); spin_lock_bh(&orig_node->neigh_list_lock); @@ -126,21 +126,21 @@ static void orig_node_free_rcu(struct rcu_head *rcu) list_for_each_entry_safe(neigh_node, tmp_neigh_node, &orig_node->bond_list, bonding_list) { list_del_rcu(&neigh_node->bonding_list); - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); } /* for all neighbors towards this originator ... */ hlist_for_each_entry_safe(neigh_node, node, node_tmp, &orig_node->neigh_list, list) { hlist_del_rcu(&neigh_node->list); - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); } spin_unlock_bh(&orig_node->neigh_list_lock); - frag_list_free(&orig_node->frag_list); - tt_global_del_orig(orig_node->bat_priv, orig_node, - "originator timed out"); + batadv_frag_list_free(&orig_node->frag_list); + batadv_tt_global_del_orig(orig_node->bat_priv, orig_node, + "originator timed out"); kfree(orig_node->tt_buff); kfree(orig_node->bcast_own); @@ -148,19 +148,19 @@ static void orig_node_free_rcu(struct rcu_head *rcu) kfree(orig_node); } -void orig_node_free_ref(struct orig_node *orig_node) +void batadv_orig_node_free_ref(struct batadv_orig_node *orig_node) { if (atomic_dec_and_test(&orig_node->refcount)) - call_rcu(&orig_node->rcu, orig_node_free_rcu); + call_rcu(&orig_node->rcu, batadv_orig_node_free_rcu); } -void originator_free(struct bat_priv *bat_priv) +void batadv_originator_free(struct batadv_priv *bat_priv) { - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node, *node_tmp; struct hlist_head *head; spinlock_t *list_lock; /* spinlock to protect write access */ - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; uint32_t i; if (!hash) @@ -179,28 +179,31 @@ void originator_free(struct bat_priv *bat_priv) head, hash_entry) { hlist_del_rcu(node); - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } spin_unlock_bh(list_lock); } - hash_destroy(hash); + batadv_hash_destroy(hash); } /* this function finds or creates an originator entry for the given - * address if it does not exits */ -struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) + * address if it does not exits + */ +struct batadv_orig_node *batadv_get_orig_node(struct batadv_priv *bat_priv, + const uint8_t *addr) { - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; int size; int hash_added; + unsigned long reset_time; - orig_node = orig_hash_find(bat_priv, addr); + orig_node = batadv_orig_hash_find(bat_priv, addr); if (orig_node) return orig_node; - bat_dbg(DBG_BATMAN, bat_priv, - "Creating new originator: %pM\n", addr); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Creating new originator: %pM\n", addr); orig_node = kzalloc(sizeof(*orig_node), GFP_ATOMIC); if (!orig_node) @@ -226,14 +229,13 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) orig_node->tt_buff = NULL; orig_node->tt_buff_len = 0; atomic_set(&orig_node->tt_size, 0); - orig_node->bcast_seqno_reset = jiffies - 1 - - msecs_to_jiffies(RESET_PROTECTION_MS); - orig_node->batman_seqno_reset = jiffies - 1 - - msecs_to_jiffies(RESET_PROTECTION_MS); + reset_time = jiffies - 1 - msecs_to_jiffies(BATADV_RESET_PROTECTION_MS); + orig_node->bcast_seqno_reset = reset_time; + orig_node->batman_seqno_reset = reset_time; atomic_set(&orig_node->bond_candidates, 0); - size = 
bat_priv->num_ifaces * sizeof(unsigned long) * NUM_WORDS; + size = bat_priv->num_ifaces * sizeof(unsigned long) * BATADV_NUM_WORDS; orig_node->bcast_own = kzalloc(size, GFP_ATOMIC); if (!orig_node->bcast_own) @@ -248,8 +250,9 @@ struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr) if (!orig_node->bcast_own_sum) goto free_bcast_own; - hash_added = hash_add(bat_priv->orig_hash, compare_orig, - choose_orig, orig_node, &orig_node->hash_entry); + hash_added = batadv_hash_add(bat_priv->orig_hash, batadv_compare_orig, + batadv_choose_orig, orig_node, + &orig_node->hash_entry); if (hash_added != 0) goto free_bcast_own_sum; @@ -263,14 +266,16 @@ free_orig_node: return NULL; } -static bool purge_orig_neighbors(struct bat_priv *bat_priv, - struct orig_node *orig_node, - struct neigh_node **best_neigh_node) +static bool +batadv_purge_orig_neighbors(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + struct batadv_neigh_node **best_neigh_node) { struct hlist_node *node, *node_tmp; - struct neigh_node *neigh_node; + struct batadv_neigh_node *neigh_node; bool neigh_purged = false; unsigned long last_seen; + struct batadv_hard_iface *if_incoming; *best_neigh_node = NULL; @@ -280,34 +285,32 @@ static bool purge_orig_neighbors(struct bat_priv *bat_priv, hlist_for_each_entry_safe(neigh_node, node, node_tmp, &orig_node->neigh_list, list) { - if ((has_timed_out(neigh_node->last_seen, PURGE_TIMEOUT)) || - (neigh_node->if_incoming->if_status == IF_INACTIVE) || - (neigh_node->if_incoming->if_status == IF_NOT_IN_USE) || - (neigh_node->if_incoming->if_status == IF_TO_BE_REMOVED)) { - - last_seen = neigh_node->last_seen; - - if ((neigh_node->if_incoming->if_status == - IF_INACTIVE) || - (neigh_node->if_incoming->if_status == - IF_NOT_IN_USE) || - (neigh_node->if_incoming->if_status == - IF_TO_BE_REMOVED)) - bat_dbg(DBG_BATMAN, bat_priv, - "neighbor purge: originator %pM, neighbor: %pM, iface: %s\n", - orig_node->orig, neigh_node->addr, - neigh_node->if_incoming->net_dev->name); + last_seen = neigh_node->last_seen; + if_incoming = neigh_node->if_incoming; + + if ((batadv_has_timed_out(last_seen, BATADV_PURGE_TIMEOUT)) || + (if_incoming->if_status == BATADV_IF_INACTIVE) || + (if_incoming->if_status == BATADV_IF_NOT_IN_USE) || + (if_incoming->if_status == BATADV_IF_TO_BE_REMOVED)) { + + if ((if_incoming->if_status == BATADV_IF_INACTIVE) || + (if_incoming->if_status == BATADV_IF_NOT_IN_USE) || + (if_incoming->if_status == BATADV_IF_TO_BE_REMOVED)) + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "neighbor purge: originator %pM, neighbor: %pM, iface: %s\n", + orig_node->orig, neigh_node->addr, + if_incoming->net_dev->name); else - bat_dbg(DBG_BATMAN, bat_priv, - "neighbor timeout: originator %pM, neighbor: %pM, last_seen: %u\n", - orig_node->orig, neigh_node->addr, - jiffies_to_msecs(last_seen)); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "neighbor timeout: originator %pM, neighbor: %pM, last_seen: %u\n", + orig_node->orig, neigh_node->addr, + jiffies_to_msecs(last_seen)); neigh_purged = true; hlist_del_rcu(&neigh_node->list); - bonding_candidate_del(orig_node, neigh_node); - neigh_node_free_ref(neigh_node); + batadv_bonding_candidate_del(orig_node, neigh_node); + batadv_neigh_node_free_ref(neigh_node); } else { if ((!*best_neigh_node) || (neigh_node->tq_avg > (*best_neigh_node)->tq_avg)) @@ -319,33 +322,35 @@ static bool purge_orig_neighbors(struct bat_priv *bat_priv, return neigh_purged; } -static bool purge_orig_node(struct bat_priv *bat_priv, - struct orig_node *orig_node) 
+static bool batadv_purge_orig_node(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node) { - struct neigh_node *best_neigh_node; - - if (has_timed_out(orig_node->last_seen, 2 * PURGE_TIMEOUT)) { - bat_dbg(DBG_BATMAN, bat_priv, - "Originator timeout: originator %pM, last_seen %u\n", - orig_node->orig, - jiffies_to_msecs(orig_node->last_seen)); + struct batadv_neigh_node *best_neigh_node; + + if (batadv_has_timed_out(orig_node->last_seen, + 2 * BATADV_PURGE_TIMEOUT)) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Originator timeout: originator %pM, last_seen %u\n", + orig_node->orig, + jiffies_to_msecs(orig_node->last_seen)); return true; } else { - if (purge_orig_neighbors(bat_priv, orig_node, - &best_neigh_node)) - update_route(bat_priv, orig_node, best_neigh_node); + if (batadv_purge_orig_neighbors(bat_priv, orig_node, + &best_neigh_node)) + batadv_update_route(bat_priv, orig_node, + best_neigh_node); } return false; } -static void _purge_orig(struct bat_priv *bat_priv) +static void _batadv_purge_orig(struct batadv_priv *bat_priv) { - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node, *node_tmp; struct hlist_head *head; spinlock_t *list_lock; /* spinlock to protect write access */ - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; uint32_t i; if (!hash) @@ -359,58 +364,60 @@ static void _purge_orig(struct bat_priv *bat_priv) spin_lock_bh(list_lock); hlist_for_each_entry_safe(orig_node, node, node_tmp, head, hash_entry) { - if (purge_orig_node(bat_priv, orig_node)) { + if (batadv_purge_orig_node(bat_priv, orig_node)) { if (orig_node->gw_flags) - gw_node_delete(bat_priv, orig_node); + batadv_gw_node_delete(bat_priv, + orig_node); hlist_del_rcu(node); - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); continue; } - if (has_timed_out(orig_node->last_frag_packet, - FRAG_TIMEOUT)) - frag_list_free(&orig_node->frag_list); + if (batadv_has_timed_out(orig_node->last_frag_packet, + BATADV_FRAG_TIMEOUT)) + batadv_frag_list_free(&orig_node->frag_list); } spin_unlock_bh(list_lock); } - gw_node_purge(bat_priv); - gw_election(bat_priv); + batadv_gw_node_purge(bat_priv); + batadv_gw_election(bat_priv); } -static void purge_orig(struct work_struct *work) +static void batadv_purge_orig(struct work_struct *work) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, orig_work); + struct delayed_work *delayed_work; + struct batadv_priv *bat_priv; - _purge_orig(bat_priv); - start_purge_timer(bat_priv); + delayed_work = container_of(work, struct delayed_work, work); + bat_priv = container_of(delayed_work, struct batadv_priv, orig_work); + _batadv_purge_orig(bat_priv); + batadv_start_purge_timer(bat_priv); } -void purge_orig_ref(struct bat_priv *bat_priv) +void batadv_purge_orig_ref(struct batadv_priv *bat_priv) { - _purge_orig(bat_priv); + _batadv_purge_orig(bat_priv); } -int orig_seq_print_text(struct seq_file *seq, void *offset) +int batadv_orig_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node, *node_tmp; struct hlist_head *head; - struct hard_iface *primary_if; - struct 
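The purge path above leans entirely on `batadv_has_timed_out()`, i.e. "is now later than last_seen plus the timeout", expressed in jiffies and re-driven every second by the self-rearming delayed work. Outside the kernel the same check is usually written against a monotonic clock; a small sketch of that follows, where the 200 s value mirrors `BATADV_PURGE_TIMEOUT` and everything else is illustrative.

```c
/* Monotonic-clock version of a "has this timestamp timed out" helper. */
#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PURGE_TIMEOUT_MS 200000ULL      /* 200 seconds */

static uint64_t now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000ULL +
	       (uint64_t)(ts.tv_nsec / 1000000L);
}

static bool has_timed_out(uint64_t now, uint64_t timestamp, uint64_t timeout)
{
	return now > timestamp + timeout;
}

int main(void)
{
	uint64_t now = now_ms();

	/* freshly seen entry: not timed out */
	printf("%d\n", has_timed_out(now, now, PURGE_TIMEOUT_MS));
	/* pretend 300 s have passed since last_seen: timed out */
	printf("%d\n", has_timed_out(now + 300000ULL, now, PURGE_TIMEOUT_MS));
	return 0;
}
```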
orig_node *orig_node; - struct neigh_node *neigh_node, *neigh_node_tmp; + struct batadv_hard_iface *primary_if; + struct batadv_orig_node *orig_node; + struct batadv_neigh_node *neigh_node, *neigh_node_tmp; int batman_count = 0; int last_seen_secs; int last_seen_msecs; + unsigned long last_seen_jiffies; uint32_t i; int ret = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { ret = seq_printf(seq, @@ -419,7 +426,7 @@ int orig_seq_print_text(struct seq_file *seq, void *offset) goto out; } - if (primary_if->if_status != IF_ACTIVE) { + if (primary_if->if_status != BATADV_IF_ACTIVE) { ret = seq_printf(seq, "BATMAN mesh %s disabled - primary interface not active\n", net_dev->name); @@ -427,28 +434,28 @@ int orig_seq_print_text(struct seq_file *seq, void *offset) } seq_printf(seq, "[B.A.T.M.A.N. adv %s, MainIF/MAC: %s/%pM (%s)]\n", - SOURCE_VERSION, primary_if->net_dev->name, + BATADV_SOURCE_VERSION, primary_if->net_dev->name, primary_if->net_dev->dev_addr, net_dev->name); seq_printf(seq, " %-15s %s (%s/%i) %17s [%10s]: %20s ...\n", - "Originator", "last-seen", "#", TQ_MAX_VALUE, "Nexthop", - "outgoingIF", "Potential nexthops"); + "Originator", "last-seen", "#", BATADV_TQ_MAX_VALUE, + "Nexthop", "outgoingIF", "Potential nexthops"); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { - neigh_node = orig_node_get_router(orig_node); + neigh_node = batadv_orig_node_get_router(orig_node); if (!neigh_node) continue; if (neigh_node->tq_avg == 0) goto next; - last_seen_secs = jiffies_to_msecs(jiffies - - orig_node->last_seen) / 1000; - last_seen_msecs = jiffies_to_msecs(jiffies - - orig_node->last_seen) % 1000; + last_seen_jiffies = jiffies - orig_node->last_seen; + last_seen_msecs = jiffies_to_msecs(last_seen_jiffies); + last_seen_secs = last_seen_msecs / 1000; + last_seen_msecs = last_seen_msecs % 1000; seq_printf(seq, "%pM %4i.%03is (%3i) %pM [%10s]:", orig_node->orig, last_seen_secs, @@ -467,7 +474,7 @@ int orig_seq_print_text(struct seq_file *seq, void *offset) batman_count++; next: - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); } rcu_read_unlock(); } @@ -477,27 +484,29 @@ next: out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } -static int orig_node_add_if(struct orig_node *orig_node, int max_if_num) +static int batadv_orig_node_add_if(struct batadv_orig_node *orig_node, + int max_if_num) { void *data_ptr; + size_t data_size, old_size; - data_ptr = kmalloc(max_if_num * sizeof(unsigned long) * NUM_WORDS, - GFP_ATOMIC); + data_size = max_if_num * sizeof(unsigned long) * BATADV_NUM_WORDS; + old_size = (max_if_num - 1) * sizeof(unsigned long) * BATADV_NUM_WORDS; + data_ptr = kmalloc(data_size, GFP_ATOMIC); if (!data_ptr) - return -1; + return -ENOMEM; - memcpy(data_ptr, orig_node->bcast_own, - (max_if_num - 1) * sizeof(unsigned long) * NUM_WORDS); + memcpy(data_ptr, orig_node->bcast_own, old_size); kfree(orig_node->bcast_own); orig_node->bcast_own = data_ptr; data_ptr = kmalloc(max_if_num * sizeof(uint8_t), GFP_ATOMIC); if (!data_ptr) - return -1; + return -ENOMEM; memcpy(data_ptr, orig_node->bcast_own_sum, (max_if_num - 1) * sizeof(uint8_t)); @@ -507,28 +516,30 @@ static int orig_node_add_if(struct orig_node *orig_node, int max_if_num) return 0; } -int orig_hash_add_if(struct hard_iface *hard_iface, int max_if_num) +int batadv_orig_hash_add_if(struct 
batadv_hard_iface *hard_iface, + int max_if_num) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node; struct hlist_head *head; - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; uint32_t i; int ret; /* resize all orig nodes because orig_node->bcast_own(_sum) depend on - * if_num */ + * if_num + */ for (i = 0; i < hash->size; i++) { head = &hash->table[i]; rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { spin_lock_bh(&orig_node->ogm_cnt_lock); - ret = orig_node_add_if(orig_node, max_if_num); + ret = batadv_orig_node_add_if(orig_node, max_if_num); spin_unlock_bh(&orig_node->ogm_cnt_lock); - if (ret == -1) + if (ret == -ENOMEM) goto err; } rcu_read_unlock(); @@ -541,8 +552,8 @@ err: return -ENOMEM; } -static int orig_node_del_if(struct orig_node *orig_node, - int max_if_num, int del_if_num) +static int batadv_orig_node_del_if(struct batadv_orig_node *orig_node, + int max_if_num, int del_if_num) { void *data_ptr = NULL; int chunk_size; @@ -551,10 +562,10 @@ static int orig_node_del_if(struct orig_node *orig_node, if (max_if_num == 0) goto free_bcast_own; - chunk_size = sizeof(unsigned long) * NUM_WORDS; + chunk_size = sizeof(unsigned long) * BATADV_NUM_WORDS; data_ptr = kmalloc(max_if_num * chunk_size, GFP_ATOMIC); if (!data_ptr) - return -1; + return -ENOMEM; /* copy first part */ memcpy(data_ptr, orig_node->bcast_own, del_if_num * chunk_size); @@ -573,7 +584,7 @@ free_bcast_own: data_ptr = kmalloc(max_if_num * sizeof(uint8_t), GFP_ATOMIC); if (!data_ptr) - return -1; + return -ENOMEM; memcpy(data_ptr, orig_node->bcast_own_sum, del_if_num * sizeof(uint8_t)); @@ -589,30 +600,32 @@ free_own_sum: return 0; } -int orig_hash_del_if(struct hard_iface *hard_iface, int max_if_num) +int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface, + int max_if_num) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node; struct hlist_head *head; - struct hard_iface *hard_iface_tmp; - struct orig_node *orig_node; + struct batadv_hard_iface *hard_iface_tmp; + struct batadv_orig_node *orig_node; uint32_t i; int ret; /* resize all orig nodes because orig_node->bcast_own(_sum) depend on - * if_num */ + * if_num + */ for (i = 0; i < hash->size; i++) { head = &hash->table[i]; rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { spin_lock_bh(&orig_node->ogm_cnt_lock); - ret = orig_node_del_if(orig_node, max_if_num, - hard_iface->if_num); + ret = batadv_orig_node_del_if(orig_node, max_if_num, + hard_iface->if_num); spin_unlock_bh(&orig_node->ogm_cnt_lock); - if (ret == -1) + if (ret == -ENOMEM) goto err; } rcu_read_unlock(); @@ -620,8 +633,8 @@ int orig_hash_del_if(struct hard_iface *hard_iface, int max_if_num) /* renumber remaining batman interfaces _inside_ of orig_hash_lock */ rcu_read_lock(); - list_for_each_entry_rcu(hard_iface_tmp, &hardif_list, list) { - if (hard_iface_tmp->if_status == IF_NOT_IN_USE) + list_for_each_entry_rcu(hard_iface_tmp, &batadv_hardif_list, list) { + if (hard_iface_tmp->if_status == BATADV_IF_NOT_IN_USE) continue; if (hard_iface == hard_iface_tmp) diff --git a/net/batman-adv/originator.h 
b/net/batman-adv/originator.h index f74d0d693359..9778e656dec7 100644 --- a/net/batman-adv/originator.h +++ b/net/batman-adv/originator.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_ORIGINATOR_H_ @@ -24,24 +22,29 @@ #include "hash.h" -int originator_init(struct bat_priv *bat_priv); -void originator_free(struct bat_priv *bat_priv); -void purge_orig_ref(struct bat_priv *bat_priv); -void orig_node_free_ref(struct orig_node *orig_node); -struct orig_node *get_orig_node(struct bat_priv *bat_priv, const uint8_t *addr); -struct neigh_node *batadv_neigh_node_new(struct hard_iface *hard_iface, - const uint8_t *neigh_addr, - uint32_t seqno); -void neigh_node_free_ref(struct neigh_node *neigh_node); -struct neigh_node *orig_node_get_router(struct orig_node *orig_node); -int orig_seq_print_text(struct seq_file *seq, void *offset); -int orig_hash_add_if(struct hard_iface *hard_iface, int max_if_num); -int orig_hash_del_if(struct hard_iface *hard_iface, int max_if_num); - - -/* hashfunction to choose an entry in a hash table of given size */ -/* hash algorithm from http://en.wikipedia.org/wiki/Hash_table */ -static inline uint32_t choose_orig(const void *data, uint32_t size) +int batadv_originator_init(struct batadv_priv *bat_priv); +void batadv_originator_free(struct batadv_priv *bat_priv); +void batadv_purge_orig_ref(struct batadv_priv *bat_priv); +void batadv_orig_node_free_ref(struct batadv_orig_node *orig_node); +struct batadv_orig_node *batadv_get_orig_node(struct batadv_priv *bat_priv, + const uint8_t *addr); +struct batadv_neigh_node * +batadv_neigh_node_new(struct batadv_hard_iface *hard_iface, + const uint8_t *neigh_addr, uint32_t seqno); +void batadv_neigh_node_free_ref(struct batadv_neigh_node *neigh_node); +struct batadv_neigh_node * +batadv_orig_node_get_router(struct batadv_orig_node *orig_node); +int batadv_orig_seq_print_text(struct seq_file *seq, void *offset); +int batadv_orig_hash_add_if(struct batadv_hard_iface *hard_iface, + int max_if_num); +int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface, + int max_if_num); + + +/* hashfunction to choose an entry in a hash table of given size + * hash algorithm from http://en.wikipedia.org/wiki/Hash_table + */ +static inline uint32_t batadv_choose_orig(const void *data, uint32_t size) { const unsigned char *key = data; uint32_t hash = 0; @@ -60,24 +63,24 @@ static inline uint32_t choose_orig(const void *data, uint32_t size) return hash % size; } -static inline struct orig_node *orig_hash_find(struct bat_priv *bat_priv, - const void *data) +static inline struct batadv_orig_node * +batadv_orig_hash_find(struct batadv_priv *bat_priv, const void *data) { - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_head *head; struct hlist_node *node; - struct orig_node *orig_node, *orig_node_tmp = NULL; + struct batadv_orig_node *orig_node, *orig_node_tmp = NULL; int index; if (!hash) return NULL; - index = choose_orig(data, hash->size); + index = batadv_choose_orig(data, hash->size); head = &hash->table[index]; rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { - if (!compare_eth(orig_node, data)) + if (!batadv_compare_eth(orig_node, 
data)) continue; if (!atomic_inc_not_zero(&orig_node->refcount)) diff --git a/net/batman-adv/packet.h b/net/batman-adv/packet.h index 0ee1af770798..8d3e55a96adc 100644 --- a/net/batman-adv/packet.h +++ b/net/batman-adv/packet.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,171 +15,172 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_PACKET_H_ #define _NET_BATMAN_ADV_PACKET_H_ -#define ETH_P_BATMAN 0x4305 /* unofficial/not registered Ethertype */ - -enum bat_packettype { - BAT_IV_OGM = 0x01, - BAT_ICMP = 0x02, - BAT_UNICAST = 0x03, - BAT_BCAST = 0x04, - BAT_VIS = 0x05, - BAT_UNICAST_FRAG = 0x06, - BAT_TT_QUERY = 0x07, - BAT_ROAM_ADV = 0x08 +#define BATADV_ETH_P_BATMAN 0x4305 /* unofficial/not registered Ethertype */ + +enum batadv_packettype { + BATADV_IV_OGM = 0x01, + BATADV_ICMP = 0x02, + BATADV_UNICAST = 0x03, + BATADV_BCAST = 0x04, + BATADV_VIS = 0x05, + BATADV_UNICAST_FRAG = 0x06, + BATADV_TT_QUERY = 0x07, + BATADV_ROAM_ADV = 0x08, }; /* this file is included by batctl which needs these defines */ -#define COMPAT_VERSION 14 +#define BATADV_COMPAT_VERSION 14 -enum batman_iv_flags { - NOT_BEST_NEXT_HOP = 1 << 3, - PRIMARIES_FIRST_HOP = 1 << 4, - VIS_SERVER = 1 << 5, - DIRECTLINK = 1 << 6 +enum batadv_iv_flags { + BATADV_NOT_BEST_NEXT_HOP = 1 << 3, + BATADV_PRIMARIES_FIRST_HOP = 1 << 4, + BATADV_VIS_SERVER = 1 << 5, + BATADV_DIRECTLINK = 1 << 6, }; /* ICMP message types */ -enum icmp_packettype { - ECHO_REPLY = 0, - DESTINATION_UNREACHABLE = 3, - ECHO_REQUEST = 8, - TTL_EXCEEDED = 11, - PARAMETER_PROBLEM = 12 +enum batadv_icmp_packettype { + BATADV_ECHO_REPLY = 0, + BATADV_DESTINATION_UNREACHABLE = 3, + BATADV_ECHO_REQUEST = 8, + BATADV_TTL_EXCEEDED = 11, + BATADV_PARAMETER_PROBLEM = 12, }; /* vis defines */ -enum vis_packettype { - VIS_TYPE_SERVER_SYNC = 0, - VIS_TYPE_CLIENT_UPDATE = 1 +enum batadv_vis_packettype { + BATADV_VIS_TYPE_SERVER_SYNC = 0, + BATADV_VIS_TYPE_CLIENT_UPDATE = 1, }; /* fragmentation defines */ -enum unicast_frag_flags { - UNI_FRAG_HEAD = 1 << 0, - UNI_FRAG_LARGETAIL = 1 << 1 +enum batadv_unicast_frag_flags { + BATADV_UNI_FRAG_HEAD = 1 << 0, + BATADV_UNI_FRAG_LARGETAIL = 1 << 1, }; /* TT_QUERY subtypes */ -#define TT_QUERY_TYPE_MASK 0x3 +#define BATADV_TT_QUERY_TYPE_MASK 0x3 -enum tt_query_packettype { - TT_REQUEST = 0, - TT_RESPONSE = 1 +enum batadv_tt_query_packettype { + BATADV_TT_REQUEST = 0, + BATADV_TT_RESPONSE = 1, }; /* TT_QUERY flags */ -enum tt_query_flags { - TT_FULL_TABLE = 1 << 2 +enum batadv_tt_query_flags { + BATADV_TT_FULL_TABLE = 1 << 2, }; -/* TT_CLIENT flags. +/* BATADV_TT_CLIENT flags. 
* Flags from 1 to 1 << 7 are sent on the wire, while flags from 1 << 8 to - * 1 << 15 are used for local computation only */ -enum tt_client_flags { - TT_CLIENT_DEL = 1 << 0, - TT_CLIENT_ROAM = 1 << 1, - TT_CLIENT_WIFI = 1 << 2, - TT_CLIENT_NOPURGE = 1 << 8, - TT_CLIENT_NEW = 1 << 9, - TT_CLIENT_PENDING = 1 << 10 + * 1 << 15 are used for local computation only + */ +enum batadv_tt_client_flags { + BATADV_TT_CLIENT_DEL = 1 << 0, + BATADV_TT_CLIENT_ROAM = 1 << 1, + BATADV_TT_CLIENT_WIFI = 1 << 2, + BATADV_TT_CLIENT_NOPURGE = 1 << 8, + BATADV_TT_CLIENT_NEW = 1 << 9, + BATADV_TT_CLIENT_PENDING = 1 << 10, }; /* claim frame types for the bridge loop avoidance */ -enum bla_claimframe { - CLAIM_TYPE_ADD = 0x00, - CLAIM_TYPE_DEL = 0x01, - CLAIM_TYPE_ANNOUNCE = 0x02, - CLAIM_TYPE_REQUEST = 0x03 +enum batadv_bla_claimframe { + BATADV_CLAIM_TYPE_ADD = 0x00, + BATADV_CLAIM_TYPE_DEL = 0x01, + BATADV_CLAIM_TYPE_ANNOUNCE = 0x02, + BATADV_CLAIM_TYPE_REQUEST = 0x03, }; /* the destination hardware field in the ARP frame is used to * transport the claim type and the group id */ -struct bla_claim_dst { +struct batadv_bla_claim_dst { uint8_t magic[3]; /* FF:43:05 */ uint8_t type; /* bla_claimframe */ - uint16_t group; /* group id */ + __be16 group; /* group id */ } __packed; -struct batman_header { +struct batadv_header { uint8_t packet_type; uint8_t version; /* batman version field */ uint8_t ttl; } __packed; -struct batman_ogm_packet { - struct batman_header header; +struct batadv_ogm_packet { + struct batadv_header header; uint8_t flags; /* 0x40: DIRECTLINK flag, 0x20 VIS_SERVER flag... */ - uint32_t seqno; + __be32 seqno; uint8_t orig[ETH_ALEN]; uint8_t prev_sender[ETH_ALEN]; uint8_t gw_flags; /* flags related to gateway class */ uint8_t tq; uint8_t tt_num_changes; uint8_t ttvn; /* translation table version number */ - uint16_t tt_crc; + __be16 tt_crc; } __packed; -#define BATMAN_OGM_HLEN sizeof(struct batman_ogm_packet) +#define BATADV_OGM_HLEN sizeof(struct batadv_ogm_packet) -struct icmp_packet { - struct batman_header header; +struct batadv_icmp_packet { + struct batadv_header header; uint8_t msg_type; /* see ICMP message types above */ uint8_t dst[ETH_ALEN]; uint8_t orig[ETH_ALEN]; - uint16_t seqno; + __be16 seqno; uint8_t uid; uint8_t reserved; } __packed; -#define BAT_RR_LEN 16 +#define BATADV_RR_LEN 16 /* icmp_packet_rr must start with all fields from imcp_packet - * as this is assumed by code that handles ICMP packets */ -struct icmp_packet_rr { - struct batman_header header; + * as this is assumed by code that handles ICMP packets + */ +struct batadv_icmp_packet_rr { + struct batadv_header header; uint8_t msg_type; /* see ICMP message types above */ uint8_t dst[ETH_ALEN]; uint8_t orig[ETH_ALEN]; - uint16_t seqno; + __be16 seqno; uint8_t uid; uint8_t rr_cur; - uint8_t rr[BAT_RR_LEN][ETH_ALEN]; + uint8_t rr[BATADV_RR_LEN][ETH_ALEN]; } __packed; -struct unicast_packet { - struct batman_header header; +struct batadv_unicast_packet { + struct batadv_header header; uint8_t ttvn; /* destination translation table version number */ uint8_t dest[ETH_ALEN]; } __packed; -struct unicast_frag_packet { - struct batman_header header; +struct batadv_unicast_frag_packet { + struct batadv_header header; uint8_t ttvn; /* destination translation table version number */ uint8_t dest[ETH_ALEN]; uint8_t flags; uint8_t align; uint8_t orig[ETH_ALEN]; - uint16_t seqno; + __be16 seqno; } __packed; -struct bcast_packet { - struct batman_header header; +struct batadv_bcast_packet { + struct batadv_header header; uint8_t 
reserved; - uint32_t seqno; + __be32 seqno; uint8_t orig[ETH_ALEN]; } __packed; -struct vis_packet { - struct batman_header header; +struct batadv_vis_packet { + struct batadv_header header; uint8_t vis_type; /* which type of vis-participant sent this? */ - uint32_t seqno; /* sequence number */ + __be32 seqno; /* sequence number */ uint8_t entries; /* number of entries behind this struct */ uint8_t reserved; uint8_t vis_orig[ETH_ALEN]; /* originator reporting its neighbors */ @@ -188,11 +188,12 @@ struct vis_packet { uint8_t sender_orig[ETH_ALEN]; /* who sent or forwarded this packet */ } __packed; -struct tt_query_packet { - struct batman_header header; +struct batadv_tt_query_packet { + struct batadv_header header; /* the flag field is a combination of: * - TT_REQUEST or TT_RESPONSE - * - TT_FULL_TABLE */ + * - TT_FULL_TABLE + */ uint8_t flags; uint8_t dst[ETH_ALEN]; uint8_t src[ETH_ALEN]; @@ -200,24 +201,26 @@ struct tt_query_packet { * if TT_REQUEST: ttvn that triggered the * request * if TT_RESPONSE: new ttvn for the src - * orig_node */ + * orig_node + */ uint8_t ttvn; /* tt_data field is: * if TT_REQUEST: crc associated with the * ttvn - * if TT_RESPONSE: table_size */ - uint16_t tt_data; + * if TT_RESPONSE: table_size + */ + __be16 tt_data; } __packed; -struct roam_adv_packet { - struct batman_header header; +struct batadv_roam_adv_packet { + struct batadv_header header; uint8_t reserved; uint8_t dst[ETH_ALEN]; uint8_t src[ETH_ALEN]; uint8_t client[ETH_ALEN]; } __packed; -struct tt_change { +struct batadv_tt_change { uint8_t flags; uint8_t addr[ETH_ALEN]; } __packed; diff --git a/net/batman-adv/ring_buffer.c b/net/batman-adv/ring_buffer.c index fd63951d118d..c8f61e395b74 100644 --- a/net/batman-adv/ring_buffer.c +++ b/net/batman-adv/ring_buffer.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner * @@ -16,26 +15,26 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" #include "ring_buffer.h" -void ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index, uint8_t value) +void batadv_ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index, + uint8_t value) { lq_recv[*lq_index] = value; - *lq_index = (*lq_index + 1) % TQ_GLOBAL_WINDOW_SIZE; + *lq_index = (*lq_index + 1) % BATADV_TQ_GLOBAL_WINDOW_SIZE; } -uint8_t ring_buffer_avg(const uint8_t lq_recv[]) +uint8_t batadv_ring_buffer_avg(const uint8_t lq_recv[]) { const uint8_t *ptr; uint16_t count = 0, i = 0, sum = 0; ptr = lq_recv; - while (i < TQ_GLOBAL_WINDOW_SIZE) { + while (i < BATADV_TQ_GLOBAL_WINDOW_SIZE) { if (*ptr != 0) { count++; sum += *ptr; diff --git a/net/batman-adv/ring_buffer.h b/net/batman-adv/ring_buffer.h index 8b58bd82767d..fda8c17df273 100644 --- a/net/batman-adv/ring_buffer.h +++ b/net/batman-adv/ring_buffer.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner * @@ -16,13 +15,13 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_RING_BUFFER_H_ #define _NET_BATMAN_ADV_RING_BUFFER_H_ -void ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index, uint8_t value); -uint8_t ring_buffer_avg(const uint8_t lq_recv[]); +void batadv_ring_buffer_set(uint8_t lq_recv[], uint8_t *lq_index, + uint8_t value); +uint8_t batadv_ring_buffer_avg(const uint8_t lq_recv[]); #endif /* _NET_BATMAN_ADV_RING_BUFFER_H_ */ diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c index 015471d801b4..bc2b88bbea1f 100644 --- a/net/batman-adv/routing.c +++ b/net/batman-adv/routing.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -31,19 +29,20 @@ #include "unicast.h" #include "bridge_loop_avoidance.h" -static int route_unicast_packet(struct sk_buff *skb, - struct hard_iface *recv_if); +static int batadv_route_unicast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); -void slide_own_bcast_window(struct hard_iface *hard_iface) +void batadv_slide_own_bcast_window(struct batadv_hard_iface *hard_iface) { - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node; struct hlist_head *head; - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; unsigned long *word; uint32_t i; size_t word_index; + uint8_t *w; for (i = 0; i < hash->size; i++) { head = &hash->table[i]; @@ -51,49 +50,49 @@ void slide_own_bcast_window(struct hard_iface *hard_iface) rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { spin_lock_bh(&orig_node->ogm_cnt_lock); - word_index = hard_iface->if_num * NUM_WORDS; + word_index = hard_iface->if_num * BATADV_NUM_WORDS; word = &(orig_node->bcast_own[word_index]); - bit_get_packet(bat_priv, word, 1, 0); - orig_node->bcast_own_sum[hard_iface->if_num] = - bitmap_weight(word, TQ_LOCAL_WINDOW_SIZE); + batadv_bit_get_packet(bat_priv, word, 1, 0); + w = &orig_node->bcast_own_sum[hard_iface->if_num]; + *w = bitmap_weight(word, BATADV_TQ_LOCAL_WINDOW_SIZE); spin_unlock_bh(&orig_node->ogm_cnt_lock); } rcu_read_unlock(); } } -static void _update_route(struct bat_priv *bat_priv, - struct orig_node *orig_node, - struct neigh_node *neigh_node) +static void _batadv_update_route(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node) { - struct neigh_node *curr_router; + struct batadv_neigh_node *curr_router; - curr_router = orig_node_get_router(orig_node); + curr_router = batadv_orig_node_get_router(orig_node); /* route deleted */ if ((curr_router) && (!neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, "Deleting route towards: %pM\n", - orig_node->orig); - tt_global_del_orig(bat_priv, orig_node, - "Deleted route towards originator"); + batadv_dbg(BATADV_DBG_ROUTES, bat_priv, + "Deleting route towards: %pM\n", orig_node->orig); + batadv_tt_global_del_orig(bat_priv, orig_node, + "Deleted route towards originator"); /* 
route added */ } else if ((!curr_router) && (neigh_node)) { - bat_dbg(DBG_ROUTES, bat_priv, - "Adding route towards: %pM (via %pM)\n", - orig_node->orig, neigh_node->addr); + batadv_dbg(BATADV_DBG_ROUTES, bat_priv, + "Adding route towards: %pM (via %pM)\n", + orig_node->orig, neigh_node->addr); /* route changed */ } else if (neigh_node && curr_router) { - bat_dbg(DBG_ROUTES, bat_priv, - "Changing route towards: %pM (now via %pM - was via %pM)\n", - orig_node->orig, neigh_node->addr, - curr_router->addr); + batadv_dbg(BATADV_DBG_ROUTES, bat_priv, + "Changing route towards: %pM (now via %pM - was via %pM)\n", + orig_node->orig, neigh_node->addr, + curr_router->addr); } if (curr_router) - neigh_node_free_ref(curr_router); + batadv_neigh_node_free_ref(curr_router); /* increase refcount of new best neighbor */ if (neigh_node && !atomic_inc_not_zero(&neigh_node->refcount)) @@ -105,30 +104,31 @@ static void _update_route(struct bat_priv *bat_priv, /* decrease refcount of previous best neighbor */ if (curr_router) - neigh_node_free_ref(curr_router); + batadv_neigh_node_free_ref(curr_router); } -void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node) +void batadv_update_route(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node) { - struct neigh_node *router = NULL; + struct batadv_neigh_node *router = NULL; if (!orig_node) goto out; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (router != neigh_node) - _update_route(bat_priv, orig_node, neigh_node); + _batadv_update_route(bat_priv, orig_node, neigh_node); out: if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } /* caller must hold the neigh_list_lock */ -void bonding_candidate_del(struct orig_node *orig_node, - struct neigh_node *neigh_node) +void batadv_bonding_candidate_del(struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node) { /* this neighbor is not part of our candidate list */ if (list_empty(&neigh_node->bonding_list)) @@ -136,37 +136,36 @@ void bonding_candidate_del(struct orig_node *orig_node, list_del_rcu(&neigh_node->bonding_list); INIT_LIST_HEAD(&neigh_node->bonding_list); - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); atomic_dec(&orig_node->bond_candidates); out: return; } -void bonding_candidate_add(struct orig_node *orig_node, - struct neigh_node *neigh_node) +void batadv_bonding_candidate_add(struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node) { struct hlist_node *node; - struct neigh_node *tmp_neigh_node, *router = NULL; + struct batadv_neigh_node *tmp_neigh_node, *router = NULL; uint8_t interference_candidate = 0; spin_lock_bh(&orig_node->neigh_list_lock); /* only consider if it has the same primary address ... */ - if (!compare_eth(orig_node->orig, - neigh_node->orig_node->primary_addr)) + if (!batadv_compare_eth(orig_node->orig, + neigh_node->orig_node->primary_addr)) goto candidate_del; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto candidate_del; /* ... and is good enough to be considered */ - if (neigh_node->tq_avg < router->tq_avg - BONDING_TQ_THRESHOLD) + if (neigh_node->tq_avg < router->tq_avg - BATADV_BONDING_TQ_THRESHOLD) goto candidate_del; - /** - * check if we have another candidate with the same mac address or + /* check if we have another candidate with the same mac address or * interface. 
If we do, we won't select this candidate because of * possible interference. */ @@ -177,12 +176,14 @@ void bonding_candidate_add(struct orig_node *orig_node, continue; /* we only care if the other candidate is even - * considered as candidate. */ + * considered as candidate. + */ if (list_empty(&tmp_neigh_node->bonding_list)) continue; if ((neigh_node->if_incoming == tmp_neigh_node->if_incoming) || - (compare_eth(neigh_node->addr, tmp_neigh_node->addr))) { + (batadv_compare_eth(neigh_node->addr, + tmp_neigh_node->addr))) { interference_candidate = 1; break; } @@ -204,21 +205,22 @@ void bonding_candidate_add(struct orig_node *orig_node, goto out; candidate_del: - bonding_candidate_del(orig_node, neigh_node); + batadv_bonding_candidate_del(orig_node, neigh_node); out: spin_unlock_bh(&orig_node->neigh_list_lock); if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } /* copy primary address for bonding */ -void bonding_save_primary(const struct orig_node *orig_node, - struct orig_node *orig_neigh_node, - const struct batman_ogm_packet *batman_ogm_packet) +void +batadv_bonding_save_primary(const struct batadv_orig_node *orig_node, + struct batadv_orig_node *orig_neigh_node, + const struct batadv_ogm_packet *batman_ogm_packet) { - if (!(batman_ogm_packet->flags & PRIMARIES_FIRST_HOP)) + if (!(batman_ogm_packet->flags & BATADV_PRIMARIES_FIRST_HOP)) return; memcpy(orig_neigh_node->primary_addr, orig_node->orig, ETH_ALEN); @@ -229,25 +231,26 @@ void bonding_save_primary(const struct orig_node *orig_node, * 0 if the packet is to be accepted * 1 if the packet is to be ignored. */ -int window_protected(struct bat_priv *bat_priv, int32_t seq_num_diff, - unsigned long *last_reset) +int batadv_window_protected(struct batadv_priv *bat_priv, int32_t seq_num_diff, + unsigned long *last_reset) { - if ((seq_num_diff <= -TQ_LOCAL_WINDOW_SIZE) || - (seq_num_diff >= EXPECTED_SEQNO_RANGE)) { - if (!has_timed_out(*last_reset, RESET_PROTECTION_MS)) + if (seq_num_diff <= -BATADV_TQ_LOCAL_WINDOW_SIZE || + seq_num_diff >= BATADV_EXPECTED_SEQNO_RANGE) { + if (!batadv_has_timed_out(*last_reset, + BATADV_RESET_PROTECTION_MS)) return 1; *last_reset = jiffies; - bat_dbg(DBG_BATMAN, bat_priv, - "old packet received, start protection\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "old packet received, start protection\n"); } return 0; } -bool check_management_packet(struct sk_buff *skb, - struct hard_iface *hard_iface, - int header_len) +bool batadv_check_management_packet(struct sk_buff *skb, + struct batadv_hard_iface *hard_iface, + int header_len) { struct ethhdr *ethhdr; @@ -276,34 +279,34 @@ bool check_management_packet(struct sk_buff *skb, return true; } -static int recv_my_icmp_packet(struct bat_priv *bat_priv, - struct sk_buff *skb, size_t icmp_len) +static int batadv_recv_my_icmp_packet(struct batadv_priv *bat_priv, + struct sk_buff *skb, size_t icmp_len) { - struct hard_iface *primary_if = NULL; - struct orig_node *orig_node = NULL; - struct neigh_node *router = NULL; - struct icmp_packet_rr *icmp_packet; + struct batadv_hard_iface *primary_if = NULL; + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *router = NULL; + struct batadv_icmp_packet_rr *icmp_packet; int ret = NET_RX_DROP; - icmp_packet = (struct icmp_packet_rr *)skb->data; + icmp_packet = (struct batadv_icmp_packet_rr *)skb->data; /* add data to device queue */ - if (icmp_packet->msg_type != ECHO_REQUEST) { - bat_socket_receive_packet(icmp_packet, icmp_len); + if (icmp_packet->msg_type != 
BATADV_ECHO_REQUEST) { + batadv_socket_receive_packet(icmp_packet, icmp_len); goto out; } - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; /* answer echo request (ping) */ /* get routing information */ - orig_node = orig_hash_find(bat_priv, icmp_packet->orig); + orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->orig); if (!orig_node) goto out; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto out; @@ -311,54 +314,54 @@ static int recv_my_icmp_packet(struct bat_priv *bat_priv, if (skb_cow(skb, ETH_HLEN) < 0) goto out; - icmp_packet = (struct icmp_packet_rr *)skb->data; + icmp_packet = (struct batadv_icmp_packet_rr *)skb->data; memcpy(icmp_packet->dst, icmp_packet->orig, ETH_ALEN); memcpy(icmp_packet->orig, primary_if->net_dev->dev_addr, ETH_ALEN); - icmp_packet->msg_type = ECHO_REPLY; - icmp_packet->header.ttl = TTL; + icmp_packet->msg_type = BATADV_ECHO_REPLY; + icmp_packet->header.ttl = BATADV_TTL; - send_skb_packet(skb, router->if_incoming, router->addr); + batadv_send_skb_packet(skb, router->if_incoming, router->addr); ret = NET_RX_SUCCESS; out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -static int recv_icmp_ttl_exceeded(struct bat_priv *bat_priv, - struct sk_buff *skb) +static int batadv_recv_icmp_ttl_exceeded(struct batadv_priv *bat_priv, + struct sk_buff *skb) { - struct hard_iface *primary_if = NULL; - struct orig_node *orig_node = NULL; - struct neigh_node *router = NULL; - struct icmp_packet *icmp_packet; + struct batadv_hard_iface *primary_if = NULL; + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *router = NULL; + struct batadv_icmp_packet *icmp_packet; int ret = NET_RX_DROP; - icmp_packet = (struct icmp_packet *)skb->data; + icmp_packet = (struct batadv_icmp_packet *)skb->data; /* send TTL exceeded if packet is an echo request (traceroute) */ - if (icmp_packet->msg_type != ECHO_REQUEST) { + if (icmp_packet->msg_type != BATADV_ECHO_REQUEST) { pr_debug("Warning - can't forward icmp packet from %pM to %pM: ttl exceeded\n", icmp_packet->orig, icmp_packet->dst); goto out; } - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; /* get routing information */ - orig_node = orig_hash_find(bat_priv, icmp_packet->orig); + orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->orig); if (!orig_node) goto out; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto out; @@ -366,42 +369,41 @@ static int recv_icmp_ttl_exceeded(struct bat_priv *bat_priv, if (skb_cow(skb, ETH_HLEN) < 0) goto out; - icmp_packet = (struct icmp_packet *)skb->data; + icmp_packet = (struct batadv_icmp_packet *)skb->data; memcpy(icmp_packet->dst, icmp_packet->orig, ETH_ALEN); memcpy(icmp_packet->orig, primary_if->net_dev->dev_addr, ETH_ALEN); - icmp_packet->msg_type = TTL_EXCEEDED; - icmp_packet->header.ttl = TTL; + icmp_packet->msg_type = BATADV_TTL_EXCEEDED; + icmp_packet->header.ttl = BATADV_TTL; - send_skb_packet(skb, router->if_incoming, router->addr); + batadv_send_skb_packet(skb, router->if_incoming, router->addr); ret = NET_RX_SUCCESS; out: if (primary_if) - 
hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_icmp_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct icmp_packet_rr *icmp_packet; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_icmp_packet_rr *icmp_packet; struct ethhdr *ethhdr; - struct orig_node *orig_node = NULL; - struct neigh_node *router = NULL; - int hdr_size = sizeof(struct icmp_packet); + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *router = NULL; + int hdr_size = sizeof(struct batadv_icmp_packet); int ret = NET_RX_DROP; - /** - * we truncate all incoming icmp packets if they don't match our size - */ - if (skb->len >= sizeof(struct icmp_packet_rr)) - hdr_size = sizeof(struct icmp_packet_rr); + /* we truncate all incoming icmp packets if they don't match our size */ + if (skb->len >= sizeof(struct batadv_icmp_packet_rr)) + hdr_size = sizeof(struct batadv_icmp_packet_rr); /* drop packet if it has not necessary minimum size */ if (unlikely(!pskb_may_pull(skb, hdr_size))) @@ -418,33 +420,33 @@ int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if) goto out; /* not for me */ - if (!is_my_mac(ethhdr->h_dest)) + if (!batadv_is_my_mac(ethhdr->h_dest)) goto out; - icmp_packet = (struct icmp_packet_rr *)skb->data; + icmp_packet = (struct batadv_icmp_packet_rr *)skb->data; /* add record route information if not full */ - if ((hdr_size == sizeof(struct icmp_packet_rr)) && - (icmp_packet->rr_cur < BAT_RR_LEN)) { + if ((hdr_size == sizeof(struct batadv_icmp_packet_rr)) && + (icmp_packet->rr_cur < BATADV_RR_LEN)) { memcpy(&(icmp_packet->rr[icmp_packet->rr_cur]), ethhdr->h_dest, ETH_ALEN); icmp_packet->rr_cur++; } /* packet for me */ - if (is_my_mac(icmp_packet->dst)) - return recv_my_icmp_packet(bat_priv, skb, hdr_size); + if (batadv_is_my_mac(icmp_packet->dst)) + return batadv_recv_my_icmp_packet(bat_priv, skb, hdr_size); /* TTL exceeded */ if (icmp_packet->header.ttl < 2) - return recv_icmp_ttl_exceeded(bat_priv, skb); + return batadv_recv_icmp_ttl_exceeded(bat_priv, skb); /* get routing information */ - orig_node = orig_hash_find(bat_priv, icmp_packet->dst); + orig_node = batadv_orig_hash_find(bat_priv, icmp_packet->dst); if (!orig_node) goto out; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto out; @@ -452,20 +454,20 @@ int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if) if (skb_cow(skb, ETH_HLEN) < 0) goto out; - icmp_packet = (struct icmp_packet_rr *)skb->data; + icmp_packet = (struct batadv_icmp_packet_rr *)skb->data; /* decrement ttl */ icmp_packet->header.ttl--; /* route it */ - send_skb_packet(skb, router->if_incoming, router->addr); + batadv_send_skb_packet(skb, router->if_incoming, router->addr); ret = NET_RX_SUCCESS; out: if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } @@ -473,12 +475,14 @@ out: * robin fashion over the remaining interfaces. * * This method rotates the bonding list and increases the - * returned router's refcount. 
*/ -static struct neigh_node *find_bond_router(struct orig_node *primary_orig, - const struct hard_iface *recv_if) + * returned router's refcount. + */ +static struct batadv_neigh_node * +batadv_find_bond_router(struct batadv_orig_node *primary_orig, + const struct batadv_hard_iface *recv_if) { - struct neigh_node *tmp_neigh_node; - struct neigh_node *router = NULL, *first_candidate = NULL; + struct batadv_neigh_node *tmp_neigh_node; + struct batadv_neigh_node *router = NULL, *first_candidate = NULL; rcu_read_lock(); list_for_each_entry_rcu(tmp_neigh_node, &primary_orig->bond_list, @@ -506,10 +510,12 @@ static struct neigh_node *find_bond_router(struct orig_node *primary_orig, goto out; /* selected should point to the next element - * after the current router */ + * after the current router + */ spin_lock_bh(&primary_orig->neigh_list_lock); /* this is a list_move(), which unfortunately - * does not exist as rcu version */ + * does not exist as rcu version + */ list_del_rcu(&primary_orig->bond_list); list_add_rcu(&primary_orig->bond_list, &router->bonding_list); @@ -524,12 +530,14 @@ out: * remaining candidates which are not using * this interface. * - * Increases the returned router's refcount */ -static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, - const struct hard_iface *recv_if) + * Increases the returned router's refcount + */ +static struct batadv_neigh_node * +batadv_find_ifalter_router(struct batadv_orig_node *primary_orig, + const struct batadv_hard_iface *recv_if) { - struct neigh_node *tmp_neigh_node; - struct neigh_node *router = NULL, *first_candidate = NULL; + struct batadv_neigh_node *tmp_neigh_node; + struct batadv_neigh_node *router = NULL, *first_candidate = NULL; rcu_read_lock(); list_for_each_entry_rcu(tmp_neigh_node, &primary_orig->bond_list, @@ -545,19 +553,21 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, continue; /* if we don't have a router yet - * or this one is better, choose it. */ + * or this one is better, choose it. + */ if ((!router) || (tmp_neigh_node->tq_avg > router->tq_avg)) { /* decrement refcount of - * previously selected router */ + * previously selected router + */ if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); router = tmp_neigh_node; atomic_inc_not_zero(&router->refcount); } - neigh_node_free_ref(tmp_neigh_node); + batadv_neigh_node_free_ref(tmp_neigh_node); } /* use the first candidate if nothing was found. 
*/ @@ -569,19 +579,22 @@ static struct neigh_node *find_ifalter_router(struct orig_node *primary_orig, return router; } -int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_tt_query(struct sk_buff *skb, struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct tt_query_packet *tt_query; - uint16_t tt_len; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_tt_query_packet *tt_query; + uint16_t tt_size; struct ethhdr *ethhdr; + char tt_flag; + size_t packet_size; /* drop packet if it has not necessary minimum size */ - if (unlikely(!pskb_may_pull(skb, sizeof(struct tt_query_packet)))) + if (unlikely(!pskb_may_pull(skb, + sizeof(struct batadv_tt_query_packet)))) goto out; /* I could need to modify it */ - if (skb_cow(skb, sizeof(struct tt_query_packet)) < 0) + if (skb_cow(skb, sizeof(struct batadv_tt_query_packet)) < 0) goto out; ethhdr = (struct ethhdr *)skb_mac_header(skb); @@ -594,47 +607,59 @@ int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if) if (is_broadcast_ether_addr(ethhdr->h_source)) goto out; - tt_query = (struct tt_query_packet *)skb->data; + tt_query = (struct batadv_tt_query_packet *)skb->data; - tt_query->tt_data = ntohs(tt_query->tt_data); + switch (tt_query->flags & BATADV_TT_QUERY_TYPE_MASK) { + case BATADV_TT_REQUEST: + batadv_inc_counter(bat_priv, BATADV_CNT_TT_REQUEST_RX); - switch (tt_query->flags & TT_QUERY_TYPE_MASK) { - case TT_REQUEST: /* If we cannot provide an answer the tt_request is - * forwarded */ - if (!send_tt_response(bat_priv, tt_query)) { - bat_dbg(DBG_TT, bat_priv, - "Routing TT_REQUEST to %pM [%c]\n", - tt_query->dst, - (tt_query->flags & TT_FULL_TABLE ? 'F' : '.')); - tt_query->tt_data = htons(tt_query->tt_data); - return route_unicast_packet(skb, recv_if); + * forwarded + */ + if (!batadv_send_tt_response(bat_priv, tt_query)) { + if (tt_query->flags & BATADV_TT_FULL_TABLE) + tt_flag = 'F'; + else + tt_flag = '.'; + + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Routing TT_REQUEST to %pM [%c]\n", + tt_query->dst, + tt_flag); + return batadv_route_unicast_packet(skb, recv_if); } break; - case TT_RESPONSE: - if (is_my_mac(tt_query->dst)) { + case BATADV_TT_RESPONSE: + batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_RX); + + if (batadv_is_my_mac(tt_query->dst)) { /* packet needs to be linearized to access the TT - * changes */ + * changes + */ if (skb_linearize(skb) < 0) goto out; /* skb_linearize() possibly changed skb->data */ - tt_query = (struct tt_query_packet *)skb->data; + tt_query = (struct batadv_tt_query_packet *)skb->data; - tt_len = tt_query->tt_data * sizeof(struct tt_change); + tt_size = batadv_tt_len(ntohs(tt_query->tt_data)); /* Ensure we have all the claimed data */ - if (unlikely(skb_headlen(skb) < - sizeof(struct tt_query_packet) + tt_len)) + packet_size = sizeof(struct batadv_tt_query_packet); + packet_size += tt_size; + if (unlikely(skb_headlen(skb) < packet_size)) goto out; - handle_tt_response(bat_priv, tt_query); + batadv_handle_tt_response(bat_priv, tt_query); } else { - bat_dbg(DBG_TT, bat_priv, - "Routing TT_RESPONSE to %pM [%c]\n", - tt_query->dst, - (tt_query->flags & TT_FULL_TABLE ? 
'F' : '.')); - tt_query->tt_data = htons(tt_query->tt_data); - return route_unicast_packet(skb, recv_if); + if (tt_query->flags & BATADV_TT_FULL_TABLE) + tt_flag = 'F'; + else + tt_flag = '.'; + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Routing TT_RESPONSE to %pM [%c]\n", + tt_query->dst, + tt_flag); + return batadv_route_unicast_packet(skb, recv_if); } break; } @@ -644,15 +669,16 @@ out: return NET_RX_DROP; } -int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_roam_adv(struct sk_buff *skb, struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct roam_adv_packet *roam_adv_packet; - struct orig_node *orig_node; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_roam_adv_packet *roam_adv_packet; + struct batadv_orig_node *orig_node; struct ethhdr *ethhdr; /* drop packet if it has not necessary minimum size */ - if (unlikely(!pskb_may_pull(skb, sizeof(struct roam_adv_packet)))) + if (unlikely(!pskb_may_pull(skb, + sizeof(struct batadv_roam_adv_packet)))) goto out; ethhdr = (struct ethhdr *)skb_mac_header(skb); @@ -665,35 +691,39 @@ int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if) if (is_broadcast_ether_addr(ethhdr->h_source)) goto out; - roam_adv_packet = (struct roam_adv_packet *)skb->data; + batadv_inc_counter(bat_priv, BATADV_CNT_TT_ROAM_ADV_RX); + + roam_adv_packet = (struct batadv_roam_adv_packet *)skb->data; - if (!is_my_mac(roam_adv_packet->dst)) - return route_unicast_packet(skb, recv_if); + if (!batadv_is_my_mac(roam_adv_packet->dst)) + return batadv_route_unicast_packet(skb, recv_if); /* check if it is a backbone gateway. we don't accept * roaming advertisement from it, as it has the same * entries as we have. */ - if (bla_is_backbone_gw_orig(bat_priv, roam_adv_packet->src)) + if (batadv_bla_is_backbone_gw_orig(bat_priv, roam_adv_packet->src)) goto out; - orig_node = orig_hash_find(bat_priv, roam_adv_packet->src); + orig_node = batadv_orig_hash_find(bat_priv, roam_adv_packet->src); if (!orig_node) goto out; - bat_dbg(DBG_TT, bat_priv, - "Received ROAMING_ADV from %pM (client %pM)\n", - roam_adv_packet->src, roam_adv_packet->client); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Received ROAMING_ADV from %pM (client %pM)\n", + roam_adv_packet->src, roam_adv_packet->client); - tt_global_add(bat_priv, orig_node, roam_adv_packet->client, - atomic_read(&orig_node->last_ttvn) + 1, true, false); + batadv_tt_global_add(bat_priv, orig_node, roam_adv_packet->client, + BATADV_TT_CLIENT_ROAM, + atomic_read(&orig_node->last_ttvn) + 1); /* Roaming phase starts: I have new information but the ttvn has not * been incremented yet. This flag will make me check all the incoming - * packets for the correct destination. */ + * packets for the correct destination. + */ bat_priv->tt_poss_change = true; - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); out: /* returning NET_RX_DROP will make the caller function kfree the skb */ return NET_RX_DROP; @@ -701,26 +731,30 @@ out: /* find a suitable router for this originator, and use * bonding if possible. increases the found neighbors - * refcount.*/ -struct neigh_node *find_router(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const struct hard_iface *recv_if) + * refcount. 
+ */ +struct batadv_neigh_node * +batadv_find_router(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const struct batadv_hard_iface *recv_if) { - struct orig_node *primary_orig_node; - struct orig_node *router_orig; - struct neigh_node *router; + struct batadv_orig_node *primary_orig_node; + struct batadv_orig_node *router_orig; + struct batadv_neigh_node *router; static uint8_t zero_mac[ETH_ALEN] = {0, 0, 0, 0, 0, 0}; int bonding_enabled; + uint8_t *primary_addr; if (!orig_node) return NULL; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto err; /* without bonding, the first node should - * always choose the default router. */ + * always choose the default router. + */ bonding_enabled = atomic_read(&bat_priv->bonding); rcu_read_lock(); @@ -732,43 +766,47 @@ struct neigh_node *find_router(struct bat_priv *bat_priv, if ((!recv_if) && (!bonding_enabled)) goto return_router; + primary_addr = router_orig->primary_addr; + /* if we have something in the primary_addr, we can search - * for a potential bonding candidate. */ - if (compare_eth(router_orig->primary_addr, zero_mac)) + * for a potential bonding candidate. + */ + if (batadv_compare_eth(primary_addr, zero_mac)) goto return_router; /* find the orig_node which has the primary interface. might - * even be the same as our router_orig in many cases */ - - if (compare_eth(router_orig->primary_addr, router_orig->orig)) { + * even be the same as our router_orig in many cases + */ + if (batadv_compare_eth(primary_addr, router_orig->orig)) { primary_orig_node = router_orig; } else { - primary_orig_node = orig_hash_find(bat_priv, - router_orig->primary_addr); + primary_orig_node = batadv_orig_hash_find(bat_priv, + primary_addr); if (!primary_orig_node) goto return_router; - orig_node_free_ref(primary_orig_node); + batadv_orig_node_free_ref(primary_orig_node); } /* with less than 2 candidates, we can't do any - * bonding and prefer the original router. */ + * bonding and prefer the original router. + */ if (atomic_read(&primary_orig_node->bond_candidates) < 2) goto return_router; /* all nodes between should choose a candidate which * is is not on the interface where the packet came - * in. */ - - neigh_node_free_ref(router); + * in. 
+ */ + batadv_neigh_node_free_ref(router); if (bonding_enabled) - router = find_bond_router(primary_orig_node, recv_if); + router = batadv_find_bond_router(primary_orig_node, recv_if); else - router = find_ifalter_router(primary_orig_node, recv_if); + router = batadv_find_ifalter_router(primary_orig_node, recv_if); return_router: - if (router && router->if_incoming->if_status != IF_ACTIVE) + if (router && router->if_incoming->if_status != BATADV_IF_ACTIVE) goto err_unlock; rcu_read_unlock(); @@ -777,11 +815,11 @@ err_unlock: rcu_read_unlock(); err: if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); return NULL; } -static int check_unicast_packet(struct sk_buff *skb, int hdr_size) +static int batadv_check_unicast_packet(struct sk_buff *skb, int hdr_size) { struct ethhdr *ethhdr; @@ -800,23 +838,24 @@ static int check_unicast_packet(struct sk_buff *skb, int hdr_size) return -1; /* not for me */ - if (!is_my_mac(ethhdr->h_dest)) + if (!batadv_is_my_mac(ethhdr->h_dest)) return -1; return 0; } -static int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) +static int batadv_route_unicast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct orig_node *orig_node = NULL; - struct neigh_node *neigh_node = NULL; - struct unicast_packet *unicast_packet; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *neigh_node = NULL; + struct batadv_unicast_packet *unicast_packet; struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb); int ret = NET_RX_DROP; struct sk_buff *new_skb; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; /* TTL exceeded */ if (unicast_packet->header.ttl < 2) { @@ -826,13 +865,13 @@ static int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) } /* get routing information */ - orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + orig_node = batadv_orig_hash_find(bat_priv, unicast_packet->dest); if (!orig_node) goto out; /* find_router() increases neigh_nodes refcount if found. 
*/ - neigh_node = find_router(bat_priv, orig_node, recv_if); + neigh_node = batadv_find_router(bat_priv, orig_node, recv_if); if (!neigh_node) goto out; @@ -841,20 +880,22 @@ static int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) if (skb_cow(skb, ETH_HLEN) < 0) goto out; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; - if (unicast_packet->header.packet_type == BAT_UNICAST && + if (unicast_packet->header.packet_type == BATADV_UNICAST && atomic_read(&bat_priv->fragmentation) && skb->len > neigh_node->if_incoming->net_dev->mtu) { - ret = frag_send_skb(skb, bat_priv, - neigh_node->if_incoming, neigh_node->addr); + ret = batadv_frag_send_skb(skb, bat_priv, + neigh_node->if_incoming, + neigh_node->addr); goto out; } - if (unicast_packet->header.packet_type == BAT_UNICAST_FRAG && - frag_can_reassemble(skb, neigh_node->if_incoming->net_dev->mtu)) { + if (unicast_packet->header.packet_type == BATADV_UNICAST_FRAG && + batadv_frag_can_reassemble(skb, + neigh_node->if_incoming->net_dev->mtu)) { - ret = frag_reassemble_skb(skb, bat_priv, &new_skb); + ret = batadv_frag_reassemble_skb(skb, bat_priv, &new_skb); if (ret == NET_RX_DROP) goto out; @@ -866,141 +907,153 @@ static int route_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) } skb = new_skb; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; } /* decrement ttl */ unicast_packet->header.ttl--; + /* Update stats counter */ + batadv_inc_counter(bat_priv, BATADV_CNT_FORWARD); + batadv_add_counter(bat_priv, BATADV_CNT_FORWARD_BYTES, + skb->len + ETH_HLEN); + /* route it */ - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = NET_RX_SUCCESS; out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -static int check_unicast_ttvn(struct bat_priv *bat_priv, - struct sk_buff *skb) { +static int batadv_check_unicast_ttvn(struct batadv_priv *bat_priv, + struct sk_buff *skb) { uint8_t curr_ttvn; - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; struct ethhdr *ethhdr; - struct hard_iface *primary_if; - struct unicast_packet *unicast_packet; + struct batadv_hard_iface *primary_if; + struct batadv_unicast_packet *unicast_packet; bool tt_poss_change; + int is_old_ttvn; /* I could need to modify it */ - if (skb_cow(skb, sizeof(struct unicast_packet)) < 0) + if (skb_cow(skb, sizeof(struct batadv_unicast_packet)) < 0) return 0; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; - if (is_my_mac(unicast_packet->dest)) { + if (batadv_is_my_mac(unicast_packet->dest)) { tt_poss_change = bat_priv->tt_poss_change; curr_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); } else { - orig_node = orig_hash_find(bat_priv, unicast_packet->dest); + orig_node = batadv_orig_hash_find(bat_priv, + unicast_packet->dest); if (!orig_node) return 0; curr_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); tt_poss_change = orig_node->tt_poss_change; - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } /* Check whether I have to reroute the packet */ - if (seq_before(unicast_packet->ttvn, curr_ttvn) || tt_poss_change) { + is_old_ttvn = batadv_seq_before(unicast_packet->ttvn, 
curr_ttvn); + if (is_old_ttvn || tt_poss_change) { /* check if there is enough data before accessing it */ - if (pskb_may_pull(skb, sizeof(struct unicast_packet) + + if (pskb_may_pull(skb, sizeof(struct batadv_unicast_packet) + ETH_HLEN) < 0) return 0; - ethhdr = (struct ethhdr *)(skb->data + - sizeof(struct unicast_packet)); + ethhdr = (struct ethhdr *)(skb->data + sizeof(*unicast_packet)); /* we don't have an updated route for this client, so we should * not try to reroute the packet!! */ - if (tt_global_client_is_roaming(bat_priv, ethhdr->h_dest)) + if (batadv_tt_global_client_is_roaming(bat_priv, + ethhdr->h_dest)) return 1; - orig_node = transtable_search(bat_priv, NULL, ethhdr->h_dest); + orig_node = batadv_transtable_search(bat_priv, NULL, + ethhdr->h_dest); if (!orig_node) { - if (!is_my_client(bat_priv, ethhdr->h_dest)) + if (!batadv_is_my_client(bat_priv, ethhdr->h_dest)) return 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) return 0; memcpy(unicast_packet->dest, primary_if->net_dev->dev_addr, ETH_ALEN); - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } else { memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); curr_ttvn = (uint8_t) atomic_read(&orig_node->last_ttvn); - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } - bat_dbg(DBG_ROUTES, bat_priv, - "TTVN mismatch (old_ttvn %u new_ttvn %u)! Rerouting unicast packet (for %pM) to %pM\n", - unicast_packet->ttvn, curr_ttvn, ethhdr->h_dest, - unicast_packet->dest); + batadv_dbg(BATADV_DBG_ROUTES, bat_priv, + "TTVN mismatch (old_ttvn %u new_ttvn %u)! Rerouting unicast packet (for %pM) to %pM\n", + unicast_packet->ttvn, curr_ttvn, ethhdr->h_dest, + unicast_packet->dest); unicast_packet->ttvn = curr_ttvn; } return 1; } -int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_unicast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct unicast_packet *unicast_packet; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_unicast_packet *unicast_packet; int hdr_size = sizeof(*unicast_packet); - if (check_unicast_packet(skb, hdr_size) < 0) + if (batadv_check_unicast_packet(skb, hdr_size) < 0) return NET_RX_DROP; - if (!check_unicast_ttvn(bat_priv, skb)) + if (!batadv_check_unicast_ttvn(bat_priv, skb)) return NET_RX_DROP; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; /* packet for me */ - if (is_my_mac(unicast_packet->dest)) { - interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); + if (batadv_is_my_mac(unicast_packet->dest)) { + batadv_interface_rx(recv_if->soft_iface, skb, recv_if, + hdr_size); return NET_RX_SUCCESS; } - return route_unicast_packet(skb, recv_if); + return batadv_route_unicast_packet(skb, recv_if); } -int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_ucast_frag_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct unicast_frag_packet *unicast_packet; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_unicast_frag_packet *unicast_packet; int hdr_size = sizeof(*unicast_packet); struct sk_buff *new_skb = NULL; int ret; - if (check_unicast_packet(skb, hdr_size) < 0) + if (batadv_check_unicast_packet(skb, hdr_size) < 0) 
return NET_RX_DROP; - if (!check_unicast_ttvn(bat_priv, skb)) + if (!batadv_check_unicast_ttvn(bat_priv, skb)) return NET_RX_DROP; - unicast_packet = (struct unicast_frag_packet *)skb->data; + unicast_packet = (struct batadv_unicast_frag_packet *)skb->data; /* packet for me */ - if (is_my_mac(unicast_packet->dest)) { + if (batadv_is_my_mac(unicast_packet->dest)) { - ret = frag_reassemble_skb(skb, bat_priv, &new_skb); + ret = batadv_frag_reassemble_skb(skb, bat_priv, &new_skb); if (ret == NET_RX_DROP) return NET_RX_DROP; @@ -1009,20 +1062,21 @@ int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if) if (!new_skb) return NET_RX_SUCCESS; - interface_rx(recv_if->soft_iface, new_skb, recv_if, - sizeof(struct unicast_packet)); + batadv_interface_rx(recv_if->soft_iface, new_skb, recv_if, + sizeof(struct batadv_unicast_packet)); return NET_RX_SUCCESS; } - return route_unicast_packet(skb, recv_if); + return batadv_route_unicast_packet(skb, recv_if); } -int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_bcast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); - struct orig_node *orig_node = NULL; - struct bcast_packet *bcast_packet; + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_orig_node *orig_node = NULL; + struct batadv_bcast_packet *bcast_packet; struct ethhdr *ethhdr; int hdr_size = sizeof(*bcast_packet); int ret = NET_RX_DROP; @@ -1043,19 +1097,19 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if) goto out; /* ignore broadcasts sent by myself */ - if (is_my_mac(ethhdr->h_source)) + if (batadv_is_my_mac(ethhdr->h_source)) goto out; - bcast_packet = (struct bcast_packet *)skb->data; + bcast_packet = (struct batadv_bcast_packet *)skb->data; /* ignore broadcasts originated by myself */ - if (is_my_mac(bcast_packet->orig)) + if (batadv_is_my_mac(bcast_packet->orig)) goto out; if (bcast_packet->header.ttl < 2) goto out; - orig_node = orig_hash_find(bat_priv, bcast_packet->orig); + orig_node = batadv_orig_hash_find(bat_priv, bcast_packet->orig); if (!orig_node) goto out; @@ -1063,39 +1117,40 @@ int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if) spin_lock_bh(&orig_node->bcast_seqno_lock); /* check whether the packet is a duplicate */ - if (bat_test_bit(orig_node->bcast_bits, orig_node->last_bcast_seqno, - ntohl(bcast_packet->seqno))) + if (batadv_test_bit(orig_node->bcast_bits, orig_node->last_bcast_seqno, + ntohl(bcast_packet->seqno))) goto spin_unlock; seq_diff = ntohl(bcast_packet->seqno) - orig_node->last_bcast_seqno; /* check whether the packet is old and the host just restarted. */ - if (window_protected(bat_priv, seq_diff, - &orig_node->bcast_seqno_reset)) + if (batadv_window_protected(bat_priv, seq_diff, + &orig_node->bcast_seqno_reset)) goto spin_unlock; /* mark broadcast in flood history, update window position - * if required. */ - if (bit_get_packet(bat_priv, orig_node->bcast_bits, seq_diff, 1)) + * if required. 
+ */ + if (batadv_bit_get_packet(bat_priv, orig_node->bcast_bits, seq_diff, 1)) orig_node->last_bcast_seqno = ntohl(bcast_packet->seqno); spin_unlock_bh(&orig_node->bcast_seqno_lock); /* check whether this has been sent by another originator before */ - if (bla_check_bcast_duplist(bat_priv, bcast_packet, hdr_size)) + if (batadv_bla_check_bcast_duplist(bat_priv, bcast_packet, hdr_size)) goto out; /* rebroadcast packet */ - add_bcast_packet_to_list(bat_priv, skb, 1); + batadv_add_bcast_packet_to_list(bat_priv, skb, 1); /* don't hand the broadcast up if it is from an originator * from the same backbone. */ - if (bla_is_backbone_gw(skb, orig_node, hdr_size)) + if (batadv_bla_is_backbone_gw(skb, orig_node, hdr_size)) goto out; /* broadcast for me */ - interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); + batadv_interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size); ret = NET_RX_SUCCESS; goto out; @@ -1103,15 +1158,16 @@ spin_unlock: spin_unlock_bh(&orig_node->bcast_seqno_lock); out: if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if) +int batadv_recv_vis_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if) { - struct vis_packet *vis_packet; + struct batadv_vis_packet *vis_packet; struct ethhdr *ethhdr; - struct bat_priv *bat_priv = netdev_priv(recv_if->soft_iface); + struct batadv_priv *bat_priv = netdev_priv(recv_if->soft_iface); int hdr_size = sizeof(*vis_packet); /* keep skb linear */ @@ -1121,29 +1177,29 @@ int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if) if (unlikely(!pskb_may_pull(skb, hdr_size))) return NET_RX_DROP; - vis_packet = (struct vis_packet *)skb->data; + vis_packet = (struct batadv_vis_packet *)skb->data; ethhdr = (struct ethhdr *)skb_mac_header(skb); /* not for me */ - if (!is_my_mac(ethhdr->h_dest)) + if (!batadv_is_my_mac(ethhdr->h_dest)) return NET_RX_DROP; /* ignore own packets */ - if (is_my_mac(vis_packet->vis_orig)) + if (batadv_is_my_mac(vis_packet->vis_orig)) return NET_RX_DROP; - if (is_my_mac(vis_packet->sender_orig)) + if (batadv_is_my_mac(vis_packet->sender_orig)) return NET_RX_DROP; switch (vis_packet->vis_type) { - case VIS_TYPE_SERVER_SYNC: - receive_server_sync_packet(bat_priv, vis_packet, - skb_headlen(skb)); + case BATADV_VIS_TYPE_SERVER_SYNC: + batadv_receive_server_sync_packet(bat_priv, vis_packet, + skb_headlen(skb)); break; - case VIS_TYPE_CLIENT_UPDATE: - receive_client_update_packet(bat_priv, vis_packet, - skb_headlen(skb)); + case BATADV_VIS_TYPE_CLIENT_UPDATE: + batadv_receive_client_update_packet(bat_priv, vis_packet, + skb_headlen(skb)); break; default: /* ignore unknown packet */ @@ -1151,6 +1207,7 @@ int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if) } /* We take a copy of the data in the packet, so we should - always free the skbuf. */ + * always free the skbuf. + */ return NET_RX_DROP; } diff --git a/net/batman-adv/routing.h b/net/batman-adv/routing.h index d6bbbebb6567..9262279ea667 100644 --- a/net/batman-adv/routing.h +++ b/net/batman-adv/routing.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
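The broadcast receive path above suppresses duplicates with a per-originator bitmap indexed by broadcast sequence number (batadv_test_bit / batadv_bit_get_packet) and discards packets whose sequence difference falls outside a protection window (batadv_window_protected). A simplified, single-threaded userspace sketch of that kind of sliding bitmap, with an illustrative window size and without the per-originator lock or reset handling:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WINDOW_SIZE 64	/* illustrative; batman-adv uses its own sizes */

struct seqno_window {
	uint64_t bits;		/* bit i set => (last_seqno - i) already seen */
	uint32_t last_seqno;
};

/* Returns true if seqno is new (and records it), false for duplicates
 * or packets too far outside the window.
 */
static bool seqno_window_accept(struct seqno_window *w, uint32_t seqno)
{
	int32_t diff = (int32_t)(seqno - w->last_seqno);

	if (diff >= WINDOW_SIZE || diff <= -WINDOW_SIZE)
		return false;			/* window protection */

	if (diff <= 0) {
		if (w->bits & (1ULL << -diff))
			return false;		/* duplicate */
		w->bits |= 1ULL << -diff;	/* old but unseen: mark it */
		return true;
	}

	w->bits <<= diff;			/* slide window forward */
	w->bits |= 1;
	w->last_seqno = seqno;
	return true;
}

int main(void)
{
	struct seqno_window w = { .bits = 1, .last_seqno = 10 };

	printf("%d %d %d\n",
	       seqno_window_accept(&w, 11),	/* 1: next in sequence */
	       seqno_window_accept(&w, 11),	/* 0: duplicate */
	       seqno_window_accept(&w, 9));	/* 1: old but not yet seen */
	return 0;
}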
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,36 +15,45 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_ROUTING_H_ #define _NET_BATMAN_ADV_ROUTING_H_ -void slide_own_bcast_window(struct hard_iface *hard_iface); -bool check_management_packet(struct sk_buff *skb, - struct hard_iface *hard_iface, - int header_len); -void update_route(struct bat_priv *bat_priv, struct orig_node *orig_node, - struct neigh_node *neigh_node); -int recv_icmp_packet(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_unicast_packet(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_ucast_frag_packet(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_vis_packet(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if); -int recv_roam_adv(struct sk_buff *skb, struct hard_iface *recv_if); -struct neigh_node *find_router(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const struct hard_iface *recv_if); -void bonding_candidate_del(struct orig_node *orig_node, - struct neigh_node *neigh_node); -void bonding_candidate_add(struct orig_node *orig_node, - struct neigh_node *neigh_node); -void bonding_save_primary(const struct orig_node *orig_node, - struct orig_node *orig_neigh_node, - const struct batman_ogm_packet *batman_ogm_packet); -int window_protected(struct bat_priv *bat_priv, int32_t seq_num_diff, - unsigned long *last_reset); +void batadv_slide_own_bcast_window(struct batadv_hard_iface *hard_iface); +bool batadv_check_management_packet(struct sk_buff *skb, + struct batadv_hard_iface *hard_iface, + int header_len); +void batadv_update_route(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node); +int batadv_recv_icmp_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_unicast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_ucast_frag_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_bcast_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_vis_packet(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_tt_query(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +int batadv_recv_roam_adv(struct sk_buff *skb, + struct batadv_hard_iface *recv_if); +struct batadv_neigh_node * +batadv_find_router(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const struct batadv_hard_iface *recv_if); +void batadv_bonding_candidate_del(struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node); +void batadv_bonding_candidate_add(struct batadv_orig_node *orig_node, + struct batadv_neigh_node *neigh_node); +void batadv_bonding_save_primary(const struct batadv_orig_node *orig_node, + struct batadv_orig_node *orig_neigh_node, + const struct batadv_ogm_packet + *batman_ogm_packet); +int batadv_window_protected(struct batadv_priv *bat_priv, int32_t seq_num_diff, + unsigned long *last_reset); #endif /* _NET_BATMAN_ADV_ROUTING_H_ */ diff --git a/net/batman-adv/send.c b/net/batman-adv/send.c index f47299f22c68..3b4b2daa3b3e 100644 --- a/net/batman-adv/send.c +++ b/net/batman-adv/send.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. 
contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -29,16 +27,18 @@ #include "gateway_common.h" #include "originator.h" -static void send_outstanding_bcast_packet(struct work_struct *work); +static void batadv_send_outstanding_bcast_packet(struct work_struct *work); /* send out an already prepared packet to the given address via the - * specified batman interface */ -int send_skb_packet(struct sk_buff *skb, struct hard_iface *hard_iface, - const uint8_t *dst_addr) + * specified batman interface + */ +int batadv_send_skb_packet(struct sk_buff *skb, + struct batadv_hard_iface *hard_iface, + const uint8_t *dst_addr) { struct ethhdr *ethhdr; - if (hard_iface->if_status != IF_ACTIVE) + if (hard_iface->if_status != BATADV_IF_ACTIVE) goto send_skb_err; if (unlikely(!hard_iface->net_dev)) @@ -51,7 +51,7 @@ int send_skb_packet(struct sk_buff *skb, struct hard_iface *hard_iface, } /* push to the ethernet header. */ - if (my_skb_head_push(skb, ETH_HLEN) < 0) + if (batadv_skb_head_push(skb, ETH_HLEN) < 0) goto send_skb_err; skb_reset_mac_header(skb); @@ -59,129 +59,57 @@ int send_skb_packet(struct sk_buff *skb, struct hard_iface *hard_iface, ethhdr = (struct ethhdr *)skb_mac_header(skb); memcpy(ethhdr->h_source, hard_iface->net_dev->dev_addr, ETH_ALEN); memcpy(ethhdr->h_dest, dst_addr, ETH_ALEN); - ethhdr->h_proto = __constant_htons(ETH_P_BATMAN); + ethhdr->h_proto = __constant_htons(BATADV_ETH_P_BATMAN); skb_set_network_header(skb, ETH_HLEN); skb->priority = TC_PRIO_CONTROL; - skb->protocol = __constant_htons(ETH_P_BATMAN); + skb->protocol = __constant_htons(BATADV_ETH_P_BATMAN); skb->dev = hard_iface->net_dev; /* dev_queue_xmit() returns a negative result on error. However on * congestion and traffic shaping, it drops and returns NET_XMIT_DROP - * (which is > 0). This will not be treated as an error. */ - + * (which is > 0). This will not be treated as an error. 
+ */ return dev_queue_xmit(skb); send_skb_err: kfree_skb(skb); return NET_XMIT_DROP; } -static void realloc_packet_buffer(struct hard_iface *hard_iface, - int new_len) +void batadv_schedule_bat_ogm(struct batadv_hard_iface *hard_iface) { - unsigned char *new_buff; - - new_buff = kmalloc(new_len, GFP_ATOMIC); - - /* keep old buffer if kmalloc should fail */ - if (new_buff) { - memcpy(new_buff, hard_iface->packet_buff, - BATMAN_OGM_HLEN); - - kfree(hard_iface->packet_buff); - hard_iface->packet_buff = new_buff; - hard_iface->packet_len = new_len; - } -} + struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); -/* when calling this function (hard_iface == primary_if) has to be true */ -static int prepare_packet_buffer(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) -{ - int new_len; - - new_len = BATMAN_OGM_HLEN + - tt_len((uint8_t)atomic_read(&bat_priv->tt_local_changes)); - - /* if we have too many changes for one packet don't send any - * and wait for the tt table request which will be fragmented */ - if (new_len > hard_iface->soft_iface->mtu) - new_len = BATMAN_OGM_HLEN; - - realloc_packet_buffer(hard_iface, new_len); - - atomic_set(&bat_priv->tt_crc, tt_local_crc(bat_priv)); - - /* reset the sending counter */ - atomic_set(&bat_priv->tt_ogm_append_cnt, TT_OGM_APPEND_MAX); - - return tt_changes_fill_buffer(bat_priv, - hard_iface->packet_buff + BATMAN_OGM_HLEN, - hard_iface->packet_len - BATMAN_OGM_HLEN); -} - -static int reset_packet_buffer(struct bat_priv *bat_priv, - struct hard_iface *hard_iface) -{ - realloc_packet_buffer(hard_iface, BATMAN_OGM_HLEN); - return 0; -} - -void schedule_bat_ogm(struct hard_iface *hard_iface) -{ - struct bat_priv *bat_priv = netdev_priv(hard_iface->soft_iface); - struct hard_iface *primary_if; - int tt_num_changes = -1; - - if ((hard_iface->if_status == IF_NOT_IN_USE) || - (hard_iface->if_status == IF_TO_BE_REMOVED)) + if ((hard_iface->if_status == BATADV_IF_NOT_IN_USE) || + (hard_iface->if_status == BATADV_IF_TO_BE_REMOVED)) return; - /** - * the interface gets activated here to avoid race conditions between + /* the interface gets activated here to avoid race conditions between * the moment of activating the interface in * hardif_activate_interface() where the originator mac is set and * outdated packets (especially uninitialized mac addresses) in the * packet queue */ - if (hard_iface->if_status == IF_TO_BE_ACTIVATED) - hard_iface->if_status = IF_ACTIVE; - - primary_if = primary_if_get_selected(bat_priv); - - if (hard_iface == primary_if) { - /* if at least one change happened */ - if (atomic_read(&bat_priv->tt_local_changes) > 0) { - tt_commit_changes(bat_priv); - tt_num_changes = prepare_packet_buffer(bat_priv, - hard_iface); - } - - /* if the changes have been sent often enough */ - if (!atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt)) - tt_num_changes = reset_packet_buffer(bat_priv, - hard_iface); - } - - if (primary_if) - hardif_free_ref(primary_if); + if (hard_iface->if_status == BATADV_IF_TO_BE_ACTIVATED) + hard_iface->if_status = BATADV_IF_ACTIVE; - bat_priv->bat_algo_ops->bat_ogm_schedule(hard_iface, tt_num_changes); + bat_priv->bat_algo_ops->bat_ogm_schedule(hard_iface); } -static void forw_packet_free(struct forw_packet *forw_packet) +static void batadv_forw_packet_free(struct batadv_forw_packet *forw_packet) { if (forw_packet->skb) kfree_skb(forw_packet->skb); if (forw_packet->if_incoming) - hardif_free_ref(forw_packet->if_incoming); + batadv_hardif_free_ref(forw_packet->if_incoming); kfree(forw_packet); } -static 
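batadv_send_skb_packet() above prepends a full Ethernet header before handing the frame to the slave device, and deliberately treats positive dev_queue_xmit() codes such as NET_XMIT_DROP as non-errors. A condensed kernel-style sketch of that sequence, using illustrative names and skb_cow_head() to guarantee writable headroom:

#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/string.h>

static int example_xmit_on(struct sk_buff *skb, struct net_device *dev,
			   const u8 *dst, __be16 proto)
{
	struct ethhdr *ethhdr;

	if (skb_cow_head(skb, ETH_HLEN) < 0) {	/* ensure writable headroom */
		kfree_skb(skb);
		return NET_XMIT_DROP;
	}

	skb_push(skb, ETH_HLEN);
	skb_reset_mac_header(skb);

	ethhdr = eth_hdr(skb);
	memcpy(ethhdr->h_source, dev->dev_addr, ETH_ALEN);
	memcpy(ethhdr->h_dest, dst, ETH_ALEN);
	ethhdr->h_proto = proto;

	skb->dev = dev;
	/* negative return => real error; >0 codes (e.g. NET_XMIT_DROP from a
	 * congested qdisc) are not treated as errors by the caller above.
	 */
	return dev_queue_xmit(skb);
}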
void _add_bcast_packet_to_list(struct bat_priv *bat_priv, - struct forw_packet *forw_packet, - unsigned long send_time) +static void +_batadv_add_bcast_packet_to_list(struct batadv_priv *bat_priv, + struct batadv_forw_packet *forw_packet, + unsigned long send_time) { INIT_HLIST_NODE(&forw_packet->list); @@ -192,8 +120,8 @@ static void _add_bcast_packet_to_list(struct bat_priv *bat_priv, /* start timer for this packet */ INIT_DELAYED_WORK(&forw_packet->delayed_work, - send_outstanding_bcast_packet); - queue_delayed_work(bat_event_workqueue, &forw_packet->delayed_work, + batadv_send_outstanding_bcast_packet); + queue_delayed_work(batadv_event_workqueue, &forw_packet->delayed_work, send_time); } @@ -204,21 +132,24 @@ static void _add_bcast_packet_to_list(struct bat_priv *bat_priv, * errors. * * The skb is not consumed, so the caller should make sure that the - * skb is freed. */ -int add_bcast_packet_to_list(struct bat_priv *bat_priv, - const struct sk_buff *skb, unsigned long delay) + * skb is freed. + */ +int batadv_add_bcast_packet_to_list(struct batadv_priv *bat_priv, + const struct sk_buff *skb, + unsigned long delay) { - struct hard_iface *primary_if = NULL; - struct forw_packet *forw_packet; - struct bcast_packet *bcast_packet; + struct batadv_hard_iface *primary_if = NULL; + struct batadv_forw_packet *forw_packet; + struct batadv_bcast_packet *bcast_packet; struct sk_buff *newskb; - if (!atomic_dec_not_zero(&bat_priv->bcast_queue_left)) { - bat_dbg(DBG_BATMAN, bat_priv, "bcast packet queue full\n"); + if (!batadv_atomic_dec_not_zero(&bat_priv->bcast_queue_left)) { + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "bcast packet queue full\n"); goto out; } - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out_and_inc; @@ -232,7 +163,7 @@ int add_bcast_packet_to_list(struct bat_priv *bat_priv, goto packet_free; /* as we have a copy now, it is safe to decrease the TTL */ - bcast_packet = (struct bcast_packet *)newskb->data; + bcast_packet = (struct batadv_bcast_packet *)newskb->data; bcast_packet->header.ttl--; skb_reset_mac_header(newskb); @@ -243,7 +174,7 @@ int add_bcast_packet_to_list(struct bat_priv *bat_priv, /* how often did we send the bcast packet ? 
*/ forw_packet->num_packets = 0; - _add_bcast_packet_to_list(bat_priv, forw_packet, delay); + _batadv_add_bcast_packet_to_list(bat_priv, forw_packet, delay); return NETDEV_TX_OK; packet_free: @@ -252,38 +183,43 @@ out_and_inc: atomic_inc(&bat_priv->bcast_queue_left); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return NETDEV_TX_BUSY; } -static void send_outstanding_bcast_packet(struct work_struct *work) +static void batadv_send_outstanding_bcast_packet(struct work_struct *work) { - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); - struct forw_packet *forw_packet = - container_of(delayed_work, struct forw_packet, delayed_work); + struct batadv_forw_packet *forw_packet; struct sk_buff *skb1; - struct net_device *soft_iface = forw_packet->if_incoming->soft_iface; - struct bat_priv *bat_priv = netdev_priv(soft_iface); + struct net_device *soft_iface; + struct batadv_priv *bat_priv; + + forw_packet = container_of(delayed_work, struct batadv_forw_packet, + delayed_work); + soft_iface = forw_packet->if_incoming->soft_iface; + bat_priv = netdev_priv(soft_iface); spin_lock_bh(&bat_priv->forw_bcast_list_lock); hlist_del(&forw_packet->list); spin_unlock_bh(&bat_priv->forw_bcast_list_lock); - if (atomic_read(&bat_priv->mesh_state) == MESH_DEACTIVATING) + if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_DEACTIVATING) goto out; /* rebroadcast packet */ rcu_read_lock(); - list_for_each_entry_rcu(hard_iface, &hardif_list, list) { + list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) { if (hard_iface->soft_iface != soft_iface) continue; /* send a copy of the saved skb */ skb1 = skb_clone(forw_packet->skb, GFP_ATOMIC); if (skb1) - send_skb_packet(skb1, hard_iface, broadcast_addr); + batadv_send_skb_packet(skb1, hard_iface, + batadv_broadcast_addr); } rcu_read_unlock(); @@ -291,72 +227,72 @@ static void send_outstanding_bcast_packet(struct work_struct *work) /* if we still have some more bcasts to send */ if (forw_packet->num_packets < 3) { - _add_bcast_packet_to_list(bat_priv, forw_packet, - msecs_to_jiffies(5)); + _batadv_add_bcast_packet_to_list(bat_priv, forw_packet, + msecs_to_jiffies(5)); return; } out: - forw_packet_free(forw_packet); + batadv_forw_packet_free(forw_packet); atomic_inc(&bat_priv->bcast_queue_left); } -void send_outstanding_bat_ogm_packet(struct work_struct *work) +void batadv_send_outstanding_bat_ogm_packet(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); - struct forw_packet *forw_packet = - container_of(delayed_work, struct forw_packet, delayed_work); - struct bat_priv *bat_priv; + struct batadv_forw_packet *forw_packet; + struct batadv_priv *bat_priv; + forw_packet = container_of(delayed_work, struct batadv_forw_packet, + delayed_work); bat_priv = netdev_priv(forw_packet->if_incoming->soft_iface); spin_lock_bh(&bat_priv->forw_bat_list_lock); hlist_del(&forw_packet->list); spin_unlock_bh(&bat_priv->forw_bat_list_lock); - if (atomic_read(&bat_priv->mesh_state) == MESH_DEACTIVATING) + if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_DEACTIVATING) goto out; bat_priv->bat_algo_ops->bat_ogm_emit(forw_packet); - /** - * we have to have at least one packet in the queue + /* we have to have at least one packet in the queue * to determine the queues wake up time unless we are * shutting down */ if (forw_packet->own) - schedule_bat_ogm(forw_packet->if_incoming); + 
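batadv_send_outstanding_bcast_packet() above recovers its queued packet descriptor from the work pointer via container_of() and reschedules itself until the broadcast has gone out three times (the num_packets < 3 check). A self-contained sketch of that deferred-transmit pattern; the descriptor name is illustrative and the system workqueue stands in for batadv_event_workqueue:

#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_forw_packet {
	struct delayed_work delayed_work;
	int num_packets;		/* how often it has been (re)sent */
};

static void example_resend(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct example_forw_packet *fp;

	fp = container_of(dwork, struct example_forw_packet, delayed_work);

	/* ... transmit one copy per slave interface here ... */

	if (++fp->num_packets < 3) {	/* rebroadcast, as above */
		schedule_delayed_work(&fp->delayed_work, msecs_to_jiffies(5));
		return;
	}
	kfree(fp);
}

static void example_queue(struct example_forw_packet *fp, unsigned long delay)
{
	INIT_DELAYED_WORK(&fp->delayed_work, example_resend);
	schedule_delayed_work(&fp->delayed_work, delay);
}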
batadv_schedule_bat_ogm(forw_packet->if_incoming); out: /* don't count own packet */ if (!forw_packet->own) atomic_inc(&bat_priv->batman_queue_left); - forw_packet_free(forw_packet); + batadv_forw_packet_free(forw_packet); } -void purge_outstanding_packets(struct bat_priv *bat_priv, - const struct hard_iface *hard_iface) +void +batadv_purge_outstanding_packets(struct batadv_priv *bat_priv, + const struct batadv_hard_iface *hard_iface) { - struct forw_packet *forw_packet; + struct batadv_forw_packet *forw_packet; struct hlist_node *tmp_node, *safe_tmp_node; bool pending; if (hard_iface) - bat_dbg(DBG_BATMAN, bat_priv, - "purge_outstanding_packets(): %s\n", - hard_iface->net_dev->name); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "purge_outstanding_packets(): %s\n", + hard_iface->net_dev->name); else - bat_dbg(DBG_BATMAN, bat_priv, - "purge_outstanding_packets()\n"); + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "purge_outstanding_packets()\n"); /* free bcast list */ spin_lock_bh(&bat_priv->forw_bcast_list_lock); hlist_for_each_entry_safe(forw_packet, tmp_node, safe_tmp_node, &bat_priv->forw_bcast_list, list) { - /** - * if purge_outstanding_packets() was called with an argument + /* if purge_outstanding_packets() was called with an argument * we delete only packets belonging to the given interface */ if ((hard_iface) && @@ -365,8 +301,7 @@ void purge_outstanding_packets(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->forw_bcast_list_lock); - /** - * send_outstanding_bcast_packet() will lock the list to + /* batadv_send_outstanding_bcast_packet() will lock the list to * delete the item from the list */ pending = cancel_delayed_work_sync(&forw_packet->delayed_work); @@ -374,7 +309,7 @@ void purge_outstanding_packets(struct bat_priv *bat_priv, if (pending) { hlist_del(&forw_packet->list); - forw_packet_free(forw_packet); + batadv_forw_packet_free(forw_packet); } } spin_unlock_bh(&bat_priv->forw_bcast_list_lock); @@ -384,8 +319,7 @@ void purge_outstanding_packets(struct bat_priv *bat_priv, hlist_for_each_entry_safe(forw_packet, tmp_node, safe_tmp_node, &bat_priv->forw_bat_list, list) { - /** - * if purge_outstanding_packets() was called with an argument + /* if purge_outstanding_packets() was called with an argument * we delete only packets belonging to the given interface */ if ((hard_iface) && @@ -394,8 +328,7 @@ void purge_outstanding_packets(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->forw_bat_list_lock); - /** - * send_outstanding_bat_packet() will lock the list to + /* send_outstanding_bat_packet() will lock the list to * delete the item from the list */ pending = cancel_delayed_work_sync(&forw_packet->delayed_work); @@ -403,7 +336,7 @@ void purge_outstanding_packets(struct bat_priv *bat_priv, if (pending) { hlist_del(&forw_packet->list); - forw_packet_free(forw_packet); + batadv_forw_packet_free(forw_packet); } } spin_unlock_bh(&bat_priv->forw_bat_list_lock); diff --git a/net/batman-adv/send.h b/net/batman-adv/send.h index 824ef06f9b01..643329b787ed 100644 --- a/net/batman-adv/send.h +++ b/net/batman-adv/send.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. 
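batadv_purge_outstanding_packets() above relies on the return value of cancel_delayed_work_sync(): only when the work was still pending does the purger own the descriptor and free it; otherwise the handler has already run (or is running) and cleans up itself. A minimal sketch of that ownership rule, with illustrative names:

#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_item {
	struct delayed_work dwork;
};

static void example_cancel_and_free(struct example_item *it)
{
	/* true => the work was pending and will never run, so we own it */
	if (cancel_delayed_work_sync(&it->dwork))
		kfree(it);
	/* false => the handler ran and is responsible for freeing it */
}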
contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,19 +15,21 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_SEND_H_ #define _NET_BATMAN_ADV_SEND_H_ -int send_skb_packet(struct sk_buff *skb, struct hard_iface *hard_iface, - const uint8_t *dst_addr); -void schedule_bat_ogm(struct hard_iface *hard_iface); -int add_bcast_packet_to_list(struct bat_priv *bat_priv, - const struct sk_buff *skb, unsigned long delay); -void send_outstanding_bat_ogm_packet(struct work_struct *work); -void purge_outstanding_packets(struct bat_priv *bat_priv, - const struct hard_iface *hard_iface); +int batadv_send_skb_packet(struct sk_buff *skb, + struct batadv_hard_iface *hard_iface, + const uint8_t *dst_addr); +void batadv_schedule_bat_ogm(struct batadv_hard_iface *hard_iface); +int batadv_add_bcast_packet_to_list(struct batadv_priv *bat_priv, + const struct sk_buff *skb, + unsigned long delay); +void batadv_send_outstanding_bat_ogm_packet(struct work_struct *work); +void +batadv_purge_outstanding_packets(struct batadv_priv *bat_priv, + const struct batadv_hard_iface *hard_iface); #endif /* _NET_BATMAN_ADV_SEND_H_ */ diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c index a0ec0e4ada4c..109ea2aae96c 100644 --- a/net/batman-adv/soft-interface.c +++ b/net/batman-adv/soft-interface.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -24,12 +22,12 @@ #include "hard-interface.h" #include "routing.h" #include "send.h" -#include "bat_debugfs.h" +#include "debugfs.h" #include "translation-table.h" #include "hash.h" #include "gateway_common.h" #include "gateway_client.h" -#include "bat_sysfs.h" +#include "sysfs.h" #include "originator.h" #include <linux/slab.h> #include <linux/ethtool.h> @@ -39,27 +37,33 @@ #include "bridge_loop_avoidance.h" -static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd); -static void bat_get_drvinfo(struct net_device *dev, - struct ethtool_drvinfo *info); -static u32 bat_get_msglevel(struct net_device *dev); -static void bat_set_msglevel(struct net_device *dev, u32 value); -static u32 bat_get_link(struct net_device *dev); - -static const struct ethtool_ops bat_ethtool_ops = { - .get_settings = bat_get_settings, - .get_drvinfo = bat_get_drvinfo, - .get_msglevel = bat_get_msglevel, - .set_msglevel = bat_set_msglevel, - .get_link = bat_get_link, +static int batadv_get_settings(struct net_device *dev, struct ethtool_cmd *cmd); +static void batadv_get_drvinfo(struct net_device *dev, + struct ethtool_drvinfo *info); +static u32 batadv_get_msglevel(struct net_device *dev); +static void batadv_set_msglevel(struct net_device *dev, u32 value); +static u32 batadv_get_link(struct net_device *dev); +static void batadv_get_strings(struct net_device *dev, u32 stringset, u8 *data); +static void batadv_get_ethtool_stats(struct net_device *dev, + struct ethtool_stats *stats, u64 *data); +static int batadv_get_sset_count(struct net_device *dev, int stringset); + +static const struct ethtool_ops batadv_ethtool_ops = { + .get_settings = batadv_get_settings, + .get_drvinfo = batadv_get_drvinfo, + .get_msglevel = batadv_get_msglevel, 
+ .set_msglevel = batadv_set_msglevel, + .get_link = batadv_get_link, + .get_strings = batadv_get_strings, + .get_ethtool_stats = batadv_get_ethtool_stats, + .get_sset_count = batadv_get_sset_count, }; -int my_skb_head_push(struct sk_buff *skb, unsigned int len) +int batadv_skb_head_push(struct sk_buff *skb, unsigned int len) { int result; - /** - * TODO: We must check if we can release all references to non-payload + /* TODO: We must check if we can release all references to non-payload * data using skb_header_release in our skbs to allow skb_cow_header to * work optimally. This means that those skbs are not allowed to read * or write any data which is before the current position of skb->data @@ -74,37 +78,37 @@ int my_skb_head_push(struct sk_buff *skb, unsigned int len) return 0; } -static int interface_open(struct net_device *dev) +static int batadv_interface_open(struct net_device *dev) { netif_start_queue(dev); return 0; } -static int interface_release(struct net_device *dev) +static int batadv_interface_release(struct net_device *dev) { netif_stop_queue(dev); return 0; } -static struct net_device_stats *interface_stats(struct net_device *dev) +static struct net_device_stats *batadv_interface_stats(struct net_device *dev) { - struct bat_priv *bat_priv = netdev_priv(dev); + struct batadv_priv *bat_priv = netdev_priv(dev); return &bat_priv->stats; } -static int interface_set_mac_addr(struct net_device *dev, void *p) +static int batadv_interface_set_mac_addr(struct net_device *dev, void *p) { - struct bat_priv *bat_priv = netdev_priv(dev); + struct batadv_priv *bat_priv = netdev_priv(dev); struct sockaddr *addr = p; if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL; /* only modify transtable if it has been initialized before */ - if (atomic_read(&bat_priv->mesh_state) == MESH_ACTIVE) { - tt_local_remove(bat_priv, dev->dev_addr, - "mac address changed", false); - tt_local_add(dev, addr->sa_data, NULL_IFINDEX); + if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_ACTIVE) { + batadv_tt_local_remove(bat_priv, dev->dev_addr, + "mac address changed", false); + batadv_tt_local_add(dev, addr->sa_data, BATADV_NULL_IFINDEX); } memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); @@ -112,10 +116,10 @@ static int interface_set_mac_addr(struct net_device *dev, void *p) return 0; } -static int interface_change_mtu(struct net_device *dev, int new_mtu) +static int batadv_interface_change_mtu(struct net_device *dev, int new_mtu) { /* check ranges */ - if ((new_mtu < 68) || (new_mtu > hardif_min_mtu(dev))) + if ((new_mtu < 68) || (new_mtu > batadv_hardif_min_mtu(dev))) return -EINVAL; dev->mtu = new_mtu; @@ -123,13 +127,15 @@ static int interface_change_mtu(struct net_device *dev, int new_mtu) return 0; } -static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) +static int batadv_interface_tx(struct sk_buff *skb, + struct net_device *soft_iface) { struct ethhdr *ethhdr = (struct ethhdr *)skb->data; - struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct hard_iface *primary_if = NULL; - struct bcast_packet *bcast_packet; + struct batadv_priv *bat_priv = netdev_priv(soft_iface); + struct batadv_hard_iface *primary_if = NULL; + struct batadv_bcast_packet *bcast_packet; struct vlan_ethhdr *vhdr; + __be16 ethertype = __constant_htons(BATADV_ETH_P_BATMAN); static const uint8_t stp_addr[ETH_ALEN] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x00}; unsigned int header_len = 0; @@ -137,7 +143,7 @@ static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) short vid 
__maybe_unused = -1; bool do_bcast = false; - if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE) + if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE) goto dropped; soft_iface->trans_start = jiffies; @@ -147,45 +153,47 @@ static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) vhdr = (struct vlan_ethhdr *)skb->data; vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK; - if (ntohs(vhdr->h_vlan_encapsulated_proto) != ETH_P_BATMAN) + if (vhdr->h_vlan_encapsulated_proto != ethertype) break; /* fall through */ - case ETH_P_BATMAN: + case BATADV_ETH_P_BATMAN: goto dropped; } - if (bla_tx(bat_priv, skb, vid)) + if (batadv_bla_tx(bat_priv, skb, vid)) goto dropped; /* Register the client MAC in the transtable */ - tt_local_add(soft_iface, ethhdr->h_source, skb->skb_iif); + batadv_tt_local_add(soft_iface, ethhdr->h_source, skb->skb_iif); /* don't accept stp packets. STP does not help in meshes. * better use the bridge loop avoidance ... */ - if (compare_eth(ethhdr->h_dest, stp_addr)) + if (batadv_compare_eth(ethhdr->h_dest, stp_addr)) goto dropped; if (is_multicast_ether_addr(ethhdr->h_dest)) { do_bcast = true; switch (atomic_read(&bat_priv->gw_mode)) { - case GW_MODE_SERVER: + case BATADV_GW_MODE_SERVER: /* gateway servers should not send dhcp - * requests into the mesh */ - ret = gw_is_dhcp_target(skb, &header_len); + * requests into the mesh + */ + ret = batadv_gw_is_dhcp_target(skb, &header_len); if (ret) goto dropped; break; - case GW_MODE_CLIENT: + case BATADV_GW_MODE_CLIENT: /* gateway clients should send dhcp requests - * via unicast to their gateway */ - ret = gw_is_dhcp_target(skb, &header_len); + * via unicast to their gateway + */ + ret = batadv_gw_is_dhcp_target(skb, &header_len); if (ret) do_bcast = false; break; - case GW_MODE_OFF: + case BATADV_GW_MODE_OFF: default: break; } @@ -193,22 +201,24 @@ static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) /* ethernet packet should be broadcasted */ if (do_bcast) { - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto dropped; - if (my_skb_head_push(skb, sizeof(*bcast_packet)) < 0) + if (batadv_skb_head_push(skb, sizeof(*bcast_packet)) < 0) goto dropped; - bcast_packet = (struct bcast_packet *)skb->data; - bcast_packet->header.version = COMPAT_VERSION; - bcast_packet->header.ttl = TTL; + bcast_packet = (struct batadv_bcast_packet *)skb->data; + bcast_packet->header.version = BATADV_COMPAT_VERSION; + bcast_packet->header.ttl = BATADV_TTL; /* batman packet type: broadcast */ - bcast_packet->header.packet_type = BAT_BCAST; + bcast_packet->header.packet_type = BATADV_BCAST; + bcast_packet->reserved = 0; /* hw address of first interface is the orig mac because only - * this mac is known throughout the mesh */ + * this mac is known throughout the mesh + */ memcpy(bcast_packet->orig, primary_if->net_dev->dev_addr, ETH_ALEN); @@ -216,21 +226,22 @@ static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface) bcast_packet->seqno = htonl(atomic_inc_return(&bat_priv->bcast_seqno)); - add_bcast_packet_to_list(bat_priv, skb, 1); + batadv_add_bcast_packet_to_list(bat_priv, skb, 1); /* a copy is stored in the bcast list, therefore removing - * the original skb. */ + * the original skb. 
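One of the small non-rename changes in batadv_interface_tx() above (mirrored in batadv_interface_rx() below) is that the VLAN-encapsulated protocol field is now compared in network byte order against a constant converted once with __constant_htons(), instead of calling ntohs() on every packet. A short sketch of the same check; 0x4305 is used as a stand-in for the BATMAN ethertype value:

#include <linux/if_vlan.h>
#include <linux/skbuff.h>
#include <linux/types.h>

#define EXAMPLE_ETH_P_BATMAN 0x4305	/* stand-in for BATADV_ETH_P_BATMAN */

static bool example_is_batman_frame(const struct sk_buff *skb)
{
	const struct vlan_ethhdr *vhdr = (const struct vlan_ethhdr *)skb->data;
	__be16 ethertype = htons(EXAMPLE_ETH_P_BATMAN);	/* converted once */

	return vhdr->h_vlan_encapsulated_proto == ethertype;
}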
+ */ kfree_skb(skb); /* unicast packet */ } else { - if (atomic_read(&bat_priv->gw_mode) != GW_MODE_OFF) { - ret = gw_out_of_range(bat_priv, skb, ethhdr); + if (atomic_read(&bat_priv->gw_mode) != BATADV_GW_MODE_OFF) { + ret = batadv_gw_out_of_range(bat_priv, skb, ethhdr); if (ret) goto dropped; } - ret = unicast_send_skb(skb, bat_priv); + ret = batadv_unicast_send_skb(skb, bat_priv); if (ret != 0) goto dropped_freed; } @@ -245,22 +256,23 @@ dropped_freed: bat_priv->stats.tx_dropped++; end: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return NETDEV_TX_OK; } -void interface_rx(struct net_device *soft_iface, - struct sk_buff *skb, struct hard_iface *recv_if, - int hdr_size) +void batadv_interface_rx(struct net_device *soft_iface, + struct sk_buff *skb, struct batadv_hard_iface *recv_if, + int hdr_size) { - struct bat_priv *bat_priv = netdev_priv(soft_iface); + struct batadv_priv *bat_priv = netdev_priv(soft_iface); struct ethhdr *ethhdr; struct vlan_ethhdr *vhdr; - struct batman_header *batadv_header = (struct batman_header *)skb->data; + struct batadv_header *batadv_header = (struct batadv_header *)skb->data; short vid __maybe_unused = -1; + __be16 ethertype = __constant_htons(BATADV_ETH_P_BATMAN); bool is_bcast; - is_bcast = (batadv_header->packet_type == BAT_BCAST); + is_bcast = (batadv_header->packet_type == BATADV_BCAST); /* check if enough space is available for pulling, and pull */ if (!pskb_may_pull(skb, hdr_size)) @@ -276,11 +288,11 @@ void interface_rx(struct net_device *soft_iface, vhdr = (struct vlan_ethhdr *)skb->data; vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK; - if (ntohs(vhdr->h_vlan_encapsulated_proto) != ETH_P_BATMAN) + if (vhdr->h_vlan_encapsulated_proto != ethertype) break; /* fall through */ - case ETH_P_BATMAN: + case BATADV_ETH_P_BATMAN: goto dropped; } @@ -291,22 +303,23 @@ void interface_rx(struct net_device *soft_iface, /* should not be necessary anymore as we use skb_pull_rcsum() * TODO: please verify this and remove this TODO - * -- Dec 21st 2009, Simon Wunderlich */ + * -- Dec 21st 2009, Simon Wunderlich + */ -/* skb->ip_summed = CHECKSUM_UNNECESSARY;*/ + /* skb->ip_summed = CHECKSUM_UNNECESSARY; */ bat_priv->stats.rx_packets++; bat_priv->stats.rx_bytes += skb->len + ETH_HLEN; soft_iface->last_rx = jiffies; - if (is_ap_isolated(bat_priv, ethhdr->h_source, ethhdr->h_dest)) + if (batadv_is_ap_isolated(bat_priv, ethhdr->h_source, ethhdr->h_dest)) goto dropped; /* Let the bridge loop avoidance check the packet. If will * not handle it, we can safely push it up. 
*/ - if (bla_rx(bat_priv, skb, vid, is_bcast)) + if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) goto out; netif_rx(skb); @@ -318,49 +331,50 @@ out: return; } -static const struct net_device_ops bat_netdev_ops = { - .ndo_open = interface_open, - .ndo_stop = interface_release, - .ndo_get_stats = interface_stats, - .ndo_set_mac_address = interface_set_mac_addr, - .ndo_change_mtu = interface_change_mtu, - .ndo_start_xmit = interface_tx, +static const struct net_device_ops batadv_netdev_ops = { + .ndo_open = batadv_interface_open, + .ndo_stop = batadv_interface_release, + .ndo_get_stats = batadv_interface_stats, + .ndo_set_mac_address = batadv_interface_set_mac_addr, + .ndo_change_mtu = batadv_interface_change_mtu, + .ndo_start_xmit = batadv_interface_tx, .ndo_validate_addr = eth_validate_addr }; -static void interface_setup(struct net_device *dev) +static void batadv_interface_setup(struct net_device *dev) { - struct bat_priv *priv = netdev_priv(dev); + struct batadv_priv *priv = netdev_priv(dev); ether_setup(dev); - dev->netdev_ops = &bat_netdev_ops; + dev->netdev_ops = &batadv_netdev_ops; dev->destructor = free_netdev; dev->tx_queue_len = 0; - /** - * can't call min_mtu, because the needed variables + /* can't call min_mtu, because the needed variables * have not been initialized yet */ dev->mtu = ETH_DATA_LEN; /* reserve more space in the skbuff for our header */ - dev->hard_header_len = BAT_HEADER_LEN; + dev->hard_header_len = BATADV_HEADER_LEN; /* generate random address */ eth_hw_addr_random(dev); - SET_ETHTOOL_OPS(dev, &bat_ethtool_ops); + SET_ETHTOOL_OPS(dev, &batadv_ethtool_ops); memset(priv, 0, sizeof(*priv)); } -struct net_device *softif_create(const char *name) +struct net_device *batadv_softif_create(const char *name) { struct net_device *soft_iface; - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; int ret; + size_t cnt_len = sizeof(uint64_t) * BATADV_CNT_NUM; - soft_iface = alloc_netdev(sizeof(*bat_priv), name, interface_setup); + soft_iface = alloc_netdev(sizeof(*bat_priv), name, + batadv_interface_setup); if (!soft_iface) goto out; @@ -378,18 +392,18 @@ struct net_device *softif_create(const char *name) atomic_set(&bat_priv->bonding, 0); atomic_set(&bat_priv->bridge_loop_avoidance, 0); atomic_set(&bat_priv->ap_isolation, 0); - atomic_set(&bat_priv->vis_mode, VIS_TYPE_CLIENT_UPDATE); - atomic_set(&bat_priv->gw_mode, GW_MODE_OFF); + atomic_set(&bat_priv->vis_mode, BATADV_VIS_TYPE_CLIENT_UPDATE); + atomic_set(&bat_priv->gw_mode, BATADV_GW_MODE_OFF); atomic_set(&bat_priv->gw_sel_class, 20); atomic_set(&bat_priv->gw_bandwidth, 41); atomic_set(&bat_priv->orig_interval, 1000); atomic_set(&bat_priv->hop_penalty, 30); atomic_set(&bat_priv->log_level, 0); atomic_set(&bat_priv->fragmentation, 1); - atomic_set(&bat_priv->bcast_queue_left, BCAST_QUEUE_LEN); - atomic_set(&bat_priv->batman_queue_left, BATMAN_QUEUE_LEN); + atomic_set(&bat_priv->bcast_queue_left, BATADV_BCAST_QUEUE_LEN); + atomic_set(&bat_priv->batman_queue_left, BATADV_BATMAN_QUEUE_LEN); - atomic_set(&bat_priv->mesh_state, MESH_INACTIVE); + atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE); atomic_set(&bat_priv->bcast_seqno, 1); atomic_set(&bat_priv->ttvn, 0); atomic_set(&bat_priv->tt_local_changes, 0); @@ -403,28 +417,34 @@ struct net_device *softif_create(const char *name) bat_priv->primary_if = NULL; bat_priv->num_ifaces = 0; - ret = bat_algo_select(bat_priv, bat_routing_algo); - if (ret < 0) + bat_priv->bat_counters = __alloc_percpu(cnt_len, __alignof__(uint64_t)); + if (!bat_priv->bat_counters) goto 
unreg_soft_iface; - ret = sysfs_add_meshif(soft_iface); + ret = batadv_algo_select(bat_priv, batadv_routing_algo); if (ret < 0) - goto unreg_soft_iface; + goto free_bat_counters; - ret = debugfs_add_meshif(soft_iface); + ret = batadv_sysfs_add_meshif(soft_iface); + if (ret < 0) + goto free_bat_counters; + + ret = batadv_debugfs_add_meshif(soft_iface); if (ret < 0) goto unreg_sysfs; - ret = mesh_init(soft_iface); + ret = batadv_mesh_init(soft_iface); if (ret < 0) goto unreg_debugfs; return soft_iface; unreg_debugfs: - debugfs_del_meshif(soft_iface); + batadv_debugfs_del_meshif(soft_iface); unreg_sysfs: - sysfs_del_meshif(soft_iface); + batadv_sysfs_del_meshif(soft_iface); +free_bat_counters: + free_percpu(bat_priv->bat_counters); unreg_soft_iface: unregister_netdevice(soft_iface); return NULL; @@ -435,24 +455,24 @@ out: return NULL; } -void softif_destroy(struct net_device *soft_iface) +void batadv_softif_destroy(struct net_device *soft_iface) { - debugfs_del_meshif(soft_iface); - sysfs_del_meshif(soft_iface); - mesh_free(soft_iface); + batadv_debugfs_del_meshif(soft_iface); + batadv_sysfs_del_meshif(soft_iface); + batadv_mesh_free(soft_iface); unregister_netdevice(soft_iface); } -int softif_is_valid(const struct net_device *net_dev) +int batadv_softif_is_valid(const struct net_device *net_dev) { - if (net_dev->netdev_ops->ndo_start_xmit == interface_tx) + if (net_dev->netdev_ops->ndo_start_xmit == batadv_interface_tx) return 1; return 0; } /* ethtool */ -static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) +static int batadv_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) { cmd->supported = 0; cmd->advertising = 0; @@ -468,25 +488,73 @@ static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) return 0; } -static void bat_get_drvinfo(struct net_device *dev, - struct ethtool_drvinfo *info) +static void batadv_get_drvinfo(struct net_device *dev, + struct ethtool_drvinfo *info) { strcpy(info->driver, "B.A.T.M.A.N. advanced"); - strcpy(info->version, SOURCE_VERSION); + strcpy(info->version, BATADV_SOURCE_VERSION); strcpy(info->fw_version, "N/A"); strcpy(info->bus_info, "batman"); } -static u32 bat_get_msglevel(struct net_device *dev) +static u32 batadv_get_msglevel(struct net_device *dev) { return -EOPNOTSUPP; } -static void bat_set_msglevel(struct net_device *dev, u32 value) +static void batadv_set_msglevel(struct net_device *dev, u32 value) { } -static u32 bat_get_link(struct net_device *dev) +static u32 batadv_get_link(struct net_device *dev) { return 1; } + +/* Inspired by drivers/net/ethernet/dlink/sundance.c:1702 + * Declare each description string in struct.name[] to get fixed sized buffer + * and compile time checking for strings longer than ETH_GSTRING_LEN. 
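Adding the per-CPU counter array to batadv_softif_create() above also extends the error unwind: each setup step that fails jumps to the label that releases everything acquired so far, in reverse order. A compilable userspace sketch of that goto-unwind idiom, with stand-in step names:

#include <stdio.h>

static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static int step_c(void) { return -1; }	/* pretend the last step fails */
static void undo_a(void) { puts("undo a"); }
static void undo_b(void) { puts("undo b"); }

static int example_setup(void)
{
	if (step_a() < 0)
		goto out;
	if (step_b() < 0)
		goto err_a;
	if (step_c() < 0)
		goto err_b;
	return 0;

err_b:
	undo_b();
err_a:
	undo_a();
out:
	return -1;
}

int main(void)
{
	return example_setup() ? 1 : 0;
}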
+ */ +static const struct { + const char name[ETH_GSTRING_LEN]; +} batadv_counters_strings[] = { + { "forward" }, + { "forward_bytes" }, + { "mgmt_tx" }, + { "mgmt_tx_bytes" }, + { "mgmt_rx" }, + { "mgmt_rx_bytes" }, + { "tt_request_tx" }, + { "tt_request_rx" }, + { "tt_response_tx" }, + { "tt_response_rx" }, + { "tt_roam_adv_tx" }, + { "tt_roam_adv_rx" }, +}; + +static void batadv_get_strings(struct net_device *dev, uint32_t stringset, + uint8_t *data) +{ + if (stringset == ETH_SS_STATS) + memcpy(data, batadv_counters_strings, + sizeof(batadv_counters_strings)); +} + +static void batadv_get_ethtool_stats(struct net_device *dev, + struct ethtool_stats *stats, + uint64_t *data) +{ + struct batadv_priv *bat_priv = netdev_priv(dev); + int i; + + for (i = 0; i < BATADV_CNT_NUM; i++) + data[i] = batadv_sum_counter(bat_priv, i); +} + +static int batadv_get_sset_count(struct net_device *dev, int stringset) +{ + if (stringset == ETH_SS_STATS) + return BATADV_CNT_NUM; + + return -EOPNOTSUPP; +} diff --git a/net/batman-adv/soft-interface.h b/net/batman-adv/soft-interface.h index 020300673884..852c683b06a1 100644 --- a/net/batman-adv/soft-interface.h +++ b/net/batman-adv/soft-interface.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner * @@ -16,18 +15,16 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_SOFT_INTERFACE_H_ #define _NET_BATMAN_ADV_SOFT_INTERFACE_H_ -int my_skb_head_push(struct sk_buff *skb, unsigned int len); -void interface_rx(struct net_device *soft_iface, - struct sk_buff *skb, struct hard_iface *recv_if, - int hdr_size); -struct net_device *softif_create(const char *name); -void softif_destroy(struct net_device *soft_iface); -int softif_is_valid(const struct net_device *net_dev); +int batadv_skb_head_push(struct sk_buff *skb, unsigned int len); +void batadv_interface_rx(struct net_device *soft_iface, struct sk_buff *skb, + struct batadv_hard_iface *recv_if, int hdr_size); +struct net_device *batadv_softif_create(const char *name); +void batadv_softif_destroy(struct net_device *soft_iface); +int batadv_softif_is_valid(const struct net_device *net_dev); #endif /* _NET_BATMAN_ADV_SOFT_INTERFACE_H_ */ diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c new file mode 100644 index 000000000000..66518c75c217 --- /dev/null +++ b/net/batman-adv/sysfs.c @@ -0,0 +1,787 @@ +/* Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: + * + * Marek Lindner + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
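The new ethtool hooks export per-CPU uint64_t counters: each counter is bumped locklessly on the local CPU and summed over all possible CPUs when ethtool reads it. batadv_sum_counter() itself is defined elsewhere in this series, so the helpers below are only an illustrative equivalent of the allocation done in batadv_softif_create() and the summing done in batadv_get_ethtool_stats():

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/types.h>

#define EXAMPLE_CNT_NUM 12	/* matches the number of strings listed above */

static u64 __percpu *example_alloc_counters(void)
{
	/* same call shape as in batadv_softif_create() above */
	return __alloc_percpu(sizeof(u64) * EXAMPLE_CNT_NUM,
			      __alignof__(u64));
}

static u64 example_sum_counter(u64 __percpu *counters, unsigned int idx)
{
	u64 sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(counters, cpu)[idx];

	return sum;
}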
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA + * 02110-1301, USA + */ + +#include "main.h" +#include "sysfs.h" +#include "translation-table.h" +#include "originator.h" +#include "hard-interface.h" +#include "gateway_common.h" +#include "gateway_client.h" +#include "vis.h" + +static struct net_device *batadv_kobj_to_netdev(struct kobject *obj) +{ + struct device *dev = container_of(obj->parent, struct device, kobj); + return to_net_dev(dev); +} + +static struct batadv_priv *batadv_kobj_to_batpriv(struct kobject *obj) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(obj); + return netdev_priv(net_dev); +} + +#define BATADV_UEV_TYPE_VAR "BATTYPE=" +#define BATADV_UEV_ACTION_VAR "BATACTION=" +#define BATADV_UEV_DATA_VAR "BATDATA=" + +static char *batadv_uev_action_str[] = { + "add", + "del", + "change" +}; + +static char *batadv_uev_type_str[] = { + "gw" +}; + +/* Use this, if you have customized show and store functions */ +#define BATADV_ATTR(_name, _mode, _show, _store) \ +struct batadv_attribute batadv_attr_##_name = { \ + .attr = {.name = __stringify(_name), \ + .mode = _mode }, \ + .show = _show, \ + .store = _store, \ +}; + +#define BATADV_ATTR_SIF_STORE_BOOL(_name, _post_func) \ +ssize_t batadv_store_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff, \ + size_t count) \ +{ \ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); \ + struct batadv_priv *bat_priv = netdev_priv(net_dev); \ + return __batadv_store_bool_attr(buff, count, _post_func, attr, \ + &bat_priv->_name, net_dev); \ +} + +#define BATADV_ATTR_SIF_SHOW_BOOL(_name) \ +ssize_t batadv_show_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff) \ +{ \ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); \ + return sprintf(buff, "%s\n", \ + atomic_read(&bat_priv->_name) == 0 ? 
\ + "disabled" : "enabled"); \ +} \ + +/* Use this, if you are going to turn a [name] in the soft-interface + * (bat_priv) on or off + */ +#define BATADV_ATTR_SIF_BOOL(_name, _mode, _post_func) \ + static BATADV_ATTR_SIF_STORE_BOOL(_name, _post_func) \ + static BATADV_ATTR_SIF_SHOW_BOOL(_name) \ + static BATADV_ATTR(_name, _mode, batadv_show_##_name, \ + batadv_store_##_name) + + +#define BATADV_ATTR_SIF_STORE_UINT(_name, _min, _max, _post_func) \ +ssize_t batadv_store_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff, \ + size_t count) \ +{ \ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); \ + struct batadv_priv *bat_priv = netdev_priv(net_dev); \ + return __batadv_store_uint_attr(buff, count, _min, _max, \ + _post_func, attr, \ + &bat_priv->_name, net_dev); \ +} + +#define BATADV_ATTR_SIF_SHOW_UINT(_name) \ +ssize_t batadv_show_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff) \ +{ \ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); \ + return sprintf(buff, "%i\n", atomic_read(&bat_priv->_name)); \ +} \ + +/* Use this, if you are going to set [name] in the soft-interface + * (bat_priv) to an unsigned integer value + */ +#define BATADV_ATTR_SIF_UINT(_name, _mode, _min, _max, _post_func) \ + static BATADV_ATTR_SIF_STORE_UINT(_name, _min, _max, _post_func)\ + static BATADV_ATTR_SIF_SHOW_UINT(_name) \ + static BATADV_ATTR(_name, _mode, batadv_show_##_name, \ + batadv_store_##_name) + + +#define BATADV_ATTR_HIF_STORE_UINT(_name, _min, _max, _post_func) \ +ssize_t batadv_store_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff, \ + size_t count) \ +{ \ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); \ + struct batadv_hard_iface *hard_iface; \ + ssize_t length; \ + \ + hard_iface = batadv_hardif_get_by_netdev(net_dev); \ + if (!hard_iface) \ + return 0; \ + \ + length = __batadv_store_uint_attr(buff, count, _min, _max, \ + _post_func, attr, \ + &hard_iface->_name, net_dev); \ + \ + batadv_hardif_free_ref(hard_iface); \ + return length; \ +} + +#define BATADV_ATTR_HIF_SHOW_UINT(_name) \ +ssize_t batadv_show_##_name(struct kobject *kobj, \ + struct attribute *attr, char *buff) \ +{ \ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); \ + struct batadv_hard_iface *hard_iface; \ + ssize_t length; \ + \ + hard_iface = batadv_hardif_get_by_netdev(net_dev); \ + if (!hard_iface) \ + return 0; \ + \ + length = sprintf(buff, "%i\n", atomic_read(&hard_iface->_name));\ + \ + batadv_hardif_free_ref(hard_iface); \ + return length; \ +} + +/* Use this, if you are going to set [name] in hard_iface to an + * unsigned integer value + */ +#define BATADV_ATTR_HIF_UINT(_name, _mode, _min, _max, _post_func) \ + static BATADV_ATTR_HIF_STORE_UINT(_name, _min, _max, _post_func)\ + static BATADV_ATTR_HIF_SHOW_UINT(_name) \ + static BATADV_ATTR(_name, _mode, batadv_show_##_name, \ + batadv_store_##_name) + + +static int batadv_store_bool_attr(char *buff, size_t count, + struct net_device *net_dev, + const char *attr_name, atomic_t *attr) +{ + int enabled = -1; + + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + if ((strncmp(buff, "1", 2) == 0) || + (strncmp(buff, "enable", 7) == 0) || + (strncmp(buff, "enabled", 8) == 0)) + enabled = 1; + + if ((strncmp(buff, "0", 2) == 0) || + (strncmp(buff, "disable", 8) == 0) || + (strncmp(buff, "disabled", 9) == 0)) + enabled = 0; + + if (enabled < 0) { + batadv_info(net_dev, "%s: Invalid parameter received: %s\n", + attr_name, buff); + return -EINVAL; + } + + if 
(atomic_read(attr) == enabled) + return count; + + batadv_info(net_dev, "%s: Changing from: %s to: %s\n", attr_name, + atomic_read(attr) == 1 ? "enabled" : "disabled", + enabled == 1 ? "enabled" : "disabled"); + + atomic_set(attr, (unsigned int)enabled); + return count; +} + +static inline ssize_t +__batadv_store_bool_attr(char *buff, size_t count, + void (*post_func)(struct net_device *), + struct attribute *attr, + atomic_t *attr_store, struct net_device *net_dev) +{ + int ret; + + ret = batadv_store_bool_attr(buff, count, net_dev, attr->name, + attr_store); + if (post_func && ret) + post_func(net_dev); + + return ret; +} + +static int batadv_store_uint_attr(const char *buff, size_t count, + struct net_device *net_dev, + const char *attr_name, + unsigned int min, unsigned int max, + atomic_t *attr) +{ + unsigned long uint_val; + int ret; + + ret = kstrtoul(buff, 10, &uint_val); + if (ret) { + batadv_info(net_dev, "%s: Invalid parameter received: %s\n", + attr_name, buff); + return -EINVAL; + } + + if (uint_val < min) { + batadv_info(net_dev, "%s: Value is too small: %lu min: %u\n", + attr_name, uint_val, min); + return -EINVAL; + } + + if (uint_val > max) { + batadv_info(net_dev, "%s: Value is too big: %lu max: %u\n", + attr_name, uint_val, max); + return -EINVAL; + } + + if (atomic_read(attr) == uint_val) + return count; + + batadv_info(net_dev, "%s: Changing from: %i to: %lu\n", + attr_name, atomic_read(attr), uint_val); + + atomic_set(attr, uint_val); + return count; +} + +static inline ssize_t +__batadv_store_uint_attr(const char *buff, size_t count, + int min, int max, + void (*post_func)(struct net_device *), + const struct attribute *attr, + atomic_t *attr_store, struct net_device *net_dev) +{ + int ret; + + ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max, + attr_store); + if (post_func && ret) + post_func(net_dev); + + return ret; +} + +static ssize_t batadv_show_vis_mode(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); + int vis_mode = atomic_read(&bat_priv->vis_mode); + const char *mode; + + if (vis_mode == BATADV_VIS_TYPE_CLIENT_UPDATE) + mode = "client"; + else + mode = "server"; + + return sprintf(buff, "%s\n", mode); +} + +static ssize_t batadv_store_vis_mode(struct kobject *kobj, + struct attribute *attr, char *buff, + size_t count) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + struct batadv_priv *bat_priv = netdev_priv(net_dev); + unsigned long val; + int ret, vis_mode_tmp = -1; + const char *old_mode, *new_mode; + + ret = kstrtoul(buff, 10, &val); + + if (((count == 2) && (!ret) && + (val == BATADV_VIS_TYPE_CLIENT_UPDATE)) || + (strncmp(buff, "client", 6) == 0) || + (strncmp(buff, "off", 3) == 0)) + vis_mode_tmp = BATADV_VIS_TYPE_CLIENT_UPDATE; + + if (((count == 2) && (!ret) && + (val == BATADV_VIS_TYPE_SERVER_SYNC)) || + (strncmp(buff, "server", 6) == 0)) + vis_mode_tmp = BATADV_VIS_TYPE_SERVER_SYNC; + + if (vis_mode_tmp < 0) { + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + batadv_info(net_dev, + "Invalid parameter for 'vis mode' setting received: %s\n", + buff); + return -EINVAL; + } + + if (atomic_read(&bat_priv->vis_mode) == vis_mode_tmp) + return count; + + if (atomic_read(&bat_priv->vis_mode) == BATADV_VIS_TYPE_CLIENT_UPDATE) + old_mode = "client"; + else + old_mode = "server"; + + if (vis_mode_tmp == BATADV_VIS_TYPE_CLIENT_UPDATE) + new_mode = "client"; + else + new_mode = "server"; + + batadv_info(net_dev, "Changing vis mode from: 
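batadv_store_uint_attr() above is the common store path for the numeric sysfs knobs: parse with kstrtoul(), reject values outside [min, max], and only log and update when the value actually changes. A userspace re-creation of the bounded parse, with strtoul() standing in for kstrtoul() and arbitrary example bounds:

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_bounded_uint(const char *buff, unsigned long min,
			      unsigned long max, unsigned long *out)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(buff, &end, 10);
	if (errno || end == buff || (*end != '\0' && *end != '\n'))
		return -EINVAL;
	if (val < min || val > max)
		return -EINVAL;

	*out = val;
	return 0;
}

int main(void)
{
	unsigned long v;

	printf("%d\n", parse_bounded_uint("1000\n", 40, INT_MAX, &v)); /* 0 */
	printf("%d\n", parse_bounded_uint("abc", 40, INT_MAX, &v));    /* -EINVAL */
	printf("%d\n", parse_bounded_uint("7", 40, INT_MAX, &v));      /* -EINVAL: too small */
	return 0;
}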
%s to: %s\n", old_mode, + new_mode); + + atomic_set(&bat_priv->vis_mode, (unsigned int)vis_mode_tmp); + return count; +} + +static ssize_t batadv_show_bat_algo(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); + return sprintf(buff, "%s\n", bat_priv->bat_algo_ops->name); +} + +static void batadv_post_gw_deselect(struct net_device *net_dev) +{ + struct batadv_priv *bat_priv = netdev_priv(net_dev); + batadv_gw_deselect(bat_priv); +} + +static ssize_t batadv_show_gw_mode(struct kobject *kobj, struct attribute *attr, + char *buff) +{ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); + int bytes_written; + + switch (atomic_read(&bat_priv->gw_mode)) { + case BATADV_GW_MODE_CLIENT: + bytes_written = sprintf(buff, "%s\n", + BATADV_GW_MODE_CLIENT_NAME); + break; + case BATADV_GW_MODE_SERVER: + bytes_written = sprintf(buff, "%s\n", + BATADV_GW_MODE_SERVER_NAME); + break; + default: + bytes_written = sprintf(buff, "%s\n", + BATADV_GW_MODE_OFF_NAME); + break; + } + + return bytes_written; +} + +static ssize_t batadv_store_gw_mode(struct kobject *kobj, + struct attribute *attr, char *buff, + size_t count) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + struct batadv_priv *bat_priv = netdev_priv(net_dev); + char *curr_gw_mode_str; + int gw_mode_tmp = -1; + + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + if (strncmp(buff, BATADV_GW_MODE_OFF_NAME, + strlen(BATADV_GW_MODE_OFF_NAME)) == 0) + gw_mode_tmp = BATADV_GW_MODE_OFF; + + if (strncmp(buff, BATADV_GW_MODE_CLIENT_NAME, + strlen(BATADV_GW_MODE_CLIENT_NAME)) == 0) + gw_mode_tmp = BATADV_GW_MODE_CLIENT; + + if (strncmp(buff, BATADV_GW_MODE_SERVER_NAME, + strlen(BATADV_GW_MODE_SERVER_NAME)) == 0) + gw_mode_tmp = BATADV_GW_MODE_SERVER; + + if (gw_mode_tmp < 0) { + batadv_info(net_dev, + "Invalid parameter for 'gw mode' setting received: %s\n", + buff); + return -EINVAL; + } + + if (atomic_read(&bat_priv->gw_mode) == gw_mode_tmp) + return count; + + switch (atomic_read(&bat_priv->gw_mode)) { + case BATADV_GW_MODE_CLIENT: + curr_gw_mode_str = BATADV_GW_MODE_CLIENT_NAME; + break; + case BATADV_GW_MODE_SERVER: + curr_gw_mode_str = BATADV_GW_MODE_SERVER_NAME; + break; + default: + curr_gw_mode_str = BATADV_GW_MODE_OFF_NAME; + break; + } + + batadv_info(net_dev, "Changing gw mode from: %s to: %s\n", + curr_gw_mode_str, buff); + + batadv_gw_deselect(bat_priv); + atomic_set(&bat_priv->gw_mode, (unsigned int)gw_mode_tmp); + return count; +} + +static ssize_t batadv_show_gw_bwidth(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct batadv_priv *bat_priv = batadv_kobj_to_batpriv(kobj); + int down, up; + int gw_bandwidth = atomic_read(&bat_priv->gw_bandwidth); + + batadv_gw_bandwidth_to_kbit(gw_bandwidth, &down, &up); + return sprintf(buff, "%i%s/%i%s\n", + (down > 2048 ? down / 1024 : down), + (down > 2048 ? "MBit" : "KBit"), + (up > 2048 ? up / 1024 : up), + (up > 2048 ? 
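batadv_show_gw_bwidth() prints the stored kbit/s pair and switches the unit to MBit once a value exceeds 2048 kbit/s. The same formatting as a standalone program:

#include <stdio.h>

static void print_bandwidth(int down_kbit, int up_kbit)
{
	printf("%i%s/%i%s\n",
	       down_kbit > 2048 ? down_kbit / 1024 : down_kbit,
	       down_kbit > 2048 ? "MBit" : "KBit",
	       up_kbit > 2048 ? up_kbit / 1024 : up_kbit,
	       up_kbit > 2048 ? "MBit" : "KBit");
}

int main(void)
{
	print_bandwidth(41943, 5120);	/* prints "40MBit/5MBit" */
	print_bandwidth(2048, 256);	/* prints "2048KBit/256KBit" */
	return 0;
}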
"MBit" : "KBit")); +} + +static ssize_t batadv_store_gw_bwidth(struct kobject *kobj, + struct attribute *attr, char *buff, + size_t count) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + return batadv_gw_bandwidth_set(net_dev, buff, count); +} + +BATADV_ATTR_SIF_BOOL(aggregated_ogms, S_IRUGO | S_IWUSR, NULL); +BATADV_ATTR_SIF_BOOL(bonding, S_IRUGO | S_IWUSR, NULL); +#ifdef CONFIG_BATMAN_ADV_BLA +BATADV_ATTR_SIF_BOOL(bridge_loop_avoidance, S_IRUGO | S_IWUSR, NULL); +#endif +BATADV_ATTR_SIF_BOOL(fragmentation, S_IRUGO | S_IWUSR, batadv_update_min_mtu); +BATADV_ATTR_SIF_BOOL(ap_isolation, S_IRUGO | S_IWUSR, NULL); +static BATADV_ATTR(vis_mode, S_IRUGO | S_IWUSR, batadv_show_vis_mode, + batadv_store_vis_mode); +static BATADV_ATTR(routing_algo, S_IRUGO, batadv_show_bat_algo, NULL); +static BATADV_ATTR(gw_mode, S_IRUGO | S_IWUSR, batadv_show_gw_mode, + batadv_store_gw_mode); +BATADV_ATTR_SIF_UINT(orig_interval, S_IRUGO | S_IWUSR, 2 * BATADV_JITTER, + INT_MAX, NULL); +BATADV_ATTR_SIF_UINT(hop_penalty, S_IRUGO | S_IWUSR, 0, BATADV_TQ_MAX_VALUE, + NULL); +BATADV_ATTR_SIF_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, BATADV_TQ_MAX_VALUE, + batadv_post_gw_deselect); +static BATADV_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, batadv_show_gw_bwidth, + batadv_store_gw_bwidth); +#ifdef CONFIG_BATMAN_ADV_DEBUG +BATADV_ATTR_SIF_UINT(log_level, S_IRUGO | S_IWUSR, 0, BATADV_DBG_ALL, NULL); +#endif + +static struct batadv_attribute *batadv_mesh_attrs[] = { + &batadv_attr_aggregated_ogms, + &batadv_attr_bonding, +#ifdef CONFIG_BATMAN_ADV_BLA + &batadv_attr_bridge_loop_avoidance, +#endif + &batadv_attr_fragmentation, + &batadv_attr_ap_isolation, + &batadv_attr_vis_mode, + &batadv_attr_routing_algo, + &batadv_attr_gw_mode, + &batadv_attr_orig_interval, + &batadv_attr_hop_penalty, + &batadv_attr_gw_sel_class, + &batadv_attr_gw_bandwidth, +#ifdef CONFIG_BATMAN_ADV_DEBUG + &batadv_attr_log_level, +#endif + NULL, +}; + +int batadv_sysfs_add_meshif(struct net_device *dev) +{ + struct kobject *batif_kobject = &dev->dev.kobj; + struct batadv_priv *bat_priv = netdev_priv(dev); + struct batadv_attribute **bat_attr; + int err; + + bat_priv->mesh_obj = kobject_create_and_add(BATADV_SYSFS_IF_MESH_SUBDIR, + batif_kobject); + if (!bat_priv->mesh_obj) { + batadv_err(dev, "Can't add sysfs directory: %s/%s\n", dev->name, + BATADV_SYSFS_IF_MESH_SUBDIR); + goto out; + } + + for (bat_attr = batadv_mesh_attrs; *bat_attr; ++bat_attr) { + err = sysfs_create_file(bat_priv->mesh_obj, + &((*bat_attr)->attr)); + if (err) { + batadv_err(dev, "Can't add sysfs file: %s/%s/%s\n", + dev->name, BATADV_SYSFS_IF_MESH_SUBDIR, + ((*bat_attr)->attr).name); + goto rem_attr; + } + } + + return 0; + +rem_attr: + for (bat_attr = batadv_mesh_attrs; *bat_attr; ++bat_attr) + sysfs_remove_file(bat_priv->mesh_obj, &((*bat_attr)->attr)); + + kobject_put(bat_priv->mesh_obj); + bat_priv->mesh_obj = NULL; +out: + return -ENOMEM; +} + +void batadv_sysfs_del_meshif(struct net_device *dev) +{ + struct batadv_priv *bat_priv = netdev_priv(dev); + struct batadv_attribute **bat_attr; + + for (bat_attr = batadv_mesh_attrs; *bat_attr; ++bat_attr) + sysfs_remove_file(bat_priv->mesh_obj, &((*bat_attr)->attr)); + + kobject_put(bat_priv->mesh_obj); + bat_priv->mesh_obj = NULL; +} + +static ssize_t batadv_show_mesh_iface(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + struct batadv_hard_iface *hard_iface; + ssize_t length; + const char 
*ifname; + + hard_iface = batadv_hardif_get_by_netdev(net_dev); + if (!hard_iface) + return 0; + + if (hard_iface->if_status == BATADV_IF_NOT_IN_USE) + ifname = "none"; + else + ifname = hard_iface->soft_iface->name; + + length = sprintf(buff, "%s\n", ifname); + + batadv_hardif_free_ref(hard_iface); + + return length; +} + +static ssize_t batadv_store_mesh_iface(struct kobject *kobj, + struct attribute *attr, char *buff, + size_t count) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + struct batadv_hard_iface *hard_iface; + int status_tmp = -1; + int ret = count; + + hard_iface = batadv_hardif_get_by_netdev(net_dev); + if (!hard_iface) + return count; + + if (buff[count - 1] == '\n') + buff[count - 1] = '\0'; + + if (strlen(buff) >= IFNAMSIZ) { + pr_err("Invalid parameter for 'mesh_iface' setting received: interface name too long '%s'\n", + buff); + batadv_hardif_free_ref(hard_iface); + return -EINVAL; + } + + if (strncmp(buff, "none", 4) == 0) + status_tmp = BATADV_IF_NOT_IN_USE; + else + status_tmp = BATADV_IF_I_WANT_YOU; + + if (hard_iface->if_status == status_tmp) + goto out; + + if ((hard_iface->soft_iface) && + (strncmp(hard_iface->soft_iface->name, buff, IFNAMSIZ) == 0)) + goto out; + + if (!rtnl_trylock()) { + ret = -ERESTARTSYS; + goto out; + } + + if (status_tmp == BATADV_IF_NOT_IN_USE) { + batadv_hardif_disable_interface(hard_iface); + goto unlock; + } + + /* if the interface already is in use */ + if (hard_iface->if_status != BATADV_IF_NOT_IN_USE) + batadv_hardif_disable_interface(hard_iface); + + ret = batadv_hardif_enable_interface(hard_iface, buff); + +unlock: + rtnl_unlock(); +out: + batadv_hardif_free_ref(hard_iface); + return ret; +} + +static ssize_t batadv_show_iface_status(struct kobject *kobj, + struct attribute *attr, char *buff) +{ + struct net_device *net_dev = batadv_kobj_to_netdev(kobj); + struct batadv_hard_iface *hard_iface; + ssize_t length; + + hard_iface = batadv_hardif_get_by_netdev(net_dev); + if (!hard_iface) + return 0; + + switch (hard_iface->if_status) { + case BATADV_IF_TO_BE_REMOVED: + length = sprintf(buff, "disabling\n"); + break; + case BATADV_IF_INACTIVE: + length = sprintf(buff, "inactive\n"); + break; + case BATADV_IF_ACTIVE: + length = sprintf(buff, "active\n"); + break; + case BATADV_IF_TO_BE_ACTIVATED: + length = sprintf(buff, "enabling\n"); + break; + case BATADV_IF_NOT_IN_USE: + default: + length = sprintf(buff, "not in use\n"); + break; + } + + batadv_hardif_free_ref(hard_iface); + + return length; +} + +static BATADV_ATTR(mesh_iface, S_IRUGO | S_IWUSR, batadv_show_mesh_iface, + batadv_store_mesh_iface); +static BATADV_ATTR(iface_status, S_IRUGO, batadv_show_iface_status, NULL); + +static struct batadv_attribute *batadv_batman_attrs[] = { + &batadv_attr_mesh_iface, + &batadv_attr_iface_status, + NULL, +}; + +int batadv_sysfs_add_hardif(struct kobject **hardif_obj, struct net_device *dev) +{ + struct kobject *hardif_kobject = &dev->dev.kobj; + struct batadv_attribute **bat_attr; + int err; + + *hardif_obj = kobject_create_and_add(BATADV_SYSFS_IF_BAT_SUBDIR, + hardif_kobject); + + if (!*hardif_obj) { + batadv_err(dev, "Can't add sysfs directory: %s/%s\n", dev->name, + BATADV_SYSFS_IF_BAT_SUBDIR); + goto out; + } + + for (bat_attr = batadv_batman_attrs; *bat_attr; ++bat_attr) { + err = sysfs_create_file(*hardif_obj, &((*bat_attr)->attr)); + if (err) { + batadv_err(dev, "Can't add sysfs file: %s/%s/%s\n", + dev->name, BATADV_SYSFS_IF_BAT_SUBDIR, + ((*bat_attr)->attr).name); + goto rem_attr; + } + } + + return 0; + +rem_attr: + 
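
The attribute tables registered above are ordinary sysfs files, so the store handlers shown earlier (batadv_store_mesh_iface, batadv_store_gw_mode and friends) are driven by plain writes from userspace. A minimal sketch of that usage, assuming a hard interface named eth0, a soft interface named bat0, the standard /sys/class/net layout and the usual "off"/"client"/"server" gateway mode strings; none of these names are taken from this patch:

/* sysfs_demo.c - illustrative only; interface names and mode strings are assumptions */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	/* the kernel store handlers strip a trailing '\n' themselves */
	fprintf(f, "%s\n", val);
	return fclose(f);
}

static int sysfs_read(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f))
		buf[0] = '\0';
	return fclose(f);
}

int main(void)
{
	char status[64];

	/* attach eth0 to the bat0 mesh: handled by batadv_store_mesh_iface();
	 * may fail with ERESTARTSYS when rtnl is contended, a real tool would retry
	 */
	if (sysfs_write("/sys/class/net/eth0/batman_adv/mesh_iface", "bat0"))
		fprintf(stderr, "mesh_iface write failed (is batman-adv loaded?)\n");

	/* "active", "inactive", ...: printed by batadv_show_iface_status() */
	sysfs_read("/sys/class/net/eth0/batman_adv/iface_status",
		   status, sizeof(status));
	printf("eth0: %s", status);

	/* gateway mode of the soft interface: batadv_store_gw_mode() */
	sysfs_write("/sys/class/net/bat0/mesh/gw_mode", "client");
	return 0;
}

Writing "none" back to mesh_iface detaches the interface again, which is the BATADV_IF_NOT_IN_USE branch of batadv_store_mesh_iface() above.
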
for (bat_attr = batadv_batman_attrs; *bat_attr; ++bat_attr) + sysfs_remove_file(*hardif_obj, &((*bat_attr)->attr)); +out: + return -ENOMEM; +} + +void batadv_sysfs_del_hardif(struct kobject **hardif_obj) +{ + kobject_put(*hardif_obj); + *hardif_obj = NULL; +} + +int batadv_throw_uevent(struct batadv_priv *bat_priv, enum batadv_uev_type type, + enum batadv_uev_action action, const char *data) +{ + int ret = -ENOMEM; + struct batadv_hard_iface *primary_if = NULL; + struct kobject *bat_kobj; + char *uevent_env[4] = { NULL, NULL, NULL, NULL }; + + primary_if = batadv_primary_if_get_selected(bat_priv); + if (!primary_if) + goto out; + + bat_kobj = &primary_if->soft_iface->dev.kobj; + + uevent_env[0] = kmalloc(strlen(BATADV_UEV_TYPE_VAR) + + strlen(batadv_uev_type_str[type]) + 1, + GFP_ATOMIC); + if (!uevent_env[0]) + goto out; + + sprintf(uevent_env[0], "%s%s", BATADV_UEV_TYPE_VAR, + batadv_uev_type_str[type]); + + uevent_env[1] = kmalloc(strlen(BATADV_UEV_ACTION_VAR) + + strlen(batadv_uev_action_str[action]) + 1, + GFP_ATOMIC); + if (!uevent_env[1]) + goto out; + + sprintf(uevent_env[1], "%s%s", BATADV_UEV_ACTION_VAR, + batadv_uev_action_str[action]); + + /* If the event is DEL, ignore the data field */ + if (action != BATADV_UEV_DEL) { + uevent_env[2] = kmalloc(strlen(BATADV_UEV_DATA_VAR) + + strlen(data) + 1, GFP_ATOMIC); + if (!uevent_env[2]) + goto out; + + sprintf(uevent_env[2], "%s%s", BATADV_UEV_DATA_VAR, data); + } + + ret = kobject_uevent_env(bat_kobj, KOBJ_CHANGE, uevent_env); +out: + kfree(uevent_env[0]); + kfree(uevent_env[1]); + kfree(uevent_env[2]); + + if (primary_if) + batadv_hardif_free_ref(primary_if); + + if (ret) + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Impossible to send uevent for (%s,%s,%s) event (err: %d)\n", + batadv_uev_type_str[type], + batadv_uev_action_str[action], + (action == BATADV_UEV_DEL ? "NULL" : data), ret); + return ret; +} diff --git a/net/batman-adv/bat_sysfs.h b/net/batman-adv/sysfs.h index fece77ae586e..3fd1412b0620 100644 --- a/net/batman-adv/bat_sysfs.h +++ b/net/batman-adv/sysfs.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2012 B.A.T.M.A.N. 
contributors: * * Marek Lindner * @@ -16,17 +15,15 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ - #ifndef _NET_BATMAN_ADV_SYSFS_H_ #define _NET_BATMAN_ADV_SYSFS_H_ -#define SYSFS_IF_MESH_SUBDIR "mesh" -#define SYSFS_IF_BAT_SUBDIR "batman_adv" +#define BATADV_SYSFS_IF_MESH_SUBDIR "mesh" +#define BATADV_SYSFS_IF_BAT_SUBDIR "batman_adv" -struct bat_attribute { +struct batadv_attribute { struct attribute attr; ssize_t (*show)(struct kobject *kobj, struct attribute *attr, char *buf); @@ -34,11 +31,12 @@ struct bat_attribute { char *buf, size_t count); }; -int sysfs_add_meshif(struct net_device *dev); -void sysfs_del_meshif(struct net_device *dev); -int sysfs_add_hardif(struct kobject **hardif_obj, struct net_device *dev); -void sysfs_del_hardif(struct kobject **hardif_obj); -int throw_uevent(struct bat_priv *bat_priv, enum uev_type type, - enum uev_action action, const char *data); +int batadv_sysfs_add_meshif(struct net_device *dev); +void batadv_sysfs_del_meshif(struct net_device *dev); +int batadv_sysfs_add_hardif(struct kobject **hardif_obj, + struct net_device *dev); +void batadv_sysfs_del_hardif(struct kobject **hardif_obj); +int batadv_throw_uevent(struct batadv_priv *bat_priv, enum batadv_uev_type type, + enum batadv_uev_action action, const char *data); #endif /* _NET_BATMAN_ADV_SYSFS_H_ */ diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c index 2ab83d7fb1f8..a438f4b582fc 100644 --- a/net/batman-adv/translation-table.c +++ b/net/batman-adv/translation-table.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich, Antonio Quartulli * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -31,44 +29,46 @@ #include <linux/crc16.h> -static void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, - struct orig_node *orig_node); -static void tt_purge(struct work_struct *work); -static void tt_global_del_orig_list(struct tt_global_entry *tt_global_entry); +static void batadv_send_roam_adv(struct batadv_priv *bat_priv, uint8_t *client, + struct batadv_orig_node *orig_node); +static void batadv_tt_purge(struct work_struct *work); +static void +batadv_tt_global_del_orig_list(struct batadv_tt_global_entry *tt_global_entry); /* returns 1 if they are the same mac addr */ -static int compare_tt(const struct hlist_node *node, const void *data2) +static int batadv_compare_tt(const struct hlist_node *node, const void *data2) { - const void *data1 = container_of(node, struct tt_common_entry, + const void *data1 = container_of(node, struct batadv_tt_common_entry, hash_entry); return (memcmp(data1, data2, ETH_ALEN) == 0 ? 
1 : 0); } -static void tt_start_timer(struct bat_priv *bat_priv) +static void batadv_tt_start_timer(struct batadv_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->tt_work, tt_purge); - queue_delayed_work(bat_event_workqueue, &bat_priv->tt_work, + INIT_DELAYED_WORK(&bat_priv->tt_work, batadv_tt_purge); + queue_delayed_work(batadv_event_workqueue, &bat_priv->tt_work, msecs_to_jiffies(5000)); } -static struct tt_common_entry *tt_hash_find(struct hashtable_t *hash, - const void *data) +static struct batadv_tt_common_entry * +batadv_tt_hash_find(struct batadv_hashtable *hash, const void *data) { struct hlist_head *head; struct hlist_node *node; - struct tt_common_entry *tt_common_entry, *tt_common_entry_tmp = NULL; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_common_entry *tt_common_entry_tmp = NULL; uint32_t index; if (!hash) return NULL; - index = choose_orig(data, hash->size); + index = batadv_choose_orig(data, hash->size); head = &hash->table[index]; rcu_read_lock(); hlist_for_each_entry_rcu(tt_common_entry, node, head, hash_entry) { - if (!compare_eth(tt_common_entry, data)) + if (!batadv_compare_eth(tt_common_entry, data)) continue; if (!atomic_inc_not_zero(&tt_common_entry->refcount)) @@ -82,80 +82,87 @@ static struct tt_common_entry *tt_hash_find(struct hashtable_t *hash, return tt_common_entry_tmp; } -static struct tt_local_entry *tt_local_hash_find(struct bat_priv *bat_priv, - const void *data) +static struct batadv_tt_local_entry * +batadv_tt_local_hash_find(struct batadv_priv *bat_priv, const void *data) { - struct tt_common_entry *tt_common_entry; - struct tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_local_entry *tt_local_entry = NULL; - tt_common_entry = tt_hash_find(bat_priv->tt_local_hash, data); + tt_common_entry = batadv_tt_hash_find(bat_priv->tt_local_hash, data); if (tt_common_entry) tt_local_entry = container_of(tt_common_entry, - struct tt_local_entry, common); + struct batadv_tt_local_entry, + common); return tt_local_entry; } -static struct tt_global_entry *tt_global_hash_find(struct bat_priv *bat_priv, - const void *data) +static struct batadv_tt_global_entry * +batadv_tt_global_hash_find(struct batadv_priv *bat_priv, const void *data) { - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry = NULL; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global_entry = NULL; - tt_common_entry = tt_hash_find(bat_priv->tt_global_hash, data); + tt_common_entry = batadv_tt_hash_find(bat_priv->tt_global_hash, data); if (tt_common_entry) tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, common); + struct batadv_tt_global_entry, + common); return tt_global_entry; } -static void tt_local_entry_free_ref(struct tt_local_entry *tt_local_entry) +static void +batadv_tt_local_entry_free_ref(struct batadv_tt_local_entry *tt_local_entry) { if (atomic_dec_and_test(&tt_local_entry->common.refcount)) kfree_rcu(tt_local_entry, common.rcu); } -static void tt_global_entry_free_rcu(struct rcu_head *rcu) +static void batadv_tt_global_entry_free_rcu(struct rcu_head *rcu) { - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global_entry; - tt_common_entry = container_of(rcu, struct tt_common_entry, rcu); - tt_global_entry = container_of(tt_common_entry, struct tt_global_entry, - common); + tt_common_entry = 
container_of(rcu, struct batadv_tt_common_entry, rcu); + tt_global_entry = container_of(tt_common_entry, + struct batadv_tt_global_entry, common); kfree(tt_global_entry); } -static void tt_global_entry_free_ref(struct tt_global_entry *tt_global_entry) +static void +batadv_tt_global_entry_free_ref(struct batadv_tt_global_entry *tt_global_entry) { if (atomic_dec_and_test(&tt_global_entry->common.refcount)) { - tt_global_del_orig_list(tt_global_entry); + batadv_tt_global_del_orig_list(tt_global_entry); call_rcu(&tt_global_entry->common.rcu, - tt_global_entry_free_rcu); + batadv_tt_global_entry_free_rcu); } } -static void tt_orig_list_entry_free_rcu(struct rcu_head *rcu) +static void batadv_tt_orig_list_entry_free_rcu(struct rcu_head *rcu) { - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; - orig_entry = container_of(rcu, struct tt_orig_list_entry, rcu); - orig_node_free_ref(orig_entry->orig_node); + orig_entry = container_of(rcu, struct batadv_tt_orig_list_entry, rcu); + batadv_orig_node_free_ref(orig_entry->orig_node); kfree(orig_entry); } -static void tt_orig_list_entry_free_ref(struct tt_orig_list_entry *orig_entry) +static void +batadv_tt_orig_list_entry_free_ref(struct batadv_tt_orig_list_entry *orig_entry) { /* to avoid race conditions, immediately decrease the tt counter */ atomic_dec(&orig_entry->orig_node->tt_size); - call_rcu(&orig_entry->rcu, tt_orig_list_entry_free_rcu); + call_rcu(&orig_entry->rcu, batadv_tt_orig_list_entry_free_rcu); } -static void tt_local_event(struct bat_priv *bat_priv, const uint8_t *addr, - uint8_t flags) +static void batadv_tt_local_event(struct batadv_priv *bat_priv, + const uint8_t *addr, uint8_t flags) { - struct tt_change_node *tt_change_node; + struct batadv_tt_change_node *tt_change_node, *entry, *safe; + bool event_removed = false; + bool del_op_requested, del_op_entry; tt_change_node = kmalloc(sizeof(*tt_change_node), GFP_ATOMIC); @@ -165,50 +172,82 @@ static void tt_local_event(struct bat_priv *bat_priv, const uint8_t *addr, tt_change_node->change.flags = flags; memcpy(tt_change_node->change.addr, addr, ETH_ALEN); + del_op_requested = flags & BATADV_TT_CLIENT_DEL; + + /* check for ADD+DEL or DEL+ADD events */ spin_lock_bh(&bat_priv->tt_changes_list_lock); + list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, + list) { + if (!batadv_compare_eth(entry->change.addr, addr)) + continue; + + /* DEL+ADD in the same orig interval have no effect and can be + * removed to avoid silly behaviour on the receiver side. The + * other way around (ADD+DEL) can happen in case of roaming of + * a client still in the NEW state. 
Roaming of NEW clients is + * now possible due to automatically recognition of "temporary" + * clients + */ + del_op_entry = entry->change.flags & BATADV_TT_CLIENT_DEL; + if (!del_op_requested && del_op_entry) + goto del; + if (del_op_requested && !del_op_entry) + goto del; + continue; +del: + list_del(&entry->list); + kfree(entry); + event_removed = true; + goto unlock; + } + /* track the change in the OGMinterval list */ list_add_tail(&tt_change_node->list, &bat_priv->tt_changes_list); - atomic_inc(&bat_priv->tt_local_changes); + +unlock: spin_unlock_bh(&bat_priv->tt_changes_list_lock); - atomic_set(&bat_priv->tt_ogm_append_cnt, 0); + if (event_removed) + atomic_dec(&bat_priv->tt_local_changes); + else + atomic_inc(&bat_priv->tt_local_changes); } -int tt_len(int changes_num) +int batadv_tt_len(int changes_num) { - return changes_num * sizeof(struct tt_change); + return changes_num * sizeof(struct batadv_tt_change); } -static int tt_local_init(struct bat_priv *bat_priv) +static int batadv_tt_local_init(struct batadv_priv *bat_priv) { if (bat_priv->tt_local_hash) - return 1; + return 0; - bat_priv->tt_local_hash = hash_new(1024); + bat_priv->tt_local_hash = batadv_hash_new(1024); if (!bat_priv->tt_local_hash) - return 0; + return -ENOMEM; - return 1; + return 0; } -void tt_local_add(struct net_device *soft_iface, const uint8_t *addr, - int ifindex) +void batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr, + int ifindex) { - struct bat_priv *bat_priv = netdev_priv(soft_iface); - struct tt_local_entry *tt_local_entry = NULL; - struct tt_global_entry *tt_global_entry = NULL; + struct batadv_priv *bat_priv = netdev_priv(soft_iface); + struct batadv_tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_global_entry *tt_global_entry = NULL; struct hlist_head *head; struct hlist_node *node; - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; int hash_added; - tt_local_entry = tt_local_hash_find(bat_priv, addr); + tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr); if (tt_local_entry) { tt_local_entry->last_seen = jiffies; - /* possibly unset the TT_CLIENT_PENDING flag */ - tt_local_entry->common.flags &= ~TT_CLIENT_PENDING; + /* possibly unset the BATADV_TT_CLIENT_PENDING flag */ + tt_local_entry->common.flags &= ~BATADV_TT_CLIENT_PENDING; goto out; } @@ -216,40 +255,42 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr, if (!tt_local_entry) goto out; - bat_dbg(DBG_TT, bat_priv, - "Creating new local tt entry: %pM (ttvn: %d)\n", addr, - (uint8_t)atomic_read(&bat_priv->ttvn)); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Creating new local tt entry: %pM (ttvn: %d)\n", addr, + (uint8_t)atomic_read(&bat_priv->ttvn)); memcpy(tt_local_entry->common.addr, addr, ETH_ALEN); - tt_local_entry->common.flags = NO_FLAGS; - if (is_wifi_iface(ifindex)) - tt_local_entry->common.flags |= TT_CLIENT_WIFI; + tt_local_entry->common.flags = BATADV_NO_FLAGS; + if (batadv_is_wifi_iface(ifindex)) + tt_local_entry->common.flags |= BATADV_TT_CLIENT_WIFI; atomic_set(&tt_local_entry->common.refcount, 2); tt_local_entry->last_seen = jiffies; /* the batman interface mac address should never be purged */ - if (compare_eth(addr, soft_iface->dev_addr)) - tt_local_entry->common.flags |= TT_CLIENT_NOPURGE; + if (batadv_compare_eth(addr, soft_iface->dev_addr)) + tt_local_entry->common.flags |= BATADV_TT_CLIENT_NOPURGE; /* The local entry has to be marked as NEW to avoid to send it in * a full table response going out before the next ttvn increment 
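
The list walk added to batadv_tt_local_event() above implements a small optimisation: a DEL that arrives while the matching ADD is still queued for the next OGM cancels the pair (and the other way around), so the receiver never sees a pointless ADD+DEL sequence for the same client. A stand-alone sketch of that idea, with the spinlock, the tt_local_changes counter and the flag handling stripped away; the names are illustrative, not the kernel's:

/* tt_event_collapse.c - simplified model of the ADD/DEL cancellation */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct change {
	unsigned char addr[6];
	int is_del;		/* mirrors the BATADV_TT_CLIENT_DEL bit */
	struct change *next;
};

static struct change *pending;	/* changes waiting for the next OGM */

/* queue a change; an opposite pending change for the same client cancels it */
static void queue_change(const unsigned char *addr, int is_del)
{
	struct change **pp, *cur, *node;

	for (pp = &pending; (cur = *pp); pp = &cur->next) {
		if (memcmp(cur->addr, addr, 6) != 0)
			continue;
		if (cur->is_del != is_del) {	/* ADD+DEL or DEL+ADD */
			*pp = cur->next;
			free(cur);
			return;			/* net effect: nothing to send */
		}
	}

	node = malloc(sizeof(*node));
	if (!node)
		return;
	memcpy(node->addr, addr, 6);
	node->is_del = is_del;
	node->next = pending;
	pending = node;
}

int main(void)
{
	unsigned char mac[6] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
	struct change *c;
	int n = 0;

	queue_change(mac, 0);	/* client appears ... */
	queue_change(mac, 1);	/* ... and disappears within the same interval */

	for (c = pending; c; c = c->next)
		n++;
	printf("changes to announce: %d\n", n);	/* prints 0 */
	return 0;
}
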
- * (consistency check) */ - tt_local_entry->common.flags |= TT_CLIENT_NEW; + * (consistency check) + */ + tt_local_entry->common.flags |= BATADV_TT_CLIENT_NEW; - hash_added = hash_add(bat_priv->tt_local_hash, compare_tt, choose_orig, - &tt_local_entry->common, - &tt_local_entry->common.hash_entry); + hash_added = batadv_hash_add(bat_priv->tt_local_hash, batadv_compare_tt, + batadv_choose_orig, + &tt_local_entry->common, + &tt_local_entry->common.hash_entry); if (unlikely(hash_added != 0)) { /* remove the reference for the hash */ - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); goto out; } - tt_local_event(bat_priv, addr, tt_local_entry->common.flags); + batadv_tt_local_event(bat_priv, addr, tt_local_entry->common.flags); /* remove address from global hash if present */ - tt_global_entry = tt_global_hash_find(bat_priv, addr); + tt_global_entry = batadv_tt_global_hash_find(bat_priv, addr); /* Check whether it is a roaming! */ if (tt_global_entry) { @@ -259,31 +300,85 @@ void tt_local_add(struct net_device *soft_iface, const uint8_t *addr, hlist_for_each_entry_rcu(orig_entry, node, head, list) { orig_entry->orig_node->tt_poss_change = true; - send_roam_adv(bat_priv, tt_global_entry->common.addr, - orig_entry->orig_node); + batadv_send_roam_adv(bat_priv, + tt_global_entry->common.addr, + orig_entry->orig_node); } rcu_read_unlock(); /* The global entry has to be marked as ROAMING and * has to be kept for consistency purpose */ - tt_global_entry->common.flags |= TT_CLIENT_ROAM; + tt_global_entry->common.flags |= BATADV_TT_CLIENT_ROAM; tt_global_entry->roam_at = jiffies; } out: if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); if (tt_global_entry) - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); +} + +static void batadv_tt_realloc_packet_buff(unsigned char **packet_buff, + int *packet_buff_len, + int min_packet_len, + int new_packet_len) +{ + unsigned char *new_buff; + + new_buff = kmalloc(new_packet_len, GFP_ATOMIC); + + /* keep old buffer if kmalloc should fail */ + if (new_buff) { + memcpy(new_buff, *packet_buff, min_packet_len); + kfree(*packet_buff); + *packet_buff = new_buff; + *packet_buff_len = new_packet_len; + } +} + +static void batadv_tt_prepare_packet_buff(struct batadv_priv *bat_priv, + unsigned char **packet_buff, + int *packet_buff_len, + int min_packet_len) +{ + struct batadv_hard_iface *primary_if; + int req_len; + + primary_if = batadv_primary_if_get_selected(bat_priv); + + req_len = min_packet_len; + req_len += batadv_tt_len(atomic_read(&bat_priv->tt_local_changes)); + + /* if we have too many changes for one packet don't send any + * and wait for the tt table request which will be fragmented + */ + if ((!primary_if) || (req_len > primary_if->soft_iface->mtu)) + req_len = min_packet_len; + + batadv_tt_realloc_packet_buff(packet_buff, packet_buff_len, + min_packet_len, req_len); + + if (primary_if) + batadv_hardif_free_ref(primary_if); } -int tt_changes_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len) +static int batadv_tt_changes_fill_buff(struct batadv_priv *bat_priv, + unsigned char **packet_buff, + int *packet_buff_len, + int min_packet_len) { - int count = 0, tot_changes = 0; - struct tt_change_node *entry, *safe; + struct batadv_tt_change_node *entry, *safe; + int count = 0, tot_changes = 0, new_len; + unsigned char *tt_buff; - if (buff_len > 0) - tot_changes = buff_len / tt_len(1); + 
batadv_tt_prepare_packet_buff(bat_priv, packet_buff, + packet_buff_len, min_packet_len); + + new_len = *packet_buff_len - min_packet_len; + tt_buff = *packet_buff + min_packet_len; + + if (new_len > 0) + tot_changes = new_len / batadv_tt_len(1); spin_lock_bh(&bat_priv->tt_changes_list_lock); atomic_set(&bat_priv->tt_local_changes, 0); @@ -291,8 +386,8 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, list_for_each_entry_safe(entry, safe, &bat_priv->tt_changes_list, list) { if (count < tot_changes) { - memcpy(buff + tt_len(count), - &entry->change, sizeof(struct tt_change)); + memcpy(tt_buff + batadv_tt_len(count), + &entry->change, sizeof(struct batadv_tt_change)); count++; } list_del(&entry->list); @@ -305,37 +400,35 @@ int tt_changes_fill_buffer(struct bat_priv *bat_priv, kfree(bat_priv->tt_buff); bat_priv->tt_buff_len = 0; bat_priv->tt_buff = NULL; - /* We check whether this new OGM has no changes due to size - * problems */ - if (buff_len > 0) { - /** - * if kmalloc() fails we will reply with the full table + /* check whether this new OGM has no changes due to size problems */ + if (new_len > 0) { + /* if kmalloc() fails we will reply with the full table * instead of providing the diff */ - bat_priv->tt_buff = kmalloc(buff_len, GFP_ATOMIC); + bat_priv->tt_buff = kmalloc(new_len, GFP_ATOMIC); if (bat_priv->tt_buff) { - memcpy(bat_priv->tt_buff, buff, buff_len); - bat_priv->tt_buff_len = buff_len; + memcpy(bat_priv->tt_buff, tt_buff, new_len); + bat_priv->tt_buff_len = new_len; } } spin_unlock_bh(&bat_priv->tt_buff_lock); - return tot_changes; + return count; } -int tt_local_seq_print_text(struct seq_file *seq, void *offset) +int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_common_entry *tt_common_entry; - struct hard_iface *primary_if; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hashtable *hash = bat_priv->tt_local_hash; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; uint32_t i; int ret = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { ret = seq_printf(seq, "BATMAN mesh %s disabled - please specify interfaces to enable it\n", @@ -343,7 +436,7 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) goto out; } - if (primary_if->if_status != IF_ACTIVE) { + if (primary_if->if_status != BATADV_IF_ACTIVE) { ret = seq_printf(seq, "BATMAN mesh %s disabled - primary interface not active\n", net_dev->name); @@ -363,63 +456,94 @@ int tt_local_seq_print_text(struct seq_file *seq, void *offset) seq_printf(seq, " * %pM [%c%c%c%c%c]\n", tt_common_entry->addr, (tt_common_entry->flags & - TT_CLIENT_ROAM ? 'R' : '.'), + BATADV_TT_CLIENT_ROAM ? 'R' : '.'), (tt_common_entry->flags & - TT_CLIENT_NOPURGE ? 'P' : '.'), + BATADV_TT_CLIENT_NOPURGE ? 'P' : '.'), (tt_common_entry->flags & - TT_CLIENT_NEW ? 'N' : '.'), + BATADV_TT_CLIENT_NEW ? 'N' : '.'), (tt_common_entry->flags & - TT_CLIENT_PENDING ? 'X' : '.'), + BATADV_TT_CLIENT_PENDING ? 'X' : '.'), (tt_common_entry->flags & - TT_CLIENT_WIFI ? 'W' : '.')); + BATADV_TT_CLIENT_WIFI ? 
'W' : '.')); } rcu_read_unlock(); } out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } -static void tt_local_set_pending(struct bat_priv *bat_priv, - struct tt_local_entry *tt_local_entry, - uint16_t flags, const char *message) +static void +batadv_tt_local_set_pending(struct batadv_priv *bat_priv, + struct batadv_tt_local_entry *tt_local_entry, + uint16_t flags, const char *message) { - tt_local_event(bat_priv, tt_local_entry->common.addr, - tt_local_entry->common.flags | flags); + batadv_tt_local_event(bat_priv, tt_local_entry->common.addr, + tt_local_entry->common.flags | flags); /* The local client has to be marked as "pending to be removed" but has * to be kept in the table in order to send it in a full table - * response issued before the net ttvn increment (consistency check) */ - tt_local_entry->common.flags |= TT_CLIENT_PENDING; + * response issued before the net ttvn increment (consistency check) + */ + tt_local_entry->common.flags |= BATADV_TT_CLIENT_PENDING; - bat_dbg(DBG_TT, bat_priv, - "Local tt entry (%pM) pending to be removed: %s\n", - tt_local_entry->common.addr, message); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Local tt entry (%pM) pending to be removed: %s\n", + tt_local_entry->common.addr, message); } -void tt_local_remove(struct bat_priv *bat_priv, const uint8_t *addr, - const char *message, bool roaming) +void batadv_tt_local_remove(struct batadv_priv *bat_priv, const uint8_t *addr, + const char *message, bool roaming) { - struct tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_local_entry *tt_local_entry = NULL; + uint16_t flags; - tt_local_entry = tt_local_hash_find(bat_priv, addr); + tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr); if (!tt_local_entry) goto out; - tt_local_set_pending(bat_priv, tt_local_entry, TT_CLIENT_DEL | - (roaming ? 
TT_CLIENT_ROAM : NO_FLAGS), message); + flags = BATADV_TT_CLIENT_DEL; + if (roaming) + flags |= BATADV_TT_CLIENT_ROAM; + + batadv_tt_local_set_pending(bat_priv, tt_local_entry, flags, message); out: if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); } -static void tt_local_purge(struct bat_priv *bat_priv) +static void batadv_tt_local_purge_list(struct batadv_priv *bat_priv, + struct hlist_head *head) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_local_entry *tt_local_entry; - struct tt_common_entry *tt_common_entry; + struct batadv_tt_local_entry *tt_local_entry; + struct batadv_tt_common_entry *tt_common_entry; struct hlist_node *node, *node_tmp; + + hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, head, + hash_entry) { + tt_local_entry = container_of(tt_common_entry, + struct batadv_tt_local_entry, + common); + if (tt_local_entry->common.flags & BATADV_TT_CLIENT_NOPURGE) + continue; + + /* entry already marked for deletion */ + if (tt_local_entry->common.flags & BATADV_TT_CLIENT_PENDING) + continue; + + if (!batadv_has_timed_out(tt_local_entry->last_seen, + BATADV_TT_LOCAL_TIMEOUT)) + continue; + + batadv_tt_local_set_pending(bat_priv, tt_local_entry, + BATADV_TT_CLIENT_DEL, "timed out"); + } +} + +static void batadv_tt_local_purge(struct batadv_priv *bat_priv) +{ + struct batadv_hashtable *hash = bat_priv->tt_local_hash; struct hlist_head *head; spinlock_t *list_lock; /* protects write access to the hash lists */ uint32_t i; @@ -429,36 +553,18 @@ static void tt_local_purge(struct bat_priv *bat_priv) list_lock = &hash->list_locks[i]; spin_lock_bh(list_lock); - hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, - head, hash_entry) { - tt_local_entry = container_of(tt_common_entry, - struct tt_local_entry, - common); - if (tt_local_entry->common.flags & TT_CLIENT_NOPURGE) - continue; - - /* entry already marked for deletion */ - if (tt_local_entry->common.flags & TT_CLIENT_PENDING) - continue; - - if (!has_timed_out(tt_local_entry->last_seen, - TT_LOCAL_TIMEOUT)) - continue; - - tt_local_set_pending(bat_priv, tt_local_entry, - TT_CLIENT_DEL, "timed out"); - } + batadv_tt_local_purge_list(bat_priv, head); spin_unlock_bh(list_lock); } } -static void tt_local_table_free(struct bat_priv *bat_priv) +static void batadv_tt_local_table_free(struct batadv_priv *bat_priv) { - struct hashtable_t *hash; + struct batadv_hashtable *hash; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct tt_common_entry *tt_common_entry; - struct tt_local_entry *tt_local_entry; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_local_entry *tt_local; struct hlist_node *node, *node_tmp; struct hlist_head *head; uint32_t i; @@ -476,35 +582,35 @@ static void tt_local_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - tt_local_entry = container_of(tt_common_entry, - struct tt_local_entry, - common); - tt_local_entry_free_ref(tt_local_entry); + tt_local = container_of(tt_common_entry, + struct batadv_tt_local_entry, + common); + batadv_tt_local_entry_free_ref(tt_local); } spin_unlock_bh(list_lock); } - hash_destroy(hash); + batadv_hash_destroy(hash); bat_priv->tt_local_hash = NULL; } -static int tt_global_init(struct bat_priv *bat_priv) +static int batadv_tt_global_init(struct batadv_priv *bat_priv) { if (bat_priv->tt_global_hash) - return 1; + return 0; - bat_priv->tt_global_hash = 
hash_new(1024); + bat_priv->tt_global_hash = batadv_hash_new(1024); if (!bat_priv->tt_global_hash) - return 0; + return -ENOMEM; - return 1; + return 0; } -static void tt_changes_list_free(struct bat_priv *bat_priv) +static void batadv_tt_changes_list_free(struct batadv_priv *bat_priv) { - struct tt_change_node *entry, *safe; + struct batadv_tt_change_node *entry, *safe; spin_lock_bh(&bat_priv->tt_changes_list_lock); @@ -521,10 +627,11 @@ static void tt_changes_list_free(struct bat_priv *bat_priv) /* find out if an orig_node is already in the list of a tt_global_entry. * returns 1 if found, 0 otherwise */ -static bool tt_global_entry_has_orig(const struct tt_global_entry *entry, - const struct orig_node *orig_node) +static bool +batadv_tt_global_entry_has_orig(const struct batadv_tt_global_entry *entry, + const struct batadv_orig_node *orig_node) { - struct tt_orig_list_entry *tmp_orig_entry; + struct batadv_tt_orig_list_entry *tmp_orig_entry; const struct hlist_head *head; struct hlist_node *node; bool found = false; @@ -541,11 +648,11 @@ static bool tt_global_entry_has_orig(const struct tt_global_entry *entry, return found; } -static void tt_global_add_orig_entry(struct tt_global_entry *tt_global_entry, - struct orig_node *orig_node, - int ttvn) +static void +batadv_tt_global_add_orig_entry(struct batadv_tt_global_entry *tt_global_entry, + struct batadv_orig_node *orig_node, int ttvn) { - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; orig_entry = kzalloc(sizeof(*orig_entry), GFP_ATOMIC); if (!orig_entry) @@ -564,91 +671,95 @@ static void tt_global_add_orig_entry(struct tt_global_entry *tt_global_entry, } /* caller must hold orig_node refcount */ -int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_addr, uint8_t ttvn, bool roaming, - bool wifi) +int batadv_tt_global_add(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *tt_addr, uint8_t flags, + uint8_t ttvn) { - struct tt_global_entry *tt_global_entry = NULL; + struct batadv_tt_global_entry *tt_global_entry = NULL; int ret = 0; int hash_added; + struct batadv_tt_common_entry *common; - tt_global_entry = tt_global_hash_find(bat_priv, tt_addr); + tt_global_entry = batadv_tt_global_hash_find(bat_priv, tt_addr); if (!tt_global_entry) { - tt_global_entry = kzalloc(sizeof(*tt_global_entry), - GFP_ATOMIC); + tt_global_entry = kzalloc(sizeof(*tt_global_entry), GFP_ATOMIC); if (!tt_global_entry) goto out; - memcpy(tt_global_entry->common.addr, tt_addr, ETH_ALEN); + common = &tt_global_entry->common; + memcpy(common->addr, tt_addr, ETH_ALEN); - tt_global_entry->common.flags = NO_FLAGS; + common->flags = flags; tt_global_entry->roam_at = 0; - atomic_set(&tt_global_entry->common.refcount, 2); + atomic_set(&common->refcount, 2); INIT_HLIST_HEAD(&tt_global_entry->orig_list); spin_lock_init(&tt_global_entry->list_lock); - hash_added = hash_add(bat_priv->tt_global_hash, compare_tt, - choose_orig, &tt_global_entry->common, - &tt_global_entry->common.hash_entry); + hash_added = batadv_hash_add(bat_priv->tt_global_hash, + batadv_compare_tt, + batadv_choose_orig, common, + &common->hash_entry); if (unlikely(hash_added != 0)) { /* remove the reference for the hash */ - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); goto out_remove; } - tt_global_add_orig_entry(tt_global_entry, orig_node, ttvn); + batadv_tt_global_add_orig_entry(tt_global_entry, orig_node, + ttvn); } else { /* there is 
already a global entry, use this one. */ - /* If there is the TT_CLIENT_ROAM flag set, there is only one - * originator left in the list and we previously received a + /* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only + * one originator left in the list and we previously received a * delete + roaming change for this originator. * * We should first delete the old originator before adding the * new one. */ - if (tt_global_entry->common.flags & TT_CLIENT_ROAM) { - tt_global_del_orig_list(tt_global_entry); - tt_global_entry->common.flags &= ~TT_CLIENT_ROAM; + if (tt_global_entry->common.flags & BATADV_TT_CLIENT_ROAM) { + batadv_tt_global_del_orig_list(tt_global_entry); + tt_global_entry->common.flags &= ~BATADV_TT_CLIENT_ROAM; tt_global_entry->roam_at = 0; } - if (!tt_global_entry_has_orig(tt_global_entry, orig_node)) - tt_global_add_orig_entry(tt_global_entry, orig_node, - ttvn); + if (!batadv_tt_global_entry_has_orig(tt_global_entry, + orig_node)) + batadv_tt_global_add_orig_entry(tt_global_entry, + orig_node, ttvn); } - if (wifi) - tt_global_entry->common.flags |= TT_CLIENT_WIFI; - - bat_dbg(DBG_TT, bat_priv, - "Creating new global tt entry: %pM (via %pM)\n", - tt_global_entry->common.addr, orig_node->orig); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Creating new global tt entry: %pM (via %pM)\n", + tt_global_entry->common.addr, orig_node->orig); out_remove: /* remove address from local hash if present */ - tt_local_remove(bat_priv, tt_global_entry->common.addr, - "global tt received", roaming); + batadv_tt_local_remove(bat_priv, tt_global_entry->common.addr, + "global tt received", + flags & BATADV_TT_CLIENT_ROAM); ret = 1; out: if (tt_global_entry) - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); return ret; } /* print all orig nodes who announce the address for this global entry. * it is assumed that the caller holds rcu_read_lock(); */ -static void tt_global_print_entry(struct tt_global_entry *tt_global_entry, - struct seq_file *seq) +static void +batadv_tt_global_print_entry(struct batadv_tt_global_entry *tt_global_entry, + struct seq_file *seq) { struct hlist_head *head; struct hlist_node *node; - struct tt_orig_list_entry *orig_entry; - struct tt_common_entry *tt_common_entry; + struct batadv_tt_orig_list_entry *orig_entry; + struct batadv_tt_common_entry *tt_common_entry; uint16_t flags; uint8_t last_ttvn; @@ -662,25 +773,25 @@ static void tt_global_print_entry(struct tt_global_entry *tt_global_entry, seq_printf(seq, " * %pM (%3u) via %pM (%3u) [%c%c]\n", tt_global_entry->common.addr, orig_entry->ttvn, orig_entry->orig_node->orig, last_ttvn, - (flags & TT_CLIENT_ROAM ? 'R' : '.'), - (flags & TT_CLIENT_WIFI ? 'W' : '.')); + (flags & BATADV_TT_CLIENT_ROAM ? 'R' : '.'), + (flags & BATADV_TT_CLIENT_WIFI ? 
'W' : '.')); } } -int tt_global_seq_print_text(struct seq_file *seq, void *offset) +int batadv_tt_global_seq_print_text(struct seq_file *seq, void *offset) { struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->tt_global_hash; - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry; - struct hard_iface *primary_if; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hashtable *hash = bat_priv->tt_global_hash; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global; + struct batadv_hard_iface *primary_if; struct hlist_node *node; struct hlist_head *head; uint32_t i; int ret = 0; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) { ret = seq_printf(seq, "BATMAN mesh %s disabled - please specify interfaces to enable it\n", @@ -688,7 +799,7 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) goto out; } - if (primary_if->if_status != IF_ACTIVE) { + if (primary_if->if_status != BATADV_IF_ACTIVE) { ret = seq_printf(seq, "BATMAN mesh %s disabled - primary interface not active\n", net_dev->name); @@ -707,87 +818,91 @@ int tt_global_seq_print_text(struct seq_file *seq, void *offset) rcu_read_lock(); hlist_for_each_entry_rcu(tt_common_entry, node, head, hash_entry) { - tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, - common); - tt_global_print_entry(tt_global_entry, seq); + tt_global = container_of(tt_common_entry, + struct batadv_tt_global_entry, + common); + batadv_tt_global_print_entry(tt_global, seq); } rcu_read_unlock(); } out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } /* deletes the orig list of a tt_global_entry */ -static void tt_global_del_orig_list(struct tt_global_entry *tt_global_entry) +static void +batadv_tt_global_del_orig_list(struct batadv_tt_global_entry *tt_global_entry) { struct hlist_head *head; struct hlist_node *node, *safe; - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; spin_lock_bh(&tt_global_entry->list_lock); head = &tt_global_entry->orig_list; hlist_for_each_entry_safe(orig_entry, node, safe, head, list) { hlist_del_rcu(node); - tt_orig_list_entry_free_ref(orig_entry); + batadv_tt_orig_list_entry_free_ref(orig_entry); } spin_unlock_bh(&tt_global_entry->list_lock); } -static void tt_global_del_orig_entry(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - struct orig_node *orig_node, - const char *message) +static void +batadv_tt_global_del_orig_entry(struct batadv_priv *bat_priv, + struct batadv_tt_global_entry *tt_global_entry, + struct batadv_orig_node *orig_node, + const char *message) { struct hlist_head *head; struct hlist_node *node, *safe; - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; spin_lock_bh(&tt_global_entry->list_lock); head = &tt_global_entry->orig_list; hlist_for_each_entry_safe(orig_entry, node, safe, head, list) { if (orig_entry->orig_node == orig_node) { - bat_dbg(DBG_TT, bat_priv, - "Deleting %pM from global tt entry %pM: %s\n", - orig_node->orig, tt_global_entry->common.addr, - message); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Deleting %pM from global tt entry %pM: %s\n", + orig_node->orig, + tt_global_entry->common.addr, message); hlist_del_rcu(node); - tt_orig_list_entry_free_ref(orig_entry); 
+ batadv_tt_orig_list_entry_free_ref(orig_entry); } } spin_unlock_bh(&tt_global_entry->list_lock); } -static void tt_global_del_struct(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - const char *message) +static void +batadv_tt_global_del_struct(struct batadv_priv *bat_priv, + struct batadv_tt_global_entry *tt_global_entry, + const char *message) { - bat_dbg(DBG_TT, bat_priv, - "Deleting global tt entry %pM: %s\n", - tt_global_entry->common.addr, message); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Deleting global tt entry %pM: %s\n", + tt_global_entry->common.addr, message); - hash_remove(bat_priv->tt_global_hash, compare_tt, choose_orig, - tt_global_entry->common.addr); - tt_global_entry_free_ref(tt_global_entry); + batadv_hash_remove(bat_priv->tt_global_hash, batadv_compare_tt, + batadv_choose_orig, tt_global_entry->common.addr); + batadv_tt_global_entry_free_ref(tt_global_entry); } /* If the client is to be deleted, we check if it is the last origantor entry - * within tt_global entry. If yes, we set the TT_CLIENT_ROAM flag and the timer, - * otherwise we simply remove the originator scheduled for deletion. + * within tt_global entry. If yes, we set the BATADV_TT_CLIENT_ROAM flag and the + * timer, otherwise we simply remove the originator scheduled for deletion. */ -static void tt_global_del_roaming(struct bat_priv *bat_priv, - struct tt_global_entry *tt_global_entry, - struct orig_node *orig_node, - const char *message) +static void +batadv_tt_global_del_roaming(struct batadv_priv *bat_priv, + struct batadv_tt_global_entry *tt_global_entry, + struct batadv_orig_node *orig_node, + const char *message) { bool last_entry = true; struct hlist_head *head; struct hlist_node *node; - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; /* no local entry exists, case 1: * Check if this is the last one or if other entries exist. @@ -805,37 +920,37 @@ static void tt_global_del_roaming(struct bat_priv *bat_priv, if (last_entry) { /* its the last one, mark for roaming. */ - tt_global_entry->common.flags |= TT_CLIENT_ROAM; + tt_global_entry->common.flags |= BATADV_TT_CLIENT_ROAM; tt_global_entry->roam_at = jiffies; } else /* there is another entry, we can simply delete this * one and can still use the other one. 
*/ - tt_global_del_orig_entry(bat_priv, tt_global_entry, - orig_node, message); + batadv_tt_global_del_orig_entry(bat_priv, tt_global_entry, + orig_node, message); } -static void tt_global_del(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const unsigned char *addr, - const char *message, bool roaming) +static void batadv_tt_global_del(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *addr, + const char *message, bool roaming) { - struct tt_global_entry *tt_global_entry = NULL; - struct tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_global_entry *tt_global_entry = NULL; + struct batadv_tt_local_entry *local_entry = NULL; - tt_global_entry = tt_global_hash_find(bat_priv, addr); + tt_global_entry = batadv_tt_global_hash_find(bat_priv, addr); if (!tt_global_entry) goto out; if (!roaming) { - tt_global_del_orig_entry(bat_priv, tt_global_entry, orig_node, - message); + batadv_tt_global_del_orig_entry(bat_priv, tt_global_entry, + orig_node, message); if (hlist_empty(&tt_global_entry->orig_list)) - tt_global_del_struct(bat_priv, tt_global_entry, - message); + batadv_tt_global_del_struct(bat_priv, tt_global_entry, + message); goto out; } @@ -844,41 +959,42 @@ static void tt_global_del(struct bat_priv *bat_priv, * event, there are two possibilities: * 1) the client roamed from node A to node B => if there * is only one originator left for this client, we mark - * it with TT_CLIENT_ROAM, we start a timer and we + * it with BATADV_TT_CLIENT_ROAM, we start a timer and we * wait for node B to claim it. In case of timeout * the entry is purged. * * If there are other originators left, we directly delete * the originator. * 2) the client roamed to us => we can directly delete - * the global entry, since it is useless now. */ - - tt_local_entry = tt_local_hash_find(bat_priv, - tt_global_entry->common.addr); - if (tt_local_entry) { + * the global entry, since it is useless now. + */ + local_entry = batadv_tt_local_hash_find(bat_priv, + tt_global_entry->common.addr); + if (local_entry) { /* local entry exists, case 2: client roamed to us. 
*/ - tt_global_del_orig_list(tt_global_entry); - tt_global_del_struct(bat_priv, tt_global_entry, message); + batadv_tt_global_del_orig_list(tt_global_entry); + batadv_tt_global_del_struct(bat_priv, tt_global_entry, message); } else /* no local entry exists, case 1: check for roaming */ - tt_global_del_roaming(bat_priv, tt_global_entry, orig_node, - message); + batadv_tt_global_del_roaming(bat_priv, tt_global_entry, + orig_node, message); out: if (tt_global_entry) - tt_global_entry_free_ref(tt_global_entry); - if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); + if (local_entry) + batadv_tt_local_entry_free_ref(local_entry); } -void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, const char *message) +void batadv_tt_global_del_orig(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const char *message) { - struct tt_global_entry *tt_global_entry; - struct tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global; + struct batadv_tt_common_entry *tt_common_entry; uint32_t i; - struct hashtable_t *hash = bat_priv->tt_global_hash; + struct batadv_hashtable *hash = bat_priv->tt_global_hash; struct hlist_node *node, *safe; struct hlist_head *head; spinlock_t *list_lock; /* protects write access to the hash lists */ @@ -893,20 +1009,19 @@ void tt_global_del_orig(struct bat_priv *bat_priv, spin_lock_bh(list_lock); hlist_for_each_entry_safe(tt_common_entry, node, safe, head, hash_entry) { - tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, - common); - - tt_global_del_orig_entry(bat_priv, tt_global_entry, - orig_node, message); - - if (hlist_empty(&tt_global_entry->orig_list)) { - bat_dbg(DBG_TT, bat_priv, - "Deleting global tt entry %pM: %s\n", - tt_global_entry->common.addr, - message); + tt_global = container_of(tt_common_entry, + struct batadv_tt_global_entry, + common); + + batadv_tt_global_del_orig_entry(bat_priv, tt_global, + orig_node, message); + + if (hlist_empty(&tt_global->orig_list)) { + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Deleting global tt entry %pM: %s\n", + tt_global->common.addr, message); hlist_del_rcu(node); - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global); } } spin_unlock_bh(list_lock); @@ -914,12 +1029,36 @@ void tt_global_del_orig(struct bat_priv *bat_priv, orig_node->tt_initialised = false; } -static void tt_global_roam_purge(struct bat_priv *bat_priv) +static void batadv_tt_global_roam_purge_list(struct batadv_priv *bat_priv, + struct hlist_head *head) { - struct hashtable_t *hash = bat_priv->tt_global_hash; - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global_entry; struct hlist_node *node, *node_tmp; + + hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, head, + hash_entry) { + tt_global_entry = container_of(tt_common_entry, + struct batadv_tt_global_entry, + common); + if (!(tt_global_entry->common.flags & BATADV_TT_CLIENT_ROAM)) + continue; + if (!batadv_has_timed_out(tt_global_entry->roam_at, + BATADV_TT_CLIENT_ROAM_TIMEOUT)) + continue; + + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Deleting global tt entry (%pM): Roaming timeout\n", + tt_global_entry->common.addr); + + hlist_del_rcu(node); + batadv_tt_global_entry_free_ref(tt_global_entry); + } +} + +static void batadv_tt_global_roam_purge(struct batadv_priv *bat_priv) +{ + struct 
batadv_hashtable *hash = bat_priv->tt_global_hash; struct hlist_head *head; spinlock_t *list_lock; /* protects write access to the hash lists */ uint32_t i; @@ -929,35 +1068,18 @@ static void tt_global_roam_purge(struct bat_priv *bat_priv) list_lock = &hash->list_locks[i]; spin_lock_bh(list_lock); - hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, - head, hash_entry) { - tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, - common); - if (!(tt_global_entry->common.flags & TT_CLIENT_ROAM)) - continue; - if (!has_timed_out(tt_global_entry->roam_at, - TT_CLIENT_ROAM_TIMEOUT)) - continue; - - bat_dbg(DBG_TT, bat_priv, - "Deleting global tt entry (%pM): Roaming timeout\n", - tt_global_entry->common.addr); - - hlist_del_rcu(node); - tt_global_entry_free_ref(tt_global_entry); - } + batadv_tt_global_roam_purge_list(bat_priv, head); spin_unlock_bh(list_lock); } } -static void tt_global_table_free(struct bat_priv *bat_priv) +static void batadv_tt_global_table_free(struct batadv_priv *bat_priv) { - struct hashtable_t *hash; + struct batadv_hashtable *hash; spinlock_t *list_lock; /* protects write access to the hash lists */ - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_global_entry *tt_global; struct hlist_node *node, *node_tmp; struct hlist_head *head; uint32_t i; @@ -975,56 +1097,60 @@ static void tt_global_table_free(struct bat_priv *bat_priv) hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, head, hash_entry) { hlist_del_rcu(node); - tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, - common); - tt_global_entry_free_ref(tt_global_entry); + tt_global = container_of(tt_common_entry, + struct batadv_tt_global_entry, + common); + batadv_tt_global_entry_free_ref(tt_global); } spin_unlock_bh(list_lock); } - hash_destroy(hash); + batadv_hash_destroy(hash); bat_priv->tt_global_hash = NULL; } -static bool _is_ap_isolated(struct tt_local_entry *tt_local_entry, - struct tt_global_entry *tt_global_entry) +static bool +_batadv_is_ap_isolated(struct batadv_tt_local_entry *tt_local_entry, + struct batadv_tt_global_entry *tt_global_entry) { bool ret = false; - if (tt_local_entry->common.flags & TT_CLIENT_WIFI && - tt_global_entry->common.flags & TT_CLIENT_WIFI) + if (tt_local_entry->common.flags & BATADV_TT_CLIENT_WIFI && + tt_global_entry->common.flags & BATADV_TT_CLIENT_WIFI) ret = true; return ret; } -struct orig_node *transtable_search(struct bat_priv *bat_priv, - const uint8_t *src, const uint8_t *addr) +struct batadv_orig_node *batadv_transtable_search(struct batadv_priv *bat_priv, + const uint8_t *src, + const uint8_t *addr) { - struct tt_local_entry *tt_local_entry = NULL; - struct tt_global_entry *tt_global_entry = NULL; - struct orig_node *orig_node = NULL; - struct neigh_node *router = NULL; + struct batadv_tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_global_entry *tt_global_entry = NULL; + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *router = NULL; struct hlist_head *head; struct hlist_node *node; - struct tt_orig_list_entry *orig_entry; + struct batadv_tt_orig_list_entry *orig_entry; int best_tq; if (src && atomic_read(&bat_priv->ap_isolation)) { - tt_local_entry = tt_local_hash_find(bat_priv, src); + tt_local_entry = batadv_tt_local_hash_find(bat_priv, src); if (!tt_local_entry) goto out; } - tt_global_entry = tt_global_hash_find(bat_priv, addr); + tt_global_entry = 
batadv_tt_global_hash_find(bat_priv, addr); if (!tt_global_entry) goto out; /* check whether the clients should not communicate due to AP - * isolation */ - if (tt_local_entry && _is_ap_isolated(tt_local_entry, tt_global_entry)) + * isolation + */ + if (tt_local_entry && + _batadv_is_ap_isolated(tt_local_entry, tt_global_entry)) goto out; best_tq = 0; @@ -1032,7 +1158,7 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, rcu_read_lock(); head = &tt_global_entry->orig_list; hlist_for_each_entry_rcu(orig_entry, node, head, list) { - router = orig_node_get_router(orig_entry->orig_node); + router = batadv_orig_node_get_router(orig_entry->orig_node); if (!router) continue; @@ -1040,7 +1166,7 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, orig_node = orig_entry->orig_node; best_tq = router->tq_avg; } - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } /* found anything? */ if (orig_node && !atomic_inc_not_zero(&orig_node->refcount)) @@ -1048,21 +1174,21 @@ struct orig_node *transtable_search(struct bat_priv *bat_priv, rcu_read_unlock(); out: if (tt_global_entry) - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); return orig_node; } /* Calculates the checksum of the local table of a given orig_node */ -static uint16_t tt_global_crc(struct bat_priv *bat_priv, - struct orig_node *orig_node) +static uint16_t batadv_tt_global_crc(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node) { uint16_t total = 0, total_one; - struct hashtable_t *hash = bat_priv->tt_global_hash; - struct tt_common_entry *tt_common_entry; - struct tt_global_entry *tt_global_entry; + struct batadv_hashtable *hash = bat_priv->tt_global_hash; + struct batadv_tt_common_entry *tt_common; + struct batadv_tt_global_entry *tt_global; struct hlist_node *node; struct hlist_head *head; uint32_t i; @@ -1072,30 +1198,29 @@ static uint16_t tt_global_crc(struct bat_priv *bat_priv, head = &hash->table[i]; rcu_read_lock(); - hlist_for_each_entry_rcu(tt_common_entry, node, - head, hash_entry) { - tt_global_entry = container_of(tt_common_entry, - struct tt_global_entry, - common); + hlist_for_each_entry_rcu(tt_common, node, head, hash_entry) { + tt_global = container_of(tt_common, + struct batadv_tt_global_entry, + common); /* Roaming clients are in the global table for * consistency only. 
They don't have to be * taken into account while computing the * global crc */ - if (tt_global_entry->common.flags & TT_CLIENT_ROAM) + if (tt_common->flags & BATADV_TT_CLIENT_ROAM) continue; /* find out if this global entry is announced by this * originator */ - if (!tt_global_entry_has_orig(tt_global_entry, - orig_node)) + if (!batadv_tt_global_entry_has_orig(tt_global, + orig_node)) continue; total_one = 0; for (j = 0; j < ETH_ALEN; j++) total_one = crc16_byte(total_one, - tt_global_entry->common.addr[j]); + tt_common->addr[j]); total ^= total_one; } rcu_read_unlock(); @@ -1105,11 +1230,11 @@ static uint16_t tt_global_crc(struct bat_priv *bat_priv, } /* Calculates the checksum of the local table */ -uint16_t tt_local_crc(struct bat_priv *bat_priv) +static uint16_t batadv_tt_local_crc(struct batadv_priv *bat_priv) { uint16_t total = 0, total_one; - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_common_entry *tt_common_entry; + struct batadv_hashtable *hash = bat_priv->tt_local_hash; + struct batadv_tt_common_entry *tt_common; struct hlist_node *node; struct hlist_head *head; uint32_t i; @@ -1119,16 +1244,16 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) head = &hash->table[i]; rcu_read_lock(); - hlist_for_each_entry_rcu(tt_common_entry, node, - head, hash_entry) { + hlist_for_each_entry_rcu(tt_common, node, head, hash_entry) { /* not yet committed clients have not to be taken into - * account while computing the CRC */ - if (tt_common_entry->flags & TT_CLIENT_NEW) + * account while computing the CRC + */ + if (tt_common->flags & BATADV_TT_CLIENT_NEW) continue; total_one = 0; for (j = 0; j < ETH_ALEN; j++) total_one = crc16_byte(total_one, - tt_common_entry->addr[j]); + tt_common->addr[j]); total ^= total_one; } rcu_read_unlock(); @@ -1137,9 +1262,9 @@ uint16_t tt_local_crc(struct bat_priv *bat_priv) return total; } -static void tt_req_list_free(struct bat_priv *bat_priv) +static void batadv_tt_req_list_free(struct batadv_priv *bat_priv) { - struct tt_req_node *node, *safe; + struct batadv_tt_req_node *node, *safe; spin_lock_bh(&bat_priv->tt_req_list_lock); @@ -1151,15 +1276,16 @@ static void tt_req_list_free(struct bat_priv *bat_priv) spin_unlock_bh(&bat_priv->tt_req_list_lock); } -static void tt_save_orig_buffer(struct bat_priv *bat_priv, - struct orig_node *orig_node, - const unsigned char *tt_buff, - uint8_t tt_num_changes) +static void batadv_tt_save_orig_buffer(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *tt_buff, + uint8_t tt_num_changes) { - uint16_t tt_buff_len = tt_len(tt_num_changes); + uint16_t tt_buff_len = batadv_tt_len(tt_num_changes); /* Replace the old buffer only if I received something in the - * last OGM (the OGM could carry no changes) */ + * last OGM (the OGM could carry no changes) + */ spin_lock_bh(&orig_node->tt_buff_lock); if (tt_buff_len > 0) { kfree(orig_node->tt_buff); @@ -1173,13 +1299,14 @@ static void tt_save_orig_buffer(struct bat_priv *bat_priv, spin_unlock_bh(&orig_node->tt_buff_lock); } -static void tt_req_purge(struct bat_priv *bat_priv) +static void batadv_tt_req_purge(struct batadv_priv *bat_priv) { - struct tt_req_node *node, *safe; + struct batadv_tt_req_node *node, *safe; spin_lock_bh(&bat_priv->tt_req_list_lock); list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { - if (has_timed_out(node->issued_at, TT_REQUEST_TIMEOUT)) { + if (batadv_has_timed_out(node->issued_at, + BATADV_TT_REQUEST_TIMEOUT)) { list_del(&node->list); kfree(node); } @@ -1188,17 +1315,19 @@ 
static void tt_req_purge(struct bat_priv *bat_priv) } /* returns the pointer to the new tt_req_node struct if no request - * has already been issued for this orig_node, NULL otherwise */ -static struct tt_req_node *new_tt_req_node(struct bat_priv *bat_priv, - struct orig_node *orig_node) + * has already been issued for this orig_node, NULL otherwise + */ +static struct batadv_tt_req_node * +batadv_new_tt_req_node(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node) { - struct tt_req_node *tt_req_node_tmp, *tt_req_node = NULL; + struct batadv_tt_req_node *tt_req_node_tmp, *tt_req_node = NULL; spin_lock_bh(&bat_priv->tt_req_list_lock); list_for_each_entry(tt_req_node_tmp, &bat_priv->tt_req_list, list) { - if (compare_eth(tt_req_node_tmp, orig_node) && - !has_timed_out(tt_req_node_tmp->issued_at, - TT_REQUEST_TIMEOUT)) + if (batadv_compare_eth(tt_req_node_tmp, orig_node) && + !batadv_has_timed_out(tt_req_node_tmp->issued_at, + BATADV_TT_REQUEST_TIMEOUT)) goto unlock; } @@ -1216,63 +1345,67 @@ unlock: } /* data_ptr is useless here, but has to be kept to respect the prototype */ -static int tt_local_valid_entry(const void *entry_ptr, const void *data_ptr) +static int batadv_tt_local_valid_entry(const void *entry_ptr, + const void *data_ptr) { - const struct tt_common_entry *tt_common_entry = entry_ptr; + const struct batadv_tt_common_entry *tt_common_entry = entry_ptr; - if (tt_common_entry->flags & TT_CLIENT_NEW) + if (tt_common_entry->flags & BATADV_TT_CLIENT_NEW) return 0; return 1; } -static int tt_global_valid_entry(const void *entry_ptr, const void *data_ptr) +static int batadv_tt_global_valid(const void *entry_ptr, + const void *data_ptr) { - const struct tt_common_entry *tt_common_entry = entry_ptr; - const struct tt_global_entry *tt_global_entry; - const struct orig_node *orig_node = data_ptr; + const struct batadv_tt_common_entry *tt_common_entry = entry_ptr; + const struct batadv_tt_global_entry *tt_global_entry; + const struct batadv_orig_node *orig_node = data_ptr; - if (tt_common_entry->flags & TT_CLIENT_ROAM) + if (tt_common_entry->flags & BATADV_TT_CLIENT_ROAM) return 0; - tt_global_entry = container_of(tt_common_entry, struct tt_global_entry, + tt_global_entry = container_of(tt_common_entry, + struct batadv_tt_global_entry, common); - return tt_global_entry_has_orig(tt_global_entry, orig_node); + return batadv_tt_global_entry_has_orig(tt_global_entry, orig_node); } -static struct sk_buff *tt_response_fill_table(uint16_t tt_len, uint8_t ttvn, - struct hashtable_t *hash, - struct hard_iface *primary_if, - int (*valid_cb)(const void *, - const void *), - void *cb_data) +static struct sk_buff * +batadv_tt_response_fill_table(uint16_t tt_len, uint8_t ttvn, + struct batadv_hashtable *hash, + struct batadv_hard_iface *primary_if, + int (*valid_cb)(const void *, const void *), + void *cb_data) { - struct tt_common_entry *tt_common_entry; - struct tt_query_packet *tt_response; - struct tt_change *tt_change; + struct batadv_tt_common_entry *tt_common_entry; + struct batadv_tt_query_packet *tt_response; + struct batadv_tt_change *tt_change; struct hlist_node *node; struct hlist_head *head; struct sk_buff *skb = NULL; uint16_t tt_tot, tt_count; - ssize_t tt_query_size = sizeof(struct tt_query_packet); + ssize_t tt_query_size = sizeof(struct batadv_tt_query_packet); uint32_t i; + size_t len; if (tt_query_size + tt_len > primary_if->soft_iface->mtu) { tt_len = primary_if->soft_iface->mtu - tt_query_size; - tt_len -= tt_len % sizeof(struct tt_change); + tt_len -= tt_len % 
sizeof(struct batadv_tt_change); } - tt_tot = tt_len / sizeof(struct tt_change); + tt_tot = tt_len / sizeof(struct batadv_tt_change); - skb = dev_alloc_skb(tt_query_size + tt_len + ETH_HLEN); + len = tt_query_size + tt_len; + skb = dev_alloc_skb(len + ETH_HLEN); if (!skb) goto out; skb_reserve(skb, ETH_HLEN); - tt_response = (struct tt_query_packet *)skb_put(skb, - tt_query_size + tt_len); + tt_response = (struct batadv_tt_query_packet *)skb_put(skb, len); tt_response->ttvn = ttvn; - tt_change = (struct tt_change *)(skb->data + tt_query_size); + tt_change = (struct batadv_tt_change *)(skb->data + tt_query_size); tt_count = 0; rcu_read_lock(); @@ -1289,7 +1422,7 @@ static struct sk_buff *tt_response_fill_table(uint16_t tt_len, uint8_t ttvn, memcpy(tt_change->addr, tt_common_entry->addr, ETH_ALEN); - tt_change->flags = NO_FLAGS; + tt_change->flags = BATADV_NO_FLAGS; tt_count++; tt_change++; @@ -1298,72 +1431,78 @@ static struct sk_buff *tt_response_fill_table(uint16_t tt_len, uint8_t ttvn, rcu_read_unlock(); /* store in the message the number of entries we have successfully - * copied */ + * copied + */ tt_response->tt_data = htons(tt_count); out: return skb; } -static int send_tt_request(struct bat_priv *bat_priv, - struct orig_node *dst_orig_node, - uint8_t ttvn, uint16_t tt_crc, bool full_table) +static int batadv_send_tt_request(struct batadv_priv *bat_priv, + struct batadv_orig_node *dst_orig_node, + uint8_t ttvn, uint16_t tt_crc, + bool full_table) { struct sk_buff *skb = NULL; - struct tt_query_packet *tt_request; - struct neigh_node *neigh_node = NULL; - struct hard_iface *primary_if; - struct tt_req_node *tt_req_node = NULL; + struct batadv_tt_query_packet *tt_request; + struct batadv_neigh_node *neigh_node = NULL; + struct batadv_hard_iface *primary_if; + struct batadv_tt_req_node *tt_req_node = NULL; int ret = 1; + size_t tt_req_len; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; /* The new tt_req will be issued only if I'm not waiting for a - * reply from the same orig_node yet */ - tt_req_node = new_tt_req_node(bat_priv, dst_orig_node); + * reply from the same orig_node yet + */ + tt_req_node = batadv_new_tt_req_node(bat_priv, dst_orig_node); if (!tt_req_node) goto out; - skb = dev_alloc_skb(sizeof(struct tt_query_packet) + ETH_HLEN); + skb = dev_alloc_skb(sizeof(*tt_request) + ETH_HLEN); if (!skb) goto out; skb_reserve(skb, ETH_HLEN); - tt_request = (struct tt_query_packet *)skb_put(skb, - sizeof(struct tt_query_packet)); + tt_req_len = sizeof(*tt_request); + tt_request = (struct batadv_tt_query_packet *)skb_put(skb, tt_req_len); - tt_request->header.packet_type = BAT_TT_QUERY; - tt_request->header.version = COMPAT_VERSION; + tt_request->header.packet_type = BATADV_TT_QUERY; + tt_request->header.version = BATADV_COMPAT_VERSION; memcpy(tt_request->src, primary_if->net_dev->dev_addr, ETH_ALEN); memcpy(tt_request->dst, dst_orig_node->orig, ETH_ALEN); - tt_request->header.ttl = TTL; + tt_request->header.ttl = BATADV_TTL; tt_request->ttvn = ttvn; tt_request->tt_data = htons(tt_crc); - tt_request->flags = TT_REQUEST; + tt_request->flags = BATADV_TT_REQUEST; if (full_table) - tt_request->flags |= TT_FULL_TABLE; + tt_request->flags |= BATADV_TT_FULL_TABLE; - neigh_node = orig_node_get_router(dst_orig_node); + neigh_node = batadv_orig_node_get_router(dst_orig_node); if (!neigh_node) goto out; - bat_dbg(DBG_TT, bat_priv, - "Sending TT_REQUEST to %pM via %pM [%c]\n", - dst_orig_node->orig, 
neigh_node->addr, - (full_table ? 'F' : '.')); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Sending TT_REQUEST to %pM via %pM [%c]\n", + dst_orig_node->orig, neigh_node->addr, + (full_table ? 'F' : '.')); + + batadv_inc_counter(bat_priv, BATADV_CNT_TT_REQUEST_TX); - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = 0; out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (ret) kfree_skb(skb); if (ret && tt_req_node) { @@ -1375,39 +1514,42 @@ out: return ret; } -static bool send_other_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_request) +static bool +batadv_send_other_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_request) { - struct orig_node *req_dst_orig_node = NULL, *res_dst_orig_node = NULL; - struct neigh_node *neigh_node = NULL; - struct hard_iface *primary_if = NULL; + struct batadv_orig_node *req_dst_orig_node = NULL; + struct batadv_orig_node *res_dst_orig_node = NULL; + struct batadv_neigh_node *neigh_node = NULL; + struct batadv_hard_iface *primary_if = NULL; uint8_t orig_ttvn, req_ttvn, ttvn; int ret = false; unsigned char *tt_buff; bool full_table; uint16_t tt_len, tt_tot; struct sk_buff *skb = NULL; - struct tt_query_packet *tt_response; + struct batadv_tt_query_packet *tt_response; + size_t len; - bat_dbg(DBG_TT, bat_priv, - "Received TT_REQUEST from %pM for ttvn: %u (%pM) [%c]\n", - tt_request->src, tt_request->ttvn, tt_request->dst, - (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for ttvn: %u (%pM) [%c]\n", + tt_request->src, tt_request->ttvn, tt_request->dst, + (tt_request->flags & BATADV_TT_FULL_TABLE ? 
'F' : '.')); /* Let's get the orig node of the REAL destination */ - req_dst_orig_node = orig_hash_find(bat_priv, tt_request->dst); + req_dst_orig_node = batadv_orig_hash_find(bat_priv, tt_request->dst); if (!req_dst_orig_node) goto out; - res_dst_orig_node = orig_hash_find(bat_priv, tt_request->src); + res_dst_orig_node = batadv_orig_hash_find(bat_priv, tt_request->src); if (!res_dst_orig_node) goto out; - neigh_node = orig_node_get_router(res_dst_orig_node); + neigh_node = batadv_orig_node_get_router(res_dst_orig_node); if (!neigh_node) goto out; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; @@ -1416,71 +1558,75 @@ static bool send_other_tt_response(struct bat_priv *bat_priv, /* I don't have the requested data */ if (orig_ttvn != req_ttvn || - tt_request->tt_data != req_dst_orig_node->tt_crc) + tt_request->tt_data != htons(req_dst_orig_node->tt_crc)) goto out; /* If the full table has been explicitly requested */ - if (tt_request->flags & TT_FULL_TABLE || + if (tt_request->flags & BATADV_TT_FULL_TABLE || !req_dst_orig_node->tt_buff) full_table = true; else full_table = false; /* In this version, fragmentation is not implemented, then - * I'll send only one packet with as much TT entries as I can */ + * I'll send only one packet with as much TT entries as I can + */ if (!full_table) { spin_lock_bh(&req_dst_orig_node->tt_buff_lock); tt_len = req_dst_orig_node->tt_buff_len; - tt_tot = tt_len / sizeof(struct tt_change); + tt_tot = tt_len / sizeof(struct batadv_tt_change); - skb = dev_alloc_skb(sizeof(struct tt_query_packet) + - tt_len + ETH_HLEN); + len = sizeof(*tt_response) + tt_len; + skb = dev_alloc_skb(len + ETH_HLEN); if (!skb) goto unlock; skb_reserve(skb, ETH_HLEN); - tt_response = (struct tt_query_packet *)skb_put(skb, - sizeof(struct tt_query_packet) + tt_len); + tt_response = (struct batadv_tt_query_packet *)skb_put(skb, + len); tt_response->ttvn = req_ttvn; tt_response->tt_data = htons(tt_tot); - tt_buff = skb->data + sizeof(struct tt_query_packet); + tt_buff = skb->data + sizeof(*tt_response); /* Copy the last orig_node's OGM buffer */ memcpy(tt_buff, req_dst_orig_node->tt_buff, req_dst_orig_node->tt_buff_len); spin_unlock_bh(&req_dst_orig_node->tt_buff_lock); } else { - tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size) * - sizeof(struct tt_change); + tt_len = (uint16_t)atomic_read(&req_dst_orig_node->tt_size); + tt_len *= sizeof(struct batadv_tt_change); ttvn = (uint8_t)atomic_read(&req_dst_orig_node->last_ttvn); - skb = tt_response_fill_table(tt_len, ttvn, - bat_priv->tt_global_hash, - primary_if, tt_global_valid_entry, - req_dst_orig_node); + skb = batadv_tt_response_fill_table(tt_len, ttvn, + bat_priv->tt_global_hash, + primary_if, + batadv_tt_global_valid, + req_dst_orig_node); if (!skb) goto out; - tt_response = (struct tt_query_packet *)skb->data; + tt_response = (struct batadv_tt_query_packet *)skb->data; } - tt_response->header.packet_type = BAT_TT_QUERY; - tt_response->header.version = COMPAT_VERSION; - tt_response->header.ttl = TTL; + tt_response->header.packet_type = BATADV_TT_QUERY; + tt_response->header.version = BATADV_COMPAT_VERSION; + tt_response->header.ttl = BATADV_TTL; memcpy(tt_response->src, req_dst_orig_node->orig, ETH_ALEN); memcpy(tt_response->dst, tt_request->src, ETH_ALEN); - tt_response->flags = TT_RESPONSE; + tt_response->flags = BATADV_TT_RESPONSE; if (full_table) - tt_response->flags |= TT_FULL_TABLE; + tt_response->flags |= BATADV_TT_FULL_TABLE; - 
bat_dbg(DBG_TT, bat_priv, - "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", - res_dst_orig_node->orig, neigh_node->addr, - req_dst_orig_node->orig, req_ttvn); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Sending TT_RESPONSE %pM via %pM for %pM (ttvn: %u)\n", + res_dst_orig_node->orig, neigh_node->addr, + req_dst_orig_node->orig, req_ttvn); - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_TX); + + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = true; goto out; @@ -1489,114 +1635,122 @@ unlock: out: if (res_dst_orig_node) - orig_node_free_ref(res_dst_orig_node); + batadv_orig_node_free_ref(res_dst_orig_node); if (req_dst_orig_node) - orig_node_free_ref(req_dst_orig_node); + batadv_orig_node_free_ref(req_dst_orig_node); if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (!ret) kfree_skb(skb); return ret; } -static bool send_my_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_request) + +static bool +batadv_send_my_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_request) { - struct orig_node *orig_node = NULL; - struct neigh_node *neigh_node = NULL; - struct hard_iface *primary_if = NULL; + struct batadv_orig_node *orig_node = NULL; + struct batadv_neigh_node *neigh_node = NULL; + struct batadv_hard_iface *primary_if = NULL; uint8_t my_ttvn, req_ttvn, ttvn; int ret = false; unsigned char *tt_buff; bool full_table; uint16_t tt_len, tt_tot; struct sk_buff *skb = NULL; - struct tt_query_packet *tt_response; + struct batadv_tt_query_packet *tt_response; + size_t len; - bat_dbg(DBG_TT, bat_priv, - "Received TT_REQUEST from %pM for ttvn: %u (me) [%c]\n", - tt_request->src, tt_request->ttvn, - (tt_request->flags & TT_FULL_TABLE ? 'F' : '.')); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Received TT_REQUEST from %pM for ttvn: %u (me) [%c]\n", + tt_request->src, tt_request->ttvn, + (tt_request->flags & BATADV_TT_FULL_TABLE ? 
'F' : '.')); my_ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); req_ttvn = tt_request->ttvn; - orig_node = orig_hash_find(bat_priv, tt_request->src); + orig_node = batadv_orig_hash_find(bat_priv, tt_request->src); if (!orig_node) goto out; - neigh_node = orig_node_get_router(orig_node); + neigh_node = batadv_orig_node_get_router(orig_node); if (!neigh_node) goto out; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; /* If the full table has been explicitly requested or the gap - * is too big send the whole local translation table */ - if (tt_request->flags & TT_FULL_TABLE || my_ttvn != req_ttvn || + * is too big send the whole local translation table + */ + if (tt_request->flags & BATADV_TT_FULL_TABLE || my_ttvn != req_ttvn || !bat_priv->tt_buff) full_table = true; else full_table = false; /* In this version, fragmentation is not implemented, then - * I'll send only one packet with as much TT entries as I can */ + * I'll send only one packet with as much TT entries as I can + */ if (!full_table) { spin_lock_bh(&bat_priv->tt_buff_lock); tt_len = bat_priv->tt_buff_len; - tt_tot = tt_len / sizeof(struct tt_change); + tt_tot = tt_len / sizeof(struct batadv_tt_change); - skb = dev_alloc_skb(sizeof(struct tt_query_packet) + - tt_len + ETH_HLEN); + len = sizeof(*tt_response) + tt_len; + skb = dev_alloc_skb(len + ETH_HLEN); if (!skb) goto unlock; skb_reserve(skb, ETH_HLEN); - tt_response = (struct tt_query_packet *)skb_put(skb, - sizeof(struct tt_query_packet) + tt_len); + tt_response = (struct batadv_tt_query_packet *)skb_put(skb, + len); tt_response->ttvn = req_ttvn; tt_response->tt_data = htons(tt_tot); - tt_buff = skb->data + sizeof(struct tt_query_packet); + tt_buff = skb->data + sizeof(*tt_response); memcpy(tt_buff, bat_priv->tt_buff, bat_priv->tt_buff_len); spin_unlock_bh(&bat_priv->tt_buff_lock); } else { - tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt) * - sizeof(struct tt_change); + tt_len = (uint16_t)atomic_read(&bat_priv->num_local_tt); + tt_len *= sizeof(struct batadv_tt_change); ttvn = (uint8_t)atomic_read(&bat_priv->ttvn); - skb = tt_response_fill_table(tt_len, ttvn, - bat_priv->tt_local_hash, - primary_if, tt_local_valid_entry, - NULL); + skb = batadv_tt_response_fill_table(tt_len, ttvn, + bat_priv->tt_local_hash, + primary_if, + batadv_tt_local_valid_entry, + NULL); if (!skb) goto out; - tt_response = (struct tt_query_packet *)skb->data; + tt_response = (struct batadv_tt_query_packet *)skb->data; } - tt_response->header.packet_type = BAT_TT_QUERY; - tt_response->header.version = COMPAT_VERSION; - tt_response->header.ttl = TTL; + tt_response->header.packet_type = BATADV_TT_QUERY; + tt_response->header.version = BATADV_COMPAT_VERSION; + tt_response->header.ttl = BATADV_TTL; memcpy(tt_response->src, primary_if->net_dev->dev_addr, ETH_ALEN); memcpy(tt_response->dst, tt_request->src, ETH_ALEN); - tt_response->flags = TT_RESPONSE; + tt_response->flags = BATADV_TT_RESPONSE; if (full_table) - tt_response->flags |= TT_FULL_TABLE; + tt_response->flags |= BATADV_TT_FULL_TABLE; - bat_dbg(DBG_TT, bat_priv, - "Sending TT_RESPONSE to %pM via %pM [%c]\n", - orig_node->orig, neigh_node->addr, - (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Sending TT_RESPONSE to %pM via %pM [%c]\n", + orig_node->orig, neigh_node->addr, + (tt_response->flags & BATADV_TT_FULL_TABLE ? 
'F' : '.')); - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_inc_counter(bat_priv, BATADV_CNT_TT_RESPONSE_TX); + + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = true; goto out; @@ -1604,49 +1758,50 @@ unlock: spin_unlock_bh(&bat_priv->tt_buff_lock); out: if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); if (!ret) kfree_skb(skb); /* This packet was for me, so it doesn't need to be re-routed */ return true; } -bool send_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_request) +bool batadv_send_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_request) { - if (is_my_mac(tt_request->dst)) { + if (batadv_is_my_mac(tt_request->dst)) { /* don't answer backbone gws! */ - if (bla_is_backbone_gw_orig(bat_priv, tt_request->src)) + if (batadv_bla_is_backbone_gw_orig(bat_priv, tt_request->src)) return true; - return send_my_tt_response(bat_priv, tt_request); + return batadv_send_my_tt_response(bat_priv, tt_request); } else { - return send_other_tt_response(bat_priv, tt_request); + return batadv_send_other_tt_response(bat_priv, tt_request); } } -static void _tt_update_changes(struct bat_priv *bat_priv, - struct orig_node *orig_node, - struct tt_change *tt_change, - uint16_t tt_num_changes, uint8_t ttvn) +static void _batadv_tt_update_changes(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + struct batadv_tt_change *tt_change, + uint16_t tt_num_changes, uint8_t ttvn) { int i; + int roams; for (i = 0; i < tt_num_changes; i++) { - if ((tt_change + i)->flags & TT_CLIENT_DEL) - tt_global_del(bat_priv, orig_node, - (tt_change + i)->addr, - "tt removed by changes", - (tt_change + i)->flags & TT_CLIENT_ROAM); - else - if (!tt_global_add(bat_priv, orig_node, - (tt_change + i)->addr, ttvn, false, - (tt_change + i)->flags & - TT_CLIENT_WIFI)) + if ((tt_change + i)->flags & BATADV_TT_CLIENT_DEL) { + roams = (tt_change + i)->flags & BATADV_TT_CLIENT_ROAM; + batadv_tt_global_del(bat_priv, orig_node, + (tt_change + i)->addr, + "tt removed by changes", + roams); + } else { + if (!batadv_tt_global_add(bat_priv, orig_node, + (tt_change + i)->addr, + (tt_change + i)->flags, ttvn)) /* In case of problem while storing a * global_entry, we stop the updating * procedure without committing the @@ -1654,25 +1809,27 @@ static void _tt_update_changes(struct bat_priv *bat_priv, * corrupted data on tt_request */ return; + } } orig_node->tt_initialised = true; } -static void tt_fill_gtable(struct bat_priv *bat_priv, - struct tt_query_packet *tt_response) +static void batadv_tt_fill_gtable(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_response) { - struct orig_node *orig_node = NULL; + struct batadv_orig_node *orig_node = NULL; - orig_node = orig_hash_find(bat_priv, tt_response->src); + orig_node = batadv_orig_hash_find(bat_priv, tt_response->src); if (!orig_node) goto out; /* Purge the old table first.. 
*/ - tt_global_del_orig(bat_priv, orig_node, "Received full table"); + batadv_tt_global_del_orig(bat_priv, orig_node, "Received full table"); - _tt_update_changes(bat_priv, orig_node, - (struct tt_change *)(tt_response + 1), - tt_response->tt_data, tt_response->ttvn); + _batadv_tt_update_changes(bat_priv, orig_node, + (struct batadv_tt_change *)(tt_response + 1), + ntohs(tt_response->tt_data), + tt_response->ttvn); spin_lock_bh(&orig_node->tt_buff_lock); kfree(orig_node->tt_buff); @@ -1684,71 +1841,76 @@ static void tt_fill_gtable(struct bat_priv *bat_priv, out: if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } -static void tt_update_changes(struct bat_priv *bat_priv, - struct orig_node *orig_node, - uint16_t tt_num_changes, uint8_t ttvn, - struct tt_change *tt_change) +static void batadv_tt_update_changes(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + uint16_t tt_num_changes, uint8_t ttvn, + struct batadv_tt_change *tt_change) { - _tt_update_changes(bat_priv, orig_node, tt_change, tt_num_changes, - ttvn); + _batadv_tt_update_changes(bat_priv, orig_node, tt_change, + tt_num_changes, ttvn); - tt_save_orig_buffer(bat_priv, orig_node, (unsigned char *)tt_change, - tt_num_changes); + batadv_tt_save_orig_buffer(bat_priv, orig_node, + (unsigned char *)tt_change, tt_num_changes); atomic_set(&orig_node->last_ttvn, ttvn); } -bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr) +bool batadv_is_my_client(struct batadv_priv *bat_priv, const uint8_t *addr) { - struct tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_local_entry *tt_local_entry = NULL; bool ret = false; - tt_local_entry = tt_local_hash_find(bat_priv, addr); + tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr); if (!tt_local_entry) goto out; /* Check if the client has been logically deleted (but is kept for - * consistency purpose) */ - if (tt_local_entry->common.flags & TT_CLIENT_PENDING) + * consistency purpose) + */ + if (tt_local_entry->common.flags & BATADV_TT_CLIENT_PENDING) goto out; ret = true; out: if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); return ret; } -void handle_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_response) +void batadv_handle_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_response) { - struct tt_req_node *node, *safe; - struct orig_node *orig_node = NULL; + struct batadv_tt_req_node *node, *safe; + struct batadv_orig_node *orig_node = NULL; + struct batadv_tt_change *tt_change; - bat_dbg(DBG_TT, bat_priv, - "Received TT_RESPONSE from %pM for ttvn %d t_size: %d [%c]\n", - tt_response->src, tt_response->ttvn, tt_response->tt_data, - (tt_response->flags & TT_FULL_TABLE ? 'F' : '.')); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Received TT_RESPONSE from %pM for ttvn %d t_size: %d [%c]\n", + tt_response->src, tt_response->ttvn, + ntohs(tt_response->tt_data), + (tt_response->flags & BATADV_TT_FULL_TABLE ? 
'F' : '.')); /* we should have never asked a backbone gw */ - if (bla_is_backbone_gw_orig(bat_priv, tt_response->src)) + if (batadv_bla_is_backbone_gw_orig(bat_priv, tt_response->src)) goto out; - orig_node = orig_hash_find(bat_priv, tt_response->src); + orig_node = batadv_orig_hash_find(bat_priv, tt_response->src); if (!orig_node) goto out; - if (tt_response->flags & TT_FULL_TABLE) - tt_fill_gtable(bat_priv, tt_response); - else - tt_update_changes(bat_priv, orig_node, tt_response->tt_data, - tt_response->ttvn, - (struct tt_change *)(tt_response + 1)); + if (tt_response->flags & BATADV_TT_FULL_TABLE) { + batadv_tt_fill_gtable(bat_priv, tt_response); + } else { + tt_change = (struct batadv_tt_change *)(tt_response + 1); + batadv_tt_update_changes(bat_priv, orig_node, + ntohs(tt_response->tt_data), + tt_response->ttvn, tt_change); + } /* Delete the tt_req_node from pending tt_requests list */ spin_lock_bh(&bat_priv->tt_req_list_lock); list_for_each_entry_safe(node, safe, &bat_priv->tt_req_list, list) { - if (!compare_eth(node->addr, tt_response->src)) + if (!batadv_compare_eth(node->addr, tt_response->src)) continue; list_del(&node->list); kfree(node); @@ -1756,31 +1918,36 @@ void handle_tt_response(struct bat_priv *bat_priv, spin_unlock_bh(&bat_priv->tt_req_list_lock); /* Recalculate the CRC for this orig_node and store it */ - orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + orig_node->tt_crc = batadv_tt_global_crc(bat_priv, orig_node); /* Roaming phase is over: tables are in sync again. I can - * unset the flag */ + * unset the flag + */ orig_node->tt_poss_change = false; out: if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } -int tt_init(struct bat_priv *bat_priv) +int batadv_tt_init(struct batadv_priv *bat_priv) { - if (!tt_local_init(bat_priv)) - return 0; + int ret; - if (!tt_global_init(bat_priv)) - return 0; + ret = batadv_tt_local_init(bat_priv); + if (ret < 0) + return ret; - tt_start_timer(bat_priv); + ret = batadv_tt_global_init(bat_priv); + if (ret < 0) + return ret; + + batadv_tt_start_timer(bat_priv); return 1; } -static void tt_roam_list_free(struct bat_priv *bat_priv) +static void batadv_tt_roam_list_free(struct batadv_priv *bat_priv) { - struct tt_roam_node *node, *safe; + struct batadv_tt_roam_node *node, *safe; spin_lock_bh(&bat_priv->tt_roam_list_lock); @@ -1792,13 +1959,14 @@ static void tt_roam_list_free(struct bat_priv *bat_priv) spin_unlock_bh(&bat_priv->tt_roam_list_lock); } -static void tt_roam_purge(struct bat_priv *bat_priv) +static void batadv_tt_roam_purge(struct batadv_priv *bat_priv) { - struct tt_roam_node *node, *safe; + struct batadv_tt_roam_node *node, *safe; spin_lock_bh(&bat_priv->tt_roam_list_lock); list_for_each_entry_safe(node, safe, &bat_priv->tt_roam_list, list) { - if (!has_timed_out(node->first_time, ROAMING_MAX_TIME)) + if (!batadv_has_timed_out(node->first_time, + BATADV_ROAMING_MAX_TIME)) continue; list_del(&node->list); @@ -1811,24 +1979,27 @@ static void tt_roam_purge(struct bat_priv *bat_priv) * maximum number of possible roaming phases. In this case the ROAMING_ADV * will not be sent. 
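The hunk above renames and keeps batadv_tt_check_roam_count(), which caps how often a ROAMING_ADV may be emitted for a single client: the first roaming event opens a time window and arms a small counter, and once that counter is spent inside the window further advertisements are suppressed. The following is a minimal, self-contained userspace sketch of that pattern, not the kernel code: the window length, per-window budget and fixed-size table are illustrative assumptions, and a plain array stands in for the kernel's tt_roam_list protected by tt_roam_list_lock.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Illustrative placeholders -- not the kernel's actual values. */
#define ROAM_WINDOW_SEC		20	/* length of one roaming window */
#define ROAM_MAX_COUNT		5	/* events allowed per window */
#define ROAM_TABLE_SIZE		32	/* stand-in for the kernel's list */

struct roam_node {
	uint8_t addr[6];	/* client MAC */
	int	counter;	/* events left in the current window */
	time_t	first_time;	/* when the window was opened */
	bool	in_use;
};

static struct roam_node roam_table[ROAM_TABLE_SIZE];

/* Return true if a ROAMING_ADV may still be sent for this client. */
static bool roam_count_ok(const uint8_t *client)
{
	time_t now = time(NULL);
	struct roam_node *free_slot = NULL;

	for (int i = 0; i < ROAM_TABLE_SIZE; i++) {
		struct roam_node *node = &roam_table[i];

		if (!node->in_use) {
			if (!free_slot)
				free_slot = node;
			continue;
		}

		/* Window expired: recycle the slot. */
		if (now - node->first_time > ROAM_WINDOW_SEC) {
			node->in_use = false;
			if (!free_slot)
				free_slot = node;
			continue;
		}

		if (memcmp(node->addr, client, 6) != 0)
			continue;

		/* Known client inside its window: spend one event. */
		if (node->counter <= 0)
			return false;	/* roamed too many times */
		node->counter--;
		return true;
	}

	/* First roaming event for this client: open a new window. */
	if (!free_slot)
		return false;		/* table full; be conservative */
	memcpy(free_slot->addr, client, 6);
	free_slot->counter = ROAM_MAX_COUNT - 1;
	free_slot->first_time = now;
	free_slot->in_use = true;
	return true;
}

int main(void)
{
	const uint8_t client[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

	for (int i = 0; i < ROAM_MAX_COUNT + 2; i++)
		printf("event %d: %s\n", i,
		       roam_count_ok(client) ? "advertise" : "suppressed");
	return 0;
}

In the driver itself the counter is seeded with BATADV_ROAMING_MAX_COUNT - 1 and consumed with batadv_atomic_dec_not_zero(), while stale entries are reclaimed separately by batadv_tt_roam_purge(), as the surrounding hunks show.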
* - * returns true if the ROAMING_ADV can be sent, false otherwise */ -static bool tt_check_roam_count(struct bat_priv *bat_priv, - uint8_t *client) + * returns true if the ROAMING_ADV can be sent, false otherwise + */ +static bool batadv_tt_check_roam_count(struct batadv_priv *bat_priv, + uint8_t *client) { - struct tt_roam_node *tt_roam_node; + struct batadv_tt_roam_node *tt_roam_node; bool ret = false; spin_lock_bh(&bat_priv->tt_roam_list_lock); /* The new tt_req will be issued only if I'm not waiting for a - * reply from the same orig_node yet */ + * reply from the same orig_node yet + */ list_for_each_entry(tt_roam_node, &bat_priv->tt_roam_list, list) { - if (!compare_eth(tt_roam_node->addr, client)) + if (!batadv_compare_eth(tt_roam_node->addr, client)) continue; - if (has_timed_out(tt_roam_node->first_time, ROAMING_MAX_TIME)) + if (batadv_has_timed_out(tt_roam_node->first_time, + BATADV_ROAMING_MAX_TIME)) continue; - if (!atomic_dec_not_zero(&tt_roam_node->counter)) + if (!batadv_atomic_dec_not_zero(&tt_roam_node->counter)) /* Sorry, you roamed too many times! */ goto unlock; ret = true; @@ -1841,7 +2012,8 @@ static bool tt_check_roam_count(struct bat_priv *bat_priv, goto unlock; tt_roam_node->first_time = jiffies; - atomic_set(&tt_roam_node->counter, ROAMING_MAX_COUNT - 1); + atomic_set(&tt_roam_node->counter, + BATADV_ROAMING_MAX_COUNT - 1); memcpy(tt_roam_node->addr, client, ETH_ALEN); list_add(&tt_roam_node->list, &bat_priv->tt_roam_list); @@ -1853,97 +2025,103 @@ unlock: return ret; } -static void send_roam_adv(struct bat_priv *bat_priv, uint8_t *client, - struct orig_node *orig_node) +static void batadv_send_roam_adv(struct batadv_priv *bat_priv, uint8_t *client, + struct batadv_orig_node *orig_node) { - struct neigh_node *neigh_node = NULL; + struct batadv_neigh_node *neigh_node = NULL; struct sk_buff *skb = NULL; - struct roam_adv_packet *roam_adv_packet; + struct batadv_roam_adv_packet *roam_adv_packet; int ret = 1; - struct hard_iface *primary_if; + struct batadv_hard_iface *primary_if; + size_t len = sizeof(*roam_adv_packet); /* before going on we have to check whether the client has - * already roamed to us too many times */ - if (!tt_check_roam_count(bat_priv, client)) + * already roamed to us too many times + */ + if (!batadv_tt_check_roam_count(bat_priv, client)) goto out; - skb = dev_alloc_skb(sizeof(struct roam_adv_packet) + ETH_HLEN); + skb = dev_alloc_skb(sizeof(*roam_adv_packet) + ETH_HLEN); if (!skb) goto out; skb_reserve(skb, ETH_HLEN); - roam_adv_packet = (struct roam_adv_packet *)skb_put(skb, - sizeof(struct roam_adv_packet)); + roam_adv_packet = (struct batadv_roam_adv_packet *)skb_put(skb, len); - roam_adv_packet->header.packet_type = BAT_ROAM_ADV; - roam_adv_packet->header.version = COMPAT_VERSION; - roam_adv_packet->header.ttl = TTL; - primary_if = primary_if_get_selected(bat_priv); + roam_adv_packet->header.packet_type = BATADV_ROAM_ADV; + roam_adv_packet->header.version = BATADV_COMPAT_VERSION; + roam_adv_packet->header.ttl = BATADV_TTL; + roam_adv_packet->reserved = 0; + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; memcpy(roam_adv_packet->src, primary_if->net_dev->dev_addr, ETH_ALEN); - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); memcpy(roam_adv_packet->dst, orig_node->orig, ETH_ALEN); memcpy(roam_adv_packet->client, client, ETH_ALEN); - neigh_node = orig_node_get_router(orig_node); + neigh_node = batadv_orig_node_get_router(orig_node); if (!neigh_node) goto out; - bat_dbg(DBG_TT, bat_priv, - 
"Sending ROAMING_ADV to %pM (client %pM) via %pM\n", - orig_node->orig, client, neigh_node->addr); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Sending ROAMING_ADV to %pM (client %pM) via %pM\n", + orig_node->orig, client, neigh_node->addr); + + batadv_inc_counter(bat_priv, BATADV_CNT_TT_ROAM_ADV_TX); - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = 0; out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (ret) kfree_skb(skb); return; } -static void tt_purge(struct work_struct *work) +static void batadv_tt_purge(struct work_struct *work) { - struct delayed_work *delayed_work = - container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, tt_work); + struct delayed_work *delayed_work; + struct batadv_priv *bat_priv; - tt_local_purge(bat_priv); - tt_global_roam_purge(bat_priv); - tt_req_purge(bat_priv); - tt_roam_purge(bat_priv); + delayed_work = container_of(work, struct delayed_work, work); + bat_priv = container_of(delayed_work, struct batadv_priv, tt_work); - tt_start_timer(bat_priv); + batadv_tt_local_purge(bat_priv); + batadv_tt_global_roam_purge(bat_priv); + batadv_tt_req_purge(bat_priv); + batadv_tt_roam_purge(bat_priv); + + batadv_tt_start_timer(bat_priv); } -void tt_free(struct bat_priv *bat_priv) +void batadv_tt_free(struct batadv_priv *bat_priv) { cancel_delayed_work_sync(&bat_priv->tt_work); - tt_local_table_free(bat_priv); - tt_global_table_free(bat_priv); - tt_req_list_free(bat_priv); - tt_changes_list_free(bat_priv); - tt_roam_list_free(bat_priv); + batadv_tt_local_table_free(bat_priv); + batadv_tt_global_table_free(bat_priv); + batadv_tt_req_list_free(bat_priv); + batadv_tt_changes_list_free(bat_priv); + batadv_tt_roam_list_free(bat_priv); kfree(bat_priv->tt_buff); } /* This function will enable or disable the specified flags for all the entries - * in the given hash table and returns the number of modified entries */ -static uint16_t tt_set_flags(struct hashtable_t *hash, uint16_t flags, - bool enable) + * in the given hash table and returns the number of modified entries + */ +static uint16_t batadv_tt_set_flags(struct batadv_hashtable *hash, + uint16_t flags, bool enable) { uint32_t i; uint16_t changed_num = 0; struct hlist_head *head; struct hlist_node *node; - struct tt_common_entry *tt_common_entry; + struct batadv_tt_common_entry *tt_common_entry; if (!hash) goto out; @@ -1971,12 +2149,12 @@ out: return changed_num; } -/* Purge out all the tt local entries marked with TT_CLIENT_PENDING */ -static void tt_local_purge_pending_clients(struct bat_priv *bat_priv) +/* Purge out all the tt local entries marked with BATADV_TT_CLIENT_PENDING */ +static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv) { - struct hashtable_t *hash = bat_priv->tt_local_hash; - struct tt_common_entry *tt_common_entry; - struct tt_local_entry *tt_local_entry; + struct batadv_hashtable *hash = bat_priv->tt_local_hash; + struct batadv_tt_common_entry *tt_common; + struct batadv_tt_local_entry *tt_local; struct hlist_node *node, *node_tmp; struct hlist_head *head; spinlock_t *list_lock; /* protects write access to the hash lists */ @@ -1990,103 +2168,149 @@ static void tt_local_purge_pending_clients(struct bat_priv *bat_priv) list_lock = &hash->list_locks[i]; spin_lock_bh(list_lock); - hlist_for_each_entry_safe(tt_common_entry, node, node_tmp, - head, hash_entry) { - if 
(!(tt_common_entry->flags & TT_CLIENT_PENDING)) + hlist_for_each_entry_safe(tt_common, node, node_tmp, head, + hash_entry) { + if (!(tt_common->flags & BATADV_TT_CLIENT_PENDING)) continue; - bat_dbg(DBG_TT, bat_priv, - "Deleting local tt entry (%pM): pending\n", - tt_common_entry->addr); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Deleting local tt entry (%pM): pending\n", + tt_common->addr); atomic_dec(&bat_priv->num_local_tt); hlist_del_rcu(node); - tt_local_entry = container_of(tt_common_entry, - struct tt_local_entry, - common); - tt_local_entry_free_ref(tt_local_entry); + tt_local = container_of(tt_common, + struct batadv_tt_local_entry, + common); + batadv_tt_local_entry_free_ref(tt_local); } spin_unlock_bh(list_lock); } } -void tt_commit_changes(struct bat_priv *bat_priv) +static int batadv_tt_commit_changes(struct batadv_priv *bat_priv, + unsigned char **packet_buff, + int *packet_buff_len, int packet_min_len) { - uint16_t changed_num = tt_set_flags(bat_priv->tt_local_hash, - TT_CLIENT_NEW, false); - /* all the reset entries have now to be effectively counted as local - * entries */ + uint16_t changed_num = 0; + + if (atomic_read(&bat_priv->tt_local_changes) < 1) + return -ENOENT; + + changed_num = batadv_tt_set_flags(bat_priv->tt_local_hash, + BATADV_TT_CLIENT_NEW, false); + + /* all reset entries have to be counted as local entries */ atomic_add(changed_num, &bat_priv->num_local_tt); - tt_local_purge_pending_clients(bat_priv); + batadv_tt_local_purge_pending_clients(bat_priv); + bat_priv->tt_crc = batadv_tt_local_crc(bat_priv); /* Increment the TTVN only once per OGM interval */ atomic_inc(&bat_priv->ttvn); - bat_dbg(DBG_TT, bat_priv, "Local changes committed, updating to ttvn %u\n", - (uint8_t)atomic_read(&bat_priv->ttvn)); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "Local changes committed, updating to ttvn %u\n", + (uint8_t)atomic_read(&bat_priv->ttvn)); bat_priv->tt_poss_change = false; + + /* reset the sending counter */ + atomic_set(&bat_priv->tt_ogm_append_cnt, BATADV_TT_OGM_APPEND_MAX); + + return batadv_tt_changes_fill_buff(bat_priv, packet_buff, + packet_buff_len, packet_min_len); +} + +/* when calling this function (hard_iface == primary_if) has to be true */ +int batadv_tt_append_diff(struct batadv_priv *bat_priv, + unsigned char **packet_buff, int *packet_buff_len, + int packet_min_len) +{ + int tt_num_changes; + + /* if at least one change happened */ + tt_num_changes = batadv_tt_commit_changes(bat_priv, packet_buff, + packet_buff_len, + packet_min_len); + + /* if the changes have been sent often enough */ + if ((tt_num_changes < 0) && + (!batadv_atomic_dec_not_zero(&bat_priv->tt_ogm_append_cnt))) { + batadv_tt_realloc_packet_buff(packet_buff, packet_buff_len, + packet_min_len, packet_min_len); + tt_num_changes = 0; + } + + return tt_num_changes; } -bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst) +bool batadv_is_ap_isolated(struct batadv_priv *bat_priv, uint8_t *src, + uint8_t *dst) { - struct tt_local_entry *tt_local_entry = NULL; - struct tt_global_entry *tt_global_entry = NULL; + struct batadv_tt_local_entry *tt_local_entry = NULL; + struct batadv_tt_global_entry *tt_global_entry = NULL; bool ret = false; if (!atomic_read(&bat_priv->ap_isolation)) goto out; - tt_local_entry = tt_local_hash_find(bat_priv, dst); + tt_local_entry = batadv_tt_local_hash_find(bat_priv, dst); if (!tt_local_entry) goto out; - tt_global_entry = tt_global_hash_find(bat_priv, src); + tt_global_entry = batadv_tt_global_hash_find(bat_priv, src); if (!tt_global_entry) 
goto out; - if (!_is_ap_isolated(tt_local_entry, tt_global_entry)) + if (!_batadv_is_ap_isolated(tt_local_entry, tt_global_entry)) goto out; ret = true; out: if (tt_global_entry) - tt_global_entry_free_ref(tt_global_entry); + batadv_tt_global_entry_free_ref(tt_global_entry); if (tt_local_entry) - tt_local_entry_free_ref(tt_local_entry); + batadv_tt_local_entry_free_ref(tt_local_entry); return ret; } -void tt_update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_buff, uint8_t tt_num_changes, - uint8_t ttvn, uint16_t tt_crc) +void batadv_tt_update_orig(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc) { uint8_t orig_ttvn = (uint8_t)atomic_read(&orig_node->last_ttvn); bool full_table = true; + struct batadv_tt_change *tt_change; /* don't care about a backbone gateways updates. */ - if (bla_is_backbone_gw_orig(bat_priv, orig_node->orig)) + if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig)) return; /* orig table not initialised AND first diff is in the OGM OR the ttvn - * increased by one -> we can apply the attached changes */ + * increased by one -> we can apply the attached changes + */ if ((!orig_node->tt_initialised && ttvn == 1) || ttvn - orig_ttvn == 1) { /* the OGM could not contain the changes due to their size or - * because they have already been sent TT_OGM_APPEND_MAX times. - * In this case send a tt request */ + * because they have already been sent BATADV_TT_OGM_APPEND_MAX + * times. + * In this case send a tt request + */ if (!tt_num_changes) { full_table = false; goto request_table; } - tt_update_changes(bat_priv, orig_node, tt_num_changes, ttvn, - (struct tt_change *)tt_buff); + tt_change = (struct batadv_tt_change *)tt_buff; + batadv_tt_update_changes(bat_priv, orig_node, tt_num_changes, + ttvn, tt_change); /* Even if we received the precomputed crc with the OGM, we * prefer to recompute it to spot any possible inconsistency - * in the global table */ - orig_node->tt_crc = tt_global_crc(bat_priv, orig_node); + * in the global table + */ + orig_node->tt_crc = batadv_tt_global_crc(bat_priv, orig_node); /* The ttvn alone is not enough to guarantee consistency * because a single value could represent different states @@ -2095,26 +2319,28 @@ void tt_update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, * consistent or not. E.g. a node could disconnect while its * ttvn is X and reconnect on ttvn = X + TTVN_MAX: in this case * checking the CRC value is mandatory to detect the - * inconsistency */ + * inconsistency + */ if (orig_node->tt_crc != tt_crc) goto request_table; /* Roaming phase is over: tables are in sync again. I can - * unset the flag */ + * unset the flag + */ orig_node->tt_poss_change = false; } else { /* if we missed more than one change or our tables are not - * in sync anymore -> request fresh tt data */ - + * in sync anymore -> request fresh tt data + */ if (!orig_node->tt_initialised || ttvn != orig_ttvn || orig_node->tt_crc != tt_crc) { request_table: - bat_dbg(DBG_TT, bat_priv, - "TT inconsistency for %pM. Need to retrieve the correct information (ttvn: %u last_ttvn: %u crc: %u last_crc: %u num_changes: %u)\n", - orig_node->orig, ttvn, orig_ttvn, tt_crc, - orig_node->tt_crc, tt_num_changes); - send_tt_request(bat_priv, orig_node, ttvn, tt_crc, - full_table); + batadv_dbg(BATADV_DBG_TT, bat_priv, + "TT inconsistency for %pM. 
Need to retrieve the correct information (ttvn: %u last_ttvn: %u crc: %u last_crc: %u num_changes: %u)\n", + orig_node->orig, ttvn, orig_ttvn, tt_crc, + orig_node->tt_crc, tt_num_changes); + batadv_send_tt_request(bat_priv, orig_node, ttvn, + tt_crc, full_table); return; } } @@ -2124,17 +2350,18 @@ request_table: * originator to another one. This entry is kept is still kept for consistency * purposes */ -bool tt_global_client_is_roaming(struct bat_priv *bat_priv, uint8_t *addr) +bool batadv_tt_global_client_is_roaming(struct batadv_priv *bat_priv, + uint8_t *addr) { - struct tt_global_entry *tt_global_entry; + struct batadv_tt_global_entry *tt_global_entry; bool ret = false; - tt_global_entry = tt_global_hash_find(bat_priv, addr); + tt_global_entry = batadv_tt_global_hash_find(bat_priv, addr); if (!tt_global_entry) goto out; - ret = tt_global_entry->common.flags & TT_CLIENT_ROAM; - tt_global_entry_free_ref(tt_global_entry); + ret = tt_global_entry->common.flags & BATADV_TT_CLIENT_ROAM; + batadv_tt_global_entry_free_ref(tt_global_entry); out: return ret; } diff --git a/net/batman-adv/translation-table.h b/net/batman-adv/translation-table.h index c43374dc364d..ffa87355096b 100644 --- a/net/batman-adv/translation-table.h +++ b/net/batman-adv/translation-table.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich, Antonio Quartulli * @@ -16,44 +15,50 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ #define _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ -int tt_len(int changes_num); -int tt_changes_fill_buffer(struct bat_priv *bat_priv, - unsigned char *buff, int buff_len); -int tt_init(struct bat_priv *bat_priv); -void tt_local_add(struct net_device *soft_iface, const uint8_t *addr, - int ifindex); -void tt_local_remove(struct bat_priv *bat_priv, - const uint8_t *addr, const char *message, bool roaming); -int tt_local_seq_print_text(struct seq_file *seq, void *offset); -void tt_global_add_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_buff, int tt_buff_len); -int tt_global_add(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *addr, uint8_t ttvn, bool roaming, - bool wifi); -int tt_global_seq_print_text(struct seq_file *seq, void *offset); -void tt_global_del_orig(struct bat_priv *bat_priv, - struct orig_node *orig_node, const char *message); -struct orig_node *transtable_search(struct bat_priv *bat_priv, - const uint8_t *src, const uint8_t *addr); -uint16_t tt_local_crc(struct bat_priv *bat_priv); -void tt_free(struct bat_priv *bat_priv); -bool send_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_request); -bool is_my_client(struct bat_priv *bat_priv, const uint8_t *addr); -void handle_tt_response(struct bat_priv *bat_priv, - struct tt_query_packet *tt_response); -void tt_commit_changes(struct bat_priv *bat_priv); -bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst); -void tt_update_orig(struct bat_priv *bat_priv, struct orig_node *orig_node, - const unsigned char *tt_buff, uint8_t tt_num_changes, - uint8_t ttvn, uint16_t tt_crc); -bool tt_global_client_is_roaming(struct bat_priv *bat_priv, uint8_t *addr); +int batadv_tt_len(int changes_num); +int batadv_tt_init(struct batadv_priv *bat_priv); +void batadv_tt_local_add(struct 
net_device *soft_iface, const uint8_t *addr, + int ifindex); +void batadv_tt_local_remove(struct batadv_priv *bat_priv, + const uint8_t *addr, const char *message, + bool roaming); +int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset); +void batadv_tt_global_add_orig(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *tt_buff, int tt_buff_len); +int batadv_tt_global_add(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *addr, uint8_t flags, + uint8_t ttvn); +int batadv_tt_global_seq_print_text(struct seq_file *seq, void *offset); +void batadv_tt_global_del_orig(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const char *message); +struct batadv_orig_node *batadv_transtable_search(struct batadv_priv *bat_priv, + const uint8_t *src, + const uint8_t *addr); +void batadv_tt_free(struct batadv_priv *bat_priv); +bool batadv_send_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_request); +bool batadv_is_my_client(struct batadv_priv *bat_priv, const uint8_t *addr); +void batadv_handle_tt_response(struct batadv_priv *bat_priv, + struct batadv_tt_query_packet *tt_response); +bool batadv_is_ap_isolated(struct batadv_priv *bat_priv, uint8_t *src, + uint8_t *dst); +void batadv_tt_update_orig(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, + const unsigned char *tt_buff, uint8_t tt_num_changes, + uint8_t ttvn, uint16_t tt_crc); +int batadv_tt_append_diff(struct batadv_priv *bat_priv, + unsigned char **packet_buff, int *packet_buff_len, + int packet_min_len); +bool batadv_tt_global_client_is_roaming(struct batadv_priv *bat_priv, + uint8_t *addr); #endif /* _NET_BATMAN_ADV_TRANSLATION_TABLE_H_ */ diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h index 61308e8016ff..12635fd2c3d3 100644 --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2012 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich * @@ -16,24 +15,20 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ - - #ifndef _NET_BATMAN_ADV_TYPES_H_ #define _NET_BATMAN_ADV_TYPES_H_ #include "packet.h" #include "bitarray.h" +#include <linux/kernel.h> -#define BAT_HEADER_LEN (ETH_HLEN + \ - ((sizeof(struct unicast_packet) > sizeof(struct bcast_packet) ? 
\ - sizeof(struct unicast_packet) : \ - sizeof(struct bcast_packet)))) - +#define BATADV_HEADER_LEN \ + (ETH_HLEN + max(sizeof(struct batadv_unicast_packet), \ + sizeof(struct batadv_bcast_packet))) -struct hard_iface { +struct batadv_hard_iface { struct list_head list; int16_t if_num; char if_status; @@ -50,7 +45,7 @@ struct hard_iface { }; /** - * orig_node - structure for orig_list maintaining nodes of mesh + * struct batadv_orig_node - structure for orig_list maintaining nodes of mesh * @primary_addr: hosts primary interface address * @last_seen: when last packet from this node was received * @bcast_seqno_reset: time when the broadcast seqno window was reset @@ -64,10 +59,10 @@ struct hard_iface { * @candidates: how many candidates are available * @selected: next bonding candidate */ -struct orig_node { +struct batadv_orig_node { uint8_t orig[ETH_ALEN]; uint8_t primary_addr[ETH_ALEN]; - struct neigh_node __rcu *router; /* rcu protected pointer */ + struct batadv_neigh_node __rcu *router; /* rcu protected pointer */ unsigned long *bcast_own; uint8_t *bcast_own_sum; unsigned long last_seen; @@ -86,11 +81,12 @@ struct orig_node { * If true, then I sent a Roaming_adv to this orig_node and I have to * inspect every packet directed to it to check whether it is still * the true destination or not. This flag will be reset to false as - * soon as I receive a new TTVN from this orig_node */ + * soon as I receive a new TTVN from this orig_node + */ bool tt_poss_change; uint32_t last_real_seqno; uint8_t last_ttl; - DECLARE_BITMAP(bcast_bits, TQ_LOCAL_WINDOW_SIZE); + DECLARE_BITMAP(bcast_bits, BATADV_TQ_LOCAL_WINDOW_SIZE); uint32_t last_bcast_seqno; struct hlist_head neigh_list; struct list_head frag_list; @@ -98,10 +94,11 @@ struct orig_node { atomic_t refcount; struct rcu_head rcu; struct hlist_node hash_entry; - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; unsigned long last_frag_packet; /* ogm_cnt_lock protects: bcast_own, bcast_own_sum, - * neigh_node->real_bits, neigh_node->real_packet_count */ + * neigh_node->real_bits, neigh_node->real_packet_count + */ spinlock_t ogm_cnt_lock; /* bcast_seqno_lock protects bcast_bits, last_bcast_seqno */ spinlock_t bcast_seqno_lock; @@ -110,47 +107,63 @@ struct orig_node { struct list_head bond_list; }; -struct gw_node { +struct batadv_gw_node { struct hlist_node list; - struct orig_node *orig_node; + struct batadv_orig_node *orig_node; unsigned long deleted; atomic_t refcount; struct rcu_head rcu; }; -/** - * neigh_node +/* batadv_neigh_node * @last_seen: when last packet via this neighbor was received */ -struct neigh_node { +struct batadv_neigh_node { struct hlist_node list; uint8_t addr[ETH_ALEN]; uint8_t real_packet_count; - uint8_t tq_recv[TQ_GLOBAL_WINDOW_SIZE]; + uint8_t tq_recv[BATADV_TQ_GLOBAL_WINDOW_SIZE]; uint8_t tq_index; uint8_t tq_avg; uint8_t last_ttl; struct list_head bonding_list; unsigned long last_seen; - DECLARE_BITMAP(real_bits, TQ_LOCAL_WINDOW_SIZE); + DECLARE_BITMAP(real_bits, BATADV_TQ_LOCAL_WINDOW_SIZE); atomic_t refcount; struct rcu_head rcu; - struct orig_node *orig_node; - struct hard_iface *if_incoming; + struct batadv_orig_node *orig_node; + struct batadv_hard_iface *if_incoming; spinlock_t lq_update_lock; /* protects: tq_recv, tq_index */ }; #ifdef CONFIG_BATMAN_ADV_BLA -struct bcast_duplist_entry { +struct batadv_bcast_duplist_entry { uint8_t orig[ETH_ALEN]; uint16_t crc; unsigned long entrytime; }; #endif -struct bat_priv { +enum batadv_counters { + BATADV_CNT_FORWARD, + BATADV_CNT_FORWARD_BYTES, + 
BATADV_CNT_MGMT_TX, + BATADV_CNT_MGMT_TX_BYTES, + BATADV_CNT_MGMT_RX, + BATADV_CNT_MGMT_RX_BYTES, + BATADV_CNT_TT_REQUEST_TX, + BATADV_CNT_TT_REQUEST_RX, + BATADV_CNT_TT_RESPONSE_TX, + BATADV_CNT_TT_RESPONSE_RX, + BATADV_CNT_TT_ROAM_ADV_TX, + BATADV_CNT_TT_ROAM_ADV_RX, + BATADV_CNT_NUM, +}; + +struct batadv_priv { atomic_t mesh_state; struct net_device_stats stats; + uint64_t __percpu *bat_counters; /* Per cpu counters */ atomic_t aggregated_ogms; /* boolean */ atomic_t bonding; /* boolean */ atomic_t fragmentation; /* boolean */ @@ -174,10 +187,11 @@ struct bat_priv { * If true, then I received a Roaming_adv and I have to inspect every * packet directed to me to check whether I am still the true * destination or not. This flag will be reset to false as soon as I - * increase my TTVN */ + * increase my TTVN + */ bool tt_poss_change; char num_ifaces; - struct debug_log *debug_log; + struct batadv_debug_log *debug_log; struct kobject *mesh_obj; struct dentry *debug_dir; struct hlist_head forw_bat_list; @@ -185,20 +199,20 @@ struct bat_priv { struct hlist_head gw_list; struct list_head tt_changes_list; /* tracks changes in a OGM int */ struct list_head vis_send_list; - struct hashtable_t *orig_hash; - struct hashtable_t *tt_local_hash; - struct hashtable_t *tt_global_hash; + struct batadv_hashtable *orig_hash; + struct batadv_hashtable *tt_local_hash; + struct batadv_hashtable *tt_global_hash; #ifdef CONFIG_BATMAN_ADV_BLA - struct hashtable_t *claim_hash; - struct hashtable_t *backbone_hash; + struct batadv_hashtable *claim_hash; + struct batadv_hashtable *backbone_hash; #endif struct list_head tt_req_list; /* list of pending tt_requests */ struct list_head tt_roam_list; - struct hashtable_t *vis_hash; + struct batadv_hashtable *vis_hash; #ifdef CONFIG_BATMAN_ADV_BLA - struct bcast_duplist_entry bcast_duplist[DUPLIST_SIZE]; + struct batadv_bcast_duplist_entry bcast_duplist[BATADV_DUPLIST_SIZE]; int bcast_duplist_curr; - struct bla_claim_dst claim_dest; + struct batadv_bla_claim_dst claim_dest; #endif spinlock_t forw_bat_list_lock; /* protects forw_bat_list */ spinlock_t forw_bcast_list_lock; /* protects */ @@ -210,7 +224,7 @@ struct bat_priv { spinlock_t vis_list_lock; /* protects vis_info::recv_list */ atomic_t num_local_tt; /* Checksum of the local table, recomputed before sending a new OGM */ - atomic_t tt_crc; + uint16_t tt_crc; unsigned char *tt_buff; int16_t tt_buff_len; spinlock_t tt_buff_lock; /* protects tt_buff */ @@ -218,29 +232,29 @@ struct bat_priv { struct delayed_work orig_work; struct delayed_work vis_work; struct delayed_work bla_work; - struct gw_node __rcu *curr_gw; /* rcu protected pointer */ + struct batadv_gw_node __rcu *curr_gw; /* rcu protected pointer */ atomic_t gw_reselect; - struct hard_iface __rcu *primary_if; /* rcu protected pointer */ - struct vis_info *my_vis_info; - struct bat_algo_ops *bat_algo_ops; + struct batadv_hard_iface __rcu *primary_if; /* rcu protected pointer */ + struct batadv_vis_info *my_vis_info; + struct batadv_algo_ops *bat_algo_ops; }; -struct socket_client { +struct batadv_socket_client { struct list_head queue_list; unsigned int queue_len; unsigned char index; spinlock_t lock; /* protects queue_list, queue_len, index */ wait_queue_head_t queue_wait; - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; }; -struct socket_packet { +struct batadv_socket_packet { struct list_head list; size_t icmp_len; - struct icmp_packet_rr icmp_packet; + struct batadv_icmp_packet_rr icmp_packet; }; -struct tt_common_entry { +struct 
batadv_tt_common_entry { uint8_t addr[ETH_ALEN]; struct hlist_node hash_entry; uint16_t flags; @@ -248,31 +262,31 @@ struct tt_common_entry { struct rcu_head rcu; }; -struct tt_local_entry { - struct tt_common_entry common; +struct batadv_tt_local_entry { + struct batadv_tt_common_entry common; unsigned long last_seen; }; -struct tt_global_entry { - struct tt_common_entry common; +struct batadv_tt_global_entry { + struct batadv_tt_common_entry common; struct hlist_head orig_list; spinlock_t list_lock; /* protects the list */ unsigned long roam_at; /* time at which TT_GLOBAL_ROAM was set */ }; -struct tt_orig_list_entry { - struct orig_node *orig_node; +struct batadv_tt_orig_list_entry { + struct batadv_orig_node *orig_node; uint8_t ttvn; struct rcu_head rcu; struct hlist_node list; }; #ifdef CONFIG_BATMAN_ADV_BLA -struct backbone_gw { +struct batadv_backbone_gw { uint8_t orig[ETH_ALEN]; short vid; /* used VLAN ID */ struct hlist_node hash_entry; - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; unsigned long lasttime; /* last time we heard of this backbone gw */ atomic_t request_sent; atomic_t refcount; @@ -280,10 +294,10 @@ struct backbone_gw { uint16_t crc; /* crc checksum over all claims */ }; -struct claim { +struct batadv_claim { uint8_t addr[ETH_ALEN]; short vid; - struct backbone_gw *backbone_gw; + struct batadv_backbone_gw *backbone_gw; unsigned long lasttime; /* last time we heard of claim (locals only) */ struct rcu_head rcu; atomic_t refcount; @@ -291,29 +305,28 @@ struct claim { }; #endif -struct tt_change_node { +struct batadv_tt_change_node { struct list_head list; - struct tt_change change; + struct batadv_tt_change change; }; -struct tt_req_node { +struct batadv_tt_req_node { uint8_t addr[ETH_ALEN]; unsigned long issued_at; struct list_head list; }; -struct tt_roam_node { +struct batadv_tt_roam_node { uint8_t addr[ETH_ALEN]; atomic_t counter; unsigned long first_time; struct list_head list; }; -/** - * forw_packet - structure for forw_list maintaining packets to be +/* forw_packet - structure for forw_list maintaining packets to be * send/forwarded */ -struct forw_packet { +struct batadv_forw_packet { struct hlist_node list; unsigned long send_time; uint8_t own; @@ -322,76 +335,76 @@ struct forw_packet { uint32_t direct_link_flags; uint8_t num_packets; struct delayed_work delayed_work; - struct hard_iface *if_incoming; + struct batadv_hard_iface *if_incoming; }; /* While scanning for vis-entries of a particular vis-originator * this list collects its interfaces to create a subgraph/cluster * out of them later */ -struct if_list_entry { +struct batadv_if_list_entry { uint8_t addr[ETH_ALEN]; bool primary; struct hlist_node list; }; -struct debug_log { - char log_buff[LOG_BUF_LEN]; +struct batadv_debug_log { + char log_buff[BATADV_LOG_BUF_LEN]; unsigned long log_start; unsigned long log_end; spinlock_t lock; /* protects log_buff, log_start and log_end */ wait_queue_head_t queue_wait; }; -struct frag_packet_list_entry { +struct batadv_frag_packet_list_entry { struct list_head list; uint16_t seqno; struct sk_buff *skb; }; -struct vis_info { +struct batadv_vis_info { unsigned long first_seen; /* list of server-neighbors we received a vis-packet - * from. we should not reply to them. */ + * from. we should not reply to them. + */ struct list_head recv_list; struct list_head send_list; struct kref refcount; struct hlist_node hash_entry; - struct bat_priv *bat_priv; + struct batadv_priv *bat_priv; /* this packet might be part of the vis send queue. 
*/ struct sk_buff *skb_packet; - /* vis_info may follow here*/ + /* vis_info may follow here */ } __packed; -struct vis_info_entry { +struct batadv_vis_info_entry { uint8_t src[ETH_ALEN]; uint8_t dest[ETH_ALEN]; uint8_t quality; /* quality = 0 client */ } __packed; -struct recvlist_node { +struct batadv_recvlist_node { struct list_head list; uint8_t mac[ETH_ALEN]; }; -struct bat_algo_ops { +struct batadv_algo_ops { struct hlist_node list; char *name; /* init routing info when hard-interface is enabled */ - int (*bat_iface_enable)(struct hard_iface *hard_iface); + int (*bat_iface_enable)(struct batadv_hard_iface *hard_iface); /* de-init routing info when hard-interface is disabled */ - void (*bat_iface_disable)(struct hard_iface *hard_iface); + void (*bat_iface_disable)(struct batadv_hard_iface *hard_iface); /* (re-)init mac addresses of the protocol information * belonging to this hard-interface */ - void (*bat_iface_update_mac)(struct hard_iface *hard_iface); + void (*bat_iface_update_mac)(struct batadv_hard_iface *hard_iface); /* called when primary interface is selected / changed */ - void (*bat_primary_iface_set)(struct hard_iface *hard_iface); + void (*bat_primary_iface_set)(struct batadv_hard_iface *hard_iface); /* prepare a new outgoing OGM for the send queue */ - void (*bat_ogm_schedule)(struct hard_iface *hard_iface, - int tt_num_changes); + void (*bat_ogm_schedule)(struct batadv_hard_iface *hard_iface); /* send scheduled OGM */ - void (*bat_ogm_emit)(struct forw_packet *forw_packet); + void (*bat_ogm_emit)(struct batadv_forw_packet *forw_packet); }; #endif /* _NET_BATMAN_ADV_TYPES_H_ */ diff --git a/net/batman-adv/unicast.c b/net/batman-adv/unicast.c index 74175c210858..00164645b3f7 100644 --- a/net/batman-adv/unicast.c +++ b/net/batman-adv/unicast.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2012 B.A.T.M.A.N. 
contributors: * * Andreas Langer * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -31,19 +29,20 @@ #include "hard-interface.h" -static struct sk_buff *frag_merge_packet(struct list_head *head, - struct frag_packet_list_entry *tfp, - struct sk_buff *skb) +static struct sk_buff * +batadv_frag_merge_packet(struct list_head *head, + struct batadv_frag_packet_list_entry *tfp, + struct sk_buff *skb) { - struct unicast_frag_packet *up = - (struct unicast_frag_packet *)skb->data; + struct batadv_unicast_frag_packet *up; struct sk_buff *tmp_skb; - struct unicast_packet *unicast_packet; + struct batadv_unicast_packet *unicast_packet; int hdr_len = sizeof(*unicast_packet); int uni_diff = sizeof(*up) - hdr_len; + up = (struct batadv_unicast_frag_packet *)skb->data; /* set skb to the first part and tmp_skb to the second part */ - if (up->flags & UNI_FRAG_HEAD) { + if (up->flags & BATADV_UNI_FRAG_HEAD) { tmp_skb = tfp->skb; } else { tmp_skb = skb; @@ -66,8 +65,9 @@ static struct sk_buff *frag_merge_packet(struct list_head *head, kfree_skb(tmp_skb); memmove(skb->data + uni_diff, skb->data, hdr_len); - unicast_packet = (struct unicast_packet *)skb_pull(skb, uni_diff); - unicast_packet->header.packet_type = BAT_UNICAST; + unicast_packet = (struct batadv_unicast_packet *)skb_pull(skb, + uni_diff); + unicast_packet->header.packet_type = BATADV_UNICAST; return skb; @@ -77,11 +77,13 @@ err: return NULL; } -static void frag_create_entry(struct list_head *head, struct sk_buff *skb) +static void batadv_frag_create_entry(struct list_head *head, + struct sk_buff *skb) { - struct frag_packet_list_entry *tfp; - struct unicast_frag_packet *up = - (struct unicast_frag_packet *)skb->data; + struct batadv_frag_packet_list_entry *tfp; + struct batadv_unicast_frag_packet *up; + + up = (struct batadv_unicast_frag_packet *)skb->data; /* free and oldest packets stand at the end */ tfp = list_entry((head)->prev, typeof(*tfp), list); @@ -93,15 +95,15 @@ static void frag_create_entry(struct list_head *head, struct sk_buff *skb) return; } -static int frag_create_buffer(struct list_head *head) +static int batadv_frag_create_buffer(struct list_head *head) { int i; - struct frag_packet_list_entry *tfp; + struct batadv_frag_packet_list_entry *tfp; - for (i = 0; i < FRAG_BUFFER_SIZE; i++) { + for (i = 0; i < BATADV_FRAG_BUFFER_SIZE; i++) { tfp = kmalloc(sizeof(*tfp), GFP_ATOMIC); if (!tfp) { - frag_list_free(head); + batadv_frag_list_free(head); return -ENOMEM; } tfp->skb = NULL; @@ -113,14 +115,15 @@ static int frag_create_buffer(struct list_head *head) return 0; } -static struct frag_packet_list_entry *frag_search_packet(struct list_head *head, - const struct unicast_frag_packet *up) +static struct batadv_frag_packet_list_entry * +batadv_frag_search_packet(struct list_head *head, + const struct batadv_unicast_frag_packet *up) { - struct frag_packet_list_entry *tfp; - struct unicast_frag_packet *tmp_up = NULL; + struct batadv_frag_packet_list_entry *tfp; + struct batadv_unicast_frag_packet *tmp_up = NULL; uint16_t search_seqno; - if (up->flags & UNI_FRAG_HEAD) + if (up->flags & BATADV_UNI_FRAG_HEAD) search_seqno = ntohs(up->seqno)+1; else search_seqno = ntohs(up->seqno)-1; @@ -133,12 +136,12 @@ static struct frag_packet_list_entry *frag_search_packet(struct list_head *head, if (tfp->seqno == ntohs(up->seqno)) goto mov_tail; - tmp_up = (struct unicast_frag_packet *)tfp->skb->data; + tmp_up = 
(struct batadv_unicast_frag_packet *)tfp->skb->data; if (tfp->seqno == search_seqno) { - if ((tmp_up->flags & UNI_FRAG_HEAD) != - (up->flags & UNI_FRAG_HEAD)) + if ((tmp_up->flags & BATADV_UNI_FRAG_HEAD) != + (up->flags & BATADV_UNI_FRAG_HEAD)) return tfp; else goto mov_tail; @@ -151,9 +154,9 @@ mov_tail: return NULL; } -void frag_list_free(struct list_head *head) +void batadv_frag_list_free(struct list_head *head) { - struct frag_packet_list_entry *pf, *tmp_pf; + struct batadv_frag_packet_list_entry *pf, *tmp_pf; if (!list_empty(head)) { @@ -172,64 +175,66 @@ void frag_list_free(struct list_head *head) * or the skb could be reassembled (skb_new will point to the new packet and * skb was freed) */ -int frag_reassemble_skb(struct sk_buff *skb, struct bat_priv *bat_priv, - struct sk_buff **new_skb) +int batadv_frag_reassemble_skb(struct sk_buff *skb, + struct batadv_priv *bat_priv, + struct sk_buff **new_skb) { - struct orig_node *orig_node; - struct frag_packet_list_entry *tmp_frag_entry; + struct batadv_orig_node *orig_node; + struct batadv_frag_packet_list_entry *tmp_frag_entry; int ret = NET_RX_DROP; - struct unicast_frag_packet *unicast_packet = - (struct unicast_frag_packet *)skb->data; + struct batadv_unicast_frag_packet *unicast_packet; + unicast_packet = (struct batadv_unicast_frag_packet *)skb->data; *new_skb = NULL; - orig_node = orig_hash_find(bat_priv, unicast_packet->orig); + orig_node = batadv_orig_hash_find(bat_priv, unicast_packet->orig); if (!orig_node) goto out; orig_node->last_frag_packet = jiffies; if (list_empty(&orig_node->frag_list) && - frag_create_buffer(&orig_node->frag_list)) { + batadv_frag_create_buffer(&orig_node->frag_list)) { pr_debug("couldn't create frag buffer\n"); goto out; } - tmp_frag_entry = frag_search_packet(&orig_node->frag_list, - unicast_packet); + tmp_frag_entry = batadv_frag_search_packet(&orig_node->frag_list, + unicast_packet); if (!tmp_frag_entry) { - frag_create_entry(&orig_node->frag_list, skb); + batadv_frag_create_entry(&orig_node->frag_list, skb); ret = NET_RX_SUCCESS; goto out; } - *new_skb = frag_merge_packet(&orig_node->frag_list, tmp_frag_entry, - skb); + *new_skb = batadv_frag_merge_packet(&orig_node->frag_list, + tmp_frag_entry, skb); /* if not, merge failed */ if (*new_skb) ret = NET_RX_SUCCESS; out: if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); return ret; } -int frag_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv, - struct hard_iface *hard_iface, const uint8_t dstaddr[]) +int batadv_frag_send_skb(struct sk_buff *skb, struct batadv_priv *bat_priv, + struct batadv_hard_iface *hard_iface, + const uint8_t dstaddr[]) { - struct unicast_packet tmp_uc, *unicast_packet; - struct hard_iface *primary_if; + struct batadv_unicast_packet tmp_uc, *unicast_packet; + struct batadv_hard_iface *primary_if; struct sk_buff *frag_skb; - struct unicast_frag_packet *frag1, *frag2; + struct batadv_unicast_frag_packet *frag1, *frag2; int uc_hdr_len = sizeof(*unicast_packet); int ucf_hdr_len = sizeof(*frag1); int data_len = skb->len - uc_hdr_len; int large_tail = 0, ret = NET_RX_DROP; uint16_t seqno; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto dropped; @@ -238,38 +243,38 @@ int frag_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv, goto dropped; skb_reserve(frag_skb, ucf_hdr_len); - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; memcpy(&tmp_uc, 
unicast_packet, uc_hdr_len); skb_split(skb, frag_skb, data_len / 2 + uc_hdr_len); - if (my_skb_head_push(skb, ucf_hdr_len - uc_hdr_len) < 0 || - my_skb_head_push(frag_skb, ucf_hdr_len) < 0) + if (batadv_skb_head_push(skb, ucf_hdr_len - uc_hdr_len) < 0 || + batadv_skb_head_push(frag_skb, ucf_hdr_len) < 0) goto drop_frag; - frag1 = (struct unicast_frag_packet *)skb->data; - frag2 = (struct unicast_frag_packet *)frag_skb->data; + frag1 = (struct batadv_unicast_frag_packet *)skb->data; + frag2 = (struct batadv_unicast_frag_packet *)frag_skb->data; memcpy(frag1, &tmp_uc, sizeof(tmp_uc)); frag1->header.ttl--; - frag1->header.version = COMPAT_VERSION; - frag1->header.packet_type = BAT_UNICAST_FRAG; + frag1->header.version = BATADV_COMPAT_VERSION; + frag1->header.packet_type = BATADV_UNICAST_FRAG; memcpy(frag1->orig, primary_if->net_dev->dev_addr, ETH_ALEN); memcpy(frag2, frag1, sizeof(*frag2)); if (data_len & 1) - large_tail = UNI_FRAG_LARGETAIL; + large_tail = BATADV_UNI_FRAG_LARGETAIL; - frag1->flags = UNI_FRAG_HEAD | large_tail; + frag1->flags = BATADV_UNI_FRAG_HEAD | large_tail; frag2->flags = large_tail; seqno = atomic_add_return(2, &hard_iface->frag_seqno); frag1->seqno = htons(seqno - 1); frag2->seqno = htons(seqno); - send_skb_packet(skb, hard_iface, dstaddr); - send_skb_packet(frag_skb, hard_iface, dstaddr); + batadv_send_skb_packet(skb, hard_iface, dstaddr); + batadv_send_skb_packet(frag_skb, hard_iface, dstaddr); ret = NET_RX_SUCCESS; goto out; @@ -279,52 +284,53 @@ dropped: kfree_skb(skb); out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } -int unicast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv) +int batadv_unicast_send_skb(struct sk_buff *skb, struct batadv_priv *bat_priv) { struct ethhdr *ethhdr = (struct ethhdr *)skb->data; - struct unicast_packet *unicast_packet; - struct orig_node *orig_node; - struct neigh_node *neigh_node; + struct batadv_unicast_packet *unicast_packet; + struct batadv_orig_node *orig_node; + struct batadv_neigh_node *neigh_node; int data_len = skb->len; int ret = 1; + unsigned int dev_mtu; /* get routing information */ if (is_multicast_ether_addr(ethhdr->h_dest)) { - orig_node = gw_get_selected_orig(bat_priv); + orig_node = batadv_gw_get_selected_orig(bat_priv); if (orig_node) goto find_router; } /* check for tt host - increases orig_node refcount. - * returns NULL in case of AP isolation */ - orig_node = transtable_search(bat_priv, ethhdr->h_source, - ethhdr->h_dest); + * returns NULL in case of AP isolation + */ + orig_node = batadv_transtable_search(bat_priv, ethhdr->h_source, + ethhdr->h_dest); find_router: - /** - * find_router(): + /* find_router(): * - if orig_node is NULL it returns NULL * - increases neigh_nodes refcount if found. 
*/ - neigh_node = find_router(bat_priv, orig_node, NULL); + neigh_node = batadv_find_router(bat_priv, orig_node, NULL); if (!neigh_node) goto out; - if (my_skb_head_push(skb, sizeof(*unicast_packet)) < 0) + if (batadv_skb_head_push(skb, sizeof(*unicast_packet)) < 0) goto out; - unicast_packet = (struct unicast_packet *)skb->data; + unicast_packet = (struct batadv_unicast_packet *)skb->data; - unicast_packet->header.version = COMPAT_VERSION; + unicast_packet->header.version = BATADV_COMPAT_VERSION; /* batman packet type: unicast */ - unicast_packet->header.packet_type = BAT_UNICAST; + unicast_packet->header.packet_type = BATADV_UNICAST; /* set unicast ttl */ - unicast_packet->header.ttl = TTL; + unicast_packet->header.ttl = BATADV_TTL; /* copy the destination for faster routing */ memcpy(unicast_packet->dest, orig_node->orig, ETH_ALEN); /* set the destination tt version number */ @@ -336,28 +342,29 @@ find_router: * try to reroute it because the ttvn contained in the header is less * than the current one */ - if (tt_global_client_is_roaming(bat_priv, ethhdr->h_dest)) + if (batadv_tt_global_client_is_roaming(bat_priv, ethhdr->h_dest)) unicast_packet->ttvn = unicast_packet->ttvn - 1; + dev_mtu = neigh_node->if_incoming->net_dev->mtu; if (atomic_read(&bat_priv->fragmentation) && - data_len + sizeof(*unicast_packet) > - neigh_node->if_incoming->net_dev->mtu) { + data_len + sizeof(*unicast_packet) > dev_mtu) { /* send frag skb decreases ttl */ unicast_packet->header.ttl++; - ret = frag_send_skb(skb, bat_priv, - neigh_node->if_incoming, neigh_node->addr); + ret = batadv_frag_send_skb(skb, bat_priv, + neigh_node->if_incoming, + neigh_node->addr); goto out; } - send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); + batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr); ret = 0; goto out; out: if (neigh_node) - neigh_node_free_ref(neigh_node); + batadv_neigh_node_free_ref(neigh_node); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); if (ret == 1) kfree_skb(skb); return ret; diff --git a/net/batman-adv/unicast.h b/net/batman-adv/unicast.h index a9faf6b1db19..1c46e2eb1ef9 100644 --- a/net/batman-adv/unicast.h +++ b/net/batman-adv/unicast.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2010-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2012 B.A.T.M.A.N. 
contributors: * * Andreas Langer * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_UNICAST_H_ @@ -24,33 +22,35 @@ #include "packet.h" -#define FRAG_TIMEOUT 10000 /* purge frag list entries after time in ms */ -#define FRAG_BUFFER_SIZE 6 /* number of list elements in buffer */ +#define BATADV_FRAG_TIMEOUT 10000 /* purge frag list entries after time in ms */ +#define BATADV_FRAG_BUFFER_SIZE 6 /* number of list elements in buffer */ -int frag_reassemble_skb(struct sk_buff *skb, struct bat_priv *bat_priv, - struct sk_buff **new_skb); -void frag_list_free(struct list_head *head); -int unicast_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv); -int frag_send_skb(struct sk_buff *skb, struct bat_priv *bat_priv, - struct hard_iface *hard_iface, const uint8_t dstaddr[]); +int batadv_frag_reassemble_skb(struct sk_buff *skb, + struct batadv_priv *bat_priv, + struct sk_buff **new_skb); +void batadv_frag_list_free(struct list_head *head); +int batadv_unicast_send_skb(struct sk_buff *skb, struct batadv_priv *bat_priv); +int batadv_frag_send_skb(struct sk_buff *skb, struct batadv_priv *bat_priv, + struct batadv_hard_iface *hard_iface, + const uint8_t dstaddr[]); -static inline int frag_can_reassemble(const struct sk_buff *skb, int mtu) +static inline int batadv_frag_can_reassemble(const struct sk_buff *skb, int mtu) { - const struct unicast_frag_packet *unicast_packet; + const struct batadv_unicast_frag_packet *unicast_packet; int uneven_correction = 0; unsigned int merged_size; - unicast_packet = (struct unicast_frag_packet *)skb->data; + unicast_packet = (struct batadv_unicast_frag_packet *)skb->data; - if (unicast_packet->flags & UNI_FRAG_LARGETAIL) { - if (unicast_packet->flags & UNI_FRAG_HEAD) + if (unicast_packet->flags & BATADV_UNI_FRAG_LARGETAIL) { + if (unicast_packet->flags & BATADV_UNI_FRAG_HEAD) uneven_correction = 1; else uneven_correction = -1; } merged_size = (skb->len - sizeof(*unicast_packet)) * 2; - merged_size += sizeof(struct unicast_packet) + uneven_correction; + merged_size += sizeof(struct batadv_unicast_packet) + uneven_correction; return merged_size <= mtu; } diff --git a/net/batman-adv/vis.c b/net/batman-adv/vis.c index cec216fb77c7..2a2ea0681469 100644 --- a/net/batman-adv/vis.c +++ b/net/batman-adv/vis.c @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2008-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2008-2012 B.A.T.M.A.N. 
contributors: * * Simon Wunderlich * @@ -16,7 +15,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #include "main.h" @@ -28,16 +26,19 @@ #include "hash.h" #include "originator.h" -#define MAX_VIS_PACKET_SIZE 1000 +#define BATADV_MAX_VIS_PACKET_SIZE 1000 -static void start_vis_timer(struct bat_priv *bat_priv); +static void batadv_start_vis_timer(struct batadv_priv *bat_priv); /* free the info */ -static void free_info(struct kref *ref) +static void batadv_free_info(struct kref *ref) { - struct vis_info *info = container_of(ref, struct vis_info, refcount); - struct bat_priv *bat_priv = info->bat_priv; - struct recvlist_node *entry, *tmp; + struct batadv_vis_info *info; + struct batadv_priv *bat_priv; + struct batadv_recvlist_node *entry, *tmp; + + info = container_of(ref, struct batadv_vis_info, refcount); + bat_priv = info->bat_priv; list_del_init(&info->send_list); spin_lock_bh(&bat_priv->vis_list_lock); @@ -52,29 +53,30 @@ static void free_info(struct kref *ref) } /* Compare two vis packets, used by the hashing algorithm */ -static int vis_info_cmp(const struct hlist_node *node, const void *data2) +static int batadv_vis_info_cmp(const struct hlist_node *node, const void *data2) { - const struct vis_info *d1, *d2; - const struct vis_packet *p1, *p2; + const struct batadv_vis_info *d1, *d2; + const struct batadv_vis_packet *p1, *p2; - d1 = container_of(node, struct vis_info, hash_entry); + d1 = container_of(node, struct batadv_vis_info, hash_entry); d2 = data2; - p1 = (struct vis_packet *)d1->skb_packet->data; - p2 = (struct vis_packet *)d2->skb_packet->data; - return compare_eth(p1->vis_orig, p2->vis_orig); + p1 = (struct batadv_vis_packet *)d1->skb_packet->data; + p2 = (struct batadv_vis_packet *)d2->skb_packet->data; + return batadv_compare_eth(p1->vis_orig, p2->vis_orig); } -/* hash function to choose an entry in a hash table of given size */ -/* hash algorithm from http://en.wikipedia.org/wiki/Hash_table */ -static uint32_t vis_info_choose(const void *data, uint32_t size) +/* hash function to choose an entry in a hash table of given size + * hash algorithm from http://en.wikipedia.org/wiki/Hash_table + */ +static uint32_t batadv_vis_info_choose(const void *data, uint32_t size) { - const struct vis_info *vis_info = data; - const struct vis_packet *packet; + const struct batadv_vis_info *vis_info = data; + const struct batadv_vis_packet *packet; const unsigned char *key; uint32_t hash = 0; size_t i; - packet = (struct vis_packet *)vis_info->skb_packet->data; + packet = (struct batadv_vis_packet *)vis_info->skb_packet->data; key = packet->vis_orig; for (i = 0; i < ETH_ALEN; i++) { hash += key[i]; @@ -89,24 +91,24 @@ static uint32_t vis_info_choose(const void *data, uint32_t size) return hash % size; } -static struct vis_info *vis_hash_find(struct bat_priv *bat_priv, - const void *data) +static struct batadv_vis_info * +batadv_vis_hash_find(struct batadv_priv *bat_priv, const void *data) { - struct hashtable_t *hash = bat_priv->vis_hash; + struct batadv_hashtable *hash = bat_priv->vis_hash; struct hlist_head *head; struct hlist_node *node; - struct vis_info *vis_info, *vis_info_tmp = NULL; + struct batadv_vis_info *vis_info, *vis_info_tmp = NULL; uint32_t index; if (!hash) return NULL; - index = vis_info_choose(data, hash->size); + index = batadv_vis_info_choose(data, hash->size); head = &hash->table[index]; rcu_read_lock(); hlist_for_each_entry_rcu(vis_info, node, head, hash_entry) { 
- if (!vis_info_cmp(node, data)) + if (!batadv_vis_info_cmp(node, data)) continue; vis_info_tmp = vis_info; @@ -118,16 +120,17 @@ static struct vis_info *vis_hash_find(struct bat_priv *bat_priv, } /* insert interface to the list of interfaces of one originator, if it - * does not already exist in the list */ -static void vis_data_insert_interface(const uint8_t *interface, - struct hlist_head *if_list, - bool primary) + * does not already exist in the list + */ +static void batadv_vis_data_insert_interface(const uint8_t *interface, + struct hlist_head *if_list, + bool primary) { - struct if_list_entry *entry; + struct batadv_if_list_entry *entry; struct hlist_node *pos; hlist_for_each_entry(entry, pos, if_list, list) { - if (compare_eth(entry->addr, interface)) + if (batadv_compare_eth(entry->addr, interface)) return; } @@ -140,195 +143,145 @@ static void vis_data_insert_interface(const uint8_t *interface, hlist_add_head(&entry->list, if_list); } -static ssize_t vis_data_read_prim_sec(char *buff, - const struct hlist_head *if_list) +static void batadv_vis_data_read_prim_sec(struct seq_file *seq, + const struct hlist_head *if_list) { - struct if_list_entry *entry; + struct batadv_if_list_entry *entry; struct hlist_node *pos; - size_t len = 0; hlist_for_each_entry(entry, pos, if_list, list) { if (entry->primary) - len += sprintf(buff + len, "PRIMARY, "); + seq_printf(seq, "PRIMARY, "); else - len += sprintf(buff + len, "SEC %pM, ", entry->addr); + seq_printf(seq, "SEC %pM, ", entry->addr); } +} - return len; +/* read an entry */ +static ssize_t +batadv_vis_data_read_entry(struct seq_file *seq, + const struct batadv_vis_info_entry *entry, + const uint8_t *src, bool primary) +{ + if (primary && entry->quality == 0) + return seq_printf(seq, "TT %pM, ", entry->dest); + else if (batadv_compare_eth(entry->src, src)) + return seq_printf(seq, "TQ %pM %d, ", entry->dest, + entry->quality); + + return 0; } -static size_t vis_data_count_prim_sec(struct hlist_head *if_list) +static void +batadv_vis_data_insert_interfaces(struct hlist_head *list, + struct batadv_vis_packet *packet, + struct batadv_vis_info_entry *entries) { - struct if_list_entry *entry; - struct hlist_node *pos; - size_t count = 0; + int i; - hlist_for_each_entry(entry, pos, if_list, list) { - if (entry->primary) - count += 9; - else - count += 23; - } + for (i = 0; i < packet->entries; i++) { + if (entries[i].quality == 0) + continue; - return count; + if (batadv_compare_eth(entries[i].src, packet->vis_orig)) + continue; + + batadv_vis_data_insert_interface(entries[i].src, list, false); + } } -/* read an entry */ -static ssize_t vis_data_read_entry(char *buff, - const struct vis_info_entry *entry, - const uint8_t *src, bool primary) +static void batadv_vis_data_read_entries(struct seq_file *seq, + struct hlist_head *list, + struct batadv_vis_packet *packet, + struct batadv_vis_info_entry *entries) { - /* maximal length: max(4+17+2, 3+17+1+3+2) == 26 */ - if (primary && entry->quality == 0) - return sprintf(buff, "TT %pM, ", entry->dest); - else if (compare_eth(entry->src, src)) - return sprintf(buff, "TQ %pM %d, ", entry->dest, - entry->quality); + int i; + struct batadv_if_list_entry *entry; + struct hlist_node *pos; - return 0; + hlist_for_each_entry(entry, pos, list, list) { + seq_printf(seq, "%pM,", entry->addr); + + for (i = 0; i < packet->entries; i++) + batadv_vis_data_read_entry(seq, &entries[i], + entry->addr, entry->primary); + + /* add primary/secondary records */ + if (batadv_compare_eth(entry->addr, packet->vis_orig)) + 
batadv_vis_data_read_prim_sec(seq, list); + + seq_printf(seq, "\n"); + } } -int vis_seq_print_text(struct seq_file *seq, void *offset) +static void batadv_vis_seq_print_text_bucket(struct seq_file *seq, + const struct hlist_head *head) { - struct hard_iface *primary_if; struct hlist_node *node; + struct batadv_vis_info *info; + struct batadv_vis_packet *packet; + uint8_t *entries_pos; + struct batadv_vis_info_entry *entries; + struct batadv_if_list_entry *entry; + struct hlist_node *pos, *n; + + HLIST_HEAD(vis_if_list); + + hlist_for_each_entry_rcu(info, node, head, hash_entry) { + packet = (struct batadv_vis_packet *)info->skb_packet->data; + entries_pos = (uint8_t *)packet + sizeof(*packet); + entries = (struct batadv_vis_info_entry *)entries_pos; + + batadv_vis_data_insert_interface(packet->vis_orig, &vis_if_list, + true); + batadv_vis_data_insert_interfaces(&vis_if_list, packet, + entries); + batadv_vis_data_read_entries(seq, &vis_if_list, packet, + entries); + + hlist_for_each_entry_safe(entry, pos, n, &vis_if_list, list) { + hlist_del(&entry->list); + kfree(entry); + } + } +} + +int batadv_vis_seq_print_text(struct seq_file *seq, void *offset) +{ + struct batadv_hard_iface *primary_if; struct hlist_head *head; - struct vis_info *info; - struct vis_packet *packet; - struct vis_info_entry *entries; struct net_device *net_dev = (struct net_device *)seq->private; - struct bat_priv *bat_priv = netdev_priv(net_dev); - struct hashtable_t *hash = bat_priv->vis_hash; - HLIST_HEAD(vis_if_list); - struct if_list_entry *entry; - struct hlist_node *pos, *n; + struct batadv_priv *bat_priv = netdev_priv(net_dev); + struct batadv_hashtable *hash = bat_priv->vis_hash; uint32_t i; - int j, ret = 0; + int ret = 0; int vis_server = atomic_read(&bat_priv->vis_mode); - size_t buff_pos, buf_size; - char *buff; - int compare; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; - if (vis_server == VIS_TYPE_CLIENT_UPDATE) + if (vis_server == BATADV_VIS_TYPE_CLIENT_UPDATE) goto out; - buf_size = 1; - /* Estimate length */ spin_lock_bh(&bat_priv->vis_hash_lock); for (i = 0; i < hash->size; i++) { head = &hash->table[i]; - - rcu_read_lock(); - hlist_for_each_entry_rcu(info, node, head, hash_entry) { - packet = (struct vis_packet *)info->skb_packet->data; - entries = (struct vis_info_entry *) - ((char *)packet + sizeof(*packet)); - - for (j = 0; j < packet->entries; j++) { - if (entries[j].quality == 0) - continue; - compare = - compare_eth(entries[j].src, packet->vis_orig); - vis_data_insert_interface(entries[j].src, - &vis_if_list, - compare); - } - - hlist_for_each_entry(entry, pos, &vis_if_list, list) { - buf_size += 18 + 26 * packet->entries; - - /* add primary/secondary records */ - if (compare_eth(entry->addr, packet->vis_orig)) - buf_size += - vis_data_count_prim_sec(&vis_if_list); - - buf_size += 1; - } - - hlist_for_each_entry_safe(entry, pos, n, &vis_if_list, - list) { - hlist_del(&entry->list); - kfree(entry); - } - } - rcu_read_unlock(); - } - - buff = kmalloc(buf_size, GFP_ATOMIC); - if (!buff) { - spin_unlock_bh(&bat_priv->vis_hash_lock); - ret = -ENOMEM; - goto out; - } - buff[0] = '\0'; - buff_pos = 0; - - for (i = 0; i < hash->size; i++) { - head = &hash->table[i]; - - rcu_read_lock(); - hlist_for_each_entry_rcu(info, node, head, hash_entry) { - packet = (struct vis_packet *)info->skb_packet->data; - entries = (struct vis_info_entry *) - ((char *)packet + sizeof(*packet)); - - for (j = 0; j < packet->entries; j++) 
{ - if (entries[j].quality == 0) - continue; - compare = - compare_eth(entries[j].src, packet->vis_orig); - vis_data_insert_interface(entries[j].src, - &vis_if_list, - compare); - } - - hlist_for_each_entry(entry, pos, &vis_if_list, list) { - buff_pos += sprintf(buff + buff_pos, "%pM,", - entry->addr); - - for (j = 0; j < packet->entries; j++) - buff_pos += vis_data_read_entry( - buff + buff_pos, - &entries[j], - entry->addr, - entry->primary); - - /* add primary/secondary records */ - if (compare_eth(entry->addr, packet->vis_orig)) - buff_pos += - vis_data_read_prim_sec(buff + buff_pos, - &vis_if_list); - - buff_pos += sprintf(buff + buff_pos, "\n"); - } - - hlist_for_each_entry_safe(entry, pos, n, &vis_if_list, - list) { - hlist_del(&entry->list); - kfree(entry); - } - } - rcu_read_unlock(); + batadv_vis_seq_print_text_bucket(seq, head); } - spin_unlock_bh(&bat_priv->vis_hash_lock); - seq_printf(seq, "%s", buff); - kfree(buff); - out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); return ret; } /* add the info packet to the send list, if it was not - * already linked in. */ -static void send_list_add(struct bat_priv *bat_priv, struct vis_info *info) + * already linked in. + */ +static void batadv_send_list_add(struct batadv_priv *bat_priv, + struct batadv_vis_info *info) { if (list_empty(&info->send_list)) { kref_get(&info->refcount); @@ -337,20 +290,21 @@ static void send_list_add(struct bat_priv *bat_priv, struct vis_info *info) } /* delete the info packet from the send list, if it was - * linked in. */ -static void send_list_del(struct vis_info *info) + * linked in. + */ +static void batadv_send_list_del(struct batadv_vis_info *info) { if (!list_empty(&info->send_list)) { list_del_init(&info->send_list); - kref_put(&info->refcount, free_info); + kref_put(&info->refcount, batadv_free_info); } } /* tries to add one entry to the receive list. */ -static void recv_list_add(struct bat_priv *bat_priv, - struct list_head *recv_list, const char *mac) +static void batadv_recv_list_add(struct batadv_priv *bat_priv, + struct list_head *recv_list, const char *mac) { - struct recvlist_node *entry; + struct batadv_recvlist_node *entry; entry = kmalloc(sizeof(*entry), GFP_ATOMIC); if (!entry) @@ -363,14 +317,15 @@ static void recv_list_add(struct bat_priv *bat_priv, } /* returns 1 if this mac is in the recv_list */ -static int recv_list_is_in(struct bat_priv *bat_priv, - const struct list_head *recv_list, const char *mac) +static int batadv_recv_list_is_in(struct batadv_priv *bat_priv, + const struct list_head *recv_list, + const char *mac) { - const struct recvlist_node *entry; + const struct batadv_recvlist_node *entry; spin_lock_bh(&bat_priv->vis_list_lock); list_for_each_entry(entry, recv_list, list) { - if (compare_eth(entry->mac, mac)) { + if (batadv_compare_eth(entry->mac, mac)) { spin_unlock_bh(&bat_priv->vis_list_lock); return 1; } @@ -381,17 +336,21 @@ static int recv_list_is_in(struct bat_priv *bat_priv, /* try to add the packet to the vis_hash. return NULL if invalid (e.g. too old, * broken.. ). vis hash must be locked outside. is_new is set when the packet - * is newer than old entries in the hash. */ -static struct vis_info *add_packet(struct bat_priv *bat_priv, - struct vis_packet *vis_packet, - int vis_info_len, int *is_new, - int make_broadcast) + * is newer than old entries in the hash. 
+ */ +static struct batadv_vis_info * +batadv_add_packet(struct batadv_priv *bat_priv, + struct batadv_vis_packet *vis_packet, int vis_info_len, + int *is_new, int make_broadcast) { - struct vis_info *info, *old_info; - struct vis_packet *search_packet, *old_packet; - struct vis_info search_elem; - struct vis_packet *packet; + struct batadv_vis_info *info, *old_info; + struct batadv_vis_packet *search_packet, *old_packet; + struct batadv_vis_info search_elem; + struct batadv_vis_packet *packet; + struct sk_buff *tmp_skb; int hash_added; + size_t len; + size_t max_entries; *is_new = 0; /* sanity check */ @@ -402,20 +361,23 @@ static struct vis_info *add_packet(struct bat_priv *bat_priv, search_elem.skb_packet = dev_alloc_skb(sizeof(*search_packet)); if (!search_elem.skb_packet) return NULL; - search_packet = (struct vis_packet *)skb_put(search_elem.skb_packet, - sizeof(*search_packet)); + len = sizeof(*search_packet); + tmp_skb = search_elem.skb_packet; + search_packet = (struct batadv_vis_packet *)skb_put(tmp_skb, len); memcpy(search_packet->vis_orig, vis_packet->vis_orig, ETH_ALEN); - old_info = vis_hash_find(bat_priv, &search_elem); + old_info = batadv_vis_hash_find(bat_priv, &search_elem); kfree_skb(search_elem.skb_packet); if (old_info) { - old_packet = (struct vis_packet *)old_info->skb_packet->data; - if (!seq_after(ntohl(vis_packet->seqno), - ntohl(old_packet->seqno))) { + tmp_skb = old_info->skb_packet; + old_packet = (struct batadv_vis_packet *)tmp_skb->data; + if (!batadv_seq_after(ntohl(vis_packet->seqno), + ntohl(old_packet->seqno))) { if (old_packet->seqno == vis_packet->seqno) { - recv_list_add(bat_priv, &old_info->recv_list, - vis_packet->sender_orig); + batadv_recv_list_add(bat_priv, + &old_info->recv_list, + vis_packet->sender_orig); return old_info; } else { /* newer packet is already in hash. */ @@ -423,52 +385,53 @@ static struct vis_info *add_packet(struct bat_priv *bat_priv, } } /* remove old entry */ - hash_remove(bat_priv->vis_hash, vis_info_cmp, vis_info_choose, - old_info); - send_list_del(old_info); - kref_put(&old_info->refcount, free_info); + batadv_hash_remove(bat_priv->vis_hash, batadv_vis_info_cmp, + batadv_vis_info_choose, old_info); + batadv_send_list_del(old_info); + kref_put(&old_info->refcount, batadv_free_info); } info = kmalloc(sizeof(*info), GFP_ATOMIC); if (!info) return NULL; - info->skb_packet = dev_alloc_skb(sizeof(*packet) + vis_info_len + - ETH_HLEN); + len = sizeof(*packet) + vis_info_len; + info->skb_packet = dev_alloc_skb(len + ETH_HLEN); if (!info->skb_packet) { kfree(info); return NULL; } skb_reserve(info->skb_packet, ETH_HLEN); - packet = (struct vis_packet *)skb_put(info->skb_packet, sizeof(*packet) - + vis_info_len); + packet = (struct batadv_vis_packet *)skb_put(info->skb_packet, len); kref_init(&info->refcount); INIT_LIST_HEAD(&info->send_list); INIT_LIST_HEAD(&info->recv_list); info->first_seen = jiffies; info->bat_priv = bat_priv; - memcpy(packet, vis_packet, sizeof(*packet) + vis_info_len); + memcpy(packet, vis_packet, len); /* initialize and add new packet. */ *is_new = 1; /* Make it a broadcast packet, if required */ if (make_broadcast) - memcpy(packet->target_orig, broadcast_addr, ETH_ALEN); + memcpy(packet->target_orig, batadv_broadcast_addr, ETH_ALEN); /* repair if entries is longer than packet. 
*/ - if (packet->entries * sizeof(struct vis_info_entry) > vis_info_len) - packet->entries = vis_info_len / sizeof(struct vis_info_entry); + max_entries = vis_info_len / sizeof(struct batadv_vis_info_entry); + if (packet->entries > max_entries) + packet->entries = max_entries; - recv_list_add(bat_priv, &info->recv_list, packet->sender_orig); + batadv_recv_list_add(bat_priv, &info->recv_list, packet->sender_orig); /* try to add it */ - hash_added = hash_add(bat_priv->vis_hash, vis_info_cmp, vis_info_choose, - info, &info->hash_entry); + hash_added = batadv_hash_add(bat_priv->vis_hash, batadv_vis_info_cmp, + batadv_vis_info_choose, info, + &info->hash_entry); if (hash_added != 0) { /* did not work (for some reason) */ - kref_put(&info->refcount, free_info); + kref_put(&info->refcount, batadv_free_info); info = NULL; } @@ -476,37 +439,38 @@ static struct vis_info *add_packet(struct bat_priv *bat_priv, } /* handle the server sync packet, forward if needed. */ -void receive_server_sync_packet(struct bat_priv *bat_priv, - struct vis_packet *vis_packet, - int vis_info_len) +void batadv_receive_server_sync_packet(struct batadv_priv *bat_priv, + struct batadv_vis_packet *vis_packet, + int vis_info_len) { - struct vis_info *info; + struct batadv_vis_info *info; int is_new, make_broadcast; int vis_server = atomic_read(&bat_priv->vis_mode); - make_broadcast = (vis_server == VIS_TYPE_SERVER_SYNC); + make_broadcast = (vis_server == BATADV_VIS_TYPE_SERVER_SYNC); spin_lock_bh(&bat_priv->vis_hash_lock); - info = add_packet(bat_priv, vis_packet, vis_info_len, - &is_new, make_broadcast); + info = batadv_add_packet(bat_priv, vis_packet, vis_info_len, + &is_new, make_broadcast); if (!info) goto end; /* only if we are server ourselves and packet is newer than the one in - * hash.*/ - if (vis_server == VIS_TYPE_SERVER_SYNC && is_new) - send_list_add(bat_priv, info); + * hash. + */ + if (vis_server == BATADV_VIS_TYPE_SERVER_SYNC && is_new) + batadv_send_list_add(bat_priv, info); end: spin_unlock_bh(&bat_priv->vis_hash_lock); } /* handle an incoming client update packet and schedule forward if needed. */ -void receive_client_update_packet(struct bat_priv *bat_priv, - struct vis_packet *vis_packet, - int vis_info_len) +void batadv_receive_client_update_packet(struct batadv_priv *bat_priv, + struct batadv_vis_packet *vis_packet, + int vis_info_len) { - struct vis_info *info; - struct vis_packet *packet; + struct batadv_vis_info *info; + struct batadv_vis_packet *packet; int is_new; int vis_server = atomic_read(&bat_priv->vis_mode); int are_target = 0; @@ -516,28 +480,28 @@ void receive_client_update_packet(struct bat_priv *bat_priv, return; /* Are we the target for this VIS packet? */ - if (vis_server == VIS_TYPE_SERVER_SYNC && - is_my_mac(vis_packet->target_orig)) + if (vis_server == BATADV_VIS_TYPE_SERVER_SYNC && + batadv_is_my_mac(vis_packet->target_orig)) are_target = 1; spin_lock_bh(&bat_priv->vis_hash_lock); - info = add_packet(bat_priv, vis_packet, vis_info_len, - &is_new, are_target); + info = batadv_add_packet(bat_priv, vis_packet, vis_info_len, + &is_new, are_target); if (!info) goto end; /* note that outdated packets will be dropped at this point. */ - packet = (struct vis_packet *)info->skb_packet->data; + packet = (struct batadv_vis_packet *)info->skb_packet->data; /* send only if we're the target server or ... */ if (are_target && is_new) { - packet->vis_type = VIS_TYPE_SERVER_SYNC; /* upgrade! */ - send_list_add(bat_priv, info); + packet->vis_type = BATADV_VIS_TYPE_SERVER_SYNC; /* upgrade! 
*/ + batadv_send_list_add(bat_priv, info); /* ... we're not the recipient (and thus need to forward). */ - } else if (!is_my_mac(packet->target_orig)) { - send_list_add(bat_priv, info); + } else if (!batadv_is_my_mac(packet->target_orig)) { + batadv_send_list_add(bat_priv, info); } end: @@ -547,37 +511,38 @@ end: /* Walk the originators and find the VIS server with the best tq. Set the packet * address to its address and return the best_tq. * - * Must be called with the originator hash locked */ -static int find_best_vis_server(struct bat_priv *bat_priv, - struct vis_info *info) + * Must be called with the originator hash locked + */ +static int batadv_find_best_vis_server(struct batadv_priv *bat_priv, + struct batadv_vis_info *info) { - struct hashtable_t *hash = bat_priv->orig_hash; - struct neigh_node *router; + struct batadv_hashtable *hash = bat_priv->orig_hash; + struct batadv_neigh_node *router; struct hlist_node *node; struct hlist_head *head; - struct orig_node *orig_node; - struct vis_packet *packet; + struct batadv_orig_node *orig_node; + struct batadv_vis_packet *packet; int best_tq = -1; uint32_t i; - packet = (struct vis_packet *)info->skb_packet->data; + packet = (struct batadv_vis_packet *)info->skb_packet->data; for (i = 0; i < hash->size; i++) { head = &hash->table[i]; rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) continue; - if ((orig_node->flags & VIS_SERVER) && + if ((orig_node->flags & BATADV_VIS_SERVER) && (router->tq_avg > best_tq)) { best_tq = router->tq_avg; memcpy(packet->target_orig, orig_node->orig, ETH_ALEN); } - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); } rcu_read_unlock(); } @@ -586,47 +551,52 @@ static int find_best_vis_server(struct bat_priv *bat_priv, } /* Return true if the vis packet is full. 
*/ -static bool vis_packet_full(const struct vis_info *info) +static bool batadv_vis_packet_full(const struct batadv_vis_info *info) { - const struct vis_packet *packet; - packet = (struct vis_packet *)info->skb_packet->data; + const struct batadv_vis_packet *packet; + size_t num; + + packet = (struct batadv_vis_packet *)info->skb_packet->data; + num = BATADV_MAX_VIS_PACKET_SIZE / sizeof(struct batadv_vis_info_entry); - if (MAX_VIS_PACKET_SIZE / sizeof(struct vis_info_entry) - < packet->entries + 1) + if (num < packet->entries + 1) return true; return false; } /* generates a packet of own vis data, - * returns 0 on success, -1 if no packet could be generated */ -static int generate_vis_packet(struct bat_priv *bat_priv) + * returns 0 on success, -1 if no packet could be generated + */ +static int batadv_generate_vis_packet(struct batadv_priv *bat_priv) { - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node; struct hlist_head *head; - struct orig_node *orig_node; - struct neigh_node *router; - struct vis_info *info = bat_priv->my_vis_info; - struct vis_packet *packet = (struct vis_packet *)info->skb_packet->data; - struct vis_info_entry *entry; - struct tt_common_entry *tt_common_entry; + struct batadv_orig_node *orig_node; + struct batadv_neigh_node *router; + struct batadv_vis_info *info = bat_priv->my_vis_info; + struct batadv_vis_packet *packet; + struct batadv_vis_info_entry *entry; + struct batadv_tt_common_entry *tt_common_entry; int best_tq = -1; uint32_t i; info->first_seen = jiffies; + packet = (struct batadv_vis_packet *)info->skb_packet->data; packet->vis_type = atomic_read(&bat_priv->vis_mode); - memcpy(packet->target_orig, broadcast_addr, ETH_ALEN); - packet->header.ttl = TTL; + memcpy(packet->target_orig, batadv_broadcast_addr, ETH_ALEN); + packet->header.ttl = BATADV_TTL; packet->seqno = htonl(ntohl(packet->seqno) + 1); packet->entries = 0; + packet->reserved = 0; skb_trim(info->skb_packet, sizeof(*packet)); - if (packet->vis_type == VIS_TYPE_CLIENT_UPDATE) { - best_tq = find_best_vis_server(bat_priv, info); + if (packet->vis_type == BATADV_VIS_TYPE_CLIENT_UPDATE) { + best_tq = batadv_find_best_vis_server(bat_priv, info); if (best_tq < 0) - return -1; + return best_tq; } for (i = 0; i < hash->size; i++) { @@ -634,21 +604,21 @@ static int generate_vis_packet(struct bat_priv *bat_priv) rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) continue; - if (!compare_eth(router->addr, orig_node->orig)) + if (!batadv_compare_eth(router->addr, orig_node->orig)) goto next; - if (router->if_incoming->if_status != IF_ACTIVE) + if (router->if_incoming->if_status != BATADV_IF_ACTIVE) goto next; if (router->tq_avg < 1) goto next; /* fill one entry into buffer. 
*/ - entry = (struct vis_info_entry *) + entry = (struct batadv_vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); memcpy(entry->src, router->if_incoming->net_dev->dev_addr, @@ -658,9 +628,9 @@ static int generate_vis_packet(struct bat_priv *bat_priv) packet->entries++; next: - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); - if (vis_packet_full(info)) + if (batadv_vis_packet_full(info)) goto unlock; } rcu_read_unlock(); @@ -674,7 +644,7 @@ next: rcu_read_lock(); hlist_for_each_entry_rcu(tt_common_entry, node, head, hash_entry) { - entry = (struct vis_info_entry *) + entry = (struct batadv_vis_info_entry *) skb_put(info->skb_packet, sizeof(*entry)); memset(entry->src, 0, ETH_ALEN); @@ -682,7 +652,7 @@ next: entry->quality = 0; /* 0 means TT */ packet->entries++; - if (vis_packet_full(info)) + if (batadv_vis_packet_full(info)) goto unlock; } rcu_read_unlock(); @@ -696,14 +666,15 @@ unlock: } /* free old vis packets. Must be called with this vis_hash_lock - * held */ -static void purge_vis_packets(struct bat_priv *bat_priv) + * held + */ +static void batadv_purge_vis_packets(struct batadv_priv *bat_priv) { uint32_t i; - struct hashtable_t *hash = bat_priv->vis_hash; + struct batadv_hashtable *hash = bat_priv->vis_hash; struct hlist_node *node, *node_tmp; struct hlist_head *head; - struct vis_info *info; + struct batadv_vis_info *info; for (i = 0; i < hash->size; i++) { head = &hash->table[i]; @@ -714,31 +685,32 @@ static void purge_vis_packets(struct bat_priv *bat_priv) if (info == bat_priv->my_vis_info) continue; - if (has_timed_out(info->first_seen, VIS_TIMEOUT)) { + if (batadv_has_timed_out(info->first_seen, + BATADV_VIS_TIMEOUT)) { hlist_del(node); - send_list_del(info); - kref_put(&info->refcount, free_info); + batadv_send_list_del(info); + kref_put(&info->refcount, batadv_free_info); } } } } -static void broadcast_vis_packet(struct bat_priv *bat_priv, - struct vis_info *info) +static void batadv_broadcast_vis_packet(struct batadv_priv *bat_priv, + struct batadv_vis_info *info) { - struct neigh_node *router; - struct hashtable_t *hash = bat_priv->orig_hash; + struct batadv_neigh_node *router; + struct batadv_hashtable *hash = bat_priv->orig_hash; struct hlist_node *node; struct hlist_head *head; - struct orig_node *orig_node; - struct vis_packet *packet; + struct batadv_orig_node *orig_node; + struct batadv_vis_packet *packet; struct sk_buff *skb; - struct hard_iface *hard_iface; + struct batadv_hard_iface *hard_iface; uint8_t dstaddr[ETH_ALEN]; uint32_t i; - packet = (struct vis_packet *)info->skb_packet->data; + packet = (struct batadv_vis_packet *)info->skb_packet->data; /* send to all routers in range. */ for (i = 0; i < hash->size; i++) { @@ -747,18 +719,19 @@ static void broadcast_vis_packet(struct bat_priv *bat_priv, rcu_read_lock(); hlist_for_each_entry_rcu(orig_node, node, head, hash_entry) { /* if it's a vis server and reachable, send it. */ - if (!(orig_node->flags & VIS_SERVER)) + if (!(orig_node->flags & BATADV_VIS_SERVER)) continue; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) continue; /* don't send it if we already received the packet from - * this node. */ - if (recv_list_is_in(bat_priv, &info->recv_list, - orig_node->orig)) { - neigh_node_free_ref(router); + * this node. 
+ */ + if (batadv_recv_list_is_in(bat_priv, &info->recv_list, + orig_node->orig)) { + batadv_neigh_node_free_ref(router); continue; } @@ -766,57 +739,59 @@ static void broadcast_vis_packet(struct bat_priv *bat_priv, hard_iface = router->if_incoming; memcpy(dstaddr, router->addr, ETH_ALEN); - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); skb = skb_clone(info->skb_packet, GFP_ATOMIC); if (skb) - send_skb_packet(skb, hard_iface, dstaddr); + batadv_send_skb_packet(skb, hard_iface, + dstaddr); } rcu_read_unlock(); } } -static void unicast_vis_packet(struct bat_priv *bat_priv, - struct vis_info *info) +static void batadv_unicast_vis_packet(struct batadv_priv *bat_priv, + struct batadv_vis_info *info) { - struct orig_node *orig_node; - struct neigh_node *router = NULL; + struct batadv_orig_node *orig_node; + struct batadv_neigh_node *router = NULL; struct sk_buff *skb; - struct vis_packet *packet; + struct batadv_vis_packet *packet; - packet = (struct vis_packet *)info->skb_packet->data; + packet = (struct batadv_vis_packet *)info->skb_packet->data; - orig_node = orig_hash_find(bat_priv, packet->target_orig); + orig_node = batadv_orig_hash_find(bat_priv, packet->target_orig); if (!orig_node) goto out; - router = orig_node_get_router(orig_node); + router = batadv_orig_node_get_router(orig_node); if (!router) goto out; skb = skb_clone(info->skb_packet, GFP_ATOMIC); if (skb) - send_skb_packet(skb, router->if_incoming, router->addr); + batadv_send_skb_packet(skb, router->if_incoming, router->addr); out: if (router) - neigh_node_free_ref(router); + batadv_neigh_node_free_ref(router); if (orig_node) - orig_node_free_ref(orig_node); + batadv_orig_node_free_ref(orig_node); } -/* only send one vis packet. called from send_vis_packets() */ -static void send_vis_packet(struct bat_priv *bat_priv, struct vis_info *info) +/* only send one vis packet. called from batadv_send_vis_packets() */ +static void batadv_send_vis_packet(struct batadv_priv *bat_priv, + struct batadv_vis_info *info) { - struct hard_iface *primary_if; - struct vis_packet *packet; + struct batadv_hard_iface *primary_if; + struct batadv_vis_packet *packet; - primary_if = primary_if_get_selected(bat_priv); + primary_if = batadv_primary_if_get_selected(bat_priv); if (!primary_if) goto out; - packet = (struct vis_packet *)info->skb_packet->data; + packet = (struct batadv_vis_packet *)info->skb_packet->data; if (packet->header.ttl < 2) { pr_debug("Error - can't send vis packet: ttl exceeded\n"); goto out; @@ -826,31 +801,31 @@ static void send_vis_packet(struct bat_priv *bat_priv, struct vis_info *info) packet->header.ttl--; if (is_broadcast_ether_addr(packet->target_orig)) - broadcast_vis_packet(bat_priv, info); + batadv_broadcast_vis_packet(bat_priv, info); else - unicast_vis_packet(bat_priv, info); + batadv_unicast_vis_packet(bat_priv, info); packet->header.ttl++; /* restore TTL */ out: if (primary_if) - hardif_free_ref(primary_if); + batadv_hardif_free_ref(primary_if); } /* called from timer; send (and maybe generate) vis packet. 
*/ -static void send_vis_packets(struct work_struct *work) +static void batadv_send_vis_packets(struct work_struct *work) { struct delayed_work *delayed_work = container_of(work, struct delayed_work, work); - struct bat_priv *bat_priv = - container_of(delayed_work, struct bat_priv, vis_work); - struct vis_info *info; + struct batadv_priv *bat_priv; + struct batadv_vis_info *info; + bat_priv = container_of(delayed_work, struct batadv_priv, vis_work); spin_lock_bh(&bat_priv->vis_hash_lock); - purge_vis_packets(bat_priv); + batadv_purge_vis_packets(bat_priv); - if (generate_vis_packet(bat_priv) == 0) { + if (batadv_generate_vis_packet(bat_priv) == 0) { /* schedule if generation was successful */ - send_list_add(bat_priv, bat_priv->my_vis_info); + batadv_send_list_add(bat_priv, bat_priv->my_vis_info); } while (!list_empty(&bat_priv->vis_send_list)) { @@ -860,98 +835,103 @@ static void send_vis_packets(struct work_struct *work) kref_get(&info->refcount); spin_unlock_bh(&bat_priv->vis_hash_lock); - send_vis_packet(bat_priv, info); + batadv_send_vis_packet(bat_priv, info); spin_lock_bh(&bat_priv->vis_hash_lock); - send_list_del(info); - kref_put(&info->refcount, free_info); + batadv_send_list_del(info); + kref_put(&info->refcount, batadv_free_info); } spin_unlock_bh(&bat_priv->vis_hash_lock); - start_vis_timer(bat_priv); + batadv_start_vis_timer(bat_priv); } /* init the vis server. this may only be called when if_list is already - * initialized (e.g. bat0 is initialized, interfaces have been added) */ -int vis_init(struct bat_priv *bat_priv) + * initialized (e.g. bat0 is initialized, interfaces have been added) + */ +int batadv_vis_init(struct batadv_priv *bat_priv) { - struct vis_packet *packet; + struct batadv_vis_packet *packet; int hash_added; + unsigned int len; + unsigned long first_seen; + struct sk_buff *tmp_skb; if (bat_priv->vis_hash) - return 1; + return 0; spin_lock_bh(&bat_priv->vis_hash_lock); - bat_priv->vis_hash = hash_new(256); + bat_priv->vis_hash = batadv_hash_new(256); if (!bat_priv->vis_hash) { pr_err("Can't initialize vis_hash\n"); goto err; } - bat_priv->my_vis_info = kmalloc(MAX_VIS_PACKET_SIZE, GFP_ATOMIC); + bat_priv->my_vis_info = kmalloc(BATADV_MAX_VIS_PACKET_SIZE, GFP_ATOMIC); if (!bat_priv->my_vis_info) goto err; - bat_priv->my_vis_info->skb_packet = dev_alloc_skb(sizeof(*packet) + - MAX_VIS_PACKET_SIZE + - ETH_HLEN); + len = sizeof(*packet) + BATADV_MAX_VIS_PACKET_SIZE + ETH_HLEN; + bat_priv->my_vis_info->skb_packet = dev_alloc_skb(len); if (!bat_priv->my_vis_info->skb_packet) goto free_info; skb_reserve(bat_priv->my_vis_info->skb_packet, ETH_HLEN); - packet = (struct vis_packet *)skb_put(bat_priv->my_vis_info->skb_packet, - sizeof(*packet)); + tmp_skb = bat_priv->my_vis_info->skb_packet; + packet = (struct batadv_vis_packet *)skb_put(tmp_skb, sizeof(*packet)); /* prefill the vis info */ - bat_priv->my_vis_info->first_seen = jiffies - - msecs_to_jiffies(VIS_INTERVAL); + first_seen = jiffies - msecs_to_jiffies(BATADV_VIS_INTERVAL); + bat_priv->my_vis_info->first_seen = first_seen; INIT_LIST_HEAD(&bat_priv->my_vis_info->recv_list); INIT_LIST_HEAD(&bat_priv->my_vis_info->send_list); kref_init(&bat_priv->my_vis_info->refcount); bat_priv->my_vis_info->bat_priv = bat_priv; - packet->header.version = COMPAT_VERSION; - packet->header.packet_type = BAT_VIS; - packet->header.ttl = TTL; + packet->header.version = BATADV_COMPAT_VERSION; + packet->header.packet_type = BATADV_VIS; + packet->header.ttl = BATADV_TTL; packet->seqno = 0; + packet->reserved = 0; packet->entries = 0; 
INIT_LIST_HEAD(&bat_priv->vis_send_list); - hash_added = hash_add(bat_priv->vis_hash, vis_info_cmp, vis_info_choose, - bat_priv->my_vis_info, - &bat_priv->my_vis_info->hash_entry); + hash_added = batadv_hash_add(bat_priv->vis_hash, batadv_vis_info_cmp, + batadv_vis_info_choose, + bat_priv->my_vis_info, + &bat_priv->my_vis_info->hash_entry); if (hash_added != 0) { pr_err("Can't add own vis packet into hash\n"); /* not in hash, need to remove it manually. */ - kref_put(&bat_priv->my_vis_info->refcount, free_info); + kref_put(&bat_priv->my_vis_info->refcount, batadv_free_info); goto err; } spin_unlock_bh(&bat_priv->vis_hash_lock); - start_vis_timer(bat_priv); - return 1; + batadv_start_vis_timer(bat_priv); + return 0; free_info: kfree(bat_priv->my_vis_info); bat_priv->my_vis_info = NULL; err: spin_unlock_bh(&bat_priv->vis_hash_lock); - vis_quit(bat_priv); - return 0; + batadv_vis_quit(bat_priv); + return -ENOMEM; } /* Decrease the reference count on a hash item info */ -static void free_info_ref(struct hlist_node *node, void *arg) +static void batadv_free_info_ref(struct hlist_node *node, void *arg) { - struct vis_info *info; + struct batadv_vis_info *info; - info = container_of(node, struct vis_info, hash_entry); - send_list_del(info); - kref_put(&info->refcount, free_info); + info = container_of(node, struct batadv_vis_info, hash_entry); + batadv_send_list_del(info); + kref_put(&info->refcount, batadv_free_info); } /* shutdown vis-server */ -void vis_quit(struct bat_priv *bat_priv) +void batadv_vis_quit(struct batadv_priv *bat_priv) { if (!bat_priv->vis_hash) return; @@ -960,16 +940,16 @@ void vis_quit(struct bat_priv *bat_priv) spin_lock_bh(&bat_priv->vis_hash_lock); /* properly remove, kill timers ... */ - hash_delete(bat_priv->vis_hash, free_info_ref, NULL); + batadv_hash_delete(bat_priv->vis_hash, batadv_free_info_ref, NULL); bat_priv->vis_hash = NULL; bat_priv->my_vis_info = NULL; spin_unlock_bh(&bat_priv->vis_hash_lock); } /* schedule packets for (re)transmission */ -static void start_vis_timer(struct bat_priv *bat_priv) +static void batadv_start_vis_timer(struct batadv_priv *bat_priv) { - INIT_DELAYED_WORK(&bat_priv->vis_work, send_vis_packets); - queue_delayed_work(bat_event_workqueue, &bat_priv->vis_work, - msecs_to_jiffies(VIS_INTERVAL)); + INIT_DELAYED_WORK(&bat_priv->vis_work, batadv_send_vis_packets); + queue_delayed_work(batadv_event_workqueue, &bat_priv->vis_work, + msecs_to_jiffies(BATADV_VIS_INTERVAL)); } diff --git a/net/batman-adv/vis.h b/net/batman-adv/vis.h index ee2e46e5347b..84e716ed8963 100644 --- a/net/batman-adv/vis.h +++ b/net/batman-adv/vis.h @@ -1,5 +1,4 @@ -/* - * Copyright (C) 2008-2012 B.A.T.M.A.N. contributors: +/* Copyright (C) 2008-2012 B.A.T.M.A.N. 
contributors: * * Simon Wunderlich, Marek Lindner * @@ -16,23 +15,22 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA - * */ #ifndef _NET_BATMAN_ADV_VIS_H_ #define _NET_BATMAN_ADV_VIS_H_ -#define VIS_TIMEOUT 200000 /* timeout of vis packets - * in miliseconds */ +/* timeout of vis packets in miliseconds */ +#define BATADV_VIS_TIMEOUT 200000 -int vis_seq_print_text(struct seq_file *seq, void *offset); -void receive_server_sync_packet(struct bat_priv *bat_priv, - struct vis_packet *vis_packet, - int vis_info_len); -void receive_client_update_packet(struct bat_priv *bat_priv, - struct vis_packet *vis_packet, - int vis_info_len); -int vis_init(struct bat_priv *bat_priv); -void vis_quit(struct bat_priv *bat_priv); +int batadv_vis_seq_print_text(struct seq_file *seq, void *offset); +void batadv_receive_server_sync_packet(struct batadv_priv *bat_priv, + struct batadv_vis_packet *vis_packet, + int vis_info_len); +void batadv_receive_client_update_packet(struct batadv_priv *bat_priv, + struct batadv_vis_packet *vis_packet, + int vis_info_len); +int batadv_vis_init(struct batadv_priv *bat_priv); +void batadv_vis_quit(struct batadv_priv *bat_priv); #endif /* _NET_BATMAN_ADV_VIS_H_ */ diff --git a/net/bluetooth/Makefile b/net/bluetooth/Makefile index 2dc5a5700f53..fa6d94a4602a 100644 --- a/net/bluetooth/Makefile +++ b/net/bluetooth/Makefile @@ -9,4 +9,5 @@ obj-$(CONFIG_BT_CMTP) += cmtp/ obj-$(CONFIG_BT_HIDP) += hidp/ bluetooth-y := af_bluetooth.o hci_core.o hci_conn.o hci_event.o mgmt.o \ - hci_sock.o hci_sysfs.o l2cap_core.o l2cap_sock.o smp.o sco.o lib.o + hci_sock.o hci_sysfs.o l2cap_core.o l2cap_sock.o smp.o sco.o lib.o \ + a2mp.o diff --git a/net/bluetooth/a2mp.c b/net/bluetooth/a2mp.c new file mode 100644 index 000000000000..4ff0bf3ba9a5 --- /dev/null +++ b/net/bluetooth/a2mp.c @@ -0,0 +1,568 @@ +/* + Copyright (c) 2010,2011 Code Aurora Forum. All rights reserved. + Copyright (c) 2011,2012 Intel Corp. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License version 2 and + only version 2 as published by the Free Software Foundation. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. 
+*/ + +#include <net/bluetooth/bluetooth.h> +#include <net/bluetooth/hci_core.h> +#include <net/bluetooth/l2cap.h> +#include <net/bluetooth/a2mp.h> + +/* A2MP build & send command helper functions */ +static struct a2mp_cmd *__a2mp_build(u8 code, u8 ident, u16 len, void *data) +{ + struct a2mp_cmd *cmd; + int plen; + + plen = sizeof(*cmd) + len; + cmd = kzalloc(plen, GFP_KERNEL); + if (!cmd) + return NULL; + + cmd->code = code; + cmd->ident = ident; + cmd->len = cpu_to_le16(len); + + memcpy(cmd->data, data, len); + + return cmd; +} + +static void a2mp_send(struct amp_mgr *mgr, u8 code, u8 ident, u16 len, + void *data) +{ + struct l2cap_chan *chan = mgr->a2mp_chan; + struct a2mp_cmd *cmd; + u16 total_len = len + sizeof(*cmd); + struct kvec iv; + struct msghdr msg; + + cmd = __a2mp_build(code, ident, len, data); + if (!cmd) + return; + + iv.iov_base = cmd; + iv.iov_len = total_len; + + memset(&msg, 0, sizeof(msg)); + + msg.msg_iov = (struct iovec *) &iv; + msg.msg_iovlen = 1; + + l2cap_chan_send(chan, &msg, total_len, 0); + + kfree(cmd); +} + +static inline void __a2mp_cl_bredr(struct a2mp_cl *cl) +{ + cl->id = 0; + cl->type = 0; + cl->status = 1; +} + +/* hci_dev_list shall be locked */ +static void __a2mp_add_cl(struct amp_mgr *mgr, struct a2mp_cl *cl, u8 num_ctrl) +{ + int i = 0; + struct hci_dev *hdev; + + __a2mp_cl_bredr(cl); + + list_for_each_entry(hdev, &hci_dev_list, list) { + /* Iterate through AMP controllers */ + if (hdev->id == HCI_BREDR_ID) + continue; + + /* Starting from second entry */ + if (++i >= num_ctrl) + return; + + cl[i].id = hdev->id; + cl[i].type = hdev->amp_type; + cl[i].status = hdev->amp_status; + } +} + +/* Processing A2MP messages */ +static int a2mp_command_rej(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_cmd_rej *rej = (void *) skb->data; + + if (le16_to_cpu(hdr->len) < sizeof(*rej)) + return -EINVAL; + + BT_DBG("ident %d reason %d", hdr->ident, le16_to_cpu(rej->reason)); + + skb_pull(skb, sizeof(*rej)); + + return 0; +} + +static int a2mp_discover_req(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_discov_req *req = (void *) skb->data; + u16 len = le16_to_cpu(hdr->len); + struct a2mp_discov_rsp *rsp; + u16 ext_feat; + u8 num_ctrl; + + if (len < sizeof(*req)) + return -EINVAL; + + skb_pull(skb, sizeof(*req)); + + ext_feat = le16_to_cpu(req->ext_feat); + + BT_DBG("mtu %d efm 0x%4.4x", le16_to_cpu(req->mtu), ext_feat); + + /* check that packet is not broken for now */ + while (ext_feat & A2MP_FEAT_EXT) { + if (len < sizeof(ext_feat)) + return -EINVAL; + + ext_feat = get_unaligned_le16(skb->data); + BT_DBG("efm 0x%4.4x", ext_feat); + len -= sizeof(ext_feat); + skb_pull(skb, sizeof(ext_feat)); + } + + read_lock(&hci_dev_list_lock); + + num_ctrl = __hci_num_ctrl(); + len = num_ctrl * sizeof(struct a2mp_cl) + sizeof(*rsp); + rsp = kmalloc(len, GFP_ATOMIC); + if (!rsp) { + read_unlock(&hci_dev_list_lock); + return -ENOMEM; + } + + rsp->mtu = __constant_cpu_to_le16(L2CAP_A2MP_DEFAULT_MTU); + rsp->ext_feat = 0; + + __a2mp_add_cl(mgr, rsp->cl, num_ctrl); + + read_unlock(&hci_dev_list_lock); + + a2mp_send(mgr, A2MP_DISCOVER_RSP, hdr->ident, len, rsp); + + kfree(rsp); + return 0; +} + +static int a2mp_change_notify(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_cl *cl = (void *) skb->data; + + while (skb->len >= sizeof(*cl)) { + BT_DBG("Controller id %d type %d status %d", cl->id, cl->type, + cl->status); + cl = (struct a2mp_cl *) skb_pull(skb, sizeof(*cl)); + } + 
+ /* TODO send A2MP_CHANGE_RSP */ + + return 0; +} + +static int a2mp_getinfo_req(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_info_req *req = (void *) skb->data; + struct a2mp_info_rsp rsp; + struct hci_dev *hdev; + + if (le16_to_cpu(hdr->len) < sizeof(*req)) + return -EINVAL; + + BT_DBG("id %d", req->id); + + rsp.id = req->id; + rsp.status = A2MP_STATUS_INVALID_CTRL_ID; + + hdev = hci_dev_get(req->id); + if (hdev && hdev->amp_type != HCI_BREDR) { + rsp.status = 0; + rsp.total_bw = cpu_to_le32(hdev->amp_total_bw); + rsp.max_bw = cpu_to_le32(hdev->amp_max_bw); + rsp.min_latency = cpu_to_le32(hdev->amp_min_latency); + rsp.pal_cap = cpu_to_le16(hdev->amp_pal_cap); + rsp.assoc_size = cpu_to_le16(hdev->amp_assoc_size); + } + + if (hdev) + hci_dev_put(hdev); + + a2mp_send(mgr, A2MP_GETINFO_RSP, hdr->ident, sizeof(rsp), &rsp); + + skb_pull(skb, sizeof(*req)); + return 0; +} + +static int a2mp_getampassoc_req(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_amp_assoc_req *req = (void *) skb->data; + struct hci_dev *hdev; + + if (le16_to_cpu(hdr->len) < sizeof(*req)) + return -EINVAL; + + BT_DBG("id %d", req->id); + + hdev = hci_dev_get(req->id); + if (!hdev || hdev->amp_type == HCI_BREDR) { + struct a2mp_amp_assoc_rsp rsp; + rsp.id = req->id; + rsp.status = A2MP_STATUS_INVALID_CTRL_ID; + + a2mp_send(mgr, A2MP_GETAMPASSOC_RSP, hdr->ident, sizeof(rsp), + &rsp); + goto clean; + } + + /* Placeholder for HCI Read AMP Assoc */ + +clean: + if (hdev) + hci_dev_put(hdev); + + skb_pull(skb, sizeof(*req)); + return 0; +} + +static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_physlink_req *req = (void *) skb->data; + + struct a2mp_physlink_rsp rsp; + struct hci_dev *hdev; + + if (le16_to_cpu(hdr->len) < sizeof(*req)) + return -EINVAL; + + BT_DBG("local_id %d, remote_id %d", req->local_id, req->remote_id); + + rsp.local_id = req->remote_id; + rsp.remote_id = req->local_id; + + hdev = hci_dev_get(req->remote_id); + if (!hdev || hdev->amp_type != HCI_AMP) { + rsp.status = A2MP_STATUS_INVALID_CTRL_ID; + goto send_rsp; + } + + /* TODO process physlink create */ + + rsp.status = A2MP_STATUS_SUCCESS; + +send_rsp: + if (hdev) + hci_dev_put(hdev); + + a2mp_send(mgr, A2MP_CREATEPHYSLINK_RSP, hdr->ident, sizeof(rsp), + &rsp); + + skb_pull(skb, le16_to_cpu(hdr->len)); + return 0; +} + +static int a2mp_discphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + struct a2mp_physlink_req *req = (void *) skb->data; + struct a2mp_physlink_rsp rsp; + struct hci_dev *hdev; + + if (le16_to_cpu(hdr->len) < sizeof(*req)) + return -EINVAL; + + BT_DBG("local_id %d remote_id %d", req->local_id, req->remote_id); + + rsp.local_id = req->remote_id; + rsp.remote_id = req->local_id; + rsp.status = A2MP_STATUS_SUCCESS; + + hdev = hci_dev_get(req->local_id); + if (!hdev) { + rsp.status = A2MP_STATUS_INVALID_CTRL_ID; + goto send_rsp; + } + + /* TODO Disconnect Phys Link here */ + + hci_dev_put(hdev); + +send_rsp: + a2mp_send(mgr, A2MP_DISCONNPHYSLINK_RSP, hdr->ident, sizeof(rsp), &rsp); + + skb_pull(skb, sizeof(*req)); + return 0; +} + +static inline int a2mp_cmd_rsp(struct amp_mgr *mgr, struct sk_buff *skb, + struct a2mp_cmd *hdr) +{ + BT_DBG("ident %d code %d", hdr->ident, hdr->code); + + skb_pull(skb, le16_to_cpu(hdr->len)); + return 0; +} + +/* Handle A2MP signalling */ +static int a2mp_chan_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb) +{ + struct a2mp_cmd *hdr = 
(void *) skb->data; + struct amp_mgr *mgr = chan->data; + int err = 0; + + amp_mgr_get(mgr); + + while (skb->len >= sizeof(*hdr)) { + struct a2mp_cmd *hdr = (void *) skb->data; + u16 len = le16_to_cpu(hdr->len); + + BT_DBG("code 0x%02x id %d len %d", hdr->code, hdr->ident, len); + + skb_pull(skb, sizeof(*hdr)); + + if (len > skb->len || !hdr->ident) { + err = -EINVAL; + break; + } + + mgr->ident = hdr->ident; + + switch (hdr->code) { + case A2MP_COMMAND_REJ: + a2mp_command_rej(mgr, skb, hdr); + break; + + case A2MP_DISCOVER_REQ: + err = a2mp_discover_req(mgr, skb, hdr); + break; + + case A2MP_CHANGE_NOTIFY: + err = a2mp_change_notify(mgr, skb, hdr); + break; + + case A2MP_GETINFO_REQ: + err = a2mp_getinfo_req(mgr, skb, hdr); + break; + + case A2MP_GETAMPASSOC_REQ: + err = a2mp_getampassoc_req(mgr, skb, hdr); + break; + + case A2MP_CREATEPHYSLINK_REQ: + err = a2mp_createphyslink_req(mgr, skb, hdr); + break; + + case A2MP_DISCONNPHYSLINK_REQ: + err = a2mp_discphyslink_req(mgr, skb, hdr); + break; + + case A2MP_CHANGE_RSP: + case A2MP_DISCOVER_RSP: + case A2MP_GETINFO_RSP: + case A2MP_GETAMPASSOC_RSP: + case A2MP_CREATEPHYSLINK_RSP: + case A2MP_DISCONNPHYSLINK_RSP: + err = a2mp_cmd_rsp(mgr, skb, hdr); + break; + + default: + BT_ERR("Unknown A2MP sig cmd 0x%2.2x", hdr->code); + err = -EINVAL; + break; + } + } + + if (err) { + struct a2mp_cmd_rej rej; + rej.reason = __constant_cpu_to_le16(0); + + BT_DBG("Send A2MP Rej: cmd 0x%2.2x err %d", hdr->code, err); + + a2mp_send(mgr, A2MP_COMMAND_REJ, hdr->ident, sizeof(rej), + &rej); + } + + /* Always free skb and return success error code to prevent + from sending L2CAP Disconnect over A2MP channel */ + kfree_skb(skb); + + amp_mgr_put(mgr); + + return 0; +} + +static void a2mp_chan_close_cb(struct l2cap_chan *chan) +{ + l2cap_chan_destroy(chan); +} + +static void a2mp_chan_state_change_cb(struct l2cap_chan *chan, int state) +{ + struct amp_mgr *mgr = chan->data; + + if (!mgr) + return; + + BT_DBG("chan %p state %s", chan, state_to_string(state)); + + chan->state = state; + + switch (state) { + case BT_CLOSED: + if (mgr) + amp_mgr_put(mgr); + break; + } +} + +static struct sk_buff *a2mp_chan_alloc_skb_cb(struct l2cap_chan *chan, + unsigned long len, int nb) +{ + return bt_skb_alloc(len, GFP_KERNEL); +} + +static struct l2cap_ops a2mp_chan_ops = { + .name = "L2CAP A2MP channel", + .recv = a2mp_chan_recv_cb, + .close = a2mp_chan_close_cb, + .state_change = a2mp_chan_state_change_cb, + .alloc_skb = a2mp_chan_alloc_skb_cb, + + /* Not implemented for A2MP */ + .new_connection = l2cap_chan_no_new_connection, + .teardown = l2cap_chan_no_teardown, + .ready = l2cap_chan_no_ready, +}; + +static struct l2cap_chan *a2mp_chan_open(struct l2cap_conn *conn) +{ + struct l2cap_chan *chan; + int err; + + chan = l2cap_chan_create(); + if (!chan) + return NULL; + + BT_DBG("chan %p", chan); + + chan->chan_type = L2CAP_CHAN_CONN_FIX_A2MP; + chan->flush_to = L2CAP_DEFAULT_FLUSH_TO; + + chan->ops = &a2mp_chan_ops; + + l2cap_chan_set_defaults(chan); + chan->remote_max_tx = chan->max_tx; + chan->remote_tx_win = chan->tx_win; + + chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO; + chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO; + + skb_queue_head_init(&chan->tx_q); + + chan->mode = L2CAP_MODE_ERTM; + + err = l2cap_ertm_init(chan); + if (err < 0) { + l2cap_chan_del(chan, 0); + return NULL; + } + + chan->conf_state = 0; + + l2cap_chan_add(conn, chan); + + chan->remote_mps = chan->omtu; + chan->mps = chan->omtu; + + chan->state = BT_CONNECTED; + + return chan; +} + +/* AMP 
Manager functions */ +void amp_mgr_get(struct amp_mgr *mgr) +{ + BT_DBG("mgr %p orig refcnt %d", mgr, atomic_read(&mgr->kref.refcount)); + + kref_get(&mgr->kref); +} + +static void amp_mgr_destroy(struct kref *kref) +{ + struct amp_mgr *mgr = container_of(kref, struct amp_mgr, kref); + + BT_DBG("mgr %p", mgr); + + kfree(mgr); +} + +int amp_mgr_put(struct amp_mgr *mgr) +{ + BT_DBG("mgr %p orig refcnt %d", mgr, atomic_read(&mgr->kref.refcount)); + + return kref_put(&mgr->kref, &amp_mgr_destroy); +} + +static struct amp_mgr *amp_mgr_create(struct l2cap_conn *conn) +{ + struct amp_mgr *mgr; + struct l2cap_chan *chan; + + mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); + if (!mgr) + return NULL; + + BT_DBG("conn %p mgr %p", conn, mgr); + + mgr->l2cap_conn = conn; + + chan = a2mp_chan_open(conn); + if (!chan) { + kfree(mgr); + return NULL; + } + + mgr->a2mp_chan = chan; + chan->data = mgr; + + conn->hcon->amp_mgr = mgr; + + kref_init(&mgr->kref); + + return mgr; +} + +struct l2cap_chan *a2mp_channel_create(struct l2cap_conn *conn, + struct sk_buff *skb) +{ + struct amp_mgr *mgr; + + mgr = amp_mgr_create(conn); + if (!mgr) { + BT_ERR("Could not create AMP manager"); + return NULL; + } + + BT_DBG("mgr: %p chan %p", mgr, mgr->a2mp_chan); + + return mgr->a2mp_chan; +} diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c index 3e18af4dadc4..f7db5792ec64 100644 --- a/net/bluetooth/af_bluetooth.c +++ b/net/bluetooth/af_bluetooth.c @@ -25,18 +25,7 @@ /* Bluetooth address family and sockets. */ #include <linux/module.h> - -#include <linux/types.h> -#include <linux/list.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/skbuff.h> -#include <linux/init.h> -#include <linux/poll.h> -#include <net/sock.h> #include <asm/ioctls.h> -#include <linux/kmod.h> #include <net/bluetooth/bluetooth.h> @@ -418,7 +407,8 @@ static inline unsigned int bt_accept_poll(struct sock *parent) return 0; } -unsigned int bt_sock_poll(struct file *file, struct socket *sock, poll_table *wait) +unsigned int bt_sock_poll(struct file *file, struct socket *sock, + poll_table *wait) { struct sock *sk = sock->sk; unsigned int mask = 0; diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c index 031d7d656754..4a6620bc1570 100644 --- a/net/bluetooth/bnep/core.c +++ b/net/bluetooth/bnep/core.c @@ -26,26 +26,9 @@ */ #include <linux/module.h> - -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/signal.h> -#include <linux/init.h> -#include <linux/wait.h> -#include <linux/freezer.h> -#include <linux/errno.h> -#include <linux/net.h> -#include <linux/slab.h> #include <linux/kthread.h> -#include <net/sock.h> - -#include <linux/socket.h> #include <linux/file.h> - -#include <linux/netdevice.h> #include <linux/etherdevice.h> -#include <linux/skbuff.h> - #include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> @@ -306,7 +289,7 @@ static u8 __bnep_rx_hlen[] = { ETH_ALEN + 2 /* BNEP_COMPRESSED_DST_ONLY */ }; -static inline int bnep_rx_frame(struct bnep_session *s, struct sk_buff *skb) +static int bnep_rx_frame(struct bnep_session *s, struct sk_buff *skb) { struct net_device *dev = s->dev; struct sk_buff *nskb; @@ -404,7 +387,7 @@ static u8 __bnep_tx_types[] = { BNEP_COMPRESSED }; -static inline int bnep_tx_frame(struct bnep_session *s, struct sk_buff *skb) +static int bnep_tx_frame(struct bnep_session *s, struct sk_buff *skb) { struct ethhdr *eh = (void *) skb->data; struct socket *sock = s->sock; diff --git a/net/bluetooth/bnep/netdev.c
b/net/bluetooth/bnep/netdev.c index bc4086480d97..98f86f91d47c 100644 --- a/net/bluetooth/bnep/netdev.c +++ b/net/bluetooth/bnep/netdev.c @@ -25,16 +25,8 @@ SOFTWARE IS DISCLAIMED. */ -#include <linux/module.h> -#include <linux/slab.h> - -#include <linux/socket.h> -#include <linux/netdevice.h> +#include <linux/export.h> #include <linux/etherdevice.h> -#include <linux/skbuff.h> -#include <linux/wait.h> - -#include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> @@ -128,7 +120,7 @@ static void bnep_net_timeout(struct net_device *dev) } #ifdef CONFIG_BT_BNEP_MC_FILTER -static inline int bnep_net_mc_filter(struct sk_buff *skb, struct bnep_session *s) +static int bnep_net_mc_filter(struct sk_buff *skb, struct bnep_session *s) { struct ethhdr *eh = (void *) skb->data; @@ -140,7 +132,7 @@ static inline int bnep_net_mc_filter(struct sk_buff *skb, struct bnep_session *s #ifdef CONFIG_BT_BNEP_PROTO_FILTER /* Determine ether protocol. Based on eth_type_trans. */ -static inline u16 bnep_net_eth_proto(struct sk_buff *skb) +static u16 bnep_net_eth_proto(struct sk_buff *skb) { struct ethhdr *eh = (void *) skb->data; u16 proto = ntohs(eh->h_proto); @@ -154,7 +146,7 @@ static inline u16 bnep_net_eth_proto(struct sk_buff *skb) return ETH_P_802_2; } -static inline int bnep_net_proto_filter(struct sk_buff *skb, struct bnep_session *s) +static int bnep_net_proto_filter(struct sk_buff *skb, struct bnep_session *s) { u16 proto = bnep_net_eth_proto(skb); struct bnep_proto_filter *f = s->proto_filter; diff --git a/net/bluetooth/bnep/sock.c b/net/bluetooth/bnep/sock.c index 180bfc45810d..5e5f5b410e0b 100644 --- a/net/bluetooth/bnep/sock.c +++ b/net/bluetooth/bnep/sock.c @@ -24,24 +24,8 @@ SOFTWARE IS DISCLAIMED. */ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/capability.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/skbuff.h> -#include <linux/socket.h> -#include <linux/ioctl.h> +#include <linux/export.h> #include <linux/file.h> -#include <linux/init.h> -#include <linux/compat.h> -#include <linux/gfp.h> -#include <linux/uaccess.h> -#include <net/sock.h> - #include "bnep.h" diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index 3f18a6ed9731..5ad7da217474 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -24,24 +24,11 @@ /* Bluetooth HCI connection handling. 
*/ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/skbuff.h> -#include <linux/interrupt.h> -#include <net/sock.h> - -#include <linux/uaccess.h> -#include <asm/unaligned.h> +#include <linux/export.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> +#include <net/bluetooth/a2mp.h> static void hci_le_connect(struct hci_conn *conn) { @@ -54,15 +41,15 @@ static void hci_le_connect(struct hci_conn *conn) conn->sec_level = BT_SECURITY_LOW; memset(&cp, 0, sizeof(cp)); - cp.scan_interval = cpu_to_le16(0x0060); - cp.scan_window = cpu_to_le16(0x0030); + cp.scan_interval = __constant_cpu_to_le16(0x0060); + cp.scan_window = __constant_cpu_to_le16(0x0030); bacpy(&cp.peer_addr, &conn->dst); cp.peer_addr_type = conn->dst_type; - cp.conn_interval_min = cpu_to_le16(0x0028); - cp.conn_interval_max = cpu_to_le16(0x0038); - cp.supervision_timeout = cpu_to_le16(0x002a); - cp.min_ce_len = cpu_to_le16(0x0000); - cp.max_ce_len = cpu_to_le16(0x0000); + cp.conn_interval_min = __constant_cpu_to_le16(0x0028); + cp.conn_interval_max = __constant_cpu_to_le16(0x0038); + cp.supervision_timeout = __constant_cpu_to_le16(0x002a); + cp.min_ce_len = __constant_cpu_to_le16(0x0000); + cp.max_ce_len = __constant_cpu_to_le16(0x0000); hci_send_cmd(hdev, HCI_OP_LE_CREATE_CONN, sizeof(cp), &cp); } @@ -99,7 +86,7 @@ void hci_acl_connect(struct hci_conn *conn) cp.pscan_rep_mode = ie->data.pscan_rep_mode; cp.pscan_mode = ie->data.pscan_mode; cp.clock_offset = ie->data.clock_offset | - cpu_to_le16(0x8000); + __constant_cpu_to_le16(0x8000); } memcpy(conn->dev_class, ie->data.dev_class, 3); @@ -120,7 +107,7 @@ static void hci_acl_connect_cancel(struct hci_conn *conn) { struct hci_cp_create_conn_cancel cp; - BT_DBG("%p", conn); + BT_DBG("hcon %p", conn); if (conn->hdev->hci_ver < BLUETOOTH_VER_1_2) return; @@ -133,7 +120,7 @@ void hci_acl_disconn(struct hci_conn *conn, __u8 reason) { struct hci_cp_disconnect cp; - BT_DBG("%p", conn); + BT_DBG("hcon %p", conn); conn->state = BT_DISCONN; @@ -147,7 +134,7 @@ void hci_add_sco(struct hci_conn *conn, __u16 handle) struct hci_dev *hdev = conn->hdev; struct hci_cp_add_sco cp; - BT_DBG("%p", conn); + BT_DBG("hcon %p", conn); conn->state = BT_CONNECT; conn->out = true; @@ -165,7 +152,7 @@ void hci_setup_sync(struct hci_conn *conn, __u16 handle) struct hci_dev *hdev = conn->hdev; struct hci_cp_setup_sync_conn cp; - BT_DBG("%p", conn); + BT_DBG("hcon %p", conn); conn->state = BT_CONNECT; conn->out = true; @@ -175,9 +162,9 @@ void hci_setup_sync(struct hci_conn *conn, __u16 handle) cp.handle = cpu_to_le16(handle); cp.pkt_type = cpu_to_le16(conn->pkt_type); - cp.tx_bandwidth = cpu_to_le32(0x00001f40); - cp.rx_bandwidth = cpu_to_le32(0x00001f40); - cp.max_latency = cpu_to_le16(0xffff); + cp.tx_bandwidth = __constant_cpu_to_le32(0x00001f40); + cp.rx_bandwidth = __constant_cpu_to_le32(0x00001f40); + cp.max_latency = __constant_cpu_to_le16(0xffff); cp.voice_setting = cpu_to_le16(hdev->voice_setting); cp.retrans_effort = 0xff; @@ -185,7 +172,7 @@ void hci_setup_sync(struct hci_conn *conn, __u16 handle) } void hci_le_conn_update(struct hci_conn *conn, u16 min, u16 max, - u16 latency, u16 to_multiplier) + u16 latency, u16 to_multiplier) { struct hci_cp_le_conn_update cp; struct hci_dev *hdev = conn->hdev; @@ -197,20 +184,19 @@ void hci_le_conn_update(struct hci_conn *conn, u16 min, u16 max, cp.conn_interval_max 
= cpu_to_le16(max); cp.conn_latency = cpu_to_le16(latency); cp.supervision_timeout = cpu_to_le16(to_multiplier); - cp.min_ce_len = cpu_to_le16(0x0001); - cp.max_ce_len = cpu_to_le16(0x0001); + cp.min_ce_len = __constant_cpu_to_le16(0x0001); + cp.max_ce_len = __constant_cpu_to_le16(0x0001); hci_send_cmd(hdev, HCI_OP_LE_CONN_UPDATE, sizeof(cp), &cp); } -EXPORT_SYMBOL(hci_le_conn_update); void hci_le_start_enc(struct hci_conn *conn, __le16 ediv, __u8 rand[8], - __u8 ltk[16]) + __u8 ltk[16]) { struct hci_dev *hdev = conn->hdev; struct hci_cp_le_start_enc cp; - BT_DBG("%p", conn); + BT_DBG("hcon %p", conn); memset(&cp, 0, sizeof(cp)); @@ -221,18 +207,17 @@ void hci_le_start_enc(struct hci_conn *conn, __le16 ediv, __u8 rand[8], hci_send_cmd(hdev, HCI_OP_LE_START_ENC, sizeof(cp), &cp); } -EXPORT_SYMBOL(hci_le_start_enc); /* Device _must_ be locked */ void hci_sco_setup(struct hci_conn *conn, __u8 status) { struct hci_conn *sco = conn->link; - BT_DBG("%p", conn); - if (!sco) return; + BT_DBG("hcon %p", conn); + if (!status) { if (lmp_esco_capable(conn->hdev)) hci_setup_sync(sco, conn->handle); @@ -247,10 +232,10 @@ void hci_sco_setup(struct hci_conn *conn, __u8 status) static void hci_conn_timeout(struct work_struct *work) { struct hci_conn *conn = container_of(work, struct hci_conn, - disc_work.work); + disc_work.work); __u8 reason; - BT_DBG("conn %p state %s", conn, state_to_string(conn->state)); + BT_DBG("hcon %p state %s", conn, state_to_string(conn->state)); if (atomic_read(&conn->refcnt)) return; @@ -281,7 +266,7 @@ static void hci_conn_enter_sniff_mode(struct hci_conn *conn) { struct hci_dev *hdev = conn->hdev; - BT_DBG("conn %p mode %d", conn, conn->mode); + BT_DBG("hcon %p mode %d", conn, conn->mode); if (test_bit(HCI_RAW, &hdev->flags)) return; @@ -295,9 +280,9 @@ static void hci_conn_enter_sniff_mode(struct hci_conn *conn) if (lmp_sniffsubr_capable(hdev) && lmp_sniffsubr_capable(conn)) { struct hci_cp_sniff_subrate cp; cp.handle = cpu_to_le16(conn->handle); - cp.max_latency = cpu_to_le16(0); - cp.min_remote_timeout = cpu_to_le16(0); - cp.min_local_timeout = cpu_to_le16(0); + cp.max_latency = __constant_cpu_to_le16(0); + cp.min_remote_timeout = __constant_cpu_to_le16(0); + cp.min_local_timeout = __constant_cpu_to_le16(0); hci_send_cmd(hdev, HCI_OP_SNIFF_SUBRATE, sizeof(cp), &cp); } @@ -306,8 +291,8 @@ static void hci_conn_enter_sniff_mode(struct hci_conn *conn) cp.handle = cpu_to_le16(conn->handle); cp.max_interval = cpu_to_le16(hdev->sniff_max_interval); cp.min_interval = cpu_to_le16(hdev->sniff_min_interval); - cp.attempt = cpu_to_le16(4); - cp.timeout = cpu_to_le16(1); + cp.attempt = __constant_cpu_to_le16(4); + cp.timeout = __constant_cpu_to_le16(1); hci_send_cmd(hdev, HCI_OP_SNIFF_MODE, sizeof(cp), &cp); } } @@ -316,7 +301,7 @@ static void hci_conn_idle(unsigned long arg) { struct hci_conn *conn = (void *) arg; - BT_DBG("conn %p mode %d", conn, conn->mode); + BT_DBG("hcon %p mode %d", conn, conn->mode); hci_conn_enter_sniff_mode(conn); } @@ -327,7 +312,7 @@ static void hci_conn_auto_accept(unsigned long arg) struct hci_dev *hdev = conn->hdev; hci_send_cmd(hdev, HCI_OP_USER_CONFIRM_REPLY, sizeof(conn->dst), - &conn->dst); + &conn->dst); } struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst) @@ -376,7 +361,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst) INIT_DELAYED_WORK(&conn->disc_work, hci_conn_timeout); setup_timer(&conn->idle_timer, hci_conn_idle, (unsigned long)conn); setup_timer(&conn->auto_accept_timer, 
hci_conn_auto_accept, - (unsigned long) conn); + (unsigned long) conn); atomic_set(&conn->refcnt, 0); @@ -397,7 +382,7 @@ int hci_conn_del(struct hci_conn *conn) { struct hci_dev *hdev = conn->hdev; - BT_DBG("%s conn %p handle %d", hdev->name, conn, conn->handle); + BT_DBG("%s hcon %p handle %d", hdev->name, conn, conn->handle); del_timer(&conn->idle_timer); @@ -425,9 +410,11 @@ int hci_conn_del(struct hci_conn *conn) } } - hci_chan_list_flush(conn); + if (conn->amp_mgr) + amp_mgr_put(conn->amp_mgr); + hci_conn_hash_del(hdev, conn); if (hdev->notify) hdev->notify(hdev, HCI_NOTIFY_CONN_DEL); @@ -454,7 +441,9 @@ struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src) read_lock(&hci_dev_list_lock); list_for_each_entry(d, &hci_dev_list, list) { - if (!test_bit(HCI_UP, &d->flags) || test_bit(HCI_RAW, &d->flags)) + if (!test_bit(HCI_UP, &d->flags) || + test_bit(HCI_RAW, &d->flags) || + d->dev_type != HCI_BREDR) continue; /* Simple routing: @@ -495,6 +484,11 @@ struct hci_conn *hci_connect(struct hci_dev *hdev, int type, bdaddr_t *dst, if (type == LE_LINK) { le = hci_conn_hash_lookup_ba(hdev, LE_LINK, dst); if (!le) { + le = hci_conn_hash_lookup_state(hdev, LE_LINK, + BT_CONNECT); + if (le) + return ERR_PTR(-EBUSY); + le = hci_conn_add(hdev, LE_LINK, dst); if (!le) return ERR_PTR(-ENOMEM); @@ -545,7 +539,7 @@ struct hci_conn *hci_connect(struct hci_dev *hdev, int type, bdaddr_t *dst, hci_conn_hold(sco); if (acl->state == BT_CONNECTED && - (sco->state == BT_OPEN || sco->state == BT_CLOSED)) { + (sco->state == BT_OPEN || sco->state == BT_CLOSED)) { set_bit(HCI_CONN_POWER_SAVE, &acl->flags); hci_conn_enter_active_mode(acl, BT_POWER_FORCE_ACTIVE_ON); @@ -560,24 +554,22 @@ struct hci_conn *hci_connect(struct hci_dev *hdev, int type, bdaddr_t *dst, return sco; } -EXPORT_SYMBOL(hci_connect); /* Check link security requirement */ int hci_conn_check_link_mode(struct hci_conn *conn) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (hci_conn_ssp_enabled(conn) && !(conn->link_mode & HCI_LM_ENCRYPT)) return 0; return 1; } -EXPORT_SYMBOL(hci_conn_check_link_mode); /* Authenticate remote device */ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (conn->pending_sec_level > sec_level) sec_level = conn->pending_sec_level; @@ -600,7 +592,7 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED, - sizeof(cp), &cp); + sizeof(cp), &cp); if (conn->key_type != 0xff) set_bit(HCI_CONN_REAUTH_PEND, &conn->flags); } @@ -611,21 +603,21 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) /* Encrypt the the link */ static void hci_conn_encrypt(struct hci_conn *conn) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (!test_and_set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags)) { struct hci_cp_set_conn_encrypt cp; cp.handle = cpu_to_le16(conn->handle); cp.encrypt = 0x01; hci_send_cmd(conn->hdev, HCI_OP_SET_CONN_ENCRYPT, sizeof(cp), - &cp); + &cp); } } /* Enable security */ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); /* For sdp we don't need the link key. */ if (sec_level == BT_SECURITY_SDP) @@ -648,8 +640,7 @@ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) /* An unauthenticated combination key has sufficient security for security level 1 and 2. 
*/ if (conn->key_type == HCI_LK_UNAUTH_COMBINATION && - (sec_level == BT_SECURITY_MEDIUM || - sec_level == BT_SECURITY_LOW)) + (sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW)) goto encrypt; /* A combination key has always sufficient security for the security @@ -657,8 +648,7 @@ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) is generated using maximum PIN code length (16). For pre 2.1 units. */ if (conn->key_type == HCI_LK_COMBINATION && - (sec_level != BT_SECURITY_HIGH || - conn->pin_length == 16)) + (sec_level != BT_SECURITY_HIGH || conn->pin_length == 16)) goto encrypt; auth: @@ -680,7 +670,7 @@ EXPORT_SYMBOL(hci_conn_security); /* Check secure link requirement */ int hci_conn_check_secure(struct hci_conn *conn, __u8 sec_level) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (sec_level != BT_SECURITY_HIGH) return 1; /* Accept if non-secure is required */ @@ -695,23 +685,22 @@ EXPORT_SYMBOL(hci_conn_check_secure); /* Change link key */ int hci_conn_change_link_key(struct hci_conn *conn) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { struct hci_cp_change_conn_link_key cp; cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(conn->hdev, HCI_OP_CHANGE_CONN_LINK_KEY, - sizeof(cp), &cp); + sizeof(cp), &cp); } return 0; } -EXPORT_SYMBOL(hci_conn_change_link_key); /* Switch role */ int hci_conn_switch_role(struct hci_conn *conn, __u8 role) { - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); if (!role && conn->link_mode & HCI_LM_MASTER) return 1; @@ -732,7 +721,7 @@ void hci_conn_enter_active_mode(struct hci_conn *conn, __u8 force_active) { struct hci_dev *hdev = conn->hdev; - BT_DBG("conn %p mode %d", conn, conn->mode); + BT_DBG("hcon %p mode %d", conn, conn->mode); if (test_bit(HCI_RAW, &hdev->flags)) return; @@ -752,7 +741,7 @@ void hci_conn_enter_active_mode(struct hci_conn *conn, __u8 force_active) timer: if (hdev->idle_timeout > 0) mod_timer(&conn->idle_timer, - jiffies + msecs_to_jiffies(hdev->idle_timeout)); + jiffies + msecs_to_jiffies(hdev->idle_timeout)); } /* Drop all connection on the device */ @@ -802,7 +791,7 @@ EXPORT_SYMBOL(hci_conn_put_device); int hci_get_conn_list(void __user *arg) { - register struct hci_conn *c; + struct hci_conn *c; struct hci_conn_list_req req, *cl; struct hci_conn_info *ci; struct hci_dev *hdev; @@ -906,7 +895,7 @@ struct hci_chan *hci_chan_create(struct hci_conn *conn) struct hci_dev *hdev = conn->hdev; struct hci_chan *chan; - BT_DBG("%s conn %p", hdev->name, conn); + BT_DBG("%s hcon %p", hdev->name, conn); chan = kzalloc(sizeof(struct hci_chan), GFP_KERNEL); if (!chan) @@ -925,7 +914,7 @@ int hci_chan_del(struct hci_chan *chan) struct hci_conn *conn = chan->conn; struct hci_dev *hdev = conn->hdev; - BT_DBG("%s conn %p chan %p", hdev->name, conn, chan); + BT_DBG("%s hcon %p chan %p", hdev->name, conn, chan); list_del_rcu(&chan->list); @@ -941,7 +930,7 @@ void hci_chan_list_flush(struct hci_conn *conn) { struct hci_chan *chan, *n; - BT_DBG("conn %p", conn); + BT_DBG("hcon %p", conn); list_for_each_entry_safe(chan, n, &conn->chan_list, list) hci_chan_del(chan); diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 411ace8e647b..d4de5db18d5a 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -25,34 +25,14 @@ /* Bluetooth HCI core. 
*/ -#include <linux/jiffies.h> -#include <linux/module.h> -#include <linux/kmod.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/skbuff.h> -#include <linux/workqueue.h> -#include <linux/interrupt.h> -#include <linux/rfkill.h> -#include <linux/timer.h> -#include <linux/crypto.h> -#include <net/sock.h> +#include <linux/export.h> +#include <linux/idr.h> -#include <linux/uaccess.h> -#include <asm/unaligned.h> +#include <linux/rfkill.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> -#define AUTO_OFF_TIMEOUT 2000 - static void hci_rx_work(struct work_struct *work); static void hci_cmd_work(struct work_struct *work); static void hci_tx_work(struct work_struct *work); @@ -65,6 +45,9 @@ DEFINE_RWLOCK(hci_dev_list_lock); LIST_HEAD(hci_cb_list); DEFINE_RWLOCK(hci_cb_list_lock); +/* HCI ID Numbering */ +static DEFINE_IDA(hci_index_ida); + /* ---- HCI notifications ---- */ static void hci_notify(struct hci_dev *hdev, int event) @@ -76,7 +59,7 @@ static void hci_notify(struct hci_dev *hdev, int event) void hci_req_complete(struct hci_dev *hdev, __u16 cmd, int result) { - BT_DBG("%s command 0x%04x result 0x%2.2x", hdev->name, cmd, result); + BT_DBG("%s command 0x%4.4x result 0x%2.2x", hdev->name, cmd, result); /* If this is the init phase check if the completed command matches * the last init command, and if not just return. @@ -124,8 +107,9 @@ static void hci_req_cancel(struct hci_dev *hdev, int err) } /* Execute request and wait for completion. */ -static int __hci_request(struct hci_dev *hdev, void (*req)(struct hci_dev *hdev, unsigned long opt), - unsigned long opt, __u32 timeout) +static int __hci_request(struct hci_dev *hdev, + void (*req)(struct hci_dev *hdev, unsigned long opt), + unsigned long opt, __u32 timeout) { DECLARE_WAITQUEUE(wait, current); int err = 0; @@ -166,8 +150,9 @@ static int __hci_request(struct hci_dev *hdev, void (*req)(struct hci_dev *hdev, return err; } -static inline int hci_request(struct hci_dev *hdev, void (*req)(struct hci_dev *hdev, unsigned long opt), - unsigned long opt, __u32 timeout) +static int hci_request(struct hci_dev *hdev, + void (*req)(struct hci_dev *hdev, unsigned long opt), + unsigned long opt, __u32 timeout) { int ret; @@ -201,12 +186,6 @@ static void bredr_init(struct hci_dev *hdev) /* Mandatory initialization */ - /* Reset */ - if (!test_bit(HCI_QUIRK_NO_RESET, &hdev->quirks)) { - set_bit(HCI_RESET, &hdev->flags); - hci_send_cmd(hdev, HCI_OP_RESET, 0, NULL); - } - /* Read Local Supported Features */ hci_send_cmd(hdev, HCI_OP_READ_LOCAL_FEATURES, 0, NULL); @@ -235,7 +214,7 @@ static void bredr_init(struct hci_dev *hdev) hci_send_cmd(hdev, HCI_OP_SET_EVENT_FLT, 1, &flt_type); /* Connection accept timeout ~20 secs */ - param = cpu_to_le16(0x7d00); + param = __constant_cpu_to_le16(0x7d00); hci_send_cmd(hdev, HCI_OP_WRITE_CA_TIMEOUT, 2, &param); bacpy(&cp.bdaddr, BDADDR_ANY); @@ -247,9 +226,6 @@ static void amp_init(struct hci_dev *hdev) { hdev->flow_ctl_mode = HCI_FLOW_CTL_MODE_BLOCK_BASED; - /* Reset */ - hci_send_cmd(hdev, HCI_OP_RESET, 0, NULL); - /* Read Local Version */ hci_send_cmd(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL); @@ -275,6 +251,10 @@ static void hci_init_req(struct hci_dev *hdev, unsigned long opt) } skb_queue_purge(&hdev->driver_init); + /* Reset */ + if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) + hci_reset_req(hdev, 0); + switch
(hdev->dev_type) { case HCI_BREDR: bredr_init(hdev); @@ -417,7 +397,8 @@ static void inquiry_cache_flush(struct hci_dev *hdev) INIT_LIST_HEAD(&cache->resolve); } -struct inquiry_entry *hci_inquiry_cache_lookup(struct hci_dev *hdev, bdaddr_t *bdaddr) +struct inquiry_entry *hci_inquiry_cache_lookup(struct hci_dev *hdev, + bdaddr_t *bdaddr) { struct discovery_state *cache = &hdev->discovery; struct inquiry_entry *e; @@ -478,7 +459,7 @@ void hci_inquiry_cache_update_resolve(struct hci_dev *hdev, list_for_each_entry(p, &cache->resolve, list) { if (p->name_state != NAME_PENDING && - abs(p->data.rssi) >= abs(ie->data.rssi)) + abs(p->data.rssi) >= abs(ie->data.rssi)) break; pos = &p->list; } @@ -503,7 +484,7 @@ bool hci_inquiry_cache_update(struct hci_dev *hdev, struct inquiry_data *data, *ssp = true; if (ie->name_state == NAME_NEEDED && - data->rssi != ie->data.rssi) { + data->rssi != ie->data.rssi) { ie->data.rssi = data->rssi; hci_inquiry_cache_update_resolve(hdev, ie); } @@ -527,7 +508,7 @@ bool hci_inquiry_cache_update(struct hci_dev *hdev, struct inquiry_data *data, update: if (name_known && ie->name_state != NAME_KNOWN && - ie->name_state != NAME_PENDING) { + ie->name_state != NAME_PENDING) { ie->name_state = NAME_KNOWN; list_del(&ie->list); } @@ -605,8 +586,7 @@ int hci_inquiry(void __user *arg) hci_dev_lock(hdev); if (inquiry_cache_age(hdev) > INQUIRY_CACHE_AGE_MAX || - inquiry_cache_empty(hdev) || - ir.flags & IREQ_CACHE_FLUSH) { + inquiry_cache_empty(hdev) || ir.flags & IREQ_CACHE_FLUSH) { inquiry_cache_flush(hdev); do_inquiry = 1; } @@ -620,7 +600,9 @@ int hci_inquiry(void __user *arg) goto done; } - /* for unlimited number of responses we will use buffer with 255 entries */ + /* for unlimited number of responses we will use buffer with + * 255 entries + */ max_rsp = (ir.num_rsp == 0) ? 255 : ir.num_rsp; /* cache_dump can't sleep. 
Therefore we allocate temp buffer and then @@ -641,7 +623,7 @@ int hci_inquiry(void __user *arg) if (!copy_to_user(ptr, &ir, sizeof(ir))) { ptr += sizeof(ir); if (copy_to_user(ptr, buf, sizeof(struct inquiry_info) * - ir.num_rsp)) + ir.num_rsp)) err = -EFAULT; } else err = -EFAULT; @@ -701,12 +683,11 @@ int hci_dev_open(__u16 dev) set_bit(HCI_INIT, &hdev->flags); hdev->init_last_cmd = 0; - ret = __hci_request(hdev, hci_init_req, 0, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + ret = __hci_request(hdev, hci_init_req, 0, HCI_INIT_TIMEOUT); if (lmp_host_le_capable(hdev)) ret = __hci_request(hdev, hci_le_init_req, 0, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); clear_bit(HCI_INIT, &hdev->flags); } @@ -791,10 +772,9 @@ static int hci_dev_do_close(struct hci_dev *hdev) skb_queue_purge(&hdev->cmd_q); atomic_set(&hdev->cmd_cnt, 1); if (!test_bit(HCI_RAW, &hdev->flags) && - test_bit(HCI_QUIRK_NO_RESET, &hdev->quirks)) { + test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) { set_bit(HCI_INIT, &hdev->flags); - __hci_request(hdev, hci_reset_req, 0, - msecs_to_jiffies(250)); + __hci_request(hdev, hci_reset_req, 0, HCI_CMD_TIMEOUT); clear_bit(HCI_INIT, &hdev->flags); } @@ -883,8 +863,7 @@ int hci_dev_reset(__u16 dev) hdev->acl_cnt = 0; hdev->sco_cnt = 0; hdev->le_cnt = 0; if (!test_bit(HCI_RAW, &hdev->flags)) - ret = __hci_request(hdev, hci_reset_req, 0, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + ret = __hci_request(hdev, hci_reset_req, 0, HCI_INIT_TIMEOUT); done: hci_req_unlock(hdev); @@ -924,7 +903,7 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg) switch (cmd) { case HCISETAUTH: err = hci_request(hdev, hci_auth_req, dr.dev_opt, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); break; case HCISETENCRYPT: @@ -936,23 +915,23 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg) if (!test_bit(HCI_AUTH, &hdev->flags)) { /* Auth must be enabled first */ err = hci_request(hdev, hci_auth_req, dr.dev_opt, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); if (err) break; } err = hci_request(hdev, hci_encrypt_req, dr.dev_opt, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); break; case HCISETSCAN: err = hci_request(hdev, hci_scan_req, dr.dev_opt, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); break; case HCISETLINKPOL: err = hci_request(hdev, hci_linkpol_req, dr.dev_opt, - msecs_to_jiffies(HCI_INIT_TIMEOUT)); + HCI_INIT_TIMEOUT); break; case HCISETLINKMODE: @@ -1102,8 +1081,7 @@ static void hci_power_on(struct work_struct *work) return; if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) - schedule_delayed_work(&hdev->power_off, - msecs_to_jiffies(AUTO_OFF_TIMEOUT)); + schedule_delayed_work(&hdev->power_off, HCI_AUTO_OFF_TIMEOUT); if (test_and_clear_bit(HCI_SETUP, &hdev->dev_flags)) mgmt_index_added(hdev); @@ -1112,7 +1090,7 @@ static void hci_power_on(struct work_struct *work) static void hci_power_off(struct work_struct *work) { struct hci_dev *hdev = container_of(work, struct hci_dev, - power_off.work); + power_off.work); BT_DBG("%s", hdev->name); @@ -1193,7 +1171,7 @@ struct link_key *hci_find_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr) } static bool hci_persistent_key(struct hci_dev *hdev, struct hci_conn *conn, - u8 key_type, u8 old_key_type) + u8 key_type, u8 old_key_type) { /* Legacy key */ if (key_type < 0x03) @@ -1234,7 +1212,7 @@ struct smp_ltk *hci_find_ltk(struct hci_dev *hdev, __le16 ediv, u8 rand[8]) list_for_each_entry(k, &hdev->long_term_keys, list) { if (k->ediv != ediv || - memcmp(rand, k->rand, sizeof(k->rand))) + memcmp(rand, 
k->rand, sizeof(k->rand))) continue; return k; @@ -1242,7 +1220,6 @@ struct smp_ltk *hci_find_ltk(struct hci_dev *hdev, __le16 ediv, u8 rand[8]) return NULL; } -EXPORT_SYMBOL(hci_find_ltk); struct smp_ltk *hci_find_ltk_by_addr(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type) @@ -1251,12 +1228,11 @@ struct smp_ltk *hci_find_ltk_by_addr(struct hci_dev *hdev, bdaddr_t *bdaddr, list_for_each_entry(k, &hdev->long_term_keys, list) if (addr_type == k->bdaddr_type && - bacmp(bdaddr, &k->bdaddr) == 0) + bacmp(bdaddr, &k->bdaddr) == 0) return k; return NULL; } -EXPORT_SYMBOL(hci_find_ltk_by_addr); int hci_add_link_key(struct hci_dev *hdev, struct hci_conn *conn, int new_key, bdaddr_t *bdaddr, u8 *val, u8 type, u8 pin_len) @@ -1283,15 +1259,14 @@ int hci_add_link_key(struct hci_dev *hdev, struct hci_conn *conn, int new_key, * combination key for legacy pairing even when there's no * previous key */ if (type == HCI_LK_CHANGED_COMBINATION && - (!conn || conn->remote_auth == 0xff) && - old_key_type == 0xff) { + (!conn || conn->remote_auth == 0xff) && old_key_type == 0xff) { type = HCI_LK_COMBINATION; if (conn) conn->key_type = type; } bacpy(&key->bdaddr, bdaddr); - memcpy(key->val, val, 16); + memcpy(key->val, val, HCI_LINK_KEY_SIZE); key->pin_len = pin_len; if (type == HCI_LK_CHANGED_COMBINATION) @@ -1383,11 +1358,19 @@ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr) } /* HCI command timer function */ -static void hci_cmd_timer(unsigned long arg) +static void hci_cmd_timeout(unsigned long arg) { struct hci_dev *hdev = (void *) arg; - BT_ERR("%s command tx timeout", hdev->name); + if (hdev->sent_cmd) { + struct hci_command_hdr *sent = (void *) hdev->sent_cmd->data; + u16 opcode = __le16_to_cpu(sent->opcode); + + BT_ERR("%s command 0x%4.4x tx timeout", hdev->name, opcode); + } else { + BT_ERR("%s command tx timeout", hdev->name); + } + atomic_set(&hdev->cmd_cnt, 1); queue_work(hdev->workqueue, &hdev->cmd_work); } @@ -1540,6 +1523,7 @@ static void le_scan_enable_req(struct hci_dev *hdev, unsigned long opt) memset(&cp, 0, sizeof(cp)); cp.enable = 1; + cp.filter_dup = 1; hci_send_cmd(hdev, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp); } @@ -1684,7 +1668,7 @@ struct hci_dev *hci_alloc_dev(void) init_waitqueue_head(&hdev->req_wait_q); - setup_timer(&hdev->cmd_timer, hci_cmd_timer, (unsigned long) hdev); + setup_timer(&hdev->cmd_timer, hci_cmd_timeout, (unsigned long) hdev); hci_init_sysfs(hdev); discovery_init(hdev); @@ -1707,41 +1691,39 @@ EXPORT_SYMBOL(hci_free_dev); /* Register HCI device */ int hci_register_dev(struct hci_dev *hdev) { - struct list_head *head, *p; int id, error; if (!hdev->open || !hdev->close) return -EINVAL; - write_lock(&hci_dev_list_lock); - /* Do not allow HCI_AMP devices to register at index 0, * so the index can be used as the AMP controller ID. */ - id = (hdev->dev_type == HCI_BREDR) ? 
0 : 1; - head = &hci_dev_list; - - /* Find first available device id */ - list_for_each(p, &hci_dev_list) { - int nid = list_entry(p, struct hci_dev, list)->id; - if (nid > id) - break; - if (nid == id) - id++; - head = p; + switch (hdev->dev_type) { + case HCI_BREDR: + id = ida_simple_get(&hci_index_ida, 0, 0, GFP_KERNEL); + break; + case HCI_AMP: + id = ida_simple_get(&hci_index_ida, 1, 0, GFP_KERNEL); + break; + default: + return -EINVAL; } + if (id < 0) + return id; + sprintf(hdev->name, "hci%d", id); hdev->id = id; BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); - list_add(&hdev->list, head); - + write_lock(&hci_dev_list_lock); + list_add(&hdev->list, &hci_dev_list); write_unlock(&hci_dev_list_lock); hdev->workqueue = alloc_workqueue(hdev->name, WQ_HIGHPRI | WQ_UNBOUND | - WQ_MEM_RECLAIM, 1); + WQ_MEM_RECLAIM, 1); if (!hdev->workqueue) { error = -ENOMEM; goto err; @@ -1752,7 +1734,8 @@ int hci_register_dev(struct hci_dev *hdev) goto err_wqueue; hdev->rfkill = rfkill_alloc(hdev->name, &hdev->dev, - RFKILL_TYPE_BLUETOOTH, &hci_rfkill_ops, hdev); + RFKILL_TYPE_BLUETOOTH, &hci_rfkill_ops, + hdev); if (hdev->rfkill) { if (rfkill_register(hdev->rfkill) < 0) { rfkill_destroy(hdev->rfkill); @@ -1760,8 +1743,11 @@ int hci_register_dev(struct hci_dev *hdev) } } - set_bit(HCI_AUTO_OFF, &hdev->dev_flags); set_bit(HCI_SETUP, &hdev->dev_flags); + + if (hdev->dev_type != HCI_AMP) + set_bit(HCI_AUTO_OFF, &hdev->dev_flags); + schedule_work(&hdev->power_on); hci_notify(hdev, HCI_DEV_REG); @@ -1772,6 +1758,7 @@ int hci_register_dev(struct hci_dev *hdev) err_wqueue: destroy_workqueue(hdev->workqueue); err: + ida_simple_remove(&hci_index_ida, hdev->id); write_lock(&hci_dev_list_lock); list_del(&hdev->list); write_unlock(&hci_dev_list_lock); @@ -1783,12 +1770,14 @@ EXPORT_SYMBOL(hci_register_dev); /* Unregister HCI device */ void hci_unregister_dev(struct hci_dev *hdev) { - int i; + int i, id; BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); set_bit(HCI_UNREGISTER, &hdev->dev_flags); + id = hdev->id; + write_lock(&hci_dev_list_lock); list_del(&hdev->list); write_unlock(&hci_dev_list_lock); @@ -1799,7 +1788,7 @@ void hci_unregister_dev(struct hci_dev *hdev) kfree_skb(hdev->reassembly[i]); if (!test_bit(HCI_INIT, &hdev->flags) && - !test_bit(HCI_SETUP, &hdev->dev_flags)) { + !test_bit(HCI_SETUP, &hdev->dev_flags)) { hci_dev_lock(hdev); mgmt_index_removed(hdev); hci_dev_unlock(hdev); @@ -1829,6 +1818,8 @@ void hci_unregister_dev(struct hci_dev *hdev) hci_dev_unlock(hdev); hci_dev_put(hdev); + + ida_simple_remove(&hci_index_ida, id); } EXPORT_SYMBOL(hci_unregister_dev); @@ -1853,7 +1844,7 @@ int hci_recv_frame(struct sk_buff *skb) { struct hci_dev *hdev = (struct hci_dev *) skb->dev; if (!hdev || (!test_bit(HCI_UP, &hdev->flags) - && !test_bit(HCI_INIT, &hdev->flags))) { + && !test_bit(HCI_INIT, &hdev->flags))) { kfree_skb(skb); return -ENXIO; } @@ -1872,7 +1863,7 @@ int hci_recv_frame(struct sk_buff *skb) EXPORT_SYMBOL(hci_recv_frame); static int hci_reassembly(struct hci_dev *hdev, int type, void *data, - int count, __u8 index) + int count, __u8 index) { int len = 0; int hlen = 0; @@ -1881,7 +1872,7 @@ static int hci_reassembly(struct hci_dev *hdev, int type, void *data, struct bt_skb_cb *scb; if ((type < HCI_ACLDATA_PKT || type > HCI_EVENT_PKT) || - index >= NUM_REASSEMBLY) + index >= NUM_REASSEMBLY) return -EILSEQ; skb = hdev->reassembly[index]; @@ -2023,7 +2014,7 @@ int hci_recv_stream_fragment(struct hci_dev *hdev, void *data, int count) type = bt_cb(skb)->pkt_type; rem = 
hci_reassembly(hdev, type, data, count, - STREAM_REASSEMBLY); + STREAM_REASSEMBLY); if (rem < 0) return rem; @@ -2096,7 +2087,7 @@ int hci_send_cmd(struct hci_dev *hdev, __u16 opcode, __u32 plen, void *param) struct hci_command_hdr *hdr; struct sk_buff *skb; - BT_DBG("%s opcode 0x%x plen %d", hdev->name, opcode, plen); + BT_DBG("%s opcode 0x%4.4x plen %d", hdev->name, opcode, plen); skb = bt_skb_alloc(len, GFP_ATOMIC); if (!skb) { @@ -2138,7 +2129,7 @@ void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 opcode) if (hdr->opcode != cpu_to_le16(opcode)) return NULL; - BT_DBG("%s opcode 0x%x", hdev->name, opcode); + BT_DBG("%s opcode 0x%4.4x", hdev->name, opcode); return hdev->sent_cmd->data + HCI_COMMAND_HDR_SIZE; } @@ -2157,7 +2148,7 @@ static void hci_add_acl_hdr(struct sk_buff *skb, __u16 handle, __u16 flags) } static void hci_queue_acl(struct hci_conn *conn, struct sk_buff_head *queue, - struct sk_buff *skb, __u16 flags) + struct sk_buff *skb, __u16 flags) { struct hci_dev *hdev = conn->hdev; struct sk_buff *list; @@ -2208,7 +2199,7 @@ void hci_send_acl(struct hci_chan *chan, struct sk_buff *skb, __u16 flags) struct hci_conn *conn = chan->conn; struct hci_dev *hdev = conn->hdev; - BT_DBG("%s chan %p flags 0x%x", hdev->name, chan, flags); + BT_DBG("%s chan %p flags 0x%4.4x", hdev->name, chan, flags); skb->dev = (void *) hdev; @@ -2216,7 +2207,6 @@ void hci_send_acl(struct hci_chan *chan, struct sk_buff *skb, __u16 flags) queue_work(hdev->workqueue, &hdev->tx_work); } -EXPORT_SYMBOL(hci_send_acl); /* Send SCO data */ void hci_send_sco(struct hci_conn *conn, struct sk_buff *skb) @@ -2239,12 +2229,12 @@ void hci_send_sco(struct hci_conn *conn, struct sk_buff *skb) skb_queue_tail(&conn->data_q, skb); queue_work(hdev->workqueue, &hdev->tx_work); } -EXPORT_SYMBOL(hci_send_sco); /* ---- HCI TX task (outgoing data) ---- */ /* HCI Connection scheduler */ -static inline struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type, int *quote) +static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type, + int *quote) { struct hci_conn_hash *h = &hdev->conn_hash; struct hci_conn *conn = NULL, *c; @@ -2303,7 +2293,7 @@ static inline struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type, int return conn; } -static inline void hci_link_tx_to(struct hci_dev *hdev, __u8 type) +static void hci_link_tx_to(struct hci_dev *hdev, __u8 type) { struct hci_conn_hash *h = &hdev->conn_hash; struct hci_conn *c; @@ -2316,16 +2306,16 @@ static inline void hci_link_tx_to(struct hci_dev *hdev, __u8 type) list_for_each_entry_rcu(c, &h->list, list) { if (c->type == type && c->sent) { BT_ERR("%s killing stalled connection %s", - hdev->name, batostr(&c->dst)); - hci_acl_disconn(c, 0x13); + hdev->name, batostr(&c->dst)); + hci_acl_disconn(c, HCI_ERROR_REMOTE_USER_TERM); } } rcu_read_unlock(); } -static inline struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type, - int *quote) +static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type, + int *quote) { struct hci_conn_hash *h = &hdev->conn_hash; struct hci_chan *chan = NULL; @@ -2442,7 +2432,7 @@ static void hci_prio_recalculate(struct hci_dev *hdev, __u8 type) skb->priority = HCI_PRIO_MAX - 1; BT_DBG("chan %p skb %p promoted to %d", chan, skb, - skb->priority); + skb->priority); } if (hci_conn_num(hdev, type) == num) @@ -2459,18 +2449,18 @@ static inline int __get_blocks(struct hci_dev *hdev, struct sk_buff *skb) return DIV_ROUND_UP(skb->len - HCI_ACL_HDR_SIZE, hdev->block_len); } -static inline void __check_timeout(struct hci_dev 
*hdev, unsigned int cnt) +static void __check_timeout(struct hci_dev *hdev, unsigned int cnt) { if (!test_bit(HCI_RAW, &hdev->flags)) { /* ACL tx timeout must be longer than maximum * link supervision timeout (40.9 seconds) */ if (!cnt && time_after(jiffies, hdev->acl_last_tx + - msecs_to_jiffies(HCI_ACL_TX_TIMEOUT))) + HCI_ACL_TX_TIMEOUT)) hci_link_tx_to(hdev, ACL_LINK); } } -static inline void hci_sched_acl_pkt(struct hci_dev *hdev) +static void hci_sched_acl_pkt(struct hci_dev *hdev) { unsigned int cnt = hdev->acl_cnt; struct hci_chan *chan; @@ -2480,11 +2470,11 @@ static inline void hci_sched_acl_pkt(struct hci_dev *hdev) __check_timeout(hdev, cnt); while (hdev->acl_cnt && - (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) { + (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) { u32 priority = (skb_peek(&chan->data_q))->priority; while (quote-- && (skb = skb_peek(&chan->data_q))) { BT_DBG("chan %p skb %p len %d priority %u", chan, skb, - skb->len, skb->priority); + skb->len, skb->priority); /* Stop if priority has changed */ if (skb->priority < priority) @@ -2508,7 +2498,7 @@ static inline void hci_sched_acl_pkt(struct hci_dev *hdev) hci_prio_recalculate(hdev, ACL_LINK); } -static inline void hci_sched_acl_blk(struct hci_dev *hdev) +static void hci_sched_acl_blk(struct hci_dev *hdev) { unsigned int cnt = hdev->block_cnt; struct hci_chan *chan; @@ -2518,13 +2508,13 @@ static inline void hci_sched_acl_blk(struct hci_dev *hdev) __check_timeout(hdev, cnt); while (hdev->block_cnt > 0 && - (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) { + (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) { u32 priority = (skb_peek(&chan->data_q))->priority; while (quote > 0 && (skb = skb_peek(&chan->data_q))) { int blocks; BT_DBG("chan %p skb %p len %d priority %u", chan, skb, - skb->len, skb->priority); + skb->len, skb->priority); /* Stop if priority has changed */ if (skb->priority < priority) @@ -2537,7 +2527,7 @@ static inline void hci_sched_acl_blk(struct hci_dev *hdev) return; hci_conn_enter_active_mode(chan->conn, - bt_cb(skb)->force_active); + bt_cb(skb)->force_active); hci_send_frame(skb); hdev->acl_last_tx = jiffies; @@ -2554,7 +2544,7 @@ static inline void hci_sched_acl_blk(struct hci_dev *hdev) hci_prio_recalculate(hdev, ACL_LINK); } -static inline void hci_sched_acl(struct hci_dev *hdev) +static void hci_sched_acl(struct hci_dev *hdev) { BT_DBG("%s", hdev->name); @@ -2573,7 +2563,7 @@ static inline void hci_sched_acl(struct hci_dev *hdev) } /* Schedule SCO */ -static inline void hci_sched_sco(struct hci_dev *hdev) +static void hci_sched_sco(struct hci_dev *hdev) { struct hci_conn *conn; struct sk_buff *skb; @@ -2596,7 +2586,7 @@ static inline void hci_sched_sco(struct hci_dev *hdev) } } -static inline void hci_sched_esco(struct hci_dev *hdev) +static void hci_sched_esco(struct hci_dev *hdev) { struct hci_conn *conn; struct sk_buff *skb; @@ -2607,7 +2597,8 @@ static inline void hci_sched_esco(struct hci_dev *hdev) if (!hci_conn_num(hdev, ESCO_LINK)) return; - while (hdev->sco_cnt && (conn = hci_low_sent(hdev, ESCO_LINK, &quote))) { + while (hdev->sco_cnt && (conn = hci_low_sent(hdev, ESCO_LINK, + &quote))) { while (quote-- && (skb = skb_dequeue(&conn->data_q))) { BT_DBG("skb %p len %d", skb, skb->len); hci_send_frame(skb); @@ -2619,7 +2610,7 @@ static inline void hci_sched_esco(struct hci_dev *hdev) } } -static inline void hci_sched_le(struct hci_dev *hdev) +static void hci_sched_le(struct hci_dev *hdev) { struct hci_chan *chan; struct sk_buff *skb; @@ -2634,7 +2625,7 @@ static inline void hci_sched_le(struct hci_dev *hdev)
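/* Aside, condensed and illustrative only (not the exact kernel loop):
 * how the ACL schedulers above consume the per-round quota that
 * hci_chan_sent() returns through its int *quote out-parameter. The
 * helper name hci_sched_example_round is hypothetical; the real logic
 * lives in hci_sched_acl_pkt()/hci_sched_acl_blk().
 */
static void hci_sched_example_round(struct hci_dev *hdev)
{
	struct hci_chan *chan;
	struct sk_buff *skb;
	int quote;

	while (hdev->acl_cnt &&
	       (chan = hci_chan_sent(hdev, ACL_LINK, &quote))) {
		/* send at most "quote" frames from this channel, then let
		 * hci_chan_sent() pick the next best channel */
		while (quote-- && (skb = skb_dequeue(&chan->data_q))) {
			hci_send_frame(skb);
			hdev->acl_last_tx = jiffies;
			hdev->acl_cnt--;
			chan->sent++;
		}
	}
}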
/* LE tx timeout must be longer than maximum * link supervision timeout (40.9 seconds) */ if (!hdev->le_cnt && hdev->le_pkts && - time_after(jiffies, hdev->le_last_tx + HZ * 45)) + time_after(jiffies, hdev->le_last_tx + HZ * 45)) hci_link_tx_to(hdev, LE_LINK); } @@ -2644,7 +2635,7 @@ static inline void hci_sched_le(struct hci_dev *hdev) u32 priority = (skb_peek(&chan->data_q))->priority; while (quote-- && (skb = skb_peek(&chan->data_q))) { BT_DBG("chan %p skb %p len %d priority %u", chan, skb, - skb->len, skb->priority); + skb->len, skb->priority); /* Stop if priority has changed */ if (skb->priority < priority) @@ -2676,7 +2667,7 @@ static void hci_tx_work(struct work_struct *work) struct sk_buff *skb; BT_DBG("%s acl %d sco %d le %d", hdev->name, hdev->acl_cnt, - hdev->sco_cnt, hdev->le_cnt); + hdev->sco_cnt, hdev->le_cnt); /* Schedule queues and send stuff to HCI driver */ @@ -2696,7 +2687,7 @@ static void hci_tx_work(struct work_struct *work) /* ----- HCI RX task (incoming data processing) ----- */ /* ACL data packet */ -static inline void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_acl_hdr *hdr = (void *) skb->data; struct hci_conn *conn; @@ -2708,7 +2699,8 @@ static inline void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb) flags = hci_flags(handle); handle = hci_handle(handle); - BT_DBG("%s len %d handle 0x%x flags 0x%x", hdev->name, skb->len, handle, flags); + BT_DBG("%s len %d handle 0x%4.4x flags 0x%4.4x", hdev->name, skb->len, + handle, flags); hdev->stat.acl_rx++; @@ -2732,14 +2724,14 @@ static inline void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb) return; } else { BT_ERR("%s ACL packet for unknown connection handle %d", - hdev->name, handle); + hdev->name, handle); } kfree_skb(skb); } /* SCO data packet */ -static inline void hci_scodata_packet(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_scodata_packet(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_sco_hdr *hdr = (void *) skb->data; struct hci_conn *conn; @@ -2749,7 +2741,7 @@ static inline void hci_scodata_packet(struct hci_dev *hdev, struct sk_buff *skb) handle = __le16_to_cpu(hdr->handle); - BT_DBG("%s len %d handle 0x%x", hdev->name, skb->len, handle); + BT_DBG("%s len %d handle 0x%4.4x", hdev->name, skb->len, handle); hdev->stat.sco_rx++; @@ -2763,7 +2755,7 @@ static inline void hci_scodata_packet(struct hci_dev *hdev, struct sk_buff *skb) return; } else { BT_ERR("%s SCO packet for unknown connection handle %d", - hdev->name, handle); + hdev->name, handle); } kfree_skb(skb); @@ -2829,7 +2821,8 @@ static void hci_cmd_work(struct work_struct *work) struct hci_dev *hdev = container_of(work, struct hci_dev, cmd_work); struct sk_buff *skb; - BT_DBG("%s cmd %d", hdev->name, atomic_read(&hdev->cmd_cnt)); + BT_DBG("%s cmd_cnt %d cmd queued %d", hdev->name, + atomic_read(&hdev->cmd_cnt), skb_queue_len(&hdev->cmd_q)); /* Send queued commands */ if (atomic_read(&hdev->cmd_cnt)) { @@ -2847,7 +2840,7 @@ static void hci_cmd_work(struct work_struct *work) del_timer(&hdev->cmd_timer); else mod_timer(&hdev->cmd_timer, - jiffies + msecs_to_jiffies(HCI_CMD_TIMEOUT)); + jiffies + HCI_CMD_TIMEOUT); } else { skb_queue_head(&hdev->cmd_q, skb); queue_work(hdev->workqueue, &hdev->cmd_work); diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 94ad124a4ea3..41ff978a33f9 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -24,20 +24,7 @@ /* 
Bluetooth HCI event handling. */ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/skbuff.h> -#include <linux/interrupt.h> -#include <net/sock.h> - -#include <linux/uaccess.h> +#include <linux/export.h> #include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> @@ -49,7 +36,7 @@ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (status) { hci_dev_lock(hdev); @@ -73,7 +60,7 @@ static void hci_cc_periodic_inq(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (status) return; @@ -85,7 +72,7 @@ static void hci_cc_exit_periodic_inq(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (status) return; @@ -95,7 +82,8 @@ static void hci_cc_exit_periodic_inq(struct hci_dev *hdev, struct sk_buff *skb) hci_conn_check_pending(hdev); } -static void hci_cc_remote_name_req_cancel(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_remote_name_req_cancel(struct hci_dev *hdev, + struct sk_buff *skb) { BT_DBG("%s", hdev->name); } @@ -105,7 +93,7 @@ static void hci_cc_role_discovery(struct hci_dev *hdev, struct sk_buff *skb) struct hci_rp_role_discovery *rp = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -128,7 +116,7 @@ static void hci_cc_read_link_policy(struct hci_dev *hdev, struct sk_buff *skb) struct hci_rp_read_link_policy *rp = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -148,7 +136,7 @@ static void hci_cc_write_link_policy(struct hci_dev *hdev, struct sk_buff *skb) struct hci_conn *conn; void *sent; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -166,11 +154,12 @@ static void hci_cc_write_link_policy(struct hci_dev *hdev, struct sk_buff *skb) hci_dev_unlock(hdev); } -static void hci_cc_read_def_link_policy(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_read_def_link_policy(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_rp_read_def_link_policy *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -178,12 +167,13 @@ static void hci_cc_read_def_link_policy(struct hci_dev *hdev, struct sk_buff *sk hdev->link_policy = __le16_to_cpu(rp->policy); } -static void hci_cc_write_def_link_policy(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_write_def_link_policy(struct hci_dev *hdev, + struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_DEF_LINK_POLICY); if (!sent) @@ -199,7 +189,7 @@ static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff 
*skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); clear_bit(HCI_RESET, &hdev->flags); @@ -217,7 +207,7 @@ static void hci_cc_write_local_name(struct hci_dev *hdev, struct sk_buff *skb) __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_LOCAL_NAME); if (!sent) @@ -239,7 +229,7 @@ static void hci_cc_read_local_name(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_read_local_name *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -253,7 +243,7 @@ static void hci_cc_write_auth_enable(struct hci_dev *hdev, struct sk_buff *skb) __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_AUTH_ENABLE); if (!sent) @@ -279,7 +269,7 @@ static void hci_cc_write_encrypt_mode(struct hci_dev *hdev, struct sk_buff *skb) __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_ENCRYPT_MODE); if (!sent) @@ -303,7 +293,7 @@ static void hci_cc_write_scan_enable(struct hci_dev *hdev, struct sk_buff *skb) int old_pscan, old_iscan; void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_SCAN_ENABLE); if (!sent) @@ -329,7 +319,7 @@ static void hci_cc_write_scan_enable(struct hci_dev *hdev, struct sk_buff *skb) if (hdev->discov_timeout > 0) { int to = msecs_to_jiffies(hdev->discov_timeout * 1000); queue_delayed_work(hdev->workqueue, &hdev->discov_off, - to); + to); } } else if (old_iscan) mgmt_discoverable(hdev, 0); @@ -350,7 +340,7 @@ static void hci_cc_read_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_read_class_of_dev *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -358,7 +348,7 @@ static void hci_cc_read_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) memcpy(hdev->dev_class, rp->dev_class, 3); BT_DBG("%s class 0x%.2x%.2x%.2x", hdev->name, - hdev->dev_class[2], hdev->dev_class[1], hdev->dev_class[0]); + hdev->dev_class[2], hdev->dev_class[1], hdev->dev_class[0]); } static void hci_cc_write_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) @@ -366,7 +356,7 @@ static void hci_cc_write_class_of_dev(struct hci_dev *hdev, struct sk_buff *skb) __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_CLASS_OF_DEV); if (!sent) @@ -388,7 +378,7 @@ static void hci_cc_read_voice_setting(struct hci_dev *hdev, struct sk_buff *skb) struct hci_rp_read_voice_setting *rp = (void *) skb->data; __u16 setting; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -400,19 +390,20 @@ static void hci_cc_read_voice_setting(struct hci_dev *hdev, struct sk_buff *skb) hdev->voice_setting = setting; - BT_DBG("%s voice setting 0x%04x", hdev->name, setting); + BT_DBG("%s 
voice setting 0x%4.4x", hdev->name, setting); if (hdev->notify) hdev->notify(hdev, HCI_NOTIFY_VOICE_SETTING); } -static void hci_cc_write_voice_setting(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_write_voice_setting(struct hci_dev *hdev, + struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); __u16 setting; void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (status) return; @@ -428,7 +419,7 @@ static void hci_cc_write_voice_setting(struct hci_dev *hdev, struct sk_buff *skb hdev->voice_setting = setting; - BT_DBG("%s voice setting 0x%04x", hdev->name, setting); + BT_DBG("%s voice setting 0x%4.4x", hdev->name, setting); if (hdev->notify) hdev->notify(hdev, HCI_NOTIFY_VOICE_SETTING); @@ -438,7 +429,7 @@ static void hci_cc_host_buffer_size(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_HOST_BUFFER_SIZE, status); } @@ -448,7 +439,7 @@ static void hci_cc_write_ssp_mode(struct hci_dev *hdev, struct sk_buff *skb) __u8 status = *((__u8 *) skb->data); void *sent; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_SSP_MODE); if (!sent) @@ -473,7 +464,7 @@ static u8 hci_get_inquiry_mode(struct hci_dev *hdev) return 1; if (hdev->manufacturer == 11 && hdev->hci_rev == 0x00 && - hdev->lmp_subver == 0x0757) + hdev->lmp_subver == 0x0757) return 1; if (hdev->manufacturer == 15) { @@ -486,7 +477,7 @@ static u8 hci_get_inquiry_mode(struct hci_dev *hdev) } if (hdev->manufacturer == 31 && hdev->hci_rev == 0x2005 && - hdev->lmp_subver == 0x1805) + hdev->lmp_subver == 0x1805) return 1; return 0; @@ -566,7 +557,7 @@ static void hci_setup(struct hci_dev *hdev) if (hdev->hci_ver > BLUETOOTH_VER_1_1) hci_send_cmd(hdev, HCI_OP_READ_LOCAL_COMMANDS, 0, NULL); - if (hdev->features[6] & LMP_SIMPLE_PAIR) { + if (lmp_ssp_capable(hdev)) { if (test_bit(HCI_SSP_ENABLED, &hdev->dev_flags)) { u8 mode = 0x01; hci_send_cmd(hdev, HCI_OP_WRITE_SSP_MODE, @@ -606,7 +597,7 @@ static void hci_cc_read_local_version(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_read_local_version *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) goto done; @@ -617,9 +608,8 @@ static void hci_cc_read_local_version(struct hci_dev *hdev, struct sk_buff *skb) hdev->manufacturer = __le16_to_cpu(rp->manufacturer); hdev->lmp_subver = __le16_to_cpu(rp->lmp_subver); - BT_DBG("%s manufacturer %d hci ver %d:%d", hdev->name, - hdev->manufacturer, - hdev->hci_ver, hdev->hci_rev); + BT_DBG("%s manufacturer 0x%4.4x hci ver %d:%d", hdev->name, + hdev->manufacturer, hdev->hci_ver, hdev->hci_rev); if (test_bit(HCI_INIT, &hdev->flags)) hci_setup(hdev); @@ -646,11 +636,12 @@ static void hci_setup_link_policy(struct hci_dev *hdev) hci_send_cmd(hdev, HCI_OP_WRITE_DEF_LINK_POLICY, sizeof(cp), &cp); } -static void hci_cc_read_local_commands(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_read_local_commands(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_rp_read_local_commands *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) goto done; @@ -664,11 +655,12 @@ done: hci_req_complete(hdev, 
HCI_OP_READ_LOCAL_COMMANDS, rp->status); } -static void hci_cc_read_local_features(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cc_read_local_features(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_rp_read_local_features *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -713,10 +705,10 @@ static void hci_cc_read_local_features(struct hci_dev *hdev, struct sk_buff *skb hdev->esco_type |= (ESCO_2EV5 | ESCO_3EV5); BT_DBG("%s features 0x%.2x%.2x%.2x%.2x%.2x%.2x%.2x%.2x", hdev->name, - hdev->features[0], hdev->features[1], - hdev->features[2], hdev->features[3], - hdev->features[4], hdev->features[5], - hdev->features[6], hdev->features[7]); + hdev->features[0], hdev->features[1], + hdev->features[2], hdev->features[3], + hdev->features[4], hdev->features[5], + hdev->features[6], hdev->features[7]); } static void hci_set_le_support(struct hci_dev *hdev) @@ -736,11 +728,11 @@ static void hci_set_le_support(struct hci_dev *hdev) } static void hci_cc_read_local_ext_features(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_local_ext_features *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) goto done; @@ -762,11 +754,11 @@ done: } static void hci_cc_read_flow_control_mode(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_flow_control_mode *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -780,7 +772,7 @@ static void hci_cc_read_buffer_size(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_read_buffer_size *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -798,16 +790,15 @@ static void hci_cc_read_buffer_size(struct hci_dev *hdev, struct sk_buff *skb) hdev->acl_cnt = hdev->acl_pkts; hdev->sco_cnt = hdev->sco_pkts; - BT_DBG("%s acl mtu %d:%d sco mtu %d:%d", hdev->name, - hdev->acl_mtu, hdev->acl_pkts, - hdev->sco_mtu, hdev->sco_pkts); + BT_DBG("%s acl mtu %d:%d sco mtu %d:%d", hdev->name, hdev->acl_mtu, + hdev->acl_pkts, hdev->sco_mtu, hdev->sco_pkts); } static void hci_cc_read_bd_addr(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_read_bd_addr *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (!rp->status) bacpy(&hdev->bdaddr, &rp->bdaddr); @@ -816,11 +807,11 @@ static void hci_cc_read_bd_addr(struct hci_dev *hdev, struct sk_buff *skb) } static void hci_cc_read_data_block_size(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_data_block_size *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -832,7 +823,7 @@ static void hci_cc_read_data_block_size(struct hci_dev *hdev, hdev->block_cnt = hdev->num_blocks; BT_DBG("%s blk mtu %d cnt %d len %d", hdev->name, hdev->block_mtu, - hdev->block_cnt, hdev->block_len); + hdev->block_cnt, hdev->block_len); hci_req_complete(hdev, HCI_OP_READ_DATA_BLOCK_SIZE, rp->status); } @@ -841,17 +832,17 @@ static void hci_cc_write_ca_timeout(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = 
*((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_WRITE_CA_TIMEOUT, status); } static void hci_cc_read_local_amp_info(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_local_amp_info *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -871,11 +862,11 @@ static void hci_cc_read_local_amp_info(struct hci_dev *hdev, } static void hci_cc_delete_stored_link_key(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_DELETE_STORED_LINK_KEY, status); } @@ -884,27 +875,27 @@ static void hci_cc_set_event_mask(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_SET_EVENT_MASK, status); } static void hci_cc_write_inquiry_mode(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_WRITE_INQUIRY_MODE, status); } static void hci_cc_read_inq_rsp_tx_power(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_inq_rsp_tx_power *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (!rp->status) hdev->inq_tx_power = rp->tx_power; @@ -916,7 +907,7 @@ static void hci_cc_set_event_flt(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_SET_EVENT_FLT, status); } @@ -927,7 +918,7 @@ static void hci_cc_pin_code_reply(struct hci_dev *hdev, struct sk_buff *skb) struct hci_cp_pin_code_reply *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); @@ -953,13 +944,13 @@ static void hci_cc_pin_code_neg_reply(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_pin_code_neg_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); if (test_bit(HCI_MGMT, &hdev->dev_flags)) mgmt_pin_code_neg_reply_complete(hdev, &rp->bdaddr, - rp->status); + rp->status); hci_dev_unlock(hdev); } @@ -969,7 +960,7 @@ static void hci_cc_le_read_buffer_size(struct hci_dev *hdev, { struct hci_rp_le_read_buffer_size *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -988,7 +979,7 @@ static void hci_cc_user_confirm_reply(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_user_confirm_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); @@ -1000,11 +991,11 @@ static void hci_cc_user_confirm_reply(struct hci_dev *hdev, struct sk_buff *skb) } static void hci_cc_user_confirm_neg_reply(struct hci_dev *hdev, - 
struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_user_confirm_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); @@ -1019,7 +1010,7 @@ static void hci_cc_user_passkey_reply(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_user_confirm_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); @@ -1031,11 +1022,11 @@ static void hci_cc_user_passkey_reply(struct hci_dev *hdev, struct sk_buff *skb) } static void hci_cc_user_passkey_neg_reply(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_user_confirm_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); @@ -1047,11 +1038,11 @@ static void hci_cc_user_passkey_neg_reply(struct hci_dev *hdev, } static void hci_cc_read_local_oob_data_reply(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_rp_read_local_oob_data *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); hci_dev_lock(hdev); mgmt_read_local_oob_data_reply_complete(hdev, rp->hash, @@ -1063,7 +1054,7 @@ static void hci_cc_le_set_scan_param(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_LE_SET_SCAN_PARAM, status); @@ -1076,12 +1067,12 @@ static void hci_cc_le_set_scan_param(struct hci_dev *hdev, struct sk_buff *skb) } static void hci_cc_le_set_scan_enable(struct hci_dev *hdev, - struct sk_buff *skb) + struct sk_buff *skb) { struct hci_cp_le_set_scan_enable *cp; __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_SCAN_ENABLE); if (!cp) @@ -1136,7 +1127,7 @@ static void hci_cc_le_ltk_reply(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_le_ltk_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -1148,7 +1139,7 @@ static void hci_cc_le_ltk_neg_reply(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_rp_le_ltk_neg_reply *rp = (void *) skb->data; - BT_DBG("%s status 0x%x", hdev->name, rp->status); + BT_DBG("%s status 0x%2.2x", hdev->name, rp->status); if (rp->status) return; @@ -1156,13 +1147,13 @@ static void hci_cc_le_ltk_neg_reply(struct hci_dev *hdev, struct sk_buff *skb) hci_req_complete(hdev, HCI_OP_LE_LTK_NEG_REPLY, rp->status); } -static inline void hci_cc_write_le_host_supported(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_cc_write_le_host_supported(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_cp_write_le_host_supported *sent; __u8 status = *((__u8 *) skb->data); - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); sent = hci_sent_cmd_data(hdev, HCI_OP_WRITE_LE_HOST_SUPPORTED); if (!sent) @@ -1176,15 +1167,15 @@ static inline void hci_cc_write_le_host_supported(struct hci_dev *hdev, } if (test_bit(HCI_MGMT, &hdev->dev_flags) && - !test_bit(HCI_INIT, &hdev->flags)) + !test_bit(HCI_INIT, &hdev->flags)) 
mgmt_le_enable_complete(hdev, sent->le, status); hci_req_complete(hdev, HCI_OP_WRITE_LE_HOST_SUPPORTED, status); } -static inline void hci_cs_inquiry(struct hci_dev *hdev, __u8 status) +static void hci_cs_inquiry(struct hci_dev *hdev, __u8 status) { - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (status) { hci_req_complete(hdev, HCI_OP_INQUIRY, status); @@ -1203,12 +1194,12 @@ static inline void hci_cs_inquiry(struct hci_dev *hdev, __u8 status) hci_dev_unlock(hdev); } -static inline void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) +static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) { struct hci_cp_create_conn *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); cp = hci_sent_cmd_data(hdev, HCI_OP_CREATE_CONN); if (!cp) @@ -1218,7 +1209,7 @@ static inline void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &cp->bdaddr); - BT_DBG("%s bdaddr %s conn %p", hdev->name, batostr(&cp->bdaddr), conn); + BT_DBG("%s bdaddr %s hcon %p", hdev->name, batostr(&cp->bdaddr), conn); if (status) { if (conn && conn->state == BT_CONNECT) { @@ -1249,7 +1240,7 @@ static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status) struct hci_conn *acl, *sco; __u16 handle; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1260,7 +1251,7 @@ static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status) handle = __le16_to_cpu(cp->handle); - BT_DBG("%s handle %d", hdev->name, handle); + BT_DBG("%s handle 0x%4.4x", hdev->name, handle); hci_dev_lock(hdev); @@ -1283,7 +1274,7 @@ static void hci_cs_auth_requested(struct hci_dev *hdev, __u8 status) struct hci_cp_auth_requested *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1310,7 +1301,7 @@ static void hci_cs_set_conn_encrypt(struct hci_dev *hdev, __u8 status) struct hci_cp_set_conn_encrypt *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1333,7 +1324,7 @@ static void hci_cs_set_conn_encrypt(struct hci_dev *hdev, __u8 status) } static int hci_outgoing_auth_needed(struct hci_dev *hdev, - struct hci_conn *conn) + struct hci_conn *conn) { if (conn->state != BT_CONFIG || !conn->out) return 0; @@ -1343,15 +1334,14 @@ static int hci_outgoing_auth_needed(struct hci_dev *hdev, /* Only request authentication for SSP connections or non-SSP * devices with sec_level HIGH or if MITM protection is requested */ - if (!hci_conn_ssp_enabled(conn) && - conn->pending_sec_level != BT_SECURITY_HIGH && - !(conn->auth_type & 0x01)) + if (!hci_conn_ssp_enabled(conn) && !(conn->auth_type & 0x01) && + conn->pending_sec_level != BT_SECURITY_HIGH) return 0; return 1; } -static inline int hci_resolve_name(struct hci_dev *hdev, +static int hci_resolve_name(struct hci_dev *hdev, struct inquiry_entry *e) { struct hci_cp_remote_name_req cp; @@ -1423,7 +1413,7 @@ static void hci_cs_remote_name_req(struct hci_dev *hdev, __u8 status) struct hci_cp_remote_name_req *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); /* If successful wait for the name req complete event before * checking for the need to do authentication */ @@ -1462,7 +1452,7 @@ static void 
hci_cs_read_remote_features(struct hci_dev *hdev, __u8 status) struct hci_cp_read_remote_features *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1489,7 +1479,7 @@ static void hci_cs_read_remote_ext_features(struct hci_dev *hdev, __u8 status) struct hci_cp_read_remote_ext_features *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1517,7 +1507,7 @@ static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status) struct hci_conn *acl, *sco; __u16 handle; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1528,7 +1518,7 @@ static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status) handle = __le16_to_cpu(cp->handle); - BT_DBG("%s handle %d", hdev->name, handle); + BT_DBG("%s handle 0x%4.4x", hdev->name, handle); hci_dev_lock(hdev); @@ -1551,7 +1541,7 @@ static void hci_cs_sniff_mode(struct hci_dev *hdev, __u8 status) struct hci_cp_sniff_mode *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1578,7 +1568,7 @@ static void hci_cs_exit_sniff_mode(struct hci_dev *hdev, __u8 status) struct hci_cp_exit_sniff_mode *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); if (!status) return; @@ -1627,7 +1617,7 @@ static void hci_cs_le_create_conn(struct hci_dev *hdev, __u8 status) struct hci_cp_le_create_conn *cp; struct hci_conn *conn; - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); cp = hci_sent_cmd_data(hdev, HCI_OP_LE_CREATE_CONN); if (!cp) @@ -1638,7 +1628,7 @@ static void hci_cs_le_create_conn(struct hci_dev *hdev, __u8 status) conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, &cp->peer_addr); BT_DBG("%s bdaddr %s conn %p", hdev->name, batostr(&cp->peer_addr), - conn); + conn); if (status) { if (conn && conn->state == BT_CONNECT) { @@ -1665,16 +1655,16 @@ static void hci_cs_le_create_conn(struct hci_dev *hdev, __u8 status) static void hci_cs_le_start_enc(struct hci_dev *hdev, u8 status) { - BT_DBG("%s status 0x%x", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); } -static inline void hci_inquiry_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_inquiry_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { __u8 status = *((__u8 *) skb->data); struct discovery_state *discov = &hdev->discovery; struct inquiry_entry *e; - BT_DBG("%s status %d", hdev->name, status); + BT_DBG("%s status 0x%2.2x", hdev->name, status); hci_req_complete(hdev, HCI_OP_INQUIRY, status); @@ -1708,7 +1698,7 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct inquiry_data data; struct inquiry_info *info = (void *) (skb->data + 1); @@ -1745,7 +1735,7 @@ static inline void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff * hci_dev_unlock(hdev); } -static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_conn_complete *ev = (void *) skb->data; struct hci_conn *conn; @@ -1823,18 +1813,18 @@ 
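/*
 * Editor's aside, not part of the patch: the BT_DBG changes that run
 * through this file replace "0x%x" with the fixed-width forms "0x%2.2x"
 * (one-byte status/event values) and "0x%4.4x" (two-byte handles and
 * opcodes), so the hex output is always zero-padded to 2 or 4 digits.
 * The patch also drops "inline" from these static handlers, leaving the
 * inlining decision to the compiler, as kernel coding style recommends
 * for larger functions. A small stand-alone demo of the printf
 * width/precision semantics the new format strings rely on (the kernel's
 * vsnprintf treats these specifiers the same way):
 */
#include <stdio.h>

int main(void)
{
	unsigned int status = 0x5, handle = 0x2a;

	printf("status 0x%x handle 0x%x\n", status, handle);
	/* prints: status 0x5 handle 0x2a */
	printf("status 0x%2.2x handle 0x%4.4x\n", status, handle);
	/* prints: status 0x05 handle 0x002a */
	return 0;
}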
unlock: hci_conn_check_pending(hdev); } -static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_conn_request *ev = (void *) skb->data; int mask = hdev->link_mode; - BT_DBG("%s bdaddr %s type 0x%x", hdev->name, - batostr(&ev->bdaddr), ev->link_type); + BT_DBG("%s bdaddr %s type 0x%x", hdev->name, batostr(&ev->bdaddr), + ev->link_type); mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type); if ((mask & HCI_LM_ACCEPT) && - !hci_blacklist_lookup(hdev, &ev->bdaddr)) { + !hci_blacklist_lookup(hdev, &ev->bdaddr)) { /* Connection accepted */ struct inquiry_entry *ie; struct hci_conn *conn; @@ -1845,7 +1835,8 @@ static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *sk if (ie) memcpy(ie->data.dev_class, ev->dev_class, 3); - conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); + conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, + &ev->bdaddr); if (!conn) { conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr); if (!conn) { @@ -1878,9 +1869,9 @@ static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *sk bacpy(&cp.bdaddr, &ev->bdaddr); cp.pkt_type = cpu_to_le16(conn->pkt_type); - cp.tx_bandwidth = cpu_to_le32(0x00001f40); - cp.rx_bandwidth = cpu_to_le32(0x00001f40); - cp.max_latency = cpu_to_le16(0xffff); + cp.tx_bandwidth = __constant_cpu_to_le32(0x00001f40); + cp.rx_bandwidth = __constant_cpu_to_le32(0x00001f40); + cp.max_latency = __constant_cpu_to_le16(0xffff); cp.content_format = cpu_to_le16(hdev->voice_setting); cp.retrans_effort = 0xff; @@ -1897,12 +1888,12 @@ static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *sk } } -static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_disconn_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -1914,10 +1905,10 @@ static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff conn->state = BT_CLOSED; if (test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags) && - (conn->type == ACL_LINK || conn->type == LE_LINK)) { + (conn->type == ACL_LINK || conn->type == LE_LINK)) { if (ev->status != 0) mgmt_disconnect_failed(hdev, &conn->dst, conn->type, - conn->dst_type, ev->status); + conn->dst_type, ev->status); else mgmt_device_disconnected(hdev, &conn->dst, conn->type, conn->dst_type); @@ -1934,12 +1925,12 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_auth_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -1949,7 +1940,7 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s if (!ev->status) { if (!hci_conn_ssp_enabled(conn) && - test_bit(HCI_CONN_REAUTH_PEND, &conn->flags)) { + test_bit(HCI_CONN_REAUTH_PEND, &conn->flags)) { BT_INFO("re-auth of legacy device is not possible."); } else { conn->link_mode |= HCI_LM_AUTH; @@ -1969,7 +1960,7 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s cp.handle = ev->handle; 
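/*
 * Editor's aside, not part of the patch: the hci_conn_request_evt hunk
 * above switches the literal SCO accept parameters from cpu_to_le16/32()
 * to __constant_cpu_to_le16/32(). The __constant_ variants expand to a
 * constant expression, so the byte swap is resolved at build time and the
 * result can also be used where a constant expression is required, such
 * as a static initializer. Illustrative sketch only (variable names are
 * not from the patch; assumes <linux/types.h> and <asm/byteorder.h>):
 */
static const __le32 sco_default_bandwidth = __constant_cpu_to_le32(0x00001f40);
static const __le16 sco_default_max_latency = __constant_cpu_to_le16(0xffff);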
cp.encrypt = 0x01; hci_send_cmd(hdev, HCI_OP_SET_CONN_ENCRYPT, sizeof(cp), - &cp); + &cp); } else { conn->state = BT_CONNECTED; hci_proto_connect_cfm(conn, ev->status); @@ -1989,7 +1980,7 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s cp.handle = ev->handle; cp.encrypt = 0x01; hci_send_cmd(hdev, HCI_OP_SET_CONN_ENCRYPT, sizeof(cp), - &cp); + &cp); } else { clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); hci_encrypt_cfm(conn, ev->status, 0x00); @@ -2000,7 +1991,7 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_remote_name_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_remote_name_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_remote_name *ev = (void *) skb->data; struct hci_conn *conn; @@ -2039,12 +2030,12 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_encrypt_change *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2082,12 +2073,13 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_change_link_key_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_change_link_key_complete_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_change_link_key_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2104,12 +2096,13 @@ static inline void hci_change_link_key_complete_evt(struct hci_dev *hdev, struct hci_dev_unlock(hdev); } -static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_remote_features_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_remote_features *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2128,7 +2121,7 @@ static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff cp.handle = ev->handle; cp.page = 0x01; hci_send_cmd(hdev, HCI_OP_READ_REMOTE_EXT_FEATURES, - sizeof(cp), &cp); + sizeof(cp), &cp); goto unlock; } @@ -2153,17 +2146,18 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_remote_version_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_remote_version_evt(struct hci_dev *hdev, struct sk_buff *skb) { BT_DBG("%s", hdev->name); } -static inline void hci_qos_setup_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_qos_setup_complete_evt(struct hci_dev *hdev, + struct sk_buff *skb) { BT_DBG("%s", hdev->name); } -static inline void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_cmd_complete *ev = (void *) skb->data; __u16 opcode; @@ -2370,7 +2364,7 @@ static inline void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *sk break; default: - BT_DBG("%s opcode 0x%x", hdev->name, opcode); + BT_DBG("%s opcode 0x%4.4x", hdev->name, opcode); break; } @@ -2384,7 +2378,7 @@ static inline void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *sk } } -static inline void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_cmd_status_evt(struct 
hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_cmd_status *ev = (void *) skb->data; __u16 opcode; @@ -2451,7 +2445,7 @@ static inline void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb) break; default: - BT_DBG("%s opcode 0x%x", hdev->name, opcode); + BT_DBG("%s opcode 0x%4.4x", hdev->name, opcode); break; } @@ -2465,12 +2459,12 @@ static inline void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb) } } -static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_role_change *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2491,7 +2485,7 @@ static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb hci_dev_unlock(hdev); } -static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_num_comp_pkts *ev = (void *) skb->data; int i; @@ -2502,7 +2496,7 @@ static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *s } if (skb->len < sizeof(*ev) || skb->len < sizeof(*ev) + - ev->num_hndl * sizeof(struct hci_comp_pkts_info)) { + ev->num_hndl * sizeof(struct hci_comp_pkts_info)) { BT_DBG("%s bad parameters", hdev->name); return; } @@ -2557,8 +2551,7 @@ static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *s queue_work(hdev->workqueue, &hdev->tx_work); } -static inline void hci_num_comp_blocks_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_num_comp_blocks_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_num_comp_blocks *ev = (void *) skb->data; int i; @@ -2569,13 +2562,13 @@ static inline void hci_num_comp_blocks_evt(struct hci_dev *hdev, } if (skb->len < sizeof(*ev) || skb->len < sizeof(*ev) + - ev->num_hndl * sizeof(struct hci_comp_blocks_info)) { + ev->num_hndl * sizeof(struct hci_comp_blocks_info)) { BT_DBG("%s bad parameters", hdev->name); return; } BT_DBG("%s num_blocks %d num_hndl %d", hdev->name, ev->num_blocks, - ev->num_hndl); + ev->num_hndl); for (i = 0; i < ev->num_hndl; i++) { struct hci_comp_blocks_info *info = &ev->handles[i]; @@ -2607,12 +2600,12 @@ static inline void hci_num_comp_blocks_evt(struct hci_dev *hdev, queue_work(hdev->workqueue, &hdev->tx_work); } -static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_mode_change *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2621,7 +2614,8 @@ static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb conn->mode = ev->mode; conn->interval = __le16_to_cpu(ev->interval); - if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->flags)) { + if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, + &conn->flags)) { if (conn->mode == HCI_CM_ACTIVE) set_bit(HCI_CONN_POWER_SAVE, &conn->flags); else @@ -2635,7 +2629,7 @@ static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb hci_dev_unlock(hdev); } -static inline void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct 
hci_ev_pin_code_req *ev = (void *) skb->data; struct hci_conn *conn; @@ -2656,7 +2650,7 @@ static inline void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff if (!test_bit(HCI_PAIRABLE, &hdev->dev_flags)) hci_send_cmd(hdev, HCI_OP_PIN_CODE_NEG_REPLY, - sizeof(ev->bdaddr), &ev->bdaddr); + sizeof(ev->bdaddr), &ev->bdaddr); else if (test_bit(HCI_MGMT, &hdev->dev_flags)) { u8 secure; @@ -2672,7 +2666,7 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_link_key_req *ev = (void *) skb->data; struct hci_cp_link_key_reply cp; @@ -2689,15 +2683,15 @@ static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff key = hci_find_link_key(hdev, &ev->bdaddr); if (!key) { BT_DBG("%s link key not found for %s", hdev->name, - batostr(&ev->bdaddr)); + batostr(&ev->bdaddr)); goto not_found; } BT_DBG("%s found key type %u for %s", hdev->name, key->type, - batostr(&ev->bdaddr)); + batostr(&ev->bdaddr)); if (!test_bit(HCI_DEBUG_KEYS, &hdev->dev_flags) && - key->type == HCI_LK_DEBUG_COMBINATION) { + key->type == HCI_LK_DEBUG_COMBINATION) { BT_DBG("%s ignoring debug key", hdev->name); goto not_found; } @@ -2705,16 +2699,15 @@ static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); if (conn) { if (key->type == HCI_LK_UNAUTH_COMBINATION && - conn->auth_type != 0xff && - (conn->auth_type & 0x01)) { + conn->auth_type != 0xff && (conn->auth_type & 0x01)) { BT_DBG("%s ignoring unauthenticated key", hdev->name); goto not_found; } if (key->type == HCI_LK_COMBINATION && key->pin_len < 16 && - conn->pending_sec_level == BT_SECURITY_HIGH) { - BT_DBG("%s ignoring key unauthenticated for high \ - security", hdev->name); + conn->pending_sec_level == BT_SECURITY_HIGH) { + BT_DBG("%s ignoring key unauthenticated for high security", + hdev->name); goto not_found; } @@ -2723,7 +2716,7 @@ static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff } bacpy(&cp.bdaddr, &ev->bdaddr); - memcpy(cp.link_key, key->val, 16); + memcpy(cp.link_key, key->val, HCI_LINK_KEY_SIZE); hci_send_cmd(hdev, HCI_OP_LINK_KEY_REPLY, sizeof(cp), &cp); @@ -2736,7 +2729,7 @@ not_found: hci_dev_unlock(hdev); } -static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_link_key_notify *ev = (void *) skb->data; struct hci_conn *conn; @@ -2760,17 +2753,17 @@ static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff if (test_bit(HCI_LINK_KEYS, &hdev->dev_flags)) hci_add_link_key(hdev, conn, 1, &ev->bdaddr, ev->link_key, - ev->key_type, pin_len); + ev->key_type, pin_len); hci_dev_unlock(hdev); } -static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_clock_offset *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2788,12 +2781,12 @@ static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *sk hci_dev_unlock(hdev); } -static inline void hci_pkt_type_change_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_pkt_type_change_evt(struct hci_dev 
*hdev, struct sk_buff *skb) { struct hci_ev_pkt_type_change *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2804,7 +2797,7 @@ static inline void hci_pkt_type_change_evt(struct hci_dev *hdev, struct sk_buff hci_dev_unlock(hdev); } -static inline void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_pscan_rep_mode *ev = (void *) skb->data; struct inquiry_entry *ie; @@ -2822,7 +2815,8 @@ static inline void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff * hci_dev_unlock(hdev); } -static inline void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct inquiry_data data; int num_rsp = *((__u8 *) skb->data); @@ -2881,7 +2875,8 @@ static inline void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, struct hci_dev_unlock(hdev); } -static inline void hci_remote_ext_features_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_remote_ext_features_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_remote_ext_features *ev = (void *) skb->data; struct hci_conn *conn; @@ -2929,12 +2924,13 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_sync_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_sync_conn_complete_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_sync_conn_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); @@ -2984,19 +2980,20 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_sync_conn_changed_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_sync_conn_changed_evt(struct hci_dev *hdev, struct sk_buff *skb) { BT_DBG("%s", hdev->name); } -static inline void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_sniff_subrate *ev = (void *) skb->data; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); } -static inline void hci_extended_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_extended_inquiry_result_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct inquiry_data data; struct extended_inquiry_info *info = (void *) (skb->data + 1); @@ -3049,7 +3046,7 @@ static void hci_key_refresh_complete_evt(struct hci_dev *hdev, struct hci_ev_key_refresh_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %u handle %u", hdev->name, ev->status, + BT_DBG("%s status 0x%2.2x handle 0x%4.4x", hdev->name, ev->status, __le16_to_cpu(ev->handle)); hci_dev_lock(hdev); @@ -3087,7 +3084,7 @@ unlock: hci_dev_unlock(hdev); } -static inline u8 hci_get_auth_req(struct hci_conn *conn) +static u8 hci_get_auth_req(struct hci_conn *conn) { /* If remote requests dedicated bonding follow that lead */ if (conn->remote_auth == 0x02 || conn->remote_auth == 0x03) { @@ -3106,7 +3103,7 @@ static inline u8 hci_get_auth_req(struct hci_conn *conn) return conn->auth_type; } -static inline void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_io_capa_request_evt(struct hci_dev 
*hdev, struct sk_buff *skb) { struct hci_ev_io_capa_request *ev = (void *) skb->data; struct hci_conn *conn; @@ -3125,7 +3122,7 @@ static inline void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff goto unlock; if (test_bit(HCI_PAIRABLE, &hdev->dev_flags) || - (conn->remote_auth & ~0x01) == HCI_AT_NO_BONDING) { + (conn->remote_auth & ~0x01) == HCI_AT_NO_BONDING) { struct hci_cp_io_capability_reply cp; bacpy(&cp.bdaddr, &ev->bdaddr); @@ -3136,14 +3133,14 @@ static inline void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff conn->auth_type = hci_get_auth_req(conn); cp.authentication = conn->auth_type; - if ((conn->out || test_bit(HCI_CONN_REMOTE_OOB, &conn->flags)) && - hci_find_remote_oob_data(hdev, &conn->dst)) + if (hci_find_remote_oob_data(hdev, &conn->dst) && + (conn->out || test_bit(HCI_CONN_REMOTE_OOB, &conn->flags))) cp.oob_data = 0x01; else cp.oob_data = 0x00; hci_send_cmd(hdev, HCI_OP_IO_CAPABILITY_REPLY, - sizeof(cp), &cp); + sizeof(cp), &cp); } else { struct hci_cp_io_capability_neg_reply cp; @@ -3151,14 +3148,14 @@ static inline void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff cp.reason = HCI_ERROR_PAIRING_NOT_ALLOWED; hci_send_cmd(hdev, HCI_OP_IO_CAPABILITY_NEG_REPLY, - sizeof(cp), &cp); + sizeof(cp), &cp); } unlock: hci_dev_unlock(hdev); } -static inline void hci_io_capa_reply_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_io_capa_reply_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_io_capa_reply *ev = (void *) skb->data; struct hci_conn *conn; @@ -3180,8 +3177,8 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_user_confirm_request_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_user_confirm_request_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_user_confirm_req *ev = (void *) skb->data; int loc_mitm, rem_mitm, confirm_hint = 0; @@ -3209,13 +3206,13 @@ static inline void hci_user_confirm_request_evt(struct hci_dev *hdev, if (!conn->connect_cfm_cb && loc_mitm && conn->remote_cap == 0x03) { BT_DBG("Rejecting request: remote device can't provide MITM"); hci_send_cmd(hdev, HCI_OP_USER_CONFIRM_NEG_REPLY, - sizeof(ev->bdaddr), &ev->bdaddr); + sizeof(ev->bdaddr), &ev->bdaddr); goto unlock; } /* If no side requires MITM protection; auto-accept */ if ((!loc_mitm || conn->remote_cap == 0x03) && - (!rem_mitm || conn->io_capability == 0x03)) { + (!rem_mitm || conn->io_capability == 0x03)) { /* If we're not the initiators request authorization to * proceed from user space (mgmt_user_confirm with @@ -3227,7 +3224,7 @@ static inline void hci_user_confirm_request_evt(struct hci_dev *hdev, } BT_DBG("Auto-accept of user confirmation with %ums delay", - hdev->auto_accept_delay); + hdev->auto_accept_delay); if (hdev->auto_accept_delay > 0) { int delay = msecs_to_jiffies(hdev->auto_accept_delay); @@ -3236,7 +3233,7 @@ static inline void hci_user_confirm_request_evt(struct hci_dev *hdev, } hci_send_cmd(hdev, HCI_OP_USER_CONFIRM_REPLY, - sizeof(ev->bdaddr), &ev->bdaddr); + sizeof(ev->bdaddr), &ev->bdaddr); goto unlock; } @@ -3248,8 +3245,8 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_user_passkey_request_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_user_passkey_request_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_user_passkey_req *ev = (void *) skb->data; @@ -3263,7 +3260,8 @@ static inline void hci_user_passkey_request_evt(struct hci_dev *hdev, hci_dev_unlock(hdev); } -static inline void hci_simple_pair_complete_evt(struct 
hci_dev *hdev, struct sk_buff *skb) +static void hci_simple_pair_complete_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_simple_pair_complete *ev = (void *) skb->data; struct hci_conn *conn; @@ -3291,7 +3289,8 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_remote_host_features_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_remote_host_features_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_remote_host_features *ev = (void *) skb->data; struct inquiry_entry *ie; @@ -3307,8 +3306,8 @@ static inline void hci_remote_host_features_evt(struct hci_dev *hdev, struct sk_ hci_dev_unlock(hdev); } -static inline void hci_remote_oob_data_request_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_remote_oob_data_request_evt(struct hci_dev *hdev, + struct sk_buff *skb) { struct hci_ev_remote_oob_data_request *ev = (void *) skb->data; struct oob_data *data; @@ -3329,28 +3328,41 @@ static inline void hci_remote_oob_data_request_evt(struct hci_dev *hdev, memcpy(cp.randomizer, data->randomizer, sizeof(cp.randomizer)); hci_send_cmd(hdev, HCI_OP_REMOTE_OOB_DATA_REPLY, sizeof(cp), - &cp); + &cp); } else { struct hci_cp_remote_oob_data_neg_reply cp; bacpy(&cp.bdaddr, &ev->bdaddr); hci_send_cmd(hdev, HCI_OP_REMOTE_OOB_DATA_NEG_REPLY, sizeof(cp), - &cp); + &cp); } unlock: hci_dev_unlock(hdev); } -static inline void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_le_conn_complete *ev = (void *) skb->data; struct hci_conn *conn; - BT_DBG("%s status %d", hdev->name, ev->status); + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); hci_dev_lock(hdev); + if (ev->status) { + conn = hci_conn_hash_lookup_state(hdev, LE_LINK, BT_CONNECT); + if (!conn) + goto unlock; + + mgmt_connect_failed(hdev, &conn->dst, conn->type, + conn->dst_type, ev->status); + hci_proto_connect_cfm(conn, ev->status); + conn->state = BT_CLOSED; + hci_conn_del(conn); + goto unlock; + } + conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, &ev->bdaddr); if (!conn) { conn = hci_conn_add(hdev, LE_LINK, &ev->bdaddr); @@ -3363,15 +3375,6 @@ static inline void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff conn->dst_type = ev->bdaddr_type; } - if (ev->status) { - mgmt_connect_failed(hdev, &ev->bdaddr, conn->type, - conn->dst_type, ev->status); - hci_proto_connect_cfm(conn, ev->status); - conn->state = BT_CLOSED; - hci_conn_del(conn); - goto unlock; - } - if (!test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) mgmt_device_connected(hdev, &ev->bdaddr, conn->type, conn->dst_type, 0, NULL, 0, NULL); @@ -3389,8 +3392,7 @@ unlock: hci_dev_unlock(hdev); } -static inline void hci_le_adv_report_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb) { u8 num_reports = skb->data[0]; void *ptr = &skb->data[1]; @@ -3411,8 +3413,7 @@ static inline void hci_le_adv_report_evt(struct hci_dev *hdev, hci_dev_unlock(hdev); } -static inline void hci_le_ltk_request_evt(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_le_ltk_request_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_le_ltk_req *ev = (void *) skb->data; struct hci_cp_le_ltk_reply cp; @@ -3420,7 +3421,7 @@ static inline void hci_le_ltk_request_evt(struct hci_dev *hdev, struct hci_conn *conn; struct smp_ltk *ltk; - BT_DBG("%s handle %d", hdev->name, __le16_to_cpu(ev->handle)); + BT_DBG("%s handle 0x%4.4x", 
hdev->name, __le16_to_cpu(ev->handle)); hci_dev_lock(hdev); @@ -3455,7 +3456,7 @@ not_found: hci_dev_unlock(hdev); } -static inline void hci_le_meta_evt(struct hci_dev *hdev, struct sk_buff *skb) +static void hci_le_meta_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_le_meta *le_ev = (void *) skb->data; @@ -3644,7 +3645,7 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb) break; default: - BT_DBG("%s event 0x%x", hdev->name, event); + BT_DBG("%s event 0x%2.2x", hdev->name, event); break; } diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c index 5914623f426a..a7f04de03d79 100644 --- a/net/bluetooth/hci_sock.c +++ b/net/bluetooth/hci_sock.c @@ -24,25 +24,7 @@ /* Bluetooth HCI sockets. */ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/capability.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/skbuff.h> -#include <linux/workqueue.h> -#include <linux/interrupt.h> -#include <linux/compat.h> -#include <linux/socket.h> -#include <linux/ioctl.h> -#include <net/sock.h> - -#include <linux/uaccess.h> +#include <linux/export.h> #include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> @@ -113,11 +95,12 @@ void hci_send_to_sock(struct hci_dev *hdev, struct sk_buff *skb) flt = &hci_pi(sk)->filter; if (!test_bit((bt_cb(skb)->pkt_type == HCI_VENDOR_PKT) ? - 0 : (bt_cb(skb)->pkt_type & HCI_FLT_TYPE_BITS), &flt->type_mask)) + 0 : (bt_cb(skb)->pkt_type & HCI_FLT_TYPE_BITS), + &flt->type_mask)) continue; if (bt_cb(skb)->pkt_type == HCI_EVENT_PKT) { - register int evt = (*(__u8 *)skb->data & HCI_FLT_EVENT_BITS); + int evt = (*(__u8 *)skb->data & HCI_FLT_EVENT_BITS); if (!hci_test_bit(evt, &flt->event_mask)) continue; @@ -240,7 +223,8 @@ void hci_send_to_monitor(struct hci_dev *hdev, struct sk_buff *skb) struct hci_mon_hdr *hdr; /* Create a private copy with headroom */ - skb_copy = __pskb_copy(skb, HCI_MON_HDR_SIZE, GFP_ATOMIC); + skb_copy = __pskb_copy(skb, HCI_MON_HDR_SIZE, + GFP_ATOMIC); if (!skb_copy) continue; @@ -495,7 +479,8 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg) } /* Ioctls that require bound socket */ -static inline int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, unsigned long arg) +static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, + unsigned long arg) { struct hci_dev *hdev = hci_pi(sk)->hdev; @@ -540,7 +525,8 @@ static inline int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, unsign } } -static int hci_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) +static int hci_sock_ioctl(struct socket *sock, unsigned int cmd, + unsigned long arg) { struct sock *sk = sock->sk; void __user *argp = (void __user *) arg; @@ -601,7 +587,8 @@ static int hci_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long a } } -static int hci_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_len) +static int hci_sock_bind(struct socket *sock, struct sockaddr *addr, + int addr_len) { struct sockaddr_hci haddr; struct sock *sk = sock->sk; @@ -690,7 +677,8 @@ done: return err; } -static int hci_sock_getname(struct socket *sock, struct sockaddr *addr, int *addr_len, int peer) +static int hci_sock_getname(struct socket *sock, struct sockaddr *addr, + int *addr_len, int peer) { struct sockaddr_hci *haddr = (struct sockaddr_hci *) addr; struct sock *sk = sock->sk; @@ -711,13 +699,15 @@ static int 
hci_sock_getname(struct socket *sock, struct sockaddr *addr, int *add return 0; } -static inline void hci_sock_cmsg(struct sock *sk, struct msghdr *msg, struct sk_buff *skb) +static void hci_sock_cmsg(struct sock *sk, struct msghdr *msg, + struct sk_buff *skb) { __u32 mask = hci_pi(sk)->cmsg_mask; if (mask & HCI_CMSG_DIR) { int incoming = bt_cb(skb)->incoming; - put_cmsg(msg, SOL_HCI, HCI_CMSG_DIR, sizeof(incoming), &incoming); + put_cmsg(msg, SOL_HCI, HCI_CMSG_DIR, sizeof(incoming), + &incoming); } if (mask & HCI_CMSG_TSTAMP) { @@ -747,7 +737,7 @@ static inline void hci_sock_cmsg(struct sock *sk, struct msghdr *msg, struct sk_ } static int hci_sock_recvmsg(struct kiocb *iocb, struct socket *sock, - struct msghdr *msg, size_t len, int flags) + struct msghdr *msg, size_t len, int flags) { int noblock = flags & MSG_DONTWAIT; struct sock *sk = sock->sk; @@ -857,8 +847,9 @@ static int hci_sock_sendmsg(struct kiocb *iocb, struct socket *sock, u16 ocf = hci_opcode_ocf(opcode); if (((ogf > HCI_SFLT_MAX_OGF) || - !hci_test_bit(ocf & HCI_FLT_OCF_BITS, &hci_sec_filter.ocf_mask[ogf])) && - !capable(CAP_NET_RAW)) { + !hci_test_bit(ocf & HCI_FLT_OCF_BITS, + &hci_sec_filter.ocf_mask[ogf])) && + !capable(CAP_NET_RAW)) { err = -EPERM; goto drop; } @@ -891,7 +882,8 @@ drop: goto done; } -static int hci_sock_setsockopt(struct socket *sock, int level, int optname, char __user *optval, unsigned int len) +static int hci_sock_setsockopt(struct socket *sock, int level, int optname, + char __user *optval, unsigned int len) { struct hci_ufilter uf = { .opcode = 0 }; struct sock *sk = sock->sk; @@ -973,7 +965,8 @@ done: return err; } -static int hci_sock_getsockopt(struct socket *sock, int level, int optname, char __user *optval, int __user *optlen) +static int hci_sock_getsockopt(struct socket *sock, int level, int optname, + char __user *optval, int __user *optlen) { struct hci_ufilter uf; struct sock *sk = sock->sk; diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c index 937f3187eafa..a20e61c3653d 100644 --- a/net/bluetooth/hci_sysfs.c +++ b/net/bluetooth/hci_sysfs.c @@ -1,10 +1,6 @@ /* Bluetooth HCI driver model support. 
*/ -#include <linux/kernel.h> -#include <linux/slab.h> -#include <linux/init.h> #include <linux/debugfs.h> -#include <linux/seq_file.h> #include <linux/module.h> #include <net/bluetooth/bluetooth.h> @@ -31,27 +27,30 @@ static inline char *link_typetostr(int type) } } -static ssize_t show_link_type(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_link_type(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_conn *conn = to_hci_conn(dev); return sprintf(buf, "%s\n", link_typetostr(conn->type)); } -static ssize_t show_link_address(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_link_address(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_conn *conn = to_hci_conn(dev); return sprintf(buf, "%s\n", batostr(&conn->dst)); } -static ssize_t show_link_features(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_link_features(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_conn *conn = to_hci_conn(dev); return sprintf(buf, "0x%02x%02x%02x%02x%02x%02x%02x%02x\n", - conn->features[0], conn->features[1], - conn->features[2], conn->features[3], - conn->features[4], conn->features[5], - conn->features[6], conn->features[7]); + conn->features[0], conn->features[1], + conn->features[2], conn->features[3], + conn->features[4], conn->features[5], + conn->features[6], conn->features[7]); } #define LINK_ATTR(_name, _mode, _show, _store) \ @@ -185,19 +184,22 @@ static inline char *host_typetostr(int type) } } -static ssize_t show_bus(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_bus(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%s\n", host_bustostr(hdev->bus)); } -static ssize_t show_type(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_type(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%s\n", host_typetostr(hdev->dev_type)); } -static ssize_t show_name(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_name(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); char name[HCI_MAX_NAME_LENGTH + 1]; @@ -210,55 +212,64 @@ static ssize_t show_name(struct device *dev, struct device_attribute *attr, char return sprintf(buf, "%s\n", name); } -static ssize_t show_class(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_class(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); - return sprintf(buf, "0x%.2x%.2x%.2x\n", - hdev->dev_class[2], hdev->dev_class[1], hdev->dev_class[0]); + return sprintf(buf, "0x%.2x%.2x%.2x\n", hdev->dev_class[2], + hdev->dev_class[1], hdev->dev_class[0]); } -static ssize_t show_address(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_address(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%s\n", batostr(&hdev->bdaddr)); } -static ssize_t show_features(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_features(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "0x%02x%02x%02x%02x%02x%02x%02x%02x\n", - 
hdev->features[0], hdev->features[1], - hdev->features[2], hdev->features[3], - hdev->features[4], hdev->features[5], - hdev->features[6], hdev->features[7]); + hdev->features[0], hdev->features[1], + hdev->features[2], hdev->features[3], + hdev->features[4], hdev->features[5], + hdev->features[6], hdev->features[7]); } -static ssize_t show_manufacturer(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_manufacturer(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->manufacturer); } -static ssize_t show_hci_version(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_hci_version(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->hci_ver); } -static ssize_t show_hci_revision(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_hci_revision(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->hci_rev); } -static ssize_t show_idle_timeout(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_idle_timeout(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->idle_timeout); } -static ssize_t store_idle_timeout(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) +static ssize_t store_idle_timeout(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { struct hci_dev *hdev = to_hci_dev(dev); unsigned int val; @@ -276,13 +287,16 @@ static ssize_t store_idle_timeout(struct device *dev, struct device_attribute *a return count; } -static ssize_t show_sniff_max_interval(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_sniff_max_interval(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->sniff_max_interval); } -static ssize_t store_sniff_max_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) +static ssize_t store_sniff_max_interval(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { struct hci_dev *hdev = to_hci_dev(dev); u16 val; @@ -300,13 +314,16 @@ static ssize_t store_sniff_max_interval(struct device *dev, struct device_attrib return count; } -static ssize_t show_sniff_min_interval(struct device *dev, struct device_attribute *attr, char *buf) +static ssize_t show_sniff_min_interval(struct device *dev, + struct device_attribute *attr, char *buf) { struct hci_dev *hdev = to_hci_dev(dev); return sprintf(buf, "%d\n", hdev->sniff_min_interval); } -static ssize_t store_sniff_min_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) +static ssize_t store_sniff_min_interval(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { struct hci_dev *hdev = to_hci_dev(dev); u16 val; @@ -335,11 +352,11 @@ static DEVICE_ATTR(hci_version, S_IRUGO, show_hci_version, NULL); static DEVICE_ATTR(hci_revision, S_IRUGO, show_hci_revision, NULL); static DEVICE_ATTR(idle_timeout, S_IRUGO | S_IWUSR, - show_idle_timeout, store_idle_timeout); + show_idle_timeout, store_idle_timeout); static DEVICE_ATTR(sniff_max_interval, S_IRUGO | 
S_IWUSR, - show_sniff_max_interval, store_sniff_max_interval); + show_sniff_max_interval, store_sniff_max_interval); static DEVICE_ATTR(sniff_min_interval, S_IRUGO | S_IWUSR, - show_sniff_min_interval, store_sniff_min_interval); + show_sniff_min_interval, store_sniff_min_interval); static struct attribute *bt_host_attrs[] = { &dev_attr_bus.attr, @@ -455,8 +472,8 @@ static void print_bt_uuid(struct seq_file *f, u8 *uuid) memcpy(&data5, &uuid[14], 2); seq_printf(f, "%.8x-%.4x-%.4x-%.4x-%.8x%.4x\n", - ntohl(data0), ntohs(data1), ntohs(data2), - ntohs(data3), ntohl(data4), ntohs(data5)); + ntohl(data0), ntohs(data1), ntohs(data2), ntohs(data3), + ntohl(data4), ntohs(data5)); } static int uuids_show(struct seq_file *f, void *p) @@ -513,7 +530,7 @@ static int auto_accept_delay_get(void *data, u64 *val) } DEFINE_SIMPLE_ATTRIBUTE(auto_accept_delay_fops, auto_accept_delay_get, - auto_accept_delay_set, "%llu\n"); + auto_accept_delay_set, "%llu\n"); void hci_init_sysfs(struct hci_dev *hdev) { @@ -547,15 +564,15 @@ int hci_add_sysfs(struct hci_dev *hdev) return 0; debugfs_create_file("inquiry_cache", 0444, hdev->debugfs, - hdev, &inquiry_cache_fops); + hdev, &inquiry_cache_fops); debugfs_create_file("blacklist", 0444, hdev->debugfs, - hdev, &blacklist_fops); + hdev, &blacklist_fops); debugfs_create_file("uuids", 0444, hdev->debugfs, hdev, &uuids_fops); debugfs_create_file("auto_accept_delay", 0444, hdev->debugfs, hdev, - &auto_accept_delay_fops); + &auto_accept_delay_fops); return 0; } diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c index 2c20d765b394..ccd985da6518 100644 --- a/net/bluetooth/hidp/core.c +++ b/net/bluetooth/hidp/core.c @@ -21,27 +21,8 @@ */ #include <linux/module.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/freezer.h> -#include <linux/fcntl.h> -#include <linux/skbuff.h> -#include <linux/socket.h> -#include <linux/ioctl.h> #include <linux/file.h> -#include <linux/init.h> -#include <linux/wait.h> -#include <linux/mutex.h> #include <linux/kthread.h> -#include <net/sock.h> - -#include <linux/input.h> -#include <linux/hid.h> #include <linux/hidraw.h> #include <net/bluetooth/bluetooth.h> @@ -244,7 +225,8 @@ static void hidp_input_report(struct hidp_session *session, struct sk_buff *skb) } static int __hidp_send_ctrl_message(struct hidp_session *session, - unsigned char hdr, unsigned char *data, int size) + unsigned char hdr, unsigned char *data, + int size) { struct sk_buff *skb; @@ -268,7 +250,7 @@ static int __hidp_send_ctrl_message(struct hidp_session *session, return 0; } -static inline int hidp_send_ctrl_message(struct hidp_session *session, +static int hidp_send_ctrl_message(struct hidp_session *session, unsigned char hdr, unsigned char *data, int size) { int err; @@ -471,7 +453,7 @@ static void hidp_set_timer(struct hidp_session *session) mod_timer(&session->timer, jiffies + HZ * session->idle_to); } -static inline void hidp_del_timer(struct hidp_session *session) +static void hidp_del_timer(struct hidp_session *session) { if (session->idle_to > 0) del_timer(&session->timer); diff --git a/net/bluetooth/hidp/sock.c b/net/bluetooth/hidp/sock.c index 73a32d705c1f..18b3f6892a36 100644 --- a/net/bluetooth/hidp/sock.c +++ b/net/bluetooth/hidp/sock.c @@ -20,22 +20,8 @@ SOFTWARE IS DISCLAIMED. 
*/ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/capability.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/skbuff.h> -#include <linux/socket.h> -#include <linux/ioctl.h> +#include <linux/export.h> #include <linux/file.h> -#include <linux/init.h> -#include <linux/compat.h> -#include <linux/gfp.h> -#include <net/sock.h> #include "hidp.h" diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 4554e80d16a3..a8964db04bfb 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -30,32 +30,14 @@ #include <linux/module.h> -#include <linux/types.h> -#include <linux/capability.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/socket.h> -#include <linux/skbuff.h> -#include <linux/list.h> -#include <linux/device.h> #include <linux/debugfs.h> -#include <linux/seq_file.h> -#include <linux/uaccess.h> #include <linux/crc16.h> -#include <net/sock.h> - -#include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> #include <net/bluetooth/l2cap.h> #include <net/bluetooth/smp.h> +#include <net/bluetooth/a2mp.h> bool disable_ertm; @@ -73,6 +55,9 @@ static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data); static void l2cap_send_disconn_req(struct l2cap_conn *conn, struct l2cap_chan *chan, int err); +static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control, + struct sk_buff_head *skbs, u8 event); + /* ---- L2CAP channels ---- */ static struct l2cap_chan *__l2cap_get_chan_by_dcid(struct l2cap_conn *conn, u16 cid) @@ -196,7 +181,7 @@ static void __l2cap_state_change(struct l2cap_chan *chan, int state) state_to_string(state)); chan->state = state; - chan->ops->state_change(chan->data, state); + chan->ops->state_change(chan, state); } static void l2cap_state_change(struct l2cap_chan *chan, int state) @@ -224,6 +209,37 @@ static inline void l2cap_chan_set_err(struct l2cap_chan *chan, int err) release_sock(sk); } +static void __set_retrans_timer(struct l2cap_chan *chan) +{ + if (!delayed_work_pending(&chan->monitor_timer) && + chan->retrans_timeout) { + l2cap_set_timer(chan, &chan->retrans_timer, + msecs_to_jiffies(chan->retrans_timeout)); + } +} + +static void __set_monitor_timer(struct l2cap_chan *chan) +{ + __clear_retrans_timer(chan); + if (chan->monitor_timeout) { + l2cap_set_timer(chan, &chan->monitor_timer, + msecs_to_jiffies(chan->monitor_timeout)); + } +} + +static struct sk_buff *l2cap_ertm_seq_in_queue(struct sk_buff_head *head, + u16 seq) +{ + struct sk_buff *skb; + + skb_queue_walk(head, skb) { + if (bt_cb(skb)->control.txseq == seq) + return skb; + } + + return NULL; +} + /* ---- L2CAP sequence number lists ---- */ /* For ERTM, ordered lists of sequence numbers must be tracked for @@ -366,7 +382,7 @@ static void l2cap_chan_timeout(struct work_struct *work) l2cap_chan_unlock(chan); - chan->ops->close(chan->data); + chan->ops->close(chan); mutex_unlock(&conn->chan_lock); l2cap_chan_put(chan); @@ -392,6 +408,9 @@ struct l2cap_chan *l2cap_chan_create(void) atomic_set(&chan->refcnt, 1); + /* This flag is cleared in l2cap_chan_ready() */ + set_bit(CONF_NOT_COMPLETE, &chan->conf_state); + BT_DBG("chan %p", chan); return chan; @@ -412,6 +431,7 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan) chan->max_tx = 
L2CAP_DEFAULT_MAX_TX; chan->tx_win = L2CAP_DEFAULT_TX_WINDOW; chan->tx_win_max = L2CAP_DEFAULT_TX_WINDOW; + chan->ack_win = L2CAP_DEFAULT_TX_WINDOW; chan->sec_level = BT_SECURITY_LOW; set_bit(FLAG_FORCE_ACTIVE, &chan->flags); @@ -430,7 +450,7 @@ static void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) case L2CAP_CHAN_CONN_ORIENTED: if (conn->hcon->type == LE_LINK) { /* LE connection */ - chan->omtu = L2CAP_LE_DEFAULT_MTU; + chan->omtu = L2CAP_DEFAULT_MTU; chan->scid = L2CAP_CID_LE_DATA; chan->dcid = L2CAP_CID_LE_DATA; } else { @@ -447,6 +467,13 @@ static void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) chan->omtu = L2CAP_DEFAULT_MTU; break; + case L2CAP_CHAN_CONN_FIX_A2MP: + chan->scid = L2CAP_CID_A2MP; + chan->dcid = L2CAP_CID_A2MP; + chan->omtu = L2CAP_A2MP_DEFAULT_MTU; + chan->imtu = L2CAP_A2MP_DEFAULT_MTU; + break; + default: /* Raw socket can send/recv signalling messages only */ chan->scid = L2CAP_CID_SIGNALING; @@ -466,18 +493,16 @@ static void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) list_add(&chan->list, &conn->chan_l); } -static void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) +void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) { mutex_lock(&conn->chan_lock); __l2cap_chan_add(conn, chan); mutex_unlock(&conn->chan_lock); } -static void l2cap_chan_del(struct l2cap_chan *chan, int err) +void l2cap_chan_del(struct l2cap_chan *chan, int err) { - struct sock *sk = chan->sk; struct l2cap_conn *conn = chan->conn; - struct sock *parent = bt_sk(sk)->parent; __clear_chan_timer(chan); @@ -490,34 +515,22 @@ static void l2cap_chan_del(struct l2cap_chan *chan, int err) l2cap_chan_put(chan); chan->conn = NULL; - hci_conn_put(conn->hcon); - } - - lock_sock(sk); - - __l2cap_state_change(chan, BT_CLOSED); - sock_set_flag(sk, SOCK_ZAPPED); - - if (err) - __l2cap_chan_set_err(chan, err); - if (parent) { - bt_accept_unlink(sk); - parent->sk_data_ready(parent, 0); - } else - sk->sk_state_change(sk); + if (chan->chan_type != L2CAP_CHAN_CONN_FIX_A2MP) + hci_conn_put(conn->hcon); + } - release_sock(sk); + if (chan->ops->teardown) + chan->ops->teardown(chan, err); - if (!(test_bit(CONF_OUTPUT_DONE, &chan->conf_state) && - test_bit(CONF_INPUT_DONE, &chan->conf_state))) + if (test_bit(CONF_NOT_COMPLETE, &chan->conf_state)) return; - skb_queue_purge(&chan->tx_q); - - if (chan->mode == L2CAP_MODE_ERTM) { - struct srej_list *l, *tmp; + switch(chan->mode) { + case L2CAP_MODE_BASIC: + break; + case L2CAP_MODE_ERTM: __clear_retrans_timer(chan); __clear_monitor_timer(chan); __clear_ack_timer(chan); @@ -526,30 +539,15 @@ static void l2cap_chan_del(struct l2cap_chan *chan, int err) l2cap_seq_list_free(&chan->srej_list); l2cap_seq_list_free(&chan->retrans_list); - list_for_each_entry_safe(l, tmp, &chan->srej_l, list) { - list_del(&l->list); - kfree(l); - } - } -} -static void l2cap_chan_cleanup_listen(struct sock *parent) -{ - struct sock *sk; - - BT_DBG("parent %p", parent); - - /* Close not yet accepted channels */ - while ((sk = bt_accept_dequeue(parent, NULL))) { - struct l2cap_chan *chan = l2cap_pi(sk)->chan; - - l2cap_chan_lock(chan); - __clear_chan_timer(chan); - l2cap_chan_close(chan, ECONNRESET); - l2cap_chan_unlock(chan); + /* fall through */ - chan->ops->close(chan->data); + case L2CAP_MODE_STREAMING: + skb_queue_purge(&chan->tx_q); + break; } + + return; } void l2cap_chan_close(struct l2cap_chan *chan, int reason) @@ -562,12 +560,8 @@ void l2cap_chan_close(struct l2cap_chan *chan, int reason) 
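
The l2cap_chan_del() and l2cap_chan_close() hunks here stop manipulating the socket directly and instead route frontend cleanup through a chan->ops->teardown() callback, while a CONF_NOT_COMPLETE flag (set in l2cap_chan_create(), cleared again in l2cap_chan_ready()) lets the core skip ERTM/streaming queue cleanup for channels whose configuration never finished. The sketch below is a minimal userspace model of that pattern, with simplified stand-in names rather than the kernel structures; it is illustrative only, not the kernel API.

```c
/* Minimal model: core channel code calls an ops callback for
 * frontend-specific cleanup and skips queue teardown when
 * configuration never completed. Names are stand-ins.
 */
#include <stdio.h>

#define CONF_NOT_COMPLETE (1u << 0)

struct chan;

struct chan_ops {
	/* frontend-specific cleanup (socket, fixed channel, ...) */
	void (*teardown)(struct chan *chan, int err);
};

struct chan {
	unsigned int conf_state;
	const struct chan_ops *ops;
	int queued_frames;	/* stands in for tx_q / srej_q */
};

static void chan_ready(struct chan *chan)
{
	/* clearing all conf flags also clears CONF_NOT_COMPLETE */
	chan->conf_state = 0;
}

static void chan_del(struct chan *chan, int err)
{
	if (chan->ops && chan->ops->teardown)
		chan->ops->teardown(chan, err);

	/* nothing was ever queued if config never completed */
	if (chan->conf_state & CONF_NOT_COMPLETE)
		return;

	chan->queued_frames = 0;	/* purge tx queues */
}

static void sock_teardown(struct chan *chan, int err)
{
	printf("teardown, err=%d\n", err);
}

int main(void)
{
	static const struct chan_ops ops = { .teardown = sock_teardown };
	struct chan c = { .conf_state = CONF_NOT_COMPLETE, .ops = &ops,
			  .queued_frames = 3 };

	chan_ready(&c);		/* configuration completed */
	chan_del(&c, 104);	/* arbitrary error value for the demo */
	printf("queued_frames=%d\n", c.queued_frames);
	return 0;
}
```

Keeping the socket knowledge behind the ops vtable is also what appears to allow channels without a backing socket (such as the A2MP fixed channel added elsewhere in this diff) to share the same deletion path.
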
switch (chan->state) { case BT_LISTEN: - lock_sock(sk); - l2cap_chan_cleanup_listen(sk); - - __l2cap_state_change(chan, BT_CLOSED); - sock_set_flag(sk, SOCK_ZAPPED); - release_sock(sk); + if (chan->ops->teardown) + chan->ops->teardown(chan, 0); break; case BT_CONNECTED: @@ -595,7 +589,7 @@ void l2cap_chan_close(struct l2cap_chan *chan, int reason) rsp.scid = cpu_to_le16(chan->dcid); rsp.dcid = cpu_to_le16(chan->scid); rsp.result = cpu_to_le16(result); - rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO); + rsp.status = __constant_cpu_to_le16(L2CAP_CS_NO_INFO); l2cap_send_cmd(conn, chan->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); } @@ -609,9 +603,8 @@ void l2cap_chan_close(struct l2cap_chan *chan, int reason) break; default: - lock_sock(sk); - sock_set_flag(sk, SOCK_ZAPPED); - release_sock(sk); + if (chan->ops->teardown) + chan->ops->teardown(chan, 0); break; } } @@ -627,7 +620,7 @@ static inline u8 l2cap_get_auth_type(struct l2cap_chan *chan) default: return HCI_AT_NO_BONDING; } - } else if (chan->psm == cpu_to_le16(0x0001)) { + } else if (chan->psm == __constant_cpu_to_le16(L2CAP_PSM_SDP)) { if (chan->sec_level == BT_SECURITY_LOW) chan->sec_level = BT_SECURITY_SDP; @@ -773,9 +766,11 @@ static inline void __unpack_control(struct l2cap_chan *chan, if (test_bit(FLAG_EXT_CTRL, &chan->flags)) { __unpack_extended_control(get_unaligned_le32(skb->data), &bt_cb(skb)->control); + skb_pull(skb, L2CAP_EXT_CTRL_SIZE); } else { __unpack_enhanced_control(get_unaligned_le16(skb->data), &bt_cb(skb)->control); + skb_pull(skb, L2CAP_ENH_CTRL_SIZE); } } @@ -830,66 +825,102 @@ static inline void __pack_control(struct l2cap_chan *chan, } } -static inline void l2cap_send_sframe(struct l2cap_chan *chan, u32 control) +static inline unsigned int __ertm_hdr_size(struct l2cap_chan *chan) { - struct sk_buff *skb; - struct l2cap_hdr *lh; - struct l2cap_conn *conn = chan->conn; - int count, hlen; - - if (chan->state != BT_CONNECTED) - return; - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - hlen = L2CAP_EXT_HDR_SIZE; + return L2CAP_EXT_HDR_SIZE; else - hlen = L2CAP_ENH_HDR_SIZE; + return L2CAP_ENH_HDR_SIZE; +} + +static struct sk_buff *l2cap_create_sframe_pdu(struct l2cap_chan *chan, + u32 control) +{ + struct sk_buff *skb; + struct l2cap_hdr *lh; + int hlen = __ertm_hdr_size(chan); if (chan->fcs == L2CAP_FCS_CRC16) hlen += L2CAP_FCS_SIZE; - BT_DBG("chan %p, control 0x%8.8x", chan, control); - - count = min_t(unsigned int, conn->mtu, hlen); + skb = bt_skb_alloc(hlen, GFP_KERNEL); - control |= __set_sframe(chan); - - if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state)) - control |= __set_ctrl_final(chan); - - if (test_and_clear_bit(CONN_SEND_PBIT, &chan->conn_state)) - control |= __set_ctrl_poll(chan); - - skb = bt_skb_alloc(count, GFP_ATOMIC); if (!skb) - return; + return ERR_PTR(-ENOMEM); lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE); lh->len = cpu_to_le16(hlen - L2CAP_HDR_SIZE); lh->cid = cpu_to_le16(chan->dcid); - __put_control(chan, control, skb_put(skb, __ctrl_size(chan))); + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) + put_unaligned_le32(control, skb_put(skb, L2CAP_EXT_CTRL_SIZE)); + else + put_unaligned_le16(control, skb_put(skb, L2CAP_ENH_CTRL_SIZE)); if (chan->fcs == L2CAP_FCS_CRC16) { - u16 fcs = crc16(0, (u8 *)lh, count - L2CAP_FCS_SIZE); + u16 fcs = crc16(0, (u8 *)skb->data, skb->len); put_unaligned_le16(fcs, skb_put(skb, L2CAP_FCS_SIZE)); } skb->priority = HCI_PRIO_MAX; - l2cap_do_send(chan, skb); + return skb; } -static inline void l2cap_send_rr_or_rnr(struct l2cap_chan *chan, u32 control) +static 
void l2cap_send_sframe(struct l2cap_chan *chan, + struct l2cap_ctrl *control) { - if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { - control |= __set_ctrl_super(chan, L2CAP_SUPER_RNR); + struct sk_buff *skb; + u32 control_field; + + BT_DBG("chan %p, control %p", chan, control); + + if (!control->sframe) + return; + + if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state) && + !control->poll) + control->final = 1; + + if (control->super == L2CAP_SUPER_RR) + clear_bit(CONN_RNR_SENT, &chan->conn_state); + else if (control->super == L2CAP_SUPER_RNR) set_bit(CONN_RNR_SENT, &chan->conn_state); - } else - control |= __set_ctrl_super(chan, L2CAP_SUPER_RR); - control |= __set_reqseq(chan, chan->buffer_seq); + if (control->super != L2CAP_SUPER_SREJ) { + chan->last_acked_seq = control->reqseq; + __clear_ack_timer(chan); + } - l2cap_send_sframe(chan, control); + BT_DBG("reqseq %d, final %d, poll %d, super %d", control->reqseq, + control->final, control->poll, control->super); + + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) + control_field = __pack_extended_control(control); + else + control_field = __pack_enhanced_control(control); + + skb = l2cap_create_sframe_pdu(chan, control_field); + if (!IS_ERR(skb)) + l2cap_do_send(chan, skb); +} + +static void l2cap_send_rr_or_rnr(struct l2cap_chan *chan, bool poll) +{ + struct l2cap_ctrl control; + + BT_DBG("chan %p, poll %d", chan, poll); + + memset(&control, 0, sizeof(control)); + control.sframe = 1; + control.poll = poll; + + if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) + control.super = L2CAP_SUPER_RNR; + else + control.super = L2CAP_SUPER_RR; + + control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &control); } static inline int __l2cap_no_conn_pending(struct l2cap_chan *chan) @@ -914,25 +945,13 @@ static void l2cap_send_conn_req(struct l2cap_chan *chan) static void l2cap_chan_ready(struct l2cap_chan *chan) { - struct sock *sk = chan->sk; - struct sock *parent; - - lock_sock(sk); - - parent = bt_sk(sk)->parent; - - BT_DBG("sk %p, parent %p", sk, parent); - + /* This clears all conf flags, including CONF_NOT_COMPLETE */ chan->conf_state = 0; __clear_chan_timer(chan); - __l2cap_state_change(chan, BT_CONNECTED); - sk->sk_state_change(sk); - - if (parent) - parent->sk_data_ready(parent, 0); + chan->state = BT_CONNECTED; - release_sock(sk); + chan->ops->ready(chan); } static void l2cap_do_start(struct l2cap_chan *chan) @@ -953,7 +972,7 @@ static void l2cap_do_start(struct l2cap_chan *chan) l2cap_send_conn_req(chan); } else { struct l2cap_info_req req; - req.type = cpu_to_le16(L2CAP_IT_FEAT_MASK); + req.type = __constant_cpu_to_le16(L2CAP_IT_FEAT_MASK); conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_SENT; conn->info_ident = l2cap_get_ident(conn); @@ -995,6 +1014,11 @@ static void l2cap_send_disconn_req(struct l2cap_conn *conn, struct l2cap_chan *c __clear_ack_timer(chan); } + if (chan->chan_type == L2CAP_CHAN_CONN_FIX_A2MP) { + __l2cap_state_change(chan, BT_DISCONN); + return; + } + req.dcid = cpu_to_le16(chan->dcid); req.scid = cpu_to_le16(chan->scid); l2cap_send_cmd(conn, l2cap_get_ident(conn), @@ -1053,20 +1077,20 @@ static void l2cap_conn_start(struct l2cap_conn *conn) if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) { struct sock *parent = bt_sk(sk)->parent; - rsp.result = cpu_to_le16(L2CAP_CR_PEND); - rsp.status = cpu_to_le16(L2CAP_CS_AUTHOR_PEND); + rsp.result = __constant_cpu_to_le16(L2CAP_CR_PEND); + rsp.status = __constant_cpu_to_le16(L2CAP_CS_AUTHOR_PEND); if (parent) parent->sk_data_ready(parent, 0); } else { 
__l2cap_state_change(chan, BT_CONFIG); - rsp.result = cpu_to_le16(L2CAP_CR_SUCCESS); - rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO); + rsp.result = __constant_cpu_to_le16(L2CAP_CR_SUCCESS); + rsp.status = __constant_cpu_to_le16(L2CAP_CS_NO_INFO); } release_sock(sk); } else { - rsp.result = cpu_to_le16(L2CAP_CR_PEND); - rsp.status = cpu_to_le16(L2CAP_CS_AUTHEN_PEND); + rsp.result = __constant_cpu_to_le16(L2CAP_CR_PEND); + rsp.status = __constant_cpu_to_le16(L2CAP_CS_AUTHEN_PEND); } l2cap_send_cmd(conn, chan->ident, L2CAP_CONN_RSP, @@ -1150,13 +1174,7 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn) lock_sock(parent); - /* Check for backlog size */ - if (sk_acceptq_is_full(parent)) { - BT_DBG("backlog full %d", parent->sk_ack_backlog); - goto clean; - } - - chan = pchan->ops->new_connection(pchan->data); + chan = pchan->ops->new_connection(pchan); if (!chan) goto clean; @@ -1171,10 +1189,7 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn) l2cap_chan_add(conn, chan); - __set_chan_timer(chan, sk->sk_sndtimeo); - - __l2cap_state_change(chan, BT_CONNECTED); - parent->sk_data_ready(parent, 0); + l2cap_chan_ready(chan); clean: release_sock(parent); @@ -1198,6 +1213,11 @@ static void l2cap_conn_ready(struct l2cap_conn *conn) l2cap_chan_lock(chan); + if (chan->chan_type == L2CAP_CHAN_CONN_FIX_A2MP) { + l2cap_chan_unlock(chan); + continue; + } + if (conn->hcon->type == LE_LINK) { if (smp_conn_security(conn, chan->sec_level)) l2cap_chan_ready(chan); @@ -1270,7 +1290,7 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err) l2cap_chan_unlock(chan); - chan->ops->close(chan->data); + chan->ops->close(chan); l2cap_chan_put(chan); } @@ -1444,21 +1464,17 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid, goto done; } - lock_sock(sk); - - switch (sk->sk_state) { + switch (chan->state) { case BT_CONNECT: case BT_CONNECT2: case BT_CONFIG: /* Already connecting */ err = 0; - release_sock(sk); goto done; case BT_CONNECTED: /* Already connected */ err = -EISCONN; - release_sock(sk); goto done; case BT_OPEN: @@ -1468,13 +1484,12 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid, default: err = -EBADFD; - release_sock(sk); goto done; } /* Set destination address and psm */ + lock_sock(sk); bacpy(&bt_sk(sk)->dst, dst); - release_sock(sk); chan->psm = psm; @@ -1576,23 +1591,20 @@ int __l2cap_wait_ack(struct sock *sk) static void l2cap_monitor_timeout(struct work_struct *work) { struct l2cap_chan *chan = container_of(work, struct l2cap_chan, - monitor_timer.work); + monitor_timer.work); BT_DBG("chan %p", chan); l2cap_chan_lock(chan); - if (chan->retry_count >= chan->remote_max_tx) { - l2cap_send_disconn_req(chan->conn, chan, ECONNABORTED); + if (!chan->conn) { l2cap_chan_unlock(chan); l2cap_chan_put(chan); return; } - chan->retry_count++; - __set_monitor_timer(chan); + l2cap_tx(chan, NULL, NULL, L2CAP_EV_MONITOR_TO); - l2cap_send_rr_or_rnr(chan, L2CAP_CTRL_POLL); l2cap_chan_unlock(chan); l2cap_chan_put(chan); } @@ -1600,234 +1612,293 @@ static void l2cap_monitor_timeout(struct work_struct *work) static void l2cap_retrans_timeout(struct work_struct *work) { struct l2cap_chan *chan = container_of(work, struct l2cap_chan, - retrans_timer.work); + retrans_timer.work); BT_DBG("chan %p", chan); l2cap_chan_lock(chan); - chan->retry_count = 1; - __set_monitor_timer(chan); - - set_bit(CONN_WAIT_F, &chan->conn_state); - - l2cap_send_rr_or_rnr(chan, L2CAP_CTRL_POLL); + if (!chan->conn) { + l2cap_chan_unlock(chan); + l2cap_chan_put(chan); + return; + } + 
l2cap_tx(chan, NULL, NULL, L2CAP_EV_RETRANS_TO); l2cap_chan_unlock(chan); l2cap_chan_put(chan); } -static void l2cap_drop_acked_frames(struct l2cap_chan *chan) +static void l2cap_streaming_send(struct l2cap_chan *chan, + struct sk_buff_head *skbs) { struct sk_buff *skb; + struct l2cap_ctrl *control; - while ((skb = skb_peek(&chan->tx_q)) && - chan->unacked_frames) { - if (bt_cb(skb)->control.txseq == chan->expected_ack_seq) - break; + BT_DBG("chan %p, skbs %p", chan, skbs); - skb = skb_dequeue(&chan->tx_q); - kfree_skb(skb); + skb_queue_splice_tail_init(skbs, &chan->tx_q); - chan->unacked_frames--; - } + while (!skb_queue_empty(&chan->tx_q)) { - if (!chan->unacked_frames) - __clear_retrans_timer(chan); -} + skb = skb_dequeue(&chan->tx_q); -static void l2cap_streaming_send(struct l2cap_chan *chan) -{ - struct sk_buff *skb; - u32 control; - u16 fcs; + bt_cb(skb)->control.retries = 1; + control = &bt_cb(skb)->control; + + control->reqseq = 0; + control->txseq = chan->next_tx_seq; - while ((skb = skb_dequeue(&chan->tx_q))) { - control = __get_control(chan, skb->data + L2CAP_HDR_SIZE); - control |= __set_txseq(chan, chan->next_tx_seq); - control |= __set_ctrl_sar(chan, bt_cb(skb)->control.sar); - __put_control(chan, control, skb->data + L2CAP_HDR_SIZE); + __pack_control(chan, control, skb); if (chan->fcs == L2CAP_FCS_CRC16) { - fcs = crc16(0, (u8 *)skb->data, - skb->len - L2CAP_FCS_SIZE); - put_unaligned_le16(fcs, - skb->data + skb->len - L2CAP_FCS_SIZE); + u16 fcs = crc16(0, (u8 *) skb->data, skb->len); + put_unaligned_le16(fcs, skb_put(skb, L2CAP_FCS_SIZE)); } l2cap_do_send(chan, skb); + BT_DBG("Sent txseq %u", control->txseq); + chan->next_tx_seq = __next_seq(chan, chan->next_tx_seq); + chan->frames_sent++; } } -static void l2cap_retransmit_one_frame(struct l2cap_chan *chan, u16 tx_seq) +static int l2cap_ertm_send(struct l2cap_chan *chan) { struct sk_buff *skb, *tx_skb; - u16 fcs; - u32 control; + struct l2cap_ctrl *control; + int sent = 0; - skb = skb_peek(&chan->tx_q); - if (!skb) - return; + BT_DBG("chan %p", chan); - while (bt_cb(skb)->control.txseq != tx_seq) { - if (skb_queue_is_last(&chan->tx_q, skb)) - return; + if (chan->state != BT_CONNECTED) + return -ENOTCONN; - skb = skb_queue_next(&chan->tx_q, skb); - } + if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state)) + return 0; - if (bt_cb(skb)->control.retries == chan->remote_max_tx && - chan->remote_max_tx) { - l2cap_send_disconn_req(chan->conn, chan, ECONNABORTED); - return; - } + while (chan->tx_send_head && + chan->unacked_frames < chan->remote_tx_win && + chan->tx_state == L2CAP_TX_STATE_XMIT) { + + skb = chan->tx_send_head; - tx_skb = skb_clone(skb, GFP_ATOMIC); - bt_cb(skb)->control.retries++; + bt_cb(skb)->control.retries = 1; + control = &bt_cb(skb)->control; - control = __get_control(chan, tx_skb->data + L2CAP_HDR_SIZE); - control &= __get_sar_mask(chan); + if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state)) + control->final = 1; + + control->reqseq = chan->buffer_seq; + chan->last_acked_seq = chan->buffer_seq; + control->txseq = chan->next_tx_seq; - if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state)) - control |= __set_ctrl_final(chan); + __pack_control(chan, control, skb); - control |= __set_reqseq(chan, chan->buffer_seq); - control |= __set_txseq(chan, tx_seq); + if (chan->fcs == L2CAP_FCS_CRC16) { + u16 fcs = crc16(0, (u8 *) skb->data, skb->len); + put_unaligned_le16(fcs, skb_put(skb, L2CAP_FCS_SIZE)); + } - __put_control(chan, control, tx_skb->data + L2CAP_HDR_SIZE); + /* Clone after data has been modified. 
Data is assumed to be + read-only (for locking purposes) on cloned sk_buffs. + */ + tx_skb = skb_clone(skb, GFP_KERNEL); - if (chan->fcs == L2CAP_FCS_CRC16) { - fcs = crc16(0, (u8 *)tx_skb->data, - tx_skb->len - L2CAP_FCS_SIZE); - put_unaligned_le16(fcs, - tx_skb->data + tx_skb->len - L2CAP_FCS_SIZE); + if (!tx_skb) + break; + + __set_retrans_timer(chan); + + chan->next_tx_seq = __next_seq(chan, chan->next_tx_seq); + chan->unacked_frames++; + chan->frames_sent++; + sent++; + + if (skb_queue_is_last(&chan->tx_q, skb)) + chan->tx_send_head = NULL; + else + chan->tx_send_head = skb_queue_next(&chan->tx_q, skb); + + l2cap_do_send(chan, tx_skb); + BT_DBG("Sent txseq %u", control->txseq); } - l2cap_do_send(chan, tx_skb); + BT_DBG("Sent %d, %u unacked, %u in ERTM queue", sent, + chan->unacked_frames, skb_queue_len(&chan->tx_q)); + + return sent; } -static int l2cap_ertm_send(struct l2cap_chan *chan) +static void l2cap_ertm_resend(struct l2cap_chan *chan) { - struct sk_buff *skb, *tx_skb; - u16 fcs; - u32 control; - int nsent = 0; + struct l2cap_ctrl control; + struct sk_buff *skb; + struct sk_buff *tx_skb; + u16 seq; - if (chan->state != BT_CONNECTED) - return -ENOTCONN; + BT_DBG("chan %p", chan); if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state)) - return 0; + return; - while ((skb = chan->tx_send_head) && (!l2cap_tx_window_full(chan))) { + while (chan->retrans_list.head != L2CAP_SEQ_LIST_CLEAR) { + seq = l2cap_seq_list_pop(&chan->retrans_list); - if (bt_cb(skb)->control.retries == chan->remote_max_tx && - chan->remote_max_tx) { - l2cap_send_disconn_req(chan->conn, chan, ECONNABORTED); - break; + skb = l2cap_ertm_seq_in_queue(&chan->tx_q, seq); + if (!skb) { + BT_DBG("Error: Can't retransmit seq %d, frame missing", + seq); + continue; } - tx_skb = skb_clone(skb, GFP_ATOMIC); - bt_cb(skb)->control.retries++; + control = bt_cb(skb)->control; - control = __get_control(chan, tx_skb->data + L2CAP_HDR_SIZE); - control &= __get_sar_mask(chan); + if (chan->max_tx != 0 && + bt_cb(skb)->control.retries > chan->max_tx) { + BT_DBG("Retry limit exceeded (%d)", chan->max_tx); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + l2cap_seq_list_clear(&chan->retrans_list); + break; + } + control.reqseq = chan->buffer_seq; if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state)) - control |= __set_ctrl_final(chan); + control.final = 1; + else + control.final = 0; - control |= __set_reqseq(chan, chan->buffer_seq); - control |= __set_txseq(chan, chan->next_tx_seq); - control |= __set_ctrl_sar(chan, bt_cb(skb)->control.sar); + if (skb_cloned(skb)) { + /* Cloned sk_buffs are read-only, so we need a + * writeable copy + */ + tx_skb = skb_copy(skb, GFP_ATOMIC); + } else { + tx_skb = skb_clone(skb, GFP_ATOMIC); + } - __put_control(chan, control, tx_skb->data + L2CAP_HDR_SIZE); + if (!tx_skb) { + l2cap_seq_list_clear(&chan->retrans_list); + break; + } + + /* Update skb contents */ + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) { + put_unaligned_le32(__pack_extended_control(&control), + tx_skb->data + L2CAP_HDR_SIZE); + } else { + put_unaligned_le16(__pack_enhanced_control(&control), + tx_skb->data + L2CAP_HDR_SIZE); + } if (chan->fcs == L2CAP_FCS_CRC16) { - fcs = crc16(0, (u8 *)skb->data, - tx_skb->len - L2CAP_FCS_SIZE); - put_unaligned_le16(fcs, skb->data + - tx_skb->len - L2CAP_FCS_SIZE); + u16 fcs = crc16(0, (u8 *) tx_skb->data, tx_skb->len); + put_unaligned_le16(fcs, skb_put(tx_skb, + L2CAP_FCS_SIZE)); } l2cap_do_send(chan, tx_skb); - __set_retrans_timer(chan); - - bt_cb(skb)->control.txseq = chan->next_tx_seq; 
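
In the l2cap_ertm_send() and l2cap_ertm_resend() hunks above, retransmission is now driven by a list of requested sequence numbers: each seq is popped from retrans_list, looked up in the unacked tx queue via l2cap_ertm_seq_in_queue(), checked against max_tx, and resent from a writable copy. The following sketch models only that lookup-and-resend loop, using plain arrays in place of sk_buff queues and l2cap_seq_list; names and limits are simplified for illustration and do not mirror the kernel API.

```c
/* Model of retransmit-by-sequence-number: requested seqs are looked
 * up in the unacked tx queue before being resent.
 */
#include <stdio.h>

#define TX_MAX	8

struct frame {
	unsigned int txseq;
	int retries;
};

static struct frame tx_q[TX_MAX];
static int tx_len;

/* walk the unacked tx queue looking for a frame by its txseq */
static struct frame *seq_in_queue(unsigned int seq)
{
	for (int i = 0; i < tx_len; i++)
		if (tx_q[i].txseq == seq)
			return &tx_q[i];
	return NULL;	/* already acked, or never sent */
}

static void retransmit(const unsigned int *retrans_list, int n, int max_tx)
{
	for (int i = 0; i < n; i++) {
		struct frame *f = seq_in_queue(retrans_list[i]);

		if (!f) {
			printf("can't retransmit seq %u, frame missing\n",
			       retrans_list[i]);
			continue;
		}
		if (max_tx && f->retries > max_tx) {
			printf("retry limit exceeded (%d)\n", max_tx);
			return;	/* the real code disconnects the channel */
		}
		f->retries++;
		printf("resent txseq %u (retries %d)\n", f->txseq, f->retries);
	}
}

int main(void)
{
	for (tx_len = 0; tx_len < 4; tx_len++)
		tx_q[tx_len].txseq = (unsigned int)tx_len;

	const unsigned int want[] = { 1, 3, 7 };

	retransmit(want, 3, 3);
	return 0;
}
```
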
+ BT_DBG("Resent txseq %d", control.txseq); - chan->next_tx_seq = __next_seq(chan, chan->next_tx_seq); - - if (bt_cb(skb)->control.retries == 1) { - chan->unacked_frames++; - - if (!nsent++) - __clear_ack_timer(chan); - } - - chan->frames_sent++; - - if (skb_queue_is_last(&chan->tx_q, skb)) - chan->tx_send_head = NULL; - else - chan->tx_send_head = skb_queue_next(&chan->tx_q, skb); + chan->last_acked_seq = chan->buffer_seq; } - - return nsent; } -static int l2cap_retransmit_frames(struct l2cap_chan *chan) +static void l2cap_retransmit(struct l2cap_chan *chan, + struct l2cap_ctrl *control) { - int ret; + BT_DBG("chan %p, control %p", chan, control); - if (!skb_queue_empty(&chan->tx_q)) - chan->tx_send_head = chan->tx_q.next; - - chan->next_tx_seq = chan->expected_ack_seq; - ret = l2cap_ertm_send(chan); - return ret; + l2cap_seq_list_append(&chan->retrans_list, control->reqseq); + l2cap_ertm_resend(chan); } -static void __l2cap_send_ack(struct l2cap_chan *chan) +static void l2cap_retransmit_all(struct l2cap_chan *chan, + struct l2cap_ctrl *control) { - u32 control = 0; + struct sk_buff *skb; - control |= __set_reqseq(chan, chan->buffer_seq); + BT_DBG("chan %p, control %p", chan, control); - if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { - control |= __set_ctrl_super(chan, L2CAP_SUPER_RNR); - set_bit(CONN_RNR_SENT, &chan->conn_state); - l2cap_send_sframe(chan, control); - return; - } + if (control->poll) + set_bit(CONN_SEND_FBIT, &chan->conn_state); + + l2cap_seq_list_clear(&chan->retrans_list); - if (l2cap_ertm_send(chan) > 0) + if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state)) return; - control |= __set_ctrl_super(chan, L2CAP_SUPER_RR); - l2cap_send_sframe(chan, control); + if (chan->unacked_frames) { + skb_queue_walk(&chan->tx_q, skb) { + if (bt_cb(skb)->control.txseq == control->reqseq || + skb == chan->tx_send_head) + break; + } + + skb_queue_walk_from(&chan->tx_q, skb) { + if (skb == chan->tx_send_head) + break; + + l2cap_seq_list_append(&chan->retrans_list, + bt_cb(skb)->control.txseq); + } + + l2cap_ertm_resend(chan); + } } static void l2cap_send_ack(struct l2cap_chan *chan) { - __clear_ack_timer(chan); - __l2cap_send_ack(chan); -} + struct l2cap_ctrl control; + u16 frames_to_ack = __seq_offset(chan, chan->buffer_seq, + chan->last_acked_seq); + int threshold; -static void l2cap_send_srejtail(struct l2cap_chan *chan) -{ - struct srej_list *tail; - u32 control; + BT_DBG("chan %p last_acked_seq %d buffer_seq %d", + chan, chan->last_acked_seq, chan->buffer_seq); - control = __set_ctrl_super(chan, L2CAP_SUPER_SREJ); - control |= __set_ctrl_final(chan); + memset(&control, 0, sizeof(control)); + control.sframe = 1; - tail = list_entry((&chan->srej_l)->prev, struct srej_list, list); - control |= __set_reqseq(chan, tail->tx_seq); + if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state) && + chan->rx_state == L2CAP_RX_STATE_RECV) { + __clear_ack_timer(chan); + control.super = L2CAP_SUPER_RNR; + control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &control); + } else { + if (!test_bit(CONN_REMOTE_BUSY, &chan->conn_state)) { + l2cap_ertm_send(chan); + /* If any i-frames were sent, they included an ack */ + if (chan->buffer_seq == chan->last_acked_seq) + frames_to_ack = 0; + } - l2cap_send_sframe(chan, control); + /* Ack now if the window is 3/4ths full. 
+ * Calculate without mul or div + */ + threshold = chan->ack_win; + threshold += threshold << 1; + threshold >>= 2; + + BT_DBG("frames_to_ack %u, threshold %d", frames_to_ack, + threshold); + + if (frames_to_ack >= threshold) { + __clear_ack_timer(chan); + control.super = L2CAP_SUPER_RR; + control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &control); + frames_to_ack = 0; + } + + if (frames_to_ack) + __set_ack_timer(chan); + } } static inline int l2cap_skbuff_fromiovec(struct l2cap_chan *chan, @@ -1876,15 +1947,15 @@ static inline int l2cap_skbuff_fromiovec(struct l2cap_chan *chan, } static struct sk_buff *l2cap_create_connless_pdu(struct l2cap_chan *chan, - struct msghdr *msg, size_t len, - u32 priority) + struct msghdr *msg, size_t len, + u32 priority) { struct l2cap_conn *conn = chan->conn; struct sk_buff *skb; int err, count, hlen = L2CAP_HDR_SIZE + L2CAP_PSMLEN_SIZE; struct l2cap_hdr *lh; - BT_DBG("chan %p len %d priority %u", chan, (int)len, priority); + BT_DBG("chan %p len %zu priority %u", chan, len, priority); count = min_t(unsigned int, (conn->mtu - hlen), len); @@ -1910,15 +1981,15 @@ static struct sk_buff *l2cap_create_connless_pdu(struct l2cap_chan *chan, } static struct sk_buff *l2cap_create_basic_pdu(struct l2cap_chan *chan, - struct msghdr *msg, size_t len, - u32 priority) + struct msghdr *msg, size_t len, + u32 priority) { struct l2cap_conn *conn = chan->conn; struct sk_buff *skb; int err, count; struct l2cap_hdr *lh; - BT_DBG("chan %p len %d", chan, (int)len); + BT_DBG("chan %p len %zu", chan, len); count = min_t(unsigned int, (conn->mtu - L2CAP_HDR_SIZE), len); @@ -1943,23 +2014,20 @@ static struct sk_buff *l2cap_create_basic_pdu(struct l2cap_chan *chan, } static struct sk_buff *l2cap_create_iframe_pdu(struct l2cap_chan *chan, - struct msghdr *msg, size_t len, - u16 sdulen) + struct msghdr *msg, size_t len, + u16 sdulen) { struct l2cap_conn *conn = chan->conn; struct sk_buff *skb; int err, count, hlen; struct l2cap_hdr *lh; - BT_DBG("chan %p len %d", chan, (int)len); + BT_DBG("chan %p len %zu", chan, len); if (!conn) return ERR_PTR(-ENOTCONN); - if (test_bit(FLAG_EXT_CTRL, &chan->flags)) - hlen = L2CAP_EXT_HDR_SIZE; - else - hlen = L2CAP_ENH_HDR_SIZE; + hlen = __ertm_hdr_size(chan); if (sdulen) hlen += L2CAP_SDULEN_SIZE; @@ -1979,7 +2047,11 @@ static struct sk_buff *l2cap_create_iframe_pdu(struct l2cap_chan *chan, lh->cid = cpu_to_le16(chan->dcid); lh->len = cpu_to_le16(len + (hlen - L2CAP_HDR_SIZE)); - __put_control(chan, 0, skb_put(skb, __ctrl_size(chan))); + /* Control header is populated later */ + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) + put_unaligned_le32(0, skb_put(skb, L2CAP_EXT_CTRL_SIZE)); + else + put_unaligned_le16(0, skb_put(skb, L2CAP_ENH_CTRL_SIZE)); if (sdulen) put_unaligned_le16(sdulen, skb_put(skb, L2CAP_SDULEN_SIZE)); @@ -1990,9 +2062,7 @@ static struct sk_buff *l2cap_create_iframe_pdu(struct l2cap_chan *chan, return ERR_PTR(err); } - if (chan->fcs == L2CAP_FCS_CRC16) - put_unaligned_le16(0, skb_put(skb, L2CAP_FCS_SIZE)); - + bt_cb(skb)->control.fcs = chan->fcs; bt_cb(skb)->control.retries = 0; return skb; } @@ -2004,10 +2074,9 @@ static int l2cap_segment_sdu(struct l2cap_chan *chan, struct sk_buff *skb; u16 sdu_len; size_t pdu_len; - int err = 0; u8 sar; - BT_DBG("chan %p, msg %p, len %d", chan, msg, (int)len); + BT_DBG("chan %p, msg %p, len %zu", chan, msg, len); /* It is critical that ERTM PDUs fit in a single HCI fragment, * so fragmented skbs are not used. 
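
The l2cap_send_ack() hunk just above acknowledges once roughly three quarters of the window is outstanding, and the comment notes that the threshold is computed without a multiply or divide: t = w, then t += t << 1, then t >>= 2 yields (3 * w) / 4 with integer truncation. A standalone check of that identity, assuming nothing beyond standard C:

```c
/* Verify the shift-based three-quarters threshold used above:
 * t = w; t += t << 1; t >>= 2;  ==>  (3 * w) / 4
 */
#include <assert.h>
#include <stdio.h>

static unsigned int ack_threshold(unsigned int win)
{
	unsigned int t = win;

	t += t << 1;	/* t = 3 * win */
	t >>= 2;	/* t = (3 * win) / 4, rounded down */
	return t;
}

int main(void)
{
	for (unsigned int win = 0; win <= 0x3fff; win++)
		assert(ack_threshold(win) == (3 * win) / 4);

	printf("threshold for a 63-frame window: %u\n", ack_threshold(63));
	return 0;
}
```
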
The HCI layer's handling @@ -2020,7 +2089,10 @@ static int l2cap_segment_sdu(struct l2cap_chan *chan, pdu_len = min_t(size_t, pdu_len, L2CAP_BREDR_MAX_PAYLOAD); /* Adjust for largest possible L2CAP overhead. */ - pdu_len -= L2CAP_EXT_HDR_SIZE + L2CAP_FCS_SIZE; + if (chan->fcs) + pdu_len -= L2CAP_FCS_SIZE; + + pdu_len -= __ertm_hdr_size(chan); /* Remote device may have requested smaller PDUs */ pdu_len = min_t(size_t, pdu_len, chan->remote_mps); @@ -2060,7 +2132,7 @@ static int l2cap_segment_sdu(struct l2cap_chan *chan, } } - return err; + return 0; } int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len, @@ -2122,17 +2194,12 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len, if (err) break; - if (chan->mode == L2CAP_MODE_ERTM && chan->tx_send_head == NULL) - chan->tx_send_head = seg_queue.next; - skb_queue_splice_tail_init(&seg_queue, &chan->tx_q); - if (chan->mode == L2CAP_MODE_ERTM) - err = l2cap_ertm_send(chan); + l2cap_tx(chan, NULL, &seg_queue, L2CAP_EV_DATA_REQUEST); else - l2cap_streaming_send(chan); + l2cap_streaming_send(chan, &seg_queue); - if (err >= 0) - err = len; + err = len; /* If the skbs were not queued for sending, they'll still be in * seg_queue and need to be purged. @@ -2148,6 +2215,296 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len, return err; } +static void l2cap_send_srej(struct l2cap_chan *chan, u16 txseq) +{ + struct l2cap_ctrl control; + u16 seq; + + BT_DBG("chan %p, txseq %u", chan, txseq); + + memset(&control, 0, sizeof(control)); + control.sframe = 1; + control.super = L2CAP_SUPER_SREJ; + + for (seq = chan->expected_tx_seq; seq != txseq; + seq = __next_seq(chan, seq)) { + if (!l2cap_ertm_seq_in_queue(&chan->srej_q, seq)) { + control.reqseq = seq; + l2cap_send_sframe(chan, &control); + l2cap_seq_list_append(&chan->srej_list, seq); + } + } + + chan->expected_tx_seq = __next_seq(chan, txseq); +} + +static void l2cap_send_srej_tail(struct l2cap_chan *chan) +{ + struct l2cap_ctrl control; + + BT_DBG("chan %p", chan); + + if (chan->srej_list.tail == L2CAP_SEQ_LIST_CLEAR) + return; + + memset(&control, 0, sizeof(control)); + control.sframe = 1; + control.super = L2CAP_SUPER_SREJ; + control.reqseq = chan->srej_list.tail; + l2cap_send_sframe(chan, &control); +} + +static void l2cap_send_srej_list(struct l2cap_chan *chan, u16 txseq) +{ + struct l2cap_ctrl control; + u16 initial_head; + u16 seq; + + BT_DBG("chan %p, txseq %u", chan, txseq); + + memset(&control, 0, sizeof(control)); + control.sframe = 1; + control.super = L2CAP_SUPER_SREJ; + + /* Capture initial list head to allow only one pass through the list. 
*/ + initial_head = chan->srej_list.head; + + do { + seq = l2cap_seq_list_pop(&chan->srej_list); + if (seq == txseq || seq == L2CAP_SEQ_LIST_CLEAR) + break; + + control.reqseq = seq; + l2cap_send_sframe(chan, &control); + l2cap_seq_list_append(&chan->srej_list, seq); + } while (chan->srej_list.head != initial_head); +} + +static void l2cap_process_reqseq(struct l2cap_chan *chan, u16 reqseq) +{ + struct sk_buff *acked_skb; + u16 ackseq; + + BT_DBG("chan %p, reqseq %u", chan, reqseq); + + if (chan->unacked_frames == 0 || reqseq == chan->expected_ack_seq) + return; + + BT_DBG("expected_ack_seq %u, unacked_frames %u", + chan->expected_ack_seq, chan->unacked_frames); + + for (ackseq = chan->expected_ack_seq; ackseq != reqseq; + ackseq = __next_seq(chan, ackseq)) { + + acked_skb = l2cap_ertm_seq_in_queue(&chan->tx_q, ackseq); + if (acked_skb) { + skb_unlink(acked_skb, &chan->tx_q); + kfree_skb(acked_skb); + chan->unacked_frames--; + } + } + + chan->expected_ack_seq = reqseq; + + if (chan->unacked_frames == 0) + __clear_retrans_timer(chan); + + BT_DBG("unacked_frames %u", chan->unacked_frames); +} + +static void l2cap_abort_rx_srej_sent(struct l2cap_chan *chan) +{ + BT_DBG("chan %p", chan); + + chan->expected_tx_seq = chan->buffer_seq; + l2cap_seq_list_clear(&chan->srej_list); + skb_queue_purge(&chan->srej_q); + chan->rx_state = L2CAP_RX_STATE_RECV; +} + +static void l2cap_tx_state_xmit(struct l2cap_chan *chan, + struct l2cap_ctrl *control, + struct sk_buff_head *skbs, u8 event) +{ + BT_DBG("chan %p, control %p, skbs %p, event %d", chan, control, skbs, + event); + + switch (event) { + case L2CAP_EV_DATA_REQUEST: + if (chan->tx_send_head == NULL) + chan->tx_send_head = skb_peek(skbs); + + skb_queue_splice_tail_init(skbs, &chan->tx_q); + l2cap_ertm_send(chan); + break; + case L2CAP_EV_LOCAL_BUSY_DETECTED: + BT_DBG("Enter LOCAL_BUSY"); + set_bit(CONN_LOCAL_BUSY, &chan->conn_state); + + if (chan->rx_state == L2CAP_RX_STATE_SREJ_SENT) { + /* The SREJ_SENT state must be aborted if we are to + * enter the LOCAL_BUSY state. 
+ */ + l2cap_abort_rx_srej_sent(chan); + } + + l2cap_send_ack(chan); + + break; + case L2CAP_EV_LOCAL_BUSY_CLEAR: + BT_DBG("Exit LOCAL_BUSY"); + clear_bit(CONN_LOCAL_BUSY, &chan->conn_state); + + if (test_bit(CONN_RNR_SENT, &chan->conn_state)) { + struct l2cap_ctrl local_control; + + memset(&local_control, 0, sizeof(local_control)); + local_control.sframe = 1; + local_control.super = L2CAP_SUPER_RR; + local_control.poll = 1; + local_control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &local_control); + + chan->retry_count = 1; + __set_monitor_timer(chan); + chan->tx_state = L2CAP_TX_STATE_WAIT_F; + } + break; + case L2CAP_EV_RECV_REQSEQ_AND_FBIT: + l2cap_process_reqseq(chan, control->reqseq); + break; + case L2CAP_EV_EXPLICIT_POLL: + l2cap_send_rr_or_rnr(chan, 1); + chan->retry_count = 1; + __set_monitor_timer(chan); + __clear_ack_timer(chan); + chan->tx_state = L2CAP_TX_STATE_WAIT_F; + break; + case L2CAP_EV_RETRANS_TO: + l2cap_send_rr_or_rnr(chan, 1); + chan->retry_count = 1; + __set_monitor_timer(chan); + chan->tx_state = L2CAP_TX_STATE_WAIT_F; + break; + case L2CAP_EV_RECV_FBIT: + /* Nothing to process */ + break; + default: + break; + } +} + +static void l2cap_tx_state_wait_f(struct l2cap_chan *chan, + struct l2cap_ctrl *control, + struct sk_buff_head *skbs, u8 event) +{ + BT_DBG("chan %p, control %p, skbs %p, event %d", chan, control, skbs, + event); + + switch (event) { + case L2CAP_EV_DATA_REQUEST: + if (chan->tx_send_head == NULL) + chan->tx_send_head = skb_peek(skbs); + /* Queue data, but don't send. */ + skb_queue_splice_tail_init(skbs, &chan->tx_q); + break; + case L2CAP_EV_LOCAL_BUSY_DETECTED: + BT_DBG("Enter LOCAL_BUSY"); + set_bit(CONN_LOCAL_BUSY, &chan->conn_state); + + if (chan->rx_state == L2CAP_RX_STATE_SREJ_SENT) { + /* The SREJ_SENT state must be aborted if we are to + * enter the LOCAL_BUSY state. 
+ */ + l2cap_abort_rx_srej_sent(chan); + } + + l2cap_send_ack(chan); + + break; + case L2CAP_EV_LOCAL_BUSY_CLEAR: + BT_DBG("Exit LOCAL_BUSY"); + clear_bit(CONN_LOCAL_BUSY, &chan->conn_state); + + if (test_bit(CONN_RNR_SENT, &chan->conn_state)) { + struct l2cap_ctrl local_control; + memset(&local_control, 0, sizeof(local_control)); + local_control.sframe = 1; + local_control.super = L2CAP_SUPER_RR; + local_control.poll = 1; + local_control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &local_control); + + chan->retry_count = 1; + __set_monitor_timer(chan); + chan->tx_state = L2CAP_TX_STATE_WAIT_F; + } + break; + case L2CAP_EV_RECV_REQSEQ_AND_FBIT: + l2cap_process_reqseq(chan, control->reqseq); + + /* Fall through */ + + case L2CAP_EV_RECV_FBIT: + if (control && control->final) { + __clear_monitor_timer(chan); + if (chan->unacked_frames > 0) + __set_retrans_timer(chan); + chan->retry_count = 0; + chan->tx_state = L2CAP_TX_STATE_XMIT; + BT_DBG("recv fbit tx_state 0x2.2%x", chan->tx_state); + } + break; + case L2CAP_EV_EXPLICIT_POLL: + /* Ignore */ + break; + case L2CAP_EV_MONITOR_TO: + if (chan->max_tx == 0 || chan->retry_count < chan->max_tx) { + l2cap_send_rr_or_rnr(chan, 1); + __set_monitor_timer(chan); + chan->retry_count++; + } else { + l2cap_send_disconn_req(chan->conn, chan, ECONNABORTED); + } + break; + default: + break; + } +} + +static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control, + struct sk_buff_head *skbs, u8 event) +{ + BT_DBG("chan %p, control %p, skbs %p, event %d, state %d", + chan, control, skbs, event, chan->tx_state); + + switch (chan->tx_state) { + case L2CAP_TX_STATE_XMIT: + l2cap_tx_state_xmit(chan, control, skbs, event); + break; + case L2CAP_TX_STATE_WAIT_F: + l2cap_tx_state_wait_f(chan, control, skbs, event); + break; + default: + /* Ignore event */ + break; + } +} + +static void l2cap_pass_to_tx(struct l2cap_chan *chan, + struct l2cap_ctrl *control) +{ + BT_DBG("chan %p, control %p", chan, control); + l2cap_tx(chan, control, NULL, L2CAP_EV_RECV_REQSEQ_AND_FBIT); +} + +static void l2cap_pass_to_tx_fbit(struct l2cap_chan *chan, + struct l2cap_ctrl *control) +{ + BT_DBG("chan %p, control %p", chan, control); + l2cap_tx(chan, control, NULL, L2CAP_EV_RECV_FBIT); +} + /* Copy frame to all raw sockets on that connection */ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb) { @@ -2170,7 +2527,7 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb) if (!nskb) continue; - if (chan->ops->recv(chan->data, nskb)) + if (chan->ops->recv(chan, nskb)) kfree_skb(nskb); } @@ -2178,16 +2535,16 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb) } /* ---- L2CAP signalling commands ---- */ -static struct sk_buff *l2cap_build_cmd(struct l2cap_conn *conn, - u8 code, u8 ident, u16 dlen, void *data) +static struct sk_buff *l2cap_build_cmd(struct l2cap_conn *conn, u8 code, + u8 ident, u16 dlen, void *data) { struct sk_buff *skb, **frag; struct l2cap_cmd_hdr *cmd; struct l2cap_hdr *lh; int len, count; - BT_DBG("conn %p, code 0x%2.2x, ident 0x%2.2x, len %d", - conn, code, ident, dlen); + BT_DBG("conn %p, code 0x%2.2x, ident 0x%2.2x, len %u", + conn, code, ident, dlen); len = L2CAP_HDR_SIZE + L2CAP_CMD_HDR_SIZE + dlen; count = min_t(unsigned int, conn->mtu, len); @@ -2200,9 +2557,9 @@ static struct sk_buff *l2cap_build_cmd(struct l2cap_conn *conn, lh->len = cpu_to_le16(L2CAP_CMD_HDR_SIZE + dlen); if (conn->hcon->type == LE_LINK) - lh->cid = cpu_to_le16(L2CAP_CID_LE_SIGNALING); + lh->cid = 
__constant_cpu_to_le16(L2CAP_CID_LE_SIGNALING); else - lh->cid = cpu_to_le16(L2CAP_CID_SIGNALING); + lh->cid = __constant_cpu_to_le16(L2CAP_CID_SIGNALING); cmd = (struct l2cap_cmd_hdr *) skb_put(skb, L2CAP_CMD_HDR_SIZE); cmd->code = code; @@ -2270,7 +2627,7 @@ static inline int l2cap_get_conf_opt(void **ptr, int *type, int *olen, unsigned break; } - BT_DBG("type 0x%2.2x len %d val 0x%lx", *type, opt->len, *val); + BT_DBG("type 0x%2.2x len %u val 0x%lx", *type, opt->len, *val); return len; } @@ -2278,7 +2635,7 @@ static void l2cap_add_conf_opt(void **ptr, u8 type, u8 len, unsigned long val) { struct l2cap_conf_opt *opt = *ptr; - BT_DBG("type 0x%2.2x len %d val 0x%lx", type, len, val); + BT_DBG("type 0x%2.2x len %u val 0x%lx", type, len, val); opt->type = type; opt->len = len; @@ -2314,8 +2671,8 @@ static void l2cap_add_opt_efs(void **ptr, struct l2cap_chan *chan) efs.stype = chan->local_stype; efs.msdu = cpu_to_le16(chan->local_msdu); efs.sdu_itime = cpu_to_le32(chan->local_sdu_itime); - efs.acc_lat = cpu_to_le32(L2CAP_DEFAULT_ACC_LAT); - efs.flush_to = cpu_to_le32(L2CAP_DEFAULT_FLUSH_TO); + efs.acc_lat = __constant_cpu_to_le32(L2CAP_DEFAULT_ACC_LAT); + efs.flush_to = __constant_cpu_to_le32(L2CAP_DEFAULT_FLUSH_TO); break; case L2CAP_MODE_STREAMING: @@ -2338,20 +2695,24 @@ static void l2cap_add_opt_efs(void **ptr, struct l2cap_chan *chan) static void l2cap_ack_timeout(struct work_struct *work) { struct l2cap_chan *chan = container_of(work, struct l2cap_chan, - ack_timer.work); + ack_timer.work); + u16 frames_to_ack; BT_DBG("chan %p", chan); l2cap_chan_lock(chan); - __l2cap_send_ack(chan); + frames_to_ack = __seq_offset(chan, chan->buffer_seq, + chan->last_acked_seq); - l2cap_chan_unlock(chan); + if (frames_to_ack) + l2cap_send_rr_or_rnr(chan, 0); + l2cap_chan_unlock(chan); l2cap_chan_put(chan); } -static inline int l2cap_ertm_init(struct l2cap_chan *chan) +int l2cap_ertm_init(struct l2cap_chan *chan) { int err; @@ -2360,7 +2721,6 @@ static inline int l2cap_ertm_init(struct l2cap_chan *chan) chan->expected_ack_seq = 0; chan->unacked_frames = 0; chan->buffer_seq = 0; - chan->num_acked = 0; chan->frames_sent = 0; chan->last_acked_seq = 0; chan->sdu = NULL; @@ -2381,12 +2741,15 @@ static inline int l2cap_ertm_init(struct l2cap_chan *chan) skb_queue_head_init(&chan->srej_q); - INIT_LIST_HEAD(&chan->srej_l); err = l2cap_seq_list_init(&chan->srej_list, chan->tx_win); if (err < 0) return err; - return l2cap_seq_list_init(&chan->retrans_list, chan->remote_tx_win); + err = l2cap_seq_list_init(&chan->retrans_list, chan->remote_tx_win); + if (err < 0) + l2cap_seq_list_free(&chan->srej_list); + + return err; } static inline __u8 l2cap_select_mode(__u8 mode, __u16 remote_feat_mask) @@ -2424,6 +2787,7 @@ static inline void l2cap_txwin_setup(struct l2cap_chan *chan) L2CAP_DEFAULT_TX_WINDOW); chan->tx_win_max = L2CAP_DEFAULT_TX_WINDOW; } + chan->ack_win = chan->tx_win; } static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data) @@ -2512,6 +2876,7 @@ done: break; case L2CAP_MODE_STREAMING: + l2cap_txwin_setup(chan); rfc.mode = L2CAP_MODE_STREAMING; rfc.txwin_size = 0; rfc.max_transmit = 0; @@ -2542,7 +2907,7 @@ done: } req->dcid = cpu_to_le16(chan->dcid); - req->flags = cpu_to_le16(0); + req->flags = __constant_cpu_to_le16(0); return ptr - data; } @@ -2762,7 +3127,7 @@ done: } rsp->scid = cpu_to_le16(chan->dcid); rsp->result = cpu_to_le16(result); - rsp->flags = cpu_to_le16(0x0000); + rsp->flags = __constant_cpu_to_le16(0); return ptr - data; } @@ -2812,10 +3177,9 @@ static int 
l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, voi break; case L2CAP_CONF_EWS: - chan->tx_win = min_t(u16, val, - L2CAP_DEFAULT_EXT_WINDOW); + chan->ack_win = min_t(u16, val, chan->ack_win); l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2, - chan->tx_win); + chan->tx_win); break; case L2CAP_CONF_EFS: @@ -2844,6 +3208,9 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, voi chan->retrans_timeout = le16_to_cpu(rfc.retrans_timeout); chan->monitor_timeout = le16_to_cpu(rfc.monitor_timeout); chan->mps = le16_to_cpu(rfc.max_pdu_size); + if (!test_bit(FLAG_EXT_CTRL, &chan->flags)) + chan->ack_win = min_t(u16, chan->ack_win, + rfc.txwin_size); if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) { chan->local_msdu = le16_to_cpu(efs.msdu); @@ -2861,7 +3228,7 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, voi } req->dcid = cpu_to_le16(chan->dcid); - req->flags = cpu_to_le16(0x0000); + req->flags = __constant_cpu_to_le16(0); return ptr - data; } @@ -2888,8 +3255,8 @@ void __l2cap_connect_rsp_defer(struct l2cap_chan *chan) rsp.scid = cpu_to_le16(chan->dcid); rsp.dcid = cpu_to_le16(chan->scid); - rsp.result = cpu_to_le16(L2CAP_CR_SUCCESS); - rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO); + rsp.result = __constant_cpu_to_le16(L2CAP_CR_SUCCESS); + rsp.status = __constant_cpu_to_le16(L2CAP_CS_NO_INFO); l2cap_send_cmd(conn, chan->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); @@ -2905,7 +3272,17 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len) { int type, olen; unsigned long val; - struct l2cap_conf_rfc rfc; + /* Use sane default values in case a misbehaving remote device + * did not send an RFC or extended window size option. + */ + u16 txwin_ext = chan->ack_win; + struct l2cap_conf_rfc rfc = { + .mode = chan->mode, + .retrans_timeout = __constant_cpu_to_le16(L2CAP_DEFAULT_RETRANS_TO), + .monitor_timeout = __constant_cpu_to_le16(L2CAP_DEFAULT_MONITOR_TO), + .max_pdu_size = cpu_to_le16(chan->imtu), + .txwin_size = min_t(u16, chan->ack_win, L2CAP_DEFAULT_TX_WINDOW), + }; BT_DBG("chan %p, rsp %p, len %d", chan, rsp, len); @@ -2915,32 +3292,27 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len) while (len >= L2CAP_CONF_OPT_SIZE) { len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val); - if (type != L2CAP_CONF_RFC) - continue; - - if (olen != sizeof(rfc)) + switch (type) { + case L2CAP_CONF_RFC: + if (olen == sizeof(rfc)) + memcpy(&rfc, (void *)val, olen); break; - - memcpy(&rfc, (void *)val, olen); - goto done; + case L2CAP_CONF_EWS: + txwin_ext = val; + break; + } } - /* Use sane default values in case a misbehaving remote device - * did not send an RFC option. 
- */ - rfc.mode = chan->mode; - rfc.retrans_timeout = cpu_to_le16(L2CAP_DEFAULT_RETRANS_TO); - rfc.monitor_timeout = cpu_to_le16(L2CAP_DEFAULT_MONITOR_TO); - rfc.max_pdu_size = cpu_to_le16(chan->imtu); - - BT_ERR("Expected RFC option was not found, using defaults"); - -done: switch (rfc.mode) { case L2CAP_MODE_ERTM: chan->retrans_timeout = le16_to_cpu(rfc.retrans_timeout); chan->monitor_timeout = le16_to_cpu(rfc.monitor_timeout); - chan->mps = le16_to_cpu(rfc.max_pdu_size); + chan->mps = le16_to_cpu(rfc.max_pdu_size); + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) + chan->ack_win = min_t(u16, chan->ack_win, txwin_ext); + else + chan->ack_win = min_t(u16, chan->ack_win, + rfc.txwin_size); break; case L2CAP_MODE_STREAMING: chan->mps = le16_to_cpu(rfc.max_pdu_size); @@ -2993,7 +3365,7 @@ static inline int l2cap_connect_req(struct l2cap_conn *conn, struct l2cap_cmd_hd lock_sock(parent); /* Check if the ACL is secure enough (if not SDP) */ - if (psm != cpu_to_le16(0x0001) && + if (psm != __constant_cpu_to_le16(L2CAP_PSM_SDP) && !hci_conn_check_link_mode(conn->hcon)) { conn->disc_reason = HCI_ERROR_AUTH_FAILURE; result = L2CAP_CR_SEC_BLOCK; @@ -3002,25 +3374,16 @@ static inline int l2cap_connect_req(struct l2cap_conn *conn, struct l2cap_cmd_hd result = L2CAP_CR_NO_MEM; - /* Check for backlog size */ - if (sk_acceptq_is_full(parent)) { - BT_DBG("backlog full %d", parent->sk_ack_backlog); + /* Check if we already have channel with that dcid */ + if (__l2cap_get_chan_by_dcid(conn, scid)) goto response; - } - chan = pchan->ops->new_connection(pchan->data); + chan = pchan->ops->new_connection(pchan); if (!chan) goto response; sk = chan->sk; - /* Check if we already have channel with that dcid */ - if (__l2cap_get_chan_by_dcid(conn, scid)) { - sock_set_flag(sk, SOCK_ZAPPED); - chan->ops->close(chan->data); - goto response; - } - hci_conn_hold(conn->hcon); bacpy(&bt_sk(sk)->src, conn->src); @@ -3074,7 +3437,7 @@ sendresp: if (result == L2CAP_CR_PEND && status == L2CAP_CS_NO_INFO) { struct l2cap_info_req info; - info.type = cpu_to_le16(L2CAP_IT_FEAT_MASK); + info.type = __constant_cpu_to_le16(L2CAP_IT_FEAT_MASK); conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_SENT; conn->info_ident = l2cap_get_ident(conn); @@ -3196,7 +3559,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr if (chan->state != BT_CONFIG && chan->state != BT_CONNECT2) { struct l2cap_cmd_rej_cid rej; - rej.reason = cpu_to_le16(L2CAP_REJ_INVALID_CID); + rej.reason = __constant_cpu_to_le16(L2CAP_REJ_INVALID_CID); rej.scid = cpu_to_le16(chan->scid); rej.dcid = cpu_to_le16(chan->dcid); @@ -3218,11 +3581,11 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr memcpy(chan->conf_req + chan->conf_len, req->data, len); chan->conf_len += len; - if (flags & 0x0001) { + if (flags & L2CAP_CONF_FLAG_CONTINUATION) { /* Incomplete config. Send empty response. 
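/* Editor's note (illustrative sketch, not part of this patch): the rewritten
 * l2cap_conf_rfc_get() above pre-loads sane defaults and only overwrites them
 * with whatever RFC / extended-window options the peer actually sent, instead
 * of erroring out when the RFC option is missing.  Below is a hypothetical,
 * simplified userspace sketch of that "defaults first, then overwrite from
 * options" pattern; the option codes, struct layout and default numbers are
 * invented for the example, and endianness handling is omitted.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define OPT_RFC 0x04
#define OPT_EWS 0x07

struct opt { uint8_t type, len; uint8_t val[4]; };
struct rfc_cfg { uint16_t retrans_to, monitor_to, txwin; };

static void parse_conf_rsp(const struct opt *opts, int n, struct rfc_cfg *cfg)
{
        /* Defaults survive if a misbehaving peer omitted the options. */
        cfg->retrans_to = 2000;
        cfg->monitor_to = 12000;
        cfg->txwin = 63;

        for (int i = 0; i < n; i++) {
                switch (opts[i].type) {
                case OPT_RFC:
                        if (opts[i].len >= 4) {
                                memcpy(&cfg->retrans_to, opts[i].val, 2);
                                memcpy(&cfg->monitor_to, opts[i].val + 2, 2);
                        }
                        break;
                case OPT_EWS:
                        if (opts[i].len >= 2)
                                memcpy(&cfg->txwin, opts[i].val, 2);
                        break;
                }
        }
}

int main(void)
{
        struct rfc_cfg cfg;

        parse_conf_rsp(NULL, 0, &cfg);  /* peer sent no options at all */
        printf("retrans=%d monitor=%d txwin=%d\n",
               cfg.retrans_to, cfg.monitor_to, cfg.txwin);
        return 0;
}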
*/ l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, l2cap_build_conf_rsp(chan, rsp, - L2CAP_CONF_SUCCESS, 0x0001), rsp); + L2CAP_CONF_SUCCESS, flags), rsp); goto unlock; } @@ -3245,8 +3608,6 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr if (test_bit(CONF_INPUT_DONE, &chan->conf_state)) { set_default_fcs(chan); - l2cap_state_change(chan, BT_CONNECTED); - if (chan->mode == L2CAP_MODE_ERTM || chan->mode == L2CAP_MODE_STREAMING) err = l2cap_ertm_init(chan); @@ -3278,7 +3639,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, struct l2cap_cmd_hdr l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, l2cap_build_conf_rsp(chan, rsp, - L2CAP_CONF_SUCCESS, 0x0000), rsp); + L2CAP_CONF_SUCCESS, flags), rsp); } unlock: @@ -3369,7 +3730,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, struct l2cap_cmd_hdr goto done; } - if (flags & 0x01) + if (flags & L2CAP_CONF_FLAG_CONTINUATION) goto done; set_bit(CONF_INPUT_DONE, &chan->conf_state); @@ -3377,7 +3738,6 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, struct l2cap_cmd_hdr if (test_bit(CONF_OUTPUT_DONE, &chan->conf_state)) { set_default_fcs(chan); - l2cap_state_change(chan, BT_CONNECTED); if (chan->mode == L2CAP_MODE_ERTM || chan->mode == L2CAP_MODE_STREAMING) err = l2cap_ertm_init(chan); @@ -3431,7 +3791,7 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn, struct l2cap_cmd l2cap_chan_unlock(chan); - chan->ops->close(chan->data); + chan->ops->close(chan); l2cap_chan_put(chan); mutex_unlock(&conn->chan_lock); @@ -3465,7 +3825,7 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, struct l2cap_cmd l2cap_chan_unlock(chan); - chan->ops->close(chan->data); + chan->ops->close(chan); l2cap_chan_put(chan); mutex_unlock(&conn->chan_lock); @@ -3486,8 +3846,8 @@ static inline int l2cap_information_req(struct l2cap_conn *conn, struct l2cap_cm u8 buf[8]; u32 feat_mask = l2cap_feat_mask; struct l2cap_info_rsp *rsp = (struct l2cap_info_rsp *) buf; - rsp->type = cpu_to_le16(L2CAP_IT_FEAT_MASK); - rsp->result = cpu_to_le16(L2CAP_IR_SUCCESS); + rsp->type = __constant_cpu_to_le16(L2CAP_IT_FEAT_MASK); + rsp->result = __constant_cpu_to_le16(L2CAP_IR_SUCCESS); if (!disable_ertm) feat_mask |= L2CAP_FEAT_ERTM | L2CAP_FEAT_STREAMING | L2CAP_FEAT_FCS; @@ -3507,15 +3867,15 @@ static inline int l2cap_information_req(struct l2cap_conn *conn, struct l2cap_cm else l2cap_fixed_chan[0] &= ~L2CAP_FC_A2MP; - rsp->type = cpu_to_le16(L2CAP_IT_FIXED_CHAN); - rsp->result = cpu_to_le16(L2CAP_IR_SUCCESS); + rsp->type = __constant_cpu_to_le16(L2CAP_IT_FIXED_CHAN); + rsp->result = __constant_cpu_to_le16(L2CAP_IR_SUCCESS); memcpy(rsp->data, l2cap_fixed_chan, sizeof(l2cap_fixed_chan)); l2cap_send_cmd(conn, cmd->ident, L2CAP_INFO_RSP, sizeof(buf), buf); } else { struct l2cap_info_rsp rsp; rsp.type = cpu_to_le16(type); - rsp.result = cpu_to_le16(L2CAP_IR_NOTSUPP); + rsp.result = __constant_cpu_to_le16(L2CAP_IR_NOTSUPP); l2cap_send_cmd(conn, cmd->ident, L2CAP_INFO_RSP, sizeof(rsp), &rsp); } @@ -3555,7 +3915,7 @@ static inline int l2cap_information_rsp(struct l2cap_conn *conn, struct l2cap_cm if (conn->feat_mask & L2CAP_FEAT_FIXED_CHAN) { struct l2cap_info_req req; - req.type = cpu_to_le16(L2CAP_IT_FIXED_CHAN); + req.type = __constant_cpu_to_le16(L2CAP_IT_FIXED_CHAN); conn->info_ident = l2cap_get_ident(conn); @@ -3598,7 +3958,7 @@ static inline int l2cap_create_channel_req(struct l2cap_conn *conn, psm = le16_to_cpu(req->psm); scid = le16_to_cpu(req->scid); - BT_DBG("psm %d, scid %d, amp_id %d", 
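/* Editor's note (illustrative sketch, not part of this patch): the config
 * hunks above stop hard-coding 0x0001/0x0000 in responses and instead echo
 * the continuation bit (L2CAP_CONF_FLAG_CONTINUATION) taken from the incoming
 * request, so multi-packet configuration exchanges stay in sync.  A minimal,
 * hypothetical sketch of that pattern with invented names:
 */
#include <stdint.h>
#include <stdio.h>

#define CONF_FLAG_CONTINUATION 0x0001
#define CONF_SUCCESS           0x0000

struct conf_rsp { uint16_t result; uint16_t flags; };

/* Carry the requester's continuation state back in the response rather
 * than a fixed constant.
 */
static struct conf_rsp build_conf_rsp(uint16_t result, uint16_t req_flags)
{
        struct conf_rsp rsp = {
                .result = result,
                .flags  = req_flags & CONF_FLAG_CONTINUATION,
        };
        return rsp;
}

int main(void)
{
        struct conf_rsp r = build_conf_rsp(CONF_SUCCESS, CONF_FLAG_CONTINUATION);

        printf("result=%d flags=0x%04x\n", r.result, (unsigned int)r.flags);
        return 0;
}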
psm, scid, req->amp_id); + BT_DBG("psm 0x%2.2x, scid 0x%4.4x, amp_id %d", psm, scid, req->amp_id); /* Placeholder: Always reject */ rsp.dcid = 0; @@ -3621,11 +3981,11 @@ static inline int l2cap_create_channel_rsp(struct l2cap_conn *conn, } static void l2cap_send_move_chan_rsp(struct l2cap_conn *conn, u8 ident, - u16 icid, u16 result) + u16 icid, u16 result) { struct l2cap_move_chan_rsp rsp; - BT_DBG("icid %d, result %d", icid, result); + BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result); rsp.icid = cpu_to_le16(icid); rsp.result = cpu_to_le16(result); @@ -3634,12 +3994,13 @@ static void l2cap_send_move_chan_rsp(struct l2cap_conn *conn, u8 ident, } static void l2cap_send_move_chan_cfm(struct l2cap_conn *conn, - struct l2cap_chan *chan, u16 icid, u16 result) + struct l2cap_chan *chan, + u16 icid, u16 result) { struct l2cap_move_chan_cfm cfm; u8 ident; - BT_DBG("icid %d, result %d", icid, result); + BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result); ident = l2cap_get_ident(conn); if (chan) @@ -3652,18 +4013,19 @@ static void l2cap_send_move_chan_cfm(struct l2cap_conn *conn, } static void l2cap_send_move_chan_cfm_rsp(struct l2cap_conn *conn, u8 ident, - u16 icid) + u16 icid) { struct l2cap_move_chan_cfm_rsp rsp; - BT_DBG("icid %d", icid); + BT_DBG("icid 0x%4.4x", icid); rsp.icid = cpu_to_le16(icid); l2cap_send_cmd(conn, ident, L2CAP_MOVE_CHAN_CFM_RSP, sizeof(rsp), &rsp); } static inline int l2cap_move_channel_req(struct l2cap_conn *conn, - struct l2cap_cmd_hdr *cmd, u16 cmd_len, void *data) + struct l2cap_cmd_hdr *cmd, + u16 cmd_len, void *data) { struct l2cap_move_chan_req *req = data; u16 icid = 0; @@ -3674,7 +4036,7 @@ static inline int l2cap_move_channel_req(struct l2cap_conn *conn, icid = le16_to_cpu(req->icid); - BT_DBG("icid %d, dest_amp_id %d", icid, req->dest_amp_id); + BT_DBG("icid 0x%4.4x, dest_amp_id %d", icid, req->dest_amp_id); if (!enable_hs) return -EINVAL; @@ -3686,7 +4048,8 @@ static inline int l2cap_move_channel_req(struct l2cap_conn *conn, } static inline int l2cap_move_channel_rsp(struct l2cap_conn *conn, - struct l2cap_cmd_hdr *cmd, u16 cmd_len, void *data) + struct l2cap_cmd_hdr *cmd, + u16 cmd_len, void *data) { struct l2cap_move_chan_rsp *rsp = data; u16 icid, result; @@ -3697,7 +4060,7 @@ static inline int l2cap_move_channel_rsp(struct l2cap_conn *conn, icid = le16_to_cpu(rsp->icid); result = le16_to_cpu(rsp->result); - BT_DBG("icid %d, result %d", icid, result); + BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result); /* Placeholder: Always unconfirmed */ l2cap_send_move_chan_cfm(conn, NULL, icid, L2CAP_MC_UNCONFIRMED); @@ -3706,7 +4069,8 @@ static inline int l2cap_move_channel_rsp(struct l2cap_conn *conn, } static inline int l2cap_move_channel_confirm(struct l2cap_conn *conn, - struct l2cap_cmd_hdr *cmd, u16 cmd_len, void *data) + struct l2cap_cmd_hdr *cmd, + u16 cmd_len, void *data) { struct l2cap_move_chan_cfm *cfm = data; u16 icid, result; @@ -3717,7 +4081,7 @@ static inline int l2cap_move_channel_confirm(struct l2cap_conn *conn, icid = le16_to_cpu(cfm->icid); result = le16_to_cpu(cfm->result); - BT_DBG("icid %d, result %d", icid, result); + BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result); l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid); @@ -3725,7 +4089,8 @@ static inline int l2cap_move_channel_confirm(struct l2cap_conn *conn, } static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn, - struct l2cap_cmd_hdr *cmd, u16 cmd_len, void *data) + struct l2cap_cmd_hdr *cmd, + u16 cmd_len, void *data) { struct l2cap_move_chan_cfm_rsp *rsp = 
data; u16 icid; @@ -3735,7 +4100,7 @@ static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn, icid = le16_to_cpu(rsp->icid); - BT_DBG("icid %d", icid); + BT_DBG("icid 0x%4.4x", icid); return 0; } @@ -3790,9 +4155,9 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn, err = l2cap_check_conn_param(min, max, latency, to_multiplier); if (err) - rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); + rsp.result = __constant_cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); else - rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_ACCEPTED); + rsp.result = __constant_cpu_to_le16(L2CAP_CONN_PARAM_ACCEPTED); l2cap_send_cmd(conn, cmd->ident, L2CAP_CONN_PARAM_UPDATE_RSP, sizeof(rsp), &rsp); @@ -3940,7 +4305,7 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn, BT_ERR("Wrong link type (%d)", err); /* FIXME: Map err to a valid reason */ - rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD); + rej.reason = __constant_cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD); l2cap_send_cmd(conn, cmd.ident, L2CAP_COMMAND_REJ, sizeof(rej), &rej); } @@ -3972,65 +4337,38 @@ static int l2cap_check_fcs(struct l2cap_chan *chan, struct sk_buff *skb) return 0; } -static inline void l2cap_send_i_or_rr_or_rnr(struct l2cap_chan *chan) +static void l2cap_send_i_or_rr_or_rnr(struct l2cap_chan *chan) { - u32 control = 0; + struct l2cap_ctrl control; - chan->frames_sent = 0; + BT_DBG("chan %p", chan); - control |= __set_reqseq(chan, chan->buffer_seq); + memset(&control, 0, sizeof(control)); + control.sframe = 1; + control.final = 1; + control.reqseq = chan->buffer_seq; + set_bit(CONN_SEND_FBIT, &chan->conn_state); if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { - control |= __set_ctrl_super(chan, L2CAP_SUPER_RNR); - l2cap_send_sframe(chan, control); - set_bit(CONN_RNR_SENT, &chan->conn_state); + control.super = L2CAP_SUPER_RNR; + l2cap_send_sframe(chan, &control); } - if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state)) - l2cap_retransmit_frames(chan); + if (test_and_clear_bit(CONN_REMOTE_BUSY, &chan->conn_state) && + chan->unacked_frames > 0) + __set_retrans_timer(chan); + /* Send pending iframes */ l2cap_ertm_send(chan); if (!test_bit(CONN_LOCAL_BUSY, &chan->conn_state) && - chan->frames_sent == 0) { - control |= __set_ctrl_super(chan, L2CAP_SUPER_RR); - l2cap_send_sframe(chan, control); - } -} - -static int l2cap_add_to_srej_queue(struct l2cap_chan *chan, struct sk_buff *skb, u16 tx_seq, u8 sar) -{ - struct sk_buff *next_skb; - int tx_seq_offset, next_tx_seq_offset; - - bt_cb(skb)->control.txseq = tx_seq; - bt_cb(skb)->control.sar = sar; - - next_skb = skb_peek(&chan->srej_q); - - tx_seq_offset = __seq_offset(chan, tx_seq, chan->buffer_seq); - - while (next_skb) { - if (bt_cb(next_skb)->control.txseq == tx_seq) - return -EINVAL; - - next_tx_seq_offset = __seq_offset(chan, - bt_cb(next_skb)->control.txseq, chan->buffer_seq); - - if (next_tx_seq_offset > tx_seq_offset) { - __skb_queue_before(&chan->srej_q, next_skb, skb); - return 0; - } - - if (skb_queue_is_last(&chan->srej_q, next_skb)) - next_skb = NULL; - else - next_skb = skb_queue_next(&chan->srej_q, next_skb); + test_bit(CONN_SEND_FBIT, &chan->conn_state)) { + /* F-bit wasn't sent in an s-frame or i-frame yet, so + * send it now. 
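/* Editor's note (illustrative sketch, not part of this patch): the rewritten
 * l2cap_send_i_or_rr_or_rnr() above, like the rest of the new receive path,
 * works on a decoded struct l2cap_ctrl (sframe/poll/final/super/reqseq)
 * instead of or-ing bits into a raw control word at every call site.  The
 * sketch below decodes a 16-bit enhanced control field into such a struct;
 * the bit positions follow the classic L2CAP layout but should be read as an
 * illustration, not a drop-in for the kernel helpers.
 */
#include <stdint.h>
#include <stdio.h>

struct ctrl {
        unsigned sframe:1;      /* 1 = supervisory frame, 0 = information frame */
        unsigned poll:1;
        unsigned final:1;
        unsigned super:2;       /* RR / REJ / RNR / SREJ for S-frames */
        unsigned sar:2;         /* segmentation flags for I-frames    */
        uint16_t txseq;
        uint16_t reqseq;
};

static struct ctrl decode_enh_control(uint16_t c)
{
        struct ctrl ctrl = {0};

        ctrl.sframe = c & 0x0001;
        ctrl.final  = (c >> 7) & 0x0001;
        ctrl.reqseq = (c >> 8) & 0x003f;

        if (ctrl.sframe) {
                ctrl.super = (c >> 2) & 0x0003;
                ctrl.poll  = (c >> 4) & 0x0001;
        } else {
                ctrl.txseq = (c >> 1) & 0x003f;
                ctrl.sar   = (c >> 14) & 0x0003;
        }
        return ctrl;
}

int main(void)
{
        /* 0x0181: S-frame, supervisory function RR, F-bit set, ReqSeq 1. */
        struct ctrl c = decode_enh_control(0x0181);

        printf("sframe=%d super=%d final=%d reqseq=%d\n",
               c.sframe, c.super, c.final, c.reqseq);
        return 0;
}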
+ */ + control.super = L2CAP_SUPER_RR; + l2cap_send_sframe(chan, &control); } - - __skb_queue_tail(&chan->srej_q, skb); - - return 0; } static void append_skb_frag(struct sk_buff *skb, @@ -4052,16 +4390,17 @@ static void append_skb_frag(struct sk_buff *skb, skb->truesize += new_frag->truesize; } -static int l2cap_reassemble_sdu(struct l2cap_chan *chan, struct sk_buff *skb, u32 control) +static int l2cap_reassemble_sdu(struct l2cap_chan *chan, struct sk_buff *skb, + struct l2cap_ctrl *control) { int err = -EINVAL; - switch (__get_ctrl_sar(chan, control)) { + switch (control->sar) { case L2CAP_SAR_UNSEGMENTED: if (chan->sdu) break; - err = chan->ops->recv(chan->data, skb); + err = chan->ops->recv(chan, skb); break; case L2CAP_SAR_START: @@ -4111,7 +4450,7 @@ static int l2cap_reassemble_sdu(struct l2cap_chan *chan, struct sk_buff *skb, u3 if (chan->sdu->len != chan->sdu_len) break; - err = chan->ops->recv(chan->data, chan->sdu); + err = chan->ops->recv(chan, chan->sdu); if (!err) { /* Reassembly complete */ @@ -4133,448 +4472,609 @@ static int l2cap_reassemble_sdu(struct l2cap_chan *chan, struct sk_buff *skb, u3 return err; } -static void l2cap_ertm_enter_local_busy(struct l2cap_chan *chan) +void l2cap_chan_busy(struct l2cap_chan *chan, int busy) { - BT_DBG("chan %p, Enter local busy", chan); + u8 event; - set_bit(CONN_LOCAL_BUSY, &chan->conn_state); - l2cap_seq_list_clear(&chan->srej_list); + if (chan->mode != L2CAP_MODE_ERTM) + return; - __set_ack_timer(chan); + event = busy ? L2CAP_EV_LOCAL_BUSY_DETECTED : L2CAP_EV_LOCAL_BUSY_CLEAR; + l2cap_tx(chan, NULL, NULL, event); } -static void l2cap_ertm_exit_local_busy(struct l2cap_chan *chan) +static int l2cap_rx_queued_iframes(struct l2cap_chan *chan) { - u32 control; - - if (!test_bit(CONN_RNR_SENT, &chan->conn_state)) - goto done; + int err = 0; + /* Pass sequential frames to l2cap_reassemble_sdu() + * until a gap is encountered. 
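/* Editor's note (illustrative sketch, not part of this patch):
 * l2cap_reassemble_sdu() above now takes the decoded control structure and
 * stitches unsegmented/START/CONTINUE/END fragments back into a single SDU.
 * The deliberately simplified userspace sketch below keeps only the essence
 * of that state (an accumulation buffer plus the length announced in the
 * START fragment); names and return conventions are assumptions.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum sar { SAR_UNSEGMENTED, SAR_START, SAR_CONTINUE, SAR_END };

struct reasm {
        uint8_t buf[1024];
        size_t len;             /* bytes accumulated so far          */
        size_t expected;        /* SDU length announced in the START */
        int in_progress;
};

/* Returns 1 when a complete SDU sits in r->buf, 0 to keep waiting, and -1 on
 * a protocol error (bad ordering or an oversized SDU).
 */
static int reassemble(struct reasm *r, enum sar sar,
                      const uint8_t *payload, size_t plen, size_t sdu_len)
{
        switch (sar) {
        case SAR_UNSEGMENTED:
                if (r->in_progress || plen > sizeof(r->buf))
                        return -1;
                memcpy(r->buf, payload, plen);
                r->len = plen;
                return 1;
        case SAR_START:
                if (r->in_progress || sdu_len > sizeof(r->buf) || plen > sdu_len)
                        return -1;
                memcpy(r->buf, payload, plen);
                r->len = plen;
                r->expected = sdu_len;
                r->in_progress = 1;
                return 0;
        case SAR_CONTINUE:
        case SAR_END:
                if (!r->in_progress || r->len + plen > r->expected)
                        return -1;
                memcpy(r->buf + r->len, payload, plen);
                r->len += plen;
                if (sar == SAR_CONTINUE)
                        return 0;
                r->in_progress = 0;
                return r->len == r->expected ? 1 : -1;
        }
        return -1;
}

int main(void)
{
        struct reasm r = {0};

        reassemble(&r, SAR_START, (const uint8_t *)"hel", 3, 5);
        if (reassemble(&r, SAR_END, (const uint8_t *)"lo", 2, 0) == 1)
                printf("SDU: %.*s\n", (int)r.len, r.buf);
        return 0;
}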
+ */ - control = __set_reqseq(chan, chan->buffer_seq); - control |= __set_ctrl_poll(chan); - control |= __set_ctrl_super(chan, L2CAP_SUPER_RR); - l2cap_send_sframe(chan, control); - chan->retry_count = 1; + BT_DBG("chan %p", chan); - __clear_retrans_timer(chan); - __set_monitor_timer(chan); + while (!test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { + struct sk_buff *skb; + BT_DBG("Searching for skb with txseq %d (queue len %d)", + chan->buffer_seq, skb_queue_len(&chan->srej_q)); - set_bit(CONN_WAIT_F, &chan->conn_state); + skb = l2cap_ertm_seq_in_queue(&chan->srej_q, chan->buffer_seq); -done: - clear_bit(CONN_LOCAL_BUSY, &chan->conn_state); - clear_bit(CONN_RNR_SENT, &chan->conn_state); + if (!skb) + break; - BT_DBG("chan %p, Exit local busy", chan); -} + skb_unlink(skb, &chan->srej_q); + chan->buffer_seq = __next_seq(chan, chan->buffer_seq); + err = l2cap_reassemble_sdu(chan, skb, &bt_cb(skb)->control); + if (err) + break; + } -void l2cap_chan_busy(struct l2cap_chan *chan, int busy) -{ - if (chan->mode == L2CAP_MODE_ERTM) { - if (busy) - l2cap_ertm_enter_local_busy(chan); - else - l2cap_ertm_exit_local_busy(chan); + if (skb_queue_empty(&chan->srej_q)) { + chan->rx_state = L2CAP_RX_STATE_RECV; + l2cap_send_ack(chan); } + + return err; } -static void l2cap_check_srej_gap(struct l2cap_chan *chan, u16 tx_seq) +static void l2cap_handle_srej(struct l2cap_chan *chan, + struct l2cap_ctrl *control) { struct sk_buff *skb; - u32 control; - while ((skb = skb_peek(&chan->srej_q)) && - !test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { - int err; + BT_DBG("chan %p, control %p", chan, control); - if (bt_cb(skb)->control.txseq != tx_seq) - break; + if (control->reqseq == chan->next_tx_seq) { + BT_DBG("Invalid reqseq %d, disconnecting", control->reqseq); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + return; + } - skb = skb_dequeue(&chan->srej_q); - control = __set_ctrl_sar(chan, bt_cb(skb)->control.sar); - err = l2cap_reassemble_sdu(chan, skb, control); + skb = l2cap_ertm_seq_in_queue(&chan->tx_q, control->reqseq); - if (err < 0) { - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); - break; - } + if (skb == NULL) { + BT_DBG("Seq %d not available for retransmission", + control->reqseq); + return; + } - chan->buffer_seq_srej = __next_seq(chan, chan->buffer_seq_srej); - tx_seq = __next_seq(chan, tx_seq); + if (chan->max_tx != 0 && bt_cb(skb)->control.retries >= chan->max_tx) { + BT_DBG("Retry limit exceeded (%d)", chan->max_tx); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + return; } -} -static void l2cap_resend_srejframe(struct l2cap_chan *chan, u16 tx_seq) -{ - struct srej_list *l, *tmp; - u32 control; + clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - list_for_each_entry_safe(l, tmp, &chan->srej_l, list) { - if (l->tx_seq == tx_seq) { - list_del(&l->list); - kfree(l); - return; + if (control->poll) { + l2cap_pass_to_tx(chan, control); + + set_bit(CONN_SEND_FBIT, &chan->conn_state); + l2cap_retransmit(chan, control); + l2cap_ertm_send(chan); + + if (chan->tx_state == L2CAP_TX_STATE_WAIT_F) { + set_bit(CONN_SREJ_ACT, &chan->conn_state); + chan->srej_save_reqseq = control->reqseq; + } + } else { + l2cap_pass_to_tx_fbit(chan, control); + + if (control->final) { + if (chan->srej_save_reqseq != control->reqseq || + !test_and_clear_bit(CONN_SREJ_ACT, + &chan->conn_state)) + l2cap_retransmit(chan, control); + } else { + l2cap_retransmit(chan, control); + if (chan->tx_state == L2CAP_TX_STATE_WAIT_F) { + set_bit(CONN_SREJ_ACT, &chan->conn_state); + chan->srej_save_reqseq = 
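/* Editor's note (illustrative sketch, not part of this patch):
 * l2cap_rx_queued_iframes() above drains the SREJ holding queue in sequence
 * order, handing each frame to reassembly until it reaches a gap.  A
 * hypothetical array-based sketch of that "deliver until the next missing
 * sequence number" loop:
 */
#include <stdbool.h>
#include <stdio.h>

#define SEQ_SPACE 64u           /* assumed 6-bit sequence numbers */

static bool have[SEQ_SPACE];    /* frames parked out of order     */

/* Deliver consecutive frames starting at buffer_seq and return the first
 * sequence number that is still missing.
 */
static unsigned int drain_in_order(unsigned int buffer_seq)
{
        while (have[buffer_seq]) {
                printf("deliver seq %u\n", buffer_seq);
                have[buffer_seq] = false;
                buffer_seq = (buffer_seq + 1) % SEQ_SPACE;
        }
        return buffer_seq;
}

int main(void)
{
        unsigned int next;

        have[3] = have[4] = have[6] = true;          /* seq 5 never arrived */
        next = drain_in_order(3);
        printf("still waiting for seq %u\n", next);  /* prints 5 */
        return 0;
}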
control->reqseq; + } } - control = __set_ctrl_super(chan, L2CAP_SUPER_SREJ); - control |= __set_reqseq(chan, l->tx_seq); - l2cap_send_sframe(chan, control); - list_del(&l->list); - list_add_tail(&l->list, &chan->srej_l); } } -static int l2cap_send_srejframe(struct l2cap_chan *chan, u16 tx_seq) +static void l2cap_handle_rej(struct l2cap_chan *chan, + struct l2cap_ctrl *control) { - struct srej_list *new; - u32 control; - - while (tx_seq != chan->expected_tx_seq) { - control = __set_ctrl_super(chan, L2CAP_SUPER_SREJ); - control |= __set_reqseq(chan, chan->expected_tx_seq); - l2cap_seq_list_append(&chan->srej_list, chan->expected_tx_seq); - l2cap_send_sframe(chan, control); + struct sk_buff *skb; - new = kzalloc(sizeof(struct srej_list), GFP_ATOMIC); - if (!new) - return -ENOMEM; + BT_DBG("chan %p, control %p", chan, control); - new->tx_seq = chan->expected_tx_seq; + if (control->reqseq == chan->next_tx_seq) { + BT_DBG("Invalid reqseq %d, disconnecting", control->reqseq); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + return; + } - chan->expected_tx_seq = __next_seq(chan, chan->expected_tx_seq); + skb = l2cap_ertm_seq_in_queue(&chan->tx_q, control->reqseq); - list_add_tail(&new->list, &chan->srej_l); + if (chan->max_tx && skb && + bt_cb(skb)->control.retries >= chan->max_tx) { + BT_DBG("Retry limit exceeded (%d)", chan->max_tx); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + return; } - chan->expected_tx_seq = __next_seq(chan, chan->expected_tx_seq); + clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); + + l2cap_pass_to_tx(chan, control); - return 0; + if (control->final) { + if (!test_and_clear_bit(CONN_REJ_ACT, &chan->conn_state)) + l2cap_retransmit_all(chan, control); + } else { + l2cap_retransmit_all(chan, control); + l2cap_ertm_send(chan); + if (chan->tx_state == L2CAP_TX_STATE_WAIT_F) + set_bit(CONN_REJ_ACT, &chan->conn_state); + } } -static inline int l2cap_data_channel_iframe(struct l2cap_chan *chan, u32 rx_control, struct sk_buff *skb) +static u8 l2cap_classify_txseq(struct l2cap_chan *chan, u16 txseq) { - u16 tx_seq = __get_txseq(chan, rx_control); - u16 req_seq = __get_reqseq(chan, rx_control); - u8 sar = __get_ctrl_sar(chan, rx_control); - int tx_seq_offset, expected_tx_seq_offset; - int num_to_ack = (chan->tx_win/6) + 1; - int err = 0; + BT_DBG("chan %p, txseq %d", chan, txseq); - BT_DBG("chan %p len %d tx_seq %d rx_control 0x%8.8x", chan, skb->len, - tx_seq, rx_control); + BT_DBG("last_acked_seq %d, expected_tx_seq %d", chan->last_acked_seq, + chan->expected_tx_seq); - if (__is_ctrl_final(chan, rx_control) && - test_bit(CONN_WAIT_F, &chan->conn_state)) { - __clear_monitor_timer(chan); - if (chan->unacked_frames > 0) - __set_retrans_timer(chan); - clear_bit(CONN_WAIT_F, &chan->conn_state); - } + if (chan->rx_state == L2CAP_RX_STATE_SREJ_SENT) { + if (__seq_offset(chan, txseq, chan->last_acked_seq) >= + chan->tx_win) { + /* See notes below regarding "double poll" and + * invalid packets. 
+ */ + if (chan->tx_win <= ((chan->tx_win_max + 1) >> 1)) { + BT_DBG("Invalid/Ignore - after SREJ"); + return L2CAP_TXSEQ_INVALID_IGNORE; + } else { + BT_DBG("Invalid - in window after SREJ sent"); + return L2CAP_TXSEQ_INVALID; + } + } - chan->expected_ack_seq = req_seq; - l2cap_drop_acked_frames(chan); + if (chan->srej_list.head == txseq) { + BT_DBG("Expected SREJ"); + return L2CAP_TXSEQ_EXPECTED_SREJ; + } - tx_seq_offset = __seq_offset(chan, tx_seq, chan->buffer_seq); + if (l2cap_ertm_seq_in_queue(&chan->srej_q, txseq)) { + BT_DBG("Duplicate SREJ - txseq already stored"); + return L2CAP_TXSEQ_DUPLICATE_SREJ; + } - /* invalid tx_seq */ - if (tx_seq_offset >= chan->tx_win) { - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); - goto drop; + if (l2cap_seq_list_contains(&chan->srej_list, txseq)) { + BT_DBG("Unexpected SREJ - not requested"); + return L2CAP_TXSEQ_UNEXPECTED_SREJ; + } } - if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { - if (!test_bit(CONN_RNR_SENT, &chan->conn_state)) - l2cap_send_ack(chan); - goto drop; + if (chan->expected_tx_seq == txseq) { + if (__seq_offset(chan, txseq, chan->last_acked_seq) >= + chan->tx_win) { + BT_DBG("Invalid - txseq outside tx window"); + return L2CAP_TXSEQ_INVALID; + } else { + BT_DBG("Expected"); + return L2CAP_TXSEQ_EXPECTED; + } } - if (tx_seq == chan->expected_tx_seq) - goto expected; + if (__seq_offset(chan, txseq, chan->last_acked_seq) < + __seq_offset(chan, chan->expected_tx_seq, + chan->last_acked_seq)){ + BT_DBG("Duplicate - expected_tx_seq later than txseq"); + return L2CAP_TXSEQ_DUPLICATE; + } + + if (__seq_offset(chan, txseq, chan->last_acked_seq) >= chan->tx_win) { + /* A source of invalid packets is a "double poll" condition, + * where delays cause us to send multiple poll packets. If + * the remote stack receives and processes both polls, + * sequence numbers can wrap around in such a way that a + * resent frame has a sequence number that looks like new data + * with a sequence gap. This would trigger an erroneous SREJ + * request. + * + * Fortunately, this is impossible with a tx window that's + * less than half of the maximum sequence number, which allows + * invalid frames to be safely ignored. + * + * With tx window sizes greater than half of the tx window + * maximum, the frame is invalid and cannot be ignored. This + * causes a disconnect. 
+ */ - if (test_bit(CONN_SREJ_SENT, &chan->conn_state)) { - struct srej_list *first; + if (chan->tx_win <= ((chan->tx_win_max + 1) >> 1)) { + BT_DBG("Invalid/Ignore - txseq outside tx window"); + return L2CAP_TXSEQ_INVALID_IGNORE; + } else { + BT_DBG("Invalid - txseq outside tx window"); + return L2CAP_TXSEQ_INVALID; + } + } else { + BT_DBG("Unexpected - txseq indicates missing frames"); + return L2CAP_TXSEQ_UNEXPECTED; + } +} - first = list_first_entry(&chan->srej_l, - struct srej_list, list); - if (tx_seq == first->tx_seq) { - l2cap_add_to_srej_queue(chan, skb, tx_seq, sar); - l2cap_check_srej_gap(chan, tx_seq); +static int l2cap_rx_state_recv(struct l2cap_chan *chan, + struct l2cap_ctrl *control, + struct sk_buff *skb, u8 event) +{ + int err = 0; + bool skb_in_use = 0; - list_del(&first->list); - kfree(first); + BT_DBG("chan %p, control %p, skb %p, event %d", chan, control, skb, + event); - if (list_empty(&chan->srej_l)) { - chan->buffer_seq = chan->buffer_seq_srej; - clear_bit(CONN_SREJ_SENT, &chan->conn_state); - l2cap_send_ack(chan); - BT_DBG("chan %p, Exit SREJ_SENT", chan); + switch (event) { + case L2CAP_EV_RECV_IFRAME: + switch (l2cap_classify_txseq(chan, control->txseq)) { + case L2CAP_TXSEQ_EXPECTED: + l2cap_pass_to_tx(chan, control); + + if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { + BT_DBG("Busy, discarding expected seq %d", + control->txseq); + break; } - } else { - struct srej_list *l; - /* duplicated tx_seq */ - if (l2cap_add_to_srej_queue(chan, skb, tx_seq, sar) < 0) - goto drop; + chan->expected_tx_seq = __next_seq(chan, + control->txseq); + + chan->buffer_seq = chan->expected_tx_seq; + skb_in_use = 1; + + err = l2cap_reassemble_sdu(chan, skb, control); + if (err) + break; - list_for_each_entry(l, &chan->srej_l, list) { - if (l->tx_seq == tx_seq) { - l2cap_resend_srejframe(chan, tx_seq); - return 0; + if (control->final) { + if (!test_and_clear_bit(CONN_REJ_ACT, + &chan->conn_state)) { + control->final = 0; + l2cap_retransmit_all(chan, control); + l2cap_ertm_send(chan); } } - err = l2cap_send_srejframe(chan, tx_seq); - if (err < 0) { - l2cap_send_disconn_req(chan->conn, chan, -err); - return err; + if (!test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) + l2cap_send_ack(chan); + break; + case L2CAP_TXSEQ_UNEXPECTED: + l2cap_pass_to_tx(chan, control); + + /* Can't issue SREJ frames in the local busy state. + * Drop this frame, it will be seen as missing + * when local busy is exited. + */ + if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) { + BT_DBG("Busy, discarding unexpected seq %d", + control->txseq); + break; } - } - } else { - expected_tx_seq_offset = __seq_offset(chan, - chan->expected_tx_seq, chan->buffer_seq); - /* duplicated tx_seq */ - if (tx_seq_offset < expected_tx_seq_offset) - goto drop; - - set_bit(CONN_SREJ_SENT, &chan->conn_state); + /* There was a gap in the sequence, so an SREJ + * must be sent for each missing frame. The + * current frame is stored for later use. 
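/* Editor's note (illustrative sketch, not part of this patch):
 * l2cap_classify_txseq() above buckets every received TxSeq as expected,
 * duplicate, unexpected (a gap that needs an SREJ) or invalid, using modular
 * offsets from the last acknowledged sequence number.  The cut-down sketch
 * below keeps only that window arithmetic; the SREJ-specific cases and the
 * half-window "double poll" rule described in the comments above are omitted,
 * and the constants are assumptions.
 */
#include <stdio.h>

#define SEQ_SPACE 64u           /* assumed 6-bit sequence numbers */

enum txseq_class {
        TXSEQ_EXPECTED, TXSEQ_DUPLICATE, TXSEQ_UNEXPECTED, TXSEQ_INVALID
};

static unsigned int seq_offset(unsigned int to, unsigned int from)
{
        return (to + SEQ_SPACE - from) % SEQ_SPACE;
}

static enum txseq_class classify(unsigned int txseq, unsigned int expected,
                                 unsigned int last_acked, unsigned int tx_win)
{
        if (seq_offset(txseq, last_acked) >= tx_win)
                return TXSEQ_INVALID;           /* outside the receive window */
        if (txseq == expected)
                return TXSEQ_EXPECTED;
        if (seq_offset(txseq, last_acked) < seq_offset(expected, last_acked))
                return TXSEQ_DUPLICATE;         /* already received earlier   */
        return TXSEQ_UNEXPECTED;                /* a gap: frames are missing  */
}

int main(void)
{
        printf("%d\n", classify(10, 10, 8, 32)); /* 0: EXPECTED   */
        printf("%d\n", classify(9,  10, 8, 32)); /* 1: DUPLICATE  */
        printf("%d\n", classify(12, 10, 8, 32)); /* 2: UNEXPECTED */
        printf("%d\n", classify(50, 10, 8, 32)); /* 3: INVALID    */
        return 0;
}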
+ */ + skb_queue_tail(&chan->srej_q, skb); + skb_in_use = 1; + BT_DBG("Queued %p (queue len %d)", skb, + skb_queue_len(&chan->srej_q)); - BT_DBG("chan %p, Enter SREJ", chan); + clear_bit(CONN_SREJ_ACT, &chan->conn_state); + l2cap_seq_list_clear(&chan->srej_list); + l2cap_send_srej(chan, control->txseq); - INIT_LIST_HEAD(&chan->srej_l); - chan->buffer_seq_srej = chan->buffer_seq; + chan->rx_state = L2CAP_RX_STATE_SREJ_SENT; + break; + case L2CAP_TXSEQ_DUPLICATE: + l2cap_pass_to_tx(chan, control); + break; + case L2CAP_TXSEQ_INVALID_IGNORE: + break; + case L2CAP_TXSEQ_INVALID: + default: + l2cap_send_disconn_req(chan->conn, chan, + ECONNRESET); + break; + } + break; + case L2CAP_EV_RECV_RR: + l2cap_pass_to_tx(chan, control); + if (control->final) { + clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - __skb_queue_head_init(&chan->srej_q); - l2cap_add_to_srej_queue(chan, skb, tx_seq, sar); + if (!test_and_clear_bit(CONN_REJ_ACT, + &chan->conn_state)) { + control->final = 0; + l2cap_retransmit_all(chan, control); + } - /* Set P-bit only if there are some I-frames to ack. */ - if (__clear_ack_timer(chan)) - set_bit(CONN_SEND_PBIT, &chan->conn_state); + l2cap_ertm_send(chan); + } else if (control->poll) { + l2cap_send_i_or_rr_or_rnr(chan); + } else { + if (test_and_clear_bit(CONN_REMOTE_BUSY, + &chan->conn_state) && + chan->unacked_frames) + __set_retrans_timer(chan); - err = l2cap_send_srejframe(chan, tx_seq); - if (err < 0) { - l2cap_send_disconn_req(chan->conn, chan, -err); - return err; + l2cap_ertm_send(chan); } - } - return 0; - -expected: - chan->expected_tx_seq = __next_seq(chan, chan->expected_tx_seq); - - if (test_bit(CONN_SREJ_SENT, &chan->conn_state)) { - bt_cb(skb)->control.txseq = tx_seq; - bt_cb(skb)->control.sar = sar; - __skb_queue_tail(&chan->srej_q, skb); - return 0; + break; + case L2CAP_EV_RECV_RNR: + set_bit(CONN_REMOTE_BUSY, &chan->conn_state); + l2cap_pass_to_tx(chan, control); + if (control && control->poll) { + set_bit(CONN_SEND_FBIT, &chan->conn_state); + l2cap_send_rr_or_rnr(chan, 0); + } + __clear_retrans_timer(chan); + l2cap_seq_list_clear(&chan->retrans_list); + break; + case L2CAP_EV_RECV_REJ: + l2cap_handle_rej(chan, control); + break; + case L2CAP_EV_RECV_SREJ: + l2cap_handle_srej(chan, control); + break; + default: + break; } - err = l2cap_reassemble_sdu(chan, skb, rx_control); - chan->buffer_seq = __next_seq(chan, chan->buffer_seq); - - if (err < 0) { - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); - return err; + if (skb && !skb_in_use) { + BT_DBG("Freeing %p", skb); + kfree_skb(skb); } - if (__is_ctrl_final(chan, rx_control)) { - if (!test_and_clear_bit(CONN_REJ_ACT, &chan->conn_state)) - l2cap_retransmit_frames(chan); - } + return err; +} +static int l2cap_rx_state_srej_sent(struct l2cap_chan *chan, + struct l2cap_ctrl *control, + struct sk_buff *skb, u8 event) +{ + int err = 0; + u16 txseq = control->txseq; + bool skb_in_use = 0; + + BT_DBG("chan %p, control %p, skb %p, event %d", chan, control, skb, + event); + + switch (event) { + case L2CAP_EV_RECV_IFRAME: + switch (l2cap_classify_txseq(chan, txseq)) { + case L2CAP_TXSEQ_EXPECTED: + /* Keep frame for reassembly later */ + l2cap_pass_to_tx(chan, control); + skb_queue_tail(&chan->srej_q, skb); + skb_in_use = 1; + BT_DBG("Queued %p (queue len %d)", skb, + skb_queue_len(&chan->srej_q)); + + chan->expected_tx_seq = __next_seq(chan, txseq); + break; + case L2CAP_TXSEQ_EXPECTED_SREJ: + l2cap_seq_list_pop(&chan->srej_list); - chan->num_acked = (chan->num_acked + 1) % num_to_ack; - if (chan->num_acked == 
num_to_ack - 1) - l2cap_send_ack(chan); - else - __set_ack_timer(chan); + l2cap_pass_to_tx(chan, control); + skb_queue_tail(&chan->srej_q, skb); + skb_in_use = 1; + BT_DBG("Queued %p (queue len %d)", skb, + skb_queue_len(&chan->srej_q)); - return 0; + err = l2cap_rx_queued_iframes(chan); + if (err) + break; -drop: - kfree_skb(skb); - return 0; -} + break; + case L2CAP_TXSEQ_UNEXPECTED: + /* Got a frame that can't be reassembled yet. + * Save it for later, and send SREJs to cover + * the missing frames. + */ + skb_queue_tail(&chan->srej_q, skb); + skb_in_use = 1; + BT_DBG("Queued %p (queue len %d)", skb, + skb_queue_len(&chan->srej_q)); + + l2cap_pass_to_tx(chan, control); + l2cap_send_srej(chan, control->txseq); + break; + case L2CAP_TXSEQ_UNEXPECTED_SREJ: + /* This frame was requested with an SREJ, but + * some expected retransmitted frames are + * missing. Request retransmission of missing + * SREJ'd frames. + */ + skb_queue_tail(&chan->srej_q, skb); + skb_in_use = 1; + BT_DBG("Queued %p (queue len %d)", skb, + skb_queue_len(&chan->srej_q)); + + l2cap_pass_to_tx(chan, control); + l2cap_send_srej_list(chan, control->txseq); + break; + case L2CAP_TXSEQ_DUPLICATE_SREJ: + /* We've already queued this frame. Drop this copy. */ + l2cap_pass_to_tx(chan, control); + break; + case L2CAP_TXSEQ_DUPLICATE: + /* Expecting a later sequence number, so this frame + * was already received. Ignore it completely. + */ + break; + case L2CAP_TXSEQ_INVALID_IGNORE: + break; + case L2CAP_TXSEQ_INVALID: + default: + l2cap_send_disconn_req(chan->conn, chan, + ECONNRESET); + break; + } + break; + case L2CAP_EV_RECV_RR: + l2cap_pass_to_tx(chan, control); + if (control->final) { + clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); -static inline void l2cap_data_channel_rrframe(struct l2cap_chan *chan, u32 rx_control) -{ - BT_DBG("chan %p, req_seq %d ctrl 0x%8.8x", chan, - __get_reqseq(chan, rx_control), rx_control); + if (!test_and_clear_bit(CONN_REJ_ACT, + &chan->conn_state)) { + control->final = 0; + l2cap_retransmit_all(chan, control); + } - chan->expected_ack_seq = __get_reqseq(chan, rx_control); - l2cap_drop_acked_frames(chan); + l2cap_ertm_send(chan); + } else if (control->poll) { + if (test_and_clear_bit(CONN_REMOTE_BUSY, + &chan->conn_state) && + chan->unacked_frames) { + __set_retrans_timer(chan); + } - if (__is_ctrl_poll(chan, rx_control)) { - set_bit(CONN_SEND_FBIT, &chan->conn_state); - if (test_bit(CONN_SREJ_SENT, &chan->conn_state)) { - if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state) && - (chan->unacked_frames > 0)) + set_bit(CONN_SEND_FBIT, &chan->conn_state); + l2cap_send_srej_tail(chan); + } else { + if (test_and_clear_bit(CONN_REMOTE_BUSY, + &chan->conn_state) && + chan->unacked_frames) __set_retrans_timer(chan); - clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - l2cap_send_srejtail(chan); + l2cap_send_ack(chan); + } + break; + case L2CAP_EV_RECV_RNR: + set_bit(CONN_REMOTE_BUSY, &chan->conn_state); + l2cap_pass_to_tx(chan, control); + if (control->poll) { + l2cap_send_srej_tail(chan); } else { - l2cap_send_i_or_rr_or_rnr(chan); + struct l2cap_ctrl rr_control; + memset(&rr_control, 0, sizeof(rr_control)); + rr_control.sframe = 1; + rr_control.super = L2CAP_SUPER_RR; + rr_control.reqseq = chan->buffer_seq; + l2cap_send_sframe(chan, &rr_control); } - } else if (__is_ctrl_final(chan, rx_control)) { - clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - - if (!test_and_clear_bit(CONN_REJ_ACT, &chan->conn_state)) - l2cap_retransmit_frames(chan); - - } else { - if (test_bit(CONN_REMOTE_BUSY, 
&chan->conn_state) && - (chan->unacked_frames > 0)) - __set_retrans_timer(chan); + break; + case L2CAP_EV_RECV_REJ: + l2cap_handle_rej(chan, control); + break; + case L2CAP_EV_RECV_SREJ: + l2cap_handle_srej(chan, control); + break; + } - clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - if (test_bit(CONN_SREJ_SENT, &chan->conn_state)) - l2cap_send_ack(chan); - else - l2cap_ertm_send(chan); + if (skb && !skb_in_use) { + BT_DBG("Freeing %p", skb); + kfree_skb(skb); } + + return err; } -static inline void l2cap_data_channel_rejframe(struct l2cap_chan *chan, u32 rx_control) +static bool __valid_reqseq(struct l2cap_chan *chan, u16 reqseq) { - u16 tx_seq = __get_reqseq(chan, rx_control); - - BT_DBG("chan %p, req_seq %d ctrl 0x%8.8x", chan, tx_seq, rx_control); - - clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - - chan->expected_ack_seq = tx_seq; - l2cap_drop_acked_frames(chan); + /* Make sure reqseq is for a packet that has been sent but not acked */ + u16 unacked; - if (__is_ctrl_final(chan, rx_control)) { - if (!test_and_clear_bit(CONN_REJ_ACT, &chan->conn_state)) - l2cap_retransmit_frames(chan); - } else { - l2cap_retransmit_frames(chan); - - if (test_bit(CONN_WAIT_F, &chan->conn_state)) - set_bit(CONN_REJ_ACT, &chan->conn_state); - } + unacked = __seq_offset(chan, chan->next_tx_seq, chan->expected_ack_seq); + return __seq_offset(chan, chan->next_tx_seq, reqseq) <= unacked; } -static inline void l2cap_data_channel_srejframe(struct l2cap_chan *chan, u32 rx_control) -{ - u16 tx_seq = __get_reqseq(chan, rx_control); - BT_DBG("chan %p, req_seq %d ctrl 0x%8.8x", chan, tx_seq, rx_control); - - clear_bit(CONN_REMOTE_BUSY, &chan->conn_state); - - if (__is_ctrl_poll(chan, rx_control)) { - chan->expected_ack_seq = tx_seq; - l2cap_drop_acked_frames(chan); - - set_bit(CONN_SEND_FBIT, &chan->conn_state); - l2cap_retransmit_one_frame(chan, tx_seq); +static int l2cap_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control, + struct sk_buff *skb, u8 event) +{ + int err = 0; - l2cap_ertm_send(chan); + BT_DBG("chan %p, control %p, skb %p, event %d, state %d", chan, + control, skb, event, chan->rx_state); - if (test_bit(CONN_WAIT_F, &chan->conn_state)) { - chan->srej_save_reqseq = tx_seq; - set_bit(CONN_SREJ_ACT, &chan->conn_state); + if (__valid_reqseq(chan, control->reqseq)) { + switch (chan->rx_state) { + case L2CAP_RX_STATE_RECV: + err = l2cap_rx_state_recv(chan, control, skb, event); + break; + case L2CAP_RX_STATE_SREJ_SENT: + err = l2cap_rx_state_srej_sent(chan, control, skb, + event); + break; + default: + /* shut it down */ + break; } - } else if (__is_ctrl_final(chan, rx_control)) { - if (test_bit(CONN_SREJ_ACT, &chan->conn_state) && - chan->srej_save_reqseq == tx_seq) - clear_bit(CONN_SREJ_ACT, &chan->conn_state); - else - l2cap_retransmit_one_frame(chan, tx_seq); } else { - l2cap_retransmit_one_frame(chan, tx_seq); - if (test_bit(CONN_WAIT_F, &chan->conn_state)) { - chan->srej_save_reqseq = tx_seq; - set_bit(CONN_SREJ_ACT, &chan->conn_state); - } + BT_DBG("Invalid reqseq %d (next_tx_seq %d, expected_ack_seq %d", + control->reqseq, chan->next_tx_seq, + chan->expected_ack_seq); + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); } + + return err; } -static inline void l2cap_data_channel_rnrframe(struct l2cap_chan *chan, u32 rx_control) +static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control, + struct sk_buff *skb) { - u16 tx_seq = __get_reqseq(chan, rx_control); + int err = 0; - BT_DBG("chan %p, req_seq %d ctrl 0x%8.8x", chan, tx_seq, rx_control); + BT_DBG("chan %p, control 
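/* Editor's note (illustrative sketch, not part of this patch): the new
 * __valid_reqseq() above only accepts acknowledgement numbers that fall
 * between the oldest unacked frame and the next frame to be transmitted;
 * anything else triggers a disconnect in l2cap_rx().  A hypothetical
 * userspace rendering of that check, reusing the modular-offset idea:
 */
#include <stdbool.h>
#include <stdio.h>

#define SEQ_SPACE 64u           /* assumed 6-bit sequence numbers */

static unsigned int seq_offset(unsigned int to, unsigned int from)
{
        return (to + SEQ_SPACE - from) % SEQ_SPACE;
}

/* A ReqSeq acknowledges everything before it, so it must not point past
 * next_tx_seq or before the current acknowledgement base.
 */
static bool valid_reqseq(unsigned int reqseq, unsigned int next_tx_seq,
                         unsigned int expected_ack_seq)
{
        unsigned int unacked = seq_offset(next_tx_seq, expected_ack_seq);

        return seq_offset(next_tx_seq, reqseq) <= unacked;
}

int main(void)
{
        /* Sent seqs 10..13 (next_tx_seq = 14), nothing acked yet (base 10). */
        printf("%d\n", valid_reqseq(12, 14, 10)); /* 1: acks 10 and 11       */
        printf("%d\n", valid_reqseq(14, 14, 10)); /* 1: acks everything sent */
        printf("%d\n", valid_reqseq(20, 14, 10)); /* 0: acks unsent frames   */
        return 0;
}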
%p, skb %p, state %d", chan, control, skb, + chan->rx_state); - set_bit(CONN_REMOTE_BUSY, &chan->conn_state); - chan->expected_ack_seq = tx_seq; - l2cap_drop_acked_frames(chan); + if (l2cap_classify_txseq(chan, control->txseq) == + L2CAP_TXSEQ_EXPECTED) { + l2cap_pass_to_tx(chan, control); - if (__is_ctrl_poll(chan, rx_control)) - set_bit(CONN_SEND_FBIT, &chan->conn_state); + BT_DBG("buffer_seq %d->%d", chan->buffer_seq, + __next_seq(chan, chan->buffer_seq)); - if (!test_bit(CONN_SREJ_SENT, &chan->conn_state)) { - __clear_retrans_timer(chan); - if (__is_ctrl_poll(chan, rx_control)) - l2cap_send_rr_or_rnr(chan, L2CAP_CTRL_FINAL); - return; - } + chan->buffer_seq = __next_seq(chan, chan->buffer_seq); - if (__is_ctrl_poll(chan, rx_control)) { - l2cap_send_srejtail(chan); + l2cap_reassemble_sdu(chan, skb, control); } else { - rx_control = __set_ctrl_super(chan, L2CAP_SUPER_RR); - l2cap_send_sframe(chan, rx_control); - } -} - -static inline int l2cap_data_channel_sframe(struct l2cap_chan *chan, u32 rx_control, struct sk_buff *skb) -{ - BT_DBG("chan %p rx_control 0x%8.8x len %d", chan, rx_control, skb->len); + if (chan->sdu) { + kfree_skb(chan->sdu); + chan->sdu = NULL; + } + chan->sdu_last_frag = NULL; + chan->sdu_len = 0; - if (__is_ctrl_final(chan, rx_control) && - test_bit(CONN_WAIT_F, &chan->conn_state)) { - __clear_monitor_timer(chan); - if (chan->unacked_frames > 0) - __set_retrans_timer(chan); - clear_bit(CONN_WAIT_F, &chan->conn_state); + if (skb) { + BT_DBG("Freeing %p", skb); + kfree_skb(skb); + } } - switch (__get_ctrl_super(chan, rx_control)) { - case L2CAP_SUPER_RR: - l2cap_data_channel_rrframe(chan, rx_control); - break; - - case L2CAP_SUPER_REJ: - l2cap_data_channel_rejframe(chan, rx_control); - break; - - case L2CAP_SUPER_SREJ: - l2cap_data_channel_srejframe(chan, rx_control); - break; - - case L2CAP_SUPER_RNR: - l2cap_data_channel_rnrframe(chan, rx_control); - break; - } + chan->last_acked_seq = control->txseq; + chan->expected_tx_seq = __next_seq(chan, control->txseq); - kfree_skb(skb); - return 0; + return err; } -static int l2cap_ertm_data_rcv(struct l2cap_chan *chan, struct sk_buff *skb) +static int l2cap_data_rcv(struct l2cap_chan *chan, struct sk_buff *skb) { - u32 control; - u16 req_seq; - int len, next_tx_seq_offset, req_seq_offset; + struct l2cap_ctrl *control = &bt_cb(skb)->control; + u16 len; + u8 event; __unpack_control(chan, skb); - control = __get_control(chan, skb->data); - skb_pull(skb, __ctrl_size(chan)); len = skb->len; /* * We can just drop the corrupted I-frame here. * Receiver will miss it and start proper recovery - * procedures and ask retransmission. + * procedures and ask for retransmission. 
*/ if (l2cap_check_fcs(chan, skb)) goto drop; - if (__is_sar_start(chan, control) && !__is_sframe(chan, control)) + if (!control->sframe && control->sar == L2CAP_SAR_START) len -= L2CAP_SDULEN_SIZE; if (chan->fcs == L2CAP_FCS_CRC16) @@ -4585,34 +5085,57 @@ static int l2cap_ertm_data_rcv(struct l2cap_chan *chan, struct sk_buff *skb) goto drop; } - req_seq = __get_reqseq(chan, control); - - req_seq_offset = __seq_offset(chan, req_seq, chan->expected_ack_seq); + if (!control->sframe) { + int err; - next_tx_seq_offset = __seq_offset(chan, chan->next_tx_seq, - chan->expected_ack_seq); + BT_DBG("iframe sar %d, reqseq %d, final %d, txseq %d", + control->sar, control->reqseq, control->final, + control->txseq); - /* check for invalid req-seq */ - if (req_seq_offset > next_tx_seq_offset) { - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); - goto drop; - } - - if (!__is_sframe(chan, control)) { - if (len < 0) { - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); + /* Validate F-bit - F=0 always valid, F=1 only + * valid in TX WAIT_F + */ + if (control->final && chan->tx_state != L2CAP_TX_STATE_WAIT_F) goto drop; + + if (chan->mode != L2CAP_MODE_STREAMING) { + event = L2CAP_EV_RECV_IFRAME; + err = l2cap_rx(chan, control, skb, event); + } else { + err = l2cap_stream_rx(chan, control, skb); } - l2cap_data_channel_iframe(chan, control, skb); + if (err) + l2cap_send_disconn_req(chan->conn, chan, + ECONNRESET); } else { + const u8 rx_func_to_event[4] = { + L2CAP_EV_RECV_RR, L2CAP_EV_RECV_REJ, + L2CAP_EV_RECV_RNR, L2CAP_EV_RECV_SREJ + }; + + /* Only I-frames are expected in streaming mode */ + if (chan->mode == L2CAP_MODE_STREAMING) + goto drop; + + BT_DBG("sframe reqseq %d, final %d, poll %d, super %d", + control->reqseq, control->final, control->poll, + control->super); + if (len != 0) { BT_ERR("%d", len); l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); goto drop; } - l2cap_data_channel_sframe(chan, control, skb); + /* Validate F and P bits */ + if (control->final && (control->poll || + chan->tx_state != L2CAP_TX_STATE_WAIT_F)) + goto drop; + + event = rx_func_to_event[control->super]; + if (l2cap_rx(chan, control, skb, event)) + l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); } return 0; @@ -4622,19 +5145,27 @@ drop: return 0; } -static inline int l2cap_data_channel(struct l2cap_conn *conn, u16 cid, struct sk_buff *skb) +static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid, + struct sk_buff *skb) { struct l2cap_chan *chan; - u32 control; - u16 tx_seq; - int len; chan = l2cap_get_chan_by_scid(conn, cid); if (!chan) { - BT_DBG("unknown cid 0x%4.4x", cid); - /* Drop packet and return */ - kfree_skb(skb); - return 0; + if (cid == L2CAP_CID_A2MP) { + chan = a2mp_channel_create(conn, skb); + if (!chan) { + kfree_skb(skb); + return; + } + + l2cap_chan_lock(chan); + } else { + BT_DBG("unknown cid 0x%4.4x", cid); + /* Drop packet and return */ + kfree_skb(skb); + return; + } } BT_DBG("chan %p, len %d", chan, skb->len); @@ -4652,49 +5183,13 @@ static inline int l2cap_data_channel(struct l2cap_conn *conn, u16 cid, struct sk if (chan->imtu < skb->len) goto drop; - if (!chan->ops->recv(chan->data, skb)) + if (!chan->ops->recv(chan, skb)) goto done; break; case L2CAP_MODE_ERTM: - l2cap_ertm_data_rcv(chan, skb); - - goto done; - case L2CAP_MODE_STREAMING: - control = __get_control(chan, skb->data); - skb_pull(skb, __ctrl_size(chan)); - len = skb->len; - - if (l2cap_check_fcs(chan, skb)) - goto drop; - - if (__is_sar_start(chan, control)) - len -= L2CAP_SDULEN_SIZE; - - if (chan->fcs == 
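/* Editor's note (illustrative sketch, not part of this patch): the rewritten
 * l2cap_data_rcv() above validates the F and P bits and then maps the 2-bit
 * supervisory function of an S-frame onto a state-machine event through a
 * small lookup table instead of a switch at every call site.  The sketch
 * below shows that table-driven dispatch in isolation; the enum values are
 * assumptions for the example.
 */
#include <stdio.h>

enum sframe_func { SUPER_RR, SUPER_REJ, SUPER_RNR, SUPER_SREJ };
enum rx_event { EV_RECV_RR, EV_RECV_REJ, EV_RECV_RNR, EV_RECV_SREJ };

static const enum rx_event rx_func_to_event[4] = {
        [SUPER_RR]   = EV_RECV_RR,
        [SUPER_REJ]  = EV_RECV_REJ,
        [SUPER_RNR]  = EV_RECV_RNR,
        [SUPER_SREJ] = EV_RECV_SREJ,
};

static const char *ev_name(enum rx_event ev)
{
        static const char *const names[] = { "RR", "REJ", "RNR", "SREJ" };

        return names[ev];
}

int main(void)
{
        /* The receive path simply indexes the table with control->super. */
        for (int super = SUPER_RR; super <= SUPER_SREJ; super++)
                printf("super %d -> event %s\n", super,
                       ev_name(rx_func_to_event[super]));
        return 0;
}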
L2CAP_FCS_CRC16) - len -= L2CAP_FCS_SIZE; - - if (len > chan->mps || len < 0 || __is_sframe(chan, control)) - goto drop; - - tx_seq = __get_txseq(chan, control); - - if (chan->expected_tx_seq != tx_seq) { - /* Frame(s) missing - must discard partial SDU */ - kfree_skb(chan->sdu); - chan->sdu = NULL; - chan->sdu_last_frag = NULL; - chan->sdu_len = 0; - - /* TODO: Notify userland of missing data */ - } - - chan->expected_tx_seq = __next_seq(chan, tx_seq); - - if (l2cap_reassemble_sdu(chan, skb, control) == -EMSGSIZE) - l2cap_send_disconn_req(chan->conn, chan, ECONNRESET); - + l2cap_data_rcv(chan, skb); goto done; default: @@ -4707,11 +5202,10 @@ drop: done: l2cap_chan_unlock(chan); - - return 0; } -static inline int l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm, struct sk_buff *skb) +static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm, + struct sk_buff *skb) { struct l2cap_chan *chan; @@ -4727,17 +5221,15 @@ static inline int l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm, str if (chan->imtu < skb->len) goto drop; - if (!chan->ops->recv(chan->data, skb)) - return 0; + if (!chan->ops->recv(chan, skb)) + return; drop: kfree_skb(skb); - - return 0; } -static inline int l2cap_att_channel(struct l2cap_conn *conn, u16 cid, - struct sk_buff *skb) +static void l2cap_att_channel(struct l2cap_conn *conn, u16 cid, + struct sk_buff *skb) { struct l2cap_chan *chan; @@ -4753,13 +5245,11 @@ static inline int l2cap_att_channel(struct l2cap_conn *conn, u16 cid, if (chan->imtu < skb->len) goto drop; - if (!chan->ops->recv(chan->data, skb)) - return 0; + if (!chan->ops->recv(chan, skb)) + return; drop: kfree_skb(skb); - - return 0; } static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb) @@ -4787,7 +5277,7 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb) case L2CAP_CID_CONN_LESS: psm = get_unaligned((__le16 *) skb->data); - skb_pull(skb, 2); + skb_pull(skb, L2CAP_PSMLEN_SIZE); l2cap_conless_channel(conn, psm, skb); break; @@ -4898,7 +5388,7 @@ int l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) if (!conn) return 0; - BT_DBG("conn %p", conn); + BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, status, encrypt); if (hcon->type == LE_LINK) { if (!status && encrypt) @@ -4911,7 +5401,8 @@ int l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) list_for_each_entry(chan, &conn->chan_l, list) { l2cap_chan_lock(chan); - BT_DBG("chan->scid %d", chan->scid); + BT_DBG("chan %p scid 0x%4.4x state %s", chan, chan->scid, + state_to_string(chan->state)); if (chan->scid == L2CAP_CID_LE_DATA) { if (!status && encrypt) { @@ -4981,6 +5472,17 @@ int l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) rsp.status = cpu_to_le16(stat); l2cap_send_cmd(conn, chan->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); + + if (!test_bit(CONF_REQ_SENT, &chan->conf_state) && + res == L2CAP_CR_SUCCESS) { + char buf[128]; + set_bit(CONF_REQ_SENT, &chan->conf_state); + l2cap_send_cmd(conn, l2cap_get_ident(conn), + L2CAP_CONF_REQ, + l2cap_build_conf_req(chan, buf), + buf); + chan->num_conf_req++; + } } l2cap_chan_unlock(chan); diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c index 3bb1611b9d48..a4bb27e8427e 100644 --- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -27,7 +27,6 @@ /* Bluetooth L2CAP sockets. 
*/ -#include <linux/security.h> #include <linux/export.h> #include <net/bluetooth/bluetooth.h> @@ -89,8 +88,8 @@ static int l2cap_sock_bind(struct socket *sock, struct sockaddr *addr, int alen) if (err < 0) goto done; - if (__le16_to_cpu(la.l2_psm) == 0x0001 || - __le16_to_cpu(la.l2_psm) == 0x0003) + if (__le16_to_cpu(la.l2_psm) == L2CAP_PSM_SDP || + __le16_to_cpu(la.l2_psm) == L2CAP_PSM_RFCOMM) chan->sec_level = BT_SECURITY_SDP; bacpy(&bt_sk(sk)->src, &la.l2_bdaddr); @@ -446,6 +445,22 @@ static int l2cap_sock_getsockopt(struct socket *sock, int level, int optname, ch return err; } +static bool l2cap_valid_mtu(struct l2cap_chan *chan, u16 mtu) +{ + switch (chan->scid) { + case L2CAP_CID_LE_DATA: + if (mtu < L2CAP_LE_MIN_MTU) + return false; + break; + + default: + if (mtu < L2CAP_DEFAULT_MIN_MTU) + return false; + } + + return true; +} + static int l2cap_sock_setsockopt_old(struct socket *sock, int optname, char __user *optval, unsigned int optlen) { struct sock *sk = sock->sk; @@ -484,6 +499,11 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname, char __us break; } + if (!l2cap_valid_mtu(chan, opts.imtu)) { + err = -EINVAL; + break; + } + chan->mode = opts.mode; switch (chan->mode) { case L2CAP_MODE_BASIC: @@ -873,9 +893,34 @@ static int l2cap_sock_release(struct socket *sock) return err; } -static struct l2cap_chan *l2cap_sock_new_connection_cb(void *data) +static void l2cap_sock_cleanup_listen(struct sock *parent) { - struct sock *sk, *parent = data; + struct sock *sk; + + BT_DBG("parent %p", parent); + + /* Close not yet accepted channels */ + while ((sk = bt_accept_dequeue(parent, NULL))) { + struct l2cap_chan *chan = l2cap_pi(sk)->chan; + + l2cap_chan_lock(chan); + __clear_chan_timer(chan); + l2cap_chan_close(chan, ECONNRESET); + l2cap_chan_unlock(chan); + + l2cap_sock_kill(sk); + } +} + +static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan) +{ + struct sock *sk, *parent = chan->data; + + /* Check for backlog size */ + if (sk_acceptq_is_full(parent)) { + BT_DBG("backlog full %d", parent->sk_ack_backlog); + return NULL; + } sk = l2cap_sock_alloc(sock_net(parent), NULL, BTPROTO_L2CAP, GFP_ATOMIC); @@ -889,10 +934,10 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(void *data) return l2cap_pi(sk)->chan; } -static int l2cap_sock_recv_cb(void *data, struct sk_buff *skb) +static int l2cap_sock_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb) { int err; - struct sock *sk = data; + struct sock *sk = chan->data; struct l2cap_pinfo *pi = l2cap_pi(sk); lock_sock(sk); @@ -925,16 +970,57 @@ done: return err; } -static void l2cap_sock_close_cb(void *data) +static void l2cap_sock_close_cb(struct l2cap_chan *chan) { - struct sock *sk = data; + struct sock *sk = chan->data; l2cap_sock_kill(sk); } -static void l2cap_sock_state_change_cb(void *data, int state) +static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err) { - struct sock *sk = data; + struct sock *sk = chan->data; + struct sock *parent; + + lock_sock(sk); + + parent = bt_sk(sk)->parent; + + sock_set_flag(sk, SOCK_ZAPPED); + + switch (chan->state) { + case BT_OPEN: + case BT_BOUND: + case BT_CLOSED: + break; + case BT_LISTEN: + l2cap_sock_cleanup_listen(sk); + sk->sk_state = BT_CLOSED; + chan->state = BT_CLOSED; + + break; + default: + sk->sk_state = BT_CLOSED; + chan->state = BT_CLOSED; + + sk->sk_err = err; + + if (parent) { + bt_accept_unlink(sk); + parent->sk_data_ready(parent, 0); + } else { + sk->sk_state_change(sk); + } + + break; + } + + release_sock(sk); +} + 
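/* Editor's note (illustrative sketch, not part of this patch): the new
 * l2cap_valid_mtu() helper above rejects setsockopt() attempts to configure
 * an MTU below the per-transport minimum before the value is stored.  The
 * userspace sketch below mirrors that guard; the minimums (23 for LE, 48 for
 * BR/EDR) match the usual L2CAP defaults but are illustrative constants here.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LE_MIN_MTU      23
#define BREDR_MIN_MTU   48

enum transport { TRANSPORT_BREDR, TRANSPORT_LE };

static bool valid_mtu(enum transport t, uint16_t mtu)
{
        switch (t) {
        case TRANSPORT_LE:
                return mtu >= LE_MIN_MTU;
        default:
                return mtu >= BREDR_MIN_MTU;
        }
}

int main(void)
{
        printf("LE     mtu 20: %s\n", valid_mtu(TRANSPORT_LE, 20) ? "ok" : "EINVAL");
        printf("LE     mtu 23: %s\n", valid_mtu(TRANSPORT_LE, 23) ? "ok" : "EINVAL");
        printf("BR/EDR mtu 40: %s\n", valid_mtu(TRANSPORT_BREDR, 40) ? "ok" : "EINVAL");
        return 0;
}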
+static void l2cap_sock_state_change_cb(struct l2cap_chan *chan, int state) +{ + struct sock *sk = chan->data; sk->sk_state = state; } @@ -955,12 +1041,34 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan, return skb; } +static void l2cap_sock_ready_cb(struct l2cap_chan *chan) +{ + struct sock *sk = chan->data; + struct sock *parent; + + lock_sock(sk); + + parent = bt_sk(sk)->parent; + + BT_DBG("sk %p, parent %p", sk, parent); + + sk->sk_state = BT_CONNECTED; + sk->sk_state_change(sk); + + if (parent) + parent->sk_data_ready(parent, 0); + + release_sock(sk); +} + static struct l2cap_ops l2cap_chan_ops = { .name = "L2CAP Socket Interface", .new_connection = l2cap_sock_new_connection_cb, .recv = l2cap_sock_recv_cb, .close = l2cap_sock_close_cb, + .teardown = l2cap_sock_teardown_cb, .state_change = l2cap_sock_state_change_cb, + .ready = l2cap_sock_ready_cb, .alloc_skb = l2cap_sock_alloc_skb_cb, }; diff --git a/net/bluetooth/lib.c b/net/bluetooth/lib.c index 506628876f36..e1c97527e16c 100644 --- a/net/bluetooth/lib.c +++ b/net/bluetooth/lib.c @@ -26,12 +26,7 @@ #define pr_fmt(fmt) "Bluetooth: " fmt -#include <linux/module.h> - -#include <linux/kernel.h> -#include <linux/stddef.h> -#include <linux/string.h> -#include <asm/errno.h> +#include <linux/export.h> #include <net/bluetooth/bluetooth.h> diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 3e5e3362ea00..ad6613d17ca6 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -24,8 +24,6 @@ /* Bluetooth HCI Management interface */ -#include <linux/kernel.h> -#include <linux/uaccess.h> #include <linux/module.h> #include <asm/unaligned.h> @@ -212,7 +210,7 @@ static int cmd_status(struct sock *sk, u16 index, u16 cmd, u8 status) BT_DBG("sock %p, index %u, cmd %u, status %u", sk, index, cmd, status); - skb = alloc_skb(sizeof(*hdr) + sizeof(*ev), GFP_ATOMIC); + skb = alloc_skb(sizeof(*hdr) + sizeof(*ev), GFP_KERNEL); if (!skb) return -ENOMEM; @@ -243,7 +241,7 @@ static int cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status, BT_DBG("sock %p", sk); - skb = alloc_skb(sizeof(*hdr) + sizeof(*ev) + rp_len, GFP_ATOMIC); + skb = alloc_skb(sizeof(*hdr) + sizeof(*ev) + rp_len, GFP_KERNEL); if (!skb) return -ENOMEM; @@ -689,14 +687,14 @@ static struct pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode, { struct pending_cmd *cmd; - cmd = kmalloc(sizeof(*cmd), GFP_ATOMIC); + cmd = kmalloc(sizeof(*cmd), GFP_KERNEL); if (!cmd) return NULL; cmd->opcode = opcode; cmd->index = hdev->id; - cmd->param = kmalloc(len, GFP_ATOMIC); + cmd->param = kmalloc(len, GFP_KERNEL); if (!cmd->param) { kfree(cmd); return NULL; @@ -714,7 +712,8 @@ static struct pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode, } static void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, - void (*cb)(struct pending_cmd *cmd, void *data), + void (*cb)(struct pending_cmd *cmd, + void *data), void *data) { struct list_head *p, *n; @@ -813,7 +812,7 @@ static int mgmt_event(u16 event, struct hci_dev *hdev, void *data, u16 data_len, struct sk_buff *skb; struct mgmt_hdr *hdr; - skb = alloc_skb(sizeof(*hdr) + data_len, GFP_ATOMIC); + skb = alloc_skb(sizeof(*hdr) + data_len, GFP_KERNEL); if (!skb) return -ENOMEM; @@ -871,7 +870,7 @@ static int set_discoverable(struct sock *sk, struct hci_dev *hdev, void *data, } if (mgmt_pending_find(MGMT_OP_SET_DISCOVERABLE, hdev) || - mgmt_pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) { + mgmt_pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) { err = cmd_status(sk, hdev->id, MGMT_OP_SET_DISCOVERABLE, 
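/* Editor's note (illustrative sketch, not part of this patch): across this
 * patch the channel callbacks stop taking an opaque void *data and receive
 * the struct l2cap_chan itself (with chan->data pointing back at the owning
 * socket), and the ops table above gains dedicated teardown and ready
 * callbacks.  Below is a hypothetical, minimal model of that "typed context
 * plus ops table" shape, with invented names.
 */
#include <stdio.h>

struct chan;

struct chan_ops {
        const char *name;
        int  (*recv)(struct chan *chan, const char *data);
        void (*ready)(struct chan *chan);
        void (*teardown)(struct chan *chan, int err);
};

struct chan {
        const struct chan_ops *ops;
        void *data;                     /* owner's private pointer (socket) */
};

static int sock_recv_cb(struct chan *chan, const char *data)
{
        printf("[%s] recv for owner %p: %s\n", chan->ops->name, chan->data, data);
        return 0;
}

static void sock_ready_cb(struct chan *chan)
{
        printf("[%s] channel ready\n", chan->ops->name);
}

static void sock_teardown_cb(struct chan *chan, int err)
{
        printf("[%s] teardown, err %d\n", chan->ops->name, err);
}

static const struct chan_ops sock_ops = {
        .name     = "Socket Interface",
        .recv     = sock_recv_cb,
        .ready    = sock_ready_cb,
        .teardown = sock_teardown_cb,
};

int main(void)
{
        int owner = 0;                  /* stands in for the socket         */
        struct chan chan = { .ops = &sock_ops, .data = &owner };

        chan.ops->ready(&chan);         /* core calling back into the owner */
        chan.ops->recv(&chan, "payload");
        chan.ops->teardown(&chan, 104); /* 104 == ECONNRESET on Linux       */
        return 0;
}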
MGMT_STATUS_BUSY); goto failed; @@ -978,7 +977,7 @@ static int set_connectable(struct sock *sk, struct hci_dev *hdev, void *data, } if (mgmt_pending_find(MGMT_OP_SET_DISCOVERABLE, hdev) || - mgmt_pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) { + mgmt_pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) { err = cmd_status(sk, hdev->id, MGMT_OP_SET_CONNECTABLE, MGMT_STATUS_BUSY); goto failed; @@ -1001,7 +1000,7 @@ static int set_connectable(struct sock *sk, struct hci_dev *hdev, void *data, scan = 0; if (test_bit(HCI_ISCAN, &hdev->flags) && - hdev->discov_timeout > 0) + hdev->discov_timeout > 0) cancel_delayed_work(&hdev->discov_off); } @@ -1056,7 +1055,7 @@ static int set_link_security(struct sock *sk, struct hci_dev *hdev, void *data, bool changed = false; if (!!cp->val != test_bit(HCI_LINK_SECURITY, - &hdev->dev_flags)) { + &hdev->dev_flags)) { change_bit(HCI_LINK_SECURITY, &hdev->dev_flags); changed = true; } @@ -1269,7 +1268,7 @@ static int add_uuid(struct sock *sk, struct hci_dev *hdev, void *data, u16 len) goto failed; } - uuid = kmalloc(sizeof(*uuid), GFP_ATOMIC); + uuid = kmalloc(sizeof(*uuid), GFP_KERNEL); if (!uuid) { err = -ENOMEM; goto failed; @@ -1317,7 +1316,7 @@ static bool enable_service_cache(struct hci_dev *hdev) } static int remove_uuid(struct sock *sk, struct hci_dev *hdev, void *data, - u16 len) + u16 len) { struct mgmt_cp_remove_uuid *cp = data; struct pending_cmd *cmd; @@ -1442,7 +1441,7 @@ unlock: } static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data, - u16 len) + u16 len) { struct mgmt_cp_load_link_keys *cp = data; u16 key_count, expected_len; @@ -1454,13 +1453,13 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data, sizeof(struct mgmt_link_key_info); if (expected_len != len) { BT_ERR("load_link_keys: expected %u bytes, got %u bytes", - len, expected_len); + len, expected_len); return cmd_status(sk, hdev->id, MGMT_OP_LOAD_LINK_KEYS, MGMT_STATUS_INVALID_PARAMS); } BT_DBG("%s debug_keys %u key_count %u", hdev->name, cp->debug_keys, - key_count); + key_count); hci_dev_lock(hdev); @@ -1535,10 +1534,10 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data, if (cp->disconnect) { if (cp->addr.type == BDADDR_BREDR) conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, - &cp->addr.bdaddr); + &cp->addr.bdaddr); else conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, - &cp->addr.bdaddr); + &cp->addr.bdaddr); } else { conn = NULL; } @@ -1594,7 +1593,8 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data, } if (cp->addr.type == BDADDR_BREDR) - conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &cp->addr.bdaddr); + conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, + &cp->addr.bdaddr); else conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, &cp->addr.bdaddr); @@ -1611,7 +1611,7 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data, } dc.handle = cpu_to_le16(conn->handle); - dc.reason = 0x13; /* Remote User Terminated Connection */ + dc.reason = HCI_ERROR_REMOTE_USER_TERM; err = hci_send_cmd(hdev, HCI_OP_DISCONNECT, sizeof(dc), &dc); if (err < 0) @@ -1667,7 +1667,7 @@ static int get_connections(struct sock *sk, struct hci_dev *hdev, void *data, } rp_len = sizeof(*rp) + (i * sizeof(struct mgmt_addr_info)); - rp = kmalloc(rp_len, GFP_ATOMIC); + rp = kmalloc(rp_len, GFP_KERNEL); if (!rp) { err = -ENOMEM; goto unlock; @@ -1778,29 +1778,6 @@ failed: return err; } -static int pin_code_neg_reply(struct sock *sk, struct hci_dev *hdev, - void *data, u16 len) -{ - struct mgmt_cp_pin_code_neg_reply *cp = data; 
- int err; - - BT_DBG(""); - - hci_dev_lock(hdev); - - if (!hdev_is_powered(hdev)) { - err = cmd_status(sk, hdev->id, MGMT_OP_PIN_CODE_NEG_REPLY, - MGMT_STATUS_NOT_POWERED); - goto failed; - } - - err = send_pin_code_neg_reply(sk, hdev, cp); - -failed: - hci_dev_unlock(hdev); - return err; -} - static int set_io_capability(struct sock *sk, struct hci_dev *hdev, void *data, u16 len) { @@ -1813,7 +1790,7 @@ static int set_io_capability(struct sock *sk, struct hci_dev *hdev, void *data, hdev->io_capability = cp->io_capability; BT_DBG("%s IO capability set to 0x%02x", hdev->name, - hdev->io_capability); + hdev->io_capability); hci_dev_unlock(hdev); @@ -1821,7 +1798,7 @@ static int set_io_capability(struct sock *sk, struct hci_dev *hdev, void *data, 0); } -static inline struct pending_cmd *find_pairing(struct hci_conn *conn) +static struct pending_cmd *find_pairing(struct hci_conn *conn) { struct hci_dev *hdev = conn->hdev; struct pending_cmd *cmd; @@ -1927,8 +1904,15 @@ static int pair_device(struct sock *sk, struct hci_dev *hdev, void *data, rp.addr.type = cp->addr.type; if (IS_ERR(conn)) { + int status; + + if (PTR_ERR(conn) == -EBUSY) + status = MGMT_STATUS_BUSY; + else + status = MGMT_STATUS_CONNECT_FAILED; + err = cmd_complete(sk, hdev->id, MGMT_OP_PAIR_DEVICE, - MGMT_STATUS_CONNECT_FAILED, &rp, + status, &rp, sizeof(rp)); goto unlock; } @@ -1959,7 +1943,7 @@ static int pair_device(struct sock *sk, struct hci_dev *hdev, void *data, cmd->user_data = conn; if (conn->state == BT_CONNECTED && - hci_conn_security(conn, sec_level, auth_type)) + hci_conn_security(conn, sec_level, auth_type)) pairing_complete(cmd, 0); err = 0; @@ -2076,6 +2060,18 @@ done: return err; } +static int pin_code_neg_reply(struct sock *sk, struct hci_dev *hdev, + void *data, u16 len) +{ + struct mgmt_cp_pin_code_neg_reply *cp = data; + + BT_DBG(""); + + return user_pairing_resp(sk, hdev, &cp->addr.bdaddr, cp->addr.type, + MGMT_OP_PIN_CODE_NEG_REPLY, + HCI_OP_PIN_CODE_NEG_REPLY, 0); +} + static int user_confirm_reply(struct sock *sk, struct hci_dev *hdev, void *data, u16 len) { @@ -2256,7 +2252,7 @@ unlock: } static int remove_remote_oob_data(struct sock *sk, struct hci_dev *hdev, - void *data, u16 len) + void *data, u16 len) { struct mgmt_cp_remove_remote_oob_data *cp = data; u8 status; @@ -2425,7 +2421,7 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data, case DISCOVERY_RESOLVING: e = hci_inquiry_cache_lookup_resolve(hdev, BDADDR_ANY, - NAME_PENDING); + NAME_PENDING); if (!e) { mgmt_pending_remove(cmd); err = cmd_complete(sk, hdev->id, @@ -2600,8 +2596,8 @@ static int set_fast_connectable(struct sock *sk, struct hci_dev *hdev, if (cp->val) { type = PAGE_SCAN_TYPE_INTERLACED; - /* 22.5 msec page scan interval */ - acp.interval = __constant_cpu_to_le16(0x0024); + /* 160 msec page scan interval */ + acp.interval = __constant_cpu_to_le16(0x0100); } else { type = PAGE_SCAN_TYPE_STANDARD; /* default */ @@ -2647,7 +2643,7 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev, sizeof(struct mgmt_ltk_info); if (expected_len != len) { BT_ERR("load_keys: expected %u bytes, got %u bytes", - len, expected_len); + len, expected_len); return cmd_status(sk, hdev->id, MGMT_OP_LOAD_LONG_TERM_KEYS, EINVAL); } @@ -2772,7 +2768,7 @@ int mgmt_control(struct sock *sk, struct msghdr *msg, size_t msglen) } if (opcode >= ARRAY_SIZE(mgmt_handlers) || - mgmt_handlers[opcode].func == NULL) { + mgmt_handlers[opcode].func == NULL) { BT_DBG("Unknown op %u", opcode); err = cmd_status(sk, index, opcode, 
MGMT_STATUS_UNKNOWN_COMMAND); @@ -2780,7 +2776,7 @@ int mgmt_control(struct sock *sk, struct msghdr *msg, size_t msglen) } if ((hdev && opcode < MGMT_OP_READ_INFO) || - (!hdev && opcode >= MGMT_OP_READ_INFO)) { + (!hdev && opcode >= MGMT_OP_READ_INFO)) { err = cmd_status(sk, index, opcode, MGMT_STATUS_INVALID_INDEX); goto done; @@ -2789,7 +2785,7 @@ int mgmt_control(struct sock *sk, struct msghdr *msg, size_t msglen) handler = &mgmt_handlers[opcode]; if ((handler->var_len && len < handler->data_len) || - (!handler->var_len && len != handler->data_len)) { + (!handler->var_len && len != handler->data_len)) { err = cmd_status(sk, index, opcode, MGMT_STATUS_INVALID_PARAMS); goto done; @@ -2973,7 +2969,7 @@ int mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key, bacpy(&ev.key.addr.bdaddr, &key->bdaddr); ev.key.addr.type = BDADDR_BREDR; ev.key.type = key->type; - memcpy(ev.key.val, key->val, 16); + memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE); ev.key.pin_len = key->pin_len; return mgmt_event(MGMT_EV_NEW_LINK_KEY, hdev, &ev, sizeof(ev), NULL); @@ -3108,7 +3104,7 @@ int mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, mgmt_pending_remove(cmd); mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp, - hdev); + hdev); return err; } @@ -3198,7 +3194,7 @@ int mgmt_user_confirm_request(struct hci_dev *hdev, bdaddr_t *bdaddr, } int mgmt_user_passkey_request(struct hci_dev *hdev, bdaddr_t *bdaddr, - u8 link_type, u8 addr_type) + u8 link_type, u8 addr_type) { struct mgmt_ev_user_passkey_request ev; @@ -3212,8 +3208,8 @@ int mgmt_user_passkey_request(struct hci_dev *hdev, bdaddr_t *bdaddr, } static int user_pairing_resp_complete(struct hci_dev *hdev, bdaddr_t *bdaddr, - u8 link_type, u8 addr_type, u8 status, - u8 opcode) + u8 link_type, u8 addr_type, u8 status, + u8 opcode) { struct pending_cmd *cmd; struct mgmt_rp_user_confirm_reply rp; @@ -3244,7 +3240,8 @@ int mgmt_user_confirm_neg_reply_complete(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, u8 addr_type, u8 status) { return user_pairing_resp_complete(hdev, bdaddr, link_type, addr_type, - status, MGMT_OP_USER_CONFIRM_NEG_REPLY); + status, + MGMT_OP_USER_CONFIRM_NEG_REPLY); } int mgmt_user_passkey_reply_complete(struct hci_dev *hdev, bdaddr_t *bdaddr, @@ -3258,7 +3255,8 @@ int mgmt_user_passkey_neg_reply_complete(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, u8 addr_type, u8 status) { return user_pairing_resp_complete(hdev, bdaddr, link_type, addr_type, - status, MGMT_OP_USER_PASSKEY_NEG_REPLY); + status, + MGMT_OP_USER_PASSKEY_NEG_REPLY); } int mgmt_auth_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, @@ -3537,9 +3535,9 @@ int mgmt_device_found(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, ev->addr.type = link_to_bdaddr(link_type, addr_type); ev->rssi = rssi; if (cfm_name) - ev->flags[0] |= MGMT_DEV_FOUND_CONFIRM_NAME; + ev->flags |= cpu_to_le32(MGMT_DEV_FOUND_CONFIRM_NAME); if (!ssp) - ev->flags[0] |= MGMT_DEV_FOUND_LEGACY_PAIRING; + ev->flags |= cpu_to_le32(MGMT_DEV_FOUND_LEGACY_PAIRING); if (eir_len > 0) memcpy(ev->eir, eir, eir_len); @@ -3549,7 +3547,6 @@ int mgmt_device_found(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, dev_class, 3); ev->eir_len = cpu_to_le16(eir_len); - ev_size = sizeof(*ev) + eir_len; return mgmt_event(MGMT_EV_DEVICE_FOUND, hdev, ev, ev_size, NULL); diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c index 8a602388f1e7..c75107ef8920 100644 --- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -26,22 +26,8 @@ 
*/ #include <linux/module.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/signal.h> -#include <linux/init.h> -#include <linux/wait.h> -#include <linux/device.h> #include <linux/debugfs.h> -#include <linux/seq_file.h> -#include <linux/net.h> -#include <linux/mutex.h> #include <linux/kthread.h> -#include <linux/slab.h> - -#include <net/sock.h> -#include <linux/uaccess.h> #include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> @@ -115,14 +101,14 @@ static void rfcomm_session_del(struct rfcomm_session *s); #define __get_rpn_stop_bits(line) (((line) >> 2) & 0x1) #define __get_rpn_parity(line) (((line) >> 3) & 0x7) -static inline void rfcomm_schedule(void) +static void rfcomm_schedule(void) { if (!rfcomm_thread) return; wake_up_process(rfcomm_thread); } -static inline void rfcomm_session_put(struct rfcomm_session *s) +static void rfcomm_session_put(struct rfcomm_session *s) { if (atomic_dec_and_test(&s->refcnt)) rfcomm_session_del(s); @@ -227,7 +213,7 @@ static int rfcomm_l2sock_create(struct socket **sock) return err; } -static inline int rfcomm_check_security(struct rfcomm_dlc *d) +static int rfcomm_check_security(struct rfcomm_dlc *d) { struct sock *sk = d->session->sock->sk; struct l2cap_conn *conn = l2cap_pi(sk)->chan->conn; @@ -1750,7 +1736,7 @@ static void rfcomm_process_connect(struct rfcomm_session *s) /* Send data queued for the DLC. * Return number of frames left in the queue. */ -static inline int rfcomm_process_tx(struct rfcomm_dlc *d) +static int rfcomm_process_tx(struct rfcomm_dlc *d) { struct sk_buff *skb; int err; @@ -1798,7 +1784,7 @@ static inline int rfcomm_process_tx(struct rfcomm_dlc *d) return skb_queue_len(&d->tx_queue); } -static inline void rfcomm_process_dlcs(struct rfcomm_session *s) +static void rfcomm_process_dlcs(struct rfcomm_session *s) { struct rfcomm_dlc *d; struct list_head *p, *n; @@ -1858,7 +1844,7 @@ static inline void rfcomm_process_dlcs(struct rfcomm_session *s) } } -static inline void rfcomm_process_rx(struct rfcomm_session *s) +static void rfcomm_process_rx(struct rfcomm_session *s) { struct socket *sock = s->sock; struct sock *sk = sock->sk; @@ -1883,7 +1869,7 @@ static inline void rfcomm_process_rx(struct rfcomm_session *s) } } -static inline void rfcomm_accept_connection(struct rfcomm_session *s) +static void rfcomm_accept_connection(struct rfcomm_session *s) { struct socket *sock = s->sock, *nsock; int err; @@ -1917,7 +1903,7 @@ static inline void rfcomm_accept_connection(struct rfcomm_session *s) sock_release(nsock); } -static inline void rfcomm_check_connection(struct rfcomm_session *s) +static void rfcomm_check_connection(struct rfcomm_session *s) { struct sock *sk = s->sock->sk; @@ -1941,7 +1927,7 @@ static inline void rfcomm_check_connection(struct rfcomm_session *s) } } -static inline void rfcomm_process_sessions(void) +static void rfcomm_process_sessions(void) { struct list_head *p, *n; diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index e8707debb864..7e1e59645c05 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -25,27 +25,8 @@ * RFCOMM sockets. 
*/ -#include <linux/module.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/socket.h> -#include <linux/skbuff.h> -#include <linux/list.h> -#include <linux/device.h> +#include <linux/export.h> #include <linux/debugfs.h> -#include <linux/seq_file.h> -#include <linux/security.h> -#include <net/sock.h> - -#include <linux/uaccess.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> diff --git a/net/bluetooth/rfcomm/tty.c b/net/bluetooth/rfcomm/tty.c index d1820ff14aee..cb960773c002 100644 --- a/net/bluetooth/rfcomm/tty.c +++ b/net/bluetooth/rfcomm/tty.c @@ -31,11 +31,6 @@ #include <linux/tty_driver.h> #include <linux/tty_flip.h> -#include <linux/capability.h> -#include <linux/slab.h> -#include <linux/skbuff.h> -#include <linux/workqueue.h> - #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> #include <net/bluetooth/rfcomm.h> @@ -132,7 +127,7 @@ static struct rfcomm_dev *__rfcomm_dev_get(int id) return NULL; } -static inline struct rfcomm_dev *rfcomm_dev_get(int id) +static struct rfcomm_dev *rfcomm_dev_get(int id) { struct rfcomm_dev *dev; @@ -345,7 +340,7 @@ static void rfcomm_wfree(struct sk_buff *skb) tty_port_put(&dev->port); } -static inline void rfcomm_set_owner_w(struct sk_buff *skb, struct rfcomm_dev *dev) +static void rfcomm_set_owner_w(struct sk_buff *skb, struct rfcomm_dev *dev) { tty_port_get(&dev->port); atomic_add(skb->truesize, &dev->wmem_alloc); diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index cbdd313659a7..40bbe25dcff7 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -25,26 +25,8 @@ /* Bluetooth SCO sockets. 
*/ #include <linux/module.h> - -#include <linux/types.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/poll.h> -#include <linux/fcntl.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/socket.h> -#include <linux/skbuff.h> -#include <linux/device.h> #include <linux/debugfs.h> #include <linux/seq_file.h> -#include <linux/list.h> -#include <linux/security.h> -#include <net/sock.h> - -#include <linux/uaccess.h> #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> @@ -123,7 +105,7 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon) return conn; } -static inline struct sock *sco_chan_get(struct sco_conn *conn) +static struct sock *sco_chan_get(struct sco_conn *conn) { struct sock *sk = NULL; sco_conn_lock(conn); @@ -157,7 +139,8 @@ static int sco_conn_del(struct hci_conn *hcon, int err) return 0; } -static inline int sco_chan_add(struct sco_conn *conn, struct sock *sk, struct sock *parent) +static int sco_chan_add(struct sco_conn *conn, struct sock *sk, + struct sock *parent) { int err = 0; @@ -228,7 +211,7 @@ done: return err; } -static inline int sco_send_frame(struct sock *sk, struct msghdr *msg, int len) +static int sco_send_frame(struct sock *sk, struct msghdr *msg, int len) { struct sco_conn *conn = sco_pi(sk)->conn; struct sk_buff *skb; @@ -254,7 +237,7 @@ static inline int sco_send_frame(struct sock *sk, struct msghdr *msg, int len) return len; } -static inline void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) +static void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) { struct sock *sk = sco_chan_get(conn); @@ -523,7 +506,7 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen goto done; err = bt_sock_wait_state(sk, BT_CONNECTED, - sock_sndtimeo(sk, flags & O_NONBLOCK)); + sock_sndtimeo(sk, flags & O_NONBLOCK)); done: release_sock(sk); @@ -788,7 +771,7 @@ static int sco_sock_shutdown(struct socket *sock, int how) if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime) err = bt_sock_wait_state(sk, BT_CLOSED, - sk->sk_lingertime); + sk->sk_lingertime); } release_sock(sk); return err; @@ -878,7 +861,7 @@ static void sco_conn_ready(struct sco_conn *conn) bh_lock_sock(parent); sk = sco_sock_alloc(sock_net(parent), NULL, - BTPROTO_SCO, GFP_ATOMIC); + BTPROTO_SCO, GFP_ATOMIC); if (!sk) { bh_unlock_sock(parent); goto done; @@ -907,7 +890,7 @@ done: /* ----- SCO interface with lower layer (HCI) ----- */ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr) { - register struct sock *sk; + struct sock *sk; struct hlist_node *node; int lm = 0; @@ -920,7 +903,7 @@ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr) continue; if (!bacmp(&bt_sk(sk)->src, &hdev->bdaddr) || - !bacmp(&bt_sk(sk)->src, BDADDR_ANY)) { + !bacmp(&bt_sk(sk)->src, BDADDR_ANY)) { lm |= HCI_LM_ACCEPT; break; } @@ -981,7 +964,7 @@ static int sco_debugfs_show(struct seq_file *f, void *p) sk_for_each(sk, node, &sco_sk_list.head) { seq_printf(f, "%s %s %d\n", batostr(&bt_sk(sk)->src), - batostr(&bt_sk(sk)->dst), sk->sk_state); + batostr(&bt_sk(sk)->dst), sk->sk_state); } read_unlock(&sco_sk_list.lock); @@ -1044,8 +1027,8 @@ int __init sco_init(void) } if (bt_debugfs) { - sco_debugfs = debugfs_create_file("sco", 0444, - bt_debugfs, NULL, &sco_debugfs_fops); + sco_debugfs = debugfs_create_file("sco", 0444, bt_debugfs, + NULL, &sco_debugfs_fops); if (!sco_debugfs) BT_ERR("Failed to create SCO debug file"); } diff --git a/net/bluetooth/smp.c 
b/net/bluetooth/smp.c index 37df4e9b3896..16ef0dc85a0a 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -20,14 +20,15 @@ SOFTWARE IS DISCLAIMED. */ +#include <linux/crypto.h> +#include <linux/scatterlist.h> +#include <crypto/b128ops.h> + #include <net/bluetooth/bluetooth.h> #include <net/bluetooth/hci_core.h> #include <net/bluetooth/l2cap.h> #include <net/bluetooth/mgmt.h> #include <net/bluetooth/smp.h> -#include <linux/crypto.h> -#include <linux/scatterlist.h> -#include <crypto/b128ops.h> #define SMP_TIMEOUT msecs_to_jiffies(30000) diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index 929e48aed444..333484537600 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -127,9 +127,9 @@ static struct rtnl_link_stats64 *br_get_stats64(struct net_device *dev, const struct br_cpu_netstats *bstats = per_cpu_ptr(br->stats, cpu); do { - start = u64_stats_fetch_begin(&bstats->syncp); + start = u64_stats_fetch_begin_bh(&bstats->syncp); memcpy(&tmp, bstats, sizeof(tmp)); - } while (u64_stats_fetch_retry(&bstats->syncp, start)); + } while (u64_stats_fetch_retry_bh(&bstats->syncp, start)); sum.tx_bytes += tmp.tx_bytes; sum.tx_packets += tmp.tx_packets; sum.rx_bytes += tmp.rx_bytes; @@ -246,10 +246,7 @@ int br_netpoll_enable(struct net_bridge_port *p) if (!np) goto out; - np->dev = p->dev; - strlcpy(np->dev_name, p->dev->name, IFNAMSIZ); - - err = __netpoll_setup(np); + err = __netpoll_setup(np, p->dev); if (err) { kfree(np); goto out; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index b66581208cb2..241743417f49 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -540,10 +540,11 @@ static struct net_bridge_mdb_entry *br_multicast_get_group( if (mdb->size >= max) { max *= 2; - if (unlikely(max >= br->hash_max)) { - br_warn(br, "Multicast hash table maximum " - "reached, disabling snooping: %s, %d\n", - port ? port->dev->name : br->dev->name, max); + if (unlikely(max > br->hash_max)) { + br_warn(br, "Multicast hash table maximum of %d " + "reached, disabling snooping: %s\n", + br->hash_max, + port ? 
port->dev->name : br->dev->name); err = -E2BIG; disable: br->multicast_disabled = 1; @@ -1160,7 +1161,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, goto out; } mld = (struct mld_msg *) icmp6_hdr(skb); - max_delay = msecs_to_jiffies(htons(mld->mld_maxdelay)); + max_delay = msecs_to_jiffies(ntohs(mld->mld_maxdelay)); if (max_delay) group = &mld->mld_mca; } else if (skb->len >= sizeof(*mld2q)) { diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c index e41456bd3cc6..68e8f364bbf8 100644 --- a/net/bridge/br_netfilter.c +++ b/net/bridge/br_netfilter.c @@ -111,7 +111,13 @@ static inline __be16 pppoe_proto(const struct sk_buff *skb) pppoe_proto(skb) == htons(PPP_IPV6) && \ brnf_filter_pppoe_tagged) -static void fake_update_pmtu(struct dst_entry *dst, u32 mtu) +static void fake_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) +{ +} + +static void fake_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) { } @@ -120,7 +126,9 @@ static u32 *fake_cow_metrics(struct dst_entry *dst, unsigned long old) return NULL; } -static struct neighbour *fake_neigh_lookup(const struct dst_entry *dst, const void *daddr) +static struct neighbour *fake_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr) { return NULL; } @@ -134,6 +142,7 @@ static struct dst_ops fake_dst_ops = { .family = AF_INET, .protocol = cpu_to_be16(ETH_P_IP), .update_pmtu = fake_update_pmtu, + .redirect = fake_redirect, .cow_metrics = fake_cow_metrics, .neigh_lookup = fake_neigh_lookup, .mtu = fake_mtu, @@ -373,19 +382,29 @@ static int br_nf_pre_routing_finish_bridge(struct sk_buff *skb) if (!skb->dev) goto free_skb; dst = skb_dst(skb); - neigh = dst_get_neighbour_noref(dst); - if (neigh->hh.hh_len) { - neigh_hh_bridge(&neigh->hh, skb); - skb->dev = nf_bridge->physindev; - return br_handle_frame_finish(skb); - } else { - /* the neighbour function below overwrites the complete - * MAC header, so we save the Ethernet source address and - * protocol number. */ - skb_copy_from_linear_data_offset(skb, -(ETH_HLEN-ETH_ALEN), skb->nf_bridge->data, ETH_HLEN-ETH_ALEN); - /* tell br_dev_xmit to continue with forwarding */ - nf_bridge->mask |= BRNF_BRIDGED_DNAT; - return neigh->output(neigh, skb); + neigh = dst_neigh_lookup_skb(dst, skb); + if (neigh) { + int ret; + + if (neigh->hh.hh_len) { + neigh_hh_bridge(&neigh->hh, skb); + skb->dev = nf_bridge->physindev; + ret = br_handle_frame_finish(skb); + } else { + /* the neighbour function below overwrites the complete + * MAC header, so we save the Ethernet source address and + * protocol number. 
+ */ + skb_copy_from_linear_data_offset(skb, + -(ETH_HLEN-ETH_ALEN), + skb->nf_bridge->data, + ETH_HLEN-ETH_ALEN); + /* tell br_dev_xmit to continue with forwarding */ + nf_bridge->mask |= BRNF_BRIDGED_DNAT; + ret = neigh->output(neigh, skb); + } + neigh_release(neigh); + return ret; } free_skb: kfree_skb(skb); @@ -764,9 +783,9 @@ static unsigned int br_nf_forward_ip(unsigned int hook, struct sk_buff *skb, return NF_DROP; if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb)) - pf = PF_INET; + pf = NFPROTO_IPV4; else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) - pf = PF_INET6; + pf = NFPROTO_IPV6; else return NF_ACCEPT; @@ -778,13 +797,13 @@ static unsigned int br_nf_forward_ip(unsigned int hook, struct sk_buff *skb, nf_bridge->mask |= BRNF_PKT_TYPE; } - if (pf == PF_INET && br_parse_ip_options(skb)) + if (pf == NFPROTO_IPV4 && br_parse_ip_options(skb)) return NF_DROP; /* The physdev module checks on this */ nf_bridge->mask |= BRNF_BRIDGED; nf_bridge->physoutdev = skb->dev; - if (pf == PF_INET) + if (pf == NFPROTO_IPV4) skb->protocol = htons(ETH_P_IP); else skb->protocol = htons(ETH_P_IPV6); @@ -871,9 +890,9 @@ static unsigned int br_nf_post_routing(unsigned int hook, struct sk_buff *skb, return NF_DROP; if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb)) - pf = PF_INET; + pf = NFPROTO_IPV4; else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) - pf = PF_INET6; + pf = NFPROTO_IPV6; else return NF_ACCEPT; @@ -886,7 +905,7 @@ static unsigned int br_nf_post_routing(unsigned int hook, struct sk_buff *skb, nf_bridge_pull_encap_header(skb); nf_bridge_save_header(skb); - if (pf == PF_INET) + if (pf == NFPROTO_IPV4) skb->protocol = htons(ETH_P_IP); else skb->protocol = htons(ETH_P_IPV6); @@ -919,49 +938,49 @@ static struct nf_hook_ops br_nf_ops[] __read_mostly = { { .hook = br_nf_pre_routing, .owner = THIS_MODULE, - .pf = PF_BRIDGE, + .pf = NFPROTO_BRIDGE, .hooknum = NF_BR_PRE_ROUTING, .priority = NF_BR_PRI_BRNF, }, { .hook = br_nf_local_in, .owner = THIS_MODULE, - .pf = PF_BRIDGE, + .pf = NFPROTO_BRIDGE, .hooknum = NF_BR_LOCAL_IN, .priority = NF_BR_PRI_BRNF, }, { .hook = br_nf_forward_ip, .owner = THIS_MODULE, - .pf = PF_BRIDGE, + .pf = NFPROTO_BRIDGE, .hooknum = NF_BR_FORWARD, .priority = NF_BR_PRI_BRNF - 1, }, { .hook = br_nf_forward_arp, .owner = THIS_MODULE, - .pf = PF_BRIDGE, + .pf = NFPROTO_BRIDGE, .hooknum = NF_BR_FORWARD, .priority = NF_BR_PRI_BRNF, }, { .hook = br_nf_post_routing, .owner = THIS_MODULE, - .pf = PF_BRIDGE, + .pf = NFPROTO_BRIDGE, .hooknum = NF_BR_POST_ROUTING, .priority = NF_BR_PRI_LAST, }, { .hook = ip_sabotage_in, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_PRE_ROUTING, .priority = NF_IP_PRI_FIRST, }, { .hook = ip_sabotage_in, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_PRE_ROUTING, .priority = NF_IP6_PRI_FIRST, }, diff --git a/net/bridge/netfilter/ebt_ulog.c b/net/bridge/netfilter/ebt_ulog.c index 5449294bdd5e..19063473c71f 100644 --- a/net/bridge/netfilter/ebt_ulog.c +++ b/net/bridge/netfilter/ebt_ulog.c @@ -145,19 +145,24 @@ static void ebt_ulog_packet(unsigned int hooknr, const struct sk_buff *skb, if (!ub->skb) { if (!(ub->skb = ulog_alloc_skb(size))) - goto alloc_failure; + goto unlock; } else if (size > skb_tailroom(ub->skb)) { ulog_send(group); if (!(ub->skb = ulog_alloc_skb(size))) - goto alloc_failure; + goto unlock; } - nlh = NLMSG_PUT(ub->skb, 0, ub->qlen, 0, - size - NLMSG_ALIGN(sizeof(*nlh))); + nlh = nlmsg_put(ub->skb, 0, ub->qlen, 0, + size - 
NLMSG_ALIGN(sizeof(*nlh)), 0); + if (!nlh) { + kfree_skb(ub->skb); + ub->skb = NULL; + goto unlock; + } ub->qlen++; - pm = NLMSG_DATA(nlh); + pm = nlmsg_data(nlh); /* Fill in the ulog data */ pm->version = EBT_ULOG_VERSION; @@ -209,14 +214,6 @@ static void ebt_ulog_packet(unsigned int hooknr, const struct sk_buff *skb, unlock: spin_unlock_bh(lock); - - return; - -nlmsg_failure: - pr_debug("error during NLMSG_PUT. This should " - "not happen, please report to author.\n"); -alloc_failure: - goto unlock; } /* this function is registered with the netfilter core */ @@ -285,6 +282,9 @@ static int __init ebt_ulog_init(void) { int ret; int i; + struct netlink_kernel_cfg cfg = { + .groups = EBT_ULOG_MAXNLGROUPS, + }; if (nlbufsiz >= 128*1024) { pr_warning("Netlink buffer has to be <= 128kB," @@ -299,8 +299,7 @@ static int __init ebt_ulog_init(void) } ebtulognl = netlink_kernel_create(&init_net, NETLINK_NFLOG, - EBT_ULOG_MAXNLGROUPS, NULL, NULL, - THIS_MODULE); + THIS_MODULE, &cfg); if (!ebtulognl) ret = -ENOMEM; else if ((ret = xt_register_target(&ebt_ulog_tg_reg)) != 0) diff --git a/net/caif/caif_dev.c b/net/caif/caif_dev.c index 8c83c175b03a..1ae1d9cb278d 100644 --- a/net/caif/caif_dev.c +++ b/net/caif/caif_dev.c @@ -90,11 +90,8 @@ static int caifd_refcnt_read(struct caif_device_entry *e) /* Allocate new CAIF device. */ static struct caif_device_entry *caif_device_alloc(struct net_device *dev) { - struct caif_device_entry_list *caifdevs; struct caif_device_entry *caifd; - caifdevs = caif_device_list(dev_net(dev)); - caifd = kzalloc(sizeof(*caifd), GFP_KERNEL); if (!caifd) return NULL; @@ -131,6 +128,11 @@ void caif_flow_cb(struct sk_buff *skb) rcu_read_lock(); caifd = caif_get(skb->dev); + + WARN_ON(caifd == NULL); + if (caifd == NULL) + return; + caifd_hold(caifd); rcu_read_unlock(); diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c index 047cd0eec022..44f270fc2d06 100644 --- a/net/caif/cfctrl.c +++ b/net/caif/cfctrl.c @@ -175,15 +175,17 @@ static void init_info(struct caif_payload_info *info, struct cfctrl *cfctrl) void cfctrl_enum_req(struct cflayer *layer, u8 physlinkid) { + struct cfpkt *pkt; struct cfctrl *cfctrl = container_obj(layer); - struct cfpkt *pkt = cfpkt_create(CFPKT_CTRL_PKT_LEN); struct cflayer *dn = cfctrl->serv.layer.dn; - if (!pkt) - return; + if (!dn) { pr_debug("not able to send enum request\n"); return; } + pkt = cfpkt_create(CFPKT_CTRL_PKT_LEN); + if (!pkt) + return; caif_assert(offsetof(struct cfctrl, serv.layer) == 0); init_info(cfpkt_info(pkt), cfctrl); cfpkt_info(pkt)->dev_info->id = physlinkid; @@ -302,18 +304,17 @@ int cfctrl_linkdown_req(struct cflayer *layer, u8 channelid, struct cflayer *client) { int ret; + struct cfpkt *pkt; struct cfctrl *cfctrl = container_obj(layer); - struct cfpkt *pkt = cfpkt_create(CFPKT_CTRL_PKT_LEN); struct cflayer *dn = cfctrl->serv.layer.dn; - if (!pkt) - return -ENOMEM; - if (!dn) { pr_debug("not able to send link-down request\n"); return -ENODEV; } - + pkt = cfpkt_create(CFPKT_CTRL_PKT_LEN); + if (!pkt) + return -ENOMEM; cfpkt_addbdy(pkt, CFCTRL_CMD_LINK_DESTROY); cfpkt_addbdy(pkt, channelid); init_info(cfpkt_info(pkt), cfctrl); diff --git a/net/can/af_can.c b/net/can/af_can.c index 0ce2ad0696da..821022a7214f 100644 --- a/net/can/af_can.c +++ b/net/can/af_can.c @@ -41,6 +41,7 @@ */ #include <linux/module.h> +#include <linux/stddef.h> #include <linux/init.h> #include <linux/kmod.h> #include <linux/slab.h> @@ -220,30 +221,46 @@ static int can_create(struct net *net, struct socket *sock, int protocol, * -ENOBUFS on full driver queue 
(see net_xmit_errno()) * -ENOMEM when local loopback failed at calling skb_clone() * -EPERM when trying to send on a non-CAN interface + * -EMSGSIZE CAN frame size is bigger than CAN interface MTU * -EINVAL when the skb->data does not contain a valid CAN frame */ int can_send(struct sk_buff *skb, int loop) { struct sk_buff *newskb = NULL; - struct can_frame *cf = (struct can_frame *)skb->data; - int err; + struct canfd_frame *cfd = (struct canfd_frame *)skb->data; + int err = -EINVAL; + + if (skb->len == CAN_MTU) { + skb->protocol = htons(ETH_P_CAN); + if (unlikely(cfd->len > CAN_MAX_DLEN)) + goto inval_skb; + } else if (skb->len == CANFD_MTU) { + skb->protocol = htons(ETH_P_CANFD); + if (unlikely(cfd->len > CANFD_MAX_DLEN)) + goto inval_skb; + } else + goto inval_skb; - if (skb->len != sizeof(struct can_frame) || cf->can_dlc > 8) { - kfree_skb(skb); - return -EINVAL; + /* + * Make sure the CAN frame can pass the selected CAN netdevice. + * As structs can_frame and canfd_frame are similar, we can provide + * CAN FD frames to legacy CAN drivers as long as the length is <= 8 + */ + if (unlikely(skb->len > skb->dev->mtu && cfd->len > CAN_MAX_DLEN)) { + err = -EMSGSIZE; + goto inval_skb; } - if (skb->dev->type != ARPHRD_CAN) { - kfree_skb(skb); - return -EPERM; + if (unlikely(skb->dev->type != ARPHRD_CAN)) { + err = -EPERM; + goto inval_skb; } - if (!(skb->dev->flags & IFF_UP)) { - kfree_skb(skb); - return -ENETDOWN; + if (unlikely(!(skb->dev->flags & IFF_UP))) { + err = -ENETDOWN; + goto inval_skb; } - skb->protocol = htons(ETH_P_CAN); skb_reset_network_header(skb); skb_reset_transport_header(skb); @@ -300,6 +317,10 @@ int can_send(struct sk_buff *skb, int loop) can_stats.tx_frames_delta++; return 0; + +inval_skb: + kfree_skb(skb); + return err; } EXPORT_SYMBOL(can_send); @@ -334,8 +355,8 @@ static struct dev_rcv_lists *find_dev_rcv_lists(struct net_device *dev) * relevant bits for the filter. * * The filter can be inverted (CAN_INV_FILTER bit set in can_id) or it can - * filter for error frames (CAN_ERR_FLAG bit set in mask). For error frames - * there is a special filterlist and a special rx path filter handling. + * filter for error messages (CAN_ERR_FLAG bit set in mask). For error msg + * frames there is a special filterlist and a special rx path filter handling. * * Return: * Pointer to optimal filterlist for the given can_id/mask pair. @@ -347,7 +368,7 @@ static struct hlist_head *find_rcv_list(canid_t *can_id, canid_t *mask, { canid_t inv = *can_id & CAN_INV_FILTER; /* save flag before masking */ - /* filter for error frames in extra filterlist */ + /* filter for error message frames in extra filterlist */ if (*mask & CAN_ERR_FLAG) { /* clear CAN_ERR_FLAG in filter entry */ *mask &= CAN_ERR_MASK; @@ -408,7 +429,7 @@ static struct hlist_head *find_rcv_list(canid_t *can_id, canid_t *mask, * <received_can_id> & mask == can_id & mask * * The filter can be inverted (CAN_INV_FILTER bit set in can_id) or it can - * filter for error frames (CAN_ERR_FLAG bit set in mask). + * filter for error message frames (CAN_ERR_FLAG bit set in mask). * * The provided pointer to the sk_buff is guaranteed to be valid as long as * the callback function is running. 
The callback function must *not* free @@ -578,7 +599,7 @@ static int can_rcv_filter(struct dev_rcv_lists *d, struct sk_buff *skb) return 0; if (can_id & CAN_ERR_FLAG) { - /* check for error frame entries only */ + /* check for error message frame entries only */ hlist_for_each_entry_rcu(r, n, &d->rx[RX_ERR], list) { if (can_id & r->mask) { deliver(skb, r); @@ -632,24 +653,11 @@ static int can_rcv_filter(struct dev_rcv_lists *d, struct sk_buff *skb) return matches; } -static int can_rcv(struct sk_buff *skb, struct net_device *dev, - struct packet_type *pt, struct net_device *orig_dev) +static void can_receive(struct sk_buff *skb, struct net_device *dev) { struct dev_rcv_lists *d; - struct can_frame *cf = (struct can_frame *)skb->data; int matches; - if (!net_eq(dev_net(dev), &init_net)) - goto drop; - - if (WARN_ONCE(dev->type != ARPHRD_CAN || - skb->len != sizeof(struct can_frame) || - cf->can_dlc > 8, - "PF_CAN: dropped non conform skbuf: " - "dev type %d, len %d, can_dlc %d\n", - dev->type, skb->len, cf->can_dlc)) - goto drop; - /* update statistics */ can_stats.rx_frames++; can_stats.rx_frames_delta++; @@ -673,7 +681,49 @@ static int can_rcv(struct sk_buff *skb, struct net_device *dev, can_stats.matches++; can_stats.matches_delta++; } +} +static int can_rcv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *pt, struct net_device *orig_dev) +{ + struct canfd_frame *cfd = (struct canfd_frame *)skb->data; + + if (unlikely(!net_eq(dev_net(dev), &init_net))) + goto drop; + + if (WARN_ONCE(dev->type != ARPHRD_CAN || + skb->len != CAN_MTU || + cfd->len > CAN_MAX_DLEN, + "PF_CAN: dropped non conform CAN skbuf: " + "dev type %d, len %d, datalen %d\n", + dev->type, skb->len, cfd->len)) + goto drop; + + can_receive(skb, dev); + return NET_RX_SUCCESS; + +drop: + kfree_skb(skb); + return NET_RX_DROP; +} + +static int canfd_rcv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *pt, struct net_device *orig_dev) +{ + struct canfd_frame *cfd = (struct canfd_frame *)skb->data; + + if (unlikely(!net_eq(dev_net(dev), &init_net))) + goto drop; + + if (WARN_ONCE(dev->type != ARPHRD_CAN || + skb->len != CANFD_MTU || + cfd->len > CANFD_MAX_DLEN, + "PF_CAN: dropped non conform CAN FD skbuf: " + "dev type %d, len %d, datalen %d\n", + dev->type, skb->len, cfd->len)) + goto drop; + + can_receive(skb, dev); return NET_RX_SUCCESS; drop: @@ -807,10 +857,14 @@ static int can_notifier(struct notifier_block *nb, unsigned long msg, static struct packet_type can_packet __read_mostly = { .type = cpu_to_be16(ETH_P_CAN), - .dev = NULL, .func = can_rcv, }; +static struct packet_type canfd_packet __read_mostly = { + .type = cpu_to_be16(ETH_P_CANFD), + .func = canfd_rcv, +}; + static const struct net_proto_family can_family_ops = { .family = PF_CAN, .create = can_create, @@ -824,6 +878,12 @@ static struct notifier_block can_netdev_notifier __read_mostly = { static __init int can_init(void) { + /* check for correct padding to be able to use the structs similarly */ + BUILD_BUG_ON(offsetof(struct can_frame, can_dlc) != + offsetof(struct canfd_frame, len) || + offsetof(struct can_frame, data) != + offsetof(struct canfd_frame, data)); + printk(banner); memset(&can_rx_alldev_list, 0, sizeof(can_rx_alldev_list)); @@ -846,6 +906,7 @@ static __init int can_init(void) sock_register(&can_family_ops); register_netdevice_notifier(&can_netdev_notifier); dev_add_pack(&can_packet); + dev_add_pack(&canfd_packet); return 0; } @@ -860,6 +921,7 @@ static __exit void can_exit(void) can_remove_proc(); /* protocol 
unregister */ + dev_remove_pack(&canfd_packet); dev_remove_pack(&can_packet); unregister_netdevice_notifier(&can_netdev_notifier); sock_unregister(PF_CAN); diff --git a/net/can/af_can.h b/net/can/af_can.h index fd882dbadad3..1dccb4c33894 100644 --- a/net/can/af_can.h +++ b/net/can/af_can.h @@ -104,6 +104,9 @@ struct s_pstats { unsigned long rcv_entries_max; }; +/* receive filters subscribed for 'all' CAN devices */ +extern struct dev_rcv_lists can_rx_alldev_list; + /* function prototypes for the CAN networklayer procfs (proc.c) */ extern void can_init_proc(void); extern void can_remove_proc(void); diff --git a/net/can/gw.c b/net/can/gw.c index b41acf25668f..b54d5e695b03 100644 --- a/net/can/gw.c +++ b/net/can/gw.c @@ -444,11 +444,14 @@ static int cgw_notifier(struct notifier_block *nb, return NOTIFY_DONE; } -static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) +static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj, int type, + u32 pid, u32 seq, int flags) { struct cgw_frame_mod mb; struct rtcanmsg *rtcan; - struct nlmsghdr *nlh = nlmsg_put(skb, 0, 0, 0, sizeof(*rtcan), 0); + struct nlmsghdr *nlh; + + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*rtcan), flags); if (!nlh) return -EMSGSIZE; @@ -462,15 +465,11 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) if (gwj->handled_frames) { if (nla_put_u32(skb, CGW_HANDLED, gwj->handled_frames) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(u32)); } if (gwj->dropped_frames) { if (nla_put_u32(skb, CGW_DROPPED, gwj->dropped_frames) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(u32)); } /* check non default settings of attributes */ @@ -480,8 +479,6 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) mb.modtype = gwj->mod.modtype.and; if (nla_put(skb, CGW_MOD_AND, sizeof(mb), &mb) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(mb)); } if (gwj->mod.modtype.or) { @@ -489,8 +486,6 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) mb.modtype = gwj->mod.modtype.or; if (nla_put(skb, CGW_MOD_OR, sizeof(mb), &mb) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(mb)); } if (gwj->mod.modtype.xor) { @@ -498,8 +493,6 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) mb.modtype = gwj->mod.modtype.xor; if (nla_put(skb, CGW_MOD_XOR, sizeof(mb), &mb) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(mb)); } if (gwj->mod.modtype.set) { @@ -507,26 +500,18 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) mb.modtype = gwj->mod.modtype.set; if (nla_put(skb, CGW_MOD_SET, sizeof(mb), &mb) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(mb)); } if (gwj->mod.csumfunc.crc8) { if (nla_put(skb, CGW_CS_CRC8, CGW_CS_CRC8_LEN, &gwj->mod.csum.crc8) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + \ - NLA_ALIGN(CGW_CS_CRC8_LEN); } if (gwj->mod.csumfunc.xor) { if (nla_put(skb, CGW_CS_XOR, CGW_CS_XOR_LEN, &gwj->mod.csum.xor) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + \ - NLA_ALIGN(CGW_CS_XOR_LEN); } if (gwj->gwtype == CGW_TYPE_CAN_CAN) { @@ -535,23 +520,16 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj) if (nla_put(skb, CGW_FILTER, sizeof(struct can_filter), &gwj->ccgw.filter) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + - NLA_ALIGN(sizeof(struct can_filter)); } if (nla_put_u32(skb, CGW_SRC_IF, gwj->ccgw.src_idx) < 0) goto cancel; - else - nlh->nlmsg_len += 
NLA_HDRLEN + NLA_ALIGN(sizeof(u32)); if (nla_put_u32(skb, CGW_DST_IF, gwj->ccgw.dst_idx) < 0) goto cancel; - else - nlh->nlmsg_len += NLA_HDRLEN + NLA_ALIGN(sizeof(u32)); } - return skb->len; + return nlmsg_end(skb, nlh); cancel: nlmsg_cancel(skb, nlh); @@ -571,7 +549,8 @@ static int cgw_dump_jobs(struct sk_buff *skb, struct netlink_callback *cb) if (idx < s_idx) goto cont; - if (cgw_put_job(skb, gwj) < 0) + if (cgw_put_job(skb, gwj, RTM_NEWROUTE, NETLINK_CB(cb->skb).pid, + cb->nlh->nlmsg_seq, NLM_F_MULTI) < 0) break; cont: idx++; @@ -583,6 +562,18 @@ cont: return skb->len; } +static const struct nla_policy cgw_policy[CGW_MAX+1] = { + [CGW_MOD_AND] = { .len = sizeof(struct cgw_frame_mod) }, + [CGW_MOD_OR] = { .len = sizeof(struct cgw_frame_mod) }, + [CGW_MOD_XOR] = { .len = sizeof(struct cgw_frame_mod) }, + [CGW_MOD_SET] = { .len = sizeof(struct cgw_frame_mod) }, + [CGW_CS_XOR] = { .len = sizeof(struct cgw_csum_xor) }, + [CGW_CS_CRC8] = { .len = sizeof(struct cgw_csum_crc8) }, + [CGW_SRC_IF] = { .type = NLA_U32 }, + [CGW_DST_IF] = { .type = NLA_U32 }, + [CGW_FILTER] = { .len = sizeof(struct can_filter) }, +}; + /* check for common and gwtype specific attributes */ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, u8 gwtype, void *gwtypeattr) @@ -595,14 +586,14 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, /* initialize modification & checksum data space */ memset(mod, 0, sizeof(*mod)); - err = nlmsg_parse(nlh, sizeof(struct rtcanmsg), tb, CGW_MAX, NULL); + err = nlmsg_parse(nlh, sizeof(struct rtcanmsg), tb, CGW_MAX, + cgw_policy); if (err < 0) return err; /* check for AND/OR/XOR/SET modifications */ - if (tb[CGW_MOD_AND] && - nla_len(tb[CGW_MOD_AND]) == CGW_MODATTR_LEN) { + if (tb[CGW_MOD_AND]) { nla_memcpy(&mb, tb[CGW_MOD_AND], CGW_MODATTR_LEN); canframecpy(&mod->modframe.and, &mb.cf); @@ -618,8 +609,7 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, mod->modfunc[modidx++] = mod_and_data; } - if (tb[CGW_MOD_OR] && - nla_len(tb[CGW_MOD_OR]) == CGW_MODATTR_LEN) { + if (tb[CGW_MOD_OR]) { nla_memcpy(&mb, tb[CGW_MOD_OR], CGW_MODATTR_LEN); canframecpy(&mod->modframe.or, &mb.cf); @@ -635,8 +625,7 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, mod->modfunc[modidx++] = mod_or_data; } - if (tb[CGW_MOD_XOR] && - nla_len(tb[CGW_MOD_XOR]) == CGW_MODATTR_LEN) { + if (tb[CGW_MOD_XOR]) { nla_memcpy(&mb, tb[CGW_MOD_XOR], CGW_MODATTR_LEN); canframecpy(&mod->modframe.xor, &mb.cf); @@ -652,8 +641,7 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, mod->modfunc[modidx++] = mod_xor_data; } - if (tb[CGW_MOD_SET] && - nla_len(tb[CGW_MOD_SET]) == CGW_MODATTR_LEN) { + if (tb[CGW_MOD_SET]) { nla_memcpy(&mb, tb[CGW_MOD_SET], CGW_MODATTR_LEN); canframecpy(&mod->modframe.set, &mb.cf); @@ -672,11 +660,8 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, /* check for checksum operations after CAN frame modifications */ if (modidx) { - if (tb[CGW_CS_CRC8] && - nla_len(tb[CGW_CS_CRC8]) == CGW_CS_CRC8_LEN) { - - struct cgw_csum_crc8 *c = (struct cgw_csum_crc8 *)\ - nla_data(tb[CGW_CS_CRC8]); + if (tb[CGW_CS_CRC8]) { + struct cgw_csum_crc8 *c = nla_data(tb[CGW_CS_CRC8]); err = cgw_chk_csum_parms(c->from_idx, c->to_idx, c->result_idx); @@ -699,11 +684,8 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, mod->csumfunc.crc8 = cgw_csum_crc8_neg; } - if (tb[CGW_CS_XOR] && - nla_len(tb[CGW_CS_XOR]) == CGW_CS_XOR_LEN) { - - struct cgw_csum_xor *c = (struct cgw_csum_xor *)\ - 
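The cgw_policy table above lets nlmsg_parse() enforce attribute sizes and types up front, which is what allows the per-attribute nla_len() checks to be dropped at the use sites. A minimal sketch of the same pattern for a hypothetical netlink family (the DEMO_* names and demo_* types are illustrative, not part of this patch):

#include <net/netlink.h>

static const struct nla_policy demo_policy[DEMO_ATTR_MAX + 1] = {
	[DEMO_ATTR_IFINDEX] = { .type = NLA_U32 },
	[DEMO_ATTR_FILTER]  = { .len = sizeof(struct demo_filter) },
};

static int demo_parse(struct nlmsghdr *nlh, struct demo_cfg *cfg)
{
	struct nlattr *tb[DEMO_ATTR_MAX + 1];
	int err;

	/* attributes violating the policy are rejected here, so no
	 * nla_len() checks are needed when the attributes are read */
	err = nlmsg_parse(nlh, sizeof(struct demo_hdr), tb,
			  DEMO_ATTR_MAX, demo_policy);
	if (err < 0)
		return err;

	if (tb[DEMO_ATTR_IFINDEX])
		cfg->ifindex = nla_get_u32(tb[DEMO_ATTR_IFINDEX]);

	if (tb[DEMO_ATTR_FILTER])
		nla_memcpy(&cfg->filter, tb[DEMO_ATTR_FILTER],
			   sizeof(cfg->filter));

	return 0;
}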
nla_data(tb[CGW_CS_XOR]); + if (tb[CGW_CS_XOR]) { + struct cgw_csum_xor *c = nla_data(tb[CGW_CS_XOR]); err = cgw_chk_csum_parms(c->from_idx, c->to_idx, c->result_idx); @@ -735,8 +717,7 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, memset(ccgw, 0, sizeof(*ccgw)); /* check for can_filter in attributes */ - if (tb[CGW_FILTER] && - nla_len(tb[CGW_FILTER]) == sizeof(struct can_filter)) + if (tb[CGW_FILTER]) nla_memcpy(&ccgw->filter, tb[CGW_FILTER], sizeof(struct can_filter)); @@ -746,13 +727,8 @@ static int cgw_parse_attr(struct nlmsghdr *nlh, struct cf_mod *mod, if (!tb[CGW_SRC_IF] || !tb[CGW_DST_IF]) return err; - if (nla_len(tb[CGW_SRC_IF]) == sizeof(u32)) - nla_memcpy(&ccgw->src_idx, tb[CGW_SRC_IF], - sizeof(u32)); - - if (nla_len(tb[CGW_DST_IF]) == sizeof(u32)) - nla_memcpy(&ccgw->dst_idx, tb[CGW_DST_IF], - sizeof(u32)); + ccgw->src_idx = nla_get_u32(tb[CGW_SRC_IF]); + ccgw->dst_idx = nla_get_u32(tb[CGW_DST_IF]); /* both indices set to 0 for flushing all routing entries */ if (!ccgw->src_idx && !ccgw->dst_idx) diff --git a/net/can/proc.c b/net/can/proc.c index ba873c36d2fd..3b6dd3180492 100644 --- a/net/can/proc.c +++ b/net/can/proc.c @@ -83,9 +83,6 @@ static const char rx_list_name[][8] = { [RX_EFF] = "rx_eff", }; -/* receive filters subscribed for 'all' CAN devices */ -extern struct dev_rcv_lists can_rx_alldev_list; - /* * af_can statistics stuff */ diff --git a/net/can/raw.c b/net/can/raw.c index 46cca3a91d19..3e9c89356a93 100644 --- a/net/can/raw.c +++ b/net/can/raw.c @@ -82,6 +82,7 @@ struct raw_sock { struct notifier_block notifier; int loopback; int recv_own_msgs; + int fd_frames; int count; /* number of active filters */ struct can_filter dfilter; /* default/single filter */ struct can_filter *filter; /* pointer to filter(s) */ @@ -119,6 +120,14 @@ static void raw_rcv(struct sk_buff *oskb, void *data) if (!ro->recv_own_msgs && oskb->sk == sk) return; + /* do not pass frames with DLC > 8 to a legacy socket */ + if (!ro->fd_frames) { + struct canfd_frame *cfd = (struct canfd_frame *)oskb->data; + + if (unlikely(cfd->len > CAN_MAX_DLEN)) + return; + } + /* clone the given skb to be able to enqueue it into the rcv queue */ skb = skb_clone(oskb, GFP_ATOMIC); if (!skb) @@ -291,6 +300,7 @@ static int raw_init(struct sock *sk) /* set default loopback behaviour */ ro->loopback = 1; ro->recv_own_msgs = 0; + ro->fd_frames = 0; /* set notifier */ ro->notifier.notifier_call = raw_notifier; @@ -569,6 +579,15 @@ static int raw_setsockopt(struct socket *sock, int level, int optname, break; + case CAN_RAW_FD_FRAMES: + if (optlen != sizeof(ro->fd_frames)) + return -EINVAL; + + if (copy_from_user(&ro->fd_frames, optval, optlen)) + return -EFAULT; + + break; + default: return -ENOPROTOOPT; } @@ -627,6 +646,12 @@ static int raw_getsockopt(struct socket *sock, int level, int optname, val = &ro->recv_own_msgs; break; + case CAN_RAW_FD_FRAMES: + if (len > sizeof(int)) + len = sizeof(int); + val = &ro->fd_frames; + break; + default: return -ENOPROTOOPT; } @@ -662,8 +687,13 @@ static int raw_sendmsg(struct kiocb *iocb, struct socket *sock, } else ifindex = ro->ifindex; - if (size != sizeof(struct can_frame)) - return -EINVAL; + if (ro->fd_frames) { + if (unlikely(size != CANFD_MTU && size != CAN_MTU)) + return -EINVAL; + } else { + if (unlikely(size != CAN_MTU)) + return -EINVAL; + } dev = dev_get_by_index(&init_net, ifindex); if (!dev) @@ -705,7 +735,9 @@ static int raw_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t size, int flags) { struct sock *sk = 
sock->sk; + struct raw_sock *ro = raw_sk(sk); struct sk_buff *skb; + int rxmtu; int err = 0; int noblock; @@ -716,10 +748,20 @@ static int raw_recvmsg(struct kiocb *iocb, struct socket *sock, if (!skb) return err; - if (size < skb->len) + /* + * when serving a legacy socket the DLC <= 8 is already checked inside + * raw_rcv(). Now check if we need to pass a canfd_frame to a legacy + * socket and cut the possible CANFD_MTU/CAN_MTU length to CAN_MTU + */ + if (!ro->fd_frames) + rxmtu = CAN_MTU; + else + rxmtu = skb->len; + + if (size < rxmtu) msg->msg_flags |= MSG_TRUNC; else - size = skb->len; + size = rxmtu; err = memcpy_toiovec(msg->msg_iov, skb->data, size); if (err < 0) { diff --git a/net/ceph/pagelist.c b/net/ceph/pagelist.c index 13cb409a7bba..665cd23020ff 100644 --- a/net/ceph/pagelist.c +++ b/net/ceph/pagelist.c @@ -72,8 +72,7 @@ int ceph_pagelist_append(struct ceph_pagelist *pl, const void *buf, size_t len) } EXPORT_SYMBOL(ceph_pagelist_append); -/** - * Allocate enough pages for a pagelist to append the given amount +/* Allocate enough pages for a pagelist to append the given amount * of data without without allocating. * Returns: 0 on success, -ENOMEM on error. */ @@ -95,9 +94,7 @@ int ceph_pagelist_reserve(struct ceph_pagelist *pl, size_t space) } EXPORT_SYMBOL(ceph_pagelist_reserve); -/** - * Free any pages that have been preallocated. - */ +/* Free any pages that have been preallocated. */ int ceph_pagelist_free_reserve(struct ceph_pagelist *pl) { while (!list_empty(&pl->free_list)) { @@ -112,9 +109,7 @@ int ceph_pagelist_free_reserve(struct ceph_pagelist *pl) } EXPORT_SYMBOL(ceph_pagelist_free_reserve); -/** - * Create a truncation point. - */ +/* Create a truncation point. */ void ceph_pagelist_set_cursor(struct ceph_pagelist *pl, struct ceph_pagelist_cursor *c) { @@ -124,8 +119,7 @@ void ceph_pagelist_set_cursor(struct ceph_pagelist *pl, } EXPORT_SYMBOL(ceph_pagelist_set_cursor); -/** - * Truncate a pagelist to the given point. Move extra pages to reserve. +/* Truncate a pagelist to the given point. Move extra pages to reserve. * This won't sleep. 
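With CAN_RAW_FD_FRAMES enabled, raw_sendmsg() above accepts either CAN_MTU or CANFD_MTU sized writes and raw_recvmsg() no longer cuts receptions down to CAN_MTU. A minimal userspace sketch of opting in and sending one CAN FD frame (the interface name "can0" and the CAN ID are assumptions; most error handling omitted):

#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int send_one_fd_frame(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct canfd_frame cfd = { .can_id = 0x123, .len = 12 };
	int enable = 1;
	int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

	if (s < 0)
		return -1;

	/* opt in; a legacy raw socket only ever sees CAN_MTU sized frames */
	setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES,
		   &enable, sizeof(enable));

	addr.can_ifindex = if_nametoindex("can0");
	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(s);
		return -1;
	}

	memset(cfd.data, 0xaa, cfd.len);
	/* an FD-enabled socket may write CAN_MTU or CANFD_MTU bytes */
	if (write(s, &cfd, CANFD_MTU) != CANFD_MTU) {
		close(s);
		return -1;
	}

	close(s);
	return 0;
}

Per the can_send() hunk earlier, such a frame is still rejected with -EMSGSIZE when the bound device's MTU cannot carry it and the payload exceeds CAN_MAX_DLEN.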
* Returns: 0 on success, * -EINVAL if the pagelist doesn't match the trunc point pagelist diff --git a/net/compat.c b/net/compat.c index 1b96281892de..74ed1d7a84a2 100644 --- a/net/compat.c +++ b/net/compat.c @@ -221,6 +221,8 @@ int put_cmsg_compat(struct msghdr *kmsg, int level, int type, int len, void *dat { struct compat_cmsghdr __user *cm = (struct compat_cmsghdr __user *) kmsg->msg_control; struct compat_cmsghdr cmhdr; + struct compat_timeval ctv; + struct compat_timespec cts[3]; int cmlen; if (cm == NULL || kmsg->msg_controllen < sizeof(*cm)) { @@ -229,8 +231,6 @@ int put_cmsg_compat(struct msghdr *kmsg, int level, int type, int len, void *dat } if (!COMPAT_USE_64BIT_TIME) { - struct compat_timeval ctv; - struct compat_timespec cts[3]; if (level == SOL_SOCKET && type == SCM_TIMESTAMP) { struct timeval *tv = (struct timeval *)data; ctv.tv_sec = tv->tv_sec; diff --git a/net/core/datagram.c b/net/core/datagram.c index ae6acf6a3dea..0337e2b76862 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -248,7 +248,6 @@ void skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb) unlock_sock_fast(sk, slow); /* skb is now orphaned, can be freed outside of locked section */ - trace_kfree_skb(skb, skb_free_datagram_locked); __kfree_skb(skb); } EXPORT_SYMBOL(skb_free_datagram_locked); diff --git a/net/core/dev.c b/net/core/dev.c index 1cb0d8a6aa6c..0ebaea16632f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -1632,6 +1632,8 @@ static inline int deliver_skb(struct sk_buff *skb, struct packet_type *pt_prev, struct net_device *orig_dev) { + if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) + return -ENOMEM; atomic_inc(&skb->users); return pt_prev->func(skb, skb->dev, pt_prev, orig_dev); } @@ -1691,7 +1693,8 @@ static void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev) rcu_read_unlock(); } -/* netif_setup_tc - Handle tc mappings on real_num_tx_queues change +/** + * netif_setup_tc - Handle tc mappings on real_num_tx_queues change * @dev: Network device * @txq: number of queues available * @@ -1793,6 +1796,18 @@ int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq) EXPORT_SYMBOL(netif_set_real_num_rx_queues); #endif +/** + * netif_get_num_default_rss_queues - default number of RSS queues + * + * This routine should set an upper limit on the number of RSS queues + * used by default by multiqueue devices. 
+ */ +int netif_get_num_default_rss_queues(void) +{ + return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES, num_online_cpus()); +} +EXPORT_SYMBOL(netif_get_num_default_rss_queues); + static inline void __netif_reschedule(struct Qdisc *q) { struct softnet_data *sd; @@ -2459,6 +2474,23 @@ static DEFINE_PER_CPU(int, xmit_recursion); #define RECURSION_LIMIT 10 /** + * dev_loopback_xmit - loop back @skb + * @skb: buffer to transmit + */ +int dev_loopback_xmit(struct sk_buff *skb) +{ + skb_reset_mac_header(skb); + __skb_pull(skb, skb_network_offset(skb)); + skb->pkt_type = PACKET_LOOPBACK; + skb->ip_summed = CHECKSUM_UNNECESSARY; + WARN_ON(!skb_dst(skb)); + skb_dst_force(skb); + netif_rx_ni(skb); + return 0; +} +EXPORT_SYMBOL(dev_loopback_xmit); + +/** * dev_queue_xmit - transmit a buffer * @skb: buffer to transmit * @@ -3141,8 +3173,6 @@ static int __netif_receive_skb(struct sk_buff *skb) if (netpoll_receive_skb(skb)) return NET_RX_DROP; - if (!skb->skb_iif) - skb->skb_iif = skb->dev->ifindex; orig_dev = skb->dev; skb_reset_network_header(skb); @@ -3154,6 +3184,7 @@ static int __netif_receive_skb(struct sk_buff *skb) rcu_read_lock(); another_round: + skb->skb_iif = skb->dev->ifindex; __this_cpu_inc(softnet_data.processed); @@ -3232,7 +3263,10 @@ ncls: } if (pt_prev) { - ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev); + if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) + ret = -ENOMEM; + else + ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev); } else { atomic_long_inc(&skb->dev->rx_dropped); kfree_skb(skb); @@ -5646,7 +5680,7 @@ int netdev_refcnt_read(const struct net_device *dev) } EXPORT_SYMBOL(netdev_refcnt_read); -/* +/** * netdev_wait_allrefs - wait until all references are gone. * * This is called when unregistering network devices. diff --git a/net/core/dst.c b/net/core/dst.c index 43d94cedbf7c..069d51d29414 100644 --- a/net/core/dst.c +++ b/net/core/dst.c @@ -94,7 +94,7 @@ loop: * But we do not have state "obsoleted, but * referenced by parent", so it is right. 
*/ - if (dst->obsolete > 1) + if (dst->obsolete > 0) continue; ___dst_free(dst); @@ -152,7 +152,7 @@ EXPORT_SYMBOL(dst_discard); const u32 dst_default_metrics[RTAX_MAX]; void *dst_alloc(struct dst_ops *ops, struct net_device *dev, - int initial_ref, int initial_obsolete, int flags) + int initial_ref, int initial_obsolete, unsigned short flags) { struct dst_entry *dst; @@ -171,7 +171,6 @@ void *dst_alloc(struct dst_ops *ops, struct net_device *dev, dst_init_metrics(dst, dst_default_metrics, true); dst->expires = 0UL; dst->path = dst; - RCU_INIT_POINTER(dst->_neighbour, NULL); #ifdef CONFIG_XFRM dst->xfrm = NULL; #endif @@ -188,6 +187,7 @@ void *dst_alloc(struct dst_ops *ops, struct net_device *dev, dst->__use = 0; dst->lastuse = jiffies; dst->flags = flags; + dst->pending_confirm = 0; dst->next = NULL; if (!(flags & DST_NOCOUNT)) dst_entries_add(ops, 1); @@ -202,7 +202,7 @@ static void ___dst_free(struct dst_entry *dst) */ if (dst->dev == NULL || !(dst->dev->flags&IFF_UP)) dst->input = dst->output = dst_discard; - dst->obsolete = 2; + dst->obsolete = DST_OBSOLETE_DEAD; } void __dst_free(struct dst_entry *dst) @@ -224,19 +224,12 @@ EXPORT_SYMBOL(__dst_free); struct dst_entry *dst_destroy(struct dst_entry * dst) { struct dst_entry *child; - struct neighbour *neigh; smp_rmb(); again: - neigh = rcu_dereference_protected(dst->_neighbour, 1); child = dst->child; - if (neigh) { - RCU_INIT_POINTER(dst->_neighbour, NULL); - neigh_release(neigh); - } - if (!(dst->flags & DST_NOCOUNT)) dst_entries_add(dst->ops, -1); @@ -360,19 +353,9 @@ static void dst_ifdown(struct dst_entry *dst, struct net_device *dev, if (!unregister) { dst->input = dst->output = dst_discard; } else { - struct neighbour *neigh; - dst->dev = dev_net(dst->dev)->loopback_dev; dev_hold(dst->dev); dev_put(dev); - rcu_read_lock(); - neigh = dst_get_neighbour_noref(dst); - if (neigh && neigh->dev == dev) { - neigh->dev = dst->dev; - dev_hold(dst->dev); - dev_put(dev); - } - rcu_read_unlock(); } } diff --git a/net/core/ethtool.c b/net/core/ethtool.c index 9c2afb480270..cbf033dcaf1f 100644 --- a/net/core/ethtool.c +++ b/net/core/ethtool.c @@ -729,6 +729,40 @@ static int ethtool_set_wol(struct net_device *dev, char __user *useraddr) return dev->ethtool_ops->set_wol(dev, &wol); } +static int ethtool_get_eee(struct net_device *dev, char __user *useraddr) +{ + struct ethtool_eee edata; + int rc; + + if (!dev->ethtool_ops->get_eee) + return -EOPNOTSUPP; + + memset(&edata, 0, sizeof(struct ethtool_eee)); + edata.cmd = ETHTOOL_GEEE; + rc = dev->ethtool_ops->get_eee(dev, &edata); + + if (rc) + return rc; + + if (copy_to_user(useraddr, &edata, sizeof(edata))) + return -EFAULT; + + return 0; +} + +static int ethtool_set_eee(struct net_device *dev, char __user *useraddr) +{ + struct ethtool_eee edata; + + if (!dev->ethtool_ops->set_eee) + return -EOPNOTSUPP; + + if (copy_from_user(&edata, useraddr, sizeof(edata))) + return -EFAULT; + + return dev->ethtool_ops->set_eee(dev, &edata); +} + static int ethtool_nway_reset(struct net_device *dev) { if (!dev->ethtool_ops->nway_reset) @@ -1409,6 +1443,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr) case ETHTOOL_GSET: case ETHTOOL_GDRVINFO: case ETHTOOL_GMSGLVL: + case ETHTOOL_GLINK: case ETHTOOL_GCOALESCE: case ETHTOOL_GRINGPARAM: case ETHTOOL_GPAUSEPARAM: @@ -1417,6 +1452,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr) case ETHTOOL_GSG: case ETHTOOL_GSSET_INFO: case ETHTOOL_GSTRINGS: + case ETHTOOL_GSTATS: case ETHTOOL_GTSO: case ETHTOOL_GPERMADDR: case ETHTOOL_GUFO: @@ -1429,8 +1465,11 @@ 
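The ETHTOOL_GEEE/SEEE handlers above simply round-trip a struct ethtool_eee through two new driver hooks. A sketch of the driver side for a hypothetical demo driver, assuming only the eee_enabled member of struct ethtool_eee (the struct layout itself is not shown in this hunk):

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int demo_get_eee(struct net_device *dev, struct ethtool_eee *edata)
{
	struct demo_priv *priv = netdev_priv(dev);

	/* report the state the MAC/PHY currently maintains */
	edata->eee_enabled = priv->eee_enabled;
	return 0;
}

static int demo_set_eee(struct net_device *dev, struct ethtool_eee *edata)
{
	struct demo_priv *priv = netdev_priv(dev);

	priv->eee_enabled = edata->eee_enabled;
	return demo_program_eee(priv);	/* push the setting to hardware */
}

static const struct ethtool_ops demo_ethtool_ops = {
	.get_eee = demo_get_eee,
	.set_eee = demo_set_eee,
};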
int dev_ethtool(struct net *net, struct ifreq *ifr) case ETHTOOL_GRXCLSRLCNT: case ETHTOOL_GRXCLSRULE: case ETHTOOL_GRXCLSRLALL: + case ETHTOOL_GRXFHINDIR: case ETHTOOL_GFEATURES: + case ETHTOOL_GCHANNELS: case ETHTOOL_GET_TS_INFO: + case ETHTOOL_GEEE: break; default: if (!capable(CAP_NET_ADMIN)) @@ -1471,6 +1510,12 @@ int dev_ethtool(struct net *net, struct ifreq *ifr) rc = ethtool_set_value_void(dev, useraddr, dev->ethtool_ops->set_msglevel); break; + case ETHTOOL_GEEE: + rc = ethtool_get_eee(dev, useraddr); + break; + case ETHTOOL_SEEE: + rc = ethtool_set_eee(dev, useraddr); + break; case ETHTOOL_NWAY_RST: rc = ethtool_nway_reset(dev); break; diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c index 72cceb79d0d4..ab7db83236c9 100644 --- a/net/core/fib_rules.c +++ b/net/core/fib_rules.c @@ -151,6 +151,8 @@ static void fib_rules_cleanup_ops(struct fib_rules_ops *ops) list_for_each_entry_safe(rule, tmp, &ops->rules_list, list) { list_del_rcu(&rule->list); + if (ops->delete) + ops->delete(rule); fib_rule_put(rule); } } @@ -499,6 +501,8 @@ static int fib_nl_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) notify_rule_change(RTM_DELRULE, rule, ops, nlh, NETLINK_CB(skb).pid); + if (ops->delete) + ops->delete(rule); fib_rule_put(rule); flush_route_cache(ops); rules_ops_put(ops); diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c index a225089df5b6..466820b6e344 100644 --- a/net/core/flow_dissector.c +++ b/net/core/flow_dissector.c @@ -4,6 +4,7 @@ #include <linux/ipv6.h> #include <linux/if_vlan.h> #include <net/ip.h> +#include <net/ipv6.h> #include <linux/if_tunnel.h> #include <linux/if_pppox.h> #include <linux/ppp_defs.h> @@ -55,8 +56,8 @@ ipv6: return false; ip_proto = iph->nexthdr; - flow->src = iph->saddr.s6_addr32[3]; - flow->dst = iph->daddr.s6_addr32[3]; + flow->src = (__force __be32)ipv6_addr_hash(&iph->saddr); + flow->dst = (__force __be32)ipv6_addr_hash(&iph->daddr); nhoff += sizeof(struct ipv6hdr); break; } diff --git a/net/core/neighbour.c b/net/core/neighbour.c index d81d026138f0..117afaf51268 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -474,8 +474,8 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net, } EXPORT_SYMBOL(neigh_lookup_nodev); -struct neighbour *neigh_create(struct neigh_table *tbl, const void *pkey, - struct net_device *dev) +struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey, + struct net_device *dev, bool want_ref) { u32 hash_val; int key_len = tbl->key_len; @@ -535,14 +535,16 @@ struct neighbour *neigh_create(struct neigh_table *tbl, const void *pkey, n1 = rcu_dereference_protected(n1->next, lockdep_is_held(&tbl->lock))) { if (dev == n1->dev && !memcmp(n1->primary_key, pkey, key_len)) { - neigh_hold(n1); + if (want_ref) + neigh_hold(n1); rc = n1; goto out_tbl_unlock; } } n->dead = 0; - neigh_hold(n); + if (want_ref) + neigh_hold(n); rcu_assign_pointer(n->next, rcu_dereference_protected(nht->hash_buckets[hash_val], lockdep_is_held(&tbl->lock))); @@ -558,7 +560,7 @@ out_neigh_release: neigh_release(n); goto out; } -EXPORT_SYMBOL(neigh_create); +EXPORT_SYMBOL(__neigh_create); static u32 pneigh_hash(const void *pkey, int key_len) { @@ -1199,10 +1201,23 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new, write_unlock_bh(&neigh->lock); rcu_read_lock(); - /* On shaper/eql skb->dst->neighbour != neigh :( */ - if (dst && (n2 = dst_get_neighbour_noref(dst)) != NULL) - n1 = n2; + + /* Why not just use 'neigh' as-is? 
The problem is that + * things such as shaper, eql, and sch_teql can end up + * using alternative, different, neigh objects to output + * the packet in the output path. So what we need to do + * here is re-lookup the top-level neigh in the path so + * we can reinject the packet there. + */ + n2 = NULL; + if (dst) { + n2 = dst_neigh_lookup_skb(dst, skb); + if (n2) + n1 = n2; + } n1->output(n1, skb); + if (n2) + neigh_release(n2); rcu_read_unlock(); write_lock_bh(&neigh->lock); diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c index fdf9e61d0651..72607174ea5a 100644 --- a/net/core/net-sysfs.c +++ b/net/core/net-sysfs.c @@ -417,72 +417,6 @@ static struct attribute_group netstat_group = { .name = "statistics", .attrs = netstat_attrs, }; - -#ifdef CONFIG_WIRELESS_EXT_SYSFS -/* helper function that does all the locking etc for wireless stats */ -static ssize_t wireless_show(struct device *d, char *buf, - ssize_t (*format)(const struct iw_statistics *, - char *)) -{ - struct net_device *dev = to_net_dev(d); - const struct iw_statistics *iw; - ssize_t ret = -EINVAL; - - if (!rtnl_trylock()) - return restart_syscall(); - if (dev_isalive(dev)) { - iw = get_wireless_stats(dev); - if (iw) - ret = (*format)(iw, buf); - } - rtnl_unlock(); - - return ret; -} - -/* show function template for wireless fields */ -#define WIRELESS_SHOW(name, field, format_string) \ -static ssize_t format_iw_##name(const struct iw_statistics *iw, char *buf) \ -{ \ - return sprintf(buf, format_string, iw->field); \ -} \ -static ssize_t show_iw_##name(struct device *d, \ - struct device_attribute *attr, char *buf) \ -{ \ - return wireless_show(d, buf, format_iw_##name); \ -} \ -static DEVICE_ATTR(name, S_IRUGO, show_iw_##name, NULL) - -WIRELESS_SHOW(status, status, fmt_hex); -WIRELESS_SHOW(link, qual.qual, fmt_dec); -WIRELESS_SHOW(level, qual.level, fmt_dec); -WIRELESS_SHOW(noise, qual.noise, fmt_dec); -WIRELESS_SHOW(nwid, discard.nwid, fmt_dec); -WIRELESS_SHOW(crypt, discard.code, fmt_dec); -WIRELESS_SHOW(fragment, discard.fragment, fmt_dec); -WIRELESS_SHOW(misc, discard.misc, fmt_dec); -WIRELESS_SHOW(retries, discard.retries, fmt_dec); -WIRELESS_SHOW(beacon, miss.beacon, fmt_dec); - -static struct attribute *wireless_attrs[] = { - &dev_attr_status.attr, - &dev_attr_link.attr, - &dev_attr_level.attr, - &dev_attr_noise.attr, - &dev_attr_nwid.attr, - &dev_attr_crypt.attr, - &dev_attr_fragment.attr, - &dev_attr_retries.attr, - &dev_attr_misc.attr, - &dev_attr_beacon.attr, - NULL -}; - -static struct attribute_group wireless_group = { - .name = "wireless", - .attrs = wireless_attrs, -}; -#endif #endif /* CONFIG_SYSFS */ #ifdef CONFIG_RPS @@ -1463,14 +1397,6 @@ int netdev_register_kobject(struct net_device *net) groups++; *groups++ = &netstat_group; -#ifdef CONFIG_WIRELESS_EXT_SYSFS - if (net->ieee80211_ptr) - *groups++ = &wireless_group; -#ifdef CONFIG_WIRELESS_EXT - else if (net->wireless_handlers) - *groups++ = &wireless_group; -#endif -#endif #endif /* CONFIG_SYSFS */ error = device_add(dev); diff --git a/net/core/netpoll.c b/net/core/netpoll.c index f9f40b932e4b..b4c90e42b443 100644 --- a/net/core/netpoll.c +++ b/net/core/netpoll.c @@ -715,14 +715,16 @@ int netpoll_parse_options(struct netpoll *np, char *opt) } EXPORT_SYMBOL(netpoll_parse_options); -int __netpoll_setup(struct netpoll *np) +int __netpoll_setup(struct netpoll *np, struct net_device *ndev) { - struct net_device *ndev = np->dev; struct netpoll_info *npinfo; const struct net_device_ops *ops; unsigned long flags; int err; + np->dev = ndev; + 
strlcpy(np->dev_name, ndev->name, IFNAMSIZ); + if ((ndev->priv_flags & IFF_DISABLE_NETPOLL) || !ndev->netdev_ops->ndo_poll_controller) { np_err(np, "%s doesn't support polling, aborting\n", @@ -851,13 +853,11 @@ int netpoll_setup(struct netpoll *np) np_info(np, "local IP %pI4\n", &np->local_ip); } - np->dev = ndev; - /* fill up the skb queue */ refill_skbs(); rtnl_lock(); - err = __netpoll_setup(np); + err = __netpoll_setup(np, ndev); rtnl_unlock(); if (err) diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c index b2e9caa1ad1a..63d15e8f80e9 100644 --- a/net/core/netprio_cgroup.c +++ b/net/core/netprio_cgroup.c @@ -25,6 +25,8 @@ #include <net/sock.h> #include <net/netprio_cgroup.h> +#include <linux/fdtable.h> + #define PRIOIDX_SZ 128 static unsigned long prioidx_map[PRIOIDX_SZ]; @@ -272,6 +274,56 @@ out_free_devname: return ret; } +void net_prio_attach(struct cgroup *cgrp, struct cgroup_taskset *tset) +{ + struct task_struct *p; + char *tmp = kzalloc(sizeof(char) * PATH_MAX, GFP_KERNEL); + + if (!tmp) { + pr_warn("Unable to attach cgrp due to alloc failure!\n"); + return; + } + + cgroup_taskset_for_each(p, cgrp, tset) { + unsigned int fd; + struct fdtable *fdt; + struct files_struct *files; + + task_lock(p); + files = p->files; + if (!files) { + task_unlock(p); + continue; + } + + rcu_read_lock(); + fdt = files_fdtable(files); + for (fd = 0; fd < fdt->max_fds; fd++) { + char *path; + struct file *file; + struct socket *sock; + unsigned long s; + int rv, err = 0; + + file = fcheck_files(files, fd); + if (!file) + continue; + + path = d_path(&file->f_path, tmp, PAGE_SIZE); + rv = sscanf(path, "socket:[%lu]", &s); + if (rv <= 0) + continue; + + sock = sock_from_file(file, &err); + if (!err) + sock_update_netprioidx(sock->sk, p); + } + rcu_read_unlock(); + task_unlock(p); + } + kfree(tmp); +} + static struct cftype ss_files[] = { { .name = "prioidx", @@ -289,6 +341,7 @@ struct cgroup_subsys net_prio_subsys = { .name = "net_prio", .create = cgrp_create, .destroy = cgrp_destroy, + .attach = net_prio_attach, #ifdef CONFIG_NETPRIO_CGROUP .subsys_id = net_prio_subsys_id, #endif diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 21318d15bbc3..334b930e0de3 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -541,19 +541,6 @@ static const int rta_max[RTM_NR_FAMILIES] = [RTM_FAM(RTM_NEWACTION)] = TCAA_MAX, }; -void __rta_fill(struct sk_buff *skb, int attrtype, int attrlen, const void *data) -{ - struct rtattr *rta; - int size = RTA_LENGTH(attrlen); - - rta = (struct rtattr *)skb_put(skb, RTA_ALIGN(size)); - rta->rta_type = attrtype; - rta->rta_len = size; - memcpy(RTA_DATA(rta), data, attrlen); - memset(RTA_DATA(rta) + attrlen, 0, RTA_ALIGN(size) - size); -} -EXPORT_SYMBOL(__rta_fill); - int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid, unsigned int group, int echo) { struct sock *rtnl = net->rtnl; @@ -628,7 +615,7 @@ nla_put_failure: EXPORT_SYMBOL(rtnetlink_put_metrics); int rtnl_put_cacheinfo(struct sk_buff *skb, struct dst_entry *dst, u32 id, - u32 ts, u32 tsage, long expires, u32 error) + long expires, u32 error) { struct rta_cacheinfo ci = { .rta_lastuse = jiffies_to_clock_t(jiffies - dst->lastuse), @@ -636,8 +623,6 @@ int rtnl_put_cacheinfo(struct sk_buff *skb, struct dst_entry *dst, u32 id, .rta_clntref = atomic_read(&(dst->__refcnt)), .rta_error = error, .rta_id = id, - .rta_ts = ts, - .rta_tsage = tsage, }; if (expires) @@ -786,6 +771,8 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev, + nla_total_size(4) /* IFLA_LINK 
*/ + nla_total_size(4) /* IFLA_MASTER */ + nla_total_size(4) /* IFLA_PROMISCUITY */ + + nla_total_size(4) /* IFLA_NUM_TX_QUEUES */ + + nla_total_size(4) /* IFLA_NUM_RX_QUEUES */ + nla_total_size(1) /* IFLA_OPERSTATE */ + nla_total_size(1) /* IFLA_LINKMODE */ + nla_total_size(ext_filter_mask @@ -904,6 +891,10 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev, nla_put_u32(skb, IFLA_MTU, dev->mtu) || nla_put_u32(skb, IFLA_GROUP, dev->group) || nla_put_u32(skb, IFLA_PROMISCUITY, dev->promiscuity) || + nla_put_u32(skb, IFLA_NUM_TX_QUEUES, dev->num_tx_queues) || +#ifdef CONFIG_RPS + nla_put_u32(skb, IFLA_NUM_RX_QUEUES, dev->num_rx_queues) || +#endif (dev->ifindex != dev->iflink && nla_put_u32(skb, IFLA_LINK, dev->iflink)) || (dev->master && @@ -1121,6 +1112,8 @@ const struct nla_policy ifla_policy[IFLA_MAX+1] = { [IFLA_AF_SPEC] = { .type = NLA_NESTED }, [IFLA_EXT_MASK] = { .type = NLA_U32 }, [IFLA_PROMISCUITY] = { .type = NLA_U32 }, + [IFLA_NUM_TX_QUEUES] = { .type = NLA_U32 }, + [IFLA_NUM_RX_QUEUES] = { .type = NLA_U32 }, }; EXPORT_SYMBOL(ifla_policy); @@ -1639,17 +1632,22 @@ struct net_device *rtnl_create_link(struct net *src_net, struct net *net, { int err; struct net_device *dev; - unsigned int num_queues = 1; + unsigned int num_tx_queues = 1; + unsigned int num_rx_queues = 1; - if (ops->get_tx_queues) { - err = ops->get_tx_queues(src_net, tb); - if (err < 0) - goto err; - num_queues = err; - } + if (tb[IFLA_NUM_TX_QUEUES]) + num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]); + else if (ops->get_num_tx_queues) + num_tx_queues = ops->get_num_tx_queues(); + + if (tb[IFLA_NUM_RX_QUEUES]) + num_rx_queues = nla_get_u32(tb[IFLA_NUM_RX_QUEUES]); + else if (ops->get_num_rx_queues) + num_rx_queues = ops->get_num_rx_queues(); err = -ENOMEM; - dev = alloc_netdev_mq(ops->priv_size, ifname, ops->setup, num_queues); + dev = alloc_netdev_mqs(ops->priv_size, ifname, ops->setup, + num_tx_queues, num_rx_queues); if (!dev) goto err; @@ -2189,7 +2187,7 @@ skip: } /** - * ndo_dflt_fdb_dump: default netdevice operation to dump an FDB table. + * ndo_dflt_fdb_dump - default netdevice operation to dump an FDB table. * @nlh: netlink message header * @dev: netdevice * @@ -2366,8 +2364,13 @@ static struct notifier_block rtnetlink_dev_notifier = { static int __net_init rtnetlink_net_init(struct net *net) { struct sock *sk; - sk = netlink_kernel_create(net, NETLINK_ROUTE, RTNLGRP_MAX, - rtnetlink_rcv, &rtnl_mutex, THIS_MODULE); + struct netlink_kernel_cfg cfg = { + .groups = RTNLGRP_MAX, + .input = rtnetlink_rcv, + .cb_mutex = &rtnl_mutex, + }; + + sk = netlink_kernel_create(net, NETLINK_ROUTE, THIS_MODULE, &cfg); if (!sk) return -ENOMEM; net->rtnl = sk; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index d124306b81fd..368f65c15e4f 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -160,8 +160,8 @@ static void skb_under_panic(struct sk_buff *skb, int sz, void *here) * @node: numa node to allocate memory on * * Allocate a new &sk_buff. The returned buffer has no headroom and a - * tail room of size bytes. The object has a reference count of one. - * The return is the buffer. On a failure the return is %NULL. + * tail room of at least size bytes. The object has a reference count + * of one. The return is the buffer. On a failure the return is %NULL. * * Buffers may only be allocated from interrupts using a @gfp_mask of * %GFP_ATOMIC. 
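
The rtnetlink_net_init() hunk above shows the new calling convention for netlink_kernel_create(): the groups/input/cb_mutex parameters now travel in a struct netlink_kernel_cfg rather than as a long positional argument list. A self-contained sketch of the same pattern for a hypothetical in-kernel user follows; the protocol number, handler and module names are assumptions, not taken from the patch:

#include <linux/module.h>
#include <linux/netlink.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>
#include <net/netlink.h>
#include <net/sock.h>

#define NETLINK_EXAMPLE 31		/* assumed unused protocol number */

static struct sock *example_nl_sk;

static void example_nl_rcv(struct sk_buff *skb)
{
	/* parse the nlmsghdr(s) carried in skb and act on them */
}

static int __init example_nl_init(void)
{
	struct netlink_kernel_cfg cfg = {
		.groups	= 1,		/* one multicast group */
		.input	= example_nl_rcv,
	};

	example_nl_sk = netlink_kernel_create(&init_net, NETLINK_EXAMPLE,
					      THIS_MODULE, &cfg);
	return example_nl_sk ? 0 : -ENOMEM;
}

static void __exit example_nl_exit(void)
{
	netlink_kernel_release(example_nl_sk);
}

module_init(example_nl_init);
module_exit(example_nl_exit);
MODULE_LICENSE("GPL");

The sock_diag.c conversion further below goes one step more and creates the socket per network namespace via register_pernet_subsys(), which is the pattern to prefer when the protocol has to work inside containers.
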
@@ -296,9 +296,12 @@ EXPORT_SYMBOL(build_skb); struct netdev_alloc_cache { struct page *page; unsigned int offset; + unsigned int pagecnt_bias; }; static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache); +#define NETDEV_PAGECNT_BIAS (PAGE_SIZE / SMP_CACHE_BYTES) + /** * netdev_alloc_frag - allocate a page fragment * @fragsz: fragment size @@ -317,17 +320,26 @@ void *netdev_alloc_frag(unsigned int fragsz) if (unlikely(!nc->page)) { refill: nc->page = alloc_page(GFP_ATOMIC | __GFP_COLD); + if (unlikely(!nc->page)) + goto end; +recycle: + atomic_set(&nc->page->_count, NETDEV_PAGECNT_BIAS); + nc->pagecnt_bias = NETDEV_PAGECNT_BIAS; nc->offset = 0; } - if (likely(nc->page)) { - if (nc->offset + fragsz > PAGE_SIZE) { - put_page(nc->page); - goto refill; - } - data = page_address(nc->page) + nc->offset; - nc->offset += fragsz; - get_page(nc->page); + + if (nc->offset + fragsz > PAGE_SIZE) { + /* avoid unnecessary locked operations if possible */ + if ((atomic_read(&nc->page->_count) == nc->pagecnt_bias) || + atomic_sub_and_test(nc->pagecnt_bias, &nc->page->_count)) + goto recycle; + goto refill; } + + data = page_address(nc->page) + nc->offset; + nc->offset += fragsz; + nc->pagecnt_bias--; +end: local_irq_restore(flags); return data; } @@ -713,7 +725,8 @@ struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src) } EXPORT_SYMBOL_GPL(skb_morph); -/* skb_copy_ubufs - copy userspace skb frags buffers to kernel +/** + * skb_copy_ubufs - copy userspace skb frags buffers to kernel * @skb: the skb to modify * @gfp_mask: allocation priority * @@ -738,7 +751,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) u8 *vaddr; skb_frag_t *f = &skb_shinfo(skb)->frags[i]; - page = alloc_page(GFP_ATOMIC); + page = alloc_page(gfp_mask); if (!page) { while (head) { struct page *next = (struct page *)head->private; @@ -756,22 +769,22 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) } /* skb frags release userspace buffers */ - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) + for (i = 0; i < num_frags; i++) skb_frag_unref(skb, i); uarg->callback(uarg); /* skb frags point to kernel buffers */ - for (i = skb_shinfo(skb)->nr_frags; i > 0; i--) { - __skb_fill_page_desc(skb, i-1, head, 0, - skb_shinfo(skb)->frags[i - 1].size); + for (i = num_frags - 1; i >= 0; i--) { + __skb_fill_page_desc(skb, i, head, 0, + skb_shinfo(skb)->frags[i].size); head = (struct page *)head->private; } skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY; return 0; } - +EXPORT_SYMBOL_GPL(skb_copy_ubufs); /** * skb_clone - duplicate an sk_buff @@ -791,10 +804,8 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask) { struct sk_buff *n; - if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) { - if (skb_copy_ubufs(skb, gfp_mask)) - return NULL; - } + if (skb_orphan_frags(skb, gfp_mask)) + return NULL; n = skb + 1; if (skb->fclone == SKB_FCLONE_ORIG && @@ -914,12 +925,10 @@ struct sk_buff *__pskb_copy(struct sk_buff *skb, int headroom, gfp_t gfp_mask) if (skb_shinfo(skb)->nr_frags) { int i; - if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) { - if (skb_copy_ubufs(skb, gfp_mask)) { - kfree_skb(n); - n = NULL; - goto out; - } + if (skb_orphan_frags(skb, gfp_mask)) { + kfree_skb(n); + n = NULL; + goto out; } for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_shinfo(n)->frags[i] = skb_shinfo(skb)->frags[i]; @@ -992,10 +1001,8 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, */ if (skb_cloned(skb)) { /* copy this zero copy skb frags */ - if (skb_shinfo(skb)->tx_flags & 
SKBTX_DEV_ZEROCOPY) { - if (skb_copy_ubufs(skb, gfp_mask)) - goto nofrags; - } + if (skb_orphan_frags(skb, gfp_mask)) + goto nofrags; for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) skb_frag_ref(skb, i); @@ -2614,7 +2621,7 @@ unsigned int skb_find_text(struct sk_buff *skb, unsigned int from, EXPORT_SYMBOL(skb_find_text); /** - * skb_append_datato_frags: - append the user data to a skb + * skb_append_datato_frags - append the user data to a skb * @sk: sock structure * @skb: skb structure to be appened with user data. * @getfrag: call back function to be used for getting the user data diff --git a/net/core/sock.c b/net/core/sock.c index 9e5b71fda6ec..2676a88f533e 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1180,12 +1180,12 @@ void sock_update_classid(struct sock *sk) } EXPORT_SYMBOL(sock_update_classid); -void sock_update_netprioidx(struct sock *sk) +void sock_update_netprioidx(struct sock *sk, struct task_struct *task) { if (in_interrupt()) return; - sk->sk_cgrp_prioidx = task_netprioidx(current); + sk->sk_cgrp_prioidx = task_netprioidx(task); } EXPORT_SYMBOL_GPL(sock_update_netprioidx); #endif @@ -1215,7 +1215,7 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority, atomic_set(&sk->sk_wmem_alloc, 1); sock_update_classid(sk); - sock_update_netprioidx(sk); + sock_update_netprioidx(sk, current); } return sk; @@ -1465,6 +1465,11 @@ void sock_rfree(struct sk_buff *skb) } EXPORT_SYMBOL(sock_rfree); +void sock_edemux(struct sk_buff *skb) +{ + sock_put(skb->sk); +} +EXPORT_SYMBOL(sock_edemux); int sock_i_uid(struct sock *sk) { @@ -2154,6 +2159,10 @@ void release_sock(struct sock *sk) spin_lock_bh(&sk->sk_lock.slock); if (sk->sk_backlog.tail) __release_sock(sk); + + if (sk->sk_prot->release_cb) + sk->sk_prot->release_cb(sk); + sk->sk_lock.owned = 0; if (waitqueue_active(&sk->sk_lock.wq)) wake_up(&sk->sk_lock.wq); diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c index 5fd146720f39..9d8755e4a7a5 100644 --- a/net/core/sock_diag.c +++ b/net/core/sock_diag.c @@ -4,7 +4,6 @@ #include <net/netlink.h> #include <net/net_namespace.h> #include <linux/module.h> -#include <linux/rtnetlink.h> #include <net/sock.h> #include <linux/inet_diag.h> @@ -35,9 +34,7 @@ EXPORT_SYMBOL_GPL(sock_diag_save_cookie); int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attrtype) { - __u32 *mem; - - mem = RTA_DATA(__RTA_PUT(skb, attrtype, SK_MEMINFO_VARS * sizeof(__u32))); + u32 mem[SK_MEMINFO_VARS]; mem[SK_MEMINFO_RMEM_ALLOC] = sk_rmem_alloc_get(sk); mem[SK_MEMINFO_RCVBUF] = sk->sk_rcvbuf; @@ -46,11 +43,9 @@ int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attrtype) mem[SK_MEMINFO_FWD_ALLOC] = sk->sk_forward_alloc; mem[SK_MEMINFO_WMEM_QUEUED] = sk->sk_wmem_queued; mem[SK_MEMINFO_OPTMEM] = atomic_read(&sk->sk_omem_alloc); + mem[SK_MEMINFO_BACKLOG] = sk->sk_backlog.len; - return 0; - -rtattr_failure: - return -EMSGSIZE; + return nla_put(skb, attrtype, sizeof(mem), &mem); } EXPORT_SYMBOL_GPL(sock_diag_put_meminfo); @@ -120,7 +115,7 @@ static inline void sock_diag_unlock_handler(const struct sock_diag_handler *h) static int __sock_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { int err; - struct sock_diag_req *req = NLMSG_DATA(nlh); + struct sock_diag_req *req = nlmsg_data(nlh); const struct sock_diag_handler *hndl; if (nlmsg_len(nlh) < sizeof(*req)) @@ -171,19 +166,36 @@ static void sock_diag_rcv(struct sk_buff *skb) mutex_unlock(&sock_diag_mutex); } -struct sock *sock_diag_nlsk; -EXPORT_SYMBOL_GPL(sock_diag_nlsk); +static int __net_init 
diag_net_init(struct net *net) +{ + struct netlink_kernel_cfg cfg = { + .input = sock_diag_rcv, + }; + + net->diag_nlsk = netlink_kernel_create(net, NETLINK_SOCK_DIAG, + THIS_MODULE, &cfg); + return net->diag_nlsk == NULL ? -ENOMEM : 0; +} + +static void __net_exit diag_net_exit(struct net *net) +{ + netlink_kernel_release(net->diag_nlsk); + net->diag_nlsk = NULL; +} + +static struct pernet_operations diag_net_ops = { + .init = diag_net_init, + .exit = diag_net_exit, +}; static int __init sock_diag_init(void) { - sock_diag_nlsk = netlink_kernel_create(&init_net, NETLINK_SOCK_DIAG, 0, - sock_diag_rcv, NULL, THIS_MODULE); - return sock_diag_nlsk == NULL ? -ENOMEM : 0; + return register_pernet_subsys(&diag_net_ops); } static void __exit sock_diag_exit(void) { - netlink_kernel_release(sock_diag_nlsk); + unregister_pernet_subsys(&diag_net_ops); } module_init(sock_diag_init); diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c index 656c7c75b192..81f2bb62dea3 100644 --- a/net/dcb/dcbnl.c +++ b/net/dcb/dcbnl.c @@ -28,8 +28,7 @@ #include <linux/module.h> #include <net/sock.h> -/** - * Data Center Bridging (DCB) is a collection of Ethernet enhancements +/* Data Center Bridging (DCB) is a collection of Ethernet enhancements * intended to allow network traffic with differing requirements * (highly reliable, no drops vs. best effort vs. low latency) to operate * and co-exist on Ethernet. Current DCB features are: @@ -196,92 +195,66 @@ static const struct nla_policy dcbnl_featcfg_nest[DCB_FEATCFG_ATTR_MAX + 1] = { static LIST_HEAD(dcb_app_list); static DEFINE_SPINLOCK(dcb_lock); -/* standard netlink reply call */ -static int dcbnl_reply(u8 value, u8 event, u8 cmd, u8 attr, u32 pid, - u32 seq, u16 flags) +static struct sk_buff *dcbnl_newmsg(int type, u8 cmd, u32 port, u32 seq, + u32 flags, struct nlmsghdr **nlhp) { - struct sk_buff *dcbnl_skb; + struct sk_buff *skb; struct dcbmsg *dcb; struct nlmsghdr *nlh; - int ret = -EINVAL; - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - return ret; + skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!skb) + return NULL; - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, event, sizeof(*dcb), flags); + nlh = nlmsg_put(skb, port, seq, type, sizeof(*dcb), flags); + BUG_ON(!nlh); - dcb = NLMSG_DATA(nlh); + dcb = nlmsg_data(nlh); dcb->dcb_family = AF_UNSPEC; dcb->cmd = cmd; dcb->dcb_pad = 0; - ret = nla_put_u8(dcbnl_skb, attr, value); - if (ret) - goto err; + if (nlhp) + *nlhp = nlh; - /* end the message, assign the nlmsg_len. 
*/ - nlmsg_end(dcbnl_skb, nlh); - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - return -EINVAL; - - return 0; -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); - return ret; + return skb; } -static int dcbnl_getstate(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getstate(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret = -EINVAL; - /* if (!tb[DCB_ATTR_STATE] || !netdev->dcbnl_ops->getstate) */ if (!netdev->dcbnl_ops->getstate) - return ret; - - ret = dcbnl_reply(netdev->dcbnl_ops->getstate(netdev), RTM_GETDCB, - DCB_CMD_GSTATE, DCB_ATTR_STATE, pid, seq, flags); + return -EOPNOTSUPP; - return ret; + return nla_put_u8(skb, DCB_ATTR_STATE, + netdev->dcbnl_ops->getstate(netdev)); } -static int dcbnl_getpfccfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getpfccfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *data[DCB_PFC_UP_ATTR_MAX + 1], *nest; u8 value; - int ret = -EINVAL; + int ret; int i; int getall = 0; - if (!tb[DCB_ATTR_PFC_CFG] || !netdev->dcbnl_ops->getpfccfg) - return ret; + if (!tb[DCB_ATTR_PFC_CFG]) + return -EINVAL; + + if (!netdev->dcbnl_ops->getpfccfg) + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_PFC_UP_ATTR_MAX, tb[DCB_ATTR_PFC_CFG], dcbnl_pfc_up_nest); if (ret) - goto err_out; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto err_out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_PFC_GCFG; + return ret; - nest = nla_nest_start(dcbnl_skb, DCB_ATTR_PFC_CFG); + nest = nla_nest_start(skb, DCB_ATTR_PFC_CFG); if (!nest) - goto err; + return -EMSGSIZE; if (data[DCB_PFC_UP_ATTR_ALL]) getall = 1; @@ -292,103 +265,53 @@ static int dcbnl_getpfccfg(struct net_device *netdev, struct nlattr **tb, netdev->dcbnl_ops->getpfccfg(netdev, i - DCB_PFC_UP_ATTR_0, &value); - ret = nla_put_u8(dcbnl_skb, i, value); - + ret = nla_put_u8(skb, i, value); if (ret) { - nla_nest_cancel(dcbnl_skb, nest); - goto err; + nla_nest_cancel(skb, nest); + return ret; } } - nla_nest_end(dcbnl_skb, nest); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto err_out; + nla_nest_end(skb, nest); return 0; -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); -err_out: - return -EINVAL; } -static int dcbnl_getperm_hwaddr(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getperm_hwaddr(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; u8 perm_addr[MAX_ADDR_LEN]; - int ret = -EINVAL; if (!netdev->dcbnl_ops->getpermhwaddr) - return ret; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto err_out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_GPERM_HWADDR; + return -EOPNOTSUPP; netdev->dcbnl_ops->getpermhwaddr(netdev, perm_addr); - ret = nla_put(dcbnl_skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr), - perm_addr); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto 
err_out; - - return 0; - -nlmsg_failure: - kfree_skb(dcbnl_skb); -err_out: - return -EINVAL; + return nla_put(skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr), perm_addr); } -static int dcbnl_getcap(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getcap(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *data[DCB_CAP_ATTR_MAX + 1], *nest; u8 value; - int ret = -EINVAL; + int ret; int i; int getall = 0; - if (!tb[DCB_ATTR_CAP] || !netdev->dcbnl_ops->getcap) - return ret; + if (!tb[DCB_ATTR_CAP]) + return -EINVAL; + + if (!netdev->dcbnl_ops->getcap) + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_CAP_ATTR_MAX, tb[DCB_ATTR_CAP], dcbnl_cap_nest); if (ret) - goto err_out; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto err_out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_GCAP; + return ret; - nest = nla_nest_start(dcbnl_skb, DCB_ATTR_CAP); + nest = nla_nest_start(skb, DCB_ATTR_CAP); if (!nest) - goto err; + return -EMSGSIZE; if (data[DCB_CAP_ATTR_ALL]) getall = 1; @@ -398,69 +321,41 @@ static int dcbnl_getcap(struct net_device *netdev, struct nlattr **tb, continue; if (!netdev->dcbnl_ops->getcap(netdev, i, &value)) { - ret = nla_put_u8(dcbnl_skb, i, value); - + ret = nla_put_u8(skb, i, value); if (ret) { - nla_nest_cancel(dcbnl_skb, nest); - goto err; + nla_nest_cancel(skb, nest); + return ret; } } } - nla_nest_end(dcbnl_skb, nest); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto err_out; + nla_nest_end(skb, nest); return 0; -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); -err_out: - return -EINVAL; } -static int dcbnl_getnumtcs(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getnumtcs(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *data[DCB_NUMTCS_ATTR_MAX + 1], *nest; u8 value; - int ret = -EINVAL; + int ret; int i; int getall = 0; - if (!tb[DCB_ATTR_NUMTCS] || !netdev->dcbnl_ops->getnumtcs) - return ret; + if (!tb[DCB_ATTR_NUMTCS]) + return -EINVAL; + + if (!netdev->dcbnl_ops->getnumtcs) + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_NUMTCS_ATTR_MAX, tb[DCB_ATTR_NUMTCS], dcbnl_numtcs_nest); - if (ret) { - ret = -EINVAL; - goto err_out; - } - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) { - ret = -EINVAL; - goto err_out; - } - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_GNUMTCS; + if (ret) + return ret; - nest = nla_nest_start(dcbnl_skb, DCB_ATTR_NUMTCS); - if (!nest) { - ret = -EINVAL; - goto err; - } + nest = nla_nest_start(skb, DCB_ATTR_NUMTCS); + if (!nest) + return -EMSGSIZE; if (data[DCB_NUMTCS_ATTR_ALL]) getall = 1; @@ -471,53 +366,37 @@ static int dcbnl_getnumtcs(struct net_device *netdev, struct nlattr **tb, ret = netdev->dcbnl_ops->getnumtcs(netdev, i, &value); if (!ret) { - ret = nla_put_u8(dcbnl_skb, i, value); - + ret = nla_put_u8(skb, i, value); if (ret) { - nla_nest_cancel(dcbnl_skb, nest); - ret = -EINVAL; - goto err; + nla_nest_cancel(skb, nest); + return ret; } 
- } else { - goto err; - } - } - nla_nest_end(dcbnl_skb, nest); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) { - ret = -EINVAL; - goto err_out; + } else + return -EINVAL; } + nla_nest_end(skb, nest); return 0; -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); -err_out: - return ret; } -static int dcbnl_setnumtcs(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setnumtcs(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { struct nlattr *data[DCB_NUMTCS_ATTR_MAX + 1]; - int ret = -EINVAL; + int ret; u8 value; int i; - if (!tb[DCB_ATTR_NUMTCS] || !netdev->dcbnl_ops->setnumtcs) - return ret; + if (!tb[DCB_ATTR_NUMTCS]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setnumtcs) + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_NUMTCS_ATTR_MAX, tb[DCB_ATTR_NUMTCS], dcbnl_numtcs_nest); - - if (ret) { - ret = -EINVAL; - goto err; - } + if (ret) + return ret; for (i = DCB_NUMTCS_ATTR_ALL+1; i <= DCB_NUMTCS_ATTR_MAX; i++) { if (data[i] == NULL) @@ -526,84 +405,68 @@ static int dcbnl_setnumtcs(struct net_device *netdev, struct nlattr **tb, value = nla_get_u8(data[i]); ret = netdev->dcbnl_ops->setnumtcs(netdev, i, value); - if (ret) - goto operr; + break; } -operr: - ret = dcbnl_reply(!!ret, RTM_SETDCB, DCB_CMD_SNUMTCS, - DCB_ATTR_NUMTCS, pid, seq, flags); - -err: - return ret; + return nla_put_u8(skb, DCB_ATTR_NUMTCS, !!ret); } -static int dcbnl_getpfcstate(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getpfcstate(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret = -EINVAL; - if (!netdev->dcbnl_ops->getpfcstate) - return ret; - - ret = dcbnl_reply(netdev->dcbnl_ops->getpfcstate(netdev), RTM_GETDCB, - DCB_CMD_PFC_GSTATE, DCB_ATTR_PFC_STATE, - pid, seq, flags); + return -EOPNOTSUPP; - return ret; + return nla_put_u8(skb, DCB_ATTR_PFC_STATE, + netdev->dcbnl_ops->getpfcstate(netdev)); } -static int dcbnl_setpfcstate(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setpfcstate(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret = -EINVAL; u8 value; - if (!tb[DCB_ATTR_PFC_STATE] || !netdev->dcbnl_ops->setpfcstate) - return ret; + if (!tb[DCB_ATTR_PFC_STATE]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setpfcstate) + return -EOPNOTSUPP; value = nla_get_u8(tb[DCB_ATTR_PFC_STATE]); netdev->dcbnl_ops->setpfcstate(netdev, value); - ret = dcbnl_reply(0, RTM_SETDCB, DCB_CMD_PFC_SSTATE, DCB_ATTR_PFC_STATE, - pid, seq, flags); - - return ret; + return nla_put_u8(skb, DCB_ATTR_PFC_STATE, 0); } -static int dcbnl_getapp(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getapp(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *app_nest; struct nlattr *app_tb[DCB_APP_ATTR_MAX + 1]; u16 id; u8 up, idtype; - int ret = -EINVAL; + int ret; if (!tb[DCB_ATTR_APP]) - goto out; + return -EINVAL; ret = nla_parse_nested(app_tb, DCB_APP_ATTR_MAX, tb[DCB_ATTR_APP], dcbnl_app_nest); if (ret) - goto out; + return ret; - ret = -EINVAL; /* all must be non-null */ if ((!app_tb[DCB_APP_ATTR_IDTYPE]) || (!app_tb[DCB_APP_ATTR_ID])) - goto out; + return -EINVAL; /* either by eth type or by 
socket number */ idtype = nla_get_u8(app_tb[DCB_APP_ATTR_IDTYPE]); if ((idtype != DCB_APP_IDTYPE_ETHTYPE) && (idtype != DCB_APP_IDTYPE_PORTNUM)) - goto out; + return -EINVAL; id = nla_get_u16(app_tb[DCB_APP_ATTR_ID]); @@ -617,138 +480,106 @@ static int dcbnl_getapp(struct net_device *netdev, struct nlattr **tb, up = dcb_getapp(netdev, &app); } - /* send this back */ - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_GAPP; - - app_nest = nla_nest_start(dcbnl_skb, DCB_ATTR_APP); + app_nest = nla_nest_start(skb, DCB_ATTR_APP); if (!app_nest) - goto out_cancel; + return -EMSGSIZE; - ret = nla_put_u8(dcbnl_skb, DCB_APP_ATTR_IDTYPE, idtype); + ret = nla_put_u8(skb, DCB_APP_ATTR_IDTYPE, idtype); if (ret) goto out_cancel; - ret = nla_put_u16(dcbnl_skb, DCB_APP_ATTR_ID, id); + ret = nla_put_u16(skb, DCB_APP_ATTR_ID, id); if (ret) goto out_cancel; - ret = nla_put_u8(dcbnl_skb, DCB_APP_ATTR_PRIORITY, up); + ret = nla_put_u8(skb, DCB_APP_ATTR_PRIORITY, up); if (ret) goto out_cancel; - nla_nest_end(dcbnl_skb, app_nest); - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto nlmsg_failure; + nla_nest_end(skb, app_nest); - goto out; + return 0; out_cancel: - nla_nest_cancel(dcbnl_skb, app_nest); -nlmsg_failure: - kfree_skb(dcbnl_skb); -out: + nla_nest_cancel(skb, app_nest); return ret; } -static int dcbnl_setapp(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setapp(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int err, ret = -EINVAL; + int ret; u16 id; u8 up, idtype; struct nlattr *app_tb[DCB_APP_ATTR_MAX + 1]; if (!tb[DCB_ATTR_APP]) - goto out; + return -EINVAL; ret = nla_parse_nested(app_tb, DCB_APP_ATTR_MAX, tb[DCB_ATTR_APP], dcbnl_app_nest); if (ret) - goto out; + return ret; - ret = -EINVAL; /* all must be non-null */ if ((!app_tb[DCB_APP_ATTR_IDTYPE]) || (!app_tb[DCB_APP_ATTR_ID]) || (!app_tb[DCB_APP_ATTR_PRIORITY])) - goto out; + return -EINVAL; /* either by eth type or by socket number */ idtype = nla_get_u8(app_tb[DCB_APP_ATTR_IDTYPE]); if ((idtype != DCB_APP_IDTYPE_ETHTYPE) && (idtype != DCB_APP_IDTYPE_PORTNUM)) - goto out; + return -EINVAL; id = nla_get_u16(app_tb[DCB_APP_ATTR_ID]); up = nla_get_u8(app_tb[DCB_APP_ATTR_PRIORITY]); if (netdev->dcbnl_ops->setapp) { - err = netdev->dcbnl_ops->setapp(netdev, idtype, id, up); + ret = netdev->dcbnl_ops->setapp(netdev, idtype, id, up); } else { struct dcb_app app; app.selector = idtype; app.protocol = id; app.priority = up; - err = dcb_setapp(netdev, &app); + ret = dcb_setapp(netdev, &app); } - ret = dcbnl_reply(err, RTM_SETDCB, DCB_CMD_SAPP, DCB_ATTR_APP, - pid, seq, flags); + ret = nla_put_u8(skb, DCB_ATTR_APP, ret); dcbnl_cee_notify(netdev, RTM_SETDCB, DCB_CMD_SAPP, seq, 0); -out: + return ret; } -static int __dcbnl_pg_getcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags, int dir) +static int __dcbnl_pg_getcfg(struct net_device *netdev, struct nlmsghdr *nlh, + struct nlattr **tb, struct sk_buff *skb, int dir) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *pg_nest, *param_nest, *data; struct nlattr *pg_tb[DCB_PG_ATTR_MAX + 1]; struct nlattr *param_tb[DCB_TC_ATTR_PARAM_MAX + 1]; u8 prio, pgid, tc_pct, up_map; - int ret = -EINVAL; + int ret; 
int getall = 0; int i; - if (!tb[DCB_ATTR_PG_CFG] || - !netdev->dcbnl_ops->getpgtccfgtx || + if (!tb[DCB_ATTR_PG_CFG]) + return -EINVAL; + + if (!netdev->dcbnl_ops->getpgtccfgtx || !netdev->dcbnl_ops->getpgtccfgrx || !netdev->dcbnl_ops->getpgbwgcfgtx || !netdev->dcbnl_ops->getpgbwgcfgrx) - return ret; + return -EOPNOTSUPP; ret = nla_parse_nested(pg_tb, DCB_PG_ATTR_MAX, tb[DCB_ATTR_PG_CFG], dcbnl_pg_nest); - if (ret) - goto err_out; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto err_out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = (dir) ? DCB_CMD_PGRX_GCFG : DCB_CMD_PGTX_GCFG; + return ret; - pg_nest = nla_nest_start(dcbnl_skb, DCB_ATTR_PG_CFG); + pg_nest = nla_nest_start(skb, DCB_ATTR_PG_CFG); if (!pg_nest) - goto err; + return -EMSGSIZE; if (pg_tb[DCB_PG_ATTR_TC_ALL]) getall = 1; @@ -766,7 +597,7 @@ static int __dcbnl_pg_getcfg(struct net_device *netdev, struct nlattr **tb, if (ret) goto err_pg; - param_nest = nla_nest_start(dcbnl_skb, i); + param_nest = nla_nest_start(skb, i); if (!param_nest) goto err_pg; @@ -789,33 +620,33 @@ static int __dcbnl_pg_getcfg(struct net_device *netdev, struct nlattr **tb, if (param_tb[DCB_TC_ATTR_PARAM_PGID] || param_tb[DCB_TC_ATTR_PARAM_ALL]) { - ret = nla_put_u8(dcbnl_skb, + ret = nla_put_u8(skb, DCB_TC_ATTR_PARAM_PGID, pgid); if (ret) goto err_param; } if (param_tb[DCB_TC_ATTR_PARAM_UP_MAPPING] || param_tb[DCB_TC_ATTR_PARAM_ALL]) { - ret = nla_put_u8(dcbnl_skb, + ret = nla_put_u8(skb, DCB_TC_ATTR_PARAM_UP_MAPPING, up_map); if (ret) goto err_param; } if (param_tb[DCB_TC_ATTR_PARAM_STRICT_PRIO] || param_tb[DCB_TC_ATTR_PARAM_ALL]) { - ret = nla_put_u8(dcbnl_skb, + ret = nla_put_u8(skb, DCB_TC_ATTR_PARAM_STRICT_PRIO, prio); if (ret) goto err_param; } if (param_tb[DCB_TC_ATTR_PARAM_BW_PCT] || param_tb[DCB_TC_ATTR_PARAM_ALL]) { - ret = nla_put_u8(dcbnl_skb, DCB_TC_ATTR_PARAM_BW_PCT, + ret = nla_put_u8(skb, DCB_TC_ATTR_PARAM_BW_PCT, tc_pct); if (ret) goto err_param; } - nla_nest_end(dcbnl_skb, param_nest); + nla_nest_end(skb, param_nest); } if (pg_tb[DCB_PG_ATTR_BW_ID_ALL]) @@ -838,80 +669,71 @@ static int __dcbnl_pg_getcfg(struct net_device *netdev, struct nlattr **tb, netdev->dcbnl_ops->getpgbwgcfgtx(netdev, i - DCB_PG_ATTR_BW_ID_0, &tc_pct); } - ret = nla_put_u8(dcbnl_skb, i, tc_pct); - + ret = nla_put_u8(skb, i, tc_pct); if (ret) goto err_pg; } - nla_nest_end(dcbnl_skb, pg_nest); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto err_out; + nla_nest_end(skb, pg_nest); return 0; err_param: - nla_nest_cancel(dcbnl_skb, param_nest); + nla_nest_cancel(skb, param_nest); err_pg: - nla_nest_cancel(dcbnl_skb, pg_nest); -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); -err_out: - ret = -EINVAL; - return ret; + nla_nest_cancel(skb, pg_nest); + + return -EMSGSIZE; } -static int dcbnl_pgtx_getcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_pgtx_getcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - return __dcbnl_pg_getcfg(netdev, tb, pid, seq, flags, 0); + return __dcbnl_pg_getcfg(netdev, nlh, tb, skb, 0); } -static int dcbnl_pgrx_getcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_pgrx_getcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - return __dcbnl_pg_getcfg(netdev, 
tb, pid, seq, flags, 1); + return __dcbnl_pg_getcfg(netdev, nlh, tb, skb, 1); } -static int dcbnl_setstate(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setstate(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret = -EINVAL; u8 value; - if (!tb[DCB_ATTR_STATE] || !netdev->dcbnl_ops->setstate) - return ret; + if (!tb[DCB_ATTR_STATE]) + return -EINVAL; - value = nla_get_u8(tb[DCB_ATTR_STATE]); + if (!netdev->dcbnl_ops->setstate) + return -EOPNOTSUPP; - ret = dcbnl_reply(netdev->dcbnl_ops->setstate(netdev, value), - RTM_SETDCB, DCB_CMD_SSTATE, DCB_ATTR_STATE, - pid, seq, flags); + value = nla_get_u8(tb[DCB_ATTR_STATE]); - return ret; + return nla_put_u8(skb, DCB_ATTR_STATE, + netdev->dcbnl_ops->setstate(netdev, value)); } -static int dcbnl_setpfccfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setpfccfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { struct nlattr *data[DCB_PFC_UP_ATTR_MAX + 1]; int i; - int ret = -EINVAL; + int ret; u8 value; - if (!tb[DCB_ATTR_PFC_CFG] || !netdev->dcbnl_ops->setpfccfg) - return ret; + if (!tb[DCB_ATTR_PFC_CFG]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setpfccfg) + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_PFC_UP_ATTR_MAX, tb[DCB_ATTR_PFC_CFG], dcbnl_pfc_up_nest); if (ret) - goto err; + return ret; for (i = DCB_PFC_UP_ATTR_0; i <= DCB_PFC_UP_ATTR_7; i++) { if (data[i] == NULL) @@ -921,50 +743,53 @@ static int dcbnl_setpfccfg(struct net_device *netdev, struct nlattr **tb, data[i]->nla_type - DCB_PFC_UP_ATTR_0, value); } - ret = dcbnl_reply(0, RTM_SETDCB, DCB_CMD_PFC_SCFG, DCB_ATTR_PFC_CFG, - pid, seq, flags); -err: - return ret; + return nla_put_u8(skb, DCB_ATTR_PFC_CFG, 0); } -static int dcbnl_setall(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setall(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret = -EINVAL; + int ret; - if (!tb[DCB_ATTR_SET_ALL] || !netdev->dcbnl_ops->setall) - return ret; + if (!tb[DCB_ATTR_SET_ALL]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setall) + return -EOPNOTSUPP; - ret = dcbnl_reply(netdev->dcbnl_ops->setall(netdev), RTM_SETDCB, - DCB_CMD_SET_ALL, DCB_ATTR_SET_ALL, pid, seq, flags); + ret = nla_put_u8(skb, DCB_ATTR_SET_ALL, + netdev->dcbnl_ops->setall(netdev)); dcbnl_cee_notify(netdev, RTM_SETDCB, DCB_CMD_SET_ALL, seq, 0); return ret; } -static int __dcbnl_pg_setcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags, int dir) +static int __dcbnl_pg_setcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb, + int dir) { struct nlattr *pg_tb[DCB_PG_ATTR_MAX + 1]; struct nlattr *param_tb[DCB_TC_ATTR_PARAM_MAX + 1]; - int ret = -EINVAL; + int ret; int i; u8 pgid; u8 up_map; u8 prio; u8 tc_pct; - if (!tb[DCB_ATTR_PG_CFG] || - !netdev->dcbnl_ops->setpgtccfgtx || + if (!tb[DCB_ATTR_PG_CFG]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setpgtccfgtx || !netdev->dcbnl_ops->setpgtccfgrx || !netdev->dcbnl_ops->setpgbwgcfgtx || !netdev->dcbnl_ops->setpgbwgcfgrx) - return ret; + return -EOPNOTSUPP; ret = nla_parse_nested(pg_tb, DCB_PG_ATTR_MAX, tb[DCB_ATTR_PG_CFG], dcbnl_pg_nest); if (ret) - goto err; + return ret; for (i = DCB_PG_ATTR_TC_0; i <= DCB_PG_ATTR_TC_7; i++) { if (!pg_tb[i]) @@ -973,7 +798,7 @@ static int 
__dcbnl_pg_setcfg(struct net_device *netdev, struct nlattr **tb, ret = nla_parse_nested(param_tb, DCB_TC_ATTR_PARAM_MAX, pg_tb[i], dcbnl_tc_param_nest); if (ret) - goto err; + return ret; pgid = DCB_ATTR_VALUE_UNDEFINED; prio = DCB_ATTR_VALUE_UNDEFINED; @@ -1026,63 +851,47 @@ static int __dcbnl_pg_setcfg(struct net_device *netdev, struct nlattr **tb, } } - ret = dcbnl_reply(0, RTM_SETDCB, - (dir ? DCB_CMD_PGRX_SCFG : DCB_CMD_PGTX_SCFG), - DCB_ATTR_PG_CFG, pid, seq, flags); - -err: - return ret; + return nla_put_u8(skb, DCB_ATTR_PG_CFG, 0); } -static int dcbnl_pgtx_setcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_pgtx_setcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - return __dcbnl_pg_setcfg(netdev, tb, pid, seq, flags, 0); + return __dcbnl_pg_setcfg(netdev, nlh, seq, tb, skb, 0); } -static int dcbnl_pgrx_setcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_pgrx_setcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - return __dcbnl_pg_setcfg(netdev, tb, pid, seq, flags, 1); + return __dcbnl_pg_setcfg(netdev, nlh, seq, tb, skb, 1); } -static int dcbnl_bcn_getcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_bcn_getcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *bcn_nest; struct nlattr *bcn_tb[DCB_BCN_ATTR_MAX + 1]; u8 value_byte; u32 value_integer; - int ret = -EINVAL; + int ret; bool getall = false; int i; - if (!tb[DCB_ATTR_BCN] || !netdev->dcbnl_ops->getbcnrp || + if (!tb[DCB_ATTR_BCN]) + return -EINVAL; + + if (!netdev->dcbnl_ops->getbcnrp || !netdev->dcbnl_ops->getbcncfg) - return ret; + return -EOPNOTSUPP; ret = nla_parse_nested(bcn_tb, DCB_BCN_ATTR_MAX, tb[DCB_ATTR_BCN], dcbnl_bcn_nest); - if (ret) - goto err_out; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) - goto err_out; - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_BCN_GCFG; + return ret; - bcn_nest = nla_nest_start(dcbnl_skb, DCB_ATTR_BCN); + bcn_nest = nla_nest_start(skb, DCB_ATTR_BCN); if (!bcn_nest) - goto err; + return -EMSGSIZE; if (bcn_tb[DCB_BCN_ATTR_ALL]) getall = true; @@ -1093,7 +902,7 @@ static int dcbnl_bcn_getcfg(struct net_device *netdev, struct nlattr **tb, netdev->dcbnl_ops->getbcnrp(netdev, i - DCB_BCN_ATTR_RP_0, &value_byte); - ret = nla_put_u8(dcbnl_skb, i, value_byte); + ret = nla_put_u8(skb, i, value_byte); if (ret) goto err_bcn; } @@ -1104,49 +913,41 @@ static int dcbnl_bcn_getcfg(struct net_device *netdev, struct nlattr **tb, netdev->dcbnl_ops->getbcncfg(netdev, i, &value_integer); - ret = nla_put_u32(dcbnl_skb, i, value_integer); + ret = nla_put_u32(skb, i, value_integer); if (ret) goto err_bcn; } - nla_nest_end(dcbnl_skb, bcn_nest); - - nlmsg_end(dcbnl_skb, nlh); - - ret = rtnl_unicast(dcbnl_skb, &init_net, pid); - if (ret) - goto err_out; + nla_nest_end(skb, bcn_nest); return 0; err_bcn: - nla_nest_cancel(dcbnl_skb, bcn_nest); -nlmsg_failure: -err: - kfree_skb(dcbnl_skb); -err_out: - ret = -EINVAL; + nla_nest_cancel(skb, bcn_nest); return ret; } -static int dcbnl_bcn_setcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) 
+static int dcbnl_bcn_setcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { struct nlattr *data[DCB_BCN_ATTR_MAX + 1]; int i; - int ret = -EINVAL; + int ret; u8 value_byte; u32 value_int; - if (!tb[DCB_ATTR_BCN] || !netdev->dcbnl_ops->setbcncfg || + if (!tb[DCB_ATTR_BCN]) + return -EINVAL; + + if (!netdev->dcbnl_ops->setbcncfg || !netdev->dcbnl_ops->setbcnrp) - return ret; + return -EOPNOTSUPP; ret = nla_parse_nested(data, DCB_BCN_ATTR_MAX, tb[DCB_ATTR_BCN], dcbnl_pfc_up_nest); if (ret) - goto err; + return ret; for (i = DCB_BCN_ATTR_RP_0; i <= DCB_BCN_ATTR_RP_7; i++) { if (data[i] == NULL) @@ -1164,10 +965,7 @@ static int dcbnl_bcn_setcfg(struct net_device *netdev, struct nlattr **tb, i, value_int); } - ret = dcbnl_reply(0, RTM_SETDCB, DCB_CMD_BCN_SCFG, DCB_ATTR_BCN, - pid, seq, flags); -err: - return ret; + return nla_put_u8(skb, DCB_ATTR_BCN, 0); } static int dcbnl_build_peer_app(struct net_device *netdev, struct sk_buff* skb, @@ -1233,20 +1031,21 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) struct dcb_app_type *itr; const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; int dcbx; - int err = -EMSGSIZE; + int err; if (nla_put_string(skb, DCB_ATTR_IFNAME, netdev->name)) - goto nla_put_failure; + return -EMSGSIZE; + ieee = nla_nest_start(skb, DCB_ATTR_IEEE); if (!ieee) - goto nla_put_failure; + return -EMSGSIZE; if (ops->ieee_getets) { struct ieee_ets ets; err = ops->ieee_getets(netdev, &ets); if (!err && nla_put(skb, DCB_ATTR_IEEE_ETS, sizeof(ets), &ets)) - goto nla_put_failure; + return -EMSGSIZE; } if (ops->ieee_getmaxrate) { @@ -1256,7 +1055,7 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) err = nla_put(skb, DCB_ATTR_IEEE_MAXRATE, sizeof(maxrate), &maxrate); if (err) - goto nla_put_failure; + return -EMSGSIZE; } } @@ -1265,12 +1064,12 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) err = ops->ieee_getpfc(netdev, &pfc); if (!err && nla_put(skb, DCB_ATTR_IEEE_PFC, sizeof(pfc), &pfc)) - goto nla_put_failure; + return -EMSGSIZE; } app = nla_nest_start(skb, DCB_ATTR_IEEE_APP_TABLE); if (!app) - goto nla_put_failure; + return -EMSGSIZE; spin_lock(&dcb_lock); list_for_each_entry(itr, &dcb_app_list, list) { @@ -1279,7 +1078,7 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) &itr->app); if (err) { spin_unlock(&dcb_lock); - goto nla_put_failure; + return -EMSGSIZE; } } } @@ -1298,7 +1097,7 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) err = ops->ieee_peer_getets(netdev, &ets); if (!err && nla_put(skb, DCB_ATTR_IEEE_PEER_ETS, sizeof(ets), &ets)) - goto nla_put_failure; + return -EMSGSIZE; } if (ops->ieee_peer_getpfc) { @@ -1306,7 +1105,7 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) err = ops->ieee_peer_getpfc(netdev, &pfc); if (!err && nla_put(skb, DCB_ATTR_IEEE_PEER_PFC, sizeof(pfc), &pfc)) - goto nla_put_failure; + return -EMSGSIZE; } if (ops->peer_getappinfo && ops->peer_getapptable) { @@ -1315,20 +1114,17 @@ static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev) DCB_ATTR_IEEE_APP_UNSPEC, DCB_ATTR_IEEE_APP); if (err) - goto nla_put_failure; + return -EMSGSIZE; } nla_nest_end(skb, ieee); if (dcbx >= 0) { err = nla_put_u8(skb, DCB_ATTR_DCBX, dcbx); if (err) - goto nla_put_failure; + return -EMSGSIZE; } return 0; - -nla_put_failure: - return err; } static int dcbnl_cee_pg_fill(struct sk_buff *skb, struct net_device *dev, @@ -1340,13 
+1136,13 @@ static int dcbnl_cee_pg_fill(struct sk_buff *skb, struct net_device *dev, struct nlattr *pg = nla_nest_start(skb, i); if (!pg) - goto nla_put_failure; + return -EMSGSIZE; for (i = DCB_PG_ATTR_TC_0; i <= DCB_PG_ATTR_TC_7; i++) { struct nlattr *tc_nest = nla_nest_start(skb, i); if (!tc_nest) - goto nla_put_failure; + return -EMSGSIZE; pgid = DCB_ATTR_VALUE_UNDEFINED; prio = DCB_ATTR_VALUE_UNDEFINED; @@ -1364,7 +1160,7 @@ static int dcbnl_cee_pg_fill(struct sk_buff *skb, struct net_device *dev, nla_put_u8(skb, DCB_TC_ATTR_PARAM_UP_MAPPING, up_map) || nla_put_u8(skb, DCB_TC_ATTR_PARAM_STRICT_PRIO, prio) || nla_put_u8(skb, DCB_TC_ATTR_PARAM_BW_PCT, tc_pct)) - goto nla_put_failure; + return -EMSGSIZE; nla_nest_end(skb, tc_nest); } @@ -1378,13 +1174,10 @@ static int dcbnl_cee_pg_fill(struct sk_buff *skb, struct net_device *dev, ops->getpgbwgcfgtx(dev, i - DCB_PG_ATTR_BW_ID_0, &tc_pct); if (nla_put_u8(skb, i, tc_pct)) - goto nla_put_failure; + return -EMSGSIZE; } nla_nest_end(skb, pg); return 0; - -nla_put_failure: - return -EMSGSIZE; } static int dcbnl_cee_fill(struct sk_buff *skb, struct net_device *netdev) @@ -1531,27 +1324,16 @@ static int dcbnl_notify(struct net_device *dev, int event, int cmd, struct net *net = dev_net(dev); struct sk_buff *skb; struct nlmsghdr *nlh; - struct dcbmsg *dcb; const struct dcbnl_rtnl_ops *ops = dev->dcbnl_ops; int err; if (!ops) return -EOPNOTSUPP; - skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + skb = dcbnl_newmsg(event, cmd, pid, seq, 0, &nlh); if (!skb) return -ENOBUFS; - nlh = nlmsg_put(skb, pid, 0, event, sizeof(*dcb), 0); - if (nlh == NULL) { - nlmsg_free(skb); - return -EMSGSIZE; - } - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = cmd; - if (dcbx_ver == DCB_CAP_DCBX_VER_IEEE) err = dcbnl_ieee_fill(skb, dev); else @@ -1559,8 +1341,7 @@ static int dcbnl_notify(struct net_device *dev, int event, int cmd, if (err < 0) { /* Report error to broadcast listeners */ - nlmsg_cancel(skb, nlh); - kfree_skb(skb); + nlmsg_free(skb); rtnl_set_sk_err(net, RTNLGRP_DCB, err); } else { /* End nlmsg and notify broadcast listeners */ @@ -1590,15 +1371,15 @@ EXPORT_SYMBOL(dcbnl_cee_notify); * No attempt is made to reconcile the case where only part of the * cmd can be completed. 
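
A pattern that repeats through every dcbnl.c hunk above is worth spelling out once: handlers no longer allocate, fill, end and unicast their own dcbnl_skb. They receive the reply skb from the caller, append their attributes, and return 0 or a negative errno; allocation via dcbnl_newmsg() and the final nlmsg_end()/rtnl_unicast() live in the common dcb_doit() code that appears further below. Condensed to its essentials, and using a hypothetical name that mirrors the real dcbnl_getstate() above, a converted GET handler has this shape:

#include <linux/dcbnl.h>
#include <linux/netdevice.h>
#include <net/dcbnl.h>
#include <net/netlink.h>

/* Condensed illustration of the post-patch handler calling convention;
 * the name is hypothetical, the body mirrors dcbnl_getstate() above.
 */
static int example_getstate(struct net_device *netdev, struct nlmsghdr *nlh,
			    u32 seq, struct nlattr **tb, struct sk_buff *skb)
{
	if (!netdev->dcbnl_ops->getstate)
		return -EOPNOTSUPP;	/* caller frees the reply skb */

	/* just append the answer; nlmsg_end() and rtnl_unicast() are
	 * performed by the common dcb_doit() caller
	 */
	return nla_put_u8(skb, DCB_ATTR_STATE,
			  netdev->dcbnl_ops->getstate(netdev));
}
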
*/ -static int dcbnl_ieee_set(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1]; - int err = -EOPNOTSUPP; + int err; if (!ops) - return err; + return -EOPNOTSUPP; if (!tb[DCB_ATTR_IEEE]) return -EINVAL; @@ -1649,58 +1430,28 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlattr **tb, } err: - dcbnl_reply(err, RTM_SETDCB, DCB_CMD_IEEE_SET, DCB_ATTR_IEEE, - pid, seq, flags); + err = nla_put_u8(skb, DCB_ATTR_IEEE, err); dcbnl_ieee_notify(netdev, RTM_SETDCB, DCB_CMD_IEEE_SET, seq, 0); return err; } -static int dcbnl_ieee_get(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_ieee_get(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct net *net = dev_net(netdev); - struct sk_buff *skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; - int err; if (!ops) return -EOPNOTSUPP; - skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!skb) - return -ENOBUFS; - - nlh = nlmsg_put(skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - if (nlh == NULL) { - nlmsg_free(skb); - return -EMSGSIZE; - } - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_IEEE_GET; - - err = dcbnl_ieee_fill(skb, netdev); - - if (err < 0) { - nlmsg_cancel(skb, nlh); - kfree_skb(skb); - } else { - nlmsg_end(skb, nlh); - err = rtnl_unicast(skb, net, pid); - } - - return err; + return dcbnl_ieee_fill(skb, netdev); } -static int dcbnl_ieee_del(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_ieee_del(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1]; - int err = -EOPNOTSUPP; + int err; if (!ops) return -EOPNOTSUPP; @@ -1733,32 +1484,26 @@ static int dcbnl_ieee_del(struct net_device *netdev, struct nlattr **tb, } err: - dcbnl_reply(err, RTM_SETDCB, DCB_CMD_IEEE_DEL, DCB_ATTR_IEEE, - pid, seq, flags); + err = nla_put_u8(skb, DCB_ATTR_IEEE, err); dcbnl_ieee_notify(netdev, RTM_SETDCB, DCB_CMD_IEEE_DEL, seq, 0); return err; } /* DCBX configuration */ -static int dcbnl_getdcbx(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getdcbx(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret; - if (!netdev->dcbnl_ops->getdcbx) return -EOPNOTSUPP; - ret = dcbnl_reply(netdev->dcbnl_ops->getdcbx(netdev), RTM_GETDCB, - DCB_CMD_GDCBX, DCB_ATTR_DCBX, pid, seq, flags); - - return ret; + return nla_put_u8(skb, DCB_ATTR_DCBX, + netdev->dcbnl_ops->getdcbx(netdev)); } -static int dcbnl_setdcbx(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setdcbx(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - int ret; u8 value; if (!netdev->dcbnl_ops->setdcbx) @@ -1769,19 +1514,13 @@ static int dcbnl_setdcbx(struct net_device *netdev, struct nlattr **tb, value = nla_get_u8(tb[DCB_ATTR_DCBX]); - ret = dcbnl_reply(netdev->dcbnl_ops->setdcbx(netdev, value), - RTM_SETDCB, DCB_CMD_SDCBX, DCB_ATTR_DCBX, - pid, seq, flags); - - return 
ret; + return nla_put_u8(skb, DCB_ATTR_DCBX, + netdev->dcbnl_ops->setdcbx(netdev, value)); } -static int dcbnl_getfeatcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_getfeatcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct sk_buff *dcbnl_skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; struct nlattr *data[DCB_FEATCFG_ATTR_MAX + 1], *nest; u8 value; int ret, i; @@ -1796,25 +1535,11 @@ static int dcbnl_getfeatcfg(struct net_device *netdev, struct nlattr **tb, ret = nla_parse_nested(data, DCB_FEATCFG_ATTR_MAX, tb[DCB_ATTR_FEATCFG], dcbnl_featcfg_nest); if (ret) - goto err_out; - - dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!dcbnl_skb) { - ret = -ENOBUFS; - goto err_out; - } - - nlh = NLMSG_NEW(dcbnl_skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_GFEATCFG; + return ret; - nest = nla_nest_start(dcbnl_skb, DCB_ATTR_FEATCFG); - if (!nest) { - ret = -EMSGSIZE; - goto nla_put_failure; - } + nest = nla_nest_start(skb, DCB_ATTR_FEATCFG); + if (!nest) + return -EMSGSIZE; if (data[DCB_FEATCFG_ATTR_ALL]) getall = 1; @@ -1825,28 +1550,21 @@ static int dcbnl_getfeatcfg(struct net_device *netdev, struct nlattr **tb, ret = netdev->dcbnl_ops->getfeatcfg(netdev, i, &value); if (!ret) - ret = nla_put_u8(dcbnl_skb, i, value); + ret = nla_put_u8(skb, i, value); if (ret) { - nla_nest_cancel(dcbnl_skb, nest); + nla_nest_cancel(skb, nest); goto nla_put_failure; } } - nla_nest_end(dcbnl_skb, nest); + nla_nest_end(skb, nest); - nlmsg_end(dcbnl_skb, nlh); - - return rtnl_unicast(dcbnl_skb, &init_net, pid); nla_put_failure: - nlmsg_cancel(dcbnl_skb, nlh); -nlmsg_failure: - kfree_skb(dcbnl_skb); -err_out: return ret; } -static int dcbnl_setfeatcfg(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_setfeatcfg(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { struct nlattr *data[DCB_FEATCFG_ATTR_MAX + 1]; int ret, i; @@ -1876,60 +1594,73 @@ static int dcbnl_setfeatcfg(struct net_device *netdev, struct nlattr **tb, goto err; } err: - dcbnl_reply(ret, RTM_SETDCB, DCB_CMD_SFEATCFG, DCB_ATTR_FEATCFG, - pid, seq, flags); + ret = nla_put_u8(skb, DCB_ATTR_FEATCFG, ret); return ret; } /* Handle CEE DCBX GET commands. 
*/ -static int dcbnl_cee_get(struct net_device *netdev, struct nlattr **tb, - u32 pid, u32 seq, u16 flags) +static int dcbnl_cee_get(struct net_device *netdev, struct nlmsghdr *nlh, + u32 seq, struct nlattr **tb, struct sk_buff *skb) { - struct net *net = dev_net(netdev); - struct sk_buff *skb; - struct nlmsghdr *nlh; - struct dcbmsg *dcb; const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops; - int err; if (!ops) return -EOPNOTSUPP; - skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); - if (!skb) - return -ENOBUFS; - - nlh = nlmsg_put(skb, pid, seq, RTM_GETDCB, sizeof(*dcb), flags); - if (nlh == NULL) { - nlmsg_free(skb); - return -EMSGSIZE; - } + return dcbnl_cee_fill(skb, netdev); +} - dcb = NLMSG_DATA(nlh); - dcb->dcb_family = AF_UNSPEC; - dcb->cmd = DCB_CMD_CEE_GET; +struct reply_func { + /* reply netlink message type */ + int type; - err = dcbnl_cee_fill(skb, netdev); + /* function to fill message contents */ + int (*cb)(struct net_device *, struct nlmsghdr *, u32, + struct nlattr **, struct sk_buff *); +}; - if (err < 0) { - nlmsg_cancel(skb, nlh); - nlmsg_free(skb); - } else { - nlmsg_end(skb, nlh); - err = rtnl_unicast(skb, net, pid); - } - return err; -} +static const struct reply_func reply_funcs[DCB_CMD_MAX+1] = { + [DCB_CMD_GSTATE] = { RTM_GETDCB, dcbnl_getstate }, + [DCB_CMD_SSTATE] = { RTM_SETDCB, dcbnl_setstate }, + [DCB_CMD_PFC_GCFG] = { RTM_GETDCB, dcbnl_getpfccfg }, + [DCB_CMD_PFC_SCFG] = { RTM_SETDCB, dcbnl_setpfccfg }, + [DCB_CMD_GPERM_HWADDR] = { RTM_GETDCB, dcbnl_getperm_hwaddr }, + [DCB_CMD_GCAP] = { RTM_GETDCB, dcbnl_getcap }, + [DCB_CMD_GNUMTCS] = { RTM_GETDCB, dcbnl_getnumtcs }, + [DCB_CMD_SNUMTCS] = { RTM_SETDCB, dcbnl_setnumtcs }, + [DCB_CMD_PFC_GSTATE] = { RTM_GETDCB, dcbnl_getpfcstate }, + [DCB_CMD_PFC_SSTATE] = { RTM_SETDCB, dcbnl_setpfcstate }, + [DCB_CMD_GAPP] = { RTM_GETDCB, dcbnl_getapp }, + [DCB_CMD_SAPP] = { RTM_SETDCB, dcbnl_setapp }, + [DCB_CMD_PGTX_GCFG] = { RTM_GETDCB, dcbnl_pgtx_getcfg }, + [DCB_CMD_PGTX_SCFG] = { RTM_SETDCB, dcbnl_pgtx_setcfg }, + [DCB_CMD_PGRX_GCFG] = { RTM_GETDCB, dcbnl_pgrx_getcfg }, + [DCB_CMD_PGRX_SCFG] = { RTM_SETDCB, dcbnl_pgrx_setcfg }, + [DCB_CMD_SET_ALL] = { RTM_SETDCB, dcbnl_setall }, + [DCB_CMD_BCN_GCFG] = { RTM_GETDCB, dcbnl_bcn_getcfg }, + [DCB_CMD_BCN_SCFG] = { RTM_SETDCB, dcbnl_bcn_setcfg }, + [DCB_CMD_IEEE_GET] = { RTM_GETDCB, dcbnl_ieee_get }, + [DCB_CMD_IEEE_SET] = { RTM_SETDCB, dcbnl_ieee_set }, + [DCB_CMD_IEEE_DEL] = { RTM_SETDCB, dcbnl_ieee_del }, + [DCB_CMD_GDCBX] = { RTM_GETDCB, dcbnl_getdcbx }, + [DCB_CMD_SDCBX] = { RTM_SETDCB, dcbnl_setdcbx }, + [DCB_CMD_GFEATCFG] = { RTM_GETDCB, dcbnl_getfeatcfg }, + [DCB_CMD_SFEATCFG] = { RTM_SETDCB, dcbnl_setfeatcfg }, + [DCB_CMD_CEE_GET] = { RTM_GETDCB, dcbnl_cee_get }, +}; static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct net *net = sock_net(skb->sk); struct net_device *netdev; - struct dcbmsg *dcb = (struct dcbmsg *)NLMSG_DATA(nlh); + struct dcbmsg *dcb = nlmsg_data(nlh); struct nlattr *tb[DCB_ATTR_MAX + 1]; u32 pid = skb ? 
NETLINK_CB(skb).pid : 0; int ret = -EINVAL; + struct sk_buff *reply_skb; + struct nlmsghdr *reply_nlh = NULL; + const struct reply_func *fn; if (!net_eq(net, &init_net)) return -EINVAL; @@ -1939,136 +1670,78 @@ static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) if (ret < 0) return ret; + if (dcb->cmd > DCB_CMD_MAX) + return -EINVAL; + + /* check if a reply function has been defined for the command */ + fn = &reply_funcs[dcb->cmd]; + if (!fn->cb) + return -EOPNOTSUPP; + if (!tb[DCB_ATTR_IFNAME]) return -EINVAL; netdev = dev_get_by_name(&init_net, nla_data(tb[DCB_ATTR_IFNAME])); if (!netdev) - return -EINVAL; + return -ENODEV; - if (!netdev->dcbnl_ops) - goto errout; - - switch (dcb->cmd) { - case DCB_CMD_GSTATE: - ret = dcbnl_getstate(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PFC_GCFG: - ret = dcbnl_getpfccfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GPERM_HWADDR: - ret = dcbnl_getperm_hwaddr(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PGTX_GCFG: - ret = dcbnl_pgtx_getcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PGRX_GCFG: - ret = dcbnl_pgrx_getcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_BCN_GCFG: - ret = dcbnl_bcn_getcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_SSTATE: - ret = dcbnl_setstate(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PFC_SCFG: - ret = dcbnl_setpfccfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); + if (!netdev->dcbnl_ops) { + ret = -EOPNOTSUPP; goto out; + } - case DCB_CMD_SET_ALL: - ret = dcbnl_setall(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PGTX_SCFG: - ret = dcbnl_pgtx_setcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PGRX_SCFG: - ret = dcbnl_pgrx_setcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GCAP: - ret = dcbnl_getcap(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GNUMTCS: - ret = dcbnl_getnumtcs(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_SNUMTCS: - ret = dcbnl_setnumtcs(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PFC_GSTATE: - ret = dcbnl_getpfcstate(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_PFC_SSTATE: - ret = dcbnl_setpfcstate(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_BCN_SCFG: - ret = dcbnl_bcn_setcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GAPP: - ret = dcbnl_getapp(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_SAPP: - ret = dcbnl_setapp(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_IEEE_SET: - ret = dcbnl_ieee_set(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_IEEE_GET: - ret = dcbnl_ieee_get(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_IEEE_DEL: - ret = dcbnl_ieee_del(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GDCBX: - ret = dcbnl_getdcbx(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_SDCBX: - ret = dcbnl_setdcbx(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_GFEATCFG: - ret = 
dcbnl_getfeatcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); - goto out; - case DCB_CMD_SFEATCFG: - ret = dcbnl_setfeatcfg(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); + reply_skb = dcbnl_newmsg(fn->type, dcb->cmd, pid, nlh->nlmsg_seq, + nlh->nlmsg_flags, &reply_nlh); + if (!reply_skb) { + ret = -ENOBUFS; goto out; - case DCB_CMD_CEE_GET: - ret = dcbnl_cee_get(netdev, tb, pid, nlh->nlmsg_seq, - nlh->nlmsg_flags); + } + + ret = fn->cb(netdev, nlh, nlh->nlmsg_seq, tb, reply_skb); + if (ret < 0) { + nlmsg_free(reply_skb); goto out; - default: - goto errout; } -errout: - ret = -EINVAL; + + nlmsg_end(reply_skb, reply_nlh); + + ret = rtnl_unicast(reply_skb, &init_net, pid); out: dev_put(netdev); return ret; } +static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app, + int ifindex, int prio) +{ + struct dcb_app_type *itr; + + list_for_each_entry(itr, &dcb_app_list, list) { + if (itr->app.selector == app->selector && + itr->app.protocol == app->protocol && + itr->ifindex == ifindex && + (!prio || itr->app.priority == prio)) + return itr; + } + + return NULL; +} + +static int dcb_app_add(const struct dcb_app *app, int ifindex) +{ + struct dcb_app_type *entry; + + entry = kmalloc(sizeof(*entry), GFP_ATOMIC); + if (!entry) + return -ENOMEM; + + memcpy(&entry->app, app, sizeof(*app)); + entry->ifindex = ifindex; + list_add(&entry->list, &dcb_app_list); + + return 0; +} + /** * dcb_getapp - retrieve the DCBX application user priority * @@ -2082,14 +1755,8 @@ u8 dcb_getapp(struct net_device *dev, struct dcb_app *app) u8 prio = 0; spin_lock(&dcb_lock); - list_for_each_entry(itr, &dcb_app_list, list) { - if (itr->app.selector == app->selector && - itr->app.protocol == app->protocol && - itr->ifindex == dev->ifindex) { - prio = itr->app.priority; - break; - } - } + if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) + prio = itr->app.priority; spin_unlock(&dcb_lock); return prio; @@ -2107,6 +1774,7 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new) { struct dcb_app_type *itr; struct dcb_app_type event; + int err = 0; event.ifindex = dev->ifindex; memcpy(&event.app, new, sizeof(event.app)); @@ -2115,36 +1783,23 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new) spin_lock(&dcb_lock); /* Search for existing match and replace */ - list_for_each_entry(itr, &dcb_app_list, list) { - if (itr->app.selector == new->selector && - itr->app.protocol == new->protocol && - itr->ifindex == dev->ifindex) { - if (new->priority) - itr->app.priority = new->priority; - else { - list_del(&itr->list); - kfree(itr); - } - goto out; + if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) { + if (new->priority) + itr->app.priority = new->priority; + else { + list_del(&itr->list); + kfree(itr); } + goto out; } /* App type does not exist add new application type */ - if (new->priority) { - struct dcb_app_type *entry; - entry = kmalloc(sizeof(struct dcb_app_type), GFP_ATOMIC); - if (!entry) { - spin_unlock(&dcb_lock); - return -ENOMEM; - } - - memcpy(&entry->app, new, sizeof(*new)); - entry->ifindex = dev->ifindex; - list_add(&entry->list, &dcb_app_list); - } + if (new->priority) + err = dcb_app_add(new, dev->ifindex); out: spin_unlock(&dcb_lock); - call_dcbevent_notifiers(DCB_APP_EVENT, &event); - return 0; + if (!err) + call_dcbevent_notifiers(DCB_APP_EVENT, &event); + return err; } EXPORT_SYMBOL(dcb_setapp); @@ -2161,13 +1816,8 @@ u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app) u8 prio = 0; spin_lock(&dcb_lock); - list_for_each_entry(itr, &dcb_app_list, list) { 
- if (itr->app.selector == app->selector && - itr->app.protocol == app->protocol && - itr->ifindex == dev->ifindex) { - prio |= 1 << itr->app.priority; - } - } + if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) + prio |= 1 << itr->app.priority; spin_unlock(&dcb_lock); return prio; @@ -2183,7 +1833,6 @@ EXPORT_SYMBOL(dcb_ieee_getapp_mask); */ int dcb_ieee_setapp(struct net_device *dev, struct dcb_app *new) { - struct dcb_app_type *itr, *entry; struct dcb_app_type event; int err = 0; @@ -2194,26 +1843,12 @@ int dcb_ieee_setapp(struct net_device *dev, struct dcb_app *new) spin_lock(&dcb_lock); /* Search for existing match and abort if found */ - list_for_each_entry(itr, &dcb_app_list, list) { - if (itr->app.selector == new->selector && - itr->app.protocol == new->protocol && - itr->app.priority == new->priority && - itr->ifindex == dev->ifindex) { - err = -EEXIST; - goto out; - } - } - - /* App entry does not exist add new entry */ - entry = kmalloc(sizeof(struct dcb_app_type), GFP_ATOMIC); - if (!entry) { - err = -ENOMEM; + if (dcb_app_lookup(new, dev->ifindex, new->priority)) { + err = -EEXIST; goto out; } - memcpy(&entry->app, new, sizeof(*new)); - entry->ifindex = dev->ifindex; - list_add(&entry->list, &dcb_app_list); + err = dcb_app_add(new, dev->ifindex); out: spin_unlock(&dcb_lock); if (!err) @@ -2240,19 +1875,12 @@ int dcb_ieee_delapp(struct net_device *dev, struct dcb_app *del) spin_lock(&dcb_lock); /* Search for existing match and remove it. */ - list_for_each_entry(itr, &dcb_app_list, list) { - if (itr->app.selector == del->selector && - itr->app.protocol == del->protocol && - itr->app.priority == del->priority && - itr->ifindex == dev->ifindex) { - list_del(&itr->list); - kfree(itr); - err = 0; - goto out; - } + if ((itr = dcb_app_lookup(del, dev->ifindex, del->priority))) { + list_del(&itr->list); + kfree(itr); + err = 0; } -out: spin_unlock(&dcb_lock); if (!err) call_dcbevent_notifiers(DCB_APP_EVENT, &event); diff --git a/net/dccp/ackvec.h b/net/dccp/ackvec.h index e2ab0627a5ff..a269aa7f7923 100644 --- a/net/dccp/ackvec.h +++ b/net/dccp/ackvec.h @@ -50,7 +50,8 @@ static inline u8 dccp_ackvec_state(const u8 *cell) return *cell & ~DCCPAV_MAX_RUNLEN; } -/** struct dccp_ackvec - Ack Vector main data structure +/** + * struct dccp_ackvec - Ack Vector main data structure * * This implements a fixed-size circular buffer within an array and is largely * based on Appendix A of RFC 4340. @@ -76,7 +77,8 @@ struct dccp_ackvec { struct list_head av_records; }; -/** struct dccp_ackvec_record - Records information about sent Ack Vectors +/** + * struct dccp_ackvec_record - Records information about sent Ack Vectors * * These list entries define the additional information which the HC-Receiver * keeps about recently-sent Ack Vectors; again refer to RFC 4340, Appendix A. @@ -121,6 +123,7 @@ static inline bool dccp_ackvec_is_empty(const struct dccp_ackvec *av) * @len: length of @vec * @nonce: whether @vec had an ECN nonce of 0 or 1 * @node: FIFO - arranged in descending order of ack_ackno + * * This structure is used by CCIDs to access Ack Vectors in a received skb. 
*/ struct dccp_ackvec_parsed { diff --git a/net/dccp/ccid.c b/net/dccp/ccid.c index 48b585a5cba7..597557254ddb 100644 --- a/net/dccp/ccid.c +++ b/net/dccp/ccid.c @@ -46,6 +46,7 @@ bool ccid_support_check(u8 const *ccid_array, u8 array_len) * ccid_get_builtin_ccids - Populate a list of built-in CCIDs * @ccid_array: pointer to copy into * @array_len: value to return length into + * * This function allocates memory - caller must see that it is freed after use. */ int ccid_get_builtin_ccids(u8 **ccid_array, u8 *array_len) diff --git a/net/dccp/ccids/ccid3.c b/net/dccp/ccids/ccid3.c index 8c67bedf85b0..d65e98798eca 100644 --- a/net/dccp/ccids/ccid3.c +++ b/net/dccp/ccids/ccid3.c @@ -113,6 +113,7 @@ static u32 ccid3_hc_tx_idle_rtt(struct ccid3_hc_tx_sock *hc, ktime_t now) /** * ccid3_hc_tx_update_x - Update allowed sending rate X * @stamp: most recent time if available - can be left NULL. + * * This function tracks draft rfc3448bis, check there for latest details. * * Note: X and X_recv are both stored in units of 64 * bytes/second, to support @@ -161,9 +162,11 @@ static void ccid3_hc_tx_update_x(struct sock *sk, ktime_t *stamp) } } -/* - * Track the mean packet size `s' (cf. RFC 4342, 5.3 and RFC 3448, 4.1) +/** + * ccid3_hc_tx_update_s - Track the mean packet size `s' * @len: DCCP packet payload size in bytes + * + * cf. RFC 4342, 5.3 and RFC 3448, 4.1 */ static inline void ccid3_hc_tx_update_s(struct ccid3_hc_tx_sock *hc, int len) { @@ -270,6 +273,7 @@ out: /** * ccid3_hc_tx_send_packet - Delay-based dequeueing of TX packets * @skb: next packet candidate to send on @sk + * * This function uses the convention of ccid_packet_dequeue_eval() and * returns a millisecond-delay value between 0 and t_mbi = 64000 msec. */ diff --git a/net/dccp/ccids/lib/loss_interval.c b/net/dccp/ccids/lib/loss_interval.c index 497723c4d4bb..57f9fd78c4df 100644 --- a/net/dccp/ccids/lib/loss_interval.c +++ b/net/dccp/ccids/lib/loss_interval.c @@ -133,6 +133,7 @@ static inline u8 tfrc_lh_is_new_loss(struct tfrc_loss_interval *cur, * @rh: Receive history containing a fresh loss event * @calc_first_li: Caller-dependent routine to compute length of first interval * @sk: Used by @calc_first_li in caller-specific way (subtyping) + * * Updates I_mean and returns 1 if a new interval has in fact been added to @lh. */ int tfrc_lh_interval_add(struct tfrc_loss_hist *lh, struct tfrc_rx_hist *rh, diff --git a/net/dccp/ccids/lib/packet_history.c b/net/dccp/ccids/lib/packet_history.c index de8fe294bf0b..08df7a3acb3d 100644 --- a/net/dccp/ccids/lib/packet_history.c +++ b/net/dccp/ccids/lib/packet_history.c @@ -315,6 +315,7 @@ static void __three_after_loss(struct tfrc_rx_hist *h) * @ndp: The NDP count belonging to @skb * @calc_first_li: Caller-dependent computation of first loss interval in @lh * @sk: Used by @calc_first_li (see tfrc_lh_interval_add) + * * Chooses action according to pending loss, updates LI database when a new * loss was detected, and does required post-processing. Returns 1 when caller * should send feedback, 0 otherwise. 
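
The dccp hunks in this stretch (ackvec.h, ccid.c, ccid3.c, loss_interval.c, packet_history.c, feat.c and friends) are documentation-only: summary lines move to the "name - description" form and a lone "*" is inserted between the @parameter block and the free-form body. A minimal layout sketch of the convention these hunks converge on; the function name and parameters below are invented purely for illustration, and the snippet shows comment layout only:

/**
 * tfrc_example_update - one-line summary in the "name - description" form
 * @h:   receive history being updated (hypothetical parameter)
 * @skb: packet that triggered the update (hypothetical parameter)
 *
 * The free-form body starts after the lone "*" separator above; without that
 * blank line the kernel-doc tooling folds this text into the description of
 * the last @parameter, which is what most of these hunks are correcting.
 */
static int tfrc_example_update(struct tfrc_rx_hist *h, struct sk_buff *skb);
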
@@ -387,7 +388,7 @@ static inline struct tfrc_rx_hist_entry * } /** - * tfrc_rx_hist_rtt_prev_s: previously suitable (wrt rtt_last_s) RTT-sampling entry + * tfrc_rx_hist_rtt_prev_s - previously suitable (wrt rtt_last_s) RTT-sampling entry */ static inline struct tfrc_rx_hist_entry * tfrc_rx_hist_rtt_prev_s(const struct tfrc_rx_hist *h) diff --git a/net/dccp/ccids/lib/tfrc_equation.c b/net/dccp/ccids/lib/tfrc_equation.c index a052a4377e26..88ef98285bec 100644 --- a/net/dccp/ccids/lib/tfrc_equation.c +++ b/net/dccp/ccids/lib/tfrc_equation.c @@ -611,6 +611,7 @@ static inline u32 tfrc_binsearch(u32 fval, u8 small) * @s: packet size in bytes * @R: RTT scaled by 1000000 (i.e., microseconds) * @p: loss ratio estimate scaled by 1000000 + * * Returns X_calc in bytes per second (not scaled). */ u32 tfrc_calc_x(u16 s, u32 R, u32 p) @@ -659,6 +660,7 @@ u32 tfrc_calc_x(u16 s, u32 R, u32 p) /** * tfrc_calc_x_reverse_lookup - try to find p given f(p) * @fvalue: function value to match, scaled by 1000000 + * * Returns closest match for p, also scaled by 1000000 */ u32 tfrc_calc_x_reverse_lookup(u32 fvalue) diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h index 9040be049d8c..708e75bf623d 100644 --- a/net/dccp/dccp.h +++ b/net/dccp/dccp.h @@ -352,6 +352,7 @@ static inline int dccp_bad_service_code(const struct sock *sk, * @dccpd_opt_len: total length of all options (5.8) in the packet * @dccpd_seq: sequence number * @dccpd_ack_seq: acknowledgment number subheader field value + * * This is used for transmission as well as for reception. */ struct dccp_skb_cb { diff --git a/net/dccp/feat.c b/net/dccp/feat.c index 78a2ad70e1b0..9733ddbc96cb 100644 --- a/net/dccp/feat.c +++ b/net/dccp/feat.c @@ -350,6 +350,7 @@ static int __dccp_feat_activate(struct sock *sk, const int idx, * @feat_num: feature to activate, one of %dccp_feature_numbers * @local: whether local (1) or remote (0) @feat_num is meant * @fval: the value (SP or NN) to activate, or NULL to use the default value + * * For general use this function is preferable over __dccp_feat_activate(). */ static int dccp_feat_activate(struct sock *sk, u8 feat_num, bool local, @@ -446,6 +447,7 @@ static struct dccp_feat_entry *dccp_feat_list_lookup(struct list_head *fn_list, * @head: list to add to * @feat: feature number * @local: whether the local (1) or remote feature with number @feat is meant + * * This is the only constructor and serves to ensure the above invariants. */ static struct dccp_feat_entry * @@ -504,6 +506,7 @@ static int dccp_feat_push_change(struct list_head *fn_list, u8 feat, u8 local, * @feat: one of %dccp_feature_numbers * @local: whether local (1) or remote (0) @feat_num is being confirmed * @fval: pointer to NN/SP value to be inserted or NULL + * * Returns 0 on success, a Reset code for further processing otherwise. */ static int dccp_feat_push_confirm(struct list_head *fn_list, u8 feat, u8 local, @@ -691,6 +694,7 @@ int dccp_feat_insert_opts(struct dccp_sock *dp, struct dccp_request_sock *dreq, * @feat: an NN feature from %dccp_feature_numbers * @mandatory: use Mandatory option if 1 * @nn_val: value to register (restricted to 4 bytes) + * * Note that NN features are local by definition (RFC 4340, 6.3.2). 
*/ static int __feat_register_nn(struct list_head *fn, u8 feat, @@ -760,6 +764,7 @@ int dccp_feat_register_sp(struct sock *sk, u8 feat, u8 is_local, * dccp_feat_nn_get - Query current/pending value of NN feature * @sk: DCCP socket of an established connection * @feat: NN feature number from %dccp_feature_numbers + * * For a known NN feature, returns value currently being negotiated, or * current (confirmed) value if no negotiation is going on. */ @@ -790,6 +795,7 @@ EXPORT_SYMBOL_GPL(dccp_feat_nn_get); * @sk: DCCP socket of an established connection * @feat: NN feature number from %dccp_feature_numbers * @nn_val: the new value to use + * * This function is used to communicate NN updates out-of-band. */ int dccp_feat_signal_nn_change(struct sock *sk, u8 feat, u64 nn_val) @@ -930,6 +936,7 @@ static const struct ccid_dependency *dccp_feat_ccid_deps(u8 ccid, bool is_local) * @fn: feature-negotiation list to update * @id: CCID number to track * @is_local: whether TX CCID (1) or RX CCID (0) is meant + * * This function needs to be called after registering all other features. */ static int dccp_feat_propagate_ccid(struct list_head *fn, u8 id, bool is_local) @@ -953,6 +960,7 @@ static int dccp_feat_propagate_ccid(struct list_head *fn, u8 id, bool is_local) /** * dccp_feat_finalise_settings - Finalise settings before starting negotiation * @dp: client or listening socket (settings will be inherited) + * * This is called after all registrations (socket initialisation, sysctls, and * sockopt calls), and before sending the first packet containing Change options * (ie. client-Request or server-Response), to ensure internal consistency. @@ -1284,6 +1292,7 @@ confirmation_failed: * @feat: NN number, one of %dccp_feature_numbers * @val: NN value * @len: length of @val in bytes + * * This function combines the functionality of change_recv/confirm_recv, with * the following differences (reset codes are the same): * - cleanup after receiving the Confirm; @@ -1379,6 +1388,7 @@ fast_path_failed: * @feat: one of %dccp_feature_numbers * @val: value contents of @opt * @len: length of @val in bytes + * * Returns 0 on success, a Reset code for ending the connection otherwise. */ int dccp_feat_parse_options(struct sock *sk, struct dccp_request_sock *dreq, diff --git a/net/dccp/input.c b/net/dccp/input.c index bc93a333931e..14cdafad7a90 100644 --- a/net/dccp/input.c +++ b/net/dccp/input.c @@ -710,6 +710,7 @@ EXPORT_SYMBOL_GPL(dccp_rcv_state_process); /** * dccp_sample_rtt - Validate and finalise computation of RTT sample * @delta: number of microseconds between packet and acknowledgment + * * The routine is kept generic to work in different contexts. It should be * called immediately when the ACK used for the RTT sample arrives. */ diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c index 07f5579ca756..176ecdba4a22 100644 --- a/net/dccp/ipv4.c +++ b/net/dccp/ipv4.c @@ -161,17 +161,10 @@ static inline void dccp_do_pmtu_discovery(struct sock *sk, if (sk->sk_state == DCCP_LISTEN) return; - /* We don't check in the destentry if pmtu discovery is forbidden - * on this route. We just assume that no packet_to_big packets - * are send back when pmtu discovery is not active. - * There is a small race when the user changes this flag in the - * route, but I think that's acceptable. - */ - if ((dst = __sk_dst_check(sk, 0)) == NULL) + dst = inet_csk_update_pmtu(sk, mtu); + if (!dst) return; - dst->ops->update_pmtu(dst, mtu); - /* Something is about to be wrong... 
Remember soft error * for the case, if this connection will not able to recover. */ @@ -195,6 +188,14 @@ static inline void dccp_do_pmtu_discovery(struct sock *sk, } /* else let the usual retransmit timer handle it */ } +static void dccp_do_redirect(struct sk_buff *skb, struct sock *sk) +{ + struct dst_entry *dst = __sk_dst_check(sk, 0); + + if (dst) + dst->ops->redirect(dst, sk, skb); +} + /* * This routine is called by the ICMP module when it gets some sort of error * condition. If err < 0 then the socket should be closed and the error @@ -259,6 +260,9 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info) } switch (type) { + case ICMP_REDIRECT: + dccp_do_redirect(skb, sk); + goto out; case ICMP_SOURCE_QUENCH: /* Just silently ignore these. */ goto out; @@ -477,7 +481,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk, struct rtable *rt; const struct iphdr *iph = ip_hdr(skb); struct flowi4 fl4 = { - .flowi4_oif = skb_rtable(skb)->rt_iif, + .flowi4_oif = inet_iif(skb), .daddr = iph->saddr, .saddr = iph->daddr, .flowi4_tos = RT_CONN_FLAGS(sk), diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c index fa9512d86f3b..56840b249f3b 100644 --- a/net/dccp/ipv6.c +++ b/net/dccp/ipv6.c @@ -130,6 +130,13 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, np = inet6_sk(sk); + if (type == NDISC_REDIRECT) { + struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie); + + if (dst) + dst->ops->redirect(dst, sk, skb); + } + if (type == ICMPV6_PKT_TOOBIG) { struct dst_entry *dst = NULL; @@ -138,37 +145,12 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, if ((1 << sk->sk_state) & (DCCPF_LISTEN | DCCPF_CLOSED)) goto out; - /* icmp should have updated the destination cache entry */ - dst = __sk_dst_check(sk, np->dst_cookie); - if (dst == NULL) { - struct inet_sock *inet = inet_sk(sk); - struct flowi6 fl6; - - /* BUGGG_FUTURE: Again, it is not clear how - to handle rthdr case. Ignore this complexity - for now. 
- */ - memset(&fl6, 0, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_DCCP; - fl6.daddr = np->daddr; - fl6.saddr = np->saddr; - fl6.flowi6_oif = sk->sk_bound_dev_if; - fl6.fl6_dport = inet->inet_dport; - fl6.fl6_sport = inet->inet_sport; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); - - dst = ip6_dst_lookup_flow(sk, &fl6, NULL, false); - if (IS_ERR(dst)) { - sk->sk_err_soft = -PTR_ERR(dst); - goto out; - } - } else - dst_hold(dst); + dst = inet6_csk_update_pmtu(sk, ntohl(info)); + if (!dst) + goto out; - if (inet_csk(sk)->icsk_pmtu_cookie > dst_mtu(dst)) { + if (inet_csk(sk)->icsk_pmtu_cookie > dst_mtu(dst)) dccp_sync_mss(sk, dst_mtu(dst)); - } /* else let the usual retransmit timer handle it */ - dst_release(dst); goto out; } @@ -237,7 +219,6 @@ static int dccp_v6_send_response(struct sock *sk, struct request_sock *req, struct inet6_request_sock *ireq6 = inet6_rsk(req); struct ipv6_pinfo *np = inet6_sk(sk); struct sk_buff *skb; - struct ipv6_txoptions *opt = NULL; struct in6_addr *final_p, final; struct flowi6 fl6; int err = -1; @@ -253,9 +234,8 @@ static int dccp_v6_send_response(struct sock *sk, struct request_sock *req, fl6.fl6_sport = inet_rsk(req)->loc_port; security_req_classify_flow(req, flowi6_to_flowi(&fl6)); - opt = np->opt; - final_p = fl6_update_dst(&fl6, opt, &final); + final_p = fl6_update_dst(&fl6, np->opt, &final); dst = ip6_dst_lookup_flow(sk, &fl6, final_p, false); if (IS_ERR(dst)) { @@ -272,13 +252,11 @@ static int dccp_v6_send_response(struct sock *sk, struct request_sock *req, &ireq6->loc_addr, &ireq6->rmt_addr); fl6.daddr = ireq6->rmt_addr; - err = ip6_xmit(sk, skb, &fl6, opt, np->tclass); + err = ip6_xmit(sk, skb, &fl6, np->opt, np->tclass); err = net_xmit_eval(err); } done: - if (opt != NULL && opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); dst_release(dst); return err; } @@ -473,7 +451,6 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk, struct inet_sock *newinet; struct dccp6_sock *newdp6; struct sock *newsk; - struct ipv6_txoptions *opt; if (skb->protocol == htons(ETH_P_IP)) { /* @@ -518,7 +495,6 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk, return newsk; } - opt = np->opt; if (sk_acceptq_is_full(sk)) goto out_overflow; @@ -530,7 +506,7 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk, memset(&fl6, 0, sizeof(fl6)); fl6.flowi6_proto = IPPROTO_DCCP; fl6.daddr = ireq6->rmt_addr; - final_p = fl6_update_dst(&fl6, opt, &final); + final_p = fl6_update_dst(&fl6, np->opt, &final); fl6.saddr = ireq6->loc_addr; fl6.flowi6_oif = sk->sk_bound_dev_if; fl6.fl6_dport = inet_rsk(req)->rmt_port; @@ -595,11 +571,8 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk, * Yes, keeping reference count would be much more clever, but we make * one more one thing there: reattach optmem to newsk. 
*/ - if (opt != NULL) { - newnp->opt = ipv6_dup_options(newsk, opt); - if (opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); - } + if (np->opt != NULL) + newnp->opt = ipv6_dup_options(newsk, np->opt); inet_csk(newsk)->icsk_ext_hdr_len = 0; if (newnp->opt != NULL) @@ -625,8 +598,6 @@ out_nonewsk: dst_release(dst); out: NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS); - if (opt != NULL && opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); return NULL; } diff --git a/net/dccp/options.c b/net/dccp/options.c index 68fa6b7a3e01..a58e0b634050 100644 --- a/net/dccp/options.c +++ b/net/dccp/options.c @@ -527,6 +527,7 @@ int dccp_insert_option_mandatory(struct sk_buff *skb) * @val: NN value or SP array (preferred element first) to copy * @len: true length of @val in bytes (excluding first element repetition) * @repeat_first: whether to copy the first element of @val twice + * * The last argument is used to construct Confirm options, where the preferred * value and the preference list appear separately (RFC 4340, 6.3.1). Preference * lists are kept such that the preferred entry is always first, so we only need diff --git a/net/dccp/output.c b/net/dccp/output.c index 787367308797..d17fc90a74b6 100644 --- a/net/dccp/output.c +++ b/net/dccp/output.c @@ -214,6 +214,7 @@ void dccp_write_space(struct sock *sk) * dccp_wait_for_ccid - Await CCID send permission * @sk: socket to wait for * @delay: timeout in jiffies + * * This is used by CCIDs which need to delay the send time in process context. */ static int dccp_wait_for_ccid(struct sock *sk, unsigned long delay) diff --git a/net/decnet/dn_fib.c b/net/decnet/dn_fib.c index 7eaf98799729..102d6106a942 100644 --- a/net/decnet/dn_fib.c +++ b/net/decnet/dn_fib.c @@ -505,6 +505,14 @@ static int dn_fib_check_attr(struct rtmsg *r, struct rtattr **rta) return 0; } +static inline u32 rtm_get_table(struct rtattr **rta, u8 table) +{ + if (rta[RTA_TABLE - 1]) + table = nla_get_u32((struct nlattr *) rta[RTA_TABLE - 1]); + + return table; +} + static int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct net *net = sock_net(skb->sk); diff --git a/net/decnet/dn_neigh.c b/net/decnet/dn_neigh.c index ac90f658586c..3aede1b459fd 100644 --- a/net/decnet/dn_neigh.c +++ b/net/decnet/dn_neigh.c @@ -202,7 +202,7 @@ static int dn_neigh_output_packet(struct sk_buff *skb) { struct dst_entry *dst = skb_dst(skb); struct dn_route *rt = (struct dn_route *)dst; - struct neighbour *neigh = dst_get_neighbour_noref(dst); + struct neighbour *neigh = rt->n; struct net_device *dev = neigh->dev; char mac_addr[ETH_ALEN]; unsigned int seq; @@ -240,7 +240,7 @@ static int dn_long_output(struct neighbour *neigh, struct sk_buff *skb) kfree_skb(skb); return -ENOBUFS; } - kfree_skb(skb); + consume_skb(skb); skb = skb2; net_info_ratelimited("dn_long_output: Increasing headroom\n"); } @@ -283,7 +283,7 @@ static int dn_short_output(struct neighbour *neigh, struct sk_buff *skb) kfree_skb(skb); return -ENOBUFS; } - kfree_skb(skb); + consume_skb(skb); skb = skb2; net_info_ratelimited("dn_short_output: Increasing headroom\n"); } @@ -322,7 +322,7 @@ static int dn_phase3_output(struct neighbour *neigh, struct sk_buff *skb) kfree_skb(skb); return -ENOBUFS; } - kfree_skb(skb); + consume_skb(skb); skb = skb2; net_info_ratelimited("dn_phase3_output: Increasing headroom\n"); } diff --git a/net/decnet/dn_nsp_out.c b/net/decnet/dn_nsp_out.c index 564a6ad13ce7..8a96047c7c94 100644 --- a/net/decnet/dn_nsp_out.c +++ b/net/decnet/dn_nsp_out.c @@ -322,7 +322,7 @@ static __le16 
*dn_mk_ack_header(struct sock *sk, struct sk_buff *skb, unsigned c /* Set "cross subchannel" bit in ackcrs */ ackcrs |= 0x2000; - ptr = (__le16 *)dn_mk_common_header(scp, skb, msgflag, hlen); + ptr = dn_mk_common_header(scp, skb, msgflag, hlen); *ptr++ = cpu_to_le16(acknum); *ptr++ = cpu_to_le16(ackcrs); diff --git a/net/decnet/dn_route.c b/net/decnet/dn_route.c index 586302e557ad..85a3604c87c8 100644 --- a/net/decnet/dn_route.c +++ b/net/decnet/dn_route.c @@ -114,10 +114,16 @@ static struct dst_entry *dn_dst_check(struct dst_entry *, __u32); static unsigned int dn_dst_default_advmss(const struct dst_entry *dst); static unsigned int dn_dst_mtu(const struct dst_entry *dst); static void dn_dst_destroy(struct dst_entry *); +static void dn_dst_ifdown(struct dst_entry *, struct net_device *dev, int how); static struct dst_entry *dn_dst_negative_advice(struct dst_entry *); static void dn_dst_link_failure(struct sk_buff *); -static void dn_dst_update_pmtu(struct dst_entry *dst, u32 mtu); -static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, const void *daddr); +static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb , u32 mtu); +static void dn_dst_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb); +static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr); static int dn_route_input(struct sk_buff *); static void dn_run_flush(unsigned long dummy); @@ -138,17 +144,37 @@ static struct dst_ops dn_dst_ops = { .mtu = dn_dst_mtu, .cow_metrics = dst_cow_metrics_generic, .destroy = dn_dst_destroy, + .ifdown = dn_dst_ifdown, .negative_advice = dn_dst_negative_advice, .link_failure = dn_dst_link_failure, .update_pmtu = dn_dst_update_pmtu, + .redirect = dn_dst_redirect, .neigh_lookup = dn_dst_neigh_lookup, }; static void dn_dst_destroy(struct dst_entry *dst) { + struct dn_route *rt = (struct dn_route *) dst; + + if (rt->n) + neigh_release(rt->n); dst_destroy_metrics_generic(dst); } +static void dn_dst_ifdown(struct dst_entry *dst, struct net_device *dev, int how) +{ + if (how) { + struct dn_route *rt = (struct dn_route *) dst; + struct neighbour *n = rt->n; + + if (n && n->dev == dev) { + n->dev = dev_net(dev)->loopback_dev; + dev_hold(n->dev); + dev_put(dev); + } + } +} + static __inline__ unsigned int dn_hash(__le16 src, __le16 dst) { __u16 tmp = (__u16 __force)(src ^ dst); @@ -242,9 +268,11 @@ static int dn_dst_gc(struct dst_ops *ops) * We update both the mtu and the advertised mss (i.e. the segment size we * advertise to the other end). */ -static void dn_dst_update_pmtu(struct dst_entry *dst, u32 mtu) +static void dn_dst_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) { - struct neighbour *n = dst_get_neighbour_noref(dst); + struct dn_route *rt = (struct dn_route *) dst; + struct neighbour *n = rt->n; u32 min_mtu = 230; struct dn_dev *dn; @@ -269,6 +297,11 @@ static void dn_dst_update_pmtu(struct dst_entry *dst, u32 mtu) } } +static void dn_dst_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) +{ +} + /* * When a route has been marked obsolete. (e.g. 
routing cache flush) */ @@ -713,7 +746,8 @@ out: static int dn_to_neigh_output(struct sk_buff *skb) { struct dst_entry *dst = skb_dst(skb); - struct neighbour *n = dst_get_neighbour_noref(dst); + struct dn_route *rt = (struct dn_route *) dst; + struct neighbour *n = rt->n; return n->output(n, skb); } @@ -727,7 +761,7 @@ static int dn_output(struct sk_buff *skb) int err = -EINVAL; - if (dst_get_neighbour_noref(dst) == NULL) + if (rt->n == NULL) goto error; skb->dev = dev; @@ -828,7 +862,9 @@ static unsigned int dn_dst_mtu(const struct dst_entry *dst) return mtu ? : dst->dev->mtu; } -static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, const void *daddr) +static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr) { return __neigh_lookup_errno(&dn_neigh_table, daddr, dst->dev); } @@ -848,11 +884,11 @@ static int dn_rt_set_next_hop(struct dn_route *rt, struct dn_fib_res *res) } rt->rt_type = res->type; - if (dev != NULL && dst_get_neighbour_noref(&rt->dst) == NULL) { + if (dev != NULL && rt->n == NULL) { n = __neigh_lookup_errno(&dn_neigh_table, &rt->rt_gateway, dev); if (IS_ERR(n)) return PTR_ERR(n); - dst_set_neighbour(&rt->dst, n); + rt->n = n; } if (dst_metric(&rt->dst, RTAX_MTU) > rt->dst.dev->mtu) @@ -1140,7 +1176,7 @@ make_route: if (dev_out->flags & IFF_LOOPBACK) flags |= RTCF_LOCAL; - rt = dst_alloc(&dn_dst_ops, dev_out, 1, 0, DST_HOST); + rt = dst_alloc(&dn_dst_ops, dev_out, 1, DST_OBSOLETE_NONE, DST_HOST); if (rt == NULL) goto e_nobufs; @@ -1159,7 +1195,7 @@ make_route: rt->rt_dst_map = fld.daddr; rt->rt_src_map = fld.saddr; - dst_set_neighbour(&rt->dst, neigh); + rt->n = neigh; neigh = NULL; rt->dst.lastuse = jiffies; @@ -1388,7 +1424,6 @@ static int dn_route_input_slow(struct sk_buff *skb) /* Packet was intra-ethernet, so we know its on-link */ if (cb->rt_flags & DN_RT_F_IE) { gateway = cb->src; - flags |= RTCF_DIRECTSRC; goto make_route; } @@ -1401,14 +1436,13 @@ static int dn_route_input_slow(struct sk_buff *skb) /* Close eyes and pray */ gateway = cb->src; - flags |= RTCF_DIRECTSRC; goto make_route; default: goto e_inval; } make_route: - rt = dst_alloc(&dn_dst_ops, out_dev, 0, 0, DST_HOST); + rt = dst_alloc(&dn_dst_ops, out_dev, 0, DST_OBSOLETE_NONE, DST_HOST); if (rt == NULL) goto e_nobufs; @@ -1429,7 +1463,7 @@ make_route: rt->fld.flowidn_iif = in_dev->ifindex; rt->fld.flowidn_mark = fld.flowidn_mark; - dst_set_neighbour(&rt->dst, neigh); + rt->n = neigh; rt->dst.lastuse = jiffies; rt->dst.output = dn_rt_bug; switch (res.type) { @@ -1515,54 +1549,68 @@ static int dn_rt_fill_info(struct sk_buff *skb, u32 pid, u32 seq, struct dn_route *rt = (struct dn_route *)skb_dst(skb); struct rtmsg *r; struct nlmsghdr *nlh; - unsigned char *b = skb_tail_pointer(skb); long expires; - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*r), flags); - r = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*r), flags); + if (!nlh) + return -EMSGSIZE; + + r = nlmsg_data(nlh); r->rtm_family = AF_DECnet; r->rtm_dst_len = 16; r->rtm_src_len = 0; r->rtm_tos = 0; r->rtm_table = RT_TABLE_MAIN; - RTA_PUT_U32(skb, RTA_TABLE, RT_TABLE_MAIN); r->rtm_type = rt->rt_type; r->rtm_flags = (rt->rt_flags & ~0xFFFF) | RTM_F_CLONED; r->rtm_scope = RT_SCOPE_UNIVERSE; r->rtm_protocol = RTPROT_UNSPEC; + if (rt->rt_flags & RTCF_NOTIFY) r->rtm_flags |= RTM_F_NOTIFY; - RTA_PUT(skb, RTA_DST, 2, &rt->rt_daddr); + + if (nla_put_u32(skb, RTA_TABLE, RT_TABLE_MAIN) < 0 || + nla_put_le16(skb, RTA_DST, rt->rt_daddr) < 0) + goto errout; + if 
(rt->fld.saddr) { r->rtm_src_len = 16; - RTA_PUT(skb, RTA_SRC, 2, &rt->fld.saddr); + if (nla_put_le16(skb, RTA_SRC, rt->fld.saddr) < 0) + goto errout; } - if (rt->dst.dev) - RTA_PUT(skb, RTA_OIF, sizeof(int), &rt->dst.dev->ifindex); + if (rt->dst.dev && + nla_put_u32(skb, RTA_OIF, rt->dst.dev->ifindex) < 0) + goto errout; + /* * Note to self - change this if input routes reverse direction when * they deal only with inputs and not with replies like they do * currently. */ - RTA_PUT(skb, RTA_PREFSRC, 2, &rt->rt_local_src); - if (rt->rt_daddr != rt->rt_gateway) - RTA_PUT(skb, RTA_GATEWAY, 2, &rt->rt_gateway); + if (nla_put_le16(skb, RTA_PREFSRC, rt->rt_local_src) < 0) + goto errout; + + if (rt->rt_daddr != rt->rt_gateway && + nla_put_le16(skb, RTA_GATEWAY, rt->rt_gateway) < 0) + goto errout; + if (rtnetlink_put_metrics(skb, dst_metrics_ptr(&rt->dst)) < 0) - goto rtattr_failure; + goto errout; + expires = rt->dst.expires ? rt->dst.expires - jiffies : 0; - if (rtnl_put_cacheinfo(skb, &rt->dst, 0, 0, 0, expires, + if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, rt->dst.error) < 0) - goto rtattr_failure; - if (dn_is_input_route(rt)) - RTA_PUT(skb, RTA_IIF, sizeof(int), &rt->fld.flowidn_iif); + goto errout; - nlh->nlmsg_len = skb_tail_pointer(skb) - b; - return skb->len; + if (dn_is_input_route(rt) && + nla_put_u32(skb, RTA_IIF, rt->fld.flowidn_iif) < 0) + goto errout; -nlmsg_failure: -rtattr_failure: - nlmsg_trim(skb, b); - return -1; + return nlmsg_end(skb, nlh); + +errout: + nlmsg_cancel(skb, nlh); + return -EMSGSIZE; } /* @@ -1572,7 +1620,7 @@ static int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void { struct net *net = sock_net(in_skb->sk); struct rtattr **rta = arg; - struct rtmsg *rtm = NLMSG_DATA(nlh); + struct rtmsg *rtm = nlmsg_data(nlh); struct dn_route *rt = NULL; struct dn_skb_cb *cb; int err; @@ -1585,7 +1633,7 @@ static int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void memset(&fld, 0, sizeof(fld)); fld.flowidn_proto = DNPROTO_NSP; - skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL); + skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (skb == NULL) return -ENOBUFS; skb_reset_mac_header(skb); @@ -1663,13 +1711,16 @@ int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb) struct dn_route *rt; int h, s_h; int idx, s_idx; + struct rtmsg *rtm; if (!net_eq(net, &init_net)) return 0; - if (NLMSG_PAYLOAD(cb->nlh, 0) < sizeof(struct rtmsg)) + if (nlmsg_len(cb->nlh) < sizeof(struct rtmsg)) return -EINVAL; - if (!(((struct rtmsg *)NLMSG_DATA(cb->nlh))->rtm_flags&RTM_F_CLONED)) + + rtm = nlmsg_data(cb->nlh); + if (!(rtm->rtm_flags & RTM_F_CLONED)) return 0; s_h = cb->args[0]; @@ -1769,12 +1820,11 @@ static int dn_rt_cache_seq_show(struct seq_file *seq, void *v) char buf1[DN_ASCBUF_LEN], buf2[DN_ASCBUF_LEN]; seq_printf(seq, "%-8s %-7s %-7s %04d %04d %04d\n", - rt->dst.dev ? rt->dst.dev->name : "*", - dn_addr2asc(le16_to_cpu(rt->rt_daddr), buf1), - dn_addr2asc(le16_to_cpu(rt->rt_saddr), buf2), - atomic_read(&rt->dst.__refcnt), - rt->dst.__use, - (int) dst_metric(&rt->dst, RTAX_RTT)); + rt->dst.dev ? 
rt->dst.dev->name : "*", + dn_addr2asc(le16_to_cpu(rt->rt_daddr), buf1), + dn_addr2asc(le16_to_cpu(rt->rt_saddr), buf2), + atomic_read(&rt->dst.__refcnt), + rt->dst.__use, 0); return 0; } diff --git a/net/decnet/dn_table.c b/net/decnet/dn_table.c index 650f3380c98a..16c986ab1228 100644 --- a/net/decnet/dn_table.c +++ b/net/decnet/dn_table.c @@ -297,61 +297,75 @@ static int dn_fib_dump_info(struct sk_buff *skb, u32 pid, u32 seq, int event, { struct rtmsg *rtm; struct nlmsghdr *nlh; - unsigned char *b = skb_tail_pointer(skb); - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*rtm), flags); - rtm = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*rtm), flags); + if (!nlh) + return -EMSGSIZE; + + rtm = nlmsg_data(nlh); rtm->rtm_family = AF_DECnet; rtm->rtm_dst_len = dst_len; rtm->rtm_src_len = 0; rtm->rtm_tos = 0; rtm->rtm_table = tb_id; - RTA_PUT_U32(skb, RTA_TABLE, tb_id); rtm->rtm_flags = fi->fib_flags; rtm->rtm_scope = scope; rtm->rtm_type = type; - if (rtm->rtm_dst_len) - RTA_PUT(skb, RTA_DST, 2, dst); rtm->rtm_protocol = fi->fib_protocol; - if (fi->fib_priority) - RTA_PUT(skb, RTA_PRIORITY, 4, &fi->fib_priority); + + if (nla_put_u32(skb, RTA_TABLE, tb_id) < 0) + goto errout; + + if (rtm->rtm_dst_len && + nla_put(skb, RTA_DST, 2, dst) < 0) + goto errout; + + if (fi->fib_priority && + nla_put_u32(skb, RTA_PRIORITY, fi->fib_priority) < 0) + goto errout; + if (rtnetlink_put_metrics(skb, fi->fib_metrics) < 0) - goto rtattr_failure; + goto errout; + if (fi->fib_nhs == 1) { - if (fi->fib_nh->nh_gw) - RTA_PUT(skb, RTA_GATEWAY, 2, &fi->fib_nh->nh_gw); - if (fi->fib_nh->nh_oif) - RTA_PUT(skb, RTA_OIF, sizeof(int), &fi->fib_nh->nh_oif); + if (fi->fib_nh->nh_gw && + nla_put_le16(skb, RTA_GATEWAY, fi->fib_nh->nh_gw) < 0) + goto errout; + + if (fi->fib_nh->nh_oif && + nla_put_u32(skb, RTA_OIF, fi->fib_nh->nh_oif) < 0) + goto errout; } + if (fi->fib_nhs > 1) { struct rtnexthop *nhp; - struct rtattr *mp_head; - if (skb_tailroom(skb) <= RTA_SPACE(0)) - goto rtattr_failure; - mp_head = (struct rtattr *)skb_put(skb, RTA_SPACE(0)); + struct nlattr *mp_head; + + if (!(mp_head = nla_nest_start(skb, RTA_MULTIPATH))) + goto errout; for_nexthops(fi) { - if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4)) - goto rtattr_failure; - nhp = (struct rtnexthop *)skb_put(skb, RTA_ALIGN(sizeof(*nhp))); + if (!(nhp = nla_reserve_nohdr(skb, sizeof(*nhp)))) + goto errout; + nhp->rtnh_flags = nh->nh_flags & 0xFF; nhp->rtnh_hops = nh->nh_weight - 1; nhp->rtnh_ifindex = nh->nh_oif; - if (nh->nh_gw) - RTA_PUT(skb, RTA_GATEWAY, 2, &nh->nh_gw); + + if (nh->nh_gw && + nla_put_le16(skb, RTA_GATEWAY, nh->nh_gw) < 0) + goto errout; + nhp->rtnh_len = skb_tail_pointer(skb) - (unsigned char *)nhp; } endfor_nexthops(fi); - mp_head->rta_type = RTA_MULTIPATH; - mp_head->rta_len = skb_tail_pointer(skb) - (u8 *)mp_head; - } - nlh->nlmsg_len = skb_tail_pointer(skb) - b; - return skb->len; + nla_nest_end(skb, mp_head); + } + return nlmsg_end(skb, nlh); -nlmsg_failure: -rtattr_failure: - nlmsg_trim(skb, b); +errout: + nlmsg_cancel(skb, nlh); return -EMSGSIZE; } @@ -476,7 +490,7 @@ int dn_fib_dump(struct sk_buff *skb, struct netlink_callback *cb) return 0; if (NLMSG_PAYLOAD(cb->nlh, 0) >= sizeof(struct rtmsg) && - ((struct rtmsg *)NLMSG_DATA(cb->nlh))->rtm_flags&RTM_F_CLONED) + ((struct rtmsg *)nlmsg_data(cb->nlh))->rtm_flags&RTM_F_CLONED) return dn_cache_dump(skb, cb); s_h = cb->args[0]; diff --git a/net/decnet/netfilter/dn_rtmsg.c b/net/decnet/netfilter/dn_rtmsg.c index 44b890936fc0..11db0ecf342f 100644 --- 
a/net/decnet/netfilter/dn_rtmsg.c +++ b/net/decnet/netfilter/dn_rtmsg.c @@ -42,23 +42,23 @@ static struct sk_buff *dnrmg_build_message(struct sk_buff *rt_skb, int *errp) size = NLMSG_SPACE(rt_skb->len); size += NLMSG_ALIGN(sizeof(struct nf_dn_rtmsg)); skb = alloc_skb(size, GFP_ATOMIC); - if (!skb) - goto nlmsg_failure; + if (!skb) { + *errp = -ENOMEM; + return NULL; + } old_tail = skb->tail; - nlh = NLMSG_PUT(skb, 0, 0, 0, size - sizeof(*nlh)); + nlh = nlmsg_put(skb, 0, 0, 0, size - sizeof(*nlh), 0); + if (!nlh) { + kfree_skb(skb); + *errp = -ENOMEM; + return NULL; + } rtm = (struct nf_dn_rtmsg *)NLMSG_DATA(nlh); rtm->nfdn_ifindex = rt_skb->dev->ifindex; ptr = NFDN_RTMSG(rtm); skb_copy_from_linear_data(rt_skb, ptr, rt_skb->len); nlh->nlmsg_len = skb->tail - old_tail; return skb; - -nlmsg_failure: - if (skb) - kfree_skb(skb); - *errp = -ENOMEM; - net_err_ratelimited("dn_rtmsg: error creating netlink message\n"); - return NULL; } static void dnrmg_send_peer(struct sk_buff *skb) @@ -117,7 +117,7 @@ static inline void dnrmg_receive_user_skb(struct sk_buff *skb) static struct nf_hook_ops dnrmg_ops __read_mostly = { .hook = dnrmg_hook, - .pf = PF_DECnet, + .pf = NFPROTO_DECNET, .hooknum = NF_DN_ROUTE, .priority = NF_DN_PRI_DNRTMSG, }; @@ -125,11 +125,13 @@ static struct nf_hook_ops dnrmg_ops __read_mostly = { static int __init dn_rtmsg_init(void) { int rv = 0; + struct netlink_kernel_cfg cfg = { + .groups = DNRNG_NLGRP_MAX, + .input = dnrmg_receive_user_skb, + }; dnrmg = netlink_kernel_create(&init_net, - NETLINK_DNRTMSG, DNRNG_NLGRP_MAX, - dnrmg_receive_user_skb, - NULL, THIS_MODULE); + NETLINK_DNRTMSG, THIS_MODULE, &cfg); if (dnrmg == NULL) { printk(KERN_ERR "dn_rtmsg: Cannot create netlink socket"); return -ENOMEM; diff --git a/net/ethernet/Makefile b/net/ethernet/Makefile index 7cef1d8ace27..323177505404 100644 --- a/net/ethernet/Makefile +++ b/net/ethernet/Makefile @@ -3,5 +3,3 @@ # obj-y += eth.o -obj-$(subst m,y,$(CONFIG_IPX)) += pe2.o -obj-$(subst m,y,$(CONFIG_ATALK)) += pe2.o diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c index 36e58800a9e3..4efad533e5f6 100644 --- a/net/ethernet/eth.c +++ b/net/ethernet/eth.c @@ -232,6 +232,7 @@ EXPORT_SYMBOL(eth_header_parse); * @neigh: source neighbour * @hh: destination cache entry * @type: Ethernet type field + * * Create an Ethernet header template from the neighbour. */ int eth_header_cache(const struct neighbour *neigh, struct hh_cache *hh, __be16 type) @@ -274,6 +275,7 @@ EXPORT_SYMBOL(eth_header_cache_update); * eth_mac_addr - set new Ethernet hardware address * @dev: network device * @p: socket address + * * Change hardware address of device. * * This doesn't change hardware matching, so needs to be overridden @@ -283,7 +285,7 @@ int eth_mac_addr(struct net_device *dev, void *p) { struct sockaddr *addr = p; - if (netif_running(dev)) + if (!(dev->priv_flags & IFF_LIVE_ADDR_CHANGE) && netif_running(dev)) return -EBUSY; if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL; @@ -331,6 +333,7 @@ const struct header_ops eth_header_ops ____cacheline_aligned = { /** * ether_setup - setup Ethernet network device * @dev: network device + * * Fill in the fields of the device structure with Ethernet-generic values. 
*/ void ether_setup(struct net_device *dev) diff --git a/net/ieee802154/6lowpan.c b/net/ieee802154/6lowpan.c index 32eb4179e8fa..6a095225148e 100644 --- a/net/ieee802154/6lowpan.c +++ b/net/ieee802154/6lowpan.c @@ -55,7 +55,6 @@ #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/netdevice.h> -#include <linux/etherdevice.h> #include <net/af_ieee802154.h> #include <net/ieee802154.h> #include <net/ieee802154_netdev.h> @@ -114,7 +113,6 @@ struct lowpan_dev_record { struct lowpan_fragment { struct sk_buff *skb; /* skb to be assembled */ - spinlock_t lock; /* concurency lock */ u16 length; /* length to be assemled */ u32 bytes_rcv; /* bytes received */ u16 tag; /* current fragment tag */ @@ -124,7 +122,7 @@ struct lowpan_fragment { static unsigned short fragment_tag; static LIST_HEAD(lowpan_fragments); -spinlock_t flist_lock; +static DEFINE_SPINLOCK(flist_lock); static inline struct lowpan_dev_info *lowpan_dev_info(const struct net_device *dev) @@ -240,8 +238,7 @@ lowpan_uncompress_addr(struct sk_buff *skb, struct in6_addr *ipaddr, lowpan_uip_ds6_set_addr_iid(ipaddr, lladdr); } - pr_debug("(%s): uncompressing %d + %d => ", __func__, prefcount, - postcount); + pr_debug("uncompressing %d + %d => ", prefcount, postcount); lowpan_raw_dump_inline(NULL, NULL, ipaddr->s6_addr, 16); return 0; @@ -252,13 +249,11 @@ lowpan_compress_udp_header(u8 **hc06_ptr, struct sk_buff *skb) { struct udphdr *uh = udp_hdr(skb); - pr_debug("(%s): UDP header compression\n", __func__); - if (((uh->source & LOWPAN_NHC_UDP_4BIT_MASK) == LOWPAN_NHC_UDP_4BIT_PORT) && ((uh->dest & LOWPAN_NHC_UDP_4BIT_MASK) == LOWPAN_NHC_UDP_4BIT_PORT)) { - pr_debug("(%s): both ports compression to 4 bits\n", __func__); + pr_debug("UDP header: both ports compression to 4 bits\n"); **hc06_ptr = LOWPAN_NHC_UDP_CS_P_11; **(hc06_ptr + 1) = /* subtraction is faster */ (u8)((uh->dest - LOWPAN_NHC_UDP_4BIT_PORT) + @@ -266,20 +261,20 @@ lowpan_compress_udp_header(u8 **hc06_ptr, struct sk_buff *skb) *hc06_ptr += 2; } else if ((uh->dest & LOWPAN_NHC_UDP_8BIT_MASK) == LOWPAN_NHC_UDP_8BIT_PORT) { - pr_debug("(%s): remove 8 bits of dest\n", __func__); + pr_debug("UDP header: remove 8 bits of dest\n"); **hc06_ptr = LOWPAN_NHC_UDP_CS_P_01; memcpy(*hc06_ptr + 1, &uh->source, 2); **(hc06_ptr + 3) = (u8)(uh->dest - LOWPAN_NHC_UDP_8BIT_PORT); *hc06_ptr += 4; } else if ((uh->source & LOWPAN_NHC_UDP_8BIT_MASK) == LOWPAN_NHC_UDP_8BIT_PORT) { - pr_debug("(%s): remove 8 bits of source\n", __func__); + pr_debug("UDP header: remove 8 bits of source\n"); **hc06_ptr = LOWPAN_NHC_UDP_CS_P_10; memcpy(*hc06_ptr + 1, &uh->dest, 2); **(hc06_ptr + 3) = (u8)(uh->source - LOWPAN_NHC_UDP_8BIT_PORT); *hc06_ptr += 4; } else { - pr_debug("(%s): can't compress header\n", __func__); + pr_debug("UDP header: can't compress\n"); **hc06_ptr = LOWPAN_NHC_UDP_CS_P_00; memcpy(*hc06_ptr + 1, &uh->source, 2); memcpy(*hc06_ptr + 3, &uh->dest, 2); @@ -291,25 +286,26 @@ lowpan_compress_udp_header(u8 **hc06_ptr, struct sk_buff *skb) *hc06_ptr += 2; } -static u8 lowpan_fetch_skb_u8(struct sk_buff *skb) +static inline int lowpan_fetch_skb_u8(struct sk_buff *skb, u8 *val) { - u8 ret; + if (unlikely(!pskb_may_pull(skb, 1))) + return -EINVAL; - ret = skb->data[0]; + *val = skb->data[0]; skb_pull(skb, 1); - return ret; + return 0; } -static u16 lowpan_fetch_skb_u16(struct sk_buff *skb) +static inline int lowpan_fetch_skb_u16(struct sk_buff *skb, u16 *val) { - u16 ret; - - BUG_ON(!pskb_may_pull(skb, 2)); + if (unlikely(!pskb_may_pull(skb, 2))) + return -EINVAL; - ret = skb->data[0] | 
(skb->data[1] << 8); + *val = (skb->data[0] << 8) | skb->data[1]; skb_pull(skb, 2); - return ret; + + return 0; } static int @@ -318,10 +314,14 @@ lowpan_uncompress_udp_header(struct sk_buff *skb) struct udphdr *uh = udp_hdr(skb); u8 tmp; - tmp = lowpan_fetch_skb_u8(skb); + if (!uh) + goto err; + + if (lowpan_fetch_skb_u8(skb, &tmp)) + goto err; if ((tmp & LOWPAN_NHC_UDP_MASK) == LOWPAN_NHC_UDP_ID) { - pr_debug("(%s): UDP header uncompression\n", __func__); + pr_debug("UDP header uncompression\n"); switch (tmp & LOWPAN_NHC_UDP_CS_P_11) { case LOWPAN_NHC_UDP_CS_P_00: memcpy(&uh->source, &skb->data[0], 2); @@ -347,19 +347,19 @@ lowpan_uncompress_udp_header(struct sk_buff *skb) skb_pull(skb, 1); break; default: - pr_debug("(%s) ERROR: unknown UDP format\n", __func__); + pr_debug("ERROR: unknown UDP format\n"); goto err; break; } - pr_debug("(%s): uncompressed UDP ports: src = %d, dst = %d\n", - __func__, uh->source, uh->dest); + pr_debug("uncompressed UDP ports: src = %d, dst = %d\n", + uh->source, uh->dest); /* copy checksum */ memcpy(&uh->check, &skb->data[0], 2); skb_pull(skb, 2); } else { - pr_debug("(%s): ERROR: unsupported NH format\n", __func__); + pr_debug("ERROR: unsupported NH format\n"); goto err; } @@ -392,10 +392,9 @@ static int lowpan_header_create(struct sk_buff *skb, hdr = ipv6_hdr(skb); hc06_ptr = head + 2; - pr_debug("(%s): IPv6 header dump:\n\tversion = %d\n\tlength = %d\n" - "\tnexthdr = 0x%02x\n\thop_lim = %d\n", __func__, - hdr->version, ntohs(hdr->payload_len), hdr->nexthdr, - hdr->hop_limit); + pr_debug("IPv6 header dump:\n\tversion = %d\n\tlength = %d\n" + "\tnexthdr = 0x%02x\n\thop_lim = %d\n", hdr->version, + ntohs(hdr->payload_len), hdr->nexthdr, hdr->hop_limit); lowpan_raw_dump_table(__func__, "raw skb network header dump", skb_network_header(skb), sizeof(struct ipv6hdr)); @@ -490,28 +489,28 @@ static int lowpan_header_create(struct sk_buff *skb, break; default: *hc06_ptr = hdr->hop_limit; + hc06_ptr += 1; break; } /* source address compression */ if (is_addr_unspecified(&hdr->saddr)) { - pr_debug("(%s): source address is unspecified, setting SAC\n", - __func__); + pr_debug("source address is unspecified, setting SAC\n"); iphc1 |= LOWPAN_IPHC_SAC; /* TODO: context lookup */ } else if (is_addr_link_local(&hdr->saddr)) { - pr_debug("(%s): source address is link-local\n", __func__); + pr_debug("source address is link-local\n"); iphc1 |= lowpan_compress_addr_64(&hc06_ptr, LOWPAN_IPHC_SAM_BIT, &hdr->saddr, saddr); } else { - pr_debug("(%s): send the full source address\n", __func__); + pr_debug("send the full source address\n"); memcpy(hc06_ptr, &hdr->saddr.s6_addr16[0], 16); hc06_ptr += 16; } /* destination address compression */ if (is_addr_mcast(&hdr->daddr)) { - pr_debug("(%s): destination address is multicast", __func__); + pr_debug("destination address is multicast: "); iphc1 |= LOWPAN_IPHC_M; if (lowpan_is_mcast_addr_compressable8(&hdr->daddr)) { pr_debug("compressed to 1 octet\n"); @@ -540,14 +539,13 @@ static int lowpan_header_create(struct sk_buff *skb, hc06_ptr += 16; } } else { - pr_debug("(%s): destination address is unicast: ", __func__); /* TODO: context lookup */ if (is_addr_link_local(&hdr->daddr)) { - pr_debug("destination address is link-local\n"); + pr_debug("dest address is unicast and link-local\n"); iphc1 |= lowpan_compress_addr_64(&hc06_ptr, LOWPAN_IPHC_DAM_BIT, &hdr->daddr, daddr); } else { - pr_debug("using full address\n"); + pr_debug("dest address is unicast: using full one\n"); memcpy(hc06_ptr, &hdr->daddr.s6_addr16[0], 16); hc06_ptr += 16; 
} @@ -639,19 +637,15 @@ static void lowpan_fragment_timer_expired(unsigned long entry_addr) { struct lowpan_fragment *entry = (struct lowpan_fragment *)entry_addr; - pr_debug("%s: timer expired for frame with tag %d\n", __func__, - entry->tag); + pr_debug("timer expired for frame with tag %d\n", entry->tag); - spin_lock(&flist_lock); list_del(&entry->list); - spin_unlock(&flist_lock); - dev_kfree_skb(entry->skb); kfree(entry); } static struct lowpan_fragment * -lowpan_alloc_new_frame(struct sk_buff *skb, u8 iphc0, u8 len, u8 tag) +lowpan_alloc_new_frame(struct sk_buff *skb, u8 len, u16 tag) { struct lowpan_fragment *frame; @@ -662,12 +656,12 @@ lowpan_alloc_new_frame(struct sk_buff *skb, u8 iphc0, u8 len, u8 tag) INIT_LIST_HEAD(&frame->list); - frame->length = (iphc0 & 7) | (len << 3); + frame->length = len; frame->tag = tag; /* allocate buffer for frame assembling */ - frame->skb = alloc_skb(frame->length + - sizeof(struct ipv6hdr), GFP_ATOMIC); + frame->skb = netdev_alloc_skb_ip_align(skb->dev, frame->length + + sizeof(struct ipv6hdr)); if (!frame->skb) goto skb_err; @@ -710,7 +704,9 @@ lowpan_process_data(struct sk_buff *skb) /* at least two bytes will be used for the encoding */ if (skb->len < 2) goto drop; - iphc0 = lowpan_fetch_skb_u8(skb); + + if (lowpan_fetch_skb_u8(skb, &iphc0)) + goto drop; /* fragments assembling */ switch (iphc0 & LOWPAN_DISPATCH_MASK) { @@ -718,18 +714,23 @@ lowpan_process_data(struct sk_buff *skb) case LOWPAN_DISPATCH_FRAGN: { struct lowpan_fragment *frame; - u8 len, offset; - u16 tag; + /* slen stores the rightmost 8 bits of the 11 bits length */ + u8 slen, offset; + u16 len, tag; bool found = false; - len = lowpan_fetch_skb_u8(skb); /* frame length */ - tag = lowpan_fetch_skb_u16(skb); + if (lowpan_fetch_skb_u8(skb, &slen) || /* frame length */ + lowpan_fetch_skb_u16(skb, &tag)) /* fragment tag */ + goto drop; + + /* adds the 3 MSB to the 8 LSB to retrieve the 11 bits length */ + len = ((iphc0 & 7) << 8) | slen; /* * check if frame assembling with the same tag is * already in progress */ - spin_lock(&flist_lock); + spin_lock_bh(&flist_lock); list_for_each_entry(frame, &lowpan_fragments, list) if (frame->tag == tag) { @@ -739,7 +740,7 @@ lowpan_process_data(struct sk_buff *skb) /* alloc new frame structure */ if (!found) { - frame = lowpan_alloc_new_frame(skb, iphc0, len, tag); + frame = lowpan_alloc_new_frame(skb, len, tag); if (!frame) goto unlock_and_drop; } @@ -747,7 +748,8 @@ lowpan_process_data(struct sk_buff *skb) if ((iphc0 & LOWPAN_DISPATCH_MASK) == LOWPAN_DISPATCH_FRAG1) goto unlock_and_drop; - offset = lowpan_fetch_skb_u8(skb); /* fetch offset */ + if (lowpan_fetch_skb_u8(skb, &offset)) /* fetch offset */ + goto unlock_and_drop; /* if payload fits buffer, copy it */ if (likely((offset * 8 + skb->len) <= frame->length)) @@ -762,17 +764,20 @@ lowpan_process_data(struct sk_buff *skb) if ((frame->bytes_rcv == frame->length) && frame->timer.expires > jiffies) { /* if timer haven't expired - first of all delete it */ - del_timer(&frame->timer); + del_timer_sync(&frame->timer); list_del(&frame->list); - spin_unlock(&flist_lock); + spin_unlock_bh(&flist_lock); dev_kfree_skb(skb); skb = frame->skb; kfree(frame); - iphc0 = lowpan_fetch_skb_u8(skb); + + if (lowpan_fetch_skb_u8(skb, &iphc0)) + goto drop; + break; } - spin_unlock(&flist_lock); + spin_unlock_bh(&flist_lock); return kfree_skb(skb), 0; } @@ -780,20 +785,19 @@ lowpan_process_data(struct sk_buff *skb) break; } - iphc1 = lowpan_fetch_skb_u8(skb); + if (lowpan_fetch_skb_u8(skb, &iphc1)) + goto drop; 
_saddr = mac_cb(skb)->sa.hwaddr; _daddr = mac_cb(skb)->da.hwaddr; - pr_debug("(%s): iphc0 = %02x, iphc1 = %02x\n", __func__, iphc0, iphc1); + pr_debug("iphc0 = %02x, iphc1 = %02x\n", iphc0, iphc1); /* another if the CID flag is set */ if (iphc1 & LOWPAN_IPHC_CID) { - pr_debug("(%s): CID flag is set, increase header with one\n", - __func__); - if (!skb->len) + pr_debug("CID flag is set, increase header with one\n"); + if (lowpan_fetch_skb_u8(skb, &num_context)) goto drop; - num_context = lowpan_fetch_skb_u8(skb); } hdr.version = 6; @@ -805,9 +809,9 @@ lowpan_process_data(struct sk_buff *skb) * ECN + DSCP + 4-bit Pad + Flow Label (4 bytes) */ case 0: /* 00b */ - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &tmp)) goto drop; - tmp = lowpan_fetch_skb_u8(skb); + memcpy(&hdr.flow_lbl, &skb->data[0], 3); skb_pull(skb, 3); hdr.priority = ((tmp >> 2) & 0x0f); @@ -819,9 +823,9 @@ lowpan_process_data(struct sk_buff *skb) * ECN + DSCP (1 byte), Flow Label is elided */ case 1: /* 10b */ - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &tmp)) goto drop; - tmp = lowpan_fetch_skb_u8(skb); + hdr.priority = ((tmp >> 2) & 0x0f); hdr.flow_lbl[0] = ((tmp << 6) & 0xC0) | ((tmp >> 2) & 0x30); hdr.flow_lbl[1] = 0; @@ -832,9 +836,9 @@ lowpan_process_data(struct sk_buff *skb) * ECN + 2-bit Pad + Flow Label (3 bytes), DSCP is elided */ case 2: /* 01b */ - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &tmp)) goto drop; - tmp = lowpan_fetch_skb_u8(skb); + hdr.flow_lbl[0] = (skb->data[0] & 0x0F) | ((tmp >> 2) & 0x30); memcpy(&hdr.flow_lbl[1], &skb->data[0], 2); skb_pull(skb, 2); @@ -853,27 +857,26 @@ lowpan_process_data(struct sk_buff *skb) /* Next Header */ if ((iphc0 & LOWPAN_IPHC_NH_C) == 0) { /* Next header is carried inline */ - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &(hdr.nexthdr))) goto drop; - hdr.nexthdr = lowpan_fetch_skb_u8(skb); - pr_debug("(%s): NH flag is set, next header is carried " - "inline: %02x\n", __func__, hdr.nexthdr); + + pr_debug("NH flag is set, next header carried inline: %02x\n", + hdr.nexthdr); } /* Hop Limit */ if ((iphc0 & 0x03) != LOWPAN_IPHC_TTL_I) hdr.hop_limit = lowpan_ttl_values[iphc0 & 0x03]; else { - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &(hdr.hop_limit))) goto drop; - hdr.hop_limit = lowpan_fetch_skb_u8(skb); } /* Extract SAM to the tmp variable */ tmp = ((iphc1 & LOWPAN_IPHC_SAM) >> LOWPAN_IPHC_SAM_BIT) & 0x03; /* Source address uncompression */ - pr_debug("(%s): source address stateless compression\n", __func__); + pr_debug("source address stateless compression\n"); err = lowpan_uncompress_addr(skb, &hdr.saddr, lowpan_llprefix, lowpan_unc_llconf[tmp], skb->data); if (err) @@ -885,19 +888,15 @@ lowpan_process_data(struct sk_buff *skb) /* check for Multicast Compression */ if (iphc1 & LOWPAN_IPHC_M) { if (iphc1 & LOWPAN_IPHC_DAC) { - pr_debug("(%s): destination address context-based " - "multicast compression\n", __func__); + pr_debug("dest: context-based mcast compression\n"); /* TODO: implement this */ } else { u8 prefix[] = {0xff, 0x02}; - pr_debug("(%s): destination address non-context-based" - " multicast compression\n", __func__); + pr_debug("dest: non context-based mcast compression\n"); if (0 < tmp && tmp < 3) { - if (!skb->len) + if (lowpan_fetch_skb_u8(skb, &prefix[1])) goto drop; - else - prefix[1] = lowpan_fetch_skb_u8(skb); } err = lowpan_uncompress_addr(skb, &hdr.daddr, prefix, @@ -906,8 +905,7 @@ lowpan_process_data(struct sk_buff *skb) goto drop; } } else { - pr_debug("(%s): destination address stateless compression\n", - __func__); + pr_debug("dest: 
stateless compression\n"); err = lowpan_uncompress_addr(skb, &hdr.daddr, lowpan_llprefix, lowpan_unc_llconf[tmp], skb->data); if (err) @@ -922,11 +920,11 @@ lowpan_process_data(struct sk_buff *skb) /* Not fragmented package */ hdr.payload_len = htons(skb->len); - pr_debug("(%s): skb headroom size = %d, data length = %d\n", __func__, - skb_headroom(skb), skb->len); + pr_debug("skb headroom size = %d, data length = %d\n", + skb_headroom(skb), skb->len); - pr_debug("(%s): IPv6 header dump:\n\tversion = %d\n\tlength = %d\n\t" - "nexthdr = 0x%02x\n\thop_lim = %d\n", __func__, hdr.version, + pr_debug("IPv6 header dump:\n\tversion = %d\n\tlength = %d\n\t" + "nexthdr = 0x%02x\n\thop_lim = %d\n", hdr.version, ntohs(hdr.payload_len), hdr.nexthdr, hdr.hop_limit); lowpan_raw_dump_table(__func__, "raw header dump", (u8 *)&hdr, @@ -934,12 +932,25 @@ lowpan_process_data(struct sk_buff *skb) return lowpan_skb_deliver(skb, &hdr); unlock_and_drop: - spin_unlock(&flist_lock); + spin_unlock_bh(&flist_lock); drop: kfree_skb(skb); return -EINVAL; } +static int lowpan_set_address(struct net_device *dev, void *p) +{ + struct sockaddr *sa = p; + + if (netif_running(dev)) + return -EBUSY; + + /* TODO: validate addr */ + memcpy(dev->dev_addr, sa->sa_data, dev->addr_len); + + return 0; +} + static int lowpan_get_mac_header_length(struct sk_buff *skb) { /* @@ -997,10 +1008,10 @@ lowpan_skb_fragmentation(struct sk_buff *skb) tag = fragment_tag++; /* first fragment header */ - head[0] = LOWPAN_DISPATCH_FRAG1 | (payload_length & 0x7); - head[1] = (payload_length >> 3) & 0xff; - head[2] = tag & 0xff; - head[3] = tag >> 8; + head[0] = LOWPAN_DISPATCH_FRAG1 | ((payload_length >> 8) & 0x7); + head[1] = payload_length & 0xff; + head[2] = tag >> 8; + head[3] = tag & 0xff; err = lowpan_fragment_xmit(skb, head, header_length, 0, 0); @@ -1028,11 +1039,11 @@ static netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *dev) { int err = -1; - pr_debug("(%s): package xmit\n", __func__); + pr_debug("package xmit\n"); skb->dev = lowpan_dev_info(dev)->real_dev; if (skb->dev == NULL) { - pr_debug("(%s) ERROR: no real wpan device found\n", __func__); + pr_debug("ERROR: no real wpan device found\n"); goto error; } @@ -1041,14 +1052,13 @@ static netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *dev) goto out; } - pr_debug("(%s): frame is too big, fragmentation is needed\n", - __func__); + pr_debug("frame is too big, fragmentation is needed\n"); err = lowpan_skb_fragmentation(skb); error: dev_kfree_skb(skb); out: if (err < 0) - pr_debug("(%s): ERROR: xmit failed\n", __func__); + pr_debug("ERROR: xmit failed\n"); return (err < 0 ? 
NETDEV_TX_BUSY : NETDEV_TX_OK); } @@ -1083,7 +1093,7 @@ static struct header_ops lowpan_header_ops = { static const struct net_device_ops lowpan_netdev_ops = { .ndo_start_xmit = lowpan_xmit, - .ndo_set_mac_address = eth_mac_addr, + .ndo_set_mac_address = lowpan_set_address, }; static struct ieee802154_mlme_ops lowpan_mlme = { @@ -1094,8 +1104,6 @@ static struct ieee802154_mlme_ops lowpan_mlme = { static void lowpan_setup(struct net_device *dev) { - pr_debug("(%s)\n", __func__); - dev->addr_len = IEEE802154_ADDR_LEN; memset(dev->broadcast, 0xff, IEEE802154_ADDR_LEN); dev->type = ARPHRD_IEEE802154; @@ -1115,8 +1123,6 @@ static void lowpan_setup(struct net_device *dev) static int lowpan_validate(struct nlattr *tb[], struct nlattr *data[]) { - pr_debug("(%s)\n", __func__); - if (tb[IFLA_ADDRESS]) { if (nla_len(tb[IFLA_ADDRESS]) != IEEE802154_ADDR_LEN) return -EINVAL; @@ -1157,7 +1163,7 @@ static int lowpan_newlink(struct net *src_net, struct net_device *dev, struct net_device *real_dev; struct lowpan_dev_record *entry; - pr_debug("(%s)\n", __func__); + pr_debug("adding new link\n"); if (!tb[IFLA_LINK]) return -EINVAL; @@ -1183,8 +1189,6 @@ static int lowpan_newlink(struct net *src_net, struct net_device *dev, list_add_tail(&entry->list, &lowpan_devices); mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx); - spin_lock_init(&flist_lock); - register_netdevice(dev); return 0; @@ -1195,19 +1199,9 @@ static void lowpan_dellink(struct net_device *dev, struct list_head *head) struct lowpan_dev_info *lowpan_dev = lowpan_dev_info(dev); struct net_device *real_dev = lowpan_dev->real_dev; struct lowpan_dev_record *entry, *tmp; - struct lowpan_fragment *frame, *tframe; ASSERT_RTNL(); - spin_lock(&flist_lock); - list_for_each_entry_safe(frame, tframe, &lowpan_fragments, list) { - del_timer(&frame->timer); - list_del(&frame->list); - dev_kfree_skb(frame->skb); - kfree(frame); - } - spin_unlock(&flist_lock); - mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx); list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) { if (entry->ldev == dev) { @@ -1252,8 +1246,6 @@ static int __init lowpan_init_module(void) { int err = 0; - pr_debug("(%s)\n", __func__); - err = lowpan_netlink_init(); if (err < 0) goto out; @@ -1265,11 +1257,24 @@ out: static void __exit lowpan_cleanup_module(void) { - pr_debug("(%s)\n", __func__); + struct lowpan_fragment *frame, *tframe; lowpan_netlink_fini(); dev_remove_pack(&lowpan_packet_type); + + /* Now 6lowpan packet_type is removed, so no new fragments are + * expected on RX, therefore that's the time to clean incomplete + * fragments. 
+ */ + spin_lock_bh(&flist_lock); + list_for_each_entry_safe(frame, tframe, &lowpan_fragments, list) { + del_timer_sync(&frame->timer); + list_del(&frame->list); + dev_kfree_skb(frame->skb); + kfree(frame); + } + spin_unlock_bh(&flist_lock); } module_init(lowpan_init_module); diff --git a/net/ieee802154/netlink.c b/net/ieee802154/netlink.c index c8097ae2482f..97351e1d07a4 100644 --- a/net/ieee802154/netlink.c +++ b/net/ieee802154/netlink.c @@ -44,7 +44,7 @@ struct genl_family nl802154_family = { struct sk_buff *ieee802154_nl_create(int flags, u8 req) { void *hdr; - struct sk_buff *msg = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + struct sk_buff *msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); unsigned long f; if (!msg) @@ -80,7 +80,7 @@ struct sk_buff *ieee802154_nl_new_reply(struct genl_info *info, int flags, u8 req) { void *hdr; - struct sk_buff *msg = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + struct sk_buff *msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); if (!msg) return NULL; diff --git a/net/ieee802154/nl-mac.c b/net/ieee802154/nl-mac.c index ca92587720f4..1e9917124e75 100644 --- a/net/ieee802154/nl-mac.c +++ b/net/ieee802154/nl-mac.c @@ -530,7 +530,7 @@ static int ieee802154_list_iface(struct sk_buff *skb, if (!dev) return -ENODEV; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) goto out_dev; diff --git a/net/ieee802154/nl-phy.c b/net/ieee802154/nl-phy.c index eed291626da6..d54be34cca94 100644 --- a/net/ieee802154/nl-phy.c +++ b/net/ieee802154/nl-phy.c @@ -101,7 +101,7 @@ static int ieee802154_list_phy(struct sk_buff *skb, if (!phy) return -ENODEV; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) goto out_dev; diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig index 20f1cb5c8aba..5a19aeb86094 100644 --- a/net/ipv4/Kconfig +++ b/net/ipv4/Kconfig @@ -310,6 +310,17 @@ config SYN_COOKIES If unsure, say N. +config NET_IPVTI + tristate "Virtual (secure) IP: tunneling" + select INET_TUNNEL + depends on INET_XFRM_MODE_TUNNEL + ---help--- + Tunneling means encapsulating data of one protocol type within + another protocol and sending it over a channel that understands the + encapsulating protocol. This can be used with xfrm mode tunnel to give + the notion of a secure tunnel for IPSEC and then use routing protocol + on top. 
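The 6LoWPAN header-decompression hunks above drop the old two-step pattern of testing skb->len and then reading a byte, in favour of a single checked call such as lowpan_fetch_skb_u8(skb, &tmp) that aborts the whole decompression on a short packet. The helper's definition is not part of these hunks; the sketch below is only an illustration of the calling convention the call sites rely on (non-zero return on failure, the fetched byte written through the pointer) and assumes pskb_may_pull() as the length check, so it should not be read as the patch's actual implementation.

	#include <linux/skbuff.h>
	#include <linux/errno.h>

	/* Sketch: fetch one byte from the skb, or fail without touching *val. */
	static inline int lowpan_fetch_skb_u8(struct sk_buff *skb, u8 *val)
	{
		/* Refuse to read past the end of the linear buffer. */
		if (unlikely(!pskb_may_pull(skb, 1)))
			return -EINVAL;

		*val = skb->data[0];
		skb_pull(skb, 1);
		return 0;
	}

With a helper of this shape, every "if (lowpan_fetch_skb_u8(skb, &x)) goto drop;" in the hunks above both validates the remaining length and advances skb->data by one byte, which is why the separate "if (!skb->len)" checks could be removed.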
+ config INET_AH tristate "IP: AH transformation" select XFRM_ALGO diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile index ff75d3bbcd6a..ae2ccf2890e4 100644 --- a/net/ipv4/Makefile +++ b/net/ipv4/Makefile @@ -7,7 +7,7 @@ obj-y := route.o inetpeer.o protocol.o \ ip_output.o ip_sockglue.o inet_hashtables.o \ inet_timewait_sock.o inet_connection_sock.o \ tcp.o tcp_input.o tcp_output.o tcp_timer.o tcp_ipv4.o \ - tcp_minisocks.o tcp_cong.o \ + tcp_minisocks.o tcp_cong.o tcp_metrics.o tcp_fastopen.o \ datagram.o raw.o udp.o udplite.o \ arp.o icmp.o devinet.o af_inet.o igmp.o \ fib_frontend.o fib_semantics.o fib_trie.o \ @@ -20,6 +20,7 @@ obj-$(CONFIG_IP_MROUTE) += ipmr.o obj-$(CONFIG_NET_IPIP) += ipip.o obj-$(CONFIG_NET_IPGRE_DEMUX) += gre.o obj-$(CONFIG_NET_IPGRE) += ip_gre.o +obj-$(CONFIG_NET_IPVTI) += ip_vti.o obj-$(CONFIG_SYN_COOKIES) += syncookies.o obj-$(CONFIG_INET_AH) += ah4.o obj-$(CONFIG_INET_ESP) += esp4.o diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index c8f7aee587d1..fe4582ca969a 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -157,6 +157,7 @@ void inet_sock_destruct(struct sock *sk) kfree(rcu_dereference_protected(inet->inet_opt, 1)); dst_release(rcu_dereference_check(sk->sk_dst_cache, 1)); + dst_release(sk->sk_rx_dst); sk_refcnt_debug_dec(sk); } EXPORT_SYMBOL(inet_sock_destruct); @@ -242,20 +243,18 @@ void build_ehash_secret(void) } EXPORT_SYMBOL(build_ehash_secret); -static inline int inet_netns_ok(struct net *net, int protocol) +static inline int inet_netns_ok(struct net *net, __u8 protocol) { - int hash; const struct net_protocol *ipprot; if (net_eq(net, &init_net)) return 1; - hash = protocol & (MAX_INET_PROTOS - 1); - ipprot = rcu_dereference(inet_protos[hash]); - - if (ipprot == NULL) + ipprot = rcu_dereference(inet_protos[protocol]); + if (ipprot == NULL) { /* raw IP is OK */ return 1; + } return ipprot->netns_ok; } @@ -553,15 +552,16 @@ int inet_dgram_connect(struct socket *sock, struct sockaddr *uaddr, if (!inet_sk(sk)->inet_num && inet_autobind(sk)) return -EAGAIN; - return sk->sk_prot->connect(sk, (struct sockaddr *)uaddr, addr_len); + return sk->sk_prot->connect(sk, uaddr, addr_len); } EXPORT_SYMBOL(inet_dgram_connect); -static long inet_wait_for_connect(struct sock *sk, long timeo) +static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias) { DEFINE_WAIT(wait); prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); + sk->sk_write_pending += writebias; /* Basic assumption: if someone sets sk->sk_err, he _must_ * change state of the socket from TCP_SYN_*. @@ -577,6 +577,7 @@ static long inet_wait_for_connect(struct sock *sk, long timeo) prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); } finish_wait(sk_sleep(sk), &wait); + sk->sk_write_pending -= writebias; return timeo; } @@ -584,8 +585,8 @@ static long inet_wait_for_connect(struct sock *sk, long timeo) * Connect to a remote host. There is regrettably still a little * TCP 'magic' in here. */ -int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, - int addr_len, int flags) +int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags) { struct sock *sk = sock->sk; int err; @@ -594,8 +595,6 @@ int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, if (addr_len < sizeof(uaddr->sa_family)) return -EINVAL; - lock_sock(sk); - if (uaddr->sa_family == AF_UNSPEC) { err = sk->sk_prot->disconnect(sk, flags); sock->state = err ? 
SS_DISCONNECTING : SS_UNCONNECTED; @@ -635,8 +634,12 @@ int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, timeo = sock_sndtimeo(sk, flags & O_NONBLOCK); if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { + int writebias = (sk->sk_protocol == IPPROTO_TCP) && + tcp_sk(sk)->fastopen_req && + tcp_sk(sk)->fastopen_req->data ? 1 : 0; + /* Error code is set above */ - if (!timeo || !inet_wait_for_connect(sk, timeo)) + if (!timeo || !inet_wait_for_connect(sk, timeo, writebias)) goto out; err = sock_intr_errno(timeo); @@ -658,7 +661,6 @@ int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, sock->state = SS_CONNECTED; err = 0; out: - release_sock(sk); return err; sock_error: @@ -668,6 +670,18 @@ sock_error: sock->state = SS_DISCONNECTING; goto out; } +EXPORT_SYMBOL(__inet_stream_connect); + +int inet_stream_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags) +{ + int err; + + lock_sock(sock->sk); + err = __inet_stream_connect(sock, uaddr, addr_len, flags); + release_sock(sock->sk); + return err; +} EXPORT_SYMBOL(inet_stream_connect); /* @@ -1216,8 +1230,8 @@ EXPORT_SYMBOL(inet_sk_rebuild_header); static int inet_gso_send_check(struct sk_buff *skb) { - const struct iphdr *iph; const struct net_protocol *ops; + const struct iphdr *iph; int proto; int ihl; int err = -EINVAL; @@ -1236,7 +1250,7 @@ static int inet_gso_send_check(struct sk_buff *skb) __skb_pull(skb, ihl); skb_reset_transport_header(skb); iph = ip_hdr(skb); - proto = iph->protocol & (MAX_INET_PROTOS - 1); + proto = iph->protocol; err = -EPROTONOSUPPORT; rcu_read_lock(); @@ -1253,8 +1267,8 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, netdev_features_t features) { struct sk_buff *segs = ERR_PTR(-EINVAL); - struct iphdr *iph; const struct net_protocol *ops; + struct iphdr *iph; int proto; int ihl; int id; @@ -1286,7 +1300,7 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, skb_reset_transport_header(skb); iph = ip_hdr(skb); id = ntohs(iph->id); - proto = iph->protocol & (MAX_INET_PROTOS - 1); + proto = iph->protocol; segs = ERR_PTR(-EPROTONOSUPPORT); rcu_read_lock(); @@ -1340,7 +1354,7 @@ static struct sk_buff **inet_gro_receive(struct sk_buff **head, goto out; } - proto = iph->protocol & (MAX_INET_PROTOS - 1); + proto = iph->protocol; rcu_read_lock(); ops = rcu_dereference(inet_protos[proto]); @@ -1398,11 +1412,11 @@ out: static int inet_gro_complete(struct sk_buff *skb) { - const struct net_protocol *ops; + __be16 newlen = htons(skb->len - skb_network_offset(skb)); struct iphdr *iph = ip_hdr(skb); - int proto = iph->protocol & (MAX_INET_PROTOS - 1); + const struct net_protocol *ops; + int proto = iph->protocol; int err = -ENOSYS; - __be16 newlen = htons(skb->len - skb_network_offset(skb)); csum_replace2(&iph->check, iph->tot_len, newlen); iph->tot_len = newlen; @@ -1520,14 +1534,15 @@ static const struct net_protocol igmp_protocol = { #endif static const struct net_protocol tcp_protocol = { - .handler = tcp_v4_rcv, - .err_handler = tcp_v4_err, - .gso_send_check = tcp_v4_gso_send_check, - .gso_segment = tcp_tso_segment, - .gro_receive = tcp4_gro_receive, - .gro_complete = tcp4_gro_complete, - .no_policy = 1, - .netns_ok = 1, + .early_demux = tcp_v4_early_demux, + .handler = tcp_v4_rcv, + .err_handler = tcp_v4_err, + .gso_send_check = tcp_v4_gso_send_check, + .gso_segment = tcp_tso_segment, + .gro_receive = tcp4_gro_receive, + .gro_complete = tcp4_gro_complete, + .no_policy = 1, + .netns_ok = 1, }; static const struct net_protocol 
udp_protocol = { diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c index e8f2617ecd47..a0d8392491c3 100644 --- a/net/ipv4/ah4.c +++ b/net/ipv4/ah4.c @@ -398,16 +398,25 @@ static void ah4_err(struct sk_buff *skb, u32 info) struct ip_auth_hdr *ah = (struct ip_auth_hdr *)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || - icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + switch (icmp_hdr(skb)->type) { + case ICMP_DEST_UNREACH: + if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + return; + case ICMP_REDIRECT: + break; + default: return; + } x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr, ah->spi, IPPROTO_AH, AF_INET); if (!x) return; - pr_debug("pmtu discovery on SA AH/%08x/%08x\n", - ntohl(ah->spi), ntohl(iph->daddr)); + + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) + ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_AH, 0); + else + ipv4_redirect(skb, net, 0, 0, IPPROTO_AH, 0); xfrm_state_put(x); } diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c index cda37be02f8d..a0124eb7dbea 100644 --- a/net/ipv4/arp.c +++ b/net/ipv4/arp.c @@ -475,8 +475,7 @@ int arp_find(unsigned char *haddr, struct sk_buff *skb) return 1; } - paddr = skb_rtable(skb)->rt_gateway; - + paddr = rt_nexthop(skb_rtable(skb), ip_hdr(skb)->daddr); if (arp_set_predefined(inet_addr_type(dev_net(dev), paddr), haddr, paddr, dev)) return 0; @@ -790,7 +789,8 @@ static int arp_process(struct sk_buff *skb) * Check for bad requests for 127.x.x.x and requests for multicast * addresses. If this is one such, delete it. */ - if (ipv4_is_loopback(tip) || ipv4_is_multicast(tip)) + if (ipv4_is_multicast(tip) || + (!IN_DEV_ROUTE_LOCALNET(in_dev) && ipv4_is_loopback(tip))) goto out; /* @@ -827,7 +827,7 @@ static int arp_process(struct sk_buff *skb) } if (arp->ar_op == htons(ARPOP_REQUEST) && - ip_route_input_noref(skb, tip, sip, 0, dev) == 0) { + ip_route_input(skb, tip, sip, 0, dev) == 0) { rt = skb_rtable(skb); addr_type = rt->rt_type; diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index 10e15a144e95..44bf82e3aef7 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -1500,7 +1500,8 @@ static int devinet_conf_proc(ctl_table *ctl, int write, if (cnf == net->ipv4.devconf_dflt) devinet_copy_dflt_conf(net, i); - if (i == IPV4_DEVCONF_ACCEPT_LOCAL - 1) + if (i == IPV4_DEVCONF_ACCEPT_LOCAL - 1 || + i == IPV4_DEVCONF_ROUTE_LOCALNET - 1) if ((new_value == 0) && (old_value != 0)) rt_cache_flush(net, 0); } @@ -1617,6 +1618,8 @@ static struct devinet_sysctl_table { "force_igmp_version"), DEVINET_SYSCTL_FLUSHING_ENTRY(PROMOTE_SECONDARIES, "promote_secondaries"), + DEVINET_SYSCTL_FLUSHING_ENTRY(ROUTE_LOCALNET, + "route_localnet"), }, }; diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index cb982a61536f..b61e9deb7c7e 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -484,16 +484,25 @@ static void esp4_err(struct sk_buff *skb, u32 info) struct ip_esp_hdr *esph = (struct ip_esp_hdr *)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || - icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + switch (icmp_hdr(skb)->type) { + case ICMP_DEST_UNREACH: + if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + return; + case ICMP_REDIRECT: + break; + default: return; + } x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr, esph->spi, IPPROTO_ESP, AF_INET); if (!x) return; - NETDEBUG(KERN_DEBUG "pmtu discovery on SA ESP/%08x/%08x\n", - ntohl(esph->spi), ntohl(iph->daddr)); + + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) + ipv4_update_pmtu(skb, net, 
info, 0, 0, IPPROTO_ESP, 0); + else + ipv4_redirect(skb, net, 0, 0, IPPROTO_ESP, 0); xfrm_state_put(x); } diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c index 3854411fa37c..8732cc7920ed 100644 --- a/net/ipv4/fib_frontend.c +++ b/net/ipv4/fib_frontend.c @@ -31,6 +31,7 @@ #include <linux/if_addr.h> #include <linux/if_arp.h> #include <linux/skbuff.h> +#include <linux/cache.h> #include <linux/init.h> #include <linux/list.h> #include <linux/slab.h> @@ -85,6 +86,24 @@ struct fib_table *fib_new_table(struct net *net, u32 id) tb = fib_trie_table(id); if (!tb) return NULL; + + switch (id) { + case RT_TABLE_LOCAL: + net->ipv4.fib_local = tb; + break; + + case RT_TABLE_MAIN: + net->ipv4.fib_main = tb; + break; + + case RT_TABLE_DEFAULT: + net->ipv4.fib_default = tb; + break; + + default: + break; + } + h = id & (FIB_TABLE_HASHSZ - 1); hlist_add_head_rcu(&tb->tb_hlist, &net->ipv4.fib_table_hash[h]); return tb; @@ -150,10 +169,6 @@ static inline unsigned int __inet_dev_addr_type(struct net *net, if (ipv4_is_multicast(addr)) return RTN_MULTICAST; -#ifdef CONFIG_IP_MULTIPLE_TABLES - res.r = NULL; -#endif - local_table = fib_get_table(net, RT_TABLE_LOCAL); if (local_table) { ret = RTN_UNICAST; @@ -180,6 +195,44 @@ unsigned int inet_dev_addr_type(struct net *net, const struct net_device *dev, } EXPORT_SYMBOL(inet_dev_addr_type); +__be32 fib_compute_spec_dst(struct sk_buff *skb) +{ + struct net_device *dev = skb->dev; + struct in_device *in_dev; + struct fib_result res; + struct rtable *rt; + struct flowi4 fl4; + struct net *net; + int scope; + + rt = skb_rtable(skb); + if ((rt->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST | RTCF_LOCAL)) == + RTCF_LOCAL) + return ip_hdr(skb)->daddr; + + in_dev = __in_dev_get_rcu(dev); + BUG_ON(!in_dev); + + net = dev_net(dev); + + scope = RT_SCOPE_UNIVERSE; + if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) { + fl4.flowi4_oif = 0; + fl4.flowi4_iif = net->loopback_dev->ifindex; + fl4.daddr = ip_hdr(skb)->saddr; + fl4.saddr = 0; + fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos); + fl4.flowi4_scope = scope; + fl4.flowi4_mark = IN_DEV_SRC_VMARK(in_dev) ? skb->mark : 0; + if (!fib_lookup(net, &fl4, &res)) + return FIB_RES_PREFSRC(net, res); + } else { + scope = RT_SCOPE_LINK; + } + + return inet_select_addr(dev, ip_hdr(skb)->saddr, scope); +} + /* Given (packet source, input interface) and optional (dst, oif, tos): * - (main) check, that source is valid i.e. not broadcast or our local * address. @@ -188,17 +241,15 @@ EXPORT_SYMBOL(inet_dev_addr_type); * - check, that packet arrived from expected physical interface. * called with rcu_read_lock() */ -int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, u8 tos, - int oif, struct net_device *dev, __be32 *spec_dst, - u32 *itag) +static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, + u8 tos, int oif, struct net_device *dev, + int rpf, struct in_device *idev, u32 *itag) { - struct in_device *in_dev; - struct flowi4 fl4; + int ret, no_addr, accept_local; struct fib_result res; - int no_addr, rpf, accept_local; - bool dev_match; - int ret; + struct flowi4 fl4; struct net *net; + bool dev_match; fl4.flowi4_oif = 0; fl4.flowi4_iif = oif; @@ -207,20 +258,10 @@ int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, u8 tos, fl4.flowi4_tos = tos; fl4.flowi4_scope = RT_SCOPE_UNIVERSE; - no_addr = rpf = accept_local = 0; - in_dev = __in_dev_get_rcu(dev); - if (in_dev) { - no_addr = in_dev->ifa_list == NULL; - - /* Ignore rp_filter for packets protected by IPsec. 
*/ - rpf = secpath_exists(skb) ? 0 : IN_DEV_RPFILTER(in_dev); - - accept_local = IN_DEV_ACCEPT_LOCAL(in_dev); - fl4.flowi4_mark = IN_DEV_SRC_VMARK(in_dev) ? skb->mark : 0; - } + no_addr = idev->ifa_list == NULL; - if (in_dev == NULL) - goto e_inval; + accept_local = IN_DEV_ACCEPT_LOCAL(idev); + fl4.flowi4_mark = IN_DEV_SRC_VMARK(idev) ? skb->mark : 0; net = dev_net(dev); if (fib_lookup(net, &fl4, &res)) @@ -229,7 +270,6 @@ int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, u8 tos, if (res.type != RTN_LOCAL || !accept_local) goto e_inval; } - *spec_dst = FIB_RES_PREFSRC(net, res); fib_combine_itag(itag, &res); dev_match = false; @@ -258,17 +298,14 @@ int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, u8 tos, ret = 0; if (fib_lookup(net, &fl4, &res) == 0) { - if (res.type == RTN_UNICAST) { - *spec_dst = FIB_RES_PREFSRC(net, res); + if (res.type == RTN_UNICAST) ret = FIB_RES_NH(res).nh_scope >= RT_SCOPE_HOST; - } } return ret; last_resort: if (rpf) goto e_rpf; - *spec_dst = inet_select_addr(dev, 0, RT_SCOPE_UNIVERSE); *itag = 0; return 0; @@ -278,6 +315,20 @@ e_rpf: return -EXDEV; } +/* Ignore rp_filter for packets protected by IPsec. */ +int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst, + u8 tos, int oif, struct net_device *dev, + struct in_device *idev, u32 *itag) +{ + int r = secpath_exists(skb) ? 0 : IN_DEV_RPFILTER(idev); + + if (!r && !fib_num_tclassid_users(dev_net(dev))) { + *itag = 0; + return 0; + } + return __fib_validate_source(skb, src, dst, tos, oif, dev, r, idev, itag); +} + static inline __be32 sk_extract_addr(struct sockaddr *addr) { return ((struct sockaddr_in *) addr)->sin_addr.s_addr; @@ -879,10 +930,6 @@ static void nl_fib_lookup(struct fib_result_nl *frn, struct fib_table *tb) .flowi4_scope = frn->fl_scope, }; -#ifdef CONFIG_IP_MULTIPLE_TABLES - res.r = NULL; -#endif - frn->err = -ENOENT; if (tb) { local_bh_disable(); @@ -935,8 +982,11 @@ static void nl_fib_input(struct sk_buff *skb) static int __net_init nl_fib_lookup_init(struct net *net) { struct sock *sk; - sk = netlink_kernel_create(net, NETLINK_FIB_LOOKUP, 0, - nl_fib_input, NULL, THIS_MODULE); + struct netlink_kernel_cfg cfg = { + .input = nl_fib_input, + }; + + sk = netlink_kernel_create(net, NETLINK_FIB_LOOKUP, THIS_MODULE, &cfg); if (sk == NULL) return -EAFNOSUPPORT; net->ipv4.fibnl = sk; @@ -1021,11 +1071,6 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo rt_cache_flush(dev_net(dev), 0); break; case NETDEV_UNREGISTER_BATCH: - /* The batch unregister is only called on the first - * device in the list of devices being unregistered. - * Therefore we should not pass dev_net(dev) in here. - */ - rt_cache_flush_batch(NULL); break; } return NOTIFY_DONE; @@ -1090,6 +1135,9 @@ static int __net_init fib_net_init(struct net *net) { int error; +#ifdef CONFIG_IP_ROUTE_CLASSID + net->ipv4.fib_num_tclassid_users = 0; +#endif error = ip_fib_net_init(net); if (error < 0) goto out; diff --git a/net/ipv4/fib_rules.c b/net/ipv4/fib_rules.c index 2d043f71ef70..a83d74e498d2 100644 --- a/net/ipv4/fib_rules.c +++ b/net/ipv4/fib_rules.c @@ -47,14 +47,7 @@ struct fib4_rule { #endif }; -#ifdef CONFIG_IP_ROUTE_CLASSID -u32 fib_rules_tclass(const struct fib_result *res) -{ - return res->r ? 
((struct fib4_rule *) res->r)->tclassid : 0; -} -#endif - -int fib_lookup(struct net *net, struct flowi4 *flp, struct fib_result *res) +int __fib_lookup(struct net *net, struct flowi4 *flp, struct fib_result *res) { struct fib_lookup_arg arg = { .result = res, @@ -63,11 +56,15 @@ int fib_lookup(struct net *net, struct flowi4 *flp, struct fib_result *res) int err; err = fib_rules_lookup(net->ipv4.rules_ops, flowi4_to_flowi(flp), 0, &arg); - res->r = arg.rule; - +#ifdef CONFIG_IP_ROUTE_CLASSID + if (arg.rule) + res->tclassid = ((struct fib4_rule *)arg.rule)->tclassid; + else + res->tclassid = 0; +#endif return err; } -EXPORT_SYMBOL_GPL(fib_lookup); +EXPORT_SYMBOL_GPL(__fib_lookup); static int fib4_rule_action(struct fib_rule *rule, struct flowi *flp, int flags, struct fib_lookup_arg *arg) @@ -169,8 +166,11 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb, rule4->dst = nla_get_be32(tb[FRA_DST]); #ifdef CONFIG_IP_ROUTE_CLASSID - if (tb[FRA_FLOW]) + if (tb[FRA_FLOW]) { rule4->tclassid = nla_get_u32(tb[FRA_FLOW]); + if (rule4->tclassid) + net->ipv4.fib_num_tclassid_users++; + } #endif rule4->src_len = frh->src_len; @@ -179,11 +179,24 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb, rule4->dstmask = inet_make_mask(rule4->dst_len); rule4->tos = frh->tos; + net->ipv4.fib_has_custom_rules = true; err = 0; errout: return err; } +static void fib4_rule_delete(struct fib_rule *rule) +{ + struct net *net = rule->fr_net; +#ifdef CONFIG_IP_ROUTE_CLASSID + struct fib4_rule *rule4 = (struct fib4_rule *) rule; + + if (rule4->tclassid) + net->ipv4.fib_num_tclassid_users--; +#endif + net->ipv4.fib_has_custom_rules = true; +} + static int fib4_rule_compare(struct fib_rule *rule, struct fib_rule_hdr *frh, struct nlattr **tb) { @@ -256,6 +269,7 @@ static const struct fib_rules_ops __net_initdata fib4_rules_ops_template = { .action = fib4_rule_action, .match = fib4_rule_match, .configure = fib4_rule_configure, + .delete = fib4_rule_delete, .compare = fib4_rule_compare, .fill = fib4_rule_fill, .default_pref = fib_default_rule_pref, @@ -295,6 +309,7 @@ int __net_init fib4_rules_init(struct net *net) if (err < 0) goto fail; net->ipv4.rules_ops = ops; + net->ipv4.fib_has_custom_rules = false; return 0; fail: diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index e5b7182fa099..e55171f184f9 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -140,6 +140,27 @@ const struct fib_prop fib_props[RTN_MAX + 1] = { }, }; +static void free_nh_exceptions(struct fib_nh *nh) +{ + struct fnhe_hash_bucket *hash = nh->nh_exceptions; + int i; + + for (i = 0; i < FNHE_HASH_SIZE; i++) { + struct fib_nh_exception *fnhe; + + fnhe = rcu_dereference_protected(hash[i].chain, 1); + while (fnhe) { + struct fib_nh_exception *next; + + next = rcu_dereference_protected(fnhe->fnhe_next, 1); + kfree(fnhe); + + fnhe = next; + } + } + kfree(hash); +} + /* Release a nexthop info record */ static void free_fib_info_rcu(struct rcu_head *head) { @@ -148,6 +169,12 @@ static void free_fib_info_rcu(struct rcu_head *head) change_nexthops(fi) { if (nexthop_nh->nh_dev) dev_put(nexthop_nh->nh_dev); + if (nexthop_nh->nh_exceptions) + free_nh_exceptions(nexthop_nh); + if (nexthop_nh->nh_rth_output) + dst_release(&nexthop_nh->nh_rth_output->dst); + if (nexthop_nh->nh_rth_input) + dst_release(&nexthop_nh->nh_rth_input->dst); } endfor_nexthops(fi); release_net(fi->fib_net); @@ -163,6 +190,12 @@ void free_fib_info(struct fib_info *fi) return; } fib_info_cnt--; +#ifdef 
CONFIG_IP_ROUTE_CLASSID + change_nexthops(fi) { + if (nexthop_nh->nh_tclassid) + fi->fib_net->ipv4.fib_num_tclassid_users--; + } endfor_nexthops(fi); +#endif call_rcu(&fi->rcu, free_fib_info_rcu); } @@ -421,6 +454,8 @@ static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh, #ifdef CONFIG_IP_ROUTE_CLASSID nla = nla_find(attrs, attrlen, RTA_FLOW); nexthop_nh->nh_tclassid = nla ? nla_get_u32(nla) : 0; + if (nexthop_nh->nh_tclassid) + fi->fib_net->ipv4.fib_num_tclassid_users++; #endif } @@ -779,9 +814,16 @@ struct fib_info *fib_create_info(struct fib_config *cfg) int type = nla_type(nla); if (type) { + u32 val; + if (type > RTAX_MAX) goto err_inval; - fi->fib_metrics[type - 1] = nla_get_u32(nla); + val = nla_get_u32(nla); + if (type == RTAX_ADVMSS && val > 65535 - 40) + val = 65535 - 40; + if (type == RTAX_MTU && val > 65535 - 15) + val = 65535 - 15; + fi->fib_metrics[type - 1] = val; } } } @@ -810,6 +852,8 @@ struct fib_info *fib_create_info(struct fib_config *cfg) nh->nh_flags = cfg->fc_flags; #ifdef CONFIG_IP_ROUTE_CLASSID nh->nh_tclassid = cfg->fc_flow; + if (nh->nh_tclassid) + fi->fib_net->ipv4.fib_num_tclassid_users++; #endif #ifdef CONFIG_IP_ROUTE_MULTIPATH nh->nh_weight = 1; diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 30b88d7b4bd6..18cbc15b20d5 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -1007,9 +1007,9 @@ static void trie_rebalance(struct trie *t, struct tnode *tn) while (tn != NULL && (tp = node_parent((struct rt_trie_node *)tn)) != NULL) { cindex = tkey_extract_bits(key, tp->pos, tp->bits); wasfull = tnode_full(tp, tnode_get_child(tp, cindex)); - tn = (struct tnode *) resize(t, (struct tnode *)tn); + tn = (struct tnode *)resize(t, tn); - tnode_put_child_reorg((struct tnode *)tp, cindex, + tnode_put_child_reorg(tp, cindex, (struct rt_trie_node *)tn, wasfull); tp = node_parent((struct rt_trie_node *) tn); @@ -1024,7 +1024,7 @@ static void trie_rebalance(struct trie *t, struct tnode *tn) /* Handle last (top) tnode */ if (IS_TNODE(tn)) - tn = (struct tnode *)resize(t, (struct tnode *)tn); + tn = (struct tnode *)resize(t, tn); rcu_assign_pointer(t->trie, (struct rt_trie_node *)tn); tnode_free_flush(); @@ -1125,7 +1125,7 @@ static struct list_head *fib_insert_node(struct trie *t, u32 key, int plen) node_set_parent((struct rt_trie_node *)l, tp); cindex = tkey_extract_bits(key, tp->pos, tp->bits); - put_child(t, (struct tnode *)tp, cindex, (struct rt_trie_node *)l); + put_child(t, tp, cindex, (struct rt_trie_node *)l); } else { /* Case 3: n is a LEAF or a TNODE and the key doesn't match. 
*/ /* @@ -1160,8 +1160,7 @@ static struct list_head *fib_insert_node(struct trie *t, u32 key, int plen) if (tp) { cindex = tkey_extract_bits(key, tp->pos, tp->bits); - put_child(t, (struct tnode *)tp, cindex, - (struct rt_trie_node *)tn); + put_child(t, tp, cindex, (struct rt_trie_node *)tn); } else { rcu_assign_pointer(t->trie, (struct rt_trie_node *)tn); tp = tn; @@ -1620,7 +1619,7 @@ static void trie_leaf_remove(struct trie *t, struct leaf *l) if (tp) { t_key cindex = tkey_extract_bits(l->key, tp->pos, tp->bits); - put_child(t, (struct tnode *)tp, cindex, NULL); + put_child(t, tp, cindex, NULL); trie_rebalance(t, tp); } else RCU_INIT_POINTER(t->trie, NULL); diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index c75efbdc71cb..f2eccd531746 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -95,6 +95,7 @@ #include <net/checksum.h> #include <net/xfrm.h> #include <net/inet_common.h> +#include <net/ip_fib.h> /* * Build xmit assembly blocks @@ -253,10 +254,10 @@ static inline bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt, /* Limit if icmp type is enabled in ratemask. */ if ((1 << type) & net->ipv4.sysctl_icmp_ratemask) { - if (!rt->peer) - rt_bind_peer(rt, fl4->daddr, 1); - rc = inet_peer_xrlim_allow(rt->peer, + struct inet_peer *peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, 1); + rc = inet_peer_xrlim_allow(peer, net->ipv4.sysctl_icmp_ratelimit); + inet_putpeer(peer); } out: return rc; @@ -334,7 +335,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb) struct flowi4 fl4; struct sock *sk; struct inet_sock *inet; - __be32 daddr; + __be32 daddr, saddr; if (ip_options_echo(&icmp_param->replyopts.opt.opt, skb)) return; @@ -348,6 +349,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb) inet->tos = ip_hdr(skb)->tos; daddr = ipc.addr = ip_hdr(skb)->saddr; + saddr = fib_compute_spec_dst(skb); ipc.opt = NULL; ipc.tx_flags = 0; if (icmp_param->replyopts.opt.opt.optlen) { @@ -357,7 +359,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb) } memset(&fl4, 0, sizeof(fl4)); fl4.daddr = daddr; - fl4.saddr = rt->rt_spec_dst; + fl4.saddr = saddr; fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos); fl4.flowi4_proto = IPPROTO_ICMP; security_skb_classify_flow(skb, flowi4_to_flowi(&fl4)); @@ -569,7 +571,7 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info) rcu_read_lock(); if (rt_is_input_route(rt) && net->ipv4.sysctl_icmp_errors_use_inbound_ifaddr) - dev = dev_get_by_index_rcu(net, rt->rt_iif); + dev = dev_get_by_index_rcu(net, inet_iif(skb_in)); if (dev) saddr = inet_select_addr(dev, 0, RT_SCOPE_LINK); @@ -632,6 +634,27 @@ out:; EXPORT_SYMBOL(icmp_send); +static void icmp_socket_deliver(struct sk_buff *skb, u32 info) +{ + const struct iphdr *iph = (const struct iphdr *) skb->data; + const struct net_protocol *ipprot; + int protocol = iph->protocol; + + /* Checkin full IP header plus 8 bytes of protocol to + * avoid additional coding at protocol handlers. + */ + if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) + return; + + raw_icmp_error(skb, protocol, info); + + rcu_read_lock(); + ipprot = rcu_dereference(inet_protos[protocol]); + if (ipprot && ipprot->err_handler) + ipprot->err_handler(skb, info); + rcu_read_unlock(); +} + /* * Handle ICMP_DEST_UNREACH, ICMP_TIME_EXCEED, and ICMP_QUENCH. 
*/ @@ -640,10 +663,8 @@ static void icmp_unreach(struct sk_buff *skb) { const struct iphdr *iph; struct icmphdr *icmph; - int hash, protocol; - const struct net_protocol *ipprot; - u32 info = 0; struct net *net; + u32 info = 0; net = dev_net(skb_dst(skb)->dev); @@ -674,9 +695,7 @@ static void icmp_unreach(struct sk_buff *skb) LIMIT_NETDEBUG(KERN_INFO pr_fmt("%pI4: fragmentation needed and DF set\n"), &iph->daddr); } else { - info = ip_rt_frag_needed(net, iph, - ntohs(icmph->un.frag.mtu), - skb->dev); + info = ntohs(icmph->un.frag.mtu); if (!info) goto out; } @@ -720,26 +739,7 @@ static void icmp_unreach(struct sk_buff *skb) goto out; } - /* Checkin full IP header plus 8 bytes of protocol to - * avoid additional coding at protocol handlers. - */ - if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) - goto out; - - iph = (const struct iphdr *)skb->data; - protocol = iph->protocol; - - /* - * Deliver ICMP message to raw sockets. Pretty useless feature? - */ - raw_icmp_error(skb, protocol, info); - - hash = protocol & (MAX_INET_PROTOS - 1); - rcu_read_lock(); - ipprot = rcu_dereference(inet_protos[hash]); - if (ipprot && ipprot->err_handler) - ipprot->err_handler(skb, info); - rcu_read_unlock(); + icmp_socket_deliver(skb, info); out: return; @@ -755,46 +755,15 @@ out_err: static void icmp_redirect(struct sk_buff *skb) { - const struct iphdr *iph; - - if (skb->len < sizeof(struct iphdr)) - goto out_err; - - /* - * Get the copied header of the packet that caused the redirect - */ - if (!pskb_may_pull(skb, sizeof(struct iphdr))) - goto out; - - iph = (const struct iphdr *)skb->data; - - switch (icmp_hdr(skb)->code & 7) { - case ICMP_REDIR_NET: - case ICMP_REDIR_NETTOS: - /* - * As per RFC recommendations now handle it as a host redirect. - */ - case ICMP_REDIR_HOST: - case ICMP_REDIR_HOSTTOS: - ip_rt_redirect(ip_hdr(skb)->saddr, iph->daddr, - icmp_hdr(skb)->un.gateway, - iph->saddr, skb->dev); - break; + if (skb->len < sizeof(struct iphdr)) { + ICMP_INC_STATS_BH(dev_net(skb->dev), ICMP_MIB_INERRORS); + return; } - /* Ping wants to see redirects. - * Let's pretend they are errors of sorts... */ - if (iph->protocol == IPPROTO_ICMP && - iph->ihl >= 5 && - pskb_may_pull(skb, (iph->ihl<<2)+8)) { - ping_err(skb, icmp_hdr(skb)->un.gateway); - } + if (!pskb_may_pull(skb, sizeof(struct iphdr))) + return; -out: - return; -out_err: - ICMP_INC_STATS_BH(dev_net(skb->dev), ICMP_MIB_INERRORS); - goto out; + icmp_socket_deliver(skb, icmp_hdr(skb)->un.gateway); } /* @@ -868,86 +837,6 @@ out_err: goto out; } - -/* - * Handle ICMP_ADDRESS_MASK requests. (RFC950) - * - * RFC1122 (3.2.2.9). A host MUST only send replies to - * ADDRESS_MASK requests if it's been configured as an address mask - * agent. Receiving a request doesn't constitute implicit permission to - * act as one. Of course, implementing this correctly requires (SHOULD) - * a way to turn the functionality on and off. Another one for sysctl(), - * I guess. -- MS - * - * RFC1812 (4.3.3.9). A router MUST implement it. - * A router SHOULD have switch turning it on/off. - * This switch MUST be ON by default. - * - * Gratuitous replies, zero-source replies are not implemented, - * that complies with RFC. DO NOT implement them!!! All the idea - * of broadcast addrmask replies as specified in RFC950 is broken. - * The problem is that it is not uncommon to have several prefixes - * on one physical interface. Moreover, addrmask agent can even be - * not aware of existing another prefixes. - * If source is zero, addrmask agent cannot choose correct prefix. 
- * Gratuitous mask announcements suffer from the same problem. - * RFC1812 explains it, but still allows to use ADDRMASK, - * that is pretty silly. --ANK - * - * All these rules are so bizarre, that I removed kernel addrmask - * support at all. It is wrong, it is obsolete, nobody uses it in - * any case. --ANK - * - * Furthermore you can do it with a usermode address agent program - * anyway... - */ - -static void icmp_address(struct sk_buff *skb) -{ -#if 0 - net_dbg_ratelimited("a guy asks for address mask. Who is it?\n"); -#endif -} - -/* - * RFC1812 (4.3.3.9). A router SHOULD listen all replies, and complain - * loudly if an inconsistency is found. - * called with rcu_read_lock() - */ - -static void icmp_address_reply(struct sk_buff *skb) -{ - struct rtable *rt = skb_rtable(skb); - struct net_device *dev = skb->dev; - struct in_device *in_dev; - struct in_ifaddr *ifa; - - if (skb->len < 4 || !(rt->rt_flags&RTCF_DIRECTSRC)) - return; - - in_dev = __in_dev_get_rcu(dev); - if (!in_dev) - return; - - if (in_dev->ifa_list && - IN_DEV_LOG_MARTIANS(in_dev) && - IN_DEV_FORWARD(in_dev)) { - __be32 _mask, *mp; - - mp = skb_header_pointer(skb, 0, sizeof(_mask), &_mask); - BUG_ON(mp == NULL); - for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) { - if (*mp == ifa->ifa_mask && - inet_ifa_match(ip_hdr(skb)->saddr, ifa)) - break; - } - if (!ifa) - net_info_ratelimited("Wrong address mask %pI4 from %s/%pI4\n", - mp, - dev->name, &ip_hdr(skb)->saddr); - } -} - static void icmp_discard(struct sk_buff *skb) { } @@ -1111,10 +1000,10 @@ static const struct icmp_control icmp_pointers[NR_ICMP_TYPES + 1] = { .handler = icmp_discard, }, [ICMP_ADDRESS] = { - .handler = icmp_address, + .handler = icmp_discard, }, [ICMP_ADDRESSREPLY] = { - .handler = icmp_address_reply, + .handler = icmp_discard, }, }; diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index f9ee7417f6a0..db0cf17c00f7 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -374,18 +374,19 @@ struct dst_entry *inet_csk_route_req(struct sock *sk, const struct inet_request_sock *ireq = inet_rsk(req); struct ip_options_rcu *opt = inet_rsk(req)->opt; struct net *net = sock_net(sk); + int flags = inet_sk_flowi_flags(sk); flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE, sk->sk_protocol, - inet_sk_flowi_flags(sk) & ~FLOWI_FLAG_PRECOW_METRICS, + flags, (opt && opt->opt.srr) ? 
opt->opt.faddr : ireq->rmt_addr, ireq->loc_addr, ireq->rmt_port, inet_sk(sk)->inet_sport); security_req_classify_flow(req, flowi4_to_flowi(fl4)); rt = ip_route_output_flow(net, fl4, sk); if (IS_ERR(rt)) goto no_route; - if (opt && opt->opt.is_strictroute && fl4->daddr != rt->rt_gateway) + if (opt && opt->opt.is_strictroute && rt->rt_gateway) goto route_err; return &rt->dst; @@ -418,7 +419,7 @@ struct dst_entry *inet_csk_route_child_sock(struct sock *sk, rt = ip_route_output_flow(net, fl4, sk); if (IS_ERR(rt)) goto no_route; - if (opt && opt->opt.is_strictroute && fl4->daddr != rt->rt_gateway) + if (opt && opt->opt.is_strictroute && rt->rt_gateway) goto route_err; return &rt->dst; @@ -799,3 +800,49 @@ int inet_csk_compat_setsockopt(struct sock *sk, int level, int optname, } EXPORT_SYMBOL_GPL(inet_csk_compat_setsockopt); #endif + +static struct dst_entry *inet_csk_rebuild_route(struct sock *sk, struct flowi *fl) +{ + const struct inet_sock *inet = inet_sk(sk); + const struct ip_options_rcu *inet_opt; + __be32 daddr = inet->inet_daddr; + struct flowi4 *fl4; + struct rtable *rt; + + rcu_read_lock(); + inet_opt = rcu_dereference(inet->inet_opt); + if (inet_opt && inet_opt->opt.srr) + daddr = inet_opt->opt.faddr; + fl4 = &fl->u.ip4; + rt = ip_route_output_ports(sock_net(sk), fl4, sk, daddr, + inet->inet_saddr, inet->inet_dport, + inet->inet_sport, sk->sk_protocol, + RT_CONN_FLAGS(sk), sk->sk_bound_dev_if); + if (IS_ERR(rt)) + rt = NULL; + if (rt) + sk_setup_caps(sk, &rt->dst); + rcu_read_unlock(); + + return &rt->dst; +} + +struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu) +{ + struct dst_entry *dst = __sk_dst_check(sk, 0); + struct inet_sock *inet = inet_sk(sk); + + if (!dst) { + dst = inet_csk_rebuild_route(sk, &inet->cork.fl); + if (!dst) + goto out; + } + dst->ops->update_pmtu(dst, sk, NULL, mtu); + + dst = __sk_dst_check(sk, 0); + if (!dst) + dst = inet_csk_rebuild_route(sk, &inet->cork.fl); +out: + return dst; +} +EXPORT_SYMBOL_GPL(inet_csk_update_pmtu); diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c index 46d1e7199a8c..570e61f9611f 100644 --- a/net/ipv4/inet_diag.c +++ b/net/ipv4/inet_diag.c @@ -46,9 +46,6 @@ struct inet_diag_entry { u16 userlocks; }; -#define INET_DIAG_PUT(skb, attrtype, attrlen) \ - RTA_DATA(__RTA_PUT(skb, attrtype, attrlen)) - static DEFINE_MUTEX(inet_diag_table_mutex); static const struct inet_diag_handler *inet_diag_lock_handler(int proto) @@ -78,24 +75,22 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, const struct inet_sock *inet = inet_sk(sk); struct inet_diag_msg *r; struct nlmsghdr *nlh; + struct nlattr *attr; void *info = NULL; - struct inet_diag_meminfo *minfo = NULL; - unsigned char *b = skb_tail_pointer(skb); const struct inet_diag_handler *handler; int ext = req->idiag_ext; handler = inet_diag_table[req->sdiag_protocol]; BUG_ON(handler == NULL); - nlh = NLMSG_PUT(skb, pid, seq, unlh->nlmsg_type, sizeof(*r)); - nlh->nlmsg_flags = nlmsg_flags; + nlh = nlmsg_put(skb, pid, seq, unlh->nlmsg_type, sizeof(*r), + nlmsg_flags); + if (!nlh) + return -EMSGSIZE; - r = NLMSG_DATA(nlh); + r = nlmsg_data(nlh); BUG_ON(sk->sk_state == TCP_TIME_WAIT); - if (ext & (1 << (INET_DIAG_MEMINFO - 1))) - minfo = INET_DIAG_PUT(skb, INET_DIAG_MEMINFO, sizeof(*minfo)); - r->idiag_family = sk->sk_family; r->idiag_state = sk->sk_state; r->idiag_timer = 0; @@ -113,7 +108,8 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, * hence this needs to be included regardless of socket family. 
*/ if (ext & (1 << (INET_DIAG_TOS - 1))) - RTA_PUT_U8(skb, INET_DIAG_TOS, inet->tos); + if (nla_put_u8(skb, INET_DIAG_TOS, inet->tos) < 0) + goto errout; #if IS_ENABLED(CONFIG_IPV6) if (r->idiag_family == AF_INET6) { @@ -121,24 +117,31 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, *(struct in6_addr *)r->id.idiag_src = np->rcv_saddr; *(struct in6_addr *)r->id.idiag_dst = np->daddr; + if (ext & (1 << (INET_DIAG_TCLASS - 1))) - RTA_PUT_U8(skb, INET_DIAG_TCLASS, np->tclass); + if (nla_put_u8(skb, INET_DIAG_TCLASS, np->tclass) < 0) + goto errout; } #endif r->idiag_uid = sock_i_uid(sk); r->idiag_inode = sock_i_ino(sk); - if (minfo) { - minfo->idiag_rmem = sk_rmem_alloc_get(sk); - minfo->idiag_wmem = sk->sk_wmem_queued; - minfo->idiag_fmem = sk->sk_forward_alloc; - minfo->idiag_tmem = sk_wmem_alloc_get(sk); + if (ext & (1 << (INET_DIAG_MEMINFO - 1))) { + struct inet_diag_meminfo minfo = { + .idiag_rmem = sk_rmem_alloc_get(sk), + .idiag_wmem = sk->sk_wmem_queued, + .idiag_fmem = sk->sk_forward_alloc, + .idiag_tmem = sk_wmem_alloc_get(sk), + }; + + if (nla_put(skb, INET_DIAG_MEMINFO, sizeof(minfo), &minfo) < 0) + goto errout; } if (ext & (1 << (INET_DIAG_SKMEMINFO - 1))) if (sock_diag_put_meminfo(sk, skb, INET_DIAG_SKMEMINFO)) - goto rtattr_failure; + goto errout; if (icsk == NULL) { handler->idiag_get_info(sk, r, NULL); @@ -165,16 +168,20 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, } #undef EXPIRES_IN_MS - if (ext & (1 << (INET_DIAG_INFO - 1))) - info = INET_DIAG_PUT(skb, INET_DIAG_INFO, sizeof(struct tcp_info)); - - if ((ext & (1 << (INET_DIAG_CONG - 1))) && icsk->icsk_ca_ops) { - const size_t len = strlen(icsk->icsk_ca_ops->name); + if (ext & (1 << (INET_DIAG_INFO - 1))) { + attr = nla_reserve(skb, INET_DIAG_INFO, + sizeof(struct tcp_info)); + if (!attr) + goto errout; - strcpy(INET_DIAG_PUT(skb, INET_DIAG_CONG, len + 1), - icsk->icsk_ca_ops->name); + info = nla_data(attr); } + if ((ext & (1 << (INET_DIAG_CONG - 1))) && icsk->icsk_ca_ops) + if (nla_put_string(skb, INET_DIAG_CONG, + icsk->icsk_ca_ops->name) < 0) + goto errout; + handler->idiag_get_info(sk, r, info); if (sk->sk_state < TCP_TIME_WAIT && @@ -182,12 +189,10 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, icsk->icsk_ca_ops->get_info(sk, ext, skb); out: - nlh->nlmsg_len = skb_tail_pointer(skb) - b; - return skb->len; + return nlmsg_end(skb, nlh); -rtattr_failure: -nlmsg_failure: - nlmsg_trim(skb, b); +errout: + nlmsg_cancel(skb, nlh); return -EMSGSIZE; } EXPORT_SYMBOL_GPL(inet_sk_diag_fill); @@ -208,14 +213,15 @@ static int inet_twsk_diag_fill(struct inet_timewait_sock *tw, { long tmo; struct inet_diag_msg *r; - const unsigned char *previous_tail = skb_tail_pointer(skb); - struct nlmsghdr *nlh = NLMSG_PUT(skb, pid, seq, - unlh->nlmsg_type, sizeof(*r)); + struct nlmsghdr *nlh; - r = NLMSG_DATA(nlh); - BUG_ON(tw->tw_state != TCP_TIME_WAIT); + nlh = nlmsg_put(skb, pid, seq, unlh->nlmsg_type, sizeof(*r), + nlmsg_flags); + if (!nlh) + return -EMSGSIZE; - nlh->nlmsg_flags = nlmsg_flags; + r = nlmsg_data(nlh); + BUG_ON(tw->tw_state != TCP_TIME_WAIT); tmo = tw->tw_ttd - jiffies; if (tmo < 0) @@ -245,11 +251,8 @@ static int inet_twsk_diag_fill(struct inet_timewait_sock *tw, *(struct in6_addr *)r->id.idiag_dst = tw6->tw_v6_daddr; } #endif - nlh->nlmsg_len = skb_tail_pointer(skb) - previous_tail; - return skb->len; -nlmsg_failure: - nlmsg_trim(skb, previous_tail); - return -EMSGSIZE; + + return nlmsg_end(skb, nlh); } static int sk_diag_fill(struct sock 
*sk, struct sk_buff *skb, @@ -269,16 +272,17 @@ int inet_diag_dump_one_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *in_s int err; struct sock *sk; struct sk_buff *rep; + struct net *net = sock_net(in_skb->sk); err = -EINVAL; if (req->sdiag_family == AF_INET) { - sk = inet_lookup(&init_net, hashinfo, req->id.idiag_dst[0], + sk = inet_lookup(net, hashinfo, req->id.idiag_dst[0], req->id.idiag_dport, req->id.idiag_src[0], req->id.idiag_sport, req->id.idiag_if); } #if IS_ENABLED(CONFIG_IPV6) else if (req->sdiag_family == AF_INET6) { - sk = inet6_lookup(&init_net, hashinfo, + sk = inet6_lookup(net, hashinfo, (struct in6_addr *)req->id.idiag_dst, req->id.idiag_dport, (struct in6_addr *)req->id.idiag_src, @@ -298,23 +302,23 @@ int inet_diag_dump_one_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *in_s if (err) goto out; - err = -ENOMEM; - rep = alloc_skb(NLMSG_SPACE((sizeof(struct inet_diag_msg) + - sizeof(struct inet_diag_meminfo) + - sizeof(struct tcp_info) + 64)), - GFP_KERNEL); - if (!rep) + rep = nlmsg_new(sizeof(struct inet_diag_msg) + + sizeof(struct inet_diag_meminfo) + + sizeof(struct tcp_info) + 64, GFP_KERNEL); + if (!rep) { + err = -ENOMEM; goto out; + } err = sk_diag_fill(sk, rep, req, NETLINK_CB(in_skb).pid, nlh->nlmsg_seq, 0, nlh); if (err < 0) { WARN_ON(err == -EMSGSIZE); - kfree_skb(rep); + nlmsg_free(rep); goto out; } - err = netlink_unicast(sock_diag_nlsk, rep, NETLINK_CB(in_skb).pid, + err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).pid, MSG_DONTWAIT); if (err > 0) err = 0; @@ -592,15 +596,16 @@ static int inet_diag_fill_req(struct sk_buff *skb, struct sock *sk, { const struct inet_request_sock *ireq = inet_rsk(req); struct inet_sock *inet = inet_sk(sk); - unsigned char *b = skb_tail_pointer(skb); struct inet_diag_msg *r; struct nlmsghdr *nlh; long tmo; - nlh = NLMSG_PUT(skb, pid, seq, unlh->nlmsg_type, sizeof(*r)); - nlh->nlmsg_flags = NLM_F_MULTI; - r = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, unlh->nlmsg_type, sizeof(*r), + NLM_F_MULTI); + if (!nlh) + return -EMSGSIZE; + r = nlmsg_data(nlh); r->idiag_family = sk->sk_family; r->idiag_state = TCP_SYN_RECV; r->idiag_timer = 1; @@ -628,13 +633,8 @@ static int inet_diag_fill_req(struct sk_buff *skb, struct sock *sk, *(struct in6_addr *)r->id.idiag_dst = inet6_rsk(req)->rmt_addr; } #endif - nlh->nlmsg_len = skb_tail_pointer(skb) - b; - - return skb->len; -nlmsg_failure: - nlmsg_trim(skb, b); - return -1; + return nlmsg_end(skb, nlh); } static int inet_diag_dump_reqs(struct sk_buff *skb, struct sock *sk, @@ -725,6 +725,7 @@ void inet_diag_dump_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *skb, { int i, num; int s_i, s_num; + struct net *net = sock_net(skb->sk); s_i = cb->args[1]; s_num = num = cb->args[2]; @@ -744,6 +745,9 @@ void inet_diag_dump_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *skb, sk_nulls_for_each(sk, node, &ilb->head) { struct inet_sock *inet = inet_sk(sk); + if (!net_eq(sock_net(sk), net)) + continue; + if (num < s_num) { num++; continue; @@ -814,6 +818,8 @@ skip_listen_ht: sk_nulls_for_each(sk, node, &head->chain) { struct inet_sock *inet = inet_sk(sk); + if (!net_eq(sock_net(sk), net)) + continue; if (num < s_num) goto next_normal; if (!(r->idiag_states & (1 << sk->sk_state))) @@ -840,6 +846,8 @@ next_normal: inet_twsk_for_each(tw, node, &head->twchain) { + if (!net_eq(twsk_net(tw), net)) + continue; if (num < s_num) goto next_dying; @@ -892,7 +900,7 @@ static int inet_diag_dump(struct sk_buff *skb, struct netlink_callback *cb) if (nlmsg_attrlen(cb->nlh, hdrlen)) 
bc = nlmsg_find_attr(cb->nlh, hdrlen, INET_DIAG_REQ_BYTECODE); - return __inet_diag_dump(skb, cb, (struct inet_diag_req_v2 *)NLMSG_DATA(cb->nlh), bc); + return __inet_diag_dump(skb, cb, nlmsg_data(cb->nlh), bc); } static inline int inet_diag_type2proto(int type) @@ -909,7 +917,7 @@ static inline int inet_diag_type2proto(int type) static int inet_diag_dump_compat(struct sk_buff *skb, struct netlink_callback *cb) { - struct inet_diag_req *rc = NLMSG_DATA(cb->nlh); + struct inet_diag_req *rc = nlmsg_data(cb->nlh); struct inet_diag_req_v2 req; struct nlattr *bc = NULL; int hdrlen = sizeof(struct inet_diag_req); @@ -929,7 +937,7 @@ static int inet_diag_dump_compat(struct sk_buff *skb, struct netlink_callback *c static int inet_diag_get_exact_compat(struct sk_buff *in_skb, const struct nlmsghdr *nlh) { - struct inet_diag_req *rc = NLMSG_DATA(nlh); + struct inet_diag_req *rc = nlmsg_data(nlh); struct inet_diag_req_v2 req; req.sdiag_family = rc->idiag_family; @@ -944,6 +952,7 @@ static int inet_diag_get_exact_compat(struct sk_buff *in_skb, static int inet_diag_rcv_msg_compat(struct sk_buff *skb, struct nlmsghdr *nlh) { int hdrlen = sizeof(struct inet_diag_req); + struct net *net = sock_net(skb->sk); if (nlh->nlmsg_type >= INET_DIAG_GETSOCK_MAX || nlmsg_len(nlh) < hdrlen) @@ -964,7 +973,7 @@ static int inet_diag_rcv_msg_compat(struct sk_buff *skb, struct nlmsghdr *nlh) struct netlink_dump_control c = { .dump = inet_diag_dump_compat, }; - return netlink_dump_start(sock_diag_nlsk, skb, nlh, &c); + return netlink_dump_start(net->diag_nlsk, skb, nlh, &c); } } @@ -974,6 +983,7 @@ static int inet_diag_rcv_msg_compat(struct sk_buff *skb, struct nlmsghdr *nlh) static int inet_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h) { int hdrlen = sizeof(struct inet_diag_req_v2); + struct net *net = sock_net(skb->sk); if (nlmsg_len(h) < hdrlen) return -EINVAL; @@ -992,11 +1002,11 @@ static int inet_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h) struct netlink_dump_control c = { .dump = inet_diag_dump, }; - return netlink_dump_start(sock_diag_nlsk, skb, h, &c); + return netlink_dump_start(net->diag_nlsk, skb, h, &c); } } - return inet_diag_get_exact(skb, h, (struct inet_diag_req_v2 *)NLMSG_DATA(h)); + return inet_diag_get_exact(skb, h, nlmsg_data(h)); } static const struct sock_diag_handler inet_diag_handler = { diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c index 5ff2a51b6d0c..85190e69297b 100644 --- a/net/ipv4/inet_fragment.c +++ b/net/ipv4/inet_fragment.c @@ -243,12 +243,12 @@ static struct inet_frag_queue *inet_frag_alloc(struct netns_frags *nf, if (q == NULL) return NULL; + q->net = nf; f->constructor(q, arg); atomic_add(f->qsize, &nf->mem); setup_timer(&q->timer, f->frag_expire, (unsigned long)q); spin_lock_init(&q->lock); atomic_set(&q->refcnt, 1); - q->net = nf; return q; } diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c index dfba343b2509..e1e0a4e8fd34 100644 --- a/net/ipv4/inetpeer.c +++ b/net/ipv4/inetpeer.c @@ -82,23 +82,39 @@ static const struct inet_peer peer_fake_node = { .avl_height = 0 }; -struct inet_peer_base { - struct inet_peer __rcu *root; - seqlock_t lock; - int total; -}; +void inet_peer_base_init(struct inet_peer_base *bp) +{ + bp->root = peer_avl_empty_rcu; + seqlock_init(&bp->lock); + bp->flush_seq = ~0U; + bp->total = 0; +} +EXPORT_SYMBOL_GPL(inet_peer_base_init); -static struct inet_peer_base v4_peers = { - .root = peer_avl_empty_rcu, - .lock = __SEQLOCK_UNLOCKED(v4_peers.lock), - .total = 0, -}; +static atomic_t v4_seq = ATOMIC_INIT(0); 
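The inetpeer rework in this hunk, together with the icmp.c and ip_fragment.c hunks elsewhere in the diff, replaces the global v4_peers/v6_peers trees with a per-netns inet_peer_base that callers pass in explicitly, while v4_seq/v6_seq act as per-family generation counters so inetpeer_invalidate_family() can flush a family lazily on the next lookup. As a usage illustration only (the function name below is invented; the calls, fields, and net->ipv4.peers base are the ones visible in this diff), a caller now looks roughly like:

	#include <net/net_namespace.h>
	#include <net/inetpeer.h>

	/* Hypothetical caller showing the new explicit-base convention. */
	static void example_peer_rate_check(struct net *net, __be32 daddr)
	{
		struct inet_peer *peer;

		/* The per-netns base is passed in; no more global v4_peers. */
		peer = inet_getpeer_v4(net->ipv4.peers, daddr, 1);
		if (!peer)
			return;

		/* ... consult peer->rate_tokens / peer->rate_last here ... */

		inet_putpeer(peer);
	}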
+static atomic_t v6_seq = ATOMIC_INIT(0); -static struct inet_peer_base v6_peers = { - .root = peer_avl_empty_rcu, - .lock = __SEQLOCK_UNLOCKED(v6_peers.lock), - .total = 0, -}; +static atomic_t *inetpeer_seq_ptr(int family) +{ + return (family == AF_INET ? &v4_seq : &v6_seq); +} + +static inline void flush_check(struct inet_peer_base *base, int family) +{ + atomic_t *fp = inetpeer_seq_ptr(family); + + if (unlikely(base->flush_seq != atomic_read(fp))) { + inetpeer_invalidate_tree(base); + base->flush_seq = atomic_read(fp); + } +} + +void inetpeer_invalidate_family(int family) +{ + atomic_t *fp = inetpeer_seq_ptr(family); + + atomic_inc(fp); +} #define PEER_MAXDEPTH 40 /* sufficient for about 2^27 nodes */ @@ -110,7 +126,7 @@ int inet_peer_maxttl __read_mostly = 10 * 60 * HZ; /* usual time to live: 10 min static void inetpeer_gc_worker(struct work_struct *work) { - struct inet_peer *p, *n; + struct inet_peer *p, *n, *c; LIST_HEAD(list); spin_lock_bh(&gc_lock); @@ -122,17 +138,19 @@ static void inetpeer_gc_worker(struct work_struct *work) list_for_each_entry_safe(p, n, &list, gc_list) { - if(need_resched()) + if (need_resched()) cond_resched(); - if (p->avl_left != peer_avl_empty) { - list_add_tail(&p->avl_left->gc_list, &list); - p->avl_left = peer_avl_empty; + c = rcu_dereference_protected(p->avl_left, 1); + if (c != peer_avl_empty) { + list_add_tail(&c->gc_list, &list); + p->avl_left = peer_avl_empty_rcu; } - if (p->avl_right != peer_avl_empty) { - list_add_tail(&p->avl_right->gc_list, &list); - p->avl_right = peer_avl_empty; + c = rcu_dereference_protected(p->avl_right, 1); + if (c != peer_avl_empty) { + list_add_tail(&c->gc_list, &list); + p->avl_right = peer_avl_empty_rcu; } n = list_entry(p->gc_list.next, struct inet_peer, gc_list); @@ -401,11 +419,6 @@ static void unlink_from_pool(struct inet_peer *p, struct inet_peer_base *base, call_rcu(&p->rcu, inetpeer_free_rcu); } -static struct inet_peer_base *family_to_base(int family) -{ - return family == AF_INET ? &v4_peers : &v6_peers; -} - /* perform garbage collect on all items stacked during a lookup */ static int inet_peer_gc(struct inet_peer_base *base, struct inet_peer __rcu **stack[PEER_MAXDEPTH], @@ -443,14 +456,17 @@ static int inet_peer_gc(struct inet_peer_base *base, return cnt; } -struct inet_peer *inet_getpeer(const struct inetpeer_addr *daddr, int create) +struct inet_peer *inet_getpeer(struct inet_peer_base *base, + const struct inetpeer_addr *daddr, + int create) { struct inet_peer __rcu **stack[PEER_MAXDEPTH], ***stackptr; - struct inet_peer_base *base = family_to_base(daddr->family); struct inet_peer *p; unsigned int sequence; int invalidated, gccnt = 0; + flush_check(base, daddr->family); + /* Attempt a lockless lookup first. * Because of a concurrent writer, we might not find an existing entry. */ @@ -492,13 +508,9 @@ relookup: (daddr->family == AF_INET) ? secure_ip_id(daddr->addr.a4) : secure_ipv6_id(daddr->addr.a6)); - p->tcp_ts_stamp = 0; p->metrics[RTAX_LOCK-1] = INETPEER_METRICS_NEW; p->rate_tokens = 0; p->rate_last = 0; - p->pmtu_expires = 0; - p->pmtu_orig = 0; - memset(&p->redirect_learned, 0, sizeof(p->redirect_learned)); INIT_LIST_HEAD(&p->gc_list); /* Link the node. 
*/ @@ -571,26 +583,19 @@ static void inetpeer_inval_rcu(struct rcu_head *head) schedule_delayed_work(&gc_work, gc_delay); } -void inetpeer_invalidate_tree(int family) +void inetpeer_invalidate_tree(struct inet_peer_base *base) { - struct inet_peer *old, *new, *prev; - struct inet_peer_base *base = family_to_base(family); + struct inet_peer *root; write_seqlock_bh(&base->lock); - old = base->root; - if (old == peer_avl_empty_rcu) - goto out; - - new = peer_avl_empty_rcu; - - prev = cmpxchg(&base->root, old, new); - if (prev == old) { + root = rcu_deref_locked(base->root, base); + if (root != peer_avl_empty) { + base->root = peer_avl_empty_rcu; base->total = 0; - call_rcu(&prev->gc_rcu, inetpeer_inval_rcu); + call_rcu(&root->gc_rcu, inetpeer_inval_rcu); } -out: write_sequnlock_bh(&base->lock); } EXPORT_SYMBOL(inetpeer_invalidate_tree); diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c index 9dbd3dd6022d..7ad88e5e7110 100644 --- a/net/ipv4/ip_fragment.c +++ b/net/ipv4/ip_fragment.c @@ -171,6 +171,10 @@ static void frag_kfree_skb(struct netns_frags *nf, struct sk_buff *skb) static void ip4_frag_init(struct inet_frag_queue *q, void *a) { struct ipq *qp = container_of(q, struct ipq, q); + struct netns_ipv4 *ipv4 = container_of(q->net, struct netns_ipv4, + frags); + struct net *net = container_of(ipv4, struct net, ipv4); + struct ip4_create_arg *arg = a; qp->protocol = arg->iph->protocol; @@ -180,7 +184,7 @@ static void ip4_frag_init(struct inet_frag_queue *q, void *a) qp->daddr = arg->iph->daddr; qp->user = arg->user; qp->peer = sysctl_ipfrag_max_dist ? - inet_getpeer_v4(arg->iph->saddr, 1) : NULL; + inet_getpeer_v4(net->ipv4.peers, arg->iph->saddr, 1) : NULL; } static __inline__ void ip4_frag_free(struct inet_frag_queue *q) @@ -254,8 +258,8 @@ static void ip_expire(unsigned long arg) /* skb dst is stale, drop it, and perform route lookup again */ skb_dst_drop(head); iph = ip_hdr(head); - err = ip_route_input_noref(head, iph->daddr, iph->saddr, - iph->tos, head->dev); + err = ip_route_input(head, iph->daddr, iph->saddr, + iph->tos, head->dev); if (err) goto out_rcu_unlock; diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index f49047b79609..b062a98574f2 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -516,9 +516,6 @@ static void ipgre_err(struct sk_buff *skb, u32 info) case ICMP_PORT_UNREACH: /* Impossible event. */ return; - case ICMP_FRAG_NEEDED: - /* Soft state for pmtu is maintained by IP core. */ - return; default: /* All others are translated to HOST_UNREACH. rfc2003 contains "deep thoughts" about NET_UNREACH, @@ -531,6 +528,9 @@ static void ipgre_err(struct sk_buff *skb, u32 info) if (code != ICMP_EXC_TTL) return; break; + + case ICMP_REDIRECT: + break; } rcu_read_lock(); @@ -538,7 +538,20 @@ static void ipgre_err(struct sk_buff *skb, u32 info) flags & GRE_KEY ? 
*(((__be32 *)p) + (grehlen / 4) - 1) : 0, p[1]); - if (t == NULL || t->parms.iph.daddr == 0 || + if (t == NULL) + goto out; + + if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { + ipv4_update_pmtu(skb, dev_net(skb->dev), info, + t->parms.link, 0, IPPROTO_GRE, 0); + goto out; + } + if (type == ICMP_REDIRECT) { + ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0, + IPPROTO_GRE, 0); + goto out; + } + if (t->parms.iph.daddr == 0 || ipv4_is_multicast(t->parms.iph.daddr)) goto out; @@ -753,7 +766,7 @@ static netdev_tx_t ipgre_tunnel_xmit(struct sk_buff *skb, struct net_device *dev if (skb->protocol == htons(ETH_P_IP)) { rt = skb_rtable(skb); - dst = rt->rt_gateway; + dst = rt_nexthop(rt, old_iph->daddr); } #if IS_ENABLED(CONFIG_IPV6) else if (skb->protocol == htons(ETH_P_IPV6)) { @@ -820,7 +833,7 @@ static netdev_tx_t ipgre_tunnel_xmit(struct sk_buff *skb, struct net_device *dev mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu; if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); if (skb->protocol == htons(ETH_P_IP)) { df |= (old_iph->frag_off&htons(IP_DF)); diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c index 8590144ca330..4ebc6feee250 100644 --- a/net/ipv4/ip_input.c +++ b/net/ipv4/ip_input.c @@ -198,14 +198,13 @@ static int ip_local_deliver_finish(struct sk_buff *skb) rcu_read_lock(); { int protocol = ip_hdr(skb)->protocol; - int hash, raw; const struct net_protocol *ipprot; + int raw; resubmit: raw = raw_local_deliver(skb, protocol); - hash = protocol & (MAX_INET_PROTOS - 1); - ipprot = rcu_dereference(inet_protos[hash]); + ipprot = rcu_dereference(inet_protos[protocol]); if (ipprot != NULL) { int ret; @@ -314,26 +313,33 @@ drop: return true; } +int sysctl_ip_early_demux __read_mostly = 1; + static int ip_rcv_finish(struct sk_buff *skb) { const struct iphdr *iph = ip_hdr(skb); struct rtable *rt; + if (sysctl_ip_early_demux && !skb_dst(skb)) { + const struct net_protocol *ipprot; + int protocol = iph->protocol; + + rcu_read_lock(); + ipprot = rcu_dereference(inet_protos[protocol]); + if (ipprot && ipprot->early_demux) + ipprot->early_demux(skb); + rcu_read_unlock(); + } + /* * Initialise the virtual path cache for the packet. It describes * how the packet travels inside Linux networking. 
*/ - if (skb_dst(skb) == NULL) { - int err = ip_route_input_noref(skb, iph->daddr, iph->saddr, - iph->tos, skb->dev); + if (!skb_dst(skb)) { + int err = ip_route_input(skb, iph->daddr, iph->saddr, + iph->tos, skb->dev); if (unlikely(err)) { - if (err == -EHOSTUNREACH) - IP_INC_STATS_BH(dev_net(skb->dev), - IPSTATS_MIB_INADDRERRORS); - else if (err == -ENETUNREACH) - IP_INC_STATS_BH(dev_net(skb->dev), - IPSTATS_MIB_INNOROUTES); - else if (err == -EXDEV) + if (err == -EXDEV) NET_INC_STATS_BH(dev_net(skb->dev), LINUX_MIB_IPRPFILTER); goto drop; diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index 708b99494e23..1dc01f9793d5 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -27,6 +27,7 @@ #include <net/icmp.h> #include <net/route.h> #include <net/cipso_ipv4.h> +#include <net/ip_fib.h> /* * Write options to IP header, record destination address to @@ -92,7 +93,6 @@ int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb) unsigned char *sptr, *dptr; int soffset, doffset; int optlen; - __be32 daddr; memset(dopt, 0, sizeof(struct ip_options)); @@ -104,8 +104,6 @@ int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb) sptr = skb_network_header(skb); dptr = dopt->__data; - daddr = skb_rtable(skb)->rt_spec_dst; - if (sopt->rr) { optlen = sptr[sopt->rr+1]; soffset = sptr[sopt->rr+2]; @@ -179,6 +177,8 @@ int ip_options_echo(struct ip_options *dopt, struct sk_buff *skb) doffset -= 4; } if (doffset > 3) { + __be32 daddr = fib_compute_spec_dst(skb); + memcpy(&start[doffset-1], &daddr, 4); dopt->faddr = faddr; dptr[0] = start[0]; @@ -241,6 +241,15 @@ void ip_options_fragment(struct sk_buff *skb) opt->ts_needtime = 0; } +/* helper used by ip_options_compile() to call fib_compute_spec_dst() + * at most one time. + */ +static void spec_dst_fill(__be32 *spec_dst, struct sk_buff *skb) +{ + if (*spec_dst == htonl(INADDR_ANY)) + *spec_dst = fib_compute_spec_dst(skb); +} + /* * Verify options and fill pointers in struct options. * Caller should clear *opt, and set opt->data. @@ -250,12 +259,12 @@ void ip_options_fragment(struct sk_buff *skb) int ip_options_compile(struct net *net, struct ip_options *opt, struct sk_buff *skb) { - int l; - unsigned char *iph; - unsigned char *optptr; - int optlen; + __be32 spec_dst = htonl(INADDR_ANY); unsigned char *pp_ptr = NULL; struct rtable *rt = NULL; + unsigned char *optptr; + unsigned char *iph; + int optlen, l; if (skb != NULL) { rt = skb_rtable(skb); @@ -331,7 +340,8 @@ int ip_options_compile(struct net *net, goto error; } if (rt) { - memcpy(&optptr[optptr[2]-1], &rt->rt_spec_dst, 4); + spec_dst_fill(&spec_dst, skb); + memcpy(&optptr[optptr[2]-1], &spec_dst, 4); opt->is_changed = 1; } optptr[2] += 4; @@ -373,7 +383,8 @@ int ip_options_compile(struct net *net, } opt->ts = optptr - iph; if (rt) { - memcpy(&optptr[optptr[2]-1], &rt->rt_spec_dst, 4); + spec_dst_fill(&spec_dst, skb); + memcpy(&optptr[optptr[2]-1], &spec_dst, 4); timeptr = &optptr[optptr[2]+3]; } opt->ts_needaddr = 1; diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 451f97c42eb4..ba39a52d18c1 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -113,19 +113,6 @@ int ip_local_out(struct sk_buff *skb) } EXPORT_SYMBOL_GPL(ip_local_out); -/* dev_loopback_xmit for use with netfilter. 
*/ -static int ip_dev_loopback_xmit(struct sk_buff *newskb) -{ - skb_reset_mac_header(newskb); - __skb_pull(newskb, skb_network_offset(newskb)); - newskb->pkt_type = PACKET_LOOPBACK; - newskb->ip_summed = CHECKSUM_UNNECESSARY; - WARN_ON(!skb_dst(newskb)); - skb_dst_force(newskb); - netif_rx_ni(newskb); - return 0; -} - static inline int ip_select_ttl(struct inet_sock *inet, struct dst_entry *dst) { int ttl = inet->uc_ttl; @@ -183,6 +170,7 @@ static inline int ip_finish_output2(struct sk_buff *skb) struct net_device *dev = dst->dev; unsigned int hh_len = LL_RESERVED_SPACE(dev); struct neighbour *neigh; + u32 nexthop; if (rt->rt_type == RTN_MULTICAST) { IP_UPD_PO_STATS(dev_net(dev), IPSTATS_MIB_OUTMCAST, skb->len); @@ -200,19 +188,22 @@ static inline int ip_finish_output2(struct sk_buff *skb) } if (skb->sk) skb_set_owner_w(skb2, skb->sk); - kfree_skb(skb); + consume_skb(skb); skb = skb2; } - rcu_read_lock(); - neigh = dst_get_neighbour_noref(dst); + rcu_read_lock_bh(); + nexthop = rt->rt_gateway ? rt->rt_gateway : ip_hdr(skb)->daddr; + neigh = __ipv4_neigh_lookup_noref(dev, nexthop); + if (unlikely(!neigh)) + neigh = __neigh_create(&arp_tbl, &nexthop, dev, false); if (neigh) { - int res = neigh_output(neigh, skb); + int res = dst_neigh_output(dst, neigh, skb); - rcu_read_unlock(); + rcu_read_unlock_bh(); return res; } - rcu_read_unlock(); + rcu_read_unlock_bh(); net_dbg_ratelimited("%s: No header cache and no neighbour!\n", __func__); @@ -281,7 +272,7 @@ int ip_mc_output(struct sk_buff *skb) if (newskb) NF_HOOK(NFPROTO_IPV4, NF_INET_POST_ROUTING, newskb, NULL, newskb->dev, - ip_dev_loopback_xmit); + dev_loopback_xmit); } /* Multicasts with ttl 0 must not go beyond the host */ @@ -296,7 +287,7 @@ int ip_mc_output(struct sk_buff *skb) struct sk_buff *newskb = skb_clone(skb, GFP_ATOMIC); if (newskb) NF_HOOK(NFPROTO_IPV4, NF_INET_POST_ROUTING, newskb, - NULL, newskb->dev, ip_dev_loopback_xmit); + NULL, newskb->dev, dev_loopback_xmit); } return NF_HOOK_COND(NFPROTO_IPV4, NF_INET_POST_ROUTING, skb, NULL, @@ -380,7 +371,7 @@ int ip_queue_xmit(struct sk_buff *skb, struct flowi *fl) skb_dst_set_noref(skb, &rt->dst); packet_routed: - if (inet_opt && inet_opt->opt.is_strictroute && fl4->daddr != rt->rt_gateway) + if (inet_opt && inet_opt->opt.is_strictroute && rt->rt_gateway) goto no_route; /* OK, we know where to send it, allocate and build IP header. */ @@ -709,7 +700,7 @@ slow_path: IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGCREATES); } - kfree_skb(skb); + consume_skb(skb); IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGOKS); return err; @@ -1472,19 +1463,34 @@ static int ip_reply_glue_bits(void *dptr, char *to, int offset, /* * Generic function to send a packet as reply to another packet. - * Used to send TCP resets so far. ICMP should use this function too. + * Used to send some TCP resets/acks so far. * - * Should run single threaded per socket because it uses the sock - * structure to pass arguments. + * Use a fake percpu inet socket to avoid false sharing and contention. 
*/ -void ip_send_reply(struct sock *sk, struct sk_buff *skb, __be32 daddr, - const struct ip_reply_arg *arg, unsigned int len) +static DEFINE_PER_CPU(struct inet_sock, unicast_sock) = { + .sk = { + .__sk_common = { + .skc_refcnt = ATOMIC_INIT(1), + }, + .sk_wmem_alloc = ATOMIC_INIT(1), + .sk_allocation = GFP_ATOMIC, + .sk_flags = (1UL << SOCK_USE_WRITE_QUEUE), + }, + .pmtudisc = IP_PMTUDISC_WANT, + .uc_ttl = -1, +}; + +void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, + __be32 saddr, const struct ip_reply_arg *arg, + unsigned int len) { - struct inet_sock *inet = inet_sk(sk); struct ip_options_data replyopts; struct ipcm_cookie ipc; struct flowi4 fl4; struct rtable *rt = skb_rtable(skb); + struct sk_buff *nskb; + struct sock *sk; + struct inet_sock *inet; if (ip_options_echo(&replyopts.opt.opt, skb)) return; @@ -1502,38 +1508,39 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, __be32 daddr, flowi4_init_output(&fl4, arg->bound_dev_if, 0, RT_TOS(arg->tos), - RT_SCOPE_UNIVERSE, sk->sk_protocol, + RT_SCOPE_UNIVERSE, ip_hdr(skb)->protocol, ip_reply_arg_flowi_flags(arg), - daddr, rt->rt_spec_dst, + daddr, saddr, tcp_hdr(skb)->source, tcp_hdr(skb)->dest); security_skb_classify_flow(skb, flowi4_to_flowi(&fl4)); - rt = ip_route_output_key(sock_net(sk), &fl4); + rt = ip_route_output_key(net, &fl4); if (IS_ERR(rt)) return; - /* And let IP do all the hard work. + inet = &get_cpu_var(unicast_sock); - This chunk is not reenterable, hence spinlock. - Note that it uses the fact, that this function is called - with locally disabled BH and that sk cannot be already spinlocked. - */ - bh_lock_sock(sk); inet->tos = arg->tos; + sk = &inet->sk; sk->sk_priority = skb->priority; sk->sk_protocol = ip_hdr(skb)->protocol; sk->sk_bound_dev_if = arg->bound_dev_if; + sock_net_set(sk, net); + __skb_queue_head_init(&sk->sk_write_queue); + sk->sk_sndbuf = sysctl_wmem_default; ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, len, 0, &ipc, &rt, MSG_DONTWAIT); - if ((skb = skb_peek(&sk->sk_write_queue)) != NULL) { + nskb = skb_peek(&sk->sk_write_queue); + if (nskb) { if (arg->csumoffset >= 0) - *((__sum16 *)skb_transport_header(skb) + - arg->csumoffset) = csum_fold(csum_add(skb->csum, + *((__sum16 *)skb_transport_header(nskb) + + arg->csumoffset) = csum_fold(csum_add(nskb->csum, arg->csum)); - skb->ip_summed = CHECKSUM_NONE; + nskb->ip_summed = CHECKSUM_NONE; + skb_set_queue_mapping(nskb, skb_get_queue_mapping(skb)); ip_push_pending_frames(sk, &fl4); } - bh_unlock_sock(sk); + put_cpu_var(unicast_sock); ip_rt_put(rt); } diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c index 0d11f234d615..5eea4a811042 100644 --- a/net/ipv4/ip_sockglue.c +++ b/net/ipv4/ip_sockglue.c @@ -40,6 +40,7 @@ #if IS_ENABLED(CONFIG_IPV6) #include <net/transp_v6.h> #endif +#include <net/ip_fib.h> #include <linux/errqueue.h> #include <asm/uaccess.h> @@ -1019,18 +1020,17 @@ e_inval: * @sk: socket * @skb: buffer * - * To support IP_CMSG_PKTINFO option, we store rt_iif and rt_spec_dst - * in skb->cb[] before dst drop. + * To support IP_CMSG_PKTINFO option, we store rt_iif and specific + * destination in skb->cb[] before dst drop. * This way, receiver doesnt make cache line misses to read rtable. 
*/ void ipv4_pktinfo_prepare(struct sk_buff *skb) { struct in_pktinfo *pktinfo = PKTINFO_SKB_CB(skb); - const struct rtable *rt = skb_rtable(skb); - if (rt) { - pktinfo->ipi_ifindex = rt->rt_iif; - pktinfo->ipi_spec_dst.s_addr = rt->rt_spec_dst; + if (skb_rtable(skb)) { + pktinfo->ipi_ifindex = inet_iif(skb); + pktinfo->ipi_spec_dst.s_addr = fib_compute_spec_dst(skb); } else { pktinfo->ipi_ifindex = 0; pktinfo->ipi_spec_dst.s_addr = 0; diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c new file mode 100644 index 000000000000..3511ffba7bd4 --- /dev/null +++ b/net/ipv4/ip_vti.c @@ -0,0 +1,956 @@ +/* + * Linux NET3: IP/IP protocol decoder modified to support + * virtual tunnel interface + * + * Authors: + * Saurabh Mohan (saurabh.mohan@vyatta.com) 05/07/2012 + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + * + */ + +/* + This version of net/ipv4/ip_vti.c is cloned of net/ipv4/ipip.c + + For comments look at net/ipv4/ip_gre.c --ANK + */ + + +#include <linux/capability.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/uaccess.h> +#include <linux/skbuff.h> +#include <linux/netdevice.h> +#include <linux/in.h> +#include <linux/tcp.h> +#include <linux/udp.h> +#include <linux/if_arp.h> +#include <linux/mroute.h> +#include <linux/init.h> +#include <linux/netfilter_ipv4.h> +#include <linux/if_ether.h> + +#include <net/sock.h> +#include <net/ip.h> +#include <net/icmp.h> +#include <net/ipip.h> +#include <net/inet_ecn.h> +#include <net/xfrm.h> +#include <net/net_namespace.h> +#include <net/netns/generic.h> + +#define HASH_SIZE 16 +#define HASH(addr) (((__force u32)addr^((__force u32)addr>>4))&(HASH_SIZE-1)) + +static struct rtnl_link_ops vti_link_ops __read_mostly; + +static int vti_net_id __read_mostly; +struct vti_net { + struct ip_tunnel __rcu *tunnels_r_l[HASH_SIZE]; + struct ip_tunnel __rcu *tunnels_r[HASH_SIZE]; + struct ip_tunnel __rcu *tunnels_l[HASH_SIZE]; + struct ip_tunnel __rcu *tunnels_wc[1]; + struct ip_tunnel __rcu **tunnels[4]; + + struct net_device *fb_tunnel_dev; +}; + +static int vti_fb_tunnel_init(struct net_device *dev); +static int vti_tunnel_init(struct net_device *dev); +static void vti_tunnel_setup(struct net_device *dev); +static void vti_dev_free(struct net_device *dev); +static int vti_tunnel_bind_dev(struct net_device *dev); + +/* Locking : hash tables are protected by RCU and RTNL */ + +#define for_each_ip_tunnel_rcu(start) \ + for (t = rcu_dereference(start); t; t = rcu_dereference(t->next)) + +/* often modified stats are per cpu, other are shared (netdev->stats) */ +struct pcpu_tstats { + u64 rx_packets; + u64 rx_bytes; + u64 tx_packets; + u64 tx_bytes; + struct u64_stats_sync syncp; +}; + +#define VTI_XMIT(stats1, stats2) do { \ + int err; \ + int pkt_len = skb->len; \ + err = dst_output(skb); \ + if (net_xmit_eval(err) == 0) { \ + u64_stats_update_begin(&(stats1)->syncp); \ + (stats1)->tx_bytes += pkt_len; \ + (stats1)->tx_packets++; \ + u64_stats_update_end(&(stats1)->syncp); \ + } else { \ + (stats2)->tx_errors++; \ + (stats2)->tx_aborted_errors++; \ + } \ +} while (0) + + +static struct rtnl_link_stats64 *vti_get_stats64(struct net_device *dev, + struct rtnl_link_stats64 *tot) +{ + int i; + + for_each_possible_cpu(i) { + const struct pcpu_tstats *tstats = per_cpu_ptr(dev->tstats, i); + u64 rx_packets, 
rx_bytes, tx_packets, tx_bytes; + unsigned int start; + + do { + start = u64_stats_fetch_begin_bh(&tstats->syncp); + rx_packets = tstats->rx_packets; + tx_packets = tstats->tx_packets; + rx_bytes = tstats->rx_bytes; + tx_bytes = tstats->tx_bytes; + } while (u64_stats_fetch_retry_bh(&tstats->syncp, start)); + + tot->rx_packets += rx_packets; + tot->tx_packets += tx_packets; + tot->rx_bytes += rx_bytes; + tot->tx_bytes += tx_bytes; + } + + tot->multicast = dev->stats.multicast; + tot->rx_crc_errors = dev->stats.rx_crc_errors; + tot->rx_fifo_errors = dev->stats.rx_fifo_errors; + tot->rx_length_errors = dev->stats.rx_length_errors; + tot->rx_errors = dev->stats.rx_errors; + tot->tx_fifo_errors = dev->stats.tx_fifo_errors; + tot->tx_carrier_errors = dev->stats.tx_carrier_errors; + tot->tx_dropped = dev->stats.tx_dropped; + tot->tx_aborted_errors = dev->stats.tx_aborted_errors; + tot->tx_errors = dev->stats.tx_errors; + + return tot; +} + +static struct ip_tunnel *vti_tunnel_lookup(struct net *net, + __be32 remote, __be32 local) +{ + unsigned h0 = HASH(remote); + unsigned h1 = HASH(local); + struct ip_tunnel *t; + struct vti_net *ipn = net_generic(net, vti_net_id); + + for_each_ip_tunnel_rcu(ipn->tunnels_r_l[h0 ^ h1]) + if (local == t->parms.iph.saddr && + remote == t->parms.iph.daddr && (t->dev->flags&IFF_UP)) + return t; + for_each_ip_tunnel_rcu(ipn->tunnels_r[h0]) + if (remote == t->parms.iph.daddr && (t->dev->flags&IFF_UP)) + return t; + + for_each_ip_tunnel_rcu(ipn->tunnels_l[h1]) + if (local == t->parms.iph.saddr && (t->dev->flags&IFF_UP)) + return t; + + for_each_ip_tunnel_rcu(ipn->tunnels_wc[0]) + if (t && (t->dev->flags&IFF_UP)) + return t; + return NULL; +} + +static struct ip_tunnel __rcu **__vti_bucket(struct vti_net *ipn, + struct ip_tunnel_parm *parms) +{ + __be32 remote = parms->iph.daddr; + __be32 local = parms->iph.saddr; + unsigned h = 0; + int prio = 0; + + if (remote) { + prio |= 2; + h ^= HASH(remote); + } + if (local) { + prio |= 1; + h ^= HASH(local); + } + return &ipn->tunnels[prio][h]; +} + +static inline struct ip_tunnel __rcu **vti_bucket(struct vti_net *ipn, + struct ip_tunnel *t) +{ + return __vti_bucket(ipn, &t->parms); +} + +static void vti_tunnel_unlink(struct vti_net *ipn, struct ip_tunnel *t) +{ + struct ip_tunnel __rcu **tp; + struct ip_tunnel *iter; + + for (tp = vti_bucket(ipn, t); + (iter = rtnl_dereference(*tp)) != NULL; + tp = &iter->next) { + if (t == iter) { + rcu_assign_pointer(*tp, t->next); + break; + } + } +} + +static void vti_tunnel_link(struct vti_net *ipn, struct ip_tunnel *t) +{ + struct ip_tunnel __rcu **tp = vti_bucket(ipn, t); + + rcu_assign_pointer(t->next, rtnl_dereference(*tp)); + rcu_assign_pointer(*tp, t); +} + +static struct ip_tunnel *vti_tunnel_locate(struct net *net, + struct ip_tunnel_parm *parms, + int create) +{ + __be32 remote = parms->iph.daddr; + __be32 local = parms->iph.saddr; + struct ip_tunnel *t, *nt; + struct ip_tunnel __rcu **tp; + struct net_device *dev; + char name[IFNAMSIZ]; + struct vti_net *ipn = net_generic(net, vti_net_id); + + for (tp = __vti_bucket(ipn, parms); + (t = rtnl_dereference(*tp)) != NULL; + tp = &t->next) { + if (local == t->parms.iph.saddr && remote == t->parms.iph.daddr) + return t; + } + if (!create) + return NULL; + + if (parms->name[0]) + strlcpy(name, parms->name, IFNAMSIZ); + else + strcpy(name, "vti%d"); + + dev = alloc_netdev(sizeof(*t), name, vti_tunnel_setup); + if (dev == NULL) + return NULL; + + dev_net_set(dev, net); + + nt = netdev_priv(dev); + nt->parms = *parms; + dev->rtnl_link_ops = 
&vti_link_ops; + + vti_tunnel_bind_dev(dev); + + if (register_netdevice(dev) < 0) + goto failed_free; + + dev_hold(dev); + vti_tunnel_link(ipn, nt); + return nt; + +failed_free: + free_netdev(dev); + return NULL; +} + +static void vti_tunnel_uninit(struct net_device *dev) +{ + struct net *net = dev_net(dev); + struct vti_net *ipn = net_generic(net, vti_net_id); + + vti_tunnel_unlink(ipn, netdev_priv(dev)); + dev_put(dev); +} + +static int vti_err(struct sk_buff *skb, u32 info) +{ + + /* All the routers (except for Linux) return only + * 8 bytes of packet payload. It means, that precise relaying of + * ICMP in the real Internet is absolutely infeasible. + */ + struct iphdr *iph = (struct iphdr *)skb->data; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; + struct ip_tunnel *t; + int err; + + switch (type) { + default: + case ICMP_PARAMETERPROB: + return 0; + + case ICMP_DEST_UNREACH: + switch (code) { + case ICMP_SR_FAILED: + case ICMP_PORT_UNREACH: + /* Impossible event. */ + return 0; + default: + /* All others are translated to HOST_UNREACH. */ + break; + } + break; + case ICMP_TIME_EXCEEDED: + if (code != ICMP_EXC_TTL) + return 0; + break; + } + + err = -ENOENT; + + rcu_read_lock(); + t = vti_tunnel_lookup(dev_net(skb->dev), iph->daddr, iph->saddr); + if (t == NULL) + goto out; + + if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { + ipv4_update_pmtu(skb, dev_net(skb->dev), info, + t->parms.link, 0, IPPROTO_IPIP, 0); + err = 0; + goto out; + } + + err = 0; + if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED) + goto out; + + if (time_before(jiffies, t->err_time + IPTUNNEL_ERR_TIMEO)) + t->err_count++; + else + t->err_count = 1; + t->err_time = jiffies; +out: + rcu_read_unlock(); + return err; +} + +/* We dont digest the packet therefore let the packet pass */ +static int vti_rcv(struct sk_buff *skb) +{ + struct ip_tunnel *tunnel; + const struct iphdr *iph = ip_hdr(skb); + + rcu_read_lock(); + tunnel = vti_tunnel_lookup(dev_net(skb->dev), iph->saddr, iph->daddr); + if (tunnel != NULL) { + struct pcpu_tstats *tstats; + + tstats = this_cpu_ptr(tunnel->dev->tstats); + u64_stats_update_begin(&tstats->syncp); + tstats->rx_packets++; + tstats->rx_bytes += skb->len; + u64_stats_update_end(&tstats->syncp); + + skb->dev = tunnel->dev; + rcu_read_unlock(); + return 1; + } + rcu_read_unlock(); + + return -1; +} + +/* This function assumes it is being called from dev_queue_xmit() + * and that skb is filled properly by that function. + */ + +static netdev_tx_t vti_tunnel_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct ip_tunnel *tunnel = netdev_priv(dev); + struct pcpu_tstats *tstats; + struct iphdr *tiph = &tunnel->parms.iph; + u8 tos; + struct rtable *rt; /* Route to the other host */ + struct net_device *tdev; /* Device to other host */ + struct iphdr *old_iph = ip_hdr(skb); + __be32 dst = tiph->daddr; + struct flowi4 fl4; + + if (skb->protocol != htons(ETH_P_IP)) + goto tx_error; + + tos = old_iph->tos; + + memset(&fl4, 0, sizeof(fl4)); + flowi4_init_output(&fl4, tunnel->parms.link, + htonl(tunnel->parms.i_key), RT_TOS(tos), + RT_SCOPE_UNIVERSE, + IPPROTO_IPIP, 0, + dst, tiph->saddr, 0, 0); + rt = ip_route_output_key(dev_net(dev), &fl4); + if (IS_ERR(rt)) { + dev->stats.tx_carrier_errors++; + goto tx_error_icmp; + } + /* if there is no transform then this tunnel is not functional. + * Or if the xfrm is not mode tunnel. 
+ */ + if (!rt->dst.xfrm || + rt->dst.xfrm->props.mode != XFRM_MODE_TUNNEL) { + dev->stats.tx_carrier_errors++; + goto tx_error_icmp; + } + tdev = rt->dst.dev; + + if (tdev == dev) { + ip_rt_put(rt); + dev->stats.collisions++; + goto tx_error; + } + + if (tunnel->err_count > 0) { + if (time_before(jiffies, + tunnel->err_time + IPTUNNEL_ERR_TIMEO)) { + tunnel->err_count--; + dst_link_failure(skb); + } else + tunnel->err_count = 0; + } + + IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | + IPSKB_REROUTED); + skb_dst_drop(skb); + skb_dst_set(skb, &rt->dst); + nf_reset(skb); + skb->dev = skb_dst(skb)->dev; + + tstats = this_cpu_ptr(dev->tstats); + VTI_XMIT(tstats, &dev->stats); + return NETDEV_TX_OK; + +tx_error_icmp: + dst_link_failure(skb); +tx_error: + dev->stats.tx_errors++; + dev_kfree_skb(skb); + return NETDEV_TX_OK; +} + +static int vti_tunnel_bind_dev(struct net_device *dev) +{ + struct net_device *tdev = NULL; + struct ip_tunnel *tunnel; + struct iphdr *iph; + + tunnel = netdev_priv(dev); + iph = &tunnel->parms.iph; + + if (iph->daddr) { + struct rtable *rt; + struct flowi4 fl4; + memset(&fl4, 0, sizeof(fl4)); + flowi4_init_output(&fl4, tunnel->parms.link, + htonl(tunnel->parms.i_key), + RT_TOS(iph->tos), RT_SCOPE_UNIVERSE, + IPPROTO_IPIP, 0, + iph->daddr, iph->saddr, 0, 0); + rt = ip_route_output_key(dev_net(dev), &fl4); + if (!IS_ERR(rt)) { + tdev = rt->dst.dev; + ip_rt_put(rt); + } + dev->flags |= IFF_POINTOPOINT; + } + + if (!tdev && tunnel->parms.link) + tdev = __dev_get_by_index(dev_net(dev), tunnel->parms.link); + + if (tdev) { + dev->hard_header_len = tdev->hard_header_len + + sizeof(struct iphdr); + dev->mtu = tdev->mtu; + } + dev->iflink = tunnel->parms.link; + return dev->mtu; +} + +static int +vti_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + int err = 0; + struct ip_tunnel_parm p; + struct ip_tunnel *t; + struct net *net = dev_net(dev); + struct vti_net *ipn = net_generic(net, vti_net_id); + + switch (cmd) { + case SIOCGETTUNNEL: + t = NULL; + if (dev == ipn->fb_tunnel_dev) { + if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, + sizeof(p))) { + err = -EFAULT; + break; + } + t = vti_tunnel_locate(net, &p, 0); + } + if (t == NULL) + t = netdev_priv(dev); + memcpy(&p, &t->parms, sizeof(p)); + p.i_flags |= GRE_KEY | VTI_ISVTI; + p.o_flags |= GRE_KEY; + if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p))) + err = -EFAULT; + break; + + case SIOCADDTUNNEL: + case SIOCCHGTUNNEL: + err = -EPERM; + if (!capable(CAP_NET_ADMIN)) + goto done; + + err = -EFAULT; + if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p))) + goto done; + + err = -EINVAL; + if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP || + p.iph.ihl != 5) + goto done; + + t = vti_tunnel_locate(net, &p, cmd == SIOCADDTUNNEL); + + if (dev != ipn->fb_tunnel_dev && cmd == SIOCCHGTUNNEL) { + if (t != NULL) { + if (t->dev != dev) { + err = -EEXIST; + break; + } + } else { + if (((dev->flags&IFF_POINTOPOINT) && + !p.iph.daddr) || + (!(dev->flags&IFF_POINTOPOINT) && + p.iph.daddr)) { + err = -EINVAL; + break; + } + t = netdev_priv(dev); + vti_tunnel_unlink(ipn, t); + synchronize_net(); + t->parms.iph.saddr = p.iph.saddr; + t->parms.iph.daddr = p.iph.daddr; + t->parms.i_key = p.i_key; + t->parms.o_key = p.o_key; + t->parms.iph.protocol = IPPROTO_IPIP; + memcpy(dev->dev_addr, &p.iph.saddr, 4); + memcpy(dev->broadcast, &p.iph.daddr, 4); + vti_tunnel_link(ipn, t); + netdev_state_change(dev); + } + } + + if (t) { + err = 0; + if (cmd == SIOCCHGTUNNEL) { + t->parms.i_key = 
p.i_key; + t->parms.o_key = p.o_key; + if (t->parms.link != p.link) { + t->parms.link = p.link; + vti_tunnel_bind_dev(dev); + netdev_state_change(dev); + } + } + p.i_flags |= GRE_KEY | VTI_ISVTI; + p.o_flags |= GRE_KEY; + if (copy_to_user(ifr->ifr_ifru.ifru_data, &t->parms, + sizeof(p))) + err = -EFAULT; + } else + err = (cmd == SIOCADDTUNNEL ? -ENOBUFS : -ENOENT); + break; + + case SIOCDELTUNNEL: + err = -EPERM; + if (!capable(CAP_NET_ADMIN)) + goto done; + + if (dev == ipn->fb_tunnel_dev) { + err = -EFAULT; + if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, + sizeof(p))) + goto done; + err = -ENOENT; + + t = vti_tunnel_locate(net, &p, 0); + if (t == NULL) + goto done; + err = -EPERM; + if (t->dev == ipn->fb_tunnel_dev) + goto done; + dev = t->dev; + } + unregister_netdevice(dev); + err = 0; + break; + + default: + err = -EINVAL; + } + +done: + return err; +} + +static int vti_tunnel_change_mtu(struct net_device *dev, int new_mtu) +{ + if (new_mtu < 68 || new_mtu > 0xFFF8) + return -EINVAL; + dev->mtu = new_mtu; + return 0; +} + +static const struct net_device_ops vti_netdev_ops = { + .ndo_init = vti_tunnel_init, + .ndo_uninit = vti_tunnel_uninit, + .ndo_start_xmit = vti_tunnel_xmit, + .ndo_do_ioctl = vti_tunnel_ioctl, + .ndo_change_mtu = vti_tunnel_change_mtu, + .ndo_get_stats64 = vti_get_stats64, +}; + +static void vti_dev_free(struct net_device *dev) +{ + free_percpu(dev->tstats); + free_netdev(dev); +} + +static void vti_tunnel_setup(struct net_device *dev) +{ + dev->netdev_ops = &vti_netdev_ops; + dev->destructor = vti_dev_free; + + dev->type = ARPHRD_TUNNEL; + dev->hard_header_len = LL_MAX_HEADER + sizeof(struct iphdr); + dev->mtu = ETH_DATA_LEN; + dev->flags = IFF_NOARP; + dev->iflink = 0; + dev->addr_len = 4; + dev->features |= NETIF_F_NETNS_LOCAL; + dev->features |= NETIF_F_LLTX; + dev->priv_flags &= ~IFF_XMIT_DST_RELEASE; +} + +static int vti_tunnel_init(struct net_device *dev) +{ + struct ip_tunnel *tunnel = netdev_priv(dev); + + tunnel->dev = dev; + strcpy(tunnel->parms.name, dev->name); + + memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4); + memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4); + + dev->tstats = alloc_percpu(struct pcpu_tstats); + if (!dev->tstats) + return -ENOMEM; + + return 0; +} + +static int __net_init vti_fb_tunnel_init(struct net_device *dev) +{ + struct ip_tunnel *tunnel = netdev_priv(dev); + struct iphdr *iph = &tunnel->parms.iph; + struct vti_net *ipn = net_generic(dev_net(dev), vti_net_id); + + tunnel->dev = dev; + strcpy(tunnel->parms.name, dev->name); + + iph->version = 4; + iph->protocol = IPPROTO_IPIP; + iph->ihl = 5; + + dev->tstats = alloc_percpu(struct pcpu_tstats); + if (!dev->tstats) + return -ENOMEM; + + dev_hold(dev); + rcu_assign_pointer(ipn->tunnels_wc[0], tunnel); + return 0; +} + +static struct xfrm_tunnel vti_handler __read_mostly = { + .handler = vti_rcv, + .err_handler = vti_err, + .priority = 1, +}; + +static void vti_destroy_tunnels(struct vti_net *ipn, struct list_head *head) +{ + int prio; + + for (prio = 1; prio < 4; prio++) { + int h; + for (h = 0; h < HASH_SIZE; h++) { + struct ip_tunnel *t; + + t = rtnl_dereference(ipn->tunnels[prio][h]); + while (t != NULL) { + unregister_netdevice_queue(t->dev, head); + t = rtnl_dereference(t->next); + } + } + } +} + +static int __net_init vti_init_net(struct net *net) +{ + int err; + struct vti_net *ipn = net_generic(net, vti_net_id); + + ipn->tunnels[0] = ipn->tunnels_wc; + ipn->tunnels[1] = ipn->tunnels_l; + ipn->tunnels[2] = ipn->tunnels_r; + ipn->tunnels[3] = ipn->tunnels_r_l; + + 
ipn->fb_tunnel_dev = alloc_netdev(sizeof(struct ip_tunnel), + "ip_vti0", + vti_tunnel_setup); + if (!ipn->fb_tunnel_dev) { + err = -ENOMEM; + goto err_alloc_dev; + } + dev_net_set(ipn->fb_tunnel_dev, net); + + err = vti_fb_tunnel_init(ipn->fb_tunnel_dev); + if (err) + goto err_reg_dev; + ipn->fb_tunnel_dev->rtnl_link_ops = &vti_link_ops; + + err = register_netdev(ipn->fb_tunnel_dev); + if (err) + goto err_reg_dev; + return 0; + +err_reg_dev: + vti_dev_free(ipn->fb_tunnel_dev); +err_alloc_dev: + /* nothing */ + return err; +} + +static void __net_exit vti_exit_net(struct net *net) +{ + struct vti_net *ipn = net_generic(net, vti_net_id); + LIST_HEAD(list); + + rtnl_lock(); + vti_destroy_tunnels(ipn, &list); + unregister_netdevice_many(&list); + rtnl_unlock(); +} + +static struct pernet_operations vti_net_ops = { + .init = vti_init_net, + .exit = vti_exit_net, + .id = &vti_net_id, + .size = sizeof(struct vti_net), +}; + +static int vti_tunnel_validate(struct nlattr *tb[], struct nlattr *data[]) +{ + return 0; +} + +static void vti_netlink_parms(struct nlattr *data[], + struct ip_tunnel_parm *parms) +{ + memset(parms, 0, sizeof(*parms)); + + parms->iph.protocol = IPPROTO_IPIP; + + if (!data) + return; + + if (data[IFLA_VTI_LINK]) + parms->link = nla_get_u32(data[IFLA_VTI_LINK]); + + if (data[IFLA_VTI_IKEY]) + parms->i_key = nla_get_be32(data[IFLA_VTI_IKEY]); + + if (data[IFLA_VTI_OKEY]) + parms->o_key = nla_get_be32(data[IFLA_VTI_OKEY]); + + if (data[IFLA_VTI_LOCAL]) + parms->iph.saddr = nla_get_be32(data[IFLA_VTI_LOCAL]); + + if (data[IFLA_VTI_REMOTE]) + parms->iph.daddr = nla_get_be32(data[IFLA_VTI_REMOTE]); + +} + +static int vti_newlink(struct net *src_net, struct net_device *dev, + struct nlattr *tb[], struct nlattr *data[]) +{ + struct ip_tunnel *nt; + struct net *net = dev_net(dev); + struct vti_net *ipn = net_generic(net, vti_net_id); + int mtu; + int err; + + nt = netdev_priv(dev); + vti_netlink_parms(data, &nt->parms); + + if (vti_tunnel_locate(net, &nt->parms, 0)) + return -EEXIST; + + mtu = vti_tunnel_bind_dev(dev); + if (!tb[IFLA_MTU]) + dev->mtu = mtu; + + err = register_netdevice(dev); + if (err) + goto out; + + dev_hold(dev); + vti_tunnel_link(ipn, nt); + +out: + return err; +} + +static int vti_changelink(struct net_device *dev, struct nlattr *tb[], + struct nlattr *data[]) +{ + struct ip_tunnel *t, *nt; + struct net *net = dev_net(dev); + struct vti_net *ipn = net_generic(net, vti_net_id); + struct ip_tunnel_parm p; + int mtu; + + if (dev == ipn->fb_tunnel_dev) + return -EINVAL; + + nt = netdev_priv(dev); + vti_netlink_parms(data, &p); + + t = vti_tunnel_locate(net, &p, 0); + + if (t) { + if (t->dev != dev) + return -EEXIST; + } else { + t = nt; + + vti_tunnel_unlink(ipn, t); + t->parms.iph.saddr = p.iph.saddr; + t->parms.iph.daddr = p.iph.daddr; + t->parms.i_key = p.i_key; + t->parms.o_key = p.o_key; + if (dev->type != ARPHRD_ETHER) { + memcpy(dev->dev_addr, &p.iph.saddr, 4); + memcpy(dev->broadcast, &p.iph.daddr, 4); + } + vti_tunnel_link(ipn, t); + netdev_state_change(dev); + } + + if (t->parms.link != p.link) { + t->parms.link = p.link; + mtu = vti_tunnel_bind_dev(dev); + if (!tb[IFLA_MTU]) + dev->mtu = mtu; + netdev_state_change(dev); + } + + return 0; +} + +static size_t vti_get_size(const struct net_device *dev) +{ + return + /* IFLA_VTI_LINK */ + nla_total_size(4) + + /* IFLA_VTI_IKEY */ + nla_total_size(4) + + /* IFLA_VTI_OKEY */ + nla_total_size(4) + + /* IFLA_VTI_LOCAL */ + nla_total_size(4) + + /* IFLA_VTI_REMOTE */ + nla_total_size(4) + + 0; +} + +static int 
vti_fill_info(struct sk_buff *skb, const struct net_device *dev) +{ + struct ip_tunnel *t = netdev_priv(dev); + struct ip_tunnel_parm *p = &t->parms; + + nla_put_u32(skb, IFLA_VTI_LINK, p->link); + nla_put_be32(skb, IFLA_VTI_IKEY, p->i_key); + nla_put_be32(skb, IFLA_VTI_OKEY, p->o_key); + nla_put_be32(skb, IFLA_VTI_LOCAL, p->iph.saddr); + nla_put_be32(skb, IFLA_VTI_REMOTE, p->iph.daddr); + + return 0; +} + +static const struct nla_policy vti_policy[IFLA_VTI_MAX + 1] = { + [IFLA_VTI_LINK] = { .type = NLA_U32 }, + [IFLA_VTI_IKEY] = { .type = NLA_U32 }, + [IFLA_VTI_OKEY] = { .type = NLA_U32 }, + [IFLA_VTI_LOCAL] = { .len = FIELD_SIZEOF(struct iphdr, saddr) }, + [IFLA_VTI_REMOTE] = { .len = FIELD_SIZEOF(struct iphdr, daddr) }, +}; + +static struct rtnl_link_ops vti_link_ops __read_mostly = { + .kind = "vti", + .maxtype = IFLA_VTI_MAX, + .policy = vti_policy, + .priv_size = sizeof(struct ip_tunnel), + .setup = vti_tunnel_setup, + .validate = vti_tunnel_validate, + .newlink = vti_newlink, + .changelink = vti_changelink, + .get_size = vti_get_size, + .fill_info = vti_fill_info, +}; + +static int __init vti_init(void) +{ + int err; + + pr_info("IPv4 over IPSec tunneling driver\n"); + + err = register_pernet_device(&vti_net_ops); + if (err < 0) + return err; + err = xfrm4_mode_tunnel_input_register(&vti_handler); + if (err < 0) { + unregister_pernet_device(&vti_net_ops); + pr_info(KERN_INFO "vti init: can't register tunnel\n"); + } + + err = rtnl_link_register(&vti_link_ops); + if (err < 0) + goto rtnl_link_failed; + + return err; + +rtnl_link_failed: + xfrm4_mode_tunnel_input_deregister(&vti_handler); + unregister_pernet_device(&vti_net_ops); + return err; +} + +static void __exit vti_fini(void) +{ + rtnl_link_unregister(&vti_link_ops); + if (xfrm4_mode_tunnel_input_deregister(&vti_handler)) + pr_info("vti close: can't deregister tunnel\n"); + + unregister_pernet_device(&vti_net_ops); +} + +module_init(vti_init); +module_exit(vti_fini); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_RTNL_LINK("vti"); +MODULE_ALIAS_NETDEV("ip_vti0"); diff --git a/net/ipv4/ipcomp.c b/net/ipv4/ipcomp.c index 63b64c45a826..d3ab47e19a89 100644 --- a/net/ipv4/ipcomp.c +++ b/net/ipv4/ipcomp.c @@ -31,17 +31,26 @@ static void ipcomp4_err(struct sk_buff *skb, u32 info) struct ip_comp_hdr *ipch = (struct ip_comp_hdr *)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || - icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + switch (icmp_hdr(skb)->type) { + case ICMP_DEST_UNREACH: + if (icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) + return; + case ICMP_REDIRECT: + break; + default: return; + } spi = htonl(ntohs(ipch->cpi)); x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr, spi, IPPROTO_COMP, AF_INET); if (!x) return; - NETDEBUG(KERN_DEBUG "pmtu discovery on SA IPCOMP/%08x/%pI4\n", - spi, &iph->daddr); + + if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) + ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_COMP, 0); + else + ipv4_redirect(skb, net, 0, 0, IPPROTO_COMP, 0); xfrm_state_put(x); } diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c index 2d0f99bf61b3..99af1f0cc658 100644 --- a/net/ipv4/ipip.c +++ b/net/ipv4/ipip.c @@ -348,9 +348,6 @@ static int ipip_err(struct sk_buff *skb, u32 info) case ICMP_PORT_UNREACH: /* Impossible event. */ return 0; - case ICMP_FRAG_NEEDED: - /* Soft state for pmtu is maintained by IP core. */ - return 0; default: /* All others are translated to HOST_UNREACH. 
rfc2003 contains "deep thoughts" about NET_UNREACH, @@ -363,13 +360,32 @@ static int ipip_err(struct sk_buff *skb, u32 info) if (code != ICMP_EXC_TTL) return 0; break; + case ICMP_REDIRECT: + break; } err = -ENOENT; rcu_read_lock(); t = ipip_tunnel_lookup(dev_net(skb->dev), iph->daddr, iph->saddr); - if (t == NULL || t->parms.iph.daddr == 0) + if (t == NULL) + goto out; + + if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { + ipv4_update_pmtu(skb, dev_net(skb->dev), info, + t->dev->ifindex, 0, IPPROTO_IPIP, 0); + err = 0; + goto out; + } + + if (type == ICMP_REDIRECT) { + ipv4_redirect(skb, dev_net(skb->dev), t->dev->ifindex, 0, + IPPROTO_IPIP, 0); + err = 0; + goto out; + } + + if (t->parms.iph.daddr == 0) goto out; err = 0; @@ -471,7 +487,7 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev) dev->stats.tx_fifo_errors++; goto tx_error; } - dst = rt->rt_gateway; + dst = rt_nexthop(rt, old_iph->daddr); } rt = ip_route_output_ports(dev_net(dev), &fl4, NULL, @@ -503,7 +519,7 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev) } if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); if ((old_iph->frag_off & htons(IP_DF)) && mtu < ntohs(old_iph->tot_len)) { diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c index c94bbc6f2ba3..8eec8f4a0536 100644 --- a/net/ipv4/ipmr.c +++ b/net/ipv4/ipmr.c @@ -524,8 +524,8 @@ failure: } #endif -/* - * Delete a VIF entry +/** + * vif_delete - Delete a VIF entry * @notify: Set to 1, if the caller is a notifier_call */ @@ -1795,9 +1795,12 @@ static struct mr_table *ipmr_rt_fib_lookup(struct net *net, struct sk_buff *skb) .daddr = iph->daddr, .saddr = iph->saddr, .flowi4_tos = RT_TOS(iph->tos), - .flowi4_oif = rt->rt_oif, - .flowi4_iif = rt->rt_iif, - .flowi4_mark = rt->rt_mark, + .flowi4_oif = (rt_is_output_route(rt) ? + skb->dev->ifindex : 0), + .flowi4_iif = (rt_is_output_route(rt) ? 
+ net->loopback_dev->ifindex : + skb->dev->ifindex), + .flowi4_mark = skb->mark, }; struct mr_table *mrt; int err; @@ -2006,37 +2009,37 @@ static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb, { int ct; struct rtnexthop *nhp; - u8 *b = skb_tail_pointer(skb); - struct rtattr *mp_head; + struct nlattr *mp_attr; /* If cache is unresolved, don't try to parse IIF and OIF */ if (c->mfc_parent >= MAXVIFS) return -ENOENT; - if (VIF_EXISTS(mrt, c->mfc_parent)) - RTA_PUT(skb, RTA_IIF, 4, &mrt->vif_table[c->mfc_parent].dev->ifindex); + if (VIF_EXISTS(mrt, c->mfc_parent) && + nla_put_u32(skb, RTA_IIF, mrt->vif_table[c->mfc_parent].dev->ifindex) < 0) + return -EMSGSIZE; - mp_head = (struct rtattr *)skb_put(skb, RTA_LENGTH(0)); + if (!(mp_attr = nla_nest_start(skb, RTA_MULTIPATH))) + return -EMSGSIZE; for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) { if (VIF_EXISTS(mrt, ct) && c->mfc_un.res.ttls[ct] < 255) { - if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4)) - goto rtattr_failure; - nhp = (struct rtnexthop *)skb_put(skb, RTA_ALIGN(sizeof(*nhp))); + if (!(nhp = nla_reserve_nohdr(skb, sizeof(*nhp)))) { + nla_nest_cancel(skb, mp_attr); + return -EMSGSIZE; + } + nhp->rtnh_flags = 0; nhp->rtnh_hops = c->mfc_un.res.ttls[ct]; nhp->rtnh_ifindex = mrt->vif_table[ct].dev->ifindex; nhp->rtnh_len = sizeof(*nhp); } } - mp_head->rta_type = RTA_MULTIPATH; - mp_head->rta_len = skb_tail_pointer(skb) - (u8 *)mp_head; + + nla_nest_end(skb, mp_attr); + rtm->rtm_type = RTN_MULTICAST; return 1; - -rtattr_failure: - nlmsg_trim(skb, b); - return -EMSGSIZE; } int ipmr_get_route(struct net *net, struct sk_buff *skb, diff --git a/net/ipv4/netfilter/ipt_MASQUERADE.c b/net/ipv4/netfilter/ipt_MASQUERADE.c index 2f210c79dc87..cbb6a1a6f6f7 100644 --- a/net/ipv4/netfilter/ipt_MASQUERADE.c +++ b/net/ipv4/netfilter/ipt_MASQUERADE.c @@ -52,7 +52,7 @@ masquerade_tg(struct sk_buff *skb, const struct xt_action_param *par) struct nf_nat_ipv4_range newrange; const struct nf_nat_ipv4_multi_range_compat *mr; const struct rtable *rt; - __be32 newsrc; + __be32 newsrc, nh; NF_CT_ASSERT(par->hooknum == NF_INET_POST_ROUTING); @@ -70,7 +70,8 @@ masquerade_tg(struct sk_buff *skb, const struct xt_action_param *par) mr = par->targinfo; rt = skb_rtable(skb); - newsrc = inet_select_addr(par->out, rt->rt_gateway, RT_SCOPE_UNIVERSE); + nh = rt_nexthop(rt, ip_hdr(skb)->daddr); + newsrc = inet_select_addr(par->out, nh, RT_SCOPE_UNIVERSE); if (!newsrc) { pr_info("%s ate my IP address\n", par->out->name); return NF_DROP; diff --git a/net/ipv4/netfilter/ipt_ULOG.c b/net/ipv4/netfilter/ipt_ULOG.c index ba5756d20165..1109f7f6c254 100644 --- a/net/ipv4/netfilter/ipt_ULOG.c +++ b/net/ipv4/netfilter/ipt_ULOG.c @@ -196,12 +196,15 @@ static void ipt_ulog_packet(unsigned int hooknum, pr_debug("qlen %d, qthreshold %Zu\n", ub->qlen, loginfo->qthreshold); - /* NLMSG_PUT contains a hidden goto nlmsg_failure !!! 
*/ - nlh = NLMSG_PUT(ub->skb, 0, ub->qlen, ULOG_NL_EVENT, - sizeof(*pm)+copy_len); + nlh = nlmsg_put(ub->skb, 0, ub->qlen, ULOG_NL_EVENT, + sizeof(*pm)+copy_len, 0); + if (!nlh) { + pr_debug("error during nlmsg_put\n"); + goto out_unlock; + } ub->qlen++; - pm = NLMSG_DATA(nlh); + pm = nlmsg_data(nlh); /* We might not have a timestamp, get one */ if (skb->tstamp.tv64 == 0) @@ -261,13 +264,11 @@ static void ipt_ulog_packet(unsigned int hooknum, nlh->nlmsg_type = NLMSG_DONE; ulog_send(groupnum); } - +out_unlock: spin_unlock_bh(&ulog_lock); return; -nlmsg_failure: - pr_debug("error during NLMSG_PUT\n"); alloc_failure: pr_debug("Error building netlink message\n"); spin_unlock_bh(&ulog_lock); @@ -380,6 +381,9 @@ static struct nf_logger ipt_ulog_logger __read_mostly = { static int __init ulog_tg_init(void) { int ret, i; + struct netlink_kernel_cfg cfg = { + .groups = ULOG_MAXNLGROUPS, + }; pr_debug("init module\n"); @@ -392,9 +396,8 @@ static int __init ulog_tg_init(void) for (i = 0; i < ULOG_MAXNLGROUPS; i++) setup_timer(&ulog_buffers[i].timer, ulog_timer, i); - nflognl = netlink_kernel_create(&init_net, - NETLINK_NFLOG, ULOG_MAXNLGROUPS, NULL, - NULL, THIS_MODULE); + nflognl = netlink_kernel_create(&init_net, NETLINK_NFLOG, + THIS_MODULE, &cfg); if (!nflognl) return -ENOMEM; diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c index 91747d4ebc26..e7ff2dcab6ce 100644 --- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c +++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c @@ -95,11 +95,11 @@ static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff, return NF_ACCEPT; } -static unsigned int ipv4_confirm(unsigned int hooknum, - struct sk_buff *skb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) +static unsigned int ipv4_helper(unsigned int hooknum, + struct sk_buff *skb, + const struct net_device *in, + const struct net_device *out, + int (*okfn)(struct sk_buff *)) { struct nf_conn *ct; enum ip_conntrack_info ctinfo; @@ -110,24 +110,38 @@ static unsigned int ipv4_confirm(unsigned int hooknum, /* This is where we call the helper: as the packet goes out. 
*/ ct = nf_ct_get(skb, &ctinfo); if (!ct || ctinfo == IP_CT_RELATED_REPLY) - goto out; + return NF_ACCEPT; help = nfct_help(ct); if (!help) - goto out; + return NF_ACCEPT; /* rcu_read_lock()ed by nf_hook_slow */ helper = rcu_dereference(help->helper); if (!helper) - goto out; + return NF_ACCEPT; ret = helper->help(skb, skb_network_offset(skb) + ip_hdrlen(skb), ct, ctinfo); - if (ret != NF_ACCEPT) { + if (ret != NF_ACCEPT && (ret & NF_VERDICT_MASK) != NF_QUEUE) { nf_log_packet(NFPROTO_IPV4, hooknum, skb, in, out, NULL, "nf_ct_%s: dropping packet", helper->name); - return ret; } + return ret; +} + +static unsigned int ipv4_confirm(unsigned int hooknum, + struct sk_buff *skb, + const struct net_device *in, + const struct net_device *out, + int (*okfn)(struct sk_buff *)) +{ + struct nf_conn *ct; + enum ip_conntrack_info ctinfo; + + ct = nf_ct_get(skb, &ctinfo); + if (!ct || ctinfo == IP_CT_RELATED_REPLY) + goto out; /* adjust seqs for loopback traffic only in outgoing direction */ if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) && @@ -185,6 +199,13 @@ static struct nf_hook_ops ipv4_conntrack_ops[] __read_mostly = { .priority = NF_IP_PRI_CONNTRACK, }, { + .hook = ipv4_helper, + .owner = THIS_MODULE, + .pf = NFPROTO_IPV4, + .hooknum = NF_INET_POST_ROUTING, + .priority = NF_IP_PRI_CONNTRACK_HELPER, + }, + { .hook = ipv4_confirm, .owner = THIS_MODULE, .pf = NFPROTO_IPV4, @@ -192,6 +213,13 @@ static struct nf_hook_ops ipv4_conntrack_ops[] __read_mostly = { .priority = NF_IP_PRI_CONNTRACK_CONFIRM, }, { + .hook = ipv4_helper, + .owner = THIS_MODULE, + .pf = NFPROTO_IPV4, + .hooknum = NF_INET_LOCAL_IN, + .priority = NF_IP_PRI_CONNTRACK_HELPER, + }, + { .hook = ipv4_confirm, .owner = THIS_MODULE, .pf = NFPROTO_IPV4, @@ -207,35 +235,30 @@ static int log_invalid_proto_max = 255; static ctl_table ip_ct_sysctl_table[] = { { .procname = "ip_conntrack_max", - .data = &nf_conntrack_max, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_count", - .data = &init_net.ct.count, .maxlen = sizeof(int), .mode = 0444, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_buckets", - .data = &init_net.ct.htable_size, .maxlen = sizeof(unsigned int), .mode = 0444, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_checksum", - .data = &init_net.ct.sysctl_checksum, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_log_invalid", - .data = &init_net.ct.sysctl_log_invalid, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, @@ -351,6 +374,25 @@ static struct nf_sockopt_ops so_getorigdst = { .owner = THIS_MODULE, }; +static int ipv4_init_net(struct net *net) +{ +#if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) + struct nf_ip_net *in = &net->ct.nf_ct_proto; + in->ctl_table = kmemdup(ip_ct_sysctl_table, + sizeof(ip_ct_sysctl_table), + GFP_KERNEL); + if (!in->ctl_table) + return -ENOMEM; + + in->ctl_table[0].data = &nf_conntrack_max; + in->ctl_table[1].data = &net->ct.count; + in->ctl_table[2].data = &net->ct.htable_size; + in->ctl_table[3].data = &net->ct.sysctl_checksum; + in->ctl_table[4].data = &net->ct.sysctl_log_invalid; +#endif + return 0; +} + struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = { .l3proto = PF_INET, .name = "ipv4", @@ -366,8 +408,8 @@ struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = { #endif #if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) .ctl_table_path = "net/ipv4/netfilter", 
- .ctl_table = ip_ct_sysctl_table, #endif + .init_net = ipv4_init_net, .me = THIS_MODULE, }; @@ -378,6 +420,65 @@ MODULE_ALIAS("nf_conntrack-" __stringify(AF_INET)); MODULE_ALIAS("ip_conntrack"); MODULE_LICENSE("GPL"); +static int ipv4_net_init(struct net *net) +{ + int ret = 0; + + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_tcp4); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_tcp4 :protocol register failed\n"); + goto out_tcp; + } + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_udp4); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_udp4 :protocol register failed\n"); + goto out_udp; + } + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_icmp); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_icmp4 :protocol register failed\n"); + goto out_icmp; + } + ret = nf_conntrack_l3proto_register(net, + &nf_conntrack_l3proto_ipv4); + if (ret < 0) { + pr_err("nf_conntrack_l3proto_ipv4 :protocol register failed\n"); + goto out_ipv4; + } + return 0; +out_ipv4: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_icmp); +out_icmp: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_udp4); +out_udp: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_tcp4); +out_tcp: + return ret; +} + +static void ipv4_net_exit(struct net *net) +{ + nf_conntrack_l3proto_unregister(net, + &nf_conntrack_l3proto_ipv4); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_icmp); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_udp4); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_tcp4); +} + +static struct pernet_operations ipv4_net_ops = { + .init = ipv4_net_init, + .exit = ipv4_net_exit, +}; + static int __init nf_conntrack_l3proto_ipv4_init(void) { int ret = 0; @@ -391,35 +492,17 @@ static int __init nf_conntrack_l3proto_ipv4_init(void) return ret; } - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_tcp4); + ret = register_pernet_subsys(&ipv4_net_ops); if (ret < 0) { - pr_err("nf_conntrack_ipv4: can't register tcp.\n"); + pr_err("nf_conntrack_ipv4: can't register pernet ops\n"); goto cleanup_sockopt; } - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udp4); - if (ret < 0) { - pr_err("nf_conntrack_ipv4: can't register udp.\n"); - goto cleanup_tcp; - } - - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_icmp); - if (ret < 0) { - pr_err("nf_conntrack_ipv4: can't register icmp.\n"); - goto cleanup_udp; - } - - ret = nf_conntrack_l3proto_register(&nf_conntrack_l3proto_ipv4); - if (ret < 0) { - pr_err("nf_conntrack_ipv4: can't register ipv4\n"); - goto cleanup_icmp; - } - ret = nf_register_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops)); if (ret < 0) { pr_err("nf_conntrack_ipv4: can't register hooks.\n"); - goto cleanup_ipv4; + goto cleanup_pernet; } #if defined(CONFIG_PROC_FS) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) ret = nf_conntrack_ipv4_compat_init(); @@ -431,14 +514,8 @@ static int __init nf_conntrack_l3proto_ipv4_init(void) cleanup_hooks: nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops)); #endif - cleanup_ipv4: - nf_conntrack_l3proto_unregister(&nf_conntrack_l3proto_ipv4); - cleanup_icmp: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_icmp); - cleanup_udp: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udp4); - cleanup_tcp: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_tcp4); + cleanup_pernet: + unregister_pernet_subsys(&ipv4_net_ops); cleanup_sockopt: nf_unregister_sockopt(&so_getorigdst); return 
ret; @@ -451,10 +528,7 @@ static void __exit nf_conntrack_l3proto_ipv4_fini(void) nf_conntrack_ipv4_compat_fini(); #endif nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops)); - nf_conntrack_l3proto_unregister(&nf_conntrack_l3proto_ipv4); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_icmp); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udp4); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_tcp4); + unregister_pernet_subsys(&ipv4_net_ops); nf_unregister_sockopt(&so_getorigdst); } diff --git a/net/ipv4/netfilter/nf_conntrack_proto_icmp.c b/net/ipv4/netfilter/nf_conntrack_proto_icmp.c index 0847e373d33c..5241d997ab75 100644 --- a/net/ipv4/netfilter/nf_conntrack_proto_icmp.c +++ b/net/ipv4/netfilter/nf_conntrack_proto_icmp.c @@ -23,6 +23,11 @@ static unsigned int nf_ct_icmp_timeout __read_mostly = 30*HZ; +static inline struct nf_icmp_net *icmp_pernet(struct net *net) +{ + return &net->ct.nf_ct_proto.icmp; +} + static bool icmp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) { @@ -77,7 +82,7 @@ static int icmp_print_tuple(struct seq_file *s, static unsigned int *icmp_get_timeouts(struct net *net) { - return &nf_ct_icmp_timeout; + return &icmp_pernet(net)->timeout; } /* Returns verdict for packet, or -1 for invalid. */ @@ -274,16 +279,18 @@ static int icmp_nlattr_tuple_size(void) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int icmp_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int icmp_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeout = data; + struct nf_icmp_net *in = icmp_pernet(net); if (tb[CTA_TIMEOUT_ICMP_TIMEOUT]) { *timeout = ntohl(nla_get_be32(tb[CTA_TIMEOUT_ICMP_TIMEOUT])) * HZ; } else { /* Set default ICMP timeout. 
*/ - *timeout = nf_ct_icmp_timeout; + *timeout = in->timeout; } return 0; } @@ -308,11 +315,9 @@ icmp_timeout_nla_policy[CTA_TIMEOUT_ICMP_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static struct ctl_table_header *icmp_sysctl_header; static struct ctl_table icmp_sysctl_table[] = { { .procname = "nf_conntrack_icmp_timeout", - .data = &nf_ct_icmp_timeout, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -323,7 +328,6 @@ static struct ctl_table icmp_sysctl_table[] = { static struct ctl_table icmp_compat_sysctl_table[] = { { .procname = "ip_conntrack_icmp_timeout", - .data = &nf_ct_icmp_timeout, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -333,6 +337,62 @@ static struct ctl_table icmp_compat_sysctl_table[] = { #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ +static int icmp_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct nf_icmp_net *in) +{ +#ifdef CONFIG_SYSCTL + pn->ctl_table = kmemdup(icmp_sysctl_table, + sizeof(icmp_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &in->timeout; +#endif + return 0; +} + +static int icmp_kmemdup_compat_sysctl_table(struct nf_proto_net *pn, + struct nf_icmp_net *in) +{ +#ifdef CONFIG_SYSCTL +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + pn->ctl_compat_table = kmemdup(icmp_compat_sysctl_table, + sizeof(icmp_compat_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_compat_table) + return -ENOMEM; + + pn->ctl_compat_table[0].data = &in->timeout; +#endif +#endif + return 0; +} + +static int icmp_init_net(struct net *net, u_int16_t proto) +{ + int ret; + struct nf_icmp_net *in = icmp_pernet(net); + struct nf_proto_net *pn = &in->pn; + + in->timeout = nf_ct_icmp_timeout; + + ret = icmp_kmemdup_compat_sysctl_table(pn, in); + if (ret < 0) + return ret; + + ret = icmp_kmemdup_sysctl_table(pn, in); + if (ret < 0) + nf_ct_kfree_compat_sysctl_table(pn); + + return ret; +} + +static struct nf_proto_net *icmp_get_net_proto(struct net *net) +{ + return &net->ct.nf_ct_proto.icmp.pn; +} + struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp __read_mostly = { .l3proto = PF_INET, @@ -362,11 +422,6 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp __read_mostly = .nla_policy = icmp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_header = &icmp_sysctl_header, - .ctl_table = icmp_sysctl_table, -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - .ctl_compat_table = icmp_compat_sysctl_table, -#endif -#endif + .init_net = icmp_init_net, + .get_net_proto = icmp_get_net_proto, }; diff --git a/net/ipv4/netfilter/nf_defrag_ipv4.c b/net/ipv4/netfilter/nf_defrag_ipv4.c index 9bb1b8a37a22..742815518b0f 100644 --- a/net/ipv4/netfilter/nf_defrag_ipv4.c +++ b/net/ipv4/netfilter/nf_defrag_ipv4.c @@ -94,14 +94,14 @@ static struct nf_hook_ops ipv4_defrag_ops[] = { { .hook = ipv4_conntrack_defrag, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_PRE_ROUTING, .priority = NF_IP_PRI_CONNTRACK_DEFRAG, }, { .hook = ipv4_conntrack_defrag, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP_PRI_CONNTRACK_DEFRAG, }, diff --git a/net/ipv4/netfilter/nf_nat_amanda.c b/net/ipv4/netfilter/nf_nat_amanda.c index 7b22382ff0e9..3c04d24e2976 100644 --- a/net/ipv4/netfilter/nf_nat_amanda.c +++ b/net/ipv4/netfilter/nf_nat_amanda.c @@ -13,10 +13,10 @@ #include <linux/skbuff.h> #include <linux/udp.h> 
-#include <net/netfilter/nf_nat_helper.h> -#include <net/netfilter/nf_nat_rule.h> #include <net/netfilter/nf_conntrack_helper.h> #include <net/netfilter/nf_conntrack_expect.h> +#include <net/netfilter/nf_nat_helper.h> +#include <net/netfilter/nf_nat_rule.h> #include <linux/netfilter/nf_conntrack_amanda.h> MODULE_AUTHOR("Brian J. Murrell <netfilter@interlinx.bc.ca>"); diff --git a/net/ipv4/netfilter/nf_nat_core.c b/net/ipv4/netfilter/nf_nat_core.c index abb52adf5acd..44b082fd48ab 100644 --- a/net/ipv4/netfilter/nf_nat_core.c +++ b/net/ipv4/netfilter/nf_nat_core.c @@ -691,6 +691,10 @@ static struct nf_ct_helper_expectfn follow_master_nat = { .expectfn = nf_nat_follow_master, }; +static struct nfq_ct_nat_hook nfq_ct_nat = { + .seq_adjust = nf_nat_tcp_seq_adjust, +}; + static int __init nf_nat_init(void) { size_t i; @@ -731,6 +735,7 @@ static int __init nf_nat_init(void) nfnetlink_parse_nat_setup); BUG_ON(nf_ct_nat_offset != NULL); RCU_INIT_POINTER(nf_ct_nat_offset, nf_nat_get_offset); + RCU_INIT_POINTER(nfq_ct_nat_hook, &nfq_ct_nat); return 0; cleanup_extend: @@ -747,6 +752,7 @@ static void __exit nf_nat_cleanup(void) RCU_INIT_POINTER(nf_nat_seq_adjust_hook, NULL); RCU_INIT_POINTER(nfnetlink_parse_nat_setup_hook, NULL); RCU_INIT_POINTER(nf_ct_nat_offset, NULL); + RCU_INIT_POINTER(nfq_ct_nat_hook, NULL); synchronize_net(); } diff --git a/net/ipv4/netfilter/nf_nat_h323.c b/net/ipv4/netfilter/nf_nat_h323.c index cad29c121318..c6784a18c1c4 100644 --- a/net/ipv4/netfilter/nf_nat_h323.c +++ b/net/ipv4/netfilter/nf_nat_h323.c @@ -95,7 +95,7 @@ static int set_sig_addr(struct sk_buff *skb, struct nf_conn *ct, unsigned char **data, TransportAddress *taddr, int count) { - const struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + const struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); int i; __be16 port; @@ -178,7 +178,7 @@ static int nat_rtp_rtcp(struct sk_buff *skb, struct nf_conn *ct, struct nf_conntrack_expect *rtp_exp, struct nf_conntrack_expect *rtcp_exp) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); int i; u_int16_t nated_port; @@ -330,7 +330,7 @@ static int nat_h245(struct sk_buff *skb, struct nf_conn *ct, TransportAddress *taddr, __be16 port, struct nf_conntrack_expect *exp) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); u_int16_t nated_port = ntohs(port); @@ -419,7 +419,7 @@ static int nat_q931(struct sk_buff *skb, struct nf_conn *ct, unsigned char **data, TransportAddress *taddr, int idx, __be16 port, struct nf_conntrack_expect *exp) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); u_int16_t nated_port = ntohs(port); union nf_inet_addr addr; diff --git a/net/ipv4/netfilter/nf_nat_helper.c b/net/ipv4/netfilter/nf_nat_helper.c index af65958f6308..2e59ad0b90ca 100644 --- a/net/ipv4/netfilter/nf_nat_helper.c +++ b/net/ipv4/netfilter/nf_nat_helper.c @@ -153,6 +153,19 @@ void nf_nat_set_seq_adjust(struct nf_conn *ct, enum ip_conntrack_info ctinfo, } EXPORT_SYMBOL_GPL(nf_nat_set_seq_adjust); +void nf_nat_tcp_seq_adjust(struct sk_buff *skb, struct nf_conn *ct, + u32 ctinfo, int off) +{ + const struct tcphdr *th; + + if (nf_ct_protonum(ct) != IPPROTO_TCP) + return; + + th = (struct tcphdr *)(skb_network_header(skb)+ 
ip_hdrlen(skb)); + nf_nat_set_seq_adjust(ct, ctinfo, th->seq, off); +} +EXPORT_SYMBOL_GPL(nf_nat_tcp_seq_adjust); + static void nf_nat_csum(struct sk_buff *skb, const struct iphdr *iph, void *data, int datalen, __sum16 *check, int oldlen) { diff --git a/net/ipv4/netfilter/nf_nat_pptp.c b/net/ipv4/netfilter/nf_nat_pptp.c index c273d58980ae..388140881ebe 100644 --- a/net/ipv4/netfilter/nf_nat_pptp.c +++ b/net/ipv4/netfilter/nf_nat_pptp.c @@ -49,7 +49,7 @@ static void pptp_nat_expected(struct nf_conn *ct, const struct nf_nat_pptp *nat_pptp_info; struct nf_nat_ipv4_range range; - ct_pptp_info = &nfct_help(master)->help.ct_pptp_info; + ct_pptp_info = nfct_help_data(master); nat_pptp_info = &nfct_nat(master)->help.nat_pptp_info; /* And here goes the grand finale of corrosion... */ @@ -123,7 +123,7 @@ pptp_outbound_pkt(struct sk_buff *skb, __be16 new_callid; unsigned int cid_off; - ct_pptp_info = &nfct_help(ct)->help.ct_pptp_info; + ct_pptp_info = nfct_help_data(ct); nat_pptp_info = &nfct_nat(ct)->help.nat_pptp_info; new_callid = ct_pptp_info->pns_call_id; @@ -192,7 +192,7 @@ pptp_exp_gre(struct nf_conntrack_expect *expect_orig, struct nf_ct_pptp_master *ct_pptp_info; struct nf_nat_pptp *nat_pptp_info; - ct_pptp_info = &nfct_help(ct)->help.ct_pptp_info; + ct_pptp_info = nfct_help_data(ct); nat_pptp_info = &nfct_nat(ct)->help.nat_pptp_info; /* save original PAC call ID in nat_info */ diff --git a/net/ipv4/netfilter/nf_nat_snmp_basic.c b/net/ipv4/netfilter/nf_nat_snmp_basic.c index 746edec8b86e..bac712293fd6 100644 --- a/net/ipv4/netfilter/nf_nat_snmp_basic.c +++ b/net/ipv4/netfilter/nf_nat_snmp_basic.c @@ -405,7 +405,7 @@ static unsigned char asn1_octets_decode(struct asn1_ctx *ctx, ptr = *octets; while (ctx->pointer < eoc) { - if (!asn1_octet_decode(ctx, (unsigned char *)ptr++)) { + if (!asn1_octet_decode(ctx, ptr++)) { kfree(*octets); *octets = NULL; return 0; @@ -759,7 +759,7 @@ static unsigned char snmp_object_decode(struct asn1_ctx *ctx, } break; case SNMP_OBJECTID: - if (!asn1_oid_decode(ctx, end, (unsigned long **)&lp, &len)) { + if (!asn1_oid_decode(ctx, end, &lp, &len)) { kfree(id); return 0; } diff --git a/net/ipv4/netfilter/nf_nat_tftp.c b/net/ipv4/netfilter/nf_nat_tftp.c index a2901bf829c0..9dbb8d284f99 100644 --- a/net/ipv4/netfilter/nf_nat_tftp.c +++ b/net/ipv4/netfilter/nf_nat_tftp.c @@ -8,10 +8,10 @@ #include <linux/module.h> #include <linux/udp.h> -#include <net/netfilter/nf_nat_helper.h> -#include <net/netfilter/nf_nat_rule.h> #include <net/netfilter/nf_conntrack_helper.h> #include <net/netfilter/nf_conntrack_expect.h> +#include <net/netfilter/nf_nat_helper.h> +#include <net/netfilter/nf_nat_rule.h> #include <linux/netfilter/nf_conntrack_tftp.h> MODULE_AUTHOR("Magnus Boden <mb@ozaba.mine.nu>"); diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c index 2c00e8bf684d..6232d476f37e 100644 --- a/net/ipv4/ping.c +++ b/net/ipv4/ping.c @@ -371,6 +371,7 @@ void ping_err(struct sk_buff *skb, u32 info) break; case ICMP_DEST_UNREACH: if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */ + ipv4_sk_update_pmtu(skb, sk, info); if (inet_sock->pmtudisc != IP_PMTUDISC_DONT) { err = EMSGSIZE; harderr = 1; @@ -386,6 +387,7 @@ void ping_err(struct sk_buff *skb, u32 info) break; case ICMP_REDIRECT: /* See ICMP_SOURCE_QUENCH */ + ipv4_sk_redirect(skb, sk); err = EREMOTEIO; break; } diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c index 8af0d44e4e22..957acd12250b 100644 --- a/net/ipv4/proc.c +++ b/net/ipv4/proc.c @@ -232,7 +232,6 @@ static const struct snmp_mib snmp4_net_list[] = { 
SNMP_MIB_ITEM("TCPDSACKOfoSent", LINUX_MIB_TCPDSACKOFOSENT), SNMP_MIB_ITEM("TCPDSACKRecv", LINUX_MIB_TCPDSACKRECV), SNMP_MIB_ITEM("TCPDSACKOfoRecv", LINUX_MIB_TCPDSACKOFORECV), - SNMP_MIB_ITEM("TCPAbortOnSyn", LINUX_MIB_TCPABORTONSYN), SNMP_MIB_ITEM("TCPAbortOnData", LINUX_MIB_TCPABORTONDATA), SNMP_MIB_ITEM("TCPAbortOnClose", LINUX_MIB_TCPABORTONCLOSE), SNMP_MIB_ITEM("TCPAbortOnMemory", LINUX_MIB_TCPABORTONMEMORY), @@ -258,6 +257,12 @@ static const struct snmp_mib snmp4_net_list[] = { SNMP_MIB_ITEM("TCPReqQFullDrop", LINUX_MIB_TCPREQQFULLDROP), SNMP_MIB_ITEM("TCPRetransFail", LINUX_MIB_TCPRETRANSFAIL), SNMP_MIB_ITEM("TCPRcvCoalesce", LINUX_MIB_TCPRCVCOALESCE), + SNMP_MIB_ITEM("TCPOFOQueue", LINUX_MIB_TCPOFOQUEUE), + SNMP_MIB_ITEM("TCPOFODrop", LINUX_MIB_TCPOFODROP), + SNMP_MIB_ITEM("TCPOFOMerge", LINUX_MIB_TCPOFOMERGE), + SNMP_MIB_ITEM("TCPChallengeACK", LINUX_MIB_TCPCHALLENGEACK), + SNMP_MIB_ITEM("TCPSYNChallenge", LINUX_MIB_TCPSYNCHALLENGE), + SNMP_MIB_ITEM("TCPFastOpenActive", LINUX_MIB_TCPFASTOPENACTIVE), SNMP_MIB_SENTINEL }; diff --git a/net/ipv4/protocol.c b/net/ipv4/protocol.c index 9ae5c01cd0b2..8918eff1426d 100644 --- a/net/ipv4/protocol.c +++ b/net/ipv4/protocol.c @@ -36,9 +36,7 @@ const struct net_protocol __rcu *inet_protos[MAX_INET_PROTOS] __read_mostly; int inet_add_protocol(const struct net_protocol *prot, unsigned char protocol) { - int hash = protocol & (MAX_INET_PROTOS - 1); - - return !cmpxchg((const struct net_protocol **)&inet_protos[hash], + return !cmpxchg((const struct net_protocol **)&inet_protos[protocol], NULL, prot) ? 0 : -1; } EXPORT_SYMBOL(inet_add_protocol); @@ -49,9 +47,9 @@ EXPORT_SYMBOL(inet_add_protocol); int inet_del_protocol(const struct net_protocol *prot, unsigned char protocol) { - int ret, hash = protocol & (MAX_INET_PROTOS - 1); + int ret; - ret = (cmpxchg((const struct net_protocol **)&inet_protos[hash], + ret = (cmpxchg((const struct net_protocol **)&inet_protos[protocol], prot, NULL) == prot) ? 0 : -1; synchronize_net(); diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index 4032b818f3e4..ff0f071969ea 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -216,6 +216,11 @@ static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info) int err = 0; int harderr = 0; + if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) + ipv4_sk_update_pmtu(skb, sk, info); + else if (type == ICMP_REDIRECT) + ipv4_sk_redirect(skb, sk); + /* Report error on raw socket, if: 1. User requested ip_recverr. 2. Socket is connected (otherwise the error indication diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 98b30d08efe9..6bcb8fc71cbc 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -133,10 +133,6 @@ static int ip_rt_gc_elasticity __read_mostly = 8; static int ip_rt_mtu_expires __read_mostly = 10 * 60 * HZ; static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20; static int ip_rt_min_advmss __read_mostly = 256; -static int rt_chain_length_max __read_mostly = 20; - -static struct delayed_work expires_work; -static unsigned long expires_ljiffies; /* * Interface to generic destination cache. 
@@ -145,11 +141,12 @@ static unsigned long expires_ljiffies; static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie); static unsigned int ipv4_default_advmss(const struct dst_entry *dst); static unsigned int ipv4_mtu(const struct dst_entry *dst); -static void ipv4_dst_destroy(struct dst_entry *dst); static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst); static void ipv4_link_failure(struct sk_buff *skb); -static void ip_rt_update_pmtu(struct dst_entry *dst, u32 mtu); -static int rt_garbage_collect(struct dst_ops *ops); +static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu); +static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb); static void ipv4_dst_ifdown(struct dst_entry *dst, struct net_device *dev, int how) @@ -158,54 +155,26 @@ static void ipv4_dst_ifdown(struct dst_entry *dst, struct net_device *dev, static u32 *ipv4_cow_metrics(struct dst_entry *dst, unsigned long old) { - struct rtable *rt = (struct rtable *) dst; - struct inet_peer *peer; - u32 *p = NULL; - - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 1); - - peer = rt->peer; - if (peer) { - u32 *old_p = __DST_METRICS_PTR(old); - unsigned long prev, new; - - p = peer->metrics; - if (inet_metrics_new(peer)) - memcpy(p, old_p, sizeof(u32) * RTAX_MAX); - - new = (unsigned long) p; - prev = cmpxchg(&dst->_metrics, old, new); - - if (prev != old) { - p = __DST_METRICS_PTR(prev); - if (prev & DST_METRICS_READ_ONLY) - p = NULL; - } else { - if (rt->fi) { - fib_info_put(rt->fi); - rt->fi = NULL; - } - } - } - return p; + WARN_ON(1); + return NULL; } -static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, const void *daddr); +static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr); static struct dst_ops ipv4_dst_ops = { .family = AF_INET, .protocol = cpu_to_be16(ETH_P_IP), - .gc = rt_garbage_collect, .check = ipv4_dst_check, .default_advmss = ipv4_default_advmss, .mtu = ipv4_mtu, .cow_metrics = ipv4_cow_metrics, - .destroy = ipv4_dst_destroy, .ifdown = ipv4_dst_ifdown, .negative_advice = ipv4_negative_advice, .link_failure = ipv4_link_failure, .update_pmtu = ip_rt_update_pmtu, + .redirect = ip_do_redirect, .local_out = __ip_local_out, .neigh_lookup = ipv4_neigh_lookup, }; @@ -232,184 +201,30 @@ const __u8 ip_tos2prio[16] = { }; EXPORT_SYMBOL(ip_tos2prio); -/* - * Route cache. - */ - -/* The locking scheme is rather straight forward: - * - * 1) Read-Copy Update protects the buckets of the central route hash. - * 2) Only writers remove entries, and they hold the lock - * as they look at rtable reference counts. - * 3) Only readers acquire references to rtable entries, - * they do so with atomic increments and with the - * lock held. - */ - -struct rt_hash_bucket { - struct rtable __rcu *chain; -}; - -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) || \ - defined(CONFIG_PROVE_LOCKING) -/* - * Instead of using one spinlock for each rt_hash_bucket, we use a table of spinlocks - * The size of this table is a power of two and depends on the number of CPUS. 
- * (on lockdep we have a quite big spinlock_t, so keep the size down there) - */ -#ifdef CONFIG_LOCKDEP -# define RT_HASH_LOCK_SZ 256 -#else -# if NR_CPUS >= 32 -# define RT_HASH_LOCK_SZ 4096 -# elif NR_CPUS >= 16 -# define RT_HASH_LOCK_SZ 2048 -# elif NR_CPUS >= 8 -# define RT_HASH_LOCK_SZ 1024 -# elif NR_CPUS >= 4 -# define RT_HASH_LOCK_SZ 512 -# else -# define RT_HASH_LOCK_SZ 256 -# endif -#endif - -static spinlock_t *rt_hash_locks; -# define rt_hash_lock_addr(slot) &rt_hash_locks[(slot) & (RT_HASH_LOCK_SZ - 1)] - -static __init void rt_hash_lock_init(void) -{ - int i; - - rt_hash_locks = kmalloc(sizeof(spinlock_t) * RT_HASH_LOCK_SZ, - GFP_KERNEL); - if (!rt_hash_locks) - panic("IP: failed to allocate rt_hash_locks\n"); - - for (i = 0; i < RT_HASH_LOCK_SZ; i++) - spin_lock_init(&rt_hash_locks[i]); -} -#else -# define rt_hash_lock_addr(slot) NULL - -static inline void rt_hash_lock_init(void) -{ -} -#endif - -static struct rt_hash_bucket *rt_hash_table __read_mostly; -static unsigned int rt_hash_mask __read_mostly; -static unsigned int rt_hash_log __read_mostly; - static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat); #define RT_CACHE_STAT_INC(field) __this_cpu_inc(rt_cache_stat.field) -static inline unsigned int rt_hash(__be32 daddr, __be32 saddr, int idx, - int genid) -{ - return jhash_3words((__force u32)daddr, (__force u32)saddr, - idx, genid) - & rt_hash_mask; -} - static inline int rt_genid(struct net *net) { return atomic_read(&net->ipv4.rt_genid); } #ifdef CONFIG_PROC_FS -struct rt_cache_iter_state { - struct seq_net_private p; - int bucket; - int genid; -}; - -static struct rtable *rt_cache_get_first(struct seq_file *seq) -{ - struct rt_cache_iter_state *st = seq->private; - struct rtable *r = NULL; - - for (st->bucket = rt_hash_mask; st->bucket >= 0; --st->bucket) { - if (!rcu_access_pointer(rt_hash_table[st->bucket].chain)) - continue; - rcu_read_lock_bh(); - r = rcu_dereference_bh(rt_hash_table[st->bucket].chain); - while (r) { - if (dev_net(r->dst.dev) == seq_file_net(seq) && - r->rt_genid == st->genid) - return r; - r = rcu_dereference_bh(r->dst.rt_next); - } - rcu_read_unlock_bh(); - } - return r; -} - -static struct rtable *__rt_cache_get_next(struct seq_file *seq, - struct rtable *r) -{ - struct rt_cache_iter_state *st = seq->private; - - r = rcu_dereference_bh(r->dst.rt_next); - while (!r) { - rcu_read_unlock_bh(); - do { - if (--st->bucket < 0) - return NULL; - } while (!rcu_access_pointer(rt_hash_table[st->bucket].chain)); - rcu_read_lock_bh(); - r = rcu_dereference_bh(rt_hash_table[st->bucket].chain); - } - return r; -} - -static struct rtable *rt_cache_get_next(struct seq_file *seq, - struct rtable *r) -{ - struct rt_cache_iter_state *st = seq->private; - while ((r = __rt_cache_get_next(seq, r)) != NULL) { - if (dev_net(r->dst.dev) != seq_file_net(seq)) - continue; - if (r->rt_genid == st->genid) - break; - } - return r; -} - -static struct rtable *rt_cache_get_idx(struct seq_file *seq, loff_t pos) -{ - struct rtable *r = rt_cache_get_first(seq); - - if (r) - while (pos && (r = rt_cache_get_next(seq, r))) - --pos; - return pos ? 
NULL : r; -} - static void *rt_cache_seq_start(struct seq_file *seq, loff_t *pos) { - struct rt_cache_iter_state *st = seq->private; if (*pos) - return rt_cache_get_idx(seq, *pos - 1); - st->genid = rt_genid(seq_file_net(seq)); + return NULL; return SEQ_START_TOKEN; } static void *rt_cache_seq_next(struct seq_file *seq, void *v, loff_t *pos) { - struct rtable *r; - - if (v == SEQ_START_TOKEN) - r = rt_cache_get_first(seq); - else - r = rt_cache_get_next(seq, v); ++*pos; - return r; + return NULL; } static void rt_cache_seq_stop(struct seq_file *seq, void *v) { - if (v && v != SEQ_START_TOKEN) - rcu_read_unlock_bh(); } static int rt_cache_seq_show(struct seq_file *seq, void *v) @@ -419,34 +234,6 @@ static int rt_cache_seq_show(struct seq_file *seq, void *v) "Iface\tDestination\tGateway \tFlags\t\tRefCnt\tUse\t" "Metric\tSource\t\tMTU\tWindow\tIRTT\tTOS\tHHRef\t" "HHUptod\tSpecDst"); - else { - struct rtable *r = v; - struct neighbour *n; - int len, HHUptod; - - rcu_read_lock(); - n = dst_get_neighbour_noref(&r->dst); - HHUptod = (n && (n->nud_state & NUD_CONNECTED)) ? 1 : 0; - rcu_read_unlock(); - - seq_printf(seq, "%s\t%08X\t%08X\t%8X\t%d\t%u\t%d\t" - "%08X\t%d\t%u\t%u\t%02X\t%d\t%1d\t%08X%n", - r->dst.dev ? r->dst.dev->name : "*", - (__force u32)r->rt_dst, - (__force u32)r->rt_gateway, - r->rt_flags, atomic_read(&r->dst.__refcnt), - r->dst.__use, 0, (__force u32)r->rt_src, - dst_metric_advmss(&r->dst) + 40, - dst_metric(&r->dst, RTAX_WINDOW), - (int)((dst_metric(&r->dst, RTAX_RTT) >> 3) + - dst_metric(&r->dst, RTAX_RTTVAR)), - r->rt_key_tos, - -1, - HHUptod, - r->rt_spec_dst, &len); - - seq_printf(seq, "%*s\n", 127 - len, ""); - } return 0; } @@ -459,8 +246,7 @@ static const struct seq_operations rt_cache_seq_ops = { static int rt_cache_seq_open(struct inode *inode, struct file *file) { - return seq_open_net(inode, file, &rt_cache_seq_ops, - sizeof(struct rt_cache_iter_state)); + return seq_open(file, &rt_cache_seq_ops); } static const struct file_operations rt_cache_seq_fops = { @@ -468,7 +254,7 @@ static const struct file_operations rt_cache_seq_fops = { .open = rt_cache_seq_open, .read = seq_read, .llseek = seq_lseek, - .release = seq_release_net, + .release = seq_release, }; @@ -658,275 +444,12 @@ static inline int ip_rt_proc_init(void) } #endif /* CONFIG_PROC_FS */ -static inline void rt_free(struct rtable *rt) -{ - call_rcu_bh(&rt->dst.rcu_head, dst_rcu_free); -} - -static inline void rt_drop(struct rtable *rt) -{ - ip_rt_put(rt); - call_rcu_bh(&rt->dst.rcu_head, dst_rcu_free); -} - -static inline int rt_fast_clean(struct rtable *rth) -{ - /* Kill broadcast/multicast entries very aggresively, if they - collide in hash table with more useful entries */ - return (rth->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) && - rt_is_input_route(rth) && rth->dst.rt_next; -} - -static inline int rt_valuable(struct rtable *rth) -{ - return (rth->rt_flags & (RTCF_REDIRECTED | RTCF_NOTIFY)) || - (rth->peer && rth->peer->pmtu_expires); -} - -static int rt_may_expire(struct rtable *rth, unsigned long tmo1, unsigned long tmo2) -{ - unsigned long age; - int ret = 0; - - if (atomic_read(&rth->dst.__refcnt)) - goto out; - - age = jiffies - rth->dst.lastuse; - if ((age <= tmo1 && !rt_fast_clean(rth)) || - (age <= tmo2 && rt_valuable(rth))) - goto out; - ret = 1; -out: return ret; -} - -/* Bits of score are: - * 31: very valuable - * 30: not quite useless - * 29..0: usage counter - */ -static inline u32 rt_score(struct rtable *rt) -{ - u32 score = jiffies - rt->dst.lastuse; - - score = ~score & ~(3<<30); - 
- if (rt_valuable(rt)) - score |= (1<<31); - - if (rt_is_output_route(rt) || - !(rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST|RTCF_LOCAL))) - score |= (1<<30); - - return score; -} - -static inline bool rt_caching(const struct net *net) -{ - return net->ipv4.current_rt_cache_rebuild_count <= - net->ipv4.sysctl_rt_cache_rebuild_count; -} - -static inline bool compare_hash_inputs(const struct rtable *rt1, - const struct rtable *rt2) -{ - return ((((__force u32)rt1->rt_key_dst ^ (__force u32)rt2->rt_key_dst) | - ((__force u32)rt1->rt_key_src ^ (__force u32)rt2->rt_key_src) | - (rt1->rt_route_iif ^ rt2->rt_route_iif)) == 0); -} - -static inline int compare_keys(struct rtable *rt1, struct rtable *rt2) -{ - return (((__force u32)rt1->rt_key_dst ^ (__force u32)rt2->rt_key_dst) | - ((__force u32)rt1->rt_key_src ^ (__force u32)rt2->rt_key_src) | - (rt1->rt_mark ^ rt2->rt_mark) | - (rt1->rt_key_tos ^ rt2->rt_key_tos) | - (rt1->rt_route_iif ^ rt2->rt_route_iif) | - (rt1->rt_oif ^ rt2->rt_oif)) == 0; -} - -static inline int compare_netns(struct rtable *rt1, struct rtable *rt2) -{ - return net_eq(dev_net(rt1->dst.dev), dev_net(rt2->dst.dev)); -} - static inline int rt_is_expired(struct rtable *rth) { return rth->rt_genid != rt_genid(dev_net(rth->dst.dev)); } /* - * Perform a full scan of hash table and free all entries. - * Can be called by a softirq or a process. - * In the later case, we want to be reschedule if necessary - */ -static void rt_do_flush(struct net *net, int process_context) -{ - unsigned int i; - struct rtable *rth, *next; - - for (i = 0; i <= rt_hash_mask; i++) { - struct rtable __rcu **pprev; - struct rtable *list; - - if (process_context && need_resched()) - cond_resched(); - rth = rcu_access_pointer(rt_hash_table[i].chain); - if (!rth) - continue; - - spin_lock_bh(rt_hash_lock_addr(i)); - - list = NULL; - pprev = &rt_hash_table[i].chain; - rth = rcu_dereference_protected(*pprev, - lockdep_is_held(rt_hash_lock_addr(i))); - - while (rth) { - next = rcu_dereference_protected(rth->dst.rt_next, - lockdep_is_held(rt_hash_lock_addr(i))); - - if (!net || - net_eq(dev_net(rth->dst.dev), net)) { - rcu_assign_pointer(*pprev, next); - rcu_assign_pointer(rth->dst.rt_next, list); - list = rth; - } else { - pprev = &rth->dst.rt_next; - } - rth = next; - } - - spin_unlock_bh(rt_hash_lock_addr(i)); - - for (; list; list = next) { - next = rcu_dereference_protected(list->dst.rt_next, 1); - rt_free(list); - } - } -} - -/* - * While freeing expired entries, we compute average chain length - * and standard deviation, using fixed-point arithmetic. - * This to have an estimation of rt_chain_length_max - * rt_chain_length_max = max(elasticity, AVG + 4*SD) - * We use 3 bits for frational part, and 29 (or 61) for magnitude. - */ - -#define FRACT_BITS 3 -#define ONE (1UL << FRACT_BITS) - -/* - * Given a hash chain and an item in this hash chain, - * find if a previous entry has the same hash_inputs - * (but differs on tos, mark or oif) - * Returns 0 if an alias is found. - * Returns ONE if rth has no alias before itself. 
- */ -static int has_noalias(const struct rtable *head, const struct rtable *rth) -{ - const struct rtable *aux = head; - - while (aux != rth) { - if (compare_hash_inputs(aux, rth)) - return 0; - aux = rcu_dereference_protected(aux->dst.rt_next, 1); - } - return ONE; -} - -static void rt_check_expire(void) -{ - static unsigned int rover; - unsigned int i = rover, goal; - struct rtable *rth; - struct rtable __rcu **rthp; - unsigned long samples = 0; - unsigned long sum = 0, sum2 = 0; - unsigned long delta; - u64 mult; - - delta = jiffies - expires_ljiffies; - expires_ljiffies = jiffies; - mult = ((u64)delta) << rt_hash_log; - if (ip_rt_gc_timeout > 1) - do_div(mult, ip_rt_gc_timeout); - goal = (unsigned int)mult; - if (goal > rt_hash_mask) - goal = rt_hash_mask + 1; - for (; goal > 0; goal--) { - unsigned long tmo = ip_rt_gc_timeout; - unsigned long length; - - i = (i + 1) & rt_hash_mask; - rthp = &rt_hash_table[i].chain; - - if (need_resched()) - cond_resched(); - - samples++; - - if (rcu_dereference_raw(*rthp) == NULL) - continue; - length = 0; - spin_lock_bh(rt_hash_lock_addr(i)); - while ((rth = rcu_dereference_protected(*rthp, - lockdep_is_held(rt_hash_lock_addr(i)))) != NULL) { - prefetch(rth->dst.rt_next); - if (rt_is_expired(rth)) { - *rthp = rth->dst.rt_next; - rt_free(rth); - continue; - } - if (rth->dst.expires) { - /* Entry is expired even if it is in use */ - if (time_before_eq(jiffies, rth->dst.expires)) { -nofree: - tmo >>= 1; - rthp = &rth->dst.rt_next; - /* - * We only count entries on - * a chain with equal hash inputs once - * so that entries for different QOS - * levels, and other non-hash input - * attributes don't unfairly skew - * the length computation - */ - length += has_noalias(rt_hash_table[i].chain, rth); - continue; - } - } else if (!rt_may_expire(rth, tmo, ip_rt_gc_timeout)) - goto nofree; - - /* Cleanup aged off entries. */ - *rthp = rth->dst.rt_next; - rt_free(rth); - } - spin_unlock_bh(rt_hash_lock_addr(i)); - sum += length; - sum2 += length*length; - } - if (samples) { - unsigned long avg = sum / samples; - unsigned long sd = int_sqrt(sum2 / samples - avg*avg); - rt_chain_length_max = max_t(unsigned long, - ip_rt_gc_elasticity, - (avg + 4*sd) >> FRACT_BITS); - } - rover = i; -} - -/* - * rt_worker_func() is run in process context. - * we call rt_check_expire() to scan part of the hash table - */ -static void rt_worker_func(struct work_struct *work) -{ - rt_check_expire(); - schedule_delayed_work(&expires_work, ip_rt_gc_interval); -} - -/* * Perturbation of rt_genid by a small quantity [1..256] * Using 8 bits of shuffling ensure we can call rt_cache_invalidate() * many times (2^24) without giving recent rt_genid. @@ -938,7 +461,6 @@ static void rt_cache_invalidate(struct net *net) get_random_bytes(&shuffle, sizeof(shuffle)); atomic_add(shuffle + 1U, &net->ipv4.rt_genid); - inetpeer_invalidate_tree(AF_INET); } /* @@ -948,183 +470,22 @@ static void rt_cache_invalidate(struct net *net) void rt_cache_flush(struct net *net, int delay) { rt_cache_invalidate(net); - if (delay >= 0) - rt_do_flush(net, !in_softirq()); -} - -/* Flush previous cache invalidated entries from the cache */ -void rt_cache_flush_batch(struct net *net) -{ - rt_do_flush(net, !in_softirq()); -} - -static void rt_emergency_hash_rebuild(struct net *net) -{ - net_warn_ratelimited("Route hash chain too long!\n"); - rt_cache_invalidate(net); -} - -/* - Short description of GC goals. 
- - We want to build algorithm, which will keep routing cache - at some equilibrium point, when number of aged off entries - is kept approximately equal to newly generated ones. - - Current expiration strength is variable "expire". - We try to adjust it dynamically, so that if networking - is idle expires is large enough to keep enough of warm entries, - and when load increases it reduces to limit cache size. - */ - -static int rt_garbage_collect(struct dst_ops *ops) -{ - static unsigned long expire = RT_GC_TIMEOUT; - static unsigned long last_gc; - static int rover; - static int equilibrium; - struct rtable *rth; - struct rtable __rcu **rthp; - unsigned long now = jiffies; - int goal; - int entries = dst_entries_get_fast(&ipv4_dst_ops); - - /* - * Garbage collection is pretty expensive, - * do not make it too frequently. - */ - - RT_CACHE_STAT_INC(gc_total); - - if (now - last_gc < ip_rt_gc_min_interval && - entries < ip_rt_max_size) { - RT_CACHE_STAT_INC(gc_ignored); - goto out; - } - - entries = dst_entries_get_slow(&ipv4_dst_ops); - /* Calculate number of entries, which we want to expire now. */ - goal = entries - (ip_rt_gc_elasticity << rt_hash_log); - if (goal <= 0) { - if (equilibrium < ipv4_dst_ops.gc_thresh) - equilibrium = ipv4_dst_ops.gc_thresh; - goal = entries - equilibrium; - if (goal > 0) { - equilibrium += min_t(unsigned int, goal >> 1, rt_hash_mask + 1); - goal = entries - equilibrium; - } - } else { - /* We are in dangerous area. Try to reduce cache really - * aggressively. - */ - goal = max_t(unsigned int, goal >> 1, rt_hash_mask + 1); - equilibrium = entries - goal; - } - - if (now - last_gc >= ip_rt_gc_min_interval) - last_gc = now; - - if (goal <= 0) { - equilibrium += goal; - goto work_done; - } - - do { - int i, k; - - for (i = rt_hash_mask, k = rover; i >= 0; i--) { - unsigned long tmo = expire; - - k = (k + 1) & rt_hash_mask; - rthp = &rt_hash_table[k].chain; - spin_lock_bh(rt_hash_lock_addr(k)); - while ((rth = rcu_dereference_protected(*rthp, - lockdep_is_held(rt_hash_lock_addr(k)))) != NULL) { - if (!rt_is_expired(rth) && - !rt_may_expire(rth, tmo, expire)) { - tmo >>= 1; - rthp = &rth->dst.rt_next; - continue; - } - *rthp = rth->dst.rt_next; - rt_free(rth); - goal--; - } - spin_unlock_bh(rt_hash_lock_addr(k)); - if (goal <= 0) - break; - } - rover = k; - - if (goal <= 0) - goto work_done; - - /* Goal is not achieved. We stop process if: - - - if expire reduced to zero. Otherwise, expire is halfed. - - if table is not full. - - if we are called from interrupt. - - jiffies check is just fallback/debug loop breaker. - We will not spin here for long time in any case. 
- */ - - RT_CACHE_STAT_INC(gc_goal_miss); - - if (expire == 0) - break; - - expire >>= 1; - - if (dst_entries_get_fast(&ipv4_dst_ops) < ip_rt_max_size) - goto out; - } while (!in_softirq() && time_before_eq(jiffies, now)); - - if (dst_entries_get_fast(&ipv4_dst_ops) < ip_rt_max_size) - goto out; - if (dst_entries_get_slow(&ipv4_dst_ops) < ip_rt_max_size) - goto out; - net_warn_ratelimited("dst cache overflow\n"); - RT_CACHE_STAT_INC(gc_dst_overflow); - return 1; - -work_done: - expire += ip_rt_gc_min_interval; - if (expire > ip_rt_gc_timeout || - dst_entries_get_fast(&ipv4_dst_ops) < ipv4_dst_ops.gc_thresh || - dst_entries_get_slow(&ipv4_dst_ops) < ipv4_dst_ops.gc_thresh) - expire = ip_rt_gc_timeout; -out: return 0; -} - -/* - * Returns number of entries in a hash chain that have different hash_inputs - */ -static int slow_chain_length(const struct rtable *head) -{ - int length = 0; - const struct rtable *rth = head; - - while (rth) { - length += has_noalias(head, rth); - rth = rcu_dereference_protected(rth->dst.rt_next, 1); - } - return length >> FRACT_BITS; } -static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, const void *daddr) +static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr) { - static const __be32 inaddr_any = 0; struct net_device *dev = dst->dev; const __be32 *pkey = daddr; const struct rtable *rt; struct neighbour *n; rt = (const struct rtable *) dst; - - if (dev->flags & (IFF_LOOPBACK | IFF_POINTOPOINT)) - pkey = &inaddr_any; - else if (rt->rt_gateway) + if (rt->rt_gateway) pkey = (const __be32 *) &rt->rt_gateway; + else if (skb) + pkey = &ip_hdr(skb)->daddr; n = __ipv4_neigh_lookup(dev, *(__force u32 *)pkey); if (n) @@ -1132,212 +493,6 @@ static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, const vo return neigh_create(&arp_tbl, pkey, dev); } -static int rt_bind_neighbour(struct rtable *rt) -{ - struct neighbour *n = ipv4_neigh_lookup(&rt->dst, &rt->rt_gateway); - if (IS_ERR(n)) - return PTR_ERR(n); - dst_set_neighbour(&rt->dst, n); - - return 0; -} - -static struct rtable *rt_intern_hash(unsigned int hash, struct rtable *rt, - struct sk_buff *skb, int ifindex) -{ - struct rtable *rth, *cand; - struct rtable __rcu **rthp, **candp; - unsigned long now; - u32 min_score; - int chain_length; - int attempts = !in_softirq(); - -restart: - chain_length = 0; - min_score = ~(u32)0; - cand = NULL; - candp = NULL; - now = jiffies; - - if (!rt_caching(dev_net(rt->dst.dev))) { - /* - * If we're not caching, just tell the caller we - * were successful and don't touch the route. The - * caller hold the sole reference to the cache entry, and - * it will be released when the caller is done with it. - * If we drop it here, the callers have no way to resolve routes - * when we're not caching. Instead, just point *rp at rt, so - * the caller gets a single use out of the route - * Note that we do rt_free on this new route entry, so that - * once its refcount hits zero, we are still able to reap it - * (Thanks Alexey) - * Note: To avoid expensive rcu stuff for this uncached dst, - * we set DST_NOCACHE so that dst_release() can free dst without - * waiting a grace period. 
- */ - - rt->dst.flags |= DST_NOCACHE; - if (rt->rt_type == RTN_UNICAST || rt_is_output_route(rt)) { - int err = rt_bind_neighbour(rt); - if (err) { - net_warn_ratelimited("Neighbour table failure & not caching routes\n"); - ip_rt_put(rt); - return ERR_PTR(err); - } - } - - goto skip_hashing; - } - - rthp = &rt_hash_table[hash].chain; - - spin_lock_bh(rt_hash_lock_addr(hash)); - while ((rth = rcu_dereference_protected(*rthp, - lockdep_is_held(rt_hash_lock_addr(hash)))) != NULL) { - if (rt_is_expired(rth)) { - *rthp = rth->dst.rt_next; - rt_free(rth); - continue; - } - if (compare_keys(rth, rt) && compare_netns(rth, rt)) { - /* Put it first */ - *rthp = rth->dst.rt_next; - /* - * Since lookup is lockfree, the deletion - * must be visible to another weakly ordered CPU before - * the insertion at the start of the hash chain. - */ - rcu_assign_pointer(rth->dst.rt_next, - rt_hash_table[hash].chain); - /* - * Since lookup is lockfree, the update writes - * must be ordered for consistency on SMP. - */ - rcu_assign_pointer(rt_hash_table[hash].chain, rth); - - dst_use(&rth->dst, now); - spin_unlock_bh(rt_hash_lock_addr(hash)); - - rt_drop(rt); - if (skb) - skb_dst_set(skb, &rth->dst); - return rth; - } - - if (!atomic_read(&rth->dst.__refcnt)) { - u32 score = rt_score(rth); - - if (score <= min_score) { - cand = rth; - candp = rthp; - min_score = score; - } - } - - chain_length++; - - rthp = &rth->dst.rt_next; - } - - if (cand) { - /* ip_rt_gc_elasticity used to be average length of chain - * length, when exceeded gc becomes really aggressive. - * - * The second limit is less certain. At the moment it allows - * only 2 entries per bucket. We will see. - */ - if (chain_length > ip_rt_gc_elasticity) { - *candp = cand->dst.rt_next; - rt_free(cand); - } - } else { - if (chain_length > rt_chain_length_max && - slow_chain_length(rt_hash_table[hash].chain) > rt_chain_length_max) { - struct net *net = dev_net(rt->dst.dev); - int num = ++net->ipv4.current_rt_cache_rebuild_count; - if (!rt_caching(net)) { - pr_warn("%s: %d rebuilds is over limit, route caching disabled\n", - rt->dst.dev->name, num); - } - rt_emergency_hash_rebuild(net); - spin_unlock_bh(rt_hash_lock_addr(hash)); - - hash = rt_hash(rt->rt_key_dst, rt->rt_key_src, - ifindex, rt_genid(net)); - goto restart; - } - } - - /* Try to bind route to arp only if it is output - route or unicast forwarding path. - */ - if (rt->rt_type == RTN_UNICAST || rt_is_output_route(rt)) { - int err = rt_bind_neighbour(rt); - if (err) { - spin_unlock_bh(rt_hash_lock_addr(hash)); - - if (err != -ENOBUFS) { - rt_drop(rt); - return ERR_PTR(err); - } - - /* Neighbour tables are full and nothing - can be released. Try to shrink route cache, - it is most likely it holds some neighbour records. - */ - if (attempts-- > 0) { - int saved_elasticity = ip_rt_gc_elasticity; - int saved_int = ip_rt_gc_min_interval; - ip_rt_gc_elasticity = 1; - ip_rt_gc_min_interval = 0; - rt_garbage_collect(&ipv4_dst_ops); - ip_rt_gc_min_interval = saved_int; - ip_rt_gc_elasticity = saved_elasticity; - goto restart; - } - - net_warn_ratelimited("Neighbour table overflow\n"); - rt_drop(rt); - return ERR_PTR(-ENOBUFS); - } - } - - rt->dst.rt_next = rt_hash_table[hash].chain; - - /* - * Since lookup is lockfree, we must make sure - * previous writes to rt are committed to memory - * before making rt visible to other CPUS. 
- */ - rcu_assign_pointer(rt_hash_table[hash].chain, rt); - - spin_unlock_bh(rt_hash_lock_addr(hash)); - -skip_hashing: - if (skb) - skb_dst_set(skb, &rt->dst); - return rt; -} - -static atomic_t __rt_peer_genid = ATOMIC_INIT(0); - -static u32 rt_peer_genid(void) -{ - return atomic_read(&__rt_peer_genid); -} - -void rt_bind_peer(struct rtable *rt, __be32 daddr, int create) -{ - struct inet_peer *peer; - - peer = inet_getpeer_v4(daddr, create); - - if (peer && cmpxchg(&rt->peer, NULL, peer) != NULL) - inet_putpeer(peer); - else - rt->rt_peer_genid = rt_peer_genid(); -} - /* * Peer allocation may fail only in serious out-of-memory conditions. However * we still can generate some output. @@ -1360,83 +515,188 @@ static void ip_select_fb_ident(struct iphdr *iph) void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst, int more) { - struct rtable *rt = (struct rtable *) dst; - - if (rt && !(rt->dst.flags & DST_NOPEER)) { - if (rt->peer == NULL) - rt_bind_peer(rt, rt->rt_dst, 1); + struct net *net = dev_net(dst->dev); + struct inet_peer *peer; - /* If peer is attached to destination, it is never detached, - so that we need not to grab a lock to dereference it. - */ - if (rt->peer) { - iph->id = htons(inet_getid(rt->peer, more)); - return; - } - } else if (!rt) - pr_debug("rt_bind_peer(0) @%p\n", __builtin_return_address(0)); + peer = inet_getpeer_v4(net->ipv4.peers, iph->daddr, 1); + if (peer) { + iph->id = htons(inet_getid(peer, more)); + inet_putpeer(peer); + return; + } ip_select_fb_ident(iph); } EXPORT_SYMBOL(__ip_select_ident); -static void rt_del(unsigned int hash, struct rtable *rt) +static void __build_flow_key(struct flowi4 *fl4, const struct sock *sk, + const struct iphdr *iph, + int oif, u8 tos, + u8 prot, u32 mark, int flow_flags) { - struct rtable __rcu **rthp; - struct rtable *aux; + if (sk) { + const struct inet_sock *inet = inet_sk(sk); - rthp = &rt_hash_table[hash].chain; - spin_lock_bh(rt_hash_lock_addr(hash)); - ip_rt_put(rt); - while ((aux = rcu_dereference_protected(*rthp, - lockdep_is_held(rt_hash_lock_addr(hash)))) != NULL) { - if (aux == rt || rt_is_expired(aux)) { - *rthp = aux->dst.rt_next; - rt_free(aux); - continue; - } - rthp = &aux->dst.rt_next; + oif = sk->sk_bound_dev_if; + mark = sk->sk_mark; + tos = RT_CONN_FLAGS(sk); + prot = inet->hdrincl ? 
IPPROTO_RAW : sk->sk_protocol; } - spin_unlock_bh(rt_hash_lock_addr(hash)); + flowi4_init_output(fl4, oif, mark, tos, + RT_SCOPE_UNIVERSE, prot, + flow_flags, + iph->daddr, iph->saddr, 0, 0); } -static void check_peer_redir(struct dst_entry *dst, struct inet_peer *peer) +static void build_skb_flow_key(struct flowi4 *fl4, const struct sk_buff *skb, + const struct sock *sk) { - struct rtable *rt = (struct rtable *) dst; - __be32 orig_gw = rt->rt_gateway; - struct neighbour *n, *old_n; + const struct iphdr *iph = ip_hdr(skb); + int oif = skb->dev->ifindex; + u8 tos = RT_TOS(iph->tos); + u8 prot = iph->protocol; + u32 mark = skb->mark; - dst_confirm(&rt->dst); + __build_flow_key(fl4, sk, iph, oif, tos, prot, mark, 0); +} - rt->rt_gateway = peer->redirect_learned.a4; +static void build_sk_flow_key(struct flowi4 *fl4, const struct sock *sk) +{ + const struct inet_sock *inet = inet_sk(sk); + const struct ip_options_rcu *inet_opt; + __be32 daddr = inet->inet_daddr; - n = ipv4_neigh_lookup(&rt->dst, &rt->rt_gateway); - if (IS_ERR(n)) { - rt->rt_gateway = orig_gw; - return; + rcu_read_lock(); + inet_opt = rcu_dereference(inet->inet_opt); + if (inet_opt && inet_opt->opt.srr) + daddr = inet_opt->opt.faddr; + flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, + RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE, + inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol, + inet_sk_flowi_flags(sk), + daddr, inet->inet_saddr, 0, 0); + rcu_read_unlock(); +} + +static void ip_rt_build_flow_key(struct flowi4 *fl4, const struct sock *sk, + const struct sk_buff *skb) +{ + if (skb) + build_skb_flow_key(fl4, skb, sk); + else + build_sk_flow_key(fl4, sk); +} + +static DEFINE_SEQLOCK(fnhe_seqlock); + +static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash) +{ + struct fib_nh_exception *fnhe, *oldest; + + oldest = rcu_dereference(hash->chain); + for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe; + fnhe = rcu_dereference(fnhe->fnhe_next)) { + if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp)) + oldest = fnhe; } - old_n = xchg(&rt->dst._neighbour, n); - if (old_n) - neigh_release(old_n); - if (!(n->nud_state & NUD_VALID)) { - neigh_event_send(n, NULL); + return oldest; +} + +static inline u32 fnhe_hashfun(__be32 daddr) +{ + u32 hval; + + hval = (__force u32) daddr; + hval ^= (hval >> 11) ^ (hval >> 22); + + return hval & (FNHE_HASH_SIZE - 1); +} + +static void update_or_create_fnhe(struct fib_nh *nh, __be32 daddr, __be32 gw, + u32 pmtu, unsigned long expires) +{ + struct fnhe_hash_bucket *hash; + struct fib_nh_exception *fnhe; + int depth; + u32 hval = fnhe_hashfun(daddr); + + write_seqlock_bh(&fnhe_seqlock); + + hash = nh->nh_exceptions; + if (!hash) { + hash = kzalloc(FNHE_HASH_SIZE * sizeof(*hash), GFP_ATOMIC); + if (!hash) + goto out_unlock; + nh->nh_exceptions = hash; + } + + hash += hval; + + depth = 0; + for (fnhe = rcu_dereference(hash->chain); fnhe; + fnhe = rcu_dereference(fnhe->fnhe_next)) { + if (fnhe->fnhe_daddr == daddr) + break; + depth++; + } + + if (fnhe) { + if (gw) + fnhe->fnhe_gw = gw; + if (pmtu) { + fnhe->fnhe_pmtu = pmtu; + fnhe->fnhe_expires = expires; + } } else { - rt->rt_flags |= RTCF_REDIRECTED; - call_netevent_notifiers(NETEVENT_NEIGH_UPDATE, n); + if (depth > FNHE_RECLAIM_DEPTH) + fnhe = fnhe_oldest(hash); + else { + fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC); + if (!fnhe) + goto out_unlock; + + fnhe->fnhe_next = hash->chain; + rcu_assign_pointer(hash->chain, fnhe); + } + fnhe->fnhe_daddr = daddr; + fnhe->fnhe_gw = gw; + fnhe->fnhe_pmtu = pmtu; + fnhe->fnhe_expires = expires; } + 
+ fnhe->fnhe_stamp = jiffies; + +out_unlock: + write_sequnlock_bh(&fnhe_seqlock); + return; } -/* called in rcu_read_lock() section */ -void ip_rt_redirect(__be32 old_gw, __be32 daddr, __be32 new_gw, - __be32 saddr, struct net_device *dev) +static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flowi4 *fl4, + bool kill_route) { - int s, i; - struct in_device *in_dev = __in_dev_get_rcu(dev); - __be32 skeys[2] = { saddr, 0 }; - int ikeys[2] = { dev->ifindex, 0 }; - struct inet_peer *peer; + __be32 new_gw = icmp_hdr(skb)->un.gateway; + __be32 old_gw = ip_hdr(skb)->saddr; + struct net_device *dev = skb->dev; + struct in_device *in_dev; + struct fib_result res; + struct neighbour *n; struct net *net; + switch (icmp_hdr(skb)->code & 7) { + case ICMP_REDIR_NET: + case ICMP_REDIR_NETTOS: + case ICMP_REDIR_HOST: + case ICMP_REDIR_HOSTTOS: + break; + + default: + return; + } + + if (rt->rt_gateway != old_gw) + return; + + in_dev = __in_dev_get_rcu(dev); if (!in_dev) return; @@ -1456,72 +716,50 @@ void ip_rt_redirect(__be32 old_gw, __be32 daddr, __be32 new_gw, goto reject_redirect; } - for (s = 0; s < 2; s++) { - for (i = 0; i < 2; i++) { - unsigned int hash; - struct rtable __rcu **rthp; - struct rtable *rt; - - hash = rt_hash(daddr, skeys[s], ikeys[i], rt_genid(net)); - - rthp = &rt_hash_table[hash].chain; - - while ((rt = rcu_dereference(*rthp)) != NULL) { - rthp = &rt->dst.rt_next; - - if (rt->rt_key_dst != daddr || - rt->rt_key_src != skeys[s] || - rt->rt_oif != ikeys[i] || - rt_is_input_route(rt) || - rt_is_expired(rt) || - !net_eq(dev_net(rt->dst.dev), net) || - rt->dst.error || - rt->dst.dev != dev || - rt->rt_gateway != old_gw) - continue; - - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 1); - - peer = rt->peer; - if (peer) { - if (peer->redirect_learned.a4 != new_gw) { - peer->redirect_learned.a4 = new_gw; - atomic_inc(&__rt_peer_genid); - } - check_peer_redir(&rt->dst, peer); - } + n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw); + if (n) { + if (!(n->nud_state & NUD_VALID)) { + neigh_event_send(n, NULL); + } else { + if (fib_lookup(net, fl4, &res) == 0) { + struct fib_nh *nh = &FIB_RES_NH(res); + + update_or_create_fnhe(nh, fl4->daddr, new_gw, + 0, 0); } + if (kill_route) + rt->dst.obsolete = DST_OBSOLETE_KILL; + call_netevent_notifiers(NETEVENT_NEIGH_UPDATE, n); } + neigh_release(n); } return; reject_redirect: #ifdef CONFIG_IP_ROUTE_VERBOSE - if (IN_DEV_LOG_MARTIANS(in_dev)) + if (IN_DEV_LOG_MARTIANS(in_dev)) { + const struct iphdr *iph = (const struct iphdr *) skb->data; + __be32 daddr = iph->daddr; + __be32 saddr = iph->saddr; + net_info_ratelimited("Redirect from %pI4 on %s about %pI4 ignored\n" " Advised path = %pI4 -> %pI4\n", &old_gw, dev->name, &new_gw, &saddr, &daddr); + } #endif ; } -static bool peer_pmtu_expired(struct inet_peer *peer) +static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buff *skb) { - unsigned long orig = ACCESS_ONCE(peer->pmtu_expires); + struct rtable *rt; + struct flowi4 fl4; - return orig && - time_after_eq(jiffies, orig) && - cmpxchg(&peer->pmtu_expires, orig, 0) == orig; -} + rt = (struct rtable *) dst; -static bool peer_pmtu_cleaned(struct inet_peer *peer) -{ - unsigned long orig = ACCESS_ONCE(peer->pmtu_expires); - - return orig && - cmpxchg(&peer->pmtu_expires, orig, 0) == orig; + ip_rt_build_flow_key(&fl4, sk, skb); + __ip_do_redirect(rt, skb, &fl4, true); } static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst) @@ -1533,14 +771,10 @@ static struct dst_entry *ipv4_negative_advice(struct 
dst_entry *dst) if (dst->obsolete > 0) { ip_rt_put(rt); ret = NULL; - } else if (rt->rt_flags & RTCF_REDIRECTED) { - unsigned int hash = rt_hash(rt->rt_key_dst, rt->rt_key_src, - rt->rt_oif, - rt_genid(dev_net(dst->dev))); - rt_del(hash, rt); + } else if ((rt->rt_flags & RTCF_REDIRECTED) || + rt->dst.expires) { + ip_rt_put(rt); ret = NULL; - } else if (rt->peer && peer_pmtu_expired(rt->peer)) { - dst_metric_set(dst, RTAX_MTU, rt->peer->pmtu_orig); } } return ret; @@ -1567,6 +801,7 @@ void ip_rt_send_redirect(struct sk_buff *skb) struct rtable *rt = skb_rtable(skb); struct in_device *in_dev; struct inet_peer *peer; + struct net *net; int log_martians; rcu_read_lock(); @@ -1578,9 +813,8 @@ void ip_rt_send_redirect(struct sk_buff *skb) log_martians = IN_DEV_LOG_MARTIANS(in_dev); rcu_read_unlock(); - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 1); - peer = rt->peer; + net = dev_net(rt->dst.dev); + peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, 1); if (!peer) { icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, rt->rt_gateway); return; @@ -1597,7 +831,7 @@ void ip_rt_send_redirect(struct sk_buff *skb) */ if (peer->rate_tokens >= ip_rt_redirect_number) { peer->rate_last = jiffies; - return; + goto out_put_peer; } /* Check for load limit; set rate_last to the latest sent @@ -1614,20 +848,38 @@ void ip_rt_send_redirect(struct sk_buff *skb) if (log_martians && peer->rate_tokens == ip_rt_redirect_number) net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n", - &ip_hdr(skb)->saddr, rt->rt_iif, - &rt->rt_dst, &rt->rt_gateway); + &ip_hdr(skb)->saddr, inet_iif(skb), + &ip_hdr(skb)->daddr, &rt->rt_gateway); #endif } +out_put_peer: + inet_putpeer(peer); } static int ip_error(struct sk_buff *skb) { + struct in_device *in_dev = __in_dev_get_rcu(skb->dev); struct rtable *rt = skb_rtable(skb); struct inet_peer *peer; unsigned long now; + struct net *net; bool send; int code; + net = dev_net(rt->dst.dev); + if (!IN_DEV_FORWARD(in_dev)) { + switch (rt->dst.error) { + case EHOSTUNREACH: + IP_INC_STATS_BH(net, IPSTATS_MIB_INADDRERRORS); + break; + + case ENETUNREACH: + IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES); + break; + } + goto out; + } + switch (rt->dst.error) { case EINVAL: default: @@ -1637,17 +889,14 @@ static int ip_error(struct sk_buff *skb) break; case ENETUNREACH: code = ICMP_NET_UNREACH; - IP_INC_STATS_BH(dev_net(rt->dst.dev), - IPSTATS_MIB_INNOROUTES); + IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES); break; case EACCES: code = ICMP_PKT_FILTERED; break; } - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 1); - peer = rt->peer; + peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, 1); send = true; if (peer) { @@ -1660,6 +909,7 @@ static int ip_error(struct sk_buff *skb) peer->rate_tokens -= ip_rt_error_cost; else send = false; + inet_putpeer(peer); } if (send) icmp_send(skb, ICMP_DEST_UNREACH, code, 0); @@ -1668,163 +918,120 @@ out: kfree_skb(skb); return 0; } -/* - * The last two values are not from the RFC but - * are needed for AMPRnet AX.25 paths. 
- */ +static u32 __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu) +{ + struct fib_result res; -static const unsigned short mtu_plateau[] = -{32000, 17914, 8166, 4352, 2002, 1492, 576, 296, 216, 128 }; + if (mtu < ip_rt_min_pmtu) + mtu = ip_rt_min_pmtu; -static inline unsigned short guess_mtu(unsigned short old_mtu) -{ - int i; + if (fib_lookup(dev_net(rt->dst.dev), fl4, &res) == 0) { + struct fib_nh *nh = &FIB_RES_NH(res); - for (i = 0; i < ARRAY_SIZE(mtu_plateau); i++) - if (old_mtu > mtu_plateau[i]) - return mtu_plateau[i]; - return 68; + update_or_create_fnhe(nh, fl4->daddr, 0, mtu, + jiffies + ip_rt_mtu_expires); + } + return mtu; } -unsigned short ip_rt_frag_needed(struct net *net, const struct iphdr *iph, - unsigned short new_mtu, - struct net_device *dev) +static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) { - unsigned short old_mtu = ntohs(iph->tot_len); - unsigned short est_mtu = 0; - struct inet_peer *peer; - - peer = inet_getpeer_v4(iph->daddr, 1); - if (peer) { - unsigned short mtu = new_mtu; - - if (new_mtu < 68 || new_mtu >= old_mtu) { - /* BSD 4.2 derived systems incorrectly adjust - * tot_len by the IP header length, and report - * a zero MTU in the ICMP message. - */ - if (mtu == 0 && - old_mtu >= 68 + (iph->ihl << 2)) - old_mtu -= iph->ihl << 2; - mtu = guess_mtu(old_mtu); - } - - if (mtu < ip_rt_min_pmtu) - mtu = ip_rt_min_pmtu; - if (!peer->pmtu_expires || mtu < peer->pmtu_learned) { - unsigned long pmtu_expires; - - pmtu_expires = jiffies + ip_rt_mtu_expires; - if (!pmtu_expires) - pmtu_expires = 1UL; + struct rtable *rt = (struct rtable *) dst; + struct flowi4 fl4; - est_mtu = mtu; - peer->pmtu_learned = mtu; - peer->pmtu_expires = pmtu_expires; - atomic_inc(&__rt_peer_genid); - } + ip_rt_build_flow_key(&fl4, sk, skb); + mtu = __ip_rt_update_pmtu(rt, &fl4, mtu); - inet_putpeer(peer); + if (!rt->rt_pmtu) { + dst->obsolete = DST_OBSOLETE_KILL; + } else { + rt->rt_pmtu = mtu; + dst_set_expires(&rt->dst, ip_rt_mtu_expires); } - return est_mtu ? 
: new_mtu; } -static void check_peer_pmtu(struct dst_entry *dst, struct inet_peer *peer) +void ipv4_update_pmtu(struct sk_buff *skb, struct net *net, u32 mtu, + int oif, u32 mark, u8 protocol, int flow_flags) { - unsigned long expires = ACCESS_ONCE(peer->pmtu_expires); + const struct iphdr *iph = (const struct iphdr *) skb->data; + struct flowi4 fl4; + struct rtable *rt; - if (!expires) - return; - if (time_before(jiffies, expires)) { - u32 orig_dst_mtu = dst_mtu(dst); - if (peer->pmtu_learned < orig_dst_mtu) { - if (!peer->pmtu_orig) - peer->pmtu_orig = dst_metric_raw(dst, RTAX_MTU); - dst_metric_set(dst, RTAX_MTU, peer->pmtu_learned); - } - } else if (cmpxchg(&peer->pmtu_expires, expires, 0) == expires) - dst_metric_set(dst, RTAX_MTU, peer->pmtu_orig); + __build_flow_key(&fl4, NULL, iph, oif, + RT_TOS(iph->tos), protocol, mark, flow_flags); + rt = __ip_route_output_key(net, &fl4); + if (!IS_ERR(rt)) { + __ip_rt_update_pmtu(rt, &fl4, mtu); + ip_rt_put(rt); + } } +EXPORT_SYMBOL_GPL(ipv4_update_pmtu); -static void ip_rt_update_pmtu(struct dst_entry *dst, u32 mtu) +void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) { - struct rtable *rt = (struct rtable *) dst; - struct inet_peer *peer; - - dst_confirm(dst); - - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 1); - peer = rt->peer; - if (peer) { - unsigned long pmtu_expires = ACCESS_ONCE(peer->pmtu_expires); - - if (mtu < ip_rt_min_pmtu) - mtu = ip_rt_min_pmtu; - if (!pmtu_expires || mtu < peer->pmtu_learned) { - - pmtu_expires = jiffies + ip_rt_mtu_expires; - if (!pmtu_expires) - pmtu_expires = 1UL; - - peer->pmtu_learned = mtu; - peer->pmtu_expires = pmtu_expires; + const struct iphdr *iph = (const struct iphdr *) skb->data; + struct flowi4 fl4; + struct rtable *rt; - atomic_inc(&__rt_peer_genid); - rt->rt_peer_genid = rt_peer_genid(); - } - check_peer_pmtu(dst, peer); + __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); + rt = __ip_route_output_key(sock_net(sk), &fl4); + if (!IS_ERR(rt)) { + __ip_rt_update_pmtu(rt, &fl4, mtu); + ip_rt_put(rt); } } +EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); - -static void ipv4_validate_peer(struct rtable *rt) +void ipv4_redirect(struct sk_buff *skb, struct net *net, + int oif, u32 mark, u8 protocol, int flow_flags) { - if (rt->rt_peer_genid != rt_peer_genid()) { - struct inet_peer *peer; - - if (!rt->peer) - rt_bind_peer(rt, rt->rt_dst, 0); + const struct iphdr *iph = (const struct iphdr *) skb->data; + struct flowi4 fl4; + struct rtable *rt; - peer = rt->peer; - if (peer) { - check_peer_pmtu(&rt->dst, peer); + __build_flow_key(&fl4, NULL, iph, oif, + RT_TOS(iph->tos), protocol, mark, flow_flags); + rt = __ip_route_output_key(net, &fl4); + if (!IS_ERR(rt)) { + __ip_do_redirect(rt, skb, &fl4, false); + ip_rt_put(rt); + } +} +EXPORT_SYMBOL_GPL(ipv4_redirect); - if (peer->redirect_learned.a4 && - peer->redirect_learned.a4 != rt->rt_gateway) - check_peer_redir(&rt->dst, peer); - } +void ipv4_sk_redirect(struct sk_buff *skb, struct sock *sk) +{ + const struct iphdr *iph = (const struct iphdr *) skb->data; + struct flowi4 fl4; + struct rtable *rt; - rt->rt_peer_genid = rt_peer_genid(); + __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); + rt = __ip_route_output_key(sock_net(sk), &fl4); + if (!IS_ERR(rt)) { + __ip_do_redirect(rt, skb, &fl4, false); + ip_rt_put(rt); } } +EXPORT_SYMBOL_GPL(ipv4_sk_redirect); static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie) { struct rtable *rt = (struct rtable *) dst; - if (rt_is_expired(rt)) + /* All IPV4 dsts are created with ->obsolete set to 
the value + * DST_OBSOLETE_FORCE_CHK which forces validation calls down + * into this function always. + * + * When a PMTU/redirect information update invalidates a + * route, this is indicated by setting obsolete to + * DST_OBSOLETE_KILL. + */ + if (dst->obsolete == DST_OBSOLETE_KILL || rt_is_expired(rt)) return NULL; - ipv4_validate_peer(rt); return dst; } -static void ipv4_dst_destroy(struct dst_entry *dst) -{ - struct rtable *rt = (struct rtable *) dst; - struct inet_peer *peer = rt->peer; - - if (rt->fi) { - fib_info_put(rt->fi); - rt->fi = NULL; - } - if (peer) { - rt->peer = NULL; - inet_putpeer(peer); - } -} - - static void ipv4_link_failure(struct sk_buff *skb) { struct rtable *rt; @@ -1832,8 +1039,8 @@ static void ipv4_link_failure(struct sk_buff *skb) icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0); rt = skb_rtable(skb); - if (rt && rt->peer && peer_pmtu_cleaned(rt->peer)) - dst_metric_set(&rt->dst, RTAX_MTU, rt->peer->pmtu_orig); + if (rt) + dst_set_expires(&rt->dst, 0); } static int ip_rt_bug(struct sk_buff *skb) @@ -1880,8 +1087,9 @@ void ip_rt_get_source(u8 *addr, struct sk_buff *skb, struct rtable *rt) if (fib_lookup(dev_net(rt->dst.dev), &fl4, &res) == 0) src = FIB_RES_PREFSRC(dev_net(rt->dst.dev), res); else - src = inet_select_addr(rt->dst.dev, rt->rt_gateway, - RT_SCOPE_UNIVERSE); + src = inet_select_addr(rt->dst.dev, + rt_nexthop(rt, iph->daddr), + RT_SCOPE_UNIVERSE); rcu_read_unlock(); } memcpy(addr, &src, 4); @@ -1913,7 +1121,13 @@ static unsigned int ipv4_default_advmss(const struct dst_entry *dst) static unsigned int ipv4_mtu(const struct dst_entry *dst) { const struct rtable *rt = (const struct rtable *) dst; - unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); + unsigned int mtu = rt->rt_pmtu; + + if (mtu && time_after_eq(jiffies, rt->dst.expires)) + mtu = 0; + + if (!mtu) + mtu = dst_metric_raw(dst, RTAX_MTU); if (mtu && rt_is_output_route(rt)) return mtu; @@ -1921,8 +1135,7 @@ static unsigned int ipv4_mtu(const struct dst_entry *dst) mtu = dst->dev->mtu; if (unlikely(dst_metric_locked(dst, RTAX_MTU))) { - - if (rt->rt_gateway != rt->rt_dst && mtu > 576) + if (rt->rt_gateway && mtu > 576) mtu = 576; } @@ -1932,76 +1145,121 @@ static unsigned int ipv4_mtu(const struct dst_entry *dst) return mtu; } -static void rt_init_metrics(struct rtable *rt, const struct flowi4 *fl4, - struct fib_info *fi) +static struct fib_nh_exception *find_exception(struct fib_nh *nh, __be32 daddr) { - struct inet_peer *peer; - int create = 0; + struct fnhe_hash_bucket *hash = nh->nh_exceptions; + struct fib_nh_exception *fnhe; + u32 hval; - /* If a peer entry exists for this destination, we must hook - * it up in order to get at cached metrics. 
- */ - if (fl4 && (fl4->flowi4_flags & FLOWI_FLAG_PRECOW_METRICS)) - create = 1; + if (!hash) + return NULL; - rt->peer = peer = inet_getpeer_v4(rt->rt_dst, create); - if (peer) { - rt->rt_peer_genid = rt_peer_genid(); - if (inet_metrics_new(peer)) - memcpy(peer->metrics, fi->fib_metrics, - sizeof(u32) * RTAX_MAX); - dst_init_metrics(&rt->dst, peer->metrics, false); - - check_peer_pmtu(&rt->dst, peer); - - if (peer->redirect_learned.a4 && - peer->redirect_learned.a4 != rt->rt_gateway) { - rt->rt_gateway = peer->redirect_learned.a4; - rt->rt_flags |= RTCF_REDIRECTED; - } - } else { - if (fi->fib_metrics != (u32 *) dst_default_metrics) { - rt->fi = fi; - atomic_inc(&fi->fib_clntref); + hval = fnhe_hashfun(daddr); + + for (fnhe = rcu_dereference(hash[hval].chain); fnhe; + fnhe = rcu_dereference(fnhe->fnhe_next)) { + if (fnhe->fnhe_daddr == daddr) + return fnhe; + } + return NULL; +} + +static void rt_bind_exception(struct rtable *rt, struct fib_nh_exception *fnhe, + __be32 daddr) +{ + __be32 fnhe_daddr, gw; + unsigned long expires; + unsigned int seq; + u32 pmtu; + +restart: + seq = read_seqbegin(&fnhe_seqlock); + fnhe_daddr = fnhe->fnhe_daddr; + gw = fnhe->fnhe_gw; + pmtu = fnhe->fnhe_pmtu; + expires = fnhe->fnhe_expires; + if (read_seqretry(&fnhe_seqlock, seq)) + goto restart; + + if (daddr != fnhe_daddr) + return; + + if (pmtu) { + unsigned long diff = expires - jiffies; + + if (time_before(jiffies, expires)) { + rt->rt_pmtu = pmtu; + dst_set_expires(&rt->dst, diff); } - dst_init_metrics(&rt->dst, fi->fib_metrics, true); } + if (gw) { + rt->rt_flags |= RTCF_REDIRECTED; + rt->rt_gateway = gw; + } + fnhe->fnhe_stamp = jiffies; +} + +static inline void rt_release_rcu(struct rcu_head *head) +{ + struct dst_entry *dst = container_of(head, struct dst_entry, rcu_head); + dst_release(dst); +} + +static void rt_cache_route(struct fib_nh *nh, struct rtable *rt) +{ + struct rtable *orig, *prev, **p = &nh->nh_rth_output; + + if (rt_is_input_route(rt)) + p = &nh->nh_rth_input; + + orig = *p; + + prev = cmpxchg(p, orig, rt); + if (prev == orig) { + dst_clone(&rt->dst); + if (orig) + call_rcu_bh(&orig->dst.rcu_head, rt_release_rcu); + } +} + +static bool rt_cache_valid(struct rtable *rt) +{ + return (rt && rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK); } -static void rt_set_nexthop(struct rtable *rt, const struct flowi4 *fl4, +static void rt_set_nexthop(struct rtable *rt, __be32 daddr, const struct fib_result *res, + struct fib_nh_exception *fnhe, struct fib_info *fi, u16 type, u32 itag) { - struct dst_entry *dst = &rt->dst; - if (fi) { - if (FIB_RES_GW(*res) && - FIB_RES_NH(*res).nh_scope == RT_SCOPE_LINK) - rt->rt_gateway = FIB_RES_GW(*res); - rt_init_metrics(rt, fl4, fi); + struct fib_nh *nh = &FIB_RES_NH(*res); + + if (nh->nh_gw && nh->nh_scope == RT_SCOPE_LINK) + rt->rt_gateway = nh->nh_gw; + if (unlikely(fnhe)) + rt_bind_exception(rt, fnhe, daddr); + dst_init_metrics(&rt->dst, fi->fib_metrics, true); #ifdef CONFIG_IP_ROUTE_CLASSID - dst->tclassid = FIB_RES_NH(*res).nh_tclassid; + rt->dst.tclassid = nh->nh_tclassid; #endif + if (!(rt->dst.flags & DST_HOST)) + rt_cache_route(nh, rt); } - if (dst_mtu(dst) > IP_MAX_MTU) - dst_metric_set(dst, RTAX_MTU, IP_MAX_MTU); - if (dst_metric_raw(dst, RTAX_ADVMSS) > 65535 - 40) - dst_metric_set(dst, RTAX_ADVMSS, 65535 - 40); - #ifdef CONFIG_IP_ROUTE_CLASSID #ifdef CONFIG_IP_MULTIPLE_TABLES - set_class_tag(rt, fib_rules_tclass(res)); + set_class_tag(rt, res->tclassid); #endif set_class_tag(rt, itag); #endif } static struct rtable *rt_dst_alloc(struct net_device *dev, 
- bool nopolicy, bool noxfrm) + bool nopolicy, bool noxfrm, bool will_cache) { - return dst_alloc(&ipv4_dst_ops, dev, 1, -1, - DST_HOST | + return dst_alloc(&ipv4_dst_ops, dev, 1, DST_OBSOLETE_FORCE_CHK, + (will_cache ? 0 : DST_HOST) | DST_NOCACHE | (nopolicy ? DST_NOPOLICY : 0) | (noxfrm ? DST_NOXFRM : 0)); } @@ -2010,9 +1268,7 @@ static struct rtable *rt_dst_alloc(struct net_device *dev, static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, u8 tos, struct net_device *dev, int our) { - unsigned int hash; struct rtable *rth; - __be32 spec_dst; struct in_device *in_dev = __in_dev_get_rcu(dev); u32 itag = 0; int err; @@ -2023,21 +1279,24 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, return -EINVAL; if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr) || - ipv4_is_loopback(saddr) || skb->protocol != htons(ETH_P_IP)) + skb->protocol != htons(ETH_P_IP)) goto e_inval; + if (likely(!IN_DEV_ROUTE_LOCALNET(in_dev))) + if (ipv4_is_loopback(saddr)) + goto e_inval; + if (ipv4_is_zeronet(saddr)) { if (!ipv4_is_local_multicast(daddr)) goto e_inval; - spec_dst = inet_select_addr(dev, 0, RT_SCOPE_LINK); } else { - err = fib_validate_source(skb, saddr, 0, tos, 0, dev, &spec_dst, - &itag); + err = fib_validate_source(skb, saddr, 0, tos, 0, dev, + in_dev, &itag); if (err < 0) goto e_err; } rth = rt_dst_alloc(dev_net(dev)->loopback_dev, - IN_DEV_CONF_GET(in_dev, NOPOLICY), false); + IN_DEV_CONF_GET(in_dev, NOPOLICY), false, false); if (!rth) goto e_nobufs; @@ -2046,23 +1305,13 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, #endif rth->dst.output = ip_rt_bug; - rth->rt_key_dst = daddr; - rth->rt_key_src = saddr; rth->rt_genid = rt_genid(dev_net(dev)); rth->rt_flags = RTCF_MULTICAST; rth->rt_type = RTN_MULTICAST; - rth->rt_key_tos = tos; - rth->rt_dst = daddr; - rth->rt_src = saddr; - rth->rt_route_iif = dev->ifindex; - rth->rt_iif = dev->ifindex; - rth->rt_oif = 0; - rth->rt_mark = skb->mark; - rth->rt_gateway = daddr; - rth->rt_spec_dst= spec_dst; - rth->rt_peer_genid = 0; - rth->peer = NULL; - rth->fi = NULL; + rth->rt_is_input= 1; + rth->rt_iif = 0; + rth->rt_pmtu = 0; + rth->rt_gateway = 0; if (our) { rth->dst.input= ip_local_deliver; rth->rt_flags |= RTCF_LOCAL; @@ -2074,9 +1323,8 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, #endif RT_CACHE_STAT_INC(in_slow_mc); - hash = rt_hash(daddr, saddr, dev->ifindex, rt_genid(dev_net(dev))); - rth = rt_intern_hash(hash, rth, skb, dev->ifindex); - return IS_ERR(rth) ? 
PTR_ERR(rth) : 0; + skb_dst_set(skb, &rth->dst); + return 0; e_nobufs: return -ENOBUFS; @@ -2123,7 +1371,7 @@ static int __mkroute_input(struct sk_buff *skb, int err; struct in_device *out_dev; unsigned int flags = 0; - __be32 spec_dst; + bool do_cache; u32 itag; /* get a working reference to the output device */ @@ -2135,7 +1383,7 @@ static int __mkroute_input(struct sk_buff *skb, err = fib_validate_source(skb, saddr, daddr, tos, FIB_RES_OIF(*res), - in_dev->dev, &spec_dst, &itag); + in_dev->dev, in_dev, &itag); if (err < 0) { ip_handle_martian_source(in_dev->dev, in_dev, skb, daddr, saddr); @@ -2143,9 +1391,6 @@ static int __mkroute_input(struct sk_buff *skb, goto cleanup; } - if (err) - flags |= RTCF_DIRECTSRC; - if (out_dev == in_dev && err && (IN_DEV_SHARED_MEDIA(out_dev) || inet_addr_onlink(out_dev, saddr, FIB_RES_GW(*res)))) @@ -2166,37 +1411,39 @@ static int __mkroute_input(struct sk_buff *skb, } } + do_cache = false; + if (res->fi) { + if (!itag) { + rth = FIB_RES_NH(*res).nh_rth_input; + if (rt_cache_valid(rth)) { + dst_hold(&rth->dst); + goto out; + } + do_cache = true; + } + } + rth = rt_dst_alloc(out_dev->dev, IN_DEV_CONF_GET(in_dev, NOPOLICY), - IN_DEV_CONF_GET(out_dev, NOXFRM)); + IN_DEV_CONF_GET(out_dev, NOXFRM), do_cache); if (!rth) { err = -ENOBUFS; goto cleanup; } - rth->rt_key_dst = daddr; - rth->rt_key_src = saddr; rth->rt_genid = rt_genid(dev_net(rth->dst.dev)); rth->rt_flags = flags; rth->rt_type = res->type; - rth->rt_key_tos = tos; - rth->rt_dst = daddr; - rth->rt_src = saddr; - rth->rt_route_iif = in_dev->dev->ifindex; - rth->rt_iif = in_dev->dev->ifindex; - rth->rt_oif = 0; - rth->rt_mark = skb->mark; - rth->rt_gateway = daddr; - rth->rt_spec_dst= spec_dst; - rth->rt_peer_genid = 0; - rth->peer = NULL; - rth->fi = NULL; + rth->rt_is_input = 1; + rth->rt_iif = 0; + rth->rt_pmtu = 0; + rth->rt_gateway = 0; rth->dst.input = ip_forward; rth->dst.output = ip_output; - rt_set_nexthop(rth, NULL, res, res->fi, res->type, itag); - + rt_set_nexthop(rth, daddr, res, NULL, res->fi, res->type, itag); +out: *result = rth; err = 0; cleanup: @@ -2211,7 +1458,6 @@ static int ip_mkroute_input(struct sk_buff *skb, { struct rtable *rth = NULL; int err; - unsigned int hash; #ifdef CONFIG_IP_ROUTE_MULTIPATH if (res->fi && res->fi->fib_nhs > 1) @@ -2223,12 +1469,7 @@ static int ip_mkroute_input(struct sk_buff *skb, if (err) return err; - /* put it into the cache */ - hash = rt_hash(daddr, saddr, fl4->flowi4_iif, - rt_genid(dev_net(rth->dst.dev))); - rth = rt_intern_hash(hash, rth, skb, fl4->flowi4_iif); - if (IS_ERR(rth)) - return PTR_ERR(rth); + skb_dst_set(skb, &rth->dst); return 0; } @@ -2252,10 +1493,9 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, unsigned int flags = 0; u32 itag = 0; struct rtable *rth; - unsigned int hash; - __be32 spec_dst; int err = -EINVAL; struct net *net = dev_net(dev); + bool do_cache; /* IP on this device is disabled. */ @@ -2266,10 +1506,10 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, by fib_lookup. 
*/ - if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr) || - ipv4_is_loopback(saddr)) + if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr)) goto martian_source; + res.fi = NULL; if (ipv4_is_lbcast(daddr) || (saddr == 0 && daddr == 0)) goto brd_input; @@ -2279,9 +1519,17 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, if (ipv4_is_zeronet(saddr)) goto martian_source; - if (ipv4_is_zeronet(daddr) || ipv4_is_loopback(daddr)) + if (ipv4_is_zeronet(daddr)) goto martian_destination; + if (likely(!IN_DEV_ROUTE_LOCALNET(in_dev))) { + if (ipv4_is_loopback(daddr)) + goto martian_destination; + + if (ipv4_is_loopback(saddr)) + goto martian_source; + } + /* * Now we are ready to route packet. */ @@ -2293,11 +1541,8 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, fl4.daddr = daddr; fl4.saddr = saddr; err = fib_lookup(net, &fl4, &res); - if (err != 0) { - if (!IN_DEV_FORWARD(in_dev)) - goto e_hostunreach; + if (err != 0) goto no_route; - } RT_CACHE_STAT_INC(in_slow_tot); @@ -2307,17 +1552,14 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, if (res.type == RTN_LOCAL) { err = fib_validate_source(skb, saddr, daddr, tos, net->loopback_dev->ifindex, - dev, &spec_dst, &itag); + dev, in_dev, &itag); if (err < 0) goto martian_source_keep_err; - if (err) - flags |= RTCF_DIRECTSRC; - spec_dst = daddr; goto local_input; } if (!IN_DEV_FORWARD(in_dev)) - goto e_hostunreach; + goto no_route; if (res.type != RTN_UNICAST) goto martian_destination; @@ -2328,23 +1570,31 @@ brd_input: if (skb->protocol != htons(ETH_P_IP)) goto e_inval; - if (ipv4_is_zeronet(saddr)) - spec_dst = inet_select_addr(dev, 0, RT_SCOPE_LINK); - else { - err = fib_validate_source(skb, saddr, 0, tos, 0, dev, &spec_dst, - &itag); + if (!ipv4_is_zeronet(saddr)) { + err = fib_validate_source(skb, saddr, 0, tos, 0, dev, + in_dev, &itag); if (err < 0) goto martian_source_keep_err; - if (err) - flags |= RTCF_DIRECTSRC; } flags |= RTCF_BROADCAST; res.type = RTN_BROADCAST; RT_CACHE_STAT_INC(in_brd); local_input: + do_cache = false; + if (res.fi) { + if (!itag) { + rth = FIB_RES_NH(res).nh_rth_input; + if (rt_cache_valid(rth)) { + dst_hold(&rth->dst); + goto set_and_out; + } + do_cache = true; + } + } + rth = rt_dst_alloc(net->loopback_dev, - IN_DEV_CONF_GET(in_dev, NOPOLICY), false); + IN_DEV_CONF_GET(in_dev, NOPOLICY), false, do_cache); if (!rth) goto e_nobufs; @@ -2354,41 +1604,27 @@ local_input: rth->dst.tclassid = itag; #endif - rth->rt_key_dst = daddr; - rth->rt_key_src = saddr; rth->rt_genid = rt_genid(net); rth->rt_flags = flags|RTCF_LOCAL; rth->rt_type = res.type; - rth->rt_key_tos = tos; - rth->rt_dst = daddr; - rth->rt_src = saddr; -#ifdef CONFIG_IP_ROUTE_CLASSID - rth->dst.tclassid = itag; -#endif - rth->rt_route_iif = dev->ifindex; - rth->rt_iif = dev->ifindex; - rth->rt_oif = 0; - rth->rt_mark = skb->mark; - rth->rt_gateway = daddr; - rth->rt_spec_dst= spec_dst; - rth->rt_peer_genid = 0; - rth->peer = NULL; - rth->fi = NULL; + rth->rt_is_input = 1; + rth->rt_iif = 0; + rth->rt_pmtu = 0; + rth->rt_gateway = 0; if (res.type == RTN_UNREACHABLE) { rth->dst.input= ip_error; rth->dst.error= -err; rth->rt_flags &= ~RTCF_LOCAL; } - hash = rt_hash(daddr, saddr, fl4.flowi4_iif, rt_genid(net)); - rth = rt_intern_hash(hash, rth, skb, fl4.flowi4_iif); + if (do_cache) + rt_cache_route(&FIB_RES_NH(res), rth); +set_and_out: + skb_dst_set(skb, &rth->dst); err = 0; - if (IS_ERR(rth)) - err = PTR_ERR(rth); goto out; no_route: 
RT_CACHE_STAT_INC(in_no_route); - spec_dst = inet_select_addr(dev, 0, RT_SCOPE_UNIVERSE); res.type = RTN_UNREACHABLE; if (err == -ESRCH) err = -ENETUNREACH; @@ -2405,10 +1641,6 @@ martian_destination: &daddr, &saddr, dev->name); #endif -e_hostunreach: - err = -EHOSTUNREACH; - goto out; - e_inval: err = -EINVAL; goto out; @@ -2424,50 +1656,13 @@ martian_source_keep_err: goto out; } -int ip_route_input_common(struct sk_buff *skb, __be32 daddr, __be32 saddr, - u8 tos, struct net_device *dev, bool noref) +int ip_route_input(struct sk_buff *skb, __be32 daddr, __be32 saddr, + u8 tos, struct net_device *dev) { - struct rtable *rth; - unsigned int hash; - int iif = dev->ifindex; - struct net *net; int res; - net = dev_net(dev); - rcu_read_lock(); - if (!rt_caching(net)) - goto skip_cache; - - tos &= IPTOS_RT_MASK; - hash = rt_hash(daddr, saddr, iif, rt_genid(net)); - - for (rth = rcu_dereference(rt_hash_table[hash].chain); rth; - rth = rcu_dereference(rth->dst.rt_next)) { - if ((((__force u32)rth->rt_key_dst ^ (__force u32)daddr) | - ((__force u32)rth->rt_key_src ^ (__force u32)saddr) | - (rth->rt_route_iif ^ iif) | - (rth->rt_key_tos ^ tos)) == 0 && - rth->rt_mark == skb->mark && - net_eq(dev_net(rth->dst.dev), net) && - !rt_is_expired(rth)) { - ipv4_validate_peer(rth); - if (noref) { - dst_use_noref(&rth->dst, jiffies); - skb_dst_set_noref(skb, &rth->dst); - } else { - dst_use(&rth->dst, jiffies); - skb_dst_set(skb, &rth->dst); - } - RT_CACHE_STAT_INC(in_hit); - rcu_read_unlock(); - return 0; - } - RT_CACHE_STAT_INC(in_hlist_search); - } - -skip_cache: /* Multicast recognition logic is moved from route cache to here. The problem was that too many Ethernet cards have broken/missing hardware multicast filters :-( As result the host on multicasting @@ -2505,24 +1700,28 @@ skip_cache: rcu_read_unlock(); return res; } -EXPORT_SYMBOL(ip_route_input_common); +EXPORT_SYMBOL(ip_route_input); /* called with rcu_read_lock() */ static struct rtable *__mkroute_output(const struct fib_result *res, - const struct flowi4 *fl4, - __be32 orig_daddr, __be32 orig_saddr, - int orig_oif, __u8 orig_rtos, + const struct flowi4 *fl4, int orig_oif, struct net_device *dev_out, unsigned int flags) { struct fib_info *fi = res->fi; + struct fib_nh_exception *fnhe; struct in_device *in_dev; u16 type = res->type; struct rtable *rth; - if (ipv4_is_loopback(fl4->saddr) && !(dev_out->flags & IFF_LOOPBACK)) + in_dev = __in_dev_get_rcu(dev_out); + if (!in_dev) return ERR_PTR(-EINVAL); + if (likely(!IN_DEV_ROUTE_LOCALNET(in_dev))) + if (ipv4_is_loopback(fl4->saddr) && !(dev_out->flags & IFF_LOOPBACK)) + return ERR_PTR(-EINVAL); + if (ipv4_is_lbcast(fl4->daddr)) type = RTN_BROADCAST; else if (ipv4_is_multicast(fl4->daddr)) @@ -2533,10 +1732,6 @@ static struct rtable *__mkroute_output(const struct fib_result *res, if (dev_out->flags & IFF_LOOPBACK) flags |= RTCF_LOCAL; - in_dev = __in_dev_get_rcu(dev_out); - if (!in_dev) - return ERR_PTR(-EINVAL); - if (type == RTN_BROADCAST) { flags |= RTCF_BROADCAST | RTCF_LOCAL; fi = NULL; @@ -2553,40 +1748,39 @@ static struct rtable *__mkroute_output(const struct fib_result *res, fi = NULL; } + fnhe = NULL; + if (fi) { + fnhe = find_exception(&FIB_RES_NH(*res), fl4->daddr); + if (!fnhe) { + rth = FIB_RES_NH(*res).nh_rth_output; + if (rt_cache_valid(rth)) { + dst_hold(&rth->dst); + return rth; + } + } + } rth = rt_dst_alloc(dev_out, IN_DEV_CONF_GET(in_dev, NOPOLICY), - IN_DEV_CONF_GET(in_dev, NOXFRM)); + IN_DEV_CONF_GET(in_dev, NOXFRM), + fi && !fnhe); if (!rth) return ERR_PTR(-ENOBUFS); 
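The hunk just above is the core of the new output fast path: before allocating anything, __mkroute_output() first looks for a per-destination exception with find_exception() and, if there is none, tries to reuse the route already cached on the nexthop (nh_rth_output, validated by rt_cache_valid()). The slot itself is filled by rt_cache_route() earlier in this diff, which publishes the new dst with a plain cmpxchg() and hands the previous occupant to RCU for release. Below is a stand-alone user-space sketch of that single-slot, lock-free publish, not part of the commit: GCC atomics stand in for the kernel cmpxchg(), free() stands in for the deferred dst_release(), and all names are illustrative rather than taken from the patch.

#include <stdio.h>
#include <stdlib.h>

struct route { int id; };

static struct route *cached_slot;	/* analogue of nh->nh_rth_output */

/* Try to publish rt as the cached route.  Returns 1 if rt now occupies the
 * slot (and the previous occupant has been released), 0 if a concurrent
 * publisher won the race, in which case rt simply stays uncached. */
static int cache_publish(struct route *rt)
{
	struct route *old = __atomic_load_n(&cached_slot, __ATOMIC_ACQUIRE);

	if (!__atomic_compare_exchange_n(&cached_slot, &old, rt, 0,
					 __ATOMIC_RELEASE, __ATOMIC_RELAXED))
		return 0;

	free(old);	/* the kernel defers this via call_rcu_bh() */
	return 1;
}

int main(void)
{
	struct route *rt = malloc(sizeof(*rt));

	if (!rt)
		return 1;
	rt->id = 42;

	if (cache_publish(rt))
		printf("cached route id=%d\n", cached_slot->id);
	else
		free(rt);	/* lost the race; caller keeps sole ownership */

	free(cached_slot);
	return 0;
}

The kernel version additionally takes a reference (dst_clone()) on the route that wins the slot, since from that point on both the cache and the calling path hold it.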
rth->dst.output = ip_output; - rth->rt_key_dst = orig_daddr; - rth->rt_key_src = orig_saddr; rth->rt_genid = rt_genid(dev_net(dev_out)); rth->rt_flags = flags; rth->rt_type = type; - rth->rt_key_tos = orig_rtos; - rth->rt_dst = fl4->daddr; - rth->rt_src = fl4->saddr; - rth->rt_route_iif = 0; - rth->rt_iif = orig_oif ? : dev_out->ifindex; - rth->rt_oif = orig_oif; - rth->rt_mark = fl4->flowi4_mark; - rth->rt_gateway = fl4->daddr; - rth->rt_spec_dst= fl4->saddr; - rth->rt_peer_genid = 0; - rth->peer = NULL; - rth->fi = NULL; + rth->rt_is_input = 0; + rth->rt_iif = orig_oif ? : 0; + rth->rt_pmtu = 0; + rth->rt_gateway = 0; RT_CACHE_STAT_INC(out_slow_tot); - if (flags & RTCF_LOCAL) { + if (flags & RTCF_LOCAL) rth->dst.input = ip_local_deliver; - rth->rt_spec_dst = fl4->daddr; - } if (flags & (RTCF_BROADCAST | RTCF_MULTICAST)) { - rth->rt_spec_dst = fl4->saddr; if (flags & RTCF_LOCAL && !(dev_out->flags & IFF_LOOPBACK)) { rth->dst.output = ip_mc_output; @@ -2603,34 +1797,28 @@ static struct rtable *__mkroute_output(const struct fib_result *res, #endif } - rt_set_nexthop(rth, fl4, res, fi, type, 0); + rt_set_nexthop(rth, fl4->daddr, res, fnhe, fi, type, 0); return rth; } /* * Major route resolver routine. - * called with rcu_read_lock(); */ -static struct rtable *ip_route_output_slow(struct net *net, struct flowi4 *fl4) +struct rtable *__ip_route_output_key(struct net *net, struct flowi4 *fl4) { struct net_device *dev_out = NULL; __u8 tos = RT_FL_TOS(fl4); unsigned int flags = 0; struct fib_result res; struct rtable *rth; - __be32 orig_daddr; - __be32 orig_saddr; int orig_oif; + res.tclassid = 0; res.fi = NULL; -#ifdef CONFIG_IP_MULTIPLE_TABLES - res.r = NULL; -#endif + res.table = NULL; - orig_daddr = fl4->daddr; - orig_saddr = fl4->saddr; orig_oif = fl4->flowi4_oif; fl4->flowi4_iif = net->loopback_dev->ifindex; @@ -2730,6 +1918,7 @@ static struct rtable *ip_route_output_slow(struct net *net, struct flowi4 *fl4) if (fib_lookup(net, fl4, &res)) { res.fi = NULL; + res.table = NULL; if (fl4->flowi4_oif) { /* Apparently, routing tables are wrong. Assume, that the destination is on link. 
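With the hash front end gone (the old __ip_route_output_key() probe loop is deleted a little further down), the function above is now the route resolver itself: every output lookup goes straight to fib_lookup() and, at best, reuses the dst cached on the nexthop inside __mkroute_output(). The calling convention is unchanged, fill in a flowi4 key and test the result for an ERR_PTR, which is exactly what the reworked inet_rtm_getroute() does later in this diff. A minimal kernel-style caller for orientation only; the function name is illustrative and is not something this commit adds.

#include <linux/err.h>
#include <linux/string.h>
#include <net/route.h>

static int example_output_lookup(struct net *net, __be32 daddr, __be32 saddr)
{
	struct flowi4 fl4;
	struct rtable *rt;

	memset(&fl4, 0, sizeof(fl4));
	fl4.daddr = daddr;
	fl4.saddr = saddr;

	/* Resolves via fib_lookup()/__mkroute_output(); no hash probe. */
	rt = ip_route_output_key(net, &fl4);
	if (IS_ERR(rt))
		return PTR_ERR(rt);

	/* ... transmit using rt->dst ... */

	ip_rt_put(rt);		/* drop the reference the lookup returned */
	return 0;
}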
@@ -2791,60 +1980,12 @@ static struct rtable *ip_route_output_slow(struct net *net, struct flowi4 *fl4) make_route: - rth = __mkroute_output(&res, fl4, orig_daddr, orig_saddr, orig_oif, - tos, dev_out, flags); - if (!IS_ERR(rth)) { - unsigned int hash; - - hash = rt_hash(orig_daddr, orig_saddr, orig_oif, - rt_genid(dev_net(dev_out))); - rth = rt_intern_hash(hash, rth, NULL, orig_oif); - } + rth = __mkroute_output(&res, fl4, orig_oif, dev_out, flags); out: rcu_read_unlock(); return rth; } - -struct rtable *__ip_route_output_key(struct net *net, struct flowi4 *flp4) -{ - struct rtable *rth; - unsigned int hash; - - if (!rt_caching(net)) - goto slow_output; - - hash = rt_hash(flp4->daddr, flp4->saddr, flp4->flowi4_oif, rt_genid(net)); - - rcu_read_lock_bh(); - for (rth = rcu_dereference_bh(rt_hash_table[hash].chain); rth; - rth = rcu_dereference_bh(rth->dst.rt_next)) { - if (rth->rt_key_dst == flp4->daddr && - rth->rt_key_src == flp4->saddr && - rt_is_output_route(rth) && - rth->rt_oif == flp4->flowi4_oif && - rth->rt_mark == flp4->flowi4_mark && - !((rth->rt_key_tos ^ flp4->flowi4_tos) & - (IPTOS_RT_MASK | RTO_ONLINK)) && - net_eq(dev_net(rth->dst.dev), net) && - !rt_is_expired(rth)) { - ipv4_validate_peer(rth); - dst_use(&rth->dst, jiffies); - RT_CACHE_STAT_INC(out_hit); - rcu_read_unlock_bh(); - if (!flp4->saddr) - flp4->saddr = rth->rt_src; - if (!flp4->daddr) - flp4->daddr = rth->rt_dst; - return rth; - } - RT_CACHE_STAT_INC(out_hlist_search); - } - rcu_read_unlock_bh(); - -slow_output: - return ip_route_output_slow(net, flp4); -} EXPORT_SYMBOL_GPL(__ip_route_output_key); static struct dst_entry *ipv4_blackhole_dst_check(struct dst_entry *dst, u32 cookie) @@ -2859,7 +2000,13 @@ static unsigned int ipv4_blackhole_mtu(const struct dst_entry *dst) return mtu ? 
: dst->dev->mtu; } -static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu) +static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) +{ +} + +static void ipv4_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) { } @@ -2872,53 +2019,40 @@ static u32 *ipv4_rt_blackhole_cow_metrics(struct dst_entry *dst, static struct dst_ops ipv4_dst_blackhole_ops = { .family = AF_INET, .protocol = cpu_to_be16(ETH_P_IP), - .destroy = ipv4_dst_destroy, .check = ipv4_blackhole_dst_check, .mtu = ipv4_blackhole_mtu, .default_advmss = ipv4_default_advmss, .update_pmtu = ipv4_rt_blackhole_update_pmtu, + .redirect = ipv4_rt_blackhole_redirect, .cow_metrics = ipv4_rt_blackhole_cow_metrics, .neigh_lookup = ipv4_neigh_lookup, }; struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_orig) { - struct rtable *rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, 0, 0); struct rtable *ort = (struct rtable *) dst_orig; + struct rtable *rt; + rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, DST_OBSOLETE_NONE, 0); if (rt) { struct dst_entry *new = &rt->dst; new->__use = 1; new->input = dst_discard; new->output = dst_discard; - dst_copy_metrics(new, &ort->dst); new->dev = ort->dst.dev; if (new->dev) dev_hold(new->dev); - rt->rt_key_dst = ort->rt_key_dst; - rt->rt_key_src = ort->rt_key_src; - rt->rt_key_tos = ort->rt_key_tos; - rt->rt_route_iif = ort->rt_route_iif; + rt->rt_is_input = ort->rt_is_input; rt->rt_iif = ort->rt_iif; - rt->rt_oif = ort->rt_oif; - rt->rt_mark = ort->rt_mark; + rt->rt_pmtu = ort->rt_pmtu; rt->rt_genid = rt_genid(net); rt->rt_flags = ort->rt_flags; rt->rt_type = ort->rt_type; - rt->rt_dst = ort->rt_dst; - rt->rt_src = ort->rt_src; rt->rt_gateway = ort->rt_gateway; - rt->rt_spec_dst = ort->rt_spec_dst; - rt->peer = ort->peer; - if (rt->peer) - atomic_inc(&rt->peer->refcnt); - rt->fi = ort->fi; - if (rt->fi) - atomic_inc(&rt->fi->fib_clntref); dst_free(new); } @@ -2945,16 +2079,16 @@ struct rtable *ip_route_output_flow(struct net *net, struct flowi4 *flp4, } EXPORT_SYMBOL_GPL(ip_route_output_flow); -static int rt_fill_info(struct net *net, - struct sk_buff *skb, u32 pid, u32 seq, int event, - int nowait, unsigned int flags) +static int rt_fill_info(struct net *net, __be32 dst, __be32 src, + struct flowi4 *fl4, struct sk_buff *skb, u32 pid, + u32 seq, int event, int nowait, unsigned int flags) { struct rtable *rt = skb_rtable(skb); struct rtmsg *r; struct nlmsghdr *nlh; unsigned long expires = 0; - const struct inet_peer *peer = rt->peer; - u32 id = 0, ts = 0, tsage = 0, error; + u32 error; + u32 metrics[RTAX_MAX]; nlh = nlmsg_put(skb, pid, seq, event, sizeof(*r), flags); if (nlh == NULL) @@ -2964,7 +2098,7 @@ static int rt_fill_info(struct net *net, r->rtm_family = AF_INET; r->rtm_dst_len = 32; r->rtm_src_len = 0; - r->rtm_tos = rt->rt_key_tos; + r->rtm_tos = fl4->flowi4_tos; r->rtm_table = RT_TABLE_MAIN; if (nla_put_u32(skb, RTA_TABLE, RT_TABLE_MAIN)) goto nla_put_failure; @@ -2975,11 +2109,11 @@ static int rt_fill_info(struct net *net, if (rt->rt_flags & RTCF_NOTIFY) r->rtm_flags |= RTM_F_NOTIFY; - if (nla_put_be32(skb, RTA_DST, rt->rt_dst)) + if (nla_put_be32(skb, RTA_DST, dst)) goto nla_put_failure; - if (rt->rt_key_src) { + if (src) { r->rtm_src_len = 32; - if (nla_put_be32(skb, RTA_SRC, rt->rt_key_src)) + if (nla_put_be32(skb, RTA_SRC, src)) goto nla_put_failure; } if (rt->dst.dev && @@ -2990,69 +2124,40 @@ static int rt_fill_info(struct net *net, nla_put_u32(skb, 
RTA_FLOW, rt->dst.tclassid)) goto nla_put_failure; #endif - if (rt_is_input_route(rt)) { - if (nla_put_be32(skb, RTA_PREFSRC, rt->rt_spec_dst)) - goto nla_put_failure; - } else if (rt->rt_src != rt->rt_key_src) { - if (nla_put_be32(skb, RTA_PREFSRC, rt->rt_src)) + if (!rt_is_input_route(rt) && + fl4->saddr != src) { + if (nla_put_be32(skb, RTA_PREFSRC, fl4->saddr)) goto nla_put_failure; } - if (rt->rt_dst != rt->rt_gateway && + if (rt->rt_gateway && nla_put_be32(skb, RTA_GATEWAY, rt->rt_gateway)) goto nla_put_failure; - if (rtnetlink_put_metrics(skb, dst_metrics_ptr(&rt->dst)) < 0) + memcpy(metrics, dst_metrics_ptr(&rt->dst), sizeof(metrics)); + if (rt->rt_pmtu) + metrics[RTAX_MTU - 1] = rt->rt_pmtu; + if (rtnetlink_put_metrics(skb, metrics) < 0) goto nla_put_failure; - if (rt->rt_mark && - nla_put_be32(skb, RTA_MARK, rt->rt_mark)) + if (fl4->flowi4_mark && + nla_put_be32(skb, RTA_MARK, fl4->flowi4_mark)) goto nla_put_failure; error = rt->dst.error; - if (peer) { - inet_peer_refcheck(rt->peer); - id = atomic_read(&peer->ip_id_count) & 0xffff; - if (peer->tcp_ts_stamp) { - ts = peer->tcp_ts; - tsage = get_seconds() - peer->tcp_ts_stamp; - } - expires = ACCESS_ONCE(peer->pmtu_expires); - if (expires) { - if (time_before(jiffies, expires)) - expires -= jiffies; - else - expires = 0; - } + expires = rt->dst.expires; + if (expires) { + if (time_before(jiffies, expires)) + expires -= jiffies; + else + expires = 0; } if (rt_is_input_route(rt)) { -#ifdef CONFIG_IP_MROUTE - __be32 dst = rt->rt_dst; - - if (ipv4_is_multicast(dst) && !ipv4_is_local_multicast(dst) && - IPV4_DEVCONF_ALL(net, MC_FORWARDING)) { - int err = ipmr_get_route(net, skb, - rt->rt_src, rt->rt_dst, - r, nowait); - if (err <= 0) { - if (!nowait) { - if (err == 0) - return 0; - goto nla_put_failure; - } else { - if (err == -EMSGSIZE) - goto nla_put_failure; - error = err; - } - } - } else -#endif - if (nla_put_u32(skb, RTA_IIF, rt->rt_iif)) - goto nla_put_failure; + if (nla_put_u32(skb, RTA_IIF, rt->rt_iif)) + goto nla_put_failure; } - if (rtnl_put_cacheinfo(skb, &rt->dst, id, ts, tsage, - expires, error) < 0) + if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, error) < 0) goto nla_put_failure; return nlmsg_end(skb, nlh); @@ -3068,6 +2173,7 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void struct rtmsg *rtm; struct nlattr *tb[RTA_MAX+1]; struct rtable *rt = NULL; + struct flowi4 fl4; __be32 dst = 0; __be32 src = 0; u32 iif; @@ -3102,6 +2208,13 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void iif = tb[RTA_IIF] ? nla_get_u32(tb[RTA_IIF]) : 0; mark = tb[RTA_MARK] ? nla_get_u32(tb[RTA_MARK]) : 0; + memset(&fl4, 0, sizeof(fl4)); + fl4.daddr = dst; + fl4.saddr = src; + fl4.flowi4_tos = rtm->rtm_tos; + fl4.flowi4_oif = tb[RTA_OIF] ? nla_get_u32(tb[RTA_OIF]) : 0; + fl4.flowi4_mark = mark; + if (iif) { struct net_device *dev; @@ -3122,13 +2235,6 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void if (err == 0 && rt->dst.error) err = -rt->dst.error; } else { - struct flowi4 fl4 = { - .daddr = dst, - .saddr = src, - .flowi4_tos = rtm->rtm_tos, - .flowi4_oif = tb[RTA_OIF] ? 
nla_get_u32(tb[RTA_OIF]) : 0, - .flowi4_mark = mark, - }; rt = ip_route_output_key(net, &fl4); err = 0; @@ -3143,7 +2249,8 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void if (rtm->rtm_flags & RTM_F_NOTIFY) rt->rt_flags |= RTCF_NOTIFY; - err = rt_fill_info(net, skb, NETLINK_CB(in_skb).pid, nlh->nlmsg_seq, + err = rt_fill_info(net, dst, src, &fl4, skb, + NETLINK_CB(in_skb).pid, nlh->nlmsg_seq, RTM_NEWROUTE, 0, 0); if (err <= 0) goto errout_free; @@ -3159,43 +2266,6 @@ errout_free: int ip_rt_dump(struct sk_buff *skb, struct netlink_callback *cb) { - struct rtable *rt; - int h, s_h; - int idx, s_idx; - struct net *net; - - net = sock_net(skb->sk); - - s_h = cb->args[0]; - if (s_h < 0) - s_h = 0; - s_idx = idx = cb->args[1]; - for (h = s_h; h <= rt_hash_mask; h++, s_idx = 0) { - if (!rt_hash_table[h].chain) - continue; - rcu_read_lock_bh(); - for (rt = rcu_dereference_bh(rt_hash_table[h].chain), idx = 0; rt; - rt = rcu_dereference_bh(rt->dst.rt_next), idx++) { - if (!net_eq(dev_net(rt->dst.dev), net) || idx < s_idx) - continue; - if (rt_is_expired(rt)) - continue; - skb_dst_set_noref(skb, &rt->dst); - if (rt_fill_info(net, skb, NETLINK_CB(cb->skb).pid, - cb->nlh->nlmsg_seq, RTM_NEWROUTE, - 1, NLM_F_MULTI) <= 0) { - skb_dst_drop(skb); - rcu_read_unlock_bh(); - goto done; - } - skb_dst_drop(skb); - } - rcu_read_unlock_bh(); - } - -done: - cb->args[0] = h; - cb->args[1] = idx; return skb->len; } @@ -3400,26 +2470,34 @@ static __net_initdata struct pernet_operations rt_genid_ops = { .init = rt_genid_init, }; +static int __net_init ipv4_inetpeer_init(struct net *net) +{ + struct inet_peer_base *bp = kmalloc(sizeof(*bp), GFP_KERNEL); -#ifdef CONFIG_IP_ROUTE_CLASSID -struct ip_rt_acct __percpu *ip_rt_acct __read_mostly; -#endif /* CONFIG_IP_ROUTE_CLASSID */ + if (!bp) + return -ENOMEM; + inet_peer_base_init(bp); + net->ipv4.peers = bp; + return 0; +} -static __initdata unsigned long rhash_entries; -static int __init set_rhash_entries(char *str) +static void __net_exit ipv4_inetpeer_exit(struct net *net) { - ssize_t ret; + struct inet_peer_base *bp = net->ipv4.peers; - if (!str) - return 0; + net->ipv4.peers = NULL; + inetpeer_invalidate_tree(bp); + kfree(bp); +} - ret = kstrtoul(str, 0, &rhash_entries); - if (ret) - return 0; +static __net_initdata struct pernet_operations ipv4_inetpeer_ops = { + .init = ipv4_inetpeer_init, + .exit = ipv4_inetpeer_exit, +}; - return 1; -} -__setup("rhash_entries=", set_rhash_entries); +#ifdef CONFIG_IP_ROUTE_CLASSID +struct ip_rt_acct __percpu *ip_rt_acct __read_mostly; +#endif /* CONFIG_IP_ROUTE_CLASSID */ int __init ip_rt_init(void) { @@ -3443,31 +2521,12 @@ int __init ip_rt_init(void) if (dst_entries_init(&ipv4_dst_blackhole_ops) < 0) panic("IP: failed to allocate ipv4_dst_blackhole_ops counter\n"); - rt_hash_table = (struct rt_hash_bucket *) - alloc_large_system_hash("IP route cache", - sizeof(struct rt_hash_bucket), - rhash_entries, - (totalram_pages >= 128 * 1024) ? - 15 : 17, - 0, - &rt_hash_log, - &rt_hash_mask, - 0, - rhash_entries ? 
0 : 512 * 1024); - memset(rt_hash_table, 0, (rt_hash_mask + 1) * sizeof(struct rt_hash_bucket)); - rt_hash_lock_init(); - - ipv4_dst_ops.gc_thresh = (rt_hash_mask + 1); - ip_rt_max_size = (rt_hash_mask + 1) * 16; + ipv4_dst_ops.gc_thresh = ~0; + ip_rt_max_size = INT_MAX; devinet_init(); ip_fib_init(); - INIT_DELAYED_WORK_DEFERRABLE(&expires_work, rt_worker_func); - expires_ljiffies = jiffies; - schedule_delayed_work(&expires_work, - net_random() % ip_rt_gc_interval + ip_rt_gc_interval); - if (ip_rt_proc_init()) pr_err("Unable to create route proc files\n"); #ifdef CONFIG_XFRM @@ -3480,6 +2539,7 @@ int __init ip_rt_init(void) register_pernet_subsys(&sysctl_route_ops); #endif register_pernet_subsys(&rt_genid_ops); + register_pernet_subsys(&ipv4_inetpeer_ops); return rc; } diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index eab2a7fb15d1..650e1528e1e6 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -293,7 +293,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb, /* check for timestamp cookie support */ memset(&tcp_opt, 0, sizeof(tcp_opt)); - tcp_parse_options(skb, &tcp_opt, &hash_location, 0); + tcp_parse_options(skb, &tcp_opt, &hash_location, 0, NULL); if (!cookie_check_timestamp(&tcp_opt, &ecn_ok)) goto out; diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index ef32956ed655..5840c3255721 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -301,6 +301,13 @@ static struct ctl_table ipv4_table[] = { .proc_handler = proc_dointvec }, { + .procname = "ip_early_demux", + .data = &sysctl_ip_early_demux, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec + }, + { .procname = "ip_dynaddr", .data = &sysctl_ip_dynaddr, .maxlen = sizeof(int), @@ -360,6 +367,13 @@ static struct ctl_table ipv4_table[] = { }, #endif { + .procname = "tcp_fastopen", + .data = &sysctl_tcp_fastopen, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec, + }, + { .procname = "tcp_tw_recycle", .data = &tcp_death_row.sysctl_tw_recycle, .maxlen = sizeof(int), @@ -591,6 +605,20 @@ static struct ctl_table ipv4_table[] = { .mode = 0644, .proc_handler = proc_dointvec }, + { + .procname = "tcp_limit_output_bytes", + .data = &sysctl_tcp_limit_output_bytes, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec + }, + { + .procname = "tcp_challenge_ack_limit", + .data = &sysctl_tcp_challenge_ack_limit, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec + }, #ifdef CONFIG_NET_DMA { .procname = "tcp_dma_copybreak", diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 3ba605f60e4e..581ecf02c6b5 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -270,6 +270,7 @@ #include <linux/slab.h> #include <net/icmp.h> +#include <net/inet_common.h> #include <net/tcp.h> #include <net/xfrm.h> #include <net/ip.h> @@ -376,6 +377,7 @@ void tcp_init_sock(struct sock *sk) skb_queue_head_init(&tp->out_of_order_queue); tcp_init_xmit_timers(sk); tcp_prequeue_init(tp); + INIT_LIST_HEAD(&tp->tsq_node); icsk->icsk_rto = TCP_TIMEOUT_INIT; tp->mdev = TCP_TIMEOUT_INIT; @@ -796,6 +798,10 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now, inet_csk(sk)->icsk_ext_hdr_len - tp->tcp_header_len); + /* TSQ : try to have two TSO segments in flight */ + xmit_size_goal = min_t(u32, xmit_size_goal, + sysctl_tcp_limit_output_bytes >> 1); + xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal); /* We try hard to avoid divides here */ @@ -977,26 +983,67 @@ static inline int 
select_size(const struct sock *sk, bool sg) return tmp; } +void tcp_free_fastopen_req(struct tcp_sock *tp) +{ + if (tp->fastopen_req != NULL) { + kfree(tp->fastopen_req); + tp->fastopen_req = NULL; + } +} + +static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *size) +{ + struct tcp_sock *tp = tcp_sk(sk); + int err, flags; + + if (!(sysctl_tcp_fastopen & TFO_CLIENT_ENABLE)) + return -EOPNOTSUPP; + if (tp->fastopen_req != NULL) + return -EALREADY; /* Another Fast Open is in progress */ + + tp->fastopen_req = kzalloc(sizeof(struct tcp_fastopen_request), + sk->sk_allocation); + if (unlikely(tp->fastopen_req == NULL)) + return -ENOBUFS; + tp->fastopen_req->data = msg; + + flags = (msg->msg_flags & MSG_DONTWAIT) ? O_NONBLOCK : 0; + err = __inet_stream_connect(sk->sk_socket, msg->msg_name, + msg->msg_namelen, flags); + *size = tp->fastopen_req->copied; + tcp_free_fastopen_req(tp); + return err; +} + int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, size_t size) { struct iovec *iov; struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; - int iovlen, flags, err, copied; - int mss_now = 0, size_goal; + int iovlen, flags, err, copied = 0; + int mss_now = 0, size_goal, copied_syn = 0, offset = 0; bool sg; long timeo; lock_sock(sk); flags = msg->msg_flags; + if (flags & MSG_FASTOPEN) { + err = tcp_sendmsg_fastopen(sk, msg, &copied_syn); + if (err == -EINPROGRESS && copied_syn > 0) + goto out; + else if (err) + goto out_err; + offset = copied_syn; + } + timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); /* Wait for a connection to finish. */ if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) if ((err = sk_stream_wait_connect(sk, &timeo)) != 0) - goto out_err; + goto do_error; if (unlikely(tp->repair)) { if (tp->repair_queue == TCP_RECV_QUEUE) { @@ -1032,6 +1079,15 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, unsigned char __user *from = iov->iov_base; iov++; + if (unlikely(offset > 0)) { /* Skip bytes copied in SYN */ + if (offset >= seglen) { + offset -= seglen; + continue; + } + seglen -= offset; + from += offset; + offset = 0; + } while (seglen > 0) { int copy = 0; @@ -1194,7 +1250,7 @@ out: if (copied && likely(!tp->repair)) tcp_push(sk, flags, mss_now, tp->nonagle); release_sock(sk); - return copied; + return copied + copied_syn; do_fault: if (!skb->len) { @@ -1207,7 +1263,7 @@ do_fault: } do_error: - if (copied) + if (copied + copied_syn) goto out; out_err: err = sk_stream_error(sk, flags, err); @@ -3310,8 +3366,7 @@ EXPORT_SYMBOL(tcp_md5_hash_key); #endif -/** - * Each Responder maintains up to two secret values concurrently for +/* Each Responder maintains up to two secret values concurrently for * efficient secret rollover. Each secret value has 4 states: * * Generating. 
(tcp_secret_generating != tcp_secret_primary) @@ -3563,6 +3618,8 @@ void __init tcp_init(void) pr_info("Hash tables configured (established %u bind %u)\n", tcp_hashinfo.ehash_mask + 1, tcp_hashinfo.bhash_size); + tcp_metrics_init(); + tcp_register_congestion_control(&tcp_reno); memset(&tcp_secret_one.secrets[0], 0, sizeof(tcp_secret_one.secrets)); @@ -3573,4 +3630,5 @@ void __init tcp_init(void) tcp_secret_primary = &tcp_secret_one; tcp_secret_retiring = &tcp_secret_two; tcp_secret_secondary = &tcp_secret_two; + tcp_tasklet_init(); } diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c index 04dbd7ae7c62..4d4db16e336e 100644 --- a/net/ipv4/tcp_cong.c +++ b/net/ipv4/tcp_cong.c @@ -307,6 +307,7 @@ EXPORT_SYMBOL_GPL(tcp_is_cwnd_limited); void tcp_slow_start(struct tcp_sock *tp) { int cnt; /* increase in packets */ + unsigned int delta = 0; /* RFC3465: ABC Slow start * Increase only after a full MSS of bytes is acked @@ -333,9 +334,9 @@ void tcp_slow_start(struct tcp_sock *tp) tp->snd_cwnd_cnt += cnt; while (tp->snd_cwnd_cnt >= tp->snd_cwnd) { tp->snd_cwnd_cnt -= tp->snd_cwnd; - if (tp->snd_cwnd < tp->snd_cwnd_clamp) - tp->snd_cwnd++; + delta++; } + tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp); } EXPORT_SYMBOL_GPL(tcp_slow_start); diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c new file mode 100644 index 000000000000..a7f729c409d7 --- /dev/null +++ b/net/ipv4/tcp_fastopen.c @@ -0,0 +1,11 @@ +#include <linux/init.h> +#include <linux/kernel.h> + +int sysctl_tcp_fastopen; + +static int __init tcp_fastopen_init(void) +{ + return 0; +} + +late_initcall(tcp_fastopen_init); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index b224eb8bce8b..3e07a64ca44e 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -88,12 +88,14 @@ int sysctl_tcp_app_win __read_mostly = 31; int sysctl_tcp_adv_win_scale __read_mostly = 1; EXPORT_SYMBOL(sysctl_tcp_adv_win_scale); +/* rfc5961 challenge ack rate limiting */ +int sysctl_tcp_challenge_ack_limit = 100; + int sysctl_tcp_stdurg __read_mostly; int sysctl_tcp_rfc1337 __read_mostly; int sysctl_tcp_max_orphans __read_mostly = NR_FILE; int sysctl_tcp_frto __read_mostly = 2; int sysctl_tcp_frto_response __read_mostly; -int sysctl_tcp_nometrics_save __read_mostly; int sysctl_tcp_thin_dupack __read_mostly; @@ -701,7 +703,7 @@ static void tcp_rtt_estimator(struct sock *sk, const __u32 mrtt) /* Calculate rto without backoff. This is the second half of Van Jacobson's * routine referred to above. */ -static inline void tcp_set_rto(struct sock *sk) +void tcp_set_rto(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); /* Old crap is replaced with new one. 8) @@ -728,109 +730,6 @@ static inline void tcp_set_rto(struct sock *sk) tcp_bound_rto(sk); } -/* Save metrics learned by this TCP session. - This function is called only, when TCP finishes successfully - i.e. when it enters TIME-WAIT or goes from LAST-ACK to CLOSE. - */ -void tcp_update_metrics(struct sock *sk) -{ - struct tcp_sock *tp = tcp_sk(sk); - struct dst_entry *dst = __sk_dst_get(sk); - - if (sysctl_tcp_nometrics_save) - return; - - dst_confirm(dst); - - if (dst && (dst->flags & DST_HOST)) { - const struct inet_connection_sock *icsk = inet_csk(sk); - int m; - unsigned long rtt; - - if (icsk->icsk_backoff || !tp->srtt) { - /* This session failed to estimate rtt. Why? - * Probably, no packets returned in time. - * Reset our results. 
- */ - if (!(dst_metric_locked(dst, RTAX_RTT))) - dst_metric_set(dst, RTAX_RTT, 0); - return; - } - - rtt = dst_metric_rtt(dst, RTAX_RTT); - m = rtt - tp->srtt; - - /* If newly calculated rtt larger than stored one, - * store new one. Otherwise, use EWMA. Remember, - * rtt overestimation is always better than underestimation. - */ - if (!(dst_metric_locked(dst, RTAX_RTT))) { - if (m <= 0) - set_dst_metric_rtt(dst, RTAX_RTT, tp->srtt); - else - set_dst_metric_rtt(dst, RTAX_RTT, rtt - (m >> 3)); - } - - if (!(dst_metric_locked(dst, RTAX_RTTVAR))) { - unsigned long var; - if (m < 0) - m = -m; - - /* Scale deviation to rttvar fixed point */ - m >>= 1; - if (m < tp->mdev) - m = tp->mdev; - - var = dst_metric_rtt(dst, RTAX_RTTVAR); - if (m >= var) - var = m; - else - var -= (var - m) >> 2; - - set_dst_metric_rtt(dst, RTAX_RTTVAR, var); - } - - if (tcp_in_initial_slowstart(tp)) { - /* Slow start still did not finish. */ - if (dst_metric(dst, RTAX_SSTHRESH) && - !dst_metric_locked(dst, RTAX_SSTHRESH) && - (tp->snd_cwnd >> 1) > dst_metric(dst, RTAX_SSTHRESH)) - dst_metric_set(dst, RTAX_SSTHRESH, tp->snd_cwnd >> 1); - if (!dst_metric_locked(dst, RTAX_CWND) && - tp->snd_cwnd > dst_metric(dst, RTAX_CWND)) - dst_metric_set(dst, RTAX_CWND, tp->snd_cwnd); - } else if (tp->snd_cwnd > tp->snd_ssthresh && - icsk->icsk_ca_state == TCP_CA_Open) { - /* Cong. avoidance phase, cwnd is reliable. */ - if (!dst_metric_locked(dst, RTAX_SSTHRESH)) - dst_metric_set(dst, RTAX_SSTHRESH, - max(tp->snd_cwnd >> 1, tp->snd_ssthresh)); - if (!dst_metric_locked(dst, RTAX_CWND)) - dst_metric_set(dst, RTAX_CWND, - (dst_metric(dst, RTAX_CWND) + - tp->snd_cwnd) >> 1); - } else { - /* Else slow start did not finish, cwnd is non-sense, - ssthresh may be also invalid. - */ - if (!dst_metric_locked(dst, RTAX_CWND)) - dst_metric_set(dst, RTAX_CWND, - (dst_metric(dst, RTAX_CWND) + - tp->snd_ssthresh) >> 1); - if (dst_metric(dst, RTAX_SSTHRESH) && - !dst_metric_locked(dst, RTAX_SSTHRESH) && - tp->snd_ssthresh > dst_metric(dst, RTAX_SSTHRESH)) - dst_metric_set(dst, RTAX_SSTHRESH, tp->snd_ssthresh); - } - - if (!dst_metric_locked(dst, RTAX_REORDERING)) { - if (dst_metric(dst, RTAX_REORDERING) < tp->reordering && - tp->reordering != sysctl_tcp_reordering) - dst_metric_set(dst, RTAX_REORDERING, tp->reordering); - } - } -} - __u32 tcp_init_cwnd(const struct tcp_sock *tp, const struct dst_entry *dst) { __u32 cwnd = (dst ? dst_metric(dst, RTAX_INITCWND) : 0); @@ -867,7 +766,7 @@ void tcp_enter_cwr(struct sock *sk, const int set_ssthresh) * Packet counting of FACK is based on in-order assumptions, therefore TCP * disables it when reordering is detected */ -static void tcp_disable_fack(struct tcp_sock *tp) +void tcp_disable_fack(struct tcp_sock *tp) { /* RFC3517 uses different metric in lost marker => reset on change */ if (tcp_is_fack(tp)) @@ -881,86 +780,6 @@ static void tcp_dsack_seen(struct tcp_sock *tp) tp->rx_opt.sack_ok |= TCP_DSACK_SEEN; } -/* Initialize metrics on socket. */ - -static void tcp_init_metrics(struct sock *sk) -{ - struct tcp_sock *tp = tcp_sk(sk); - struct dst_entry *dst = __sk_dst_get(sk); - - if (dst == NULL) - goto reset; - - dst_confirm(dst); - - if (dst_metric_locked(dst, RTAX_CWND)) - tp->snd_cwnd_clamp = dst_metric(dst, RTAX_CWND); - if (dst_metric(dst, RTAX_SSTHRESH)) { - tp->snd_ssthresh = dst_metric(dst, RTAX_SSTHRESH); - if (tp->snd_ssthresh > tp->snd_cwnd_clamp) - tp->snd_ssthresh = tp->snd_cwnd_clamp; - } else { - /* ssthresh may have been reduced unnecessarily during. - * 3WHS. 
Restore it back to its initial default. - */ - tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; - } - if (dst_metric(dst, RTAX_REORDERING) && - tp->reordering != dst_metric(dst, RTAX_REORDERING)) { - tcp_disable_fack(tp); - tcp_disable_early_retrans(tp); - tp->reordering = dst_metric(dst, RTAX_REORDERING); - } - - if (dst_metric(dst, RTAX_RTT) == 0 || tp->srtt == 0) - goto reset; - - /* Initial rtt is determined from SYN,SYN-ACK. - * The segment is small and rtt may appear much - * less than real one. Use per-dst memory - * to make it more realistic. - * - * A bit of theory. RTT is time passed after "normal" sized packet - * is sent until it is ACKed. In normal circumstances sending small - * packets force peer to delay ACKs and calculation is correct too. - * The algorithm is adaptive and, provided we follow specs, it - * NEVER underestimate RTT. BUT! If peer tries to make some clever - * tricks sort of "quick acks" for time long enough to decrease RTT - * to low value, and then abruptly stops to do it and starts to delay - * ACKs, wait for troubles. - */ - if (dst_metric_rtt(dst, RTAX_RTT) > tp->srtt) { - tp->srtt = dst_metric_rtt(dst, RTAX_RTT); - tp->rtt_seq = tp->snd_nxt; - } - if (dst_metric_rtt(dst, RTAX_RTTVAR) > tp->mdev) { - tp->mdev = dst_metric_rtt(dst, RTAX_RTTVAR); - tp->mdev_max = tp->rttvar = max(tp->mdev, tcp_rto_min(sk)); - } - tcp_set_rto(sk); -reset: - if (tp->srtt == 0) { - /* RFC6298: 5.7 We've failed to get a valid RTT sample from - * 3WHS. This is most likely due to retransmission, - * including spurious one. Reset the RTO back to 3secs - * from the more aggressive 1sec to avoid more spurious - * retransmission. - */ - tp->mdev = tp->mdev_max = tp->rttvar = TCP_TIMEOUT_FALLBACK; - inet_csk(sk)->icsk_rto = TCP_TIMEOUT_FALLBACK; - } - /* Cut cwnd down to 1 per RFC5681 if SYN or SYN-ACK has been - * retransmitted. In light of RFC6298 more aggressive 1sec - * initRTO, we only reset cwnd when more than 1 SYN/SYN-ACK - * retransmission has occurred. - */ - if (tp->total_retrans > 1) - tp->snd_cwnd = 1; - else - tp->snd_cwnd = tcp_init_cwnd(tp, dst); - tp->snd_cwnd_stamp = tcp_time_stamp; -} - static void tcp_update_reordering(struct sock *sk, const int metric, const int ts) { @@ -2702,7 +2521,7 @@ static void tcp_cwnd_down(struct sock *sk, int flag) /* Nothing was retransmitted or returned timestamp is less * than timestamp of the first retransmission. 
*/ -static inline int tcp_packet_delayed(const struct tcp_sock *tp) +static inline bool tcp_packet_delayed(const struct tcp_sock *tp) { return !tp->retrans_stamp || (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && @@ -2763,7 +2582,7 @@ static void tcp_undo_cwr(struct sock *sk, const bool undo_ssthresh) tp->snd_cwnd_stamp = tcp_time_stamp; } -static inline int tcp_may_undo(const struct tcp_sock *tp) +static inline bool tcp_may_undo(const struct tcp_sock *tp) { return tp->undo_marker && (!tp->undo_retrans || tcp_packet_delayed(tp)); } @@ -3552,13 +3371,13 @@ static void tcp_ack_probe(struct sock *sk) } } -static inline int tcp_ack_is_dubious(const struct sock *sk, const int flag) +static inline bool tcp_ack_is_dubious(const struct sock *sk, const int flag) { return !(flag & FLAG_NOT_DUP) || (flag & FLAG_CA_ALERT) || inet_csk(sk)->icsk_ca_state != TCP_CA_Open; } -static inline int tcp_may_raise_cwnd(const struct sock *sk, const int flag) +static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag) { const struct tcp_sock *tp = tcp_sk(sk); return (!(flag & FLAG_ECE) || tp->snd_cwnd < tp->snd_ssthresh) && @@ -3568,7 +3387,7 @@ static inline int tcp_may_raise_cwnd(const struct sock *sk, const int flag) /* Check that window update is acceptable. * The function assumes that snd_una<=ack<=snd_next. */ -static inline int tcp_may_update_window(const struct tcp_sock *tp, +static inline bool tcp_may_update_window(const struct tcp_sock *tp, const u32 ack, const u32 ack_seq, const u32 nwin) { @@ -3869,9 +3688,11 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) tcp_cong_avoid(sk, ack, prior_in_flight); } - if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP)) - dst_confirm(__sk_dst_get(sk)); - + if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP)) { + struct dst_entry *dst = __sk_dst_get(sk); + if (dst) + dst_confirm(dst); + } return 1; no_queue: @@ -3911,7 +3732,8 @@ old_ack: * the fast version below fails. */ void tcp_parse_options(const struct sk_buff *skb, struct tcp_options_received *opt_rx, - const u8 **hvpp, int estab) + const u8 **hvpp, int estab, + struct tcp_fastopen_cookie *foc) { const unsigned char *ptr; const struct tcphdr *th = tcp_hdr(skb); @@ -4018,8 +3840,25 @@ void tcp_parse_options(const struct sk_buff *skb, struct tcp_options_received *o break; } break; - } + case TCPOPT_EXP: + /* Fast Open option shares code 254 using a + * 16 bits magic number. It's valid only in + * SYN or SYN-ACK with an even size. 
+ */ + if (opsize < TCPOLEN_EXP_FASTOPEN_BASE || + get_unaligned_be16(ptr) != TCPOPT_FASTOPEN_MAGIC || + foc == NULL || !th->syn || (opsize & 1)) + break; + foc->len = opsize - TCPOLEN_EXP_FASTOPEN_BASE; + if (foc->len >= TCP_FASTOPEN_COOKIE_MIN && + foc->len <= TCP_FASTOPEN_COOKIE_MAX) + memcpy(foc->val, ptr + 2, foc->len); + else if (foc->len != 0) + foc->len = -1; + break; + + } ptr += opsize-2; length -= opsize; } @@ -4061,7 +3900,7 @@ static bool tcp_fast_parse_options(const struct sk_buff *skb, if (tcp_parse_aligned_timestamp(tp, th)) return true; } - tcp_parse_options(skb, &tp->rx_opt, hvpp, 1); + tcp_parse_options(skb, &tp->rx_opt, hvpp, 1, NULL); return true; } @@ -4167,7 +4006,7 @@ static int tcp_disordered_ack(const struct sock *sk, const struct sk_buff *skb) (s32)(tp->rx_opt.ts_recent - tp->rx_opt.rcv_tsval) <= (inet_csk(sk)->icsk_rto * 1024) / HZ); } -static inline int tcp_paws_discard(const struct sock *sk, +static inline bool tcp_paws_discard(const struct sock *sk, const struct sk_buff *skb) { const struct tcp_sock *tp = tcp_sk(sk); @@ -4189,7 +4028,7 @@ static inline int tcp_paws_discard(const struct sock *sk, * (borrowed from freebsd) */ -static inline int tcp_sequence(const struct tcp_sock *tp, u32 seq, u32 end_seq) +static inline bool tcp_sequence(const struct tcp_sock *tp, u32 seq, u32 end_seq) { return !before(end_seq, tp->rcv_wup) && !after(seq, tp->rcv_nxt + tcp_receive_window(tp)); @@ -4579,8 +4418,8 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) TCP_ECN_check_ce(tp, skb); - if (tcp_try_rmem_schedule(sk, skb->truesize)) { - /* TODO: should increment a counter */ + if (unlikely(tcp_try_rmem_schedule(sk, skb->truesize))) { + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFODROP); __kfree_skb(skb); return; } @@ -4589,6 +4428,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) tp->pred_flags = 0; inet_csk_schedule_ack(sk); + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOQUEUE); SOCK_DEBUG(sk, "out of order segment: rcv_next %X seq %X - %X\n", tp->rcv_nxt, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq); @@ -4642,6 +4482,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) if (skb1 && before(seq, TCP_SKB_CB(skb1)->end_seq)) { if (!after(end_seq, TCP_SKB_CB(skb1)->end_seq)) { /* All the bits are present. Drop. 
*/ + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOMERGE); __kfree_skb(skb); skb = NULL; tcp_dsack_set(sk, seq, end_seq); @@ -4680,6 +4521,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) __skb_unlink(skb1, &tp->out_of_order_queue); tcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq, TCP_SKB_CB(skb1)->end_seq); + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOMERGE); __kfree_skb(skb1); } @@ -5372,7 +5214,7 @@ static __sum16 __tcp_checksum_complete_user(struct sock *sk, return result; } -static inline int tcp_checksum_complete_user(struct sock *sk, +static inline bool tcp_checksum_complete_user(struct sock *sk, struct sk_buff *skb) { return !skb_csum_unnecessary(skb) && @@ -5426,11 +5268,28 @@ out: } #endif /* CONFIG_NET_DMA */ +static void tcp_send_challenge_ack(struct sock *sk) +{ + /* unprotected vars, we dont care of overwrites */ + static u32 challenge_timestamp; + static unsigned int challenge_count; + u32 now = jiffies / HZ; + + if (now != challenge_timestamp) { + challenge_timestamp = now; + challenge_count = 0; + } + if (++challenge_count <= sysctl_tcp_challenge_ack_limit) { + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK); + tcp_send_ack(sk); + } +} + /* Does PAWS and seqno based validation of an incoming segment, flags will * play significant role here. */ -static int tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, - const struct tcphdr *th, int syn_inerr) +static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, + const struct tcphdr *th, int syn_inerr) { const u8 *hash_location; struct tcp_sock *tp = tcp_sk(sk); @@ -5455,14 +5314,26 @@ static int tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, * an acknowledgment should be sent in reply (unless the RST * bit is set, if so drop the segment and return)". */ - if (!th->rst) + if (!th->rst) { + if (th->syn) + goto syn_challenge; tcp_send_dupack(sk, skb); + } goto discard; } /* Step 2: check RST bit */ if (th->rst) { - tcp_reset(sk); + /* RFC 5961 3.2 : + * If sequence number exactly matches RCV.NXT, then + * RESET the connection + * else + * Send a challenge ACK + */ + if (TCP_SKB_CB(skb)->seq == tp->rcv_nxt) + tcp_reset(sk); + else + tcp_send_challenge_ack(sk); goto discard; } @@ -5473,20 +5344,23 @@ static int tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, /* step 3: check security and precedence [ignored] */ - /* step 4: Check for a SYN in window. */ - if (th->syn && !before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) { + /* step 4: Check for a SYN + * RFC 5691 4.2 : Send a challenge ack + */ + if (th->syn) { +syn_challenge: if (syn_inerr) TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS); - NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONSYN); - tcp_reset(sk); - return -1; + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNCHALLENGE); + tcp_send_challenge_ack(sk); + goto discard; } - return 1; + return true; discard: __kfree_skb(skb); - return 0; + return false; } /* @@ -5516,7 +5390,6 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th, unsigned int len) { struct tcp_sock *tp = tcp_sk(sk); - int res; /* * Header prediction. @@ -5693,9 +5566,8 @@ slow_path: * Standard slow path. 
*/ - res = tcp_validate_incoming(sk, skb, th, 1); - if (res <= 0) - return -res; + if (!tcp_validate_incoming(sk, skb, th, 1)) + return 0; step5: if (th->ack && tcp_ack(sk, skb, FLAG_SLOWPATH) < 0) @@ -5729,8 +5601,10 @@ void tcp_finish_connect(struct sock *sk, struct sk_buff *skb) tcp_set_state(sk, TCP_ESTABLISHED); - if (skb != NULL) + if (skb != NULL) { + sk->sk_rx_dst = dst_clone(skb_dst(skb)); security_inet_conn_established(sk, skb); + } /* Make sure socket is routed, for correct metrics. */ icsk->icsk_af_ops->rebuild_header(sk); @@ -5760,6 +5634,45 @@ void tcp_finish_connect(struct sock *sk, struct sk_buff *skb) } } +static bool tcp_rcv_fastopen_synack(struct sock *sk, struct sk_buff *synack, + struct tcp_fastopen_cookie *cookie) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *data = tp->syn_data ? tcp_write_queue_head(sk) : NULL; + u16 mss = tp->rx_opt.mss_clamp; + bool syn_drop; + + if (mss == tp->rx_opt.user_mss) { + struct tcp_options_received opt; + const u8 *hash_location; + + /* Get original SYNACK MSS value if user MSS sets mss_clamp */ + tcp_clear_options(&opt); + opt.user_mss = opt.mss_clamp = 0; + tcp_parse_options(synack, &opt, &hash_location, 0, NULL); + mss = opt.mss_clamp; + } + + if (!tp->syn_fastopen) /* Ignore an unsolicited cookie */ + cookie->len = -1; + + /* The SYN-ACK neither has cookie nor acknowledges the data. Presumably + * the remote receives only the retransmitted (regular) SYNs: either + * the original SYN-data or the corresponding SYN-ACK is lost. + */ + syn_drop = (cookie->len <= 0 && data && + inet_csk(sk)->icsk_retransmits); + + tcp_fastopen_cache_set(sk, mss, cookie, syn_drop); + + if (data) { /* Retransmit unacked data in SYN */ + tcp_retransmit_skb(sk, data); + tcp_rearm_rto(sk); + return true; + } + return false; +} + static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th, unsigned int len) { @@ -5767,9 +5680,10 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, struct inet_connection_sock *icsk = inet_csk(sk); struct tcp_sock *tp = tcp_sk(sk); struct tcp_cookie_values *cvp = tp->cookie_values; + struct tcp_fastopen_cookie foc = { .len = -1 }; int saved_clamp = tp->rx_opt.mss_clamp; - tcp_parse_options(skb, &tp->rx_opt, &hash_location, 0); + tcp_parse_options(skb, &tp->rx_opt, &hash_location, 0, &foc); if (th->ack) { /* rfc793: @@ -5779,11 +5693,9 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, * If SEG.ACK =< ISS, or SEG.ACK > SND.NXT, send * a reset (unless the RST bit is set, if so drop * the segment and return)" - * - * We do not send data with SYN, so that RFC-correct - * test reduces to: */ - if (TCP_SKB_CB(skb)->ack_seq != tp->snd_nxt) + if (!after(TCP_SKB_CB(skb)->ack_seq, tp->snd_una) || + after(TCP_SKB_CB(skb)->ack_seq, tp->snd_nxt)) goto reset_and_undo; if (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && @@ -5895,6 +5807,10 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, tcp_finish_connect(sk, skb); + if ((tp->syn_fastopen || tp->syn_data) && + tcp_rcv_fastopen_synack(sk, skb, &foc)) + return -1; + if (sk->sk_write_pending || icsk->icsk_accept_queue.rskq_defer_accept || icsk->icsk_ack.pingpong) { @@ -6013,7 +5929,6 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb, struct tcp_sock *tp = tcp_sk(sk); struct inet_connection_sock *icsk = inet_csk(sk); int queued = 0; - int res; tp->rx_opt.saw_tstamp = 0; @@ -6068,9 +5983,8 @@ int tcp_rcv_state_process(struct sock *sk, 
struct sk_buff *skb, return 0; } - res = tcp_validate_incoming(sk, skb, th, 0); - if (res <= 0) - return -res; + if (!tcp_validate_incoming(sk, skb, th, 0)) + return 0; /* step 5: check the ACK field */ if (th->ack) { @@ -6126,9 +6040,14 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb, case TCP_FIN_WAIT1: if (tp->snd_una == tp->write_seq) { + struct dst_entry *dst; + tcp_set_state(sk, TCP_FIN_WAIT2); sk->sk_shutdown |= SEND_SHUTDOWN; - dst_confirm(__sk_dst_get(sk)); + + dst = __sk_dst_get(sk); + if (dst) + dst_confirm(dst); if (!sock_flag(sk, SOCK_DEAD)) /* Wake up lingering close() */ diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index c8d28c433b2b..3e30548ac32a 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -209,22 +209,8 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) } if (tcp_death_row.sysctl_tw_recycle && - !tp->rx_opt.ts_recent_stamp && fl4->daddr == daddr) { - struct inet_peer *peer = rt_get_peer(rt, fl4->daddr); - /* - * VJ's idea. We save last timestamp seen from - * the destination in peer table, when entering state - * TIME-WAIT * and initialize rx_opt.ts_recent from it, - * when trying new connection. - */ - if (peer) { - inet_peer_refcheck(peer); - if ((u32)get_seconds() - peer->tcp_ts_stamp <= TCP_PAWS_MSL) { - tp->rx_opt.ts_recent_stamp = peer->tcp_ts_stamp; - tp->rx_opt.ts_recent = peer->tcp_ts; - } - } - } + !tp->rx_opt.ts_recent_stamp && fl4->daddr == daddr) + tcp_fetch_timewait_stamp(sk, &rt->dst); inet->inet_dport = usin->sin_port; inet->inet_daddr = daddr; @@ -289,12 +275,15 @@ failure: EXPORT_SYMBOL(tcp_v4_connect); /* - * This routine does path mtu discovery as defined in RFC1191. + * This routine reacts to ICMP_FRAG_NEEDED mtu indications as defined in RFC1191. + * It can be called through tcp_release_cb() if socket was owned by user + * at the time tcp_v4_err() was called to handle ICMP message. */ -static void do_pmtu_discovery(struct sock *sk, const struct iphdr *iph, u32 mtu) +static void tcp_v4_mtu_reduced(struct sock *sk) { struct dst_entry *dst; struct inet_sock *inet = inet_sk(sk); + u32 mtu = tcp_sk(sk)->mtu_info; /* We are not interested in TCP_LISTEN and open_requests (SYN-ACKs * send out by Linux are always <576bytes so they should go through @@ -303,17 +292,10 @@ static void do_pmtu_discovery(struct sock *sk, const struct iphdr *iph, u32 mtu) if (sk->sk_state == TCP_LISTEN) return; - /* We don't check in the destentry if pmtu discovery is forbidden - * on this route. We just assume that no packet_to_big packets - * are send back when pmtu discovery is not active. - * There is a small race when the user changes this flag in the - * route, but I think that's acceptable. - */ - if ((dst = __sk_dst_check(sk, 0)) == NULL) + dst = inet_csk_update_pmtu(sk, mtu); + if (!dst) return; - dst->ops->update_pmtu(dst, mtu); - /* Something is about to be wrong... Remember soft error * for the case, if this connection will not able to recover. */ @@ -335,6 +317,14 @@ static void do_pmtu_discovery(struct sock *sk, const struct iphdr *iph, u32 mtu) } /* else let the usual retransmit timer handle it */ } +static void do_redirect(struct sk_buff *skb, struct sock *sk) +{ + struct dst_entry *dst = __sk_dst_check(sk, 0); + + if (dst) + dst->ops->redirect(dst, sk, skb); +} + /* * This routine is called by the ICMP module when it gets some * sort of error condition. 
If err < 0 then the socket should @@ -386,8 +376,12 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info) bh_lock_sock(sk); /* If too many ICMPs get dropped on busy * servers this needs to be solved differently. + * We do take care of PMTU discovery (RFC1191) special case : + * we can receive locally generated ICMP messages while socket is held. */ - if (sock_owned_by_user(sk)) + if (sock_owned_by_user(sk) && + type != ICMP_DEST_UNREACH && + code != ICMP_FRAG_NEEDED) NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS); if (sk->sk_state == TCP_CLOSE) @@ -408,6 +402,9 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info) } switch (type) { + case ICMP_REDIRECT: + do_redirect(icmp_skb, sk); + goto out; case ICMP_SOURCE_QUENCH: /* Just silently ignore these. */ goto out; @@ -419,8 +416,11 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info) goto out; if (code == ICMP_FRAG_NEEDED) { /* PMTU discovery (RFC1191) */ + tp->mtu_info = info; if (!sock_owned_by_user(sk)) - do_pmtu_discovery(sk, iph, info); + tcp_v4_mtu_reduced(sk); + else + set_bit(TCP_MTU_REDUCED_DEFERRED, &tp->tsq_flags); goto out; } @@ -698,8 +698,8 @@ static void tcp_v4_send_reset(struct sock *sk, struct sk_buff *skb) net = dev_net(skb_dst(skb)->dev); arg.tos = ip_hdr(skb)->tos; - ip_send_reply(net->ipv4.tcp_sock, skb, ip_hdr(skb)->saddr, - &arg, arg.iov[0].iov_len); + ip_send_unicast_reply(net, skb, ip_hdr(skb)->saddr, + ip_hdr(skb)->daddr, &arg, arg.iov[0].iov_len); TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS); TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS); @@ -781,8 +781,8 @@ static void tcp_v4_send_ack(struct sk_buff *skb, u32 seq, u32 ack, if (oif) arg.bound_dev_if = oif; arg.tos = tos; - ip_send_reply(net->ipv4.tcp_sock, skb, ip_hdr(skb)->saddr, - &arg, arg.iov[0].iov_len); + ip_send_unicast_reply(net, skb, ip_hdr(skb)->saddr, + ip_hdr(skb)->daddr, &arg, arg.iov[0].iov_len); TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS); } @@ -825,7 +825,8 @@ static void tcp_v4_reqsk_send_ack(struct sock *sk, struct sk_buff *skb, static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst, struct request_sock *req, struct request_values *rvp, - u16 queue_mapping) + u16 queue_mapping, + bool nocache) { const struct inet_request_sock *ireq = inet_rsk(req); struct flowi4 fl4; @@ -848,7 +849,6 @@ static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst, err = net_xmit_eval(err); } - dst_release(dst); return err; } @@ -856,7 +856,7 @@ static int tcp_v4_rtx_synack(struct sock *sk, struct request_sock *req, struct request_values *rvp) { TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_RETRANSSEGS); - return tcp_v4_send_synack(sk, NULL, req, rvp, 0); + return tcp_v4_send_synack(sk, NULL, req, rvp, 0, false); } /* @@ -1317,7 +1317,7 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb) tcp_clear_options(&tmp_opt); tmp_opt.mss_clamp = TCP_MSS_DEFAULT; tmp_opt.user_mss = tp->rx_opt.user_mss; - tcp_parse_options(skb, &tmp_opt, &hash_location, 0); + tcp_parse_options(skb, &tmp_opt, &hash_location, 0, NULL); if (tmp_opt.cookie_plus > 0 && tmp_opt.saw_tstamp && @@ -1375,7 +1375,6 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb) isn = cookie_v4_init_sequence(sk, skb, &req->mss); req->cookie_ts = tmp_opt.tstamp_ok; } else if (!isn) { - struct inet_peer *peer = NULL; struct flowi4 fl4; /* VJ's idea. 
We save last timestamp seen @@ -1390,12 +1389,8 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb) if (tmp_opt.saw_tstamp && tcp_death_row.sysctl_tw_recycle && (dst = inet_csk_route_req(sk, &fl4, req)) != NULL && - fl4.daddr == saddr && - (peer = rt_get_peer((struct rtable *)dst, fl4.daddr)) != NULL) { - inet_peer_refcheck(peer); - if ((u32)get_seconds() - peer->tcp_ts_stamp < TCP_PAWS_MSL && - (s32)(peer->tcp_ts - req->ts_recent) > - TCP_PAWS_WINDOW) { + fl4.daddr == saddr) { + if (!tcp_peer_is_proven(req, dst, true)) { NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED); goto drop_and_release; } @@ -1404,8 +1399,7 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb) else if (!sysctl_tcp_syncookies && (sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < (sysctl_max_syn_backlog >> 2)) && - (!peer || !peer->tcp_ts_stamp) && - (!dst || !dst_metric(dst, RTAX_RTT))) { + !tcp_peer_is_proven(req, dst, false)) { /* Without syncookies last quarter of * backlog is filled with destinations, * proven to be alive. @@ -1425,7 +1419,8 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb) if (tcp_v4_send_synack(sk, dst, req, (struct request_values *)&tmp_ext, - skb_get_queue_mapping(skb)) || + skb_get_queue_mapping(skb), + want_cookie) || want_cookie) goto drop_and_free; @@ -1623,6 +1618,20 @@ int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb) if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ sock_rps_save_rxhash(sk, skb); + if (sk->sk_rx_dst) { + struct dst_entry *dst = sk->sk_rx_dst; + if (dst->ops->check(dst, 0) == NULL) { + dst_release(dst); + sk->sk_rx_dst = NULL; + } + } + if (unlikely(sk->sk_rx_dst == NULL)) { + struct inet_sock *icsk = inet_sk(sk); + struct rtable *rt = skb_rtable(skb); + + sk->sk_rx_dst = dst_clone(&rt->dst); + icsk->rx_dst_ifindex = inet_iif(skb); + } if (tcp_rcv_established(sk, skb, tcp_hdr(skb), skb->len)) { rsk = sk; goto reset; @@ -1672,6 +1681,49 @@ csum_err: } EXPORT_SYMBOL(tcp_v4_do_rcv); +void tcp_v4_early_demux(struct sk_buff *skb) +{ + struct net *net = dev_net(skb->dev); + const struct iphdr *iph; + const struct tcphdr *th; + struct net_device *dev; + struct sock *sk; + + if (skb->pkt_type != PACKET_HOST) + return; + + if (!pskb_may_pull(skb, ip_hdrlen(skb) + sizeof(struct tcphdr))) + return; + + iph = ip_hdr(skb); + th = (struct tcphdr *) ((char *)iph + ip_hdrlen(skb)); + + if (th->doff < sizeof(struct tcphdr) / 4) + return; + + if (!pskb_may_pull(skb, ip_hdrlen(skb) + th->doff * 4)) + return; + + dev = skb->dev; + sk = __inet_lookup_established(net, &tcp_hashinfo, + iph->saddr, th->source, + iph->daddr, ntohs(th->dest), + dev->ifindex); + if (sk) { + skb->sk = sk; + skb->destructor = sock_edemux; + if (sk->sk_state != TCP_TIME_WAIT) { + struct dst_entry *dst = sk->sk_rx_dst; + struct inet_sock *icsk = inet_sk(sk); + if (dst) + dst = dst_check(dst, 0); + if (dst && + icsk->rx_dst_ifindex == dev->ifindex) + skb_dst_set_noref(skb, dst); + } + } +} + /* * From tcp_input.c */ @@ -1821,40 +1873,10 @@ do_time_wait: goto discard_it; } -struct inet_peer *tcp_v4_get_peer(struct sock *sk, bool *release_it) -{ - struct rtable *rt = (struct rtable *) __sk_dst_get(sk); - struct inet_sock *inet = inet_sk(sk); - struct inet_peer *peer; - - if (!rt || - inet->cork.fl.u.ip4.daddr != inet->inet_daddr) { - peer = inet_getpeer_v4(inet->inet_daddr, 1); - *release_it = true; - } else { - if (!rt->peer) - rt_bind_peer(rt, inet->inet_daddr, 1); - peer = rt->peer; - *release_it = false; - } - - return peer; -} 
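
The tcp_v4_do_rcv()/tcp_v4_early_demux() hunks above cache the input route in sk->sk_rx_dst, revalidate it through dst->ops->check() on every established-state segment, and fall back to a fresh lookup when the cached entry (or its ifindex) no longer matches. The fragment below is a minimal userspace model of that validate-or-refresh pattern, not kernel code; the toy_* struct names and the global generation counter are invented purely for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for dst_entry / sock; all names here are illustrative only. */
struct toy_dst {
	unsigned int genid;	/* bumped whenever routing state changes */
};

struct toy_sock {
	struct toy_dst *rx_dst;	/* cached input route, like sk->sk_rx_dst */
	int rx_dst_ifindex;	/* device the cached route was learned on */
};

static unsigned int route_genid;	/* global "routing table version" */

static struct toy_dst *route_lookup(void)	/* models a full FIB lookup */
{
	struct toy_dst *dst = calloc(1, sizeof(*dst));

	dst->genid = route_genid;
	return dst;
}

static struct toy_dst *dst_check(struct toy_dst *dst)
{
	/* Valid only while the routing generation has not moved on. */
	return (dst && dst->genid == route_genid) ? dst : NULL;
}

static void rcv_one_segment(struct toy_sock *sk, int in_ifindex)
{
	/* Drop a stale or mismatched cached route first. */
	if (sk->rx_dst &&
	    (!dst_check(sk->rx_dst) || sk->rx_dst_ifindex != in_ifindex)) {
		free(sk->rx_dst);		/* models dst_release() */
		sk->rx_dst = NULL;
	}
	if (!sk->rx_dst) {
		sk->rx_dst = route_lookup();	/* slow path: full lookup */
		sk->rx_dst_ifindex = in_ifindex;
		printf("segment on if%d: refreshed cached route\n", in_ifindex);
	} else {
		printf("segment on if%d: reused cached route\n", in_ifindex);
	}
}

int main(void)
{
	struct toy_sock sk = { 0 };

	rcv_one_segment(&sk, 2);	/* first packet: lookup */
	rcv_one_segment(&sk, 2);	/* steady state: cache hit */
	route_genid++;			/* e.g. a route change invalidates caches */
	rcv_one_segment(&sk, 2);	/* cache miss, refresh */
	free(sk.rx_dst);
	return 0;
}

In steady state the per-packet cost is one cheap check() call instead of a route lookup, which is exactly what the early-demux path is buying.
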
-EXPORT_SYMBOL(tcp_v4_get_peer); - -void *tcp_v4_tw_get_peer(struct sock *sk) -{ - const struct inet_timewait_sock *tw = inet_twsk(sk); - - return inet_getpeer_v4(tw->tw_daddr, 1); -} -EXPORT_SYMBOL(tcp_v4_tw_get_peer); - static struct timewait_sock_ops tcp_timewait_sock_ops = { .twsk_obj_size = sizeof(struct tcp_timewait_sock), .twsk_unique = tcp_twsk_unique, .twsk_destructor= tcp_twsk_destructor, - .twsk_getpeer = tcp_v4_tw_get_peer, }; const struct inet_connection_sock_af_ops ipv4_specific = { @@ -1863,7 +1885,6 @@ const struct inet_connection_sock_af_ops ipv4_specific = { .rebuild_header = inet_sk_rebuild_header, .conn_request = tcp_v4_conn_request, .syn_recv_sock = tcp_v4_syn_recv_sock, - .get_peer = tcp_v4_get_peer, .net_header_len = sizeof(struct iphdr), .setsockopt = ip_setsockopt, .getsockopt = ip_getsockopt, @@ -1953,6 +1974,9 @@ void tcp_v4_destroy_sock(struct sock *sk) tp->cookie_values = NULL; } + /* If socket is aborted during connect operation */ + tcp_free_fastopen_req(tp); + sk_sockets_allocated_dec(sk); sock_release_memcg(sk); } @@ -2593,6 +2617,8 @@ struct proto tcp_prot = { .sendmsg = tcp_sendmsg, .sendpage = tcp_sendpage, .backlog_rcv = tcp_v4_do_rcv, + .release_cb = tcp_release_cb, + .mtu_reduced = tcp_v4_mtu_reduced, .hash = inet_hash, .unhash = inet_unhash, .get_port = inet_csk_get_port, @@ -2624,13 +2650,11 @@ EXPORT_SYMBOL(tcp_prot); static int __net_init tcp_sk_init(struct net *net) { - return inet_ctl_sock_create(&net->ipv4.tcp_sock, - PF_INET, SOCK_RAW, IPPROTO_TCP, net); + return 0; } static void __net_exit tcp_sk_exit(struct net *net) { - inet_ctl_sock_destroy(net->ipv4.tcp_sock); } static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list) diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c new file mode 100644 index 000000000000..2288a6399e1e --- /dev/null +++ b/net/ipv4/tcp_metrics.c @@ -0,0 +1,745 @@ +#include <linux/rcupdate.h> +#include <linux/spinlock.h> +#include <linux/jiffies.h> +#include <linux/bootmem.h> +#include <linux/module.h> +#include <linux/cache.h> +#include <linux/slab.h> +#include <linux/init.h> +#include <linux/tcp.h> +#include <linux/hash.h> + +#include <net/inet_connection_sock.h> +#include <net/net_namespace.h> +#include <net/request_sock.h> +#include <net/inetpeer.h> +#include <net/sock.h> +#include <net/ipv6.h> +#include <net/dst.h> +#include <net/tcp.h> + +int sysctl_tcp_nometrics_save __read_mostly; + +enum tcp_metric_index { + TCP_METRIC_RTT, + TCP_METRIC_RTTVAR, + TCP_METRIC_SSTHRESH, + TCP_METRIC_CWND, + TCP_METRIC_REORDERING, + + /* Always last. 
*/ + TCP_METRIC_MAX, +}; + +struct tcp_fastopen_metrics { + u16 mss; + u16 syn_loss:10; /* Recurring Fast Open SYN losses */ + unsigned long last_syn_loss; /* Last Fast Open SYN loss */ + struct tcp_fastopen_cookie cookie; +}; + +struct tcp_metrics_block { + struct tcp_metrics_block __rcu *tcpm_next; + struct inetpeer_addr tcpm_addr; + unsigned long tcpm_stamp; + u32 tcpm_ts; + u32 tcpm_ts_stamp; + u32 tcpm_lock; + u32 tcpm_vals[TCP_METRIC_MAX]; + struct tcp_fastopen_metrics tcpm_fastopen; +}; + +static bool tcp_metric_locked(struct tcp_metrics_block *tm, + enum tcp_metric_index idx) +{ + return tm->tcpm_lock & (1 << idx); +} + +static u32 tcp_metric_get(struct tcp_metrics_block *tm, + enum tcp_metric_index idx) +{ + return tm->tcpm_vals[idx]; +} + +static u32 tcp_metric_get_jiffies(struct tcp_metrics_block *tm, + enum tcp_metric_index idx) +{ + return msecs_to_jiffies(tm->tcpm_vals[idx]); +} + +static void tcp_metric_set(struct tcp_metrics_block *tm, + enum tcp_metric_index idx, + u32 val) +{ + tm->tcpm_vals[idx] = val; +} + +static void tcp_metric_set_msecs(struct tcp_metrics_block *tm, + enum tcp_metric_index idx, + u32 val) +{ + tm->tcpm_vals[idx] = jiffies_to_msecs(val); +} + +static bool addr_same(const struct inetpeer_addr *a, + const struct inetpeer_addr *b) +{ + const struct in6_addr *a6, *b6; + + if (a->family != b->family) + return false; + if (a->family == AF_INET) + return a->addr.a4 == b->addr.a4; + + a6 = (const struct in6_addr *) &a->addr.a6[0]; + b6 = (const struct in6_addr *) &b->addr.a6[0]; + + return ipv6_addr_equal(a6, b6); +} + +struct tcpm_hash_bucket { + struct tcp_metrics_block __rcu *chain; +}; + +static DEFINE_SPINLOCK(tcp_metrics_lock); + +static void tcpm_suck_dst(struct tcp_metrics_block *tm, struct dst_entry *dst) +{ + u32 val; + + tm->tcpm_stamp = jiffies; + + val = 0; + if (dst_metric_locked(dst, RTAX_RTT)) + val |= 1 << TCP_METRIC_RTT; + if (dst_metric_locked(dst, RTAX_RTTVAR)) + val |= 1 << TCP_METRIC_RTTVAR; + if (dst_metric_locked(dst, RTAX_SSTHRESH)) + val |= 1 << TCP_METRIC_SSTHRESH; + if (dst_metric_locked(dst, RTAX_CWND)) + val |= 1 << TCP_METRIC_CWND; + if (dst_metric_locked(dst, RTAX_REORDERING)) + val |= 1 << TCP_METRIC_REORDERING; + tm->tcpm_lock = val; + + tm->tcpm_vals[TCP_METRIC_RTT] = dst_metric_raw(dst, RTAX_RTT); + tm->tcpm_vals[TCP_METRIC_RTTVAR] = dst_metric_raw(dst, RTAX_RTTVAR); + tm->tcpm_vals[TCP_METRIC_SSTHRESH] = dst_metric_raw(dst, RTAX_SSTHRESH); + tm->tcpm_vals[TCP_METRIC_CWND] = dst_metric_raw(dst, RTAX_CWND); + tm->tcpm_vals[TCP_METRIC_REORDERING] = dst_metric_raw(dst, RTAX_REORDERING); + tm->tcpm_ts = 0; + tm->tcpm_ts_stamp = 0; + tm->tcpm_fastopen.mss = 0; + tm->tcpm_fastopen.syn_loss = 0; + tm->tcpm_fastopen.cookie.len = 0; +} + +static struct tcp_metrics_block *tcpm_new(struct dst_entry *dst, + struct inetpeer_addr *addr, + unsigned int hash, + bool reclaim) +{ + struct tcp_metrics_block *tm; + struct net *net; + + spin_lock_bh(&tcp_metrics_lock); + net = dev_net(dst->dev); + if (unlikely(reclaim)) { + struct tcp_metrics_block *oldest; + + oldest = rcu_dereference(net->ipv4.tcp_metrics_hash[hash].chain); + for (tm = rcu_dereference(oldest->tcpm_next); tm; + tm = rcu_dereference(tm->tcpm_next)) { + if (time_before(tm->tcpm_stamp, oldest->tcpm_stamp)) + oldest = tm; + } + tm = oldest; + } else { + tm = kmalloc(sizeof(*tm), GFP_ATOMIC); + if (!tm) + goto out_unlock; + } + tm->tcpm_addr = *addr; + + tcpm_suck_dst(tm, dst); + + if (likely(!reclaim)) { + tm->tcpm_next = net->ipv4.tcp_metrics_hash[hash].chain; + 
rcu_assign_pointer(net->ipv4.tcp_metrics_hash[hash].chain, tm); + } + +out_unlock: + spin_unlock_bh(&tcp_metrics_lock); + return tm; +} + +#define TCP_METRICS_TIMEOUT (60 * 60 * HZ) + +static void tcpm_check_stamp(struct tcp_metrics_block *tm, struct dst_entry *dst) +{ + if (tm && unlikely(time_after(jiffies, tm->tcpm_stamp + TCP_METRICS_TIMEOUT))) + tcpm_suck_dst(tm, dst); +} + +#define TCP_METRICS_RECLAIM_DEPTH 5 +#define TCP_METRICS_RECLAIM_PTR (struct tcp_metrics_block *) 0x1UL + +static struct tcp_metrics_block *tcp_get_encode(struct tcp_metrics_block *tm, int depth) +{ + if (tm) + return tm; + if (depth > TCP_METRICS_RECLAIM_DEPTH) + return TCP_METRICS_RECLAIM_PTR; + return NULL; +} + +static struct tcp_metrics_block *__tcp_get_metrics(const struct inetpeer_addr *addr, + struct net *net, unsigned int hash) +{ + struct tcp_metrics_block *tm; + int depth = 0; + + for (tm = rcu_dereference(net->ipv4.tcp_metrics_hash[hash].chain); tm; + tm = rcu_dereference(tm->tcpm_next)) { + if (addr_same(&tm->tcpm_addr, addr)) + break; + depth++; + } + return tcp_get_encode(tm, depth); +} + +static struct tcp_metrics_block *__tcp_get_metrics_req(struct request_sock *req, + struct dst_entry *dst) +{ + struct tcp_metrics_block *tm; + struct inetpeer_addr addr; + unsigned int hash; + struct net *net; + + addr.family = req->rsk_ops->family; + switch (addr.family) { + case AF_INET: + addr.addr.a4 = inet_rsk(req)->rmt_addr; + hash = (__force unsigned int) addr.addr.a4; + break; + case AF_INET6: + *(struct in6_addr *)addr.addr.a6 = inet6_rsk(req)->rmt_addr; + hash = ipv6_addr_hash(&inet6_rsk(req)->rmt_addr); + break; + default: + return NULL; + } + + net = dev_net(dst->dev); + hash = hash_32(hash, net->ipv4.tcp_metrics_hash_log); + + for (tm = rcu_dereference(net->ipv4.tcp_metrics_hash[hash].chain); tm; + tm = rcu_dereference(tm->tcpm_next)) { + if (addr_same(&tm->tcpm_addr, &addr)) + break; + } + tcpm_check_stamp(tm, dst); + return tm; +} + +static struct tcp_metrics_block *__tcp_get_metrics_tw(struct inet_timewait_sock *tw) +{ + struct inet6_timewait_sock *tw6; + struct tcp_metrics_block *tm; + struct inetpeer_addr addr; + unsigned int hash; + struct net *net; + + addr.family = tw->tw_family; + switch (addr.family) { + case AF_INET: + addr.addr.a4 = tw->tw_daddr; + hash = (__force unsigned int) addr.addr.a4; + break; + case AF_INET6: + tw6 = inet6_twsk((struct sock *)tw); + *(struct in6_addr *)addr.addr.a6 = tw6->tw_v6_daddr; + hash = ipv6_addr_hash(&tw6->tw_v6_daddr); + break; + default: + return NULL; + } + + net = twsk_net(tw); + hash = hash_32(hash, net->ipv4.tcp_metrics_hash_log); + + for (tm = rcu_dereference(net->ipv4.tcp_metrics_hash[hash].chain); tm; + tm = rcu_dereference(tm->tcpm_next)) { + if (addr_same(&tm->tcpm_addr, &addr)) + break; + } + return tm; +} + +static struct tcp_metrics_block *tcp_get_metrics(struct sock *sk, + struct dst_entry *dst, + bool create) +{ + struct tcp_metrics_block *tm; + struct inetpeer_addr addr; + unsigned int hash; + struct net *net; + bool reclaim; + + addr.family = sk->sk_family; + switch (addr.family) { + case AF_INET: + addr.addr.a4 = inet_sk(sk)->inet_daddr; + hash = (__force unsigned int) addr.addr.a4; + break; + case AF_INET6: + *(struct in6_addr *)addr.addr.a6 = inet6_sk(sk)->daddr; + hash = ipv6_addr_hash(&inet6_sk(sk)->daddr); + break; + default: + return NULL; + } + + net = dev_net(dst->dev); + hash = hash_32(hash, net->ipv4.tcp_metrics_hash_log); + + tm = __tcp_get_metrics(&addr, net, hash); + reclaim = false; + if (tm == TCP_METRICS_RECLAIM_PTR) { + 
reclaim = true; + tm = NULL; + } + if (!tm && create) + tm = tcpm_new(dst, &addr, hash, reclaim); + else + tcpm_check_stamp(tm, dst); + + return tm; +} + +/* Save metrics learned by this TCP session. This function is called + * only, when TCP finishes successfully i.e. when it enters TIME-WAIT + * or goes from LAST-ACK to CLOSE. + */ +void tcp_update_metrics(struct sock *sk) +{ + const struct inet_connection_sock *icsk = inet_csk(sk); + struct dst_entry *dst = __sk_dst_get(sk); + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_metrics_block *tm; + unsigned long rtt; + u32 val; + int m; + + if (sysctl_tcp_nometrics_save || !dst) + return; + + if (dst->flags & DST_HOST) + dst_confirm(dst); + + rcu_read_lock(); + if (icsk->icsk_backoff || !tp->srtt) { + /* This session failed to estimate rtt. Why? + * Probably, no packets returned in time. Reset our + * results. + */ + tm = tcp_get_metrics(sk, dst, false); + if (tm && !tcp_metric_locked(tm, TCP_METRIC_RTT)) + tcp_metric_set(tm, TCP_METRIC_RTT, 0); + goto out_unlock; + } else + tm = tcp_get_metrics(sk, dst, true); + + if (!tm) + goto out_unlock; + + rtt = tcp_metric_get_jiffies(tm, TCP_METRIC_RTT); + m = rtt - tp->srtt; + + /* If newly calculated rtt larger than stored one, store new + * one. Otherwise, use EWMA. Remember, rtt overestimation is + * always better than underestimation. + */ + if (!tcp_metric_locked(tm, TCP_METRIC_RTT)) { + if (m <= 0) + rtt = tp->srtt; + else + rtt -= (m >> 3); + tcp_metric_set_msecs(tm, TCP_METRIC_RTT, rtt); + } + + if (!tcp_metric_locked(tm, TCP_METRIC_RTTVAR)) { + unsigned long var; + + if (m < 0) + m = -m; + + /* Scale deviation to rttvar fixed point */ + m >>= 1; + if (m < tp->mdev) + m = tp->mdev; + + var = tcp_metric_get_jiffies(tm, TCP_METRIC_RTTVAR); + if (m >= var) + var = m; + else + var -= (var - m) >> 2; + + tcp_metric_set_msecs(tm, TCP_METRIC_RTTVAR, var); + } + + if (tcp_in_initial_slowstart(tp)) { + /* Slow start still did not finish. */ + if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { + val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); + if (val && (tp->snd_cwnd >> 1) > val) + tcp_metric_set(tm, TCP_METRIC_SSTHRESH, + tp->snd_cwnd >> 1); + } + if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) { + val = tcp_metric_get(tm, TCP_METRIC_CWND); + if (tp->snd_cwnd > val) + tcp_metric_set(tm, TCP_METRIC_CWND, + tp->snd_cwnd); + } + } else if (tp->snd_cwnd > tp->snd_ssthresh && + icsk->icsk_ca_state == TCP_CA_Open) { + /* Cong. avoidance phase, cwnd is reliable. */ + if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) + tcp_metric_set(tm, TCP_METRIC_SSTHRESH, + max(tp->snd_cwnd >> 1, tp->snd_ssthresh)); + if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) { + val = tcp_metric_get(tm, TCP_METRIC_CWND); + tcp_metric_set(tm, TCP_METRIC_CWND, (val + tp->snd_cwnd) >> 1); + } + } else { + /* Else slow start did not finish, cwnd is non-sense, + * ssthresh may be also invalid. 
+ */ + if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) { + val = tcp_metric_get(tm, TCP_METRIC_CWND); + tcp_metric_set(tm, TCP_METRIC_CWND, + (val + tp->snd_ssthresh) >> 1); + } + if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { + val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); + if (val && tp->snd_ssthresh > val) + tcp_metric_set(tm, TCP_METRIC_SSTHRESH, + tp->snd_ssthresh); + } + if (!tcp_metric_locked(tm, TCP_METRIC_REORDERING)) { + val = tcp_metric_get(tm, TCP_METRIC_REORDERING); + if (val < tp->reordering && + tp->reordering != sysctl_tcp_reordering) + tcp_metric_set(tm, TCP_METRIC_REORDERING, + tp->reordering); + } + } + tm->tcpm_stamp = jiffies; +out_unlock: + rcu_read_unlock(); +} + +/* Initialize metrics on socket. */ + +void tcp_init_metrics(struct sock *sk) +{ + struct dst_entry *dst = __sk_dst_get(sk); + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_metrics_block *tm; + u32 val; + + if (dst == NULL) + goto reset; + + dst_confirm(dst); + + rcu_read_lock(); + tm = tcp_get_metrics(sk, dst, true); + if (!tm) { + rcu_read_unlock(); + goto reset; + } + + if (tcp_metric_locked(tm, TCP_METRIC_CWND)) + tp->snd_cwnd_clamp = tcp_metric_get(tm, TCP_METRIC_CWND); + + val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); + if (val) { + tp->snd_ssthresh = val; + if (tp->snd_ssthresh > tp->snd_cwnd_clamp) + tp->snd_ssthresh = tp->snd_cwnd_clamp; + } else { + /* ssthresh may have been reduced unnecessarily during. + * 3WHS. Restore it back to its initial default. + */ + tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; + } + val = tcp_metric_get(tm, TCP_METRIC_REORDERING); + if (val && tp->reordering != val) { + tcp_disable_fack(tp); + tcp_disable_early_retrans(tp); + tp->reordering = val; + } + + val = tcp_metric_get(tm, TCP_METRIC_RTT); + if (val == 0 || tp->srtt == 0) { + rcu_read_unlock(); + goto reset; + } + /* Initial rtt is determined from SYN,SYN-ACK. + * The segment is small and rtt may appear much + * less than real one. Use per-dst memory + * to make it more realistic. + * + * A bit of theory. RTT is time passed after "normal" sized packet + * is sent until it is ACKed. In normal circumstances sending small + * packets force peer to delay ACKs and calculation is correct too. + * The algorithm is adaptive and, provided we follow specs, it + * NEVER underestimate RTT. BUT! If peer tries to make some clever + * tricks sort of "quick acks" for time long enough to decrease RTT + * to low value, and then abruptly stops to do it and starts to delay + * ACKs, wait for troubles. + */ + val = msecs_to_jiffies(val); + if (val > tp->srtt) { + tp->srtt = val; + tp->rtt_seq = tp->snd_nxt; + } + val = tcp_metric_get_jiffies(tm, TCP_METRIC_RTTVAR); + if (val > tp->mdev) { + tp->mdev = val; + tp->mdev_max = tp->rttvar = max(tp->mdev, tcp_rto_min(sk)); + } + rcu_read_unlock(); + + tcp_set_rto(sk); +reset: + if (tp->srtt == 0) { + /* RFC6298: 5.7 We've failed to get a valid RTT sample from + * 3WHS. This is most likely due to retransmission, + * including spurious one. Reset the RTO back to 3secs + * from the more aggressive 1sec to avoid more spurious + * retransmission. + */ + tp->mdev = tp->mdev_max = tp->rttvar = TCP_TIMEOUT_FALLBACK; + inet_csk(sk)->icsk_rto = TCP_TIMEOUT_FALLBACK; + } + /* Cut cwnd down to 1 per RFC5681 if SYN or SYN-ACK has been + * retransmitted. In light of RFC6298 more aggressive 1sec + * initRTO, we only reset cwnd when more than 1 SYN/SYN-ACK + * retransmission has occurred. 
+ */ + if (tp->total_retrans > 1) + tp->snd_cwnd = 1; + else + tp->snd_cwnd = tcp_init_cwnd(tp, dst); + tp->snd_cwnd_stamp = tcp_time_stamp; +} + +bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst, bool paws_check) +{ + struct tcp_metrics_block *tm; + bool ret; + + if (!dst) + return false; + + rcu_read_lock(); + tm = __tcp_get_metrics_req(req, dst); + if (paws_check) { + if (tm && + (u32)get_seconds() - tm->tcpm_ts_stamp < TCP_PAWS_MSL && + (s32)(tm->tcpm_ts - req->ts_recent) > TCP_PAWS_WINDOW) + ret = false; + else + ret = true; + } else { + if (tm && tcp_metric_get(tm, TCP_METRIC_RTT) && tm->tcpm_ts_stamp) + ret = true; + else + ret = false; + } + rcu_read_unlock(); + + return ret; +} +EXPORT_SYMBOL_GPL(tcp_peer_is_proven); + +void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst) +{ + struct tcp_metrics_block *tm; + + rcu_read_lock(); + tm = tcp_get_metrics(sk, dst, true); + if (tm) { + struct tcp_sock *tp = tcp_sk(sk); + + if ((u32)get_seconds() - tm->tcpm_ts_stamp <= TCP_PAWS_MSL) { + tp->rx_opt.ts_recent_stamp = tm->tcpm_ts_stamp; + tp->rx_opt.ts_recent = tm->tcpm_ts; + } + } + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(tcp_fetch_timewait_stamp); + +/* VJ's idea. Save last timestamp seen from this destination and hold + * it at least for normal timewait interval to use for duplicate + * segment detection in subsequent connections, before they enter + * synchronized state. + */ +bool tcp_remember_stamp(struct sock *sk) +{ + struct dst_entry *dst = __sk_dst_get(sk); + bool ret = false; + + if (dst) { + struct tcp_metrics_block *tm; + + rcu_read_lock(); + tm = tcp_get_metrics(sk, dst, true); + if (tm) { + struct tcp_sock *tp = tcp_sk(sk); + + if ((s32)(tm->tcpm_ts - tp->rx_opt.ts_recent) <= 0 || + ((u32)get_seconds() - tm->tcpm_ts_stamp > TCP_PAWS_MSL && + tm->tcpm_ts_stamp <= (u32)tp->rx_opt.ts_recent_stamp)) { + tm->tcpm_ts_stamp = (u32)tp->rx_opt.ts_recent_stamp; + tm->tcpm_ts = tp->rx_opt.ts_recent; + } + ret = true; + } + rcu_read_unlock(); + } + return ret; +} + +bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw) +{ + struct tcp_metrics_block *tm; + bool ret = false; + + rcu_read_lock(); + tm = __tcp_get_metrics_tw(tw); + if (tm) { + const struct tcp_timewait_sock *tcptw; + struct sock *sk = (struct sock *) tw; + + tcptw = tcp_twsk(sk); + if ((s32)(tm->tcpm_ts - tcptw->tw_ts_recent) <= 0 || + ((u32)get_seconds() - tm->tcpm_ts_stamp > TCP_PAWS_MSL && + tm->tcpm_ts_stamp <= (u32)tcptw->tw_ts_recent_stamp)) { + tm->tcpm_ts_stamp = (u32)tcptw->tw_ts_recent_stamp; + tm->tcpm_ts = tcptw->tw_ts_recent; + } + ret = true; + } + rcu_read_unlock(); + + return ret; +} + +static DEFINE_SEQLOCK(fastopen_seqlock); + +void tcp_fastopen_cache_get(struct sock *sk, u16 *mss, + struct tcp_fastopen_cookie *cookie, + int *syn_loss, unsigned long *last_syn_loss) +{ + struct tcp_metrics_block *tm; + + rcu_read_lock(); + tm = tcp_get_metrics(sk, __sk_dst_get(sk), false); + if (tm) { + struct tcp_fastopen_metrics *tfom = &tm->tcpm_fastopen; + unsigned int seq; + + do { + seq = read_seqbegin(&fastopen_seqlock); + if (tfom->mss) + *mss = tfom->mss; + *cookie = tfom->cookie; + *syn_loss = tfom->syn_loss; + *last_syn_loss = *syn_loss ? 
tfom->last_syn_loss : 0; + } while (read_seqretry(&fastopen_seqlock, seq)); + } + rcu_read_unlock(); +} + +void tcp_fastopen_cache_set(struct sock *sk, u16 mss, + struct tcp_fastopen_cookie *cookie, bool syn_lost) +{ + struct tcp_metrics_block *tm; + + rcu_read_lock(); + tm = tcp_get_metrics(sk, __sk_dst_get(sk), true); + if (tm) { + struct tcp_fastopen_metrics *tfom = &tm->tcpm_fastopen; + + write_seqlock_bh(&fastopen_seqlock); + tfom->mss = mss; + if (cookie->len > 0) + tfom->cookie = *cookie; + if (syn_lost) { + ++tfom->syn_loss; + tfom->last_syn_loss = jiffies; + } else + tfom->syn_loss = 0; + write_sequnlock_bh(&fastopen_seqlock); + } + rcu_read_unlock(); +} + +static unsigned int tcpmhash_entries; +static int __init set_tcpmhash_entries(char *str) +{ + ssize_t ret; + + if (!str) + return 0; + + ret = kstrtouint(str, 0, &tcpmhash_entries); + if (ret) + return 0; + + return 1; +} +__setup("tcpmhash_entries=", set_tcpmhash_entries); + +static int __net_init tcp_net_metrics_init(struct net *net) +{ + size_t size; + unsigned int slots; + + slots = tcpmhash_entries; + if (!slots) { + if (totalram_pages >= 128 * 1024) + slots = 16 * 1024; + else + slots = 8 * 1024; + } + + net->ipv4.tcp_metrics_hash_log = order_base_2(slots); + size = sizeof(struct tcpm_hash_bucket) << net->ipv4.tcp_metrics_hash_log; + + net->ipv4.tcp_metrics_hash = kzalloc(size, GFP_KERNEL); + if (!net->ipv4.tcp_metrics_hash) + return -ENOMEM; + + return 0; +} + +static void __net_exit tcp_net_metrics_exit(struct net *net) +{ + kfree(net->ipv4.tcp_metrics_hash); +} + +static __net_initdata struct pernet_operations tcp_net_metrics_ops = { + .init = tcp_net_metrics_init, + .exit = tcp_net_metrics_exit, +}; + +void __init tcp_metrics_init(void) +{ + register_pernet_subsys(&tcp_net_metrics_ops); +} diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index b85d9fe7d663..5912ac3fd240 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -49,56 +49,6 @@ struct inet_timewait_death_row tcp_death_row = { }; EXPORT_SYMBOL_GPL(tcp_death_row); -/* VJ's idea. Save last timestamp seen from this destination - * and hold it at least for normal timewait interval to use for duplicate - * segment detection in subsequent connections, before they enter synchronized - * state. 
- */ - -static bool tcp_remember_stamp(struct sock *sk) -{ - const struct inet_connection_sock *icsk = inet_csk(sk); - struct tcp_sock *tp = tcp_sk(sk); - struct inet_peer *peer; - bool release_it; - - peer = icsk->icsk_af_ops->get_peer(sk, &release_it); - if (peer) { - if ((s32)(peer->tcp_ts - tp->rx_opt.ts_recent) <= 0 || - ((u32)get_seconds() - peer->tcp_ts_stamp > TCP_PAWS_MSL && - peer->tcp_ts_stamp <= (u32)tp->rx_opt.ts_recent_stamp)) { - peer->tcp_ts_stamp = (u32)tp->rx_opt.ts_recent_stamp; - peer->tcp_ts = tp->rx_opt.ts_recent; - } - if (release_it) - inet_putpeer(peer); - return true; - } - - return false; -} - -static bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw) -{ - struct sock *sk = (struct sock *) tw; - struct inet_peer *peer; - - peer = twsk_getpeer(sk); - if (peer) { - const struct tcp_timewait_sock *tcptw = tcp_twsk(sk); - - if ((s32)(peer->tcp_ts - tcptw->tw_ts_recent) <= 0 || - ((u32)get_seconds() - peer->tcp_ts_stamp > TCP_PAWS_MSL && - peer->tcp_ts_stamp <= (u32)tcptw->tw_ts_recent_stamp)) { - peer->tcp_ts_stamp = (u32)tcptw->tw_ts_recent_stamp; - peer->tcp_ts = tcptw->tw_ts_recent; - } - inet_putpeer(peer); - return true; - } - return false; -} - static bool tcp_in_window(u32 seq, u32 end_seq, u32 s_win, u32 e_win) { if (seq == s_win) @@ -147,7 +97,7 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb, tmp_opt.saw_tstamp = 0; if (th->doff > (sizeof(*th) >> 2) && tcptw->tw_ts_recent_stamp) { - tcp_parse_options(skb, &tmp_opt, &hash_location, 0); + tcp_parse_options(skb, &tmp_opt, &hash_location, 0, NULL); if (tmp_opt.saw_tstamp) { tmp_opt.ts_recent = tcptw->tw_ts_recent; @@ -327,8 +277,9 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) if (tw != NULL) { struct tcp_timewait_sock *tcptw = tcp_twsk((struct sock *)tw); const int rto = (icsk->icsk_rto << 2) - (icsk->icsk_rto >> 1); + struct inet_sock *inet = inet_sk(sk); - tw->tw_transparent = inet_sk(sk)->transparent; + tw->tw_transparent = inet->transparent; tw->tw_rcv_wscale = tp->rx_opt.rcv_wscale; tcptw->tw_rcv_nxt = tp->rcv_nxt; tcptw->tw_snd_nxt = tp->snd_nxt; @@ -403,6 +354,7 @@ void tcp_twsk_destructor(struct sock *sk) { #ifdef CONFIG_TCP_MD5SIG struct tcp_timewait_sock *twsk = tcp_twsk(sk); + if (twsk->tw_md5_key) { tcp_free_md5sig_pool(); kfree_rcu(twsk->tw_md5_key, rcu); @@ -435,6 +387,8 @@ struct sock *tcp_create_openreq_child(struct sock *sk, struct request_sock *req, struct tcp_sock *oldtp = tcp_sk(sk); struct tcp_cookie_values *oldcvp = oldtp->cookie_values; + newsk->sk_rx_dst = dst_clone(skb_dst(skb)); + /* TCP Cookie Transactions require space for the cookie pair, * as it differs for each connection. There is no need to * copy any s_data_payload stored at the original socket. 
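
The tcp_metrics cache introduced above takes over the per-destination state that used to live in inetpeer (the tcp_remember_stamp()/tcp_tw_remember_stamp() helpers removed from tcp_minisocks.c here). One detail worth a worked example is how tcp_update_metrics() folds the session's smoothed RTT into the cached value: a larger sample replaces the cache outright, a smaller one is only blended in with a 1/8 weight (rtt -= m >> 3), so the cache deliberately errs toward overestimation. The sketch below reproduces just that blending rule; the function name and the sample values are made up, and plain integers stand in for jiffies.

#include <stdio.h>

/* Blend a new smoothed-RTT sample into a cached per-destination RTT,
 * mirroring the rule in tcp_update_metrics() above: adopt larger samples
 * immediately, decay toward smaller ones by 1/8 of the gap per session.
 */
static unsigned long blend_rtt(unsigned long cached, unsigned long srtt)
{
	long m = (long)cached - (long)srtt;

	if (m <= 0)
		return srtt;		/* sample is larger: adopt it directly */
	return cached - (m >> 3);	/* sample is smaller: move 1/8 of the way */
}

int main(void)
{
	unsigned long samples[] = { 200, 120, 110, 300 };
	unsigned long cached = 0;
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		/* An empty cache simply takes the first sample. */
		cached = cached ? blend_rtt(cached, samples[i]) : samples[i];
		printf("sample %lu -> cached %lu\n", samples[i], cached);
	}
	return 0;
}

Running it shows the cache creeping down slowly (200, 190, 180) while jumping up immediately when a 300 sample arrives, which is the "overestimation is always better than underestimation" policy stated in the comment above.
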
@@ -470,6 +424,7 @@ struct sock *tcp_create_openreq_child(struct sock *sk, struct request_sock *req, treq->snt_isn + 1 + tcp_s_data_size(oldtp); tcp_prequeue_init(newtp); + INIT_LIST_HEAD(&newtp->tsq_node); tcp_init_wl(newtp, treq->rcv_isn); @@ -579,7 +534,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb, tmp_opt.saw_tstamp = 0; if (th->doff > (sizeof(struct tcphdr)>>2)) { - tcp_parse_options(skb, &tmp_opt, &hash_location, 0); + tcp_parse_options(skb, &tmp_opt, &hash_location, 0, NULL); if (tmp_opt.saw_tstamp) { tmp_opt.ts_recent = req->ts_recent; diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 803cbfe82fbc..33cd065cfbd8 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -50,6 +50,9 @@ int sysctl_tcp_retrans_collapse __read_mostly = 1; */ int sysctl_tcp_workaround_signed_windows __read_mostly = 0; +/* Default TSQ limit of two TSO segments */ +int sysctl_tcp_limit_output_bytes __read_mostly = 131072; + /* This limits the percentage of the congestion window which we * will allow a single TSO frame to consume. Building TSO frames * which are too large can cause TCP streams to be bursty. @@ -65,6 +68,8 @@ int sysctl_tcp_slow_start_after_idle __read_mostly = 1; int sysctl_tcp_cookie_size __read_mostly = 0; /* TCP_COOKIE_MAX */ EXPORT_SYMBOL_GPL(sysctl_tcp_cookie_size); +static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle, + int push_one, gfp_t gfp); /* Account for new data that has been sent to the network. */ static void tcp_event_new_data_sent(struct sock *sk, const struct sk_buff *skb) @@ -380,15 +385,17 @@ static inline bool tcp_urg_mode(const struct tcp_sock *tp) #define OPTION_MD5 (1 << 2) #define OPTION_WSCALE (1 << 3) #define OPTION_COOKIE_EXTENSION (1 << 4) +#define OPTION_FAST_OPEN_COOKIE (1 << 8) struct tcp_out_options { - u8 options; /* bit field of OPTION_* */ + u16 options; /* bit field of OPTION_* */ + u16 mss; /* 0 to disable */ u8 ws; /* window scale, 0 to disable */ u8 num_sack_blocks; /* number of SACK blocks to include */ u8 hash_size; /* bytes in hash_location */ - u16 mss; /* 0 to disable */ - __u32 tsval, tsecr; /* need to include OPTION_TS */ __u8 *hash_location; /* temporary pointer, overloaded */ + __u32 tsval, tsecr; /* need to include OPTION_TS */ + struct tcp_fastopen_cookie *fastopen_cookie; /* Fast open cookie */ }; /* The sysctl int routines are generic, so check consistency here. @@ -437,7 +444,7 @@ static u8 tcp_cookie_size_check(u8 desired) static void tcp_options_write(__be32 *ptr, struct tcp_sock *tp, struct tcp_out_options *opts) { - u8 options = opts->options; /* mungable copy */ + u16 options = opts->options; /* mungable copy */ /* Having both authentication and cookies for security is redundant, * and there's certainly not enough room. Instead, the cookie-less @@ -559,6 +566,21 @@ static void tcp_options_write(__be32 *ptr, struct tcp_sock *tp, tp->rx_opt.dsack = 0; } + + if (unlikely(OPTION_FAST_OPEN_COOKIE & options)) { + struct tcp_fastopen_cookie *foc = opts->fastopen_cookie; + + *ptr++ = htonl((TCPOPT_EXP << 24) | + ((TCPOLEN_EXP_FASTOPEN_BASE + foc->len) << 16) | + TCPOPT_FASTOPEN_MAGIC); + + memcpy(ptr, foc->val, foc->len); + if ((foc->len & 3) == 2) { + u8 *align = ((u8 *)ptr) + foc->len; + align[0] = align[1] = TCPOPT_NOP; + } + ptr += (foc->len + 3) >> 2; + } } /* Compute TCP options for SYN packets. 
This is not the final @@ -574,6 +596,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, u8 cookie_size = (!tp->rx_opt.cookie_out_never && cvp != NULL) ? tcp_cookie_size_check(cvp->cookie_desired) : 0; + struct tcp_fastopen_request *fastopen = tp->fastopen_req; #ifdef CONFIG_TCP_MD5SIG *md5 = tp->af_specific->md5_lookup(sk, sk); @@ -614,6 +637,16 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, remaining -= TCPOLEN_SACKPERM_ALIGNED; } + if (fastopen && fastopen->cookie.len >= 0) { + u32 need = TCPOLEN_EXP_FASTOPEN_BASE + fastopen->cookie.len; + need = (need + 3) & ~3U; /* Align to 32 bits */ + if (remaining >= need) { + opts->options |= OPTION_FAST_OPEN_COOKIE; + opts->fastopen_cookie = &fastopen->cookie; + remaining -= need; + tp->syn_fastopen = 1; + } + } /* Note that timestamps are required by the specification. * * Odd numbers of bytes are prohibited by the specification, ensuring @@ -783,6 +816,156 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb return size; } + +/* TCP SMALL QUEUES (TSQ) + * + * TSQ goal is to keep small amount of skbs per tcp flow in tx queues (qdisc+dev) + * to reduce RTT and bufferbloat. + * We do this using a special skb destructor (tcp_wfree). + * + * Its important tcp_wfree() can be replaced by sock_wfree() in the event skb + * needs to be reallocated in a driver. + * The invariant being skb->truesize substracted from sk->sk_wmem_alloc + * + * Since transmit from skb destructor is forbidden, we use a tasklet + * to process all sockets that eventually need to send more skbs. + * We use one tasklet per cpu, with its own queue of sockets. + */ +struct tsq_tasklet { + struct tasklet_struct tasklet; + struct list_head head; /* queue of tcp sockets */ +}; +static DEFINE_PER_CPU(struct tsq_tasklet, tsq_tasklet); + +static void tcp_tsq_handler(struct sock *sk) +{ + if ((1 << sk->sk_state) & + (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_CLOSING | + TCPF_CLOSE_WAIT | TCPF_LAST_ACK)) + tcp_write_xmit(sk, tcp_current_mss(sk), 0, 0, GFP_ATOMIC); +} +/* + * One tasklest per cpu tries to send more skbs. + * We run in tasklet context but need to disable irqs when + * transfering tsq->head because tcp_wfree() might + * interrupt us (non NAPI drivers) + */ +static void tcp_tasklet_func(unsigned long data) +{ + struct tsq_tasklet *tsq = (struct tsq_tasklet *)data; + LIST_HEAD(list); + unsigned long flags; + struct list_head *q, *n; + struct tcp_sock *tp; + struct sock *sk; + + local_irq_save(flags); + list_splice_init(&tsq->head, &list); + local_irq_restore(flags); + + list_for_each_safe(q, n, &list) { + tp = list_entry(q, struct tcp_sock, tsq_node); + list_del(&tp->tsq_node); + + sk = (struct sock *)tp; + bh_lock_sock(sk); + + if (!sock_owned_by_user(sk)) { + tcp_tsq_handler(sk); + } else { + /* defer the work to tcp_release_cb() */ + set_bit(TCP_TSQ_DEFERRED, &tp->tsq_flags); + } + bh_unlock_sock(sk); + + clear_bit(TSQ_QUEUED, &tp->tsq_flags); + sk_free(sk); + } +} + +#define TCP_DEFERRED_ALL ((1UL << TCP_TSQ_DEFERRED) | \ + (1UL << TCP_WRITE_TIMER_DEFERRED) | \ + (1UL << TCP_DELACK_TIMER_DEFERRED) | \ + (1UL << TCP_MTU_REDUCED_DEFERRED)) +/** + * tcp_release_cb - tcp release_sock() callback + * @sk: socket + * + * called from release_sock() to perform protocol dependent + * actions before socket release. 
+ */ +void tcp_release_cb(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + unsigned long flags, nflags; + + /* perform an atomic operation only if at least one flag is set */ + do { + flags = tp->tsq_flags; + if (!(flags & TCP_DEFERRED_ALL)) + return; + nflags = flags & ~TCP_DEFERRED_ALL; + } while (cmpxchg(&tp->tsq_flags, flags, nflags) != flags); + + if (flags & (1UL << TCP_TSQ_DEFERRED)) + tcp_tsq_handler(sk); + + if (flags & (1UL << TCP_WRITE_TIMER_DEFERRED)) + tcp_write_timer_handler(sk); + + if (flags & (1UL << TCP_DELACK_TIMER_DEFERRED)) + tcp_delack_timer_handler(sk); + + if (flags & (1UL << TCP_MTU_REDUCED_DEFERRED)) + sk->sk_prot->mtu_reduced(sk); +} +EXPORT_SYMBOL(tcp_release_cb); + +void __init tcp_tasklet_init(void) +{ + int i; + + for_each_possible_cpu(i) { + struct tsq_tasklet *tsq = &per_cpu(tsq_tasklet, i); + + INIT_LIST_HEAD(&tsq->head); + tasklet_init(&tsq->tasklet, + tcp_tasklet_func, + (unsigned long)tsq); + } +} + +/* + * Write buffer destructor automatically called from kfree_skb. + * We cant xmit new skbs from this context, as we might already + * hold qdisc lock. + */ +void tcp_wfree(struct sk_buff *skb) +{ + struct sock *sk = skb->sk; + struct tcp_sock *tp = tcp_sk(sk); + + if (test_and_clear_bit(TSQ_THROTTLED, &tp->tsq_flags) && + !test_and_set_bit(TSQ_QUEUED, &tp->tsq_flags)) { + unsigned long flags; + struct tsq_tasklet *tsq; + + /* Keep a ref on socket. + * This last ref will be released in tcp_tasklet_func() + */ + atomic_sub(skb->truesize - 1, &sk->sk_wmem_alloc); + + /* queue this socket to tasklet queue */ + local_irq_save(flags); + tsq = &__get_cpu_var(tsq_tasklet); + list_add(&tp->tsq_node, &tsq->head); + tasklet_schedule(&tsq->tasklet); + local_irq_restore(flags); + } else { + sock_wfree(skb); + } +} + /* This routine actually transmits TCP packets queued in by * tcp_do_sendmsg(). This is used by both the initial * transmission and possible later retransmissions. @@ -844,7 +1027,12 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it, skb_push(skb, tcp_header_size); skb_reset_transport_header(skb); - skb_set_owner_w(skb, sk); + + skb_orphan(skb); + skb->sk = sk; + skb->destructor = (sysctl_tcp_limit_output_bytes > 0) ? + tcp_wfree : sock_wfree; + atomic_add(skb->truesize, &sk->sk_wmem_alloc); /* Build TCP header and checksum it. */ th = tcp_hdr(skb); @@ -1780,6 +1968,7 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle, while ((skb = tcp_send_head(sk))) { unsigned int limit; + tso_segs = tcp_init_tso_segs(sk, skb, mss_now); BUG_ON(!tso_segs); @@ -1800,6 +1989,13 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle, break; } + /* TSQ : sk_wmem_alloc accounts skb truesize, + * including skb overhead. But thats OK. + */ + if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) { + set_bit(TSQ_THROTTLED, &tp->tsq_flags); + break; + } limit = mss_now; if (tso_segs > 1 && !tcp_urg_mode(tp)) limit = tcp_mss_split_point(sk, skb, mss_now, @@ -2442,7 +2638,16 @@ int tcp_send_synack(struct sock *sk) return tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC); } -/* Prepare a SYN-ACK. */ +/** + * tcp_make_synack - Prepare a SYN-ACK. + * sk: listener socket + * dst: dst entry attached to the SYNACK + * req: request_sock pointer + * rvp: request_values pointer + * + * Allocate one skb and build a SYNACK packet. + * @dst is consumed : Caller should not use it again. 
+ */ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst, struct request_sock *req, struct request_values *rvp) @@ -2461,14 +2666,15 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst, if (cvp != NULL && cvp->s_data_constant && cvp->s_data_desired) s_data_desired = cvp->s_data_desired; - skb = sock_wmalloc(sk, MAX_TCP_HEADER + 15 + s_data_desired, 1, GFP_ATOMIC); - if (skb == NULL) + skb = alloc_skb(MAX_TCP_HEADER + 15 + s_data_desired, GFP_ATOMIC); + if (unlikely(!skb)) { + dst_release(dst); return NULL; - + } /* Reserve space for headers. */ skb_reserve(skb, MAX_TCP_HEADER); - skb_dst_set(skb, dst_clone(dst)); + skb_dst_set(skb, dst); mss = dst_metric_advmss(dst); if (tp->rx_opt.user_mss && tp->rx_opt.user_mss < mss) @@ -2645,6 +2851,109 @@ void tcp_connect_init(struct sock *sk) tcp_clear_retrans(tp); } +static void tcp_connect_queue_skb(struct sock *sk, struct sk_buff *skb) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); + + tcb->end_seq += skb->len; + skb_header_release(skb); + __tcp_add_write_queue_tail(sk, skb); + sk->sk_wmem_queued += skb->truesize; + sk_mem_charge(sk, skb->truesize); + tp->write_seq = tcb->end_seq; + tp->packets_out += tcp_skb_pcount(skb); +} + +/* Build and send a SYN with data and (cached) Fast Open cookie. However, + * queue a data-only packet after the regular SYN, such that regular SYNs + * are retransmitted on timeouts. Also if the remote SYN-ACK acknowledges + * only the SYN sequence, the data are retransmitted in the first ACK. + * If cookie is not cached or other error occurs, falls back to send a + * regular SYN with Fast Open cookie request option. + */ +static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_fastopen_request *fo = tp->fastopen_req; + int syn_loss = 0, space, i, err = 0, iovlen = fo->data->msg_iovlen; + struct sk_buff *syn_data = NULL, *data; + unsigned long last_syn_loss = 0; + + tp->rx_opt.mss_clamp = tp->advmss; /* If MSS is not cached */ + tcp_fastopen_cache_get(sk, &tp->rx_opt.mss_clamp, &fo->cookie, + &syn_loss, &last_syn_loss); + /* Recurring FO SYN losses: revert to regular handshake temporarily */ + if (syn_loss > 1 && + time_before(jiffies, last_syn_loss + (60*HZ << syn_loss))) { + fo->cookie.len = -1; + goto fallback; + } + + if (sysctl_tcp_fastopen & TFO_CLIENT_NO_COOKIE) + fo->cookie.len = -1; + else if (fo->cookie.len <= 0) + goto fallback; + + /* MSS for SYN-data is based on cached MSS and bounded by PMTU and + * user-MSS. Reserve maximum option space for middleboxes that add + * private TCP options. 
The cost is reduced data space in SYN :( + */ + if (tp->rx_opt.user_mss && tp->rx_opt.user_mss < tp->rx_opt.mss_clamp) + tp->rx_opt.mss_clamp = tp->rx_opt.user_mss; + space = tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) - + MAX_TCP_OPTION_SPACE; + + syn_data = skb_copy_expand(syn, skb_headroom(syn), space, + sk->sk_allocation); + if (syn_data == NULL) + goto fallback; + + for (i = 0; i < iovlen && syn_data->len < space; ++i) { + struct iovec *iov = &fo->data->msg_iov[i]; + unsigned char __user *from = iov->iov_base; + int len = iov->iov_len; + + if (syn_data->len + len > space) + len = space - syn_data->len; + else if (i + 1 == iovlen) + /* No more data pending in inet_wait_for_connect() */ + fo->data = NULL; + + if (skb_add_data(syn_data, from, len)) + goto fallback; + } + + /* Queue a data-only packet after the regular SYN for retransmission */ + data = pskb_copy(syn_data, sk->sk_allocation); + if (data == NULL) + goto fallback; + TCP_SKB_CB(data)->seq++; + TCP_SKB_CB(data)->tcp_flags &= ~TCPHDR_SYN; + TCP_SKB_CB(data)->tcp_flags = (TCPHDR_ACK|TCPHDR_PSH); + tcp_connect_queue_skb(sk, data); + fo->copied = data->len; + + if (tcp_transmit_skb(sk, syn_data, 0, sk->sk_allocation) == 0) { + tp->syn_data = (fo->copied > 0); + NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFASTOPENACTIVE); + goto done; + } + syn_data = NULL; + +fallback: + /* Send a regular SYN with Fast Open cookie request option */ + if (fo->cookie.len > 0) + fo->cookie.len = 0; + err = tcp_transmit_skb(sk, syn, 1, sk->sk_allocation); + if (err) + tp->syn_fastopen = 0; + kfree_skb(syn_data); +done: + fo->cookie.len = -1; /* Exclude Fast Open option for SYN retries */ + return err; +} + /* Build a SYN and send it off. */ int tcp_connect(struct sock *sk) { @@ -2662,17 +2971,13 @@ int tcp_connect(struct sock *sk) skb_reserve(buff, MAX_TCP_HEADER); tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN); + tp->retrans_stamp = TCP_SKB_CB(buff)->when = tcp_time_stamp; + tcp_connect_queue_skb(sk, buff); TCP_ECN_send_syn(sk, buff); - /* Send it off. */ - TCP_SKB_CB(buff)->when = tcp_time_stamp; - tp->retrans_stamp = TCP_SKB_CB(buff)->when; - skb_header_release(buff); - __tcp_add_write_queue_tail(sk, buff); - sk->sk_wmem_queued += buff->truesize; - sk_mem_charge(sk, buff->truesize); - tp->packets_out += tcp_skb_pcount(buff); - err = tcp_transmit_skb(sk, buff, 1, sk->sk_allocation); + /* Send off SYN; include data in Fast Open. */ + err = tp->fastopen_req ? tcp_send_syn_data(sk, buff) : + tcp_transmit_skb(sk, buff, 1, sk->sk_allocation); if (err == -ECONNREFUSED) return err; diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index e911e6c523ec..6df36ad55a38 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -32,17 +32,6 @@ int sysctl_tcp_retries2 __read_mostly = TCP_RETR2; int sysctl_tcp_orphan_retries __read_mostly; int sysctl_tcp_thin_linear_timeouts __read_mostly; -static void tcp_write_timer(unsigned long); -static void tcp_delack_timer(unsigned long); -static void tcp_keepalive_timer (unsigned long data); - -void tcp_init_xmit_timers(struct sock *sk) -{ - inet_csk_init_xmit_timers(sk, &tcp_write_timer, &tcp_delack_timer, - &tcp_keepalive_timer); -} -EXPORT_SYMBOL(tcp_init_xmit_timers); - static void tcp_write_err(struct sock *sk) { sk->sk_err = sk->sk_err_soft ? 
: ETIMEDOUT; @@ -205,21 +194,11 @@ static int tcp_write_timeout(struct sock *sk) return 0; } -static void tcp_delack_timer(unsigned long data) +void tcp_delack_timer_handler(struct sock *sk) { - struct sock *sk = (struct sock *)data; struct tcp_sock *tp = tcp_sk(sk); struct inet_connection_sock *icsk = inet_csk(sk); - bh_lock_sock(sk); - if (sock_owned_by_user(sk)) { - /* Try again later. */ - icsk->icsk_ack.blocked = 1; - NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED); - sk_reset_timer(sk, &icsk->icsk_delack_timer, jiffies + TCP_DELACK_MIN); - goto out_unlock; - } - sk_mem_reclaim_partial(sk); if (sk->sk_state == TCP_CLOSE || !(icsk->icsk_ack.pending & ICSK_ACK_TIMER)) @@ -260,7 +239,21 @@ static void tcp_delack_timer(unsigned long data) out: if (sk_under_memory_pressure(sk)) sk_mem_reclaim(sk); -out_unlock: +} + +static void tcp_delack_timer(unsigned long data) +{ + struct sock *sk = (struct sock *)data; + + bh_lock_sock(sk); + if (!sock_owned_by_user(sk)) { + tcp_delack_timer_handler(sk); + } else { + inet_csk(sk)->icsk_ack.blocked = 1; + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED); + /* deleguate our work to tcp_release_cb() */ + set_bit(TCP_WRITE_TIMER_DEFERRED, &tcp_sk(sk)->tsq_flags); + } bh_unlock_sock(sk); sock_put(sk); } @@ -450,19 +443,11 @@ out_reset_timer: out:; } -static void tcp_write_timer(unsigned long data) +void tcp_write_timer_handler(struct sock *sk) { - struct sock *sk = (struct sock *)data; struct inet_connection_sock *icsk = inet_csk(sk); int event; - bh_lock_sock(sk); - if (sock_owned_by_user(sk)) { - /* Try again later */ - sk_reset_timer(sk, &icsk->icsk_retransmit_timer, jiffies + (HZ / 20)); - goto out_unlock; - } - if (sk->sk_state == TCP_CLOSE || !icsk->icsk_pending) goto out; @@ -485,7 +470,19 @@ static void tcp_write_timer(unsigned long data) out: sk_mem_reclaim(sk); -out_unlock: +} + +static void tcp_write_timer(unsigned long data) +{ + struct sock *sk = (struct sock *)data; + + bh_lock_sock(sk); + if (!sock_owned_by_user(sk)) { + tcp_write_timer_handler(sk); + } else { + /* deleguate our work to tcp_release_cb() */ + set_bit(TCP_WRITE_TIMER_DEFERRED, &tcp_sk(sk)->tsq_flags); + } bh_unlock_sock(sk); sock_put(sk); } @@ -602,3 +599,10 @@ out: bh_unlock_sock(sk); sock_put(sk); } + +void tcp_init_xmit_timers(struct sock *sk) +{ + inet_csk_init_xmit_timers(sk, &tcp_write_timer, &tcp_delack_timer, + &tcp_keepalive_timer); +} +EXPORT_SYMBOL(tcp_init_xmit_timers); diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index eaca73644e79..b4c3582a991f 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -108,6 +108,7 @@ #include <net/xfrm.h> #include <trace/events/udp.h> #include <linux/static_key.h> +#include <trace/events/skb.h> #include "udp_impl.h" struct udp_table udp_table __read_mostly; @@ -615,6 +616,7 @@ void __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable) break; case ICMP_DEST_UNREACH: if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */ + ipv4_sk_update_pmtu(skb, sk, info); if (inet->pmtudisc != IP_PMTUDISC_DONT) { err = EMSGSIZE; harderr = 1; @@ -628,6 +630,9 @@ void __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable) err = icmp_err_convert[code].errno; } break; + case ICMP_REDIRECT: + ipv4_sk_redirect(skb, sk); + break; } /* @@ -1219,8 +1224,10 @@ try_again: goto csum_copy_err; } - if (err) + if (unlikely(err)) { + trace_kfree_skb(skb, udp_recvmsg); goto out_free; + } if (!peeked) UDP_INC_STATS_USER(sock_net(sk), diff --git a/net/ipv4/udp_diag.c b/net/ipv4/udp_diag.c index 
a7f86a3cd502..16d0960062be 100644 --- a/net/ipv4/udp_diag.c +++ b/net/ipv4/udp_diag.c @@ -34,15 +34,16 @@ static int udp_dump_one(struct udp_table *tbl, struct sk_buff *in_skb, int err = -EINVAL; struct sock *sk; struct sk_buff *rep; + struct net *net = sock_net(in_skb->sk); if (req->sdiag_family == AF_INET) - sk = __udp4_lib_lookup(&init_net, + sk = __udp4_lib_lookup(net, req->id.idiag_src[0], req->id.idiag_sport, req->id.idiag_dst[0], req->id.idiag_dport, req->id.idiag_if, tbl); #if IS_ENABLED(CONFIG_IPV6) else if (req->sdiag_family == AF_INET6) - sk = __udp6_lib_lookup(&init_net, + sk = __udp6_lib_lookup(net, (struct in6_addr *)req->id.idiag_src, req->id.idiag_sport, (struct in6_addr *)req->id.idiag_dst, @@ -75,7 +76,7 @@ static int udp_dump_one(struct udp_table *tbl, struct sk_buff *in_skb, kfree_skb(rep); goto out; } - err = netlink_unicast(sock_diag_nlsk, rep, NETLINK_CB(in_skb).pid, + err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).pid, MSG_DONTWAIT); if (err > 0) err = 0; @@ -90,6 +91,7 @@ static void udp_dump(struct udp_table *table, struct sk_buff *skb, struct netlin struct inet_diag_req_v2 *r, struct nlattr *bc) { int num, s_num, slot, s_slot; + struct net *net = sock_net(skb->sk); s_slot = cb->args[0]; num = s_num = cb->args[1]; @@ -106,6 +108,8 @@ static void udp_dump(struct udp_table *table, struct sk_buff *skb, struct netlin sk_nulls_for_each(sk, node, &hslot->head) { struct inet_sock *inet = inet_sk(sk); + if (!net_eq(sock_net(sk), net)) + continue; if (num < s_num) goto next; if (!(r->idiag_states & (1 << sk->sk_state))) diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c index 06814b6216dc..58d23a572509 100644 --- a/net/ipv4/xfrm4_input.c +++ b/net/ipv4/xfrm4_input.c @@ -27,8 +27,8 @@ static inline int xfrm4_rcv_encap_finish(struct sk_buff *skb) if (skb_dst(skb) == NULL) { const struct iphdr *iph = ip_hdr(skb); - if (ip_route_input_noref(skb, iph->daddr, iph->saddr, - iph->tos, skb->dev)) + if (ip_route_input(skb, iph->daddr, iph->saddr, + iph->tos, skb->dev)) goto drop; } return dst_input(skb); diff --git a/net/ipv4/xfrm4_mode_tunnel.c b/net/ipv4/xfrm4_mode_tunnel.c index ed4bf11ef9f4..ddee0a099a2c 100644 --- a/net/ipv4/xfrm4_mode_tunnel.c +++ b/net/ipv4/xfrm4_mode_tunnel.c @@ -15,6 +15,65 @@ #include <net/ip.h> #include <net/xfrm.h> +/* Informational hook. The decap is still done here. 
*/ +static struct xfrm_tunnel __rcu *rcv_notify_handlers __read_mostly; +static DEFINE_MUTEX(xfrm4_mode_tunnel_input_mutex); + +int xfrm4_mode_tunnel_input_register(struct xfrm_tunnel *handler) +{ + struct xfrm_tunnel __rcu **pprev; + struct xfrm_tunnel *t; + int ret = -EEXIST; + int priority = handler->priority; + + mutex_lock(&xfrm4_mode_tunnel_input_mutex); + + for (pprev = &rcv_notify_handlers; + (t = rcu_dereference_protected(*pprev, + lockdep_is_held(&xfrm4_mode_tunnel_input_mutex))) != NULL; + pprev = &t->next) { + if (t->priority > priority) + break; + if (t->priority == priority) + goto err; + + } + + handler->next = *pprev; + rcu_assign_pointer(*pprev, handler); + + ret = 0; + +err: + mutex_unlock(&xfrm4_mode_tunnel_input_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(xfrm4_mode_tunnel_input_register); + +int xfrm4_mode_tunnel_input_deregister(struct xfrm_tunnel *handler) +{ + struct xfrm_tunnel __rcu **pprev; + struct xfrm_tunnel *t; + int ret = -ENOENT; + + mutex_lock(&xfrm4_mode_tunnel_input_mutex); + for (pprev = &rcv_notify_handlers; + (t = rcu_dereference_protected(*pprev, + lockdep_is_held(&xfrm4_mode_tunnel_input_mutex))) != NULL; + pprev = &t->next) { + if (t == handler) { + *pprev = handler->next; + ret = 0; + break; + } + } + mutex_unlock(&xfrm4_mode_tunnel_input_mutex); + synchronize_net(); + + return ret; +} +EXPORT_SYMBOL_GPL(xfrm4_mode_tunnel_input_deregister); + static inline void ipip_ecn_decapsulate(struct sk_buff *skb) { struct iphdr *inner_iph = ipip_hdr(skb); @@ -64,8 +123,14 @@ static int xfrm4_mode_tunnel_output(struct xfrm_state *x, struct sk_buff *skb) return 0; } +#define for_each_input_rcu(head, handler) \ + for (handler = rcu_dereference(head); \ + handler != NULL; \ + handler = rcu_dereference(handler->next)) + static int xfrm4_mode_tunnel_input(struct xfrm_state *x, struct sk_buff *skb) { + struct xfrm_tunnel *handler; int err = -EINVAL; if (XFRM_MODE_SKB_CB(skb)->protocol != IPPROTO_IPIP) @@ -74,6 +139,9 @@ static int xfrm4_mode_tunnel_input(struct xfrm_state *x, struct sk_buff *skb) if (!pskb_may_pull(skb, sizeof(struct iphdr))) goto out; + for_each_input_rcu(rcv_notify_handlers, handler) + handler->handler(skb); + if (skb_cloned(skb) && (err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC))) goto out; diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c index 0d3426cb5c4f..c6281847f16a 100644 --- a/net/ipv4/xfrm4_policy.c +++ b/net/ipv4/xfrm4_policy.c @@ -79,30 +79,19 @@ static int xfrm4_fill_dst(struct xfrm_dst *xdst, struct net_device *dev, struct rtable *rt = (struct rtable *)xdst->route; const struct flowi4 *fl4 = &fl->u.ip4; - xdst->u.rt.rt_key_dst = fl4->daddr; - xdst->u.rt.rt_key_src = fl4->saddr; - xdst->u.rt.rt_key_tos = fl4->flowi4_tos; - xdst->u.rt.rt_route_iif = fl4->flowi4_iif; xdst->u.rt.rt_iif = fl4->flowi4_iif; - xdst->u.rt.rt_oif = fl4->flowi4_oif; - xdst->u.rt.rt_mark = fl4->flowi4_mark; xdst->u.dst.dev = dev; dev_hold(dev); - xdst->u.rt.peer = rt->peer; - if (rt->peer) - atomic_inc(&rt->peer->refcnt); - /* Sheit... I remember I did this right. 
Apparently, * it was magically lost, so this code needs audit */ + xdst->u.rt.rt_is_input = rt->rt_is_input; xdst->u.rt.rt_flags = rt->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST | RTCF_LOCAL); xdst->u.rt.rt_type = rt->rt_type; - xdst->u.rt.rt_src = rt->rt_src; - xdst->u.rt.rt_dst = rt->rt_dst; xdst->u.rt.rt_gateway = rt->rt_gateway; - xdst->u.rt.rt_spec_dst = rt->rt_spec_dst; + xdst->u.rt.rt_pmtu = rt->rt_pmtu; return 0; } @@ -198,12 +187,22 @@ static inline int xfrm4_garbage_collect(struct dst_ops *ops) return (dst_entries_get_slow(ops) > ops->gc_thresh * 2); } -static void xfrm4_update_pmtu(struct dst_entry *dst, u32 mtu) +static void xfrm4_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) +{ + struct xfrm_dst *xdst = (struct xfrm_dst *)dst; + struct dst_entry *path = xdst->route; + + path->ops->update_pmtu(path, sk, skb, mtu); +} + +static void xfrm4_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) { struct xfrm_dst *xdst = (struct xfrm_dst *)dst; struct dst_entry *path = xdst->route; - path->ops->update_pmtu(path, mtu); + path->ops->redirect(path, sk, skb); } static void xfrm4_dst_destroy(struct dst_entry *dst) @@ -212,9 +211,6 @@ static void xfrm4_dst_destroy(struct dst_entry *dst) dst_destroy_metrics_generic(dst); - if (likely(xdst->u.rt.peer)) - inet_putpeer(xdst->u.rt.peer); - xfrm_dst_destroy(xdst); } @@ -232,6 +228,7 @@ static struct dst_ops xfrm4_dst_ops = { .protocol = cpu_to_be16(ETH_P_IP), .gc = xfrm4_garbage_collect, .update_pmtu = xfrm4_update_pmtu, + .redirect = xfrm4_redirect, .cow_metrics = dst_cow_metrics_generic, .destroy = xfrm4_dst_destroy, .ifdown = xfrm4_dst_ifdown, diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 8f6411c97189..79181819a24f 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -63,6 +63,7 @@ #include <linux/delay.h> #include <linux/notifier.h> #include <linux/string.h> +#include <linux/hash.h> #include <net/net_namespace.h> #include <net/sock.h> @@ -579,15 +580,9 @@ ipv6_link_dev_addr(struct inet6_dev *idev, struct inet6_ifaddr *ifp) list_add_tail(&ifp->if_list, p); } -static u32 ipv6_addr_hash(const struct in6_addr *addr) +static u32 inet6_addr_hash(const struct in6_addr *addr) { - /* - * We perform the hash function over the last 64 bits of the address - * This will include the IEEE address token on links that support it. 
- */ - return jhash_2words((__force u32)addr->s6_addr32[2], - (__force u32)addr->s6_addr32[3], 0) - & (IN6_ADDR_HSIZE - 1); + return hash_32(ipv6_addr_hash(addr), IN6_ADDR_HSIZE_SHIFT); } /* On success it returns ifp with increased reference count */ @@ -662,7 +657,7 @@ ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, int pfxlen, in6_ifa_hold(ifa); /* Add to big hash table */ - hash = ipv6_addr_hash(addr); + hash = inet6_addr_hash(addr); hlist_add_head_rcu(&ifa->addr_lst, &inet6_addr_lst[hash]); spin_unlock(&addrconf_hash_lock); @@ -1270,7 +1265,7 @@ int ipv6_chk_addr(struct net *net, const struct in6_addr *addr, { struct inet6_ifaddr *ifp; struct hlist_node *node; - unsigned int hash = ipv6_addr_hash(addr); + unsigned int hash = inet6_addr_hash(addr); rcu_read_lock_bh(); hlist_for_each_entry_rcu(ifp, node, &inet6_addr_lst[hash], addr_lst) { @@ -1293,7 +1288,7 @@ EXPORT_SYMBOL(ipv6_chk_addr); static bool ipv6_chk_same_addr(struct net *net, const struct in6_addr *addr, struct net_device *dev) { - unsigned int hash = ipv6_addr_hash(addr); + unsigned int hash = inet6_addr_hash(addr); struct inet6_ifaddr *ifp; struct hlist_node *node; @@ -1336,7 +1331,7 @@ struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net, const struct in6_addr *add struct net_device *dev, int strict) { struct inet6_ifaddr *ifp, *result = NULL; - unsigned int hash = ipv6_addr_hash(addr); + unsigned int hash = inet6_addr_hash(addr); struct hlist_node *node; rcu_read_lock_bh(); @@ -3223,7 +3218,7 @@ int ipv6_chk_home_addr(struct net *net, const struct in6_addr *addr) int ret = 0; struct inet6_ifaddr *ifp = NULL; struct hlist_node *n; - unsigned int hash = ipv6_addr_hash(addr); + unsigned int hash = inet6_addr_hash(addr); rcu_read_lock_bh(); hlist_for_each_entry_rcu_bh(ifp, n, &inet6_addr_lst[hash], addr_lst) { diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c index f1a4a2c28ed3..7e6139508ee7 100644 --- a/net/ipv6/ah6.c +++ b/net/ipv6/ah6.c @@ -35,6 +35,7 @@ #include <linux/pfkeyv2.h> #include <linux/string.h> #include <linux/scatterlist.h> +#include <net/ip6_route.h> #include <net/icmp.h> #include <net/ipv6.h> #include <net/protocol.h> @@ -612,16 +613,18 @@ static void ah6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, struct xfrm_state *x; if (type != ICMPV6_DEST_UNREACH && - type != ICMPV6_PKT_TOOBIG) + type != ICMPV6_PKT_TOOBIG && + type != NDISC_REDIRECT) return; x = xfrm_state_lookup(net, skb->mark, (xfrm_address_t *)&iph->daddr, ah->spi, IPPROTO_AH, AF_INET6); if (!x) return; - NETDEBUG(KERN_DEBUG "pmtu discovery on SA AH/%08x/%pI6\n", - ntohl(ah->spi), &iph->daddr); - + if (type == NDISC_REDIRECT) + ip6_redirect(skb, net, 0, 0); + else + ip6_update_pmtu(skb, net, info, 0, 0); xfrm_state_put(x); } diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c index db1521fcda5b..6dc7fd353ef5 100644 --- a/net/ipv6/esp6.c +++ b/net/ipv6/esp6.c @@ -39,6 +39,7 @@ #include <linux/random.h> #include <linux/slab.h> #include <linux/spinlock.h> +#include <net/ip6_route.h> #include <net/icmp.h> #include <net/ipv6.h> #include <net/protocol.h> @@ -433,15 +434,19 @@ static void esp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, struct xfrm_state *x; if (type != ICMPV6_DEST_UNREACH && - type != ICMPV6_PKT_TOOBIG) + type != ICMPV6_PKT_TOOBIG && + type != NDISC_REDIRECT) return; x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr, esph->spi, IPPROTO_ESP, AF_INET6); if (!x) return; - pr_debug("pmtu discovery on SA ESP/%08x/%pI6\n", - ntohl(esph->spi), &iph->daddr); + + if (type == NDISC_REDIRECT) + 
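[Editor's note] The addrconf.c hunk above replaces the open-coded jhash over the low 64 bits with hash_32() applied to a fold of the whole address. A userspace approximation of that scheme is sketched below: the address words are XOR-folded as ipv6_addr_hash() does and then spread over 2^shift buckets with a multiplicative hash. The multiplier shown is the classic 32-bit golden-ratio constant; the kernel's hash_32() constant differs between versions, so treat the exact value (and the bucket count) as assumptions of the example.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define ADDR_HSIZE_SHIFT 4              /* 16 buckets, purely for illustration */

/* XOR-fold the four 32-bit words of an IPv6 address, as ipv6_addr_hash() does. */
static uint32_t addr_fold(const struct in6_addr *addr)
{
        uint32_t w[4];

        memcpy(w, addr->s6_addr, sizeof(w));
        return w[0] ^ w[1] ^ w[2] ^ w[3];
}

/* Multiplicative hash into 2^bits buckets (same idea as the kernel's hash_32()). */
static uint32_t hash_32_ish(uint32_t val, unsigned int bits)
{
        return (val * 0x9e370001u) >> (32 - bits);
}

int main(void)
{
        struct in6_addr a;

        inet_pton(AF_INET6, "2001:db8::1", &a);
        printf("bucket %u of %u\n",
               hash_32_ish(addr_fold(&a), ADDR_HSIZE_SHIFT),
               1u << ADDR_HSIZE_SHIFT);
        return 0;
}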
ip6_redirect(skb, net, 0, 0); + else + ip6_update_pmtu(skb, net, info, 0, 0); xfrm_state_put(x); } diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c index 6447dc49429f..fa3d9c328092 100644 --- a/net/ipv6/exthdrs.c +++ b/net/ipv6/exthdrs.c @@ -791,14 +791,14 @@ static int ipv6_renew_option(void *ohdr, if (ohdr) { memcpy(*p, ohdr, ipv6_optlen((struct ipv6_opt_hdr *)ohdr)); *hdr = (struct ipv6_opt_hdr *)*p; - *p += CMSG_ALIGN(ipv6_optlen(*(struct ipv6_opt_hdr **)hdr)); + *p += CMSG_ALIGN(ipv6_optlen(*hdr)); } } else { if (newopt) { if (copy_from_user(*p, newopt, newoptlen)) return -EFAULT; *hdr = (struct ipv6_opt_hdr *)*p; - if (ipv6_optlen(*(struct ipv6_opt_hdr **)hdr) > newoptlen) + if (ipv6_optlen(*hdr) > newoptlen) return -EINVAL; *p += CMSG_ALIGN(newoptlen); } diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index 091a2971c7b7..24d69dbca4d6 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -188,14 +188,16 @@ static inline bool icmpv6_xrlim_allow(struct sock *sk, u8 type, } else { struct rt6_info *rt = (struct rt6_info *)dst; int tmo = net->ipv6.sysctl.icmpv6_time; + struct inet_peer *peer; /* Give more bandwidth to wider prefixes. */ if (rt->rt6i_dst.plen < 128) tmo >>= ((128 - rt->rt6i_dst.plen)>>5); - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); - res = inet_peer_xrlim_allow(rt->rt6i_peer, tmo); + peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1); + res = inet_peer_xrlim_allow(peer, tmo); + if (peer) + inet_putpeer(peer); } dst_release(dst); return res; @@ -596,13 +598,12 @@ out: icmpv6_xmit_unlock(sk); } -static void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info) +void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info) { const struct inet6_protocol *ipprot; int inner_offset; - int hash; - u8 nexthdr; __be16 frag_off; + u8 nexthdr; if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) return; @@ -629,10 +630,8 @@ static void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info) --ANK (980726) */ - hash = nexthdr & (MAX_INET_PROTOS - 1); - rcu_read_lock(); - ipprot = rcu_dereference(inet6_protos[hash]); + ipprot = rcu_dereference(inet6_protos[nexthdr]); if (ipprot && ipprot->err_handler) ipprot->err_handler(skb, NULL, type, code, inner_offset, info); rcu_read_unlock(); @@ -649,7 +648,6 @@ static int icmpv6_rcv(struct sk_buff *skb) struct net_device *dev = skb->dev; struct inet6_dev *idev = __in6_dev_get(dev); const struct in6_addr *saddr, *daddr; - const struct ipv6hdr *orig_hdr; struct icmp6hdr *hdr; u8 type; @@ -661,7 +659,7 @@ static int icmpv6_rcv(struct sk_buff *skb) XFRM_STATE_ICMP)) goto drop_no_count; - if (!pskb_may_pull(skb, sizeof(*hdr) + sizeof(*orig_hdr))) + if (!pskb_may_pull(skb, sizeof(*hdr) + sizeof(struct ipv6hdr))) goto drop_no_count; nh = skb_network_offset(skb); @@ -722,9 +720,6 @@ static int icmpv6_rcv(struct sk_buff *skb) if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) goto discard_it; hdr = icmp6_hdr(skb); - orig_hdr = (struct ipv6hdr *) (hdr + 1); - rt6_pmtu_discovery(&orig_hdr->daddr, &orig_hdr->saddr, dev, - ntohl(hdr->icmp6_mtu)); /* * Drop through to notify diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c index e6cee5292a0b..0251a6005be8 100644 --- a/net/ipv6/inet6_connection_sock.c +++ b/net/ipv6/inet6_connection_sock.c @@ -55,26 +55,26 @@ int inet6_csk_bind_conflict(const struct sock *sk, EXPORT_SYMBOL_GPL(inet6_csk_bind_conflict); struct dst_entry *inet6_csk_route_req(struct sock *sk, + struct flowi6 *fl6, const struct request_sock *req) { struct inet6_request_sock 
*treq = inet6_rsk(req); struct ipv6_pinfo *np = inet6_sk(sk); struct in6_addr *final_p, final; struct dst_entry *dst; - struct flowi6 fl6; - memset(&fl6, 0, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_TCP; - fl6.daddr = treq->rmt_addr; - final_p = fl6_update_dst(&fl6, np->opt, &final); - fl6.saddr = treq->loc_addr; - fl6.flowi6_oif = sk->sk_bound_dev_if; - fl6.flowi6_mark = sk->sk_mark; - fl6.fl6_dport = inet_rsk(req)->rmt_port; - fl6.fl6_sport = inet_rsk(req)->loc_port; - security_req_classify_flow(req, flowi6_to_flowi(&fl6)); - - dst = ip6_dst_lookup_flow(sk, &fl6, final_p, false); + memset(fl6, 0, sizeof(*fl6)); + fl6->flowi6_proto = IPPROTO_TCP; + fl6->daddr = treq->rmt_addr; + final_p = fl6_update_dst(fl6, np->opt, &final); + fl6->saddr = treq->loc_addr; + fl6->flowi6_oif = treq->iif; + fl6->flowi6_mark = sk->sk_mark; + fl6->fl6_dport = inet_rsk(req)->rmt_port; + fl6->fl6_sport = inet_rsk(req)->loc_port; + security_req_classify_flow(req, flowi6_to_flowi(fl6)); + + dst = ip6_dst_lookup_flow(sk, fl6, final_p, false); if (IS_ERR(dst)) return NULL; @@ -171,7 +171,8 @@ EXPORT_SYMBOL_GPL(inet6_csk_addr2sockaddr); static inline void __inet6_csk_dst_store(struct sock *sk, struct dst_entry *dst, - struct in6_addr *daddr, struct in6_addr *saddr) + const struct in6_addr *daddr, + const struct in6_addr *saddr) { __ip6_dst_store(sk, dst, daddr, saddr); @@ -203,43 +204,52 @@ struct dst_entry *__inet6_csk_dst_check(struct sock *sk, u32 cookie) return dst; } -int inet6_csk_xmit(struct sk_buff *skb, struct flowi *fl_unused) +static struct dst_entry *inet6_csk_route_socket(struct sock *sk, + struct flowi6 *fl6) { - struct sock *sk = skb->sk; struct inet_sock *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); - struct flowi6 fl6; - struct dst_entry *dst; struct in6_addr *final_p, final; - int res; + struct dst_entry *dst; - memset(&fl6, 0, sizeof(fl6)); - fl6.flowi6_proto = sk->sk_protocol; - fl6.daddr = np->daddr; - fl6.saddr = np->saddr; - fl6.flowlabel = np->flow_label; - IP6_ECN_flow_xmit(sk, fl6.flowlabel); - fl6.flowi6_oif = sk->sk_bound_dev_if; - fl6.flowi6_mark = sk->sk_mark; - fl6.fl6_sport = inet->inet_sport; - fl6.fl6_dport = inet->inet_dport; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + memset(fl6, 0, sizeof(*fl6)); + fl6->flowi6_proto = sk->sk_protocol; + fl6->daddr = np->daddr; + fl6->saddr = np->saddr; + fl6->flowlabel = np->flow_label; + IP6_ECN_flow_xmit(sk, fl6->flowlabel); + fl6->flowi6_oif = sk->sk_bound_dev_if; + fl6->flowi6_mark = sk->sk_mark; + fl6->fl6_sport = inet->inet_sport; + fl6->fl6_dport = inet->inet_dport; + security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); - final_p = fl6_update_dst(&fl6, np->opt, &final); + final_p = fl6_update_dst(fl6, np->opt, &final); dst = __inet6_csk_dst_check(sk, np->dst_cookie); + if (!dst) { + dst = ip6_dst_lookup_flow(sk, fl6, final_p, false); - if (dst == NULL) { - dst = ip6_dst_lookup_flow(sk, &fl6, final_p, false); + if (!IS_ERR(dst)) + __inet6_csk_dst_store(sk, dst, NULL, NULL); + } + return dst; +} - if (IS_ERR(dst)) { - sk->sk_err_soft = -PTR_ERR(dst); - sk->sk_route_caps = 0; - kfree_skb(skb); - return PTR_ERR(dst); - } +int inet6_csk_xmit(struct sk_buff *skb, struct flowi *fl_unused) +{ + struct sock *sk = skb->sk; + struct ipv6_pinfo *np = inet6_sk(sk); + struct flowi6 fl6; + struct dst_entry *dst; + int res; - __inet6_csk_dst_store(sk, dst, NULL, NULL); + dst = inet6_csk_route_socket(sk, &fl6); + if (IS_ERR(dst)) { + sk->sk_err_soft = -PTR_ERR(dst); + sk->sk_route_caps = 0; + kfree_skb(skb); + return 
PTR_ERR(dst); } rcu_read_lock(); @@ -253,3 +263,16 @@ int inet6_csk_xmit(struct sk_buff *skb, struct flowi *fl_unused) return res; } EXPORT_SYMBOL_GPL(inet6_csk_xmit); + +struct dst_entry *inet6_csk_update_pmtu(struct sock *sk, u32 mtu) +{ + struct flowi6 fl6; + struct dst_entry *dst = inet6_csk_route_socket(sk, &fl6); + + if (IS_ERR(dst)) + return NULL; + dst->ops->update_pmtu(dst, sk, NULL, mtu); + + return inet6_csk_route_socket(sk, &fl6); +} +EXPORT_SYMBOL_GPL(inet6_csk_update_pmtu); diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index 608327661960..13690d650c3e 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -197,6 +197,7 @@ static struct fib6_table *fib6_alloc_table(struct net *net, u32 id) table->tb6_id = id; table->tb6_root.leaf = net->ipv6.ip6_null_entry; table->tb6_root.fn_flags = RTN_ROOT | RTN_TL_ROOT | RTN_RTINFO; + inet_peer_base_init(&table->tb6_peers); } return table; @@ -1633,6 +1634,7 @@ static int __net_init fib6_net_init(struct net *net) net->ipv6.fib6_main_tbl->tb6_root.leaf = net->ipv6.ip6_null_entry; net->ipv6.fib6_main_tbl->tb6_root.fn_flags = RTN_ROOT | RTN_TL_ROOT | RTN_RTINFO; + inet_peer_base_init(&net->ipv6.fib6_main_tbl->tb6_peers); #ifdef CONFIG_IPV6_MULTIPLE_TABLES net->ipv6.fib6_local_tbl = kzalloc(sizeof(*net->ipv6.fib6_local_tbl), @@ -1643,6 +1645,7 @@ static int __net_init fib6_net_init(struct net *net) net->ipv6.fib6_local_tbl->tb6_root.leaf = net->ipv6.ip6_null_entry; net->ipv6.fib6_local_tbl->tb6_root.fn_flags = RTN_ROOT | RTN_TL_ROOT | RTN_RTINFO; + inet_peer_base_init(&net->ipv6.fib6_local_tbl->tb6_peers); #endif fib6_tables_init(net); @@ -1666,8 +1669,10 @@ static void fib6_net_exit(struct net *net) del_timer_sync(&net->ipv6.ip6_fib_timer); #ifdef CONFIG_IPV6_MULTIPLE_TABLES + inetpeer_invalidate_tree(&net->ipv6.fib6_local_tbl->tb6_peers); kfree(net->ipv6.fib6_local_tbl); #endif + inetpeer_invalidate_tree(&net->ipv6.fib6_main_tbl->tb6_peers); kfree(net->ipv6.fib6_main_tbl); kfree(net->ipv6.fib_table_hash); kfree(net->ipv6.rt6_stats); diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c index 21a15dfe4a9e..5ab923e51af3 100644 --- a/net/ipv6/ip6_input.c +++ b/net/ipv6/ip6_input.c @@ -168,13 +168,12 @@ drop: static int ip6_input_finish(struct sk_buff *skb) { + struct net *net = dev_net(skb_dst(skb)->dev); const struct inet6_protocol *ipprot; + struct inet6_dev *idev; unsigned int nhoff; int nexthdr; bool raw; - u8 hash; - struct inet6_dev *idev; - struct net *net = dev_net(skb_dst(skb)->dev); /* * Parse extension headers @@ -189,9 +188,7 @@ resubmit: nexthdr = skb_network_header(skb)[nhoff]; raw = raw6_local_deliver(skb, nexthdr); - - hash = nexthdr & (MAX_INET_PROTOS - 1); - if ((ipprot = rcu_dereference(inet6_protos[hash])) != NULL) { + if ((ipprot = rcu_dereference(inet6_protos[nexthdr])) != NULL) { int ret; if (ipprot->flags & INET6_PROTO_FINAL) { diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index decc21d19c53..5b2d63ed793e 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -83,24 +83,12 @@ int ip6_local_out(struct sk_buff *skb) } EXPORT_SYMBOL_GPL(ip6_local_out); -/* dev_loopback_xmit for use with netfilter. 
*/ -static int ip6_dev_loopback_xmit(struct sk_buff *newskb) -{ - skb_reset_mac_header(newskb); - __skb_pull(newskb, skb_network_offset(newskb)); - newskb->pkt_type = PACKET_LOOPBACK; - newskb->ip_summed = CHECKSUM_UNNECESSARY; - WARN_ON(!skb_dst(newskb)); - - netif_rx_ni(newskb); - return 0; -} - static int ip6_finish_output2(struct sk_buff *skb) { struct dst_entry *dst = skb_dst(skb); struct net_device *dev = dst->dev; struct neighbour *neigh; + struct rt6_info *rt; skb->protocol = htons(ETH_P_IPV6); skb->dev = dev; @@ -121,7 +109,7 @@ static int ip6_finish_output2(struct sk_buff *skb) if (newskb) NF_HOOK(NFPROTO_IPV6, NF_INET_POST_ROUTING, newskb, NULL, newskb->dev, - ip6_dev_loopback_xmit); + dev_loopback_xmit); if (ipv6_hdr(skb)->hop_limit == 0) { IP6_INC_STATS(dev_net(dev), idev, @@ -136,9 +124,10 @@ static int ip6_finish_output2(struct sk_buff *skb) } rcu_read_lock(); - neigh = dst_get_neighbour_noref(dst); + rt = (struct rt6_info *) dst; + neigh = rt->n; if (neigh) { - int res = neigh_output(neigh, skb); + int res = dst_neigh_output(dst, neigh, skb); rcu_read_unlock(); return res; @@ -463,6 +452,7 @@ int ip6_forward(struct sk_buff *skb) */ if (skb->dev == dst->dev && opt->srcrt == 0 && !skb_sec_path(skb)) { struct in6_addr *target = NULL; + struct inet_peer *peer; struct rt6_info *rt; /* @@ -476,14 +466,15 @@ int ip6_forward(struct sk_buff *skb) else target = &hdr->daddr; - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); + peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1); /* Limit redirects both by destination (here) and by source (inside ndisc_send_redirect) */ - if (inet_peer_xrlim_allow(rt->rt6i_peer, 1*HZ)) + if (inet_peer_xrlim_allow(peer, 1*HZ)) ndisc_send_redirect(skb, target); + if (peer) + inet_putpeer(peer); } else { int addrtype = ipv6_addr_type(&hdr->saddr); @@ -604,12 +595,13 @@ void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt) if (rt && !(rt->dst.flags & DST_NOPEER)) { struct inet_peer *peer; + struct net *net; - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); - peer = rt->rt6i_peer; + net = dev_net(rt->dst.dev); + peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1); if (peer) { fhdr->identification = htonl(inet_getid(peer, 0)); + inet_putpeer(peer); return; } } @@ -960,6 +952,7 @@ static int ip6_dst_lookup_tail(struct sock *sk, struct net *net = sock_net(sk); #ifdef CONFIG_IPV6_OPTIMISTIC_DAD struct neighbour *n; + struct rt6_info *rt; #endif int err; @@ -988,7 +981,8 @@ static int ip6_dst_lookup_tail(struct sock *sk, * dst entry of the nexthop router */ rcu_read_lock(); - n = dst_get_neighbour_noref(*dst); + rt = (struct rt6_info *) *dst; + n = rt->n; if (n && !(n->nud_state & NUD_VALID)) { struct inet6_ifaddr *ifp; struct flowi6 fl_gw6; diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c index c9015fad8d65..9a1d5fe6aef8 100644 --- a/net/ipv6/ip6_tunnel.c +++ b/net/ipv6/ip6_tunnel.c @@ -40,6 +40,7 @@ #include <linux/rtnetlink.h> #include <linux/netfilter_ipv6.h> #include <linux/slab.h> +#include <linux/hash.h> #include <asm/uaccess.h> #include <linux/atomic.h> @@ -70,11 +71,15 @@ MODULE_ALIAS_NETDEV("ip6tnl0"); #define IPV6_TCLASS_MASK (IPV6_FLOWINFO_MASK & ~IPV6_FLOWLABEL_MASK) #define IPV6_TCLASS_SHIFT 20 -#define HASH_SIZE 32 +#define HASH_SIZE_SHIFT 5 +#define HASH_SIZE (1 << HASH_SIZE_SHIFT) -#define HASH(addr) ((__force u32)((addr)->s6_addr32[0] ^ (addr)->s6_addr32[1] ^ \ - (addr)->s6_addr32[2] ^ (addr)->s6_addr32[3]) & \ - (HASH_SIZE - 1)) +static u32 HASH(const struct in6_addr *addr1, const struct in6_addr *addr2) +{ 
+ u32 hash = ipv6_addr_hash(addr1) ^ ipv6_addr_hash(addr2); + + return hash_32(hash, HASH_SIZE_SHIFT); +} static int ip6_tnl_dev_init(struct net_device *dev); static void ip6_tnl_dev_setup(struct net_device *dev); @@ -166,12 +171,11 @@ static inline void ip6_tnl_dst_store(struct ip6_tnl *t, struct dst_entry *dst) static struct ip6_tnl * ip6_tnl_lookup(struct net *net, const struct in6_addr *remote, const struct in6_addr *local) { - unsigned int h0 = HASH(remote); - unsigned int h1 = HASH(local); + unsigned int hash = HASH(remote, local); struct ip6_tnl *t; struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id); - for_each_ip6_tunnel_rcu(ip6n->tnls_r_l[h0 ^ h1]) { + for_each_ip6_tunnel_rcu(ip6n->tnls_r_l[hash]) { if (ipv6_addr_equal(local, &t->parms.laddr) && ipv6_addr_equal(remote, &t->parms.raddr) && (t->dev->flags & IFF_UP)) @@ -205,7 +209,7 @@ ip6_tnl_bucket(struct ip6_tnl_net *ip6n, const struct ip6_tnl_parm *p) if (!ipv6_addr_any(remote) || !ipv6_addr_any(local)) { prio = 1; - h = HASH(remote) ^ HASH(local); + h = HASH(remote, local); } return &ip6n->tnls[prio][h]; } @@ -252,7 +256,7 @@ static void ip6_dev_free(struct net_device *dev) } /** - * ip6_tnl_create() - create a new tunnel + * ip6_tnl_create - create a new tunnel * @p: tunnel parameters * @pt: pointer to new tunnel * @@ -550,6 +554,9 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, rel_type = ICMP_DEST_UNREACH; rel_code = ICMP_FRAG_NEEDED; break; + case NDISC_REDIRECT: + rel_type = ICMP_REDIRECT; + rel_code = ICMP_REDIR_HOST; default: return 0; } @@ -606,8 +613,10 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, if (rel_info > dst_mtu(skb_dst(skb2))) goto out; - skb_dst(skb2)->ops->update_pmtu(skb_dst(skb2), rel_info); + skb_dst(skb2)->ops->update_pmtu(skb_dst(skb2), NULL, skb2, rel_info); } + if (rel_type == ICMP_REDIRECT) + skb_dst(skb2)->ops->redirect(skb_dst(skb2), NULL, skb2); icmp_send(skb2, rel_type, rel_code, htonl(rel_info)); @@ -684,24 +693,50 @@ static void ip6ip6_dscp_ecn_decapsulate(const struct ip6_tnl *t, IP6_ECN_set_ce(ipv6_hdr(skb)); } +static __u32 ip6_tnl_get_cap(struct ip6_tnl *t, + const struct in6_addr *laddr, + const struct in6_addr *raddr) +{ + struct ip6_tnl_parm *p = &t->parms; + int ltype = ipv6_addr_type(laddr); + int rtype = ipv6_addr_type(raddr); + __u32 flags = 0; + + if (ltype == IPV6_ADDR_ANY || rtype == IPV6_ADDR_ANY) { + flags = IP6_TNL_F_CAP_PER_PACKET; + } else if (ltype & (IPV6_ADDR_UNICAST|IPV6_ADDR_MULTICAST) && + rtype & (IPV6_ADDR_UNICAST|IPV6_ADDR_MULTICAST) && + !((ltype|rtype) & IPV6_ADDR_LOOPBACK) && + (!((ltype|rtype) & IPV6_ADDR_LINKLOCAL) || p->link)) { + if (ltype&IPV6_ADDR_UNICAST) + flags |= IP6_TNL_F_CAP_XMIT; + if (rtype&IPV6_ADDR_UNICAST) + flags |= IP6_TNL_F_CAP_RCV; + } + return flags; +} + /* called with rcu_read_lock() */ -static inline int ip6_tnl_rcv_ctl(struct ip6_tnl *t) +static inline int ip6_tnl_rcv_ctl(struct ip6_tnl *t, + const struct in6_addr *laddr, + const struct in6_addr *raddr) { struct ip6_tnl_parm *p = &t->parms; int ret = 0; struct net *net = dev_net(t->dev); - if (p->flags & IP6_TNL_F_CAP_RCV) { + if ((p->flags & IP6_TNL_F_CAP_RCV) || + ((p->flags & IP6_TNL_F_CAP_PER_PACKET) && + (ip6_tnl_get_cap(t, laddr, raddr) & IP6_TNL_F_CAP_RCV))) { struct net_device *ldev = NULL; if (p->link) ldev = dev_get_by_index_rcu(net, p->link); - if ((ipv6_addr_is_multicast(&p->laddr) || - likely(ipv6_chk_addr(net, &p->laddr, ldev, 0))) && - likely(!ipv6_chk_addr(net, &p->raddr, NULL, 0))) + if ((ipv6_addr_is_multicast(laddr) || + 
likely(ipv6_chk_addr(net, laddr, ldev, 0))) && + likely(!ipv6_chk_addr(net, raddr, NULL, 0))) ret = 1; - } return ret; } @@ -740,7 +775,7 @@ static int ip6_tnl_rcv(struct sk_buff *skb, __u16 protocol, goto discard; } - if (!ip6_tnl_rcv_ctl(t)) { + if (!ip6_tnl_rcv_ctl(t, &ipv6h->daddr, &ipv6h->saddr)) { t->dev->stats.rx_dropped++; rcu_read_unlock(); goto discard; @@ -921,7 +956,7 @@ static int ip6_tnl_xmit2(struct sk_buff *skb, if (mtu < IPV6_MIN_MTU) mtu = IPV6_MIN_MTU; if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); if (skb->len > mtu) { *pmtu = mtu; err = -EMSGSIZE; @@ -1114,25 +1149,6 @@ tx_err: return NETDEV_TX_OK; } -static void ip6_tnl_set_cap(struct ip6_tnl *t) -{ - struct ip6_tnl_parm *p = &t->parms; - int ltype = ipv6_addr_type(&p->laddr); - int rtype = ipv6_addr_type(&p->raddr); - - p->flags &= ~(IP6_TNL_F_CAP_XMIT|IP6_TNL_F_CAP_RCV); - - if (ltype & (IPV6_ADDR_UNICAST|IPV6_ADDR_MULTICAST) && - rtype & (IPV6_ADDR_UNICAST|IPV6_ADDR_MULTICAST) && - !((ltype|rtype) & IPV6_ADDR_LOOPBACK) && - (!((ltype|rtype) & IPV6_ADDR_LINKLOCAL) || p->link)) { - if (ltype&IPV6_ADDR_UNICAST) - p->flags |= IP6_TNL_F_CAP_XMIT; - if (rtype&IPV6_ADDR_UNICAST) - p->flags |= IP6_TNL_F_CAP_RCV; - } -} - static void ip6_tnl_link_config(struct ip6_tnl *t) { struct net_device *dev = t->dev; @@ -1153,7 +1169,8 @@ static void ip6_tnl_link_config(struct ip6_tnl *t) if (!(p->flags&IP6_TNL_F_USE_ORIG_FLOWLABEL)) fl6->flowlabel |= IPV6_FLOWLABEL_MASK & p->flowinfo; - ip6_tnl_set_cap(t); + p->flags &= ~(IP6_TNL_F_CAP_XMIT|IP6_TNL_F_CAP_RCV|IP6_TNL_F_CAP_PER_PACKET); + p->flags |= ip6_tnl_get_cap(t, &p->laddr, &p->raddr); if (p->flags&IP6_TNL_F_CAP_XMIT && p->flags&IP6_TNL_F_CAP_RCV) dev->flags |= IFF_POINTOPOINT; @@ -1438,6 +1455,9 @@ static int __net_init ip6_fb_tnl_dev_init(struct net_device *dev) t->parms.proto = IPPROTO_IPV6; dev_hold(dev); + + ip6_tnl_link_config(t); + rcu_assign_pointer(ip6n->tnls_wc[0], t); return 0; } diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c index 461e47c8e956..4532973f0dd4 100644 --- a/net/ipv6/ip6mr.c +++ b/net/ipv6/ip6mr.c @@ -2104,8 +2104,9 @@ static int __ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb, if (c->mf6c_parent >= MAXMIFS) return -ENOENT; - if (MIF_EXISTS(mrt, c->mf6c_parent)) - RTA_PUT(skb, RTA_IIF, 4, &mrt->vif6_table[c->mf6c_parent].dev->ifindex); + if (MIF_EXISTS(mrt, c->mf6c_parent) && + nla_put_u32(skb, RTA_IIF, mrt->vif6_table[c->mf6c_parent].dev->ifindex) < 0) + return -EMSGSIZE; mp_head = (struct rtattr *)skb_put(skb, RTA_LENGTH(0)); diff --git a/net/ipv6/ipcomp6.c b/net/ipv6/ipcomp6.c index 5cb75bfe45b1..7af5aee75d98 100644 --- a/net/ipv6/ipcomp6.c +++ b/net/ipv6/ipcomp6.c @@ -46,6 +46,7 @@ #include <linux/list.h> #include <linux/vmalloc.h> #include <linux/rtnetlink.h> +#include <net/ip6_route.h> #include <net/icmp.h> #include <net/ipv6.h> #include <net/protocol.h> @@ -63,7 +64,9 @@ static void ipcomp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, (struct ip_comp_hdr *)(skb->data + offset); struct xfrm_state *x; - if (type != ICMPV6_DEST_UNREACH && type != ICMPV6_PKT_TOOBIG) + if (type != ICMPV6_DEST_UNREACH && + type != ICMPV6_PKT_TOOBIG && + type != NDISC_REDIRECT) return; spi = htonl(ntohs(ipcomph->cpi)); @@ -72,8 +75,10 @@ static void ipcomp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, if (!x) return; - pr_debug("pmtu discovery on SA IPCOMP/%08x/%pI6\n", - spi, &iph->daddr); + if (type == NDISC_REDIRECT) + ip6_redirect(skb, net, 0, 0); + 
else + ip6_update_pmtu(skb, net, info, 0, 0); xfrm_state_put(x); } diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 6d0f5dc8e3a6..92f8e48e4ba4 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -211,6 +211,9 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr) struct ipv6_mc_socklist __rcu **lnk; struct net *net = sock_net(sk); + if (!ipv6_addr_is_multicast(addr)) + return -EINVAL; + spin_lock(&ipv6_sk_mc_lock); for (lnk = &np->ipv6_mc_list; (mc_lst = rcu_dereference_protected(*lnk, diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c index 54f62d3b8dd6..ff36194a71aa 100644 --- a/net/ipv6/ndisc.c +++ b/net/ipv6/ndisc.c @@ -143,40 +143,6 @@ struct neigh_table nd_tbl = { .gc_thresh3 = 1024, }; -/* ND options */ -struct ndisc_options { - struct nd_opt_hdr *nd_opt_array[__ND_OPT_ARRAY_MAX]; -#ifdef CONFIG_IPV6_ROUTE_INFO - struct nd_opt_hdr *nd_opts_ri; - struct nd_opt_hdr *nd_opts_ri_end; -#endif - struct nd_opt_hdr *nd_useropts; - struct nd_opt_hdr *nd_useropts_end; -}; - -#define nd_opts_src_lladdr nd_opt_array[ND_OPT_SOURCE_LL_ADDR] -#define nd_opts_tgt_lladdr nd_opt_array[ND_OPT_TARGET_LL_ADDR] -#define nd_opts_pi nd_opt_array[ND_OPT_PREFIX_INFO] -#define nd_opts_pi_end nd_opt_array[__ND_OPT_PREFIX_INFO_END] -#define nd_opts_rh nd_opt_array[ND_OPT_REDIRECT_HDR] -#define nd_opts_mtu nd_opt_array[ND_OPT_MTU] - -#define NDISC_OPT_SPACE(len) (((len)+2+7)&~7) - -/* - * Return the padding between the option length and the start of the - * link addr. Currently only IP-over-InfiniBand needs this, although - * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may - * also need a pad of 2. - */ -static int ndisc_addr_option_pad(unsigned short type) -{ - switch (type) { - case ARPHRD_INFINIBAND: return 2; - default: return 0; - } -} - static inline int ndisc_opt_addr_space(struct net_device *dev) { return NDISC_OPT_SPACE(dev->addr_len + ndisc_addr_option_pad(dev->type)); @@ -233,8 +199,8 @@ static struct nd_opt_hdr *ndisc_next_useropt(struct nd_opt_hdr *cur, return cur <= end && ndisc_is_useropt(cur) ? 
cur : NULL; } -static struct ndisc_options *ndisc_parse_options(u8 *opt, int opt_len, - struct ndisc_options *ndopts) +struct ndisc_options *ndisc_parse_options(u8 *opt, int opt_len, + struct ndisc_options *ndopts) { struct nd_opt_hdr *nd_opt = (struct nd_opt_hdr *)opt; @@ -297,17 +263,6 @@ static struct ndisc_options *ndisc_parse_options(u8 *opt, int opt_len, return ndopts; } -static inline u8 *ndisc_opt_addr_data(struct nd_opt_hdr *p, - struct net_device *dev) -{ - u8 *lladdr = (u8 *)(p + 1); - int lladdrlen = p->nd_opt_len << 3; - int prepad = ndisc_addr_option_pad(dev->type); - if (lladdrlen != NDISC_OPT_SPACE(dev->addr_len + prepad)) - return NULL; - return lladdr + prepad; -} - int ndisc_mc_map(const struct in6_addr *addr, char *buf, struct net_device *dev, int dir) { switch (dev->type) { @@ -1379,16 +1334,6 @@ out: static void ndisc_redirect_rcv(struct sk_buff *skb) { - struct inet6_dev *in6_dev; - struct icmp6hdr *icmph; - const struct in6_addr *dest; - const struct in6_addr *target; /* new first hop to destination */ - struct neighbour *neigh; - int on_link = 0; - struct ndisc_options ndopts; - int optlen; - u8 *lladdr = NULL; - #ifdef CONFIG_IPV6_NDISC_NODETYPE switch (skb->ndisc_nodetype) { case NDISC_NODETYPE_HOST: @@ -1405,65 +1350,7 @@ static void ndisc_redirect_rcv(struct sk_buff *skb) return; } - optlen = skb->tail - skb->transport_header; - optlen -= sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr); - - if (optlen < 0) { - ND_PRINTK(2, warn, "Redirect: packet too short\n"); - return; - } - - icmph = icmp6_hdr(skb); - target = (const struct in6_addr *) (icmph + 1); - dest = target + 1; - - if (ipv6_addr_is_multicast(dest)) { - ND_PRINTK(2, warn, - "Redirect: destination address is multicast\n"); - return; - } - - if (ipv6_addr_equal(dest, target)) { - on_link = 1; - } else if (ipv6_addr_type(target) != - (IPV6_ADDR_UNICAST|IPV6_ADDR_LINKLOCAL)) { - ND_PRINTK(2, warn, - "Redirect: target address is not link-local unicast\n"); - return; - } - - in6_dev = __in6_dev_get(skb->dev); - if (!in6_dev) - return; - if (in6_dev->cnf.forwarding || !in6_dev->cnf.accept_redirects) - return; - - /* RFC2461 8.1: - * The IP source address of the Redirect MUST be the same as the current - * first-hop router for the specified ICMP Destination Address. 
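[Editor's note] The ndisc.c hunks above stop keeping private copies of the option-parsing helpers and make ndisc_parse_options() non-static so the relocated redirect handling in route.c can reuse it. For readers unfamiliar with the wire format those helpers walk, here is a small standalone sketch of a neighbour-discovery option walk: per RFC 4861 every option is a type byte followed by a length byte counted in 8-octet units, and a zero length aborts parsing. The callback and names are made up for the example.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* RFC 4861, section 4.6: type (1 byte), length in 8-octet units (1 byte), data. */
struct nd_opt {
        uint8_t type;
        uint8_t len;    /* total option size = len * 8 bytes, 0 is invalid */
};

typedef void (*nd_opt_cb)(const struct nd_opt *opt);

static int walk_nd_options(const uint8_t *buf, size_t buflen, nd_opt_cb cb)
{
        size_t off = 0;

        while (off + sizeof(struct nd_opt) <= buflen) {
                const struct nd_opt *opt = (const struct nd_opt *)(buf + off);
                size_t optlen = (size_t)opt->len * 8;

                if (opt->len == 0 || off + optlen > buflen)
                        return -1;      /* malformed: reject the whole packet */
                cb(opt);
                off += optlen;
        }
        return off == buflen ? 0 : -1;
}

static void print_opt(const struct nd_opt *opt)
{
        printf("option type %u, %u bytes\n", opt->type, opt->len * 8);
}

int main(void)
{
        /* One 8-byte source link-layer address style option (type 1). */
        uint8_t pkt[8] = { 1, 1, 0xde, 0xad, 0xbe, 0xef, 0x00, 0x01 };

        return walk_nd_options(pkt, sizeof(pkt), print_opt) ? 1 : 0;
}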
- */ - - if (!ndisc_parse_options((u8*)(dest + 1), optlen, &ndopts)) { - ND_PRINTK(2, warn, "Redirect: invalid ND options\n"); - return; - } - if (ndopts.nd_opts_tgt_lladdr) { - lladdr = ndisc_opt_addr_data(ndopts.nd_opts_tgt_lladdr, - skb->dev); - if (!lladdr) { - ND_PRINTK(2, warn, - "Redirect: invalid link-layer address length\n"); - return; - } - } - - neigh = __neigh_lookup(&nd_tbl, target, skb->dev, 1); - if (neigh) { - rt6_redirect(dest, &ipv6_hdr(skb)->daddr, - &ipv6_hdr(skb)->saddr, neigh, lladdr, - on_link); - neigh_release(neigh); - } + icmpv6_notify(skb, NDISC_REDIRECT, 0, 0); } void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target) @@ -1472,6 +1359,7 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target) struct net *net = dev_net(dev); struct sock *sk = net->ipv6.ndisc_sk; int len = sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr); + struct inet_peer *peer; struct sk_buff *buff; struct icmp6hdr *icmph; struct in6_addr saddr_buf; @@ -1485,6 +1373,7 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target) int rd_len; int err; u8 ha_buf[MAX_ADDR_LEN], *ha = NULL; + bool ret; if (ipv6_get_lladdr(dev, &saddr_buf, IFA_F_TENTATIVE)) { ND_PRINTK(2, warn, "Redirect: no link-local address on %s\n", @@ -1518,9 +1407,11 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target) "Redirect: destination is not a neighbour\n"); goto release; } - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); - if (!inet_peer_xrlim_allow(rt->rt6i_peer, 1*HZ)) + peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1); + ret = inet_peer_xrlim_allow(peer, 1*HZ); + if (peer) + inet_putpeer(peer); + if (!ret) goto release; if (dev->addr_len) { diff --git a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c b/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c index 3224ef90a21a..4794f96cf2e0 100644 --- a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c +++ b/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c @@ -143,11 +143,11 @@ static int ipv6_get_l4proto(const struct sk_buff *skb, unsigned int nhoff, return NF_ACCEPT; } -static unsigned int ipv6_confirm(unsigned int hooknum, - struct sk_buff *skb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) +static unsigned int ipv6_helper(unsigned int hooknum, + struct sk_buff *skb, + const struct net_device *in, + const struct net_device *out, + int (*okfn)(struct sk_buff *)) { struct nf_conn *ct; const struct nf_conn_help *help; @@ -161,15 +161,15 @@ static unsigned int ipv6_confirm(unsigned int hooknum, /* This is where we call the helper: as the packet goes out. 
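[Editor's note] In the nf_conntrack_l3proto_ipv6.c hunks above, the helper invocation is split out of ipv6_confirm() into a separate ipv6_helper() hook registered at the CONNTRACK_HELPER priority, so it runs before the final confirm hook on the same hook points (POST_ROUTING and LOCAL_IN). The sketch below shows the general shape of such a priority-ordered hook chain in plain C; the verdict values, struct layout and hook bodies are simplified stand-ins, not the netfilter definitions.

#include <stdio.h>

enum { VERDICT_ACCEPT, VERDICT_DROP };
enum { HOOK_LOCAL_IN, HOOK_POST_ROUTING };

struct pkt { const char *payload; };

struct hook_ops {
        int (*hook)(struct pkt *p);
        int hooknum;
        int priority;   /* lower runs first */
};

static int helper_hook(struct pkt *p)
{
        printf("helper: inspect %s\n", p->payload);
        return VERDICT_ACCEPT;
}

static int confirm_hook(struct pkt *p)
{
        printf("confirm: commit connection for %s\n", p->payload);
        return VERDICT_ACCEPT;
}

/* Listed in priority order per hook point, as registration arranges for real hooks. */
static const struct hook_ops hooks[] = {
        { helper_hook,  HOOK_POST_ROUTING, 10  },
        { confirm_hook, HOOK_POST_ROUTING, 100 },
        { helper_hook,  HOOK_LOCAL_IN,     10  },
        { confirm_hook, HOOK_LOCAL_IN,     100 },
};

static int run_hooks(int hooknum, struct pkt *p)
{
        for (size_t i = 0; i < sizeof(hooks) / sizeof(hooks[0]); i++) {
                if (hooks[i].hooknum != hooknum)
                        continue;
                if (hooks[i].hook(p) != VERDICT_ACCEPT)
                        return VERDICT_DROP;
        }
        return VERDICT_ACCEPT;
}

int main(void)
{
        struct pkt p = { "outbound packet" };

        return run_hooks(HOOK_POST_ROUTING, &p);
}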
*/ ct = nf_ct_get(skb, &ctinfo); if (!ct || ctinfo == IP_CT_RELATED_REPLY) - goto out; + return NF_ACCEPT; help = nfct_help(ct); if (!help) - goto out; + return NF_ACCEPT; /* rcu_read_lock()ed by nf_hook_slow */ helper = rcu_dereference(help->helper); if (!helper) - goto out; + return NF_ACCEPT; protoff = nf_ct_ipv6_skip_exthdr(skb, extoff, &pnum, skb->len - extoff); @@ -179,12 +179,19 @@ static unsigned int ipv6_confirm(unsigned int hooknum, } ret = helper->help(skb, protoff, ct, ctinfo); - if (ret != NF_ACCEPT) { + if (ret != NF_ACCEPT && (ret & NF_VERDICT_MASK) != NF_QUEUE) { nf_log_packet(NFPROTO_IPV6, hooknum, skb, in, out, NULL, "nf_ct_%s: dropping packet", helper->name); - return ret; } -out: + return ret; +} + +static unsigned int ipv6_confirm(unsigned int hooknum, + struct sk_buff *skb, + const struct net_device *in, + const struct net_device *out, + int (*okfn)(struct sk_buff *)) +{ /* We've seen it coming out the other side: confirm it */ return nf_conntrack_confirm(skb); } @@ -254,6 +261,13 @@ static struct nf_hook_ops ipv6_conntrack_ops[] __read_mostly = { .priority = NF_IP6_PRI_CONNTRACK, }, { + .hook = ipv6_helper, + .owner = THIS_MODULE, + .pf = NFPROTO_IPV6, + .hooknum = NF_INET_POST_ROUTING, + .priority = NF_IP6_PRI_CONNTRACK_HELPER, + }, + { .hook = ipv6_confirm, .owner = THIS_MODULE, .pf = NFPROTO_IPV6, @@ -261,6 +275,13 @@ static struct nf_hook_ops ipv6_conntrack_ops[] __read_mostly = { .priority = NF_IP6_PRI_LAST, }, { + .hook = ipv6_helper, + .owner = THIS_MODULE, + .pf = NFPROTO_IPV6, + .hooknum = NF_INET_LOCAL_IN, + .priority = NF_IP6_PRI_CONNTRACK_HELPER, + }, + { .hook = ipv6_confirm, .owner = THIS_MODULE, .pf = NFPROTO_IPV6, @@ -333,37 +354,75 @@ MODULE_ALIAS("nf_conntrack-" __stringify(AF_INET6)); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Yasuyuki KOZAKAI @USAGI <yasuyuki.kozakai@toshiba.co.jp>"); -static int __init nf_conntrack_l3proto_ipv6_init(void) +static int ipv6_net_init(struct net *net) { int ret = 0; - need_conntrack(); - nf_defrag_ipv6_enable(); - - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_tcp6); + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_tcp6); if (ret < 0) { - pr_err("nf_conntrack_ipv6: can't register tcp.\n"); - return ret; + printk(KERN_ERR "nf_conntrack_l4proto_tcp6: protocol register failed\n"); + goto out; } - - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udp6); + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_udp6); if (ret < 0) { - pr_err("nf_conntrack_ipv6: can't register udp.\n"); - goto cleanup_tcp; + printk(KERN_ERR "nf_conntrack_l4proto_udp6: protocol register failed\n"); + goto cleanup_tcp6; } - - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_icmpv6); + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_icmpv6); if (ret < 0) { - pr_err("nf_conntrack_ipv6: can't register icmpv6.\n"); - goto cleanup_udp; + printk(KERN_ERR "nf_conntrack_l4proto_icmp6: protocol register failed\n"); + goto cleanup_udp6; } - - ret = nf_conntrack_l3proto_register(&nf_conntrack_l3proto_ipv6); + ret = nf_conntrack_l3proto_register(net, + &nf_conntrack_l3proto_ipv6); if (ret < 0) { - pr_err("nf_conntrack_ipv6: can't register ipv6\n"); + printk(KERN_ERR "nf_conntrack_l3proto_ipv6: protocol register failed\n"); goto cleanup_icmpv6; } + return 0; + cleanup_icmpv6: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_icmpv6); + cleanup_udp6: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_udp6); + cleanup_tcp6: + nf_conntrack_l4proto_unregister(net, 
+ &nf_conntrack_l4proto_tcp6); + out: + return ret; +} +static void ipv6_net_exit(struct net *net) +{ + nf_conntrack_l3proto_unregister(net, + &nf_conntrack_l3proto_ipv6); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_icmpv6); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_udp6); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_tcp6); +} + +static struct pernet_operations ipv6_net_ops = { + .init = ipv6_net_init, + .exit = ipv6_net_exit, +}; + +static int __init nf_conntrack_l3proto_ipv6_init(void) +{ + int ret = 0; + + need_conntrack(); + nf_defrag_ipv6_enable(); + + ret = register_pernet_subsys(&ipv6_net_ops); + if (ret < 0) + goto cleanup_pernet; ret = nf_register_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops)); if (ret < 0) { @@ -374,13 +433,8 @@ static int __init nf_conntrack_l3proto_ipv6_init(void) return ret; cleanup_ipv6: - nf_conntrack_l3proto_unregister(&nf_conntrack_l3proto_ipv6); - cleanup_icmpv6: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_icmpv6); - cleanup_udp: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udp6); - cleanup_tcp: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_tcp6); + unregister_pernet_subsys(&ipv6_net_ops); + cleanup_pernet: return ret; } @@ -388,10 +442,7 @@ static void __exit nf_conntrack_l3proto_ipv6_fini(void) { synchronize_net(); nf_unregister_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops)); - nf_conntrack_l3proto_unregister(&nf_conntrack_l3proto_ipv6); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_icmpv6); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udp6); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_tcp6); + unregister_pernet_subsys(&ipv6_net_ops); } module_init(nf_conntrack_l3proto_ipv6_init); diff --git a/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c b/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c index 3e81904fbbcd..2d54b2061d68 100644 --- a/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c +++ b/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c @@ -29,6 +29,11 @@ static unsigned int nf_ct_icmpv6_timeout __read_mostly = 30*HZ; +static inline struct nf_icmp_net *icmpv6_pernet(struct net *net) +{ + return &net->ct.nf_ct_proto.icmpv6; +} + static bool icmpv6_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) @@ -90,7 +95,7 @@ static int icmpv6_print_tuple(struct seq_file *s, static unsigned int *icmpv6_get_timeouts(struct net *net) { - return &nf_ct_icmpv6_timeout; + return &icmpv6_pernet(net)->timeout; } /* Returns verdict for packet, or -1 for invalid. */ @@ -281,16 +286,18 @@ static int icmpv6_nlattr_tuple_size(void) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int icmpv6_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int icmpv6_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeout = data; + struct nf_icmp_net *in = icmpv6_pernet(net); if (tb[CTA_TIMEOUT_ICMPV6_TIMEOUT]) { *timeout = ntohl(nla_get_be32(tb[CTA_TIMEOUT_ICMPV6_TIMEOUT])) * HZ; } else { /* Set default ICMPv6 timeout. 
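[Editor's note] The nf_conntrack_proto_icmpv6.c hunks above convert the global nf_ct_icmpv6_timeout into a per-namespace value: the sysctl template is duplicated with kmemdup() and its .data pointer is re-aimed at the namespace's own field. Below is a rough userspace analogue of that duplicate-and-rebind step, using malloc()/memcpy() in place of kmemdup(); all names are invented for the example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ctl_entry {
        const char *procname;
        unsigned int *data;     /* NULL in the template; bound per namespace */
};

struct icmp_ns {
        unsigned int timeout;
        struct ctl_entry *table;
};

static const struct ctl_entry template_table[] = {
        { .procname = "nf_conntrack_icmpv6_timeout", .data = NULL },
        { }                     /* terminator, as in kernel ctl tables */
};

static int icmp_ns_init(struct icmp_ns *ns, unsigned int default_timeout)
{
        ns->timeout = default_timeout;
        ns->table = malloc(sizeof(template_table));
        if (!ns->table)
                return -1;
        memcpy(ns->table, template_table, sizeof(template_table));
        ns->table[0].data = &ns->timeout;       /* rebind to this namespace */
        return 0;
}

int main(void)
{
        struct icmp_ns ns_a, ns_b;

        if (icmp_ns_init(&ns_a, 30) || icmp_ns_init(&ns_b, 30))
                return 1;
        *ns_b.table[0].data = 60;               /* tuning one namespace ...   */
        printf("a=%u b=%u\n", ns_a.timeout, ns_b.timeout); /* ... leaves the other at 30 */
        free(ns_a.table);
        free(ns_b.table);
        return 0;
}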
*/ - *timeout = nf_ct_icmpv6_timeout; + *timeout = in->timeout; } return 0; } @@ -315,11 +322,9 @@ icmpv6_timeout_nla_policy[CTA_TIMEOUT_ICMPV6_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static struct ctl_table_header *icmpv6_sysctl_header; static struct ctl_table icmpv6_sysctl_table[] = { { .procname = "nf_conntrack_icmpv6_timeout", - .data = &nf_ct_icmpv6_timeout, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -328,6 +333,36 @@ static struct ctl_table icmpv6_sysctl_table[] = { }; #endif /* CONFIG_SYSCTL */ +static int icmpv6_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct nf_icmp_net *in) +{ +#ifdef CONFIG_SYSCTL + pn->ctl_table = kmemdup(icmpv6_sysctl_table, + sizeof(icmpv6_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &in->timeout; +#endif + return 0; +} + +static int icmpv6_init_net(struct net *net, u_int16_t proto) +{ + struct nf_icmp_net *in = icmpv6_pernet(net); + struct nf_proto_net *pn = &in->pn; + + in->timeout = nf_ct_icmpv6_timeout; + + return icmpv6_kmemdup_sysctl_table(pn, in); +} + +static struct nf_proto_net *icmpv6_get_net_proto(struct net *net) +{ + return &net->ct.nf_ct_proto.icmpv6.pn; +} + struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6 __read_mostly = { .l3proto = PF_INET6, @@ -355,8 +390,6 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6 __read_mostly = .nla_policy = icmpv6_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_header = &icmpv6_sysctl_header, - .ctl_table = icmpv6_sysctl_table, -#endif + .init_net = icmpv6_init_net, + .get_net_proto = icmpv6_get_net_proto, }; diff --git a/net/ipv6/protocol.c b/net/ipv6/protocol.c index 9a7978fdc02a..053082dfc93e 100644 --- a/net/ipv6/protocol.c +++ b/net/ipv6/protocol.c @@ -29,9 +29,7 @@ const struct inet6_protocol __rcu *inet6_protos[MAX_INET_PROTOS] __read_mostly; int inet6_add_protocol(const struct inet6_protocol *prot, unsigned char protocol) { - int hash = protocol & (MAX_INET_PROTOS - 1); - - return !cmpxchg((const struct inet6_protocol **)&inet6_protos[hash], + return !cmpxchg((const struct inet6_protocol **)&inet6_protos[protocol], NULL, prot) ? 0 : -1; } EXPORT_SYMBOL(inet6_add_protocol); @@ -42,9 +40,9 @@ EXPORT_SYMBOL(inet6_add_protocol); int inet6_del_protocol(const struct inet6_protocol *prot, unsigned char protocol) { - int ret, hash = protocol & (MAX_INET_PROTOS - 1); + int ret; - ret = (cmpxchg((const struct inet6_protocol **)&inet6_protos[hash], + ret = (cmpxchg((const struct inet6_protocol **)&inet6_protos[protocol], prot, NULL) == prot) ? 
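[Editor's note] The protocol.c hunk above, together with the earlier ip6_input.c and icmp.c changes, drops the old "nexthdr & (MAX_INET_PROTOS - 1)" masking: the handler array is simply indexed by the 8-bit protocol number, and registration stays race-free through a compare-and-swap on the slot. A small C11 sketch of that register/unregister pattern follows; the handler type and table size mirror the idea, not the kernel's types.

#include <stdatomic.h>
#include <stdio.h>
#include <stddef.h>

#define NPROTO 256              /* one slot per possible 8-bit protocol number */

struct proto_handler {
        const char *name;
        int (*rcv)(const void *pkt, size_t len);
};

static _Atomic(const struct proto_handler *) protos[NPROTO];

/* Succeeds only if the slot is currently empty. */
static int proto_register(const struct proto_handler *h, unsigned char protocol)
{
        const struct proto_handler *expected = NULL;

        return atomic_compare_exchange_strong(&protos[protocol], &expected, h)
                ? 0 : -1;
}

/* Succeeds only if the slot still holds exactly this handler. */
static int proto_unregister(const struct proto_handler *h, unsigned char protocol)
{
        const struct proto_handler *expected = h;

        return atomic_compare_exchange_strong(&protos[protocol], &expected, NULL)
                ? 0 : -1;
}

static int dummy_rcv(const void *pkt, size_t len)
{
        (void)pkt;
        return (int)len;
}

int main(void)
{
        static const struct proto_handler tcp6ish = { "tcp6-ish", dummy_rcv };
        const struct proto_handler *h;

        proto_register(&tcp6ish, 6);
        h = atomic_load(&protos[6]);    /* lookup is a plain indexed load */
        if (h)
                printf("proto 6 -> %s\n", h->name);
        return proto_unregister(&tcp6ish, 6);
}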
0 : -1; synchronize_net(); diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index 93d69836fded..ef0579d5bca6 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -165,7 +165,7 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr) saddr = &ipv6_hdr(skb)->saddr; daddr = saddr + 1; - hash = nexthdr & (MAX_INET_PROTOS - 1); + hash = nexthdr & (RAW_HTABLE_SIZE - 1); read_lock(&raw_v6_hashinfo.lock); sk = sk_head(&raw_v6_hashinfo.ht[hash]); @@ -229,7 +229,7 @@ bool raw6_local_deliver(struct sk_buff *skb, int nexthdr) { struct sock *raw_sk; - raw_sk = sk_head(&raw_v6_hashinfo.ht[nexthdr & (MAX_INET_PROTOS - 1)]); + raw_sk = sk_head(&raw_v6_hashinfo.ht[nexthdr & (RAW_HTABLE_SIZE - 1)]); if (raw_sk && !ipv6_raw_deliver(skb, nexthdr)) raw_sk = NULL; @@ -328,9 +328,12 @@ static void rawv6_err(struct sock *sk, struct sk_buff *skb, return; harderr = icmpv6_err_convert(type, code, &err); - if (type == ICMPV6_PKT_TOOBIG) + if (type == ICMPV6_PKT_TOOBIG) { + ip6_sk_update_pmtu(skb, sk, info); harderr = (np->pmtudisc == IPV6_PMTUDISC_DO); - + } + if (type == NDISC_REDIRECT) + ip6_sk_redirect(skb, sk); if (np->recverr) { u8 *payload = skb->data; if (!inet->hdrincl) diff --git a/net/ipv6/route.c b/net/ipv6/route.c index becb048d18d4..cf02cb97bbdd 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -78,7 +78,10 @@ static int ip6_dst_gc(struct dst_ops *ops); static int ip6_pkt_discard(struct sk_buff *skb); static int ip6_pkt_discard_out(struct sk_buff *skb); static void ip6_link_failure(struct sk_buff *skb); -static void ip6_rt_update_pmtu(struct dst_entry *dst, u32 mtu); +static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu); +static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb); #ifdef CONFIG_IPV6_ROUTE_INFO static struct rt6_info *rt6_add_route_info(struct net *net, @@ -99,10 +102,7 @@ static u32 *ipv6_cow_metrics(struct dst_entry *dst, unsigned long old) if (!(rt->dst.flags & DST_HOST)) return NULL; - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); - - peer = rt->rt6i_peer; + peer = rt6_get_peer_create(rt); if (peer) { u32 *old_p = __DST_METRICS_PTR(old); unsigned long prev, new; @@ -123,21 +123,27 @@ static u32 *ipv6_cow_metrics(struct dst_entry *dst, unsigned long old) return p; } -static inline const void *choose_neigh_daddr(struct rt6_info *rt, const void *daddr) +static inline const void *choose_neigh_daddr(struct rt6_info *rt, + struct sk_buff *skb, + const void *daddr) { struct in6_addr *p = &rt->rt6i_gateway; if (!ipv6_addr_any(p)) return (const void *) p; + else if (skb) + return &ipv6_hdr(skb)->daddr; return daddr; } -static struct neighbour *ip6_neigh_lookup(const struct dst_entry *dst, const void *daddr) +static struct neighbour *ip6_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr) { struct rt6_info *rt = (struct rt6_info *) dst; struct neighbour *n; - daddr = choose_neigh_daddr(rt, daddr); + daddr = choose_neigh_daddr(rt, skb, daddr); n = __ipv6_neigh_lookup(&nd_tbl, dst->dev, daddr); if (n) return n; @@ -152,7 +158,7 @@ static int rt6_bind_neighbour(struct rt6_info *rt, struct net_device *dev) if (IS_ERR(n)) return PTR_ERR(n); } - dst_set_neighbour(&rt->dst, n); + rt->n = n; return 0; } @@ -171,6 +177,7 @@ static struct dst_ops ip6_dst_ops_template = { .negative_advice = ip6_negative_advice, .link_failure = ip6_link_failure, .update_pmtu = ip6_rt_update_pmtu, + .redirect = rt6_do_redirect, .local_out = __ip6_local_out, .neigh_lookup = ip6_neigh_lookup, }; @@ 
-182,7 +189,13 @@ static unsigned int ip6_blackhole_mtu(const struct dst_entry *dst) return mtu ? : dst->dev->mtu; } -static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu) +static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) +{ +} + +static void ip6_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) { } @@ -200,6 +213,7 @@ static struct dst_ops ip6_dst_blackhole_ops = { .mtu = ip6_blackhole_mtu, .default_advmss = ip6_default_advmss, .update_pmtu = ip6_rt_blackhole_update_pmtu, + .redirect = ip6_rt_blackhole_redirect, .cow_metrics = ip6_rt_blackhole_cow_metrics, .neigh_lookup = ip6_neigh_lookup, }; @@ -261,16 +275,20 @@ static struct rt6_info ip6_blk_hole_entry_template = { #endif /* allocate dst with ip6_dst_ops */ -static inline struct rt6_info *ip6_dst_alloc(struct dst_ops *ops, +static inline struct rt6_info *ip6_dst_alloc(struct net *net, struct net_device *dev, - int flags) + int flags, + struct fib6_table *table) { - struct rt6_info *rt = dst_alloc(ops, dev, 0, 0, flags); + struct rt6_info *rt = dst_alloc(&net->ipv6.ip6_dst_ops, dev, + 0, DST_OBSOLETE_NONE, flags); - if (rt) - memset(&rt->rt6i_table, 0, - sizeof(*rt) - sizeof(struct dst_entry)); + if (rt) { + struct dst_entry *dst = &rt->dst; + memset(dst + 1, 0, sizeof(*rt) - sizeof(*dst)); + rt6_init_peer(rt, table ? &table->tb6_peers : net->ipv6.peers); + } return rt; } @@ -278,7 +296,9 @@ static void ip6_dst_destroy(struct dst_entry *dst) { struct rt6_info *rt = (struct rt6_info *)dst; struct inet6_dev *idev = rt->rt6i_idev; - struct inet_peer *peer = rt->rt6i_peer; + + if (rt->n) + neigh_release(rt->n); if (!(rt->dst.flags & DST_HOST)) dst_destroy_metrics_generic(dst); @@ -291,8 +311,8 @@ static void ip6_dst_destroy(struct dst_entry *dst) if (!(rt->rt6i_flags & RTF_EXPIRES) && dst->from) dst_release(dst->from); - if (peer) { - rt->rt6i_peer = NULL; + if (rt6_has_peer(rt)) { + struct inet_peer *peer = rt6_peer_ptr(rt); inet_putpeer(peer); } } @@ -306,13 +326,20 @@ static u32 rt6_peer_genid(void) void rt6_bind_peer(struct rt6_info *rt, int create) { + struct inet_peer_base *base; struct inet_peer *peer; - peer = inet_getpeer_v6(&rt->rt6i_dst.addr, create); - if (peer && cmpxchg(&rt->rt6i_peer, NULL, peer) != NULL) - inet_putpeer(peer); - else - rt->rt6i_peer_genid = rt6_peer_genid(); + base = inetpeer_base_ptr(rt->_rt6i_peer); + if (!base) + return; + + peer = inet_getpeer_v6(base, &rt->rt6i_dst.addr, create); + if (peer) { + if (!rt6_set_peer(rt, peer)) + inet_putpeer(peer); + else + rt->rt6i_peer_genid = rt6_peer_genid(); + } } static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev, @@ -323,12 +350,19 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev, struct net_device *loopback_dev = dev_net(dev)->loopback_dev; - if (dev != loopback_dev && idev && idev->dev == dev) { - struct inet6_dev *loopback_idev = - in6_dev_get(loopback_dev); - if (loopback_idev) { - rt->rt6i_idev = loopback_idev; - in6_dev_put(idev); + if (dev != loopback_dev) { + if (idev && idev->dev == dev) { + struct inet6_dev *loopback_idev = + in6_dev_get(loopback_dev); + if (loopback_idev) { + rt->rt6i_idev = loopback_idev; + in6_dev_put(idev); + } + } + if (rt->n && rt->n->dev == dev) { + rt->n->dev = loopback_dev; + dev_hold(loopback_dev); + dev_put(dev); } } } @@ -418,7 +452,7 @@ static void rt6_probe(struct rt6_info *rt) * to no more than one per minute. */ rcu_read_lock(); - neigh = rt ? 
dst_get_neighbour_noref(&rt->dst) : NULL; + neigh = rt ? rt->n : NULL; if (!neigh || (neigh->nud_state & NUD_VALID)) goto out; read_lock_bh(&neigh->lock); @@ -465,7 +499,7 @@ static inline int rt6_check_neigh(struct rt6_info *rt) int m; rcu_read_lock(); - neigh = dst_get_neighbour_noref(&rt->dst); + neigh = rt->n; if (rt->rt6i_flags & RTF_NONEXTHOP || !(rt->rt6i_flags & RTF_GATEWAY)) m = 1; @@ -812,7 +846,7 @@ static struct rt6_info *rt6_alloc_clone(struct rt6_info *ort, if (rt) { rt->rt6i_flags |= RTF_CACHE; - dst_set_neighbour(&rt->dst, neigh_clone(dst_get_neighbour_noref_raw(&ort->dst))); + rt->n = neigh_clone(ort->n); } return rt; } @@ -846,7 +880,7 @@ restart: dst_hold(&rt->dst); read_unlock_bh(&table->tb6_lock); - if (!dst_get_neighbour_noref_raw(&rt->dst) && !(rt->rt6i_flags & RTF_NONEXTHOP)) + if (!rt->n && !(rt->rt6i_flags & RTF_NONEXTHOP)) nrt = rt6_alloc_cow(rt, &fl6->daddr, &fl6->saddr); else if (!(rt->dst.flags & DST_HOST)) nrt = rt6_alloc_clone(rt, &fl6->daddr); @@ -931,6 +965,8 @@ struct dst_entry * ip6_route_output(struct net *net, const struct sock *sk, { int flags = 0; + fl6->flowi6_iif = net->loopback_dev->ifindex; + if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr)) flags |= RT6_LOOKUP_F_IFACE; @@ -949,12 +985,13 @@ struct dst_entry *ip6_blackhole_route(struct net *net, struct dst_entry *dst_ori struct rt6_info *rt, *ort = (struct rt6_info *) dst_orig; struct dst_entry *new = NULL; - rt = dst_alloc(&ip6_dst_blackhole_ops, ort->dst.dev, 1, 0, 0); + rt = dst_alloc(&ip6_dst_blackhole_ops, ort->dst.dev, 1, DST_OBSOLETE_NONE, 0); if (rt) { - memset(&rt->rt6i_table, 0, sizeof(*rt) - sizeof(struct dst_entry)); - new = &rt->dst; + memset(new + 1, 0, sizeof(*rt) - sizeof(*new)); + rt6_init_peer(rt, net->ipv6.peers); + new->__use = 1; new->input = dst_discard; new->output = dst_discard; @@ -996,7 +1033,7 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie) if (rt->rt6i_node && (rt->rt6i_node->fn_sernum == cookie)) { if (rt->rt6i_peer_genid != rt6_peer_genid()) { - if (!rt->rt6i_peer) + if (!rt6_has_peer(rt)) rt6_bind_peer(rt, 0); rt->rt6i_peer_genid = rt6_peer_genid(); } @@ -1038,11 +1075,15 @@ static void ip6_link_failure(struct sk_buff *skb) } } -static void ip6_rt_update_pmtu(struct dst_entry *dst, u32 mtu) +static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) { struct rt6_info *rt6 = (struct rt6_info*)dst; + dst_confirm(dst); if (mtu < dst_mtu(dst) && rt6->rt6i_dst.plen == 128) { + struct net *net = dev_net(dst->dev); + rt6->rt6i_flags |= RTF_MODIFIED; if (mtu < IPV6_MIN_MTU) { u32 features = dst_metric(dst, RTAX_FEATURES); @@ -1051,9 +1092,66 @@ static void ip6_rt_update_pmtu(struct dst_entry *dst, u32 mtu) dst_metric_set(dst, RTAX_FEATURES, features); } dst_metric_set(dst, RTAX_MTU, mtu); + rt6_update_expires(rt6, net->ipv6.sysctl.ip6_rt_mtu_expires); } } +void ip6_update_pmtu(struct sk_buff *skb, struct net *net, __be32 mtu, + int oif, u32 mark) +{ + const struct ipv6hdr *iph = (struct ipv6hdr *) skb->data; + struct dst_entry *dst; + struct flowi6 fl6; + + memset(&fl6, 0, sizeof(fl6)); + fl6.flowi6_oif = oif; + fl6.flowi6_mark = mark; + fl6.flowi6_flags = 0; + fl6.daddr = iph->daddr; + fl6.saddr = iph->saddr; + fl6.flowlabel = (*(__be32 *) iph) & IPV6_FLOWINFO_MASK; + + dst = ip6_route_output(net, NULL, &fl6); + if (!dst->error) + ip6_rt_update_pmtu(dst, NULL, skb, ntohl(mtu)); + dst_release(dst); +} +EXPORT_SYMBOL_GPL(ip6_update_pmtu); + +void ip6_sk_update_pmtu(struct sk_buff *skb, 
struct sock *sk, __be32 mtu) +{ + ip6_update_pmtu(skb, sock_net(sk), mtu, + sk->sk_bound_dev_if, sk->sk_mark); +} +EXPORT_SYMBOL_GPL(ip6_sk_update_pmtu); + +void ip6_redirect(struct sk_buff *skb, struct net *net, int oif, u32 mark) +{ + const struct ipv6hdr *iph = (struct ipv6hdr *) skb->data; + struct dst_entry *dst; + struct flowi6 fl6; + + memset(&fl6, 0, sizeof(fl6)); + fl6.flowi6_oif = oif; + fl6.flowi6_mark = mark; + fl6.flowi6_flags = 0; + fl6.daddr = iph->daddr; + fl6.saddr = iph->saddr; + fl6.flowlabel = (*(__be32 *) iph) & IPV6_FLOWINFO_MASK; + + dst = ip6_route_output(net, NULL, &fl6); + if (!dst->error) + rt6_do_redirect(dst, NULL, skb); + dst_release(dst); +} +EXPORT_SYMBOL_GPL(ip6_redirect); + +void ip6_sk_redirect(struct sk_buff *skb, struct sock *sk) +{ + ip6_redirect(skb, sock_net(sk), sk->sk_bound_dev_if, sk->sk_mark); +} +EXPORT_SYMBOL_GPL(ip6_sk_redirect); + static unsigned int ip6_default_advmss(const struct dst_entry *dst) { struct net_device *dev = dst->dev; @@ -1110,7 +1208,7 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev, if (unlikely(!idev)) return ERR_PTR(-ENODEV); - rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, dev, 0); + rt = ip6_dst_alloc(net, dev, 0, NULL); if (unlikely(!rt)) { in6_dev_put(idev); dst = ERR_PTR(-ENOMEM); @@ -1120,7 +1218,7 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev, if (neigh) neigh_hold(neigh); else { - neigh = ip6_neigh_lookup(&rt->dst, &fl6->daddr); + neigh = ip6_neigh_lookup(&rt->dst, NULL, &fl6->daddr); if (IS_ERR(neigh)) { in6_dev_put(idev); dst_free(&rt->dst); @@ -1130,7 +1228,7 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev, rt->dst.flags |= DST_HOST; rt->dst.output = ip6_output; - dst_set_neighbour(&rt->dst, neigh); + rt->n = neigh; atomic_set(&rt->dst.__refcnt, 1); rt->rt6i_dst.addr = fl6->daddr; rt->rt6i_dst.plen = 128; @@ -1292,7 +1390,7 @@ int ip6_route_add(struct fib6_config *cfg) if (!table) goto out; - rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, NULL, DST_NOCOUNT); + rt = ip6_dst_alloc(net, NULL, DST_NOCOUNT, table); if (!rt) { err = -ENOMEM; @@ -1546,107 +1644,94 @@ static int ip6_route_del(struct fib6_config *cfg) return err; } -/* - * Handle redirects - */ -struct ip6rd_flowi { - struct flowi6 fl6; - struct in6_addr gateway; -}; - -static struct rt6_info *__ip6_route_redirect(struct net *net, - struct fib6_table *table, - struct flowi6 *fl6, - int flags) +static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buff *skb) { - struct ip6rd_flowi *rdfl = (struct ip6rd_flowi *)fl6; - struct rt6_info *rt; - struct fib6_node *fn; + struct net *net = dev_net(skb->dev); + struct netevent_redirect netevent; + struct rt6_info *rt, *nrt = NULL; + const struct in6_addr *target; + struct ndisc_options ndopts; + const struct in6_addr *dest; + struct neighbour *old_neigh; + struct inet6_dev *in6_dev; + struct neighbour *neigh; + struct icmp6hdr *icmph; + int optlen, on_link; + u8 *lladdr; - /* - * Get the "current" route for this destination and - * check if the redirect has come from approriate router. - * - * RFC 2461 specifies that redirects should only be - * accepted if they come from the nexthop to the target. - * Due to the way the routes are chosen, this notion - * is a bit fuzzy and one might need to check all possible - * routes. 
- */ + optlen = skb->tail - skb->transport_header; + optlen -= sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr); - read_lock_bh(&table->tb6_lock); - fn = fib6_lookup(&table->tb6_root, &fl6->daddr, &fl6->saddr); -restart: - for (rt = fn->leaf; rt; rt = rt->dst.rt6_next) { - /* - * Current route is on-link; redirect is always invalid. - * - * Seems, previous statement is not true. It could - * be node, which looks for us as on-link (f.e. proxy ndisc) - * But then router serving it might decide, that we should - * know truth 8)8) --ANK (980726). - */ - if (rt6_check_expired(rt)) - continue; - if (!(rt->rt6i_flags & RTF_GATEWAY)) - continue; - if (fl6->flowi6_oif != rt->dst.dev->ifindex) - continue; - if (!ipv6_addr_equal(&rdfl->gateway, &rt->rt6i_gateway)) - continue; - break; + if (optlen < 0) { + net_dbg_ratelimited("rt6_do_redirect: packet too short\n"); + return; } - if (!rt) - rt = net->ipv6.ip6_null_entry; - BACKTRACK(net, &fl6->saddr); -out: - dst_hold(&rt->dst); - - read_unlock_bh(&table->tb6_lock); - - return rt; -}; + icmph = icmp6_hdr(skb); + target = (const struct in6_addr *) (icmph + 1); + dest = target + 1; -static struct rt6_info *ip6_route_redirect(const struct in6_addr *dest, - const struct in6_addr *src, - const struct in6_addr *gateway, - struct net_device *dev) -{ - int flags = RT6_LOOKUP_F_HAS_SADDR; - struct net *net = dev_net(dev); - struct ip6rd_flowi rdfl = { - .fl6 = { - .flowi6_oif = dev->ifindex, - .daddr = *dest, - .saddr = *src, - }, - }; + if (ipv6_addr_is_multicast(dest)) { + net_dbg_ratelimited("rt6_do_redirect: destination address is multicast\n"); + return; + } - rdfl.gateway = *gateway; + on_link = 0; + if (ipv6_addr_equal(dest, target)) { + on_link = 1; + } else if (ipv6_addr_type(target) != + (IPV6_ADDR_UNICAST|IPV6_ADDR_LINKLOCAL)) { + net_dbg_ratelimited("rt6_do_redirect: target address is not link-local unicast\n"); + return; + } - if (rt6_need_strict(dest)) - flags |= RT6_LOOKUP_F_IFACE; + in6_dev = __in6_dev_get(skb->dev); + if (!in6_dev) + return; + if (in6_dev->cnf.forwarding || !in6_dev->cnf.accept_redirects) + return; - return (struct rt6_info *)fib6_rule_lookup(net, &rdfl.fl6, - flags, __ip6_route_redirect); -} + /* RFC2461 8.1: + * The IP source address of the Redirect MUST be the same as the current + * first-hop router for the specified ICMP Destination Address. + */ -void rt6_redirect(const struct in6_addr *dest, const struct in6_addr *src, - const struct in6_addr *saddr, - struct neighbour *neigh, u8 *lladdr, int on_link) -{ - struct rt6_info *rt, *nrt = NULL; - struct netevent_redirect netevent; - struct net *net = dev_net(neigh->dev); + if (!ndisc_parse_options((u8*)(dest + 1), optlen, &ndopts)) { + net_dbg_ratelimited("rt6_redirect: invalid ND options\n"); + return; + } - rt = ip6_route_redirect(dest, src, saddr, neigh->dev); + lladdr = NULL; + if (ndopts.nd_opts_tgt_lladdr) { + lladdr = ndisc_opt_addr_data(ndopts.nd_opts_tgt_lladdr, + skb->dev); + if (!lladdr) { + net_dbg_ratelimited("rt6_redirect: invalid link-layer address length\n"); + return; + } + } + rt = (struct rt6_info *) dst; if (rt == net->ipv6.ip6_null_entry) { net_dbg_ratelimited("rt6_redirect: source isn't a valid nexthop for redirect target\n"); - goto out; + return; } + /* Redirect received -> path was valid. + * Look, redirects are sent only in response to data packets, + * so that this nexthop apparently is reachable. 
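[Editor's note] rt6_do_redirect() above begins with the same sanity checks the old ndisc_redirect_rcv() performed: the redirected destination must not be multicast, and the advertised first hop is either the destination itself (an on-link redirect) or must be a link-local unicast router address. A standalone illustration of just that decision, using the portable netinet/in.h macros, is sketched below; the return codes are of course invented.

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Classify a redirect: <0 invalid, 0 redirect to a router, 1 on-link redirect. */
static int classify_redirect(const struct in6_addr *target,
                             const struct in6_addr *dest)
{
        if (IN6_IS_ADDR_MULTICAST(dest))
                return -1;      /* never redirect a multicast destination */

        if (memcmp(dest, target, sizeof(*dest)) == 0)
                return 1;       /* target == dest: destination is on-link */

        if (!IN6_IS_ADDR_LINKLOCAL(target) || IN6_IS_ADDR_MULTICAST(target))
                return -1;      /* new first hop must be link-local unicast */

        return 0;
}

int main(void)
{
        struct in6_addr dest, router;

        inet_pton(AF_INET6, "2001:db8::42", &dest);
        inet_pton(AF_INET6, "fe80::1", &router);

        printf("router redirect: %d\n", classify_redirect(&router, &dest));
        printf("on-link redirect: %d\n", classify_redirect(&dest, &dest));
        return 0;
}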
--ANK + */ + dst_confirm(&rt->dst); + + neigh = __neigh_lookup(&nd_tbl, target, skb->dev, 1); + if (!neigh) + return; + + /* Duplicate redirect: silently ignore. */ + old_neigh = rt->n; + if (neigh == old_neigh) + goto out; + /* * We have finally decided to accept it. */ @@ -1658,17 +1743,6 @@ void rt6_redirect(const struct in6_addr *dest, const struct in6_addr *src, NEIGH_UPDATE_F_ISROUTER)) ); - /* - * Redirect received -> path was valid. - * Look, redirects are sent only in response to data packets, - * so that this nexthop apparently is reachable. --ANK - */ - dst_confirm(&rt->dst); - - /* Duplicate redirect: silently ignore. */ - if (neigh == dst_get_neighbour_noref_raw(&rt->dst)) - goto out; - nrt = ip6_rt_copy(rt, dest); if (!nrt) goto out; @@ -1678,132 +1752,25 @@ void rt6_redirect(const struct in6_addr *dest, const struct in6_addr *src, nrt->rt6i_flags &= ~RTF_GATEWAY; nrt->rt6i_gateway = *(struct in6_addr *)neigh->primary_key; - dst_set_neighbour(&nrt->dst, neigh_clone(neigh)); + nrt->n = neigh_clone(neigh); if (ip6_ins_rt(nrt)) goto out; netevent.old = &rt->dst; + netevent.old_neigh = old_neigh; netevent.new = &nrt->dst; + netevent.new_neigh = neigh; + netevent.daddr = dest; call_netevent_notifiers(NETEVENT_REDIRECT, &netevent); if (rt->rt6i_flags & RTF_CACHE) { + rt = (struct rt6_info *) dst_clone(&rt->dst); ip6_del_rt(rt); - return; } out: - dst_release(&rt->dst); -} - -/* - * Handle ICMP "packet too big" messages - * i.e. Path MTU discovery - */ - -static void rt6_do_pmtu_disc(const struct in6_addr *daddr, const struct in6_addr *saddr, - struct net *net, u32 pmtu, int ifindex) -{ - struct rt6_info *rt, *nrt; - int allfrag = 0; -again: - rt = rt6_lookup(net, daddr, saddr, ifindex, 0); - if (!rt) - return; - - if (rt6_check_expired(rt)) { - ip6_del_rt(rt); - goto again; - } - - if (pmtu >= dst_mtu(&rt->dst)) - goto out; - - if (pmtu < IPV6_MIN_MTU) { - /* - * According to RFC2460, PMTU is set to the IPv6 Minimum Link - * MTU (1280) and a fragment header should always be included - * after a node receiving Too Big message reporting PMTU is - * less than the IPv6 Minimum Link MTU. - */ - pmtu = IPV6_MIN_MTU; - allfrag = 1; - } - - /* New mtu received -> path was valid. - They are sent only in response to data packets, - so that this nexthop apparently is reachable. --ANK - */ - dst_confirm(&rt->dst); - - /* Host route. If it is static, it would be better - not to override it, but add new one, so that - when cache entry will expire old pmtu - would return automatically. - */ - if (rt->rt6i_flags & RTF_CACHE) { - dst_metric_set(&rt->dst, RTAX_MTU, pmtu); - if (allfrag) { - u32 features = dst_metric(&rt->dst, RTAX_FEATURES); - features |= RTAX_FEATURE_ALLFRAG; - dst_metric_set(&rt->dst, RTAX_FEATURES, features); - } - rt6_update_expires(rt, net->ipv6.sysctl.ip6_rt_mtu_expires); - rt->rt6i_flags |= RTF_MODIFIED; - goto out; - } - - /* Network route. - Two cases are possible: - 1. It is connected route. Action: COW - 2. It is gatewayed route or NONEXTHOP route. Action: clone it. 
- */ - if (!dst_get_neighbour_noref_raw(&rt->dst) && !(rt->rt6i_flags & RTF_NONEXTHOP)) - nrt = rt6_alloc_cow(rt, daddr, saddr); - else - nrt = rt6_alloc_clone(rt, daddr); - - if (nrt) { - dst_metric_set(&nrt->dst, RTAX_MTU, pmtu); - if (allfrag) { - u32 features = dst_metric(&nrt->dst, RTAX_FEATURES); - features |= RTAX_FEATURE_ALLFRAG; - dst_metric_set(&nrt->dst, RTAX_FEATURES, features); - } - - /* According to RFC 1981, detecting PMTU increase shouldn't be - * happened within 5 mins, the recommended timer is 10 mins. - * Here this route expiration time is set to ip6_rt_mtu_expires - * which is 10 mins. After 10 mins the decreased pmtu is expired - * and detecting PMTU increase will be automatically happened. - */ - rt6_update_expires(nrt, net->ipv6.sysctl.ip6_rt_mtu_expires); - nrt->rt6i_flags |= RTF_DYNAMIC; - ip6_ins_rt(nrt); - } -out: - dst_release(&rt->dst); -} - -void rt6_pmtu_discovery(const struct in6_addr *daddr, const struct in6_addr *saddr, - struct net_device *dev, u32 pmtu) -{ - struct net *net = dev_net(dev); - - /* - * RFC 1981 states that a node "MUST reduce the size of the packets it - * is sending along the path" that caused the Packet Too Big message. - * Since it's not possible in the general case to determine which - * interface was used to send the original packet, we update the MTU - * on the interface that will be used to send future packets. We also - * update the MTU on the interface that received the Packet Too Big in - * case the original packet was forced out that interface with - * SO_BINDTODEVICE or similar. This is the next best thing to the - * correct behaviour, which would be to update the MTU on all - * interfaces. - */ - rt6_do_pmtu_disc(daddr, saddr, net, pmtu, 0); - rt6_do_pmtu_disc(daddr, saddr, net, pmtu, dev->ifindex); + neigh_release(neigh); } /* @@ -1814,8 +1781,8 @@ static struct rt6_info *ip6_rt_copy(struct rt6_info *ort, const struct in6_addr *dest) { struct net *net = dev_net(ort->dst.dev); - struct rt6_info *rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, - ort->dst.dev, 0); + struct rt6_info *rt = ip6_dst_alloc(net, ort->dst.dev, 0, + ort->rt6i_table); if (rt) { rt->dst.input = ort->dst.input; @@ -2099,8 +2066,7 @@ struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev, bool anycast) { struct net *net = dev_net(idev->dev); - struct rt6_info *rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, - net->loopback_dev, 0); + struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 0, NULL); int err; if (!rt) { @@ -2396,13 +2362,11 @@ static int rt6_fill_node(struct net *net, int iif, int type, u32 pid, u32 seq, int prefix, int nowait, unsigned int flags) { - const struct inet_peer *peer; struct rtmsg *rtm; struct nlmsghdr *nlh; long expires; u32 table; struct neighbour *n; - u32 ts, tsage; if (prefix) { /* user wants prefix routes only */ if (!(rt->rt6i_flags & RTF_PREFIX_RT)) { @@ -2440,10 +2404,12 @@ static int rt6_fill_node(struct net *net, rtm->rtm_protocol = rt->rt6i_protocol; if (rt->rt6i_flags & RTF_DYNAMIC) rtm->rtm_protocol = RTPROT_REDIRECT; - else if (rt->rt6i_flags & RTF_ADDRCONF) - rtm->rtm_protocol = RTPROT_KERNEL; - else if (rt->rt6i_flags & RTF_DEFAULT) - rtm->rtm_protocol = RTPROT_RA; + else if (rt->rt6i_flags & RTF_ADDRCONF) { + if (rt->rt6i_flags & (RTF_DEFAULT | RTF_ROUTEINFO)) + rtm->rtm_protocol = RTPROT_RA; + else + rtm->rtm_protocol = RTPROT_KERNEL; + } if (rt->rt6i_flags & RTF_CACHE) rtm->rtm_flags |= RTM_F_CLONED; @@ -2500,7 +2466,7 @@ static int rt6_fill_node(struct net *net, goto nla_put_failure; rcu_read_lock(); - n = 
dst_get_neighbour_noref(&rt->dst); + n = rt->n; if (n) { if (nla_put(skb, RTA_GATEWAY, 16, &n->primary_key) < 0) { rcu_read_unlock(); @@ -2521,15 +2487,7 @@ static int rt6_fill_node(struct net *net, else expires = INT_MAX; - peer = rt->rt6i_peer; - ts = tsage = 0; - if (peer && peer->tcp_ts_stamp) { - ts = peer->tcp_ts; - tsage = get_seconds() - peer->tcp_ts_stamp; - } - - if (rtnl_put_cacheinfo(skb, &rt->dst, 0, ts, tsage, - expires, rt->dst.error) < 0) + if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, rt->dst.error) < 0) goto nla_put_failure; return nlmsg_end(skb, nlh); @@ -2722,7 +2680,7 @@ static int rt6_info_route(struct rt6_info *rt, void *p_arg) seq_puts(m, "00000000000000000000000000000000 00 "); #endif rcu_read_lock(); - n = dst_get_neighbour_noref(&rt->dst); + n = rt->n; if (n) { seq_printf(m, "%pi6", n->primary_key); } else { @@ -3007,6 +2965,31 @@ static struct pernet_operations ip6_route_net_ops = { .exit = ip6_route_net_exit, }; +static int __net_init ipv6_inetpeer_init(struct net *net) +{ + struct inet_peer_base *bp = kmalloc(sizeof(*bp), GFP_KERNEL); + + if (!bp) + return -ENOMEM; + inet_peer_base_init(bp); + net->ipv6.peers = bp; + return 0; +} + +static void __net_exit ipv6_inetpeer_exit(struct net *net) +{ + struct inet_peer_base *bp = net->ipv6.peers; + + net->ipv6.peers = NULL; + inetpeer_invalidate_tree(bp); + kfree(bp); +} + +static struct pernet_operations ipv6_inetpeer_ops = { + .init = ipv6_inetpeer_init, + .exit = ipv6_inetpeer_exit, +}; + static struct pernet_operations ip6_route_net_late_ops = { .init = ip6_route_net_init_late, .exit = ip6_route_net_exit_late, @@ -3032,10 +3015,14 @@ int __init ip6_route_init(void) if (ret) goto out_kmem_cache; - ret = register_pernet_subsys(&ip6_route_net_ops); + ret = register_pernet_subsys(&ipv6_inetpeer_ops); if (ret) goto out_dst_entries; + ret = register_pernet_subsys(&ip6_route_net_ops); + if (ret) + goto out_register_inetpeer; + ip6_dst_blackhole_ops.kmem_cachep = ip6_dst_ops_template.kmem_cachep; /* Registering of the loopback is done before this portion of code, @@ -3088,6 +3075,8 @@ out_fib6_init: fib6_gc_cleanup(); out_register_subsys: unregister_pernet_subsys(&ip6_route_net_ops); +out_register_inetpeer: + unregister_pernet_subsys(&ipv6_inetpeer_ops); out_dst_entries: dst_entries_destroy(&ip6_dst_blackhole_ops); out_kmem_cache: @@ -3102,6 +3091,7 @@ void ip6_route_cleanup(void) fib6_rules_cleanup(); xfrm6_fini(); fib6_gc_cleanup(); + unregister_pernet_subsys(&ipv6_inetpeer_ops); unregister_pernet_subsys(&ip6_route_net_ops); dst_entries_destroy(&ip6_dst_blackhole_ops); kmem_cache_destroy(ip6_dst_ops_template.kmem_cachep); diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 60415711563f..3bd1bfc01f85 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -527,9 +527,6 @@ static int ipip6_err(struct sk_buff *skb, u32 info) case ICMP_PORT_UNREACH: /* Impossible event. */ return 0; - case ICMP_FRAG_NEEDED: - /* Soft state for pmtu is maintained by IP core. */ - return 0; default: /* All others are translated to HOST_UNREACH. 
rfc2003 contains "deep thoughts" about NET_UNREACH, @@ -542,6 +539,8 @@ static int ipip6_err(struct sk_buff *skb, u32 info) if (code != ICMP_EXC_TTL) return 0; break; + case ICMP_REDIRECT: + break; } err = -ENOENT; @@ -551,7 +550,23 @@ static int ipip6_err(struct sk_buff *skb, u32 info) skb->dev, iph->daddr, iph->saddr); - if (t == NULL || t->parms.iph.daddr == 0) + if (t == NULL) + goto out; + + if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { + ipv4_update_pmtu(skb, dev_net(skb->dev), info, + t->dev->ifindex, 0, IPPROTO_IPV6, 0); + err = 0; + goto out; + } + if (type == ICMP_REDIRECT) { + ipv4_redirect(skb, dev_net(skb->dev), t->dev->ifindex, 0, + IPPROTO_IPV6, 0); + err = 0; + goto out; + } + + if (t->parms.iph.daddr == 0) goto out; err = 0; @@ -792,7 +807,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb, } if (tunnel->parms.iph.daddr && skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); if (skb->len > mtu) { icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c index 8e951d8d3b81..bb46061c813a 100644 --- a/net/ipv6/syncookies.c +++ b/net/ipv6/syncookies.c @@ -21,9 +21,6 @@ #include <net/ipv6.h> #include <net/tcp.h> -extern int sysctl_tcp_syncookies; -extern __u32 syncookie_secret[2][16-4+SHA_DIGEST_WORDS]; - #define COOKIEBITS 24 /* Upper bits store count */ #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1) @@ -180,7 +177,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) /* check for timestamp cookie support */ memset(&tcp_opt, 0, sizeof(tcp_opt)); - tcp_parse_options(skb, &tcp_opt, &hash_location, 0); + tcp_parse_options(skb, &tcp_opt, &hash_location, 0, NULL); if (!cookie_check_timestamp(&tcp_opt, &ecn_ok)) goto out; diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 9df64a50b075..f49476e2d884 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -277,22 +277,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr, rt = (struct rt6_info *) dst; if (tcp_death_row.sysctl_tw_recycle && !tp->rx_opt.ts_recent_stamp && - ipv6_addr_equal(&rt->rt6i_dst.addr, &np->daddr)) { - struct inet_peer *peer = rt6_get_peer(rt); - /* - * VJ's idea. We save last timestamp seen from - * the destination in peer table, when entering state - * TIME-WAIT * and initialize rx_opt.ts_recent from it, - * when trying new connection. 
- */ - if (peer) { - inet_peer_refcheck(peer); - if ((u32)get_seconds() - peer->tcp_ts_stamp <= TCP_PAWS_MSL) { - tp->rx_opt.ts_recent_stamp = peer->tcp_ts_stamp; - tp->rx_opt.ts_recent = peer->tcp_ts; - } - } - } + ipv6_addr_equal(&rt->rt6i_dst.addr, &np->daddr)) + tcp_fetch_timewait_stamp(sk, dst); icsk->icsk_ext_hdr_len = 0; if (np->opt) @@ -329,6 +315,23 @@ failure: return err; } +static void tcp_v6_mtu_reduced(struct sock *sk) +{ + struct dst_entry *dst; + + if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) + return; + + dst = inet6_csk_update_pmtu(sk, tcp_sk(sk)->mtu_info); + if (!dst) + return; + + if (inet_csk(sk)->icsk_pmtu_cookie > dst_mtu(dst)) { + tcp_sync_mss(sk, dst_mtu(dst)); + tcp_simple_retransmit(sk); + } +} + static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, u8 type, u8 code, int offset, __be32 info) { @@ -356,7 +359,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, } bh_lock_sock(sk); - if (sock_owned_by_user(sk)) + if (sock_owned_by_user(sk) && type != ICMPV6_PKT_TOOBIG) NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS); if (sk->sk_state == TCP_CLOSE) @@ -377,49 +380,19 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, np = inet6_sk(sk); - if (type == ICMPV6_PKT_TOOBIG) { - struct dst_entry *dst; - - if (sock_owned_by_user(sk)) - goto out; - if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) - goto out; - - /* icmp should have updated the destination cache entry */ - dst = __sk_dst_check(sk, np->dst_cookie); - - if (dst == NULL) { - struct inet_sock *inet = inet_sk(sk); - struct flowi6 fl6; - - /* BUGGG_FUTURE: Again, it is not clear how - to handle rthdr case. Ignore this complexity - for now. - */ - memset(&fl6, 0, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_TCP; - fl6.daddr = np->daddr; - fl6.saddr = np->saddr; - fl6.flowi6_oif = sk->sk_bound_dev_if; - fl6.flowi6_mark = sk->sk_mark; - fl6.fl6_dport = inet->inet_dport; - fl6.fl6_sport = inet->inet_sport; - security_skb_classify_flow(skb, flowi6_to_flowi(&fl6)); - - dst = ip6_dst_lookup_flow(sk, &fl6, NULL, false); - if (IS_ERR(dst)) { - sk->sk_err_soft = -PTR_ERR(dst); - goto out; - } + if (type == NDISC_REDIRECT) { + struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie); - } else - dst_hold(dst); + if (dst) + dst->ops->redirect(dst, sk, skb); + } - if (inet_csk(sk)->icsk_pmtu_cookie > dst_mtu(dst)) { - tcp_sync_mss(sk, dst_mtu(dst)); - tcp_simple_retransmit(sk); - } /* else let the usual retransmit timer handle it */ - dst_release(dst); + if (type == ICMPV6_PKT_TOOBIG) { + tp->mtu_info = ntohl(info); + if (!sock_owned_by_user(sk)) + tcp_v6_mtu_reduced(sk); + else + set_bit(TCP_MTU_REDUCED_DEFERRED, &tp->tsq_flags); goto out; } @@ -475,62 +448,43 @@ out: } -static int tcp_v6_send_synack(struct sock *sk, struct request_sock *req, +static int tcp_v6_send_synack(struct sock *sk, struct dst_entry *dst, + struct flowi6 *fl6, + struct request_sock *req, struct request_values *rvp, u16 queue_mapping) { struct inet6_request_sock *treq = inet6_rsk(req); struct ipv6_pinfo *np = inet6_sk(sk); struct sk_buff * skb; - struct ipv6_txoptions *opt = NULL; - struct in6_addr * final_p, final; - struct flowi6 fl6; - struct dst_entry *dst; - int err; + int err = -ENOMEM; - memset(&fl6, 0, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_TCP; - fl6.daddr = treq->rmt_addr; - fl6.saddr = treq->loc_addr; - fl6.flowlabel = 0; - fl6.flowi6_oif = treq->iif; - fl6.flowi6_mark = sk->sk_mark; - fl6.fl6_dport = inet_rsk(req)->rmt_port; - fl6.fl6_sport = 
inet_rsk(req)->loc_port; - security_req_classify_flow(req, flowi6_to_flowi(&fl6)); - - opt = np->opt; - final_p = fl6_update_dst(&fl6, opt, &final); - - dst = ip6_dst_lookup_flow(sk, &fl6, final_p, false); - if (IS_ERR(dst)) { - err = PTR_ERR(dst); - dst = NULL; + /* First, grab a route. */ + if (!dst && (dst = inet6_csk_route_req(sk, fl6, req)) == NULL) goto done; - } + skb = tcp_make_synack(sk, dst, req, rvp); - err = -ENOMEM; + if (skb) { __tcp_v6_send_check(skb, &treq->loc_addr, &treq->rmt_addr); - fl6.daddr = treq->rmt_addr; + fl6->daddr = treq->rmt_addr; skb_set_queue_mapping(skb, queue_mapping); - err = ip6_xmit(sk, skb, &fl6, opt, np->tclass); + err = ip6_xmit(sk, skb, fl6, np->opt, np->tclass); err = net_xmit_eval(err); } done: - if (opt && opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); - dst_release(dst); return err; } static int tcp_v6_rtx_synack(struct sock *sk, struct request_sock *req, struct request_values *rvp) { + struct flowi6 fl6; + TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_RETRANSSEGS); - return tcp_v6_send_synack(sk, req, rvp, 0); + return tcp_v6_send_synack(sk, NULL, &fl6, req, rvp, 0); } static void tcp_v6_reqsk_destructor(struct request_sock *req) @@ -1057,6 +1011,7 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb) struct tcp_sock *tp = tcp_sk(sk); __u32 isn = TCP_SKB_CB(skb)->when; struct dst_entry *dst = NULL; + struct flowi6 fl6; bool want_cookie = false; if (skb->protocol == htons(ETH_P_IP)) @@ -1085,7 +1040,7 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb) tcp_clear_options(&tmp_opt); tmp_opt.mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr); tmp_opt.user_mss = tp->rx_opt.user_mss; - tcp_parse_options(skb, &tmp_opt, &hash_location, 0); + tcp_parse_options(skb, &tmp_opt, &hash_location, 0, NULL); if (tmp_opt.cookie_plus > 0 && tmp_opt.saw_tstamp && @@ -1150,8 +1105,6 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb) treq->iif = inet6_iif(skb); if (!isn) { - struct inet_peer *peer = NULL; - if (ipv6_opt_accepted(sk, skb) || np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo || np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) { @@ -1176,14 +1129,8 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb) */ if (tmp_opt.saw_tstamp && tcp_death_row.sysctl_tw_recycle && - (dst = inet6_csk_route_req(sk, req)) != NULL && - (peer = rt6_get_peer((struct rt6_info *)dst)) != NULL && - ipv6_addr_equal((struct in6_addr *)peer->daddr.addr.a6, - &treq->rmt_addr)) { - inet_peer_refcheck(peer); - if ((u32)get_seconds() - peer->tcp_ts_stamp < TCP_PAWS_MSL && - (s32)(peer->tcp_ts - req->ts_recent) > - TCP_PAWS_WINDOW) { + (dst = inet6_csk_route_req(sk, &fl6, req)) != NULL) { + if (!tcp_peer_is_proven(req, dst, true)) { NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED); goto drop_and_release; } @@ -1192,8 +1139,7 @@ static int tcp_v6_conn_request(struct sock *sk, struct sk_buff *skb) else if (!sysctl_tcp_syncookies && (sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < (sysctl_max_syn_backlog >> 2)) && - (!peer || !peer->tcp_ts_stamp) && - (!dst || !dst_metric(dst, RTAX_RTT))) { + !tcp_peer_is_proven(req, dst, false)) { /* Without syncookies last quarter of * backlog is filled with destinations, * proven to be alive. 
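The conn_request hunk above replaces the open-coded inet_peer timestamp test with tcp_peer_is_proven(). As a rough sketch of the PAWS-style check being centralized (struct and function names below are illustrative; only TCP_PAWS_MSL and TCP_PAWS_WINDOW are real kernel constants, and the actual helper elsewhere in the TCP code may differ in detail):

/*
 * Illustrative only: the per-destination timestamp state the removed
 * code read from struct inet_peer (tcp_ts / tcp_ts_stamp).
 */
struct dst_ts_state {
	u32 last_ts;		/* last TCP timestamp value seen from the peer */
	u32 last_ts_stamp;	/* get_seconds() when it was recorded */
};

/*
 * Passive-open PAWS rejection in the spirit of the code removed above:
 * drop the SYN if a strictly newer timestamp was seen from this
 * destination within the last TCP_PAWS_MSL seconds.
 */
static bool paws_reject_passive_open(const struct dst_ts_state *st,
				     u32 req_ts_recent, u32 now)
{
	return st->last_ts_stamp &&
	       now - st->last_ts_stamp < TCP_PAWS_MSL &&
	       (s32)(st->last_ts - req_ts_recent) > TCP_PAWS_WINDOW;
}
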
@@ -1215,7 +1161,7 @@ have_isn: if (security_inet_conn_request(sk, skb, req)) goto drop_and_release; - if (tcp_v6_send_synack(sk, req, + if (tcp_v6_send_synack(sk, dst, &fl6, req, (struct request_values *)&tmp_ext, skb_get_queue_mapping(skb)) || want_cookie) @@ -1242,10 +1188,10 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb, struct inet_sock *newinet; struct tcp_sock *newtp; struct sock *newsk; - struct ipv6_txoptions *opt; #ifdef CONFIG_TCP_MD5SIG struct tcp_md5sig_key *key; #endif + struct flowi6 fl6; if (skb->protocol == htons(ETH_P_IP)) { /* @@ -1302,13 +1248,12 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb, } treq = inet6_rsk(req); - opt = np->opt; if (sk_acceptq_is_full(sk)) goto out_overflow; if (!dst) { - dst = inet6_csk_route_req(sk, req); + dst = inet6_csk_route_req(sk, &fl6, req); if (!dst) goto out; } @@ -1371,11 +1316,8 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb, but we make one more one thing there: reattach optmem to newsk. */ - if (opt) { - newnp->opt = ipv6_dup_options(newsk, opt); - if (opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); - } + if (np->opt) + newnp->opt = ipv6_dup_options(newsk, np->opt); inet_csk(newsk)->icsk_ext_hdr_len = 0; if (newnp->opt) @@ -1422,8 +1364,6 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb, out_overflow: NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS); out_nonewsk: - if (opt && opt != np->opt) - sock_kfree_s(sk, opt, opt->tot_len); dst_release(dst); out: NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS); @@ -1734,42 +1674,10 @@ do_time_wait: goto discard_it; } -static struct inet_peer *tcp_v6_get_peer(struct sock *sk, bool *release_it) -{ - struct rt6_info *rt = (struct rt6_info *) __sk_dst_get(sk); - struct ipv6_pinfo *np = inet6_sk(sk); - struct inet_peer *peer; - - if (!rt || - !ipv6_addr_equal(&np->daddr, &rt->rt6i_dst.addr)) { - peer = inet_getpeer_v6(&np->daddr, 1); - *release_it = true; - } else { - if (!rt->rt6i_peer) - rt6_bind_peer(rt, 1); - peer = rt->rt6i_peer; - *release_it = false; - } - - return peer; -} - -static void *tcp_v6_tw_get_peer(struct sock *sk) -{ - const struct inet6_timewait_sock *tw6 = inet6_twsk(sk); - const struct inet_timewait_sock *tw = inet_twsk(sk); - - if (tw->tw_family == AF_INET) - return tcp_v4_tw_get_peer(sk); - - return inet_getpeer_v6(&tw6->tw_v6_daddr, 1); -} - static struct timewait_sock_ops tcp6_timewait_sock_ops = { .twsk_obj_size = sizeof(struct tcp6_timewait_sock), .twsk_unique = tcp_twsk_unique, .twsk_destructor= tcp_twsk_destructor, - .twsk_getpeer = tcp_v6_tw_get_peer, }; static const struct inet_connection_sock_af_ops ipv6_specific = { @@ -1778,7 +1686,6 @@ static const struct inet_connection_sock_af_ops ipv6_specific = { .rebuild_header = inet6_sk_rebuild_header, .conn_request = tcp_v6_conn_request, .syn_recv_sock = tcp_v6_syn_recv_sock, - .get_peer = tcp_v6_get_peer, .net_header_len = sizeof(struct ipv6hdr), .net_frag_header_len = sizeof(struct frag_hdr), .setsockopt = ipv6_setsockopt, @@ -1810,7 +1717,6 @@ static const struct inet_connection_sock_af_ops ipv6_mapped = { .rebuild_header = inet_sk_rebuild_header, .conn_request = tcp_v6_conn_request, .syn_recv_sock = tcp_v6_syn_recv_sock, - .get_peer = tcp_v4_get_peer, .net_header_len = sizeof(struct iphdr), .setsockopt = ipv6_setsockopt, .getsockopt = ipv6_getsockopt, @@ -2049,6 +1955,8 @@ struct proto tcpv6_prot = { .sendmsg = tcp_sendmsg, .sendpage = tcp_sendpage, 
.backlog_rcv = tcp_v6_do_rcv, + .release_cb = tcp_release_cb, + .mtu_reduced = tcp_v6_mtu_reduced, .hash = tcp_v6_hash, .unhash = inet_unhash, .get_port = inet_csk_get_port, diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index f05099fc5901..99d0077b56b8 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -48,6 +48,7 @@ #include <linux/proc_fs.h> #include <linux/seq_file.h> +#include <trace/events/skb.h> #include "udp_impl.h" int ipv6_rcv_saddr_equal(const struct sock *sk, const struct sock *sk2) @@ -385,15 +386,16 @@ try_again: if (skb_csum_unnecessary(skb)) err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), - msg->msg_iov, copied ); + msg->msg_iov, copied); else { err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov); if (err == -EINVAL) goto csum_copy_err; } - if (err) + if (unlikely(err)) { + trace_kfree_skb(skb, udpv6_recvmsg); goto out_free; - + } if (!peeked) { if (is_udp4) UDP_INC_STATS_USER(sock_net(sk), @@ -479,6 +481,11 @@ void __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt, if (sk == NULL) return; + if (type == ICMPV6_PKT_TOOBIG) + ip6_sk_update_pmtu(skb, sk, info); + if (type == NDISC_REDIRECT) + ip6_sk_redirect(skb, sk); + np = inet6_sk(sk); if (!icmpv6_err_convert(type, code, &err) && !np->recverr) diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c index 8625fba96db9..ef39812107b1 100644 --- a/net/ipv6/xfrm6_policy.c +++ b/net/ipv6/xfrm6_policy.c @@ -99,12 +99,11 @@ static int xfrm6_fill_dst(struct xfrm_dst *xdst, struct net_device *dev, if (!xdst->u.rt6.rt6i_idev) return -ENODEV; - xdst->u.rt6.rt6i_peer = rt->rt6i_peer; - if (rt->rt6i_peer) - atomic_inc(&rt->rt6i_peer->refcnt); + rt6_transfer_peer(&xdst->u.rt6, rt); /* Sheit... I remember I did this right. Apparently, * it was magically lost, so this code needs audit */ + xdst->u.rt6.n = neigh_clone(rt->n); xdst->u.rt6.rt6i_flags = rt->rt6i_flags & (RTF_ANYCAST | RTF_LOCAL); xdst->u.rt6.rt6i_metric = rt->rt6i_metric; @@ -208,12 +207,22 @@ static inline int xfrm6_garbage_collect(struct dst_ops *ops) return dst_entries_get_fast(ops) > ops->gc_thresh * 2; } -static void xfrm6_update_pmtu(struct dst_entry *dst, u32 mtu) +static void xfrm6_update_pmtu(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb, u32 mtu) { struct xfrm_dst *xdst = (struct xfrm_dst *)dst; struct dst_entry *path = xdst->route; - path->ops->update_pmtu(path, mtu); + path->ops->update_pmtu(path, sk, skb, mtu); +} + +static void xfrm6_redirect(struct dst_entry *dst, struct sock *sk, + struct sk_buff *skb) +{ + struct xfrm_dst *xdst = (struct xfrm_dst *)dst; + struct dst_entry *path = xdst->route; + + path->ops->redirect(path, sk, skb); } static void xfrm6_dst_destroy(struct dst_entry *dst) @@ -223,8 +232,10 @@ static void xfrm6_dst_destroy(struct dst_entry *dst) if (likely(xdst->u.rt6.rt6i_idev)) in6_dev_put(xdst->u.rt6.rt6i_idev); dst_destroy_metrics_generic(dst); - if (likely(xdst->u.rt6.rt6i_peer)) - inet_putpeer(xdst->u.rt6.rt6i_peer); + if (rt6_has_peer(&xdst->u.rt6)) { + struct inet_peer *peer = rt6_peer_ptr(&xdst->u.rt6); + inet_putpeer(peer); + } xfrm_dst_destroy(xdst); } @@ -260,6 +271,7 @@ static struct dst_ops xfrm6_dst_ops = { .protocol = cpu_to_be16(ETH_P_IPV6), .gc = xfrm6_garbage_collect, .update_pmtu = xfrm6_update_pmtu, + .redirect = xfrm6_redirect, .cow_metrics = dst_cow_metrics_generic, .destroy = xfrm6_dst_destroy, .ifdown = xfrm6_dst_ifdown, diff --git a/net/ipx/Makefile b/net/ipx/Makefile index 4b95e3ea0f8b..440fafa9fd07 100644 --- a/net/ipx/Makefile +++ 
b/net/ipx/Makefile @@ -4,5 +4,5 @@ obj-$(CONFIG_IPX) += ipx.o -ipx-y := af_ipx.o ipx_route.o ipx_proc.o +ipx-y := af_ipx.o ipx_route.o ipx_proc.o pe2.o ipx-$(CONFIG_SYSCTL) += sysctl_net_ipx.o diff --git a/net/ethernet/pe2.c b/net/ipx/pe2.c index 85d574addbc1..32dcd601ab32 100644 --- a/net/ethernet/pe2.c +++ b/net/ipx/pe2.c @@ -28,10 +28,8 @@ struct datalink_proto *make_EII_client(void) return proto; } -EXPORT_SYMBOL(make_EII_client); void destroy_EII_client(struct datalink_proto *dl) { kfree(dl); } -EXPORT_SYMBOL(destroy_EII_client); diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c index bb14c3477680..bb738c9f9146 100644 --- a/net/irda/af_irda.c +++ b/net/irda/af_irda.c @@ -955,7 +955,7 @@ out: * The main difference with a "standard" connect is that with IrDA we need * to resolve the service name into a TSAP selector (in TCP, port number * doesn't have to be resolved). - * Because of this service name resoltion, we can offer "auto-connect", + * Because of this service name resolution, we can offer "auto-connect", * where we connect to a service without specifying a destination address. * * Note : by consulting "errno", the user space caller may learn the cause diff --git a/net/irda/irlan/irlan_provider.c b/net/irda/irlan/irlan_provider.c index 32dcaac70b0c..4664855222f4 100644 --- a/net/irda/irlan/irlan_provider.c +++ b/net/irda/irlan/irlan_provider.c @@ -296,7 +296,7 @@ void irlan_provider_send_reply(struct irlan_cb *self, int command, skb = alloc_skb(IRLAN_MAX_HEADER + IRLAN_CMD_HEADER + /* Bigger param length comes from CMD_GET_MEDIA_CHAR */ IRLAN_STRING_PARAMETER_LEN("FILTER_TYPE", "DIRECTED") + - IRLAN_STRING_PARAMETER_LEN("FILTER_TYPE", "BORADCAST") + + IRLAN_STRING_PARAMETER_LEN("FILTER_TYPE", "BROADCAST") + IRLAN_STRING_PARAMETER_LEN("FILTER_TYPE", "MULTICAST") + IRLAN_STRING_PARAMETER_LEN("ACCESS_TYPE", "HOSTED"), GFP_ATOMIC); diff --git a/net/irda/irqueue.c b/net/irda/irqueue.c index f06947c4fa82..7152624ed5f1 100644 --- a/net/irda/irqueue.c +++ b/net/irda/irqueue.c @@ -523,7 +523,7 @@ void *hashbin_remove_first( hashbin_t *hashbin) * Dequeue the entry... */ dequeue_general( (irda_queue_t**) &hashbin->hb_queue[ bin ], - (irda_queue_t*) entry ); + entry); hashbin->hb_size--; entry->q_next = NULL; entry->q_prev = NULL; @@ -615,7 +615,7 @@ void* hashbin_remove( hashbin_t* hashbin, long hashv, const char* name) */ if ( found ) { dequeue_general( (irda_queue_t**) &hashbin->hb_queue[ bin ], - (irda_queue_t*) entry ); + entry); hashbin->hb_size--; /* @@ -685,7 +685,7 @@ void* hashbin_remove_this( hashbin_t* hashbin, irda_queue_t* entry) * Dequeue the entry... */ dequeue_general( (irda_queue_t**) &hashbin->hb_queue[ bin ], - (irda_queue_t*) entry ); + entry); hashbin->hb_size--; entry->q_next = NULL; entry->q_prev = NULL; diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c index 32b2155e7ab4..393355d37b47 100644 --- a/net/l2tp/l2tp_core.c +++ b/net/l2tp/l2tp_core.c @@ -1128,6 +1128,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len int headroom; int uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0; int udp_len; + int ret = NET_XMIT_SUCCESS; /* Check that there's enough headroom in the skb to insert IP, * UDP and L2TP headers. 
If not enough, expand it to @@ -1137,8 +1138,8 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len uhlen + hdr_len; old_headroom = skb_headroom(skb); if (skb_cow_head(skb, headroom)) { - dev_kfree_skb(skb); - goto abort; + kfree_skb(skb); + return NET_XMIT_DROP; } new_headroom = skb_headroom(skb); @@ -1156,7 +1157,8 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len bh_lock_sock(sk); if (sock_owned_by_user(sk)) { - dev_kfree_skb(skb); + kfree_skb(skb); + ret = NET_XMIT_DROP; goto out_unlock; } @@ -1215,8 +1217,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len out_unlock: bh_unlock_sock(sk); -abort: - return 0; + return ret; } EXPORT_SYMBOL_GPL(l2tp_xmit_skb); diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c index 47b259fccd27..f9ee74deeac2 100644 --- a/net/l2tp/l2tp_eth.c +++ b/net/l2tp/l2tp_eth.c @@ -44,6 +44,7 @@ struct l2tp_eth { struct list_head list; atomic_long_t tx_bytes; atomic_long_t tx_packets; + atomic_long_t tx_dropped; atomic_long_t rx_bytes; atomic_long_t rx_packets; atomic_long_t rx_errors; @@ -92,12 +93,15 @@ static int l2tp_eth_dev_xmit(struct sk_buff *skb, struct net_device *dev) { struct l2tp_eth *priv = netdev_priv(dev); struct l2tp_session *session = priv->session; + unsigned int len = skb->len; + int ret = l2tp_xmit_skb(session, skb, session->hdr_len); - atomic_long_add(skb->len, &priv->tx_bytes); - atomic_long_inc(&priv->tx_packets); - - l2tp_xmit_skb(session, skb, session->hdr_len); - + if (likely(ret == NET_XMIT_SUCCESS)) { + atomic_long_add(len, &priv->tx_bytes); + atomic_long_inc(&priv->tx_packets); + } else { + atomic_long_inc(&priv->tx_dropped); + } return NETDEV_TX_OK; } @@ -108,6 +112,7 @@ static struct rtnl_link_stats64 *l2tp_eth_get_stats64(struct net_device *dev, stats->tx_bytes = atomic_long_read(&priv->tx_bytes); stats->tx_packets = atomic_long_read(&priv->tx_packets); + stats->tx_dropped = atomic_long_read(&priv->tx_dropped); stats->rx_bytes = atomic_long_read(&priv->rx_bytes); stats->rx_packets = atomic_long_read(&priv->rx_packets); stats->rx_errors = atomic_long_read(&priv->rx_errors); diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c index ddc553e76671..d71cd9229a47 100644 --- a/net/l2tp/l2tp_netlink.c +++ b/net/l2tp/l2tp_netlink.c @@ -72,7 +72,7 @@ static int l2tp_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info) void *hdr; int ret = -ENOBUFS; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) { ret = -ENOMEM; goto out; @@ -353,7 +353,7 @@ static int l2tp_nl_cmd_tunnel_get(struct sk_buff *skb, struct genl_info *info) goto out; } - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) { ret = -ENOMEM; goto out; @@ -699,7 +699,7 @@ static int l2tp_nl_cmd_session_get(struct sk_buff *skb, struct genl_info *info) goto out; } - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) { ret = -ENOMEM; goto out; diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c index 8ef6b9416cba..286366ef8930 100644 --- a/net/l2tp/l2tp_ppp.c +++ b/net/l2tp/l2tp_ppp.c @@ -1522,8 +1522,8 @@ static int pppol2tp_session_getsockopt(struct sock *sk, * handler, according to whether the PPPoX socket is a for a regular session * or the special tunnel type. 
*/ -static int pppol2tp_getsockopt(struct socket *sock, int level, - int optname, char __user *optval, int __user *optlen) +static int pppol2tp_getsockopt(struct socket *sock, int level, int optname, + char __user *optval, int __user *optlen) { struct sock *sk = sock->sk; struct l2tp_session *session; @@ -1535,7 +1535,7 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, if (level != SOL_PPPOL2TP) return udp_prot.getsockopt(sk, level, optname, optval, optlen); - if (get_user(len, (int __user *) optlen)) + if (get_user(len, optlen)) return -EFAULT; len = min_t(unsigned int, len, sizeof(int)); @@ -1568,7 +1568,7 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, err = pppol2tp_session_getsockopt(sk, session, optname, &val); err = -EFAULT; - if (put_user(len, (int __user *) optlen)) + if (put_user(len, optlen)) goto end_put_sess; if (copy_to_user((void __user *) optval, &val, len)) diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c index fe5453c3e719..f6fe4d400502 100644 --- a/net/llc/af_llc.c +++ b/net/llc/af_llc.c @@ -1024,7 +1024,7 @@ static int llc_ui_ioctl(struct socket *sock, unsigned int cmd, * @sock: Socket to set options on. * @level: Socket level user is requesting operations on. * @optname: Operation name. - * @optval User provided operation data. + * @optval: User provided operation data. * @optlen: Length of optval. * * Set various connection specific parameters. diff --git a/net/llc/llc_station.c b/net/llc/llc_station.c index cf4aea3ba30f..39a8d8924b9c 100644 --- a/net/llc/llc_station.c +++ b/net/llc/llc_station.c @@ -30,12 +30,12 @@ * * SAP and connection resource manager, one per adapter. * - * @state - state of station - * @xid_r_count - XID response PDU counter - * @mac_sa - MAC source address - * @sap_list - list of related SAPs - * @ev_q - events entering state mach. - * @mac_pdu_q - PDUs ready to send to MAC + * @state: state of station + * @xid_r_count: XID response PDU counter + * @mac_sa: MAC source address + * @sap_list: list of related SAPs + * @ev_q: events entering state mach. + * @mac_pdu_q: PDUs ready to send to MAC */ struct llc_station { u8 state; @@ -646,7 +646,7 @@ static void llc_station_service_events(void) } /** - * llc_station_state_process: queue event and try to process queue. + * llc_station_state_process - queue event and try to process queue. * @skb: Address of the event * * Queues an event (on the station event queue) for handling by the @@ -672,7 +672,7 @@ static void llc_station_ack_tmr_cb(unsigned long timeout_data) } } -/* +/** * llc_station_rcv - send received pdu to the station state machine * @skb: received frame. * diff --git a/net/mac80211/Kconfig b/net/mac80211/Kconfig index 8d249d705980..63af25458fda 100644 --- a/net/mac80211/Kconfig +++ b/net/mac80211/Kconfig @@ -107,6 +107,19 @@ config MAC80211_DEBUGFS Say N unless you know you need this. +config MAC80211_MESSAGE_TRACING + bool "Trace all mac80211 debug messages" + depends on MAC80211 + ---help--- + Select this option to have mac80211 register the + mac80211_msg trace subsystem with tracepoints to + collect all debugging messages, independent of + printing them into the kernel log. + + The overhead in this option is that all the messages + need to be present in the binary and formatted at + runtime for tracing. + menuconfig MAC80211_DEBUG_MENU bool "Select mac80211 debugging features" depends on MAC80211 @@ -140,26 +153,35 @@ config MAC80211_VERBOSE_DEBUG Do not select this option. 
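The llc hunks above are pure kernel-doc fixes: a documentation block opens with "/**" and member lines use "@name: description". For reference, the canonical shape is (a generic sketch, not taken from the llc code):

/**
 * struct foo - one-line summary of what foo represents
 * @bar: what the bar member holds
 * @baz: what the baz member holds
 *
 * Optional longer description of struct foo.
 */
struct foo {
	int bar;
	int baz;
};
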
-config MAC80211_HT_DEBUG - bool "Verbose HT debugging" +config MAC80211_MLME_DEBUG + bool "Verbose managed MLME output" depends on MAC80211_DEBUG_MENU ---help--- - This option enables 802.11n High Throughput features - debug tracing output. - - It should not be selected on production systems as some + Selecting this option causes mac80211 to print out + debugging messages for the managed-mode MLME. It + should not be selected on production systems as some of the messages are remotely triggerable. Do not select this option. -config MAC80211_TKIP_DEBUG - bool "Verbose TKIP debugging" +config MAC80211_STA_DEBUG + bool "Verbose station debugging" depends on MAC80211_DEBUG_MENU ---help--- Selecting this option causes mac80211 to print out - very verbose TKIP debugging messages. It should not - be selected on production systems as those messages - are remotely triggerable. + debugging messages for station addition/removal. + + Do not select this option. + +config MAC80211_HT_DEBUG + bool "Verbose HT debugging" + depends on MAC80211_DEBUG_MENU + ---help--- + This option enables 802.11n High Throughput features + debug tracing output. + + It should not be selected on production systems as some + of the messages are remotely triggerable. Do not select this option. @@ -174,7 +196,7 @@ config MAC80211_IBSS_DEBUG Do not select this option. -config MAC80211_VERBOSE_PS_DEBUG +config MAC80211_PS_DEBUG bool "Verbose powersave mode debugging" depends on MAC80211_DEBUG_MENU ---help--- @@ -186,7 +208,7 @@ config MAC80211_VERBOSE_PS_DEBUG Do not select this option. -config MAC80211_VERBOSE_MPL_DEBUG +config MAC80211_MPL_DEBUG bool "Verbose mesh peer link debugging" depends on MAC80211_DEBUG_MENU depends on MAC80211_MESH @@ -199,7 +221,7 @@ config MAC80211_VERBOSE_MPL_DEBUG Do not select this option. -config MAC80211_VERBOSE_MPATH_DEBUG +config MAC80211_MPATH_DEBUG bool "Verbose mesh path debugging" depends on MAC80211_DEBUG_MENU depends on MAC80211_MESH @@ -212,7 +234,7 @@ config MAC80211_VERBOSE_MPATH_DEBUG Do not select this option. -config MAC80211_VERBOSE_MHWMP_DEBUG +config MAC80211_MHWMP_DEBUG bool "Verbose mesh HWMP routing debugging" depends on MAC80211_DEBUG_MENU depends on MAC80211_MESH @@ -225,7 +247,7 @@ config MAC80211_VERBOSE_MHWMP_DEBUG Do not select this option. -config MAC80211_VERBOSE_MESH_SYNC_DEBUG +config MAC80211_MESH_SYNC_DEBUG bool "Verbose mesh mesh synchronization debugging" depends on MAC80211_DEBUG_MENU depends on MAC80211_MESH @@ -236,7 +258,7 @@ config MAC80211_VERBOSE_MESH_SYNC_DEBUG Do not select this option. 
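The renamed Kconfig symbols here, and the ht_dbg()/ht_dbg_ratelimited() calls that replace the open-coded printk()s in the aggregation code below, follow the usual compile-time-gated debug macro pattern. A simplified standalone sketch (the real mac80211 helpers additionally take the sub-interface as their first argument):

/* Simplified sketch; not the actual mac80211 definitions. */
#ifdef CONFIG_MAC80211_HT_DEBUG
#define ht_dbg_example(fmt, ...) \
	printk(KERN_DEBUG "mac80211 HT: " fmt, ##__VA_ARGS__)
#else
/* The if (0) keeps printf-format checking even when debugging is compiled out. */
#define ht_dbg_example(fmt, ...) \
	do { if (0) printk(KERN_DEBUG fmt, ##__VA_ARGS__); } while (0)
#endif

/* Ratelimited variant in the same spirit. */
#ifdef CONFIG_MAC80211_HT_DEBUG
#define ht_dbg_ratelimited_example(fmt, ...) \
	printk_ratelimited(KERN_DEBUG "mac80211 HT: " fmt, ##__VA_ARGS__)
#else
#define ht_dbg_ratelimited_example(fmt, ...) do { } while (0)
#endif
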
-config MAC80211_VERBOSE_TDLS_DEBUG +config MAC80211_TDLS_DEBUG bool "Verbose TDLS debugging" depends on MAC80211_DEBUG_MENU ---help--- diff --git a/net/mac80211/Makefile b/net/mac80211/Makefile index 3e9d931bba35..a7dd110faafa 100644 --- a/net/mac80211/Makefile +++ b/net/mac80211/Makefile @@ -9,7 +9,6 @@ mac80211-y := \ scan.o offchannel.o \ ht.o agg-tx.o agg-rx.o \ ibss.o \ - work.o \ iface.o \ rate.o \ michael.o \ @@ -25,7 +24,7 @@ mac80211-y := \ wme.o \ event.o \ chan.o \ - driver-trace.o mlme.o + trace.o mlme.o mac80211-$(CONFIG_MAC80211_LEDS) += led.o mac80211-$(CONFIG_MAC80211_DEBUGFS) += \ @@ -43,7 +42,7 @@ mac80211-$(CONFIG_MAC80211_MESH) += \ mac80211-$(CONFIG_PM) += pm.o -CFLAGS_driver-trace.o := -I$(src) +CFLAGS_trace.o := -I$(src) # objects for PID algorithm rc80211_pid-y := rc80211_pid_algo.o @@ -59,4 +58,4 @@ mac80211-$(CONFIG_MAC80211_RC_PID) += $(rc80211_pid-y) mac80211-$(CONFIG_MAC80211_RC_MINSTREL) += $(rc80211_minstrel-y) mac80211-$(CONFIG_MAC80211_RC_MINSTREL_HT) += $(rc80211_minstrel_ht-y) -ccflags-y += -D__CHECK_ENDIAN__ +ccflags-y += -D__CHECK_ENDIAN__ -DDEBUG diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c index c649188314cc..186d9919b043 100644 --- a/net/mac80211/agg-rx.c +++ b/net/mac80211/agg-rx.c @@ -74,18 +74,17 @@ void ___ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid, RCU_INIT_POINTER(sta->ampdu_mlme.tid_rx[tid], NULL); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG + ht_dbg(sta->sdata, "Rx BA session stop requested for %pM tid %u %s reason: %d\n", sta->sta.addr, tid, initiator == WLAN_BACK_RECIPIENT ? "recipient" : "inititator", (int)reason); -#endif /* CONFIG_MAC80211_HT_DEBUG */ if (drv_ampdu_action(local, sta->sdata, IEEE80211_AMPDU_RX_STOP, &sta->sta, tid, NULL, 0)) - printk(KERN_DEBUG "HW problem - can not stop rx " - "aggregation for tid %d\n", tid); + sdata_info(sta->sdata, + "HW problem - can not stop rx aggregation for tid %d\n", + tid); /* check if this is a self generated aggregation halt */ if (initiator == WLAN_BACK_RECIPIENT && tx) @@ -160,9 +159,8 @@ static void sta_rx_agg_session_timer_expired(unsigned long data) } rcu_read_unlock(); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "rx session timer expired on tid %d\n", (u16)*ptid); -#endif + ht_dbg(sta->sdata, "rx session timer expired on tid %d\n", (u16)*ptid); + set_bit(*ptid, sta->ampdu_mlme.tid_rx_timer_expired); ieee80211_queue_work(&sta->local->hw, &sta->ampdu_mlme.work); } @@ -249,10 +247,7 @@ void ieee80211_process_addba_request(struct ieee80211_local *local, status = WLAN_STATUS_REQUEST_DECLINED; if (test_sta_flag(sta, WLAN_STA_BLOCK_BA)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Suspend in progress. " - "Denying ADDBA request\n"); -#endif + ht_dbg(sta->sdata, "Suspend in progress - Denying ADDBA request\n"); goto end_no_lock; } @@ -264,10 +259,9 @@ void ieee80211_process_addba_request(struct ieee80211_local *local, (!(sta->sta.ht_cap.cap & IEEE80211_HT_CAP_DELAY_BA))) || (buf_size > IEEE80211_MAX_AMPDU_BUF)) { status = WLAN_STATUS_INVALID_QOS_PARAM; -#ifdef CONFIG_MAC80211_HT_DEBUG - net_dbg_ratelimited("AddBA Req with bad params from %pM on tid %u. policy %d, buffer size %d\n", - mgmt->sa, tid, ba_policy, buf_size); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg_ratelimited(sta->sdata, + "AddBA Req with bad params from %pM on tid %u. 
policy %d, buffer size %d\n", + mgmt->sa, tid, ba_policy, buf_size); goto end_no_lock; } /* determine default buffer size */ @@ -282,10 +276,9 @@ void ieee80211_process_addba_request(struct ieee80211_local *local, mutex_lock(&sta->ampdu_mlme.mtx); if (sta->ampdu_mlme.tid_rx[tid]) { -#ifdef CONFIG_MAC80211_HT_DEBUG - net_dbg_ratelimited("unexpected AddBA Req from %pM on tid %u\n", - mgmt->sa, tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg_ratelimited(sta->sdata, + "unexpected AddBA Req from %pM on tid %u\n", + mgmt->sa, tid); /* delete existing Rx BA session on the same tid */ ___ieee80211_stop_rx_ba_session(sta, tid, WLAN_BACK_RECIPIENT, @@ -324,10 +317,7 @@ void ieee80211_process_addba_request(struct ieee80211_local *local, ret = drv_ampdu_action(local, sta->sdata, IEEE80211_AMPDU_RX_START, &sta->sta, tid, &start_seq_num, 0); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Rx A-MPDU request on tid %d result %d\n", tid, ret); -#endif /* CONFIG_MAC80211_HT_DEBUG */ - + ht_dbg(sta->sdata, "Rx A-MPDU request on tid %d result %d\n", tid, ret); if (ret) { kfree(tid_agg_rx->reorder_buf); kfree(tid_agg_rx->reorder_time); diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c index 7cf07158805c..d0deb3edae21 100644 --- a/net/mac80211/agg-tx.c +++ b/net/mac80211/agg-tx.c @@ -135,7 +135,8 @@ void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn) bar->control = cpu_to_le16(bar_control); bar->start_seq_num = cpu_to_le16(ssn); - IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; + IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT | + IEEE80211_TX_CTL_REQ_TX_STATUS; ieee80211_tx_skb_tid(sdata, skb, tid); } EXPORT_SYMBOL(ieee80211_send_bar); @@ -184,10 +185,8 @@ int ___ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid, spin_unlock_bh(&sta->lock); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Tx BA session stop requested for %pM tid %u\n", + ht_dbg(sta->sdata, "Tx BA session stop requested for %pM tid %u\n", sta->sta.addr, tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ del_timer_sync(&tid_tx->addba_resp_timer); del_timer_sync(&tid_tx->session_timer); @@ -253,17 +252,13 @@ static void sta_addba_resp_timer_expired(unsigned long data) if (!tid_tx || test_bit(HT_AGG_STATE_RESPONSE_RECEIVED, &tid_tx->state)) { rcu_read_unlock(); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "timer expired on tid %d but we are not " - "(or no longer) expecting addBA response there\n", - tid); -#endif + ht_dbg(sta->sdata, + "timer expired on tid %d but we are not (or no longer) expecting addBA response there\n", + tid); return; } -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "addBA response timer expired on tid %d\n", tid); -#endif + ht_dbg(sta->sdata, "addBA response timer expired on tid %d\n", tid); ieee80211_stop_tx_ba_session(&sta->sta, tid); rcu_read_unlock(); @@ -323,8 +318,9 @@ ieee80211_agg_splice_packets(struct ieee80211_sub_if_data *sdata, ieee80211_stop_queue_agg(sdata, tid); - if (WARN(!tid_tx, "TID %d gone but expected when splicing aggregates" - " from the pending queue\n", tid)) + if (WARN(!tid_tx, + "TID %d gone but expected when splicing aggregates from the pending queue\n", + tid)) return; if (!skb_queue_empty(&tid_tx->pending)) { @@ -372,10 +368,8 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) ret = drv_ampdu_action(local, sdata, IEEE80211_AMPDU_TX_START, &sta->sta, tid, &start_seq_num, 0); if (ret) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "BA request denied - HW 
unavailable for" - " tid %d\n", tid); -#endif + ht_dbg(sdata, + "BA request denied - HW unavailable for tid %d\n", tid); spin_lock_bh(&sta->lock); ieee80211_agg_splice_packets(sdata, tid_tx, tid); ieee80211_assign_tid_tx(sta, tid, NULL); @@ -388,9 +382,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) /* activate the timer for the recipient's addBA response */ mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "activated addBA response timer on tid %d\n", tid); -#endif + ht_dbg(sdata, "activated addBA response timer on tid %d\n", tid); spin_lock_bh(&sta->lock); sta->ampdu_mlme.last_addba_req_time[tid] = jiffies; @@ -437,9 +429,7 @@ static void sta_tx_agg_session_timer_expired(unsigned long data) rcu_read_unlock(); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "tx session timer expired on tid %d\n", (u16)*ptid); -#endif + ht_dbg(sta->sdata, "tx session timer expired on tid %d\n", (u16)*ptid); ieee80211_stop_tx_ba_session(&sta->sta, *ptid); } @@ -463,10 +453,8 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, (local->hw.flags & IEEE80211_HW_TX_AMPDU_SETUP_IN_HW)) return -EINVAL; -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Open BA session requested for %pM tid %u\n", + ht_dbg(sdata, "Open BA session requested for %pM tid %u\n", pubsta->addr, tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ if (sdata->vif.type != NL80211_IFTYPE_STATION && sdata->vif.type != NL80211_IFTYPE_MESH_POINT && @@ -476,10 +464,8 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, return -EINVAL; if (test_sta_flag(sta, WLAN_STA_BLOCK_BA)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "BA sessions blocked. " - "Denying BA session request\n"); -#endif + ht_dbg(sdata, + "BA sessions blocked - Denying BA session request\n"); return -EINVAL; } @@ -497,10 +483,9 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, */ if (sta->sdata->vif.type == NL80211_IFTYPE_ADHOC && !sta->sta.ht_cap.ht_supported) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "BA request denied - IBSS STA %pM" - "does not advertise HT support\n", pubsta->addr); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg(sdata, + "BA request denied - IBSS STA %pM does not advertise HT support\n", + pubsta->addr); return -EINVAL; } @@ -520,12 +505,9 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, if (sta->ampdu_mlme.addba_req_num[tid] > HT_AGG_BURST_RETRIES && time_before(jiffies, sta->ampdu_mlme.last_addba_req_time[tid] + HT_AGG_RETRIES_PERIOD)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "BA request denied - " - "waiting a grace period after %d failed requests " - "on tid %u\n", + ht_dbg(sdata, + "BA request denied - waiting a grace period after %d failed requests on tid %u\n", sta->ampdu_mlme.addba_req_num[tid], tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ ret = -EBUSY; goto err_unlock_sta; } @@ -533,10 +515,9 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, tid_tx = rcu_dereference_protected_tid_tx(sta, tid); /* check if the TID is not in aggregation flow already */ if (tid_tx || sta->ampdu_mlme.tid_start_tx[tid]) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "BA request denied - session is not " - "idle on tid %u\n", tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg(sdata, + "BA request denied - session is not idle on tid %u\n", + tid); ret = -EAGAIN; goto err_unlock_sta; } @@ -591,9 +572,7 
@@ static void ieee80211_agg_tx_operational(struct ieee80211_local *local, tid_tx = rcu_dereference_protected_tid_tx(sta, tid); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Aggregation is on for tid %d\n", tid); -#endif + ht_dbg(sta->sdata, "Aggregation is on for tid %d\n", tid); drv_ampdu_action(local, sta->sdata, IEEE80211_AMPDU_TX_OPERATIONAL, @@ -627,10 +606,8 @@ void ieee80211_start_tx_ba_cb(struct ieee80211_vif *vif, u8 *ra, u16 tid) trace_api_start_tx_ba_cb(sdata, ra, tid); if (tid >= STA_TID_NUM) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Bad TID value: tid = %d (>= %d)\n", - tid, STA_TID_NUM); -#endif + ht_dbg(sdata, "Bad TID value: tid = %d (>= %d)\n", + tid, STA_TID_NUM); return; } @@ -638,9 +615,7 @@ void ieee80211_start_tx_ba_cb(struct ieee80211_vif *vif, u8 *ra, u16 tid) sta = sta_info_get_bss(sdata, ra); if (!sta) { mutex_unlock(&local->sta_mtx); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Could not find station: %pM\n", ra); -#endif + ht_dbg(sdata, "Could not find station: %pM\n", ra); return; } @@ -648,9 +623,7 @@ void ieee80211_start_tx_ba_cb(struct ieee80211_vif *vif, u8 *ra, u16 tid) tid_tx = rcu_dereference_protected_tid_tx(sta, tid); if (WARN_ON(!tid_tx)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "addBA was not requested!\n"); -#endif + ht_dbg(sdata, "addBA was not requested!\n"); goto unlock; } @@ -750,25 +723,18 @@ void ieee80211_stop_tx_ba_cb(struct ieee80211_vif *vif, u8 *ra, u8 tid) trace_api_stop_tx_ba_cb(sdata, ra, tid); if (tid >= STA_TID_NUM) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Bad TID value: tid = %d (>= %d)\n", - tid, STA_TID_NUM); -#endif + ht_dbg(sdata, "Bad TID value: tid = %d (>= %d)\n", + tid, STA_TID_NUM); return; } -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Stopping Tx BA session for %pM tid %d\n", - ra, tid); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n", ra, tid); mutex_lock(&local->sta_mtx); sta = sta_info_get_bss(sdata, ra); if (!sta) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "Could not find station: %pM\n", ra); -#endif + ht_dbg(sdata, "Could not find station: %pM\n", ra); goto unlock; } @@ -777,9 +743,7 @@ void ieee80211_stop_tx_ba_cb(struct ieee80211_vif *vif, u8 *ra, u8 tid) tid_tx = rcu_dereference_protected_tid_tx(sta, tid); if (!tid_tx || !test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "unexpected callback to A-MPDU stop\n"); -#endif + ht_dbg(sdata, "unexpected callback to A-MPDU stop\n"); goto unlock_sta; } @@ -855,17 +819,13 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local, goto out; if (mgmt->u.action.u.addba_resp.dialog_token != tid_tx->dialog_token) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "wrong addBA response token, tid %d\n", tid); -#endif + ht_dbg(sta->sdata, "wrong addBA response token, tid %d\n", tid); goto out; } del_timer_sync(&tid_tx->addba_resp_timer); -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG "switched off addBA timer for tid %d\n", tid); -#endif + ht_dbg(sta->sdata, "switched off addBA timer for tid %d\n", tid); /* * addba_resp_timer may have fired before we got here, and @@ -874,11 +834,9 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local, */ if (test_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state) || test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { -#ifdef CONFIG_MAC80211_HT_DEBUG - printk(KERN_DEBUG + ht_dbg(sta->sdata, "got addBA resp for tid %d but we already gave 
up\n", tid); -#endif goto out; } diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c index 7d5108a867ad..d41974aacf51 100644 --- a/net/mac80211/cfg.c +++ b/net/mac80211/cfg.c @@ -20,31 +20,31 @@ #include "rate.h" #include "mesh.h" -static struct net_device *ieee80211_add_iface(struct wiphy *wiphy, char *name, - enum nl80211_iftype type, - u32 *flags, - struct vif_params *params) +static struct wireless_dev *ieee80211_add_iface(struct wiphy *wiphy, char *name, + enum nl80211_iftype type, + u32 *flags, + struct vif_params *params) { struct ieee80211_local *local = wiphy_priv(wiphy); - struct net_device *dev; + struct wireless_dev *wdev; struct ieee80211_sub_if_data *sdata; int err; - err = ieee80211_if_add(local, name, &dev, type, params); + err = ieee80211_if_add(local, name, &wdev, type, params); if (err) return ERR_PTR(err); if (type == NL80211_IFTYPE_MONITOR && flags) { - sdata = IEEE80211_DEV_TO_SUB_IF(dev); + sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); sdata->u.mntr_flags = *flags; } - return dev; + return wdev; } -static int ieee80211_del_iface(struct wiphy *wiphy, struct net_device *dev) +static int ieee80211_del_iface(struct wiphy *wiphy, struct wireless_dev *wdev) { - ieee80211_if_remove(IEEE80211_DEV_TO_SUB_IF(dev)); + ieee80211_if_remove(IEEE80211_WDEV_TO_SUB_IF(wdev)); return 0; } @@ -353,6 +353,7 @@ void sta_set_rate_info_tx(struct sta_info *sta, static void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo) { struct ieee80211_sub_if_data *sdata = sta->sdata; + struct ieee80211_local *local = sdata->local; struct timespec uptime; sinfo->generation = sdata->local->sta_generation; @@ -388,7 +389,9 @@ static void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo) if ((sta->local->hw.flags & IEEE80211_HW_SIGNAL_DBM) || (sta->local->hw.flags & IEEE80211_HW_SIGNAL_UNSPEC)) { sinfo->filled |= STATION_INFO_SIGNAL | STATION_INFO_SIGNAL_AVG; - sinfo->signal = (s8)sta->last_signal; + if (!local->ops->get_rssi || + drv_get_rssi(local, sdata, &sta->sta, &sinfo->signal)) + sinfo->signal = (s8)sta->last_signal; sinfo->signal_avg = (s8) -ewma_read(&sta->avg_signal); } @@ -517,7 +520,7 @@ static void ieee80211_get_et_stats(struct wiphy *wiphy, * network device. 
*/ - rcu_read_lock(); + mutex_lock(&local->sta_mtx); if (sdata->vif.type == NL80211_IFTYPE_STATION) { sta = sta_info_get_bss(sdata, sdata->u.mgd.bssid); @@ -546,7 +549,7 @@ static void ieee80211_get_et_stats(struct wiphy *wiphy, data[i] = (u8)sinfo.signal_avg; i++; } else { - list_for_each_entry_rcu(sta, &local->sta_list, list) { + list_for_each_entry(sta, &local->sta_list, list) { /* Make sure this station belongs to the proper dev */ if (sta->sdata->dev != dev) continue; @@ -603,7 +606,7 @@ do_survey: else data[i++] = -1LL; - rcu_read_unlock(); + mutex_unlock(&local->sta_mtx); if (WARN_ON(i != STA_STATS_LEN)) return; @@ -629,10 +632,11 @@ static int ieee80211_dump_station(struct wiphy *wiphy, struct net_device *dev, int idx, u8 *mac, struct station_info *sinfo) { struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); + struct ieee80211_local *local = sdata->local; struct sta_info *sta; int ret = -ENOENT; - rcu_read_lock(); + mutex_lock(&local->sta_mtx); sta = sta_info_get_by_idx(sdata, idx); if (sta) { @@ -641,7 +645,7 @@ static int ieee80211_dump_station(struct wiphy *wiphy, struct net_device *dev, sta_set_sinfo(sta, sinfo); } - rcu_read_unlock(); + mutex_unlock(&local->sta_mtx); return ret; } @@ -658,10 +662,11 @@ static int ieee80211_get_station(struct wiphy *wiphy, struct net_device *dev, u8 *mac, struct station_info *sinfo) { struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); + struct ieee80211_local *local = sdata->local; struct sta_info *sta; int ret = -ENOENT; - rcu_read_lock(); + mutex_lock(&local->sta_mtx); sta = sta_info_get_bss(sdata, mac); if (sta) { @@ -669,11 +674,54 @@ static int ieee80211_get_station(struct wiphy *wiphy, struct net_device *dev, sta_set_sinfo(sta, sinfo); } - rcu_read_unlock(); + mutex_unlock(&local->sta_mtx); return ret; } +static int ieee80211_set_channel(struct wiphy *wiphy, + struct net_device *netdev, + struct ieee80211_channel *chan, + enum nl80211_channel_type channel_type) +{ + struct ieee80211_local *local = wiphy_priv(wiphy); + struct ieee80211_sub_if_data *sdata = NULL; + + if (netdev) + sdata = IEEE80211_DEV_TO_SUB_IF(netdev); + + switch (ieee80211_get_channel_mode(local, NULL)) { + case CHAN_MODE_HOPPING: + return -EBUSY; + case CHAN_MODE_FIXED: + if (local->oper_channel != chan || + (!sdata && local->_oper_channel_type != channel_type)) + return -EBUSY; + if (!sdata && local->_oper_channel_type == channel_type) + return 0; + break; + case CHAN_MODE_UNDEFINED: + break; + } + + if (!ieee80211_set_channel_type(local, sdata, channel_type)) + return -EBUSY; + + local->oper_channel = chan; + + /* auto-detects changes */ + ieee80211_hw_config(local, 0); + + return 0; +} + +static int ieee80211_set_monitor_channel(struct wiphy *wiphy, + struct ieee80211_channel *chan, + enum nl80211_channel_type channel_type) +{ + return ieee80211_set_channel(wiphy, NULL, chan, channel_type); +} + static int ieee80211_set_probe_resp(struct ieee80211_sub_if_data *sdata, const u8 *resp, size_t resp_len) { @@ -788,6 +836,11 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev, if (old) return -EALREADY; + err = ieee80211_set_channel(wiphy, dev, params->channel, + params->channel_type); + if (err) + return err; + /* * Apply control port protocol, this allows us to * not encrypt dynamic WEP control frames. 
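In the cfg.c hunks above, sta_set_sinfo() can now call into the driver via drv_get_rssi(), and the station dump / ethtool-stats paths switch from rcu_read_lock() to mutex_lock(&local->sta_mtx); the apparent reason is that driver callbacks may sleep, which is not permitted inside an RCU read-side critical section. Reduced to a sketch, the resulting iteration pattern looks like this (fill_one_station() is a stand-in for the sinfo-filling calls):

/* Sketch only: fill_one_station() is a stand-in, the rest mirrors the hunks above. */
static void dump_stations_for_dev(struct ieee80211_local *local,
				  struct net_device *dev)
{
	struct sta_info *sta;

	mutex_lock(&local->sta_mtx);		/* sleepable context; protects sta_list */
	list_for_each_entry(sta, &local->sta_list, list) {
		if (sta->sdata->dev != dev)	/* only stations of this netdev */
			continue;
		fill_one_station(sta);		/* may sleep: may call driver ops */
	}
	mutex_unlock(&local->sta_mtx);
}
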
@@ -864,6 +917,7 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev) kfree_rcu(old, rcu_head); + sta_info_flush(sdata->local, sdata); ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BEACON_ENABLED); return 0; @@ -1482,7 +1536,7 @@ static int ieee80211_update_mesh_config(struct wiphy *wiphy, if (_chg_mesh_attr(NL80211_MESHCONF_TTL, mask)) conf->dot11MeshTTL = nconf->dot11MeshTTL; if (_chg_mesh_attr(NL80211_MESHCONF_ELEMENT_TTL, mask)) - conf->dot11MeshTTL = nconf->element_ttl; + conf->element_ttl = nconf->element_ttl; if (_chg_mesh_attr(NL80211_MESHCONF_AUTO_OPEN_PLINKS, mask)) conf->auto_open_plinks = nconf->auto_open_plinks; if (_chg_mesh_attr(NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR, mask)) @@ -1517,17 +1571,16 @@ static int ieee80211_update_mesh_config(struct wiphy *wiphy, * announcements, so require this ifmsh to also be a root node * */ if (nconf->dot11MeshGateAnnouncementProtocol && - !conf->dot11MeshHWMPRootMode) { - conf->dot11MeshHWMPRootMode = 1; + !(conf->dot11MeshHWMPRootMode > IEEE80211_ROOTMODE_ROOT)) { + conf->dot11MeshHWMPRootMode = IEEE80211_PROACTIVE_RANN; ieee80211_mesh_root_setup(ifmsh); } conf->dot11MeshGateAnnouncementProtocol = nconf->dot11MeshGateAnnouncementProtocol; } - if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_RANN_INTERVAL, mask)) { + if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_RANN_INTERVAL, mask)) conf->dot11MeshHWMPRannInterval = nconf->dot11MeshHWMPRannInterval; - } if (_chg_mesh_attr(NL80211_MESHCONF_FORWARDING, mask)) conf->dot11MeshForwarding = nconf->dot11MeshForwarding; if (_chg_mesh_attr(NL80211_MESHCONF_RSSI_THRESHOLD, mask)) { @@ -1543,6 +1596,15 @@ static int ieee80211_update_mesh_config(struct wiphy *wiphy, sdata->vif.bss_conf.ht_operation_mode = nconf->ht_opmode; ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_HT); } + if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT, mask)) + conf->dot11MeshHWMPactivePathToRootTimeout = + nconf->dot11MeshHWMPactivePathToRootTimeout; + if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_ROOT_INTERVAL, mask)) + conf->dot11MeshHWMProotInterval = + nconf->dot11MeshHWMProotInterval; + if (_chg_mesh_attr(NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL, mask)) + conf->dot11MeshHWMPconfirmationInterval = + nconf->dot11MeshHWMPconfirmationInterval; return 0; } @@ -1558,6 +1620,12 @@ static int ieee80211_join_mesh(struct wiphy *wiphy, struct net_device *dev, err = copy_mesh_setup(ifmsh, setup); if (err) return err; + + err = ieee80211_set_channel(wiphy, dev, setup->channel, + setup->channel_type); + if (err) + return err; + ieee80211_start_mesh(sdata); return 0; @@ -1674,54 +1742,7 @@ static int ieee80211_set_txq_params(struct wiphy *wiphy, return -EINVAL; } - return 0; -} - -static int ieee80211_set_channel(struct wiphy *wiphy, - struct net_device *netdev, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type) -{ - struct ieee80211_local *local = wiphy_priv(wiphy); - struct ieee80211_sub_if_data *sdata = NULL; - struct ieee80211_channel *old_oper; - enum nl80211_channel_type old_oper_type; - enum nl80211_channel_type old_vif_oper_type= NL80211_CHAN_NO_HT; - - if (netdev) - sdata = IEEE80211_DEV_TO_SUB_IF(netdev); - - switch (ieee80211_get_channel_mode(local, NULL)) { - case CHAN_MODE_HOPPING: - return -EBUSY; - case CHAN_MODE_FIXED: - if (local->oper_channel != chan) - return -EBUSY; - if (!sdata && local->_oper_channel_type == channel_type) - return 0; - break; - case CHAN_MODE_UNDEFINED: - break; - } - - if (sdata) - old_vif_oper_type = sdata->vif.bss_conf.channel_type; - 
old_oper_type = local->_oper_channel_type; - - if (!ieee80211_set_channel_type(local, sdata, channel_type)) - return -EBUSY; - - old_oper = local->oper_channel; - local->oper_channel = chan; - - /* Update driver if changes were actually made. */ - if ((old_oper != local->oper_channel) || - (old_oper_type != local->_oper_channel_type)) - ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL); - - if (sdata && sdata->vif.type != NL80211_IFTYPE_MONITOR && - old_vif_oper_type != sdata->vif.bss_conf.channel_type) - ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_HT); + ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_QOS); return 0; } @@ -1743,10 +1764,11 @@ static int ieee80211_resume(struct wiphy *wiphy) #endif static int ieee80211_scan(struct wiphy *wiphy, - struct net_device *dev, struct cfg80211_scan_request *req) { - struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); + struct ieee80211_sub_if_data *sdata; + + sdata = IEEE80211_WDEV_TO_SUB_IF(req->wdev); switch (ieee80211_vif_type_p2p(&sdata->vif)) { case NL80211_IFTYPE_STATION: @@ -2111,143 +2133,291 @@ static int ieee80211_set_bitrate_mask(struct wiphy *wiphy, return 0; } -static int ieee80211_remain_on_channel_hw(struct ieee80211_local *local, - struct net_device *dev, - struct ieee80211_channel *chan, - enum nl80211_channel_type chantype, - unsigned int duration, u64 *cookie) +static int ieee80211_start_roc_work(struct ieee80211_local *local, + struct ieee80211_sub_if_data *sdata, + struct ieee80211_channel *channel, + enum nl80211_channel_type channel_type, + unsigned int duration, u64 *cookie, + struct sk_buff *txskb) { + struct ieee80211_roc_work *roc, *tmp; + bool queued = false; int ret; - u32 random_cookie; lockdep_assert_held(&local->mtx); - if (local->hw_roc_cookie) - return -EBUSY; - /* must be nonzero */ - random_cookie = random32() | 1; - - *cookie = random_cookie; - local->hw_roc_dev = dev; - local->hw_roc_cookie = random_cookie; - local->hw_roc_channel = chan; - local->hw_roc_channel_type = chantype; - local->hw_roc_duration = duration; - ret = drv_remain_on_channel(local, chan, chantype, duration); + roc = kzalloc(sizeof(*roc), GFP_KERNEL); + if (!roc) + return -ENOMEM; + + roc->chan = channel; + roc->chan_type = channel_type; + roc->duration = duration; + roc->req_duration = duration; + roc->frame = txskb; + roc->mgmt_tx_cookie = (unsigned long)txskb; + roc->sdata = sdata; + INIT_DELAYED_WORK(&roc->work, ieee80211_sw_roc_work); + INIT_LIST_HEAD(&roc->dependents); + + /* if there's one pending or we're scanning, queue this one */ + if (!list_empty(&local->roc_list) || local->scanning) + goto out_check_combine; + + /* if not HW assist, just queue & schedule work */ + if (!local->ops->remain_on_channel) { + ieee80211_queue_delayed_work(&local->hw, &roc->work, 0); + goto out_queue; + } + + /* otherwise actually kick it off here (for error handling) */ + + /* + * If the duration is zero, then the driver + * wouldn't actually do anything. Set it to + * 10 for now. + * + * TODO: cancel the off-channel operation + * when we get the SKB's TX status and + * the wait time was zero before. 
+ */ + if (!duration) + duration = 10; + + ret = drv_remain_on_channel(local, channel, channel_type, duration); if (ret) { - local->hw_roc_channel = NULL; - local->hw_roc_cookie = 0; + kfree(roc); + return ret; } - return ret; + roc->started = true; + goto out_queue; + + out_check_combine: + list_for_each_entry(tmp, &local->roc_list, list) { + if (tmp->chan != channel || tmp->chan_type != channel_type) + continue; + + /* + * Extend this ROC if possible: + * + * If it hasn't started yet, just increase the duration + * and add the new one to the list of dependents. + */ + if (!tmp->started) { + list_add_tail(&roc->list, &tmp->dependents); + tmp->duration = max(tmp->duration, roc->duration); + queued = true; + break; + } + + /* If it has already started, it's more difficult ... */ + if (local->ops->remain_on_channel) { + unsigned long j = jiffies; + + /* + * In the offloaded ROC case, if it hasn't begun, add + * this new one to the dependent list to be handled + * when the master one begins. If it has begun, + * check that there's still a minimum time left and + * if so, start this one, transmitting the frame, but + * add it to the list directly after this one with + * a reduced time so we'll ask the driver to execute + * it right after finishing the previous one, in the + * hope that it'll also be executed right afterwards, + * effectively extending the old one. + * If there's no minimum time left, just add it to the + * normal list. + */ + if (!tmp->hw_begun) { + list_add_tail(&roc->list, &tmp->dependents); + queued = true; + break; + } + + if (time_before(j + IEEE80211_ROC_MIN_LEFT, + tmp->hw_start_time + + msecs_to_jiffies(tmp->duration))) { + int new_dur; + + ieee80211_handle_roc_started(roc); + + new_dur = roc->duration - + jiffies_to_msecs(tmp->hw_start_time + + msecs_to_jiffies( + tmp->duration) - + j); + + if (new_dur > 0) { + /* add right after tmp */ + list_add(&roc->list, &tmp->list); + } else { + list_add_tail(&roc->list, + &tmp->dependents); + } + queued = true; + } + } else if (del_timer_sync(&tmp->work.timer)) { + unsigned long new_end; + + /* + * In the software ROC case, cancel the timer, if + * that fails then the finish work is already + * queued/pending and thus we queue the new ROC + * normally, if that succeeds then we can extend + * the timer duration and TX the frame (if any.) 
+ */ + + list_add_tail(&roc->list, &tmp->dependents); + queued = true; + + new_end = jiffies + msecs_to_jiffies(roc->duration); + + /* ok, it was started & we canceled timer */ + if (time_after(new_end, tmp->work.timer.expires)) + mod_timer(&tmp->work.timer, new_end); + else + add_timer(&tmp->work.timer); + + ieee80211_handle_roc_started(roc); + } + break; + } + + out_queue: + if (!queued) + list_add_tail(&roc->list, &local->roc_list); + + /* + * cookie is either the roc (for normal roc) + * or the SKB (for mgmt TX) + */ + if (txskb) + *cookie = (unsigned long)txskb; + else + *cookie = (unsigned long)roc; + + return 0; } static int ieee80211_remain_on_channel(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, u64 *cookie) { - struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); + struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); struct ieee80211_local *local = sdata->local; + int ret; - if (local->ops->remain_on_channel) { - int ret; - - mutex_lock(&local->mtx); - ret = ieee80211_remain_on_channel_hw(local, dev, - chan, channel_type, - duration, cookie); - local->hw_roc_for_tx = false; - mutex_unlock(&local->mtx); - - return ret; - } + mutex_lock(&local->mtx); + ret = ieee80211_start_roc_work(local, sdata, chan, channel_type, + duration, cookie, NULL); + mutex_unlock(&local->mtx); - return ieee80211_wk_remain_on_channel(sdata, chan, channel_type, - duration, cookie); + return ret; } -static int ieee80211_cancel_remain_on_channel_hw(struct ieee80211_local *local, - u64 cookie) +static int ieee80211_cancel_roc(struct ieee80211_local *local, + u64 cookie, bool mgmt_tx) { + struct ieee80211_roc_work *roc, *tmp, *found = NULL; int ret; - lockdep_assert_held(&local->mtx); + mutex_lock(&local->mtx); + list_for_each_entry_safe(roc, tmp, &local->roc_list, list) { + struct ieee80211_roc_work *dep, *tmp2; - if (local->hw_roc_cookie != cookie) - return -ENOENT; + list_for_each_entry_safe(dep, tmp2, &roc->dependents, list) { + if (!mgmt_tx && (unsigned long)dep != cookie) + continue; + else if (mgmt_tx && dep->mgmt_tx_cookie != cookie) + continue; + /* found dependent item -- just remove it */ + list_del(&dep->list); + mutex_unlock(&local->mtx); - ret = drv_cancel_remain_on_channel(local); - if (ret) - return ret; + ieee80211_roc_notify_destroy(dep); + return 0; + } - local->hw_roc_cookie = 0; - local->hw_roc_channel = NULL; + if (!mgmt_tx && (unsigned long)roc != cookie) + continue; + else if (mgmt_tx && roc->mgmt_tx_cookie != cookie) + continue; + + found = roc; + break; + } - ieee80211_recalc_idle(local); + if (!found) { + mutex_unlock(&local->mtx); + return -ENOENT; + } - return 0; -} + /* + * We found the item to cancel, so do that. Note that it + * may have dependents, which we also cancel (and send + * the expired signal for.) Not doing so would be quite + * tricky here, but we may need to fix it later. 
+ */ -static int ieee80211_cancel_remain_on_channel(struct wiphy *wiphy, - struct net_device *dev, - u64 cookie) -{ - struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); - struct ieee80211_local *local = sdata->local; + if (local->ops->remain_on_channel) { + if (found->started) { + ret = drv_cancel_remain_on_channel(local); + if (WARN_ON_ONCE(ret)) { + mutex_unlock(&local->mtx); + return ret; + } + } - if (local->ops->cancel_remain_on_channel) { - int ret; + list_del(&found->list); - mutex_lock(&local->mtx); - ret = ieee80211_cancel_remain_on_channel_hw(local, cookie); + if (found->started) + ieee80211_start_next_roc(local); mutex_unlock(&local->mtx); - return ret; + ieee80211_roc_notify_destroy(found); + } else { + /* work may be pending so use it all the time */ + found->abort = true; + ieee80211_queue_delayed_work(&local->hw, &found->work, 0); + + mutex_unlock(&local->mtx); + + /* work will clean up etc */ + flush_delayed_work(&found->work); } - return ieee80211_wk_cancel_remain_on_channel(sdata, cookie); + return 0; } -static enum work_done_result -ieee80211_offchan_tx_done(struct ieee80211_work *wk, struct sk_buff *skb) +static int ieee80211_cancel_remain_on_channel(struct wiphy *wiphy, + struct wireless_dev *wdev, + u64 cookie) { - /* - * Use the data embedded in the work struct for reporting - * here so if the driver mangled the SKB before dropping - * it (which is the only way we really should get here) - * then we don't report mangled data. - * - * If there was no wait time, then by the time we get here - * the driver will likely not have reported the status yet, - * so in that case userspace will have to deal with it. - */ - - if (wk->offchan_tx.wait && !wk->offchan_tx.status) - cfg80211_mgmt_tx_status(wk->sdata->dev, - (unsigned long) wk->offchan_tx.frame, - wk->data, wk->data_len, false, GFP_KERNEL); + struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); + struct ieee80211_local *local = sdata->local; - return WORK_DONE_DESTROY; + return ieee80211_cancel_roc(local, cookie, false); } -static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct net_device *dev, +static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev, struct ieee80211_channel *chan, bool offchan, enum nl80211_channel_type channel_type, bool channel_type_valid, unsigned int wait, const u8 *buf, size_t len, bool no_cck, bool dont_wait_for_ack, u64 *cookie) { - struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); + struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); struct ieee80211_local *local = sdata->local; struct sk_buff *skb; struct sta_info *sta; - struct ieee80211_work *wk; const struct ieee80211_mgmt *mgmt = (void *)buf; + bool need_offchan = false; u32 flags; - bool is_offchan = false; + int ret; if (dont_wait_for_ack) flags = IEEE80211_TX_CTL_NO_ACK; @@ -2255,33 +2425,28 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct net_device *dev, flags = IEEE80211_TX_INTFL_NL80211_FRAME_TX | IEEE80211_TX_CTL_REQ_TX_STATUS; - /* Check that we are on the requested channel for transmission */ - if (chan != local->tmp_channel && - chan != local->oper_channel) - is_offchan = true; - if (channel_type_valid && - (channel_type != local->tmp_channel_type && - channel_type != local->_oper_channel_type)) - is_offchan = true; - - if (chan == local->hw_roc_channel) { - /* TODO: check channel type? 
*/ - is_offchan = false; - flags |= IEEE80211_TX_CTL_TX_OFFCHAN; - } - if (no_cck) flags |= IEEE80211_TX_CTL_NO_CCK_RATE; - if (is_offchan && !offchan) - return -EBUSY; - switch (sdata->vif.type) { case NL80211_IFTYPE_ADHOC: + if (!sdata->vif.bss_conf.ibss_joined) + need_offchan = true; + /* fall through */ +#ifdef CONFIG_MAC80211_MESH + case NL80211_IFTYPE_MESH_POINT: + if (ieee80211_vif_is_mesh(&sdata->vif) && + !sdata->u.mesh.mesh_id_len) + need_offchan = true; + /* fall through */ +#endif case NL80211_IFTYPE_AP: case NL80211_IFTYPE_AP_VLAN: case NL80211_IFTYPE_P2P_GO: - case NL80211_IFTYPE_MESH_POINT: + if (sdata->vif.type != NL80211_IFTYPE_ADHOC && + !ieee80211_vif_is_mesh(&sdata->vif) && + !rcu_access_pointer(sdata->bss->beacon)) + need_offchan = true; if (!ieee80211_is_action(mgmt->frame_control) || mgmt->u.action.category == WLAN_CATEGORY_PUBLIC) break; @@ -2293,167 +2458,101 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct net_device *dev, break; case NL80211_IFTYPE_STATION: case NL80211_IFTYPE_P2P_CLIENT: + if (!sdata->u.mgd.associated) + need_offchan = true; break; default: return -EOPNOTSUPP; } + mutex_lock(&local->mtx); + + /* Check if the operating channel is the requested channel */ + if (!need_offchan) { + need_offchan = chan != local->oper_channel; + if (channel_type_valid && + channel_type != local->_oper_channel_type) + need_offchan = true; + } + + if (need_offchan && !offchan) { + ret = -EBUSY; + goto out_unlock; + } + skb = dev_alloc_skb(local->hw.extra_tx_headroom + len); - if (!skb) - return -ENOMEM; + if (!skb) { + ret = -ENOMEM; + goto out_unlock; + } skb_reserve(skb, local->hw.extra_tx_headroom); memcpy(skb_put(skb, len), buf, len); IEEE80211_SKB_CB(skb)->flags = flags; - if (flags & IEEE80211_TX_CTL_TX_OFFCHAN) - IEEE80211_SKB_CB(skb)->hw_queue = - local->hw.offchannel_tx_hw_queue; - skb->dev = sdata->dev; - *cookie = (unsigned long) skb; - - if (is_offchan && local->ops->remain_on_channel) { - unsigned int duration; - int ret; - - mutex_lock(&local->mtx); - /* - * If the duration is zero, then the driver - * wouldn't actually do anything. Set it to - * 100 for now. - * - * TODO: cancel the off-channel operation - * when we get the SKB's TX status and - * the wait time was zero before. - */ - duration = 100; - if (wait) - duration = wait; - ret = ieee80211_remain_on_channel_hw(local, dev, chan, - channel_type, - duration, cookie); - if (ret) { - kfree_skb(skb); - mutex_unlock(&local->mtx); - return ret; - } - - local->hw_roc_for_tx = true; - local->hw_roc_duration = wait; - - /* - * queue up frame for transmission after - * ieee80211_ready_on_channel call - */ + if (!need_offchan) { + *cookie = (unsigned long) skb; + ieee80211_tx_skb(sdata, skb); + ret = 0; + goto out_unlock; + } - /* modify cookie to prevent API mismatches */ - *cookie ^= 2; - IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_TX_OFFCHAN; + IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_TX_OFFCHAN; + if (local->hw.flags & IEEE80211_HW_QUEUE_CONTROL) IEEE80211_SKB_CB(skb)->hw_queue = local->hw.offchannel_tx_hw_queue; - local->hw_roc_skb = skb; - local->hw_roc_skb_for_status = skb; - mutex_unlock(&local->mtx); - - return 0; - } - - /* - * Can transmit right away if the channel was the - * right one and there's no wait involved... If a - * wait is involved, we might otherwise not be on - * the right channel for long enough! 
- */ - if (!is_offchan && !wait && !sdata->vif.bss_conf.idle) { - ieee80211_tx_skb(sdata, skb); - return 0; - } - wk = kzalloc(sizeof(*wk) + len, GFP_KERNEL); - if (!wk) { + /* This will handle all kinds of coalescing and immediate TX */ + ret = ieee80211_start_roc_work(local, sdata, chan, channel_type, + wait, cookie, skb); + if (ret) kfree_skb(skb); - return -ENOMEM; - } - - wk->type = IEEE80211_WORK_OFFCHANNEL_TX; - wk->chan = chan; - wk->chan_type = channel_type; - wk->sdata = sdata; - wk->done = ieee80211_offchan_tx_done; - wk->offchan_tx.frame = skb; - wk->offchan_tx.wait = wait; - wk->data_len = len; - memcpy(wk->data, buf, len); - - ieee80211_add_work(wk); - return 0; + out_unlock: + mutex_unlock(&local->mtx); + return ret; } static int ieee80211_mgmt_tx_cancel_wait(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u64 cookie) { - struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); - struct ieee80211_local *local = sdata->local; - struct ieee80211_work *wk; - int ret = -ENOENT; - - mutex_lock(&local->mtx); - - if (local->ops->cancel_remain_on_channel) { - cookie ^= 2; - ret = ieee80211_cancel_remain_on_channel_hw(local, cookie); - - if (ret == 0) { - kfree_skb(local->hw_roc_skb); - local->hw_roc_skb = NULL; - local->hw_roc_skb_for_status = NULL; - } - - mutex_unlock(&local->mtx); - - return ret; - } - - list_for_each_entry(wk, &local->work_list, list) { - if (wk->sdata != sdata) - continue; - - if (wk->type != IEEE80211_WORK_OFFCHANNEL_TX) - continue; - - if (cookie != (unsigned long) wk->offchan_tx.frame) - continue; - - wk->timeout = jiffies; - - ieee80211_queue_work(&local->hw, &local->work_work); - ret = 0; - break; - } - mutex_unlock(&local->mtx); + struct ieee80211_local *local = wiphy_priv(wiphy); - return ret; + return ieee80211_cancel_roc(local, cookie, true); } static void ieee80211_mgmt_frame_register(struct wiphy *wiphy, - struct net_device *dev, + struct wireless_dev *wdev, u16 frame_type, bool reg) { struct ieee80211_local *local = wiphy_priv(wiphy); + struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); - if (frame_type != (IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_PROBE_REQ)) - return; + switch (frame_type) { + case IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_AUTH: + if (sdata->vif.type == NL80211_IFTYPE_ADHOC) { + struct ieee80211_if_ibss *ifibss = &sdata->u.ibss; - if (reg) - local->probe_req_reg++; - else - local->probe_req_reg--; + if (reg) + ifibss->auth_frame_registrations++; + else + ifibss->auth_frame_registrations--; + } + break; + case IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_PROBE_REQ: + if (reg) + local->probe_req_reg++; + else + local->probe_req_reg--; - ieee80211_queue_work(&local->hw, &local->reconfig_filter); + ieee80211_queue_work(&local->hw, &local->reconfig_filter); + break; + default: + break; + } } static int ieee80211_set_antenna(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant) @@ -2573,8 +2672,8 @@ ieee80211_prep_tdls_encap_data(struct wiphy *wiphy, struct net_device *dev, tf->u.setup_req.capability = cpu_to_le16(ieee80211_get_tdls_sta_capab(sdata)); - ieee80211_add_srates_ie(&sdata->vif, skb, false); - ieee80211_add_ext_srates_ie(&sdata->vif, skb, false); + ieee80211_add_srates_ie(sdata, skb, false); + ieee80211_add_ext_srates_ie(sdata, skb, false); ieee80211_tdls_add_ext_capab(skb); break; case WLAN_TDLS_SETUP_RESPONSE: @@ -2587,8 +2686,8 @@ ieee80211_prep_tdls_encap_data(struct wiphy *wiphy, struct net_device *dev, tf->u.setup_resp.capability = cpu_to_le16(ieee80211_get_tdls_sta_capab(sdata)); - 
ieee80211_add_srates_ie(&sdata->vif, skb, false); - ieee80211_add_ext_srates_ie(&sdata->vif, skb, false); + ieee80211_add_srates_ie(sdata, skb, false); + ieee80211_add_ext_srates_ie(sdata, skb, false); ieee80211_tdls_add_ext_capab(skb); break; case WLAN_TDLS_SETUP_CONFIRM: @@ -2648,8 +2747,8 @@ ieee80211_prep_tdls_direct(struct wiphy *wiphy, struct net_device *dev, mgmt->u.action.u.tdls_discover_resp.capability = cpu_to_le16(ieee80211_get_tdls_sta_capab(sdata)); - ieee80211_add_srates_ie(&sdata->vif, skb, false); - ieee80211_add_ext_srates_ie(&sdata->vif, skb, false); + ieee80211_add_srates_ie(sdata, skb, false); + ieee80211_add_ext_srates_ie(sdata, skb, false); ieee80211_tdls_add_ext_capab(skb); break; default: @@ -2679,9 +2778,8 @@ static int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev, !sdata->u.mgd.associated) return -EINVAL; -#ifdef CONFIG_MAC80211_VERBOSE_TDLS_DEBUG - printk(KERN_DEBUG "TDLS mgmt action %d peer %pM\n", action_code, peer); -#endif + tdls_dbg(sdata, "TDLS mgmt action %d peer %pM\n", + action_code, peer); skb = dev_alloc_skb(local->hw.extra_tx_headroom + max(sizeof(struct ieee80211_mgmt), @@ -2790,9 +2888,7 @@ static int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, if (sdata->vif.type != NL80211_IFTYPE_STATION) return -EINVAL; -#ifdef CONFIG_MAC80211_VERBOSE_TDLS_DEBUG - printk(KERN_DEBUG "TDLS oper %d peer %pM\n", oper, peer); -#endif + tdls_dbg(sdata, "TDLS oper %d peer %pM\n", oper, peer); switch (oper) { case NL80211_TDLS_ENABLE_LINK: @@ -2889,8 +2985,8 @@ static int ieee80211_probe_client(struct wiphy *wiphy, struct net_device *dev, } static struct ieee80211_channel * -ieee80211_wiphy_get_channel(struct wiphy *wiphy, - enum nl80211_channel_type *type) +ieee80211_cfg_get_channel(struct wiphy *wiphy, struct wireless_dev *wdev, + enum nl80211_channel_type *type) { struct ieee80211_local *local = wiphy_priv(wiphy); @@ -2936,7 +3032,7 @@ struct cfg80211_ops mac80211_config_ops = { #endif .change_bss = ieee80211_change_bss, .set_txq_params = ieee80211_set_txq_params, - .set_channel = ieee80211_set_channel, + .set_monitor_channel = ieee80211_set_monitor_channel, .suspend = ieee80211_suspend, .resume = ieee80211_resume, .scan = ieee80211_scan, @@ -2971,7 +3067,6 @@ struct cfg80211_ops mac80211_config_ops = { .tdls_oper = ieee80211_tdls_oper, .tdls_mgmt = ieee80211_tdls_mgmt, .probe_client = ieee80211_probe_client, - .get_channel = ieee80211_wiphy_get_channel, .set_noack_map = ieee80211_set_noack_map, #ifdef CONFIG_PM .set_wakeup = ieee80211_set_wakeup, @@ -2979,4 +3074,5 @@ struct cfg80211_ops mac80211_config_ops = { .get_et_sset_count = ieee80211_get_et_sset_count, .get_et_stats = ieee80211_get_et_stats, .get_et_strings = ieee80211_get_et_strings, + .get_channel = ieee80211_cfg_get_channel, }; diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c index c76cf7230c7d..f0f87e5a1d35 100644 --- a/net/mac80211/chan.c +++ b/net/mac80211/chan.c @@ -41,6 +41,10 @@ __ieee80211_get_channel_mode(struct ieee80211_local *local, if (!sdata->u.ap.beacon) continue; break; + case NL80211_IFTYPE_MESH_POINT: + if (!sdata->wdev.mesh_id_len) + continue; + break; default: break; } diff --git a/net/mac80211/debug.h b/net/mac80211/debug.h new file mode 100644 index 000000000000..8f383a576016 --- /dev/null +++ b/net/mac80211/debug.h @@ -0,0 +1,170 @@ +#ifndef __MAC80211_DEBUG_H +#define __MAC80211_DEBUG_H +#include <net/cfg80211.h> + +#ifdef CONFIG_MAC80211_IBSS_DEBUG +#define MAC80211_IBSS_DEBUG 1 +#else +#define MAC80211_IBSS_DEBUG 0 +#endif + +#ifdef 
CONFIG_MAC80211_PS_DEBUG +#define MAC80211_PS_DEBUG 1 +#else +#define MAC80211_PS_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_HT_DEBUG +#define MAC80211_HT_DEBUG 1 +#else +#define MAC80211_HT_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MPL_DEBUG +#define MAC80211_MPL_DEBUG 1 +#else +#define MAC80211_MPL_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MPATH_DEBUG +#define MAC80211_MPATH_DEBUG 1 +#else +#define MAC80211_MPATH_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MHWMP_DEBUG +#define MAC80211_MHWMP_DEBUG 1 +#else +#define MAC80211_MHWMP_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MESH_SYNC_DEBUG +#define MAC80211_MESH_SYNC_DEBUG 1 +#else +#define MAC80211_MESH_SYNC_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_TDLS_DEBUG +#define MAC80211_TDLS_DEBUG 1 +#else +#define MAC80211_TDLS_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_STA_DEBUG +#define MAC80211_STA_DEBUG 1 +#else +#define MAC80211_STA_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MLME_DEBUG +#define MAC80211_MLME_DEBUG 1 +#else +#define MAC80211_MLME_DEBUG 0 +#endif + +#ifdef CONFIG_MAC80211_MESSAGE_TRACING +void __sdata_info(const char *fmt, ...) __printf(1, 2); +void __sdata_dbg(bool print, const char *fmt, ...) __printf(2, 3); +void __sdata_err(const char *fmt, ...) __printf(1, 2); +void __wiphy_dbg(struct wiphy *wiphy, bool print, const char *fmt, ...) + __printf(3, 4); + +#define _sdata_info(sdata, fmt, ...) \ + __sdata_info("%s: " fmt, (sdata)->name, ##__VA_ARGS__) +#define _sdata_dbg(print, sdata, fmt, ...) \ + __sdata_dbg(print, "%s: " fmt, (sdata)->name, ##__VA_ARGS__) +#define _sdata_err(sdata, fmt, ...) \ + __sdata_err("%s: " fmt, (sdata)->name, ##__VA_ARGS__) +#define _wiphy_dbg(print, wiphy, fmt, ...) \ + __wiphy_dbg(wiphy, print, fmt, ##__VA_ARGS__) +#else +#define _sdata_info(sdata, fmt, ...) \ +do { \ + pr_info("%s: " fmt, \ + (sdata)->name, ##__VA_ARGS__); \ +} while (0) + +#define _sdata_dbg(print, sdata, fmt, ...) \ +do { \ + if (print) \ + pr_debug("%s: " fmt, \ + (sdata)->name, ##__VA_ARGS__); \ +} while (0) + +#define _sdata_err(sdata, fmt, ...) \ +do { \ + pr_err("%s: " fmt, \ + (sdata)->name, ##__VA_ARGS__); \ +} while (0) + +#define _wiphy_dbg(print, wiphy, fmt, ...) \ +do { \ + if (print) \ + wiphy_dbg((wiphy), fmt, ##__VA_ARGS__); \ +} while (0) +#endif + +#define sdata_info(sdata, fmt, ...) \ + _sdata_info(sdata, fmt, ##__VA_ARGS__) +#define sdata_err(sdata, fmt, ...) \ + _sdata_err(sdata, fmt, ##__VA_ARGS__) +#define sdata_dbg(sdata, fmt, ...) \ + _sdata_dbg(1, sdata, fmt, ##__VA_ARGS__) + +#define ht_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_HT_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define ht_dbg_ratelimited(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_HT_DEBUG && net_ratelimit(), \ + sdata, fmt, ##__VA_ARGS__) + +#define ibss_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_IBSS_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define ps_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_PS_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define ps_dbg_hw(hw, fmt, ...) \ + _wiphy_dbg(MAC80211_PS_DEBUG, \ + (hw)->wiphy, fmt, ##__VA_ARGS__) + +#define ps_dbg_ratelimited(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_PS_DEBUG && net_ratelimit(), \ + sdata, fmt, ##__VA_ARGS__) + +#define mpl_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_MPL_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define mpath_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_MPATH_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define mhwmp_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_MHWMP_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define msync_dbg(sdata, fmt, ...) 
\ + _sdata_dbg(MAC80211_MESH_SYNC_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define tdls_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_TDLS_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define sta_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_STA_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define mlme_dbg(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_MLME_DEBUG, \ + sdata, fmt, ##__VA_ARGS__) + +#define mlme_dbg_ratelimited(sdata, fmt, ...) \ + _sdata_dbg(MAC80211_MLME_DEBUG && net_ratelimit(), \ + sdata, fmt, ##__VA_ARGS__) + +#endif /* __MAC80211_DEBUG_H */ diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c index 778e5916d7c3..b8dfb440c8ef 100644 --- a/net/mac80211/debugfs.c +++ b/net/mac80211/debugfs.c @@ -325,8 +325,6 @@ void debugfs_hw_add(struct ieee80211_local *local) local->rx_handlers_drop_defrag); DEBUGFS_STATS_ADD(rx_handlers_drop_short, local->rx_handlers_drop_short); - DEBUGFS_STATS_ADD(rx_handlers_drop_passive_scan, - local->rx_handlers_drop_passive_scan); DEBUGFS_STATS_ADD(tx_expand_skb_head, local->tx_expand_skb_head); DEBUGFS_STATS_ADD(tx_expand_skb_head_cloned, diff --git a/net/mac80211/debugfs_key.c b/net/mac80211/debugfs_key.c index 7932767bb482..090d08ff22c4 100644 --- a/net/mac80211/debugfs_key.c +++ b/net/mac80211/debugfs_key.c @@ -283,6 +283,11 @@ void ieee80211_debugfs_key_update_default(struct ieee80211_sub_if_data *sdata) lockdep_assert_held(&sdata->local->key_mtx); + if (sdata->debugfs.default_unicast_key) { + debugfs_remove(sdata->debugfs.default_unicast_key); + sdata->debugfs.default_unicast_key = NULL; + } + if (sdata->default_unicast_key) { key = key_mtx_dereference(sdata->local, sdata->default_unicast_key); @@ -290,9 +295,11 @@ void ieee80211_debugfs_key_update_default(struct ieee80211_sub_if_data *sdata) sdata->debugfs.default_unicast_key = debugfs_create_symlink("default_unicast_key", sdata->debugfs.dir, buf); - } else { - debugfs_remove(sdata->debugfs.default_unicast_key); - sdata->debugfs.default_unicast_key = NULL; + } + + if (sdata->debugfs.default_multicast_key) { + debugfs_remove(sdata->debugfs.default_multicast_key); + sdata->debugfs.default_multicast_key = NULL; } if (sdata->default_multicast_key) { @@ -302,9 +309,6 @@ void ieee80211_debugfs_key_update_default(struct ieee80211_sub_if_data *sdata) sdata->debugfs.default_multicast_key = debugfs_create_symlink("default_multicast_key", sdata->debugfs.dir, buf); - } else { - debugfs_remove(sdata->debugfs.default_multicast_key); - sdata->debugfs.default_multicast_key = NULL; } } diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c index 7ed433c66d68..6d5aec9418ee 100644 --- a/net/mac80211/debugfs_netdev.c +++ b/net/mac80211/debugfs_netdev.c @@ -468,48 +468,54 @@ IEEE80211_IF_FILE(fwded_unicast, u.mesh.mshstats.fwded_unicast, DEC); IEEE80211_IF_FILE(fwded_frames, u.mesh.mshstats.fwded_frames, DEC); IEEE80211_IF_FILE(dropped_frames_ttl, u.mesh.mshstats.dropped_frames_ttl, DEC); IEEE80211_IF_FILE(dropped_frames_congestion, - u.mesh.mshstats.dropped_frames_congestion, DEC); + u.mesh.mshstats.dropped_frames_congestion, DEC); IEEE80211_IF_FILE(dropped_frames_no_route, - u.mesh.mshstats.dropped_frames_no_route, DEC); + u.mesh.mshstats.dropped_frames_no_route, DEC); IEEE80211_IF_FILE(estab_plinks, u.mesh.mshstats.estab_plinks, ATOMIC); /* Mesh parameters */ IEEE80211_IF_FILE(dot11MeshMaxRetries, - u.mesh.mshcfg.dot11MeshMaxRetries, DEC); + u.mesh.mshcfg.dot11MeshMaxRetries, DEC); IEEE80211_IF_FILE(dot11MeshRetryTimeout, - u.mesh.mshcfg.dot11MeshRetryTimeout, DEC); + 
u.mesh.mshcfg.dot11MeshRetryTimeout, DEC); IEEE80211_IF_FILE(dot11MeshConfirmTimeout, - u.mesh.mshcfg.dot11MeshConfirmTimeout, DEC); + u.mesh.mshcfg.dot11MeshConfirmTimeout, DEC); IEEE80211_IF_FILE(dot11MeshHoldingTimeout, - u.mesh.mshcfg.dot11MeshHoldingTimeout, DEC); + u.mesh.mshcfg.dot11MeshHoldingTimeout, DEC); IEEE80211_IF_FILE(dot11MeshTTL, u.mesh.mshcfg.dot11MeshTTL, DEC); IEEE80211_IF_FILE(element_ttl, u.mesh.mshcfg.element_ttl, DEC); IEEE80211_IF_FILE(auto_open_plinks, u.mesh.mshcfg.auto_open_plinks, DEC); IEEE80211_IF_FILE(dot11MeshMaxPeerLinks, - u.mesh.mshcfg.dot11MeshMaxPeerLinks, DEC); + u.mesh.mshcfg.dot11MeshMaxPeerLinks, DEC); IEEE80211_IF_FILE(dot11MeshHWMPactivePathTimeout, - u.mesh.mshcfg.dot11MeshHWMPactivePathTimeout, DEC); + u.mesh.mshcfg.dot11MeshHWMPactivePathTimeout, DEC); IEEE80211_IF_FILE(dot11MeshHWMPpreqMinInterval, - u.mesh.mshcfg.dot11MeshHWMPpreqMinInterval, DEC); + u.mesh.mshcfg.dot11MeshHWMPpreqMinInterval, DEC); IEEE80211_IF_FILE(dot11MeshHWMPperrMinInterval, - u.mesh.mshcfg.dot11MeshHWMPperrMinInterval, DEC); + u.mesh.mshcfg.dot11MeshHWMPperrMinInterval, DEC); IEEE80211_IF_FILE(dot11MeshHWMPnetDiameterTraversalTime, - u.mesh.mshcfg.dot11MeshHWMPnetDiameterTraversalTime, DEC); + u.mesh.mshcfg.dot11MeshHWMPnetDiameterTraversalTime, DEC); IEEE80211_IF_FILE(dot11MeshHWMPmaxPREQretries, - u.mesh.mshcfg.dot11MeshHWMPmaxPREQretries, DEC); + u.mesh.mshcfg.dot11MeshHWMPmaxPREQretries, DEC); IEEE80211_IF_FILE(path_refresh_time, - u.mesh.mshcfg.path_refresh_time, DEC); + u.mesh.mshcfg.path_refresh_time, DEC); IEEE80211_IF_FILE(min_discovery_timeout, - u.mesh.mshcfg.min_discovery_timeout, DEC); + u.mesh.mshcfg.min_discovery_timeout, DEC); IEEE80211_IF_FILE(dot11MeshHWMPRootMode, - u.mesh.mshcfg.dot11MeshHWMPRootMode, DEC); + u.mesh.mshcfg.dot11MeshHWMPRootMode, DEC); IEEE80211_IF_FILE(dot11MeshGateAnnouncementProtocol, - u.mesh.mshcfg.dot11MeshGateAnnouncementProtocol, DEC); + u.mesh.mshcfg.dot11MeshGateAnnouncementProtocol, DEC); IEEE80211_IF_FILE(dot11MeshHWMPRannInterval, - u.mesh.mshcfg.dot11MeshHWMPRannInterval, DEC); + u.mesh.mshcfg.dot11MeshHWMPRannInterval, DEC); IEEE80211_IF_FILE(dot11MeshForwarding, u.mesh.mshcfg.dot11MeshForwarding, DEC); IEEE80211_IF_FILE(rssi_threshold, u.mesh.mshcfg.rssi_threshold, DEC); IEEE80211_IF_FILE(ht_opmode, u.mesh.mshcfg.ht_opmode, DEC); +IEEE80211_IF_FILE(dot11MeshHWMPactivePathToRootTimeout, + u.mesh.mshcfg.dot11MeshHWMPactivePathToRootTimeout, DEC); +IEEE80211_IF_FILE(dot11MeshHWMProotInterval, + u.mesh.mshcfg.dot11MeshHWMProotInterval, DEC); +IEEE80211_IF_FILE(dot11MeshHWMPconfirmationInterval, + u.mesh.mshcfg.dot11MeshHWMPconfirmationInterval, DEC); #endif #define DEBUGFS_ADD_MODE(name, mode) \ @@ -607,9 +613,13 @@ static void add_mesh_config(struct ieee80211_sub_if_data *sdata) MESHPARAMS_ADD(min_discovery_timeout); MESHPARAMS_ADD(dot11MeshHWMPRootMode); MESHPARAMS_ADD(dot11MeshHWMPRannInterval); + MESHPARAMS_ADD(dot11MeshForwarding); MESHPARAMS_ADD(dot11MeshGateAnnouncementProtocol); MESHPARAMS_ADD(rssi_threshold); MESHPARAMS_ADD(ht_opmode); + MESHPARAMS_ADD(dot11MeshHWMPactivePathToRootTimeout); + MESHPARAMS_ADD(dot11MeshHWMProotInterval); + MESHPARAMS_ADD(dot11MeshHWMPconfirmationInterval); #undef MESHPARAMS_ADD } #endif @@ -685,6 +695,7 @@ void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata) sprintf(buf, "netdev:%s", sdata->name); if (!debugfs_rename(dir->d_parent, dir, dir->d_parent, buf)) - printk(KERN_ERR "mac80211: debugfs: failed to rename debugfs " - "dir to %s\n", buf); + sdata_err(sdata, 
+ "debugfs: failed to rename debugfs dir to %s\n", + buf); } diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h index 6d33a0c743ab..df9203199102 100644 --- a/net/mac80211/driver-ops.h +++ b/net/mac80211/driver-ops.h @@ -3,7 +3,7 @@ #include <net/mac80211.h> #include "ieee80211_i.h" -#include "driver-trace.h" +#include "trace.h" static inline void check_sdata_in_driver(struct ieee80211_sub_if_data *sdata) { @@ -27,14 +27,6 @@ static inline void drv_tx(struct ieee80211_local *local, struct sk_buff *skb) local->ops->tx(&local->hw, skb); } -static inline void drv_tx_frags(struct ieee80211_local *local, - struct ieee80211_vif *vif, - struct ieee80211_sta *sta, - struct sk_buff_head *skbs) -{ - local->ops->tx_frags(&local->hw, vif, sta, skbs); -} - static inline void drv_get_et_strings(struct ieee80211_sub_if_data *sdata, u32 sset, u8 *data) { @@ -845,4 +837,33 @@ drv_allow_buffered_frames(struct ieee80211_local *local, more_data); trace_drv_return_void(local); } + +static inline int drv_get_rssi(struct ieee80211_local *local, + struct ieee80211_sub_if_data *sdata, + struct ieee80211_sta *sta, + s8 *rssi_dbm) +{ + int ret; + + might_sleep(); + + ret = local->ops->get_rssi(&local->hw, &sdata->vif, sta, rssi_dbm); + trace_drv_get_rssi(local, sta, *rssi_dbm, ret); + + return ret; +} + +static inline void drv_mgd_prepare_tx(struct ieee80211_local *local, + struct ieee80211_sub_if_data *sdata) +{ + might_sleep(); + + check_sdata_in_driver(sdata); + WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION); + + trace_drv_mgd_prepare_tx(local, sdata); + if (local->ops->mgd_prepare_tx) + local->ops->mgd_prepare_tx(&local->hw, &sdata->vif); + trace_drv_return_void(local); +} #endif /* __MAC80211_DRIVER_OPS */ diff --git a/net/mac80211/driver-trace.c b/net/mac80211/driver-trace.c deleted file mode 100644 index 8ed8711b1a6d..000000000000 --- a/net/mac80211/driver-trace.c +++ /dev/null @@ -1,9 +0,0 @@ -/* bug in tracepoint.h, it should include this */ -#include <linux/module.h> - -/* sparse isn't too happy with all macros... */ -#ifndef __CHECKER__ -#include "driver-ops.h" -#define CREATE_TRACE_POINTS -#include "driver-trace.h" -#endif diff --git a/net/mac80211/ht.c b/net/mac80211/ht.c index 6f8615c54b22..4b4538d63925 100644 --- a/net/mac80211/ht.c +++ b/net/mac80211/ht.c @@ -305,12 +305,10 @@ void ieee80211_process_delba(struct ieee80211_sub_if_data *sdata, tid = (params & IEEE80211_DELBA_PARAM_TID_MASK) >> 12; initiator = (params & IEEE80211_DELBA_PARAM_INITIATOR_MASK) >> 11; -#ifdef CONFIG_MAC80211_HT_DEBUG - net_dbg_ratelimited("delba from %pM (%s) tid %d reason code %d\n", - mgmt->sa, initiator ? "initiator" : "recipient", - tid, - le16_to_cpu(mgmt->u.action.u.delba.reason_code)); -#endif /* CONFIG_MAC80211_HT_DEBUG */ + ht_dbg_ratelimited(sdata, "delba from %pM (%s) tid %d reason code %d\n", + mgmt->sa, initiator ? 
"initiator" : "recipient", + tid, + le16_to_cpu(mgmt->u.action.u.delba.reason_code)); if (initiator == WLAN_BACK_INITIATOR) __ieee80211_stop_rx_ba_session(sta, tid, WLAN_BACK_INITIATOR, 0, diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c index 33d9d0c3e3d0..5746d62faba1 100644 --- a/net/mac80211/ibss.c +++ b/net/mac80211/ibss.c @@ -82,8 +82,7 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata, local->oper_channel = chan; channel_type = ifibss->channel_type; - if (channel_type > NL80211_CHAN_HT20 && - !cfg80211_can_beacon_sec_chan(local->hw.wiphy, chan, channel_type)) + if (!cfg80211_can_beacon_sec_chan(local->hw.wiphy, chan, channel_type)) channel_type = NL80211_CHAN_HT20; if (!ieee80211_set_channel_type(local, sdata, channel_type)) { /* can only fail due to HT40+/- mismatch */ @@ -262,11 +261,7 @@ static struct sta_info *ieee80211_ibss_finish_sta(struct sta_info *sta, memcpy(addr, sta->sta.addr, ETH_ALEN); -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(sdata->local->hw.wiphy, - "Adding new IBSS station %pM (dev=%s)\n", - addr, sdata->name); -#endif + ibss_dbg(sdata, "Adding new IBSS station %pM\n", addr); sta_info_pre_move_state(sta, IEEE80211_STA_AUTH); sta_info_pre_move_state(sta, IEEE80211_STA_ASSOC); @@ -280,12 +275,10 @@ static struct sta_info *ieee80211_ibss_finish_sta(struct sta_info *sta, /* If it fails, maybe we raced another insertion? */ if (sta_info_insert_rcu(sta)) return sta_info_get(sdata, addr); - if (auth) { -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "TX Auth SA=%pM DA=%pM BSSID=%pM" - "(auth_transaction=1)\n", sdata->vif.addr, - sdata->u.ibss.bssid, addr); -#endif + if (auth && !sdata->u.ibss.auth_frame_registrations) { + ibss_dbg(sdata, + "TX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=1)\n", + sdata->vif.addr, sdata->u.ibss.bssid, addr); ieee80211_send_auth(sdata, 1, WLAN_AUTH_OPEN, NULL, 0, addr, sdata->u.ibss.bssid, NULL, 0, 0); } @@ -308,7 +301,7 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, * allow new one to be added. */ if (local->num_sta >= IEEE80211_IBSS_MAX_STA_ENTRIES) { - net_dbg_ratelimited("%s: No room for a new IBSS STA entry %pM\n", + net_info_ratelimited("%s: No room for a new IBSS STA entry %pM\n", sdata->name, addr); rcu_read_lock(); return NULL; @@ -355,11 +348,9 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata, if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1) return; -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: RX Auth SA=%pM DA=%pM BSSID=%pM." 
- "(auth_transaction=%d)\n", - sdata->name, mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction); -#endif + ibss_dbg(sdata, + "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n", + mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction); sta_info_destroy_addr(sdata, mgmt->sa); ieee80211_ibss_add_sta(sdata, mgmt->bssid, mgmt->sa, 0, false); rcu_read_unlock(); @@ -422,15 +413,10 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata, ieee80211_mandatory_rates(local, band); if (sta->sta.supp_rates[band] != prev_rates) { -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG - "%s: updated supp_rates set " - "for %pM based on beacon" - "/probe_resp (0x%x -> 0x%x)\n", - sdata->name, sta->sta.addr, - prev_rates, - sta->sta.supp_rates[band]); -#endif + ibss_dbg(sdata, + "updated supp_rates set for %pM based on beacon/probe_resp (0x%x -> 0x%x)\n", + sta->sta.addr, prev_rates, + sta->sta.supp_rates[band]); rates_updated = true; } } else { @@ -545,22 +531,18 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata, rx_timestamp = drv_get_tsf(local, sdata); } -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "RX beacon SA=%pM BSSID=" - "%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n", - mgmt->sa, mgmt->bssid, - (unsigned long long)rx_timestamp, - (unsigned long long)beacon_timestamp, - (unsigned long long)(rx_timestamp - beacon_timestamp), - jiffies); -#endif + ibss_dbg(sdata, + "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n", + mgmt->sa, mgmt->bssid, + (unsigned long long)rx_timestamp, + (unsigned long long)beacon_timestamp, + (unsigned long long)(rx_timestamp - beacon_timestamp), + jiffies); if (beacon_timestamp > rx_timestamp) { -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: beacon TSF higher than " - "local TSF - IBSS merge with BSSID %pM\n", - sdata->name, mgmt->bssid); -#endif + ibss_dbg(sdata, + "beacon TSF higher than local TSF - IBSS merge with BSSID %pM\n", + mgmt->bssid); ieee80211_sta_join_ibss(sdata, bss); supp_rates = ieee80211_sta_get_rates(local, elems, band, NULL); ieee80211_ibss_add_sta(sdata, mgmt->bssid, mgmt->sa, @@ -586,7 +568,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata, * allow new one to be added. 
*/ if (local->num_sta >= IEEE80211_IBSS_MAX_STA_ENTRIES) { - net_dbg_ratelimited("%s: No room for a new IBSS STA entry %pM\n", + net_info_ratelimited("%s: No room for a new IBSS STA entry %pM\n", sdata->name, addr); return; } @@ -662,8 +644,8 @@ static void ieee80211_sta_merge_ibss(struct ieee80211_sub_if_data *sdata) if (ifibss->fixed_channel) return; - printk(KERN_DEBUG "%s: No active IBSS STAs - trying to scan for other " - "IBSS networks with same SSID (merge)\n", sdata->name); + sdata_info(sdata, + "No active IBSS STAs - trying to scan for other IBSS networks with same SSID (merge)\n"); ieee80211_request_internal_scan(sdata, ifibss->ssid, ifibss->ssid_len, NULL); @@ -691,8 +673,7 @@ static void ieee80211_sta_create_ibss(struct ieee80211_sub_if_data *sdata) bssid[0] |= 0x02; } - printk(KERN_DEBUG "%s: Creating new IBSS network, BSSID %pM\n", - sdata->name, bssid); + sdata_info(sdata, "Creating new IBSS network, BSSID %pM\n", bssid); capability = WLAN_CAPABILITY_IBSS; @@ -723,10 +704,7 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata) lockdep_assert_held(&ifibss->mtx); active_ibss = ieee80211_sta_active_ibss(sdata); -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: sta_find_ibss (active_ibss=%d)\n", - sdata->name, active_ibss); -#endif /* CONFIG_MAC80211_IBSS_DEBUG */ + ibss_dbg(sdata, "sta_find_ibss (active_ibss=%d)\n", active_ibss); if (active_ibss) return; @@ -749,29 +727,24 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata) struct ieee80211_bss *bss; bss = (void *)cbss->priv; -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG " sta_find_ibss: selected %pM current " - "%pM\n", cbss->bssid, ifibss->bssid); -#endif /* CONFIG_MAC80211_IBSS_DEBUG */ - - printk(KERN_DEBUG "%s: Selected IBSS BSSID %pM" - " based on configured SSID\n", - sdata->name, cbss->bssid); + ibss_dbg(sdata, + "sta_find_ibss: selected %pM current %pM\n", + cbss->bssid, ifibss->bssid); + sdata_info(sdata, + "Selected IBSS BSSID %pM based on configured SSID\n", + cbss->bssid); ieee80211_sta_join_ibss(sdata, bss); ieee80211_rx_bss_put(local, bss); return; } -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG " did not try to join ibss\n"); -#endif /* CONFIG_MAC80211_IBSS_DEBUG */ + ibss_dbg(sdata, "sta_find_ibss: did not try to join ibss\n"); /* Selected IBSS not found in current scan results - try to scan */ if (time_after(jiffies, ifibss->last_scan_completed + IEEE80211_SCAN_INTERVAL)) { - printk(KERN_DEBUG "%s: Trigger new scan to find an IBSS to " - "join\n", sdata->name); + sdata_info(sdata, "Trigger new scan to find an IBSS to join\n"); ieee80211_request_internal_scan(sdata, ifibss->ssid, ifibss->ssid_len, @@ -785,9 +758,8 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata) ieee80211_sta_create_ibss(sdata); return; } - printk(KERN_DEBUG "%s: IBSS not allowed on" - " %d MHz\n", sdata->name, - local->hw.conf.channel->center_freq); + sdata_info(sdata, "IBSS not allowed on %d MHz\n", + local->hw.conf.channel->center_freq); /* No IBSS found - decrease scan interval and continue * scanning. 
*/ @@ -822,12 +794,9 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata, tx_last_beacon = drv_tx_last_beacon(local); -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: RX ProbeReq SA=%pM DA=%pM BSSID=%pM" - " (tx_last_beacon=%d)\n", - sdata->name, mgmt->sa, mgmt->da, - mgmt->bssid, tx_last_beacon); -#endif /* CONFIG_MAC80211_IBSS_DEBUG */ + ibss_dbg(sdata, + "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n", + mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon); if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da)) return; @@ -840,11 +809,8 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata, pos = mgmt->u.probe_req.variable; if (pos[0] != WLAN_EID_SSID || pos + 2 + pos[1] > end) { -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: Invalid SSID IE in ProbeReq " - "from %pM\n", - sdata->name, mgmt->sa); -#endif + ibss_dbg(sdata, "Invalid SSID IE in ProbeReq from %pM\n", + mgmt->sa); return; } if (pos[1] != 0 && @@ -861,10 +827,7 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata, resp = (struct ieee80211_mgmt *) skb->data; memcpy(resp->da, mgmt->sa, ETH_ALEN); -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: Sending ProbeResp to %pM\n", - sdata->name, resp->da); -#endif /* CONFIG_MAC80211_IBSS_DEBUG */ + ibss_dbg(sdata, "Sending ProbeResp to %pM\n", resp->da); IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; ieee80211_tx_skb(sdata, skb); } diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 3f3cd50fff16..bb61f7718c4c 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -30,6 +30,7 @@ #include <net/mac80211.h> #include "key.h" #include "sta_info.h" +#include "debug.h" struct ieee80211_local; @@ -55,11 +56,14 @@ struct ieee80211_local; #define TU_TO_JIFFIES(x) (usecs_to_jiffies((x) * 1024)) #define TU_TO_EXP_TIME(x) (jiffies + TU_TO_JIFFIES(x)) +/* + * Some APs experience problems when working with U-APSD. Decrease the + * probability of that happening by using legacy mode for all ACs but VO. + * The AP that caused us trouble was a Cisco 4410N. It ignores our + * setting, and always treats non-VO ACs as legacy. + */ #define IEEE80211_DEFAULT_UAPSD_QUEUES \ - (IEEE80211_WMM_IE_STA_QOSINFO_AC_BK | \ - IEEE80211_WMM_IE_STA_QOSINFO_AC_BE | \ - IEEE80211_WMM_IE_STA_QOSINFO_AC_VI | \ - IEEE80211_WMM_IE_STA_QOSINFO_AC_VO) + IEEE80211_WMM_IE_STA_QOSINFO_AC_VO #define IEEE80211_DEFAULT_MAX_SP_LEN \ IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL @@ -81,6 +85,8 @@ struct ieee80211_bss { size_t ssid_len; u8 ssid[IEEE80211_MAX_SSID_LEN]; + u32 device_ts; + u8 dtim_period; bool wmm_used; @@ -203,7 +209,6 @@ typedef unsigned __bitwise__ ieee80211_rx_result; * enum ieee80211_packet_rx_flags - packet RX flags * @IEEE80211_RX_RA_MATCH: frame is destined to interface currently processed * (incl. multicast frames) - * @IEEE80211_RX_IN_SCAN: received while scanning * @IEEE80211_RX_FRAGMENTED: fragmented frame * @IEEE80211_RX_AMSDU: a-MSDU packet * @IEEE80211_RX_MALFORMED_ACTION_FRM: action frame is malformed @@ -213,7 +218,6 @@ typedef unsigned __bitwise__ ieee80211_rx_result; * @rx_flags field of &struct ieee80211_rx_status. 
*/ enum ieee80211_packet_rx_flags { - IEEE80211_RX_IN_SCAN = BIT(0), IEEE80211_RX_RA_MATCH = BIT(1), IEEE80211_RX_FRAGMENTED = BIT(2), IEEE80211_RX_AMSDU = BIT(3), @@ -317,55 +321,30 @@ struct mesh_preq_queue { u8 flags; }; -enum ieee80211_work_type { - IEEE80211_WORK_ABORT, - IEEE80211_WORK_REMAIN_ON_CHANNEL, - IEEE80211_WORK_OFFCHANNEL_TX, -}; - -/** - * enum work_done_result - indicates what to do after work was done - * - * @WORK_DONE_DESTROY: This work item is no longer needed, destroy. - * @WORK_DONE_REQUEUE: This work item was reset to be reused, and - * should be requeued. - */ -enum work_done_result { - WORK_DONE_DESTROY, - WORK_DONE_REQUEUE, -}; +#if HZ/100 == 0 +#define IEEE80211_ROC_MIN_LEFT 1 +#else +#define IEEE80211_ROC_MIN_LEFT (HZ/100) +#endif -struct ieee80211_work { +struct ieee80211_roc_work { struct list_head list; + struct list_head dependents; - struct rcu_head rcu_head; + struct delayed_work work; struct ieee80211_sub_if_data *sdata; - enum work_done_result (*done)(struct ieee80211_work *wk, - struct sk_buff *skb); - struct ieee80211_channel *chan; enum nl80211_channel_type chan_type; - unsigned long timeout; - enum ieee80211_work_type type; + bool started, abort, hw_begun, notified; - bool started; + unsigned long hw_start_time; - union { - struct { - u32 duration; - } remain; - struct { - struct sk_buff *frame; - u32 wait; - bool status; - } offchan_tx; - }; - - size_t data_len; - u8 data[]; + u32 duration, req_duration; + struct sk_buff *frame; + u64 mgmt_tx_cookie; }; /* flags used in struct ieee80211_if_managed.flags */ @@ -399,7 +378,6 @@ struct ieee80211_mgd_auth_data { struct ieee80211_mgd_assoc_data { struct cfg80211_bss *bss; const u8 *supp_rates; - const u8 *ht_operation_ie; unsigned long timeout; int tries; @@ -414,6 +392,8 @@ struct ieee80211_mgd_assoc_data { bool sent_assoc; bool synced; + u8 ap_ht_param; + size_t ie_len; u8 ie[]; }; @@ -532,6 +512,7 @@ struct ieee80211_if_ibss { bool privacy; bool control_port; + unsigned int auth_frame_registrations; u8 bssid[ETH_ALEN] __aligned(2); u8 ssid[IEEE80211_MAX_SSID_LEN]; @@ -701,6 +682,9 @@ struct ieee80211_sub_if_data { /* TID bitmap for NoAck policy */ u16 noack_map; + /* bit field of ACM bits (BIT(802.1D tag)) */ + u8 wmm_acm; + struct ieee80211_key __rcu *keys[NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS]; struct ieee80211_key __rcu *default_unicast_key; struct ieee80211_key __rcu *default_multicast_key; @@ -847,13 +831,6 @@ struct ieee80211_local { const struct ieee80211_ops *ops; /* - * work stuff, potentially off-channel (in the future) - */ - struct list_head work_list; - struct timer_list work_timer; - struct work_struct work_work; - - /* * private workqueue to mac80211. 
mac80211 makes this accessible * via ieee80211_queue_work() */ @@ -912,6 +889,9 @@ struct ieee80211_local { /* device is started */ bool started; + /* device is during a HW reconfig */ + bool in_reconfig; + /* wowlan is enabled -- don't reconfig on resume */ bool wowlan; @@ -985,14 +965,14 @@ struct ieee80211_local { int scan_channel_idx; int scan_ies_len; - bool sched_scanning; struct ieee80211_sched_scan_ies sched_scan_ies; struct work_struct sched_scan_stopped_work; + struct ieee80211_sub_if_data __rcu *sched_scan_sdata; unsigned long leave_oper_channel_time; enum mac80211_scan_state next_scan_state; struct delayed_work scan_work; - struct ieee80211_sub_if_data *scan_sdata; + struct ieee80211_sub_if_data __rcu *scan_sdata; enum nl80211_channel_type _oper_channel_type; struct ieee80211_channel *oper_channel, *csa_channel; @@ -1034,7 +1014,6 @@ struct ieee80211_local { unsigned int rx_handlers_drop_nullfunc; unsigned int rx_handlers_drop_defrag; unsigned int rx_handlers_drop_short; - unsigned int rx_handlers_drop_passive_scan; unsigned int tx_expand_skb_head; unsigned int tx_expand_skb_head_cloned; unsigned int rx_expand_skb_head; @@ -1050,7 +1029,6 @@ struct ieee80211_local { int total_ps_buffered; /* total number of all buffered unicast and * multicast packets for power saving stations */ - unsigned int wmm_acm; /* bit field of ACM bits (BIT(802.1D tag)) */ bool pspolling; bool offchannel_ps_enabled; @@ -1087,14 +1065,12 @@ struct ieee80211_local { } debugfs; #endif - struct ieee80211_channel *hw_roc_channel; - struct net_device *hw_roc_dev; - struct sk_buff *hw_roc_skb, *hw_roc_skb_for_status; + /* + * Remain-on-channel support + */ + struct list_head roc_list; struct work_struct hw_roc_start, hw_roc_done; - enum nl80211_channel_type hw_roc_channel_type; - unsigned int hw_roc_duration; - u32 hw_roc_cookie; - bool hw_roc_for_tx; + unsigned long hw_roc_start_time; struct idr ack_status_frames; spinlock_t ack_status_lock; @@ -1114,6 +1090,12 @@ IEEE80211_DEV_TO_SUB_IF(struct net_device *dev) return netdev_priv(dev); } +static inline struct ieee80211_sub_if_data * +IEEE80211_WDEV_TO_SUB_IF(struct wireless_dev *wdev) +{ + return container_of(wdev, struct ieee80211_sub_if_data, wdev); +} + /* this struct represents 802.11n's RA/TID combination */ struct ieee80211_ra_tid { u8 ra[ETH_ALEN]; @@ -1264,8 +1246,7 @@ int ieee80211_request_scan(struct ieee80211_sub_if_data *sdata, struct cfg80211_scan_request *req); void ieee80211_scan_cancel(struct ieee80211_local *local); void ieee80211_run_deferred_scan(struct ieee80211_local *local); -ieee80211_rx_result -ieee80211_scan_rx(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb); +void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb); void ieee80211_mlme_notify_scan_completed(struct ieee80211_local *local); struct ieee80211_bss * @@ -1290,19 +1271,23 @@ void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local, bool offchannel_ps_enable); void ieee80211_offchannel_return(struct ieee80211_local *local, bool offchannel_ps_disable); -void ieee80211_hw_roc_setup(struct ieee80211_local *local); +void ieee80211_roc_setup(struct ieee80211_local *local); +void ieee80211_start_next_roc(struct ieee80211_local *local); +void ieee80211_roc_purge(struct ieee80211_sub_if_data *sdata); +void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc); +void ieee80211_sw_roc_work(struct work_struct *work); +void ieee80211_handle_roc_started(struct ieee80211_roc_work *roc); /* interface handling */ int ieee80211_iface_init(void); 
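The IEEE80211_WDEV_TO_SUB_IF() helper added to ieee80211_i.h above is a plain container_of() conversion: struct wireless_dev is embedded as the wdev member of struct ieee80211_sub_if_data, so the enclosing structure can be recovered from a pointer to that member. A toy illustration of the same idiom, using invented structure names rather than the mac80211 ones:

#include <linux/kernel.h>	/* container_of() */

struct demo_inner {
	int id;
};

struct demo_outer {
	char name[16];
	struct demo_inner member;	/* embedded object, like wdev inside sdata */
};

static struct demo_outer *demo_outer_from_inner(struct demo_inner *p)
{
	/* Recover the enclosing demo_outer from a pointer to its member. */
	return container_of(p, struct demo_outer, member);
}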
void ieee80211_iface_exit(void); int ieee80211_if_add(struct ieee80211_local *local, const char *name, - struct net_device **new_dev, enum nl80211_iftype type, + struct wireless_dev **new_wdev, enum nl80211_iftype type, struct vif_params *params); int ieee80211_if_change_type(struct ieee80211_sub_if_data *sdata, enum nl80211_iftype type); void ieee80211_if_remove(struct ieee80211_sub_if_data *sdata); void ieee80211_remove_interfaces(struct ieee80211_local *local); -u32 __ieee80211_recalc_idle(struct ieee80211_local *local); void ieee80211_recalc_idle(struct ieee80211_local *local); void ieee80211_adjust_monitor_flags(struct ieee80211_sub_if_data *sdata, const int offset); @@ -1499,18 +1484,12 @@ u8 *ieee80211_ie_build_ht_oper(u8 *pos, struct ieee80211_sta_ht_cap *ht_cap, struct ieee80211_channel *channel, enum nl80211_channel_type channel_type, u16 prot_mode); - -/* internal work items */ -void ieee80211_work_init(struct ieee80211_local *local); -void ieee80211_add_work(struct ieee80211_work *wk); -void free_work(struct ieee80211_work *wk); -void ieee80211_work_purge(struct ieee80211_sub_if_data *sdata); -int ieee80211_wk_remain_on_channel(struct ieee80211_sub_if_data *sdata, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type, - unsigned int duration, u64 *cookie); -int ieee80211_wk_cancel_remain_on_channel( - struct ieee80211_sub_if_data *sdata, u64 cookie); +u8 *ieee80211_ie_build_vht_cap(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap, + u32 cap); +int ieee80211_add_srates_ie(struct ieee80211_sub_if_data *sdata, + struct sk_buff *skb, bool need_basic); +int ieee80211_add_ext_srates_ie(struct ieee80211_sub_if_data *sdata, + struct sk_buff *skb, bool need_basic); /* channel management */ enum ieee80211_chan_mode { diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c index 8664111d0566..bfb57dcc1538 100644 --- a/net/mac80211/iface.c +++ b/net/mac80211/iface.c @@ -43,6 +43,128 @@ */ +static u32 ieee80211_idle_off(struct ieee80211_local *local, + const char *reason) +{ + if (!(local->hw.conf.flags & IEEE80211_CONF_IDLE)) + return 0; + + local->hw.conf.flags &= ~IEEE80211_CONF_IDLE; + return IEEE80211_CONF_CHANGE_IDLE; +} + +static u32 ieee80211_idle_on(struct ieee80211_local *local) +{ + if (local->hw.conf.flags & IEEE80211_CONF_IDLE) + return 0; + + drv_flush(local, false); + + local->hw.conf.flags |= IEEE80211_CONF_IDLE; + return IEEE80211_CONF_CHANGE_IDLE; +} + +static u32 __ieee80211_recalc_idle(struct ieee80211_local *local) +{ + struct ieee80211_sub_if_data *sdata; + int count = 0; + bool working = false, scanning = false; + unsigned int led_trig_start = 0, led_trig_stop = 0; + struct ieee80211_roc_work *roc; + +#ifdef CONFIG_PROVE_LOCKING + WARN_ON(debug_locks && !lockdep_rtnl_is_held() && + !lockdep_is_held(&local->iflist_mtx)); +#endif + lockdep_assert_held(&local->mtx); + + list_for_each_entry(sdata, &local->interfaces, list) { + if (!ieee80211_sdata_running(sdata)) { + sdata->vif.bss_conf.idle = true; + continue; + } + + sdata->old_idle = sdata->vif.bss_conf.idle; + + /* do not count disabled managed interfaces */ + if (sdata->vif.type == NL80211_IFTYPE_STATION && + !sdata->u.mgd.associated && + !sdata->u.mgd.auth_data && + !sdata->u.mgd.assoc_data) { + sdata->vif.bss_conf.idle = true; + continue; + } + /* do not count unused IBSS interfaces */ + if (sdata->vif.type == NL80211_IFTYPE_ADHOC && + !sdata->u.ibss.ssid_len) { + sdata->vif.bss_conf.idle = true; + continue; + } + /* count everything else */ + sdata->vif.bss_conf.idle = false; + count++; + } + + 
if (!local->ops->remain_on_channel) { + list_for_each_entry(roc, &local->roc_list, list) { + working = true; + roc->sdata->vif.bss_conf.idle = false; + } + } + + sdata = rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx)); + if (sdata && !(local->hw.flags & IEEE80211_HW_SCAN_WHILE_IDLE)) { + scanning = true; + sdata->vif.bss_conf.idle = false; + } + + list_for_each_entry(sdata, &local->interfaces, list) { + if (sdata->vif.type == NL80211_IFTYPE_MONITOR || + sdata->vif.type == NL80211_IFTYPE_AP_VLAN) + continue; + if (sdata->old_idle == sdata->vif.bss_conf.idle) + continue; + if (!ieee80211_sdata_running(sdata)) + continue; + ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_IDLE); + } + + if (working || scanning) + led_trig_start |= IEEE80211_TPT_LEDTRIG_FL_WORK; + else + led_trig_stop |= IEEE80211_TPT_LEDTRIG_FL_WORK; + + if (count) + led_trig_start |= IEEE80211_TPT_LEDTRIG_FL_CONNECTED; + else + led_trig_stop |= IEEE80211_TPT_LEDTRIG_FL_CONNECTED; + + ieee80211_mod_tpt_led_trig(local, led_trig_start, led_trig_stop); + + if (working) + return ieee80211_idle_off(local, "working"); + if (scanning) + return ieee80211_idle_off(local, "scanning"); + if (!count) + return ieee80211_idle_on(local); + else + return ieee80211_idle_off(local, "in use"); + + return 0; +} + +void ieee80211_recalc_idle(struct ieee80211_local *local) +{ + u32 chg; + + mutex_lock(&local->iflist_mtx); + chg = __ieee80211_recalc_idle(local); + mutex_unlock(&local->iflist_mtx); + if (chg) + ieee80211_hw_config(local, chg); +} + static int ieee80211_change_mtu(struct net_device *dev, int new_mtu) { int meshhdrlen; @@ -57,9 +179,6 @@ static int ieee80211_change_mtu(struct net_device *dev, int new_mtu) return -EINVAL; } -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - printk(KERN_DEBUG "%s: setting MTU %d\n", dev->name, new_mtu); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ dev->mtu = new_mtu; return 0; } @@ -100,15 +219,12 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata, { struct ieee80211_local *local = sdata->local; struct ieee80211_sub_if_data *nsdata; - struct net_device *dev = sdata->dev; ASSERT_RTNL(); /* we hold the RTNL here so can safely walk the list */ list_for_each_entry(nsdata, &local->interfaces, list) { - struct net_device *ndev = nsdata->dev; - - if (ndev != dev && ieee80211_sdata_running(nsdata)) { + if (nsdata != sdata && ieee80211_sdata_running(nsdata)) { /* * Allow only a single IBSS interface to be up at any * time. This is restricted because beacon distribution @@ -127,7 +243,8 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata, * The remaining checks are only performed for interfaces * with the same MAC address. 
*/ - if (!ether_addr_equal(dev->dev_addr, ndev->dev_addr)) + if (!ether_addr_equal(sdata->vif.addr, + nsdata->vif.addr)) continue; /* @@ -217,17 +334,21 @@ static void ieee80211_set_default_queues(struct ieee80211_sub_if_data *sdata) static int ieee80211_add_virtual_monitor(struct ieee80211_local *local) { struct ieee80211_sub_if_data *sdata; - int ret; + int ret = 0; if (!(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF)) return 0; + mutex_lock(&local->iflist_mtx); + if (local->monitor_sdata) - return 0; + goto out_unlock; sdata = kzalloc(sizeof(*sdata) + local->hw.vif_data_size, GFP_KERNEL); - if (!sdata) - return -ENOMEM; + if (!sdata) { + ret = -ENOMEM; + goto out_unlock; + } /* set up data */ sdata->local = local; @@ -241,18 +362,19 @@ static int ieee80211_add_virtual_monitor(struct ieee80211_local *local) if (WARN_ON(ret)) { /* ok .. stupid driver, it asked for this! */ kfree(sdata); - return ret; + goto out_unlock; } ret = ieee80211_check_queues(sdata); if (ret) { kfree(sdata); - return ret; + goto out_unlock; } rcu_assign_pointer(local->monitor_sdata, sdata); - - return 0; + out_unlock: + mutex_unlock(&local->iflist_mtx); + return ret; } static void ieee80211_del_virtual_monitor(struct ieee80211_local *local) @@ -262,10 +384,12 @@ static void ieee80211_del_virtual_monitor(struct ieee80211_local *local) if (!(local->hw.flags & IEEE80211_HW_WANT_MONITOR_VIF)) return; - sdata = rtnl_dereference(local->monitor_sdata); + mutex_lock(&local->iflist_mtx); + sdata = rcu_dereference_protected(local->monitor_sdata, + lockdep_is_held(&local->iflist_mtx)); if (!sdata) - return; + goto out_unlock; rcu_assign_pointer(local->monitor_sdata, NULL); synchronize_net(); @@ -273,6 +397,8 @@ static void ieee80211_del_virtual_monitor(struct ieee80211_local *local) drv_remove_interface(local, sdata); kfree(sdata); + out_unlock: + mutex_unlock(&local->iflist_mtx); } /* @@ -520,7 +646,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, clear_bit(SDATA_STATE_RUNNING, &sdata->state); - if (local->scan_sdata == sdata) + if (rcu_access_pointer(local->scan_sdata) == sdata) ieee80211_scan_cancel(local); /* @@ -528,10 +654,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, */ netif_tx_stop_all_queues(sdata->dev); - /* - * Purge work for this interface. - */ - ieee80211_work_purge(sdata); + ieee80211_roc_purge(sdata); /* * Remove all stations associated with this interface. @@ -637,18 +760,6 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, ieee80211_configure_filter(local); break; default: - mutex_lock(&local->mtx); - if (local->hw_roc_dev == sdata->dev && - local->hw_roc_channel) { - /* ignore return value since this is racy */ - drv_cancel_remain_on_channel(local); - ieee80211_queue_work(&local->hw, &local->hw_roc_done); - } - mutex_unlock(&local->mtx); - - flush_work(&local->hw_roc_start); - flush_work(&local->hw_roc_done); - flush_work(&sdata->work); /* * When we get here, the interface is marked down. @@ -823,7 +934,7 @@ static u16 ieee80211_monitor_select_queue(struct net_device *dev, hdr = (void *)((u8 *)skb->data + le16_to_cpu(rtap->it_len)); - return ieee80211_select_queue_80211(local, skb, hdr); + return ieee80211_select_queue_80211(sdata, skb, hdr); } static const struct net_device_ops ieee80211_monitorif_ops = { @@ -1238,7 +1349,7 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local, if (__ffs64(mask) + hweight64(mask) != fls64(mask)) { /* not a contiguous mask ... not handled now! 
*/ - printk(KERN_DEBUG "not contiguous\n"); + pr_info("not contiguous\n"); break; } @@ -1284,7 +1395,7 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local, } int ieee80211_if_add(struct ieee80211_local *local, const char *name, - struct net_device **new_dev, enum nl80211_iftype type, + struct wireless_dev **new_wdev, enum nl80211_iftype type, struct vif_params *params) { struct net_device *ndev; @@ -1364,6 +1475,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name, sdata->u.mgd.use_4addr = params->use_4addr; } + ndev->features |= local->hw.netdev_features; + ret = register_netdevice(ndev); if (ret) goto fail; @@ -1372,8 +1485,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name, list_add_tail_rcu(&sdata->list, &local->interfaces); mutex_unlock(&local->iflist_mtx); - if (new_dev) - *new_dev = ndev; + if (new_wdev) + *new_wdev = &sdata->wdev; return 0; @@ -1421,138 +1534,6 @@ void ieee80211_remove_interfaces(struct ieee80211_local *local) list_del(&unreg_list); } -static u32 ieee80211_idle_off(struct ieee80211_local *local, - const char *reason) -{ - if (!(local->hw.conf.flags & IEEE80211_CONF_IDLE)) - return 0; - -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "device no longer idle - %s\n", reason); -#endif - - local->hw.conf.flags &= ~IEEE80211_CONF_IDLE; - return IEEE80211_CONF_CHANGE_IDLE; -} - -static u32 ieee80211_idle_on(struct ieee80211_local *local) -{ - if (local->hw.conf.flags & IEEE80211_CONF_IDLE) - return 0; - -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "device now idle\n"); -#endif - - drv_flush(local, false); - - local->hw.conf.flags |= IEEE80211_CONF_IDLE; - return IEEE80211_CONF_CHANGE_IDLE; -} - -u32 __ieee80211_recalc_idle(struct ieee80211_local *local) -{ - struct ieee80211_sub_if_data *sdata; - int count = 0; - bool working = false, scanning = false, hw_roc = false; - struct ieee80211_work *wk; - unsigned int led_trig_start = 0, led_trig_stop = 0; - -#ifdef CONFIG_PROVE_LOCKING - WARN_ON(debug_locks && !lockdep_rtnl_is_held() && - !lockdep_is_held(&local->iflist_mtx)); -#endif - lockdep_assert_held(&local->mtx); - - list_for_each_entry(sdata, &local->interfaces, list) { - if (!ieee80211_sdata_running(sdata)) { - sdata->vif.bss_conf.idle = true; - continue; - } - - sdata->old_idle = sdata->vif.bss_conf.idle; - - /* do not count disabled managed interfaces */ - if (sdata->vif.type == NL80211_IFTYPE_STATION && - !sdata->u.mgd.associated && - !sdata->u.mgd.auth_data && - !sdata->u.mgd.assoc_data) { - sdata->vif.bss_conf.idle = true; - continue; - } - /* do not count unused IBSS interfaces */ - if (sdata->vif.type == NL80211_IFTYPE_ADHOC && - !sdata->u.ibss.ssid_len) { - sdata->vif.bss_conf.idle = true; - continue; - } - /* count everything else */ - sdata->vif.bss_conf.idle = false; - count++; - } - - list_for_each_entry(wk, &local->work_list, list) { - working = true; - wk->sdata->vif.bss_conf.idle = false; - } - - if (local->scan_sdata && - !(local->hw.flags & IEEE80211_HW_SCAN_WHILE_IDLE)) { - scanning = true; - local->scan_sdata->vif.bss_conf.idle = false; - } - - if (local->hw_roc_channel) - hw_roc = true; - - list_for_each_entry(sdata, &local->interfaces, list) { - if (sdata->vif.type == NL80211_IFTYPE_MONITOR || - sdata->vif.type == NL80211_IFTYPE_AP_VLAN) - continue; - if (sdata->old_idle == sdata->vif.bss_conf.idle) - continue; - if (!ieee80211_sdata_running(sdata)) - continue; - ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_IDLE); - } - - if (working 
|| scanning || hw_roc) - led_trig_start |= IEEE80211_TPT_LEDTRIG_FL_WORK; - else - led_trig_stop |= IEEE80211_TPT_LEDTRIG_FL_WORK; - - if (count) - led_trig_start |= IEEE80211_TPT_LEDTRIG_FL_CONNECTED; - else - led_trig_stop |= IEEE80211_TPT_LEDTRIG_FL_CONNECTED; - - ieee80211_mod_tpt_led_trig(local, led_trig_start, led_trig_stop); - - if (hw_roc) - return ieee80211_idle_off(local, "hw remain-on-channel"); - if (working) - return ieee80211_idle_off(local, "working"); - if (scanning) - return ieee80211_idle_off(local, "scanning"); - if (!count) - return ieee80211_idle_on(local); - else - return ieee80211_idle_off(local, "in use"); - - return 0; -} - -void ieee80211_recalc_idle(struct ieee80211_local *local) -{ - u32 chg; - - mutex_lock(&local->iflist_mtx); - chg = __ieee80211_recalc_idle(local); - mutex_unlock(&local->iflist_mtx); - if (chg) - ieee80211_hw_config(local, chg); -} - static int netdev_notify(struct notifier_block *nb, unsigned long state, void *ndev) diff --git a/net/mac80211/key.c b/net/mac80211/key.c index 5bb600d93d77..7ae678ba5d67 100644 --- a/net/mac80211/key.c +++ b/net/mac80211/key.c @@ -139,7 +139,7 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key) } if (ret != -ENOSPC && ret != -EOPNOTSUPP) - wiphy_err(key->local->hw.wiphy, + sdata_err(sdata, "failed to set key (%d, %pM) to hardware (%d)\n", key->conf.keyidx, sta ? sta->sta.addr : bcast_addr, ret); @@ -186,7 +186,7 @@ static void ieee80211_key_disable_hw_accel(struct ieee80211_key *key) sta ? &sta->sta : NULL, &key->conf); if (ret) - wiphy_err(key->local->hw.wiphy, + sdata_err(sdata, "failed to remove key (%d, %pM) from hardware (%d)\n", key->conf.keyidx, sta ? sta->sta.addr : bcast_addr, ret); @@ -194,26 +194,6 @@ static void ieee80211_key_disable_hw_accel(struct ieee80211_key *key) key->flags &= ~KEY_FLAG_UPLOADED_TO_HARDWARE; } -void ieee80211_key_removed(struct ieee80211_key_conf *key_conf) -{ - struct ieee80211_key *key; - - key = container_of(key_conf, struct ieee80211_key, conf); - - might_sleep(); - assert_key_lock(key->local); - - key->flags &= ~KEY_FLAG_UPLOADED_TO_HARDWARE; - - /* - * Flush TX path to avoid attempts to use this key - * after this function returns. Until then, drivers - * must be prepared to handle the key. - */ - synchronize_rcu(); -} -EXPORT_SYMBOL_GPL(ieee80211_key_removed); - static void __ieee80211_set_default_key(struct ieee80211_sub_if_data *sdata, int idx, bool uni, bool multi) { diff --git a/net/mac80211/main.c b/net/mac80211/main.c index f5548e953259..c26e231c733a 100644 --- a/net/mac80211/main.c +++ b/net/mac80211/main.c @@ -322,7 +322,8 @@ static void ieee80211_restart_work(struct work_struct *work) mutex_lock(&local->mtx); WARN(test_bit(SCAN_HW_SCANNING, &local->scanning) || - local->sched_scanning, + rcu_dereference_protected(local->sched_scan_sdata, + lockdep_is_held(&local->mtx)), "%s called with hardware scan in progress\n", __func__); mutex_unlock(&local->mtx); @@ -345,6 +346,13 @@ void ieee80211_restart_hw(struct ieee80211_hw *hw) ieee80211_stop_queues_by_reason(hw, IEEE80211_QUEUE_STOP_REASON_SUSPEND); + /* + * Stop all Rx during the reconfig. We don't want state changes + * or driver callbacks while this is in progress. 
+ */ + local->in_reconfig = true; + barrier(); + schedule_work(&local->restart_work); } EXPORT_SYMBOL(ieee80211_restart_hw); @@ -455,7 +463,9 @@ static const struct ieee80211_txrx_stypes ieee80211_default_mgmt_stypes[NUM_NL80211_IFTYPES] = { [NL80211_IFTYPE_ADHOC] = { .tx = 0xffff, - .rx = BIT(IEEE80211_STYPE_ACTION >> 4), + .rx = BIT(IEEE80211_STYPE_ACTION >> 4) | + BIT(IEEE80211_STYPE_AUTH >> 4) | + BIT(IEEE80211_STYPE_DEAUTH >> 4), }, [NL80211_IFTYPE_STATION] = { .tx = 0xffff, @@ -578,7 +588,7 @@ struct ieee80211_hw *ieee80211_alloc_hw(size_t priv_data_len, local->hw.priv = (char *)local + ALIGN(sizeof(*local), NETDEV_ALIGN); - BUG_ON(!ops->tx && !ops->tx_frags); + BUG_ON(!ops->tx); BUG_ON(!ops->start); BUG_ON(!ops->stop); BUG_ON(!ops->config); @@ -625,8 +635,6 @@ struct ieee80211_hw *ieee80211_alloc_hw(size_t priv_data_len, INIT_DELAYED_WORK(&local->scan_work, ieee80211_scan_work); - ieee80211_work_init(local); - INIT_WORK(&local->restart_work, ieee80211_restart_work); INIT_WORK(&local->reconfig_filter, ieee80211_reconfig_filter); @@ -669,7 +677,7 @@ struct ieee80211_hw *ieee80211_alloc_hw(size_t priv_data_len, ieee80211_led_names(local); - ieee80211_hw_roc_setup(local); + ieee80211_roc_setup(local); return &local->hw; } @@ -681,7 +689,8 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) int result, i; enum ieee80211_band band; int channels, max_bitrates; - bool supp_ht; + bool supp_ht, supp_vht; + netdev_features_t feature_whitelist; static const u32 cipher_suites[] = { /* keep WEP first, it may be removed below */ WLAN_CIPHER_SUITE_WEP40, @@ -698,16 +707,21 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) local->hw.offchannel_tx_hw_queue >= local->hw.queues)) return -EINVAL; - if ((hw->wiphy->wowlan.flags || hw->wiphy->wowlan.n_patterns) #ifdef CONFIG_PM - && (!local->ops->suspend || !local->ops->resume) -#endif - ) + if ((hw->wiphy->wowlan.flags || hw->wiphy->wowlan.n_patterns) && + (!local->ops->suspend || !local->ops->resume)) return -EINVAL; +#endif if ((hw->flags & IEEE80211_HW_SCAN_WHILE_IDLE) && !local->ops->hw_scan) return -EINVAL; + /* Only HW csum features are currently compatible with mac80211 */ + feature_whitelist = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | + NETIF_F_HW_CSUM; + if (WARN_ON(hw->netdev_features & ~feature_whitelist)) + return -EINVAL; + if (hw->max_report_rates == 0) hw->max_report_rates = hw->max_rates; @@ -719,6 +733,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) channels = 0; max_bitrates = 0; supp_ht = false; + supp_vht = false; for (band = 0; band < IEEE80211_NUM_BANDS; band++) { struct ieee80211_supported_band *sband; @@ -736,6 +751,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) if (max_bitrates < sband->n_bitrates) max_bitrates = sband->n_bitrates; supp_ht = supp_ht || sband->ht_cap.ht_supported; + supp_vht = supp_vht || sband->vht_cap.vht_supported; } local->int_scan_req = kzalloc(sizeof(*local->int_scan_req) + @@ -811,6 +827,10 @@ int ieee80211_register_hw(struct ieee80211_hw *hw) if (supp_ht) local->scan_ies_len += 2 + sizeof(struct ieee80211_ht_cap); + if (supp_vht) + local->scan_ies_len += + 2 + sizeof(struct ieee80211_vht_capabilities); + if (!local->ops->hw_scan) { /* For hw_scan, driver needs to set these up. 
*/ local->hw.wiphy->max_scan_ssids = 4; @@ -1009,12 +1029,6 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw) rtnl_unlock(); - /* - * Now all work items will be gone, but the - * timer might still be armed, so delete it - */ - del_timer_sync(&local->work_timer); - cancel_work_sync(&local->restart_work); cancel_work_sync(&local->reconfig_filter); diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c index 2913113c5833..6fac18c0423f 100644 --- a/net/mac80211/mesh.c +++ b/net/mac80211/mesh.c @@ -133,7 +133,7 @@ bool mesh_peer_accepts_plinks(struct ieee802_11_elems *ie) } /** - * mesh_accept_plinks_update: update accepting_plink in local mesh beacons + * mesh_accept_plinks_update - update accepting_plink in local mesh beacons * * @sdata: mesh interface in which mesh beacons are going to be updated */ @@ -443,7 +443,7 @@ static void ieee80211_mesh_path_root_timer(unsigned long data) void ieee80211_mesh_root_setup(struct ieee80211_if_mesh *ifmsh) { - if (ifmsh->mshcfg.dot11MeshHWMPRootMode) + if (ifmsh->mshcfg.dot11MeshHWMPRootMode > IEEE80211_ROOTMODE_ROOT) set_bit(MESH_WORK_ROOT, &ifmsh->wrkq_flags); else { clear_bit(MESH_WORK_ROOT, &ifmsh->wrkq_flags); @@ -523,11 +523,6 @@ static void ieee80211_mesh_housekeeping(struct ieee80211_sub_if_data *sdata, { bool free_plinks; -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - printk(KERN_DEBUG "%s: running mesh housekeeping\n", - sdata->name); -#endif - ieee80211_sta_expire(sdata, IEEE80211_MESH_PEER_INACTIVITY_LIMIT); mesh_path_expire(sdata); @@ -542,11 +537,17 @@ static void ieee80211_mesh_housekeeping(struct ieee80211_sub_if_data *sdata, static void ieee80211_mesh_rootpath(struct ieee80211_sub_if_data *sdata) { struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; + u32 interval; mesh_path_tx_root_frame(sdata); + + if (ifmsh->mshcfg.dot11MeshHWMPRootMode == IEEE80211_PROACTIVE_RANN) + interval = ifmsh->mshcfg.dot11MeshHWMPRannInterval; + else + interval = ifmsh->mshcfg.dot11MeshHWMProotInterval; + mod_timer(&ifmsh->mesh_path_root_timer, - round_jiffies(TU_TO_EXP_TIME( - ifmsh->mshcfg.dot11MeshHWMPRannInterval))); + round_jiffies(TU_TO_EXP_TIME(interval))); } #ifdef CONFIG_PM diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h index e3642756f8f4..faaa39bcfd10 100644 --- a/net/mac80211/mesh.h +++ b/net/mac80211/mesh.h @@ -104,6 +104,7 @@ enum mesh_deferred_task_flags { * an mpath to a hash bucket on a path table. 
* @rann_snd_addr: the RANN sender address * @rann_metric: the aggregated path metric towards the root node + * @last_preq_to_root: Timestamp of last PREQ sent to root * @is_root: the destination station of this path is a root node * @is_gate: the destination station of this path is a mesh gate * @@ -131,6 +132,7 @@ struct mesh_path { spinlock_t state_lock; u8 rann_snd_addr[ETH_ALEN]; u32 rann_metric; + unsigned long last_preq_to_root; bool is_root; bool is_gate; }; @@ -245,7 +247,7 @@ void mesh_rmc_free(struct ieee80211_sub_if_data *sdata); int mesh_rmc_init(struct ieee80211_sub_if_data *sdata); void ieee80211s_init(void); void ieee80211s_update_metric(struct ieee80211_local *local, - struct sta_info *stainfo, struct sk_buff *skb); + struct sta_info *sta, struct sk_buff *skb); void ieee80211s_stop(void); void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata); void ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata); diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c index 9b59658e8650..494bc39f61a4 100644 --- a/net/mac80211/mesh_hwmp.c +++ b/net/mac80211/mesh_hwmp.c @@ -13,13 +13,6 @@ #include "wme.h" #include "mesh.h" -#ifdef CONFIG_MAC80211_VERBOSE_MHWMP_DEBUG -#define mhwmp_dbg(fmt, args...) \ - printk(KERN_DEBUG "Mesh HWMP (%s): " fmt "\n", sdata->name, ##args) -#else -#define mhwmp_dbg(fmt, args...) do { (void)(0); } while (0) -#endif - #define TEST_FRAME_LEN 8192 #define MAX_METRIC 0xffffffff #define ARITH_SHIFT 8 @@ -98,6 +91,8 @@ static inline u32 u16_field_get(u8 *preq_elem, int offset, bool ae) #define max_preq_retries(s) (s->u.mesh.mshcfg.dot11MeshHWMPmaxPREQretries) #define disc_timeout_jiff(s) \ msecs_to_jiffies(sdata->u.mesh.mshcfg.min_discovery_timeout) +#define root_path_confirmation_jiffies(s) \ + msecs_to_jiffies(sdata->u.mesh.mshcfg.dot11MeshHWMPconfirmationInterval) enum mpath_frame_type { MPATH_PREQ = 0, @@ -142,19 +137,19 @@ static int mesh_path_sel_frame_tx(enum mpath_frame_type action, u8 flags, switch (action) { case MPATH_PREQ: - mhwmp_dbg("sending PREQ to %pM", target); + mhwmp_dbg(sdata, "sending PREQ to %pM\n", target); ie_len = 37; pos = skb_put(skb, 2 + ie_len); *pos++ = WLAN_EID_PREQ; break; case MPATH_PREP: - mhwmp_dbg("sending PREP to %pM", target); + mhwmp_dbg(sdata, "sending PREP to %pM\n", target); ie_len = 31; pos = skb_put(skb, 2 + ie_len); *pos++ = WLAN_EID_PREP; break; case MPATH_RANN: - mhwmp_dbg("sending RANN from %pM", orig_addr); + mhwmp_dbg(sdata, "sending RANN from %pM\n", orig_addr); ie_len = sizeof(struct ieee80211_rann_ie); pos = skb_put(skb, 2 + ie_len); *pos++ = WLAN_EID_RANN; @@ -303,7 +298,7 @@ int mesh_path_error_tx(u8 ttl, u8 *target, __le32 target_sn, } void ieee80211s_update_metric(struct ieee80211_local *local, - struct sta_info *stainfo, struct sk_buff *skb) + struct sta_info *sta, struct sk_buff *skb) { struct ieee80211_tx_info *txinfo = IEEE80211_SKB_CB(skb); struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; @@ -315,15 +310,14 @@ void ieee80211s_update_metric(struct ieee80211_local *local, failed = !(txinfo->flags & IEEE80211_TX_STAT_ACK); /* moving average, scaled to 100 */ - stainfo->fail_avg = ((80 * stainfo->fail_avg + 5) / 100 + 20 * failed); - if (stainfo->fail_avg > 95) - mesh_plink_broken(stainfo); + sta->fail_avg = ((80 * sta->fail_avg + 5) / 100 + 20 * failed); + if (sta->fail_avg > 95) + mesh_plink_broken(sta); } static u32 airtime_link_metric_get(struct ieee80211_local *local, struct sta_info *sta) { - struct ieee80211_supported_band *sband; struct rate_info rinfo; /* 
This should be adjusted for each device */ int device_constant = 1 << ARITH_SHIFT; @@ -333,8 +327,6 @@ static u32 airtime_link_metric_get(struct ieee80211_local *local, u32 tx_time, estimated_retx; u64 result; - sband = local->hw.wiphy->bands[local->hw.conf.channel->band]; - if (sta->fail_avg >= 100) return MAX_METRIC; @@ -519,10 +511,11 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, struct mesh_path *mpath = NULL; u8 *target_addr, *orig_addr; const u8 *da; - u8 target_flags, ttl; - u32 orig_sn, target_sn, lifetime; + u8 target_flags, ttl, flags; + u32 orig_sn, target_sn, lifetime, orig_metric; bool reply = false; bool forward = true; + bool root_is_gate; /* Update target SN, if present */ target_addr = PREQ_IE_TARGET_ADDR(preq_elem); @@ -530,11 +523,15 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, target_sn = PREQ_IE_TARGET_SN(preq_elem); orig_sn = PREQ_IE_ORIG_SN(preq_elem); target_flags = PREQ_IE_TARGET_F(preq_elem); + orig_metric = metric; + /* Proactive PREQ gate announcements */ + flags = PREQ_IE_FLAGS(preq_elem); + root_is_gate = !!(flags & RANN_FLAG_IS_GATE); - mhwmp_dbg("received PREQ from %pM", orig_addr); + mhwmp_dbg(sdata, "received PREQ from %pM\n", orig_addr); if (ether_addr_equal(target_addr, sdata->vif.addr)) { - mhwmp_dbg("PREQ is for us"); + mhwmp_dbg(sdata, "PREQ is for us\n"); forward = false; reply = true; metric = 0; @@ -544,6 +541,22 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, target_sn = ++ifmsh->sn; ifmsh->last_sn_update = jiffies; } + } else if (is_broadcast_ether_addr(target_addr) && + (target_flags & IEEE80211_PREQ_TO_FLAG)) { + rcu_read_lock(); + mpath = mesh_path_lookup(orig_addr, sdata); + if (mpath) { + if (flags & IEEE80211_PREQ_PROACTIVE_PREP_FLAG) { + reply = true; + target_addr = sdata->vif.addr; + target_sn = ++ifmsh->sn; + metric = 0; + ifmsh->last_sn_update = jiffies; + } + if (root_is_gate) + mesh_path_add_gate(mpath); + } + rcu_read_unlock(); } else { rcu_read_lock(); mpath = mesh_path_lookup(target_addr, sdata); @@ -570,19 +583,20 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, lifetime = PREQ_IE_LIFETIME(preq_elem); ttl = ifmsh->mshcfg.element_ttl; if (ttl != 0) { - mhwmp_dbg("replying to the PREQ"); + mhwmp_dbg(sdata, "replying to the PREQ\n"); mesh_path_sel_frame_tx(MPATH_PREP, 0, orig_addr, cpu_to_le32(orig_sn), 0, target_addr, cpu_to_le32(target_sn), mgmt->sa, 0, ttl, cpu_to_le32(lifetime), cpu_to_le32(metric), 0, sdata); - } else + } else { ifmsh->mshstats.dropped_frames_ttl++; + } } if (forward && ifmsh->mshcfg.dot11MeshForwarding) { u32 preq_id; - u8 hopcount, flags; + u8 hopcount; ttl = PREQ_IE_TTL(preq_elem); lifetime = PREQ_IE_LIFETIME(preq_elem); @@ -590,13 +604,19 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, ifmsh->mshstats.dropped_frames_ttl++; return; } - mhwmp_dbg("forwarding the PREQ from %pM", orig_addr); + mhwmp_dbg(sdata, "forwarding the PREQ from %pM\n", orig_addr); --ttl; - flags = PREQ_IE_FLAGS(preq_elem); preq_id = PREQ_IE_PREQ_ID(preq_elem); hopcount = PREQ_IE_HOPCOUNT(preq_elem) + 1; da = (mpath && mpath->is_root) ? 
mpath->rann_snd_addr : broadcast_addr; + + if (flags & IEEE80211_PREQ_PROACTIVE_PREP_FLAG) { + target_addr = PREQ_IE_TARGET_ADDR(preq_elem); + target_sn = PREQ_IE_TARGET_SN(preq_elem); + metric = orig_metric; + } + mesh_path_sel_frame_tx(MPATH_PREQ, flags, orig_addr, cpu_to_le32(orig_sn), target_flags, target_addr, cpu_to_le32(target_sn), da, @@ -631,7 +651,8 @@ static void hwmp_prep_frame_process(struct ieee80211_sub_if_data *sdata, u8 next_hop[ETH_ALEN]; u32 target_sn, orig_sn, lifetime; - mhwmp_dbg("received PREP from %pM", PREP_IE_ORIG_ADDR(prep_elem)); + mhwmp_dbg(sdata, "received PREP from %pM\n", + PREP_IE_ORIG_ADDR(prep_elem)); orig_addr = PREP_IE_ORIG_ADDR(prep_elem); if (ether_addr_equal(orig_addr, sdata->vif.addr)) @@ -744,11 +765,6 @@ static void hwmp_rann_frame_process(struct ieee80211_sub_if_data *sdata, bool root_is_gate; ttl = rann->rann_ttl; - if (ttl <= 1) { - ifmsh->mshstats.dropped_frames_ttl++; - return; - } - ttl--; flags = rann->rann_flags; root_is_gate = !!(flags & RANN_FLAG_IS_GATE); orig_addr = rann->rann_addr; @@ -762,8 +778,9 @@ static void hwmp_rann_frame_process(struct ieee80211_sub_if_data *sdata, if (ether_addr_equal(orig_addr, sdata->vif.addr)) return; - mhwmp_dbg("received RANN from %pM via neighbour %pM (is_gate=%d)", - orig_addr, mgmt->sa, root_is_gate); + mhwmp_dbg(sdata, + "received RANN from %pM via neighbour %pM (is_gate=%d)\n", + orig_addr, mgmt->sa, root_is_gate); rcu_read_lock(); sta = sta_info_get(sdata, mgmt->sa); @@ -785,34 +802,50 @@ static void hwmp_rann_frame_process(struct ieee80211_sub_if_data *sdata, } } + if (!(SN_LT(mpath->sn, orig_sn)) && + !(mpath->sn == orig_sn && metric < mpath->rann_metric)) { + rcu_read_unlock(); + return; + } + if ((!(mpath->flags & (MESH_PATH_ACTIVE | MESH_PATH_RESOLVING)) || - time_after(jiffies, mpath->exp_time - 1*HZ)) && - !(mpath->flags & MESH_PATH_FIXED)) { - mhwmp_dbg("%s time to refresh root mpath %pM", sdata->name, - orig_addr); + (time_after(jiffies, mpath->last_preq_to_root + + root_path_confirmation_jiffies(sdata)) || + time_before(jiffies, mpath->last_preq_to_root))) && + !(mpath->flags & MESH_PATH_FIXED) && (ttl != 0)) { + mhwmp_dbg(sdata, + "time to refresh root mpath %pM\n", + orig_addr); mesh_queue_preq(mpath, PREQ_Q_F_START | PREQ_Q_F_REFRESH); + mpath->last_preq_to_root = jiffies; } - if ((SN_LT(mpath->sn, orig_sn) || (mpath->sn == orig_sn && - metric < mpath->rann_metric)) && ifmsh->mshcfg.dot11MeshForwarding) { + mpath->sn = orig_sn; + mpath->rann_metric = metric + metric_txsta; + mpath->is_root = true; + /* Recording RANNs sender address to send individually + * addressed PREQs destined for root mesh STA */ + memcpy(mpath->rann_snd_addr, mgmt->sa, ETH_ALEN); + + if (root_is_gate) + mesh_path_add_gate(mpath); + + if (ttl <= 1) { + ifmsh->mshstats.dropped_frames_ttl++; + rcu_read_unlock(); + return; + } + ttl--; + + if (ifmsh->mshcfg.dot11MeshForwarding) { mesh_path_sel_frame_tx(MPATH_RANN, flags, orig_addr, cpu_to_le32(orig_sn), 0, NULL, 0, broadcast_addr, hopcount, ttl, cpu_to_le32(interval), cpu_to_le32(metric + metric_txsta), 0, sdata); - mpath->sn = orig_sn; - mpath->rann_metric = metric + metric_txsta; - /* Recording RANNs sender address to send individually - * addressed PREQs destined for root mesh STA */ - memcpy(mpath->rann_snd_addr, mgmt->sa, ETH_ALEN); } - mpath->is_root = true; - - if (root_is_gate) - mesh_path_add_gate(mpath); - rcu_read_unlock(); } @@ -889,7 +922,7 @@ static void mesh_queue_preq(struct mesh_path *mpath, u8 flags) preq_node = kmalloc(sizeof(struct 
mesh_preq_queue), GFP_ATOMIC); if (!preq_node) { - mhwmp_dbg("could not allocate PREQ node"); + mhwmp_dbg(sdata, "could not allocate PREQ node\n"); return; } @@ -898,7 +931,7 @@ static void mesh_queue_preq(struct mesh_path *mpath, u8 flags) spin_unlock_bh(&ifmsh->mesh_preq_queue_lock); kfree(preq_node); if (printk_ratelimit()) - mhwmp_dbg("PREQ node queue full"); + mhwmp_dbg(sdata, "PREQ node queue full\n"); return; } @@ -1021,12 +1054,15 @@ enddiscovery: kfree(preq_node); } -/* mesh_nexthop_resolve - lookup next hop for given skb and start path - * discovery if no forwarding information is found. +/** + * mesh_nexthop_resolve - lookup next hop; conditionally start path discovery * * @skb: 802.11 frame to be sent * @sdata: network subif the frame will be sent through * + * Lookup next hop for given skb and start path discovery if no + * forwarding information is found. + * * Returns: 0 if the next hop was found and -ENOENT if the frame was queued. * skb is freeed here if no mpath could be allocated. */ @@ -1146,7 +1182,7 @@ void mesh_path_timer(unsigned long data) if (!mpath->is_gate && mesh_gate_num(sdata) > 0) { ret = mesh_path_send_to_gates(mpath); if (ret) - mhwmp_dbg("no gate was reachable"); + mhwmp_dbg(sdata, "no gate was reachable\n"); } else mesh_path_flush_pending(mpath); } @@ -1157,13 +1193,34 @@ mesh_path_tx_root_frame(struct ieee80211_sub_if_data *sdata) { struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; u32 interval = ifmsh->mshcfg.dot11MeshHWMPRannInterval; - u8 flags; + u8 flags, target_flags = 0; flags = (ifmsh->mshcfg.dot11MeshGateAnnouncementProtocol) ? RANN_FLAG_IS_GATE : 0; - mesh_path_sel_frame_tx(MPATH_RANN, flags, sdata->vif.addr, + + switch (ifmsh->mshcfg.dot11MeshHWMPRootMode) { + case IEEE80211_PROACTIVE_RANN: + mesh_path_sel_frame_tx(MPATH_RANN, flags, sdata->vif.addr, cpu_to_le32(++ifmsh->sn), 0, NULL, 0, broadcast_addr, - 0, sdata->u.mesh.mshcfg.element_ttl, + 0, ifmsh->mshcfg.element_ttl, cpu_to_le32(interval), 0, 0, sdata); + break; + case IEEE80211_PROACTIVE_PREQ_WITH_PREP: + flags |= IEEE80211_PREQ_PROACTIVE_PREP_FLAG; + case IEEE80211_PROACTIVE_PREQ_NO_PREP: + interval = ifmsh->mshcfg.dot11MeshHWMPactivePathToRootTimeout; + target_flags |= IEEE80211_PREQ_TO_FLAG | + IEEE80211_PREQ_USN_FLAG; + mesh_path_sel_frame_tx(MPATH_PREQ, flags, sdata->vif.addr, + cpu_to_le32(++ifmsh->sn), target_flags, + (u8 *) broadcast_addr, 0, broadcast_addr, + 0, ifmsh->mshcfg.element_ttl, + cpu_to_le32(interval), + 0, cpu_to_le32(ifmsh->preq_id++), sdata); + break; + default: + mhwmp_dbg(sdata, "Proactive mechanism not supported\n"); + return; + } } diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c index b39224d8255c..075bc535c601 100644 --- a/net/mac80211/mesh_pathtbl.c +++ b/net/mac80211/mesh_pathtbl.c @@ -18,12 +18,6 @@ #include "ieee80211_i.h" #include "mesh.h" -#ifdef CONFIG_MAC80211_VERBOSE_MPATH_DEBUG -#define mpath_dbg(fmt, args...) printk(KERN_DEBUG fmt, ##args) -#else -#define mpath_dbg(fmt, args...) 
do { (void)(0); } while (0) -#endif - /* There will be initially 2^INIT_PATHS_SIZE_ORDER buckets */ #define INIT_PATHS_SIZE_ORDER 2 @@ -322,9 +316,8 @@ static void mesh_path_move_to_queue(struct mesh_path *gate_mpath, spin_lock_irqsave(&gate_mpath->frame_queue.lock, flags); skb_queue_splice(&gateq, &gate_mpath->frame_queue); - mpath_dbg("Mpath queue for gate %pM has %d frames\n", - gate_mpath->dst, - skb_queue_len(&gate_mpath->frame_queue)); + mpath_dbg(gate_mpath->sdata, "Mpath queue for gate %pM has %d frames\n", + gate_mpath->dst, skb_queue_len(&gate_mpath->frame_queue)); spin_unlock_irqrestore(&gate_mpath->frame_queue.lock, flags); if (!copy) @@ -446,9 +439,9 @@ int mesh_path_add_gate(struct mesh_path *mpath) hlist_add_head_rcu(&new_gate->list, tbl->known_gates); spin_unlock_bh(&tbl->gates_lock); rcu_read_unlock(); - mpath_dbg("Mesh path (%s): Recorded new gate: %pM. %d known gates\n", - mpath->sdata->name, mpath->dst, - mpath->sdata->u.mesh.num_gates); + mpath_dbg(mpath->sdata, + "Mesh path: Recorded new gate: %pM. %d known gates\n", + mpath->dst, mpath->sdata->u.mesh.num_gates); return 0; err_rcu: rcu_read_unlock(); @@ -477,8 +470,8 @@ static int mesh_gate_del(struct mesh_table *tbl, struct mesh_path *mpath) spin_unlock_bh(&tbl->gates_lock); mpath->sdata->u.mesh.num_gates--; mpath->is_gate = false; - mpath_dbg("Mesh path (%s): Deleted gate: %pM. " - "%d known gates\n", mpath->sdata->name, + mpath_dbg(mpath->sdata, + "Mesh path: Deleted gate: %pM. %d known gates\n", mpath->dst, mpath->sdata->u.mesh.num_gates); break; } @@ -785,7 +778,7 @@ static void __mesh_path_del(struct mesh_table *tbl, struct mpath_node *node) /** * mesh_path_flush_by_nexthop - Deletes mesh paths if their next hop matches * - * @sta - mesh peer to match + * @sta: mesh peer to match * * RCU notes: this function is called when a mesh plink transitions from * PLINK_ESTAB to any other state, since PLINK_ESTAB state is the only one that @@ -840,7 +833,7 @@ static void table_flush_by_iface(struct mesh_table *tbl, * * This function deletes both mesh paths as well as mesh portal paths. * - * @sdata - interface data to match + * @sdata: interface data to match * */ void mesh_path_flush_by_iface(struct ieee80211_sub_if_data *sdata) @@ -946,19 +939,20 @@ int mesh_path_send_to_gates(struct mesh_path *mpath) continue; if (gate->mpath->flags & MESH_PATH_ACTIVE) { - mpath_dbg("Forwarding to %pM\n", gate->mpath->dst); + mpath_dbg(sdata, "Forwarding to %pM\n", gate->mpath->dst); mesh_path_move_to_queue(gate->mpath, from_mpath, copy); from_mpath = gate->mpath; copy = true; } else { - mpath_dbg("Not forwarding %p\n", gate->mpath); - mpath_dbg("flags %x\n", gate->mpath->flags); + mpath_dbg(sdata, + "Not forwarding %p (flags %#x)\n", + gate->mpath, gate->mpath->flags); } } hlist_for_each_entry_rcu(gate, n, known_gates, list) if (gate->mpath->sdata == sdata) { - mpath_dbg("Sending to %pM\n", gate->mpath->dst); + mpath_dbg(sdata, "Sending to %pM\n", gate->mpath->dst); mesh_path_tx_pending(gate->mpath); } diff --git a/net/mac80211/mesh_plink.c b/net/mac80211/mesh_plink.c index 60ef235c9d9b..af671b984df3 100644 --- a/net/mac80211/mesh_plink.c +++ b/net/mac80211/mesh_plink.c @@ -13,12 +13,6 @@ #include "rate.h" #include "mesh.h" -#ifdef CONFIG_MAC80211_VERBOSE_MPL_DEBUG -#define mpl_dbg(fmt, args...) printk(KERN_DEBUG fmt, ##args) -#else -#define mpl_dbg(fmt, args...) 
do { (void)(0); } while (0) -#endif - #define PLINK_GET_LLID(p) (p + 2) #define PLINK_GET_PLID(p) (p + 4) @@ -105,7 +99,7 @@ static struct sta_info *mesh_plink_alloc(struct ieee80211_sub_if_data *sdata, return sta; } -/* +/** * mesh_set_ht_prot_mode - set correct HT protection mode * * Section 9.23.3.5 of IEEE 80211-2012 describes the protection rules for HT @@ -134,12 +128,14 @@ static u32 mesh_set_ht_prot_mode(struct ieee80211_sub_if_data *sdata) switch (sta->ch_type) { case NL80211_CHAN_NO_HT: - mpl_dbg("mesh_plink %pM: nonHT sta (%pM) is present", + mpl_dbg(sdata, + "mesh_plink %pM: nonHT sta (%pM) is present\n", sdata->vif.addr, sta->sta.addr); non_ht_sta = true; goto out; case NL80211_CHAN_HT20: - mpl_dbg("mesh_plink %pM: HT20 sta (%pM) is present", + mpl_dbg(sdata, + "mesh_plink %pM: HT20 sta (%pM) is present\n", sdata->vif.addr, sta->sta.addr); ht20_sta = true; default: @@ -160,7 +156,8 @@ out: sdata->vif.bss_conf.ht_operation_mode = ht_opmode; sdata->u.mesh.mshcfg.ht_opmode = ht_opmode; changed = BSS_CHANGED_HT; - mpl_dbg("mesh_plink %pM: protection mode changed to %d", + mpl_dbg(sdata, + "mesh_plink %pM: protection mode changed to %d\n", sdata->vif.addr, ht_opmode); } @@ -261,8 +258,8 @@ static int mesh_plink_frame_tx(struct ieee80211_sub_if_data *sdata, pos = skb_put(skb, 2); memcpy(pos + 2, &plid, 2); } - if (ieee80211_add_srates_ie(&sdata->vif, skb, true) || - ieee80211_add_ext_srates_ie(&sdata->vif, skb, true) || + if (ieee80211_add_srates_ie(sdata, skb, true) || + ieee80211_add_ext_srates_ie(sdata, skb, true) || mesh_add_rsn_ie(skb, sdata) || mesh_add_meshid_ie(skb, sdata) || mesh_add_meshconf_ie(skb, sdata)) @@ -323,7 +320,8 @@ static int mesh_plink_frame_tx(struct ieee80211_sub_if_data *sdata, return 0; } -/* mesh_peer_init - initialize new mesh peer and return resulting sta_info +/** + * mesh_peer_init - initialize new mesh peer and return resulting sta_info * * @sdata: local meshif * @addr: peer's address @@ -437,7 +435,8 @@ static void mesh_plink_timer(unsigned long data) spin_unlock_bh(&sta->lock); return; } - mpl_dbg("Mesh plink timer for %pM fired on state %d\n", + mpl_dbg(sta->sdata, + "Mesh plink timer for %pM fired on state %d\n", sta->sta.addr, sta->plink_state); reason = 0; llid = sta->llid; @@ -450,7 +449,8 @@ static void mesh_plink_timer(unsigned long data) /* retry timer */ if (sta->plink_retries < dot11MeshMaxRetries(sdata)) { u32 rand; - mpl_dbg("Mesh plink for %pM (retry, timeout): %d %d\n", + mpl_dbg(sta->sdata, + "Mesh plink for %pM (retry, timeout): %d %d\n", sta->sta.addr, sta->plink_retries, sta->plink_timeout); get_random_bytes(&rand, sizeof(u32)); @@ -530,7 +530,8 @@ int mesh_plink_open(struct sta_info *sta) sta->plink_state = NL80211_PLINK_OPN_SNT; mesh_plink_timer_set(sta, dot11MeshRetryTimeout(sdata)); spin_unlock_bh(&sta->lock); - mpl_dbg("Mesh plink: starting establishment with %pM\n", + mpl_dbg(sdata, + "Mesh plink: starting establishment with %pM\n", sta->sta.addr); return mesh_plink_frame_tx(sdata, WLAN_SP_MESH_PEERING_OPEN, @@ -565,7 +566,6 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m u8 *baseaddr; u32 changed = 0; __le16 plid, llid, reason; -#ifdef CONFIG_MAC80211_VERBOSE_MPL_DEBUG static const char *mplstates[] = { [NL80211_PLINK_LISTEN] = "LISTEN", [NL80211_PLINK_OPN_SNT] = "OPN-SNT", @@ -575,14 +575,14 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m [NL80211_PLINK_HOLDING] = "HOLDING", [NL80211_PLINK_BLOCKED] = "BLOCKED" }; -#endif /* need action_code, aux */ if 
(len < IEEE80211_MIN_ACTION_SIZE + 3) return; if (is_multicast_ether_addr(mgmt->da)) { - mpl_dbg("Mesh plink: ignore frame from multicast address"); + mpl_dbg(sdata, + "Mesh plink: ignore frame from multicast address\n"); return; } @@ -595,12 +595,14 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m } ieee802_11_parse_elems(baseaddr, len - baselen, &elems); if (!elems.peering) { - mpl_dbg("Mesh plink: missing necessary peer link ie\n"); + mpl_dbg(sdata, + "Mesh plink: missing necessary peer link ie\n"); return; } if (elems.rsn_len && sdata->u.mesh.security == IEEE80211_MESH_SEC_NONE) { - mpl_dbg("Mesh plink: can't establish link with secure peer\n"); + mpl_dbg(sdata, + "Mesh plink: can't establish link with secure peer\n"); return; } @@ -610,14 +612,15 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m (ftype == WLAN_SP_MESH_PEERING_CONFIRM && ie_len != 6) || (ftype == WLAN_SP_MESH_PEERING_CLOSE && ie_len != 6 && ie_len != 8)) { - mpl_dbg("Mesh plink: incorrect plink ie length %d %d\n", - ftype, ie_len); + mpl_dbg(sdata, + "Mesh plink: incorrect plink ie length %d %d\n", + ftype, ie_len); return; } if (ftype != WLAN_SP_MESH_PEERING_CLOSE && (!elems.mesh_id || !elems.mesh_config)) { - mpl_dbg("Mesh plink: missing necessary ie\n"); + mpl_dbg(sdata, "Mesh plink: missing necessary ie\n"); return; } /* Note the lines below are correct, the llid in the frame is the plid @@ -632,21 +635,21 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m sta = sta_info_get(sdata, mgmt->sa); if (!sta && ftype != WLAN_SP_MESH_PEERING_OPEN) { - mpl_dbg("Mesh plink: cls or cnf from unknown peer\n"); + mpl_dbg(sdata, "Mesh plink: cls or cnf from unknown peer\n"); rcu_read_unlock(); return; } if (ftype == WLAN_SP_MESH_PEERING_OPEN && !rssi_threshold_check(sta, sdata)) { - mpl_dbg("Mesh plink: %pM does not meet rssi threshold\n", + mpl_dbg(sdata, "Mesh plink: %pM does not meet rssi threshold\n", mgmt->sa); rcu_read_unlock(); return; } if (sta && !test_sta_flag(sta, WLAN_STA_AUTH)) { - mpl_dbg("Mesh plink: Action frame from non-authed peer\n"); + mpl_dbg(sdata, "Mesh plink: Action frame from non-authed peer\n"); rcu_read_unlock(); return; } @@ -683,7 +686,7 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m } else if (!sta) { /* ftype == WLAN_SP_MESH_PEERING_OPEN */ if (!mesh_plink_free_count(sdata)) { - mpl_dbg("Mesh plink error: no more free plinks\n"); + mpl_dbg(sdata, "Mesh plink error: no more free plinks\n"); rcu_read_unlock(); return; } @@ -724,7 +727,7 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m event = CLS_ACPT; break; default: - mpl_dbg("Mesh plink: unknown frame subtype\n"); + mpl_dbg(sdata, "Mesh plink: unknown frame subtype\n"); rcu_read_unlock(); return; } @@ -734,13 +737,14 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m /* allocate sta entry if necessary and update info */ sta = mesh_peer_init(sdata, mgmt->sa, &elems); if (!sta) { - mpl_dbg("Mesh plink: failed to init peer!\n"); + mpl_dbg(sdata, "Mesh plink: failed to init peer!\n"); rcu_read_unlock(); return; } } - mpl_dbg("Mesh plink (peer, state, llid, plid, event): %pM %s %d %d %d\n", + mpl_dbg(sdata, + "Mesh plink (peer, state, llid, plid, event): %pM %s %d %d %d\n", mgmt->sa, mplstates[sta->plink_state], le16_to_cpu(sta->llid), le16_to_cpu(sta->plid), event); @@ -851,7 +855,7 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, 
struct ieee80211_m mesh_plink_inc_estab_count(sdata); changed |= mesh_set_ht_prot_mode(sdata); changed |= BSS_CHANGED_BEACON; - mpl_dbg("Mesh plink with %pM ESTABLISHED\n", + mpl_dbg(sdata, "Mesh plink with %pM ESTABLISHED\n", sta->sta.addr); break; default: @@ -887,7 +891,7 @@ void mesh_rx_plink_frame(struct ieee80211_sub_if_data *sdata, struct ieee80211_m mesh_plink_inc_estab_count(sdata); changed |= mesh_set_ht_prot_mode(sdata); changed |= BSS_CHANGED_BEACON; - mpl_dbg("Mesh plink with %pM ESTABLISHED\n", + mpl_dbg(sdata, "Mesh plink with %pM ESTABLISHED\n", sta->sta.addr); mesh_plink_frame_tx(sdata, WLAN_SP_MESH_PEERING_CONFIRM, diff --git a/net/mac80211/mesh_sync.c b/net/mac80211/mesh_sync.c index 38d30e8ce6dc..accfa00ffcdf 100644 --- a/net/mac80211/mesh_sync.c +++ b/net/mac80211/mesh_sync.c @@ -12,13 +12,6 @@ #include "mesh.h" #include "driver-ops.h" -#ifdef CONFIG_MAC80211_VERBOSE_MESH_SYNC_DEBUG -#define msync_dbg(fmt, args...) \ - printk(KERN_DEBUG "Mesh sync (%s): " fmt "\n", sdata->name, ##args) -#else -#define msync_dbg(fmt, args...) do { (void)(0); } while (0) -#endif - /* This is not in the standard. It represents a tolerable tbtt drift below * which we do no TSF adjustment. */ @@ -65,14 +58,14 @@ void mesh_sync_adjust_tbtt(struct ieee80211_sub_if_data *sdata) spin_lock_bh(&ifmsh->sync_offset_lock); if (ifmsh->sync_offset_clockdrift_max < beacon_int_fraction) { - msync_dbg("TBTT : max clockdrift=%lld; adjusting", - (long long) ifmsh->sync_offset_clockdrift_max); + msync_dbg(sdata, "TBTT : max clockdrift=%lld; adjusting\n", + (long long) ifmsh->sync_offset_clockdrift_max); tsfdelta = -ifmsh->sync_offset_clockdrift_max; ifmsh->sync_offset_clockdrift_max = 0; } else { - msync_dbg("TBTT : max clockdrift=%lld; adjusting by %llu", - (long long) ifmsh->sync_offset_clockdrift_max, - (unsigned long long) beacon_int_fraction); + msync_dbg(sdata, "TBTT : max clockdrift=%lld; adjusting by %llu\n", + (long long) ifmsh->sync_offset_clockdrift_max, + (unsigned long long) beacon_int_fraction); tsfdelta = -beacon_int_fraction; ifmsh->sync_offset_clockdrift_max -= beacon_int_fraction; } @@ -120,7 +113,7 @@ static void mesh_sync_offset_rx_bcn_presp(struct ieee80211_sub_if_data *sdata, if (elems->mesh_config && mesh_peer_tbtt_adjusting(elems)) { clear_sta_flag(sta, WLAN_STA_TOFFSET_KNOWN); - msync_dbg("STA %pM : is adjusting TBTT", sta->sta.addr); + msync_dbg(sdata, "STA %pM : is adjusting TBTT\n", sta->sta.addr); goto no_sync; } @@ -169,7 +162,8 @@ static void mesh_sync_offset_rx_bcn_presp(struct ieee80211_sub_if_data *sdata, if (test_sta_flag(sta, WLAN_STA_TOFFSET_KNOWN)) { s64 t_clockdrift = sta->t_offset_setpoint - sta->t_offset; - msync_dbg("STA %pM : sta->t_offset=%lld, sta->t_offset_setpoint=%lld, t_clockdrift=%lld", + msync_dbg(sdata, + "STA %pM : sta->t_offset=%lld, sta->t_offset_setpoint=%lld, t_clockdrift=%lld\n", sta->sta.addr, (long long) sta->t_offset, (long long) @@ -178,7 +172,8 @@ static void mesh_sync_offset_rx_bcn_presp(struct ieee80211_sub_if_data *sdata, if (t_clockdrift > TOFFSET_MAXIMUM_ADJUSTMENT || t_clockdrift < -TOFFSET_MAXIMUM_ADJUSTMENT) { - msync_dbg("STA %pM : t_clockdrift=%lld too large, setpoint reset", + msync_dbg(sdata, + "STA %pM : t_clockdrift=%lld too large, setpoint reset\n", sta->sta.addr, (long long) t_clockdrift); clear_sta_flag(sta, WLAN_STA_TOFFSET_KNOWN); @@ -197,8 +192,8 @@ static void mesh_sync_offset_rx_bcn_presp(struct ieee80211_sub_if_data *sdata, } else { sta->t_offset_setpoint = sta->t_offset - TOFFSET_SET_MARGIN; set_sta_flag(sta, 
WLAN_STA_TOFFSET_KNOWN); - msync_dbg("STA %pM : offset was invalid, " - " sta->t_offset=%lld", + msync_dbg(sdata, + "STA %pM : offset was invalid, sta->t_offset=%lld\n", sta->sta.addr, (long long) sta->t_offset); rcu_read_unlock(); @@ -226,17 +221,15 @@ static void mesh_sync_offset_adjust_tbtt(struct ieee80211_sub_if_data *sdata) * to the driver tsf setter, we punt * the tsf adjustment to the mesh tasklet */ - msync_dbg("TBTT : kicking off TBTT " - "adjustment with " - "clockdrift_max=%lld", - ifmsh->sync_offset_clockdrift_max); + msync_dbg(sdata, + "TBTT : kicking off TBTT adjustment with clockdrift_max=%lld\n", + ifmsh->sync_offset_clockdrift_max); set_bit(MESH_WORK_DRIFT_ADJUST, &ifmsh->wrkq_flags); } else { - msync_dbg("TBTT : max clockdrift=%lld; " - "too small to adjust", - (long long) - ifmsh->sync_offset_clockdrift_max); + msync_dbg(sdata, + "TBTT : max clockdrift=%lld; too small to adjust\n", + (long long)ifmsh->sync_offset_clockdrift_max); ifmsh->sync_offset_clockdrift_max = 0; } spin_unlock_bh(&ifmsh->sync_offset_lock); @@ -268,7 +261,7 @@ static void mesh_sync_vendor_rx_bcn_presp(struct ieee80211_sub_if_data *sdata, const u8 *oui; WARN_ON(sdata->u.mesh.mesh_sp_id != IEEE80211_SYNC_METHOD_VENDOR); - msync_dbg("called mesh_sync_vendor_rx_bcn_presp"); + msync_dbg(sdata, "called mesh_sync_vendor_rx_bcn_presp\n"); oui = mesh_get_vendor_oui(sdata); /* here you would implement the vendor offset tracking for this oui */ } @@ -278,7 +271,7 @@ static void mesh_sync_vendor_adjust_tbtt(struct ieee80211_sub_if_data *sdata) const u8 *oui; WARN_ON(sdata->u.mesh.mesh_sp_id != IEEE80211_SYNC_METHOD_VENDOR); - msync_dbg("called mesh_sync_vendor_adjust_tbtt"); + msync_dbg(sdata, "called mesh_sync_vendor_adjust_tbtt\n"); oui = mesh_get_vendor_oui(sdata); /* here you would implement the vendor tsf adjustment for this oui */ } diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 0db5d34a06b6..cef0c9e79aba 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -258,12 +258,11 @@ static int ieee80211_compatible_rates(const u8 *supp_rates, int supp_rates_len, } static void ieee80211_add_ht_ie(struct ieee80211_sub_if_data *sdata, - struct sk_buff *skb, const u8 *ht_oper_ie, + struct sk_buff *skb, u8 ap_ht_param, struct ieee80211_supported_band *sband, struct ieee80211_channel *channel, enum ieee80211_smps_mode smps) { - struct ieee80211_ht_operation *ht_oper; u8 *pos; u32 flags = channel->flags; u16 cap; @@ -271,21 +270,13 @@ static void ieee80211_add_ht_ie(struct ieee80211_sub_if_data *sdata, BUILD_BUG_ON(sizeof(ht_cap) != sizeof(sband->ht_cap)); - if (!ht_oper_ie) - return; - - if (ht_oper_ie[1] < sizeof(struct ieee80211_ht_operation)) - return; - memcpy(&ht_cap, &sband->ht_cap, sizeof(ht_cap)); ieee80211_apply_htcap_overrides(sdata, &ht_cap); - ht_oper = (struct ieee80211_ht_operation *)(ht_oper_ie + 2); - /* determine capability flags */ cap = ht_cap.cap; - switch (ht_oper->ht_param & IEEE80211_HT_PARAM_CHA_SEC_OFFSET) { + switch (ap_ht_param & IEEE80211_HT_PARAM_CHA_SEC_OFFSET) { case IEEE80211_HT_PARAM_CHA_SEC_ABOVE: if (flags & IEEE80211_CHAN_NO_HT40PLUS) { cap &= ~IEEE80211_HT_CAP_SUP_WIDTH_20_40; @@ -509,7 +500,7 @@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) } if (!(ifmgd->flags & IEEE80211_STA_DISABLE_11N)) - ieee80211_add_ht_ie(sdata, skb, assoc_data->ht_operation_ie, + ieee80211_add_ht_ie(sdata, skb, assoc_data->ap_ht_param, sband, local->oper_channel, ifmgd->ap_smps); /* if present, add any custom non-vendor IEs that go after HT */ @@ -550,6 +541,8 
@@ static void ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata) memcpy(pos, assoc_data->ie + offset, noffset - offset); } + drv_mgd_prepare_tx(local, sdata); + IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; ieee80211_tx_skb(sdata, skb); } @@ -589,6 +582,9 @@ static void ieee80211_send_deauth_disassoc(struct ieee80211_sub_if_data *sdata, if (!(ifmgd->flags & IEEE80211_STA_MFP_ENABLED)) IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; + + drv_mgd_prepare_tx(local, sdata); + ieee80211_tx_skb(sdata, skb); } } @@ -911,9 +907,6 @@ static bool ieee80211_powersave_allowed(struct ieee80211_sub_if_data *sdata) if (!mgd->associated) return false; - if (!mgd->associated->beacon_ies) - return false; - if (mgd->flags & (IEEE80211_STA_BEACON_POLL | IEEE80211_STA_CONNECTION_POLL)) return false; @@ -939,11 +932,6 @@ void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency) return; } - if (!list_empty(&local->work_list)) { - local->ps_sdata = NULL; - goto change; - } - list_for_each_entry(sdata, &local->interfaces, list) { if (!ieee80211_sdata_running(sdata)) continue; @@ -1016,7 +1004,6 @@ void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency) local->ps_sdata = NULL; } - change: ieee80211_change_ps(local); } @@ -1121,7 +1108,7 @@ void ieee80211_dynamic_ps_timer(unsigned long data) } /* MLME */ -static void ieee80211_sta_wmm_params(struct ieee80211_local *local, +static bool ieee80211_sta_wmm_params(struct ieee80211_local *local, struct ieee80211_sub_if_data *sdata, u8 *wmm_param, size_t wmm_param_len) { @@ -1132,23 +1119,23 @@ static void ieee80211_sta_wmm_params(struct ieee80211_local *local, u8 *pos, uapsd_queues = 0; if (!local->ops->conf_tx) - return; + return false; if (local->hw.queues < IEEE80211_NUM_ACS) - return; + return false; if (!wmm_param) - return; + return false; if (wmm_param_len < 8 || wmm_param[5] /* version */ != 1) - return; + return false; if (ifmgd->flags & IEEE80211_STA_UAPSD_ENABLED) uapsd_queues = ifmgd->uapsd_queues; count = wmm_param[6] & 0x0f; if (count == ifmgd->wmm_last_param_set) - return; + return false; ifmgd->wmm_last_param_set = count; pos = wmm_param + 8; @@ -1156,7 +1143,7 @@ static void ieee80211_sta_wmm_params(struct ieee80211_local *local, memset(¶ms, 0, sizeof(params)); - local->wmm_acm = 0; + sdata->wmm_acm = 0; for (; left >= 4; left -= 4, pos += 4) { int aci = (pos[0] >> 5) & 0x03; int acm = (pos[0] >> 4) & 0x01; @@ -1167,21 +1154,21 @@ static void ieee80211_sta_wmm_params(struct ieee80211_local *local, case 1: /* AC_BK */ queue = 3; if (acm) - local->wmm_acm |= BIT(1) | BIT(2); /* BK/- */ + sdata->wmm_acm |= BIT(1) | BIT(2); /* BK/- */ if (uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BK) uapsd = true; break; case 2: /* AC_VI */ queue = 1; if (acm) - local->wmm_acm |= BIT(4) | BIT(5); /* CL/VI */ + sdata->wmm_acm |= BIT(4) | BIT(5); /* CL/VI */ if (uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VI) uapsd = true; break; case 3: /* AC_VO */ queue = 0; if (acm) - local->wmm_acm |= BIT(6) | BIT(7); /* VO/NC */ + sdata->wmm_acm |= BIT(6) | BIT(7); /* VO/NC */ if (uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VO) uapsd = true; break; @@ -1189,7 +1176,7 @@ static void ieee80211_sta_wmm_params(struct ieee80211_local *local, default: queue = 2; if (acm) - local->wmm_acm |= BIT(0) | BIT(3); /* BE/EE */ + sdata->wmm_acm |= BIT(0) | BIT(3); /* BE/EE */ if (uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BE) uapsd = true; break; @@ -1201,23 +1188,21 @@ static void ieee80211_sta_wmm_params(struct 
ieee80211_local *local, params.txop = get_unaligned_le16(pos + 2); params.uapsd = uapsd; -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, - "WMM queue=%d aci=%d acm=%d aifs=%d " - "cWmin=%d cWmax=%d txop=%d uapsd=%d\n", - queue, aci, acm, - params.aifs, params.cw_min, params.cw_max, - params.txop, params.uapsd); -#endif + mlme_dbg(sdata, + "WMM queue=%d aci=%d acm=%d aifs=%d cWmin=%d cWmax=%d txop=%d uapsd=%d\n", + queue, aci, acm, + params.aifs, params.cw_min, params.cw_max, + params.txop, params.uapsd); sdata->tx_conf[queue] = params; if (drv_conf_tx(local, sdata, queue, ¶ms)) - wiphy_debug(local->hw.wiphy, - "failed to set TX queue parameters for queue %d\n", - queue); + sdata_err(sdata, + "failed to set TX queue parameters for queue %d\n", + queue); } /* enable WMM or activate new settings */ sdata->vif.bss_conf.qos = true; + return true; } static void __ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata) @@ -1284,13 +1269,8 @@ static void ieee80211_set_associated(struct ieee80211_sub_if_data *sdata, struct ieee80211_bss_conf *bss_conf = &sdata->vif.bss_conf; bss_info_changed |= BSS_CHANGED_ASSOC; - /* set timing information */ - bss_conf->beacon_int = cbss->beacon_interval; - bss_conf->last_tsf = cbss->tsf; - - bss_info_changed |= BSS_CHANGED_BEACON_INT; bss_info_changed |= ieee80211_handle_bss_capability(sdata, - cbss->capability, bss->has_erp_value, bss->erp_value); + bss_conf->assoc_capability, bss->has_erp_value, bss->erp_value); sdata->u.mgd.beacon_timeout = usecs_to_jiffies(ieee80211_tu_to_usec( IEEE80211_BEACON_LOSS_COUNT * bss_conf->beacon_int)); @@ -1380,6 +1360,21 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata, } mutex_unlock(&local->sta_mtx); + /* + * if we want to get out of ps before disassoc (why?) we have + * to do it before sending disassoc, as otherwise the null-packet + * won't be valid. + */ + if (local->hw.conf.flags & IEEE80211_CONF_PS) { + local->hw.conf.flags &= ~IEEE80211_CONF_PS; + ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS); + } + local->ps_sdata = NULL; + + /* flush out any pending frame (e.g. 
DELBA) before deauth/disassoc */ + if (tx) + drv_flush(local, false); + /* deauthenticate/disassociate now */ if (tx || frame_buf) ieee80211_send_deauth_disassoc(sdata, ifmgd->bssid, stype, @@ -1411,12 +1406,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata, del_timer_sync(&local->dynamic_ps_timer); cancel_work_sync(&local->dynamic_ps_enable_work); - if (local->hw.conf.flags & IEEE80211_CONF_PS) { - local->hw.conf.flags &= ~IEEE80211_CONF_PS; - ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS); - } - local->ps_sdata = NULL; - /* Disable ARP filtering */ if (sdata->vif.bss_conf.arp_filter_enabled) { sdata->vif.bss_conf.arp_filter_enabled = false; @@ -1581,11 +1570,12 @@ static void ieee80211_mgd_probe_ap(struct ieee80211_sub_if_data *sdata, goto out; } -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG if (beacon) - net_dbg_ratelimited("%s: detected beacon loss from AP - sending probe request\n", - sdata->name); -#endif + mlme_dbg_ratelimited(sdata, + "detected beacon loss from AP - sending probe request\n"); + + ieee80211_cqm_rssi_notify(&sdata->vif, + NL80211_CQM_RSSI_BEACON_LOSS_EVENT, GFP_KERNEL); /* * The driver/our work has already reported this event or the @@ -1627,6 +1617,7 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw, { struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif); struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; + struct cfg80211_bss *cbss; struct sk_buff *skb; const u8 *ssid; int ssid_len; @@ -1636,16 +1627,22 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw, ASSERT_MGD_MTX(ifmgd); - if (!ifmgd->associated) + if (ifmgd->associated) + cbss = ifmgd->associated; + else if (ifmgd->auth_data) + cbss = ifmgd->auth_data->bss; + else if (ifmgd->assoc_data) + cbss = ifmgd->assoc_data->bss; + else return NULL; - ssid = ieee80211_bss_get_ie(ifmgd->associated, WLAN_EID_SSID); + ssid = ieee80211_bss_get_ie(cbss, WLAN_EID_SSID); if (WARN_ON_ONCE(ssid == NULL)) ssid_len = 0; else ssid_len = ssid[1]; - skb = ieee80211_build_probe_req(sdata, ifmgd->associated->bssid, + skb = ieee80211_build_probe_req(sdata, cbss->bssid, (u32) -1, ssid + 2, ssid_len, NULL, 0, true); @@ -1668,8 +1665,7 @@ static void __ieee80211_connection_loss(struct ieee80211_sub_if_data *sdata) memcpy(bssid, ifmgd->associated->bssid, ETH_ALEN); - printk(KERN_DEBUG "%s: Connection to AP %pM lost.\n", - sdata->name, bssid); + sdata_info(sdata, "Connection to AP %pM lost\n", bssid); ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY, @@ -1765,6 +1761,7 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata, if (!elems.challenge) return; auth_data->expected_transaction = 4; + drv_mgd_prepare_tx(sdata->local, sdata); ieee80211_send_auth(sdata, 3, auth_data->algorithm, elems.challenge - 2, elems.challenge_len + 2, auth_data->bss->bssid, auth_data->bss->bssid, @@ -1803,9 +1800,10 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata, return RX_MGMT_NONE; if (status_code != WLAN_STATUS_SUCCESS) { - printk(KERN_DEBUG "%s: %pM denied authentication (status %d)\n", - sdata->name, mgmt->sa, status_code); - goto out; + sdata_info(sdata, "%pM denied authentication (status %d)\n", + mgmt->sa, status_code); + ieee80211_destroy_auth_data(sdata, false); + return RX_MGMT_CFG80211_RX_AUTH; } switch (ifmgd->auth_data->algorithm) { @@ -1826,8 +1824,7 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata, return RX_MGMT_NONE; } - printk(KERN_DEBUG "%s: authenticated\n", sdata->name); - out: + 
sdata_info(sdata, "authenticated\n"); ifmgd->auth_data->done = true; ifmgd->auth_data->timeout = jiffies + IEEE80211_AUTH_WAIT_ASSOC; run_again(ifmgd, ifmgd->auth_data->timeout); @@ -1840,8 +1837,7 @@ ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata, goto out_err; } if (sta_info_move_state(sta, IEEE80211_STA_AUTH)) { - printk(KERN_DEBUG "%s: failed moving %pM to auth\n", - sdata->name, bssid); + sdata_info(sdata, "failed moving %pM to auth\n", bssid); goto out_err; } mutex_unlock(&sdata->local->sta_mtx); @@ -1875,8 +1871,8 @@ ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata, reason_code = le16_to_cpu(mgmt->u.deauth.reason_code); - printk(KERN_DEBUG "%s: deauthenticated from %pM (Reason: %u)\n", - sdata->name, bssid, reason_code); + sdata_info(sdata, "deauthenticated from %pM (Reason: %u)\n", + bssid, reason_code); ieee80211_set_disassoc(sdata, 0, 0, false, NULL); @@ -1906,8 +1902,8 @@ ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata, reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code); - printk(KERN_DEBUG "%s: disassociated from %pM (Reason: %u)\n", - sdata->name, mgmt->sa, reason_code); + sdata_info(sdata, "disassociated from %pM (Reason: %u)\n", + mgmt->sa, reason_code); ieee80211_set_disassoc(sdata, 0, 0, false, NULL); @@ -1999,17 +1995,15 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, capab_info = le16_to_cpu(mgmt->u.assoc_resp.capab_info); if ((aid & (BIT(15) | BIT(14))) != (BIT(15) | BIT(14))) - printk(KERN_DEBUG - "%s: invalid AID value 0x%x; bits 15:14 not set\n", - sdata->name, aid); + sdata_info(sdata, "invalid AID value 0x%x; bits 15:14 not set\n", + aid); aid &= ~(BIT(15) | BIT(14)); ifmgd->broken_ap = false; if (aid == 0 || aid > IEEE80211_MAX_AID) { - printk(KERN_DEBUG - "%s: invalid AID value %d (out of range), turn off PS\n", - sdata->name, aid); + sdata_info(sdata, "invalid AID value %d (out of range), turn off PS\n", + aid); aid = 0; ifmgd->broken_ap = true; } @@ -2018,8 +2012,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, ieee802_11_parse_elems(pos, len - (pos - (u8 *) mgmt), &elems); if (!elems.supp_rates) { - printk(KERN_DEBUG "%s: no SuppRates element in AssocResp\n", - sdata->name); + sdata_info(sdata, "no SuppRates element in AssocResp\n"); return false; } @@ -2059,9 +2052,9 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, if (!err && !(ifmgd->flags & IEEE80211_STA_CONTROL_PORT)) err = sta_info_move_state(sta, IEEE80211_STA_AUTHORIZED); if (err) { - printk(KERN_DEBUG - "%s: failed to move station %pM to desired state\n", - sdata->name, sta->sta.addr); + sdata_info(sdata, + "failed to move station %pM to desired state\n", + sta->sta.addr); WARN_ON(__sta_info_destroy(sta)); mutex_unlock(&sdata->local->sta_mtx); return false; @@ -2144,10 +2137,10 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata, status_code = le16_to_cpu(mgmt->u.assoc_resp.status_code); aid = le16_to_cpu(mgmt->u.assoc_resp.aid); - printk(KERN_DEBUG "%s: RX %sssocResp from %pM (capab=0x%x " - "status=%d aid=%d)\n", - sdata->name, reassoc ? "Rea" : "A", mgmt->sa, - capab_info, status_code, (u16)(aid & ~(BIT(15) | BIT(14)))); + sdata_info(sdata, + "RX %sssocResp from %pM (capab=0x%x status=%d aid=%d)\n", + reassoc ? 
"Rea" : "A", mgmt->sa, + capab_info, status_code, (u16)(aid & ~(BIT(15) | BIT(14)))); pos = mgmt->u.assoc_resp.variable; ieee802_11_parse_elems(pos, len - (pos - (u8 *) mgmt), &elems); @@ -2158,9 +2151,9 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata, u32 tu, ms; tu = get_unaligned_le32(elems.timeout_int + 1); ms = tu * 1024 / 1000; - printk(KERN_DEBUG "%s: %pM rejected association temporarily; " - "comeback duration %u TU (%u ms)\n", - sdata->name, mgmt->sa, tu, ms); + sdata_info(sdata, + "%pM rejected association temporarily; comeback duration %u TU (%u ms)\n", + mgmt->sa, tu, ms); assoc_data->timeout = jiffies + msecs_to_jiffies(ms); if (ms > IEEE80211_ASSOC_TIMEOUT) run_again(ifmgd, assoc_data->timeout); @@ -2170,8 +2163,8 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata, *bss = assoc_data->bss; if (status_code != WLAN_STATUS_SUCCESS) { - printk(KERN_DEBUG "%s: %pM denied association (code=%d)\n", - sdata->name, mgmt->sa, status_code); + sdata_info(sdata, "%pM denied association (code=%d)\n", + mgmt->sa, status_code); ieee80211_destroy_assoc_data(sdata, false); } else { if (!ieee80211_assoc_success(sdata, *bss, mgmt, len)) { @@ -2180,7 +2173,7 @@ ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata, cfg80211_put_bss(*bss); return RX_MGMT_CFG80211_ASSOC_TIMEOUT; } - printk(KERN_DEBUG "%s: associated\n", sdata->name); + sdata_info(sdata, "associated\n"); /* * destroy assoc_data afterwards, as otherwise an idle @@ -2280,7 +2273,7 @@ static void ieee80211_rx_mgmt_probe_resp(struct ieee80211_sub_if_data *sdata, if (ifmgd->auth_data && !ifmgd->auth_data->bss->proberesp_ies && ether_addr_equal(mgmt->bssid, ifmgd->auth_data->bss->bssid)) { /* got probe response, continue with auth */ - printk(KERN_DEBUG "%s: direct probe responded\n", sdata->name); + sdata_info(sdata, "direct probe responded\n"); ifmgd->auth_data->tries = 0; ifmgd->auth_data->timeout = jiffies; run_again(ifmgd, ifmgd->auth_data->timeout); @@ -2416,10 +2409,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, } if (ifmgd->flags & IEEE80211_STA_BEACON_POLL) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - net_dbg_ratelimited("%s: cancelling probereq poll due to a received beacon\n", - sdata->name); -#endif + mlme_dbg_ratelimited(sdata, + "cancelling probereq poll due to a received beacon\n"); mutex_lock(&local->mtx); ifmgd->flags &= ~IEEE80211_STA_BEACON_POLL; ieee80211_run_deferred_scan(local); @@ -2445,14 +2436,6 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, directed_tim = ieee80211_check_tim(elems.tim, elems.tim_len, ifmgd->aid); - if (ncrc != ifmgd->beacon_crc || !ifmgd->beacon_crc_valid) { - ieee80211_rx_bss_info(sdata, mgmt, len, rx_status, &elems, - true); - - ieee80211_sta_wmm_params(local, sdata, elems.wmm_param, - elems.wmm_param_len); - } - if (local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK) { if (directed_tim) { if (local->hw.conf.dynamic_ps_timeout > 0) { @@ -2483,6 +2466,13 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, ifmgd->beacon_crc = ncrc; ifmgd->beacon_crc_valid = true; + ieee80211_rx_bss_info(sdata, mgmt, len, rx_status, &elems, + true); + + if (ieee80211_sta_wmm_params(local, sdata, elems.wmm_param, + elems.wmm_param_len)) + changed |= BSS_CHANGED_QOS; + if (elems.erp_info && elems.erp_info_len >= 1) { erp_valid = true; erp_value = elems.erp_info[0]; @@ -2642,8 +2632,8 @@ static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata) auth_data->tries++; if 
(auth_data->tries > IEEE80211_AUTH_MAX_TRIES) { - printk(KERN_DEBUG "%s: authentication with %pM timed out\n", - sdata->name, auth_data->bss->bssid); + sdata_info(sdata, "authentication with %pM timed out\n", + auth_data->bss->bssid); /* * Most likely AP is not in the range so remove the @@ -2654,10 +2644,12 @@ static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata) return -ETIMEDOUT; } + drv_mgd_prepare_tx(local, sdata); + if (auth_data->bss->proberesp_ies) { - printk(KERN_DEBUG "%s: send auth to %pM (try %d/%d)\n", - sdata->name, auth_data->bss->bssid, auth_data->tries, - IEEE80211_AUTH_MAX_TRIES); + sdata_info(sdata, "send auth to %pM (try %d/%d)\n", + auth_data->bss->bssid, auth_data->tries, + IEEE80211_AUTH_MAX_TRIES); auth_data->expected_transaction = 2; ieee80211_send_auth(sdata, 1, auth_data->algorithm, @@ -2667,9 +2659,9 @@ static int ieee80211_probe_auth(struct ieee80211_sub_if_data *sdata) } else { const u8 *ssidie; - printk(KERN_DEBUG "%s: direct probe to %pM (try %d/%i)\n", - sdata->name, auth_data->bss->bssid, auth_data->tries, - IEEE80211_AUTH_MAX_TRIES); + sdata_info(sdata, "direct probe to %pM (try %d/%i)\n", + auth_data->bss->bssid, auth_data->tries, + IEEE80211_AUTH_MAX_TRIES); ssidie = ieee80211_bss_get_ie(auth_data->bss, WLAN_EID_SSID); if (!ssidie) @@ -2697,8 +2689,8 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata) assoc_data->tries++; if (assoc_data->tries > IEEE80211_ASSOC_MAX_TRIES) { - printk(KERN_DEBUG "%s: association with %pM timed out\n", - sdata->name, assoc_data->bss->bssid); + sdata_info(sdata, "association with %pM timed out\n", + assoc_data->bss->bssid); /* * Most likely AP is not in the range so remove the @@ -2709,9 +2701,9 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata) return -ETIMEDOUT; } - printk(KERN_DEBUG "%s: associate with %pM (try %d/%d)\n", - sdata->name, assoc_data->bss->bssid, assoc_data->tries, - IEEE80211_ASSOC_MAX_TRIES); + sdata_info(sdata, "associate with %pM (try %d/%d)\n", + assoc_data->bss->bssid, assoc_data->tries, + IEEE80211_ASSOC_MAX_TRIES); ieee80211_send_assoc(sdata); assoc_data->timeout = jiffies + IEEE80211_ASSOC_TIMEOUT; @@ -2784,45 +2776,31 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata) ieee80211_reset_ap_probe(sdata); else if (ifmgd->nullfunc_failed) { if (ifmgd->probe_send_count < max_tries) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, - "%s: No ack for nullfunc frame to" - " AP %pM, try %d/%i\n", - sdata->name, bssid, - ifmgd->probe_send_count, max_tries); -#endif + mlme_dbg(sdata, + "No ack for nullfunc frame to AP %pM, try %d/%i\n", + bssid, ifmgd->probe_send_count, + max_tries); ieee80211_mgd_probe_ap_send(sdata); } else { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, - "%s: No ack for nullfunc frame to" - " AP %pM, disconnecting.\n", - sdata->name, bssid); -#endif + mlme_dbg(sdata, + "No ack for nullfunc frame to AP %pM, disconnecting.\n", + bssid); ieee80211_sta_connection_lost(sdata, bssid, WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY); } } else if (time_is_after_jiffies(ifmgd->probe_timeout)) run_again(ifmgd, ifmgd->probe_timeout); else if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, - "%s: Failed to send nullfunc to AP %pM" - " after %dms, disconnecting.\n", - sdata->name, - bssid, probe_wait_ms); -#endif + mlme_dbg(sdata, + "Failed to send nullfunc to AP %pM after %dms, disconnecting\n", + bssid, probe_wait_ms); 
ieee80211_sta_connection_lost(sdata, bssid, WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY); } else if (ifmgd->probe_send_count < max_tries) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, - "%s: No probe response from AP %pM" - " after %dms, try %d/%i\n", - sdata->name, - bssid, probe_wait_ms, - ifmgd->probe_send_count, max_tries); -#endif + mlme_dbg(sdata, + "No probe response from AP %pM after %dms, try %d/%i\n", + bssid, probe_wait_ms, + ifmgd->probe_send_count, max_tries); ieee80211_mgd_probe_ap_send(sdata); } else { /* @@ -2937,11 +2915,8 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata) sdata->flags &= ~IEEE80211_SDATA_DISCONNECT_RESUME; mutex_lock(&ifmgd->mtx); if (ifmgd->associated) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(sdata->local->hw.wiphy, - "%s: driver requested disconnect after resume.\n", - sdata->name); -#endif + mlme_dbg(sdata, + "driver requested disconnect after resume\n"); ieee80211_sta_connection_lost(sdata, ifmgd->associated->bssid, WLAN_REASON_UNSPECIFIED); @@ -2999,7 +2974,7 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata) /* scan finished notification */ void ieee80211_mlme_notify_scan_completed(struct ieee80211_local *local) { - struct ieee80211_sub_if_data *sdata = local->scan_sdata; + struct ieee80211_sub_if_data *sdata; /* Restart STA timers */ rcu_read_lock(); @@ -3029,7 +3004,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, struct ieee80211_local *local = sdata->local; struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; struct ieee80211_bss *bss = (void *)cbss->priv; - struct sta_info *sta; + struct sta_info *sta = NULL; bool have_sta = false; int err; int ht_cfreq; @@ -3082,13 +3057,11 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, * since we look at probe response/beacon data here * it should be OK. */ - printk(KERN_DEBUG - "%s: Wrong control channel: center-freq: %d" - " ht-cfreq: %d ht->primary_chan: %d" - " band: %d. Disabling HT.\n", - sdata->name, cbss->channel->center_freq, - ht_cfreq, ht_oper->primary_chan, - cbss->channel->band); + sdata_info(sdata, + "Wrong control channel: center-freq: %d ht-cfreq: %d ht->primary_chan: %d band: %d - Disabling HT\n", + cbss->channel->center_freq, + ht_cfreq, ht_oper->primary_chan, + cbss->channel->band); ht_oper = NULL; } } @@ -3112,9 +3085,8 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, if (!ieee80211_set_channel_type(local, sdata, channel_type)) { /* can only fail due to HT40+/- mismatch */ channel_type = NL80211_CHAN_HT20; - printk(KERN_DEBUG - "%s: disabling 40 MHz due to multi-vif mismatch\n", - sdata->name); + sdata_info(sdata, + "disabling 40 MHz due to multi-vif mismatch\n"); ifmgd->flags |= IEEE80211_STA_DISABLE_40MHZ; WARN_ON(!ieee80211_set_channel_type(local, sdata, channel_type)); @@ -3123,7 +3095,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, local->oper_channel = cbss->channel; ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL); - if (!have_sta) { + if (sta) { u32 rates = 0, basic_rates = 0; bool have_higher_than_11mbit; int min_rate = INT_MAX, min_rate_index = -1; @@ -3143,9 +3115,8 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, * we can connect -- with a warning. 
*/ if (!basic_rates && min_rate_index >= 0) { - printk(KERN_DEBUG - "%s: No basic rates, using min rate instead.\n", - sdata->name); + sdata_info(sdata, + "No basic rates, using min rate instead\n"); basic_rates = BIT(min_rate_index); } @@ -3161,9 +3132,15 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, memcpy(ifmgd->bssid, cbss->bssid, ETH_ALEN); - /* tell driver about BSSID and basic rates */ + /* set timing information */ + sdata->vif.bss_conf.beacon_int = cbss->beacon_interval; + sdata->vif.bss_conf.sync_tsf = cbss->tsf; + sdata->vif.bss_conf.sync_device_ts = bss->device_ts; + + /* tell driver about BSSID, basic rates and timing */ ieee80211_bss_info_change_notify(sdata, - BSS_CHANGED_BSSID | BSS_CHANGED_BASIC_RATES); + BSS_CHANGED_BSSID | BSS_CHANGED_BASIC_RATES | + BSS_CHANGED_BEACON_INT); if (assoc) sta_info_pre_move_state(sta, IEEE80211_STA_AUTH); @@ -3171,9 +3148,9 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, err = sta_info_insert(sta); sta = NULL; if (err) { - printk(KERN_DEBUG - "%s: failed to insert STA entry for the AP (error %d)\n", - sdata->name, err); + sdata_info(sdata, + "failed to insert STA entry for the AP (error %d)\n", + err); return err; } } else @@ -3251,8 +3228,7 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata, if (ifmgd->associated) ieee80211_set_disassoc(sdata, 0, 0, false, NULL); - printk(KERN_DEBUG "%s: authenticate with %pM\n", - sdata->name, req->bss->bssid); + sdata_info(sdata, "authenticate with %pM\n", req->bss->bssid); err = ieee80211_prep_connection(sdata, req->bss, false); if (err) @@ -3287,7 +3263,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, struct ieee80211_bss *bss = (void *)req->bss->priv; struct ieee80211_mgd_assoc_data *assoc_data; struct ieee80211_supported_band *sband; - const u8 *ssidie; + const u8 *ssidie, *ht_ie; int i, err; ssidie = ieee80211_bss_get_ie(req->bss, WLAN_EID_SSID); @@ -3335,11 +3311,15 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, * We can set this to true for non-11n hardware, that'll be checked * separately along with the peer capabilities. 
*/ - for (i = 0; i < req->crypto.n_ciphers_pairwise; i++) + for (i = 0; i < req->crypto.n_ciphers_pairwise; i++) { if (req->crypto.ciphers_pairwise[i] == WLAN_CIPHER_SUITE_WEP40 || req->crypto.ciphers_pairwise[i] == WLAN_CIPHER_SUITE_TKIP || - req->crypto.ciphers_pairwise[i] == WLAN_CIPHER_SUITE_WEP104) + req->crypto.ciphers_pairwise[i] == WLAN_CIPHER_SUITE_WEP104) { ifmgd->flags |= IEEE80211_STA_DISABLE_11N; + netdev_info(sdata->dev, + "disabling HT due to WEP/TKIP use\n"); + } + } if (req->flags & ASSOC_REQ_DISABLE_HT) ifmgd->flags |= IEEE80211_STA_DISABLE_11N; @@ -3347,8 +3327,11 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, /* Also disable HT if we don't support it or the AP doesn't use WMM */ sband = local->hw.wiphy->bands[req->bss->channel->band]; if (!sband->ht_cap.ht_supported || - local->hw.queues < IEEE80211_NUM_ACS || !bss->wmm_used) + local->hw.queues < IEEE80211_NUM_ACS || !bss->wmm_used) { ifmgd->flags |= IEEE80211_STA_DISABLE_11N; + netdev_info(sdata->dev, + "disabling HT as WMM/QoS is not supported\n"); + } memcpy(&ifmgd->ht_capa, &req->ht_capa, sizeof(ifmgd->ht_capa)); memcpy(&ifmgd->ht_capa_mask, &req->ht_capa_mask, @@ -3374,8 +3357,13 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, (local->hw.queues >= IEEE80211_NUM_ACS); assoc_data->supp_rates = bss->supp_rates; assoc_data->supp_rates_len = bss->supp_rates_len; - assoc_data->ht_operation_ie = - ieee80211_bss_get_ie(req->bss, WLAN_EID_HT_OPERATION); + + ht_ie = ieee80211_bss_get_ie(req->bss, WLAN_EID_HT_OPERATION); + if (ht_ie && ht_ie[1] >= sizeof(struct ieee80211_ht_operation)) + assoc_data->ap_ht_param = + ((struct ieee80211_ht_operation *)(ht_ie + 2))->ht_param; + else + ifmgd->flags |= IEEE80211_STA_DISABLE_11N; if (bss->wmm_used && bss->uapsd_supported && (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD)) { @@ -3422,8 +3410,8 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, * Wait up to one beacon interval ... * should this be more if we miss one? 
*/ - printk(KERN_DEBUG "%s: waiting for beacon from %pM\n", - sdata->name, ifmgd->bssid); + sdata_info(sdata, "waiting for beacon from %pM\n", + ifmgd->bssid); assoc_data->timeout = TU_TO_EXP_TIME(req->bss->beacon_interval); } else { assoc_data->have_beacon = true; @@ -3442,8 +3430,8 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, corrupt_type = "beacon"; } else if (bss->corrupt_data & IEEE80211_BSS_CORRUPT_PROBE_RESP) corrupt_type = "probe response"; - printk(KERN_DEBUG "%s: associating with AP with corrupt %s\n", - sdata->name, corrupt_type); + sdata_info(sdata, "associating with AP with corrupt %s\n", + corrupt_type); } err = 0; @@ -3472,9 +3460,9 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata, return 0; } - printk(KERN_DEBUG - "%s: deauthenticating from %pM by local choice (reason=%d)\n", - sdata->name, req->bssid, req->reason_code); + sdata_info(sdata, + "deauthenticating from %pM by local choice (reason=%d)\n", + req->bssid, req->reason_code); if (ifmgd->associated && ether_addr_equal(ifmgd->associated->bssid, req->bssid)) @@ -3516,8 +3504,9 @@ int ieee80211_mgd_disassoc(struct ieee80211_sub_if_data *sdata, return -ENOLINK; } - printk(KERN_DEBUG "%s: disassociating from %pM by local choice (reason=%d)\n", - sdata->name, req->bss->bssid, req->reason_code); + sdata_info(sdata, + "disassociating from %pM by local choice (reason=%d)\n", + req->bss->bssid, req->reason_code); memcpy(bssid, req->bss->bssid, ETH_ALEN); ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DISASSOC, @@ -3558,10 +3547,3 @@ void ieee80211_cqm_rssi_notify(struct ieee80211_vif *vif, cfg80211_cqm_rssi_notify(sdata->dev, rssi_event, gfp); } EXPORT_SYMBOL(ieee80211_cqm_rssi_notify); - -unsigned char ieee80211_get_operstate(struct ieee80211_vif *vif) -{ - struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif); - return sdata->dev->operstate; -} -EXPORT_SYMBOL(ieee80211_get_operstate); diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c index 935aa4b6deee..635c3250c668 100644 --- a/net/mac80211/offchannel.c +++ b/net/mac80211/offchannel.c @@ -15,7 +15,7 @@ #include <linux/export.h> #include <net/mac80211.h> #include "ieee80211_i.h" -#include "driver-trace.h" +#include "driver-ops.h" /* * Tell our hardware to disable PS. @@ -24,8 +24,7 @@ * because we *may* be doing work on-operating channel, and want our * hardware unconditionally awake, but still let the AP send us normal frames. */ -static void ieee80211_offchannel_ps_enable(struct ieee80211_sub_if_data *sdata, - bool tell_ap) +static void ieee80211_offchannel_ps_enable(struct ieee80211_sub_if_data *sdata) { struct ieee80211_local *local = sdata->local; struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; @@ -46,8 +45,8 @@ static void ieee80211_offchannel_ps_enable(struct ieee80211_sub_if_data *sdata, ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS); } - if (tell_ap && (!local->offchannel_ps_enabled || - !(local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK))) + if (!local->offchannel_ps_enabled || + !(local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK)) /* * If power save was enabled, no need to send a nullfunc * frame because AP knows that we are sleeping. 
But if the @@ -132,7 +131,7 @@ void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local, if (offchannel_ps_enable && (sdata->vif.type == NL80211_IFTYPE_STATION) && sdata->u.mgd.associated) - ieee80211_offchannel_ps_enable(sdata, true); + ieee80211_offchannel_ps_enable(sdata); } } mutex_unlock(&local->iflist_mtx); @@ -181,34 +180,58 @@ void ieee80211_offchannel_return(struct ieee80211_local *local, mutex_unlock(&local->iflist_mtx); } +void ieee80211_handle_roc_started(struct ieee80211_roc_work *roc) +{ + if (roc->notified) + return; + + if (roc->mgmt_tx_cookie) { + if (!WARN_ON(!roc->frame)) { + ieee80211_tx_skb(roc->sdata, roc->frame); + roc->frame = NULL; + } + } else { + cfg80211_ready_on_channel(&roc->sdata->wdev, (unsigned long)roc, + roc->chan, roc->chan_type, + roc->req_duration, GFP_KERNEL); + } + + roc->notified = true; +} + static void ieee80211_hw_roc_start(struct work_struct *work) { struct ieee80211_local *local = container_of(work, struct ieee80211_local, hw_roc_start); - struct ieee80211_sub_if_data *sdata; + struct ieee80211_roc_work *roc, *dep, *tmp; mutex_lock(&local->mtx); - if (!local->hw_roc_channel) { - mutex_unlock(&local->mtx); - return; - } + if (list_empty(&local->roc_list)) + goto out_unlock; - if (local->hw_roc_skb) { - sdata = IEEE80211_DEV_TO_SUB_IF(local->hw_roc_dev); - ieee80211_tx_skb(sdata, local->hw_roc_skb); - local->hw_roc_skb = NULL; - } else { - cfg80211_ready_on_channel(local->hw_roc_dev, - local->hw_roc_cookie, - local->hw_roc_channel, - local->hw_roc_channel_type, - local->hw_roc_duration, - GFP_KERNEL); - } + roc = list_first_entry(&local->roc_list, struct ieee80211_roc_work, + list); + + if (!roc->started) + goto out_unlock; - ieee80211_recalc_idle(local); + roc->hw_begun = true; + roc->hw_start_time = local->hw_roc_start_time; + ieee80211_handle_roc_started(roc); + list_for_each_entry_safe(dep, tmp, &roc->dependents, list) { + ieee80211_handle_roc_started(dep); + + if (dep->duration > roc->duration) { + u32 dur = dep->duration; + dep->duration = dur - roc->duration; + roc->duration = dur; + list_del(&dep->list); + list_add(&dep->list, &roc->list); + } + } + out_unlock: mutex_unlock(&local->mtx); } @@ -216,52 +239,181 @@ void ieee80211_ready_on_channel(struct ieee80211_hw *hw) { struct ieee80211_local *local = hw_to_local(hw); + local->hw_roc_start_time = jiffies; + trace_api_ready_on_channel(local); ieee80211_queue_work(hw, &local->hw_roc_start); } EXPORT_SYMBOL_GPL(ieee80211_ready_on_channel); -static void ieee80211_hw_roc_done(struct work_struct *work) +void ieee80211_start_next_roc(struct ieee80211_local *local) { - struct ieee80211_local *local = - container_of(work, struct ieee80211_local, hw_roc_done); + struct ieee80211_roc_work *roc; - mutex_lock(&local->mtx); + lockdep_assert_held(&local->mtx); - if (!local->hw_roc_channel) { - mutex_unlock(&local->mtx); + if (list_empty(&local->roc_list)) { + ieee80211_run_deferred_scan(local); return; } - /* was never transmitted */ - if (local->hw_roc_skb) { - u64 cookie; + roc = list_first_entry(&local->roc_list, struct ieee80211_roc_work, + list); - cookie = local->hw_roc_cookie ^ 2; + if (WARN_ON_ONCE(roc->started)) + return; + + if (local->ops->remain_on_channel) { + int ret, duration = roc->duration; + + /* XXX: duplicated, see ieee80211_start_roc_work() */ + if (!duration) + duration = 10; + + ret = drv_remain_on_channel(local, roc->chan, + roc->chan_type, + duration); + + roc->started = true; + + if (ret) { + wiphy_warn(local->hw.wiphy, + "failed to start next HW ROC (%d)\n", ret); + 
/* + * queue the work struct again to avoid recursion + * when multiple failures occur + */ + ieee80211_remain_on_channel_expired(&local->hw); + } + } else { + /* delay it a bit */ + ieee80211_queue_delayed_work(&local->hw, &roc->work, + round_jiffies_relative(HZ/2)); + } +} - cfg80211_mgmt_tx_status(local->hw_roc_dev, cookie, - local->hw_roc_skb->data, - local->hw_roc_skb->len, false, - GFP_KERNEL); +void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc) +{ + struct ieee80211_roc_work *dep, *tmp; - kfree_skb(local->hw_roc_skb); - local->hw_roc_skb = NULL; - local->hw_roc_skb_for_status = NULL; + /* was never transmitted */ + if (roc->frame) { + cfg80211_mgmt_tx_status(&roc->sdata->wdev, + (unsigned long)roc->frame, + roc->frame->data, roc->frame->len, + false, GFP_KERNEL); + kfree_skb(roc->frame); } - if (!local->hw_roc_for_tx) - cfg80211_remain_on_channel_expired(local->hw_roc_dev, - local->hw_roc_cookie, - local->hw_roc_channel, - local->hw_roc_channel_type, + if (!roc->mgmt_tx_cookie) + cfg80211_remain_on_channel_expired(&roc->sdata->wdev, + (unsigned long)roc, + roc->chan, roc->chan_type, GFP_KERNEL); - local->hw_roc_channel = NULL; - local->hw_roc_cookie = 0; + list_for_each_entry_safe(dep, tmp, &roc->dependents, list) + ieee80211_roc_notify_destroy(dep); + + kfree(roc); +} + +void ieee80211_sw_roc_work(struct work_struct *work) +{ + struct ieee80211_roc_work *roc = + container_of(work, struct ieee80211_roc_work, work.work); + struct ieee80211_sub_if_data *sdata = roc->sdata; + struct ieee80211_local *local = sdata->local; + bool started; + + mutex_lock(&local->mtx); + + if (roc->abort) + goto finish; + + if (WARN_ON(list_empty(&local->roc_list))) + goto out_unlock; + + if (WARN_ON(roc != list_first_entry(&local->roc_list, + struct ieee80211_roc_work, + list))) + goto out_unlock; - ieee80211_recalc_idle(local); + if (!roc->started) { + struct ieee80211_roc_work *dep; + /* start this ROC */ + + /* switch channel etc */ + ieee80211_recalc_idle(local); + + local->tmp_channel = roc->chan; + local->tmp_channel_type = roc->chan_type; + ieee80211_hw_config(local, 0); + + /* tell userspace or send frame */ + ieee80211_handle_roc_started(roc); + list_for_each_entry(dep, &roc->dependents, list) + ieee80211_handle_roc_started(dep); + + /* if it was pure TX, just finish right away */ + if (!roc->duration) + goto finish; + + roc->started = true; + ieee80211_queue_delayed_work(&local->hw, &roc->work, + msecs_to_jiffies(roc->duration)); + } else { + /* finish this ROC */ + finish: + list_del(&roc->list); + started = roc->started; + ieee80211_roc_notify_destroy(roc); + + if (started) { + drv_flush(local, false); + + local->tmp_channel = NULL; + ieee80211_hw_config(local, 0); + + ieee80211_offchannel_return(local, true); + } + + ieee80211_recalc_idle(local); + + if (started) + ieee80211_start_next_roc(local); + } + + out_unlock: + mutex_unlock(&local->mtx); +} + +static void ieee80211_hw_roc_done(struct work_struct *work) +{ + struct ieee80211_local *local = + container_of(work, struct ieee80211_local, hw_roc_done); + struct ieee80211_roc_work *roc; + + mutex_lock(&local->mtx); + + if (list_empty(&local->roc_list)) + goto out_unlock; + + roc = list_first_entry(&local->roc_list, struct ieee80211_roc_work, + list); + + if (!roc->started) + goto out_unlock; + + list_del(&roc->list); + + ieee80211_roc_notify_destroy(roc); + + /* if there's another roc, start it now */ + ieee80211_start_next_roc(local); + + out_unlock: mutex_unlock(&local->mtx); } @@ -275,8 +427,47 @@ void 
ieee80211_remain_on_channel_expired(struct ieee80211_hw *hw) } EXPORT_SYMBOL_GPL(ieee80211_remain_on_channel_expired); -void ieee80211_hw_roc_setup(struct ieee80211_local *local) +void ieee80211_roc_setup(struct ieee80211_local *local) { INIT_WORK(&local->hw_roc_start, ieee80211_hw_roc_start); INIT_WORK(&local->hw_roc_done, ieee80211_hw_roc_done); + INIT_LIST_HEAD(&local->roc_list); +} + +void ieee80211_roc_purge(struct ieee80211_sub_if_data *sdata) +{ + struct ieee80211_local *local = sdata->local; + struct ieee80211_roc_work *roc, *tmp; + LIST_HEAD(tmp_list); + + mutex_lock(&local->mtx); + list_for_each_entry_safe(roc, tmp, &local->roc_list, list) { + if (roc->sdata != sdata) + continue; + + if (roc->started && local->ops->remain_on_channel) { + /* can race, so ignore return value */ + drv_cancel_remain_on_channel(local); + } + + list_move_tail(&roc->list, &tmp_list); + roc->abort = true; + } + + ieee80211_start_next_roc(local); + mutex_unlock(&local->mtx); + + list_for_each_entry_safe(roc, tmp, &tmp_list, list) { + if (local->ops->remain_on_channel) { + list_del(&roc->list); + ieee80211_roc_notify_destroy(roc); + } else { + ieee80211_queue_delayed_work(&local->hw, &roc->work, 0); + + /* work will clean up etc */ + flush_delayed_work(&roc->work); + } + } + + WARN_ON_ONCE(!list_empty(&tmp_list)); } diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c index af1c4e26e965..5c572e7a1a71 100644 --- a/net/mac80211/pm.c +++ b/net/mac80211/pm.c @@ -77,6 +77,17 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan) int err = drv_suspend(local, wowlan); if (err < 0) { local->quiescing = false; + local->wowlan = false; + if (hw->flags & IEEE80211_HW_AMPDU_AGGREGATION) { + mutex_lock(&local->sta_mtx); + list_for_each_entry(sta, + &local->sta_list, list) { + clear_sta_flag(sta, WLAN_STA_BLOCK_BA); + } + mutex_unlock(&local->sta_mtx); + } + ieee80211_wake_queues_by_reason(hw, + IEEE80211_QUEUE_STOP_REASON_SUSPEND); return err; } else if (err > 0) { WARN_ON(err != 1); diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c index f9e51ef8dfa2..fb1d4aa65e8c 100644 --- a/net/mac80211/rc80211_minstrel_ht.c +++ b/net/mac80211/rc80211_minstrel_ht.c @@ -626,8 +626,12 @@ minstrel_ht_get_rate(void *priv, struct ieee80211_sta *sta, void *priv_sta, #ifdef CONFIG_MAC80211_DEBUGFS /* use fixed index if set */ - if (mp->fixed_rate_idx != -1) - sample_idx = mp->fixed_rate_idx; + if (mp->fixed_rate_idx != -1) { + mi->max_tp_rate = mp->fixed_rate_idx; + mi->max_tp_rate2 = mp->fixed_rate_idx; + mi->max_prob_rate = mp->fixed_rate_idx; + sample_idx = -1; + } #endif if (sample_idx >= 0) { diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 965e6ec0adb6..0cb4edee6af5 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -94,7 +94,7 @@ ieee80211_rx_radiotap_len(struct ieee80211_local *local, return len; } -/* +/** * ieee80211_add_rx_radiotap_header - add radiotap header * * add a radiotap header containing all the fields which the hardware provided. 
@@ -413,29 +413,6 @@ static void ieee80211_verify_alignment(struct ieee80211_rx_data *rx) /* rx handlers */ -static ieee80211_rx_result debug_noinline -ieee80211_rx_h_passive_scan(struct ieee80211_rx_data *rx) -{ - struct ieee80211_local *local = rx->local; - struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb); - struct sk_buff *skb = rx->skb; - - if (likely(!(status->rx_flags & IEEE80211_RX_IN_SCAN) && - !local->sched_scanning)) - return RX_CONTINUE; - - if (test_bit(SCAN_HW_SCANNING, &local->scanning) || - test_bit(SCAN_SW_SCANNING, &local->scanning) || - test_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning) || - local->sched_scanning) - return ieee80211_scan_rx(rx->sdata, skb); - - /* scanning finished during invoking of handlers */ - I802_DEBUG_INC(local->rx_handlers_drop_passive_scan); - return RX_DROP_UNUSABLE; -} - - static int ieee80211_is_unicast_robust_mgmt_frame(struct sk_buff *skb) { struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; @@ -554,11 +531,11 @@ static inline u16 seq_sub(u16 sq1, u16 sq2) } -static void ieee80211_release_reorder_frame(struct ieee80211_hw *hw, +static void ieee80211_release_reorder_frame(struct ieee80211_sub_if_data *sdata, struct tid_ampdu_rx *tid_agg_rx, int index) { - struct ieee80211_local *local = hw_to_local(hw); + struct ieee80211_local *local = sdata->local; struct sk_buff *skb = tid_agg_rx->reorder_buf[index]; struct ieee80211_rx_status *status; @@ -578,7 +555,7 @@ no_frame: tid_agg_rx->head_seq_num = seq_inc(tid_agg_rx->head_seq_num); } -static void ieee80211_release_reorder_frames(struct ieee80211_hw *hw, +static void ieee80211_release_reorder_frames(struct ieee80211_sub_if_data *sdata, struct tid_ampdu_rx *tid_agg_rx, u16 head_seq_num) { @@ -589,7 +566,7 @@ static void ieee80211_release_reorder_frames(struct ieee80211_hw *hw, while (seq_less(tid_agg_rx->head_seq_num, head_seq_num)) { index = seq_sub(tid_agg_rx->head_seq_num, tid_agg_rx->ssn) % tid_agg_rx->buf_size; - ieee80211_release_reorder_frame(hw, tid_agg_rx, index); + ieee80211_release_reorder_frame(sdata, tid_agg_rx, index); } } @@ -604,7 +581,7 @@ static void ieee80211_release_reorder_frames(struct ieee80211_hw *hw, */ #define HT_RX_REORDER_BUF_TIMEOUT (HZ / 10) -static void ieee80211_sta_reorder_release(struct ieee80211_hw *hw, +static void ieee80211_sta_reorder_release(struct ieee80211_sub_if_data *sdata, struct tid_ampdu_rx *tid_agg_rx) { int index, j; @@ -632,12 +609,9 @@ static void ieee80211_sta_reorder_release(struct ieee80211_hw *hw, HT_RX_REORDER_BUF_TIMEOUT)) goto set_release_timer; -#ifdef CONFIG_MAC80211_HT_DEBUG - if (net_ratelimit()) - wiphy_debug(hw->wiphy, - "release an RX reorder frame due to timeout on earlier frames\n"); -#endif - ieee80211_release_reorder_frame(hw, tid_agg_rx, j); + ht_dbg_ratelimited(sdata, + "release an RX reorder frame due to timeout on earlier frames\n"); + ieee80211_release_reorder_frame(sdata, tid_agg_rx, j); /* * Increment the head seq# also for the skipped slots. @@ -647,7 +621,7 @@ static void ieee80211_sta_reorder_release(struct ieee80211_hw *hw, skipped = 0; } } else while (tid_agg_rx->reorder_buf[index]) { - ieee80211_release_reorder_frame(hw, tid_agg_rx, index); + ieee80211_release_reorder_frame(sdata, tid_agg_rx, index); index = seq_sub(tid_agg_rx->head_seq_num, tid_agg_rx->ssn) % tid_agg_rx->buf_size; } @@ -677,7 +651,7 @@ static void ieee80211_sta_reorder_release(struct ieee80211_hw *hw, * rcu_read_lock protection. It returns false if the frame * can be processed immediately, true if it was consumed. 
*/ -static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_hw *hw, +static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_sub_if_data *sdata, struct tid_ampdu_rx *tid_agg_rx, struct sk_buff *skb) { @@ -706,7 +680,8 @@ static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_hw *hw, if (!seq_less(mpdu_seq_num, head_seq_num + buf_size)) { head_seq_num = seq_inc(seq_sub(mpdu_seq_num, buf_size)); /* release stored frames up to new head to stack */ - ieee80211_release_reorder_frames(hw, tid_agg_rx, head_seq_num); + ieee80211_release_reorder_frames(sdata, tid_agg_rx, + head_seq_num); } /* Now the new frame is always in the range of the reordering buffer */ @@ -736,7 +711,7 @@ static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_hw *hw, tid_agg_rx->reorder_buf[index] = skb; tid_agg_rx->reorder_time[index] = jiffies; tid_agg_rx->stored_mpdu_num++; - ieee80211_sta_reorder_release(hw, tid_agg_rx); + ieee80211_sta_reorder_release(sdata, tid_agg_rx); out: spin_unlock(&tid_agg_rx->reorder_lock); @@ -751,7 +726,6 @@ static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx) { struct sk_buff *skb = rx->skb; struct ieee80211_local *local = rx->local; - struct ieee80211_hw *hw = &local->hw; struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb); struct sta_info *sta = rx->sta; @@ -813,7 +787,7 @@ static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx) * sure that we cannot get to it any more before doing * anything with it. */ - if (ieee80211_sta_manage_reorder_buf(hw, tid_agg_rx, skb)) + if (ieee80211_sta_manage_reorder_buf(rx->sdata, tid_agg_rx, skb)) return; dont_reorder: @@ -1136,24 +1110,18 @@ static void ap_sta_ps_start(struct sta_info *sta) set_sta_flag(sta, WLAN_STA_PS_STA); if (!(local->hw.flags & IEEE80211_HW_AP_LINK_PS)) drv_sta_notify(local, sdata, STA_NOTIFY_SLEEP, &sta->sta); -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "%s: STA %pM aid %d enters power save mode\n", - sdata->name, sta->sta.addr, sta->sta.aid); -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ + ps_dbg(sdata, "STA %pM aid %d enters power save mode\n", + sta->sta.addr, sta->sta.aid); } static void ap_sta_ps_end(struct sta_info *sta) { -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "%s: STA %pM aid %d exits power save mode\n", - sta->sdata->name, sta->sta.addr, sta->sta.aid); -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ + ps_dbg(sta->sdata, "STA %pM aid %d exits power save mode\n", + sta->sta.addr, sta->sta.aid); if (test_sta_flag(sta, WLAN_STA_PS_DRIVER)) { -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "%s: STA %pM aid %d driver-ps-blocked\n", - sta->sdata->name, sta->sta.addr, sta->sta.aid); -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ + ps_dbg(sta->sdata, "STA %pM aid %d driver-ps-blocked\n", + sta->sta.addr, sta->sta.aid); return; } @@ -1383,19 +1351,8 @@ ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata, if (sdata->fragment_next >= IEEE80211_FRAGMENT_MAX) sdata->fragment_next = 0; - if (!skb_queue_empty(&entry->skb_list)) { -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - struct ieee80211_hdr *hdr = - (struct ieee80211_hdr *) entry->skb_list.next->data; - printk(KERN_DEBUG "%s: RX reassembly removed oldest " - "fragment entry (idx=%d age=%lu seq=%d last_frag=%d " - "addr1=%pM addr2=%pM\n", - sdata->name, idx, - jiffies - entry->first_frag_time, entry->seq, - entry->last_frag, hdr->addr1, hdr->addr2); -#endif + if (!skb_queue_empty(&entry->skb_list)) 
__skb_queue_purge(&entry->skb_list); - } __skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */ *skb = NULL; @@ -1753,7 +1710,7 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx) */ xmit_skb = skb_copy(skb, GFP_ATOMIC); if (!xmit_skb) - net_dbg_ratelimited("%s: failed to clone multicast frame\n", + net_info_ratelimited("%s: failed to clone multicast frame\n", dev->name); } else { dsta = sta_info_get(sdata, skb->data); @@ -1937,7 +1894,7 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx) ether_addr_equal(sdata->vif.addr, hdr->addr3)) return RX_CONTINUE; - q = ieee80211_select_queue_80211(local, skb, hdr); + q = ieee80211_select_queue_80211(sdata, skb, hdr); if (ieee80211_queue_stopped(&local->hw, q)) { IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_congestion); return RX_DROP_MONITOR; @@ -1957,7 +1914,7 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx) fwd_skb = skb_copy(skb, GFP_ATOMIC); if (!fwd_skb) { - net_dbg_ratelimited("%s: failed to clone mesh frame\n", + net_info_ratelimited("%s: failed to clone mesh frame\n", sdata->name); goto out; } @@ -2060,8 +2017,6 @@ ieee80211_rx_h_data(struct ieee80211_rx_data *rx) static ieee80211_rx_result debug_noinline ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx) { - struct ieee80211_local *local = rx->local; - struct ieee80211_hw *hw = &local->hw; struct sk_buff *skb = rx->skb; struct ieee80211_bar *bar = (struct ieee80211_bar *)skb->data; struct tid_ampdu_rx *tid_agg_rx; @@ -2098,7 +2053,8 @@ ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx) spin_lock(&tid_agg_rx->reorder_lock); /* release stored frames up to start of BAR */ - ieee80211_release_reorder_frames(hw, tid_agg_rx, start_seq_num); + ieee80211_release_reorder_frames(rx->sdata, tid_agg_rx, + start_seq_num); spin_unlock(&tid_agg_rx->reorder_lock); kfree_skb(skb); @@ -2425,7 +2381,7 @@ ieee80211_rx_h_userspace_mgmt(struct ieee80211_rx_data *rx) if (rx->local->hw.flags & IEEE80211_HW_SIGNAL_DBM) sig = status->signal; - if (cfg80211_rx_mgmt(rx->sdata->dev, status->freq, sig, + if (cfg80211_rx_mgmt(&rx->sdata->wdev, status->freq, sig, rx->skb->data, rx->skb->len, GFP_ATOMIC)) { if (rx->sta) @@ -2716,7 +2672,6 @@ static void ieee80211_invoke_rx_handlers(struct ieee80211_rx_data *rx) goto rxh_next; \ } while (0); - CALL_RXH(ieee80211_rx_h_passive_scan) CALL_RXH(ieee80211_rx_h_check) ieee80211_rx_reorder_ampdu(rx); @@ -2752,7 +2707,7 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid) return; spin_lock(&tid_agg_rx->reorder_lock); - ieee80211_sta_reorder_release(&sta->local->hw, tid_agg_rx); + ieee80211_sta_reorder_release(sta->sdata, tid_agg_rx); spin_unlock(&tid_agg_rx->reorder_lock); ieee80211_rx_handlers(&rx); @@ -2786,11 +2741,8 @@ static int prepare_for_handlers(struct ieee80211_rx_data *rx, return 0; if (ieee80211_is_beacon(hdr->frame_control)) { return 1; - } - else if (!ieee80211_bssid_match(bssid, sdata->u.ibss.bssid)) { - if (!(status->rx_flags & IEEE80211_RX_IN_SCAN)) - return 0; - status->rx_flags &= ~IEEE80211_RX_RA_MATCH; + } else if (!ieee80211_bssid_match(bssid, sdata->u.ibss.bssid)) { + return 0; } else if (!multicast && !ether_addr_equal(sdata->vif.addr, hdr->addr1)) { if (!(sdata->dev->flags & IFF_PROMISC)) @@ -2828,11 +2780,9 @@ static int prepare_for_handlers(struct ieee80211_rx_data *rx, * and location updates. Note that mac80211 * itself never looks at these frames. 
*/ - if (!(status->rx_flags & IEEE80211_RX_IN_SCAN) && - ieee80211_is_public_action(hdr, skb->len)) + if (ieee80211_is_public_action(hdr, skb->len)) return 1; - if (!(status->rx_flags & IEEE80211_RX_IN_SCAN) && - !ieee80211_is_beacon(hdr->frame_control)) + if (!ieee80211_is_beacon(hdr->frame_control)) return 0; status->rx_flags &= ~IEEE80211_RX_RA_MATCH; } @@ -2898,7 +2848,6 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx, static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw, struct sk_buff *skb) { - struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb); struct ieee80211_local *local = hw_to_local(hw); struct ieee80211_sub_if_data *sdata; struct ieee80211_hdr *hdr; @@ -2916,11 +2865,6 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw, if (ieee80211_is_data(fc) || ieee80211_is_mgmt(fc)) local->dot11ReceivedFragmentCount++; - if (unlikely(test_bit(SCAN_HW_SCANNING, &local->scanning) || - test_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning) || - test_bit(SCAN_SW_SCANNING, &local->scanning))) - status->rx_flags |= IEEE80211_RX_IN_SCAN; - if (ieee80211_is_mgmt(fc)) err = skb_linearize(skb); else @@ -2935,6 +2879,10 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw, ieee80211_parse_qos(&rx); ieee80211_verify_alignment(&rx); + if (unlikely(ieee80211_is_probe_resp(hdr->frame_control) || + ieee80211_is_beacon(hdr->frame_control))) + ieee80211_scan_rx(local, skb); + if (ieee80211_is_data(fc)) { prev_sta = NULL; @@ -3032,6 +2980,10 @@ void ieee80211_rx(struct ieee80211_hw *hw, struct sk_buff *skb) if (unlikely(local->quiescing || local->suspended)) goto drop; + /* We might be during a HW reconfig, prevent Rx for the same reason */ + if (unlikely(local->in_reconfig)) + goto drop; + /* * The same happens when we're not even started, * but that's worth a warning. 
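Many of the hunks above (mlme.c, rx.c) and below (sta_info.c, status.c, tkip.c) replace open-coded printk(KERN_DEBUG ...) calls wrapped in #ifdef CONFIG_MAC80211_VERBOSE_* with per-interface helpers such as sdata_info(), mlme_dbg(), ps_dbg() and sta_dbg(). The header defining those helpers is not part of this excerpt; what follows is only a rough sketch, under the assumption that they are thin macros layered on the __sdata_info()/__sdata_dbg() functions added in net/mac80211/trace.c later in this diff, with per-subsystem CONFIG_MAC80211_*_DEBUG switches. The names and guards here are illustrative, not the committed debug.h.

/*
 * Illustrative sketch only -- not taken from this merge.
 * Assumes the helpers implemented in net/mac80211/trace.c
 * (under CONFIG_MAC80211_MESSAGE_TRACING).
 */
void __sdata_info(const char *fmt, ...);
void __sdata_dbg(bool print, const char *fmt, ...);

#ifdef CONFIG_MAC80211_MLME_DEBUG
#define MAC80211_MLME_DEBUG 1
#else
#define MAC80211_MLME_DEBUG 0
#endif

/* always printed, prefixed with the interface name */
#define sdata_info(sdata, fmt, ...)					\
	__sdata_info("%s: " fmt, (sdata)->name, ##__VA_ARGS__)

/* printed only when the subsystem debug option is enabled,
 * but still handed to the mac80211 message tracepoint */
#define _sdata_dbg(print, sdata, fmt, ...)				\
	__sdata_dbg(print, "%s: " fmt, (sdata)->name, ##__VA_ARGS__)

#define mlme_dbg(sdata, fmt, ...)					\
	_sdata_dbg(MAC80211_MLME_DEBUG, sdata, fmt, ##__VA_ARGS__)

In this sketch, a call like mlme_dbg(sdata, "driver requested disconnect after resume\n") still reaches the mac80211_dbg tracepoint but only prints when CONFIG_MAC80211_MLME_DEBUG is set, which is the apparent motivation for dropping the #ifdef CONFIG_MAC80211_VERBOSE_DEBUG blocks at the call sites in the hunks above.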
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index 169da0742c81..bcaee5d12839 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -83,13 +83,14 @@ ieee80211_bss_info_update(struct ieee80211_local *local, cbss = cfg80211_inform_bss_frame(local->hw.wiphy, channel, mgmt, len, signal, GFP_ATOMIC); - if (!cbss) return NULL; cbss->free_priv = ieee80211_rx_bss_free; bss = (void *)cbss->priv; + bss->device_ts = rx_status->device_timestamp; + if (elems->parse_error) { if (beacon) bss->corrupt_data |= IEEE80211_BSS_CORRUPT_BEACON; @@ -114,8 +115,7 @@ ieee80211_bss_info_update(struct ieee80211_local *local, if (elems->tim && (!elems->parse_error || !(bss->valid_data & IEEE80211_BSS_VALID_DTIM))) { - struct ieee80211_tim_ie *tim_ie = - (struct ieee80211_tim_ie *)elems->tim; + struct ieee80211_tim_ie *tim_ie = elems->tim; bss->dtim_period = tim_ie->dtim_period; if (!elems->parse_error) bss->valid_data |= IEEE80211_BSS_VALID_DTIM; @@ -165,52 +165,47 @@ ieee80211_bss_info_update(struct ieee80211_local *local, return bss; } -ieee80211_rx_result -ieee80211_scan_rx(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb) +void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb) { struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb); - struct ieee80211_mgmt *mgmt; + struct ieee80211_sub_if_data *sdata1, *sdata2; + struct ieee80211_mgmt *mgmt = (void *)skb->data; struct ieee80211_bss *bss; u8 *elements; struct ieee80211_channel *channel; size_t baselen; int freq; - __le16 fc; - bool presp, beacon = false; + bool beacon; struct ieee802_11_elems elems; - if (skb->len < 2) - return RX_DROP_UNUSABLE; - - mgmt = (struct ieee80211_mgmt *) skb->data; - fc = mgmt->frame_control; + if (skb->len < 24 || + (!ieee80211_is_probe_resp(mgmt->frame_control) && + !ieee80211_is_beacon(mgmt->frame_control))) + return; - if (ieee80211_is_ctl(fc)) - return RX_CONTINUE; + sdata1 = rcu_dereference(local->scan_sdata); + sdata2 = rcu_dereference(local->sched_scan_sdata); - if (skb->len < 24) - return RX_CONTINUE; + if (likely(!sdata1 && !sdata2)) + return; - presp = ieee80211_is_probe_resp(fc); - if (presp) { + if (ieee80211_is_probe_resp(mgmt->frame_control)) { /* ignore ProbeResp to foreign address */ - if (!ether_addr_equal(mgmt->da, sdata->vif.addr)) - return RX_DROP_MONITOR; + if ((!sdata1 || !ether_addr_equal(mgmt->da, sdata1->vif.addr)) && + (!sdata2 || !ether_addr_equal(mgmt->da, sdata2->vif.addr))) + return; - presp = true; elements = mgmt->u.probe_resp.variable; baselen = offsetof(struct ieee80211_mgmt, u.probe_resp.variable); + beacon = false; } else { - beacon = ieee80211_is_beacon(fc); baselen = offsetof(struct ieee80211_mgmt, u.beacon.variable); elements = mgmt->u.beacon.variable; + beacon = true; } - if (!presp && !beacon) - return RX_CONTINUE; - if (baselen > skb->len) - return RX_DROP_MONITOR; + return; ieee802_11_parse_elems(elements, skb->len - baselen, &elems); @@ -220,22 +215,16 @@ ieee80211_scan_rx(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb) else freq = rx_status->freq; - channel = ieee80211_get_channel(sdata->local->hw.wiphy, freq); + channel = ieee80211_get_channel(local->hw.wiphy, freq); if (!channel || channel->flags & IEEE80211_CHAN_DISABLED) - return RX_DROP_MONITOR; + return; - bss = ieee80211_bss_info_update(sdata->local, rx_status, + bss = ieee80211_bss_info_update(local, rx_status, mgmt, skb->len, &elems, channel, beacon); if (bss) - ieee80211_rx_bss_put(sdata->local, bss); - - if (channel == sdata->local->oper_channel) - return 
RX_CONTINUE; - - dev_kfree_skb(skb); - return RX_QUEUED; + ieee80211_rx_bss_put(local, bss); } /* return false if no more work */ @@ -293,7 +282,13 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted, return; if (was_hw_scan && !aborted && ieee80211_prep_hw_scan(local)) { - int rc = drv_hw_scan(local, local->scan_sdata, local->hw_scan_req); + int rc; + + rc = drv_hw_scan(local, + rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx)), + local->hw_scan_req); + if (rc == 0) return; } @@ -323,7 +318,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted, ieee80211_mlme_notify_scan_completed(local); ieee80211_ibss_notify_scan_completed(local); ieee80211_mesh_notify_scan_completed(local); - ieee80211_queue_work(&local->hw, &local->work_work); + ieee80211_start_next_roc(local); } void ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted) @@ -376,7 +371,7 @@ static int ieee80211_start_sw_scan(struct ieee80211_local *local) static bool ieee80211_can_scan(struct ieee80211_local *local, struct ieee80211_sub_if_data *sdata) { - if (!list_empty(&local->work_list)) + if (!list_empty(&local->roc_list)) return false; if (sdata->vif.type == NL80211_IFTYPE_STATION && @@ -394,7 +389,10 @@ void ieee80211_run_deferred_scan(struct ieee80211_local *local) if (!local->scan_req || local->scanning) return; - if (!ieee80211_can_scan(local, local->scan_sdata)) + if (!ieee80211_can_scan(local, + rcu_dereference_protected( + local->scan_sdata, + lockdep_is_held(&local->mtx)))) return; ieee80211_queue_delayed_work(&local->hw, &local->scan_work, @@ -405,9 +403,12 @@ static void ieee80211_scan_state_send_probe(struct ieee80211_local *local, unsigned long *next_delay) { int i; - struct ieee80211_sub_if_data *sdata = local->scan_sdata; + struct ieee80211_sub_if_data *sdata; enum ieee80211_band band = local->hw.conf.channel->band; + sdata = rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx));; + for (i = 0; i < local->scan_req->n_ssids; i++) ieee80211_send_probe_req( sdata, NULL, @@ -439,7 +440,7 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, if (!ieee80211_can_scan(local, sdata)) { /* wait for the work to finish/time out */ local->scan_req = req; - local->scan_sdata = sdata; + rcu_assign_pointer(local->scan_sdata, sdata); return 0; } @@ -473,7 +474,7 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, } local->scan_req = req; - local->scan_sdata = sdata; + rcu_assign_pointer(local->scan_sdata, sdata); if (local->ops->hw_scan) { __set_bit(SCAN_HW_SCANNING, &local->scanning); @@ -533,7 +534,7 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, ieee80211_recalc_idle(local); local->scan_req = NULL; - local->scan_sdata = NULL; + rcu_assign_pointer(local->scan_sdata, NULL); } return rc; @@ -720,7 +721,8 @@ void ieee80211_scan_work(struct work_struct *work) mutex_lock(&local->mtx); - sdata = local->scan_sdata; + sdata = rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx)); /* When scanning on-channel, the first-callback means completed. 
*/ if (test_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning)) { @@ -741,7 +743,7 @@ void ieee80211_scan_work(struct work_struct *work) int rc; local->scan_req = NULL; - local->scan_sdata = NULL; + rcu_assign_pointer(local->scan_sdata, NULL); rc = __ieee80211_start_scan(sdata, req); if (rc) { @@ -893,7 +895,9 @@ void ieee80211_scan_cancel(struct ieee80211_local *local) if (test_bit(SCAN_HW_SCANNING, &local->scanning)) { if (local->ops->cancel_hw_scan) - drv_cancel_hw_scan(local, local->scan_sdata); + drv_cancel_hw_scan(local, + rcu_dereference_protected(local->scan_sdata, + lockdep_is_held(&local->mtx))); goto out; } @@ -915,9 +919,9 @@ int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata, struct ieee80211_local *local = sdata->local; int ret, i; - mutex_lock(&sdata->local->mtx); + mutex_lock(&local->mtx); - if (local->sched_scanning) { + if (rcu_access_pointer(local->sched_scan_sdata)) { ret = -EBUSY; goto out; } @@ -928,6 +932,9 @@ int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata, } for (i = 0; i < IEEE80211_NUM_BANDS; i++) { + if (!local->hw.wiphy->bands[i]) + continue; + local->sched_scan_ies.ie[i] = kzalloc(2 + IEEE80211_MAX_SSID_LEN + local->scan_ies_len + @@ -948,7 +955,7 @@ int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata, ret = drv_sched_scan_start(local, sdata, req, &local->sched_scan_ies); if (ret == 0) { - local->sched_scanning = true; + rcu_assign_pointer(local->sched_scan_sdata, sdata); goto out; } @@ -956,7 +963,7 @@ out_free: while (i > 0) kfree(local->sched_scan_ies.ie[--i]); out: - mutex_unlock(&sdata->local->mtx); + mutex_unlock(&local->mtx); return ret; } @@ -965,22 +972,22 @@ int ieee80211_request_sched_scan_stop(struct ieee80211_sub_if_data *sdata) struct ieee80211_local *local = sdata->local; int ret = 0, i; - mutex_lock(&sdata->local->mtx); + mutex_lock(&local->mtx); if (!local->ops->sched_scan_stop) { ret = -ENOTSUPP; goto out; } - if (local->sched_scanning) { + if (rcu_access_pointer(local->sched_scan_sdata)) { for (i = 0; i < IEEE80211_NUM_BANDS; i++) kfree(local->sched_scan_ies.ie[i]); drv_sched_scan_stop(local, sdata); - local->sched_scanning = false; + rcu_assign_pointer(local->sched_scan_sdata, NULL); } out: - mutex_unlock(&sdata->local->mtx); + mutex_unlock(&local->mtx); return ret; } @@ -1004,7 +1011,7 @@ void ieee80211_sched_scan_stopped_work(struct work_struct *work) mutex_lock(&local->mtx); - if (!local->sched_scanning) { + if (!rcu_access_pointer(local->sched_scan_sdata)) { mutex_unlock(&local->mtx); return; } @@ -1012,7 +1019,7 @@ void ieee80211_sched_scan_stopped_work(struct work_struct *work) for (i = 0; i < IEEE80211_NUM_BANDS; i++) kfree(local->sched_scan_ies.ie[i]); - local->sched_scanning = false; + rcu_assign_pointer(local->sched_scan_sdata, NULL); mutex_unlock(&local->mtx); diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index de455f8bbb91..06fa75ceb025 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -169,9 +169,7 @@ void sta_info_free(struct ieee80211_local *local, struct sta_info *sta) if (sta->rate_ctrl) rate_control_free_sta(sta); -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "Destroyed STA %pM\n", sta->sta.addr); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ + sta_dbg(sta->sdata, "Destroyed STA %pM\n", sta->sta.addr); kfree(sta); } @@ -278,9 +276,7 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata, for (i = 0; i < NUM_RX_DATA_QUEUES; i++) sta->last_seq_ctrl[i] = cpu_to_le16(USHRT_MAX); 
-#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "Allocated STA %pM\n", sta->sta.addr); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ + sta_dbg(sdata, "Allocated STA %pM\n", sta->sta.addr); #ifdef CONFIG_MAC80211_MESH sta->plink_state = NL80211_PLINK_LISTEN; @@ -333,9 +329,9 @@ static int sta_info_insert_drv_state(struct ieee80211_local *local, } if (sdata->vif.type == NL80211_IFTYPE_ADHOC) { - printk(KERN_DEBUG - "%s: failed to move IBSS STA %pM to state %d (%d) - keeping it anyway.\n", - sdata->name, sta->sta.addr, state + 1, err); + sdata_info(sdata, + "failed to move IBSS STA %pM to state %d (%d) - keeping it anyway\n", + sta->sta.addr, state + 1, err); err = 0; } @@ -390,9 +386,7 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU) sinfo.generation = local->sta_generation; cfg80211_new_sta(sdata->dev, sta->sta.addr, &sinfo, GFP_KERNEL); -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "Inserted STA %pM\n", sta->sta.addr); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ + sta_dbg(sdata, "Inserted STA %pM\n", sta->sta.addr); /* move reference to rcu-protected */ rcu_read_lock(); @@ -618,10 +612,8 @@ static bool sta_info_cleanup_expire_buffered_ac(struct ieee80211_local *local, break; local->total_ps_buffered--; -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "Buffered frame expired (STA %pM)\n", + ps_dbg(sta->sdata, "Buffered frame expired (STA %pM)\n", sta->sta.addr); -#endif dev_kfree_skb(skb); } @@ -747,9 +739,8 @@ int __must_check __sta_info_destroy(struct sta_info *sta) mesh_accept_plinks_update(sdata); #endif -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - wiphy_debug(local->hw.wiphy, "Removed STA %pM\n", sta->sta.addr); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ + sta_dbg(sdata, "Removed STA %pM\n", sta->sta.addr); + cancel_work_sync(&sta->drv_unblock_wk); cfg80211_del_sta(sdata->dev, sta->sta.addr, GFP_KERNEL); @@ -889,10 +880,8 @@ void ieee80211_sta_expire(struct ieee80211_sub_if_data *sdata, continue; if (time_after(jiffies, sta->last_rx + exp_time)) { -#ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG "%s: expiring inactive STA %pM\n", - sdata->name, sta->sta.addr); -#endif + ibss_dbg(sdata, "expiring inactive STA %pM\n", + sta->sta.addr); WARN_ON(__sta_info_destroy(sta)); } } @@ -990,11 +979,9 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta) sta_info_recalc_tim(sta); -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "%s: STA %pM aid %d sending %d filtered/%d PS frames " - "since STA not sleeping anymore\n", sdata->name, + ps_dbg(sdata, + "STA %pM aid %d sending %d filtered/%d PS frames since STA not sleeping anymore\n", sta->sta.addr, sta->sta.aid, filtered, buffered); -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ } static void ieee80211_send_null_response(struct ieee80211_sub_if_data *sdata, @@ -1384,10 +1371,8 @@ int sta_info_move_state(struct sta_info *sta, return -EINVAL; } -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - printk(KERN_DEBUG "%s: moving STA %pM to state %d\n", - sta->sdata->name, sta->sta.addr, new_state); -#endif + sta_dbg(sta->sdata, "moving STA %pM to state %d\n", + sta->sta.addr, new_state); /* * notify the driver before the actual changes so it can diff --git a/net/mac80211/status.c b/net/mac80211/status.c index 28cfa981cfb1..8cd72914cdaf 100644 --- a/net/mac80211/status.c +++ b/net/mac80211/status.c @@ -155,13 +155,10 @@ static void ieee80211_handle_filtered_frame(struct ieee80211_local *local, return; } -#ifdef CONFIG_MAC80211_VERBOSE_DEBUG - if (net_ratelimit()) - 
wiphy_debug(local->hw.wiphy, - "dropped TX filtered frame, queue_len=%d PS=%d @%lu\n", - skb_queue_len(&sta->tx_filtered[ac]), - !!test_sta_flag(sta, WLAN_STA_PS_STA), jiffies); -#endif + ps_dbg_ratelimited(sta->sdata, + "dropped TX filtered frame, queue_len=%d PS=%d @%lu\n", + skb_queue_len(&sta->tx_filtered[ac]), + !!test_sta_flag(sta, WLAN_STA_PS_STA), jiffies); dev_kfree_skb(skb); } @@ -520,36 +517,21 @@ void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb) if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) { u64 cookie = (unsigned long)skb; + acked = info->flags & IEEE80211_TX_STAT_ACK; - if (ieee80211_is_nullfunc(hdr->frame_control) || - ieee80211_is_qos_nullfunc(hdr->frame_control)) { - acked = info->flags & IEEE80211_TX_STAT_ACK; + /* + * TODO: When we have non-netdev frame TX, + * we cannot use skb->dev->ieee80211_ptr + */ + if (ieee80211_is_nullfunc(hdr->frame_control) || + ieee80211_is_qos_nullfunc(hdr->frame_control)) cfg80211_probe_status(skb->dev, hdr->addr1, cookie, acked, GFP_ATOMIC); - } else { - struct ieee80211_work *wk; - - rcu_read_lock(); - list_for_each_entry_rcu(wk, &local->work_list, list) { - if (wk->type != IEEE80211_WORK_OFFCHANNEL_TX) - continue; - if (wk->offchan_tx.frame != skb) - continue; - wk->offchan_tx.status = true; - break; - } - rcu_read_unlock(); - if (local->hw_roc_skb_for_status == skb) { - cookie = local->hw_roc_cookie ^ 2; - local->hw_roc_skb_for_status = NULL; - } - + else cfg80211_mgmt_tx_status( - skb->dev, cookie, skb->data, skb->len, - !!(info->flags & IEEE80211_TX_STAT_ACK), - GFP_ATOMIC); - } + skb->dev->ieee80211_ptr, cookie, skb->data, + skb->len, acked, GFP_ATOMIC); } if (unlikely(info->ack_frame_id)) { @@ -589,7 +571,7 @@ void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb) /* send frame to monitor interfaces now */ rtap_len = ieee80211_tx_radiotap_len(info); if (WARN_ON_ONCE(skb_headroom(skb) < rtap_len)) { - printk(KERN_ERR "ieee80211_tx_status: headroom too small\n"); + pr_err("ieee80211_tx_status: headroom too small\n"); dev_kfree_skb(skb); return; } diff --git a/net/mac80211/tkip.c b/net/mac80211/tkip.c index 51077a956a83..57e14d59e12f 100644 --- a/net/mac80211/tkip.c +++ b/net/mac80211/tkip.c @@ -260,17 +260,6 @@ int ieee80211_tkip_decrypt_data(struct crypto_cipher *tfm, keyid = pos[3]; iv32 = get_unaligned_le32(pos + 4); pos += 8; -#ifdef CONFIG_MAC80211_TKIP_DEBUG - { - int i; - printk(KERN_DEBUG "TKIP decrypt: data(len=%zd)", payload_len); - for (i = 0; i < payload_len; i++) - printk(" %02x", payload[i]); - printk("\n"); - printk(KERN_DEBUG "TKIP decrypt: iv16=%04x iv32=%08x\n", - iv16, iv32); - } -#endif if (!(keyid & (1 << 5))) return TKIP_DECRYPT_NO_EXT_IV; @@ -281,16 +270,8 @@ int ieee80211_tkip_decrypt_data(struct crypto_cipher *tfm, if (key->u.tkip.rx[queue].state != TKIP_STATE_NOT_INIT && (iv32 < key->u.tkip.rx[queue].iv32 || (iv32 == key->u.tkip.rx[queue].iv32 && - iv16 <= key->u.tkip.rx[queue].iv16))) { -#ifdef CONFIG_MAC80211_TKIP_DEBUG - printk(KERN_DEBUG "TKIP replay detected for RX frame from " - "%pM (RX IV (%04x,%02x) <= prev. 
IV (%04x,%02x)\n", - ta, - iv32, iv16, key->u.tkip.rx[queue].iv32, - key->u.tkip.rx[queue].iv16); -#endif + iv16 <= key->u.tkip.rx[queue].iv16))) return TKIP_DECRYPT_REPLAY; - } if (only_iv) { res = TKIP_DECRYPT_OK; @@ -302,22 +283,6 @@ int ieee80211_tkip_decrypt_data(struct crypto_cipher *tfm, key->u.tkip.rx[queue].iv32 != iv32) { /* IV16 wrapped around - perform TKIP phase 1 */ tkip_mixing_phase1(tk, &key->u.tkip.rx[queue], ta, iv32); -#ifdef CONFIG_MAC80211_TKIP_DEBUG - { - int i; - u8 key_offset = NL80211_TKIP_DATA_OFFSET_ENCR_KEY; - printk(KERN_DEBUG "TKIP decrypt: Phase1 TA=%pM" - " TK=", ta); - for (i = 0; i < 16; i++) - printk("%02x ", - key->conf.key[key_offset + i]); - printk("\n"); - printk(KERN_DEBUG "TKIP decrypt: P1K="); - for (i = 0; i < 5; i++) - printk("%04x ", key->u.tkip.rx[queue].p1k[i]); - printk("\n"); - } -#endif } if (key->local->ops->update_tkip_key && key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE && @@ -333,15 +298,6 @@ int ieee80211_tkip_decrypt_data(struct crypto_cipher *tfm, } tkip_mixing_phase2(tk, &key->u.tkip.rx[queue], iv16, rc4key); -#ifdef CONFIG_MAC80211_TKIP_DEBUG - { - int i; - printk(KERN_DEBUG "TKIP decrypt: Phase2 rc4key="); - for (i = 0; i < 16; i++) - printk("%02x ", rc4key[i]); - printk("\n"); - } -#endif res = ieee80211_wep_decrypt_data(tfm, rc4key, 16, pos, payload_len - 12); done: diff --git a/net/mac80211/trace.c b/net/mac80211/trace.c new file mode 100644 index 000000000000..386e45d8a958 --- /dev/null +++ b/net/mac80211/trace.c @@ -0,0 +1,75 @@ +/* bug in tracepoint.h, it should include this */ +#include <linux/module.h> + +/* sparse isn't too happy with all macros... */ +#ifndef __CHECKER__ +#include <net/cfg80211.h> +#include "driver-ops.h" +#include "debug.h" +#define CREATE_TRACE_POINTS +#include "trace.h" + +#ifdef CONFIG_MAC80211_MESSAGE_TRACING +void __sdata_info(const char *fmt, ...) +{ + struct va_format vaf = { + .fmt = fmt, + }; + va_list args; + + va_start(args, fmt); + vaf.va = &args; + + pr_info("%pV", &vaf); + trace_mac80211_info(&vaf); + va_end(args); +} + +void __sdata_dbg(bool print, const char *fmt, ...) +{ + struct va_format vaf = { + .fmt = fmt, + }; + va_list args; + + va_start(args, fmt); + vaf.va = &args; + + if (print) + pr_debug("%pV", &vaf); + trace_mac80211_dbg(&vaf); + va_end(args); +} + +void __sdata_err(const char *fmt, ...) +{ + struct va_format vaf = { + .fmt = fmt, + }; + va_list args; + + va_start(args, fmt); + vaf.va = &args; + + pr_err("%pV", &vaf); + trace_mac80211_err(&vaf); + va_end(args); +} + +void __wiphy_dbg(struct wiphy *wiphy, bool print, const char *fmt, ...) 
+{ + struct va_format vaf = { + .fmt = fmt, + }; + va_list args; + + va_start(args, fmt); + vaf.va = &args; + + if (print) + wiphy_dbg(wiphy, "%pV", &vaf); + trace_mac80211_dbg(&vaf); + va_end(args); +} +#endif +#endif diff --git a/net/mac80211/driver-trace.h b/net/mac80211/trace.h index 6de00b2c268c..c6d33b55b2df 100644 --- a/net/mac80211/driver-trace.h +++ b/net/mac80211/trace.h @@ -306,7 +306,8 @@ TRACE_EVENT(drv_bss_info_changed, __field(u8, dtimper) __field(u16, bcnint) __field(u16, assoc_cap) - __field(u64, timestamp) + __field(u64, sync_tsf) + __field(u32, sync_device_ts) __field(u32, basic_rates) __field(u32, changed) __field(bool, enable_beacon) @@ -325,7 +326,8 @@ TRACE_EVENT(drv_bss_info_changed, __entry->dtimper = info->dtim_period; __entry->bcnint = info->beacon_int; __entry->assoc_cap = info->assoc_capability; - __entry->timestamp = info->last_tsf; + __entry->sync_tsf = info->sync_tsf; + __entry->sync_device_ts = info->sync_device_ts; __entry->basic_rates = info->basic_rates; __entry->enable_beacon = info->enable_beacon; __entry->ht_operation_mode = info->ht_operation_mode; @@ -1218,6 +1220,39 @@ DEFINE_EVENT(release_evt, drv_allow_buffered_frames, TP_ARGS(local, sta, tids, num_frames, reason, more_data) ); +TRACE_EVENT(drv_get_rssi, + TP_PROTO(struct ieee80211_local *local, struct ieee80211_sta *sta, + s8 rssi, int ret), + + TP_ARGS(local, sta, rssi, ret), + + TP_STRUCT__entry( + LOCAL_ENTRY + STA_ENTRY + __field(s8, rssi) + __field(int, ret) + ), + + TP_fast_assign( + LOCAL_ASSIGN; + STA_ASSIGN; + __entry->rssi = rssi; + __entry->ret = ret; + ), + + TP_printk( + LOCAL_PR_FMT STA_PR_FMT " rssi:%d ret:%d", + LOCAL_PR_ARG, STA_PR_ARG, __entry->rssi, __entry->ret + ) +); + +DEFINE_EVENT(local_sdata_evt, drv_mgd_prepare_tx, + TP_PROTO(struct ieee80211_local *local, + struct ieee80211_sub_if_data *sdata), + + TP_ARGS(local, sdata) +); + /* * Tracing for API calls that drivers call. */ @@ -1606,10 +1641,49 @@ TRACE_EVENT(stop_queue, LOCAL_PR_ARG, __entry->queue, __entry->reason ) ); + +#ifdef CONFIG_MAC80211_MESSAGE_TRACING +#undef TRACE_SYSTEM +#define TRACE_SYSTEM mac80211_msg + +#define MAX_MSG_LEN 100 + +DECLARE_EVENT_CLASS(mac80211_msg_event, + TP_PROTO(struct va_format *vaf), + + TP_ARGS(vaf), + + TP_STRUCT__entry( + __dynamic_array(char, msg, MAX_MSG_LEN) + ), + + TP_fast_assign( + WARN_ON_ONCE(vsnprintf(__get_dynamic_array(msg), + MAX_MSG_LEN, vaf->fmt, + *vaf->va) >= MAX_MSG_LEN); + ), + + TP_printk("%s", __get_str(msg)) +); + +DEFINE_EVENT(mac80211_msg_event, mac80211_info, + TP_PROTO(struct va_format *vaf), + TP_ARGS(vaf) +); +DEFINE_EVENT(mac80211_msg_event, mac80211_dbg, + TP_PROTO(struct va_format *vaf), + TP_ARGS(vaf) +); +DEFINE_EVENT(mac80211_msg_event, mac80211_err, + TP_PROTO(struct va_format *vaf), + TP_ARGS(vaf) +); +#endif + #endif /* !__MAC80211_DRIVER_TRACE || TRACE_HEADER_MULTI_READ */ #undef TRACE_INCLUDE_PATH #define TRACE_INCLUDE_PATH . 
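The new trace.c helpers and the mac80211_msg_event class above route every mac80211 debug message to two sinks: the regular printk path and a tracepoint that stores the formatted string in a bounded buffer (MAX_MSG_LEN). Below is a small userspace sketch of that format-once, fan-out idea, with a stand-in trace_msg() sink; the kernel version avoids formatting twice by passing a struct va_format through the "%pV" printk extension instead.

#include <stdarg.h>
#include <stdio.h>

#define MAX_MSG_LEN 100

/* Stand-in for the trace sink; in mac80211 this is a tracepoint. */
static void trace_msg(const char *msg)
{
        fprintf(stderr, "trace: %s\n", msg);
}

/* Format once, then hand the result to both the console and the
 * trace buffer -- the same shape as __sdata_dbg()/mac80211_dbg. */
static void sdata_dbg(int print, const char *fmt, ...)
{
        char msg[MAX_MSG_LEN];
        va_list args;

        va_start(args, fmt);
        vsnprintf(msg, sizeof(msg), fmt, args);
        va_end(args);

        if (print)
                printf("%s\n", msg);
        trace_msg(msg);
}

int main(void)
{
        sdata_dbg(1, "Allocated STA %02x:%02x", 0xaa, 0xbb);
        return 0;
}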
#undef TRACE_INCLUDE_FILE -#define TRACE_INCLUDE_FILE driver-trace +#define TRACE_INCLUDE_FILE trace #include <trace/define_trace.h> diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index e453212fa17f..acf712ffb5e6 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -140,6 +140,8 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx, if (r->flags & IEEE80211_RATE_MANDATORY_A) mrate = r->bitrate; break; + case IEEE80211_BAND_60GHZ: + /* TODO, for now fall through */ case IEEE80211_NUM_BANDS: WARN_ON(1); break; @@ -175,12 +177,6 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx, return cpu_to_le16(dur); } -static inline int is_ieee80211_device(struct ieee80211_local *local, - struct net_device *dev) -{ - return local == wdev_priv(dev->ieee80211_ptr); -} - /* tx handlers */ static ieee80211_tx_result debug_noinline ieee80211_tx_h_dynamic_ps(struct ieee80211_tx_data *tx) @@ -297,10 +293,10 @@ ieee80211_tx_h_check_assoc(struct ieee80211_tx_data *tx) if (unlikely(!assoc && ieee80211_is_data(hdr->frame_control))) { #ifdef CONFIG_MAC80211_VERBOSE_DEBUG - printk(KERN_DEBUG "%s: dropped data frame to not " - "associated station %pM\n", - tx->sdata->name, hdr->addr1); -#endif /* CONFIG_MAC80211_VERBOSE_DEBUG */ + sdata_info(tx->sdata, + "dropped data frame to not associated station %pM\n", + hdr->addr1); +#endif I802_DEBUG_INC(tx->local->tx_handlers_drop_not_assoc); return TX_DROP; } @@ -367,10 +363,7 @@ static void purge_old_ps_buffers(struct ieee80211_local *local) rcu_read_unlock(); local->total_ps_buffered = total; -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - wiphy_debug(local->hw.wiphy, "PS buffers full - purged %d frames\n", - purged); -#endif + ps_dbg_hw(&local->hw, "PS buffers full - purged %d frames\n", purged); } static ieee80211_tx_result @@ -412,10 +405,8 @@ ieee80211_tx_h_multicast_ps_buf(struct ieee80211_tx_data *tx) purge_old_ps_buffers(tx->local); if (skb_queue_len(&tx->sdata->bss->ps_bc_buf) >= AP_MAX_BC_BUFFER) { -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - net_dbg_ratelimited("%s: BC TX buffer full - dropping the oldest frame\n", - tx->sdata->name); -#endif + ps_dbg(tx->sdata, + "BC TX buffer full - dropping the oldest frame\n"); dev_kfree_skb(skb_dequeue(&tx->sdata->bss->ps_bc_buf)); } else tx->local->total_ps_buffered++; @@ -466,18 +457,15 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) return TX_CONTINUE; } -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - printk(KERN_DEBUG "STA %pM aid %d: PS buffer for AC %d\n", + ps_dbg(sta->sdata, "STA %pM aid %d: PS buffer for AC %d\n", sta->sta.addr, sta->sta.aid, ac); -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ if (tx->local->total_ps_buffered >= TOTAL_MAX_TX_BUFFER) purge_old_ps_buffers(tx->local); if (skb_queue_len(&sta->ps_tx_buf[ac]) >= STA_MAX_TX_BUFFER) { struct sk_buff *old = skb_dequeue(&sta->ps_tx_buf[ac]); -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - net_dbg_ratelimited("%s: STA %pM TX buffer for AC %d full - dropping oldest frame\n", - tx->sdata->name, sta->sta.addr, ac); -#endif + ps_dbg(tx->sdata, + "STA %pM TX buffer for AC %d full - dropping oldest frame\n", + sta->sta.addr, ac); dev_kfree_skb(old); } else tx->local->total_ps_buffered++; @@ -499,14 +487,11 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) sta_info_recalc_tim(sta); return TX_QUEUED; + } else if (unlikely(test_sta_flag(sta, WLAN_STA_PS_STA))) { + ps_dbg(tx->sdata, + "STA %pM in PS mode, but polling/in SP -> send frame\n", + sta->sta.addr); } -#ifdef CONFIG_MAC80211_VERBOSE_PS_DEBUG - else if 
(unlikely(test_sta_flag(sta, WLAN_STA_PS_STA))) { - printk(KERN_DEBUG - "%s: STA %pM in PS mode, but polling/in SP -> send frame\n", - tx->sdata->name, sta->sta.addr); - } -#endif /* CONFIG_MAC80211_VERBOSE_PS_DEBUG */ return TX_CONTINUE; } @@ -538,7 +523,7 @@ ieee80211_tx_h_check_control_port_protocol(struct ieee80211_tx_data *tx) static ieee80211_tx_result debug_noinline ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx) { - struct ieee80211_key *key = NULL; + struct ieee80211_key *key; struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)tx->skb->data; @@ -557,16 +542,23 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx) else if (!is_multicast_ether_addr(hdr->addr1) && (key = rcu_dereference(tx->sdata->default_unicast_key))) tx->key = key; - else if (tx->sdata->drop_unencrypted && - (tx->skb->protocol != tx->sdata->control_port_protocol) && - !(info->flags & IEEE80211_TX_CTL_INJECTED) && - (!ieee80211_is_robust_mgmt_frame(hdr) || - (ieee80211_is_action(hdr->frame_control) && - tx->sta && test_sta_flag(tx->sta, WLAN_STA_MFP)))) { + else if (info->flags & IEEE80211_TX_CTL_INJECTED) + tx->key = NULL; + else if (!tx->sdata->drop_unencrypted) + tx->key = NULL; + else if (tx->skb->protocol == tx->sdata->control_port_protocol) + tx->key = NULL; + else if (ieee80211_is_robust_mgmt_frame(hdr) && + !(ieee80211_is_action(hdr->frame_control) && + tx->sta && test_sta_flag(tx->sta, WLAN_STA_MFP))) + tx->key = NULL; + else if (ieee80211_is_mgmt(hdr->frame_control) && + !ieee80211_is_robust_mgmt_frame(hdr)) + tx->key = NULL; + else { I802_DEBUG_INC(tx->local->tx_handlers_drop_unencrypted); return TX_DROP; - } else - tx->key = NULL; + } if (tx->key) { bool skip_hw = false; @@ -974,8 +966,7 @@ ieee80211_tx_h_fragment(struct ieee80211_tx_data *tx) info->control.rates[1].idx = -1; info->control.rates[2].idx = -1; info->control.rates[3].idx = -1; - info->control.rates[4].idx = -1; - BUILD_BUG_ON(IEEE80211_TX_MAX_RATES != 5); + BUILD_BUG_ON(IEEE80211_TX_MAX_RATES != 4); info->flags &= ~IEEE80211_TX_CTL_RATE_CTRL_PROBE; } else { hdr->frame_control &= ~morefrags; @@ -1310,11 +1301,8 @@ static bool __ieee80211_tx(struct ieee80211_local *local, break; } - if (local->ops->tx_frags) - drv_tx_frags(local, vif, pubsta, skbs); - else - result = ieee80211_tx_frags(local, vif, pubsta, skbs, - txpending); + result = ieee80211_tx_frags(local, vif, pubsta, skbs, + txpending); ieee80211_tpt_led_trig_tx(local, fc, led_len); ieee80211_led_tx(local, 1); @@ -1836,6 +1824,9 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb, /* RA TA mDA mSA AE:DA SA */ mesh_da = mppath->mpp; is_mesh_mcast = 0; + } else if (mpath) { + mesh_da = mpath->dst; + is_mesh_mcast = 0; } else { /* DA TA mSA AE:SA */ mesh_da = bcast; @@ -1965,7 +1956,7 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb, (cpu_to_be16(ethertype) != sdata->control_port_protocol || !ether_addr_equal(sdata->vif.addr, skb->data + ETH_ALEN)))) { #ifdef CONFIG_MAC80211_VERBOSE_DEBUG - net_dbg_ratelimited("%s: dropped frame to %pM (unauthorized port)\n", + net_info_ratelimited("%s: dropped frame to %pM (unauthorized port)\n", dev->name, hdr.addr1); #endif @@ -2437,9 +2428,9 @@ struct sk_buff *ieee80211_beacon_get_tim(struct ieee80211_hw *hw, *pos++ = WLAN_EID_SSID; *pos++ = 0x0; - if (ieee80211_add_srates_ie(&sdata->vif, skb, true) || + if (ieee80211_add_srates_ie(sdata, skb, true) || mesh_add_ds_params_ie(skb, sdata) || - ieee80211_add_ext_srates_ie(&sdata->vif, skb, true) || + 
ieee80211_add_ext_srates_ie(sdata, skb, true) || mesh_add_rsn_ie(skb, sdata) || mesh_add_ht_cap_ie(skb, sdata) || mesh_add_ht_oper_ie(skb, sdata) || @@ -2733,7 +2724,7 @@ EXPORT_SYMBOL(ieee80211_get_buffered_bc); void ieee80211_tx_skb_tid(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, int tid) { - int ac = ieee802_1d_to_ac[tid]; + int ac = ieee802_1d_to_ac[tid & 7]; skb_set_mac_header(skb, 0); skb_set_network_header(skb, 0); diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 8dd4712620ff..39b82fee4904 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -268,6 +268,10 @@ EXPORT_SYMBOL(ieee80211_ctstoself_duration); void ieee80211_propagate_queue_wake(struct ieee80211_local *local, int queue) { struct ieee80211_sub_if_data *sdata; + int n_acs = IEEE80211_NUM_ACS; + + if (local->hw.queues < IEEE80211_NUM_ACS) + n_acs = 1; list_for_each_entry_rcu(sdata, &local->interfaces, list) { int ac; @@ -279,7 +283,7 @@ void ieee80211_propagate_queue_wake(struct ieee80211_local *local, int queue) local->queue_stop_reasons[sdata->vif.cab_queue] != 0) continue; - for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { + for (ac = 0; ac < n_acs; ac++) { int ac_queue = sdata->vif.hw_queue[ac]; if (ac_queue == queue || @@ -341,6 +345,7 @@ static void __ieee80211_stop_queue(struct ieee80211_hw *hw, int queue, { struct ieee80211_local *local = hw_to_local(hw); struct ieee80211_sub_if_data *sdata; + int n_acs = IEEE80211_NUM_ACS; trace_stop_queue(local, queue, reason); @@ -352,11 +357,14 @@ static void __ieee80211_stop_queue(struct ieee80211_hw *hw, int queue, __set_bit(reason, &local->queue_stop_reasons[queue]); + if (local->hw.queues < IEEE80211_NUM_ACS) + n_acs = 1; + rcu_read_lock(); list_for_each_entry_rcu(sdata, &local->interfaces, list) { int ac; - for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { + for (ac = 0; ac < n_acs; ac++) { if (sdata->vif.hw_queue[ac] == queue || sdata->vif.cab_queue == queue) netif_stop_subqueue(sdata->dev, ac); @@ -521,6 +529,11 @@ void ieee80211_iterate_active_interfaces( &sdata->vif); } + sdata = rcu_dereference_protected(local->monitor_sdata, + lockdep_is_held(&local->iflist_mtx)); + if (sdata) + iterator(data, sdata->vif.addr, &sdata->vif); + mutex_unlock(&local->iflist_mtx); } EXPORT_SYMBOL_GPL(ieee80211_iterate_active_interfaces); @@ -549,6 +562,10 @@ void ieee80211_iterate_active_interfaces_atomic( &sdata->vif); } + sdata = rcu_dereference(local->monitor_sdata); + if (sdata) + iterator(data, sdata->vif.addr, &sdata->vif); + rcu_read_unlock(); } EXPORT_SYMBOL_GPL(ieee80211_iterate_active_interfaces_atomic); @@ -804,7 +821,7 @@ void ieee80211_set_wmm_default(struct ieee80211_sub_if_data *sdata, struct ieee80211_local *local = sdata->local; struct ieee80211_tx_queue_params qparam; int ac; - bool use_11b; + bool use_11b, enable_qos; int aCWmin, aCWmax; if (!local->ops->conf_tx) @@ -818,6 +835,13 @@ void ieee80211_set_wmm_default(struct ieee80211_sub_if_data *sdata, use_11b = (local->hw.conf.channel->band == IEEE80211_BAND_2GHZ) && !(sdata->flags & IEEE80211_SDATA_OPERATING_GMODE); + /* + * By default disable QoS in STA mode for old access points, which do + * not support 802.11e. New APs will provide proper queue parameters, + * that we will configure later. 
+ */ + enable_qos = (sdata->vif.type != NL80211_IFTYPE_STATION); + for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { /* Set defaults according to 802.11-2007 Table 7-37 */ aCWmax = 1023; @@ -826,38 +850,47 @@ void ieee80211_set_wmm_default(struct ieee80211_sub_if_data *sdata, else aCWmin = 15; - switch (ac) { - case IEEE80211_AC_BK: - qparam.cw_max = aCWmax; - qparam.cw_min = aCWmin; - qparam.txop = 0; - qparam.aifs = 7; - break; - default: /* never happens but let's not leave undefined */ - case IEEE80211_AC_BE: + if (enable_qos) { + switch (ac) { + case IEEE80211_AC_BK: + qparam.cw_max = aCWmax; + qparam.cw_min = aCWmin; + qparam.txop = 0; + qparam.aifs = 7; + break; + /* never happens but let's not leave undefined */ + default: + case IEEE80211_AC_BE: + qparam.cw_max = aCWmax; + qparam.cw_min = aCWmin; + qparam.txop = 0; + qparam.aifs = 3; + break; + case IEEE80211_AC_VI: + qparam.cw_max = aCWmin; + qparam.cw_min = (aCWmin + 1) / 2 - 1; + if (use_11b) + qparam.txop = 6016/32; + else + qparam.txop = 3008/32; + qparam.aifs = 2; + break; + case IEEE80211_AC_VO: + qparam.cw_max = (aCWmin + 1) / 2 - 1; + qparam.cw_min = (aCWmin + 1) / 4 - 1; + if (use_11b) + qparam.txop = 3264/32; + else + qparam.txop = 1504/32; + qparam.aifs = 2; + break; + } + } else { + /* Confiure old 802.11b/g medium access rules. */ qparam.cw_max = aCWmax; qparam.cw_min = aCWmin; qparam.txop = 0; - qparam.aifs = 3; - break; - case IEEE80211_AC_VI: - qparam.cw_max = aCWmin; - qparam.cw_min = (aCWmin + 1) / 2 - 1; - if (use_11b) - qparam.txop = 6016/32; - else - qparam.txop = 3008/32; - qparam.aifs = 2; - break; - case IEEE80211_AC_VO: - qparam.cw_max = (aCWmin + 1) / 2 - 1; - qparam.cw_min = (aCWmin + 1) / 4 - 1; - if (use_11b) - qparam.txop = 3264/32; - else - qparam.txop = 1504/32; qparam.aifs = 2; - break; } qparam.uapsd = false; @@ -866,12 +899,8 @@ void ieee80211_set_wmm_default(struct ieee80211_sub_if_data *sdata, drv_conf_tx(local, sdata, ac, &qparam); } - /* after reinitialize QoS TX queues setting to default, - * disable QoS at all */ - if (sdata->vif.type != NL80211_IFTYPE_MONITOR) { - sdata->vif.bss_conf.qos = - sdata->vif.type != NL80211_IFTYPE_STATION; + sdata->vif.bss_conf.qos = enable_qos; if (bss_notify) ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_QOS); @@ -979,6 +1008,8 @@ int ieee80211_build_preq_ies(struct ieee80211_local *local, u8 *buffer, int ext_rates_len; sband = local->hw.wiphy->bands[band]; + if (WARN_ON_ONCE(!sband)) + return 0; pos = buffer; @@ -1060,6 +1091,10 @@ int ieee80211_build_preq_ies(struct ieee80211_local *local, u8 *buffer, pos += noffset - offset; } + if (sband->vht_cap.vht_supported) + pos = ieee80211_ie_build_vht_cap(pos, &sband->vht_cap, + sband->vht_cap.cap); + return pos - buffer; } @@ -1267,14 +1302,19 @@ int ieee80211_reconfig(struct ieee80211_local *local) /* add STAs back */ mutex_lock(&local->sta_mtx); list_for_each_entry(sta, &local->sta_list, list) { - if (sta->uploaded) { - enum ieee80211_sta_state state; + enum ieee80211_sta_state state; - for (state = IEEE80211_STA_NOTEXIST; - state < sta->sta_state; state++) - WARN_ON(drv_sta_state(local, sta->sdata, sta, - state, state + 1)); - } + if (!sta->uploaded) + continue; + + /* AP-mode stations will be added later */ + if (sta->sdata->vif.type == NL80211_IFTYPE_AP) + continue; + + for (state = IEEE80211_STA_NOTEXIST; + state < sta->sta_state; state++) + WARN_ON(drv_sta_state(local, sta->sdata, sta, state, + state + 1)); } mutex_unlock(&local->sta_mtx); @@ -1371,12 +1411,33 @@ int ieee80211_reconfig(struct 
ieee80211_local *local) } } + /* APs are now beaconing, add back stations */ + mutex_lock(&local->sta_mtx); + list_for_each_entry(sta, &local->sta_list, list) { + enum ieee80211_sta_state state; + + if (!sta->uploaded) + continue; + + if (sta->sdata->vif.type != NL80211_IFTYPE_AP) + continue; + + for (state = IEEE80211_STA_NOTEXIST; + state < sta->sta_state; state++) + WARN_ON(drv_sta_state(local, sta->sdata, sta, state, + state + 1)); + } + mutex_unlock(&local->sta_mtx); + /* add back keys */ list_for_each_entry(sdata, &local->interfaces, list) if (ieee80211_sdata_running(sdata)) ieee80211_enable_keys(sdata); wake_up: + local->in_reconfig = false; + barrier(); + /* * Clear the WLAN_STA_BLOCK_BA flag so new aggregation * sessions can be established after a resume. @@ -1661,6 +1722,27 @@ u8 *ieee80211_ie_build_ht_cap(u8 *pos, struct ieee80211_sta_ht_cap *ht_cap, return pos; } +u8 *ieee80211_ie_build_vht_cap(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap, + u32 cap) +{ + __le32 tmp; + + *pos++ = WLAN_EID_VHT_CAPABILITY; + *pos++ = sizeof(struct ieee80211_vht_capabilities); + memset(pos, 0, sizeof(struct ieee80211_vht_capabilities)); + + /* capability flags */ + tmp = cpu_to_le32(cap); + memcpy(pos, &tmp, sizeof(u32)); + pos += sizeof(u32); + + /* VHT MCS set */ + memcpy(pos, &vht_cap->vht_mcs, sizeof(vht_cap->vht_mcs)); + pos += sizeof(vht_cap->vht_mcs); + + return pos; +} + u8 *ieee80211_ie_build_ht_oper(u8 *pos, struct ieee80211_sta_ht_cap *ht_cap, struct ieee80211_channel *channel, enum nl80211_channel_type channel_type, @@ -1726,15 +1808,14 @@ ieee80211_ht_oper_to_channel_type(struct ieee80211_ht_operation *ht_oper) return channel_type; } -int ieee80211_add_srates_ie(struct ieee80211_vif *vif, +int ieee80211_add_srates_ie(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, bool need_basic) { - struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif); struct ieee80211_local *local = sdata->local; struct ieee80211_supported_band *sband; int rate; u8 i, rates, *pos; - u32 basic_rates = vif->bss_conf.basic_rates; + u32 basic_rates = sdata->vif.bss_conf.basic_rates; sband = local->hw.wiphy->bands[local->hw.conf.channel->band]; rates = sband->n_bitrates; @@ -1758,15 +1839,14 @@ int ieee80211_add_srates_ie(struct ieee80211_vif *vif, return 0; } -int ieee80211_add_ext_srates_ie(struct ieee80211_vif *vif, +int ieee80211_add_ext_srates_ie(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, bool need_basic) { - struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif); struct ieee80211_local *local = sdata->local; struct ieee80211_supported_band *sband; int rate; u8 i, exrates, *pos; - u32 basic_rates = vif->bss_conf.basic_rates; + u32 basic_rates = sdata->vif.bss_conf.basic_rates; sband = local->hw.wiphy->bands[local->hw.conf.channel->band]; exrates = sband->n_bitrates; diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c index c3d643a6536c..cea06e9f26f4 100644 --- a/net/mac80211/wme.c +++ b/net/mac80211/wme.c @@ -52,11 +52,11 @@ static int wme_downgrade_ac(struct sk_buff *skb) } } -static u16 ieee80211_downgrade_queue(struct ieee80211_local *local, +static u16 ieee80211_downgrade_queue(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb) { /* in case we are a client verify acm is not set for this ac */ - while (unlikely(local->wmm_acm & BIT(skb->priority))) { + while (unlikely(sdata->wmm_acm & BIT(skb->priority))) { if (wme_downgrade_ac(skb)) { /* * This should not really happen. 
The AP has marked all @@ -73,10 +73,11 @@ static u16 ieee80211_downgrade_queue(struct ieee80211_local *local, } /* Indicate which queue to use for this fully formed 802.11 frame */ -u16 ieee80211_select_queue_80211(struct ieee80211_local *local, +u16 ieee80211_select_queue_80211(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, struct ieee80211_hdr *hdr) { + struct ieee80211_local *local = sdata->local; u8 *p; if (local->hw.queues < IEEE80211_NUM_ACS) @@ -94,7 +95,7 @@ u16 ieee80211_select_queue_80211(struct ieee80211_local *local, p = ieee80211_get_qos_ctl(hdr); skb->priority = *p & IEEE80211_QOS_CTL_TAG1D_MASK; - return ieee80211_downgrade_queue(local, skb); + return ieee80211_downgrade_queue(sdata, skb); } /* Indicate which queue to use. */ @@ -156,7 +157,7 @@ u16 ieee80211_select_queue(struct ieee80211_sub_if_data *sdata, * data frame has */ skb->priority = cfg80211_classify8021d(skb); - return ieee80211_downgrade_queue(local, skb); + return ieee80211_downgrade_queue(sdata, skb); } void ieee80211_set_qos_hdr(struct ieee80211_sub_if_data *sdata, diff --git a/net/mac80211/wme.h b/net/mac80211/wme.h index ca80818b7b66..7fea4bb8acbc 100644 --- a/net/mac80211/wme.h +++ b/net/mac80211/wme.h @@ -15,7 +15,7 @@ extern const int ieee802_1d_to_ac[8]; -u16 ieee80211_select_queue_80211(struct ieee80211_local *local, +u16 ieee80211_select_queue_80211(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb, struct ieee80211_hdr *hdr); u16 ieee80211_select_queue(struct ieee80211_sub_if_data *sdata, diff --git a/net/mac80211/work.c b/net/mac80211/work.c deleted file mode 100644 index b2650a9d45ff..000000000000 --- a/net/mac80211/work.c +++ /dev/null @@ -1,370 +0,0 @@ -/* - * mac80211 work implementation - * - * Copyright 2003-2008, Jouni Malinen <j@w1.fi> - * Copyright 2004, Instant802 Networks, Inc. - * Copyright 2005, Devicescape Software, Inc. - * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> - * Copyright 2007, Michael Wu <flamingice@sourmilk.net> - * Copyright 2009, Johannes Berg <johannes@sipsolutions.net> - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include <linux/delay.h> -#include <linux/if_ether.h> -#include <linux/skbuff.h> -#include <linux/if_arp.h> -#include <linux/etherdevice.h> -#include <linux/crc32.h> -#include <linux/slab.h> -#include <net/mac80211.h> -#include <asm/unaligned.h> - -#include "ieee80211_i.h" -#include "rate.h" -#include "driver-ops.h" - -enum work_action { - WORK_ACT_NONE, - WORK_ACT_TIMEOUT, -}; - - -/* utils */ -static inline void ASSERT_WORK_MTX(struct ieee80211_local *local) -{ - lockdep_assert_held(&local->mtx); -} - -/* - * We can have multiple work items (and connection probing) - * scheduling this timer, but we need to take care to only - * reschedule it when it should fire _earlier_ than it was - * asked for before, or if it's not pending right now. This - * function ensures that. Note that it then is required to - * run this function for all timeouts after the first one - * has happened -- the work that runs from this timer will - * do that. 
- */ -static void run_again(struct ieee80211_local *local, - unsigned long timeout) -{ - ASSERT_WORK_MTX(local); - - if (!timer_pending(&local->work_timer) || - time_before(timeout, local->work_timer.expires)) - mod_timer(&local->work_timer, timeout); -} - -void free_work(struct ieee80211_work *wk) -{ - kfree_rcu(wk, rcu_head); -} - -static enum work_action __must_check -ieee80211_remain_on_channel_timeout(struct ieee80211_work *wk) -{ - /* - * First time we run, do nothing -- the generic code will - * have switched to the right channel etc. - */ - if (!wk->started) { - wk->timeout = jiffies + msecs_to_jiffies(wk->remain.duration); - - cfg80211_ready_on_channel(wk->sdata->dev, (unsigned long) wk, - wk->chan, wk->chan_type, - wk->remain.duration, GFP_KERNEL); - - return WORK_ACT_NONE; - } - - return WORK_ACT_TIMEOUT; -} - -static enum work_action __must_check -ieee80211_offchannel_tx(struct ieee80211_work *wk) -{ - if (!wk->started) { - wk->timeout = jiffies + msecs_to_jiffies(wk->offchan_tx.wait); - - /* - * After this, offchan_tx.frame remains but now is no - * longer a valid pointer -- we still need it as the - * cookie for canceling this work/status matching. - */ - ieee80211_tx_skb(wk->sdata, wk->offchan_tx.frame); - - return WORK_ACT_NONE; - } - - return WORK_ACT_TIMEOUT; -} - -static void ieee80211_work_timer(unsigned long data) -{ - struct ieee80211_local *local = (void *) data; - - if (local->quiescing) - return; - - ieee80211_queue_work(&local->hw, &local->work_work); -} - -static void ieee80211_work_work(struct work_struct *work) -{ - struct ieee80211_local *local = - container_of(work, struct ieee80211_local, work_work); - struct ieee80211_work *wk, *tmp; - LIST_HEAD(free_work); - enum work_action rma; - bool remain_off_channel = false; - - /* - * ieee80211_queue_work() should have picked up most cases, - * here we'll pick the rest. - */ - if (WARN(local->suspended, "work scheduled while going to suspend\n")) - return; - - mutex_lock(&local->mtx); - - if (local->scanning) { - mutex_unlock(&local->mtx); - return; - } - - ieee80211_recalc_idle(local); - - list_for_each_entry_safe(wk, tmp, &local->work_list, list) { - bool started = wk->started; - - /* mark work as started if it's on the current off-channel */ - if (!started && local->tmp_channel && - wk->chan == local->tmp_channel && - wk->chan_type == local->tmp_channel_type) { - started = true; - wk->timeout = jiffies; - } - - if (!started && !local->tmp_channel) { - ieee80211_offchannel_stop_vifs(local, true); - - local->tmp_channel = wk->chan; - local->tmp_channel_type = wk->chan_type; - - ieee80211_hw_config(local, 0); - - started = true; - wk->timeout = jiffies; - } - - /* don't try to work with items that aren't started */ - if (!started) - continue; - - if (time_is_after_jiffies(wk->timeout)) { - /* - * This work item isn't supposed to be worked on - * right now, but take care to adjust the timer - * properly. 
- */ - run_again(local, wk->timeout); - continue; - } - - switch (wk->type) { - default: - WARN_ON(1); - /* nothing */ - rma = WORK_ACT_NONE; - break; - case IEEE80211_WORK_ABORT: - rma = WORK_ACT_TIMEOUT; - break; - case IEEE80211_WORK_REMAIN_ON_CHANNEL: - rma = ieee80211_remain_on_channel_timeout(wk); - break; - case IEEE80211_WORK_OFFCHANNEL_TX: - rma = ieee80211_offchannel_tx(wk); - break; - } - - wk->started = started; - - switch (rma) { - case WORK_ACT_NONE: - /* might have changed the timeout */ - run_again(local, wk->timeout); - break; - case WORK_ACT_TIMEOUT: - list_del_rcu(&wk->list); - synchronize_rcu(); - list_add(&wk->list, &free_work); - break; - default: - WARN(1, "unexpected: %d", rma); - } - } - - list_for_each_entry(wk, &local->work_list, list) { - if (!wk->started) - continue; - if (wk->chan != local->tmp_channel || - wk->chan_type != local->tmp_channel_type) - continue; - remain_off_channel = true; - } - - if (!remain_off_channel && local->tmp_channel) { - local->tmp_channel = NULL; - ieee80211_hw_config(local, 0); - - ieee80211_offchannel_return(local, true); - - /* give connection some time to breathe */ - run_again(local, jiffies + HZ/2); - } - - ieee80211_recalc_idle(local); - ieee80211_run_deferred_scan(local); - - mutex_unlock(&local->mtx); - - list_for_each_entry_safe(wk, tmp, &free_work, list) { - wk->done(wk, NULL); - list_del(&wk->list); - kfree(wk); - } -} - -void ieee80211_add_work(struct ieee80211_work *wk) -{ - struct ieee80211_local *local; - - if (WARN_ON(!wk->chan)) - return; - - if (WARN_ON(!wk->sdata)) - return; - - if (WARN_ON(!wk->done)) - return; - - if (WARN_ON(!ieee80211_sdata_running(wk->sdata))) - return; - - wk->started = false; - - local = wk->sdata->local; - mutex_lock(&local->mtx); - list_add_tail(&wk->list, &local->work_list); - mutex_unlock(&local->mtx); - - ieee80211_queue_work(&local->hw, &local->work_work); -} - -void ieee80211_work_init(struct ieee80211_local *local) -{ - INIT_LIST_HEAD(&local->work_list); - setup_timer(&local->work_timer, ieee80211_work_timer, - (unsigned long)local); - INIT_WORK(&local->work_work, ieee80211_work_work); -} - -void ieee80211_work_purge(struct ieee80211_sub_if_data *sdata) -{ - struct ieee80211_local *local = sdata->local; - struct ieee80211_work *wk; - bool cleanup = false; - - mutex_lock(&local->mtx); - list_for_each_entry(wk, &local->work_list, list) { - if (wk->sdata != sdata) - continue; - cleanup = true; - wk->type = IEEE80211_WORK_ABORT; - wk->started = true; - wk->timeout = jiffies; - } - mutex_unlock(&local->mtx); - - /* run cleanups etc. */ - if (cleanup) - ieee80211_work_work(&local->work_work); - - mutex_lock(&local->mtx); - list_for_each_entry(wk, &local->work_list, list) { - if (wk->sdata != sdata) - continue; - WARN_ON(1); - break; - } - mutex_unlock(&local->mtx); -} - -static enum work_done_result ieee80211_remain_done(struct ieee80211_work *wk, - struct sk_buff *skb) -{ - /* - * We are done serving the remain-on-channel command. 
- */ - cfg80211_remain_on_channel_expired(wk->sdata->dev, (unsigned long) wk, - wk->chan, wk->chan_type, - GFP_KERNEL); - - return WORK_DONE_DESTROY; -} - -int ieee80211_wk_remain_on_channel(struct ieee80211_sub_if_data *sdata, - struct ieee80211_channel *chan, - enum nl80211_channel_type channel_type, - unsigned int duration, u64 *cookie) -{ - struct ieee80211_work *wk; - - wk = kzalloc(sizeof(*wk), GFP_KERNEL); - if (!wk) - return -ENOMEM; - - wk->type = IEEE80211_WORK_REMAIN_ON_CHANNEL; - wk->chan = chan; - wk->chan_type = channel_type; - wk->sdata = sdata; - wk->done = ieee80211_remain_done; - - wk->remain.duration = duration; - - *cookie = (unsigned long) wk; - - ieee80211_add_work(wk); - - return 0; -} - -int ieee80211_wk_cancel_remain_on_channel(struct ieee80211_sub_if_data *sdata, - u64 cookie) -{ - struct ieee80211_local *local = sdata->local; - struct ieee80211_work *wk, *tmp; - bool found = false; - - mutex_lock(&local->mtx); - list_for_each_entry_safe(wk, tmp, &local->work_list, list) { - if ((unsigned long) wk == cookie) { - wk->timeout = jiffies; - found = true; - break; - } - } - mutex_unlock(&local->mtx); - - if (!found) - return -ENOENT; - - ieee80211_queue_work(&local->hw, &local->work_work); - - return 0; -} diff --git a/net/mac802154/Makefile b/net/mac802154/Makefile index ec1bd3fc1273..57cf5d1a2e4a 100644 --- a/net/mac802154/Makefile +++ b/net/mac802154/Makefile @@ -1,2 +1,2 @@ obj-$(CONFIG_MAC802154) += mac802154.o -mac802154-objs := ieee802154_dev.o rx.o tx.o mac_cmd.o mib.o monitor.o +mac802154-objs := ieee802154_dev.o rx.o tx.o mac_cmd.o mib.o monitor.o wpan.o diff --git a/net/mac802154/ieee802154_dev.c b/net/mac802154/ieee802154_dev.c index e3edfb0661b0..e748aed290aa 100644 --- a/net/mac802154/ieee802154_dev.c +++ b/net/mac802154/ieee802154_dev.c @@ -140,6 +140,10 @@ mac802154_add_iface(struct wpan_phy *phy, const char *name, int type) dev = alloc_netdev(sizeof(struct mac802154_sub_if_data), name, mac802154_monitor_setup); break; + case IEEE802154_DEV_WPAN: + dev = alloc_netdev(sizeof(struct mac802154_sub_if_data), + name, mac802154_wpan_setup); + break; default: dev = NULL; err = -EINVAL; diff --git a/net/mac802154/mac802154.h b/net/mac802154/mac802154.h index 789d9c948aec..a4dcaf1dd4b6 100644 --- a/net/mac802154/mac802154.h +++ b/net/mac802154/mac802154.h @@ -93,6 +93,7 @@ struct mac802154_sub_if_data { #define MAC802154_CHAN_NONE (~(u8)0) /* No channel is assigned */ extern struct ieee802154_reduced_mlme_ops mac802154_mlme_reduced; +extern struct ieee802154_mlme_ops mac802154_mlme_wpan; int mac802154_slave_open(struct net_device *dev); int mac802154_slave_close(struct net_device *dev); @@ -100,10 +101,18 @@ int mac802154_slave_close(struct net_device *dev); void mac802154_monitors_rx(struct mac802154_priv *priv, struct sk_buff *skb); void mac802154_monitor_setup(struct net_device *dev); +void mac802154_wpans_rx(struct mac802154_priv *priv, struct sk_buff *skb); +void mac802154_wpan_setup(struct net_device *dev); + netdev_tx_t mac802154_tx(struct mac802154_priv *priv, struct sk_buff *skb, u8 page, u8 chan); /* MIB callbacks */ +void mac802154_dev_set_short_addr(struct net_device *dev, u16 val); +u16 mac802154_dev_get_short_addr(const struct net_device *dev); void mac802154_dev_set_ieee_addr(struct net_device *dev); +u16 mac802154_dev_get_pan_id(const struct net_device *dev); +void mac802154_dev_set_pan_id(struct net_device *dev, u16 val); +void mac802154_dev_set_page_channel(struct net_device *dev, u8 page, u8 chan); #endif /* MAC802154_H */ diff --git 
a/net/mac802154/mac_cmd.c b/net/mac802154/mac_cmd.c index 7a5d0e052cd7..d8d277006089 100644 --- a/net/mac802154/mac_cmd.c +++ b/net/mac802154/mac_cmd.c @@ -25,13 +25,37 @@ #include <linux/skbuff.h> #include <linux/if_arp.h> +#include <net/ieee802154.h> #include <net/ieee802154_netdev.h> #include <net/wpan-phy.h> #include <net/mac802154.h> +#include <net/nl802154.h> #include "mac802154.h" -struct wpan_phy *mac802154_get_phy(const struct net_device *dev) +static int mac802154_mlme_start_req(struct net_device *dev, + struct ieee802154_addr *addr, + u8 channel, u8 page, + u8 bcn_ord, u8 sf_ord, + u8 pan_coord, u8 blx, + u8 coord_realign) +{ + BUG_ON(addr->addr_type != IEEE802154_ADDR_SHORT); + + mac802154_dev_set_pan_id(dev, addr->pan_id); + mac802154_dev_set_short_addr(dev, addr->short_addr); + mac802154_dev_set_ieee_addr(dev); + mac802154_dev_set_page_channel(dev, page, channel); + + /* FIXME: add validation for unused parameters to be sane + * for SoftMAC + */ + ieee802154_nl_start_confirm(dev, IEEE802154_SUCCESS); + + return 0; +} + +static struct wpan_phy *mac802154_get_phy(const struct net_device *dev) { struct mac802154_sub_if_data *priv = netdev_priv(dev); @@ -43,3 +67,10 @@ struct wpan_phy *mac802154_get_phy(const struct net_device *dev) struct ieee802154_reduced_mlme_ops mac802154_mlme_reduced = { .get_phy = mac802154_get_phy, }; + +struct ieee802154_mlme_ops mac802154_mlme_wpan = { + .get_phy = mac802154_get_phy, + .start_req = mac802154_mlme_start_req, + .get_pan_id = mac802154_dev_get_pan_id, + .get_short_addr = mac802154_dev_get_short_addr, +}; diff --git a/net/mac802154/mib.c b/net/mac802154/mib.c index ab59821ec729..f47781ab0ccc 100644 --- a/net/mac802154/mib.c +++ b/net/mac802154/mib.c @@ -28,13 +28,18 @@ #include "mac802154.h" +struct phy_chan_notify_work { + struct work_struct work; + struct net_device *dev; +}; + struct hw_addr_filt_notify_work { struct work_struct work; struct net_device *dev; unsigned long changed; }; -struct mac802154_priv *mac802154_slave_get_priv(struct net_device *dev) +static struct mac802154_priv *mac802154_slave_get_priv(struct net_device *dev) { struct mac802154_sub_if_data *priv = netdev_priv(dev); @@ -78,6 +83,37 @@ static void set_hw_addr_filt(struct net_device *dev, unsigned long changed) return; } +void mac802154_dev_set_short_addr(struct net_device *dev, u16 val) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + + BUG_ON(dev->type != ARPHRD_IEEE802154); + + spin_lock_bh(&priv->mib_lock); + priv->short_addr = val; + spin_unlock_bh(&priv->mib_lock); + + if ((priv->hw->ops->set_hw_addr_filt) && + (priv->hw->hw.hw_filt.short_addr != priv->short_addr)) { + priv->hw->hw.hw_filt.short_addr = priv->short_addr; + set_hw_addr_filt(dev, IEEE802515_AFILT_SADDR_CHANGED); + } +} + +u16 mac802154_dev_get_short_addr(const struct net_device *dev) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + u16 ret; + + BUG_ON(dev->type != ARPHRD_IEEE802154); + + spin_lock_bh(&priv->mib_lock); + ret = priv->short_addr; + spin_unlock_bh(&priv->mib_lock); + + return ret; +} + void mac802154_dev_set_ieee_addr(struct net_device *dev) { struct mac802154_sub_if_data *priv = netdev_priv(dev); @@ -91,3 +127,73 @@ void mac802154_dev_set_ieee_addr(struct net_device *dev) set_hw_addr_filt(dev, IEEE802515_AFILT_IEEEADDR_CHANGED); } } + +u16 mac802154_dev_get_pan_id(const struct net_device *dev) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + u16 ret; + + BUG_ON(dev->type != ARPHRD_IEEE802154); + + spin_lock_bh(&priv->mib_lock); + ret = 
priv->pan_id; + spin_unlock_bh(&priv->mib_lock); + + return ret; +} + +void mac802154_dev_set_pan_id(struct net_device *dev, u16 val) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + + BUG_ON(dev->type != ARPHRD_IEEE802154); + + spin_lock_bh(&priv->mib_lock); + priv->pan_id = val; + spin_unlock_bh(&priv->mib_lock); + + if ((priv->hw->ops->set_hw_addr_filt) && + (priv->hw->hw.hw_filt.pan_id != priv->pan_id)) { + priv->hw->hw.hw_filt.pan_id = priv->pan_id; + set_hw_addr_filt(dev, IEEE802515_AFILT_PANID_CHANGED); + } +} + +static void phy_chan_notify(struct work_struct *work) +{ + struct phy_chan_notify_work *nw = container_of(work, + struct phy_chan_notify_work, work); + struct mac802154_priv *hw = mac802154_slave_get_priv(nw->dev); + struct mac802154_sub_if_data *priv = netdev_priv(nw->dev); + int res; + + res = hw->ops->set_channel(&hw->hw, priv->page, priv->chan); + if (res) + pr_debug("set_channel failed\n"); + + kfree(nw); +} + +void mac802154_dev_set_page_channel(struct net_device *dev, u8 page, u8 chan) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + struct phy_chan_notify_work *work; + + BUG_ON(dev->type != ARPHRD_IEEE802154); + + spin_lock_bh(&priv->mib_lock); + priv->page = page; + priv->chan = chan; + spin_unlock_bh(&priv->mib_lock); + + if (priv->hw->phy->current_channel != priv->chan || + priv->hw->phy->current_page != priv->page) { + work = kzalloc(sizeof(*work), GFP_ATOMIC); + if (!work) + return; + + INIT_WORK(&work->work, phy_chan_notify); + work->dev = dev; + queue_work(priv->hw->dev_workqueue, &work->work); + } +} diff --git a/net/mac802154/rx.c b/net/mac802154/rx.c index 4a7d76d4f8bc..38548ec2098f 100644 --- a/net/mac802154/rx.c +++ b/net/mac802154/rx.c @@ -77,6 +77,7 @@ mac802154_subif_rx(struct ieee802154_dev *hw, struct sk_buff *skb, u8 lqi) } mac802154_monitors_rx(priv, skb); + mac802154_wpans_rx(priv, skb); out: dev_kfree_skb(skb); return; diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c index 434b6873b352..1a4df39c722e 100644 --- a/net/mac802154/tx.c +++ b/net/mac802154/tx.c @@ -88,6 +88,8 @@ netdev_tx_t mac802154_tx(struct mac802154_priv *priv, struct sk_buff *skb, return NETDEV_TX_OK; } + mac802154_monitors_rx(mac802154_to_priv(&priv->hw), skb); + if (!(priv->hw.flags & IEEE802154_HW_OMIT_CKSUM)) { u16 crc = crc_ccitt(0, skb->data, skb->len); u8 *data = skb_put(skb, 2); diff --git a/net/mac802154/wpan.c b/net/mac802154/wpan.c new file mode 100644 index 000000000000..f30f6d4beea1 --- /dev/null +++ b/net/mac802154/wpan.c @@ -0,0 +1,559 @@ +/* + * Copyright 2007-2012 Siemens AG + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
+ * + * Written by: + * Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> + * Sergey Lapin <slapin@ossfans.org> + * Maxim Gorbachyov <maxim.gorbachev@siemens.com> + * Alexander Smirnov <alex.bluesman.smirnov@gmail.com> + */ + +#include <linux/netdevice.h> +#include <linux/module.h> +#include <linux/if_arp.h> + +#include <net/rtnetlink.h> +#include <linux/nl802154.h> +#include <net/af_ieee802154.h> +#include <net/mac802154.h> +#include <net/ieee802154_netdev.h> +#include <net/ieee802154.h> +#include <net/wpan-phy.h> + +#include "mac802154.h" + +static inline int mac802154_fetch_skb_u8(struct sk_buff *skb, u8 *val) +{ + if (unlikely(!pskb_may_pull(skb, 1))) + return -EINVAL; + + *val = skb->data[0]; + skb_pull(skb, 1); + + return 0; +} + +static inline int mac802154_fetch_skb_u16(struct sk_buff *skb, u16 *val) +{ + if (unlikely(!pskb_may_pull(skb, 2))) + return -EINVAL; + + *val = skb->data[0] | (skb->data[1] << 8); + skb_pull(skb, 2); + + return 0; +} + +static inline void mac802154_haddr_copy_swap(u8 *dest, const u8 *src) +{ + int i; + for (i = 0; i < IEEE802154_ADDR_LEN; i++) + dest[IEEE802154_ADDR_LEN - i - 1] = src[i]; +} + +static int +mac802154_wpan_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + struct mac802154_sub_if_data *priv = netdev_priv(dev); + struct sockaddr_ieee802154 *sa = + (struct sockaddr_ieee802154 *)&ifr->ifr_addr; + int err = -ENOIOCTLCMD; + + spin_lock_bh(&priv->mib_lock); + + switch (cmd) { + case SIOCGIFADDR: + if (priv->pan_id == IEEE802154_PANID_BROADCAST || + priv->short_addr == IEEE802154_ADDR_BROADCAST) { + err = -EADDRNOTAVAIL; + break; + } + + sa->family = AF_IEEE802154; + sa->addr.addr_type = IEEE802154_ADDR_SHORT; + sa->addr.pan_id = priv->pan_id; + sa->addr.short_addr = priv->short_addr; + + err = 0; + break; + case SIOCSIFADDR: + dev_warn(&dev->dev, + "Using DEBUGing ioctl SIOCSIFADDR isn't recommened!\n"); + if (sa->family != AF_IEEE802154 || + sa->addr.addr_type != IEEE802154_ADDR_SHORT || + sa->addr.pan_id == IEEE802154_PANID_BROADCAST || + sa->addr.short_addr == IEEE802154_ADDR_BROADCAST || + sa->addr.short_addr == IEEE802154_ADDR_UNDEF) { + err = -EINVAL; + break; + } + + priv->pan_id = sa->addr.pan_id; + priv->short_addr = sa->addr.short_addr; + + err = 0; + break; + } + + spin_unlock_bh(&priv->mib_lock); + return err; +} + +static int mac802154_wpan_mac_addr(struct net_device *dev, void *p) +{ + struct sockaddr *addr = p; + + if (netif_running(dev)) + return -EBUSY; + + /* FIXME: validate addr */ + memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); + mac802154_dev_set_ieee_addr(dev); + return 0; +} + +static int mac802154_header_create(struct sk_buff *skb, + struct net_device *dev, + unsigned short type, + const void *_daddr, + const void *_saddr, + unsigned len) +{ + const struct ieee802154_addr *saddr = _saddr; + const struct ieee802154_addr *daddr = _daddr; + struct ieee802154_addr dev_addr; + struct mac802154_sub_if_data *priv = netdev_priv(dev); + int pos = 2; + u8 *head; + u16 fc; + + if (!daddr) + return -EINVAL; + + head = kzalloc(MAC802154_FRAME_HARD_HEADER_LEN, GFP_KERNEL); + if (head == NULL) + return -ENOMEM; + + head[pos++] = mac_cb(skb)->seq; /* DSN/BSN */ + fc = mac_cb_type(skb); + + if (!saddr) { + spin_lock_bh(&priv->mib_lock); + + if (priv->short_addr == IEEE802154_ADDR_BROADCAST || + priv->short_addr == IEEE802154_ADDR_UNDEF || + priv->pan_id == IEEE802154_PANID_BROADCAST) { + dev_addr.addr_type = IEEE802154_ADDR_LONG; + memcpy(dev_addr.hwaddr, dev->dev_addr, + IEEE802154_ADDR_LEN); + } else { + dev_addr.addr_type 
= IEEE802154_ADDR_SHORT; + dev_addr.short_addr = priv->short_addr; + } + + dev_addr.pan_id = priv->pan_id; + saddr = &dev_addr; + + spin_unlock_bh(&priv->mib_lock); + } + + if (daddr->addr_type != IEEE802154_ADDR_NONE) { + fc |= (daddr->addr_type << IEEE802154_FC_DAMODE_SHIFT); + + head[pos++] = daddr->pan_id & 0xff; + head[pos++] = daddr->pan_id >> 8; + + if (daddr->addr_type == IEEE802154_ADDR_SHORT) { + head[pos++] = daddr->short_addr & 0xff; + head[pos++] = daddr->short_addr >> 8; + } else { + mac802154_haddr_copy_swap(head + pos, daddr->hwaddr); + pos += IEEE802154_ADDR_LEN; + } + } + + if (saddr->addr_type != IEEE802154_ADDR_NONE) { + fc |= (saddr->addr_type << IEEE802154_FC_SAMODE_SHIFT); + + if ((saddr->pan_id == daddr->pan_id) && + (saddr->pan_id != IEEE802154_PANID_BROADCAST)) { + /* PANID compression/intra PAN */ + fc |= IEEE802154_FC_INTRA_PAN; + } else { + head[pos++] = saddr->pan_id & 0xff; + head[pos++] = saddr->pan_id >> 8; + } + + if (saddr->addr_type == IEEE802154_ADDR_SHORT) { + head[pos++] = saddr->short_addr & 0xff; + head[pos++] = saddr->short_addr >> 8; + } else { + mac802154_haddr_copy_swap(head + pos, saddr->hwaddr); + pos += IEEE802154_ADDR_LEN; + } + } + + head[0] = fc; + head[1] = fc >> 8; + + memcpy(skb_push(skb, pos), head, pos); + kfree(head); + + return pos; +} + +static int +mac802154_header_parse(const struct sk_buff *skb, unsigned char *haddr) +{ + const u8 *hdr = skb_mac_header(skb); + const u8 *tail = skb_tail_pointer(skb); + struct ieee802154_addr *addr = (struct ieee802154_addr *)haddr; + u16 fc; + int da_type; + + if (hdr + 3 > tail) + goto malformed; + + fc = hdr[0] | (hdr[1] << 8); + + hdr += 3; + + da_type = IEEE802154_FC_DAMODE(fc); + addr->addr_type = IEEE802154_FC_SAMODE(fc); + + switch (da_type) { + case IEEE802154_ADDR_NONE: + if (fc & IEEE802154_FC_INTRA_PAN) + goto malformed; + break; + case IEEE802154_ADDR_LONG: + if (fc & IEEE802154_FC_INTRA_PAN) { + if (hdr + 2 > tail) + goto malformed; + addr->pan_id = hdr[0] | (hdr[1] << 8); + hdr += 2; + } + + if (hdr + IEEE802154_ADDR_LEN > tail) + goto malformed; + + hdr += IEEE802154_ADDR_LEN; + break; + case IEEE802154_ADDR_SHORT: + if (fc & IEEE802154_FC_INTRA_PAN) { + if (hdr + 2 > tail) + goto malformed; + addr->pan_id = hdr[0] | (hdr[1] << 8); + hdr += 2; + } + + if (hdr + 2 > tail) + goto malformed; + + hdr += 2; + break; + default: + goto malformed; + + } + + switch (addr->addr_type) { + case IEEE802154_ADDR_NONE: + break; + case IEEE802154_ADDR_LONG: + if (!(fc & IEEE802154_FC_INTRA_PAN)) { + if (hdr + 2 > tail) + goto malformed; + addr->pan_id = hdr[0] | (hdr[1] << 8); + hdr += 2; + } + + if (hdr + IEEE802154_ADDR_LEN > tail) + goto malformed; + + mac802154_haddr_copy_swap(addr->hwaddr, hdr); + hdr += IEEE802154_ADDR_LEN; + break; + case IEEE802154_ADDR_SHORT: + if (!(fc & IEEE802154_FC_INTRA_PAN)) { + if (hdr + 2 > tail) + goto malformed; + addr->pan_id = hdr[0] | (hdr[1] << 8); + hdr += 2; + } + + if (hdr + 2 > tail) + goto malformed; + + addr->short_addr = hdr[0] | (hdr[1] << 8); + hdr += 2; + break; + default: + goto malformed; + } + + return sizeof(struct ieee802154_addr); + +malformed: + pr_debug("malformed packet\n"); + return 0; +} + +static netdev_tx_t +mac802154_wpan_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct mac802154_sub_if_data *priv; + u8 chan, page; + + priv = netdev_priv(dev); + + spin_lock_bh(&priv->mib_lock); + chan = priv->chan; + page = priv->page; + spin_unlock_bh(&priv->mib_lock); + + if (chan == MAC802154_CHAN_NONE || + page >= WPAN_NUM_PAGES || + 
chan >= WPAN_NUM_CHANNELS) + return NETDEV_TX_OK; + + skb->skb_iif = dev->ifindex; + dev->stats.tx_packets++; + dev->stats.tx_bytes += skb->len; + + return mac802154_tx(priv->hw, skb, page, chan); +} + +static struct header_ops mac802154_header_ops = { + .create = mac802154_header_create, + .parse = mac802154_header_parse, +}; + +static const struct net_device_ops mac802154_wpan_ops = { + .ndo_open = mac802154_slave_open, + .ndo_stop = mac802154_slave_close, + .ndo_start_xmit = mac802154_wpan_xmit, + .ndo_do_ioctl = mac802154_wpan_ioctl, + .ndo_set_mac_address = mac802154_wpan_mac_addr, +}; + +void mac802154_wpan_setup(struct net_device *dev) +{ + struct mac802154_sub_if_data *priv; + + dev->addr_len = IEEE802154_ADDR_LEN; + memset(dev->broadcast, 0xff, IEEE802154_ADDR_LEN); + + dev->hard_header_len = MAC802154_FRAME_HARD_HEADER_LEN; + dev->header_ops = &mac802154_header_ops; + dev->needed_tailroom = 2; /* FCS */ + dev->mtu = IEEE802154_MTU; + dev->tx_queue_len = 10; + dev->type = ARPHRD_IEEE802154; + dev->flags = IFF_NOARP | IFF_BROADCAST; + dev->watchdog_timeo = 0; + + dev->destructor = free_netdev; + dev->netdev_ops = &mac802154_wpan_ops; + dev->ml_priv = &mac802154_mlme_wpan; + + priv = netdev_priv(dev); + priv->type = IEEE802154_DEV_WPAN; + + priv->chan = MAC802154_CHAN_NONE; + priv->page = 0; + + spin_lock_init(&priv->mib_lock); + + get_random_bytes(&priv->bsn, 1); + get_random_bytes(&priv->dsn, 1); + + priv->pan_id = IEEE802154_PANID_BROADCAST; + priv->short_addr = IEEE802154_ADDR_BROADCAST; +} + +static int mac802154_process_data(struct net_device *dev, struct sk_buff *skb) +{ + return netif_rx(skb); +} + +static int +mac802154_subif_frame(struct mac802154_sub_if_data *sdata, struct sk_buff *skb) +{ + pr_debug("getting packet via slave interface %s\n", sdata->dev->name); + + spin_lock_bh(&sdata->mib_lock); + + switch (mac_cb(skb)->da.addr_type) { + case IEEE802154_ADDR_NONE: + if (mac_cb(skb)->sa.addr_type != IEEE802154_ADDR_NONE) + /* FIXME: check if we are PAN coordinator */ + skb->pkt_type = PACKET_OTHERHOST; + else + /* ACK comes with both addresses empty */ + skb->pkt_type = PACKET_HOST; + break; + case IEEE802154_ADDR_LONG: + if (mac_cb(skb)->da.pan_id != sdata->pan_id && + mac_cb(skb)->da.pan_id != IEEE802154_PANID_BROADCAST) + skb->pkt_type = PACKET_OTHERHOST; + else if (!memcmp(mac_cb(skb)->da.hwaddr, sdata->dev->dev_addr, + IEEE802154_ADDR_LEN)) + skb->pkt_type = PACKET_HOST; + else + skb->pkt_type = PACKET_OTHERHOST; + break; + case IEEE802154_ADDR_SHORT: + if (mac_cb(skb)->da.pan_id != sdata->pan_id && + mac_cb(skb)->da.pan_id != IEEE802154_PANID_BROADCAST) + skb->pkt_type = PACKET_OTHERHOST; + else if (mac_cb(skb)->da.short_addr == sdata->short_addr) + skb->pkt_type = PACKET_HOST; + else if (mac_cb(skb)->da.short_addr == + IEEE802154_ADDR_BROADCAST) + skb->pkt_type = PACKET_BROADCAST; + else + skb->pkt_type = PACKET_OTHERHOST; + break; + default: + break; + } + + spin_unlock_bh(&sdata->mib_lock); + + skb->dev = sdata->dev; + + sdata->dev->stats.rx_packets++; + sdata->dev->stats.rx_bytes += skb->len; + + switch (mac_cb_type(skb)) { + case IEEE802154_FC_TYPE_DATA: + return mac802154_process_data(sdata->dev, skb); + default: + pr_warning("ieee802154: bad frame received (type = %d)\n", + mac_cb_type(skb)); + kfree_skb(skb); + return NET_RX_DROP; + } +} + +static int mac802154_parse_frame_start(struct sk_buff *skb) +{ + u8 *head = skb->data; + u16 fc; + + if (mac802154_fetch_skb_u16(skb, &fc) || + mac802154_fetch_skb_u8(skb, &(mac_cb(skb)->seq))) + goto err; + + 
pr_debug("fc: %04x dsn: %02x\n", fc, head[2]); + + mac_cb(skb)->flags = IEEE802154_FC_TYPE(fc); + mac_cb(skb)->sa.addr_type = IEEE802154_FC_SAMODE(fc); + mac_cb(skb)->da.addr_type = IEEE802154_FC_DAMODE(fc); + + if (fc & IEEE802154_FC_INTRA_PAN) + mac_cb(skb)->flags |= MAC_CB_FLAG_INTRAPAN; + + if (mac_cb(skb)->da.addr_type != IEEE802154_ADDR_NONE) { + if (mac802154_fetch_skb_u16(skb, &(mac_cb(skb)->da.pan_id))) + goto err; + + /* source PAN id compression */ + if (mac_cb_is_intrapan(skb)) + mac_cb(skb)->sa.pan_id = mac_cb(skb)->da.pan_id; + + pr_debug("dest PAN addr: %04x\n", mac_cb(skb)->da.pan_id); + + if (mac_cb(skb)->da.addr_type == IEEE802154_ADDR_SHORT) { + u16 *da = &(mac_cb(skb)->da.short_addr); + + if (mac802154_fetch_skb_u16(skb, da)) + goto err; + + pr_debug("destination address is short: %04x\n", + mac_cb(skb)->da.short_addr); + } else { + if (!pskb_may_pull(skb, IEEE802154_ADDR_LEN)) + goto err; + + mac802154_haddr_copy_swap(mac_cb(skb)->da.hwaddr, + skb->data); + skb_pull(skb, IEEE802154_ADDR_LEN); + + pr_debug("destination address is hardware\n"); + } + } + + if (mac_cb(skb)->sa.addr_type != IEEE802154_ADDR_NONE) { + /* non PAN-compression, fetch source address id */ + if (!(mac_cb_is_intrapan(skb))) { + u16 *sa_pan = &(mac_cb(skb)->sa.pan_id); + + if (mac802154_fetch_skb_u16(skb, sa_pan)) + goto err; + } + + pr_debug("source PAN addr: %04x\n", mac_cb(skb)->da.pan_id); + + if (mac_cb(skb)->sa.addr_type == IEEE802154_ADDR_SHORT) { + u16 *sa = &(mac_cb(skb)->sa.short_addr); + + if (mac802154_fetch_skb_u16(skb, sa)) + goto err; + + pr_debug("source address is short: %04x\n", + mac_cb(skb)->sa.short_addr); + } else { + if (!pskb_may_pull(skb, IEEE802154_ADDR_LEN)) + goto err; + + mac802154_haddr_copy_swap(mac_cb(skb)->sa.hwaddr, + skb->data); + skb_pull(skb, IEEE802154_ADDR_LEN); + + pr_debug("source address is hardware\n"); + } + } + + return 0; +err: + return -EINVAL; +} + +void mac802154_wpans_rx(struct mac802154_priv *priv, struct sk_buff *skb) +{ + int ret; + struct sk_buff *sskb; + struct mac802154_sub_if_data *sdata; + + ret = mac802154_parse_frame_start(skb); + if (ret) { + pr_debug("got invalid frame\n"); + return; + } + + rcu_read_lock(); + list_for_each_entry_rcu(sdata, &priv->slaves, list) { + if (sdata->type != IEEE802154_DEV_WPAN) + continue; + + sskb = skb_clone(skb, GFP_ATOMIC); + if (sskb) + mac802154_subif_frame(sdata, sskb); + } + rcu_read_unlock(); +} diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig index 209c1ed43368..c19b214ffd57 100644 --- a/net/netfilter/Kconfig +++ b/net/netfilter/Kconfig @@ -335,6 +335,27 @@ config NF_CT_NETLINK_TIMEOUT If unsure, say `N'. +config NF_CT_NETLINK_HELPER + tristate 'Connection tracking helpers in user-space via Netlink' + select NETFILTER_NETLINK + depends on NF_CT_NETLINK + depends on NETFILTER_NETLINK_QUEUE + depends on NETFILTER_NETLINK_QUEUE_CT + depends on NETFILTER_ADVANCED + help + This option enables the user-space connection tracking helpers + infrastructure. + + If unsure, say `N'. + +config NETFILTER_NETLINK_QUEUE_CT + bool "NFQUEUE integration with Connection Tracking" + default n + depends on NETFILTER_NETLINK_QUEUE + help + If this option is enabled, NFQUEUE can include Connection Tracking + information together with the packet is the enqueued via NFNETLINK. 
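The wpan.c receive path added above (mac802154_parse_frame_start()) begins by reading the two-byte, little-endian frame control word and deriving from it the frame type, the PAN ID compression flag and the destination/source addressing modes, which in turn dictate how many PAN ID and address bytes follow. A self-contained sketch of that decode is shown below; the macro names are ad hoc, but the bit positions are the ones IEEE 802.15.4 defines (frame type in bits 0-2, PAN ID compression in bit 6, addressing modes in bits 10-11 and 14-15, where 0 = none, 2 = short, 3 = extended).

#include <stdint.h>
#include <stdio.h>

#define FC_TYPE(fc)        ((fc) & 0x7)
#define FC_INTRA_PAN(fc)   (((fc) >> 6) & 0x1)   /* PAN ID compression */
#define FC_DAMODE(fc)      (((fc) >> 10) & 0x3)  /* dest addressing mode */
#define FC_SAMODE(fc)      (((fc) >> 14) & 0x3)  /* source addressing mode */

int main(void)
{
        /* Example MHR prefix: frame control (little endian) + sequence. */
        const uint8_t hdr[] = { 0x61, 0x88, 0x42 };
        uint16_t fc = hdr[0] | (hdr[1] << 8);

        printf("type=%d intra_pan=%d damode=%d samode=%d seq=%d\n",
               FC_TYPE(fc), FC_INTRA_PAN(fc), FC_DAMODE(fc), FC_SAMODE(fc),
               hdr[2]);
        return 0;
}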
+ endif # NF_CONNTRACK # transparent proxy support diff --git a/net/netfilter/Makefile b/net/netfilter/Makefile index 4e7960cc7b97..1c5160f2278e 100644 --- a/net/netfilter/Makefile +++ b/net/netfilter/Makefile @@ -9,6 +9,8 @@ obj-$(CONFIG_NETFILTER) = netfilter.o obj-$(CONFIG_NETFILTER_NETLINK) += nfnetlink.o obj-$(CONFIG_NETFILTER_NETLINK_ACCT) += nfnetlink_acct.o +nfnetlink_queue-y := nfnetlink_queue_core.o +nfnetlink_queue-$(CONFIG_NETFILTER_NETLINK_QUEUE_CT) += nfnetlink_queue_ct.o obj-$(CONFIG_NETFILTER_NETLINK_QUEUE) += nfnetlink_queue.o obj-$(CONFIG_NETFILTER_NETLINK_LOG) += nfnetlink_log.o @@ -24,6 +26,7 @@ obj-$(CONFIG_NF_CT_PROTO_UDPLITE) += nf_conntrack_proto_udplite.o # netlink interface for nf_conntrack obj-$(CONFIG_NF_CT_NETLINK) += nf_conntrack_netlink.o obj-$(CONFIG_NF_CT_NETLINK_TIMEOUT) += nfnetlink_cttimeout.o +obj-$(CONFIG_NF_CT_NETLINK_HELPER) += nfnetlink_cthelper.o # connection tracking helpers nf_conntrack_h323-objs := nf_conntrack_h323_main.o nf_conntrack_h323_asn1.o diff --git a/net/netfilter/core.c b/net/netfilter/core.c index e19f3653db23..0bc6b60db4df 100644 --- a/net/netfilter/core.c +++ b/net/netfilter/core.c @@ -264,6 +264,13 @@ void nf_conntrack_destroy(struct nf_conntrack *nfct) rcu_read_unlock(); } EXPORT_SYMBOL(nf_conntrack_destroy); + +struct nfq_ct_hook __rcu *nfq_ct_hook __read_mostly; +EXPORT_SYMBOL_GPL(nfq_ct_hook); + +struct nfq_ct_nat_hook __rcu *nfq_ct_nat_hook __read_mostly; +EXPORT_SYMBOL_GPL(nfq_ct_nat_hook); + #endif /* CONFIG_NF_CONNTRACK */ #ifdef CONFIG_PROC_FS diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c index a54b018c6eea..b54eccef40b5 100644 --- a/net/netfilter/ipvs/ip_vs_core.c +++ b/net/netfilter/ipvs/ip_vs_core.c @@ -1742,7 +1742,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_reply4, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_IN, .priority = NF_IP_PRI_NAT_SRC - 2, }, @@ -1752,7 +1752,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_remote_request4, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_IN, .priority = NF_IP_PRI_NAT_SRC - 1, }, @@ -1760,7 +1760,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_local_reply4, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP_PRI_NAT_DST + 1, }, @@ -1768,7 +1768,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_local_request4, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP_PRI_NAT_DST + 2, }, @@ -1777,7 +1777,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_forward_icmp, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_FORWARD, .priority = 99, }, @@ -1785,7 +1785,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_reply4, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_FORWARD, .priority = 100, }, @@ -1794,7 +1794,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_reply6, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_LOCAL_IN, .priority = NF_IP6_PRI_NAT_SRC - 2, }, @@ -1804,7 +1804,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_remote_request6, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_LOCAL_IN, .priority = 
NF_IP6_PRI_NAT_SRC - 1, }, @@ -1812,7 +1812,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_local_reply6, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP6_PRI_NAT_DST + 1, }, @@ -1820,7 +1820,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_local_request6, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP6_PRI_NAT_DST + 2, }, @@ -1829,7 +1829,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_forward_icmp_v6, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_FORWARD, .priority = 99, }, @@ -1837,7 +1837,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_reply6, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_FORWARD, .priority = 100, }, diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c index 7fd66dec859d..65b616ae1716 100644 --- a/net/netfilter/ipvs/ip_vs_xmit.c +++ b/net/netfilter/ipvs/ip_vs_xmit.c @@ -797,7 +797,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp, goto tx_error_put; } if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); df |= (old_iph->frag_off & htons(IP_DF)); @@ -823,7 +823,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp, IP_VS_ERR_RL("%s(): no memory\n", __func__); return NF_STOLEN; } - kfree_skb(skb); + consume_skb(skb); skb = new_skb; old_iph = ip_hdr(skb); } @@ -913,7 +913,7 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp, goto tx_error_put; } if (skb_dst(skb)) - skb_dst(skb)->ops->update_pmtu(skb_dst(skb), mtu); + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); if (mtu < ntohs(old_iph->payload_len) + sizeof(struct ipv6hdr) && !skb_is_gso(skb)) { @@ -942,7 +942,7 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp, IP_VS_ERR_RL("%s(): no memory\n", __func__); return NF_STOLEN; } - kfree_skb(skb); + consume_skb(skb); skb = new_skb; old_iph = ipv6_hdr(skb); } diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c index ac3af97cc468..cf4875565d67 100644 --- a/net/netfilter/nf_conntrack_core.c +++ b/net/netfilter/nf_conntrack_core.c @@ -531,7 +531,7 @@ __nf_conntrack_confirm(struct sk_buff *skb) tstamp = nf_conn_tstamp_find(ct); if (tstamp) { if (skb->tstamp.tv64 == 0) - __net_timestamp((struct sk_buff *)skb); + __net_timestamp(skb); tstamp->start = ktime_to_ns(skb->tstamp); } @@ -819,7 +819,8 @@ init_conntrack(struct net *net, struct nf_conn *tmpl, __set_bit(IPS_EXPECTED_BIT, &ct->status); ct->master = exp->master; if (exp->helper) { - help = nf_ct_helper_ext_add(ct, GFP_ATOMIC); + help = nf_ct_helper_ext_add(ct, exp->helper, + GFP_ATOMIC); if (help) rcu_assign_pointer(help->helper, exp->helper); } @@ -1333,7 +1334,6 @@ static void nf_conntrack_cleanup_init_net(void) while (untrack_refs() > 0) schedule(); - nf_conntrack_proto_fini(); #ifdef CONFIG_NF_CONNTRACK_ZONES nf_ct_extend_unregister(&nf_ct_zone_extend); #endif @@ -1372,7 +1372,7 @@ void nf_conntrack_cleanup(struct net *net) netfilter framework. Roll on, two-stage module delete... 
*/ synchronize_net(); - + nf_conntrack_proto_fini(net); nf_conntrack_cleanup_net(net); if (net_eq(net, &init_net)) { @@ -1496,11 +1496,6 @@ static int nf_conntrack_init_init_net(void) printk(KERN_INFO "nf_conntrack version %s (%u buckets, %d max)\n", NF_CONNTRACK_VERSION, nf_conntrack_htable_size, nf_conntrack_max); - - ret = nf_conntrack_proto_init(); - if (ret < 0) - goto err_proto; - #ifdef CONFIG_NF_CONNTRACK_ZONES ret = nf_ct_extend_register(&nf_ct_zone_extend); if (ret < 0) @@ -1518,9 +1513,7 @@ static int nf_conntrack_init_init_net(void) #ifdef CONFIG_NF_CONNTRACK_ZONES err_extend: - nf_conntrack_proto_fini(); #endif -err_proto: return ret; } @@ -1583,9 +1576,7 @@ static int nf_conntrack_init_net(struct net *net) ret = nf_conntrack_helper_init(net); if (ret < 0) goto err_helper; - return 0; - err_helper: nf_conntrack_timeout_fini(net); err_timeout: @@ -1622,6 +1613,9 @@ int nf_conntrack_init(struct net *net) if (ret < 0) goto out_init_net; } + ret = nf_conntrack_proto_init(net); + if (ret < 0) + goto out_proto; ret = nf_conntrack_init_net(net); if (ret < 0) goto out_net; @@ -1637,6 +1631,8 @@ int nf_conntrack_init(struct net *net) return 0; out_net: + nf_conntrack_proto_fini(net); +out_proto: if (net_eq(net, &init_net)) nf_conntrack_cleanup_init_net(); out_init_net: diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c index 641ff5f96718..1a9545965c0d 100644 --- a/net/netfilter/nf_conntrack_extend.c +++ b/net/netfilter/nf_conntrack_extend.c @@ -44,7 +44,8 @@ void __nf_ct_ext_destroy(struct nf_conn *ct) EXPORT_SYMBOL(__nf_ct_ext_destroy); static void * -nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id, gfp_t gfp) +nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id, + size_t var_alloc_len, gfp_t gfp) { unsigned int off, len; struct nf_ct_ext_type *t; @@ -54,8 +55,8 @@ nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id, gfp_t gfp) t = rcu_dereference(nf_ct_ext_types[id]); BUG_ON(t == NULL); off = ALIGN(sizeof(struct nf_ct_ext), t->align); - len = off + t->len; - alloc_size = t->alloc_size; + len = off + t->len + var_alloc_len; + alloc_size = t->alloc_size + var_alloc_len; rcu_read_unlock(); *ext = kzalloc(alloc_size, gfp); @@ -68,7 +69,8 @@ nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id, gfp_t gfp) return (void *)(*ext) + off; } -void *__nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp) +void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id, + size_t var_alloc_len, gfp_t gfp) { struct nf_ct_ext *old, *new; int i, newlen, newoff; @@ -79,7 +81,7 @@ void *__nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp) old = ct->ext; if (!old) - return nf_ct_ext_create(&ct->ext, id, gfp); + return nf_ct_ext_create(&ct->ext, id, var_alloc_len, gfp); if (__nf_ct_ext_exist(old, id)) return NULL; @@ -89,7 +91,7 @@ void *__nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp) BUG_ON(t == NULL); newoff = ALIGN(old->len, t->align); - newlen = newoff + t->len; + newlen = newoff + t->len + var_alloc_len; rcu_read_unlock(); new = __krealloc(old, newlen, gfp); @@ -117,7 +119,7 @@ void *__nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp) memset((void *)new + newoff, 0, newlen - newoff); return (void *)new + newoff; } -EXPORT_SYMBOL(__nf_ct_ext_add); +EXPORT_SYMBOL(__nf_ct_ext_add_length); static void update_alloc_size(struct nf_ct_ext_type *type) { diff --git a/net/netfilter/nf_conntrack_ftp.c b/net/netfilter/nf_conntrack_ftp.c index 
8c5c95c6d34f..4bb771d1f57a 100644 --- a/net/netfilter/nf_conntrack_ftp.c +++ b/net/netfilter/nf_conntrack_ftp.c @@ -358,7 +358,7 @@ static int help(struct sk_buff *skb, u32 seq; int dir = CTINFO2DIR(ctinfo); unsigned int uninitialized_var(matchlen), uninitialized_var(matchoff); - struct nf_ct_ftp_master *ct_ftp_info = &nfct_help(ct)->help.ct_ftp_info; + struct nf_ct_ftp_master *ct_ftp_info = nfct_help_data(ct); struct nf_conntrack_expect *exp; union nf_inet_addr *daddr; struct nf_conntrack_man cmd = {}; @@ -512,7 +512,6 @@ out_update_nl: } static struct nf_conntrack_helper ftp[MAX_PORTS][2] __read_mostly; -static char ftp_names[MAX_PORTS][2][sizeof("ftp-65535")] __read_mostly; static const struct nf_conntrack_expect_policy ftp_exp_policy = { .max_expected = 1, @@ -541,7 +540,6 @@ static void nf_conntrack_ftp_fini(void) static int __init nf_conntrack_ftp_init(void) { int i, j = -1, ret = 0; - char *tmpname; ftp_buffer = kmalloc(65536, GFP_KERNEL); if (!ftp_buffer) @@ -556,17 +554,16 @@ static int __init nf_conntrack_ftp_init(void) ftp[i][0].tuple.src.l3num = PF_INET; ftp[i][1].tuple.src.l3num = PF_INET6; for (j = 0; j < 2; j++) { + ftp[i][j].data_len = sizeof(struct nf_ct_ftp_master); ftp[i][j].tuple.src.u.tcp.port = htons(ports[i]); ftp[i][j].tuple.dst.protonum = IPPROTO_TCP; ftp[i][j].expect_policy = &ftp_exp_policy; ftp[i][j].me = THIS_MODULE; ftp[i][j].help = help; - tmpname = &ftp_names[i][j][0]; if (ports[i] == FTP_PORT) - sprintf(tmpname, "ftp"); + sprintf(ftp[i][j].name, "ftp"); else - sprintf(tmpname, "ftp-%d", ports[i]); - ftp[i][j].name = tmpname; + sprintf(ftp[i][j].name, "ftp-%d", ports[i]); pr_debug("nf_ct_ftp: registering helper for pf: %d " "port: %d\n", diff --git a/net/netfilter/nf_conntrack_h323_main.c b/net/netfilter/nf_conntrack_h323_main.c index 31f50bc3a312..4283b207e63b 100644 --- a/net/netfilter/nf_conntrack_h323_main.c +++ b/net/netfilter/nf_conntrack_h323_main.c @@ -114,7 +114,7 @@ static int get_tpkt_data(struct sk_buff *skb, unsigned int protoff, struct nf_conn *ct, enum ip_conntrack_info ctinfo, unsigned char **data, int *datalen, int *dataoff) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); const struct tcphdr *th; struct tcphdr _tcph; @@ -617,6 +617,7 @@ static const struct nf_conntrack_expect_policy h245_exp_policy = { static struct nf_conntrack_helper nf_conntrack_helper_h245 __read_mostly = { .name = "H.245", .me = THIS_MODULE, + .data_len = sizeof(struct nf_ct_h323_master), .tuple.src.l3num = AF_UNSPEC, .tuple.dst.protonum = IPPROTO_UDP, .help = h245_help, @@ -1169,6 +1170,7 @@ static struct nf_conntrack_helper nf_conntrack_helper_q931[] __read_mostly = { { .name = "Q.931", .me = THIS_MODULE, + .data_len = sizeof(struct nf_ct_h323_master), .tuple.src.l3num = AF_INET, .tuple.src.u.tcp.port = cpu_to_be16(Q931_PORT), .tuple.dst.protonum = IPPROTO_TCP, @@ -1244,7 +1246,7 @@ static int expect_q931(struct sk_buff *skb, struct nf_conn *ct, unsigned char **data, TransportAddress *taddr, int count) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); int ret = 0; int i; @@ -1359,7 +1361,7 @@ static int process_rrq(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo, unsigned char **data, RegistrationRequest *rrq) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = 
nfct_help_data(ct); int ret; typeof(set_ras_addr_hook) set_ras_addr; @@ -1394,7 +1396,7 @@ static int process_rcf(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo, unsigned char **data, RegistrationConfirm *rcf) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); int ret; struct nf_conntrack_expect *exp; @@ -1443,7 +1445,7 @@ static int process_urq(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo, unsigned char **data, UnregistrationRequest *urq) { - struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); int ret; typeof(set_sig_addr_hook) set_sig_addr; @@ -1475,7 +1477,7 @@ static int process_arq(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo, unsigned char **data, AdmissionRequest *arq) { - const struct nf_ct_h323_master *info = &nfct_help(ct)->help.ct_h323_info; + const struct nf_ct_h323_master *info = nfct_help_data(ct); int dir = CTINFO2DIR(ctinfo); __be16 port; union nf_inet_addr addr; @@ -1742,6 +1744,7 @@ static struct nf_conntrack_helper nf_conntrack_helper_ras[] __read_mostly = { { .name = "RAS", .me = THIS_MODULE, + .data_len = sizeof(struct nf_ct_h323_master), .tuple.src.l3num = AF_INET, .tuple.src.u.udp.port = cpu_to_be16(RAS_PORT), .tuple.dst.protonum = IPPROTO_UDP, @@ -1751,6 +1754,7 @@ static struct nf_conntrack_helper nf_conntrack_helper_ras[] __read_mostly = { { .name = "RAS", .me = THIS_MODULE, + .data_len = sizeof(struct nf_ct_h323_master), .tuple.src.l3num = AF_INET6, .tuple.src.u.udp.port = cpu_to_be16(RAS_PORT), .tuple.dst.protonum = IPPROTO_UDP, diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c index 4fa2ff961f5a..c4bc637feb76 100644 --- a/net/netfilter/nf_conntrack_helper.c +++ b/net/netfilter/nf_conntrack_helper.c @@ -30,8 +30,10 @@ #include <net/netfilter/nf_conntrack_extend.h> static DEFINE_MUTEX(nf_ct_helper_mutex); -static struct hlist_head *nf_ct_helper_hash __read_mostly; -static unsigned int nf_ct_helper_hsize __read_mostly; +struct hlist_head *nf_ct_helper_hash __read_mostly; +EXPORT_SYMBOL_GPL(nf_ct_helper_hash); +unsigned int nf_ct_helper_hsize __read_mostly; +EXPORT_SYMBOL_GPL(nf_ct_helper_hsize); static unsigned int nf_ct_helper_count __read_mostly; static bool nf_ct_auto_assign_helper __read_mostly = true; @@ -161,11 +163,14 @@ nf_conntrack_helper_try_module_get(const char *name, u16 l3num, u8 protonum) } EXPORT_SYMBOL_GPL(nf_conntrack_helper_try_module_get); -struct nf_conn_help *nf_ct_helper_ext_add(struct nf_conn *ct, gfp_t gfp) +struct nf_conn_help * +nf_ct_helper_ext_add(struct nf_conn *ct, + struct nf_conntrack_helper *helper, gfp_t gfp) { struct nf_conn_help *help; - help = nf_ct_ext_add(ct, NF_CT_EXT_HELPER, gfp); + help = nf_ct_ext_add_length(ct, NF_CT_EXT_HELPER, + helper->data_len, gfp); if (help) INIT_HLIST_HEAD(&help->expectations); else @@ -218,13 +223,19 @@ int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl, } if (help == NULL) { - help = nf_ct_helper_ext_add(ct, flags); + help = nf_ct_helper_ext_add(ct, helper, flags); if (help == NULL) { ret = -ENOMEM; goto out; } } else { - memset(&help->help, 0, sizeof(help->help)); + /* We only allow helper re-assignment of the same sort since + * we cannot reallocate the helper extension area. 
+ */ + if (help->helper != helper) { + RCU_INIT_POINTER(help->helper, NULL); + goto out; + } } rcu_assign_pointer(help->helper, helper); @@ -319,6 +330,9 @@ EXPORT_SYMBOL_GPL(nf_ct_helper_expectfn_find_by_symbol); int nf_conntrack_helper_register(struct nf_conntrack_helper *me) { + int ret = 0; + struct nf_conntrack_helper *cur; + struct hlist_node *n; unsigned int h = helper_hash(&me->tuple); BUG_ON(me->expect_policy == NULL); @@ -326,11 +340,19 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me) BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1); mutex_lock(&nf_ct_helper_mutex); + hlist_for_each_entry(cur, n, &nf_ct_helper_hash[h], hnode) { + if (strncmp(cur->name, me->name, NF_CT_HELPER_NAME_LEN) == 0 && + cur->tuple.src.l3num == me->tuple.src.l3num && + cur->tuple.dst.protonum == me->tuple.dst.protonum) { + ret = -EEXIST; + goto out; + } + } hlist_add_head_rcu(&me->hnode, &nf_ct_helper_hash[h]); nf_ct_helper_count++; +out: mutex_unlock(&nf_ct_helper_mutex); - - return 0; + return ret; } EXPORT_SYMBOL_GPL(nf_conntrack_helper_register); diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c index 81366c118271..009c52cfd1ec 100644 --- a/net/netfilter/nf_conntrack_irc.c +++ b/net/netfilter/nf_conntrack_irc.c @@ -221,7 +221,6 @@ static int help(struct sk_buff *skb, unsigned int protoff, } static struct nf_conntrack_helper irc[MAX_PORTS] __read_mostly; -static char irc_names[MAX_PORTS][sizeof("irc-65535")] __read_mostly; static struct nf_conntrack_expect_policy irc_exp_policy; static void nf_conntrack_irc_fini(void); @@ -229,7 +228,6 @@ static void nf_conntrack_irc_fini(void); static int __init nf_conntrack_irc_init(void) { int i, ret; - char *tmpname; if (max_dcc_channels < 1) { printk(KERN_ERR "nf_ct_irc: max_dcc_channels must not be zero\n"); @@ -255,12 +253,10 @@ static int __init nf_conntrack_irc_init(void) irc[i].me = THIS_MODULE; irc[i].help = help; - tmpname = &irc_names[i][0]; if (ports[i] == IRC_PORT) - sprintf(tmpname, "irc"); + sprintf(irc[i].name, "irc"); else - sprintf(tmpname, "irc-%u", i); - irc[i].name = tmpname; + sprintf(irc[i].name, "irc-%u", i); ret = nf_conntrack_helper_register(&irc[i]); if (ret) { diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c index 6f4b00a8fc73..14f67a2cbcb5 100644 --- a/net/netfilter/nf_conntrack_netlink.c +++ b/net/netfilter/nf_conntrack_netlink.c @@ -4,7 +4,7 @@ * (C) 2001 by Jay Schulist <jschlst@samba.org> * (C) 2002-2006 by Harald Welte <laforge@gnumonks.org> * (C) 2003 by Patrick Mchardy <kaber@trash.net> - * (C) 2005-2011 by Pablo Neira Ayuso <pablo@netfilter.org> + * (C) 2005-2012 by Pablo Neira Ayuso <pablo@netfilter.org> * * Initial connection tracking via netlink development funded and * generally made possible by Network Robots, Inc. 
(www.networkrobots.com) @@ -46,6 +46,7 @@ #ifdef CONFIG_NF_NAT_NEEDED #include <net/netfilter/nf_nat_core.h> #include <net/netfilter/nf_nat_protocol.h> +#include <net/netfilter/nf_nat_helper.h> #endif #include <linux/netfilter/nfnetlink.h> @@ -477,7 +478,6 @@ nla_put_failure: return -1; } -#ifdef CONFIG_NF_CONNTRACK_EVENTS static inline size_t ctnetlink_proto_size(const struct nf_conn *ct) { @@ -564,6 +564,7 @@ ctnetlink_nlmsg_size(const struct nf_conn *ct) ; } +#ifdef CONFIG_NF_CONNTRACK_EVENTS static int ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item) { @@ -901,7 +902,8 @@ static const struct nla_policy help_nla_policy[CTA_HELP_MAX+1] = { }; static inline int -ctnetlink_parse_help(const struct nlattr *attr, char **helper_name) +ctnetlink_parse_help(const struct nlattr *attr, char **helper_name, + struct nlattr **helpinfo) { struct nlattr *tb[CTA_HELP_MAX+1]; @@ -912,6 +914,9 @@ ctnetlink_parse_help(const struct nlattr *attr, char **helper_name) *helper_name = nla_data(tb[CTA_HELP_NAME]); + if (tb[CTA_HELP_INFO]) + *helpinfo = tb[CTA_HELP_INFO]; + return 0; } @@ -1172,13 +1177,14 @@ ctnetlink_change_helper(struct nf_conn *ct, const struct nlattr * const cda[]) struct nf_conntrack_helper *helper; struct nf_conn_help *help = nfct_help(ct); char *helpname = NULL; + struct nlattr *helpinfo = NULL; int err; /* don't change helper of sibling connections */ if (ct->master) return -EBUSY; - err = ctnetlink_parse_help(cda[CTA_HELP], &helpname); + err = ctnetlink_parse_help(cda[CTA_HELP], &helpname, &helpinfo); if (err < 0) return err; @@ -1213,20 +1219,17 @@ ctnetlink_change_helper(struct nf_conn *ct, const struct nlattr * const cda[]) } if (help) { - if (help->helper == helper) + if (help->helper == helper) { + /* update private helper data if allowed. */ + if (helper->from_nlattr && helpinfo) + helper->from_nlattr(helpinfo, ct); return 0; - if (help->helper) + } else return -EBUSY; - /* need to zero data of old helper */ - memset(&help->help, 0, sizeof(help->help)); - } else { - /* we cannot set a helper for an existing conntrack */ - return -EOPNOTSUPP; } - rcu_assign_pointer(help->helper, helper); - - return 0; + /* we cannot set a helper for an existing conntrack */ + return -EOPNOTSUPP; } static inline int @@ -1410,8 +1413,9 @@ ctnetlink_create_conntrack(struct net *net, u16 zone, rcu_read_lock(); if (cda[CTA_HELP]) { char *helpname = NULL; - - err = ctnetlink_parse_help(cda[CTA_HELP], &helpname); + struct nlattr *helpinfo = NULL; + + err = ctnetlink_parse_help(cda[CTA_HELP], &helpname, &helpinfo); if (err < 0) goto err2; @@ -1440,11 +1444,14 @@ ctnetlink_create_conntrack(struct net *net, u16 zone, } else { struct nf_conn_help *help; - help = nf_ct_helper_ext_add(ct, GFP_ATOMIC); + help = nf_ct_helper_ext_add(ct, helper, GFP_ATOMIC); if (help == NULL) { err = -ENOMEM; goto err2; } + /* set private helper data if allowed. */ + if (helper->from_nlattr && helpinfo) + helper->from_nlattr(helpinfo, ct); /* not in hash table yet so not strictly necessary */ RCU_INIT_POINTER(help->helper, helper); @@ -1620,6 +1627,288 @@ ctnetlink_new_conntrack(struct sock *ctnl, struct sk_buff *skb, return err; } +static int +ctnetlink_ct_stat_cpu_fill_info(struct sk_buff *skb, u32 pid, u32 seq, + __u16 cpu, const struct ip_conntrack_stat *st) +{ + struct nlmsghdr *nlh; + struct nfgenmsg *nfmsg; + unsigned int flags = pid ? 
NLM_F_MULTI : 0, event; + + event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_CT_GET_STATS_CPU); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*nfmsg), flags); + if (nlh == NULL) + goto nlmsg_failure; + + nfmsg = nlmsg_data(nlh); + nfmsg->nfgen_family = AF_UNSPEC; + nfmsg->version = NFNETLINK_V0; + nfmsg->res_id = htons(cpu); + + if (nla_put_be32(skb, CTA_STATS_SEARCHED, htonl(st->searched)) || + nla_put_be32(skb, CTA_STATS_FOUND, htonl(st->found)) || + nla_put_be32(skb, CTA_STATS_NEW, htonl(st->new)) || + nla_put_be32(skb, CTA_STATS_INVALID, htonl(st->invalid)) || + nla_put_be32(skb, CTA_STATS_IGNORE, htonl(st->ignore)) || + nla_put_be32(skb, CTA_STATS_DELETE, htonl(st->delete)) || + nla_put_be32(skb, CTA_STATS_DELETE_LIST, htonl(st->delete_list)) || + nla_put_be32(skb, CTA_STATS_INSERT, htonl(st->insert)) || + nla_put_be32(skb, CTA_STATS_INSERT_FAILED, + htonl(st->insert_failed)) || + nla_put_be32(skb, CTA_STATS_DROP, htonl(st->drop)) || + nla_put_be32(skb, CTA_STATS_EARLY_DROP, htonl(st->early_drop)) || + nla_put_be32(skb, CTA_STATS_ERROR, htonl(st->error)) || + nla_put_be32(skb, CTA_STATS_SEARCH_RESTART, + htonl(st->search_restart))) + goto nla_put_failure; + + nlmsg_end(skb, nlh); + return skb->len; + +nla_put_failure: +nlmsg_failure: + nlmsg_cancel(skb, nlh); + return -1; +} + +static int +ctnetlink_ct_stat_cpu_dump(struct sk_buff *skb, struct netlink_callback *cb) +{ + int cpu; + struct net *net = sock_net(skb->sk); + + if (cb->args[0] == nr_cpu_ids) + return 0; + + for (cpu = cb->args[0]; cpu < nr_cpu_ids; cpu++) { + const struct ip_conntrack_stat *st; + + if (!cpu_possible(cpu)) + continue; + + st = per_cpu_ptr(net->ct.stat, cpu); + if (ctnetlink_ct_stat_cpu_fill_info(skb, + NETLINK_CB(cb->skb).pid, + cb->nlh->nlmsg_seq, + cpu, st) < 0) + break; + } + cb->args[0] = cpu; + + return skb->len; +} + +static int +ctnetlink_stat_ct_cpu(struct sock *ctnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, + const struct nlattr * const cda[]) +{ + if (nlh->nlmsg_flags & NLM_F_DUMP) { + struct netlink_dump_control c = { + .dump = ctnetlink_ct_stat_cpu_dump, + }; + return netlink_dump_start(ctnl, skb, nlh, &c); + } + + return 0; +} + +static int +ctnetlink_stat_ct_fill_info(struct sk_buff *skb, u32 pid, u32 seq, u32 type, + struct net *net) +{ + struct nlmsghdr *nlh; + struct nfgenmsg *nfmsg; + unsigned int flags = pid ? 
NLM_F_MULTI : 0, event; + unsigned int nr_conntracks = atomic_read(&net->ct.count); + + event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_CT_GET_STATS); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*nfmsg), flags); + if (nlh == NULL) + goto nlmsg_failure; + + nfmsg = nlmsg_data(nlh); + nfmsg->nfgen_family = AF_UNSPEC; + nfmsg->version = NFNETLINK_V0; + nfmsg->res_id = 0; + + if (nla_put_be32(skb, CTA_STATS_GLOBAL_ENTRIES, htonl(nr_conntracks))) + goto nla_put_failure; + + nlmsg_end(skb, nlh); + return skb->len; + +nla_put_failure: +nlmsg_failure: + nlmsg_cancel(skb, nlh); + return -1; +} + +static int +ctnetlink_stat_ct(struct sock *ctnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, + const struct nlattr * const cda[]) +{ + struct sk_buff *skb2; + int err; + + skb2 = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (skb2 == NULL) + return -ENOMEM; + + err = ctnetlink_stat_ct_fill_info(skb2, NETLINK_CB(skb).pid, + nlh->nlmsg_seq, + NFNL_MSG_TYPE(nlh->nlmsg_type), + sock_net(skb->sk)); + if (err <= 0) + goto free; + + err = netlink_unicast(ctnl, skb2, NETLINK_CB(skb).pid, MSG_DONTWAIT); + if (err < 0) + goto out; + + return 0; + +free: + kfree_skb(skb2); +out: + /* this avoids a loop in nfnetlink. */ + return err == -EAGAIN ? -ENOBUFS : err; +} + +#ifdef CONFIG_NETFILTER_NETLINK_QUEUE_CT +static size_t +ctnetlink_nfqueue_build_size(const struct nf_conn *ct) +{ + return 3 * nla_total_size(0) /* CTA_TUPLE_ORIG|REPL|MASTER */ + + 3 * nla_total_size(0) /* CTA_TUPLE_IP */ + + 3 * nla_total_size(0) /* CTA_TUPLE_PROTO */ + + 3 * nla_total_size(sizeof(u_int8_t)) /* CTA_PROTO_NUM */ + + nla_total_size(sizeof(u_int32_t)) /* CTA_ID */ + + nla_total_size(sizeof(u_int32_t)) /* CTA_STATUS */ + + nla_total_size(sizeof(u_int32_t)) /* CTA_TIMEOUT */ + + nla_total_size(0) /* CTA_PROTOINFO */ + + nla_total_size(0) /* CTA_HELP */ + + nla_total_size(NF_CT_HELPER_NAME_LEN) /* CTA_HELP_NAME */ + + ctnetlink_secctx_size(ct) +#ifdef CONFIG_NF_NAT_NEEDED + + 2 * nla_total_size(0) /* CTA_NAT_SEQ_ADJ_ORIG|REPL */ + + 6 * nla_total_size(sizeof(u_int32_t)) /* CTA_NAT_SEQ_OFFSET */ +#endif +#ifdef CONFIG_NF_CONNTRACK_MARK + + nla_total_size(sizeof(u_int32_t)) /* CTA_MARK */ +#endif + + ctnetlink_proto_size(ct) + ; +} + +static int +ctnetlink_nfqueue_build(struct sk_buff *skb, struct nf_conn *ct) +{ + struct nlattr *nest_parms; + + rcu_read_lock(); + nest_parms = nla_nest_start(skb, CTA_TUPLE_ORIG | NLA_F_NESTED); + if (!nest_parms) + goto nla_put_failure; + if (ctnetlink_dump_tuples(skb, nf_ct_tuple(ct, IP_CT_DIR_ORIGINAL)) < 0) + goto nla_put_failure; + nla_nest_end(skb, nest_parms); + + nest_parms = nla_nest_start(skb, CTA_TUPLE_REPLY | NLA_F_NESTED); + if (!nest_parms) + goto nla_put_failure; + if (ctnetlink_dump_tuples(skb, nf_ct_tuple(ct, IP_CT_DIR_REPLY)) < 0) + goto nla_put_failure; + nla_nest_end(skb, nest_parms); + + if (nf_ct_zone(ct)) { + if (nla_put_be16(skb, CTA_ZONE, htons(nf_ct_zone(ct)))) + goto nla_put_failure; + } + + if (ctnetlink_dump_id(skb, ct) < 0) + goto nla_put_failure; + + if (ctnetlink_dump_status(skb, ct) < 0) + goto nla_put_failure; + + if (ctnetlink_dump_timeout(skb, ct) < 0) + goto nla_put_failure; + + if (ctnetlink_dump_protoinfo(skb, ct) < 0) + goto nla_put_failure; + + if (ctnetlink_dump_helpinfo(skb, ct) < 0) + goto nla_put_failure; + +#ifdef CONFIG_NF_CONNTRACK_SECMARK + if (ct->secmark && ctnetlink_dump_secctx(skb, ct) < 0) + goto nla_put_failure; +#endif + if (ct->master && ctnetlink_dump_master(skb, ct) < 0) + goto nla_put_failure; + + if ((ct->status & IPS_SEQ_ADJUST) && + 
ctnetlink_dump_nat_seq_adj(skb, ct) < 0) + goto nla_put_failure; + +#ifdef CONFIG_NF_CONNTRACK_MARK + if (ct->mark && ctnetlink_dump_mark(skb, ct) < 0) + goto nla_put_failure; +#endif + rcu_read_unlock(); + return 0; + +nla_put_failure: + rcu_read_unlock(); + return -ENOSPC; +} + +static int +ctnetlink_nfqueue_parse_ct(const struct nlattr *cda[], struct nf_conn *ct) +{ + int err; + + if (cda[CTA_TIMEOUT]) { + err = ctnetlink_change_timeout(ct, cda); + if (err < 0) + return err; + } + if (cda[CTA_STATUS]) { + err = ctnetlink_change_status(ct, cda); + if (err < 0) + return err; + } + if (cda[CTA_HELP]) { + err = ctnetlink_change_helper(ct, cda); + if (err < 0) + return err; + } +#if defined(CONFIG_NF_CONNTRACK_MARK) + if (cda[CTA_MARK]) + ct->mark = ntohl(nla_get_be32(cda[CTA_MARK])); +#endif + return 0; +} + +static int +ctnetlink_nfqueue_parse(const struct nlattr *attr, struct nf_conn *ct) +{ + struct nlattr *cda[CTA_MAX+1]; + + nla_parse_nested(cda, CTA_MAX, attr, ct_nla_policy); + + return ctnetlink_nfqueue_parse_ct((const struct nlattr **)cda, ct); +} + +static struct nfq_ct_hook ctnetlink_nfqueue_hook = { + .build_size = ctnetlink_nfqueue_build_size, + .build = ctnetlink_nfqueue_build, + .parse = ctnetlink_nfqueue_parse, +}; +#endif /* CONFIG_NETFILTER_NETLINK_QUEUE_CT */ + /*********************************************************************** * EXPECT ***********************************************************************/ @@ -2300,6 +2589,79 @@ ctnetlink_new_expect(struct sock *ctnl, struct sk_buff *skb, return err; } +static int +ctnetlink_exp_stat_fill_info(struct sk_buff *skb, u32 pid, u32 seq, int cpu, + const struct ip_conntrack_stat *st) +{ + struct nlmsghdr *nlh; + struct nfgenmsg *nfmsg; + unsigned int flags = pid ? NLM_F_MULTI : 0, event; + + event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_EXP_GET_STATS_CPU); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*nfmsg), flags); + if (nlh == NULL) + goto nlmsg_failure; + + nfmsg = nlmsg_data(nlh); + nfmsg->nfgen_family = AF_UNSPEC; + nfmsg->version = NFNETLINK_V0; + nfmsg->res_id = htons(cpu); + + if (nla_put_be32(skb, CTA_STATS_EXP_NEW, htonl(st->expect_new)) || + nla_put_be32(skb, CTA_STATS_EXP_CREATE, htonl(st->expect_create)) || + nla_put_be32(skb, CTA_STATS_EXP_DELETE, htonl(st->expect_delete))) + goto nla_put_failure; + + nlmsg_end(skb, nlh); + return skb->len; + +nla_put_failure: +nlmsg_failure: + nlmsg_cancel(skb, nlh); + return -1; +} + +static int +ctnetlink_exp_stat_cpu_dump(struct sk_buff *skb, struct netlink_callback *cb) +{ + int cpu; + struct net *net = sock_net(skb->sk); + + if (cb->args[0] == nr_cpu_ids) + return 0; + + for (cpu = cb->args[0]; cpu < nr_cpu_ids; cpu++) { + const struct ip_conntrack_stat *st; + + if (!cpu_possible(cpu)) + continue; + + st = per_cpu_ptr(net->ct.stat, cpu); + if (ctnetlink_exp_stat_fill_info(skb, NETLINK_CB(cb->skb).pid, + cb->nlh->nlmsg_seq, + cpu, st) < 0) + break; + } + cb->args[0] = cpu; + + return skb->len; +} + +static int +ctnetlink_stat_exp_cpu(struct sock *ctnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, + const struct nlattr * const cda[]) +{ + if (nlh->nlmsg_flags & NLM_F_DUMP) { + struct netlink_dump_control c = { + .dump = ctnetlink_exp_stat_cpu_dump, + }; + return netlink_dump_start(ctnl, skb, nlh, &c); + } + + return 0; +} + #ifdef CONFIG_NF_CONNTRACK_EVENTS static struct nf_ct_event_notifier ctnl_notifier = { .fcn = ctnetlink_conntrack_event, @@ -2323,6 +2685,8 @@ static const struct nfnl_callback ctnl_cb[IPCTNL_MSG_MAX] = { [IPCTNL_MSG_CT_GET_CTRZERO] = { 
.call = ctnetlink_get_conntrack, .attr_count = CTA_MAX, .policy = ct_nla_policy }, + [IPCTNL_MSG_CT_GET_STATS_CPU] = { .call = ctnetlink_stat_ct_cpu }, + [IPCTNL_MSG_CT_GET_STATS] = { .call = ctnetlink_stat_ct }, }; static const struct nfnl_callback ctnl_exp_cb[IPCTNL_MSG_EXP_MAX] = { @@ -2335,6 +2699,7 @@ static const struct nfnl_callback ctnl_exp_cb[IPCTNL_MSG_EXP_MAX] = { [IPCTNL_MSG_EXP_DELETE] = { .call = ctnetlink_del_expect, .attr_count = CTA_EXPECT_MAX, .policy = exp_nla_policy }, + [IPCTNL_MSG_EXP_GET_STATS_CPU] = { .call = ctnetlink_stat_exp_cpu }, }; static const struct nfnetlink_subsystem ctnl_subsys = { @@ -2424,7 +2789,10 @@ static int __init ctnetlink_init(void) pr_err("ctnetlink_init: cannot register pernet operations\n"); goto err_unreg_exp_subsys; } - +#ifdef CONFIG_NETFILTER_NETLINK_QUEUE_CT + /* setup interaction between nf_queue and nf_conntrack_netlink. */ + RCU_INIT_POINTER(nfq_ct_hook, &ctnetlink_nfqueue_hook); +#endif return 0; err_unreg_exp_subsys: @@ -2442,6 +2810,9 @@ static void __exit ctnetlink_exit(void) unregister_pernet_subsys(&ctnetlink_net_ops); nfnetlink_subsys_unregister(&ctnl_exp_subsys); nfnetlink_subsys_unregister(&ctnl_subsys); +#ifdef CONFIG_NETFILTER_NETLINK_QUEUE_CT + RCU_INIT_POINTER(nfq_ct_hook, NULL); +#endif } module_init(ctnetlink_init); diff --git a/net/netfilter/nf_conntrack_pptp.c b/net/netfilter/nf_conntrack_pptp.c index 31d56b23b9e9..6fed9ec35248 100644 --- a/net/netfilter/nf_conntrack_pptp.c +++ b/net/netfilter/nf_conntrack_pptp.c @@ -174,7 +174,7 @@ static int destroy_sibling_or_exp(struct net *net, struct nf_conn *ct, static void pptp_destroy_siblings(struct nf_conn *ct) { struct net *net = nf_ct_net(ct); - const struct nf_conn_help *help = nfct_help(ct); + const struct nf_ct_pptp_master *ct_pptp_info = nfct_help_data(ct); struct nf_conntrack_tuple t; nf_ct_gre_keymap_destroy(ct); @@ -182,16 +182,16 @@ static void pptp_destroy_siblings(struct nf_conn *ct) /* try original (pns->pac) tuple */ memcpy(&t, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, sizeof(t)); t.dst.protonum = IPPROTO_GRE; - t.src.u.gre.key = help->help.ct_pptp_info.pns_call_id; - t.dst.u.gre.key = help->help.ct_pptp_info.pac_call_id; + t.src.u.gre.key = ct_pptp_info->pns_call_id; + t.dst.u.gre.key = ct_pptp_info->pac_call_id; if (!destroy_sibling_or_exp(net, ct, &t)) pr_debug("failed to timeout original pns->pac ct/exp\n"); /* try reply (pac->pns) tuple */ memcpy(&t, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, sizeof(t)); t.dst.protonum = IPPROTO_GRE; - t.src.u.gre.key = help->help.ct_pptp_info.pac_call_id; - t.dst.u.gre.key = help->help.ct_pptp_info.pns_call_id; + t.src.u.gre.key = ct_pptp_info->pac_call_id; + t.dst.u.gre.key = ct_pptp_info->pns_call_id; if (!destroy_sibling_or_exp(net, ct, &t)) pr_debug("failed to timeout reply pac->pns ct/exp\n"); } @@ -269,7 +269,7 @@ pptp_inbound_pkt(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo) { - struct nf_ct_pptp_master *info = &nfct_help(ct)->help.ct_pptp_info; + struct nf_ct_pptp_master *info = nfct_help_data(ct); u_int16_t msg; __be16 cid = 0, pcid = 0; typeof(nf_nat_pptp_hook_inbound) nf_nat_pptp_inbound; @@ -396,7 +396,7 @@ pptp_outbound_pkt(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info ctinfo) { - struct nf_ct_pptp_master *info = &nfct_help(ct)->help.ct_pptp_info; + struct nf_ct_pptp_master *info = nfct_help_data(ct); u_int16_t msg; __be16 cid = 0, pcid = 0; typeof(nf_nat_pptp_hook_outbound) nf_nat_pptp_outbound; @@ -506,7 +506,7 @@ conntrack_pptp_help(struct sk_buff *skb, unsigned int 
protoff, { int dir = CTINFO2DIR(ctinfo); - const struct nf_ct_pptp_master *info = &nfct_help(ct)->help.ct_pptp_info; + const struct nf_ct_pptp_master *info = nfct_help_data(ct); const struct tcphdr *tcph; struct tcphdr _tcph; const struct pptp_pkt_hdr *pptph; @@ -592,6 +592,7 @@ static const struct nf_conntrack_expect_policy pptp_exp_policy = { static struct nf_conntrack_helper pptp __read_mostly = { .name = "pptp", .me = THIS_MODULE, + .data_len = sizeof(struct nf_ct_pptp_master), .tuple.src.l3num = AF_INET, .tuple.src.u.tcp.port = cpu_to_be16(PPTP_CONTROL_PORT), .tuple.dst.protonum = IPPROTO_TCP, diff --git a/net/netfilter/nf_conntrack_proto.c b/net/netfilter/nf_conntrack_proto.c index 8b631b07a645..0dc63854390f 100644 --- a/net/netfilter/nf_conntrack_proto.c +++ b/net/netfilter/nf_conntrack_proto.c @@ -36,28 +36,32 @@ static DEFINE_MUTEX(nf_ct_proto_mutex); #ifdef CONFIG_SYSCTL static int -nf_ct_register_sysctl(struct ctl_table_header **header, const char *path, - struct ctl_table *table, unsigned int *users) +nf_ct_register_sysctl(struct net *net, + struct ctl_table_header **header, + const char *path, + struct ctl_table *table) { if (*header == NULL) { - *header = register_net_sysctl(&init_net, path, table); + *header = register_net_sysctl(net, path, table); if (*header == NULL) return -ENOMEM; } - if (users != NULL) - (*users)++; + return 0; } static void nf_ct_unregister_sysctl(struct ctl_table_header **header, - struct ctl_table *table, unsigned int *users) + struct ctl_table **table, + unsigned int users) { - if (users != NULL && --*users > 0) + if (users > 0) return; unregister_net_sysctl_table(*header); + kfree(*table); *header = NULL; + *table = NULL; } #endif @@ -161,30 +165,56 @@ static int kill_l4proto(struct nf_conn *i, void *data) nf_ct_l3num(i) == l4proto->l3proto; } -static int nf_ct_l3proto_register_sysctl(struct nf_conntrack_l3proto *l3proto) +static struct nf_ip_net *nf_ct_l3proto_net(struct net *net, + struct nf_conntrack_l3proto *l3proto) { - int err = 0; + if (l3proto->l3proto == PF_INET) + return &net->ct.nf_ct_proto; + else + return NULL; +} -#ifdef CONFIG_SYSCTL - if (l3proto->ctl_table != NULL) { - err = nf_ct_register_sysctl(&l3proto->ctl_table_header, +static int nf_ct_l3proto_register_sysctl(struct net *net, + struct nf_conntrack_l3proto *l3proto) +{ + int err = 0; + struct nf_ip_net *in = nf_ct_l3proto_net(net, l3proto); + /* nf_conntrack_l3proto_ipv6 doesn't support sysctl */ + if (in == NULL) + return 0; + +#if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) + if (in->ctl_table != NULL) { + err = nf_ct_register_sysctl(net, + &in->ctl_table_header, l3proto->ctl_table_path, - l3proto->ctl_table, NULL); + in->ctl_table); + if (err < 0) { + kfree(in->ctl_table); + in->ctl_table = NULL; + } } #endif return err; } -static void nf_ct_l3proto_unregister_sysctl(struct nf_conntrack_l3proto *l3proto) +static void nf_ct_l3proto_unregister_sysctl(struct net *net, + struct nf_conntrack_l3proto *l3proto) { -#ifdef CONFIG_SYSCTL - if (l3proto->ctl_table_header != NULL) - nf_ct_unregister_sysctl(&l3proto->ctl_table_header, - l3proto->ctl_table, NULL); + struct nf_ip_net *in = nf_ct_l3proto_net(net, l3proto); + + if (in == NULL) + return; +#if defined(CONFIG_SYSCTL) && defined(CONFIG_NF_CONNTRACK_PROC_COMPAT) + if (in->ctl_table_header != NULL) + nf_ct_unregister_sysctl(&in->ctl_table_header, + &in->ctl_table, + 0); #endif } -int nf_conntrack_l3proto_register(struct nf_conntrack_l3proto *proto) +static int +nf_conntrack_l3proto_register_net(struct 
nf_conntrack_l3proto *proto) { int ret = 0; struct nf_conntrack_l3proto *old; @@ -203,10 +233,6 @@ int nf_conntrack_l3proto_register(struct nf_conntrack_l3proto *proto) goto out_unlock; } - ret = nf_ct_l3proto_register_sysctl(proto); - if (ret < 0) - goto out_unlock; - if (proto->nlattr_tuple_size) proto->nla_size = 3 * proto->nlattr_tuple_size(); @@ -215,13 +241,37 @@ int nf_conntrack_l3proto_register(struct nf_conntrack_l3proto *proto) out_unlock: mutex_unlock(&nf_ct_proto_mutex); return ret; + } -EXPORT_SYMBOL_GPL(nf_conntrack_l3proto_register); -void nf_conntrack_l3proto_unregister(struct nf_conntrack_l3proto *proto) +int nf_conntrack_l3proto_register(struct net *net, + struct nf_conntrack_l3proto *proto) { - struct net *net; + int ret = 0; + + if (proto->init_net) { + ret = proto->init_net(net); + if (ret < 0) + return ret; + } + + ret = nf_ct_l3proto_register_sysctl(net, proto); + if (ret < 0) + return ret; + if (net == &init_net) { + ret = nf_conntrack_l3proto_register_net(proto); + if (ret < 0) + nf_ct_l3proto_unregister_sysctl(net, proto); + } + + return ret; +} +EXPORT_SYMBOL_GPL(nf_conntrack_l3proto_register); + +static void +nf_conntrack_l3proto_unregister_net(struct nf_conntrack_l3proto *proto) +{ BUG_ON(proto->l3proto >= AF_MAX); mutex_lock(&nf_ct_proto_mutex); @@ -230,68 +280,107 @@ void nf_conntrack_l3proto_unregister(struct nf_conntrack_l3proto *proto) ) != proto); rcu_assign_pointer(nf_ct_l3protos[proto->l3proto], &nf_conntrack_l3proto_generic); - nf_ct_l3proto_unregister_sysctl(proto); mutex_unlock(&nf_ct_proto_mutex); synchronize_rcu(); +} + +void nf_conntrack_l3proto_unregister(struct net *net, + struct nf_conntrack_l3proto *proto) +{ + if (net == &init_net) + nf_conntrack_l3proto_unregister_net(proto); + + nf_ct_l3proto_unregister_sysctl(net, proto); /* Remove all contrack entries for this protocol */ rtnl_lock(); - for_each_net(net) - nf_ct_iterate_cleanup(net, kill_l3proto, proto); + nf_ct_iterate_cleanup(net, kill_l3proto, proto); rtnl_unlock(); } EXPORT_SYMBOL_GPL(nf_conntrack_l3proto_unregister); -static int nf_ct_l4proto_register_sysctl(struct nf_conntrack_l4proto *l4proto) +static struct nf_proto_net *nf_ct_l4proto_net(struct net *net, + struct nf_conntrack_l4proto *l4proto) +{ + if (l4proto->get_net_proto) { + /* statically built-in protocols use static per-net */ + return l4proto->get_net_proto(net); + } else if (l4proto->net_id) { + /* ... 
and loadable protocols use dynamic per-net */ + return net_generic(net, *l4proto->net_id); + } + return NULL; +} + +static +int nf_ct_l4proto_register_sysctl(struct net *net, + struct nf_proto_net *pn, + struct nf_conntrack_l4proto *l4proto) { int err = 0; #ifdef CONFIG_SYSCTL - if (l4proto->ctl_table != NULL) { - err = nf_ct_register_sysctl(l4proto->ctl_table_header, + if (pn->ctl_table != NULL) { + err = nf_ct_register_sysctl(net, + &pn->ctl_table_header, "net/netfilter", - l4proto->ctl_table, - l4proto->ctl_table_users); - if (err < 0) - goto out; + pn->ctl_table); + if (err < 0) { + if (!pn->users) { + kfree(pn->ctl_table); + pn->ctl_table = NULL; + } + } } #ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - if (l4proto->ctl_compat_table != NULL) { - err = nf_ct_register_sysctl(&l4proto->ctl_compat_table_header, + if (l4proto->l3proto != AF_INET6 && pn->ctl_compat_table != NULL) { + if (err < 0) { + nf_ct_kfree_compat_sysctl_table(pn); + goto out; + } + err = nf_ct_register_sysctl(net, + &pn->ctl_compat_header, "net/ipv4/netfilter", - l4proto->ctl_compat_table, NULL); + pn->ctl_compat_table); if (err == 0) goto out; - nf_ct_unregister_sysctl(l4proto->ctl_table_header, - l4proto->ctl_table, - l4proto->ctl_table_users); + + nf_ct_kfree_compat_sysctl_table(pn); + nf_ct_unregister_sysctl(&pn->ctl_table_header, + &pn->ctl_table, + pn->users); } -#endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ out: +#endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ return err; } -static void nf_ct_l4proto_unregister_sysctl(struct nf_conntrack_l4proto *l4proto) +static +void nf_ct_l4proto_unregister_sysctl(struct net *net, + struct nf_proto_net *pn, + struct nf_conntrack_l4proto *l4proto) { #ifdef CONFIG_SYSCTL - if (l4proto->ctl_table_header != NULL && - *l4proto->ctl_table_header != NULL) - nf_ct_unregister_sysctl(l4proto->ctl_table_header, - l4proto->ctl_table, - l4proto->ctl_table_users); + if (pn->ctl_table_header != NULL) + nf_ct_unregister_sysctl(&pn->ctl_table_header, + &pn->ctl_table, + pn->users); + #ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - if (l4proto->ctl_compat_table_header != NULL) - nf_ct_unregister_sysctl(&l4proto->ctl_compat_table_header, - l4proto->ctl_compat_table, NULL); + if (l4proto->l3proto != AF_INET6 && pn->ctl_compat_header != NULL) + nf_ct_unregister_sysctl(&pn->ctl_compat_header, + &pn->ctl_compat_table, + 0); #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ } /* FIXME: Allow NULL functions and sub in pointers to generic for them. 
--RR */ -int nf_conntrack_l4proto_register(struct nf_conntrack_l4proto *l4proto) +static int +nf_conntrack_l4proto_register_net(struct nf_conntrack_l4proto *l4proto) { int ret = 0; @@ -333,10 +422,6 @@ int nf_conntrack_l4proto_register(struct nf_conntrack_l4proto *l4proto) goto out_unlock; } - ret = nf_ct_l4proto_register_sysctl(l4proto); - if (ret < 0) - goto out_unlock; - l4proto->nla_size = 0; if (l4proto->nlattr_size) l4proto->nla_size += l4proto->nlattr_size(); @@ -345,17 +430,48 @@ int nf_conntrack_l4proto_register(struct nf_conntrack_l4proto *l4proto) rcu_assign_pointer(nf_ct_protos[l4proto->l3proto][l4proto->l4proto], l4proto); - out_unlock: mutex_unlock(&nf_ct_proto_mutex); return ret; } -EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_register); -void nf_conntrack_l4proto_unregister(struct nf_conntrack_l4proto *l4proto) +int nf_conntrack_l4proto_register(struct net *net, + struct nf_conntrack_l4proto *l4proto) { - struct net *net; + int ret = 0; + struct nf_proto_net *pn = NULL; + if (l4proto->init_net) { + ret = l4proto->init_net(net, l4proto->l3proto); + if (ret < 0) + goto out; + } + + pn = nf_ct_l4proto_net(net, l4proto); + if (pn == NULL) + goto out; + + ret = nf_ct_l4proto_register_sysctl(net, pn, l4proto); + if (ret < 0) + goto out; + + if (net == &init_net) { + ret = nf_conntrack_l4proto_register_net(l4proto); + if (ret < 0) { + nf_ct_l4proto_unregister_sysctl(net, pn, l4proto); + goto out; + } + } + + pn->users++; +out: + return ret; +} +EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_register); + +static void +nf_conntrack_l4proto_unregister_net(struct nf_conntrack_l4proto *l4proto) +{ BUG_ON(l4proto->l3proto >= PF_MAX); mutex_lock(&nf_ct_proto_mutex); @@ -365,41 +481,73 @@ void nf_conntrack_l4proto_unregister(struct nf_conntrack_l4proto *l4proto) ) != l4proto); rcu_assign_pointer(nf_ct_protos[l4proto->l3proto][l4proto->l4proto], &nf_conntrack_l4proto_generic); - nf_ct_l4proto_unregister_sysctl(l4proto); mutex_unlock(&nf_ct_proto_mutex); synchronize_rcu(); +} + +void nf_conntrack_l4proto_unregister(struct net *net, + struct nf_conntrack_l4proto *l4proto) +{ + struct nf_proto_net *pn = NULL; + + if (net == &init_net) + nf_conntrack_l4proto_unregister_net(l4proto); + + pn = nf_ct_l4proto_net(net, l4proto); + if (pn == NULL) + return; + + pn->users--; + nf_ct_l4proto_unregister_sysctl(net, pn, l4proto); /* Remove all contrack entries for this protocol */ rtnl_lock(); - for_each_net(net) - nf_ct_iterate_cleanup(net, kill_l4proto, l4proto); + nf_ct_iterate_cleanup(net, kill_l4proto, l4proto); rtnl_unlock(); } EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_unregister); -int nf_conntrack_proto_init(void) +int nf_conntrack_proto_init(struct net *net) { unsigned int i; int err; + struct nf_proto_net *pn = nf_ct_l4proto_net(net, + &nf_conntrack_l4proto_generic); - err = nf_ct_l4proto_register_sysctl(&nf_conntrack_l4proto_generic); + err = nf_conntrack_l4proto_generic.init_net(net, + nf_conntrack_l4proto_generic.l3proto); + if (err < 0) + return err; + err = nf_ct_l4proto_register_sysctl(net, + pn, + &nf_conntrack_l4proto_generic); if (err < 0) return err; - for (i = 0; i < AF_MAX; i++) - rcu_assign_pointer(nf_ct_l3protos[i], - &nf_conntrack_l3proto_generic); + if (net == &init_net) { + for (i = 0; i < AF_MAX; i++) + rcu_assign_pointer(nf_ct_l3protos[i], + &nf_conntrack_l3proto_generic); + } + + pn->users++; return 0; } -void nf_conntrack_proto_fini(void) +void nf_conntrack_proto_fini(struct net *net) { unsigned int i; - - nf_ct_l4proto_unregister_sysctl(&nf_conntrack_l4proto_generic); - - /* free l3proto 
protocol tables */ - for (i = 0; i < PF_MAX; i++) - kfree(nf_ct_protos[i]); + struct nf_proto_net *pn = nf_ct_l4proto_net(net, + &nf_conntrack_l4proto_generic); + + pn->users--; + nf_ct_l4proto_unregister_sysctl(net, + pn, + &nf_conntrack_l4proto_generic); + if (net == &init_net) { + /* free l3proto protocol tables */ + for (i = 0; i < PF_MAX; i++) + kfree(nf_ct_protos[i]); + } } diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c index ef706a485be1..6535326cf07c 100644 --- a/net/netfilter/nf_conntrack_proto_dccp.c +++ b/net/netfilter/nf_conntrack_proto_dccp.c @@ -387,12 +387,9 @@ dccp_state_table[CT_DCCP_ROLE_MAX + 1][DCCP_PKT_SYNCACK + 1][CT_DCCP_MAX + 1] = /* this module per-net specifics */ static int dccp_net_id __read_mostly; struct dccp_net { + struct nf_proto_net pn; int dccp_loose; unsigned int dccp_timeout[CT_DCCP_MAX + 1]; -#ifdef CONFIG_SYSCTL - struct ctl_table_header *sysctl_header; - struct ctl_table *sysctl_table; -#endif }; static inline struct dccp_net *dccp_pernet(struct net *net) @@ -715,9 +712,10 @@ static int dccp_nlattr_size(void) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { - struct dccp_net *dn = dccp_pernet(&init_net); + struct dccp_net *dn = dccp_pernet(net); unsigned int *timeouts = data; int i; @@ -817,6 +815,51 @@ static struct ctl_table dccp_sysctl_table[] = { }; #endif /* CONFIG_SYSCTL */ +static int dccp_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct dccp_net *dn) +{ +#ifdef CONFIG_SYSCTL + if (pn->ctl_table) + return 0; + + pn->ctl_table = kmemdup(dccp_sysctl_table, + sizeof(dccp_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &dn->dccp_timeout[CT_DCCP_REQUEST]; + pn->ctl_table[1].data = &dn->dccp_timeout[CT_DCCP_RESPOND]; + pn->ctl_table[2].data = &dn->dccp_timeout[CT_DCCP_PARTOPEN]; + pn->ctl_table[3].data = &dn->dccp_timeout[CT_DCCP_OPEN]; + pn->ctl_table[4].data = &dn->dccp_timeout[CT_DCCP_CLOSEREQ]; + pn->ctl_table[5].data = &dn->dccp_timeout[CT_DCCP_CLOSING]; + pn->ctl_table[6].data = &dn->dccp_timeout[CT_DCCP_TIMEWAIT]; + pn->ctl_table[7].data = &dn->dccp_loose; +#endif + return 0; +} + +static int dccp_init_net(struct net *net, u_int16_t proto) +{ + struct dccp_net *dn = dccp_pernet(net); + struct nf_proto_net *pn = &dn->pn; + + if (!pn->users) { + /* default values */ + dn->dccp_loose = 1; + dn->dccp_timeout[CT_DCCP_REQUEST] = 2 * DCCP_MSL; + dn->dccp_timeout[CT_DCCP_RESPOND] = 4 * DCCP_MSL; + dn->dccp_timeout[CT_DCCP_PARTOPEN] = 4 * DCCP_MSL; + dn->dccp_timeout[CT_DCCP_OPEN] = 12 * 3600 * HZ; + dn->dccp_timeout[CT_DCCP_CLOSEREQ] = 64 * HZ; + dn->dccp_timeout[CT_DCCP_CLOSING] = 64 * HZ; + dn->dccp_timeout[CT_DCCP_TIMEWAIT] = 2 * DCCP_MSL; + } + + return dccp_kmemdup_sysctl_table(pn, dn); +} + static struct nf_conntrack_l4proto dccp_proto4 __read_mostly = { .l3proto = AF_INET, .l4proto = IPPROTO_DCCP, @@ -847,6 +890,8 @@ static struct nf_conntrack_l4proto dccp_proto4 __read_mostly = { .nla_policy = dccp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ + .net_id = &dccp_net_id, + .init_net = dccp_init_net, }; static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = { @@ -879,55 +924,39 @@ static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = { .nla_policy = dccp_timeout_nla_policy, }, #endif /* 
CONFIG_NF_CT_NETLINK_TIMEOUT */ + .net_id = &dccp_net_id, + .init_net = dccp_init_net, }; static __net_init int dccp_net_init(struct net *net) { - struct dccp_net *dn = dccp_pernet(net); - - /* default values */ - dn->dccp_loose = 1; - dn->dccp_timeout[CT_DCCP_REQUEST] = 2 * DCCP_MSL; - dn->dccp_timeout[CT_DCCP_RESPOND] = 4 * DCCP_MSL; - dn->dccp_timeout[CT_DCCP_PARTOPEN] = 4 * DCCP_MSL; - dn->dccp_timeout[CT_DCCP_OPEN] = 12 * 3600 * HZ; - dn->dccp_timeout[CT_DCCP_CLOSEREQ] = 64 * HZ; - dn->dccp_timeout[CT_DCCP_CLOSING] = 64 * HZ; - dn->dccp_timeout[CT_DCCP_TIMEWAIT] = 2 * DCCP_MSL; - -#ifdef CONFIG_SYSCTL - dn->sysctl_table = kmemdup(dccp_sysctl_table, - sizeof(dccp_sysctl_table), GFP_KERNEL); - if (!dn->sysctl_table) - return -ENOMEM; - - dn->sysctl_table[0].data = &dn->dccp_timeout[CT_DCCP_REQUEST]; - dn->sysctl_table[1].data = &dn->dccp_timeout[CT_DCCP_RESPOND]; - dn->sysctl_table[2].data = &dn->dccp_timeout[CT_DCCP_PARTOPEN]; - dn->sysctl_table[3].data = &dn->dccp_timeout[CT_DCCP_OPEN]; - dn->sysctl_table[4].data = &dn->dccp_timeout[CT_DCCP_CLOSEREQ]; - dn->sysctl_table[5].data = &dn->dccp_timeout[CT_DCCP_CLOSING]; - dn->sysctl_table[6].data = &dn->dccp_timeout[CT_DCCP_TIMEWAIT]; - dn->sysctl_table[7].data = &dn->dccp_loose; - - dn->sysctl_header = register_net_sysctl(net, "net/netfilter", - dn->sysctl_table); - if (!dn->sysctl_header) { - kfree(dn->sysctl_table); - return -ENOMEM; + int ret = 0; + ret = nf_conntrack_l4proto_register(net, + &dccp_proto4); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_dccp4 :protocol register failed.\n"); + goto out; + } + ret = nf_conntrack_l4proto_register(net, + &dccp_proto6); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_dccp6 :protocol register failed.\n"); + goto cleanup_dccp4; } -#endif - return 0; +cleanup_dccp4: + nf_conntrack_l4proto_unregister(net, + &dccp_proto4); +out: + return ret; } static __net_exit void dccp_net_exit(struct net *net) { - struct dccp_net *dn = dccp_pernet(net); -#ifdef CONFIG_SYSCTL - unregister_net_sysctl_table(dn->sysctl_header); - kfree(dn->sysctl_table); -#endif + nf_conntrack_l4proto_unregister(net, + &dccp_proto6); + nf_conntrack_l4proto_unregister(net, + &dccp_proto4); } static struct pernet_operations dccp_net_ops = { @@ -939,34 +968,12 @@ static struct pernet_operations dccp_net_ops = { static int __init nf_conntrack_proto_dccp_init(void) { - int err; - - err = register_pernet_subsys(&dccp_net_ops); - if (err < 0) - goto err1; - - err = nf_conntrack_l4proto_register(&dccp_proto4); - if (err < 0) - goto err2; - - err = nf_conntrack_l4proto_register(&dccp_proto6); - if (err < 0) - goto err3; - return 0; - -err3: - nf_conntrack_l4proto_unregister(&dccp_proto4); -err2: - unregister_pernet_subsys(&dccp_net_ops); -err1: - return err; + return register_pernet_subsys(&dccp_net_ops); } static void __exit nf_conntrack_proto_dccp_fini(void) { unregister_pernet_subsys(&dccp_net_ops); - nf_conntrack_l4proto_unregister(&dccp_proto6); - nf_conntrack_l4proto_unregister(&dccp_proto4); } module_init(nf_conntrack_proto_dccp_init); diff --git a/net/netfilter/nf_conntrack_proto_generic.c b/net/netfilter/nf_conntrack_proto_generic.c index d8923d54b358..d25f29377648 100644 --- a/net/netfilter/nf_conntrack_proto_generic.c +++ b/net/netfilter/nf_conntrack_proto_generic.c @@ -14,6 +14,11 @@ static unsigned int nf_ct_generic_timeout __read_mostly = 600*HZ; +static inline struct nf_generic_net *generic_pernet(struct net *net) +{ + return &net->ct.nf_ct_proto.generic; +} + static bool generic_pkt_to_tuple(const struct sk_buff *skb, 
unsigned int dataoff, struct nf_conntrack_tuple *tuple) @@ -42,7 +47,7 @@ static int generic_print_tuple(struct seq_file *s, static unsigned int *generic_get_timeouts(struct net *net) { - return &nf_ct_generic_timeout; + return &(generic_pernet(net)->timeout); } /* Returns verdict for packet, or -1 for invalid. */ @@ -70,16 +75,18 @@ static bool generic_new(struct nf_conn *ct, const struct sk_buff *skb, #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int generic_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int generic_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeout = data; + struct nf_generic_net *gn = generic_pernet(net); if (tb[CTA_TIMEOUT_GENERIC_TIMEOUT]) *timeout = ntohl(nla_get_be32(tb[CTA_TIMEOUT_GENERIC_TIMEOUT])) * HZ; else { /* Set default generic timeout. */ - *timeout = nf_ct_generic_timeout; + *timeout = gn->timeout; } return 0; @@ -106,11 +113,9 @@ generic_timeout_nla_policy[CTA_TIMEOUT_GENERIC_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static struct ctl_table_header *generic_sysctl_header; static struct ctl_table generic_sysctl_table[] = { { .procname = "nf_conntrack_generic_timeout", - .data = &nf_ct_generic_timeout, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -121,7 +126,6 @@ static struct ctl_table generic_sysctl_table[] = { static struct ctl_table generic_compat_sysctl_table[] = { { .procname = "ip_conntrack_generic_timeout", - .data = &nf_ct_generic_timeout, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -131,6 +135,62 @@ static struct ctl_table generic_compat_sysctl_table[] = { #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ +static int generic_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct nf_generic_net *gn) +{ +#ifdef CONFIG_SYSCTL + pn->ctl_table = kmemdup(generic_sysctl_table, + sizeof(generic_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &gn->timeout; +#endif + return 0; +} + +static int generic_kmemdup_compat_sysctl_table(struct nf_proto_net *pn, + struct nf_generic_net *gn) +{ +#ifdef CONFIG_SYSCTL +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + pn->ctl_compat_table = kmemdup(generic_compat_sysctl_table, + sizeof(generic_compat_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_compat_table) + return -ENOMEM; + + pn->ctl_compat_table[0].data = &gn->timeout; +#endif +#endif + return 0; +} + +static int generic_init_net(struct net *net, u_int16_t proto) +{ + int ret; + struct nf_generic_net *gn = generic_pernet(net); + struct nf_proto_net *pn = &gn->pn; + + gn->timeout = nf_ct_generic_timeout; + + ret = generic_kmemdup_compat_sysctl_table(pn, gn); + if (ret < 0) + return ret; + + ret = generic_kmemdup_sysctl_table(pn, gn); + if (ret < 0) + nf_ct_kfree_compat_sysctl_table(pn); + + return ret; +} + +static struct nf_proto_net *generic_get_net_proto(struct net *net) +{ + return &net->ct.nf_ct_proto.generic.pn; +} + struct nf_conntrack_l4proto nf_conntrack_l4proto_generic __read_mostly = { .l3proto = PF_UNSPEC, @@ -151,11 +211,6 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_generic __read_mostly = .nla_policy = generic_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_header = &generic_sysctl_header, - .ctl_table = generic_sysctl_table, -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - .ctl_compat_table = 
generic_compat_sysctl_table, -#endif -#endif + .init_net = generic_init_net, + .get_net_proto = generic_get_net_proto, }; diff --git a/net/netfilter/nf_conntrack_proto_gre.c b/net/netfilter/nf_conntrack_proto_gre.c index 4bf6b4e4b776..b09b7af7f6f8 100644 --- a/net/netfilter/nf_conntrack_proto_gre.c +++ b/net/netfilter/nf_conntrack_proto_gre.c @@ -54,13 +54,20 @@ static unsigned int gre_timeouts[GRE_CT_MAX] = { static int proto_gre_net_id __read_mostly; struct netns_proto_gre { + struct nf_proto_net nf; rwlock_t keymap_lock; struct list_head keymap_list; + unsigned int gre_timeouts[GRE_CT_MAX]; }; +static inline struct netns_proto_gre *gre_pernet(struct net *net) +{ + return net_generic(net, proto_gre_net_id); +} + void nf_ct_gre_keymap_flush(struct net *net) { - struct netns_proto_gre *net_gre = net_generic(net, proto_gre_net_id); + struct netns_proto_gre *net_gre = gre_pernet(net); struct nf_ct_gre_keymap *km, *tmp; write_lock_bh(&net_gre->keymap_lock); @@ -85,7 +92,7 @@ static inline int gre_key_cmpfn(const struct nf_ct_gre_keymap *km, /* look up the source key for a given tuple */ static __be16 gre_keymap_lookup(struct net *net, struct nf_conntrack_tuple *t) { - struct netns_proto_gre *net_gre = net_generic(net, proto_gre_net_id); + struct netns_proto_gre *net_gre = gre_pernet(net); struct nf_ct_gre_keymap *km; __be16 key = 0; @@ -109,11 +116,11 @@ int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir, struct nf_conntrack_tuple *t) { struct net *net = nf_ct_net(ct); - struct netns_proto_gre *net_gre = net_generic(net, proto_gre_net_id); - struct nf_conn_help *help = nfct_help(ct); + struct netns_proto_gre *net_gre = gre_pernet(net); + struct nf_ct_pptp_master *ct_pptp_info = nfct_help_data(ct); struct nf_ct_gre_keymap **kmp, *km; - kmp = &help->help.ct_pptp_info.keymap[dir]; + kmp = &ct_pptp_info->keymap[dir]; if (*kmp) { /* check whether it's a retransmission */ read_lock_bh(&net_gre->keymap_lock); @@ -150,20 +157,20 @@ EXPORT_SYMBOL_GPL(nf_ct_gre_keymap_add); void nf_ct_gre_keymap_destroy(struct nf_conn *ct) { struct net *net = nf_ct_net(ct); - struct netns_proto_gre *net_gre = net_generic(net, proto_gre_net_id); - struct nf_conn_help *help = nfct_help(ct); + struct netns_proto_gre *net_gre = gre_pernet(net); + struct nf_ct_pptp_master *ct_pptp_info = nfct_help_data(ct); enum ip_conntrack_dir dir; pr_debug("entering for ct %p\n", ct); write_lock_bh(&net_gre->keymap_lock); for (dir = IP_CT_DIR_ORIGINAL; dir < IP_CT_DIR_MAX; dir++) { - if (help->help.ct_pptp_info.keymap[dir]) { + if (ct_pptp_info->keymap[dir]) { pr_debug("removing %p from list\n", - help->help.ct_pptp_info.keymap[dir]); - list_del(&help->help.ct_pptp_info.keymap[dir]->list); - kfree(help->help.ct_pptp_info.keymap[dir]); - help->help.ct_pptp_info.keymap[dir] = NULL; + ct_pptp_info->keymap[dir]); + list_del(&ct_pptp_info->keymap[dir]->list); + kfree(ct_pptp_info->keymap[dir]); + ct_pptp_info->keymap[dir] = NULL; } } write_unlock_bh(&net_gre->keymap_lock); @@ -237,7 +244,7 @@ static int gre_print_conntrack(struct seq_file *s, struct nf_conn *ct) static unsigned int *gre_get_timeouts(struct net *net) { - return gre_timeouts; + return gre_pernet(net)->gre_timeouts; } /* Returns verdict for packet, and may modify conntrack */ @@ -297,13 +304,15 @@ static void gre_destroy(struct nf_conn *ct) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int gre_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int gre_timeout_nlattr_to_obj(struct nlattr *tb[], + 
struct net *net, void *data) { unsigned int *timeouts = data; + struct netns_proto_gre *net_gre = gre_pernet(net); /* set default timeouts for GRE. */ - timeouts[GRE_CT_UNREPLIED] = gre_timeouts[GRE_CT_UNREPLIED]; - timeouts[GRE_CT_REPLIED] = gre_timeouts[GRE_CT_REPLIED]; + timeouts[GRE_CT_UNREPLIED] = net_gre->gre_timeouts[GRE_CT_UNREPLIED]; + timeouts[GRE_CT_REPLIED] = net_gre->gre_timeouts[GRE_CT_REPLIED]; if (tb[CTA_TIMEOUT_GRE_UNREPLIED]) { timeouts[GRE_CT_UNREPLIED] = @@ -339,6 +348,19 @@ gre_timeout_nla_policy[CTA_TIMEOUT_GRE_MAX+1] = { }; #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ +static int gre_init_net(struct net *net, u_int16_t proto) +{ + struct netns_proto_gre *net_gre = gre_pernet(net); + int i; + + rwlock_init(&net_gre->keymap_lock); + INIT_LIST_HEAD(&net_gre->keymap_list); + for (i = 0; i < GRE_CT_MAX; i++) + net_gre->gre_timeouts[i] = gre_timeouts[i]; + + return 0; +} + /* protocol helper struct */ static struct nf_conntrack_l4proto nf_conntrack_l4proto_gre4 __read_mostly = { .l3proto = AF_INET, @@ -368,20 +390,22 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_gre4 __read_mostly = { .nla_policy = gre_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ + .net_id = &proto_gre_net_id, + .init_net = gre_init_net, }; static int proto_gre_net_init(struct net *net) { - struct netns_proto_gre *net_gre = net_generic(net, proto_gre_net_id); - - rwlock_init(&net_gre->keymap_lock); - INIT_LIST_HEAD(&net_gre->keymap_list); - - return 0; + int ret = 0; + ret = nf_conntrack_l4proto_register(net, &nf_conntrack_l4proto_gre4); + if (ret < 0) + pr_err("nf_conntrack_l4proto_gre4 :protocol register failed.\n"); + return ret; } static void proto_gre_net_exit(struct net *net) { + nf_conntrack_l4proto_unregister(net, &nf_conntrack_l4proto_gre4); nf_ct_gre_keymap_flush(net); } @@ -394,20 +418,11 @@ static struct pernet_operations proto_gre_net_ops = { static int __init nf_ct_proto_gre_init(void) { - int rv; - - rv = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_gre4); - if (rv < 0) - return rv; - rv = register_pernet_subsys(&proto_gre_net_ops); - if (rv < 0) - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_gre4); - return rv; + return register_pernet_subsys(&proto_gre_net_ops); } static void __exit nf_ct_proto_gre_fini(void) { - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_gre4); unregister_pernet_subsys(&proto_gre_net_ops); } diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c index 996db2fa21f7..c746d61f83ed 100644 --- a/net/netfilter/nf_conntrack_proto_sctp.c +++ b/net/netfilter/nf_conntrack_proto_sctp.c @@ -127,6 +127,17 @@ static const u8 sctp_conntracks[2][9][SCTP_CONNTRACK_MAX] = { } }; +static int sctp_net_id __read_mostly; +struct sctp_net { + struct nf_proto_net pn; + unsigned int timeouts[SCTP_CONNTRACK_MAX]; +}; + +static inline struct sctp_net *sctp_pernet(struct net *net) +{ + return net_generic(net, sctp_net_id); +} + static bool sctp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) { @@ -281,7 +292,7 @@ static int sctp_new_state(enum ip_conntrack_dir dir, static unsigned int *sctp_get_timeouts(struct net *net) { - return sctp_timeouts; + return sctp_pernet(net)->timeouts; } /* Returns verdict for packet, or -NF_ACCEPT for invalid. 
*/ @@ -551,14 +562,16 @@ static int sctp_nlattr_size(void) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int sctp_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int sctp_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeouts = data; + struct sctp_net *sn = sctp_pernet(net); int i; /* set default SCTP timeouts. */ for (i=0; i<SCTP_CONNTRACK_MAX; i++) - timeouts[i] = sctp_timeouts[i]; + timeouts[i] = sn->timeouts[i]; /* there's a 1:1 mapping between attributes and protocol states. */ for (i=CTA_TIMEOUT_SCTP_UNSPEC+1; i<CTA_TIMEOUT_SCTP_MAX+1; i++) { @@ -599,54 +612,45 @@ sctp_timeout_nla_policy[CTA_TIMEOUT_SCTP_MAX+1] = { #ifdef CONFIG_SYSCTL -static unsigned int sctp_sysctl_table_users; -static struct ctl_table_header *sctp_sysctl_header; static struct ctl_table sctp_sysctl_table[] = { { .procname = "nf_conntrack_sctp_timeout_closed", - .data = &sctp_timeouts[SCTP_CONNTRACK_CLOSED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_cookie_wait", - .data = &sctp_timeouts[SCTP_CONNTRACK_COOKIE_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_cookie_echoed", - .data = &sctp_timeouts[SCTP_CONNTRACK_COOKIE_ECHOED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_established", - .data = &sctp_timeouts[SCTP_CONNTRACK_ESTABLISHED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_shutdown_sent", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_shutdown_recd", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_RECD], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_sctp_timeout_shutdown_ack_sent", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -658,49 +662,42 @@ static struct ctl_table sctp_sysctl_table[] = { static struct ctl_table sctp_compat_sysctl_table[] = { { .procname = "ip_conntrack_sctp_timeout_closed", - .data = &sctp_timeouts[SCTP_CONNTRACK_CLOSED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_cookie_wait", - .data = &sctp_timeouts[SCTP_CONNTRACK_COOKIE_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_cookie_echoed", - .data = &sctp_timeouts[SCTP_CONNTRACK_COOKIE_ECHOED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_established", - .data = &sctp_timeouts[SCTP_CONNTRACK_ESTABLISHED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_shutdown_sent", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_shutdown_recd", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_RECD], .maxlen = sizeof(unsigned int), .mode 
= 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_sctp_timeout_shutdown_ack_sent", - .data = &sctp_timeouts[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -710,6 +707,80 @@ static struct ctl_table sctp_compat_sysctl_table[] = { #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif +static int sctp_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct sctp_net *sn) +{ +#ifdef CONFIG_SYSCTL + if (pn->ctl_table) + return 0; + + pn->ctl_table = kmemdup(sctp_sysctl_table, + sizeof(sctp_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &sn->timeouts[SCTP_CONNTRACK_CLOSED]; + pn->ctl_table[1].data = &sn->timeouts[SCTP_CONNTRACK_COOKIE_WAIT]; + pn->ctl_table[2].data = &sn->timeouts[SCTP_CONNTRACK_COOKIE_ECHOED]; + pn->ctl_table[3].data = &sn->timeouts[SCTP_CONNTRACK_ESTABLISHED]; + pn->ctl_table[4].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_SENT]; + pn->ctl_table[5].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_RECD]; + pn->ctl_table[6].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]; +#endif + return 0; +} + +static int sctp_kmemdup_compat_sysctl_table(struct nf_proto_net *pn, + struct sctp_net *sn) +{ +#ifdef CONFIG_SYSCTL +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + pn->ctl_compat_table = kmemdup(sctp_compat_sysctl_table, + sizeof(sctp_compat_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_compat_table) + return -ENOMEM; + + pn->ctl_compat_table[0].data = &sn->timeouts[SCTP_CONNTRACK_CLOSED]; + pn->ctl_compat_table[1].data = &sn->timeouts[SCTP_CONNTRACK_COOKIE_WAIT]; + pn->ctl_compat_table[2].data = &sn->timeouts[SCTP_CONNTRACK_COOKIE_ECHOED]; + pn->ctl_compat_table[3].data = &sn->timeouts[SCTP_CONNTRACK_ESTABLISHED]; + pn->ctl_compat_table[4].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_SENT]; + pn->ctl_compat_table[5].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_RECD]; + pn->ctl_compat_table[6].data = &sn->timeouts[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]; +#endif +#endif + return 0; +} + +static int sctp_init_net(struct net *net, u_int16_t proto) +{ + int ret; + struct sctp_net *sn = sctp_pernet(net); + struct nf_proto_net *pn = &sn->pn; + + if (!pn->users) { + int i; + + for (i = 0; i < SCTP_CONNTRACK_MAX; i++) + sn->timeouts[i] = sctp_timeouts[i]; + } + + if (proto == AF_INET) { + ret = sctp_kmemdup_compat_sysctl_table(pn, sn); + if (ret < 0) + return ret; + + ret = sctp_kmemdup_sysctl_table(pn, sn); + if (ret < 0) + nf_ct_kfree_compat_sysctl_table(pn); + } else + ret = sctp_kmemdup_sysctl_table(pn, sn); + + return ret; +} + static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = { .l3proto = PF_INET, .l4proto = IPPROTO_SCTP, @@ -740,14 +811,8 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = { .nla_policy = sctp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &sctp_sysctl_table_users, - .ctl_table_header = &sctp_sysctl_header, - .ctl_table = sctp_sysctl_table, -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - .ctl_compat_table = sctp_compat_sysctl_table, -#endif -#endif + .net_id = &sctp_net_id, + .init_net = sctp_init_net, }; static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = { @@ -780,40 +845,58 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = { }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif -#ifdef CONFIG_SYSCTL - .ctl_table_users = &sctp_sysctl_table_users, - 
.ctl_table_header = &sctp_sysctl_header, - .ctl_table = sctp_sysctl_table, -#endif + .net_id = &sctp_net_id, + .init_net = sctp_init_net, }; -static int __init nf_conntrack_proto_sctp_init(void) +static int sctp_net_init(struct net *net) { - int ret; + int ret = 0; - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_sctp4); - if (ret) { - pr_err("nf_conntrack_l4proto_sctp4: protocol register failed\n"); + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_sctp4); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_sctp4 :protocol register failed.\n"); goto out; } - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_sctp6); - if (ret) { - pr_err("nf_conntrack_l4proto_sctp6: protocol register failed\n"); + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_sctp6); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_sctp6 :protocol register failed.\n"); goto cleanup_sctp4; } + return 0; +cleanup_sctp4: + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_sctp4); +out: return ret; +} - cleanup_sctp4: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_sctp4); - out: - return ret; +static void sctp_net_exit(struct net *net) +{ + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_sctp6); + nf_conntrack_l4proto_unregister(net, + &nf_conntrack_l4proto_sctp4); +} + +static struct pernet_operations sctp_net_ops = { + .init = sctp_net_init, + .exit = sctp_net_exit, + .id = &sctp_net_id, + .size = sizeof(struct sctp_net), +}; + +static int __init nf_conntrack_proto_sctp_init(void) +{ + return register_pernet_subsys(&sctp_net_ops); } static void __exit nf_conntrack_proto_sctp_fini(void) { - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_sctp6); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_sctp4); + unregister_pernet_subsys(&sctp_net_ops); } module_init(nf_conntrack_proto_sctp_init); diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c index 21ff1a99f534..a5ac11ebef33 100644 --- a/net/netfilter/nf_conntrack_proto_tcp.c +++ b/net/netfilter/nf_conntrack_proto_tcp.c @@ -270,6 +270,11 @@ static const u8 tcp_conntracks[2][6][TCP_CONNTRACK_MAX] = { } }; +static inline struct nf_tcp_net *tcp_pernet(struct net *net) +{ + return &net->ct.nf_ct_proto.tcp; +} + static bool tcp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) { @@ -516,6 +521,7 @@ static bool tcp_in_window(const struct nf_conn *ct, u_int8_t pf) { struct net *net = nf_ct_net(ct); + struct nf_tcp_net *tn = tcp_pernet(net); struct ip_ct_tcp_state *sender = &state->seen[dir]; struct ip_ct_tcp_state *receiver = &state->seen[!dir]; const struct nf_conntrack_tuple *tuple = &ct->tuplehash[dir].tuple; @@ -720,7 +726,7 @@ static bool tcp_in_window(const struct nf_conn *ct, } else { res = false; if (sender->flags & IP_CT_TCP_FLAG_BE_LIBERAL || - nf_ct_tcp_be_liberal) + tn->tcp_be_liberal) res = true; if (!res && LOG_INVALID(net, IPPROTO_TCP)) nf_log_packet(pf, 0, skb, NULL, NULL, NULL, @@ -815,7 +821,7 @@ static int tcp_error(struct net *net, struct nf_conn *tmpl, static unsigned int *tcp_get_timeouts(struct net *net) { - return tcp_timeouts; + return tcp_pernet(net)->timeouts; } /* Returns verdict for packet, or -1 for invalid. 
*/ @@ -828,6 +834,7 @@ static int tcp_packet(struct nf_conn *ct, unsigned int *timeouts) { struct net *net = nf_ct_net(ct); + struct nf_tcp_net *tn = tcp_pernet(net); struct nf_conntrack_tuple *tuple; enum tcp_conntrack new_state, old_state; enum ip_conntrack_dir dir; @@ -1020,7 +1027,7 @@ static int tcp_packet(struct nf_conn *ct, && new_state == TCP_CONNTRACK_FIN_WAIT) ct->proto.tcp.seen[dir].flags |= IP_CT_TCP_FLAG_CLOSE_INIT; - if (ct->proto.tcp.retrans >= nf_ct_tcp_max_retrans && + if (ct->proto.tcp.retrans >= tn->tcp_max_retrans && timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS]) timeout = timeouts[TCP_CONNTRACK_RETRANS]; else if ((ct->proto.tcp.seen[0].flags | ct->proto.tcp.seen[1].flags) & @@ -1065,6 +1072,8 @@ static bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb, enum tcp_conntrack new_state; const struct tcphdr *th; struct tcphdr _tcph; + struct net *net = nf_ct_net(ct); + struct nf_tcp_net *tn = tcp_pernet(net); const struct ip_ct_tcp_state *sender = &ct->proto.tcp.seen[0]; const struct ip_ct_tcp_state *receiver = &ct->proto.tcp.seen[1]; @@ -1093,7 +1102,7 @@ static bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb, ct->proto.tcp.seen[0].td_end; tcp_options(skb, dataoff, th, &ct->proto.tcp.seen[0]); - } else if (nf_ct_tcp_loose == 0) { + } else if (tn->tcp_loose == 0) { /* Don't try to pick up connections. */ return false; } else { @@ -1251,14 +1260,16 @@ static int tcp_nlattr_tuple_size(void) #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int tcp_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int tcp_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeouts = data; + struct nf_tcp_net *tn = tcp_pernet(net); int i; /* set default TCP timeouts. 
*/ for (i=0; i<TCP_CONNTRACK_TIMEOUT_MAX; i++) - timeouts[i] = tcp_timeouts[i]; + timeouts[i] = tn->timeouts[i]; if (tb[CTA_TIMEOUT_TCP_SYN_SENT]) { timeouts[TCP_CONNTRACK_SYN_SENT] = @@ -1355,96 +1366,81 @@ static const struct nla_policy tcp_timeout_nla_policy[CTA_TIMEOUT_TCP_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static unsigned int tcp_sysctl_table_users; -static struct ctl_table_header *tcp_sysctl_header; static struct ctl_table tcp_sysctl_table[] = { { .procname = "nf_conntrack_tcp_timeout_syn_sent", - .data = &tcp_timeouts[TCP_CONNTRACK_SYN_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_syn_recv", - .data = &tcp_timeouts[TCP_CONNTRACK_SYN_RECV], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_established", - .data = &tcp_timeouts[TCP_CONNTRACK_ESTABLISHED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_fin_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_FIN_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_close_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_CLOSE_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_last_ack", - .data = &tcp_timeouts[TCP_CONNTRACK_LAST_ACK], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_time_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_TIME_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_close", - .data = &tcp_timeouts[TCP_CONNTRACK_CLOSE], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_max_retrans", - .data = &tcp_timeouts[TCP_CONNTRACK_RETRANS], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_timeout_unacknowledged", - .data = &tcp_timeouts[TCP_CONNTRACK_UNACK], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_tcp_loose", - .data = &nf_ct_tcp_loose, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "nf_conntrack_tcp_be_liberal", - .data = &nf_ct_tcp_be_liberal, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "nf_conntrack_tcp_max_retrans", - .data = &nf_ct_tcp_max_retrans, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, @@ -1456,91 +1452,78 @@ static struct ctl_table tcp_sysctl_table[] = { static struct ctl_table tcp_compat_sysctl_table[] = { { .procname = "ip_conntrack_tcp_timeout_syn_sent", - .data = &tcp_timeouts[TCP_CONNTRACK_SYN_SENT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_syn_sent2", - .data = &tcp_timeouts[TCP_CONNTRACK_SYN_SENT2], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_syn_recv", - .data = &tcp_timeouts[TCP_CONNTRACK_SYN_RECV], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = 
"ip_conntrack_tcp_timeout_established", - .data = &tcp_timeouts[TCP_CONNTRACK_ESTABLISHED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_fin_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_FIN_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_close_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_CLOSE_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_last_ack", - .data = &tcp_timeouts[TCP_CONNTRACK_LAST_ACK], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_time_wait", - .data = &tcp_timeouts[TCP_CONNTRACK_TIME_WAIT], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_close", - .data = &tcp_timeouts[TCP_CONNTRACK_CLOSE], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_timeout_max_retrans", - .data = &tcp_timeouts[TCP_CONNTRACK_RETRANS], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_tcp_loose", - .data = &nf_ct_tcp_loose, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_tcp_be_liberal", - .data = &nf_ct_tcp_be_liberal, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "ip_conntrack_tcp_max_retrans", - .data = &nf_ct_tcp_max_retrans, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec, @@ -1550,6 +1533,101 @@ static struct ctl_table tcp_compat_sysctl_table[] = { #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ +static int tcp_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct nf_tcp_net *tn) +{ +#ifdef CONFIG_SYSCTL + if (pn->ctl_table) + return 0; + + pn->ctl_table = kmemdup(tcp_sysctl_table, + sizeof(tcp_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &tn->timeouts[TCP_CONNTRACK_SYN_SENT]; + pn->ctl_table[1].data = &tn->timeouts[TCP_CONNTRACK_SYN_RECV]; + pn->ctl_table[2].data = &tn->timeouts[TCP_CONNTRACK_ESTABLISHED]; + pn->ctl_table[3].data = &tn->timeouts[TCP_CONNTRACK_FIN_WAIT]; + pn->ctl_table[4].data = &tn->timeouts[TCP_CONNTRACK_CLOSE_WAIT]; + pn->ctl_table[5].data = &tn->timeouts[TCP_CONNTRACK_LAST_ACK]; + pn->ctl_table[6].data = &tn->timeouts[TCP_CONNTRACK_TIME_WAIT]; + pn->ctl_table[7].data = &tn->timeouts[TCP_CONNTRACK_CLOSE]; + pn->ctl_table[8].data = &tn->timeouts[TCP_CONNTRACK_RETRANS]; + pn->ctl_table[9].data = &tn->timeouts[TCP_CONNTRACK_UNACK]; + pn->ctl_table[10].data = &tn->tcp_loose; + pn->ctl_table[11].data = &tn->tcp_be_liberal; + pn->ctl_table[12].data = &tn->tcp_max_retrans; +#endif + return 0; +} + +static int tcp_kmemdup_compat_sysctl_table(struct nf_proto_net *pn, + struct nf_tcp_net *tn) +{ +#ifdef CONFIG_SYSCTL +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + pn->ctl_compat_table = kmemdup(tcp_compat_sysctl_table, + sizeof(tcp_compat_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_compat_table) + return -ENOMEM; + + pn->ctl_compat_table[0].data = &tn->timeouts[TCP_CONNTRACK_SYN_SENT]; + pn->ctl_compat_table[1].data = &tn->timeouts[TCP_CONNTRACK_SYN_SENT2]; + pn->ctl_compat_table[2].data = &tn->timeouts[TCP_CONNTRACK_SYN_RECV]; + 
pn->ctl_compat_table[3].data = &tn->timeouts[TCP_CONNTRACK_ESTABLISHED]; + pn->ctl_compat_table[4].data = &tn->timeouts[TCP_CONNTRACK_FIN_WAIT]; + pn->ctl_compat_table[5].data = &tn->timeouts[TCP_CONNTRACK_CLOSE_WAIT]; + pn->ctl_compat_table[6].data = &tn->timeouts[TCP_CONNTRACK_LAST_ACK]; + pn->ctl_compat_table[7].data = &tn->timeouts[TCP_CONNTRACK_TIME_WAIT]; + pn->ctl_compat_table[8].data = &tn->timeouts[TCP_CONNTRACK_CLOSE]; + pn->ctl_compat_table[9].data = &tn->timeouts[TCP_CONNTRACK_RETRANS]; + pn->ctl_compat_table[10].data = &tn->tcp_loose; + pn->ctl_compat_table[11].data = &tn->tcp_be_liberal; + pn->ctl_compat_table[12].data = &tn->tcp_max_retrans; +#endif +#endif + return 0; +} + +static int tcp_init_net(struct net *net, u_int16_t proto) +{ + int ret; + struct nf_tcp_net *tn = tcp_pernet(net); + struct nf_proto_net *pn = &tn->pn; + + if (!pn->users) { + int i; + + for (i = 0; i < TCP_CONNTRACK_TIMEOUT_MAX; i++) + tn->timeouts[i] = tcp_timeouts[i]; + + tn->tcp_loose = nf_ct_tcp_loose; + tn->tcp_be_liberal = nf_ct_tcp_be_liberal; + tn->tcp_max_retrans = nf_ct_tcp_max_retrans; + } + + if (proto == AF_INET) { + ret = tcp_kmemdup_compat_sysctl_table(pn, tn); + if (ret < 0) + return ret; + + ret = tcp_kmemdup_sysctl_table(pn, tn); + if (ret < 0) + nf_ct_kfree_compat_sysctl_table(pn); + } else + ret = tcp_kmemdup_sysctl_table(pn, tn); + + return ret; +} + +static struct nf_proto_net *tcp_get_net_proto(struct net *net) +{ + return &net->ct.nf_ct_proto.tcp.pn; +} + struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4 __read_mostly = { .l3proto = PF_INET, @@ -1582,14 +1660,8 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4 __read_mostly = .nla_policy = tcp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &tcp_sysctl_table_users, - .ctl_table_header = &tcp_sysctl_header, - .ctl_table = tcp_sysctl_table, -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - .ctl_compat_table = tcp_compat_sysctl_table, -#endif -#endif + .init_net = tcp_init_net, + .get_net_proto = tcp_get_net_proto, }; EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_tcp4); @@ -1625,10 +1697,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6 __read_mostly = .nla_policy = tcp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &tcp_sysctl_table_users, - .ctl_table_header = &tcp_sysctl_header, - .ctl_table = tcp_sysctl_table, -#endif + .init_net = tcp_init_net, + .get_net_proto = tcp_get_net_proto, }; EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_tcp6); diff --git a/net/netfilter/nf_conntrack_proto_udp.c b/net/netfilter/nf_conntrack_proto_udp.c index 7259a6bdeb49..59623cc56e8d 100644 --- a/net/netfilter/nf_conntrack_proto_udp.c +++ b/net/netfilter/nf_conntrack_proto_udp.c @@ -25,17 +25,16 @@ #include <net/netfilter/ipv4/nf_conntrack_ipv4.h> #include <net/netfilter/ipv6/nf_conntrack_ipv6.h> -enum udp_conntrack { - UDP_CT_UNREPLIED, - UDP_CT_REPLIED, - UDP_CT_MAX -}; - static unsigned int udp_timeouts[UDP_CT_MAX] = { [UDP_CT_UNREPLIED] = 30*HZ, [UDP_CT_REPLIED] = 180*HZ, }; +static inline struct nf_udp_net *udp_pernet(struct net *net) +{ + return &net->ct.nf_ct_proto.udp; +} + static bool udp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) @@ -73,7 +72,7 @@ static int udp_print_tuple(struct seq_file *s, static unsigned int *udp_get_timeouts(struct net *net) { - return udp_timeouts; + return udp_pernet(net)->timeouts; } /* Returns verdict for packet, and may modify conntracktype */ 
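Every file touched in this series follows the same per-namespace conversion that the udp hunks just above show: the module-wide timeout array and the single shared sysctl table become per-net state reached through a small accessor (net_generic() for the dynamically allocated trackers, or a field of net->ct.nf_ct_proto for tcp/udp/generic), and an init_net hook seeds the defaults and kmemdup()s the sysctl template so each netns tunes its own copy. A minimal sketch of that pattern, assuming a hypothetical "foo" tracker — none of the foo_* names below are from this commit, they only mirror the real udp/tcp/sctp instances in this diff:

	/* Illustrative sketch only; foo_* symbols are hypothetical. */
	#include <net/netns/generic.h>
	#include <net/netfilter/nf_conntrack_l4proto.h>

	static int foo_net_id __read_mostly;

	struct nf_foo_net {
		struct nf_proto_net pn;		/* holds ctl_table + users refcount */
		unsigned int timeouts[2];	/* [0] = unreplied, [1] = replied */
	};

	static inline struct nf_foo_net *foo_pernet(struct net *net)
	{
		return net_generic(net, foo_net_id);
	}

	#ifdef CONFIG_SYSCTL
	/* template table; .data is left unset and rebound per netns below */
	static struct ctl_table foo_sysctl_table[] = {
		{
			.procname	= "nf_conntrack_foo_timeout",
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_jiffies,
		},
		{
			.procname	= "nf_conntrack_foo_timeout_stream",
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_jiffies,
		},
		{ }
	};
	#endif

	/* wired up through the l4proto's .init_net / .net_id fields */
	static int foo_init_net(struct net *net, u_int16_t proto)
	{
		struct nf_foo_net *fn = foo_pernet(net);
		struct nf_proto_net *pn = &fn->pn;

		if (!pn->users) {		/* first l4proto user in this netns */
			fn->timeouts[0] = 30 * HZ;	/* module-wide defaults */
			fn->timeouts[1] = 180 * HZ;
		}
	#ifdef CONFIG_SYSCTL
		/* duplicate the template and point .data at the per-net copy,
		 * exactly what the *_kmemdup_sysctl_table() helpers above do */
		pn->ctl_table = kmemdup(foo_sysctl_table,
					sizeof(foo_sysctl_table), GFP_KERNEL);
		if (!pn->ctl_table)
			return -ENOMEM;
		pn->ctl_table[0].data = &fn->timeouts[0];
		pn->ctl_table[1].data = &fn->timeouts[1];
	#endif
		return 0;
	}

With the per-net half in place, each module's init/exit collapses to register_pernet_subsys()/unregister_pernet_subsys(), and the per-net init/exit callbacks call nf_conntrack_l4proto_register()/unregister() for that namespace, as the dccp, gre, sctp and udplite hunks in this diff do.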
@@ -157,13 +156,15 @@ static int udp_error(struct net *net, struct nf_conn *tmpl, struct sk_buff *skb, #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int udp_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int udp_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeouts = data; + struct nf_udp_net *un = udp_pernet(net); /* set default timeouts for UDP. */ - timeouts[UDP_CT_UNREPLIED] = udp_timeouts[UDP_CT_UNREPLIED]; - timeouts[UDP_CT_REPLIED] = udp_timeouts[UDP_CT_REPLIED]; + timeouts[UDP_CT_UNREPLIED] = un->timeouts[UDP_CT_UNREPLIED]; + timeouts[UDP_CT_REPLIED] = un->timeouts[UDP_CT_REPLIED]; if (tb[CTA_TIMEOUT_UDP_UNREPLIED]) { timeouts[UDP_CT_UNREPLIED] = @@ -200,19 +201,15 @@ udp_timeout_nla_policy[CTA_TIMEOUT_UDP_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static unsigned int udp_sysctl_table_users; -static struct ctl_table_header *udp_sysctl_header; static struct ctl_table udp_sysctl_table[] = { { .procname = "nf_conntrack_udp_timeout", - .data = &udp_timeouts[UDP_CT_UNREPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_udp_timeout_stream", - .data = &udp_timeouts[UDP_CT_REPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -223,14 +220,12 @@ static struct ctl_table udp_sysctl_table[] = { static struct ctl_table udp_compat_sysctl_table[] = { { .procname = "ip_conntrack_udp_timeout", - .data = &udp_timeouts[UDP_CT_UNREPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "ip_conntrack_udp_timeout_stream", - .data = &udp_timeouts[UDP_CT_REPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -240,6 +235,73 @@ static struct ctl_table udp_compat_sysctl_table[] = { #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ #endif /* CONFIG_SYSCTL */ +static int udp_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct nf_udp_net *un) +{ +#ifdef CONFIG_SYSCTL + if (pn->ctl_table) + return 0; + pn->ctl_table = kmemdup(udp_sysctl_table, + sizeof(udp_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + pn->ctl_table[0].data = &un->timeouts[UDP_CT_UNREPLIED]; + pn->ctl_table[1].data = &un->timeouts[UDP_CT_REPLIED]; +#endif + return 0; +} + +static int udp_kmemdup_compat_sysctl_table(struct nf_proto_net *pn, + struct nf_udp_net *un) +{ +#ifdef CONFIG_SYSCTL +#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT + pn->ctl_compat_table = kmemdup(udp_compat_sysctl_table, + sizeof(udp_compat_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_compat_table) + return -ENOMEM; + + pn->ctl_compat_table[0].data = &un->timeouts[UDP_CT_UNREPLIED]; + pn->ctl_compat_table[1].data = &un->timeouts[UDP_CT_REPLIED]; +#endif +#endif + return 0; +} + +static int udp_init_net(struct net *net, u_int16_t proto) +{ + int ret; + struct nf_udp_net *un = udp_pernet(net); + struct nf_proto_net *pn = &un->pn; + + if (!pn->users) { + int i; + + for (i = 0; i < UDP_CT_MAX; i++) + un->timeouts[i] = udp_timeouts[i]; + } + + if (proto == AF_INET) { + ret = udp_kmemdup_compat_sysctl_table(pn, un); + if (ret < 0) + return ret; + + ret = udp_kmemdup_sysctl_table(pn, un); + if (ret < 0) + nf_ct_kfree_compat_sysctl_table(pn); + } else + ret = udp_kmemdup_sysctl_table(pn, un); + + return ret; +} + +static struct nf_proto_net *udp_get_net_proto(struct net *net) +{ + return &net->ct.nf_ct_proto.udp.pn; 
+} + struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4 __read_mostly = { .l3proto = PF_INET, @@ -267,14 +329,8 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4 __read_mostly = .nla_policy = udp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &udp_sysctl_table_users, - .ctl_table_header = &udp_sysctl_header, - .ctl_table = udp_sysctl_table, -#ifdef CONFIG_NF_CONNTRACK_PROC_COMPAT - .ctl_compat_table = udp_compat_sysctl_table, -#endif -#endif + .init_net = udp_init_net, + .get_net_proto = udp_get_net_proto, }; EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_udp4); @@ -305,10 +361,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6 __read_mostly = .nla_policy = udp_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &udp_sysctl_table_users, - .ctl_table_header = &udp_sysctl_header, - .ctl_table = udp_sysctl_table, -#endif + .init_net = udp_init_net, + .get_net_proto = udp_get_net_proto, }; EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_udp6); diff --git a/net/netfilter/nf_conntrack_proto_udplite.c b/net/netfilter/nf_conntrack_proto_udplite.c index 4d60a5376aa6..4b66df209286 100644 --- a/net/netfilter/nf_conntrack_proto_udplite.c +++ b/net/netfilter/nf_conntrack_proto_udplite.c @@ -35,6 +35,17 @@ static unsigned int udplite_timeouts[UDPLITE_CT_MAX] = { [UDPLITE_CT_REPLIED] = 180*HZ, }; +static int udplite_net_id __read_mostly; +struct udplite_net { + struct nf_proto_net pn; + unsigned int timeouts[UDPLITE_CT_MAX]; +}; + +static inline struct udplite_net *udplite_pernet(struct net *net) +{ + return net_generic(net, udplite_net_id); +} + static bool udplite_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, struct nf_conntrack_tuple *tuple) @@ -70,7 +81,7 @@ static int udplite_print_tuple(struct seq_file *s, static unsigned int *udplite_get_timeouts(struct net *net) { - return udplite_timeouts; + return udplite_pernet(net)->timeouts; } /* Returns verdict for packet, and may modify conntracktype */ @@ -161,13 +172,15 @@ static int udplite_error(struct net *net, struct nf_conn *tmpl, #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/nfnetlink_cttimeout.h> -static int udplite_timeout_nlattr_to_obj(struct nlattr *tb[], void *data) +static int udplite_timeout_nlattr_to_obj(struct nlattr *tb[], + struct net *net, void *data) { unsigned int *timeouts = data; + struct udplite_net *un = udplite_pernet(net); /* set default timeouts for UDPlite. 
*/ - timeouts[UDPLITE_CT_UNREPLIED] = udplite_timeouts[UDPLITE_CT_UNREPLIED]; - timeouts[UDPLITE_CT_REPLIED] = udplite_timeouts[UDPLITE_CT_REPLIED]; + timeouts[UDPLITE_CT_UNREPLIED] = un->timeouts[UDPLITE_CT_UNREPLIED]; + timeouts[UDPLITE_CT_REPLIED] = un->timeouts[UDPLITE_CT_REPLIED]; if (tb[CTA_TIMEOUT_UDPLITE_UNREPLIED]) { timeouts[UDPLITE_CT_UNREPLIED] = @@ -204,19 +217,15 @@ udplite_timeout_nla_policy[CTA_TIMEOUT_UDPLITE_MAX+1] = { #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #ifdef CONFIG_SYSCTL -static unsigned int udplite_sysctl_table_users; -static struct ctl_table_header *udplite_sysctl_header; static struct ctl_table udplite_sysctl_table[] = { { .procname = "nf_conntrack_udplite_timeout", - .data = &udplite_timeouts[UDPLITE_CT_UNREPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, }, { .procname = "nf_conntrack_udplite_timeout_stream", - .data = &udplite_timeouts[UDPLITE_CT_REPLIED], .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_jiffies, @@ -225,6 +234,40 @@ static struct ctl_table udplite_sysctl_table[] = { }; #endif /* CONFIG_SYSCTL */ +static int udplite_kmemdup_sysctl_table(struct nf_proto_net *pn, + struct udplite_net *un) +{ +#ifdef CONFIG_SYSCTL + if (pn->ctl_table) + return 0; + + pn->ctl_table = kmemdup(udplite_sysctl_table, + sizeof(udplite_sysctl_table), + GFP_KERNEL); + if (!pn->ctl_table) + return -ENOMEM; + + pn->ctl_table[0].data = &un->timeouts[UDPLITE_CT_UNREPLIED]; + pn->ctl_table[1].data = &un->timeouts[UDPLITE_CT_REPLIED]; +#endif + return 0; +} + +static int udplite_init_net(struct net *net, u_int16_t proto) +{ + struct udplite_net *un = udplite_pernet(net); + struct nf_proto_net *pn = &un->pn; + + if (!pn->users) { + int i; + + for (i = 0 ; i < UDPLITE_CT_MAX; i++) + un->timeouts[i] = udplite_timeouts[i]; + } + + return udplite_kmemdup_sysctl_table(pn, un); +} + static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly = { .l3proto = PF_INET, @@ -253,11 +296,8 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly = .nla_policy = udplite_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &udplite_sysctl_table_users, - .ctl_table_header = &udplite_sysctl_header, - .ctl_table = udplite_sysctl_table, -#endif + .net_id = &udplite_net_id, + .init_net = udplite_init_net, }; static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly = @@ -288,34 +328,55 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly = .nla_policy = udplite_timeout_nla_policy, }, #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ -#ifdef CONFIG_SYSCTL - .ctl_table_users = &udplite_sysctl_table_users, - .ctl_table_header = &udplite_sysctl_header, - .ctl_table = udplite_sysctl_table, -#endif + .net_id = &udplite_net_id, + .init_net = udplite_init_net, }; -static int __init nf_conntrack_proto_udplite_init(void) +static int udplite_net_init(struct net *net) { - int err; - - err = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udplite4); - if (err < 0) - goto err1; - err = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udplite6); - if (err < 0) - goto err2; + int ret = 0; + + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_udplite4); + if (ret < 0) { + pr_err("nf_conntrack_l4proto_udplite4 :protocol register failed.\n"); + goto out; + } + ret = nf_conntrack_l4proto_register(net, + &nf_conntrack_l4proto_udplite6); + if (ret < 0) { + 
pr_err("nf_conntrack_l4proto_udplite4 :protocol register failed.\n"); + goto cleanup_udplite4; + } return 0; -err2: - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite4); -err1: - return err; + +cleanup_udplite4: + nf_conntrack_l4proto_unregister(net, &nf_conntrack_l4proto_udplite4); +out: + return ret; +} + +static void udplite_net_exit(struct net *net) +{ + nf_conntrack_l4proto_unregister(net, &nf_conntrack_l4proto_udplite6); + nf_conntrack_l4proto_unregister(net, &nf_conntrack_l4proto_udplite4); +} + +static struct pernet_operations udplite_net_ops = { + .init = udplite_net_init, + .exit = udplite_net_exit, + .id = &udplite_net_id, + .size = sizeof(struct udplite_net), +}; + +static int __init nf_conntrack_proto_udplite_init(void) +{ + return register_pernet_subsys(&udplite_net_ops); } static void __exit nf_conntrack_proto_udplite_exit(void) { - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite6); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite4); + unregister_pernet_subsys(&udplite_net_ops); } module_init(nf_conntrack_proto_udplite_init); diff --git a/net/netfilter/nf_conntrack_sane.c b/net/netfilter/nf_conntrack_sane.c index 8501823b3f9b..295429f39088 100644 --- a/net/netfilter/nf_conntrack_sane.c +++ b/net/netfilter/nf_conntrack_sane.c @@ -69,13 +69,12 @@ static int help(struct sk_buff *skb, void *sb_ptr; int ret = NF_ACCEPT; int dir = CTINFO2DIR(ctinfo); - struct nf_ct_sane_master *ct_sane_info; + struct nf_ct_sane_master *ct_sane_info = nfct_help_data(ct); struct nf_conntrack_expect *exp; struct nf_conntrack_tuple *tuple; struct sane_request *req; struct sane_reply_net_start *reply; - ct_sane_info = &nfct_help(ct)->help.ct_sane_info; /* Until there's been traffic both ways, don't look in packets. */ if (ctinfo != IP_CT_ESTABLISHED && ctinfo != IP_CT_ESTABLISHED_REPLY) @@ -163,7 +162,6 @@ out: } static struct nf_conntrack_helper sane[MAX_PORTS][2] __read_mostly; -static char sane_names[MAX_PORTS][2][sizeof("sane-65535")] __read_mostly; static const struct nf_conntrack_expect_policy sane_exp_policy = { .max_expected = 1, @@ -190,7 +188,6 @@ static void nf_conntrack_sane_fini(void) static int __init nf_conntrack_sane_init(void) { int i, j = -1, ret = 0; - char *tmpname; sane_buffer = kmalloc(65536, GFP_KERNEL); if (!sane_buffer) @@ -205,17 +202,16 @@ static int __init nf_conntrack_sane_init(void) sane[i][0].tuple.src.l3num = PF_INET; sane[i][1].tuple.src.l3num = PF_INET6; for (j = 0; j < 2; j++) { + sane[i][j].data_len = sizeof(struct nf_ct_sane_master); sane[i][j].tuple.src.u.tcp.port = htons(ports[i]); sane[i][j].tuple.dst.protonum = IPPROTO_TCP; sane[i][j].expect_policy = &sane_exp_policy; sane[i][j].me = THIS_MODULE; sane[i][j].help = help; - tmpname = &sane_names[i][j][0]; if (ports[i] == SANE_PORT) - sprintf(tmpname, "sane"); + sprintf(sane[i][j].name, "sane"); else - sprintf(tmpname, "sane-%d", ports[i]); - sane[i][j].name = tmpname; + sprintf(sane[i][j].name, "sane-%d", ports[i]); pr_debug("nf_ct_sane: registering helper for pf: %d " "port: %d\n", diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c index 93faf6a3a637..758a1bacc126 100644 --- a/net/netfilter/nf_conntrack_sip.c +++ b/net/netfilter/nf_conntrack_sip.c @@ -1075,12 +1075,12 @@ static int process_invite_response(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); if ((code >= 
100 && code <= 199) || (code >= 200 && code <= 299)) return process_sdp(skb, dataoff, dptr, datalen, cseq); - else if (help->help.ct_sip_info.invite_cseq == cseq) + else if (ct_sip_info->invite_cseq == cseq) flush_expectations(ct, true); return NF_ACCEPT; } @@ -1091,12 +1091,12 @@ static int process_update_response(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); if ((code >= 100 && code <= 199) || (code >= 200 && code <= 299)) return process_sdp(skb, dataoff, dptr, datalen, cseq); - else if (help->help.ct_sip_info.invite_cseq == cseq) + else if (ct_sip_info->invite_cseq == cseq) flush_expectations(ct, true); return NF_ACCEPT; } @@ -1107,12 +1107,12 @@ static int process_prack_response(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); if ((code >= 100 && code <= 199) || (code >= 200 && code <= 299)) return process_sdp(skb, dataoff, dptr, datalen, cseq); - else if (help->help.ct_sip_info.invite_cseq == cseq) + else if (ct_sip_info->invite_cseq == cseq) flush_expectations(ct, true); return NF_ACCEPT; } @@ -1123,13 +1123,13 @@ static int process_invite_request(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); unsigned int ret; flush_expectations(ct, true); ret = process_sdp(skb, dataoff, dptr, datalen, cseq); if (ret == NF_ACCEPT) - help->help.ct_sip_info.invite_cseq = cseq; + ct_sip_info->invite_cseq = cseq; return ret; } @@ -1154,7 +1154,7 @@ static int process_register_request(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); unsigned int matchoff, matchlen; struct nf_conntrack_expect *exp; @@ -1235,7 +1235,7 @@ static int process_register_request(struct sk_buff *skb, unsigned int dataoff, store_cseq: if (ret == NF_ACCEPT) - help->help.ct_sip_info.register_cseq = cseq; + ct_sip_info->register_cseq = cseq; return ret; } @@ -1245,7 +1245,7 @@ static int process_register_response(struct sk_buff *skb, unsigned int dataoff, { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - struct nf_conn_help *help = nfct_help(ct); + struct nf_ct_sip_master *ct_sip_info = nfct_help_data(ct); enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); union nf_inet_addr addr; __be16 port; @@ -1262,7 +1262,7 @@ static int process_register_response(struct sk_buff *skb, unsigned int dataoff, * responses, so we store the sequence number of the last valid * request and compare it here. 
*/ - if (help->help.ct_sip_info.register_cseq != cseq) + if (ct_sip_info->register_cseq != cseq) return NF_ACCEPT; if (code >= 100 && code <= 199) @@ -1556,7 +1556,6 @@ static void nf_conntrack_sip_fini(void) static int __init nf_conntrack_sip_init(void) { int i, j, ret; - char *tmpname; if (ports_c == 0) ports[ports_c++] = SIP_PORT; @@ -1579,17 +1578,16 @@ static int __init nf_conntrack_sip_init(void) sip[i][3].help = sip_help_tcp; for (j = 0; j < ARRAY_SIZE(sip[i]); j++) { + sip[i][j].data_len = sizeof(struct nf_ct_sip_master); sip[i][j].tuple.src.u.udp.port = htons(ports[i]); sip[i][j].expect_policy = sip_exp_policy; sip[i][j].expect_class_max = SIP_EXPECT_MAX; sip[i][j].me = THIS_MODULE; - tmpname = &sip_names[i][j][0]; if (ports[i] == SIP_PORT) - sprintf(tmpname, "sip"); + sprintf(sip_names[i][j], "sip"); else - sprintf(tmpname, "sip-%u", i); - sip[i][j].name = tmpname; + sprintf(sip_names[i][j], "sip-%u", i); pr_debug("port #%u: %u\n", i, ports[i]); diff --git a/net/netfilter/nf_conntrack_tftp.c b/net/netfilter/nf_conntrack_tftp.c index 75466fd72f4f..81fc61c05263 100644 --- a/net/netfilter/nf_conntrack_tftp.c +++ b/net/netfilter/nf_conntrack_tftp.c @@ -92,7 +92,6 @@ static int tftp_help(struct sk_buff *skb, } static struct nf_conntrack_helper tftp[MAX_PORTS][2] __read_mostly; -static char tftp_names[MAX_PORTS][2][sizeof("tftp-65535")] __read_mostly; static const struct nf_conntrack_expect_policy tftp_exp_policy = { .max_expected = 1, @@ -112,7 +111,6 @@ static void nf_conntrack_tftp_fini(void) static int __init nf_conntrack_tftp_init(void) { int i, j, ret; - char *tmpname; if (ports_c == 0) ports[ports_c++] = TFTP_PORT; @@ -129,12 +127,10 @@ static int __init nf_conntrack_tftp_init(void) tftp[i][j].me = THIS_MODULE; tftp[i][j].help = tftp_help; - tmpname = &tftp_names[i][j][0]; if (ports[i] == TFTP_PORT) - sprintf(tmpname, "tftp"); + sprintf(tftp[i][j].name, "tftp"); else - sprintf(tmpname, "tftp-%u", i); - tftp[i][j].name = tmpname; + sprintf(tftp[i][j].name, "tftp-%u", i); ret = nf_conntrack_helper_register(&tftp[i][j]); if (ret) { diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c index 791d56bbd74a..a26503342e71 100644 --- a/net/netfilter/nfnetlink.c +++ b/net/netfilter/nfnetlink.c @@ -39,6 +39,15 @@ static char __initdata nfversion[] = "0.30"; static const struct nfnetlink_subsystem __rcu *subsys_table[NFNL_SUBSYS_COUNT]; static DEFINE_MUTEX(nfnl_mutex); +static const int nfnl_group2type[NFNLGRP_MAX+1] = { + [NFNLGRP_CONNTRACK_NEW] = NFNL_SUBSYS_CTNETLINK, + [NFNLGRP_CONNTRACK_UPDATE] = NFNL_SUBSYS_CTNETLINK, + [NFNLGRP_CONNTRACK_DESTROY] = NFNL_SUBSYS_CTNETLINK, + [NFNLGRP_CONNTRACK_EXP_NEW] = NFNL_SUBSYS_CTNETLINK_EXP, + [NFNLGRP_CONNTRACK_EXP_UPDATE] = NFNL_SUBSYS_CTNETLINK_EXP, + [NFNLGRP_CONNTRACK_EXP_DESTROY] = NFNL_SUBSYS_CTNETLINK_EXP, +}; + void nfnl_lock(void) { mutex_lock(&nfnl_mutex); @@ -186,9 +195,11 @@ replay: lockdep_is_held(&nfnl_mutex)) != ss || nfnetlink_find_client(type, ss) != nc) err = -EAGAIN; - else + else if (nc->call) err = nc->call(net->nfnl, skb, nlh, (const struct nlattr **)cda); + else + err = -EINVAL; nfnl_unlock(); } if (err == -EAGAIN) @@ -202,12 +213,35 @@ static void nfnetlink_rcv(struct sk_buff *skb) netlink_rcv_skb(skb, &nfnetlink_rcv_msg); } +#ifdef CONFIG_MODULES +static void nfnetlink_bind(int group) +{ + const struct nfnetlink_subsystem *ss; + int type = nfnl_group2type[group]; + + rcu_read_lock(); + ss = nfnetlink_get_subsys(type); + if (!ss) { + rcu_read_unlock(); + request_module("nfnetlink-subsys-%d", type); + return; 
+ } + rcu_read_unlock(); +} +#endif + static int __net_init nfnetlink_net_init(struct net *net) { struct sock *nfnl; + struct netlink_kernel_cfg cfg = { + .groups = NFNLGRP_MAX, + .input = nfnetlink_rcv, +#ifdef CONFIG_MODULES + .bind = nfnetlink_bind, +#endif + }; - nfnl = netlink_kernel_create(net, NETLINK_NETFILTER, NFNLGRP_MAX, - nfnetlink_rcv, NULL, THIS_MODULE); + nfnl = netlink_kernel_create(net, NETLINK_NETFILTER, THIS_MODULE, &cfg); if (!nfnl) return -ENOMEM; net->nfnl_stash = nfnl; diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c new file mode 100644 index 000000000000..d6836193d479 --- /dev/null +++ b/net/netfilter/nfnetlink_cthelper.c @@ -0,0 +1,672 @@ +/* + * (C) 2012 Pablo Neira Ayuso <pablo@netfilter.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation (or any later at your option). + * + * This software has been sponsored by Vyatta Inc. <http://www.vyatta.com> + */ +#include <linux/init.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/skbuff.h> +#include <linux/netlink.h> +#include <linux/rculist.h> +#include <linux/slab.h> +#include <linux/types.h> +#include <linux/list.h> +#include <linux/errno.h> +#include <net/netlink.h> +#include <net/sock.h> + +#include <net/netfilter/nf_conntrack_helper.h> +#include <net/netfilter/nf_conntrack_expect.h> +#include <net/netfilter/nf_conntrack_ecache.h> + +#include <linux/netfilter/nfnetlink.h> +#include <linux/netfilter/nfnetlink_conntrack.h> +#include <linux/netfilter/nfnetlink_cthelper.h> + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); +MODULE_DESCRIPTION("nfnl_cthelper: User-space connection tracking helpers"); + +static int +nfnl_userspace_cthelper(struct sk_buff *skb, unsigned int protoff, + struct nf_conn *ct, enum ip_conntrack_info ctinfo) +{ + const struct nf_conn_help *help; + struct nf_conntrack_helper *helper; + + help = nfct_help(ct); + if (help == NULL) + return NF_DROP; + + /* rcu_read_lock()ed by nf_hook_slow */ + helper = rcu_dereference(help->helper); + if (helper == NULL) + return NF_DROP; + + /* This is an user-space helper not yet configured, skip. */ + if ((helper->flags & + (NF_CT_HELPER_F_USERSPACE | NF_CT_HELPER_F_CONFIGURED)) == + NF_CT_HELPER_F_USERSPACE) + return NF_ACCEPT; + + /* If the user-space helper is not available, don't block traffic. 
*/ + return NF_QUEUE_NR(helper->queue_num) | NF_VERDICT_FLAG_QUEUE_BYPASS; +} + +static const struct nla_policy nfnl_cthelper_tuple_pol[NFCTH_TUPLE_MAX+1] = { + [NFCTH_TUPLE_L3PROTONUM] = { .type = NLA_U16, }, + [NFCTH_TUPLE_L4PROTONUM] = { .type = NLA_U8, }, +}; + +static int +nfnl_cthelper_parse_tuple(struct nf_conntrack_tuple *tuple, + const struct nlattr *attr) +{ + struct nlattr *tb[NFCTH_TUPLE_MAX+1]; + + nla_parse_nested(tb, NFCTH_TUPLE_MAX, attr, nfnl_cthelper_tuple_pol); + + if (!tb[NFCTH_TUPLE_L3PROTONUM] || !tb[NFCTH_TUPLE_L4PROTONUM]) + return -EINVAL; + + tuple->src.l3num = ntohs(nla_get_u16(tb[NFCTH_TUPLE_L3PROTONUM])); + tuple->dst.protonum = nla_get_u8(tb[NFCTH_TUPLE_L4PROTONUM]); + + return 0; +} + +static int +nfnl_cthelper_from_nlattr(struct nlattr *attr, struct nf_conn *ct) +{ + const struct nf_conn_help *help = nfct_help(ct); + + if (help->helper->data_len == 0) + return -EINVAL; + + memcpy(&help->data, nla_data(attr), help->helper->data_len); + return 0; +} + +static int +nfnl_cthelper_to_nlattr(struct sk_buff *skb, const struct nf_conn *ct) +{ + const struct nf_conn_help *help = nfct_help(ct); + + if (help->helper->data_len && + nla_put(skb, CTA_HELP_INFO, help->helper->data_len, &help->data)) + goto nla_put_failure; + + return 0; + +nla_put_failure: + return -ENOSPC; +} + +static const struct nla_policy nfnl_cthelper_expect_pol[NFCTH_POLICY_MAX+1] = { + [NFCTH_POLICY_NAME] = { .type = NLA_NUL_STRING, + .len = NF_CT_HELPER_NAME_LEN-1 }, + [NFCTH_POLICY_EXPECT_MAX] = { .type = NLA_U32, }, + [NFCTH_POLICY_EXPECT_TIMEOUT] = { .type = NLA_U32, }, +}; + +static int +nfnl_cthelper_expect_policy(struct nf_conntrack_expect_policy *expect_policy, + const struct nlattr *attr) +{ + struct nlattr *tb[NFCTH_POLICY_MAX+1]; + + nla_parse_nested(tb, NFCTH_POLICY_MAX, attr, nfnl_cthelper_expect_pol); + + if (!tb[NFCTH_POLICY_NAME] || + !tb[NFCTH_POLICY_EXPECT_MAX] || + !tb[NFCTH_POLICY_EXPECT_TIMEOUT]) + return -EINVAL; + + strncpy(expect_policy->name, + nla_data(tb[NFCTH_POLICY_NAME]), NF_CT_HELPER_NAME_LEN); + expect_policy->max_expected = + ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_MAX])); + expect_policy->timeout = + ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_TIMEOUT])); + + return 0; +} + +static const struct nla_policy +nfnl_cthelper_expect_policy_set[NFCTH_POLICY_SET_MAX+1] = { + [NFCTH_POLICY_SET_NUM] = { .type = NLA_U32, }, +}; + +static int +nfnl_cthelper_parse_expect_policy(struct nf_conntrack_helper *helper, + const struct nlattr *attr) +{ + int i, ret; + struct nf_conntrack_expect_policy *expect_policy; + struct nlattr *tb[NFCTH_POLICY_SET_MAX+1]; + + nla_parse_nested(tb, NFCTH_POLICY_SET_MAX, attr, + nfnl_cthelper_expect_policy_set); + + if (!tb[NFCTH_POLICY_SET_NUM]) + return -EINVAL; + + helper->expect_class_max = + ntohl(nla_get_be32(tb[NFCTH_POLICY_SET_NUM])); + + if (helper->expect_class_max != 0 && + helper->expect_class_max > NF_CT_MAX_EXPECT_CLASSES) + return -EOVERFLOW; + + expect_policy = kzalloc(sizeof(struct nf_conntrack_expect_policy) * + helper->expect_class_max, GFP_KERNEL); + if (expect_policy == NULL) + return -ENOMEM; + + for (i=0; i<helper->expect_class_max; i++) { + if (!tb[NFCTH_POLICY_SET+i]) + goto err; + + ret = nfnl_cthelper_expect_policy(&expect_policy[i], + tb[NFCTH_POLICY_SET+i]); + if (ret < 0) + goto err; + } + helper->expect_policy = expect_policy; + return 0; +err: + kfree(expect_policy); + return -EINVAL; +} + +static int +nfnl_cthelper_create(const struct nlattr * const tb[], + struct nf_conntrack_tuple *tuple) +{ + struct 
nf_conntrack_helper *helper; + int ret; + + if (!tb[NFCTH_TUPLE] || !tb[NFCTH_POLICY] || !tb[NFCTH_PRIV_DATA_LEN]) + return -EINVAL; + + helper = kzalloc(sizeof(struct nf_conntrack_helper), GFP_KERNEL); + if (helper == NULL) + return -ENOMEM; + + ret = nfnl_cthelper_parse_expect_policy(helper, tb[NFCTH_POLICY]); + if (ret < 0) + goto err; + + strncpy(helper->name, nla_data(tb[NFCTH_NAME]), NF_CT_HELPER_NAME_LEN); + helper->data_len = ntohl(nla_get_be32(tb[NFCTH_PRIV_DATA_LEN])); + helper->flags |= NF_CT_HELPER_F_USERSPACE; + memcpy(&helper->tuple, tuple, sizeof(struct nf_conntrack_tuple)); + + helper->me = THIS_MODULE; + helper->help = nfnl_userspace_cthelper; + helper->from_nlattr = nfnl_cthelper_from_nlattr; + helper->to_nlattr = nfnl_cthelper_to_nlattr; + + /* Default to queue number zero, this can be updated at any time. */ + if (tb[NFCTH_QUEUE_NUM]) + helper->queue_num = ntohl(nla_get_be32(tb[NFCTH_QUEUE_NUM])); + + if (tb[NFCTH_STATUS]) { + int status = ntohl(nla_get_be32(tb[NFCTH_STATUS])); + + switch(status) { + case NFCT_HELPER_STATUS_ENABLED: + helper->flags |= NF_CT_HELPER_F_CONFIGURED; + break; + case NFCT_HELPER_STATUS_DISABLED: + helper->flags &= ~NF_CT_HELPER_F_CONFIGURED; + break; + } + } + + ret = nf_conntrack_helper_register(helper); + if (ret < 0) + goto err; + + return 0; +err: + kfree(helper); + return ret; +} + +static int +nfnl_cthelper_update(const struct nlattr * const tb[], + struct nf_conntrack_helper *helper) +{ + int ret; + + if (tb[NFCTH_PRIV_DATA_LEN]) + return -EBUSY; + + if (tb[NFCTH_POLICY]) { + ret = nfnl_cthelper_parse_expect_policy(helper, + tb[NFCTH_POLICY]); + if (ret < 0) + return ret; + } + if (tb[NFCTH_QUEUE_NUM]) + helper->queue_num = ntohl(nla_get_be32(tb[NFCTH_QUEUE_NUM])); + + if (tb[NFCTH_STATUS]) { + int status = ntohl(nla_get_be32(tb[NFCTH_STATUS])); + + switch(status) { + case NFCT_HELPER_STATUS_ENABLED: + helper->flags |= NF_CT_HELPER_F_CONFIGURED; + break; + case NFCT_HELPER_STATUS_DISABLED: + helper->flags &= ~NF_CT_HELPER_F_CONFIGURED; + break; + } + } + return 0; +} + +static int +nfnl_cthelper_new(struct sock *nfnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, const struct nlattr * const tb[]) +{ + const char *helper_name; + struct nf_conntrack_helper *cur, *helper = NULL; + struct nf_conntrack_tuple tuple; + struct hlist_node *n; + int ret = 0, i; + + if (!tb[NFCTH_NAME] || !tb[NFCTH_TUPLE]) + return -EINVAL; + + helper_name = nla_data(tb[NFCTH_NAME]); + + ret = nfnl_cthelper_parse_tuple(&tuple, tb[NFCTH_TUPLE]); + if (ret < 0) + return ret; + + rcu_read_lock(); + for (i = 0; i < nf_ct_helper_hsize && !helper; i++) { + hlist_for_each_entry_rcu(cur, n, &nf_ct_helper_hash[i], hnode) { + + /* skip non-userspace conntrack helpers. 
*/ + if (!(cur->flags & NF_CT_HELPER_F_USERSPACE)) + continue; + + if (strncmp(cur->name, helper_name, + NF_CT_HELPER_NAME_LEN) != 0) + continue; + + if ((tuple.src.l3num != cur->tuple.src.l3num || + tuple.dst.protonum != cur->tuple.dst.protonum)) + continue; + + if (nlh->nlmsg_flags & NLM_F_EXCL) { + ret = -EEXIST; + goto err; + } + helper = cur; + break; + } + } + rcu_read_unlock(); + + if (helper == NULL) + ret = nfnl_cthelper_create(tb, &tuple); + else + ret = nfnl_cthelper_update(tb, helper); + + return ret; +err: + rcu_read_unlock(); + return ret; +} + +static int +nfnl_cthelper_dump_tuple(struct sk_buff *skb, + struct nf_conntrack_helper *helper) +{ + struct nlattr *nest_parms; + + nest_parms = nla_nest_start(skb, NFCTH_TUPLE | NLA_F_NESTED); + if (nest_parms == NULL) + goto nla_put_failure; + + if (nla_put_be16(skb, NFCTH_TUPLE_L3PROTONUM, + htons(helper->tuple.src.l3num))) + goto nla_put_failure; + + if (nla_put_u8(skb, NFCTH_TUPLE_L4PROTONUM, helper->tuple.dst.protonum)) + goto nla_put_failure; + + nla_nest_end(skb, nest_parms); + return 0; + +nla_put_failure: + return -1; +} + +static int +nfnl_cthelper_dump_policy(struct sk_buff *skb, + struct nf_conntrack_helper *helper) +{ + int i; + struct nlattr *nest_parms1, *nest_parms2; + + nest_parms1 = nla_nest_start(skb, NFCTH_POLICY | NLA_F_NESTED); + if (nest_parms1 == NULL) + goto nla_put_failure; + + if (nla_put_be32(skb, NFCTH_POLICY_SET_NUM, + htonl(helper->expect_class_max))) + goto nla_put_failure; + + for (i=0; i<helper->expect_class_max; i++) { + nest_parms2 = nla_nest_start(skb, + (NFCTH_POLICY_SET+i) | NLA_F_NESTED); + if (nest_parms2 == NULL) + goto nla_put_failure; + + if (nla_put_string(skb, NFCTH_POLICY_NAME, + helper->expect_policy[i].name)) + goto nla_put_failure; + + if (nla_put_be32(skb, NFCTH_POLICY_EXPECT_MAX, + htonl(helper->expect_policy[i].max_expected))) + goto nla_put_failure; + + if (nla_put_be32(skb, NFCTH_POLICY_EXPECT_TIMEOUT, + htonl(helper->expect_policy[i].timeout))) + goto nla_put_failure; + + nla_nest_end(skb, nest_parms2); + } + nla_nest_end(skb, nest_parms1); + return 0; + +nla_put_failure: + return -1; +} + +static int +nfnl_cthelper_fill_info(struct sk_buff *skb, u32 pid, u32 seq, u32 type, + int event, struct nf_conntrack_helper *helper) +{ + struct nlmsghdr *nlh; + struct nfgenmsg *nfmsg; + unsigned int flags = pid ? 
NLM_F_MULTI : 0; + int status; + + event |= NFNL_SUBSYS_CTHELPER << 8; + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*nfmsg), flags); + if (nlh == NULL) + goto nlmsg_failure; + + nfmsg = nlmsg_data(nlh); + nfmsg->nfgen_family = AF_UNSPEC; + nfmsg->version = NFNETLINK_V0; + nfmsg->res_id = 0; + + if (nla_put_string(skb, NFCTH_NAME, helper->name)) + goto nla_put_failure; + + if (nla_put_be32(skb, NFCTH_QUEUE_NUM, htonl(helper->queue_num))) + goto nla_put_failure; + + if (nfnl_cthelper_dump_tuple(skb, helper) < 0) + goto nla_put_failure; + + if (nfnl_cthelper_dump_policy(skb, helper) < 0) + goto nla_put_failure; + + if (nla_put_be32(skb, NFCTH_PRIV_DATA_LEN, htonl(helper->data_len))) + goto nla_put_failure; + + if (helper->flags & NF_CT_HELPER_F_CONFIGURED) + status = NFCT_HELPER_STATUS_ENABLED; + else + status = NFCT_HELPER_STATUS_DISABLED; + + if (nla_put_be32(skb, NFCTH_STATUS, htonl(status))) + goto nla_put_failure; + + nlmsg_end(skb, nlh); + return skb->len; + +nlmsg_failure: +nla_put_failure: + nlmsg_cancel(skb, nlh); + return -1; +} + +static int +nfnl_cthelper_dump_table(struct sk_buff *skb, struct netlink_callback *cb) +{ + struct nf_conntrack_helper *cur, *last; + struct hlist_node *n; + + rcu_read_lock(); + last = (struct nf_conntrack_helper *)cb->args[1]; + for (; cb->args[0] < nf_ct_helper_hsize; cb->args[0]++) { +restart: + hlist_for_each_entry_rcu(cur, n, + &nf_ct_helper_hash[cb->args[0]], hnode) { + + /* skip non-userspace conntrack helpers. */ + if (!(cur->flags & NF_CT_HELPER_F_USERSPACE)) + continue; + + if (cb->args[1]) { + if (cur != last) + continue; + cb->args[1] = 0; + } + if (nfnl_cthelper_fill_info(skb, + NETLINK_CB(cb->skb).pid, + cb->nlh->nlmsg_seq, + NFNL_MSG_TYPE(cb->nlh->nlmsg_type), + NFNL_MSG_CTHELPER_NEW, cur) < 0) { + cb->args[1] = (unsigned long)cur; + goto out; + } + } + } + if (cb->args[1]) { + cb->args[1] = 0; + goto restart; + } +out: + rcu_read_unlock(); + return skb->len; +} + +static int +nfnl_cthelper_get(struct sock *nfnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, const struct nlattr * const tb[]) +{ + int ret = -ENOENT, i; + struct nf_conntrack_helper *cur; + struct hlist_node *n; + struct sk_buff *skb2; + char *helper_name = NULL; + struct nf_conntrack_tuple tuple; + bool tuple_set = false; + + if (nlh->nlmsg_flags & NLM_F_DUMP) { + struct netlink_dump_control c = { + .dump = nfnl_cthelper_dump_table, + }; + return netlink_dump_start(nfnl, skb, nlh, &c); + } + + if (tb[NFCTH_NAME]) + helper_name = nla_data(tb[NFCTH_NAME]); + + if (tb[NFCTH_TUPLE]) { + ret = nfnl_cthelper_parse_tuple(&tuple, tb[NFCTH_TUPLE]); + if (ret < 0) + return ret; + + tuple_set = true; + } + + for (i = 0; i < nf_ct_helper_hsize; i++) { + hlist_for_each_entry_rcu(cur, n, &nf_ct_helper_hash[i], hnode) { + + /* skip non-userspace conntrack helpers. 
*/ + if (!(cur->flags & NF_CT_HELPER_F_USERSPACE)) + continue; + + if (helper_name && strncmp(cur->name, helper_name, + NF_CT_HELPER_NAME_LEN) != 0) { + continue; + } + if (tuple_set && + (tuple.src.l3num != cur->tuple.src.l3num || + tuple.dst.protonum != cur->tuple.dst.protonum)) + continue; + + skb2 = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (skb2 == NULL) { + ret = -ENOMEM; + break; + } + + ret = nfnl_cthelper_fill_info(skb2, NETLINK_CB(skb).pid, + nlh->nlmsg_seq, + NFNL_MSG_TYPE(nlh->nlmsg_type), + NFNL_MSG_CTHELPER_NEW, cur); + if (ret <= 0) { + kfree_skb(skb2); + break; + } + + ret = netlink_unicast(nfnl, skb2, NETLINK_CB(skb).pid, + MSG_DONTWAIT); + if (ret > 0) + ret = 0; + + /* this avoids a loop in nfnetlink. */ + return ret == -EAGAIN ? -ENOBUFS : ret; + } + } + return ret; +} + +static int +nfnl_cthelper_del(struct sock *nfnl, struct sk_buff *skb, + const struct nlmsghdr *nlh, const struct nlattr * const tb[]) +{ + char *helper_name = NULL; + struct nf_conntrack_helper *cur; + struct hlist_node *n, *tmp; + struct nf_conntrack_tuple tuple; + bool tuple_set = false, found = false; + int i, j = 0, ret; + + if (tb[NFCTH_NAME]) + helper_name = nla_data(tb[NFCTH_NAME]); + + if (tb[NFCTH_TUPLE]) { + ret = nfnl_cthelper_parse_tuple(&tuple, tb[NFCTH_TUPLE]); + if (ret < 0) + return ret; + + tuple_set = true; + } + + for (i = 0; i < nf_ct_helper_hsize; i++) { + hlist_for_each_entry_safe(cur, n, tmp, &nf_ct_helper_hash[i], + hnode) { + /* skip non-userspace conntrack helpers. */ + if (!(cur->flags & NF_CT_HELPER_F_USERSPACE)) + continue; + + j++; + + if (helper_name && strncmp(cur->name, helper_name, + NF_CT_HELPER_NAME_LEN) != 0) { + continue; + } + if (tuple_set && + (tuple.src.l3num != cur->tuple.src.l3num || + tuple.dst.protonum != cur->tuple.dst.protonum)) + continue; + + found = true; + nf_conntrack_helper_unregister(cur); + } + } + /* Make sure we return success if we flush and there is no helpers */ + return (found || j == 0) ? 0 : -ENOENT; +} + +static const struct nla_policy nfnl_cthelper_policy[NFCTH_MAX+1] = { + [NFCTH_NAME] = { .type = NLA_NUL_STRING, + .len = NF_CT_HELPER_NAME_LEN-1 }, + [NFCTH_QUEUE_NUM] = { .type = NLA_U32, }, +}; + +static const struct nfnl_callback nfnl_cthelper_cb[NFNL_MSG_CTHELPER_MAX] = { + [NFNL_MSG_CTHELPER_NEW] = { .call = nfnl_cthelper_new, + .attr_count = NFCTH_MAX, + .policy = nfnl_cthelper_policy }, + [NFNL_MSG_CTHELPER_GET] = { .call = nfnl_cthelper_get, + .attr_count = NFCTH_MAX, + .policy = nfnl_cthelper_policy }, + [NFNL_MSG_CTHELPER_DEL] = { .call = nfnl_cthelper_del, + .attr_count = NFCTH_MAX, + .policy = nfnl_cthelper_policy }, +}; + +static const struct nfnetlink_subsystem nfnl_cthelper_subsys = { + .name = "cthelper", + .subsys_id = NFNL_SUBSYS_CTHELPER, + .cb_count = NFNL_MSG_CTHELPER_MAX, + .cb = nfnl_cthelper_cb, +}; + +MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTHELPER); + +static int __init nfnl_cthelper_init(void) +{ + int ret; + + ret = nfnetlink_subsys_register(&nfnl_cthelper_subsys); + if (ret < 0) { + pr_err("nfnl_cthelper: cannot register with nfnetlink.\n"); + goto err_out; + } + return 0; +err_out: + return ret; +} + +static void __exit nfnl_cthelper_exit(void) +{ + struct nf_conntrack_helper *cur; + struct hlist_node *n, *tmp; + int i; + + nfnetlink_subsys_unregister(&nfnl_cthelper_subsys); + + for (i=0; i<nf_ct_helper_hsize; i++) { + hlist_for_each_entry_safe(cur, n, tmp, &nf_ct_helper_hash[i], + hnode) { + /* skip non-userspace conntrack helpers. 
*/ + if (!(cur->flags & NF_CT_HELPER_F_USERSPACE)) + continue; + + nf_conntrack_helper_unregister(cur); + } + } +} + +module_init(nfnl_cthelper_init); +module_exit(nfnl_cthelper_exit); diff --git a/net/netfilter/nfnetlink_cttimeout.c b/net/netfilter/nfnetlink_cttimeout.c index 3e655288d1d6..cdecbc8fe965 100644 --- a/net/netfilter/nfnetlink_cttimeout.c +++ b/net/netfilter/nfnetlink_cttimeout.c @@ -49,8 +49,9 @@ static const struct nla_policy cttimeout_nla_policy[CTA_TIMEOUT_MAX+1] = { static int ctnl_timeout_parse_policy(struct ctnl_timeout *timeout, - struct nf_conntrack_l4proto *l4proto, - const struct nlattr *attr) + struct nf_conntrack_l4proto *l4proto, + struct net *net, + const struct nlattr *attr) { int ret = 0; @@ -60,7 +61,8 @@ ctnl_timeout_parse_policy(struct ctnl_timeout *timeout, nla_parse_nested(tb, l4proto->ctnl_timeout.nlattr_max, attr, l4proto->ctnl_timeout.nla_policy); - ret = l4proto->ctnl_timeout.nlattr_to_obj(tb, &timeout->data); + ret = l4proto->ctnl_timeout.nlattr_to_obj(tb, net, + &timeout->data); } return ret; } @@ -74,6 +76,7 @@ cttimeout_new_timeout(struct sock *ctnl, struct sk_buff *skb, __u8 l4num; struct nf_conntrack_l4proto *l4proto; struct ctnl_timeout *timeout, *matching = NULL; + struct net *net = sock_net(skb->sk); char *name; int ret; @@ -117,7 +120,7 @@ cttimeout_new_timeout(struct sock *ctnl, struct sk_buff *skb, goto err_proto_put; } - ret = ctnl_timeout_parse_policy(matching, l4proto, + ret = ctnl_timeout_parse_policy(matching, l4proto, net, cda[CTA_TIMEOUT_DATA]); return ret; } @@ -132,7 +135,7 @@ cttimeout_new_timeout(struct sock *ctnl, struct sk_buff *skb, goto err_proto_put; } - ret = ctnl_timeout_parse_policy(timeout, l4proto, + ret = ctnl_timeout_parse_policy(timeout, l4proto, net, cda[CTA_TIMEOUT_DATA]); if (ret < 0) goto err; diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c index 3c3cfc0cc9b5..169ab59ed9d4 100644 --- a/net/netfilter/nfnetlink_log.c +++ b/net/netfilter/nfnetlink_log.c @@ -326,18 +326,20 @@ __nfulnl_send(struct nfulnl_instance *inst) { int status = -1; - if (inst->qlen > 1) - NLMSG_PUT(inst->skb, 0, 0, - NLMSG_DONE, - sizeof(struct nfgenmsg)); - + if (inst->qlen > 1) { + struct nlmsghdr *nlh = nlmsg_put(inst->skb, 0, 0, + NLMSG_DONE, + sizeof(struct nfgenmsg), + 0); + if (!nlh) + goto out; + } status = nfnetlink_unicast(inst->skb, &init_net, inst->peer_pid, MSG_DONTWAIT); inst->qlen = 0; inst->skb = NULL; - -nlmsg_failure: +out: return status; } @@ -380,10 +382,12 @@ __build_packet_message(struct nfulnl_instance *inst, struct nfgenmsg *nfmsg; sk_buff_data_t old_tail = inst->skb->tail; - nlh = NLMSG_PUT(inst->skb, 0, 0, + nlh = nlmsg_put(inst->skb, 0, 0, NFNL_SUBSYS_ULOG << 8 | NFULNL_MSG_PACKET, - sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); + sizeof(struct nfgenmsg), 0); + if (!nlh) + return -1; + nfmsg = nlmsg_data(nlh); nfmsg->nfgen_family = pf; nfmsg->version = NFNETLINK_V0; nfmsg->res_id = htons(inst->group_num); @@ -526,7 +530,7 @@ __build_packet_message(struct nfulnl_instance *inst, if (skb_tailroom(inst->skb) < nla_total_size(data_len)) { printk(KERN_WARNING "nfnetlink_log: no tailroom!\n"); - goto nlmsg_failure; + return -1; } nla = (struct nlattr *)skb_put(inst->skb, nla_total_size(data_len)); @@ -540,7 +544,6 @@ __build_packet_message(struct nfulnl_instance *inst, nlh->nlmsg_len = inst->skb->tail - old_tail; return 0; -nlmsg_failure: nla_put_failure: PRINTR(KERN_ERR "nfnetlink_log: error creating log nlmsg\n"); return -1; @@ -745,7 +748,7 @@ nfulnl_recv_config(struct sock *ctnl, struct 
sk_buff *skb, const struct nlmsghdr *nlh, const struct nlattr * const nfula[]) { - struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); + struct nfgenmsg *nfmsg = nlmsg_data(nlh); u_int16_t group_num = ntohs(nfmsg->res_id); struct nfulnl_instance *inst; struct nfulnl_msg_config_cmd *cmd = NULL; diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue_core.c index 4162437b8361..c0496a55ad0c 100644 --- a/net/netfilter/nfnetlink_queue.c +++ b/net/netfilter/nfnetlink_queue_core.c @@ -30,6 +30,7 @@ #include <linux/list.h> #include <net/sock.h> #include <net/netfilter/nf_queue.h> +#include <net/netfilter/nfnetlink_queue.h> #include <linux/atomic.h> @@ -52,6 +53,7 @@ struct nfqnl_instance { u_int16_t queue_num; /* number of this queue */ u_int8_t copy_mode; + u_int32_t flags; /* Set using NFQA_CFG_FLAGS */ /* * Following fields are dirtied for each queued packet, * keep them in same cache line if possible. @@ -232,6 +234,8 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, struct sk_buff *entskb = entry->skb; struct net_device *indev; struct net_device *outdev; + struct nf_conn *ct = NULL; + enum ip_conntrack_info uninitialized_var(ctinfo); size = NLMSG_SPACE(sizeof(struct nfgenmsg)) + nla_total_size(sizeof(struct nfqnl_msg_packet_hdr)) @@ -265,16 +269,22 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, break; } + if (queue->flags & NFQA_CFG_F_CONNTRACK) + ct = nfqnl_ct_get(entskb, &size, &ctinfo); skb = alloc_skb(size, GFP_ATOMIC); if (!skb) - goto nlmsg_failure; + return NULL; old_tail = skb->tail; - nlh = NLMSG_PUT(skb, 0, 0, + nlh = nlmsg_put(skb, 0, 0, NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET, - sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); + sizeof(struct nfgenmsg), 0); + if (!nlh) { + kfree_skb(skb); + return NULL; + } + nfmsg = nlmsg_data(nlh); nfmsg->nfgen_family = entry->pf; nfmsg->version = NFNETLINK_V0; nfmsg->res_id = htons(queue->queue_num); @@ -377,7 +387,8 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, if (skb_tailroom(skb) < nla_total_size(data_len)) { printk(KERN_WARNING "nf_queue: no tailroom!\n"); - goto nlmsg_failure; + kfree_skb(skb); + return NULL; } nla = (struct nlattr *)skb_put(skb, nla_total_size(data_len)); @@ -388,10 +399,12 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, BUG(); } + if (ct && nfqnl_ct_put(skb, ct, ctinfo) < 0) + goto nla_put_failure; + nlh->nlmsg_len = skb->tail - old_tail; return skb; -nlmsg_failure: nla_put_failure: if (skb) kfree_skb(skb); @@ -406,6 +419,7 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum) struct nfqnl_instance *queue; int err = -ENOBUFS; __be32 *packet_id_ptr; + int failopen = 0; /* rcu_read_lock()ed by nf_hook_slow() */ queue = instance_lookup(queuenum); @@ -431,9 +445,14 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum) goto err_out_free_nskb; } if (queue->queue_total >= queue->queue_maxlen) { - queue->queue_dropped++; - net_warn_ratelimited("nf_queue: full at %d entries, dropping packets(s)\n", - queue->queue_total); + if (queue->flags & NFQA_CFG_F_FAIL_OPEN) { + failopen = 1; + err = 0; + } else { + queue->queue_dropped++; + net_warn_ratelimited("nf_queue: full at %d entries, dropping packets(s)\n", + queue->queue_total); + } goto err_out_free_nskb; } entry->id = ++queue->id_sequence; @@ -455,17 +474,17 @@ err_out_free_nskb: kfree_skb(nskb); err_out_unlock: spin_unlock_bh(&queue->lock); + if (failopen) + nf_reinject(entry, NF_ACCEPT); err_out: return err; } static int -nfqnl_mangle(void *data, int data_len, struct 
nf_queue_entry *e) +nfqnl_mangle(void *data, int data_len, struct nf_queue_entry *e, int diff) { struct sk_buff *nskb; - int diff; - diff = data_len - e->skb->len; if (diff < 0) { if (pskb_trim(e->skb, data_len)) return -ENOMEM; @@ -623,6 +642,7 @@ static const struct nla_policy nfqa_verdict_policy[NFQA_MAX+1] = { [NFQA_VERDICT_HDR] = { .len = sizeof(struct nfqnl_msg_verdict_hdr) }, [NFQA_MARK] = { .type = NLA_U32 }, [NFQA_PAYLOAD] = { .type = NLA_UNSPEC }, + [NFQA_CT] = { .type = NLA_UNSPEC }, }; static const struct nla_policy nfqa_verdict_batch_policy[NFQA_MAX+1] = { @@ -670,7 +690,7 @@ nfqnl_recv_verdict_batch(struct sock *ctnl, struct sk_buff *skb, const struct nlmsghdr *nlh, const struct nlattr * const nfqa[]) { - struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); + struct nfgenmsg *nfmsg = nlmsg_data(nlh); struct nf_queue_entry *entry, *tmp; unsigned int verdict, maxid; struct nfqnl_msg_verdict_hdr *vhdr; @@ -716,13 +736,15 @@ nfqnl_recv_verdict(struct sock *ctnl, struct sk_buff *skb, const struct nlmsghdr *nlh, const struct nlattr * const nfqa[]) { - struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); + struct nfgenmsg *nfmsg = nlmsg_data(nlh); u_int16_t queue_num = ntohs(nfmsg->res_id); struct nfqnl_msg_verdict_hdr *vhdr; struct nfqnl_instance *queue; unsigned int verdict; struct nf_queue_entry *entry; + enum ip_conntrack_info uninitialized_var(ctinfo); + struct nf_conn *ct = NULL; queue = instance_lookup(queue_num); if (!queue) @@ -741,11 +763,22 @@ nfqnl_recv_verdict(struct sock *ctnl, struct sk_buff *skb, if (entry == NULL) return -ENOENT; + rcu_read_lock(); + if (nfqa[NFQA_CT] && (queue->flags & NFQA_CFG_F_CONNTRACK)) + ct = nfqnl_ct_parse(entry->skb, nfqa[NFQA_CT], &ctinfo); + if (nfqa[NFQA_PAYLOAD]) { + u16 payload_len = nla_len(nfqa[NFQA_PAYLOAD]); + int diff = payload_len - entry->skb->len; + if (nfqnl_mangle(nla_data(nfqa[NFQA_PAYLOAD]), - nla_len(nfqa[NFQA_PAYLOAD]), entry) < 0) + payload_len, entry, diff) < 0) verdict = NF_DROP; + + if (ct) + nfqnl_ct_seq_adjust(skb, ct, ctinfo, diff); } + rcu_read_unlock(); if (nfqa[NFQA_MARK]) entry->skb->mark = ntohl(nla_get_be32(nfqa[NFQA_MARK])); @@ -777,7 +810,7 @@ nfqnl_recv_config(struct sock *ctnl, struct sk_buff *skb, const struct nlmsghdr *nlh, const struct nlattr * const nfqa[]) { - struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); + struct nfgenmsg *nfmsg = nlmsg_data(nlh); u_int16_t queue_num = ntohs(nfmsg->res_id); struct nfqnl_instance *queue; struct nfqnl_msg_config_cmd *cmd = NULL; @@ -858,6 +891,36 @@ nfqnl_recv_config(struct sock *ctnl, struct sk_buff *skb, spin_unlock_bh(&queue->lock); } + if (nfqa[NFQA_CFG_FLAGS]) { + __u32 flags, mask; + + if (!queue) { + ret = -ENODEV; + goto err_out_unlock; + } + + if (!nfqa[NFQA_CFG_MASK]) { + /* A mask is needed to specify which flags are being + * changed. 
+ */ + ret = -EINVAL; + goto err_out_unlock; + } + + flags = ntohl(nla_get_be32(nfqa[NFQA_CFG_FLAGS])); + mask = ntohl(nla_get_be32(nfqa[NFQA_CFG_MASK])); + + if (flags >= NFQA_CFG_F_MAX) { + ret = -EOPNOTSUPP; + goto err_out_unlock; + } + + spin_lock_bh(&queue->lock); + queue->flags &= ~mask; + queue->flags |= flags & mask; + spin_unlock_bh(&queue->lock); + } + err_out_unlock: rcu_read_unlock(); return ret; diff --git a/net/netfilter/nfnetlink_queue_ct.c b/net/netfilter/nfnetlink_queue_ct.c new file mode 100644 index 000000000000..ab61d66bc0b9 --- /dev/null +++ b/net/netfilter/nfnetlink_queue_ct.c @@ -0,0 +1,98 @@ +/* + * (C) 2012 by Pablo Neira Ayuso <pablo@netfilter.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include <linux/skbuff.h> +#include <linux/netfilter.h> +#include <linux/netfilter/nfnetlink.h> +#include <linux/netfilter/nfnetlink_queue.h> +#include <net/netfilter/nf_conntrack.h> +#include <net/netfilter/nfnetlink_queue.h> + +struct nf_conn *nfqnl_ct_get(struct sk_buff *entskb, size_t *size, + enum ip_conntrack_info *ctinfo) +{ + struct nfq_ct_hook *nfq_ct; + struct nf_conn *ct; + + /* rcu_read_lock()ed by __nf_queue already. */ + nfq_ct = rcu_dereference(nfq_ct_hook); + if (nfq_ct == NULL) + return NULL; + + ct = nf_ct_get(entskb, ctinfo); + if (ct) { + if (!nf_ct_is_untracked(ct)) + *size += nfq_ct->build_size(ct); + else + ct = NULL; + } + return ct; +} + +struct nf_conn * +nfqnl_ct_parse(const struct sk_buff *skb, const struct nlattr *attr, + enum ip_conntrack_info *ctinfo) +{ + struct nfq_ct_hook *nfq_ct; + struct nf_conn *ct; + + /* rcu_read_lock()ed by __nf_queue already. 
*/ + nfq_ct = rcu_dereference(nfq_ct_hook); + if (nfq_ct == NULL) + return NULL; + + ct = nf_ct_get(skb, ctinfo); + if (ct && !nf_ct_is_untracked(ct)) + nfq_ct->parse(attr, ct); + + return ct; +} + +int nfqnl_ct_put(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo) +{ + struct nfq_ct_hook *nfq_ct; + struct nlattr *nest_parms; + u_int32_t tmp; + + nfq_ct = rcu_dereference(nfq_ct_hook); + if (nfq_ct == NULL) + return 0; + + nest_parms = nla_nest_start(skb, NFQA_CT | NLA_F_NESTED); + if (!nest_parms) + goto nla_put_failure; + + if (nfq_ct->build(skb, ct) < 0) + goto nla_put_failure; + + nla_nest_end(skb, nest_parms); + + tmp = ctinfo; + if (nla_put_be32(skb, NFQA_CT_INFO, htonl(tmp))) + goto nla_put_failure; + + return 0; + +nla_put_failure: + return -1; +} + +void nfqnl_ct_seq_adjust(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo, int diff) +{ + struct nfq_ct_nat_hook *nfq_nat_ct; + + nfq_nat_ct = rcu_dereference(nfq_ct_nat_hook); + if (nfq_nat_ct == NULL) + return; + + if ((ct->status & IPS_NAT_MASK) && diff) + nfq_nat_ct->seq_adjust(skb, ct, ctinfo, diff); +} diff --git a/net/netfilter/xt_CT.c b/net/netfilter/xt_CT.c index a51de9b052be..116018560c60 100644 --- a/net/netfilter/xt_CT.c +++ b/net/netfilter/xt_CT.c @@ -112,6 +112,8 @@ static int xt_ct_tg_check_v0(const struct xt_tgchk_param *par) goto err3; if (info->helper[0]) { + struct nf_conntrack_helper *helper; + ret = -ENOENT; proto = xt_ct_find_proto(par); if (!proto) { @@ -120,19 +122,21 @@ static int xt_ct_tg_check_v0(const struct xt_tgchk_param *par) goto err3; } - ret = -ENOMEM; - help = nf_ct_helper_ext_add(ct, GFP_KERNEL); - if (help == NULL) - goto err3; - ret = -ENOENT; - help->helper = nf_conntrack_helper_try_module_get(info->helper, - par->family, - proto); - if (help->helper == NULL) { + helper = nf_conntrack_helper_try_module_get(info->helper, + par->family, + proto); + if (helper == NULL) { pr_info("No such helper \"%s\"\n", info->helper); goto err3; } + + ret = -ENOMEM; + help = nf_ct_helper_ext_add(ct, helper, GFP_KERNEL); + if (help == NULL) + goto err3; + + help->helper = helper; } __set_bit(IPS_TEMPLATE_BIT, &ct->status); @@ -202,6 +206,8 @@ static int xt_ct_tg_check_v1(const struct xt_tgchk_param *par) goto err3; if (info->helper[0]) { + struct nf_conntrack_helper *helper; + ret = -ENOENT; proto = xt_ct_find_proto(par); if (!proto) { @@ -210,19 +216,21 @@ static int xt_ct_tg_check_v1(const struct xt_tgchk_param *par) goto err3; } - ret = -ENOMEM; - help = nf_ct_helper_ext_add(ct, GFP_KERNEL); - if (help == NULL) - goto err3; - ret = -ENOENT; - help->helper = nf_conntrack_helper_try_module_get(info->helper, - par->family, - proto); - if (help->helper == NULL) { + helper = nf_conntrack_helper_try_module_get(info->helper, + par->family, + proto); + if (helper == NULL) { pr_info("No such helper \"%s\"\n", info->helper); goto err3; } + + ret = -ENOMEM; + help = nf_ct_helper_ext_add(ct, helper, GFP_KERNEL); + if (help == NULL) + goto err3; + + help->helper = helper; } #ifdef CONFIG_NF_CONNTRACK_TIMEOUT diff --git a/net/netfilter/xt_NFQUEUE.c b/net/netfilter/xt_NFQUEUE.c index 95237c89607a..7babe7d68716 100644 --- a/net/netfilter/xt_NFQUEUE.c +++ b/net/netfilter/xt_NFQUEUE.c @@ -41,26 +41,36 @@ nfqueue_tg(struct sk_buff *skb, const struct xt_action_param *par) static u32 hash_v4(const struct sk_buff *skb) { const struct iphdr *iph = ip_hdr(skb); - __be32 ipaddr; /* packets in either direction go into same queue */ - ipaddr = iph->saddr ^ iph->daddr; + if (iph->saddr < 
iph->daddr) + return jhash_3words((__force u32)iph->saddr, + (__force u32)iph->daddr, iph->protocol, jhash_initval); - return jhash_2words((__force u32)ipaddr, iph->protocol, jhash_initval); + return jhash_3words((__force u32)iph->daddr, + (__force u32)iph->saddr, iph->protocol, jhash_initval); } #if IS_ENABLED(CONFIG_IP6_NF_IPTABLES) static u32 hash_v6(const struct sk_buff *skb) { const struct ipv6hdr *ip6h = ipv6_hdr(skb); - __be32 addr[4]; + u32 a, b, c; + + if (ip6h->saddr.s6_addr32[3] < ip6h->daddr.s6_addr32[3]) { + a = (__force u32) ip6h->saddr.s6_addr32[3]; + b = (__force u32) ip6h->daddr.s6_addr32[3]; + } else { + b = (__force u32) ip6h->saddr.s6_addr32[3]; + a = (__force u32) ip6h->daddr.s6_addr32[3]; + } - addr[0] = ip6h->saddr.s6_addr32[0] ^ ip6h->daddr.s6_addr32[0]; - addr[1] = ip6h->saddr.s6_addr32[1] ^ ip6h->daddr.s6_addr32[1]; - addr[2] = ip6h->saddr.s6_addr32[2] ^ ip6h->daddr.s6_addr32[2]; - addr[3] = ip6h->saddr.s6_addr32[3] ^ ip6h->daddr.s6_addr32[3]; + if (ip6h->saddr.s6_addr32[1] < ip6h->daddr.s6_addr32[1]) + c = (__force u32) ip6h->saddr.s6_addr32[1]; + else + c = (__force u32) ip6h->daddr.s6_addr32[1]; - return jhash2((__force u32 *)addr, ARRAY_SIZE(addr), jhash_initval); + return jhash_3words(a, b, c, jhash_initval); } #endif diff --git a/net/netfilter/xt_TPROXY.c b/net/netfilter/xt_TPROXY.c index 146033a86de8..d7f195388f66 100644 --- a/net/netfilter/xt_TPROXY.c +++ b/net/netfilter/xt_TPROXY.c @@ -69,7 +69,7 @@ tproxy_laddr4(struct sk_buff *skb, __be32 user_laddr, __be32 daddr) } /** - * tproxy_handle_time_wait4() - handle IPv4 TCP TIME_WAIT reopen redirections + * tproxy_handle_time_wait4 - handle IPv4 TCP TIME_WAIT reopen redirections * @skb: The skb being processed. * @laddr: IPv4 address to redirect to or zero. * @lport: TCP port to redirect to or zero. @@ -220,7 +220,7 @@ tproxy_laddr6(struct sk_buff *skb, const struct in6_addr *user_laddr, } /** - * tproxy_handle_time_wait6() - handle IPv6 TCP TIME_WAIT reopen redirections + * tproxy_handle_time_wait6 - handle IPv6 TCP TIME_WAIT reopen redirections * @skb: The skb being processed. * @tproto: Transport protocol. * @thoff: Transport protocol header offset. 
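
For illustration only (this sketch is not part of the merged patch): the xt_NFQUEUE hunk above replaces the old XOR-based queue hash with one that feeds the two addresses to jhash_3words() in a canonical (sorted) order. The hash stays symmetric, so both directions of a flow still map to the same queue, but it avoids the information loss of XOR-ing saddr and daddr, which made unrelated address pairs with an equal XOR collide. A minimal userspace C sketch of the same idea; mix3() is a stand-in mixer, not the kernel's jhash_3words(), and the addresses are made-up example values.

#include <stdint.h>
#include <stdio.h>

/* stand-in mixer; the kernel code above uses jhash_3words() here */
static uint32_t mix3(uint32_t a, uint32_t b, uint32_t c)
{
        a ^= b + 0x9e3779b9u + (a << 6) + (a >> 2);
        a ^= c + 0x9e3779b9u + (a << 6) + (a >> 2);
        return a;
}

/* order the endpoints before mixing so A->B and B->A hash identically,
 * without collapsing distinct address pairs the way saddr ^ daddr does */
static uint32_t flow_hash(uint32_t saddr, uint32_t daddr, uint8_t proto)
{
        if (saddr < daddr)
                return mix3(saddr, daddr, proto);
        return mix3(daddr, saddr, proto);
}

int main(void)
{
        /* both directions of 10.0.0.1 <-> 10.0.0.2, protocol TCP (6) */
        printf("%u\n", flow_hash(0x0a000001u, 0x0a000002u, 6));
        printf("%u\n", flow_hash(0x0a000002u, 0x0a000001u, 6));
        return 0;
}
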
diff --git a/net/netfilter/xt_connlimit.c b/net/netfilter/xt_connlimit.c index c6d5a83450c9..70b5591a2586 100644 --- a/net/netfilter/xt_connlimit.c +++ b/net/netfilter/xt_connlimit.c @@ -274,38 +274,25 @@ static void connlimit_mt_destroy(const struct xt_mtdtor_param *par) kfree(info->data); } -static struct xt_match connlimit_mt_reg[] __read_mostly = { - { - .name = "connlimit", - .revision = 0, - .family = NFPROTO_UNSPEC, - .checkentry = connlimit_mt_check, - .match = connlimit_mt, - .matchsize = sizeof(struct xt_connlimit_info), - .destroy = connlimit_mt_destroy, - .me = THIS_MODULE, - }, - { - .name = "connlimit", - .revision = 1, - .family = NFPROTO_UNSPEC, - .checkentry = connlimit_mt_check, - .match = connlimit_mt, - .matchsize = sizeof(struct xt_connlimit_info), - .destroy = connlimit_mt_destroy, - .me = THIS_MODULE, - }, +static struct xt_match connlimit_mt_reg __read_mostly = { + .name = "connlimit", + .revision = 1, + .family = NFPROTO_UNSPEC, + .checkentry = connlimit_mt_check, + .match = connlimit_mt, + .matchsize = sizeof(struct xt_connlimit_info), + .destroy = connlimit_mt_destroy, + .me = THIS_MODULE, }; static int __init connlimit_mt_init(void) { - return xt_register_matches(connlimit_mt_reg, - ARRAY_SIZE(connlimit_mt_reg)); + return xt_register_match(&connlimit_mt_reg); } static void __exit connlimit_mt_exit(void) { - xt_unregister_matches(connlimit_mt_reg, ARRAY_SIZE(connlimit_mt_reg)); + xt_unregister_match(&connlimit_mt_reg); } module_init(connlimit_mt_init); diff --git a/net/netfilter/xt_recent.c b/net/netfilter/xt_recent.c index fc0d6dbe5d17..ae2ad1eec8d0 100644 --- a/net/netfilter/xt_recent.c +++ b/net/netfilter/xt_recent.c @@ -75,6 +75,7 @@ struct recent_entry { struct recent_table { struct list_head list; char name[XT_RECENT_NAME_LEN]; + union nf_inet_addr mask; unsigned int refcnt; unsigned int entries; struct list_head lru_list; @@ -228,10 +229,10 @@ recent_mt(const struct sk_buff *skb, struct xt_action_param *par) { struct net *net = dev_net(par->in ? par->in : par->out); struct recent_net *recent_net = recent_pernet(net); - const struct xt_recent_mtinfo *info = par->matchinfo; + const struct xt_recent_mtinfo_v1 *info = par->matchinfo; struct recent_table *t; struct recent_entry *e; - union nf_inet_addr addr = {}; + union nf_inet_addr addr = {}, addr_mask; u_int8_t ttl; bool ret = info->invert; @@ -261,12 +262,15 @@ recent_mt(const struct sk_buff *skb, struct xt_action_param *par) spin_lock_bh(&recent_lock); t = recent_table_lookup(recent_net, info->name); - e = recent_entry_lookup(t, &addr, par->family, + + nf_inet_addr_mask(&addr, &addr_mask, &t->mask); + + e = recent_entry_lookup(t, &addr_mask, par->family, (info->check_set & XT_RECENT_TTL) ? 
ttl : 0); if (e == NULL) { if (!(info->check_set & XT_RECENT_SET)) goto out; - e = recent_entry_init(t, &addr, par->family, ttl); + e = recent_entry_init(t, &addr_mask, par->family, ttl); if (e == NULL) par->hotdrop = true; ret = !ret; @@ -306,10 +310,10 @@ out: return ret; } -static int recent_mt_check(const struct xt_mtchk_param *par) +static int recent_mt_check(const struct xt_mtchk_param *par, + const struct xt_recent_mtinfo_v1 *info) { struct recent_net *recent_net = recent_pernet(par->net); - const struct xt_recent_mtinfo *info = par->matchinfo; struct recent_table *t; #ifdef CONFIG_PROC_FS struct proc_dir_entry *pde; @@ -361,6 +365,8 @@ static int recent_mt_check(const struct xt_mtchk_param *par) goto out; } t->refcnt = 1; + + memcpy(&t->mask, &info->mask, sizeof(t->mask)); strcpy(t->name, info->name); INIT_LIST_HEAD(&t->lru_list); for (i = 0; i < ip_list_hash_size; i++) @@ -385,10 +391,28 @@ out: return ret; } +static int recent_mt_check_v0(const struct xt_mtchk_param *par) +{ + const struct xt_recent_mtinfo_v0 *info_v0 = par->matchinfo; + struct xt_recent_mtinfo_v1 info_v1; + + /* Copy revision 0 structure to revision 1 */ + memcpy(&info_v1, info_v0, sizeof(struct xt_recent_mtinfo)); + /* Set default mask to ensure backward compatible behaviour */ + memset(info_v1.mask.all, 0xFF, sizeof(info_v1.mask.all)); + + return recent_mt_check(par, &info_v1); +} + +static int recent_mt_check_v1(const struct xt_mtchk_param *par) +{ + return recent_mt_check(par, par->matchinfo); +} + static void recent_mt_destroy(const struct xt_mtdtor_param *par) { struct recent_net *recent_net = recent_pernet(par->net); - const struct xt_recent_mtinfo *info = par->matchinfo; + const struct xt_recent_mtinfo_v1 *info = par->matchinfo; struct recent_table *t; mutex_lock(&recent_mutex); @@ -625,7 +649,7 @@ static struct xt_match recent_mt_reg[] __read_mostly = { .family = NFPROTO_IPV4, .match = recent_mt, .matchsize = sizeof(struct xt_recent_mtinfo), - .checkentry = recent_mt_check, + .checkentry = recent_mt_check_v0, .destroy = recent_mt_destroy, .me = THIS_MODULE, }, @@ -635,10 +659,30 @@ static struct xt_match recent_mt_reg[] __read_mostly = { .family = NFPROTO_IPV6, .match = recent_mt, .matchsize = sizeof(struct xt_recent_mtinfo), - .checkentry = recent_mt_check, + .checkentry = recent_mt_check_v0, + .destroy = recent_mt_destroy, + .me = THIS_MODULE, + }, + { + .name = "recent", + .revision = 1, + .family = NFPROTO_IPV4, + .match = recent_mt, + .matchsize = sizeof(struct xt_recent_mtinfo_v1), + .checkentry = recent_mt_check_v1, .destroy = recent_mt_destroy, .me = THIS_MODULE, }, + { + .name = "recent", + .revision = 1, + .family = NFPROTO_IPV6, + .match = recent_mt, + .matchsize = sizeof(struct xt_recent_mtinfo_v1), + .checkentry = recent_mt_check_v1, + .destroy = recent_mt_destroy, + .me = THIS_MODULE, + } }; static int __init recent_mt_init(void) diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index b3025a603d56..5463969da45b 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -80,6 +80,7 @@ struct netlink_sock { struct mutex *cb_mutex; struct mutex cb_def_mutex; void (*netlink_rcv)(struct sk_buff *skb); + void (*netlink_bind)(int group); struct module *module; }; @@ -124,6 +125,7 @@ struct netlink_table { unsigned int groups; struct mutex *cb_mutex; struct module *module; + void (*bind)(int group); int registered; }; @@ -444,6 +446,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, struct module *module = NULL; struct mutex *cb_mutex; 
struct netlink_sock *nlk; + void (*bind)(int group); int err = 0; sock->state = SS_UNCONNECTED; @@ -468,6 +471,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, else err = -EPROTONOSUPPORT; cb_mutex = nl_table[protocol].cb_mutex; + bind = nl_table[protocol].bind; netlink_unlock_table(); if (err < 0) @@ -483,6 +487,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol, nlk = nlk_sk(sock->sk); nlk->module = module; + nlk->netlink_bind = bind; out: return err; @@ -683,6 +688,15 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr, netlink_update_listeners(sk); netlink_table_ungrab(); + if (nlk->netlink_bind && nlk->groups[0]) { + int i; + + for (i=0; i<nlk->ngroups; i++) { + if (test_bit(i, nlk->groups)) + nlk->netlink_bind(i); + } + } + return 0; } @@ -1239,6 +1253,10 @@ static int netlink_setsockopt(struct socket *sock, int level, int optname, netlink_update_socket_mc(nlk, val, optname == NETLINK_ADD_MEMBERSHIP); netlink_table_ungrab(); + + if (nlk->netlink_bind) + nlk->netlink_bind(val); + err = 0; break; } @@ -1503,14 +1521,16 @@ static void netlink_data_ready(struct sock *sk, int len) */ struct sock * -netlink_kernel_create(struct net *net, int unit, unsigned int groups, - void (*input)(struct sk_buff *skb), - struct mutex *cb_mutex, struct module *module) +netlink_kernel_create(struct net *net, int unit, + struct module *module, + struct netlink_kernel_cfg *cfg) { struct socket *sock; struct sock *sk; struct netlink_sock *nlk; struct listeners *listeners = NULL; + struct mutex *cb_mutex = cfg ? cfg->cb_mutex : NULL; + unsigned int groups; BUG_ON(!nl_table); @@ -1532,16 +1552,18 @@ netlink_kernel_create(struct net *net, int unit, unsigned int groups, sk = sock->sk; sk_change_net(sk, net); - if (groups < 32) + if (!cfg || cfg->groups < 32) groups = 32; + else + groups = cfg->groups; listeners = kzalloc(sizeof(*listeners) + NLGRPSZ(groups), GFP_KERNEL); if (!listeners) goto out_sock_release; sk->sk_data_ready = netlink_data_ready; - if (input) - nlk_sk(sk)->netlink_rcv = input; + if (cfg && cfg->input) + nlk_sk(sk)->netlink_rcv = cfg->input; if (netlink_insert(sk, net, 0)) goto out_sock_release; @@ -1555,6 +1577,7 @@ netlink_kernel_create(struct net *net, int unit, unsigned int groups, rcu_assign_pointer(nl_table[unit].listeners, listeners); nl_table[unit].cb_mutex = cb_mutex; nl_table[unit].module = module; + nl_table[unit].bind = cfg ? 
cfg->bind : NULL; nl_table[unit].registered = 1; } else { kfree(listeners); diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c index 2cc7c1ee7690..fda497412fc3 100644 --- a/net/netlink/genetlink.c +++ b/net/netlink/genetlink.c @@ -33,7 +33,7 @@ void genl_unlock(void) } EXPORT_SYMBOL(genl_unlock); -#ifdef CONFIG_PROVE_LOCKING +#ifdef CONFIG_LOCKDEP int lockdep_genl_is_held(void) { return lockdep_is_held(&genl_mutex); @@ -504,7 +504,7 @@ EXPORT_SYMBOL(genl_unregister_family); * @pid: netlink pid the message is addressed to * @seq: sequence number (usually the one of the sender) * @family: generic netlink family - * @flags netlink message flags + * @flags: netlink message flags * @cmd: generic netlink command * * Returns pointer to user specific header @@ -915,10 +915,14 @@ static struct genl_multicast_group notify_grp = { static int __net_init genl_pernet_init(struct net *net) { + struct netlink_kernel_cfg cfg = { + .input = genl_rcv, + .cb_mutex = &genl_mutex, + }; + /* we'll bump the group number right afterwards */ - net->genl_sock = netlink_kernel_create(net, NETLINK_GENERIC, 0, - genl_rcv, &genl_mutex, - THIS_MODULE); + net->genl_sock = netlink_kernel_create(net, NETLINK_GENERIC, + THIS_MODULE, &cfg); if (!net->genl_sock && net_eq(net, &init_net)) panic("GENL: Cannot initialize generic netlink\n"); diff --git a/net/nfc/core.c b/net/nfc/core.c index 9f6ce011d35d..ff749794bc5b 100644 --- a/net/nfc/core.c +++ b/net/nfc/core.c @@ -29,6 +29,8 @@ #include <linux/slab.h> #include <linux/nfc.h> +#include <net/genetlink.h> + #include "nfc.h" #define VERSION "0.1" @@ -121,14 +123,14 @@ error: * The device remains polling for targets until a target is found or * the nfc_stop_poll function is called. */ -int nfc_start_poll(struct nfc_dev *dev, u32 protocols) +int nfc_start_poll(struct nfc_dev *dev, u32 im_protocols, u32 tm_protocols) { int rc; - pr_debug("dev_name=%s protocols=0x%x\n", - dev_name(&dev->dev), protocols); + pr_debug("dev_name %s initiator protocols 0x%x target protocols 0x%x\n", + dev_name(&dev->dev), im_protocols, tm_protocols); - if (!protocols) + if (!im_protocols && !tm_protocols) return -EINVAL; device_lock(&dev->dev); @@ -143,9 +145,11 @@ int nfc_start_poll(struct nfc_dev *dev, u32 protocols) goto error; } - rc = dev->ops->start_poll(dev, protocols); - if (!rc) + rc = dev->ops->start_poll(dev, im_protocols, tm_protocols); + if (!rc) { dev->polling = true; + dev->rf_mode = NFC_RF_NONE; + } error: device_unlock(&dev->dev); @@ -235,8 +239,10 @@ int nfc_dep_link_up(struct nfc_dev *dev, int target_index, u8 comm_mode) } rc = dev->ops->dep_link_up(dev, target, comm_mode, gb, gb_len); - if (!rc) + if (!rc) { dev->active_target = target; + dev->rf_mode = NFC_RF_INITIATOR; + } error: device_unlock(&dev->dev); @@ -264,11 +270,6 @@ int nfc_dep_link_down(struct nfc_dev *dev) goto error; } - if (dev->dep_rf_mode == NFC_RF_TARGET) { - rc = -EOPNOTSUPP; - goto error; - } - rc = dev->ops->dep_link_down(dev); if (!rc) { dev->dep_link_up = false; @@ -286,7 +287,6 @@ int nfc_dep_link_is_up(struct nfc_dev *dev, u32 target_idx, u8 comm_mode, u8 rf_mode) { dev->dep_link_up = true; - dev->dep_rf_mode = rf_mode; nfc_llcp_mac_is_up(dev, target_idx, comm_mode, rf_mode); @@ -330,6 +330,7 @@ int nfc_activate_target(struct nfc_dev *dev, u32 target_idx, u32 protocol) rc = dev->ops->activate_target(dev, target, protocol); if (!rc) { dev->active_target = target; + dev->rf_mode = NFC_RF_INITIATOR; if (dev->ops->check_presence) mod_timer(&dev->check_pres_timer, jiffies + @@ -409,27 +410,30 @@ int 
nfc_data_exchange(struct nfc_dev *dev, u32 target_idx, struct sk_buff *skb, goto error; } - if (dev->active_target == NULL) { - rc = -ENOTCONN; - kfree_skb(skb); - goto error; - } + if (dev->rf_mode == NFC_RF_INITIATOR && dev->active_target != NULL) { + if (dev->active_target->idx != target_idx) { + rc = -EADDRNOTAVAIL; + kfree_skb(skb); + goto error; + } - if (dev->active_target->idx != target_idx) { - rc = -EADDRNOTAVAIL; + if (dev->ops->check_presence) + del_timer_sync(&dev->check_pres_timer); + + rc = dev->ops->im_transceive(dev, dev->active_target, skb, cb, + cb_context); + + if (!rc && dev->ops->check_presence) + mod_timer(&dev->check_pres_timer, jiffies + + msecs_to_jiffies(NFC_CHECK_PRES_FREQ_MS)); + } else if (dev->rf_mode == NFC_RF_TARGET && dev->ops->tm_send != NULL) { + rc = dev->ops->tm_send(dev, skb); + } else { + rc = -ENOTCONN; kfree_skb(skb); goto error; } - if (dev->ops->check_presence) - del_timer_sync(&dev->check_pres_timer); - - rc = dev->ops->data_exchange(dev, dev->active_target, skb, cb, - cb_context); - - if (!rc && dev->ops->check_presence) - mod_timer(&dev->check_pres_timer, jiffies + - msecs_to_jiffies(NFC_CHECK_PRES_FREQ_MS)); error: device_unlock(&dev->dev); @@ -447,6 +451,63 @@ int nfc_set_remote_general_bytes(struct nfc_dev *dev, u8 *gb, u8 gb_len) } EXPORT_SYMBOL(nfc_set_remote_general_bytes); +u8 *nfc_get_local_general_bytes(struct nfc_dev *dev, size_t *gb_len) +{ + pr_debug("dev_name=%s\n", dev_name(&dev->dev)); + + return nfc_llcp_general_bytes(dev, gb_len); +} +EXPORT_SYMBOL(nfc_get_local_general_bytes); + +int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb) +{ + /* Only LLCP target mode for now */ + if (dev->dep_link_up == false) { + kfree_skb(skb); + return -ENOLINK; + } + + return nfc_llcp_data_received(dev, skb); +} +EXPORT_SYMBOL(nfc_tm_data_received); + +int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode, + u8 *gb, size_t gb_len) +{ + int rc; + + device_lock(&dev->dev); + + dev->polling = false; + + if (gb != NULL) { + rc = nfc_set_remote_general_bytes(dev, gb, gb_len); + if (rc < 0) + goto out; + } + + dev->rf_mode = NFC_RF_TARGET; + + if (protocol == NFC_PROTO_NFC_DEP_MASK) + nfc_dep_link_is_up(dev, 0, comm_mode, NFC_RF_TARGET); + + rc = nfc_genl_tm_activated(dev, protocol); + +out: + device_unlock(&dev->dev); + + return rc; +} +EXPORT_SYMBOL(nfc_tm_activated); + +int nfc_tm_deactivated(struct nfc_dev *dev) +{ + dev->dep_link_up = false; + + return nfc_genl_tm_deactivated(dev); +} +EXPORT_SYMBOL(nfc_tm_deactivated); + /** * nfc_alloc_send_skb - allocate a skb for data exchange responses * @@ -501,6 +562,8 @@ EXPORT_SYMBOL(nfc_alloc_recv_skb); * The device driver must call this function when one or many nfc targets * are found. After calling this function, the device driver must stop * polling for targets. + * NOTE: This function can be called with targets=NULL and n_targets=0 to + * notify a driver error, meaning that the polling operation cannot complete. * IMPORTANT: this function must not be called from an atomic context. * In addition, it must also not be called from a context that would prevent * the NFC Core to call other nfc ops entry point concurrently. 
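
For illustration only (not part of the merged patch): the nfc_tm_activated(), nfc_tm_data_received() and nfc_tm_deactivated() entry points added above are what a driver calls once its hardware has been activated as a target by a remote initiator. The sketch below assumes a hypothetical "acme" driver; only the nfc_tm_*() calls and NFC_PROTO_NFC_DEP_MASK come from the code above, the callback names and everything else are invented for the example.

#include <linux/skbuff.h>
#include <net/nfc/nfc.h>

/* Hypothetical driver-side sketch built on the nfc_tm_*() API above. */

/* Called from the driver's bottom half when the chip reports it was
 * activated in target mode; gb/gb_len are the general bytes received
 * from the initiator (may be NULL/0). */
static int acme_target_activated(struct nfc_dev *nfc_dev, u8 comm_mode,
                                 u8 *gb, size_t gb_len)
{
        /* The core records NFC_RF_TARGET as the RF mode and, for NFC-DEP,
         * brings the DEP link up so LLCP can run on top of it. */
        return nfc_tm_activated(nfc_dev, NFC_PROTO_NFC_DEP_MASK,
                                comm_mode, gb, gb_len);
}

/* Called for every frame received while operating as a target. */
static int acme_target_rx(struct nfc_dev *nfc_dev, struct sk_buff *skb)
{
        /* The core forwards the payload to LLCP, or frees the skb and
         * returns -ENOLINK if the DEP link is not up. */
        return nfc_tm_data_received(nfc_dev, skb);
}

/* Called when the remote initiator deactivates us. */
static int acme_target_deactivated(struct nfc_dev *nfc_dev)
{
        return nfc_tm_deactivated(nfc_dev);
}
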
@@ -512,23 +575,33 @@ int nfc_targets_found(struct nfc_dev *dev, pr_debug("dev_name=%s n_targets=%d\n", dev_name(&dev->dev), n_targets); - dev->polling = false; - for (i = 0; i < n_targets; i++) targets[i].idx = dev->target_next_idx++; device_lock(&dev->dev); + if (dev->polling == false) { + device_unlock(&dev->dev); + return 0; + } + + dev->polling = false; + dev->targets_generation++; kfree(dev->targets); - dev->targets = kmemdup(targets, n_targets * sizeof(struct nfc_target), - GFP_ATOMIC); + dev->targets = NULL; - if (!dev->targets) { - dev->n_targets = 0; - device_unlock(&dev->dev); - return -ENOMEM; + if (targets) { + dev->targets = kmemdup(targets, + n_targets * sizeof(struct nfc_target), + GFP_ATOMIC); + + if (!dev->targets) { + dev->n_targets = 0; + device_unlock(&dev->dev); + return -ENOMEM; + } } dev->n_targets = n_targets; @@ -592,6 +665,12 @@ int nfc_target_lost(struct nfc_dev *dev, u32 target_idx) } EXPORT_SYMBOL(nfc_target_lost); +inline void nfc_driver_failure(struct nfc_dev *dev, int err) +{ + nfc_targets_found(dev, NULL, 0); +} +EXPORT_SYMBOL(nfc_driver_failure); + static void nfc_release(struct device *d) { struct nfc_dev *dev = to_nfc_dev(d); @@ -678,7 +757,7 @@ struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops, struct nfc_dev *dev; if (!ops->start_poll || !ops->stop_poll || !ops->activate_target || - !ops->deactivate_target || !ops->data_exchange) + !ops->deactivate_target || !ops->im_transceive) return NULL; if (!supported_protocols) @@ -847,3 +926,5 @@ MODULE_AUTHOR("Lauro Ramos Venancio <lauro.venancio@openbossa.org>"); MODULE_DESCRIPTION("NFC Core ver " VERSION); MODULE_VERSION(VERSION); MODULE_LICENSE("GPL"); +MODULE_ALIAS_NETPROTO(PF_NFC); +MODULE_ALIAS_GENL_FAMILY(NFC_GENL_NAME); diff --git a/net/nfc/hci/command.c b/net/nfc/hci/command.c index 8729abf5f18b..46362ef979db 100644 --- a/net/nfc/hci/command.c +++ b/net/nfc/hci/command.c @@ -28,26 +28,14 @@ #include "hci.h" -static int nfc_hci_result_to_errno(u8 result) -{ - switch (result) { - case NFC_HCI_ANY_OK: - return 0; - case NFC_HCI_ANY_E_TIMEOUT: - return -ETIMEDOUT; - default: - return -1; - } -} - -static void nfc_hci_execute_cb(struct nfc_hci_dev *hdev, u8 result, +static void nfc_hci_execute_cb(struct nfc_hci_dev *hdev, int err, struct sk_buff *skb, void *cb_data) { struct hcp_exec_waiter *hcp_ew = (struct hcp_exec_waiter *)cb_data; - pr_debug("HCI Cmd completed with HCI result=%d\n", result); + pr_debug("HCI Cmd completed with result=%d\n", err); - hcp_ew->exec_result = nfc_hci_result_to_errno(result); + hcp_ew->exec_result = err; if (hcp_ew->exec_result == 0) hcp_ew->result_skb = skb; else @@ -311,9 +299,9 @@ int nfc_hci_disconnect_all_gates(struct nfc_hci_dev *hdev) } EXPORT_SYMBOL(nfc_hci_disconnect_all_gates); -int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate) +int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate, + u8 pipe) { - u8 pipe = NFC_HCI_INVALID_PIPE; bool pipe_created = false; int r; @@ -322,6 +310,9 @@ int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate) if (hdev->gate2pipe[dest_gate] != NFC_HCI_INVALID_PIPE) return -EADDRINUSE; + if (pipe != NFC_HCI_INVALID_PIPE) + goto pipe_is_open; + switch (dest_gate) { case NFC_HCI_LINK_MGMT_GATE: pipe = NFC_HCI_LINK_MGMT_PIPE; @@ -347,6 +338,7 @@ int nfc_hci_connect_gate(struct nfc_hci_dev *hdev, u8 dest_host, u8 dest_gate) return r; } +pipe_is_open: hdev->gate2pipe[dest_gate] = pipe; return 0; diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c index 
e1a640d2b588..1ac7b3fac6c9 100644 --- a/net/nfc/hci/core.c +++ b/net/nfc/hci/core.c @@ -32,6 +32,18 @@ /* Largest headroom needed for outgoing HCI commands */ #define HCI_CMDS_HEADROOM 1 +static int nfc_hci_result_to_errno(u8 result) +{ + switch (result) { + case NFC_HCI_ANY_OK: + return 0; + case NFC_HCI_ANY_E_TIMEOUT: + return -ETIME; + default: + return -1; + } +} + static void nfc_hci_msg_tx_work(struct work_struct *work) { struct nfc_hci_dev *hdev = container_of(work, struct nfc_hci_dev, @@ -46,7 +58,7 @@ static void nfc_hci_msg_tx_work(struct work_struct *work) if (timer_pending(&hdev->cmd_timer) == 0) { if (hdev->cmd_pending_msg->cb) hdev->cmd_pending_msg->cb(hdev, - NFC_HCI_ANY_E_TIMEOUT, + -ETIME, NULL, hdev-> cmd_pending_msg-> @@ -71,8 +83,7 @@ next_msg: kfree_skb(skb); skb_queue_purge(&msg->msg_frags); if (msg->cb) - msg->cb(hdev, NFC_HCI_ANY_E_NOK, NULL, - msg->cb_context); + msg->cb(hdev, r, NULL, msg->cb_context); kfree(msg); break; } @@ -116,20 +127,13 @@ static void nfc_hci_msg_rx_work(struct work_struct *work) } } -void nfc_hci_resp_received(struct nfc_hci_dev *hdev, u8 result, - struct sk_buff *skb) +static void __nfc_hci_cmd_completion(struct nfc_hci_dev *hdev, int err, + struct sk_buff *skb) { - mutex_lock(&hdev->msg_tx_mutex); - - if (hdev->cmd_pending_msg == NULL) { - kfree_skb(skb); - goto exit; - } - del_timer_sync(&hdev->cmd_timer); if (hdev->cmd_pending_msg->cb) - hdev->cmd_pending_msg->cb(hdev, result, skb, + hdev->cmd_pending_msg->cb(hdev, err, skb, hdev->cmd_pending_msg->cb_context); else kfree_skb(skb); @@ -138,6 +142,19 @@ void nfc_hci_resp_received(struct nfc_hci_dev *hdev, u8 result, hdev->cmd_pending_msg = NULL; queue_work(hdev->msg_tx_wq, &hdev->msg_tx_work); +} + +void nfc_hci_resp_received(struct nfc_hci_dev *hdev, u8 result, + struct sk_buff *skb) +{ + mutex_lock(&hdev->msg_tx_mutex); + + if (hdev->cmd_pending_msg == NULL) { + kfree_skb(skb); + goto exit; + } + + __nfc_hci_cmd_completion(hdev, nfc_hci_result_to_errno(result), skb); exit: mutex_unlock(&hdev->msg_tx_mutex); @@ -170,6 +187,7 @@ static int nfc_hci_target_discovered(struct nfc_hci_dev *hdev, u8 gate) struct nfc_target *targets; struct sk_buff *atqa_skb = NULL; struct sk_buff *sak_skb = NULL; + struct sk_buff *uid_skb = NULL; int r; pr_debug("from gate %d\n", gate); @@ -205,6 +223,19 @@ static int nfc_hci_target_discovered(struct nfc_hci_dev *hdev, u8 gate) targets->sens_res = be16_to_cpu(*(u16 *)atqa_skb->data); targets->sel_res = sak_skb->data[0]; + r = nfc_hci_get_param(hdev, NFC_HCI_RF_READER_A_GATE, + NFC_HCI_RF_READER_A_UID, &uid_skb); + if (r < 0) + goto exit; + + if (uid_skb->len == 0 || uid_skb->len > NFC_NFCID1_MAXSIZE) { + r = -EPROTO; + goto exit; + } + + memcpy(targets->nfcid1, uid_skb->data, uid_skb->len); + targets->nfcid1_len = uid_skb->len; + if (hdev->ops->complete_target_discovered) { r = hdev->ops->complete_target_discovered(hdev, gate, targets); @@ -213,7 +244,7 @@ static int nfc_hci_target_discovered(struct nfc_hci_dev *hdev, u8 gate) } break; case NFC_HCI_RF_READER_B_GATE: - targets->supported_protocols = NFC_PROTO_ISO14443_MASK; + targets->supported_protocols = NFC_PROTO_ISO14443_B_MASK; break; default: if (hdev->ops->target_from_gate) @@ -240,6 +271,7 @@ exit: kfree(targets); kfree_skb(atqa_skb); kfree_skb(sak_skb); + kfree_skb(uid_skb); return r; } @@ -298,15 +330,15 @@ static void nfc_hci_cmd_timeout(unsigned long data) } static int hci_dev_connect_gates(struct nfc_hci_dev *hdev, u8 gate_count, - u8 gates[]) + struct nfc_hci_gate *gates) { int r; - u8 *p = gates; 
while (gate_count--) { - r = nfc_hci_connect_gate(hdev, NFC_HCI_HOST_CONTROLLER_ID, *p); + r = nfc_hci_connect_gate(hdev, NFC_HCI_HOST_CONTROLLER_ID, + gates->gate, gates->pipe); if (r < 0) return r; - p++; + gates++; } return 0; @@ -316,14 +348,13 @@ static int hci_dev_session_init(struct nfc_hci_dev *hdev) { struct sk_buff *skb = NULL; int r; - u8 hci_gates[] = { /* NFC_HCI_ADMIN_GATE MUST be first */ - NFC_HCI_ADMIN_GATE, NFC_HCI_LOOPBACK_GATE, - NFC_HCI_ID_MGMT_GATE, NFC_HCI_LINK_MGMT_GATE, - NFC_HCI_RF_READER_B_GATE, NFC_HCI_RF_READER_A_GATE - }; + + if (hdev->init_data.gates[0].gate != NFC_HCI_ADMIN_GATE) + return -EPROTO; r = nfc_hci_connect_gate(hdev, NFC_HCI_HOST_CONTROLLER_ID, - NFC_HCI_ADMIN_GATE); + hdev->init_data.gates[0].gate, + hdev->init_data.gates[0].pipe); if (r < 0) goto exit; @@ -351,10 +382,6 @@ static int hci_dev_session_init(struct nfc_hci_dev *hdev) if (r < 0) goto exit; - r = hci_dev_connect_gates(hdev, sizeof(hci_gates), hci_gates); - if (r < 0) - goto disconnect_all; - r = hci_dev_connect_gates(hdev, hdev->init_data.gate_count, hdev->init_data.gates); if (r < 0) @@ -481,12 +508,13 @@ static int hci_dev_down(struct nfc_dev *nfc_dev) return 0; } -static int hci_start_poll(struct nfc_dev *nfc_dev, u32 protocols) +static int hci_start_poll(struct nfc_dev *nfc_dev, + u32 im_protocols, u32 tm_protocols) { struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev); if (hdev->ops->start_poll) - return hdev->ops->start_poll(hdev, protocols); + return hdev->ops->start_poll(hdev, im_protocols, tm_protocols); else return nfc_hci_send_event(hdev, NFC_HCI_RF_READER_A_GATE, NFC_HCI_EVT_READER_REQUESTED, NULL, 0); @@ -511,9 +539,9 @@ static void hci_deactivate_target(struct nfc_dev *nfc_dev, { } -static int hci_data_exchange(struct nfc_dev *nfc_dev, struct nfc_target *target, - struct sk_buff *skb, data_exchange_cb_t cb, - void *cb_context) +static int hci_transceive(struct nfc_dev *nfc_dev, struct nfc_target *target, + struct sk_buff *skb, data_exchange_cb_t cb, + void *cb_context) { struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev); int r; @@ -579,7 +607,7 @@ static struct nfc_ops hci_nfc_ops = { .stop_poll = hci_stop_poll, .activate_target = hci_activate_target, .deactivate_target = hci_deactivate_target, - .data_exchange = hci_data_exchange, + .im_transceive = hci_transceive, .check_presence = hci_check_presence, }; @@ -682,13 +710,12 @@ EXPORT_SYMBOL(nfc_hci_register_device); void nfc_hci_unregister_device(struct nfc_hci_dev *hdev) { - struct hci_msg *msg; + struct hci_msg *msg, *n; skb_queue_purge(&hdev->rx_hcp_frags); skb_queue_purge(&hdev->msg_rx_queue); - while ((msg = list_first_entry(&hdev->msg_tx_queue, struct hci_msg, - msg_l)) != NULL) { + list_for_each_entry_safe(msg, n, &hdev->msg_tx_queue, msg_l) { list_del(&msg->msg_l); skb_queue_purge(&msg->msg_frags); kfree(msg); @@ -716,6 +743,27 @@ void *nfc_hci_get_clientdata(struct nfc_hci_dev *hdev) } EXPORT_SYMBOL(nfc_hci_get_clientdata); +static void nfc_hci_failure(struct nfc_hci_dev *hdev, int err) +{ + mutex_lock(&hdev->msg_tx_mutex); + + if (hdev->cmd_pending_msg == NULL) { + nfc_driver_failure(hdev->ndev, err); + goto exit; + } + + __nfc_hci_cmd_completion(hdev, err, NULL); + +exit: + mutex_unlock(&hdev->msg_tx_mutex); +} + +void nfc_hci_driver_failure(struct nfc_hci_dev *hdev, int err) +{ + nfc_hci_failure(hdev, err); +} +EXPORT_SYMBOL(nfc_hci_driver_failure); + void nfc_hci_recv_frame(struct nfc_hci_dev *hdev, struct sk_buff *skb) { struct hcp_packet *packet; @@ -726,16 +774,6 @@ void nfc_hci_recv_frame(struct 
nfc_hci_dev *hdev, struct sk_buff *skb) struct sk_buff *frag_skb; int msg_len; - if (skb == NULL) { - /* TODO ELa: lower layer had permanent failure, need to - * propagate that up - */ - - skb_queue_purge(&hdev->rx_hcp_frags); - - return; - } - packet = (struct hcp_packet *)skb->data; if ((packet->header & ~NFC_HCI_FRAGMENT) == 0) { skb_queue_tail(&hdev->rx_hcp_frags, skb); @@ -756,9 +794,8 @@ void nfc_hci_recv_frame(struct nfc_hci_dev *hdev, struct sk_buff *skb) hcp_skb = nfc_alloc_recv_skb(NFC_HCI_HCP_PACKET_HEADER_LEN + msg_len, GFP_KERNEL); if (hcp_skb == NULL) { - /* TODO ELa: cannot deliver HCP message. How to - * propagate error up? - */ + nfc_hci_failure(hdev, -ENOMEM); + return; } *skb_put(hcp_skb, NFC_HCI_HCP_PACKET_HEADER_LEN) = pipe; diff --git a/net/nfc/hci/hci.h b/net/nfc/hci/hci.h index 45f2fe4fd486..fa9a21e92239 100644 --- a/net/nfc/hci/hci.h +++ b/net/nfc/hci/hci.h @@ -37,10 +37,11 @@ struct hcp_packet { /* * HCI command execution completion callback. - * result will be one of the HCI response codes. - * skb contains the response data and must be disposed. + * result will be a standard linux error (may be converted from HCI response) + * skb contains the response data and must be disposed, or may be NULL if + * an error occured */ -typedef void (*hci_cmd_cb_t) (struct nfc_hci_dev *hdev, u8 result, +typedef void (*hci_cmd_cb_t) (struct nfc_hci_dev *hdev, int result, struct sk_buff *skb, void *cb_data); struct hcp_exec_waiter { @@ -131,9 +132,4 @@ void nfc_hci_hcp_message_rx(struct nfc_hci_dev *hdev, u8 pipe, u8 type, #define NFC_HCI_ANY_E_REG_ACCESS_DENIED 0x0a #define NFC_HCI_ANY_E_PIPE_ACCESS_DENIED 0x0b -/* Pipes */ -#define NFC_HCI_INVALID_PIPE 0x80 -#define NFC_HCI_LINK_MGMT_PIPE 0x00 -#define NFC_HCI_ADMIN_PIPE 0x01 - #endif /* __LOCAL_HCI_H */ diff --git a/net/nfc/hci/hcp.c b/net/nfc/hci/hcp.c index 7212cf2c5785..f4dad1a89740 100644 --- a/net/nfc/hci/hcp.c +++ b/net/nfc/hci/hcp.c @@ -105,7 +105,7 @@ int nfc_hci_hcp_message_tx(struct nfc_hci_dev *hdev, u8 pipe, } mutex_lock(&hdev->msg_tx_mutex); - list_add_tail(&hdev->msg_tx_queue, &cmd->msg_l); + list_add_tail(&cmd->msg_l, &hdev->msg_tx_queue); mutex_unlock(&hdev->msg_tx_mutex); queue_work(hdev->msg_tx_wq, &hdev->msg_tx_work); diff --git a/net/nfc/hci/shdlc.c b/net/nfc/hci/shdlc.c index 5665dc6d893a..6f840c18c892 100644 --- a/net/nfc/hci/shdlc.c +++ b/net/nfc/hci/shdlc.c @@ -340,15 +340,6 @@ static void nfc_shdlc_connect_complete(struct nfc_shdlc *shdlc, int r) shdlc->state = SHDLC_CONNECTED; } else { shdlc->state = SHDLC_DISCONNECTED; - - /* - * TODO: Could it be possible that there are pending - * executing commands that are waiting for connect to complete - * before they can be carried? As connect is a blocking - * operation, it would require that the userspace process can - * send commands on the same device from a second thread before - * the device is up. I don't think that is possible, is it? - */ } shdlc->connect_result = r; @@ -413,12 +404,12 @@ static void nfc_shdlc_rcv_u_frame(struct nfc_shdlc *shdlc, r = nfc_shdlc_connect_send_ua(shdlc); nfc_shdlc_connect_complete(shdlc, r); } - } else if (shdlc->state > SHDLC_NEGOCIATING) { + } else if (shdlc->state == SHDLC_CONNECTED) { /* - * TODO: Chip wants to reset link - * send ua, empty skb lists, reset counters - * propagate info to HCI layer + * Chip wants to reset link. This is unexpected and + * unsupported. 
*/ + shdlc->hard_fault = -ECONNRESET; } break; case U_FRAME_UA: @@ -523,10 +514,6 @@ static void nfc_shdlc_handle_send_queue(struct nfc_shdlc *shdlc) r = shdlc->ops->xmit(shdlc, skb); if (r < 0) { - /* - * TODO: Cannot send, shdlc machine is dead, we - * must propagate the information up to HCI. - */ shdlc->hard_fault = r; break; } @@ -590,6 +577,11 @@ static void nfc_shdlc_sm_work(struct work_struct *work) skb_queue_purge(&shdlc->ack_pending_q); break; case SHDLC_CONNECTING: + if (shdlc->hard_fault) { + nfc_shdlc_connect_complete(shdlc, shdlc->hard_fault); + break; + } + if (shdlc->connect_tries++ < 5) r = nfc_shdlc_connect_initiate(shdlc); else @@ -610,6 +602,11 @@ static void nfc_shdlc_sm_work(struct work_struct *work) } nfc_shdlc_handle_rcv_queue(shdlc); + + if (shdlc->hard_fault) { + nfc_shdlc_connect_complete(shdlc, shdlc->hard_fault); + break; + } break; case SHDLC_CONNECTED: nfc_shdlc_handle_rcv_queue(shdlc); @@ -637,10 +634,7 @@ static void nfc_shdlc_sm_work(struct work_struct *work) } if (shdlc->hard_fault) { - /* - * TODO: Handle hard_fault that occured during - * this invocation of the shdlc worker - */ + nfc_hci_driver_failure(shdlc->hdev, shdlc->hard_fault); } break; default: @@ -765,14 +759,16 @@ static int nfc_shdlc_xmit(struct nfc_hci_dev *hdev, struct sk_buff *skb) return 0; } -static int nfc_shdlc_start_poll(struct nfc_hci_dev *hdev, u32 protocols) +static int nfc_shdlc_start_poll(struct nfc_hci_dev *hdev, + u32 im_protocols, u32 tm_protocols) { struct nfc_shdlc *shdlc = nfc_hci_get_clientdata(hdev); pr_debug("\n"); if (shdlc->ops->start_poll) - return shdlc->ops->start_poll(shdlc, protocols); + return shdlc->ops->start_poll(shdlc, + im_protocols, tm_protocols); return 0; } @@ -921,8 +917,6 @@ void nfc_shdlc_free(struct nfc_shdlc *shdlc) { pr_debug("\n"); - /* TODO: Check that this cannot be called while still in use */ - nfc_hci_unregister_device(shdlc->hdev); nfc_hci_free_device(shdlc->hdev); diff --git a/net/nfc/llcp/commands.c b/net/nfc/llcp/commands.c index bf8ae4f0b90c..b982b5b890d7 100644 --- a/net/nfc/llcp/commands.c +++ b/net/nfc/llcp/commands.c @@ -51,7 +51,7 @@ static u8 llcp_tlv8(u8 *tlv, u8 type) return tlv[2]; } -static u8 llcp_tlv16(u8 *tlv, u8 type) +static u16 llcp_tlv16(u8 *tlv, u8 type) { if (tlv[0] != type || tlv[1] != llcp_tlv_length[tlv[0]]) return 0; @@ -67,7 +67,7 @@ static u8 llcp_tlv_version(u8 *tlv) static u16 llcp_tlv_miux(u8 *tlv) { - return llcp_tlv16(tlv, LLCP_TLV_MIUX) & 0x7f; + return llcp_tlv16(tlv, LLCP_TLV_MIUX) & 0x7ff; } static u16 llcp_tlv_wks(u8 *tlv) @@ -117,8 +117,8 @@ u8 *nfc_llcp_build_tlv(u8 type, u8 *value, u8 value_length, u8 *tlv_length) return tlv; } -int nfc_llcp_parse_tlv(struct nfc_llcp_local *local, - u8 *tlv_array, u16 tlv_array_len) +int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local, + u8 *tlv_array, u16 tlv_array_len) { u8 *tlv = tlv_array, type, length, offset = 0; @@ -149,8 +149,45 @@ int nfc_llcp_parse_tlv(struct nfc_llcp_local *local, case LLCP_TLV_OPT: local->remote_opt = llcp_tlv_opt(tlv); break; + default: + pr_err("Invalid gt tlv value 0x%x\n", type); + break; + } + + offset += length + 2; + tlv += length + 2; + } + + pr_debug("version 0x%x miu %d lto %d opt 0x%x wks 0x%x\n", + local->remote_version, local->remote_miu, + local->remote_lto, local->remote_opt, + local->remote_wks); + + return 0; +} + +int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock, + u8 *tlv_array, u16 tlv_array_len) +{ + u8 *tlv = tlv_array, type, length, offset = 0; + + pr_debug("TLV array length %d\n", tlv_array_len); + + 
if (sock == NULL) + return -ENOTCONN; + + while (offset < tlv_array_len) { + type = tlv[0]; + length = tlv[1]; + + pr_debug("type 0x%x length %d\n", type, length); + + switch (type) { + case LLCP_TLV_MIUX: + sock->miu = llcp_tlv_miux(tlv) + 128; + break; case LLCP_TLV_RW: - local->remote_rw = llcp_tlv_rw(tlv); + sock->rw = llcp_tlv_rw(tlv); break; case LLCP_TLV_SN: break; @@ -163,10 +200,7 @@ int nfc_llcp_parse_tlv(struct nfc_llcp_local *local, tlv += length + 2; } - pr_debug("version 0x%x miu %d lto %d opt 0x%x wks 0x%x rw %d\n", - local->remote_version, local->remote_miu, - local->remote_lto, local->remote_opt, - local->remote_wks, local->remote_rw); + pr_debug("sock %p rw %d miu %d\n", sock, sock->rw, sock->miu); return 0; } @@ -474,7 +508,7 @@ int nfc_llcp_send_i_frame(struct nfc_llcp_sock *sock, while (remaining_len > 0) { - frag_len = min_t(size_t, local->remote_miu, remaining_len); + frag_len = min_t(size_t, sock->miu, remaining_len); pr_debug("Fragment %zd bytes remaining %zd", frag_len, remaining_len); diff --git a/net/nfc/llcp/llcp.c b/net/nfc/llcp/llcp.c index 42994fac26d6..82f0f7588b46 100644 --- a/net/nfc/llcp/llcp.c +++ b/net/nfc/llcp/llcp.c @@ -31,47 +31,41 @@ static u8 llcp_magic[3] = {0x46, 0x66, 0x6d}; static struct list_head llcp_devices; -static void nfc_llcp_socket_release(struct nfc_llcp_local *local) +void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *sk) { - struct nfc_llcp_sock *parent, *s, *n; - struct sock *sk, *parent_sk; - int i; - - mutex_lock(&local->socket_lock); - - for (i = 0; i < LLCP_MAX_SAP; i++) { - parent = local->sockets[i]; - if (parent == NULL) - continue; - - /* Release all child sockets */ - list_for_each_entry_safe(s, n, &parent->list, list) { - list_del_init(&s->list); - sk = &s->sk; - - lock_sock(sk); - - if (sk->sk_state == LLCP_CONNECTED) - nfc_put_device(s->dev); + write_lock(&l->lock); + sk_add_node(sk, &l->head); + write_unlock(&l->lock); +} - sk->sk_state = LLCP_CLOSED; +void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *sk) +{ + write_lock(&l->lock); + sk_del_node_init(sk); + write_unlock(&l->lock); +} - release_sock(sk); +static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool listen) +{ + struct sock *sk; + struct hlist_node *node, *tmp; + struct nfc_llcp_sock *llcp_sock; - sock_orphan(sk); + write_lock(&local->sockets.lock); - s->local = NULL; - } + sk_for_each_safe(sk, node, tmp, &local->sockets.head) { + llcp_sock = nfc_llcp_sock(sk); - parent_sk = &parent->sk; + lock_sock(sk); - lock_sock(parent_sk); + if (sk->sk_state == LLCP_CONNECTED) + nfc_put_device(llcp_sock->dev); - if (parent_sk->sk_state == LLCP_LISTEN) { + if (sk->sk_state == LLCP_LISTEN) { struct nfc_llcp_sock *lsk, *n; struct sock *accept_sk; - list_for_each_entry_safe(lsk, n, &parent->accept_queue, + list_for_each_entry_safe(lsk, n, &llcp_sock->accept_queue, accept_queue) { accept_sk = &lsk->sk; lock_sock(accept_sk); @@ -83,35 +77,94 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local) release_sock(accept_sk); sock_orphan(accept_sk); + } - lsk->local = NULL; + if (listen == true) { + release_sock(sk); + continue; } } - if (parent_sk->sk_state == LLCP_CONNECTED) - nfc_put_device(parent->dev); - - parent_sk->sk_state = LLCP_CLOSED; + sk->sk_state = LLCP_CLOSED; - release_sock(parent_sk); + release_sock(sk); - sock_orphan(parent_sk); + sock_orphan(sk); - parent->local = NULL; + sk_del_node_init(sk); } - mutex_unlock(&local->socket_lock); + write_unlock(&local->sockets.lock); } -static void nfc_llcp_clear_sdp(struct 
nfc_llcp_local *local) +struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local) { - mutex_lock(&local->sdp_lock); + kref_get(&local->ref); - local->local_wks = 0; - local->local_sdp = 0; - local->local_sap = 0; + return local; +} - mutex_unlock(&local->sdp_lock); +static void local_release(struct kref *ref) +{ + struct nfc_llcp_local *local; + + local = container_of(ref, struct nfc_llcp_local, ref); + + list_del(&local->list); + nfc_llcp_socket_release(local, false); + del_timer_sync(&local->link_timer); + skb_queue_purge(&local->tx_queue); + destroy_workqueue(local->tx_wq); + destroy_workqueue(local->rx_wq); + destroy_workqueue(local->timeout_wq); + kfree_skb(local->rx_pending); + kfree(local); +} + +int nfc_llcp_local_put(struct nfc_llcp_local *local) +{ + if (local == NULL) + return 0; + + return kref_put(&local->ref, local_release); +} + +static struct nfc_llcp_sock *nfc_llcp_sock_get(struct nfc_llcp_local *local, + u8 ssap, u8 dsap) +{ + struct sock *sk; + struct hlist_node *node; + struct nfc_llcp_sock *llcp_sock; + + pr_debug("ssap dsap %d %d\n", ssap, dsap); + + if (ssap == 0 && dsap == 0) + return NULL; + + read_lock(&local->sockets.lock); + + llcp_sock = NULL; + + sk_for_each(sk, node, &local->sockets.head) { + llcp_sock = nfc_llcp_sock(sk); + + if (llcp_sock->ssap == ssap && llcp_sock->dsap == dsap) + break; + } + + read_unlock(&local->sockets.lock); + + if (llcp_sock == NULL) + return NULL; + + sock_hold(&llcp_sock->sk); + + return llcp_sock; +} + +static void nfc_llcp_sock_put(struct nfc_llcp_sock *sock) +{ + sock_put(&sock->sk); } static void nfc_llcp_timeout_work(struct work_struct *work) @@ -174,6 +227,51 @@ static int nfc_llcp_wks_sap(char *service_name, size_t service_name_len) return -EINVAL; } +static +struct nfc_llcp_sock *nfc_llcp_sock_from_sn(struct nfc_llcp_local *local, + u8 *sn, size_t sn_len) +{ + struct sock *sk; + struct hlist_node *node; + struct nfc_llcp_sock *llcp_sock, *tmp_sock; + + pr_debug("sn %zd %p\n", sn_len, sn); + + if (sn == NULL || sn_len == 0) + return NULL; + + read_lock(&local->sockets.lock); + + llcp_sock = NULL; + + sk_for_each(sk, node, &local->sockets.head) { + tmp_sock = nfc_llcp_sock(sk); + + pr_debug("llcp sock %p\n", tmp_sock); + + if (tmp_sock->sk.sk_state != LLCP_LISTEN) + continue; + + if (tmp_sock->service_name == NULL || + tmp_sock->service_name_len == 0) + continue; + + if (tmp_sock->service_name_len != sn_len) + continue; + + if (memcmp(sn, tmp_sock->service_name, sn_len) == 0) { + llcp_sock = tmp_sock; + break; + } + } + + read_unlock(&local->sockets.lock); + + pr_debug("Found llcp sock %p\n", llcp_sock); + + return llcp_sock; +} + u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local, struct nfc_llcp_sock *sock) { @@ -200,41 +298,26 @@ u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local, } /* - * This is not a well known service, - * we should try to find a local SDP free spot + * Check if there already is a non WKS socket bound + * to this service name. 
*/ - ssap = find_first_zero_bit(&local->local_sdp, LLCP_SDP_NUM_SAP); - if (ssap == LLCP_SDP_NUM_SAP) { + if (nfc_llcp_sock_from_sn(local, sock->service_name, + sock->service_name_len) != NULL) { mutex_unlock(&local->sdp_lock); return LLCP_SAP_MAX; } - pr_debug("SDP ssap %d\n", LLCP_WKS_NUM_SAP + ssap); - - set_bit(ssap, &local->local_sdp); mutex_unlock(&local->sdp_lock); - return LLCP_WKS_NUM_SAP + ssap; + return LLCP_SDP_UNBOUND; - } else if (sock->ssap != 0) { - if (sock->ssap < LLCP_WKS_NUM_SAP) { - if (!test_bit(sock->ssap, &local->local_wks)) { - set_bit(sock->ssap, &local->local_wks); - mutex_unlock(&local->sdp_lock); - - return sock->ssap; - } - - } else if (sock->ssap < LLCP_SDP_NUM_SAP) { - if (!test_bit(sock->ssap - LLCP_WKS_NUM_SAP, - &local->local_sdp)) { - set_bit(sock->ssap - LLCP_WKS_NUM_SAP, - &local->local_sdp); - mutex_unlock(&local->sdp_lock); + } else if (sock->ssap != 0 && sock->ssap < LLCP_WKS_NUM_SAP) { + if (!test_bit(sock->ssap, &local->local_wks)) { + set_bit(sock->ssap, &local->local_wks); + mutex_unlock(&local->sdp_lock); - return sock->ssap; - } + return sock->ssap; } } @@ -271,8 +354,34 @@ void nfc_llcp_put_ssap(struct nfc_llcp_local *local, u8 ssap) local_ssap = ssap; sdp = &local->local_wks; } else if (ssap < LLCP_LOCAL_NUM_SAP) { + atomic_t *client_cnt; + local_ssap = ssap - LLCP_WKS_NUM_SAP; sdp = &local->local_sdp; + client_cnt = &local->local_sdp_cnt[local_ssap]; + + pr_debug("%d clients\n", atomic_read(client_cnt)); + + mutex_lock(&local->sdp_lock); + + if (atomic_dec_and_test(client_cnt)) { + struct nfc_llcp_sock *l_sock; + + pr_debug("No more clients for SAP %d\n", ssap); + + clear_bit(local_ssap, sdp); + + /* Find the listening sock and set it back to UNBOUND */ + l_sock = nfc_llcp_sock_get(local, ssap, LLCP_SAP_SDP); + if (l_sock) { + l_sock->ssap = LLCP_SDP_UNBOUND; + nfc_llcp_sock_put(l_sock); + } + } + + mutex_unlock(&local->sdp_lock); + + return; } else if (ssap < LLCP_MAX_SAP) { local_ssap = ssap - LLCP_LOCAL_NUM_SAP; sdp = &local->local_sap; @@ -287,19 +396,26 @@ void nfc_llcp_put_ssap(struct nfc_llcp_local *local, u8 ssap) mutex_unlock(&local->sdp_lock); } -u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len) +static u8 nfc_llcp_reserve_sdp_ssap(struct nfc_llcp_local *local) { - struct nfc_llcp_local *local; + u8 ssap; - local = nfc_llcp_find_local(dev); - if (local == NULL) { - *general_bytes_len = 0; - return NULL; + mutex_lock(&local->sdp_lock); + + ssap = find_first_zero_bit(&local->local_sdp, LLCP_SDP_NUM_SAP); + if (ssap == LLCP_SDP_NUM_SAP) { + mutex_unlock(&local->sdp_lock); + + return LLCP_SAP_MAX; } - *general_bytes_len = local->gb_len; + pr_debug("SDP ssap %d\n", LLCP_WKS_NUM_SAP + ssap); - return local->gb; + set_bit(ssap, &local->local_sdp); + + mutex_unlock(&local->sdp_lock); + + return LLCP_WKS_NUM_SAP + ssap; } static int nfc_llcp_build_gb(struct nfc_llcp_local *local) @@ -363,6 +479,23 @@ static int nfc_llcp_build_gb(struct nfc_llcp_local *local) return 0; } +u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len) +{ + struct nfc_llcp_local *local; + + local = nfc_llcp_find_local(dev); + if (local == NULL) { + *general_bytes_len = 0; + return NULL; + } + + nfc_llcp_build_gb(local); + + *general_bytes_len = local->gb_len; + + return local->gb; +} + int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len) { struct nfc_llcp_local *local = nfc_llcp_find_local(dev); @@ -384,31 +517,9 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len) return -EINVAL; } - return 
nfc_llcp_parse_tlv(local, - &local->remote_gb[3], - local->remote_gb_len - 3); -} - -static void nfc_llcp_tx_work(struct work_struct *work) -{ - struct nfc_llcp_local *local = container_of(work, struct nfc_llcp_local, - tx_work); - struct sk_buff *skb; - - skb = skb_dequeue(&local->tx_queue); - if (skb != NULL) { - pr_debug("Sending pending skb\n"); - print_hex_dump(KERN_DEBUG, "LLCP Tx: ", DUMP_PREFIX_OFFSET, - 16, 1, skb->data, skb->len, true); - - nfc_data_exchange(local->dev, local->target_idx, - skb, nfc_llcp_recv, local); - } else { - nfc_llcp_send_symm(local->dev); - } - - mod_timer(&local->link_timer, - jiffies + msecs_to_jiffies(local->remote_lto)); + return nfc_llcp_parse_gb_tlv(local, + &local->remote_gb[3], + local->remote_gb_len - 3); } static u8 nfc_llcp_dsap(struct sk_buff *pdu) @@ -443,51 +554,84 @@ static void nfc_llcp_set_nrns(struct nfc_llcp_sock *sock, struct sk_buff *pdu) sock->recv_ack_n = (sock->recv_n - 1) % 16; } -static struct nfc_llcp_sock *nfc_llcp_sock_get(struct nfc_llcp_local *local, - u8 ssap, u8 dsap) +static void nfc_llcp_tx_work(struct work_struct *work) { - struct nfc_llcp_sock *sock, *llcp_sock, *n; + struct nfc_llcp_local *local = container_of(work, struct nfc_llcp_local, + tx_work); + struct sk_buff *skb; + struct sock *sk; + struct nfc_llcp_sock *llcp_sock; - pr_debug("ssap dsap %d %d\n", ssap, dsap); + skb = skb_dequeue(&local->tx_queue); + if (skb != NULL) { + sk = skb->sk; + llcp_sock = nfc_llcp_sock(sk); + if (llcp_sock != NULL) { + int ret; + + pr_debug("Sending pending skb\n"); + print_hex_dump(KERN_DEBUG, "LLCP Tx: ", + DUMP_PREFIX_OFFSET, 16, 1, + skb->data, skb->len, true); + + ret = nfc_data_exchange(local->dev, local->target_idx, + skb, nfc_llcp_recv, local); + + if (!ret && nfc_llcp_ptype(skb) == LLCP_PDU_I) { + skb = skb_get(skb); + skb_queue_tail(&llcp_sock->tx_pending_queue, + skb); + } + } else { + nfc_llcp_send_symm(local->dev); + } + } else { + nfc_llcp_send_symm(local->dev); + } - if (ssap == 0 && dsap == 0) - return NULL; + mod_timer(&local->link_timer, + jiffies + msecs_to_jiffies(2 * local->remote_lto)); +} - mutex_lock(&local->socket_lock); - sock = local->sockets[ssap]; - if (sock == NULL) { - mutex_unlock(&local->socket_lock); - return NULL; - } +static struct nfc_llcp_sock *nfc_llcp_connecting_sock_get(struct nfc_llcp_local *local, + u8 ssap) +{ + struct sock *sk; + struct nfc_llcp_sock *llcp_sock; + struct hlist_node *node; - pr_debug("root dsap %d (%d)\n", sock->dsap, dsap); + read_lock(&local->connecting_sockets.lock); - if (sock->dsap == dsap) { - sock_hold(&sock->sk); - mutex_unlock(&local->socket_lock); - return sock; - } + sk_for_each(sk, node, &local->connecting_sockets.head) { + llcp_sock = nfc_llcp_sock(sk); - list_for_each_entry_safe(llcp_sock, n, &sock->list, list) { - pr_debug("llcp_sock %p sk %p dsap %d\n", llcp_sock, - &llcp_sock->sk, llcp_sock->dsap); - if (llcp_sock->dsap == dsap) { + if (llcp_sock->ssap == ssap) { sock_hold(&llcp_sock->sk); - mutex_unlock(&local->socket_lock); - return llcp_sock; + goto out; } } - pr_err("Could not find socket for %d %d\n", ssap, dsap); + llcp_sock = NULL; - mutex_unlock(&local->socket_lock); +out: + read_unlock(&local->connecting_sockets.lock); - return NULL; + return llcp_sock; } -static void nfc_llcp_sock_put(struct nfc_llcp_sock *sock) +static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local, + u8 *sn, size_t sn_len) { - sock_put(&sock->sk); + struct nfc_llcp_sock *llcp_sock; + + llcp_sock = nfc_llcp_sock_from_sn(local, sn, sn_len); + + if 
(llcp_sock == NULL) + return NULL; + + sock_hold(&llcp_sock->sk); + + return llcp_sock; } static u8 *nfc_llcp_connect_sn(struct sk_buff *skb, size_t *sn_len) @@ -518,35 +662,19 @@ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local, { struct sock *new_sk, *parent; struct nfc_llcp_sock *sock, *new_sock; - u8 dsap, ssap, bound_sap, reason; + u8 dsap, ssap, reason; dsap = nfc_llcp_dsap(skb); ssap = nfc_llcp_ssap(skb); pr_debug("%d %d\n", dsap, ssap); - nfc_llcp_parse_tlv(local, &skb->data[LLCP_HEADER_SIZE], - skb->len - LLCP_HEADER_SIZE); - if (dsap != LLCP_SAP_SDP) { - bound_sap = dsap; - - mutex_lock(&local->socket_lock); - sock = local->sockets[dsap]; - if (sock == NULL) { - mutex_unlock(&local->socket_lock); + sock = nfc_llcp_sock_get(local, dsap, LLCP_SAP_SDP); + if (sock == NULL || sock->sk.sk_state != LLCP_LISTEN) { reason = LLCP_DM_NOBOUND; goto fail; } - - sock_hold(&sock->sk); - mutex_unlock(&local->socket_lock); - - lock_sock(&sock->sk); - - if (sock->dsap == LLCP_SAP_SDP && - sock->sk.sk_state == LLCP_LISTEN) - goto enqueue; } else { u8 *sn; size_t sn_len; @@ -559,40 +687,15 @@ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local, pr_debug("Service name length %zu\n", sn_len); - mutex_lock(&local->socket_lock); - for (bound_sap = 0; bound_sap < LLCP_LOCAL_SAP_OFFSET; - bound_sap++) { - sock = local->sockets[bound_sap]; - if (sock == NULL) - continue; - - if (sock->service_name == NULL || - sock->service_name_len == 0) - continue; - - if (sock->service_name_len != sn_len) - continue; - - if (sock->dsap == LLCP_SAP_SDP && - sock->sk.sk_state == LLCP_LISTEN && - !memcmp(sn, sock->service_name, sn_len)) { - pr_debug("Found service name at SAP %d\n", - bound_sap); - sock_hold(&sock->sk); - mutex_unlock(&local->socket_lock); - - lock_sock(&sock->sk); - - goto enqueue; - } + sock = nfc_llcp_sock_get_sn(local, sn, sn_len); + if (sock == NULL) { + reason = LLCP_DM_NOBOUND; + goto fail; } - mutex_unlock(&local->socket_lock); } - reason = LLCP_DM_NOBOUND; - goto fail; + lock_sock(&sock->sk); -enqueue: parent = &sock->sk; if (sk_acceptq_is_full(parent)) { @@ -602,6 +705,21 @@ enqueue: goto fail; } + if (sock->ssap == LLCP_SDP_UNBOUND) { + u8 ssap = nfc_llcp_reserve_sdp_ssap(local); + + pr_debug("First client, reserving %d\n", ssap); + + if (ssap == LLCP_SAP_MAX) { + reason = LLCP_DM_REJ; + release_sock(&sock->sk); + sock_put(&sock->sk); + goto fail; + } + + sock->ssap = ssap; + } + new_sk = nfc_llcp_sock_alloc(NULL, parent->sk_type, GFP_ATOMIC); if (new_sk == NULL) { reason = LLCP_DM_REJ; @@ -612,15 +730,31 @@ enqueue: new_sock = nfc_llcp_sock(new_sk); new_sock->dev = local->dev; - new_sock->local = local; + new_sock->local = nfc_llcp_local_get(local); + new_sock->miu = local->remote_miu; new_sock->nfc_protocol = sock->nfc_protocol; - new_sock->ssap = bound_sap; new_sock->dsap = ssap; + new_sock->target_idx = local->target_idx; new_sock->parent = parent; + new_sock->ssap = sock->ssap; + if (sock->ssap < LLCP_LOCAL_NUM_SAP && sock->ssap >= LLCP_WKS_NUM_SAP) { + atomic_t *client_count; + + pr_debug("reserved_ssap %d for %p\n", sock->ssap, new_sock); + + client_count = + &local->local_sdp_cnt[sock->ssap - LLCP_WKS_NUM_SAP]; + + atomic_inc(client_count); + new_sock->reserved_ssap = sock->ssap; + } + + nfc_llcp_parse_connection_tlv(new_sock, &skb->data[LLCP_HEADER_SIZE], + skb->len - LLCP_HEADER_SIZE); pr_debug("new sock %p sk %p\n", new_sock, &new_sock->sk); - list_add_tail(&new_sock->list, &sock->list); + nfc_llcp_sock_link(&local->sockets, new_sk); 
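A brief aside on the lifetime change visible above: nfc_llcp_local is now reference counted with a kref (nfc_llcp_local_get()/nfc_llcp_local_put()), so the teardown that used to live in nfc_llcp_unregister_device() moves into local_release() and only runs once the last bound or connected socket drops its reference. A minimal, hypothetical sketch of that pattern, with names that are illustrative rather than taken from the patch:

#include <linux/kref.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct my_obj {
	struct kref ref;	/* initialised with kref_init() at allocation */
	/* ... payload ... */
};

static void my_obj_release(struct kref *kref)
{
	struct my_obj *obj = container_of(kref, struct my_obj, ref);

	/* Runs exactly once, when the last reference is dropped. */
	kfree(obj);
}

static struct my_obj *my_obj_get(struct my_obj *obj)
{
	kref_get(&obj->ref);
	return obj;
}

static void my_obj_put(struct my_obj *obj)
{
	if (obj)
		kref_put(&obj->ref, my_obj_release);
}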
nfc_llcp_accept_enqueue(&sock->sk, new_sk); @@ -654,12 +788,12 @@ int nfc_llcp_queue_i_frames(struct nfc_llcp_sock *sock) pr_debug("Remote ready %d tx queue len %d remote rw %d", sock->remote_ready, skb_queue_len(&sock->tx_pending_queue), - local->remote_rw); + sock->rw); /* Try to queue some I frames for transmission */ while (sock->remote_ready && - skb_queue_len(&sock->tx_pending_queue) < local->remote_rw) { - struct sk_buff *pdu, *pending_pdu; + skb_queue_len(&sock->tx_pending_queue) < sock->rw) { + struct sk_buff *pdu; pdu = skb_dequeue(&sock->tx_queue); if (pdu == NULL) @@ -668,10 +802,7 @@ int nfc_llcp_queue_i_frames(struct nfc_llcp_sock *sock) /* Update N(S)/N(R) */ nfc_llcp_set_nrns(sock, pdu); - pending_pdu = skb_clone(pdu, GFP_KERNEL); - skb_queue_tail(&local->tx_queue, pdu); - skb_queue_tail(&sock->tx_pending_queue, pending_pdu); nr_frames++; } @@ -728,11 +859,21 @@ static void nfc_llcp_recv_hdlc(struct nfc_llcp_local *local, llcp_sock->send_ack_n = nr; - skb_queue_walk_safe(&llcp_sock->tx_pending_queue, s, tmp) - if (nfc_llcp_ns(s) <= nr) { - skb_unlink(s, &llcp_sock->tx_pending_queue); - kfree_skb(s); - } + /* Remove and free all skbs until ns == nr */ + skb_queue_walk_safe(&llcp_sock->tx_pending_queue, s, tmp) { + skb_unlink(s, &llcp_sock->tx_pending_queue); + kfree_skb(s); + + if (nfc_llcp_ns(s) == nr) + break; + } + + /* Re-queue the remaining skbs for transmission */ + skb_queue_reverse_walk_safe(&llcp_sock->tx_pending_queue, + s, tmp) { + skb_unlink(s, &llcp_sock->tx_pending_queue); + skb_queue_head(&local->tx_queue, s); + } } if (ptype == LLCP_PDU_RR) @@ -740,7 +881,7 @@ static void nfc_llcp_recv_hdlc(struct nfc_llcp_local *local, else if (ptype == LLCP_PDU_RNR) llcp_sock->remote_ready = false; - if (nfc_llcp_queue_i_frames(llcp_sock) == 0) + if (nfc_llcp_queue_i_frames(llcp_sock) == 0 && ptype == LLCP_PDU_I) nfc_llcp_send_rr(llcp_sock); release_sock(sk); @@ -791,11 +932,7 @@ static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb) dsap = nfc_llcp_dsap(skb); ssap = nfc_llcp_ssap(skb); - llcp_sock = nfc_llcp_sock_get(local, dsap, ssap); - - if (llcp_sock == NULL) - llcp_sock = nfc_llcp_sock_get(local, dsap, LLCP_SAP_SDP); - + llcp_sock = nfc_llcp_connecting_sock_get(local, dsap); if (llcp_sock == NULL) { pr_err("Invalid CC\n"); nfc_llcp_send_dm(local, dsap, ssap, LLCP_DM_NOCONN); @@ -803,11 +940,15 @@ static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb) return; } - llcp_sock->dsap = ssap; sk = &llcp_sock->sk; - nfc_llcp_parse_tlv(local, &skb->data[LLCP_HEADER_SIZE], - skb->len - LLCP_HEADER_SIZE); + /* Unlink from connecting and link to the client array */ + nfc_llcp_sock_unlink(&local->connecting_sockets, sk); + nfc_llcp_sock_link(&local->sockets, sk); + llcp_sock->dsap = ssap; + + nfc_llcp_parse_connection_tlv(llcp_sock, &skb->data[LLCP_HEADER_SIZE], + skb->len - LLCP_HEADER_SIZE); sk->sk_state = LLCP_CONNECTED; sk->sk_state_change(sk); @@ -815,6 +956,45 @@ static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb) nfc_llcp_sock_put(llcp_sock); } +static void nfc_llcp_recv_dm(struct nfc_llcp_local *local, struct sk_buff *skb) +{ + struct nfc_llcp_sock *llcp_sock; + struct sock *sk; + u8 dsap, ssap, reason; + + dsap = nfc_llcp_dsap(skb); + ssap = nfc_llcp_ssap(skb); + reason = skb->data[2]; + + pr_debug("%d %d reason %d\n", ssap, dsap, reason); + + switch (reason) { + case LLCP_DM_NOBOUND: + case LLCP_DM_REJ: + llcp_sock = nfc_llcp_connecting_sock_get(local, dsap); + break; + + default: + llcp_sock = 
nfc_llcp_sock_get(local, dsap, ssap); + break; + } + + if (llcp_sock == NULL) { + pr_err("Invalid DM\n"); + return; + } + + sk = &llcp_sock->sk; + + sk->sk_err = ENXIO; + sk->sk_state = LLCP_CLOSED; + sk->sk_state_change(sk); + + nfc_llcp_sock_put(llcp_sock); + + return; +} + static void nfc_llcp_rx_work(struct work_struct *work) { struct nfc_llcp_local *local = container_of(work, struct nfc_llcp_local, @@ -858,6 +1038,11 @@ static void nfc_llcp_rx_work(struct work_struct *work) nfc_llcp_recv_cc(local, skb); break; + case LLCP_PDU_DM: + pr_debug("DM\n"); + nfc_llcp_recv_dm(local, skb); + break; + case LLCP_PDU_I: case LLCP_PDU_RR: case LLCP_PDU_RNR: @@ -891,6 +1076,21 @@ void nfc_llcp_recv(void *data, struct sk_buff *skb, int err) return; } +int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb) +{ + struct nfc_llcp_local *local; + + local = nfc_llcp_find_local(dev); + if (local == NULL) + return -ENODEV; + + local->rx_pending = skb_get(skb); + del_timer(&local->link_timer); + queue_work(local->rx_wq, &local->rx_work); + + return 0; +} + void nfc_llcp_mac_is_down(struct nfc_dev *dev) { struct nfc_llcp_local *local; @@ -899,10 +1099,8 @@ void nfc_llcp_mac_is_down(struct nfc_dev *dev) if (local == NULL) return; - nfc_llcp_clear_sdp(local); - /* Close and purge all existing sockets */ - nfc_llcp_socket_release(local); + nfc_llcp_socket_release(local, true); } void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx, @@ -943,8 +1141,8 @@ int nfc_llcp_register_device(struct nfc_dev *ndev) local->dev = ndev; INIT_LIST_HEAD(&local->list); + kref_init(&local->ref); mutex_init(&local->sdp_lock); - mutex_init(&local->socket_lock); init_timer(&local->link_timer); local->link_timer.data = (unsigned long) local; local->link_timer.function = nfc_llcp_symm_timer; @@ -984,11 +1182,13 @@ int nfc_llcp_register_device(struct nfc_dev *ndev) goto err_rx_wq; } + local->sockets.lock = __RW_LOCK_UNLOCKED(local->sockets.lock); + local->connecting_sockets.lock = __RW_LOCK_UNLOCKED(local->connecting_sockets.lock); + nfc_llcp_build_gb(local); local->remote_miu = LLCP_DEFAULT_MIU; local->remote_lto = LLCP_DEFAULT_LTO; - local->remote_rw = LLCP_DEFAULT_RW; list_add(&llcp_devices, &local->list); @@ -1015,14 +1215,7 @@ void nfc_llcp_unregister_device(struct nfc_dev *dev) return; } - list_del(&local->list); - nfc_llcp_socket_release(local); - del_timer_sync(&local->link_timer); - skb_queue_purge(&local->tx_queue); - destroy_workqueue(local->tx_wq); - destroy_workqueue(local->rx_wq); - kfree_skb(local->rx_pending); - kfree(local); + nfc_llcp_local_put(local); } int __init nfc_llcp_init(void) diff --git a/net/nfc/llcp/llcp.h b/net/nfc/llcp/llcp.h index 50680ce5ae43..83b8bba5a280 100644 --- a/net/nfc/llcp/llcp.h +++ b/net/nfc/llcp/llcp.h @@ -37,15 +37,22 @@ enum llcp_state { #define LLCP_LOCAL_NUM_SAP 32 #define LLCP_LOCAL_SAP_OFFSET (LLCP_WKS_NUM_SAP + LLCP_SDP_NUM_SAP) #define LLCP_MAX_SAP (LLCP_WKS_NUM_SAP + LLCP_SDP_NUM_SAP + LLCP_LOCAL_NUM_SAP) +#define LLCP_SDP_UNBOUND (LLCP_MAX_SAP + 1) struct nfc_llcp_sock; +struct llcp_sock_list { + struct hlist_head head; + rwlock_t lock; +}; + struct nfc_llcp_local { struct list_head list; struct nfc_dev *dev; + struct kref ref; + struct mutex sdp_lock; - struct mutex socket_lock; struct timer_list link_timer; struct sk_buff_head tx_queue; @@ -63,6 +70,7 @@ struct nfc_llcp_local { unsigned long local_wks; /* Well known services */ unsigned long local_sdp; /* Local services */ unsigned long local_sap; /* Local SAPs, not available for discovery */ + atomic_t 
local_sdp_cnt[LLCP_SDP_NUM_SAP]; /* local */ u8 gb[NFC_MAX_GT_LEN]; @@ -77,24 +85,26 @@ struct nfc_llcp_local { u16 remote_lto; u8 remote_opt; u16 remote_wks; - u8 remote_rw; /* sockets array */ - struct nfc_llcp_sock *sockets[LLCP_MAX_SAP]; + struct llcp_sock_list sockets; + struct llcp_sock_list connecting_sockets; }; struct nfc_llcp_sock { struct sock sk; - struct list_head list; struct nfc_dev *dev; struct nfc_llcp_local *local; u32 target_idx; u32 nfc_protocol; + /* Link parameters */ u8 ssap; u8 dsap; char *service_name; size_t service_name_len; + u8 rw; + u16 miu; /* Link variables */ u8 send_n; @@ -105,6 +115,9 @@ struct nfc_llcp_sock { /* Is the remote peer ready to receive */ u8 remote_ready; + /* Reserved source SAP */ + u8 reserved_ssap; + struct sk_buff_head tx_queue; struct sk_buff_head tx_pending_queue; struct sk_buff_head tx_backlog_queue; @@ -164,7 +177,11 @@ struct nfc_llcp_sock { #define LLCP_DM_REJ 0x03 +void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *s); +void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *s); struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev); +struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local); +int nfc_llcp_local_put(struct nfc_llcp_local *local); u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local, struct nfc_llcp_sock *sock); u8 nfc_llcp_get_local_ssap(struct nfc_llcp_local *local); @@ -179,8 +196,10 @@ void nfc_llcp_accept_enqueue(struct sock *parent, struct sock *sk); struct sock *nfc_llcp_accept_dequeue(struct sock *sk, struct socket *newsock); /* TLV API */ -int nfc_llcp_parse_tlv(struct nfc_llcp_local *local, - u8 *tlv_array, u16 tlv_array_len); +int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local, + u8 *tlv_array, u16 tlv_array_len); +int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock, + u8 *tlv_array, u16 tlv_array_len); /* Commands API */ void nfc_llcp_recv(void *data, struct sk_buff *skb, int err); diff --git a/net/nfc/llcp/sock.c b/net/nfc/llcp/sock.c index e06d458fc719..ddeb9aa398f0 100644 --- a/net/nfc/llcp/sock.c +++ b/net/nfc/llcp/sock.c @@ -78,11 +78,11 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen) struct sockaddr_nfc_llcp llcp_addr; int len, ret = 0; - pr_debug("sk %p addr %p family %d\n", sk, addr, addr->sa_family); - if (!addr || addr->sa_family != AF_NFC) return -EINVAL; + pr_debug("sk %p addr %p family %d\n", sk, addr, addr->sa_family); + memset(&llcp_addr, 0, sizeof(llcp_addr)); len = min_t(unsigned int, sizeof(llcp_addr), alen); memcpy(&llcp_addr, addr, len); @@ -111,7 +111,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen) } llcp_sock->dev = dev; - llcp_sock->local = local; + llcp_sock->local = nfc_llcp_local_get(local); llcp_sock->nfc_protocol = llcp_addr.nfc_protocol; llcp_sock->service_name_len = min_t(unsigned int, llcp_addr.service_name_len, @@ -121,10 +121,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen) GFP_KERNEL); llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock); - if (llcp_sock->ssap == LLCP_MAX_SAP) + if (llcp_sock->ssap == LLCP_SAP_MAX) { + ret = -EADDRINUSE; goto put_dev; + } + + llcp_sock->reserved_ssap = llcp_sock->ssap; - local->sockets[llcp_sock->ssap] = llcp_sock; + nfc_llcp_sock_link(&local->sockets, sk); pr_debug("Socket bound to SAP %d\n", llcp_sock->ssap); @@ -283,22 +287,28 @@ error: return ret; } -static int llcp_sock_getname(struct socket *sock, struct sockaddr *addr, +static int llcp_sock_getname(struct 
socket *sock, struct sockaddr *uaddr, int *len, int peer) { - struct sockaddr_nfc_llcp *llcp_addr = (struct sockaddr_nfc_llcp *)addr; struct sock *sk = sock->sk; struct nfc_llcp_sock *llcp_sock = nfc_llcp_sock(sk); + DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, llcp_addr, uaddr); - pr_debug("%p\n", sk); + if (llcp_sock == NULL || llcp_sock->dev == NULL) + return -EBADFD; + + pr_debug("%p %d %d %d\n", sk, llcp_sock->target_idx, + llcp_sock->dsap, llcp_sock->ssap); if (llcp_sock == NULL || llcp_sock->dev == NULL) return -EBADFD; - addr->sa_family = AF_NFC; + uaddr->sa_family = AF_NFC; + *len = sizeof(struct sockaddr_nfc_llcp); llcp_addr->dev_idx = llcp_sock->dev->idx; + llcp_addr->target_idx = llcp_sock->target_idx; llcp_addr->dsap = llcp_sock->dsap; llcp_addr->ssap = llcp_sock->ssap; llcp_addr->service_name_len = llcp_sock->service_name_len; @@ -382,15 +392,6 @@ static int llcp_sock_release(struct socket *sock) goto out; } - mutex_lock(&local->socket_lock); - - if (llcp_sock == local->sockets[llcp_sock->ssap]) - local->sockets[llcp_sock->ssap] = NULL; - else - list_del_init(&llcp_sock->list); - - mutex_unlock(&local->socket_lock); - lock_sock(sk); /* Send a DISC */ @@ -415,14 +416,13 @@ static int llcp_sock_release(struct socket *sock) } } - /* Freeing the SAP */ - if ((sk->sk_state == LLCP_CONNECTED - && llcp_sock->ssap > LLCP_LOCAL_SAP_OFFSET) || - sk->sk_state == LLCP_BOUND || sk->sk_state == LLCP_LISTEN) + if (llcp_sock->reserved_ssap < LLCP_SAP_MAX) nfc_llcp_put_ssap(llcp_sock->local, llcp_sock->ssap); release_sock(sk); + nfc_llcp_sock_unlink(&local->sockets, sk); + out: sock_orphan(sk); sock_put(sk); @@ -490,12 +490,16 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr, } llcp_sock->dev = dev; - llcp_sock->local = local; + llcp_sock->local = nfc_llcp_local_get(local); + llcp_sock->miu = llcp_sock->local->remote_miu; llcp_sock->ssap = nfc_llcp_get_local_ssap(local); if (llcp_sock->ssap == LLCP_SAP_MAX) { ret = -ENOMEM; goto put_dev; } + + llcp_sock->reserved_ssap = llcp_sock->ssap; + if (addr->service_name_len == 0) llcp_sock->dsap = addr->dsap; else @@ -508,21 +512,26 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr, llcp_sock->service_name_len, GFP_KERNEL); - local->sockets[llcp_sock->ssap] = llcp_sock; + nfc_llcp_sock_link(&local->connecting_sockets, sk); ret = nfc_llcp_send_connect(llcp_sock); if (ret) - goto put_dev; + goto sock_unlink; ret = sock_wait_state(sk, LLCP_CONNECTED, sock_sndtimeo(sk, flags & O_NONBLOCK)); if (ret) - goto put_dev; + goto sock_unlink; release_sock(sk); return 0; +sock_unlink: + nfc_llcp_put_ssap(local, llcp_sock->ssap); + + nfc_llcp_sock_unlink(&local->connecting_sockets, sk); + put_dev: nfc_put_device(dev); @@ -687,13 +696,15 @@ struct sock *nfc_llcp_sock_alloc(struct socket *sock, int type, gfp_t gfp) llcp_sock->ssap = 0; llcp_sock->dsap = LLCP_SAP_SDP; + llcp_sock->rw = LLCP_DEFAULT_RW; + llcp_sock->miu = LLCP_DEFAULT_MIU; llcp_sock->send_n = llcp_sock->send_ack_n = 0; llcp_sock->recv_n = llcp_sock->recv_ack_n = 0; llcp_sock->remote_ready = 1; + llcp_sock->reserved_ssap = LLCP_SAP_MAX; skb_queue_head_init(&llcp_sock->tx_queue); skb_queue_head_init(&llcp_sock->tx_pending_queue); skb_queue_head_init(&llcp_sock->tx_backlog_queue); - INIT_LIST_HEAD(&llcp_sock->list); INIT_LIST_HEAD(&llcp_sock->accept_queue); if (sock != NULL) @@ -704,8 +715,6 @@ struct sock *nfc_llcp_sock_alloc(struct socket *sock, int type, gfp_t gfp) void nfc_llcp_sock_free(struct nfc_llcp_sock *sock) { - struct nfc_llcp_local *local = 
sock->local; - kfree(sock->service_name); skb_queue_purge(&sock->tx_queue); @@ -714,12 +723,9 @@ void nfc_llcp_sock_free(struct nfc_llcp_sock *sock) list_del_init(&sock->accept_queue); - if (local != NULL && sock == local->sockets[sock->ssap]) - local->sockets[sock->ssap] = NULL; - else - list_del_init(&sock->list); - sock->parent = NULL; + + nfc_llcp_local_put(sock->local); } static int llcp_sock_create(struct net *net, struct socket *sock, diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c index d560e6f13072..f81efe13985a 100644 --- a/net/nfc/nci/core.c +++ b/net/nfc/nci/core.c @@ -27,6 +27,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": %s: " fmt, __func__ +#include <linux/module.h> #include <linux/types.h> #include <linux/workqueue.h> #include <linux/completion.h> @@ -194,7 +195,7 @@ static void nci_rf_discover_req(struct nci_dev *ndev, unsigned long opt) } if ((cmd.num_disc_configs < NCI_MAX_NUM_RF_CONFIGS) && - (protocols & NFC_PROTO_ISO14443_MASK)) { + (protocols & NFC_PROTO_ISO14443_B_MASK)) { cmd.disc_configs[cmd.num_disc_configs].rf_tech_and_mode = NCI_NFC_B_PASSIVE_POLL_MODE; cmd.disc_configs[cmd.num_disc_configs].frequency = 1; @@ -387,7 +388,8 @@ static int nci_dev_down(struct nfc_dev *nfc_dev) return nci_close_device(ndev); } -static int nci_start_poll(struct nfc_dev *nfc_dev, __u32 protocols) +static int nci_start_poll(struct nfc_dev *nfc_dev, + __u32 im_protocols, __u32 tm_protocols) { struct nci_dev *ndev = nfc_get_drvdata(nfc_dev); int rc; @@ -413,11 +415,11 @@ static int nci_start_poll(struct nfc_dev *nfc_dev, __u32 protocols) return -EBUSY; } - rc = nci_request(ndev, nci_rf_discover_req, protocols, + rc = nci_request(ndev, nci_rf_discover_req, im_protocols, msecs_to_jiffies(NCI_RF_DISC_TIMEOUT)); if (!rc) - ndev->poll_prots = protocols; + ndev->poll_prots = im_protocols; return rc; } @@ -485,7 +487,8 @@ static int nci_activate_target(struct nfc_dev *nfc_dev, param.rf_protocol = NCI_RF_PROTOCOL_T2T; else if (protocol == NFC_PROTO_FELICA) param.rf_protocol = NCI_RF_PROTOCOL_T3T; - else if (protocol == NFC_PROTO_ISO14443) + else if (protocol == NFC_PROTO_ISO14443 || + protocol == NFC_PROTO_ISO14443_B) param.rf_protocol = NCI_RF_PROTOCOL_ISO_DEP; else param.rf_protocol = NCI_RF_PROTOCOL_NFC_DEP; @@ -521,9 +524,9 @@ static void nci_deactivate_target(struct nfc_dev *nfc_dev, } } -static int nci_data_exchange(struct nfc_dev *nfc_dev, struct nfc_target *target, - struct sk_buff *skb, - data_exchange_cb_t cb, void *cb_context) +static int nci_transceive(struct nfc_dev *nfc_dev, struct nfc_target *target, + struct sk_buff *skb, + data_exchange_cb_t cb, void *cb_context) { struct nci_dev *ndev = nfc_get_drvdata(nfc_dev); int rc; @@ -556,7 +559,7 @@ static struct nfc_ops nci_nfc_ops = { .stop_poll = nci_stop_poll, .activate_target = nci_activate_target, .deactivate_target = nci_deactivate_target, - .data_exchange = nci_data_exchange, + .im_transceive = nci_transceive, }; /* ---- Interface to NCI drivers ---- */ @@ -878,3 +881,5 @@ static void nci_cmd_work(struct work_struct *work) jiffies + msecs_to_jiffies(NCI_CMD_TIMEOUT)); } } + +MODULE_LICENSE("GPL"); diff --git a/net/nfc/nci/ntf.c b/net/nfc/nci/ntf.c index 2ab196a9f228..af7a93b04393 100644 --- a/net/nfc/nci/ntf.c +++ b/net/nfc/nci/ntf.c @@ -170,7 +170,10 @@ static int nci_add_new_protocol(struct nci_dev *ndev, if (rf_protocol == NCI_RF_PROTOCOL_T2T) protocol = NFC_PROTO_MIFARE_MASK; else if (rf_protocol == NCI_RF_PROTOCOL_ISO_DEP) - protocol = NFC_PROTO_ISO14443_MASK; + if (rf_tech_and_mode == NCI_NFC_A_PASSIVE_POLL_MODE) + 
protocol = NFC_PROTO_ISO14443_MASK; + else + protocol = NFC_PROTO_ISO14443_B_MASK; else if (rf_protocol == NCI_RF_PROTOCOL_T3T) protocol = NFC_PROTO_FELICA_MASK; else diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c index 581d419083aa..4c51714ee741 100644 --- a/net/nfc/netlink.c +++ b/net/nfc/netlink.c @@ -49,6 +49,8 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = { [NFC_ATTR_COMM_MODE] = { .type = NLA_U8 }, [NFC_ATTR_RF_MODE] = { .type = NLA_U8 }, [NFC_ATTR_DEVICE_POWERED] = { .type = NLA_U8 }, + [NFC_ATTR_IM_PROTOCOLS] = { .type = NLA_U32 }, + [NFC_ATTR_TM_PROTOCOLS] = { .type = NLA_U32 }, }; static int nfc_genl_send_target(struct sk_buff *msg, struct nfc_target *target, @@ -165,7 +167,7 @@ int nfc_genl_targets_found(struct nfc_dev *dev) dev->genl_data.poll_req_pid = 0; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); if (!msg) return -ENOMEM; @@ -193,7 +195,7 @@ int nfc_genl_target_lost(struct nfc_dev *dev, u32 target_idx) struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return -ENOMEM; @@ -219,12 +221,74 @@ free_msg: return -EMSGSIZE; } +int nfc_genl_tm_activated(struct nfc_dev *dev, u32 protocol) +{ + struct sk_buff *msg; + void *hdr; + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0, + NFC_EVENT_TM_ACTIVATED); + if (!hdr) + goto free_msg; + + if (nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx)) + goto nla_put_failure; + if (nla_put_u32(msg, NFC_ATTR_TM_PROTOCOLS, protocol)) + goto nla_put_failure; + + genlmsg_end(msg, hdr); + + genlmsg_multicast(msg, 0, nfc_genl_event_mcgrp.id, GFP_KERNEL); + + return 0; + +nla_put_failure: + genlmsg_cancel(msg, hdr); +free_msg: + nlmsg_free(msg); + return -EMSGSIZE; +} + +int nfc_genl_tm_deactivated(struct nfc_dev *dev) +{ + struct sk_buff *msg; + void *hdr; + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + hdr = genlmsg_put(msg, 0, 0, &nfc_genl_family, 0, + NFC_EVENT_TM_DEACTIVATED); + if (!hdr) + goto free_msg; + + if (nla_put_u32(msg, NFC_ATTR_DEVICE_INDEX, dev->idx)) + goto nla_put_failure; + + genlmsg_end(msg, hdr); + + genlmsg_multicast(msg, 0, nfc_genl_event_mcgrp.id, GFP_KERNEL); + + return 0; + +nla_put_failure: + genlmsg_cancel(msg, hdr); +free_msg: + nlmsg_free(msg); + return -EMSGSIZE; +} + int nfc_genl_device_added(struct nfc_dev *dev) { struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return -ENOMEM; @@ -257,7 +321,7 @@ int nfc_genl_device_removed(struct nfc_dev *dev) struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return -ENOMEM; @@ -370,7 +434,7 @@ int nfc_genl_dep_link_up_event(struct nfc_dev *dev, u32 target_idx, pr_debug("DEP link is up\n"); - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); if (!msg) return -ENOMEM; @@ -409,7 +473,7 @@ int nfc_genl_dep_link_down_event(struct nfc_dev *dev) pr_debug("DEP link is down\n"); - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); if (!msg) return -ENOMEM; @@ -450,7 +514,7 @@ static int nfc_genl_get_device(struct sk_buff *skb, struct genl_info *info) if (!dev) return -ENODEV; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = 
nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) { rc = -ENOMEM; goto out_putdev; @@ -519,16 +583,25 @@ static int nfc_genl_start_poll(struct sk_buff *skb, struct genl_info *info) struct nfc_dev *dev; int rc; u32 idx; - u32 protocols; + u32 im_protocols = 0, tm_protocols = 0; pr_debug("Poll start\n"); if (!info->attrs[NFC_ATTR_DEVICE_INDEX] || - !info->attrs[NFC_ATTR_PROTOCOLS]) + ((!info->attrs[NFC_ATTR_IM_PROTOCOLS] && + !info->attrs[NFC_ATTR_PROTOCOLS]) && + !info->attrs[NFC_ATTR_TM_PROTOCOLS])) return -EINVAL; idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]); - protocols = nla_get_u32(info->attrs[NFC_ATTR_PROTOCOLS]); + + if (info->attrs[NFC_ATTR_TM_PROTOCOLS]) + tm_protocols = nla_get_u32(info->attrs[NFC_ATTR_TM_PROTOCOLS]); + + if (info->attrs[NFC_ATTR_IM_PROTOCOLS]) + im_protocols = nla_get_u32(info->attrs[NFC_ATTR_IM_PROTOCOLS]); + else if (info->attrs[NFC_ATTR_PROTOCOLS]) + im_protocols = nla_get_u32(info->attrs[NFC_ATTR_PROTOCOLS]); dev = nfc_get_device(idx); if (!dev) @@ -536,7 +609,7 @@ static int nfc_genl_start_poll(struct sk_buff *skb, struct genl_info *info) mutex_lock(&dev->genl_data.genl_data_mutex); - rc = nfc_start_poll(dev, protocols); + rc = nfc_start_poll(dev, im_protocols, tm_protocols); if (!rc) dev->genl_data.poll_req_pid = info->snd_pid; @@ -561,6 +634,15 @@ static int nfc_genl_stop_poll(struct sk_buff *skb, struct genl_info *info) if (!dev) return -ENODEV; + device_lock(&dev->dev); + + if (!dev->polling) { + device_unlock(&dev->dev); + return -EINVAL; + } + + device_unlock(&dev->dev); + mutex_lock(&dev->genl_data.genl_data_mutex); if (dev->genl_data.poll_req_pid != info->snd_pid) { diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h index 3dd4232ae664..c5e42b79a418 100644 --- a/net/nfc/nfc.h +++ b/net/nfc/nfc.h @@ -55,6 +55,7 @@ int nfc_llcp_register_device(struct nfc_dev *dev); void nfc_llcp_unregister_device(struct nfc_dev *dev); int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len); u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len); +int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb); int __init nfc_llcp_init(void); void nfc_llcp_exit(void); @@ -90,6 +91,12 @@ static inline u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *gb_len) return NULL; } +static inline int nfc_llcp_data_received(struct nfc_dev *dev, + struct sk_buff *skb) +{ + return 0; +} + static inline int nfc_llcp_init(void) { return 0; @@ -128,6 +135,9 @@ int nfc_genl_dep_link_up_event(struct nfc_dev *dev, u32 target_idx, u8 comm_mode, u8 rf_mode); int nfc_genl_dep_link_down_event(struct nfc_dev *dev); +int nfc_genl_tm_activated(struct nfc_dev *dev, u32 protocol); +int nfc_genl_tm_deactivated(struct nfc_dev *dev); + struct nfc_dev *nfc_get_device(unsigned int idx); static inline void nfc_put_device(struct nfc_dev *dev) @@ -158,7 +168,7 @@ int nfc_dev_up(struct nfc_dev *dev); int nfc_dev_down(struct nfc_dev *dev); -int nfc_start_poll(struct nfc_dev *dev, u32 protocols); +int nfc_start_poll(struct nfc_dev *dev, u32 im_protocols, u32 tm_protocols); int nfc_stop_poll(struct nfc_dev *dev); diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index 48badffaafc1..320fa0e6951a 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2012 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. 
* * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -325,6 +325,9 @@ static int sample(struct datapath *dp, struct sk_buff *skb, } } + if (!acts_list) + return 0; + return do_execute_actions(dp, skb, nla_data(acts_list), nla_len(acts_list), true); } diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index 2c74daa5aca5..d8277d29e710 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2012 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -263,14 +263,15 @@ err: static int queue_gso_packets(int dp_ifindex, struct sk_buff *skb, const struct dp_upcall_info *upcall_info) { + unsigned short gso_type = skb_shinfo(skb)->gso_type; struct dp_upcall_info later_info; struct sw_flow_key later_key; struct sk_buff *segs, *nskb; int err; segs = skb_gso_segment(skb, NETIF_F_SG | NETIF_F_HW_CSUM); - if (IS_ERR(skb)) - return PTR_ERR(skb); + if (IS_ERR(segs)) + return PTR_ERR(segs); /* Queue all of the segments. */ skb = segs; @@ -279,7 +280,7 @@ static int queue_gso_packets(int dp_ifindex, struct sk_buff *skb, if (err) break; - if (skb == segs && skb_shinfo(skb)->gso_type & SKB_GSO_UDP) { + if (skb == segs && gso_type & SKB_GSO_UDP) { /* The initial flow key extracted by ovs_flow_extract() * in this case is for a first fragment, so we need to * properly mark later fragments. @@ -1649,7 +1650,9 @@ static int ovs_vport_cmd_set(struct sk_buff *skb, struct genl_info *info) if (!err && a[OVS_VPORT_ATTR_OPTIONS]) err = ovs_vport_set_options(vport, a[OVS_VPORT_ATTR_OPTIONS]); - if (!err && a[OVS_VPORT_ATTR_UPCALL_PID]) + if (err) + goto exit_unlock; + if (a[OVS_VPORT_ATTR_UPCALL_PID]) vport->upcall_pid = nla_get_u32(a[OVS_VPORT_ATTR_UPCALL_PID]); reply = ovs_vport_cmd_build_info(vport, info->snd_pid, info->snd_seq, diff --git a/net/openvswitch/datapath.h b/net/openvswitch/datapath.h index c73370cc1f02..c1105c147531 100644 --- a/net/openvswitch/datapath.h +++ b/net/openvswitch/datapath.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/dp_notify.c b/net/openvswitch/dp_notify.c index 46736518c453..36dcee8fc84a 100644 --- a/net/openvswitch/dp_notify.c +++ b/net/openvswitch/dp_notify.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index 6d4d8097cf96..b7f38b161909 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2011 Nicira, Inc. 
* * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -182,7 +182,8 @@ void ovs_flow_used(struct sw_flow *flow, struct sk_buff *skb) { u8 tcp_flags = 0; - if (flow->key.eth.type == htons(ETH_P_IP) && + if ((flow->key.eth.type == htons(ETH_P_IP) || + flow->key.eth.type == htons(ETH_P_IPV6)) && flow->key.ip.proto == IPPROTO_TCP && likely(skb->len >= skb_transport_offset(skb) + sizeof(struct tcphdr))) { u8 *tcp = (u8 *)tcp_hdr(skb); diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h index 2747dc2c4ac1..9b75617ca4e0 100644 --- a/net/openvswitch/flow.h +++ b/net/openvswitch/flow.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2011 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c index b6b1d7daa3cb..4061b9ee07f7 100644 --- a/net/openvswitch/vport-internal_dev.c +++ b/net/openvswitch/vport-internal_dev.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -24,6 +24,9 @@ #include <linux/ethtool.h> #include <linux/skbuff.h> +#include <net/dst.h> +#include <net/xfrm.h> + #include "datapath.h" #include "vport-internal_dev.h" #include "vport-netdev.h" @@ -209,6 +212,11 @@ static int internal_dev_recv(struct vport *vport, struct sk_buff *skb) int len; len = skb->len; + + skb_dst_drop(skb); + nf_reset(skb); + secpath_reset(skb); + skb->dev = netdev; skb->pkt_type = PACKET_HOST; skb->protocol = eth_type_trans(skb, netdev); diff --git a/net/openvswitch/vport-internal_dev.h b/net/openvswitch/vport-internal_dev.h index 3454447c5f11..9a7d30ecc6a2 100644 --- a/net/openvswitch/vport-internal_dev.h +++ b/net/openvswitch/vport-internal_dev.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2011 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/vport-netdev.c b/net/openvswitch/vport-netdev.c index 3fd6c0d88e12..6ea3551cc78c 100644 --- a/net/openvswitch/vport-netdev.c +++ b/net/openvswitch/vport-netdev.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/vport-netdev.h b/net/openvswitch/vport-netdev.h index fd9b008a0e6e..f7072a25c604 100644 --- a/net/openvswitch/vport-netdev.h +++ b/net/openvswitch/vport-netdev.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2011 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c index 6c066ba25dc7..6140336e79d7 100644 --- a/net/openvswitch/vport.c +++ b/net/openvswitch/vport.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. 
* * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/openvswitch/vport.h b/net/openvswitch/vport.h index 19609629dabd..aac680ca2b06 100644 --- a/net/openvswitch/vport.h +++ b/net/openvswitch/vport.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2011 Nicira Networks. + * Copyright (c) 2007-2012 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index 0f661745df0f..ceaca7c134a0 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -531,6 +531,7 @@ static int prb_calc_retire_blk_tmo(struct packet_sock *po, unsigned int mbits = 0, msec = 0, div = 0, tmo = 0; struct ethtool_cmd ecmd; int err; + u32 speed; rtnl_lock(); dev = __dev_get_by_index(sock_net(&po->sk), po->ifindex); @@ -539,25 +540,18 @@ static int prb_calc_retire_blk_tmo(struct packet_sock *po, return DEFAULT_PRB_RETIRE_TOV; } err = __ethtool_get_settings(dev, &ecmd); + speed = ethtool_cmd_speed(&ecmd); rtnl_unlock(); if (!err) { - switch (ecmd.speed) { - case SPEED_10000: - msec = 1; - div = 10000/1000; - break; - case SPEED_1000: - msec = 1; - div = 1000/1000; - break; /* * If the link speed is so slow you don't really * need to worry about perf anyways */ - case SPEED_100: - case SPEED_10: - default: + if (speed < SPEED_1000 || speed == SPEED_UNKNOWN) { return DEFAULT_PRB_RETIRE_TOV; + } else { + msec = 1; + div = speed / 1000; } } @@ -592,7 +586,7 @@ static void init_prb_bdqc(struct packet_sock *po, p1->knxt_seq_num = 1; p1->pkbdq = pg_vec; pbd = (struct tpacket_block_desc *)pg_vec[0].buffer; - p1->pkblk_start = (char *)pg_vec[0].buffer; + p1->pkblk_start = pg_vec[0].buffer; p1->kblk_size = req_u->req3.tp_block_size; p1->knum_blocks = req_u->req3.tp_block_nr; p1->hdrlen = po->tp_hdrlen; @@ -824,8 +818,7 @@ static void prb_open_block(struct tpacket_kbdq_core *pkc1, h1->ts_first_pkt.ts_sec = ts.tv_sec; h1->ts_first_pkt.ts_nsec = ts.tv_nsec; pkc1->pkblk_start = (char *)pbd1; - pkc1->nxt_offset = (char *)(pkc1->pkblk_start + - BLK_PLUS_PRIV(pkc1->blk_sizeof_priv)); + pkc1->nxt_offset = pkc1->pkblk_start + BLK_PLUS_PRIV(pkc1->blk_sizeof_priv); BLOCK_O2FP(pbd1) = (__u32)BLK_PLUS_PRIV(pkc1->blk_sizeof_priv); BLOCK_O2PRIV(pbd1) = BLK_HDR_LEN; pbd1->version = pkc1->version; @@ -1018,7 +1011,7 @@ static void *__packet_lookup_frame_in_block(struct packet_sock *po, struct tpacket_block_desc *pbd; char *curr, *end; - pkc = GET_PBDQC_FROM_RB(((struct packet_ring_buffer *)&po->rx_ring)); + pkc = GET_PBDQC_FROM_RB(&po->rx_ring); pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc); /* Queue is frozen when user space is lagging behind */ @@ -1044,7 +1037,7 @@ static void *__packet_lookup_frame_in_block(struct packet_sock *po, smp_mb(); curr = pkc->nxt_offset; pkc->skb = skb; - end = (char *) ((char *)pbd + pkc->kblk_size); + end = (char *)pbd + pkc->kblk_size; /* first try the current block */ if (curr+TOTAL_PKT_LEN_INCL_ALIGN(len) < end) { @@ -1476,7 +1469,7 @@ static int packet_sendmsg_spkt(struct kiocb *iocb, struct socket *sock, * Find the device first to size check it */ - saddr->spkt_device[13] = 0; + saddr->spkt_device[sizeof(saddr->spkt_device) - 1] = 0; retry: rcu_read_lock(); dev = dev_get_by_name_rcu(sock_net(sk), saddr->spkt_device); diff --git a/net/rds/page.c b/net/rds/page.c index 2499cd108421..9005a2c920ee 100644 --- a/net/rds/page.c +++ b/net/rds/page.c @@ -74,11 +74,12 @@ int 
rds_page_copy_user(struct page *page, unsigned long offset, } EXPORT_SYMBOL_GPL(rds_page_copy_user); -/* - * Message allocation uses this to build up regions of a message. +/** + * rds_page_remainder_alloc - build up regions of a message. * - * @bytes - the number of bytes needed. - * @gfp - the waiting behaviour of the allocation + * @scat: Scatter list for message + * @bytes: the number of bytes needed. + * @gfp: the waiting behaviour of the allocation * * @gfp is always ored with __GFP_HIGHMEM. Callers must be prepared to * kmap the pages, etc. diff --git a/net/rds/recv.c b/net/rds/recv.c index 5c6e9f132026..9f0f17cf6bf9 100644 --- a/net/rds/recv.c +++ b/net/rds/recv.c @@ -410,6 +410,8 @@ int rds_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, rdsdebug("size %zu flags 0x%x timeo %ld\n", size, msg_flags, timeo); + msg->msg_namelen = 0; + if (msg_flags & MSG_OOB) goto out; @@ -485,6 +487,7 @@ int rds_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, sin->sin_port = inc->i_hdr.h_sport; sin->sin_addr.s_addr = inc->i_saddr; memset(sin->sin_zero, 0, sizeof(sin->sin_zero)); + msg->msg_namelen = sizeof(*sin); } break; } diff --git a/net/rfkill/core.c b/net/rfkill/core.c index f974961754ca..752b72360ebc 100644 --- a/net/rfkill/core.c +++ b/net/rfkill/core.c @@ -325,7 +325,7 @@ static void __rfkill_switch_all(const enum rfkill_type type, bool blocked) rfkill_global_states[type].cur = blocked; list_for_each_entry(rfkill, &rfkill_list, node) { - if (rfkill->type != type) + if (rfkill->type != type && type != RFKILL_TYPE_ALL) continue; rfkill_set_block(rfkill, blocked); diff --git a/net/rxrpc/ar-error.c b/net/rxrpc/ar-error.c index 5d6b572a6704..a9206087b4d7 100644 --- a/net/rxrpc/ar-error.c +++ b/net/rxrpc/ar-error.c @@ -81,10 +81,6 @@ void rxrpc_UDP_error_report(struct sock *sk) _net("I/F MTU %u", mtu); } - /* ip_rt_frag_needed() may have eaten the info */ - if (mtu == 0) - mtu = ntohs(icmp_hdr(skb)->un.frag.mtu); - if (mtu == 0) { /* they didn't give us a size, estimate one */ if (mtu > 1500) { diff --git a/net/rxrpc/ar-output.c b/net/rxrpc/ar-output.c index 16ae88762d00..e1ac183d50bb 100644 --- a/net/rxrpc/ar-output.c +++ b/net/rxrpc/ar-output.c @@ -242,7 +242,7 @@ int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg, EXPORT_SYMBOL(rxrpc_kernel_send_data); -/* +/** * rxrpc_kernel_abort_call - Allow a kernel service to abort a call * @call: The call to be aborted * @abort_code: The abort code to stick into the ABORT packet diff --git a/net/sched/Kconfig b/net/sched/Kconfig index e7a8976bf25c..62fb51face8a 100644 --- a/net/sched/Kconfig +++ b/net/sched/Kconfig @@ -507,6 +507,26 @@ config NET_EMATCH_TEXT To compile this code as a module, choose M here: the module will be called em_text. +config NET_EMATCH_CANID + tristate "CAN Identifier" + depends on NET_EMATCH && CAN + ---help--- + Say Y here if you want to be able to classify CAN frames based + on CAN Identifier. + + To compile this code as a module, choose M here: the + module will be called em_canid. + +config NET_EMATCH_IPSET + tristate "IPset" + depends on NET_EMATCH && IP_SET + ---help--- + Say Y here if you want to be able to classify packets based on + ipset membership. + + To compile this code as a module, choose M here: the + module will be called em_ipset. 
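As an aside on the new CAN Identifier ematch: em_canid (added below) matches frames against an array of struct can_filter rules. For 29-bit EFF IDs it walks the raw rules with the usual (can_id ^ id) & can_mask test, while 11-bit SFF IDs are pre-expanded into a 2048-entry bitmap so that matching is a single test_bit(). A small user-space sketch of the per-rule test, assuming only the struct can_filter definition from <linux/can.h>; the helper name is illustrative, not part of the patch:

#include <stdbool.h>
#include <linux/can.h>	/* struct can_filter, canid_t, CAN_SFF_MASK */

/*
 * Illustrative only: does a received CAN ID satisfy one filter rule?
 * This mirrors the EFF path in em_canid_match() below.
 */
static bool can_filter_matches(const struct can_filter *f, canid_t rx_id)
{
	return ((f->can_id ^ rx_id) & f->can_mask) == 0;
}

/*
 * Example: { .can_id = 0x123, .can_mask = CAN_SFF_MASK } matches only the
 * SFF ID 0x123, while a mask of 0 matches every frame, which is why the
 * SFF bitmap is simply filled for that case in em_canid_sff_match_add().
 */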
+ config NET_CLS_ACT bool "Actions" ---help--- diff --git a/net/sched/Makefile b/net/sched/Makefile index 5940a1992f0d..978cbf004e80 100644 --- a/net/sched/Makefile +++ b/net/sched/Makefile @@ -55,3 +55,5 @@ obj-$(CONFIG_NET_EMATCH_NBYTE) += em_nbyte.o obj-$(CONFIG_NET_EMATCH_U32) += em_u32.o obj-$(CONFIG_NET_EMATCH_META) += em_meta.o obj-$(CONFIG_NET_EMATCH_TEXT) += em_text.o +obj-$(CONFIG_NET_EMATCH_CANID) += em_canid.o +obj-$(CONFIG_NET_EMATCH_IPSET) += em_ipset.o diff --git a/net/sched/act_api.c b/net/sched/act_api.c index 5cfb160df063..e3d2c78cb52c 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -652,27 +652,27 @@ tca_get_fill(struct sk_buff *skb, struct tc_action *a, u32 pid, u32 seq, unsigned char *b = skb_tail_pointer(skb); struct nlattr *nest; - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*t), flags); - - t = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*t), flags); + if (!nlh) + goto out_nlmsg_trim; + t = nlmsg_data(nlh); t->tca_family = AF_UNSPEC; t->tca__pad1 = 0; t->tca__pad2 = 0; nest = nla_nest_start(skb, TCA_ACT_TAB); if (nest == NULL) - goto nla_put_failure; + goto out_nlmsg_trim; if (tcf_action_dump(skb, a, bind, ref) < 0) - goto nla_put_failure; + goto out_nlmsg_trim; nla_nest_end(skb, nest); nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; -nla_put_failure: -nlmsg_failure: +out_nlmsg_trim: nlmsg_trim(skb, b); return -1; } @@ -799,19 +799,21 @@ static int tca_action_flush(struct net *net, struct nlattr *nla, if (a->ops == NULL) goto err_out; - nlh = NLMSG_PUT(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof(*t)); - t = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof(*t), 0); + if (!nlh) + goto out_module_put; + t = nlmsg_data(nlh); t->tca_family = AF_UNSPEC; t->tca__pad1 = 0; t->tca__pad2 = 0; nest = nla_nest_start(skb, TCA_ACT_TAB); if (nest == NULL) - goto nla_put_failure; + goto out_module_put; err = a->ops->walk(skb, &dcb, RTM_DELACTION, a); if (err < 0) - goto nla_put_failure; + goto out_module_put; if (err == 0) goto noflush_out; @@ -828,8 +830,7 @@ static int tca_action_flush(struct net *net, struct nlattr *nla, return err; -nla_put_failure: -nlmsg_failure: +out_module_put: module_put(a->ops->owner); err_out: noflush_out: @@ -919,18 +920,20 @@ static int tcf_add_notify(struct net *net, struct tc_action *a, b = skb_tail_pointer(skb); - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*t), flags); - t = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*t), flags); + if (!nlh) + goto out_kfree_skb; + t = nlmsg_data(nlh); t->tca_family = AF_UNSPEC; t->tca__pad1 = 0; t->tca__pad2 = 0; nest = nla_nest_start(skb, TCA_ACT_TAB); if (nest == NULL) - goto nla_put_failure; + goto out_kfree_skb; if (tcf_action_dump(skb, a, 0, 0) < 0) - goto nla_put_failure; + goto out_kfree_skb; nla_nest_end(skb, nest); @@ -942,8 +945,7 @@ static int tcf_add_notify(struct net *net, struct tc_action *a, err = 0; return err; -nla_put_failure: -nlmsg_failure: +out_kfree_skb: kfree_skb(skb); return -1; } @@ -1062,7 +1064,7 @@ tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb) struct tc_action_ops *a_o; struct tc_action a; int ret = 0; - struct tcamsg *t = (struct tcamsg *) NLMSG_DATA(cb->nlh); + struct tcamsg *t = (struct tcamsg *) nlmsg_data(cb->nlh); struct nlattr *kind = find_dump_kind(cb->nlh); if (kind == NULL) { @@ -1080,23 +1082,25 @@ tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb) if (a_o->walk == NULL) { WARN(1, "tc_dump_action: %s !capable of dumping table\n", a_o->kind); 
- goto nla_put_failure; + goto out_module_put; } - nlh = NLMSG_PUT(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, - cb->nlh->nlmsg_type, sizeof(*t)); - t = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, + cb->nlh->nlmsg_type, sizeof(*t), 0); + if (!nlh) + goto out_module_put; + t = nlmsg_data(nlh); t->tca_family = AF_UNSPEC; t->tca__pad1 = 0; t->tca__pad2 = 0; nest = nla_nest_start(skb, TCA_ACT_TAB); if (nest == NULL) - goto nla_put_failure; + goto out_module_put; ret = a_o->walk(skb, cb, RTM_GETACTION, &a); if (ret < 0) - goto nla_put_failure; + goto out_module_put; if (ret > 0) { nla_nest_end(skb, nest); @@ -1110,8 +1114,7 @@ tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb) module_put(a_o->owner); return skb->len; -nla_put_failure: -nlmsg_failure: +out_module_put: module_put(a_o->owner); nlmsg_trim(skb, b); return skb->len; diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index f452f696b4b3..6dd1131f2ec1 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -140,7 +140,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, void *arg) int tp_created = 0; replay: - t = NLMSG_DATA(n); + t = nlmsg_data(n); protocol = TC_H_MIN(t->tcm_info); prio = TC_H_MAJ(t->tcm_info); nprio = prio; @@ -349,8 +349,10 @@ static int tcf_fill_node(struct sk_buff *skb, struct tcf_proto *tp, struct nlmsghdr *nlh; unsigned char *b = skb_tail_pointer(skb); - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); - tcm = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*tcm), flags); + if (!nlh) + goto out_nlmsg_trim; + tcm = nlmsg_data(nlh); tcm->tcm_family = AF_UNSPEC; tcm->tcm__pad1 = 0; tcm->tcm__pad2 = 0; @@ -368,7 +370,7 @@ static int tcf_fill_node(struct sk_buff *skb, struct tcf_proto *tp, nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; -nlmsg_failure: +out_nlmsg_trim: nla_put_failure: nlmsg_trim(skb, b); return -1; @@ -418,7 +420,7 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb) struct net_device *dev; struct Qdisc *q; struct tcf_proto *tp, **chain; - struct tcmsg *tcm = (struct tcmsg *)NLMSG_DATA(cb->nlh); + struct tcmsg *tcm = nlmsg_data(cb->nlh); unsigned long cl = 0; const struct Qdisc_class_ops *cops; struct tcf_dump_args arg; diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c index 36fec4227401..44f405cb9aaf 100644 --- a/net/sched/cls_route.c +++ b/net/sched/cls_route.c @@ -143,7 +143,7 @@ static int route4_classify(struct sk_buff *skb, const struct tcf_proto *tp, if (head == NULL) goto old_method; - iif = ((struct rtable *)dst)->rt_iif; + iif = inet_iif(skb); h = route4_fastmap_hash(id, iif); if (id == head->fastmap[h].id && diff --git a/net/sched/em_canid.c b/net/sched/em_canid.c new file mode 100644 index 000000000000..bfd34e4c1afc --- /dev/null +++ b/net/sched/em_canid.c @@ -0,0 +1,240 @@ +/* + * em_canid.c Ematch rule to match CAN frames according to their CAN IDs + * + * This program is free software; you can distribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. 
+ * + * Idea: Oliver Hartkopp <oliver.hartkopp@volkswagen.de> + * Copyright: (c) 2011 Czech Technical University in Prague + * (c) 2011 Volkswagen Group Research + * Authors: Michal Sojka <sojkam1@fel.cvut.cz> + * Pavel Pisa <pisa@cmp.felk.cvut.cz> + * Rostislav Lisovy <lisovy@gmail.cz> + * Funded by: Volkswagen Group Research + */ + +#include <linux/slab.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/string.h> +#include <linux/skbuff.h> +#include <net/pkt_cls.h> +#include <linux/can.h> + +#define EM_CAN_RULES_MAX 500 + +struct canid_match { + /* For each SFF CAN ID (11 bit) there is one record in this bitfield */ + DECLARE_BITMAP(match_sff, (1 << CAN_SFF_ID_BITS)); + + int rules_count; + int sff_rules_count; + int eff_rules_count; + + /* + * Raw rules copied from netlink message; Used for sending + * information to userspace (when 'tc filter show' is invoked) + * AND when matching EFF frames + */ + struct can_filter rules_raw[]; +}; + +/** + * em_canid_get_id() - Extracts Can ID out of the sk_buff structure. + */ +static canid_t em_canid_get_id(struct sk_buff *skb) +{ + /* CAN ID is stored within the data field */ + struct can_frame *cf = (struct can_frame *)skb->data; + + return cf->can_id; +} + +static void em_canid_sff_match_add(struct canid_match *cm, u32 can_id, + u32 can_mask) +{ + int i; + + /* + * Limit can_mask and can_id to SFF range to + * protect against write after end of array + */ + can_mask &= CAN_SFF_MASK; + can_id &= can_mask; + + /* Single frame */ + if (can_mask == CAN_SFF_MASK) { + set_bit(can_id, cm->match_sff); + return; + } + + /* All frames */ + if (can_mask == 0) { + bitmap_fill(cm->match_sff, (1 << CAN_SFF_ID_BITS)); + return; + } + + /* + * Individual frame filter. + * Add record (set bit to 1) for each ID that + * conforms particular rule + */ + for (i = 0; i < (1 << CAN_SFF_ID_BITS); i++) { + if ((i & can_mask) == can_id) + set_bit(i, cm->match_sff); + } +} + +static inline struct canid_match *em_canid_priv(struct tcf_ematch *m) +{ + return (struct canid_match *)m->data; +} + +static int em_canid_match(struct sk_buff *skb, struct tcf_ematch *m, + struct tcf_pkt_info *info) +{ + struct canid_match *cm = em_canid_priv(m); + canid_t can_id; + int match = 0; + int i; + const struct can_filter *lp; + + can_id = em_canid_get_id(skb); + + if (can_id & CAN_EFF_FLAG) { + for (i = 0, lp = cm->rules_raw; + i < cm->eff_rules_count; i++, lp++) { + if (!(((lp->can_id ^ can_id) & lp->can_mask))) { + match = 1; + break; + } + } + } else { /* SFF */ + can_id &= CAN_SFF_MASK; + match = (test_bit(can_id, cm->match_sff) ? 1 : 0); + } + + return match; +} + +static int em_canid_change(struct tcf_proto *tp, void *data, int len, + struct tcf_ematch *m) +{ + struct can_filter *conf = data; /* Array with rules */ + struct canid_match *cm; + struct canid_match *cm_old = (struct canid_match *)m->data; + int i; + + if (!len) + return -EINVAL; + + if (len % sizeof(struct can_filter)) + return -EINVAL; + + if (len > sizeof(struct can_filter) * EM_CAN_RULES_MAX) + return -EINVAL; + + cm = kzalloc(sizeof(struct canid_match) + len, GFP_KERNEL); + if (!cm) + return -ENOMEM; + + cm->rules_count = len / sizeof(struct can_filter); + + /* + * We need two for() loops for copying rules into two contiguous + * areas in rules_raw to process all eff rules with a simple loop. + * NB: The configuration interface supports sff and eff rules. + * We do not support filters here that match for the same can_id + * provided in a SFF and EFF frame (e.g. 
0x123 / 0x80000123). + * For this (unusual case) two filters have to be specified. The + * SFF/EFF separation is done with the CAN_EFF_FLAG in the can_id. + */ + + /* Fill rules_raw with EFF rules first */ + for (i = 0; i < cm->rules_count; i++) { + if (conf[i].can_id & CAN_EFF_FLAG) { + memcpy(cm->rules_raw + cm->eff_rules_count, + &conf[i], + sizeof(struct can_filter)); + + cm->eff_rules_count++; + } + } + + /* append SFF frame rules */ + for (i = 0; i < cm->rules_count; i++) { + if (!(conf[i].can_id & CAN_EFF_FLAG)) { + memcpy(cm->rules_raw + + cm->eff_rules_count + + cm->sff_rules_count, + &conf[i], sizeof(struct can_filter)); + + cm->sff_rules_count++; + + em_canid_sff_match_add(cm, + conf[i].can_id, conf[i].can_mask); + } + } + + m->datalen = sizeof(struct canid_match) + len; + m->data = (unsigned long)cm; + + if (cm_old != NULL) { + pr_err("canid: Configuring an existing ematch!\n"); + kfree(cm_old); + } + + return 0; +} + +static void em_canid_destroy(struct tcf_proto *tp, struct tcf_ematch *m) +{ + struct canid_match *cm = em_canid_priv(m); + + kfree(cm); +} + +static int em_canid_dump(struct sk_buff *skb, struct tcf_ematch *m) +{ + struct canid_match *cm = em_canid_priv(m); + + /* + * When configuring this ematch 'rules_count' is set not to exceed + * 'rules_raw' array size + */ + if (nla_put_nohdr(skb, sizeof(struct can_filter) * cm->rules_count, + &cm->rules_raw) < 0) + return -EMSGSIZE; + + return 0; +} + +static struct tcf_ematch_ops em_canid_ops = { + .kind = TCF_EM_CANID, + .change = em_canid_change, + .match = em_canid_match, + .destroy = em_canid_destroy, + .dump = em_canid_dump, + .owner = THIS_MODULE, + .link = LIST_HEAD_INIT(em_canid_ops.link) +}; + +static int __init init_em_canid(void) +{ + return tcf_em_register(&em_canid_ops); +} + +static void __exit exit_em_canid(void) +{ + tcf_em_unregister(&em_canid_ops); +} + +MODULE_LICENSE("GPL"); + +module_init(init_em_canid); +module_exit(exit_em_canid); + +MODULE_ALIAS_TCF_EMATCH(TCF_EM_CANID); diff --git a/net/sched/em_ipset.c b/net/sched/em_ipset.c new file mode 100644 index 000000000000..3130320997e2 --- /dev/null +++ b/net/sched/em_ipset.c @@ -0,0 +1,135 @@ +/* + * net/sched/em_ipset.c ipset ematch + * + * Copyright (c) 2012 Florian Westphal <fw@strlen.de> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * version 2 as published by the Free Software Foundation. 
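[Editor's note] The SFF side of em_canid above trades memory for per-packet speed: every id/mask rule is pre-expanded at configuration time into a 2048-bit bitmap (one bit per possible 11-bit CAN ID), so matching an SFF frame is a single test_bit(). A small standalone illustration of that strategy in plain userspace C; the constants mirror CAN_SFF_ID_BITS and CAN_SFF_MASK, and this is a sketch of the idea, not the kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SFF_ID_BITS 11
#define SFF_MASK    0x7FFu
#define NIDS        (1u << SFF_ID_BITS)

static uint8_t match_sff[NIDS / 8];	/* one bit per possible SFF CAN ID */

static void sff_rule_add(uint32_t can_id, uint32_t can_mask)
{
	can_mask &= SFF_MASK;
	can_id &= can_mask;
	/* pre-expand the rule: mark every ID the id/mask pair accepts */
	for (uint32_t i = 0; i < NIDS; i++)
		if ((i & can_mask) == can_id)
			match_sff[i / 8] |= 1u << (i % 8);
}

static bool sff_match(uint32_t can_id)
{
	can_id &= SFF_MASK;
	return match_sff[can_id / 8] & (1u << (can_id % 8));
}

int main(void)
{
	sff_rule_add(0x120, 0x7F0);	/* accept IDs 0x120..0x12F */
	printf("%d %d\n", sff_match(0x123), sff_match(0x200));	/* prints "1 0" */
	return 0;
}
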
+ */ + +#include <linux/gfp.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/string.h> +#include <linux/skbuff.h> +#include <linux/netfilter/xt_set.h> +#include <linux/ipv6.h> +#include <net/ip.h> +#include <net/pkt_cls.h> + +static int em_ipset_change(struct tcf_proto *tp, void *data, int data_len, + struct tcf_ematch *em) +{ + struct xt_set_info *set = data; + ip_set_id_t index; + + if (data_len != sizeof(*set)) + return -EINVAL; + + index = ip_set_nfnl_get_byindex(set->index); + if (index == IPSET_INVALID_ID) + return -ENOENT; + + em->datalen = sizeof(*set); + em->data = (unsigned long)kmemdup(data, em->datalen, GFP_KERNEL); + if (em->data) + return 0; + + ip_set_nfnl_put(index); + return -ENOMEM; +} + +static void em_ipset_destroy(struct tcf_proto *p, struct tcf_ematch *em) +{ + const struct xt_set_info *set = (const void *) em->data; + if (set) { + ip_set_nfnl_put(set->index); + kfree((void *) em->data); + } +} + +static int em_ipset_match(struct sk_buff *skb, struct tcf_ematch *em, + struct tcf_pkt_info *info) +{ + struct ip_set_adt_opt opt; + struct xt_action_param acpar; + const struct xt_set_info *set = (const void *) em->data; + struct net_device *dev, *indev = NULL; + int ret, network_offset; + + switch (skb->protocol) { + case htons(ETH_P_IP): + acpar.family = NFPROTO_IPV4; + if (!pskb_network_may_pull(skb, sizeof(struct iphdr))) + return 0; + acpar.thoff = ip_hdrlen(skb); + break; + case htons(ETH_P_IPV6): + acpar.family = NFPROTO_IPV6; + if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) + return 0; + /* doesn't call ipv6_find_hdr() because ipset doesn't use thoff, yet */ + acpar.thoff = sizeof(struct ipv6hdr); + break; + default: + return 0; + } + + acpar.hooknum = 0; + + opt.family = acpar.family; + opt.dim = set->dim; + opt.flags = set->flags; + opt.cmdflags = 0; + opt.timeout = ~0u; + + network_offset = skb_network_offset(skb); + skb_pull(skb, network_offset); + + dev = skb->dev; + + rcu_read_lock(); + + if (dev && skb->skb_iif) + indev = dev_get_by_index_rcu(dev_net(dev), skb->skb_iif); + + acpar.in = indev ? 
indev : dev; + acpar.out = dev; + + ret = ip_set_test(set->index, skb, &acpar, &opt); + + rcu_read_unlock(); + + skb_push(skb, network_offset); + return ret; +} + +static struct tcf_ematch_ops em_ipset_ops = { + .kind = TCF_EM_IPSET, + .change = em_ipset_change, + .destroy = em_ipset_destroy, + .match = em_ipset_match, + .owner = THIS_MODULE, + .link = LIST_HEAD_INIT(em_ipset_ops.link) +}; + +static int __init init_em_ipset(void) +{ + return tcf_em_register(&em_ipset_ops); +} + +static void __exit exit_em_ipset(void) +{ + tcf_em_unregister(&em_ipset_ops); +} + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Florian Westphal <fw@strlen.de>"); +MODULE_DESCRIPTION("TC extended match for IP sets"); + +module_init(init_em_ipset); +module_exit(exit_em_ipset); + +MODULE_ALIAS_TCF_EMATCH(TCF_EM_IPSET); diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c index 4790c696cbce..4ab6e3325573 100644 --- a/net/sched/em_meta.c +++ b/net/sched/em_meta.c @@ -264,7 +264,7 @@ META_COLLECTOR(int_rtiif) if (unlikely(skb_rtable(skb) == NULL)) *err = -1; else - dst->value = skb_rtable(skb)->rt_iif; + dst->value = inet_iif(skb); } /************************************************************************** diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 085ce53d570a..a08b4ab3e421 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -973,7 +973,7 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w) static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n, void *arg) { struct net *net = sock_net(skb->sk); - struct tcmsg *tcm = NLMSG_DATA(n); + struct tcmsg *tcm = nlmsg_data(n); struct nlattr *tca[TCA_MAX + 1]; struct net_device *dev; u32 clid = tcm->tcm_parent; @@ -1046,7 +1046,7 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n, void *arg) replay: /* Reinit, just in case something touches this. 
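[Editor's note] The em_ipset match function above brackets the ip_set_test() call with skb_pull()/skb_push() of the network offset, which suggests the ipset core, like the netfilter xt_set match it is borrowed from, works relative to skb->data sitting at the network header, while a classifier sees the frame from the link layer. A generic kernel-style sketch of that bracket pattern; the callback and its argument are hypothetical:

#include <linux/skbuff.h>

/* Temporarily move skb->data to the network header, run 'fn', and
 * restore the original data pointer before returning its result. */
static int call_at_network_header(struct sk_buff *skb,
				  int (*fn)(struct sk_buff *skb, void *arg),
				  void *arg)
{
	int off = skb_network_offset(skb);
	int ret;

	skb_pull(skb, off);	/* skb->data now points at the IP header */
	ret = fn(skb, arg);
	skb_push(skb, off);	/* undo the pull for later users of the skb */
	return ret;
}
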
*/ - tcm = NLMSG_DATA(n); + tcm = nlmsg_data(n); clid = tcm->tcm_parent; q = p = NULL; @@ -1193,8 +1193,10 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid, struct gnet_dump d; struct qdisc_size_table *stab; - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); - tcm = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*tcm), flags); + if (!nlh) + goto out_nlmsg_trim; + tcm = nlmsg_data(nlh); tcm->tcm_family = AF_UNSPEC; tcm->tcm__pad1 = 0; tcm->tcm__pad2 = 0; @@ -1230,7 +1232,7 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid, nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; -nlmsg_failure: +out_nlmsg_trim: nla_put_failure: nlmsg_trim(skb, b); return -1; @@ -1366,7 +1368,7 @@ done: static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n, void *arg) { struct net *net = sock_net(skb->sk); - struct tcmsg *tcm = NLMSG_DATA(n); + struct tcmsg *tcm = nlmsg_data(n); struct nlattr *tca[TCA_MAX + 1]; struct net_device *dev; struct Qdisc *q = NULL; @@ -1498,8 +1500,10 @@ static int tc_fill_tclass(struct sk_buff *skb, struct Qdisc *q, struct gnet_dump d; const struct Qdisc_class_ops *cl_ops = q->ops->cl_ops; - nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); - tcm = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, event, sizeof(*tcm), flags); + if (!nlh) + goto out_nlmsg_trim; + tcm = nlmsg_data(nlh); tcm->tcm_family = AF_UNSPEC; tcm->tcm__pad1 = 0; tcm->tcm__pad2 = 0; @@ -1525,7 +1529,7 @@ static int tc_fill_tclass(struct sk_buff *skb, struct Qdisc *q, nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; -nlmsg_failure: +out_nlmsg_trim: nla_put_failure: nlmsg_trim(skb, b); return -1; @@ -1616,7 +1620,7 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb, static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb) { - struct tcmsg *tcm = (struct tcmsg *)NLMSG_DATA(cb->nlh); + struct tcmsg *tcm = nlmsg_data(cb->nlh); struct net *net = sock_net(skb->sk); struct netdev_queue *dev_queue; struct net_device *dev; diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c index c412ad0d0308..298c0ddfb57e 100644 --- a/net/sched/sch_netem.c +++ b/net/sched/sch_netem.c @@ -380,7 +380,14 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch) return NET_XMIT_SUCCESS | __NET_XMIT_BYPASS; } - skb_orphan(skb); + /* If a delay is expected, orphan the skb. (orphaning usually takes + * place at TX completion time, so _before_ the link transit delay) + * Ideally, this orphaning should be done after the rate limiting + * module, because this breaks TCP Small Queue, and other mechanisms + * based on socket sk_wmem_alloc. 
+ */ + if (q->latency || q->jitter) + skb_orphan(skb); /* * If we need to duplicate packet, then re-insert at top of the diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c index ca0c29695d51..474167162947 100644 --- a/net/sched/sch_teql.c +++ b/net/sched/sch_teql.c @@ -67,7 +67,6 @@ struct teql_master { struct teql_sched_data { struct Qdisc *next; struct teql_master *m; - struct neighbour *ncache; struct sk_buff_head q; }; @@ -134,7 +133,6 @@ teql_reset(struct Qdisc *sch) skb_queue_purge(&dat->q); sch->q.qlen = 0; - teql_neigh_release(xchg(&dat->ncache, NULL)); } static void @@ -166,7 +164,6 @@ teql_destroy(struct Qdisc *sch) } } skb_queue_purge(&dat->q); - teql_neigh_release(xchg(&dat->ncache, NULL)); break; } @@ -225,21 +222,25 @@ static int teql_qdisc_init(struct Qdisc *sch, struct nlattr *opt) static int __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, struct net_device *dev, struct netdev_queue *txq, - struct neighbour *mn) + struct dst_entry *dst) { - struct teql_sched_data *q = qdisc_priv(txq->qdisc); - struct neighbour *n = q->ncache; + struct neighbour *n; + int err = 0; - if (mn->tbl == NULL) - return -EINVAL; - if (n && n->tbl == mn->tbl && - memcmp(n->primary_key, mn->primary_key, mn->tbl->key_len) == 0) { - atomic_inc(&n->refcnt); - } else { - n = __neigh_lookup_errno(mn->tbl, mn->primary_key, dev); - if (IS_ERR(n)) - return PTR_ERR(n); + n = dst_neigh_lookup_skb(dst, skb); + if (!n) + return -ENOENT; + + if (dst->dev != dev) { + struct neighbour *mn; + + mn = __neigh_lookup_errno(n->tbl, n->primary_key, dev); + neigh_release(n); + if (IS_ERR(mn)) + return PTR_ERR(mn); + n = mn; } + if (neigh_event_send(n, skb_res) == 0) { int err; char haddr[MAX_ADDR_LEN]; @@ -248,15 +249,13 @@ __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, err = dev_hard_header(skb, dev, ntohs(skb->protocol), haddr, NULL, skb->len); - if (err < 0) { - neigh_release(n); - return -EINVAL; - } - teql_neigh_release(xchg(&q->ncache, n)); - return 0; + if (err < 0) + err = -EINVAL; + } else { + err = (skb_res == NULL) ? -EAGAIN : 1; } neigh_release(n); - return (skb_res == NULL) ? -EAGAIN : 1; + return err; } static inline int teql_resolve(struct sk_buff *skb, @@ -265,7 +264,6 @@ static inline int teql_resolve(struct sk_buff *skb, struct netdev_queue *txq) { struct dst_entry *dst = skb_dst(skb); - struct neighbour *mn; int res; if (txq->qdisc == &noop_qdisc) @@ -275,8 +273,7 @@ static inline int teql_resolve(struct sk_buff *skb, return 0; rcu_read_lock(); - mn = dst_get_neighbour_noref(dst); - res = mn ? __teql_resolve(skb, skb_res, dev, txq, mn) : 0; + res = __teql_resolve(skb, skb_res, dev, txq, dst); rcu_read_unlock(); return res; diff --git a/net/sctp/associola.c b/net/sctp/associola.c index b16517ee1aaf..ebaef3ed6065 100644 --- a/net/sctp/associola.c +++ b/net/sctp/associola.c @@ -124,6 +124,8 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a * socket values. */ asoc->max_retrans = sp->assocparams.sasoc_asocmaxrxt; + asoc->pf_retrans = sctp_pf_retrans; + asoc->rto_initial = msecs_to_jiffies(sp->rtoinfo.srto_initial); asoc->rto_max = msecs_to_jiffies(sp->rtoinfo.srto_max); asoc->rto_min = msecs_to_jiffies(sp->rtoinfo.srto_min); @@ -686,6 +688,9 @@ struct sctp_transport *sctp_assoc_add_peer(struct sctp_association *asoc, /* Set the path max_retrans. 
*/ peer->pathmaxrxt = asoc->pathmaxrxt; + /* And the partial failure retrnas threshold */ + peer->pf_retrans = asoc->pf_retrans; + /* Initialize the peer's SACK delay timeout based on the * association configured value. */ @@ -841,6 +846,7 @@ void sctp_assoc_control_transport(struct sctp_association *asoc, struct sctp_ulpevent *event; struct sockaddr_storage addr; int spc_state = 0; + bool ulp_notify = true; /* Record the transition on the transport. */ switch (command) { @@ -854,6 +860,14 @@ void sctp_assoc_control_transport(struct sctp_association *asoc, spc_state = SCTP_ADDR_CONFIRMED; else spc_state = SCTP_ADDR_AVAILABLE; + /* Don't inform ULP about transition from PF to + * active state and set cwnd to 1, see SCTP + * Quick failover draft section 5.1, point 5 + */ + if (transport->state == SCTP_PF) { + ulp_notify = false; + transport->cwnd = 1; + } transport->state = SCTP_ACTIVE; break; @@ -872,6 +886,11 @@ void sctp_assoc_control_transport(struct sctp_association *asoc, spc_state = SCTP_ADDR_UNREACHABLE; break; + case SCTP_TRANSPORT_PF: + transport->state = SCTP_PF; + ulp_notify = false; + break; + default: return; } @@ -879,12 +898,15 @@ void sctp_assoc_control_transport(struct sctp_association *asoc, /* Generate and send a SCTP_PEER_ADDR_CHANGE notification to the * user. */ - memset(&addr, 0, sizeof(struct sockaddr_storage)); - memcpy(&addr, &transport->ipaddr, transport->af_specific->sockaddr_len); - event = sctp_ulpevent_make_peer_addr_change(asoc, &addr, - 0, spc_state, error, GFP_ATOMIC); - if (event) - sctp_ulpq_tail_event(&asoc->ulpq, event); + if (ulp_notify) { + memset(&addr, 0, sizeof(struct sockaddr_storage)); + memcpy(&addr, &transport->ipaddr, + transport->af_specific->sockaddr_len); + event = sctp_ulpevent_make_peer_addr_change(asoc, &addr, + 0, spc_state, error, GFP_ATOMIC); + if (event) + sctp_ulpq_tail_event(&asoc->ulpq, event); + } /* Select new active and retran paths. */ @@ -900,7 +922,8 @@ void sctp_assoc_control_transport(struct sctp_association *asoc, transports) { if ((t->state == SCTP_INACTIVE) || - (t->state == SCTP_UNCONFIRMED)) + (t->state == SCTP_UNCONFIRMED) || + (t->state == SCTP_PF)) continue; if (!first || t->last_time_heard > first->last_time_heard) { second = first; @@ -1360,7 +1383,7 @@ struct sctp_transport *sctp_assoc_choose_alter_transport( /* Update the association's pmtu and frag_point by going through all the * transports. This routine is called when a transport's PMTU has changed. */ -void sctp_assoc_sync_pmtu(struct sctp_association *asoc) +void sctp_assoc_sync_pmtu(struct sock *sk, struct sctp_association *asoc) { struct sctp_transport *t; __u32 pmtu = 0; @@ -1372,7 +1395,7 @@ void sctp_assoc_sync_pmtu(struct sctp_association *asoc) list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) { if (t->pmtu_pending && t->dst) { - sctp_transport_update_pmtu(t, dst_mtu(t->dst)); + sctp_transport_update_pmtu(sk, t, dst_mtu(t->dst)); t->pmtu_pending = 0; } if (!pmtu || (t->pathmtu < pmtu)) diff --git a/net/sctp/input.c b/net/sctp/input.c index 8b9b6790a3df..e64d5210ed13 100644 --- a/net/sctp/input.c +++ b/net/sctp/input.c @@ -408,10 +408,10 @@ void sctp_icmp_frag_needed(struct sock *sk, struct sctp_association *asoc, if (t->param_flags & SPP_PMTUD_ENABLE) { /* Update transports view of the MTU */ - sctp_transport_update_pmtu(t, pmtu); + sctp_transport_update_pmtu(sk, t, pmtu); /* Update association pmtu. */ - sctp_assoc_sync_pmtu(asoc); + sctp_assoc_sync_pmtu(sk, asoc); } /* Retransmit with the new pmtu setting. 
@@ -423,6 +423,18 @@ void sctp_icmp_frag_needed(struct sock *sk, struct sctp_association *asoc, sctp_retransmit(&asoc->outqueue, t, SCTP_RTXR_PMTUD); } +void sctp_icmp_redirect(struct sock *sk, struct sctp_transport *t, + struct sk_buff *skb) +{ + struct dst_entry *dst; + + if (!t) + return; + dst = sctp_transport_dst_check(t); + if (dst) + dst->ops->redirect(dst, sk, skb); +} + /* * SCTP Implementer's Guide, 2.37 ICMP handling procedures * @@ -628,6 +640,10 @@ void sctp_v4_err(struct sk_buff *skb, __u32 info) err = EHOSTUNREACH; break; + case ICMP_REDIRECT: + sctp_icmp_redirect(sk, transport, skb); + err = 0; + break; default: goto out_unlock; } diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index 91f479121c55..ed7139ea7978 100644 --- a/net/sctp/ipv6.c +++ b/net/sctp/ipv6.c @@ -185,6 +185,9 @@ SCTP_STATIC void sctp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, goto out_unlock; } break; + case NDISC_REDIRECT: + sctp_icmp_redirect(sk, transport, skb); + break; default: break; } diff --git a/net/sctp/output.c b/net/sctp/output.c index 6ae47acaaec6..838e18b4d7ea 100644 --- a/net/sctp/output.c +++ b/net/sctp/output.c @@ -64,6 +64,8 @@ #include <net/sctp/checksum.h> /* Forward declarations for private helpers. */ +static sctp_xmit_t __sctp_packet_append_chunk(struct sctp_packet *packet, + struct sctp_chunk *chunk); static sctp_xmit_t sctp_packet_can_append_data(struct sctp_packet *packet, struct sctp_chunk *chunk); static void sctp_packet_append_data(struct sctp_packet *packet, @@ -224,7 +226,10 @@ static sctp_xmit_t sctp_packet_bundle_auth(struct sctp_packet *pkt, if (!auth) return retval; - retval = sctp_packet_append_chunk(pkt, auth); + retval = __sctp_packet_append_chunk(pkt, auth); + + if (retval != SCTP_XMIT_OK) + sctp_chunk_free(auth); return retval; } @@ -256,48 +261,31 @@ static sctp_xmit_t sctp_packet_bundle_sack(struct sctp_packet *pkt, asoc->a_rwnd = asoc->rwnd; sack = sctp_make_sack(asoc); if (sack) { - retval = sctp_packet_append_chunk(pkt, sack); + retval = __sctp_packet_append_chunk(pkt, sack); + if (retval != SCTP_XMIT_OK) { + sctp_chunk_free(sack); + goto out; + } asoc->peer.sack_needed = 0; if (del_timer(timer)) sctp_association_put(asoc); } } } +out: return retval; } + /* Append a chunk to the offered packet reporting back any inability to do * so. */ -sctp_xmit_t sctp_packet_append_chunk(struct sctp_packet *packet, - struct sctp_chunk *chunk) +static sctp_xmit_t __sctp_packet_append_chunk(struct sctp_packet *packet, + struct sctp_chunk *chunk) { sctp_xmit_t retval = SCTP_XMIT_OK; __u16 chunk_len = WORD_ROUND(ntohs(chunk->chunk_hdr->length)); - SCTP_DEBUG_PRINTK("%s: packet:%p chunk:%p\n", __func__, packet, - chunk); - - /* Data chunks are special. Before seeing what else we can - * bundle into this packet, check to see if we are allowed to - * send this DATA. - */ - if (sctp_chunk_is_data(chunk)) { - retval = sctp_packet_can_append_data(packet, chunk); - if (retval != SCTP_XMIT_OK) - goto finish; - } - - /* Try to bundle AUTH chunk */ - retval = sctp_packet_bundle_auth(packet, chunk); - if (retval != SCTP_XMIT_OK) - goto finish; - - /* Try to bundle SACK chunk */ - retval = sctp_packet_bundle_sack(packet, chunk); - if (retval != SCTP_XMIT_OK) - goto finish; - /* Check to see if this chunk will fit into the packet */ retval = sctp_packet_will_fit(packet, chunk, chunk_len); if (retval != SCTP_XMIT_OK) @@ -339,6 +327,43 @@ finish: return retval; } +/* Append a chunk to the offered packet reporting back any inability to do + * so. 
+ */ +sctp_xmit_t sctp_packet_append_chunk(struct sctp_packet *packet, + struct sctp_chunk *chunk) +{ + sctp_xmit_t retval = SCTP_XMIT_OK; + + SCTP_DEBUG_PRINTK("%s: packet:%p chunk:%p\n", __func__, packet, + chunk); + + /* Data chunks are special. Before seeing what else we can + * bundle into this packet, check to see if we are allowed to + * send this DATA. + */ + if (sctp_chunk_is_data(chunk)) { + retval = sctp_packet_can_append_data(packet, chunk); + if (retval != SCTP_XMIT_OK) + goto finish; + } + + /* Try to bundle AUTH chunk */ + retval = sctp_packet_bundle_auth(packet, chunk); + if (retval != SCTP_XMIT_OK) + goto finish; + + /* Try to bundle SACK chunk */ + retval = sctp_packet_bundle_sack(packet, chunk); + if (retval != SCTP_XMIT_OK) + goto finish; + + retval = __sctp_packet_append_chunk(packet, chunk); + +finish: + return retval; +} + /* All packets are sent to the network through this function from * sctp_outq_tail(). * @@ -385,7 +410,7 @@ int sctp_packet_transmit(struct sctp_packet *packet) if (!sctp_transport_dst_check(tp)) { sctp_transport_route(tp, NULL, sctp_sk(sk)); if (asoc && (asoc->param_flags & SPP_PMTUD_ENABLE)) { - sctp_assoc_sync_pmtu(asoc); + sctp_assoc_sync_pmtu(sk, asoc); } } dst = dst_clone(tp->dst); diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c index a0fa19f5650c..e7aa177c9522 100644 --- a/net/sctp/outqueue.c +++ b/net/sctp/outqueue.c @@ -792,7 +792,8 @@ static int sctp_outq_flush(struct sctp_outq *q, int rtx_timeout) if (!new_transport) new_transport = asoc->peer.active_path; } else if ((new_transport->state == SCTP_INACTIVE) || - (new_transport->state == SCTP_UNCONFIRMED)) { + (new_transport->state == SCTP_UNCONFIRMED) || + (new_transport->state == SCTP_PF)) { /* If the chunk is Heartbeat or Heartbeat Ack, * send it to chunk->transport, even if it's * inactive. @@ -987,7 +988,8 @@ static int sctp_outq_flush(struct sctp_outq *q, int rtx_timeout) new_transport = chunk->transport; if (!new_transport || ((new_transport->state == SCTP_INACTIVE) || - (new_transport->state == SCTP_UNCONFIRMED))) + (new_transport->state == SCTP_UNCONFIRMED) || + (new_transport->state == SCTP_PF))) new_transport = asoc->peer.active_path; if (new_transport->state == SCTP_UNCONFIRMED) continue; diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c index 9c90811d1134..1f89c4e69645 100644 --- a/net/sctp/protocol.c +++ b/net/sctp/protocol.c @@ -568,7 +568,7 @@ static void sctp_v4_get_saddr(struct sctp_sock *sk, /* What interface did this skb arrive on? */ static int sctp_v4_skb_iif(const struct sk_buff *skb) { - return skb_rtable(skb)->rt_iif; + return inet_iif(skb); } /* Was this packet marked by Explicit Congestion Notification? */ diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c index b6de71efb140..479a70ef6ff8 100644 --- a/net/sctp/sm_make_chunk.c +++ b/net/sctp/sm_make_chunk.c @@ -132,7 +132,7 @@ void sctp_init_cause(struct sctp_chunk *chunk, __be16 cause_code, * abort chunk. 
Differs from sctp_init_cause in that it won't oops * if there isn't enough space in the op error chunk */ -int sctp_init_cause_fixed(struct sctp_chunk *chunk, __be16 cause_code, +static int sctp_init_cause_fixed(struct sctp_chunk *chunk, __be16 cause_code, size_t paylen) { sctp_errhdr_t err; diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c index 8716da1a8592..fe99628e1257 100644 --- a/net/sctp/sm_sideeffect.c +++ b/net/sctp/sm_sideeffect.c @@ -76,6 +76,8 @@ static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype, sctp_cmd_seq_t *commands, gfp_t gfp); +static void sctp_cmd_hb_timer_update(sctp_cmd_seq_t *cmds, + struct sctp_transport *t); /******************************************************************** * Helper functions ********************************************************************/ @@ -470,7 +472,8 @@ sctp_timer_event_t *sctp_timer_events[SCTP_NUM_TIMEOUT_TYPES] = { * notification SHOULD be sent to the upper layer. * */ -static void sctp_do_8_2_transport_strike(struct sctp_association *asoc, +static void sctp_do_8_2_transport_strike(sctp_cmd_seq_t *commands, + struct sctp_association *asoc, struct sctp_transport *transport, int is_hb) { @@ -495,6 +498,23 @@ static void sctp_do_8_2_transport_strike(struct sctp_association *asoc, transport->error_count++; } + /* If the transport error count is greater than the pf_retrans + * threshold, and less than pathmaxrtx, then mark this transport + * as Partially Failed, ee SCTP Quick Failover Draft, secon 5.1, + * point 1 + */ + if ((transport->state != SCTP_PF) && + (asoc->pf_retrans < transport->pathmaxrxt) && + (transport->error_count > asoc->pf_retrans)) { + + sctp_assoc_control_transport(asoc, transport, + SCTP_TRANSPORT_PF, + 0); + + /* Update the hb timer to resend a heartbeat every rto */ + sctp_cmd_hb_timer_update(commands, transport); + } + if (transport->state != SCTP_INACTIVE && (transport->error_count > transport->pathmaxrxt)) { SCTP_DEBUG_PRINTK_IPADDR("transport_strike:association %p", @@ -699,6 +719,10 @@ static void sctp_cmd_transport_on(sctp_cmd_seq_t *cmds, SCTP_HEARTBEAT_SUCCESS); } + if (t->state == SCTP_PF) + sctp_assoc_control_transport(asoc, t, SCTP_TRANSPORT_UP, + SCTP_HEARTBEAT_SUCCESS); + /* The receiver of the HEARTBEAT ACK should also perform an * RTT measurement for that destination transport address * using the time value carried in the HEARTBEAT ACK chunk. @@ -1565,8 +1589,8 @@ static int sctp_cmd_interpreter(sctp_event_t event_type, case SCTP_CMD_STRIKE: /* Mark one strike against a transport. */ - sctp_do_8_2_transport_strike(asoc, cmd->obj.transport, - 0); + sctp_do_8_2_transport_strike(commands, asoc, + cmd->obj.transport, 0); break; case SCTP_CMD_TRANSPORT_IDLE: @@ -1576,7 +1600,8 @@ static int sctp_cmd_interpreter(sctp_event_t event_type, case SCTP_CMD_TRANSPORT_HB_SENT: t = cmd->obj.transport; - sctp_do_8_2_transport_strike(asoc, t, 1); + sctp_do_8_2_transport_strike(commands, asoc, + t, 1); t->hb_sent = 1; break; diff --git a/net/sctp/socket.c b/net/sctp/socket.c index 31c7bfcd9b58..5e259817a7f3 100644 --- a/net/sctp/socket.c +++ b/net/sctp/socket.c @@ -1859,7 +1859,7 @@ SCTP_STATIC int sctp_sendmsg(struct kiocb *iocb, struct sock *sk, } if (asoc->pmtu_pending) - sctp_assoc_pending_pmtu(asoc); + sctp_assoc_pending_pmtu(sk, asoc); /* If fragmentation is disabled and the message length exceeds the * association fragmentation point, return EMSGSIZE. 
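[Editor's note] The strike logic above layers a new threshold under pathmaxrxt: once error_count exceeds pf_retrans (while the path is not yet past pathmaxrxt) the transport enters the PF state and its heartbeat timer is tightened; exceeding pathmaxrxt still marks it inactive. Reduced to a plain decision function, as an illustration of the thresholds only and ignoring the state the transport starts from:

enum t_state { T_ACTIVE, T_PF, T_INACTIVE };

/* 0 < pf_retrans < pathmaxrxt is the configuration the checks above
 * assume; the real code additionally skips the PF transition when the
 * transport is already in PF. */
enum t_state strike_state(unsigned int error_count,
			  unsigned int pf_retrans,
			  unsigned int pathmaxrxt)
{
	if (error_count > pathmaxrxt)
		return T_INACTIVE;	/* path declared failed */
	if (pf_retrans < pathmaxrxt && error_count > pf_retrans)
		return T_PF;		/* potentially failed: heartbeat every RTO */
	return T_ACTIVE;
}
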
The I-D @@ -2373,7 +2373,7 @@ static int sctp_apply_peer_addr_params(struct sctp_paddrparams *params, if ((params->spp_flags & SPP_PMTUD_DISABLE) && params->spp_pathmtu) { if (trans) { trans->pathmtu = params->spp_pathmtu; - sctp_assoc_sync_pmtu(asoc); + sctp_assoc_sync_pmtu(sctp_opt2sk(sp), asoc); } else if (asoc) { asoc->pathmtu = params->spp_pathmtu; sctp_frag_point(asoc, params->spp_pathmtu); @@ -2390,7 +2390,7 @@ static int sctp_apply_peer_addr_params(struct sctp_paddrparams *params, (trans->param_flags & ~SPP_PMTUD) | pmtud_change; if (update) { sctp_transport_pmtu(trans, sctp_opt2sk(sp)); - sctp_assoc_sync_pmtu(asoc); + sctp_assoc_sync_pmtu(sctp_opt2sk(sp), asoc); } } else if (asoc) { asoc->param_flags = @@ -3478,6 +3478,56 @@ static int sctp_setsockopt_auto_asconf(struct sock *sk, char __user *optval, } +/* + * SCTP_PEER_ADDR_THLDS + * + * This option allows us to alter the partially failed threshold for one or all + * transports in an association. See Section 6.1 of: + * http://www.ietf.org/id/draft-nishida-tsvwg-sctp-failover-05.txt + */ +static int sctp_setsockopt_paddr_thresholds(struct sock *sk, + char __user *optval, + unsigned int optlen) +{ + struct sctp_paddrthlds val; + struct sctp_transport *trans; + struct sctp_association *asoc; + + if (optlen < sizeof(struct sctp_paddrthlds)) + return -EINVAL; + if (copy_from_user(&val, (struct sctp_paddrthlds __user *)optval, + sizeof(struct sctp_paddrthlds))) + return -EFAULT; + + + if (sctp_is_any(sk, (const union sctp_addr *)&val.spt_address)) { + asoc = sctp_id2assoc(sk, val.spt_assoc_id); + if (!asoc) + return -ENOENT; + list_for_each_entry(trans, &asoc->peer.transport_addr_list, + transports) { + if (val.spt_pathmaxrxt) + trans->pathmaxrxt = val.spt_pathmaxrxt; + trans->pf_retrans = val.spt_pathpfthld; + } + + if (val.spt_pathmaxrxt) + asoc->pathmaxrxt = val.spt_pathmaxrxt; + asoc->pf_retrans = val.spt_pathpfthld; + } else { + trans = sctp_addr_id2transport(sk, &val.spt_address, + val.spt_assoc_id); + if (!trans) + return -ENOENT; + + if (val.spt_pathmaxrxt) + trans->pathmaxrxt = val.spt_pathmaxrxt; + trans->pf_retrans = val.spt_pathpfthld; + } + + return 0; +} + /* API 6.2 setsockopt(), getsockopt() * * Applications use setsockopt() and getsockopt() to set or retrieve @@ -3627,6 +3677,9 @@ SCTP_STATIC int sctp_setsockopt(struct sock *sk, int level, int optname, case SCTP_AUTO_ASCONF: retval = sctp_setsockopt_auto_asconf(sk, optval, optlen); break; + case SCTP_PEER_ADDR_THLDS: + retval = sctp_setsockopt_paddr_thresholds(sk, optval, optlen); + break; default: retval = -ENOPROTOOPT; break; @@ -5498,6 +5551,51 @@ static int sctp_getsockopt_assoc_ids(struct sock *sk, int len, return 0; } +/* + * SCTP_PEER_ADDR_THLDS + * + * This option allows us to fetch the partially failed threshold for one or all + * transports in an association. 
See Section 6.1 of: + * http://www.ietf.org/id/draft-nishida-tsvwg-sctp-failover-05.txt + */ +static int sctp_getsockopt_paddr_thresholds(struct sock *sk, + char __user *optval, + int len, + int __user *optlen) +{ + struct sctp_paddrthlds val; + struct sctp_transport *trans; + struct sctp_association *asoc; + + if (len < sizeof(struct sctp_paddrthlds)) + return -EINVAL; + len = sizeof(struct sctp_paddrthlds); + if (copy_from_user(&val, (struct sctp_paddrthlds __user *)optval, len)) + return -EFAULT; + + if (sctp_is_any(sk, (const union sctp_addr *)&val.spt_address)) { + asoc = sctp_id2assoc(sk, val.spt_assoc_id); + if (!asoc) + return -ENOENT; + + val.spt_pathpfthld = asoc->pf_retrans; + val.spt_pathmaxrxt = asoc->pathmaxrxt; + } else { + trans = sctp_addr_id2transport(sk, &val.spt_address, + val.spt_assoc_id); + if (!trans) + return -ENOENT; + + val.spt_pathmaxrxt = trans->pathmaxrxt; + val.spt_pathpfthld = trans->pf_retrans; + } + + if (put_user(len, optlen) || copy_to_user(optval, &val, len)) + return -EFAULT; + + return 0; +} + SCTP_STATIC int sctp_getsockopt(struct sock *sk, int level, int optname, char __user *optval, int __user *optlen) { @@ -5636,6 +5734,9 @@ SCTP_STATIC int sctp_getsockopt(struct sock *sk, int level, int optname, case SCTP_AUTO_ASCONF: retval = sctp_getsockopt_auto_asconf(sk, len, optval, optlen); break; + case SCTP_PEER_ADDR_THLDS: + retval = sctp_getsockopt_paddr_thresholds(sk, optval, len, optlen); + break; default: retval = -ENOPROTOOPT; break; diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c index e5fe639c89e7..2b2bfe933ff1 100644 --- a/net/sctp/sysctl.c +++ b/net/sctp/sysctl.c @@ -141,6 +141,15 @@ static ctl_table sctp_table[] = { .extra2 = &int_max }, { + .procname = "pf_retrans", + .data = &sctp_pf_retrans, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = &zero, + .extra2 = &int_max + }, + { .procname = "max_init_retransmits", .data = &sctp_max_retrans_init, .maxlen = sizeof(int), diff --git a/net/sctp/transport.c b/net/sctp/transport.c index 1dcceb6e0ce6..c97472b248a2 100644 --- a/net/sctp/transport.c +++ b/net/sctp/transport.c @@ -87,6 +87,7 @@ static struct sctp_transport *sctp_transport_init(struct sctp_transport *peer, /* Initialize the default path max_retrans. 
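[Editor's note] Userspace drives the new option through the sctp_paddrthlds structure whose fields the handlers above read (spt_assoc_id, spt_address, spt_pathmaxrxt, spt_pathpfthld). A hedged sketch of a setter, assuming headers that already export the structure and SCTP_PEER_ADDR_THLDS (they are introduced by this series) and a hypothetical already-connected one-to-one SCTP socket in fd:

#include <string.h>
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

/* Set pf_retrans = 2 and pathmaxrxt = 5 for every peer address of the
 * association on 'fd'; an INADDR_ANY spt_address selects all transports. */
static int set_pf_thresholds(int fd)
{
	struct sctp_paddrthlds th;
	struct sockaddr_in *any = (struct sockaddr_in *)&th.spt_address;

	memset(&th, 0, sizeof(th));
	th.spt_assoc_id = 0;		/* ignored on a one-to-one socket */
	any->sin_family = AF_INET;	/* zero address == wildcard */
	th.spt_pathmaxrxt = 5;		/* Path.Max.Retrans */
	th.spt_pathpfthld = 2;		/* new partial-failure threshold */

	if (setsockopt(fd, IPPROTO_SCTP, SCTP_PEER_ADDR_THLDS,
		       &th, sizeof(th)) < 0) {
		perror("setsockopt(SCTP_PEER_ADDR_THLDS)");
		return -1;
	}
	return 0;
}
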
*/ peer->pathmaxrxt = sctp_max_retrans_path; + peer->pf_retrans = sctp_pf_retrans; INIT_LIST_HEAD(&peer->transmitted); INIT_LIST_HEAD(&peer->send_ready); @@ -216,7 +217,7 @@ void sctp_transport_set_owner(struct sctp_transport *transport, void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk) { /* If we don't have a fresh route, look one up */ - if (!transport->dst || transport->dst->obsolete > 1) { + if (!transport->dst || transport->dst->obsolete) { dst_release(transport->dst); transport->af_specific->get_dst(transport, &transport->saddr, &transport->fl, sk); @@ -228,7 +229,7 @@ void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk) transport->pathmtu = SCTP_DEFAULT_MAXSEGMENT; } -void sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu) +void sctp_transport_update_pmtu(struct sock *sk, struct sctp_transport *t, u32 pmtu) { struct dst_entry *dst; @@ -245,8 +246,16 @@ void sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu) } dst = sctp_transport_dst_check(t); - if (dst) - dst->ops->update_pmtu(dst, pmtu); + if (!dst) + t->af_specific->get_dst(t, &t->saddr, &t->fl, sk); + + if (dst) { + dst->ops->update_pmtu(dst, sk, NULL, pmtu); + + dst = sctp_transport_dst_check(t); + if (!dst) + t->af_specific->get_dst(t, &t->saddr, &t->fl, sk); + } } /* Caches the dst entry and source address for a transport's destination @@ -587,7 +596,8 @@ unsigned long sctp_transport_timeout(struct sctp_transport *t) { unsigned long timeout; timeout = t->rto + sctp_jitter(t->rto); - if (t->state != SCTP_UNCONFIRMED) + if ((t->state != SCTP_UNCONFIRMED) && + (t->state != SCTP_PF)) timeout += t->hbinterval; timeout += jiffies; return timeout; diff --git a/net/socket.c b/net/socket.c index 6e0ccc09b313..dfe5b66c97e0 100644 --- a/net/socket.c +++ b/net/socket.c @@ -398,7 +398,7 @@ int sock_map_fd(struct socket *sock, int flags) } EXPORT_SYMBOL(sock_map_fd); -static struct socket *sock_from_file(struct file *file, int *err) +struct socket *sock_from_file(struct file *file, int *err) { if (file->f_op == &socket_file_ops) return file->private_data; /* set in sock_map_fd */ @@ -406,6 +406,7 @@ static struct socket *sock_from_file(struct file *file, int *err) *err = -ENOTSOCK; return NULL; } +EXPORT_SYMBOL(sock_from_file); /** * sockfd_lookup - Go from a file number to its socket slot @@ -522,6 +523,9 @@ void sock_release(struct socket *sock) if (rcu_dereference_protected(sock->wq, 1)->fasync_list) printk(KERN_ERR "sock_release: fasync list not empty!\n"); + if (test_bit(SOCK_EXTERNALLY_ALLOCATED, &sock->flags)) + return; + this_cpu_sub(sockets_in_use, 1); if (!sock->file) { iput(SOCK_INODE(sock)); @@ -551,8 +555,6 @@ static inline int __sock_sendmsg_nosec(struct kiocb *iocb, struct socket *sock, sock_update_classid(sock->sk); - sock_update_netprioidx(sock->sk); - si->sock = sock; si->scm = NULL; si->msg = msg; diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c index 31def68a0f6e..5a3d675d2f2f 100644 --- a/net/sunrpc/backchannel_rqst.c +++ b/net/sunrpc/backchannel_rqst.c @@ -176,13 +176,14 @@ out_free: } EXPORT_SYMBOL_GPL(xprt_setup_backchannel); -/* - * Destroys the backchannel preallocated structures. +/** + * xprt_destroy_backchannel - Destroys the backchannel preallocated structures. 
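[Editor's note] sock_from_file() is un-static'd and exported above (alongside the SOCK_EXTERNALLY_ALLOCATED early return in sock_release()), which lets other kernel code map a struct file back to its socket without going through a file descriptor. A minimal sketch of a caller, using only the signature shown in the hunk; the wrapper itself is hypothetical:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/net.h>

/* Return the socket behind 'file', or an error pointer; sock_from_file()
 * sets *err to -ENOTSOCK when the file is not a socket. */
static struct socket *socket_of(struct file *file)
{
	int err;
	struct socket *sock = sock_from_file(file, &err);

	return sock ? sock : ERR_PTR(err);
}
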
+ * @xprt: the transport holding the preallocated strucures + * @max_reqs the maximum number of preallocated structures to destroy + * * Since these structures may have been allocated by multiple calls * to xprt_setup_backchannel, we only destroy up to the maximum number * of reqs specified by the caller. - * @xprt: the transport holding the preallocated strucures - * @max_reqs the maximum number of preallocated structures to destroy */ void xprt_destroy_backchannel(struct rpc_xprt *xprt, unsigned int max_reqs) { diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c index f56f045778ae..00eb859b7de5 100644 --- a/net/sunrpc/clnt.c +++ b/net/sunrpc/clnt.c @@ -385,7 +385,7 @@ out_no_rpciod: return ERR_PTR(err); } -/* +/** * rpc_create - create an RPC client and transport with one call * @args: rpc_clnt create argument structure * diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c index 2777fa896645..4d0129203733 100644 --- a/net/sunrpc/svcauth_unix.c +++ b/net/sunrpc/svcauth_unix.c @@ -104,23 +104,9 @@ static void ip_map_put(struct kref *kref) kfree(im); } -#if IP_HASHBITS == 8 -/* hash_long on a 64 bit machine is currently REALLY BAD for - * IP addresses in reverse-endian (i.e. on a little-endian machine). - * So use a trivial but reliable hash instead - */ -static inline int hash_ip(__be32 ip) -{ - int hash = (__force u32)ip ^ ((__force u32)ip>>16); - return (hash ^ (hash>>8)) & 0xff; -} -#endif -static inline int hash_ip6(struct in6_addr ip) +static inline int hash_ip6(const struct in6_addr *ip) { - return (hash_ip(ip.s6_addr32[0]) ^ - hash_ip(ip.s6_addr32[1]) ^ - hash_ip(ip.s6_addr32[2]) ^ - hash_ip(ip.s6_addr32[3])); + return hash_32(ipv6_addr_hash(ip), IP_HASHBITS); } static int ip_map_match(struct cache_head *corig, struct cache_head *cnew) { @@ -301,7 +287,7 @@ static struct ip_map *__ip_map_lookup(struct cache_detail *cd, char *class, ip.m_addr = *addr; ch = sunrpc_cache_lookup(cd, &ip.h, hash_str(class, IP_HASHBITS) ^ - hash_ip6(*addr)); + hash_ip6(addr)); if (ch) return container_of(ch, struct ip_map, h); @@ -331,7 +317,7 @@ static int __ip_map_update(struct cache_detail *cd, struct ip_map *ipm, ip.h.expiry_time = expiry; ch = sunrpc_cache_update(cd, &ip.h, &ipm->h, hash_str(ipm->m_class, IP_HASHBITS) ^ - hash_ip6(ipm->m_addr)); + hash_ip6(&ipm->m_addr)); if (!ch) return -ENOMEM; cache_put(ch, cd); diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index a6de09de5d21..18bc130255a7 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -43,6 +43,7 @@ #include <net/tcp_states.h> #include <asm/uaccess.h> #include <asm/ioctls.h> +#include <trace/events/skb.h> #include <linux/sunrpc/types.h> #include <linux/sunrpc/clnt.h> @@ -619,6 +620,8 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp) if (!svc_udp_get_dest_address(rqstp, cmh)) { net_warn_ratelimited("svc: received unknown control message %d/%d; dropping RPC reply datagram\n", cmh->cmsg_level, cmh->cmsg_type); +out_free: + trace_kfree_skb(skb, svc_udp_recvfrom); skb_free_datagram_locked(svsk->sk_sk, skb); return 0; } @@ -630,8 +633,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp) if (csum_partial_copy_to_xdr(&rqstp->rq_arg, skb)) { local_bh_enable(); /* checksum error */ - skb_free_datagram_locked(svsk->sk_sk, skb); - return 0; + goto out_free; } local_bh_enable(); skb_free_datagram_locked(svsk->sk_sk, skb); @@ -640,10 +642,8 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp) rqstp->rq_arg.head[0].iov_base = skb->data + sizeof(struct udphdr); rqstp->rq_arg.head[0].iov_len = len; - if 
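[Editor's note] The svcauth_unix hunk above replaces the open-coded per-word hash with ipv6_addr_hash() (an XOR fold of the address's four 32-bit words) fed into the generic hash_32() to get an IP_HASHBITS-bit bucket. A standalone illustration of the same two steps; the multiplier below is the kernel's 32-bit golden-ratio constant of that era and should be read as an example value, and byte order is ignored for simplicity:

#include <stdint.h>
#include <stdio.h>

#define IP_HASHBITS 8	/* bucket bits used by the sunrpc ip_map cache */

/* XOR-fold the four words (what ipv6_addr_hash() does), then spread the
 * fold over IP_HASHBITS bits the way hash_32() does: multiply by a
 * golden-ratio prime and keep the top bits. */
static unsigned int hash_ip6_example(const uint32_t addr[4])
{
	uint32_t fold = addr[0] ^ addr[1] ^ addr[2] ^ addr[3];

	return (uint32_t)(fold * 0x9e370001u) >> (32 - IP_HASHBITS);
}

int main(void)
{
	uint32_t loopback[4] = { 0, 0, 0, 1 };	/* ::1, host byte order here */

	printf("bucket %u\n", hash_ip6_example(loopback));
	return 0;
}
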
(skb_checksum_complete(skb)) { - skb_free_datagram_locked(svsk->sk_sk, skb); - return 0; - } + if (skb_checksum_complete(skb)) + goto out_free; rqstp->rq_xprt_ctxt = skb; } diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index fddcccfcdf76..0cf165580d8d 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -180,7 +180,9 @@ EXPORT_SYMBOL_GPL(xdr_inline_pages); /* * Helper routines for doing 'memmove' like operations on a struct xdr_buf - * + */ + +/** * _shift_data_right_pages * @pages: vector of pages containing both the source and dest memory area. * @pgto_base: page vector address of destination @@ -242,7 +244,7 @@ _shift_data_right_pages(struct page **pages, size_t pgto_base, } while ((len -= copy) != 0); } -/* +/** * _copy_to_pages * @pages: array of pages * @pgbase: page vector address of destination @@ -286,7 +288,7 @@ _copy_to_pages(struct page **pages, size_t pgbase, const char *p, size_t len) flush_dcache_page(*pgto); } -/* +/** * _copy_from_pages * @p: pointer to destination * @pages: array of pages @@ -326,7 +328,7 @@ _copy_from_pages(char *p, struct page **pages, size_t pgbase, size_t len) } EXPORT_SYMBOL_GPL(_copy_from_pages); -/* +/** * xdr_shrink_bufhead * @buf: xdr_buf * @len: bytes to remove from buf->head[0] @@ -399,7 +401,7 @@ xdr_shrink_bufhead(struct xdr_buf *buf, size_t len) buf->len = buf->buflen; } -/* +/** * xdr_shrink_pagelen * @buf: xdr_buf * @len: bytes to remove from buf->pages diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c index 3c83035cdaa9..a5a402a7d21f 100644 --- a/net/sunrpc/xprt.c +++ b/net/sunrpc/xprt.c @@ -531,7 +531,7 @@ void xprt_set_retrans_timeout_def(struct rpc_task *task) } EXPORT_SYMBOL_GPL(xprt_set_retrans_timeout_def); -/* +/** * xprt_set_retrans_timeout_rtt - set a request's retransmit timeout * @task: task whose timeout is to be set * diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c index 890b03f8d877..62d0dac8f780 100644 --- a/net/sunrpc/xprtsock.c +++ b/net/sunrpc/xprtsock.c @@ -1014,9 +1014,6 @@ static void xs_udp_data_ready(struct sock *sk, int len) UDPX_INC_STATS_BH(sk, UDP_MIB_INDATAGRAMS); - /* Something worked... */ - dst_confirm(skb_dst(skb)); - xprt_adjust_cwnd(task, copied); xprt_complete_rqst(task, copied); diff --git a/net/tipc/Kconfig b/net/tipc/Kconfig index 2c5954b85933..585460180ffb 100644 --- a/net/tipc/Kconfig +++ b/net/tipc/Kconfig @@ -41,29 +41,4 @@ config TIPC_PORTS Setting this to a smaller value saves some memory, setting it to higher allows for more ports. -config TIPC_LOG - int "Size of log buffer" - depends on TIPC_ADVANCED - range 0 32768 - default "0" - help - Size (in bytes) of TIPC's internal log buffer, which records the - occurrence of significant events. Can range from 0 to 32768 bytes; - default is 0. - - There is no need to enable the log buffer unless the node will be - managed remotely via TIPC. - -config TIPC_DEBUG - bool "Enable debugging support" - default n - help - Saying Y here enables TIPC debugging capabilities used by developers. - Most users do not need to bother; if unsure, just say N. - - Enabling debugging support causes TIPC to display data about its - internal state when certain abnormal conditions occur. It also - makes it easy for developers to capture additional information of - interest using the dbg() or msg_dbg() macros. 
- endif # TIPC diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c index 2625f5ebe3e8..e4e6d8cd47e6 100644 --- a/net/tipc/bcast.c +++ b/net/tipc/bcast.c @@ -162,7 +162,7 @@ static void bclink_update_last_sent(struct tipc_node *node, u32 seqno) } -/* +/** * tipc_bclink_retransmit_to - get most recent node to request retransmission * * Called with bc_lock locked @@ -270,7 +270,7 @@ exit: spin_unlock_bh(&bc_lock); } -/* +/** * tipc_bclink_update_link_state - update broadcast link state * * tipc_net_lock and node lock set @@ -330,7 +330,7 @@ void tipc_bclink_update_link_state(struct tipc_node *n_ptr, u32 last_sent) } } -/* +/** * bclink_peek_nack - monitor retransmission requests sent by other nodes * * Delay any upcoming NACK by this node if another node has already @@ -381,7 +381,7 @@ exit: return res; } -/* +/** * bclink_accept_pkt - accept an incoming, in-sequence broadcast packet * * Called with both sending node's lock and bc_lock taken. @@ -406,7 +406,7 @@ static void bclink_accept_pkt(struct tipc_node *node, u32 seqno) } } -/* +/** * tipc_bclink_recv_pkt - receive a broadcast packet, and deliver upwards * * tipc_net_lock is read_locked, no other locks set @@ -701,48 +701,43 @@ void tipc_bcbearer_sort(void) int tipc_bclink_stats(char *buf, const u32 buf_size) { - struct print_buf pb; + int ret; + struct tipc_stats *s; if (!bcl) return 0; - tipc_printbuf_init(&pb, buf, buf_size); - spin_lock_bh(&bc_lock); - tipc_printf(&pb, "Link <%s>\n" - " Window:%u packets\n", - bcl->name, bcl->queue_limit[0]); - tipc_printf(&pb, " RX packets:%u fragments:%u/%u bundles:%u/%u\n", - bcl->stats.recv_info, - bcl->stats.recv_fragments, - bcl->stats.recv_fragmented, - bcl->stats.recv_bundles, - bcl->stats.recv_bundled); - tipc_printf(&pb, " TX packets:%u fragments:%u/%u bundles:%u/%u\n", - bcl->stats.sent_info, - bcl->stats.sent_fragments, - bcl->stats.sent_fragmented, - bcl->stats.sent_bundles, - bcl->stats.sent_bundled); - tipc_printf(&pb, " RX naks:%u defs:%u dups:%u\n", - bcl->stats.recv_nacks, - bcl->stats.deferred_recv, - bcl->stats.duplicates); - tipc_printf(&pb, " TX naks:%u acks:%u dups:%u\n", - bcl->stats.sent_nacks, - bcl->stats.sent_acks, - bcl->stats.retransmitted); - tipc_printf(&pb, " Congestion bearer:%u link:%u Send queue max:%u avg:%u\n", - bcl->stats.bearer_congs, - bcl->stats.link_congs, - bcl->stats.max_queue_sz, - bcl->stats.queue_sz_counts - ? (bcl->stats.accu_queue_sz / bcl->stats.queue_sz_counts) - : 0); + s = &bcl->stats; + + ret = tipc_snprintf(buf, buf_size, "Link <%s>\n" + " Window:%u packets\n", + bcl->name, bcl->queue_limit[0]); + ret += tipc_snprintf(buf + ret, buf_size - ret, + " RX packets:%u fragments:%u/%u bundles:%u/%u\n", + s->recv_info, s->recv_fragments, + s->recv_fragmented, s->recv_bundles, + s->recv_bundled); + ret += tipc_snprintf(buf + ret, buf_size - ret, + " TX packets:%u fragments:%u/%u bundles:%u/%u\n", + s->sent_info, s->sent_fragments, + s->sent_fragmented, s->sent_bundles, + s->sent_bundled); + ret += tipc_snprintf(buf + ret, buf_size - ret, + " RX naks:%u defs:%u dups:%u\n", + s->recv_nacks, s->deferred_recv, s->duplicates); + ret += tipc_snprintf(buf + ret, buf_size - ret, + " TX naks:%u acks:%u dups:%u\n", + s->sent_nacks, s->sent_acks, s->retransmitted); + ret += tipc_snprintf(buf + ret, buf_size - ret, + " Congestion bearer:%u link:%u Send queue max:%u avg:%u\n", + s->bearer_congs, s->link_congs, s->max_queue_sz, + s->queue_sz_counts ? 
+ (s->accu_queue_sz / s->queue_sz_counts) : 0); spin_unlock_bh(&bc_lock); - return tipc_printbuf_validate(&pb); + return ret; } int tipc_bclink_reset_stats(void) @@ -880,7 +875,7 @@ void tipc_port_list_add(struct tipc_port_list *pl_ptr, u32 port) if (!item->next) { item->next = kmalloc(sizeof(*item), GFP_ATOMIC); if (!item->next) { - warn("Incomplete multicast delivery, no memory\n"); + pr_warn("Incomplete multicast delivery, no memory\n"); return; } item->next->next = NULL; diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c index a297e3a2e3e7..09e71241265d 100644 --- a/net/tipc/bearer.c +++ b/net/tipc/bearer.c @@ -123,28 +123,30 @@ int tipc_register_media(struct tipc_media *m_ptr) exit: write_unlock_bh(&tipc_net_lock); if (res) - warn("Media <%s> registration error\n", m_ptr->name); + pr_warn("Media <%s> registration error\n", m_ptr->name); return res; } /** * tipc_media_addr_printf - record media address in print buffer */ -void tipc_media_addr_printf(struct print_buf *pb, struct tipc_media_addr *a) +void tipc_media_addr_printf(char *buf, int len, struct tipc_media_addr *a) { char addr_str[MAX_ADDR_STR]; struct tipc_media *m_ptr; + int ret; m_ptr = media_find_id(a->media_id); if (m_ptr && !m_ptr->addr2str(a, addr_str, sizeof(addr_str))) - tipc_printf(pb, "%s(%s)", m_ptr->name, addr_str); + ret = tipc_snprintf(buf, len, "%s(%s)", m_ptr->name, addr_str); else { u32 i; - tipc_printf(pb, "UNKNOWN(%u)", a->media_id); + ret = tipc_snprintf(buf, len, "UNKNOWN(%u)", a->media_id); for (i = 0; i < sizeof(a->value); i++) - tipc_printf(pb, "-%02x", a->value[i]); + ret += tipc_snprintf(buf - ret, len + ret, + "-%02x", a->value[i]); } } @@ -172,8 +174,8 @@ struct sk_buff *tipc_media_get_names(void) /** * bearer_name_validate - validate & (optionally) deconstruct bearer name - * @name - ptr to bearer name string - * @name_parts - ptr to area for bearer name components (or NULL if not needed) + * @name: ptr to bearer name string + * @name_parts: ptr to area for bearer name components (or NULL if not needed) * * Returns 1 if bearer name is valid, otherwise 0. 
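[Editor's note] The statistics dump above drops TIPC's private print_buf in favour of the usual snprintf accumulation idiom: keep a running length and always hand buf + ret / size - ret to the next call. A standalone userspace version of the idiom; tipc_snprintf itself is assumed here to behave like scnprintf, i.e. to return the number of characters actually stored, which is what keeps "ret +=" from running past the buffer:

#include <stdarg.h>
#include <stdio.h>

/* scnprintf-style append: never reports more than was stored. */
static int append(char *buf, int size, const char *fmt, ...)
{
	va_list ap;
	int n;

	if (size <= 0)
		return 0;
	va_start(ap, fmt);
	n = vsnprintf(buf, size, fmt, ap);
	va_end(ap);
	if (n < 0)
		return 0;
	return n < size ? n : size - 1;
}

int main(void)
{
	char buf[128];
	int ret = 0;

	ret += append(buf + ret, sizeof(buf) - ret, "Link <%s>\n", "bcast");
	ret += append(buf + ret, sizeof(buf) - ret, "  RX packets:%u\n", 1234u);
	ret += append(buf + ret, sizeof(buf) - ret, "  TX naks:%u acks:%u\n", 3u, 7u);
	fputs(buf, stdout);
	return 0;
}
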
*/ @@ -418,12 +420,12 @@ int tipc_enable_bearer(const char *name, u32 disc_domain, u32 priority) int res = -EINVAL; if (!tipc_own_addr) { - warn("Bearer <%s> rejected, not supported in standalone mode\n", - name); + pr_warn("Bearer <%s> rejected, not supported in standalone mode\n", + name); return -ENOPROTOOPT; } if (!bearer_name_validate(name, &b_names)) { - warn("Bearer <%s> rejected, illegal name\n", name); + pr_warn("Bearer <%s> rejected, illegal name\n", name); return -EINVAL; } if (tipc_addr_domain_valid(disc_domain) && @@ -435,12 +437,13 @@ int tipc_enable_bearer(const char *name, u32 disc_domain, u32 priority) res = 0; /* accept specified node in own cluster */ } if (res) { - warn("Bearer <%s> rejected, illegal discovery domain\n", name); + pr_warn("Bearer <%s> rejected, illegal discovery domain\n", + name); return -EINVAL; } if ((priority > TIPC_MAX_LINK_PRI) && (priority != TIPC_MEDIA_LINK_PRI)) { - warn("Bearer <%s> rejected, illegal priority\n", name); + pr_warn("Bearer <%s> rejected, illegal priority\n", name); return -EINVAL; } @@ -448,8 +451,8 @@ int tipc_enable_bearer(const char *name, u32 disc_domain, u32 priority) m_ptr = tipc_media_find(b_names.media_name); if (!m_ptr) { - warn("Bearer <%s> rejected, media <%s> not registered\n", name, - b_names.media_name); + pr_warn("Bearer <%s> rejected, media <%s> not registered\n", + name, b_names.media_name); goto exit; } @@ -465,24 +468,25 @@ restart: continue; } if (!strcmp(name, tipc_bearers[i].name)) { - warn("Bearer <%s> rejected, already enabled\n", name); + pr_warn("Bearer <%s> rejected, already enabled\n", + name); goto exit; } if ((tipc_bearers[i].priority == priority) && (++with_this_prio > 2)) { if (priority-- == 0) { - warn("Bearer <%s> rejected, duplicate priority\n", - name); + pr_warn("Bearer <%s> rejected, duplicate priority\n", + name); goto exit; } - warn("Bearer <%s> priority adjustment required %u->%u\n", - name, priority + 1, priority); + pr_warn("Bearer <%s> priority adjustment required %u->%u\n", + name, priority + 1, priority); goto restart; } } if (bearer_id >= MAX_BEARERS) { - warn("Bearer <%s> rejected, bearer limit reached (%u)\n", - name, MAX_BEARERS); + pr_warn("Bearer <%s> rejected, bearer limit reached (%u)\n", + name, MAX_BEARERS); goto exit; } @@ -490,7 +494,8 @@ restart: strcpy(b_ptr->name, name); res = m_ptr->enable_bearer(b_ptr); if (res) { - warn("Bearer <%s> rejected, enable failure (%d)\n", name, -res); + pr_warn("Bearer <%s> rejected, enable failure (%d)\n", + name, -res); goto exit; } @@ -508,20 +513,20 @@ restart: res = tipc_disc_create(b_ptr, &m_ptr->bcast_addr, disc_domain); if (res) { bearer_disable(b_ptr); - warn("Bearer <%s> rejected, discovery object creation failed\n", - name); + pr_warn("Bearer <%s> rejected, discovery object creation failed\n", + name); goto exit; } - info("Enabled bearer <%s>, discovery domain %s, priority %u\n", - name, tipc_addr_string_fill(addr_string, disc_domain), priority); + pr_info("Enabled bearer <%s>, discovery domain %s, priority %u\n", + name, + tipc_addr_string_fill(addr_string, disc_domain), priority); exit: write_unlock_bh(&tipc_net_lock); return res; } /** - * tipc_block_bearer(): Block the bearer with the given name, - * and reset all its links + * tipc_block_bearer - Block the bearer with the given name, and reset all its links */ int tipc_block_bearer(const char *name) { @@ -532,12 +537,12 @@ int tipc_block_bearer(const char *name) read_lock_bh(&tipc_net_lock); b_ptr = tipc_bearer_find(name); if (!b_ptr) { - warn("Attempt to block unknown 
bearer <%s>\n", name); + pr_warn("Attempt to block unknown bearer <%s>\n", name); read_unlock_bh(&tipc_net_lock); return -EINVAL; } - info("Blocking bearer <%s>\n", name); + pr_info("Blocking bearer <%s>\n", name); spin_lock_bh(&b_ptr->lock); b_ptr->blocked = 1; list_splice_init(&b_ptr->cong_links, &b_ptr->links); @@ -563,7 +568,7 @@ static void bearer_disable(struct tipc_bearer *b_ptr) struct tipc_link *l_ptr; struct tipc_link *temp_l_ptr; - info("Disabling bearer <%s>\n", b_ptr->name); + pr_info("Disabling bearer <%s>\n", b_ptr->name); spin_lock_bh(&b_ptr->lock); b_ptr->blocked = 1; b_ptr->media->disable_bearer(b_ptr); @@ -585,7 +590,7 @@ int tipc_disable_bearer(const char *name) write_lock_bh(&tipc_net_lock); b_ptr = tipc_bearer_find(name); if (b_ptr == NULL) { - warn("Attempt to disable unknown bearer <%s>\n", name); + pr_warn("Attempt to disable unknown bearer <%s>\n", name); res = -EINVAL; } else { bearer_disable(b_ptr); diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h index e3b2be37fb31..dd4c2abf08e7 100644 --- a/net/tipc/bearer.h +++ b/net/tipc/bearer.h @@ -57,7 +57,7 @@ */ #define TIPC_MEDIA_TYPE_ETH 1 -/* +/** * struct tipc_media_addr - destination address used by TIPC bearers * @value: address info (format defined by media) * @media_id: TIPC media type identifier @@ -179,7 +179,7 @@ void tipc_eth_media_stop(void); int tipc_media_set_priority(const char *name, u32 new_value); int tipc_media_set_window(const char *name, u32 new_value); -void tipc_media_addr_printf(struct print_buf *pb, struct tipc_media_addr *a); +void tipc_media_addr_printf(char *buf, int len, struct tipc_media_addr *a); struct sk_buff *tipc_media_get_names(void); struct sk_buff *tipc_bearer_get_names(void); diff --git a/net/tipc/config.c b/net/tipc/config.c index c5712a343810..a056a3852f71 100644 --- a/net/tipc/config.c +++ b/net/tipc/config.c @@ -39,6 +39,8 @@ #include "name_table.h" #include "config.h" +#define REPLY_TRUNCATED "<truncated>\n" + static u32 config_port_ref; static DEFINE_SPINLOCK(config_lock); @@ -104,13 +106,12 @@ struct sk_buff *tipc_cfg_reply_string_type(u16 tlv_type, char *string) return buf; } -#define MAX_STATS_INFO 2000 - static struct sk_buff *tipc_show_stats(void) { struct sk_buff *buf; struct tlv_desc *rep_tlv; - struct print_buf pb; + char *pb; + int pb_len; int str_len; u32 value; @@ -121,17 +122,16 @@ static struct sk_buff *tipc_show_stats(void) if (value != 0) return tipc_cfg_reply_error_string("unsupported argument"); - buf = tipc_cfg_reply_alloc(TLV_SPACE(MAX_STATS_INFO)); + buf = tipc_cfg_reply_alloc(TLV_SPACE(ULTRA_STRING_MAX_LEN)); if (buf == NULL) return NULL; rep_tlv = (struct tlv_desc *)buf->data; - tipc_printbuf_init(&pb, (char *)TLV_DATA(rep_tlv), MAX_STATS_INFO); - - tipc_printf(&pb, "TIPC version " TIPC_MOD_VER "\n"); + pb = TLV_DATA(rep_tlv); + pb_len = ULTRA_STRING_MAX_LEN; - /* Use additional tipc_printf()'s to return more info ... 
*/ - str_len = tipc_printbuf_validate(&pb); + str_len = tipc_snprintf(pb, pb_len, "TIPC version " TIPC_MOD_VER "\n"); + str_len += 1; /* for "\0" */ skb_put(buf, TLV_SPACE(str_len)); TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len); @@ -334,12 +334,6 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area case TIPC_CMD_SHOW_PORTS: rep_tlv_buf = tipc_port_get_ports(); break; - case TIPC_CMD_SET_LOG_SIZE: - rep_tlv_buf = tipc_log_resize_cmd(req_tlv_area, req_tlv_space); - break; - case TIPC_CMD_DUMP_LOG: - rep_tlv_buf = tipc_log_dump(); - break; case TIPC_CMD_SHOW_STATS: rep_tlv_buf = tipc_show_stats(); break; @@ -399,6 +393,8 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area case TIPC_CMD_GET_MAX_CLUSTERS: case TIPC_CMD_SET_MAX_NODES: case TIPC_CMD_GET_MAX_NODES: + case TIPC_CMD_SET_LOG_SIZE: + case TIPC_CMD_DUMP_LOG: rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED " (obsolete command)"); break; @@ -408,6 +404,15 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area break; } + WARN_ON(rep_tlv_buf->len > TLV_SPACE(ULTRA_STRING_MAX_LEN)); + + /* Append an error message if we cannot return all requested data */ + if (rep_tlv_buf->len == TLV_SPACE(ULTRA_STRING_MAX_LEN)) { + if (*(rep_tlv_buf->data + ULTRA_STRING_MAX_LEN) != '\0') + sprintf(rep_tlv_buf->data + rep_tlv_buf->len - + sizeof(REPLY_TRUNCATED) - 1, REPLY_TRUNCATED); + } + /* Return reply buffer */ exit: spin_unlock_bh(&config_lock); @@ -432,7 +437,7 @@ static void cfg_named_msg_event(void *userdata, if ((size < sizeof(*req_hdr)) || (size != TCM_ALIGN(ntohl(req_hdr->tcm_len))) || (ntohs(req_hdr->tcm_flags) != TCM_F_REQUEST)) { - warn("Invalid configuration message discarded\n"); + pr_warn("Invalid configuration message discarded\n"); return; } @@ -478,7 +483,7 @@ int tipc_cfg_init(void) return 0; failed: - err("Unable to create configuration service\n"); + pr_err("Unable to create configuration service\n"); return res; } @@ -494,7 +499,7 @@ void tipc_cfg_reinit(void) seq.lower = seq.upper = tipc_own_addr; res = tipc_publish(config_port_ref, TIPC_ZONE_SCOPE, &seq); if (res) - err("Unable to reinitialize configuration service\n"); + pr_err("Unable to reinitialize configuration service\n"); } void tipc_cfg_stop(void) diff --git a/net/tipc/core.c b/net/tipc/core.c index f7b95239ebda..6586eac6a50e 100644 --- a/net/tipc/core.c +++ b/net/tipc/core.c @@ -34,22 +34,18 @@ * POSSIBILITY OF SUCH DAMAGE. 
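Two details of the reworked reply path above are easy to miss: tipc_snprintf() reports the string length without the terminating NUL, so tipc_show_stats() adds one byte (str_len += 1) before sizing the TLV, and tipc_cfg_do_cmd() now stamps a "<truncated>" marker over the tail of any reply that filled the entire ULTRA_STRING_MAX_LEN buffer. A small userspace sketch of the truncation marker, assuming a plain NUL-terminated buffer rather than a TLV-framed sk_buff:

```c
#include <stdio.h>
#include <string.h>

#define REPLY_TRUNCATED "<truncated>\n"

/* If the NUL-terminated reply completely filled its buffer, overwrite the
 * tail so the reader can see that data was dropped, in the spirit of what
 * tipc_cfg_do_cmd() does for replies that hit ULTRA_STRING_MAX_LEN. */
static void mark_truncated(char *buf, size_t len)
{
	if (strlen(buf) == len - 1)	/* no spare room was left */
		strcpy(buf + len - sizeof(REPLY_TRUNCATED), REPLY_TRUNCATED);
}

int main(void)
{
	char reply[32];

	snprintf(reply, sizeof(reply), "%s",
		 "a reply that is clearly longer than the buffer");
	mark_truncated(reply, sizeof(reply));
	fputs(reply, stdout);
	return 0;
}
```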
*/ -#include <linux/module.h> - #include "core.h" #include "ref.h" #include "name_table.h" #include "subscr.h" #include "config.h" +#include <linux/module.h> #ifndef CONFIG_TIPC_PORTS #define CONFIG_TIPC_PORTS 8191 #endif -#ifndef CONFIG_TIPC_LOG -#define CONFIG_TIPC_LOG 0 -#endif /* global variables used by multiple sub-systems within TIPC */ int tipc_random; @@ -125,7 +121,6 @@ static void tipc_core_stop(void) tipc_nametbl_stop(); tipc_ref_table_stop(); tipc_socket_stop(); - tipc_log_resize(0); } /** @@ -161,10 +156,7 @@ static int __init tipc_init(void) { int res; - if (tipc_log_resize(CONFIG_TIPC_LOG) != 0) - warn("Unable to create log buffer\n"); - - info("Activated (version " TIPC_MOD_VER ")\n"); + pr_info("Activated (version " TIPC_MOD_VER ")\n"); tipc_own_addr = 0; tipc_remote_management = 1; @@ -175,9 +167,9 @@ static int __init tipc_init(void) res = tipc_core_start(); if (res) - err("Unable to start in single node mode\n"); + pr_err("Unable to start in single node mode\n"); else - info("Started in single node mode\n"); + pr_info("Started in single node mode\n"); return res; } @@ -185,7 +177,7 @@ static void __exit tipc_exit(void) { tipc_core_stop_net(); tipc_core_stop(); - info("Deactivated\n"); + pr_info("Deactivated\n"); } module_init(tipc_init); diff --git a/net/tipc/core.h b/net/tipc/core.h index 2a9bb99537b3..fd42e106c185 100644 --- a/net/tipc/core.h +++ b/net/tipc/core.h @@ -37,6 +37,8 @@ #ifndef _TIPC_CORE_H #define _TIPC_CORE_H +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/tipc.h> #include <linux/tipc_config.h> #include <linux/types.h> @@ -58,68 +60,11 @@ #define TIPC_MOD_VER "2.0.0" -struct tipc_msg; /* msg.h */ -struct print_buf; /* log.h */ - -/* - * TIPC system monitoring code - */ - -/* - * TIPC's print buffer subsystem supports the following print buffers: - * - * TIPC_NULL : null buffer (i.e. print nowhere) - * TIPC_CONS : system console - * TIPC_LOG : TIPC log buffer - * &buf : user-defined buffer (struct print_buf *) - * - * Note: TIPC_LOG is configured to echo its output to the system console; - * user-defined buffers can be configured to do the same thing. - */ -extern struct print_buf *const TIPC_NULL; -extern struct print_buf *const TIPC_CONS; -extern struct print_buf *const TIPC_LOG; - -void tipc_printf(struct print_buf *, const char *fmt, ...); - -/* - * TIPC_OUTPUT is the destination print buffer for system messages. - */ -#ifndef TIPC_OUTPUT -#define TIPC_OUTPUT TIPC_LOG -#endif +#define ULTRA_STRING_MAX_LEN 32768 -#define err(fmt, arg...) tipc_printf(TIPC_OUTPUT, \ - KERN_ERR "TIPC: " fmt, ## arg) -#define warn(fmt, arg...) tipc_printf(TIPC_OUTPUT, \ - KERN_WARNING "TIPC: " fmt, ## arg) -#define info(fmt, arg...) tipc_printf(TIPC_OUTPUT, \ - KERN_NOTICE "TIPC: " fmt, ## arg) - -#ifdef CONFIG_TIPC_DEBUG - -/* - * DBG_OUTPUT is the destination print buffer for debug messages. - */ -#ifndef DBG_OUTPUT -#define DBG_OUTPUT TIPC_LOG -#endif - -#define dbg(fmt, arg...) tipc_printf(DBG_OUTPUT, KERN_DEBUG fmt, ## arg); - -#define msg_dbg(msg, txt) tipc_msg_dbg(DBG_OUTPUT, msg, txt); - -void tipc_msg_dbg(struct print_buf *, struct tipc_msg *, const char *); - -#else - -#define dbg(fmt, arg...) 
do {} while (0) -#define msg_dbg(msg, txt) do {} while (0) - -#define tipc_msg_dbg(buf, msg, txt) do {} while (0) - -#endif +struct tipc_msg; /* msg.h */ +int tipc_snprintf(char *buf, int len, const char *fmt, ...); /* * TIPC-specific error codes diff --git a/net/tipc/discover.c b/net/tipc/discover.c index ae054cfe179f..50eaa403eb6e 100644 --- a/net/tipc/discover.c +++ b/net/tipc/discover.c @@ -100,14 +100,12 @@ static void disc_dupl_alert(struct tipc_bearer *b_ptr, u32 node_addr, { char node_addr_str[16]; char media_addr_str[64]; - struct print_buf pb; tipc_addr_string_fill(node_addr_str, node_addr); - tipc_printbuf_init(&pb, media_addr_str, sizeof(media_addr_str)); - tipc_media_addr_printf(&pb, media_addr); - tipc_printbuf_validate(&pb); - warn("Duplicate %s using %s seen on <%s>\n", - node_addr_str, media_addr_str, b_ptr->name); + tipc_media_addr_printf(media_addr_str, sizeof(media_addr_str), + media_addr); + pr_warn("Duplicate %s using %s seen on <%s>\n", node_addr_str, + media_addr_str, b_ptr->name); } /** diff --git a/net/tipc/handler.c b/net/tipc/handler.c index 9c6f22ff1c6d..7a52d3922f3c 100644 --- a/net/tipc/handler.c +++ b/net/tipc/handler.c @@ -57,14 +57,14 @@ unsigned int tipc_k_signal(Handler routine, unsigned long argument) struct queue_item *item; if (!handler_enabled) { - err("Signal request ignored by handler\n"); + pr_err("Signal request ignored by handler\n"); return -ENOPROTOOPT; } spin_lock_bh(&qitem_lock); item = kmem_cache_alloc(tipc_queue_item_cache, GFP_ATOMIC); if (!item) { - err("Signal queue out of memory\n"); + pr_err("Signal queue out of memory\n"); spin_unlock_bh(&qitem_lock); return -ENOMEM; } diff --git a/net/tipc/link.c b/net/tipc/link.c index 7a614f43549d..1c1e6151875e 100644 --- a/net/tipc/link.c +++ b/net/tipc/link.c @@ -41,6 +41,12 @@ #include "discover.h" #include "config.h" +/* + * Error message prefixes + */ +static const char *link_co_err = "Link changeover error, "; +static const char *link_rst_msg = "Resetting link "; +static const char *link_unk_evt = "Unknown link event "; /* * Out-of-range value for link session numbers @@ -153,8 +159,8 @@ int tipc_link_is_active(struct tipc_link *l_ptr) /** * link_name_validate - validate & (optionally) deconstruct tipc_link name - * @name - ptr to link name string - * @name_parts - ptr to area for link name components (or NULL if not needed) + * @name: ptr to link name string + * @name_parts: ptr to area for link name components (or NULL if not needed) * * Returns 1 if link name is valid, otherwise 0. 
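With the private err()/warn()/info() macros gone, the "TIPC: " prefix now comes from the pr_fmt() definition that core.h places before its includes, so every pr_err()/pr_warn()/pr_info() in the module is prefixed with KBUILD_MODNAME automatically; link.c additionally factors its most repeated message fragments into static strings (link_co_err, link_rst_msg, link_unk_evt) passed through a leading %s. Since pr_fmt() is printk machinery, the following is only a userspace imitation of the idea, with printf standing in for printk:

```c
#include <stdio.h>

/* Defined before the "logging" macros, just as net/tipc/core.h now defines
 * pr_fmt before its own includes, so every message picks up the prefix. */
#define pr_fmt(fmt) "tipc: " fmt

#define pr_warn(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)
#define pr_info(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)

/* Shared message prefix, in the spirit of link_rst_msg in link.c. */
static const char *link_rst_msg = "Resetting link ";

int main(void)
{
	pr_info("Activated (version 2.0.0)\n");
	pr_warn("%s<%s>, peer not responding\n", link_rst_msg,
		"1.1.1:eth0-1.1.2:eth0");
	return 0;
}
```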
*/ @@ -300,20 +306,20 @@ struct tipc_link *tipc_link_create(struct tipc_node *n_ptr, if (n_ptr->link_cnt >= 2) { tipc_addr_string_fill(addr_string, n_ptr->addr); - err("Attempt to establish third link to %s\n", addr_string); + pr_err("Attempt to establish third link to %s\n", addr_string); return NULL; } if (n_ptr->links[b_ptr->identity]) { tipc_addr_string_fill(addr_string, n_ptr->addr); - err("Attempt to establish second link on <%s> to %s\n", - b_ptr->name, addr_string); + pr_err("Attempt to establish second link on <%s> to %s\n", + b_ptr->name, addr_string); return NULL; } l_ptr = kzalloc(sizeof(*l_ptr), GFP_ATOMIC); if (!l_ptr) { - warn("Link creation failed, no memory\n"); + pr_warn("Link creation failed, no memory\n"); return NULL; } @@ -371,7 +377,7 @@ struct tipc_link *tipc_link_create(struct tipc_node *n_ptr, void tipc_link_delete(struct tipc_link *l_ptr) { if (!l_ptr) { - err("Attempt to delete non-existent link\n"); + pr_err("Attempt to delete non-existent link\n"); return; } @@ -632,8 +638,8 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) link_set_timer(l_ptr, cont_intv / 4); break; case RESET_MSG: - info("Resetting link <%s>, requested by peer\n", - l_ptr->name); + pr_info("%s<%s>, requested by peer\n", link_rst_msg, + l_ptr->name); tipc_link_reset(l_ptr); l_ptr->state = RESET_RESET; l_ptr->fsm_msg_cnt = 0; @@ -642,7 +648,7 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) link_set_timer(l_ptr, cont_intv); break; default: - err("Unknown link event %u in WW state\n", event); + pr_err("%s%u in WW state\n", link_unk_evt, event); } break; case WORKING_UNKNOWN: @@ -654,8 +660,8 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) link_set_timer(l_ptr, cont_intv); break; case RESET_MSG: - info("Resetting link <%s>, requested by peer " - "while probing\n", l_ptr->name); + pr_info("%s<%s>, requested by peer while probing\n", + link_rst_msg, l_ptr->name); tipc_link_reset(l_ptr); l_ptr->state = RESET_RESET; l_ptr->fsm_msg_cnt = 0; @@ -680,8 +686,8 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) l_ptr->fsm_msg_cnt++; link_set_timer(l_ptr, cont_intv / 4); } else { /* Link has failed */ - warn("Resetting link <%s>, peer not responding\n", - l_ptr->name); + pr_warn("%s<%s>, peer not responding\n", + link_rst_msg, l_ptr->name); tipc_link_reset(l_ptr); l_ptr->state = RESET_UNKNOWN; l_ptr->fsm_msg_cnt = 0; @@ -692,7 +698,7 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) } break; default: - err("Unknown link event %u in WU state\n", event); + pr_err("%s%u in WU state\n", link_unk_evt, event); } break; case RESET_UNKNOWN: @@ -726,7 +732,7 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) link_set_timer(l_ptr, cont_intv); break; default: - err("Unknown link event %u in RU state\n", event); + pr_err("%s%u in RU state\n", link_unk_evt, event); } break; case RESET_RESET: @@ -751,11 +757,11 @@ static void link_state_event(struct tipc_link *l_ptr, unsigned int event) link_set_timer(l_ptr, cont_intv); break; default: - err("Unknown link event %u in RR state\n", event); + pr_err("%s%u in RR state\n", link_unk_evt, event); } break; default: - err("Unknown link state %u/%u\n", l_ptr->state, event); + pr_err("Unknown link state %u/%u\n", l_ptr->state, event); } } @@ -856,7 +862,8 @@ int tipc_link_send_buf(struct tipc_link *l_ptr, struct sk_buff *buf) } kfree_skb(buf); if (imp > CONN_MANAGER) { - warn("Resetting link <%s>, send queue full", 
l_ptr->name); + pr_warn("%s<%s>, send queue full", link_rst_msg, + l_ptr->name); tipc_link_reset(l_ptr); } return dsz; @@ -944,7 +951,7 @@ int tipc_link_send(struct sk_buff *buf, u32 dest, u32 selector) return res; } -/* +/** * tipc_link_send_names - send name table entries to new neighbor * * Send routine for bulk delivery of name table messages when contact @@ -1409,8 +1416,8 @@ static void link_reset_all(unsigned long addr) tipc_node_lock(n_ptr); - warn("Resetting all links to %s\n", - tipc_addr_string_fill(addr_string, n_ptr->addr)); + pr_warn("Resetting all links to %s\n", + tipc_addr_string_fill(addr_string, n_ptr->addr)); for (i = 0; i < MAX_BEARERS; i++) { if (n_ptr->links[i]) { @@ -1428,7 +1435,7 @@ static void link_retransmit_failure(struct tipc_link *l_ptr, { struct tipc_msg *msg = buf_msg(buf); - warn("Retransmission failure on link <%s>\n", l_ptr->name); + pr_warn("Retransmission failure on link <%s>\n", l_ptr->name); if (l_ptr->addr) { /* Handle failure on standard link */ @@ -1440,21 +1447,23 @@ static void link_retransmit_failure(struct tipc_link *l_ptr, struct tipc_node *n_ptr; char addr_string[16]; - info("Msg seq number: %u, ", msg_seqno(msg)); - info("Outstanding acks: %lu\n", - (unsigned long) TIPC_SKB_CB(buf)->handle); + pr_info("Msg seq number: %u, ", msg_seqno(msg)); + pr_cont("Outstanding acks: %lu\n", + (unsigned long) TIPC_SKB_CB(buf)->handle); n_ptr = tipc_bclink_retransmit_to(); tipc_node_lock(n_ptr); tipc_addr_string_fill(addr_string, n_ptr->addr); - info("Broadcast link info for %s\n", addr_string); - info("Supportable: %d, ", n_ptr->bclink.supportable); - info("Supported: %d, ", n_ptr->bclink.supported); - info("Acked: %u\n", n_ptr->bclink.acked); - info("Last in: %u, ", n_ptr->bclink.last_in); - info("Oos state: %u, ", n_ptr->bclink.oos_state); - info("Last sent: %u\n", n_ptr->bclink.last_sent); + pr_info("Broadcast link info for %s\n", addr_string); + pr_info("Supportable: %d, Supported: %d, Acked: %u\n", + n_ptr->bclink.supportable, + n_ptr->bclink.supported, + n_ptr->bclink.acked); + pr_info("Last in: %u, Oos state: %u, Last sent: %u\n", + n_ptr->bclink.last_in, + n_ptr->bclink.oos_state, + n_ptr->bclink.last_sent); tipc_k_signal((Handler)link_reset_all, (unsigned long)n_ptr->addr); @@ -1479,8 +1488,8 @@ void tipc_link_retransmit(struct tipc_link *l_ptr, struct sk_buff *buf, l_ptr->retransm_queue_head = msg_seqno(msg); l_ptr->retransm_queue_size = retransmits; } else { - err("Unexpected retransmit on link %s (qsize=%d)\n", - l_ptr->name, l_ptr->retransm_queue_size); + pr_err("Unexpected retransmit on link %s (qsize=%d)\n", + l_ptr->name, l_ptr->retransm_queue_size); } return; } else { @@ -1787,7 +1796,7 @@ cont: read_unlock_bh(&tipc_net_lock); } -/* +/** * tipc_link_defer_pkt - Add out-of-sequence message to deferred reception queue * * Returns increase in queue length (i.e. 
0 or 1) @@ -2074,8 +2083,9 @@ static void link_recv_proto_msg(struct tipc_link *l_ptr, struct sk_buff *buf) if (msg_linkprio(msg) && (msg_linkprio(msg) != l_ptr->priority)) { - warn("Resetting link <%s>, priority change %u->%u\n", - l_ptr->name, l_ptr->priority, msg_linkprio(msg)); + pr_warn("%s<%s>, priority change %u->%u\n", + link_rst_msg, l_ptr->name, l_ptr->priority, + msg_linkprio(msg)); l_ptr->priority = msg_linkprio(msg); tipc_link_reset(l_ptr); /* Enforce change to take effect */ break; @@ -2139,15 +2149,13 @@ static void tipc_link_tunnel(struct tipc_link *l_ptr, tunnel = l_ptr->owner->active_links[selector & 1]; if (!tipc_link_is_up(tunnel)) { - warn("Link changeover error, " - "tunnel link no longer available\n"); + pr_warn("%stunnel link no longer available\n", link_co_err); return; } msg_set_size(tunnel_hdr, length + INT_H_SIZE); buf = tipc_buf_acquire(length + INT_H_SIZE); if (!buf) { - warn("Link changeover error, " - "unable to send tunnel msg\n"); + pr_warn("%sunable to send tunnel msg\n", link_co_err); return; } skb_copy_to_linear_data(buf, tunnel_hdr, INT_H_SIZE); @@ -2173,8 +2181,7 @@ void tipc_link_changeover(struct tipc_link *l_ptr) return; if (!l_ptr->owner->permit_changeover) { - warn("Link changeover error, " - "peer did not permit changeover\n"); + pr_warn("%speer did not permit changeover\n", link_co_err); return; } @@ -2192,8 +2199,8 @@ void tipc_link_changeover(struct tipc_link *l_ptr) msg_set_size(&tunnel_hdr, INT_H_SIZE); tipc_link_send_buf(tunnel, buf); } else { - warn("Link changeover error, " - "unable to send changeover msg\n"); + pr_warn("%sunable to send changeover msg\n", + link_co_err); } return; } @@ -2246,8 +2253,8 @@ void tipc_link_send_duplicate(struct tipc_link *l_ptr, struct tipc_link *tunnel) msg_set_size(&tunnel_hdr, length + INT_H_SIZE); outbuf = tipc_buf_acquire(length + INT_H_SIZE); if (outbuf == NULL) { - warn("Link changeover error, " - "unable to send duplicate msg\n"); + pr_warn("%sunable to send duplicate msg\n", + link_co_err); return; } skb_copy_to_linear_data(outbuf, &tunnel_hdr, INT_H_SIZE); @@ -2298,8 +2305,8 @@ static int link_recv_changeover_msg(struct tipc_link **l_ptr, if (!dest_link) goto exit; if (dest_link == *l_ptr) { - err("Unexpected changeover message on link <%s>\n", - (*l_ptr)->name); + pr_err("Unexpected changeover message on link <%s>\n", + (*l_ptr)->name); goto exit; } *l_ptr = dest_link; @@ -2310,7 +2317,7 @@ static int link_recv_changeover_msg(struct tipc_link **l_ptr, goto exit; *buf = buf_extract(tunnel_buf, INT_H_SIZE); if (*buf == NULL) { - warn("Link changeover error, duplicate msg dropped\n"); + pr_warn("%sduplicate msg dropped\n", link_co_err); goto exit; } kfree_skb(tunnel_buf); @@ -2319,8 +2326,8 @@ static int link_recv_changeover_msg(struct tipc_link **l_ptr, /* First original message ?: */ if (tipc_link_is_up(dest_link)) { - info("Resetting link <%s>, changeover initiated by peer\n", - dest_link->name); + pr_info("%s<%s>, changeover initiated by peer\n", link_rst_msg, + dest_link->name); tipc_link_reset(dest_link); dest_link->exp_msg_count = msg_count; if (!msg_count) @@ -2333,8 +2340,7 @@ static int link_recv_changeover_msg(struct tipc_link **l_ptr, /* Receive original message */ if (dest_link->exp_msg_count == 0) { - warn("Link switchover error, " - "got too many tunnelled messages\n"); + pr_warn("%sgot too many tunnelled messages\n", link_co_err); goto exit; } dest_link->exp_msg_count--; @@ -2346,7 +2352,7 @@ static int link_recv_changeover_msg(struct tipc_link **l_ptr, kfree_skb(tunnel_buf); return 
1; } else { - warn("Link changeover error, original msg dropped\n"); + pr_warn("%soriginal msg dropped\n", link_co_err); } } exit: @@ -2367,7 +2373,7 @@ void tipc_link_recv_bundle(struct sk_buff *buf) while (msgcount--) { obuf = buf_extract(buf, pos); if (obuf == NULL) { - warn("Link unable to unbundle message(s)\n"); + pr_warn("Link unable to unbundle message(s)\n"); break; } pos += align(msg_size(buf_msg(obuf))); @@ -2538,7 +2544,7 @@ int tipc_link_recv_fragment(struct sk_buff **pending, struct sk_buff **fb, set_fragm_size(pbuf, fragm_sz); set_expected_frags(pbuf, exp_fragm_cnt - 1); } else { - dbg("Link unable to reassemble fragmented message\n"); + pr_debug("Link unable to reassemble fragmented message\n"); kfree_skb(fbuf); return -1; } @@ -2635,8 +2641,8 @@ void tipc_link_set_queue_limits(struct tipc_link *l_ptr, u32 window) /** * link_find_link - locate link by name - * @name - ptr to link name string - * @node - ptr to area to be filled with ptr to associated node + * @name: ptr to link name string + * @node: ptr to area to be filled with ptr to associated node * * Caller must hold 'tipc_net_lock' to ensure node and bearer are not deleted; * this also prevents link deletion. @@ -2671,8 +2677,8 @@ static struct tipc_link *link_find_link(const char *name, /** * link_value_is_valid -- validate proposed link tolerance/priority/window * - * @cmd - value type (TIPC_CMD_SET_LINK_*) - * @new_value - the new value + * @cmd: value type (TIPC_CMD_SET_LINK_*) + * @new_value: the new value * * Returns 1 if value is within range, 0 if not. */ @@ -2693,9 +2699,9 @@ static int link_value_is_valid(u16 cmd, u32 new_value) /** * link_cmd_set_value - change priority/tolerance/window for link/bearer/media - * @name - ptr to link, bearer, or media name - * @new_value - new value of link, bearer, or media setting - * @cmd - which link, bearer, or media attribute to set (TIPC_CMD_SET_LINK_*) + * @name: ptr to link, bearer, or media name + * @new_value: new value of link, bearer, or media setting + * @cmd: which link, bearer, or media attribute to set (TIPC_CMD_SET_LINK_*) * * Caller must hold 'tipc_net_lock' to ensure link/bearer/media is not deleted. 
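The link_retransmit_failure() hunk above also shows why several old info() pairs had to be reshaped: tipc_printf() appended fragments into one buffer, whereas each pr_info() starts a new log record, so a fragment that continues the previous line must either be folded into the same call or emitted with pr_cont(). A rough userspace analogy follows; the real pr_cont() uses KERN_CONT and its output can still be interleaved by other contexts, which is presumably why the broadcast-link lines above were consolidated into whole-line calls:

```c
#include <stdio.h>

#define pr_info(fmt, ...) printf("tipc: " fmt, ##__VA_ARGS__)
#define pr_cont(fmt, ...) printf(fmt, ##__VA_ARGS__)	/* no new prefix */

int main(void)
{
	unsigned int seqno = 42;
	unsigned long acks = 3;

	/* One logical line emitted in two steps, as in the hunk above. */
	pr_info("Msg seq number: %u, ", seqno);
	pr_cont("Outstanding acks: %lu\n", acks);
	return 0;
}
```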
* @@ -2860,112 +2866,114 @@ static u32 percent(u32 count, u32 total) */ static int tipc_link_stats(const char *name, char *buf, const u32 buf_size) { - struct print_buf pb; - struct tipc_link *l_ptr; + struct tipc_link *l; + struct tipc_stats *s; struct tipc_node *node; char *status; u32 profile_total = 0; + int ret; if (!strcmp(name, tipc_bclink_name)) return tipc_bclink_stats(buf, buf_size); - tipc_printbuf_init(&pb, buf, buf_size); - read_lock_bh(&tipc_net_lock); - l_ptr = link_find_link(name, &node); - if (!l_ptr) { + l = link_find_link(name, &node); + if (!l) { read_unlock_bh(&tipc_net_lock); return 0; } tipc_node_lock(node); + s = &l->stats; - if (tipc_link_is_active(l_ptr)) + if (tipc_link_is_active(l)) status = "ACTIVE"; - else if (tipc_link_is_up(l_ptr)) + else if (tipc_link_is_up(l)) status = "STANDBY"; else status = "DEFUNCT"; - tipc_printf(&pb, "Link <%s>\n" - " %s MTU:%u Priority:%u Tolerance:%u ms" - " Window:%u packets\n", - l_ptr->name, status, l_ptr->max_pkt, - l_ptr->priority, l_ptr->tolerance, l_ptr->queue_limit[0]); - tipc_printf(&pb, " RX packets:%u fragments:%u/%u bundles:%u/%u\n", - l_ptr->next_in_no - l_ptr->stats.recv_info, - l_ptr->stats.recv_fragments, - l_ptr->stats.recv_fragmented, - l_ptr->stats.recv_bundles, - l_ptr->stats.recv_bundled); - tipc_printf(&pb, " TX packets:%u fragments:%u/%u bundles:%u/%u\n", - l_ptr->next_out_no - l_ptr->stats.sent_info, - l_ptr->stats.sent_fragments, - l_ptr->stats.sent_fragmented, - l_ptr->stats.sent_bundles, - l_ptr->stats.sent_bundled); - profile_total = l_ptr->stats.msg_length_counts; + + ret = tipc_snprintf(buf, buf_size, "Link <%s>\n" + " %s MTU:%u Priority:%u Tolerance:%u ms" + " Window:%u packets\n", + l->name, status, l->max_pkt, l->priority, + l->tolerance, l->queue_limit[0]); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " RX packets:%u fragments:%u/%u bundles:%u/%u\n", + l->next_in_no - s->recv_info, s->recv_fragments, + s->recv_fragmented, s->recv_bundles, + s->recv_bundled); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " TX packets:%u fragments:%u/%u bundles:%u/%u\n", + l->next_out_no - s->sent_info, s->sent_fragments, + s->sent_fragmented, s->sent_bundles, + s->sent_bundled); + + profile_total = s->msg_length_counts; if (!profile_total) profile_total = 1; - tipc_printf(&pb, " TX profile sample:%u packets average:%u octets\n" - " 0-64:%u%% -256:%u%% -1024:%u%% -4096:%u%% " - "-16384:%u%% -32768:%u%% -66000:%u%%\n", - l_ptr->stats.msg_length_counts, - l_ptr->stats.msg_lengths_total / profile_total, - percent(l_ptr->stats.msg_length_profile[0], profile_total), - percent(l_ptr->stats.msg_length_profile[1], profile_total), - percent(l_ptr->stats.msg_length_profile[2], profile_total), - percent(l_ptr->stats.msg_length_profile[3], profile_total), - percent(l_ptr->stats.msg_length_profile[4], profile_total), - percent(l_ptr->stats.msg_length_profile[5], profile_total), - percent(l_ptr->stats.msg_length_profile[6], profile_total)); - tipc_printf(&pb, " RX states:%u probes:%u naks:%u defs:%u dups:%u\n", - l_ptr->stats.recv_states, - l_ptr->stats.recv_probes, - l_ptr->stats.recv_nacks, - l_ptr->stats.deferred_recv, - l_ptr->stats.duplicates); - tipc_printf(&pb, " TX states:%u probes:%u naks:%u acks:%u dups:%u\n", - l_ptr->stats.sent_states, - l_ptr->stats.sent_probes, - l_ptr->stats.sent_nacks, - l_ptr->stats.sent_acks, - l_ptr->stats.retransmitted); - tipc_printf(&pb, " Congestion bearer:%u link:%u Send queue max:%u avg:%u\n", - l_ptr->stats.bearer_congs, - l_ptr->stats.link_congs, - 
l_ptr->stats.max_queue_sz, - l_ptr->stats.queue_sz_counts - ? (l_ptr->stats.accu_queue_sz / l_ptr->stats.queue_sz_counts) - : 0); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " TX profile sample:%u packets average:%u octets\n" + " 0-64:%u%% -256:%u%% -1024:%u%% -4096:%u%% " + "-16384:%u%% -32768:%u%% -66000:%u%%\n", + s->msg_length_counts, + s->msg_lengths_total / profile_total, + percent(s->msg_length_profile[0], profile_total), + percent(s->msg_length_profile[1], profile_total), + percent(s->msg_length_profile[2], profile_total), + percent(s->msg_length_profile[3], profile_total), + percent(s->msg_length_profile[4], profile_total), + percent(s->msg_length_profile[5], profile_total), + percent(s->msg_length_profile[6], profile_total)); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " RX states:%u probes:%u naks:%u defs:%u" + " dups:%u\n", s->recv_states, s->recv_probes, + s->recv_nacks, s->deferred_recv, s->duplicates); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " TX states:%u probes:%u naks:%u acks:%u" + " dups:%u\n", s->sent_states, s->sent_probes, + s->sent_nacks, s->sent_acks, s->retransmitted); + + ret += tipc_snprintf(buf + ret, buf_size - ret, + " Congestion bearer:%u link:%u Send queue" + " max:%u avg:%u\n", s->bearer_congs, s->link_congs, + s->max_queue_sz, s->queue_sz_counts ? + (s->accu_queue_sz / s->queue_sz_counts) : 0); tipc_node_unlock(node); read_unlock_bh(&tipc_net_lock); - return tipc_printbuf_validate(&pb); + return ret; } -#define MAX_LINK_STATS_INFO 2000 - struct sk_buff *tipc_link_cmd_show_stats(const void *req_tlv_area, int req_tlv_space) { struct sk_buff *buf; struct tlv_desc *rep_tlv; int str_len; + int pb_len; + char *pb; if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_LINK_NAME)) return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR); - buf = tipc_cfg_reply_alloc(TLV_SPACE(MAX_LINK_STATS_INFO)); + buf = tipc_cfg_reply_alloc(TLV_SPACE(ULTRA_STRING_MAX_LEN)); if (!buf) return NULL; rep_tlv = (struct tlv_desc *)buf->data; - + pb = TLV_DATA(rep_tlv); + pb_len = ULTRA_STRING_MAX_LEN; str_len = tipc_link_stats((char *)TLV_DATA(req_tlv_area), - (char *)TLV_DATA(rep_tlv), MAX_LINK_STATS_INFO); + pb, pb_len); if (!str_len) { kfree_skb(buf); return tipc_cfg_reply_error_string("link not found"); } - + str_len += 1; /* for "\0" */ skb_put(buf, TLV_SPACE(str_len)); TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len); @@ -3003,62 +3011,16 @@ u32 tipc_link_get_max_pkt(u32 dest, u32 selector) static void link_print(struct tipc_link *l_ptr, const char *str) { - char print_area[256]; - struct print_buf pb; - struct print_buf *buf = &pb; - - tipc_printbuf_init(buf, print_area, sizeof(print_area)); - - tipc_printf(buf, str); - tipc_printf(buf, "Link %x<%s>:", - l_ptr->addr, l_ptr->b_ptr->name); - -#ifdef CONFIG_TIPC_DEBUG - if (link_reset_reset(l_ptr) || link_reset_unknown(l_ptr)) - goto print_state; - - tipc_printf(buf, ": NXO(%u):", mod(l_ptr->next_out_no)); - tipc_printf(buf, "NXI(%u):", mod(l_ptr->next_in_no)); - tipc_printf(buf, "SQUE"); - if (l_ptr->first_out) { - tipc_printf(buf, "[%u..", buf_seqno(l_ptr->first_out)); - if (l_ptr->next_out) - tipc_printf(buf, "%u..", buf_seqno(l_ptr->next_out)); - tipc_printf(buf, "%u]", buf_seqno(l_ptr->last_out)); - if ((mod(buf_seqno(l_ptr->last_out) - - buf_seqno(l_ptr->first_out)) - != (l_ptr->out_queue_size - 1)) || - (l_ptr->last_out->next != NULL)) { - tipc_printf(buf, "\nSend queue inconsistency\n"); - tipc_printf(buf, "first_out= %p ", l_ptr->first_out); - tipc_printf(buf, "next_out= %p ", 
l_ptr->next_out); - tipc_printf(buf, "last_out= %p ", l_ptr->last_out); - } - } else - tipc_printf(buf, "[]"); - tipc_printf(buf, "SQSIZ(%u)", l_ptr->out_queue_size); - if (l_ptr->oldest_deferred_in) { - u32 o = buf_seqno(l_ptr->oldest_deferred_in); - u32 n = buf_seqno(l_ptr->newest_deferred_in); - tipc_printf(buf, ":RQUE[%u..%u]", o, n); - if (l_ptr->deferred_inqueue_sz != mod((n + 1) - o)) { - tipc_printf(buf, ":RQSIZ(%u)", - l_ptr->deferred_inqueue_sz); - } - } -print_state: -#endif + pr_info("%s Link %x<%s>:", str, l_ptr->addr, l_ptr->b_ptr->name); if (link_working_unknown(l_ptr)) - tipc_printf(buf, ":WU"); + pr_cont(":WU\n"); else if (link_reset_reset(l_ptr)) - tipc_printf(buf, ":RR"); + pr_cont(":RR\n"); else if (link_reset_unknown(l_ptr)) - tipc_printf(buf, ":RU"); + pr_cont(":RU\n"); else if (link_working_working(l_ptr)) - tipc_printf(buf, ":WW"); - tipc_printf(buf, "\n"); - - tipc_printbuf_validate(buf); - info("%s", print_area); + pr_cont(":WW\n"); + else + pr_cont("\n"); } diff --git a/net/tipc/link.h b/net/tipc/link.h index d6a60a963ce6..6e921121be06 100644 --- a/net/tipc/link.h +++ b/net/tipc/link.h @@ -37,7 +37,6 @@ #ifndef _TIPC_LINK_H #define _TIPC_LINK_H -#include "log.h" #include "msg.h" #include "node.h" @@ -63,6 +62,37 @@ */ #define MAX_PKT_DEFAULT 1500 +struct tipc_stats { + u32 sent_info; /* used in counting # sent packets */ + u32 recv_info; /* used in counting # recv'd packets */ + u32 sent_states; + u32 recv_states; + u32 sent_probes; + u32 recv_probes; + u32 sent_nacks; + u32 recv_nacks; + u32 sent_acks; + u32 sent_bundled; + u32 sent_bundles; + u32 recv_bundled; + u32 recv_bundles; + u32 retransmitted; + u32 sent_fragmented; + u32 sent_fragments; + u32 recv_fragmented; + u32 recv_fragments; + u32 link_congs; /* # port sends blocked by congestion */ + u32 bearer_congs; + u32 deferred_recv; + u32 duplicates; + u32 max_queue_sz; /* send queue size high water mark */ + u32 accu_queue_sz; /* used for send queue size profiling */ + u32 queue_sz_counts; /* used for send queue size profiling */ + u32 msg_length_counts; /* used for message length profiling */ + u32 msg_lengths_total; /* used for message length profiling */ + u32 msg_length_profile[7]; /* used for msg. length profiling */ +}; + /** * struct tipc_link - TIPC link data structure * @addr: network address of link's peer node @@ -175,36 +205,7 @@ struct tipc_link { struct sk_buff *defragm_buf; /* Statistics */ - struct { - u32 sent_info; /* used in counting # sent packets */ - u32 recv_info; /* used in counting # recv'd packets */ - u32 sent_states; - u32 recv_states; - u32 sent_probes; - u32 recv_probes; - u32 sent_nacks; - u32 recv_nacks; - u32 sent_acks; - u32 sent_bundled; - u32 sent_bundles; - u32 recv_bundled; - u32 recv_bundles; - u32 retransmitted; - u32 sent_fragmented; - u32 sent_fragments; - u32 recv_fragmented; - u32 recv_fragments; - u32 link_congs; /* # port sends blocked by congestion */ - u32 bearer_congs; - u32 deferred_recv; - u32 duplicates; - u32 max_queue_sz; /* send queue size high water mark */ - u32 accu_queue_sz; /* used for send queue size profiling */ - u32 queue_sz_counts; /* used for send queue size profiling */ - u32 msg_length_counts; /* used for message length profiling */ - u32 msg_lengths_total; /* used for message length profiling */ - u32 msg_length_profile[7]; /* used for msg. 
length profiling */ - } stats; + struct tipc_stats stats; }; struct tipc_port; diff --git a/net/tipc/log.c b/net/tipc/log.c index 026733f24919..abef644f27d8 100644 --- a/net/tipc/log.c +++ b/net/tipc/log.c @@ -36,302 +36,20 @@ #include "core.h" #include "config.h" -#include "log.h" - -/* - * TIPC pre-defines the following print buffers: - * - * TIPC_NULL : null buffer (i.e. print nowhere) - * TIPC_CONS : system console - * TIPC_LOG : TIPC log buffer - * - * Additional user-defined print buffers are also permitted. - */ -static struct print_buf null_buf = { NULL, 0, NULL, 0 }; -struct print_buf *const TIPC_NULL = &null_buf; - -static struct print_buf cons_buf = { NULL, 0, NULL, 1 }; -struct print_buf *const TIPC_CONS = &cons_buf; - -static struct print_buf log_buf = { NULL, 0, NULL, 1 }; -struct print_buf *const TIPC_LOG = &log_buf; - -/* - * Locking policy when using print buffers. - * - * 1) tipc_printf() uses 'print_lock' to protect against concurrent access to - * 'print_string' when writing to a print buffer. This also protects against - * concurrent writes to the print buffer being written to. - * - * 2) tipc_log_XXX() leverages the aforementioned use of 'print_lock' to - * protect against all types of concurrent operations on their associated - * print buffer (not just write operations). - * - * Note: All routines of the form tipc_printbuf_XXX() are lock-free, and rely - * on the caller to prevent simultaneous use of the print buffer(s) being - * manipulated. - */ -static char print_string[TIPC_PB_MAX_STR]; -static DEFINE_SPINLOCK(print_lock); - -static void tipc_printbuf_move(struct print_buf *pb_to, - struct print_buf *pb_from); - -#define FORMAT(PTR, LEN, FMT) \ -{\ - va_list args;\ - va_start(args, FMT);\ - LEN = vsprintf(PTR, FMT, args);\ - va_end(args);\ - *(PTR + LEN) = '\0';\ -} - -/** - * tipc_printbuf_init - initialize print buffer to empty - * @pb: pointer to print buffer structure - * @raw: pointer to character array used by print buffer - * @size: size of character array - * - * Note: If the character array is too small (or absent), the print buffer - * becomes a null device that discards anything written to it. - */ -void tipc_printbuf_init(struct print_buf *pb, char *raw, u32 size) -{ - pb->buf = raw; - pb->crs = raw; - pb->size = size; - pb->echo = 0; - - if (size < TIPC_PB_MIN_SIZE) { - pb->buf = NULL; - } else if (raw) { - pb->buf[0] = 0; - pb->buf[size - 1] = ~0; - } -} - -/** - * tipc_printbuf_reset - reinitialize print buffer to empty state - * @pb: pointer to print buffer structure - */ -static void tipc_printbuf_reset(struct print_buf *pb) -{ - if (pb->buf) { - pb->crs = pb->buf; - pb->buf[0] = 0; - pb->buf[pb->size - 1] = ~0; - } -} - -/** - * tipc_printbuf_empty - test if print buffer is in empty state - * @pb: pointer to print buffer structure - * - * Returns non-zero if print buffer is empty. - */ -static int tipc_printbuf_empty(struct print_buf *pb) -{ - return !pb->buf || (pb->crs == pb->buf); -} - -/** - * tipc_printbuf_validate - check for print buffer overflow - * @pb: pointer to print buffer structure - * - * Verifies that a print buffer has captured all data written to it. 
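The link.h hunk above hoists the anonymous statistics block out of struct tipc_link into a named struct tipc_stats; the practical payoff is that tipc_link_stats() can take a short local alias (s = &l->stats) instead of repeating l_ptr->stats. in every format argument. A trivial standalone illustration of that refactor, with type and field names abbreviated:

```c
#include <stdio.h>

/* Named statistics block, as link.h now defines struct tipc_stats. */
struct stats {
	unsigned int sent_info;
	unsigned int recv_info;
};

struct link {
	const char *name;
	struct stats stats;
};

static void show(const struct link *l)
{
	const struct stats *s = &l->stats;	/* short alias, as in tipc_link_stats() */

	printf("%s: TX %u RX %u\n", l->name, s->sent_info, s->recv_info);
}

int main(void)
{
	struct link l = {
		.name = "1.1.1:eth0-1.1.2:eth0",
		.stats = { .sent_info = 10, .recv_info = 12 },
	};

	show(&l);
	return 0;
}
```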
- * If data has been lost, linearize buffer and prepend an error message - * - * Returns length of print buffer data string (including trailing NUL) - */ -int tipc_printbuf_validate(struct print_buf *pb) -{ - char *err = "\n\n*** PRINT BUFFER OVERFLOW ***\n\n"; - char *cp_buf; - struct print_buf cb; - - if (!pb->buf) - return 0; - - if (pb->buf[pb->size - 1] == 0) { - cp_buf = kmalloc(pb->size, GFP_ATOMIC); - if (cp_buf) { - tipc_printbuf_init(&cb, cp_buf, pb->size); - tipc_printbuf_move(&cb, pb); - tipc_printbuf_move(pb, &cb); - kfree(cp_buf); - memcpy(pb->buf, err, strlen(err)); - } else { - tipc_printbuf_reset(pb); - tipc_printf(pb, err); - } - } - return pb->crs - pb->buf + 1; -} - -/** - * tipc_printbuf_move - move print buffer contents to another print buffer - * @pb_to: pointer to destination print buffer structure - * @pb_from: pointer to source print buffer structure - * - * Current contents of destination print buffer (if any) are discarded. - * Source print buffer becomes empty if a successful move occurs. - */ -static void tipc_printbuf_move(struct print_buf *pb_to, - struct print_buf *pb_from) -{ - int len; - - /* Handle the cases where contents can't be moved */ - if (!pb_to->buf) - return; - - if (!pb_from->buf) { - tipc_printbuf_reset(pb_to); - return; - } - - if (pb_to->size < pb_from->size) { - strcpy(pb_to->buf, "*** PRINT BUFFER MOVE ERROR ***"); - pb_to->buf[pb_to->size - 1] = ~0; - pb_to->crs = strchr(pb_to->buf, 0); - return; - } - - /* Copy data from char after cursor to end (if used) */ - len = pb_from->buf + pb_from->size - pb_from->crs - 2; - if ((pb_from->buf[pb_from->size - 1] == 0) && (len > 0)) { - strcpy(pb_to->buf, pb_from->crs + 1); - pb_to->crs = pb_to->buf + len; - } else - pb_to->crs = pb_to->buf; - - /* Copy data from start to cursor (always) */ - len = pb_from->crs - pb_from->buf; - strcpy(pb_to->crs, pb_from->buf); - pb_to->crs += len; - - tipc_printbuf_reset(pb_from); -} /** - * tipc_printf - append formatted output to print buffer - * @pb: pointer to print buffer + * tipc_snprintf - append formatted output to print buffer + * @buf: pointer to print buffer + * @len: buffer length * @fmt: formatted info to be printed */ -void tipc_printf(struct print_buf *pb, const char *fmt, ...) 
-{ - int chars_to_add; - int chars_left; - char save_char; - - spin_lock_bh(&print_lock); - - FORMAT(print_string, chars_to_add, fmt); - if (chars_to_add >= TIPC_PB_MAX_STR) - strcpy(print_string, "*** PRINT BUFFER STRING TOO LONG ***"); - - if (pb->buf) { - chars_left = pb->buf + pb->size - pb->crs - 1; - if (chars_to_add <= chars_left) { - strcpy(pb->crs, print_string); - pb->crs += chars_to_add; - } else if (chars_to_add >= (pb->size - 1)) { - strcpy(pb->buf, print_string + chars_to_add + 1 - - pb->size); - pb->crs = pb->buf + pb->size - 1; - } else { - strcpy(pb->buf, print_string + chars_left); - save_char = print_string[chars_left]; - print_string[chars_left] = 0; - strcpy(pb->crs, print_string); - print_string[chars_left] = save_char; - pb->crs = pb->buf + chars_to_add - chars_left; - } - } - - if (pb->echo) - printk("%s", print_string); - - spin_unlock_bh(&print_lock); -} - -/** - * tipc_log_resize - change the size of the TIPC log buffer - * @log_size: print buffer size to use - */ -int tipc_log_resize(int log_size) -{ - int res = 0; - - spin_lock_bh(&print_lock); - kfree(TIPC_LOG->buf); - TIPC_LOG->buf = NULL; - if (log_size) { - if (log_size < TIPC_PB_MIN_SIZE) - log_size = TIPC_PB_MIN_SIZE; - res = TIPC_LOG->echo; - tipc_printbuf_init(TIPC_LOG, kmalloc(log_size, GFP_ATOMIC), - log_size); - TIPC_LOG->echo = res; - res = !TIPC_LOG->buf; - } - spin_unlock_bh(&print_lock); - - return res; -} - -/** - * tipc_log_resize_cmd - reconfigure size of TIPC log buffer - */ -struct sk_buff *tipc_log_resize_cmd(const void *req_tlv_area, int req_tlv_space) -{ - u32 value; - - if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_UNSIGNED)) - return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR); - - value = ntohl(*(__be32 *)TLV_DATA(req_tlv_area)); - if (value > 32768) - return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE - " (log size must be 0-32768)"); - if (tipc_log_resize(value)) - return tipc_cfg_reply_error_string( - "unable to create specified log (log size is now 0)"); - return tipc_cfg_reply_none(); -} - -/** - * tipc_log_dump - capture TIPC log buffer contents in configuration message - */ -struct sk_buff *tipc_log_dump(void) +int tipc_snprintf(char *buf, int len, const char *fmt, ...) 
{ - struct sk_buff *reply; - - spin_lock_bh(&print_lock); - if (!TIPC_LOG->buf) { - spin_unlock_bh(&print_lock); - reply = tipc_cfg_reply_ultra_string("log not activated\n"); - } else if (tipc_printbuf_empty(TIPC_LOG)) { - spin_unlock_bh(&print_lock); - reply = tipc_cfg_reply_ultra_string("log is empty\n"); - } else { - struct tlv_desc *rep_tlv; - struct print_buf pb; - int str_len; + int i; + va_list args; - str_len = min(TIPC_LOG->size, 32768u); - spin_unlock_bh(&print_lock); - reply = tipc_cfg_reply_alloc(TLV_SPACE(str_len)); - if (reply) { - rep_tlv = (struct tlv_desc *)reply->data; - tipc_printbuf_init(&pb, TLV_DATA(rep_tlv), str_len); - spin_lock_bh(&print_lock); - tipc_printbuf_move(&pb, TIPC_LOG); - spin_unlock_bh(&print_lock); - str_len = strlen(TLV_DATA(rep_tlv)) + 1; - skb_put(reply, TLV_SPACE(str_len)); - TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len); - } - } - return reply; + va_start(args, fmt); + i = vscnprintf(buf, len, fmt, args); + va_end(args); + return i; } diff --git a/net/tipc/log.h b/net/tipc/log.h deleted file mode 100644 index d1f5eb967fd8..000000000000 --- a/net/tipc/log.h +++ /dev/null @@ -1,66 +0,0 @@ -/* - * net/tipc/log.h: Include file for TIPC print buffer routines - * - * Copyright (c) 1997-2006, Ericsson AB - * Copyright (c) 2005-2007, Wind River Systems - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the names of the copyright holders nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * Alternatively, this software may be distributed under the terms of the - * GNU General Public License ("GPL") version 2 as published by the Free - * Software Foundation. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE - * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef _TIPC_LOG_H -#define _TIPC_LOG_H - -/** - * struct print_buf - TIPC print buffer structure - * @buf: pointer to character array containing print buffer contents - * @size: size of character array - * @crs: pointer to first unused space in character array (i.e. 
final NUL) - * @echo: echo output to system console if non-zero - */ -struct print_buf { - char *buf; - u32 size; - char *crs; - int echo; -}; - -#define TIPC_PB_MIN_SIZE 64 /* minimum size for a print buffer's array */ -#define TIPC_PB_MAX_STR 512 /* max printable string (with trailing NUL) */ - -void tipc_printbuf_init(struct print_buf *pb, char *buf, u32 size); -int tipc_printbuf_validate(struct print_buf *pb); - -int tipc_log_resize(int log_size); - -struct sk_buff *tipc_log_resize_cmd(const void *req_tlv_area, - int req_tlv_space); -struct sk_buff *tipc_log_dump(void); - -#endif diff --git a/net/tipc/msg.c b/net/tipc/msg.c index deea0d232dca..f2db8a87d9c5 100644 --- a/net/tipc/msg.c +++ b/net/tipc/msg.c @@ -109,245 +109,3 @@ int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect, *buf = NULL; return -EFAULT; } - -#ifdef CONFIG_TIPC_DEBUG -void tipc_msg_dbg(struct print_buf *buf, struct tipc_msg *msg, const char *str) -{ - u32 usr = msg_user(msg); - tipc_printf(buf, KERN_DEBUG); - tipc_printf(buf, str); - - switch (usr) { - case MSG_BUNDLER: - tipc_printf(buf, "BNDL::"); - tipc_printf(buf, "MSGS(%u):", msg_msgcnt(msg)); - break; - case BCAST_PROTOCOL: - tipc_printf(buf, "BCASTP::"); - break; - case MSG_FRAGMENTER: - tipc_printf(buf, "FRAGM::"); - switch (msg_type(msg)) { - case FIRST_FRAGMENT: - tipc_printf(buf, "FIRST:"); - break; - case FRAGMENT: - tipc_printf(buf, "BODY:"); - break; - case LAST_FRAGMENT: - tipc_printf(buf, "LAST:"); - break; - default: - tipc_printf(buf, "UNKNOWN:%x", msg_type(msg)); - - } - tipc_printf(buf, "NO(%u/%u):", msg_long_msgno(msg), - msg_fragm_no(msg)); - break; - case TIPC_LOW_IMPORTANCE: - case TIPC_MEDIUM_IMPORTANCE: - case TIPC_HIGH_IMPORTANCE: - case TIPC_CRITICAL_IMPORTANCE: - tipc_printf(buf, "DAT%u:", msg_user(msg)); - if (msg_short(msg)) { - tipc_printf(buf, "CON:"); - break; - } - switch (msg_type(msg)) { - case TIPC_CONN_MSG: - tipc_printf(buf, "CON:"); - break; - case TIPC_MCAST_MSG: - tipc_printf(buf, "MCST:"); - break; - case TIPC_NAMED_MSG: - tipc_printf(buf, "NAM:"); - break; - case TIPC_DIRECT_MSG: - tipc_printf(buf, "DIR:"); - break; - default: - tipc_printf(buf, "UNKNOWN TYPE %u", msg_type(msg)); - } - if (msg_reroute_cnt(msg)) - tipc_printf(buf, "REROUTED(%u):", - msg_reroute_cnt(msg)); - break; - case NAME_DISTRIBUTOR: - tipc_printf(buf, "NMD::"); - switch (msg_type(msg)) { - case PUBLICATION: - tipc_printf(buf, "PUBL(%u):", (msg_size(msg) - msg_hdr_sz(msg)) / 20); /* Items */ - break; - case WITHDRAWAL: - tipc_printf(buf, "WDRW:"); - break; - default: - tipc_printf(buf, "UNKNOWN:%x", msg_type(msg)); - } - if (msg_reroute_cnt(msg)) - tipc_printf(buf, "REROUTED(%u):", - msg_reroute_cnt(msg)); - break; - case CONN_MANAGER: - tipc_printf(buf, "CONN_MNG:"); - switch (msg_type(msg)) { - case CONN_PROBE: - tipc_printf(buf, "PROBE:"); - break; - case CONN_PROBE_REPLY: - tipc_printf(buf, "PROBE_REPLY:"); - break; - case CONN_ACK: - tipc_printf(buf, "CONN_ACK:"); - tipc_printf(buf, "ACK(%u):", msg_msgcnt(msg)); - break; - default: - tipc_printf(buf, "UNKNOWN TYPE:%x", msg_type(msg)); - } - if (msg_reroute_cnt(msg)) - tipc_printf(buf, "REROUTED(%u):", msg_reroute_cnt(msg)); - break; - case LINK_PROTOCOL: - switch (msg_type(msg)) { - case STATE_MSG: - tipc_printf(buf, "STATE:"); - tipc_printf(buf, "%s:", msg_probe(msg) ? 
"PRB" : ""); - tipc_printf(buf, "NXS(%u):", msg_next_sent(msg)); - tipc_printf(buf, "GAP(%u):", msg_seq_gap(msg)); - tipc_printf(buf, "LSTBC(%u):", msg_last_bcast(msg)); - break; - case RESET_MSG: - tipc_printf(buf, "RESET:"); - if (msg_size(msg) != msg_hdr_sz(msg)) - tipc_printf(buf, "BEAR:%s:", msg_data(msg)); - break; - case ACTIVATE_MSG: - tipc_printf(buf, "ACTIVATE:"); - break; - default: - tipc_printf(buf, "UNKNOWN TYPE:%x", msg_type(msg)); - } - tipc_printf(buf, "PLANE(%c):", msg_net_plane(msg)); - tipc_printf(buf, "SESS(%u):", msg_session(msg)); - break; - case CHANGEOVER_PROTOCOL: - tipc_printf(buf, "TUNL:"); - switch (msg_type(msg)) { - case DUPLICATE_MSG: - tipc_printf(buf, "DUPL:"); - break; - case ORIGINAL_MSG: - tipc_printf(buf, "ORIG:"); - tipc_printf(buf, "EXP(%u)", msg_msgcnt(msg)); - break; - default: - tipc_printf(buf, "UNKNOWN TYPE:%x", msg_type(msg)); - } - break; - case LINK_CONFIG: - tipc_printf(buf, "CFG:"); - switch (msg_type(msg)) { - case DSC_REQ_MSG: - tipc_printf(buf, "DSC_REQ:"); - break; - case DSC_RESP_MSG: - tipc_printf(buf, "DSC_RESP:"); - break; - default: - tipc_printf(buf, "UNKNOWN TYPE:%x:", msg_type(msg)); - break; - } - break; - default: - tipc_printf(buf, "UNKNOWN USER:"); - } - - switch (usr) { - case CONN_MANAGER: - case TIPC_LOW_IMPORTANCE: - case TIPC_MEDIUM_IMPORTANCE: - case TIPC_HIGH_IMPORTANCE: - case TIPC_CRITICAL_IMPORTANCE: - switch (msg_errcode(msg)) { - case TIPC_OK: - break; - case TIPC_ERR_NO_NAME: - tipc_printf(buf, "NO_NAME:"); - break; - case TIPC_ERR_NO_PORT: - tipc_printf(buf, "NO_PORT:"); - break; - case TIPC_ERR_NO_NODE: - tipc_printf(buf, "NO_PROC:"); - break; - case TIPC_ERR_OVERLOAD: - tipc_printf(buf, "OVERLOAD:"); - break; - case TIPC_CONN_SHUTDOWN: - tipc_printf(buf, "SHUTDOWN:"); - break; - default: - tipc_printf(buf, "UNKNOWN ERROR(%x):", - msg_errcode(msg)); - } - default: - break; - } - - tipc_printf(buf, "HZ(%u):", msg_hdr_sz(msg)); - tipc_printf(buf, "SZ(%u):", msg_size(msg)); - tipc_printf(buf, "SQNO(%u):", msg_seqno(msg)); - - if (msg_non_seq(msg)) - tipc_printf(buf, "NOSEQ:"); - else - tipc_printf(buf, "ACK(%u):", msg_ack(msg)); - tipc_printf(buf, "BACK(%u):", msg_bcast_ack(msg)); - tipc_printf(buf, "PRND(%x)", msg_prevnode(msg)); - - if (msg_isdata(msg)) { - if (msg_named(msg)) { - tipc_printf(buf, "NTYP(%u):", msg_nametype(msg)); - tipc_printf(buf, "NINST(%u)", msg_nameinst(msg)); - } - } - - if ((usr != LINK_PROTOCOL) && (usr != LINK_CONFIG) && - (usr != MSG_BUNDLER)) { - if (!msg_short(msg)) { - tipc_printf(buf, ":ORIG(%x:%u):", - msg_orignode(msg), msg_origport(msg)); - tipc_printf(buf, ":DEST(%x:%u):", - msg_destnode(msg), msg_destport(msg)); - } else { - tipc_printf(buf, ":OPRT(%u):", msg_origport(msg)); - tipc_printf(buf, ":DPRT(%u):", msg_destport(msg)); - } - } - if (msg_user(msg) == NAME_DISTRIBUTOR) { - tipc_printf(buf, ":ONOD(%x):", msg_orignode(msg)); - tipc_printf(buf, ":DNOD(%x):", msg_destnode(msg)); - } - - if (msg_user(msg) == LINK_CONFIG) { - struct tipc_media_addr orig; - - tipc_printf(buf, ":DDOM(%x):", msg_dest_domain(msg)); - tipc_printf(buf, ":NETID(%u):", msg_bc_netid(msg)); - memcpy(orig.value, msg_media_addr(msg), sizeof(orig.value)); - orig.media_id = 0; - orig.broadcast = 0; - tipc_media_addr_printf(buf, &orig); - } - if (msg_user(msg) == BCAST_PROTOCOL) { - tipc_printf(buf, "BCNACK:AFTER(%u):", msg_bcgap_after(msg)); - tipc_printf(buf, "TO(%u):", msg_bcgap_to(msg)); - } - tipc_printf(buf, "\n"); - if ((usr == CHANGEOVER_PROTOCOL) && (msg_msgcnt(msg))) - tipc_msg_dbg(buf, 
msg_get_wrapped(msg), " /"); - if ((usr == MSG_FRAGMENTER) && (msg_type(msg) == FIRST_FRAGMENT)) - tipc_msg_dbg(buf, msg_get_wrapped(msg), " /"); -} -#endif diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c index 158318e67b08..55d3928dfd67 100644 --- a/net/tipc/name_distr.c +++ b/net/tipc/name_distr.c @@ -161,7 +161,7 @@ void tipc_named_publish(struct publication *publ) buf = named_prepare_buf(PUBLICATION, ITEM_SIZE, 0); if (!buf) { - warn("Publication distribution failure\n"); + pr_warn("Publication distribution failure\n"); return; } @@ -186,7 +186,7 @@ void tipc_named_withdraw(struct publication *publ) buf = named_prepare_buf(WITHDRAWAL, ITEM_SIZE, 0); if (!buf) { - warn("Withdrawal distribution failure\n"); + pr_warn("Withdrawal distribution failure\n"); return; } @@ -213,7 +213,7 @@ static void named_distribute(struct list_head *message_list, u32 node, rest -= left; buf = named_prepare_buf(PUBLICATION, left, node); if (!buf) { - warn("Bulk publication failure\n"); + pr_warn("Bulk publication failure\n"); return; } item = (struct distr_item *)msg_data(buf_msg(buf)); @@ -283,9 +283,10 @@ static void named_purge_publ(struct publication *publ) write_unlock_bh(&tipc_nametbl_lock); if (p != publ) { - err("Unable to remove publication from failed node\n" - "(type=%u, lower=%u, node=0x%x, ref=%u, key=%u)\n", - publ->type, publ->lower, publ->node, publ->ref, publ->key); + pr_err("Unable to remove publication from failed node\n" + " (type=%u, lower=%u, node=0x%x, ref=%u, key=%u)\n", + publ->type, publ->lower, publ->node, publ->ref, + publ->key); } kfree(p); @@ -329,14 +330,14 @@ void tipc_named_recv(struct sk_buff *buf) tipc_nodesub_unsubscribe(&publ->subscr); kfree(publ); } else { - err("Unable to remove publication by node 0x%x\n" - "(type=%u, lower=%u, ref=%u, key=%u)\n", - msg_orignode(msg), - ntohl(item->type), ntohl(item->lower), - ntohl(item->ref), ntohl(item->key)); + pr_err("Unable to remove publication by node 0x%x\n" + " (type=%u, lower=%u, ref=%u, key=%u)\n", + msg_orignode(msg), ntohl(item->type), + ntohl(item->lower), ntohl(item->ref), + ntohl(item->key)); } } else { - warn("Unrecognized name table message received\n"); + pr_warn("Unrecognized name table message received\n"); } item++; } diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c index 010f24a59da2..360c478b0b53 100644 --- a/net/tipc/name_table.c +++ b/net/tipc/name_table.c @@ -126,7 +126,7 @@ static struct publication *publ_create(u32 type, u32 lower, u32 upper, { struct publication *publ = kzalloc(sizeof(*publ), GFP_ATOMIC); if (publ == NULL) { - warn("Publication creation failure, no memory\n"); + pr_warn("Publication creation failure, no memory\n"); return NULL; } @@ -163,7 +163,7 @@ static struct name_seq *tipc_nameseq_create(u32 type, struct hlist_head *seq_hea struct sub_seq *sseq = tipc_subseq_alloc(1); if (!nseq || !sseq) { - warn("Name sequence creation failed, no memory\n"); + pr_warn("Name sequence creation failed, no memory\n"); kfree(nseq); kfree(sseq); return NULL; @@ -191,7 +191,7 @@ static void nameseq_delete_empty(struct name_seq *seq) } } -/* +/** * nameseq_find_subseq - find sub-sequence (if any) matching a name instance * * Very time-critical, so binary searches through sub-sequence array. 
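Worth noting from the log.c hunk above: the new tipc_snprintf() is a thin wrapper around vscnprintf(), which, unlike vsnprintf(), returns the number of characters actually stored (at most len - 1) rather than the length the output would have needed. That property keeps the buf + ret / len - ret bookkeeping of its callers in bounds even when a line is clipped, and because it formats straight into the caller's buffer there is no longer any need for the shared print_string array and print_lock that the removed tipc_printf() relied on. A userspace equivalent built on vsnprintf(), with an illustrative helper name:

```c
#include <stdarg.h>
#include <stdio.h>

/* vscnprintf()-like helper: report what was stored, not what was wanted. */
static int scnprintf_u(char *buf, int len, const char *fmt, ...)
{
	va_list args;
	int n;

	if (len <= 0)
		return 0;
	va_start(args, fmt);
	n = vsnprintf(buf, len, fmt, args);
	va_end(args);
	return n < len ? n : len - 1;
}

int main(void)
{
	char tiny[8];
	int ret = scnprintf_u(tiny, sizeof(tiny), "TIPC version %s\n", "2.0.0");

	/* ret is 7 (the bytes stored), not the 19 the full string needed. */
	printf("stored %d bytes: \"%s\"\n", ret, tiny);
	return 0;
}
```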
@@ -263,8 +263,8 @@ static struct publication *tipc_nameseq_insert_publ(struct name_seq *nseq, /* Lower end overlaps existing entry => need an exact match */ if ((sseq->lower != lower) || (sseq->upper != upper)) { - warn("Cannot publish {%u,%u,%u}, overlap error\n", - type, lower, upper); + pr_warn("Cannot publish {%u,%u,%u}, overlap error\n", + type, lower, upper); return NULL; } @@ -286,8 +286,8 @@ static struct publication *tipc_nameseq_insert_publ(struct name_seq *nseq, /* Fail if upper end overlaps into an existing entry */ if ((inspos < nseq->first_free) && (upper >= nseq->sseqs[inspos].lower)) { - warn("Cannot publish {%u,%u,%u}, overlap error\n", - type, lower, upper); + pr_warn("Cannot publish {%u,%u,%u}, overlap error\n", + type, lower, upper); return NULL; } @@ -296,8 +296,8 @@ static struct publication *tipc_nameseq_insert_publ(struct name_seq *nseq, struct sub_seq *sseqs = tipc_subseq_alloc(nseq->alloc * 2); if (!sseqs) { - warn("Cannot publish {%u,%u,%u}, no memory\n", - type, lower, upper); + pr_warn("Cannot publish {%u,%u,%u}, no memory\n", + type, lower, upper); return NULL; } memcpy(sseqs, nseq->sseqs, @@ -309,8 +309,8 @@ static struct publication *tipc_nameseq_insert_publ(struct name_seq *nseq, info = kzalloc(sizeof(*info), GFP_ATOMIC); if (!info) { - warn("Cannot publish {%u,%u,%u}, no memory\n", - type, lower, upper); + pr_warn("Cannot publish {%u,%u,%u}, no memory\n", + type, lower, upper); return NULL; } @@ -435,7 +435,7 @@ found: } /** - * tipc_nameseq_subscribe: attach a subscription, and issue + * tipc_nameseq_subscribe - attach a subscription, and issue * the prescribed number of events if there is any sub- * sequence overlapping with the requested sequence */ @@ -492,8 +492,8 @@ struct publication *tipc_nametbl_insert_publ(u32 type, u32 lower, u32 upper, if ((scope < TIPC_ZONE_SCOPE) || (scope > TIPC_NODE_SCOPE) || (lower > upper)) { - dbg("Failed to publish illegal {%u,%u,%u} with scope %u\n", - type, lower, upper, scope); + pr_debug("Failed to publish illegal {%u,%u,%u} with scope %u\n", + type, lower, upper, scope); return NULL; } @@ -520,7 +520,7 @@ struct publication *tipc_nametbl_remove_publ(u32 type, u32 lower, return publ; } -/* +/** * tipc_nametbl_translate - perform name translation * * On entry, 'destnode' is the search domain used during translation. 
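The dbg() call converted above becomes pr_debug(), which costs nothing in a normal build: it only produces output when the file is compiled with DEBUG defined, or at runtime via dynamic debug when CONFIG_DYNAMIC_DEBUG is enabled, so the CONFIG_TIPC_DEBUG-only dbg() macro is no longer needed. A userspace model of the compile-time half of that behaviour:

```c
#include <stdio.h>

/* Build with -DDEBUG to see the message, mirroring how pr_debug() only
 * emits when DEBUG is defined for the file (or dynamic debug enables it). */
#ifdef DEBUG
#define pr_debug(fmt, ...) fprintf(stderr, "tipc: " fmt, ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) do { } while (0)
#endif

int main(void)
{
	pr_debug("Failed to publish illegal {%u,%u,%u} with scope %u\n",
		 1u, 100u, 200u, 1u);
	return 0;
}
```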
@@ -668,8 +668,8 @@ struct publication *tipc_nametbl_publish(u32 type, u32 lower, u32 upper, struct publication *publ; if (table.local_publ_count >= tipc_max_publications) { - warn("Publication failed, local publication limit reached (%u)\n", - tipc_max_publications); + pr_warn("Publication failed, local publication limit reached (%u)\n", + tipc_max_publications); return NULL; } @@ -702,9 +702,9 @@ int tipc_nametbl_withdraw(u32 type, u32 lower, u32 ref, u32 key) return 1; } write_unlock_bh(&tipc_nametbl_lock); - err("Unable to remove local publication\n" - "(type=%u, lower=%u, ref=%u, key=%u)\n", - type, lower, ref, key); + pr_err("Unable to remove local publication\n" + "(type=%u, lower=%u, ref=%u, key=%u)\n", + type, lower, ref, key); return 0; } @@ -725,8 +725,8 @@ void tipc_nametbl_subscribe(struct tipc_subscription *s) tipc_nameseq_subscribe(seq, s); spin_unlock_bh(&seq->lock); } else { - warn("Failed to create subscription for {%u,%u,%u}\n", - s->seq.type, s->seq.lower, s->seq.upper); + pr_warn("Failed to create subscription for {%u,%u,%u}\n", + s->seq.type, s->seq.lower, s->seq.upper); } write_unlock_bh(&tipc_nametbl_lock); } @@ -751,21 +751,22 @@ void tipc_nametbl_unsubscribe(struct tipc_subscription *s) /** - * subseq_list: print specified sub-sequence contents into the given buffer + * subseq_list - print specified sub-sequence contents into the given buffer */ -static void subseq_list(struct sub_seq *sseq, struct print_buf *buf, u32 depth, +static int subseq_list(struct sub_seq *sseq, char *buf, int len, u32 depth, u32 index) { char portIdStr[27]; const char *scope_str[] = {"", " zone", " cluster", " node"}; struct publication *publ; struct name_info *info; + int ret; - tipc_printf(buf, "%-10u %-10u ", sseq->lower, sseq->upper); + ret = tipc_snprintf(buf, len, "%-10u %-10u ", sseq->lower, sseq->upper); if (depth == 2) { - tipc_printf(buf, "\n"); - return; + ret += tipc_snprintf(buf - ret, len + ret, "\n"); + return ret; } info = sseq->info; @@ -774,52 +775,58 @@ static void subseq_list(struct sub_seq *sseq, struct print_buf *buf, u32 depth, sprintf(portIdStr, "<%u.%u.%u:%u>", tipc_zone(publ->node), tipc_cluster(publ->node), tipc_node(publ->node), publ->ref); - tipc_printf(buf, "%-26s ", portIdStr); + ret += tipc_snprintf(buf + ret, len - ret, "%-26s ", portIdStr); if (depth > 3) { - tipc_printf(buf, "%-10u %s", publ->key, - scope_str[publ->scope]); + ret += tipc_snprintf(buf + ret, len - ret, "%-10u %s", + publ->key, scope_str[publ->scope]); } if (!list_is_last(&publ->zone_list, &info->zone_list)) - tipc_printf(buf, "\n%33s", " "); + ret += tipc_snprintf(buf + ret, len - ret, + "\n%33s", " "); }; - tipc_printf(buf, "\n"); + ret += tipc_snprintf(buf + ret, len - ret, "\n"); + return ret; } /** - * nameseq_list: print specified name sequence contents into the given buffer + * nameseq_list - print specified name sequence contents into the given buffer */ -static void nameseq_list(struct name_seq *seq, struct print_buf *buf, u32 depth, +static int nameseq_list(struct name_seq *seq, char *buf, int len, u32 depth, u32 type, u32 lowbound, u32 upbound, u32 index) { struct sub_seq *sseq; char typearea[11]; + int ret = 0; if (seq->first_free == 0) - return; + return 0; sprintf(typearea, "%-10u", seq->type); if (depth == 1) { - tipc_printf(buf, "%s\n", typearea); - return; + ret += tipc_snprintf(buf, len, "%s\n", typearea); + return ret; } for (sseq = seq->sseqs; sseq != &seq->sseqs[seq->first_free]; sseq++) { if ((lowbound <= sseq->upper) && (upbound >= sseq->lower)) { - tipc_printf(buf, 
"%s ", typearea); + ret += tipc_snprintf(buf + ret, len - ret, "%s ", + typearea); spin_lock_bh(&seq->lock); - subseq_list(sseq, buf, depth, index); + ret += subseq_list(sseq, buf + ret, len - ret, + depth, index); spin_unlock_bh(&seq->lock); sprintf(typearea, "%10s", " "); } } + return ret; } /** * nametbl_header - print name table header into the given buffer */ -static void nametbl_header(struct print_buf *buf, u32 depth) +static int nametbl_header(char *buf, int len, u32 depth) { const char *header[] = { "Type ", @@ -829,24 +836,27 @@ static void nametbl_header(struct print_buf *buf, u32 depth) }; int i; + int ret = 0; if (depth > 4) depth = 4; for (i = 0; i < depth; i++) - tipc_printf(buf, header[i]); - tipc_printf(buf, "\n"); + ret += tipc_snprintf(buf + ret, len - ret, header[i]); + ret += tipc_snprintf(buf + ret, len - ret, "\n"); + return ret; } /** * nametbl_list - print specified name table contents into the given buffer */ -static void nametbl_list(struct print_buf *buf, u32 depth_info, +static int nametbl_list(char *buf, int len, u32 depth_info, u32 type, u32 lowbound, u32 upbound) { struct hlist_head *seq_head; struct hlist_node *seq_node; struct name_seq *seq; int all_types; + int ret = 0; u32 depth; u32 i; @@ -854,65 +864,69 @@ static void nametbl_list(struct print_buf *buf, u32 depth_info, depth = (depth_info & ~TIPC_NTQ_ALLTYPES); if (depth == 0) - return; + return 0; if (all_types) { /* display all entries in name table to specified depth */ - nametbl_header(buf, depth); + ret += nametbl_header(buf, len, depth); lowbound = 0; upbound = ~0; for (i = 0; i < tipc_nametbl_size; i++) { seq_head = &table.types[i]; hlist_for_each_entry(seq, seq_node, seq_head, ns_list) { - nameseq_list(seq, buf, depth, seq->type, - lowbound, upbound, i); + ret += nameseq_list(seq, buf + ret, len - ret, + depth, seq->type, + lowbound, upbound, i); } } } else { /* display only the sequence that matches the specified type */ if (upbound < lowbound) { - tipc_printf(buf, "invalid name sequence specified\n"); - return; + ret += tipc_snprintf(buf + ret, len - ret, + "invalid name sequence specified\n"); + return ret; } - nametbl_header(buf, depth); + ret += nametbl_header(buf + ret, len - ret, depth); i = hash(type); seq_head = &table.types[i]; hlist_for_each_entry(seq, seq_node, seq_head, ns_list) { if (seq->type == type) { - nameseq_list(seq, buf, depth, type, - lowbound, upbound, i); + ret += nameseq_list(seq, buf + ret, len - ret, + depth, type, + lowbound, upbound, i); break; } } } + return ret; } -#define MAX_NAME_TBL_QUERY 32768 - struct sk_buff *tipc_nametbl_get(const void *req_tlv_area, int req_tlv_space) { struct sk_buff *buf; struct tipc_name_table_query *argv; struct tlv_desc *rep_tlv; - struct print_buf b; + char *pb; + int pb_len; int str_len; if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_NAME_TBL_QUERY)) return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR); - buf = tipc_cfg_reply_alloc(TLV_SPACE(MAX_NAME_TBL_QUERY)); + buf = tipc_cfg_reply_alloc(TLV_SPACE(ULTRA_STRING_MAX_LEN)); if (!buf) return NULL; rep_tlv = (struct tlv_desc *)buf->data; - tipc_printbuf_init(&b, TLV_DATA(rep_tlv), MAX_NAME_TBL_QUERY); + pb = TLV_DATA(rep_tlv); + pb_len = ULTRA_STRING_MAX_LEN; argv = (struct tipc_name_table_query *)TLV_DATA(req_tlv_area); read_lock_bh(&tipc_nametbl_lock); - nametbl_list(&b, ntohl(argv->depth), ntohl(argv->type), - ntohl(argv->lowbound), ntohl(argv->upbound)); + str_len = nametbl_list(pb, pb_len, ntohl(argv->depth), + ntohl(argv->type), + ntohl(argv->lowbound), 
ntohl(argv->upbound)); read_unlock_bh(&tipc_nametbl_lock); - str_len = tipc_printbuf_validate(&b); - + str_len += 1; /* for "\0" */ skb_put(buf, TLV_SPACE(str_len)); TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len); @@ -940,8 +954,10 @@ void tipc_nametbl_stop(void) /* Verify name table is empty, then release it */ write_lock_bh(&tipc_nametbl_lock); for (i = 0; i < tipc_nametbl_size; i++) { - if (!hlist_empty(&table.types[i])) - err("tipc_nametbl_stop(): hash chain %u is non-null\n", i); + if (hlist_empty(&table.types[i])) + continue; + pr_err("nametbl_stop(): orphaned hash chain detected\n"); + break; } kfree(table.types); table.types = NULL; diff --git a/net/tipc/net.c b/net/tipc/net.c index 7c236c89cf5e..5b5cea259caf 100644 --- a/net/tipc/net.c +++ b/net/tipc/net.c @@ -184,9 +184,9 @@ int tipc_net_start(u32 addr) tipc_cfg_reinit(); - info("Started in network mode\n"); - info("Own node address %s, network identity %u\n", - tipc_addr_string_fill(addr_string, tipc_own_addr), tipc_net_id); + pr_info("Started in network mode\n"); + pr_info("Own node address %s, network identity %u\n", + tipc_addr_string_fill(addr_string, tipc_own_addr), tipc_net_id); return 0; } @@ -202,5 +202,5 @@ void tipc_net_stop(void) list_for_each_entry_safe(node, t_node, &tipc_node_list, list) tipc_node_delete(node); write_unlock_bh(&tipc_net_lock); - info("Left network mode\n"); + pr_info("Left network mode\n"); } diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c index 7bda8e3d1398..47a839df27dc 100644 --- a/net/tipc/netlink.c +++ b/net/tipc/netlink.c @@ -90,7 +90,7 @@ int tipc_netlink_start(void) res = genl_register_family_with_ops(&tipc_genl_family, &tipc_genl_ops, 1); if (res) { - err("Failed to register netlink interface\n"); + pr_err("Failed to register netlink interface\n"); return res; } diff --git a/net/tipc/node.c b/net/tipc/node.c index d4fd341e6e0d..d21db204e25a 100644 --- a/net/tipc/node.c +++ b/net/tipc/node.c @@ -105,7 +105,7 @@ struct tipc_node *tipc_node_create(u32 addr) n_ptr = kzalloc(sizeof(*n_ptr), GFP_ATOMIC); if (!n_ptr) { spin_unlock_bh(&node_create_lock); - warn("Node creation failed, no memory\n"); + pr_warn("Node creation failed, no memory\n"); return NULL; } @@ -151,8 +151,8 @@ void tipc_node_link_up(struct tipc_node *n_ptr, struct tipc_link *l_ptr) n_ptr->working_links++; - info("Established link <%s> on network plane %c\n", - l_ptr->name, l_ptr->b_ptr->net_plane); + pr_info("Established link <%s> on network plane %c\n", + l_ptr->name, l_ptr->b_ptr->net_plane); if (!active[0]) { active[0] = active[1] = l_ptr; @@ -160,7 +160,7 @@ void tipc_node_link_up(struct tipc_node *n_ptr, struct tipc_link *l_ptr) return; } if (l_ptr->priority < active[0]->priority) { - info("New link <%s> becomes standby\n", l_ptr->name); + pr_info("New link <%s> becomes standby\n", l_ptr->name); return; } tipc_link_send_duplicate(active[0], l_ptr); @@ -168,9 +168,9 @@ void tipc_node_link_up(struct tipc_node *n_ptr, struct tipc_link *l_ptr) active[0] = l_ptr; return; } - info("Old link <%s> becomes standby\n", active[0]->name); + pr_info("Old link <%s> becomes standby\n", active[0]->name); if (active[1] != active[0]) - info("Old link <%s> becomes standby\n", active[1]->name); + pr_info("Old link <%s> becomes standby\n", active[1]->name); active[0] = active[1] = l_ptr; } @@ -211,11 +211,11 @@ void tipc_node_link_down(struct tipc_node *n_ptr, struct tipc_link *l_ptr) n_ptr->working_links--; if (!tipc_link_is_active(l_ptr)) { - info("Lost standby link <%s> on network plane %c\n", - l_ptr->name, 
l_ptr->b_ptr->net_plane); + pr_info("Lost standby link <%s> on network plane %c\n", + l_ptr->name, l_ptr->b_ptr->net_plane); return; } - info("Lost link <%s> on network plane %c\n", + pr_info("Lost link <%s> on network plane %c\n", l_ptr->name, l_ptr->b_ptr->net_plane); active = &n_ptr->active_links[0]; @@ -290,8 +290,8 @@ static void node_lost_contact(struct tipc_node *n_ptr) char addr_string[16]; u32 i; - info("Lost contact with %s\n", - tipc_addr_string_fill(addr_string, n_ptr->addr)); + pr_info("Lost contact with %s\n", + tipc_addr_string_fill(addr_string, n_ptr->addr)); /* Flush broadcast link info associated with lost node */ if (n_ptr->bclink.supported) { diff --git a/net/tipc/node_subscr.c b/net/tipc/node_subscr.c index 7a27344108fe..5e34b015da45 100644 --- a/net/tipc/node_subscr.c +++ b/net/tipc/node_subscr.c @@ -51,7 +51,8 @@ void tipc_nodesub_subscribe(struct tipc_node_subscr *node_sub, u32 addr, node_sub->node = tipc_node_find(addr); if (!node_sub->node) { - warn("Node subscription rejected, unknown node 0x%x\n", addr); + pr_warn("Node subscription rejected, unknown node 0x%x\n", + addr); return; } node_sub->handle_node_down = handle_down; diff --git a/net/tipc/port.c b/net/tipc/port.c index 2ad37a4db376..07c42fba672b 100644 --- a/net/tipc/port.c +++ b/net/tipc/port.c @@ -69,7 +69,7 @@ static u32 port_peerport(struct tipc_port *p_ptr) return msg_destport(&p_ptr->phdr); } -/* +/** * tipc_port_peer_msg - verify message was sent by connected port's peer * * Handles cases where the node's network address has changed from @@ -191,7 +191,7 @@ void tipc_port_recv_mcast(struct sk_buff *buf, struct tipc_port_list *dp) struct sk_buff *b = skb_clone(buf, GFP_ATOMIC); if (b == NULL) { - warn("Unable to deliver multicast message(s)\n"); + pr_warn("Unable to deliver multicast message(s)\n"); goto exit; } if ((index == 0) && (cnt != 0)) @@ -221,12 +221,12 @@ struct tipc_port *tipc_createport_raw(void *usr_handle, p_ptr = kzalloc(sizeof(*p_ptr), GFP_ATOMIC); if (!p_ptr) { - warn("Port creation failed, no memory\n"); + pr_warn("Port creation failed, no memory\n"); return NULL; } ref = tipc_ref_acquire(p_ptr, &p_ptr->lock); if (!ref) { - warn("Port creation failed, reference table exhausted\n"); + pr_warn("Port creation failed, ref. 
table exhausted\n"); kfree(p_ptr); return NULL; } @@ -581,67 +581,73 @@ exit: kfree_skb(buf); } -static void port_print(struct tipc_port *p_ptr, struct print_buf *buf, int full_id) +static int port_print(struct tipc_port *p_ptr, char *buf, int len, int full_id) { struct publication *publ; + int ret; if (full_id) - tipc_printf(buf, "<%u.%u.%u:%u>:", - tipc_zone(tipc_own_addr), tipc_cluster(tipc_own_addr), - tipc_node(tipc_own_addr), p_ptr->ref); + ret = tipc_snprintf(buf, len, "<%u.%u.%u:%u>:", + tipc_zone(tipc_own_addr), + tipc_cluster(tipc_own_addr), + tipc_node(tipc_own_addr), p_ptr->ref); else - tipc_printf(buf, "%-10u:", p_ptr->ref); + ret = tipc_snprintf(buf, len, "%-10u:", p_ptr->ref); if (p_ptr->connected) { u32 dport = port_peerport(p_ptr); u32 destnode = port_peernode(p_ptr); - tipc_printf(buf, " connected to <%u.%u.%u:%u>", - tipc_zone(destnode), tipc_cluster(destnode), - tipc_node(destnode), dport); + ret += tipc_snprintf(buf + ret, len - ret, + " connected to <%u.%u.%u:%u>", + tipc_zone(destnode), + tipc_cluster(destnode), + tipc_node(destnode), dport); if (p_ptr->conn_type != 0) - tipc_printf(buf, " via {%u,%u}", - p_ptr->conn_type, - p_ptr->conn_instance); + ret += tipc_snprintf(buf + ret, len - ret, + " via {%u,%u}", p_ptr->conn_type, + p_ptr->conn_instance); } else if (p_ptr->published) { - tipc_printf(buf, " bound to"); + ret += tipc_snprintf(buf + ret, len - ret, " bound to"); list_for_each_entry(publ, &p_ptr->publications, pport_list) { if (publ->lower == publ->upper) - tipc_printf(buf, " {%u,%u}", publ->type, - publ->lower); + ret += tipc_snprintf(buf + ret, len - ret, + " {%u,%u}", publ->type, + publ->lower); else - tipc_printf(buf, " {%u,%u,%u}", publ->type, - publ->lower, publ->upper); + ret += tipc_snprintf(buf + ret, len - ret, + " {%u,%u,%u}", publ->type, + publ->lower, publ->upper); } } - tipc_printf(buf, "\n"); + ret += tipc_snprintf(buf + ret, len - ret, "\n"); + return ret; } -#define MAX_PORT_QUERY 32768 - struct sk_buff *tipc_port_get_ports(void) { struct sk_buff *buf; struct tlv_desc *rep_tlv; - struct print_buf pb; + char *pb; + int pb_len; struct tipc_port *p_ptr; - int str_len; + int str_len = 0; - buf = tipc_cfg_reply_alloc(TLV_SPACE(MAX_PORT_QUERY)); + buf = tipc_cfg_reply_alloc(TLV_SPACE(ULTRA_STRING_MAX_LEN)); if (!buf) return NULL; rep_tlv = (struct tlv_desc *)buf->data; + pb = TLV_DATA(rep_tlv); + pb_len = ULTRA_STRING_MAX_LEN; - tipc_printbuf_init(&pb, TLV_DATA(rep_tlv), MAX_PORT_QUERY); spin_lock_bh(&tipc_port_list_lock); list_for_each_entry(p_ptr, &ports, port_list) { spin_lock_bh(p_ptr->lock); - port_print(p_ptr, &pb, 0); + str_len += port_print(p_ptr, pb, pb_len, 0); spin_unlock_bh(p_ptr->lock); } spin_unlock_bh(&tipc_port_list_lock); - str_len = tipc_printbuf_validate(&pb); - + str_len += 1; /* for "\0" */ skb_put(buf, TLV_SPACE(str_len)); TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len); @@ -906,11 +912,11 @@ int tipc_createport(void *usr_handle, up_ptr = kmalloc(sizeof(*up_ptr), GFP_ATOMIC); if (!up_ptr) { - warn("Port creation failed, no memory\n"); + pr_warn("Port creation failed, no memory\n"); return -ENOMEM; } - p_ptr = (struct tipc_port *)tipc_createport_raw(NULL, port_dispatcher, - port_wakeup, importance); + p_ptr = tipc_createport_raw(NULL, port_dispatcher, port_wakeup, + importance); if (!p_ptr) { kfree(up_ptr); return -ENOMEM; @@ -1078,8 +1084,7 @@ int tipc_disconnect_port(struct tipc_port *tp_ptr) if (tp_ptr->connected) { tp_ptr->connected = 0; /* let timer expire on it's own to avoid deadlock! 
*/ - tipc_nodesub_unsubscribe( - &((struct tipc_port *)tp_ptr)->subscription); + tipc_nodesub_unsubscribe(&tp_ptr->subscription); res = 0; } else { res = -ENOTCONN; @@ -1099,7 +1104,7 @@ int tipc_disconnect(u32 ref) p_ptr = tipc_port_lock(ref); if (!p_ptr) return -EINVAL; - res = tipc_disconnect_port((struct tipc_port *)p_ptr); + res = tipc_disconnect_port(p_ptr); tipc_port_unlock(p_ptr); return res; } diff --git a/net/tipc/port.h b/net/tipc/port.h index 98cbec9c4532..4660e3065790 100644 --- a/net/tipc/port.h +++ b/net/tipc/port.h @@ -79,6 +79,7 @@ typedef void (*tipc_continue_event) (void *usr_handle, u32 portref); * struct user_port - TIPC user port (used with native API) * @usr_handle: user-specified field * @ref: object reference to associated TIPC port + * * <various callback routines> */ struct user_port { diff --git a/net/tipc/ref.c b/net/tipc/ref.c index 5cada0e38e03..2a2a938dc22c 100644 --- a/net/tipc/ref.c +++ b/net/tipc/ref.c @@ -153,11 +153,11 @@ u32 tipc_ref_acquire(void *object, spinlock_t **lock) struct reference *entry = NULL; if (!object) { - err("Attempt to acquire reference to non-existent object\n"); + pr_err("Attempt to acquire ref. to non-existent obj\n"); return 0; } if (!tipc_ref_table.entries) { - err("Reference table not found during acquisition attempt\n"); + pr_err("Ref. table not found in acquisition attempt\n"); return 0; } @@ -211,7 +211,7 @@ void tipc_ref_discard(u32 ref) u32 index_mask; if (!tipc_ref_table.entries) { - err("Reference table not found during discard attempt\n"); + pr_err("Ref. table not found during discard attempt\n"); return; } @@ -222,11 +222,11 @@ void tipc_ref_discard(u32 ref) write_lock_bh(&ref_table_lock); if (!entry->object) { - err("Attempt to discard reference to non-existent object\n"); + pr_err("Attempt to discard ref. to non-existent obj\n"); goto exit; } if (entry->ref != ref) { - err("Attempt to discard non-existent reference\n"); + pr_err("Attempt to discard non-existent reference\n"); goto exit; } diff --git a/net/tipc/socket.c b/net/tipc/socket.c index 5577a447f531..09dc5b97e079 100644 --- a/net/tipc/socket.c +++ b/net/tipc/socket.c @@ -34,12 +34,12 @@ * POSSIBILITY OF SUCH DAMAGE. 
*/ -#include <linux/export.h> -#include <net/sock.h> - #include "core.h" #include "port.h" +#include <linux/export.h> +#include <net/sock.h> + #define SS_LISTENING -1 /* socket is listening */ #define SS_READY -2 /* socket is connectionless */ @@ -54,7 +54,7 @@ struct tipc_sock { }; #define tipc_sk(sk) ((struct tipc_sock *)(sk)) -#define tipc_sk_port(sk) ((struct tipc_port *)(tipc_sk(sk)->p)) +#define tipc_sk_port(sk) (tipc_sk(sk)->p) #define tipc_rx_ready(sock) (!skb_queue_empty(&sock->sk->sk_receive_queue) || \ (sock->state == SS_DISCONNECTING)) @@ -1699,9 +1699,8 @@ static int getsockopt(struct socket *sock, return put_user(sizeof(value), ol); } -/** - * Protocol switches for the various types of TIPC sockets - */ +/* Protocol switches for the various types of TIPC sockets */ + static const struct proto_ops msg_ops = { .owner = THIS_MODULE, .family = AF_TIPC, @@ -1788,13 +1787,13 @@ int tipc_socket_init(void) res = proto_register(&tipc_proto, 1); if (res) { - err("Failed to register TIPC protocol type\n"); + pr_err("Failed to register TIPC protocol type\n"); goto out; } res = sock_register(&tipc_family_ops); if (res) { - err("Failed to register TIPC socket type\n"); + pr_err("Failed to register TIPC socket type\n"); proto_unregister(&tipc_proto); goto out; } diff --git a/net/tipc/subscr.c b/net/tipc/subscr.c index f976e9cd6a72..5ed5965eb0be 100644 --- a/net/tipc/subscr.c +++ b/net/tipc/subscr.c @@ -305,8 +305,8 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, /* Refuse subscription if global limit exceeded */ if (atomic_read(&topsrv.subscription_count) >= tipc_max_subscriptions) { - warn("Subscription rejected, subscription limit reached (%u)\n", - tipc_max_subscriptions); + pr_warn("Subscription rejected, limit reached (%u)\n", + tipc_max_subscriptions); subscr_terminate(subscriber); return NULL; } @@ -314,7 +314,7 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, /* Allocate subscription object */ sub = kmalloc(sizeof(*sub), GFP_ATOMIC); if (!sub) { - warn("Subscription rejected, no memory\n"); + pr_warn("Subscription rejected, no memory\n"); subscr_terminate(subscriber); return NULL; } @@ -328,7 +328,7 @@ static struct tipc_subscription *subscr_subscribe(struct tipc_subscr *s, if ((!(sub->filter & TIPC_SUB_PORTS) == !(sub->filter & TIPC_SUB_SERVICE)) || (sub->seq.lower > sub->seq.upper)) { - warn("Subscription rejected, illegal request\n"); + pr_warn("Subscription rejected, illegal request\n"); kfree(sub); subscr_terminate(subscriber); return NULL; @@ -440,7 +440,7 @@ static void subscr_named_msg_event(void *usr_handle, /* Create subscriber object */ subscriber = kzalloc(sizeof(struct tipc_subscriber), GFP_ATOMIC); if (subscriber == NULL) { - warn("Subscriber rejected, no memory\n"); + pr_warn("Subscriber rejected, no memory\n"); return; } INIT_LIST_HEAD(&subscriber->subscription_list); @@ -458,7 +458,7 @@ static void subscr_named_msg_event(void *usr_handle, NULL, &subscriber->port_ref); if (subscriber->port_ref == 0) { - warn("Subscriber rejected, unable to create port\n"); + pr_warn("Subscriber rejected, unable to create port\n"); kfree(subscriber); return; } @@ -517,7 +517,7 @@ int tipc_subscr_start(void) return 0; failed: - err("Failed to create subscription service\n"); + pr_err("Failed to create subscription service\n"); return res; } diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index 641f2e47f165..79981d97bc9c 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -115,15 +115,24 @@ #include <net/checksum.h> 
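In the af_unix.c changes that follow, sockets without a bound address are no longer kept on a single shared chain; they are spread across a second set of UNIX_HASH_SIZE buckets selected by mixing the socket pointer. A standalone sketch of that bucket selection, assuming UNIX_HASH_SIZE is 256 (the value itself is not shown in this diff):

/* Standalone sketch of the pointer hash the patch below uses to choose an
 * "unbound" bucket in the upper half of the doubled unix_socket_table[].
 * UNIX_HASH_SIZE = 256 is an assumption; the hunks only show that the
 * table grows to 2 * UNIX_HASH_SIZE entries. */
#include <stdio.h>

#define UNIX_HASH_SIZE 256

static unsigned long unbound_bucket(const void *addr)
{
	unsigned long hash = (unsigned long)addr;

	hash ^= hash >> 16;
	hash ^= hash >> 8;
	hash %= UNIX_HASH_SIZE;
	return UNIX_HASH_SIZE + hash;	/* index of the chosen chain */
}

int main(void)
{
	int dummy[4];
	int i;

	for (i = 0; i < 4; i++)
		printf("object %p -> bucket %lu\n",
		       (void *)&dummy[i], unbound_bucket(&dummy[i]));
	return 0;
}

Spreading unbound sockets this way is what lets the /proc/net/unix and unix_diag walkers below iterate ARRAY_SIZE(unix_socket_table) chains uniformly instead of special-casing one overflow slot.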
#include <linux/security.h> -struct hlist_head unix_socket_table[UNIX_HASH_SIZE + 1]; +struct hlist_head unix_socket_table[2 * UNIX_HASH_SIZE]; EXPORT_SYMBOL_GPL(unix_socket_table); DEFINE_SPINLOCK(unix_table_lock); EXPORT_SYMBOL_GPL(unix_table_lock); static atomic_long_t unix_nr_socks; -#define unix_sockets_unbound (&unix_socket_table[UNIX_HASH_SIZE]) -#define UNIX_ABSTRACT(sk) (unix_sk(sk)->addr->hash != UNIX_HASH_SIZE) +static struct hlist_head *unix_sockets_unbound(void *addr) +{ + unsigned long hash = (unsigned long)addr; + + hash ^= hash >> 16; + hash ^= hash >> 8; + hash %= UNIX_HASH_SIZE; + return &unix_socket_table[UNIX_HASH_SIZE + hash]; +} + +#define UNIX_ABSTRACT(sk) (unix_sk(sk)->addr->hash < UNIX_HASH_SIZE) #ifdef CONFIG_SECURITY_NETWORK static void unix_get_secdata(struct scm_cookie *scm, struct sk_buff *skb) @@ -645,7 +654,7 @@ static struct sock *unix_create1(struct net *net, struct socket *sock) INIT_LIST_HEAD(&u->link); mutex_init(&u->readlock); /* single task reading lock */ init_waitqueue_head(&u->peer_wait); - unix_insert_socket(unix_sockets_unbound, sk); + unix_insert_socket(unix_sockets_unbound(sk), sk); out: if (sk == NULL) atomic_long_dec(&unix_nr_socks); @@ -2239,47 +2248,54 @@ static unsigned int unix_dgram_poll(struct file *file, struct socket *sock, } #ifdef CONFIG_PROC_FS -static struct sock *first_unix_socket(int *i) + +#define BUCKET_SPACE (BITS_PER_LONG - (UNIX_HASH_BITS + 1) - 1) + +#define get_bucket(x) ((x) >> BUCKET_SPACE) +#define get_offset(x) ((x) & ((1L << BUCKET_SPACE) - 1)) +#define set_bucket_offset(b, o) ((b) << BUCKET_SPACE | (o)) + +static struct sock *unix_from_bucket(struct seq_file *seq, loff_t *pos) { - for (*i = 0; *i <= UNIX_HASH_SIZE; (*i)++) { - if (!hlist_empty(&unix_socket_table[*i])) - return __sk_head(&unix_socket_table[*i]); + unsigned long offset = get_offset(*pos); + unsigned long bucket = get_bucket(*pos); + struct sock *sk; + unsigned long count = 0; + + for (sk = sk_head(&unix_socket_table[bucket]); sk; sk = sk_next(sk)) { + if (sock_net(sk) != seq_file_net(seq)) + continue; + if (++count == offset) + break; } - return NULL; + + return sk; } -static struct sock *next_unix_socket(int *i, struct sock *s) +static struct sock *unix_next_socket(struct seq_file *seq, + struct sock *sk, + loff_t *pos) { - struct sock *next = sk_next(s); - /* More in this chain? */ - if (next) - return next; - /* Look for next non-empty chain. */ - for ((*i)++; *i <= UNIX_HASH_SIZE; (*i)++) { - if (!hlist_empty(&unix_socket_table[*i])) - return __sk_head(&unix_socket_table[*i]); + unsigned long bucket; + + while (sk > (struct sock *)SEQ_START_TOKEN) { + sk = sk_next(sk); + if (!sk) + goto next_bucket; + if (sock_net(sk) == seq_file_net(seq)) + return sk; } - return NULL; -} -struct unix_iter_state { - struct seq_net_private p; - int i; -}; + do { + sk = unix_from_bucket(seq, pos); + if (sk) + return sk; -static struct sock *unix_seq_idx(struct seq_file *seq, loff_t pos) -{ - struct unix_iter_state *iter = seq->private; - loff_t off = 0; - struct sock *s; +next_bucket: + bucket = get_bucket(*pos) + 1; + *pos = set_bucket_offset(bucket, 1); + } while (bucket < ARRAY_SIZE(unix_socket_table)); - for (s = first_unix_socket(&iter->i); s; s = next_unix_socket(&iter->i, s)) { - if (sock_net(s) != seq_file_net(seq)) - continue; - if (off == pos) - return s; - ++off; - } return NULL; } @@ -2287,22 +2303,20 @@ static void *unix_seq_start(struct seq_file *seq, loff_t *pos) __acquires(unix_table_lock) { spin_lock(&unix_table_lock); - return *pos ? 
unix_seq_idx(seq, *pos - 1) : SEQ_START_TOKEN; + + if (!*pos) + return SEQ_START_TOKEN; + + if (get_bucket(*pos) >= ARRAY_SIZE(unix_socket_table)) + return NULL; + + return unix_next_socket(seq, NULL, pos); } static void *unix_seq_next(struct seq_file *seq, void *v, loff_t *pos) { - struct unix_iter_state *iter = seq->private; - struct sock *sk = v; ++*pos; - - if (v == SEQ_START_TOKEN) - sk = first_unix_socket(&iter->i); - else - sk = next_unix_socket(&iter->i, sk); - while (sk && (sock_net(sk) != seq_file_net(seq))) - sk = next_unix_socket(&iter->i, sk); - return sk; + return unix_next_socket(seq, v, pos); } static void unix_seq_stop(struct seq_file *seq, void *v) @@ -2365,7 +2379,7 @@ static const struct seq_operations unix_seq_ops = { static int unix_seq_open(struct inode *inode, struct file *file) { return seq_open_net(inode, file, &unix_seq_ops, - sizeof(struct unix_iter_state)); + sizeof(struct seq_net_private)); } static const struct file_operations unix_seq_fops = { diff --git a/net/unix/diag.c b/net/unix/diag.c index 47d3002737f5..750b13408449 100644 --- a/net/unix/diag.c +++ b/net/unix/diag.c @@ -8,40 +8,31 @@ #include <net/af_unix.h> #include <net/tcp_states.h> -#define UNIX_DIAG_PUT(skb, attrtype, attrlen) \ - RTA_DATA(__RTA_PUT(skb, attrtype, attrlen)) - static int sk_diag_dump_name(struct sock *sk, struct sk_buff *nlskb) { struct unix_address *addr = unix_sk(sk)->addr; - char *s; - - if (addr) { - s = UNIX_DIAG_PUT(nlskb, UNIX_DIAG_NAME, addr->len - sizeof(short)); - memcpy(s, addr->name->sun_path, addr->len - sizeof(short)); - } - return 0; + if (!addr) + return 0; -rtattr_failure: - return -EMSGSIZE; + return nla_put(nlskb, UNIX_DIAG_NAME, addr->len - sizeof(short), + addr->name->sun_path); } static int sk_diag_dump_vfs(struct sock *sk, struct sk_buff *nlskb) { struct dentry *dentry = unix_sk(sk)->path.dentry; - struct unix_diag_vfs *uv; if (dentry) { - uv = UNIX_DIAG_PUT(nlskb, UNIX_DIAG_VFS, sizeof(*uv)); - uv->udiag_vfs_ino = dentry->d_inode->i_ino; - uv->udiag_vfs_dev = dentry->d_sb->s_dev; + struct unix_diag_vfs uv = { + .udiag_vfs_ino = dentry->d_inode->i_ino, + .udiag_vfs_dev = dentry->d_sb->s_dev, + }; + + return nla_put(nlskb, UNIX_DIAG_VFS, sizeof(uv), &uv); } return 0; - -rtattr_failure: - return -EMSGSIZE; } static int sk_diag_dump_peer(struct sock *sk, struct sk_buff *nlskb) @@ -56,24 +47,28 @@ static int sk_diag_dump_peer(struct sock *sk, struct sk_buff *nlskb) unix_state_unlock(peer); sock_put(peer); - RTA_PUT_U32(nlskb, UNIX_DIAG_PEER, ino); + return nla_put_u32(nlskb, UNIX_DIAG_PEER, ino); } return 0; -rtattr_failure: - return -EMSGSIZE; } static int sk_diag_dump_icons(struct sock *sk, struct sk_buff *nlskb) { struct sk_buff *skb; + struct nlattr *attr; u32 *buf; int i; if (sk->sk_state == TCP_LISTEN) { spin_lock(&sk->sk_receive_queue.lock); - buf = UNIX_DIAG_PUT(nlskb, UNIX_DIAG_ICONS, - sk->sk_receive_queue.qlen * sizeof(u32)); + + attr = nla_reserve(nlskb, UNIX_DIAG_ICONS, + sk->sk_receive_queue.qlen * sizeof(u32)); + if (!attr) + goto errout; + + buf = nla_data(attr); i = 0; skb_queue_walk(&sk->sk_receive_queue, skb) { struct sock *req, *peer; @@ -94,43 +89,38 @@ static int sk_diag_dump_icons(struct sock *sk, struct sk_buff *nlskb) return 0; -rtattr_failure: +errout: spin_unlock(&sk->sk_receive_queue.lock); return -EMSGSIZE; } static int sk_diag_show_rqlen(struct sock *sk, struct sk_buff *nlskb) { - struct unix_diag_rqlen *rql; - - rql = UNIX_DIAG_PUT(nlskb, UNIX_DIAG_RQLEN, sizeof(*rql)); + struct unix_diag_rqlen rql; if (sk->sk_state == TCP_LISTEN) { 
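The reworked /proc/net/unix iterator above packs a bucket number and an in-bucket offset into the single loff_t position that the seq_file core hands back, reserving BUCKET_SPACE low bits for the offset. A standalone sketch of that encoding (UNIX_HASH_BITS = 8 is an assumption here; the three macros mirror the ones added in the hunk):

/* Standalone sketch of the bucket/offset packing used by the /proc/net/unix
 * iterator above. UNIX_HASH_BITS = 8 is assumed; the macros are copied from
 * the hunk. */
#include <stdio.h>

#define BITS_PER_LONG	(8 * (int)sizeof(long))
#define UNIX_HASH_BITS	8

#define BUCKET_SPACE	(BITS_PER_LONG - (UNIX_HASH_BITS + 1) - 1)
#define get_bucket(x)		((x) >> BUCKET_SPACE)
#define get_offset(x)		((x) & ((1L << BUCKET_SPACE) - 1))
#define set_bucket_offset(b, o)	((b) << BUCKET_SPACE | (o))

int main(void)
{
	long pos = set_bucket_offset(3L, 42L);

	printf("pos=%#lx -> bucket=%ld offset=%ld\n",
	       pos, get_bucket(pos), get_offset(pos));
	return 0;
}

The extra +1 in BUCKET_SPACE leaves room for the doubled table, and keeping the bucket in the saved position is what lets unix_seq_next() resume from the right chain rather than re-walking every bucket from the start on each read.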
- rql->udiag_rqueue = sk->sk_receive_queue.qlen; - rql->udiag_wqueue = sk->sk_max_ack_backlog; + rql.udiag_rqueue = sk->sk_receive_queue.qlen; + rql.udiag_wqueue = sk->sk_max_ack_backlog; } else { - rql->udiag_rqueue = (__u32)unix_inq_len(sk); - rql->udiag_wqueue = (__u32)unix_outq_len(sk); + rql.udiag_rqueue = (u32) unix_inq_len(sk); + rql.udiag_wqueue = (u32) unix_outq_len(sk); } - return 0; - -rtattr_failure: - return -EMSGSIZE; + return nla_put(nlskb, UNIX_DIAG_RQLEN, sizeof(rql), &rql); } static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_req *req, u32 pid, u32 seq, u32 flags, int sk_ino) { - unsigned char *b = skb_tail_pointer(skb); struct nlmsghdr *nlh; struct unix_diag_msg *rep; - nlh = NLMSG_PUT(skb, pid, seq, SOCK_DIAG_BY_FAMILY, sizeof(*rep)); - nlh->nlmsg_flags = flags; - - rep = NLMSG_DATA(nlh); + nlh = nlmsg_put(skb, pid, seq, SOCK_DIAG_BY_FAMILY, sizeof(*rep), + flags); + if (!nlh) + return -EMSGSIZE; + rep = nlmsg_data(nlh); rep->udiag_family = AF_UNIX; rep->udiag_type = sk->sk_type; rep->udiag_state = sk->sk_state; @@ -139,33 +129,32 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct unix_diag_r if ((req->udiag_show & UDIAG_SHOW_NAME) && sk_diag_dump_name(sk, skb)) - goto nlmsg_failure; + goto out_nlmsg_trim; if ((req->udiag_show & UDIAG_SHOW_VFS) && sk_diag_dump_vfs(sk, skb)) - goto nlmsg_failure; + goto out_nlmsg_trim; if ((req->udiag_show & UDIAG_SHOW_PEER) && sk_diag_dump_peer(sk, skb)) - goto nlmsg_failure; + goto out_nlmsg_trim; if ((req->udiag_show & UDIAG_SHOW_ICONS) && sk_diag_dump_icons(sk, skb)) - goto nlmsg_failure; + goto out_nlmsg_trim; if ((req->udiag_show & UDIAG_SHOW_RQLEN) && sk_diag_show_rqlen(sk, skb)) - goto nlmsg_failure; + goto out_nlmsg_trim; if ((req->udiag_show & UDIAG_SHOW_MEMINFO) && sock_diag_put_meminfo(sk, skb, UNIX_DIAG_MEMINFO)) - goto nlmsg_failure; + goto out_nlmsg_trim; - nlh->nlmsg_len = skb_tail_pointer(skb) - b; - return skb->len; + return nlmsg_end(skb, nlh); -nlmsg_failure: - nlmsg_trim(skb, b); +out_nlmsg_trim: + nlmsg_cancel(skb, nlh); return -EMSGSIZE; } @@ -188,19 +177,24 @@ static int unix_diag_dump(struct sk_buff *skb, struct netlink_callback *cb) { struct unix_diag_req *req; int num, s_num, slot, s_slot; + struct net *net = sock_net(skb->sk); - req = NLMSG_DATA(cb->nlh); + req = nlmsg_data(cb->nlh); s_slot = cb->args[0]; num = s_num = cb->args[1]; spin_lock(&unix_table_lock); - for (slot = s_slot; slot <= UNIX_HASH_SIZE; s_num = 0, slot++) { + for (slot = s_slot; + slot < ARRAY_SIZE(unix_socket_table); + s_num = 0, slot++) { struct sock *sk; struct hlist_node *node; num = 0; sk_for_each(sk, node, &unix_socket_table[slot]) { + if (!net_eq(sock_net(sk), net)) + continue; if (num < s_num) goto next; if (!(req->udiag_states & (1 << sk->sk_state))) @@ -228,7 +222,7 @@ static struct sock *unix_lookup_by_ino(int ino) struct sock *sk; spin_lock(&unix_table_lock); - for (i = 0; i <= UNIX_HASH_SIZE; i++) { + for (i = 0; i < ARRAY_SIZE(unix_socket_table); i++) { struct hlist_node *node; sk_for_each(sk, node, &unix_socket_table[i]) @@ -252,6 +246,7 @@ static int unix_diag_get_exact(struct sk_buff *in_skb, struct sock *sk; struct sk_buff *rep; unsigned int extra_len; + struct net *net = sock_net(in_skb->sk); if (req->udiag_ino == 0) goto out_nosk; @@ -268,22 +263,21 @@ static int unix_diag_get_exact(struct sk_buff *in_skb, extra_len = 256; again: err = -ENOMEM; - rep = alloc_skb(NLMSG_SPACE((sizeof(struct unix_diag_msg) + extra_len)), - GFP_KERNEL); + rep = nlmsg_new(sizeof(struct 
unix_diag_msg) + extra_len, GFP_KERNEL); if (!rep) goto out; err = sk_diag_fill(sk, rep, req, NETLINK_CB(in_skb).pid, nlh->nlmsg_seq, 0, req->udiag_ino); if (err < 0) { - kfree_skb(rep); + nlmsg_free(rep); extra_len += 256; if (extra_len >= PAGE_SIZE) goto out; goto again; } - err = netlink_unicast(sock_diag_nlsk, rep, NETLINK_CB(in_skb).pid, + err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).pid, MSG_DONTWAIT); if (err > 0) err = 0; @@ -297,6 +291,7 @@ out_nosk: static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h) { int hdrlen = sizeof(struct unix_diag_req); + struct net *net = sock_net(skb->sk); if (nlmsg_len(h) < hdrlen) return -EINVAL; @@ -305,9 +300,9 @@ static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h) struct netlink_dump_control c = { .dump = unix_diag_dump, }; - return netlink_dump_start(sock_diag_nlsk, skb, h, &c); + return netlink_dump_start(net->diag_nlsk, skb, h, &c); } else - return unix_diag_get_exact(skb, h, (struct unix_diag_req *)NLMSG_DATA(h)); + return unix_diag_get_exact(skb, h, nlmsg_data(h)); } static const struct sock_diag_handler unix_diag_handler = { diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig index 2e4444fedbe0..fe4adb12b3ef 100644 --- a/net/wireless/Kconfig +++ b/net/wireless/Kconfig @@ -74,6 +74,27 @@ config CFG80211_REG_DEBUG If unsure, say N. +config CFG80211_CERTIFICATION_ONUS + bool "cfg80211 certification onus" + depends on CFG80211 && EXPERT + default n + ---help--- + You should disable this option unless you are both capable + and willing to ensure your system will remain regulatory + compliant with the features available under this option. + Some options may still be under heavy development and + for whatever reason regulatory compliance has not or + cannot yet be verified. Regulatory verification may at + times only be possible until you have the final system + in place. + + This option should only be enabled by system integrators + or distributions that have done work necessary to ensure + regulatory certification on the system with the enabled + features. Alternatively you can enable this option if + you are a wireless researcher and are working in a controlled + and approved environment by your local regulatory agency. + config CFG80211_DEFAULT_PS bool "enable powersave by default" depends on CFG80211 @@ -114,24 +135,10 @@ config CFG80211_WEXT bool "cfg80211 wireless extensions compatibility" depends on CFG80211 select WEXT_CORE - default y help Enable this option if you need old userspace for wireless extensions with cfg80211-based drivers. -config WIRELESS_EXT_SYSFS - bool "Wireless extensions sysfs files" - depends on WEXT_CORE && SYSFS - help - This option enables the deprecated wireless statistics - files in /sys/class/net/*/wireless/. The same information - is available via the ioctls as well. - - Say N. If you know you have ancient tools requiring it, - like very old versions of hal (prior to 0.5.12 release), - say Y and update the tools as soon as possible as this - option will be removed soon. 
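The new net/wireless/ap.c added below follows cfg80211's usual split between a double-underscore helper that expects the per-wdev lock to be held (__cfg80211_stop_ap) and a public wrapper that takes wdev_lock() around it. A standalone sketch of that locking convention, with a pthread mutex standing in for wdev_lock()/wdev_unlock() and every name invented:

/* Standalone sketch of the __locked-helper / locking-wrapper convention the
 * new ap.c below follows; pthreads stand in for the wdev mutex and all
 * identifiers here are illustrative, not cfg80211 API. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct fake_wdev {
	pthread_mutex_t mtx;
	int beacon_interval;	/* non-zero while the AP is beaconing */
};

/* caller must hold wdev->mtx, like __cfg80211_stop_ap() under wdev_lock() */
static int __fake_stop_ap(struct fake_wdev *wdev)
{
	if (!wdev->beacon_interval)
		return -ENOENT;		/* no AP running on this interface */
	wdev->beacon_interval = 0;
	return 0;
}

static int fake_stop_ap(struct fake_wdev *wdev)
{
	int err;

	pthread_mutex_lock(&wdev->mtx);
	err = __fake_stop_ap(wdev);
	pthread_mutex_unlock(&wdev->mtx);
	return err;
}

int main(void)
{
	struct fake_wdev w = {
		.mtx = PTHREAD_MUTEX_INITIALIZER,
		.beacon_interval = 100,
	};

	printf("first stop: %d\n", fake_stop_ap(&w));
	printf("second stop: %d\n", fake_stop_ap(&w));
	return 0;
}

Keeping the state change (clearing beacon_interval and the channel) inside the locked helper is what allows the netdev-down path in core.c to reuse cfg80211_stop_ap() directly.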
- config LIB80211 tristate "Common routines for IEEE802.11 drivers" default n diff --git a/net/wireless/Makefile b/net/wireless/Makefile index 55a28ab21db9..0f7e0d621ab0 100644 --- a/net/wireless/Makefile +++ b/net/wireless/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_WEXT_SPY) += wext-spy.o obj-$(CONFIG_WEXT_PRIV) += wext-priv.o cfg80211-y += core.o sysfs.o radiotap.o util.o reg.o scan.o nl80211.o -cfg80211-y += mlme.o ibss.o sme.o chan.o ethtool.o mesh.o +cfg80211-y += mlme.o ibss.o sme.o chan.o ethtool.o mesh.o ap.o cfg80211-$(CONFIG_CFG80211_DEBUGFS) += debugfs.o cfg80211-$(CONFIG_CFG80211_WEXT) += wext-compat.o wext-sme.o cfg80211-$(CONFIG_CFG80211_INTERNAL_REGDB) += regdb.o diff --git a/net/wireless/ap.c b/net/wireless/ap.c new file mode 100644 index 000000000000..fcc60d8dbefa --- /dev/null +++ b/net/wireless/ap.c @@ -0,0 +1,46 @@ +#include <linux/ieee80211.h> +#include <linux/export.h> +#include <net/cfg80211.h> +#include "nl80211.h" +#include "core.h" + + +static int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev, + struct net_device *dev) +{ + struct wireless_dev *wdev = dev->ieee80211_ptr; + int err; + + ASSERT_WDEV_LOCK(wdev); + + if (!rdev->ops->stop_ap) + return -EOPNOTSUPP; + + if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP && + dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO) + return -EOPNOTSUPP; + + if (!wdev->beacon_interval) + return -ENOENT; + + err = rdev->ops->stop_ap(&rdev->wiphy, dev); + if (!err) { + wdev->beacon_interval = 0; + wdev->channel = NULL; + } + + return err; +} + +int cfg80211_stop_ap(struct cfg80211_registered_device *rdev, + struct net_device *dev) +{ + struct wireless_dev *wdev = dev->ieee80211_ptr; + int err; + + wdev_lock(wdev); + err = __cfg80211_stop_ap(rdev, dev); + wdev_unlock(wdev); + + return err; +} diff --git a/net/wireless/chan.c b/net/wireless/chan.c index 884801ac4dd0..d355f67d0cdd 100644 --- a/net/wireless/chan.c +++ b/net/wireless/chan.c @@ -60,7 +60,7 @@ bool cfg80211_can_beacon_sec_chan(struct wiphy *wiphy, diff = -20; break; default: - return false; + return true; } sec_chan = ieee80211_get_channel(wiphy, chan->center_freq + diff); @@ -78,60 +78,75 @@ bool cfg80211_can_beacon_sec_chan(struct wiphy *wiphy, } EXPORT_SYMBOL(cfg80211_can_beacon_sec_chan); -int cfg80211_set_freq(struct cfg80211_registered_device *rdev, - struct wireless_dev *wdev, int freq, - enum nl80211_channel_type channel_type) +int cfg80211_set_monitor_channel(struct cfg80211_registered_device *rdev, + int freq, enum nl80211_channel_type chantype) { struct ieee80211_channel *chan; - int result; - - if (wdev && wdev->iftype == NL80211_IFTYPE_MONITOR) - wdev = NULL; - if (wdev) { - ASSERT_WDEV_LOCK(wdev); - - if (!netif_running(wdev->netdev)) - return -ENETDOWN; - } - - if (!rdev->ops->set_channel) + if (!rdev->ops->set_monitor_channel) return -EOPNOTSUPP; + if (!cfg80211_has_monitors_only(rdev)) + return -EBUSY; - chan = rdev_freq_to_chan(rdev, freq, channel_type); + chan = rdev_freq_to_chan(rdev, freq, chantype); if (!chan) return -EINVAL; - /* Both channels should be able to initiate communication */ - if (wdev && (wdev->iftype == NL80211_IFTYPE_ADHOC || - wdev->iftype == NL80211_IFTYPE_AP || - wdev->iftype == NL80211_IFTYPE_AP_VLAN || - wdev->iftype == NL80211_IFTYPE_MESH_POINT || - wdev->iftype == NL80211_IFTYPE_P2P_GO)) { - switch (channel_type) { - case NL80211_CHAN_HT40PLUS: - case NL80211_CHAN_HT40MINUS: - if (!cfg80211_can_beacon_sec_chan(&rdev->wiphy, chan, - channel_type)) { - printk(KERN_DEBUG - "cfg80211: Secondary channel not " - 
"allowed to initiate communication\n"); - return -EINVAL; - } - break; - default: - break; + return rdev->ops->set_monitor_channel(&rdev->wiphy, chan, chantype); +} + +void +cfg80211_get_chan_state(struct wireless_dev *wdev, + struct ieee80211_channel **chan, + enum cfg80211_chan_mode *chanmode) +{ + *chan = NULL; + *chanmode = CHAN_MODE_UNDEFINED; + + ASSERT_WDEV_LOCK(wdev); + + if (!netif_running(wdev->netdev)) + return; + + switch (wdev->iftype) { + case NL80211_IFTYPE_ADHOC: + if (wdev->current_bss) { + *chan = wdev->current_bss->pub.channel; + *chanmode = wdev->ibss_fixed + ? CHAN_MODE_SHARED + : CHAN_MODE_EXCLUSIVE; + return; + } + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_P2P_CLIENT: + if (wdev->current_bss) { + *chan = wdev->current_bss->pub.channel; + *chanmode = CHAN_MODE_SHARED; + return; + } + break; + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_P2P_GO: + if (wdev->beacon_interval) { + *chan = wdev->channel; + *chanmode = CHAN_MODE_SHARED; } + return; + case NL80211_IFTYPE_MESH_POINT: + if (wdev->mesh_id_len) { + *chan = wdev->channel; + *chanmode = CHAN_MODE_SHARED; + } + return; + case NL80211_IFTYPE_MONITOR: + case NL80211_IFTYPE_AP_VLAN: + case NL80211_IFTYPE_WDS: + /* these interface types don't really have a channel */ + return; + case NL80211_IFTYPE_UNSPECIFIED: + case NUM_NL80211_IFTYPES: + WARN_ON(1); } - result = rdev->ops->set_channel(&rdev->wiphy, - wdev ? wdev->netdev : NULL, - chan, channel_type); - if (result) - return result; - - if (wdev) - wdev->channel = chan; - - return 0; + return; } diff --git a/net/wireless/core.c b/net/wireless/core.c index a87d43552974..31b40cc4a9c3 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -96,69 +96,6 @@ struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx) return &rdev->wiphy; } -/* requires cfg80211_mutex to be held! 
*/ -struct cfg80211_registered_device * -__cfg80211_rdev_from_info(struct genl_info *info) -{ - int ifindex; - struct cfg80211_registered_device *bywiphyidx = NULL, *byifidx = NULL; - struct net_device *dev; - int err = -EINVAL; - - assert_cfg80211_lock(); - - if (info->attrs[NL80211_ATTR_WIPHY]) { - bywiphyidx = cfg80211_rdev_by_wiphy_idx( - nla_get_u32(info->attrs[NL80211_ATTR_WIPHY])); - err = -ENODEV; - } - - if (info->attrs[NL80211_ATTR_IFINDEX]) { - ifindex = nla_get_u32(info->attrs[NL80211_ATTR_IFINDEX]); - dev = dev_get_by_index(genl_info_net(info), ifindex); - if (dev) { - if (dev->ieee80211_ptr) - byifidx = - wiphy_to_dev(dev->ieee80211_ptr->wiphy); - dev_put(dev); - } - err = -ENODEV; - } - - if (bywiphyidx && byifidx) { - if (bywiphyidx != byifidx) - return ERR_PTR(-EINVAL); - else - return bywiphyidx; /* == byifidx */ - } - if (bywiphyidx) - return bywiphyidx; - - if (byifidx) - return byifidx; - - return ERR_PTR(err); -} - -struct cfg80211_registered_device * -cfg80211_get_dev_from_info(struct genl_info *info) -{ - struct cfg80211_registered_device *rdev; - - mutex_lock(&cfg80211_mutex); - rdev = __cfg80211_rdev_from_info(info); - - /* if it is not an error we grab the lock on - * it to assure it won't be going away while - * we operate on it */ - if (!IS_ERR(rdev)) - mutex_lock(&rdev->mtx); - - mutex_unlock(&cfg80211_mutex); - - return rdev; -} - struct cfg80211_registered_device * cfg80211_get_dev_from_ifindex(struct net *net, int ifindex) { @@ -239,7 +176,9 @@ int cfg80211_switch_netns(struct cfg80211_registered_device *rdev, if (!(rdev->wiphy.flags & WIPHY_FLAG_NETNS_OK)) return -EOPNOTSUPP; - list_for_each_entry(wdev, &rdev->netdev_list, list) { + list_for_each_entry(wdev, &rdev->wdev_list, list) { + if (!wdev->netdev) + continue; wdev->netdev->features &= ~NETIF_F_NETNS_LOCAL; err = dev_change_net_namespace(wdev->netdev, net, "wlan%d"); if (err) @@ -251,8 +190,10 @@ int cfg80211_switch_netns(struct cfg80211_registered_device *rdev, /* failed -- clean up to old netns */ net = wiphy_net(&rdev->wiphy); - list_for_each_entry_continue_reverse(wdev, &rdev->netdev_list, + list_for_each_entry_continue_reverse(wdev, &rdev->wdev_list, list) { + if (!wdev->netdev) + continue; wdev->netdev->features &= ~NETIF_F_NETNS_LOCAL; err = dev_change_net_namespace(wdev->netdev, net, "wlan%d"); @@ -289,8 +230,9 @@ static int cfg80211_rfkill_set_block(void *data, bool blocked) rtnl_lock(); mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev, &rdev->netdev_list, list) - dev_close(wdev->netdev); + list_for_each_entry(wdev, &rdev->wdev_list, list) + if (wdev->netdev) + dev_close(wdev->netdev); mutex_unlock(&rdev->devlist_mtx); rtnl_unlock(); @@ -367,7 +309,7 @@ struct wiphy *wiphy_new(const struct cfg80211_ops *ops, int sizeof_priv) mutex_init(&rdev->mtx); mutex_init(&rdev->devlist_mtx); mutex_init(&rdev->sched_scan_mtx); - INIT_LIST_HEAD(&rdev->netdev_list); + INIT_LIST_HEAD(&rdev->wdev_list); spin_lock_init(&rdev->bss_lock); INIT_LIST_HEAD(&rdev->bss_list); INIT_WORK(&rdev->scan_done_wk, __cfg80211_scan_done); @@ -436,6 +378,14 @@ static int wiphy_verify_combinations(struct wiphy *wiphy) if (WARN_ON(!c->num_different_channels)) return -EINVAL; + /* + * Put a sane limit on maximum number of different + * channels to simplify channel accounting code. 
+ */ + if (WARN_ON(c->num_different_channels > + CFG80211_MAX_NUM_DIFFERENT_CHANNELS)) + return -EINVAL; + if (WARN_ON(!c->n_limits)) return -EINVAL; @@ -484,9 +434,11 @@ int wiphy_register(struct wiphy *wiphy) int i; u16 ifmodes = wiphy->interface_modes; +#ifdef CONFIG_PM if (WARN_ON((wiphy->wowlan.flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) && !(wiphy->wowlan.flags & WIPHY_WOWLAN_SUPPORTS_GTK_REKEY))) return -EINVAL; +#endif if (WARN_ON(wiphy->ap_sme_capa && !(wiphy->flags & WIPHY_FLAG_HAVE_AP_SME))) @@ -521,8 +473,14 @@ int wiphy_register(struct wiphy *wiphy) continue; sband->band = band; - - if (WARN_ON(!sband->n_channels || !sband->n_bitrates)) + if (WARN_ON(!sband->n_channels)) + return -EINVAL; + /* + * on 60gHz band, there are no legacy rates, so + * n_bitrates is 0 + */ + if (WARN_ON(band != IEEE80211_BAND_60GHZ && + !sband->n_bitrates)) return -EINVAL; /* @@ -563,12 +521,14 @@ int wiphy_register(struct wiphy *wiphy) return -EINVAL; } +#ifdef CONFIG_PM if (rdev->wiphy.wowlan.n_patterns) { if (WARN_ON(!rdev->wiphy.wowlan.pattern_min_len || rdev->wiphy.wowlan.pattern_min_len > rdev->wiphy.wowlan.pattern_max_len)) return -EINVAL; } +#endif /* check and set up bitrates */ ieee80211_set_bitrate_flags(wiphy); @@ -582,7 +542,7 @@ int wiphy_register(struct wiphy *wiphy) } /* set up regulatory info */ - regulatory_update(wiphy, NL80211_REGDOM_SET_BY_CORE); + wiphy_regulatory_register(wiphy); list_add_rcu(&rdev->list, &cfg80211_rdev_list); cfg80211_rdev_list_generation++; @@ -667,7 +627,7 @@ void wiphy_unregister(struct wiphy *wiphy) __count == 0; })); mutex_lock(&rdev->devlist_mtx); - BUG_ON(!list_empty(&rdev->netdev_list)); + BUG_ON(!list_empty(&rdev->wdev_list)); mutex_unlock(&rdev->devlist_mtx); /* @@ -692,9 +652,11 @@ void wiphy_unregister(struct wiphy *wiphy) /* nothing */ cfg80211_unlock_rdev(rdev); - /* If this device got a regulatory hint tell core its - * free to listen now to a new shiny device regulatory hint */ - reg_device_remove(wiphy); + /* + * If this device got a regulatory hint tell core its + * free to listen now to a new shiny device regulatory hint + */ + wiphy_regulatory_deregister(wiphy); cfg80211_rdev_list_generation++; device_del(&rdev->wiphy.dev); @@ -748,7 +710,7 @@ static void wdev_cleanup_work(struct work_struct *work) cfg80211_lock_rdev(rdev); - if (WARN_ON(rdev->scan_req && rdev->scan_req->dev == wdev->netdev)) { + if (WARN_ON(rdev->scan_req && rdev->scan_req->wdev == wdev)) { rdev->scan_req->aborted = true; ___cfg80211_scan_done(rdev, true); } @@ -776,6 +738,16 @@ static struct device_type wiphy_type = { .name = "wlan", }; +void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev, + enum nl80211_iftype iftype, int num) +{ + ASSERT_RTNL(); + + rdev->num_running_ifaces += num; + if (iftype == NL80211_IFTYPE_MONITOR) + rdev->num_running_monitor_ifaces += num; +} + static int cfg80211_netdev_notifier_call(struct notifier_block *nb, unsigned long state, void *ndev) @@ -810,7 +782,8 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb, spin_lock_init(&wdev->mgmt_registrations_lock); mutex_lock(&rdev->devlist_mtx); - list_add_rcu(&wdev->list, &rdev->netdev_list); + wdev->identifier = ++rdev->wdev_id; + list_add_rcu(&wdev->list, &rdev->wdev_list); rdev->devlist_generation++; /* can only change netns with wiphy */ dev->features |= NETIF_F_NETNS_LOCAL; @@ -869,12 +842,16 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb, case NL80211_IFTYPE_MESH_POINT: cfg80211_leave_mesh(rdev, dev); break; + case NL80211_IFTYPE_AP: + 
cfg80211_stop_ap(rdev, dev); + break; default: break; } wdev->beacon_interval = 0; break; case NETDEV_DOWN: + cfg80211_update_iface_num(rdev, wdev->iftype, -1); dev_hold(dev); queue_work(cfg80211_wq, &wdev->cleanup_work); break; @@ -891,6 +868,7 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb, mutex_unlock(&rdev->devlist_mtx); dev_put(dev); } + cfg80211_update_iface_num(rdev, wdev->iftype, 1); cfg80211_lock_rdev(rdev); mutex_lock(&rdev->devlist_mtx); wdev_lock(wdev); @@ -980,7 +958,9 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb, return notifier_from_errno(-EOPNOTSUPP); if (rfkill_blocked(rdev->rfkill)) return notifier_from_errno(-ERFKILL); + mutex_lock(&rdev->devlist_mtx); ret = cfg80211_can_add_interface(rdev, wdev->iftype); + mutex_unlock(&rdev->devlist_mtx); if (ret) return notifier_from_errno(ret); break; diff --git a/net/wireless/core.h b/net/wireless/core.h index 8523f3878677..5206c6844fd7 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -13,6 +13,7 @@ #include <linux/debugfs.h> #include <linux/rfkill.h> #include <linux/workqueue.h> +#include <linux/rtnetlink.h> #include <net/genetlink.h> #include <net/cfg80211.h> #include "reg.h" @@ -46,16 +47,20 @@ struct cfg80211_registered_device { /* wiphy index, internal only */ int wiphy_idx; - /* associate netdev list */ + /* associated wireless interfaces */ struct mutex devlist_mtx; /* protected by devlist_mtx or RCU */ - struct list_head netdev_list; - int devlist_generation; + struct list_head wdev_list; + int devlist_generation, wdev_id; int opencount; /* also protected by devlist_mtx */ wait_queue_head_t dev_wait; u32 ap_beacons_nlpid; + /* protected by RTNL only */ + int num_running_ifaces; + int num_running_monitor_ifaces; + /* BSSes/scanning */ spinlock_t bss_lock; struct list_head bss_list; @@ -159,32 +164,6 @@ static inline void cfg80211_unhold_bss(struct cfg80211_internal_bss *bss) struct cfg80211_registered_device *cfg80211_rdev_by_wiphy_idx(int wiphy_idx); int get_wiphy_idx(struct wiphy *wiphy); -struct cfg80211_registered_device * -__cfg80211_rdev_from_info(struct genl_info *info); - -/* - * This function returns a pointer to the driver - * that the genl_info item that is passed refers to. - * If successful, it returns non-NULL and also locks - * the driver's mutex! - * - * This means that you need to call cfg80211_unlock_rdev() - * before being allowed to acquire &cfg80211_mutex! - * - * This is necessary because we need to lock the global - * mutex to get an item off the list safely, and then - * we lock the rdev mutex so it doesn't go away under us. - * - * We don't want to keep cfg80211_mutex locked - * for all the time in order to allow requests on - * other interfaces to go through at the same time. - * - * The result of this can be a PTR_ERR and hence must - * be checked with IS_ERR() for errors. - */ -extern struct cfg80211_registered_device * -cfg80211_get_dev_from_info(struct genl_info *info); - /* requires cfg80211_rdev_mutex to be held! 
*/ struct wiphy *wiphy_idx_to_wiphy(int wiphy_idx); @@ -223,6 +202,14 @@ static inline void wdev_unlock(struct wireless_dev *wdev) #define ASSERT_RDEV_LOCK(rdev) lockdep_assert_held(&(rdev)->mtx) #define ASSERT_WDEV_LOCK(wdev) lockdep_assert_held(&(wdev)->mtx) +static inline bool cfg80211_has_monitors_only(struct cfg80211_registered_device *rdev) +{ + ASSERT_RTNL(); + + return rdev->num_running_ifaces == rdev->num_running_monitor_ifaces && + rdev->num_running_ifaces > 0; +} + enum cfg80211_event_type { EVENT_CONNECT_RESULT, EVENT_ROAMED, @@ -267,6 +254,12 @@ struct cfg80211_cached_keys { int def, defmgmt; }; +enum cfg80211_chan_mode { + CHAN_MODE_UNDEFINED, + CHAN_MODE_SHARED, + CHAN_MODE_EXCLUSIVE, +}; + /* free object */ extern void cfg80211_dev_free(struct cfg80211_registered_device *rdev); @@ -303,14 +296,21 @@ extern const struct mesh_config default_mesh_config; extern const struct mesh_setup default_mesh_setup; int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev, struct net_device *dev, - const struct mesh_setup *setup, + struct mesh_setup *setup, const struct mesh_config *conf); int cfg80211_join_mesh(struct cfg80211_registered_device *rdev, struct net_device *dev, - const struct mesh_setup *setup, + struct mesh_setup *setup, const struct mesh_config *conf); int cfg80211_leave_mesh(struct cfg80211_registered_device *rdev, struct net_device *dev); +int cfg80211_set_mesh_freq(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, int freq, + enum nl80211_channel_type channel_type); + +/* AP */ +int cfg80211_stop_ap(struct cfg80211_registered_device *rdev, + struct net_device *dev); /* MLME */ int __cfg80211_mlme_auth(struct cfg80211_registered_device *rdev, @@ -369,7 +369,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_pid, void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlpid); void cfg80211_mlme_purge_registrations(struct wireless_dev *wdev); int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev, - struct net_device *dev, + struct wireless_dev *wdev, struct ieee80211_channel *chan, bool offchan, enum nl80211_channel_type channel_type, bool channel_type_valid, unsigned int wait, @@ -427,9 +427,20 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, u32 *flags, struct vif_params *params); void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev); -int cfg80211_can_change_interface(struct cfg80211_registered_device *rdev, - struct wireless_dev *wdev, - enum nl80211_iftype iftype); +int cfg80211_can_use_iftype_chan(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, + enum nl80211_iftype iftype, + struct ieee80211_channel *chan, + enum cfg80211_chan_mode chanmode); + +static inline int +cfg80211_can_change_interface(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, + enum nl80211_iftype iftype) +{ + return cfg80211_can_use_iftype_chan(rdev, wdev, iftype, NULL, + CHAN_MODE_UNDEFINED); +} static inline int cfg80211_can_add_interface(struct cfg80211_registered_device *rdev, @@ -438,12 +449,26 @@ cfg80211_can_add_interface(struct cfg80211_registered_device *rdev, return cfg80211_can_change_interface(rdev, NULL, iftype); } +static inline int +cfg80211_can_use_chan(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, + struct ieee80211_channel *chan, + enum cfg80211_chan_mode chanmode) +{ + return cfg80211_can_use_iftype_chan(rdev, wdev, wdev->iftype, + chan, chanmode); +} + +void +cfg80211_get_chan_state(struct 
wireless_dev *wdev, + struct ieee80211_channel **chan, + enum cfg80211_chan_mode *chanmode); + struct ieee80211_channel * rdev_freq_to_chan(struct cfg80211_registered_device *rdev, int freq, enum nl80211_channel_type channel_type); -int cfg80211_set_freq(struct cfg80211_registered_device *rdev, - struct wireless_dev *wdev, int freq, - enum nl80211_channel_type channel_type); +int cfg80211_set_monitor_channel(struct cfg80211_registered_device *rdev, + int freq, enum nl80211_channel_type chantype); int ieee80211_get_ratemask(struct ieee80211_supported_band *sband, const u8 *rates, unsigned int n_rates, @@ -452,6 +477,11 @@ int ieee80211_get_ratemask(struct ieee80211_supported_band *sband, int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev, u32 beacon_int); +void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev, + enum nl80211_iftype iftype, int num); + +#define CFG80211_MAX_NUM_DIFFERENT_CHANNELS 10 + #ifdef CONFIG_CFG80211_DEVELOPER_WARNINGS #define CFG80211_DEV_WARN_ON(cond) WARN_ON(cond) #else diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c index 89baa3328411..ca5672f6ee2f 100644 --- a/net/wireless/ibss.c +++ b/net/wireless/ibss.c @@ -113,10 +113,21 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev, kfree(wdev->connect_keys); wdev->connect_keys = connkeys; + wdev->ibss_fixed = params->channel_fixed; #ifdef CONFIG_CFG80211_WEXT wdev->wext.ibss.channel = params->channel; #endif wdev->sme_state = CFG80211_SME_CONNECTING; + + err = cfg80211_can_use_chan(rdev, wdev, params->channel, + params->channel_fixed + ? CHAN_MODE_SHARED + : CHAN_MODE_EXCLUSIVE); + if (err) { + wdev->connect_keys = NULL; + return err; + } + err = rdev->ops->join_ibss(&rdev->wiphy, dev, params); if (err) { wdev->connect_keys = NULL; diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c index 2749cb86b462..c384e77ff77a 100644 --- a/net/wireless/mesh.c +++ b/net/wireless/mesh.c @@ -14,6 +14,9 @@ #define MESH_PATH_TIMEOUT 5000 #define MESH_RANN_INTERVAL 5000 +#define MESH_PATH_TO_ROOT_TIMEOUT 6000 +#define MESH_ROOT_INTERVAL 5000 +#define MESH_ROOT_CONFIRMATION_INTERVAL 2000 /* * Minimum interval between two consecutive PREQs originated by the same @@ -62,9 +65,15 @@ const struct mesh_config default_mesh_config = { .dot11MeshForwarding = true, .rssi_threshold = MESH_RSSI_THRESHOLD, .ht_opmode = IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED, + .dot11MeshHWMPactivePathToRootTimeout = MESH_PATH_TO_ROOT_TIMEOUT, + .dot11MeshHWMProotInterval = MESH_ROOT_INTERVAL, + .dot11MeshHWMPconfirmationInterval = MESH_ROOT_CONFIRMATION_INTERVAL, }; const struct mesh_setup default_mesh_setup = { + /* cfg80211_join_mesh() will pick a channel if needed */ + .channel = NULL, + .channel_type = NL80211_CHAN_NO_HT, .sync_method = IEEE80211_SYNC_METHOD_NEIGHBOR_OFFSET, .path_sel_proto = IEEE80211_PATH_PROTOCOL_HWMP, .path_metric = IEEE80211_PATH_METRIC_AIRTIME, @@ -75,7 +84,7 @@ const struct mesh_setup default_mesh_setup = { int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev, struct net_device *dev, - const struct mesh_setup *setup, + struct mesh_setup *setup, const struct mesh_config *conf) { struct wireless_dev *wdev = dev->ieee80211_ptr; @@ -101,10 +110,61 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev, if (!rdev->ops->join_mesh) return -EOPNOTSUPP; + if (!setup->channel) { + /* if no channel explicitly given, use preset channel */ + setup->channel = wdev->preset_chan; + setup->channel_type = wdev->preset_chantype; + } + + if (!setup->channel) { + /* 
if we don't have that either, use the first usable channel */ + enum ieee80211_band band; + + for (band = 0; band < IEEE80211_NUM_BANDS; band++) { + struct ieee80211_supported_band *sband; + struct ieee80211_channel *chan; + int i; + + sband = rdev->wiphy.bands[band]; + if (!sband) + continue; + + for (i = 0; i < sband->n_channels; i++) { + chan = &sband->channels[i]; + if (chan->flags & (IEEE80211_CHAN_NO_IBSS | + IEEE80211_CHAN_PASSIVE_SCAN | + IEEE80211_CHAN_DISABLED | + IEEE80211_CHAN_RADAR)) + continue; + setup->channel = chan; + break; + } + + if (setup->channel) + break; + } + + /* no usable channel ... */ + if (!setup->channel) + return -EINVAL; + + setup->channel_type = NL80211_CHAN_NO_HT; + } + + if (!cfg80211_can_beacon_sec_chan(&rdev->wiphy, setup->channel, + setup->channel_type)) + return -EINVAL; + + err = cfg80211_can_use_chan(rdev, wdev, setup->channel, + CHAN_MODE_SHARED); + if (err) + return err; + err = rdev->ops->join_mesh(&rdev->wiphy, dev, conf, setup); if (!err) { memcpy(wdev->ssid, setup->mesh_id, setup->mesh_id_len); wdev->mesh_id_len = setup->mesh_id_len; + wdev->channel = setup->channel; } return err; @@ -112,19 +172,71 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev, int cfg80211_join_mesh(struct cfg80211_registered_device *rdev, struct net_device *dev, - const struct mesh_setup *setup, + struct mesh_setup *setup, const struct mesh_config *conf) { struct wireless_dev *wdev = dev->ieee80211_ptr; int err; + mutex_lock(&rdev->devlist_mtx); wdev_lock(wdev); err = __cfg80211_join_mesh(rdev, dev, setup, conf); wdev_unlock(wdev); + mutex_unlock(&rdev->devlist_mtx); return err; } +int cfg80211_set_mesh_freq(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, int freq, + enum nl80211_channel_type channel_type) +{ + struct ieee80211_channel *channel; + int err; + + channel = rdev_freq_to_chan(rdev, freq, channel_type); + if (!channel || !cfg80211_can_beacon_sec_chan(&rdev->wiphy, + channel, + channel_type)) { + return -EINVAL; + } + + /* + * Workaround for libertas (only!), it puts the interface + * into mesh mode but doesn't implement join_mesh. Instead, + * it is configured via sysfs and then joins the mesh when + * you set the channel. Note that the libertas mesh isn't + * compatible with 802.11 mesh. 
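Editorial note (not part of the commit): the mesh channel handling added above resolves the channel in three steps -- an explicit channel in the join request, then wdev->preset_chan as stored earlier via the channel API, then the first usable channel in any band. The helper below is only an illustrative condensation of that last step; mesh_pick_default_channel() does not exist in the tree.

static int mesh_pick_default_channel(struct cfg80211_registered_device *rdev,
				     struct mesh_setup *setup)
{
	enum ieee80211_band band;
	int i;

	for (band = 0; band < IEEE80211_NUM_BANDS; band++) {
		struct ieee80211_supported_band *sband = rdev->wiphy.bands[band];

		if (!sband)
			continue;

		for (i = 0; i < sband->n_channels; i++) {
			struct ieee80211_channel *chan = &sband->channels[i];

			/* skip channels we cannot start a mesh on */
			if (chan->flags & (IEEE80211_CHAN_NO_IBSS |
					   IEEE80211_CHAN_PASSIVE_SCAN |
					   IEEE80211_CHAN_DISABLED |
					   IEEE80211_CHAN_RADAR))
				continue;

			setup->channel = chan;
			setup->channel_type = NL80211_CHAN_NO_HT;
			return 0;
		}
	}

	/* no usable channel anywhere */
	return -EINVAL;
}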
+ */ + if (rdev->ops->libertas_set_mesh_channel) { + if (channel_type != NL80211_CHAN_NO_HT) + return -EINVAL; + + if (!netif_running(wdev->netdev)) + return -ENETDOWN; + + err = cfg80211_can_use_chan(rdev, wdev, channel, + CHAN_MODE_SHARED); + if (err) + return err; + + err = rdev->ops->libertas_set_mesh_channel(&rdev->wiphy, + wdev->netdev, + channel); + if (!err) + wdev->channel = channel; + + return err; + } + + if (wdev->mesh_id_len) + return -EBUSY; + + wdev->preset_chan = channel; + wdev->preset_chantype = channel_type; + return 0; +} + void cfg80211_notify_new_peer_candidate(struct net_device *dev, const u8 *macaddr, const u8* ie, u8 ie_len, gfp_t gfp) { @@ -156,8 +268,11 @@ static int __cfg80211_leave_mesh(struct cfg80211_registered_device *rdev, return -ENOTCONN; err = rdev->ops->leave_mesh(&rdev->wiphy, dev); - if (!err) + if (!err) { wdev->mesh_id_len = 0; + wdev->channel = NULL; + } + return err; } diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c index eb90988bbd36..1cdb1d5e6b0f 100644 --- a/net/wireless/mlme.c +++ b/net/wireless/mlme.c @@ -302,8 +302,14 @@ int __cfg80211_mlme_auth(struct cfg80211_registered_device *rdev, if (!req.bss) return -ENOENT; + err = cfg80211_can_use_chan(rdev, wdev, req.bss->channel, + CHAN_MODE_SHARED); + if (err) + goto out; + err = rdev->ops->auth(&rdev->wiphy, dev, &req); +out: cfg80211_put_bss(req.bss); return err; } @@ -317,11 +323,13 @@ int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev, { int err; + mutex_lock(&rdev->devlist_mtx); wdev_lock(dev->ieee80211_ptr); err = __cfg80211_mlme_auth(rdev, dev, chan, auth_type, bssid, ssid, ssid_len, ie, ie_len, key, key_len, key_idx); wdev_unlock(dev->ieee80211_ptr); + mutex_unlock(&rdev->devlist_mtx); return err; } @@ -397,8 +405,14 @@ int __cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev, return -ENOENT; } + err = cfg80211_can_use_chan(rdev, wdev, req.bss->channel, + CHAN_MODE_SHARED); + if (err) + goto out; + err = rdev->ops->assoc(&rdev->wiphy, dev, &req); +out: if (err) { if (was_connected) wdev->sme_state = CFG80211_SME_CONNECTED; @@ -421,11 +435,13 @@ int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev, struct wireless_dev *wdev = dev->ieee80211_ptr; int err; + mutex_lock(&rdev->devlist_mtx); wdev_lock(wdev); err = __cfg80211_mlme_assoc(rdev, dev, chan, bssid, prev_bssid, ssid, ssid_len, ie, ie_len, use_mfp, crypt, assoc_flags, ht_capa, ht_capa_mask); wdev_unlock(wdev); + mutex_unlock(&rdev->devlist_mtx); return err; } @@ -551,29 +567,28 @@ void cfg80211_mlme_down(struct cfg80211_registered_device *rdev, } } -void cfg80211_ready_on_channel(struct net_device *dev, u64 cookie, +void cfg80211_ready_on_channel(struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, gfp_t gfp) { - struct wiphy *wiphy = dev->ieee80211_ptr->wiphy; + struct wiphy *wiphy = wdev->wiphy; struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); - nl80211_send_remain_on_channel(rdev, dev, cookie, chan, channel_type, + nl80211_send_remain_on_channel(rdev, wdev, cookie, chan, channel_type, duration, gfp); } EXPORT_SYMBOL(cfg80211_ready_on_channel); -void cfg80211_remain_on_channel_expired(struct net_device *dev, - u64 cookie, +void cfg80211_remain_on_channel_expired(struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, gfp_t gfp) { - struct wiphy *wiphy = dev->ieee80211_ptr->wiphy; + struct wiphy *wiphy = wdev->wiphy; struct 
cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); - nl80211_send_remain_on_channel_cancel(rdev, dev, cookie, chan, + nl80211_send_remain_on_channel_cancel(rdev, wdev, cookie, chan, channel_type, gfp); } EXPORT_SYMBOL(cfg80211_remain_on_channel_expired); @@ -662,8 +677,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_pid, list_add(&nreg->list, &wdev->mgmt_registrations); if (rdev->ops->mgmt_frame_register) - rdev->ops->mgmt_frame_register(wiphy, wdev->netdev, - frame_type, true); + rdev->ops->mgmt_frame_register(wiphy, wdev, frame_type, true); out: spin_unlock_bh(&wdev->mgmt_registrations_lock); @@ -686,7 +700,7 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlpid) if (rdev->ops->mgmt_frame_register) { u16 frame_type = le16_to_cpu(reg->frame_type); - rdev->ops->mgmt_frame_register(wiphy, wdev->netdev, + rdev->ops->mgmt_frame_register(wiphy, wdev, frame_type, false); } @@ -715,14 +729,14 @@ void cfg80211_mlme_purge_registrations(struct wireless_dev *wdev) } int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev, - struct net_device *dev, + struct wireless_dev *wdev, struct ieee80211_channel *chan, bool offchan, enum nl80211_channel_type channel_type, bool channel_type_valid, unsigned int wait, const u8 *buf, size_t len, bool no_cck, bool dont_wait_for_ack, u64 *cookie) { - struct wireless_dev *wdev = dev->ieee80211_ptr; + struct net_device *dev = wdev->netdev; const struct ieee80211_mgmt *mgmt; u16 stype; @@ -809,16 +823,15 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev, return -EINVAL; /* Transmit the Action frame as requested by user space */ - return rdev->ops->mgmt_tx(&rdev->wiphy, dev, chan, offchan, + return rdev->ops->mgmt_tx(&rdev->wiphy, wdev, chan, offchan, channel_type, channel_type_valid, wait, buf, len, no_cck, dont_wait_for_ack, cookie); } -bool cfg80211_rx_mgmt(struct net_device *dev, int freq, int sig_mbm, +bool cfg80211_rx_mgmt(struct wireless_dev *wdev, int freq, int sig_mbm, const u8 *buf, size_t len, gfp_t gfp) { - struct wireless_dev *wdev = dev->ieee80211_ptr; struct wiphy *wiphy = wdev->wiphy; struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); struct cfg80211_mgmt_registration *reg; @@ -855,7 +868,7 @@ bool cfg80211_rx_mgmt(struct net_device *dev, int freq, int sig_mbm, /* found match! 
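Editorial note (not part of the commit): cfg80211_rx_mgmt() and cfg80211_mgmt_tx_status() now take a struct wireless_dev instead of a struct net_device. A driver call site that only has the netdev at hand can simply pass its ieee80211_ptr; the wrapper below is hypothetical and only shows the mechanical change.

static bool example_rx_action_frame(struct net_device *dev, int freq,
				    int sig_mbm, const u8 *buf, size_t len)
{
	/* before this patch: cfg80211_rx_mgmt(dev, freq, sig_mbm, buf, len, GFP_ATOMIC) */
	return cfg80211_rx_mgmt(dev->ieee80211_ptr, freq, sig_mbm,
				buf, len, GFP_ATOMIC);
}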
*/ /* Indicate the received Action frame to user space */ - if (nl80211_send_mgmt(rdev, dev, reg->nlpid, + if (nl80211_send_mgmt(rdev, wdev, reg->nlpid, freq, sig_mbm, buf, len, gfp)) continue; @@ -870,15 +883,14 @@ bool cfg80211_rx_mgmt(struct net_device *dev, int freq, int sig_mbm, } EXPORT_SYMBOL(cfg80211_rx_mgmt); -void cfg80211_mgmt_tx_status(struct net_device *dev, u64 cookie, +void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie, const u8 *buf, size_t len, bool ack, gfp_t gfp) { - struct wireless_dev *wdev = dev->ieee80211_ptr; struct wiphy *wiphy = wdev->wiphy; struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); /* Indicate TX status of the Action frame to user space */ - nl80211_send_mgmt_tx_status(rdev, dev, cookie, buf, len, ack, gfp); + nl80211_send_mgmt_tx_status(rdev, wdev, cookie, buf, len, ack, gfp); } EXPORT_SYMBOL(cfg80211_mgmt_tx_status); @@ -907,6 +919,19 @@ void cfg80211_cqm_pktloss_notify(struct net_device *dev, } EXPORT_SYMBOL(cfg80211_cqm_pktloss_notify); +void cfg80211_cqm_txe_notify(struct net_device *dev, + const u8 *peer, u32 num_packets, + u32 rate, u32 intvl, gfp_t gfp) +{ + struct wireless_dev *wdev = dev->ieee80211_ptr; + struct wiphy *wiphy = wdev->wiphy; + struct cfg80211_registered_device *rdev = wiphy_to_dev(wiphy); + + nl80211_send_cqm_txe_notify(rdev, dev, peer, num_packets, + rate, intvl, gfp); +} +EXPORT_SYMBOL(cfg80211_cqm_txe_notify); + void cfg80211_gtk_rekey_notify(struct net_device *dev, const u8 *bssid, const u8 *replay_ctr, gfp_t gfp) { @@ -948,7 +973,6 @@ void cfg80211_ch_switch_notify(struct net_device *dev, int freq, goto out; wdev->channel = chan; - nl80211_ch_switch_notify(rdev, dev, freq, type, GFP_KERNEL); out: wdev_unlock(wdev); diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index 206465dc0cab..97026f3b215a 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -46,28 +46,175 @@ static struct genl_family nl80211_fam = { .post_doit = nl80211_post_doit, }; -/* internal helper: get rdev and dev */ -static int get_rdev_dev_by_ifindex(struct net *netns, struct nlattr **attrs, - struct cfg80211_registered_device **rdev, - struct net_device **dev) +/* returns ERR_PTR values */ +static struct wireless_dev * +__cfg80211_wdev_from_attrs(struct net *netns, struct nlattr **attrs) { - int ifindex; + struct cfg80211_registered_device *rdev; + struct wireless_dev *result = NULL; + bool have_ifidx = attrs[NL80211_ATTR_IFINDEX]; + bool have_wdev_id = attrs[NL80211_ATTR_WDEV]; + u64 wdev_id; + int wiphy_idx = -1; + int ifidx = -1; - if (!attrs[NL80211_ATTR_IFINDEX]) - return -EINVAL; + assert_cfg80211_lock(); - ifindex = nla_get_u32(attrs[NL80211_ATTR_IFINDEX]); - *dev = dev_get_by_index(netns, ifindex); - if (!*dev) - return -ENODEV; + if (!have_ifidx && !have_wdev_id) + return ERR_PTR(-EINVAL); - *rdev = cfg80211_get_dev_from_ifindex(netns, ifindex); - if (IS_ERR(*rdev)) { - dev_put(*dev); - return PTR_ERR(*rdev); + if (have_ifidx) + ifidx = nla_get_u32(attrs[NL80211_ATTR_IFINDEX]); + if (have_wdev_id) { + wdev_id = nla_get_u64(attrs[NL80211_ATTR_WDEV]); + wiphy_idx = wdev_id >> 32; } - return 0; + list_for_each_entry(rdev, &cfg80211_rdev_list, list) { + struct wireless_dev *wdev; + + if (wiphy_net(&rdev->wiphy) != netns) + continue; + + if (have_wdev_id && rdev->wiphy_idx != wiphy_idx) + continue; + + mutex_lock(&rdev->devlist_mtx); + list_for_each_entry(wdev, &rdev->wdev_list, list) { + if (have_ifidx && wdev->netdev && + wdev->netdev->ifindex == ifidx) { + result = wdev; + break; + } + if 
(have_wdev_id && wdev->identifier == (u32)wdev_id) { + result = wdev; + break; + } + } + mutex_unlock(&rdev->devlist_mtx); + + if (result) + break; + } + + if (result) + return result; + return ERR_PTR(-ENODEV); +} + +static struct cfg80211_registered_device * +__cfg80211_rdev_from_attrs(struct net *netns, struct nlattr **attrs) +{ + struct cfg80211_registered_device *rdev = NULL, *tmp; + struct net_device *netdev; + + assert_cfg80211_lock(); + + if (!attrs[NL80211_ATTR_WIPHY] && + !attrs[NL80211_ATTR_IFINDEX] && + !attrs[NL80211_ATTR_WDEV]) + return ERR_PTR(-EINVAL); + + if (attrs[NL80211_ATTR_WIPHY]) + rdev = cfg80211_rdev_by_wiphy_idx( + nla_get_u32(attrs[NL80211_ATTR_WIPHY])); + + if (attrs[NL80211_ATTR_WDEV]) { + u64 wdev_id = nla_get_u64(attrs[NL80211_ATTR_WDEV]); + struct wireless_dev *wdev; + bool found = false; + + tmp = cfg80211_rdev_by_wiphy_idx(wdev_id >> 32); + if (tmp) { + /* make sure wdev exists */ + mutex_lock(&tmp->devlist_mtx); + list_for_each_entry(wdev, &tmp->wdev_list, list) { + if (wdev->identifier != (u32)wdev_id) + continue; + found = true; + break; + } + mutex_unlock(&tmp->devlist_mtx); + + if (!found) + tmp = NULL; + + if (rdev && tmp != rdev) + return ERR_PTR(-EINVAL); + rdev = tmp; + } + } + + if (attrs[NL80211_ATTR_IFINDEX]) { + int ifindex = nla_get_u32(attrs[NL80211_ATTR_IFINDEX]); + netdev = dev_get_by_index(netns, ifindex); + if (netdev) { + if (netdev->ieee80211_ptr) + tmp = wiphy_to_dev( + netdev->ieee80211_ptr->wiphy); + else + tmp = NULL; + + dev_put(netdev); + + /* not wireless device -- return error */ + if (!tmp) + return ERR_PTR(-EINVAL); + + /* mismatch -- return error */ + if (rdev && tmp != rdev) + return ERR_PTR(-EINVAL); + + rdev = tmp; + } + } + + if (!rdev) + return ERR_PTR(-ENODEV); + + if (netns != wiphy_net(&rdev->wiphy)) + return ERR_PTR(-ENODEV); + + return rdev; +} + +/* + * This function returns a pointer to the driver + * that the genl_info item that is passed refers to. + * If successful, it returns non-NULL and also locks + * the driver's mutex! + * + * This means that you need to call cfg80211_unlock_rdev() + * before being allowed to acquire &cfg80211_mutex! + * + * This is necessary because we need to lock the global + * mutex to get an item off the list safely, and then + * we lock the rdev mutex so it doesn't go away under us. + * + * We don't want to keep cfg80211_mutex locked + * for all the time in order to allow requests on + * other interfaces to go through at the same time. + * + * The result of this can be a PTR_ERR and hence must + * be checked with IS_ERR() for errors. 
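Editorial note (not part of the commit): the comment above spells out the calling convention of cfg80211_get_dev_from_info(); a minimal sketch of a conforming caller follows (example_doit() is hypothetical).

static int example_doit(struct net *netns, struct genl_info *info)
{
	struct cfg80211_registered_device *rdev;

	rdev = cfg80211_get_dev_from_info(netns, info);
	if (IS_ERR(rdev))
		return PTR_ERR(rdev);

	/* ... operate on rdev here, rdev->mtx is held ... */

	/* must drop rdev->mtx before cfg80211_mutex may be taken again */
	cfg80211_unlock_rdev(rdev);
	return 0;
}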
+ */ +static struct cfg80211_registered_device * +cfg80211_get_dev_from_info(struct net *netns, struct genl_info *info) +{ + struct cfg80211_registered_device *rdev; + + mutex_lock(&cfg80211_mutex); + rdev = __cfg80211_rdev_from_attrs(netns, info->attrs); + + /* if it is not an error we grab the lock on + * it to assure it won't be going away while + * we operate on it */ + if (!IS_ERR(rdev)) + mutex_lock(&rdev->mtx); + + mutex_unlock(&cfg80211_mutex); + + return rdev; } /* policy for the attributes */ @@ -115,7 +262,7 @@ static const struct nla_policy nl80211_policy[NL80211_ATTR_MAX+1] = { [NL80211_ATTR_STA_VLAN] = { .type = NLA_U32 }, [NL80211_ATTR_MNTR_FLAGS] = { /* NLA_NESTED can't be empty */ }, [NL80211_ATTR_MESH_ID] = { .type = NLA_BINARY, - .len = IEEE80211_MAX_MESH_ID_LEN }, + .len = IEEE80211_MAX_MESH_ID_LEN }, [NL80211_ATTR_MPATH_NEXT_HOP] = { .type = NLA_U32 }, [NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 }, @@ -206,6 +353,8 @@ static const struct nla_policy nl80211_policy[NL80211_ATTR_MAX+1] = { [NL80211_ATTR_NOACK_MAP] = { .type = NLA_U16 }, [NL80211_ATTR_INACTIVITY_TIMEOUT] = { .type = NLA_U16 }, [NL80211_ATTR_BG_SCAN_PERIOD] = { .type = NLA_U16 }, + [NL80211_ATTR_WDEV] = { .type = NLA_U64 }, + [NL80211_ATTR_USER_REG_HINT_TYPE] = { .type = NLA_U32 }, }; /* policy for the key attributes */ @@ -250,8 +399,9 @@ nl80211_rekey_policy[NUM_NL80211_REKEY_DATA] = { static const struct nla_policy nl80211_match_policy[NL80211_SCHED_SCAN_MATCH_ATTR_MAX + 1] = { - [NL80211_ATTR_SCHED_SCAN_MATCH_SSID] = { .type = NLA_BINARY, + [NL80211_SCHED_SCAN_MATCH_ATTR_SSID] = { .type = NLA_BINARY, .len = IEEE80211_MAX_SSID_LEN }, + [NL80211_SCHED_SCAN_MATCH_ATTR_RSSI] = { .type = NLA_U32 }, }; /* ifidx get helper */ @@ -832,6 +982,15 @@ static int nl80211_send_wiphy(struct sk_buff *msg, u32 pid, u32 seq, int flags, dev->wiphy.bands[band]->ht_cap.ampdu_density))) goto nla_put_failure; + /* add VHT info */ + if (dev->wiphy.bands[band]->vht_cap.vht_supported && + (nla_put(msg, NL80211_BAND_ATTR_VHT_MCS_SET, + sizeof(dev->wiphy.bands[band]->vht_cap.vht_mcs), + &dev->wiphy.bands[band]->vht_cap.vht_mcs) || + nla_put_u32(msg, NL80211_BAND_ATTR_VHT_CAPA, + dev->wiphy.bands[band]->vht_cap.cap))) + goto nla_put_failure; + /* add frequencies */ nl_freqs = nla_nest_start(msg, NL80211_BAND_ATTR_FREQS); if (!nl_freqs) @@ -921,7 +1080,12 @@ static int nl80211_send_wiphy(struct sk_buff *msg, u32 pid, u32 seq, int flags, if (nla_put_u32(msg, i, NL80211_CMD_SET_WIPHY_NETNS)) goto nla_put_failure; } - CMD(set_channel, SET_CHANNEL); + if (dev->ops->set_monitor_channel || dev->ops->start_ap || + dev->ops->join_mesh) { + i++; + if (nla_put_u32(msg, i, NL80211_CMD_SET_CHANNEL)) + goto nla_put_failure; + } CMD(set_wds_peer, SET_WDS_PEER); if (dev->wiphy.flags & WIPHY_FLAG_SUPPORTS_TDLS) { CMD(tdls_mgmt, TDLS_MGMT); @@ -1018,6 +1182,7 @@ static int nl80211_send_wiphy(struct sk_buff *msg, u32 pid, u32 seq, int flags, nla_nest_end(msg, nl_ifs); } +#ifdef CONFIG_PM if (dev->wiphy.wowlan.flags || dev->wiphy.wowlan.n_patterns) { struct nlattr *nl_wowlan; @@ -1058,6 +1223,7 @@ static int nl80211_send_wiphy(struct sk_buff *msg, u32 pid, u32 seq, int flags, nla_nest_end(msg, nl_wowlan); } +#endif if (nl80211_put_iftypes(msg, NL80211_ATTR_SOFTWARE_IFTYPES, dev->wiphy.software_iftypes)) @@ -1162,18 +1328,22 @@ static int parse_txq_params(struct nlattr *tb[], static bool nl80211_can_set_dev_channel(struct wireless_dev *wdev) { /* - * You can only set the channel explicitly for AP, mesh - * and WDS type interfaces; all 
others have their channel - * managed via their respective "establish a connection" - * command (connect, join, ...) + * You can only set the channel explicitly for WDS interfaces, + * all others have their channel managed via their respective + * "establish a connection" command (connect, join, ...) + * + * For AP/GO and mesh mode, the channel can be set with the + * channel userspace API, but is only stored and passed to the + * low-level driver when the AP starts or the mesh is joined. + * This is for backward compatibility, userspace can also give + * the channel in the start-ap or join-mesh commands instead. * * Monitors are special as they are normally slaved to - * whatever else is going on, so they behave as though - * you tried setting the wiphy channel itself. + * whatever else is going on, so they have their own special + * operation to set the monitor channel if possible. */ return !wdev || wdev->iftype == NL80211_IFTYPE_AP || - wdev->iftype == NL80211_IFTYPE_WDS || wdev->iftype == NL80211_IFTYPE_MESH_POINT || wdev->iftype == NL80211_IFTYPE_MONITOR || wdev->iftype == NL80211_IFTYPE_P2P_GO; @@ -1204,9 +1374,14 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev, struct wireless_dev *wdev, struct genl_info *info) { + struct ieee80211_channel *channel; enum nl80211_channel_type channel_type = NL80211_CHAN_NO_HT; u32 freq; int result; + enum nl80211_iftype iftype = NL80211_IFTYPE_MONITOR; + + if (wdev) + iftype = wdev->iftype; if (!info->attrs[NL80211_ATTR_WIPHY_FREQ]) return -EINVAL; @@ -1221,12 +1396,32 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev, freq = nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ]); mutex_lock(&rdev->devlist_mtx); - if (wdev) { - wdev_lock(wdev); - result = cfg80211_set_freq(rdev, wdev, freq, channel_type); - wdev_unlock(wdev); - } else { - result = cfg80211_set_freq(rdev, NULL, freq, channel_type); + switch (iftype) { + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_P2P_GO: + if (wdev->beacon_interval) { + result = -EBUSY; + break; + } + channel = rdev_freq_to_chan(rdev, freq, channel_type); + if (!channel || !cfg80211_can_beacon_sec_chan(&rdev->wiphy, + channel, + channel_type)) { + result = -EINVAL; + break; + } + wdev->preset_chan = channel; + wdev->preset_chantype = channel_type; + result = 0; + break; + case NL80211_IFTYPE_MESH_POINT: + result = cfg80211_set_mesh_freq(rdev, wdev, freq, channel_type); + break; + case NL80211_IFTYPE_MONITOR: + result = cfg80211_set_monitor_channel(rdev, freq, channel_type); + break; + default: + result = -EINVAL; } mutex_unlock(&rdev->devlist_mtx); @@ -1300,7 +1495,8 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info) } if (!netdev) { - rdev = __cfg80211_rdev_from_info(info); + rdev = __cfg80211_rdev_from_attrs(genl_info_net(info), + info->attrs); if (IS_ERR(rdev)) { mutex_unlock(&cfg80211_mutex); return PTR_ERR(rdev); @@ -1310,8 +1506,7 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info) result = 0; mutex_lock(&rdev->mtx); - } else if (netif_running(netdev) && - nl80211_can_set_dev_channel(netdev->ieee80211_ptr)) + } else if (nl80211_can_set_dev_channel(netdev->ieee80211_ptr)) wdev = netdev->ieee80211_ptr; else wdev = NULL; @@ -1534,22 +1729,32 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info) return result; } +static inline u64 wdev_id(struct wireless_dev *wdev) +{ + return (u64)wdev->identifier | + ((u64)wiphy_to_dev(wdev->wiphy)->wiphy_idx << 32); +} static int nl80211_send_iface(struct 
sk_buff *msg, u32 pid, u32 seq, int flags, struct cfg80211_registered_device *rdev, - struct net_device *dev) + struct wireless_dev *wdev) { + struct net_device *dev = wdev->netdev; void *hdr; hdr = nl80211hdr_put(msg, pid, seq, flags, NL80211_CMD_NEW_INTERFACE); if (!hdr) return -1; - if (nla_put_u32(msg, NL80211_ATTR_IFINDEX, dev->ifindex) || - nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || - nla_put_string(msg, NL80211_ATTR_IFNAME, dev->name) || - nla_put_u32(msg, NL80211_ATTR_IFTYPE, - dev->ieee80211_ptr->iftype) || + if (dev && + (nla_put_u32(msg, NL80211_ATTR_IFINDEX, dev->ifindex) || + nla_put_string(msg, NL80211_ATTR_IFNAME, dev->name) || + nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, dev->dev_addr))) + goto nla_put_failure; + + if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || + nla_put_u32(msg, NL80211_ATTR_IFTYPE, wdev->iftype) || + nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) || nla_put_u32(msg, NL80211_ATTR_GENERATION, rdev->devlist_generation ^ (cfg80211_rdev_list_generation << 2))) @@ -1559,12 +1764,13 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 pid, u32 seq, int flags, struct ieee80211_channel *chan; enum nl80211_channel_type channel_type; - chan = rdev->ops->get_channel(&rdev->wiphy, &channel_type); + chan = rdev->ops->get_channel(&rdev->wiphy, wdev, + &channel_type); if (chan && (nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, - chan->center_freq) || + chan->center_freq) || nla_put_u32(msg, NL80211_ATTR_WIPHY_CHANNEL_TYPE, - channel_type))) + channel_type))) goto nla_put_failure; } @@ -1595,14 +1801,14 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback * if_idx = 0; mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev, &rdev->netdev_list, list) { + list_for_each_entry(wdev, &rdev->wdev_list, list) { if (if_idx < if_start) { if_idx++; continue; } if (nl80211_send_iface(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, NLM_F_MULTI, - rdev, wdev->netdev) < 0) { + rdev, wdev) < 0) { mutex_unlock(&rdev->devlist_mtx); goto out; } @@ -1625,14 +1831,14 @@ static int nl80211_get_interface(struct sk_buff *skb, struct genl_info *info) { struct sk_buff *msg; struct cfg80211_registered_device *dev = info->user_ptr[0]; - struct net_device *netdev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return -ENOMEM; if (nl80211_send_iface(msg, info->snd_pid, info->snd_seq, 0, - dev, netdev) < 0) { + dev, wdev) < 0) { nlmsg_free(msg); return -ENOBUFS; } @@ -1772,7 +1978,8 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct vif_params params; - struct net_device *dev; + struct wireless_dev *wdev; + struct sk_buff *msg; int err; enum nl80211_iftype type = NL80211_IFTYPE_UNSPECIFIED; u32 flags; @@ -1799,19 +2006,23 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info) return err; } + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!msg) + return -ENOMEM; + err = parse_monitor_flags(type == NL80211_IFTYPE_MONITOR ? info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL, &flags); - dev = rdev->ops->add_virtual_intf(&rdev->wiphy, + wdev = rdev->ops->add_virtual_intf(&rdev->wiphy, nla_data(info->attrs[NL80211_ATTR_IFNAME]), type, err ? 
NULL : &flags, ¶ms); - if (IS_ERR(dev)) - return PTR_ERR(dev); + if (IS_ERR(wdev)) { + nlmsg_free(msg); + return PTR_ERR(wdev); + } if (type == NL80211_IFTYPE_MESH_POINT && info->attrs[NL80211_ATTR_MESH_ID]) { - struct wireless_dev *wdev = dev->ieee80211_ptr; - wdev_lock(wdev); BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN != IEEE80211_MAX_MESH_ID_LEN); @@ -1822,18 +2033,34 @@ static int nl80211_new_interface(struct sk_buff *skb, struct genl_info *info) wdev_unlock(wdev); } - return 0; + if (nl80211_send_iface(msg, info->snd_pid, info->snd_seq, 0, + rdev, wdev) < 0) { + nlmsg_free(msg); + return -ENOBUFS; + } + + return genlmsg_reply(msg, info); } static int nl80211_del_interface(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; if (!rdev->ops->del_virtual_intf) return -EOPNOTSUPP; - return rdev->ops->del_virtual_intf(&rdev->wiphy, dev); + /* + * If we remove a wireless device without a netdev then clear + * user_ptr[1] so that nl80211_post_doit won't dereference it + * to check if it needs to do dev_put(). Otherwise it crashes + * since the wdev has been freed, unlike with a netdev where + * we need the dev_put() for the netdev to really be freed. + */ + if (!wdev->netdev) + info->user_ptr[1] = NULL; + + return rdev->ops->del_virtual_intf(&rdev->wiphy, wdev); } static int nl80211_set_noack_map(struct sk_buff *skb, struct genl_info *info) @@ -2213,6 +2440,33 @@ static int nl80211_parse_beacon(struct genl_info *info, return 0; } +static bool nl80211_get_ap_channel(struct cfg80211_registered_device *rdev, + struct cfg80211_ap_settings *params) +{ + struct wireless_dev *wdev; + bool ret = false; + + mutex_lock(&rdev->devlist_mtx); + + list_for_each_entry(wdev, &rdev->wdev_list, list) { + if (wdev->iftype != NL80211_IFTYPE_AP && + wdev->iftype != NL80211_IFTYPE_P2P_GO) + continue; + + if (!wdev->preset_chan) + continue; + + params->channel = wdev->preset_chan; + params->channel_type = wdev->preset_chantype; + ret = true; + break; + } + + mutex_unlock(&rdev->devlist_mtx); + + return ret; +} + static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; @@ -2299,9 +2553,44 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info) info->attrs[NL80211_ATTR_INACTIVITY_TIMEOUT]); } + if (info->attrs[NL80211_ATTR_WIPHY_FREQ]) { + enum nl80211_channel_type channel_type = NL80211_CHAN_NO_HT; + + if (info->attrs[NL80211_ATTR_WIPHY_CHANNEL_TYPE] && + !nl80211_valid_channel_type(info, &channel_type)) + return -EINVAL; + + params.channel = rdev_freq_to_chan(rdev, + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ]), + channel_type); + if (!params.channel) + return -EINVAL; + params.channel_type = channel_type; + } else if (wdev->preset_chan) { + params.channel = wdev->preset_chan; + params.channel_type = wdev->preset_chantype; + } else if (!nl80211_get_ap_channel(rdev, ¶ms)) + return -EINVAL; + + if (!cfg80211_can_beacon_sec_chan(&rdev->wiphy, params.channel, + params.channel_type)) + return -EINVAL; + + mutex_lock(&rdev->devlist_mtx); + err = cfg80211_can_use_chan(rdev, wdev, params.channel, + CHAN_MODE_SHARED); + mutex_unlock(&rdev->devlist_mtx); + + if (err) + return err; + err = rdev->ops->start_ap(&rdev->wiphy, dev, ¶ms); - if (!err) + if (!err) { + wdev->preset_chan = params.channel; + wdev->preset_chantype = params.channel_type; wdev->beacon_interval = 
params.beacon_interval; + wdev->channel = params.channel; + } return err; } @@ -2334,23 +2623,8 @@ static int nl80211_stop_ap(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct net_device *dev = info->user_ptr[1]; - struct wireless_dev *wdev = dev->ieee80211_ptr; - int err; - if (!rdev->ops->stop_ap) - return -EOPNOTSUPP; - - if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO) - return -EOPNOTSUPP; - - if (!wdev->beacon_interval) - return -ENOENT; - - err = rdev->ops->stop_ap(&rdev->wiphy, dev); - if (!err) - wdev->beacon_interval = 0; - return err; + return cfg80211_stop_ap(rdev, dev); } static const struct nla_policy sta_flags_policy[NL80211_STA_FLAG_MAX + 1] = { @@ -2442,7 +2716,8 @@ static bool nl80211_put_sta_rate(struct sk_buff *msg, struct rate_info *info, int attr) { struct nlattr *rate; - u16 bitrate; + u32 bitrate; + u16 bitrate_compat; rate = nla_nest_start(msg, attr); if (!rate) @@ -2450,8 +2725,12 @@ static bool nl80211_put_sta_rate(struct sk_buff *msg, struct rate_info *info, /* cfg80211_calculate_bitrate will return 0 for mcs >= 32 */ bitrate = cfg80211_calculate_bitrate(info); + /* report 16-bit bitrate only if we can */ + bitrate_compat = bitrate < (1UL << 16) ? bitrate : 0; if ((bitrate > 0 && - nla_put_u16(msg, NL80211_RATE_INFO_BITRATE, bitrate)) || + nla_put_u32(msg, NL80211_RATE_INFO_BITRATE32, bitrate)) || + (bitrate_compat > 0 && + nla_put_u16(msg, NL80211_RATE_INFO_BITRATE, bitrate_compat)) || ((info->flags & RATE_INFO_FLAGS_MCS) && nla_put_u8(msg, NL80211_RATE_INFO_MCS, info->mcs)) || ((info->flags & RATE_INFO_FLAGS_40_MHZ_WIDTH) && @@ -3304,6 +3583,7 @@ static int nl80211_req_set_reg(struct sk_buff *skb, struct genl_info *info) { int r; char *data = NULL; + enum nl80211_user_reg_hint_type user_reg_hint_type; /* * You should only get this when cfg80211 hasn't yet initialized @@ -3323,7 +3603,21 @@ static int nl80211_req_set_reg(struct sk_buff *skb, struct genl_info *info) data = nla_data(info->attrs[NL80211_ATTR_REG_ALPHA2]); - r = regulatory_hint_user(data); + if (info->attrs[NL80211_ATTR_USER_REG_HINT_TYPE]) + user_reg_hint_type = + nla_get_u32(info->attrs[NL80211_ATTR_USER_REG_HINT_TYPE]); + else + user_reg_hint_type = NL80211_USER_REG_HINT_USER; + + switch (user_reg_hint_type) { + case NL80211_USER_REG_HINT_USER: + case NL80211_USER_REG_HINT_CELL_BASE: + break; + default: + return -EINVAL; + } + + r = regulatory_hint_user(data, user_reg_hint_type); return r; } @@ -3413,7 +3707,13 @@ static int nl80211_get_mesh_config(struct sk_buff *skb, nla_put_u32(msg, NL80211_MESHCONF_RSSI_THRESHOLD, cur_params.rssi_threshold) || nla_put_u32(msg, NL80211_MESHCONF_HT_OPMODE, - cur_params.ht_opmode)) + cur_params.ht_opmode) || + nla_put_u32(msg, NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT, + cur_params.dot11MeshHWMPactivePathToRootTimeout) || + nla_put_u16(msg, NL80211_MESHCONF_HWMP_ROOT_INTERVAL, + cur_params.dot11MeshHWMProotInterval) || + nla_put_u16(msg, NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL, + cur_params.dot11MeshHWMPconfirmationInterval)) goto nla_put_failure; nla_nest_end(msg, pinfoattr); genlmsg_end(msg, hdr); @@ -3436,7 +3736,6 @@ static const struct nla_policy nl80211_meshconf_params_policy[NL80211_MESHCONF_A [NL80211_MESHCONF_ELEMENT_TTL] = { .type = NLA_U8 }, [NL80211_MESHCONF_AUTO_OPEN_PLINKS] = { .type = NLA_U8 }, [NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR] = { .type = NLA_U32 }, - [NL80211_MESHCONF_HWMP_MAX_PREQ_RETRIES] = { .type = 
NLA_U8 }, [NL80211_MESHCONF_PATH_REFRESH_TIME] = { .type = NLA_U32 }, [NL80211_MESHCONF_MIN_DISCOVERY_TIMEOUT] = { .type = NLA_U16 }, @@ -3448,8 +3747,11 @@ static const struct nla_policy nl80211_meshconf_params_policy[NL80211_MESHCONF_A [NL80211_MESHCONF_HWMP_RANN_INTERVAL] = { .type = NLA_U16 }, [NL80211_MESHCONF_GATE_ANNOUNCEMENTS] = { .type = NLA_U8 }, [NL80211_MESHCONF_FORWARDING] = { .type = NLA_U8 }, - [NL80211_MESHCONF_RSSI_THRESHOLD] = { .type = NLA_U32}, - [NL80211_MESHCONF_HT_OPMODE] = { .type = NLA_U16}, + [NL80211_MESHCONF_RSSI_THRESHOLD] = { .type = NLA_U32 }, + [NL80211_MESHCONF_HT_OPMODE] = { .type = NLA_U16 }, + [NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT] = { .type = NLA_U32 }, + [NL80211_MESHCONF_HWMP_ROOT_INTERVAL] = { .type = NLA_U16 }, + [NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL] = { .type = NLA_U16 }, }; static const struct nla_policy @@ -3459,7 +3761,7 @@ static const struct nla_policy [NL80211_MESH_SETUP_ENABLE_VENDOR_METRIC] = { .type = NLA_U8 }, [NL80211_MESH_SETUP_USERSPACE_AUTH] = { .type = NLA_FLAG }, [NL80211_MESH_SETUP_IE] = { .type = NLA_BINARY, - .len = IEEE80211_MAX_DATA_LEN }, + .len = IEEE80211_MAX_DATA_LEN }, [NL80211_MESH_SETUP_USERSPACE_AMPE] = { .type = NLA_FLAG }, }; @@ -3492,63 +3794,82 @@ do {\ /* Fill in the params struct */ FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshRetryTimeout, - mask, NL80211_MESHCONF_RETRY_TIMEOUT, nla_get_u16); + mask, NL80211_MESHCONF_RETRY_TIMEOUT, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshConfirmTimeout, - mask, NL80211_MESHCONF_CONFIRM_TIMEOUT, nla_get_u16); + mask, NL80211_MESHCONF_CONFIRM_TIMEOUT, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHoldingTimeout, - mask, NL80211_MESHCONF_HOLDING_TIMEOUT, nla_get_u16); + mask, NL80211_MESHCONF_HOLDING_TIMEOUT, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshMaxPeerLinks, - mask, NL80211_MESHCONF_MAX_PEER_LINKS, nla_get_u16); + mask, NL80211_MESHCONF_MAX_PEER_LINKS, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshMaxRetries, - mask, NL80211_MESHCONF_MAX_RETRIES, nla_get_u8); + mask, NL80211_MESHCONF_MAX_RETRIES, + nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshTTL, - mask, NL80211_MESHCONF_TTL, nla_get_u8); + mask, NL80211_MESHCONF_TTL, nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, element_ttl, - mask, NL80211_MESHCONF_ELEMENT_TTL, nla_get_u8); + mask, NL80211_MESHCONF_ELEMENT_TTL, + nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, auto_open_plinks, - mask, NL80211_MESHCONF_AUTO_OPEN_PLINKS, nla_get_u8); - FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshNbrOffsetMaxNeighbor, - mask, NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR, - nla_get_u32); + mask, NL80211_MESHCONF_AUTO_OPEN_PLINKS, + nla_get_u8); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshNbrOffsetMaxNeighbor, mask, + NL80211_MESHCONF_SYNC_OFFSET_MAX_NEIGHBOR, + nla_get_u32); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPmaxPREQretries, - mask, NL80211_MESHCONF_HWMP_MAX_PREQ_RETRIES, - nla_get_u8); + mask, NL80211_MESHCONF_HWMP_MAX_PREQ_RETRIES, + nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, path_refresh_time, - mask, NL80211_MESHCONF_PATH_REFRESH_TIME, nla_get_u32); + mask, NL80211_MESHCONF_PATH_REFRESH_TIME, + nla_get_u32); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, min_discovery_timeout, - mask, NL80211_MESHCONF_MIN_DISCOVERY_TIMEOUT, - nla_get_u16); - FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPactivePathTimeout, - mask, NL80211_MESHCONF_HWMP_ACTIVE_PATH_TIMEOUT, - nla_get_u32); + mask, NL80211_MESHCONF_MIN_DISCOVERY_TIMEOUT, + nla_get_u16); + 
FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPactivePathTimeout, mask, + NL80211_MESHCONF_HWMP_ACTIVE_PATH_TIMEOUT, + nla_get_u32); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPpreqMinInterval, - mask, NL80211_MESHCONF_HWMP_PREQ_MIN_INTERVAL, - nla_get_u16); + mask, NL80211_MESHCONF_HWMP_PREQ_MIN_INTERVAL, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPperrMinInterval, - mask, NL80211_MESHCONF_HWMP_PERR_MIN_INTERVAL, - nla_get_u16); + mask, NL80211_MESHCONF_HWMP_PERR_MIN_INTERVAL, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, - dot11MeshHWMPnetDiameterTraversalTime, - mask, NL80211_MESHCONF_HWMP_NET_DIAM_TRVS_TIME, - nla_get_u16); + dot11MeshHWMPnetDiameterTraversalTime, mask, + NL80211_MESHCONF_HWMP_NET_DIAM_TRVS_TIME, + nla_get_u16); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPRootMode, mask, + NL80211_MESHCONF_HWMP_ROOTMODE, nla_get_u8); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPRannInterval, mask, + NL80211_MESHCONF_HWMP_RANN_INTERVAL, + nla_get_u16); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, - dot11MeshHWMPRootMode, mask, - NL80211_MESHCONF_HWMP_ROOTMODE, - nla_get_u8); - FILL_IN_MESH_PARAM_IF_SET(tb, cfg, - dot11MeshHWMPRannInterval, mask, - NL80211_MESHCONF_HWMP_RANN_INTERVAL, - nla_get_u16); - FILL_IN_MESH_PARAM_IF_SET(tb, cfg, - dot11MeshGateAnnouncementProtocol, mask, - NL80211_MESHCONF_GATE_ANNOUNCEMENTS, - nla_get_u8); + dot11MeshGateAnnouncementProtocol, mask, + NL80211_MESHCONF_GATE_ANNOUNCEMENTS, + nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshForwarding, - mask, NL80211_MESHCONF_FORWARDING, nla_get_u8); + mask, NL80211_MESHCONF_FORWARDING, + nla_get_u8); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, rssi_threshold, - mask, NL80211_MESHCONF_RSSI_THRESHOLD, nla_get_u32); + mask, NL80211_MESHCONF_RSSI_THRESHOLD, + nla_get_u32); FILL_IN_MESH_PARAM_IF_SET(tb, cfg, ht_opmode, - mask, NL80211_MESHCONF_HT_OPMODE, nla_get_u16); + mask, NL80211_MESHCONF_HT_OPMODE, + nla_get_u16); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPactivePathToRootTimeout, + mask, + NL80211_MESHCONF_HWMP_PATH_TO_ROOT_TIMEOUT, + nla_get_u32); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMProotInterval, + mask, NL80211_MESHCONF_HWMP_ROOT_INTERVAL, + nla_get_u16); + FILL_IN_MESH_PARAM_IF_SET(tb, cfg, + dot11MeshHWMPconfirmationInterval, mask, + NL80211_MESHCONF_HWMP_CONFIRMATION_INTERVAL, + nla_get_u16); if (mask_out) *mask_out = mask; @@ -3666,6 +3987,11 @@ static int nl80211_get_reg(struct sk_buff *skb, struct genl_info *info) cfg80211_regdomain->dfs_region))) goto nla_put_failure; + if (reg_last_request_cell_base() && + nla_put_u32(msg, NL80211_ATTR_USER_REG_HINT_TYPE, + NL80211_USER_REG_HINT_CELL_BASE)) + goto nla_put_failure; + nl_reg_rules = nla_nest_start(msg, NL80211_ATTR_REG_RULES); if (!nl_reg_rules) goto nla_put_failure; @@ -3831,7 +4157,7 @@ static int validate_scan_freqs(struct nlattr *freqs) static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; struct cfg80211_scan_request *request; struct nlattr *attr; struct wiphy *wiphy; @@ -3991,15 +4317,16 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info) request->no_cck = nla_get_flag(info->attrs[NL80211_ATTR_TX_NO_CCK_RATE]); - request->dev = dev; + request->wdev = wdev; request->wiphy = &rdev->wiphy; rdev->scan_req = request; - err = rdev->ops->scan(&rdev->wiphy, dev, request); + err = 
rdev->ops->scan(&rdev->wiphy, request); if (!err) { - nl80211_send_scan_start(rdev, dev); - dev_hold(dev); + nl80211_send_scan_start(rdev, wdev); + if (wdev->netdev) + dev_hold(wdev->netdev); } else { out_free: rdev->scan_req = NULL; @@ -4185,12 +4512,12 @@ static int nl80211_start_sched_scan(struct sk_buff *skb, nla_for_each_nested(attr, info->attrs[NL80211_ATTR_SCHED_SCAN_MATCH], tmp) { - struct nlattr *ssid; + struct nlattr *ssid, *rssi; nla_parse(tb, NL80211_SCHED_SCAN_MATCH_ATTR_MAX, nla_data(attr), nla_len(attr), nl80211_match_policy); - ssid = tb[NL80211_ATTR_SCHED_SCAN_MATCH_SSID]; + ssid = tb[NL80211_SCHED_SCAN_MATCH_ATTR_SSID]; if (ssid) { if (nla_len(ssid) > IEEE80211_MAX_SSID_LEN) { err = -EINVAL; @@ -4201,6 +4528,12 @@ static int nl80211_start_sched_scan(struct sk_buff *skb, request->match_sets[i].ssid.ssid_len = nla_len(ssid); } + rssi = tb[NL80211_SCHED_SCAN_MATCH_ATTR_RSSI]; + if (rssi) + request->rssi_thold = nla_get_u32(rssi); + else + request->rssi_thold = + NL80211_SCAN_RSSI_THOLD_OFF; i++; } } @@ -5058,21 +5391,18 @@ static int nl80211_testmode_dump(struct sk_buff *skb, nl80211_policy); if (err) return err; - if (nl80211_fam.attrbuf[NL80211_ATTR_WIPHY]) { - phy_idx = nla_get_u32( - nl80211_fam.attrbuf[NL80211_ATTR_WIPHY]); - } else { - struct net_device *netdev; - err = get_rdev_dev_by_ifindex(sock_net(skb->sk), - nl80211_fam.attrbuf, - &rdev, &netdev); - if (err) - return err; - dev_put(netdev); - phy_idx = rdev->wiphy_idx; - cfg80211_unlock_rdev(rdev); + mutex_lock(&cfg80211_mutex); + rdev = __cfg80211_rdev_from_attrs(sock_net(skb->sk), + nl80211_fam.attrbuf); + if (IS_ERR(rdev)) { + mutex_unlock(&cfg80211_mutex); + return PTR_ERR(rdev); } + phy_idx = rdev->wiphy_idx; + rdev = NULL; + mutex_unlock(&cfg80211_mutex); + if (nl80211_fam.attrbuf[NL80211_ATTR_TESTDATA]) cb->args[1] = (long)nl80211_fam.attrbuf[NL80211_ATTR_TESTDATA]; @@ -5474,7 +5804,7 @@ static int nl80211_remain_on_channel(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; struct ieee80211_channel *chan; struct sk_buff *msg; void *hdr; @@ -5489,18 +5819,18 @@ static int nl80211_remain_on_channel(struct sk_buff *skb, duration = nla_get_u32(info->attrs[NL80211_ATTR_DURATION]); + if (!rdev->ops->remain_on_channel || + !(rdev->wiphy.flags & WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL)) + return -EOPNOTSUPP; + /* - * We should be on that channel for at least one jiffie, - * and more than 5 seconds seems excessive. + * We should be on that channel for at least a minimum amount of + * time (10ms) but no longer than the driver supports. 
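Editorial note (not part of the commit): the same 10 ms lower bound and driver-advertised upper bound are now enforced both here for remain-on-channel and, further down, for the off-channel wait in nl80211_tx_mgmt(). A shared helper is not part of the patch, but the check both sites perform amounts to:

static bool roc_duration_in_range(struct cfg80211_registered_device *rdev,
				  u32 duration)
{
	return duration >= NL80211_MIN_REMAIN_ON_CHANNEL_TIME &&
	       duration <= rdev->wiphy.max_remain_on_channel_duration;
}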
*/ - if (!duration || !msecs_to_jiffies(duration) || + if (duration < NL80211_MIN_REMAIN_ON_CHANNEL_TIME || duration > rdev->wiphy.max_remain_on_channel_duration) return -EINVAL; - if (!rdev->ops->remain_on_channel || - !(rdev->wiphy.flags & WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL)) - return -EOPNOTSUPP; - if (info->attrs[NL80211_ATTR_WIPHY_CHANNEL_TYPE] && !nl80211_valid_channel_type(info, &channel_type)) return -EINVAL; @@ -5522,7 +5852,7 @@ static int nl80211_remain_on_channel(struct sk_buff *skb, goto free_msg; } - err = rdev->ops->remain_on_channel(&rdev->wiphy, dev, chan, + err = rdev->ops->remain_on_channel(&rdev->wiphy, wdev, chan, channel_type, duration, &cookie); if (err) @@ -5546,7 +5876,7 @@ static int nl80211_cancel_remain_on_channel(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; u64 cookie; if (!info->attrs[NL80211_ATTR_COOKIE]) @@ -5557,7 +5887,7 @@ static int nl80211_cancel_remain_on_channel(struct sk_buff *skb, cookie = nla_get_u64(info->attrs[NL80211_ATTR_COOKIE]); - return rdev->ops->cancel_remain_on_channel(&rdev->wiphy, dev, cookie); + return rdev->ops->cancel_remain_on_channel(&rdev->wiphy, wdev, cookie); } static u32 rateset_to_mask(struct ieee80211_supported_band *sband, @@ -5706,7 +6036,7 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb, static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; u16 frame_type = IEEE80211_FTYPE_MGMT | IEEE80211_STYPE_ACTION; if (!info->attrs[NL80211_ATTR_FRAME_MATCH]) @@ -5715,21 +6045,24 @@ static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info) if (info->attrs[NL80211_ATTR_FRAME_TYPE]) frame_type = nla_get_u16(info->attrs[NL80211_ATTR_FRAME_TYPE]); - if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_STATION && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_ADHOC && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP_VLAN && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO) + switch (wdev->iftype) { + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_ADHOC: + case NL80211_IFTYPE_P2P_CLIENT: + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_AP_VLAN: + case NL80211_IFTYPE_MESH_POINT: + case NL80211_IFTYPE_P2P_GO: + break; + default: return -EOPNOTSUPP; + } /* not much point in registering if we can't reply */ if (!rdev->ops->mgmt_tx) return -EOPNOTSUPP; - return cfg80211_mlme_register_mgmt(dev->ieee80211_ptr, info->snd_pid, - frame_type, + return cfg80211_mlme_register_mgmt(wdev, info->snd_pid, frame_type, nla_data(info->attrs[NL80211_ATTR_FRAME_MATCH]), nla_len(info->attrs[NL80211_ATTR_FRAME_MATCH])); } @@ -5737,7 +6070,7 @@ static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info) static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; struct ieee80211_channel *chan; enum nl80211_channel_type channel_type = NL80211_CHAN_NO_HT; bool channel_type_valid = false; @@ -5758,19 +6091,32 @@ static int 
nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info) if (!rdev->ops->mgmt_tx) return -EOPNOTSUPP; - if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_STATION && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_ADHOC && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP_VLAN && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO) + switch (wdev->iftype) { + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_ADHOC: + case NL80211_IFTYPE_P2P_CLIENT: + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_AP_VLAN: + case NL80211_IFTYPE_MESH_POINT: + case NL80211_IFTYPE_P2P_GO: + break; + default: return -EOPNOTSUPP; + } if (info->attrs[NL80211_ATTR_DURATION]) { if (!(rdev->wiphy.flags & WIPHY_FLAG_OFFCHAN_TX)) return -EINVAL; wait = nla_get_u32(info->attrs[NL80211_ATTR_DURATION]); + + /* + * We should wait on the channel for at least a minimum amount + * of time (10ms) but no longer than the driver supports. + */ + if (wait < NL80211_MIN_REMAIN_ON_CHANNEL_TIME || + wait > rdev->wiphy.max_remain_on_channel_duration) + return -EINVAL; + } if (info->attrs[NL80211_ATTR_WIPHY_CHANNEL_TYPE]) { @@ -5805,7 +6151,7 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info) } } - err = cfg80211_mlme_mgmt_tx(rdev, dev, chan, offchan, channel_type, + err = cfg80211_mlme_mgmt_tx(rdev, wdev, chan, offchan, channel_type, channel_type_valid, wait, nla_data(info->attrs[NL80211_ATTR_FRAME]), nla_len(info->attrs[NL80211_ATTR_FRAME]), @@ -5833,7 +6179,7 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info) static int nl80211_tx_mgmt_cancel_wait(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; - struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = info->user_ptr[1]; u64 cookie; if (!info->attrs[NL80211_ATTR_COOKIE]) @@ -5842,17 +6188,21 @@ static int nl80211_tx_mgmt_cancel_wait(struct sk_buff *skb, struct genl_info *in if (!rdev->ops->mgmt_tx_cancel_wait) return -EOPNOTSUPP; - if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_STATION && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_ADHOC && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP_VLAN && - dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO) + switch (wdev->iftype) { + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_ADHOC: + case NL80211_IFTYPE_P2P_CLIENT: + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_AP_VLAN: + case NL80211_IFTYPE_P2P_GO: + break; + default: return -EOPNOTSUPP; + } cookie = nla_get_u64(info->attrs[NL80211_ATTR_COOKIE]); - return rdev->ops->mgmt_tx_cancel_wait(&rdev->wiphy, dev, cookie); + return rdev->ops->mgmt_tx_cancel_wait(&rdev->wiphy, wdev, cookie); } static int nl80211_set_power_save(struct sk_buff *skb, struct genl_info *info) @@ -5938,8 +6288,35 @@ nl80211_attr_cqm_policy[NL80211_ATTR_CQM_MAX + 1] __read_mostly = { [NL80211_ATTR_CQM_RSSI_THOLD] = { .type = NLA_U32 }, [NL80211_ATTR_CQM_RSSI_HYST] = { .type = NLA_U32 }, [NL80211_ATTR_CQM_RSSI_THRESHOLD_EVENT] = { .type = NLA_U32 }, + [NL80211_ATTR_CQM_TXE_RATE] = { .type = NLA_U32 }, + [NL80211_ATTR_CQM_TXE_PKTS] = { .type = NLA_U32 }, + [NL80211_ATTR_CQM_TXE_INTVL] = { .type = NLA_U32 }, }; +static int nl80211_set_cqm_txe(struct genl_info *info, + u32 rate, u32 pkts, u32 intvl) 
+{ + struct cfg80211_registered_device *rdev = info->user_ptr[0]; + struct wireless_dev *wdev; + struct net_device *dev = info->user_ptr[1]; + + if ((rate < 0 || rate > 100) || + (intvl < 0 || intvl > NL80211_CQM_TXE_MAX_INTVL)) + return -EINVAL; + + wdev = dev->ieee80211_ptr; + + if (!rdev->ops->set_cqm_txe_config) + return -EOPNOTSUPP; + + if (wdev->iftype != NL80211_IFTYPE_STATION && + wdev->iftype != NL80211_IFTYPE_P2P_CLIENT) + return -EOPNOTSUPP; + + return rdev->ops->set_cqm_txe_config(wdev->wiphy, dev, + rate, pkts, intvl); +} + static int nl80211_set_cqm_rssi(struct genl_info *info, s32 threshold, u32 hysteresis) { @@ -5987,6 +6364,14 @@ static int nl80211_set_cqm(struct sk_buff *skb, struct genl_info *info) threshold = nla_get_u32(attrs[NL80211_ATTR_CQM_RSSI_THOLD]); hysteresis = nla_get_u32(attrs[NL80211_ATTR_CQM_RSSI_HYST]); err = nl80211_set_cqm_rssi(info, threshold, hysteresis); + } else if (attrs[NL80211_ATTR_CQM_TXE_RATE] && + attrs[NL80211_ATTR_CQM_TXE_PKTS] && + attrs[NL80211_ATTR_CQM_TXE_INTVL]) { + u32 rate, pkts, intvl; + rate = nla_get_u32(attrs[NL80211_ATTR_CQM_TXE_RATE]); + pkts = nla_get_u32(attrs[NL80211_ATTR_CQM_TXE_PKTS]); + intvl = nla_get_u32(attrs[NL80211_ATTR_CQM_TXE_INTVL]); + err = nl80211_set_cqm_txe(info, rate, pkts, intvl); } else err = -EINVAL; @@ -6032,6 +6417,24 @@ static int nl80211_join_mesh(struct sk_buff *skb, struct genl_info *info) return err; } + if (info->attrs[NL80211_ATTR_WIPHY_FREQ]) { + enum nl80211_channel_type channel_type = NL80211_CHAN_NO_HT; + + if (info->attrs[NL80211_ATTR_WIPHY_CHANNEL_TYPE] && + !nl80211_valid_channel_type(info, &channel_type)) + return -EINVAL; + + setup.channel = rdev_freq_to_chan(rdev, + nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ]), + channel_type); + if (!setup.channel) + return -EINVAL; + setup.channel_type = channel_type; + } else { + /* cfg80211_join_mesh() will sort it out */ + setup.channel = NULL; + } + return cfg80211_join_mesh(rdev, dev, &setup, &cfg); } @@ -6043,6 +6446,7 @@ static int nl80211_leave_mesh(struct sk_buff *skb, struct genl_info *info) return cfg80211_leave_mesh(rdev, dev); } +#ifdef CONFIG_PM static int nl80211_get_wowlan(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; @@ -6124,8 +6528,8 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct nlattr *tb[NUM_NL80211_WOWLAN_TRIG]; - struct cfg80211_wowlan no_triggers = {}; struct cfg80211_wowlan new_triggers = {}; + struct cfg80211_wowlan *ntrig; struct wiphy_wowlan_support *wowlan = &rdev->wiphy.wowlan; int err, i; bool prev_enabled = rdev->wowlan; @@ -6133,8 +6537,11 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info) if (!rdev->wiphy.wowlan.flags && !rdev->wiphy.wowlan.n_patterns) return -EOPNOTSUPP; - if (!info->attrs[NL80211_ATTR_WOWLAN_TRIGGERS]) - goto no_triggers; + if (!info->attrs[NL80211_ATTR_WOWLAN_TRIGGERS]) { + cfg80211_rdev_free_wowlan(rdev); + rdev->wowlan = NULL; + goto set_wakeup; + } err = nla_parse(tb, MAX_NL80211_WOWLAN_TRIG, nla_data(info->attrs[NL80211_ATTR_WOWLAN_TRIGGERS]), @@ -6245,22 +6652,15 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info) } } - if (memcmp(&new_triggers, &no_triggers, sizeof(new_triggers))) { - struct cfg80211_wowlan *ntrig; - ntrig = kmemdup(&new_triggers, sizeof(new_triggers), - GFP_KERNEL); - if (!ntrig) { - err = -ENOMEM; - goto error; - } - cfg80211_rdev_free_wowlan(rdev); - 
rdev->wowlan = ntrig; - } else { - no_triggers: - cfg80211_rdev_free_wowlan(rdev); - rdev->wowlan = NULL; + ntrig = kmemdup(&new_triggers, sizeof(new_triggers), GFP_KERNEL); + if (!ntrig) { + err = -ENOMEM; + goto error; } + cfg80211_rdev_free_wowlan(rdev); + rdev->wowlan = ntrig; + set_wakeup: if (rdev->ops->set_wakeup && prev_enabled != !!rdev->wowlan) rdev->ops->set_wakeup(&rdev->wiphy, rdev->wowlan); @@ -6271,6 +6671,7 @@ static int nl80211_set_wowlan(struct sk_buff *skb, struct genl_info *info) kfree(new_triggers.patterns); return err; } +#endif static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info) { @@ -6415,44 +6816,75 @@ static int nl80211_register_beacons(struct sk_buff *skb, struct genl_info *info) #define NL80211_FLAG_CHECK_NETDEV_UP 0x08 #define NL80211_FLAG_NEED_NETDEV_UP (NL80211_FLAG_NEED_NETDEV |\ NL80211_FLAG_CHECK_NETDEV_UP) +#define NL80211_FLAG_NEED_WDEV 0x10 +/* If a netdev is associated, it must be UP */ +#define NL80211_FLAG_NEED_WDEV_UP (NL80211_FLAG_NEED_WDEV |\ + NL80211_FLAG_CHECK_NETDEV_UP) static int nl80211_pre_doit(struct genl_ops *ops, struct sk_buff *skb, struct genl_info *info) { struct cfg80211_registered_device *rdev; + struct wireless_dev *wdev; struct net_device *dev; - int err; bool rtnl = ops->internal_flags & NL80211_FLAG_NEED_RTNL; if (rtnl) rtnl_lock(); if (ops->internal_flags & NL80211_FLAG_NEED_WIPHY) { - rdev = cfg80211_get_dev_from_info(info); + rdev = cfg80211_get_dev_from_info(genl_info_net(info), info); if (IS_ERR(rdev)) { if (rtnl) rtnl_unlock(); return PTR_ERR(rdev); } info->user_ptr[0] = rdev; - } else if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV) { - err = get_rdev_dev_by_ifindex(genl_info_net(info), info->attrs, - &rdev, &dev); - if (err) { + } else if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV || + ops->internal_flags & NL80211_FLAG_NEED_WDEV) { + mutex_lock(&cfg80211_mutex); + wdev = __cfg80211_wdev_from_attrs(genl_info_net(info), + info->attrs); + if (IS_ERR(wdev)) { + mutex_unlock(&cfg80211_mutex); if (rtnl) rtnl_unlock(); - return err; + return PTR_ERR(wdev); } - if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP && - !netif_running(dev)) { - cfg80211_unlock_rdev(rdev); - dev_put(dev); - if (rtnl) - rtnl_unlock(); - return -ENETDOWN; + + dev = wdev->netdev; + rdev = wiphy_to_dev(wdev->wiphy); + + if (ops->internal_flags & NL80211_FLAG_NEED_NETDEV) { + if (!dev) { + mutex_unlock(&cfg80211_mutex); + if (rtnl) + rtnl_unlock(); + return -EINVAL; + } + + info->user_ptr[1] = dev; + } else { + info->user_ptr[1] = wdev; } + + if (dev) { + if (ops->internal_flags & NL80211_FLAG_CHECK_NETDEV_UP && + !netif_running(dev)) { + mutex_unlock(&cfg80211_mutex); + if (rtnl) + rtnl_unlock(); + return -ENETDOWN; + } + + dev_hold(dev); + } + + cfg80211_lock_rdev(rdev); + + mutex_unlock(&cfg80211_mutex); + info->user_ptr[0] = rdev; - info->user_ptr[1] = dev; } return 0; @@ -6463,8 +6895,16 @@ static void nl80211_post_doit(struct genl_ops *ops, struct sk_buff *skb, { if (info->user_ptr[0]) cfg80211_unlock_rdev(info->user_ptr[0]); - if (info->user_ptr[1]) - dev_put(info->user_ptr[1]); + if (info->user_ptr[1]) { + if (ops->internal_flags & NL80211_FLAG_NEED_WDEV) { + struct wireless_dev *wdev = info->user_ptr[1]; + + if (wdev->netdev) + dev_put(wdev->netdev); + } else { + dev_put(info->user_ptr[1]); + } + } if (ops->internal_flags & NL80211_FLAG_NEED_RTNL) rtnl_unlock(); } @@ -6491,7 +6931,7 @@ static struct genl_ops nl80211_ops[] = { .dumpit = nl80211_dump_interface, .policy = nl80211_policy, /* can be retrieved by 
unprivileged users */ - .internal_flags = NL80211_FLAG_NEED_NETDEV, + .internal_flags = NL80211_FLAG_NEED_WDEV, }, { .cmd = NL80211_CMD_SET_INTERFACE, @@ -6514,7 +6954,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_del_interface, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV | + .internal_flags = NL80211_FLAG_NEED_WDEV | NL80211_FLAG_NEED_RTNL, }, { @@ -6685,7 +7125,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_trigger_scan, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | + .internal_flags = NL80211_FLAG_NEED_WDEV_UP | NL80211_FLAG_NEED_RTNL, }, { @@ -6826,7 +7266,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_remain_on_channel, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | + .internal_flags = NL80211_FLAG_NEED_WDEV_UP | NL80211_FLAG_NEED_RTNL, }, { @@ -6834,7 +7274,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_cancel_remain_on_channel, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | + .internal_flags = NL80211_FLAG_NEED_WDEV_UP | NL80211_FLAG_NEED_RTNL, }, { @@ -6850,7 +7290,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_register_mgmt, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV | + .internal_flags = NL80211_FLAG_NEED_WDEV | NL80211_FLAG_NEED_RTNL, }, { @@ -6858,7 +7298,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_tx_mgmt, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | + .internal_flags = NL80211_FLAG_NEED_WDEV_UP | NL80211_FLAG_NEED_RTNL, }, { @@ -6866,7 +7306,7 @@ static struct genl_ops nl80211_ops[] = { .doit = nl80211_tx_mgmt_cancel_wait, .policy = nl80211_policy, .flags = GENL_ADMIN_PERM, - .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | + .internal_flags = NL80211_FLAG_NEED_WDEV_UP | NL80211_FLAG_NEED_RTNL, }, { @@ -6925,6 +7365,7 @@ static struct genl_ops nl80211_ops[] = { .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | NL80211_FLAG_NEED_RTNL, }, +#ifdef CONFIG_PM { .cmd = NL80211_CMD_GET_WOWLAN, .doit = nl80211_get_wowlan, @@ -6941,6 +7382,7 @@ static struct genl_ops nl80211_ops[] = { .internal_flags = NL80211_FLAG_NEED_WIPHY | NL80211_FLAG_NEED_RTNL, }, +#endif { .cmd = NL80211_CMD_SET_REKEY_OFFLOAD, .doit = nl80211_set_rekey_data, @@ -7075,7 +7517,7 @@ static int nl80211_add_scan_req(struct sk_buff *msg, static int nl80211_send_scan_msg(struct sk_buff *msg, struct cfg80211_registered_device *rdev, - struct net_device *netdev, + struct wireless_dev *wdev, u32 pid, u32 seq, int flags, u32 cmd) { @@ -7086,7 +7528,9 @@ static int nl80211_send_scan_msg(struct sk_buff *msg, return -1; if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || - nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex)) + (wdev->netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX, + wdev->netdev->ifindex)) || + nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev))) goto nla_put_failure; /* ignore errors and send incomplete event anyway */ @@ -7123,15 +7567,15 @@ nl80211_send_sched_scan_msg(struct sk_buff *msg, } void nl80211_send_scan_start(struct cfg80211_registered_device *rdev, - struct net_device *netdev) + struct wireless_dev *wdev) { struct sk_buff *msg; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return; - if 
(nl80211_send_scan_msg(msg, rdev, netdev, 0, 0, 0, + if (nl80211_send_scan_msg(msg, rdev, wdev, 0, 0, 0, NL80211_CMD_TRIGGER_SCAN) < 0) { nlmsg_free(msg); return; @@ -7142,7 +7586,7 @@ void nl80211_send_scan_start(struct cfg80211_registered_device *rdev, } void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, - struct net_device *netdev) + struct wireless_dev *wdev) { struct sk_buff *msg; @@ -7150,7 +7594,7 @@ void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, if (!msg) return; - if (nl80211_send_scan_msg(msg, rdev, netdev, 0, 0, 0, + if (nl80211_send_scan_msg(msg, rdev, wdev, 0, 0, 0, NL80211_CMD_NEW_SCAN_RESULTS) < 0) { nlmsg_free(msg); return; @@ -7161,7 +7605,7 @@ void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, } void nl80211_send_scan_aborted(struct cfg80211_registered_device *rdev, - struct net_device *netdev) + struct wireless_dev *wdev) { struct sk_buff *msg; @@ -7169,7 +7613,7 @@ void nl80211_send_scan_aborted(struct cfg80211_registered_device *rdev, if (!msg) return; - if (nl80211_send_scan_msg(msg, rdev, netdev, 0, 0, 0, + if (nl80211_send_scan_msg(msg, rdev, wdev, 0, 0, 0, NL80211_CMD_SCAN_ABORTED) < 0) { nlmsg_free(msg); return; @@ -7203,7 +7647,7 @@ void nl80211_send_sched_scan(struct cfg80211_registered_device *rdev, { struct sk_buff *msg; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return; @@ -7419,7 +7863,7 @@ void nl80211_send_connect_result(struct cfg80211_registered_device *rdev, struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -7459,7 +7903,7 @@ void nl80211_send_roamed(struct cfg80211_registered_device *rdev, struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -7497,7 +7941,7 @@ void nl80211_send_disconnected(struct cfg80211_registered_device *rdev, struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); if (!msg) return; @@ -7692,7 +8136,7 @@ nla_put_failure: static void nl80211_send_remain_on_chan_event( int cmd, struct cfg80211_registered_device *rdev, - struct net_device *netdev, u64 cookie, + struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, gfp_t gfp) @@ -7711,7 +8155,9 @@ static void nl80211_send_remain_on_chan_event( } if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || - nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) || + (wdev->netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX, + wdev->netdev->ifindex)) || + nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) || nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, chan->center_freq) || nla_put_u32(msg, NL80211_ATTR_WIPHY_CHANNEL_TYPE, channel_type) || nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie)) @@ -7733,23 +8179,24 @@ static void nl80211_send_remain_on_chan_event( } void nl80211_send_remain_on_channel(struct cfg80211_registered_device *rdev, - struct net_device *netdev, u64 cookie, + struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, gfp_t gfp) { nl80211_send_remain_on_chan_event(NL80211_CMD_REMAIN_ON_CHANNEL, - rdev, netdev, cookie, chan, + rdev, wdev, cookie, chan, channel_type, duration, gfp); } void nl80211_send_remain_on_channel_cancel( - struct cfg80211_registered_device 
*rdev, struct net_device *netdev, + struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, gfp_t gfp) { nl80211_send_remain_on_chan_event(NL80211_CMD_CANCEL_REMAIN_ON_CHANNEL, - rdev, netdev, cookie, chan, + rdev, wdev, cookie, chan, channel_type, 0, gfp); } @@ -7759,7 +8206,7 @@ void nl80211_send_sta_event(struct cfg80211_registered_device *rdev, { struct sk_buff *msg; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -7780,7 +8227,7 @@ void nl80211_send_sta_del_event(struct cfg80211_registered_device *rdev, struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -7863,10 +8310,11 @@ bool nl80211_unexpected_4addr_frame(struct net_device *dev, } int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, - struct net_device *netdev, u32 nlpid, + struct wireless_dev *wdev, u32 nlpid, int freq, int sig_dbm, const u8 *buf, size_t len, gfp_t gfp) { + struct net_device *netdev = wdev->netdev; struct sk_buff *msg; void *hdr; @@ -7881,7 +8329,8 @@ int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, } if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || - nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) || + (netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX, + netdev->ifindex)) || nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, freq) || (sig_dbm && nla_put_u32(msg, NL80211_ATTR_RX_SIGNAL_DBM, sig_dbm)) || @@ -7899,10 +8348,11 @@ int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, } void nl80211_send_mgmt_tx_status(struct cfg80211_registered_device *rdev, - struct net_device *netdev, u64 cookie, + struct wireless_dev *wdev, u64 cookie, const u8 *buf, size_t len, bool ack, gfp_t gfp) { + struct net_device *netdev = wdev->netdev; struct sk_buff *msg; void *hdr; @@ -7917,7 +8367,8 @@ void nl80211_send_mgmt_tx_status(struct cfg80211_registered_device *rdev, } if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || - nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) || + (netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX, + netdev->ifindex)) || nla_put(msg, NL80211_ATTR_FRAME, len, buf) || nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie) || (ack && nla_put_flag(msg, NL80211_ATTR_ACK))) @@ -7943,7 +8394,7 @@ nl80211_send_cqm_rssi_notify(struct cfg80211_registered_device *rdev, struct nlattr *pinfoattr; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -7986,7 +8437,7 @@ void nl80211_gtk_rekey_notify(struct cfg80211_registered_device *rdev, struct nlattr *rekey_attr; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -8030,7 +8481,7 @@ void nl80211_pmksa_candidate_notify(struct cfg80211_registered_device *rdev, struct nlattr *attr; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -8074,7 +8525,7 @@ void nl80211_ch_switch_notify(struct cfg80211_registered_device *rdev, struct sk_buff *msg; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -8101,6 +8552,56 @@ void nl80211_ch_switch_notify(struct cfg80211_registered_device *rdev, } void +nl80211_send_cqm_txe_notify(struct cfg80211_registered_device *rdev, + struct net_device *netdev, const u8 *peer, + u32 num_packets, u32 rate, 
u32 intvl, gfp_t gfp) +{ + struct sk_buff *msg; + struct nlattr *pinfoattr; + void *hdr; + + msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + if (!msg) + return; + + hdr = nl80211hdr_put(msg, 0, 0, 0, NL80211_CMD_NOTIFY_CQM); + if (!hdr) { + nlmsg_free(msg); + return; + } + + if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) || + nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) || + nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, peer)) + goto nla_put_failure; + + pinfoattr = nla_nest_start(msg, NL80211_ATTR_CQM); + if (!pinfoattr) + goto nla_put_failure; + + if (nla_put_u32(msg, NL80211_ATTR_CQM_TXE_PKTS, num_packets)) + goto nla_put_failure; + + if (nla_put_u32(msg, NL80211_ATTR_CQM_TXE_RATE, rate)) + goto nla_put_failure; + + if (nla_put_u32(msg, NL80211_ATTR_CQM_TXE_INTVL, intvl)) + goto nla_put_failure; + + nla_nest_end(msg, pinfoattr); + + genlmsg_end(msg, hdr); + + genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0, + nl80211_mlme_mcgrp.id, gfp); + return; + + nla_put_failure: + genlmsg_cancel(msg, hdr); + nlmsg_free(msg); +} + +void nl80211_send_cqm_pktloss_notify(struct cfg80211_registered_device *rdev, struct net_device *netdev, const u8 *peer, u32 num_packets, gfp_t gfp) @@ -8109,7 +8610,7 @@ nl80211_send_cqm_pktloss_notify(struct cfg80211_registered_device *rdev, struct nlattr *pinfoattr; void *hdr; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -8153,7 +8654,7 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr, void *hdr; int err; - msg = nlmsg_new(NLMSG_GOODSIZE, gfp); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, gfp); if (!msg) return; @@ -8241,7 +8742,7 @@ static int nl80211_netlink_notify(struct notifier_block * nb, rcu_read_lock(); list_for_each_entry_rcu(rdev, &cfg80211_rdev_list, list) { - list_for_each_entry_rcu(wdev, &rdev->netdev_list, list) + list_for_each_entry_rcu(wdev, &rdev->wdev_list, list) cfg80211_mlme_unregister_socket(wdev, notify->pid); if (rdev->ap_beacons_nlpid == notify->pid) rdev->ap_beacons_nlpid = 0; diff --git a/net/wireless/nl80211.h b/net/wireless/nl80211.h index 01a1122c3b33..9f2616fffb40 100644 --- a/net/wireless/nl80211.h +++ b/net/wireless/nl80211.h @@ -7,11 +7,11 @@ int nl80211_init(void); void nl80211_exit(void); void nl80211_notify_dev_rename(struct cfg80211_registered_device *rdev); void nl80211_send_scan_start(struct cfg80211_registered_device *rdev, - struct net_device *netdev); + struct wireless_dev *wdev); void nl80211_send_scan_done(struct cfg80211_registered_device *rdev, - struct net_device *netdev); + struct wireless_dev *wdev); void nl80211_send_scan_aborted(struct cfg80211_registered_device *rdev, - struct net_device *netdev); + struct wireless_dev *wdev); void nl80211_send_sched_scan(struct cfg80211_registered_device *rdev, struct net_device *netdev, u32 cmd); void nl80211_send_sched_scan_results(struct cfg80211_registered_device *rdev, @@ -74,13 +74,13 @@ void nl80211_send_ibss_bssid(struct cfg80211_registered_device *rdev, gfp_t gfp); void nl80211_send_remain_on_channel(struct cfg80211_registered_device *rdev, - struct net_device *netdev, - u64 cookie, + struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type channel_type, unsigned int duration, gfp_t gfp); void nl80211_send_remain_on_channel_cancel( - struct cfg80211_registered_device *rdev, struct net_device *netdev, + struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, u64 cookie, struct ieee80211_channel *chan, enum nl80211_channel_type 
channel_type, gfp_t gfp); @@ -92,11 +92,11 @@ void nl80211_send_sta_del_event(struct cfg80211_registered_device *rdev, gfp_t gfp); int nl80211_send_mgmt(struct cfg80211_registered_device *rdev, - struct net_device *netdev, u32 nlpid, + struct wireless_dev *wdev, u32 nlpid, int freq, int sig_dbm, const u8 *buf, size_t len, gfp_t gfp); void nl80211_send_mgmt_tx_status(struct cfg80211_registered_device *rdev, - struct net_device *netdev, u64 cookie, + struct wireless_dev *wdev, u64 cookie, const u8 *buf, size_t len, bool ack, gfp_t gfp); @@ -110,6 +110,11 @@ nl80211_send_cqm_pktloss_notify(struct cfg80211_registered_device *rdev, struct net_device *netdev, const u8 *peer, u32 num_packets, gfp_t gfp); +void +nl80211_send_cqm_txe_notify(struct cfg80211_registered_device *rdev, + struct net_device *netdev, const u8 *peer, + u32 num_packets, u32 rate, u32 intvl, gfp_t gfp); + void nl80211_gtk_rekey_notify(struct cfg80211_registered_device *rdev, struct net_device *netdev, const u8 *bssid, const u8 *replay_ctr, gfp_t gfp); diff --git a/net/wireless/reg.c b/net/wireless/reg.c index baf5704740ee..2303ee73b50a 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -97,9 +97,16 @@ const struct ieee80211_regdomain *cfg80211_regdomain; * - cfg80211_world_regdom * - cfg80211_regdom * - last_request + * - reg_num_devs_support_basehint */ static DEFINE_MUTEX(reg_mutex); +/* + * Number of devices that registered to the core + * that support cellular base station regulatory hints + */ +static int reg_num_devs_support_basehint; + static inline void assert_reg_lock(void) { lockdep_assert_held(®_mutex); @@ -129,7 +136,7 @@ static DECLARE_DELAYED_WORK(reg_timeout, reg_timeout_work); /* We keep a static world regulatory domain in case of the absence of CRDA */ static const struct ieee80211_regdomain world_regdom = { - .n_reg_rules = 5, + .n_reg_rules = 6, .alpha2 = "00", .reg_rules = { /* IEEE 802.11b/g, channels 1..11 */ @@ -156,6 +163,9 @@ static const struct ieee80211_regdomain world_regdom = { REG_RULE(5745-10, 5825+10, 40, 6, 20, NL80211_RRF_PASSIVE_SCAN | NL80211_RRF_NO_IBSS), + + /* IEEE 802.11ad (60gHz), channels 1..3 */ + REG_RULE(56160+2160*1-1080, 56160+2160*3+1080, 2160, 0, 0, 0), } }; @@ -908,6 +918,61 @@ static void handle_band(struct wiphy *wiphy, handle_channel(wiphy, initiator, band, i); } +static bool reg_request_cell_base(struct regulatory_request *request) +{ + if (request->initiator != NL80211_REGDOM_SET_BY_USER) + return false; + if (request->user_reg_hint_type != NL80211_USER_REG_HINT_CELL_BASE) + return false; + return true; +} + +bool reg_last_request_cell_base(void) +{ + bool val; + assert_cfg80211_lock(); + + mutex_lock(®_mutex); + val = reg_request_cell_base(last_request); + mutex_unlock(®_mutex); + return val; +} + +#ifdef CONFIG_CFG80211_CERTIFICATION_ONUS + +/* Core specific check */ +static int reg_ignore_cell_hint(struct regulatory_request *pending_request) +{ + if (!reg_num_devs_support_basehint) + return -EOPNOTSUPP; + + if (reg_request_cell_base(last_request)) { + if (!regdom_changes(pending_request->alpha2)) + return -EALREADY; + return 0; + } + return 0; +} + +/* Device specific check */ +static bool reg_dev_ignore_cell_hint(struct wiphy *wiphy) +{ + if (!(wiphy->features & NL80211_FEATURE_CELL_BASE_REG_HINTS)) + return true; + return false; +} +#else +static int reg_ignore_cell_hint(struct regulatory_request *pending_request) +{ + return -EOPNOTSUPP; +} +static int reg_dev_ignore_cell_hint(struct wiphy *wiphy) +{ + return true; +} +#endif + + static bool 
ignore_reg_update(struct wiphy *wiphy, enum nl80211_reg_initiator initiator) { @@ -941,6 +1006,9 @@ static bool ignore_reg_update(struct wiphy *wiphy, return true; } + if (reg_request_cell_base(last_request)) + return reg_dev_ignore_cell_hint(wiphy); + return false; } @@ -1166,14 +1234,6 @@ static void wiphy_update_regulatory(struct wiphy *wiphy, wiphy->reg_notifier(wiphy, last_request); } -void regulatory_update(struct wiphy *wiphy, - enum nl80211_reg_initiator setby) -{ - mutex_lock(®_mutex); - wiphy_update_regulatory(wiphy, setby); - mutex_unlock(®_mutex); -} - static void update_all_wiphy_regulatory(enum nl80211_reg_initiator initiator) { struct cfg80211_registered_device *rdev; @@ -1304,6 +1364,13 @@ static int ignore_request(struct wiphy *wiphy, return 0; case NL80211_REGDOM_SET_BY_COUNTRY_IE: + if (reg_request_cell_base(last_request)) { + /* Trust a Cell base station over the AP's country IE */ + if (regdom_changes(pending_request->alpha2)) + return -EOPNOTSUPP; + return -EALREADY; + } + last_wiphy = wiphy_idx_to_wiphy(last_request->wiphy_idx); if (unlikely(!is_an_alpha2(pending_request->alpha2))) @@ -1348,6 +1415,12 @@ static int ignore_request(struct wiphy *wiphy, return REG_INTERSECT; case NL80211_REGDOM_SET_BY_USER: + if (reg_request_cell_base(pending_request)) + return reg_ignore_cell_hint(pending_request); + + if (reg_request_cell_base(last_request)) + return -EOPNOTSUPP; + if (last_request->initiator == NL80211_REGDOM_SET_BY_COUNTRY_IE) return REG_INTERSECT; /* @@ -1637,7 +1710,8 @@ static int regulatory_hint_core(const char *alpha2) } /* User hints */ -int regulatory_hint_user(const char *alpha2) +int regulatory_hint_user(const char *alpha2, + enum nl80211_user_reg_hint_type user_reg_hint_type) { struct regulatory_request *request; @@ -1651,6 +1725,7 @@ int regulatory_hint_user(const char *alpha2) request->alpha2[0] = alpha2[0]; request->alpha2[1] = alpha2[1]; request->initiator = NL80211_REGDOM_SET_BY_USER; + request->user_reg_hint_type = user_reg_hint_type; queue_regulatory_request(request); @@ -1903,7 +1978,7 @@ static void restore_regulatory_settings(bool reset_user) * settings, user regulatory settings takes precedence. 
*/ if (is_an_alpha2(alpha2)) - regulatory_hint_user(user_alpha2); + regulatory_hint_user(user_alpha2, NL80211_USER_REG_HINT_USER); if (list_empty(&tmp_reg_req_list)) return; @@ -2078,9 +2153,16 @@ static void print_regdomain(const struct ieee80211_regdomain *rd) else { if (is_unknown_alpha2(rd->alpha2)) pr_info("Regulatory domain changed to driver built-in settings (unknown country)\n"); - else - pr_info("Regulatory domain changed to country: %c%c\n", - rd->alpha2[0], rd->alpha2[1]); + else { + if (reg_request_cell_base(last_request)) + pr_info("Regulatory domain changed " + "to country: %c%c by Cell Station\n", + rd->alpha2[0], rd->alpha2[1]); + else + pr_info("Regulatory domain changed " + "to country: %c%c\n", + rd->alpha2[0], rd->alpha2[1]); + } } print_dfs_region(rd->dfs_region); print_rd_rules(rd); @@ -2125,7 +2207,7 @@ static int __set_regdom(const struct ieee80211_regdomain *rd) * checking if the alpha2 changes if CRDA was already called */ if (!regdom_changes(rd->alpha2)) - return -EINVAL; + return -EALREADY; } /* @@ -2245,6 +2327,9 @@ int set_regdom(const struct ieee80211_regdomain *rd) /* Note that this doesn't update the wiphys, this is done below */ r = __set_regdom(rd); if (r) { + if (r == -EALREADY) + reg_set_request_processed(); + kfree(rd); mutex_unlock(®_mutex); return r; @@ -2287,8 +2372,22 @@ int reg_device_uevent(struct device *dev, struct kobj_uevent_env *env) } #endif /* CONFIG_HOTPLUG */ +void wiphy_regulatory_register(struct wiphy *wiphy) +{ + assert_cfg80211_lock(); + + mutex_lock(®_mutex); + + if (!reg_dev_ignore_cell_hint(wiphy)) + reg_num_devs_support_basehint++; + + wiphy_update_regulatory(wiphy, NL80211_REGDOM_SET_BY_CORE); + + mutex_unlock(®_mutex); +} + /* Caller must hold cfg80211_mutex */ -void reg_device_remove(struct wiphy *wiphy) +void wiphy_regulatory_deregister(struct wiphy *wiphy) { struct wiphy *request_wiphy = NULL; @@ -2296,6 +2395,9 @@ void reg_device_remove(struct wiphy *wiphy) mutex_lock(®_mutex); + if (!reg_dev_ignore_cell_hint(wiphy)) + reg_num_devs_support_basehint--; + kfree(wiphy->regd); if (last_request) @@ -2361,7 +2463,8 @@ int __init regulatory_init(void) * as a user hint. 
*/ if (!is_world_regdom(ieee80211_regdom)) - regulatory_hint_user(ieee80211_regdom); + regulatory_hint_user(ieee80211_regdom, + NL80211_USER_REG_HINT_USER); return 0; } diff --git a/net/wireless/reg.h b/net/wireless/reg.h index e2aaaf525a22..f023c8a31c60 100644 --- a/net/wireless/reg.h +++ b/net/wireless/reg.h @@ -22,17 +22,19 @@ bool is_world_regdom(const char *alpha2); bool reg_is_valid_request(const char *alpha2); bool reg_supported_dfs_region(u8 dfs_region); -int regulatory_hint_user(const char *alpha2); +int regulatory_hint_user(const char *alpha2, + enum nl80211_user_reg_hint_type user_reg_hint_type); int reg_device_uevent(struct device *dev, struct kobj_uevent_env *env); -void reg_device_remove(struct wiphy *wiphy); +void wiphy_regulatory_register(struct wiphy *wiphy); +void wiphy_regulatory_deregister(struct wiphy *wiphy); int __init regulatory_init(void); void regulatory_exit(void); int set_regdom(const struct ieee80211_regdomain *rd); -void regulatory_update(struct wiphy *wiphy, enum nl80211_reg_initiator setby); +bool reg_last_request_cell_base(void); /** * regulatory_hint_found_beacon - hints a beacon was found on a channel diff --git a/net/wireless/scan.c b/net/wireless/scan.c index af2b1caa37fa..848523a2b22f 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -23,7 +23,7 @@ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, bool leak) { struct cfg80211_scan_request *request; - struct net_device *dev; + struct wireless_dev *wdev; #ifdef CONFIG_CFG80211_WEXT union iwreq_data wrqu; #endif @@ -35,29 +35,31 @@ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev, bool leak) if (!request) return; - dev = request->dev; + wdev = request->wdev; /* * This must be before sending the other events! * Otherwise, wpa_supplicant gets completely confused with * wext events. 
*/ - cfg80211_sme_scan_done(dev); + if (wdev->netdev) + cfg80211_sme_scan_done(wdev->netdev); if (request->aborted) - nl80211_send_scan_aborted(rdev, dev); + nl80211_send_scan_aborted(rdev, wdev); else - nl80211_send_scan_done(rdev, dev); + nl80211_send_scan_done(rdev, wdev); #ifdef CONFIG_CFG80211_WEXT - if (!request->aborted) { + if (wdev->netdev && !request->aborted) { memset(&wrqu, 0, sizeof(wrqu)); - wireless_send_event(dev, SIOCGIWSCAN, &wrqu, NULL); + wireless_send_event(wdev->netdev, SIOCGIWSCAN, &wrqu, NULL); } #endif - dev_put(dev); + if (wdev->netdev) + dev_put(wdev->netdev); rdev->scan_req = NULL; @@ -955,7 +957,7 @@ int cfg80211_wext_siwscan(struct net_device *dev, } creq->wiphy = wiphy; - creq->dev = dev; + creq->wdev = dev->ieee80211_ptr; /* SSIDs come after channels */ creq->ssids = (void *)&creq->channels[n_channels]; creq->n_channels = n_channels; @@ -1024,12 +1026,12 @@ int cfg80211_wext_siwscan(struct net_device *dev, creq->rates[i] = (1 << wiphy->bands[i]->n_bitrates) - 1; rdev->scan_req = creq; - err = rdev->ops->scan(wiphy, dev, creq); + err = rdev->ops->scan(wiphy, creq); if (err) { rdev->scan_req = NULL; /* creq will be freed below */ } else { - nl80211_send_scan_start(rdev, dev); + nl80211_send_scan_start(rdev, dev->ieee80211_ptr); /* creq now owned by driver */ creq = NULL; dev_hold(dev); diff --git a/net/wireless/sme.c b/net/wireless/sme.c index f7e937ff8978..6f39cb808302 100644 --- a/net/wireless/sme.c +++ b/net/wireless/sme.c @@ -51,7 +51,7 @@ static bool cfg80211_is_all_idle(void) */ list_for_each_entry(rdev, &cfg80211_rdev_list, list) { cfg80211_lock_rdev(rdev); - list_for_each_entry(wdev, &rdev->netdev_list, list) { + list_for_each_entry(wdev, &rdev->wdev_list, list) { wdev_lock(wdev); if (wdev->sme_state != CFG80211_SME_IDLE) is_all_idle = false; @@ -136,15 +136,15 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev) wdev->conn->params.ssid_len); request->ssids[0].ssid_len = wdev->conn->params.ssid_len; - request->dev = wdev->netdev; + request->wdev = wdev; request->wiphy = &rdev->wiphy; rdev->scan_req = request; - err = rdev->ops->scan(wdev->wiphy, wdev->netdev, request); + err = rdev->ops->scan(wdev->wiphy, request); if (!err) { wdev->conn->state = CFG80211_CONN_SCANNING; - nl80211_send_scan_start(rdev, wdev->netdev); + nl80211_send_scan_start(rdev, wdev); dev_hold(wdev->netdev); } else { rdev->scan_req = NULL; @@ -221,7 +221,7 @@ void cfg80211_conn_work(struct work_struct *work) cfg80211_lock_rdev(rdev); mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev, &rdev->netdev_list, list) { + list_for_each_entry(wdev, &rdev->wdev_list, list) { wdev_lock(wdev); if (!netif_running(wdev->netdev)) { wdev_unlock(wdev); diff --git a/net/wireless/util.c b/net/wireless/util.c index 316cfd00914f..26f8cd30f712 100644 --- a/net/wireless/util.c +++ b/net/wireless/util.c @@ -35,19 +35,29 @@ int ieee80211_channel_to_frequency(int chan, enum ieee80211_band band) { /* see 802.11 17.3.8.3.2 and Annex J * there are overlapping channel numbers in 5GHz and 2GHz bands */ - if (band == IEEE80211_BAND_5GHZ) { - if (chan >= 182 && chan <= 196) - return 4000 + chan * 5; - else - return 5000 + chan * 5; - } else { /* IEEE80211_BAND_2GHZ */ + if (chan <= 0) + return 0; /* not supported */ + switch (band) { + case IEEE80211_BAND_2GHZ: if (chan == 14) return 2484; else if (chan < 14) return 2407 + chan * 5; + break; + case IEEE80211_BAND_5GHZ: + if (chan >= 182 && chan <= 196) + return 4000 + chan * 5; else - return 0; /* not supported */ + return 5000 + chan * 5; + break; + 
case IEEE80211_BAND_60GHZ: + if (chan < 5) + return 56160 + chan * 2160; + break; + default: + ; } + return 0; /* not supported */ } EXPORT_SYMBOL(ieee80211_channel_to_frequency); @@ -60,8 +70,12 @@ int ieee80211_frequency_to_channel(int freq) return (freq - 2407) / 5; else if (freq >= 4910 && freq <= 4980) return (freq - 4000) / 5; - else + else if (freq <= 45000) /* DMG band lower limit */ return (freq - 5000) / 5; + else if (freq >= 58320 && freq <= 64800) + return (freq - 56160) / 2160; + else + return 0; } EXPORT_SYMBOL(ieee80211_frequency_to_channel); @@ -137,6 +151,11 @@ static void set_mandatory_flags_band(struct ieee80211_supported_band *sband, } WARN_ON(want != 0 && want != 3 && want != 6); break; + case IEEE80211_BAND_60GHZ: + /* check for mandatory HT MCS 1..4 */ + WARN_ON(!sband->ht_cap.ht_supported); + WARN_ON((sband->ht_cap.mcs.rx_mask[0] & 0x1e) != 0x1e); + break; case IEEE80211_NUM_BANDS: WARN_ON(1); break; @@ -774,7 +793,7 @@ void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev) mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev, &rdev->netdev_list, list) + list_for_each_entry(wdev, &rdev->wdev_list, list) cfg80211_process_wdev_events(wdev); mutex_unlock(&rdev->devlist_mtx); @@ -805,8 +824,10 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, return -EBUSY; if (ntype != otype && netif_running(dev)) { + mutex_lock(&rdev->devlist_mtx); err = cfg80211_can_change_interface(rdev, dev->ieee80211_ptr, ntype); + mutex_unlock(&rdev->devlist_mtx); if (err) return err; @@ -814,6 +835,9 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, dev->ieee80211_ptr->mesh_id_up_len = 0; switch (otype) { + case NL80211_IFTYPE_AP: + cfg80211_stop_ap(rdev, dev); + break; case NL80211_IFTYPE_ADHOC: cfg80211_leave_ibss(rdev, dev, false); break; @@ -868,15 +892,69 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, } } + if (!err && ntype != otype && netif_running(dev)) { + cfg80211_update_iface_num(rdev, ntype, 1); + cfg80211_update_iface_num(rdev, otype, -1); + } + return err; } -u16 cfg80211_calculate_bitrate(struct rate_info *rate) +static u32 cfg80211_calculate_bitrate_60g(struct rate_info *rate) +{ + static const u32 __mcs2bitrate[] = { + /* control PHY */ + [0] = 275, + /* SC PHY */ + [1] = 3850, + [2] = 7700, + [3] = 9625, + [4] = 11550, + [5] = 12512, /* 1251.25 mbps */ + [6] = 15400, + [7] = 19250, + [8] = 23100, + [9] = 25025, + [10] = 30800, + [11] = 38500, + [12] = 46200, + /* OFDM PHY */ + [13] = 6930, + [14] = 8662, /* 866.25 mbps */ + [15] = 13860, + [16] = 17325, + [17] = 20790, + [18] = 27720, + [19] = 34650, + [20] = 41580, + [21] = 45045, + [22] = 51975, + [23] = 62370, + [24] = 67568, /* 6756.75 mbps */ + /* LP-SC PHY */ + [25] = 6260, + [26] = 8340, + [27] = 11120, + [28] = 12510, + [29] = 16680, + [30] = 22240, + [31] = 25030, + }; + + if (WARN_ON_ONCE(rate->mcs >= ARRAY_SIZE(__mcs2bitrate))) + return 0; + + return __mcs2bitrate[rate->mcs]; +} + +u32 cfg80211_calculate_bitrate(struct rate_info *rate) { int modulation, streams, bitrate; if (!(rate->flags & RATE_INFO_FLAGS_MCS)) return rate->legacy; + if (rate->flags & RATE_INFO_FLAGS_60G) + return cfg80211_calculate_bitrate_60g(rate); /* the formula below does only work for MCS values smaller than 32 */ if (WARN_ON_ONCE(rate->mcs >= 32)) @@ -916,7 +994,7 @@ int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev, mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev, &rdev->netdev_list, list) { + list_for_each_entry(wdev, 
&rdev->wdev_list, list) { if (!wdev->beacon_interval) continue; if (wdev->beacon_interval != beacon_int) { @@ -930,28 +1008,49 @@ int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev, return res; } -int cfg80211_can_change_interface(struct cfg80211_registered_device *rdev, - struct wireless_dev *wdev, - enum nl80211_iftype iftype) +int cfg80211_can_use_iftype_chan(struct cfg80211_registered_device *rdev, + struct wireless_dev *wdev, + enum nl80211_iftype iftype, + struct ieee80211_channel *chan, + enum cfg80211_chan_mode chanmode) { struct wireless_dev *wdev_iter; u32 used_iftypes = BIT(iftype); int num[NUM_NL80211_IFTYPES]; + struct ieee80211_channel + *used_channels[CFG80211_MAX_NUM_DIFFERENT_CHANNELS]; + struct ieee80211_channel *ch; + enum cfg80211_chan_mode chmode; + int num_different_channels = 0; int total = 1; int i, j; ASSERT_RTNL(); + lockdep_assert_held(&rdev->devlist_mtx); /* Always allow software iftypes */ if (rdev->wiphy.software_iftypes & BIT(iftype)) return 0; memset(num, 0, sizeof(num)); + memset(used_channels, 0, sizeof(used_channels)); num[iftype] = 1; - mutex_lock(&rdev->devlist_mtx); - list_for_each_entry(wdev_iter, &rdev->netdev_list, list) { + switch (chanmode) { + case CHAN_MODE_UNDEFINED: + break; + case CHAN_MODE_SHARED: + WARN_ON(!chan); + used_channels[0] = chan; + num_different_channels++; + break; + case CHAN_MODE_EXCLUSIVE: + num_different_channels++; + break; + } + + list_for_each_entry(wdev_iter, &rdev->wdev_list, list) { if (wdev_iter == wdev) continue; if (!netif_running(wdev_iter->netdev)) @@ -960,11 +1059,42 @@ int cfg80211_can_change_interface(struct cfg80211_registered_device *rdev, if (rdev->wiphy.software_iftypes & BIT(wdev_iter->iftype)) continue; + /* + * We may be holding the "wdev" mutex, but now need to lock + * wdev_iter. This is OK because once we get here wdev_iter + * is not wdev (tested above), but we need to use the nested + * locking for lockdep. 
+ */ + mutex_lock_nested(&wdev_iter->mtx, 1); + __acquire(wdev_iter->mtx); + cfg80211_get_chan_state(wdev_iter, &ch, &chmode); + wdev_unlock(wdev_iter); + + switch (chmode) { + case CHAN_MODE_UNDEFINED: + break; + case CHAN_MODE_SHARED: + for (i = 0; i < CFG80211_MAX_NUM_DIFFERENT_CHANNELS; i++) + if (!used_channels[i] || used_channels[i] == ch) + break; + + if (i == CFG80211_MAX_NUM_DIFFERENT_CHANNELS) + return -EBUSY; + + if (used_channels[i] == NULL) { + used_channels[i] = ch; + num_different_channels++; + } + break; + case CHAN_MODE_EXCLUSIVE: + num_different_channels++; + break; + } + num[wdev_iter->iftype]++; total++; used_iftypes |= BIT(wdev_iter->iftype); } - mutex_unlock(&rdev->devlist_mtx); if (total == 1) return 0; @@ -976,12 +1106,15 @@ int cfg80211_can_change_interface(struct cfg80211_registered_device *rdev, c = &rdev->wiphy.iface_combinations[i]; + if (total > c->max_interfaces) + continue; + if (num_different_channels > c->num_different_channels) + continue; + limits = kmemdup(c->limits, sizeof(limits[0]) * c->n_limits, GFP_KERNEL); if (!limits) return -ENOMEM; - if (total > c->max_interfaces) - goto cont; for (iftype = 0; iftype < NUM_NL80211_IFTYPES; iftype++) { if (rdev->wiphy.software_iftypes & BIT(iftype)) diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c index 6a6181a673ca..494379eb464f 100644 --- a/net/wireless/wext-compat.c +++ b/net/wireless/wext-compat.c @@ -796,7 +796,15 @@ static int cfg80211_wext_siwfreq(struct net_device *dev, case NL80211_IFTYPE_ADHOC: return cfg80211_ibss_wext_siwfreq(dev, info, wextfreq, extra); case NL80211_IFTYPE_MONITOR: - case NL80211_IFTYPE_WDS: + freq = cfg80211_wext_freq(wdev->wiphy, wextfreq); + if (freq < 0) + return freq; + if (freq == 0) + return -EINVAL; + mutex_lock(&rdev->devlist_mtx); + err = cfg80211_set_monitor_channel(rdev, freq, NL80211_CHAN_NO_HT); + mutex_unlock(&rdev->devlist_mtx); + return err; case NL80211_IFTYPE_MESH_POINT: freq = cfg80211_wext_freq(wdev->wiphy, wextfreq); if (freq < 0) @@ -804,9 +812,8 @@ static int cfg80211_wext_siwfreq(struct net_device *dev, if (freq == 0) return -EINVAL; mutex_lock(&rdev->devlist_mtx); - wdev_lock(wdev); - err = cfg80211_set_freq(rdev, wdev, freq, NL80211_CHAN_NO_HT); - wdev_unlock(wdev); + err = cfg80211_set_mesh_freq(rdev, wdev, freq, + NL80211_CHAN_NO_HT); mutex_unlock(&rdev->devlist_mtx); return err; default: @@ -832,18 +839,14 @@ static int cfg80211_wext_giwfreq(struct net_device *dev, if (!rdev->ops->get_channel) return -EINVAL; - chan = rdev->ops->get_channel(wdev->wiphy, &channel_type); + chan = rdev->ops->get_channel(wdev->wiphy, wdev, &channel_type); if (!chan) return -EINVAL; freq->m = chan->center_freq; freq->e = 6; return 0; default: - if (!wdev->channel) - return -EINVAL; - freq->m = wdev->channel->center_freq; - freq->e = 6; - return 0; + return -EINVAL; } } diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c index 7decbd357d51..1f773f668d1a 100644 --- a/net/wireless/wext-sme.c +++ b/net/wireless/wext-sme.c @@ -111,9 +111,15 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev, wdev->wext.connect.channel = chan; - /* SSID is not set, we just want to switch channel */ + /* + * SSID is not set, we just want to switch monitor channel, + * this is really just backward compatibility, if the SSID + * is set then we use the channel to select the BSS to use + * to connect to instead. If we were connected on another + * channel we disconnected above and reconnect below. 
+ */ if (chan && !wdev->wext.connect.ssid_len) { - err = cfg80211_set_freq(rdev, wdev, freq, NL80211_CHAN_NO_HT); + err = cfg80211_set_monitor_channel(rdev, freq, NL80211_CHAN_NO_HT); goto out; } diff --git a/net/x25/x25_route.c b/net/x25/x25_route.c index cf6366270054..277c8d2448d6 100644 --- a/net/x25/x25_route.c +++ b/net/x25/x25_route.c @@ -66,7 +66,7 @@ out: /** * __x25_remove_route - remove route from x25_route_list - * @rt - route to remove + * @rt: route to remove * * Remove route from x25_route_list. If it was there. * Caller must hold x25_route_list_lock. diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index ccfbd328a69d..c5a5165a5927 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -1350,11 +1350,12 @@ static inline struct xfrm_dst *xfrm_alloc_dst(struct net *net, int family) default: BUG(); } - xdst = dst_alloc(dst_ops, NULL, 0, 0, 0); + xdst = dst_alloc(dst_ops, NULL, 0, DST_OBSOLETE_NONE, 0); if (likely(xdst)) { - memset(&xdst->u.rt6.rt6i_table, 0, - sizeof(*xdst) - sizeof(struct dst_entry)); + struct dst_entry *dst = &xdst->u.dst; + + memset(dst + 1, 0, sizeof(*xdst) - sizeof(*dst)); xdst->flo.ops = &xfrm_bundle_fc_ops; } else xdst = ERR_PTR(-ENOBUFS); @@ -1476,7 +1477,7 @@ static struct dst_entry *xfrm_bundle_create(struct xfrm_policy *policy, dst1->xfrm = xfrm[i]; xdst->xfrm_genid = xfrm[i]->genid; - dst1->obsolete = -1; + dst1->obsolete = DST_OBSOLETE_FORCE_CHK; dst1->flags |= DST_HOST; dst1->lastuse = now; @@ -1500,9 +1501,6 @@ static struct dst_entry *xfrm_bundle_create(struct xfrm_policy *policy, if (!dev) goto free_dst; - /* Copy neighbour for reachability confirmation */ - dst_set_neighbour(dst0, neigh_clone(dst_get_neighbour_noref(dst))); - xfrm_init_path((struct xfrm_dst *)dst0, dst, nfheader_len); xfrm_init_pmtu(dst_prev); @@ -2221,12 +2219,13 @@ EXPORT_SYMBOL(__xfrm_route_forward); static struct dst_entry *xfrm_dst_check(struct dst_entry *dst, u32 cookie) { /* Code (such as __xfrm4_bundle_create()) sets dst->obsolete - * to "-1" to force all XFRM destinations to get validated by - * dst_ops->check on every use. We do this because when a - * normal route referenced by an XFRM dst is obsoleted we do - * not go looking around for all parent referencing XFRM dsts - * so that we can invalidate them. It is just too much work. - * Instead we make the checks here on every use. For example: + * to DST_OBSOLETE_FORCE_CHK to force all XFRM destinations to + * get validated by dst_ops->check on every use. We do this + * because when a normal route referenced by an XFRM dst is + * obsoleted we do not go looking around for all parent + * referencing XFRM dsts so that we can invalidate them. It + * is just too much work. Instead we make the checks here on + * every use. For example: * * XFRM dst A --> IPv4 dst X * @@ -2236,9 +2235,9 @@ static struct dst_entry *xfrm_dst_check(struct dst_entry *dst, u32 cookie) * stale_bundle() check. * * When a policy's bundle is pruned, we dst_free() the XFRM - * dst which causes it's ->obsolete field to be set to a - * positive non-zero integer. If an XFRM dst has been pruned - * like this, we want to force a new route lookup. + * dst which causes it's ->obsolete field to be set to + * DST_OBSOLETE_DEAD. If an XFRM dst has been pruned like + * this, we want to force a new route lookup. */ if (dst->obsolete < 0 && !stale_bundle(dst)) return dst; @@ -2404,9 +2403,11 @@ static unsigned int xfrm_mtu(const struct dst_entry *dst) return mtu ? 
: dst_mtu(dst->path); } -static struct neighbour *xfrm_neigh_lookup(const struct dst_entry *dst, const void *daddr) +static struct neighbour *xfrm_neigh_lookup(const struct dst_entry *dst, + struct sk_buff *skb, + const void *daddr) { - return dst_neigh_lookup(dst->path, daddr); + return dst->path->ops->neigh_lookup(dst, skb, daddr); } int xfrm_policy_register_afinfo(struct xfrm_policy_afinfo *afinfo) diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 44293b3fd6a1..e75d8e47f35c 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -754,58 +754,67 @@ static int copy_to_user_state_extra(struct xfrm_state *x, struct xfrm_usersa_info *p, struct sk_buff *skb) { - copy_to_user_state(x, p); - - if (x->coaddr && - nla_put(skb, XFRMA_COADDR, sizeof(*x->coaddr), x->coaddr)) - goto nla_put_failure; - - if (x->lastused && - nla_put_u64(skb, XFRMA_LASTUSED, x->lastused)) - goto nla_put_failure; - - if (x->aead && - nla_put(skb, XFRMA_ALG_AEAD, aead_len(x->aead), x->aead)) - goto nla_put_failure; - - if (x->aalg && - (copy_to_user_auth(x->aalg, skb) || - nla_put(skb, XFRMA_ALG_AUTH_TRUNC, - xfrm_alg_auth_len(x->aalg), x->aalg))) - goto nla_put_failure; - - if (x->ealg && - nla_put(skb, XFRMA_ALG_CRYPT, xfrm_alg_len(x->ealg), x->ealg)) - goto nla_put_failure; - - if (x->calg && - nla_put(skb, XFRMA_ALG_COMP, sizeof(*(x->calg)), x->calg)) - goto nla_put_failure; - - if (x->encap && - nla_put(skb, XFRMA_ENCAP, sizeof(*x->encap), x->encap)) - goto nla_put_failure; + int ret = 0; - if (x->tfcpad && - nla_put_u32(skb, XFRMA_TFCPAD, x->tfcpad)) - goto nla_put_failure; - - if (xfrm_mark_put(skb, &x->mark)) - goto nla_put_failure; - - if (x->replay_esn && - nla_put(skb, XFRMA_REPLAY_ESN_VAL, - xfrm_replay_state_esn_len(x->replay_esn), - x->replay_esn)) - goto nla_put_failure; - - if (x->security && copy_sec_ctx(x->security, skb)) - goto nla_put_failure; - - return 0; + copy_to_user_state(x, p); -nla_put_failure: - return -EMSGSIZE; + if (x->coaddr) { + ret = nla_put(skb, XFRMA_COADDR, sizeof(*x->coaddr), x->coaddr); + if (ret) + goto out; + } + if (x->lastused) { + ret = nla_put_u64(skb, XFRMA_LASTUSED, x->lastused); + if (ret) + goto out; + } + if (x->aead) { + ret = nla_put(skb, XFRMA_ALG_AEAD, aead_len(x->aead), x->aead); + if (ret) + goto out; + } + if (x->aalg) { + ret = copy_to_user_auth(x->aalg, skb); + if (!ret) + ret = nla_put(skb, XFRMA_ALG_AUTH_TRUNC, + xfrm_alg_auth_len(x->aalg), x->aalg); + if (ret) + goto out; + } + if (x->ealg) { + ret = nla_put(skb, XFRMA_ALG_CRYPT, xfrm_alg_len(x->ealg), x->ealg); + if (ret) + goto out; + } + if (x->calg) { + ret = nla_put(skb, XFRMA_ALG_COMP, sizeof(*(x->calg)), x->calg); + if (ret) + goto out; + } + if (x->encap) { + ret = nla_put(skb, XFRMA_ENCAP, sizeof(*x->encap), x->encap); + if (ret) + goto out; + } + if (x->tfcpad) { + ret = nla_put_u32(skb, XFRMA_TFCPAD, x->tfcpad); + if (ret) + goto out; + } + ret = xfrm_mark_put(skb, &x->mark); + if (ret) + goto out; + if (x->replay_esn) { + ret = nla_put(skb, XFRMA_REPLAY_ESN_VAL, + xfrm_replay_state_esn_len(x->replay_esn), + x->replay_esn); + if (ret) + goto out; + } + if (x->security) + ret = copy_sec_ctx(x->security, skb); +out: + return ret; } static int dump_one_state(struct xfrm_state *x, int count, void *ptr) @@ -825,15 +834,12 @@ static int dump_one_state(struct xfrm_state *x, int count, void *ptr) p = nlmsg_data(nlh); err = copy_to_user_state_extra(x, p, skb); - if (err) - goto nla_put_failure; - + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } nlmsg_end(skb, nlh); return 0; - 
-nla_put_failure: - nlmsg_cancel(skb, nlh); - return err; } static int xfrm_dump_sa_done(struct netlink_callback *cb) @@ -904,6 +910,7 @@ static int build_spdinfo(struct sk_buff *skb, struct net *net, struct xfrmu_spdinfo spc; struct xfrmu_spdhinfo sph; struct nlmsghdr *nlh; + int err; u32 *f; nlh = nlmsg_put(skb, pid, seq, XFRM_MSG_NEWSPDINFO, sizeof(u32), 0); @@ -922,15 +929,15 @@ static int build_spdinfo(struct sk_buff *skb, struct net *net, sph.spdhcnt = si.spdhcnt; sph.spdhmcnt = si.spdhmcnt; - if (nla_put(skb, XFRMA_SPD_INFO, sizeof(spc), &spc) || - nla_put(skb, XFRMA_SPD_HINFO, sizeof(sph), &sph)) - goto nla_put_failure; + err = nla_put(skb, XFRMA_SPD_INFO, sizeof(spc), &spc); + if (!err) + err = nla_put(skb, XFRMA_SPD_HINFO, sizeof(sph), &sph); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } return nlmsg_end(skb, nlh); - -nla_put_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_get_spdinfo(struct sk_buff *skb, struct nlmsghdr *nlh, @@ -965,6 +972,7 @@ static int build_sadinfo(struct sk_buff *skb, struct net *net, struct xfrmk_sadinfo si; struct xfrmu_sadhinfo sh; struct nlmsghdr *nlh; + int err; u32 *f; nlh = nlmsg_put(skb, pid, seq, XFRM_MSG_NEWSADINFO, sizeof(u32), 0); @@ -978,15 +986,15 @@ static int build_sadinfo(struct sk_buff *skb, struct net *net, sh.sadhmcnt = si.sadhmcnt; sh.sadhcnt = si.sadhcnt; - if (nla_put_u32(skb, XFRMA_SAD_CNT, si.sadcnt) || - nla_put(skb, XFRMA_SAD_HINFO, sizeof(sh), &sh)) - goto nla_put_failure; + err = nla_put_u32(skb, XFRMA_SAD_CNT, si.sadcnt); + if (!err) + err = nla_put(skb, XFRMA_SAD_HINFO, sizeof(sh), &sh); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } return nlmsg_end(skb, nlh); - -nla_put_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_get_sadinfo(struct sk_buff *skb, struct nlmsghdr *nlh, @@ -1439,9 +1447,8 @@ static inline int copy_to_user_state_sec_ctx(struct xfrm_state *x, struct sk_buf static inline int copy_to_user_sec_ctx(struct xfrm_policy *xp, struct sk_buff *skb) { - if (xp->security) { + if (xp->security) return copy_sec_ctx(xp->security, skb); - } return 0; } static inline size_t userpolicy_type_attrsize(void) @@ -1477,6 +1484,7 @@ static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr struct sk_buff *in_skb = sp->in_skb; struct sk_buff *skb = sp->out_skb; struct nlmsghdr *nlh; + int err; nlh = nlmsg_put(skb, NETLINK_CB(in_skb).pid, sp->nlmsg_seq, XFRM_MSG_NEWPOLICY, sizeof(*p), sp->nlmsg_flags); @@ -1485,22 +1493,19 @@ static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr p = nlmsg_data(nlh); copy_to_user_policy(xp, p, dir); - if (copy_to_user_tmpl(xp, skb) < 0) - goto nlmsg_failure; - if (copy_to_user_sec_ctx(xp, skb)) - goto nlmsg_failure; - if (copy_to_user_policy_type(xp->type, skb) < 0) - goto nlmsg_failure; - if (xfrm_mark_put(skb, &xp->mark)) - goto nla_put_failure; - + err = copy_to_user_tmpl(xp, skb); + if (!err) + err = copy_to_user_sec_ctx(xp, skb); + if (!err) + err = copy_to_user_policy_type(xp->type, skb); + if (!err) + err = xfrm_mark_put(skb, &xp->mark); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } nlmsg_end(skb, nlh); return 0; - -nla_put_failure: -nlmsg_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_dump_policy_done(struct netlink_callback *cb) @@ -1688,6 +1693,7 @@ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct { struct xfrm_aevent_id *id; struct nlmsghdr *nlh; + int err; nlh = nlmsg_put(skb, c->pid, c->seq, 
XFRM_MSG_NEWAE, sizeof(*id), 0); if (nlh == NULL) @@ -1703,35 +1709,39 @@ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct id->flags = c->data.aevent; if (x->replay_esn) { - if (nla_put(skb, XFRMA_REPLAY_ESN_VAL, - xfrm_replay_state_esn_len(x->replay_esn), - x->replay_esn)) - goto nla_put_failure; + err = nla_put(skb, XFRMA_REPLAY_ESN_VAL, + xfrm_replay_state_esn_len(x->replay_esn), + x->replay_esn); } else { - if (nla_put(skb, XFRMA_REPLAY_VAL, sizeof(x->replay), - &x->replay)) - goto nla_put_failure; + err = nla_put(skb, XFRMA_REPLAY_VAL, sizeof(x->replay), + &x->replay); } - if (nla_put(skb, XFRMA_LTIME_VAL, sizeof(x->curlft), &x->curlft)) - goto nla_put_failure; - - if ((id->flags & XFRM_AE_RTHR) && - nla_put_u32(skb, XFRMA_REPLAY_THRESH, x->replay_maxdiff)) - goto nla_put_failure; - - if ((id->flags & XFRM_AE_ETHR) && - nla_put_u32(skb, XFRMA_ETIMER_THRESH, - x->replay_maxage * 10 / HZ)) - goto nla_put_failure; + if (err) + goto out_cancel; + err = nla_put(skb, XFRMA_LTIME_VAL, sizeof(x->curlft), &x->curlft); + if (err) + goto out_cancel; - if (xfrm_mark_put(skb, &x->mark)) - goto nla_put_failure; + if (id->flags & XFRM_AE_RTHR) { + err = nla_put_u32(skb, XFRMA_REPLAY_THRESH, x->replay_maxdiff); + if (err) + goto out_cancel; + } + if (id->flags & XFRM_AE_ETHR) { + err = nla_put_u32(skb, XFRMA_ETIMER_THRESH, + x->replay_maxage * 10 / HZ); + if (err) + goto out_cancel; + } + err = xfrm_mark_put(skb, &x->mark); + if (err) + goto out_cancel; return nlmsg_end(skb, nlh); -nla_put_failure: +out_cancel: nlmsg_cancel(skb, nlh); - return -EMSGSIZE; + return err; } static int xfrm_get_ae(struct sk_buff *skb, struct nlmsghdr *nlh, @@ -2155,7 +2165,7 @@ static int build_migrate(struct sk_buff *skb, const struct xfrm_migrate *m, const struct xfrm_migrate *mp; struct xfrm_userpolicy_id *pol_id; struct nlmsghdr *nlh; - int i; + int i, err; nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_MIGRATE, sizeof(*pol_id), 0); if (nlh == NULL) @@ -2167,21 +2177,25 @@ static int build_migrate(struct sk_buff *skb, const struct xfrm_migrate *m, memcpy(&pol_id->sel, sel, sizeof(pol_id->sel)); pol_id->dir = dir; - if (k != NULL && (copy_to_user_kmaddress(k, skb) < 0)) - goto nlmsg_failure; - - if (copy_to_user_policy_type(type, skb) < 0) - goto nlmsg_failure; - + if (k != NULL) { + err = copy_to_user_kmaddress(k, skb); + if (err) + goto out_cancel; + } + err = copy_to_user_policy_type(type, skb); + if (err) + goto out_cancel; for (i = 0, mp = m ; i < num_migrate; i++, mp++) { - if (copy_to_user_migrate(mp, skb) < 0) - goto nlmsg_failure; + err = copy_to_user_migrate(mp, skb); + if (err) + goto out_cancel; } return nlmsg_end(skb, nlh); -nlmsg_failure: + +out_cancel: nlmsg_cancel(skb, nlh); - return -EMSGSIZE; + return err; } static int xfrm_send_migrate(const struct xfrm_selector *sel, u8 dir, u8 type, @@ -2354,6 +2368,7 @@ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct { struct xfrm_user_expire *ue; struct nlmsghdr *nlh; + int err; nlh = nlmsg_put(skb, c->pid, 0, XFRM_MSG_EXPIRE, sizeof(*ue), 0); if (nlh == NULL) @@ -2363,13 +2378,11 @@ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct copy_to_user_state(x, &ue->state); ue->hard = (c->data.hard != 0) ? 
1 : 0; - if (xfrm_mark_put(skb, &x->mark)) - goto nla_put_failure; + err = xfrm_mark_put(skb, &x->mark); + if (err) + return err; return nlmsg_end(skb, nlh); - -nla_put_failure: - return -EMSGSIZE; } static int xfrm_exp_state_notify(struct xfrm_state *x, const struct km_event *c) @@ -2470,7 +2483,7 @@ static int xfrm_notify_sa(struct xfrm_state *x, const struct km_event *c) struct nlmsghdr *nlh; struct sk_buff *skb; int len = xfrm_sa_len(x); - int headlen; + int headlen, err; headlen = sizeof(*p); if (c->event == XFRM_MSG_DELSA) { @@ -2485,8 +2498,9 @@ static int xfrm_notify_sa(struct xfrm_state *x, const struct km_event *c) return -ENOMEM; nlh = nlmsg_put(skb, c->pid, c->seq, c->event, headlen, 0); + err = -EMSGSIZE; if (nlh == NULL) - goto nla_put_failure; + goto out_free_skb; p = nlmsg_data(nlh); if (c->event == XFRM_MSG_DELSA) { @@ -2499,24 +2513,23 @@ static int xfrm_notify_sa(struct xfrm_state *x, const struct km_event *c) id->proto = x->id.proto; attr = nla_reserve(skb, XFRMA_SA, sizeof(*p)); + err = -EMSGSIZE; if (attr == NULL) - goto nla_put_failure; + goto out_free_skb; p = nla_data(attr); } - - if (copy_to_user_state_extra(x, p, skb)) - goto nla_put_failure; + err = copy_to_user_state_extra(x, p, skb); + if (err) + goto out_free_skb; nlmsg_end(skb, nlh); return nlmsg_multicast(net->xfrm.nlsk, skb, 0, XFRMNLGRP_SA, GFP_ATOMIC); -nla_put_failure: - /* Somebody screwed up with xfrm_sa_len! */ - WARN_ON(1); +out_free_skb: kfree_skb(skb); - return -1; + return err; } static int xfrm_send_state_notify(struct xfrm_state *x, const struct km_event *c) @@ -2557,9 +2570,10 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x, struct xfrm_tmpl *xt, struct xfrm_policy *xp, int dir) { + __u32 seq = xfrm_get_acqseq(); struct xfrm_user_acquire *ua; struct nlmsghdr *nlh; - __u32 seq = xfrm_get_acqseq(); + int err; nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_ACQUIRE, sizeof(*ua), 0); if (nlh == NULL) @@ -2575,21 +2589,19 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x, ua->calgos = xt->calgos; ua->seq = x->km.seq = seq; - if (copy_to_user_tmpl(xp, skb) < 0) - goto nlmsg_failure; - if (copy_to_user_state_sec_ctx(x, skb)) - goto nlmsg_failure; - if (copy_to_user_policy_type(xp->type, skb) < 0) - goto nlmsg_failure; - if (xfrm_mark_put(skb, &xp->mark)) - goto nla_put_failure; + err = copy_to_user_tmpl(xp, skb); + if (!err) + err = copy_to_user_state_sec_ctx(x, skb); + if (!err) + err = copy_to_user_policy_type(xp->type, skb); + if (!err) + err = xfrm_mark_put(skb, &xp->mark); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } return nlmsg_end(skb, nlh); - -nla_put_failure: -nlmsg_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *xt, @@ -2681,8 +2693,9 @@ static int build_polexpire(struct sk_buff *skb, struct xfrm_policy *xp, int dir, const struct km_event *c) { struct xfrm_user_polexpire *upe; - struct nlmsghdr *nlh; int hard = c->data.hard; + struct nlmsghdr *nlh; + int err; nlh = nlmsg_put(skb, c->pid, 0, XFRM_MSG_POLEXPIRE, sizeof(*upe), 0); if (nlh == NULL) @@ -2690,22 +2703,20 @@ static int build_polexpire(struct sk_buff *skb, struct xfrm_policy *xp, upe = nlmsg_data(nlh); copy_to_user_policy(xp, &upe->pol, dir); - if (copy_to_user_tmpl(xp, skb) < 0) - goto nlmsg_failure; - if (copy_to_user_sec_ctx(xp, skb)) - goto nlmsg_failure; - if (copy_to_user_policy_type(xp->type, skb) < 0) - goto nlmsg_failure; - if (xfrm_mark_put(skb, &xp->mark)) - goto nla_put_failure; + err = 
copy_to_user_tmpl(xp, skb); + if (!err) + err = copy_to_user_sec_ctx(xp, skb); + if (!err) + err = copy_to_user_policy_type(xp->type, skb); + if (!err) + err = xfrm_mark_put(skb, &xp->mark); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } upe->hard = !!hard; return nlmsg_end(skb, nlh); - -nla_put_failure: -nlmsg_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_exp_policy_notify(struct xfrm_policy *xp, int dir, const struct km_event *c) @@ -2725,13 +2736,13 @@ static int xfrm_exp_policy_notify(struct xfrm_policy *xp, int dir, const struct static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_event *c) { + int len = nla_total_size(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr); struct net *net = xp_net(xp); struct xfrm_userpolicy_info *p; struct xfrm_userpolicy_id *id; struct nlmsghdr *nlh; struct sk_buff *skb; - int len = nla_total_size(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr); - int headlen; + int headlen, err; headlen = sizeof(*p); if (c->event == XFRM_MSG_DELPOLICY) { @@ -2747,8 +2758,9 @@ static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_e return -ENOMEM; nlh = nlmsg_put(skb, c->pid, c->seq, c->event, headlen, 0); + err = -EMSGSIZE; if (nlh == NULL) - goto nlmsg_failure; + goto out_free_skb; p = nlmsg_data(nlh); if (c->event == XFRM_MSG_DELPOLICY) { @@ -2763,29 +2775,29 @@ static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_e memcpy(&id->sel, &xp->selector, sizeof(id->sel)); attr = nla_reserve(skb, XFRMA_POLICY, sizeof(*p)); + err = -EMSGSIZE; if (attr == NULL) - goto nlmsg_failure; + goto out_free_skb; p = nla_data(attr); } copy_to_user_policy(xp, p, dir); - if (copy_to_user_tmpl(xp, skb) < 0) - goto nlmsg_failure; - if (copy_to_user_policy_type(xp->type, skb) < 0) - goto nlmsg_failure; - - if (xfrm_mark_put(skb, &xp->mark)) - goto nla_put_failure; + err = copy_to_user_tmpl(xp, skb); + if (!err) + err = copy_to_user_policy_type(xp->type, skb); + if (!err) + err = xfrm_mark_put(skb, &xp->mark); + if (err) + goto out_free_skb; nlmsg_end(skb, nlh); return nlmsg_multicast(net->xfrm.nlsk, skb, 0, XFRMNLGRP_POLICY, GFP_ATOMIC); -nla_put_failure: -nlmsg_failure: +out_free_skb: kfree_skb(skb); - return -1; + return err; } static int xfrm_notify_policy_flush(const struct km_event *c) @@ -2793,24 +2805,27 @@ static int xfrm_notify_policy_flush(const struct km_event *c) struct net *net = c->net; struct nlmsghdr *nlh; struct sk_buff *skb; + int err; skb = nlmsg_new(userpolicy_type_attrsize(), GFP_ATOMIC); if (skb == NULL) return -ENOMEM; nlh = nlmsg_put(skb, c->pid, c->seq, XFRM_MSG_FLUSHPOLICY, 0, 0); + err = -EMSGSIZE; if (nlh == NULL) - goto nlmsg_failure; - if (copy_to_user_policy_type(c->data.type, skb) < 0) - goto nlmsg_failure; + goto out_free_skb; + err = copy_to_user_policy_type(c->data.type, skb); + if (err) + goto out_free_skb; nlmsg_end(skb, nlh); return nlmsg_multicast(net->xfrm.nlsk, skb, 0, XFRMNLGRP_POLICY, GFP_ATOMIC); -nlmsg_failure: +out_free_skb: kfree_skb(skb); - return -1; + return err; } static int xfrm_send_policy_notify(struct xfrm_policy *xp, int dir, const struct km_event *c) @@ -2853,15 +2868,14 @@ static int build_report(struct sk_buff *skb, u8 proto, ur->proto = proto; memcpy(&ur->sel, sel, sizeof(ur->sel)); - if (addr && - nla_put(skb, XFRMA_COADDR, sizeof(*addr), addr)) - goto nla_put_failure; - + if (addr) { + int err = nla_put(skb, XFRMA_COADDR, sizeof(*addr), addr); + if (err) { + nlmsg_cancel(skb, nlh); + return err; + } + } return nlmsg_end(skb, 
nlh); - -nla_put_failure: - nlmsg_cancel(skb, nlh); - return -EMSGSIZE; } static int xfrm_send_report(struct net *net, u8 proto, @@ -2945,9 +2959,12 @@ static struct xfrm_mgr netlink_mgr = { static int __net_init xfrm_user_net_init(struct net *net) { struct sock *nlsk; + struct netlink_kernel_cfg cfg = { + .groups = XFRMNLGRP_MAX, + .input = xfrm_netlink_rcv, + }; - nlsk = netlink_kernel_create(net, NETLINK_XFRM, XFRMNLGRP_MAX, - xfrm_netlink_rcv, NULL, THIS_MODULE); + nlsk = netlink_kernel_create(net, NETLINK_XFRM, THIS_MODULE, &cfg); if (nlsk == NULL) return -ENOMEM; net->xfrm.nlsk_stash = nlsk; /* Don't set to NULL */ diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 9292a8971e66..689fe2d22165 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -5762,21 +5762,21 @@ static struct nf_hook_ops selinux_ipv4_ops[] = { { .hook = selinux_ipv4_postroute, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_POST_ROUTING, .priority = NF_IP_PRI_SELINUX_LAST, }, { .hook = selinux_ipv4_forward, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_FORWARD, .priority = NF_IP_PRI_SELINUX_FIRST, }, { .hook = selinux_ipv4_output, .owner = THIS_MODULE, - .pf = PF_INET, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP_PRI_SELINUX_FIRST, } @@ -5788,14 +5788,14 @@ static struct nf_hook_ops selinux_ipv6_ops[] = { { .hook = selinux_ipv6_postroute, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_POST_ROUTING, .priority = NF_IP6_PRI_SELINUX_LAST, }, { .hook = selinux_ipv6_forward, .owner = THIS_MODULE, - .pf = PF_INET6, + .pf = NFPROTO_IPV6, .hooknum = NF_INET_FORWARD, .priority = NF_IP6_PRI_SELINUX_FIRST, } diff --git a/security/selinux/netlink.c b/security/selinux/netlink.c index 161e01a6c7ef..8a77725423e0 100644 --- a/security/selinux/netlink.c +++ b/security/selinux/netlink.c @@ -19,6 +19,7 @@ #include <linux/netlink.h> #include <linux/selinux_netlink.h> #include <net/net_namespace.h> +#include <net/netlink.h> #include "security.h" @@ -47,7 +48,7 @@ static void selnl_add_payload(struct nlmsghdr *nlh, int len, int msgtype, void * { switch (msgtype) { case SELNL_MSG_SETENFORCE: { - struct selnl_msg_setenforce *msg = NLMSG_DATA(nlh); + struct selnl_msg_setenforce *msg = nlmsg_data(nlh); memset(msg, 0, len); msg->val = *((int *)data); @@ -55,7 +56,7 @@ static void selnl_add_payload(struct nlmsghdr *nlh, int len, int msgtype, void * } case SELNL_MSG_POLICYLOAD: { - struct selnl_msg_policyload *msg = NLMSG_DATA(nlh); + struct selnl_msg_policyload *msg = nlmsg_data(nlh); memset(msg, 0, len); msg->seqno = *((u32 *)data); @@ -81,7 +82,9 @@ static void selnl_notify(int msgtype, void *data) goto oom; tmp = skb->tail; - nlh = NLMSG_PUT(skb, 0, 0, msgtype, len); + nlh = nlmsg_put(skb, 0, 0, msgtype, len, 0); + if (!nlh) + goto out_kfree_skb; selnl_add_payload(nlh, len, msgtype, data); nlh->nlmsg_len = skb->tail - tmp; NETLINK_CB(skb).dst_group = SELNLGRP_AVC; @@ -89,7 +92,7 @@ static void selnl_notify(int msgtype, void *data) out: return; -nlmsg_failure: +out_kfree_skb: kfree_skb(skb); oom: printk(KERN_ERR "SELinux: OOM in %s\n", __func__); @@ -108,8 +111,12 @@ void selnl_notify_policyload(u32 seqno) static int __init selnl_init(void) { + struct netlink_kernel_cfg cfg = { + .groups = SELNLGRP_MAX, + }; + selnl = netlink_kernel_create(&init_net, NETLINK_SELINUX, - SELNLGRP_MAX, NULL, NULL, THIS_MODULE); + THIS_MODULE, &cfg); if (selnl == NULL) panic("SELinux: Cannot create 
netlink socket."); netlink_set_nonroot(NETLINK_SELINUX, NL_NONROOT_RECV); |