Commit log: message (author, date, files changed, lines -deleted/+added)

* RDS: TCP: Simplify reconnect to avoid duelling reconnect attempts (Sowmini Varadhan, 2016-07-01, 2 files, -3/+6)

    When reconnecting, the peer with the smaller IP address will initiate
    the reconnect, to avoid needless duelling SYN issues.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: Hooks to set up a single connection path (Sowmini Varadhan, 2016-07-01, 9 files, -17/+20)

    This patch adds ->conn_path_connect callbacks in the rds_transport
    that are used to set up a single connection path.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: make receive path use the rds_conn_path (Sowmini Varadhan, 2016-07-01, 9 files, -22/+26)

    The ->sk_user_data contains a pointer to the rds_conn_path for the
    socket. Use this consistently in the rds_tcp_data_ready callbacks to
    get the rds_conn_path for rds_recv_incoming.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: make ->sk_user_data point to a rds_conn_path (Sowmini Varadhan, 2016-07-01, 6 files, -40/+41)

    The socket callbacks should all operate on a struct rds_conn_path, in
    preparation for a MP capable RDS-TCP.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: Refactor connection destruction to handle multiple paths (Sowmini Varadhan, 2016-07-01, 1 file, -7/+39)

    A single rds_connection may have multiple rds_conn_paths that have to
    be carefully and correctly destroyed, for both rmmod and netns-delete
    cases.

    For both cases, we extract a single rds_tcp_connection for each conn
    into a temporary list, and then invoke rds_conn_destroy() which
    iteratively dismantles every path in the rds_connection.

    For the netns deletion case, we additionally have to make sure that we
    do not leave a socket in TIME_WAIT state, as this will hold up the
    netns deletion. Thus we call rds_tcp_conn_paths_destroy() to reset
    state quickly.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: Make rds_tcp_connection track the rds_conn_path (Sowmini Varadhan, 2016-07-01, 5 files, -42/+48)

    The struct rds_tcp_connection is the transport-specific private data
    structure that tracks TCP information per rds_conn_path. Modify this
    structure to have a back-pointer to the rds_conn_path for which it is
    the ->cp_transport_data.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: TCP: Remove dead logic around c_passive in rds-tcp (Sowmini Varadhan, 2016-07-01, 1 file, -6/+1)

    The c_passive bit is only intended for the IB transport and will never
    be encountered in rds-tcp, so remove the dead logic that predicates on
    this bit.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* RDS: Rework path specific indirections (Sowmini Varadhan, 2016-07-01, 12 files, -40/+29)

    Refactor code to avoid separate indirections for single-path and
    multipath transports. All transports (both single and mp-capable) will
    get a pointer to the rds_conn_path, and can trivially derive the
    rds_connection from the ->cp_conn.

    Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
    Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch 'bpf-cgroup2' (David S. Miller, 2016-07-01, 12 files, -1/+506)

    Martin KaFai Lau says:

    ====================
    cgroup: bpf: cgroup2 membership test on skb

    This series implements a bpf way to check the cgroup2 membership of a
    skb (sk_buff). It is similar to the feature added in netfilter:
    c38c4597e4bf ("netfilter: implement xt_cgroup cgroup2 path match")

    The current target is the tc-like usage.

    v3:
    - Remove WARN_ON_ONCE(!rcu_read_lock_held())
    - Stop BPF_MAP_TYPE_CGROUP_ARRAY usage in patch 2/4
    - Avoid mounting bpf fs manually in patch 4/4
    - Thanks for Daniel's review and the above suggestions
    - Check CONFIG_SOCK_CGROUP_DATA instead of CONFIG_CGROUPS, thanks to
      the kbuild bot's report. Patch 2/4 only needs CONFIG_CGROUPS while
      patch 3/4 needs CONFIG_SOCK_CGROUP_DATA. Since a single bpf cgrp2
      array alone is not useful for now, CONFIG_SOCK_CGROUP_DATA is also
      used in patch 2/4. We can fine-tune it later if we find other use
      cases for the cgrp2 array.
    - Return EAGAIN instead of ENOENT if the cgrp2 array entry is NULL, to
      distinguish these two cases: 1) the userland has not populated this
      array entry yet, or 2) not finding cgrp2 from the skb.
    - Belated thanks to Alexei and Tejun for reviewing v1 and giving
      advice on this work.

    v2:
    - Fix two return cases in cgroup_get_from_fd()
    - Fix compilation errors when CONFIG_CGROUPS is not used:
      - arraymap.c: avoid registering BPF_MAP_TYPE_CGROUP_ARRAY
      - filter.c: tc_cls_act_func_proto() returns NULL on
        BPF_FUNC_skb_in_cgroup
    - Add comments to BPF_FUNC_skb_in_cgroup and cgroup_get_from_fd()
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

  * cgroup: bpf: Add an example to do cgroup checking in BPF (Martin KaFai Lau, 2016-07-01, 5 files, -0/+367)

      test_cgrp2_array_pin.c: A userland program that creates a bpf_map
      (BPF_MAP_TYPE_CGROUP_ARRAY), populates/updates it with a cgroup2's
      backed fd and pins it to a bpf-fs file. The pinned file can be
      loaded by tc and then used by the bpf prog later. This program can
      also update an existing pinned array, which could be useful for
      debugging/testing purposes.

      test_cgrp2_tc_kern.c: A bpf prog which should be loaded by tc. It
      demonstrates the usage of bpf_skb_in_cgroup.

      test_cgrp2_tc.sh: A script that glues test_cgrp2_array_pin.c and
      test_cgrp2_tc_kern.c together. The idea is:
      1. Load the test_cgrp2_tc_kern.o by tc
      2. Use test_cgrp2_array_pin.c to populate a
         BPF_MAP_TYPE_CGROUP_ARRAY with a cgroup fd
      3. Do a 'ping -6 ff02::1%ve' to ensure the packet has been dropped
         because of a match on the cgroup

      Most of the lines in test_cgrp2_tc.sh are boilerplate to set up the
      cgroup/bpf-fs/net-devices/netns etc. It is not bulletproof on errors
      but should work well enough and give enough debug info if things do
      not go well.

      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * cgroup: bpf: Add bpf_skb_in_cgroup_proto (Martin KaFai Lau, 2016-07-01, 3 files, -1/+56)

      Adds a bpf helper, bpf_skb_in_cgroup, to decide if a skb->sk belongs
      to a descendant of a cgroup2. It is similar to the feature added in
      netfilter: commit c38c4597e4bf ("netfilter: implement xt_cgroup
      cgroup2 path match")

      The user is expected to populate a BPF_MAP_TYPE_CGROUP_ARRAY which
      will be used by bpf_skb_in_cgroup. The bpf verifier is modified to
      ensure BPF_MAP_TYPE_CGROUP_ARRAY and bpf_skb_in_cgroup() are always
      used together.

      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

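      As an aside, a minimal sketch of how a tc-attached program might use
      the new helper. The map name, section name and the samples-style
      bpf_helpers.h wrapper are assumptions for illustration, not part of
      this patch:

        /* drop traffic from sockets inside the cgroup2 stored at index 0 */
        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") cgrp_array = {
            .type        = BPF_MAP_TYPE_CGROUP_ARRAY,
            .key_size    = sizeof(__u32),
            .value_size  = sizeof(__u32),
            .max_entries = 1,
        };

        SEC("filter")
        int drop_in_cgrp2(struct __sk_buff *skb)
        {
            if (bpf_skb_in_cgroup(skb, &cgrp_array, 0))
                return 2;   /* TC_ACT_SHOT: drop */
            return 0;       /* TC_ACT_OK: pass */
        }

        char _license[] SEC("license") = "GPL";
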
  * cgroup: bpf: Add BPF_MAP_TYPE_CGROUP_ARRAY (Martin KaFai Lau, 2016-07-01, 4 files, -1/+48)

      Add a BPF_MAP_TYPE_CGROUP_ARRAY and its bpf_map_ops's
      implementations. To update an element, the caller is expected to
      obtain a cgroup2 backed fd by open(cgroup2_dir) and then update the
      array with that fd.

      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

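      A hedged userland sketch of such an update (the wrapper function is
      hypothetical; only the bpf(2) attr layout is assumed, and the
      pointer casts assume a 64-bit build):

        /* store a cgroup2 directory fd into slot 0 of a CGROUP_ARRAY map */
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>
        #include <linux/bpf.h>
        #include <sys/syscall.h>

        static int cgrp_array_set(int map_fd, const char *cgrp2_path)
        {
            union bpf_attr attr;
            int key = 0;
            int cgrp_fd = open(cgrp2_path, O_RDONLY);

            if (cgrp_fd < 0)
                return -1;
            memset(&attr, 0, sizeof(attr));
            attr.map_fd = map_fd;
            attr.key    = (unsigned long)&key;
            attr.value  = (unsigned long)&cgrp_fd; /* value is the fd */
            return syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM,
                           &attr, sizeof(attr));
        }
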
  * cgroup: Add cgroup_get_from_fd (Martin KaFai Lau, 2016-07-01, 2 files, -0/+36)

      Add a helper function to get a cgroup2 from a fd. It will be stored
      in a bpf array (BPF_MAP_TYPE_CGROUP_ARRAY) which will be introduced
      in a later patch.

      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

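      A sketch of the expected calling convention (the caller code is
      illustrative; an ERR_PTR-style return is assumed):

        struct cgroup *cgrp = cgroup_get_from_fd(fd);

        if (IS_ERR(cgrp))
            return PTR_ERR(cgrp);
        /* ... stash cgrp in the array slot, cgroup_put() the old one ... */
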
* Merge branch 'bpf-robustify' (David S. Miller, 2016-07-01, 9 files, -61/+34)

    Daniel Borkmann says:

    ====================
    Further robustify putting BPF progs

    This series addresses a potential issue reported to us by Jann Horn
    with regards to putting progs. The first patch moves progs generally
    under RCU destruction and the second patch refactors getting of progs
    to simplify code a bit. For details, please see the individual
    patches. Note, we think that addressing this one in net-next should
    be sufficient.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

  * bpf: refactor bpf_prog_get and type check into helper (Daniel Borkmann, 2016-07-01, 7 files, -44/+31)

      Since bpf_prog_get() and the program type check are used in a couple
      of places, refactor this into a small helper function that we can
      make use of. Since the non-RO prog->aux part is not used in
      performance-critical paths and a program destruction via RCU is
      rather unlikely when doing the put, we shouldn't have an issue just
      doing the bpf_prog_get() + prog->type != type check, but actually
      not taking the ref at all (due to being in the fdget() / fdput()
      section of the bpf fd) is even cleaner and makes the diff smaller as
      well, so just go for that. Callsites are changed to make use of the
      new helper where possible.

      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

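      A compact sketch of the pattern described above (identifiers are
      assumptions, not verbatim from the patch): the type is checked while
      still inside the fdget()/fdput() window, so no reference is taken
      just for the test.

        static struct bpf_prog *__bpf_prog_get(u32 ufd,
                                               enum bpf_prog_type *type)
        {
            struct fd f = fdget(ufd);
            struct bpf_prog *prog;

            if (!f.file)
                return ERR_PTR(-EBADF);
            /* (validation that ufd really is a bpf prog fd is elided) */
            prog = f.file->private_data;
            if (type && prog->type != *type) {
                prog = ERR_PTR(-EINVAL);
                goto out;
            }
            prog = bpf_prog_inc(prog); /* take the ref only on success */
        out:
            fdput(f);
            return prog;
        }
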
  * bpf: generally move prog destruction to RCU deferral (Daniel Borkmann, 2016-07-01, 4 files, -19/+5)

      Jann Horn reported the following analysis that could potentially
      result in a very hard to trigger (if not impossible) UAF race, to
      quote his event timeline:

      - Set up a process with threads T1, T2 and T3
      - Let T1 set up a socket filter F1 that invokes another filter F2
        through a BPF map [tail call]
      - Let T1 trigger the socket filter via a unix domain socket write,
        don't wait for completion
      - Let T2 call PERF_EVENT_IOC_SET_BPF with F2, don't wait for
        completion
      - Now T2 should be behind bpf_prog_get(), but before bpf_prog_put()
      - Let T3 close the file descriptor for F2, dropping the reference
        count of F2 to 2
      - At this point, T1 should have looked up F2 from the map, but not
        finished executing it
      - Let T3 remove F2 from the BPF map, dropping the reference count of
        F2 to 1
      - Now T2 should call bpf_prog_put() (wrong BPF program type),
        dropping the reference count of F2 to 0 and scheduling
        bpf_prog_free_deferred() via schedule_work()
      - At this point, the BPF program could be freed
      - BPF execution is still running in a freed BPF program

      While at PERF_EVENT_IOC_SET_BPF time it's only guaranteed that the
      perf event fd we're doing the syscall on doesn't disappear from
      underneath us for the whole syscall time, it may not be the case for
      the bpf fd used as an argument only after we did the put. It needs
      to be a valid fd pointing to a BPF program at the time of the call
      to make the bpf_prog_get(), and while T2 gets preempted, F2 must
      have dropped its reference to 1 on the other CPU. The fput() from
      the close() in T3 should also add additional delay to the reference
      drop via exit_task_work() when bpf_prog_release() gets called, as
      well as to the scheduling of bpf_prog_free_deferred().

      That said, it nevertheless makes sense to move BPF prog destruction
      generally after an RCU grace period, to guarantee that such a
      scenario as above, but also others as recently fixed in
      ceb56070359b ("bpf, perf: delay release of BPF prog after grace
      period") with regards to tail calls, won't happen. Integrating
      bpf_prog_free_deferred() directly into the RCU callback is not
      allowed since the invocation might happen from either softirq or
      process context, so we're not permitted to block. Reviewing all
      bpf_prog_put() invocations from the eBPF side (note, cBPF -> eBPF
      progs don't use this for their destruction) with call_rcu() looks
      good to me.

      Since we don't know whether at the time of attaching the program
      we're already part of a tail call map, we need to use the RCU
      variant. However, due to this, there won't be severely more stress
      on the RCU callback queue: situations with the above bpf_prog_get()
      and bpf_prog_put() combo in practice normally won't lead to
      releases, but even if they would, enough effort/cycles have to be
      put into loading a BPF program into the kernel already.

      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>

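      A sketch of the resulting destruction path (assuming an rcu_head in
      bpf_prog_aux, as the RCU deferral described above requires; function
      bodies are illustrative):

        static void __bpf_prog_put_rcu(struct rcu_head *rcu)
        {
            struct bpf_prog_aux *aux =
                container_of(rcu, struct bpf_prog_aux, rcu);

            /* still defers the actual free via schedule_work() */
            bpf_prog_free(aux->prog);
        }

        void bpf_prog_put(struct bpf_prog *prog)
        {
            if (atomic_dec_and_test(&prog->aux->refcnt))
                call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
        }
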
* atm: horizon: Use setup_timer (Amitoj Kaur Chawla, 2016-07-01, 1 file, -3/+1)

    Convert a call to init_timer and the accompanying initializations of
    the timer's data and function fields to a call to setup_timer.

    The Coccinelle semantic patch that fixes this problem is as follows:

    @@
    expression t,d,f,e1;
    identifier x1;
    statement S1;
    @@
    (
    -t.data = d;
    |
    -t.function = f;
    |
    -init_timer(&t);
    +setup_timer(&t,f,d);
    |
    -init_timer_on_stack(&t);
    +setup_timer_on_stack(&t,f,d);
    )
    <... when != S1
    t.x1 = e1;
    ...>

    Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

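    In driver terms the transformation looks like this (timer field and
    callback names below are illustrative, not quoted from horizon.c):

      /* before */
      init_timer(&dev->housekeeping);
      dev->housekeeping.function = do_housekeeping;
      dev->housekeeping.data = (unsigned long)dev;

      /* after: one call expresses the same initialization */
      setup_timer(&dev->housekeeping, do_housekeeping, (unsigned long)dev);
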
* Merge branch 'qed-next' (David S. Miller, 2016-07-01, 3 files, -58/+133)

    Manish Chopra says:

    ====================
    qede: Enhancements

    This patch series has a few small fastpath feature additions and some
    code refactoring.

    Note - regarding get/set tunable configuration via ethtool:
    surprisingly, there is NO ethtool application support for such
    configuration even though we have kernel support. Do let us know if
    we need to add support for that in user ethtool.

    Please consider applying this series to "net-next".
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

  * qede: Bump up driver version to 8.10.1.20 (Manish Chopra, 2016-07-01, 1 file, -1/+1)

      Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * qede: Add get/set rx copy break tunable support (Manish Chopra, 2016-07-01, 3 files, -1/+51)

      Signed-off-by: Manish <manish.chopra@qlogic.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * qede: Utilize xmit_more (Manish Chopra, 2016-07-01, 1 file, -14/+23)

      This patch uses the xmit_more optimization to reduce the number of
      TX doorbell writes per packet.

      Signed-off-by: Manish <manish.chopra@qlogic.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

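      A sketch of the idea (struct and field names are illustrative): the
      doorbell MMIO write is skipped while the stack signals that more
      packets are queued behind this one, so a single write can flush a
      whole burst.

        /* at the end of the xmit routine */
        if (!skb->xmit_more ||
            netif_xmit_stopped(netdev_get_tx_queue(ndev, txq_index)))
            writel(txq->tx_db.raw, txq->doorbell_addr); /* once per burst */
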
  * qede: qede_poll refactoring (Manish Chopra, 2016-07-01, 1 file, -42/+35)

      This patch cleans up the qede_poll() routine a bit and allows
      qede_poll() to do a single iteration of TX completion handling
      [under heavy TX load, qede_poll() might otherwise run for an
      indefinite time in the while(1) loop for TX completion processing
      and leave the CPU stuck].

      Signed-off-by: Manish <manish.chopra@qlogic.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * qede: Add support for handling IP fragmented packets (Manish Chopra, 2016-07-01, 3 files, -0/+23)

      When handling IP fragmented packets with csum in their transport
      header, the csum isn't changed as part of the fragmentation. As a
      result, the packet containing the transport headers would have the
      correct csum of the original packet, but one that mismatches the
      actual packet that passes on the wire. As a result, on the receive
      path HW would give an indication that the packet has an incorrect
      csum, which would cause qede to discard the incoming packet.

      Since HW also delivers a notification of IP fragments, change driver
      behavior to pass such incoming packets to the stack and let it make
      the decision whether the packet needs to be dropped.

      Signed-off-by: Manish <manish.chopra@qlogic.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch 'tun-skb_array' (David S. Miller, 2016-07-01, 9 files, -27/+255)

    Jason Wang says:

    ====================
    switch to use tx skb array in tun

    This series tries to switch to using a tx skb array in tun. This
    eliminates the spinlock contention between producer and consumer. The
    conversion was straightforward: just introduce a tx skb array and use
    it instead of sk_receive_queue.

    A minor issue is to keep the tx_queue_len behaviour, since tun used
    to use it for the length of sk_receive_queue. This is done through:

    - adding the ability to resize multiple rings at once, to avoid
      handling partial resize failure for multiple rings
    - adding support for zero-length rings
    - introducing a notifier which is triggered when tx_queue_len is
      changed for a netdev
    - resizing all queues when tx_queue_len changes

    Tests show about 15% improvement on guest rx pps:

    Before: ~1300000pps
    After : ~1500000pps

    Changes from V3:
    - fix kbuild warnings
    - call NETDEV_CHANGE_TX_QUEUE_LEN on IFLA_TXQLEN

    Changes from V2:
    - add multiple rings resizing support for ptr_ring/skb_array
    - add zero length ring support
    - introduce a NETDEV_CHANGE_TX_QUEUE_LEN
    - drop new flags

    Changes from V1:
    - switch to use skb array instead of a customized circular buffer
    - add non-blocking support
    - rename .peek to .peek_len
    - drop lockless peeking since tests show very minor improvement
    ====================

    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Acked-from-altitude: 34697 feet.
    Signed-off-by: David S. Miller <davem@davemloft.net>

  * tun: switch to use skb array for tx (Jason Wang, 2016-07-01, 3 files, -9/+146)

      We used to queue tx packets in sk_receive_queue, which is less
      efficient since it requires spinlocks to synchronize between
      producer and consumer.

      This patch tries to address this by:

      - switching from sk_receive_queue to a skb_array, and resizing it
        when tx_queue_len is changed
      - introducing a new proto_ops peek_len which is used for peeking
        the skb length
      - implementing a tun version of peek_len for vhost_net to use, and
        converting vhost_net to use peek_len if possible

      Pktgen test shows about 15.3% improvement on guest receiving pps
      for small buffers:

      Before: ~1300000pps
      After : ~1500000pps

      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * net: introduce NETDEV_CHANGE_TX_QUEUE_LEN (Jason Wang, 2016-07-01, 3 files, -5/+27)

      This patch introduces a new event, NETDEV_CHANGE_TX_QUEUE_LEN, which
      is triggered when tx_queue_len is changed. It can be used by net
      devices that want to do some processing at that time. An example is
      tun, which may want to resize its tx array when tx_queue_len is
      changed.

      Cc: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

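      A sketch of a consumer of the new event (the handler body and the
      tun_queue_resize helper are assumptions for illustration):

        static int tun_device_event(struct notifier_block *nb,
                                    unsigned long event, void *ptr)
        {
            struct net_device *dev = netdev_notifier_info_to_dev(ptr);

            switch (event) {
            case NETDEV_CHANGE_TX_QUEUE_LEN:
                /* resize per-queue rings to the new dev->tx_queue_len */
                if (tun_queue_resize(dev))
                    return NOTIFY_BAD;
                break;
            }
            return NOTIFY_DONE;
        }

        static struct notifier_block tun_notifier_block = {
            .notifier_call = tun_device_event,
        };
        /* registered via register_netdevice_notifier(&tun_notifier_block) */
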
  * skb_array: add wrappers for resizing (Jason Wang, 2016-07-01, 1 file, -0/+9)

      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * ptr_ring: support resizing multiple queues (Michael S. Tsirkin, 2016-07-01, 2 files, -9/+67)

      Sometimes we need to support resizing multiple queues at once. This
      is because it is not easy to recover from a partial failure when
      resizing multiple queues one by one.

      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

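      A hedged sketch of the all-or-nothing call, assuming a signature
      along these lines (array setup elided; the destroy callback frees
      entries that no longer fit):

        /* resize every ring of a device in one call, so a mid-way
         * allocation failure cannot leave the set partially resized */
        err = ptr_ring_resize_multiple(rings, nrings, new_size,
                                       GFP_KERNEL, kfree);
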
  * skb_array: minor tweak (Jason Wang, 2016-07-01, 1 file, -2/+2)

      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * ptr_ring: support zero length ring (Jason Wang, 2016-07-01, 1 file, -2/+4)

      Sometimes we need a zero-length ring, but the current code will
      crash since we don't do any check before accessing the ring. This
      patch fixes that.

      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>

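      The shape of the fix is a size check ahead of any queue access,
      along these lines (a sketch; the patch guards the produce/consume
      paths similarly):

        static inline void *__ptr_ring_peek(struct ptr_ring *r)
        {
            if (likely(r->size))
                return r->queue[r->consumer];
            return NULL; /* zero-length ring: always empty */
        }
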
* Merge branch 'sch_hfsc-fixes-cleanups' (David S. Miller, 2016-07-01, 1 file, -27/+27)

    Michal Soltys says:

    ====================
    HFSC patches, part 1

    This is a revised version of part of the patches I submitted a
    really, really long time ago (back then I asked Patrick to ignore
    them, as I found some issues shortly after submitting).

    Anyway, this is the first set, with very simple fixes/changes, though
    some of them relatively subtle (I tried to write very exhaustive
    commit messages explaining what and why for those). The patches are
    against the net-next tree.

    The second set will be heavier - or rather with more complex
    explanations. Among those I have:

    - a fix to a subtle issue introduced in
      http://permalink.gmane.org/gmane.linux.kernel.commits.2-4/8281
      along with simplifying related stuff
    - update times to 96 bits (which allows to "just" use 32 bit shifts
      and improves curve definition accuracy at more extreme low/high
      speeds)
    - add curve "merging" instead of just selecting in the convex case
      (computations mirror those from the concave intersection)

    But these are eventually for later.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

  * net/sched/sch_hfsc.c: anchor virtual curve at proper vt in hfsc_change_fsc() (Michal Soltys, 2016-07-01, 1 file, -1/+1)

      cl->cl_vt alone is relative only to the current backlog period,
      while the curve operates on cumulative virtual time. This patch adds
      the missing cl->cl_vtoff.

      Signed-off-by: Michal Soltys <soltys@ziu.info>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * net/sched/sch_hfsc.c: go passive after vt update (Michal Soltys, 2016-07-01, 1 file, -16/+15)

      When a class is going passive, it should update its cl_vt first, to
      be consistent with the last dequeue operation. Otherwise its cl_vt
      will be one packet behind and the parent's cvtmax might not be
      updated either.

      One possible side effect is if some class goes passive and
      subsequently goes active /without/ its parent going passive - with
      cl_vt lagging one packet behind - the comparison made in init_vf()
      will be affected (same period).

      Signed-off-by: Michal Soltys <soltys@ziu.info>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * net/sched/sch_hfsc.c: remove leftover dlist and droplist (Michal Soltys, 2016-07-01, 1 file, -8/+0)

      This is an update to: commit a09ceb0e08140a ("sched: remove
      qdisc->drop")

      That commit removed qdisc->drop, but left behind dlist and droplist,
      which no longer serve any meaningful purpose.

      Signed-off-by: Michal Soltys <soltys@ziu.info>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * net/sched/sch_hfsc.c: add unlikely() in qdisc_peek_len() (Michal Soltys, 2016-07-01, 1 file, -1/+1)

      The condition can only succeed on wrong configurations.

      Signed-off-by: Michal Soltys <soltys@ziu.info>
      Signed-off-by: David S. Miller <davem@davemloft.net>

  * net/sched/sch_hfsc.c: handle corner cases where head may change invalidating calculated deadline (Michal Soltys, 2016-07-01, 1 file, -1/+10)

      Realtime scheduling implemented in HFSC uses the head of the queue
      to make the decision about which packet to schedule next. But in
      case of any head drop, the deadline calculated for the previous head
      is not necessarily correct for the next head (unless both packets
      have the same length).

      Thanks to the peek() function used during dequeue - which internally
      is a dequeue operation - hfsc is almost safe from this issue, as
      peek() dequeues and isolates the head, storing it temporarily until
      the real dequeue happens. But there is one exception: if after the
      class activation a drop happens before the first dequeue operation,
      there's never a chance to do the peek().

      Adding a peek() call in enqueue - if this is the first packet in a
      new backlog period AND the scheduler has a realtime curve defined -
      fixes that one corner case. The 1st hfsc_dequeue() will use that
      peeked packet, similarly to how every subsequent hfsc_dequeue() call
      uses the packet peeked by the previous call.

      Signed-off-by: Michal Soltys <soltys@ziu.info>
      Signed-off-by: David S. Miller <davem@davemloft.net>

* tcp: md5: use kmalloc() backed scratch areas (Eric Dumazet, 2016-07-01, 4 files, -32/+41)

    Some arches have virtually mapped kernel stacks, or soon will.

    tcp_md5_hash_header() uses an automatic variable to copy the tcp
    header before mangling th->check and calling the crypto function,
    which might be problematic on such arches.

    David says that using percpu storage is also problematic on non-SMP
    builds.

    Just use kmalloc() to allocate scratch areas.

    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>

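    A sketch of the allocation side (the scratch field placement and
    sizing are assumptions; the point is heap-backed per-CPU scratch
    instead of an on-stack copy):

      int cpu;

      for_each_possible_cpu(cpu) {
          void *scratch = kmalloc_node(sizeof(struct tcphdr) +
                                       sizeof(union tcp_md5sum_block),
                                       GFP_KERNEL, cpu_to_node(cpu));

          if (!scratch)
              return; /* pool left marked unavailable */
          per_cpu(tcp_md5sig_pool, cpu).scratch = scratch;
      }
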
* Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue (David S. Miller, 2016-06-30, 11 files, -67/+148)

    Jeff Kirsher says:

    ====================
    Intel Wired LAN Driver Updates 2016-06-29

    This series contains updates and fixes to e1000e, igb, ixgbe and
    fm10k. A true smorgasbord of changes.

    Jake cleans up some obscurity by not using the BIT() macro on bitshift
    operations, and also fixes the calculated index when looping through
    the indir array. He fixes igb's overflow-check workqueue item, which
    was causing a surprise remove event. The ptp_flags variable is added
    to simplify the work of writing several complex MAC type checks in
    the PTP code while fixing the workqueue.

    Alex Duyck fixes receive buffer alignment, which should be 512 bytes
    rather than L1 cache aligned.

    Denys Vlasenko prevents a division by zero which was reported under
    VMWare for e1000e.

    Amritha fixes an issue in ixgbe where filters in a child hash table
    must be cleared from the hardware before deleting the filter links.

    Bhaktipriya Shridhar simply replaces the deprecated create_workqueue()
    with alloc_workqueue() for fm10k.

    Tony corrects ixgbe ethtool reporting to show that x550 supports
    hardware timestamping of all packets.

    Emil fixes an issue where MAC-VLANs on the VF fail to pass traffic
    due to spoofed packets.

    Andrew Lunn increases performance on some systems where syncing a
    buffer for DMA is expensive: rather than syncing the whole 2K receive
    buffer, only synchronize the length of the frame.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

  * igb: Only DMA sync frame length (Andrew Lunn, 2016-06-29, 1 file, -3/+4)

      On some platforms, syncing a buffer for DMA is expensive. Rather
      than sync the whole 2K receive buffer, only synchronise the length
      of the frame, which will typically be the MTU, or a much smaller
      TCP ACK.

      For an IMX6Q, this gives around 6% increased TCP receive
      performance, which is bound by cache operations, and reduces CPU
      load for TCP transmit.

      Signed-off-by: Andrew Lunn <andrew@lunn.ch>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

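      The corresponding receive-path change is roughly as follows
      (descriptor field names follow the usual igb layout; treat this as
      a sketch):

        unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);

        /* sync only the bytes the NIC actually wrote, not IGB_RX_BUFSZ */
        dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
                                      rx_buffer->page_offset, size,
                                      DMA_FROM_DEVICE);
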
  * ixgbe: fix spoofed packets with macvlans (Emil Tantilov, 2016-06-29, 1 file, -0/+1)

      When setting spoofing, both VLAN and MAC need to be set together.
      This change resolves an issue where MAC-VLANs on the VF fail to pass
      traffic due to spoofed packets.

      Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * ixgbe: Correct reporting of timestamping for x550 (Tony Nguyen, 2016-06-29, 1 file, -2/+6)

      Update ixgbe_ethtool_get_ts_info() to show that x550 supports
      hardware timestamping of all packets.

      Reported-by: Guy Harris <guy@alum.mit.edu>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * fm10k: Remove create_workqueue (Bhaktipriya Shridhar, 2016-06-29, 1 file, -2/+1)

      alloc_workqueue replaces the deprecated create_workqueue().

      A dedicated workqueue has been used since the workitem (viz
      fm10k_service_task, which manages and runs other subtasks) is
      involved in normal device operation and requires forward progress
      under memory pressure.

      create_workqueue has been replaced with alloc_workqueue with
      max_active as 0 since there is no need for throttling the number of
      active work items.

      Since network devices may be used in the memory reclaim path,
      WQ_MEM_RECLAIM has been set to guarantee forward progress.

      flush_workqueue is unnecessary since destroy_workqueue() itself
      calls drain_workqueue(), which flushes repeatedly till the workqueue
      becomes empty. Hence the call to flush_workqueue() has been dropped.

      Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

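      The conversion itself is approximately a one-liner (the queue name
      is assumed from the driver):

        /* before: fm10k_workqueue = create_workqueue("fm10k"); */
        fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, "fm10k");
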
  * igb: call igb_ptp_suspend during suspend/resume cycle (Jacob Keller, 2016-06-29, 1 file, -0/+2)

      Properly stop the extra workqueue items and ensure that we resume
      cleanly. This is better than using igb_ptp_init and igb_ptp_stop,
      since these functions destroy the PHC device, which will cause other
      problems if we do so. Since igb_ptp_reset now re-schedules the
      work-queue item, we don't need an equivalent igb_ptp_resume in the
      resume workflow.

      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * igb: implement igb_ptp_suspend (Jacob Keller, 2016-06-29, 2 files, -5/+18)

      Make igb_ptp_stop take advantage of this new function to reduce
      code duplication.

      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * igb: re-use igb_ptp_reset in igb_ptp_init (Jacob Keller, 2016-06-29, 2 files, -33/+18)

      Modify igb_ptp_init to take advantage of igb_ptp_reset, and remove
      duplicated work that was occurring in both igb_ptp_reset and
      igb_ptp_init. In total, resetting the TSAUXC register and resetting
      the system time both happen in igb_ptp_reset already. igb_ptp_reset
      now also takes care of starting the delayed work item for overflow
      checks, as well.

      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * igb: introduce IGB_PTP_OVERFLOW_CHECK flag (Jacob Keller, 2016-06-29, 2 files, -13/+9)

      Don't continue to use complex MAC type checks for handling various
      cases where we have overflow check code. Make this code more obvious
      by introducing a flag which is enabled for hardware that needs these
      checks.

      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

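      The shape of the change (the MAC-type list, work item and interval
      names are illustrative):

        /* before: repeat the MAC-type list at every overflow-check site */
        if (hw->mac.type == e1000_82576 || hw->mac.type == e1000_82580 ||
            hw->mac.type == e1000_i350 || hw->mac.type == e1000_i354)
            schedule_delayed_work(&adapter->ptp_overflow_work, period);

        /* after: one capability flag set once at init time */
        if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
            schedule_delayed_work(&adapter->ptp_overflow_work, period);
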
  * igb: introduce ptp_flags variable and use it to replace IGB_FLAG_PTP (Jacob Keller, 2016-06-29, 2 files, -4/+8)

      Upcoming patches will introduce new PTP specific flags. To avoid
      cluttering the normal flags variable, introduce a PTP specific
      "ptp_flags" variable for this purpose, and move IGB_FLAG_PTP to
      become IGB_PTP_ENABLED.

      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * ixgbe: Error handler for duplicate filter locations in hardware for cls_u32 offloads (Amritha Nambiar, 2016-06-29, 1 file, -16/+25)

      For u32 classifier filters, avoid overwriting an existing filter in
      a hardware location without removing it first, to clean up
      inconsistencies due to duplicate values for filter location.

      Verified with the following filters:

      Create child hash tables:
        handle 1: u32 divisor 1
        handle 2: u32 divisor 1

      Link to the child hash tables from the parent hash table:
        handle 800:0:11 u32 ht 800: link 1: \
          offset at 0 mask 0f00 shift 6 plus 0 eat \
          match ip protocol 6 ff match ip dst 15.0.0.1/32
        handle 800:0:12 u32 ht 800: link 2: \
          offset at 0 mask 0f00 shift 6 plus 0 eat \
          match ip protocol 17 ff match ip dst 16.0.0.1/32

      Add a filter into the first child hash table:
        handle 1:0:3 u32 ht 1: \
          match tcp src 22 ffff action drop

      Add another filter to the same location:
        handle 2:0:3 u32 ht 2: \
          match tcp src 33 ffff action drop

      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * ixgbe: Fix deleting link filters for cls_u32 offloads (Amritha Nambiar, 2016-06-29, 2 files, -7/+72)

      On deleting filters which are links to a child hash table, the
      filters in the child hash table must be cleared from the hardware
      if there is no link between the parent and child hash table.

      Verified with the following filters:

      Create a child hash table:
        handle 1: u32 divisor 1

      Link to the child hash table from the parent hash table:
        handle 800:0:10 u32 ht 800: link 1: \
          offset at 0 mask 0f00 shift 6 plus 0 eat \
          match ip protocol 6 ff match ip dst 15.0.0.1/32

      Add filters into the child hash table:
        handle 1:0:2 u32 ht 1: \
          match tcp src 22 ffff action drop
        handle 1:0:3 u32 ht 1: \
          match tcp src 33 ffff action drop

      Delete the link filter from the parent hash table:
        handle 800:0:10 u32

      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Acked-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

  * e1000e: prevent division by zero if TIMINCA is zero (Denys Vlasenko, 2016-06-29, 1 file, -1/+2)

      Users report that under VMWare, er32(TIMINCA) returns zero. This
      causes division by zero at init time as follows:

        ==>   incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK;
              for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) {
                  /* latch SYSTIMH on read of SYSTIML */
                  systim_next = (cycle_t)er32(SYSTIML);
                  systim_next |= (cycle_t)er32(SYSTIMH) << 32;
                  time_delta = systim_next - systim;
                  temp = time_delta;
        ====>     rem = do_div(temp, incvalue);

      This change makes the kernel survive this, and users report that
      the NIC does work after this change.

      Since on real hardware incvalue is never zero, this should not
      affect real hardware use cases.

      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

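      The guard presumably reduces to something like this (the exact
      debug message and early-return value are assumptions):

        incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK;
        if (!incvalue) {
            /* emulated hardware (e.g. VMWare) can report TIMINCA == 0 */
            e_dbg("TIMINCA is zero, skipping SYSTIM sampling\n");
            return 0;
        }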