Commit message (Author, Date, Files changed, Lines -deleted/+added)
* stop_machine: reimplement using cpu_stop (Tejun Heo, 2010-05-06, 6 files, -173/+42)
Reimplement stop_machine using cpu_stop. As cpu stoppers are guaranteed to be available for all online cpus, stop_machine_create/destroy() are no longer necessary and have been removed. With resource management and synchronization handled by cpu_stop, the new implementation is much simpler: asking cpu_stop to execute the stop_cpu() state machine on all online cpus with cpu hotplug disabled is enough. stop_machine itself no longer needs to manage any global resources, so all per-instance information is rolled into struct stop_machine_data, and the mutex and all static data variables are removed.

The previous implementation created and destroyed RT workqueues as necessary, which made stop_machine() calls highly expensive on very large machines. According to Dimitri Sivanich, avoiding the dynamic creation/destruction makes booting more than twice as fast on very large machines. cpu_stop resources are preallocated for all online cpus and should have the same effect.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
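As a rough illustration of the new shape (a sketch under stated assumptions, not the kernel's exact code: the struct contents, the `_sketch` names, and the collapsed state machine are paraphrased), stop_machine() now boils down to handing a single cpu_stop callback to every online cpu:

```c
#include <linux/stop_machine.h>
#include <linux/cpu.h>

/* Per-invocation state; replaces the old global/static data. */
struct stop_machine_data_sketch {
    int (*fn)(void *);   /* caller's function, run with the machine stopped */
    void *data;
    /* ...state-machine bookkeeping (thread count, current step)... */
};

/* cpu_stop callback run by the stopper thread on every online cpu. */
static int stop_machine_cpu_stop_sketch(void *data)
{
    struct stop_machine_data_sketch *smdata = data;

    /*
     * In the real implementation each stopper steps through a
     * PREPARE -> DISABLE_IRQ -> RUN -> EXIT state machine in lockstep;
     * one designated cpu calls smdata->fn() while the others spin with
     * interrupts disabled.  Grossly simplified here:
     */
    return smdata->fn(smdata->data);
}

static int stop_machine_sketch(int (*fn)(void *), void *data)
{
    struct stop_machine_data_sketch smdata = { .fn = fn, .data = data };
    int ret;

    get_online_cpus();      /* keep cpu hotplug disabled for the duration */
    ret = stop_cpus(cpu_online_mask, stop_machine_cpu_stop_sketch, &smdata);
    put_online_cpus();
    return ret;
}
```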
* cpu_stop: implement stop_cpu[s]() (Tejun Heo, 2010-05-06, 2 files, -9/+402)
Implement a simplistic per-cpu maximum-priority cpu monopolization mechanism. A non-sleeping callback can be scheduled to run on one or multiple cpus with maximum priority, monopolizing those cpus. This is primarily meant to replace and unify the RT workqueue usage in stop_machine and the scheduler's migration_thread, which currently serves multiple purposes.

Four functions are provided: stop_one_cpu(), stop_one_cpu_nowait(), stop_cpus() and try_stop_cpus(). This allows clean sharing of resources among stop_cpu and all the migration thread users. One stopper thread per cpu is created, currently named "stopper/CPU". It will eventually replace the migration thread and take on its name.

* This facility was originally named cpuhog and lived in separate files, but Peter Zijlstra nacked the name, so it was renamed to cpu_stop and moved into stop_machine.c.

* Better reporting of preemption leak as per Peter's suggestion.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
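A minimal usage sketch of stop_one_cpu(); the per-cpu counter and the poke_* names are made up for illustration, only the stop_one_cpu() call itself is the API described above:

```c
#include <linux/stop_machine.h>   /* cpu_stop API after the move into stop_machine.c */
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, poke_count);

/* Non-sleeping callback: runs on the target cpu at maximum priority,
 * monopolizing that cpu for the duration of the call. */
static int poke_cpu(void *arg)
{
    __this_cpu_inc(poke_count);   /* we own this cpu while this runs */
    return 0;
}

static int poke_one(unsigned int cpu)
{
    /* Blocks until poke_cpu() has completed on @cpu and returns its value;
     * stop_one_cpu_nowait() queues the work and returns immediately, while
     * stop_cpus()/try_stop_cpus() take a cpumask instead of a single cpu. */
    return stop_one_cpu(cpu, poke_cpu, NULL);
}
```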
* sched: Fix select_idle_sibling() logic in select_task_rq_fair() (Suresh Siddha, 2010-04-23, 1 file, -42/+40)
Issues with the current select_idle_sibling() logic in select_task_rq_fair(), in the context of a task wake-up:

a) Once we select the idle sibling, we use that domain (spanning the cpu the task is being woken up on and the idle sibling we found) in our wake_affine() decisions. This domain is completely different from the domain we are supposed to use: the one spanning the cpu the task is being woken up on and the cpu where the task previously ran.

b) We do the select_idle_sibling() check only for the cpu the task is being woken up on. If select_task_rq_fair() selects the previously-run cpu for waking the task, doing a select_idle_sibling() check for that cpu would also help, and we don't do this currently.

c) In scenarios where the cpu the task is woken up on is busy but its HT siblings are idle, we select the idle HT sibling for the wake-up instead of a core that the task previously ran on and which is currently completely idle. That is, we are not taking decisions based on wake_affine() but directly selecting an idle sibling, which can cause an imbalance at the SMT/MC level that is only later corrected by the periodic load balancer.

Fix this by first going through the load imbalance calculations using wake_affine(), and only once we have decided between the wake-up cpu and the previously-run cpu, choosing a possible idle sibling to wake the task on.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1270079265.7835.8.camel@sbs-t61.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
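A compressed sketch of the new ordering; the two helper functions below are stand-ins for the kernel's wake_affine() and select_idle_sibling() paths and are declared here only so the sketch is self-contained:

```c
#include <linux/sched.h>

/* Stand-ins, not real kernel functions. */
int wake_affine_prefers_waker_cpu(struct task_struct *p, int this_cpu, int prev_cpu);
int pick_idle_sibling(struct task_struct *p, int target_cpu);

/* Paraphrased ordering after the fix: settle the waker-cpu vs prev-cpu
 * question with the load-imbalance (wake_affine) calculation first, and
 * only then look for an idle SMT/MC sibling of whichever cpu won. */
static int select_wake_cpu_sketch(struct task_struct *p, int this_cpu, int prev_cpu)
{
    int target = wake_affine_prefers_waker_cpu(p, this_cpu, prev_cpu)
                    ? this_cpu : prev_cpu;

    /* The idle-sibling search is now applied to the chosen cpu, which may
     * be prev_cpu as well, instead of only to the waking cpu and before
     * any wake_affine() decision. */
    return pick_idle_sibling(p, target);
}
```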
* sched: Pre-compute cpumask_weight(sched_domain_span(sd)) (Peter Zijlstra, 2010-04-23, 3 files, -7/+9)
Dave reported that his large SPARC machines spend lots of time in hweight64(), try and optimize some of those needless cpumask_weight() invocations (esp. with the large offstack cpumasks these are very expensive indeed).

Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
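A sketch of the caching idea; the struct and the span_weight field name are illustrative assumptions, not necessarily the patch's exact code:

```c
#include <linux/cpumask.h>

/* Sketch only: cache the domain span weight once when the domain is
 * built, instead of recomputing it (an hweight64() walk over a possibly
 * large, off-stack cpumask) in scheduler hot paths. */
struct sched_domain_sketch {
    const struct cpumask *span;
    unsigned int span_weight;   /* filled in at domain-build time */
};

static void init_domain_weight(struct sched_domain_sketch *sd)
{
    sd->span_weight = cpumask_weight(sd->span);   /* computed once */
}

static unsigned int domain_weight(const struct sched_domain_sketch *sd)
{
    return sd->span_weight;     /* hot path: no hweight64() walk */
}
```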
* sched: Cure load average vs NO_HZ woes (Peter Zijlstra, 2010-04-23, 2 files, -15/+68)
Chase reported that, due to us decrementing calc_load_task prematurely (before the next LOAD_FREQ sample), the load average could be skewed by as much as the number of CPUs in the machine.

This patch, based on Chase's patch, cures the problem by keeping the delta of the CPU going into NO_HZ idle separately and folding that in on the next LOAD_FREQ update. This restores the balance and we get strict LOAD_FREQ period samples.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chase Douglas <chase.douglas@canonical.com>
LKML-Reference: <1271934490.1776.343.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
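A simplified sketch of the park-and-fold idea; the variable and function names below are illustrative and may differ from the patch's own identifiers:

```c
#include <linux/atomic.h>

/* Deltas parked by CPUs that went NO_HZ idle (illustrative name). */
static atomic_long_t calc_load_tasks_idle;

/* CPU going NO_HZ idle: park its nr_active delta instead of folding it
 * into the global calc_load count right away. */
static void calc_load_account_idle_sketch(long delta)
{
    if (delta)
        atomic_long_add(delta, &calc_load_tasks_idle);
}

/* Next LOAD_FREQ update: pull all parked idle deltas back in, so the
 * load-average samples stay on a strict LOAD_FREQ period. */
static long calc_load_fold_idle_sketch(void)
{
    long delta = 0;

    if (atomic_long_read(&calc_load_tasks_idle))
        delta = atomic_long_xchg(&calc_load_tasks_idle, 0);

    return delta;   /* the caller adds this to the global task count */
}
```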
* sched: Fix UP update_avg() build warning (Mike Galbraith, 2010-04-15, 1 file, -6/+6)
update_avg() is only used for SMP builds; move it to the nearest SMP block.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1271309399.14779.17.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'linus' into sched/core (Ingo Molnar, 2010-04-15, 4545 files, -3881/+23837)
Merge reason: merge the latest fixes, update to -rc4.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * Merge branch 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6 (Linus Torvalds, 2010-04-13, 1 file, -1/+1)

* 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
  PM / Hibernate: user.c, fix SNAPSHOT_SET_SWAP_AREA handling
| | * PM / Hibernate: user.c, fix SNAPSHOT_SET_SWAP_AREA handling (Jiri Slaby, 2010-04-10, 1 file, -1/+1)
When CONFIG_DEBUG_BLOCK_EXT_DEVT is set, we decode the device number improperly with old_decode_dev(), and this results in an error while hibernating with s2disk. All users already pass the new device number, so switch to new_decode_dev().

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Reported-and-tested-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: "Rafael J. Wysocki" <rjw@sisk.pl>
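A paraphrased sketch of the point of the change (the wrapper function is hypothetical; only the helper switch reflects the fix):

```c
#include <linux/kdev_t.h>
#include <linux/types.h>

/* Paraphrased SNAPSHOT_SET_SWAP_AREA handling: userspace already passes
 * a new-style 32-bit device number, so decode it with new_decode_dev().
 * old_decode_dev() only understands the old 16-bit major:minor layout and
 * goes wrong once CONFIG_DEBUG_BLOCK_EXT_DEVT moves devices into the
 * extended number range. */
static dev_t decode_swap_area_dev(u32 user_dev)
{
    return new_decode_dev(user_dev);   /* was: old_decode_dev(user_dev) */
}
```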
| * | Merge branch 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6 (Linus Torvalds, 2010-04-13, 6 files, -23/+39)
* 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6:
  NFSv4: fix delegated locking
  NFS: Ensure that the WRITE and COMMIT RPC calls are always uninterruptible
  NFS: Fix a race with the new commit code
  NFS: Ensure that writeback_single_inode() calls write_inode() when syncing
  NFS: Fix the mode calculation in nfs_find_open_context
  NFSv4: Fall back to ordinary lookup if nfs4_atomic_open() returns EISDIR
| | * | NFSv4: fix delegated locking (Trond Myklebust, 2010-04-12, 3 files, -2/+6)
Arnaud Giersch reports that NFSv4 locking is broken when we hold a delegation, since commit 8e469ebd6dc32cbaf620e134d79f740bf0ebab79 (NFSv4: Don't allow posix locking against servers that don't support it).

According to Arnaud, the lock succeeds the first time he opens the file (since we cannot do a delegated open) but then fails after we start using delegated opens.

The following patch fixes it by ensuring that locking behaviour is governed by a per-filesystem capability flag that is initially set, but gets cleared if the server ever returns an OPEN without the NFS4_OPEN_RESULT_LOCKTYPE_POSIX flag being set.

Reported-by: Arnaud Giersch <arnaud.giersch@iut-bm.univ-fcomte.fr>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
| | * | NFS: Ensure that the WRITE and COMMIT RPC calls are always uninterruptible (Trond Myklebust, 2010-04-09, 1 file, -7/+16)
We always want to ensure that WRITE and COMMIT complete, whether or not the user presses ^C. Do this by making the call asynchronous, and allowing the user to do an interruptible wait for rpc_task completion.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| | * | NFS: Fix a race with the new commit code (Trond Myklebust, 2010-04-09, 1 file, -7/+10)
This patch fixes a race which occurs due to the fact that we release the PG_writeback flag while still holding the nfs_page locked.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| | * | NFS: Ensure that writeback_single_inode() calls write_inode() when syncing (Trond Myklebust, 2010-04-09, 1 file, -2/+2)
Since writeback_single_inode() checks the inode->i_state flags _before_ it flushes out the data, we need to ensure that the I_DIRTY_DATASYNC flag is already set. Otherwise we risk not seeing a call to write_inode(), which again means that we break fsync() et al...

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| | * | NFS: Fix the mode calculation in nfs_find_open_context (Trond Myklebust, 2010-04-09, 1 file, -4/+4)
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| | * | NFSv4: Fall back to ordinary lookup if nfs4_atomic_open() returns EISDIR (Trond Myklebust, 2010-04-09, 1 file, -1/+1)
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
| * | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds, 2010-04-13, 17 files, -135/+206)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Add some more commentary to __raw_local_irq_save()
  sparc64: Fix memory leak in pci_register_iommu_region().
  sparc64: Add kmemleak annotation to sun4v_build_virq()
  sparc64: Support kmemleak.
  sparc64: Add function graph tracer support.
  sparc64: Give a stack frame to the ftrace call sites.
  sparc64: Use a separate counter for timer interrupts and NMI checks, like x86.
  sparc64: Remove profiling from some low-level bits.
  sparc64: Kill unnecessary static on local var in ftrace_call_replace().
  sparc64: Kill CONFIG_STACK_DEBUG code.
  sparc64: Add HAVE_FUNCTION_TRACE_MCOUNT_TEST and tidy up.
  sparc64: Adjust __raw_local_irq_save() to cooperate in NMIs.
  sparc64: Use kstack_valid() in die_if_kernel().
| | * | sparc64: Add some more commentary to __raw_local_irq_save() (David S. Miller, 2010-04-13, 1 file, -0/+7)
Suggested by Peter Zijlstra

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | Merge branch 'master' of /home/davem/src/GIT/linux-2.6/ (David S. Miller, 2010-04-13, 4482 files, -3286/+22501)
Conflicts:
	lib/Kconfig.debug
| | * | | sparc64: Fix memory leak in pci_register_iommu_region(). (David S. Miller, 2010-04-12, 1 file, -3/+8)
Found by kmemleak.

If request_resource() fails, we leak the struct resource we allocated to represent the IOMMU mapping area.

This actually happens on sun4v machines because the IOMEM area is only reported sans the IOMMU region, unlike all previous systems. I'll need to fix that at some point, but for now fix the leak.

Signed-off-by: David S. Miller <davem@davemloft.net>
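The shape of the fix, sketched; the function name, resource name and error codes below are illustrative, not the driver's exact ones. The point is simply to free the resource on the request_resource() error path instead of dropping it:

```c
#include <linux/ioport.h>
#include <linux/slab.h>

/* Sketch of the error path: if the IOMMU region cannot be claimed,
 * free the struct resource instead of leaking it. */
static int register_iommu_region_sketch(struct resource *root,
                                        resource_size_t start,
                                        resource_size_t size)
{
    struct resource *rp = kzalloc(sizeof(*rp), GFP_KERNEL);

    if (!rp)
        return -ENOMEM;

    rp->name  = "IOMMU";
    rp->start = start;
    rp->end   = start + size - 1;
    rp->flags = IORESOURCE_BUSY;

    if (request_resource(root, rp)) {
        kfree(rp);              /* previously leaked on failure */
        return -EBUSY;
    }
    return 0;
}
```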
| | * | | sparc64: Add kmemleak annotation to sun4v_build_virq() (David S. Miller, 2010-04-12, 1 file, -0/+8)
The only reference we store to this memory is in the form of a physical address, so kmemleak can't see it. Add a kmemleak_not_leak() annotation.

It's probably useful to be able to look at a dump of these things via debugfs or similar, and thus we could at some point store them in some kind of table and therefore get rid of this annotation.

Signed-off-by: David S. Miller <davem@davemloft.net>
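A generic sketch of the pattern (the allocation and the physical-address handoff are illustrative, not sun4v_build_virq() itself): tell kmemleak not to report an object whose only stored reference is a physical address.

```c
#include <linux/kmemleak.h>
#include <linux/slab.h>
#include <linux/io.h>

/* kmemleak scans memory for pointer values, so it cannot see a reference
 * that is kept only as a physical address and would report the allocation
 * as leaked without the annotation. */
static phys_addr_t alloc_queue_pa_sketch(size_t size)
{
    void *q = kzalloc(size, GFP_KERNEL);

    if (!q)
        return 0;

    kmemleak_not_leak(q);     /* only a physical address escapes below */

    return virt_to_phys(q);   /* handed to hardware/hypervisor as a PA */
}
```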
| | * | | sparc64: Support kmemleak. (David S. Miller, 2010-04-12, 2 files, -1/+5)
Only missing thing was an _sdata marker in vmlinux.lds.S

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Add function graph tracer support. (David S. Miller, 2010-04-12, 10 files, -15/+132)
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Give a stack frame to the ftrace call sites. (David S. Miller, 2010-04-12, 1 file, -15/+16)
It's the only way we'll be able to implement the function graph tracer properly.

A positive is that we no longer have to worry about the linker over-optimizing the tail call, since we don't use a tail call any more.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Use a separate counter for timer interrupts and NMI checks, like x86. (David S. Miller, 2010-04-12, 3 files, -3/+3)
This keeps us from having to use kstat_irqs_cpu() from the NMI handler, which is a profiled function. Instead we use a currently empty slot in the cpu_data.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Remove profiling from some low-level bits. (David S. Miller, 2010-04-12, 1 file, -1/+8)
These include the timer implementation, perf events support, and the performance counter register (pcr) programming layer.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Kill unnecessary static on local var in ftrace_call_replace(). (David S. Miller, 2010-04-12, 1 file, -1/+1)
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Kill CONFIG_STACK_DEBUG code. (David S. Miller, 2010-04-12, 2 files, -78/+1)
The generic stack tracer does this job just as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Add HAVE_FUNCTION_TRACE_MCOUNT_TEST and tidy up. (David S. Miller, 2010-04-12, 2 files, -7/+16)
Check function_trace_stop at ftrace_caller.

Toss mcount_call and the dummy call of ftrace_stub; they are unnecessary.

Document problems we'll have if the final kernel image link ever turns on relaxation.

Properly size 'ftrace_call' so it looks right when inspecting instructions under gdb et al.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Adjust __raw_local_irq_save() to cooperate in NMIs. (David S. Miller, 2010-04-12, 1 file, -2/+12)
If we are in an NMI then doing a plain raw_local_irq_disable() will write PIL_NORMAL_MAX into %pil, which is lower than PIL_NMI, and thus we'll re-enable NMIs and recurse.

Doing a simple:

	%pil = %pil | PIL_NORMAL_MAX

does what we want: if we're already at PIL_NMI (15) we leave it at that setting, else we set it to PIL_NORMAL_MAX (14).

This should get the function tracer working on sparc64.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | sparc64: Use kstack_valid() in die_if_kernel(). (David S. Miller, 2010-04-12, 1 file, -23/+3)
This gets rid of a local function (is_kernel_stack()) which tries to do the same thing, yet poorly in that it doesn't handle IRQ stacks properly.

Signed-off-by: David S. Miller <davem@davemloft.net>
| * | | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6 (Linus Torvalds, 2010-04-13, 30 files, -138/+348)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (25 commits)
  smc91c92_cs: define multicast_table as unsigned char
  can: avoids a false warning
  e1000e: stop cleaning when we reach tx_ring->next_to_use
  igb: restrict WoL for 82576 ET2 Quad Port Server Adapter
  virtio_net: missing sg_init_table
  Revert "tcp: Set CHECKSUM_UNNECESSARY in tcp_init_nondata_skb"
  iwlwifi: need check for valid qos packet before free
  tcp: Set CHECKSUM_UNNECESSARY in tcp_init_nondata_skb
  udp: fix for unicast RX path optimization
  myri10ge: fix rx_pause in myri10ge_set_pauseparam
  net: corrected documentation for hardware time stamping
  stmmac: use resource_size()
  x.25 attempts to negotiate invalid throughput
  x25: Patch to fix bug 15678 - x25 accesses fields beyond end of packet.
  bridge: Fix IGMP3 report parsing
  cnic: Fix crash during bnx2x MTU change.
  qlcnic: fix set mac addr
  r6040: fix r6040_multicast_list
  vhost-net: fix vq_memory_access_ok error checking
  ath9k: fix double calls to ath_radio_enable
  ...
| | * | | smc91c92_cs: define multicast_table as unsigned char (Ken Kawasaki, 2010-04-13, 1 file, -7/+6)
smc91c92_cs:
  * define multicast_table as unsigned char
  * remove unnecessary "#ifndef final_version"

Signed-off-by: Ken Kawasaki <ken_kawasaki@spring.nifty.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | can: avoids a false warning (Eric Dumazet, 2010-04-13, 1 file, -1/+1)
At this point optlen == sizeof(sfilter), but some compilers are dumb.

Reported-by: Németh Márton <nm127@freemail.h
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Oliver Hartkopp <oliver@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | e1000e: stop cleaning when we reach tx_ring->next_to_use (Terry Loftin, 2010-04-13, 1 file, -0/+2)
Tx ring buffers after tx_ring->next_to_use are volatile and could change, possibly causing a crash. Stop cleaning when we hit tx_ring->next_to_use.

Signed-off-by: Terry Loftin <terry.loftin@hp.com>
Acked-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
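A schematic version of a tx-clean loop with the added bound; the structure fields and the two stand-in helpers are generic ring-driver names, not e1000e's actual code:

```c
#include <linux/types.h>

/* Descriptors at and beyond next_to_use still belong to the transmit
 * path and may change under us, so cleanup must stop there. */
struct tx_ring_sketch {
    unsigned int next_to_clean;   /* first descriptor owned by cleanup */
    unsigned int next_to_use;     /* first descriptor owned by xmit */
    unsigned int count;           /* number of descriptors in the ring */
};

bool tx_desc_done(struct tx_ring_sketch *tx, unsigned int i);       /* stand-in */
void tx_release_buffer(struct tx_ring_sketch *tx, unsigned int i);  /* stand-in */

static void clean_tx_ring_sketch(struct tx_ring_sketch *tx)
{
    unsigned int i = tx->next_to_clean;

    /* The "i != tx->next_to_use" bound is the essence of the fix. */
    while (i != tx->next_to_use && tx_desc_done(tx, i)) {
        tx_release_buffer(tx, i);   /* unmap DMA, free the skb */
        i = (i + 1) % tx->count;    /* advance around the ring */
    }
    tx->next_to_clean = i;
}
```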
| | * | | igb: restrict WoL for 82576 ET2 Quad Port Server Adapter (Stefan Assmann, 2010-04-13, 2 files, -0/+2)
Restrict Wake-on-LAN to first port on 82576 ET2 quad port NICs, as it is only supported there.

Signed-off-by: Stefan Assmann <sassmann@redhat.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | virtio_net: missing sg_init_table (Shirley Ma, 2010-04-12, 1 file, -0/+2)
Add the missing sg_init_table() for sg_set_buf() in virtio_net; the problem was introduced by the defer skb patch.

Reported-by: Thomas Müller <thomas@mathtm.de>
Tested-by: Thomas Müller <thomas@mathtm.de>
Signed-off-by: Shirley Ma <xma@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
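A generic reminder of the pattern (not virtio_net's actual receive path): a scatterlist must be initialized with sg_init_table() before its entries are filled with sg_set_buf().

```c
#include <linux/scatterlist.h>

/* sg_init_table() clears the entries and sets the end marker; skipping
 * it leaves stale data in the scatterlist, which sg_set_buf() alone
 * does not fix. */
static void fill_two_buf_sg(struct scatterlist *sg,
                            void *hdr, unsigned int hdr_len,
                            void *data, unsigned int data_len)
{
    sg_init_table(sg, 2);            /* the call that was missing */
    sg_set_buf(&sg[0], hdr, hdr_len);
    sg_set_buf(&sg[1], data, data_len);
}
```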
| | * | | Merge branch 'master' of /home/davem/src/GIT/linux-2.6/ (David S. Miller, 2010-04-11, 4636 files, -6193/+13245)
| | * | | Revert "tcp: Set CHECKSUM_UNNECESSARY in tcp_init_nondata_skb"David S. Miller2010-04-111-1/+0
This reverts commit 2626419ad5be1a054d350786b684b41d23de1538.

It causes regressions for people with IGB cards. Connection requests don't complete, etc. The true cause of the issue is still not known, but we should sort this out in net-next-2.6, not net-2.6.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-2.6 (David S. Miller, 2010-04-09, 1 file, -4/+9)
| | | * | | iwlwifi: need check for valid qos packet before free (Wey-Yi Guy, 2010-04-08, 1 file, -4/+9)
For 4965, we need to check that it is a valid QoS frame before freeing it; only a valid QoS frame has the tid used to free the packets.

Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| | * | | | tcp: Set CHECKSUM_UNNECESSARY in tcp_init_nondata_skb (David S. Miller, 2010-04-08, 1 file, -0/+1)
Back in commit 04a0551c87363f100b04d28d7a15a632b70e18e7 ("loopback: Drop obsolete ip_summed setting") we stopped setting CHECKSUM_UNNECESSARY in the loopback xmit.

This is because such a setting was a lie, since it implies that the checksum field of the packet is properly filled in. Instead, what normally happens is that CHECKSUM_PARTIAL is set and skb->csum is calculated as needed.

But this was only happening for TCP data packets (via the skb->ip_summed assignment done in tcp_sendmsg()). It doesn't happen for non-data packets like ACKs etc.

Fix this by setting skb->ip_summed in the common non-data packet constructor, which already sets skb->csum to zero.

But this reminds us that we still have things like ip_output.c's ip_dev_loopback_xmit(), which sets skb->ip_summed to CHECKSUM_UNNECESSARY, a value which Herbert's patch teaches us is not valid. So we'll have to address that at some point too.

Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | udp: fix for unicast RX path optimization (Jorge Boncompte [DTI2], 2010-04-08, 2 files, -4/+4)
Commits 5051ebd275de672b807c28d93002c2fb0514a3c9 and 5051ebd275de672b807c28d93002c2fb0514a3c9 ("ipv[46]: udp: optimize unicast RX path") broke some programs.

After upgrading an L2TP server to 2.6.33 it started to fail, tunnels going up and down, after the 10th tunnel came up. My modified rp-l2tp uses a global unconnected socket bound to (INADDR_ANY, 1701) and one connected socket per tunnel after parameter negotiation. After ten sockets were open, and due to mixed parameters to udp[46]_lib_lookup2(), the kernel started to drop packets.

Signed-off-by: Jorge Boncompte [DTI2] <jorge@dti2.net>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | myri10ge: fix rx_pause in myri10ge_set_pauseparam (Brice Goglin, 2010-04-07, 1 file, -1/+1)
Fix rx_pause management in myri10ge_set_pauseparam().

Signed-off-by: Brice Goglin <brice@myri.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | net: corrected documentation for hardware time stamping (Patrick Loschmidt, 2010-04-07, 1 file, -30/+46)
The current documentation for hardware time stamping does not correctly specify the available kernel functions since the implementation was changed later on.

Signed-off-by: Patrick Loschmidt <Patrick.Loschmidt@oeaw.ac.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | stmmac: use resource_size() (Dan Carpenter, 2010-04-07, 1 file, -5/+5)
The resource size should be calculated as end - start + 1 because we start counting at zero. I changed the code to use resource_size() to do the calculation.

Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
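For reference, the helper just encapsulates the inclusive-range arithmetic; a minimal illustration (the probe snippet is generic, not stmmac's code):

```c
#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/io.h>

/* resource_size() (from <linux/ioport.h>) is essentially
 * res->end - res->start + 1: resource ranges are inclusive on both
 * ends, and the "+ 1" is what open-coded sizes tend to forget. */
static void __iomem *map_device_regs(struct platform_device *pdev)
{
    struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);

    if (!res)
        return NULL;

    return ioremap(res->start, resource_size(res));
}
```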
| | * | | | x.25 attempts to negotiate invalid throughput (John Hughes, 2010-04-07, 2 files, -7/+28)
The current X.25 code has some bugs in throughput negotiation:

1. It does negotiation in all cases; usually there is no need.

2. It incorrectly attempts to negotiate the throughput class in one direction only. There are separate throughput classes for input and output, and if either is negotiated both must be negotiated.

This is bug https://bugzilla.kernel.org/show_bug.cgi?id=15681

This bug was first reported by Daniel Ferenci to the linux-x25 mailing list on 6/8/2004, but is still present.

The current (2.6.34) x.25 code doesn't seem to know that the X.25 throughput facility includes two values, one for the required throughput outbound and one for inbound. This causes it to attempt to negotiate throughput 0x0A, which is throughput 9600 in one direction and the illegal value "0" in the other. Because of this, some X.25 devices (e.g. Cisco 1600) refuse to connect to Linux X.25.

The following patch fixes this behaviour. Unless the user specifies a required throughput it does not attempt to negotiate. If the user does not specify a throughput it accepts the suggestion of the remote X.25 system. If the user requests a throughput then it validates both the input and output throughputs and correctly negotiates them with the remote end.

Signed-off-by: John Hughes <john@calva.com>
Tested-by: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
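A sketch of validating both directions before negotiating; the accepted class range 0x03..0x0D below is an assumption for illustration, and the patch's exact bounds may differ:

```c
/* The X.25 throughput facility packs two 4-bit throughput classes into
 * one byte, one per direction, so both nibbles must be checked. */
static int x25_throughput_valid_sketch(unsigned char throughput)
{
    unsigned char class_a = throughput >> 4;    /* one direction */
    unsigned char class_b = throughput & 0x0f;  /* the other direction */

    /* Range 0x03..0x0D is assumed here, not taken from the patch. */
    if (class_a < 0x03 || class_a > 0x0D)
        return 0;
    if (class_b < 0x03 || class_b > 0x0D)
        return 0;
    return 1;   /* both directions carry a sensible class */
}
```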
| | * | | | x25: Patch to fix bug 15678 - x25 accesses fields beyond end of packet. (John Hughes, 2010-04-07, 4 files, -6/+72)
Here is a patch to stop X.25 examining fields beyond the end of the packet.

For example, when a simple CALL ACCEPTED was received:

	10 10 0f

x25_parse_facilities was attempting to decode the FACILITIES field, but this packet contains no facilities field.

Signed-off-by: John Hughes <john@calva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | bridge: Fix IGMP3 report parsing (Herbert Xu, 2010-04-07, 1 file, -1/+1)
The IGMP3 report parsing is looking at the wrong address for group records. This patch fixes it.

Reported-by: Banyeer <banyeer@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
| | * | | | cnic: Fix crash during bnx2x MTU change. (Michael Chan, 2010-04-07, 1 file, -5/+5)
The cnic_service_bnx2x() irq handler can be called during a chip reset triggered by an MTU change. We need to check that the cnic device state is up before handling the irq.

Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>