path: root/kernel
Commit message (Author, Date, Files, Lines changed)
* Add new 'cond_resched_bkl()' helper function (Linus Torvalds, 2008-05-11, 1 file, -2/+0)

    It acts exactly like a regular 'cond_resched()', but will not get optimized away when CONFIG_PREEMPT is set.

    Normal kernel code is already preemptible in the presence of CONFIG_PREEMPT, so cond_resched() is optimized away (see commit 02b67cc3ba36bdba351d6c3a00593f4ec550d9d3 "sched: do not do cond_resched() when CONFIG_PREEMPT"). But when you want to conditionally reschedule while holding a lock, you need to use "cond_resched_lock(lock)", and the new function is the BKL equivalent of that.

    Also make fs/locks.c use it.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
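    A minimal usage sketch of the new helper (not taken from the patch; process_one_item() and the loop are hypothetical stand-ins, and only the helper names come from the commit message):

        #include <linux/smp_lock.h>   /* lock_kernel()/unlock_kernel() */
        #include <linux/sched.h>      /* cond_resched(), cond_resched_bkl() */

        static void process_one_item(int i) { /* stand-in for real work */ }

        static void process_items_under_bkl(int nr_items)
        {
                int i;

                lock_kernel();
                for (i = 0; i < nr_items; i++) {
                        process_one_item(i);
                        /*
                         * With CONFIG_PREEMPT, plain cond_resched() compiles
                         * away, so a long loop under the BKL could hog the
                         * CPU.  cond_resched_bkl() still yields here, the way
                         * cond_resched_lock(lock) does for a regular spinlock.
                         */
                        cond_resched_bkl();
                }
                unlock_kernel();
        }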
* BKL: revert back to the old spinlock implementation (Linus Torvalds, 2008-05-10, 1 file, -23/+4)

    The generic semaphore rewrite had a huge performance regression on AIM7 (and potentially other BKL-heavy benchmarks) because the generic semaphores had been rewritten to be simple to understand and fair. The latter, in particular, turns a semaphore-based BKL implementation into a mess of scheduling.

    The attempt to fix the performance regression failed miserably (see the previous commit 00b41ec2611dc98f87f30753ee00a53db648d662 'Revert "semaphore: fix"'), and so for now the simple and sane approach is to instead just go back to the old spinlock-based BKL implementation that never had any issues like this.

    This patch also has the advantage of being reported to fix the regression completely according to Yanmin Zhang, unlike the semaphore hack which still left a regression of a couple of percentage points.

    As a spinlock, the BKL obviously has the potential to be a latency issue, but it's not really any different from any other spinlock in that respect.

    We do want to get rid of the BKL asap, but that has been the plan for several years. These days, the biggest users are in the tty layer (open/release in particular) and Alan holds out some hope:

        "tty release is probably a few months away from getting cured - I'm afraid it will almost certainly be the very last user of the BKL in tty to get fixed as it depends on everything else being sanely locked."

    so while we're not there yet, we do have a plan of action.

    Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Matthew Wilcox <matthew@wil.cx>
    Cc: Alexander Viro <viro@ftp.linux.org.uk>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Revert "semaphore: fix" (Linus Torvalds, 2008-05-10, 1 file, -30/+34)

    This reverts commit bf726eab3711cf192405d21688a4b21e07b6188a, as it has been reported to cause a regression with processes stuck in __down(), apparently because of a missing wakeup.

    Quoth Sven Wegener:

        "I'm currently investigating a regression that has showed up with my last git pull yesterday. Bisecting the commits showed bf726e "semaphore: fix" to be the culprit, reverting it fixed the issue.

        Symptoms: During heavy filesystem usage (e.g. a kernel compile) I get several compiler processes in uninterruptible sleep, blocking all i/o on the filesystem. System is an Intel Core 2 Quad running a 64bit kernel and userspace. Filesystem is xfs on top of lvm. See below for the output of sysrq-w."

    See http://lkml.org/lkml/2008/5/10/45 for the full report.

    In the meantime, we can just fix the BKL performance regression by reverting back to the good old BKL spinlock implementation instead, since any sleeping lock will generally perform badly, especially if it tries to be fair.

    Reported-by: Sven Wegener <sven.wegener@stealer.net>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* module: don't ignore vermagic string if module doesn't have modversions (Rusty Russell, 2008-05-09, 1 file, -6/+10)

    Linus found a logic bug: we ignore the version number in a module's vermagic string if we have CONFIG_MODVERSIONS set, but modversions also lets through a module with no __versions section for modprobe --force (with tainting, but still).

    We should only ignore the start of the vermagic string if the module actually *has* crcs to check. Rather than (say) having an entertaining hissy fit and creating a config option to work around the buggy code.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* module: be more picky about allowing missing module versions (Rusty Russell, 2008-05-09, 1 file, -2/+7)

    We allow missing __versions sections, because modprobe --force strips it. It makes less sense to allow sections where there's no version for a specific symbol the module uses, so disallow that.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-fixes (Linus Torvalds, 2008-05-08, 2 files, -37/+38)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-fixes:
        sched: fix weight calculations
        semaphore: fix
| * sched: fix weight calculations (Mike Galbraith, 2008-05-08, 1 file, -3/+8)

    The conversion between virtual and real time is as follows:

        dvt = rw/w * dt  <=>  dt = w/rw * dvt

    Since we want the fair sleeper granularity to be in real time, we actually need to do:

        dvt = - rw/w * l

    This bug could be related to the regression reported by Yanmin Zhang:

    | Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
    | of regressions with 2.6.26-rc1:
    |
    | 1) 8-core stoakley: 28%;
    | 2) 16-core tigerton: 20%;
    | 3) Itanium Montvale: 50%.

    Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
    Signed-off-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
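    A toy numeric illustration of the conversion above (the weights and the granularity are made-up example values, not taken from the commit):

        #include <stdio.h>

        int main(void)
        {
                double rw = 3072.0;  /* total runqueue load weight */
                double w  = 1024.0;  /* weight of the sleeping task */
                double l  = 4.0;     /* sleeper granularity, ms of real time */

                /* dvt = rw/w * dt: a real-time interval corresponds to a
                 * larger virtual-time interval for a task that owns only
                 * part of the runqueue weight, so the sleeper bonus must
                 * be scaled up, not down. */
                double dvt = -(rw / w) * l;

                printf("place sleeper %.1f virtual ms before min_vruntime\n", -dvt);
                return 0;
        }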
| * semaphore: fix (Ingo Molnar, 2008-05-08, 1 file, -34/+30)

    Yanmin Zhang reported:

    | Comparing with kernel 2.6.25, AIM7 (use tmpfs) has more th
    | regression under 2.6.26-rc1 on my 8-core stoakley, 16-core tigerton,
    | and Itanium Montecito. Bisect located the patch below:
    |
    | 64ac24e738823161693bf791f87adc802cf529ff is first bad commit
    | commit 64ac24e738823161693bf791f87adc802cf529ff
    | Author: Matthew Wilcox <matthew@wil.cx>
    | Date:   Fri Mar 7 21:55:58 2008 -0500
    |
    |     Generic semaphore implementation
    |
    | After I manually reverted the patch against 2.6.26-rc1 while fixing
    | lots of conflicts/errors, aim7 regression became less than 2%.

    I reproduced the AIM7 workload and can confirm Yanmin's findings that -.26-rc1 regresses over .25 - by over 67% here.

    Looking at the workload, I found and fixed what I believe to be the real bug causing the AIM7 regression: it was inefficient wakeup / scheduling / locking behavior of the new generic semaphore code, causing suboptimal performance.

    The problem comes from the following code. The new semaphore code does this on down():

        spin_lock_irqsave(&sem->lock, flags);
        if (likely(sem->count > 0))
                sem->count--;
        else
                __down(sem);
        spin_unlock_irqrestore(&sem->lock, flags);

    and this on up():

        spin_lock_irqsave(&sem->lock, flags);
        if (likely(list_empty(&sem->wait_list)))
                sem->count++;
        else
                __up(sem);
        spin_unlock_irqrestore(&sem->lock, flags);

    where __up() does:

        list_del(&waiter->list);
        waiter->up = 1;
        wake_up_process(waiter->task);

    and where __down() does this in essence:

        list_add_tail(&waiter.list, &sem->wait_list);
        waiter.task = task;
        waiter.up = 0;
        for (;;) {
                [...]
                spin_unlock_irq(&sem->lock);
                timeout = schedule_timeout(timeout);
                spin_lock_irq(&sem->lock);
                if (waiter.up)
                        return 0;
        }

    The fastpath looks good and obvious, but note the following property of the contended path: if there's a task on the ->wait_list, the up() of the current owner will "pass over" ownership to that waiting task, in a wake-one manner, via the waiter->up flag and by removing the waiter from the wait list.

    That is all fine in principle, but as implemented in kernel/semaphore.c it also creates a nasty, hidden source of contention!

    The contention comes from the following property of the new semaphore code: the new owner owns the semaphore exclusively, even if it is not running yet. So if the old owner, even if just a few instructions later, does a down() [lock_kernel()] again, it will be blocked and will have to wait on the new owner to eventually be scheduled (possibly on another CPU)! Or if another task gets to lock_kernel() sooner than the "new owner" gets scheduled, it will be blocked unnecessarily and for a very long time when there are 2000 tasks running.

    I.e. the implementation of the new semaphore code does wake-one and lock ownership in a very restrictive way - it does not allow opportunistic re-locking of the lock at all and keeps the scheduler from picking task order intelligently.

    This kind of scheduling, with 2000 AIM7 processes running, creates awful cross-scheduling between those 2000 tasks, causes reduced parallelism, a throttled runqueue length and a lot of idle time. With an increasing number of CPUs it causes an exponentially worse behavior in AIM7, as the chance for a newly woken new-owner task to actually run anytime soon is less and less likely.

    Note that it takes just a tiny bit of contention for the 'new-semaphore catastrophe' to happen: the wakeup latencies get added to whatever small contention there is, and quickly snowball out of control!

    I believe Yanmin's findings and numbers support this analysis too.

    The best fix for this problem is to use the same scheduling logic that the kernel/mutex.c code uses: keep the wake-one behavior (that is OK and wanted because we do not want to over-schedule), but also allow opportunistic locking of the lock even if a wakee is already "in flight".

    The patch below implements this new logic. With this patch applied the AIM7 regression is largely fixed on my quad testbox:

        # v2.6.25 vanilla:
        ..................
        Tasks   Jobs/Min   JTI   Real    CPU     Jobs/sec/task
        2000    56096.4    91    207.5   789.7   0.4675
        2000    55894.4    94    208.2   792.7   0.4658

        # v2.6.26-rc1-166-gc0a1811 vanilla:
        ...................................
        Tasks   Jobs/Min   JTI   Real    CPU     Jobs/sec/task
        2000    33230.6    83    350.3   784.5   0.2769
        2000    31778.1    86    366.3   783.6   0.2648

        # v2.6.26-rc1-166-gc0a1811 + semaphore-speedup:
        ...............................................
        Tasks   Jobs/Min   JTI   Real    CPU     Jobs/sec/task
        2000    55707.1    92    209.0   795.6   0.4642
        2000    55704.4    96    209.0   796.0   0.4642

    i.e. a 67% speedup. We are now back to within 1% of the v2.6.25 performance levels and have zero idle time during the test, as expected.

    Btw., interactivity also improved dramatically with the fix - for example console-switching became almost instantaneous during this workload (which after all is running 2000 tasks at once!); without the patch it was stuck for a minute at times.

    There's another nice side-effect of this speedup patch: the new generic semaphore code got even smaller:

        text    data     bss     dec     hex filename
        1241       0       0    1241     4d9 semaphore.o.before
        1207       0       0    1207     4b7 semaphore.o.after

    (because the waiter.up complication got removed.)

    Longer-term we should look into using the mutex code for the generic semaphore code as well - but it's not easy due to legacies, and it's outside of the scope of v2.6.26 and outside the scope of this patch as well.

    Bisected-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
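    A minimal userspace sketch of the "opportunistic grab" idea described above, using pthreads rather than the actual kernel/semaphore.c code (names and structure are illustrative only):

        #include <pthread.h>

        struct toy_sem {
                pthread_mutex_t lock;
                pthread_cond_t  wait;
                int             count;
        };

        static void toy_down(struct toy_sem *s)
        {
                pthread_mutex_lock(&s->lock);
                /* Re-check the count after every wakeup instead of trusting
                 * a per-waiter "you now own it" flag: a task that never
                 * slept may legitimately have grabbed the semaphore first. */
                while (s->count == 0)
                        pthread_cond_wait(&s->wait, &s->lock);
                s->count--;
                pthread_mutex_unlock(&s->lock);
        }

        static void toy_up(struct toy_sem *s)
        {
                pthread_mutex_lock(&s->lock);
                s->count++;
                /* wake-one: no thundering herd, but no forced ownership
                 * hand-off, so the lock can be taken opportunistically
                 * while the wakee is still "in flight" */
                pthread_cond_signal(&s->wait);
                pthread_mutex_unlock(&s->lock);
        }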
* | Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block (Linus Torvalds, 2008-05-08, 1 file, -1/+1)

    * 'for-linus' of git://git.kernel.dk/linux-2.6-block:
        Revert "relay: fix splice problem"
        docbook: fix bio missing parameter
        block: use unitialized_var() in bio_alloc_bioset()
        block: avoid duplicate calls to get_part() in disk stat code
        cfq-iosched: make io priorities inherit CPU scheduling class as well as nice
        block: optimize generic_unplug_device()
        block: get rid of likely/unlikely predictions in merge logic
        vfs: splice remove_suid() cleanup
        cfq-iosched: fix RCU race in the cfq io_context destructor handling
        block: adjust tagging function queue bit locking
        block: sysfs store function needs to grab queue_lock and use queue_flag_*()
| * | Revert "relay: fix splice problem" (Jens Axboe, 2008-05-08, 1 file, -1/+1)

    This reverts commit c3270e577c18b3d0e984c3371493205a4807db9d.
* / Fix cpuset sched_relax_domain_level control file (Paul Menage, 2008-05-08, 1 file, -12/+40)

    Due to a merge conflict, the sched_relax_domain_level control file was marked as being handled by cpuset_read/write_u64, but the code to handle it was actually in cpuset_common_file_read/write.

    Since the value being written/read is in fact a signed integer, it should be treated as such; this patch adds cpuset_read/write_s64 functions, and uses them to handle the sched_relax_domain_level file.

    With this patch, the sched_relax_domain_level can be read and written, and the correct contents seen/updated.

    Signed-off-by: Paul Menage <menage@google.com>
    Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    Cc: Paul Jackson <pj@sgi.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sched: add optional support for CONFIG_HAVE_UNSTABLE_SCHED_CLOCK (Peter Zijlstra, 2008-05-05, 5 files, -161/+251)

    this replaces the rq->clock stuff (and possibly cpu_clock()).

    - architectures that have an 'imperfect' hardware clock can set CONFIG_HAVE_UNSTABLE_SCHED_CLOCK

    - the 'jiffie' window might be superfluous when we update tick_gtod before the __update_sched_clock() call in sched_clock_tick()

    - cpu_clock() might be implemented as:

        sched_clock_cpu(smp_processor_id())

      if the accuracy proves good enough - how far can TSC drift in a single jiffie when considering the filtering and idle hooks?

    [ mingo@elte.hu: various fixes and cleanups ]

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix cpu clock (Ingo Molnar, 2008-05-05, 1 file, -9/+15)

    David Miller pointed out that nothing in cpu_clock() sets prev_cpu_time. This caused __sync_cpu_clock() to be called all the time - against the intention of this code.

    The result was that in practice we hit a global spinlock every time cpu_clock() is called - which, even though cpu_clock() is used for tracing and debugging, is suboptimal.

    While at it, also:

    - move the irq disabling to the outermost layer; this should make cpu_clock() warp-free when called with irqs enabled.

    - use long long instead of cycles_t - for platforms where cycles_t is 32-bit.

    Reported-by: David Miller <davem@davemloft.net>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fair-group: fix a Div0 error of the fair group scheduler (Miao Xie, 2008-05-05, 1 file, -6/+11)

    When I echoed 0 into the "cpu.shares" file, a Div0 error occurred. We found it is caused by the following call chain:

        sched_group_set_shares(tg, shares)
            set_se_shares(tg->se[i], shares/nr_cpu_ids)
                __set_se_shares(se, shares)
                    div64_64((1ULL<<32), shares)

    When the echoed value was less than the number of processors, the result of the expression "shares/nr_cpu_ids" was 0; the system then called div64() to divide by that result, and the Div0 error occurred.

    It is unnecessary for the shares value to be divided by nr_cpu_ids, I think, because in the functions __update_group_shares_cpu() and init_tg_cfs_entry() the shares value isn't divided by nr_cpu_ids when setting shares of the sched entity.

    This patch fixes this bug. Echoing a ULONG_MAX value into cpu.shares also causes a Div0 error, so we set a macro MAX_SHARES to limit the max value of shares.

    Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
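    A hedged sketch of the kind of guard this implies; the MIN_SHARES/MAX_SHARES values and the helper name are illustrative assumptions, not quoted from the patch:

        #define MIN_SHARES      2UL
        #define MAX_SHARES      (1UL << 18)   /* the MAX_SHARES cap mentioned above */

        /* clamp user input before it ever reaches the div64 call */
        static unsigned long clamp_shares(unsigned long shares)
        {
                if (shares < MIN_SHARES)
                        shares = MIN_SHARES;  /* never divide by zero */
                if (shares > MAX_SHARES)
                        shares = MAX_SHARES;  /* and keep huge values sane */
                return shares;
        }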
* sched: fix missing locking in sched_domains code (Heiko Carstens, 2008-05-05, 1 file, -17/+12)

    Concurrent calls to detach_destroy_domains and arch_init_sched_domains were prevented by the old scheduler subsystem cpu hotplug mutex. When this got converted to get_online_cpus() the locking got broken.

    Unlike before, now several processes can concurrently enter the critical sections that were protected by the old lock. So use the already present doms_cur_mutex to protect these sections again.

    Cc: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Paul Jackson <pj@sgi.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: make clock sync tunable by architecture code (Ingo Molnar, 2008-05-05, 1 file, -1/+1)

    make time_sync_thresh tunable by architecture code.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix debugging (Mike Galbraith, 2008-05-05, 1 file, -27/+0)

    Revert debugging commit 7ba2e74ab5a0518bc953042952dd165724bc70c9. print_cfs_rq_tasks() can induce live-lock if a task is dequeued during list traversal.

    Signed-off-by: Mike Galbraith <efault@gmx.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix sched_info_switch not being called according to documentation (David Simner, 2008-05-05, 1 file, -2/+2)

    http://bugzilla.kernel.org/show_bug.cgi?id=10545

    sched_stats.h says that __sched_info_switch is "called when prev != next" in the comment. sched.c should therefore do that.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix hrtick_start_fair and CPU-Hotplug (Peter Zijlstra, 2008-05-05, 1 file, -1/+65)

    Gautham R Shenoy reported:

    > While running the usual CPU-Hotplug stress tests on linux-2.6.25,
    > I noticed the following in the console logs.
    >
    > This is a wee bit difficult to reproduce. In the past 10 runs I hit this
    > only once.
    >
    > ------------[ cut here ]------------
    >
    > WARNING: at kernel/sched.c:962 hrtick+0x2e/0x65()
    >
    > Just wondering if we are doing a good job at handling the cancellation
    > of any per-cpu scheduler timers during CPU-Hotplug.

    It looks like the timer is indeed not cancelled at all and gets migrated to another cpu. Fix it via a proper hotplug notifier mechanism.

    Reported-by: Gautham R Shenoy <ego@in.ibm.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: stable@kernel.org
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
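    A rough sketch of such a hotplug notifier (hrtick_clear_cpu() is a hypothetical stand-in for whatever the real patch does with the per-cpu rq hrtimer; only the notifier plumbing is the standard 2008-era API):

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        static void hrtick_clear_cpu(int cpu);  /* hypothetical: cancel that cpu's hrtimer */

        static int hotplug_hrtick(struct notifier_block *nfb,
                                  unsigned long action, void *hcpu)
        {
                int cpu = (long)hcpu;

                switch (action) {
                case CPU_DOWN_PREPARE:
                case CPU_DEAD:
                        /* make sure the per-cpu timer cannot fire on, or be
                         * migrated away from, a disappearing cpu */
                        hrtick_clear_cpu(cpu);
                        return NOTIFY_OK;
                }
                return NOTIFY_DONE;
        }

        static struct notifier_block hrtick_nb = {
                .notifier_call = hotplug_hrtick,
        };

        /* during init: register_cpu_notifier(&hrtick_nb); */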
* sched: fix SCHED_FAIR wake-idle logic error (Gregory Haskins, 2008-05-05, 1 file, -1/+1)

    We currently use an optimization to skip the overhead of wake-idle processing if more than one task is assigned to a run-queue. The assumption is that the system must already be load-balanced or we wouldn't be overloaded to begin with.

    The problem is that we are looking at rq->nr_running, which may include RT tasks in addition to CFS tasks. Since the presence of RT tasks really has no bearing on the balance status of CFS tasks, this throws the calculation off.

    This patch changes the logic to only consider the number of CFS tasks when making the decision to optimize the wake-idle path.

    Signed-off-by: Gregory Haskins <ghaskins@novell.com>
    CC: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
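    The shape of the change, as a hedged sketch (the struct below is a simplified stand-in, not the real struct rq, and the exact expression in the patch may differ):

        struct toy_rq {
                unsigned long nr_running;                  /* CFS + RT tasks */
                struct { unsigned long nr_running; } cfs;  /* CFS tasks only */
        };

        static int skip_wake_idle(const struct toy_rq *rq)
        {
                /* old check: rq->nr_running > 1 also counted RT tasks, which
                 * says nothing about how balanced the CFS tasks are */
                return rq->cfs.nr_running > 1;  /* new: look only at CFS */
        }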
* sched: fix RT task-wakeup logic (Gregory Haskins, 2008-05-05, 1 file, -2/+5)

    Dmitry Adamushko pointed out a logic error in task_wake_up_rt() where we will always evaluate to "true". You can find the thread here:

        http://lkml.org/lkml/2008/4/22/296

    In reality, we only want to try to push tasks away when a wake up request is not going to preempt the current task. So let's fix it.

    Note: We introduce test_tsk_need_resched() instead of open-coding the flag check so that the merge-conflict with -rt should help remind us that we may need to support NEEDS_RESCHED_DELAYED in the future, too.

    Signed-off-by: Gregory Haskins <ghaskins@novell.com>
    CC: Dmitry Adamushko <dmitry.adamushko@gmail.com>
    CC: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
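    For reference, a sketch of what such a helper typically looks like (a paraphrase of the usual pattern, not a quote of the patch):

        /* wrap the open-coded flag test behind a named helper so future
         * variants (e.g. a delayed-resched flag in -rt) need only one change */
        static inline int test_tsk_need_resched(struct task_struct *p)
        {
                return unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED));
        }

        /* usage sketch in the RT wakeup path:
         *      if (!test_tsk_need_resched(rq->curr))
         *              push_rt_tasks(rq);
         */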
* sched: add statics, don't return void expressions (Harvey Harrison, 2008-05-05, 2 files, -6/+10)

    Noticed by sparse:

        kernel/sched.c:760:20: warning: symbol 'sched_feat_names' was not declared. Should it be static?
        kernel/sched.c:767:5: warning: symbol 'sched_feat_open' was not declared. Should it be static?
        kernel/sched_fair.c:845:3: warning: returning void-valued expression
        kernel/sched.c:4386:3: warning: returning void-valued expression

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: add debug checks to idle functions (Andrew Morton, 2008-05-05, 1 file, -0/+2)

    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Cc: "Justin Mattock" <justinmattock@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: make rt_sched_class, idle_sched_class static (Harvey Harrison, 2008-05-05, 2 files, -2/+2)

    The C files are included directly in sched.c, so they are effectively static.

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: optimize calc_delta_mine() (Peter Zijlstra, 2008-05-05, 1 file, -4/+4)

    Joel noticed that the !lw->inv_weight condition isn't unlikely anymore, so remove the unlikely annotation. Also, remove the two div64_u64() inv_weight calculations, which makes them rely on the calc_delta_mine() path as well.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    CC: Joel Schopp <jschopp@austin.ibm.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched: fix normalized sleeper (Peter Zijlstra, 2008-05-05, 1 file, -1/+1)

    Normalized sleeper uses calc_delta*() which requires that the rq load is already updated, so move account_entity_enqueue() before place_entity().

    Tested-by: Frans Pop <elendil@planet.nl>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb (Linus Torvalds, 2008-05-05, 1 file, -4/+4)

    * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb:
        kgdb: kconfig fix xconfig/menuconfig element
        kgdb: fix signedness mixmatches, add statics, add declaration to header
        kgdb: 1000 loops for the single step test in kgdbts
        kgdb: trivial sparse fixes in kgdb test-suite
        kgdb: minor documentation fixes
| * kgdb: fix signedness mixmatches, add statics, add declaration to header (Harvey Harrison, 2008-05-05, 1 file, -4/+4)

    Noticed by sparse:

        arch/x86/kernel/kgdb.c:556:15: warning: symbol 'kgdb_arch_pc' was not declared. Should it be static?
        kernel/kgdb.c:149:8: warning: symbol 'kgdb_do_roundup' was not declared. Should it be static?
        kernel/kgdb.c:193:22: warning: symbol 'kgdb_arch_pc' was not declared. Should it be static?
        kernel/kgdb.c:712:5: warning: symbol 'remove_all_break' was not declared. Should it be static?

    Related to kgdb_hex2long:

        arch/x86/kernel/kgdb.c:371:28: warning: incorrect type in argument 2 (different signedness)
        arch/x86/kernel/kgdb.c:371:28:    expected long *long_val
        arch/x86/kernel/kgdb.c:371:28:    got unsigned long *<noident>
        kernel/kgdb.c:469:27: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:469:27:    expected long *long_val
        kernel/kgdb.c:469:27:    got unsigned long *<noident>
        kernel/kgdb.c:470:27: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:470:27:    expected long *long_val
        kernel/kgdb.c:470:27:    got unsigned long *<noident>
        kernel/kgdb.c:894:27: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:894:27:    expected long *long_val
        kernel/kgdb.c:894:27:    got unsigned long *<noident>
        kernel/kgdb.c:895:27: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:895:27:    expected long *long_val
        kernel/kgdb.c:895:27:    got unsigned long *<noident>
        kernel/kgdb.c:1127:28: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:1127:28:    expected long *long_val
        kernel/kgdb.c:1127:28:    got unsigned long *<noident>
        kernel/kgdb.c:1132:25: warning: incorrect type in argument 2 (different signedness)
        kernel/kgdb.c:1132:25:    expected long *long_val
        kernel/kgdb.c:1132:25:    got unsigned long *<noident>

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
* | Removal of FUTEX_FD (Eric Sesterhenn, 2008-05-05, 1 file, -170/+6)

    Since FUTEX_FD was scheduled for removal in June 2007, let's remove it. A Google Code Search found no users of it, and NGPT was abandoned in 2003 according to IBM. futex.h is left untouched to make sure the id does not get reassigned.

    Since queue_me() has no users left, it is commented out to avoid a warning; I didn't remove it completely since it is part of the internal api (matching unqueue_me()).

    Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (removed rest)
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Make forced module loading optional (Linus Torvalds, 2008-05-04, 1 file, -15/+29)

    The kernel module loader used to be much too happy to allow loading of modules for the wrong kernel version by default. For example, if you had MODVERSIONS enabled, but tried to load a module with no version info, it would happily load it and taint the kernel - whether it was likely to actually work or not!

    Generally, such forced module loading should be considered a really really bad idea, so make it conditional on a new config option (MODULE_FORCE_LOAD), and make it default to off.

    If somebody really wants to force module loads, that's their problem, but we should not encourage it. Especially as it happened to me by mistake (ie regular unversioned Fedora modules getting loaded) causing lots of strange behavior.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
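    A hedged sketch of what such a gate can look like inside the module loader; the helper name, message text and use of add_taint_module() are modeled on the description above, not quoted from the patch:

        static int try_to_force_load(struct module *mod, const char *reason)
        {
        #ifdef CONFIG_MODULE_FORCE_LOAD
                /* forcing is allowed, but leaves a visible mark */
                printk(KERN_WARNING "%s: %s: kernel tainted.\n", mod->name, reason);
                add_taint_module(mod, TAINT_FORCED_MODULE);
                return 0;
        #else
                return -ENOEXEC;  /* forcing compiled out: refuse the module */
        #endif
        }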
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-hrt (Linus Torvalds, 2008-05-03, 2 files, -11/+2)

    * git://git.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-hrt:
        clocksource: allow read access to available/current_clocksource
        clocksource: Fix permissions for available_clocksource
        hrtimer: remove duplicate helper function
| * clocksource: allow read access to available/current_clocksource (Heiko Carstens, 2008-05-03, 1 file, -2/+2)

    There is no harm when users can read this info, and we ask for this kind of information often enough during debugging.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: John Stultz <johnstul@us.ibm.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * clocksource: Fix permissions for available_clocksource (Heiko Carstens, 2008-05-03, 1 file, -1/+1)

    File permissions for /sys/devices/system/clocksource/clocksource0/available_clocksource are 600, which allows write access. But this is in fact a read-only file, so change the permissions to 400.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: John Stultz <johnstul@us.ibm.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * hrtimer: remove duplicate helper function (Oliver Hartkopp, 2008-05-03, 1 file, -9/+0)

    The helper function hrtimer_callback_running() is used in kernel/hrtimer.c as well as in the updated net/can/bcm.c, which now supports hrtimers. Moving the helper function to hrtimer.h removes the duplicate definition in the C files.

    Signed-off-by: Oliver Hartkopp <oliver@hartkopp.net>
    Cc: David Miller <davem@davemloft.net>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
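    A sketch of the shared helper (a paraphrase of the usual shape; the authoritative definition is whatever ended up in <linux/hrtimer.h>):

        /* report whether the timer's callback is currently executing, so
         * callers can avoid re-arming or cancelling it at the wrong time */
        static inline int hrtimer_callback_running(struct hrtimer *timer)
        {
                return timer->state & HRTIMER_STATE_CALLBACK;
        }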
* | Make constants in kernel/timeconst.h fixed 64 bits (H. Peter Anvin, 2008-05-02, 2 files, -76/+52)

    Force constants in kernel/timeconst.h (except shift counts) to be 64 bits, using U64_C() constructor macros, and eliminate constants that cannot be represented at all in 64 bits. This avoids warnings with some gcc versions.

    Drop generating 64-bit constants, since we have no real hope of getting a full set (operation on 64-bit values requires a 128-bit intermediate result, which gcc only supports on 64-bit platforms, and only with libgcc support on some.)

    Note that the use of these constants does not depend on whether we are on a 32- or 64-bit architecture.

    This resolves Bugzilla 10153.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
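    An illustration of the U64_C() idea (the macro body and the constant below are placeholders for this note, not real timeconst.h contents):

        #include <stdio.h>

        #define U64_C(x) x##ULL   /* force the literal to 64 bits on any build */

        #define EXAMPLE_MSEC_TO_HZ_MUL32 U64_C(0x20c49ba6)  /* placeholder value */

        int main(void)
        {
                /* the multiply happens in 64 bits even on a 32-bit host,
                 * so the upper half of the product is not silently lost */
                unsigned long long scaled = EXAMPLE_MSEC_TO_HZ_MUL32 * 1000ULL;

                printf("%llu\n", scaled);
                return 0;
        }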
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6 (Linus Torvalds, 2008-05-02, 3 files, -0/+3)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
        [PATCH] fix sysctl_nr_open bugs
        [PATCH] sanitize anon_inode_getfd()
        [PATCH] split linux/file.h
        [PATCH] make osf_select() use core_sys_select()
        [PATCH] remove horrors with irix tty ioctls handling
        [PATCH] fix file and descriptor handling in perfmon
| * [PATCH] split linux/file.h (Al Viro, 2008-05-01, 3 files, -0/+3)

    Initial splitoff of the low-level stuff; taken to fdtable.h

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | genirq: reenable a nobody cared disabled irq when a new driver arrives (Thomas Gleixner, 2008-05-02, 2 files, -19/+34)

    Uwe Kleine-Koenig has some strange hardware where one of the shared interrupts can be asserted during boot before the appropriate driver loads. Requesting the shared irq line from another driver results in a spurious interrupt storm which finally disables the interrupt line.

    I have seen similar behaviour on resume before (the hardware does not work anymore so I can not verify).

    Change the spurious disable logic to increment the disable depth and mark the interrupt with an extra flag which allows us to reenable the interrupt when a new driver arrives which requests the same irq line. In the worst case this will disable the irq again via the spurious trap, but there is a decent chance that the new driver is the one which can handle the already asserted interrupt and makes the box usable again.

    Eric Biederman said further: This case also happens on a regular basis in kdump kernels where we deliberately don't shutdown the hardware before starting the new kernel. This patch should reduce the need for using irqpoll in that situation by a small amount.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-and-Acked-by: Uwe Kleine-König <Uwe.Kleine-Koenig@digi.com>
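    A rough sketch of the mechanism described (the flag name and the struct are simplified assumptions modeled on the genirq code of that era, not quotes from the patch):

        #define IRQS_SPURIOUS_DISABLED 0x1   /* assumed flag name */

        struct toy_irq_desc {
                unsigned int status;
                unsigned int depth;          /* nested disable count */
        };

        /* the spurious-IRQ detector gives up on the line */
        static void note_spurious_disable(struct toy_irq_desc *desc)
        {
                desc->status |= IRQS_SPURIOUS_DISABLED;
                desc->depth++;               /* disable, but keep it undoable */
        }

        /* a new driver requests the same line: give it a chance to handle
         * the interrupt that caused the storm */
        static void setup_new_action(struct toy_irq_desc *desc)
        {
                if (desc->status & IRQS_SPURIOUS_DISABLED) {
                        desc->status &= ~IRQS_SPURIOUS_DISABLED;
                        if (desc->depth)
                                desc->depth--;   /* undo the spurious disable */
                }
        }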
* | make generic sys_ptrace unconditional (Christoph Hellwig, 2008-05-01, 1 file, -2/+0)

    With s390 the last arch switched to the generic sys_ptrace yesterday, so we can now kill the ifdef around it to make sure every new port uses it instead of introducing new weirdo versions.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus (Linus Torvalds, 2008-05-01, 1 file, -148/+171)

    * git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
        module: add MODULE_STATE_GOING notifier call
        module: Enhance verify_export_symbols
        module: set unused_gpl_crcs instead of overwriting unused_crcs
        module: neaten __find_symbol, rename to find_symbol
        module: reduce module image and resident size
        module: make module_sect_attrs private to kernel/module.c
| * | module: add MODULE_STATE_GOING notifier call (Peter Oberparleiter, 2008-05-01, 1 file, -4/+7)

    Provide module unload callback. Required by the gcov profiling infrastructure to keep track of profiling data structures.

    Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * | module: Enhance verify_export_symbols (Rusty Russell, 2008-05-01, 1 file, -24/+24)

    Make verify_export_symbols check the module's unused, unused_gpl and gpl_future syms. Inspired by Jan Beulich's fix, but table-driven.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * | module: set unused_gpl_crcs instead of overwriting unused_crcs (Rusty Russell, 2008-05-01, 1 file, -1/+2)

    Obvious typo, but I don't know of any modules with unused GPL exports, and then it would take someone noticing that the version shouldn't have matched in a dependent module.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * | module: neaten __find_symbol, rename to find_symbol (Rusty Russell, 2008-05-01, 1 file, -121/+125)

    __find_symbol() has grown over time: there are now 5 different arrays of symbols it traverses. It also shouldn't print out a warning on some calls (ie. verify_symbol which simply checks for name clashes, and __symbol_put which checks for bugs).

    1) Rename to find_symbol: no need for underscores.
    2) Use bool and add "warn" parameter to suppress warnings.
    3) Make table-driven rather than open coded.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * | module: reduce module image and resident size (Rusty Russell, 2008-05-01, 1 file, -1/+2)

    Resulting reduction (x86-64, gcc 4.1.2) with my (special purpose, i.e. much reduced) configurations:

    - 16k kernel resident size
    - 180k module resident size
    - 10k module image size

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * | module: make module_sect_attrs private to kernel/module.c (Rusty Russell, 2008-05-01, 1 file, -1/+15)

    No-one else is using these afaics.

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* | workqueue: remove redundant function invocation (Andrew Liu, 2008-05-01, 1 file, -4/+2)

    timer_stats_timer_set_start_info() is invoked twice; additionally, the invocation of this function can be moved to where it is only called when a delay is really required.

    Signed-off-by: Andrew Liu <shengping.liu@windriver.com>
    Cc: Pavel Machek <pavel@ucw.cz>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | kexec: make extended crashkernel= syntax less confusing (Michael Ellerman, 2008-05-01, 1 file, -1/+1)

    The extended crashkernel syntax is a little confusing in the way it handles ranges, e.g.:

        crashkernel=512M-2G:64M,2G-:128M

    This means: if the machine has between 512M and 2G of memory the crash region should be 64M, and if the machine has exactly 2G of memory the region should also be 64M. Only if the machine has more than 2G of memory will 128M be allocated.

    Although that semantic is correct, it is somewhat baffling. Instead I propose that the end of the range means the first address past the end of the range, i.e. 512M up to but not including 2G.

    [bwalle@suse.de: clarify inclusive/exclusive in crashkernel commandline in documentation]
    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Acked-by: Bernhard Walle <bwalle@suse.de>
    Cc: "Eric W. Biederman" <ebiederm@xmission.com>
    Cc: Simon Horman <horms@verge.net.au>
    Signed-off-by: Bernhard Walle <bwalle@suse.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | ntp: handle leap second via timer (Roman Zippel, 2008-05-01, 2 files, -45/+92)

    Remove the leap second handling from second_overflow(), which doesn't have to check for it every second anymore. With CONFIG_NO_HZ this also makes sure the leap second is handled close to the full second. Additionally this makes it possible to abort a leap second properly by resetting the STA_INS/STA_DEL status bits.

    Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
    Cc: john stultz <johnstul@us.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | ntp: remove current_tick_length() (Roman Zippel, 2008-05-01, 2 files, -17/+4)

    current_tick_length() used to do a little more, but now it just returns tick_length, which we can also access directly at the few places where it's needed.

    Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
    Cc: john stultz <johnstul@us.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>