path: root/arch/arc/mm/tlb.c
Commit log (newest first; each entry lists author, date, files changed, -/+ lines):
* ARC: mm: PAE: use 40-bit physical page mask
  (Vladimir Isaev, 2021-05-19; 1 file, -1/+1)

  commit c5f756d8c6265ebb1736a7787231f010a3b782e5 upstream.

  32-bit PAGE_MASK can not be used as a mask for physical addresses
  when PAE is enabled. PAGE_MASK_PHYS must be used for physical
  addresses instead of PAGE_MASK.

  Without this, init gets SIGSEGV if pte_modify was called:

  | potentially unexpected fatal signal 11.
  | Path: /bin/busybox
  | CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc5-00003-g1e43c377a79f-dirty
  | Insn could not be fetched
  | @No matching VMA found
  | ECR: 0x00040000 EFA: 0x00000000 ERET: 0x00000000
  | STAT: 0x80080082 [IE U ]   BTA: 0x00000000
  | SP: 0x5f9ffe44  FP: 0x00000000  BLK: 0xaf3d4
  | LPS: 0x000d093e LPE: 0x000d0950 LPC: 0x00000000
  | r00: 0x00000002 r01: 0x5f9fff14 r02: 0x5f9fff20
  | ...
  | Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

  Signed-off-by: Vladimir Isaev <isaev@synopsys.com>
  Reported-by: kernel test robot <lkp@intel.com>
  Cc: Vineet Gupta <vgupta@synopsys.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

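  Why a 32-bit mask corrupts a PAE address, as a standalone demo
  (PAGE_MASK_PHYS below is a stand-in modeled on the fix, not the
  kernel's exact definition):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT     13  /* typical 8K ARC page */
      /* 32-bit kernel: PAGE_MASK is an unsigned long, i.e. 32 bits */
      #define PAGE_MASK      ((uint32_t)~((1UL << PAGE_SHIFT) - 1))
      /* assumed PAE-aware mask, wide enough for 40-bit addresses */
      #define PAGE_MASK_PHYS (~((1ULL << PAGE_SHIFT) - 1) & ((1ULL << 40) - 1))

      int main(void)
      {
          uint64_t paddr = 0x1234567000ULL;  /* a 40-bit physical address */

          /* the 32-bit mask silently drops address bits 32..39 */
          printf("PAGE_MASK:      %#llx\n",
                 (unsigned long long)(paddr & PAGE_MASK));      /* 0x34566000   */
          printf("PAGE_MASK_PHYS: %#llx\n",
                 (unsigned long long)(paddr & PAGE_MASK_PHYS)); /* 0x1234566000 */
          return 0;
      }
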
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
  (Thomas Gleixner, 2019-06-19; 1 file, -4/+1)

  Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

  extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

  has been chosen to replace the boilerplate/reference in 4122 file(s).

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Enrico Weigelt <info@metux.net>
  Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
  Reviewed-by: Allison Randal <allison@lohutok.net>
  Cc: linux-spdx@vger.kernel.org
  Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* ARC: fix build warnings
  (Vineet Gupta, 2019-05-20; 1 file, -5/+8)

  | arch/arc/mm/tlb.c:914:2: warning: variable length array 'pd0' is used [-Wvla]
  | arch/arc/include/asm/cmpxchg.h:95:29: warning: value computed is not used [-Wunused-value]

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

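  The usual shape of a -Wvla fix, sketched with invented names (the
  actual patch is not shown here): replace the runtime-sized array with
  one sized by a compile-time hardware ceiling.

      /* Before: 'ways' is only known at boot, so 'pd0' is a VLA */
      static void read_tlb_set_vla(int ways)
      {
          unsigned int pd0[ways];            /* -Wvla fires here */
          (void)pd0;
      }

      /* After: bound the array by an assumed architectural maximum */
      enum { ARC_MMU_MAX_WAYS = 4 };         /* illustrative ceiling */
      static void read_tlb_set_fixed(int ways)
      {
          unsigned int pd0[ARC_MMU_MAX_WAYS];
          (void)ways;
          (void)pd0;
      }

      int main(void)
      {
          read_tlb_set_vla(4);
          read_tlb_set_fixed(4);
          return 0;
      }
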
* ARCv2: Accommodate HS48 MMUv5 by relaxing MMU ver checking
  (Vineet Gupta, 2017-11-06; 1 file, -24/+33)

  HS48 cpus will have a new MMUv5, although Linux is currently not
  explicitly supporting the newer features (so remains at V4).

  The existing software/hardware version check is very tight and causes
  a boot abort. Given that the MMUv5 hardware is backwards compatible,
  relax the boot check to allow the current kernel support level to work
  with the new hardware (see the sketch below).

  Also while at it, move the ancient MMU related code under ARCompact
  builds, as the baseline MMU for HS cpus is v4.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

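  A standalone sketch of the relaxation (constants invented): the check
  goes from "hardware version must equal what the kernel targets" to
  "hardware must be at least that version".

      #include <stdio.h>

      #define KERNEL_MMU_VER 4   /* support level this kernel targets */

      static int mmu_ver_ok(int hw_ver)
      {
          /* was: hw_ver == KERNEL_MMU_VER (aborts on MMUv5 hardware) */
          return hw_ver >= KERNEL_MMU_VER;  /* backwards-compat h/w is fine */
      }

      int main(void)
      {
          for (int v = 3; v <= 5; v++)
              printf("MMUv%d hardware: %s\n", v,
                     mmu_ver_ok(v) ? "boots" : "boot abort");
          return 0;
      }
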
* ARC: Re-enable MMU upon Machine Check exception
  (Jose Abreu, 2017-09-01; 1 file, -3/+0)

  I recently came upon a scenario where I would get a double fault
  machine check exception triggered by a kernel module. However the
  ensuing crash stacktrace (ksym lookup) was not working correctly.

  Turns out that the machine check exception auto-disables the MMU,
  while modules are allocated in kernel vaddr space.

  This patch re-enables the MMU before we start printing the stacktrace,
  making stacktracing of modules work upon a fatal exception.

  Cc: stable@kernel.org
  Signed-off-by: Jose Abreu <joabreu@synopsys.com>
  Reviewed-by: Alexey Brodkin <abrodkin@synopsys.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  [vgupta: moved code into low level handler to avoid in 2 places]

* ARC: set boot print log level to PR_INFO
  (Noam Camus, 2017-08-28; 1 file, -1/+1)

  Some of the boot printing code had printk() w/o an explicit log level.
  This patch introduces consistency, allowing platforms to switch to
  less verbose console logging using the cmdline.

  NPS400 with 4K CPUs needs to avoid the cpu info printing for faster
  bootup.

  Signed-off-by: Noam Camus <noamca@mellanox.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

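  What the change looks like in general kernel style (the message text
  here is invented, not the actual call site):

      /* Before: no level, so the line prints at default verbosity */
      printk("ARC MMU: %d sets x %d ways\n", sets, ways);

      /* After: explicit info level, suppressible via loglevel=/quiet */
      pr_info("ARC MMU: %d sets x %d ways\n", sets, ways);
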
* ARCv2: PAE40: set MSB even if !CONFIG_ARC_HAS_PAE40 but PAE exists in SoC
  (Vineet Gupta, 2017-08-04; 1 file, -1/+11)

  A PAE40 configuration in hardware extends some of the address
  registers for TLB/cache ops to 2 words. So far the kernel was NOT
  setting the higher word if the feature was not enabled in software,
  which is wrong. Those need to be set to 0 in such a case.

  Normally this would be done in the cache flush / tlb ops, however
  since these registers only exist conditionally, this would have to be
  conditional on a flag being set at boot, which is expensive/ugly -
  specially for the more common case of PAE existing but not in use.
  Optimize that by zero'ing them once at boot - nobody will write to
  them afterwards.

  Cc: stable@vger.kernel.org #4.4+
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

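  A hedged sketch of the one-time boot fixup (the register name and the
  'pae' flag are assumptions for illustration; write_aux_reg() is the
  stock ARC accessor):

      /* h/w has PAE40 but the kernel isn't using it: park the high
       * word of the extended TLB address register at 0 once, so every
       * later single-word TLB op implicitly carries a clean MSB */
      if (mmu->pae && !IS_ENABLED(CONFIG_ARC_HAS_PAE40))
              write_aux_reg(ARC_REG_TLBPD1HI, 0);  /* assumed reg name */
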
* sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h>
  (Ingo Molnar, 2017-03-02; 1 file, -0/+2)

  Update code that relied on sched.h including various MM types for
  them. This will allow us to remove the <linux/mm_types.h> include from
  <linux/sched.h>.

  Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>

* ARC: boot log: remove awkward space comma from MMU line
  (Vineet Gupta, 2016-10-28; 1 file, -3/+3)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [plat-eznps] Use dedicated user stack top
  (Noam Camus, 2016-05-09; 1 file, -0/+6)

  NPS uses a special mapping right below TASK_SIZE. Hence we need to
  lower STACK_TOP so that the user stack won't overlap the NPS special
  mapping.

  Signed-off-by: Noam Camus <noamc@ezchip.com>
  Acked-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Make vmalloc size configurable
  (Noam Camus, 2016-05-09; 1 file, -0/+5)

  On ARC, the lower 2G of address space is translated and used for:
   - user vaddr space (region 0 to 5)
   - unused kernel-user gutter (region 6)
   - kernel vaddr space (region 7)
  where each region simply represents 256MB of address space.

  The kernel vaddr space of 256MB is used to implement vmalloc and
  modules. So far this was enough, but not on EZChip systems with 4K
  CPUs (given that the per-cpu mechanism uses vmalloc for allocating
  chunks).

  So allow VMALLOC_SIZE to be configurable, by expanding down into the
  unused kernel-user gutter region, which at the default 256M was
  excessive anyway.

  Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use
  "1UL" as it is called from assembly code in mm/tlbex.S (see the
  mechanism sketched below).

  Signed-off-by: Noam Camus <noamc@ezchip.com>
  [vgupta: rewrote changelog, debugged bootup crash due to int vs. hex]
  Acked-by: Vineet Gupta <vgupta@synopsys.com>

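  The _BITUL() mechanism, condensed from include/uapi/linux/const.h:
  the UL suffix evaporates when the header is pulled into assembly, so
  one definition serves both C and mm/tlbex.S.

      #ifdef __ASSEMBLY__
      #define _AC(X, Y)  X             /* assembler: bare constant */
      #else
      #define __AC(X, Y) (X##Y)
      #define _AC(X, Y)  __AC(X, Y)    /* C: constant with UL suffix */
      #endif
      #define _UL(x)     (_AC(x, UL))
      #define _BITUL(x)  (_UL(1) << (x))

      /* usable from both C and assembly ... */
      #define PGDIR_SIZE _BITUL(PGDIR_SHIFT)
      /* ... whereas (1UL << PGDIR_SHIFT) chokes the assembler on "1UL" */
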
* ARC: Fix misspellings in comments.
  (Adam Buchbinder, 2016-03-11; 1 file, -4/+4)

  Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: comments update
  (Vineet Gupta, 2015-11-16; 1 file, -2/+2)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: mm: PAE40 support
  (Vineet Gupta, 2015-10-29; 1 file, -5/+22)

  This is the first working implementation of 40-bit physical address
  extension on ARCv2.

  Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: mm: PAE40: switch to using phys_addr_t for physical addresses
  (Vineet Gupta, 2015-10-28; 1 file, -5/+5)

  That way a single flip of phys_addr_t to 64 bit ensures all places
  dealing with physical addresses get correct data.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: mm: Improve Duplicate PD Fault handler
  (Vineet Gupta, 2015-10-28; 1 file, -24/+24)

  - Move the verbosity knob from .data to .bss by using inverted logic
    (see the sketch below)
  - No need to read out the PD1 descriptor
  - Clip the non-pfn bits of PD0 to avoid clipping inside the loop

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

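  The .data-to-.bss trick, sketched with invented names: a flag whose
  default is 1 needs initialized storage in .data, while a
  zero-initialized flag lives in .bss for free; inverting its sense
  preserves the default behavior.

      /* Before: non-zero initializer, so the knob occupies .data */
      static int dup_pd_verbose = 1;   /* report by default */

      /* After: zero-initialized, so it lands in .bss; sense inverted */
      static int dup_pd_silent;        /* 0 == report by default */

      static void report_dup_pd(void)
      {
          if (dup_pd_silent)
              return;
          /* ... print the duplicate TLB entry diagnostics ... */
      }
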
* ARC: boot log: decode more mmu config items
  (Vineet Gupta, 2015-10-17; 1 file, -6/+8)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: boot log: move helper macros to header for reuse
  (Vineet Gupta, 2015-10-17; 1 file, -1/+1)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: mm: compute TLB size as needed from ways * sets
  (Vineet Gupta, 2015-10-17; 1 file, -5/+4)

  This frees up some bits to hold more high level info such as PAE being
  present, w/o increasing the size of the already bloated cpuinfo
  struct.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARCv2: mm: THP: flush_pmd_tlb_range make SMP safe
  (Vineet Gupta, 2015-10-17; 1 file, -2/+25)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization
  (Vineet Gupta, 2015-10-17; 1 file, -0/+20)

  Implement the TLB flush routine to evict a specific Super TLB entry,
  vs. moving to a new ASID on every such flush.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARCv2: mm: THP: boot validation/reporting
  (Vineet Gupta, 2015-10-17; 1 file, -1/+7)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARCv2: mm: THP support
  (Vineet Gupta, 2015-10-17; 1 file, -0/+81)

  MMUv4 in HS38x cores supports Super Pages which are the basis for
  Linux THP support.

  Normal and Super pages can co-exist (of course not overlap) in the
  TLB, with a new bit "SZ" in the TLB page descriptor to distinguish
  between them. Super Page size is configurable in hardware (4K to 16M),
  but fixed once RTL builds.

  The exact THP size a Linux configuration will support is a function
  of:
   - MMU page size (typically 8K, RTL fixed)
   - software page walker address split between PGD:PTE:PFN (typically
     11:8:13, but can be changed with 1 line)

  So for the above defaults, the THP size supported is 8K * 256 = 2M
  (see the arithmetic below).

  The default page walker is 2 levels (PGD:PTE:PFN), which in the THP
  regime reduces to 1 level (as the PTE is folded into the PGD and
  canonically referred to as PMD). Thus THP PMD accessors are
  implemented in terms of PTE (just like sparc).

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

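  The 2M figure follows directly from the split; a minimal check in C,
  assuming the 11:8:13 default:

      #include <stdio.h>

      int main(void)
      {
          const unsigned pfn_bits = 13;           /* 8K base page: 2^13  */
          const unsigned pte_bits = 8;            /* 256 PTEs per table  */
          unsigned long page = 1UL << pfn_bits;   /* 8192                */
          unsigned long thp  = page << pte_bits;  /* one PMD maps a full
                                                     last-level table    */

          printf("base page = %lu KB, THP = %lu MB\n",
                 page >> 10, thp >> 20);          /* 8 KB, 2 MB */
          return 0;
      }
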
* ARCv2: MMUv4: TLB programming Model changes
  (Vineet Gupta, 2015-06-22; 1 file, -3/+51)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: compress cpuinfo_arc_mmu (mainly save page size in KB)
  (Vineet Gupta, 2015-06-19; 1 file, -4/+4)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: boot: cpu feature print enhancements
  (Vineet Gupta, 2014-10-13; 1 file, -5/+3)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [SMP] TLB flush
  (Vineet Gupta, 2013-11-06; 1 file, -0/+73)

  - Add mm_cpumask setting (aggregating only, unlike some other arches)
    used to restrict the TLB flush cross-calling
  - Cross-calling versions of TLB flush routines (thanks to Noam); see
    the sketch below

  Signed-off-by: Noam Camus <noamc@ezchip.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

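  A hedged sketch of the cross-calling pattern (struct and handler
  names invented; on_each_cpu_mask() and mm_cpumask() are the stock
  kernel primitives):

      struct tlb_args {
          struct vm_area_struct *ta_vma;
          unsigned long ta_start;
      };

      /* runs on every CPU in the mask, including the caller */
      static void ipi_flush_tlb_page(void *arg)
      {
          struct tlb_args *ta = arg;

          local_flush_tlb_page(ta->ta_vma, ta->ta_start);
      }

      void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
      {
          struct tlb_args ta = { .ta_vma = vma, .ta_start = uaddr };

          /* mm_cpumask only ever aggregates here, so it may name CPUs
           * that no longer run this mm - a flush there is harmless */
          on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_page,
                           &ta, 1);
      }
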
* ARC: [SMP] ASID allocation
  (Vineet Gupta, 2013-11-06; 1 file, -6/+8)

  - Track a per-CPU ASID counter
  - mm-per-cpu ASID (multiple threads, or mm migrated around)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Fix bogus gcc warning and micro-optimise TLB iteration loop
  (Vineet Gupta, 2013-11-06; 1 file, -2/+2)

  ------------------>8----------------------
  arch/arc/mm/tlb.c: In function ‘do_tlb_overlap_fault’:
  arch/arc/mm/tlb.c:688:13: warning: array subscript is above array bounds [-Warray-bounds]
     (pd0[n] & PAGE_MASK)) {
              ^
  ------------------>8----------------------

  While at it, remove the useless last iteration of the outer loop when
  reading a TLB SET for duplicate entries.

  Suggested-by: Mischa Jonker <mjonker@synopsys.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [ASID] Track ASID allocation cycles/generations
  (Vineet Gupta, 2013-08-30; 1 file, -15/+7)

  This helps remove the asid-to-mm reverse map.

  While mm->context.id contains the ASID assigned to a process, our ASID
  allocator also used the asid_mm_map[] reverse map. In a new allocation
  cycle (mm->ASID >= @asid_cache), the Round Robin ASID allocator used
  this to check if the new @asid_cache belonged to some mm2 (from a prev
  cycle). If so, it could locate that mm using the ASID reverse map, and
  mark that mm as having an unallocated ASID, to force it to refresh at
  the time of switch_mm().

  However, for SMP, the reverse map has to be maintained per CPU, so it
  becomes 2 dimensional, hence got rid of it.

  With the reverse map gone, it is NOT possible to reach out to the
  current assignee. So we track the ASID allocation generation/cycle and
  on every switch_mm(), check if the current generation of the CPU ASID
  is the same as the mm's ASID; if not, it is refreshed.

  (Based loosely on the arch/sh implementation)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

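  A hedged distillation of generation-tagged ASIDs (names and widths
  are illustrative; the real code also handles SMP per-CPU state): the
  low bits of one monotonically increasing counter are the hardware
  ASID, the high bits are the allocation cycle.

      #define ASID_MASK   0xffUL            /* 256 hardware ASIDs      */
      #define CYCLE_MASK  (~ASID_MASK)      /* generation lives above  */

      static unsigned long asid_cache = ASID_MASK;  /* rolls to gen 1  */

      static void get_new_ctx(unsigned long *mm_ctx)
      {
          if (!(++asid_cache & ASID_MASK)) {
              /* ASIDs exhausted: flush the whole TLB, start a new
               * cycle, and skip the reserved ASID value 0 */
              ++asid_cache;
          }
          *mm_ctx = asid_cache;             /* generation | asid       */
      }

      static void switch_mm_ctx(unsigned long *mm_ctx)
      {
          /* same generation: the mm's ASID is still exclusively its
           * own; different generation: it was recycled, re-allocate */
          if ((*mm_ctx ^ asid_cache) & CYCLE_MASK)
              get_new_ctx(mm_ctx);
      }
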
* ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID
  (Vineet Gupta, 2013-08-30; 1 file, -6/+7)

  ASID allocation changes/1

  This patch does 2 things:

  (1) get_new_mmu_context() NOW moves mm->ASID to a new value ONLY if it
      was from a prev allocation cycle/generation OR if mm had no ASID
      allocated (vs. before, which would unconditionally move to a new
      ASID).

      Callers desiring an unconditional update of the ASID, e.g.
      local_flush_tlb_mm() (for the parent's address space invalidation
      at fork), need to first force the parent to an unallocated ASID.

  (2) get_new_mmu_context() always sets the MMU PID reg with the
      unchanged/new ASID value.

  The gains are:
  - consolidation of all asid alloc logic into get_new_mmu_context()
  - avoiding code duplication in switch_mm() for PID reg setting
  - enables future change to fold activate_mm() into switch_mm()

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [ASID] Refactor the TLB paranoid debug code
  (Vineet Gupta, 2013-08-30; 1 file, -11/+13)

  - The asm code already has the values of the SW and HW ASIDs, so they
    can be passed to the printing routine.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: No need to flush the TLB in early boot
  (Vineet Gupta, 2013-08-30; 1 file, -7/+0)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: MMUv4 preps/3 - Abstract out TLB Insert/Delete
  (Vineet Gupta, 2013-08-30; 1 file, -40/+54)

  This reorganizes the current TLB operations into pseudo-ops to better
  pair with MMUv4's native Insert/Delete operations.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: MMUv4 preps/2 - Reshuffle PTE bits
  (Vineet Gupta, 2013-08-30; 1 file, -8/+3)

  With the previous commit freeing up PTE bits, reassign them so as to:
  - Match the bit to its h/w counterpart where possible (e.g. MMUv2
    GLOBAL/PRESENT; this avoids a shift in create_tlb())
  - Avoid holes in the _PAGE_xxx definitions

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: MMUv4 preps/1 - Fold PTE K/U access flags
  (Vineet Gupta, 2013-08-29; 1 file, -2/+17)

  The current ARC VM code has 13 flags in the Page Table entry: some
  software (accessed/dirty/non-linear-maps) and the rest hardware
  specific. With an 8k MMU page, we need 19 bits for addressing the
  page frame, so the remaining 13 bits are just about enough to
  accommodate the current flags.

  In MMUv4 there are 2 additional flags, SZ (normal or super page) and
  WT (cache access mode write-thru) - and additionally the PFN is 20
  bits (vs. 19 before for 8k). Thus these can't be held in the current
  PTE w/o making each entry 64bit wide.

  It seems there is some scope for compressing the current PTE flags
  (and freeing up a few bits). Currently the PTE contains fully
  orthogonal distinct access permissions for kernel and user mode
  (Kr, Kw, Kx; Ur, Uw, Ux) which can be folded into one set (R, W, X).
  The translation of 3 PTE bits into 6 TLB bits (when programming the
  MMU) can be done based on the following prerequisites/assumptions
  (see the sketch after this list):

  1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to
     0x7FFF_FFFF), the PTE additionally has the PAGE_GLOBAL flag set
     (and user space entries can never be global). Thus such a PTE can
     translate to Kr, Kw, Kx (as appropriate) and zero for the user
     mode counterparts.

  2. For non-global entries, the PTE flags can be used to create
     mirrored K and U TLB bits. This is true after commit
     a950549c675f2c8c504 "ARC: copy_(to|from)_user() to honor
     usermode-access permissions" which ensured that user-space
     translations _MUST_ have the same access permissions for both U/K
     mode accesses so that copy_{to,from}_user() play fair with fault
     based CoW break and such...

  There is no such thing as a free lunch - the cost is slightly
  inflated TLB-Miss Handlers.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

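  A hedged sketch of the 3-bit to 6-bit expansion done at TLB-refill
  time (bit positions and the hardware layout are assumptions for
  illustration; the real work happens in the asm miss handler):

      #include <stdint.h>

      #define _PAGE_READ     (1u << 0)
      #define _PAGE_WRITE    (1u << 1)
      #define _PAGE_EXECUTE  (1u << 2)
      #define _PAGE_GLOBAL   (1u << 3)  /* kernel-only (vmalloc) entry */

      /* expand folded R/W/X into Kr,Kw,Kx (bits 3-5) + Ur,Uw,Ux (0-2) */
      static uint32_t tlb_perm_bits(uint32_t pte)
      {
          uint32_t rwx = pte & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE);

          if (pte & _PAGE_GLOBAL)
              return rwx << 3;           /* K bits only, U stays 0 */

          return (rwx << 3) | rwx;       /* mirror into K and U    */
      }
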
* arc: delete __cpuinit usage from all arc files
  (Paul Gortmaker, 2013-06-27; 1 file, -2/+2)

  The __cpuinit type of throwaway sections might have made sense some
  time ago when RAM was more constrained, but now the savings do not
  offset the cost and complications. For example, the fix in commit
  5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good
  example of the nasty type of bugs that can be created with improper
  use of the various __init prefixes.

  After a discussion on LKML[1] it was decided that cpuinit should go
  the way of devinit and be phased out. Once all the users are gone, we
  can then finally remove the macros themselves from linux/init.h.

  Note that some harmless section mismatch warnings may result, since
  notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
  and are flagged as __cpuinit -- so if we remove the __cpuinit from
  arch specific callers, we will also get section mismatch warnings. As
  an intermediate step, we intend to turn the linux/init.h cpuinit
  content into no-ops as early as possible, since that will get rid of
  these warnings. In any case, they are temporary and harmless.

  This removes all the arch/arc uses of the __cpuinit macros from all C
  files. Currently arc does not have any __CPUINIT used in assembly
  files.

  [1] https://lkml.org/lkml/2013/5/20/589

  Cc: Vineet Gupta <vgupta@synopsys.com>
  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [mm] Assume pagecache page dirty by default
  (Vineet Gupta, 2013-06-22; 1 file, -1/+1)

  Similar to ARM/SH.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [mm] Zero page optimization
  (Vineet Gupta, 2013-06-22; 1 file, -1/+5)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Disintegrate arcregs.h
  (Vineet Gupta, 2013-06-22; 1 file, -4/+20)

  * Move the various sub-system defines/types into relevant
    files/functions (reduces compilation time)
  * Move CPU specific stuff out of asm/tlb.h into asm/mmu.h

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Use kconfig helper IS_ENABLED() to get rid of defines.h
  (Vineet Gupta, 2013-06-22; 1 file, -1/+1)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

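  The pattern this enables (the call site below is invented, but
  IS_ENABLED() is the stock helper from <linux/kconfig.h>): CONFIG_*
  symbols become 0/1 expressions directly, so no hand-rolled defines.h
  shims are needed and both branches stay compile-checked.

      /* before: #ifdef CONFIG_SMP ... #endif around the call */
      if (IS_ENABLED(CONFIG_SMP))
              setup_smp_tlb_ops();   /* invented function name */
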
* ARC: Brown paper bag bug in macro for checking cache color
  (Vineet Gupta, 2013-05-23; 1 file, -1/+2)

  The VM_EXEC check in update_mmu_cache() was getting optimized away
  because of a stupid error in the definition of the macro
  addr_not_cache_congruent().

  The intention was to have the equivalent of the following:

      if (a || (1 ? b : 0))

  but we ended up with the following:

      if (a || 1 ? b : 0)

  And because the precedence of '||' is higher than that of '?', gcc
  was optimizing away the evaluation of <a>.

  Nasty repercussions:
  1. For non-aliasing configs it would mean some extraneous dcache
     flushes for non-code pages if U/K mappings were not congruent.
  2. For aliasing config, some needed dcache flushes for code pages
     might be missed if U/K mappings were congruent.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

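  The precedence trap, reduced to a compilable demo:

      #include <stdio.h>

      int main(void)
      {
          int a = 1, b = 0;

          /* intended: a || (1 ? b : 0) -> true whenever 'a' is true */
          printf("parenthesized:   %d\n", a || (1 ? b : 0));  /* 1 */

          /* buggy expansion parses as (a || 1) ? b : 0, so 'a' no
           * longer influences the result - exactly what let gcc drop
           * the VM_EXEC check */
          printf("unparenthesized: %d\n", a || 1 ? b : 0);    /* 0 */

          return 0;
      }
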
* ARC: [mm] Aliasing VIPT dcache support 2/4
  (Vineet Gupta, 2013-05-09; 1 file, -6/+21)

  This is the meat of the series, which prevents any dcache alias
  creation by always keeping the U and K mappings of a page congruent.
  If a mapping already exists and the other tries to access the page,
  the prev one is flushed to the physical page (wback+inv).

  Essentially flush_dcache_page()/copy_user_highpage() create the
  K-mapping of a page, but try to defer flushing, unless a U-mapping
  exists. When the page is actually mapped to userspace,
  update_mmu_cache() flushes the K-mapping (in certain cases this can
  be optimised out).

  Additionally flush_cache_mm(), flush_cache_range(), flush_cache_page()
  handle the purging of stale userspace mappings on exit/munmap...

  flush_anon_page() handles the existing U-mapping for an anon page
  before the kernel reads it via the GUP path.

  Note that while not complete, this is enough to boot a simple
  dynamically linked Busybox based rootfs.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [mm] Aliasing VIPT dcache support 1/4
  (Vineet Gupta, 2013-05-09; 1 file, -1/+1)

  This preps the low level dcache flush helpers to take a vaddr argument
  in addition to the existing paddr, to properly flush the VIPT dcache.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [mm] Lazy D-cache flush (non aliasing VIPT)
  (Vineet Gupta, 2013-05-07; 1 file, -5/+13)

  flush_dcache_page() is the MM hook to ensure that a page has
  consistent views between kernel and userspace. Thus it is called when

  * the kernel writes to a page which at some later point could get
    mapped to userspace (so the kernel mapping needs to be
    flushed-n-inv)
  * the kernel is about to read from a page with possible userspace
    mappings (so userspace mappings need to be made coherent with
    kernel ones)

  However for a non aliasing VIPT dcache, any userspace mapping will
  always be congruent to the kernel mapping. Thus the d-cache need not
  be flushed at all (or can be delayed indefinitely).

  The only reason it does need to be flushed is when mapping code
  pages. Since the icache doesn't snoop the dcache, those dirty dcache
  lines need to be written back to memory and the icache lines
  invalidated, so that an icache line fetch will get the right data.
  (A hedged sketch of the deferral follows the numbers below.)

  Decent gains on LMBench fork/exec/sh and File I/O micro-benchmarks.

  (1) FPGA @ 80 MHZ

  Processor, Processes - times in microseconds - smaller is better
  ------------------------------------------------------------------------------
  Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                               call  I/O stat clos TCP  inst hndl proc proc proc
  --------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  3.9-rc6-a Linux 3.9.0-r   80 4.79 8.72 66.7 116. 239. 8.39 30.4 4798 14.K 34.K
  3.9-rc6-b Linux 3.9.0-r   80 4.79 8.62 65.4 111. 239. 8.35 29.0 3995 12.K 30.K
  3.9-rc7-c Linux 3.9.0-r   80 4.79 9.00 66.1 106. 239. 8.61 30.4 2858 10.K 24.K
                                                                  ^^^^ ^^^^ ^^^

  File & VM system latencies in microseconds - smaller is better
  -------------------------------------------------------------------------------
  Host                 OS   0K File      10K File     Mmap    Prot   Page 100fd
                          Create Delete Create Delete Latency Fault  Fault selct
  --------- ------------- ------ ------ ------ ------ ------- ----- ------- -----
  3.9-rc6-a Linux 3.9.0-r  317.8  204.2 1122.3  375.1  3522.0 4.288    20.7 126.8
  3.9-rc6-b Linux 3.9.0-r  298.7  223.0 1141.6  367.8  3531.0 4.866    20.9 126.4
  3.9-rc7-c Linux 3.9.0-r  278.4  179.2  862.1  339.3  3705.0 3.223    20.3 126.6
                           ^^^^^  ^^^^^  ^^^^^         ^^^^

  (2) Customer Silicon @ 500 MHz (166 MHz mem)

  ------------------------------------------------------------------------------
  Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                               call  I/O stat clos TCP  inst hndl proc proc proc
  --------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  abilis-ba Linux 3.9.0-r  497 0.71 1.38 4.58 12.0 35.5 1.40 3.89 2070 5525 13.K
  abilis-ca Linux 3.9.0-r  497 0.71 1.40 4.61 11.8 35.6 1.37 3.92 1411 4317 10.K
                                                                  ^^^^ ^^^^ ^^^

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

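  A hedged sketch of the deferral (PG_dc_clean is the name ARC gives
  its arch page flag; the flush helper name is invented and the mapping
  checks are simplified):

      void flush_dcache_page(struct page *page)
      {
          struct address_space *mapping = page_mapping(page);

          /* non-aliasing VIPT: U and K views are congruent, so a
           * kernel write is already visible to a future user mapping;
           * only the icache (which doesn't snoop) can go stale */
          if (mapping && !mapping_mapped(mapping)) {
              /* no user mapping yet: just mark the page not-clean and
               * defer; the real flush happens only if/when the page
               * gets mapped executable */
              clear_bit(PG_dc_clean, &page->flags);
          } else {
              /* wback dcache + inv icache so insn fetch sees the data */
              __flush_dcache_and_inv_icache(page_address(page));
          }
      }
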
* ARC: [mm] optimise icache flush for user mappings
  (Vineet Gupta, 2013-05-07; 1 file, -3/+9)

  The ARC icache doesn't snoop the dcache, thus executable pages need
  to be made coherent before mapping into userspace in
  flush_icache_page().

  However the ARC700 CDU (hardware cache flush module) requires both
  vaddr (index in cache) as well as paddr (tag match) to correctly
  identify a line in the VIPT cache. A typical ARC700 SoC has an
  aliasing icache, thus the paddr-only based flush_icache_page() API
  couldn't be implemented efficiently. It had to loop through all
  possible alias indexes and perform the invalidate operation (of
  course the cache op would only succeed at the index(es) where the tag
  matches - typically only 1, but the cost of visiting all the
  cache-bins needs to be paid nevertheless).

  Turns out however that the vaddr (along with paddr) is available in
  update_mmu_cache(), hence better suits ARC icache flush semantics.
  With both vaddr+paddr, exactly one flush operation per line is done.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Respect the cpu_id passed for fetching correct cpu info
  (Noam Camus, 2013-05-07; 1 file, -1/+1)

  Signed-off-by: Noam Camus <noamc@ezchip.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: [build] Fix warnings with CONFIG_DEBUG_SECTION_MISMATCH
  (Vineet Gupta, 2013-04-09; 1 file, -2/+2)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: Boot #2: Verbose Boot reporting / feature verification
  (Vineet Gupta, 2013-02-15; 1 file, -0/+38)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

* ARC: SMP support
  (Vineet Gupta, 2013-02-15; 1 file, -0/+6)

  ARC common code to enable an SMP system + ISS provided SMP extensions.

  ARC700 natively lacks SMP support, hence some of the core features
  are only enabled if SoCs have the necessary h/w pixie-dust. This
  includes:
  - Inter Processor Interrupts (IPI)
  - Cache coherency
  - load-locked/store-conditional
  ...

  The low level exception handling would be completely broken in SMP
  because we don't have hardware assisted stack switching. Thus a fair
  bit of this code is repurposing the MMU_SCRATCH reg for event handler
  prologues to keep them re-entrant.

  Many thanks to Rajeshwar Ranga for his initial "major" contributions
  to the SMP Port (back in 2008), and to Noam Camus and Gilad
  Ben-Yossef for help with resurrecting that in the 3.2 kernel (2012).

  Note that this platform code is again a singleton design pattern - so
  multiple SMP platforms won't build at the moment - this deficiency is
  addressed in subsequent patches within this series.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rajeshwar Ranga <rajeshwar.ranga@gmail.com>
  Cc: Noam Camus <noamc@ezchip.com>
  Cc: Gilad Ben-Yossef <gilad@benyossef.com>