path: root/arch/arc/include/asm/mmu.h
* ARC: [SMP] ASID allocation (Vineet Gupta, 2013-11-06, 1 file, -1/+1)

- Track a per-CPU ASID counter
- Per-mm, per-CPU ASID (multiple threads, or mm migrated around)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
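A minimal sketch of the data structures this change implies; all _demo names and the array layout are illustrative assumptions, not the real kernel identifiers:

/* Illustrative sketch only, not the actual ARC definitions */
#define NR_CPUS_DEMO 4

/* one ASID allocation counter per CPU (a per-CPU variable in the real kernel) */
static unsigned long asid_cache_demo[NR_CPUS_DEMO];

/* each mm carries one ASID slot per CPU, so threads of the same process,
 * or an mm that migrates between CPUs, hold independent ASIDs */
struct demo_mm_context {
	unsigned long asid[NR_CPUS_DEMO];
};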
* ARC: fix new Section mismatches in build (post __cpuinit cleanup) (Vineet Gupta, 2013-09-05, 1 file, -1/+1)

--------------->8--------------------
WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_cache_bcr()
WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_mmu_bcr()
--------------->8--------------------

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
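For context, this warning pattern arises when code in regular .text references functions placed in .init.text, which is discarded after boot. A hedged sketch of the general shape and one common way to resolve it; the _demo names and the choice of fix are assumptions, not necessarily what this commit did:

#include <linux/init.h>

/* callees live in .init.text and vanish after boot */
static void __init read_decode_mmu_bcr_demo(void)   { /* probe MMU build regs */ }
static void __init read_decode_cache_bcr_demo(void) { /* probe cache build regs */ }

/* Without an init annotation this caller would sit in plain .text and
 * trigger the section-mismatch warnings; marking it __init as well
 * (it only runs during boot) keeps caller and callees in the same section. */
static void __init read_arc_build_cfg_regs_demo(void)
{
	read_decode_mmu_bcr_demo();
	read_decode_cache_bcr_demo();
}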
* ARC: [ASID] Track ASID allocation cycles/generations (Vineet Gupta, 2013-08-30, 1 file, -1/+1)

This helps remove the asid-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID allocator also maintained an asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this map to check whether the new @asid_cache already belonged to some mm2 (from the previous cycle). If so, it could locate that mm via the reverse map and mark its ASID as unallocated, forcing a refresh at the next switch_mm().

However, for SMP the reverse map has to be maintained per CPU, making it two dimensional, hence it was removed. With the reverse map gone, it is NOT possible to reach out to the current assignee. So we now track the ASID allocation generation/cycle, and on every switch_mm() check whether the CPU's current generation matches the one recorded in the mm's ASID; if not, the ASID is refreshed (as sketched below).

(Based loosely on the arch/sh implementation)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
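A minimal sketch of the generation check described above, under assumed names: the _demo identifiers, an 8-bit ASID held in the low bits of a wider counter, and a single counter instead of the real per-CPU one:

/* low 8 bits = hardware ASID, upper bits = allocation cycle ("generation") */
#define ASID_BITS_DEMO	8
#define ASID_MASK_DEMO	((1UL << ASID_BITS_DEMO) - 1)
#define GEN_MASK_DEMO	(~ASID_MASK_DEMO)

static unsigned long asid_cache_demo = ASID_MASK_DEMO + 1;  /* per-CPU in real code */

static void get_new_asid_demo(unsigned long *mm_asid)
{
	/* when the 8-bit ASID wraps, the upper bits advance automatically and a
	 * new generation starts (TLB flush and reserved-ASID handling at
	 * rollover are omitted from this sketch) */
	++asid_cache_demo;
	*mm_asid = asid_cache_demo;
}

static void demo_switch_mm(unsigned long *mm_asid)
{
	/* same generation as this CPU's allocator: the mm's ASID is still valid */
	if (((*mm_asid ^ asid_cache_demo) & GEN_MASK_DEMO) == 0)
		return;

	/* stale generation: hand out a fresh ASID */
	get_new_asid_demo(mm_asid);
}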
* ARC: [ASID] Refactor the TLB paranoid debug code (Vineet Gupta, 2013-08-30, 1 file, -1/+1)

The asm code already has the SW and HW ASID values at hand, so they can be passed directly to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* ARC: [ASID] Remove legacy/unused debug code (Vineet Gupta, 2013-08-30, 1 file, -3/+0)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* ARC: MMUv4 preps/3 - Abstract out TLB Insert/Delete (Vineet Gupta, 2013-08-30, 1 file, -0/+2)

This reorganizes the current TLB operations into pseudo-ops to better pair with MMUv4's native Insert/Delete operations (see the sketch below).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
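A hedged sketch of what such pseudo-ops could look like; the register indices, command codes and every _demo helper below are placeholders, not the real ARC auxiliary-register interface:

/* stand-ins for the real aux-register write and MMU command primitives */
static inline void write_aux_reg_demo(int reg, unsigned int val) { (void)reg; (void)val; }
static inline void mmu_cmd_demo(int cmd) { (void)cmd; }

enum { TLBPD0_DEMO, TLBPD1_DEMO, CMD_ERASE_DEMO, CMD_INSERT_DEMO };

/* delete any TLB entry matching a (vaddr | ASID) key */
static inline void tlb_entry_erase_demo(unsigned int vaddr_n_asid)
{
	write_aux_reg_demo(TLBPD0_DEMO, vaddr_n_asid);
	mmu_cmd_demo(CMD_ERASE_DEMO);
}

/* program both descriptors and let the MMU pick a slot to write */
static inline void tlb_entry_insert_demo(unsigned int pd0, unsigned int pd1)
{
	write_aux_reg_demo(TLBPD0_DEMO, pd0);
	write_aux_reg_demo(TLBPD1_DEMO, pd1);
	mmu_cmd_demo(CMD_INSERT_DEMO);
}

Because callers only ever use these two helpers, MMUv4's native insert and delete commands can later be dropped in behind the same interface.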
* ARC: Disintegrate arcregs.h (Vineet Gupta, 2013-06-22, 1 file, -0/+36)

* Move the various sub-system defines/types into relevant files/functions (reduces compilation time)
* Move CPU specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* ARC: Use kconfig helper IS_ENABLED() to get rid of defines.h (Vineet Gupta, 2013-06-22, 1 file, -0/+8)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
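For reference, the kind of pattern IS_ENABLED() supports; CONFIG_ARC_HAS_LLSC is used here purely as an example option, and this is not code from the commit:

#include <linux/kconfig.h>

/* IS_ENABLED() evaluates to 1 for =y/=m and 0 otherwise, so a plain C 'if'
 * replaces #ifdef blocks and the need for a defines.h shim that turns config
 * symbols into 0/1 macros; the dead branch is still type-checked and then
 * optimized away. */
static inline int demo_have_llsc(void)
{
	if (IS_ENABLED(CONFIG_ARC_HAS_LLSC))
		return 1;	/* LLOCK/SCOND based atomics available */

	return 0;		/* fall back to the non-LLSC path */
}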
* ARC: MMU Context Management (Vineet Gupta, 2013-02-15, 1 file, -0/+23)

The ARC700 MMU provides for tagging TLB entries with an 8-bit ASID, which avoids having to flush the TLB on every task switch. It also allows a quick way to invalidate all the TLB entries of a task, useful for:

* COW semantics during fork()
* task exit()

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
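A minimal sketch of why ASID tagging makes per-task invalidation cheap, using assumed _demo names and a single global counter; the real allocator is per-CPU and handles rollover, as in the generation sketch further up the log:

struct demo_mm {
	unsigned long asid;	/* ASID this task's TLB entries are tagged with */
};

static unsigned long asid_counter_demo;

/* "flush all of this task's translations" (fork COW, exit) needs no TLB walk:
 * handing the mm a fresh ASID means its stale entries, still tagged with the
 * old ASID, can never match again. */
static void demo_flush_tlb_mm(struct demo_mm *mm)
{
	mm->asid = ++asid_counter_demo;
}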