Commit message (Author, Date, Files changed, Lines)
* powerpc/mm: Move hash64 tlbflush code into a new header (Aneesh Kumar K.V, 2016-03-03, 2 files, -91/+95)

No code changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Move hash related mmu-*.h headers to book3s/ (Aneesh Kumar K.V, 2016-03-03, 13 files, -12/+12)

No code changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: add _PAGE_HASHPTE similar to 4K hash (Aneesh Kumar K.V, 2016-03-03, 1 file, -1/+1)

We don't need to update the Linux page table entry with _PAGE_HASHPTE early in the hash pte fault path. A parallel pte update will loop on _PAGE_BUSY and look at _PAGE_HASHPTE for a required hpte flush only once _PAGE_BUSY is cleared. That ensures a pte update will wait for a parallel hpte insert to finish before looking at the _PAGE_HASHPTE bit. To avoid further confusion, drop setting _PAGE_HASHPTE in the cmpxchg in __hash_page_4K.

Commit 41743a4e34f0 ("powerpc: Free a PTE bit on ppc64 with 64K pages") made a similar change for the 64K config.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Update code comments (Aneesh Kumar K.V, 2016-03-03, 2 files, -3/+3)

We are updating the pte in those functions.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* mm: Some arch may want to use HPAGE_PMD related values as variables (Kirill A. Shutemov, 2016-03-03, 4 files, -6/+30)

With the next generation POWER processor, we have a new MMU model [1] that requires us to maintain a different Linux page table format. In order to support both current and future ppc64 systems with a single kernel, we need to make sure the kernel can select between the different page table formats at runtime. With the new MMU (radix MMU) added, we will have two different pmd hugepage sizes: 16MB for the hash model and 2MB for the radix model. Hence make the HPAGE_PMD related values variables. The actual conversion of HPAGE_PMD to a variable for ppc64 happens in a follow-up patch.

[1] http://ibm.biz/power-isa3 (Needs registration).

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
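As a rough illustration of the direction (the names and shift values here are assumptions for the sketch, not the kernel's actual identifiers; the real ppc64 conversion lands in the follow-up patch mentioned above):

    /*
     * Illustrative only: pmd hugepage geometry becomes a variable
     * chosen at boot once the MMU model (hash vs radix) is known.
     */
    unsigned long hpage_pmd_shift;  /* set during early boot */

    #define HPAGE_PMD_SHIFT hpage_pmd_shift
    #define HPAGE_PMD_SIZE  (1UL << HPAGE_PMD_SHIFT)
    #define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))

    static void __init set_hugepage_geometry(bool radix)
    {
        /* 2MB pmd hugepages on radix, 16MB on hash */
        hpage_pmd_shift = radix ? 21 : 24;
    }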
* powerpc/mm: Switch book3s 64 with 64K page size to 4 level page table (Aneesh Kumar K.V, 2016-03-03, 8 files, -64/+102)

This is needed so that we can support both hash and radix page tables using a single kernel. A radix kernel uses a 4-level table. We now use physical addresses in the upper page table tree levels. Even though they are aligned to their size, for the masked bits we use the bit positions as per PowerISA 3.0.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Don't have conditional defines for real_pte_t (Aneesh Kumar K.V, 2016-03-03, 2 files, -22/+9)

We move real_pte_t out of STRICT_MM_TYPECHECKS.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Split pgtable types to separate header (Aneesh Kumar K.V, 2016-03-03, 2 files, -103/+109)

We move the page table accessors into a separate header. We will later add a big endian variant of the table, which is needed for radix. No functional change, only code movement.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Add the ability to save VSX without giving it up (Cyril Bur, 2016-03-02, 4 files, -37/+30)

This patch adds the ability to save the VSX registers to the thread struct without giving the facility up (disabling it) for the next return to userspace. It builds on a previous optimisation for the FPU and VEC registers in the thread copy path that avoids a possibly pointless reload of VSX state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Add the ability to save Altivec without giving it up (Cyril Bur, 2016-03-02, 3 files, -22/+17)

This patch adds the ability to save the VEC registers to the thread struct without giving the facility up (disabling it) for the next return to userspace. It builds on a previous optimisation for the FPU registers in the thread copy path that avoids a possibly pointless reload of VEC state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Add the ability to save FPU without giving it up (Cyril Bur, 2016-03-02, 3 files, -19/+17)

This patch adds the ability to save the FPU registers to the thread struct without giving the facility up (disabling it) for the next return to userspace. It optimises the thread copy path (taken as a result of fork() or clone()) so that the parent thread can return to userspace with hot registers, avoiding a possibly pointless reload of FPU register state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Prepare for splitting giveup_{fpu, altivec, vsx} in two (Cyril Bur, 2016-03-02, 3 files, -1/+45)

This prepares for decoupling the saving of {fpu,altivec,vsx} registers from marking {fpu,altivec,vsx} as unused by a thread. Currently giveup_{fpu,altivec,vsx}() does both; however, optimisations to task switching can be made if these two operations are decoupled. save_all() will permit saving the registers to the thread struct while leaving the thread's MSR bits enabled. This patch introduces no functional change.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Restore FPU/VEC/VSX if previously used (Cyril Bur, 2016-03-02, 6 files, -14/+107)

Currently the FPU, VEC and VSX facilities are lazily loaded. This is not a problem unless a process is using these facilities.

Modern versions of GCC are very good at automatically vectorising code, new and modernised workloads make use of floating point and vector facilities, and even the kernel makes use of vectorised memcpy. All of this combined greatly increases the cost of a syscall, since the kernel uses the facilities sometimes even in the syscall fast path, making it increasingly common for a thread to take an *_unavailable exception soon after a syscall, not to mention potentially taking all three.

The obvious overcompensation for this problem is to simply always load all the facilities on every exit to userspace. But loading up all FPU, VEC and VSX registers every time can be expensive, and if a workload does avoid using them, it should not be forced to incur this penalty.

An 8-bit counter is used to detect if the registers have been used in the past, and the registers are always loaded until the value wraps back to zero.

Several versions of the assembly in entry_64.S were tested:

1. Always calling C.
2. Performing a common case check and then calling C.
3. A complex check in asm.

After some benchmarking it was determined that avoiding C in the common case is a performance benefit (option 2). The full check in asm (option 3) greatly complicated that code path for a negligible performance gain, and the trade-off was deemed not worth it.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
[mpe: Move load_vec in the struct to fill an existing hole, reword change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
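A minimal sketch of the counter scheme, assuming illustrative field and helper names (load_fp is not necessarily the kernel's actual field name):

    /*
     * Each *_unavailable fault bumps an 8-bit counter; the exit-to-
     * userspace path keeps restoring the registers while it is
     * non-zero. Because the counter is a u8, sustained use eventually
     * wraps it to zero and the thread falls back to lazy loading.
     */
    struct thread_struct_sketch {
        u8 load_fp;
        u8 load_vec;
    };

    static void fp_unavailable_sketch(struct thread_struct_sketch *t)
    {
        t->load_fp++;           /* remember recent FPU use; may wrap */
    }

    static bool want_fp_restore(const struct thread_struct_sketch *t)
    {
        return t->load_fp != 0; /* restore eagerly until the wrap */
    }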
* powerpc: Explicitly disable math features when copying thread (Cyril Bur, 2016-03-02, 1 file, -0/+1)

Currently when threads get scheduled off they always give up the FPU, Altivec (VMX) and Vector (VSX) units if they were using them. When they are scheduled back on, a fault is taken to enable each facility and load the registers. As a result, explicitly disabling FPU/VMX/VSX has not been necessary.

Future changes and optimisations remove this mandatory giveup-and-fault, which could cause calls such as clone() and fork() to copy threads and run them later with FPU/VMX/VSX enabled but no registers loaded.

This patch starts the process of having MSR_{FP,VEC,VSX} set mean that a thread's registers are hot, while MSR_{FP,VEC,VSX} clear means that the registers must be loaded. This allows for a smarter return to userspace.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* selftests/powerpc: Test FPU and VMX regs in signal ucontext (Cyril Bur, 2016-03-02, 4 files, -1/+296)

Load up the non-volatile FPU and VMX regs and ensure that they hold the expected values in a signal handler.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* selftests/powerpc: Test preservation of FPU and VMX regs across preemption (Cyril Bur, 2016-03-02, 6 files, -3/+310)

Loop in assembly checking the registers, with many threads.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* selftests/powerpc: Test the preservation of FPU and VMX regs across syscall (Cyril Bur, 2016-03-02, 8 files, -1/+625)

Test that the non-volatile floating point and Altivec registers are correctly preserved across the fork() syscall. fork() works nicely for this purpose: the registers should be the same for both parent and child.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
[mpe: Add include guards to basic_asm.h, minor formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* selftests/powerpc: Remove -flto from common CFLAGS (Suraj Jitindar Singh, 2016-03-02, 1 file, -1/+1)

LTO can cause GCC to inline some functions which have attributes set. The act of inlining the functions can lead to GCC forgetting about the attributes, which leads to incorrect tests. A notable example being: __attribute__((__target__("no-vsx")))

LTO can also interact strangely with custom assembly functions and cause tests to intermittently fail. Both these cases are hard to detect and require manual inspection of binaries, which is unlikely to happen for all tests.

Furthermore, LTO optimisations are not necessary for selftests; correctness is paramount, and as such it is best to disable LTO. It can still be enabled on a per-test basis.

A pseries_le_defconfig kernel on a POWER8 was used to determine that the same subset of selftests passes and fails with and without -flto in the common Makefile.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* selftests/powerpc: Fix out of bounds access in TM signal test (Michael Ellerman, 2016-03-02, 1 file, -1/+1)

GCC helpfully points out that we're accessing past the end of the gprs array:

    tm-signal-msr-resv.c: In function 'signal_usr1':
    tm-signal-msr-resv.c:43:37: error: array subscript is above array bounds [-Werror=array-bounds]
      ucp->uc_mcontext.regs->gpr[PT_MSR] |= (7ULL);

We hadn't noticed previously because -flto was hiding it somehow. The code is confused: PT_MSR isn't a gpr index, the MSR instead lives in uc_regs->gregs, so fix it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Split hash page table sizing heuristic into a helper (David Gibson, 2016-03-02, 2 files, -13/+24)

htab_get_table_size() either retrieves the size of the hash page table (HPT) from the device tree, if the HPT size is determined by firmware, or uses a heuristic to determine a good size based on RAM size if the kernel is responsible for allocating the HPT. To support a PAPR extension allowing resizing of the HPT, we're going to want the memory-size-to-HPT-size logic elsewhere, so split it out into a helper function.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
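A minimal sketch of the split; the helper name follows the commit's intent, and the heuristic constants are illustrative assumptions rather than the kernel's exact values:

    /* Pure "memory size -> HPT shift" logic, callable on its own by
     * future HPT resize code. */
    static unsigned long htab_shift_for_mem_size(unsigned long mem_size)
    {
        unsigned long memshift = ilog2(mem_size);

        /* Assumed heuristic: HPT about 1/128th of RAM, 256kB floor. */
        return max(memshift - 7, 18UL);
    }

    /* htab_get_table_size() then becomes a thin wrapper around it: */
    static unsigned long htab_get_table_size_sketch(void)
    {
        unsigned long ram = memblock_phys_mem_size();

        return 1UL << htab_shift_for_mem_size(ram);
    }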
* powerpc/mm: Clean up memory hotplug failure paths (David Gibson, 2016-03-01, 3 files, -19/+44)

This makes a number of cleanups to the handling of mapping failures during memory hotplug on Power.

For errors creating the linear mapping for the hot-added region:

* This is now reported with EFAULT, which is more appropriate than the previous EINVAL (the failure is unlikely to be related to the function's parameters).
* An error in this path now prints a warning message, rather than just silently failing to add the extra memory.
* Previously a failure here could result in the region being partially mapped. We now clean up any partial mapping before failing.

For errors creating the vmemmap for the hot-added region:

* This is now reported with EFAULT instead of causing a BUG() - this could happen for external reasons (e.g. a full hash table), so it's better to handle it non-fatally.
* An error message is also printed, so the failure won't be silent.
* As above, a failure could cause a partially mapped region; we now clean this up.

[mpe: Move htab_remove_mapping() out of #ifdef CONFIG_MEMORY_HOTPLUG to enable this]

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Handle removing maybe-present bolted HPTEs (David Gibson, 2016-03-01, 4 files, -11/+24)

At the moment the hpte_removebolted callback in ppc_md returns void and will BUG_ON() if the hpte it's asked to remove doesn't exist in the first place. This is awkward for the case of cleaning up a mapping which was only partially made before failing.

So, we add a return value to hpte_removebolted(), and have it return ENOENT in the case that the HPTE to remove didn't exist in the first place. In the (sole) caller, we propagate errors from hpte_removebolted() to its caller to handle. However, we handle ENOENT specially, continuing to complete the unmapping over the specified range before returning the error to the caller. This means that htab_remove_mapping() will work sanely on a partially present mapping, removing any HPTEs which are present, while also returning ENOENT to its caller in case it's important there.

There are two callers of htab_remove_mapping():

- In remove_section_mapping() we already WARN_ON() any error return, which is reasonable - in this case the mapping should be fully present.
- In vmemmap_remove_mapping() we BUG_ON() any error. We change that to just a WARN_ON() in the case of ENOENT, since failing to remove a mapping that wasn't there in the first place probably shouldn't be fatal.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
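A simplified sketch of the ENOENT propagation described above (the iteration details and step computation are illustrative):

    static int htab_remove_mapping_sketch(unsigned long vstart,
                                          unsigned long vend,
                                          int psize, int ssize)
    {
        unsigned long step = 1UL << mmu_psize_defs[psize].shift;
        unsigned long vaddr;
        int rc, ret = 0;

        for (vaddr = vstart; vaddr < vend; vaddr += step) {
            rc = ppc_md.hpte_removebolted(vaddr, psize, ssize);
            if (rc == -ENOENT)
                ret = -ENOENT;  /* remember, but keep unmapping */
            else if (rc < 0)
                return rc;      /* hard error: stop immediately */
        }

        return ret;  /* 0, or -ENOENT if any HPTE was already gone */
    }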
* powerpc/mm: Clean up error handling for htab_remove_mapping (David Gibson, 2016-03-01, 1 file, -7/+6)

Currently, the only error that htab_remove_mapping() can report is -EINVAL, if removal of bolted HPTEs isn't implemented for this platform. We make a few clean-ups to the handling of this:

* EINVAL isn't really the right code - there's nothing wrong with the function's arguments - so use ENODEV instead.
* We were also printing a warning message, but that's a decision better left up to the callers, so remove it.
* One caller is vmemmap_remove_mapping(), which will just BUG_ON() on error, making the warning message redundant, so no change is needed there.
* The other caller is remove_section_mapping(). This is called in the memory hot remove path at a point after vmemmap_remove_mapping(), so if hpte_removebolted isn't implemented, we'd expect to have already BUG()ed anyway. Put a WARN_ON() here, in lieu of a printk(), since this really shouldn't be happening.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Fix misspellings in comments. (Adam Buchbinder, 2016-03-01, 42 files, -56/+56)

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/ps3: gelic_udbg: use struct udphdr from <linux/udp.h> (Luis Henriques, 2016-03-01, 1 file, -8/+2)

Instead of defining a local version of struct udphdr, use the standard definition from <linux/udp.h>. The 'src' field is named 'source' in the <linux/udp.h> definition.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/ps3: gelic_udbg: use struct iphdr from <linux/ip.h> (Luis Henriques, 2016-03-01, 1 file, -20/+9)

Instead of defining a local version of struct iphdr, use the standard definition from <linux/ip.h>. Several fields in the <linux/ip.h> definition have different names:

- proto -> protocol
- src -> saddr
- dest -> daddr
- total_length -> tot_len
- checksum -> check

Also, the old 'ver_len' corresponds to the separate 'version' and 'ihl' fields in <linux/ip.h>.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
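A sketch of the resulting field usage, mapping the old local names onto struct iphdr (the helper name and surrounding gelic_udbg code are assumptions):

    #include <linux/ip.h>
    #include <linux/in.h>

    static void build_ip_header(struct iphdr *ip, __be32 saddr,
                                __be32 daddr, __be16 total_len)
    {
        ip->version  = 4;             /* high nibble of old 'ver_len' */
        ip->ihl      = 5;             /* low nibble of old 'ver_len'  */
        ip->protocol = IPPROTO_UDP;   /* old 'proto'                  */
        ip->saddr    = saddr;         /* old 'src'                    */
        ip->daddr    = daddr;         /* old 'dest'                   */
        ip->tot_len  = total_len;     /* old 'total_length'           */
        ip->check    = 0;             /* old 'checksum', filled later */
    }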
* powerpc/ps3: gelic_udbg: use struct vlan_hdr from <linux/if_vlan.h> (Luis Henriques, 2016-03-01, 1 file, -10/+6)

Instead of defining the local struct vlantag, use the standard definition of struct vlan_hdr from <linux/if_vlan.h>. The fields in the <linux/if_vlan.h> definition have different names:

- vlan -> h_vlan_TCI
- subtype -> h_vlan_encapsulated_proto

While there, also use the ETH_P_IP macro instead of a hard-coded 0x0800 value.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/ps3: gelic_udbg: use struct ethhdr from <linux/if_ether.h> (Luis Henriques, 2016-03-01, 1 file, -10/+7)

Instead of defining a local version of struct ethhdr, use the standard definition from <linux/if_ether.h>. The fields in the <linux/if_ether.h> definition have different names:

- dest -> h_dest
- src -> h_source
- type -> h_proto

While there, use a few other standard functions/macros:

- eth_broadcast_addr (instead of a memset)
- ETH_ALEN
- ETH_P_8021Q

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Expand the real page number field of the Linux PTE (Paul Mackerras, 2016-02-29, 2 files, -8/+8)

Now that other PTE fields have been moved out of the way, we can expand the RPN field of the PTE on 64-bit Book 3S systems and align it with the RPN field in the radix PTE format used by PowerISA v3.0 CPUs in radix mode. For 64k page size, this means we need to move the _PAGE_COMBO and _PAGE_4K_PFN bits.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Move software-used bits in PTE (Paul Mackerras, 2016-02-29, 1 file, -3/+3)

This moves the _PAGE_SPECIAL and _PAGE_SOFT_DIRTY bits in the Linux PTE on 64-bit Book 3S systems to bit positions which are designated for software use in the radix PTE format used by PowerISA v3.0 CPUs in radix mode.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Shuffle read, write, execute and user bits in PTE (Paul Mackerras, 2016-02-29, 1 file, -4/+6)

This moves the _PAGE_EXEC, _PAGE_RW and _PAGE_USER bits around in the Linux PTE on 64-bit Book 3S systems to correspond with the bit positions used in radix mode by PowerISA v3.0 CPUs. This also adds a _PAGE_READ bit corresponding to the read permission bit in the radix PTE. _PAGE_READ is currently unused but could possibly be used in future to improve pte_protnone().

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Move HPTE-related bits in PTE to upper end (Paul Mackerras, 2016-02-29, 2 files, -6/+7)

This moves the _PAGE_HASHPTE, _PAGE_F_GIX and _PAGE_F_SECOND fields in the Linux PTE on 64-bit Book 3S systems to the most significant byte. Of the 5 bits, one is a software-use bit and the other four are reserved bit positions in the PowerISA v3.0 radix PTE format. Using these bits is OK because they are all to do with tracking the HPTE(s) associated with the Linux PTE, and therefore won't be needed in radix mode. This frees up bit positions in the lower two bytes.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Move _PAGE_PTE to 2nd most significant bit (Paul Mackerras, 2016-02-29, 1 file, -1/+1)

This changes _PAGE_PTE for 64-bit Book 3S processors from 0x1 to 0x4000_0000_0000_0000, because that bit is used as the L (leaf) bit by PowerISA v3.0 CPUs in radix mode. The "leaf" bit indicates that the PTE points to a page directly rather than another radix level, which is what the _PAGE_PTE bit means.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Move _PAGE_PRESENT to the most significant bit (Paul Mackerras, 2016-02-29, 4 files, -9/+11)

This changes _PAGE_PRESENT for 64-bit Book 3S processors from 0x2 to 0x8000_0000_0000_0000, because that is where PowerISA v3.0 CPUs in radix mode will expect to find it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
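Taken together, the two moves above can be read as bit definitions; the values are as stated in the commit messages:

    /* Bit positions shared with the PowerISA v3.0 radix PTE format. */
    #define _PAGE_PTE      0x4000000000000000UL  /* radix L (leaf) bit */
    #define _PAGE_PRESENT  0x8000000000000000UL  /* radix valid bit    */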
* powerpc/mm/book3s-64: Use physical addresses in upper page table tree levels (Paul Mackerras, 2016-02-29, 7 files, -18/+28)

This changes the Linux page tables to store physical addresses rather than kernel virtual addresses in the upper levels of the tree (pgd, pud and pmd) for 64-bit Book 3S machines. This also changes the hugepd pointers used to implement hugepages when the base page size is 4k to store physical addresses rather than virtual addresses (again just for 64-bit Book 3S machines). This frees up some high order bits, and will be needed with PowerISA v3.0 machines which read the page table tree in hardware in radix mode.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
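A sketch of what the change means for the populate/walk helpers (simplified; PUD_MASKED_BITS stands in for whatever non-address bits the format reserves, and the helper names are illustrative):

    /* Populate now stores a physical address ... */
    static inline void pud_populate_sketch(pud_t *pud, pmd_t *pmd)
    {
        *pud = __pud(__pa(pmd));  /* was: (unsigned long)pmd */
    }

    /* ... and the walker converts back with __va(). */
    static inline pmd_t *pmd_offset_sketch(pud_t *pud, unsigned long addr)
    {
        pmd_t *base = __va(pud_val(*pud) & ~PUD_MASKED_BITS);

        return base + pmd_index(addr);
    }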
* powerpc/mm/book3s-64: Free up 7 high-order bits in the Linux PTE (Paul Mackerras, 2016-02-27, 5 files, -11/+14)

This frees up bits 57-63 in the Linux PTE on 64-bit Book 3S machines. In the 4k page case, this is done just by reducing the size of the RPN field to 39 bits, giving 51-bit real addresses. In the 64k page case, we had 10 unused bits in the middle of the PTE, so this moves the RPN field down 10 bits to make use of those unused bits. This means the RPN field is now 3 bits larger at 37 bits, giving 53-bit real addresses in the normal case, or 49-bit real addresses for the special 4k PFN case.

We are doing this in order to be able to move some other PTE bits into the positions where PowerISA v3.0 processors will expect to find them in radix mode. Ultimately we will be able to move the RPN field to lower bit positions and make it larger.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/book3s-64: Clean up some obsolete or misleading comments (Paul Mackerras, 2016-02-27, 3 files, -14/+12)

No code changes.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* Merge tag 'powerpc-4.5-4' into next (Michael Ellerman, 2016-02-25, 16 files, -11/+110)
|\

Pull in our current fixes from 4.5, in particular because the "Fix Multi hit ERAT" bug is causing folks some grief when testing next.
| * powerpc/mm/hash: Clear the invalid slot information correctly (Aneesh Kumar K.V, 2016-02-22, 2 files, -2/+18)

We can get a hash pte fault with 4k base page size and find the pte already inserted with 64K base page size. In that case we need to clear the existing slot information from the old pte; fix this correctly.

With THP, we also clear the slot information with respect to all the 64K hash pte mappings of that 16MB page; they are all invalid now. This makes sure we don't find the slot valid when we fault with 4k base page size. Finding the slot valid should not result in any wrong behaviour, because we do check again in the hash page table for the validity, but this way we can avoid that check completely.

Fixes: a43c0eb8364c022 ("powerpc/mm: Convert 4k hash insert to C")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/eeh: Fix partial hotplug criterion (Gavin Shan, 2016-02-22, 1 file, -2/+1)

During error recovery, a device can be removed as part of a partial hotplug. The criterion used to decide on partial hotplug is: if the device driver provides the error_detected(), slot_reset() and resume() callbacks, the device is immune from hotplug; otherwise, it experiences a partial hotplug during EEH recovery. But the criterion isn't correct enough: the mlx4_core driver for Mellanox adapters provides error_detected() and slot_reset() callbacks, but no resume(), so those Mellanox adapters end up going through partial hotplug even though they shouldn't need to.

This fixes the criterion to a practical one: an adapter whose driver provides error_detected() and slot_reset() will be immune from partial hotplug; resume() isn't mandatory.

Fixes: f2da4ccf ("powerpc/eeh: More relaxed hotplug criterion")
Cc: stable@vger.kernel.org #v4.4+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/ioda: Set "read" permission when "write" is set (Alexey Kardashevskiy, 2016-02-17, 1 file, -0/+6)

Quite often drivers set only the "write" permission, assuming that it includes the "read" permission as well, and this works on plenty of platforms. However IODA2 is strict about this and produces an EEH when the "read" permission is not set and a read happens. This adds a workaround in the IODA code to always add the "read" bit when the "write" bit is set.

Fixes: 10b35b2b7485 ("powerpc/powernv: Do not set "read" flag if direction==DMA_NONE")
Cc: stable@vger.kernel.org # 4.2+
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Tested-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
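A sketch of the workaround, assuming an illustrative helper name (TCE_PCI_READ and TCE_PCI_WRITE are the existing powerpc TCE permission bits):

    static inline unsigned long tce_fixup_perms(unsigned long tce)
    {
        /* IODA2 EEHs on reads through a write-only TCE, so force
         * the read bit on whenever write is requested. */
        if (tce & TCE_PCI_WRITE)
            tce |= TCE_PCI_READ;

        return tce;
    }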
| * powerpc/mm: Fix Multi hit ERAT cause by recent THP update (Aneesh Kumar K.V, 2016-02-15, 4 files, -0/+45)

With ppc64 we use the deposited pgtable_t to store the hash pte slot information. We should not withdraw the deposited pgtable_t without marking the pmd none. This ensures that low level hash fault handling will skip this huge pte and we will handle them at upper levels.

A recent change to pmd splitting changed the above in order to handle the race between pmd split and exit_mmap, explained below:

    CPU0                                CPU1
    shrink_page_list()
      add_to_swap()
        split_huge_page_to_list()
          __split_huge_pmd_locked()
            pmdp_huge_clear_flush_notify()
            // pmd_none() == true
                                        exit_mmap()
                                          unmap_vmas()
                                            zap_pmd_range()
                                            // no action on pmd since
                                            // pmd_none() == true
            pmd_populate()

As a result the THP will not be freed. The leak is detected by check_mm():

    BUG: Bad rss-counter state mm:ffff880058d2e580 idx:1 val:512

The above required us to not mark the pmd none during a pmd split. The fix for ppc is to clear the huge pte of _PAGE_USER, so that low level fault handling code skips this pte. At a higher level we do take the ptl lock; that should serialize us against the pmd split. Once the lock is acquired we do check the pmd again using pmd_same(). That should always return false for us, and hence we will retry the access. We do the pmd_same() check in all cases after taking the ptl with THP (do_huge_pmd_wp_page, do_huge_pmd_numa_page and huge_pmd_set_accessed).

Also make sure we wait for the irq-disable section in other cpus to finish before flipping a huge pte entry with a regular pmd entry. Code paths like find_linux_pte_or_hugepte depend on irq disable to get a stable pte_t pointer. A parallel thp split needs to make sure we don't convert a pmd pte to a regular pmd entry without waiting for the irq-disable section to finish.

Fixes: eef1b3ba053a ("thp: implement split_huge_pmd()")
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/powernv: Fix stale PE primary bus (Gavin Shan, 2016-02-15, 3 files, -0/+22)

When a PCI bus is unplugged during full hotplug for EEH recovery, the platform PE instance (struct pnv_ioda_pe) isn't released and still dereferences the stale PCI bus that has been released, leading to a kernel crash when the stale PCI bus is referenced. This fixes the issue by correcting the PE's primary bus when it comes online again at plugging time, in pnv_pci_dma_bus_setup(), which is called by pcibios_fixup_bus().

Cc: stable@vger.kernel.org # v4.1+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/eeh: Fix stale cached primary bus (Gavin Shan, 2016-02-15, 4 files, -2/+9)

When a PE is created, its primary bus is cached in pe->bus. At a later point, the cached primary bus is returned from eeh_pe_bus_get(). However, we can get a stale cached primary bus and run into a kernel crash in one case: full hotplug as part of fenced-PHB error recovery releases all PCI buses under the PHB at unplugging time and recreates them at plugging time, so pe->bus may still be dereferencing a PCI bus that was released.

This adds another PE flag (EEH_PE_PRI_BUS) to represent the validity of pe->bus: pe->bus is updated when its first child EEH device comes online and the flag is set; before unplugging in full hotplug for error recovery, the flag is cleared.

Fixes: 8cdb2833 ("powerpc/eeh: Trace PCI bus from PE")
Cc: stable@vger.kernel.org #v3.11+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/pseries: Don't trace hcalls on offline CPUs (Denis Kirjanov, 2016-02-15, 1 file, -2/+6)

If a cpu is hotplugged while the hcall trace points are active, it's possible to hit a warning from RCU due to the trace points calling into RCU from an offline cpu, eg:

    RCU used illegally from offline CPU!
    rcu_scheduler_active = 1, debug_locks = 1

Make the hypervisor tracepoints conditional by using TRACE_EVENT_FN_COND.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
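A sketch of what the conditional tracepoint looks like; the entry fields are simplified and the exact event layout is an assumption, but the key point is TP_CONDITION(), which keeps the event from firing (and from calling into RCU) on an offline CPU:

    TRACE_EVENT_FN_COND(hcall_entry,

        TP_PROTO(unsigned long opcode, unsigned long *args),

        TP_ARGS(opcode, args),

        TP_CONDITION(cpu_online(raw_smp_processor_id())),

        TP_STRUCT__entry(
            __field(unsigned long, opcode)
        ),

        TP_fast_assign(
            __entry->opcode = opcode;
        ),

        TP_printk("opcode=%lu", __entry->opcode),

        hcall_tracepoint_regfunc, hcall_tracepoint_unregfunc
    );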
| * powerpc: Fix dedotify for binutils >= 2.26 (Andreas Schwab, 2016-02-08, 1 file, -1/+1)

Since binutils 2.26, BFD does suffix merging on STRTAB sections. But dedotify modifies the symbol names in place, which can then also modify unrelated symbols whose names match a suffix of a dotted name. To remove the leading dot of a symbol name, we can instead just increment the pointer into the STRTAB section.

Backport to all stables to avoid breakage when people update their binutils - mpe.

Cc: stable@vger.kernel.org
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
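A sketch of the described fix, simplified from the module loader's dedotify: leave the (now shared) string data alone and bump st_name past the dot instead of rewriting the name in place.

    static void dedotify_sketch(Elf64_Sym *syms, unsigned int numsyms,
                                char *strtab)
    {
        unsigned int i;

        for (i = 1; i < numsyms; i++) {
            if (syms[i].st_shndx == SHN_UNDEF &&
                strtab[syms[i].st_name] == '.')
                syms[i].st_name++;  /* skip the leading '.' */
        }
    }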
| * powerpc/book3s_32: Fix build error with checkpoint restart (Aneesh Kumar K.V, 2016-01-31, 1 file, -2/+2)

    In file included from mm/vmscan.c:54:0:
    include/linux/swapops.h: In function ‘pte_to_swp_entry’:
    include/linux/swapops.h:69:2: error: implicit declaration of function ‘pte_swp_soft_dirty’ [-Werror=implicit-function-declaration]
      if (pte_swp_soft_dirty(pte))
      ^
    include/linux/swapops.h:70:3: error: implicit declaration of function ‘pte_swp_clear_soft_dirty’ [-Werror=implicit-function-declaration]
      pte = pte_swp_clear_soft_dirty(pte);

We support soft dirty tracking only with book3s 64 for now, so change the Kconfig dependency accordingly. Also, the CHECKPOINT_RESTORE feature is not really dependent on SOFT_DIRTY; we track the dependency between MEM_SOFT_DIRTY and ARCH_SOFT_DIRTY through headers.

Fixes: 7207f43665b8 ("powerpc/mm: Add page soft dirty tracking")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* | powerpc: Fix BUG_ON() reporting in real mode (Balbir Singh, 2016-02-24, 1 file, -1/+9)

I ran into this issue while debugging an early boot problem: the system hit a BUG_ON(), but report_bug() failed to print the line number and file name. The reason is that the system was running in real mode, while report_bug() searches for addresses in the PAGE_OFFSET+ region.

Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
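A sketch of the described fix, assuming an illustrative function name: when translation is off, the trapping address is a real address, so offset it into the linear mapping before the bug-table lookup.

    static bool check_bug_trap(struct pt_regs *regs)
    {
        unsigned long bugaddr = regs->nip;

        /* In real mode (MSR_IR clear) the NIP is a real address,
         * but the bug table holds PAGE_OFFSET+ addresses. */
        if (!(regs->msr & MSR_IR))
            bugaddr += PAGE_OFFSET;

        return report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN;
    }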
* | powerpc: Use BUILD_BUG_ON_MSG() for unsupported {cmp}xchg sizes (pan xinhui, 2016-02-24, 1 file, -16/+7)

__xchg_called_with_bad_pointer() can't tell us which code uses {cmp}xchg with an unsupported size, and no error is reported until the link stage. To make such problems easier to debug, use BUILD_BUG_ON_MSG() instead.

Signed-off-by: pan xinhui <xinhui.pan@linux.vnet.ibm.com>
[mpe: Tweak change log wording & add relaxed/acquire]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
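A sketch of the change, assuming the existing __xchg_u32()/__xchg_u64() helpers. Because size comes from sizeof() at the call site, the trailing assert is dead code for supported sizes and is eliminated; it only fires, at compile time and with a pointed message, for bad callers:

    static __always_inline unsigned long
    __xchg_sketch(volatile void *ptr, unsigned long x, unsigned int size)
    {
        switch (size) {
        case 4:
            return __xchg_u32(ptr, x);
        case 8:
            return __xchg_u64(ptr, x);
        }
        BUILD_BUG_ON_MSG(1, "Unsupported size for __xchg");
        return x;
    }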
* | powerpc/powernv: Add AST graphics driver to powernv_defconfig (Jeremy Kerr, 2016-02-24, 1 file, -2/+2)

Most current OpenPOWER platforms have an AST BMC, so add graphics support via the AST DRM driver.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>