* tipc: fix crash during node removal (Jon Paul Maloy, 2017-04-27; 1 file, -13/+11)

  commit d25a01257e422a4bdeb426f69529d57c73b235fe upstream.

  When the TIPC module is unloaded, we have identified a race condition that allows a node reference counter to go to zero and the node instance being freed before the node timer is finished with accessing it. This leads to occasional crashes, especially in multi-namespace environments. The scenario goes as follows (CPU0 runs node_stop(), CPU1 runs node_timeout(), ref == 2):

    1: if (!mod_timer())
    2: if (del_timer())
    3: tipc_node_put()    // ref -> 1
    4: tipc_node_put()    // ref -> 0
    5: kfree_rcu(node);
    6: tipc_node_get(node)
    7: // BOOM!

  We now clean up this functionality as follows:

  1) We remove the node pointer from the node lookup table before we attempt deactivating the timer. This way, we reduce the risk that tipc_node_find() may obtain a valid pointer to an instance marked for deletion; a harmless but undesirable situation.

  2) We use del_timer_sync() instead of del_timer() to safely deactivate the node timer without any risk that it might be reactivated by the timeout handler. There is no risk of deadlock here, since the two functions never touch the same spinlocks.

  3) We remove a pointless tipc_node_get() + tipc_node_put() from the timeout handler.

  Reported-by: Zhijiang Hu <huzhijiang@gmail.com>
  Acked-by: Ying Xue <ying.xue@windriver.com>
  Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
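  For context, a minimal sketch of the teardown ordering described in points 1) and 2) above; this is not the actual TIPC patch, and the structure, lookup and refcount helpers (node_lookup_remove(), node_put()) are illustrative placeholders:

```c
#include <linux/timer.h>

/* Sketch: hide the object, then synchronously kill its timer, then drop refs. */
static void node_stop(struct node *node)
{
	/* 1) Remove the node from the lookup table first, so no new user can
	 *    obtain a pointer to an instance that is about to be deleted. */
	node_lookup_remove(node);

	/* 2) del_timer_sync() waits for a concurrently running timeout handler
	 *    to finish and guarantees the timer cannot be re-armed afterwards. */
	if (del_timer_sync(&node->timer))
		node_put(node);		/* timer was still pending: drop its reference */

	node_put(node);			/* drop the lookup-table reference */
}
```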
* block: fix del_gendisk() vs blkdev_ioctl crash (Dan Williams, 2017-04-27; 1 file, -1/+0)

  commit ac34f15e0c6d2fd58480052b6985f6991fb53bcc upstream.

  When tearing down a block device early in its lifetime, userspace may still be performing discovery actions like blkdev_ioctl() to re-read partitions.

  The nvdimm_revalidate_disk() implementation depends on disk->driverfs_dev to be valid at entry. However, it is set to NULL in del_gendisk() and fatally this is happening *before* the disk device is deleted from userspace view.

  There's no reason for del_gendisk() to clear ->driverfs_dev. That device is the parent of the disk. It is guaranteed to not be freed until the disk, as a child, drops its ->parent reference.

  We could also fix this issue locally in nvdimm_revalidate_disk() by using disk_to_dev(disk)->parent, but lets fix it globally since ->driverfs_dev follows the lifetime of the parent. Longer term we should probably just add a @parent parameter to add_disk(), and stop carrying this pointer in the gendisk.

  BUG: unable to handle kernel NULL pointer dereference at (null)
  IP: [<ffffffffa00340a8>] nvdimm_revalidate_disk+0x18/0x90 [libnvdimm]
  CPU: 2 PID: 538 Comm: systemd-udevd Tainted: G O 4.4.0-rc5 #2257
  [..]
  Call Trace:
   [<ffffffff8143e5c7>] rescan_partitions+0x87/0x2c0
   [<ffffffff810f37f9>] ? __lock_is_held+0x49/0x70
   [<ffffffff81438c62>] __blkdev_reread_part+0x72/0xb0
   [<ffffffff81438cc5>] blkdev_reread_part+0x25/0x40
   [<ffffffff8143982d>] blkdev_ioctl+0x4fd/0x9c0
   [<ffffffff811246c9>] ? current_kernel_time64+0x69/0xd0
   [<ffffffff812916dd>] block_ioctl+0x3d/0x50
   [<ffffffff81264c38>] do_vfs_ioctl+0x308/0x560
   [<ffffffff8115dbd1>] ? __audit_syscall_entry+0xb1/0x100
   [<ffffffff810031d6>] ? do_audit_syscall_entry+0x66/0x70
   [<ffffffff81264f09>] SyS_ioctl+0x79/0x90
   [<ffffffff81902672>] entry_SYSCALL_64_fastpath+0x12/0x76

  Cc: Jan Kara <jack@suse.cz>
  Cc: Jens Axboe <axboe@fb.com>
  Reported-by: Robert Hu <robert.hu@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions (Dan Williams, 2017-04-27; 1 file, -13/+32)

  commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.

  Before we rework the "pmem api" to stop abusing __copy_user_nocache() for memcpy_to_pmem() we need to fix cases where we may strand dirty data in the cpu cache. The problem occurs when copy_from_iter_pmem() is used for arbitrary data transfers from userspace. There is no guarantee that these transfers, performed by dax_iomap_actor(), will have aligned destinations or aligned transfer lengths. Backstop the usage __copy_user_nocache() with explicit cache management in these unaligned cases.

  Yes, copy_from_iter_pmem() is now too big for an inline, but addressing that is saved for a later patch that moves the entirety of the "pmem api" into the pmem driver directly.

  Fixes: 5de490daec8b ("pmem: add copy_from_iter_pmem() and clear_pmem()")
  Cc: <x86@kernel.org>
  Cc: Jan Kara <jack@suse.cz>
  Cc: Jeff Moyer <jmoyer@redhat.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Christoph Hellwig <hch@lst.de>
  Cc: "H. Peter Anvin" <hpa@zytor.com>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Matthew Wilcox <mawilcox@microsoft.com>
  Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
  Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* hv: don't reset hv_context.tsc_page on crash (Vitaly Kuznetsov, 2017-04-27; 1 file, -2/+3)

  commit 56ef6718a1d8d77745033c5291e025ce18504159 upstream.

  It may happen that secondary CPUs are still alive and resetting hv_context.tsc_page will cause a consequent crash in read_hv_clock_tsc() as we don't check for it being not NULL there. It is safe as we're not freeing this page anyways.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Drivers: hv: balloon: account for gaps in hot add regions (Vitaly Kuznetsov, 2017-04-27; 1 file, -37/+94)

  commit cb7a5724c7e1bfb5766ad1c3beba14cc715991cf upstream.

  I'm observing the following hot add requests from the WS2012 host:

  hot_add_req: start_pfn = 0x108200 count = 330752
  hot_add_req: start_pfn = 0x158e00 count = 193536
  hot_add_req: start_pfn = 0x188400 count = 239616

  As the host doesn't specify hot add regions we're trying to create 128Mb-aligned region covering the first request, we create the 0x108000 - 0x160000 region and we add 0x108000 - 0x158e00 memory. The second request passes the pfn_covered() check, we enlarge the region to 0x108000 - 0x190000 and add 0x158e00 - 0x188200 memory. The problem emerges with the third request as it starts at 0x188400 so there is a 0x200 gap which is not covered. As the end of our region is 0x190000 now it again passes the pfn_covered() check were we just adjust the covered_end_pfn and make it 0x188400 instead of 0x188200 which means that we'll try to online 0x188200-0x188400 pages but these pages were never assigned to us and we crash.

  We can't react to such requests by creating new hot add regions as it may happen that the whole suggested range falls into the previously identified 128Mb-aligned area so we'll end up adding nothing or create intersecting regions and our current logic doesn't allow that. Instead, create a list of such 'gaps' and check for them in the page online callback.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Drivers: hv: balloon: keep track of where ha_region starts (Vitaly Kuznetsov, 2017-04-27; 1 file, -2/+5)

  commit 7cf3b79ec85ee1a5bbaaf936bb1d050dc652983b upstream.

  Windows 2012 (non-R2) does not specify hot add region in hot add requests and the logic in hot_add_req() is trying to find a 128Mb-aligned region covering the request. It may also happen that host's requests are not 128Mb aligned and the created ha_region will start before the first specified PFN. We can't online these non-present pages but we don't remember the real start of the region.

  This is a regression introduced by the commit 5abbbb75d733 ("Drivers: hv: hv_balloon: don't lose memory when onlining order is not natural"). While the idea of keeping the 'moving window' was wrong (as there is no guarantee that hot add requests come ordered) we should still keep track of covered_start_pfn. This is not a revert, the logic is different.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Tools: hv: kvp: ensure kvp device fd is closed on exec (Vitaly Kuznetsov, 2017-04-27; 1 file, -1/+1)

  commit 26840437cbd6d3625ea6ab34e17cd34bb810c861 upstream.

  KVP daemon does fork()/exec() (with popen()) so we need to close our fds to avoid sharing them with child processes. The immediate implication of not doing so I see is SELinux complaining about 'ip' trying to access '/dev/vmbus/hv_kvp'.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
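  For reference, a small userspace sketch of the close-on-exec pattern the fix relies on; it is a generic example rather than the daemon's actual code, and the device path is only illustrative:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Either open with O_CLOEXEC directly ... */
	int fd = open("/dev/vmbus/hv_kvp", O_RDWR | O_CLOEXEC);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* ... or set the flag afterwards on an already-open descriptor. */
	if (fcntl(fd, F_SETFD, FD_CLOEXEC) < 0)
		perror("fcntl");

	/* Children spawned via popen()/exec() now do not inherit fd. */
	return EXIT_SUCCESS;
}
```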
* kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd (Suzuki K Poulose, 2017-04-27; 1 file, -0/+12)

  commit 8b3405e345b5a098101b0c31b264c812bba045d9 upstream.

  In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling unmap_stage2_range() on the entire memory range for the guest. This could cause problems with other callers (e.g, munmap on a memslot) trying to unmap a range. And since we have to unmap the entire Guest memory range holding a spinlock, make sure we yield the lock if necessary, after we unmap each PUD range.

  Fixes: commit d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
  Cc: Paolo Bonzini <pbonzin@redhat.com>
  Cc: Marc Zyngier <marc.zyngier@arm.com>
  Cc: Christoffer Dall <christoffer.dall@linaro.org>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  [ Avoid vCPU starvation and lockup detector warnings ]
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Christoffer Dall <cdall@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* x86/mce/AMD: Give a name to MCA bank 3 when accessed with legacy MSRs (Yazen Ghannam, 2017-04-27; 1 file, -1/+1)

  commit 29f72ce3e4d18066ec75c79c857bee0618a3504b upstream.

  MCA bank 3 is reserved on systems pre-Fam17h, so it didn't have a name. However, MCA bank 3 is defined on Fam17h systems and can be accessed using legacy MSRs. Without a name we get a stack trace on Fam17h systems when trying to register sysfs files for bank 3 on kernels that don't recognize Scalable MCA.

  Call MCA bank 3 "decode_unit" since this is what it represents on Fam17h. This will allow kernels without SMCA support to see this bank on Fam17h+ and prevent the stack trace. This will not affect older systems since this bank is reserved on them, i.e. it'll be ignored.

  Tested on AMD Fam15h and Fam17h systems.

  WARNING: CPU: 26 PID: 1 at lib/kobject.c:210 kobject_add_internal
  kobject: (ffff88085bb256c0): attempted to be registered with empty name!
  ...
  Call Trace:
   kobject_add_internal
   kobject_add
   kobject_create_and_add
   threshold_create_device
   threshold_init_device

  Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
  Signed-off-by: Borislav Petkov <bp@suse.de>
  Link: http://lkml.kernel.org/r/1490102285-3659-1-git-send-email-Yazen.Ghannam@amd.com
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/kprobe: Fix oops when kprobed on 'stdu' instruction (Ravi Bangoria, 2017-04-27; 1 file, -3/+3)

  commit 9e1ba4f27f018742a1aa95d11e35106feba08ec1 upstream.

  If we set a kprobe on a 'stdu' instruction on powerpc64, we see a kernel OOPS:

  Bad kernel stack pointer cd93c840 at c000000000009868
  Oops: Bad kernel stack pointer, sig: 6 [#1]
  ...
  GPR00: c000001fcd93cb30 00000000cd93c840 c0000000015c5e00 00000000cd93c840
  ...
  NIP [c000000000009868] resume_kernel+0x2c/0x58
  LR [c000000000006208] program_check_common+0x108/0x180

  On a 64-bit system when the user probes on a 'stdu' instruction, the kernel does not emulate actual store in emulate_step() because it may corrupt the exception frame. So the kernel does the actual store operation in exception return code i.e. resume_kernel().

  resume_kernel() loads the saved stack pointer from memory using lwz, which only loads the low 32-bits of the address, causing the kernel crash.

  Fix this by loading the 64-bit value instead.

  Fixes: be96f63375a1 ("powerpc: Split out instruction analysis part of emulate_step()")
  Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
  Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
  [mpe: Change log massage, add stable tag]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ubi/upd: Always flush after prepared for an update (Sebastian Siewior, 2017-04-27; 1 file, -4/+4)

  commit 9cd9a21ce070be8a918ffd3381468315a7a76ba6 upstream.

  In commit 6afaf8a484cb ("UBI: flush wl before clearing update marker") I managed to trigger and fix a similar bug. Now here is another version of which I assumed it wouldn't matter back then but it turns out UBI has a check for it and will error out like this:

  |ubi0 warning: validate_vid_hdr: inconsistent used_ebs
  |ubi0 error: validate_vid_hdr: inconsistent VID header at PEB 592

  All you need to trigger this is: "ubiupdatevol /dev/ubi0_0 file" + a powercut in the middle of the operation. ubi_start_update() sets the update-marker and puts all EBs on the erase list. After that userland can proceed to write new data while the old EB aren't erased completely. A powercut at this point is usually not that much of a tragedy. UBI won't give read access to the static volume because it has the update marker. It will most likely set the corrupted flag because it misses some EBs.

  So we are all good. Unless the size of the image that has been written differs from the old image in the magnitude of at least one EB. In that case UBI will find two different values for `used_ebs' and refuse to attach the image with the error message mentioned above.

  So in order not to get in the situation, the patch will ensure that we wait until everything is removed before it tries to write any data. The alternative would be to detect such a case and remove all EBs at the attached time after we processed the volume-table and see the update-marker set. The patch looks bigger and I doubt it is worth it since usually the write() will wait from time to time for a new EB since usually there not that many spare EB that can be used.

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Richard Weinberger <richard@nod.at>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* mac80211: reject ToDS broadcast data frames (Johannes Berg, 2017-04-27; 1 file, -0/+21)

  commit 3018e947d7fd536d57e2b550c33e456d921fff8c upstream.

  AP/AP_VLAN modes don't accept any real 802.11 multicast data frames, but since they do need to accept broadcast management frames the same is currently permitted for data frames. This opens a security problem because such frames would be decrypted with the GTK, and could even contain unicast L3 frames.

  Since the spec says that ToDS frames must always have the BSSID as the RA (addr1), reject any other data frames.

  The problem was originally reported in "Predicting, Decrypting, and Abusing WPA2/802.11 Group Keys" at usenix https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/vanhoef and brought to my attention by Jouni.

  Reported-by: Jouni Malinen <j@w1.fi>
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* mmc: sdhci-esdhc-imx: increase the pad I/O drive strength for DDR50 card (Haibo Chen, 2017-04-27; 1 file, -0/+1)

  commit 9f327845358d3dd0d8a5a7a5436b0aa5c432e757 upstream.

  Currently for DDR50 card, it need tuning in default. We meet tuning fail issue for DDR50 card and some data CRC error when DDR50 sd card works. This is because the default pad I/O drive strength can't make sure DDR50 card work stable. So increase the pad I/O drive strength for DDR50 card, and use pins_100mhz.

  This fixes DDR50 card support for IMX since DDR50 tuning was enabled from commit 9faac7b95ea4 ("mmc: sdhci: enable tuning for DDR50")

  Tested-and-reported-by: Tim Harvey <tharvey@gateworks.com>
  Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
  Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
  Acked-by: Adrian Hunter <adrian.hunter@intel.com>
  Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ACPI / power: Avoid maybe-uninitialized warning (Arnd Bergmann, 2017-04-27; 1 file, -0/+1)

  commit fe8c470ab87d90e4b5115902dd94eced7e3305c3 upstream.

  gcc -O2 cannot always prove that the loop in acpi_power_get_inferred_state() is entered at least once, so it assumes that cur_state might not get initialized:

  drivers/acpi/power.c: In function 'acpi_power_get_inferred_state':
  drivers/acpi/power.c:222:9: error: 'cur_state' may be used uninitialized in this function [-Werror=maybe-uninitialized]

  This sets the variable to zero at the start of the loop, to ensure that there is well-defined behavior even for an empty list. This gets rid of the warning.

  The warning first showed up when the -Os flag got removed in a bug fix patch in linux-4.11-rc5. I would suggest merging this addon patch on top of that bug fix to avoid introducing a new warning in the stable kernels.

  Fixes: 61b79e16c68d (ACPI: Fix incompatibility with mcount-based function graph tracing)
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Input: elantech - add Fujitsu Lifebook E547 to force crc_enabled (Thorsten Leemhuis, 2017-04-27; 1 file, -0/+8)

  commit 704de489e0e3640a2ee2d0daf173e9f7375582ba upstream.

  Temporary got a Lifebook E547 into my hands and noticed the touchpad only works after running:

  echo "1" > /sys/devices/platform/i8042/serio2/crc_enabled

  Add it to the list of machines that need this workaround.

  Signed-off-by: Thorsten Leemhuis <linux@leemhuis.info>
  Reviewed-by: Ulrik De Bie <ulrik.debie-os@e2big.org>
  Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* VSOCK: Detach QP check should filter out non matching QPs. (Jorgen Hansen, 2017-04-27; 1 file, -2/+2)

  commit 8ab18d71de8b07d2c4d6f984b718418c09ea45c5 upstream.

  The check in vmci_transport_peer_detach_cb should only allow a detach when the qp handle of the transport matches the one in the detach message.

  Testing: Before this change, a detach from a peer on a different socket would cause an active stream socket to register a detach.

  Reviewed-by: George Zhang <georgezhang@vmware.com>
  Signed-off-by: Jorgen Hansen <jhansen@vmware.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Drivers: hv: vmbus: Reduce the delay between retries in vmbus_post_msg() (K. Y. Srinivasan, 2017-04-27; 1 file, -4/+4)

  commit 8de0d7e951826d7592e0ba1da655b175c4aa0923 upstream.

  The current delay between retries is unnecessarily high and is negatively affecting the time it takes to boot the system.

  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Drivers: hv: get rid of timeout in vmbus_open() (Vitaly Kuznetsov, 2017-04-27; 1 file, -6/+1)

  commit 396e287fa2ff46e83ae016cdcb300c3faa3b02f6 upstream.

  vmbus_teardown_gpadl() can result in infinite wait when it is called on 5 second timeout in vmbus_open(). The issue is caused by the fact that gpadl teardown operation won't ever succeed for an opened channel and the timeout isn't always enough. As a guest, we can always trust the host to respond to our request (and there is nothing we can do if it doesn't).

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Drivers: hv: don't leak memory in vmbus_establish_gpadl() (Vitaly Kuznetsov, 2017-04-27; 1 file, -1/+8)

  commit 7cc80c98070ccc7940fc28811c92cca0a681015d upstream.

  In some cases create_gpadl_header() allocates submessages but we never free them.

  [sumits] Note for stable: Upstream commit 4d63763296ab7865a98bc29cc7d77145815ef89f ("Drivers: hv: get rid of redundant messagecount in create_gpadl_header()") changes the list usage to initialize list header in all cases; that patch isn't added to stable, so the current patch is modified a little bit from the upstream commit to check if the list is valid or not.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* s390/mm: fix CMMA vs KSM vs others (Christian Borntraeger, 2017-04-27; 1 file, -0/+2)

  commit a8f60d1fadf7b8b54449fcc9d6b15248917478ba upstream.

  On heavy paging with KSM I see guest data corruption. Turns out that KSM will add pages to its tree, where the mapping return true for pte_unused (or might become as such later). KSM will unmap such pages and reinstantiate with different attributes (e.g. write protected or special, e.g. in replace_page or write_protect_page)). This uncovered a bug in our pagetable handling: We must remove the unused flag as soon as an entry becomes present again.

  Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* CIFS: remove bad_network_name flag (Germano Percossi, 2017-04-27; 2 files, -6/+0)

  commit a0918f1ce6a43ac980b42b300ec443c154970979 upstream.

  STATUS_BAD_NETWORK_NAME can be received during node failover, causing the flag to be set and making the reconnect thread always unsuccessful, thereafter.

  Once the only place where it is set is removed, the remaining bits are rendered moot.

  Removing it does not prevent "mount" from failing when a non existent share is passed.

  What happens when the share really ceases to exist while the share is mounted is undefined now as much as it was before.

  Signed-off-by: Germano Percossi <germano.percossi@citrix.com>
  Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
  Signed-off-by: Steve French <smfrench@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* cifs: Do not send echoes before Negotiate is complete (Sachin Prabhu, 2017-04-27; 1 file, -0/+10)

  commit 62a6cfddcc0a5313e7da3e8311ba16226fe0ac10 upstream.

  commit 4fcd1813e640 ("Fix reconnect to not defer smb3 session reconnect long after socket reconnect") added support for Negotiate requests to be initiated by echo calls.

  To avoid delays in calling echo after a reconnect, I added the patch introduced by the commit b8c600120fc8 ("Call echo service immediately after socket reconnect").

  This has however caused a regression with cifs shares which do not have support for echo calls to trigger Negotiate requests. On connections which need to call Negotiation, the echo calls trigger an error which triggers a reconnect which in turn triggers another echo call. This results in a loop which is only broken when an operation is performed on the cifs share. For an idle share, it can DOS a server.

  The patch uses the smb_operation can_echo() for cifs so that it is called only if connection has been already been setup.

  kernel bz: 194531

  Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
  Tested-by: Jonathan Liu <net147@gmail.com>
  Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
  Signed-off-by: Steve French <smfrench@gmail.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ring-buffer: Have ring_buffer_iter_empty() return true when empty (Steven Rostedt (VMware), 2017-04-27; 1 file, -2/+14)

  commit 78f7a45dac2a2d2002f98a3a95f7979867868d73 upstream.

  I noticed that reading the snapshot file when it is empty no longer gives a status. It suppose to show the status of the snapshot buffer as well as how to allocate and use it. For example:

  ># cat snapshot
  # tracer: nop
  #
  #
  # * Snapshot is allocated *
  #
  # Snapshot commands:
  # echo 0 > snapshot : Clears and frees snapshot buffer
  # echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.
  #                     Takes a snapshot of the main buffer.
  # echo 2 > snapshot : Clears snapshot buffer (but does not allocate or free)
  #                     (Doesn't have to be '2' works with any number that
  #                      is not a '0' or '1')

  But instead it just showed an empty buffer:

  ># cat snapshot
  # tracer: nop
  #
  # entries-in-buffer/entries-written: 0/0   #P:4
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |

  What happened was that it was using the ring_buffer_iter_empty() function to see if it was empty, and if it was, it showed the status. But that function was returning false when it was empty. The reason was that the iter header page was on the reader page, and the reader page was empty, but so was the buffer itself. The check only tested to see if the iter was on the commit page, but the commit page was no longer pointing to the reader page, but as all pages were empty, the buffer is also.

  Fixes: 651e22f2701b ("ring-buffer: Always reset iterator to reader page")
  Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* tracing: Allocate the snapshot buffer before enabling probe (Steven Rostedt (VMware), 2017-04-27; 1 file, -3/+5)

  commit df62db5be2e5f070ecd1a5ece5945b590ee112e0 upstream.

  Currently the snapshot trigger enables the probe and then allocates the snapshot. If the probe triggers before the allocation, it could cause the snapshot to fail and turn tracing off. It's best to allocate the snapshot buffer first, and then enable the trigger. If something goes wrong in the enabling of the trigger, the snapshot buffer is still allocated, but it can also be freed by the user by writing zero into the snapshot buffer file.

  Also add a check of the return status of alloc_snapshot().

  Fixes: 77fd5c15e3 ("tracing: Add snapshot trigger to function probes")
  Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KEYS: fix keyctl_set_reqkey_keyring() to not leak thread keyrings (Eric Biggers, 2017-04-27; 2 files, -24/+31)

  commit c9f838d104fed6f2f61d68164712e3204bf5271b upstream.

  This fixes CVE-2017-7472.

  Running the following program as an unprivileged user exhausts kernel memory by leaking thread keyrings:

  #include <keyutils.h>

  int main()
  {
          for (;;)
                  keyctl_set_reqkey_keyring(KEY_REQKEY_DEFL_THREAD_KEYRING);
  }

  Fix it by only creating a new thread keyring if there wasn't one before. To make things more consistent, make install_thread_keyring_to_cred() and install_process_keyring_to_cred() both return 0 if the corresponding keyring is already present.

  Fixes: d84f4f992cbd ("CRED: Inaugurate COW credentials")
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
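  A sketch of the idea behind the fix, reconstructed for illustration rather than copied from the patch; alloc_thread_keyring() stands in for the real keyring allocation call:

```c
#include <linux/cred.h>
#include <linux/err.h>
#include <linux/key.h>

/* Illustrative sketch only: the point is the early return when a thread
 * keyring already exists, so repeated KEY_REQKEY_DEFL_THREAD_KEYRING
 * requests reuse it instead of leaking a fresh keyring each time. */
static int install_thread_keyring_to_cred_sketch(struct cred *new)
{
	struct key *keyring;

	if (new->thread_keyring)
		return 0;	/* already there: nothing to allocate */

	keyring = alloc_thread_keyring(new);	/* placeholder for keyring_alloc(...) */
	if (IS_ERR(keyring))
		return PTR_ERR(keyring);

	new->thread_keyring = keyring;
	return 0;
}
```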
* KEYS: Change the name of the dead type to ".dead" to prevent user access (David Howells, 2017-04-27; 1 file, -1/+1)

  commit c1644fe041ebaf6519f6809146a77c3ead9193af upstream.

  This fixes CVE-2017-6951.

  Userspace should not be able to do things with the "dead" key type as it doesn't have some of the helper functions set upon it that the kernel needs. Attempting to use it may cause the kernel to crash. Fix this by changing the name of the type to ".dead" so that it's rejected up front on userspace syscalls by key_get_type_from_user().

  Though this doesn't seem to affect recent kernels, it does affect older ones, certainly those prior to:

  commit c06cfb08b88dfbe13be44a69ae2fdc3a7c902d81
  Author: David Howells <dhowells@redhat.com>
  Date:   Tue Sep 16 17:36:06 2014 +0100
  KEYS: Remove key_type::match in favour of overriding default by match_preparse

  which went in before 3.18-rc1.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KEYS: Disallow keyrings beginning with '.' to be joined as session keyrings (David Howells, 2017-04-27; 1 file, -2/+7)

  commit ee8f844e3c5a73b999edf733df1c529d6503ec2f upstream.

  This fixes CVE-2016-9604.

  Keyrings whose name begin with a '.' are special internal keyrings and so userspace isn't allowed to create keyrings by this name to prevent shadowing. However, the patch that added the guard didn't fix KEYCTL_JOIN_SESSION_KEYRING. Not only can that create dot-named keyrings, it can also subscribe to them as a session keyring if they grant SEARCH permission to the user.

  This, for example, allows a root process to set .builtin_trusted_keys as its session keyring, at which point it has full access because now the possessor permissions are added. This permits root to add extra public keys, thereby bypassing module verification.

  This also affects kexec and IMA.

  This can be tested by (as root):

  keyctl session .builtin_trusted_keys
  keyctl add user a a @s
  keyctl list @s

  which on my test box gives me:

  2 keys in keyring:
  180010936: ---lswrv     0     0 asymmetric: Build time autogenerated kernel key: ae3d4a31b82daa8e1a75b49dc2bba949fd992a05
  801382539: --alswrv     0     0 user: a

  Fix this by rejecting names beginning with a '.' in the keyctl.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
  cc: linux-ima-devel@lists.sourceforge.net
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Linux 4.4.63 (tag: v4.4.63) (Greg Kroah-Hartman, 2017-04-21; 1 file, -1/+1)
* MIPS: fix Select HAVE_IRQ_EXIT_ON_IRQ_STACK patch. (Greg Kroah-Hartman, 2017-04-21; 1 file, -2/+2)

  Commit f017e58da4aba293e4a6ab62ca5d4801f79cc929, which was commit 3cc3434fd6307d06b53b98ce83e76bf9807689b9 upstream, was misapplied to the 4.4 stable kernel. This patch fixes this and moves the chunk to the proper Kconfig area.

  Reported-by: "Maciej W. Rozycki" <macro@linux-mips.org>
  Cc: Matt Redfearn <matt.redfearn@imgtec.com>
  Cc: Jason A. Donenfeld <jason@zx2c4.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Cc: Amit Pundir <amit.pundir@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* sctp: deny peeloff operation on asocs with threads sleeping on it (Marcelo Ricardo Leitner, 2017-04-21; 1 file, -2/+6)

  commit dfcb9f4f99f1e9a49e43398a7bfbf56927544af1 upstream.

  commit 2dcab5984841 ("sctp: avoid BUG_ON on sctp_wait_for_sndbuf") attempted to avoid a BUG_ON call when the association being used for a sendmsg() is blocked waiting for more sndbuf and another thread did a peeloff operation on such asoc, moving it to another socket.

  As Ben Hutchings noticed, then in such case it would return without locking back the socket and would cause two unlocks in a row.

  Further analysis also revealed that it could allow a double free if the application managed to peeloff the asoc that is created during the sendmsg call, because then sctp_sendmsg() would try to free the asoc that was created only for that call.

  This patch takes another approach. It will deny the peeloff operation if there is a thread sleeping on the asoc, so this situation doesn't exist anymore. This avoids the issues described above and also honors the syscalls that are already being handled (it can be multiple sendmsg calls).

  Joint work with Xin Long.

  Fixes: 2dcab5984841 ("sctp: avoid BUG_ON on sctp_wait_for_sndbuf")
  Cc: Alexander Popov <alex.popov@linux.com>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Signed-off-by: Xin Long <lucien.xin@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* net: ipv6: check route protocol when deleting routes (Mantas M, 2017-04-21; 1 file, -0/+2)

  commit c2ed1880fd61a998e3ce40254a99a2ad000f1a7d upstream.

  The protocol field is checked when deleting IPv4 routes, but ignored for IPv6, which causes problems with routing daemons accidentally deleting externally set routes (observed by multiple bird6 users). This can be verified using `ip -6 route del <prefix> proto something`.

  Signed-off-by: Mantas Mikulėnas <grawity@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
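  A hedged sketch of the kind of check involved; the struct and field names are recalled from the IPv6 FIB code of that era and should be treated as illustrative, not as the exact patch:

```c
/* Sketch: when handling a route-delete request, honour the protocol given in
 * the request (e.g. "proto bird"), mirroring what the IPv4 path already does. */
static bool route_matches_delete_request(const struct fib6_config *cfg,
					 const struct rt6_info *rt)
{
	/* the request specified a protocol and this route was installed
	 * with a different one: not a match, keep the route */
	if (cfg->fc_protocol && cfg->fc_protocol != rt->rt6i_protocol)
		return false;

	/* ... existing prefix/ifindex/gateway checks would continue here ... */
	return true;
}
```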
* tty/serial: atmel: RS485 half duplex w/DMA: enable RX after TX is done (Richard Genoud, 2017-04-21; 1 file, -6/+5)

  commit b389f173aaa1204d6dc1f299082a162eb0491545 upstream.

  When using RS485 in half duplex, RX should be enabled when TX is finished, and stopped when TX starts.

  Before commit 0058f0871efe7b01c6 ("tty/serial: atmel: fix RS485 half duplex with DMA"), RX was not disabled in atmel_start_tx() if the DMA was used. So, collisions could happened.

  But disabling RX in atmel_start_tx() uncovered another bug: RX was enabled again in the wrong place (in atmel_tx_dma) instead of being enabled when TX is finished (in atmel_complete_tx_dma), so the transmission simply stopped.

  This bug was not triggered before commit 0058f0871efe7b01c6 ("tty/serial: atmel: fix RS485 half duplex with DMA") because RX was never disabled before.

  Moving atmel_start_rx() in atmel_complete_tx_dma() corrects the problem.

  Reported-by: Gil Weber <webergil@gmail.com>
  Fixes: 0058f0871efe7b01c6
  Tested-by: Gil Weber <webergil@gmail.com>
  Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
  Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
  Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
  Tested-by: Bryan Evenson <bevenson@melinkcorp.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* SUNRPC: fix refcounting problems with auth_gss messages. (NeilBrown, 2017-04-21; 1 file, -1/+6)

  commit 1cded9d2974fe4fe339fc0ccd6638b80d465ab2c upstream.

  There are two problems with refcounting of auth_gss messages.

  First, the reference on the pipe->pipe list (taken by a call to rpc_queue_upcall()) is not counted. It seems to be assumed that a message in pipe->pipe will always also be in pipe->in_downcall, where it is correctly reference counted. However there is no guaranty of this. I have a report of a NULL dereferences in rpc_pipe_read() which suggests a msg that has been freed is still on the pipe->pipe list.

  One way I imagine this might happen is:

  - message is queued for uid=U and auth->service=S1
  - rpc.gssd reads this message and starts processing. This removes the message from pipe->pipe
  - message is queued for uid=U and auth->service=S2
  - rpc.gssd replies to the first message. gss_pipe_downcall() calls __gss_find_upcall(pipe, U, NULL) and it finds the *second* message, as new messages are placed at the head of ->in_downcall, and the service type is not checked.
  - This second message is removed from ->in_downcall and freed by gss_release_msg() (even though it is still on pipe->pipe)
  - rpc.gssd tries to read another message, and dereferences a pointer to this message that has just been freed.

  I fix this by incrementing the reference count before calling rpc_queue_upcall(), and decrementing it if that fails, or normally in gss_pipe_destroy_msg().

  It seems strange that the reply doesn't target the message more precisely, but I don't know all the details. In any case, I think the reference counting irregularity became a measureable bug when the extra arg was added to __gss_find_upcall(), hence the Fixes: line below.

  The second problem is that if rpc_queue_upcall() fails, the new message is not freed. gss_alloc_msg() set the ->count to 1, gss_add_msg() increments this to 2, gss_unhash_msg() decrements to 1, then the pointer is discarded so the memory never gets freed.

  Fixes: 9130b8dbc6ac ("SUNRPC: allow for upcalls for same uid but different gss service")
  Link: https://bugzilla.opensuse.org/show_bug.cgi?id=1011250
  Signed-off-by: NeilBrown <neilb@suse.com>
  Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ibmveth: calculate gso_segs for large packets (Thomas Falcon, 2017-04-21; 1 file, -2/+10)

  commit 94acf164dc8f1184e8d0737be7125134c2701dbe upstream.

  Include calculations to compute the number of segments that comprise an aggregated large packet.

  Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
  Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Reviewed-by: Jonathan Maxwell <jmaxwell37@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* catc: Use heap buffer for memory size test (Ben Hutchings, 2017-04-21; 1 file, -7/+18)

  commit 2d6a0e9de03ee658a9adc3bfb2f0ca55dff1e478 upstream.

  Allocating USB buffers on the stack is not portable, and no longer works on x86_64 (with VMAP_STACK enabled as per default).

  Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
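  The same stack-to-heap pattern recurs in the rtl8150 and pegasus fixes below. A generic, hedged sketch of it (the vendor request code, value/index and length are made up for illustration):

```c
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/usb.h>

/* Sketch: USB transfer buffers must be DMA-able, so kmalloc() them instead of
 * placing them on the (possibly vmalloc'ed) stack. */
static int read_vendor_reg(struct usb_device *udev, u8 req, u16 value,
			   u16 index, void *out, u16 len)
{
	int ret;
	u8 *buf = kmalloc(len, GFP_KERNEL);	/* heap, not stack */

	if (!buf)
		return -ENOMEM;

	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), req,
			      USB_TYPE_VENDOR | USB_DIR_IN | USB_RECIP_DEVICE,
			      value, index, buf, len, 1000);
	if (ret >= 0)
		memcpy(out, buf, len);	/* copy result back to the caller */

	kfree(buf);
	return ret;
}
```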
* catc: Combine failure cleanup code in catc_probe() (Ben Hutchings, 2017-04-21; 1 file, -16/+17)

  commit d41149145f98fe26dcd0bfd1d6cc095e6e041418 upstream.

  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* rtl8150: Use heap buffers for all register access (Ben Hutchings, 2017-04-21; 1 file, -7/+27)

  commit 7926aff5c57b577ab0f43364ff0c59d968f6a414 upstream.

  Allocating USB buffers on the stack is not portable, and no longer works on x86_64 (with VMAP_STACK enabled as per default).

  Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* pegasus: Use heap buffers for all register access (Ben Hutchings, 2017-04-21; 1 file, -4/+25)

  commit 5593523f968bc86d42a035c6df47d5e0979b5ace upstream.

  Allocating USB buffers on the stack is not portable, and no longer works on x86_64 (with VMAP_STACK enabled as per default).

  Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
  References: https://bugs.debian.org/852556
  Reported-by: Lisandro Damián Nicanor Pérez Meyer <lisandro@debian.org>
  Tested-by: Lisandro Damián Nicanor Pérez Meyer <lisandro@debian.org>
  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* virtio-console: avoid DMA from stack (Omar Sandoval, 2017-04-21; 1 file, -2/+10)

  commit c4baad50297d84bde1a7ad45e50c73adae4a2192 upstream.

  put_chars() stuffs the buffer it gets into an sg, but that buffer may be on the stack. This breaks with CONFIG_VMAP_STACK=y (for me, it manifested as printks getting turned into NUL bytes).

  Signed-off-by: Omar Sandoval <osandov@fb.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
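  A hedged sketch of the bounce-buffer idea; it is not the exact driver change, and send_buf() is a placeholder assumed to complete the transfer before returning:

```c
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Sketch: copy the caller's (possibly on-stack) bytes into a kmalloc'ed
 * bounce buffer before building the scatterlist, so the DMA target is
 * guaranteed to live in the linear mapping rather than on a vmalloc'ed stack. */
static int put_chars_sketch(const char *buf, int count)
{
	struct scatterlist sg;
	void *data;
	int ret;

	data = kmemdup(buf, count, GFP_ATOMIC);	/* bounce copy off the stack */
	if (!data)
		return -ENOMEM;

	sg_init_one(&sg, data, count);
	ret = send_buf(&sg, 1);	/* placeholder: assumed synchronous send */

	kfree(data);
	return ret;
}
```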
* dvb-usb-firmware: don't do DMA on stack (Stefan Brüns, 2017-04-21; 1 file, -10/+12)

  commit 67b0503db9c29b04eadfeede6bebbfe5ddad94ef upstream.

  The buffer allocation for the firmware data was changed in commit 43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load") but the same applies for the reset value.

  Fixes: 43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load")
  Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
  Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* dvb-usb: don't use stack for firmware load (Mauro Carvalho Chehab, 2017-04-21; 1 file, -6/+13)

  commit 43fab9793c1f44e665b4f98035a14942edf03ddc upstream.

  As reported by Marc Duponcheel <marc@offline.be>, firmware load on dvb-usb is using the stack, which is not allowed anymore on default Kernel configurations:

  [ 1025.958836] dvb-usb: found a 'WideView WT-220U PenType Receiver (based on ZL353)' in cold state, will try to load a firmware
  [ 1025.958853] dvb-usb: downloading firmware from file 'dvb-usb-wt220u-zl0353-01.fw'
  [ 1025.958855] dvb-usb: could not stop the USB controller CPU.
  [ 1025.958856] dvb-usb: error while transferring firmware (transferred size: -11, block size: 3)
  [ 1025.958856] dvb-usb: firmware download failed at 8 with -22
  [ 1025.958867] usbcore: registered new interface driver dvb_usb_dtt200u

  [    2.789902] dvb-usb: downloading firmware from file 'dvb-usb-wt220u-zl0353-01.fw'
  [    2.789905] ------------[ cut here ]------------
  [    2.789911] WARNING: CPU: 3 PID: 2196 at drivers/usb/core/hcd.c:1584 usb_hcd_map_urb_for_dma+0x430/0x560 [usbcore]
  [    2.789912] transfer buffer not dma capable
  [    2.789912] Modules linked in: btusb dvb_usb_dtt200u(+) dvb_usb_af9035(+) btrtl btbcm dvb_usb dvb_usb_v2 btintel dvb_core bluetooth rc_core rfkill x86_pkg_temp_thermal intel_powerclamp coretemp crc32_pclmul aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd drm_kms_helper syscopyarea sysfillrect pcspkr i2c_i801 sysimgblt fb_sys_fops drm i2c_smbus i2c_core r8169 lpc_ich mfd_core mii thermal fan rtc_cmos video button acpi_cpufreq processor snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm snd_timer snd crc32c_intel ahci libahci libata xhci_pci ehci_pci xhci_hcd ehci_hcd usbcore usb_common dm_mirror dm_region_hash dm_log dm_mod
  [    2.789936] CPU: 3 PID: 2196 Comm: systemd-udevd Not tainted 4.9.0-gentoo #1
  [    2.789937] Hardware name: ASUS All Series/H81I-PLUS, BIOS 0401 07/23/2013
  [    2.789938]  ffffc9000339b690 ffffffff812bd397 ffffc9000339b6e0 0000000000000000
  [    2.789939]  ffffc9000339b6d0 ffffffff81055c86 000006300339b6a0 ffff880116c0c000
  [    2.789941]  0000000000000000 0000000000000000 0000000000000001 ffff880116c08000
  [    2.789942] Call Trace:
  [    2.789945]  [<ffffffff812bd397>] dump_stack+0x4d/0x66
  [    2.789947]  [<ffffffff81055c86>] __warn+0xc6/0xe0
  [    2.789948]  [<ffffffff81055cea>] warn_slowpath_fmt+0x4a/0x50
  [    2.789952]  [<ffffffffa006d460>] usb_hcd_map_urb_for_dma+0x430/0x560 [usbcore]
  [    2.789954]  [<ffffffff814ed5a8>] ? io_schedule_timeout+0xd8/0x110
  [    2.789956]  [<ffffffffa006e09c>] usb_hcd_submit_urb+0x9c/0x980 [usbcore]
  [    2.789958]  [<ffffffff812d0ebf>] ? copy_page_to_iter+0x14f/0x2b0
  [    2.789960]  [<ffffffff81126818>] ? pagecache_get_page+0x28/0x240
  [    2.789962]  [<ffffffff8118c2a0>] ? touch_atime+0x20/0xa0
  [    2.789964]  [<ffffffffa006f7c4>] usb_submit_urb+0x2c4/0x520 [usbcore]
  [    2.789967]  [<ffffffffa006feca>] usb_start_wait_urb+0x5a/0xe0 [usbcore]
  [    2.789969]  [<ffffffffa007000c>] usb_control_msg+0xbc/0xf0 [usbcore]
  [    2.789970]  [<ffffffffa067903d>] usb_cypress_writemem+0x3d/0x40 [dvb_usb]
  [    2.789972]  [<ffffffffa06791cf>] usb_cypress_load_firmware+0x4f/0x130 [dvb_usb]
  [    2.789973]  [<ffffffff8109dbbe>] ? console_unlock+0x2fe/0x5d0
  [    2.789974]  [<ffffffff8109e10c>] ? vprintk_emit+0x27c/0x410
  [    2.789975]  [<ffffffff8109e40a>] ? vprintk_default+0x1a/0x20
  [    2.789976]  [<ffffffff81124d76>] ? printk+0x43/0x4b
  [    2.789977]  [<ffffffffa0679310>] dvb_usb_download_firmware+0x60/0xd0 [dvb_usb]
  [    2.789979]  [<ffffffffa0679898>] dvb_usb_device_init+0x3d8/0x610 [dvb_usb]
  [    2.789981]  [<ffffffffa069e302>] dtt200u_usb_probe+0x92/0xd0 [dvb_usb_dtt200u]
  [    2.789984]  [<ffffffffa007420c>] usb_probe_interface+0xfc/0x270 [usbcore]
  [    2.789985]  [<ffffffff8138bf95>] driver_probe_device+0x215/0x2d0
  [    2.789986]  [<ffffffff8138c0e6>] __driver_attach+0x96/0xa0
  [    2.789987]  [<ffffffff8138c050>] ? driver_probe_device+0x2d0/0x2d0
  [    2.789988]  [<ffffffff81389ffb>] bus_for_each_dev+0x5b/0x90
  [    2.789989]  [<ffffffff8138b7b9>] driver_attach+0x19/0x20
  [    2.789990]  [<ffffffff8138b33c>] bus_add_driver+0x11c/0x220
  [    2.789991]  [<ffffffff8138c91b>] driver_register+0x5b/0xd0
  [    2.789994]  [<ffffffffa0072f6c>] usb_register_driver+0x7c/0x130 [usbcore]
  [    2.789994]  [<ffffffffa06a5000>] ? 0xffffffffa06a5000
  [    2.789996]  [<ffffffffa06a501e>] dtt200u_usb_driver_init+0x1e/0x20 [dvb_usb_dtt200u]
  [    2.789997]  [<ffffffff81000408>] do_one_initcall+0x38/0x140
  [    2.789998]  [<ffffffff8116001c>] ? __vunmap+0x7c/0xc0
  [    2.789999]  [<ffffffff81124fb0>] ? do_init_module+0x22/0x1d2
  [    2.790000]  [<ffffffff81124fe8>] do_init_module+0x5a/0x1d2
  [    2.790002]  [<ffffffff810c96b1>] load_module+0x1e11/0x2580
  [    2.790003]  [<ffffffff810c68b0>] ? show_taint+0x30/0x30
  [    2.790004]  [<ffffffff81177250>] ? kernel_read_file+0x100/0x190
  [    2.790005]  [<ffffffff810c9ffa>] SyS_finit_module+0xba/0xc0
  [    2.790007]  [<ffffffff814f13e0>] entry_SYSCALL_64_fastpath+0x13/0x94
  [    2.790008] ---[ end trace c78a74e78baec6fc ]---

  So, allocate the structure dynamically.

  Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
  [bwh: Backported to 4.9: adjust context]
  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* mm: Tighten x86 /dev/mem with zeroing reads (Kees Cook, 2017-04-21; 2 files, -41/+82)

  commit a4866aa812518ed1a37d8ea0c881dc946409de94 upstream.

  Under CONFIG_STRICT_DEVMEM, reading System RAM through /dev/mem is disallowed. However, on x86, the first 1MB was always allowed for BIOS and similar things, regardless of it actually being System RAM. It was possible for heap to end up getting allocated in low 1MB RAM, and then read by things like x86info or dd, which would trip hardened usercopy:

  usercopy: kernel memory exposure attempt detected from ffff880000090000 (dma-kmalloc-256) (4096 bytes)

  This changes the x86 exception for the low 1MB by reading back zeros for System RAM areas instead of blindly allowing them. More work is needed to extend this to mmap, but currently mmap doesn't go through usercopy, so hardened usercopy won't Oops the kernel.

  Reported-by: Tommi Rantala <tommi.t.rantala@nokia.com>
  Tested-by: Tommi Rantala <tommi.t.rantala@nokia.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Cc: Brad Spengler <spender@grsecurity.net>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
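  A hedged sketch of the zeroing-read idea for the low 1MB window; this is illustrative, not the actual drivers/char/mem.c change:

```c
#include <linux/mm.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Sketch: inside the low-1MB x86 exception, pages that are System RAM are
 * handed back to the reader as zeroes; everything else is copied as before. */
static ssize_t copy_lowmem_chunk(char __user *buf, phys_addr_t p,
				 const void *kbuf, size_t sz)
{
	if (p < 1024 * 1024 && page_is_ram(p >> PAGE_SHIFT)) {
		if (clear_user(buf, sz))	/* read back zeroes */
			return -EFAULT;
	} else {
		if (copy_to_user(buf, kbuf, sz))
			return -EFAULT;
	}
	return sz;
}
```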
* rtc: tegra: Implement clock handling (Thierry Reding, 2017-04-21; 1 file, -2/+26)

  commit 5fa4086987506b2ab8c92f8f99f2295db9918856 upstream.

  Accessing the registers of the RTC block on Tegra requires the module clock to be enabled. This only works because the RTC module clock will be enabled by default during early boot. However, because the clock is unused, the CCF will disable it at late_init time. This causes the RTC to become unusable afterwards. This can easily be reproduced by trying to use the RTC:

  $ hwclock --rtc /dev/rtc1

  This will hang the system. I ran into this by following up on a report by Martin Michlmayr that reboot wasn't working on Tegra210 systems. It turns out that the rtc-tegra driver's ->shutdown() implementation will hang the CPU, because of the disabled clock, before the system can be rebooted.

  What confused me for a while is that the same driver is used on prior Tegra generations where the hang can not be observed. However, as Peter De Schrijver pointed out, this is because on 32-bit Tegra chips the RTC clock is enabled by the tegra20_timer.c clocksource driver, which uses the RTC to provide a persistent clock. This code is never enabled on 64-bit Tegra because the persistent clock infrastructure does not exist on 64-bit ARM.

  The proper fix for this is to add proper clock handling to the RTC driver in order to ensure that the clock is enabled when the driver requires it. All device trees contain the clock already, therefore no additional changes are required.

  Reported-by: Martin Michlmayr <tbm@cyrius.com>
  Acked-By Peter De Schrijver <pdeschrijver@nvidia.com>
  Signed-off-by: Thierry Reding <treding@nvidia.com>
  Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
  [bwh: Backported to 4.9: adjust context]
  Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* platform/x86: acer-wmi: setup accelerometer when machine has appropriate notify event (Lee, Chun-Yi, 2017-04-21; 1 file, -4/+18)

  commit 98d610c3739ac354319a6590b915f4624d9151e6 upstream.

  The accelerometer event relies on the ACERWMID_EVENT_GUID notify. So, this patch changes the codes to setup accelerometer input device when detected ACERWMID_EVENT_GUID. It avoids that the accel input device created on every Acer machines.

  In addition, patch adds a clearly parsing logic of accelerometer hid to acer_wmi_get_handle_cb callback function. It is positive matching the "SENR" name with "BST0001" device to avoid non-supported hardware.

  Reported-by: Bjørn Mork <bjorn@mork.no>
  Cc: Darren Hart <dvhart@infradead.org>
  Signed-off-by: Lee, Chun-Yi <jlee@suse.com>
  [andy: slightly massage commit message]
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ext4: fix inode checksum calculation problem if i_extra_size is small (Daeho Jeong, 2017-04-21; 1 file, -3/+2)

  commit 05ac5aa18abd7db341e54df4ae2b4c98ea0e43b7 upstream.

  We've fixed the race condition problem in calculating ext4 checksum value in commit b47820edd163 ("ext4: avoid modifying checksum fields directly during checksum verification"). However, by this change, when calculating the checksum value of inode whose i_extra_size is less than 4, we couldn't calculate the checksum value in a proper way. This problem was found and reported by Nix, Thank you.

  Reported-by: Nix <nix@esperi.org.uk>
  Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
  Signed-off-by: Youngjin Gil <youngjin.gil@samsung.com>
  Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
  Signed-off-by: Theodore Ts'o <tytso@mit.edu>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* dvb-usb-v2: avoid use-after-free (Arnd Bergmann, 2017-04-21; 1 file, -4/+5)

  commit 005145378c9ad7575a01b6ce1ba118fb427f583a upstream.

  I ran into a stack frame size warning because of the on-stack copy of the USB device structure:

  drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
  drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

  Copying a device structure like this is wrong for a number of other reasons too aside from the possible stack overflow. One of them is that the dev_info() call will print the name of the device later, but AFAICT we have only copied a pointer to the name earlier and the actual name has been freed by the time it gets printed.

  This removes the on-stack copy of the device and instead copies the device name using kstrdup(). I'm ignoring the possible failure here as both printk() and kfree() are able to deal with NULL pointers.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* ath9k: fix NULL pointer dereference (Miaoqing Pan, 2017-04-21; 1 file, -1/+7)

  commit 40bea976c72b9ee60f8d097852deb53ccbeaffbe upstream.

  relay_open() may return NULL, check the return value to avoid the crash.

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000040
  IP: [<ffffffffa01a95c5>] ath_cmn_process_fft+0xd5/0x700 [ath9k_common]
  PGD 41cf28067 PUD 41be92067 PMD 0
  Oops: 0000 [#1] SMP
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.6+ #35
  Hardware name: Hewlett-Packard h8-1080t/2A86, BIOS 6.15 07/04/2011
  task: ffffffff81e0c4c0 task.stack: ffffffff81e00000
  RIP: 0010:[<ffffffffa01a95c5>]  [<ffffffffa01a95c5>] ath_cmn_process_fft+0xd5/0x700 [ath9k_common]
  RSP: 0018:ffff88041f203ca0  EFLAGS: 00010293
  RAX: 0000000000000000 RBX: 000000000000059f RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000040 RDI: ffffffff81f0ca98
  RBP: ffff88041f203dc8 R08: ffffffffffffffff R09: 00000000000000ff
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: ffffffff81f0ca98 R14: 0000000000000000 R15: 0000000000000000
  FS:  0000000000000000(0000) GS:ffff88041f200000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000040 CR3: 000000041b6ec000 CR4: 00000000000006f0
  Stack:
   0000000000000363 00000000000003f3 00000000000003f3 00000000000001f9
   000000000000049a 0000000001252c04 ffff88041f203e44 ffff880417b4bfd0
   0000000000000008 ffff88041785b9c0 0000000000000002 ffff88041613dc60
  Call Trace:
   <IRQ>
   [<ffffffffa01b6441>] ath9k_tasklet+0x1b1/0x220 [ath9k]
   [<ffffffff8105d8dd>] tasklet_action+0x4d/0xf0
   [<ffffffff8105dde2>] __do_softirq+0x92/0x2a0

  Reported-by: Devin Tuchsen <devin.tuchsen@gmail.com>
  Tested-by: Devin Tuchsen <devin.tuchsen@gmail.com>
  Signed-off-by: Miaoqing Pan <miaoqing@codeaurora.org>
  Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
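  A hedged sketch of the defensive pattern; the structure, channel name, sizes and callbacks are illustrative, not the driver's actual ones:

```c
#include <linux/kernel.h>
#include <linux/relay.h>

/* Sketch: relay_open() can fail, so leave the feature disabled rather than
 * storing NULL and dereferencing it later in the FFT processing path. */
static void spectral_relay_setup(struct spectral_priv *priv,
				 struct dentry *debugfs_dir,
				 struct rchan_callbacks *cb)
{
	priv->rfs_chan = relay_open("spectral_scan", debugfs_dir,
				    1024, 256, cb, NULL);
	if (!priv->rfs_chan) {
		pr_warn("spectral: relay_open failed, spectral scan disabled\n");
		return;
	}
}
```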
* crypto: ahash - Fix EINPROGRESS notification callback (Herbert Xu, 2017-04-21; 2 files, -29/+60)

  commit ef0579b64e93188710d48667cb5e014926af9f1b upstream.

  The ahash API modifies the request's callback function in order to clean up after itself in some corner cases (unaligned final and missing finup).

  When the request is complete ahash will restore the original callback and everything is fine. However, when the request gets an EBUSY on a full queue, an EINPROGRESS callback is made while the request is still ongoing.

  In this case the ahash API will incorrectly call its own callback.

  This patch fixes the problem by creating a temporary request object on the stack which is used to relay EINPROGRESS back to the original completion function.

  This patch also adds code to preserve the original flags value.

  Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
  Reported-by: Sabrina Dubroca <sd@queasysnail.net>
  Tested-by: Sabrina Dubroca <sd@queasysnail.net>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc: Disable HFSCR[TM] if TM is not supported (Benjamin Herrenschmidt, 2017-04-21; 1 file, -0/+9)

  commit 7ed23e1bae8bf7e37fd555066550a00b95a3a98b upstream.

  On Power8 & Power9 the early CPU initialisation in __init_HFSCR() turns on HFSCR[TM] (Hypervisor Facility Status and Control Register [Transactional Memory]), but that doesn't take into account that TM might be disabled by CPU features, or disabled by the kernel being built with CONFIG_PPC_TRANSACTIONAL_MEM=n.

  So later in boot, when we have setup the CPU features, clear HSCR[TM] if the TM CPU feature has been disabled. We use CPU_FTR_TM_COMP to account for the CONFIG_PPC_TRANSACTIONAL_MEM=n case.

  Without this a KVM guest might try use TM, even if told not to, and cause an oops in the host kernel. Typically the oops is seen in __kvmppc_vcore_entry() and may or may not be fatal to the host, but is always bad news.

  In practice all shipping CPU revisions do support TM, and all host kernels we are aware of build with TM support enabled, so no one should actually be able to hit this in the wild.

  Fixes: 2a3563b023e5 ("powerpc: Setup in HFSCR for POWER8")
  Cc: stable@vger.kernel.org # v3.10+
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Tested-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
  [mpe: Rewrite change log with input from Sam, add Fixes/stable]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  [sb: Backported to linux-4.4.y: adjusted context]
  Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
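  A hedged sketch of the idea (not the exact patch); register and feature names follow the powerpc headers but are quoted from memory:

```c
#include <linux/init.h>
#include <asm/cputable.h>
#include <asm/reg.h>

/* Sketch: once CPU features are finalised, clear the TM bit in HFSCR when the
 * hypervisor is running and transactional memory is disabled or not built in,
 * so guests cannot be handed a facility the host does not support. */
static void __init fixup_hfscr_tm(void)
{
	if (cpu_has_feature(CPU_FTR_HVMODE) &&
	    !cpu_has_feature(CPU_FTR_TM_COMP))
		mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
}
```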
* zram: do not use copy_page with non-page aligned address (Minchan Kim, 2017-04-21; 1 file, -3/+3)

  commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.

  The copy_page is optimized memcpy for page-alinged address. If it is used with non-page aligned address, it can corrupt memory which means system corruption. With zram, it can happen with

  1. 64K architecture
  2. partial IO
  3. slub debug

  Partial IO need to allocate a page and zram allocates it via kmalloc. With slub debug, kmalloc(PAGE_SIZE) doesn't return page-size aligned address. And finally, copy_page(mem, cmem) corrupts memory.

  So, this patch changes it to memcpy.

  Actuaully, we don't need to change zram_bvec_write part because zsmalloc returns page-aligned address in case of PAGE_SIZE class but it's not good to rely on the internal of zsmalloc.

  Note: When this patch is merged to stable, clear_page should be fixed, too. Unfortunately, recent zram removes it by "same page merge" feature so it's hard to backport this patch to -stable tree. I will handle it when I receive the mail from stable tree maintainer to merge this patch to backport.

  Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
  Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
  Signed-off-by: Minchan Kim <minchan@kernel.org>
  Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
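  A hedged sketch of the distinction the patch relies on; this is illustrative, not the zram code itself:

```c
#include <linux/mm.h>
#include <linux/string.h>

/* Sketch: copy_page() assumes both pointers are PAGE_SIZE aligned, so buffers
 * that may come from kmalloc() (e.g. with SLUB debug redzones shifting the
 * returned address) must be copied with plain memcpy() instead. */
static void copy_decompressed(void *dst, const void *src, bool page_aligned)
{
	if (page_aligned)
		copy_page(dst, src);		/* fast path: both page aligned */
	else
		memcpy(dst, src, PAGE_SIZE);	/* safe for arbitrary alignment */
}
```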