path: root/fs
Commit message | Author | Age | Files | Lines
* ocfs2: Remove unused truncate function from alloc.c | Tao Ma | 2011-01-07 | 2 | -78/+0
    Tristan Ye has done some refactoring of our truncate process, so functions like ocfs2_prepare_truncate and ocfs2_free_truncate_context are no longer used; we'd better remove them.
    Signed-off-by: Tao Ma <tao.ma@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: dereferencing before checking in nst_seq_show() | Dan Carpenter | 2011-01-07 | 1 | -23/+24
    In the original code, we dereferenced "nst" before checking that it was non-NULL. I moved the check forward and pulled the code in an indent level.
    Signed-off-by: Dan Carpenter <error27@gmail.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
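    The fix described above is the standard "check before use" ordering. A minimal, standalone C illustration of the pattern (the struct and function names here are made up, not the o2net code):

        #include <stdio.h>

        struct tracker {
            const char *name;
            unsigned long msgs;
        };

        /* Dereference the pointer only after the NULL check, mirroring the
         * reordering described in the commit above. */
        static int show_tracker(const struct tracker *t)
        {
            if (t == NULL)
                return 0;               /* bail out before touching *t */

            printf("%s: %lu messages\n", t->name, t->msgs);
            return 1;
        }

        int main(void)
        {
            struct tracker nst = { "node0", 42 };

            show_tracker(&nst);
            show_tracker(NULL);         /* safe: no dereference happens */
            return 0;
        }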
* ocfs2: fix build for OCFS2_FS_STATS not enabled | Randy Dunlap | 2011-01-07 | 1 | -0/+2
    When CONFIG_OCFS2_FS_STATS is not enabled:
      fs/ocfs2/cluster/tcp.c:1254: error: implicit declaration of function 'o2net_update_recv_stats'
    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: Mark Fasheh <mfasheh@suse.com>
    Cc: Joel Becker <joel.becker@oracle.com>
    Cc: ocfs2-devel@oss.oracle.com
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
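    The usual fix for this kind of build error is to supply an empty static inline stub when the option is compiled out, so call sites build either way. A hedged, standalone sketch; CONFIG_DEMO_STATS, demo_sock and demo_update_recv_stats are stand-ins, not the real ocfs2 names:

        /* When the stats option is disabled, an empty inline stub keeps the
         * callers compiling without any #ifdefs at the call sites. */
        struct demo_sock;

        #ifdef CONFIG_DEMO_STATS
        void demo_update_recv_stats(struct demo_sock *sc);
        #else
        static inline void demo_update_recv_stats(struct demo_sock *sc)
        {
            (void)sc;                   /* stats disabled: nothing to record */
        }
        #endif

        int main(void)
        {
            demo_update_recv_stats(0);  /* compiles whether or not the option is set */
            return 0;
        }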
* ocfs2/cluster: Show o2net timing statistics | Sunil Mushran | 2010-12-22 | 1 | -54/+157
    Adds debugfs dentry o2net/stats to show the o2net timing statistics.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Track process message timing stats for each socket | Sunil Mushran | 2010-12-22 | 2 | -0/+17
    Tracks total time taken to process messages received on a socket.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Track send message timing stats for each socket | Sunil Mushran | 2010-12-22 | 2 | -0/+30
    Tracks total send and status times for all messages sent on a socket.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Use ktime instead of timeval in struct o2net_sock_container | Sunil Mushran | 2010-12-22 | 3 | -62/+76
    Convert the time trackers in struct o2net_sock_container from struct timeval to union ktime.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Replace timeval with ktime in struct o2net_send_tracking | Sunil Mushran | 2010-12-22 | 3 | -15/+19
    Convert the time trackers in struct o2net_send_tracking from struct timeval to union ktime.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
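    A kernel-style sketch of what elapsed-time tracking with ktime typically looks like, as opposed to struct timeval arithmetic. Illustrative only, not the o2net code; the struct and helpers below are invented, while ktime_get(), ktime_sub() and ktime_to_us() are standard kernel primitives:

        #include <linux/ktime.h>

        struct demo_send_tracking {
            ktime_t st_sock_time;       /* when the message was queued on the socket */
            ktime_t st_send_time;       /* when the send actually started */
        };

        static inline void demo_set_send_start(struct demo_send_tracking *st)
        {
            st->st_send_time = ktime_get();
        }

        static inline s64 demo_send_elapsed_us(const struct demo_send_tracking *st)
        {
            /* elapsed = now - start, reported in microseconds */
            return ktime_to_us(ktime_sub(ktime_get(), st->st_send_time));
        }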
* ocfs2: Add DEBUG_FS dependency | Sunil Mushran | 2010-12-22 | 1 | -1/+1
    Make OCFS2_FS_STATS depend on DEBUG_FS.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/dlm: Hard code the values for enums | Sunil Mushran | 2010-12-22 | 1 | -43/+43
    In o2dlm, the enumerated message values are part of the protocol. The patch hard codes each value so as to reduce the chance of an editing error causing a protocol mismatch.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
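    The idea is simply that every on-the-wire value gets written out explicitly, so adding or reordering an enumerator can never silently renumber the protocol. An illustrative, standalone example with made-up names and values (not the real o2dlm message list):

        /* Explicit values: inserting a new message type later cannot shift
         * the numbers that peers already rely on. */
        enum demo_dlm_msg {
            DEMO_MASTER_REQUEST_MSG = 500,
            DEMO_UNUSED_MSG         = 501,  /* value stays reserved even if unused */
            DEMO_ASSERT_MASTER_MSG  = 502,
            DEMO_CREATE_LOCK_MSG    = 503,
            DEMO_CONVERT_LOCK_MSG   = 504,
            DEMO_PROXY_AST_MSG      = 505,
        };

        int main(void)
        {
            /* Both ends of the wire can depend on the numeric values. */
            return DEMO_CREATE_LOCK_MSG == 503 ? 0 : 1;
        }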
* ocfs2/dlm: Minor cleanup | Sunil Mushran | 2010-12-22 | 2 | -27/+15
    Patch makes use of task_pid_nr(). Also removes the null check before calling debugfs_remove().
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
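    For context, a kernel-style sketch of the two cleanups (illustrative, not the dlm code): task_pid_nr(current) replaces open-coded pid lookups, and debugfs_remove() accepts a NULL dentry, which is why the explicit guard can go:

        #include <linux/debugfs.h>
        #include <linux/printk.h>
        #include <linux/sched.h>

        static struct dentry *demo_debug_dentry;    /* may be NULL if creation failed */

        static void demo_log_owner(void)
        {
            pr_info("owned by pid %d\n", task_pid_nr(current));
        }

        static void demo_debug_shutdown(void)
        {
            /* No NULL check needed: debugfs_remove(NULL) is a no-op. */
            debugfs_remove(demo_debug_dentry);
            demo_debug_dentry = NULL;
        }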
* ocfs2/dlm: Cleanup dlmdebug.c | Sunil Mushran | 2010-12-22 | 2 | -117/+66
    Remove struct debug_buffer in dlmdebug.c/h.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2: Release buffer_head in case of error in ocfs2_double_lock. | Tao Ma | 2010-12-22 | 1 | -1/+4
    In ocfs2_double_lock, when ocfs2_inode_lock for inode1 fails, we just unlock inode2 and return without releasing the buffer we got from inode_lock(inode2). The good thing is that it is freed by the only caller, ocfs2_rename, when it exits, but that is not the right way to handle the error. We should free the buffer_head we got in ocfs2_double_lock before exiting, so that the caller doesn't need to take care of it.
    Signed-off-by: Tao Ma <boyu.mt@taobao.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
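    A pseudocode-style sketch of the error path being described; every type and helper below is a hypothetical stand-in (only brelse() is the real buffer_head release helper), so this shows the shape of the fix, not the actual ocfs2 code:

        static int demo_double_lock(struct demo_inode *inode1, struct demo_inode *inode2,
                                    struct buffer_head **bh1, struct buffer_head **bh2)
        {
            int ret;

            ret = demo_inode_lock(inode2, bh2);     /* take lock #2 and read its buffer */
            if (ret)
                return ret;

            ret = demo_inode_lock(inode1, bh1);
            if (ret) {
                /* Undo everything taken so far so the caller never sees
                 * half-acquired state. */
                brelse(*bh2);
                *bh2 = NULL;
                demo_inode_unlock(inode2);
                return ret;
            }
            return 0;
        }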
* ocfs2/cluster: Pin the local node when o2hb thread starts | Sunil Mushran | 2010-12-16 | 1 | -0/+6
    The patch pins the node item of the local node when the o2hb thread starts and unpins it on stop. An earlier patch pinned the node item of the remote node on o2net connect and unpinned it on disconnect.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Show pin state for each o2hb region | Sunil Mushran | 2010-12-16 | 1 | -0/+23
    This patch adds a per o2hb region debugfs file that shows whether that region is pinned or not.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Pin/unpin o2hb regions | Sunil Mushran | 2010-12-16 | 2 | -42/+182
    This patch adds support for pinning o2hb regions in configfs. Pinning prevents a region from being cleanly stopped as long as it has an active dependent user (read: o2dlm).
    In local heartbeat mode, the region whose uuid matches the domain name is pinned as long as the o2dlm domain is active. In global heartbeat mode, all regions are pinned as long as there is at least one dependent user and the region count is 3 or less. All regions are unpinned if the number of dependent users is zero or the region count is greater than 3.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
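    The global-heartbeat pin policy above reduces to a small predicate. A standalone, illustrative restatement (not the configfs code):

        #include <stdbool.h>
        #include <stdio.h>

        /* Global heartbeat mode: keep regions pinned while there is at least
         * one dependent user and the region count is 3 or less. */
        static bool regions_should_be_pinned(unsigned int dependent_users,
                                             unsigned int region_count)
        {
            return dependent_users > 0 && region_count <= 3;
        }

        int main(void)
        {
            printf("%d\n", regions_should_be_pinned(1, 3)); /* 1: pinned */
            printf("%d\n", regions_should_be_pinned(0, 2)); /* 0: no dependent users */
            printf("%d\n", regions_should_be_pinned(2, 4)); /* 0: more than 3 regions */
            return 0;
        }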
* ocfs2/cluster: Remove dropped region from o2hb quorum region bitmap | Sunil Mushran | 2010-12-16 | 1 | -0/+1
    Patch removes a dropped region from the quorum region bitmap maintained by o2hb.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/cluster: Pin the remote node item in configfs | Sunil Mushran | 2010-12-16 | 1 | -0/+9
    o2net pins the node item of the remote node in configfs before initiating the connection. It is unpinned on disconnect. This is to prevent the node item from being unlinked while it is still in use.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/dlm: make existing conversions take precedence over new locks | Wengang Wang | 2010-12-16 | 1 | -0/+3
    Give existing lock conversions precedence over new lock requests. This makes o2dlm locking behave more like fair locking.
    Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2/dlm: Cleanup mlogs in dlmthread.c, dlmast.c and dlmdomain.c | Sunil Mushran | 2010-12-16 | 3 | -90/+120
    Add the domain name and the resource name in the mlogs.
    Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
* ocfs2: Try to free truncate log when meeting ENOSPC in write. | Tao Ma | 2010-12-16 | 3 | -1/+66
    Recently, one of our colleagues ran into a problem: if we repeatedly write and delete a 32MB file, we eventually get ENOSPC. The corresponding bug is 1288:
      http://oss.oracle.com/bugzilla/show_bug.cgi?id=1288
    The real problem is that although we have freed the clusters, they sit in the truncate log, where they are accumulated so that they can all be freed in one batch. This patch resolves it as follows: if we see -ENOSPC in ocfs2_write_begin_no_lock, we check whether the truncate log holds enough clusters for our need; if it does, we flush the truncate log at that point and try again. This method was inspired by Mark Fasheh <mfasheh@suse.com>. Thanks.
    Cc: Mark Fasheh <mfasheh@suse.com>
    Signed-off-by: Tao Ma <tao.ma@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
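    The retry logic described above, sketched in pseudocode-style C. All helper names here are hypothetical, not the ocfs2 API; the point is the shape: on ENOSPC, flush the truncate log if it holds enough clusters, then retry once:

        #include <errno.h>

        static int demo_write_begin(struct demo_fs *fs, unsigned int clusters_needed)
        {
            int ret = demo_try_allocate(fs, clusters_needed);

            if (ret == -ENOSPC &&
                demo_truncate_log_clusters(fs) >= clusters_needed) {
                ret = demo_flush_truncate_log(fs);  /* actually release the clusters */
                if (ret == 0)
                    ret = demo_try_allocate(fs, clusters_needed);
            }
            return ret;
        }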
* Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 | Linus Torvalds | 2010-12-15 | 4 | -4/+18
    * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
      ext4: fix typo which broke '..' detection in ext4_find_entry()
      ext4: Turn off multiple page-io submission by default
| * ext4: fix typo which broke '..' detection in ext4_find_entry() | Aaro Koskinen | 2010-12-14 | 1 | -1/+1
    There should be a check for the NUL character instead of '0'. Fortunately the only thing that cares about this is NFS serving, which is why we didn't notice this in the merge window testing.
    Reported-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
    Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
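    The typo was the digit character '0' (0x30) where the NUL terminator '\0' was intended. A standalone demo of the difference (modelled on, but not copied from, the ext4 check):

        #include <stdio.h>

        /* With the typo ('0' instead of '\0'), a name like ".0" would have
         * matched and "." itself would not have. */
        static int is_dot_or_dotdot(const char *name, int namelen)
        {
            return namelen <= 2 && name[0] == '.' &&
                   (name[1] == '.' || name[1] == '\0');
        }

        int main(void)
        {
            printf("'.'  -> %d\n", is_dot_or_dotdot(".", 1));   /* 1 */
            printf("'..' -> %d\n", is_dot_or_dotdot("..", 2));  /* 1 */
            printf("'.0' -> %d\n", is_dot_or_dotdot(".0", 2));  /* 0 */
            return 0;
        }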
| * ext4: Turn off multiple page-io submission by default | Theodore Ts'o | 2010-12-14 | 3 | -3/+17
    Jon Nelson has found a test case which causes postgresql to fail with the error:
      psql:t.sql:4: ERROR: invalid page header in block 38269 of relation base/16384/16581
    Under memory pressure, it looks like part of a file can end up getting replaced by zeros. Until we can figure out the cause, we'll roll back the change and use block_write_full_page() instead of ext4_bio_write_page(). The new, more efficient writing function can be used via the mount option mblk_io_submit, so we can test and fix the new page I/O code.
    To reproduce the problem, install postgres 8.4 or 9.0, and pin enough memory such that the system is just on the edge of triggering writeback before running the following sql script:
      begin;
      create temporary table foo as select x as a, ARRAY[x] as b FROM generate_series(1, 10000000 ) AS x;
      create index foo_a_idx on foo (a);
      create index foo_b_idx on foo USING GIN (b);
      rollback;
    If the temporary table is created on a hard drive partition which is encrypted using dm_crypt, then under memory pressure, approximately 30-40% of the time, pgsql will issue the above failure.
    This patch should fix this problem, and the problem will come back if the file system is mounted with the mblk_io_submit mount option.
    Reported-by: Jon Nelson <jnelson@jamponi.net>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* | install_special_mapping skips security_file_mmap check. | Tavis Ormandy | 2010-12-15 | 1 | -0/+5
    The install_special_mapping routine (used, for example, to set up the vdso) skips the security check before insert_vm_struct, allowing a local attacker to bypass the mmap_min_addr security restriction by limiting the available pages for special mappings. bprm_mm_init() also skips the check, and although I don't think this can be used to bypass any restrictions, I don't see any reason not to have the security check.
      $ uname -m
      x86_64
      $ cat /proc/sys/vm/mmap_min_addr
      65536
      $ cat install_special_mapping.s
      section .bss
          resb BSS_SIZE
      section .text
          global _start
          _start:
              mov eax, __NR_pause
              int 0x80
      $ nasm -D__NR_pause=29 -DBSS_SIZE=0xfffed000 -f elf -o install_special_mapping.o install_special_mapping.s
      $ ld -m elf_i386 -Ttext=0x10000 -Tbss=0x11000 -o install_special_mapping install_special_mapping.o
      $ ./install_special_mapping &
      [1] 14303
      $ cat /proc/14303/maps
      0000f000-00010000 r-xp 00000000 00:00 0 [vdso]
      00010000-00011000 r-xp 00001000 00:19 2453665 /home/taviso/install_special_mapping
      00011000-ffffe000 rwxp 00000000 00:00 0 [stack]
    It's worth noting that Red Hat are shipping with mmap_min_addr set to 4096.
    Signed-off-by: Tavis Ormandy <taviso@google.com>
    Acked-by: Kees Cook <kees@ubuntu.com>
    Acked-by: Robert Swiecki <swiecki@google.com>
    [ Changed to not drop the error code - akpm ]
    Reviewed-by: James Morris <jmorris@namei.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'for-2.6.37' of git://linux-nfs.org/~bfields/linux | Linus Torvalds | 2010-12-14 | 2 | -13/+14
    * 'for-2.6.37' of git://linux-nfs.org/~bfields/linux:
      nfsd: Fix possible BUG_ON firing in set_change_info
      sunrpc: prevent use-after-free on clearing XPT_BUSY
| * | nfsd: Fix possible BUG_ON firing in set_change_info | Neil Brown | 2010-12-08 | 2 | -13/+14
    If vfs_getattr in fill_post_wcc returns an error, we don't set fh_post_change. For NFSv4, this can result in set_change_info triggering a BUG_ON, even though fh_post_saved being zero isn't really a bug. So:
    - instead of BUGging when fh_post_saved is zero, just clear ->atomic.
    - if vfs_getattr fails in fill_post_wcc, take a copy of i_ctime anyway. This will be used in set_change_info, but not overly trusted.
    - while we are there, remove the pointless 'if' statements in set_change_info. There is no harm in setting all the values.
    Signed-off-by: NeilBrown <neilb@suse.de>
    Cc: stable@kernel.org
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable | Linus Torvalds | 2010-12-14 | 11 | -94/+207
    * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
      Btrfs: prevent RAID level downgrades when space is low
      Btrfs: account for missing devices in RAID allocation profiles
      Btrfs: EIO when we fail to read tree roots
      Btrfs: fix compiler warnings
      Btrfs: Make async snapshot ioctl more generic
      Btrfs: pwrite blocked when writing from the mmaped buffer of the same page
      Btrfs: Fix a crash when mounting a subvolume
      Btrfs: fix sync subvol/snapshot creation
      Btrfs: Fix page leak in compressed writeback path
      Btrfs: do not BUG if we fail to remove the orphan item for dead snapshots
      Btrfs: fixup return code for btrfs_del_orphan_item
      Btrfs: do not do fast caching if we are allocating blocks for tree_root
      Btrfs: deal with space cache errors better
      Btrfs: fix use after free in O_DIRECT
| * | | Btrfs: prevent RAID level downgrades when space is low | Chris Mason | 2010-12-13 | 1 | -1/+19
    The extent allocator has code that allows us to fill allocations from any available block group, even if it doesn't match the raid level we've requested. This was put in because adding a new drive to a filesystem made with the default mkfs options actually upgrades the metadata from single spindle dup to full RAID1.
    But the code also allows us to allocate from a raid0 chunk when we really want a raid1 or raid10 chunk. This can cause big trouble because mkfs creates a small (4MB) raid0 chunk for data and metadata which then goes unused for raid1/raid10 installs. The allocator will happily wander in and allocate from that chunk when things get tight, which is not correct.
    The fix here is to make sure that we provide duplication when the caller has asked for it. It does allow the dups to be any raid level, which preserves the dup->raid1 upgrade abilities.
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: account for missing devices in RAID allocation profiles | Chris Mason | 2010-12-13 | 3 | -3/+36
    When we mount in RAID degraded mode without adding a new device to replace the failed one, we can end up using the wrong RAID flags for allocations. This results in strange combinations of block groups (raid1 in a raid10 filesystem) and corruptions when we try to allocate blocks from single spindle chunks on drives that are actually missing.
    The first device has two small 4MB chunks in it that mkfs creates and these are usually unused in a raid1 or raid10 setup. But, in -o degraded, the allocator will fall back to these because the mask of desired raid groups isn't correct.
    The fix here is to count the missing devices as we build up the list of devices in the system. This count is used when picking the raid level to make sure we continue using the same levels that were in place before we lost a drive.
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: EIO when we fail to read tree roots | Chris Mason | 2010-12-13 | 1 | -1/+4
    If we just get a plain IO error when we read tree roots, the code wasn't properly sending that error up the chain. This allowed mounts to continue when they should have failed, and allowed operations on partially set up root structs. The end result was usually oopsen on spinlocks that hadn't been spun up correctly.
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: fix compiler warnings | Jan Beulich | 2010-12-10 | 2 | -7/+5
    ... regarding an unused function when !MIGRATION, and regarding a printk() format string vs argument mismatch.
    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: Make async snapshot ioctl more generic | Li Zefan | 2010-12-10 | 2 | -22/+36
    If we had reserved some bytes in struct btrfs_ioctl_vol_args, we wouldn't have to create a new structure for async snapshot creation. Here we convert the async snapshot ioctl to use a more generic ABI, as we'll add more ioctls for snapshots/subvolumes in the future, readonly snapshots for example.
    Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: pwrite blocked when writing from the mmaped buffer of the same page | Xin Zhong | 2010-12-10 | 1 | -32/+60
    This problem was found in meego testing:
      http://bugs.meego.com/show_bug.cgi?id=6672
    A file in btrfs is mmaped and the mmaped buffer is passed to pwrite to write to the same page of the same file. In btrfs_file_aio_write(), the pages are locked by prepare_pages(), so when btrfs_copy_from_user() is called, a page fault happens and the same page needs to be locked again in filemap_fault().
    The fix is to move iov_iter_fault_in_readable() before prepare_pages() to make the page fault happen before the pages are locked, and also to disable page faults in the critical region in btrfs_copy_from_user().
    Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
    Signed-off-by: Zhong, Xin <xin.zhong@intel.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: Fix a crash when mounting a subvolume | Li Zefan | 2010-12-10 | 1 | -1/+1
    We should drop the dentry before deactivating the superblock, otherwise we can hit this bug:
      BUG: Dentry f349a690{i=100,n=/} still in use (1) [unmount of btrfs loop1]
      ...
    Steps to reproduce the bug:
      # mount /dev/loop1 /mnt
      # mkdir save
      # btrfs subvolume snapshot /mnt save/snap1
      # umount /mnt
      # mount -o subvol=save/snap1 /dev/loop1 /mnt
      (crash)
    Reported-by: Michael Niederle <mniederle@gmx.at>
    Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: fix sync subvol/snapshot creation | Sage Weil | 2010-12-10 | 1 | -9/+11
    We were incorrectly taking the async path even for the sync ioctls by passing in &transid unconditionally. There's ample room for further cleanup here, but this keeps the fix simple.
    Signed-off-by: Sage Weil <sage@newdream.net>
    Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: Fix page leak in compressed writeback path | Yan, Zheng | 2010-12-10 | 1 | -1/+1
    "start + num_bytes >= actual_end" can happen when compressed page writeback races with file truncation. In that case we need to unlock and release the pages past the end of the file.
    Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | Btrfs: do not BUG if we fail to remove the orphan item for dead snapshots | Josef Bacik | 2010-12-10 | 1 | -3/+7
    Not being able to delete an orphan item isn't a horrible thing. The worst that happens is that the next time we do the orphan cleanup and can't find the referenced object, we just delete the item and move on.
    Signed-off-by: Josef Bacik <josef@redhat.com>
| * | | Btrfs: fixup return code for btrfs_del_orphan_item | Josef Bacik | 2010-12-09 | 1 | -1/+5
    If the orphan item doesn't exist, we return 1, which doesn't make any sense to the callers. Instead return -ENOENT if we didn't find the item. Thanks,
    Signed-off-by: Josef Bacik <josef@redhat.com>
| * | | Btrfs: do not do fast caching if we are allocating blocks for tree_root | Josef Bacik | 2010-12-09 | 1 | -6/+12
    Since the fast caching uses normal tree locking, we can possibly deadlock if we get to the caching via a btrfs_search_slot() on the tree_root. So just check to see if the root we are on is the tree root, and just don't do the fast caching.
    Reported-by: Sage Weil <sage@newdream.net>
    Signed-off-by: Josef Bacik <josef@redhat.com>
| * | | Btrfs: deal with space cache errors better | Josef Bacik | 2010-12-09 | 2 | -9/+13
    Currently, if the space cache inode generation number doesn't match the generation number in the space cache header, we will just fail to load the space cache, but we won't mark the space cache as an error, so we'll keep getting that error each time somebody tries to cache that block group until we actually clear the thing. Fix this by marking the space cache as having an error so we only get the message once. This patch also makes it so that we don't try to set up the space cache for a block group that isn't cached, since we won't be able to write it out anyway. None of these problems are actual problems, they are just annoying and sub-optimal. Thanks,
    Signed-off-by: Josef Bacik <josef@redhat.com>
| * | | Btrfs: fix use after free in O_DIRECT | Josef Bacik | 2010-12-09 | 1 | -2/+1
    This fixes a bug where we use dip after we have freed it. Instead just use the file_offset that was passed to the function. Thanks,
    Signed-off-by: Josef Bacik <josef@redhat.com>
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse | Linus Torvalds | 2010-12-14 | 1 | -6/+66
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
      fuse: verify ioctl retries
      fuse: fix ioctl when server is 32bit
| * | | | fuse: verify ioctl retries | Miklos Szeredi | 2010-11-30 | 1 | -0/+22
    Verify that the total length of the iovec returned in FUSE_IOCTL_RETRY doesn't overflow iov_length().
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    CC: Tejun Heo <tj@kernel.org>
    CC: <stable@kernel.org> [2.6.31+]
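    A standalone illustration of the kind of validation being added: make sure the per-segment lengths of an iovec cannot overflow when summed. This is not the fuse code; it only demonstrates the overflow check:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/uio.h>

        static int iov_total_length_ok(const struct iovec *iov, size_t count,
                                       size_t *total_out)
        {
            size_t total = 0;

            for (size_t i = 0; i < count; i++) {
                if (iov[i].iov_len > SIZE_MAX - total)
                    return 0;           /* the sum would overflow */
                total += iov[i].iov_len;
            }
            *total_out = total;
            return 1;
        }

        int main(void)
        {
            struct iovec iov[2] = {
                { .iov_base = NULL, .iov_len = SIZE_MAX - 10 },
                { .iov_base = NULL, .iov_len = 100 },
            };
            size_t total;

            printf("ok=%d\n", iov_total_length_ok(iov, 2, &total)); /* ok=0 */
            return 0;
        }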
| * | | | fuse: fix ioctl when server is 32bit | Miklos Szeredi | 2010-11-30 | 1 | -6/+44
    If a 32bit CUSE server is run on 64bit, this results in EIO being returned to the caller. The reason is that the FUSE_IOCTL_RETRY reply was defined to use 'struct iovec', which is different on 32bit and 64bit archs. Work around this by looking at the size of the reply to determine which struct was used. This is only needed if CONFIG_COMPAT is defined. A more permanent fix for the interface will be to use the same struct on both 32bit and 64bit.
    Reported-by: "ccmail111" <ccmail111@yahoo.com>
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    CC: Tejun Heo <tj@kernel.org>
    CC: <stable@kernel.org> [2.6.31+]
* | | | Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs | Linus Torvalds | 2010-12-14 | 1 | -0/+1
    * 'for-linus' of git://oss.sgi.com/xfs/xfs:
      xfs: log timestamp changes to the source inode in rename
| * | | | xfs: log timestamp changes to the source inode in rename | Christoph Hellwig | 2010-12-09 | 1 | -0/+1
    Now that we don't mark VFS inodes dirty anymore for internal timestamp changes, but rely on the transaction subsystem to push them out, we need to explicitly log the source inode in rename after updating its timestamps to make sure the changes actually get forced out by sync/fsync or an AIL push. We already account for the fourth inode in the log reservation, as a rename of directories needs to update the nlink field, so just adding the xfs_trans_log_inode call is enough.
    This fixes the xfsqa 065 regression introduced by:
      "xfs: don't use vfs writeback for pure metadata modifications"
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Dave Chinner <dchinner@redhat.com>
    Signed-off-by: Alex Elder <aelder@sgi.com>
* | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client | Linus Torvalds | 2010-12-14 | 5 | -61/+111
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
      ceph: fix ioctl magic
      ceph: Behave better when handling file lock replies.
      ceph: pass lock information by struct file_lock instead of as individual params.
      ceph: Handle file locks in replies from the MDS.
      ceph: avoid possible null deref in readdir after dir llseek
| * | | | | ceph: fix ioctl magic | Sage Weil | 2010-12-06 | 1 | -1/+1
    The ioctl magic was inadvertently changed in 571dba52.
    Signed-off-by: Sage Weil <sage@newdream.net>
| * | | | | ceph: Behave better when handling file lock replies. | Herb Shiu | 2010-12-01 | 1 | -8/+30
    Fill in the local lock with response data if appropriate, and don't call posix_lock_file when reading locks.
    Signed-off-by: Herb Shiu <herb_shiu@tcloudcomputing.com>
    Acked-by: Greg Farnum <gregf@hq.newdream.net>
    Signed-off-by: Sage Weil <sage@newdream.net>