This fixes a problem I encountered while running bonnie++. When you have one
thread that opens a file and starts to write to it, and then another thread that
tries to open and write to the same file, the second thread will loop forever
trying to grab the inode lock for that inode. Basically we come in through
generic_buffered_file_write, which calls gfs2_prepare_write, which then attempts
to grab the glock. Because we don't own the lock, gfs2_prepare_write gets
GLR_TRYFAILED, which returns AOP_TRUNCATED_PAGE to generic_buffered_file_write.
At this point generic_buffered_file_write loops around again and immediately
retries the prepare_write. This means that the second process never gets off the
processor to allow the process that holds the lock to finish its work and release
the lock. This patch makes gfs2_glock_nq call schedule() when it gets back
GLR_TRYFAILED, which resolves the problem.
Signed-off-by: Josef Whiter <jwhiter@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
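The fix amounts to yielding the CPU instead of busy-looping when a try-lock fails. A minimal sketch of the same idea in plain pthreads (not the GFS2 glock code; the lock and function names here are invented for the example):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Keep retrying the try-lock, but give the current holder a chance
     * to run between attempts instead of spinning on the CPU. */
    static void take_lock_politely(void)
    {
        while (pthread_mutex_trylock(&inode_lock) != 0)
            sched_yield();  /* analogous to calling schedule() on GLR_TRYFAILED */
    }

    int main(void)
    {
        take_lock_politely();
        printf("got the lock\n");
        pthread_mutex_unlock(&inode_lock);
        return 0;
    }

Build with -pthread; without the yield, the retry loop would monopolise the CPU exactly as described above.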
|
On Mon, Jan 29, 2007 at 08:45:28PM -0800, Andrew Morton wrote:
>...
> Changes since 2.6.20-rc6-mm2:
>...
> git-gfs2-nmw.patch
>...
> git trees
>...
This patch makes the needlessly global gfs2_writepages() static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
When the try lock of the glock failed in prepare_write we were
incorrectly exiting this function with the page still locked.
This was resulting in further I/O to this page hanging.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
It occurred to me that although a gfs2 specific writepages for ordered
writes and journaled data would be tricky, by hooking writepages only
for "data=writeback" mounts we could take advantage of not needing
buffer heads (we don't use them on the read side, nor have we for some
time) and create much larger I/Os for the block layer.
Using blktrace both before and after, it's possible to see that for large
I/Os, most of the requests generated through writepages are now 1024
sectors after this patch is applied, as opposed to 8 sectors before.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
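For reference, the request sizes quoted from blktrace work out as follows, assuming the usual 512-byte sectors and 4KiB pages:

    #include <stdio.h>

    int main(void)
    {
        /* blktrace reports request sizes in 512-byte sectors */
        printf("8 sectors    = %d bytes (a single 4KiB page per request)\n", 8 * 512);
        printf("1024 sectors = %d KiB per request\n", (1024 * 512) / 1024);
        return 0;
    }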
|
This is partially derived from a patch written by Russell Cattelan.
It fixes a bug where there is a race between readpages and truncate
by ignoring readpages for stuffed files. This is OK because a stuffed
file will never be more than one block (minus sizeof(struct gfs2_dinode))
in size, and the block size is never larger than the page size, so we do not
lose anything efficiency-wise by not doing readahead for stuffed files. They
will already have been "read ahead" by the act of reading the inode
in, in the first place.
This is the remaining part of the fix for Red Hat bugzilla #218966
which had not yet made it upstream.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Russell Cattelan <cattelan@redhat.com>
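A small standalone sketch of why a stuffed inode always fits in its first page; the dinode header size below is an assumed figure used only for illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int block_size   = 4096; /* typical GFS2 block size */
        unsigned int page_size    = 4096;
        unsigned int dinode_bytes = 232;  /* assumed sizeof(struct gfs2_dinode), for illustration */

        unsigned int max_stuffed = block_size - dinode_bytes;

        printf("max stuffed file size: %u bytes\n", max_stuffed);
        printf("fits in the first page: %s\n",
               max_stuffed <= page_size ? "yes" : "no");
        return 0;
    }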
|
This patch fixes Red Hat bugzilla #212627 in which a deadlock occurs
due to trying to take the i_mutex while holding a glock. The correct
locking order is defined as i_mutex -> glock in all cases.
I've left dealing with allocating writes for later. I know that we need to do
that, but for now this should do the trick. We don't need to take the
i_mutex on write, because the VFS has already taken it for us. On read
we don't need it, since the glock is enough protection. The reason
I've made some of the checks into a separate function is that we'll need
to do the checks again in the allocating write case eventually, so this
is partly in preparation for that. Likewise, the return value test of !=
1 might look a bit odd; that's because we'll need a third return value
in case an allocation is required.
I've changed to deferred mode on the glock to ensure that read caches
on other nodes are flushed. I notice (using blktrace to look at
what's going on) that we appear to do a better job of large I/Os than ext3
after this patch, in terms of not splitting up the I/Os.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Wendy Cheng <wcheng@redhat.com>
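The deadlock being fixed is the classic AB/BA ordering problem; a minimal sketch (plain pthreads, names purely illustrative) of why agreeing on a single order, i_mutex before glock, avoids it:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t i_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t glock   = PTHREAD_MUTEX_INITIALIZER;

    /* Every path takes i_mutex first and the glock second.  If one path
     * took them in the opposite order, two threads could each hold one
     * lock while waiting forever for the other. */
    static void locked_operation(const char *who)
    {
        pthread_mutex_lock(&i_mutex);
        pthread_mutex_lock(&glock);
        printf("%s: holding both locks in the agreed order\n", who);
        pthread_mutex_unlock(&glock);
        pthread_mutex_unlock(&i_mutex);
    }

    int main(void)
    {
        locked_operation("read path");
        locked_operation("write path");
        return 0;
    }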
|
Writes to stuffed files were not being marked dirty correctly.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
The gfs2_glock_nq_m_atime function is effectively unused, in that it's only
ever called with num_gh = 1, and that case falls through to the
gfs2_glock_nq_atime function, so we might as well call that directly.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Stuffed files consist of a maximum of
(gfs2 block size - sizeof(struct gfs2_dinode)) bytes. Since the
gfs2 block size is never larger than the page size, we will never see
a call to stuffed_readpage for anything other than the first page
in the file.
Signed-off-by: Russell Cattelan <cattelan@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This fixes a race between the glock and the page lock encountered
during truncate in gfs2_readpage and gfs2_prepare_write. The gfs2_readpages
function doesn't need the same fix since it only uses a try lock anyway, so
it will fall back to gfs2_readpage in the case of a potential deadlock.
This bug was spotted by Russell Cattelan.
Cc: Russell Cattelan <cattelan@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Remove the di_[amc]time fields and use inode->i_[amc]time
fields instead. This saves 24 bytes from the gfs2_inode.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Remove duplicate di_uid/di_gid fields in favour of using
inode->i_uid/inode->i_gid instead. This saves 8 bytes.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This removes the duplicate di_mode field in favour of using the
inode->i_mode field. This saves 4 bytes.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
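The savings in these three patches come purely from dropping fields that duplicate what the VFS inode already stores. Assuming the on-disk-sized types below (an assumption used only for illustration), the arithmetic matches the commit messages:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        size_t times = 3 * sizeof(uint64_t); /* di_atime/di_mtime/di_ctime */
        size_t ids   = 2 * sizeof(uint32_t); /* di_uid/di_gid */
        size_t mode  = sizeof(uint32_t);     /* di_mode */

        printf("timestamps: %zu bytes\n", times);   /* 24 */
        printf("uid + gid:  %zu bytes\n", ids);     /*  8 */
        printf("mode:       %zu bytes\n", mode);    /*  4 */
        printf("total per inode: %zu bytes\n", times + ids + mode);
        return 0;
    }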
|
This just ignores the remaining pages and removes the unneeded unlock_pages().
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Steven French <sfrench@us.ibm.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
This fix means that bmap will map extents of the length requested
by the VFS rather than guessing at it, or just mapping one block
at a time. The other callers of gfs2_block_map are audited to ensure
they send the correct max extent lengths (i.e. set bh->b_size correctly).
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
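The contract being relied on is that the caller encodes the largest extent it wants in bh->b_size (in bytes) and the mapping function shrinks it to what is actually contiguous. A simplified userspace model of that contract (not the real gfs2_block_map signature):

    #include <stdio.h>

    #define BLOCK_SIZE 4096u

    /* The caller asks for up to *size_bytes worth of blocks starting at
     * lblock; the callee maps as many contiguous blocks as it can and
     * shrinks *size_bytes to say how much was actually mapped. */
    static unsigned long long map_extent(unsigned long long lblock,
                                         unsigned int *size_bytes)
    {
        unsigned int wanted = *size_bytes / BLOCK_SIZE;
        unsigned int mapped = wanted > 16 ? 16 : wanted; /* pretend 16 blocks are contiguous */

        *size_bytes = mapped * BLOCK_SIZE;
        return 100000ULL + lblock;                       /* pretend physical start block */
    }

    int main(void)
    {
        unsigned int size = 32 * BLOCK_SIZE;  /* the VFS asks for a 32-block extent */
        unsigned long long pblock = map_extent(8, &size);

        printf("mapped %u bytes starting at physical block %llu\n", size, pblock);
        return 0;
    }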
|
Pass kaddr rather than (incorrect) struct page to kunmap_atomic.
Signed-off-by: Russell Cattelan <cattelan@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This fixes a bug where, in certain cases, an uninitialised variable
could cause a NULL pointer dereference in gfs2_commit_write().
A typo in a comment is also fixed at the same time.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Fix a size calculation error. The size was incorrectly computed as a
negative length and then passed to an unsigned parameter. This in turn
caused the allocator to think it needed enough metadata to store a
gigabyte file for every file created.
Signed-off-by: Russell Cattelan <cattelan@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
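The failure mode is easy to reproduce in isolation: a negative signed length implicitly converted to an unsigned parameter becomes an enormous value, so the callee wildly over-estimates. A small standalone demonstration (nothing GFS2-specific):

    #include <stdio.h>

    /* Takes a size in bytes, as the allocator's helper did. */
    static void estimate_metadata(unsigned int size)
    {
        printf("caller asked to reserve metadata for %u bytes\n", size);
    }

    int main(void)
    {
        int end   = 100;
        int start = 4096;
        int len   = end - start;   /* the mistake: comes out negative */

        printf("computed length: %d\n", len);
        estimate_metadata(len);    /* -3996 arrives as 4294963300 (~4 GiB) */
        return 0;
    }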
|
In many places GFS2 was calling the endian conversion routines
for an inode even when only a single field, or a few fields might
have changed. As a result we were copying lots of data needlessly.
This patch replaces those calls with conversion of just the
required fields in each case. This should be faster and easier
to understand. There are still other places which suffer from this
problem, but this is a start in the right direction.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
As per Andrew Morton's request, removed trailing whitespace.
Cc: Andrew Morton <akpm@osdl.org>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Fix a bug in the directory reading code, where we might have dereferenced
a NULL pointer in case of OOM. Updated the directory code to use the new
and improved version of gfs2_meta_ra(), which now returns the first block
that was read. Previously it released that block, requiring the following
code to grab it again at each call site.
Also turned off readahead on directory lookups since we are reading a
hash table, and therefore reading the entries in order is very
unlikely. Readahead is still used for all other calls to the
directory reading function (e.g. when growing the hash table).
Removed the DIO_START constant. Nearly everywhere it was used, it was
used to unconditionally start I/O, so I've removed it and turned the
couple of exceptions to this rule into separate functions.
Also hunted through the other DIO flags and removed them as arguments
from functions which were always called with the same combination of
arguments.
Updated gfs2_meta_indirect_buffer to be a bit more efficient and
hopefully also be a bit easier to read.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Three of the DIO constants were not being used, so remove them.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
lm_interface.h has a few out-of-tree clients, such as GFS1
and userland tools.
Right now these clients keep a copy of the file in their build trees,
which can go out of sync.
Move lm_interface.h to include/linux, export it to userland and
clean up fs/gfs2 to use the new location.
Signed-off-by: Fabio M. Di Nitto <fabbione@ubuntu.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This makes the unlock test a bit simpler.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Fix for Red Hat bz 205307. Don't need to lock in readpage if
the higher level code has already grabbed the lock.
Signed-off-by: Russell Cattelan <cattelan@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This is a tidy up of the GFS2 bmap code. The main change is that the
bh is passed to gfs2_block_map allowing the flags to be set directly
rather than having to repeat that code several times in ops_address.c.
At the same time, the extent mapping code from gfs2_extent_map has
been moved into gfs2_block_map. This allows all calls to gfs2_block_map
to map extents in the case that no allocation is taking place. As a
result reads and non-allocating writes should be faster. A quick test
with postmark appears to support this.
There is a limit on the number of blocks mapped in a single bmap
call in that it will only ever map blocks which are pointed to
from a single pointer block. So in other words, it will never try
to do additional i/o in order to satisfy read-ahead. The maximum
number of blocks is thus somewhat less than 512 (the GFS2 4k block
size minus the header divided by sizeof(u64)). I've further limited
the mapping of "normal" blocks to 32 blocks (to avoid extra work)
since readpages() will currently read a maximum of 32 blocks ahead (128k).
Some further work will probably be needed to set a suitable value
for DIO as well, but for now that's left at the maximum of 512 (see
ops_address.c:gfs2_get_block_direct).
There is probably a lot more that can be done to improve bmap for GFS2,
but this is a good first step.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
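The limits quoted above work out roughly as follows; the 24-byte per-block metadata header is an assumed value used only for this illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int block_size = 4096; /* GFS2 block size used in the commit message */
        unsigned int header     = 24;   /* assumed per-block metadata header size */
        unsigned int ptr_size   = 8;    /* sizeof(u64) block pointer */

        unsigned int ptrs_per_block = (block_size - header) / ptr_size;

        printf("pointers per indirect block: %u (somewhat less than 512)\n",
               ptrs_per_block);
        printf("readpages readahead cap: 32 blocks = %u KiB\n",
               (32 * block_size) / 1024);
        return 0;
    }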
|
As per the remainder of the comments in Jan Engelhardt's fourth email,
remove a cast that's not required. Also tidy up the "limit" code
in stuck_releasepage().
Cc: Jan Engelhardt <jengelh@linux01.gwdg.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Fix a spelling mistake (one of mine).
Cc: Jan Engelhardt <jengelh@linux01.gwdg.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This makes all fixed size types have consistent names.
Cc: Jan Engelhardt <jengelh@linux01.gwdg.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
As per comments from Jan Engelhardt <jengelh@linux01.gwdg.de> this
updates the copyright message to say "version" in full rather than
"v.2". Also incore.h has been updated to remove forward structure
declarations which are not required.
The gfs2_quota_lvb structure has now had endianness annotations added
to it. Also, quota.c has been updated so that we now store the
lvb data locally in an endian-independent format, to avoid also needing
a structure in host endianness. As a result, the endianness
conversions are done as required at various points, and thus the
conversion routines in lvb.[ch] are no longer required. I've
moved the one remaining constant from lvb.h that's still used into lm.h
and removed the now unused lvb.[ch].
I have not changed the HIF_ constants. That is left to a later patch
which I hope will unify the gh_flags and gh_iflags fields of the
struct gfs2_holder.
Cc: Jan Engelhardt <jengelh@linux01.gwdg.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
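The pattern adopted for the quota LVB is to keep the data in a single fixed (big-endian) layout and convert only when a field is read or written, rather than maintaining a second host-endian copy. An illustrative userspace equivalent (struct and field names invented for the example) using the standard byte-order helpers:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h> /* htonl()/ntohl() standing in for cpu_to_be32()/be32_to_cpu() */

    /* Stored and exchanged in big-endian form, whatever the host is. */
    struct quota_lvb_example {
        uint32_t qb_limit_be;
        uint32_t qb_value_be;
    };

    int main(void)
    {
        struct quota_lvb_example lvb;

        /* convert at the point of access, not by keeping a host-endian copy */
        lvb.qb_limit_be = htonl(1000);
        lvb.qb_value_be = htonl(250);

        printf("limit=%u value=%u\n",
               (unsigned)ntohl(lvb.qb_limit_be),
               (unsigned)ntohl(lvb.qb_value_be));
        return 0;
    }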
|
This patch fixes three main bugs. Firstly the direct i/o get_block
was returning the wrong return code in certain cases. Secondly, the
GFS2's releasepage function was not dealing with cases when clean,
ordered buffers were found still queued on a transaction (which can
happen depending on the ordering of journal flushes). Thirdly, the
journaling code itself needed altering to take account of the
after effects of removing the clean ordered buffers from the transactions
before a journal flush.
The releasepage bug also showed up under "normal" buffered I/O,
so it's not just a fix for direct I/O. In fact releasepage is not
normally used in the direct I/O path at all, except when flushing
existing buffers after performing a direct I/O write, but that was
the code path that led us to spot this.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This should clarify the logic in gfs2_releasepage() relating to
error handling as well as making the response to errors a bit
more graceful.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This fixes a memory leak of struct gfs2_bufdata and also some
problems in the ordered write handling code. It needs a bit
more testing, but I believe that the reference counting of
ordered write buffers should now be correct.
This is aimed at fixing Red Hat bugzilla: #201028 and #201082
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
In some cases we can enter writepage without there being buffers
attached to the page. In this case the function that adds gfs2_bufdata
to the buffers fails silently, causing further failures down the
stack.
This fix ensures that we always add buffers in writepage if they
don't already exist (mmap is one way to trigger this).
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Mmapped files were able to trigger a lock ordering bug. Private
maps do not need to take the glock so early on. Shared maps
unfortunately do; however, we can get around that by adding a flag
to the flags of struct gfs2_file. This only works because
we are taking an exclusive lock at this point, so we know that
nobody else can be racing with us.
Fixes Red Hat bugzilla: #201196
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
The remaining routines in page.c were all only used in one other
file, so they are now moved into the files where they are referenced
and made static. Thus page.[ch] are no longer required.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Tidy up gfs2_unstuffer_page by:
a) Moving it into bmap.c
b) Making it static
c) Calling it directly from gfs2_unstuff_dinode
d) Updating all callers of gfs2_unstuff_dinode due to one less
required argument.
It doesn't change the behaviour at all.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
As per comments received, alter the GFS2 direct I/O path so that
it uses the standard read functions "out of the box". Needs a
small change to one of the VFS functions. This reduces the size
of the code quite a lot and also removes the need for one new export.
Some more work remains to be done, but this is the bones of the
thing.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This adds a generation number for the eventual use of NFS to the
on-disk inode. It's backwards compatible with the current code, since
it doesn't really matter what the generation number is to start with;
indeed, since it is taken from padding in both the inode and the rgrp
header, it starts out as zero, which should be fine.
The eventual plan is to use this rather than no_formal_ino in the
NFS filehandles. At that point no_formal_ino will be unused.
At the same time we also add a releasepages callback to the
"normal" address space for gfs2 inodes. Also, I've removed a
one-liner function that's not required any more.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This fixes a bug where we were releasing a page incorrectly
sometimes when reading a stuffed file. This fixes the bug
that Kevin reported when using Xen.
Cc: Kevin Anderson <kanderso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
We need to hold i_mutex when doing direct i/o reads.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
As per Christoph's patch:
http://www.kernel.org/git/?p=linux/kernel/git/steve/gfs2-2.6.git;a=commitdiff;h=f5e54d6e53a20cef45af7499e86164f0e0d16bb2
We mark struct address_space_operations const in GFS2.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This patch fixes the way we have been dealing with unlinked,
but still open files. It removes all limits (other than memory
for inodes, as per every other filesystem) on the number of these
that we can support on GFS2. It also means that (like other
filesystems) it's the responsibility of the last process to close the
file to deallocate the storage, rather than of the process that did
the unlinking. Note that with GFS2, those two events might take place
on different nodes.
Also there are a number of other changes:
o We use the Linux inode subsystem as it was intended to be
used, wrt allocating GFS2 inodes
o The Linux inode cache is now the point at which we enforce, locally,
holding only one copy of the inode in core at once (previously we
used the glock layer for this).
o We no longer use the unlinked "special" file. We just ignore it
completely. This makes unlinking more efficient.
o We now use the 4th block allocation state. The previously unused
state is used to track unlinked but still open inodes.
o gfs2_inoded is no longer needed
o Several fields are now no longer needed (and removed) from the in
core struct gfs2_inode
o Several fields are no longer needed (and removed) from the in core
superblock
There are a number of future possible optimisations and clean ups
which have been made possible by this patch.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
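The single-node semantics being extended cluster-wide here are the standard POSIX ones: an unlinked file remains fully usable until the last open descriptor is closed, and only that final close frees the storage. A quick standalone demonstration:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char buf[16] = "";
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd < 0)
            return 1;

        unlink("scratch.tmp");          /* the name is gone, the storage is not */
        write(fd, "still here", 10);
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, 10);
        printf("read back after unlink: %s\n", buf);

        close(fd);                      /* the last close deallocates the blocks */
        return 0;
    }

On GFS2 the open and the unlink may happen on different nodes, which is why the last-closer rule has to be enforced cluster-wide.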
|
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
We no longer use semaphores, everything has been converted to
mutex or rwsem, so we don't need to include this header any more.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
This adds readpages support (and also corrects a small bug in
the readpage error path at the same time). Hopefully this will
improve performance by allowing GFS to submit larger lumps of
I/O at a time.
In order to simplify the setting of BH_Boundary, it currently gets
set when we hit the end of an indirect pointer block. There is
always a boundary at this point with the current allocation code.
It doesn't get all the boundaries right though, so there is still
room for improvement in this.
See comments in fs/gfs2/ops_address.c for further information about
readpages with GFS2.
Signed-off-by: Steven Whitehouse
|
As pointed out by Wendy Cheng, the logic in GFS2's writepage() function
wasn't quite right with respect to invalidating pages when a file has been
truncated. This patch fixes that.
CC: Wendy Cheng <wcheng@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
When allocating memory to sort directory entries, use vmalloc()
rather than kmalloc(), since for larger directories the required
size can easily be greater than the 128k maximum of kmalloc().
Also add the first steps towards supporting the AOP_TRUNCATED_PAGE
return code in the glock code, by flagging all the places where we
request a glock while holding a page lock.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
Some interfaces have changed. In particular, one of the POSIX
locking functions has a changed prototype, along with the
invalidatepage address space operation and the block-getting
callback passed to the direct I/O function.
In addition, add the splice file operations. These will need to
be updated to support AOP_TRUNCATED_PAGE before they will be
of much use to us.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
As suggested by Pekka Enberg <penberg@cs.helsinki.fi>.
The DIV_RU macro is renamed DIV_ROUND_UP and moved to kernel.h.
The other macros are gone from gfs2.h, as (although this was not
requested by Pekka Enberg) are a number of included header files,
which are now included individually where needed. The inode number
comparison function is now an inline function.
The DT2IF and IF2DT may be addressed in a future patch.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
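For reference, DIV_ROUND_UP is the usual round-up integer division; a standalone copy of the same pattern behaves like this:

    #include <stdio.h>

    /* Same round-up division pattern as the kernel macro. */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        /* e.g. how many 4096-byte blocks are needed to hold 10000 bytes */
        printf("DIV_ROUND_UP(10000, 4096) = %d\n", DIV_ROUND_UP(10000, 4096));
        printf("DIV_ROUND_UP(8192, 4096)  = %d\n", DIV_ROUND_UP(8192, 4096));
        return 0;
    }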