| Message ID | 1246100695-10344-1-git-send-email-jlayton@redhat.com (mailing list archive) |
|---|---|
| State | Accepted |
Good catch. I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.

On Sat, Jun 27, 2009 at 6:04 AM, Jeff Layton <jlayton@redhat.com> wrote:
> Fixes a regression caused by commit a6ce4932fbdbcd8f8e8c6df76812014351c32892
>
> When this lock was converted to a mutex, the locks were turned into
> unlocks and vice-versa.
>
> Signed-off-by: Jeff Layton <jlayton@redhat.com>
> Cc: Stable Tree <stable@kernel.org>
> ---
>  fs/cifs/file.c |   10 +++++-----
>  1 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> index ebdbe62..97ce4bf 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -493,9 +493,9 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
>  		return -EBADF;
>
>  	xid = GetXid();
> -	mutex_unlock(&pCifsFile->fh_mutex);
> +	mutex_lock(&pCifsFile->fh_mutex);
>  	if (!pCifsFile->invalidHandle) {
> -		mutex_lock(&pCifsFile->fh_mutex);
> +		mutex_unlock(&pCifsFile->fh_mutex);
>  		rc = 0;
>  		FreeXid(xid);
>  		return rc;
> @@ -527,7 +527,7 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
>  	if (full_path == NULL) {
>  		rc = -ENOMEM;
>  reopen_error_exit:
> -		mutex_lock(&pCifsFile->fh_mutex);
> +		mutex_unlock(&pCifsFile->fh_mutex);
>  		FreeXid(xid);
>  		return rc;
>  	}
> @@ -569,14 +569,14 @@ reopen_error_exit:
>  			cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
>  				CIFS_MOUNT_MAP_SPECIAL_CHR);
>  	if (rc) {
> -		mutex_lock(&pCifsFile->fh_mutex);
> +		mutex_unlock(&pCifsFile->fh_mutex);
>  		cFYI(1, ("cifs_open returned 0x%x", rc));
>  		cFYI(1, ("oplock: %d", oplock));
>  	} else {
>  reopen_success:
>  		pCifsFile->netfid = netfid;
>  		pCifsFile->invalidHandle = false;
> -		mutex_lock(&pCifsFile->fh_mutex);
> +		mutex_unlock(&pCifsFile->fh_mutex);
>  		pCifsInode = CIFS_I(inode);
>  		if (pCifsInode) {
>  			if (can_flush) {
> --
> 1.6.0.6
On Sat, 2009-06-27 at 19:10 -0500, Steve French wrote:
> Good catch.
>
> I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.
>

Great. Did it fix the problems you were seeing when testing with dbench?
On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton <jlayton@redhat.com> wrote:
> On Sat, 2009-06-27 at 19:10 -0500, Steve French wrote:
>> Good catch.
>>
>> I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.
>>
> Great. Did it fix the problems you were seeing when testing with dbench?

I doubt that this patch would affect my dbench runs, because I wasn't getting any network reconnections (they would have logged a warning to dmesg) and the fix is in cifs_reopen_file, which is only invoked on reconnection.

I am updating and building current samba3 on a different machine (the one I more frequently run dbench on) to check whether this is an Ubuntu-specific limit I had not seen before (I usually run dbench with 50 rather than 60 processes). The Samba handle limit should be (and was) quite large.

In any case, as long as we are sure we are hitting a Samba server limit (or a server-side per-process limit), we are ok and can continue to review/merge the very large inode patches. I am verifying with one additional pair of temporary stats (counters of successful opens and closes in the exit paths of cifs_open and cifs_close) to make sure that those match what I have already verified we are seeing with smbstatus and the client-side counters of successful posix_opens.
On Sun, Jun 28, 2009 at 11:43 AM, Steve French <smfrench@gmail.com> wrote:
> On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton <jlayton@redhat.com> wrote:
> In any case, as long as we are sure we are hitting a Samba server limit
> (or a server-side per-process limit), we are ok and can continue to
> review/merge the very large inode patches. I am verifying with one
> additional pair of temporary stats (counters of successful opens and
> closes in the exit paths of cifs_open and cifs_close) to make sure that
> those match what I have already verified we are seeing with smbstatus
> and the client-side counters of successful posix_opens.

I am puzzled about the Samba 3.4 max files limit (I am seeing it at 1014 opens), and it seems strange that dbench would open so many files. But with counters in cifs_open and cifs_close, I see 1014 more opens than closes (from the VFS), which matches what I see at the SMB level and in the Samba server. dbench 4 fails even faster. This also fails on other distributions (openSUSE, Ubuntu, etc.), but worked against Samba 3.0.28. Is it possible that Samba 3.4 changed its max open file limit?
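[A sketch of the kind of temporary open/close instrumentation described above, assuming simple atomic counters; the names and placement are illustrative, not the actual counters added to the cifs code:]

```c
#include <linux/atomic.h>
#include <linux/printk.h>

/* Temporary debug counters: incremented only on *successful* opens
 * and on closes, so (opens - closes) is the number of file handles
 * the client still holds on the server. */
static atomic_t dbg_opens = ATOMIC_INIT(0);
static atomic_t dbg_closes = ATOMIC_INIT(0);

/* Call from the exit path of the open routine. */
static inline void dbg_count_open(int rc)
{
	if (rc == 0)			/* count only successful opens */
		atomic_inc(&dbg_opens);
}

/* Call from the close routine. */
static inline void dbg_count_close(void)
{
	atomic_inc(&dbg_closes);
	pr_debug("open handles outstanding: %d\n",
		 atomic_read(&dbg_opens) - atomic_read(&dbg_closes));
}
```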
On Sun, 2009-06-28 at 14:02 -0500, Steve French wrote:
> On Sun, Jun 28, 2009 at 11:43 AM, Steve French <smfrench@gmail.com> wrote:
> > On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton <jlayton@redhat.com> wrote:
> > In any case, as long as we are sure we are hitting a Samba server
> > limit (or a server-side per-process limit), we are ok and can continue
> > to review/merge the very large inode patches. I am verifying with one
> > additional pair of temporary stats (counters of successful opens and
> > closes in the exit paths of cifs_open and cifs_close) to make sure
> > that those match what I have already verified we are seeing with
> > smbstatus and the client-side counters of successful posix_opens.
>
> I am puzzled about the Samba 3.4 max files limit (I am seeing it at
> 1014 opens), and it seems strange that dbench would open so many files.
> But with counters in cifs_open and cifs_close, I see 1014 more opens
> than closes (from the VFS), which matches what I see at the SMB level
> and in the Samba server. dbench 4 fails even faster. This also fails on
> other distributions (openSUSE, Ubuntu, etc.), but worked against Samba
> 3.0.28. Is it possible that Samba 3.4 changed its max open file limit?

Doesn't 3.0.28 have broken POSIX open calls? That may account for the difference. I can't be certain I was seeing the same failures you were with dbench, but I never got a passing run until I applied that patch to fix the reopen locking.
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index ebdbe62..97ce4bf 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -493,9 +493,9 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
 		return -EBADF;

 	xid = GetXid();
-	mutex_unlock(&pCifsFile->fh_mutex);
+	mutex_lock(&pCifsFile->fh_mutex);
 	if (!pCifsFile->invalidHandle) {
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		rc = 0;
 		FreeXid(xid);
 		return rc;
@@ -527,7 +527,7 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
 	if (full_path == NULL) {
 		rc = -ENOMEM;
 reopen_error_exit:
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		FreeXid(xid);
 		return rc;
 	}
@@ -569,14 +569,14 @@ reopen_error_exit:
 			cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
 				CIFS_MOUNT_MAP_SPECIAL_CHR);
 	if (rc) {
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		cFYI(1, ("cifs_open returned 0x%x", rc));
 		cFYI(1, ("oplock: %d", oplock));
 	} else {
 reopen_success:
 		pCifsFile->netfid = netfid;
 		pCifsFile->invalidHandle = false;
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		pCifsInode = CIFS_I(inode);
 		if (pCifsInode) {
 			if (can_flush) {
Fixes a regression caused by commit a6ce4932fbdbcd8f8e8c6df76812014351c32892

When this lock was converted to a mutex, the locks were turned into
unlocks and vice-versa.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Cc: Stable Tree <stable@kernel.org>
---
 fs/cifs/file.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)
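[For readers following along, the regression reduces to the minimal sketch below. The function names and standalone mutex are hypothetical stand-ins, not the actual cifs code: the buggy conversion releases a mutex it never acquired, then acquires it on every exit path and never drops it, so the second caller deadlocks.]

```c
#include <linux/mutex.h>
#include <linux/types.h>

static DEFINE_MUTEX(fh_mutex);	/* stand-in for pCifsFile->fh_mutex */

/* Buggy conversion: lock and unlock inverted, as in the regression. */
static int reopen_buggy(bool handle_valid)
{
	mutex_unlock(&fh_mutex);		/* should be mutex_lock() */
	if (handle_valid) {
		mutex_lock(&fh_mutex);		/* should be mutex_unlock() */
		return 0;
	}
	/* ... reopen the file handle ... */
	mutex_lock(&fh_mutex);			/* should be mutex_unlock() */
	return 0;
}

/* Fixed pattern: acquire on entry, release exactly once on each
 * return path, which is what the patch restores. */
static int reopen_fixed(bool handle_valid)
{
	mutex_lock(&fh_mutex);
	if (handle_valid) {
		mutex_unlock(&fh_mutex);
		return 0;
	}
	/* ... reopen the file handle ... */
	mutex_unlock(&fh_mutex);
	return 0;
}
```

[The fixed variant keeps the invariant the pre-conversion locking had: the mutex is held across the whole critical section and released on every exit, including the early-return and error paths.]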