
btrfs hang in flush-btrfs-5

Message ID 4E1B08FC.6030907@redhat.com (mailing list archive)
State New, archived

Commit Message

Josef Bacik July 11, 2011, 2:30 p.m. UTC
On 07/11/2011 07:40 AM, Jeremy Sanders wrote:
> Jeremy Sanders wrote:
> 
>> Hi - I'm trying btrfs with kernel 2.6.38.8-32.fc15.x86_64 (a Fedora
>> kernel). I'm just doing a tar-to-tar copy onto the file system with
>> compress-force=zlib. Here are some traces of the stuck processes.
> 
> I've managed to reproduce the hang using the latest btrfs from the 
> repository. I had to remove some of the tracing lines to get it to compile 
> under 2.6.38.8 and an ioctl which wasn't defined. Here is where it is 
> stuck:
> 

Hrm well that is just unlikely and hard to hit.  Will you try this and
see if it helps you?  Thanks,

Josef

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Jeremy Sanders July 11, 2011, 9:21 p.m. UTC | #1
Josef Bacik wrote:

> On 07/11/2011 07:40 AM, Jeremy Sanders wrote:
>> Jeremy Sanders wrote:
>> 
>>> Hi - I'm trying btrfs with kernel 2.6.38.8-32.fc15.x86_64 (a Fedora
>>> kernel). I'm just doing a tar-to-tar copy onto the file system with
>>> compress-force=zlib. Here are some traces of the stuck processes.
>> 
>> I've managed to reproduce the hang using the latest btrfs from the
>> repository. I had to remove some of the tracing lines to get it to
>> compile under 2.6.38.8 and an ioctl which wasn't defined. Here is
>> where it is stuck:
>> 
> 
> Hrm well that is just unlikely and hard to hit.  Will you try this and
> see if it helps you?  Thanks,

It's got quite a bit further than where it got before and hasn't 
crashed yet. I will let you know when it has finished OK.

I see that the btrfs-delalloc thread (rather than endio-write) is now taking 
up 100% of a CPU, though, and the write speed seems to have dropped during 
the copy. The copy started out with endio-write fully using both cores, and 
now delalloc dominates.

Jeremy


Josef Bacik July 13, 2011, 2:55 p.m. UTC | #2
On 07/11/2011 05:21 PM, Jeremy Sanders wrote:
> Josef Bacik wrote:
> 
>> On 07/11/2011 07:40 AM, Jeremy Sanders wrote:
>>> Jeremy Sanders wrote:
>>>
>>>> Hi - I'm trying btrfs with kernel 2.6.38.8-32.fc15.x86_64 (a Fedora
>>>> kernel). I'm just doing a tar-to-tar copy onto the file system with
>>>> compress-force=zlib. Here are some traces of the stuck processes.
>>>
>>> I've managed to reproduce the hang using the latest btrfs from the
>>> repository. I had to remove some of the tracing lines to get it to
>>> compile under 2.6.38.8 and an ioctl which wasn't defined. Here is
>>> where it is stuck:
>>>
>>
>> Hrm well that is just unlikely and hard to hit.  Will you try this and
>> see if it helps you?  Thanks,
> 
> It's got quite a bit further than where it got before and hasn't 
> crashed yet. I will let you know when it has finished OK.
> 
> I see that the btrfs-delalloc thread (rather than endio-write) is now taking 
> up 100% of a CPU, though, and the write speed seems to have dropped during 
> the copy. The copy started out with endio-write fully using both cores, and 
> now delalloc dominates.
> 


When you see that, can you use sysrq+w or sysrq+t to get a stack trace of
what it's doing, so I can see whether it's something that can be fixed?  Thanks,

Josef
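
[Editor's note: the blocked-task dump Josef asks for can be triggered from a
shell via procfs instead of the Alt+SysRq keyboard chord. A minimal sketch,
assuming standard procfs paths and root access:]

```shell
# Hypothetical session; requires root on a Linux box with procfs mounted.
if [ "$(id -u)" -eq 0 ] && [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions
    echo w > /proc/sysrq-trigger      # 'w' dumps blocked (uninterruptible) tasks
    dmesg | tail -n 60                # the stack traces land in the kernel ring buffer
else
    echo "need root to trigger sysrq" >&2
fi
```

`echo t > /proc/sysrq-trigger` dumps every task instead of only the blocked
ones, which is noisier but useful when the spinning thread is runnable rather
than blocked, as with the btrfs-delalloc thread described above.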
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Patch

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 59cbdb1..3c8c435 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1081,7 +1081,8 @@ static noinline int prepare_pages(struct btrfs_root *root, struct file *file,

 again:
 	for (i = 0; i < num_pages; i++) {
-		pages[i] = grab_cache_page(inode->i_mapping, index + i);
+		pages[i] = find_or_create_page(inode->i_mapping, index + i,
+					       GFP_NOFS);
 		if (!pages[i]) {
 			faili = i - 1;
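
[Editor's note: the point of the one-line change is the allocation flags. In
kernels of this era, grab_cache_page() was a thin wrapper that passed the
mapping's default GFP mask, which normally includes __GFP_FS; the sketch
below paraphrases that wrapper and is not part of the patch itself:]

```c
/* Paraphrased from include/linux/pagemap.h (circa 2.6.38):
 * grab_cache_page() is find_or_create_page() with the mapping's
 * default GFP mask, which usually permits filesystem reclaim. */
static inline struct page *grab_cache_page(struct address_space *mapping,
					   pgoff_t index)
{
	return find_or_create_page(mapping, index, mapping_gfp_mask(mapping));
}

/* The patch passes GFP_NOFS explicitly instead. If the page allocation
 * must reclaim memory, reclaim then cannot call back into btrfs (e.g.
 * to write out dirty pages) while this task is already preparing pages
 * for a write -- avoiding the re-entry deadlock being fixed. */
```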