| Message ID | 20131204135457.GA16205@infradead.org (mailing list archive) |
|---|---|
| State | New, archived |
On Wed, 4 Dec 2013 05:54:57 -0800 Christoph Hellwig <hch@infradead.org> wrote:

> > Yeah, I've noticed the same hang, but hadn't been able to determine why it
> > was hanging. I suspect that that hang is what tickles the bug that my
> > patch fixes. With the hang, we see the client doing retransmits and not
> > getting replies, and that means that we exercise the DRC more...
>
> FYI, here is the one that just kills the silly direct reclaim. It also
> fixes the oops, but I still see the hang:
>
>
> diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
> index 9186c7c..dd260a1 100644
> --- a/fs/nfsd/nfscache.c
> +++ b/fs/nfsd/nfscache.c
> @@ -380,11 +380,8 @@ nfsd_cache_search(struct svc_rqst *rqstp, __wsum csum)
>  }
>
>  /*
> - * Try to find an entry matching the current call in the cache. When none
> - * is found, we try to grab the oldest expired entry off the LRU list. If
> - * a suitable one isn't there, then drop the cache_lock and allocate a
> - * new one, then search again in case one got inserted while this thread
> - * didn't hold the lock.
> + * Try to find an entry matching the current call in the cache and if none is
> + * found allocate and insert a new one.
>   */
>  int
>  nfsd_cache_lookup(struct svc_rqst *rqstp)
> @@ -409,22 +406,8 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
>
>         /*
>          * Since the common case is a cache miss followed by an insert,
> -        * preallocate an entry. First, try to reuse the first entry on the LRU
> -        * if it works, then go ahead and prune the LRU list.
> +        * preallocate an entry.
>          */
> -       spin_lock(&cache_lock);
> -       if (!list_empty(&lru_head)) {
> -               rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
> -               if (nfsd_cache_entry_expired(rp) ||
> -                   num_drc_entries >= max_drc_entries) {
> -                       lru_put_end(rp);
> -                       prune_cache_entries();
> -                       goto search_cache;
> -               }
> -       }
> -
> -       /* No expired ones available, allocate a new one. */
> -       spin_unlock(&cache_lock);
>         rp = nfsd_reply_cache_alloc();
>         spin_lock(&cache_lock);
>         if (likely(rp)) {
> @@ -432,7 +415,6 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
>                 drc_mem_usage += sizeof(*rp);
>         }
>

It might be good to run prune_cache_entries(); at this point.

Otherwise, this looks like it'll be fine...

> -search_cache:
>         found = nfsd_cache_search(rqstp, csum);
>         if (found) {
>                 if (likely(rp))
> @@ -446,15 +428,6 @@ search_cache:
>                 goto out;
>         }
>
> -       /*
> -        * We're keeping the one we just allocated. Are we now over the
> -        * limit? Prune one off the tip of the LRU in trade for the one we
> -        * just allocated if so.
> -        */
> -       if (num_drc_entries >= max_drc_entries)
> -               nfsd_reply_cache_free_locked(list_first_entry(&lru_head,
> -                                               struct svc_cacherep, c_lru));
> -
>         nfsdstats.rcmisses++;
>         rqstp->rq_cacherep = rp;
>         rp->c_state = RC_INPROG;
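To make the review comment concrete, here is a minimal sketch of where that call might sit inside nfsd_cache_lookup() as reshaped by the patch above. The surrounding lines are copied from the quoted hunks; the placement of prune_cache_entries() is only an illustration of the suggestion, not necessarily what was eventually applied.

```c
        /* Preallocate an entry up front; the common case is a cache miss. */
        rp = nfsd_reply_cache_alloc();
        spin_lock(&cache_lock);
        if (likely(rp)) {
                /* account for the new entry, as in the hunk quoted above */
                drc_mem_usage += sizeof(*rp);
        }

        /*
         * Suggested addition: prune expired/excess entries here, while
         * cache_lock is already held, before searching the cache.
         */
        prune_cache_entries();

        found = nfsd_cache_search(rqstp, csum);
```

Calling it here would keep the pruning under cache_lock, just as the removed LRU-reuse code did, while still avoiding the direct-reclaim path that the patch deletes.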