[RFC,5/9] ARM64: mm: Add support for flushing huge pages.

Message ID 1367339448-21727-6-git-send-email-steve.capper@linaro.org
State New, archived

Commit Message

Steve Capper April 30, 2013, 4:30 p.m. UTC
The code to flush the dcache of a dirty page, __flush_dcache_page,
will only flush the head of a HugeTLB/THP page.

This patch adjusts __flush_dcache_page such that the order of the
compound page is used to determine the size of area to flush.
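
With 4KB base pages, for example, a 2MB huge page has a compound order
of 9, so the area flushed becomes PAGE_SIZE << 9 = 2MB rather than just
the first 4KB of the compound page.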

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/mm/flush.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Will Deacon May 1, 2013, 11:05 a.m. UTC | #1
Hi Steve,

On Tue, Apr 30, 2013 at 05:30:44PM +0100, Steve Capper wrote:
> The code to flush the dcache of a dirty page, __flush_dcache_page,
> will only flush the head of a HugeTLB/THP page.
> 
> This patch adjusts __flush_dcache_page such that the order of the
> compound page is used to determine the size of area to flush.
> 
> Signed-off-by: Steve Capper <steve.capper@linaro.org>
> ---
>  arch/arm64/mm/flush.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 88611c3..71c182d 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -72,7 +72,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  
>  void __flush_dcache_page(struct page *page)
>  {
> -	__flush_dcache_area(page_address(page), PAGE_SIZE);
> +	size_t page_size = PAGE_SIZE << compound_order(page);
> +	__flush_dcache_area(page_address(page), page_size);
>  }

This penalises flush_dcache_page, while the compound-sized flush might
only be required for __sync_icache_dcache (called when installing
executable ptes).
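
For illustration, confining the compound-sized flush to the I-cache
sync path could look something like the sketch below. The skeleton
follows the existing __sync_icache_dcache() in arch/arm64/mm/flush.c,
but the restructuring is hypothetical and untested, not part of the
patch under review:

void __sync_icache_dcache(pte_t pte, unsigned long addr)
{
	struct page *page = pte_page(pte);

	/* No flushing needed for anonymous pages. */
	if (!page_mapping(page))
		return;

	if (!test_and_set_bit(PG_dcache_clean, &page->flags)) {
		/* Flush the whole compound page, not just its head. */
		__flush_dcache_area(page_address(page),
				    PAGE_SIZE << compound_order(page));
		__flush_icache_all();
	}
}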

Now, the job of flush_dcache_page is to deal with D-side aliases between
concurrent user and kernel mappings. On arm64, D-side aliasing is not a
problem so in theory flush_dcache_page could be a nop (well, a clear_bit,
but close enough).
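
Concretely, the "nearly a nop" variant would be something like the
following (an illustrative sketch, not tested code):

void flush_dcache_page(struct page *page)
{
	/*
	 * Defer any maintenance to set_pte() time: mark the page as
	 * potentially dirty and let __sync_icache_dcache() pick it up
	 * when an executable mapping is installed.
	 */
	clear_bit(PG_dcache_clean, &page->flags);
}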

The reason that flush_dcache_page *isn't* currently a nop, is because we
have Harvard caches, so if we end up in a situation where the kernel is
writing executable text to a page which is also mapped into userspace (i.e.
there won't be a subsequent call to set_pte), then the user will execute
junk if we don't flush/invalidate things here.

This might be overkill. I can't think of a scenario where the above is true,
but I'm no mm expert, so it would be great if somebody else could chime in
to confirm/rubbish our suspicions...

Cheers,

Will
Catalin Marinas May 1, 2013, 11:31 a.m. UTC | #2
On Wed, May 01, 2013 at 12:05:34PM +0100, Will Deacon wrote:
> [...]
> 
> This penalises flush_dcache_page, while the compound-sized flush might
> only be required for __sync_icache_dcache (called when installing
> executable ptes).
> 
> Now, the job of flush_dcache_page is to deal with D-side aliases between
> concurrent user and kernel mappings. On arm64, D-side aliasing is not a
> problem so in theory flush_dcache_page could be a nop (well, a clear_bit,
> but close enough).
> 
> The reason that flush_dcache_page *isn't* currently a nop, is because we
> have Harvard caches, so if we end up in a situation where the kernel is
> writing executable text to a page which is also mapped into userspace (i.e.
> there won't be a subsequent call to set_pte), then the user will execute
> junk if we don't flush/invalidate things here.
> 
> This might be overkill. I can't think of a scenario where the above is true,
> but I'm no mm expert, so it would be great if somebody else could chime in
> to confirm/rubbish our suspicions...

And looking at the powerpc implementation, flush_dcache_page just does
clear_bit(PG_arch_1) and they haven't seen any problems. I'm pretty sure
we can do the same on arm64 (and on 32-bit ARMv7, where we don't have
D-cache aliasing). But let's see whether the mm guys have better insight
here.

Patch

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 88611c3..71c182d 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -72,7 +72,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 
 void __flush_dcache_page(struct page *page)
 {
-	__flush_dcache_area(page_address(page), PAGE_SIZE);
+	size_t page_size = PAGE_SIZE << compound_order(page);
+	__flush_dcache_area(page_address(page), page_size);
 }
 
 void __sync_icache_dcache(pte_t pte, unsigned long addr)