@@ -377,3 +377,30 @@ maps this page at its virtual address.
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
+
+The final category of APIs is for I/O to deliberately aliased address
+ranges inside the kernel. Such aliases are set up by use of the
+vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
+subsystem assumes that the user mapping and kernel offset mapping are
+the only aliases. This isn't true for vmap aliases, so anything in
+the kernel trying to do I/O to vmap areas must manage coherency
+manually: it must flush the vmap range before doing I/O and
+invalidate it after the I/O returns (see the sketch following the
+API descriptions below).
+
+ void flush_kernel_vmap_range(void *vaddr, int size)
+ flushes the kernel cache for a given virtual address range in
+ the vmap area. This API makes sure that any data the kernel
+ modified in the vmap range is made visible to the physical
+ page. The design is to make this area safe to perform I/O on.
+ Note that this API does *not* also flush the offset map alias
+ of the area.
+
+ void invalidate_kernel_vmap_range(void *vaddr, int size)
+ invalidates the kernel cache for a given virtual address range
+ in the vmap area. This API discards any cache lines that the
+ processor may have speculatively read while I/O to the range
+ was in progress, and which would otherwise leave stale data in
+ the cache over the virtual addresses. Its implementation may be
+ a nop if the architecture guarantees never to speculate on
+ flushed ranges during I/O.
+
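+As an illustration, here is a minimal sketch of this bracketing for
+a driver doing I/O through a vmap'd buffer (it assumes
+linux/vmalloc.h and linux/highmem.h; do_device_io() stands in for a
+hypothetical driver-specific transfer to or from the physical
+pages):
+
+	static int do_vmap_io(struct page **pages, int nr_pages)
+	{
+		void *vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
+		int size = nr_pages * PAGE_SIZE;
+		int ret;
+
+		if (!vaddr)
+			return -ENOMEM;
+
+		/* make data written through the vmap alias visible
+		 * to the physical pages before the device sees them */
+		flush_kernel_vmap_range(vaddr, size);
+
+		ret = do_device_io(pages, nr_pages);	/* hypothetical */
+
+		/* discard anything speculatively read into the cache
+		 * over the alias while the I/O was in flight */
+		invalidate_kernel_vmap_range(vaddr, size);
+
+		vunmap(vaddr);
+		return ret;
+	}
+
+Note that both calls act only on the vmap alias; the user and
+kernel offset mappings are handled by the I/O subsystem as usual.
+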
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
static inline void flush_kernel_dcache_page(struct page *page)
{
}
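+/* no-op fallbacks for architectures that don't provide their own */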
+static inline void flush_kernel_vmap_range(void *vaddr, int size)
+{
+}
+static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
+{
+}
#endif
#include <asm/kmap_types.h>