On Mon, Jan 23, 2017 at 05:06:30PM -0700, Dave Jiang wrote:
__GFP_IO got accidentally dropped in the dax fault handler during
vmf parameter conversion. Putting it back.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
I don't see why this is needed. We already have an overly generous gfp_mask
that explicitly includes both __GFP_FS (which is probably wrong) and
__GFP_IO. I ran a quick test to verify this, and when I get a PMD fault my
mask does indeed include both __GFP_IO and __GFP_FS. The test was done with
the current mmots/master and this patch.
The mask is set up in __handle_mm_fault() => __get_fault_gfp_mask().
So, there is no need to mess with the mask. The patch is also actively wrong:
if __GFP_IO *was* already part of the gfp_mask, this patch would OR it in
(a no-op) and then unconditionally clear it on exit. So with this patch
applied we would end up with a more restrictive mask after the PMD fault
handler returns. If we really wanted to do something like this we would need
to save and restore the mask, not just OR in a value and then mask it back out.
fs/dax.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/dax.c b/fs/dax.c
index 2e90f7a..20e9db8 100644
@@ -1332,6 +1332,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, struct
+ vmf->gfp_mask |= __GFP_IO;
* Check whether offset isn't beyond end of file now. Caller is
* supposed to hold locks serializing us with truncate / punch hole so
@@ -1423,6 +1424,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, struct
trace_dax_pmd_fault_done(inode, vmf, max_pgoff, result);
+ vmf->gfp_mask &= ~__GFP_IO;