On Sun 30-06-19 08:23:24, Matthew Wilcox wrote:
> On Sun, Jun 30, 2019 at 01:01:04AM -0700, Dan Williams wrote:
> > @@ -215,7 +216,7 @@ static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
> >  	 * queue to the start of that PMD.  This ensures that all offsets in
> >  	 * the range covered by the PMD map to the same bit lock.
> >  	 */
> > -	if (dax_is_pmd_entry(entry))
> > +	//if (dax_is_pmd_entry(entry))
> >  		index &= ~PG_PMD_COLOUR;
> >  	key->xa = xas->xa;
> >  	key->entry_start = index;
> Hah, that's a great naive fix!  Thanks for trying that out.
>
> I think my theory was slightly mistaken, but your fix has the effect of
> fixing the actual problem too.
>
> The xas->xa_index for a PMD is going to be PMD-aligned (ie a multiple of
> 512), but xas_find_conflict() does _not_ adjust xa_index (... which I
> really should have mentioned in the documentation).  So we go to sleep
> on the PMD-aligned index instead of the index of the PTE.  Your patch
> fixes this by using the PMD-aligned index for PTEs too.
> I'm trying to come up with a clean fix for this.  Clearly we
> shouldn't wait for a PTE entry if we're looking for a PMD entry.
> But what should get_unlocked_entry() return if it detects that case?
> We could have it return an error code encoded as an internal entry,
> like grab_mapping_entry() does.  Or we could have it return the _locked_
> PTE entry, and have callers interpret that.
>
> At least get_unlocked_entry() is static, but it's got quite a few callers.
> Trying to discern which ones might ask for a PMD entry is a bit tricky.
> So this seems like a large patch which might have bugs.
Yeah. So get_unlocked_entry() is used in several cases:
1) Case where we already have an entry at the given index but it is locked and
we need it unlocked so that we can do our thing (dax_writeback_one(), ...).
2) Case where we want any entry covering given index (in
__dax_invalidate_entry()). This is essentially the same as case 1) since we
have already looked up the entry (just didn't propagate that information
from mm/truncate.c) - we want any unlocked entry covering given index.
3) Cases where we really want entry at given index and we have some entry
order constraints (dax_insert_pfn_mkwrite(), grab_mapping_entry()).
Honestly, I'd make the rule that get_unlocked_entry() returns an entry of any
order that covers the given index. I agree it may unnecessarily wait for a PTE
entry lock in case 3), where we are really looking only for a PMD entry, but
that seems like a relatively small cost for the simplicity of the interface.
BTW, looking into the xarray code, I think I found another difference
between the old radix tree code and the new xarray code that could cause
issues. In the old radix tree code if we tried to insert PMD entry but
there was some PTE entry in the covered range, we'd get EEXIST error back
and the DAX fault code relies on this. I don't see how similar behavior is
achieved by xas_store()...
Jan Kara <jack@suse.com>
SUSE Labs, CR