On Wed, 2017-01-18 at 13:32 -0800, Dan Williams wrote:
On Wed, Jan 18, 2017 at 1:02 PM, Darrick J. Wong wrote:
> On Wed, Jan 18, 2017 at 03:39:17PM -0500, Jeff Moyer wrote:
> > Jan Kara <jack(a)suse.cz> writes:
> > > On Tue 17-01-17 15:14:21, Vishal Verma wrote:
> > > > Your note on the online repair does raise another tangentially
> > > > related topic. Currently, if there are badblocks, writes via the
> > > > bio submission path will clear the error (if the hardware is able
> > > > to remap the bad locations). However, if the filesystem is mounted
> > > > with DAX, even non-mmap operations - read() and write() - will go
> > > > through the DAX paths (dax_do_io()). We haven't found a
> > > > good/agreeable way to perform error-clearing in this case. So
> > > > currently, if a DAX-mounted filesystem has badblocks, the only way
> > > > to clear those badblocks is to mount it without DAX and
> > > > overwrite/zero the bad locations. This is a pretty terrible user
> > > > experience, and I'm hoping this can be solved in a better way.
> > >
> > > Please remind me, what is the problem with DAX code doing the
> > > necessary work to clear the error when it gets EIO from memcpy on
> > > write?
> > You won't get an MCE for a store; only loads generate them.
> > Won't fallocate FL_ZERO_RANGE clear bad blocks when mounted with
> > -o dax?
> Not necessarily; XFS usually implements this by punching out the range
> and then reallocating it as unwritten blocks.
That does clear the error, because the unwritten blocks are zeroed and
the errors are cleared when they become allocated again.
Yes, the problem was that writes won't clear errors. zeroing through
either hole-punch, truncate, unlinking the file should all work
(assuming the hole-punch or truncate ranges wholly contain the