On Fri, May 18, 2018 at 1:12 PM, Mikulas Patocka <mpatocka(a)redhat.com> wrote:
> On Fri, 18 May 2018, Dan Williams wrote:
> > On Fri, May 18, 2018 at 8:44 AM, Mike Snitzer <snitzer(a)redhat.com> wrote:
> > > On Thu, Mar 08 2018 at 12:08pm -0500,
> > > Dan Williams <dan.j.williams(a)intel.com> wrote:
> > > > Mikulas sent this useful enhancement to the memcpy_flushcache API:
> > > > https://patchwork.kernel.org/patch/10217655/
> > > > ...it's in my queue to either push through -tip or add it to the next
> > > > libnvdimm pull request for 4.17-rc1.
> > > Hi Dan,
> > > Seems this never actually went upstream. I've staged it in
> > > linux-dm.git's "for-next" for the time being.
> > > But do you intend to pick it up for 4.18 inclusion? If so I'll drop
> > > it.. would just hate for it to get dropped on the floor by getting lost
> > > in the shuffle between trees.
> > > Please advise, thanks!
> > > Mike
> > Thanks for picking it up! I was hoping to resend it to get acks from
> > x86 folks, and then yes it fell through the cracks in my patch queue.
> > Now that I look at it again I don't think we need this hunk:
> >
> >  void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
> >  		size_t len)
> >  	char *from = kmap_atomic(page);
> > -	memcpy_flushcache(to, from + offset, len);
> > +	__memcpy_flushcache(to, from + offset, len);
> Yes - this is not needed.
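For context, the special-casing being discussed looks roughly like the sketch below. This is a portable approximation, not the actual patch: the real x86 fast path emits movnti non-temporal stores via inline assembly and falls back to an out-of-line __memcpy_flushcache(); plain stores and memcpy() are used here so the sketch compiles anywhere.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the out-of-line slow path.  In the kernel this copies
 * and then flushes the destination cache lines; plain memcpy() keeps
 * the sketch portable. */
static void slow_flushcache(void *dst, const void *src, size_t cnt)
{
	memcpy(dst, src, cnt);
	/* clwb/clflushopt of dst would follow on real hardware */
}

/* Sketch of the inline fast path: constant-size 4/8/16-byte copies
 * avoid the function call entirely. */
static inline void memcpy_flushcache_sketch(void *dst, const void *src,
					    size_t cnt)
{
	if (__builtin_constant_p(cnt)) {
		switch (cnt) {
		case 4: {
			uint32_t v;
			memcpy(&v, src, 4);
			memcpy(dst, &v, 4);	/* movnti %eax,(%rdi) on x86 */
			return;
		}
		case 8: {
			uint64_t v;
			memcpy(&v, src, 8);
			memcpy(dst, &v, 8);	/* movnti %rax,(%rdi) */
			return;
		}
		case 16: {
			uint64_t v[2];
			memcpy(v, src, 16);
			memcpy(dst, v, 16);	/* two movnti stores */
			return;
		}
		}
	}
	slow_flushcache(dst, src, cnt);
}
```

At -O2, gcc and clang fold the fixed-size memcpy() calls above into single register stores, so the constant-size paths carry no call or branch overhead at run time.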
> > ...and I wonder what the benefit is of the 16-byte case? I would
> > assume the bulk of the benefit is limited to the 4 and 8 byte copy
> > cases.
> dm-writecache uses 16-byte writes frequently, so it is needed for that.
> If we split a 16-byte write into two 8-byte writes, it would degrade
> performance on architectures where memcpy_flushcache needs to flush the
> cache explicitly.
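To make that concrete, here is a toy model of such an architecture: each memcpy_flushcache() call is followed by an explicit flush of every 64-byte cache line it touched. The flush counter is a hypothetical stand-in for a real dcache-flush instruction loop; with it, splitting one 16-byte write into two 8-byte calls flushes the same line twice.

```c
#include <stdint.h>
#include <string.h>

/* Count one "flush" per 64-byte cache line touched by each call,
 * modeling an architecture where the copy must be followed by
 * explicit dcache flushes. */
static int line_flushes;

static void model_memcpy_flushcache(void *dst, const void *src, size_t cnt)
{
	uintptr_t p = (uintptr_t)dst & ~(uintptr_t)63;
	uintptr_t end = (uintptr_t)dst + cnt;

	memcpy(dst, src, cnt);
	for (; p < end; p += 64)
		line_flushes++;	/* stands in for a flush instruction */
}
```

One 16-byte call on an aligned destination counts a single line flush; two back-to-back 8-byte calls to the same line count two, which is the degradation being described.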
My question was how measurable the benefit of special-casing 16-byte
transfers is. I know Ingo is going to ask this question, so it would
speed things along if this patch included performance benefit numbers
for each special case in the changelog.