On Sun, Apr 14, 2019 at 8:26 PM Li,Rongqing <lirongqing(a)baidu.com> wrote:
> -----Original Message-----
> From: Elliott, Robert (Servers) [mailto:elliott@hpe.com]
> Sent: April 14, 2019 11:21
> To: Li,Rongqing <lirongqing(a)baidu.com>; Dan Williams
> <dan.j.williams(a)intel.com>
> Cc: linux-nvdimm <linux-nvdimm(a)lists.01.org>
> Subject: RE: [PATCH][RFC] nvdimm: pmem: always flush nvdimm for write request
>
>
>
> >> @@ -215,7 +216,7 @@ static blk_qc_t pmem_make_request(struct
> request_queue *q, struct bio *bio)
> >> if (do_acct)
> >> nd_iostat_end(bio, start);
> >>
> >> - if (bio->bi_opf & REQ_FUA)
> >> + if (bio->bi_opf & REQ_FUA || op_is_write(op))
> >> nvdimm_flush(nd_region);
> ...
> >> Before:
> >> Jobs: 32 (f=32): [W(32)][14.2%][w=1884MiB/s][w=482k IOPS][eta
> >> 01m:43s]
> >> After:
> >> Jobs: 32 (f=32): [W(32)][8.3%][w=2378MiB/s][w=609k IOPS][eta 01m:50s]
> >>
> >> -RongQing
>
>
> Doing more work cannot be faster than doing less work, so something else
> must be happening here.
>
> Dan Williams may know more.
One thought is that back-pressure from awaiting write-posted-queue
flush completion causes full buffer writes to coalesce at the
controller, i.e. write-combining effects from the flush-delay.