"Elliott, Robert (Server Storage)" <Elliott(a)hp.com> writes:
> +config BLK_DEV_PMEM_COUNT
> + int "Default number of PMEM disks"
> + default "4"
For real use I think a default of 1 would be better.
For "real" use, it will be the number of actual DIMMs, not a config
option, I would think. I don't take any issue with defaulting to 4.
This will go away.
> + pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
> + if (!pmem->pmem_queue)
> + goto out_free_dev;
This driver should be blk-mq based. Not doing so loses
out on a number of SMP and NUMA performance improvements.
For example, blk_alloc_queue calls blk_alloc_queue_node
with NUMA_NO_NODE. If it called blk_mq_init_queue, then
each CPU would get structures located on its node.
No need to use blk-mq just to set the NUMA node; the driver could
call blk_alloc_queue_node itself. I'd chalk this up as another
thing that could be fixed when the driver is used for actual hardware
that describes its own proximity domain. Further, there is no DMA
engine here, so what is the benefit of using blk-mq? I/O completes in
the submission path, and I don't see any locking that's preventing
concurrent submissions from multiple CPUs.
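The alternative being suggested could look roughly like this. It is an
uncompiled sketch against the legacy block API of that era, where
blk_alloc_queue_node() took (gfp_t, int); "pdev" is a hypothetical
parent-device pointer, and "nid" stands in for whatever proximity-domain
information real hardware would eventually supply.

```c
/*
 * Sketch only: place the queue's allocations on the device's NUMA
 * node without converting the driver to blk-mq.  Assumes the legacy
 * blk_alloc_queue_node(gfp_t, int) signature; pdev is hypothetical.
 */
int nid = dev_to_node(&pdev->dev);	/* NUMA_NO_NODE if unknown */

pmem->pmem_queue = blk_alloc_queue_node(GFP_KERNEL, nid);
if (!pmem->pmem_queue)
	goto out_free_dev;
```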
> + blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
> + blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
Many storage controllers support 1 MiB I/Os now, and more and more
software is comfortable generating requests of that size. Since
blk_queue_max_hw_sectors() counts 512-byte sectors, supporting 1 MiB
I/Os means this should be at least 2048.
> + blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
It might be appropriate to call blk_queue_rq_timeout
and set up a very short timeout, since persistent memory is
very fast. I'm not sure what the default is for non-blk-mq
drivers; for blk-mq, I think it is 30 seconds, which is
far too long for memory based storage.
If you timeout on a memcpy, then we've definitely got problems. :)