[PATCH 0/3] remove rw_page() from brd, pmem and btt
by Ross Zwisler
Dan Williams and Christoph Hellwig have recently expressed doubt about
whether the rw_page() interface made sense for synchronous memory drivers
[1][2]. It's unclear whether this interface has any performance benefit
for these drivers, but as we continue to fix bugs it is clear that it does
have a maintenance burden. This series removes the rw_page()
implementations in brd, pmem and btt to relieve this burden.
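For reference, rw_page() is the optional block_device_operations hook that
these drivers currently implement; the sketch below is abridged from the
v4.13-era include/linux/blkdev.h and is only meant to show what is being
deleted:

struct block_device_operations {
	/* ... */
	/* optional page-based read/write path, bypassing struct bio */
	int (*rw_page)(struct block_device *bdev, sector_t sector,
		       struct page *page, bool is_write);
	/* ... */
};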
The last existing user of the rw_page interface is the zram driver, and
according to the changelog for the patch that added zram_rw_page() that
driver does see a clear performance gain:
I implemented the feature in zram and tested it. The test bed was the LG
G2 mobile device, which has an msm8974 processor and 2GB of memory.
With a memory allocation test program consuming memory, the system
generates swap.
Operating time of swap_write_page() was measured.
--------------------------------------------------
|              | operating time    | improvement |
|              | (20 runs average) |             |
--------------------------------------------------
| with patch   | 1061.15 us        | +2.4%       |
--------------------------------------------------
| without patch| 1087.35 us        |             |
--------------------------------------------------
Each test's result set (with paged I/O, with BIO) shows a normal
distribution and has equal variance, so the two values are valid to
compare. I can say the operation with paged I/O (without BIO) is 2.4%
faster at a 95% confidence level.
These patches have passed ext4 and XFS xfstest regression testing with
a memory mode pmem driver (without DAX), with pmem + btt and with brd.
These patches apply cleanly to the current v4.13-rc2 based linux/master.
[1] https://lists.01.org/pipermail/linux-nvdimm/2017-July/011389.html
[2] https://www.mail-archive.com/linux-block@vger.kernel.org/msg11170.html
Ross Zwisler (3):
btt: remove btt_rw_page()
pmem: remove pmem_rw_page()
brd: remove brd_rw_page()
drivers/block/brd.c | 10 ----------
drivers/nvdimm/btt.c | 15 ---------------
drivers/nvdimm/pmem.c | 21 ---------------------
3 files changed, 46 deletions(-)
--
2.9.4
[PATCH 0/3] fs, xfs: block map immutable files for dax, dma-to-storage, and swap
by Dan Williams
tl;dr: this series proposes an S_IOMAP_IMMUTABLE mechanism for marking a
file's block map immutable.
The daxfile proposal a few weeks back [1] sought to piggy back on the
swapfile implementation to approximate a block map immutable file. This
is an idea Dave originated last year to solve the dax "flush from
userspace" problem [2].
The discussion yielded several results. First, Christoph pointed out that
swapfiles are subtly broken [3]. Second, Darrick [4]
and Dave [5] proposed how to properly implement a block map immutable file.
Finally, Dave identified some improvements to swapfiles that can be
built on the block-map-immutable mechanism. These patches seek to
implement the first part of the proposal and save the swapfile work to
build on top once the base mechanism is complete.
While the initial motivation for this feature is support for
byte-addressable updates of persistent memory and managing cache
maintenance from userspace, the applications of the feature are broader.
In addition to being the start of a better swapfile mechanism it can
also support a DMA-to-storage use case. This use case enables
data-acquisition hardware to DMA directly to a storage device address
while being safe in the knowledge that storage mappings will not change.
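As an illustration, here is a minimal userspace sketch of how a block map
might be sealed with the proposed interface; FALLOC_FL_SEAL_BLOCK_MAP and
its numeric value are assumptions taken from this series, not an existing
uapi:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef FALLOC_FL_SEAL_BLOCK_MAP
#define FALLOC_FL_SEAL_BLOCK_MAP 0x080	/* assumed value, for illustration */
#endif

int main(int argc, char **argv)
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return EXIT_FAILURE;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* allocate and seal the block map of the first 1GiB of the file */
	if (fallocate(fd, FALLOC_FL_SEAL_BLOCK_MAP, 0, 1ULL << 30) < 0) {
		perror("fallocate(FALLOC_FL_SEAL_BLOCK_MAP)");
		return EXIT_FAILURE;
	}

	return EXIT_SUCCESS;
}

After the seal, the file's extents are fully allocated and its block map
can no longer change, which is what makes the flush-from-userspace and
DMA-to-storage cases above safe.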
These patches are relative to Darrick's 'devel' tree. Patch 3 is likely
wrong in the way it sets the new XFS_DIFLAG2_IOMAP_IMMUTABLE flag, but
seems to work with a basic test. The test just turns the flag on and
off, checks that the file is fully allocated and immutable, and
validates that the state persists over a umount / mount cycle. A proper
xfstest is in the works, but comments on this first draft are welcome.
[1]: https://lkml.org/lkml/2017/6/16/790
[2]: https://lkml.org/lkml/2016/9/11/159
[3]: https://lkml.org/lkml/2017/6/18/31
[4]: https://lkml.org/lkml/2017/6/20/49
[5]: https://www.spinics.net/lists/linux-xfs/msg07871.html
---
Dan Williams (3):
fs, xfs: introduce S_IOMAP_IMMUTABLE
fs, xfs: introduce FALLOC_FL_SEAL_BLOCK_MAP
xfs: persist S_IOMAP_IMMUTABLE in di_flags2
fs/attr.c | 10 ++++
fs/namei.c | 3 +
fs/open.c | 28 ++++++++++++
fs/read_write.c | 3 +
fs/xfs/libxfs/xfs_format.h | 5 ++
fs/xfs/xfs_bmap_util.c | 98 ++++++++++++++++++++++++++++++++++++++++++-
fs/xfs/xfs_bmap_util.h | 4 +-
fs/xfs/xfs_file.c | 14 ++++--
fs/xfs/xfs_ioctl.c | 10 ++++
fs/xfs/xfs_iops.c | 8 ++--
include/linux/falloc.h | 3 +
include/linux/fs.h | 2 +
include/uapi/linux/falloc.h | 19 ++++++++
mm/filemap.c | 9 ++++
14 files changed, 200 insertions(+), 16 deletions(-)
[PATCH 0/5] Adding blk-mq and DMA support to pmem block driver
by Dave Jiang
The following series adds blk-mq support to the pmem block driver and
adds infrastructure code to ioatdma and dmaengine to support copying to
and from scatterlists when processing block requests provided by blk-mq.
Using the DMA engines available on certain platforms allows us to
drastically reduce CPU utilization while maintaining acceptable
performance. Experiments on a DRAM-backed pmem block device showed that
using the DMA engine is beneficial. Users can revert to the original
behavior by passing queue_mode=0 to the nd_pmem kernel module if desired.
---
Dave Jiang (5):
dmaengine: ioatdma: revert 7618d035 to allow sharing of DMA channels
dmaengine: ioatdma: dma_prep_memcpy_to/from_sg support
dmaengine: add SG support to dmaengine_unmap
libnvdimm: Adding blk-mq support to the pmem driver
libnvdimm: add DMA support for pmem blk-mq
drivers/dma/dmaengine.c | 45 +++++-
drivers/dma/ioat/dma.h | 8 +
drivers/dma/ioat/init.c | 6 -
drivers/dma/ioat/prep.c | 105 ++++++++++++++
drivers/nvdimm/pmem.c | 340 ++++++++++++++++++++++++++++++++++++++++++---
drivers/nvdimm/pmem.h | 3
include/linux/dmaengine.h | 14 ++
7 files changed, 488 insertions(+), 33 deletions(-)
--
[PATCH v4 0/6] BTT error clearing rework
by Vishal Verma
Changes in v4:
- move the deadlock fix to before enabling the BTT error clear paths (Dan)
- No need for an error lock per freelist entry, just have one per arena (Dan)
Changes in v3:
- Change the dynamically allocated (during IO) zerobuf to the kernel's
ZERO_PAGE for error clearing (patch 5) (Dan).
- Move the NOIO fixes a level down into nvdimm_clear_poison since both
btt and pmem poison clearing goes through that (Dan).
Changes in v2:
- Drop the ACPI allocation change patch. Instead use
memalloc_noio_{save,restore} to set the GFP_NOIO flag around anything
that can be expected to call into ACPI for clearing errors. (Rafael, Dan).
Clearing errors or badblocks during a BTT write requires sending an ACPI
DSM, which means potentially sleeping. Since a BTT IO happens in atomic
context (preemption disabled, spinlocks may be held), we cannot perform
error clearing in the course of an IO. Because of this, error clearing
for BTT IOs has hitherto been disabled.
This series fixes these problems by moving the error clearing out of
the atomic sections in the BTT.
Also fix a potential deadlock that can occur while clearing errors
from either BTT or pmem due to memory allocations in the IO path.
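To illustrate the NOIO scoping mentioned in the v2/v3 changelog, here is a
rough sketch of the pattern; this is not the actual nvdimm_clear_poison()
code, and issue_clear_error_dsm() is a made-up placeholder for the ACPI
DSM call:

#include <linux/device.h>
#include <linux/sched/mm.h>

static int clear_errors_sketch(struct device *dev, phys_addr_t phys,
			       unsigned int len)
{
	unsigned int noio_flag;
	int rc;

	/*
	 * Force GFP_NOIO for every allocation made in this scope, including
	 * allocations made deep inside ACPI, so that clearing an error can
	 * never recurse back into the I/O path and deadlock.
	 */
	noio_flag = memalloc_noio_save();
	rc = issue_clear_error_dsm(dev, phys, len);	/* placeholder */
	memalloc_noio_restore(noio_flag);

	return rc;
}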
Vishal Verma (6):
btt: fix a missed NVDIMM_IO_ATOMIC case in the write path
btt: refactor map entry operations with macros
btt: ensure that flags were also unchanged during a map_read
btt: cache sector_size in arena_info
libnvdimm: fix potential deadlock while clearing errors
libnvdimm, btt: rework error clearing
drivers/nvdimm/btt.c | 116 +++++++++++++++++++++++++++++++++++++++++--------
drivers/nvdimm/btt.h | 11 +++++
drivers/nvdimm/bus.c | 6 +++
drivers/nvdimm/claim.c | 9 +---
4 files changed, 117 insertions(+), 25 deletions(-)
--
2.9.3
QEMU NVDIMM as type 7 in e820 table
by Ross Zwisler
I've been using the virtualized NVDIMM support in QEMU for testing, and I
noticed that the physical addresses used by the virtual NVDIMMs aren't present
in the guest's e820 table.
Here is the e820 table on my QEMU instance where I have one 32 GiB virtual
NVDIMM:
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
The physical addresses used by the virtual NVDIMM are 0x240000000-0xA40000000.
You can see this by looking at ndctl and the values we get from the NFIT:
# ndctl list -R
{
"dev":"region0",
"size":34359738368,
"available_size":0,
"type":"pmem"
}
# grep . /sys/bus/nd/devices/region0/{resource,size}
region0/resource:0x240000000
region0/size:34359738368
Or you can see the same info by using iasl to dump
/sys/firmware/acpi/tables/NFIT:
[028h 0040 2] Subtable Type : 0000 [System Physical Address Range]
[02Ah 0042 2] Length : 0038
[02Ch 0044 2] Range Index : 0002
[02Eh 0046 2] Flags (decoded below) : 0003
Add/Online Operation Only : 1
Proximity Domain Valid : 1
[030h 0048 4] Reserved : 00000000
[034h 0052 4] Proximity Domain : 00000000
[038h 0056 16] Address Range GUID : 66F0D379-B4F3-4074-AC43-0D3318B78CDB
[048h 0072 8] Address Range Base : 0000000240000000
[050h 0080 8] Address Range Length : 0000000800000000
[058h 0088 8] Memory Map Attribute : 0000000000008008
I expected to see a type 7 region for the NVDIMM physical address range in the
e820 table, so something like:
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000240000000-0x0000000a3fffffff] persistent (type 7)
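For reference, the kernel's e820 type names (abridged from
arch/x86/include/asm/e820/types.h); type 7 is the persistent-memory type
I would expect the guest firmware to report for the NVDIMM range:

enum e820_type {
	E820_TYPE_RAM		= 1,
	E820_TYPE_RESERVED	= 2,
	/* ... */
	E820_TYPE_PMEM		= 7,	/* platform persistent memory */
	E820_TYPE_PRAM		= 12,	/* legacy e820-defined pmem */
};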
Thanks,
- Ross
[resend PATCH v2 00/33] dax: introduce dax_operations
by Dan Williams
[ resend to add dm-devel, linux-block, and fs-devel, apologies for the
duplicates ]
Changes since v1 [1] and the dax-fs RFC [2]:
* rename struct dax_inode to struct dax_device (Christoph)
* rewrite arch_memcpy_to_pmem() in C with inline asm
* use QUEUE_FLAG_WC to gate dax cache management (Jeff)
* add device-mapper plumbing for the ->copy_from_iter() and ->flush()
dax_operations
* kill struct blk_dax_ctl and bdev_direct_access (Christoph)
* cleanup the ->direct_access() calling convention to be page based
(Christoph)
* introduce dax_get_by_host() and don't pollute struct super_block with
dax_device details (Christoph)
[1]: https://lists.01.org/pipermail/linux-nvdimm/2017-January/008586.html
[2]: https://lwn.net/Articles/713064/
---
A few months back, in the course of reviewing the memcpy_nocache()
proposal from Brian, Linus proposed that the pmem specific
memcpy_to_pmem() routine be moved to be implemented at the driver level
[3]:
"Quite frankly, the whole 'memcpy_nocache()' idea or (ab-)using
copy_user_nocache() just needs to die. It's idiotic.
As you point out, it's also fundamentally buggy crap.
Throw it away. There is no possible way this is ever valid or
portable. We're not going to lie and claim that it is.
If some driver ends up using 'movnt' by hand, that is up to that
*driver*. But no way in hell should we care about this one whit in
the sense of <linux/uaccess.h>."
This feedback also dovetails with another fs/dax.c design wart of being
hard coded to assume the backing device is pmem. We call the pmem
specific copy, clear, and flush routines even if the backing device
driver is one of the other three dax drivers (axonram, dcssblk, or brd).
There is no reason to spend cpu cycles flushing the cache after writing
to brd, for example, since it is using volatile memory for storage.
Moreover, the pmem driver might be fronting a volatile memory range
published by the ACPI NFIT, or the platform might have arranged to flush
cpu caches on power fail. This latter capability is a feature that has
appeared in embedded storage appliances (pre-ACPI-NFIT nvdimm
platforms).
So, this series:
1/ moves what was previously named "the pmem api" out of the global
namespace and into drivers that need to be concerned with
architecture specific persistent memory considerations.
2/ arranges for dax to stop abusing __copy_user_nocache() and implements
a libnvdimm-local memcpy that uses 'movnt' on x86_64. This might be
expanded in the future to use 'movntdqa' if the copy size is above
some threshold, or expanded with support for other architectures [4].
3/ makes cache maintenance optional by arranging for dax to call driver
specific copy and flush operations only if the driver publishes them.
4/ allows filesystem-dax cache management to be controlled by the block
device write-cache queue flag. The pmem driver is updated to clear
that flag by default when pmem is driving volatile memory.
[3]: https://lists.01.org/pipermail/linux-nvdimm/2017-January/008364.html
[4]: https://lists.01.org/pipermail/linux-nvdimm/2017-April/009478.html
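As a rough sketch of the driver-facing interface the series introduces
(member signatures abridged from the patches; the optional members from
points 3/ and 4/ above are only called when a driver provides them):

struct dax_operations {
	/*
	 * Return the number of contiguous pages addressable at this pgoff,
	 * and fill in the kernel virtual address and pfn for the range.
	 */
	long (*direct_access)(struct dax_device *dax_dev, pgoff_t pgoff,
			long nr_pages, void **kaddr, pfn_t *pfn);
	/* optional: copy data in with driver-specific cache management */
	size_t (*copy_from_iter)(struct dax_device *dax_dev, pgoff_t pgoff,
			void *addr, size_t bytes, struct iov_iter *i);
	/* optional: write back the cpu cache for the given range */
	void (*flush)(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
			size_t size);
};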
These patches have been through a round of build regression fixes
reported by the 0day robot. All review is welcome, but the patches that
need extra attention are the device-mapper and uio changes
(copy_from_iter_ops).
This series is based on a merge of char-misc-next (for cdev api reworks)
and libnvdimm-fixes (dax locking and __copy_user_nocache fixes).
---
Dan Williams (33):
device-dax: rename 'dax_dev' to 'dev_dax'
dax: refactor dax-fs into a generic provider of 'struct dax_device' instances
dax: add a facility to lookup a dax device by 'host' device name
dax: introduce dax_operations
pmem: add dax_operations support
axon_ram: add dax_operations support
brd: add dax_operations support
dcssblk: add dax_operations support
block: kill bdev_dax_capable()
dax: introduce dax_direct_access()
dm: add dax_device and dax_operations support
dm: teach dm-targets to use a dax_device + dax_operations
ext2, ext4, xfs: retrieve dax_device for iomap operations
Revert "block: use DAX for partition table reads"
filesystem-dax: convert to dax_direct_access()
block, dax: convert bdev_dax_supported() to dax_direct_access()
block: remove block_device_operations ->direct_access()
x86, dax, pmem: remove indirection around memcpy_from_pmem()
dax, pmem: introduce 'copy_from_iter' dax operation
dm: add ->copy_from_iter() dax operation support
filesystem-dax: convert to dax_copy_from_iter()
dax, pmem: introduce an optional 'flush' dax_operation
dm: add ->flush() dax operation support
filesystem-dax: convert to dax_flush()
x86, dax: replace clear_pmem() with open coded memset + dax_ops->flush
x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm
x86, libnvdimm, pmem: move arch_invalidate_pmem() to libnvdimm
x86, libnvdimm, dax: stop abusing __copy_user_nocache
uio, libnvdimm, pmem: implement cache bypass for all copy_from_iter() operations
libnvdimm, pmem: fix persistence warning
libnvdimm, nfit: enable support for volatile ranges
filesystem-dax: gate calls to dax_flush() on QUEUE_FLAG_WC
libnvdimm, pmem: disable dax flushing when pmem is fronting a volatile region
MAINTAINERS | 2
arch/powerpc/platforms/Kconfig | 1
arch/powerpc/sysdev/axonram.c | 45 +++-
arch/x86/Kconfig | 1
arch/x86/include/asm/pmem.h | 141 ------------
arch/x86/include/asm/string_64.h | 1
block/Kconfig | 1
block/partition-generic.c | 17 -
drivers/Makefile | 2
drivers/acpi/nfit/core.c | 15 +
drivers/block/Kconfig | 1
drivers/block/brd.c | 52 +++-
drivers/dax/Kconfig | 10 +
drivers/dax/Makefile | 5
drivers/dax/dax.h | 15 -
drivers/dax/device-dax.h | 25 ++
drivers/dax/device.c | 415 +++++++++++------------------------
drivers/dax/pmem.c | 10 -
drivers/dax/super.c | 445 ++++++++++++++++++++++++++++++++++++++
drivers/md/Kconfig | 1
drivers/md/dm-core.h | 1
drivers/md/dm-linear.c | 53 ++++-
drivers/md/dm-snap.c | 6 -
drivers/md/dm-stripe.c | 65 ++++--
drivers/md/dm-target.c | 6 -
drivers/md/dm.c | 112 ++++++++--
drivers/nvdimm/Kconfig | 6 +
drivers/nvdimm/Makefile | 1
drivers/nvdimm/bus.c | 10 -
drivers/nvdimm/claim.c | 9 -
drivers/nvdimm/core.c | 2
drivers/nvdimm/dax_devs.c | 2
drivers/nvdimm/dimm_devs.c | 2
drivers/nvdimm/namespace_devs.c | 9 -
drivers/nvdimm/nd-core.h | 9 +
drivers/nvdimm/pfn_devs.c | 4
drivers/nvdimm/pmem.c | 82 +++++--
drivers/nvdimm/pmem.h | 26 ++
drivers/nvdimm/region_devs.c | 39 ++-
drivers/nvdimm/x86.c | 155 +++++++++++++
drivers/s390/block/Kconfig | 1
drivers/s390/block/dcssblk.c | 44 +++-
fs/block_dev.c | 117 +++-------
fs/dax.c | 302 ++++++++++++++------------
fs/ext2/inode.c | 9 +
fs/ext4/inode.c | 9 +
fs/iomap.c | 3
fs/xfs/xfs_iomap.c | 10 +
include/linux/blkdev.h | 19 --
include/linux/dax.h | 43 +++-
include/linux/device-mapper.h | 14 +
include/linux/iomap.h | 1
include/linux/libnvdimm.h | 10 +
include/linux/pmem.h | 165 --------------
include/linux/string.h | 8 +
include/linux/uio.h | 4
lib/Kconfig | 6 -
lib/iov_iter.c | 25 ++
tools/testing/nvdimm/Kbuild | 11 +
tools/testing/nvdimm/pmem-dax.c | 21 +-
60 files changed, 1584 insertions(+), 1042 deletions(-)
delete mode 100644 arch/x86/include/asm/pmem.h
create mode 100644 drivers/dax/device-dax.h
rename drivers/dax/{dax.c => device.c} (60%)
create mode 100644 drivers/dax/super.c
create mode 100644 drivers/nvdimm/x86.c
delete mode 100644 include/linux/pmem.h
[bug report] libnvdimm: control (ioctl) messages for nvdimm_bus and nvdimm devices
by Dan Carpenter
Hello Dan Williams,
The patch 62232e45f4a2: "libnvdimm: control (ioctl) messages for
nvdimm_bus and nvdimm devices" from Jun 8, 2015, leads to the
following static checker warning:
drivers/nvdimm/bus.c:1018 __nd_ioctl()
warn: integer overflows 'buf_len'
drivers/nvdimm/bus.c
959 /* process an input envelope */
960 for (i = 0; i < desc->in_num; i++) {
961 u32 in_size, copy;
962
963 in_size = nd_cmd_in_size(nvdimm, cmd, desc, i, in_env);
964 if (in_size == UINT_MAX) {
965 dev_err(dev, "%s:%s unknown input size cmd: %s field: %d\n",
966 __func__, dimm_name, cmd_name, i);
967 return -ENXIO;
968 }
969 if (in_len < sizeof(in_env))
970 copy = min_t(u32, sizeof(in_env) - in_len, in_size);
971 else
972 copy = 0;
973 if (copy && copy_from_user(&in_env[in_len], p + in_len, copy))
974 return -EFAULT;
975 in_len += in_size;
From a casual review, this seems like it might be a real bug. On the
first iteration we load some data into in_env[]. On the second
iteration we read a user-controlled "in_size" from nd_cmd_in_size(). It
can go up to UINT_MAX - 1. A high number means we will fill the whole
in_env[] buffer. But we potentially keep looping and adding more to
in_len, so by the end it can be any value.
It's simple enough to change:
- in_len += in_size;
+ in_len += copy;
But it feels weird that we keep looping even though in_env is totally
full. Shouldn't we just return an error if we don't have space for all
desc->in_num fields?
976 }
977
978 if (cmd == ND_CMD_CALL) {
979 func = pkg.nd_command;
980 dev_dbg(dev, "%s:%s, idx: %llu, in: %zu, out: %zu, len %zu\n",
981 __func__, dimm_name, pkg.nd_command,
982 in_len, out_len, buf_len);
983
984 for (i = 0; i < ARRAY_SIZE(pkg.nd_reserved2); i++)
985 if (pkg.nd_reserved2[i])
986 return -EINVAL;
987 }
988
989 /* process an output envelope */
990 for (i = 0; i < desc->out_num; i++) {
991 u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i,
992 (u32 *) in_env, (u32 *) out_env, 0);
993 u32 copy;
994
995 if (out_size == UINT_MAX) {
996 dev_dbg(dev, "%s:%s unknown output size cmd: %s field: %d\n",
997 __func__, dimm_name, cmd_name, i);
998 return -EFAULT;
999 }
1000 if (out_len < sizeof(out_env))
1001 copy = min_t(u32, sizeof(out_env) - out_len, out_size);
1002 else
1003 copy = 0;
1004 if (copy && copy_from_user(&out_env[out_len],
1005 p + in_len + out_len, copy))
1006 return -EFAULT;
1007 out_len += out_size;
Same thing.
1008 }
1009
1010 buf_len = out_len + in_len;
It means this addition could overflow.
1011 if (buf_len > ND_IOCTL_MAX_BUFLEN) {
1012 dev_dbg(dev, "%s:%s cmd: %s buf_len: %zu > %d\n", __func__,
1013 dimm_name, cmd_name, buf_len,
1014 ND_IOCTL_MAX_BUFLEN);
1015 return -EINVAL;
1016 }
1017
1018 buf = vmalloc(buf_len);
^^^^^^^
The checker complains we might allocate less than intended.
1019 if (!buf)
1020 return -ENOMEM;
1021
1022 if (copy_from_user(buf, p, buf_len)) {
1023 rc = -EFAULT;
1024 goto out;
1025 }
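To make the suggestion concrete, one way to bound the accumulation (a
sketch only, not a tested fix) is to check each field against
ND_IOCTL_MAX_BUFLEN before adding it, so that neither in_len, out_len,
nor buf_len can wrap:

	/* inside the input-envelope loop, before "in_len += in_size;" */
	if (in_size > ND_IOCTL_MAX_BUFLEN ||
	    in_len > ND_IOCTL_MAX_BUFLEN - in_size)
		return -EINVAL;	/* payload too large, reject early */
	in_len += in_size;

	/* ... and the same guard for out_size/out_len in the output loop */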
regards,
dan carpenter
[PATCH 0/4] ndctl: integrate with tools/ infrastructure
by Dan Williams
For 4.14 I am proposing that ndctl development move into the kernel tree
[1]. The main motivations are to get the userspace tests in the same
source tree as the kernel-space test infrastructure, and to raise the
profile of this cpu-architecture and vendor agnostic nvdimm tooling.
There are some benefits to the kernel-tree as well. For example, more
users of the tools/lib/ common code and future expansion of that code to
support endian handling and GUID parsing.
These patches jettison the custom port of git's option parsing and
sub-command handling in favor of tools/lib/subcmd/. They remove the ccan
modules that have replacements in tools/include/, and they also remove a
custom port of bitmap primitives.
The main difference between this tool and perf is that it uses autotools
for its build and a different compiler-warning regime. It is not clear
that reconciling those differences in the near term is worth the effort.
Comments welcome...
These patches are also available on the for-4.14/ndctl branch of
djbw/nvdimm.git [2].
[1]: https://lkml.org/lkml/2017/7/21/688
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git/log/?h=fo...
---
Dan Williams (4):
ndctl: switch to kernel versioning scheme
MAINTAINERS: add ndctl files to libnvdimm
ndctl: switch to tools/include/linux/{kernel,list,bitmap}.h
ndctl: switch to tools/lib/subcmd/
MAINTAINERS | 1
tools/include/linux/hashtable.h | 4
tools/include/linux/kernel.h | 10
tools/lib/subcmd/parse-options.h | 1
tools/ndctl/Makefile.am | 18 -
tools/ndctl/Makefile.am.in | 2
tools/ndctl/ccan/array_size/LICENSE | 1
tools/ndctl/ccan/array_size/array_size.h | 26 -
tools/ndctl/ccan/container_of/LICENSE | 1
tools/ndctl/ccan/container_of/container_of.h | 109 ----
tools/ndctl/ccan/list/LICENSE | 1
tools/ndctl/ccan/list/list.c | 43 --
tools/ndctl/ccan/list/list.h | 656 ------------------------
tools/ndctl/ccan/minmax/LICENSE | 1
tools/ndctl/ccan/minmax/minmax.h | 65 --
tools/ndctl/configure.ac | 1
tools/ndctl/daxctl/daxctl.c | 4
tools/ndctl/daxctl/lib/libdaxctl-private.h | 4
tools/ndctl/daxctl/lib/libdaxctl.c | 20 -
tools/ndctl/daxctl/list.c | 6
tools/ndctl/git-version | 8
tools/ndctl/ndctl/bat.c | 4
tools/ndctl/ndctl/check.c | 16 -
tools/ndctl/ndctl/create-nfit.c | 16 -
tools/ndctl/ndctl/dimm.c | 15 -
tools/ndctl/ndctl/lib/libndctl-private.h | 6
tools/ndctl/ndctl/lib/libndctl-smart.c | 5
tools/ndctl/ndctl/lib/libndctl.c | 116 ++--
tools/ndctl/ndctl/list.c | 6
tools/ndctl/ndctl/namespace.c | 8
tools/ndctl/ndctl/ndctl.c | 4
tools/ndctl/ndctl/region.c | 3
tools/ndctl/ndctl/test.c | 4
tools/ndctl/ndctl/util/json-smart.c | 2
tools/ndctl/nfit.h | 1
tools/ndctl/test/blk_namespaces.c | 2
tools/ndctl/test/core.c | 2
tools/ndctl/test/daxdev-errors.c | 2
tools/ndctl/test/device-dax.c | 2
tools/ndctl/test/dpa-alloc.c | 2
tools/ndctl/test/dsm-fail.c | 2
tools/ndctl/test/libndctl.c | 3
tools/ndctl/test/multi-pmem.c | 2
tools/ndctl/test/pmem_namespaces.c | 3
tools/ndctl/util/bitmap.c | 131 -----
tools/ndctl/util/bitmap.h | 44 --
tools/ndctl/util/help.c | 4
tools/ndctl/util/json.c | 2
tools/ndctl/util/kernel.h | 9
tools/ndctl/util/list.h | 24 +
tools/ndctl/util/parse-options.c | 697 --------------------------
tools/ndctl/util/parse-options.h | 225 --------
tools/ndctl/util/size.h | 1
tools/ndctl/util/util.h | 1
tools/perf/util/util.h | 2
55 files changed, 205 insertions(+), 2143 deletions(-)
delete mode 120000 tools/ndctl/ccan/array_size/LICENSE
delete mode 100644 tools/ndctl/ccan/array_size/array_size.h
delete mode 120000 tools/ndctl/ccan/container_of/LICENSE
delete mode 100644 tools/ndctl/ccan/container_of/container_of.h
delete mode 120000 tools/ndctl/ccan/list/LICENSE
delete mode 100644 tools/ndctl/ccan/list/list.c
delete mode 100644 tools/ndctl/ccan/list/list.h
delete mode 120000 tools/ndctl/ccan/minmax/LICENSE
delete mode 100644 tools/ndctl/ccan/minmax/minmax.h
delete mode 100644 tools/ndctl/util/bitmap.c
delete mode 100644 tools/ndctl/util/bitmap.h
create mode 100644 tools/ndctl/util/kernel.h
create mode 100644 tools/ndctl/util/list.h
delete mode 100644 tools/ndctl/util/parse-options.c
delete mode 100644 tools/ndctl/util/parse-options.h