[SPDK] SPDK can't pass fio test with 4 clients on 4 split partitions

nixun_992 at sina.com nixun_992 at sina.com
Mon Jun 12 00:39:35 PDT 2017


 Hi, all:

   SPDK can't pass the fio test after about 2 hours of testing with 4 clients on 4 split partitions. The same test passes if we use the version from before Mar 29.

 The error messages are the following:
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:202 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:151 nsid:1 lba:1310481664 len:256
ABORTED - BY REQUEST (00/07) sqid:1 cid:151 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1312030816 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:253 nsid:1 lba:998926248 len:88
ABORTED - BY REQUEST (00/07) sqid:1 cid:253 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:243 nsid:1 lba:1049582336 len:176
ABORTED - BY REQUEST (00/07) sqid:1 cid:243 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:169 nsid:1 lba:1109679488 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:169 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:958884728 len:136
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:152 nsid:1 lba:1018345728 len:240
ABORTED - BY REQUEST (00/07) sqid:1 cid:152 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:234 nsid:1 lba:898096896 len:8
ABORTED - BY REQUEST (00/07) sqid:1 cid:234 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:991125248 len:96
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
resetting controller
resetting controller
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:130 nsid:1 lba:609149952 len:64
ABORTED - BY REQUEST (00/07) sqid:1 cid:130 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
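For what it's worth, abort logs like the above are easy to summarize with a short script. This is just a sketch (not part of SPDK); the regular expression only assumes the command format shown in the log, e.g. `READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128`:

```python
import re

# Matches the command lines printed alongside nvme_pcie_qpair_abort_trackers
# errors; the "ABORTED - BY REQUEST" status lines do not match because they
# carry cdw0:/sqhd: fields instead of nsid:/lba:.
CMD_RE = re.compile(
    r"(?P<op>\w+) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def summarize_aborts(log_text):
    """Return (opcode, lba, length-in-blocks) for each aborted command."""
    return [
        (m.group("op"), int(m.group("lba")), int(m.group("len")))
        for m in CMD_RE.finditer(log_text)
    ]

sample = """\
nvme_pcie.c:1133:nvme_pcie_qpair_abort_trackers: ***ERROR*** aborting outstanding command
READ sqid:1 cid:134 nsid:1 lba:1310481280 len:128
ABORTED - BY REQUEST (00/07) sqid:1 cid:134 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
READ sqid:1 cid:202 nsid:1 lba:1310481408 len:256
"""

print(summarize_aborts(sample))
```

A summary like this makes it easier to see whether the aborted commands cluster in one LBA range (i.e., one split partition) or are spread across all four.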


=========================

Our vhost configuration is the following:

# The Split virtual block device slices block devices into multiple smaller bdevs.
[Split]
  # Syntax:
  #   Split <bdev> <count> [<size_in_megabytes>]
  #
  # Split Nvme0n1 into four 200000 MB portions, Nvme0n1p0 ... Nvme0n1p3
  Split Nvme0n1 4 200000

  # Split Malloc2 into eight 1-megabyte portions, Malloc2p0 ... Malloc2p7,
  # leaving the rest of the device inaccessible
  #Split Malloc2 8 1
[VhostScsi0]
    Dev 0 Nvme0n1p0
[VhostScsi1]
    Dev 0 Nvme0n1p1
[VhostScsi2]
    Dev 0 Nvme0n1p2
[VhostScsi3]
    Dev 0 Nvme0n1p3
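To spell out what `Split Nvme0n1 4 200000` produces: four consecutive 200000 MB slices, with anything past 4 x 200000 MB left inaccessible. A small arithmetic sketch (plain Python, not SPDK code — it just mirrors the documented split semantics):

```python
MB = 1024 * 1024

def split_layout(count, size_mb):
    """Byte (offset, size) of each slice created by `Split <bdev> <count> <size_mb>`.

    Slices are laid out back to back from offset 0, matching the split
    vbdev's equal, consecutive partitioning.
    """
    return [(i * size_mb * MB, size_mb * MB) for i in range(count)]

# Split Nvme0n1 4 200000 -> Nvme0n1p0 .. Nvme0n1p3
for i, (off, size) in enumerate(split_layout(4, 200000)):
    print(f"Nvme0n1p{i}: offset={off} size={size}")
```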

The fio script is the following:

[global]
filename=/mnt/ssdtest1
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5

# direct rand read
[rand-read]
bsrange=1k-512k
#direct=1
rw=randread
runtime=10000
stonewall

# direct seq read
[seq-read]
bsrange=1k-512k
direct=1
rw=read
runtime=10000
stonewall

# direct rand write
[rand-write]
bsrange=1k-512k
direct=1
rw=randwrite
runtime=10000
stonewall

# direct seq write
[seq-write]
bsrange=1k-512k
direct=1
rw=write
runtime=10000
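Each of the four clients runs this same script against its own mounted split partition. A tiny generator like the following keeps the four job files in sync (a sketch — the per-VM mount points `/mnt/ssdtest1` ... `/mnt/ssdtest4` and the trimmed single-job template are assumptions, not part of the original setup):

```python
# Hypothetical per-client fio job generator; only the [global] section and
# one job are reproduced here for brevity.
FIO_TEMPLATE = """\
[global]
filename={filename}
size=100G
numjobs=8
iodepth=16
ioengine=libaio
group_reporting
do_verify=1
verify=md5

[rand-read]
bsrange=1k-512k
rw=randread
runtime=10000
stonewall
"""

def make_job(client_id):
    """Render the fio job file for one client, varying only the filename."""
    return FIO_TEMPLATE.format(filename=f"/mnt/ssdtest{client_id}")

for i in range(1, 5):
    print(f"--- client{i}.fio ---")
    print(make_job(i))
```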

