[xfs] 73e5fff98b: kmsg.dev/zero:Can't_open_blockdev
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 73e5fff98b6446de1490a8d7809121b0108d49f4 ("xfs: switch to use the new mount-api")
https://git.kernel.org/cgit/fs/xfs/xfs-linux.git xfs-5.5-merge
in testcase: ltp
with the following parameters:
disk: 1HDD
fs: xfs
test: fs-03
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 135.976643] LTP: starting fs_fill
[ 135.993912] /dev/zero: Can't open blockdev
[ 136.020327] raid6: sse2x4 gen() 14769 MB/s
[ 136.037281] raid6: sse2x4 xor() 8927 MB/s
[ 136.054236] raid6: sse2x2 gen() 12445 MB/s
[ 136.071397] raid6: sse2x2 xor() 7441 MB/s
[ 136.089313] raid6: sse2x1 gen() 10089 MB/s
[ 136.107334] raid6: sse2x1 xor() 7201 MB/s
[ 136.108198] raid6: using algorithm sse2x4 gen() 14769 MB/s
[ 136.109320] raid6: .... xor() 8927 MB/s, rmw enabled
[ 136.111966] raid6: using ssse3x2 recovery algorithm
[ 136.122740] xor: automatically using best checksumming function avx
[ 136.187956] Btrfs loaded, crc32c=crc32c-intel
[ 136.216946] fuse: init (API version 7.31)
[ 136.327654] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
[ 136.334974] EXT4-fs (loop0): mounted filesystem without journal. Opts: (null)
[ 136.338933] Mounted ext2 file system at /tmp/ltp-bl4kncm4Ti/g2oJfj/mntpoint supports timestamps until 2038 (0x7fffffff)
[ 137.897422] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 137.908242] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
[ 137.910111] Mounted ext3 file system at /tmp/ltp-bl4kncm4Ti/g2oJfj/mntpoint supports timestamps until 2038 (0x7fffffff)
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc3-00117-g73e5fff98b644 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[perf evsel] 076f2986d3: BUG:sleeping_function_called_from_invalid_context_at_kernel/events/core.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 076f2986d3dc78d8f60d35d34dacf3d59d3a59c5 ("perf evsel: Support opening on a specific CPU")
https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git perf/stat-scale-8
in testcase: perf-sanity-tests
with the following parameters:
perf_compiler: gcc
ucode: 0x27
on test machine: 8 threads Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------+------------+------------+
| | 5b336ec4fb | 076f2986d3 |
+-----------------------------------------------+------------+------------+
| boot_successes | 6 | 0 |
| boot_failures | 0 | 6 |
| WARNING:at_kernel/sched/core.c:#__might_sleep | 0 | 6 |
| RIP:__might_sleep | 0 | 6 |
+-----------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
kern :err : [ 62.538355] BUG: sleeping function called from invalid context at kernel/events/core.c:10702
kern :err : [ 62.548862] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 5835, name: perf
kern :warn : [ 62.556960] CPU: 5 PID: 5835 Comm: perf Not tainted 5.4.0-rc5-00157-g076f2986d3dc7 #1
kern :warn : [ 62.565246] Hardware name: Dell Inc. OptiPlex 9020/0DNKMN, BIOS A05 12/05/2013
kern :warn : [ 62.572939] Call Trace:
kern :warn : [ 62.575913] dump_stack+0x5c/0x7b
kern :warn : [ 62.579727] ___might_sleep+0x102/0x120
kern :warn : [ 62.584059] __might_fault+0x2b/0x30
kern :warn : [ 62.588128] perf_copy_attr+0x33/0x300
kern :warn : [ 62.592340] __do_sys_perf_event_open+0x87/0xcf0
kern :warn : [ 62.597402] do_syscall_64+0x5b/0x1d0
kern :warn : [ 62.601505] entry_SYSCALL_64_after_hwframe+0x44/0xa9
kern :warn : [ 62.607008] RIP: 0033:0x7f865a968469
kern :warn : [ 62.611055] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
kern :warn : [ 62.630651] RSP: 002b:00007fff9cfe8848 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
kern :warn : [ 62.638636] RAX: ffffffffffffffda RBX: 000055bfa2505a90 RCX: 00007f865a968469
kern :warn : [ 62.646233] RDX: 00000000ffffffff RSI: 00000000000016cc RDI: 000055bfa2505aa0
kern :warn : [ 62.653836] RBP: 00007fff9cfe8960 R08: 0000000000000008 R09: 0000000000000008
kern :warn : [ 62.661406] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000ffffffff
kern :warn : [ 62.668986] R13: 00000000000016cc R14: 00000000ffffffff R15: 0000000000000008
kern :warn : [ 62.676550] ------------[ cut here ]------------
kern :warn : [ 62.681622] WARNING: CPU: 5 PID: 5835 at lib/usercopy.c:12 _copy_from_user+0x69/0xa0
kern :warn : [ 62.691430] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver binfmt_misc btrfs xor zstd_decompress zstd_compress raid6_pq libcrc32c sr_mod cdrom sd_mod sg snd_hda_codec_hdmi ata_generic pata_acpi intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp snd_hda_codec_realtek snd_hda_codec_generic kvm_intel i915 ledtrig_audio mei_wdt kvm snd_hda_intel dcdbas snd_intel_nhlt irqbypass crct10dif_pclmul drm_kms_helper snd_hda_codec crc32_pclmul snd_hda_core crc32c_intel syscopyarea ghash_clmulni_intel sysfillrect snd_hwdep aesni_intel sysimgblt snd_pcm fb_sys_fops crypto_simd mei_me snd_timer ata_piix cryptd glue_helper snd pcspkr drm libata i2c_i801 joydev mei lpc_ich soundcore video ip_tables
kern :warn : [ 62.758469] CPU: 5 PID: 5835 Comm: perf Tainted: G W 5.4.0-rc5-00157-g076f2986d3dc7 #1
kern :warn : [ 62.768169] Hardware name: Dell Inc. OptiPlex 9020/0DNKMN, BIOS A05 12/05/2013
kern :warn : [ 62.775905] RIP: 0010:_copy_from_user+0x69/0xa0
kern :warn : [ 62.780988] Code: 48 89 ea 65 48 8b 04 25 c0 6b 01 00 48 01 da 48 8b 80 d8 22 00 00 72 30 48 39 c2 76 11 48 89 e8 48 85 c0 75 1a 5b 5d 41 5c c3 <0f> 0b eb d2 4c 89 e7 48 89 de 89 ea e8 96 85 5f 00 89 c0 eb e1 48
kern :warn : [ 62.800794] RSP: 0018:ffffc90000cf3e00 EFLAGS: 00010246
kern :warn : [ 62.806562] RAX: 0000000000000000 RBX: 000055bfa2505aa0 RCX: 0000000000000001
kern :warn : [ 62.814241] RDX: 00000000fffc53bd RSI: 000000000000000b RDI: ffffffff8233a790
kern :warn : [ 62.821941] RBP: 0000000000000070 R08: 000000000000092e R09: 0000000000aaaaaa
kern :warn : [ 62.829589] R10: 0000000000000003 R11: 00000000ffffffff R12: ffffc90000cf3e90
kern :warn : [ 62.837267] R13: 0000000000000070 R14: 000055bfa2505aa4 R15: 0000000000000070
kern :warn : [ 62.844955] FS: 00007f865d742000(0000) GS:ffff88821eb40000(0000) knlGS:0000000000000000
kern :warn : [ 62.853526] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kern :warn : [ 62.859825] CR2: 000055bfa25050c0 CR3: 0000000208084006 CR4: 00000000001606e0
kern :warn : [ 62.867466] Call Trace:
kern :warn : [ 62.870478] perf_copy_attr+0xa5/0x300
kern :warn : [ 62.874795] __do_sys_perf_event_open+0x87/0xcf0
kern :warn : [ 62.879957] do_syscall_64+0x5b/0x1d0
kern :warn : [ 62.884184] entry_SYSCALL_64_after_hwframe+0x44/0xa9
kern :warn : [ 62.889772] RIP: 0033:0x7f865a968469
kern :warn : [ 62.893895] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
kern :warn : [ 62.913609] RSP: 002b:00007fff9cfe8848 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
kern :warn : [ 62.921663] RAX: ffffffffffffffda RBX: 000055bfa2505a90 RCX: 00007f865a968469
kern :warn : [ 62.929307] RDX: 00000000ffffffff RSI: 00000000000016cc RDI: 000055bfa2505aa0
kern :warn : [ 62.936972] RBP: 00007fff9cfe8960 R08: 0000000000000008 R09: 0000000000000008
kern :warn : [ 62.944607] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000ffffffff
kern :warn : [ 62.952280] R13: 00000000000016cc R14: 00000000ffffffff R15: 0000000000000008
kern :warn : [ 62.959947] ---[ end trace 33de98532d382fbb ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
[btrfs] 0670fc0515: WARNING:at_fs/btrfs/volumes.c:#close_fs_devices[btrfs]
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 0670fc0515994b1cbb1788b9d50b740b692b290e ("btrfs: remove final BUG_ON() in close_fs_devices()")
https://git.kernel.org/cgit/linux/kernel/git/jth/linux.git close_fs_devices
in testcase: ltp
with the following parameters:
disk: 1HDD
fs: btrfs
test: io
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------+------------+------------+
| | b642ff0797 | 0670fc0515 |
+--------------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| WARNING:at_fs/btrfs/volumes.c:#close_fs_devices[btrfs] | 0 | 4 |
| RIP:close_fs_devices[btrfs] | 0 | 4 |
+--------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 38.566982] WARNING: CPU: 1 PID: 2524 at fs/btrfs/volumes.c:1129 close_fs_devices+0x217/0x220 [btrfs]
[ 38.572292] Modules linked in: loop btrfs xor zstd_decompress zstd_compress raid6_pq libcrc32c dm_mod intel_rapl_msr intel_rapl_common sr_mod cdrom sg crct10dif_pclmul ata_generic pata_acpi crc32_pclmul crc32c_intel ghash_clmulni_intel bochs_drm drm_vram_helper ttm ppdev drm_kms_helper snd_pcm aesni_intel syscopyarea crypto_simd snd_timer sysfillrect cryptd glue_helper snd sysimgblt fb_sys_fops ata_piix soundcore pcspkr joydev serio_raw libata drm i2c_piix4 floppy parport_pc parport ip_tables
[ 38.574649] /usr/bin/wget -q --timeout=1800 --tries=1 --local-encoding=UTF-8 http://inn:80/~lkp/cgi-bin/lkp-jobfile-append-var?job_file=/lkp/jobs/sche... -O /dev/null
[ 38.574655]
[ 38.591682] CPU: 1 PID: 2524 Comm: umount Not tainted 5.4.0-rc6-00141-g0670fc0515994 #5
[ 38.591684] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 38.591731] RIP: 0010:close_fs_devices+0x217/0x220 [btrfs]
[ 38.591737] Code: 89 55 10 49 89 55 18 49 83 6c 24 48 01 e9 0d ff ff ff 85 c0 41 89 c4 0f 84 5b ff ff ff e9 aa fe ff ff 0f 0b eb 85 0f 0b eb 88 <0f> 0b eb 8e 0f 1f 44 00 00 66 66 66 66 90 41 55 41 54 45 31 ed 55
[ 38.603898] target ucode:
[ 38.603902]
[ 38.604241] RSP: 0018:ffffa7b4804efd98 EFLAGS: 00010286
[ 38.609872] 2019-11-09 14:54:14 dmsetup remove_all
[ 38.609877]
[ 38.615418] RAX: 00000000ffffffff RBX: ffff9b02acb1b400 RCX: 0000000000000000
[ 38.615420] RDX: ffff9b02948a4d00 RSI: 0000000000000001 RDI: ffff9b02acb1b478
[ 38.615421] RBP: ffff9b02ab002000 R08: 0000000000000000 R09: ffffffffc04dc100
[ 38.615423] R10: ffff9b02acb1b600 R11: 0000000000000001 R12: ffff9b02acb1b400
[ 38.615424] R13: ffff9b02acb1b498 R14: dead000000000122 R15: ffff9b02ab002000
[ 38.615426] FS: 00007f57675bfe40(0000) GS:ffff9b033fd00000(0000) knlGS:0000000000000000
[ 38.615428] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 38.615429] CR2: 00007f5766e02d30 CR3: 0000000196726000 CR4: 00000000000406e0
[ 38.615435] Call Trace:
[ 38.615497] btrfs_close_devices+0x55/0xd0 [btrfs]
[ 38.621759] 2019-11-09 14:54:14 wipefs -a --force /dev/vda
[ 38.621763]
[ 38.625995] close_ctree+0x26c/0x2d6 [btrfs]
[ 38.626006] generic_shutdown_super+0x6c/0x120
[ 38.630922] 2019-11-09 14:54:14 mkfs -t btrfs /dev/vda
[ 38.630927]
[ 38.634597] kill_anon_super+0xe/0x30
[ 38.638138] btrfs-progs v4.7.3
[ 38.638142]
[ 38.639046] btrfs_kill_super+0x12/0xa0 [btrfs]
[ 38.639060] deactivate_locked_super+0x3f/0x70
[ 38.645435] See http://btrfs.wiki.kernel.org for more information.
[ 38.645439]
[ 38.645985] cleanup_mnt+0xb8/0x150
[ 38.649634]
[ 38.652960] task_work_run+0xa3/0xe0
[ 38.652969] exit_to_usermode_loop+0xeb/0xf0
[ 38.652972] do_syscall_64+0x1a7/0x1d0
[ 38.652981] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 38.652987] RIP: 0033:0x7f5766ea4d77
[ 38.658695] Label: (null)
[ 38.658699]
[ 38.660151] Code: 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f1 00 2b 00 f7 d8 64 89 01 48
[ 38.660153] RSP: 002b:00007fff86870668 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 38.660155] RAX: 0000000000000000 RBX: 000055c62eec4760 RCX: 00007f5766ea4d77
[ 38.660165] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000055c62eec4940
[ 38.664313] UUID:
[ 38.664318]
[ 38.666748] RBP: 000055c62eec4940 R08: 000055c62eec5ce0 R09: 0000000000000015
[ 38.666750] R10: 00000000000006b4 R11: 0000000000000246 R12: 00007f57673a6e64
[ 38.666751] R13: 0000000000000000 R14: 0000000000000000 R15: 00007fff868708f0
[ 38.666757] ---[ end trace d80dd6e09789e541 ]---
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc6-00141-g0670fc0515994 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp
[scsi] 9393c8de62: Initramfs_unpacking_failed
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 9393c8de628cf0968d81a17cc11841e42191e041 ("scsi: core: Handle drivers which set sg_tablesize to zero")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: xfstests
with the following parameters:
disk: 4HDD
fs: xfs
test: xfs-group18
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------+------------+------------+
| | 8b1062d513 | 9393c8de62 |
+----------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 0 | 430 |
| Initramfs_unpacking_failed | 0 | 430 |
+----------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 1.272210] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.273715] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.275324] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 1.277815] PCI: CLS 0 bytes, default 64
[ 1.278970] Trying to unpack rootfs image as initramfs...
[ 4.011404] Initramfs unpacking failed: broken padding
[ 4.099786] Freeing initrd memory: 346408K
[ 4.103301] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 4.104965] software IO TLB: mapped [mem 0xbbfd2000-0xbffd2000] (64MB)
[ 4.106874] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29b38900952, max_idle_ns: 440795323692 ns
[ 4.119274] Initialise system trusted keyrings
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc2-00003-g9393c8de628cf .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp
[block] fa53228721: WARNING:at_block/blk-merge.c:#blk_rq_map_sg
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: fa53228721876515adabc7bc74368490bd97aa3b ("block: avoid blk_bio_segment_split for small I/O operations")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-5.5/block
in testcase: xfstests
with the following parameters:
disk: 4HDD
fs: xfs
test: xfs-group16
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------+------------+------------+
| | d2c9be89f8 | fa53228721 |
+---------------------------------------------+------------+------------+
| boot_successes | 12 | 0 |
| boot_failures | 0 | 16 |
| WARNING:at_block/blk-merge.c:#blk_rq_map_sg | 0 | 16 |
| RIP:blk_rq_map_sg | 0 | 16 |
| kernel_BUG_at_drivers/scsi/scsi_lib.c | 0 | 16 |
| invalid_opcode:#[##] | 0 | 16 |
| RIP:scsi_init_io | 0 | 16 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 16 |
+---------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 203.892883] WARNING: CPU: 0 PID: 443 at block/blk-merge.c:559 blk_rq_map_sg+0x649/0x700
[ 203.897634] Modules linked in: sd_mod scsi_debug xfs libcrc32c dm_mod sr_mod cdrom intel_rapl_msr intel_rapl_common sg ata_generic pata_acpi crct10dif_pclmul crc32_pclmul crc32c_intel bochs_drm ppdev drm_vram_helper ghash_clmulni_intel ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops snd_pcm drm snd_timer snd aesni_intel crypto_simd ata_piix cryptd glue_helper soundcore joydev pcspkr serio_raw libata i2c_piix4 floppy parport_pc parport ip_tables
[ 203.910875] CPU: 0 PID: 443 Comm: kworker/0:1H Not tainted 5.4.0-rc2-00027-gfa53228721876 #7
[ 203.913336] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 203.915809] Workqueue: kblockd blk_mq_run_work_fn
[ 203.917547] RIP: 0010:blk_rq_map_sg+0x649/0x700
[ 203.919306] Code: 0f 84 83 fb ff ff f7 d0 21 d0 83 c0 01 01 41 0c 01 86 cc 00 00 00 e9 6e fb ff ff 48 8b 04 24 4c 89 e1 8b 40 1c e9 56 fb ff ff <0f> 0b e9 5d fc ff ff 0f 0b 0f 0b 0f 0b 80 3d cf 5b 3b 01 00 74 09
[ 203.924618] RSP: 0018:ffffb420403c3bd8 EFLAGS: 00010202
[ 203.926540] RAX: 0000000000000001 RBX: 0000000000000001 RCX: ffff8f4f5fe49800
[ 203.928780] RDX: 0000000000001000 RSI: ffff8f4f31832400 RDI: ffffb4204035fb60
[ 203.931084] RBP: ffff8f4f5ed641c0 R08: ffff8f4f5ed641c0 R09: 0000000000000600
[ 203.933389] R10: 0000000000001000 R11: 0000000000001000 R12: ffff8f4f5fe49800
[ 203.935687] R13: 0000000000000002 R14: 0000000000000600 R15: 0000000000000000
[ 203.937962] FS: 0000000000000000(0000) GS:ffff8f4fbfc00000(0000) knlGS:0000000000000000
[ 203.940416] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 203.942414] CR2: 000056365d914000 CR3: 0000000228cb6000 CR4: 00000000000406f0
[ 203.944683] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 203.946992] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 203.949261] Call Trace:
[ 203.951375] scsi_init_io+0x66/0x170
[ 203.952941] sd_init_command+0x192/0xac0 [sd_mod]
[ 203.954775] scsi_queue_rq+0x597/0xac0
[ 203.956361] blk_mq_dispatch_rq_list+0x3da/0x5b0
[ 203.958160] ? syscall_return_via_sysret+0x10/0x7f
[ 203.959984] ? __switch_to_asm+0x40/0x70
[ 203.961606] ? __switch_to_asm+0x34/0x70
[ 203.963263] ? elv_rb_del+0x1f/0x30
[ 203.964810] ? deadline_remove_request+0x55/0xc0
[ 203.966618] blk_mq_do_dispatch_sched+0x76/0x120
[ 203.968365] blk_mq_sched_dispatch_requests+0x100/0x170
[ 203.970222] __blk_mq_run_hw_queue+0x60/0x130
[ 203.971930] process_one_work+0x1ae/0x3d0
[ 203.973539] worker_thread+0x3c/0x3b0
[ 203.975115] ? process_one_work+0x3d0/0x3d0
[ 203.976737] kthread+0x11e/0x140
[ 203.978206] ? kthread_park+0x90/0x90
[ 203.979727] ret_from_fork+0x35/0x40
[ 203.981236] ---[ end trace a0fde01679c74e77 ]---
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc2-00027-gfa53228721876 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp
[sched/fair] 0b0695f2b3: vm-scalability.median 3.1% improvement
by kernel test robot
Greetings,
FYI, we noticed a 3.1% improvement of vm-scalability.median due to commit:
commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with the following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq
cpufreq_governor: performance
ucode: 0x2000064
test-description: The motivation behind this suite is to exercise functions and regions of the Linux kernel's mm/ subsystem which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=30s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
413301 +3.1% 426103 vm-scalability.median
0.04 ± 2% -34.0% 0.03 ± 12% vm-scalability.median_stddev
43837589 +2.4% 44902458 vm-scalability.throughput
181085 -18.7% 147221 vm-scalability.time.involuntary_context_switches
12762365 ± 2% +3.9% 13262025 vm-scalability.time.minor_page_faults
7773 +2.9% 7997 vm-scalability.time.percent_of_cpu_this_job_got
11449 +1.2% 11589 vm-scalability.time.system_time
12024 +4.7% 12584 vm-scalability.time.user_time
439194 ± 2% +46.0% 641402 ± 2% vm-scalability.time.voluntary_context_switches
1.148e+10 +5.0% 1.206e+10 vm-scalability.workload
0.00 ± 54% +0.0 0.00 ± 17% mpstat.cpu.all.iowait%
4767597 +52.5% 7268430 ± 41% numa-numastat.node1.local_node
4781030 +52.3% 7280347 ± 41% numa-numastat.node1.numa_hit
24.75 -9.1% 22.50 ± 2% vmstat.cpu.id
37.50 +4.7% 39.25 vmstat.cpu.us
6643 ± 3% +15.1% 7647 vmstat.system.cs
12220504 +33.4% 16298593 ± 4% cpuidle.C1.time
260215 ± 6% +55.3% 404158 ± 3% cpuidle.C1.usage
4986034 ± 3% +56.2% 7786811 ± 2% cpuidle.POLL.time
145941 ± 3% +61.2% 235218 ± 2% cpuidle.POLL.usage
1990 +3.0% 2049 turbostat.Avg_MHz
254633 ± 6% +56.7% 398892 ± 4% turbostat.C1
0.04 +0.0 0.05 turbostat.C1%
309.99 +1.5% 314.75 turbostat.RAMWatt
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.active_objs
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.num_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.active_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.num_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.active_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.num_objs
0.67 ± 5% +0.1 0.73 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
0.68 ± 6% +0.1 0.74 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.schedule
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__wake_up_common
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
0.23 ± 7% +0.0 0.28 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.05 perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
0.00 +0.1 0.05 perf-profile.children.cycles-pp.sys_imageblit
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_inactive_anon
30069 ± 3% -20.5% 23905 ± 26% numa-vmstat.node0.nr_shmem
12120 ± 2% -15.5% 10241 ± 12% numa-vmstat.node0.nr_slab_reclaimable
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_zone_inactive_anon
4010893 +16.1% 4655889 ± 9% numa-vmstat.node1.nr_active_anon
3982581 +16.3% 4632344 ± 9% numa-vmstat.node1.nr_anon_pages
6861 +16.1% 7964 ± 8% numa-vmstat.node1.nr_anon_transparent_hugepages
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_inactive_anon
6596 ± 4% +18.2% 7799 ± 14% numa-vmstat.node1.nr_kernel_stack
9629 ± 8% +66.4% 16020 ± 41% numa-vmstat.node1.nr_shmem
7558 ± 3% +26.5% 9561 ± 14% numa-vmstat.node1.nr_slab_reclaimable
4010227 +16.1% 4655056 ± 9% numa-vmstat.node1.nr_zone_active_anon
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_zone_inactive_anon
2859663 ± 2% +46.2% 4179500 ± 36% numa-vmstat.node1.numa_hit
2680260 ± 2% +49.3% 4002218 ± 37% numa-vmstat.node1.numa_local
116661 ± 3% -26.3% 86010 ± 44% numa-meminfo.node0.Inactive
116192 ± 3% -26.7% 85146 ± 44% numa-meminfo.node0.Inactive(anon)
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.KReclaimable
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.SReclaimable
120367 ± 3% -20.5% 95642 ± 26% numa-meminfo.node0.Shmem
16210528 +15.2% 18673368 ± 6% numa-meminfo.node1.Active
16210394 +15.2% 18673287 ± 6% numa-meminfo.node1.Active(anon)
14170064 +15.6% 16379835 ± 7% numa-meminfo.node1.AnonHugePages
16113351 +15.3% 18577254 ± 7% numa-meminfo.node1.AnonPages
10534 ± 33% +293.8% 41480 ± 92% numa-meminfo.node1.Inactive
9262 ± 42% +338.2% 40589 ± 93% numa-meminfo.node1.Inactive(anon)
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.KReclaimable
6594 ± 4% +18.3% 7802 ± 14% numa-meminfo.node1.KernelStack
17083646 +15.1% 19656922 ± 7% numa-meminfo.node1.MemUsed
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.SReclaimable
38540 ± 8% +66.4% 64117 ± 42% numa-meminfo.node1.Shmem
106342 +19.8% 127451 ± 11% numa-meminfo.node1.Slab
9479688 +4.5% 9905902 proc-vmstat.nr_active_anon
9434298 +4.5% 9856978 proc-vmstat.nr_anon_pages
16194 +4.3% 16895 proc-vmstat.nr_anon_transparent_hugepages
276.75 +3.6% 286.75 proc-vmstat.nr_dirtied
3888633 -1.1% 3845882 proc-vmstat.nr_dirty_background_threshold
7786774 -1.1% 7701168 proc-vmstat.nr_dirty_threshold
39168820 -1.1% 38741444 proc-vmstat.nr_free_pages
50391 +1.0% 50904 proc-vmstat.nr_slab_unreclaimable
257.50 +3.6% 266.75 proc-vmstat.nr_written
9479678 +4.5% 9905895 proc-vmstat.nr_zone_active_anon
1501517 -5.9% 1412958 proc-vmstat.numa_hint_faults
1075936 -13.1% 934706 proc-vmstat.numa_hint_faults_local
17306395 +4.8% 18141722 proc-vmstat.numa_hit
5211079 +4.2% 5427541 proc-vmstat.numa_huge_pte_updates
17272620 +4.8% 18107691 proc-vmstat.numa_local
33774 +0.8% 34031 proc-vmstat.numa_other
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.numa_pages_migrated
2.669e+09 +4.2% 2.78e+09 proc-vmstat.numa_pte_updates
2.755e+09 +5.6% 2.909e+09 proc-vmstat.pgalloc_normal
13573227 ± 2% +3.6% 14060842 proc-vmstat.pgfault
2.752e+09 +5.6% 2.906e+09 proc-vmstat.pgfree
1.723e+08 ± 2% +14.3% 1.97e+08 ± 8% proc-vmstat.pgmigrate_fail
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.pgmigrate_success
5015265 +5.0% 5266730 proc-vmstat.thp_deferred_split_page
5019661 +5.0% 5271482 proc-vmstat.thp_fault_alloc
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.MIN_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.MIN_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.MIN_vruntime.stddev
15241 ± 6% -36.6% 9655 ± 6% sched_debug.cfs_rq:/.exec_clock.stddev
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.max_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.max_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.max_vruntime.stddev
898812 ± 7% -31.2% 618552 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
10.30 ± 12% +34.5% 13.86 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
34.75 ± 8% +95.9% 68.08 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
9.12 ± 11% +82.3% 16.62 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
-1470498 -31.9% -1000709 sched_debug.cfs_rq:/.spread0.min
899820 ± 7% -31.2% 618970 ± 5% sched_debug.cfs_rq:/.spread0.stddev
1589 ± 9% -19.2% 1284 ± 9% sched_debug.cfs_rq:/.util_avg.max
0.54 ± 39% +7484.6% 41.08 ± 92% sched_debug.cfs_rq:/.util_est_enqueued.min
238.84 ± 8% -33.2% 159.61 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
10787 ± 2% +13.8% 12274 sched_debug.cpu.nr_switches.avg
35242 ± 9% +32.3% 46641 ± 25% sched_debug.cpu.nr_switches.max
9139 ± 3% +16.4% 10636 sched_debug.cpu.sched_count.avg
32025 ± 10% +34.6% 43091 ± 27% sched_debug.cpu.sched_count.max
4016 ± 2% +14.7% 4606 ± 5% sched_debug.cpu.sched_count.min
2960 +38.3% 4093 sched_debug.cpu.sched_goidle.avg
11201 ± 24% +75.8% 19691 ± 26% sched_debug.cpu.sched_goidle.max
1099 ± 6% +56.9% 1725 ± 6% sched_debug.cpu.sched_goidle.min
1877 ± 10% +32.5% 2487 ± 17% sched_debug.cpu.sched_goidle.stddev
4348 ± 3% +19.3% 5188 sched_debug.cpu.ttwu_count.avg
17832 ± 11% +78.6% 31852 ± 29% sched_debug.cpu.ttwu_count.max
1699 ± 6% +28.2% 2178 ± 7% sched_debug.cpu.ttwu_count.min
1357 ± 10% -22.6% 1050 ± 4% sched_debug.cpu.ttwu_local.avg
11483 ± 5% -25.0% 8614 ± 15% sched_debug.cpu.ttwu_local.max
1979 ± 12% -36.8% 1251 ± 10% sched_debug.cpu.ttwu_local.stddev
3.941e+10 +5.0% 4.137e+10 perf-stat.i.branch-instructions
0.02 ± 50% -0.0 0.02 ± 5% perf-stat.i.branch-miss-rate%
67.94 -3.9 63.99 perf-stat.i.cache-miss-rate%
8.329e+08 -1.9% 8.17e+08 perf-stat.i.cache-misses
1.224e+09 +4.5% 1.28e+09 perf-stat.i.cache-references
6650 ± 3% +15.5% 7678 perf-stat.i.context-switches
1.64 -1.8% 1.61 perf-stat.i.cpi
2.037e+11 +2.8% 2.095e+11 perf-stat.i.cpu-cycles
257.56 -4.0% 247.13 perf-stat.i.cpu-migrations
244.94 +4.5% 255.91 perf-stat.i.cycles-between-cache-misses
1189446 ± 2% +3.2% 1227527 perf-stat.i.dTLB-load-misses
2.669e+10 +4.7% 2.794e+10 perf-stat.i.dTLB-loads
0.00 ± 7% -0.0 0.00 perf-stat.i.dTLB-store-miss-rate%
337782 +4.5% 353044 perf-stat.i.dTLB-store-misses
9.096e+09 +4.7% 9.526e+09 perf-stat.i.dTLB-stores
39.50 +2.1 41.64 perf-stat.i.iTLB-load-miss-rate%
296305 ± 2% +9.0% 323020 perf-stat.i.iTLB-load-misses
1.238e+11 +4.9% 1.299e+11 perf-stat.i.instructions
428249 ± 2% -4.4% 409553 perf-stat.i.instructions-per-iTLB-miss
0.61 +1.6% 0.62 perf-stat.i.ipc
44430 +3.8% 46121 perf-stat.i.minor-faults
54.82 +3.9 58.73 perf-stat.i.node-load-miss-rate%
68519419 ± 4% -11.7% 60479057 ± 6% perf-stat.i.node-load-misses
49879161 ± 3% -20.7% 39554915 ± 4% perf-stat.i.node-loads
44428 +3.8% 46119 perf-stat.i.page-faults
0.02 -0.0 0.01 ± 5% perf-stat.overall.branch-miss-rate%
68.03 -4.2 63.83 perf-stat.overall.cache-miss-rate%
1.65 -2.0% 1.61 perf-stat.overall.cpi
244.61 +4.8% 256.41 perf-stat.overall.cycles-between-cache-misses
30.21 +2.2 32.38 perf-stat.overall.iTLB-load-miss-rate%
417920 ± 2% -3.7% 402452 perf-stat.overall.instructions-per-iTLB-miss
0.61 +2.1% 0.62 perf-stat.overall.ipc
57.84 +2.6 60.44 perf-stat.overall.node-load-miss-rate%
3.925e+10 +5.1% 4.124e+10 perf-stat.ps.branch-instructions
8.295e+08 -1.8% 8.144e+08 perf-stat.ps.cache-misses
1.219e+09 +4.6% 1.276e+09 perf-stat.ps.cache-references
6625 ± 3% +15.4% 7648 perf-stat.ps.context-switches
2.029e+11 +2.9% 2.088e+11 perf-stat.ps.cpu-cycles
256.82 -4.2% 246.09 perf-stat.ps.cpu-migrations
1184763 ± 2% +3.3% 1223366 perf-stat.ps.dTLB-load-misses
2.658e+10 +4.8% 2.786e+10 perf-stat.ps.dTLB-loads
336658 +4.5% 351710 perf-stat.ps.dTLB-store-misses
9.059e+09 +4.8% 9.497e+09 perf-stat.ps.dTLB-stores
295140 ± 2% +9.0% 321824 perf-stat.ps.iTLB-load-misses
1.233e+11 +5.0% 1.295e+11 perf-stat.ps.instructions
44309 +3.7% 45933 perf-stat.ps.minor-faults
68208972 ± 4% -11.6% 60272675 ± 6% perf-stat.ps.node-load-misses
49689740 ± 3% -20.7% 39401789 ± 4% perf-stat.ps.node-loads
44308 +3.7% 45932 perf-stat.ps.page-faults
3.732e+13 +5.1% 3.922e+13 perf-stat.total.instructions
14949 ± 2% +14.5% 17124 ± 11% softirqs.CPU0.SCHED
9940 +37.8% 13700 ± 24% softirqs.CPU1.SCHED
9370 ± 2% +28.2% 12014 ± 16% softirqs.CPU10.SCHED
17637 ± 2% -16.5% 14733 ± 16% softirqs.CPU101.SCHED
17846 ± 3% -17.4% 14745 ± 16% softirqs.CPU103.SCHED
9552 +24.7% 11916 ± 17% softirqs.CPU11.SCHED
9210 ± 5% +27.9% 11784 ± 16% softirqs.CPU12.SCHED
9378 ± 3% +27.7% 11974 ± 16% softirqs.CPU13.SCHED
9164 ± 2% +29.4% 11856 ± 18% softirqs.CPU14.SCHED
9215 +21.2% 11170 ± 19% softirqs.CPU15.SCHED
9118 ± 2% +29.1% 11772 ± 16% softirqs.CPU16.SCHED
9413 +29.2% 12165 ± 18% softirqs.CPU17.SCHED
9309 ± 2% +29.9% 12097 ± 17% softirqs.CPU18.SCHED
9423 +26.1% 11880 ± 15% softirqs.CPU19.SCHED
9010 ± 7% +37.8% 12420 ± 18% softirqs.CPU2.SCHED
9382 ± 3% +27.0% 11916 ± 15% softirqs.CPU20.SCHED
9102 ± 4% +30.0% 11830 ± 16% softirqs.CPU21.SCHED
9543 ± 3% +23.4% 11780 ± 18% softirqs.CPU22.SCHED
8998 ± 5% +29.2% 11630 ± 18% softirqs.CPU24.SCHED
9254 ± 2% +23.9% 11462 ± 19% softirqs.CPU25.SCHED
18450 ± 4% -16.9% 15341 ± 16% softirqs.CPU26.SCHED
17551 ± 4% -14.8% 14956 ± 13% softirqs.CPU27.SCHED
17575 ± 4% -14.6% 15010 ± 14% softirqs.CPU28.SCHED
17515 ± 5% -14.2% 15021 ± 13% softirqs.CPU29.SCHED
17715 ± 2% -16.1% 14856 ± 13% softirqs.CPU30.SCHED
17754 ± 4% -16.1% 14904 ± 13% softirqs.CPU31.SCHED
17675 ± 2% -17.0% 14679 ± 21% softirqs.CPU32.SCHED
17625 ± 2% -16.0% 14813 ± 13% softirqs.CPU34.SCHED
17619 ± 2% -14.7% 15024 ± 14% softirqs.CPU35.SCHED
17887 ± 3% -17.0% 14841 ± 14% softirqs.CPU36.SCHED
17658 ± 3% -16.3% 14771 ± 12% softirqs.CPU38.SCHED
17501 ± 2% -15.3% 14816 ± 14% softirqs.CPU39.SCHED
9360 ± 2% +25.4% 11740 ± 14% softirqs.CPU4.SCHED
17699 ± 4% -16.2% 14827 ± 14% softirqs.CPU42.SCHED
17580 ± 3% -16.5% 14679 ± 15% softirqs.CPU43.SCHED
17658 ± 3% -17.1% 14644 ± 14% softirqs.CPU44.SCHED
17452 ± 4% -14.0% 15001 ± 15% softirqs.CPU46.SCHED
17599 ± 4% -17.4% 14544 ± 14% softirqs.CPU47.SCHED
17792 ± 3% -16.5% 14864 ± 14% softirqs.CPU48.SCHED
17333 ± 2% -16.7% 14445 ± 14% softirqs.CPU49.SCHED
9483 +32.3% 12547 ± 24% softirqs.CPU5.SCHED
17842 ± 3% -15.9% 14997 ± 16% softirqs.CPU51.SCHED
9051 ± 2% +23.3% 11160 ± 13% softirqs.CPU52.SCHED
9385 ± 3% +25.2% 11752 ± 16% softirqs.CPU53.SCHED
9446 ± 6% +24.9% 11798 ± 14% softirqs.CPU54.SCHED
10006 ± 6% +22.4% 12249 ± 14% softirqs.CPU55.SCHED
9657 +22.0% 11780 ± 16% softirqs.CPU57.SCHED
9399 +27.5% 11980 ± 15% softirqs.CPU58.SCHED
9234 ± 3% +27.7% 11795 ± 14% softirqs.CPU59.SCHED
9726 ± 6% +24.0% 12062 ± 16% softirqs.CPU6.SCHED
9165 ± 2% +23.7% 11342 ± 14% softirqs.CPU60.SCHED
9357 ± 2% +25.8% 11774 ± 15% softirqs.CPU61.SCHED
9406 ± 3% +25.2% 11780 ± 16% softirqs.CPU62.SCHED
9489 +23.2% 11688 ± 15% softirqs.CPU63.SCHED
9399 ± 2% +23.5% 11604 ± 16% softirqs.CPU65.SCHED
8950 ± 2% +31.6% 11774 ± 16% softirqs.CPU66.SCHED
9260 +21.7% 11267 ± 19% softirqs.CPU67.SCHED
9187 +27.1% 11672 ± 17% softirqs.CPU68.SCHED
9443 ± 2% +25.5% 11847 ± 17% softirqs.CPU69.SCHED
9144 ± 3% +28.0% 11706 ± 16% softirqs.CPU7.SCHED
9276 ± 2% +28.0% 11871 ± 17% softirqs.CPU70.SCHED
9494 +21.4% 11526 ± 14% softirqs.CPU71.SCHED
9124 ± 3% +27.8% 11657 ± 17% softirqs.CPU72.SCHED
9189 ± 3% +25.9% 11568 ± 16% softirqs.CPU73.SCHED
9392 ± 2% +23.7% 11619 ± 16% softirqs.CPU74.SCHED
17821 ± 3% -14.7% 15197 ± 17% softirqs.CPU78.SCHED
17581 ± 2% -15.7% 14827 ± 15% softirqs.CPU79.SCHED
9123 +28.2% 11695 ± 15% softirqs.CPU8.SCHED
17524 ± 2% -16.7% 14601 ± 14% softirqs.CPU80.SCHED
17644 ± 3% -16.2% 14782 ± 14% softirqs.CPU81.SCHED
17705 ± 3% -18.6% 14414 ± 22% softirqs.CPU84.SCHED
17679 ± 2% -14.1% 15185 ± 11% softirqs.CPU85.SCHED
17434 ± 3% -15.5% 14724 ± 14% softirqs.CPU86.SCHED
17409 ± 2% -15.0% 14794 ± 13% softirqs.CPU87.SCHED
17470 ± 3% -15.7% 14730 ± 13% softirqs.CPU88.SCHED
17748 ± 4% -17.1% 14721 ± 12% softirqs.CPU89.SCHED
9323 +28.0% 11929 ± 17% softirqs.CPU9.SCHED
17471 ± 2% -16.9% 14525 ± 13% softirqs.CPU90.SCHED
17900 ± 3% -17.0% 14850 ± 14% softirqs.CPU94.SCHED
17599 ± 4% -17.4% 14544 ± 15% softirqs.CPU95.SCHED
17697 ± 4% -17.7% 14569 ± 13% softirqs.CPU96.SCHED
17561 ± 3% -15.1% 14901 ± 13% softirqs.CPU97.SCHED
17404 ± 3% -16.1% 14601 ± 13% softirqs.CPU98.SCHED
17802 ± 3% -19.4% 14344 ± 15% softirqs.CPU99.SCHED
1310 ± 10% -17.0% 1088 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
3427 +13.3% 3883 ± 9% interrupts.CPU10.CAL:Function_call_interrupts
736.50 ± 20% +34.4% 989.75 ± 17% interrupts.CPU100.RES:Rescheduling_interrupts
3421 ± 3% +14.6% 3921 ± 9% interrupts.CPU101.CAL:Function_call_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.NMI:Non-maskable_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.PMI:Performance_monitoring_interrupts
629.50 ± 19% +83.2% 1153 ± 46% interrupts.CPU101.RES:Rescheduling_interrupts
661.75 ± 14% +25.7% 832.00 ± 13% interrupts.CPU102.RES:Rescheduling_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.NMI:Non-maskable_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.PMI:Performance_monitoring_interrupts
3460 +12.1% 3877 ± 9% interrupts.CPU11.CAL:Function_call_interrupts
691.50 ± 7% +41.0% 975.00 ± 32% interrupts.CPU19.RES:Rescheduling_interrupts
3413 ± 2% +13.4% 3870 ± 10% interrupts.CPU20.CAL:Function_call_interrupts
3413 ± 2% +13.4% 3871 ± 10% interrupts.CPU22.CAL:Function_call_interrupts
863.00 ± 36% +45.3% 1254 ± 24% interrupts.CPU23.RES:Rescheduling_interrupts
659.75 ± 12% +83.4% 1209 ± 20% interrupts.CPU26.RES:Rescheduling_interrupts
615.00 ± 10% +87.8% 1155 ± 14% interrupts.CPU27.RES:Rescheduling_interrupts
663.75 ± 5% +67.9% 1114 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
3421 ± 4% +13.4% 3879 ± 9% interrupts.CPU29.CAL:Function_call_interrupts
805.25 ± 16% +33.0% 1071 ± 15% interrupts.CPU29.RES:Rescheduling_interrupts
3482 ± 3% +11.0% 3864 ± 8% interrupts.CPU3.CAL:Function_call_interrupts
819.75 ± 19% +48.4% 1216 ± 12% interrupts.CPU30.RES:Rescheduling_interrupts
777.25 ± 8% +31.6% 1023 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
844.50 ± 25% +41.7% 1196 ± 20% interrupts.CPU32.RES:Rescheduling_interrupts
722.75 ± 14% +94.2% 1403 ± 26% interrupts.CPU33.RES:Rescheduling_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.NMI:Non-maskable_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.PMI:Performance_monitoring_interrupts
781.75 ± 9% +45.3% 1136 ± 27% interrupts.CPU34.RES:Rescheduling_interrupts
735.50 ± 9% +33.3% 980.75 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
691.75 ± 10% +41.6% 979.50 ± 13% interrupts.CPU36.RES:Rescheduling_interrupts
727.00 ± 16% +47.7% 1074 ± 15% interrupts.CPU37.RES:Rescheduling_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.NMI:Non-maskable_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.PMI:Performance_monitoring_interrupts
708.75 ± 25% +62.6% 1152 ± 22% interrupts.CPU38.RES:Rescheduling_interrupts
666.50 ± 7% +57.8% 1052 ± 13% interrupts.CPU39.RES:Rescheduling_interrupts
765.75 ± 11% +25.2% 958.75 ± 14% interrupts.CPU4.RES:Rescheduling_interrupts
3395 ± 2% +15.1% 3908 ± 10% interrupts.CPU40.CAL:Function_call_interrupts
770.00 ± 16% +45.3% 1119 ± 18% interrupts.CPU40.RES:Rescheduling_interrupts
740.50 ± 26% +61.9% 1198 ± 19% interrupts.CPU41.RES:Rescheduling_interrupts
3459 ± 2% +12.9% 3905 ± 11% interrupts.CPU42.CAL:Function_call_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.NMI:Non-maskable_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.PMI:Performance_monitoring_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.NMI:Non-maskable_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.PMI:Performance_monitoring_interrupts
686.25 ± 9% +48.4% 1018 ± 10% interrupts.CPU44.RES:Rescheduling_interrupts
702.00 ± 15% +38.6% 973.25 ± 5% interrupts.CPU45.RES:Rescheduling_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.NMI:Non-maskable_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
732.75 ± 6% +51.9% 1113 ± 7% interrupts.CPU46.RES:Rescheduling_interrupts
775.50 ± 17% +41.3% 1095 ± 6% interrupts.CPU47.RES:Rescheduling_interrupts
670.75 ± 5% +60.7% 1078 ± 6% interrupts.CPU48.RES:Rescheduling_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.NMI:Non-maskable_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.PMI:Performance_monitoring_interrupts
694.75 ± 12% +25.8% 874.00 ± 11% interrupts.CPU49.RES:Rescheduling_interrupts
686.00 ± 9% +52.0% 1042 ± 20% interrupts.CPU50.RES:Rescheduling_interrupts
3361 +17.2% 3938 ± 9% interrupts.CPU51.CAL:Function_call_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.NMI:Non-maskable_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
638.75 ± 12% +28.6% 821.25 ± 15% interrupts.CPU54.RES:Rescheduling_interrupts
677.50 ± 8% +51.8% 1028 ± 29% interrupts.CPU58.RES:Rescheduling_interrupts
3465 ± 2% +12.0% 3880 ± 9% interrupts.CPU6.CAL:Function_call_interrupts
641.25 ± 2% +26.1% 808.75 ± 10% interrupts.CPU60.RES:Rescheduling_interrupts
599.75 ± 2% +45.6% 873.50 ± 8% interrupts.CPU62.RES:Rescheduling_interrupts
661.50 ± 9% +52.4% 1008 ± 27% interrupts.CPU63.RES:Rescheduling_interrupts
611.00 ± 12% +31.1% 801.00 ± 13% interrupts.CPU69.RES:Rescheduling_interrupts
3507 ± 2% +10.8% 3888 ± 9% interrupts.CPU7.CAL:Function_call_interrupts
664.00 ± 5% +32.3% 878.50 ± 23% interrupts.CPU70.RES:Rescheduling_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.NMI:Non-maskable_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.PMI:Performance_monitoring_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.NMI:Non-maskable_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.PMI:Performance_monitoring_interrupts
751.50 ± 15% +88.0% 1413 ± 37% interrupts.CPU78.RES:Rescheduling_interrupts
725.50 ± 12% +82.9% 1327 ± 36% interrupts.CPU79.RES:Rescheduling_interrupts
714.00 ± 18% +33.2% 951.00 ± 15% interrupts.CPU80.RES:Rescheduling_interrupts
706.25 ± 19% +55.6% 1098 ± 27% interrupts.CPU82.RES:Rescheduling_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.NMI:Non-maskable_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.PMI:Performance_monitoring_interrupts
666.75 ± 15% +37.3% 915.50 ± 4% interrupts.CPU83.RES:Rescheduling_interrupts
782.50 ± 26% +57.6% 1233 ± 21% interrupts.CPU84.RES:Rescheduling_interrupts
622.75 ± 12% +77.8% 1107 ± 17% interrupts.CPU85.RES:Rescheduling_interrupts
3465 ± 3% +13.5% 3933 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
714.75 ± 14% +47.0% 1050 ± 10% interrupts.CPU86.RES:Rescheduling_interrupts
3519 ± 2% +11.7% 3929 ± 9% interrupts.CPU87.CAL:Function_call_interrupts
582.75 ± 10% +54.2% 898.75 ± 11% interrupts.CPU87.RES:Rescheduling_interrupts
713.00 ± 10% +36.6% 974.25 ± 11% interrupts.CPU88.RES:Rescheduling_interrupts
690.50 ± 13% +53.0% 1056 ± 13% interrupts.CPU89.RES:Rescheduling_interrupts
3477 +11.0% 3860 ± 8% interrupts.CPU9.CAL:Function_call_interrupts
684.50 ± 14% +39.7% 956.25 ± 11% interrupts.CPU90.RES:Rescheduling_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.NMI:Non-maskable_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.PMI:Performance_monitoring_interrupts
649.00 ± 13% +54.3% 1001 ± 6% interrupts.CPU91.RES:Rescheduling_interrupts
674.25 ± 21% +39.5% 940.25 ± 11% interrupts.CPU92.RES:Rescheduling_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.NMI:Non-maskable_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.PMI:Performance_monitoring_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.NMI:Non-maskable_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.PMI:Performance_monitoring_interrupts
685.75 ± 14% +38.0% 946.50 ± 9% interrupts.CPU96.RES:Rescheduling_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.NMI:Non-maskable_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.PMI:Performance_monitoring_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.NMI:Non-maskable_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.PMI:Performance_monitoring_interrupts
596.25 ± 11% +81.8% 1083 ± 9% interrupts.CPU98.RES:Rescheduling_interrupts
674.75 ± 17% +43.7% 969.50 ± 5% interrupts.CPU99.RES:Rescheduling_interrupts
78.25 ± 13% +21.4% 95.00 ± 10% interrupts.IWI:IRQ_work_interrupts
85705 ± 6% +26.0% 107990 ± 6% interrupts.RES:Rescheduling_interrupts
vm-scalability.throughput
4.55e+07 +-+--------------------------------------------------------------+
| O |
4.5e+07 +-+ O O O O O O O |
O O O O O O |
| O O O O |
4.45e+07 +-+ O |
| |
4.4e+07 +-+ .+.|
| .+.+ |
4.35e+07 +-+ +. .+. +. .+. .+. + |
|.+.+. : + +. + + +.+..+.+ + + +.+ +. + |
| +. : +.+ + + + + + + + |
4.3e+07 +-+ + + + + |
| |
4.25e+07 +-+--------------------------------------------------------------+
vm-scalability.time.user_time
13000 +-+-----------------------------------------------------------------+
O O O |
12800 +-+ |
| O O |
| O O O O O O O O |
12600 +-O O O O O |
| O |
12400 +-+ |
| |
12200 +-+ |
| +.. |
| + .+. +.+.+. .+. .+.+.+. .+.+.|
12000 +-+.+ +.+. .+ +.+.+..+.+.+ : +. + +.+.+.. + |
| + + : + |
11800 +-+-----------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
8050 O-+-------------------O----------------------------------------------+
| O O O O O O |
8000 +-O O O O O O O O O |
| O O |
7950 +-+ |
| |
7900 +-+ |
| |
7850 +-+ |
| + |
7800 +-+ +..+.+ :: + +. .+ +. |
|. + : : : + : + : +. .+. .+..+ : .+. + +.|
7750 +-+.+ : : :.. + : +. : + + : .+ +..+ |
| +.+ + + +.+..+ + |
7700 +-+------------------------------------------------------------------+
vm-scalability.time.involuntary_context_switches
190000 +-+----------------------------------------------------------------+
| +. + +.+ .+.|
180000 +-+ : +. + + + +. .+ .. + .+ |
| .+. : +.+ +.+.+ .+. : : : + : + + |
| + +. .+ + .+ +. : : : : + |
170000 +-++ +. +. + + + |
| + |
160000 +-+ |
| |
150000 +-+ |
| O O O O O O O |
| O O O |
140000 O-+ O O O O O O O |
| O |
130000 +-+----------------------------------------------------------------+
vm-scalability.median
430000 +-+----------O---------O---O---------------------------------------+
O O O O O O |
425000 +-O O O O O |
| O O O O O |
| |
420000 +-+ |
| + |
415000 +-+ :: + .+ |
| : : + : + .+.+ +|
410000 +-+ .+ : : .+ : + + .+..+. .+.+. .+ |
|.+.+ +: + +. .+ +.+ + +. .+. .+ +..+ |
| + + + + + + |
405000 +-+ + |
| |
400000 +-+----------------------------------------------------------------+
vm-scalability.workload
1.23e+10 +-+---O----------------------------------------------------------+
1.22e+10 +-+ |
O O O O O O O O |
1.21e+10 +-O O O O O O O O O O |
1.2e+10 +-+ |
| |
1.19e+10 +-+ |
1.18e+10 +-+ |
1.17e+10 +-+ |
| |
1.16e+10 +-+ |
1.15e+10 +-+ + + +.+ +.+ +.+ + +.+.+ +.+.|
| + + + + + + + + .. + + + + + + |
1.14e+10 +-+.+ + + +.+.+ +.+ +.+ + +.+.+.+.+ |
1.13e+10 +-+--------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
98318389 +43.0% 1.406e+08 stress-ng.schedpolicy.ops
3277346 +43.0% 4685146 stress-ng.schedpolicy.ops_per_sec
3.506e+08 ± 4% -10.3% 3.146e+08 ± 3% stress-ng.sigq.ops
11684738 ± 4% -10.3% 10485353 ± 3% stress-ng.sigq.ops_per_sec
3.628e+08 ± 6% -19.4% 2.925e+08 ± 6% stress-ng.time.involuntary_context_switches
29456 +2.8% 30285 stress-ng.time.system_time
7636655 ± 9% +46.6% 11197377 ± 27% cpuidle.C1E.usage
1111483 ± 3% -9.5% 1005829 vmstat.system.cs
22638222 ± 4% +16.5% 26370816 ± 11% meminfo.Committed_AS
28908 ± 6% +24.6% 36020 ± 16% meminfo.KernelStack
7636543 ± 9% +46.6% 11196090 ± 27% turbostat.C1E
3.46 ± 16% -61.2% 1.35 ± 7% turbostat.Pkg%pc2
217.54 +1.7% 221.33 turbostat.PkgWatt
13.34 ± 2% +5.8% 14.11 turbostat.RAMWatt
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.active_objs
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.num_objs
28089 ± 12% -33.0% 18833 ± 22% slabinfo.pool_workqueue.active_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.active_slabs
28089 ± 12% -32.6% 18925 ± 21% slabinfo.pool_workqueue.num_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.num_slabs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.active_objs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.num_objs
63348 ± 6% -20.7% 50261 ± 4% softirqs.CPU14.SCHED
44394 ± 4% +21.4% 53880 ± 8% softirqs.CPU42.SCHED
52246 ± 7% -15.1% 44352 softirqs.CPU47.SCHED
58350 ± 4% -11.0% 51914 ± 7% softirqs.CPU6.SCHED
58009 ± 7% -23.8% 44206 ± 4% softirqs.CPU63.SCHED
49166 ± 6% +23.4% 60683 ± 9% softirqs.CPU68.SCHED
44594 ± 7% +14.3% 50951 ± 8% softirqs.CPU78.SCHED
46407 ± 9% +19.6% 55515 ± 8% softirqs.CPU84.SCHED
55555 ± 8% -15.5% 46933 ± 4% softirqs.CPU9.SCHED
198757 ± 18% +44.1% 286316 ± 9% numa-meminfo.node0.Active
189280 ± 19% +37.1% 259422 ± 7% numa-meminfo.node0.Active(anon)
110438 ± 33% +68.3% 185869 ± 16% numa-meminfo.node0.AnonHugePages
143458 ± 28% +67.7% 240547 ± 13% numa-meminfo.node0.AnonPages
12438 ± 16% +61.9% 20134 ± 37% numa-meminfo.node0.KernelStack
1004379 ± 7% +16.4% 1168764 ± 4% numa-meminfo.node0.MemUsed
357111 ± 24% -41.6% 208655 ± 29% numa-meminfo.node1.Active
330094 ± 22% -39.6% 199339 ± 32% numa-meminfo.node1.Active(anon)
265924 ± 25% -52.2% 127138 ± 46% numa-meminfo.node1.AnonHugePages
314059 ± 22% -49.6% 158305 ± 36% numa-meminfo.node1.AnonPages
15386 ± 16% -25.1% 11525 ± 15% numa-meminfo.node1.KernelStack
1200805 ± 11% -18.6% 977595 ± 7% numa-meminfo.node1.MemUsed
965.50 ± 15% -29.3% 682.25 ± 43% numa-meminfo.node1.Mlocked
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_active_anon
35393 ± 27% +68.9% 59793 ± 12% numa-vmstat.node0.nr_anon_pages
52.75 ± 33% +71.1% 90.25 ± 15% numa-vmstat.node0.nr_anon_transparent_hugepages
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_inactive_file
11555 ± 22% +68.9% 19513 ± 41% numa-vmstat.node0.nr_kernel_stack
550.25 ±162% +207.5% 1691 ± 48% numa-vmstat.node0.nr_written
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_zone_active_anon
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_zone_inactive_file
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_active_anon
78146 ± 23% -49.5% 39455 ± 37% numa-vmstat.node1.nr_anon_pages
129.00 ± 25% -52.3% 61.50 ± 47% numa-vmstat.node1.nr_anon_transparent_hugepages
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_inactive_file
14322 ± 11% -21.1% 11304 ± 11% numa-vmstat.node1.nr_kernel_stack
241.00 ± 15% -29.5% 170.00 ± 43% numa-vmstat.node1.nr_mlock
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_zone_active_anon
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_zone_inactive_file
0.81 ± 5% +0.2 0.99 ± 10% perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
0.60 ± 11% +0.2 0.83 ± 9% perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
1.73 ± 9% +0.3 2.05 ± 8% perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
3.92 ± 5% +0.6 4.49 ± 7% perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
4.17 ± 4% +0.6 4.78 ± 7% perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
5.72 ± 3% +0.7 6.43 ± 7% perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.24 ± 54% -0.2 0.07 ±131% perf-profile.children.cycles-pp.ext4_inode_csum_set
0.45 ± 3% +0.1 0.56 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.84 ± 5% +0.2 1.03 ± 9% perf-profile.children.cycles-pp.task_rq_lock
0.66 ± 8% +0.2 0.88 ± 7% perf-profile.children.cycles-pp.___might_sleep
1.83 ± 9% +0.3 2.16 ± 8% perf-profile.children.cycles-pp.__might_fault
4.04 ± 5% +0.6 4.62 ± 7% perf-profile.children.cycles-pp.task_sched_runtime
4.24 ± 4% +0.6 4.87 ± 7% perf-profile.children.cycles-pp.cpu_clock_sample
5.77 ± 3% +0.7 6.48 ± 7% perf-profile.children.cycles-pp.posix_cpu_timer_get
0.22 ± 11% +0.1 0.28 ± 15% perf-profile.self.cycles-pp.cpu_clock_sample
0.47 ± 7% +0.1 0.55 ± 5% perf-profile.self.cycles-pp.update_curr
0.28 ± 5% +0.1 0.38 ± 14% perf-profile.self.cycles-pp.task_rq_lock
0.42 ± 3% +0.1 0.53 ± 4% perf-profile.self.cycles-pp.__might_sleep
0.50 ± 5% +0.1 0.61 ± 11% perf-profile.self.cycles-pp.task_sched_runtime
0.63 ± 9% +0.2 0.85 ± 7% perf-profile.self.cycles-pp.___might_sleep
9180611 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
7951 ± 6% -52.5% 3773 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
321306 ± 39% -44.2% 179273 sched_debug.cfs_rq:/.load.max
9180613 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
16622378 +20.0% 19940069 ± 7% sched_debug.cfs_rq:/.min_vruntime.avg
18123901 +19.7% 21686545 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
14338218 ± 3% +27.4% 18267927 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
0.17 ± 16% +23.4% 0.21 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
319990 ± 39% -44.6% 177347 sched_debug.cfs_rq:/.runnable_weight.max
-2067420 -33.5% -1375445 sched_debug.cfs_rq:/.spread0.min
1033 ± 8% -13.7% 891.85 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.max
93676 ± 16% -29.0% 66471 ± 17% sched_debug.cpu.avg_idle.min
10391 ± 52% +118.9% 22750 ± 15% sched_debug.cpu.curr->pid.avg
14393 ± 35% +113.2% 30689 ± 17% sched_debug.cpu.curr->pid.max
3041 ± 38% +161.8% 7963 ± 11% sched_debug.cpu.curr->pid.stddev
3.38 ± 6% -16.3% 2.83 ± 5% sched_debug.cpu.nr_running.max
2412687 ± 4% -16.0% 2027251 ± 3% sched_debug.cpu.nr_switches.avg
4038819 ± 3% -20.2% 3223112 ± 5% sched_debug.cpu.nr_switches.max
834203 ± 17% -37.8% 518798 ± 27% sched_debug.cpu.nr_switches.stddev
45.85 ± 13% +41.2% 64.75 ± 18% sched_debug.cpu.nr_uninterruptible.max
1937209 ± 2% +58.5% 3070891 ± 3% sched_debug.cpu.sched_count.min
1074023 ± 13% -57.9% 451958 ± 12% sched_debug.cpu.sched_count.stddev
1283769 ± 7% +65.1% 2118907 ± 7% sched_debug.cpu.yld_count.min
714244 ± 5% -51.9% 343373 ± 22% sched_debug.cpu.yld_count.stddev
12.54 ± 9% -18.8% 10.18 ± 15% perf-stat.i.MPKI
1.011e+10 +2.6% 1.038e+10 perf-stat.i.branch-instructions
13.22 ± 5% +2.5 15.75 ± 3% perf-stat.i.cache-miss-rate%
21084021 ± 6% +33.9% 28231058 ± 6% perf-stat.i.cache-misses
1143861 ± 5% -12.1% 1005721 ± 6% perf-stat.i.context-switches
1.984e+11 +1.8% 2.02e+11 perf-stat.i.cpu-cycles
1.525e+10 +1.3% 1.544e+10 perf-stat.i.dTLB-loads
65.46 -2.7 62.76 ± 3% perf-stat.i.iTLB-load-miss-rate%
20360883 ± 4% +10.5% 22500874 ± 4% perf-stat.i.iTLB-loads
4.963e+10 +2.0% 5.062e+10 perf-stat.i.instructions
181557 -2.4% 177113 perf-stat.i.msec
5350122 ± 8% +26.5% 6765332 ± 7% perf-stat.i.node-load-misses
4264320 ± 3% +24.8% 5321600 ± 4% perf-stat.i.node-store-misses
6.12 ± 5% +1.5 7.60 ± 2% perf-stat.overall.cache-miss-rate%
7646 ± 6% -17.7% 6295 ± 3% perf-stat.overall.cycles-between-cache-misses
69.29 -1.1 68.22 perf-stat.overall.iTLB-load-miss-rate%
61.11 ± 2% +6.6 67.71 ± 5% perf-stat.overall.node-load-miss-rate%
74.82 +1.8 76.58 perf-stat.overall.node-store-miss-rate%
1.044e+10 +1.8% 1.063e+10 perf-stat.ps.branch-instructions
26325951 ± 6% +22.9% 32366684 ± 2% perf-stat.ps.cache-misses
1115530 ± 3% -9.5% 1009780 perf-stat.ps.context-switches
1.536e+10 +1.0% 1.552e+10 perf-stat.ps.dTLB-loads
44718416 ± 2% +5.8% 47308605 ± 3% perf-stat.ps.iTLB-load-misses
19831973 ± 4% +11.1% 22040029 ± 4% perf-stat.ps.iTLB-loads
5.064e+10 +1.4% 5.137e+10 perf-stat.ps.instructions
5454694 ± 9% +26.4% 6892365 ± 6% perf-stat.ps.node-load-misses
4263688 ± 4% +24.9% 5325279 ± 4% perf-stat.ps.node-store-misses
3.001e+13 +1.7% 3.052e+13 perf-stat.total.instructions
18550 -74.9% 4650 ±173% interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
7642 ± 9% -20.4% 6086 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
4376 ± 22% -75.4% 1077 ± 41% interrupts.CPU0.TLB:TLB_shootdowns
8402 ± 5% -19.0% 6806 interrupts.CPU1.CAL:Function_call_interrupts
4559 ± 20% -73.7% 1199 ± 15% interrupts.CPU1.TLB:TLB_shootdowns
8423 ± 4% -20.2% 6725 ± 2% interrupts.CPU10.CAL:Function_call_interrupts
4536 ± 14% -75.0% 1135 ± 20% interrupts.CPU10.TLB:TLB_shootdowns
8303 ± 3% -18.2% 6795 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
4404 ± 11% -71.6% 1250 ± 35% interrupts.CPU11.TLB:TLB_shootdowns
8491 ± 6% -21.3% 6683 interrupts.CPU12.CAL:Function_call_interrupts
4723 ± 20% -77.2% 1077 ± 17% interrupts.CPU12.TLB:TLB_shootdowns
8403 ± 5% -20.3% 6700 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
4557 ± 19% -74.2% 1175 ± 22% interrupts.CPU13.TLB:TLB_shootdowns
8459 ± 4% -18.6% 6884 interrupts.CPU14.CAL:Function_call_interrupts
4559 ± 18% -69.8% 1376 ± 13% interrupts.CPU14.TLB:TLB_shootdowns
8305 ± 7% -17.7% 6833 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
4261 ± 25% -67.6% 1382 ± 24% interrupts.CPU15.TLB:TLB_shootdowns
8277 ± 5% -19.1% 6696 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
4214 ± 22% -69.6% 1282 ± 8% interrupts.CPU16.TLB:TLB_shootdowns
8258 ± 5% -18.9% 6694 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
4461 ± 19% -74.1% 1155 ± 21% interrupts.CPU17.TLB:TLB_shootdowns
8457 ± 6% -20.6% 6717 interrupts.CPU18.CAL:Function_call_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.NMI:Non-maskable_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.PMI:Performance_monitoring_interrupts
4731 ± 22% -77.2% 1078 ± 10% interrupts.CPU18.TLB:TLB_shootdowns
8160 ± 5% -18.1% 6684 interrupts.CPU19.CAL:Function_call_interrupts
4311 ± 20% -74.2% 1114 ± 13% interrupts.CPU19.TLB:TLB_shootdowns
8464 ± 2% -18.2% 6927 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
4938 ± 14% -70.5% 1457 ± 18% interrupts.CPU2.TLB:TLB_shootdowns
8358 ± 6% -19.7% 6715 ± 3% interrupts.CPU20.CAL:Function_call_interrupts
4567 ± 24% -74.6% 1160 ± 35% interrupts.CPU20.TLB:TLB_shootdowns
8460 ± 4% -22.3% 6577 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
4514 ± 18% -76.0% 1084 ± 22% interrupts.CPU21.TLB:TLB_shootdowns
6677 ± 6% +19.6% 7988 ± 9% interrupts.CPU22.CAL:Function_call_interrupts
1288 ± 14% +209.1% 3983 ± 35% interrupts.CPU22.TLB:TLB_shootdowns
6751 ± 2% +24.0% 8370 ± 9% interrupts.CPU23.CAL:Function_call_interrupts
1037 ± 29% +323.0% 4388 ± 36% interrupts.CPU23.TLB:TLB_shootdowns
6844 +20.6% 8251 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
1205 ± 17% +229.2% 3967 ± 40% interrupts.CPU24.TLB:TLB_shootdowns
6880 +21.9% 8389 ± 7% interrupts.CPU25.CAL:Function_call_interrupts
1228 ± 19% +245.2% 4240 ± 35% interrupts.CPU25.TLB:TLB_shootdowns
6494 ± 8% +25.1% 8123 ± 9% interrupts.CPU26.CAL:Function_call_interrupts
1141 ± 13% +262.5% 4139 ± 32% interrupts.CPU26.TLB:TLB_shootdowns
6852 +19.2% 8166 ± 7% interrupts.CPU27.CAL:Function_call_interrupts
1298 ± 8% +197.1% 3857 ± 31% interrupts.CPU27.TLB:TLB_shootdowns
6563 ± 6% +25.2% 8214 ± 8% interrupts.CPU28.CAL:Function_call_interrupts
1176 ± 8% +237.1% 3964 ± 33% interrupts.CPU28.TLB:TLB_shootdowns
6842 ± 2% +21.4% 8308 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
1271 ± 11% +223.8% 4118 ± 33% interrupts.CPU29.TLB:TLB_shootdowns
8418 ± 3% -21.1% 6643 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
4677 ± 11% -75.1% 1164 ± 16% interrupts.CPU3.TLB:TLB_shootdowns
6798 ± 3% +21.8% 8284 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
1219 ± 12% +236.3% 4102 ± 30% interrupts.CPU30.TLB:TLB_shootdowns
6503 ± 4% +25.9% 8186 ± 6% interrupts.CPU31.CAL:Function_call_interrupts
1046 ± 15% +289.1% 4072 ± 32% interrupts.CPU31.TLB:TLB_shootdowns
6949 ± 3% +17.2% 8141 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
1241 ± 23% +210.6% 3854 ± 34% interrupts.CPU32.TLB:TLB_shootdowns
1487 ± 26% +161.6% 3889 ± 46% interrupts.CPU33.TLB:TLB_shootdowns
1710 ± 44% +140.1% 4105 ± 36% interrupts.CPU34.TLB:TLB_shootdowns
6957 ± 2% +15.2% 8012 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
1165 ± 8% +223.1% 3765 ± 38% interrupts.CPU35.TLB:TLB_shootdowns
1423 ± 24% +173.4% 3892 ± 33% interrupts.CPU36.TLB:TLB_shootdowns
1279 ± 29% +224.2% 4148 ± 39% interrupts.CPU37.TLB:TLB_shootdowns
1301 ± 20% +226.1% 4244 ± 35% interrupts.CPU38.TLB:TLB_shootdowns
6906 ± 2% +18.5% 8181 ± 8% interrupts.CPU39.CAL:Function_call_interrupts
368828 ± 20% +96.2% 723710 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1438 ± 12% +174.8% 3951 ± 33% interrupts.CPU39.TLB:TLB_shootdowns
8399 ± 5% -19.2% 6788 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
4567 ± 18% -72.7% 1245 ± 28% interrupts.CPU4.TLB:TLB_shootdowns
6895 +22.4% 8439 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
1233 ± 11% +247.1% 4280 ± 36% interrupts.CPU40.TLB:TLB_shootdowns
6819 ± 2% +21.3% 8274 ± 9% interrupts.CPU41.CAL:Function_call_interrupts
1260 ± 14% +207.1% 3871 ± 38% interrupts.CPU41.TLB:TLB_shootdowns
1301 ± 9% +204.7% 3963 ± 36% interrupts.CPU42.TLB:TLB_shootdowns
6721 ± 3% +22.3% 8221 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
1237 ± 19% +224.8% 4017 ± 35% interrupts.CPU43.TLB:TLB_shootdowns
8422 ± 8% -22.7% 6506 ± 5% interrupts.CPU44.CAL:Function_call_interrupts
15261375 ± 7% -7.8% 14064176 interrupts.CPU44.LOC:Local_timer_interrupts
4376 ± 25% -75.7% 1063 ± 26% interrupts.CPU44.TLB:TLB_shootdowns
8451 ± 5% -23.7% 6448 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
4351 ± 18% -74.9% 1094 ± 12% interrupts.CPU45.TLB:TLB_shootdowns
8705 ± 6% -21.2% 6860 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
4787 ± 20% -69.5% 1462 ± 16% interrupts.CPU46.TLB:TLB_shootdowns
8334 ± 3% -18.9% 6763 interrupts.CPU47.CAL:Function_call_interrupts
4126 ± 10% -71.3% 1186 ± 18% interrupts.CPU47.TLB:TLB_shootdowns
8578 ± 4% -21.7% 6713 interrupts.CPU48.CAL:Function_call_interrupts
4520 ± 15% -74.5% 1154 ± 23% interrupts.CPU48.TLB:TLB_shootdowns
8450 ± 8% -18.8% 6863 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
4494 ± 24% -66.5% 1505 ± 22% interrupts.CPU49.TLB:TLB_shootdowns
8307 ± 4% -18.0% 6816 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
4429 ± 17% -69.8% 1339 ± 20% interrupts.CPU5.TLB:TLB_shootdowns
8444 ± 4% -21.7% 6613 interrupts.CPU50.CAL:Function_call_interrupts
4282 ± 16% -76.0% 1029 ± 17% interrupts.CPU50.TLB:TLB_shootdowns
8750 ± 6% -22.2% 6803 interrupts.CPU51.CAL:Function_call_interrupts
4755 ± 20% -73.1% 1277 ± 15% interrupts.CPU51.TLB:TLB_shootdowns
8478 ± 6% -20.2% 6766 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
4337 ± 20% -72.6% 1190 ± 22% interrupts.CPU52.TLB:TLB_shootdowns
8604 ± 7% -21.5% 6750 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
4649 ± 17% -74.3% 1193 ± 23% interrupts.CPU53.TLB:TLB_shootdowns
8317 ± 9% -19.4% 6706 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
4372 ± 12% -75.4% 1076 ± 29% interrupts.CPU54.TLB:TLB_shootdowns
8439 ± 3% -18.5% 6876 interrupts.CPU55.CAL:Function_call_interrupts
4415 ± 11% -71.6% 1254 ± 17% interrupts.CPU55.TLB:TLB_shootdowns
8869 ± 6% -22.6% 6864 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
517594 ± 13% +123.3% 1155539 ± 25% interrupts.CPU56.RES:Rescheduling_interrupts
5085 ± 22% -74.9% 1278 ± 17% interrupts.CPU56.TLB:TLB_shootdowns
8682 ± 4% -21.7% 6796 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
4808 ± 19% -74.1% 1243 ± 13% interrupts.CPU57.TLB:TLB_shootdowns
8626 ± 7% -21.8% 6746 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
4816 ± 20% -79.1% 1007 ± 28% interrupts.CPU58.TLB:TLB_shootdowns
8759 ± 8% -20.3% 6984 interrupts.CPU59.CAL:Function_call_interrupts
4840 ± 22% -70.6% 1423 ± 14% interrupts.CPU59.TLB:TLB_shootdowns
8167 ± 6% -19.0% 6615 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
4129 ± 21% -75.4% 1017 ± 24% interrupts.CPU6.TLB:TLB_shootdowns
8910 ± 4% -23.7% 6794 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
5017 ± 12% -77.8% 1113 ± 15% interrupts.CPU60.TLB:TLB_shootdowns
8689 ± 5% -21.6% 6808 interrupts.CPU61.CAL:Function_call_interrupts
4715 ± 20% -77.6% 1055 ± 19% interrupts.CPU61.TLB:TLB_shootdowns
8574 ± 4% -18.9% 6953 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
4494 ± 17% -72.3% 1244 ± 7% interrupts.CPU62.TLB:TLB_shootdowns
8865 ± 3% -25.4% 6614 ± 7% interrupts.CPU63.CAL:Function_call_interrupts
4870 ± 12% -76.8% 1130 ± 12% interrupts.CPU63.TLB:TLB_shootdowns
8724 ± 7% -20.2% 6958 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
4736 ± 16% -72.6% 1295 ± 7% interrupts.CPU64.TLB:TLB_shootdowns
8717 ± 6% -23.7% 6653 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
4626 ± 19% -76.5% 1087 ± 21% interrupts.CPU65.TLB:TLB_shootdowns
6671 +24.7% 8318 ± 9% interrupts.CPU66.CAL:Function_call_interrupts
1091 ± 8% +249.8% 3819 ± 32% interrupts.CPU66.TLB:TLB_shootdowns
6795 ± 2% +26.9% 8624 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
1098 ± 24% +299.5% 4388 ± 39% interrupts.CPU67.TLB:TLB_shootdowns
6704 ± 5% +25.8% 8431 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
1214 ± 15% +236.1% 4083 ± 36% interrupts.CPU68.TLB:TLB_shootdowns
1049 ± 15% +326.2% 4473 ± 33% interrupts.CPU69.TLB:TLB_shootdowns
8554 ± 6% -19.6% 6874 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
4753 ± 19% -71.7% 1344 ± 16% interrupts.CPU7.TLB:TLB_shootdowns
1298 ± 13% +227.4% 4249 ± 38% interrupts.CPU70.TLB:TLB_shootdowns
6976 +19.9% 8362 ± 7% interrupts.CPU71.CAL:Function_call_interrupts
1232748 ± 18% -57.3% 525824 ± 33% interrupts.CPU71.RES:Rescheduling_interrupts
1253 ± 9% +211.8% 3909 ± 31% interrupts.CPU71.TLB:TLB_shootdowns
1316 ± 22% +188.7% 3800 ± 33% interrupts.CPU72.TLB:TLB_shootdowns
6665 ± 5% +26.5% 8429 ± 8% interrupts.CPU73.CAL:Function_call_interrupts
1202 ± 13% +234.1% 4017 ± 37% interrupts.CPU73.TLB:TLB_shootdowns
6639 ± 5% +27.0% 8434 ± 8% interrupts.CPU74.CAL:Function_call_interrupts
1079 ± 16% +269.4% 3986 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
1055 ± 12% +301.2% 4235 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
7011 ± 3% +21.6% 8522 ± 8% interrupts.CPU76.CAL:Function_call_interrupts
1223 ± 13% +230.7% 4047 ± 35% interrupts.CPU76.TLB:TLB_shootdowns
6886 ± 7% +25.6% 8652 ± 10% interrupts.CPU77.CAL:Function_call_interrupts
1316 ± 16% +229.8% 4339 ± 36% interrupts.CPU77.TLB:TLB_shootdowns
7343 ± 5% +19.1% 8743 ± 9% interrupts.CPU78.CAL:Function_call_interrupts
1699 ± 37% +144.4% 4152 ± 31% interrupts.CPU78.TLB:TLB_shootdowns
7136 ± 4% +21.4% 8666 ± 9% interrupts.CPU79.CAL:Function_call_interrupts
1094 ± 13% +276.2% 4118 ± 34% interrupts.CPU79.TLB:TLB_shootdowns
8531 ± 5% -19.5% 6869 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
4764 ± 16% -71.0% 1382 ± 14% interrupts.CPU8.TLB:TLB_shootdowns
1387 ± 29% +181.8% 3910 ± 38% interrupts.CPU80.TLB:TLB_shootdowns
1114 ± 30% +259.7% 4007 ± 36% interrupts.CPU81.TLB:TLB_shootdowns
7012 +23.9% 8685 ± 8% interrupts.CPU82.CAL:Function_call_interrupts
1274 ± 12% +255.4% 4530 ± 27% interrupts.CPU82.TLB:TLB_shootdowns
6971 ± 3% +23.8% 8628 ± 9% interrupts.CPU83.CAL:Function_call_interrupts
1156 ± 18% +260.1% 4162 ± 34% interrupts.CPU83.TLB:TLB_shootdowns
7030 ± 4% +21.0% 8504 ± 8% interrupts.CPU84.CAL:Function_call_interrupts
1286 ± 23% +224.0% 4166 ± 31% interrupts.CPU84.TLB:TLB_shootdowns
7059 +22.4% 8644 ± 11% interrupts.CPU85.CAL:Function_call_interrupts
1421 ± 22% +208.8% 4388 ± 33% interrupts.CPU85.TLB:TLB_shootdowns
7018 ± 2% +22.8% 8615 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
1258 ± 8% +231.1% 4167 ± 34% interrupts.CPU86.TLB:TLB_shootdowns
1338 ± 3% +217.9% 4255 ± 31% interrupts.CPU87.TLB:TLB_shootdowns
8376 ± 4% -19.0% 6787 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
4466 ± 17% -71.2% 1286 ± 18% interrupts.CPU9.TLB:TLB_shootdowns
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched] 2802bafd8f: will-it-scale.per_thread_ops 2.0% improvement
by kernel test robot
Greetings,
FYI, we noticed a 2.0% improvement of will-it-scale.per_thread_ops due to commit:
commit: 2802bafd8f4b449471c2276f2f7a18eb17766d68 ("sched: Optimize pick_next_task()")
https://git.kernel.org/cgit/linux/kernel/git/peterz/queue.git sched/core
in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with following parameters:
nr_task: 100%
mode: thread
test: sched_yield
cpufreq_governor: performance
ucode: 0x2000064
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a thread-based variant of each test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
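For reference, the sched_yield testcase exercised here boils down to each worker thread spinning on the sched_yield() syscall and counting iterations; the per_thread_ops figures below are that count normalized per thread and per second. A minimal sketch of such a loop follows (an illustrative stand-in, not the benchmark's actual source; the run length is a placeholder, build with gcc -O2 -pthread):
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define RUN_SECONDS 5                   /* placeholder run length */
static volatile int stop;
/* Each worker yields the CPU as fast as it can and counts its iterations. */
static void *worker(void *arg)
{
        unsigned long long *ops = arg;
        while (!stop) {
                sched_yield();
                (*ops)++;
        }
        return NULL;
}
int main(void)
{
        long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t *threads = calloc(nr_cpus, sizeof(*threads));
        unsigned long long *counters = calloc(nr_cpus, sizeof(*counters));
        unsigned long long total = 0;
        if (!threads || !counters)
                return 1;
        for (long i = 0; i < nr_cpus; i++)
                pthread_create(&threads[i], NULL, worker, &counters[i]);
        sleep(RUN_SECONDS);
        stop = 1;
        for (long i = 0; i < nr_cpus; i++) {
                pthread_join(threads[i], NULL);
                total += counters[i];
        }
        printf("per_thread_ops: %llu\n", total / RUN_SECONDS / nr_cpus);
        return 0;
}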
In addition to that, the commit also has significant impact on the following tests:
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/100%/debian-x86_64-2019-09-23.cgz/lkp-skl-fpga01/sched_yield/will-it-scale/0x2000064
commit:
8629787388 ("sched/fair: Fix pick_next_task_fair() slow path")
2802bafd8f ("sched: Optimize pick_next_task()")
8629787388c77c01 2802bafd8f4b449471c2276f2f7
---------------- ---------------------------
%stddev %change %stddev
\ | \
1121358 +2.0% 1144146 will-it-scale.per_thread_ops
1.166e+08 +2.0% 1.19e+08 will-it-scale.workload
4728 ± 4% -10.6% 4226 ± 6% slabinfo.eventpoll_pwq.active_objs
4728 ± 4% -10.6% 4226 ± 6% slabinfo.eventpoll_pwq.num_objs
4088 ± 4% +6.8% 4368 ± 4% slabinfo.skbuff_head_cache.num_objs
4.25 ± 25% +8523.5% 366.50 ±170% interrupts.CPU13.RES:Rescheduling_interrupts
1488 ± 97% -97.7% 34.75 ± 87% interrupts.CPU52.RES:Rescheduling_interrupts
23.00 ±108% +1737.0% 422.50 ±147% interrupts.CPU61.RES:Rescheduling_interrupts
10.25 ± 89% +3724.4% 392.00 ±160% interrupts.CPU72.RES:Rescheduling_interrupts
34590 ± 4% +16.0% 40113 ± 12% softirqs.CPU17.RCU
29850 ± 7% +16.6% 34795 ± 15% softirqs.CPU26.RCU
32834 ± 2% +12.5% 36942 ± 11% softirqs.CPU59.RCU
30388 ± 2% +15.5% 35088 ± 13% softirqs.CPU69.RCU
28450 ± 6% +16.7% 33189 ± 13% softirqs.CPU7.RCU
31881 ± 4% +18.4% 37741 ± 13% softirqs.CPU84.RCU
0.00 +2.8e+09% 28.00 ± 18% sched_debug.cfs_rq:/.MIN_vruntime.avg
0.00 +2.4e+11% 2428 ± 25% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +1.4e+25% 259.04 ± 22% sched_debug.cfs_rq:/.MIN_vruntime.stddev
22810 ± 5% +573.5% 153632 ± 92% sched_debug.cfs_rq:/.load.max
2145 ± 9% +659.3% 16290 ± 93% sched_debug.cfs_rq:/.load.stddev
0.00 +2.8e+09% 28.00 ± 18% sched_debug.cfs_rq:/.max_vruntime.avg
0.00 +2.4e+11% 2428 ± 25% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +1.4e+25% 259.04 ± 22% sched_debug.cfs_rq:/.max_vruntime.stddev
553.67 ± 8% +34.0% 741.77 ± 14% sched_debug.cfs_rq:/.util_est_enqueued.avg
30374 ± 40% +86.5% 56651 ± 15% numa-vmstat.node0.nr_active_anon
28090 ± 41% +90.5% 53524 ± 7% numa-vmstat.node0.nr_anon_pages
2775 ± 31% +81.1% 5024 ± 19% numa-vmstat.node0.nr_inactive_anon
588.50 ± 18% +44.6% 851.25 ± 14% numa-vmstat.node0.nr_page_table_pages
30374 ± 40% +86.5% 56651 ± 15% numa-vmstat.node0.nr_zone_active_anon
2775 ± 31% +81.1% 5024 ± 19% numa-vmstat.node0.nr_zone_inactive_anon
59807 ± 21% -44.3% 33333 ± 26% numa-vmstat.node1.nr_active_anon
50347 ± 23% -50.8% 24791 ± 15% numa-vmstat.node1.nr_anon_pages
4226 ± 20% -53.6% 1962 ± 51% numa-vmstat.node1.nr_inactive_anon
747.75 ± 14% -35.1% 485.50 ± 24% numa-vmstat.node1.nr_page_table_pages
59807 ± 21% -44.3% 33333 ± 26% numa-vmstat.node1.nr_zone_active_anon
4226 ± 20% -53.6% 1962 ± 51% numa-vmstat.node1.nr_zone_inactive_anon
121684 ± 40% +86.3% 226672 ± 15% numa-meminfo.node0.Active
121544 ± 40% +86.4% 226576 ± 15% numa-meminfo.node0.Active(anon)
66420 ± 58% +122.1% 147493 ± 7% numa-meminfo.node0.AnonHugePages
112439 ± 42% +90.4% 214081 ± 7% numa-meminfo.node0.AnonPages
11302 ± 29% +79.4% 20281 ± 19% numa-meminfo.node0.Inactive
11102 ± 30% +81.0% 20097 ± 19% numa-meminfo.node0.Inactive(anon)
14296 ± 19% +26.0% 18013 ± 10% numa-meminfo.node0.Mapped
2355 ± 18% +44.6% 3405 ± 14% numa-meminfo.node0.PageTables
239300 ± 21% -44.3% 133379 ± 26% numa-meminfo.node1.Active
239256 ± 21% -44.3% 133295 ± 26% numa-meminfo.node1.Active(anon)
142814 ± 27% -56.6% 61957 ± 18% numa-meminfo.node1.AnonHugePages
201352 ± 23% -50.7% 99169 ± 16% numa-meminfo.node1.AnonPages
16931 ± 18% -52.4% 8067 ± 51% numa-meminfo.node1.Inactive
16782 ± 19% -53.0% 7888 ± 51% numa-meminfo.node1.Inactive(anon)
1123002 ± 4% -10.3% 1007152 ± 5% numa-meminfo.node1.MemUsed
2991 ± 14% -35.0% 1945 ± 24% numa-meminfo.node1.PageTables
1.52 -0.5 1.03 perf-stat.i.branch-miss-rate%
3.529e+08 -31.5% 2.417e+08 perf-stat.i.branch-misses
2.57 -1.2% 2.54 perf-stat.i.cpi
1.166e+08 +2.0% 1.189e+08 perf-stat.i.dTLB-load-misses
3.438e+10 +1.6% 3.494e+10 perf-stat.i.dTLB-loads
69.72 +13.8 83.55 ± 4% perf-stat.i.iTLB-load-miss-rate%
1.167e+08 +11.7% 1.303e+08 perf-stat.i.iTLB-load-misses
52564786 ± 2% -45.0% 28903742 ± 27% perf-stat.i.iTLB-loads
1.116e+11 +1.5% 1.132e+11 perf-stat.i.instructions
956.44 -9.0% 870.82 perf-stat.i.instructions-per-iTLB-miss
0.39 +1.3% 0.39 perf-stat.i.ipc
94.33 -1.0 93.31 perf-stat.i.node-store-miss-rate%
1.52 -0.5 1.03 perf-stat.overall.branch-miss-rate%
2.57 -1.2% 2.54 perf-stat.overall.cpi
68.94 +13.1 82.07 ± 4% perf-stat.overall.iTLB-load-miss-rate%
956.38 -9.1% 869.22 perf-stat.overall.instructions-per-iTLB-miss
0.39 +1.3% 0.39 perf-stat.overall.ipc
3.518e+08 -31.5% 2.409e+08 perf-stat.ps.branch-misses
1.162e+08 +2.0% 1.185e+08 perf-stat.ps.dTLB-load-misses
3.426e+10 +1.6% 3.482e+10 perf-stat.ps.dTLB-loads
1.163e+08 +11.7% 1.299e+08 perf-stat.ps.iTLB-load-misses
52389145 ± 2% -45.0% 28807970 ± 27% perf-stat.ps.iTLB-loads
1.112e+11 +1.5% 1.128e+11 perf-stat.ps.instructions
3.357e+13 +1.5% 3.406e+13 perf-stat.total.instructions
7.97 -3.1 4.87 perf-profile.calltrace.cycles-pp.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
38.18 -2.5 35.67 perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
3.28 -1.9 1.38 perf-profile.calltrace.cycles-pp.yield_task_fair.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
28.01 -1.4 26.62 perf-profile.calltrace.cycles-pp.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
28.71 -1.4 27.33 perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
20.20 -1.3 18.90 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64
61.90 -1.2 60.72 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
2.31 ± 4% -1.1 1.19 perf-profile.calltrace.cycles-pp._raw_spin_lock.do_sched_yield.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
63.36 -1.0 62.41 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__sched_yield
1.35 ± 2% -0.2 1.17 ± 2% perf-profile.calltrace.cycles-pp.clear_buddies.pick_next_entity.pick_next_task_fair.__schedule.schedule
2.91 -0.1 2.80 perf-profile.calltrace.cycles-pp.__calc_delta.update_curr.pick_next_task_fair.__schedule.schedule
1.86 +0.0 1.88 perf-profile.calltrace.cycles-pp.native_sched_clock.sched_clock.sched_clock_cpu.update_rq_clock.__schedule
2.07 +0.0 2.09 perf-profile.calltrace.cycles-pp.sched_clock_cpu.update_rq_clock.__schedule.schedule.__x64_sys_sched_yield
1.39 +0.1 1.46 perf-profile.calltrace.cycles-pp.update_min_vruntime.update_curr.pick_next_task_fair.__schedule.schedule
3.16 ± 2% +0.2 3.33 ± 2% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.__x64_sys_sched_yield
18.63 +0.9 19.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__sched_yield
8.14 -3.2 4.89 perf-profile.children.cycles-pp.do_sched_yield
38.36 -2.5 35.87 perf-profile.children.cycles-pp.__x64_sys_sched_yield
3.29 -1.9 1.40 perf-profile.children.cycles-pp.yield_task_fair
28.72 -1.4 27.34 perf-profile.children.cycles-pp.schedule
28.20 -1.4 26.83 perf-profile.children.cycles-pp.__schedule
20.57 -1.3 19.31 perf-profile.children.cycles-pp.pick_next_task_fair
3.52 ± 3% -1.2 2.37 perf-profile.children.cycles-pp._raw_spin_lock
62.08 -0.9 61.15 perf-profile.children.cycles-pp.do_syscall_64
63.45 -0.9 62.52 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.10 -0.2 0.87 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
1.41 ± 2% -0.1 1.29 ± 2% perf-profile.children.cycles-pp.clear_buddies
3.14 -0.1 3.03 perf-profile.children.cycles-pp.__calc_delta
0.41 ± 2% -0.1 0.35 ± 2% perf-profile.children.cycles-pp.rcu_note_context_switch
99.52 -0.0 99.50 perf-profile.children.cycles-pp.__sched_yield
0.40 ± 2% +0.0 0.43 perf-profile.children.cycles-pp.__list_add_valid
0.77 ± 2% +0.0 0.82 ± 2% perf-profile.children.cycles-pp.testcase
1.49 +0.1 1.57 perf-profile.children.cycles-pp.update_min_vruntime
0.29 ± 4% +0.1 0.40 ± 3% perf-profile.children.cycles-pp.check_cfs_rq_runtime
17.47 +0.3 17.75 perf-profile.children.cycles-pp.syscall_return_via_sysret
16.89 +0.8 17.73 perf-profile.children.cycles-pp.entry_SYSCALL_64
3.15 -1.9 1.29 perf-profile.self.cycles-pp.yield_task_fair
4.83 -1.3 3.56 perf-profile.self.cycles-pp.pick_next_task_fair
3.44 ± 3% -1.1 2.30 perf-profile.self.cycles-pp._raw_spin_lock
0.86 -0.3 0.55 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
2.36 ± 3% -0.2 2.12 perf-profile.self.cycles-pp.do_sched_yield
3.07 -0.1 2.96 perf-profile.self.cycles-pp.__calc_delta
1.14 ± 2% -0.1 1.06 ± 2% perf-profile.self.cycles-pp.clear_buddies
0.28 ± 2% -0.1 0.23 ± 3% perf-profile.self.cycles-pp.rcu_note_context_switch
0.30 +0.0 0.33 perf-profile.self.cycles-pp.__raw_spin_unlock_irq
0.39 +0.0 0.41 perf-profile.self.cycles-pp.__list_add_valid
1.33 +0.0 1.38 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.21 ± 5% +0.1 0.30 ± 3% perf-profile.self.cycles-pp.check_cfs_rq_runtime
1.40 +0.1 1.49 perf-profile.self.cycles-pp.update_min_vruntime
4.17 +0.1 4.31 perf-profile.self.cycles-pp.update_curr
2.22 ± 2% +0.2 2.38 ± 2% perf-profile.self.cycles-pp.pick_next_entity
17.44 +0.3 17.73 perf-profile.self.cycles-pp.syscall_return_via_sysret
15.10 +0.8 15.89 perf-profile.self.cycles-pp.entry_SYSCALL_64
23.08 +1.5 24.62 perf-profile.self.cycles-pp.do_syscall_64
1.52 ± 3% +2.1 3.58 perf-profile.self.cycles-pp.__x64_sys_sched_yield
will-it-scale.per_thread_ops
1.155e+06 +-+-------------------------------------------------------------+
| |
1.15e+06 +-O O O O |
O O |
1.145e+06 +-+ O O O O O O O O |
| O O O |
1.14e+06 +-+ O O |
| |
1.135e+06 +-+ |
| |
1.13e+06 +-+ |
|.+.+.+..+ .+. .+ .+.+.+.+.+. .+ |
1.125e+06 +-+ + .+ +. .+ + .+.+. +.+ + |
| + +.+.+..+ +.+ + .+. .|
1.12e+06 +-+-------------------------------------------------------------+
will-it-scale.workload
1.2e+08 +-+-------------------------------------------------------------+
| O O |
1.195e+08 O-+ O O |
| O |
1.19e+08 +-+ O O O O O O O O O O O |
| |
1.185e+08 +-+ O O |
| |
1.18e+08 +-+ |
| |
1.175e+08 +-+ |
|.+.+.+..+ .+. .+ .+.+.+.+.+. .+ |
1.17e+08 +-+ + .+ +. .+ + .+.+. +.+ + |
| + +.+.+..+ +.+ + .+. .|
1.165e+08 +-+-------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/process/100%/debian-x86_64-2019-09-23.cgz/lkp-cfl-e1/hackbench/0xb8
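In this configuration hackbench runs in process mode over pipes with enough sender/receiver pairs to keep all 16 hardware threads busy. The workload pattern is roughly the one sketched below: pairs of processes bouncing small (100-byte) messages over a pipe. This is an illustrative reconstruction, not the benchmark's actual source, and the pair/loop counts are placeholders:
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#define DATASIZE 100            /* small message, as hackbench uses by default */
#define LOOPS    10000          /* messages per pair; placeholder value */
#define NR_PAIRS 16             /* one pair per hardware thread on this box */
/* One sender/receiver pair connected by a pipe.  hackbench runs many such
 * pairs concurrently, which produces the pipe_read()/pipe_write() and
 * scheduler activity visible in the profiles further down. */
static void run_pair(void)
{
        int fds[2];
        char buf[DATASIZE];
        pid_t pid;
        if (pipe(fds))
                _exit(1);
        pid = fork();
        if (pid == 0) {                         /* receiver */
                close(fds[1]);
                for (int i = 0; i < LOOPS; i++)
                        if (read(fds[0], buf, sizeof(buf)) <= 0)
                                _exit(1);
                _exit(0);
        }
        close(fds[0]);                          /* sender */
        memset(buf, 0, sizeof(buf));
        for (int i = 0; i < LOOPS; i++)
                if (write(fds[1], buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                        _exit(1);
        close(fds[1]);
        waitpid(pid, NULL, 0);
}
int main(void)
{
        for (int i = 0; i < NR_PAIRS; i++)
                if (fork() == 0) {
                        run_pair();
                        _exit(0);
                }
        while (wait(NULL) > 0)
                ;
        return 0;
}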
commit:
8629787388 ("sched/fair: Fix pick_next_task_fair() slow path")
2802bafd8f ("sched: Optimize pick_next_task()")
8629787388c77c01 2802bafd8f4b449471c2276f2f7
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:2 -100% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
319243 +1.0% 322577 vmstat.system.in
44734 ± 67% -97.1% 1299 ± 35% interrupts.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
44734 ± 67% -97.1% 1299 ± 35% interrupts.CPU1.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
1.015e+08 -2.3% 99090799 proc-vmstat.numa_hit
1.015e+08 -2.3% 99090799 proc-vmstat.numa_local
1.016e+08 -2.4% 99233446 proc-vmstat.pgalloc_normal
1.016e+08 -2.4% 99212634 proc-vmstat.pgfree
4661 +25.8% 5865 ± 15% sched_debug.cfs_rq:/.runnable_weight.min
15464 ± 66% -298.6% -30718 sched_debug.cfs_rq:/.spread0.avg
-24756 +182.0% -69800 sched_debug.cfs_rq:/.spread0.min
184.62 ± 3% -12.9% 160.72 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
128525 ± 25% -26.0% 95064 sched_debug.cpu.avg_idle.max
41958 ± 20% -46.0% 22660 ± 13% sched_debug.cpu.avg_idle.stddev
0.82 +38.9% 1.14 ± 6% sched_debug.cpu.nr_running.min
1859930 -65.9% 634915 ± 2% cpuidle.C1E.time
16778 -47.2% 8858 cpuidle.C1E.usage
526113 ± 2% +214.4% 1653841 ± 12% cpuidle.C3.time
2677 ± 4% +170.2% 7233 ± 6% cpuidle.C3.usage
19880613 +98.2% 39413282 ± 42% cpuidle.C6.time
26687 +221.8% 85874 ± 41% cpuidle.C6.usage
9751983 -42.2% 5639021 ± 7% cpuidle.C8.time
10643 -39.5% 6435 ± 7% cpuidle.C8.usage
23713 ± 69% -96.8% 756.75 ± 34% softirqs.CPU1.NET_RX
88898 ± 24% -26.5% 65346 ± 3% softirqs.CPU1.RCU
91011 ± 26% -28.9% 64686 softirqs.CPU13.RCU
90901 ± 27% -29.5% 64076 ± 2% softirqs.CPU14.RCU
89193 ± 27% -27.6% 64601 ± 2% softirqs.CPU2.RCU
90478 ± 26% -28.5% 64684 softirqs.CPU5.RCU
91912 ± 25% -29.8% 64548 ± 2% softirqs.CPU6.RCU
90295 ± 25% -27.2% 65713 softirqs.CPU9.RCU
25645 ± 62% -71.6% 7288 ± 38% softirqs.NET_RX
1441859 ± 27% -27.9% 1040000 softirqs.RCU
16685 -47.6% 8747 turbostat.C1E
0.02 -0.0 0.01 turbostat.C1E%
2604 ± 4% +175.0% 7161 ± 6% turbostat.C3
26241 +225.0% 85284 ± 41% turbostat.C6
0.20 +0.2 0.40 ± 43% turbostat.C6%
10506 -40.9% 6209 ± 8% turbostat.C8
0.10 -0.0 0.05 ± 9% turbostat.C8%
0.01 +100.0% 0.02 turbostat.CPU%c3
0.15 +76.7% 0.26 ± 37% turbostat.CPU%c6
0.06 -62.5% 0.02 ± 36% turbostat.CPU%c7
45.92 +1.7% 46.70 perf-stat.i.MPKI
97193839 -1.5% 95703183 perf-stat.i.branch-misses
0.98 -0.0 0.96 perf-stat.i.cache-miss-rate%
19893530 -1.9% 19511719 perf-stat.i.cache-misses
78.80 -0.9 77.91 perf-stat.i.iTLB-load-miss-rate%
51185994 -4.1% 49062994 perf-stat.i.iTLB-load-misses
904.82 +4.0% 940.91 perf-stat.i.instructions-per-iTLB-miss
1239044 -2.0% 1214362 perf-stat.i.node-loads
1032724 -3.0% 1001536 perf-stat.i.node-stores
45.07 +1.5% 45.73 perf-stat.overall.MPKI
1.08 -0.0 1.07 perf-stat.overall.branch-miss-rate%
0.95 -0.0 0.93 perf-stat.overall.cache-miss-rate%
3116 +1.9% 3176 perf-stat.overall.cycles-between-cache-misses
78.91 -0.9 78.06 perf-stat.overall.iTLB-load-miss-rate%
903.47 +3.9% 938.56 perf-stat.overall.instructions-per-iTLB-miss
97034122 -1.5% 95545924 perf-stat.ps.branch-misses
19860860 -1.9% 19479750 perf-stat.ps.cache-misses
51101833 -4.1% 48982450 perf-stat.ps.iTLB-load-misses
1237005 -2.0% 1212368 perf-stat.ps.node-loads
1031069 -3.0% 999913 perf-stat.ps.node-stores
35.95 -14.8 21.14 ± 71% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.48 -14.2 20.29 ± 71% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.81 ± 3% -1.0 0.85 ±100% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.91 -0.5 0.44 ±100% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.79 -0.2 2.56 ± 5% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.new_sync_write.vfs_write.ksys_write
1.70 ± 2% -0.2 1.47 ± 15% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.69 ± 5% -0.2 0.48 ± 57% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
2.49 ± 2% -0.2 2.29 ± 8% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
0.89 -0.1 0.77 ± 14% perf-profile.calltrace.cycles-pp.__inode_security_revalidate.selinux_file_permission.security_file_permission.vfs_write.ksys_write
1.42 -0.1 1.33 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
2.21 -0.1 2.16 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.new_sync_read
35.98 -0.5 35.52 perf-profile.children.cycles-pp.ksys_read
34.58 -0.5 34.12 perf-profile.children.cycles-pp.vfs_read
4.95 -0.3 4.68 ± 2% perf-profile.children.cycles-pp.mutex_lock
3.48 -0.2 3.26 ± 4% perf-profile.children.cycles-pp.mutex_unlock
2.81 -0.1 2.66 perf-profile.children.cycles-pp.__fdget_pos
2.53 -0.1 2.40 perf-profile.children.cycles-pp.__fget_light
1.03 -0.1 0.94 ± 8% perf-profile.children.cycles-pp.__mutex_unlock_slowpath
1.83 ± 2% -0.1 1.75 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.23 ± 8% -0.0 0.18 ± 13% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.42 ± 3% -0.0 0.38 ± 3% perf-profile.children.cycles-pp.sched_clock
0.45 ± 3% -0.0 0.41 ± 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.52 -0.0 0.49 ± 3% perf-profile.children.cycles-pp.inode_has_perm
0.40 ± 3% -0.0 0.37 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.15 ± 6% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.bpf_fd_pass
0.22 ± 4% -0.0 0.20 ± 4% perf-profile.children.cycles-pp.wake_q_add
0.17 ± 2% -0.0 0.15 ± 9% perf-profile.children.cycles-pp.rb_next
0.29 -0.0 0.27 ± 5% perf-profile.children.cycles-pp.iov_iter_init
0.11 ± 4% -0.0 0.10 ± 7% perf-profile.children.cycles-pp.finish_wait
0.18 +0.0 0.20 ± 4% perf-profile.children.cycles-pp.rb_erase
0.12 ± 4% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.resched_curr
0.45 +0.0 0.48 ± 3% perf-profile.children.cycles-pp.pick_next_entity
0.72 +0.0 0.77 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
0.60 +0.1 0.66 perf-profile.children.cycles-pp.check_preempt_wakeup
0.82 ± 3% +0.1 0.89 ± 2% perf-profile.children.cycles-pp.check_preempt_curr
0.91 ± 2% +0.1 0.97 ± 2% perf-profile.children.cycles-pp.ttwu_do_wakeup
3.32 +0.1 3.46 perf-profile.children.cycles-pp.select_task_rq_fair
3.05 -0.2 2.80 ± 4% perf-profile.self.cycles-pp.mutex_lock
3.41 -0.2 3.19 ± 5% perf-profile.self.cycles-pp.mutex_unlock
2.44 -0.1 2.33 perf-profile.self.cycles-pp.__fget_light
0.57 ± 4% -0.1 0.50 perf-profile.self.cycles-pp.pick_next_task_fair
1.09 ± 2% -0.1 1.02 ± 2% perf-profile.self.cycles-pp.vfs_read
0.80 -0.1 0.73 ± 3% perf-profile.self.cycles-pp.copy_page_from_iter
0.45 ± 2% -0.1 0.39 ± 5% perf-profile.self.cycles-pp.ksys_write
1.27 -0.1 1.21 ± 3% perf-profile.self.cycles-pp.do_syscall_64
0.62 ± 3% -0.0 0.58 ± 4% perf-profile.self.cycles-pp.__inode_security_revalidate
0.19 ± 10% -0.0 0.15 ± 16% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.39 ± 3% -0.0 0.35 ± 3% perf-profile.self.cycles-pp.native_sched_clock
0.17 ± 2% -0.0 0.14 ± 10% perf-profile.self.cycles-pp.rb_next
0.45 -0.0 0.42 ± 2% perf-profile.self.cycles-pp.inode_has_perm
0.21 ± 2% -0.0 0.18 ± 2% perf-profile.self.cycles-pp.set_next_entity
0.21 ± 4% -0.0 0.19 ± 5% perf-profile.self.cycles-pp.wake_q_add
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.06 +0.0 0.08 ± 8% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.12 ± 4% +0.0 0.15 ± 5% perf-profile.self.cycles-pp.resched_curr
0.72 +0.0 0.77 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
0.55 ± 2% +0.1 0.61 ± 4% perf-profile.self.cycles-pp.select_task_rq_fair
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[pipe] d60337eff1: phoronix-test-suite.noise-level.0.activity_level 144.0% improvement
by lkp report check
Greetings,
FYI, we noticed a 144.0% improvement of phoronix-test-suite.noise-level.0.activity_level due to commit:
commit: d60337eff18a3c587832ab8053a567f1da9710d2 ("[RFC PATCH 04/11] pipe: Use head and tail pointers for the ring, not cursor and length [ver #3]")
url: https://github.com/0day-ci/linux/commits/David-Howells/pipe-Notification-...
in testcase: phoronix-test-suite
on test machine: 16 threads Intel(R) Xeon(R) CPU X5570 @ 2.93GHz with 48G memory
with following parameters:
test: noise-level-1.1.0
cpufreq_governor: performance
ucode: 0x1d
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
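The commit being tested changes how the pipe's buffer ring is indexed: rather than keeping a cursor (current buffer) plus an occupancy count, it keeps free-running head and tail counters and derives emptiness/fullness from their difference. The sketch below shows that indexing idea on a power-of-two ring; it is a generic illustration of the scheme, not the kernel's actual pipe code:
#include <stdbool.h>
#define RING_SIZE 16                    /* must be a power of two */
struct ring {
        unsigned int head;              /* next slot to produce into */
        unsigned int tail;              /* next slot to consume from */
        void *bufs[RING_SIZE];
};
/* Occupancy falls out of the two counters; no separate length field
 * has to be kept in sync. */
static inline unsigned int ring_occupancy(const struct ring *r)
{
        return r->head - r->tail;       /* well-defined under unsigned wrap */
}
static inline bool ring_empty(const struct ring *r)
{
        return r->head == r->tail;
}
static inline bool ring_full(const struct ring *r)
{
        return ring_occupancy(r) >= RING_SIZE;
}
/* The counters only ever increase; the slot index is taken modulo the size. */
static inline void *ring_slot(struct ring *r, unsigned int idx)
{
        return r->bufs[idx & (RING_SIZE - 1)];
}
A common reason for preferring this layout is that the producer only ever advances head and the consumer only ever advances tail, so neither side has to update a shared length field.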
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-phoronix/lkp-nhm-2ep1/noise-level-1.1.0/phoronix-test-suite/0x1d
commit:
77a98a59a1 ("Add wake_up_interruptible_sync_poll_locked() [ver #3]")
d60337eff1 ("pipe: Use head and tail pointers for the ring, not cursor and length [ver #3]")
77a98a59a1ec5e35 d60337eff18a3c587832ab8053a
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.de###]
:4 100% 4:4 kmsg.head=#,tail=#,buffers=
:4 100% 4:4 kmsg.idx=#,offset=
:4 100% 4:4 dmesg.RIP:sanity
1:4 -25% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 100% 4:4 dmesg.WARNING:at_lib/iov_iter.c:#sanity
1:4 -25% :4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
75726 ± 2% +144.0% 184798 ± 8% phoronix-test-suite.noise-level.0.activity_level
46.75 ±115% -84.0% 7.50 ±157% interrupts.CPU0.TLB:TLB_shootdowns
36.05 ± 4% +26.1% 45.47 ± 4% boot-time.boot
504.75 ± 4% +21.9% 615.34 ± 3% boot-time.idle
84041 -1.2% 83015 proc-vmstat.nr_inactive_file
84041 -1.2% 83015 proc-vmstat.nr_zone_inactive_file
1016 ± 6% +10.9% 1126 ± 4% vmstat.system.cs
31702 -47.1% 16758 vmstat.system.in
2596 ± 19% -59.5% 1051 ± 70% numa-numastat.node0.other_node
111841 ± 34% +42.4% 159231 ± 28% numa-numastat.node1.local_node
112390 ± 34% +43.6% 161373 ± 28% numa-numastat.node1.numa_hit
548.75 ± 96% +290.6% 2143 ± 35% numa-numastat.node1.other_node
1.228e+09 ± 28% +52.4% 1.872e+09 ± 35% cpuidle.C3.time
2650850 ± 15% +87.2% 4961279 ± 36% cpuidle.C6.time
3765 ± 18% +70.5% 6419 ± 40% cpuidle.C6.usage
552648 ± 27% +374.5% 2622569 ± 48% cpuidle.POLL.time
8453 ± 12% +150.4% 21166 ± 30% cpuidle.POLL.usage
23.38 ± 43% -57.0% 10.04 ± 53% sched_debug.cfs_rq:/.load_avg.min
7840 ± 7% +74.8% 13701 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
16419 ± 8% +105.1% 33672 ± 10% sched_debug.cfs_rq:/.min_vruntime.max
3796 ± 4% +101.9% 7666 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
3799 ± 4% +101.8% 7667 ± 11% sched_debug.cfs_rq:/.spread0.stddev
0.73 ± 28% -44.7% 0.41 ± 40% sched_debug.cpu.nr_running.avg
0.72 ± 20% -35.9% 0.46 ± 18% sched_debug.cpu.nr_running.stddev
6932 ± 23% +80.9% 12543 ± 12% sched_debug.cpu.nr_switches.avg
3053 ± 27% +78.6% 5452 ± 7% sched_debug.cpu.nr_switches.min
16201 ± 32% -70.7% 4750 ± 38% numa-vmstat.node0.nr_active_file
215951 ± 35% -56.4% 94255 ± 60% numa-vmstat.node0.nr_file_pages
151995 ± 50% -61.0% 59210 ± 57% numa-vmstat.node0.nr_inactive_anon
152629 ± 50% -60.9% 59672 ± 57% numa-vmstat.node0.nr_shmem
16201 ± 32% -70.7% 4750 ± 38% numa-vmstat.node0.nr_zone_active_file
151995 ± 50% -61.0% 59210 ± 57% numa-vmstat.node0.nr_zone_inactive_anon
12930 ±137% -90.0% 1293 ± 54% numa-vmstat.node0.numa_other
9501 ± 58% +123.8% 21263 ± 9% numa-vmstat.node1.nr_active_file
158673 ± 47% +75.9% 279042 ± 20% numa-vmstat.node1.nr_file_pages
2767 ± 11% -15.7% 2334 ± 6% numa-vmstat.node1.nr_kernel_stack
9501 ± 58% +123.8% 21263 ± 9% numa-vmstat.node1.nr_zone_active_file
64781 ± 32% -70.7% 19003 ± 38% numa-meminfo.node0.Active(file)
863806 ± 35% -56.4% 377022 ± 60% numa-meminfo.node0.FilePages
796441 ± 36% -55.3% 356115 ± 61% numa-meminfo.node0.Inactive
607982 ± 50% -61.0% 236841 ± 57% numa-meminfo.node0.Inactive(anon)
1156778 ± 26% -46.6% 617340 ± 46% numa-meminfo.node0.MemUsed
610519 ± 50% -60.9% 238689 ± 57% numa-meminfo.node0.Shmem
163141 ± 13% +58.1% 257940 ± 30% numa-meminfo.node1.Active
38001 ± 58% +123.8% 85055 ± 9% numa-meminfo.node1.Active(file)
634701 ± 47% +75.9% 1116171 ± 20% numa-meminfo.node1.FilePages
593680 ± 49% +72.8% 1026170 ± 21% numa-meminfo.node1.Inactive
2774 ± 11% -15.7% 2338 ± 5% numa-meminfo.node1.KernelStack
884323 ± 34% +60.4% 1418847 ± 20% numa-meminfo.node1.MemUsed
11.10 ± 4% +21.4% 13.47 ± 2% perf-stat.i.MPKI
3.67e+08 ± 13% -48.5% 1.889e+08 ± 3% perf-stat.i.branch-instructions
6020544 ± 72% -60.4% 2383477 ± 2% perf-stat.i.branch-misses
4.82 ± 6% -0.9 3.92 ± 3% perf-stat.i.cache-miss-rate%
754312 ± 8% -45.6% 410529 ± 7% perf-stat.i.cache-misses
15105378 -35.5% 9742537 ± 2% perf-stat.i.cache-references
951.84 ± 7% +12.5% 1070 ± 3% perf-stat.i.context-switches
5.32e+09 ± 5% -47.6% 2.79e+09 ± 4% perf-stat.i.cpu-cycles
0.11 ± 9% +0.1 0.17 ± 7% perf-stat.i.dTLB-load-miss-rate%
4.238e+08 ± 13% -46.4% 2.273e+08 ± 2% perf-stat.i.dTLB-loads
0.22 ± 5% +0.1 0.35 ± 3% perf-stat.i.dTLB-store-miss-rate%
1.705e+08 ± 7% -38.9% 1.042e+08 ± 3% perf-stat.i.dTLB-stores
365420 ± 3% -38.8% 223608 ± 5% perf-stat.i.iTLB-load-misses
1.533e+09 ± 15% -48.4% 7.915e+08 ± 2% perf-stat.i.iTLB-loads
1.511e+09 ± 15% -50.0% 7.565e+08 perf-stat.i.instructions
4573 ± 15% -16.7% 3808 ± 3% perf-stat.i.instructions-per-iTLB-miss
10.20 ± 13% +26.0% 12.85 ± 2% perf-stat.overall.MPKI
4.99 ± 7% -0.8 4.22 ± 4% perf-stat.overall.cache-miss-rate%
0.13 ± 11% +0.1 0.21 ± 12% perf-stat.overall.dTLB-load-miss-rate%
0.27 ± 11% +0.1 0.39 ± 11% perf-stat.overall.dTLB-store-miss-rate%
4142 ± 15% -18.1% 3392 ± 6% perf-stat.overall.instructions-per-iTLB-miss
3.623e+08 ± 13% -48.2% 1.877e+08 ± 3% perf-stat.ps.branch-instructions
5944803 ± 72% -60.1% 2372479 ± 2% perf-stat.ps.branch-misses
744148 ± 8% -45.2% 408044 ± 6% perf-stat.ps.cache-misses
14895664 -35.2% 9653768 perf-stat.ps.cache-references
939.36 ± 6% +12.9% 1060 ± 2% perf-stat.ps.context-switches
5.252e+09 ± 5% -47.2% 2.771e+09 ± 4% perf-stat.ps.cpu-cycles
4.185e+08 ± 13% -46.0% 2.259e+08 ± 2% perf-stat.ps.dTLB-loads
1.684e+08 ± 7% -38.3% 1.038e+08 ± 3% perf-stat.ps.dTLB-stores
360508 ± 3% -38.4% 222217 ± 5% perf-stat.ps.iTLB-load-misses
1.514e+09 ± 15% -48.0% 7.866e+08 ± 2% perf-stat.ps.iTLB-loads
1.492e+09 ± 15% -49.6% 7.515e+08 perf-stat.ps.instructions
17337 ± 23% +98.0% 34327 ± 16% softirqs.CPU0.SCHED
50316 ± 20% +74.0% 87563 ± 29% softirqs.CPU0.TIMER
15541 ± 23% +75.0% 27190 ± 25% softirqs.CPU1.SCHED
46536 ± 26% +81.8% 84612 ± 18% softirqs.CPU1.TIMER
14140 ± 27% +90.7% 26969 ± 29% softirqs.CPU10.SCHED
44353 ± 23% +88.2% 83495 ± 23% softirqs.CPU10.TIMER
14456 ± 22% +109.7% 30308 ± 23% softirqs.CPU11.SCHED
45458 ± 22% +89.8% 86266 ± 28% softirqs.CPU11.TIMER
14745 ± 29% +113.5% 31485 ± 18% softirqs.CPU12.SCHED
45475 ± 28% +79.8% 81785 ± 22% softirqs.CPU12.TIMER
14339 ± 21% +116.7% 31070 ± 22% softirqs.CPU13.SCHED
45935 ± 20% +90.1% 87325 ± 25% softirqs.CPU13.TIMER
14439 ± 21% +95.7% 28260 ± 17% softirqs.CPU14.SCHED
45691 ± 20% +90.0% 86793 ± 19% softirqs.CPU14.TIMER
14234 ± 24% +107.0% 29466 ± 23% softirqs.CPU15.SCHED
47195 ± 23% +83.0% 86374 ± 21% softirqs.CPU15.TIMER
14252 ± 26% +101.4% 28710 ± 20% softirqs.CPU2.SCHED
50804 ± 23% +62.1% 82341 ± 20% softirqs.CPU2.TIMER
46701 ± 30% +81.4% 84697 ± 22% softirqs.CPU3.TIMER
14081 ± 26% +104.9% 28858 ± 28% softirqs.CPU4.SCHED
44533 ± 29% +91.5% 85281 ± 21% softirqs.CPU4.TIMER
14698 ± 23% +93.4% 28431 ± 24% softirqs.CPU5.SCHED
48080 ± 26% +68.6% 81081 ± 23% softirqs.CPU5.TIMER
14040 ± 25% +93.9% 27217 ± 18% softirqs.CPU6.SCHED
41048 ± 28% +100.0% 82104 ± 20% softirqs.CPU6.TIMER
13985 ± 20% +100.3% 28012 ± 28% softirqs.CPU7.SCHED
44020 ± 23% +85.0% 81425 ± 27% softirqs.CPU7.TIMER
14187 ± 24% +100.1% 28390 ± 17% softirqs.CPU8.SCHED
44349 ± 26% +84.9% 81992 ± 24% softirqs.CPU8.TIMER
14201 ± 24% +104.2% 29004 ± 19% softirqs.CPU9.SCHED
45976 ± 24% +82.6% 83939 ± 22% softirqs.CPU9.TIMER
233752 ± 23% +99.2% 465530 ± 21% softirqs.SCHED
736478 ± 24% +82.9% 1347084 ± 22% softirqs.TIMER
28.76 ±119% -25.5 3.26 ±173% perf-profile.calltrace.cycles-pp.cpu_idle_poll.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
15.99 ± 92% -13.0 3.00 ±173% perf-profile.calltrace.cycles-pp.__libc_start_main
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.ioctl.__libc_start_main
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.__libc_start_main
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl.do_vfs_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.ioctl.__libc_start_main
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.__libc_start_main
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.ksys_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.perf_ioctl.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl.do_syscall_64
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp._perf_ioctl.perf_ioctl.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.calltrace.cycles-pp.perf_event_for_each_child._perf_ioctl.perf_ioctl.do_vfs_ioctl.ksys_ioctl
9.93 ±130% -8.8 1.17 ±173% perf-profile.calltrace.cycles-pp.tick_check_broadcast_expired.cpu_idle_poll.do_idle.cpu_startup_entry.start_secondary
3.69 ±108% -1.0 2.66 ±173% perf-profile.calltrace.cycles-pp.perf_release.__fput.task_work_run.do_exit.do_group_exit
3.69 ±108% -1.0 2.66 ±173% perf-profile.calltrace.cycles-pp.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
3.69 ±108% -1.0 2.67 ±173% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
3.69 ±108% -1.0 2.67 ±173% perf-profile.calltrace.cycles-pp.__fput.task_work_run.do_exit.do_group_exit.get_signal
3.70 ±108% -1.0 2.69 ±173% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
3.70 ±108% -1.0 2.69 ±173% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
3.70 ±108% -1.0 2.69 ±173% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.70 ±108% -1.0 2.69 ±173% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.70 ±108% -1.0 2.69 ±173% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.on_each_cpu.flush_tlb_kernel_range.pmd_free_pte_page.ioremap_page_range.__ioremap_caller
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_many.on_each_cpu.flush_tlb_kernel_range.pmd_free_pte_page.ioremap_page_range
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.drm_client_buffer_vmap.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.drm_gem_vmap.drm_client_buffer_vmap.drm_fb_helper_dirty_work.process_one_work.worker_thread
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.drm_gem_vram_object_vmap.drm_gem_vmap.drm_client_buffer_vmap.drm_fb_helper_dirty_work.process_one_work
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.drm_gem_vram_kmap.drm_gem_vram_object_vmap.drm_gem_vmap.drm_client_buffer_vmap.drm_fb_helper_dirty_work
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.ttm_bo_kmap.drm_gem_vram_kmap.drm_gem_vram_object_vmap.drm_gem_vmap.drm_client_buffer_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.__ioremap_caller.ttm_bo_kmap.drm_gem_vram_kmap.drm_gem_vram_object_vmap.drm_gem_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.ioremap_page_range.__ioremap_caller.ttm_bo_kmap.drm_gem_vram_kmap.drm_gem_vram_object_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.pmd_free_pte_page.ioremap_page_range.__ioremap_caller.ttm_bo_kmap.drm_gem_vram_kmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.calltrace.cycles-pp.flush_tlb_kernel_range.pmd_free_pte_page.ioremap_page_range.__ioremap_caller.ttm_bo_kmap
28.76 ±119% -25.5 3.26 ±173% perf-profile.children.cycles-pp.cpu_idle_poll
15.19 ± 82% -13.2 1.95 ±164% perf-profile.children.cycles-pp.smp_call_function_single
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.__x64_sys_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.ksys_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.do_vfs_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.perf_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp._perf_ioctl
10.12 ± 67% -9.2 0.94 ±173% perf-profile.children.cycles-pp.perf_event_for_each_child
10.18 ±125% -9.0 1.22 ±163% perf-profile.children.cycles-pp.tick_check_broadcast_expired
10.55 ± 69% -8.7 1.87 ±173% perf-profile.children.cycles-pp.event_function_call
3.08 ± 73% -3.1 0.00 perf-profile.children.cycles-pp.perf_event_ctx_lock_nested
3.08 ± 73% -3.1 0.00 perf-profile.children.cycles-pp.__mutex_lock
3.08 ± 73% -3.1 0.00 perf-profile.children.cycles-pp.mutex_spin_on_owner
3.69 ±108% -1.0 2.66 ±173% perf-profile.children.cycles-pp.perf_release
3.69 ±108% -1.0 2.66 ±173% perf-profile.children.cycles-pp.perf_event_release_kernel
3.70 ±108% -1.0 2.69 ±173% perf-profile.children.cycles-pp.do_signal
3.70 ±108% -1.0 2.69 ±173% perf-profile.children.cycles-pp.get_signal
3.69 ±108% -1.0 2.68 ±173% perf-profile.children.cycles-pp.task_work_run
3.69 ±108% -1.0 2.68 ±173% perf-profile.children.cycles-pp.__fput
3.70 ±108% -1.0 2.70 ±173% perf-profile.children.cycles-pp.exit_to_usermode_loop
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.drm_client_buffer_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.drm_gem_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.drm_gem_vram_object_vmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.drm_gem_vram_kmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.ttm_bo_kmap
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.__ioremap_caller
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.ioremap_page_range
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.pmd_free_pte_page
8.19 ± 96% +2.2 10.43 ±173% perf-profile.children.cycles-pp.flush_tlb_kernel_range
9.71 ± 94% +2.4 12.15 ±173% perf-profile.children.cycles-pp.on_each_cpu
9.71 ± 94% +2.4 12.15 ±173% perf-profile.children.cycles-pp.smp_call_function_many
0.07 ± 65% +6.3 6.34 ±169% perf-profile.children.cycles-pp.handle_mm_fault
0.06 ± 68% +6.3 6.34 ±170% perf-profile.children.cycles-pp.__handle_mm_fault
18.50 ±116% -16.4 2.09 ±173% perf-profile.self.cycles-pp.cpu_idle_poll
15.18 ± 82% -13.8 1.35 ±161% perf-profile.self.cycles-pp.smp_call_function_single
10.18 ±125% -9.0 1.22 ±163% perf-profile.self.cycles-pp.tick_check_broadcast_expired
2.84 ± 85% -2.8 0.00 perf-profile.self.cycles-pp.mutex_spin_on_owner
9.32 ± 99% +2.8 12.11 ±173% perf-profile.self.cycles-pp.smp_call_function_many
phoronix-test-suite.noise-level.0.activity_level
220000 +-O----------------------------------------------------------------+
| O O |
200000 +-+ O O
180000 +-+ O O O O |
| O O |
160000 +-+ O O O O O O O O O |
O O O O O O |
140000 +-+ O O O |
| |
120000 +-+ |
100000 +-+ |
| |
80000 +-+ .+.. .+. .+. .+. .+.+.+..+.+.+.. .+.. .+.. |
|.+..+ + +. +..+.+.+. +. + +.+ +.+ |
60000 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched] 10e7071b2f: BUG:kernel_NULL_pointer_dereference,address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 10e7071b2f491b0fb981717ea0a585c441906ede ("sched: Rework CPU hotplug task selection")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with following parameters:
group: kselftests-01
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
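For orientation, a selftest in that directory is usually a small C program, and many use the in-tree kselftest harness macros. A minimal, hypothetical example (the file path and test body are made up for illustration) might look like:
/* tools/testing/selftests/example/example_test.c -- hypothetical path */
#include <sched.h>
#include "../kselftest_harness.h"
/* A trivial check: sched_yield() is expected to succeed. */
TEST(sched_yield_succeeds)
{
        ASSERT_EQ(0, sched_yield());
}
TEST_HARNESS_MAIN
Such tests are normally built and run through the selftests Makefile, e.g. make -C tools/testing/selftests TARGETS=<target> run_tests, which is roughly what the kselftests-01 group above drives.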
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | f95d4eaee6 | 10e7071b2f |
+-------------------------------------------------+------------+------------+
| boot_successes | 54 | 12 |
| boot_failures | 0 | 82 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 79 |
| Oops:#[##] | 0 | 79 |
| RIP:pick_next_task_dl | 0 | 79 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 79 |
| BUG:kernel_reboot-without-warning_in_test_stage | 0 | 3 |
+-------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 84.432464] BUG: kernel NULL pointer dereference, address: 0000000000000064
[ 84.433700] #PF: supervisor read access in kernel mode
[ 84.434589] #PF: error_code(0x0000) - not-present page
[ 84.435499] PGD 0 P4D 0
[ 84.435933] Oops: 0000 [#1] SMP PTI
[ 84.436581] CPU: 1 PID: 15 Comm: migration/1 Not tainted 5.3.0-rc1-00086-g10e7071b2f491 #1
[ 84.438004] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 84.439461] RIP: 0010:pick_next_task_dl+0xe/0xf0
[ 84.440266] Code: ed bd 70 01 01 e8 42 2d fb ff 0f 0b e9 6b ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 53 48 89 fb 48 83 ec 10 <8b> 46 64 85 c0 78 73 48 81 7e 78 a0 3f e2 a7 74 57 48 83 bb 10 09
[ 84.443485] RSP: 0000:ffffa5518008bd40 EFLAGS: 00010082
[ 84.444423] RAX: ffffffffa6eeeae0 RBX: ffff98ebbfd2b0c0 RCX: ffff98ebbfd2d040
[ 84.445641] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff98ebbfd2b0c0
[ 84.446877] RBP: ffffa5518008bdc0 R08: 0000001ac1016512 R09: 0000000000000001
[ 84.448128] R10: ffffffffa863e640 R11: 0000000000000003 R12: ffff98ebbfd2b0c0
[ 84.449349] R13: ffffffffa7e23fa0 R14: ffffffffa7e24060 R15: 0000000000000000
[ 84.450603] FS: 0000000000000000(0000) GS:ffff98ebbfd00000(0000) knlGS:0000000000000000
[ 84.452007] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 84.453022] CR2: 0000000000000064 CR3: 00000001aab84000 CR4: 00000000000406e0
[ 84.454244] Call Trace:
[ 84.455263] ? update_rq_clock+0x6d/0xe0
[ 84.456081] sched_cpu_dying+0x104/0x380
[ 84.456777] ? sched_cpu_starting+0xf0/0xf0
[ 84.457510] cpuhp_invoke_callback+0x86/0x5d0
[ 84.458279] ? cpu_disable_common+0x292/0x2b0
[ 84.459047] take_cpu_down+0x60/0xb0
[ 84.459649] multi_cpu_stop+0x6b/0x100
[ 84.460339] ? stop_machine_yield+0x10/0x10
[ 84.461078] cpu_stopper_thread+0x9e/0x110
[ 84.461809] ? smpboot_thread_fn+0x2f/0x1e0
[ 84.462539] ? smpboot_thread_fn+0x74/0x1e0
[ 84.463280] ? smpboot_thread_fn+0x14e/0x1e0
[ 84.464024] smpboot_thread_fn+0x149/0x1e0
[ 84.464768] ? sort_range+0x20/0x20
[ 84.465389] kthread+0x11e/0x140
[ 84.465961] ? kthread_park+0xa0/0xa0
[ 84.466605] ret_from_fork+0x35/0x40
[ 84.467236] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver binfmt_misc intel_rapl_msr intel_rapl_common sr_mod crct10dif_pclmul cdrom crc32_pclmul sg crc32c_intel ghash_clmulni_intel ata_generic pata_acpi ppdev bochs_drm drm_vram_helper ttm drm_kms_helper syscopyarea sysfillrect snd_pcm sysimgblt fb_sys_fops drm aesni_intel snd_timer ata_piix crypto_simd snd cryptd glue_helper libata soundcore pcspkr joydev serio_raw parport_pc i2c_piix4 parport floppy ip_tables
[ 84.474471] CR2: 0000000000000064
[ 84.475066] ---[ end trace af8f1919a81ca744 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc1-00086-g10e7071b2f491 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp