[refcount] 26d2e0d5df: WARNING:at_lib/refcount.c:#refcount_warn_saturate
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 26d2e0d5df5b9aab517d8327743e66fcb38e8136 ("refcount: Consolidate implementations of refcount_t")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/will/linux.git refcount/full
in testcase: ocfs2test
with the following parameters:
disk: 1SSD
test: test-mkfs
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------------------------------+------------+------------+
| | 90c6af41d2 | 26d2e0d5df |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 24 | 24 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/rwsem.c | 24 | 24 |
| kernel_BUG_at_mm/slub.c | 2 | |
| invalid_opcode:#[##] | 2 | |
| RIP:kfree | 2 | |
| RIP:native_safe_halt | 2 | |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 2 | |
| WARNING:at_lib/refcount.c:#refcount_warn_saturate | 0 | 9 |
| RIP:refcount_warn_saturate | 0 | 9 |
| BUG:unable_to_handle_page_fault_for_address | 0 | 1 |
| Oops:#[##] | 0 | 1 |
| RIP:__kmalloc | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 2 |
| general_protection_fault:#[##] | 0 | 1 |
| RIP:kmem_cache_alloc_trace | 0 | 1 |
+-----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 72.121725] BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1499
[ 72.126078] in_atomic(): 1, irqs_disabled(): 0, pid: 2466, name: mount.ocfs2
[ 72.128523] CPU: 1 PID: 2466 Comm: mount.ocfs2 Not tainted 5.3.0-rc3-00008-g26d2e0d5df5b9 #1
[ 72.130522] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 72.132522] Call Trace:
[ 72.134024] dump_stack+0x5c/0x7b
[ 72.135106] ___might_sleep+0xf1/0x110
[ 72.136280] down_write+0x1c/0x50
[ 72.137349] configfs_depend_item+0x3a/0xb0
[ 72.138597] o2hb_region_pin+0xf9/0x180 [ocfs2_nodemanager]
[ 72.140103] ? inode_init_always+0x120/0x1d0
[ 72.141368] o2hb_register_callback+0xc6/0x2a0 [ocfs2_nodemanager]
[ 72.143016] dlm_join_domain+0xbd/0x7a0 [ocfs2_dlm]
[ 72.144441] ? dlm_alloc_ctxt+0x50a/0x580 [ocfs2_dlm]
[ 72.145880] dlm_register_domain+0x31f/0x440 [ocfs2_dlm]
[ 72.147395] ? enqueue_entity+0x109/0x6c0
[ 72.148658] ? _cond_resched+0x19/0x30
[ 72.149870] o2cb_cluster_connect+0x132/0x2c0 [ocfs2_stack_o2cb]
[ 72.151524] ocfs2_cluster_connect+0x14b/0x220 [ocfs2_stackglue]
[ 72.153237] ocfs2_dlm_init+0x2f1/0x4b0 [ocfs2]
[ 72.154647] ? ocfs2_init_node_maps+0x50/0x50 [ocfs2]
[ 72.156167] ocfs2_fill_super+0xcfc/0x12b0 [ocfs2]
[ 72.157640] ? ocfs2_initialize_super+0x1030/0x1030 [ocfs2]
[ 72.159395] mount_bdev+0x173/0x1b0
[ 72.160627] legacy_get_tree+0x27/0x40
[ 72.161900] vfs_get_tree+0x25/0xf0
[ 72.163138] do_mount+0x691/0x9c0
[ 72.164343] ksys_mount+0x80/0xd0
[ 72.165536] __x64_sys_mount+0x21/0x30
[ 72.166828] do_syscall_64+0x5b/0x1f0
[ 72.168124] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 72.169649] RIP: 0033:0x7f9aec59148a
[ 72.170904] Code: 48 8b 0d 11 fa 2a 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d de f9 2a 00 f7 d8 64 89 01 48
[ 72.175696] RSP: 002b:00007ffc97973af8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
[ 72.177764] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9aec59148a
[ 72.179756] RDX: 00005630f7e3b3ee RSI: 00005630f988a0b0 RDI: 00005630f988a310
[ 72.181768] RBP: 00007ffc97973ca0 R08: 00005630f988a2b0 R09: 0000000000000020
[ 72.183776] R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffc97973b90
[ 72.185795] R13: 0000000000000000 R14: 00005630f988b000 R15: 00007ffc97973b10
[ 72.193593] o2dlm: Joining domain 87608FBB69A6455A927DB6EE644FA256 ( 1 ) 1 nodes
[ 72.201889] JBD2: Ignoring recovery information on journal
[ 72.211850] ocfs2: Mounting device (8,0) on (node 1, slot 0) with ordered data mode.
[ 72.261789] mount /dev/sda /mnt/ocfs2 /dev/sda 16515072 243712 16271360 2% /mnt/ocfs2
[ 72.268799] OK
[ 72.273936] create testdir /mnt/ocfs2/20190907_114755
[ 72.286732] create 15890 files .
[ 76.331476] o2dlm: Leaving domain 87608FBB69A6455A927DB6EE644FA256
[ 76.402235] blk_update_request: I/O error, dev fd0, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 76.406909] floppy: error 10 while reading block 0
[ 78.260271] ocfs2: Unmounting device (8,0) on (node 1)
[ 78.264188] ------------[ cut here ]------------
[ 78.267624] refcount_t: underflow; use-after-free.
[ 78.271193] WARNING: CPU: 1 PID: 2531 at lib/refcount.c:28 refcount_warn_saturate+0x95/0xe0
[ 78.275787] Modules linked in: ocfs2_stack_o2cb ocfs2_dlm ocfs2 ocfs2_nodemanager ocfs2_stackglue jbd2 bochs_drm sr_mod cdrom ata_generic drm_vram_helper pata_acpi ttm intel_rapl_msr sd_mod sg intel_rapl_common drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel ppdev syscopyarea sysfillrect sysimgblt fb_sys_fops drm snd_pcm aesni_intel ata_piix crypto_simd snd_timer libata joydev snd cryptd glue_helper soundcore pcspkr serio_raw virtio_scsi i2c_piix4 floppy parport_pc parport ip_tables
[ 78.294615] CPU: 1 PID: 2531 Comm: umount Tainted: G W 5.3.0-rc3-00008-g26d2e0d5df5b9 #1
[ 78.297253] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 78.299697] RIP: 0010:refcount_warn_saturate+0x95/0xe0
[ 78.301533] Code: 05 13 c8 37 01 01 e8 3a fd c1 ff 0f 0b c3 80 3d 01 c8 37 01 00 75 aa 48 c7 c7 c8 79 d1 aa c6 05 f1 c7 37 01 01 e8 1b fd c1 ff <0f> 0b c3 80 3d e3 c7 37 01 00 75 8b 48 c7 c7 98 79 d1 aa c6 05 d3
[ 78.306840] RSP: 0018:ffffad0540413e20 EFLAGS: 00010282
[ 78.308741] RAX: 0000000000000000 RBX: ffff96be3da63000 RCX: 0000000000000000
[ 78.311011] RDX: ffff96beffd27200 RSI: ffff96beffd17778 RDI: ffff96beffd17778
[ 78.313311] RBP: ffff96be3baf6000 R08: 00000000000004ff R09: 0000000000aaaaaa
[ 78.315620] R10: ffff96bedf05eec0 R11: ffff96bedc6ee950 R12: ffffad0540413e34
[ 78.317899] R13: ffff96be3da63240 R14: ffff96be3da630c8 R15: 0000000000000000
[ 78.320192] FS: 00007fb431b58e40(0000) GS:ffff96beffd00000(0000) knlGS:0000000000000000
[ 78.322666] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 78.324751] CR2: 00000000004216d0 CR3: 000000007cf4c000 CR4: 00000000000406e0
[ 78.327003] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 78.329278] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 78.331517] Call Trace:
[ 78.332928] ocfs2_dismount_volume+0x342/0x400 [ocfs2]
[ 78.334790] generic_shutdown_super+0x6c/0x120
[ 78.336521] kill_block_super+0x21/0x50
[ 78.338131] deactivate_locked_super+0x3f/0x70
[ 78.339858] cleanup_mnt+0xb8/0x150
[ 78.341396] task_work_run+0xa3/0xe0
[ 78.342965] exit_to_usermode_loop+0xeb/0xf0
[ 78.344679] do_syscall_64+0x1c1/0x1f0
[ 78.346272] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 78.348112] RIP: 0033:0x7fb43143cd77
[ 78.349647] Code: 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f1 00 2b 00 f7 d8 64 89 01 48
[ 78.353636] RSP: 002b:00007ffc671b3ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 78.355313] RAX: 0000000000000000 RBX: 0000558a27aef080 RCX: 00007fb43143cd77
[ 78.356858] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000558a27aef260
[ 78.358382] RBP: 0000558a27aef260 R08: 0000558a27af0600 R09: 0000000000000015
[ 78.359939] R10: 00000000000006b4 R11: 0000000000000246 R12: 00007fb43193ee64
[ 78.361625] R13: 0000000000000000 R14: 0000000000000000 R15: 00007ffc671b3d60
[ 78.363256] ---[ end trace 06b247ab8ebf3051 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc3-00008-g26d2e0d5df5b9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[xfs] 1aabb011bf: aim7.jobs-per-min -17.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -17.8% regression of aim7.jobs-per-min due to commit:
commit: 1aabb011bfba5a8942d3e0d0c84a4ac7cad277b1 ("xfs: deferred inode inactivation")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/djwong/xfs-linux... rough-fixes
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with the following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_rw
load: 3000
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
In addition, the commit also has a significant impact on the following test:
+------------------+-----------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min -35.7% regression |
| test machine | 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory |
| test parameters | cpufreq_governor=performance |
| | disk=4BRD_12G |
| | fs=xfs |
| | load=3000 |
| | md=RAID0 |
| | test=disk_cp |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.6/3000/RAID0/debian-x86_64-2019-05-14.cgz/lkp-ivb-ep01/disk_rw/aim7
commit:
d515962b00 ("xfs: pass around xfs_inode_ag_walk iget/irele helper functions")
1aabb011bf ("xfs: deferred inode inactivation")
d515962b00d64454 1aabb011bfba5a8942d3e0d0c84
---------------- ---------------------------
%stddev %change %stddev
\ | \
139763 -17.8% 114843 aim7.jobs-per-min
129.25 +21.6% 157.23 aim7.time.elapsed_time
129.25 +21.6% 157.23 aim7.time.elapsed_time.max
31367 ± 3% -39.2% 19066 ± 11% aim7.time.involuntary_context_switches
2862 -7.1% 2659 aim7.time.system_time
798.18 -5.9% 751.02 aim7.time.user_time
1049629 -21.6% 822793 aim7.time.voluntary_context_switches
1.016e+09 ± 12% +111.5% 2.149e+09 ± 11% cpuidle.C6.time
1873251 ± 7% +86.0% 3483843 ± 18% cpuidle.C6.usage
2956 ± 10% +25.2% 3701 ± 17% cpuidle.POLL.usage
29.00 +15.7 44.70 ± 2% mpstat.cpu.all.idle%
55.41 -12.2 43.21 ± 2% mpstat.cpu.all.sys%
15.55 -3.5 12.07 mpstat.cpu.all.usr%
30.00 +51.2% 45.35 ± 2% iostat.cpu.idle
54.65 -21.8% 42.72 ± 2% iostat.cpu.system
15.34 -22.2% 11.93 iostat.cpu.user
42.06 ± 11% +139.1% 100.59 ± 7% iostat.md0.w/s
115553 +14.5% 132340 ± 3% meminfo.AnonHugePages
95312 +20.4% 114787 meminfo.KReclaimable
95312 +20.4% 114787 meminfo.SReclaimable
261148 +13.4% 296269 meminfo.Slab
31512 -16.2% 26409 ± 2% meminfo.max_used_kB
2740 ± 44% +108.3% 5707 ± 11% numa-vmstat.node0.nr_shmem
10048 ± 35% +81.5% 18236 ± 11% numa-vmstat.node0.nr_slab_reclaimable
16555 ± 52% +74.7% 28924 ± 28% numa-vmstat.node0.nr_slab_unreclaimable
3417 ± 16% -20.1% 2730 ± 20% numa-vmstat.node1.nr_mapped
4555 ± 27% -66.3% 1533 ± 53% numa-vmstat.node1.nr_shmem
29.50 +52.5% 45.00 ± 2% vmstat.cpu.id
54.25 -22.6% 42.00 ± 2% vmstat.cpu.sy
15.00 -26.7% 11.00 vmstat.cpu.us
27.50 -22.7% 21.25 ± 2% vmstat.procs.r
17205 -9.9% 15509 vmstat.system.cs
40194 ± 35% +81.5% 72950 ± 11% numa-meminfo.node0.KReclaimable
40194 ± 35% +81.5% 72950 ± 11% numa-meminfo.node0.SReclaimable
66241 ± 52% +74.7% 115712 ± 28% numa-meminfo.node0.SUnreclaim
10965 ± 44% +108.3% 22838 ± 11% numa-meminfo.node0.Shmem
106435 ± 44% +77.3% 188663 ± 18% numa-meminfo.node0.Slab
13273 ± 15% -18.7% 10793 ± 18% numa-meminfo.node1.Mapped
18226 ± 27% -66.3% 6135 ± 53% numa-meminfo.node1.Shmem
346095 -2.0% 339002 proc-vmstat.nr_dirty
346482 -2.0% 339409 proc-vmstat.nr_inactive_file
23834 +20.4% 28691 proc-vmstat.nr_slab_reclaimable
41519 +9.2% 45351 proc-vmstat.nr_slab_unreclaimable
346482 -2.0% 339409 proc-vmstat.nr_zone_inactive_file
346479 -2.0% 339510 proc-vmstat.nr_zone_write_pending
509175 +14.6% 583653 proc-vmstat.pgfault
871.00 -19.2% 703.50 ± 3% turbostat.Avg_MHz
71.90 -15.1 56.80 turbostat.Busy%
1869906 ± 7% +86.1% 3479825 ± 19% turbostat.C6
19.42 ± 12% +14.4 33.80 ± 11% turbostat.C6%
22.63 ± 2% +44.9% 32.79 ± 7% turbostat.CPU%c1
4.42 ± 34% +114.8% 9.50 ± 30% turbostat.CPU%c6
47.42 -4.3% 45.40 ± 2% turbostat.CorWatt
10614698 +20.8% 12819724 turbostat.IRQ
65.00 ± 3% +3.8% 67.50 turbostat.PkgTmp
73.99 -2.8% 71.95 turbostat.PkgWatt
34.29 -4.1% 32.90 turbostat.RAMWatt
10450 +21.5% 12696 turbostat.SMI
2930 ± 5% +309.5% 12001 slabinfo.dmaengine-unmap-16.active_objs
73.00 ± 7% +626.0% 530.00 ± 4% slabinfo.dmaengine-unmap-16.active_slabs
3088 ± 7% +621.4% 22279 ± 4% slabinfo.dmaengine-unmap-16.num_objs
73.00 ± 7% +626.0% 530.00 ± 4% slabinfo.dmaengine-unmap-16.num_slabs
2663 ± 3% +28.7% 3427 ± 6% slabinfo.kmalloc-128.active_objs
2663 ± 3% +28.7% 3427 ± 6% slabinfo.kmalloc-128.num_objs
19529 +129.5% 44820 ± 4% slabinfo.kmalloc-16.active_objs
75.75 +141.9% 183.25 ± 4% slabinfo.kmalloc-16.active_slabs
19529 +140.8% 47025 ± 4% slabinfo.kmalloc-16.num_objs
75.75 +141.9% 183.25 ± 4% slabinfo.kmalloc-16.num_slabs
2850 ± 3% +19.0% 3391 ± 3% slabinfo.kmalloc-2k.active_objs
2905 ± 3% +17.8% 3423 ± 3% slabinfo.kmalloc-2k.num_objs
769.75 +33.0% 1023 ± 3% slabinfo.kmalloc-4k.active_objs
782.75 +34.1% 1049 ± 3% slabinfo.kmalloc-4k.num_objs
6379 +151.9% 16067 slabinfo.kmalloc-512.active_objs
209.25 +187.8% 602.25 slabinfo.kmalloc-512.active_slabs
6712 +187.4% 19289 slabinfo.kmalloc-512.num_objs
209.25 +187.8% 602.25 slabinfo.kmalloc-512.num_slabs
453.75 +41.7% 643.00 slabinfo.kmalloc-8k.active_objs
469.50 +49.6% 702.25 slabinfo.kmalloc-8k.num_objs
923.25 ± 14% +22.9% 1135 ± 7% slabinfo.task_group.active_objs
923.25 ± 14% +22.9% 1135 ± 7% slabinfo.task_group.num_objs
1201 +79.2% 2152 ± 2% slabinfo.xfs_buf_item.active_objs
1201 +79.2% 2152 ± 2% slabinfo.xfs_buf_item.num_objs
1264 +51.1% 1909 ± 13% slabinfo.xfs_efd_item.active_objs
1264 +51.1% 1909 ± 13% slabinfo.xfs_efd_item.num_objs
2437 ± 2% +375.6% 11590 slabinfo.xfs_inode.active_objs
84.25 ± 3% +688.7% 664.50 ± 4% slabinfo.xfs_inode.active_slabs
2710 ± 3% +685.2% 21283 ± 4% slabinfo.xfs_inode.num_objs
84.25 ± 3% +688.7% 664.50 ± 4% slabinfo.xfs_inode.num_slabs
40593 -23.2% 31163 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
42413 -22.9% 32721 ± 2% sched_debug.cfs_rq:/.exec_clock.max
40001 -23.7% 30528 ± 2% sched_debug.cfs_rq:/.exec_clock.min
21776 ± 22% +54.6% 33663 ± 23% sched_debug.cfs_rq:/.load.avg
131390 ±112% +327.1% 561113 ± 52% sched_debug.cfs_rq:/.load.max
26684 ± 79% +269.6% 98631 ± 45% sched_debug.cfs_rq:/.load.stddev
508.00 ± 37% +133.3% 1185 ± 39% sched_debug.cfs_rq:/.load_avg.max
9.00 ± 17% -44.4% 5.00 ± 33% sched_debug.cfs_rq:/.load_avg.min
120.95 ± 40% +94.6% 235.39 ± 32% sched_debug.cfs_rq:/.load_avg.stddev
1315047 -37.2% 825488 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
1467409 ± 3% -35.8% 941380 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
1271354 -38.8% 777738 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
0.62 ± 5% -30.4% 0.43 ± 10% sched_debug.cfs_rq:/.nr_running.avg
35.08 ± 9% +449.6% 192.83 ± 63% sched_debug.cfs_rq:/.runnable_load_avg.max
10.30 +221.3% 33.10 ± 54% sched_debug.cfs_rq:/.runnable_load_avg.stddev
20216 ± 22% +48.6% 30046 ± 29% sched_debug.cfs_rq:/.runnable_weight.avg
120127 ±125% +287.7% 465751 ± 60% sched_debug.cfs_rq:/.runnable_weight.max
25146 ± 87% +234.5% 84116 ± 53% sched_debug.cfs_rq:/.runnable_weight.stddev
14472 ±135% -101.5% -211.35 sched_debug.cfs_rq:/.spread0.avg
166817 ± 25% -30.7% 115662 ± 20% sched_debug.cfs_rq:/.spread0.max
-28876 +66.0% -47942 sched_debug.cfs_rq:/.spread0.min
757.34 ± 2% -27.9% 545.98 ± 4% sched_debug.cfs_rq:/.util_avg.avg
1299 ± 11% -14.5% 1111 sched_debug.cfs_rq:/.util_avg.max
397.33 ± 17% -38.5% 244.17 ± 17% sched_debug.cfs_rq:/.util_avg.min
103.73 ± 13% -38.5% 63.79 ± 13% sched_debug.cfs_rq:/.util_est_enqueued.avg
204770 ± 4% -10.9% 182535 ± 3% sched_debug.cpu.avg_idle.stddev
1795 ± 10% -27.6% 1300 ± 17% sched_debug.cpu.curr->pid.avg
1336 +18.0% 1576 ± 5% sched_debug.cpu.curr->pid.stddev
0.64 ± 5% -28.8% 0.45 ± 10% sched_debug.cpu.nr_running.avg
27275 -10.7% 24364 sched_debug.cpu.nr_switches.avg
34560 ± 3% +68.3% 58168 ± 4% sched_debug.cpu.nr_switches.max
24872 -31.7% 16990 sched_debug.cpu.nr_switches.min
2128 ± 7% +328.5% 9122 ± 2% sched_debug.cpu.nr_switches.stddev
25106 -11.7% 22178 sched_debug.cpu.sched_count.avg
27859 ± 2% +89.0% 52655 ± 3% sched_debug.cpu.sched_count.max
23769 -32.1% 16139 sched_debug.cpu.sched_count.min
874.04 ± 15% +856.0% 8355 sched_debug.cpu.sched_count.stddev
11954 -10.8% 10660 sched_debug.cpu.sched_goidle.avg
13338 ± 2% +93.1% 25761 ± 3% sched_debug.cpu.sched_goidle.max
11339 -32.5% 7653 sched_debug.cpu.sched_goidle.min
420.12 ± 13% +887.8% 4149 sched_debug.cpu.sched_goidle.stddev
12641 -12.2% 11095 sched_debug.cpu.ttwu_count.avg
14696 +73.7% 25527 sched_debug.cpu.ttwu_count.max
12080 -33.3% 8056 sched_debug.cpu.ttwu_count.min
598.46 ± 9% +580.7% 4073 sched_debug.cpu.ttwu_count.stddev
470.25 ± 2% +23.9% 582.47 ± 3% sched_debug.cpu.ttwu_local.avg
883.75 ± 10% +23.9% 1094 ± 9% sched_debug.cpu.ttwu_local.max
91.75 ± 12% +68.5% 154.59 ± 7% sched_debug.cpu.ttwu_local.stddev
3.867e+09 -14.4% 3.312e+09 perf-stat.i.branch-instructions
2.08 ± 2% -0.1 1.95 ± 3% perf-stat.i.branch-miss-rate%
55566907 -12.0% 48886699 ± 2% perf-stat.i.branch-misses
26.51 -1.7 24.84 ± 2% perf-stat.i.cache-miss-rate%
37534712 -23.4% 28761515 perf-stat.i.cache-misses
1.407e+08 -16.4% 1.176e+08 ± 2% perf-stat.i.cache-references
17417 -9.9% 15689 perf-stat.i.context-switches
2.05 -6.7% 1.91 ± 3% perf-stat.i.cpi
3.442e+10 -19.1% 2.784e+10 ± 4% perf-stat.i.cpu-cycles
2461 -48.3% 1271 perf-stat.i.cpu-migrations
898.31 +4.4% 937.81 ± 4% perf-stat.i.cycles-between-cache-misses
1.176e+08 ± 9% -22.4% 91285096 ± 2% perf-stat.i.dTLB-load-misses
6.032e+09 -14.2% 5.174e+09 perf-stat.i.dTLB-loads
17987322 ± 3% -17.7% 14796811 ± 3% perf-stat.i.dTLB-store-misses
4.184e+09 -15.1% 3.553e+09 perf-stat.i.dTLB-stores
6915996 ± 2% -5.9% 6510336 perf-stat.i.iTLB-load-misses
2586772 ± 8% -18.9% 2098325 ± 13% perf-stat.i.iTLB-loads
1.945e+10 -14.3% 1.667e+10 perf-stat.i.instructions
2785 ± 3% -10.5% 2492 perf-stat.i.instructions-per-iTLB-miss
0.55 +7.4% 0.59 ± 2% perf-stat.i.ipc
3723 -5.1% 3534 perf-stat.i.minor-faults
2160613 -24.3% 1634850 ± 2% perf-stat.i.node-load-misses
4136129 -22.3% 3213515 perf-stat.i.node-loads
23.00 -1.2 21.79 ± 3% perf-stat.i.node-store-miss-rate%
8687009 -31.6% 5940686 ± 2% perf-stat.i.node-store-misses
29184014 -26.3% 21502342 perf-stat.i.node-stores
3723 -5.1% 3534 perf-stat.i.page-faults
7.23 -2.5% 7.05 perf-stat.overall.MPKI
26.68 -2.2 24.47 perf-stat.overall.cache-miss-rate%
1.77 -5.7% 1.67 ± 2% perf-stat.overall.cpi
917.20 +5.6% 968.21 ± 4% perf-stat.overall.cycles-between-cache-misses
2814 ± 2% -9.0% 2560 perf-stat.overall.instructions-per-iTLB-miss
0.57 +6.1% 0.60 ± 2% perf-stat.overall.ipc
34.31 -0.6 33.71 perf-stat.overall.node-load-miss-rate%
22.94 ± 2% -1.3 21.65 perf-stat.overall.node-store-miss-rate%
3.835e+09 -14.2% 3.291e+09 perf-stat.ps.branch-instructions
55111206 -11.9% 48573654 ± 2% perf-stat.ps.branch-misses
37224107 -23.2% 28577554 perf-stat.ps.cache-misses
1.395e+08 -16.3% 1.168e+08 ± 2% perf-stat.ps.cache-references
17274 -9.8% 15588 perf-stat.ps.context-switches
3.414e+10 -19.0% 2.766e+10 ± 4% perf-stat.ps.cpu-cycles
2441 -48.2% 1263 perf-stat.ps.cpu-migrations
1.166e+08 ± 9% -22.2% 90699316 ± 2% perf-stat.ps.dTLB-load-misses
5.983e+09 -14.1% 5.141e+09 perf-stat.ps.dTLB-loads
17837008 ± 3% -17.6% 14701867 ± 3% perf-stat.ps.dTLB-store-misses
4.149e+09 -14.9% 3.53e+09 perf-stat.ps.dTLB-stores
6858348 ± 2% -5.7% 6468604 perf-stat.ps.iTLB-load-misses
2565117 ± 8% -18.7% 2084894 ± 13% perf-stat.ps.iTLB-loads
1.929e+10 -14.1% 1.656e+10 perf-stat.ps.instructions
3714 -5.4% 3513 perf-stat.ps.minor-faults
2142679 -24.2% 1624370 ± 2% perf-stat.ps.node-load-misses
4101979 -22.2% 3192920 perf-stat.ps.node-loads
8614251 -31.5% 5902590 ± 2% perf-stat.ps.node-store-misses
28939527 -26.2% 21364553 perf-stat.ps.node-stores
3714 -5.4% 3513 perf-stat.ps.page-faults
2.506e+12 +4.5% 2.619e+12 perf-stat.total.instructions
145.25 ± 16% +54.9% 225.00 ± 18% interrupts.39:IR-PCI-MSI.524289-edge.eth0-TxRx-0
103.00 ± 49% +1794.7% 1951 ±161% interrupts.44:IR-PCI-MSI.524294-edge.eth0-TxRx-5
1380 ± 11% +21.4% 1676 ± 5% interrupts.CPU0.CAL:Function_call_interrupts
260369 +21.4% 316051 interrupts.CPU0.LOC:Local_timer_interrupts
2090 ± 11% -32.7% 1407 ± 6% interrupts.CPU0.RES:Rescheduling_interrupts
259956 +21.7% 316451 interrupts.CPU1.LOC:Local_timer_interrupts
259976 +21.4% 315572 interrupts.CPU10.LOC:Local_timer_interrupts
1869 ± 22% -56.8% 808.50 ± 12% interrupts.CPU10.RES:Rescheduling_interrupts
1382 ± 9% +15.6% 1598 ± 7% interrupts.CPU11.CAL:Function_call_interrupts
260390 +21.4% 316071 interrupts.CPU11.LOC:Local_timer_interrupts
1514 ± 7% -45.4% 827.00 ± 31% interrupts.CPU11.RES:Rescheduling_interrupts
1218 ± 6% +36.8% 1665 ± 12% interrupts.CPU12.CAL:Function_call_interrupts
260198 +21.7% 316720 interrupts.CPU12.LOC:Local_timer_interrupts
1484 ± 8% -37.2% 932.50 ± 16% interrupts.CPU12.RES:Rescheduling_interrupts
1309 ± 10% +25.9% 1648 ± 11% interrupts.CPU13.CAL:Function_call_interrupts
260491 +21.6% 316752 interrupts.CPU13.LOC:Local_timer_interrupts
1435 ± 7% -35.7% 923.00 ± 23% interrupts.CPU13.RES:Rescheduling_interrupts
260090 +21.6% 316365 interrupts.CPU14.LOC:Local_timer_interrupts
2335 ± 33% -60.6% 920.75 ± 57% interrupts.CPU14.NMI:Non-maskable_interrupts
2335 ± 33% -60.6% 920.75 ± 57% interrupts.CPU14.PMI:Performance_monitoring_interrupts
1534 ± 7% -43.3% 870.50 ± 27% interrupts.CPU14.RES:Rescheduling_interrupts
260275 +21.2% 315386 interrupts.CPU15.LOC:Local_timer_interrupts
1533 ± 7% -40.2% 916.50 ± 28% interrupts.CPU15.RES:Rescheduling_interrupts
259511 +21.3% 314731 interrupts.CPU16.LOC:Local_timer_interrupts
1530 ± 6% -47.6% 802.50 ± 13% interrupts.CPU16.RES:Rescheduling_interrupts
260066 +21.7% 316545 interrupts.CPU17.LOC:Local_timer_interrupts
260096 +21.0% 314637 interrupts.CPU18.LOC:Local_timer_interrupts
1502 ± 8% -48.9% 767.75 ± 18% interrupts.CPU18.RES:Rescheduling_interrupts
260276 +21.4% 315940 interrupts.CPU19.LOC:Local_timer_interrupts
1540 ± 7% -33.4% 1025 ± 24% interrupts.CPU19.RES:Rescheduling_interrupts
103.00 ± 49% +1794.7% 1951 ±161% interrupts.CPU2.44:IR-PCI-MSI.524294-edge.eth0-TxRx-5
260077 +21.7% 316607 interrupts.CPU2.LOC:Local_timer_interrupts
1696 ± 22% -49.0% 865.00 ± 17% interrupts.CPU2.RES:Rescheduling_interrupts
260201 +21.8% 316819 interrupts.CPU20.LOC:Local_timer_interrupts
446.50 ±141% +283.6% 1712 ± 32% interrupts.CPU20.NMI:Non-maskable_interrupts
446.50 ±141% +283.6% 1712 ± 32% interrupts.CPU20.PMI:Performance_monitoring_interrupts
260321 +21.6% 316575 interrupts.CPU21.LOC:Local_timer_interrupts
260161 +21.1% 315079 interrupts.CPU22.LOC:Local_timer_interrupts
1533 ± 8% -21.8% 1198 ± 23% interrupts.CPU22.RES:Rescheduling_interrupts
1356 ± 10% +19.9% 1626 ± 4% interrupts.CPU23.CAL:Function_call_interrupts
259884 +21.6% 316144 interrupts.CPU23.LOC:Local_timer_interrupts
1332 ± 5% -43.3% 755.00 interrupts.CPU23.RES:Rescheduling_interrupts
260225 +21.2% 315429 interrupts.CPU24.LOC:Local_timer_interrupts
1388 ± 4% -47.4% 730.25 ± 19% interrupts.CPU24.RES:Rescheduling_interrupts
259779 +21.8% 316330 interrupts.CPU25.LOC:Local_timer_interrupts
1468 ± 8% -29.7% 1033 ± 31% interrupts.CPU25.RES:Rescheduling_interrupts
1376 ± 11% +22.5% 1687 ± 10% interrupts.CPU26.CAL:Function_call_interrupts
260128 +21.2% 315219 interrupts.CPU26.LOC:Local_timer_interrupts
259346 +22.3% 317098 interrupts.CPU27.LOC:Local_timer_interrupts
1465 ± 7% -42.2% 846.25 ± 31% interrupts.CPU27.RES:Rescheduling_interrupts
260225 +21.0% 314795 interrupts.CPU28.LOC:Local_timer_interrupts
1459 ± 7% -42.3% 842.00 ± 17% interrupts.CPU28.RES:Rescheduling_interrupts
260085 +21.5% 316006 interrupts.CPU29.LOC:Local_timer_interrupts
1379 ± 4% -47.2% 727.75 ± 16% interrupts.CPU29.RES:Rescheduling_interrupts
260259 +21.7% 316731 interrupts.CPU3.LOC:Local_timer_interrupts
1567 ± 9% -39.8% 943.25 ± 30% interrupts.CPU3.RES:Rescheduling_interrupts
145.25 ± 16% +54.9% 225.00 ± 18% interrupts.CPU30.39:IR-PCI-MSI.524289-edge.eth0-TxRx-0
260074 +21.1% 314955 interrupts.CPU30.LOC:Local_timer_interrupts
1390 ± 4% -50.9% 682.50 ± 14% interrupts.CPU30.RES:Rescheduling_interrupts
1358 ± 11% +18.5% 1609 ± 14% interrupts.CPU31.CAL:Function_call_interrupts
260132 +21.7% 316628 interrupts.CPU31.LOC:Local_timer_interrupts
1392 ± 10% -31.4% 955.75 ± 17% interrupts.CPU31.RES:Rescheduling_interrupts
1360 ± 11% +15.9% 1576 ± 11% interrupts.CPU32.CAL:Function_call_interrupts
260317 +20.9% 314826 interrupts.CPU32.LOC:Local_timer_interrupts
1464 ± 11% -45.5% 798.25 ± 18% interrupts.CPU32.RES:Rescheduling_interrupts
260012 +21.4% 315686 interrupts.CPU33.LOC:Local_timer_interrupts
1486 ± 10% -48.4% 767.00 ± 14% interrupts.CPU33.RES:Rescheduling_interrupts
1265 ± 18% +25.0% 1582 ± 12% interrupts.CPU34.CAL:Function_call_interrupts
260359 +21.0% 315007 interrupts.CPU34.LOC:Local_timer_interrupts
1603 ± 27% -52.0% 770.00 ± 27% interrupts.CPU34.RES:Rescheduling_interrupts
1390 ± 8% +16.0% 1612 ± 10% interrupts.CPU35.CAL:Function_call_interrupts
260282 +21.7% 316652 interrupts.CPU35.LOC:Local_timer_interrupts
1434 ± 11% -42.3% 827.50 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
260380 +20.8% 314555 interrupts.CPU36.LOC:Local_timer_interrupts
1343 ± 10% -38.3% 829.25 ± 9% interrupts.CPU36.RES:Rescheduling_interrupts
260230 +21.1% 315199 interrupts.CPU37.LOC:Local_timer_interrupts
1346 ± 12% +21.2% 1631 ± 13% interrupts.CPU38.CAL:Function_call_interrupts
259976 +21.8% 316532 interrupts.CPU38.LOC:Local_timer_interrupts
1481 ± 6% -51.1% 724.25 ± 9% interrupts.CPU38.RES:Rescheduling_interrupts
260061 +20.9% 314458 interrupts.CPU39.LOC:Local_timer_interrupts
1436 ± 8% -44.9% 791.00 ± 34% interrupts.CPU39.RES:Rescheduling_interrupts
1359 ± 13% +22.9% 1671 ± 12% interrupts.CPU4.CAL:Function_call_interrupts
260201 +21.7% 316705 interrupts.CPU4.LOC:Local_timer_interrupts
1668 ± 20% -43.6% 941.00 ± 10% interrupts.CPU4.RES:Rescheduling_interrupts
1385 ± 9% +12.1% 1552 ± 8% interrupts.CPU5.CAL:Function_call_interrupts
260181 +21.7% 316657 interrupts.CPU5.LOC:Local_timer_interrupts
1164 ± 55% -74.2% 300.25 ±173% interrupts.CPU5.NMI:Non-maskable_interrupts
1164 ± 55% -74.2% 300.25 ±173% interrupts.CPU5.PMI:Performance_monitoring_interrupts
1464 ± 9% -42.8% 838.00 ± 15% interrupts.CPU5.RES:Rescheduling_interrupts
260255 +21.7% 316804 interrupts.CPU6.LOC:Local_timer_interrupts
1497 ± 6% -50.5% 741.00 ± 19% interrupts.CPU6.RES:Rescheduling_interrupts
1372 ± 9% +19.6% 1642 ± 4% interrupts.CPU7.CAL:Function_call_interrupts
259919 +21.5% 315765 interrupts.CPU7.LOC:Local_timer_interrupts
1481 ± 10% -26.4% 1089 ± 29% interrupts.CPU7.RES:Rescheduling_interrupts
1339 ± 13% +17.6% 1575 ± 11% interrupts.CPU8.CAL:Function_call_interrupts
260401 +21.6% 316700 interrupts.CPU8.LOC:Local_timer_interrupts
1861 ± 33% -53.3% 869.50 ± 19% interrupts.CPU8.RES:Rescheduling_interrupts
1426 ± 6% +18.8% 1695 ± 9% interrupts.CPU9.CAL:Function_call_interrupts
259391 +22.2% 316922 interrupts.CPU9.LOC:Local_timer_interrupts
1551 -18.5% 1264 ± 3% interrupts.CPU9.NMI:Non-maskable_interrupts
1551 -18.5% 1264 ± 3% interrupts.CPU9.PMI:Performance_monitoring_interrupts
1646 ± 14% -47.5% 864.75 ± 8% interrupts.CPU9.RES:Rescheduling_interrupts
10404571 +21.5% 12638118 interrupts.LOC:Local_timer_interrupts
61340 ± 2% -39.7% 37011 ± 2% interrupts.RES:Rescheduling_interrupts
265.00 ± 6% +76.1% 466.75 ± 12% interrupts.TLB:TLB_shootdowns
21780 ± 3% +34.1% 29206 ± 4% softirqs.CPU0.RCU
19352 ± 2% +14.7% 22198 softirqs.CPU0.SCHED
50306 ± 4% +17.8% 59258 ± 8% softirqs.CPU0.TIMER
23163 ± 8% +21.3% 28089 ± 6% softirqs.CPU1.RCU
16179 +20.8% 19547 softirqs.CPU1.SCHED
48867 ± 5% +16.5% 56934 ± 3% softirqs.CPU1.TIMER
21339 ± 2% +31.5% 28064 ± 4% softirqs.CPU10.RCU
15492 ± 2% +21.8% 18872 softirqs.CPU10.SCHED
48447 ± 2% +16.0% 56208 ± 2% softirqs.CPU10.TIMER
21278 ± 5% +31.9% 28067 ± 5% softirqs.CPU11.RCU
15581 +20.5% 18770 softirqs.CPU11.SCHED
48567 +17.7% 57140 ± 4% softirqs.CPU11.TIMER
22261 ± 3% +36.5% 30385 ± 9% softirqs.CPU12.RCU
15644 +20.3% 18824 ± 2% softirqs.CPU12.SCHED
49536 ± 2% +32.3% 65518 ± 21% softirqs.CPU12.TIMER
21724 ± 2% +27.9% 27794 ± 4% softirqs.CPU13.RCU
15508 +21.1% 18776 ± 2% softirqs.CPU13.SCHED
48945 +19.2% 58366 ± 4% softirqs.CPU13.TIMER
21297 ± 3% +35.0% 28761 ± 3% softirqs.CPU14.RCU
15583 +23.0% 19162 ± 5% softirqs.CPU14.SCHED
48772 ± 2% +17.2% 57154 ± 4% softirqs.CPU14.TIMER
21231 ± 3% +35.4% 28746 ± 8% softirqs.CPU15.RCU
15571 +21.9% 18985 ± 2% softirqs.CPU15.SCHED
48486 +17.6% 57020 ± 3% softirqs.CPU15.TIMER
21571 ± 7% +31.7% 28415 softirqs.CPU16.RCU
15314 +23.3% 18884 ± 2% softirqs.CPU16.SCHED
49988 ± 4% +15.1% 57561 ± 3% softirqs.CPU16.TIMER
21326 ± 4% +31.0% 27927 ± 4% softirqs.CPU17.RCU
15434 +22.8% 18960 ± 3% softirqs.CPU17.SCHED
48588 ± 2% +17.1% 56905 ± 2% softirqs.CPU17.TIMER
21212 ± 2% +31.8% 27968 ± 4% softirqs.CPU18.RCU
15571 +20.9% 18827 ± 3% softirqs.CPU18.SCHED
49151 ± 2% +33.3% 65535 ± 15% softirqs.CPU18.TIMER
21490 ± 4% +27.9% 27490 ± 5% softirqs.CPU19.RCU
15550 +20.6% 18747 ± 2% softirqs.CPU19.SCHED
50252 ± 8% +13.4% 57009 ± 2% softirqs.CPU19.TIMER
23244 ± 11% +32.3% 30747 ± 6% softirqs.CPU2.RCU
15982 +30.8% 20905 softirqs.CPU2.SCHED
50703 ± 5% +22.8% 62276 ± 4% softirqs.CPU2.TIMER
21304 ± 2% +45.9% 31089 ± 7% softirqs.CPU20.RCU
15409 ± 2% +23.2% 18988 ± 3% softirqs.CPU20.SCHED
22317 ± 4% +28.2% 28608 ± 5% softirqs.CPU21.RCU
50492 ± 3% +18.6% 59868 ± 8% softirqs.CPU21.TIMER
21099 ± 4% +32.6% 27983 ± 3% softirqs.CPU22.RCU
15607 +21.3% 18935 ± 2% softirqs.CPU22.SCHED
20953 ± 2% +36.0% 28492 ± 6% softirqs.CPU23.RCU
15463 +22.4% 18924 ± 6% softirqs.CPU23.SCHED
49790 ± 2% +17.2% 58355 ± 4% softirqs.CPU23.TIMER
21424 ± 2% +33.5% 28607 ± 4% softirqs.CPU24.RCU
15603 +19.8% 18692 softirqs.CPU24.SCHED
49433 ± 4% +18.7% 58682 ± 4% softirqs.CPU24.TIMER
21527 ± 2% +34.8% 29025 ± 4% softirqs.CPU25.RCU
15413 +22.6% 18891 ± 2% softirqs.CPU25.SCHED
48601 ± 2% +17.9% 57281 ± 3% softirqs.CPU25.TIMER
21284 ± 3% +31.1% 27898 ± 2% softirqs.CPU26.RCU
15979 ± 4% +18.2% 18888 softirqs.CPU26.SCHED
50363 ± 3% +12.6% 56721 ± 4% softirqs.CPU26.TIMER
21307 ± 2% +34.4% 28646 ± 5% softirqs.CPU27.RCU
15342 +23.2% 18905 ± 3% softirqs.CPU27.SCHED
48693 ± 3% +19.5% 58194 ± 2% softirqs.CPU27.TIMER
21278 +30.1% 27694 ± 3% softirqs.CPU28.RCU
15623 +20.2% 18773 softirqs.CPU28.SCHED
49220 ± 2% +14.4% 56308 ± 3% softirqs.CPU28.TIMER
21511 ± 4% +30.1% 27979 ± 5% softirqs.CPU29.RCU
15816 ± 2% +18.4% 18725 softirqs.CPU29.SCHED
48698 +17.7% 57323 ± 3% softirqs.CPU29.TIMER
21335 ± 2% +36.2% 29048 ± 6% softirqs.CPU3.RCU
15763 ± 2% +20.0% 18917 softirqs.CPU3.SCHED
21413 ± 2% +32.4% 28350 ± 3% softirqs.CPU30.RCU
15489 ± 2% +21.6% 18831 ± 2% softirqs.CPU30.SCHED
48627 ± 2% +16.2% 56482 ± 2% softirqs.CPU30.TIMER
21314 ± 4% +32.2% 28181 ± 5% softirqs.CPU31.RCU
15542 +21.2% 18844 ± 3% softirqs.CPU31.SCHED
49740 ± 6% +14.4% 56918 ± 3% softirqs.CPU31.TIMER
19067 ± 11% +19.3% 22739 ± 2% softirqs.CPU32.RCU
15786 ± 3% +19.3% 18839 ± 2% softirqs.CPU32.SCHED
51277 ± 4% +12.1% 57503 ± 4% softirqs.CPU32.TIMER
17283 ± 3% +28.4% 22185 ± 4% softirqs.CPU33.RCU
15463 ± 2% +20.5% 18635 softirqs.CPU33.SCHED
48637 ± 2% +32.4% 64412 ± 18% softirqs.CPU33.TIMER
17794 ± 3% +29.8% 23103 ± 4% softirqs.CPU34.RCU
15838 +19.2% 18886 softirqs.CPU34.SCHED
49140 ± 2% +16.2% 57109 ± 4% softirqs.CPU34.TIMER
17452 ± 2% +28.5% 22432 ± 3% softirqs.CPU35.RCU
15466 +19.9% 18543 ± 2% softirqs.CPU35.SCHED
48547 +16.8% 56684 ± 3% softirqs.CPU35.TIMER
17422 ± 4% +36.2% 23721 ± 5% softirqs.CPU36.RCU
15474 +22.9% 19016 ± 3% softirqs.CPU36.SCHED
18024 ± 5% +25.0% 22537 ± 3% softirqs.CPU37.RCU
15591 ± 2% +21.2% 18892 ± 2% softirqs.CPU37.SCHED
49241 ± 2% +15.7% 56982 ± 3% softirqs.CPU37.TIMER
18270 ± 6% +23.0% 22468 ± 2% softirqs.CPU38.RCU
15698 +18.2% 18555 ± 2% softirqs.CPU38.SCHED
17354 ± 2% +29.0% 22393 ± 3% softirqs.CPU39.RCU
15582 +18.3% 18430 ± 4% softirqs.CPU39.SCHED
48135 ± 2% +35.5% 65242 ± 18% softirqs.CPU39.TIMER
21472 +30.6% 28049 softirqs.CPU4.RCU
15684 +20.5% 18896 ± 4% softirqs.CPU4.SCHED
49567 ± 2% +14.5% 56732 ± 4% softirqs.CPU4.TIMER
21886 +33.1% 29122 ± 8% softirqs.CPU5.RCU
15912 ± 4% +19.3% 18983 ± 4% softirqs.CPU5.SCHED
49334 ± 4% +17.3% 57889 ± 4% softirqs.CPU5.TIMER
22759 ± 12% +28.1% 29150 ± 3% softirqs.CPU6.RCU
15561 +23.0% 19144 softirqs.CPU6.SCHED
50896 +15.8% 58936 ± 3% softirqs.CPU6.TIMER
21754 ± 5% +26.8% 27588 ± 4% softirqs.CPU7.RCU
15439 +22.1% 18851 ± 3% softirqs.CPU7.SCHED
51023 ± 3% +12.0% 57169 ± 4% softirqs.CPU7.TIMER
21768 ± 3% +28.9% 28061 ± 3% softirqs.CPU8.RCU
15680 +24.5% 19522 ± 5% softirqs.CPU8.SCHED
48977 ± 2% +17.5% 57545 ± 3% softirqs.CPU8.TIMER
20951 +32.1% 27683 ± 4% softirqs.CPU9.RCU
15426 +20.8% 18638 ± 3% softirqs.CPU9.SCHED
48306 ± 3% +18.3% 57152 ± 3% softirqs.CPU9.TIMER
834261 ± 2% +31.2% 1094511 ± 3% softirqs.RCU
629022 +20.9% 760517 ± 2% softirqs.SCHED
2001302 +16.7% 2335824 ± 3% softirqs.TIMER
63.72 -11.9 51.82 perf-profile.calltrace.cycles-pp.write
54.54 -10.4 44.19 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
54.05 -10.3 43.78 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
45.97 -9.1 36.91 ± 2% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
44.57 -8.8 35.77 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
38.87 -7.6 31.22 ± 2% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.73 -7.1 29.68 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
29.28 -5.2 24.09 ± 3% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
29.00 -5.1 23.86 ± 3% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write
21.42 -4.1 17.29 ± 2% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
9.82 -2.0 7.84 ± 2% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
10.19 -1.6 8.57 ± 4% perf-profile.calltrace.cycles-pp.lseek64
3.70 -1.5 2.15 ± 4% perf-profile.calltrace.cycles-pp.close
3.64 -1.5 2.13 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
3.64 -1.5 2.13 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
3.62 -1.5 2.11 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
3.60 -1.5 2.10 ± 3% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.60 -1.5 2.11 ± 3% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
3.48 -1.5 1.99 ± 4% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
3.46 -1.5 1.98 ± 3% perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
7.25 ± 2% -1.5 5.77 perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
6.93 -1.4 5.53 perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
4.82 ± 2% -1.2 3.59 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
4.07 ± 2% -0.9 3.16 perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
5.73 -0.8 4.89 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.lseek64
3.46 ± 2% -0.8 2.65 perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write
5.57 ± 2% -0.8 4.76 ± 7% perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
4.54 ± 3% -0.8 3.74 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
3.36 ± 3% -0.8 2.57 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply
4.53 -0.8 3.74 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
5.46 -0.8 4.69 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.lseek64
2.63 -0.7 1.91 ± 3% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
2.65 -0.7 1.93 ± 3% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
2.48 ± 4% -0.7 1.79 ± 8% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write
3.72 ± 2% -0.7 3.06 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
2.42 ± 4% -0.6 1.83 ± 8% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.57 ± 3% -0.6 3.98 ± 9% perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
2.68 ± 2% -0.6 2.10 perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.69 ± 7% -0.5 1.18 ± 10% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write
1.52 ± 17% -0.5 1.03 ± 17% perf-profile.calltrace.cycles-pp.xfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
1.93 -0.4 1.49 ± 4% perf-profile.calltrace.cycles-pp.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.72 ± 6% -0.4 1.30 ± 6% perf-profile.calltrace.cycles-pp.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.92 ± 2% -0.4 1.51 ± 4% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.80 ± 5% -0.4 1.40 ± 7% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
1.19 ± 2% -0.4 0.82 ± 12% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write
0.70 ± 24% -0.4 0.33 ±104% perf-profile.calltrace.cycles-pp.xfs_bmbt_to_iomap.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write
1.13 ± 3% -0.4 0.77 ± 13% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
1.45 -0.4 1.10 ± 4% perf-profile.calltrace.cycles-pp.memset_erms.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply
1.84 -0.3 1.49 ± 5% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.lseek64
2.22 ± 2% -0.3 1.88 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.lseek64
1.12 ± 3% -0.3 0.79 ± 3% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill
1.14 ± 3% -0.3 0.81 ± 3% perf-profile.calltrace.cycles-pp.__pagevec_release.truncate_inode_pages_range.evict.__dentry_kill.dput
0.71 ± 2% -0.3 0.40 ± 57% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin
1.11 ± 4% -0.3 0.85 ± 7% perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
1.34 -0.3 1.08 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.06 ± 2% -0.2 0.84 ± 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
0.66 ± 3% -0.2 0.43 ± 57% perf-profile.calltrace.cycles-pp.disk_rw
1.08 ± 4% -0.2 0.88 ± 5% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
0.76 -0.2 0.57 ± 5% perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
1.87 ± 3% -0.2 1.68 ± 4% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.14 ± 6% -0.2 0.95 ± 3% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write
0.94 ± 6% -0.2 0.76 ± 4% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
0.78 ± 9% -0.2 0.61 ± 5% perf-profile.calltrace.cycles-pp.xfs_iext_lookup_extent.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write
0.72 ± 7% -0.2 0.56 ± 3% perf-profile.calltrace.cycles-pp.xfs_break_layouts.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write
0.69 ± 6% -0.2 0.53 ± 4% perf-profile.calltrace.cycles-pp.__set_page_dirty.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply
0.94 ± 5% -0.2 0.79 ± 7% perf-profile.calltrace.cycles-pp.down_write.xfs_ilock.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply
0.78 ± 6% -0.2 0.62 ± 4% perf-profile.calltrace.cycles-pp.down_write.xfs_ilock.xfs_file_buffered_aio_write.new_sync_write.vfs_write
0.86 ± 2% +0.4 1.23 ± 3% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.path_openat.do_filp_open.do_sys_open
0.89 ± 2% +0.4 1.25 ± 2% perf-profile.calltrace.cycles-pp.xfs_generic_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
1.59 ± 2% +0.5 2.13 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
1.58 ± 2% +0.5 2.13 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.57 +0.5 2.12 ± 5% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.54 ± 2% +0.6 2.09 ± 5% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.61 +0.6 2.17 ± 5% perf-profile.calltrace.cycles-pp.creat
1.54 ± 2% +0.6 2.10 ± 5% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
0.00 +0.6 0.57 ± 10% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc
0.00 +0.7 0.69 ± 4% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
0.00 +0.7 0.74 ± 3% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create
0.00 +0.8 0.79 ± 2% perf-profile.calltrace.cycles-pp.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat.do_filp_open
0.00 +0.8 0.79 ± 2% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat
0.00 +1.1 1.08 ± 7% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.00 +1.1 1.15 ± 7% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
16.68 ± 2% +11.8 28.47 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
16.49 ± 4% +12.5 29.02 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
16.51 ± 4% +12.6 29.09 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
16.96 ± 4% +12.9 29.88 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
16.96 ± 4% +12.9 29.89 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
16.96 ± 4% +12.9 29.89 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
17.71 +13.0 30.74 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64
67.26 -12.2 55.10 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
66.62 -12.1 54.57 ± 2% perf-profile.children.cycles-pp.do_syscall_64
64.32 -12.0 52.29 perf-profile.children.cycles-pp.write
45.98 -9.1 36.93 ± 2% perf-profile.children.cycles-pp.ksys_write
44.63 -8.8 35.81 ± 2% perf-profile.children.cycles-pp.vfs_write
38.90 -7.7 31.25 ± 2% perf-profile.children.cycles-pp.new_sync_write
36.78 -7.1 29.72 ± 3% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
29.30 -5.2 24.10 ± 3% perf-profile.children.cycles-pp.iomap_file_buffered_write
29.06 -5.2 23.91 ± 4% perf-profile.children.cycles-pp.iomap_apply
21.54 -4.2 17.38 ± 2% perf-profile.children.cycles-pp.iomap_write_actor
9.86 -2.0 7.88 ± 2% perf-profile.children.cycles-pp.iomap_write_begin
10.49 -1.7 8.81 ± 4% perf-profile.children.cycles-pp.lseek64
3.70 -1.5 2.15 ± 4% perf-profile.children.cycles-pp.close
3.63 -1.5 2.12 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
3.60 -1.5 2.10 ± 3% perf-profile.children.cycles-pp.__fput
3.60 -1.5 2.11 ± 3% perf-profile.children.cycles-pp.task_work_run
3.48 -1.5 2.00 ± 4% perf-profile.children.cycles-pp.dput
7.29 ± 2% -1.5 5.80 perf-profile.children.cycles-pp.grab_cache_page_write_begin
3.46 -1.5 1.98 ± 3% perf-profile.children.cycles-pp.__dentry_kill
6.98 -1.4 5.57 perf-profile.children.cycles-pp.pagecache_get_page
4.87 ± 2% -1.2 3.63 ± 5% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
6.78 -1.1 5.67 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
6.23 -1.1 5.12 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
4.10 ± 2% -0.9 3.17 perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
3.49 ± 2% -0.8 2.67 perf-profile.children.cycles-pp.copyin
5.63 ± 2% -0.8 4.82 ± 7% perf-profile.children.cycles-pp.xfs_file_iomap_begin
4.58 ± 3% -0.8 3.77 ± 3% perf-profile.children.cycles-pp.iomap_write_end
3.38 ± 2% -0.8 2.59 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
2.65 -0.7 1.93 ± 3% perf-profile.children.cycles-pp.evict
2.63 -0.7 1.92 ± 4% perf-profile.children.cycles-pp.truncate_inode_pages_range
2.50 ± 4% -0.7 1.81 ± 8% perf-profile.children.cycles-pp.file_update_time
4.65 ± 3% -0.6 4.04 ± 10% perf-profile.children.cycles-pp.xfs_file_iomap_begin_delay
2.44 ± 4% -0.6 1.84 ± 8% perf-profile.children.cycles-pp.security_file_permission
2.71 ± 2% -0.6 2.12 perf-profile.children.cycles-pp.find_get_entry
1.69 ± 7% -0.5 1.18 ± 10% perf-profile.children.cycles-pp.xfs_vn_update_time
1.53 ± 17% -0.5 1.04 ± 18% perf-profile.children.cycles-pp.xfs_file_write_iter
1.94 -0.4 1.50 ± 4% perf-profile.children.cycles-pp.iomap_read_page_sync
1.76 ± 6% -0.4 1.33 ± 5% perf-profile.children.cycles-pp.iomap_set_page_dirty
1.20 ± 2% -0.4 0.77 ± 8% perf-profile.children.cycles-pp._raw_spin_lock
0.89 -0.4 0.46 ± 8% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.83 ± 5% -0.4 1.42 ± 7% perf-profile.children.cycles-pp.selinux_file_permission
1.92 ± 2% -0.4 1.52 ± 4% perf-profile.children.cycles-pp.add_to_page_cache_lru
2.17 ± 5% -0.4 1.81 ± 3% perf-profile.children.cycles-pp.xfs_ilock
1.84 -0.4 1.48 ± 14% perf-profile.children.cycles-pp.xfs_log_commit_cil
1.19 ± 3% -0.3 0.85 ± 3% perf-profile.children.cycles-pp.release_pages
1.47 -0.3 1.13 ± 4% perf-profile.children.cycles-pp.memset_erms
1.81 ± 5% -0.3 1.47 ± 6% perf-profile.children.cycles-pp.down_write
1.14 ± 3% -0.3 0.81 ± 3% perf-profile.children.cycles-pp.__pagevec_release
1.34 ± 4% -0.3 1.05 ± 5% perf-profile.children.cycles-pp.xas_load
1.36 ± 2% -0.3 1.11 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.25 ± 4% -0.3 2.00 ± 4% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.08 ± 2% -0.2 0.86 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
0.77 -0.2 0.57 ± 4% perf-profile.children.cycles-pp.__lru_cache_add
1.10 ± 4% -0.2 0.90 ± 5% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.73 -0.2 0.53 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.35 ± 3% -0.2 1.18 ± 6% perf-profile.children.cycles-pp.___might_sleep
0.80 ± 8% -0.2 0.62 ± 5% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.81 ± 4% -0.2 0.63 ± 6% perf-profile.children.cycles-pp.__might_sleep
0.75 ± 7% -0.2 0.58 ± 3% perf-profile.children.cycles-pp.xfs_break_layouts
0.62 ± 5% -0.2 0.45 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
0.70 ± 6% -0.2 0.53 ± 4% perf-profile.children.cycles-pp.__set_page_dirty
0.56 ± 3% -0.2 0.39 ± 7% perf-profile.children.cycles-pp.truncate_cleanup_page
0.59 ± 7% -0.2 0.43 ± 5% perf-profile.children.cycles-pp.generic_write_checks
1.16 ± 5% -0.2 1.00 ± 5% perf-profile.children.cycles-pp.xfs_iunlock
0.46 ± 4% -0.1 0.32 ± 9% perf-profile.children.cycles-pp.__cancel_dirty_page
0.53 ± 10% -0.1 0.39 ± 11% perf-profile.children.cycles-pp.xas_start
0.60 ± 5% -0.1 0.46 ± 6% perf-profile.children.cycles-pp.delete_from_page_cache_batch
0.50 ± 5% -0.1 0.36 ± 8% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.72 ± 10% -0.1 0.59 ± 6% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.44 ± 5% -0.1 0.30 ± 7% perf-profile.children.cycles-pp.free_pcppages_bulk
0.78 ± 5% -0.1 0.65 ± 3% perf-profile.children.cycles-pp.up_write
0.74 ± 3% -0.1 0.61 ± 3% perf-profile.children.cycles-pp._cond_resched
0.39 ± 8% -0.1 0.27 ± 15% perf-profile.children.cycles-pp.__mod_lruvec_state
0.47 ± 2% -0.1 0.35 ± 6% perf-profile.children.cycles-pp.current_time
0.71 ± 3% -0.1 0.58 ± 7% perf-profile.children.cycles-pp.disk_rw
0.57 ± 8% -0.1 0.46 ± 6% perf-profile.children.cycles-pp.xas_store
0.42 ± 13% -0.1 0.30 ± 9% perf-profile.children.cycles-pp.xfs_trans_reserve
0.61 ± 15% -0.1 0.49 ± 6% perf-profile.children.cycles-pp.__inode_security_revalidate
0.44 ± 4% -0.1 0.33 ± 3% perf-profile.children.cycles-pp.account_page_dirtied
0.43 ± 5% -0.1 0.33 ± 13% perf-profile.children.cycles-pp.iov_iter_advance
0.42 ± 3% -0.1 0.33 ± 7% perf-profile.children.cycles-pp.__list_del_entry_valid
0.34 ± 6% -0.1 0.24 ± 9% perf-profile.children.cycles-pp.account_page_cleaned
0.43 ± 9% -0.1 0.35 ± 5% perf-profile.children.cycles-pp.xfs_break_leased_layouts
0.22 ± 11% -0.1 0.14 ± 11% perf-profile.children.cycles-pp.xlog_grant_add_space
0.35 ± 5% -0.1 0.27 perf-profile.children.cycles-pp.file_remove_privs
0.38 ± 9% -0.1 0.30 ± 5% perf-profile.children.cycles-pp.file_modified
0.32 ± 15% -0.1 0.24 ± 3% perf-profile.children.cycles-pp.generic_write_check_limits
0.35 ± 5% -0.1 0.28 ± 9% perf-profile.children.cycles-pp.unlock_page
0.44 ± 6% -0.1 0.36 ± 10% perf-profile.children.cycles-pp.__x64_sys_write
0.17 ± 4% -0.1 0.11 ± 20% perf-profile.children.cycles-pp.__mod_memcg_state
0.24 ± 8% -0.1 0.19 ± 14% perf-profile.children.cycles-pp.__x64_sys_lseek
0.11 ± 22% -0.1 0.05 ± 70% perf-profile.children.cycles-pp.xfs_find_daxdev_for_inode
0.36 ± 4% -0.1 0.31 perf-profile.children.cycles-pp.rcu_all_qs
0.24 ± 6% -0.1 0.19 ± 15% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.12 ± 4% -0.1 0.07 ± 17% perf-profile.children.cycles-pp.xfs_isilocked
0.23 ± 10% -0.0 0.18 ± 12% perf-profile.children.cycles-pp.node_dirty_ok
0.16 ± 12% -0.0 0.11 ± 7% perf-profile.children.cycles-pp.timespec64_trunc
0.18 ± 9% -0.0 0.14 ± 6% perf-profile.children.cycles-pp.find_get_entries
0.18 ± 9% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.pagevec_lookup_entries
0.22 ± 5% -0.0 0.17 ± 16% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.16 ± 9% -0.0 0.12 ± 15% perf-profile.children.cycles-pp.__xa_set_mark
0.30 ± 2% -0.0 0.26 ± 9% perf-profile.children.cycles-pp.page_mapping
0.14 ± 15% -0.0 0.10 ± 17% perf-profile.children.cycles-pp.wake_up_q
0.19 ± 12% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.18 ± 6% -0.0 0.14 ± 10% perf-profile.children.cycles-pp.__mod_node_page_state
0.12 ± 15% -0.0 0.08 ± 19% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.22 ± 3% -0.0 0.19 ± 10% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.11 ± 10% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.08 ± 6% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.xfs_mod_fdblocks
0.08 ± 6% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.xfs_bmap_add_extent_hole_delay
0.07 ± 17% -0.0 0.06 ± 15% perf-profile.children.cycles-pp.__mark_inode_dirty
0.09 ± 9% +0.0 0.12 ± 14% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.03 ±100% +0.1 0.08 ± 20% perf-profile.children.cycles-pp.run_timer_softirq
0.09 ± 28% +0.1 0.14 ± 9% perf-profile.children.cycles-pp.rcu_core
0.06 ± 13% +0.1 0.12 ± 7% perf-profile.children.cycles-pp.xfs_iunlink
0.00 +0.1 0.06 ± 20% perf-profile.children.cycles-pp.__remove_hrtimer
0.04 ± 59% +0.1 0.12 ± 16% perf-profile.children.cycles-pp.update_blocked_averages
0.05 ± 61% +0.1 0.12 ± 22% perf-profile.children.cycles-pp.rebalance_domains
0.05 ± 58% +0.1 0.12 ± 18% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.1 0.08 ± 27% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.08 ± 27% perf-profile.children.cycles-pp.tick_nohz_next_event
0.00 +0.1 0.09 ± 20% perf-profile.children.cycles-pp.irq_enter
0.00 +0.1 0.11 ± 21% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.46 ± 6% +0.1 0.58 ± 6% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.11 ± 17% +0.1 0.23 ± 17% perf-profile.children.cycles-pp.ktime_get
0.12 ± 21% +0.1 0.25 ± 23% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.15 ± 72% perf-profile.children.cycles-pp.xfs_verify_agino
0.04 ± 60% +0.2 0.26 ± 71% perf-profile.children.cycles-pp.xfs_inobt_get_maxrecs
0.11 ± 4% +0.2 0.34 ± 17% perf-profile.children.cycles-pp.menu_select
0.15 ± 42% +0.3 0.41 ± 21% perf-profile.children.cycles-pp.osq_lock
0.26 ± 33% +0.3 0.52 ± 11% perf-profile.children.cycles-pp.__softirqentry_text_start
0.64 ± 33% +0.3 0.92 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.30 ± 29% +0.3 0.59 ± 9% perf-profile.children.cycles-pp.irq_exit
0.76 ± 10% +0.3 1.08 ± 12% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.63 ± 12% +0.4 0.99 ± 11% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.86 ± 2% +0.4 1.23 ± 3% perf-profile.children.cycles-pp.xfs_create
0.89 ± 2% +0.4 1.25 ± 2% perf-profile.children.cycles-pp.xfs_generic_create
0.34 ± 6% +0.4 0.74 ± 3% perf-profile.children.cycles-pp.xfs_dialloc
0.28 ± 7% +0.4 0.69 ± 4% perf-profile.children.cycles-pp.xfs_dialloc_ag
0.38 ± 6% +0.4 0.79 ± 2% perf-profile.children.cycles-pp.xfs_dir_ialloc
0.38 ± 6% +0.4 0.79 ± 2% perf-profile.children.cycles-pp.xfs_ialloc
0.12 ± 19% +0.4 0.55 ± 68% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.12 ± 7% +0.4 0.55 ± 70% perf-profile.children.cycles-pp.xfs_btree_increment
0.10 ± 27% +0.4 0.55 ± 70% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
0.88 ± 27% +0.5 1.33 ± 8% perf-profile.children.cycles-pp.hrtimer_interrupt
1.67 +0.5 2.20 ± 5% perf-profile.children.cycles-pp.do_sys_open
1.61 ± 2% +0.6 2.16 ± 5% perf-profile.children.cycles-pp.do_filp_open
1.61 +0.6 2.17 ± 5% perf-profile.children.cycles-pp.creat
1.61 ± 2% +0.6 2.16 ± 5% perf-profile.children.cycles-pp.path_openat
0.15 ± 23% +0.6 0.76 ± 69% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.20 ± 6% +0.7 0.89 ± 70% perf-profile.children.cycles-pp.xfs_inobt_get_rec
1.25 ± 27% +0.8 2.07 ± 8% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.36 ± 26% +0.9 2.23 ± 8% perf-profile.children.cycles-pp.apic_timer_interrupt
0.41 ± 4% +1.1 1.54 ± 68% perf-profile.children.cycles-pp.xfs_check_agi_freecount
0.06 ± 64% +1.6 1.62 ±107% perf-profile.children.cycles-pp.worker_thread
0.06 ± 64% +1.6 1.62 ±107% perf-profile.children.cycles-pp.process_one_work
0.07 ± 59% +1.6 1.66 ±105% perf-profile.children.cycles-pp.kthread
0.07 ± 59% +1.6 1.66 ±105% perf-profile.children.cycles-pp.ret_from_fork
16.80 ± 2% +11.7 28.49 ± 2% perf-profile.children.cycles-pp.intel_idle
17.24 +12.7 29.90 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
17.24 +12.7 29.91 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
16.96 ± 4% +12.9 29.89 perf-profile.children.cycles-pp.start_secondary
17.71 +13.0 30.74 ± 2% perf-profile.children.cycles-pp.do_idle
17.71 +13.0 30.74 ± 2% perf-profile.children.cycles-pp.secondary_startup_64
17.71 +13.0 30.74 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
11.21 -1.7 9.51 ± 3% perf-profile.self.cycles-pp.do_syscall_64
6.22 -1.1 5.11 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
6.11 -1.0 5.14 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64
3.33 ± 3% -0.8 2.53 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.47 ± 18% -0.5 1.00 ± 18% perf-profile.self.cycles-pp.xfs_file_write_iter
0.89 -0.4 0.46 ± 8% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.10 ± 8% -0.4 0.75 ± 6% perf-profile.self.cycles-pp.xfs_file_buffered_aio_write
1.46 -0.3 1.11 ± 3% perf-profile.self.cycles-pp.memset_erms
1.53 ± 3% -0.3 1.21 ± 2% perf-profile.self.cycles-pp.find_get_entry
1.18 ± 5% -0.3 0.87 ± 8% perf-profile.self.cycles-pp.selinux_file_permission
2.19 ± 4% -0.2 1.96 ± 3% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.83 ± 5% -0.2 0.65 ± 5% perf-profile.self.cycles-pp.iomap_apply
1.31 ± 3% -0.2 1.13 ± 7% perf-profile.self.cycles-pp.___might_sleep
0.60 ± 6% -0.2 0.42 ± 10% perf-profile.self.cycles-pp.security_file_permission
0.77 ± 9% -0.2 0.60 ± 5% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.81 ± 5% -0.2 0.65 ± 6% perf-profile.self.cycles-pp.xas_load
0.84 ± 6% -0.2 0.69 ± 9% perf-profile.self.cycles-pp.xfs_file_iomap_begin
0.72 ± 3% -0.2 0.57 ± 4% perf-profile.self.cycles-pp.write
0.71 ± 4% -0.1 0.56 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.49 ± 11% -0.1 0.36 ± 11% perf-profile.self.cycles-pp.xas_start
0.76 ± 3% -0.1 0.63 ± 7% perf-profile.self.cycles-pp.down_write
0.66 ± 8% -0.1 0.54 ± 7% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.72 ± 6% -0.1 0.60 ± 4% perf-profile.self.cycles-pp.up_write
0.65 -0.1 0.53 ± 4% perf-profile.self.cycles-pp.iomap_write_end
0.60 ± 8% -0.1 0.48 ± 8% perf-profile.self.cycles-pp.iov_iter_copy_from_user_atomic
0.58 ± 4% -0.1 0.48 ± 10% perf-profile.self.cycles-pp.vfs_write
0.65 ± 3% -0.1 0.55 ± 5% perf-profile.self.cycles-pp.iov_iter_fault_in_readable
0.52 ± 8% -0.1 0.42 ± 5% perf-profile.self.cycles-pp.pagecache_get_page
0.41 ± 6% -0.1 0.31 ± 11% perf-profile.self.cycles-pp.iov_iter_advance
0.28 ± 6% -0.1 0.18 ± 11% perf-profile.self.cycles-pp.generic_write_checks
0.42 ± 3% -0.1 0.32 ± 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.31 ± 9% -0.1 0.22 ± 5% perf-profile.self.cycles-pp.free_pcppages_bulk
0.50 ± 4% -0.1 0.41 ± 7% perf-profile.self.cycles-pp.disk_rw
0.29 ± 5% -0.1 0.20 ± 9% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.22 ± 11% -0.1 0.14 ± 11% perf-profile.self.cycles-pp.xlog_grant_add_space
0.34 ± 5% -0.1 0.27 ± 8% perf-profile.self.cycles-pp.unlock_page
0.41 ± 8% -0.1 0.33 ± 5% perf-profile.self.cycles-pp.xfs_break_leased_layouts
0.42 ± 13% -0.1 0.34 ± 17% perf-profile.self.cycles-pp.new_sync_write
0.33 ± 7% -0.1 0.25 ± 2% perf-profile.self.cycles-pp.file_remove_privs
0.36 -0.1 0.28 ± 7% perf-profile.self.cycles-pp.lseek64
0.30 ± 17% -0.1 0.23 ± 4% perf-profile.self.cycles-pp.generic_write_check_limits
0.23 ± 9% -0.1 0.16 ± 5% perf-profile.self.cycles-pp.xfs_break_layouts
0.22 ± 8% -0.1 0.15 ± 7% perf-profile.self.cycles-pp.__fdget_pos
0.33 ± 10% -0.1 0.26 ± 5% perf-profile.self.cycles-pp.file_update_time
0.17 ± 13% -0.1 0.10 ± 12% perf-profile.self.cycles-pp.__mod_lruvec_state
0.40 ± 3% -0.1 0.34 ± 5% perf-profile.self.cycles-pp.get_page_from_freelist
0.25 ± 10% -0.1 0.19 ± 2% perf-profile.self.cycles-pp.xas_store
0.43 ± 2% -0.1 0.37 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.22 ± 3% -0.1 0.17 ± 12% perf-profile.self.cycles-pp.current_time
0.21 ± 8% -0.1 0.16 ± 11% perf-profile.self.cycles-pp.__x64_sys_lseek
0.21 ± 10% -0.1 0.16 ± 8% perf-profile.self.cycles-pp.release_pages
0.28 ± 9% -0.1 0.22 perf-profile.self.cycles-pp.rcu_all_qs
0.37 ± 7% -0.1 0.32 ± 10% perf-profile.self.cycles-pp.__x64_sys_write
0.35 ± 6% -0.1 0.30 ± 3% perf-profile.self.cycles-pp._cond_resched
0.16 ± 5% -0.0 0.11 ± 20% perf-profile.self.cycles-pp.__mod_memcg_state
0.14 ± 5% -0.0 0.09 ± 8% perf-profile.self.cycles-pp.account_page_cleaned
0.07 ± 11% -0.0 0.03 ±100% perf-profile.self.cycles-pp.alloc_pages_current
0.22 ± 3% -0.0 0.17 ± 7% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.21 ± 13% -0.0 0.16 ± 10% perf-profile.self.cycles-pp.ksys_lseek
0.37 ± 5% -0.0 0.33 ± 6% perf-profile.self.cycles-pp.ksys_write
0.30 ± 4% -0.0 0.26 ± 10% perf-profile.self.cycles-pp.page_mapping
0.18 ± 12% -0.0 0.14 ± 8% perf-profile.self.cycles-pp.xfs_log_commit_cil
0.14 ± 17% -0.0 0.10 ± 15% perf-profile.self.cycles-pp.account_page_dirtied
0.15 ± 10% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.find_get_entries
0.17 ± 6% -0.0 0.13 ± 10% perf-profile.self.cycles-pp.__mod_node_page_state
0.11 ± 7% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.xfs_isilocked
0.07 ± 14% -0.0 0.03 ±100% perf-profile.self.cycles-pp.__mark_inode_dirty
0.09 ± 12% -0.0 0.05 ± 60% perf-profile.self.cycles-pp.__cancel_dirty_page
0.09 ± 11% -0.0 0.06 ± 58% perf-profile.self.cycles-pp.__x86_indirect_thunk_r15
0.17 ± 13% -0.0 0.14 ± 8% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.13 ± 11% -0.0 0.09 ± 20% perf-profile.self.cycles-pp.workingset_update_node
0.14 ± 7% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.timespec64_trunc
0.11 ± 9% -0.0 0.08 ± 19% perf-profile.self.cycles-pp.truncate_inode_pages_range
0.10 ± 15% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.iomap_read_page_sync
0.07 ± 12% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.10 ± 8% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.09 ± 8% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.delete_from_page_cache_batch
0.09 ± 9% -0.0 0.07 ± 16% perf-profile.self.cycles-pp.xfs_dir3_data_entsize
0.11 ± 4% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.copyin
0.00 +0.1 0.11 ± 14% perf-profile.self.cycles-pp.cpuidle_enter_state
0.06 ± 58% +0.1 0.16 ± 22% perf-profile.self.cycles-pp.ktime_get
0.07 ± 5% +0.1 0.19 ± 15% perf-profile.self.cycles-pp.menu_select
0.45 ± 5% +0.1 0.57 ± 6% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.04 ± 60% +0.2 0.22 ± 72% perf-profile.self.cycles-pp.xfs_inobt_get_maxrecs
0.04 ± 58% +0.2 0.29 ± 68% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
0.14 ± 36% +0.3 0.41 ± 22% perf-profile.self.cycles-pp.osq_lock
16.78 ± 2% +11.7 28.46 ± 2% perf-profile.self.cycles-pp.intel_idle
aim7.jobs-per-min
160000 +-+----------------------------------------------------------------+
155000 +-+ + |
| : + |
150000 +-+ :: :: |
145000 +-+ : : : : |
| : : .+. : : .+.|
140000 +-+.+.++.+.+.+.+.+ +.+.+ +.+.+.++.+.+. .+.+.++.+. .+ +.++.+ |
135000 +-+ + + |
130000 +-+ |
| |
125000 +-+ |
120000 +-+ |
| O |
115000 O-O O OO O O O O O OO O O O O O O |
110000 +-+----------------------------------------------------------------+
aim7.time.elapsed_time
165 +-+-------------------------------------------------------------------+
160 O-O O O O O O O O OO O |
| O O O O O O |
155 +-+ O |
150 +-+ |
| |
145 +-+ |
140 +-+ |
135 +-+ |
|.+. .+.+. .+.+.+. .+. .+.+.+.+.+.++. |
130 +-+ + + + +.+.+.+.+.+.+.+.+ + + +.+.+.+.+.|
125 +-+ : : : : |
| :: : : |
120 +-+ :: + |
115 +-+-------------------------------------------------------------------+
aim7.time.elapsed_time.max
165 +-+-------------------------------------------------------------------+
160 O-O O O O O O O O OO O |
| O O O O O O |
155 +-+ O |
150 +-+ |
| |
145 +-+ |
140 +-+ |
135 +-+ |
|.+. .+.+. .+.+.+. .+. .+.+.+.+.+.++. |
130 +-+ + + + +.+.+.+.+.+.+.+.+ + + +.+.+.+.+.|
125 +-+ : : : : |
| :: : : |
120 +-+ :: + |
115 +-+-------------------------------------------------------------------+
aim7.time.voluntary_context_switches
1.1e+06 +-+--------------------------------------------------------------+
| |
1.05e+06 +-+.++.+.+.+.++.+.+.+.++.+.+.+.+.++.+.+.+.++.+.+.+.++.+.+.+.++.+.|
| |
1e+06 +-+ |
| |
950000 +-+ |
| |
900000 +-+ |
| |
850000 O-+ O O O |
| O OO O O O O O O OO O O O |
800000 +-+ O |
| |
750000 +-+--------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
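As an aside for anyone post-processing these reports: the comparison rows that follow (e.g. "204959  -35.7%  131770 ± 3%  aim7.jobs-per-min") share a fixed shape of base value, optional stddev, percent change, head value, optional stddev, and metric name. Below is a minimal, hypothetical parsing sketch — the regex and helper names are mine, not part of lkp's tooling, and it only handles percent-change rows, not the absolute-delta rows used for metrics that are themselves percentages:

```python
import re

# Hypothetical helper: parse one lkp comparison row of the form
#   "<base> [± s%]  <change>%  <head> [± s%]  <metric>"
ROW = re.compile(
    r"^\s*([\d.e+]+)\s*(?:±\s*(\d+)%)?"   # base value, optional %stddev
    r"\s+([+-][\d.]+)%"                   # percent change
    r"\s+([\d.e+]+)\s*(?:±\s*(\d+)%)?"    # head value, optional %stddev
    r"\s+(\S+)\s*$"                       # metric name
)

def parse_row(line):
    """Return a dict for a percent-change row, or None if it doesn't match."""
    m = ROW.match(line)
    if not m:
        return None
    base, base_sd, change, head, head_sd, metric = m.groups()
    return {
        "metric": metric,
        "base": float(base),
        "head": float(head),
        "change_pct": float(change),
        "base_stddev_pct": int(base_sd) if base_sd else 0,
        "head_stddev_pct": int(head_sd) if head_sd else 0,
    }
```

For example, feeding it the aim7.jobs-per-min row from the table below yields the -35.7% regression as a float, which makes it easy to sort a whole report by magnitude of change.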
***************************************************************************************************
lkp-ivb-ep01: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.6/3000/RAID0/debian-x86_64-2019-05-14.cgz/lkp-ivb-ep01/disk_cp/aim7
commit:
d515962b00 ("xfs: pass around xfs_inode_ag_walk iget/irele helper functions")
1aabb011bf ("xfs: deferred inode inactivation")
d515962b00d64454 1aabb011bfba5a8942d3e0d0c84
---------------- ---------------------------
%stddev %change %stddev
\ | \
204959 -35.7% 131770 ± 3% aim7.jobs-per-min
88.12 +55.7% 137.19 ± 3% aim7.time.elapsed_time
88.12 +55.7% 137.19 ± 3% aim7.time.elapsed_time.max
72330 ± 3% +12.2% 81167 ± 4% aim7.time.involuntary_context_switches
151188 +6.4% 160846 ± 2% aim7.time.minor_page_faults
2437 +33.6% 3257 ± 3% aim7.time.system_time
305.87 +45.8% 446.05 ± 5% aim7.time.user_time
737855 -13.0% 642174 ± 3% aim7.time.voluntary_context_switches
22.63 +9.0 31.68 mpstat.cpu.all.idle%
68.59 -8.5 60.06 mpstat.cpu.all.sys%
61676265 ± 55% +177.3% 1.711e+08 ± 18% cpuidle.C1.time
612540 ± 47% +231.9% 2032764 ± 14% cpuidle.C1.usage
5.694e+08 ± 13% +157.5% 1.466e+09 ± 6% cpuidle.C6.time
872994 ± 2% +116.8% 1892610 ± 6% cpuidle.C6.usage
23.75 ± 3% +34.7% 32.00 ± 2% vmstat.cpu.id
66.75 -11.6% 59.00 vmstat.cpu.sy
1534 ± 26% +87.2% 2873 ± 8% vmstat.io.bo
17277 ± 2% -14.2% 14823 vmstat.system.cs
24.17 +34.6% 32.54 iostat.cpu.idle
67.23 -11.8% 59.31 iostat.cpu.system
8.60 -5.2% 8.15 iostat.cpu.user
46.02 ± 5% +114.0% 98.46 ± 3% iostat.md0.w/s
1601 ± 25% +86.8% 2991 ± 7% iostat.md0.wkB/s
81296 ± 2% +47.8% 120146 ± 2% meminfo.AnonHugePages
16049 ± 7% -14.2% 13766 meminfo.Dirty
16741 ± 6% -13.7% 14450 ± 2% meminfo.Inactive(file)
91250 +22.9% 112105 meminfo.KReclaimable
109853 ± 5% -7.0% 102116 ± 8% meminfo.PageTables
91250 +22.9% 112105 meminfo.SReclaimable
36157 ± 2% -12.9% 31489 ± 4% meminfo.Shmem
255575 +14.2% 291816 meminfo.Slab
27539 -34.1% 18158 ± 4% meminfo.max_used_kB
2265 -45.2% 1241 ± 7% turbostat.Avg_MHz
77.95 -8.8 69.19 turbostat.Busy%
2905 -38.3% 1793 ± 7% turbostat.Bzy_MHz
608401 ± 47% +233.6% 2029542 ± 15% turbostat.C1
3.84 ± 73% -3.2 0.68 ± 7% turbostat.C3%
868860 ± 2% +117.3% 1888276 ± 6% turbostat.C6
15.87 ± 13% +10.5 26.42 ± 2% turbostat.C6%
14.37 ± 8% +46.5% 21.05 turbostat.CPU%c1
6.78 ± 20% +41.9% 9.62 ± 4% turbostat.CPU%c6
121.47 -44.3% 67.72 ± 8% turbostat.CorWatt
7419316 +52.0% 11279514 ± 4% turbostat.IRQ
3.43 ± 19% -31.4% 2.35 ± 19% turbostat.Pkg%pc2
71.50 -4.0% 68.67 turbostat.PkgTmp
148.87 -36.5% 94.53 ± 6% turbostat.PkgWatt
36.86 -5.5% 34.83 turbostat.RAMWatt
7146 +55.8% 11133 ± 3% turbostat.SMI
623.50 ±105% +220.2% 1996 ± 37% numa-meminfo.node0.Active(file)
8186 ± 3% -17.1% 6789 numa-meminfo.node0.Dirty
15459 ± 10% -26.0% 11435 ± 5% numa-meminfo.node0.Inactive
7151 ± 19% -41.4% 4190 ± 21% numa-meminfo.node0.Inactive(anon)
8307 ± 3% -12.8% 7245 ± 4% numa-meminfo.node0.Inactive(file)
1057156 ± 15% +30.0% 1374548 ± 4% numa-meminfo.node0.MemUsed
27710 ±145% +258.8% 99419 ± 8% numa-meminfo.node0.PageTables
65500 ± 45% +93.9% 126976 numa-meminfo.node0.SUnreclaim
113557 ± 26% +57.4% 178704 ± 6% numa-meminfo.node0.Slab
1849 ± 35% -72.2% 513.33 ±141% numa-meminfo.node1.Active(file)
8087 ± 6% -14.6% 6906 numa-meminfo.node1.Dirty
8654 ± 9% -17.9% 7105 ± 7% numa-meminfo.node1.Inactive(file)
43179 ± 16% +39.7% 60339 ± 13% numa-meminfo.node1.KReclaimable
38009 ± 50% -89.6% 3961 ± 9% numa-meminfo.node1.KernelStack
1284978 ± 12% -21.8% 1004429 ± 4% numa-meminfo.node1.MemUsed
81947 ± 54% -96.7% 2695 numa-meminfo.node1.PageTables
43179 ± 16% +39.7% 60339 ± 13% numa-meminfo.node1.SReclaimable
156.25 ±105% +218.3% 497.33 ± 37% numa-vmstat.node0.nr_active_file
16205972 ± 2% +14.6% 18577156 numa-vmstat.node0.nr_dirtied
2062 ± 3% -17.9% 1694 ± 3% numa-vmstat.node0.nr_dirty
1788 ± 19% -41.2% 1051 ± 21% numa-vmstat.node0.nr_inactive_anon
2090 ± 7% -13.2% 1814 ± 4% numa-vmstat.node0.nr_inactive_file
6975 ±145% +255.1% 24766 ± 8% numa-vmstat.node0.nr_page_table_pages
16397 ± 45% +93.3% 31699 numa-vmstat.node0.nr_slab_unreclaimable
156.25 ±105% +218.3% 497.33 ± 37% numa-vmstat.node0.nr_zone_active_file
1788 ± 19% -41.2% 1051 ± 21% numa-vmstat.node0.nr_zone_inactive_anon
2093 ± 7% -13.2% 1816 ± 4% numa-vmstat.node0.nr_zone_inactive_file
16642980 ± 3% +14.9% 19130375 numa-vmstat.node0.numa_hit
16605490 ± 2% +14.9% 19079593 numa-vmstat.node0.numa_local
461.25 ± 35% -72.2% 128.33 ±141% numa-vmstat.node1.nr_active_file
2046 -15.2% 1735 numa-vmstat.node1.nr_dirty
2184 ± 5% -18.2% 1786 ± 6% numa-vmstat.node1.nr_inactive_file
37823 ± 50% -89.5% 3960 ± 9% numa-vmstat.node1.nr_kernel_stack
20375 ± 54% -96.7% 672.67 numa-vmstat.node1.nr_page_table_pages
10793 ± 16% +39.7% 15083 ± 13% numa-vmstat.node1.nr_slab_reclaimable
461.25 ± 35% -72.2% 128.33 ±141% numa-vmstat.node1.nr_zone_active_file
2183 ± 5% -18.3% 1785 ± 6% numa-vmstat.node1.nr_zone_inactive_file
91478 -2.1% 89547 proc-vmstat.nr_active_anon
617.75 +1.5% 627.00 proc-vmstat.nr_active_file
4045 ± 2% -15.8% 3405 proc-vmstat.nr_dirty
5069 -3.6% 4888 proc-vmstat.nr_inactive_anon
4217 ± 2% -15.4% 3567 proc-vmstat.nr_inactive_file
53612 -1.9% 52592 proc-vmstat.nr_kernel_stack
6369 -3.5% 6149 proc-vmstat.nr_mapped
9055 ± 2% -13.1% 7871 ± 4% proc-vmstat.nr_shmem
22808 +22.8% 28015 proc-vmstat.nr_slab_reclaimable
41043 +9.4% 44903 proc-vmstat.nr_slab_unreclaimable
91478 -2.1% 89547 proc-vmstat.nr_zone_active_anon
617.75 +1.5% 627.00 proc-vmstat.nr_zone_active_file
5069 -3.6% 4888 proc-vmstat.nr_zone_inactive_anon
4217 ± 2% -15.4% 3567 proc-vmstat.nr_zone_inactive_file
3453 ± 2% -8.4% 3163 proc-vmstat.nr_zone_write_pending
1737 ±142% +716.8% 14190 ± 3% proc-vmstat.numa_hint_faults
2731 ±113% +389.5% 13370 ± 3% proc-vmstat.numa_pages_migrated
4460 ± 7% -39.5% 2697 ± 14% proc-vmstat.pgactivate
402077 +31.0% 526544 ± 2% proc-vmstat.pgfault
2731 ±113% +389.5% 13370 ± 3% proc-vmstat.pgmigrate_success
139264 ± 25% +188.9% 402269 ± 12% proc-vmstat.pgpgout
2856 +360.5% 13152 ± 3% slabinfo.dmaengine-unmap-16.active_objs
77.75 +589.0% 535.67 ± 3% slabinfo.dmaengine-unmap-16.active_slabs
3292 +584.1% 22520 ± 3% slabinfo.dmaengine-unmap-16.num_objs
77.75 +589.0% 535.67 ± 3% slabinfo.dmaengine-unmap-16.num_slabs
2332 ± 7% +39.7% 3257 slabinfo.kmalloc-128.active_objs
2332 ± 7% +39.7% 3257 slabinfo.kmalloc-128.num_objs
19136 +126.1% 43271 ± 3% slabinfo.kmalloc-16.active_objs
74.75 +139.5% 179.00 ± 3% slabinfo.kmalloc-16.active_slabs
19136 +140.5% 46024 ± 3% slabinfo.kmalloc-16.num_objs
74.75 +139.5% 179.00 ± 3% slabinfo.kmalloc-16.num_slabs
741.00 +24.5% 922.67 ± 2% slabinfo.kmalloc-4k.active_objs
751.25 +24.4% 934.67 ± 2% slabinfo.kmalloc-4k.num_objs
6130 +175.2% 16869 ± 2% slabinfo.kmalloc-512.active_objs
202.00 +207.8% 621.67 ± 2% slabinfo.kmalloc-512.active_slabs
6484 +207.0% 19906 ± 2% slabinfo.kmalloc-512.num_objs
202.00 +207.8% 621.67 ± 2% slabinfo.kmalloc-512.num_slabs
461.50 ± 2% +43.6% 662.67 slabinfo.kmalloc-8k.active_objs
476.25 ± 2% +54.5% 735.67 slabinfo.kmalloc-8k.num_objs
1160 +72.2% 1998 ± 2% slabinfo.xfs_buf_item.active_objs
1160 +72.2% 1998 ± 2% slabinfo.xfs_buf_item.num_objs
1186 +64.4% 1950 slabinfo.xfs_efd_item.active_objs
1186 +64.4% 1950 slabinfo.xfs_efd_item.num_objs
2168 +487.1% 12729 ± 3% slabinfo.xfs_inode.active_objs
87.50 +676.8% 679.67 ± 4% slabinfo.xfs_inode.active_slabs
2819 +671.9% 21761 ± 4% slabinfo.xfs_inode.num_objs
87.50 +676.8% 679.67 ± 4% slabinfo.xfs_inode.num_slabs
13.63 ± 4% -8.1% 12.53 perf-stat.i.MPKI
4.74e+09 -32.0% 3.224e+09 ± 4% perf-stat.i.branch-instructions
59710263 -29.8% 41920989 ± 3% perf-stat.i.branch-misses
24239162 ± 2% -29.6% 17056459 ± 6% perf-stat.i.cache-misses
2.448e+08 -32.9% 1.644e+08 ± 4% perf-stat.i.cache-references
17563 ± 2% -14.8% 14962 perf-stat.i.context-switches
3.90 ± 2% -22.0% 3.04 ± 3% perf-stat.i.cpi
9.031e+10 -45.5% 4.926e+10 ± 8% perf-stat.i.cpu-cycles
3006 -59.5% 1218 ± 2% perf-stat.i.cpu-migrations
3432 -23.3% 2633 perf-stat.i.cycles-between-cache-misses
1.64 ± 12% -0.4 1.25 ± 2% perf-stat.i.dTLB-load-miss-rate%
1.328e+08 ± 13% -50.8% 65422884 ± 5% perf-stat.i.dTLB-load-misses
7.404e+09 -31.8% 5.052e+09 ± 4% perf-stat.i.dTLB-loads
16794433 ± 5% -37.4% 10521517 ± 3% perf-stat.i.dTLB-store-misses
4.914e+09 -32.9% 3.3e+09 ± 4% perf-stat.i.dTLB-stores
87.61 -3.8 83.84 perf-stat.i.iTLB-load-miss-rate%
7267620 -33.6% 4828701 ± 2% perf-stat.i.iTLB-load-misses
2.391e+10 -31.8% 1.63e+10 ± 4% perf-stat.i.instructions
0.28 ± 2% +28.8% 0.36 ± 2% perf-stat.i.ipc
4279 -14.3% 3669 perf-stat.i.minor-faults
45.88 +0.6 46.48 perf-stat.i.node-load-miss-rate%
13857418 ± 2% -33.2% 9253829 ± 7% perf-stat.i.node-load-misses
15830808 ± 2% -33.8% 10483641 ± 7% perf-stat.i.node-loads
5925929 -28.2% 4252744 ± 5% perf-stat.i.node-store-misses
7451625 -27.1% 5432276 ± 5% perf-stat.i.node-stores
4279 -14.3% 3669 perf-stat.i.page-faults
1.26 +0.0 1.30 perf-stat.overall.branch-miss-rate%
9.90 +0.5 10.37 ± 2% perf-stat.overall.cache-miss-rate%
3.78 -20.1% 3.02 ± 3% perf-stat.overall.cpi
3726 -22.6% 2884 perf-stat.overall.cycles-between-cache-misses
1.76 ± 12% -0.5 1.28 ± 2% perf-stat.overall.dTLB-load-miss-rate%
0.34 ± 5% -0.0 0.32 perf-stat.overall.dTLB-store-miss-rate%
87.44 -3.8 83.60 perf-stat.overall.iTLB-load-miss-rate%
3290 +2.6% 3374 perf-stat.overall.instructions-per-iTLB-miss
0.26 +25.4% 0.33 ± 3% perf-stat.overall.ipc
4.685e+09 -31.7% 3.199e+09 ± 4% perf-stat.ps.branch-instructions
59020206 -29.5% 41606551 ± 3% perf-stat.ps.branch-misses
23958401 ± 2% -29.3% 16929391 ± 6% perf-stat.ps.cache-misses
2.42e+08 -32.6% 1.631e+08 ± 4% perf-stat.ps.cache-references
17359 ± 2% -14.5% 14848 perf-stat.ps.context-switches
8.926e+10 -45.2% 4.89e+10 ± 8% perf-stat.ps.cpu-cycles
2971 -59.3% 1208 ± 2% perf-stat.ps.cpu-migrations
1.313e+08 ± 13% -50.5% 64933807 ± 5% perf-stat.ps.dTLB-load-misses
7.318e+09 -31.5% 5.014e+09 ± 4% perf-stat.ps.dTLB-loads
16598516 ± 5% -37.1% 10442914 ± 3% perf-stat.ps.dTLB-store-misses
4.857e+09 -32.6% 3.275e+09 ± 4% perf-stat.ps.dTLB-stores
7182997 -33.3% 4792440 ± 2% perf-stat.ps.iTLB-load-misses
2.363e+10 -31.5% 1.618e+10 ± 4% perf-stat.ps.instructions
4233 -14.0% 3642 perf-stat.ps.minor-faults
13695799 ± 2% -32.9% 9184667 ± 7% perf-stat.ps.node-load-misses
15646175 ± 2% -33.5% 10405160 ± 7% perf-stat.ps.node-loads
5856803 -27.9% 4221222 ± 5% perf-stat.ps.node-store-misses
7364898 -26.8% 5392052 ± 5% perf-stat.ps.node-stores
4233 -14.0% 3642 perf-stat.ps.page-faults
2.091e+12 +6.2% 2.221e+12 perf-stat.total.instructions
21578 +80.2% 38887 sched_debug.cfs_rq:/.exec_clock.avg
23353 +75.3% 40945 ± 2% sched_debug.cfs_rq:/.exec_clock.max
21322 +79.5% 38269 sched_debug.cfs_rq:/.exec_clock.min
304.37 ± 8% +41.5% 430.55 ± 25% sched_debug.cfs_rq:/.exec_clock.stddev
632.50 ± 12% +29.7% 820.11 ± 12% sched_debug.cfs_rq:/.load_avg.max
780200 +60.8% 1254863 sched_debug.cfs_rq:/.min_vruntime.avg
872093 +58.1% 1378898 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
762295 +57.2% 1198056 sched_debug.cfs_rq:/.min_vruntime.min
17394 ± 5% +57.7% 27431 ± 14% sched_debug.cfs_rq:/.min_vruntime.stddev
0.40 ± 5% +15.0% 0.46 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
0.13 ± 38% +111.6% 0.28 ± 18% sched_debug.cfs_rq:/.nr_spread_over.avg
0.31 ± 29% +63.8% 0.50 ± 13% sched_debug.cfs_rq:/.nr_spread_over.stddev
-1508 -590.1% 7393 ± 78% sched_debug.cfs_rq:/.spread0.avg
-19404 +154.1% -49302 sched_debug.cfs_rq:/.spread0.min
17422 ± 5% +57.4% 27431 ± 14% sched_debug.cfs_rq:/.spread0.stddev
777.17 -15.3% 658.55 ± 11% sched_debug.cfs_rq:/.util_avg.avg
1497 ± 8% -16.1% 1256 ± 15% sched_debug.cfs_rq:/.util_avg.max
123.49 ± 11% -31.9% 84.04 ± 20% sched_debug.cfs_rq:/.util_est_enqueued.avg
473039 ± 7% +37.9% 652433 ± 2% sched_debug.cpu.avg_idle.avg
236058 ± 12% -18.6% 192130 ± 4% sched_debug.cpu.avg_idle.stddev
71615 +43.2% 102524 sched_debug.cpu.clock.avg
71626 +43.2% 102548 sched_debug.cpu.clock.max
71600 +43.2% 102498 sched_debug.cpu.clock.min
7.20 ± 22% +102.5% 14.59 ± 12% sched_debug.cpu.clock.stddev
71615 +43.2% 102524 sched_debug.cpu.clock_task.avg
71626 +43.2% 102548 sched_debug.cpu.clock_task.max
71600 +43.2% 102498 sched_debug.cpu.clock_task.min
7.20 ± 22% +102.5% 14.59 ± 12% sched_debug.cpu.clock_task.stddev
1820 ± 11% -18.6% 1482 ± 14% sched_debug.cpu.curr->pid.avg
3837 ± 8% +37.7% 5283 sched_debug.cpu.curr->pid.max
0.00 ± 3% +32.7% 0.00 ± 15% sched_debug.cpu.next_balance.stddev
0.42 ± 8% +17.3% 0.49 ± 4% sched_debug.cpu.nr_running.stddev
14410 ± 2% +63.7% 23596 sched_debug.cpu.nr_switches.avg
20732 +148.7% 51565 ± 9% sched_debug.cpu.nr_switches.max
12235 +29.2% 15807 ± 2% sched_debug.cpu.nr_switches.min
1916 ± 5% +373.8% 9082 ± 8% sched_debug.cpu.nr_switches.stddev
37.10 +17.6% 43.62 ± 4% sched_debug.cpu.nr_uninterruptible.avg
-42.25 -35.6% -27.22 sched_debug.cpu.nr_uninterruptible.min
12279 ± 2% +74.6% 21437 sched_debug.cpu.sched_count.avg
14279 ± 2% +231.0% 47258 ± 10% sched_debug.cpu.sched_count.max
11249 ± 2% +31.0% 14739 sched_debug.cpu.sched_count.min
621.89 ± 12% +1268.2% 8508 ± 10% sched_debug.cpu.sched_count.stddev
5282 ± 3% +80.1% 9514 sched_debug.cpu.sched_goidle.avg
6253 ± 2% +253.4% 22097 ± 10% sched_debug.cpu.sched_goidle.max
4834 ± 2% +32.4% 6403 sched_debug.cpu.sched_goidle.min
301.81 ± 13% +1275.2% 4150 ± 10% sched_debug.cpu.sched_goidle.stddev
6298 ± 2% +69.2% 10658 sched_debug.cpu.ttwu_count.avg
7777 +200.5% 23368 ± 10% sched_debug.cpu.ttwu_count.max
5825 ± 3% +23.6% 7199 sched_debug.cpu.ttwu_count.min
379.64 ± 7% +1010.1% 4214 ± 11% sched_debug.cpu.ttwu_count.stddev
526.01 +69.8% 893.13 ± 2% sched_debug.cpu.ttwu_local.avg
773.75 ± 8% +125.6% 1745 ± 9% sched_debug.cpu.ttwu_local.max
424.12 +40.9% 597.78 ± 3% sched_debug.cpu.ttwu_local.min
68.72 ± 10% +219.5% 219.59 ± 10% sched_debug.cpu.ttwu_local.stddev
71601 +43.1% 102497 sched_debug.cpu_clk
68011 +45.4% 98899 sched_debug.ktime
71961 +42.9% 102856 sched_debug.sched_clk
13240 ± 4% +36.2% 18033 ± 3% softirqs.CPU0.RCU
12004 ± 3% +24.9% 14997 ± 3% softirqs.CPU0.SCHED
35478 ± 3% +57.3% 55794 softirqs.CPU0.TIMER
9817 ± 7% +18.1% 11592 ± 7% softirqs.CPU1.SCHED
38872 ± 3% +42.3% 55318 softirqs.CPU1.TIMER
13094 ± 5% +33.8% 17518 ± 5% softirqs.CPU10.RCU
8546 ± 8% +35.4% 11568 ± 6% softirqs.CPU10.SCHED
33896 ± 3% +59.6% 54082 ± 2% softirqs.CPU10.TIMER
12959 ± 4% +33.4% 17288 ± 4% softirqs.CPU11.RCU
8569 +32.8% 11380 ± 4% softirqs.CPU11.SCHED
34467 ± 2% +56.0% 53759 softirqs.CPU11.TIMER
14180 ± 12% +52.6% 21643 ± 4% softirqs.CPU12.RCU
8429 ± 3% +36.6% 11515 ± 6% softirqs.CPU12.SCHED
36338 ± 7% +59.4% 57933 ± 6% softirqs.CPU12.TIMER
13213 ± 5% +33.5% 17641 ± 5% softirqs.CPU13.RCU
8630 +33.2% 11498 ± 4% softirqs.CPU13.SCHED
38830 ± 17% +40.0% 54352 softirqs.CPU13.TIMER
12844 +39.2% 17877 ± 3% softirqs.CPU14.RCU
8556 +37.5% 11765 ± 4% softirqs.CPU14.SCHED
39110 ± 16% +38.8% 54296 ± 3% softirqs.CPU14.TIMER
13167 ± 4% +35.0% 17771 ± 3% softirqs.CPU15.RCU
8742 +29.3% 11303 ± 4% softirqs.CPU15.SCHED
35002 ± 2% +53.5% 53737 softirqs.CPU15.TIMER
13022 +35.7% 17668 ± 5% softirqs.CPU16.RCU
8456 ± 2% +37.4% 11616 ± 3% softirqs.CPU16.SCHED
35793 ± 3% +52.5% 54602 ± 2% softirqs.CPU16.TIMER
13010 ± 2% +36.1% 17702 ± 10% softirqs.CPU17.RCU
8813 ± 4% +26.7% 11168 ± 3% softirqs.CPU17.SCHED
35279 +53.4% 54103 softirqs.CPU17.TIMER
8652 ± 2% +29.9% 11243 softirqs.CPU18.SCHED
35157 ± 4% +55.9% 54825 ± 5% softirqs.CPU18.TIMER
13004 +44.6% 18807 ± 12% softirqs.CPU19.RCU
8658 +28.9% 11159 ± 2% softirqs.CPU19.SCHED
34629 +55.5% 53864 softirqs.CPU19.TIMER
8765 ± 7% +35.4% 11871 ± 3% softirqs.CPU2.SCHED
35122 ± 6% +53.9% 54041 ± 2% softirqs.CPU2.TIMER
12585 +54.3% 19421 ± 14% softirqs.CPU20.RCU
8327 ± 2% +40.1% 11666 softirqs.CPU20.SCHED
33760 ± 2% +59.5% 53863 ± 2% softirqs.CPU20.TIMER
12746 ± 2% +45.1% 18498 ± 10% softirqs.CPU21.RCU
8471 ± 4% +41.3% 11968 ± 4% softirqs.CPU21.SCHED
35143 ± 2% +73.9% 61115 ± 7% softirqs.CPU21.TIMER
13098 +34.9% 17671 ± 4% softirqs.CPU22.RCU
8566 +30.3% 11165 ± 6% softirqs.CPU22.SCHED
34415 ± 2% +60.1% 55110 softirqs.CPU22.TIMER
13351 ± 5% +31.5% 17556 ± 5% softirqs.CPU23.RCU
9174 ± 8% +26.7% 11622 ± 6% softirqs.CPU23.SCHED
37855 ± 13% +54.2% 58371 ± 4% softirqs.CPU23.TIMER
13334 ± 4% +35.3% 18035 ± 9% softirqs.CPU24.RCU
9011 ± 9% +28.1% 11546 ± 11% softirqs.CPU24.SCHED
35153 ± 5% +56.0% 54856 softirqs.CPU24.TIMER
13056 +35.1% 17633 ± 5% softirqs.CPU25.RCU
8668 +31.9% 11435 ± 6% softirqs.CPU25.SCHED
34832 ± 2% +56.5% 54499 softirqs.CPU25.TIMER
13612 ± 6% +31.1% 17846 ± 4% softirqs.CPU26.RCU
8564 +33.0% 11392 ± 5% softirqs.CPU26.SCHED
34752 ± 3% +67.9% 58356 ± 9% softirqs.CPU26.TIMER
13063 +38.7% 18123 ± 5% softirqs.CPU27.RCU
8692 +24.6% 10831 ± 4% softirqs.CPU27.SCHED
34964 ± 2% +57.0% 54889 softirqs.CPU27.TIMER
12612 +37.8% 17376 ± 4% softirqs.CPU28.RCU
8625 ± 2% +34.5% 11605 ± 7% softirqs.CPU28.SCHED
34376 ± 3% +59.0% 54672 softirqs.CPU28.TIMER
12943 ± 4% +33.9% 17333 ± 6% softirqs.CPU29.RCU
8678 ± 2% +28.6% 11158 ± 5% softirqs.CPU29.SCHED
36977 ± 14% +47.1% 54377 softirqs.CPU29.TIMER
12972 +42.8% 18526 ± 14% softirqs.CPU3.RCU
9119 ± 5% +25.9% 11481 ± 6% softirqs.CPU3.SCHED
35879 +75.6% 63012 ± 17% softirqs.CPU3.TIMER
12581 +39.8% 17591 ± 5% softirqs.CPU30.RCU
8255 +31.4% 10844 ± 7% softirqs.CPU30.SCHED
33406 ± 3% +58.7% 53003 ± 3% softirqs.CPU30.TIMER
13201 ± 5% +32.6% 17503 ± 5% softirqs.CPU31.RCU
8582 +30.9% 11232 ± 5% softirqs.CPU31.SCHED
34045 +57.9% 53746 softirqs.CPU31.TIMER
10821 ± 8% +37.8% 14913 softirqs.CPU32.RCU
8508 ± 3% +37.7% 11712 ± 4% softirqs.CPU32.SCHED
34901 ± 3% +59.6% 55704 ± 3% softirqs.CPU32.TIMER
10359 +34.3% 13912 ± 5% softirqs.CPU33.RCU
8624 +29.3% 11149 ± 3% softirqs.CPU33.SCHED
35940 ± 5% +51.3% 54362 ± 2% softirqs.CPU33.TIMER
10512 ± 2% +36.5% 14346 ± 6% softirqs.CPU34.RCU
8604 +28.7% 11077 ± 3% softirqs.CPU34.SCHED
34636 ± 2% +53.5% 53154 ± 3% softirqs.CPU34.TIMER
10444 ± 2% +35.0% 14100 ± 7% softirqs.CPU35.RCU
8547 +27.7% 10915 ± 5% softirqs.CPU35.SCHED
34476 +55.8% 53719 softirqs.CPU35.TIMER
10196 ± 2% +44.4% 14724 ± 4% softirqs.CPU36.RCU
8427 ± 2% +34.3% 11321 ± 3% softirqs.CPU36.SCHED
34764 ± 2% +56.2% 54288 ± 3% softirqs.CPU36.TIMER
10273 ± 2% +33.6% 13728 ± 5% softirqs.CPU37.RCU
8527 ± 2% +27.6% 10885 ± 3% softirqs.CPU37.SCHED
34727 +54.0% 53489 softirqs.CPU37.TIMER
10103 ± 2% +44.2% 14571 ± 2% softirqs.CPU38.RCU
8356 ± 2% +37.1% 11454 ± 4% softirqs.CPU38.SCHED
35884 ± 9% +68.3% 60394 ± 12% softirqs.CPU38.TIMER
10341 ± 3% +32.3% 13681 ± 5% softirqs.CPU39.RCU
8556 +27.0% 10868 ± 6% softirqs.CPU39.SCHED
34267 +56.6% 53654 softirqs.CPU39.TIMER
13012 +35.5% 17627 ± 5% softirqs.CPU4.RCU
8735 ± 2% +27.5% 11142 ± 5% softirqs.CPU4.SCHED
38127 ± 18% +44.2% 54976 softirqs.CPU4.TIMER
12911 +35.8% 17529 ± 4% softirqs.CPU5.RCU
8699 +38.2% 12025 ± 13% softirqs.CPU5.SCHED
34819 ± 2% +61.8% 56335 ± 2% softirqs.CPU5.TIMER
12740 ± 2% +35.8% 17297 ± 5% softirqs.CPU6.RCU
8761 ± 2% +27.0% 11124 ± 4% softirqs.CPU6.SCHED
35057 ± 3% +74.8% 61271 ± 16% softirqs.CPU6.TIMER
12939 +36.1% 17608 ± 7% softirqs.CPU7.RCU
8556 ± 2% +28.7% 11015 ± 4% softirqs.CPU7.SCHED
35488 ± 4% +54.8% 54918 softirqs.CPU7.TIMER
12719 ± 2% +37.1% 17432 ± 2% softirqs.CPU8.RCU
8648 +32.3% 11438 ± 6% softirqs.CPU8.SCHED
34507 ± 2% +58.6% 54712 softirqs.CPU8.TIMER
12769 ± 3% +35.1% 17249 ± 7% softirqs.CPU9.RCU
8597 ± 3% +33.3% 11461 ± 2% softirqs.CPU9.SCHED
38857 ± 19% +40.5% 54604 softirqs.CPU9.TIMER
503590 +36.6% 688115 ± 4% softirqs.RCU
349541 +31.1% 458229 ± 4% softirqs.SCHED
1420004 +56.1% 2216175 softirqs.TIMER
77.00 ± 51% +3025.1% 2406 ±130% interrupts.40:IR-PCI-MSI.524290-edge.eth0-TxRx-1
77.00 ± 39% +71.4% 132.00 ± 30% interrupts.43:IR-PCI-MSI.524293-edge.eth0-TxRx-4
54.75 ± 8% +64.4% 90.00 ± 21% interrupts.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
38565 +40.0% 54002 ± 4% interrupts.CAL:Function_call_interrupts
983.50 ± 4% +36.0% 1338 ± 8% interrupts.CPU0.CAL:Function_call_interrupts
177382 +55.8% 276400 ± 3% interrupts.CPU0.LOC:Local_timer_interrupts
4852 ± 66% -87.6% 601.33 ±141% interrupts.CPU0.NMI:Non-maskable_interrupts
4852 ± 66% -87.6% 601.33 ±141% interrupts.CPU0.PMI:Performance_monitoring_interrupts
177458 +55.6% 276134 ± 3% interrupts.CPU1.LOC:Local_timer_interrupts
2399 ± 34% -39.3% 1456 ± 8% interrupts.CPU1.RES:Rescheduling_interrupts
1007 +38.4% 1394 ± 7% interrupts.CPU10.CAL:Function_call_interrupts
175614 ± 2% +57.4% 276332 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
4849 ± 33% -54.8% 2193 ± 27% interrupts.CPU10.NMI:Non-maskable_interrupts
4849 ± 33% -54.8% 2193 ± 27% interrupts.CPU10.PMI:Performance_monitoring_interrupts
2487 ± 31% -33.4% 1655 ± 15% interrupts.CPU10.RES:Rescheduling_interrupts
1005 ± 2% +33.0% 1337 ± 3% interrupts.CPU11.CAL:Function_call_interrupts
177053 +56.1% 276460 ± 3% interrupts.CPU11.LOC:Local_timer_interrupts
3695 ± 9% -52.1% 1768 ± 4% interrupts.CPU11.NMI:Non-maskable_interrupts
3695 ± 9% -52.1% 1768 ± 4% interrupts.CPU11.PMI:Performance_monitoring_interrupts
972.00 ± 5% +41.8% 1378 ± 2% interrupts.CPU12.CAL:Function_call_interrupts
175821 ± 2% +57.2% 276353 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
1947 ± 5% -23.8% 1483 ± 16% interrupts.CPU12.RES:Rescheduling_interrupts
922.25 ± 6% +42.0% 1309 interrupts.CPU13.CAL:Function_call_interrupts
176892 +55.9% 275829 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
979.25 ± 5% +31.0% 1282 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
176168 ± 2% +56.8% 276178 ± 3% interrupts.CPU14.LOC:Local_timer_interrupts
955.75 ± 3% +40.3% 1341 ± 6% interrupts.CPU15.CAL:Function_call_interrupts
176793 ± 2% +56.1% 276062 ± 3% interrupts.CPU15.LOC:Local_timer_interrupts
1858 ± 4% -19.4% 1497 ± 13% interrupts.CPU15.RES:Rescheduling_interrupts
969.50 ± 3% +35.7% 1315 ± 6% interrupts.CPU16.CAL:Function_call_interrupts
176054 ± 2% +56.8% 276087 ± 3% interrupts.CPU16.LOC:Local_timer_interrupts
1927 ± 5% -24.2% 1461 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
937.25 ± 2% +39.8% 1310 ± 9% interrupts.CPU17.CAL:Function_call_interrupts
176586 ± 2% +55.6% 274709 ± 3% interrupts.CPU17.LOC:Local_timer_interrupts
972.00 ± 4% +41.1% 1371 ± 9% interrupts.CPU18.CAL:Function_call_interrupts
175927 ± 2% +56.9% 275969 ± 3% interrupts.CPU18.LOC:Local_timer_interrupts
2285 ± 18% -42.0% 1325 ± 7% interrupts.CPU18.RES:Rescheduling_interrupts
958.25 ± 4% +37.9% 1321 ± 6% interrupts.CPU19.CAL:Function_call_interrupts
176354 ± 2% +56.0% 275068 ± 4% interrupts.CPU19.LOC:Local_timer_interrupts
1007 ± 4% +34.8% 1357 ± 11% interrupts.CPU2.CAL:Function_call_interrupts
176971 +56.1% 276231 ± 3% interrupts.CPU2.LOC:Local_timer_interrupts
948.00 ± 5% +46.7% 1390 ± 6% interrupts.CPU20.CAL:Function_call_interrupts
176878 +56.1% 276171 ± 3% interrupts.CPU20.LOC:Local_timer_interrupts
970.25 ± 4% +33.5% 1295 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
177571 +55.5% 276093 ± 4% interrupts.CPU21.LOC:Local_timer_interrupts
1893 -34.6% 1237 ± 6% interrupts.CPU21.RES:Rescheduling_interrupts
963.50 ± 2% +34.6% 1296 ± 2% interrupts.CPU22.CAL:Function_call_interrupts
176684 +56.3% 276201 ± 3% interrupts.CPU22.LOC:Local_timer_interrupts
1975 ± 2% -35.7% 1270 ± 12% interrupts.CPU22.RES:Rescheduling_interrupts
949.50 ± 3% +32.7% 1260 ± 2% interrupts.CPU23.CAL:Function_call_interrupts
177742 +55.1% 275624 ± 3% interrupts.CPU23.LOC:Local_timer_interrupts
1937 ± 4% -34.9% 1261 ± 9% interrupts.CPU23.RES:Rescheduling_interrupts
968.75 ± 5% +41.5% 1370 ± 7% interrupts.CPU24.CAL:Function_call_interrupts
176809 +56.1% 275962 ± 3% interrupts.CPU24.LOC:Local_timer_interrupts
1893 -30.5% 1316 ± 14% interrupts.CPU24.RES:Rescheduling_interrupts
949.00 ± 2% +41.6% 1344 ± 4% interrupts.CPU25.CAL:Function_call_interrupts
177584 +55.4% 275974 ± 4% interrupts.CPU25.LOC:Local_timer_interrupts
922.50 ± 9% +43.9% 1327 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
177144 +55.9% 276093 ± 3% interrupts.CPU26.LOC:Local_timer_interrupts
4881 ± 35% -76.1% 1165 ±141% interrupts.CPU26.NMI:Non-maskable_interrupts
4881 ± 35% -76.1% 1165 ±141% interrupts.CPU26.PMI:Performance_monitoring_interrupts
961.75 ± 5% +32.1% 1270 ± 7% interrupts.CPU27.CAL:Function_call_interrupts
177543 +55.6% 276207 ± 4% interrupts.CPU27.LOC:Local_timer_interrupts
1920 -39.2% 1167 ± 7% interrupts.CPU27.RES:Rescheduling_interrupts
948.25 ± 3% +43.4% 1359 ± 10% interrupts.CPU28.CAL:Function_call_interrupts
176358 ± 2% +56.6% 276170 ± 3% interrupts.CPU28.LOC:Local_timer_interrupts
926.25 ± 2% +52.0% 1408 ± 3% interrupts.CPU29.CAL:Function_call_interrupts
177193 +56.0% 276457 ± 3% interrupts.CPU29.LOC:Local_timer_interrupts
1014 ±112% +163.8% 2677 ± 24% interrupts.CPU29.NMI:Non-maskable_interrupts
1014 ±112% +163.8% 2677 ± 24% interrupts.CPU29.PMI:Performance_monitoring_interrupts
1927 ± 2% -30.8% 1334 ± 9% interrupts.CPU29.RES:Rescheduling_interrupts
983.25 ± 4% +43.7% 1413 interrupts.CPU3.CAL:Function_call_interrupts
177682 +55.6% 276520 ± 3% interrupts.CPU3.LOC:Local_timer_interrupts
1974 ± 2% -27.2% 1436 ± 9% interrupts.CPU3.RES:Rescheduling_interrupts
924.75 ± 2% +54.0% 1424 ± 3% interrupts.CPU30.CAL:Function_call_interrupts
176131 ± 2% +56.7% 276051 ± 3% interrupts.CPU30.LOC:Local_timer_interrupts
1888 ± 3% -35.4% 1221 interrupts.CPU30.RES:Rescheduling_interrupts
960.50 ± 4% +42.4% 1367 ± 6% interrupts.CPU31.CAL:Function_call_interrupts
177002 +56.1% 276368 ± 3% interrupts.CPU31.LOC:Local_timer_interrupts
1939 -29.2% 1373 ± 14% interrupts.CPU31.RES:Rescheduling_interrupts
77.00 ± 51% +3025.1% 2406 ±130% interrupts.CPU32.40:IR-PCI-MSI.524290-edge.eth0-TxRx-1
914.25 ± 14% +51.8% 1388 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
176087 ± 2% +56.7% 276010 ± 3% interrupts.CPU32.LOC:Local_timer_interrupts
1947 ± 2% -27.7% 1407 ± 11% interrupts.CPU32.RES:Rescheduling_interrupts
994.25 ± 3% +39.9% 1390 ± 5% interrupts.CPU33.CAL:Function_call_interrupts
176835 +56.1% 276100 ± 3% interrupts.CPU33.LOC:Local_timer_interrupts
1923 ± 2% -32.0% 1308 ± 15% interrupts.CPU33.RES:Rescheduling_interrupts
976.75 ± 3% +43.2% 1398 ± 7% interrupts.CPU34.CAL:Function_call_interrupts
175901 ± 2% +56.8% 275892 ± 3% interrupts.CPU34.LOC:Local_timer_interrupts
1920 ± 3% -35.9% 1230 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
995.75 ± 3% +35.5% 1349 ± 2% interrupts.CPU35.CAL:Function_call_interrupts
176626 +56.4% 276218 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
2002 ± 6% -34.0% 1322 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
977.75 ± 3% +41.3% 1381 ± 5% interrupts.CPU36.CAL:Function_call_interrupts
175681 ± 2% +57.0% 275759 ± 3% interrupts.CPU36.LOC:Local_timer_interrupts
1944 -35.9% 1245 ± 3% interrupts.CPU36.RES:Rescheduling_interrupts
967.00 ± 5% +42.5% 1378 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
176570 ± 2% +55.7% 274976 ± 3% interrupts.CPU37.LOC:Local_timer_interrupts
4719 ± 35% -74.9% 1184 ± 67% interrupts.CPU37.NMI:Non-maskable_interrupts
4719 ± 35% -74.9% 1184 ± 67% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1897 -31.3% 1302 ± 13% interrupts.CPU37.RES:Rescheduling_interrupts
77.00 ± 39% +71.4% 132.00 ± 30% interrupts.CPU38.43:IR-PCI-MSI.524293-edge.eth0-TxRx-4
953.50 ± 2% +42.8% 1361 interrupts.CPU38.CAL:Function_call_interrupts
175461 ± 3% +57.2% 275862 ± 4% interrupts.CPU38.LOC:Local_timer_interrupts
1919 ± 3% -32.3% 1300 ± 10% interrupts.CPU38.RES:Rescheduling_interrupts
977.75 ± 4% +34.6% 1316 ± 6% interrupts.CPU39.CAL:Function_call_interrupts
176475 ± 2% +56.5% 276208 ± 3% interrupts.CPU39.LOC:Local_timer_interrupts
1884 -33.0% 1263 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
897.00 ± 12% +56.3% 1402 ± 8% interrupts.CPU4.CAL:Function_call_interrupts
176870 +56.2% 276260 ± 3% interrupts.CPU4.LOC:Local_timer_interrupts
2133 ± 10% -36.2% 1361 ± 20% interrupts.CPU4.RES:Rescheduling_interrupts
995.00 ± 3% +36.3% 1356 ± 5% interrupts.CPU5.CAL:Function_call_interrupts
177709 +55.3% 275932 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
5802 ± 33% -62.7% 2163 ± 72% interrupts.CPU5.NMI:Non-maskable_interrupts
5802 ± 33% -62.7% 2163 ± 72% interrupts.CPU5.PMI:Performance_monitoring_interrupts
54.75 ± 8% +64.4% 90.00 ± 21% interrupts.CPU6.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
931.75 ± 3% +46.4% 1364 ± 7% interrupts.CPU6.CAL:Function_call_interrupts
176814 +56.1% 275995 ± 3% interrupts.CPU6.LOC:Local_timer_interrupts
1.25 ± 66% +1.4e+05% 1764 ± 77% interrupts.CPU6.NMI:Non-maskable_interrupts
1.25 ± 66% +1.4e+05% 1764 ± 77% interrupts.CPU6.PMI:Performance_monitoring_interrupts
2020 ± 4% -26.1% 1493 ± 21% interrupts.CPU6.RES:Rescheduling_interrupts
951.50 ± 3% +42.3% 1353 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
177433 +55.7% 276226 ± 4% interrupts.CPU7.LOC:Local_timer_interrupts
1961 -39.5% 1187 ± 2% interrupts.CPU7.RES:Rescheduling_interrupts
958.00 ± 4% +46.1% 1400 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
176160 ± 2% +56.6% 275946 ± 3% interrupts.CPU8.LOC:Local_timer_interrupts
2051 ± 3% -33.9% 1356 ± 5% interrupts.CPU8.RES:Rescheduling_interrupts
1045 ± 2% +26.9% 1326 ± 6% interrupts.CPU9.CAL:Function_call_interrupts
177126 +55.9% 276223 ± 3% interrupts.CPU9.LOC:Local_timer_interrupts
4878 ± 34% -63.5% 1780 ± 80% interrupts.CPU9.NMI:Non-maskable_interrupts
4878 ± 34% -63.5% 1780 ± 80% interrupts.CPU9.PMI:Performance_monitoring_interrupts
7069159 +56.2% 11041322 ± 3% interrupts.LOC:Local_timer_interrupts
118316 ± 7% -46.4% 63378 ± 17% interrupts.NMI:Non-maskable_interrupts
118316 ± 7% -46.4% 63378 ± 17% interrupts.PMI:Performance_monitoring_interrupts
79699 -24.7% 60027 ± 2% interrupts.RES:Rescheduling_interrupts
176.75 ± 10% +71.8% 303.67 ± 10% interrupts.TLB:TLB_shootdowns
58.95 -15.1 43.86 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
59.10 -15.1 44.02 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
54.72 -14.7 39.98 ± 4% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
54.25 -14.7 39.52 ± 4% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
61.93 -14.6 47.29 ± 3% perf-profile.calltrace.cycles-pp.read
45.30 -12.1 33.15 ± 2% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
38.88 -10.1 28.73 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
36.45 ± 2% -9.5 26.96 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
20.84 -5.5 15.37 ± 3% perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
9.89 ± 4% -3.2 6.69 ± 7% perf-profile.calltrace.cycles-pp.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
9.80 ± 4% -3.2 6.62 ± 8% perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
9.55 ± 6% -2.6 6.97 ± 5% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
9.61 ± 7% -2.6 7.04 ± 5% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
5.55 ± 10% -1.7 3.83 ± 11% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.80 ± 5% -1.5 4.32 perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
5.86 ± 5% -1.4 4.41 perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
4.33 ± 4% -1.4 2.96 ± 3% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
2.97 ± 4% -1.0 2.01 perf-profile.calltrace.cycles-pp.__inode_security_revalidate.selinux_file_permission.security_file_permission.vfs_read.ksys_read
2.46 ± 3% -0.5 2.00 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
1.56 ± 21% -0.4 1.20 ± 9% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.08 ± 3% -0.3 1.78 ± 6% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.14 ± 2% -0.3 1.84 ± 6% perf-profile.calltrace.cycles-pp.close
2.09 ± 3% -0.3 1.79 ± 6% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.11 ± 3% -0.3 1.81 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
2.11 ± 2% -0.3 1.81 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.09 ± 3% -0.3 1.79 ± 6% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.05 ± 3% -0.3 1.75 ± 6% perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
2.06 ± 3% -0.3 1.77 ± 6% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
1.39 ± 3% -0.3 1.10 ± 5% perf-profile.calltrace.cycles-pp.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.97 ± 6% -0.3 1.70 ± 5% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.path_openat.do_filp_open
2.28 ± 6% -0.2 2.07 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_open.do_syscall_64
2.22 ± 6% -0.2 2.03 ± 3% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_open
0.59 ± 2% +0.1 0.69 ± 5% perf-profile.calltrace.cycles-pp.memset_erms.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply
0.87 ± 2% +0.1 0.97 ± 3% perf-profile.calltrace.cycles-pp.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
0.79 ± 5% +0.1 0.89 ± 4% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.19 +0.1 1.32 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
1.15 ± 5% +0.1 1.28 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.read
1.56 ± 2% +0.2 1.72 ± 6% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
1.55 ± 2% +0.2 1.71 ± 6% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
1.40 ± 3% +0.2 1.59 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.read
1.38 ± 2% +0.2 1.61 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
3.72 ± 2% +0.3 3.97 perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
3.86 ± 2% +0.3 4.15 perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.72 ± 2% +0.4 2.12 perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
2.90 ± 5% +0.4 3.31 ± 3% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.91 ± 5% +0.4 3.31 ± 3% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
2.92 ± 5% +0.4 3.33 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
2.92 ± 5% +0.4 3.32 ± 3% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
2.92 ± 5% +0.4 3.33 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
2.94 ± 5% +0.4 3.35 ± 2% perf-profile.calltrace.cycles-pp.creat
5.01 ± 2% +0.5 5.47 perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
0.00 +0.6 0.58 ± 5% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create
2.00 ± 2% +0.6 2.59 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
0.00 +0.6 0.62 ± 3% perf-profile.calltrace.cycles-pp.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.62 ± 3% perf-profile.calltrace.cycles-pp.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64
0.00 +0.6 0.63 ± 3% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat
0.00 +0.6 0.63 ± 3% perf-profile.calltrace.cycles-pp.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat.do_filp_open
0.00 +0.6 0.65 ± 3% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
0.77 ± 4% +0.7 1.45 ± 2% perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
0.78 ± 5% +0.7 1.46 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
0.78 ± 5% +0.7 1.46 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
0.00 +0.7 0.70 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.81 ± 5% +0.7 1.51 ± 2% perf-profile.calltrace.cycles-pp.unlink
0.00 +0.7 0.74 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
12.46 ± 2% +0.8 13.22 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
12.37 ± 2% +0.8 13.13 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write
0.13 ±173% +0.9 1.02 perf-profile.calltrace.cycles-pp.xfs_generic_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
0.00 +1.0 1.00 ± 2% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.path_openat.do_filp_open.do_sys_open
15.73 ± 4% +1.3 17.05 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
22.72 ± 3% +1.5 24.18 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
22.86 ± 3% +1.5 24.35 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
16.41 ± 3% +1.5 17.94 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.33 ± 3% +1.7 19.99 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
18.75 ± 3% +1.7 20.47 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
25.76 ± 3% +1.9 27.68 perf-profile.calltrace.cycles-pp.write
5.47 ± 15% +10.7 16.19 ± 8% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.55 ± 15% +11.2 16.79 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
5.56 ± 15% +11.3 16.82 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
5.89 ± 12% +11.5 17.42 ± 8% perf-profile.calltrace.cycles-pp.secondary_startup_64
5.69 ± 14% +11.6 17.25 ± 9% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
5.69 ± 14% +11.6 17.26 ± 9% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
5.69 ± 14% +11.6 17.26 ± 9% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
54.73 -14.7 40.00 ± 4% perf-profile.children.cycles-pp.ksys_read
54.28 -14.7 39.56 ± 4% perf-profile.children.cycles-pp.vfs_read
62.12 -14.6 47.49 ± 3% perf-profile.children.cycles-pp.read
87.70 -12.8 74.92 ± 2% perf-profile.children.cycles-pp.do_syscall_64
87.98 -12.8 75.22 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
45.31 -12.1 33.16 ± 2% perf-profile.children.cycles-pp.new_sync_read
38.88 -10.1 28.74 ± 2% perf-profile.children.cycles-pp.xfs_file_read_iter
36.47 ± 2% -9.5 26.98 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
20.86 -5.5 15.39 ± 3% perf-profile.children.cycles-pp.generic_file_read_iter
9.89 ± 4% -3.2 6.70 ± 7% perf-profile.children.cycles-pp.touch_atime
9.80 ± 4% -3.2 6.63 ± 8% perf-profile.children.cycles-pp.atime_needs_update
9.60 ± 6% -2.6 7.02 ± 4% perf-profile.children.cycles-pp.down_read
10.44 ± 7% -2.5 7.95 ± 5% perf-profile.children.cycles-pp.xfs_ilock
6.35 ± 9% -1.7 4.67 ± 9% perf-profile.children.cycles-pp.security_file_permission
5.81 ± 5% -1.5 4.34 perf-profile.children.cycles-pp.up_read
6.41 ± 5% -1.5 4.93 perf-profile.children.cycles-pp.xfs_iunlock
4.94 ± 4% -1.3 3.60 ± 3% perf-profile.children.cycles-pp.selinux_file_permission
3.17 ± 3% -0.9 2.27 perf-profile.children.cycles-pp.__inode_security_revalidate
2.47 ± 3% -0.5 2.02 ± 3% perf-profile.children.cycles-pp.iomap_write_end
1.75 ± 18% -0.3 1.41 ± 7% perf-profile.children.cycles-pp.fsnotify
2.14 ± 2% -0.3 1.84 ± 6% perf-profile.children.cycles-pp.close
2.09 ± 3% -0.3 1.79 ± 6% perf-profile.children.cycles-pp.task_work_run
2.08 ± 3% -0.3 1.79 ± 6% perf-profile.children.cycles-pp.__fput
2.06 ± 3% -0.3 1.77 ± 6% perf-profile.children.cycles-pp.dput
2.10 ± 3% -0.3 1.81 ± 6% perf-profile.children.cycles-pp.exit_to_usermode_loop
2.05 ± 3% -0.3 1.76 ± 6% perf-profile.children.cycles-pp.__dentry_kill
1.40 ± 3% -0.3 1.12 ± 5% perf-profile.children.cycles-pp.iomap_set_page_dirty
0.74 ± 7% -0.2 0.53 ± 8% perf-profile.children.cycles-pp.rw_verify_area
0.34 ± 3% -0.2 0.17 ± 7% perf-profile.children.cycles-pp.page_mapping
0.43 ± 9% -0.1 0.36 ± 3% perf-profile.children.cycles-pp.up_write
0.22 ± 7% -0.0 0.20 ± 8% perf-profile.children.cycles-pp.unlock_page
0.07 -0.0 0.06 ± 8% perf-profile.children.cycles-pp.xfs_btree_lookup
0.10 ± 4% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.09 ± 4% +0.0 0.11 perf-profile.children.cycles-pp.workingset_update_node
0.07 ± 10% +0.0 0.09 ± 9% perf-profile.children.cycles-pp.xfs_lookup
0.07 ± 14% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.xfs_vn_lookup
0.18 ± 7% +0.0 0.21 ± 6% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.13 ± 5% +0.0 0.16 perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.16 +0.0 0.19 ± 7% perf-profile.children.cycles-pp.__x64_sys_read
0.07 ± 7% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.xfs_buf_item_format
0.66 ± 2% +0.0 0.69 perf-profile.children.cycles-pp.down_write
0.17 ± 9% +0.0 0.20 ± 6% perf-profile.children.cycles-pp.disk_cp
0.38 ± 2% +0.0 0.41 ± 4% perf-profile.children.cycles-pp.xas_store
0.21 ± 3% +0.0 0.25 ± 9% perf-profile.children.cycles-pp.rcu_all_qs
0.05 ± 8% +0.0 0.09 ± 9% perf-profile.children.cycles-pp.unwind_next_frame
0.04 ± 57% +0.0 0.08 ± 12% perf-profile.children.cycles-pp.kmem_cache_alloc
0.21 ± 3% +0.0 0.25 ± 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.18 ± 4% +0.0 0.22 ± 2% perf-profile.children.cycles-pp.xfs_dir3_data_check
0.18 ± 4% +0.0 0.22 ± 2% perf-profile.children.cycles-pp.__xfs_dir3_data_check
0.35 ± 3% +0.0 0.39 ± 3% perf-profile.children.cycles-pp.delete_from_page_cache_batch
0.44 ± 2% +0.0 0.48 ± 5% perf-profile.children.cycles-pp._cond_resched
0.03 ±100% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.kmem_zone_alloc
0.05 ± 58% +0.1 0.10 ± 8% perf-profile.children.cycles-pp.xfs_get_extsz_hint
0.04 ± 57% +0.1 0.09 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.read_tsc
0.21 ± 6% +0.1 0.27 ± 8% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.native_irq_return_iret
0.08 ± 10% +0.1 0.14 ± 11% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.11 ± 9% +0.1 0.17 ± 20% perf-profile.children.cycles-pp.wait_for_xmitr
0.01 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.xlog_space_left
0.10 ± 9% +0.1 0.15 ± 11% perf-profile.children.cycles-pp.__account_scheduler_latency
0.08 ± 5% +0.1 0.14 ± 12% perf-profile.children.cycles-pp.arch_stack_walk
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__remove_hrtimer
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.xfs_verify_ino
0.27 ± 5% +0.1 0.33 perf-profile.children.cycles-pp.xfs_dir2_node_removename
0.11 ± 4% +0.1 0.17 ± 19% perf-profile.children.cycles-pp.serial8250_console_putchar
0.06 ± 14% +0.1 0.12 ± 11% perf-profile.children.cycles-pp.sched_ttwu_pending
0.11 ± 7% +0.1 0.17 ± 20% perf-profile.children.cycles-pp.uart_console_write
0.45 ± 4% +0.1 0.52 ± 3% perf-profile.children.cycles-pp.__might_sleep
0.28 ± 5% +0.1 0.34 ± 2% perf-profile.children.cycles-pp.xfs_dir_removename
0.08 ± 58% +0.1 0.14 ± 10% perf-profile.children.cycles-pp.xlog_ungrant_log_space
0.01 ±173% +0.1 0.08 perf-profile.children.cycles-pp.xfs_iunlink
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.12 ± 10% +0.1 0.19 ± 8% perf-profile.children.cycles-pp.enqueue_entity
0.30 ± 4% +0.1 0.38 ± 6% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.12 ± 10% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.enqueue_task_fair
0.10 ± 58% +0.1 0.18 ± 9% perf-profile.children.cycles-pp.xfs_log_done
0.00 +0.1 0.08 ± 22% perf-profile.children.cycles-pp.native_write_msr
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.xfs_agino_range
0.12 ± 10% +0.1 0.20 ± 12% perf-profile.children.cycles-pp.ttwu_do_activate
0.12 ± 10% +0.1 0.20 ± 12% perf-profile.children.cycles-pp.activate_task
0.12 ± 8% +0.1 0.21 ± 20% perf-profile.children.cycles-pp.console_unlock
0.08 ± 8% +0.1 0.17 ± 10% perf-profile.children.cycles-pp.update_load_avg
0.12 ± 8% +0.1 0.21 ± 17% perf-profile.children.cycles-pp.irq_work_run_list
0.35 ± 8% +0.1 0.44 ± 5% perf-profile.children.cycles-pp.xfs_file_iomap_end
0.00 +0.1 0.09 ± 50% perf-profile.children.cycles-pp.xfs_inobt_get_maxrecs
0.11 ± 11% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.irq_work_interrupt
0.11 ± 11% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.11 ± 11% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.irq_work_run
0.11 ± 11% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.printk
0.11 ± 11% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.vprintk_emit
0.04 ± 57% +0.1 0.13 ± 25% perf-profile.children.cycles-pp.clockevents_program_event
0.60 +0.1 0.69 ± 4% perf-profile.children.cycles-pp.memset_erms
0.88 ± 2% +0.1 0.98 ± 3% perf-profile.children.cycles-pp.iomap_read_page_sync
0.14 ± 9% +0.1 0.26 ± 6% perf-profile.children.cycles-pp.task_tick_fair
0.71 ± 2% +0.1 0.82 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.00 +0.1 0.12 ± 26% perf-profile.children.cycles-pp.ktime_get
0.11 ± 61% +0.1 0.24 ± 12% perf-profile.children.cycles-pp.xlog_grant_add_space
0.42 ± 8% +0.1 0.55 ± 13% perf-profile.children.cycles-pp.xfs_file_write_iter
0.06 ± 11% +0.1 0.20 ± 47% perf-profile.children.cycles-pp.xfs_btree_increment
0.10 ± 15% +0.1 0.24 ± 24% perf-profile.children.cycles-pp.__softirqentry_text_start
0.06 ± 6% +0.2 0.22 ± 35% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.06 ± 7% +0.2 0.22 ± 41% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
1.56 ± 2% +0.2 1.72 ± 6% perf-profile.children.cycles-pp.evict
0.17 ± 8% +0.2 0.34 ± 6% perf-profile.children.cycles-pp.scheduler_tick
1.55 ± 2% +0.2 1.71 ± 6% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.11 ± 14% +0.2 0.29 ± 25% perf-profile.children.cycles-pp.irq_exit
0.00 +0.2 0.18 ± 27% perf-profile.children.cycles-pp.menu_select
0.43 ± 3% +0.2 0.62 ± 3% perf-profile.children.cycles-pp.xfs_vn_unlink
0.43 ± 3% +0.2 0.62 ± 3% perf-profile.children.cycles-pp.xfs_remove
0.23 ± 41% +0.2 0.42 ± 11% perf-profile.children.cycles-pp.xfs_trans_reserve
0.21 ± 46% +0.2 0.40 ± 11% perf-profile.children.cycles-pp.xfs_log_reserve
0.45 ± 2% +0.2 0.65 ± 3% perf-profile.children.cycles-pp.vfs_unlink
0.34 ± 5% +0.2 0.55 ± 4% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.08 ± 5% +0.2 0.29 ± 42% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.28 ± 36% +0.2 0.50 ± 9% perf-profile.children.cycles-pp.xfs_trans_alloc
0.26 ± 4% +0.3 0.51 perf-profile.children.cycles-pp.update_process_times
0.26 ± 3% +0.3 0.52 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
0.10 ± 8% +0.3 0.36 ± 36% perf-profile.children.cycles-pp.xfs_inobt_get_rec
2.49 ± 6% +0.3 2.75 ± 3% perf-profile.children.cycles-pp.rwsem_optimistic_spin
3.88 ± 2% +0.3 4.17 perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.29 ± 3% +0.3 0.59 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
2.61 ± 2% +0.3 2.94 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.12 ± 8% +0.4 0.52 ± 4% perf-profile.children.cycles-pp.xfs_dialloc_ag
3.00 ± 5% +0.4 3.40 ± 2% perf-profile.children.cycles-pp.do_sys_open
2.97 ± 5% +0.4 3.37 ± 2% perf-profile.children.cycles-pp.path_openat
2.97 ± 5% +0.4 3.37 ± 3% perf-profile.children.cycles-pp.do_filp_open
0.20 ± 8% +0.4 0.61 ± 37% perf-profile.children.cycles-pp.xfs_check_agi_freecount
1.75 ± 2% +0.4 2.15 perf-profile.children.cycles-pp.xfs_file_iomap_begin_delay
2.94 ± 5% +0.4 3.35 ± 2% perf-profile.children.cycles-pp.creat
0.15 ± 10% +0.4 0.58 ± 5% perf-profile.children.cycles-pp.xfs_dialloc
0.39 ± 3% +0.4 0.83 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.79 ± 2% +0.4 3.23 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.18 ± 7% +0.4 0.63 ± 3% perf-profile.children.cycles-pp.xfs_ialloc
0.18 ± 7% +0.5 0.63 ± 3% perf-profile.children.cycles-pp.xfs_dir_ialloc
5.03 ± 2% +0.5 5.49 perf-profile.children.cycles-pp.iomap_write_begin
0.45 ± 7% +0.6 1.00 perf-profile.children.cycles-pp.xfs_create
0.46 ± 7% +0.6 1.02 perf-profile.children.cycles-pp.xfs_generic_create
0.50 ± 4% +0.6 1.08 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
2.02 ± 2% +0.6 2.62 perf-profile.children.cycles-pp.xfs_file_iomap_begin
0.77 ± 4% +0.7 1.45 ± 2% perf-profile.children.cycles-pp.do_unlinkat
0.81 ± 5% +0.7 1.51 ± 2% perf-profile.children.cycles-pp.unlink
12.39 ± 2% +0.8 13.15 perf-profile.children.cycles-pp.iomap_apply
12.46 ± 2% +0.8 13.22 perf-profile.children.cycles-pp.iomap_file_buffered_write
0.64 ± 5% +0.8 1.46 ± 8% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.69 ± 4% +0.9 1.56 ± 8% perf-profile.children.cycles-pp.apic_timer_interrupt
15.75 ± 4% +1.3 17.07 perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
16.43 ± 3% +1.5 17.95 perf-profile.children.cycles-pp.new_sync_write
18.36 ± 3% +1.7 20.02 perf-profile.children.cycles-pp.vfs_write
18.77 ± 3% +1.7 20.48 perf-profile.children.cycles-pp.ksys_write
25.94 ± 3% +2.0 27.92 perf-profile.children.cycles-pp.write
5.65 ± 12% +10.7 16.35 ± 7% perf-profile.children.cycles-pp.intel_idle
5.75 ± 13% +11.2 16.98 ± 7% perf-profile.children.cycles-pp.cpuidle_enter_state
5.75 ± 13% +11.2 16.98 ± 8% perf-profile.children.cycles-pp.cpuidle_enter
5.89 ± 12% +11.5 17.42 ± 8% perf-profile.children.cycles-pp.secondary_startup_64
5.89 ± 12% +11.5 17.42 ± 8% perf-profile.children.cycles-pp.cpu_startup_entry
5.89 ± 12% +11.5 17.42 ± 8% perf-profile.children.cycles-pp.do_idle
5.69 ± 14% +11.6 17.26 ± 9% perf-profile.children.cycles-pp.start_secondary
9.36 ± 3% -3.2 6.20 ± 7% perf-profile.self.cycles-pp.atime_needs_update
9.38 ± 6% -2.6 6.74 ± 5% perf-profile.self.cycles-pp.down_read
7.04 ± 2% -2.2 4.82 ± 3% perf-profile.self.cycles-pp.generic_file_read_iter
6.34 -2.0 4.32 ± 6% perf-profile.self.cycles-pp.new_sync_read
5.76 ± 5% -1.5 4.27 perf-profile.self.cycles-pp.up_read
2.93 ± 4% -1.0 1.96 perf-profile.self.cycles-pp.__inode_security_revalidate
7.79 -0.6 7.14 perf-profile.self.cycles-pp.do_syscall_64
1.73 ± 6% -0.5 1.27 ± 6% perf-profile.self.cycles-pp.selinux_file_permission
1.73 ± 18% -0.4 1.36 ± 8% perf-profile.self.cycles-pp.fsnotify
0.72 ± 7% -0.2 0.51 ± 9% perf-profile.self.cycles-pp.rw_verify_area
0.34 ± 3% -0.2 0.17 ± 10% perf-profile.self.cycles-pp.page_mapping
0.42 ± 4% -0.1 0.30 ± 6% perf-profile.self.cycles-pp.iomap_write_end
0.24 ± 6% -0.1 0.16 ± 10% perf-profile.self.cycles-pp.iomap_set_page_dirty
0.16 ± 11% -0.1 0.08 ± 20% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.42 ± 8% -0.1 0.34 ± 5% perf-profile.self.cycles-pp.up_write
0.33 ± 2% -0.0 0.29 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
0.28 ± 5% -0.0 0.24 ± 5% perf-profile.self.cycles-pp.iov_iter_copy_from_user_atomic
0.08 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.workingset_update_node
0.05 ± 8% +0.0 0.07 ± 17% perf-profile.self.cycles-pp.__lock_text_start
0.14 ± 3% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.__x64_sys_read
0.22 +0.0 0.25 ± 6% perf-profile.self.cycles-pp.read
0.16 +0.0 0.19 ± 9% perf-profile.self.cycles-pp.rcu_all_qs
0.18 ± 2% +0.0 0.21 ± 7% perf-profile.self.cycles-pp._cond_resched
0.11 ± 4% +0.0 0.15 ± 9% perf-profile.self.cycles-pp.xfs_file_aio_write_checks
0.32 ± 5% +0.0 0.35 perf-profile.self.cycles-pp.xfs_file_buffered_aio_write
0.14 ± 6% +0.0 0.17 ± 10% perf-profile.self.cycles-pp.ksys_write
0.04 ± 57% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.copyin
0.41 ± 4% +0.0 0.45 ± 3% perf-profile.self.cycles-pp.__might_sleep
0.20 ± 4% +0.0 0.25 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.20 ± 7% +0.0 0.25 ± 6% perf-profile.self.cycles-pp.iomap_write_begin
0.03 ±100% +0.1 0.08 ± 16% perf-profile.self.cycles-pp.xfs_get_extsz_hint
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.read_tsc
0.27 ± 6% +0.1 0.32 ± 6% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.17 ± 13% +0.1 0.23 ± 10% perf-profile.self.cycles-pp.new_sync_write
0.00 +0.1 0.06 ± 8% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.memcpy_erms
0.01 ±173% +0.1 0.07 ± 17% perf-profile.self.cycles-pp.xlog_space_left
0.20 ± 4% +0.1 0.26 perf-profile.self.cycles-pp.write
0.10 ± 25% +0.1 0.16 ± 18% perf-profile.self.cycles-pp.xfs_log_commit_cil
0.08 ± 58% +0.1 0.14 ± 10% perf-profile.self.cycles-pp.xlog_ungrant_log_space
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.update_load_avg
0.25 ± 4% +0.1 0.32 ± 5% perf-profile.self.cycles-pp.xfs_file_iomap_end
0.23 ± 9% +0.1 0.30 ± 8% perf-profile.self.cycles-pp.pagecache_get_page
0.00 +0.1 0.08 ± 30% perf-profile.self.cycles-pp.ktime_get
0.00 +0.1 0.08 ± 22% perf-profile.self.cycles-pp.native_write_msr
0.00 +0.1 0.08 ± 53% perf-profile.self.cycles-pp.xfs_inobt_get_maxrecs
0.59 +0.1 0.68 ± 4% perf-profile.self.cycles-pp.memset_erms
0.17 ± 9% +0.1 0.26 ± 7% perf-profile.self.cycles-pp.xfs_iunlock
0.00 +0.1 0.09 ± 44% perf-profile.self.cycles-pp.menu_select
0.68 ± 2% +0.1 0.79 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.40 ± 10% +0.1 0.52 ± 13% perf-profile.self.cycles-pp.xfs_file_write_iter
0.00 +0.1 0.12 ± 38% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
0.11 ± 61% +0.1 0.24 ± 12% perf-profile.self.cycles-pp.xlog_grant_add_space
0.24 ± 7% +0.2 0.40 perf-profile.self.cycles-pp.xfs_file_iomap_begin
0.31 ± 6% +0.2 0.53 ± 4% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.32 ± 12% +0.2 0.57 perf-profile.self.cycles-pp.xfs_file_iomap_begin_delay
2.60 ± 2% +0.3 2.94 perf-profile.self.cycles-pp.syscall_return_via_sysret
2.52 ± 2% +0.4 2.91 perf-profile.self.cycles-pp.entry_SYSCALL_64
5.65 ± 12% +10.7 16.34 ± 7% perf-profile.self.cycles-pp.intel_idle
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[btrfs] 0096420adb: aim7.jobs-per-min 18.3% improvement
by kernel test robot
Greetings,
FYI, we noticed an 18.3% improvement of aim7.jobs-per-min due to commit:
commit: 0096420adb036b143d6c42ad7c02a945b3e1119c ("btrfs: do not account global reserve in can_overcommit")
https://github.com/kdave/btrfs-devel.git misc-next
in testcase: aim7
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_wrt
load: 1500
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/btrfs/x86_64-rhel-7.6/1500/debian-x86_64-2019-05-14.cgz/lkp-csl-2ap2/disk_wrt/aim7
commit:
426551f686 ("btrfs: use btrfs_try_granting_tickets in update_global_rsv")
0096420adb ("btrfs: do not account global reserve in can_overcommit")
426551f6866a369c 0096420adb036b143d6c42ad7c0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 50% 2:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 75% 3:4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
28200 ± 3% +18.3% 33348 ± 2% aim7.jobs-per-min
319.55 ± 3% -15.4% 270.21 ± 2% aim7.time.elapsed_time
319.55 ± 3% -15.4% 270.21 ± 2% aim7.time.elapsed_time.max
2258511 ± 11% -41.4% 1323971 ± 9% aim7.time.involuntary_context_switches
275841 -13.6% 238362 aim7.time.minor_page_faults
58858 ± 3% -16.2% 49335 ± 3% aim7.time.system_time
3139592 ± 2% -10.9% 2795903 ± 2% aim7.time.voluntary_context_switches
4.74 +23.1% 5.83 ± 5% iostat.cpu.idle
4.16 +1.0 5.15 ± 6% mpstat.cpu.all.idle%
0.00 ± 77% +0.0 0.01 ± 17% mpstat.cpu.all.iowait%
1344 ± 15% +19.8% 1610 ± 6% slabinfo.btrfs_ordered_extent.active_objs
1344 ± 15% +19.8% 1610 ± 6% slabinfo.btrfs_ordered_extent.num_objs
95.00 -1.6% 93.50 vmstat.cpu.sy
254.50 ± 2% -11.5% 225.25 ± 2% vmstat.procs.r
13087494 ± 33% +160.2% 34058682 ± 72% cpuidle.C1.time
180167 ± 56% +399.7% 900240 ±115% cpuidle.POLL.time
24359 ± 15% +151.6% 61296 ± 49% cpuidle.POLL.usage
119695 ± 3% -12.0% 105338 meminfo.Active(file)
115327 ± 3% -12.6% 100744 meminfo.Dirty
10759 ± 2% +17.6% 12654 ± 2% meminfo.max_used_kB
0.01 ± 34% +0.0 0.04 ± 66% turbostat.C1%
0.94 ± 49% +0.6 1.52 ± 46% turbostat.C6%
1.21 ± 27% +43.3% 1.74 ± 31% turbostat.CPU%c6
64911790 ± 3% -15.6% 54761840 ± 2% turbostat.IRQ
0.92 ± 21% +104.9% 1.88 ± 37% turbostat.Pkg%pc2
267.99 -1.4% 264.11 turbostat.PkgWatt
28691 ± 2% -11.7% 25337 ± 2% numa-meminfo.node0.Dirty
29558 ± 2% -11.3% 26229 ± 4% numa-meminfo.node1.Active(file)
28710 ± 2% -12.1% 25222 ± 2% numa-meminfo.node1.Dirty
30247 ± 2% -13.2% 26260 numa-meminfo.node2.Active(file)
28855 ± 2% -13.0% 25116 ± 2% numa-meminfo.node2.Dirty
29810 ± 3% -10.1% 26799 ± 2% numa-meminfo.node3.Active(file)
28760 ± 3% -12.0% 25301 numa-meminfo.node3.Dirty
29931 ± 3% -12.0% 26348 ± 2% proc-vmstat.nr_active_file
28787 ± 3% -12.3% 25256 ± 2% proc-vmstat.nr_dirty
432697 ± 5% -10.8% 386012 ± 10% proc-vmstat.nr_written
29931 ± 3% -12.0% 26348 ± 2% proc-vmstat.nr_zone_active_file
29330 ± 3% -12.3% 25709 ± 2% proc-vmstat.nr_zone_write_pending
201248 ± 2% -18.1% 164914 ± 2% proc-vmstat.numa_hint_faults
153989 ± 2% -19.9% 123355 ± 3% proc-vmstat.numa_hint_faults_local
25435 ± 2% -7.0% 23649 proc-vmstat.numa_pages_migrated
212160 ± 7% -16.7% 176683 ± 5% proc-vmstat.numa_pte_updates
2740553 -2.6% 2669001 proc-vmstat.pgactivate
1366994 ± 2% -13.8% 1177924 proc-vmstat.pgfault
25435 ± 2% -7.0% 23649 proc-vmstat.pgmigrate_success
1730705 ± 5% -10.8% 1543836 ± 10% proc-vmstat.pgpgout
7150 ± 3% -11.9% 6298 ± 3% numa-vmstat.node0.nr_dirty
7281 ± 2% -11.5% 6441 ± 3% numa-vmstat.node0.nr_zone_write_pending
7373 ± 3% -11.0% 6560 ± 5% numa-vmstat.node1.nr_active_file
7153 ± 4% -11.8% 6306 ± 3% numa-vmstat.node1.nr_dirty
7373 ± 3% -11.0% 6560 ± 5% numa-vmstat.node1.nr_zone_active_file
7313 ± 4% -12.2% 6423 ± 2% numa-vmstat.node1.nr_zone_write_pending
7496 ± 2% -12.3% 6577 numa-vmstat.node2.nr_active_file
7143 ± 4% -12.1% 6278 ± 2% numa-vmstat.node2.nr_dirty
7496 ± 2% -12.3% 6577 numa-vmstat.node2.nr_zone_active_file
7251 ± 4% -12.0% 6377 ± 2% numa-vmstat.node2.nr_zone_write_pending
7452 ± 3% -10.2% 6695 ± 2% numa-vmstat.node3.nr_active_file
7170 ± 3% -12.0% 6307 ± 2% numa-vmstat.node3.nr_dirty
7452 ± 3% -10.2% 6695 ± 2% numa-vmstat.node3.nr_zone_active_file
7292 ± 4% -12.2% 6404 ± 2% numa-vmstat.node3.nr_zone_write_pending
11.11 -1.4% 10.96 perf-stat.i.cpi
3815 -5.0% 3623 perf-stat.i.cpu-migrations
0.01 ± 66% +0.0 0.02 ± 58% perf-stat.i.dTLB-load-miss-rate%
693238 ± 8% +15.2% 798500 ± 9% perf-stat.i.dTLB-load-misses
1.079e+09 ± 2% +14.2% 1.232e+09 ± 2% perf-stat.i.dTLB-stores
94.53 +1.1 95.62 perf-stat.i.node-load-miss-rate%
617476 ± 3% -25.4% 460892 ± 6% perf-stat.i.node-loads
6022550 ± 2% +9.8% 6610829 ± 2% perf-stat.i.node-store-misses
198186 ± 2% -4.5% 189325 ± 2% perf-stat.i.node-stores
11.28 -1.3% 11.13 perf-stat.overall.cpi
0.01 ± 8% +0.0 0.01 ± 9% perf-stat.overall.dTLB-load-miss-rate%
0.09 +1.3% 0.09 perf-stat.overall.ipc
94.61 +1.1 95.73 perf-stat.overall.node-load-miss-rate%
3802 -5.1% 3609 perf-stat.ps.cpu-migrations
691108 ± 8% +15.2% 795955 ± 9% perf-stat.ps.dTLB-load-misses
1.076e+09 ± 2% +14.1% 1.227e+09 ± 2% perf-stat.ps.dTLB-stores
615361 ± 3% -25.4% 459051 ± 6% perf-stat.ps.node-loads
6001881 ± 2% +9.7% 6583844 ± 2% perf-stat.ps.node-store-misses
197520 ± 2% -4.5% 188572 ± 2% perf-stat.ps.node-stores
1.608e+13 ± 3% -15.1% 1.365e+13 ± 2% perf-stat.total.instructions
140405 -21.3% 110515 sched_debug.cfs_rq:/.exec_clock.avg
143580 -21.0% 113460 sched_debug.cfs_rq:/.exec_clock.max
135599 -21.7% 106144 sched_debug.cfs_rq:/.exec_clock.min
8.16 ± 12% +22.7% 10.01 ± 13% sched_debug.cfs_rq:/.load_avg.avg
23.68 ± 22% +31.5% 31.13 ± 15% sched_debug.cfs_rq:/.load_avg.stddev
32836566 -25.8% 24380252 sched_debug.cfs_rq:/.min_vruntime.avg
34142704 -26.6% 25069511 sched_debug.cfs_rq:/.min_vruntime.max
30685932 -24.8% 23066314 sched_debug.cfs_rq:/.min_vruntime.min
512098 ± 7% -45.8% 277506 ± 12% sched_debug.cfs_rq:/.min_vruntime.stddev
511619 ± 7% -45.8% 277245 ± 12% sched_debug.cfs_rq:/.spread0.stddev
807.86 -8.1% 742.26 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.avg
367.99 ± 6% -20.4% 292.83 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.stddev
65938 ± 9% -31.8% 45001 ± 18% sched_debug.cpu.avg_idle.min
192768 -13.9% 165966 ± 2% sched_debug.cpu.clock.avg
192837 -13.9% 166030 ± 2% sched_debug.cpu.clock.max
192685 -13.9% 165891 ± 2% sched_debug.cpu.clock.min
192768 -13.9% 165966 ± 2% sched_debug.cpu.clock_task.avg
192837 -13.9% 166030 ± 2% sched_debug.cpu.clock_task.max
192685 -13.9% 165891 ± 2% sched_debug.cpu.clock_task.min
8622 -10.6% 7707 sched_debug.cpu.curr->pid.max
1.16 ± 4% -18.7% 0.95 ± 3% sched_debug.cpu.nr_running.avg
3.04 ± 7% -16.2% 2.55 ± 3% sched_debug.cpu.nr_running.max
0.52 ± 8% -29.8% 0.37 ± 8% sched_debug.cpu.nr_running.stddev
19837 -18.3% 16216 sched_debug.cpu.nr_switches.avg
32384 ± 2% -17.3% 26785 ± 2% sched_debug.cpu.nr_switches.max
17910 -20.4% 14262 sched_debug.cpu.nr_switches.min
419.29 ± 3% -35.4% 271.05 ± 10% sched_debug.cpu.nr_uninterruptible.max
-184.50 -25.7% -137.10 sched_debug.cpu.nr_uninterruptible.min
79.55 ± 6% -27.3% 57.81 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
18526 -21.7% 14513 sched_debug.cpu.sched_count.avg
25587 ± 7% -22.2% 19900 ± 2% sched_debug.cpu.sched_count.max
17081 -21.9% 13343 sched_debug.cpu.sched_count.min
8402 -16.0% 7057 sched_debug.cpu.ttwu_count.avg
24742 ± 11% -30.4% 17221 ± 5% sched_debug.cpu.ttwu_count.max
4030 ± 8% -31.0% 2779 ± 2% sched_debug.cpu.ttwu_count.stddev
748.79 -20.4% 595.77 sched_debug.cpu.ttwu_local.avg
433.17 -18.1% 354.90 ± 2% sched_debug.cpu.ttwu_local.min
276.39 ± 9% -16.6% 230.51 ± 10% sched_debug.cpu.ttwu_local.stddev
192684 -13.9% 165890 ± 2% sched_debug.cpu_clk
188644 -14.2% 161850 ± 2% sched_debug.ktime
193281 -13.8% 166554 ± 2% sched_debug.sched_clk
35.53 -0.3 35.25 perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
35.63 -0.3 35.36 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
98.57 -0.3 98.31 perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write
98.60 -0.3 98.34 perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
98.60 -0.3 98.35 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
98.65 -0.3 98.39 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.64 -0.3 98.38 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.67 -0.2 98.42 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
98.66 -0.2 98.42 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.69 -0.2 98.44 perf-profile.calltrace.cycles-pp.write
35.21 -0.2 34.98 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
35.28 -0.2 35.06 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter
27.41 +0.1 27.49 perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.59 ± 9% +0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent
0.61 ± 8% +0.1 0.70 ± 4% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages
0.67 ± 7% +0.1 0.77 ± 4% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter
0.71 ± 7% +0.1 0.82 ± 4% perf-profile.calltrace.cycles-pp.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.67 ± 8% +0.1 0.77 ± 4% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write
97.52 -0.4 97.13 perf-profile.children.cycles-pp._raw_spin_lock
97.65 -0.3 97.31 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
35.67 -0.3 35.39 perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
35.63 -0.3 35.36 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
98.57 -0.3 98.31 perf-profile.children.cycles-pp.btrfs_buffered_write
98.60 -0.3 98.34 perf-profile.children.cycles-pp.btrfs_file_write_iter
98.62 -0.3 98.36 perf-profile.children.cycles-pp.new_sync_write
98.65 -0.2 98.40 perf-profile.children.cycles-pp.vfs_write
98.66 -0.2 98.41 perf-profile.children.cycles-pp.ksys_write
98.69 -0.2 98.45 perf-profile.children.cycles-pp.write
99.53 -0.2 99.37 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.53 -0.2 99.37 perf-profile.children.cycles-pp.do_syscall_64
0.18 ± 2% -0.1 0.12 ± 4% perf-profile.children.cycles-pp.can_overcommit
0.05 +0.0 0.06 perf-profile.children.cycles-pp.set_extent_bit
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.btrfs_check_data_free_space
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.btrfs_alloc_data_chunk_ondemand
0.14 ± 3% +0.0 0.16 ± 2% perf-profile.children.cycles-pp.prepare_pages
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.btrfs_drop_pages
0.34 ± 9% +0.1 0.40 ± 4% perf-profile.children.cycles-pp.prepare_to_wait_event
0.23 ± 4% +0.1 0.30 ± 8% perf-profile.children.cycles-pp.osq_lock
0.38 ± 2% +0.1 0.46 ± 6% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.42 ± 3% +0.1 0.49 ± 5% perf-profile.children.cycles-pp.creat
0.42 ± 3% +0.1 0.49 ± 5% perf-profile.children.cycles-pp.do_sys_open
0.41 ± 3% +0.1 0.49 ± 5% perf-profile.children.cycles-pp.do_filp_open
0.41 ± 3% +0.1 0.49 ± 5% perf-profile.children.cycles-pp.path_openat
0.38 ± 2% +0.1 0.46 ± 7% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
27.41 +0.1 27.49 perf-profile.children.cycles-pp.btrfs_dirty_pages
0.59 ± 8% +0.1 0.67 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.60 ± 8% +0.1 0.69 ± 4% perf-profile.children.cycles-pp.btrfs_tree_read_lock
0.62 ± 8% +0.1 0.71 ± 4% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
0.67 ± 7% +0.1 0.77 ± 4% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.68 ± 8% +0.1 0.79 ± 4% perf-profile.children.cycles-pp.btrfs_search_slot
0.72 ± 7% +0.1 0.82 ± 4% perf-profile.children.cycles-pp.btrfs_get_extent
0.33 ± 13% +0.1 0.47 ± 15% perf-profile.children.cycles-pp.intel_idle
0.34 ± 14% +0.1 0.48 ± 15% perf-profile.children.cycles-pp.cpuidle_enter
0.34 ± 14% +0.1 0.48 ± 15% perf-profile.children.cycles-pp.cpuidle_enter_state
0.35 ± 13% +0.1 0.49 ± 15% perf-profile.children.cycles-pp.start_secondary
0.35 ± 13% +0.2 0.50 ± 14% perf-profile.children.cycles-pp.secondary_startup_64
0.35 ± 13% +0.2 0.50 ± 14% perf-profile.children.cycles-pp.cpu_startup_entry
0.35 ± 13% +0.2 0.50 ± 14% perf-profile.children.cycles-pp.do_idle
97.02 -0.4 96.61 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.08 -0.0 0.06 perf-profile.self.cycles-pp.can_overcommit
0.44 +0.0 0.49 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.1 0.06 perf-profile.self.cycles-pp.btrfs_drop_pages
0.23 ± 4% +0.1 0.30 ± 8% perf-profile.self.cycles-pp.osq_lock
0.33 ± 13% +0.1 0.47 ± 15% perf-profile.self.cycles-pp.intel_idle
106990 ± 6% -14.5% 91529 ± 3% softirqs.CPU0.TIMER
102783 ± 7% -14.0% 88392 ± 3% softirqs.CPU1.TIMER
105966 ± 6% -12.1% 93171 ± 6% softirqs.CPU10.TIMER
105264 ± 6% -14.7% 89814 ± 3% softirqs.CPU100.TIMER
105573 ± 6% -14.4% 90416 ± 3% softirqs.CPU101.TIMER
104858 ± 6% -14.2% 89969 ± 3% softirqs.CPU102.TIMER
104199 ± 6% -15.0% 88552 ± 3% softirqs.CPU103.TIMER
103939 ± 5% -14.4% 88925 ± 3% softirqs.CPU104.TIMER
104571 ± 6% -15.2% 88727 ± 3% softirqs.CPU105.TIMER
105349 ± 6% -15.5% 89047 ± 2% softirqs.CPU106.TIMER
105601 ± 6% -15.6% 89162 ± 3% softirqs.CPU107.TIMER
105737 ± 7% -15.5% 89311 ± 3% softirqs.CPU108.TIMER
103934 ± 6% -14.7% 88606 ± 3% softirqs.CPU109.TIMER
105775 ± 6% -13.8% 91200 ± 3% softirqs.CPU11.TIMER
104397 ± 6% -14.6% 89103 ± 3% softirqs.CPU110.TIMER
104606 ± 6% -14.2% 89724 ± 4% softirqs.CPU111.TIMER
105358 ± 6% -15.2% 89379 ± 3% softirqs.CPU112.TIMER
107242 ± 9% -16.9% 89090 ± 2% softirqs.CPU113.TIMER
104788 ± 6% -14.8% 89259 ± 2% softirqs.CPU114.TIMER
104166 ± 5% -15.0% 88505 ± 2% softirqs.CPU115.TIMER
104715 ± 6% -14.2% 89804 ± 3% softirqs.CPU116.TIMER
106727 ± 6% -16.3% 89373 ± 3% softirqs.CPU117.TIMER
105905 ± 6% -16.0% 89005 ± 3% softirqs.CPU118.TIMER
104713 ± 5% -15.3% 88708 ± 3% softirqs.CPU119.TIMER
105521 ± 6% -14.7% 89978 ± 2% softirqs.CPU12.TIMER
102077 ± 4% -13.1% 88695 ± 3% softirqs.CPU120.TIMER
103932 ± 6% -14.5% 88882 ± 4% softirqs.CPU121.TIMER
101870 ± 4% -12.3% 89294 ± 4% softirqs.CPU122.TIMER
101959 ± 4% -11.7% 90050 ± 5% softirqs.CPU123.TIMER
102891 ± 3% -12.9% 89668 ± 4% softirqs.CPU124.TIMER
102642 ± 4% -12.9% 89432 ± 4% softirqs.CPU125.TIMER
102788 ± 4% -12.9% 89530 ± 4% softirqs.CPU126.TIMER
102125 ± 4% -12.1% 89726 ± 5% softirqs.CPU127.TIMER
101647 ± 4% -11.6% 89896 ± 5% softirqs.CPU128.TIMER
101991 ± 4% -13.0% 88774 ± 3% softirqs.CPU129.TIMER
104798 ± 6% -14.8% 89334 ± 2% softirqs.CPU13.TIMER
102709 ± 3% -13.1% 89293 ± 4% softirqs.CPU130.TIMER
102587 ± 4% -12.9% 89326 ± 4% softirqs.CPU131.TIMER
102771 ± 4% -12.6% 89821 ± 4% softirqs.CPU132.TIMER
101972 ± 4% -12.8% 88931 ± 4% softirqs.CPU133.TIMER
101673 ± 3% -12.6% 88909 ± 4% softirqs.CPU134.TIMER
101759 ± 4% -12.4% 89136 ± 4% softirqs.CPU135.TIMER
102776 ± 4% -12.9% 89555 ± 4% softirqs.CPU136.TIMER
102623 ± 5% -12.2% 90063 ± 4% softirqs.CPU138.TIMER
101517 ± 4% -12.6% 88776 ± 4% softirqs.CPU139.TIMER
105965 ± 7% -15.7% 89366 ± 2% softirqs.CPU14.TIMER
101779 ± 4% -12.5% 89053 ± 3% softirqs.CPU140.TIMER
102067 ± 4% -12.2% 89659 ± 4% softirqs.CPU141.TIMER
102798 ± 5% -12.7% 89735 ± 4% softirqs.CPU142.TIMER
102255 ± 4% -12.3% 89661 ± 4% softirqs.CPU143.TIMER
103367 ± 4% -13.9% 89016 ± 3% softirqs.CPU144.TIMER
103150 ± 4% -12.3% 90413 ± 4% softirqs.CPU145.TIMER
103361 ± 3% -13.4% 89534 ± 3% softirqs.CPU146.TIMER
103269 ± 4% -13.3% 89510 ± 4% softirqs.CPU147.TIMER
103891 ± 4% -13.3% 90089 ± 4% softirqs.CPU148.TIMER
104148 ± 4% -13.8% 89724 ± 3% softirqs.CPU149.TIMER
105190 ± 6% -14.3% 90165 ± 3% softirqs.CPU15.TIMER
104098 ± 4% -13.7% 89792 ± 3% softirqs.CPU150.TIMER
103025 ± 4% -13.7% 88875 ± 3% softirqs.CPU151.TIMER
103185 ± 4% -13.2% 89564 ± 3% softirqs.CPU152.TIMER
104039 ± 4% -13.4% 90090 ± 4% softirqs.CPU153.TIMER
103637 ± 4% -13.0% 90122 ± 3% softirqs.CPU154.TIMER
103486 ± 4% -13.3% 89742 ± 3% softirqs.CPU155.TIMER
102733 ± 4% -13.6% 88789 ± 3% softirqs.CPU156.TIMER
103066 ± 4% -13.3% 89331 ± 3% softirqs.CPU157.TIMER
102799 ± 4% -13.0% 89430 ± 3% softirqs.CPU158.TIMER
104213 ± 4% -13.3% 90319 ± 4% softirqs.CPU159.TIMER
107936 ± 6% -16.6% 90012 ± 3% softirqs.CPU16.TIMER
103784 ± 4% -13.7% 89614 ± 4% softirqs.CPU160.TIMER
103963 ± 3% -13.2% 90274 ± 4% softirqs.CPU161.TIMER
103835 ± 3% -13.7% 89608 ± 3% softirqs.CPU162.TIMER
103540 ± 3% -14.3% 88742 ± 3% softirqs.CPU163.TIMER
103419 ± 4% -13.7% 89261 ± 3% softirqs.CPU164.TIMER
103967 ± 4% -13.0% 90400 ± 4% softirqs.CPU165.TIMER
8352 ± 34% -32.4% 5643 ± 5% softirqs.CPU166.RCU
103132 ± 5% -12.8% 89893 ± 4% softirqs.CPU166.TIMER
103426 ± 4% -13.1% 89845 ± 4% softirqs.CPU167.TIMER
109048 ± 4% -14.5% 93248 ± 3% softirqs.CPU168.TIMER
110554 ± 4% -15.6% 93343 ± 3% softirqs.CPU169.TIMER
11363 ± 62% -43.0% 6482 ± 5% softirqs.CPU17.RCU
121547 ± 27% -25.8% 90215 ± 2% softirqs.CPU17.TIMER
109063 ± 4% -13.9% 93947 ± 3% softirqs.CPU170.TIMER
108359 ± 3% -13.2% 94013 ± 3% softirqs.CPU171.TIMER
109620 ± 4% -14.3% 93976 ± 3% softirqs.CPU172.TIMER
109325 ± 3% -13.8% 94210 ± 2% softirqs.CPU173.TIMER
111187 ± 3% -15.1% 94429 ± 3% softirqs.CPU174.TIMER
124419 ± 18% -24.7% 93727 ± 3% softirqs.CPU175.TIMER
109671 ± 2% -14.9% 93364 ± 3% softirqs.CPU176.TIMER
108707 ± 3% -13.7% 93817 ± 2% softirqs.CPU177.TIMER
108700 ± 3% -13.7% 93809 ± 3% softirqs.CPU178.TIMER
109435 ± 3% -14.2% 93872 ± 3% softirqs.CPU179.TIMER
8155 ± 14% -21.8% 6377 ± 4% softirqs.CPU18.RCU
106852 ± 5% -16.0% 89762 ± 2% softirqs.CPU18.TIMER
109976 ± 3% -14.3% 94237 ± 3% softirqs.CPU180.TIMER
108138 ± 3% -13.9% 93133 ± 3% softirqs.CPU181.TIMER
108000 ± 3% -13.8% 93097 ± 3% softirqs.CPU182.TIMER
108330 ± 3% -13.7% 93452 ± 2% softirqs.CPU183.TIMER
109169 ± 3% -13.8% 94094 ± 3% softirqs.CPU184.TIMER
110704 ± 4% -14.9% 94224 ± 3% softirqs.CPU185.TIMER
109893 ± 3% -14.6% 93825 ± 3% softirqs.CPU186.TIMER
109112 ± 4% -13.2% 94667 ± 2% softirqs.CPU187.TIMER
108840 ± 3% -14.0% 93606 ± 3% softirqs.CPU188.TIMER
109071 ± 3% -14.1% 93682 ± 3% softirqs.CPU189.TIMER
104630 ± 6% -14.1% 89853 ± 2% softirqs.CPU19.TIMER
109646 ± 3% -14.2% 94107 ± 3% softirqs.CPU190.TIMER
112198 ± 2% -15.3% 94991 ± 4% softirqs.CPU191.TIMER
105022 ± 6% -14.8% 89428 ± 3% softirqs.CPU2.TIMER
105421 ± 6% -14.8% 89782 ± 3% softirqs.CPU20.TIMER
106244 ± 7% -14.4% 90984 ± 2% softirqs.CPU21.TIMER
105057 ± 6% -14.4% 89911 ± 3% softirqs.CPU22.TIMER
105186 ± 5% -14.8% 89616 ± 3% softirqs.CPU23.TIMER
12591 ± 44% -49.7% 6338 ± 7% softirqs.CPU24.RCU
105225 ± 6% -14.9% 89583 ± 4% softirqs.CPU24.TIMER
118018 ± 23% -23.8% 89881 ± 4% softirqs.CPU25.TIMER
102729 ± 3% -12.9% 89512 ± 3% softirqs.CPU26.TIMER
102630 ± 4% -12.3% 90046 ± 4% softirqs.CPU27.TIMER
103237 ± 4% -11.8% 91022 ± 5% softirqs.CPU28.TIMER
103231 ± 4% -12.5% 90286 ± 4% softirqs.CPU29.TIMER
104931 ± 6% -14.4% 89807 ± 2% softirqs.CPU3.TIMER
103513 ± 4% -12.7% 90393 ± 4% softirqs.CPU30.TIMER
103022 ± 4% -11.9% 90799 ± 4% softirqs.CPU31.TIMER
103254 ± 4% -13.3% 89550 ± 4% softirqs.CPU32.TIMER
102868 ± 4% -12.5% 90046 ± 4% softirqs.CPU33.TIMER
103456 ± 4% -13.0% 90056 ± 3% softirqs.CPU34.TIMER
103281 ± 4% -12.8% 90068 ± 4% softirqs.CPU35.TIMER
103255 ± 4% -12.4% 90447 ± 4% softirqs.CPU36.TIMER
102927 ± 4% -13.0% 89551 ± 3% softirqs.CPU37.TIMER
102188 ± 4% -11.4% 90581 ± 4% softirqs.CPU38.TIMER
102507 ± 4% -12.9% 89257 ± 3% softirqs.CPU39.TIMER
106326 ± 6% -15.0% 90396 ± 3% softirqs.CPU4.TIMER
102928 ± 4% -11.9% 90675 ± 4% softirqs.CPU40.TIMER
103776 ± 5% -12.1% 91230 ± 5% softirqs.CPU41.TIMER
103307 ± 4% -12.4% 90473 ± 4% softirqs.CPU42.TIMER
102151 ± 4% -12.3% 89629 ± 3% softirqs.CPU43.TIMER
103427 ± 5% -11.8% 91195 ± 3% softirqs.CPU44.TIMER
102922 ± 4% -12.2% 90400 ± 3% softirqs.CPU45.TIMER
103479 ± 4% -12.3% 90743 ± 4% softirqs.CPU46.TIMER
103087 ± 4% -12.0% 90697 ± 5% softirqs.CPU47.TIMER
103861 ± 4% -13.4% 89977 ± 4% softirqs.CPU48.TIMER
105856 ± 6% -14.6% 90411 ± 2% softirqs.CPU5.TIMER
103795 ± 4% -13.2% 90138 ± 4% softirqs.CPU50.TIMER
104110 ± 4% -13.1% 90501 ± 4% softirqs.CPU51.TIMER
105216 ± 4% -14.0% 90512 ± 3% softirqs.CPU52.TIMER
104823 ± 4% -12.6% 91580 ± 4% softirqs.CPU53.TIMER
106267 ± 2% -15.1% 90179 ± 3% softirqs.CPU54.TIMER
104174 ± 4% -12.4% 91280 ± 2% softirqs.CPU55.TIMER
104300 ± 3% -14.0% 89732 ± 3% softirqs.CPU56.TIMER
104373 ± 4% -12.9% 90934 ± 3% softirqs.CPU57.TIMER
104248 ± 4% -13.3% 90343 ± 3% softirqs.CPU58.TIMER
104205 ± 4% -12.6% 91057 ± 3% softirqs.CPU59.TIMER
106539 ± 6% -13.9% 91729 softirqs.CPU6.TIMER
103163 ± 4% -13.0% 89707 ± 3% softirqs.CPU60.TIMER
103501 ± 4% -12.6% 90420 ± 4% softirqs.CPU61.TIMER
103820 ± 4% -12.8% 90495 ± 3% softirqs.CPU62.TIMER
104800 ± 4% -12.6% 91566 ± 4% softirqs.CPU63.TIMER
104582 ± 4% -12.9% 91082 ± 3% softirqs.CPU64.TIMER
104601 ± 4% -13.3% 90703 ± 3% softirqs.CPU65.TIMER
103339 ± 4% -12.9% 89988 ± 3% softirqs.CPU66.TIMER
104105 ± 3% -13.9% 89659 ± 3% softirqs.CPU67.TIMER
105036 ± 2% -14.3% 90013 ± 3% softirqs.CPU68.TIMER
104809 ± 3% -13.6% 90589 ± 3% softirqs.CPU69.TIMER
104761 ± 5% -12.8% 91374 ± 3% softirqs.CPU7.TIMER
105291 ± 2% -13.6% 90953 ± 4% softirqs.CPU70.TIMER
104241 ± 4% -11.6% 92167 ± 3% softirqs.CPU71.TIMER
108443 ± 3% -12.4% 94961 ± 4% softirqs.CPU72.TIMER
125085 ± 20% -24.1% 94958 ± 3% softirqs.CPU73.TIMER
109162 ± 3% -13.5% 94476 ± 3% softirqs.CPU75.TIMER
110431 ± 3% -14.3% 94594 ± 2% softirqs.CPU76.TIMER
110073 ± 3% -13.4% 95277 ± 3% softirqs.CPU77.TIMER
110349 ± 3% -14.0% 94847 ± 3% softirqs.CPU78.TIMER
111314 -15.0% 94625 ± 3% softirqs.CPU79.TIMER
105558 ± 7% -14.7% 90020 ± 3% softirqs.CPU8.TIMER
109610 ± 3% -13.4% 94868 ± 2% softirqs.CPU80.TIMER
110865 ± 5% -14.2% 95126 ± 3% softirqs.CPU81.TIMER
109289 ± 3% -13.8% 94249 ± 3% softirqs.CPU82.TIMER
110000 ± 3% -13.6% 95001 ± 3% softirqs.CPU83.TIMER
109917 ± 3% -13.9% 94686 ± 3% softirqs.CPU84.TIMER
108751 ± 3% -14.0% 93489 ± 3% softirqs.CPU85.TIMER
109258 ± 4% -14.1% 93840 ± 3% softirqs.CPU87.TIMER
110311 ± 3% -14.0% 94868 ± 3% softirqs.CPU88.TIMER
110715 ± 3% -14.2% 94942 ± 3% softirqs.CPU89.TIMER
105782 ± 6% -14.9% 90003 ± 2% softirqs.CPU9.TIMER
110652 ± 3% -14.3% 94810 ± 3% softirqs.CPU90.TIMER
109169 ± 3% -13.8% 94129 ± 3% softirqs.CPU91.TIMER
109563 ± 3% -14.0% 94276 ± 3% softirqs.CPU92.TIMER
109832 ± 3% -13.8% 94625 ± 3% softirqs.CPU93.TIMER
110786 ± 3% -14.4% 94833 ± 3% softirqs.CPU94.TIMER
110757 -14.3% 94973 ± 3% softirqs.CPU95.TIMER
104117 ± 6% -14.5% 89029 ± 3% softirqs.CPU96.TIMER
103952 ± 6% -16.6% 86706 ± 3% softirqs.CPU97.TIMER
106433 ± 4% -14.2% 91372 ± 4% softirqs.CPU98.TIMER
105102 ± 6% -15.1% 89255 ± 3% softirqs.CPU99.TIMER
1364078 ± 4% -12.8% 1189977 ± 4% softirqs.RCU
20301703 ± 4% -13.7% 17520659 ± 3% softirqs.TIMER
642.00 ± 3% -15.4% 543.00 ± 2% interrupts.9:IO-APIC.9-fasteoi.acpi
634290 ± 3% -15.9% 533174 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
9908 ± 3% -21.0% 7829 ± 9% interrupts.CPU0.RES:Rescheduling_interrupts
642.00 ± 3% -15.4% 543.00 ± 2% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
634152 ± 3% -16.0% 532695 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
8594 ± 4% -28.5% 6148 ± 8% interrupts.CPU1.RES:Rescheduling_interrupts
634314 ± 3% -16.0% 532665 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
7552 ± 2% -26.6% 5546 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
634121 ± 3% -16.0% 532791 ± 2% interrupts.CPU100.LOC:Local_timer_interrupts
7143 ± 4% -25.3% 5339 ± 5% interrupts.CPU100.RES:Rescheduling_interrupts
634106 ± 3% -16.0% 532666 ± 2% interrupts.CPU101.LOC:Local_timer_interrupts
7195 -26.1% 5314 ± 7% interrupts.CPU101.RES:Rescheduling_interrupts
634045 ± 3% -16.0% 532657 ± 2% interrupts.CPU102.LOC:Local_timer_interrupts
7314 ± 3% -26.5% 5374 ± 8% interrupts.CPU102.RES:Rescheduling_interrupts
634109 ± 3% -16.0% 532704 ± 2% interrupts.CPU103.LOC:Local_timer_interrupts
7323 ± 2% -26.5% 5379 ± 5% interrupts.CPU103.RES:Rescheduling_interrupts
634076 ± 3% -16.0% 532636 ± 2% interrupts.CPU104.LOC:Local_timer_interrupts
7446 ± 3% -27.1% 5428 ± 7% interrupts.CPU104.RES:Rescheduling_interrupts
634149 ± 3% -16.0% 532646 ± 2% interrupts.CPU105.LOC:Local_timer_interrupts
7607 ± 4% -29.9% 5330 ± 5% interrupts.CPU105.RES:Rescheduling_interrupts
634052 ± 3% -16.0% 532670 ± 2% interrupts.CPU106.LOC:Local_timer_interrupts
7434 ± 2% -30.2% 5192 ± 6% interrupts.CPU106.RES:Rescheduling_interrupts
634112 ± 3% -16.0% 532729 ± 2% interrupts.CPU107.LOC:Local_timer_interrupts
7366 ± 3% -27.0% 5375 ± 7% interrupts.CPU107.RES:Rescheduling_interrupts
634231 ± 3% -16.0% 532661 ± 2% interrupts.CPU108.LOC:Local_timer_interrupts
7430 -28.8% 5289 ± 4% interrupts.CPU108.RES:Rescheduling_interrupts
634134 ± 3% -16.0% 532955 ± 2% interrupts.CPU109.LOC:Local_timer_interrupts
7248 ± 2% -25.5% 5403 ± 7% interrupts.CPU109.RES:Rescheduling_interrupts
634408 ± 3% -15.9% 533749 ± 2% interrupts.CPU11.LOC:Local_timer_interrupts
7236 -25.0% 5428 ± 6% interrupts.CPU11.RES:Rescheduling_interrupts
634240 ± 3% -16.0% 532701 ± 2% interrupts.CPU110.LOC:Local_timer_interrupts
7368 ± 3% -26.9% 5386 ± 6% interrupts.CPU110.RES:Rescheduling_interrupts
634329 ± 3% -16.0% 532986 ± 2% interrupts.CPU111.LOC:Local_timer_interrupts
7168 ± 3% -24.3% 5424 ± 4% interrupts.CPU111.RES:Rescheduling_interrupts
634186 ± 3% -16.0% 532696 ± 2% interrupts.CPU112.LOC:Local_timer_interrupts
7112 -23.7% 5423 ± 5% interrupts.CPU112.RES:Rescheduling_interrupts
634122 ± 3% -16.0% 532748 ± 2% interrupts.CPU113.LOC:Local_timer_interrupts
7465 ± 3% -27.9% 5380 ± 6% interrupts.CPU113.RES:Rescheduling_interrupts
634131 ± 3% -16.0% 532685 ± 2% interrupts.CPU114.LOC:Local_timer_interrupts
7410 -26.5% 5446 ± 4% interrupts.CPU114.RES:Rescheduling_interrupts
634271 ± 3% -16.0% 532746 ± 2% interrupts.CPU115.LOC:Local_timer_interrupts
7381 ± 3% -28.1% 5308 ± 5% interrupts.CPU115.RES:Rescheduling_interrupts
634124 ± 3% -16.0% 532742 ± 2% interrupts.CPU116.LOC:Local_timer_interrupts
7330 ± 3% -27.2% 5338 ± 6% interrupts.CPU116.RES:Rescheduling_interrupts
634319 ± 3% -16.0% 532644 ± 2% interrupts.CPU117.LOC:Local_timer_interrupts
7277 ± 2% -24.8% 5473 ± 7% interrupts.CPU117.RES:Rescheduling_interrupts
634275 ± 3% -16.0% 532876 ± 2% interrupts.CPU118.LOC:Local_timer_interrupts
7494 -28.4% 5362 ± 7% interrupts.CPU118.RES:Rescheduling_interrupts
634120 ± 3% -16.0% 532652 ± 2% interrupts.CPU119.LOC:Local_timer_interrupts
7238 -27.9% 5221 ± 7% interrupts.CPU119.RES:Rescheduling_interrupts
634095 ± 3% -16.0% 532744 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
7158 ± 2% -26.7% 5246 ± 3% interrupts.CPU12.RES:Rescheduling_interrupts
633975 ± 3% -16.0% 532712 ± 2% interrupts.CPU120.LOC:Local_timer_interrupts
7439 ± 7% -30.2% 5195 ± 4% interrupts.CPU120.RES:Rescheduling_interrupts
634066 ± 3% -16.0% 532710 ± 2% interrupts.CPU121.LOC:Local_timer_interrupts
7619 ± 5% -31.0% 5258 ± 3% interrupts.CPU121.RES:Rescheduling_interrupts
633982 ± 3% -15.9% 533343 ± 2% interrupts.CPU122.LOC:Local_timer_interrupts
7422 ± 7% -31.0% 5124 ± 5% interrupts.CPU122.RES:Rescheduling_interrupts
633950 ± 3% -15.9% 533224 ± 2% interrupts.CPU123.LOC:Local_timer_interrupts
7384 ± 9% -28.8% 5259 ± 6% interrupts.CPU123.RES:Rescheduling_interrupts
634068 ± 3% -16.0% 532757 ± 2% interrupts.CPU124.LOC:Local_timer_interrupts
7410 ± 5% -29.7% 5212 ± 4% interrupts.CPU124.RES:Rescheduling_interrupts
634147 ± 3% -16.0% 532745 ± 2% interrupts.CPU125.LOC:Local_timer_interrupts
7401 ± 8% -30.5% 5142 ± 6% interrupts.CPU125.RES:Rescheduling_interrupts
633921 ± 3% -16.0% 532683 ± 2% interrupts.CPU126.LOC:Local_timer_interrupts
7431 ± 7% -29.2% 5259 ± 4% interrupts.CPU126.RES:Rescheduling_interrupts
633973 ± 3% -16.0% 532755 ± 2% interrupts.CPU127.LOC:Local_timer_interrupts
7644 ± 8% -31.3% 5253 ± 5% interrupts.CPU127.RES:Rescheduling_interrupts
633956 ± 3% -16.0% 532726 ± 2% interrupts.CPU128.LOC:Local_timer_interrupts
7344 ± 4% -29.8% 5156 ± 5% interrupts.CPU128.RES:Rescheduling_interrupts
633983 ± 3% -16.0% 532681 ± 2% interrupts.CPU129.LOC:Local_timer_interrupts
7437 ± 6% -29.1% 5273 ± 6% interrupts.CPU129.RES:Rescheduling_interrupts
634191 ± 3% -16.0% 532705 ± 2% interrupts.CPU13.LOC:Local_timer_interrupts
7427 ± 3% -27.0% 5421 ± 7% interrupts.CPU13.RES:Rescheduling_interrupts
634029 ± 3% -16.1% 532066 ± 2% interrupts.CPU130.LOC:Local_timer_interrupts
7365 ± 9% -31.0% 5084 ± 5% interrupts.CPU130.RES:Rescheduling_interrupts
633961 ± 3% -16.0% 532693 ± 2% interrupts.CPU131.LOC:Local_timer_interrupts
7485 ± 6% -30.0% 5237 ± 6% interrupts.CPU131.RES:Rescheduling_interrupts
633990 ± 3% -15.9% 533190 ± 2% interrupts.CPU132.LOC:Local_timer_interrupts
7382 ± 8% -29.8% 5181 ± 4% interrupts.CPU132.RES:Rescheduling_interrupts
634112 ± 3% -15.9% 533232 ± 2% interrupts.CPU133.LOC:Local_timer_interrupts
7486 ± 8% -30.4% 5211 ± 4% interrupts.CPU133.RES:Rescheduling_interrupts
633991 ± 3% -15.9% 533323 ± 2% interrupts.CPU134.LOC:Local_timer_interrupts
7519 ± 6% -29.1% 5330 ± 6% interrupts.CPU134.RES:Rescheduling_interrupts
633976 ± 3% -15.9% 533179 ± 2% interrupts.CPU135.LOC:Local_timer_interrupts
7454 ± 5% -28.9% 5301 ± 4% interrupts.CPU135.RES:Rescheduling_interrupts
634000 ± 3% -15.9% 533238 ± 2% interrupts.CPU136.LOC:Local_timer_interrupts
7326 ± 6% -29.1% 5196 ± 3% interrupts.CPU136.RES:Rescheduling_interrupts
634099 ± 3% -15.7% 534298 ± 3% interrupts.CPU137.LOC:Local_timer_interrupts
7410 ± 7% -28.7% 5280 ± 6% interrupts.CPU137.RES:Rescheduling_interrupts
633962 ± 3% -15.9% 533226 ± 2% interrupts.CPU138.LOC:Local_timer_interrupts
7547 ± 9% -30.4% 5254 ± 6% interrupts.CPU138.RES:Rescheduling_interrupts
634018 ± 3% -16.0% 532784 ± 2% interrupts.CPU139.LOC:Local_timer_interrupts
7491 ± 6% -30.4% 5212 ± 6% interrupts.CPU139.RES:Rescheduling_interrupts
634141 ± 3% -16.0% 532878 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
7208 ± 2% -25.8% 5346 ± 7% interrupts.CPU14.RES:Rescheduling_interrupts
634110 ± 3% -16.0% 532694 ± 2% interrupts.CPU140.LOC:Local_timer_interrupts
7305 ± 5% -27.0% 5330 ± 4% interrupts.CPU140.RES:Rescheduling_interrupts
634118 ± 3% -15.9% 533348 ± 2% interrupts.CPU141.LOC:Local_timer_interrupts
7410 ± 9% -29.9% 5194 ± 6% interrupts.CPU141.RES:Rescheduling_interrupts
634014 ± 3% -15.9% 533465 ± 2% interrupts.CPU142.LOC:Local_timer_interrupts
7531 ± 6% -32.4% 5089 ± 5% interrupts.CPU142.RES:Rescheduling_interrupts
634019 ± 3% -16.0% 532744 ± 2% interrupts.CPU143.LOC:Local_timer_interrupts
7500 ± 5% -31.0% 5175 ± 6% interrupts.CPU143.RES:Rescheduling_interrupts
634152 ± 3% -16.0% 532844 ± 2% interrupts.CPU144.LOC:Local_timer_interrupts
7171 ± 12% -24.7% 5397 ± 8% interrupts.CPU144.RES:Rescheduling_interrupts
634077 ± 3% -15.9% 533014 ± 2% interrupts.CPU145.LOC:Local_timer_interrupts
7213 ± 11% -25.7% 5363 ± 10% interrupts.CPU145.RES:Rescheduling_interrupts
635125 ± 3% -16.0% 533295 ± 2% interrupts.CPU146.LOC:Local_timer_interrupts
7122 ± 13% -27.0% 5201 ± 8% interrupts.CPU146.RES:Rescheduling_interrupts
634196 ± 3% -16.0% 532855 ± 2% interrupts.CPU147.LOC:Local_timer_interrupts
7220 ± 12% -26.1% 5333 ± 8% interrupts.CPU147.RES:Rescheduling_interrupts
634068 ± 3% -16.0% 532844 ± 2% interrupts.CPU148.LOC:Local_timer_interrupts
6993 ± 12% -25.3% 5226 ± 9% interrupts.CPU148.RES:Rescheduling_interrupts
634086 ± 3% -16.0% 532719 ± 2% interrupts.CPU149.LOC:Local_timer_interrupts
7230 ± 10% -25.7% 5375 ± 8% interrupts.CPU149.RES:Rescheduling_interrupts
634249 ± 3% -16.0% 532684 ± 2% interrupts.CPU15.LOC:Local_timer_interrupts
7375 -28.6% 5262 ± 7% interrupts.CPU15.RES:Rescheduling_interrupts
635061 ± 3% -16.1% 532692 ± 2% interrupts.CPU150.LOC:Local_timer_interrupts
7180 ± 9% -25.5% 5352 ± 8% interrupts.CPU150.RES:Rescheduling_interrupts
634071 ± 3% -16.0% 532807 ± 2% interrupts.CPU151.LOC:Local_timer_interrupts
7317 ± 11% -27.4% 5310 ± 9% interrupts.CPU151.RES:Rescheduling_interrupts
634044 ± 3% -15.9% 533157 ± 2% interrupts.CPU152.LOC:Local_timer_interrupts
6988 ± 9% -24.1% 5303 ± 9% interrupts.CPU152.RES:Rescheduling_interrupts
634136 ± 3% -15.9% 533146 ± 2% interrupts.CPU153.LOC:Local_timer_interrupts
7287 ± 11% -25.5% 5430 ± 9% interrupts.CPU153.RES:Rescheduling_interrupts
634389 ± 3% -16.0% 533019 ± 2% interrupts.CPU154.LOC:Local_timer_interrupts
7120 ± 11% -25.4% 5313 ± 10% interrupts.CPU154.RES:Rescheduling_interrupts
634058 ± 3% -16.0% 532925 ± 2% interrupts.CPU155.LOC:Local_timer_interrupts
7266 ± 13% -27.2% 5292 ± 7% interrupts.CPU155.RES:Rescheduling_interrupts
634059 ± 3% -15.9% 533016 ± 2% interrupts.CPU156.LOC:Local_timer_interrupts
7124 ± 10% -24.9% 5348 ± 7% interrupts.CPU156.RES:Rescheduling_interrupts
634128 ± 3% -15.9% 533083 ± 2% interrupts.CPU157.LOC:Local_timer_interrupts
7063 ± 11% -26.8% 5172 ± 9% interrupts.CPU157.RES:Rescheduling_interrupts
634087 ± 3% -15.9% 533156 ± 2% interrupts.CPU158.LOC:Local_timer_interrupts
7132 ± 12% -25.5% 5313 ± 8% interrupts.CPU158.RES:Rescheduling_interrupts
634128 ± 3% -16.0% 532919 ± 2% interrupts.CPU159.LOC:Local_timer_interrupts
7244 ± 11% -26.7% 5309 ± 11% interrupts.CPU159.RES:Rescheduling_interrupts
634462 ± 3% -16.0% 532679 ± 2% interrupts.CPU16.LOC:Local_timer_interrupts
7357 -26.3% 5422 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
634159 ± 3% -16.0% 532804 ± 2% interrupts.CPU160.LOC:Local_timer_interrupts
7101 ± 11% -25.4% 5294 ± 8% interrupts.CPU160.RES:Rescheduling_interrupts
634216 ± 3% -16.0% 532870 ± 2% interrupts.CPU161.LOC:Local_timer_interrupts
7115 ± 10% -25.6% 5294 ± 10% interrupts.CPU161.RES:Rescheduling_interrupts
634367 ± 3% -16.0% 532831 ± 2% interrupts.CPU162.LOC:Local_timer_interrupts
7255 ± 11% -24.4% 5485 ± 8% interrupts.CPU162.RES:Rescheduling_interrupts
634183 ± 3% -16.0% 532842 ± 2% interrupts.CPU163.LOC:Local_timer_interrupts
7237 ± 10% -27.1% 5278 ± 8% interrupts.CPU163.RES:Rescheduling_interrupts
634205 ± 3% -16.0% 532754 ± 2% interrupts.CPU164.LOC:Local_timer_interrupts
7309 ± 7% -27.5% 5299 ± 9% interrupts.CPU164.RES:Rescheduling_interrupts
634191 ± 3% -15.9% 533112 ± 2% interrupts.CPU165.LOC:Local_timer_interrupts
7084 ± 10% -24.2% 5371 ± 10% interrupts.CPU165.RES:Rescheduling_interrupts
634194 ± 3% -16.0% 532868 ± 2% interrupts.CPU166.LOC:Local_timer_interrupts
7308 ± 11% -28.3% 5241 ± 11% interrupts.CPU166.RES:Rescheduling_interrupts
634134 ± 3% -16.0% 532956 ± 2% interrupts.CPU167.LOC:Local_timer_interrupts
7321 ± 11% -27.8% 5286 ± 9% interrupts.CPU167.RES:Rescheduling_interrupts
634549 ± 3% -16.0% 532980 ± 2% interrupts.CPU168.LOC:Local_timer_interrupts
7161 ± 8% -28.8% 5099 ± 4% interrupts.CPU168.RES:Rescheduling_interrupts
634193 ± 3% -16.0% 532979 ± 2% interrupts.CPU169.LOC:Local_timer_interrupts
7186 ± 10% -28.4% 5144 ± 7% interrupts.CPU169.RES:Rescheduling_interrupts
635042 ± 3% -16.1% 532669 ± 2% interrupts.CPU17.LOC:Local_timer_interrupts
7442 -27.3% 5408 ± 5% interrupts.CPU17.RES:Rescheduling_interrupts
634137 ± 3% -16.0% 532801 ± 2% interrupts.CPU170.LOC:Local_timer_interrupts
7329 ± 9% -28.4% 5251 ± 5% interrupts.CPU170.RES:Rescheduling_interrupts
634237 ± 3% -16.0% 533026 ± 2% interrupts.CPU171.LOC:Local_timer_interrupts
7311 ± 8% -29.5% 5155 ± 7% interrupts.CPU171.RES:Rescheduling_interrupts
634216 ± 3% -16.0% 532758 ± 2% interrupts.CPU172.LOC:Local_timer_interrupts
7249 ± 9% -28.2% 5202 ± 6% interrupts.CPU172.RES:Rescheduling_interrupts
634211 ± 3% -16.0% 532822 ± 2% interrupts.CPU173.LOC:Local_timer_interrupts
7246 ± 6% -27.7% 5238 ± 7% interrupts.CPU173.RES:Rescheduling_interrupts
634351 ± 3% -16.0% 532749 ± 2% interrupts.CPU174.LOC:Local_timer_interrupts
7297 ± 7% -27.7% 5274 ± 8% interrupts.CPU174.RES:Rescheduling_interrupts
635175 ± 3% -16.1% 532816 ± 2% interrupts.CPU175.LOC:Local_timer_interrupts
7381 ± 8% -28.1% 5303 ± 8% interrupts.CPU175.RES:Rescheduling_interrupts
634315 ± 3% -16.0% 532862 ± 2% interrupts.CPU176.LOC:Local_timer_interrupts
7328 ± 5% -27.7% 5297 ± 6% interrupts.CPU176.RES:Rescheduling_interrupts
634160 ± 3% -16.0% 532881 ± 2% interrupts.CPU177.LOC:Local_timer_interrupts
7441 ± 7% -29.4% 5250 ± 6% interrupts.CPU177.RES:Rescheduling_interrupts
634148 ± 3% -16.0% 532905 ± 2% interrupts.CPU178.LOC:Local_timer_interrupts
7172 ± 7% -25.6% 5333 ± 8% interrupts.CPU178.RES:Rescheduling_interrupts
634134 ± 3% -16.0% 532871 ± 2% interrupts.CPU179.LOC:Local_timer_interrupts
7405 ± 9% -28.7% 5281 ± 6% interrupts.CPU179.RES:Rescheduling_interrupts
634239 ± 3% -16.0% 532764 ± 2% interrupts.CPU18.LOC:Local_timer_interrupts
7155 ± 3% -25.0% 5366 ± 6% interrupts.CPU18.RES:Rescheduling_interrupts
634130 ± 3% -16.0% 532851 ± 2% interrupts.CPU180.LOC:Local_timer_interrupts
7197 ± 7% -24.7% 5417 ± 6% interrupts.CPU180.RES:Rescheduling_interrupts
634172 ± 3% -16.0% 532763 ± 2% interrupts.CPU181.LOC:Local_timer_interrupts
7331 ± 7% -26.1% 5420 ± 7% interrupts.CPU181.RES:Rescheduling_interrupts
634250 ± 3% -16.0% 532818 ± 2% interrupts.CPU182.LOC:Local_timer_interrupts
7351 ± 8% -30.2% 5132 ± 7% interrupts.CPU182.RES:Rescheduling_interrupts
634207 ± 3% -16.0% 532963 ± 2% interrupts.CPU183.LOC:Local_timer_interrupts
7384 ± 10% -31.3% 5076 ± 9% interrupts.CPU183.RES:Rescheduling_interrupts
634167 ± 3% -16.0% 532950 ± 2% interrupts.CPU184.LOC:Local_timer_interrupts
7327 ± 10% -28.8% 5214 ± 7% interrupts.CPU184.RES:Rescheduling_interrupts
634117 ± 3% -15.9% 533074 ± 2% interrupts.CPU185.LOC:Local_timer_interrupts
7361 ± 10% -28.6% 5259 ± 7% interrupts.CPU185.RES:Rescheduling_interrupts
634153 ± 3% -16.0% 532822 ± 2% interrupts.CPU186.LOC:Local_timer_interrupts
7247 ± 8% -27.8% 5235 ± 7% interrupts.CPU186.RES:Rescheduling_interrupts
634283 ± 3% -16.0% 532964 ± 2% interrupts.CPU187.LOC:Local_timer_interrupts
7211 ± 9% -26.7% 5285 ± 4% interrupts.CPU187.RES:Rescheduling_interrupts
634375 ± 3% -16.0% 532948 ± 2% interrupts.CPU188.LOC:Local_timer_interrupts
7320 ± 7% -27.2% 5328 ± 6% interrupts.CPU188.RES:Rescheduling_interrupts
634116 ± 3% -16.0% 532828 ± 2% interrupts.CPU189.LOC:Local_timer_interrupts
7274 ± 7% -27.6% 5264 ± 7% interrupts.CPU189.RES:Rescheduling_interrupts
634131 ± 3% -16.0% 532720 ± 2% interrupts.CPU19.LOC:Local_timer_interrupts
7327 -27.5% 5315 ± 8% interrupts.CPU19.RES:Rescheduling_interrupts
634357 ± 3% -16.0% 532773 ± 2% interrupts.CPU190.LOC:Local_timer_interrupts
7330 ± 10% -29.4% 5178 ± 7% interrupts.CPU190.RES:Rescheduling_interrupts
634561 ± 3% -16.0% 532886 ± 2% interrupts.CPU191.LOC:Local_timer_interrupts
7294 ± 9% -29.0% 5180 ± 7% interrupts.CPU191.RES:Rescheduling_interrupts
634356 ± 3% -16.0% 532713 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
7538 -29.1% 5346 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
634133 ± 3% -16.0% 532820 ± 2% interrupts.CPU20.LOC:Local_timer_interrupts
7587 ± 2% -28.8% 5402 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts
634403 ± 3% -15.9% 533653 ± 2% interrupts.CPU21.LOC:Local_timer_interrupts
7399 ± 2% -26.4% 5444 ± 6% interrupts.CPU21.RES:Rescheduling_interrupts
634133 ± 3% -16.0% 532675 ± 2% interrupts.CPU22.LOC:Local_timer_interrupts
7293 -26.3% 5377 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
634112 ± 3% -16.0% 532641 ± 2% interrupts.CPU23.LOC:Local_timer_interrupts
7315 -27.2% 5328 ± 6% interrupts.CPU23.RES:Rescheduling_interrupts
635686 ± 3% -16.2% 532889 ± 2% interrupts.CPU24.LOC:Local_timer_interrupts
7730 ± 6% -26.0% 5718 ± 6% interrupts.CPU24.RES:Rescheduling_interrupts
635207 ± 3% -16.0% 533270 ± 2% interrupts.CPU25.LOC:Local_timer_interrupts
7395 ± 6% -28.8% 5263 ± 8% interrupts.CPU25.RES:Rescheduling_interrupts
634047 ± 3% -16.0% 532739 ± 2% interrupts.CPU26.LOC:Local_timer_interrupts
7425 ± 7% -29.7% 5219 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
634063 ± 3% -15.9% 533315 ± 2% interrupts.CPU27.LOC:Local_timer_interrupts
7480 ± 9% -29.7% 5261 ± 5% interrupts.CPU27.RES:Rescheduling_interrupts
634083 ± 3% -15.9% 533301 ± 2% interrupts.CPU28.LOC:Local_timer_interrupts
7524 ± 7% -29.7% 5289 ± 5% interrupts.CPU28.RES:Rescheduling_interrupts
633993 ± 3% -16.0% 532726 ± 2% interrupts.CPU29.LOC:Local_timer_interrupts
7534 ± 5% -32.0% 5124 ± 7% interrupts.CPU29.RES:Rescheduling_interrupts
634137 ± 3% -16.0% 532683 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
7310 -27.3% 5313 ± 7% interrupts.CPU3.RES:Rescheduling_interrupts
633978 ± 3% -16.0% 532753 ± 2% interrupts.CPU30.LOC:Local_timer_interrupts
7355 ± 6% -29.2% 5210 ± 5% interrupts.CPU30.RES:Rescheduling_interrupts
634008 ± 3% -16.0% 532735 ± 2% interrupts.CPU31.LOC:Local_timer_interrupts
7468 ± 5% -29.3% 5283 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
634000 ± 3% -16.0% 532724 ± 2% interrupts.CPU32.LOC:Local_timer_interrupts
7611 ± 8% -31.1% 5241 ± 4% interrupts.CPU32.RES:Rescheduling_interrupts
633999 ± 3% -15.9% 533277 ± 2% interrupts.CPU33.LOC:Local_timer_interrupts
7726 ± 6% -32.6% 5206 ± 5% interrupts.CPU33.RES:Rescheduling_interrupts
634054 ± 3% -16.0% 532758 ± 2% interrupts.CPU34.LOC:Local_timer_interrupts
7284 ± 9% -28.2% 5231 ± 5% interrupts.CPU34.RES:Rescheduling_interrupts
633986 ± 3% -16.0% 532696 ± 2% interrupts.CPU35.LOC:Local_timer_interrupts
7313 ± 7% -29.1% 5181 ± 6% interrupts.CPU35.RES:Rescheduling_interrupts
634075 ± 3% -16.0% 532736 ± 2% interrupts.CPU36.LOC:Local_timer_interrupts
7391 ± 6% -30.3% 5148 ± 7% interrupts.CPU36.RES:Rescheduling_interrupts
634091 ± 3% -16.0% 532757 ± 2% interrupts.CPU37.LOC:Local_timer_interrupts
7342 ± 5% -27.5% 5321 ± 5% interrupts.CPU37.RES:Rescheduling_interrupts
634060 ± 3% -15.9% 533239 ± 2% interrupts.CPU38.LOC:Local_timer_interrupts
7459 ± 8% -28.3% 5347 ± 8% interrupts.CPU38.RES:Rescheduling_interrupts
633976 ± 3% -15.9% 533234 ± 2% interrupts.CPU39.LOC:Local_timer_interrupts
7384 ± 8% -28.7% 5263 ± 6% interrupts.CPU39.RES:Rescheduling_interrupts
634096 ± 3% -16.0% 532763 ± 2% interrupts.CPU4.LOC:Local_timer_interrupts
7222 ± 2% -26.3% 5320 ± 5% interrupts.CPU4.RES:Rescheduling_interrupts
634079 ± 3% -15.9% 533250 ± 2% interrupts.CPU40.LOC:Local_timer_interrupts
7381 ± 6% -28.9% 5244 ± 6% interrupts.CPU40.RES:Rescheduling_interrupts
634081 ± 3% -16.0% 532769 ± 2% interrupts.CPU41.LOC:Local_timer_interrupts
7407 ± 5% -29.4% 5230 ± 6% interrupts.CPU41.RES:Rescheduling_interrupts
633977 ± 3% -15.9% 533284 ± 2% interrupts.CPU42.LOC:Local_timer_interrupts
7344 ± 5% -27.7% 5308 ± 4% interrupts.CPU42.RES:Rescheduling_interrupts
633982 ± 3% -16.0% 532725 ± 2% interrupts.CPU43.LOC:Local_timer_interrupts
7388 ± 8% -28.8% 5258 ± 7% interrupts.CPU43.RES:Rescheduling_interrupts
634132 ± 3% -16.0% 532705 ± 2% interrupts.CPU44.LOC:Local_timer_interrupts
7408 ± 5% -27.3% 5388 ± 7% interrupts.CPU44.RES:Rescheduling_interrupts
634077 ± 3% -16.0% 532717 ± 2% interrupts.CPU45.LOC:Local_timer_interrupts
7598 ± 7% -30.9% 5252 ± 6% interrupts.CPU45.RES:Rescheduling_interrupts
633959 ± 3% -15.9% 533334 ± 2% interrupts.CPU46.LOC:Local_timer_interrupts
7429 ± 7% -27.9% 5359 ± 5% interrupts.CPU46.RES:Rescheduling_interrupts
634004 ± 3% -16.0% 532726 ± 2% interrupts.CPU47.LOC:Local_timer_interrupts
7433 ± 10% -30.0% 5207 ± 2% interrupts.CPU47.RES:Rescheduling_interrupts
634082 ± 3% -15.9% 532972 ± 2% interrupts.CPU48.LOC:Local_timer_interrupts
7515 ± 12% -26.7% 5509 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
634057 ± 3% -15.8% 533744 ± 2% interrupts.CPU49.LOC:Local_timer_interrupts
7234 ± 10% -25.3% 5403 ± 7% interrupts.CPU49.RES:Rescheduling_interrupts
634086 ± 3% -16.0% 532629 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
7390 ± 3% -27.5% 5361 ± 7% interrupts.CPU5.RES:Rescheduling_interrupts
634105 ± 3% -15.9% 533100 ± 2% interrupts.CPU50.LOC:Local_timer_interrupts
7201 ± 13% -25.1% 5395 ± 10% interrupts.CPU50.RES:Rescheduling_interrupts
634287 ± 3% -16.0% 532940 ± 2% interrupts.CPU51.LOC:Local_timer_interrupts
7317 ± 14% -26.2% 5402 ± 10% interrupts.CPU51.RES:Rescheduling_interrupts
634129 ± 3% -16.0% 532717 ± 2% interrupts.CPU52.LOC:Local_timer_interrupts
7046 ± 10% -25.1% 5278 ± 8% interrupts.CPU52.RES:Rescheduling_interrupts
634048 ± 3% -16.0% 532721 ± 2% interrupts.CPU53.LOC:Local_timer_interrupts
7163 ± 12% -26.3% 5277 ± 7% interrupts.CPU53.RES:Rescheduling_interrupts
634378 ± 3% -16.0% 532724 ± 2% interrupts.CPU54.LOC:Local_timer_interrupts
7253 ± 10% -27.0% 5293 ± 8% interrupts.CPU54.RES:Rescheduling_interrupts
634132 ± 3% -15.8% 533627 ± 2% interrupts.CPU55.LOC:Local_timer_interrupts
7272 ± 8% -25.8% 5397 ± 7% interrupts.CPU55.RES:Rescheduling_interrupts
634421 ± 3% -16.0% 532797 ± 2% interrupts.CPU56.LOC:Local_timer_interrupts
7338 ± 10% -27.8% 5299 ± 9% interrupts.CPU56.RES:Rescheduling_interrupts
634120 ± 3% -15.9% 533237 ± 2% interrupts.CPU57.LOC:Local_timer_interrupts
7304 ± 9% -26.6% 5359 ± 8% interrupts.CPU57.RES:Rescheduling_interrupts
634083 ± 3% -15.9% 532950 ± 2% interrupts.CPU58.LOC:Local_timer_interrupts
7216 ± 10% -25.7% 5359 ± 8% interrupts.CPU58.RES:Rescheduling_interrupts
634064 ± 3% -15.9% 533077 ± 2% interrupts.CPU59.LOC:Local_timer_interrupts
7300 ± 9% -26.6% 5356 ± 8% interrupts.CPU59.RES:Rescheduling_interrupts
634059 ± 3% -16.0% 532625 ± 2% interrupts.CPU6.LOC:Local_timer_interrupts
7290 ± 3% -25.7% 5420 ± 6% interrupts.CPU6.RES:Rescheduling_interrupts
634099 ± 3% -15.9% 533126 ± 2% interrupts.CPU60.LOC:Local_timer_interrupts
7316 ± 10% -26.7% 5362 ± 10% interrupts.CPU60.RES:Rescheduling_interrupts
634122 ± 3% -15.9% 533131 ± 2% interrupts.CPU61.LOC:Local_timer_interrupts
7113 ± 10% -24.7% 5355 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
634065 ± 3% -15.9% 533141 ± 2% interrupts.CPU62.LOC:Local_timer_interrupts
7179 ± 11% -27.6% 5200 ± 9% interrupts.CPU62.RES:Rescheduling_interrupts
634051 ± 3% -15.9% 533113 ± 2% interrupts.CPU63.LOC:Local_timer_interrupts
7061 ± 9% -25.5% 5259 ± 8% interrupts.CPU63.RES:Rescheduling_interrupts
634171 ± 3% -16.0% 532895 ± 2% interrupts.CPU64.LOC:Local_timer_interrupts
7283 ± 11% -26.3% 5369 ± 9% interrupts.CPU64.RES:Rescheduling_interrupts
634154 ± 3% -16.0% 532778 ± 2% interrupts.CPU65.LOC:Local_timer_interrupts
7225 ± 11% -25.7% 5369 ± 10% interrupts.CPU65.RES:Rescheduling_interrupts
634161 ± 3% -16.0% 532873 ± 2% interrupts.CPU66.LOC:Local_timer_interrupts
7149 ± 12% -26.2% 5273 ± 12% interrupts.CPU66.RES:Rescheduling_interrupts
634140 ± 3% -16.0% 532774 ± 2% interrupts.CPU67.LOC:Local_timer_interrupts
7198 ± 12% -24.9% 5405 ± 8% interrupts.CPU67.RES:Rescheduling_interrupts
634164 ± 3% -16.0% 532734 ± 2% interrupts.CPU68.LOC:Local_timer_interrupts
7316 ± 10% -26.1% 5408 ± 8% interrupts.CPU68.RES:Rescheduling_interrupts
634144 ± 3% -16.0% 532765 ± 2% interrupts.CPU69.LOC:Local_timer_interrupts
7245 ± 11% -24.6% 5463 ± 8% interrupts.CPU69.RES:Rescheduling_interrupts
634095 ± 3% -16.0% 532674 ± 2% interrupts.CPU7.LOC:Local_timer_interrupts
7516 ± 4% -28.8% 5348 ± 5% interrupts.CPU7.RES:Rescheduling_interrupts
634163 ± 3% -15.8% 533993 ± 2% interrupts.CPU70.LOC:Local_timer_interrupts
7191 ± 12% -27.9% 5181 ± 8% interrupts.CPU70.RES:Rescheduling_interrupts
634170 ± 3% -16.0% 532959 ± 2% interrupts.CPU71.LOC:Local_timer_interrupts
7122 ± 12% -26.8% 5215 ± 10% interrupts.CPU71.RES:Rescheduling_interrupts
634311 ± 3% -16.0% 532876 ± 2% interrupts.CPU72.LOC:Local_timer_interrupts
7669 ± 9% -28.6% 5479 ± 6% interrupts.CPU72.RES:Rescheduling_interrupts
635155 ± 3% -16.1% 532973 ± 2% interrupts.CPU73.LOC:Local_timer_interrupts
7465 ± 9% -29.7% 5249 ± 8% interrupts.CPU73.RES:Rescheduling_interrupts
634490 ± 3% -15.9% 533852 ± 2% interrupts.CPU74.LOC:Local_timer_interrupts
7396 ± 8% -29.7% 5199 ± 3% interrupts.CPU74.RES:Rescheduling_interrupts
634277 ± 3% -16.0% 532935 ± 2% interrupts.CPU75.LOC:Local_timer_interrupts
7398 ± 7% -29.4% 5223 ± 5% interrupts.CPU75.RES:Rescheduling_interrupts
634280 ± 3% -16.0% 532911 ± 2% interrupts.CPU76.LOC:Local_timer_interrupts
7365 ± 9% -28.1% 5296 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
634260 ± 3% -16.0% 532884 ± 2% interrupts.CPU77.LOC:Local_timer_interrupts
7330 ± 7% -30.1% 5125 ± 8% interrupts.CPU77.RES:Rescheduling_interrupts
634264 ± 3% -16.0% 532981 ± 2% interrupts.CPU78.LOC:Local_timer_interrupts
7482 ± 9% -29.7% 5263 ± 7% interrupts.CPU78.RES:Rescheduling_interrupts
634240 ± 3% -16.0% 532999 ± 2% interrupts.CPU79.LOC:Local_timer_interrupts
7460 ± 7% -29.6% 5251 ± 8% interrupts.CPU79.RES:Rescheduling_interrupts
634191 ± 3% -15.9% 533275 ± 2% interrupts.CPU8.LOC:Local_timer_interrupts
7201 ± 2% -22.8% 5562 ± 4% interrupts.CPU8.RES:Rescheduling_interrupts
634193 ± 3% -15.9% 533039 ± 2% interrupts.CPU80.LOC:Local_timer_interrupts
7418 ± 9% -30.4% 5160 ± 9% interrupts.CPU80.RES:Rescheduling_interrupts
634206 ± 3% -16.0% 532986 ± 2% interrupts.CPU81.LOC:Local_timer_interrupts
7249 ± 9% -27.7% 5240 ± 6% interrupts.CPU81.RES:Rescheduling_interrupts
634248 ± 3% -16.0% 532931 ± 2% interrupts.CPU82.LOC:Local_timer_interrupts
5456 ± 34% +60.2% 8743 interrupts.CPU82.NMI:Non-maskable_interrupts
5456 ± 34% +60.2% 8743 interrupts.CPU82.PMI:Performance_monitoring_interrupts
7401 ± 8% -29.2% 5240 ± 7% interrupts.CPU82.RES:Rescheduling_interrupts
634133 ± 3% -15.8% 533998 ± 2% interrupts.CPU83.LOC:Local_timer_interrupts
7310 ± 8% -30.2% 5101 ± 5% interrupts.CPU83.RES:Rescheduling_interrupts
634164 ± 3% -16.0% 532834 ± 2% interrupts.CPU84.LOC:Local_timer_interrupts
7204 ± 10% -26.2% 5315 ± 8% interrupts.CPU84.RES:Rescheduling_interrupts
634150 ± 3% -16.0% 532774 ± 2% interrupts.CPU85.LOC:Local_timer_interrupts
7314 ± 7% -28.5% 5232 ± 7% interrupts.CPU85.RES:Rescheduling_interrupts
634258 ± 3% -15.9% 533624 ± 2% interrupts.CPU86.LOC:Local_timer_interrupts
7204 ± 7% -26.8% 5276 ± 5% interrupts.CPU86.RES:Rescheduling_interrupts
634234 ± 3% -16.0% 532968 ± 2% interrupts.CPU87.LOC:Local_timer_interrupts
7467 ± 7% -28.8% 5318 ± 7% interrupts.CPU87.RES:Rescheduling_interrupts
634211 ± 3% -16.0% 532949 ± 2% interrupts.CPU88.LOC:Local_timer_interrupts
7230 ± 7% -27.3% 5253 ± 6% interrupts.CPU88.RES:Rescheduling_interrupts
634100 ± 3% -16.0% 532763 ± 2% interrupts.CPU89.LOC:Local_timer_interrupts
7370 ± 8% -28.6% 5259 ± 5% interrupts.CPU89.RES:Rescheduling_interrupts
634536 ± 3% -16.0% 532972 ± 2% interrupts.CPU9.LOC:Local_timer_interrupts
7332 ± 4% -25.6% 5455 ± 5% interrupts.CPU9.RES:Rescheduling_interrupts
634176 ± 3% -16.0% 532829 ± 2% interrupts.CPU90.LOC:Local_timer_interrupts
7471 ± 4% -31.0% 5155 ± 3% interrupts.CPU90.RES:Rescheduling_interrupts
634204 ± 3% -16.0% 532913 ± 2% interrupts.CPU91.LOC:Local_timer_interrupts
7343 ± 6% -30.0% 5142 ± 9% interrupts.CPU91.RES:Rescheduling_interrupts
634200 ± 3% -16.0% 532870 ± 2% interrupts.CPU92.LOC:Local_timer_interrupts
7430 ± 8% -29.4% 5245 ± 7% interrupts.CPU92.RES:Rescheduling_interrupts
634126 ± 3% -16.0% 532785 ± 2% interrupts.CPU93.LOC:Local_timer_interrupts
7336 ± 5% -29.9% 5143 ± 6% interrupts.CPU93.RES:Rescheduling_interrupts
634192 ± 3% -16.0% 532833 ± 2% interrupts.CPU94.LOC:Local_timer_interrupts
7059 ± 9% -28.2% 5068 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
634264 ± 3% -16.0% 532983 ± 2% interrupts.CPU95.LOC:Local_timer_interrupts
10638 ± 5% -25.0% 7977 ± 4% interrupts.CPU95.RES:Rescheduling_interrupts
634082 ± 3% -16.0% 532705 ± 2% interrupts.CPU96.LOC:Local_timer_interrupts
7252 -25.6% 5397 ± 7% interrupts.CPU96.RES:Rescheduling_interrupts
634167 ± 3% -16.0% 532675 ± 2% interrupts.CPU97.LOC:Local_timer_interrupts
7384 ± 2% -27.3% 5366 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
634167 ± 3% -16.0% 532722 ± 2% interrupts.CPU98.LOC:Local_timer_interrupts
7396 ± 2% -24.8% 5561 ± 9% interrupts.CPU98.RES:Rescheduling_interrupts
634129 ± 3% -16.0% 532777 ± 2% interrupts.CPU99.LOC:Local_timer_interrupts
7354 ± 2% -29.5% 5181 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
1.218e+08 ± 3% -16.0% 1.023e+08 ± 2% interrupts.LOC:Local_timer_interrupts
192.00 -100.0% 0.00 interrupts.MCP:Machine_check_polls
1414991 ± 6% -27.7% 1022722 ± 6% interrupts.RES:Rescheduling_interrupts
584.50 ± 6% +21.0% 707.50 ± 10% interrupts.TLB:TLB_shootdowns
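The comparison rows above follow the lkp report layout: base-commit value (optionally with a "± N%" stddev), the percent change, the head-commit value (again optionally with stddev), then the metric name. A minimal parsing sketch, assuming that format (the regex and field names here are illustrative, not part of lkp-tests):

```python
import re

# Assumed row layout: <base> [± N%] <change>% <head> [± N%] <metric>
ROW = re.compile(
    r"^\s*(?P<base>[\d.e+]+)\s*(?:±\s*(?P<base_sd>\d+)%\s*)?"
    r"(?P<change>[+-][\d.]+)%\s*"
    r"(?P<head>[\d.e+]+)\s*(?:±\s*(?P<head_sd>\d+)%\s*)?"
    r"(?P<metric>\S+)\s*$"
)

line = "634237 ± 3% -16.0% 533026 ± 2% interrupts.CPU171.LOC:Local_timer_interrupts"
m = ROW.match(line)
base, head = float(m["base"]), float(m["head"])
# The quoted change is just (head - base) / base, rounded to one decimal:
print(m["metric"], round((head - base) / base * 100, 1))
```

Recomputing the change this way reproduces the -16.0% printed in the row.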
aim7.jobs-per-min
35000 +-+------O-----------------------------------------O----------------+
| O O O O O O O |
34000 +-O O O O O O |
33000 +-+ O O O O |
| O O O |
32000 O-+ O O O O O O |
31000 +-+ |
| |
30000 +-+ +. |
29000 +-+ +. +. +. + +. .+ .+. |
| : + +. .+ : +.+ : ++.+ + + .+.++.+ + |
28000 +-+ : :+ +. .+.+.+.+ : : + : + :|
27000 +-+. : + + +. : +. : :|
| + + + |
26000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[ipv6] d5382fef70: kernel_selftests.net.fib_tests.sh.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: d5382fef70ce273608d6fc652c24f075de3737ef ("ipv6: Stop sending in-kernel notifications for each nexthop")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 8 threads Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 16G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
# selftests: net: fib_tests.sh
#
# Single path route test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# Nexthop device deleted
# TEST: IPv4 fibmatch - no route [ OK ]
# TEST: IPv6 fibmatch - no route [ OK ]
#
# Multipath route test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# One nexthop device deleted
# TEST: IPv4 - multipath route removed on delete [ OK ]
# TEST: IPv6 - multipath down to single path [ OK ]
# Second nexthop device deleted
# TEST: IPv6 - no route [ OK ]
#
# Single path, admin down
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# Route deleted on down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
#
# Admin down multipath
# Verify start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# One device down, one up
# TEST: IPv4 fibmatch on down device [ OK ]
# TEST: IPv6 fibmatch on down device [ OK ]
# TEST: IPv4 fibmatch on up device [ OK ]
# TEST: IPv6 fibmatch on up device [ OK ]
# TEST: IPv4 flags on down device [ OK ]
# TEST: IPv6 flags on down device [ OK ]
# TEST: IPv4 flags on up device [ OK ]
# TEST: IPv6 flags on up device [ OK ]
# Other device down and up
# TEST: IPv4 fibmatch on down device [ OK ]
# TEST: IPv6 fibmatch on down device [ OK ]
# TEST: IPv4 fibmatch on up device [ OK ]
# TEST: IPv6 fibmatch on up device [ OK ]
# TEST: IPv4 flags on down device [ OK ]
# TEST: IPv6 flags on down device [ OK ]
# TEST: IPv4 flags on up device [ OK ]
# TEST: IPv6 flags on up device [ OK ]
# Both devices down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
#
# Local carrier tests - single path
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 - no linkdown flag [ OK ]
# TEST: IPv6 - no linkdown flag [ OK ]
# Carrier off on nexthop
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 - linkdown flag set [ OK ]
# TEST: IPv6 - linkdown flag set [ OK ]
# Route to local address with carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
#
# Single path route carrier test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 no linkdown flag [ OK ]
# TEST: IPv6 no linkdown flag [ OK ]
# Carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
# Second address added with carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
#
# IPv4 nexthop tests
# <<< write me >>>
#
# IPv6 nexthop tests
# TEST: Directly connected nexthop, unicast address [ OK ]
# TEST: Directly connected nexthop, unicast address with device [ OK ]
# TEST: Gateway is linklocal address [ OK ]
# TEST: Gateway is linklocal address, no device [ OK ]
# TEST: Gateway can not be local unicast address [ OK ]
# TEST: Gateway can not be local unicast address, with device [ OK ]
# TEST: Gateway can not be a local linklocal address [ OK ]
# TEST: Gateway can be local address in a VRF [ OK ]
# TEST: Gateway can be local address in a VRF, with device [ OK ]
# TEST: Gateway can be local linklocal address in a VRF [ OK ]
# TEST: Redirect to VRF lookup [ OK ]
# TEST: VRF route, gateway can be local address in default VRF [ OK ]
# TEST: VRF route, gateway can not be a local address [ OK ]
# TEST: VRF route, gateway can not be a local addr with device [ OK ]
#
# IPv6 route add / append tests
# TEST: Attempt to add duplicate route - gw [ OK ]
# TEST: Attempt to add duplicate route - dev only [ OK ]
# TEST: Attempt to add duplicate route - reject route [ OK ]
# TEST: Append nexthop to existing route - gw [ OK ]
# TEST: Add multipath route [ OK ]
# TEST: Attempt to add duplicate multipath route [ OK ]
# TEST: Route add with different metrics [ OK ]
# TEST: Route delete with metric [ OK ]
#
# IPv6 route replace tests
# TEST: Single path with single path [ OK ]
# TEST: Single path with multipath [ OK ]
# TEST: Single path with single path via multipath attribute [ OK ]
# TEST: Invalid nexthop [ OK ]
# TEST: Single path - replace of non-existent route [ OK ]
# TEST: Multipath with multipath [ OK ]
# TEST: Multipath with single path [ OK ]
# TEST: Multipath with single path via multipath attribute [ OK ]
# TEST: Multipath - invalid first nexthop [ OK ]
# TEST: Multipath - invalid second nexthop [ OK ]
# TEST: Multipath - replace of non-existent route [ OK ]
#
# IPv4 route add / append tests
# TEST: Attempt to add duplicate route - gw [ OK ]
# TEST: Attempt to add duplicate route - dev only [ OK ]
# TEST: Attempt to add duplicate route - reject route [ OK ]
# TEST: Add new nexthop for existing prefix [ OK ]
# TEST: Append nexthop to existing route - gw [ OK ]
# TEST: Append nexthop to existing route - dev only [ OK ]
# TEST: Append nexthop to existing route - reject route [ OK ]
# TEST: Append nexthop to existing reject route - gw [ OK ]
# TEST: Append nexthop to existing reject route - dev only [ OK ]
# TEST: add multipath route [ OK ]
# TEST: Attempt to add duplicate multipath route [ OK ]
# TEST: Route add with different metrics [ OK ]
# TEST: Route delete with metric [ OK ]
#
# IPv4 route replace tests
# TEST: Single path with single path [ OK ]
# TEST: Single path with multipath [ OK ]
# TEST: Single path with reject route [ OK ]
# TEST: Single path with single path via multipath attribute [ OK ]
# TEST: Invalid nexthop [ OK ]
# TEST: Single path - replace of non-existent route [ OK ]
# TEST: Multipath with multipath [ OK ]
# TEST: Multipath with single path [ OK ]
# TEST: Multipath with single path via multipath attribute [ OK ]
# TEST: Multipath with reject route [ OK ]
# TEST: Multipath - invalid first nexthop [ OK ]
# TEST: Multipath - invalid second nexthop [ OK ]
# TEST: Multipath - replace of non-existent route [ OK ]
#
# IPv6 prefix route tests
# TEST: Default metric [ OK ]
# TEST: User specified metric on first device [ OK ]
# TEST: User specified metric on second device [ OK ]
# TEST: Delete of address on first device [ OK ]
# TEST: Modify metric of address [ OK ]
# TEST: Prefix route removed on link down [ OK ]
# TEST: Prefix route with metric on link up [ OK ]
#
# IPv4 prefix route tests
# TEST: Default metric [ OK ]
# TEST: User specified metric on first device [ OK ]
# TEST: User specified metric on second device [ OK ]
# TEST: Delete of address on first device [ OK ]
# TEST: Modify metric of address [ OK ]
# TEST: Prefix route removed on link down [ OK ]
# TEST: Prefix route with metric on link up [ OK ]
#
# IPv6 routes with metrics
# TEST: Single path route with mtu metric [FAIL]
# TEST: Multipath route via 2 single routes with mtu metric on first [FAIL]
# TEST: Multipath route via 2 single routes with mtu metric on 2nd [FAIL]
# TEST: Multipath route with mtu metric [FAIL]
# RTNETLINK answers: No route to host
# TEST: Using route with mtu metric [FAIL]
# TEST: Invalid metric (fails metric_convert) [ OK ]
#
# IPv4 route add / append tests
# TEST: Single path route with mtu metric [ OK ]
# TEST: Multipath route with mtu metric [ OK ]
# TEST: Using route with mtu metric [ OK ]
# TEST: Invalid metric (fails metric_convert) [ OK ]
#
# IPv4 route with IPv6 gateway tests
# TEST: Single path route with IPv6 gateway [FAIL]
# TEST: Single path route with IPv6 gateway - ping [FAIL]
# TEST: Single path route delete [FAIL]
# TEST: Multipath route add - v6 nexthop then v4 [FAIL]
# TEST: Multipath route delete - nexthops in wrong order [ OK ]
# TEST: Multipath route delete exact match [FAIL]
# TEST: Multipath route add - v4 nexthop then v6 [FAIL]
# TEST: Multipath route delete - nexthops in wrong order [ OK ]
# TEST: Multipath route delete exact match [FAIL]
#
# Tests passed: 137
# Tests failed: 12
not ok 13 selftests: net: fib_tests.sh
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
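Outside the lkp harness, fib_tests.sh can also be run directly via the standard kselftest make target from a checked-out kernel source tree (a sketch; KSRC is an assumed path, not something the report specifies):

```shell
# Hypothetical: run the net selftests (including fib_tests.sh) from a kernel tree.
# KSRC is an assumed location of the kernel sources.
KSRC=${KSRC:-$HOME/linux}
if [ -d "$KSRC/tools/testing/selftests/net" ]; then
    make -C "$KSRC/tools/testing/selftests" TARGETS=net run_tests
else
    echo "kernel tree not found at $KSRC"
fi
```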
Thanks,
Rong Chen
[vfs] 8bb3c61baf: vm-scalability.median -23.7% regression
by kernel test robot
Greeting,
FYI, we noticed a -23.7% regression of vm-scalability.median due to commit:
commit: 8bb3c61bafa8c1cd222ada602bb94ff23119e738 ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/viro/vfs.git work.mount
in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:
runtime: 300s
size: 16G
test: shm-pread-rand
cpufreq_governor: performance
ucode: 0xb000036
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -21.8% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=16G |
| | test=shm-xread-rand |
| | ucode=0xb000036 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/16G/lkp-bdw-ep4/shm-pread-rand/vm-scalability/0xb000036
commit:
63228b974a ("make shmem_fill_super() static")
8bb3c61baf ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
63228b974a6e1e39 8bb3c61bafa8c1cd222ada602bb
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
0:4 -1% 0:4 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
45.57 -59.3% 18.56 vm-scalability.free_time
54243 -23.7% 41391 ± 3% vm-scalability.median
4760988 -23.3% 3650267 ± 2% vm-scalability.throughput
356.61 -9.0% 324.46 vm-scalability.time.elapsed_time
356.61 -9.0% 324.46 vm-scalability.time.elapsed_time.max
114308 ± 3% -6.6% 106792 vm-scalability.time.involuntary_context_switches
65954824 -50.0% 32978012 vm-scalability.time.maximum_resident_set_size
1.362e+08 -49.9% 68168167 vm-scalability.time.minor_page_faults
8533 +1.3% 8642 vm-scalability.time.percent_of_cpu_this_job_got
6021 -56.2% 2635 vm-scalability.time.system_time
24410 +4.1% 25408 vm-scalability.time.user_time
1.431e+09 -23.3% 1.097e+09 ± 2% vm-scalability.workload
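The headline figures are internally consistent: the -23.7% on vm-scalability.median and the -23.3% on vm-scalability.workload both follow directly from the base and head values quoted above. A quick arithmetic check:

```python
# Recompute the quoted regressions from the base/head values in the table.
def pct(base, head):
    return (head - base) / base * 100

print(round(pct(54243, 41391), 1))      # vm-scalability.median: -23.7
print(round(pct(1.431e9, 1.097e9), 1))  # vm-scalability.workload: -23.3
```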
37.62 +6.5% 40.07 ± 7% boot-time.boot
33180 ±146% -89.5% 3493 ± 14% cpuidle.C1.usage
8.428e+08 ± 40% -65.6% 2.899e+08 ± 50% cpuidle.C6.time
996104 ± 31% -57.2% 425958 ± 45% cpuidle.C6.usage
4.29 ± 27% -2.4 1.92 ± 4% mpstat.cpu.all.idle%
18.96 -9.7 9.24 mpstat.cpu.all.sys%
76.74 +12.1 88.83 mpstat.cpu.all.usr%
10748 ± 8% +9.9% 11815 ± 14% sched_debug.cfs_rq:/.runnable_weight.avg
208.36 ± 4% +14.5% 238.52 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.stddev
61.83 ± 6% +19.2% 73.68 ± 12% sched_debug.cpu.sched_goidle.avg
10203622 -46.5% 5456851 ± 5% numa-numastat.node0.local_node
10207916 -46.5% 5463997 ± 5% numa-numastat.node0.numa_hit
9917963 -50.2% 4941906 ± 5% numa-numastat.node1.local_node
9930875 -50.1% 4951982 ± 5% numa-numastat.node1.numa_hit
267261 -46.4% 143303 slabinfo.radix_tree_node.active_objs
4837 -46.7% 2577 slabinfo.radix_tree_node.active_slabs
270907 -46.7% 144336 slabinfo.radix_tree_node.num_objs
4837 -46.7% 2577 slabinfo.radix_tree_node.num_slabs
76.00 +15.8% 88.00 vmstat.cpu.us
63526778 -47.7% 33255415 vmstat.memory.cache
56890579 +62.4% 92373408 vmstat.memory.free
1070 -3.0% 1038 vmstat.system.cs
2674 +2.4% 2738 turbostat.Avg_MHz
32037 ±150% -93.3% 2139 ± 25% turbostat.C1
991833 ± 31% -57.4% 422249 ± 45% turbostat.C6
2.62 ± 41% -1.6 0.99 ± 50% turbostat.C6%
1.62 ± 62% -66.7% 0.54 ± 65% turbostat.CPU%c6
66.25 ± 3% +5.3% 69.75 turbostat.PkgTmp
29.80 +5.1% 31.30 turbostat.RAMWatt
5404973 -72.3% 1495267 meminfo.Active
5404747 -72.3% 1495039 meminfo.Active(anon)
63415329 -47.6% 33229890 meminfo.Cached
63554859 -48.6% 32651783 meminfo.Committed_AS
57234837 -45.9% 30959238 meminfo.Inactive
57233523 -45.9% 30957913 meminfo.Inactive(anon)
216029 -33.4% 143846 meminfo.KReclaimable
57179054 -46.0% 30902610 meminfo.Mapped
56257735 +63.0% 91685305 meminfo.MemAvailable
56767938 +62.5% 92231592 meminfo.MemFree
75139272 -47.2% 39675618 meminfo.Memused
1039 ± 64% -58.7% 429.50 ±165% meminfo.Mlocked
10730605 -48.5% 5531358 meminfo.PageTables
216029 -33.4% 143846 meminfo.SReclaimable
62385638 -48.4% 32200708 meminfo.Shmem
350797 -20.7% 278299 meminfo.Slab
226631 -42.8% 129724 meminfo.max_used_kB
2747395 ± 2% -71.7% 777377 ± 6% numa-meminfo.node0.Active
2747257 ± 2% -71.7% 777239 ± 6% numa-meminfo.node0.Active(anon)
31748347 -47.3% 16719086 ± 2% numa-meminfo.node0.FilePages
28618699 -45.6% 15557625 ± 2% numa-meminfo.node0.Inactive
28618165 -45.6% 15557164 ± 2% numa-meminfo.node0.Inactive(anon)
28570648 -45.7% 15509571 ± 2% numa-meminfo.node0.Mapped
28083214 ± 3% +62.5% 45634775 ± 2% numa-meminfo.node0.MemFree
37802576 ± 2% -46.4% 20251015 ± 4% numa-meminfo.node0.MemUsed
5439763 ± 6% -45.6% 2957207 ± 16% numa-meminfo.node0.PageTables
31232535 -48.2% 16193393 ± 2% numa-meminfo.node0.Shmem
2718587 -73.2% 729480 ± 6% numa-meminfo.node1.Active
2718497 -73.2% 729390 ± 6% numa-meminfo.node1.Active(anon)
31668773 -48.0% 16468291 ± 2% numa-meminfo.node1.FilePages
28555887 -46.3% 15346454 ± 2% numa-meminfo.node1.Inactive
28555107 -46.3% 15345591 ± 2% numa-meminfo.node1.Inactive(anon)
28548207 -46.3% 15337939 ± 2% numa-meminfo.node1.Mapped
28682299 ± 2% +62.6% 46649099 ± 2% numa-meminfo.node1.MemFree
37339119 ± 2% -48.1% 19372319 ± 5% numa-meminfo.node1.MemUsed
5292403 ± 5% -51.5% 2564683 ± 18% numa-meminfo.node1.PageTables
31153849 -48.8% 15963746 ± 2% numa-meminfo.node1.Shmem
1350072 -72.4% 373214 proc-vmstat.nr_active_anon
1401204 +63.3% 2287872 proc-vmstat.nr_dirty_background_threshold
2805837 +63.3% 4581345 proc-vmstat.nr_dirty_threshold
15857347 -47.7% 8300115 proc-vmstat.nr_file_pages
14186977 +62.6% 23066682 proc-vmstat.nr_free_pages
14312758 -46.0% 7732409 proc-vmstat.nr_inactive_anon
14299279 -46.0% 7718742 proc-vmstat.nr_mapped
259.50 ± 65% -58.5% 107.75 ±165% proc-vmstat.nr_mlock
2684388 -48.5% 1381692 proc-vmstat.nr_page_table_pages
15599663 -48.4% 8042556 proc-vmstat.nr_shmem
54043 -33.5% 35937 proc-vmstat.nr_slab_reclaimable
1350072 -72.4% 373214 proc-vmstat.nr_zone_active_anon
14312758 -46.0% 7732409 proc-vmstat.nr_zone_inactive_anon
193.50 ± 62% +1207.8% 2530 ±113% proc-vmstat.numa_hint_faults_local
20165318 -48.2% 10440741 proc-vmstat.numa_hit
20148104 -48.3% 10423512 proc-vmstat.numa_local
16499590 -50.0% 8255098 proc-vmstat.pgactivate
20258153 -48.1% 10514291 proc-vmstat.pgalloc_normal
1.371e+08 -49.7% 69006001 proc-vmstat.pgfault
20112038 -48.8% 10289827 ± 2% proc-vmstat.pgfree
686373 ± 2% -71.7% 194202 ± 6% numa-vmstat.node0.nr_active_anon
7938445 -47.3% 4181535 ± 2% numa-vmstat.node0.nr_file_pages
7018951 ± 3% +62.5% 11407016 ± 2% numa-vmstat.node0.nr_free_pages
7156363 -45.6% 3891170 ± 2% numa-vmstat.node0.nr_inactive_anon
7144564 -45.7% 3879384 ± 2% numa-vmstat.node0.nr_mapped
147.25 ± 68% -57.7% 62.25 ±166% numa-vmstat.node0.nr_mlock
1360414 ± 6% -45.7% 739239 ± 16% numa-vmstat.node0.nr_page_table_pages
7809492 -48.1% 4050112 ± 2% numa-vmstat.node0.nr_shmem
686373 ± 2% -71.7% 194202 ± 6% numa-vmstat.node0.nr_zone_active_anon
7156363 -45.6% 3891171 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
10281673 -44.9% 5666708 ± 5% numa-vmstat.node0.numa_hit
10277368 -44.9% 5659426 ± 5% numa-vmstat.node0.numa_local
679201 -73.2% 182247 ± 6% numa-vmstat.node1.nr_active_anon
7918638 -48.0% 4118797 ± 2% numa-vmstat.node1.nr_file_pages
7168402 ± 2% +62.7% 11660690 ± 2% numa-vmstat.node1.nr_free_pages
7140634 -46.2% 3838218 ± 2% numa-vmstat.node1.nr_inactive_anon
7138960 -46.3% 3836299 ± 2% numa-vmstat.node1.nr_mapped
112.00 ± 60% -59.8% 45.00 ±165% numa-vmstat.node1.nr_mlock
1323543 ± 6% -51.6% 640919 ± 18% numa-vmstat.node1.nr_page_table_pages
7789907 -48.7% 3992660 ± 2% numa-vmstat.node1.nr_shmem
679202 -73.2% 182246 ± 6% numa-vmstat.node1.nr_zone_active_anon
7140633 -46.2% 3838218 ± 2% numa-vmstat.node1.nr_zone_inactive_anon
9901667 -47.9% 5155800 ± 5% numa-vmstat.node1.numa_hit
9739325 -48.7% 4996127 ± 5% numa-vmstat.node1.numa_local
164592 ± 3% -9.9% 148339 ± 6% softirqs.CPU0.TIMER
164109 ± 2% -15.4% 138916 ± 7% softirqs.CPU10.TIMER
161316 ± 2% -14.7% 137665 ± 10% softirqs.CPU11.TIMER
161790 ± 3% -13.7% 139674 ± 9% softirqs.CPU13.TIMER
167454 ± 2% -10.0% 150695 ± 6% softirqs.CPU14.TIMER
36730 ± 10% -11.1% 32650 ± 3% softirqs.CPU17.RCU
161800 ± 5% -20.2% 129065 ± 9% softirqs.CPU17.TIMER
159221 ± 5% -13.8% 137234 ± 7% softirqs.CPU18.TIMER
37187 ± 4% -6.5% 34765 ± 2% softirqs.CPU19.RCU
160907 ± 4% -15.7% 135603 ± 8% softirqs.CPU19.TIMER
165935 ± 4% -11.2% 147397 ± 7% softirqs.CPU20.TIMER
155034 ± 3% -10.5% 138768 ± 6% softirqs.CPU22.TIMER
151672 ± 3% -12.5% 132674 ± 8% softirqs.CPU27.TIMER
154953 ± 6% -11.0% 137969 ± 8% softirqs.CPU29.TIMER
164268 ± 3% -14.3% 140762 ± 7% softirqs.CPU3.TIMER
160670 ± 5% -16.6% 134041 ± 9% softirqs.CPU31.TIMER
155203 ± 4% -12.4% 136016 ± 6% softirqs.CPU32.TIMER
155364 ± 5% -14.8% 132311 ± 5% softirqs.CPU33.TIMER
159227 ± 4% -12.7% 139061 ± 11% softirqs.CPU34.TIMER
150917 ± 4% -11.8% 133176 ± 7% softirqs.CPU35.TIMER
7508 ± 23% -23.6% 5734 ± 3% softirqs.CPU36.SCHED
156798 ± 8% -12.9% 136643 softirqs.CPU39.TIMER
155204 ± 7% -10.9% 138356 ± 2% softirqs.CPU40.TIMER
7919 ± 22% -29.9% 5554 ± 2% softirqs.CPU43.SCHED
161460 ± 5% -10.8% 143999 ± 6% softirqs.CPU44.TIMER
161143 ± 4% -13.6% 139241 ± 5% softirqs.CPU47.TIMER
162003 ± 12% -15.6% 136703 ± 4% softirqs.CPU48.TIMER
158763 ± 5% -12.8% 138486 ± 2% softirqs.CPU49.TIMER
158713 ± 5% -10.9% 141433 ± 6% softirqs.CPU5.TIMER
162862 ± 4% -11.9% 143450 ± 4% softirqs.CPU50.TIMER
8632 ± 23% -33.0% 5780 ± 5% softirqs.CPU52.SCHED
163977 ± 4% -11.2% 145624 ± 8% softirqs.CPU52.TIMER
158723 ± 3% -14.2% 136159 ± 7% softirqs.CPU54.TIMER
160583 ± 2% -16.4% 134203 ± 8% softirqs.CPU55.TIMER
160141 ± 3% -14.6% 136793 ± 8% softirqs.CPU57.TIMER
164000 ± 2% -9.5% 148475 ± 5% softirqs.CPU58.TIMER
158146 ± 5% -18.6% 128705 ± 9% softirqs.CPU61.TIMER
155401 ± 5% -11.0% 138276 ± 4% softirqs.CPU66.TIMER
155533 ± 3% -11.2% 138081 ± 6% softirqs.CPU67.TIMER
151725 ± 4% -13.4% 131410 ± 5% softirqs.CPU68.TIMER
149529 -11.3% 132690 ± 6% softirqs.CPU70.TIMER
34610 ± 4% -9.6% 31273 ± 3% softirqs.CPU75.RCU
158786 ± 4% -18.8% 128867 ± 6% softirqs.CPU75.TIMER
151385 ± 4% -15.1% 128505 ± 4% softirqs.CPU77.TIMER
149154 ± 3% -13.0% 129780 ± 4% softirqs.CPU79.TIMER
153403 ± 5% -12.9% 133602 ± 5% softirqs.CPU83.TIMER
152849 ± 6% -11.2% 135697 ± 4% softirqs.CPU84.TIMER
612122 ± 8% -17.9% 502274 ± 2% softirqs.SCHED
13993264 ± 3% -11.5% 12384879 ± 5% softirqs.TIMER
27.18 +3.9% 28.24 perf-stat.i.MPKI
7.958e+09 -6.3% 7.459e+09 ± 3% perf-stat.i.branch-instructions
0.24 ± 26% -0.1 0.12 ± 5% perf-stat.i.branch-miss-rate%
12550714 ± 2% -39.9% 7546804 perf-stat.i.branch-misses
61.09 +5.7 66.82 perf-stat.i.cache-miss-rate%
5.434e+08 +9.3% 5.937e+08 ± 3% perf-stat.i.cache-misses
8.066e+08 +5.6% 8.52e+08 ± 3% perf-stat.i.cache-references
1046 ± 2% -3.7% 1007 perf-stat.i.context-switches
7.30 +6.2% 7.75 ± 3% perf-stat.i.cpi
2.348e+11 +2.3% 2.404e+11 perf-stat.i.cpu-cycles
49.30 ± 5% +15.9% 57.14 ± 4% perf-stat.i.cpu-migrations
1645 ± 64% -64.1% 590.65 perf-stat.i.cycles-between-cache-misses
5.16 +0.8 5.94 perf-stat.i.dTLB-load-miss-rate%
4.801e+08 +14.5% 5.495e+08 ± 3% perf-stat.i.dTLB-load-misses
9.535e+09 -4.4% 9.111e+09 ± 3% perf-stat.i.dTLB-loads
2.812e+09 +5.4% 2.964e+09 ± 3% perf-stat.i.dTLB-stores
89.44 +2.9 92.36 perf-stat.i.iTLB-load-miss-rate%
1043185 ± 3% -30.3% 726967 perf-stat.i.iTLB-load-misses
3.402e+10 -5.9% 3.201e+10 ± 3% perf-stat.i.instructions
242803 ± 27% -39.5% 146788 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.15 -9.1% 0.14 ± 2% perf-stat.i.ipc
378192 -44.1% 211320 perf-stat.i.minor-faults
1.515e+08 ± 7% +54.7% 2.344e+08 ± 22% perf-stat.i.node-load-misses
2306246 -45.6% 1255650 ± 2% perf-stat.i.node-store-misses
1509272 ± 2% -42.0% 874825 ± 3% perf-stat.i.node-stores
378199 -44.1% 211328 perf-stat.i.page-faults
23.60 +12.5% 26.55 perf-stat.overall.MPKI
0.16 ± 2% -0.1 0.10 ± 3% perf-stat.overall.branch-miss-rate%
67.33 +2.3 69.66 perf-stat.overall.cache-miss-rate%
6.89 +9.0% 7.51 ± 3% perf-stat.overall.cpi
433.61 -6.4% 405.91 ± 3% perf-stat.overall.cycles-between-cache-misses
4.77 +0.9 5.67 perf-stat.overall.dTLB-load-miss-rate%
32563 ± 4% +34.8% 43901 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.15 -8.1% 0.13 ± 3% perf-stat.overall.ipc
8588 +10.3% 9470 perf-stat.overall.path-length
7.954e+09 -6.4% 7.444e+09 ± 3% perf-stat.ps.branch-instructions
12645894 ± 3% -39.9% 7603958 perf-stat.ps.branch-misses
5.401e+08 +9.4% 5.909e+08 ± 3% perf-stat.ps.cache-misses
8.022e+08 +5.7% 8.482e+08 ± 3% perf-stat.ps.cache-references
1044 ± 2% -3.7% 1006 perf-stat.ps.context-switches
2.342e+11 +2.3% 2.396e+11 perf-stat.ps.cpu-cycles
49.11 ± 5% +16.0% 56.98 ± 4% perf-stat.ps.cpu-migrations
4.769e+08 +14.7% 5.468e+08 ± 3% perf-stat.ps.dTLB-load-misses
9.524e+09 -4.6% 9.09e+09 ± 3% perf-stat.ps.dTLB-loads
2.799e+09 +5.5% 2.953e+09 ± 3% perf-stat.ps.dTLB-stores
1045456 ± 3% -30.4% 727863 perf-stat.ps.iTLB-load-misses
3.4e+10 -6.0% 3.195e+10 ± 2% perf-stat.ps.instructions
379246 -44.1% 212102 perf-stat.ps.minor-faults
1.506e+08 ± 7% +54.9% 2.333e+08 ± 22% perf-stat.ps.node-load-misses
2327702 -45.6% 1266384 ± 2% perf-stat.ps.node-store-misses
1523195 ± 2% -42.1% 882438 ± 3% perf-stat.ps.node-stores
379246 -44.1% 212103 perf-stat.ps.page-faults
1.229e+13 -15.5% 1.039e+13 ± 3% perf-stat.total.instructions
24.07 ± 6% -3.6 20.48 ± 3% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
19.40 ± 6% -2.7 16.74 ± 5% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
2.27 ± 21% -1.4 0.84 ± 62% perf-profile.calltrace.cycles-pp.ret_from_fork
2.27 ± 21% -1.4 0.84 ± 62% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.92 ± 30% -1.3 0.65 ± 62% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.93 ± 29% -1.3 0.67 ± 64% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.08 ± 14% -0.5 0.56 ± 62% perf-profile.calltrace.cycles-pp.mga_dirty_update.soft_cursor.bit_cursor.fb_flashcursor.process_one_work
1.08 ± 14% -0.5 0.57 ± 62% perf-profile.calltrace.cycles-pp.fb_flashcursor.process_one_work.worker_thread.kthread.ret_from_fork
1.08 ± 14% -0.5 0.57 ± 62% perf-profile.calltrace.cycles-pp.bit_cursor.fb_flashcursor.process_one_work.worker_thread.kthread
1.08 ± 14% -0.5 0.57 ± 62% perf-profile.calltrace.cycles-pp.soft_cursor.bit_cursor.fb_flashcursor.process_one_work.worker_thread
1.00 ± 11% -0.5 0.54 ± 64% perf-profile.calltrace.cycles-pp.memcpy_toio.mga_dirty_update.soft_cursor.bit_cursor.fb_flashcursor
0.71 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.__GI___libc_write.__libc_start_main
0.69 ± 14% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.__libc_start_main
0.69 ± 14% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write.__libc_start_main
0.68 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.__libc_start_main
0.68 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
0.68 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.68 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.generic_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.67 ± 16% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.ksys_write
0.67 ± 15% -0.4 0.26 ±100% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
2.78 ± 7% -0.3 2.49 ± 5% perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.95 ± 10% +0.3 1.26 ± 11% perf-profile.calltrace.cycles-pp.note_gp_changes.rcu_core.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
0.15 ±173% +0.5 0.68 ± 13% perf-profile.calltrace.cycles-pp.rb_next.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt
0.94 ± 33% +0.7 1.64 ± 7% perf-profile.calltrace.cycles-pp.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.88 ± 9% +0.8 2.66 ± 13% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.34 ± 26% +0.8 2.12 ± 9% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.81 ± 38% +1.3 2.12 ± 25% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.60 ±104% +1.8 2.44 ± 25% perf-profile.calltrace.cycles-pp.irq_work_interrupt
0.60 ±104% +1.8 2.44 ± 25% perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt
0.58 ±105% +1.9 2.44 ± 25% perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.58 ±105% +1.9 2.44 ± 25% perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.78 ±100% +2.0 2.76 ± 24% perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.78 ±100% +2.1 2.89 ± 22% perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
4.43 ± 27% +2.2 6.61 ± 13% perf-profile.calltrace.cycles-pp.interrupt_entry
24.11 ± 6% -3.6 20.52 ± 3% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
19.46 ± 6% -2.6 16.82 ± 5% perf-profile.children.cycles-pp.prepare_exit_to_usermode
2.32 ± 21% -1.3 0.98 ± 36% perf-profile.children.cycles-pp.ret_from_fork
2.27 ± 21% -1.3 0.96 ± 37% perf-profile.children.cycles-pp.kthread
1.92 ± 30% -1.2 0.71 ± 45% perf-profile.children.cycles-pp.process_one_work
1.93 ± 29% -1.2 0.73 ± 47% perf-profile.children.cycles-pp.worker_thread
5.72 ± 11% -0.9 4.81 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
5.71 ± 12% -0.9 4.80 ± 4% perf-profile.children.cycles-pp.do_syscall_64
1.43 ± 21% -0.6 0.86 ± 13% perf-profile.children.cycles-pp.page_fault
1.32 ± 22% -0.5 0.79 ± 14% perf-profile.children.cycles-pp.do_page_fault
1.30 ± 22% -0.5 0.78 ± 15% perf-profile.children.cycles-pp.__do_page_fault
1.08 ± 14% -0.5 0.61 ± 46% perf-profile.children.cycles-pp.memcpy_toio
1.08 ± 14% -0.5 0.61 ± 47% perf-profile.children.cycles-pp.mga_dirty_update
1.08 ± 14% -0.5 0.61 ± 46% perf-profile.children.cycles-pp.fb_flashcursor
1.08 ± 14% -0.5 0.61 ± 46% perf-profile.children.cycles-pp.bit_cursor
1.08 ± 14% -0.5 0.61 ± 46% perf-profile.children.cycles-pp.soft_cursor
1.19 ± 26% -0.4 0.76 ± 12% perf-profile.children.cycles-pp.handle_mm_fault
3.50 ± 8% -0.4 3.11 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.89 ± 10% -0.3 0.60 ± 11% perf-profile.children.cycles-pp.ksys_write
0.88 ± 10% -0.3 0.60 ± 11% perf-profile.children.cycles-pp.vfs_write
0.87 ± 9% -0.3 0.59 ± 12% perf-profile.children.cycles-pp.new_sync_write
0.71 ± 14% -0.2 0.47 ± 14% perf-profile.children.cycles-pp.__GI___libc_write
0.68 ± 15% -0.2 0.46 ± 21% perf-profile.children.cycles-pp.generic_file_write_iter
0.62 ± 22% -0.2 0.42 ± 18% perf-profile.children.cycles-pp.__libc_fork
0.41 ± 25% -0.2 0.23 ± 25% perf-profile.children.cycles-pp.filemap_map_pages
0.36 ± 33% -0.2 0.20 ± 39% perf-profile.children.cycles-pp.select_task_rq_fair
0.28 ± 61% -0.1 0.14 ± 28% perf-profile.children.cycles-pp.alloc_pages_vma
0.24 ± 42% -0.1 0.10 ± 27% perf-profile.children.cycles-pp.clear_page_erms
0.24 ± 42% -0.1 0.11 ± 24% perf-profile.children.cycles-pp.prep_new_page
0.26 ± 31% -0.1 0.15 ± 22% perf-profile.children.cycles-pp.wp_page_copy
0.27 ± 27% -0.1 0.17 ± 24% perf-profile.children.cycles-pp.do_wp_page
0.20 ± 19% -0.1 0.11 ± 37% perf-profile.children.cycles-pp.select_idle_sibling
0.14 ± 16% -0.1 0.05 ±103% perf-profile.children.cycles-pp.__do_sys_wait4
0.15 ± 32% -0.1 0.07 ± 22% perf-profile.children.cycles-pp.available_idle_cpu
0.20 ± 25% -0.1 0.12 ± 39% perf-profile.children.cycles-pp.iov_iter_fault_in_readable
0.17 ± 14% -0.1 0.10 ± 31% perf-profile.children.cycles-pp.wait4
0.23 ± 8% -0.1 0.17 ± 15% perf-profile.children.cycles-pp.dup_mm
0.18 ± 34% -0.1 0.11 ± 22% perf-profile.children.cycles-pp.walk_component
0.11 ± 25% -0.1 0.04 ±106% perf-profile.children.cycles-pp.alloc_set_pte
0.18 ± 18% -0.1 0.11 ± 36% perf-profile.children.cycles-pp.__wake_up_common_lock
0.18 ± 18% -0.1 0.11 ± 36% perf-profile.children.cycles-pp.__wake_up_common
0.10 ± 30% -0.0 0.05 ± 62% perf-profile.children.cycles-pp.find_vma
0.13 ± 12% -0.0 0.10 ± 22% perf-profile.children.cycles-pp.free_pgtables
0.08 ± 29% +0.0 0.12 ± 16% perf-profile.children.cycles-pp.tlb_finish_mmu
0.08 ± 29% +0.0 0.12 ± 16% perf-profile.children.cycles-pp.tlb_flush_mmu
0.07 ± 25% +0.1 0.12 ± 26% perf-profile.children.cycles-pp.__note_gp_changes
0.07 ± 77% +0.1 0.18 ± 32% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.05 ± 70% +0.1 0.18 ± 58% perf-profile.children.cycles-pp.change_prot_numa
0.39 ± 25% +0.1 0.52 ± 14% perf-profile.children.cycles-pp.rb_erase
0.06 ± 77% +0.1 0.19 ± 48% perf-profile.children.cycles-pp.change_p4d_range
0.09 ± 20% +0.1 0.23 ± 47% perf-profile.children.cycles-pp.change_protection
0.57 ± 10% +0.2 0.72 ± 12% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.38 ± 10% +0.2 0.61 ± 15% perf-profile.children.cycles-pp.rcu_irq_enter
0.95 ± 10% +0.3 1.27 ± 12% perf-profile.children.cycles-pp.note_gp_changes
0.42 ± 29% +0.4 0.79 ± 14% perf-profile.children.cycles-pp.rb_next
0.97 ± 32% +0.7 1.65 ± 7% perf-profile.children.cycles-pp.timerqueue_del
1.34 ± 26% +0.8 2.12 ± 9% perf-profile.children.cycles-pp.__remove_hrtimer
1.89 ± 9% +0.8 2.67 ± 13% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
1.19 ± 50% +1.3 2.53 ± 6% perf-profile.children.cycles-pp.io_serial_in
0.82 ± 94% +2.0 2.81 ± 24% perf-profile.children.cycles-pp.irq_work_interrupt
0.82 ± 94% +2.0 2.81 ± 24% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.79 ±100% +2.0 2.81 ± 24% perf-profile.children.cycles-pp.irq_work_run
0.78 ±100% +2.0 2.81 ± 24% perf-profile.children.cycles-pp.printk
0.78 ±100% +2.0 2.81 ± 24% perf-profile.children.cycles-pp.vprintk_emit
1.43 ± 92% +2.5 3.95 ± 19% perf-profile.children.cycles-pp.irq_work_run_list
6.58 ± 21% +2.7 9.33 ± 10% perf-profile.children.cycles-pp.interrupt_entry
17.55 ± 8% -2.3 15.23 ± 5% perf-profile.self.cycles-pp.prepare_exit_to_usermode
4.65 ± 9% -1.0 3.70 ± 6% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
1.08 ± 14% -0.5 0.61 ± 46% perf-profile.self.cycles-pp.memcpy_toio
3.48 ± 8% -0.4 3.09 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.84 ± 17% -0.4 0.46 ± 24% perf-profile.self.cycles-pp.tick_sched_timer
0.24 ± 45% -0.1 0.10 ± 27% perf-profile.self.cycles-pp.clear_page_erms
0.15 ± 32% -0.1 0.07 ± 22% perf-profile.self.cycles-pp.available_idle_cpu
0.10 ± 15% -0.0 0.05 ± 60% perf-profile.self.cycles-pp.iov_iter_fault_in_readable
0.03 ±102% +0.1 0.11 ± 31% perf-profile.self.cycles-pp.intel_pmu_disable_all
0.04 ±115% +0.1 0.15 ± 52% perf-profile.self.cycles-pp.change_p4d_range
0.38 ± 22% +0.1 0.51 ± 15% perf-profile.self.cycles-pp.rb_erase
0.55 ± 10% +0.2 0.71 ± 13% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.19 ± 53% +0.2 0.36 ± 8% perf-profile.self.cycles-pp.timerqueue_del
0.36 ± 14% +0.2 0.61 ± 15% perf-profile.self.cycles-pp.rcu_irq_enter
0.41 ± 33% +0.4 0.79 ± 14% perf-profile.self.cycles-pp.rb_next
0.98 ± 33% +0.5 1.45 ± 10% perf-profile.self.cycles-pp.smp_apic_timer_interrupt
1.19 ± 26% +0.7 1.90 ± 18% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.19 ± 50% +1.1 2.29 ± 16% perf-profile.self.cycles-pp.io_serial_in
6.58 ± 21% +2.7 9.30 ± 10% perf-profile.self.cycles-pp.interrupt_entry
298.00 ± 32% -39.8% 179.50 ± 9% interrupts.37:PCI-MSI.1572868-edge.eth0-TxRx-4
290.75 ± 19% -31.2% 200.00 ± 22% interrupts.42:PCI-MSI.1572873-edge.eth0-TxRx-9
283.75 ± 41% -31.9% 193.25 ± 14% interrupts.46:PCI-MSI.1572877-edge.eth0-TxRx-13
177.50 -10.4% 159.00 interrupts.49:PCI-MSI.1572880-edge.eth0-TxRx-16
180.00 ± 2% -11.7% 159.00 interrupts.50:PCI-MSI.1572881-edge.eth0-TxRx-17
177.00 -9.7% 159.75 interrupts.51:PCI-MSI.1572882-edge.eth0-TxRx-18
177.00 -10.2% 159.00 interrupts.52:PCI-MSI.1572883-edge.eth0-TxRx-19
178.00 ± 2% -10.7% 159.00 interrupts.53:PCI-MSI.1572884-edge.eth0-TxRx-20
179.50 ± 2% -8.9% 163.50 ± 4% interrupts.54:PCI-MSI.1572885-edge.eth0-TxRx-21
177.00 -9.9% 159.50 interrupts.55:PCI-MSI.1572886-edge.eth0-TxRx-22
177.00 -10.2% 159.00 interrupts.56:PCI-MSI.1572887-edge.eth0-TxRx-23
177.00 -8.6% 161.75 ± 2% interrupts.57:PCI-MSI.1572888-edge.eth0-TxRx-24
177.00 -8.8% 161.50 ± 2% interrupts.58:PCI-MSI.1572889-edge.eth0-TxRx-25
177.00 -8.1% 162.75 ± 3% interrupts.59:PCI-MSI.1572890-edge.eth0-TxRx-26
177.00 -10.2% 159.00 interrupts.60:PCI-MSI.1572891-edge.eth0-TxRx-27
177.00 -10.2% 159.00 interrupts.61:PCI-MSI.1572892-edge.eth0-TxRx-28
177.00 -10.0% 159.25 interrupts.62:PCI-MSI.1572893-edge.eth0-TxRx-29
177.00 -10.2% 159.00 interrupts.63:PCI-MSI.1572894-edge.eth0-TxRx-30
177.00 -10.0% 159.25 interrupts.64:PCI-MSI.1572895-edge.eth0-TxRx-31
177.00 -10.0% 159.25 interrupts.65:PCI-MSI.1572896-edge.eth0-TxRx-32
177.00 -10.2% 159.00 interrupts.66:PCI-MSI.1572897-edge.eth0-TxRx-33
177.00 -10.2% 159.00 interrupts.67:PCI-MSI.1572898-edge.eth0-TxRx-34
177.00 -10.2% 159.00 interrupts.68:PCI-MSI.1572899-edge.eth0-TxRx-35
177.00 -10.2% 159.00 interrupts.69:PCI-MSI.1572900-edge.eth0-TxRx-36
182.50 ± 4% -12.9% 159.00 interrupts.70:PCI-MSI.1572901-edge.eth0-TxRx-37
177.00 -9.6% 160.00 interrupts.71:PCI-MSI.1572902-edge.eth0-TxRx-38
177.00 -10.2% 159.00 interrupts.72:PCI-MSI.1572903-edge.eth0-TxRx-39
177.00 -7.6% 163.50 ± 4% interrupts.73:PCI-MSI.1572904-edge.eth0-TxRx-40
177.00 -10.2% 159.00 interrupts.74:PCI-MSI.1572905-edge.eth0-TxRx-41
179.75 ± 3% -11.5% 159.00 interrupts.75:PCI-MSI.1572906-edge.eth0-TxRx-42
177.00 -10.2% 159.00 interrupts.76:PCI-MSI.1572907-edge.eth0-TxRx-43
178.25 -7.9% 164.25 ± 4% interrupts.77:PCI-MSI.1572908-edge.eth0-TxRx-44
177.00 -9.3% 160.50 interrupts.78:PCI-MSI.1572909-edge.eth0-TxRx-45
177.00 -10.2% 159.00 interrupts.79:PCI-MSI.1572910-edge.eth0-TxRx-46
177.00 -9.5% 160.25 interrupts.80:PCI-MSI.1572911-edge.eth0-TxRx-47
177.00 -10.2% 159.00 interrupts.81:PCI-MSI.1572912-edge.eth0-TxRx-48
177.00 -10.2% 159.00 interrupts.83:PCI-MSI.1572914-edge.eth0-TxRx-50
177.00 -9.7% 159.75 interrupts.84:PCI-MSI.1572915-edge.eth0-TxRx-51
188.75 ± 5% -13.0% 164.25 ± 5% interrupts.85:PCI-MSI.1572916-edge.eth0-TxRx-52
225.75 ± 38% -29.6% 159.00 interrupts.86:PCI-MSI.1572917-edge.eth0-TxRx-53
181.75 ± 5% -12.4% 159.25 interrupts.87:PCI-MSI.1572918-edge.eth0-TxRx-54
179.75 ± 2% -9.7% 162.25 ± 3% interrupts.88:PCI-MSI.1572919-edge.eth0-TxRx-55
181.00 ± 4% -12.2% 159.00 interrupts.89:PCI-MSI.1572920-edge.eth0-TxRx-56
177.00 -10.2% 159.00 interrupts.90:PCI-MSI.1572921-edge.eth0-TxRx-57
177.00 -10.2% 159.00 interrupts.91:PCI-MSI.1572922-edge.eth0-TxRx-58
177.00 -7.8% 163.25 ± 4% interrupts.92:PCI-MSI.1572923-edge.eth0-TxRx-59
177.00 -10.2% 159.00 interrupts.93:PCI-MSI.1572924-edge.eth0-TxRx-60
177.25 -8.5% 162.25 ± 2% interrupts.94:PCI-MSI.1572925-edge.eth0-TxRx-61
177.00 -9.7% 159.75 interrupts.95:PCI-MSI.1572926-edge.eth0-TxRx-62
448.75 ± 2% -11.1% 398.75 interrupts.9:IO-APIC.9-fasteoi.acpi
358186 -9.5% 324307 interrupts.CAL:Function_call_interrupts
448.75 ± 2% -11.1% 398.75 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
4090 ± 4% -11.0% 3641 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
4094 ± 3% -10.4% 3670 ± 4% interrupts.CPU10.CAL:Function_call_interrupts
4086 ± 3% -20.1% 3267 ± 26% interrupts.CPU11.CAL:Function_call_interrupts
4118 ± 2% -9.4% 3729 interrupts.CPU12.CAL:Function_call_interrupts
283.75 ± 41% -31.9% 193.25 ± 14% interrupts.CPU13.46:PCI-MSI.1572877-edge.eth0-TxRx-13
4136 ± 3% -10.6% 3699 interrupts.CPU13.CAL:Function_call_interrupts
4080 ± 3% -8.5% 3732 interrupts.CPU14.CAL:Function_call_interrupts
4075 ± 3% -8.5% 3727 interrupts.CPU15.CAL:Function_call_interrupts
177.50 -10.4% 159.00 interrupts.CPU16.49:PCI-MSI.1572880-edge.eth0-TxRx-16
4096 ± 3% -9.1% 3722 interrupts.CPU16.CAL:Function_call_interrupts
659.50 ± 5% +26.9% 836.75 ± 12% interrupts.CPU16.RES:Rescheduling_interrupts
180.00 ± 2% -11.7% 159.00 interrupts.CPU17.50:PCI-MSI.1572881-edge.eth0-TxRx-17
4059 ± 3% -8.8% 3700 interrupts.CPU17.CAL:Function_call_interrupts
177.00 -9.7% 159.75 interrupts.CPU18.51:PCI-MSI.1572882-edge.eth0-TxRx-18
4057 ± 2% -8.8% 3698 interrupts.CPU18.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU19.52:PCI-MSI.1572883-edge.eth0-TxRx-19
4082 ± 2% -26.5% 3000 ± 41% interrupts.CPU19.CAL:Function_call_interrupts
4101 ± 3% -9.8% 3701 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
178.00 ± 2% -10.7% 159.00 interrupts.CPU20.53:PCI-MSI.1572884-edge.eth0-TxRx-20
4060 ± 3% -9.0% 3693 interrupts.CPU20.CAL:Function_call_interrupts
555.75 ± 41% +111.9% 1177 ± 58% interrupts.CPU20.RES:Rescheduling_interrupts
179.50 ± 2% -8.9% 163.50 ± 4% interrupts.CPU21.54:PCI-MSI.1572885-edge.eth0-TxRx-21
4053 ± 3% -9.3% 3676 interrupts.CPU21.CAL:Function_call_interrupts
1004 ± 23% -50.7% 495.25 ± 21% interrupts.CPU21.RES:Rescheduling_interrupts
177.00 -9.9% 159.50 interrupts.CPU22.55:PCI-MSI.1572886-edge.eth0-TxRx-22
4033 ± 3% -9.0% 3671 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU23.56:PCI-MSI.1572887-edge.eth0-TxRx-23
4023 ± 3% -8.0% 3700 interrupts.CPU23.CAL:Function_call_interrupts
596.25 ± 76% -58.0% 250.25 ± 15% interrupts.CPU23.RES:Rescheduling_interrupts
177.00 -8.6% 161.75 ± 2% interrupts.CPU24.57:PCI-MSI.1572888-edge.eth0-TxRx-24
4004 ± 3% -7.6% 3698 interrupts.CPU24.CAL:Function_call_interrupts
177.00 -8.8% 161.50 ± 2% interrupts.CPU25.58:PCI-MSI.1572889-edge.eth0-TxRx-25
4034 ± 3% -9.4% 3656 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
177.00 -8.1% 162.75 ± 3% interrupts.CPU26.59:PCI-MSI.1572890-edge.eth0-TxRx-26
4027 ± 3% -8.1% 3700 interrupts.CPU26.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU27.60:PCI-MSI.1572891-edge.eth0-TxRx-27
4072 ± 2% -9.2% 3696 interrupts.CPU27.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU28.61:PCI-MSI.1572892-edge.eth0-TxRx-28
177.00 -10.0% 159.25 interrupts.CPU29.62:PCI-MSI.1572893-edge.eth0-TxRx-29
4068 ± 3% -10.8% 3627 ± 2% interrupts.CPU29.CAL:Function_call_interrupts
4077 ± 3% -19.2% 3293 ± 19% interrupts.CPU3.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU30.63:PCI-MSI.1572894-edge.eth0-TxRx-30
4085 ± 3% -10.9% 3641 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
177.00 -10.0% 159.25 interrupts.CPU31.64:PCI-MSI.1572895-edge.eth0-TxRx-31
4072 ± 3% -30.4% 2833 ± 50% interrupts.CPU31.CAL:Function_call_interrupts
416.25 ± 42% -67.5% 135.25 ± 50% interrupts.CPU31.RES:Rescheduling_interrupts
177.00 -10.0% 159.25 interrupts.CPU32.65:PCI-MSI.1572896-edge.eth0-TxRx-32
4099 ± 2% -11.2% 3639 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU33.66:PCI-MSI.1572897-edge.eth0-TxRx-33
4092 ± 2% -11.2% 3635 ± 2% interrupts.CPU33.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU34.67:PCI-MSI.1572898-edge.eth0-TxRx-34
320.75 ± 30% -34.7% 209.50 ± 30% interrupts.CPU34.RES:Rescheduling_interrupts
177.00 -10.2% 159.00 interrupts.CPU35.68:PCI-MSI.1572899-edge.eth0-TxRx-35
4081 ± 2% -10.6% 3648 interrupts.CPU35.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU36.69:PCI-MSI.1572900-edge.eth0-TxRx-36
182.50 ± 4% -12.9% 159.00 interrupts.CPU37.70:PCI-MSI.1572901-edge.eth0-TxRx-37
4061 ± 2% -10.5% 3633 ± 3% interrupts.CPU37.CAL:Function_call_interrupts
177.00 -9.6% 160.00 interrupts.CPU38.71:PCI-MSI.1572902-edge.eth0-TxRx-38
4053 ± 2% -10.4% 3631 ± 3% interrupts.CPU38.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU39.72:PCI-MSI.1572903-edge.eth0-TxRx-39
4037 ± 2% -10.1% 3628 ± 3% interrupts.CPU39.CAL:Function_call_interrupts
337.25 ± 45% -47.1% 178.50 ± 44% interrupts.CPU39.RES:Rescheduling_interrupts
298.00 ± 32% -39.8% 179.50 ± 9% interrupts.CPU4.37:PCI-MSI.1572868-edge.eth0-TxRx-4
4069 ± 3% -9.6% 3680 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
718679 -9.5% 650161 interrupts.CPU4.LOC:Local_timer_interrupts
177.00 -7.6% 163.50 ± 4% interrupts.CPU40.73:PCI-MSI.1572904-edge.eth0-TxRx-40
4039 ± 2% -9.7% 3646 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
274.00 ± 31% -43.0% 156.25 ± 42% interrupts.CPU40.RES:Rescheduling_interrupts
177.00 -10.2% 159.00 interrupts.CPU41.74:PCI-MSI.1572905-edge.eth0-TxRx-41
4058 ± 3% -10.3% 3640 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
179.75 ± 3% -11.5% 159.00 interrupts.CPU42.75:PCI-MSI.1572906-edge.eth0-TxRx-42
4073 ± 3% -10.8% 3634 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU43.76:PCI-MSI.1572907-edge.eth0-TxRx-43
178.25 -7.9% 164.25 ± 4% interrupts.CPU44.77:PCI-MSI.1572908-edge.eth0-TxRx-44
177.00 -9.3% 160.50 interrupts.CPU45.78:PCI-MSI.1572909-edge.eth0-TxRx-45
7928 -37.7% 4939 ± 34% interrupts.CPU45.NMI:Non-maskable_interrupts
7928 -37.7% 4939 ± 34% interrupts.CPU45.PMI:Performance_monitoring_interrupts
177.00 -10.2% 159.00 interrupts.CPU46.79:PCI-MSI.1572910-edge.eth0-TxRx-46
177.00 -9.5% 160.25 interrupts.CPU47.80:PCI-MSI.1572911-edge.eth0-TxRx-47
7916 -37.5% 4947 ± 34% interrupts.CPU47.NMI:Non-maskable_interrupts
7916 -37.5% 4947 ± 34% interrupts.CPU47.PMI:Performance_monitoring_interrupts
177.00 -10.2% 159.00 interrupts.CPU48.81:PCI-MSI.1572912-edge.eth0-TxRx-48
718707 -9.5% 650597 interrupts.CPU48.LOC:Local_timer_interrupts
7952 -37.8% 4948 ± 34% interrupts.CPU49.NMI:Non-maskable_interrupts
7952 -37.8% 4948 ± 34% interrupts.CPU49.PMI:Performance_monitoring_interrupts
4107 ± 4% -12.3% 3601 ± 3% interrupts.CPU5.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU50.83:PCI-MSI.1572914-edge.eth0-TxRx-50
177.00 -9.7% 159.75 interrupts.CPU51.84:PCI-MSI.1572915-edge.eth0-TxRx-51
718389 -9.4% 651067 interrupts.CPU51.LOC:Local_timer_interrupts
188.75 ± 5% -13.0% 164.25 ± 5% interrupts.CPU52.85:PCI-MSI.1572916-edge.eth0-TxRx-52
225.75 ± 38% -29.6% 159.00 interrupts.CPU53.86:PCI-MSI.1572917-edge.eth0-TxRx-53
181.75 ± 5% -12.4% 159.25 interrupts.CPU54.87:PCI-MSI.1572918-edge.eth0-TxRx-54
4099 ± 4% -8.9% 3734 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
179.75 ± 2% -9.7% 162.25 ± 3% interrupts.CPU55.88:PCI-MSI.1572919-edge.eth0-TxRx-55
4201 -9.7% 3793 ± 4% interrupts.CPU55.CAL:Function_call_interrupts
181.00 ± 4% -12.2% 159.00 interrupts.CPU56.89:PCI-MSI.1572920-edge.eth0-TxRx-56
4188 -11.6% 3703 ± 6% interrupts.CPU56.CAL:Function_call_interrupts
718672 -9.4% 650886 interrupts.CPU56.LOC:Local_timer_interrupts
177.00 -10.2% 159.00 interrupts.CPU57.90:PCI-MSI.1572921-edge.eth0-TxRx-57
4186 ± 2% -31.1% 2885 ± 50% interrupts.CPU57.CAL:Function_call_interrupts
71.75 ± 42% +120.2% 158.00 ± 32% interrupts.CPU57.RES:Rescheduling_interrupts
177.00 -10.2% 159.00 interrupts.CPU58.91:PCI-MSI.1572922-edge.eth0-TxRx-58
125.75 ± 62% +74.0% 218.75 ± 53% interrupts.CPU58.RES:Rescheduling_interrupts
177.00 -7.8% 163.25 ± 4% interrupts.CPU59.92:PCI-MSI.1572923-edge.eth0-TxRx-59
4169 ± 2% -9.1% 3788 ± 4% interrupts.CPU59.CAL:Function_call_interrupts
4072 ± 3% -11.4% 3610 ± 3% interrupts.CPU6.CAL:Function_call_interrupts
177.00 -10.2% 159.00 interrupts.CPU60.93:PCI-MSI.1572924-edge.eth0-TxRx-60
177.25 -8.5% 162.25 ± 2% interrupts.CPU61.94:PCI-MSI.1572925-edge.eth0-TxRx-61
188.50 ± 71% -65.1% 65.75 ± 42% interrupts.CPU61.RES:Rescheduling_interrupts
177.00 -9.7% 159.75 interrupts.CPU62.95:PCI-MSI.1572926-edge.eth0-TxRx-62
4067 ± 2% -6.0% 3823 ± 3% interrupts.CPU62.CAL:Function_call_interrupts
4183 ± 3% -8.7% 3819 ± 3% interrupts.CPU63.CAL:Function_call_interrupts
4265 -11.9% 3756 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
4275 -11.4% 3786 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
4216 ± 2% -9.7% 3809 ± 3% interrupts.CPU66.CAL:Function_call_interrupts
4259 -9.9% 3837 ± 3% interrupts.CPU67.CAL:Function_call_interrupts
4239 -10.2% 3807 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
4124 ± 3% -13.7% 3558 ± 7% interrupts.CPU7.CAL:Function_call_interrupts
4229 -8.8% 3857 ± 3% interrupts.CPU70.CAL:Function_call_interrupts
4237 -9.8% 3823 ± 3% interrupts.CPU73.CAL:Function_call_interrupts
4230 -9.8% 3816 ± 3% interrupts.CPU74.CAL:Function_call_interrupts
4218 -10.1% 3791 ± 3% interrupts.CPU75.CAL:Function_call_interrupts
4213 -10.5% 3770 ± 4% interrupts.CPU76.CAL:Function_call_interrupts
719037 -9.5% 651067 interrupts.CPU76.LOC:Local_timer_interrupts
4235 -10.4% 3795 ± 3% interrupts.CPU77.CAL:Function_call_interrupts
4103 ± 4% -6.7% 3829 ± 3% interrupts.CPU78.CAL:Function_call_interrupts
4139 ± 5% -8.1% 3803 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
4118 ± 3% -11.7% 3638 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
4247 ± 2% -11.1% 3774 ± 2% interrupts.CPU85.CAL:Function_call_interrupts
4243 ± 2% -11.7% 3746 ± 2% interrupts.CPU86.CAL:Function_call_interrupts
4152 ± 4% -11.1% 3691 ± 2% interrupts.CPU87.CAL:Function_call_interrupts
393.50 ± 86% -73.6% 103.75 ± 65% interrupts.CPU87.RES:Rescheduling_interrupts
290.75 ± 19% -31.2% 200.00 ± 22% interrupts.CPU9.42:PCI-MSI.1572873-edge.eth0-TxRx-9
4148 ± 3% -11.3% 3678 interrupts.CPU9.CAL:Function_call_interrupts
vm-scalability trend plots (ASCII gnuplot output condensed to the recoverable
ranges; [*] marks bisect-good samples, [O] marks bisect-bad samples):

  vm-scalability.time.user_time                    good ~24.4k      bad ~25.4k
  vm-scalability.time.system_time                  good ~5.0-6.0k   bad ~2.5-2.6k
  vm-scalability.time.percent_of_cpu_this_job_got  good ~8520       bad ~8650
  vm-scalability.time.elapsed_time                 good ~345-357    bad ~322-325
  vm-scalability.time.elapsed_time.max             good ~345-357    bad ~322-325
  vm-scalability.time.maximum_resident_set_size    good ~6.6e+07    bad ~3.3e+07
  vm-scalability.time.minor_page_faults            good ~1.36e+08   bad ~6.8e+07
  vm-scalability.throughput                        good ~4.8e+06    bad ~3.5-3.7e+06
  vm-scalability.free_time                         good ~37-46      bad ~16-20
  vm-scalability.median                            good ~54k        bad ~39-43k
  vm-scalability.workload                          good ~1.4e+09    bad ~1.05-1.12e+09
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/16G/lkp-bdw-ep4/shm-xread-rand/vm-scalability/0xb000036
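The slash-separated header row names the job parameters and the value row supplies them positionally; a minimal sketch of decoding one result-set key (names and values copied from the two lines above):

```python
# Decode an LKP result-set key: zip the parameter names from the header
# row with the slash-separated values from the config row.
keys = ("compiler/cpufreq_governor/kconfig/rootfs/runtime/size/"
        "tbox_group/test/testcase/ucode").split("/")
vals = ("gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/"
        "300s/16G/lkp-bdw-ep4/shm-xread-rand/vm-scalability/0xb000036").split("/")
params = dict(zip(keys, vals))
print(params["testcase"], params["size"], params["ucode"])
# -> vm-scalability 16G 0xb000036
```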
commit:
63228b974a ("make shmem_fill_super() static")
8bb3c61baf ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
63228b974a6e1e39 8bb3c61bafa8c1cd222ada602bb
---------------- ---------------------------
%stddev %change %stddev
\ | \
54356 -21.8% 42499 vm-scalability.median
4770855 -21.8% 3728764 vm-scalability.throughput
349.31 -7.0% 324.86 vm-scalability.time.elapsed_time
349.31 -7.0% 324.86 vm-scalability.time.elapsed_time.max
65954846 -50.0% 32978141 vm-scalability.time.maximum_resident_set_size
1.362e+08 -50.0% 68103210 vm-scalability.time.minor_page_faults
8530 +1.3% 8643 vm-scalability.time.percent_of_cpu_this_job_got
5405 -51.0% 2646 vm-scalability.time.system_time
24394 +4.3% 25433 vm-scalability.time.user_time
1.432e+09 -21.7% 1.121e+09 vm-scalability.workload
7.168e+08 ± 34% -75.7% 1.742e+08 ± 93% cpuidle.C6.time
907209 ± 47% -78.1% 198824 ± 83% cpuidle.C6.usage
3.70 ± 21% -1.6 2.10 ± 8% mpstat.cpu.all.idle%
17.49 -8.2 9.25 mpstat.cpu.all.sys%
78.81 +9.8 88.65 mpstat.cpu.all.usr%
77.75 +12.9% 87.75 vmstat.cpu.us
63901378 -48.0% 33213370 vmstat.memory.cache
56436353 +63.8% 92422539 vmstat.memory.free
10350080 ± 2% -45.0% 5688023 ± 2% numa-numastat.node0.local_node
10354411 ± 2% -45.0% 5690870 ± 2% numa-numastat.node0.numa_hit
9750290 ± 2% -51.8% 4699438 ± 2% numa-numastat.node1.local_node
9763173 ± 2% -51.7% 4713809 ± 2% numa-numastat.node1.numa_hit
2689 +1.7% 2734 turbostat.Avg_MHz
902180 ± 47% -78.4% 194763 ± 85% turbostat.C6
2.28 ± 34% -1.7 0.58 ± 96% turbostat.C6%
1.35 ± 46% -77.7% 0.30 ±121% turbostat.CPU%c6
30.21 +4.6% 31.60 turbostat.RAMWatt
-72806 +87.6% -136565 sched_debug.cfs_rq:/.spread0.min
1.92 ± 7% +19.6% 2.29 ± 12% sched_debug.cpu.nr_running.max
0.18 ± 9% +22.4% 0.22 ± 10% sched_debug.cpu.nr_running.stddev
121.46 ± 13% -16.7% 101.12 ± 2% sched_debug.cpu.ttwu_count.min
3932 ± 11% -29.4% 2775 ± 9% sched_debug.cpu.ttwu_local.max
615.08 ± 5% -15.7% 518.29 ± 11% sched_debug.cpu.ttwu_local.stddev
20816 ± 5% -7.9% 19164 ± 3% slabinfo.filp.active_objs
20932 ± 5% -8.2% 19211 ± 3% slabinfo.filp.num_objs
269203 -46.8% 143215 slabinfo.radix_tree_node.active_objs
4873 -47.2% 2575 slabinfo.radix_tree_node.active_slabs
272913 -47.2% 144211 slabinfo.radix_tree_node.num_objs
4873 -47.2% 2575 slabinfo.radix_tree_node.num_slabs
1405 ± 5% -12.5% 1229 ± 6% slabinfo.task_group.active_objs
1405 ± 5% -12.5% 1229 ± 6% slabinfo.task_group.num_objs
4915431 -69.3% 1509547 meminfo.Active
4915200 -69.3% 1509312 meminfo.Active(anon)
63839608 -48.0% 33183635 meminfo.Cached
63979230 -49.0% 32623586 meminfo.Committed_AS
6921742 ± 6% +7.8% 7461414 ± 5% meminfo.DirectMap2M
58148546 -46.9% 30898463 meminfo.Inactive
58147241 -46.9% 30897147 meminfo.Inactive(anon)
217284 -34.0% 143504 meminfo.KReclaimable
58092433 -46.9% 30842010 meminfo.Mapped
55756866 +64.5% 91744663 meminfo.MemAvailable
56266443 +64.0% 92291121 meminfo.MemFree
75640767 -47.6% 39616089 meminfo.Memused
964.00 ± 80% -54.8% 435.75 ±162% meminfo.Mlocked
10802723 -48.9% 5519172 meminfo.PageTables
217284 -34.0% 143504 meminfo.SReclaimable
62810185 -48.8% 32154716 meminfo.Shmem
353627 -21.6% 277251 meminfo.Slab
233089 -44.5% 129423 meminfo.max_used_kB
1225495 -69.2% 377122 proc-vmstat.nr_active_anon
1388615 +64.8% 2287986 proc-vmstat.nr_dirty_background_threshold
2780630 +64.8% 4581571 proc-vmstat.nr_dirty_threshold
15965493 -48.0% 8300092 proc-vmstat.nr_file_pages
14060913 +64.1% 23067814 proc-vmstat.nr_free_pages
14545439 -46.9% 7728430 proc-vmstat.nr_inactive_anon
14531845 -46.9% 7714767 proc-vmstat.nr_mapped
240.00 ± 80% -54.7% 108.75 ±163% proc-vmstat.nr_mlock
2701369 -48.9% 1380808 proc-vmstat.nr_page_table_pages
15707875 -48.8% 8042599 proc-vmstat.nr_shmem
54332 -34.0% 35869 proc-vmstat.nr_slab_reclaimable
34085 -1.9% 33436 proc-vmstat.nr_slab_unreclaimable
1225495 -69.2% 377122 proc-vmstat.nr_zone_active_anon
14545439 -46.9% 7728430 proc-vmstat.nr_zone_inactive_anon
20143497 -48.2% 10429002 proc-vmstat.numa_hit
20126277 -48.3% 10411774 proc-vmstat.numa_local
943.75 ±107% +498.7% 5650 ± 93% proc-vmstat.numa_pages_migrated
16499608 -50.0% 8255793 proc-vmstat.pgactivate
20232662 -48.1% 10504886 proc-vmstat.pgalloc_normal
1.371e+08 -49.7% 68940704 proc-vmstat.pgfault
20002795 -48.8% 10235632 ± 3% proc-vmstat.pgfree
943.75 ±107% +498.7% 5650 ± 93% proc-vmstat.pgmigrate_success
155505 ± 3% -9.4% 140961 ± 2% softirqs.CPU1.TIMER
157106 ± 3% -12.6% 137264 ± 6% softirqs.CPU11.TIMER
153765 ± 2% -15.4% 130055 ± 8% softirqs.CPU17.TIMER
162850 ± 9% -14.9% 138618 ± 6% softirqs.CPU19.TIMER
172284 ± 7% -9.0% 156755 ± 4% softirqs.CPU2.TIMER
159844 ± 3% -9.2% 145111 ± 6% softirqs.CPU20.TIMER
36807 ± 3% +15.1% 42364 ± 6% softirqs.CPU22.RCU
162686 ± 2% -9.4% 147337 ± 5% softirqs.CPU3.TIMER
163397 ± 7% -11.9% 143969 ± 6% softirqs.CPU4.TIMER
154428 ± 2% -11.2% 137081 ± 2% softirqs.CPU45.TIMER
163755 ± 10% -13.2% 142160 ± 6% softirqs.CPU48.TIMER
153570 -13.0% 133532 ± 6% softirqs.CPU52.TIMER
154036 ± 2% -12.4% 134900 ± 4% softirqs.CPU53.TIMER
154008 ± 4% -11.7% 136016 ± 6% softirqs.CPU55.TIMER
34006 ± 6% +8.0% 36725 ± 3% softirqs.CPU57.RCU
158723 ± 3% -9.2% 144056 ± 4% softirqs.CPU62.TIMER
162701 ± 4% -7.7% 150237 ± 5% softirqs.CPU64.TIMER
35775 ± 2% +11.7% 39947 ± 4% softirqs.CPU66.RCU
33878 ± 4% +10.9% 37584 ± 5% softirqs.CPU70.RCU
150699 ± 4% -6.8% 140491 ± 6% softirqs.CPU72.TIMER
154115 ± 3% -13.0% 134119 ± 7% softirqs.CPU8.TIMER
153577 ± 3% -12.2% 134832 ± 5% softirqs.CPU9.TIMER
582521 ± 5% -15.8% 490718 ± 3% softirqs.SCHED
2510138 -68.8% 782461 ± 7% numa-meminfo.node0.Active
2509964 -68.8% 782225 ± 7% numa-meminfo.node0.Active(anon)
32261712 -46.4% 17291935 ± 2% numa-meminfo.node0.FilePages
29358600 -45.2% 16101644 ± 2% numa-meminfo.node0.Inactive
29357617 -45.2% 16100328 ± 2% numa-meminfo.node0.Inactive(anon)
29327818 -45.3% 16037347 ± 2% numa-meminfo.node0.Mapped
27396097 ± 2% +63.0% 44648994 numa-meminfo.node0.MemFree
38489693 -44.8% 21236797 ± 3% numa-meminfo.node0.MemUsed
550.50 ± 82% -54.1% 252.75 ±161% numa-meminfo.node0.Mlocked
5650587 ± 7% -40.3% 3372589 ± 8% numa-meminfo.node0.PageTables
31739245 -47.2% 16772234 ± 2% numa-meminfo.node0.Shmem
2446002 -69.8% 738096 ± 7% numa-meminfo.node1.Active
2445944 -69.8% 738096 ± 7% numa-meminfo.node1.Active(anon)
31580012 -49.7% 15892140 ± 2% numa-meminfo.node1.FilePages
28750348 -48.6% 14785203 ± 2% numa-meminfo.node1.Inactive
28750027 -48.6% 14785203 ± 2% numa-meminfo.node1.Inactive(anon)
81497 ± 39% -57.4% 34677 ± 26% numa-meminfo.node1.KReclaimable
6512 ± 13% -16.3% 5453 ± 2% numa-meminfo.node1.KernelStack
28724993 -48.5% 14793096 ± 2% numa-meminfo.node1.Mapped
28876430 +65.0% 47644386 numa-meminfo.node1.MemFree
37144988 -50.5% 18377032 ± 3% numa-meminfo.node1.MemUsed
5144842 ± 7% -58.3% 2144958 ± 13% numa-meminfo.node1.PageTables
81497 ± 39% -57.4% 34677 ± 26% numa-meminfo.node1.SReclaimable
62066 ± 8% -14.3% 53170 ± 5% numa-meminfo.node1.SUnreclaim
31072005 -50.5% 15381867 ± 2% numa-meminfo.node1.Shmem
143564 ± 23% -38.8% 87849 ± 8% numa-meminfo.node1.Slab
627134 -68.8% 195945 ± 7% numa-vmstat.node0.nr_active_anon
8061764 -46.4% 4321607 ± 2% numa-vmstat.node0.nr_file_pages
6852625 ± 2% +62.9% 11163615 numa-vmstat.node0.nr_free_pages
7336090 -45.2% 4023329 ± 2% numa-vmstat.node0.nr_inactive_anon
7328743 -45.3% 4007665 ± 2% numa-vmstat.node0.nr_mapped
137.25 ± 82% -54.5% 62.50 ±163% numa-vmstat.node0.nr_mlock
1412807 ± 7% -40.3% 843137 ± 8% numa-vmstat.node0.nr_page_table_pages
7931148 -47.1% 4191682 ± 2% numa-vmstat.node0.nr_shmem
627133 -68.8% 195945 ± 7% numa-vmstat.node0.nr_zone_active_anon
7336090 -45.2% 4023330 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
10364106 -43.1% 5896146 ± 2% numa-vmstat.node0.numa_hit
10359380 -43.1% 5893189 ± 2% numa-vmstat.node0.numa_local
611156 -69.8% 184871 ± 7% numa-vmstat.node1.nr_active_anon
7891437 -49.7% 3971770 ± 2% numa-vmstat.node1.nr_file_pages
7222570 +64.9% 11912451 numa-vmstat.node1.nr_free_pages
7184265 -48.6% 3694684 ± 2% numa-vmstat.node1.nr_inactive_anon
6512 ± 13% -16.2% 5455 ± 2% numa-vmstat.node1.nr_kernel_stack
7178065 -48.5% 3696721 ± 2% numa-vmstat.node1.nr_mapped
103.00 ± 76% -55.8% 45.50 ±164% numa-vmstat.node1.nr_mlock
1286193 ± 7% -58.3% 536154 ± 13% numa-vmstat.node1.nr_page_table_pages
7764436 -50.5% 3844201 ± 2% numa-vmstat.node1.nr_shmem
20366 ± 39% -57.4% 8668 ± 26% numa-vmstat.node1.nr_slab_reclaimable
15516 ± 8% -14.3% 13292 ± 5% numa-vmstat.node1.nr_slab_unreclaimable
611156 -69.8% 184870 ± 7% numa-vmstat.node1.nr_zone_active_anon
7184265 -48.6% 3694684 ± 2% numa-vmstat.node1.nr_zone_inactive_anon
9814840 -50.0% 4908634 ± 2% numa-vmstat.node1.numa_hit
9652600 -50.8% 4744956 ± 3% numa-vmstat.node1.numa_local
303.25 ± 41% -42.6% 174.00 ± 6% interrupts.41:PCI-MSI.1572872-edge.eth0-TxRx-8
178.00 ± 3% -10.4% 159.50 interrupts.95:PCI-MSI.1572926-edge.eth0-TxRx-62
350658 -7.3% 325134 interrupts.CAL:Function_call_interrupts
4086 ± 2% -10.6% 3651 interrupts.CPU1.CAL:Function_call_interrupts
4151 ± 2% -13.9% 3572 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
151.25 ± 41% +277.2% 570.50 ± 36% interrupts.CPU24.RES:Rescheduling_interrupts
4071 ± 2% -11.0% 3625 interrupts.CPU3.CAL:Function_call_interrupts
356.00 ± 22% -52.4% 169.50 ± 35% interrupts.CPU30.RES:Rescheduling_interrupts
4066 ± 2% -11.0% 3619 interrupts.CPU4.CAL:Function_call_interrupts
443.75 ± 9% -56.2% 194.25 ± 18% interrupts.CPU41.RES:Rescheduling_interrupts
132.25 ± 36% +213.8% 415.00 ± 93% interrupts.CPU48.RES:Rescheduling_interrupts
4067 ± 2% -11.4% 3603 interrupts.CPU5.CAL:Function_call_interrupts
4169 -11.4% 3692 ± 3% interrupts.CPU58.CAL:Function_call_interrupts
4162 -13.4% 3603 ± 3% interrupts.CPU59.CAL:Function_call_interrupts
4048 -10.6% 3620 interrupts.CPU6.CAL:Function_call_interrupts
1193 ± 25% -60.5% 471.25 ± 21% interrupts.CPU6.RES:Rescheduling_interrupts
4154 -9.4% 3763 interrupts.CPU60.CAL:Function_call_interrupts
4147 ± 2% -9.4% 3759 interrupts.CPU61.CAL:Function_call_interrupts
178.00 ± 3% -10.4% 159.50 interrupts.CPU62.95:PCI-MSI.1572926-edge.eth0-TxRx-62
74.50 ± 32% +146.3% 183.50 ± 43% interrupts.CPU66.RES:Rescheduling_interrupts
4038 ± 2% -10.4% 3616 interrupts.CPU7.CAL:Function_call_interrupts
4106 ± 3% -8.2% 3770 interrupts.CPU70.CAL:Function_call_interrupts
4102 ± 3% -8.1% 3767 interrupts.CPU71.CAL:Function_call_interrupts
4118 ± 2% -8.6% 3765 interrupts.CPU72.CAL:Function_call_interrupts
4113 ± 2% -8.6% 3761 interrupts.CPU73.CAL:Function_call_interrupts
4048 ± 4% -7.2% 3755 interrupts.CPU74.CAL:Function_call_interrupts
4101 -8.6% 3750 interrupts.CPU75.CAL:Function_call_interrupts
4087 ± 2% -8.8% 3726 interrupts.CPU76.CAL:Function_call_interrupts
303.25 ± 41% -42.6% 174.00 ± 6% interrupts.CPU8.41:PCI-MSI.1572872-edge.eth0-TxRx-8
4121 ± 2% -8.1% 3788 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
81.50 ± 53% -78.8% 17.25 ± 52% interrupts.CPU81.RES:Rescheduling_interrupts
4215 -10.2% 3783 interrupts.CPU82.CAL:Function_call_interrupts
4198 -9.9% 3782 interrupts.CPU83.CAL:Function_call_interrupts
4130 ± 2% -8.6% 3776 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
4118 ± 4% -9.3% 3736 ± 2% interrupts.CPU85.CAL:Function_call_interrupts
4197 -10.3% 3765 interrupts.CPU86.CAL:Function_call_interrupts
4105 ± 2% -9.0% 3736 ± 2% interrupts.CPU87.CAL:Function_call_interrupts
7.925e+09 -3.5% 7.647e+09 perf-stat.i.branch-instructions
0.20 ± 37% -0.1 0.11 ± 9% perf-stat.i.branch-miss-rate%
10814780 ± 2% -31.4% 7415640 perf-stat.i.branch-misses
61.93 +5.0 66.91 perf-stat.i.cache-miss-rate%
5.56e+08 +9.9% 6.111e+08 perf-stat.i.cache-misses
8.267e+08 +5.8% 8.747e+08 perf-stat.i.cache-references
7.31 +2.7% 7.51 perf-stat.i.cpi
2.363e+11 +1.6% 2.402e+11 perf-stat.i.cpu-cycles
49.67 ± 7% +17.8% 58.50 ± 3% perf-stat.i.cpu-migrations
1035 ± 48% -44.0% 579.74 perf-stat.i.cycles-between-cache-misses
5.29 +0.6 5.91 perf-stat.i.dTLB-load-miss-rate%
4.939e+08 +14.1% 5.636e+08 perf-stat.i.dTLB-load-misses
9.551e+09 -2.2% 9.346e+09 perf-stat.i.dTLB-loads
2.889e+09 +5.3% 3.041e+09 perf-stat.i.dTLB-stores
88.66 +2.8 91.44 perf-stat.i.iTLB-load-miss-rate%
1048381 ± 2% -31.4% 719167 ± 3% perf-stat.i.iTLB-load-misses
48064 ± 21% -32.9% 32249 ± 12% perf-stat.i.iTLB-loads
3.396e+10 -3.3% 3.282e+10 perf-stat.i.instructions
221052 ± 20% -28.8% 157319 ± 8% perf-stat.i.instructions-per-iTLB-miss
0.15 -6.2% 0.14 perf-stat.i.ipc
388532 -45.8% 210677 perf-stat.i.minor-faults
3.8e+08 ± 6% +9.4% 4.157e+08 ± 2% perf-stat.i.node-loads
33.35 ± 14% -8.7 24.65 ± 3% perf-stat.i.node-store-miss-rate%
2262085 -44.7% 1251208 perf-stat.i.node-store-misses
1533212 ± 2% -43.6% 864523 ± 3% perf-stat.i.node-stores
388536 -45.8% 210682 perf-stat.i.page-faults
24.25 +9.7% 26.59 perf-stat.overall.MPKI
0.14 ± 2% -0.0 0.10 perf-stat.overall.branch-miss-rate%
67.22 +2.6 69.85 perf-stat.overall.cache-miss-rate%
6.94 +5.2% 7.31 perf-stat.overall.cpi
426.12 -7.6% 393.54 perf-stat.overall.cycles-between-cache-misses
4.90 +0.8 5.67 perf-stat.overall.dTLB-load-miss-rate%
32322 ± 3% +40.9% 45552 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.14 -5.0% 0.14 perf-stat.overall.ipc
8330 +14.3% 9525 perf-stat.overall.path-length
7.918e+09 -3.6% 7.632e+09 perf-stat.ps.branch-instructions
10882606 ± 2% -31.4% 7464205 perf-stat.ps.branch-misses
5.529e+08 +10.0% 6.083e+08 perf-stat.ps.cache-misses
8.226e+08 +5.9% 8.71e+08 perf-stat.ps.cache-references
2.356e+11 +1.6% 2.394e+11 perf-stat.ps.cpu-cycles
49.51 ± 7% +17.8% 58.31 ± 3% perf-stat.ps.cpu-migrations
4.909e+08 +14.3% 5.609e+08 perf-stat.ps.dTLB-load-misses
9.537e+09 -2.2% 9.324e+09 perf-stat.ps.dTLB-loads
2.876e+09 +5.3% 3.029e+09 perf-stat.ps.dTLB-stores
1050462 ± 2% -31.5% 720005 ± 3% perf-stat.ps.iTLB-load-misses
48065 ± 21% -32.8% 32284 ± 12% perf-stat.ps.iTLB-loads
3.392e+10 -3.4% 3.275e+10 perf-stat.ps.instructions
389703 -45.7% 211444 perf-stat.ps.minor-faults
3.777e+08 ± 6% +9.5% 4.137e+08 ± 2% perf-stat.ps.node-loads
2279161 -44.6% 1261817 perf-stat.ps.node-store-misses
1545271 ± 2% -43.6% 871478 ± 3% perf-stat.ps.node-stores
389703 -45.7% 211444 perf-stat.ps.page-faults
1.193e+13 -10.5% 1.067e+13 perf-stat.total.instructions
24.30 ± 9% -5.1 19.16 ± 5% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
24.08 ± 9% -5.1 18.95 ± 5% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
26.38 ± 7% -4.7 21.71 ± 7% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
33.92 ± 6% -3.7 30.24 ± 6% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
6.80 ± 9% -1.2 5.60 ± 12% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.33 ± 35% -1.0 0.36 ±100% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
1.60 ± 24% -0.9 0.66 ± 59% perf-profile.calltrace.cycles-pp.ret_from_fork
1.60 ± 24% -0.9 0.66 ± 59% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.31 ± 37% -0.9 0.36 ±100% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
5.61 ± 8% -0.9 4.68 ± 13% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.87 ± 14% -0.9 1.99 ± 8% perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.84 ± 20% -0.5 0.29 ±100% perf-profile.calltrace.cycles-pp.fb_flashcursor.process_one_work.worker_thread.kthread.ret_from_fork
0.84 ± 20% -0.5 0.29 ±100% perf-profile.calltrace.cycles-pp.bit_cursor.fb_flashcursor.process_one_work.worker_thread.kthread
0.84 ± 20% -0.5 0.29 ±100% perf-profile.calltrace.cycles-pp.soft_cursor.bit_cursor.fb_flashcursor.process_one_work.worker_thread
0.83 ± 20% -0.5 0.29 ±100% perf-profile.calltrace.cycles-pp.mga_dirty_update.soft_cursor.bit_cursor.fb_flashcursor.process_one_work
1.74 ± 15% -0.5 1.25 ± 4% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.update_load_avg.task_tick_fair.scheduler_tick.update_process_times
0.77 ± 23% -0.5 0.29 ±100% perf-profile.calltrace.cycles-pp.memcpy_toio.mga_dirty_update.soft_cursor.bit_cursor.fb_flashcursor
0.76 ± 11% -0.4 0.33 ±100% perf-profile.calltrace.cycles-pp.trigger_load_balance.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.50 ± 12% +0.5 1.99 ± 14% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.65 ± 12% +2.0 6.65 ± 13% perf-profile.calltrace.cycles-pp.interrupt_entry
24.16 ± 9% -5.2 19.00 ± 5% perf-profile.children.cycles-pp.update_process_times
24.33 ± 9% -5.1 19.18 ± 5% perf-profile.children.cycles-pp.tick_sched_handle
26.44 ± 7% -4.7 21.75 ± 7% perf-profile.children.cycles-pp.tick_sched_timer
34.01 ± 6% -3.7 30.31 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
6.91 ± 8% -1.2 5.69 ± 13% perf-profile.children.cycles-pp.irq_exit
5.69 ± 8% -1.0 4.71 ± 13% perf-profile.children.cycles-pp.__softirqentry_text_start
1.29 ± 18% -0.9 0.35 ±124% perf-profile.children.cycles-pp.wake_up_klogd_work_func
3.02 ± 12% -0.9 2.11 ± 7% perf-profile.children.cycles-pp.update_curr
1.68 ± 22% -0.9 0.82 ± 27% perf-profile.children.cycles-pp.ret_from_fork
1.60 ± 24% -0.8 0.76 ± 28% perf-profile.children.cycles-pp.kthread
1.33 ± 35% -0.8 0.53 ± 39% perf-profile.children.cycles-pp.worker_thread
1.31 ± 37% -0.8 0.53 ± 40% perf-profile.children.cycles-pp.process_one_work
1.77 ± 15% -0.5 1.27 ± 5% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.84 ± 20% -0.4 0.42 ± 43% perf-profile.children.cycles-pp.fb_flashcursor
0.83 ± 20% -0.4 0.42 ± 43% perf-profile.children.cycles-pp.mga_dirty_update
0.84 ± 20% -0.4 0.42 ± 41% perf-profile.children.cycles-pp.bit_cursor
0.84 ± 20% -0.4 0.42 ± 41% perf-profile.children.cycles-pp.soft_cursor
0.83 ± 21% -0.4 0.42 ± 43% perf-profile.children.cycles-pp.memcpy_toio
0.76 ± 11% -0.2 0.52 ± 31% perf-profile.children.cycles-pp.trigger_load_balance
0.34 ± 22% -0.2 0.10 ± 96% perf-profile.children.cycles-pp.fbcon_redraw
0.67 ± 16% -0.2 0.44 ± 12% perf-profile.children.cycles-pp._raw_spin_lock
0.34 ± 22% -0.2 0.12 ± 72% perf-profile.children.cycles-pp.con_scroll
0.34 ± 22% -0.2 0.12 ± 72% perf-profile.children.cycles-pp.fbcon_scroll
0.34 ± 22% -0.2 0.12 ± 72% perf-profile.children.cycles-pp.lf
0.41 ± 18% -0.2 0.26 ± 32% perf-profile.children.cycles-pp.__calc_delta
0.35 ± 19% -0.1 0.22 ± 29% perf-profile.children.cycles-pp.rebalance_domains
0.31 ± 15% -0.1 0.20 ± 28% perf-profile.children.cycles-pp.do_sys_open
0.22 ± 20% -0.1 0.12 ± 17% perf-profile.children.cycles-pp.run_rebalance_domains
0.12 ± 15% -0.1 0.05 ± 60% perf-profile.children.cycles-pp.lookup_fast
0.28 ± 17% -0.1 0.22 ± 19% perf-profile.children.cycles-pp.tick_sched_do_timer
0.14 ± 23% -0.1 0.08 ± 27% perf-profile.children.cycles-pp.__get_user_pages
0.14 ± 23% -0.1 0.08 ± 27% perf-profile.children.cycles-pp.get_user_pages_remote
0.03 ±102% +0.1 0.09 ± 30% perf-profile.children.cycles-pp.alloc_set_pte
0.10 ± 12% +0.1 0.17 ± 28% perf-profile.children.cycles-pp.proc_reg_read
0.21 ± 35% +0.2 0.37 ± 41% perf-profile.children.cycles-pp.perf_event_task_tick
0.40 ± 29% +0.2 0.63 ± 18% perf-profile.children.cycles-pp.rb_erase
1.52 ± 12% +0.5 2.00 ± 14% perf-profile.children.cycles-pp.__remove_hrtimer
2.18 ± 15% -0.7 1.50 ± 4% perf-profile.self.cycles-pp.update_curr
1.59 ± 14% -0.5 1.09 ± 6% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
1.08 ± 15% -0.5 0.59 ± 44% perf-profile.self.cycles-pp.delay_tsc
0.83 ± 20% -0.4 0.42 ± 43% perf-profile.self.cycles-pp.memcpy_toio
1.69 ± 12% -0.3 1.35 ± 8% perf-profile.self.cycles-pp.task_tick_fair
0.76 ± 11% -0.3 0.49 ± 30% perf-profile.self.cycles-pp.trigger_load_balance
0.32 ± 25% -0.2 0.11 ± 76% perf-profile.self.cycles-pp.sys_imageblit
0.52 ± 11% -0.2 0.37 ± 29% perf-profile.self.cycles-pp.idle_cpu
0.40 ± 22% -0.1 0.25 ± 32% perf-profile.self.cycles-pp.__calc_delta
0.12 ± 26% -0.1 0.04 ±101% perf-profile.self.cycles-pp.update_process_times
0.01 ±173% +0.1 0.10 ± 31% perf-profile.self.cycles-pp.get_page_from_freelist
0.21 ± 35% +0.2 0.37 ± 41% perf-profile.self.cycles-pp.perf_event_task_tick
0.40 ± 29% +0.2 0.63 ± 18% perf-profile.self.cycles-pp.rb_erase
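For readers new to these tables: the %change column is the relative change of the patched commit's mean against the base commit's mean (the perf-profile rows instead show absolute deltas in percentage points). A minimal sketch of the relative-change computation, checked against the vm-scalability.median row above:

```python
# Recompute the %change column: change = (patched - base) / base * 100.
def pct_change(base, patched):
    return (patched - base) / base * 100.0

base, patched = 54356, 42499  # vm-scalability.median, base vs. patched
print(f"{pct_change(base, patched):+.1f}%")  # -> -21.8%, matching the table
```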
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen

[xfs] 8afc415a53: fsmark.files_per_sec -51.6% regression
by kernel test robot
Greetings,
FYI, we noticed a -51.6% regression of fsmark.files_per_sec due to commit:
commit: 8afc415a531ba62ee23fac421b937f1f61bdd84a ("xfs: prevent CIL push holdoff in log recovery")
git://git.infradead.org/users/hch/xfs xfs-cow-iomap
in testcase: fsmark
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
fs: xfs
filesize: 4M
test_size: 40G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
test-description: fsmark is a file system benchmark for testing synchronous write workloads, such as those of mail servers.
test-url: https://sourceforge.net/projects/fsmark/
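A hedged sketch of the fs_mark invocation implied by the job parameters above (1 thread, 4M files, 40G total, fsyncBeforeClose); the flag meanings and the sync-method code are assumptions from fs_mark's usage text, and the mount point is hypothetical — verify both against your build and job.yaml:

```shell
#!/bin/sh
# Derive the file count from the job parameters: 40G total / 4M per file.
FILESIZE=$((4 * 1024 * 1024))   # 4M per file, in bytes
NFILES=$((40 * 1024 / 4))       # 40G / 4M = 10240 files
# -d target dir (hypothetical), -t threads, -n files, -s bytes per file,
# -S sync method (1 assumed to mean fsync-before-close -- check fs_mark -h).
echo "fs_mark -d /mnt/xfs -t 1 -n $NFILES -s $FILESIZE -S 1"
```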
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/xfs/1x/x86_64-rhel-7.6/1t/debian-x86_64-2019-05-14.cgz/fsyncBeforeClose/lkp-ivb-ep01/40G/fsmark
commit:
7da0720109 ("xfs: fix missed wakeup on l_flush_wait")
8afc415a53 ("xfs: prevent CIL push holdoff in log recovery")
7da0720109312149 8afc415a531ba62ee23fac421b9
---------------- ---------------------------
%stddev %change %stddev
\ | \
180849 ± 9% +146.7% 446148 fsmark.app_overhead
249.95 ± 6% -51.6% 120.90 fsmark.files_per_sec
44.25 ± 7% +98.9% 88.00 fsmark.time.elapsed_time
44.25 ± 7% +98.9% 88.00 fsmark.time.elapsed_time.max
14380 -70.2% 4283 ± 2% fsmark.time.involuntary_context_switches
41.75 ± 7% +96.1% 81.88 fsmark.time.system_time
414897 ± 5% -9.2% 376542 ± 4% fsmark.time.voluntary_context_switches
0.32 ± 5% -0.1 0.26 ± 9% mpstat.cpu.all.usr%
896326 ± 6% -48.2% 464712 vmstat.io.bo
20217 ± 10% -49.1% 10295 ± 3% vmstat.system.cs
212.00 ± 36% +56.6% 332.00 ± 8% numa-vmstat.node1.nr_dirty
251.50 ± 45% +57.5% 396.00 ± 6% numa-vmstat.node1.nr_writeback
448.25 ± 41% +54.7% 693.50 ± 5% numa-vmstat.node1.nr_zone_write_pending
4675 ± 6% +8.3% 5065 ± 5% slabinfo.kmalloc-512.active_objs
6382 +12.6% 7186 slabinfo.vmap_area.active_objs
6386 +12.5% 7187 slabinfo.vmap_area.num_objs
108110 ± 48% +1032.8% 1224728 ±122% cpuidle.C1.usage
1.503e+09 ± 13% +62.6% 2.444e+09 ± 21% cpuidle.C6.time
70422 ± 2% -13.4% 60960 ± 8% cpuidle.POLL.time
359792 ± 13% -89.9% 36335 ± 50% cpuidle.POLL.usage
49124 ± 3% +64.7% 80915 ± 3% meminfo.AnonHugePages
477350 ± 4% -12.2% 419169 ± 11% meminfo.Committed_AS
1218 ± 6% +22.1% 1488 ± 8% meminfo.Dirty
1844820 ± 6% -48.2% 956326 meminfo.max_used_kB
20.00 ± 16% +45.0% 29.00 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.max
19567 ± 20% +36.8% 26763 ± 22% sched_debug.cfs_rq:/.runnable_weight.max
-2392 -137.2% 889.90 ±208% sched_debug.cfs_rq:/.spread0.avg
1435 ± 8% +29.6% 1860 ± 6% sched_debug.cfs_rq:/.util_avg.max
282.53 ± 14% +38.7% 391.93 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
6.77 ± 10% +17.7% 7.97 ± 8% sched_debug.cpu.nr_uninterruptible.stddev
58488 +0.8% 58965 proc-vmstat.nr_active_anon
5161550 +6.9% 5516680 proc-vmstat.nr_file_pages
4888868 +7.3% 5243755 proc-vmstat.nr_inactive_file
5423 +2.1% 5536 proc-vmstat.nr_shmem
42279 +1.8% 43038 proc-vmstat.nr_slab_reclaimable
58488 +0.8% 58965 proc-vmstat.nr_zone_active_anon
4888868 +7.3% 5243755 proc-vmstat.nr_zone_inactive_file
146850 ± 9% +72.5% 253361 ± 2% proc-vmstat.pgfault
9752169 ± 10% +10.4% 10765130 proc-vmstat.pgfree
152.25 ± 3% -34.5% 99.75 ± 4% turbostat.Avg_MHz
6.78 ± 5% +0.9 7.66 ± 3% turbostat.Busy%
2253 ± 6% -42.3% 1301 turbostat.Bzy_MHz
104276 ± 51% +1071.1% 1221211 ±123% turbostat.C1
56.67 ± 6% -51.1% 27.72 ± 6% turbostat.CorWatt
3701162 ± 6% +94.2% 7189101 turbostat.IRQ
16.99 ± 16% +51.1% 25.66 ± 22% turbostat.Pkg%pc2
83.15 ± 4% -35.2% 53.85 ± 3% turbostat.PkgWatt
32.43 -4.5% 30.97 turbostat.RAMWatt
3660 ± 6% +95.1% 7142 turbostat.SMI
51.70 ± 18% -37.1% 32.54 ± 12% perf-stat.i.MPKI
9.64 ± 19% -5.5 4.19 ± 15% perf-stat.i.branch-miss-rate%
34.81 ± 17% +9.4 44.17 ± 10% perf-stat.i.cache-miss-rate%
21441 ± 10% -51.0% 10516 ± 3% perf-stat.i.context-switches
5.77 ± 18% -48.2% 2.99 ± 10% perf-stat.i.cpi
469.30 ± 7% -48.3% 242.70 perf-stat.i.cpu-migrations
380.66 ± 28% -40.4% 226.72 ± 12% perf-stat.i.cycles-between-cache-misses
0.74 ± 19% -0.3 0.47 ± 8% perf-stat.i.dTLB-load-miss-rate%
0.17 ± 11% -0.1 0.11 ± 12% perf-stat.i.dTLB-store-miss-rate%
88.63 ± 4% -6.0 82.67 ± 2% perf-stat.i.iTLB-load-miss-rate%
0.27 ± 21% +34.5% 0.37 ± 7% perf-stat.i.ipc
2941 ± 2% -9.4% 2666 perf-stat.i.minor-faults
35.82 ± 7% -5.0 30.83 ± 5% perf-stat.i.node-load-miss-rate%
16.33 ± 15% -13.0 3.29 ± 13% perf-stat.i.node-store-miss-rate%
2942 ± 2% -9.4% 2666 perf-stat.i.page-faults
20958 ± 10% -50.4% 10397 ± 3% perf-stat.ps.context-switches
458.71 ± 6% -47.7% 239.95 perf-stat.ps.cpu-migrations
2876 ± 2% -8.4% 2636 perf-stat.ps.minor-faults
78207 +1.1% 79094 perf-stat.ps.msec
2877 ± 2% -8.4% 2636 perf-stat.ps.page-faults
7.674e+10 ± 32% +89.3% 1.453e+11 ± 7% perf-stat.total.instructions
0.13 ±173% +0.6 0.68 ± 14% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit
0.77 ± 63% +0.7 1.49 ± 11% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.36 ± 14% -0.3 1.05 ± 21% perf-profile.children.cycles-pp.irq_work_interrupt
1.36 ± 14% -0.3 1.05 ± 21% perf-profile.children.cycles-pp.smp_irq_work_interrupt
1.36 ± 14% -0.3 1.05 ± 21% perf-profile.children.cycles-pp.irq_work_run
1.36 ± 14% -0.3 1.05 ± 21% perf-profile.children.cycles-pp.printk
0.17 ± 15% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.nr_iowait_cpu
0.01 ±173% +0.0 0.06 ± 20% perf-profile.children.cycles-pp.__xfs_free_extent
0.01 ±173% +0.0 0.06 ± 20% perf-profile.children.cycles-pp.xfs_extent_free_finish_item
0.01 ±173% +0.0 0.06 ± 20% perf-profile.children.cycles-pp.xfs_trans_free_extent
0.06 ± 9% +0.1 0.11 ± 11% perf-profile.children.cycles-pp.x86_pmu_disable
0.12 ± 24% +0.1 0.19 ± 9% perf-profile.children.cycles-pp.__intel_pmu_disable_all
0.04 ±102% +0.1 0.13 ± 3% perf-profile.children.cycles-pp.unwind_next_frame
0.06 ±100% +0.2 0.23 ± 10% perf-profile.children.cycles-pp.arch_stack_walk
0.08 ± 72% +0.2 0.26 ± 13% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.09 ± 72% +0.2 0.29 ± 11% perf-profile.children.cycles-pp.__account_scheduler_latency
0.19 ± 58% +0.2 0.40 ± 11% perf-profile.children.cycles-pp.enqueue_task_fair
0.17 ± 57% +0.2 0.39 ± 11% perf-profile.children.cycles-pp.enqueue_entity
0.19 ± 58% +0.2 0.41 ± 10% perf-profile.children.cycles-pp.activate_task
0.19 ± 61% +0.2 0.41 ± 11% perf-profile.children.cycles-pp.ttwu_do_activate
0.41 ± 27% +0.2 0.65 ± 13% perf-profile.children.cycles-pp.update_sd_lb_stats
0.45 ± 26% +0.3 0.73 ± 15% perf-profile.children.cycles-pp.find_busiest_group
0.95 ± 33% +0.6 1.56 ± 14% perf-profile.children.cycles-pp.rebalance_domains
0.16 ± 10% -0.1 0.10 ± 18% perf-profile.self.cycles-pp.nr_iowait_cpu
0.20 ± 12% -0.1 0.14 ± 8% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.05 ± 8% +0.1 0.11 ± 11% perf-profile.self.cycles-pp.x86_pmu_disable
0.30 ± 5% +0.1 0.42 ± 16% perf-profile.self.cycles-pp.__softirqentry_text_start
0.15 ± 31% +0.1 0.27 ± 9% perf-profile.self.cycles-pp.rebalance_domains
0.20 ± 33% +0.1 0.34 ± 16% perf-profile.self.cycles-pp.update_blocked_averages
28.75 ± 18% +164.3% 76.00 ± 35% interrupts.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
771.00 ± 17% +28.7% 992.25 ± 4% interrupts.CPU0.CAL:Function_call_interrupts
91149 ± 7% +95.2% 177951 interrupts.CPU0.LOC:Local_timer_interrupts
90856 ± 7% +95.6% 177752 interrupts.CPU1.LOC:Local_timer_interrupts
91094 ± 6% +95.1% 177749 interrupts.CPU10.LOC:Local_timer_interrupts
746.75 ± 19% +32.0% 986.00 ± 5% interrupts.CPU11.CAL:Function_call_interrupts
90979 ± 6% +95.2% 177576 interrupts.CPU11.LOC:Local_timer_interrupts
14.25 ± 56% +1717.5% 259.00 ± 36% interrupts.CPU11.RES:Rescheduling_interrupts
1141 ± 2% -76.3% 270.25 ±173% interrupts.CPU12.10:IR-IO-APIC.10-edge.ipmi_si
90204 ± 7% +97.3% 177982 interrupts.CPU12.LOC:Local_timer_interrupts
90771 ± 7% +95.8% 177727 interrupts.CPU13.LOC:Local_timer_interrupts
11.25 ± 96% +3382.2% 391.75 ± 42% interrupts.CPU13.RES:Rescheduling_interrupts
90988 ± 7% +95.4% 177798 interrupts.CPU14.LOC:Local_timer_interrupts
90856 ± 6% +95.7% 177799 interrupts.CPU15.LOC:Local_timer_interrupts
18.00 ± 87% +1466.7% 282.00 ± 11% interrupts.CPU15.RES:Rescheduling_interrupts
90924 ± 7% +95.6% 177815 interrupts.CPU16.LOC:Local_timer_interrupts
90888 ± 7% +95.7% 177831 interrupts.CPU17.LOC:Local_timer_interrupts
644.25 ± 8% +59.1% 1024 interrupts.CPU18.CAL:Function_call_interrupts
91179 ± 7% +94.8% 177619 interrupts.CPU18.LOC:Local_timer_interrupts
90762 ± 7% +95.7% 177614 interrupts.CPU19.LOC:Local_timer_interrupts
91081 ± 7% +94.8% 177453 interrupts.CPU2.LOC:Local_timer_interrupts
90705 ± 6% +96.0% 177751 interrupts.CPU20.LOC:Local_timer_interrupts
739.00 ± 18% +31.4% 971.00 ± 13% interrupts.CPU21.CAL:Function_call_interrupts
91044 ± 7% +95.3% 177843 interrupts.CPU21.LOC:Local_timer_interrupts
90614 ± 7% +96.2% 177753 interrupts.CPU22.LOC:Local_timer_interrupts
90780 ± 6% +94.9% 176975 interrupts.CPU23.LOC:Local_timer_interrupts
759.00 ± 17% +30.0% 986.75 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
90852 ± 7% +95.7% 177792 interrupts.CPU24.LOC:Local_timer_interrupts
750.25 ± 19% +31.4% 986.00 ± 6% interrupts.CPU25.CAL:Function_call_interrupts
90714 ± 6% +95.8% 177623 interrupts.CPU25.LOC:Local_timer_interrupts
34.75 ± 60% +732.4% 289.25 ± 33% interrupts.CPU25.RES:Rescheduling_interrupts
90757 ± 7% +96.1% 177966 interrupts.CPU26.LOC:Local_timer_interrupts
90986 ± 7% +95.2% 177586 interrupts.CPU27.LOC:Local_timer_interrupts
90882 ± 7% +95.5% 177652 interrupts.CPU28.LOC:Local_timer_interrupts
741.50 ± 19% +30.1% 964.50 ± 11% interrupts.CPU29.CAL:Function_call_interrupts
90835 ± 6% +95.3% 177404 interrupts.CPU29.LOC:Local_timer_interrupts
144.75 ± 95% +435.8% 775.50 ± 31% interrupts.CPU29.NMI:Non-maskable_interrupts
144.75 ± 95% +435.8% 775.50 ± 31% interrupts.CPU29.PMI:Performance_monitoring_interrupts
673.25 ± 2% +43.9% 968.75 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
91119 ± 6% +95.5% 178126 interrupts.CPU3.LOC:Local_timer_interrupts
566.00 ± 44% -67.4% 184.50 ± 70% interrupts.CPU3.NMI:Non-maskable_interrupts
566.00 ± 44% -67.4% 184.50 ± 70% interrupts.CPU3.PMI:Performance_monitoring_interrupts
90750 ± 7% +96.2% 178071 interrupts.CPU30.LOC:Local_timer_interrupts
5.25 ± 59% +3766.7% 203.00 ±150% interrupts.CPU30.RES:Rescheduling_interrupts
90742 ± 7% +96.1% 177947 interrupts.CPU31.LOC:Local_timer_interrupts
90950 ± 6% +95.4% 177757 interrupts.CPU32.LOC:Local_timer_interrupts
12.75 ±146% +2794.1% 369.00 ±102% interrupts.CPU32.RES:Rescheduling_interrupts
770.50 ± 19% +27.5% 982.50 ± 5% interrupts.CPU33.CAL:Function_call_interrupts
90865 ± 7% +95.5% 177644 interrupts.CPU33.LOC:Local_timer_interrupts
776.00 ± 19% +32.5% 1028 interrupts.CPU34.CAL:Function_call_interrupts
90672 ± 6% +95.9% 177648 interrupts.CPU34.LOC:Local_timer_interrupts
625.75 ± 42% +56.0% 976.25 ± 4% interrupts.CPU35.CAL:Function_call_interrupts
91086 ± 7% +95.0% 177628 interrupts.CPU35.LOC:Local_timer_interrupts
21.75 ± 92% +1141.4% 270.00 ± 21% interrupts.CPU35.RES:Rescheduling_interrupts
90799 ± 7% +95.8% 177817 interrupts.CPU36.LOC:Local_timer_interrupts
603.50 ± 51% +63.3% 985.25 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
91106 ± 6% +94.9% 177572 interrupts.CPU37.LOC:Local_timer_interrupts
53.75 ±100% +421.9% 280.50 ± 39% interrupts.CPU37.RES:Rescheduling_interrupts
90518 ± 6% +96.2% 177595 interrupts.CPU38.LOC:Local_timer_interrupts
302.75 ± 44% -72.6% 83.00 ±164% interrupts.CPU38.NMI:Non-maskable_interrupts
302.75 ± 44% -72.6% 83.00 ±164% interrupts.CPU38.PMI:Performance_monitoring_interrupts
734.75 ± 23% +37.0% 1006 ± 4% interrupts.CPU39.CAL:Function_call_interrupts
90754 ± 7% +96.0% 177857 interrupts.CPU39.LOC:Local_timer_interrupts
90924 ± 7% +95.5% 177715 interrupts.CPU4.LOC:Local_timer_interrupts
2150 ±161% -98.4% 34.25 ± 79% interrupts.CPU4.RES:Rescheduling_interrupts
90984 ± 7% +95.2% 177588 interrupts.CPU5.LOC:Local_timer_interrupts
64.25 ±124% +273.5% 240.00 ± 24% interrupts.CPU5.RES:Rescheduling_interrupts
28.75 ± 18% +164.3% 76.00 ± 35% interrupts.CPU6.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
765.00 ± 17% +29.1% 987.25 ± 4% interrupts.CPU6.CAL:Function_call_interrupts
90922 ± 7% +95.6% 177805 interrupts.CPU6.LOC:Local_timer_interrupts
90794 ± 6% +95.7% 177707 interrupts.CPU7.LOC:Local_timer_interrupts
118.00 ± 67% +101.9% 238.25 ± 21% interrupts.CPU7.RES:Rescheduling_interrupts
90952 ± 7% +95.6% 177860 interrupts.CPU8.LOC:Local_timer_interrupts
90813 ± 6% +95.8% 177824 interrupts.CPU9.LOC:Local_timer_interrupts
392.00 ± 40% -95.9% 16.00 ±109% interrupts.CPU9.NMI:Non-maskable_interrupts
392.00 ± 40% -95.9% 16.00 ±109% interrupts.CPU9.PMI:Performance_monitoring_interrupts
16.25 ± 97% +1509.2% 261.50 ± 36% interrupts.CPU9.RES:Rescheduling_interrupts
3634665 ± 7% +95.6% 7109191 interrupts.LOC:Local_timer_interrupts
11848 ± 3% +57.0% 18607 ± 21% softirqs.CPU0.RCU
10600 ± 7% +62.5% 17229 ± 7% softirqs.CPU0.SCHED
24443 ± 15% +54.1% 37673 ± 20% softirqs.CPU0.TIMER
12333 ± 16% +50.3% 18543 ± 26% softirqs.CPU1.RCU
8299 ± 3% +66.0% 13778 ± 2% softirqs.CPU1.SCHED
26965 ± 12% +45.1% 39125 ± 23% softirqs.CPU1.TIMER
7516 ± 5% +85.9% 13972 ± 2% softirqs.CPU10.SCHED
11387 ± 4% +53.9% 17522 ± 24% softirqs.CPU11.RCU
7654 ± 6% +68.2% 12878 ± 4% softirqs.CPU11.SCHED
26755 ± 8% +40.9% 37691 ± 21% softirqs.CPU11.TIMER
12554 ± 15% +55.7% 19547 ± 24% softirqs.CPU12.RCU
7561 ± 8% +84.5% 13955 softirqs.CPU12.SCHED
25401 ± 23% +54.7% 39289 ± 24% softirqs.CPU12.TIMER
11087 ± 3% +57.7% 17484 ± 23% softirqs.CPU13.RCU
7618 ± 7% +63.9% 12486 ± 8% softirqs.CPU13.SCHED
26266 ± 8% +41.9% 37272 ± 22% softirqs.CPU13.TIMER
7755 ± 5% +78.7% 13861 softirqs.CPU14.SCHED
28978 ± 10% +42.3% 41250 ± 19% softirqs.CPU14.TIMER
11205 ± 4% +57.6% 17654 ± 19% softirqs.CPU15.RCU
7717 ± 9% +66.2% 12828 ± 5% softirqs.CPU15.SCHED
27120 ± 6% +35.8% 36821 ± 21% softirqs.CPU15.TIMER
12310 ± 12% +62.9% 20058 ± 21% softirqs.CPU16.RCU
7801 ± 7% +77.7% 13861 softirqs.CPU16.SCHED
27595 ± 5% +45.1% 40035 ± 29% softirqs.CPU16.TIMER
10090 ± 18% +82.3% 18397 ± 24% softirqs.CPU17.RCU
7785 ± 4% +64.7% 12823 ± 4% softirqs.CPU17.SCHED
28607 ± 19% +27.0% 36334 ± 20% softirqs.CPU17.TIMER
12818 ± 17% +32.4% 16965 ± 22% softirqs.CPU18.RCU
7789 ± 5% +77.2% 13804 softirqs.CPU18.SCHED
11066 ± 2% +82.4% 20183 ± 33% softirqs.CPU19.RCU
7698 ± 6% +62.7% 12525 ± 6% softirqs.CPU19.SCHED
26740 ± 8% +32.6% 35463 ± 18% softirqs.CPU19.TIMER
11263 ± 18% +72.2% 19401 ± 22% softirqs.CPU2.RCU
7795 ± 9% +70.5% 13288 ± 9% softirqs.CPU2.SCHED
11228 ± 3% +65.3% 18559 ± 26% softirqs.CPU20.RCU
7378 ± 6% +76.2% 12999 ± 9% softirqs.CPU20.SCHED
25316 ± 17% +51.4% 38323 ± 25% softirqs.CPU20.TIMER
11176 ± 6% +57.8% 17634 ± 25% softirqs.CPU21.RCU
8265 ± 11% +64.1% 13564 ± 6% softirqs.CPU21.SCHED
27686 ± 9% +43.7% 39776 ± 21% softirqs.CPU21.TIMER
11362 ± 4% +60.9% 18283 ± 25% softirqs.CPU22.RCU
7751 ± 11% +74.2% 13503 ± 11% softirqs.CPU22.SCHED
22655 ± 19% +68.4% 38153 ± 24% softirqs.CPU22.TIMER
9994 ± 18% +77.2% 17709 ± 25% softirqs.CPU23.RCU
8276 ± 10% +72.2% 14250 ± 9% softirqs.CPU23.SCHED
26579 ± 7% +50.6% 40021 ± 19% softirqs.CPU23.TIMER
12377 ± 4% +62.4% 20102 ± 23% softirqs.CPU24.RCU
7411 ± 10% +77.7% 13172 ± 8% softirqs.CPU24.SCHED
23660 ± 18% +46.2% 34593 ± 25% softirqs.CPU24.TIMER
11693 ± 11% +65.1% 19304 ± 14% softirqs.CPU25.RCU
7465 ± 7% +74.0% 12993 ± 4% softirqs.CPU25.SCHED
25967 ± 7% +49.5% 38831 ± 24% softirqs.CPU25.TIMER
11604 ± 4% +62.4% 18846 ± 23% softirqs.CPU26.RCU
7490 ± 9% +81.1% 13567 ± 6% softirqs.CPU26.SCHED
24150 ± 17% +41.1% 34086 ± 26% softirqs.CPU26.TIMER
10923 ± 3% +69.6% 18522 ± 25% softirqs.CPU27.RCU
7895 ± 17% +64.6% 12999 ± 12% softirqs.CPU27.SCHED
25640 ± 8% +53.9% 39462 ± 25% softirqs.CPU27.TIMER
12459 ± 9% +55.1% 19328 ± 24% softirqs.CPU28.RCU
7569 ± 10% +82.6% 13819 ± 2% softirqs.CPU28.SCHED
11812 ± 10% +59.0% 18777 ± 15% softirqs.CPU29.RCU
7497 ± 8% +74.1% 13053 ± 5% softirqs.CPU29.SCHED
24012 ± 17% +57.5% 37825 ± 23% softirqs.CPU29.TIMER
11649 ± 2% +59.0% 18526 ± 21% softirqs.CPU3.RCU
7598 ± 7% +77.1% 13454 ± 2% softirqs.CPU3.SCHED
26085 ± 6% +50.7% 39297 ± 24% softirqs.CPU3.TIMER
11481 ± 4% +58.9% 18244 ± 18% softirqs.CPU30.RCU
7580 ± 5% +78.7% 13548 ± 2% softirqs.CPU30.SCHED
24275 ± 19% +68.5% 40898 ± 31% softirqs.CPU30.TIMER
11338 ± 4% +55.0% 17570 ± 23% softirqs.CPU31.RCU
7774 ± 7% +68.2% 13077 ± 5% softirqs.CPU31.SCHED
24025 ± 19% +58.5% 38069 ± 23% softirqs.CPU31.TIMER
11694 ± 8% +33.5% 15607 ± 15% softirqs.CPU32.RCU
7360 ± 7% +90.3% 14009 ± 3% softirqs.CPU32.SCHED
29422 ± 13% +32.0% 38827 ± 27% softirqs.CPU32.TIMER
10544 +45.9% 15387 ± 14% softirqs.CPU33.RCU
7719 ± 6% +66.0% 12813 ± 7% softirqs.CPU33.SCHED
26240 ± 9% +42.0% 37258 ± 21% softirqs.CPU33.TIMER
12082 ± 10% +36.4% 16484 ± 16% softirqs.CPU34.RCU
7887 ± 8% +75.3% 13827 ± 3% softirqs.CPU34.SCHED
26711 ± 5% +43.7% 38382 ± 26% softirqs.CPU34.TIMER
11676 ± 8% +38.9% 16215 ± 11% softirqs.CPU35.RCU
7067 ± 17% +79.8% 12704 ± 4% softirqs.CPU35.SCHED
26407 ± 10% +46.5% 38686 ± 25% softirqs.CPU35.TIMER
7806 ± 3% +149.2% 19456 ± 44% softirqs.CPU36.SCHED
26842 ± 5% +45.4% 39016 ± 27% softirqs.CPU36.TIMER
11605 ± 15% +37.8% 15996 ± 15% softirqs.CPU37.RCU
7575 ± 7% +62.4% 12302 ± 7% softirqs.CPU37.SCHED
28610 ± 18% +26.2% 36120 ± 20% softirqs.CPU37.TIMER
10784 ± 4% +54.7% 16687 ± 22% softirqs.CPU38.RCU
7756 ± 4% +78.9% 13878 ± 5% softirqs.CPU38.SCHED
26781 ± 4% +73.8% 46539 ± 22% softirqs.CPU38.TIMER
11064 ± 7% +44.2% 15959 ± 15% softirqs.CPU39.RCU
7661 ± 6% +67.0% 12796 ± 6% softirqs.CPU39.SCHED
26363 ± 7% +34.4% 35420 ± 18% softirqs.CPU39.TIMER
13577 ± 6% +48.7% 20195 ± 19% softirqs.CPU4.RCU
9012 ± 15% +47.3% 13271 ± 9% softirqs.CPU4.SCHED
11491 ± 10% +68.9% 19407 ± 13% softirqs.CPU5.RCU
7577 ± 8% +74.3% 13203 ± 2% softirqs.CPU5.SCHED
26393 ± 8% +46.9% 38772 ± 24% softirqs.CPU5.TIMER
7664 ± 14% +77.6% 13611 ± 2% softirqs.CPU6.SCHED
11189 +56.5% 17507 ± 22% softirqs.CPU7.RCU
7443 ± 7% +75.7% 13079 ± 2% softirqs.CPU7.SCHED
25804 ± 8% +49.9% 38693 ± 23% softirqs.CPU7.TIMER
12339 ± 5% +46.2% 18039 ± 20% softirqs.CPU8.RCU
7556 ± 9% +83.2% 13839 softirqs.CPU8.SCHED
11929 ± 7% +66.1% 19819 ± 15% softirqs.CPU9.RCU
7396 ± 9% +77.2% 13104 ± 3% softirqs.CPU9.SCHED
24380 ± 17% +60.6% 39159 ± 20% softirqs.CPU9.TIMER
467815 ± 2% +55.1% 725428 ± 20% softirqs.RCU
311041 ± 6% +74.6% 543204 ± 4% softirqs.SCHED
1042459 ± 8% +45.7% 1518485 ± 20% softirqs.TIMER
fsmark.time.system_time
90 +-+--------------------------------------------------------------------+
O O O O O O O O O OO O O O O O OO O O O O OO O O O O O |
80 +-+ O O |
70 +-+ |
| |
60 +-+ |
50 +-+ |
|.+.+. .+. +. +. .+. .+.|
40 +-+ ++ +.+.+.+ + +.+.+.+.+.+.+ +.+.+.+.+.++.+.+ +.+.+.++.+ |
30 +-+ : : |
| : : |
20 +-+ : : |
10 +-+ : : |
| : |
0 +-+----O-O-------------------------------------------------------------+
fsmark.time.elapsed_time
100 +-+-------------------------------------------------------------------+
90 +-O O O O O O O O OO O |
O O OO O O O OO O O O O O O OO O O O |
80 +-+ |
70 +-+ |
| |
60 +-+ |
50 +-+.+. .|
40 +-+ ++ +.+.+.++.+.+.+.+.++.+.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+ |
| : : |
30 +-+ : : |
20 +-+ : : |
| : : |
10 +-+ : |
0 +-+----O-O------------------------------------------------------------+
fsmark.time.elapsed_time.max
100 +-+-------------------------------------------------------------------+
90 +-O O O O O O O O OO O |
O O OO O O O OO O O O O O O OO O O O |
80 +-+ |
70 +-+ |
| |
60 +-+ |
50 +-+.+. .|
40 +-+ ++ +.+.+.++.+.+.+.+.++.+.+.+.+.+.++.+.+.+.+.++.+.+.+.+.++.+.+ |
| : : |
30 +-+ : : |
20 +-+ : : |
| : : |
10 +-+ : |
0 +-+----O-O------------------------------------------------------------+
fsmark.time.involuntary_context_switches
16000 +-+-----------------------------------------------------------------+
|.+.++.+ +.++.+.+.++.+.+.+.++.+.+.+.++.+.+.+.++.+.+.++.+.+.+.++.+.|
14000 +-+ : : |
12000 +-+ : : |
| : : |
10000 +-+ : : |
| : : |
8000 +-+ : : |
| : : |
6000 +-+ : : |
4000 O-O OO : :O OO O O OO O O O OO O O O OO O O O OO O O OO O |
| : |
2000 +-+ : |
| : |
0 +-+----O-O----------------------------------------------------------+
fsmark.files_per_sec
300 +-+-------------------------------------------------------------------+
| +. .+. +. +. .+. +. +. |
250 +-+ ++ +.+.+.+ .+. + +.++ +.+. + + +.+.+ ++. + +.+.+ +. |
|.+. + : : + + + + +.|
| + : : |
200 +-+ : : |
| : : |
150 +-+ : : |
O O O : :O O O OO O O O O OO O O O O O O OO O O O |
100 +-O : : O O OO O |
| : : |
| :: |
50 +-+ : |
| : |
0 +-+----O-O------------------------------------------------------------+
fsmark.app_overhead
500000 +-+----------------------------------------------------------------+
450000 +-+ O O OO O OO O O OO O O |
O O OO OO O O OO O OO O O O O O |
400000 +-+ |
350000 +-+ |
| |
300000 +-+ |
250000 +-+ |
200000 +-+.+ .|
|.+ :.+ ++.+.+.++.+. .+ .+. .+. +. .+. +. .+. +. .+. +.+.+.++.+ |
150000 +-+ + : : + + + + + + + + + + |
100000 +-+ : : |
| : : |
50000 +-+ : |
0 +-+----O-O---------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[module] 8651ec01da: BUG:kernel_hang_in_test_stage
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 8651ec01daedad26290f76beeb4736f9d2da4b87 ("module: add support for symbol namespaces.")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
in testcase: rcutorture
with following parameters:
runtime: 300s
test: default
torture_type: rcu
test-description: rcutorture is an rcutorture kernel module load/unload test.
test-url: https://www.kernel.org/doc/Documentation/RCU/torture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------+------------+------------+
| | ed13fc33f7 | 8651ec01da |
+-------------------------------+------------+------------+
| boot_successes | 49 | 0 |
| boot_failures | 0 | 40 |
| BUG:kernel_hang_in_test_stage | 0 | 40 |
+-------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 14.878132]
[ 14.914040] result_service=inn:/result, RESULT_MNT=/inn/result, RESULT_ROOT=/inn/result/rcutorture/300s-default-rcu/vm-snb-ssd-4G/aliyun-x86_64-2019-06-26.cgz/x86_64-fedora-25/gcc-7/8651ec01daedad26290f76beeb4736f9d2da4b87/3
[ 14.914043]
[ 14.925470] mount.nfs: try 1 time... mount.nfs -o vers=3 inn:/result /inn/result
[ 14.925472]
BUG: kernel hang in test stage
Elapsed time: 2740
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-7d4a4c5ef8d1-0 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-7d4a4c5ef8d1-1 256G
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc7-00003-g8651ec01daeda .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
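For reference, the load/unload cycle that this rcutorture job drives can also be exercised by hand. The commands below are an illustrative sketch only (they assume a kernel built with CONFIG_RCU_TORTURE_TEST=m and require root); the actual job-script automates the equivalent steps.

```shell
# Load the torture module with the job's parameters (torture_type=rcu),
# let it run for the configured runtime, then unload it.
modprobe rcutorture torture_type=rcu
sleep 300                      # runtime: 300s, as in the job parameters
modprobe -r rcutorture
# The module reports a SUCCESS/FAILURE summary to the kernel log on unload.
dmesg | grep -i 'rcu-torture'
```

A hang during the load/unload sequence is what the robot flags as BUG:kernel_hang_in_test_stage.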
Thanks,
lkp
[mm, memcg] 1e577f970f: will-it-scale.per_process_ops -7.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -7.2% regression of will-it-scale.per_process_ops due to commit:
commit: 1e577f970f66a53d429cbee37b36177c9712f488 ("mm, memcg: introduce memory.events.local")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
with following parameters:
nr_task: 50%
mode: process
test: page_fault2
ucode: 0x400001c
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through n parallel copies to see if the testcase will scale. It builds both a process-based and a thread-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition to that, the commit also has a significant impact on the following tests:
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -13.0% regression |
| test machine | 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=1T |
| | test=lru-shm |
| | ucode=0xb000036 |
+------------------+------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2-clear/process/50%/clear-ota-25590-x86_64-2018-10-18.cgz/lkp-csl-2sp4/page_fault2/will-it-scale/0x400001c
commit:
ec16545096 ("memcg, fsnotify: no oom-kill for remote memcg charging")
1e577f970f ("mm, memcg: introduce memory.events.local")
ec165450968b2629 1e577f970f66a53d429cbee37b3
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 -2% 4:4 perf-profile.calltrace.cycles-pp.error_entry
4:4 -1% 4:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
5:4 -1% 5:4 perf-profile.children.cycles-pp.error_entry
0:4 -0% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
208264 -7.2% 193178 will-it-scale.per_process_ops
9996705 -7.2% 9272609 will-it-scale.workload
1.88 ± 2% +0.2 2.07 mpstat.cpu.all.usr%
75.16 +1.9% 76.59 boot-time.boot
6771 +2.0% 6908 boot-time.idle
298.63 -1.5% 294.02 turbostat.PkgWatt
131.45 -3.8% 126.39 turbostat.RAMWatt
11773 ± 2% +5.8% 12459 ± 5% slabinfo.kmalloc-96.active_objs
11899 ± 2% +5.6% 12570 ± 5% slabinfo.kmalloc-96.num_objs
480.00 ± 16% -23.3% 368.00 ± 13% slabinfo.kmalloc-rcl-128.active_objs
480.00 ± 16% -23.3% 368.00 ± 13% slabinfo.kmalloc-rcl-128.num_objs
3.016e+09 -7.3% 2.797e+09 proc-vmstat.numa_hit
3.016e+09 -7.3% 2.797e+09 proc-vmstat.numa_local
17450 ± 5% -14.5% 14912 ± 6% proc-vmstat.pgactivate
3.017e+09 -7.2% 2.798e+09 proc-vmstat.pgalloc_normal
3.009e+09 -7.2% 2.791e+09 proc-vmstat.pgfault
3.016e+09 -7.2% 2.798e+09 proc-vmstat.pgfree
20008 ± 26% +37.0% 27407 ± 8% sched_debug.cfs_rq:/.exec_clock.min
189.33 +720.9% 1554 ±128% sched_debug.cfs_rq:/.load_avg.max
962257 ± 27% +37.1% 1318963 ± 10% sched_debug.cfs_rq:/.min_vruntime.min
4492285 ± 9% -24.4% 3396980 ± 14% sched_debug.cfs_rq:/.spread0.max
287345 ± 14% +45.1% 417049 ± 33% sched_debug.cpu.avg_idle.max
40048 ± 11% +40.1% 56118 ± 34% sched_debug.cpu.avg_idle.stddev
137993 ± 18% +54.7% 213511 ± 30% sched_debug.cpu.max_idle_balance_cost.max
16624 ± 20% +60.4% 26669 ± 39% sched_debug.cpu.max_idle_balance_cost.stddev
18665 ± 18% +26.8% 23665 ± 18% softirqs.CPU15.SCHED
24899 ± 11% -18.4% 20325 ± 7% softirqs.CPU21.SCHED
24857 ± 10% -20.5% 19766 ± 11% softirqs.CPU30.SCHED
14209 ± 38% +66.4% 23644 ± 13% softirqs.CPU34.SCHED
21668 ± 2% -18.5% 17663 ± 17% softirqs.CPU41.SCHED
15193 ± 16% +27.6% 19383 ± 7% softirqs.CPU69.SCHED
14814 ± 18% +37.7% 20393 ± 10% softirqs.CPU78.SCHED
25299 ± 7% -17.2% 20955 ± 12% softirqs.CPU8.SCHED
725.75 ± 5% -9.5% 657.00 interrupts.4:IO-APIC.4-edge.ttyS0
17.25 ± 42% +1407.2% 260.00 ±149% interrupts.CPU18.RES:Rescheduling_interrupts
6134 ± 38% -39.9% 3686 ± 37% interrupts.CPU25.NMI:Non-maskable_interrupts
6134 ± 38% -39.9% 3686 ± 37% interrupts.CPU25.PMI:Performance_monitoring_interrupts
52.75 ± 75% +367.8% 246.75 ± 68% interrupts.CPU27.RES:Rescheduling_interrupts
393.75 ± 68% -83.9% 63.25 ±121% interrupts.CPU3.RES:Rescheduling_interrupts
7887 ± 10% -54.3% 3605 ± 20% interrupts.CPU32.NMI:Non-maskable_interrupts
7887 ± 10% -54.3% 3605 ± 20% interrupts.CPU32.PMI:Performance_monitoring_interrupts
8647 -33.9% 5717 ± 42% interrupts.CPU34.NMI:Non-maskable_interrupts
8647 -33.9% 5717 ± 42% interrupts.CPU34.PMI:Performance_monitoring_interrupts
7125 ± 22% -49.7% 3581 ± 39% interrupts.CPU4.NMI:Non-maskable_interrupts
7125 ± 22% -49.7% 3581 ± 39% interrupts.CPU4.PMI:Performance_monitoring_interrupts
171.25 ±156% +366.0% 798.00 ±120% interrupts.CPU44.RES:Rescheduling_interrupts
7084 ± 16% -31.8% 4828 ± 28% interrupts.CPU53.NMI:Non-maskable_interrupts
7084 ± 16% -31.8% 4828 ± 28% interrupts.CPU53.PMI:Performance_monitoring_interrupts
6189 ± 41% -46.5% 3310 ± 48% interrupts.CPU54.NMI:Non-maskable_interrupts
6189 ± 41% -46.5% 3310 ± 48% interrupts.CPU54.PMI:Performance_monitoring_interrupts
5873 ± 27% -52.5% 2790 ± 62% interrupts.CPU62.NMI:Non-maskable_interrupts
5873 ± 27% -52.5% 2790 ± 62% interrupts.CPU62.PMI:Performance_monitoring_interrupts
13.25 ± 22% +1422.6% 201.75 ±136% interrupts.CPU62.RES:Rescheduling_interrupts
10.75 ± 22% +465.1% 60.75 ± 94% interrupts.CPU64.RES:Rescheduling_interrupts
345.50 ±120% -96.1% 13.50 ± 28% interrupts.CPU8.RES:Rescheduling_interrupts
32.00 ± 31% +262.5% 116.00 ±113% interrupts.CPU84.RES:Rescheduling_interrupts
19.75 ± 48% +941.8% 205.75 ±136% interrupts.CPU90.RES:Rescheduling_interrupts
119.50 ± 15% -24.1% 90.75 ± 12% interrupts.IWI:IRQ_work_interrupts
588624 ± 7% -17.4% 485913 ± 9% interrupts.NMI:Non-maskable_interrupts
588624 ± 7% -17.4% 485913 ± 9% interrupts.PMI:Performance_monitoring_interrupts
20329 ± 3% -11.0% 18098 ± 4% interrupts.RES:Rescheduling_interrupts
22.73 +2.2% 23.24 perf-stat.i.MPKI
9.332e+09 -5.7% 8.8e+09 perf-stat.i.branch-instructions
26885789 -5.7% 25357143 perf-stat.i.branch-misses
79.06 -1.7 77.37 perf-stat.i.cache-miss-rate%
8.564e+08 -6.0% 8.053e+08 perf-stat.i.cache-misses
1.083e+09 -3.9% 1.041e+09 perf-stat.i.cache-references
3.13 +6.4% 3.33 perf-stat.i.cpi
174.10 +6.3% 185.10 perf-stat.i.cycles-between-cache-misses
1.28e+10 -5.7% 1.206e+10 perf-stat.i.dTLB-loads
1.10 +0.1 1.16 perf-stat.i.dTLB-store-miss-rate%
6.836e+09 -6.9% 6.367e+09 perf-stat.i.dTLB-stores
22964025 -5.1% 21793736 perf-stat.i.iTLB-load-misses
4.766e+10 -6.0% 4.479e+10 perf-stat.i.instructions
0.32 -6.0% 0.30 perf-stat.i.ipc
10000057 -7.2% 9275971 perf-stat.i.minor-faults
1.42 +0.5 1.94 ± 4% perf-stat.i.node-load-miss-rate%
3780251 +25.3% 4737976 ± 4% perf-stat.i.node-load-misses
2.628e+08 -8.8% 2.395e+08 perf-stat.i.node-loads
7.34 +0.5 7.87 perf-stat.i.node-store-miss-rate%
59339717 -8.4% 54339533 perf-stat.i.node-stores
10000075 -7.2% 9275983 perf-stat.i.page-faults
22.73 +2.2% 23.23 perf-stat.overall.MPKI
79.05 -1.7 77.37 perf-stat.overall.cache-miss-rate%
3.13 +6.4% 3.33 perf-stat.overall.cpi
174.08 +6.3% 185.08 perf-stat.overall.cycles-between-cache-misses
1.10 +0.1 1.15 perf-stat.overall.dTLB-store-miss-rate%
2075 -1.0% 2055 perf-stat.overall.instructions-per-iTLB-miss
0.32 -6.0% 0.30 perf-stat.overall.ipc
1.42 +0.5 1.94 ± 4% perf-stat.overall.node-load-miss-rate%
7.34 +0.5 7.87 perf-stat.overall.node-store-miss-rate%
1419837 +1.2% 1437149 perf-stat.overall.path-length
9.3e+09 -5.7% 8.77e+09 perf-stat.ps.branch-instructions
26796456 -5.7% 25273598 perf-stat.ps.branch-misses
8.535e+08 -6.0% 8.025e+08 perf-stat.ps.cache-misses
1.08e+09 -3.9% 1.037e+09 perf-stat.ps.cache-references
1.275e+10 -5.7% 1.202e+10 perf-stat.ps.dTLB-loads
6.813e+09 -6.9% 6.345e+09 perf-stat.ps.dTLB-stores
22886580 -5.1% 21720014 perf-stat.ps.iTLB-load-misses
4.749e+10 -6.0% 4.464e+10 perf-stat.ps.instructions
9966243 -7.2% 9244557 perf-stat.ps.minor-faults
3767498 +25.3% 4722078 ± 4% perf-stat.ps.node-load-misses
2.619e+08 -8.8% 2.387e+08 perf-stat.ps.node-loads
59139171 -8.4% 54155638 perf-stat.ps.node-stores
9966269 -7.2% 9244575 perf-stat.ps.page-faults
1.419e+13 -6.1% 1.333e+13 perf-stat.total.instructions
8.87 ± 14% -3.1 5.78 ± 12% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
9.47 ± 13% -3.1 6.39 ± 12% perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
9.16 ± 13% -3.1 6.09 ± 12% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
6.63 ± 15% -3.0 3.63 ± 14% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
6.62 ± 15% -3.0 3.62 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
2.90 ± 8% -0.6 2.31 ± 6% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
2.67 ± 8% -0.6 2.10 ± 6% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range
2.14 ± 8% -0.6 1.57 ± 7% perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
2.03 ± 8% -0.6 1.47 ± 7% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.88 ± 8% -0.6 1.33 ± 6% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
1.77 ± 8% -0.6 1.21 ± 7% perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
1.42 ± 8% -0.5 0.87 ± 8% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.65 ± 9% -0.5 1.12 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
1.64 ± 8% -0.5 1.10 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
0.92 ± 13% +0.3 1.26 ± 10% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.mem_cgroup_try_charge.mem_cgroup_try_charge_delay.__handle_mm_fault.handle_mm_fault
0.77 ± 8% +0.6 1.39 ± 11% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
0.60 ± 7% +0.6 1.22 ± 11% perf-profile.calltrace.cycles-pp.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.alloc_set_pte.finish_fault.__handle_mm_fault
0.00 +1.0 1.00 ± 12% perf-profile.calltrace.cycles-pp.__mod_memcg_state.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.alloc_set_pte.finish_fault
0.00 +1.1 1.08 ± 10% perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
12.20 ± 6% +3.1 15.35 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte
12.27 ± 6% +3.2 15.42 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
14.03 ± 6% +3.3 17.35 ± 7% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
13.89 ± 6% +3.3 17.22 ± 7% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
11.46 ± 12% -3.7 7.74 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
8.96 ± 14% -3.1 5.86 ± 12% perf-profile.children.cycles-pp.get_page_from_freelist
9.27 ± 13% -3.1 6.18 ± 12% perf-profile.children.cycles-pp.__alloc_pages_nodemask
9.49 ± 13% -3.1 6.41 ± 11% perf-profile.children.cycles-pp.alloc_pages_vma
3.36 ± 8% -0.7 2.67 ± 7% perf-profile.children.cycles-pp.free_unref_page_list
3.08 ± 8% -0.7 2.40 ± 6% perf-profile.children.cycles-pp.free_pcppages_bulk
2.15 ± 8% -0.6 1.58 ± 7% perf-profile.children.cycles-pp.__do_fault
2.03 ± 8% -0.6 1.47 ± 7% perf-profile.children.cycles-pp.shmem_fault
1.77 ± 8% -0.6 1.22 ± 7% perf-profile.children.cycles-pp.find_lock_entry
1.89 ± 8% -0.6 1.34 ± 6% perf-profile.children.cycles-pp.shmem_getpage_gfp
1.42 ± 8% -0.5 0.88 ± 8% perf-profile.children.cycles-pp.find_get_entry
0.80 ± 9% +0.3 1.10 ± 10% perf-profile.children.cycles-pp.__mod_lruvec_state
0.93 ± 13% +0.3 1.27 ± 10% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.61 ± 7% +0.6 1.23 ± 11% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.78 ± 7% +0.6 1.39 ± 11% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.61 ± 9% +0.7 1.29 ± 9% perf-profile.children.cycles-pp.__count_memcg_events
0.90 ± 8% +0.9 1.80 ± 11% perf-profile.children.cycles-pp.__mod_memcg_state
14.03 ± 6% +3.3 17.36 ± 7% perf-profile.children.cycles-pp.__lru_cache_add
13.92 ± 6% +3.3 17.25 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.02 ± 8% -0.5 0.48 ± 9% perf-profile.self.cycles-pp.find_get_entry
0.35 ± 14% +0.1 0.46 ± 9% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.93 ± 13% +0.3 1.26 ± 10% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.59 ± 8% +0.7 1.28 ± 9% perf-profile.self.cycles-pp.__count_memcg_events
0.90 ± 9% +0.9 1.79 ± 11% perf-profile.self.cycles-pp.__mod_memcg_state
will-it-scale.per_process_ops
250000 +-+----------------------------------------------------------------+
| |
| .+.+. .+.+.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+..+.+.+.+..+ |
200000 O-O.O. O O O.O. O O O O O O O O O O O O O O O O O O O O O O O O
| : |
| : |
150000 +-+ |
|: |
100000 +-+ |
|: |
|: |
50000 +-+ |
| |
| |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ex2: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/1T/lkp-bdw-ex2/lru-shm/vm-scalability/0xb000036
commit:
ec16545096 ("memcg, fsnotify: no oom-kill for remote memcg charging")
1e577f970f ("mm, memcg: introduce memory.events.local")
ec165450968b2629 1e577f970f66a53d429cbee37b3
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 25% 5:4 perf-profile.calltrace.cycles-pp.error_entry
4:4 26% 5:4 perf-profile.children.cycles-pp.error_entry
3:4 21% 4:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.08 ± 2% -7.6% 0.07 vm-scalability.free_time
453831 -13.0% 394631 vm-scalability.median
87319393 -13.1% 75860153 vm-scalability.throughput
2126 ± 2% +12.0% 2381 vm-scalability.time.percent_of_cpu_this_job_got
4441 ± 2% +17.9% 5233 vm-scalability.time.system_time
54.78 ± 2% -4.4% 52.36 boot-time.boot
9222 -6.6% 8614 ± 2% boot-time.idle
11735511 +24.2% 14571272 ± 2% meminfo.Mapped
26619 +17.5% 31283 meminfo.PageTables
0.03 ± 43% -0.0 0.01 ±100% mpstat.cpu.all.iowait%
6.97 +1.8 8.75 mpstat.cpu.all.sys%
89.00 -2.2% 87.00 vmstat.cpu.id
19.25 ± 2% +22.1% 23.50 ± 2% vmstat.procs.r
4873613 ±100% -76.4% 1151031 ±154% cpuidle.C1.usage
6.044e+08 ±160% +219.4% 1.931e+09 ± 56% cpuidle.C1E.time
5543505 ±143% -94.1% 326343 ± 45% cpuidle.POLL.time
905525 ±135% -88.3% 106104 ± 5% cpuidle.POLL.usage
331.00 +18.1% 391.00 turbostat.Avg_MHz
14.01 +2.2 16.21 turbostat.Busy%
4872319 ±100% -76.4% 1149325 ±155% turbostat.C1
0.95 ±161% +2.2 3.17 ± 56% turbostat.C1E%
51.00 ± 3% +10.8% 56.50 ± 3% turbostat.PkgTmp
308.72 ± 4% +7.3% 331.33 turbostat.PkgWatt
2768727 +20.6% 3338396 ± 2% numa-meminfo.node0.Mapped
88169 ± 9% +15.3% 101648 ± 6% numa-meminfo.node1.KReclaimable
2742386 ± 2% +27.0% 3481981 numa-meminfo.node1.Mapped
5416 ± 4% +23.7% 6699 numa-meminfo.node1.PageTables
88169 ± 9% +15.3% 101648 ± 6% numa-meminfo.node1.SReclaimable
141351 ± 5% +10.0% 155426 ± 7% numa-meminfo.node1.Slab
2840934 ± 3% +24.1% 3524572 ± 2% numa-meminfo.node2.Mapped
998.50 ± 35% -59.2% 407.25 ±102% numa-meminfo.node3.Inactive(file)
3022322 ± 2% +24.2% 3754006 ± 3% numa-meminfo.node3.Mapped
701407 ± 3% +18.4% 830770 ± 2% numa-vmstat.node0.nr_mapped
694990 ± 2% +20.9% 840082 ± 3% numa-vmstat.node1.nr_mapped
1384 ± 4% +16.8% 1617 ± 3% numa-vmstat.node1.nr_page_table_pages
22041 ± 9% +15.4% 25433 ± 6% numa-vmstat.node1.nr_slab_reclaimable
717728 +22.7% 880970 ± 3% numa-vmstat.node2.nr_mapped
1561 ± 19% +28.7% 2009 ± 14% numa-vmstat.node2.nr_page_table_pages
249.25 ± 35% -59.3% 101.50 ±102% numa-vmstat.node3.nr_inactive_file
759778 ± 4% +22.1% 927375 ± 5% numa-vmstat.node3.nr_mapped
249.25 ± 35% -59.3% 101.50 ±102% numa-vmstat.node3.nr_zone_inactive_file
17748 +12.1% 19898 sched_debug.cfs_rq:/.exec_clock.avg
15366 ± 3% +13.8% 17486 sched_debug.cfs_rq:/.exec_clock.min
3136948 +14.6% 3595078 sched_debug.cfs_rq:/.min_vruntime.avg
3312023 +15.0% 3807455 sched_debug.cfs_rq:/.min_vruntime.max
2777689 ± 5% +12.9% 3135353 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
265759 ± 3% -17.0% 220576 ± 5% sched_debug.cpu.clock.avg
265767 ± 3% -17.0% 220585 ± 5% sched_debug.cpu.clock.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu.clock.min
265759 ± 3% -17.0% 220576 ± 5% sched_debug.cpu.clock_task.avg
265767 ± 3% -17.0% 220585 ± 5% sched_debug.cpu.clock_task.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu.clock_task.min
29771 ± 12% +33.9% 39850 ± 26% sched_debug.cpu.ttwu_count.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu_clk
260980 ± 3% -17.3% 215794 ± 5% sched_debug.ktime
269253 ± 3% -16.8% 224063 ± 4% sched_debug.sched_clk
59.50 ± 2% +75.2% 104.25 ± 71% proc-vmstat.nr_active_file
98.75 -9.1% 89.75 ± 2% proc-vmstat.nr_anon_transparent_hugepages
9468053 -2.4% 9236647 proc-vmstat.nr_dirty_background_threshold
18959420 -2.4% 18496065 proc-vmstat.nr_dirty_threshold
28182185 ± 2% +8.2% 30490955 proc-vmstat.nr_file_pages
95231947 -2.4% 92914596 proc-vmstat.nr_free_pages
27914900 ± 2% +8.3% 30222180 proc-vmstat.nr_inactive_anon
402.75 +6.0% 426.75 proc-vmstat.nr_inactive_file
26432 +1.3% 26786 proc-vmstat.nr_kernel_stack
2909616 +24.8% 3630796 proc-vmstat.nr_mapped
6604 +18.0% 7795 proc-vmstat.nr_page_table_pages
27924019 ± 2% +8.3% 30232857 proc-vmstat.nr_shmem
89458 +5.5% 94371 proc-vmstat.nr_slab_reclaimable
59.50 ± 2% +75.2% 104.25 ± 71% proc-vmstat.nr_zone_active_file
27914900 ± 2% +8.3% 30222179 proc-vmstat.nr_zone_inactive_anon
402.75 +6.0% 426.75 proc-vmstat.nr_zone_inactive_file
2289 ± 77% +684.3% 17957 ±112% proc-vmstat.numa_pages_migrated
2289 ± 77% +684.3% 17957 ±112% proc-vmstat.pgmigrate_success
1.207e+10 ± 2% +8.5% 1.31e+10 perf-stat.i.branch-instructions
29052163 +10.7% 32158881 perf-stat.i.cache-misses
3.44 ± 8% -21.9% 2.69 ± 6% perf-stat.i.cpi
6.185e+10 ± 2% +17.1% 7.246e+10 perf-stat.i.cpu-cycles
47.42 ± 2% +8.7% 51.55 ± 2% perf-stat.i.cpu-migrations
8412 ± 10% -67.8% 2711 ± 20% perf-stat.i.cycles-between-cache-misses
1.161e+10 +8.0% 1.255e+10 perf-stat.i.dTLB-loads
4.362e+10 ± 2% +8.9% 4.753e+10 perf-stat.i.instructions
1805696 +5.6% 1906421 perf-stat.i.minor-faults
1805087 +5.6% 1905647 perf-stat.i.page-faults
1.41 ± 2% +7.5% 1.52 perf-stat.overall.cpi
2130 ± 3% +6.0% 2257 ± 2% perf-stat.overall.cycles-between-cache-misses
0.71 ± 2% -7.0% 0.66 perf-stat.overall.ipc
61.18 +1.4 62.61 perf-stat.overall.node-load-miss-rate%
5377 +3.0% 5540 perf-stat.overall.path-length
1.231e+10 +9.7% 1.351e+10 perf-stat.ps.branch-instructions
29515966 +11.7% 32965638 perf-stat.ps.cache-misses
6.286e+10 ± 2% +18.3% 7.439e+10 perf-stat.ps.cpu-cycles
47.77 ± 2% +9.3% 52.19 ± 2% perf-stat.ps.cpu-migrations
1.183e+10 +9.2% 1.292e+10 perf-stat.ps.dTLB-loads
4.446e+10 +10.1% 4.895e+10 perf-stat.ps.instructions
1843881 +6.8% 1970028 perf-stat.ps.minor-faults
6078583 +3.6% 6295092 perf-stat.ps.node-stores
1843880 +6.8% 1970028 perf-stat.ps.page-faults
1.498e+13 +3.0% 1.543e+13 perf-stat.total.instructions
39017 ± 2% -9.0% 35489 ± 2% softirqs.CPU101.SCHED
38287 ± 2% -9.5% 34633 ± 4% softirqs.CPU11.SCHED
38567 ± 2% -8.5% 35301 ± 4% softirqs.CPU112.SCHED
38837 -9.7% 35054 ± 3% softirqs.CPU113.SCHED
37853 ± 2% -7.0% 35188 ± 4% softirqs.CPU119.SCHED
38469 ± 2% -9.4% 34857 ± 3% softirqs.CPU12.SCHED
38615 ± 2% -11.9% 34020 ± 5% softirqs.CPU13.SCHED
38757 ± 3% -9.4% 35129 ± 4% softirqs.CPU158.SCHED
38239 ± 2% -8.2% 35109 ± 3% softirqs.CPU16.SCHED
38539 ± 2% -8.9% 35118 ± 4% softirqs.CPU161.SCHED
38661 ± 3% -9.2% 35110 ± 4% softirqs.CPU163.SCHED
38424 ± 2% -7.9% 35396 ± 3% softirqs.CPU178.SCHED
39368 ± 2% -11.0% 35034 ± 3% softirqs.CPU19.SCHED
37789 ± 3% -9.3% 34285 ± 7% softirqs.CPU191.SCHED
38213 -7.9% 35191 ± 3% softirqs.CPU23.SCHED
38537 ± 2% -9.1% 35016 softirqs.CPU3.SCHED
38657 ± 3% -9.1% 35158 ± 4% softirqs.CPU62.SCHED
38418 ± 2% -8.7% 35081 ± 4% softirqs.CPU67.SCHED
38519 ± 2% -8.4% 35274 ± 3% softirqs.CPU69.SCHED
38754 -9.4% 35095 ± 4% softirqs.CPU71.SCHED
38386 ± 2% -8.9% 34983 ± 4% softirqs.CPU73.SCHED
37979 ± 3% -9.5% 34371 softirqs.CPU9.SCHED
38393 ± 2% -9.2% 34854 ± 2% softirqs.CPU90.SCHED
39251 ± 3% -8.8% 35779 ± 3% softirqs.CPU96.SCHED
14549 ± 3% -14.4% 12454 ± 2% softirqs.NET_RX
173.50 ± 6% -10.8% 154.75 interrupts.100:PCI-MSI.1572919-edge.eth0-TxRx-55
363.25 ± 59% -57.4% 154.75 interrupts.101:PCI-MSI.1572920-edge.eth0-TxRx-56
251.75 ± 33% -34.3% 165.50 ± 3% interrupts.46:PCI-MSI.1572865-edge.eth0-TxRx-1
245.50 ± 43% -35.6% 158.00 ± 2% interrupts.56:PCI-MSI.1572875-edge.eth0-TxRx-11
720167 -5.0% 684354 interrupts.CAL:Function_call_interrupts
251.75 ± 33% -34.3% 165.50 ± 3% interrupts.CPU1.46:PCI-MSI.1572865-edge.eth0-TxRx-1
137.50 ± 73% +11475.8% 15916 ±165% interrupts.CPU1.RES:Rescheduling_interrupts
34.25 ± 85% +4651.8% 1627 ±156% interrupts.CPU100.RES:Rescheduling_interrupts
2221 ± 44% +35.6% 3011 ± 6% interrupts.CPU103.NMI:Non-maskable_interrupts
2221 ± 44% +35.6% 3011 ± 6% interrupts.CPU103.PMI:Performance_monitoring_interrupts
2166 ± 43% +37.9% 2988 ± 4% interrupts.CPU107.NMI:Non-maskable_interrupts
2166 ± 43% +37.9% 2988 ± 4% interrupts.CPU107.PMI:Performance_monitoring_interrupts
2209 ± 43% +34.2% 2963 ± 4% interrupts.CPU108.NMI:Non-maskable_interrupts
2209 ± 43% +34.2% 2963 ± 4% interrupts.CPU108.PMI:Performance_monitoring_interrupts
245.50 ± 43% -35.6% 158.00 ± 2% interrupts.CPU11.56:PCI-MSI.1572875-edge.eth0-TxRx-11
46.25 ± 76% +7385.9% 3462 ±159% interrupts.CPU110.RES:Rescheduling_interrupts
359.75 ± 62% -72.8% 98.00 ±136% interrupts.CPU125.RES:Rescheduling_interrupts
391.25 ± 70% -93.2% 26.50 ± 99% interrupts.CPU130.RES:Rescheduling_interrupts
21.00 ± 39% +9888.1% 2097 ±165% interrupts.CPU133.RES:Rescheduling_interrupts
23.25 ± 62% +1324.7% 331.25 ±130% interrupts.CPU144.RES:Rescheduling_interrupts
1522 ± 56% +73.2% 2637 ± 25% interrupts.CPU150.NMI:Non-maskable_interrupts
1522 ± 56% +73.2% 2637 ± 25% interrupts.CPU150.PMI:Performance_monitoring_interrupts
1904 ± 53% +60.3% 3052 ± 2% interrupts.CPU154.NMI:Non-maskable_interrupts
1904 ± 53% +60.3% 3052 ± 2% interrupts.CPU154.PMI:Performance_monitoring_interrupts
32.25 ± 91% +12542.6% 4077 ±167% interrupts.CPU157.RES:Rescheduling_interrupts
15.25 ± 42% +386.9% 74.25 ±113% interrupts.CPU165.RES:Rescheduling_interrupts
243.00 ±128% -88.5% 28.00 ± 94% interrupts.CPU172.RES:Rescheduling_interrupts
1908 ± 51% +64.1% 3130 ± 3% interrupts.CPU173.NMI:Non-maskable_interrupts
1908 ± 51% +64.1% 3130 ± 3% interrupts.CPU173.PMI:Performance_monitoring_interrupts
2285 ± 43% +37.7% 3146 ± 4% interrupts.CPU180.NMI:Non-maskable_interrupts
2285 ± 43% +37.7% 3146 ± 4% interrupts.CPU180.PMI:Performance_monitoring_interrupts
1971 ± 52% +57.8% 3111 ± 5% interrupts.CPU181.NMI:Non-maskable_interrupts
1971 ± 52% +57.8% 3111 ± 5% interrupts.CPU181.PMI:Performance_monitoring_interrupts
2269 ± 48% +38.6% 3146 ± 3% interrupts.CPU184.NMI:Non-maskable_interrupts
2269 ± 48% +38.6% 3146 ± 3% interrupts.CPU184.PMI:Performance_monitoring_interrupts
2218 ± 50% +41.1% 3129 ± 4% interrupts.CPU187.NMI:Non-maskable_interrupts
2218 ± 50% +41.1% 3129 ± 4% interrupts.CPU187.PMI:Performance_monitoring_interrupts
2220 ± 41% +38.3% 3072 ± 6% interrupts.CPU20.NMI:Non-maskable_interrupts
2220 ± 41% +38.3% 3072 ± 6% interrupts.CPU20.PMI:Performance_monitoring_interrupts
2255 ± 43% +33.7% 3015 ± 4% interrupts.CPU23.NMI:Non-maskable_interrupts
2255 ± 43% +33.7% 3015 ± 4% interrupts.CPU23.PMI:Performance_monitoring_interrupts
358.00 ± 68% -81.3% 67.00 ± 76% interrupts.CPU24.RES:Rescheduling_interrupts
263.00 ± 87% -90.9% 24.00 ± 60% interrupts.CPU28.RES:Rescheduling_interrupts
1865 ± 48% +59.0% 2965 ± 3% interrupts.CPU3.NMI:Non-maskable_interrupts
1865 ± 48% +59.0% 2965 ± 3% interrupts.CPU3.PMI:Performance_monitoring_interrupts
273.00 ±102% -90.8% 25.25 ± 61% interrupts.CPU30.RES:Rescheduling_interrupts
25.75 ± 50% +292.2% 101.00 ± 81% interrupts.CPU4.RES:Rescheduling_interrupts
282.25 ± 80% -89.3% 30.25 ± 66% interrupts.CPU44.RES:Rescheduling_interrupts
20.00 ± 32% +623.8% 144.75 ± 73% interrupts.CPU52.RES:Rescheduling_interrupts
2255 ± 34% +31.8% 2972 ± 4% interrupts.CPU53.NMI:Non-maskable_interrupts
2255 ± 34% +31.8% 2972 ± 4% interrupts.CPU53.PMI:Performance_monitoring_interrupts
173.50 ± 6% -10.8% 154.75 interrupts.CPU55.100:PCI-MSI.1572919-edge.eth0-TxRx-55
363.25 ± 59% -57.4% 154.75 interrupts.CPU56.101:PCI-MSI.1572920-edge.eth0-TxRx-56
16.25 ± 6% +380.0% 78.00 ± 59% interrupts.CPU58.RES:Rescheduling_interrupts
2329 ± 38% +45.8% 3396 ± 18% interrupts.CPU60.NMI:Non-maskable_interrupts
2329 ± 38% +45.8% 3396 ± 18% interrupts.CPU60.PMI:Performance_monitoring_interrupts
29.00 ±107% +228.4% 95.25 ± 79% interrupts.CPU69.RES:Rescheduling_interrupts
2332 ± 41% +34.5% 3137 ± 4% interrupts.CPU72.NMI:Non-maskable_interrupts
2332 ± 41% +34.5% 3137 ± 4% interrupts.CPU72.PMI:Performance_monitoring_interrupts
2281 ± 44% +37.4% 3133 ± 3% interrupts.CPU76.NMI:Non-maskable_interrupts
2281 ± 44% +37.4% 3133 ± 3% interrupts.CPU76.PMI:Performance_monitoring_interrupts
2283 ± 44% +36.3% 3111 ± 3% interrupts.CPU77.NMI:Non-maskable_interrupts
2283 ± 44% +36.3% 3111 ± 3% interrupts.CPU77.PMI:Performance_monitoring_interrupts
1941 ± 51% +61.2% 3130 ± 4% interrupts.CPU78.NMI:Non-maskable_interrupts
1941 ± 51% +61.2% 3130 ± 4% interrupts.CPU78.PMI:Performance_monitoring_interrupts
2294 ± 44% +36.3% 3127 ± 4% interrupts.CPU80.NMI:Non-maskable_interrupts
2294 ± 44% +36.3% 3127 ± 4% interrupts.CPU80.PMI:Performance_monitoring_interrupts
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
40.00 ± 75% -22.3 17.74 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
36.97 ± 75% -20.6 16.33 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
36.71 ± 75% -20.6 16.15 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
30.82 ± 73% -17.2 13.62 ± 6% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.50 ± 89% -3.2 2.33 ± 5% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
4.83 ± 83% -2.7 2.14 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.29 ± 71% -1.2 1.09 ± 5% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.32 ± 73% -0.8 0.55 ± 6% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.44 ± 57% +0.4 0.89 perf-profile.calltrace.cycles-pp.__mod_memcg_state.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault
0.98 ± 58% +0.5 1.49 ± 3% perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
0.62 ± 57% +0.5 1.14 perf-profile.calltrace.cycles-pp.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.5 0.54 ± 5% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault
0.79 ± 57% +0.6 1.35 perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_lruvec_state.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add
1.41 ± 57% +0.6 2.04 ± 3% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.13 ±173% +0.6 0.76 perf-profile.calltrace.cycles-pp.__mod_lruvec_state.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
1.46 ± 57% +0.6 2.11 ± 3% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.35 ± 57% +0.7 2.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.19 ± 57% +2.0 5.21 perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
4.04 ± 57% +2.2 6.27 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
10.40 ± 57% +13.3 23.69 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
10.44 ± 57% +13.3 23.74 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
12.17 ± 57% +14.2 26.36 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
12.29 ± 57% +14.2 26.50 ± 3% perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
35.44 ± 57% +18.0 53.48 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
35.73 ± 57% +18.1 53.84 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
35.81 ± 57% +18.1 53.93 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
44.51 ± 57% +20.4 64.90 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
45.23 ± 57% +20.9 66.12 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
46.62 ± 57% +21.2 67.81 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
47.04 ± 57% +21.3 68.30 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
47.22 ± 57% +21.3 68.56 perf-profile.calltrace.cycles-pp.page_fault
40.29 ± 75% -22.4 17.87 ± 3% perf-profile.children.cycles-pp.do_idle
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.children.cycles-pp.start_secondary
37.19 ± 75% -20.8 16.42 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
37.20 ± 75% -20.8 16.43 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
31.02 ± 73% -17.3 13.71 ± 6% perf-profile.children.cycles-pp.intel_idle
5.91 ± 70% -2.9 3.02 ± 3% perf-profile.children.cycles-pp.apic_timer_interrupt
5.43 ± 69% -2.7 2.73 ± 3% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.children.cycles-pp.do_syscall_64
5.46 ± 14% -2.6 2.86 ± 21% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.32 ± 71% -1.2 1.11 ± 5% perf-profile.children.cycles-pp.menu_select
1.63 ± 58% -1.1 0.55 ± 11% perf-profile.children.cycles-pp.irq_exit
1.27 ± 55% -0.9 0.40 ± 14% perf-profile.children.cycles-pp.__softirqentry_text_start
1.34 ± 73% -0.8 0.56 ± 6% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.09 ± 70% -0.6 0.44 ± 11% perf-profile.children.cycles-pp.tick_nohz_next_event
0.71 ± 92% -0.4 0.26 ± 26% perf-profile.children.cycles-pp.truncate_inode_page
0.59 ± 40% -0.4 0.17 ± 28% perf-profile.children.cycles-pp.rebalance_domains
0.80 ± 52% -0.4 0.39 ± 12% perf-profile.children.cycles-pp.ktime_get
0.59 ± 91% -0.4 0.22 ± 28% perf-profile.children.cycles-pp.delete_from_page_cache
0.39 ± 43% -0.3 0.11 ± 20% perf-profile.children.cycles-pp.load_balance
0.42 ± 91% -0.3 0.15 ± 29% perf-profile.children.cycles-pp.__delete_from_page_cache
0.26 ± 42% -0.2 0.08 ± 20% perf-profile.children.cycles-pp.find_busiest_group
0.23 ± 42% -0.2 0.07 ± 23% perf-profile.children.cycles-pp.update_sd_lb_stats
0.23 ± 52% -0.2 0.07 ± 10% perf-profile.children.cycles-pp.run_rebalance_domains
0.22 ± 52% -0.2 0.07 ± 12% perf-profile.children.cycles-pp.update_blocked_averages
0.24 ± 62% -0.1 0.11 ± 13% perf-profile.children.cycles-pp.start_kernel
0.19 ± 56% -0.1 0.08 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.17 ± 57% -0.1 0.08 ± 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.50 ± 14% -0.1 0.42 ± 8% perf-profile.children.cycles-pp.xas_store
0.14 ± 5% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.ksys_read
0.14 ± 5% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.vfs_read
0.10 ± 11% -0.1 0.03 ±100% perf-profile.children.cycles-pp.smp_call_function_single
0.11 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.perf_read
0.06 ± 58% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.30 ± 58% +0.3 0.64 ± 6% perf-profile.children.cycles-pp.lock_page_memcg
0.46 ± 57% +0.4 0.91 perf-profile.children.cycles-pp.__count_memcg_events
0.99 ± 57% +0.5 1.50 ± 4% perf-profile.children.cycles-pp.page_add_file_rmap
0.63 ± 57% +0.5 1.15 perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.79 ± 57% +0.6 1.36 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
1.47 ± 57% +0.6 2.12 ± 3% perf-profile.children.cycles-pp.finish_fault
1.18 ± 54% +0.7 1.83 ± 3% perf-profile.children.cycles-pp.__mod_memcg_state
1.37 ± 57% +0.7 2.03 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
1.94 ± 57% +0.8 2.70 ± 2% perf-profile.children.cycles-pp.alloc_set_pte
1.65 ± 39% +1.1 2.79 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
3.19 ± 57% +2.0 5.22 perf-profile.children.cycles-pp.prepare_exit_to_usermode
4.04 ± 57% +2.2 6.27 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
12.66 ± 57% +13.1 25.75 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
10.69 ± 54% +13.2 23.88 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
12.19 ± 57% +14.2 26.39 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
12.30 ± 57% +14.2 26.52 ± 3% perf-profile.children.cycles-pp.__lru_cache_add
35.47 ± 57% +18.0 53.51 perf-profile.children.cycles-pp.shmem_getpage_gfp
35.74 ± 57% +18.1 53.84 perf-profile.children.cycles-pp.shmem_fault
35.81 ± 57% +18.1 53.93 perf-profile.children.cycles-pp.__do_fault
44.54 ± 57% +20.4 64.94 perf-profile.children.cycles-pp.__handle_mm_fault
45.27 ± 57% +20.9 66.17 perf-profile.children.cycles-pp.handle_mm_fault
46.65 ± 57% +21.2 67.84 perf-profile.children.cycles-pp.__do_page_fault
47.05 ± 57% +21.3 68.31 perf-profile.children.cycles-pp.do_page_fault
47.27 ± 57% +21.4 68.62 perf-profile.children.cycles-pp.page_fault
30.95 ± 73% -17.3 13.69 ± 6% perf-profile.self.cycles-pp.intel_idle
0.58 ± 39% -0.3 0.26 ± 25% perf-profile.self.cycles-pp.ktime_get
0.41 ± 93% -0.3 0.14 ± 29% perf-profile.self.cycles-pp.free_pcppages_bulk
0.19 ± 54% -0.1 0.08 ± 57% perf-profile.self.cycles-pp.tick_nohz_next_event
0.24 ± 54% -0.1 0.14 ± 10% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.15 ± 61% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.run_timer_softirq
0.34 ± 4% -0.1 0.27 ± 10% perf-profile.self.cycles-pp.release_pages
0.05 ± 58% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.20 ± 57% +0.1 0.29 perf-profile.self.cycles-pp.page_fault
0.30 ± 58% +0.3 0.63 ± 6% perf-profile.self.cycles-pp.lock_page_memcg
0.46 ± 57% +0.4 0.91 perf-profile.self.cycles-pp.__count_memcg_events
1.16 ± 54% +0.7 1.82 ± 3% perf-profile.self.cycles-pp.__mod_memcg_state
1.65 ± 39% +1.1 2.79 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
3.13 ± 57% +2.0 5.13 perf-profile.self.cycles-pp.prepare_exit_to_usermode
12.65 ± 57% +13.1 25.75 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[xfs] 610125ab1e: fsmark.app_overhead -71.2% improvement
by kernel test robot
Greetings,
FYI, we noticed a -71.2% improvement of fsmark.app_overhead due to commit:
commit: 610125ab1e4b1b48dcffe74d9d82b0606bf1b923 ("xfs: speed up directory bestfree block scanning")
https://kernel.googlesource.com/pub/scm/fs/xfs/xfs-linux.git xfs-5.4-merge
in testcase: fsmark
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with the following parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_32G
fs: xfs
filesize: 4K
test_size: 4G
sync_method: fsyncBeforeClose
nr_files_per_directory: 1fpd
cpufreq_governor: performance
ucode: 0x200005e
test-description: fs_mark is a file system benchmark for testing synchronous write workloads, such as mail server workloads.
test-url: https://sourceforge.net/projects/fsmark/
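For readers who want to approximate this workload outside the lkp harness, the job parameters above can be translated into a standalone fs_mark run. A minimal sketch, assuming the usual fs_mark flag meanings (-s bytes per file, -n file count, -N files per directory, -S sync method, -t threads) and a placeholder mount point; verify the mapping against `fs_mark --help` on your build:

```shell
# Derive the file count implied by the job parameters: test_size=4G, filesize=4K.
NR_FILES=$(( (4 * 1024 * 1024 * 1024) / (4 * 1024) ))
echo "$NR_FILES"

# A roughly equivalent standalone invocation might look like this (flag mapping
# is an assumption; /mnt/xfs is a placeholder for the 1BRD_32G xfs mount):
# fs_mark -d /mnt/xfs -s 4096 -n "$NR_FILES" -N 1 -S 1 -t 1
```

Here `-N 1` corresponds to nr_files_per_directory=1fpd, `-S 1` to sync_method=fsyncBeforeClose, and `-t 1` to nr_threads=1t.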
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
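The %change columns in the comparison tables below are plain relative deltas between the parent commit and the patched commit. A minimal sketch of that arithmetic, using the headline fsmark numbers from this report (the function name is illustrative, not part of lkp-tests):

```python
def pct_change(parent: float, patched: float) -> float:
    """Relative change of the patched commit vs. its parent, in percent."""
    return (patched - parent) / parent * 100.0

# Headline numbers from the fsmark comparison table in this report.
app_overhead = pct_change(1.095e08, 31557568)   # roughly -71.2%
files_per_sec = pct_change(6157, 12034)         # roughly +95.5%

print(round(app_overhead, 1), round(files_per_sec, 1))
```

The %stddev columns, by contrast, report run-to-run variation within each commit's samples; a large %stddev (flagged with ±) means the %change for that row should be read with caution.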
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-7/performance/1BRD_32G/4K/xfs/1x/x86_64-rhel-7.6/1fpd/1t/debian-x86_64-2019-05-14.cgz/fsyncBeforeClose/lkp-skl-2sp7/4G/fsmark/0x200005e
commit:
0e822255f9 ("xfs: factor free block index lookup from xfs_dir2_node_addname_int()")
610125ab1e ("xfs: speed up directory bestfree block scanning")
0e822255f95db400 610125ab1e4b1b48dcffe74d9d8
---------------- ---------------------------
%stddev %change %stddev
\ | \
1.095e+08 -71.2% 31557568 fsmark.app_overhead
6157 +95.5% 12034 fsmark.files_per_sec
167.31 -47.3% 88.25 fsmark.time.elapsed_time
167.31 -47.3% 88.25 fsmark.time.elapsed_time.max
91.00 -8.8% 83.00 fsmark.time.percent_of_cpu_this_job_got
148.15 -53.2% 69.38 fsmark.time.system_time
1458 +1.9% 1486 boot-time.idle
0.07 ± 3% +0.1 0.12 ± 4% mpstat.cpu.all.usr%
23038251 ± 10% -43.7% 12973669 turbostat.IRQ
1969578 ±152% -90.4% 189726 cpuidle.POLL.time
110709 ±105% -74.3% 28436 cpuidle.POLL.usage
124071 +87.1% 232161 vmstat.io.bo
4894717 -9.5% 4427627 vmstat.memory.cache
49434 +84.5% 91224 vmstat.system.cs
234459 ± 9% -24.4% 177214 meminfo.AnonHugePages
2320891 -12.7% 2027075 meminfo.Inactive
2303727 -12.8% 2009922 meminfo.Inactive(file)
1523980 -10.6% 1362149 meminfo.KReclaimable
1523980 -10.6% 1362149 meminfo.SReclaimable
2897760 -10.1% 2605999 meminfo.Slab
99131 +87.0% 185384 meminfo.max_used_kB
89186 -5.6% 84175 proc-vmstat.nr_active_anon
87992 -5.7% 82994 proc-vmstat.nr_anon_pages
114.25 ± 9% -24.7% 86.00 proc-vmstat.nr_anon_transparent_hugepages
848819 -8.8% 774504 proc-vmstat.nr_file_pages
576244 -12.9% 502074 proc-vmstat.nr_inactive_file
5661 -5.3% 5362 proc-vmstat.nr_shmem
380808 -10.6% 340328 proc-vmstat.nr_slab_reclaimable
343272 -9.5% 310812 proc-vmstat.nr_slab_unreclaimable
89186 -5.6% 84175 proc-vmstat.nr_zone_active_anon
576244 -12.9% 502074 proc-vmstat.nr_zone_inactive_file
3255223 -5.1% 3089747 proc-vmstat.numa_hit
3231815 -5.1% 3066299 proc-vmstat.numa_local
4209793 -4.3% 4027510 proc-vmstat.pgalloc_normal
459994 -43.9% 258220 proc-vmstat.pgfault
7.199e+08 +27.9% 9.21e+08 perf-stat.i.branch-instructions
6.44 ± 5% -4.7 1.71 ± 16% perf-stat.i.branch-miss-rate%
45296402 ± 5% -63.9% 16346177 ± 14% perf-stat.i.branch-misses
3285889 ± 17% +51.8% 4988808 ± 6% perf-stat.i.cache-misses
50440 +88.1% 94880 perf-stat.i.context-switches
1.67 ± 6% -24.0% 1.27 ± 5% perf-stat.i.cpi
12011 +90.8% 22912 perf-stat.i.cpu-migrations
2310 ± 15% -32.9% 1549 ± 6% perf-stat.i.cycles-between-cache-misses
9.371e+08 +23.3% 1.155e+09 perf-stat.i.dTLB-loads
5.335e+08 +15.0% 6.136e+08 perf-stat.i.dTLB-stores
2160962 ± 10% +55.5% 3360517 ± 3% perf-stat.i.iTLB-loads
3.307e+09 +34.0% 4.429e+09 perf-stat.i.instructions
0.61 ± 6% +32.3% 0.81 ± 5% perf-stat.i.ipc
2610 +3.8% 2709 perf-stat.i.minor-faults
63.65 ± 9% -14.6 49.01 ± 2% perf-stat.i.node-load-miss-rate%
363143 ± 10% +43.5% 521003 ± 5% perf-stat.i.node-load-misses
174677 ± 8% +81.8% 317608 ± 6% perf-stat.i.node-loads
23.59 ± 50% -13.6 9.98 ± 10% perf-stat.i.node-store-miss-rate%
314463 ± 14% +110.6% 662185 perf-stat.i.node-stores
2610 +3.8% 2709 perf-stat.i.page-faults
6.29 ± 5% -4.5 1.78 ± 14% perf-stat.overall.branch-miss-rate%
1.63 ± 6% -24.3% 1.24 ± 5% perf-stat.overall.cpi
1677 ± 12% -34.4% 1099 ± 3% perf-stat.overall.cycles-between-cache-misses
0.61 ± 6% +32.0% 0.81 ± 5% perf-stat.overall.ipc
7.157e+08 +27.2% 9.106e+08 perf-stat.ps.branch-instructions
45024667 ± 5% -64.1% 16162000 ± 14% perf-stat.ps.branch-misses
3266153 ± 17% +51.0% 4931578 ± 6% perf-stat.ps.cache-misses
50139 +87.1% 93800 perf-stat.ps.context-switches
11939 +89.7% 22651 perf-stat.ps.cpu-migrations
9.316e+08 +22.6% 1.142e+09 perf-stat.ps.dTLB-loads
5.303e+08 +14.4% 6.066e+08 perf-stat.ps.dTLB-stores
2148043 ± 10% +54.7% 3322297 ± 3% perf-stat.ps.iTLB-loads
3.287e+09 +33.2% 4.379e+09 perf-stat.ps.instructions
2595 +3.2% 2679 perf-stat.ps.minor-faults
360953 ± 10% +42.7% 515006 ± 5% perf-stat.ps.node-load-misses
173629 ± 8% +80.8% 313978 ± 6% perf-stat.ps.node-loads
312582 ± 14% +109.4% 654639 perf-stat.ps.node-stores
2595 +3.2% 2679 perf-stat.ps.page-faults
5.508e+11 -29.8% 3.868e+11 perf-stat.total.instructions
1225325 -12.0% 1078389 slabinfo.Acpi-Parse.active_objs
16786 -12.0% 14774 slabinfo.Acpi-Parse.active_slabs
1225458 -12.0% 1078525 slabinfo.Acpi-Parse.num_objs
16786 -12.0% 14774 slabinfo.Acpi-Parse.num_slabs
1207809 -14.3% 1034830 slabinfo.dentry.active_objs
28843 -13.7% 24877 slabinfo.dentry.active_slabs
1211435 -13.7% 1044864 slabinfo.dentry.num_objs
28843 -13.7% 24877 slabinfo.dentry.num_slabs
1174298 -10.9% 1046324 slabinfo.dmaengine-unmap-16.active_objs
28125 -10.3% 25227 slabinfo.dmaengine-unmap-16.active_slabs
1181287 -10.3% 1059571 slabinfo.dmaengine-unmap-16.num_objs
28125 -10.3% 25227 slabinfo.dmaengine-unmap-16.num_slabs
610992 -10.4% 547361 slabinfo.kmalloc-16.active_objs
2400 -9.9% 2163 slabinfo.kmalloc-16.active_slabs
614491 -9.8% 553990 slabinfo.kmalloc-16.num_objs
2400 -9.9% 2163 slabinfo.kmalloc-16.num_slabs
590336 -10.9% 526133 slabinfo.kmalloc-1k.active_objs
18557 -10.3% 16649 slabinfo.kmalloc-1k.active_slabs
593850 -10.3% 532789 slabinfo.kmalloc-1k.num_objs
18557 -10.3% 16649 slabinfo.kmalloc-1k.num_slabs
593092 -10.8% 528897 slabinfo.kmalloc-512.active_objs
18643 -10.2% 16736 slabinfo.kmalloc-512.active_slabs
596596 -10.2% 535564 slabinfo.kmalloc-512.num_objs
18643 -10.2% 16736 slabinfo.kmalloc-512.num_slabs
630847 -10.2% 566617 slabinfo.kmalloc-64.active_objs
9912 -9.6% 8957 slabinfo.kmalloc-64.active_slabs
634394 -9.6% 573316 slabinfo.kmalloc-64.num_objs
9912 -9.6% 8957 slabinfo.kmalloc-64.num_slabs
567224 -15.6% 478901 slabinfo.kmalloc-rcl-64.active_objs
8863 -15.6% 7483 slabinfo.kmalloc-rcl-64.active_slabs
567265 -15.6% 478992 slabinfo.kmalloc-rcl-64.num_objs
8863 -15.6% 7483 slabinfo.kmalloc-rcl-64.num_slabs
11286 -14.7% 9631 slabinfo.vmap_area.active_objs
11286 -14.6% 9633 slabinfo.vmap_area.num_objs
45177 -13.4% 39132 slabinfo.xfs_buf.active_objs
1075 -13.3% 931.50 slabinfo.xfs_buf.active_slabs
45178 -13.4% 39134 slabinfo.xfs_buf.num_objs
1075 -13.3% 931.50 slabinfo.xfs_buf.num_slabs
1173565 -10.9% 1045662 slabinfo.xfs_inode.active_objs
36891 -10.3% 33088 slabinfo.xfs_inode.active_slabs
1180533 -10.3% 1058834 slabinfo.xfs_inode.num_objs
36891 -10.3% 33088 slabinfo.xfs_inode.num_slabs
129.15 ±140% -100.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.avg
1120 ± 14% -100.0% 0.00 ±100% sched_debug.cfs_rq:/.exec_clock.avg
6891 ± 14% -100.0% 0.19 ±100% sched_debug.cfs_rq:/.exec_clock.max
5.15 ± 40% -100.0% 0.00 sched_debug.cfs_rq:/.exec_clock.min
1619 ± 10% -100.0% 0.02 ±100% sched_debug.cfs_rq:/.exec_clock.stddev
40258 ± 23% -91.8% 3286 ± 25% sched_debug.cfs_rq:/.load.avg
513659 ± 40% -93.9% 31226 ± 25% sched_debug.cfs_rq:/.load.max
114530 ± 25% -93.5% 7431 ± 15% sched_debug.cfs_rq:/.load.stddev
140.06 ± 16% -50.3% 69.56 ± 26% sched_debug.cfs_rq:/.load_avg.avg
2506 -59.4% 1017 sched_debug.cfs_rq:/.load_avg.max
513.22 ± 8% -57.1% 220.03 ± 16% sched_debug.cfs_rq:/.load_avg.stddev
129.15 ±140% -100.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.avg
5244 ± 11% -34.5% 3432 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
341.33 +197.3% 1014 sched_debug.cfs_rq:/.removed.load_avg.max
65.65 ± 23% +155.1% 167.45 ± 30% sched_debug.cfs_rq:/.removed.load_avg.stddev
15860 +193.6% 46568 sched_debug.cfs_rq:/.removed.runnable_sum.max
3033 ± 23% +153.2% 7682 ± 30% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
5.29 ± 44% +155.7% 13.53 ± 48% sched_debug.cfs_rq:/.removed.util_avg.avg
150.58 ± 22% +241.0% 513.50 sched_debug.cfs_rq:/.removed.util_avg.max
24.80 ± 33% +206.1% 75.89 ± 21% sched_debug.cfs_rq:/.removed.util_avg.stddev
19.51 ± 16% -87.8% 2.38 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.avg
258.08 ± 7% -93.3% 17.25 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.max
64.31 ± 10% -91.8% 5.27 ± 8% sched_debug.cfs_rq:/.runnable_load_avg.stddev
40193 ± 23% -93.0% 2808 ± 19% sched_debug.cfs_rq:/.runnable_weight.avg
512372 ± 40% -96.1% 19760 ± 6% sched_debug.cfs_rq:/.runnable_weight.max
114425 ± 25% -94.7% 6060 ± 6% sched_debug.cfs_rq:/.runnable_weight.stddev
5250 ± 11% -34.6% 3433 ± 8% sched_debug.cfs_rq:/.spread0.stddev
226.16 ± 7% +87.7% 424.41 ± 2% sched_debug.cfs_rq:/.util_avg.avg
306.13 ± 10% -13.4% 265.03 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
801745 ± 2% -25.2% 599562 ± 4% sched_debug.cpu.avg_idle.avg
89241 -67.0% 29493 sched_debug.cpu.clock.avg
89243 -66.9% 29496 sched_debug.cpu.clock.max
89238 -67.0% 29488 sched_debug.cpu.clock.min
1.40 ± 3% +21.3% 1.69 ± 6% sched_debug.cpu.clock.stddev
89241 -67.0% 29493 sched_debug.cpu.clock_task.avg
89243 -66.9% 29496 sched_debug.cpu.clock_task.max
89238 -67.0% 29488 sched_debug.cpu.clock_task.min
1.40 ± 3% +21.3% 1.69 ± 6% sched_debug.cpu.clock_task.stddev
3443 -46.8% 1831 sched_debug.cpu.curr->pid.max
0.00 ± 10% +35.7% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.10 ± 3% +76.5% 0.17 ± 25% sched_debug.cpu.nr_running.avg
0.27 ± 2% +36.2% 0.37 ± 10% sched_debug.cpu.nr_running.stddev
52142 -97.0% 1560 sched_debug.cpu.nr_switches.avg
342144 ± 24% -97.9% 7120 ± 10% sched_debug.cpu.nr_switches.max
877.83 ± 4% -31.0% 605.50 ± 7% sched_debug.cpu.nr_switches.min
82546 ± 14% -98.6% 1147 ± 7% sched_debug.cpu.nr_switches.stddev
50698 -100.0% 4.01 ±170% sched_debug.cpu.sched_count.avg
339992 ± 24% -100.0% 36.00 ±171% sched_debug.cpu.sched_count.max
82.50 ± 39% -100.0% 0.00 sched_debug.cpu.sched_count.min
82285 ± 14% -100.0% 11.33 ±171% sched_debug.cpu.sched_count.stddev
12702 -100.0% 2.07 ±168% sched_debug.cpu.sched_goidle.avg
79446 ± 18% -100.0% 18.50 ±170% sched_debug.cpu.sched_goidle.max
32.83 ± 29% -100.0% 0.00 sched_debug.cpu.sched_goidle.min
20121 ± 13% -100.0% 5.83 ±169% sched_debug.cpu.sched_goidle.stddev
31319 -100.0% 0.00 sched_debug.cpu.ttwu_count.avg
206185 ± 23% -100.0% 0.00 sched_debug.cpu.ttwu_count.max
56.17 ± 31% -100.0% 0.00 sched_debug.cpu.ttwu_count.min
51045 ± 14% -100.0% 0.00 sched_debug.cpu.ttwu_count.stddev
18916 -100.0% 0.00 sched_debug.cpu.ttwu_local.avg
131935 ± 25% -100.0% 0.00 sched_debug.cpu.ttwu_local.max
29.17 ± 32% -100.0% 0.00 sched_debug.cpu.ttwu_local.min
31305 ± 14% -100.0% 0.00 sched_debug.cpu.ttwu_local.stddev
89239 -67.0% 29491 sched_debug.cpu_clk
86525 -69.1% 26779 ± 2% sched_debug.ktime
89588 -66.7% 29845 sched_debug.sched_clk
130961 ± 2% -37.3% 82053 interrupts.CAL:Function_call_interrupts
1876 ± 2% -38.2% 1159 interrupts.CPU0.CAL:Function_call_interrupts
298448 ± 22% -40.3% 178093 interrupts.CPU0.LOC:Local_timer_interrupts
1861 ± 2% -37.6% 1161 interrupts.CPU1.CAL:Function_call_interrupts
297911 ± 22% -40.4% 177407 interrupts.CPU1.LOC:Local_timer_interrupts
1849 ± 2% -37.4% 1158 interrupts.CPU10.CAL:Function_call_interrupts
298345 ± 22% -40.4% 177854 interrupts.CPU10.LOC:Local_timer_interrupts
298329 ± 22% -40.3% 178050 interrupts.CPU11.LOC:Local_timer_interrupts
1847 ± 2% -39.3% 1120 ± 5% interrupts.CPU12.CAL:Function_call_interrupts
298206 ± 22% -40.4% 177830 interrupts.CPU12.LOC:Local_timer_interrupts
1865 ± 2% -38.0% 1156 interrupts.CPU13.CAL:Function_call_interrupts
298205 ± 22% -40.4% 177816 interrupts.CPU13.LOC:Local_timer_interrupts
1845 ± 2% -36.3% 1175 ± 3% interrupts.CPU14.CAL:Function_call_interrupts
298413 ± 22% -40.5% 177584 interrupts.CPU14.LOC:Local_timer_interrupts
1843 ± 2% -37.1% 1159 interrupts.CPU15.CAL:Function_call_interrupts
298297 ± 22% -40.5% 177602 interrupts.CPU15.LOC:Local_timer_interrupts
1847 ± 2% -47.6% 967.50 ± 34% interrupts.CPU16.CAL:Function_call_interrupts
298288 ± 22% -40.4% 177717 interrupts.CPU16.LOC:Local_timer_interrupts
1849 ± 2% -40.2% 1105 ± 8% interrupts.CPU17.CAL:Function_call_interrupts
298301 ± 22% -40.3% 177979 interrupts.CPU17.LOC:Local_timer_interrupts
1826 ± 2% -37.0% 1151 interrupts.CPU18.CAL:Function_call_interrupts
334281 -46.9% 177508 interrupts.CPU18.LOC:Local_timer_interrupts
1843 ± 2% -37.4% 1154 interrupts.CPU19.CAL:Function_call_interrupts
334646 -46.9% 177846 interrupts.CPU19.LOC:Local_timer_interrupts
1862 ± 2% -37.9% 1155 interrupts.CPU2.CAL:Function_call_interrupts
298202 ± 22% -40.4% 177729 interrupts.CPU2.LOC:Local_timer_interrupts
1842 ± 2% -37.4% 1154 interrupts.CPU20.CAL:Function_call_interrupts
334659 -46.9% 177869 interrupts.CPU20.LOC:Local_timer_interrupts
1839 ± 2% -37.0% 1158 interrupts.CPU21.CAL:Function_call_interrupts
334086 -46.8% 177808 interrupts.CPU21.LOC:Local_timer_interrupts
1841 ± 2% -39.4% 1115 ± 6% interrupts.CPU22.CAL:Function_call_interrupts
334799 -47.0% 177365 interrupts.CPU22.LOC:Local_timer_interrupts
334131 -47.1% 176872 interrupts.CPU23.LOC:Local_timer_interrupts
1836 ± 3% -37.2% 1153 interrupts.CPU24.CAL:Function_call_interrupts
334736 -47.0% 177319 interrupts.CPU24.LOC:Local_timer_interrupts
333998 -47.0% 177085 interrupts.CPU25.LOC:Local_timer_interrupts
1798 ± 6% -35.7% 1157 interrupts.CPU26.CAL:Function_call_interrupts
334028 -47.1% 176616 interrupts.CPU26.LOC:Local_timer_interrupts
1795 ± 6% -39.8% 1081 ± 11% interrupts.CPU27.CAL:Function_call_interrupts
334764 -46.8% 178012 interrupts.CPU27.LOC:Local_timer_interrupts
1838 ± 2% -38.1% 1138 interrupts.CPU28.CAL:Function_call_interrupts
333965 -46.7% 177894 interrupts.CPU28.LOC:Local_timer_interrupts
1838 ± 2% -43.1% 1046 ± 15% interrupts.CPU29.CAL:Function_call_interrupts
334696 -46.9% 177578 interrupts.CPU29.LOC:Local_timer_interrupts
1860 ± 2% -37.8% 1156 interrupts.CPU3.CAL:Function_call_interrupts
298245 ± 22% -40.5% 177558 interrupts.CPU3.LOC:Local_timer_interrupts
1838 ± 2% -46.1% 991.00 ± 22% interrupts.CPU30.CAL:Function_call_interrupts
334744 -47.0% 177459 interrupts.CPU30.LOC:Local_timer_interrupts
1837 ± 2% -38.5% 1129 ± 3% interrupts.CPU31.CAL:Function_call_interrupts
334691 -47.0% 177363 interrupts.CPU31.LOC:Local_timer_interrupts
1835 ± 2% -38.1% 1137 ± 3% interrupts.CPU32.CAL:Function_call_interrupts
334881 -47.0% 177343 interrupts.CPU32.LOC:Local_timer_interrupts
1835 ± 2% -38.3% 1132 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
334485 -46.8% 177826 interrupts.CPU33.LOC:Local_timer_interrupts
1839 ± 3% -39.2% 1118 ± 7% interrupts.CPU34.CAL:Function_call_interrupts
333942 -46.9% 177322 interrupts.CPU34.LOC:Local_timer_interrupts
1823 ± 2% -39.1% 1110 ± 6% interrupts.CPU35.CAL:Function_call_interrupts
334697 -47.1% 177091 interrupts.CPU35.LOC:Local_timer_interrupts
1834 ± 3% -37.3% 1150 interrupts.CPU36.CAL:Function_call_interrupts
298352 ± 22% -40.5% 177449 interrupts.CPU36.LOC:Local_timer_interrupts
1840 ± 3% -36.5% 1169 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
297803 ± 22% -40.3% 177920 interrupts.CPU37.LOC:Local_timer_interrupts
1840 ± 3% -37.4% 1152 interrupts.CPU38.CAL:Function_call_interrupts
298231 ± 22% -40.6% 177238 interrupts.CPU38.LOC:Local_timer_interrupts
1840 ± 3% -37.5% 1150 interrupts.CPU39.CAL:Function_call_interrupts
298199 ± 22% -40.3% 177897 interrupts.CPU39.LOC:Local_timer_interrupts
1856 ± 2% -36.7% 1175 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
298236 ± 22% -40.4% 177852 interrupts.CPU4.LOC:Local_timer_interrupts
1849 ± 2% -37.7% 1152 interrupts.CPU40.CAL:Function_call_interrupts
298299 ± 22% -40.4% 177931 interrupts.CPU40.LOC:Local_timer_interrupts
1834 ± 2% -37.2% 1151 interrupts.CPU41.CAL:Function_call_interrupts
298088 ± 22% -40.3% 177823 interrupts.CPU41.LOC:Local_timer_interrupts
1857 -38.0% 1152 interrupts.CPU42.CAL:Function_call_interrupts
298311 ± 22% -40.5% 177553 interrupts.CPU42.LOC:Local_timer_interrupts
1834 ± 2% -50.6% 906.25 ± 46% interrupts.CPU43.CAL:Function_call_interrupts
299772 ± 21% -40.7% 177842 interrupts.CPU43.LOC:Local_timer_interrupts
1826 ± 2% -36.7% 1156 interrupts.CPU44.CAL:Function_call_interrupts
298276 ± 22% -40.4% 177649 interrupts.CPU44.LOC:Local_timer_interrupts
1834 ± 2% -37.1% 1153 interrupts.CPU45.CAL:Function_call_interrupts
298009 ± 22% -40.8% 176523 interrupts.CPU45.LOC:Local_timer_interrupts
1798 ± 4% -35.8% 1154 interrupts.CPU46.CAL:Function_call_interrupts
298135 ± 22% -40.4% 177796 interrupts.CPU46.LOC:Local_timer_interrupts
1836 ± 2% -36.8% 1159 interrupts.CPU47.CAL:Function_call_interrupts
298390 ± 22% -40.5% 177610 interrupts.CPU47.LOC:Local_timer_interrupts
1834 ± 2% -35.9% 1176 ± 2% interrupts.CPU48.CAL:Function_call_interrupts
297983 ± 22% -40.3% 177906 interrupts.CPU48.LOC:Local_timer_interrupts
1834 ± 2% -36.8% 1160 interrupts.CPU49.CAL:Function_call_interrupts
297780 ± 22% -40.2% 177933 interrupts.CPU49.LOC:Local_timer_interrupts
1846 -37.1% 1161 interrupts.CPU5.CAL:Function_call_interrupts
298332 ± 22% -40.4% 177853 interrupts.CPU5.LOC:Local_timer_interrupts
298082 ± 22% -40.3% 177963 interrupts.CPU50.LOC:Local_timer_interrupts
1833 ± 2% -36.9% 1157 interrupts.CPU51.CAL:Function_call_interrupts
298197 ± 22% -40.4% 177784 interrupts.CPU51.LOC:Local_timer_interrupts
1829 ± 2% -36.8% 1157 interrupts.CPU52.CAL:Function_call_interrupts
297924 ± 22% -40.4% 177669 interrupts.CPU52.LOC:Local_timer_interrupts
1834 ± 2% -38.8% 1121 ± 6% interrupts.CPU53.CAL:Function_call_interrupts
298022 ± 22% -40.4% 177532 interrupts.CPU53.LOC:Local_timer_interrupts
1845 ± 3% -37.6% 1151 interrupts.CPU54.CAL:Function_call_interrupts
334653 -46.8% 177926 interrupts.CPU54.LOC:Local_timer_interrupts
1833 ± 2% -37.4% 1148 ± 2% interrupts.CPU55.CAL:Function_call_interrupts
334703 -47.0% 177550 interrupts.CPU55.LOC:Local_timer_interrupts
1830 ± 2% -37.1% 1151 interrupts.CPU56.CAL:Function_call_interrupts
334671 -46.9% 177807 interrupts.CPU56.LOC:Local_timer_interrupts
1829 ± 2% -36.8% 1155 interrupts.CPU57.CAL:Function_call_interrupts
334245 -46.9% 177398 interrupts.CPU57.LOC:Local_timer_interrupts
1847 ± 2% -37.6% 1152 interrupts.CPU58.CAL:Function_call_interrupts
334753 -47.0% 177354 interrupts.CPU58.LOC:Local_timer_interrupts
1830 ± 2% -37.0% 1153 interrupts.CPU59.CAL:Function_call_interrupts
334077 -46.9% 177294 interrupts.CPU59.LOC:Local_timer_interrupts
1860 ± 2% -37.9% 1155 interrupts.CPU6.CAL:Function_call_interrupts
298163 ± 22% -40.4% 177574 interrupts.CPU6.LOC:Local_timer_interrupts
1801 ± 6% -36.1% 1150 ± 2% interrupts.CPU60.CAL:Function_call_interrupts
334294 -47.0% 177312 interrupts.CPU60.LOC:Local_timer_interrupts
1830 ± 2% -36.9% 1154 interrupts.CPU61.CAL:Function_call_interrupts
333969 -46.8% 177800 interrupts.CPU61.LOC:Local_timer_interrupts
1855 -38.0% 1151 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
333877 -47.0% 176853 interrupts.CPU62.LOC:Local_timer_interrupts
1853 -38.1% 1148 ± 2% interrupts.CPU63.CAL:Function_call_interrupts
334786 -46.9% 177619 interrupts.CPU63.LOC:Local_timer_interrupts
1853 -37.8% 1152 interrupts.CPU64.CAL:Function_call_interrupts
333959 -46.8% 177811 interrupts.CPU64.LOC:Local_timer_interrupts
1816 ± 2% -36.6% 1150 ± 2% interrupts.CPU65.CAL:Function_call_interrupts
333969 -46.8% 177795 interrupts.CPU65.LOC:Local_timer_interrupts
1853 -37.9% 1150 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
333935 -46.9% 177389 interrupts.CPU66.LOC:Local_timer_interrupts
1854 -38.1% 1148 interrupts.CPU67.CAL:Function_call_interrupts
334692 -47.1% 176938 interrupts.CPU67.LOC:Local_timer_interrupts
1853 -38.0% 1148 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
334727 -46.9% 177891 interrupts.CPU68.LOC:Local_timer_interrupts
1854 -38.2% 1147 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
334693 -47.0% 177419 interrupts.CPU69.LOC:Local_timer_interrupts
1854 ± 2% -37.5% 1159 interrupts.CPU7.CAL:Function_call_interrupts
299761 ± 21% -40.7% 177868 interrupts.CPU7.LOC:Local_timer_interrupts
1854 -39.9% 1114 ± 3% interrupts.CPU70.CAL:Function_call_interrupts
334651 -47.1% 176865 interrupts.CPU70.LOC:Local_timer_interrupts
1831 -37.2% 1150 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
333963 -46.9% 177495 interrupts.CPU71.LOC:Local_timer_interrupts
1828 ± 3% -36.6% 1159 interrupts.CPU8.CAL:Function_call_interrupts
298354 ± 22% -40.5% 177494 interrupts.CPU8.LOC:Local_timer_interrupts
1851 ± 2% -37.6% 1155 interrupts.CPU9.CAL:Function_call_interrupts
298428 ± 22% -40.4% 177796 interrupts.CPU9.LOC:Local_timer_interrupts
22777195 ± 11% -43.9% 12786394 interrupts.LOC:Local_timer_interrupts
54209 ± 6% -43.7% 30528 ± 9% softirqs.CPU0.RCU
27877 ± 4% -38.6% 17118 ± 10% softirqs.CPU0.SCHED
58068 ± 14% -35.6% 37377 ± 23% softirqs.CPU0.TIMER
60711 ± 6% -46.9% 32237 ± 14% softirqs.CPU1.RCU
24357 ± 4% -43.1% 13854 ± 6% softirqs.CPU1.SCHED
59549 ± 3% -48.8% 30511 ± 8% softirqs.CPU10.RCU
67034 ± 22% -46.2% 36078 ± 24% softirqs.CPU10.TIMER
56388 ± 7% -45.1% 30950 ± 9% softirqs.CPU11.RCU
23837 ± 3% -48.7% 12228 ± 4% softirqs.CPU11.SCHED
58817 ± 15% -39.1% 35827 ± 25% softirqs.CPU11.TIMER
58286 ± 4% -46.7% 31043 ± 9% softirqs.CPU12.RCU
23918 ± 5% -44.2% 13336 ± 7% softirqs.CPU12.SCHED
65991 ± 21% -45.3% 36065 ± 24% softirqs.CPU12.TIMER
59965 ± 3% -46.4% 32167 ± 15% softirqs.CPU13.RCU
24565 ± 6% -45.9% 13294 ± 10% softirqs.CPU13.SCHED
59966 ± 11% -40.8% 35509 ± 25% softirqs.CPU13.TIMER
54831 ± 12% -46.7% 29222 ± 8% softirqs.CPU14.RCU
24140 ± 6% -48.3% 12490 ± 4% softirqs.CPU14.SCHED
57748 ± 11% -38.4% 35595 ± 25% softirqs.CPU14.TIMER
43504 ± 2% -50.7% 21441 ± 5% softirqs.CPU15.RCU
60025 ± 10% -40.2% 35911 ± 25% softirqs.CPU15.TIMER
43023 ± 2% -53.2% 20128 ± 30% softirqs.CPU16.RCU
24043 ± 6% -46.5% 12853 ± 6% softirqs.CPU16.SCHED
59514 ± 11% -40.3% 35545 ± 25% softirqs.CPU16.TIMER
43746 -48.6% 22492 ± 7% softirqs.CPU17.RCU
23561 ± 4% -45.2% 12915 ± 4% softirqs.CPU17.SCHED
63540 ± 3% -51.0% 31119 ± 4% softirqs.CPU18.RCU
24341 ± 7% -44.6% 13480 ± 4% softirqs.CPU18.SCHED
90114 -46.4% 48260 ± 3% softirqs.CPU18.TIMER
61842 ± 3% -53.1% 28999 ± 7% softirqs.CPU19.RCU
23742 -42.6% 13635 ± 2% softirqs.CPU19.SCHED
89467 -47.3% 47179 ± 9% softirqs.CPU19.TIMER
58870 ± 2% -47.2% 31087 ± 9% softirqs.CPU2.RCU
21315 ± 21% -38.7% 13075 ± 7% softirqs.CPU2.SCHED
60956 ± 13% -30.3% 42468 ± 22% softirqs.CPU2.TIMER
63022 ± 4% -51.7% 30411 ± 2% softirqs.CPU20.RCU
23176 ± 9% -42.5% 13324 ± 2% softirqs.CPU20.SCHED
89127 -45.0% 49004 ± 2% softirqs.CPU20.TIMER
59624 ± 12% -46.6% 31810 softirqs.CPU21.RCU
36968 ± 60% -63.2% 13586 ± 3% softirqs.CPU21.SCHED
88586 -45.0% 48717 ± 2% softirqs.CPU21.TIMER
61745 ± 4% -53.7% 28593 ± 11% softirqs.CPU22.RCU
24353 ± 3% -49.3% 12344 ± 12% softirqs.CPU22.SCHED
89098 -46.1% 48051 ± 4% softirqs.CPU22.TIMER
57573 ± 7% -49.3% 29174 ± 5% softirqs.CPU23.RCU
23875 -44.9% 13166 softirqs.CPU23.SCHED
88353 -49.0% 45088 ± 15% softirqs.CPU23.TIMER
59875 ± 8% -51.0% 29356 ± 9% softirqs.CPU24.RCU
25654 ± 6% -39.9% 15430 ± 40% softirqs.CPU24.SCHED
89079 -48.4% 45921 ± 13% softirqs.CPU24.TIMER
23800 ± 2% -44.9% 13111 ± 2% softirqs.CPU25.SCHED
88540 -46.0% 47779 ± 8% softirqs.CPU25.TIMER
24394 ± 2% -44.9% 13430 ± 3% softirqs.CPU26.SCHED
88406 -45.5% 48198 ± 3% softirqs.CPU26.TIMER
55400 ± 10% -44.2% 30899 ± 11% softirqs.CPU27.RCU
23649 ± 8% -41.7% 13794 ± 3% softirqs.CPU27.SCHED
88583 -48.4% 45702 ± 17% softirqs.CPU27.TIMER
61076 ± 5% -47.6% 31975 softirqs.CPU28.RCU
24801 ± 7% -44.4% 13790 ± 3% softirqs.CPU28.SCHED
88270 -46.1% 47538 ± 7% softirqs.CPU28.TIMER
62589 ± 5% -49.1% 31861 ± 12% softirqs.CPU29.RCU
22344 ± 15% -39.7% 13465 ± 9% softirqs.CPU29.SCHED
88571 -48.6% 45494 ± 13% softirqs.CPU29.TIMER
59158 ± 3% -51.4% 28739 ± 9% softirqs.CPU3.RCU
23256 ± 4% -43.6% 13120 ± 5% softirqs.CPU3.SCHED
51817 ± 9% -48.4% 26759 ± 5% softirqs.CPU30.RCU
24876 ± 4% -49.9% 12455 ± 14% softirqs.CPU30.SCHED
88641 -44.5% 49180 softirqs.CPU30.TIMER
53284 ± 3% -50.5% 26371 ± 4% softirqs.CPU31.RCU
25595 ± 11% -48.0% 13317 softirqs.CPU31.SCHED
88517 -45.4% 48371 ± 3% softirqs.CPU31.TIMER
50257 ± 2% -49.9% 25185 ± 8% softirqs.CPU32.RCU
24804 ± 3% -41.0% 14646 ± 17% softirqs.CPU32.SCHED
88909 -45.9% 48111 ± 16% softirqs.CPU32.TIMER
52557 ± 5% -51.6% 25429 ± 6% softirqs.CPU33.RCU
24982 ± 10% -46.9% 13269 softirqs.CPU33.SCHED
88651 -45.6% 48201 ± 4% softirqs.CPU33.TIMER
52083 ± 4% -48.5% 26825 ± 6% softirqs.CPU34.RCU
24250 ± 5% -44.1% 13555 ± 4% softirqs.CPU34.SCHED
88094 -47.5% 46212 ± 11% softirqs.CPU34.TIMER
54414 ± 4% -51.0% 26688 ± 3% softirqs.CPU35.RCU
23430 ± 19% -42.7% 13434 ± 3% softirqs.CPU35.SCHED
88413 -47.0% 46879 ± 8% softirqs.CPU35.TIMER
21444 ± 15% -42.9% 12251 ± 7% softirqs.CPU36.SCHED
63391 ± 26% -45.0% 34882 ± 26% softirqs.CPU36.TIMER
50652 ± 2% -46.2% 27253 ± 7% softirqs.CPU37.RCU
22122 ± 3% -43.2% 12572 ± 3% softirqs.CPU37.SCHED
56142 ± 10% -35.5% 36219 ± 23% softirqs.CPU37.TIMER
51707 ± 2% -47.8% 26966 ± 2% softirqs.CPU38.RCU
22642 ± 2% -45.7% 12291 ± 9% softirqs.CPU38.SCHED
57468 ± 8% -37.1% 36175 ± 23% softirqs.CPU38.TIMER
52008 ± 3% -48.3% 26892 ± 5% softirqs.CPU39.RCU
22626 ± 4% -46.4% 12119 ± 2% softirqs.CPU39.SCHED
56604 ± 6% -39.1% 34465 ± 22% softirqs.CPU39.TIMER
56855 ± 6% -44.6% 31487 ± 7% softirqs.CPU4.RCU
23920 ± 4% -44.0% 13407 ± 5% softirqs.CPU4.SCHED
58578 ± 13% -38.2% 36222 ± 23% softirqs.CPU4.TIMER
52364 ± 7% -49.7% 26331 ± 3% softirqs.CPU40.RCU
22321 ± 4% -43.5% 12605 ± 5% softirqs.CPU40.SCHED
58028 ± 4% -38.9% 35469 ± 25% softirqs.CPU40.TIMER
51016 ± 2% -49.6% 25717 softirqs.CPU41.RCU
21001 ± 10% -41.7% 12253 ± 4% softirqs.CPU41.SCHED
53330 ± 7% -33.4% 35502 ± 25% softirqs.CPU41.TIMER
50075 ± 5% -46.9% 26589 ± 5% softirqs.CPU42.RCU
21727 ± 6% -42.9% 12414 ± 3% softirqs.CPU42.SCHED
56367 ± 16% -37.6% 35152 ± 26% softirqs.CPU42.TIMER
51297 ± 5% -48.5% 26393 ± 5% softirqs.CPU43.RCU
21310 ± 9% -42.4% 12265 ± 4% softirqs.CPU43.SCHED
51204 ± 7% -49.4% 25897 ± 2% softirqs.CPU44.RCU
21921 ± 9% -45.1% 12032 ± 3% softirqs.CPU44.SCHED
57559 ± 7% -38.5% 35403 ± 24% softirqs.CPU44.TIMER
50926 ± 7% -47.1% 26927 ± 6% softirqs.CPU45.RCU
22361 ± 3% -51.2% 10908 ± 20% softirqs.CPU45.SCHED
52729 ± 7% -33.9% 34866 ± 27% softirqs.CPU45.TIMER
51134 ± 6% -46.7% 27231 ± 4% softirqs.CPU46.RCU
21394 ± 8% -45.2% 11717 ± 8% softirqs.CPU46.SCHED
55168 ± 6% -36.3% 35147 ± 25% softirqs.CPU46.TIMER
51129 ± 6% -47.5% 26866 ± 5% softirqs.CPU47.RCU
22067 ± 4% -46.8% 11731 ± 8% softirqs.CPU47.SCHED
56573 ± 15% -38.0% 35062 ± 25% softirqs.CPU47.TIMER
51671 ± 4% -47.6% 27065 ± 4% softirqs.CPU48.RCU
22484 ± 4% -47.1% 11888 ± 4% softirqs.CPU48.SCHED
58561 ± 12% -39.8% 35240 ± 24% softirqs.CPU48.TIMER
53232 ± 2% -55.6% 23630 ± 30% softirqs.CPU49.RCU
22228 ± 2% -51.1% 10873 ± 19% softirqs.CPU49.SCHED
57658 ± 8% -38.2% 35650 ± 25% softirqs.CPU49.TIMER
58872 ± 3% -49.8% 29578 ± 5% softirqs.CPU5.RCU
24052 ± 7% -45.7% 13054 ± 5% softirqs.CPU5.SCHED
55224 ± 11% -35.1% 35857 ± 24% softirqs.CPU5.TIMER
42482 ± 83% -70.7% 12428 ± 8% softirqs.CPU50.SCHED
70678 ± 24% -49.3% 35855 ± 24% softirqs.CPU50.TIMER
52835 ± 2% -48.9% 26979 ± 4% softirqs.CPU51.RCU
21980 ± 4% -43.6% 12388 ± 3% softirqs.CPU51.SCHED
56773 ± 7% -37.5% 35469 ± 24% softirqs.CPU51.TIMER
52464 ± 2% -48.9% 26812 ± 4% softirqs.CPU52.RCU
23131 ± 3% -46.9% 12275 ± 3% softirqs.CPU52.SCHED
55622 ± 9% -37.0% 35054 ± 26% softirqs.CPU52.TIMER
51387 ± 6% -49.1% 26168 ± 5% softirqs.CPU53.RCU
22182 ± 6% -43.4% 12559 ± 7% softirqs.CPU53.SCHED
58565 ± 12% -40.5% 34841 ± 27% softirqs.CPU53.TIMER
53529 ± 4% -48.7% 27453 ± 2% softirqs.CPU54.RCU
23768 -43.4% 13455 softirqs.CPU54.SCHED
88492 -41.7% 51611 ± 7% softirqs.CPU54.TIMER
53399 ± 2% -47.6% 27982 ± 10% softirqs.CPU55.RCU
23761 -43.0% 13535 ± 2% softirqs.CPU55.SCHED
88058 -47.3% 46403 ± 10% softirqs.CPU55.TIMER
54407 ± 4% -57.5% 23106 ± 29% softirqs.CPU56.RCU
23252 ± 3% -47.6% 12186 ± 11% softirqs.CPU56.SCHED
88127 -45.1% 48396 ± 2% softirqs.CPU56.TIMER
54038 ± 6% -50.8% 26611 ± 4% softirqs.CPU57.RCU
22815 ± 4% -45.4% 12447 ± 6% softirqs.CPU57.SCHED
88053 -45.1% 48354 ± 3% softirqs.CPU57.TIMER
52967 ± 6% -49.9% 26515 ± 3% softirqs.CPU58.RCU
22423 ± 10% -43.0% 12785 ± 4% softirqs.CPU58.SCHED
88072 -45.1% 48363 ± 6% softirqs.CPU58.TIMER
49189 ± 4% -50.5% 24366 ± 6% softirqs.CPU59.RCU
22559 ± 7% -43.1% 12843 ± 3% softirqs.CPU59.SCHED
87584 ± 2% -48.7% 44961 ± 16% softirqs.CPU59.TIMER
56427 ± 7% -46.3% 30325 ± 10% softirqs.CPU6.RCU
24292 ± 7% -45.0% 13362 ± 5% softirqs.CPU6.SCHED
58722 ± 15% -38.2% 36319 ± 24% softirqs.CPU6.TIMER
59290 -49.8% 29772 ± 7% softirqs.CPU60.RCU
23158 ± 4% -42.3% 13373 ± 2% softirqs.CPU60.SCHED
87956 -49.1% 44784 ± 17% softirqs.CPU60.TIMER
59842 ± 4% -50.6% 29551 ± 5% softirqs.CPU61.RCU
23617 -47.6% 12367 ± 13% softirqs.CPU61.SCHED
87914 -46.5% 47007 ± 8% softirqs.CPU61.TIMER
60511 ± 5% -50.8% 29757 ± 5% softirqs.CPU62.RCU
23759 ± 2% -46.5% 12719 ± 5% softirqs.CPU62.SCHED
87916 ± 2% -45.5% 47873 ± 2% softirqs.CPU62.TIMER
57371 -51.7% 27703 ± 7% softirqs.CPU63.RCU
23728 -44.7% 13115 softirqs.CPU63.SCHED
87929 -49.3% 44539 ± 18% softirqs.CPU63.TIMER
59716 ± 4% -50.0% 29879 ± 4% softirqs.CPU64.RCU
22868 ± 6% -41.9% 13290 softirqs.CPU64.SCHED
88069 -46.3% 47282 ± 6% softirqs.CPU64.TIMER
60190 ± 6% -51.4% 29224 ± 5% softirqs.CPU65.RCU
23928 -45.6% 13017 softirqs.CPU65.SCHED
87666 ± 2% -48.6% 45053 ± 15% softirqs.CPU65.TIMER
60185 ± 7% -50.5% 29798 ± 5% softirqs.CPU66.RCU
22217 ± 6% -41.9% 12898 ± 2% softirqs.CPU66.SCHED
87772 ± 2% -45.5% 47793 ± 2% softirqs.CPU66.TIMER
61622 ± 5% -50.6% 30452 ± 3% softirqs.CPU67.RCU
23487 ± 3% -45.1% 12901 softirqs.CPU67.SCHED
89315 ± 3% -46.8% 47559 ± 4% softirqs.CPU67.TIMER
55707 ± 4% -50.4% 27645 ± 6% softirqs.CPU68.RCU
23621 ± 2% -44.5% 13105 softirqs.CPU68.SCHED
88412 -48.8% 45264 ± 14% softirqs.CPU68.TIMER
58554 ± 7% -54.3% 26731 ± 9% softirqs.CPU69.RCU
88123 ± 2% -45.9% 47680 ± 4% softirqs.CPU69.TIMER
57962 ± 3% -47.2% 30609 ± 7% softirqs.CPU7.RCU
24363 ± 5% -45.3% 13330 ± 10% softirqs.CPU7.SCHED
60985 ± 14% -38.8% 37305 ± 22% softirqs.CPU7.TIMER
61598 ± 7% -52.5% 29242 ± 3% softirqs.CPU70.RCU
22981 ± 3% -43.8% 12911 ± 3% softirqs.CPU70.SCHED
89549 ± 4% -48.9% 45756 ± 11% softirqs.CPU70.TIMER
60772 ± 6% -52.2% 29062 ± 2% softirqs.CPU71.RCU
23554 ± 2% -44.3% 13126 softirqs.CPU71.SCHED
87724 -46.8% 46691 ± 8% softirqs.CPU71.TIMER
57043 ± 6% -46.7% 30377 ± 8% softirqs.CPU8.RCU
23768 ± 4% -45.2% 13016 ± 6% softirqs.CPU8.SCHED
61683 ± 11% -41.7% 35992 ± 24% softirqs.CPU8.TIMER
57296 ± 6% -48.2% 29661 ± 7% softirqs.CPU9.RCU
24029 ± 5% -48.1% 12479 ± 6% softirqs.CPU9.SCHED
56125 ± 13% -36.7% 35534 ± 24% softirqs.CPU9.TIMER
3967379 ± 3% -48.9% 2028493 ± 3% softirqs.RCU
1715302 ± 3% -44.6% 949923 ± 3% softirqs.SCHED
5295164 ± 2% -43.3% 3002602 ± 6% softirqs.TIMER
8.40 ± 3% -6.3 2.14 ± 4% perf-profile.calltrace.cycles-pp.xfs_dir2_node_addname.xfs_dir_createname.xfs_create.xfs_generic_create.vfs_mkdir
8.49 ± 3% -6.2 2.25 ± 4% perf-profile.calltrace.cycles-pp.xfs_dir_createname.xfs_create.xfs_generic_create.vfs_mkdir.do_mkdirat
14.51 ± 2% -4.7 9.76 ± 3% perf-profile.calltrace.cycles-pp.mkdir
12.04 ± 3% -4.6 7.46 ± 3% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.vfs_mkdir.do_mkdirat.do_syscall_64
12.16 ± 3% -4.6 7.60 ± 3% perf-profile.calltrace.cycles-pp.xfs_generic_create.vfs_mkdir.do_mkdirat.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.26 ± 3% -4.5 7.72 ± 3% perf-profile.calltrace.cycles-pp.vfs_mkdir.do_mkdirat.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
13.51 ± 3% -4.1 9.44 ± 3% perf-profile.calltrace.cycles-pp.do_mkdirat.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
13.57 ± 3% -4.0 9.54 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
13.57 ± 3% -4.0 9.54 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mkdir
0.95 +0.2 1.10 ± 5% perf-profile.calltrace.cycles-pp.xfs_trans_committed_bulk.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_callback.xlog_ioend_work
0.61 ± 8% +0.2 0.78 ± 9% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread
0.59 ± 9% +0.2 0.77 ± 10% perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.xlog_state_do_callback.xlog_ioend_work
0.59 ± 9% +0.2 0.77 ± 10% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.xlog_state_do_callback.xlog_ioend_work.process_one_work
1.14 ± 3% +0.2 1.34 ± 3% perf-profile.calltrace.cycles-pp.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_callback.xlog_ioend_work.process_one_work
0.58 ± 12% +0.2 0.78 ± 11% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
1.15 ± 3% +0.2 1.35 ± 3% perf-profile.calltrace.cycles-pp.xlog_cil_process_committed.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread
0.67 ± 6% +0.2 0.89 ± 4% perf-profile.calltrace.cycles-pp.xlog_write.xlog_cil_push.process_one_work.worker_thread.kthread
0.57 ± 2% +0.2 0.81 ± 6% perf-profile.calltrace.cycles-pp.xfs_da3_node_lookup_int.xfs_dir2_node_lookup.xfs_dir_lookup.xfs_lookup.xfs_vn_lookup
0.72 ± 6% +0.2 0.96 ± 4% perf-profile.calltrace.cycles-pp.xfs_dir2_node_lookup.xfs_dir_lookup.xfs_lookup.xfs_vn_lookup.__lookup_hash
0.56 ± 7% +0.3 0.82 ± 6% perf-profile.calltrace.cycles-pp.wake_up_page_bit.xfs_destroy_ioend.xfs_end_ioend.xfs_end_io.process_one_work
0.84 ± 3% +0.3 1.10 ± 6% perf-profile.calltrace.cycles-pp.xfs_dir_lookup.xfs_lookup.xfs_vn_lookup.__lookup_hash.filename_create
0.84 ± 3% +0.3 1.11 ± 6% perf-profile.calltrace.cycles-pp.xfs_lookup.xfs_vn_lookup.__lookup_hash.filename_create.do_mkdirat
0.86 ± 4% +0.3 1.14 ± 5% perf-profile.calltrace.cycles-pp.xfs_vn_lookup.__lookup_hash.filename_create.do_mkdirat.do_syscall_64
0.66 ± 5% +0.3 0.96 ± 5% perf-profile.calltrace.cycles-pp.xfs_destroy_ioend.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.80 ± 4% +0.3 1.11 ± 13% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_create.xfs_generic_create.path_openat
0.78 ± 13% +0.3 1.10 ± 11% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write
0.97 ± 2% +0.3 1.30 ± 7% perf-profile.calltrace.cycles-pp.__lookup_hash.filename_create.do_mkdirat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.78 ± 13% +0.3 1.11 ± 11% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
0.81 ± 15% +0.3 1.14 ± 7% perf-profile.calltrace.cycles-pp.brd_make_request.generic_make_request.submit_bio.xfs_submit_ioend.xfs_vm_writepages
0.42 ± 57% +0.3 0.76 ± 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.wake_up_page_bit.xfs_destroy_ioend
0.90 ± 11% +0.3 1.24 ± 9% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.84 ± 12% +0.3 1.19 ± 10% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.95 ± 12% +0.3 1.30 ± 8% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.42 ± 57% +0.4 0.77 ± 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.wake_up_page_bit.xfs_destroy_ioend.xfs_end_ioend
0.26 ±100% +0.4 0.61 ± 9% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.xfs_do_writepage.write_cache_pages
0.82 ± 5% +0.4 1.18 ± 14% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_create.xfs_generic_create.path_openat.do_filp_open
0.43 ± 57% +0.4 0.79 ± 7% perf-profile.calltrace.cycles-pp.__wake_up_common.wake_up_page_bit.xfs_destroy_ioend.xfs_end_ioend.xfs_end_io
0.96 ± 12% +0.4 1.33 ± 9% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.01 ± 7% +0.4 1.38 ± 3% perf-profile.calltrace.cycles-pp.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat.do_filp_open
1.03 ± 10% +0.4 1.40 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
1.01 ± 7% +0.4 1.38 ± 3% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat
0.86 ± 14% +0.4 1.24 ± 8% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages
0.67 ± 13% +0.4 1.04 ± 10% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent_near.xfs_alloc_ag_vextent.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate
1.03 ± 11% +0.4 1.40 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.86 ± 14% +0.4 1.24 ± 8% perf-profile.calltrace.cycles-pp.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
1.11 ± 10% +0.4 1.50 ± 8% perf-profile.calltrace.cycles-pp.write
1.81 ± 4% +0.4 2.20 perf-profile.calltrace.cycles-pp.xlog_state_do_callback.xlog_ioend_work.process_one_work.worker_thread.kthread
0.93 ± 12% +0.4 1.32 ± 8% perf-profile.calltrace.cycles-pp.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
0.71 ± 13% +0.4 1.11 ± 8% perf-profile.calltrace.cycles-pp.xfs_alloc_ag_vextent.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc
1.83 ± 4% +0.4 2.24 perf-profile.calltrace.cycles-pp.xlog_ioend_work.process_one_work.worker_thread.kthread.ret_from_fork
0.86 ± 7% +0.4 1.27 ± 5% perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
0.84 ± 6% +0.4 1.25 ± 5% perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
1.19 ± 2% +0.4 1.61 ± 7% perf-profile.calltrace.cycles-pp.filename_create.do_mkdirat.do_syscall_64.entry_SYSCALL_64_after_hwframe.mkdir
1.32 ± 3% +0.4 1.75 ± 6% perf-profile.calltrace.cycles-pp.xlog_cil_push.process_one_work.worker_thread.kthread.ret_from_fork
0.13 ±173% +0.5 0.58 ± 11% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_bmapi_convert_delalloc.xfs_map_blocks.xfs_do_writepage
0.28 ±100% +0.5 0.74 ± 7% perf-profile.calltrace.cycles-pp.xfs_da3_node_lookup_int.xfs_dir2_node_addname.xfs_dir_createname.xfs_create.xfs_generic_create
0.12 ±173% +0.5 0.60 ± 8% perf-profile.calltrace.cycles-pp.xlog_cil_force_lsn.xfs_log_force_lsn.xfs_file_fsync.do_fsync.__x64_sys_fsync
0.95 ± 12% +0.5 1.47 ± 5% perf-profile.calltrace.cycles-pp.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks
0.00 +0.5 0.53 ± 3% perf-profile.calltrace.cycles-pp.__queue_work.queue_work_on.xfs_end_bio.brd_make_request.generic_make_request
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.queue_work_on.xfs_end_bio.brd_make_request.generic_make_request.submit_bio
1.03 ± 12% +0.5 1.58 ± 4% perf-profile.calltrace.cycles-pp.xfs_bmap_btalloc.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.xfs_do_writepage
0.00 +0.6 0.55 ± 7% perf-profile.calltrace.cycles-pp.xfs_buf_item_unpin.xfs_trans_committed_bulk.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_callback
0.00 +0.6 0.55 ± 7% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.6 0.55 ± 4% perf-profile.calltrace.cycles-pp.xfs_end_bio.brd_make_request.generic_make_request.submit_bio.xfs_submit_ioend
1.14 ± 13% +0.6 1.71 ± 4% perf-profile.calltrace.cycles-pp.xfs_bmapi_allocate.xfs_bmapi_convert_delalloc.xfs_map_blocks.xfs_do_writepage.write_cache_pages
0.00 +0.6 0.58 ± 6% perf-profile.calltrace.cycles-pp.memcpy_erms.xlog_write.xlog_cil_push.process_one_work.worker_thread
0.00 +0.6 0.59 ± 4% perf-profile.calltrace.cycles-pp.xfs_buf_item_format.xfs_log_commit_cil.__xfs_trans_commit.xfs_create.xfs_generic_create
0.00 +0.6 0.65 ± 5% perf-profile.calltrace.cycles-pp.xfsaild.kthread.ret_from_fork
1.52 ± 3% +0.7 2.17 ± 5% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
1.64 ± 7% +0.7 2.36 ± 5% perf-profile.calltrace.cycles-pp.xfs_dir_ialloc.xfs_create.xfs_generic_create.vfs_mkdir.do_mkdirat
1.59 ± 6% +0.7 2.30 ± 6% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.vfs_mkdir
1.78 ± 8% +0.7 2.51 ± 3% perf-profile.calltrace.cycles-pp.xfs_map_blocks.xfs_do_writepage.write_cache_pages.xfs_vm_writepages.do_writepages
1.74 ± 8% +0.7 2.48 ± 3% perf-profile.calltrace.cycles-pp.xfs_bmapi_convert_delalloc.xfs_map_blocks.xfs_do_writepage.write_cache_pages.xfs_vm_writepages
1.96 ± 8% +0.8 2.72 ± 3% perf-profile.calltrace.cycles-pp.xfs_do_writepage.write_cache_pages.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
2.14 ± 8% +0.8 2.94 ± 3% perf-profile.calltrace.cycles-pp.write_cache_pages.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
1.58 +0.8 2.40 ± 4% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_create.xfs_generic_create.vfs_mkdir
1.60 +0.8 2.44 ± 4% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_create.xfs_generic_create.vfs_mkdir.do_mkdirat
1.93 ± 4% +0.8 2.78 ± 4% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create
2.09 ± 5% +0.9 2.95 ± 8% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.path_openat.do_filp_open.do_sys_open
2.18 ± 4% +0.9 3.06 ± 8% perf-profile.calltrace.cycles-pp.xfs_generic_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
2.88 ± 2% +1.2 4.03 ± 7% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
2.86 ± 2% +1.2 4.01 ± 7% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.97 ± 2% +1.2 4.14 ± 7% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.09 ± 9% +1.2 4.28 ± 4% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
3.09 ± 9% +1.2 4.29 ± 4% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.do_fsync
3.02 ± 2% +1.2 4.22 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.02 ± 2% +1.2 4.23 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
3.11 ± 9% +1.2 4.32 ± 4% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.do_fsync.__x64_sys_fsync
3.11 ± 2% +1.2 4.35 ± 6% perf-profile.calltrace.cycles-pp.open64
4.33 ± 3% +1.3 5.61 ± 2% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
4.79 ± 3% +1.4 6.15 ± 2% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
3.60 ± 8% +1.4 4.97 ± 4% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.do_fsync.__x64_sys_fsync.do_syscall_64
5.37 ± 3% +1.5 6.89 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork
5.37 ± 3% +1.5 6.89 ± 2% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
6.52 ± 6% +1.6 8.08 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_fsync.do_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.53 ± 6% +1.6 8.10 ± 4% perf-profile.calltrace.cycles-pp.do_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
6.54 ± 6% +1.6 8.11 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
6.60 ± 5% +1.6 8.20 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
6.60 ± 5% +1.6 8.21 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.fsync
7.02 ± 5% +1.7 8.75 ± 4% perf-profile.calltrace.cycles-pp.fsync
8.41 ± 3% -6.3 2.15 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_addname
8.66 ± 3% -6.2 2.51 ± 5% perf-profile.children.cycles-pp.xfs_dir_createname
14.51 ± 2% -4.7 9.77 ± 3% perf-profile.children.cycles-pp.mkdir
12.26 ± 3% -4.5 7.72 ± 3% perf-profile.children.cycles-pp.vfs_mkdir
13.51 ± 3% -4.1 9.44 ± 3% perf-profile.children.cycles-pp.do_mkdirat
14.13 ± 2% -3.7 10.42 ± 4% perf-profile.children.cycles-pp.xfs_create
14.35 ± 2% -3.7 10.66 ± 4% perf-profile.children.cycles-pp.xfs_generic_create
2.35 ± 3% -1.9 0.47 ± 7% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.07 ± 6% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.xfs_inobt_init_cursor
0.08 ± 8% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.__module_address
0.06 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.xlog_verify_iclog
0.05 ± 8% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.xfs_buf_iodone_callbacks
0.05 ± 8% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.xfs_buf_do_callbacks
0.08 ± 8% +0.0 0.11 ± 12% perf-profile.children.cycles-pp.put_prev_entity
0.09 ± 11% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.xfs_errortag_test
0.06 ± 11% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.xfs_trans_del_item
0.07 ± 7% +0.0 0.10 ± 11% perf-profile.children.cycles-pp.__module_text_address
0.04 ± 57% +0.0 0.07 ± 12% perf-profile.children.cycles-pp.xfs_ail_check
0.06 ± 7% +0.0 0.09 ± 16% perf-profile.children.cycles-pp.lookup_dcache
0.11 ± 15% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.__pagevec_release
0.06 ± 14% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.check_preempt_wakeup
0.06 ± 11% +0.0 0.09 ± 24% perf-profile.children.cycles-pp.down_write
0.06 ± 17% +0.0 0.10 ± 22% perf-profile.children.cycles-pp.test_clear_page_writeback
0.08 ± 11% +0.0 0.11 ± 19% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.24 ± 5% +0.0 0.27 ± 7% perf-profile.children.cycles-pp.rcu_idle_exit
0.10 ± 21% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.kmem_cache_free
0.04 ± 57% +0.0 0.07 ± 22% perf-profile.children.cycles-pp.xfs_iext_last
0.11 ± 9% +0.0 0.15 ± 8% perf-profile.children.cycles-pp.xfs_perag_put
0.08 ± 20% +0.0 0.11 ± 19% perf-profile.children.cycles-pp.lookup_fast
0.07 ± 7% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.is_module_text_address
0.16 ± 7% +0.0 0.20 ± 13% perf-profile.children.cycles-pp.orc_find
0.09 ± 17% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.walk_component
0.08 ± 14% +0.0 0.12 ± 25% perf-profile.children.cycles-pp.getname_flags
0.05 ± 60% +0.0 0.09 ± 17% perf-profile.children.cycles-pp.xlog_verify_dest_ptr
0.15 ± 10% +0.0 0.19 ± 12% perf-profile.children.cycles-pp.update_curr
0.11 ± 6% +0.0 0.15 ± 11% perf-profile.children.cycles-pp.kfree
0.13 ± 17% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.__slab_free
0.10 ± 15% +0.0 0.14 ± 7% perf-profile.children.cycles-pp.check_preempt_curr
0.10 ± 15% +0.0 0.14 ± 12% perf-profile.children.cycles-pp.__might_sleep
0.03 ±100% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.pick_next_entity
0.10 ± 18% +0.0 0.15 ± 12% perf-profile.children.cycles-pp.xlog_ticket_alloc
0.01 ±173% +0.0 0.06 ± 14% perf-profile.children.cycles-pp.xlog_grant_add_space
0.06 ± 16% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.up
0.07 ± 17% +0.0 0.11 ± 20% perf-profile.children.cycles-pp.__d_alloc
0.03 ±100% +0.0 0.07 ± 15% perf-profile.children.cycles-pp.xfs_inode_item_pin
0.12 ± 6% +0.0 0.17 ± 14% perf-profile.children.cycles-pp.xfs_trans_brelse
0.05 ± 61% +0.0 0.10 ± 19% perf-profile.children.cycles-pp.xfs_trans_buf_item_match
0.03 ±100% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.file_check_and_advance_wb_err
0.01 ±173% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.xfs_buf_item_log
0.03 ±100% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.read
0.12 ± 10% +0.0 0.17 ± 10% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.03 ±100% +0.0 0.08 ± 14% perf-profile.children.cycles-pp.xfs_iflush_done
0.15 ± 16% +0.0 0.20 ± 14% perf-profile.children.cycles-pp.vfprintf
0.20 ± 12% +0.0 0.25 ± 12% perf-profile.children.cycles-pp.dequeue_entity
0.07 ± 61% +0.1 0.12 ± 11% perf-profile.children.cycles-pp.generic_make_request_checks
0.01 ±173% +0.1 0.06 ± 26% perf-profile.children.cycles-pp.inode_permission
0.21 ± 5% +0.1 0.27 ± 10% perf-profile.children.cycles-pp.__flush_work
0.10 ± 10% +0.1 0.15 ± 12% perf-profile.children.cycles-pp.kernel_text_address
0.07 ± 59% +0.1 0.12 ± 14% perf-profile.children.cycles-pp.xfs_dialloc_ag_finobt_near
0.10 ± 36% +0.1 0.15 ± 14% perf-profile.children.cycles-pp.xfs_alloc_read_agf
0.03 ±100% +0.1 0.08 ± 19% perf-profile.children.cycles-pp.strncpy_from_user
0.11 ± 15% +0.1 0.17 ± 16% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.15 ± 12% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.xfs_btree_insert
0.01 ±173% +0.1 0.07 ± 12% perf-profile.children.cycles-pp.xfs_ialloc_ag_select
0.03 ±100% +0.1 0.08 ± 10% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.10 ± 8% +0.1 0.15 ± 19% perf-profile.children.cycles-pp.xfs_iunlock
0.12 ± 10% +0.1 0.18 ± 10% perf-profile.children.cycles-pp.xfs_btree_insrec
0.22 ± 20% +0.1 0.28 ± 13% perf-profile.children.cycles-pp.xfs_trans_reserve
0.14 ± 16% +0.1 0.19 ± 8% perf-profile.children.cycles-pp.xfs_buf_unlock
0.09 ± 10% +0.1 0.14 ± 30% perf-profile.children.cycles-pp.xfs_da_grow_inode_int
0.14 ± 8% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.14 ± 13% +0.1 0.20 ± 16% perf-profile.children.cycles-pp.xfs_fs_inode_init_once
0.14 ± 12% +0.1 0.20 ± 7% perf-profile.children.cycles-pp.xfs_dir2_sf_addname
0.01 ±173% +0.1 0.07 ± 24% perf-profile.children.cycles-pp.xfs_mod_fdblocks
0.11 ± 13% +0.1 0.17 ± 32% perf-profile.children.cycles-pp.xfs_bmapi_reserve_delalloc
0.01 ±173% +0.1 0.07 ± 31% perf-profile.children.cycles-pp.xfs_inode_verify_forks
0.22 ± 17% +0.1 0.28 ± 6% perf-profile.children.cycles-pp.set_next_entity
0.15 ± 11% +0.1 0.21 ± 3% perf-profile.children.cycles-pp._xfs_buf_ioapply
0.13 ± 9% +0.1 0.19 ± 5% perf-profile.children.cycles-pp.__xfs_dir3_free_read
0.11 ± 7% +0.1 0.18 ± 20% perf-profile.children.cycles-pp.d_alloc
0.00 +0.1 0.06 ± 26% perf-profile.children.cycles-pp.__sb_start_write
0.23 ± 9% +0.1 0.29 ± 6% perf-profile.children.cycles-pp.xfs_bmapi_read
0.14 ± 12% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.xfs_buf_offset
0.12 ± 14% +0.1 0.18 ± 20% perf-profile.children.cycles-pp.__kernel_text_address
0.19 ± 13% +0.1 0.25 ± 5% perf-profile.children.cycles-pp.xfs_next_bit
0.20 ± 9% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.__radix_tree_lookup
0.11 ± 19% +0.1 0.17 ± 12% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.13 ± 13% +0.1 0.20 ± 18% perf-profile.children.cycles-pp.unwind_get_return_address
0.10 ± 20% +0.1 0.16 ± 24% perf-profile.children.cycles-pp._xfs_buf_obj_cmp
0.01 ±173% +0.1 0.08 ± 14% perf-profile.children.cycles-pp.__x64_sys_close
0.08 ± 10% +0.1 0.14 ± 7% perf-profile.children.cycles-pp.xfs_inobt_key_diff
0.18 ± 12% +0.1 0.24 ± 13% perf-profile.children.cycles-pp.__orc_find
0.22 ± 7% +0.1 0.28 ± 7% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.18 ± 15% +0.1 0.25 ± 13% perf-profile.children.cycles-pp.xfs_buf_item_release
0.48 ± 3% +0.1 0.55 ± 7% perf-profile.children.cycles-pp.xfs_buf_item_unpin
0.10 ± 19% +0.1 0.18 ± 6% perf-profile.children.cycles-pp.__unwind_start
0.16 ± 7% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.__xfs_buf_submit
0.03 ±100% +0.1 0.10 ± 50% perf-profile.children.cycles-pp.xfs_bmapi_write
0.27 ± 8% +0.1 0.34 ± 6% perf-profile.children.cycles-pp.xfs_dabuf_map
0.19 ± 5% +0.1 0.27 ± 11% perf-profile.children.cycles-pp.filename_parentat
0.19 ± 13% +0.1 0.27 ± 4% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_addname
0.19 ± 7% +0.1 0.26 ± 13% perf-profile.children.cycles-pp.pagecache_get_page
0.13 ± 12% +0.1 0.21 ± 12% perf-profile.children.cycles-pp.xfs_btree_get_rec
0.17 ± 11% +0.1 0.24 ± 11% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.14 ± 5% +0.1 0.22 ± 16% perf-profile.children.cycles-pp.down_trylock
0.16 ± 7% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.xfs_buf_delwri_submit_buffers
0.01 ±173% +0.1 0.09 ± 59% perf-profile.children.cycles-pp.xfs_bmap_add_extent_hole_real
0.17 ± 6% +0.1 0.25 ± 11% perf-profile.children.cycles-pp.path_parentat
0.19 ± 7% +0.1 0.27 ± 12% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.17 ± 12% +0.1 0.24 ± 10% perf-profile.children.cycles-pp.xfs_log_reserve
0.20 ± 4% +0.1 0.27 ± 8% perf-profile.children.cycles-pp.xfs_iflush_int
0.32 ± 11% +0.1 0.40 ± 11% perf-profile.children.cycles-pp.update_load_avg
0.16 ± 15% +0.1 0.24 ± 16% perf-profile.children.cycles-pp.xfs_read_agi
0.15 ± 21% +0.1 0.23 ± 9% perf-profile.children.cycles-pp.xfs_ialloc_ag_alloc
0.15 ± 26% +0.1 0.24 ± 9% perf-profile.children.cycles-pp.__xfs_btree_check_sblock
0.15 ± 13% +0.1 0.24 ± 12% perf-profile.children.cycles-pp.copyin
0.16 ± 11% +0.1 0.24 ± 19% perf-profile.children.cycles-pp.xfs_inobt_get_rec
0.14 ± 22% +0.1 0.22 ± 12% perf-profile.children.cycles-pp.security_inode_permission
0.13 ± 18% +0.1 0.21 ± 13% perf-profile.children.cycles-pp.selinux_inode_permission
0.01 ±173% +0.1 0.10 ± 45% perf-profile.children.cycles-pp.xfs_dir2_leafn_split
0.16 ± 12% +0.1 0.24 ± 5% perf-profile.children.cycles-pp.xfs_inobt_update
0.17 ± 14% +0.1 0.26 ± 10% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
0.01 ±173% +0.1 0.10 ± 43% perf-profile.children.cycles-pp.xfs_da3_split
0.17 ± 15% +0.1 0.25 ± 22% perf-profile.children.cycles-pp.xfs_file_iomap_begin_delay
0.15 ± 14% +0.1 0.24 ± 12% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +0.1 0.09 ± 48% perf-profile.children.cycles-pp.xfs_da_grow_inode
0.15 ± 4% +0.1 0.24 ± 12% perf-profile.children.cycles-pp.xfs_buf_trylock
0.13 ± 15% +0.1 0.22 ± 6% perf-profile.children.cycles-pp.xfs_buf_item_init
0.28 ± 3% +0.1 0.37 ± 9% perf-profile.children.cycles-pp.xfs_inode_alloc
0.20 ± 9% +0.1 0.29 ± 11% perf-profile.children.cycles-pp.__vsprintf_chk
0.25 ± 15% +0.1 0.35 ± 4% perf-profile.children.cycles-pp.xfs_buf_item_pin
0.21 ± 20% +0.1 0.31 ± 6% perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.18 ± 20% +0.1 0.27 ± 16% perf-profile.children.cycles-pp.xfs_file_iomap_begin
0.29 ± 11% +0.1 0.38 ± 5% perf-profile.children.cycles-pp.xfs_buf_item_size_segment
0.19 ± 14% +0.1 0.29 ± 16% perf-profile.children.cycles-pp.xfs_ialloc_read_agi
0.22 ± 8% +0.1 0.32 ± 22% perf-profile.children.cycles-pp.get_page_from_freelist
0.30 ± 14% +0.1 0.40 ± 4% perf-profile.children.cycles-pp.xfs_trans_alloc
0.23 ± 4% +0.1 0.33 ± 6% perf-profile.children.cycles-pp.xfs_iflush_cluster
0.29 ± 2% +0.1 0.40 ± 14% perf-profile.children.cycles-pp.io_schedule
0.24 ± 4% +0.1 0.35 ± 19% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.35 ± 2% +0.1 0.47 ± 15% perf-profile.children.cycles-pp.wait_on_page_bit
0.25 ± 5% +0.1 0.36 ± 7% perf-profile.children.cycles-pp.xfs_iflush
0.18 ± 16% +0.1 0.29 ± 23% perf-profile.children.cycles-pp.xfs_inode_item_format
0.15 ± 12% +0.1 0.26 ± 30% perf-profile.children.cycles-pp.memset_erms
0.33 ± 9% +0.1 0.45 ± 7% perf-profile.children.cycles-pp.xfs_buf_item_size
0.43 ± 11% +0.1 0.55 ± 6% perf-profile.children.cycles-pp.schedule_idle
0.18 ± 4% +0.1 0.30 ± 5% perf-profile.children.cycles-pp.xfs_btree_update
0.23 ± 24% +0.1 0.35 ± 10% perf-profile.children.cycles-pp.xfs_alloc_fix_freelist
0.23 ± 9% +0.1 0.35 ± 13% perf-profile.children.cycles-pp.__list_del_entry_valid
0.20 ± 6% +0.1 0.33 ± 15% perf-profile.children.cycles-pp.iomap_write_begin
0.30 ± 14% +0.1 0.42 ± 18% perf-profile.children.cycles-pp.__percpu_counter_sum
0.18 ± 8% +0.1 0.31 ± 12% perf-profile.children.cycles-pp.xfs_perag_get
0.28 ± 2% +0.1 0.41 ± 13% perf-profile.children.cycles-pp.new_slab
0.27 ± 4% +0.1 0.40 ± 7% perf-profile.children.cycles-pp.xfs_inode_item_push
0.22 ± 12% +0.1 0.35 ± 9% perf-profile.children.cycles-pp.link_path_walk
0.47 ± 6% +0.1 0.60 ± 8% perf-profile.children.cycles-pp.xlog_cil_force_lsn
0.43 ± 3% +0.1 0.57 ± 14% perf-profile.children.cycles-pp.__filemap_fdatawait_range
0.36 ± 9% +0.1 0.50 ± 11% perf-profile.children.cycles-pp.brd_insert_page
0.31 ± 12% +0.1 0.45 ± 18% perf-profile.children.cycles-pp.xfs_mod_ifree
0.33 ± 14% +0.1 0.47 ± 3% perf-profile.children.cycles-pp.xfs_cil_prepare_item
0.32 ± 16% +0.1 0.46 ± 18% perf-profile.children.cycles-pp.__percpu_counter_compare
0.48 ± 8% +0.1 0.62 ± 9% perf-profile.children.cycles-pp.xfs_iget
0.95 +0.2 1.10 ± 5% perf-profile.children.cycles-pp.xfs_trans_committed_bulk
0.32 ± 15% +0.2 0.47 ± 16% perf-profile.children.cycles-pp.xfs_trans_unreserve_and_mod_sb
0.37 ± 4% +0.2 0.53 ± 14% perf-profile.children.cycles-pp.___slab_alloc
0.39 ± 8% +0.2 0.55 ± 8% perf-profile.children.cycles-pp.close
0.41 ± 2% +0.2 0.57 ± 5% perf-profile.children.cycles-pp.xfs_dir3_leaf_check_int
0.32 ± 2% +0.2 0.49 ± 14% perf-profile.children.cycles-pp.kmem_alloc_large
0.38 ± 3% +0.2 0.55 ± 14% perf-profile.children.cycles-pp.__slab_alloc
0.39 ± 15% +0.2 0.57 ± 5% perf-profile.children.cycles-pp.xfs_alloc_fixup_trees
0.37 ± 20% +0.2 0.55 ± 4% perf-profile.children.cycles-pp.xfs_end_bio
0.49 ± 11% +0.2 0.67 ± 8% perf-profile.children.cycles-pp.unwind_next_frame
0.47 ± 2% +0.2 0.66 ± 8% perf-profile.children.cycles-pp.xfs_check_agi_freecount
0.34 ± 11% +0.2 0.54 ± 11% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.41 ± 5% +0.2 0.60 ± 5% perf-profile.children.cycles-pp.__kmalloc
0.45 ± 3% +0.2 0.65 ± 5% perf-profile.children.cycles-pp.xfsaild
1.15 ± 3% +0.2 1.35 ± 3% perf-profile.children.cycles-pp.xlog_cil_committed
0.58 ± 12% +0.2 0.78 ± 11% perf-profile.children.cycles-pp.iomap_write_actor
1.15 ± 3% +0.2 1.35 ± 3% perf-profile.children.cycles-pp.xlog_cil_process_committed
0.87 ± 5% +0.2 1.09 ± 5% perf-profile.children.cycles-pp.__wake_up_common_lock
1.10 ± 5% +0.2 1.32 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.43 ± 5% +0.2 0.65 ± 4% perf-profile.children.cycles-pp.kmem_alloc
0.48 ± 6% +0.2 0.72 ± 12% perf-profile.children.cycles-pp.kmem_zone_alloc
0.55 ± 7% +0.2 0.79 ± 7% perf-profile.children.cycles-pp.autoremove_wake_function
1.57 ± 2% +0.2 1.81 ± 5% perf-profile.children.cycles-pp.__sched_text_start
0.72 ± 6% +0.2 0.96 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_lookup
0.69 ± 7% +0.2 0.94 ± 4% perf-profile.children.cycles-pp.xlog_write
0.57 ± 8% +0.3 0.83 ± 7% perf-profile.children.cycles-pp.wake_up_page_bit
0.57 ± 4% +0.3 0.83 ± 9% perf-profile.children.cycles-pp.xfs_da3_node_read
0.56 ± 2% +0.3 0.83 ± 12% perf-profile.children.cycles-pp.kmem_cache_alloc
0.63 ± 8% +0.3 0.90 ± 2% perf-profile.children.cycles-pp.xfs_dialloc_ag_update_inobt
0.62 ± 7% +0.3 0.90 ± 3% perf-profile.children.cycles-pp.xfs_buf_item_format
0.77 ± 8% +0.3 1.06 ± 2% perf-profile.children.cycles-pp.memcpy_erms
0.83 ± 17% +0.3 1.12 ± 4% perf-profile.children.cycles-pp.__queue_work
0.66 ± 5% +0.3 0.96 ± 5% perf-profile.children.cycles-pp.xfs_destroy_ioend
0.89 ± 4% +0.3 1.19 ± 5% perf-profile.children.cycles-pp.xfs_dir_lookup
0.89 ± 5% +0.3 1.20 ± 6% perf-profile.children.cycles-pp.xfs_lookup
0.96 ± 6% +0.3 1.27 ± 5% perf-profile.children.cycles-pp.xfs_vn_lookup
0.84 ± 17% +0.3 1.16 ± 4% perf-profile.children.cycles-pp.queue_work_on
0.78 ± 13% +0.3 1.10 ± 11% perf-profile.children.cycles-pp.iomap_apply
0.97 ± 2% +0.3 1.30 ± 7% perf-profile.children.cycles-pp.__lookup_hash
0.78 ± 13% +0.3 1.11 ± 11% perf-profile.children.cycles-pp.iomap_file_buffered_write
0.56 ± 6% +0.3 0.90 ± 5% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.83 ± 6% +0.3 1.17 ± 6% perf-profile.children.cycles-pp.xfs_da_read_buf
0.90 ± 11% +0.3 1.24 ± 9% perf-profile.children.cycles-pp.new_sync_write
0.84 ± 12% +0.3 1.19 ± 10% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
0.81 ± 11% +0.4 1.16 ± 5% perf-profile.children.cycles-pp.arch_stack_walk
0.95 ± 12% +0.4 1.30 ± 8% perf-profile.children.cycles-pp.vfs_write
0.86 ± 10% +0.4 1.22 ± 5% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.69 ± 6% +0.4 1.05 ± 6% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.96 ± 12% +0.4 1.33 ± 8% perf-profile.children.cycles-pp.ksys_write
0.70 ± 12% +0.4 1.08 ± 10% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent_near
0.93 ± 12% +0.4 1.32 ± 8% perf-profile.children.cycles-pp.xfs_submit_ioend
1.81 ± 4% +0.4 2.21 perf-profile.children.cycles-pp.xlog_state_do_callback
1.01 ± 10% +0.4 1.41 ± 4% perf-profile.children.cycles-pp.__account_scheduler_latency
1.11 ± 10% +0.4 1.51 ± 8% perf-profile.children.cycles-pp.write
1.54 ± 4% +0.4 1.94 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
1.84 ± 4% +0.4 2.24 perf-profile.children.cycles-pp.xlog_ioend_work
0.86 ± 7% +0.4 1.27 ± 5% perf-profile.children.cycles-pp.xfs_end_io
0.84 ± 6% +0.4 1.25 ± 5% perf-profile.children.cycles-pp.xfs_end_ioend
0.74 ± 11% +0.4 1.16 ± 8% perf-profile.children.cycles-pp.xfs_alloc_ag_vextent
1.19 ± 2% +0.4 1.61 ± 7% perf-profile.children.cycles-pp.filename_create
1.16 ± 7% +0.4 1.58 ± 7% perf-profile.children.cycles-pp.__wake_up_common
0.73 ± 6% +0.4 1.16 ± 4% perf-profile.children.cycles-pp.xfs_buf_find
1.33 ± 3% +0.4 1.76 ± 6% perf-profile.children.cycles-pp.xlog_cil_push
0.87 ± 7% +0.5 1.32 perf-profile.children.cycles-pp.xfs_buf_read_map
0.81 ± 7% +0.5 1.27 ± 2% perf-profile.children.cycles-pp.xfs_buf_get_map
1.09 ± 3% +0.5 1.55 ± 6% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
1.38 ± 10% +0.5 1.85 ± 4% perf-profile.children.cycles-pp.enqueue_entity
1.45 ± 11% +0.5 1.95 ± 4% perf-profile.children.cycles-pp.ttwu_do_activate
1.44 ± 11% +0.5 1.95 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
1.44 ± 11% +0.5 1.95 ± 4% perf-profile.children.cycles-pp.activate_task
1.04 ± 12% +0.5 1.58 ± 5% perf-profile.children.cycles-pp.xfs_bmap_btalloc
1.00 ± 11% +0.5 1.54 ± 6% perf-profile.children.cycles-pp.xfs_alloc_vextent
1.13 ± 4% +0.6 1.68 ± 2% perf-profile.children.cycles-pp.xfs_btree_lookup
1.19 ± 13% +0.6 1.81 ± 4% perf-profile.children.cycles-pp.xfs_bmapi_allocate
1.52 ± 3% +0.6 2.17 ± 5% perf-profile.children.cycles-pp.xfs_dialloc_ag
1.94 ± 10% +0.7 2.63 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
1.31 ± 6% +0.7 2.02 ± 3% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
1.78 ± 8% +0.7 2.51 ± 3% perf-profile.children.cycles-pp.xfs_map_blocks
1.75 ± 8% +0.7 2.48 ± 3% perf-profile.children.cycles-pp.xfs_bmapi_convert_delalloc
1.96 ± 8% +0.8 2.72 ± 3% perf-profile.children.cycles-pp.xfs_do_writepage
2.14 ± 8% +0.8 2.94 ± 3% perf-profile.children.cycles-pp.write_cache_pages
1.93 ± 4% +0.9 2.79 ± 4% perf-profile.children.cycles-pp.xfs_dialloc
2.65 ± 3% +1.1 3.74 ± 4% perf-profile.children.cycles-pp.xfs_dir_ialloc
2.59 ± 2% +1.1 3.68 ± 5% perf-profile.children.cycles-pp.xfs_ialloc
2.89 ± 2% +1.2 4.04 ± 7% perf-profile.children.cycles-pp.path_openat
2.90 ± 2% +1.2 4.05 ± 7% perf-profile.children.cycles-pp.do_filp_open
2.98 +1.2 4.17 ± 7% perf-profile.children.cycles-pp.do_sys_open
3.09 ± 9% +1.2 4.28 ± 4% perf-profile.children.cycles-pp.xfs_vm_writepages
3.09 ± 9% +1.2 4.29 ± 4% perf-profile.children.cycles-pp.do_writepages
3.11 ± 9% +1.2 4.32 ± 4% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
3.12 ± 2% +1.2 4.35 ± 6% perf-profile.children.cycles-pp.open64
4.34 ± 3% +1.3 5.61 ± 2% perf-profile.children.cycles-pp.process_one_work
4.79 ± 3% +1.4 6.15 ± 2% perf-profile.children.cycles-pp.worker_thread
3.04 ± 2% +1.4 4.40 ± 3% perf-profile.children.cycles-pp.xfs_log_commit_cil
3.61 ± 8% +1.4 4.98 ± 4% perf-profile.children.cycles-pp.file_write_and_wait_range
3.08 ± 2% +1.4 4.51 ± 4% perf-profile.children.cycles-pp.__xfs_trans_commit
5.38 ± 3% +1.5 6.89 ± 2% perf-profile.children.cycles-pp.ret_from_fork
5.37 ± 3% +1.5 6.89 ± 2% perf-profile.children.cycles-pp.kthread
6.52 ± 6% +1.6 8.08 ± 4% perf-profile.children.cycles-pp.xfs_file_fsync
6.53 ± 6% +1.6 8.10 ± 4% perf-profile.children.cycles-pp.do_fsync
6.54 ± 6% +1.6 8.11 ± 4% perf-profile.children.cycles-pp.__x64_sys_fsync
7.03 ± 5% +1.7 8.75 ± 4% perf-profile.children.cycles-pp.fsync
2.31 ± 3% -1.9 0.43 ± 8% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
1.39 ± 3% -1.0 0.35 ± 5% perf-profile.self.cycles-pp.xfs_dir2_node_addname
0.06 ± 7% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.xlog_verify_iclog
0.08 ± 6% +0.0 0.10 ± 8% perf-profile.self.cycles-pp.__module_address
0.04 ± 57% +0.0 0.07 ± 17% perf-profile.self.cycles-pp.xfs_iext_last
0.09 ± 17% +0.0 0.13 ± 12% perf-profile.self.cycles-pp._xfs_buf_obj_cmp
0.05 ± 62% +0.0 0.09 ± 19% perf-profile.self.cycles-pp.xlog_verify_dest_ptr
0.04 ± 58% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.xfs_trans_del_item
0.11 ± 11% +0.0 0.15 ± 8% perf-profile.self.cycles-pp.xfs_perag_put
0.08 ± 19% +0.0 0.12 ± 15% perf-profile.self.cycles-pp.kmem_cache_free
0.09 ± 13% +0.0 0.13 ± 12% perf-profile.self.cycles-pp.__might_sleep
0.11 ± 6% +0.0 0.15 ± 12% perf-profile.self.cycles-pp.kfree
0.12 ± 7% +0.0 0.16 ± 8% perf-profile.self.cycles-pp.xfs_buf_offset
0.18 ± 13% +0.0 0.23 ± 4% perf-profile.self.cycles-pp.__percpu_counter_sum
0.13 ± 17% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.__slab_free
0.11 ± 6% +0.0 0.15 ± 9% perf-profile.self.cycles-pp.new_slab
0.07 ± 12% +0.0 0.11 ± 7% perf-profile.self.cycles-pp.xfs_inobt_key_diff
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.xfs_btree_ptr_to_daddr
0.05 ± 62% +0.0 0.09 ± 24% perf-profile.self.cycles-pp.xfs_trans_buf_item_match
0.15 ± 16% +0.0 0.20 ± 14% perf-profile.self.cycles-pp.vfprintf
0.11 ± 9% +0.0 0.16 ± 15% perf-profile.self.cycles-pp.down_read
0.09 ± 16% +0.0 0.14 ± 23% perf-profile.self.cycles-pp.stack_trace_consume_entry_nosched
0.01 ±173% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.xfs_buf_item_log
0.10 ± 15% +0.0 0.14 ± 5% perf-profile.self.cycles-pp.xfs_inode_item_format
0.08 ± 15% +0.1 0.13 ± 3% perf-profile.self.cycles-pp.xfs_buf_item_init
0.01 ±173% +0.1 0.07 ± 13% perf-profile.self.cycles-pp.xfs_inode_item_pin
0.12 ± 7% +0.1 0.17 ± 10% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.08 ± 19% +0.1 0.13 ± 16% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.check_preempt_wakeup
0.01 ±173% +0.1 0.07 ± 16% perf-profile.self.cycles-pp.xfs_buf_item_size
0.00 +0.1 0.06 ± 15% perf-profile.self.cycles-pp.xlog_grant_add_space
0.17 ± 4% +0.1 0.22 ± 9% perf-profile.self.cycles-pp.xlog_write
0.15 ± 7% +0.1 0.21 ± 8% perf-profile.self.cycles-pp.__kmalloc
0.10 ± 23% +0.1 0.16 ± 12% perf-profile.self.cycles-pp.xfs_trans_log_inode
0.15 ± 13% +0.1 0.21 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.18 ± 8% +0.1 0.25 ± 10% perf-profile.self.cycles-pp.xfs_btree_lookup
0.18 ± 11% +0.1 0.25 ± 5% perf-profile.self.cycles-pp.xfs_next_bit
0.20 ± 8% +0.1 0.26 ± 11% perf-profile.self.cycles-pp.__radix_tree_lookup
0.30 ± 9% +0.1 0.37 ± 11% perf-profile.self.cycles-pp.__sched_text_start
0.17 ± 9% +0.1 0.24 ± 3% perf-profile.self.cycles-pp.xfs_trans_dirty_buf
0.07 ± 63% +0.1 0.14 ± 13% perf-profile.self.cycles-pp.selinux_inode_permission
0.17 ± 11% +0.1 0.24 ± 13% perf-profile.self.cycles-pp.__orc_find
0.13 ± 14% +0.1 0.21 ± 17% perf-profile.self.cycles-pp.___might_sleep
0.14 ± 9% +0.1 0.22 ± 17% perf-profile.self.cycles-pp.memset_erms
0.12 ± 21% +0.1 0.20 ± 10% perf-profile.self.cycles-pp.xfs_buf_item_size_segment
0.19 ± 21% +0.1 0.27 ± 8% perf-profile.self.cycles-pp.xfs_buf_item_format
0.15 ± 14% +0.1 0.24 ± 12% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.25 ± 14% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.xfs_buf_item_pin
0.18 ± 15% +0.1 0.28 ± 10% perf-profile.self.cycles-pp.unwind_next_frame
0.15 ± 7% +0.1 0.24 ± 6% perf-profile.self.cycles-pp.xfs_log_commit_cil
0.12 ± 7% +0.1 0.21 ± 14% perf-profile.self.cycles-pp.xfs_perag_get
0.27 ± 11% +0.1 0.38 ± 5% perf-profile.self.cycles-pp.do_syscall_64
0.22 ± 9% +0.1 0.34 ± 14% perf-profile.self.cycles-pp.__list_del_entry_valid
0.38 ± 3% +0.2 0.54 ± 4% perf-profile.self.cycles-pp.xfs_dir3_leaf_check_int
0.30 ± 12% +0.2 0.48 ± 2% perf-profile.self.cycles-pp.xfs_buf_find
0.94 ± 3% +0.2 1.14 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.76 ± 8% +0.3 1.04 ± 3% perf-profile.self.cycles-pp.memcpy_erms
1.51 ± 4% +0.4 1.90 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
fsmark.time.system_time
160 +-+-------------------------------------------------------------------+
| +..+.+.+ + + + + +.+.+. .+.+.+.+.. .+.+.+.+..+.+.|
140 +-+ : : : : : : : +..+ +.+ |
120 +-+ : : : : : : : |
| : : :: : : :: : |
100 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
80 +-+: : : : : : : : : : : |
O O: O O: :O : :O: :O: :O : :O O O O O O O O O O O O |
60 +-+: : : : : : : : : : : |
40 +-+: : : : : : : : : : : |
| : : :: : : :: |
20 +-+ : : : : : |
| : : : : : : |
0 +-+-O----O---O----O---O---O----O--------------------------------------+
fsmark.time.percent_of_cpu_this_job_got
100 +-+-------------------------------------------------------------------+
90 +-+ +..+.+.+ + + + + +.+.+.+..+.+.+.+.+..+.+.+.+.+.+..+.+.|
| O : : O O O O : O O O O |
80 O-+ : O O : : : : O O O O O O O O |
70 +-+ : : :: : : :: : |
| : : : : : : : : : : : |
60 +-+: : : : : : : : : : : |
50 +-+: : : : : : : : : : : |
40 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
30 +-+: : : : : : : : : : : |
20 +-+ : :: : : :: |
| : : : : : : |
10 +-+ : : : : : |
0 +-+-O----O---O----O---O---O----O--------------------------------------+
fsmark.time.elapsed_time
180 +-+-------------------------------------------------------------------+
| +..+.+.+ + + + + +.+.+. .+.+.+.+.. .+.+.+.+..+.+.|
160 +-+ : : : : : : : +..+ +.+ |
140 +-+ : : : : : : : |
| : : :: : : :: : |
120 +-+: : : : : : : : : : : |
100 +-+: : : : : : : : : : : |
O O: O O: :O: :O: :O: :O: :O O O O O O O O O O O O |
80 +-+: : : : : : : : : : : |
60 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
40 +-+ : :: : : :: |
20 +-+ : : : : : |
| : : : : : : |
0 +-+-O----O---O----O---O---O----O--------------------------------------+
fsmark.time.elapsed_time.max
180 +-+-------------------------------------------------------------------+
| +..+.+.+ + + + + +.+.+. .+.+.+.+.. .+.+.+.+..+.+.|
160 +-+ : : : : : : : +..+ +.+ |
140 +-+ : : : : : : : |
| : : :: : : :: : |
120 +-+: : : : : : : : : : : |
100 +-+: : : : : : : : : : : |
O O: O O: :O: :O: :O: :O: :O O O O O O O O O O O O |
80 +-+: : : : : : : : : : : |
60 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
40 +-+ : :: : : :: |
20 +-+ : : : : : |
| : : : : : : |
0 +-+-O----O---O----O---O---O----O--------------------------------------+
fsmark.files_per_sec
14000 +-+-----------------------------------------------------------------+
| |
12000 O-O O O O O O O O O O O O O O O O O O O |
| |
10000 +-+ |
| |
8000 +-+ |
| |
6000 +-+ +.+..+.+ + + + + +.+.+.+.+..+.+.+.+.+.+.+.+..+.+.+.+.|
| : : : : :: : : |
4000 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
2000 +-+: : : : : : : : : : : |
| : : : : : : |
0 +-+-O----O---O---O---O----O---O-------------------------------------+
fsmark.app_overhead
1.2e+08 +-+---------------------------------------------------------------+
| +.+.+.+ + + + + +.+. .+.+.+. .+.+.+.+.+.+.|
1e+08 +-+ : : : : : : : +.+.+ +..+.+ |
| : : : : : : : |
| : : :: : : : : |
8e+07 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
6e+07 +-+: : : : : : : : : : : |
| : : : : : : : : : : : |
4e+07 +-+: : : : : : : : : : : |
O : O O: : : :O: :O: :O: : O O O O O O O |
| O : O :: : : : O O O O O |
2e+07 +-+ : : : : : |
| : : : : : : |
0 +-+-O---O---O----O---O---O---O------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[selftests] adb701d6cf: kernel_selftests.net.fib_tests.sh.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: adb701d6cfa432f5dbdf28839b5e64291a7ed30b ("selftests: add a test case for rp_filter")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
# selftests: net: fib_tests.sh
#
# Single path route test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# Nexthop device deleted
# TEST: IPv4 fibmatch - no route [ OK ]
# TEST: IPv6 fibmatch - no route [ OK ]
#
# Multipath route test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# One nexthop device deleted
# TEST: IPv4 - multipath route removed on delete [ OK ]
# TEST: IPv6 - multipath down to single path [ OK ]
# Second nexthop device deleted
# TEST: IPv6 - no route [ OK ]
#
# Single path, admin down
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# Route deleted on down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
#
# Admin down multipath
# Verify start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# One device down, one up
# TEST: IPv4 fibmatch on down device [ OK ]
# TEST: IPv6 fibmatch on down device [ OK ]
# TEST: IPv4 fibmatch on up device [ OK ]
# TEST: IPv6 fibmatch on up device [ OK ]
# TEST: IPv4 flags on down device [ OK ]
# TEST: IPv6 flags on down device [ OK ]
# TEST: IPv4 flags on up device [ OK ]
# TEST: IPv6 flags on up device [ OK ]
# Other device down and up
# TEST: IPv4 fibmatch on down device [ OK ]
# TEST: IPv6 fibmatch on down device [ OK ]
# TEST: IPv4 fibmatch on up device [ OK ]
# TEST: IPv6 fibmatch on up device [ OK ]
# TEST: IPv4 flags on down device [ OK ]
# TEST: IPv6 flags on down device [ OK ]
# TEST: IPv4 flags on up device [ OK ]
# TEST: IPv6 flags on up device [ OK ]
# Both devices down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
#
# Local carrier tests - single path
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 - no linkdown flag [ OK ]
# TEST: IPv6 - no linkdown flag [ OK ]
# Carrier off on nexthop
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 - linkdown flag set [ OK ]
# TEST: IPv6 - linkdown flag set [ OK ]
# Route to local address with carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
#
# Single path route carrier test
# Start point
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 no linkdown flag [ OK ]
# TEST: IPv6 no linkdown flag [ OK ]
# Carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
# Second address added with carrier down
# TEST: IPv4 fibmatch [ OK ]
# TEST: IPv6 fibmatch [ OK ]
# TEST: IPv4 linkdown flag set [ OK ]
# TEST: IPv6 linkdown flag set [ OK ]
#
# IPv4 nexthop tests
# <<< write me >>>
#
# IPv6 nexthop tests
# TEST: Directly connected nexthop, unicast address [ OK ]
# TEST: Directly connected nexthop, unicast address with device [ OK ]
# TEST: Gateway is linklocal address [ OK ]
# TEST: Gateway is linklocal address, no device [ OK ]
# TEST: Gateway can not be local unicast address [ OK ]
# TEST: Gateway can not be local unicast address, with device [ OK ]
# TEST: Gateway can not be a local linklocal address [ OK ]
# TEST: Gateway can be local address in a VRF [ OK ]
# TEST: Gateway can be local address in a VRF, with device [ OK ]
# TEST: Gateway can be local linklocal address in a VRF [ OK ]
# TEST: Redirect to VRF lookup [ OK ]
# TEST: VRF route, gateway can be local address in default VRF [ OK ]
# TEST: VRF route, gateway can not be a local address [ OK ]
# TEST: VRF route, gateway can not be a local addr with device [ OK ]
#
# IPv6 route add / append tests
# TEST: Attempt to add duplicate route - gw [ OK ]
# TEST: Attempt to add duplicate route - dev only [ OK ]
# TEST: Attempt to add duplicate route - reject route [ OK ]
# TEST: Append nexthop to existing route - gw [ OK ]
# TEST: Add multipath route [ OK ]
# TEST: Attempt to add duplicate multipath route [ OK ]
# TEST: Route add with different metrics [ OK ]
# TEST: Route delete with metric [ OK ]
#
# IPv6 route replace tests
# TEST: Single path with single path [ OK ]
# TEST: Single path with multipath [ OK ]
# TEST: Single path with single path via multipath attribute [ OK ]
# TEST: Invalid nexthop [ OK ]
# TEST: Single path - replace of non-existent route [ OK ]
# TEST: Multipath with multipath [ OK ]
# TEST: Multipath with single path [ OK ]
# TEST: Multipath with single path via multipath attribute [ OK ]
# TEST: Multipath - invalid first nexthop [ OK ]
# TEST: Multipath - invalid second nexthop [ OK ]
# TEST: Multipath - replace of non-existent route [ OK ]
#
# IPv4 route add / append tests
# TEST: Attempt to add duplicate route - gw [ OK ]
# TEST: Attempt to add duplicate route - dev only [ OK ]
# TEST: Attempt to add duplicate route - reject route [ OK ]
# TEST: Add new nexthop for existing prefix [ OK ]
# TEST: Append nexthop to existing route - gw [ OK ]
# TEST: Append nexthop to existing route - dev only [ OK ]
# TEST: Append nexthop to existing route - reject route [ OK ]
# TEST: Append nexthop to existing reject route - gw [ OK ]
# TEST: Append nexthop to existing reject route - dev only [ OK ]
# TEST: add multipath route [ OK ]
# TEST: Attempt to add duplicate multipath route [ OK ]
# TEST: Route add with different metrics [ OK ]
# TEST: Route delete with metric [ OK ]
#
# IPv4 route replace tests
# TEST: Single path with single path [ OK ]
# TEST: Single path with multipath [ OK ]
# TEST: Single path with reject route [ OK ]
# TEST: Single path with single path via multipath attribute [ OK ]
# TEST: Invalid nexthop [ OK ]
# TEST: Single path - replace of non-existent route [ OK ]
# TEST: Multipath with multipath [ OK ]
# TEST: Multipath with single path [ OK ]
# TEST: Multipath with single path via multipath attribute [ OK ]
# TEST: Multipath with reject route [ OK ]
# TEST: Multipath - invalid first nexthop [ OK ]
# TEST: Multipath - invalid second nexthop [ OK ]
# TEST: Multipath - replace of non-existent route [ OK ]
#
# IPv6 prefix route tests
# TEST: Default metric [ OK ]
# TEST: User specified metric on first device [ OK ]
# TEST: User specified metric on second device [ OK ]
# TEST: Delete of address on first device [ OK ]
# TEST: Modify metric of address [ OK ]
# TEST: Prefix route removed on link down [ OK ]
# TEST: Prefix route with metric on link up [ OK ]
#
# IPv4 prefix route tests
# TEST: Default metric [ OK ]
# TEST: User specified metric on first device [ OK ]
# TEST: User specified metric on second device [ OK ]
# TEST: Delete of address on first device [ OK ]
# TEST: Modify metric of address [ OK ]
# TEST: Prefix route removed on link down [ OK ]
# TEST: Prefix route with metric on link up [ OK ]
#
# IPv6 routes with metrics
# TEST: Single path route with mtu metric [ OK ]
# TEST: Multipath route via 2 single routes with mtu metric on first [ OK ]
# TEST: Multipath route via 2 single routes with mtu metric on 2nd [ OK ]
# TEST: MTU of second leg [ OK ]
# TEST: Multipath route with mtu metric [ OK ]
# TEST: Using route with mtu metric [ OK ]
# TEST: Invalid metric (fails metric_convert) [ OK ]
#
# IPv4 route add / append tests
# TEST: Single path route with mtu metric [ OK ]
# TEST: Multipath route with mtu metric [ OK ]
# TEST: Using route with mtu metric [ OK ]
# TEST: Invalid metric (fails metric_convert) [ OK ]
#
# IPv4 route with IPv6 gateway tests
# TEST: Single path route with IPv6 gateway [ OK ]
# TEST: Single path route with IPv6 gateway - ping [ OK ]
# TEST: Single path route delete [ OK ]
# TEST: Multipath route add - v6 nexthop then v4 [ OK ]
# TEST: Multipath route delete - nexthops in wrong order [ OK ]
# TEST: Multipath route delete exact match [ OK ]
# TEST: Multipath route add - v4 nexthop then v6 [ OK ]
# TEST: Multipath route delete - nexthops in wrong order [ OK ]
# TEST: Multipath route delete exact match [ OK ]
#
# IPv4 rp_filter tests
# TEST: rp_filter passes local packets [FAIL]
# TEST: rp_filter passes loopback packets [FAIL]
#
# Tests passed: 150
# Tests failed: 2
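For context, the two failing cases come from the rp_filter test case added by the commit under test. What follows is only a hedged, config-style sketch of the strict reverse-path-filter settings that test exercises, using the standard sysctl names; the authoritative checks live in tools/testing/selftests/net/fib_tests.sh, and the address used below is purely illustrative.

```shell
# Enable strict reverse-path filtering (mode 1), which is the
# condition under which the test expects local and loopback
# packets to still be accepted.
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1

# The selftest then verifies that locally generated traffic still
# passes, e.g. by pinging a local address (192.0.2.1 is a
# placeholder, not the address the script actually uses):
ping -c 1 -w 1 192.0.2.1
```

With rp_filter=1 the kernel drops packets whose source address would not be routed back out the ingress interface, so the test's point is that this strict mode must not reject the host's own local/loopback traffic.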
not ok 14 selftests: net: fib_tests.sh
To reproduce:
# build kernel
cd linux
cp config-5.2.0-08276-gadb701d6cfa43 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen