[mm, slab_common] c5e1edaa1a: BUG:unable_to_handle_page_fault_for_address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: c5e1edaa1a52581b350315e4b163cdcc0fd7605d ("[PATCH v5 7/7] mm, slab_common: Modify kmalloc_caches[type][idx] to kmalloc_caches[idx][type]")
url: https://github.com/0day-ci/linux/commits/Pengfei-Li/mm-slab-Make-kmalloc_...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------+------------+------------+
| | 4e0d9effb5 | c5e1edaa1a |
+---------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| BUG:unable_to_handle_page_fault_for_address | 0 | 4 |
| Oops:#[##] | 0 | 4 |
| RIP:calculate_slab_order | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
+---------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp@intel.com>
[ 0.417634] BUG: unable to handle page fault for address: 0000000000301171
[ 0.418931] #PF: supervisor read access in kernel mode
[ 0.419879] #PF: error_code(0x0000) - not-present page
[ 0.420827] PGD 0 P4D 0
[ 0.421296] Oops: 0000 [#1] DEBUG_PAGEALLOC
[ 0.422073] CPU: 0 PID: 0 Comm: swapper Tainted: G T 5.3.0-00007-gc5e1edaa1a525 #2
[ 0.423725] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.425265] RIP: 0010:calculate_slab_order+0x81/0xd9
[ 0.426194] Code: ba e4 11 41 89 5f 24 41 89 4f 28 73 35 eb 61 31 f6 89 c7 89 4c 24 04 48 89 54 24 08 e8 95 8a fd ff 48 85 c0 8b 4c 24 04 74 38 <83> 78 20 00 78 32 41 8b 77 14 48 8b 54 24 08 d1 ee 39 70 14 76 be
[ 0.429675] RSP: 0000:ffffffff82603e50 EFLAGS: 00010002
[ 0.430648] RAX: 0000000000301151 RBX: 0000000000000001 RCX: 0000000000000000
[ 0.431966] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
[ 0.433282] RBP: 0000000000001000 R08: 0000000000000000 R09: 0000000000000000
[ 0.434617] R10: 0000000000002000 R11: 0000000000000005 R12: 0000000080002800
[ 0.435936] R13: 0000000080000000 R14: 0000000000000000 R15: ffffffff82853bc0
[ 0.437259] FS: 0000000000000000(0000) GS:ffffffff8264e000(0000) knlGS:0000000000000000
[ 0.438765] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.439840] CR2: 0000000000301171 CR3: 0000000002622000 CR4: 00000000000406b0
[ 0.441157] Call Trace:
[ 0.441614] set_off_slab_cache+0x39/0x59
[ 0.442357] __kmem_cache_create+0x11a/0x26e
[ 0.443171] create_boot_cache+0x46/0x69
[ 0.443900] kmem_cache_init+0x72/0x115
[ 0.444612] start_kernel+0x204/0x454
[ 0.445301] secondary_startup_64+0xa4/0xb0
[ 0.446078] Modules linked in:
[ 0.446696] CR2: 0000000000301171
[ 0.447323] random: get_random_bytes called from init_oops_id+0x1d/0x2c with crng_init=0
[ 0.448820] ---[ end trace b2352b5ced6eae1c ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-00007-gc5e1edaa1a525 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
46987b364a: BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 46987b364a12b2a4a7942139acc85735275d39df ("experiment")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/brauner/linux.git seccomp_notify_filter_empty
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------------------------------+------------+------------+
| | d4148b5838 | 46987b364a |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 22 | 1 |
| boot_failures | 0 | 7 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 0 | 7 |
| RIP:errseq_sample | 0 | 1 |
+-----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 88.357847] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:255
[ 88.382807] in_atomic(): 1, irqs_disabled(): 0, pid: 407, name: seq
[ 88.399332] CPU: 1 PID: 407 Comm: seq Not tainted 5.3.0-00004-g46987b364a12b #1
[ 88.418090] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 88.440355] Call Trace:
[ 88.447512] <IRQ>
[ 88.453380] dump_stack+0x5c/0x7b
[ 88.462779] ___might_sleep+0xf1/0x110
[ 88.472044] mutex_lock+0x1c/0x40
[ 88.480509] put_seccomp_filter+0x58/0x90
[ 88.490783] free_task+0x23/0x50
[ 88.499207] rcu_core+0x2e8/0x480
[ 88.508404] __do_softirq+0xe3/0x2f8
[ 88.517929] irq_exit+0xd5/0xe0
[ 88.526459] smp_apic_timer_interrupt+0x74/0x140
[ 88.537544] apic_timer_interrupt+0xf/0x20
[ 88.547394] </IRQ>
[ 88.552830] RIP: 0010:errseq_sample+0x2/0x10
[ 88.564439] Code: 39 c2 77 0f 48 89 ef 4c 89 e6 89 da e8 e7 d9 5e 00 89 c3 48 89 d8 5b 5d 41 5c c3 0f 0b eb c8 90 90 90 90 90 90 90 90 90 8b 07 <ba> 00 00 00 00 f6 c4 10 0f 44 c2 c3 66 90 8b 17 31 c0 39 16 74 1b
[ 88.611105] RSP: 0018:ffffa05600423cb8 EFLAGS: 00000283 ORIG_RAX: ffffffffffffff13
[ 88.629121] RAX: 0000000000000000 RBX: ffff90fb2956c800 RCX: 0000000a00000000
[ 88.646850] RDX: 0000000900000000 RSI: 0000000a00000000 RDI: ffff90fb6a36a440
[ 88.662923] RBP: ffff90fb6a36a250 R08: 0000000000000064 R09: ffff90fa47d5e160
[ 88.681164] R10: ffff90fb6a35cd80 R11: 0000544e454d4552 R12: ffff90fb2956c810
[ 88.699278] R13: 0000000000000000 R14: ffff90fb2956c800 R15: 0000000000000000
[ 88.717946] do_dentry_open+0x3a/0x380
[ 88.728316] path_openat+0x2e6/0x15b0
[ 88.749473] ? mem_cgroup_commit_charge+0x61/0x4d0
[ 88.761528] ? mem_cgroup_try_charge+0x70/0x1d0
[ 88.781652] do_filp_open+0x9b/0x110
[ 88.790661] ? __check_object_size+0xd4/0x1a0
[ 88.801714] ? do_sys_open+0x1bd/0x250
[ 88.810853] do_sys_open+0x1bd/0x250
[ 88.819985] do_syscall_64+0x5b/0x1f0
[ 88.830863] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 88.843141] RIP: 0033:0x7ff76fffb6ec
[ 88.852474] Code: 53 41 89 f4 48 89 fb be 00 00 08 00 44 89 e8 48 81 ec a8 00 00 00 c7 47 08 01 00 00 00 48 c7 47 10 00 00 00 00 48 8b 3f 0f 05 <48> 3d 00 f0 ff ff 0f 87 08 01 00 00 4c 63 f0 89 85 38 ff ff ff 4d
[ 88.897740] RSP: 002b:00007ffc5e9b6740 EFLAGS: 00000202 ORIG_RAX: 0000000000000002
[ 88.925192] RAX: ffffffffffffffda RBX: 000055be4f56b380 RCX: 00007ff76fffb6ec
[ 88.952193] RDX: 0000000000000001 RSI: 0000000000080000 RDI: 000055be4f56b350
[ 88.979117] RBP: 00007ffc5e9b6810 R08: 000055be4f56b2a0 R09: 0000000000000000
[ 89.005499] R10: 00007ff770139810 R11: 0000000000000202 R12: 000000000000000b
[ 89.030670] R13: 0000000000000002 R14: 000055be4f56b300 R15: 0000000000000000
[ 89.793896] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:255
[ 89.837583] in_atomic(): 1, irqs_disabled(): 0, pid: 299, name: sed
[ 89.867783] CPU: 1 PID: 299 Comm: sed Tainted: G W 5.3.0-00004-g46987b364a12b #1
[ 89.899026] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 89.929134] Call Trace:
[ 89.944367] <IRQ>
[ 89.958527] dump_stack+0x5c/0x7b
[ 89.975865] ___might_sleep+0xf1/0x110
[ 89.994361] mutex_lock+0x1c/0x40
[ 90.010768] put_seccomp_filter+0x58/0x90
[ 90.029790] free_task+0x23/0x50
[ 90.046861] rcu_core+0x2e8/0x480
[ 90.064362] __do_softirq+0xe3/0x2f8
[ 90.081762] irq_exit+0xd5/0xe0
[ 90.098783] smp_apic_timer_interrupt+0x74/0x140
[ 90.119220] apic_timer_interrupt+0xf/0x20
[ 90.138672] </IRQ>
[ 90.153290] RIP: 0033:0x55fa217e6fe2
[ 90.171557] Code: 48 8b 55 00 31 ff 31 f6 48 39 d6 7c 4a 48 3b 3b 0f 8d c2 00 00 00 48 89 f8 48 c1 e0 04 49 03 45 00 45 89 e0 44 23 40 08 74 24 <49> 8b 4f 08 48 8b 00 48 89 ca 48 83 c1 01 48 c1 e2 04 49 03 17 48
[ 90.236822] RSP: 002b:00007ffc7a0c5be0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff13
[ 90.263049] RAX: 000055fa24267240 RBX: 00007fa9d38b1bb0 RCX: 000055fa24106750
[ 90.289911] RDX: 0000000000000004 RSI: 0000000000000004 RDI: 0000000000000000
[ 90.315511] RBP: 00007ffc7a0c5cb8 R08: 00000000000001ff R09: 0000000000001185
[ 90.340974] R10: 00000000000001ff R11: 0000000000001186 R12: 00000000ffffffff
[ 90.366611] R13: 00007fa9d38b1ba8 R14: 00007ffc7a0c5cb0 R15: 00007ffc7a0c5c90
Elapsed time: 100
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-0 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-1 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-2 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-3 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-4 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-5 256G
qemu-img create -f qcow2 disk-vm-snb-8G-4d3c63819661-6 256G
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-8G-4d3c63819661
-m 8192
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-drive file=disk-vm-snb-8G-4d3c63819661-0,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-1,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-2,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-3,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-4,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-5,media=disk,if=virtio
-drive file=disk-vm-snb-8G-4d3c63819661-6,media=disk,if=virtio
-serial stdio
-display none
-monitor null
)
append=(
ip=::::vm-snb-8G-4d3c63819661::dhcp
root=/dev/ram0
user=lkp
job=/job-script
ARCH=x86_64
kconfig=x86_64-rhel-7.6
branch=linux-devel/devel-catchup-201909200016
commit=46987b364a12b2a4a7942139acc85735275d39df
BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.6/gcc-7/46987b364a12b2a4a7942139acc85735275d39df/vmlinuz-5.3.0-00004-g46987b364a12b
max_uptime=600
RESULT_ROOT=/result/boot/1/vm-snb-8G/debian-x86_64-2019-05-14.cgz/x86_64-rhel-7.6/gcc-7/46987b364a12b2a4a7942139acc85735275d39df/3
result_service=tmpfs
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw
rcuperf.shutdown=0
)
"${kvm[@]}" -append "${append[*]}"
To reproduce:
# build kernel
cd linux
cp config-5.3.0-00004-g46987b364a12b .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[nfsd] 5920afa3c8: fsmark.files_per_sec 35.8% improvement
by kernel test robot
Greetings,
FYI, we noticed a 35.8% improvement of fsmark.files_per_sec due to commit:
commit: 5920afa3c85ff38642f652b6e3880e79392fcc89 ("nfsd: hook nfsd_commit up to the nfsd_file cache")
git://linux-nfs.org/~bfields/linux.git nfsd-next
in testcase: fsmark
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
iterations: 1x
nr_threads: 64t
disk: 1BRD_48G
fs: f2fs
fs2: nfsv4
filesize: 4M
test_size: 40G
sync_method: NoSync
cpufreq_governor: performance
test-description: fsmark is a file system benchmark that tests synchronous write workloads, such as those of mail servers.
test-url: https://sourceforge.net/projects/fsmark/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec 37.8% improvement |
| test machine | 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1BRD_48G |
| | filesize=4M |
| | fs2=nfsv4 |
| | fs=f2fs |
| | iterations=1x |
| | nr_threads=64t |
| | sync_method=fsyncBeforeClose |
| | test_size=40G |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/nfsv4/f2fs/1x/x86_64-rhel-7.6/64t/debian-x86_64-2019-05-14.cgz/NoSync/lkp-ivb-ep01/40G/fsmark
commit:
48cd7b5125 ("nfsd: hook up nfsd_read to the nfsd_file cache")
5920afa3c8 ("nfsd: hook nfsd_commit up to the nfsd_file cache")
48cd7b51258c1a15 5920afa3c85ff38642f652b6e38
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
32291717 ± 7% -32.8% 21687061 ± 16% fsmark.app_overhead
211.12 ± 4% +35.8% 286.65 ± 6% fsmark.files_per_sec
53.09 ± 2% -26.0% 39.30 ± 6% fsmark.time.elapsed_time
53.09 ± 2% -26.0% 39.30 ± 6% fsmark.time.elapsed_time.max
75134 -1.4% 74068 fsmark.time.minor_page_faults
147.00 -4.8% 140.00 fsmark.time.percent_of_cpu_this_job_got
75.41 -29.3% 53.34 ± 7% fsmark.time.system_time
4608000 ± 18% +39.8% 6444032 ± 8% meminfo.DirectMap2M
2308611 ± 3% +34.8% 3111571 ± 5% meminfo.max_used_kB
2.00 -75.0% 0.50 ±100% nfsstat.Client.nfs.v3.commit.percent
2.00 -75.0% 0.50 ±100% nfsstat.Client.nfs.v3.write.percent
2517 +16.4% 2931 ± 18% numa-vmstat.node0.nr_mapped
2412 ± 4% -12.4% 2112 ± 12% numa-vmstat.node1.nr_writeback
1.00 ±173% +450.0% 5.50 ± 48% sched_debug.cfs_rq:/.load_avg.min
2209 ± 23% -85.4% 322.06 ±252% sched_debug.cfs_rq:/.spread0.avg
1.151e+09 ± 12% -28.5% 8.234e+08 ± 12% cpuidle.C6.time
1927529 ± 16% -41.7% 1122894 ± 5% cpuidle.C6.usage
2225133 ± 25% -46.7% 1186043 ± 9% cpuidle.POLL.time
71.91 -3.3% 69.53 iostat.cpu.idle
18.10 +11.6% 20.19 iostat.cpu.system
1.66 ± 5% +23.6% 2.06 ± 6% iostat.cpu.user
0.20 ± 4% -0.0 0.17 ± 6% mpstat.cpu.all.soft%
18.52 +2.6 21.08 mpstat.cpu.all.sys%
1.65 ± 5% +0.4 2.07 ± 6% mpstat.cpu.all.usr%
71.25 -2.8% 69.25 vmstat.cpu.id
741189 ± 2% +33.9% 992300 ± 5% vmstat.io.bo
8.00 +12.5% 9.00 vmstat.procs.r
83493 ± 4% +26.1% 105304 ± 4% vmstat.system.cs
451.75 ± 4% +46.3% 661.00 ± 5% turbostat.Avg_MHz
1952 ± 3% +34.5% 2625 ± 5% turbostat.Bzy_MHz
1923510 ± 16% -41.8% 1118609 ± 5% turbostat.C6
4.73 ± 26% -40.9% 2.79 ± 21% turbostat.CPU%c3
63.09 ± 4% +48.5% 93.72 ± 7% turbostat.CorWatt
4549225 ± 2% -24.9% 3416110 ± 5% turbostat.IRQ
90.47 ± 3% +34.5% 121.67 ± 5% turbostat.PkgWatt
39.38 +7.4% 42.29 turbostat.RAMWatt
4430 ± 3% -26.4% 3260 ± 5% turbostat.SMI
2040 +97.3% 4024 ± 5% slabinfo.Acpi-State.active_objs
2040 +97.3% 4024 ± 5% slabinfo.Acpi-State.num_objs
2048 ± 4% +142.8% 4972 slabinfo.avc_xperms_data.active_objs
2048 ± 4% +142.8% 4972 slabinfo.avc_xperms_data.num_objs
726.25 ± 6% +206.1% 2223 slabinfo.blkdev_ioc.active_objs
726.25 ± 6% +216.6% 2299 slabinfo.blkdev_ioc.num_objs
13109 +37.4% 18014 ± 5% slabinfo.cred_jar.active_objs
311.50 +38.0% 429.75 ± 5% slabinfo.cred_jar.active_slabs
13109 +37.9% 18075 ± 5% slabinfo.cred_jar.num_objs
311.50 +38.0% 429.75 ± 5% slabinfo.cred_jar.num_slabs
5860 -10.6% 5242 ± 3% slabinfo.skbuff_fclone_cache.active_objs
5975 -10.6% 5343 ± 3% slabinfo.skbuff_fclone_cache.num_objs
12229 ± 2% -6.4% 11442 proc-vmstat.nr_active_file
21400 -5.2% 20277 ± 4% proc-vmstat.nr_dirty
11441687 -5.9% 10763357 ± 2% proc-vmstat.nr_file_pages
81478805 +1.3% 82556867 proc-vmstat.nr_free_pages
4889 +2.5% 5012 proc-vmstat.nr_inactive_anon
11154711 -6.1% 10477316 ± 2% proc-vmstat.nr_inactive_file
6339 +2.4% 6490 proc-vmstat.nr_mapped
58594 -4.8% 55779 proc-vmstat.nr_slab_reclaimable
12229 ± 2% -6.4% 11442 proc-vmstat.nr_zone_active_file
4889 +2.5% 5012 proc-vmstat.nr_zone_inactive_anon
11154711 -6.1% 10477316 ± 2% proc-vmstat.nr_zone_inactive_file
24681 -4.5% 23565 ± 3% proc-vmstat.nr_zone_write_pending
52443077 -6.2% 49180517 ± 2% proc-vmstat.numa_hit
52430086 -6.2% 49167546 ± 2% proc-vmstat.numa_local
889.25 ± 60% +319.3% 3729 ± 77% proc-vmstat.numa_pages_migrated
12167 +4.1% 12664 proc-vmstat.pgactivate
61727833 -5.3% 58460228 ± 2% proc-vmstat.pgalloc_normal
276663 -13.9% 238182 ± 2% proc-vmstat.pgfault
40465122 -9.6% 36566331 ± 2% proc-vmstat.pgfree
889.25 ± 60% +319.3% 3729 ± 77% proc-vmstat.pgmigrate_success
2.778e+09 ± 10% +36.7% 3.799e+09 ± 11% perf-stat.i.branch-instructions
41.84 +1.7 43.56 perf-stat.i.cache-miss-rate%
64127695 ± 7% +31.9% 84596516 ± 6% perf-stat.i.cache-misses
1.491e+08 ± 6% +26.9% 1.892e+08 ± 5% perf-stat.i.cache-references
88572 ± 5% +29.7% 114918 ± 4% perf-stat.i.context-switches
1.933e+10 ± 9% +41.7% 2.738e+10 ± 8% perf-stat.i.cpu-cycles
790.46 ± 3% +13.2% 894.61 ± 3% perf-stat.i.cpu-migrations
0.38 ± 30% -0.2 0.23 ± 10% perf-stat.i.dTLB-load-miss-rate%
3.797e+09 ± 10% +43.7% 5.456e+09 ± 10% perf-stat.i.dTLB-loads
0.08 ± 17% -0.0 0.06 ± 10% perf-stat.i.dTLB-store-miss-rate%
1.492e+09 ± 9% +22.2% 1.823e+09 ± 7% perf-stat.i.dTLB-stores
74.27 -2.1 72.21 perf-stat.i.iTLB-load-miss-rate%
947221 ± 3% +6.5% 1008729 ± 3% perf-stat.i.iTLB-load-misses
334745 ± 4% +17.0% 391635 ± 2% perf-stat.i.iTLB-loads
1.335e+10 ± 10% +39.4% 1.862e+10 ± 11% perf-stat.i.instructions
13950 ± 11% +32.6% 18505 ± 8% perf-stat.i.instructions-per-iTLB-miss
4835 ± 4% +18.0% 5708 ± 5% perf-stat.i.minor-faults
15925667 ± 7% +35.4% 21560041 ± 7% perf-stat.i.node-load-misses
28992349 ± 8% +33.0% 38560978 ± 6% perf-stat.i.node-loads
18.98 ± 2% -0.7 18.26 ± 2% perf-stat.i.node-store-miss-rate%
9102504 ± 5% +33.2% 12125619 ± 4% perf-stat.i.node-store-misses
41079385 ± 6% +37.5% 56484591 ± 6% perf-stat.i.node-stores
4835 ± 4% +18.0% 5708 ± 5% perf-stat.i.page-faults
43.01 +1.7 44.70 perf-stat.overall.cache-miss-rate%
0.35 ± 30% -0.1 0.21 ± 11% perf-stat.overall.dTLB-load-miss-rate%
0.08 ± 18% -0.0 0.06 ± 10% perf-stat.overall.dTLB-store-miss-rate%
73.89 -1.9 72.03 perf-stat.overall.iTLB-load-miss-rate%
14118 ± 10% +30.5% 18419 ± 9% perf-stat.overall.instructions-per-iTLB-miss
35.46 +0.4 35.85 perf-stat.overall.node-load-miss-rate%
2.726e+09 ± 10% +35.8% 3.702e+09 ± 11% perf-stat.ps.branch-instructions
62934901 ± 6% +31.0% 82433820 ± 6% perf-stat.ps.cache-misses
1.463e+08 ± 6% +26.0% 1.844e+08 ± 5% perf-stat.ps.cache-references
86924 ± 5% +28.8% 111983 ± 4% perf-stat.ps.context-switches
1.897e+10 ± 9% +40.7% 2.668e+10 ± 8% perf-stat.ps.cpu-cycles
775.77 ± 3% +12.4% 871.92 ± 3% perf-stat.ps.cpu-migrations
3.726e+09 ± 10% +42.7% 5.316e+09 ± 10% perf-stat.ps.dTLB-loads
1.465e+09 ± 8% +21.3% 1.777e+09 ± 7% perf-stat.ps.dTLB-stores
929736 ± 3% +5.8% 983266 ± 3% perf-stat.ps.iTLB-load-misses
328543 ± 4% +16.2% 381691 ± 2% perf-stat.ps.iTLB-loads
1.31e+10 ± 10% +38.4% 1.814e+10 ± 11% perf-stat.ps.instructions
4748 ± 4% +17.4% 5574 ± 5% perf-stat.ps.minor-faults
15629024 ± 7% +34.4% 21008888 ± 7% perf-stat.ps.node-load-misses
28452374 ± 7% +32.1% 37575683 ± 6% perf-stat.ps.node-loads
8933444 ± 5% +32.3% 11815604 ± 4% perf-stat.ps.node-store-misses
40315303 ± 6% +36.5% 55038999 ± 6% perf-stat.ps.node-stores
4748 ± 4% +17.4% 5574 ± 5% perf-stat.ps.page-faults
11922 ± 5% -14.0% 10258 ± 6% softirqs.CPU0.NET_RX
12832 ± 6% -21.0% 10141 ± 10% softirqs.CPU0.SCHED
28642 ± 4% -20.2% 22864 ± 6% softirqs.CPU0.TIMER
28480 ± 9% -27.0% 20791 ± 14% softirqs.CPU1.TIMER
28618 ± 9% -25.7% 21249 ± 6% softirqs.CPU10.TIMER
29227 ± 8% -33.8% 19363 ± 11% softirqs.CPU11.TIMER
30448 ± 7% -27.6% 22057 ± 4% softirqs.CPU12.TIMER
29356 ± 5% -28.4% 21005 ± 12% softirqs.CPU13.TIMER
28044 -23.2% 21547 ± 5% softirqs.CPU14.TIMER
28980 ± 5% -32.2% 19660 ± 10% softirqs.CPU15.TIMER
12148 ± 8% -21.6% 9518 ± 17% softirqs.CPU16.NET_RX
28193 ± 2% -21.8% 22045 ± 8% softirqs.CPU16.TIMER
29107 ± 6% -32.7% 19580 ± 9% softirqs.CPU17.TIMER
28146 ± 2% -26.3% 20740 ± 3% softirqs.CPU18.TIMER
28689 ± 7% -32.1% 19466 ± 10% softirqs.CPU19.TIMER
9565 ± 7% -20.8% 7573 ± 15% softirqs.CPU21.SCHED
30341 ± 9% -29.0% 21554 ± 16% softirqs.CPU21.TIMER
31369 ± 27% -33.8% 20780 ± 8% softirqs.CPU22.TIMER
9824 ± 3% -22.7% 7591 ± 13% softirqs.CPU23.SCHED
30822 ± 5% -30.6% 21378 ± 7% softirqs.CPU23.TIMER
25494 ± 9% -19.5% 20532 ± 2% softirqs.CPU24.TIMER
28887 ± 6% -31.2% 19881 ± 12% softirqs.CPU25.TIMER
26978 ± 3% -21.9% 21059 ± 4% softirqs.CPU26.TIMER
30354 ± 2% -34.6% 19848 ± 11% softirqs.CPU27.TIMER
32156 ± 22% -33.7% 21305 ± 4% softirqs.CPU28.TIMER
29059 ± 6% -31.8% 19813 ± 13% softirqs.CPU29.TIMER
9430 ± 14% -28.9% 6705 ± 10% softirqs.CPU3.SCHED
30092 ± 12% -27.2% 21911 ± 16% softirqs.CPU3.TIMER
26556 -22.5% 20576 ± 7% softirqs.CPU30.TIMER
27797 -24.7% 20945 ± 6% softirqs.CPU32.TIMER
29077 ± 4% -32.9% 19516 ± 16% softirqs.CPU33.TIMER
30991 ± 15% -28.2% 22254 ± 9% softirqs.CPU34.TIMER
29571 ± 6% -30.6% 20514 ± 9% softirqs.CPU35.TIMER
11611 ± 8% -24.9% 8717 ± 23% softirqs.CPU36.NET_RX
28044 ± 3% -26.4% 20642 ± 6% softirqs.CPU36.TIMER
29065 ± 6% -33.4% 19366 ± 10% softirqs.CPU37.TIMER
28321 -26.0% 20949 ± 5% softirqs.CPU38.TIMER
29504 ± 5% -33.5% 19621 ± 11% softirqs.CPU39.TIMER
6175 ± 44% -48.0% 3209 ± 15% softirqs.CPU4.RCU
29785 ± 15% -29.0% 21137 ± 6% softirqs.CPU4.TIMER
28961 ± 8% -31.7% 19778 ± 9% softirqs.CPU5.TIMER
11523 ± 14% -20.1% 9211 ± 25% softirqs.CPU6.NET_RX
27481 ± 3% -19.5% 22133 ± 14% softirqs.CPU6.TIMER
28297 ± 12% -30.8% 19574 ± 12% softirqs.CPU7.TIMER
26016 ± 9% -17.7% 21401 ± 6% softirqs.CPU8.TIMER
29290 ± 6% -32.2% 19870 ± 9% softirqs.CPU9.TIMER
190170 ± 2% -31.8% 129696 ± 7% softirqs.RCU
354491 ± 2% -23.5% 271220 ± 5% softirqs.SCHED
1152378 ± 2% -27.2% 838929 ± 5% softirqs.TIMER
78.75 ± 61% -65.7% 27.00 ± 34% interrupts.41:IR-PCI-MSI.524291-edge.eth0-TxRx-2
101.75 ± 79% -62.9% 37.75 ± 46% interrupts.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
32913 ± 21% -25.9% 24387 ± 3% interrupts.CAL:Function_call_interrupts
819.00 ± 16% -24.9% 615.25 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
109511 ± 3% -26.3% 80758 ± 5% interrupts.CPU0.LOC:Local_timer_interrupts
849.00 ± 22% -33.8% 561.75 ± 17% interrupts.CPU1.CAL:Function_call_interrupts
109108 ± 3% -26.4% 80310 ± 5% interrupts.CPU1.LOC:Local_timer_interrupts
823.25 ± 26% -31.1% 567.50 ± 9% interrupts.CPU10.CAL:Function_call_interrupts
109288 ± 3% -26.3% 80499 ± 5% interrupts.CPU10.LOC:Local_timer_interrupts
838.00 ± 19% -25.3% 626.00 ± 5% interrupts.CPU11.CAL:Function_call_interrupts
108888 ± 3% -26.1% 80479 ± 5% interrupts.CPU11.LOC:Local_timer_interrupts
834.75 ± 23% -31.6% 571.25 ± 13% interrupts.CPU12.CAL:Function_call_interrupts
109246 ± 3% -26.4% 80390 ± 6% interrupts.CPU12.LOC:Local_timer_interrupts
827.50 ± 20% -25.6% 615.25 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
108792 ± 3% -25.7% 80850 ± 5% interrupts.CPU13.LOC:Local_timer_interrupts
823.75 ± 17% -25.7% 612.25 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
109252 ± 3% -26.2% 80662 ± 5% interrupts.CPU14.LOC:Local_timer_interrupts
855.25 ± 22% -28.6% 611.00 ± 4% interrupts.CPU15.CAL:Function_call_interrupts
108574 ± 3% -25.8% 80554 ± 5% interrupts.CPU15.LOC:Local_timer_interrupts
804.25 ± 21% -21.2% 634.00 ± 2% interrupts.CPU16.CAL:Function_call_interrupts
109145 ± 3% -26.3% 80399 ± 5% interrupts.CPU16.LOC:Local_timer_interrupts
842.75 ± 22% -28.8% 599.75 ± 4% interrupts.CPU17.CAL:Function_call_interrupts
108917 ± 3% -26.1% 80509 ± 5% interrupts.CPU17.LOC:Local_timer_interrupts
1646 ± 9% -14.7% 1405 ± 12% interrupts.CPU17.RES:Rescheduling_interrupts
837.50 ± 19% -25.6% 623.00 ± 5% interrupts.CPU18.CAL:Function_call_interrupts
109113 ± 3% -26.3% 80442 ± 5% interrupts.CPU18.LOC:Local_timer_interrupts
108509 ± 3% -25.7% 80601 ± 5% interrupts.CPU19.LOC:Local_timer_interrupts
855.00 ± 20% -34.4% 561.25 ± 11% interrupts.CPU2.CAL:Function_call_interrupts
109036 ± 2% -26.1% 80569 ± 5% interrupts.CPU2.LOC:Local_timer_interrupts
837.50 ± 22% -29.0% 595.00 ± 7% interrupts.CPU20.CAL:Function_call_interrupts
109262 ± 3% -26.5% 80294 ± 5% interrupts.CPU20.LOC:Local_timer_interrupts
829.25 ± 20% -26.1% 612.75 ± 5% interrupts.CPU21.CAL:Function_call_interrupts
109035 ± 3% -26.0% 80730 ± 5% interrupts.CPU21.LOC:Local_timer_interrupts
810.00 ± 21% -26.3% 597.25 ± 7% interrupts.CPU22.CAL:Function_call_interrupts
109321 ± 3% -26.1% 80768 ± 5% interrupts.CPU22.LOC:Local_timer_interrupts
108875 ± 3% -26.1% 80425 ± 5% interrupts.CPU23.LOC:Local_timer_interrupts
832.75 ± 24% -26.7% 610.25 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
109124 ± 3% -26.3% 80413 ± 5% interrupts.CPU24.LOC:Local_timer_interrupts
801.25 ± 27% -30.0% 561.25 ± 11% interrupts.CPU25.CAL:Function_call_interrupts
108898 ± 3% -26.0% 80613 ± 5% interrupts.CPU25.LOC:Local_timer_interrupts
814.00 ± 22% -25.0% 610.75 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
109118 ± 3% -26.0% 80724 ± 5% interrupts.CPU26.LOC:Local_timer_interrupts
835.50 ± 23% -26.8% 611.50 ± 5% interrupts.CPU27.CAL:Function_call_interrupts
109021 ± 3% -26.5% 80164 ± 5% interrupts.CPU27.LOC:Local_timer_interrupts
823.75 ± 21% -27.7% 595.75 ± 11% interrupts.CPU28.CAL:Function_call_interrupts
109269 ± 3% -26.3% 80548 ± 5% interrupts.CPU28.LOC:Local_timer_interrupts
834.75 ± 24% -26.8% 611.25 interrupts.CPU29.CAL:Function_call_interrupts
109006 ± 3% -26.0% 80616 ± 5% interrupts.CPU29.LOC:Local_timer_interrupts
814.50 ± 22% -22.2% 634.00 ± 3% interrupts.CPU3.CAL:Function_call_interrupts
108844 ± 3% -25.8% 80709 ± 5% interrupts.CPU3.LOC:Local_timer_interrupts
812.50 ± 21% -24.7% 611.50 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
109104 ± 3% -26.2% 80505 ± 5% interrupts.CPU30.LOC:Local_timer_interrupts
108839 ± 3% -26.0% 80517 ± 6% interrupts.CPU31.LOC:Local_timer_interrupts
820.75 ± 19% -23.4% 629.00 ± 3% interrupts.CPU32.CAL:Function_call_interrupts
109221 ± 3% -26.2% 80574 ± 5% interrupts.CPU32.LOC:Local_timer_interrupts
835.75 ± 23% -28.4% 598.25 ± 2% interrupts.CPU33.CAL:Function_call_interrupts
108824 ± 3% -25.8% 80700 ± 5% interrupts.CPU33.LOC:Local_timer_interrupts
78.75 ± 61% -65.7% 27.00 ± 34% interrupts.CPU34.41:IR-PCI-MSI.524291-edge.eth0-TxRx-2
799.50 ± 20% -20.3% 637.00 ± 3% interrupts.CPU34.CAL:Function_call_interrupts
109076 ± 3% -26.2% 80507 ± 5% interrupts.CPU34.LOC:Local_timer_interrupts
222.50 ±126% +1043.6% 2544 ± 20% interrupts.CPU34.NMI:Non-maskable_interrupts
222.50 ±126% +1043.6% 2544 ± 20% interrupts.CPU34.PMI:Performance_monitoring_interrupts
108586 ± 2% -25.8% 80603 ± 5% interrupts.CPU35.LOC:Local_timer_interrupts
855.75 ± 21% -28.7% 610.00 ± 6% interrupts.CPU36.CAL:Function_call_interrupts
109017 ± 2% -26.0% 80642 ± 5% interrupts.CPU36.LOC:Local_timer_interrupts
108547 ± 3% -26.0% 80350 ± 5% interrupts.CPU37.LOC:Local_timer_interrupts
808.75 ± 21% -21.4% 635.75 ± 4% interrupts.CPU38.CAL:Function_call_interrupts
109251 ± 3% -26.5% 80279 ± 5% interrupts.CPU38.LOC:Local_timer_interrupts
279.00 ± 99% +535.5% 1773 ± 37% interrupts.CPU38.NMI:Non-maskable_interrupts
279.00 ± 99% +535.5% 1773 ± 37% interrupts.CPU38.PMI:Performance_monitoring_interrupts
852.00 ± 23% -30.2% 595.00 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
108812 ± 3% -26.0% 80477 ± 5% interrupts.CPU39.LOC:Local_timer_interrupts
816.00 ± 16% -23.3% 625.75 ± 4% interrupts.CPU4.CAL:Function_call_interrupts
109158 ± 3% -26.3% 80429 ± 5% interrupts.CPU4.LOC:Local_timer_interrupts
828.50 ± 20% -24.0% 629.25 ± 4% interrupts.CPU5.CAL:Function_call_interrupts
108945 ± 3% -25.9% 80681 ± 6% interrupts.CPU5.LOC:Local_timer_interrupts
101.75 ± 79% -62.9% 37.75 ± 46% interrupts.CPU6.46:IR-PCI-MSI.524296-edge.eth0-TxRx-7
840.25 ± 23% -25.1% 629.50 ± 4% interrupts.CPU6.CAL:Function_call_interrupts
109284 ± 3% -26.2% 80632 ± 6% interrupts.CPU6.LOC:Local_timer_interrupts
108899 ± 3% -26.0% 80571 ± 5% interrupts.CPU7.LOC:Local_timer_interrupts
825.50 ± 15% -26.5% 606.50 ± 6% interrupts.CPU8.CAL:Function_call_interrupts
109115 ± 3% -26.4% 80325 ± 6% interrupts.CPU8.LOC:Local_timer_interrupts
108953 ± 3% -26.0% 80671 ± 6% interrupts.CPU9.LOC:Local_timer_interrupts
1037 ± 61% +88.9% 1959 ± 26% interrupts.CPU9.NMI:Non-maskable_interrupts
1037 ± 61% +88.9% 1959 ± 26% interrupts.CPU9.PMI:Performance_monitoring_interrupts
4360800 ± 3% -26.1% 3221504 ± 5% interrupts.LOC:Local_timer_interrupts
75599 ± 2% -10.9% 67395 ± 2% interrupts.RES:Rescheduling_interrupts
252.50 ± 13% -19.4% 203.50 ± 6% interrupts.TLB:TLB_shootdowns
8.99 ± 7% -2.2 6.74 ± 13% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
3.39 ± 7% -0.6 2.80 ± 10% perf-profile.calltrace.cycles-pp.svc_recv.nfsd.kthread.ret_from_fork
2.62 ± 5% -0.4 2.18 ± 11% perf-profile.calltrace.cycles-pp.svc_tcp_recvfrom.svc_recv.nfsd.kthread.ret_from_fork
1.58 ± 10% -0.4 1.23 ± 14% perf-profile.calltrace.cycles-pp.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.__write_data_page
1.35 ± 12% -0.3 1.01 ± 17% perf-profile.calltrace.cycles-pp.brd_make_request.generic_make_request.submit_bio.__submit_merged_bio.f2fs_submit_page_write
1.35 ± 12% -0.3 1.01 ± 17% perf-profile.calltrace.cycles-pp.__submit_merged_bio.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page
1.35 ± 12% -0.3 1.01 ± 17% perf-profile.calltrace.cycles-pp.submit_bio.__submit_merged_bio.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data
1.35 ± 12% -0.3 1.01 ± 17% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.__submit_merged_bio.f2fs_submit_page_write.do_write_page
0.95 ± 7% -0.3 0.62 ± 9% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.10 ± 4% -0.3 1.78 ± 13% perf-profile.calltrace.cycles-pp.svc_recvfrom.svc_tcp_recvfrom.svc_recv.nfsd.kthread
2.08 ± 4% -0.3 1.76 ± 13% perf-profile.calltrace.cycles-pp.inet6_recvmsg.svc_recvfrom.svc_tcp_recvfrom.svc_recv.nfsd
2.07 ± 4% -0.3 1.76 ± 13% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet6_recvmsg.svc_recvfrom.svc_tcp_recvfrom.svc_recv
2.66 ± 4% -0.3 2.39 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
2.66 ± 4% -0.3 2.39 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.70 ± 4% -0.3 2.44 ± 2% perf-profile.calltrace.cycles-pp.close
2.60 ± 4% -0.3 2.34 ± 2% perf-profile.calltrace.cycles-pp.nfs_wb_all.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.61 ± 4% -0.3 2.34 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.61 ± 4% -0.3 2.34 ± 2% perf-profile.calltrace.cycles-pp.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
2.30 ± 5% -0.2 2.06 ± 3% perf-profile.calltrace.cycles-pp.filemap_write_and_wait.nfs_wb_all.filp_close.__x64_sys_close.do_syscall_64
1.58 ± 3% -0.2 1.40 ± 4% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.filemap_write_and_wait.nfs_wb_all.filp_close.__x64_sys_close
1.58 ± 3% -0.2 1.40 ± 4% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait.nfs_wb_all.filp_close
1.58 ± 3% -0.2 1.40 ± 4% perf-profile.calltrace.cycles-pp.nfs_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait.nfs_wb_all
1.56 ± 3% -0.2 1.38 ± 4% perf-profile.calltrace.cycles-pp.write_cache_pages.nfs_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait
1.17 ± 2% -0.2 1.01 ± 5% perf-profile.calltrace.cycles-pp.nfs_writepages_callback.write_cache_pages.nfs_writepages.do_writepages.__filemap_fdatawrite_range
1.13 ± 2% -0.2 0.97 ± 6% perf-profile.calltrace.cycles-pp.nfs_do_writepage.nfs_writepages_callback.write_cache_pages.nfs_writepages.do_writepages
2.59 ± 5% -0.1 2.45 perf-profile.calltrace.cycles-pp.rpc_free_task.rpc_async_release.process_one_work.worker_thread.kthread
2.59 ± 5% -0.1 2.45 perf-profile.calltrace.cycles-pp.rpc_async_release.process_one_work.worker_thread.kthread.ret_from_fork
15.36 ± 11% +8.7 24.06 ± 19% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
9.17 ± 7% -2.3 6.87 ± 13% perf-profile.children.cycles-pp.mutex_spin_on_owner
3.27 ± 6% -0.8 2.45 ± 19% perf-profile.children.cycles-pp.apic_timer_interrupt
3.03 ± 5% -0.8 2.26 ± 20% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
3.40 ± 7% -0.6 2.80 ± 10% perf-profile.children.cycles-pp.svc_recv
8.83 ± 3% -0.6 8.25 ± 3% perf-profile.children.cycles-pp.do_syscall_64
8.83 ± 3% -0.6 8.26 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.62 ± 5% -0.4 2.18 ± 11% perf-profile.children.cycles-pp.svc_tcp_recvfrom
1.84 ± 7% -0.4 1.42 ± 15% perf-profile.children.cycles-pp.hrtimer_interrupt
1.59 ± 10% -0.4 1.24 ± 14% perf-profile.children.cycles-pp.f2fs_submit_page_write
2.23 ± 4% -0.3 1.89 ± 12% perf-profile.children.cycles-pp.tcp_recvmsg
2.23 ± 4% -0.3 1.89 ± 12% perf-profile.children.cycles-pp.inet6_recvmsg
0.98 ± 6% -0.3 0.65 ± 9% perf-profile.children.cycles-pp.menu_select
2.10 ± 4% -0.3 1.78 ± 13% perf-profile.children.cycles-pp.svc_recvfrom
2.60 ± 4% -0.3 2.34 ± 2% perf-profile.children.cycles-pp.nfs_wb_all
2.61 ± 4% -0.3 2.34 ± 2% perf-profile.children.cycles-pp.__x64_sys_close
2.61 ± 4% -0.3 2.34 ± 2% perf-profile.children.cycles-pp.filp_close
2.70 ± 4% -0.3 2.44 ± 2% perf-profile.children.cycles-pp.close
2.30 ± 5% -0.2 2.06 ± 3% perf-profile.children.cycles-pp.filemap_write_and_wait
1.58 ± 3% -0.2 1.40 ± 4% perf-profile.children.cycles-pp.nfs_writepages
1.56 ± 3% -0.2 1.39 ± 4% perf-profile.children.cycles-pp.write_cache_pages
0.33 ± 3% -0.2 0.16 ± 20% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.14 ± 2% -0.2 0.97 ± 6% perf-profile.children.cycles-pp.nfs_do_writepage
1.17 ± 2% -0.2 1.01 ± 5% perf-profile.children.cycles-pp.nfs_writepages_callback
1.24 ± 3% -0.2 1.08 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
2.59 ± 5% -0.1 2.45 perf-profile.children.cycles-pp.rpc_async_release
1.21 ± 3% -0.1 1.08 ± 4% perf-profile.children.cycles-pp.try_to_wake_up
0.91 ± 9% -0.1 0.78 ± 6% perf-profile.children.cycles-pp.__tcp_transmit_skb
0.24 ± 2% -0.1 0.12 ± 28% perf-profile.children.cycles-pp.tick_nohz_next_event
0.89 ± 2% -0.1 0.77 ± 7% perf-profile.children.cycles-pp.ttwu_do_activate
0.89 ± 2% -0.1 0.77 ± 7% perf-profile.children.cycles-pp.activate_task
0.83 ± 2% -0.1 0.72 ± 6% perf-profile.children.cycles-pp.enqueue_entity
0.88 ± 2% -0.1 0.77 ± 6% perf-profile.children.cycles-pp.enqueue_task_fair
0.81 ± 11% -0.1 0.70 ± 5% perf-profile.children.cycles-pp.inet6_csk_xmit
0.76 ± 10% -0.1 0.66 ± 4% perf-profile.children.cycles-pp.ip6_xmit
0.60 ± 3% -0.1 0.50 ± 9% perf-profile.children.cycles-pp.__account_scheduler_latency
0.38 ± 11% -0.1 0.28 ± 8% perf-profile.children.cycles-pp.free_unref_page
0.71 ± 10% -0.1 0.62 ± 4% perf-profile.children.cycles-pp.ip6_output
0.52 ± 3% -0.1 0.43 ± 10% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.23 ± 12% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.irq_enter
0.48 ± 17% -0.1 0.39 ± 2% perf-profile.children.cycles-pp.process_backlog
0.64 ± 12% -0.1 0.56 ± 3% perf-profile.children.cycles-pp.ip6_finish_output2
0.51 ± 15% -0.1 0.43 ± 2% perf-profile.children.cycles-pp.do_softirq_own_stack
0.47 ± 4% -0.1 0.39 ± 11% perf-profile.children.cycles-pp.arch_stack_walk
0.41 ± 7% -0.1 0.33 ± 15% perf-profile.children.cycles-pp.f2fs_write_end_io
0.50 ± 16% -0.1 0.42 ± 2% perf-profile.children.cycles-pp.net_rx_action
0.12 ± 16% -0.1 0.05 ± 61% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.26 ± 8% -0.1 0.19 ± 9% perf-profile.children.cycles-pp.free_pcppages_bulk
0.21 ± 20% -0.1 0.14 ± 13% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.34 ± 5% -0.1 0.27 ± 4% perf-profile.children.cycles-pp.schedule_idle
0.15 ± 7% -0.1 0.08 ± 13% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.26 ± 13% -0.1 0.20 ± 18% perf-profile.children.cycles-pp.find_busiest_group
0.17 ± 17% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.tick_irq_enter
0.30 ± 7% -0.1 0.24 ± 15% perf-profile.children.cycles-pp.unwind_next_frame
0.11 ± 24% -0.1 0.05 ± 59% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.09 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.timerqueue_del
0.46 ± 5% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.__nfs_commit_inode
0.21 ± 15% -0.1 0.16 ± 8% perf-profile.children.cycles-pp.native_write_msr
0.39 ± 5% -0.0 0.34 ± 6% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.08 ± 14% -0.0 0.03 ±102% perf-profile.children.cycles-pp.timerqueue_add
0.11 ± 4% -0.0 0.07 ± 25% perf-profile.children.cycles-pp.__remove_hrtimer
0.30 ± 6% -0.0 0.26 ± 5% perf-profile.children.cycles-pp.nfs_scan_commit
0.15 ± 10% -0.0 0.10 ± 14% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.16 ± 2% -0.0 0.12 ± 15% perf-profile.children.cycles-pp.release_pages
0.07 ± 11% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.idle_cpu
0.10 ± 7% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.lock_sock_nested
0.07 ± 12% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.rcu_idle_exit
0.15 ± 5% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.nfs_scan_commit_list
0.10 ± 8% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.svc_xprt_do_enqueue
0.12 ± 3% -0.0 0.10 ± 11% perf-profile.children.cycles-pp.__xa_set_mark
0.09 ± 8% -0.0 0.07 ± 21% perf-profile.children.cycles-pp.__free_pages_ok
0.10 ± 12% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__pagevec_release
15.40 ± 11% +8.7 24.08 ± 19% perf-profile.children.cycles-pp.osq_lock
9.12 ± 7% -2.3 6.83 ± 13% perf-profile.self.cycles-pp.mutex_spin_on_owner
1.05 ± 4% -0.1 0.92 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.43 ± 4% -0.1 0.34 ± 11% perf-profile.self.cycles-pp.nfs_do_writepage
0.09 ± 8% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.21 ± 15% -0.1 0.15 ± 9% perf-profile.self.cycles-pp.native_write_msr
0.08 ± 10% -0.1 0.03 ±102% perf-profile.self.cycles-pp.tick_nohz_next_event
0.07 ± 11% -0.0 0.03 ±100% perf-profile.self.cycles-pp.idle_cpu
0.14 ± 7% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.free_pcppages_bulk
0.14 ± 9% -0.0 0.10 ± 12% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.15 ± 7% -0.0 0.11 ± 11% perf-profile.self.cycles-pp.do_idle
0.14 ± 8% -0.0 0.10 ± 18% perf-profile.self.cycles-pp.release_pages
0.11 ± 11% -0.0 0.08 ± 13% perf-profile.self.cycles-pp.xas_start
0.11 ± 7% -0.0 0.08 ± 10% perf-profile.self.cycles-pp._raw_spin_lock_irq
15.23 ± 11% +8.7 23.89 ± 19% perf-profile.self.cycles-pp.osq_lock
fsmark.files_per_sec
340 +-+--------------O----------------------------------------------------+
O OO O OO O O |
320 +-+ O O O O O O O |
300 +-+ O O O O O O |
| O O |
280 +-+ O O |
| |
260 +-+ O |
| |
240 +-+ |
220 +-+ .+ +. .+ |
|. +.+.+. +.+ :+ + .+.++. +.+. .++.+.++. .++. .++. +. .+ : |
200 +-+ + + +.+ +.+ + +.+ + +.+ + +.|
| |
180 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-ivb-ep01: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/nfsv4/f2fs/1x/x86_64-rhel-7.6/64t/debian-x86_64-2019-05-14.cgz/fsyncBeforeClose/lkp-ivb-ep01/40G/fsmark
commit:
48cd7b5125 ("nfsd: hook up nfsd_read to the nfsd_file cache")
5920afa3c8 ("nfsd: hook nfsd_commit up to the nfsd_file cache")
48cd7b51258c1a15 5920afa3c85ff38642f652b6e38
---------------- ---------------------------
       fail:runs          %reproduction     fail:runs
           |                   |                |
          :4                  50%              2:4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
         %stddev           %change          %stddev
             \                 |                \
32566044 ± 8% -36.0% 20845104 ± 14% fsmark.app_overhead
208.05 ± 3% +37.8% 286.65 ± 5% fsmark.files_per_sec
53.62 ± 2% -26.3% 39.50 ± 4% fsmark.time.elapsed_time
53.62 ± 2% -26.3% 39.50 ± 4% fsmark.time.elapsed_time.max
73.28 ± 2% -27.3% 53.24 ± 5% fsmark.time.system_time
0.19 ± 9% -0.0 0.18 ± 8% mpstat.cpu.all.soft%
1.66 ± 5% +0.4 2.07 ± 4% mpstat.cpu.all.usr%
2.00 -50.0% 1.00 nfsstat.Client.nfs.v3.commit.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v3.write.percent
71.88 -2.7% 69.96 iostat.cpu.idle
18.29 ± 2% +9.4% 20.01 iostat.cpu.system
1.66 ± 4% +24.4% 2.06 ± 4% iostat.cpu.user
57280 ± 3% -21.3% 45065 ± 4% meminfo.AnonHugePages
26928 ± 3% -8.4% 24672 ± 6% meminfo.Percpu
2287477 ± 2% +32.9% 3040305 ± 3% meminfo.max_used_kB
71.50 -2.8% 69.50 vmstat.cpu.id
731170 +33.9% 978682 ± 3% vmstat.io.bo
79833 +31.4% 104920 ± 2% vmstat.system.cs
1.279e+08 ± 58% -69.4% 39118739 ± 4% cpuidle.C1.time
1922338 ± 36% -59.2% 783700 ± 3% cpuidle.C1.usage
420316 ± 30% +67.3% 703270 ± 23% cpuidle.C1E.usage
1.256e+09 ± 14% -38.2% 7.76e+08 ± 4% cpuidle.C6.time
2410954 ± 27% -49.6% 1215183 ± 10% cpuidle.POLL.time
442.25 ± 2% +49.6% 661.75 ± 3% turbostat.Avg_MHz
1916 +38.7% 2657 ± 3% turbostat.Bzy_MHz
1917918 ± 36% -59.4% 779427 ± 4% turbostat.C1
420063 ± 30% +67.4% 703044 ± 23% turbostat.C1E
4.50 ± 39% +4.9 9.43 ± 10% turbostat.C1E%
61.51 ± 3% +52.8% 94.00 ± 5% turbostat.CorWatt
4360818 ± 7% -20.6% 3463033 ± 3% turbostat.IRQ
88.89 ± 2% +37.1% 121.88 ± 4% turbostat.PkgWatt
39.29 +7.0% 42.03 turbostat.RAMWatt
4451 -25.8% 3301 ± 4% turbostat.SMI
274.22 ± 76% -64.4% 97.50 ± 23% sched_debug.cfs_rq:/.load_avg.avg
5893 ±132% -82.8% 1013 ± 5% sched_debug.cfs_rq:/.load_avg.max
1014 ±114% -76.5% 238.07 ± 18% sched_debug.cfs_rq:/.load_avg.stddev
75.84 ± 23% -66.4% 25.49 ±100% sched_debug.cfs_rq:/.removed.load_avg.avg
264.26 ± 10% -58.0% 111.01 ±100% sched_debug.cfs_rq:/.removed.load_avg.stddev
3490 ± 23% -74.8% 880.52 ±110% sched_debug.cfs_rq:/.removed.runnable_sum.avg
12157 ± 11% -63.9% 4389 ±102% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
380.90 ± 2% -18.4% 310.89 ± 7% sched_debug.cfs_rq:/.util_avg.stddev
66.09 ± 19% -29.8% 46.41 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.avg
439385 ± 5% +16.7% 512977 ± 9% sched_debug.cpu.avg_idle.avg
462.00 ± 16% +51.3% 699.00 ± 14% sched_debug.cpu.nr_switches.min
2040 +98.3% 4046 ± 8% slabinfo.Acpi-State.active_objs
2040 +98.3% 4046 ± 8% slabinfo.Acpi-State.num_objs
2272 ± 4% +114.2% 4865 slabinfo.avc_xperms_data.active_objs
2272 ± 4% +114.2% 4865 slabinfo.avc_xperms_data.num_objs
799.75 ± 10% +193.0% 2343 ± 6% slabinfo.blkdev_ioc.active_objs
799.75 ± 10% +203.8% 2429 ± 6% slabinfo.blkdev_ioc.num_objs
13275 +34.8% 17900 ± 4% slabinfo.cred_jar.active_objs
315.50 +35.4% 427.25 ± 4% slabinfo.cred_jar.active_slabs
13275 +35.4% 17970 ± 4% slabinfo.cred_jar.num_objs
315.50 +35.4% 427.25 ± 4% slabinfo.cred_jar.num_slabs
22985 ± 5% +15.6% 26568 ± 3% slabinfo.kmalloc-32.active_objs
22994 ± 5% +15.6% 26585 ± 3% slabinfo.kmalloc-32.num_objs
940.00 ± 2% +11.9% 1052 ± 4% slabinfo.kmem_cache_node.active_objs
976.00 ± 2% +11.5% 1088 ± 4% slabinfo.kmem_cache_node.num_objs
81089 -1.3% 80063 proc-vmstat.nr_active_anon
12107 -3.3% 11709 proc-vmstat.nr_active_file
11262319 -4.0% 10808450 proc-vmstat.nr_file_pages
4912 +1.8% 5002 proc-vmstat.nr_inactive_anon
10976339 -4.1% 10522824 proc-vmstat.nr_inactive_file
6335 +2.2% 6477 proc-vmstat.nr_mapped
58243 -2.9% 56546 proc-vmstat.nr_slab_reclaimable
81089 -1.3% 80063 proc-vmstat.nr_zone_active_anon
12107 -3.3% 11709 proc-vmstat.nr_zone_active_file
4912 +1.8% 5002 proc-vmstat.nr_zone_inactive_anon
10976339 -4.1% 10522824 proc-vmstat.nr_zone_inactive_file
3592 ± 93% -98.7% 46.25 ± 33% proc-vmstat.numa_hint_faults
2330 ±120% -98.4% 37.75 ± 38% proc-vmstat.numa_hint_faults_local
53116 ± 28% -59.5% 21504 ± 36% proc-vmstat.numa_pte_updates
271448 ± 2% -14.9% 230971 proc-vmstat.pgfault
2.549e+09 ± 10% +52.7% 3.893e+09 ± 6% perf-stat.i.branch-instructions
61962321 ± 2% +34.3% 83210389 ± 3% perf-stat.i.cache-misses
1.439e+08 ± 3% +29.8% 1.868e+08 ± 3% perf-stat.i.cache-references
84903 +33.9% 113652 ± 3% perf-stat.i.context-switches
1.797e+10 ± 5% +53.6% 2.761e+10 ± 2% perf-stat.i.cpu-cycles
765.18 ± 3% +16.9% 894.18 ± 3% perf-stat.i.cpu-migrations
315.09 ± 8% +15.2% 363.12 ± 2% perf-stat.i.cycles-between-cache-misses
3.53e+09 ± 6% +54.2% 5.444e+09 ± 3% perf-stat.i.dTLB-loads
1.376e+09 ± 4% +33.1% 1.831e+09 ± 4% perf-stat.i.dTLB-stores
317840 ± 9% +18.9% 378002 perf-stat.i.iTLB-loads
1.223e+10 ± 10% +55.5% 1.902e+10 ± 6% perf-stat.i.instructions
13500 ± 13% +40.1% 18918 ± 7% perf-stat.i.instructions-per-iTLB-miss
4685 +15.0% 5389 ± 2% perf-stat.i.minor-faults
15554116 ± 2% +33.0% 20691477 ± 4% perf-stat.i.node-load-misses
28001682 +33.0% 37238555 ± 3% perf-stat.i.node-loads
8757073 ± 2% +36.0% 11906517 ± 4% perf-stat.i.node-store-misses
40582264 ± 2% +36.3% 55299010 ± 3% perf-stat.i.node-stores
4685 +15.0% 5389 ± 2% perf-stat.i.page-faults
43.07 +1.5 44.53 perf-stat.overall.cache-miss-rate%
290.22 ± 6% +14.5% 332.24 ± 4% perf-stat.overall.cycles-between-cache-misses
74.01 -1.4 72.56 perf-stat.overall.iTLB-load-miss-rate%
13633 ± 13% +39.7% 19044 ± 7% perf-stat.overall.instructions-per-iTLB-miss
2.503e+09 ± 10% +51.7% 3.796e+09 ± 6% perf-stat.ps.branch-instructions
60824316 ± 2% +33.4% 81124017 ± 3% perf-stat.ps.cache-misses
1.413e+08 ± 3% +28.9% 1.822e+08 ± 3% perf-stat.ps.cache-references
83343 +32.9% 110800 ± 2% perf-stat.ps.context-switches
1.764e+10 ± 5% +52.6% 2.692e+10 ± 2% perf-stat.ps.cpu-cycles
751.13 ± 3% +16.1% 871.98 ± 3% perf-stat.ps.cpu-migrations
3.465e+09 ± 6% +53.2% 5.308e+09 ± 3% perf-stat.ps.dTLB-loads
1.35e+09 ± 4% +32.2% 1.785e+09 ± 4% perf-stat.ps.dTLB-stores
312006 ± 9% +18.1% 368619 perf-stat.ps.iTLB-loads
1.201e+10 ± 10% +54.5% 1.855e+10 ± 6% perf-stat.ps.instructions
4599 +14.6% 5270 ± 2% perf-stat.ps.minor-faults
15268173 ± 2% +32.1% 20172672 ± 4% perf-stat.ps.node-load-misses
27486864 +32.1% 36306045 ± 3% perf-stat.ps.node-loads
8596268 ± 2% +35.0% 11607796 ± 4% perf-stat.ps.node-store-misses
39836857 ± 2% +35.3% 53910450 ± 3% perf-stat.ps.node-stores
4599 +14.6% 5270 ± 2% perf-stat.ps.page-faults
12731 ± 2% -15.6% 10749 ± 5% softirqs.CPU0.SCHED
28856 ± 7% -17.9% 23696 ± 8% softirqs.CPU0.TIMER
10032 ± 4% +9.6% 10999 ± 3% softirqs.CPU1.NET_RX
28495 ± 13% -26.4% 20976 ± 7% softirqs.CPU10.TIMER
27658 ± 5% -23.6% 21119 ± 2% softirqs.CPU11.TIMER
27597 ± 3% -25.5% 20548 ± 6% softirqs.CPU13.TIMER
28502 ± 5% -14.4% 24398 ± 13% softirqs.CPU14.TIMER
27819 ± 7% -22.8% 21480 ± 2% softirqs.CPU15.TIMER
27861 ± 5% -20.5% 22144 ± 7% softirqs.CPU16.TIMER
26971 ± 10% -22.0% 21034 ± 7% softirqs.CPU18.TIMER
28096 ± 2% -21.4% 22077 softirqs.CPU19.TIMER
8672 ± 17% -17.6% 7149 ± 8% softirqs.CPU2.SCHED
27667 ± 13% -23.3% 21215 ± 9% softirqs.CPU2.TIMER
27343 ± 6% -21.9% 21352 ± 4% softirqs.CPU20.TIMER
6011 ± 44% -48.7% 3082 ± 11% softirqs.CPU21.RCU
9612 ± 6% -26.3% 7083 ± 4% softirqs.CPU21.SCHED
33101 ± 4% -35.1% 21470 ± 8% softirqs.CPU21.TIMER
27674 ± 8% -22.1% 21553 ± 7% softirqs.CPU22.TIMER
30163 ± 11% -21.6% 23635 ± 8% softirqs.CPU23.TIMER
27904 ± 9% -24.9% 20963 ± 7% softirqs.CPU24.TIMER
28078 ± 4% -21.0% 22170 softirqs.CPU25.TIMER
27410 ± 7% -22.0% 21391 ± 8% softirqs.CPU26.TIMER
27845 ± 6% -22.1% 21705 softirqs.CPU27.TIMER
12132 ± 8% -20.6% 9633 ± 9% softirqs.CPU28.NET_RX
26827 ± 10% -22.3% 20857 ± 8% softirqs.CPU28.TIMER
27803 ± 6% -21.9% 21713 softirqs.CPU29.TIMER
28943 ± 4% -22.1% 22539 ± 3% softirqs.CPU3.TIMER
26918 ± 9% -24.7% 20259 ± 11% softirqs.CPU30.TIMER
27843 ± 7% -22.4% 21595 softirqs.CPU31.TIMER
5247 ± 53% -53.5% 2440 ± 11% softirqs.CPU32.RCU
26888 ± 9% -21.7% 21063 ± 9% softirqs.CPU34.TIMER
27591 ± 6% -20.3% 22003 ± 2% softirqs.CPU35.TIMER
32516 ± 22% -33.4% 21664 ± 7% softirqs.CPU36.TIMER
31822 ± 19% -30.3% 22178 softirqs.CPU37.TIMER
27913 ± 9% -22.6% 21592 ± 6% softirqs.CPU38.TIMER
8938 ± 9% +35.3% 12095 ± 5% softirqs.CPU39.NET_RX
28160 ± 4% -22.5% 21835 ± 2% softirqs.CPU39.TIMER
28989 ± 9% -24.2% 21960 ± 6% softirqs.CPU4.TIMER
28481 ± 4% -15.4% 24099 ± 8% softirqs.CPU5.TIMER
26983 ± 9% -21.1% 21281 ± 8% softirqs.CPU6.TIMER
27959 ± 5% -21.9% 21830 ± 2% softirqs.CPU7.TIMER
27772 ± 7% -23.6% 21227 ± 7% softirqs.CPU8.TIMER
27782 ± 7% -22.6% 21497 softirqs.CPU9.TIMER
185109 ± 2% -31.1% 127580 ± 2% softirqs.RCU
352266 -22.3% 273547 ± 2% softirqs.SCHED
1130972 -21.6% 887232 ± 3% softirqs.TIMER
28453 -15.2% 24132 interrupts.CAL:Function_call_interrupts
109239 -24.8% 82183 ± 4% interrupts.CPU0.LOC:Local_timer_interrupts
109081 -25.2% 81564 ± 3% interrupts.CPU1.LOC:Local_timer_interrupts
724.00 ± 5% -25.9% 536.75 ± 11% interrupts.CPU10.CAL:Function_call_interrupts
104320 ± 8% -21.5% 81943 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
724.50 ± 4% -13.7% 625.00 ± 4% interrupts.CPU11.CAL:Function_call_interrupts
106486 ± 3% -23.6% 81306 ± 3% interrupts.CPU11.LOC:Local_timer_interrupts
108005 ± 2% -24.4% 81612 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
107443 ± 2% -24.1% 81521 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
748.75 -20.8% 592.75 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
104280 ± 8% -21.2% 82184 ± 4% interrupts.CPU14.LOC:Local_timer_interrupts
104051 ± 8% -21.2% 82015 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
699.50 ± 3% -22.2% 544.50 ± 19% interrupts.CPU18.CAL:Function_call_interrupts
107584 ± 3% -23.6% 82174 ± 4% interrupts.CPU18.LOC:Local_timer_interrupts
721.25 ± 4% -16.2% 604.25 ± 3% interrupts.CPU19.CAL:Function_call_interrupts
105157 ± 5% -21.9% 82094 ± 3% interrupts.CPU19.LOC:Local_timer_interrupts
109180 -24.9% 82043 ± 4% interrupts.CPU2.LOC:Local_timer_interrupts
8.50 ±139% +21497.1% 1835 ± 36% interrupts.CPU2.NMI:Non-maskable_interrupts
8.50 ±139% +21497.1% 1835 ± 36% interrupts.CPU2.PMI:Performance_monitoring_interrupts
726.25 ± 4% -19.4% 585.00 interrupts.CPU20.CAL:Function_call_interrupts
104894 ± 7% -22.2% 81599 ± 4% interrupts.CPU20.LOC:Local_timer_interrupts
697.75 ± 6% -10.4% 625.50 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
104395 ± 8% -21.6% 81832 ± 4% interrupts.CPU22.LOC:Local_timer_interrupts
755.00 -18.3% 617.00 ± 3% interrupts.CPU23.CAL:Function_call_interrupts
720.50 ± 2% -18.8% 585.25 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
104976 ± 8% -22.2% 81671 ± 3% interrupts.CPU24.LOC:Local_timer_interrupts
714.00 ± 3% -20.8% 565.50 ± 12% interrupts.CPU25.CAL:Function_call_interrupts
108355 -24.5% 81767 ± 3% interrupts.CPU25.LOC:Local_timer_interrupts
716.50 ± 4% -18.0% 587.75 ± 8% interrupts.CPU26.CAL:Function_call_interrupts
104216 ± 8% -21.4% 81935 ± 4% interrupts.CPU26.LOC:Local_timer_interrupts
685.50 -10.9% 610.75 ± 3% interrupts.CPU28.CAL:Function_call_interrupts
108453 -24.5% 81831 ± 4% interrupts.CPU28.LOC:Local_timer_interrupts
251.50 ±128% +476.6% 1450 ± 28% interrupts.CPU28.NMI:Non-maskable_interrupts
251.50 ±128% +476.6% 1450 ± 28% interrupts.CPU28.PMI:Performance_monitoring_interrupts
728.00 ± 3% -13.5% 629.75 ± 4% interrupts.CPU29.CAL:Function_call_interrupts
107424 ± 2% -23.9% 81773 ± 3% interrupts.CPU29.LOC:Local_timer_interrupts
2059 ± 29% -29.9% 1443 ± 28% interrupts.CPU3.RES:Rescheduling_interrupts
104181 ± 8% -21.5% 81753 ± 4% interrupts.CPU30.LOC:Local_timer_interrupts
748.25 ± 3% -19.4% 603.25 ± 8% interrupts.CPU31.CAL:Function_call_interrupts
747.25 ± 2% -21.0% 590.50 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
107425 ± 2% -23.9% 81750 ± 4% interrupts.CPU32.LOC:Local_timer_interrupts
722.75 ± 5% -14.9% 614.75 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
107584 ± 2% -23.9% 81909 ± 4% interrupts.CPU34.LOC:Local_timer_interrupts
709.50 ± 3% -18.2% 580.25 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
105433 ± 5% -22.1% 82082 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
711.25 ± 6% -18.9% 576.50 ± 2% interrupts.CPU36.CAL:Function_call_interrupts
107722 ± 2% -24.1% 81741 ± 4% interrupts.CPU36.LOC:Local_timer_interrupts
708.25 ± 5% -11.9% 623.75 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
105762 ± 5% -22.2% 82244 ± 3% interrupts.CPU37.LOC:Local_timer_interrupts
104373 ± 7% -21.3% 82118 ± 3% interrupts.CPU38.LOC:Local_timer_interrupts
727.00 ± 4% -15.1% 617.00 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
109052 -24.9% 81942 ± 4% interrupts.CPU4.LOC:Local_timer_interrupts
727.00 ± 5% -12.0% 640.00 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
856.25 ±127% +343.5% 3797 ± 59% interrupts.CPU5.NMI:Non-maskable_interrupts
856.25 ±127% +343.5% 3797 ± 59% interrupts.CPU5.PMI:Performance_monitoring_interrupts
733.75 ± 4% -15.7% 618.75 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
108698 -24.6% 81909 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
715.00 ± 7% -19.5% 575.50 ± 12% interrupts.CPU7.CAL:Function_call_interrupts
107906 ± 2% -24.3% 81733 ± 3% interrupts.CPU7.LOC:Local_timer_interrupts
750.50 -19.5% 604.50 interrupts.CPU8.CAL:Function_call_interrupts
104364 ± 8% -21.5% 81935 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
717.75 ± 7% -16.9% 596.25 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
4180264 ± 7% -21.7% 3274083 ± 4% interrupts.LOC:Local_timer_interrupts
42304 ± 14% +22.4% 51791 ± 18% interrupts.NMI:Non-maskable_interrupts
42304 ± 14% +22.4% 51791 ± 18% interrupts.PMI:Performance_monitoring_interrupts
72811 ± 2% -8.7% 66455 ± 4% interrupts.RES:Rescheduling_interrupts
9.84 ± 5% -3.2 6.62 ± 9% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
2.67 ± 3% -1.1 1.60 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
2.51 ± 4% -1.0 1.50 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
6.61 ± 5% -0.8 5.82 ± 6% perf-profile.calltrace.cycles-pp.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
4.46 ± 4% -0.6 3.82 ± 8% perf-profile.calltrace.cycles-pp.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
4.44 ± 4% -0.6 3.80 ± 8% perf-profile.calltrace.cycles-pp.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
4.43 ± 4% -0.6 3.79 ± 8% perf-profile.calltrace.cycles-pp.f2fs_file_write_iter.do_iter_readv_writev.do_iter_write.nfsd_vfs_write.nfsd4_write
4.43 ± 4% -0.6 3.80 ± 8% perf-profile.calltrace.cycles-pp.do_iter_readv_writev.do_iter_write.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound
4.43 ± 4% -0.6 3.80 ± 8% perf-profile.calltrace.cycles-pp.do_iter_write.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch
1.43 ± 4% -0.5 0.88 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
3.47 ± 8% -0.5 2.97 ± 8% perf-profile.calltrace.cycles-pp.__write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
2.32 ± 10% -0.4 1.93 ± 10% perf-profile.calltrace.cycles-pp.f2fs_outplace_write_data.f2fs_do_write_data_page.__write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages
2.00 ± 10% -0.4 1.63 ± 12% perf-profile.calltrace.cycles-pp.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.__write_data_page.f2fs_write_cache_pages
0.91 ± 3% -0.3 0.59 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.53 ± 13% -0.3 1.23 ± 13% perf-profile.calltrace.cycles-pp.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.__write_data_page
1.32 ± 2% -0.3 1.03 ± 11% perf-profile.calltrace.cycles-pp.f2fs_write_begin.generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.do_iter_readv_writev
3.61 ± 7% -0.3 3.34 perf-profile.calltrace.cycles-pp.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread.kthread
0.87 ± 3% -0.3 0.60 ± 6% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
3.61 ± 7% -0.3 3.34 perf-profile.calltrace.cycles-pp.rpc_async_schedule.process_one_work.worker_thread.kthread.ret_from_fork
3.34 ± 8% -0.3 3.08 perf-profile.calltrace.cycles-pp.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work.worker_thread
3.33 ± 8% -0.3 3.07 perf-profile.calltrace.cycles-pp.xprt_transmit.call_transmit.__rpc_execute.rpc_async_schedule.process_one_work
1.19 ± 2% -0.3 0.93 ± 11% perf-profile.calltrace.cycles-pp.pagecache_get_page.f2fs_write_begin.generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter
3.27 ± 7% -0.2 3.03 perf-profile.calltrace.cycles-pp.tcp_sendmsg.sock_sendmsg.xs_sendpages.xs_tcp_send_request.xprt_transmit
0.66 ± 4% -0.2 0.42 ± 57% perf-profile.calltrace.cycles-pp.f2fs_set_data_page_dirty.f2fs_write_end.generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter
0.61 ± 4% -0.2 0.41 ± 57% perf-profile.calltrace.cycles-pp.nfs_commit_release_pages.nfs_commit_release.rpc_free_task.rpc_async_release.process_one_work
2.43 ± 6% -0.2 2.25 ± 5% perf-profile.calltrace.cycles-pp.nfs_file_fsync.do_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.61 ± 5% -0.1 0.54 ± 4% perf-profile.calltrace.cycles-pp.nfs_commit_release.rpc_free_task.rpc_async_release.process_one_work.worker_thread
26.91 ± 5% +4.5 31.39 ± 9% perf-profile.calltrace.cycles-pp.__mutex_lock.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
17.05 ± 6% +7.7 24.75 ± 10% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
10.01 ± 4% -3.3 6.75 ± 9% perf-profile.children.cycles-pp.mutex_spin_on_owner
3.35 ± 5% -1.3 2.08 ± 6% perf-profile.children.cycles-pp.apic_timer_interrupt
3.12 ± 5% -1.2 1.92 ± 6% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
6.62 ± 5% -0.8 5.82 ± 6% perf-profile.children.cycles-pp.f2fs_write_cache_pages
1.88 ± 3% -0.7 1.20 ± 8% perf-profile.children.cycles-pp.hrtimer_interrupt
4.46 ± 4% -0.6 3.82 ± 8% perf-profile.children.cycles-pp.nfsd4_write
4.44 ± 4% -0.6 3.80 ± 8% perf-profile.children.cycles-pp.nfsd_vfs_write
4.43 ± 4% -0.6 3.79 ± 8% perf-profile.children.cycles-pp.f2fs_file_write_iter
4.43 ± 4% -0.6 3.80 ± 8% perf-profile.children.cycles-pp.do_iter_readv_writev
4.43 ± 4% -0.6 3.80 ± 8% perf-profile.children.cycles-pp.do_iter_write
3.47 ± 8% -0.5 2.97 ± 8% perf-profile.children.cycles-pp.__write_data_page
3.19 ± 8% -0.5 2.72 ± 9% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.24 ± 12% -0.5 0.78 ± 10% perf-profile.children.cycles-pp.__softirqentry_text_start
0.94 ± 17% -0.4 0.51 ± 15% perf-profile.children.cycles-pp.irq_exit
1.23 ± 2% -0.4 0.82 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.32 ± 10% -0.4 1.93 ± 10% perf-profile.children.cycles-pp.f2fs_outplace_write_data
2.01 ± 10% -0.4 1.64 ± 12% perf-profile.children.cycles-pp.do_write_page
1.53 ± 13% -0.3 1.23 ± 13% perf-profile.children.cycles-pp.f2fs_submit_page_write
2.70 ± 4% -0.3 2.41 ± 6% perf-profile.children.cycles-pp.pagecache_get_page
1.32 ± 2% -0.3 1.03 ± 11% perf-profile.children.cycles-pp.f2fs_write_begin
3.71 ± 7% -0.3 3.42 perf-profile.children.cycles-pp.__rpc_execute
3.61 ± 7% -0.3 3.34 perf-profile.children.cycles-pp.rpc_async_schedule
0.89 ± 5% -0.3 0.63 ± 5% perf-profile.children.cycles-pp.menu_select
3.39 ± 8% -0.3 3.13 ± 2% perf-profile.children.cycles-pp.call_transmit
0.74 ± 4% -0.2 0.51 ± 11% perf-profile.children.cycles-pp.tick_sched_timer
0.62 ± 6% -0.2 0.41 ± 11% perf-profile.children.cycles-pp.tick_sched_handle
1.34 ± 7% -0.2 1.15 ± 10% perf-profile.children.cycles-pp.clear_page_erms
0.57 ± 7% -0.2 0.38 ± 12% perf-profile.children.cycles-pp.update_process_times
2.43 ± 6% -0.2 2.25 ± 5% perf-profile.children.cycles-pp.__x64_sys_fsync
2.43 ± 6% -0.2 2.25 ± 5% perf-profile.children.cycles-pp.do_fsync
2.43 ± 6% -0.2 2.25 ± 5% perf-profile.children.cycles-pp.nfs_file_fsync
0.49 ± 20% -0.2 0.32 ± 9% perf-profile.children.cycles-pp.ktime_get
0.37 ± 14% -0.2 0.20 ± 13% perf-profile.children.cycles-pp.clockevents_program_event
1.34 ± 3% -0.2 1.17 ± 7% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.98 ± 4% -0.2 0.83 ± 10% perf-profile.children.cycles-pp.__schedule
0.80 ± 10% -0.1 0.67 ± 4% perf-profile.children.cycles-pp.inet6_csk_xmit
0.23 ± 21% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.run_rebalance_domains
0.75 ± 11% -0.1 0.63 ± 4% perf-profile.children.cycles-pp.ip6_xmit
0.23 ± 21% -0.1 0.11 ± 7% perf-profile.children.cycles-pp.update_blocked_averages
0.84 ± 3% -0.1 0.72 ± 7% perf-profile.children.cycles-pp.activate_task
0.67 ± 4% -0.1 0.55 ± 8% perf-profile.children.cycles-pp.f2fs_set_data_page_dirty
0.85 ± 3% -0.1 0.73 ± 8% perf-profile.children.cycles-pp.ttwu_do_activate
0.84 ± 3% -0.1 0.72 ± 7% perf-profile.children.cycles-pp.enqueue_task_fair
0.28 ± 3% -0.1 0.17 ± 11% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.61 ± 5% -0.1 0.50 ± 2% perf-profile.children.cycles-pp.__lru_cache_add
0.57 ± 5% -0.1 0.46 perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.21 ± 29% -0.1 0.11 ± 30% perf-profile.children.cycles-pp.rebalance_domains
0.22 ± 5% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.tick_nohz_next_event
0.79 ± 2% -0.1 0.69 ± 8% perf-profile.children.cycles-pp.enqueue_entity
0.29 ± 8% -0.1 0.20 ± 12% perf-profile.children.cycles-pp.scheduler_tick
0.68 ± 5% -0.1 0.59 ± 4% perf-profile.children.cycles-pp.nfs_commit_release
0.68 ± 5% -0.1 0.59 ± 4% perf-profile.children.cycles-pp.nfs_commit_release_pages
0.66 ± 4% -0.1 0.58 ± 9% perf-profile.children.cycles-pp.schedule
0.81 ± 6% -0.1 0.72 ± 7% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.48 ± 9% -0.1 0.40 ± 8% perf-profile.children.cycles-pp.wait_on_page_bit
0.30 ± 15% -0.1 0.22 ± 8% perf-profile.children.cycles-pp.load_balance
0.42 ± 3% -0.1 0.34 ± 7% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.48 ± 3% -0.1 0.40 ± 9% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.43 ± 9% -0.1 0.35 ± 8% perf-profile.children.cycles-pp.ipv6_rcv
0.41 ± 4% -0.1 0.33 ± 10% perf-profile.children.cycles-pp.__mod_lruvec_state
0.46 ± 9% -0.1 0.39 ± 2% perf-profile.children.cycles-pp.tcp_v6_do_rcv
0.40 ± 10% -0.1 0.32 ± 8% perf-profile.children.cycles-pp.ip6_input
0.39 ± 11% -0.1 0.32 ± 8% perf-profile.children.cycles-pp.ip6_protocol_deliver_rcu
0.43 ± 4% -0.1 0.36 ± 10% perf-profile.children.cycles-pp.arch_stack_walk
0.39 ± 11% -0.1 0.32 ± 9% perf-profile.children.cycles-pp.ip6_input_finish
0.34 ± 5% -0.1 0.27 ± 16% perf-profile.children.cycles-pp.schedule_idle
0.37 ± 5% -0.1 0.29 ± 11% perf-profile.children.cycles-pp.sched_ttwu_pending
0.37 ± 11% -0.1 0.30 ± 8% perf-profile.children.cycles-pp.tcp_v6_rcv
0.55 ± 3% -0.1 0.48 ± 9% perf-profile.children.cycles-pp.__account_scheduler_latency
0.30 ± 5% -0.1 0.23 ± 7% perf-profile.children.cycles-pp.svc_send
0.19 ± 12% -0.1 0.12 ± 22% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.39 ± 9% -0.1 0.32 ± 7% perf-profile.children.cycles-pp.__list_del_entry_valid
0.12 ± 39% -0.1 0.05 ± 59% perf-profile.children.cycles-pp.rcu_core
0.34 ± 7% -0.1 0.27 ± 5% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.21 ± 10% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.read_tsc
0.11 ± 15% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.28 ± 5% -0.1 0.22 ± 15% perf-profile.children.cycles-pp.f2fs_update_dirty_page
0.28 ± 6% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.svc_send_common
0.28 ± 6% -0.1 0.22 ± 8% perf-profile.children.cycles-pp.svc_tcp_sendto
0.28 ± 6% -0.1 0.22 ± 8% perf-profile.children.cycles-pp.svc_sendto
0.28 ± 5% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.kernel_sendpage
0.28 ± 5% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.inet_sendpage
0.28 ± 5% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.tcp_sendpage
0.38 ± 9% -0.1 0.32 ± 3% perf-profile.children.cycles-pp.__nfs_commit_inode
0.28 ± 7% -0.1 0.22 ± 15% perf-profile.children.cycles-pp.update_load_avg
0.25 ± 14% -0.1 0.19 ± 3% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.29 ± 9% -0.1 0.23 ± 8% perf-profile.children.cycles-pp.nfs_scan_commit
0.33 ± 10% -0.1 0.27 ± 10% perf-profile.children.cycles-pp.find_get_pages_range_tag
0.33 ± 10% -0.1 0.28 ± 10% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
0.23 ± 6% -0.1 0.18 ± 11% perf-profile.children.cycles-pp.tcp_sendpage_locked
0.26 ± 4% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.09 ± 13% -0.1 0.04 ±102% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 10% -0.1 0.03 ±100% perf-profile.children.cycles-pp.timerqueue_del
0.23 ± 6% -0.0 0.18 ± 11% perf-profile.children.cycles-pp.do_tcp_sendpages
0.13 ± 18% -0.0 0.09 ± 20% perf-profile.children.cycles-pp.native_irq_return_iret
0.20 ± 11% -0.0 0.16 ± 4% perf-profile.children.cycles-pp.__queue_work
0.17 ± 9% -0.0 0.13 ± 22% perf-profile.children.cycles-pp.__mod_node_page_state
0.14 ± 8% -0.0 0.10 ± 12% perf-profile.children.cycles-pp.update_rq_clock
0.10 ± 8% -0.0 0.06 ± 16% perf-profile.children.cycles-pp.__remove_hrtimer
0.19 ± 9% -0.0 0.16 ± 5% perf-profile.children.cycles-pp.queue_work_on
0.16 ± 9% -0.0 0.13 ± 15% perf-profile.children.cycles-pp.sched_clock_cpu
0.14 ± 11% -0.0 0.11 ± 13% perf-profile.children.cycles-pp.sched_clock
0.07 ± 11% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.11 ± 4% -0.0 0.07 ± 27% perf-profile.children.cycles-pp.lock_sock_nested
0.12 ± 8% -0.0 0.09 ± 13% perf-profile.children.cycles-pp.unlock_page
0.10 ± 12% -0.0 0.07 ± 21% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.15 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.__mod_memcg_state
0.13 ± 6% -0.0 0.11 ± 10% perf-profile.children.cycles-pp.down_write
0.15 ± 8% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.tcp_ack
0.08 ± 21% -0.0 0.06 ± 28% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.11 ± 4% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.__wake_up_common_lock
27.09 ± 5% +4.4 31.52 ± 9% perf-profile.children.cycles-pp.__mutex_lock
17.07 ± 6% +7.7 24.77 ± 10% perf-profile.children.cycles-pp.osq_lock
9.86 ± 4% -3.2 6.69 ± 9% perf-profile.self.cycles-pp.mutex_spin_on_owner
1.32 ± 6% -0.2 1.14 ± 10% perf-profile.self.cycles-pp.clear_page_erms
0.95 ± 11% -0.2 0.78 ± 8% perf-profile.self.cycles-pp.get_page_from_freelist
0.31 ± 31% -0.1 0.18 ± 18% perf-profile.self.cycles-pp.ktime_get
0.45 ± 8% -0.1 0.33 ± 12% perf-profile.self.cycles-pp.menu_select
0.27 ± 17% -0.1 0.20 ± 5% perf-profile.self.cycles-pp.cpuidle_enter_state
0.38 ± 8% -0.1 0.31 ± 8% perf-profile.self.cycles-pp.__list_del_entry_valid
0.26 ± 7% -0.1 0.20 ± 7% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.25 ± 14% -0.1 0.19 ± 5% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.20 ± 10% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.read_tsc
0.14 ± 15% -0.1 0.09 ± 7% perf-profile.self.cycles-pp.free_pcppages_bulk
0.25 ± 8% -0.0 0.20 ± 7% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.13 ± 18% -0.0 0.09 ± 20% perf-profile.self.cycles-pp.native_irq_return_iret
0.11 ± 17% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.f2fs_submit_page_write
0.24 ± 6% -0.0 0.20 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.08 ± 11% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.21 ± 3% -0.0 0.17 ± 10% perf-profile.self.cycles-pp.__lock_text_start
0.13 ± 16% -0.0 0.09 ± 15% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.08 ± 10% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.wait_on_page_bit
0.13 ± 13% -0.0 0.10 ± 25% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.16 ± 8% -0.0 0.13 ± 10% perf-profile.self.cycles-pp.__schedule
0.07 ± 12% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.__nfs_pageio_add_request
0.10 ± 10% -0.0 0.08 ± 14% perf-profile.self.cycles-pp.update_rq_clock
0.13 ± 5% -0.0 0.10 ± 14% perf-profile.self.cycles-pp.__might_sleep
0.09 ± 13% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.11 ± 7% -0.0 0.08 ± 12% perf-profile.self.cycles-pp.f2fs_write_begin
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.__mod_memcg_state
0.15 ± 3% -0.0 0.12 ± 8% perf-profile.self.cycles-pp.page_mapping
16.97 ± 6% +7.6 24.54 ± 10% perf-profile.self.cycles-pp.osq_lock
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[blk] b91f4a426e: fio.write_bw_MBps -9.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -9.3% regression of fio.write_bw_MBps due to commit:
commit: b91f4a426e30203b622094e01f64f38034f02d7f ("[PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()")
url: https://github.com/0day-ci/linux/commits/Hannes-Reinecke/blk-mq-fixup-req...
in testcase: fio-basic
on test machine: 72 threads Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz with 192G memory
with following parameters:
runtime: 300s
disk: 1HDD
fs: ext4
nr_task: 100%
test_size: 128G
rw: write
bs: 4k
ioengine: sync
ucode: 0x200005e
cpufreq_governor: performance
test-description: Fio is a tool that spawns a number of threads or processes to perform a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
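For reference, the test parameters above map directly onto fio command-line options. The following is a hedged sketch of an equivalent standalone invocation (not the exact lkp-generated job file, which is attached to the original email): the job name, target directory, and numjobs value (nr_task=100% of 72 threads) are illustrative assumptions.

```shell
# Hypothetical fio invocation approximating the parameters above:
# bs=4k, rw=write, ioengine=sync, runtime=300s, test_size=128G.
# --directory and --numjobs are assumptions; the real job file is
# generated by lkp-tests and attached to the report email.
fio --name=lkp-write \
    --directory=/mnt/ext4 \
    --rw=write \
    --bs=4k \
    --ioengine=sync \
    --size=128G \
    --runtime=300 \
    --time_based \
    --numjobs=72 \
    --group_reporting
```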
In addition to that, the commit also has significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps -10.9% regression |
| test machine | 72 threads Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz with 192G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=btrfs |
| | ioengine=sync |
| | nr_task=100% |
| | runtime=300s |
| | rw=write |
| | test_size=128G |
| | ucode=0x200005e |
+------------------+----------------------------------------------------------------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1HDD/ext4/sync/x86_64-rhel-7.6/100%/debian-x86_64-2019-05-14.cgz/300s/write/lkp-skl-2sp8/128G/fio-basic/0x200005e
commit:
ff08148702 ("blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()")
b91f4a426e ("blk-mq: always call into the scheduler in blk_mq_make_request()")
ff0814870258907f b91f4a426e30203b622094e01f6
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
6.62 ± 39% -5.1 1.48 ± 11% fio.latency_100us%
13.70 ± 13% +5.2 18.86 ± 9% fio.latency_20us%
0.07 ± 2% +0.0 0.08 ± 4% fio.latency_250ms%
9.82 ± 37% -4.2 5.59 ± 40% fio.latency_4us%
0.26 ± 48% -0.2 0.03 ± 8% fio.latency_500us%
0.02 ± 8% +0.0 0.02 ± 7% fio.latency_50ms%
23.32 ± 25% +11.2 34.56 fio.latency_50us%
501.83 +5.9% 531.37 fio.time.elapsed_time
501.83 +5.9% 531.37 fio.time.elapsed_time.max
1.694e+08 -9.3% 1.537e+08 fio.time.file_system_outputs
6683 ± 15% -30.8% 4627 ± 3% fio.time.involuntary_context_switches
116.50 ± 18% -41.6% 68.00 ± 2% fio.time.percent_of_cpu_this_job_got
566.73 ± 19% -40.1% 339.73 ± 2% fio.time.system_time
21173558 -9.3% 19213732 fio.workload
275.63 -9.3% 250.12 fio.write_bw_MBps
71360 ± 28% -53.0% 33504 ± 2% fio.write_clat_90%_us
120512 ± 36% -65.7% 41344 ± 2% fio.write_clat_95%_us
62652416 +8.2% 67764224 ± 5% fio.write_clat_99%_us
1018693 +10.2% 1122642 fio.write_clat_mean_us
9332015 +6.2% 9912654 ± 2% fio.write_clat_stddev
70561 -9.3% 64030 fio.write_iops
716.00 ± 4% -15.6% 604.00 ± 9% slabinfo.uid_cache.num_objs
159147 ± 6% +23.5% 196570 ± 22% softirqs.CPU12.TIMER
0.03 ± 5% -0.0 0.02 ± 2% mpstat.cpu.all.soft%
1.40 ± 22% -0.7 0.73 mpstat.cpu.all.sys%
11287932 -9.4% 10232075 ± 3% numa-numastat.node1.local_node
11296294 -9.3% 10240386 ± 3% numa-numastat.node1.numa_hit
82.00 ± 11% -25.6% 61.00 turbostat.Avg_MHz
4.86 ± 5% -0.7 4.19 ± 3% turbostat.Busy%
1691 ± 6% -13.6% 1462 ± 4% turbostat.Bzy_MHz
46.51 +4.6% 48.67 iostat.cpu.idle
52.00 -2.9% 50.51 iostat.cpu.iowait
1.42 ± 21% -46.8% 0.76 iostat.cpu.system
440.26 ±172% -99.3% 2.89 ± 57% iostat.sdb.r_await.max
942.75 ± 15% -44.2% 525.75 numa-meminfo.node0.Mlocked
35989 ± 2% +142.8% 87375 numa-meminfo.node0.Writeback
943.50 ± 15% -59.2% 384.75 numa-meminfo.node1.Mlocked
35665 ± 3% +137.9% 84863 ± 3% numa-meminfo.node1.Writeback
5480 ± 4% +6.3% 5823 ± 5% meminfo.Buffers
7040000 ± 6% -8.0% 6476288 meminfo.DirectMap2M
1886 -51.8% 909.25 meminfo.Mlocked
71533 +141.4% 172651 ± 2% meminfo.Writeback
177731 -13.9% 153014 meminfo.max_used_kB
235.50 ± 15% -44.4% 131.00 numa-vmstat.node0.nr_mlock
9207 ± 3% +138.0% 21912 numa-vmstat.node0.nr_writeback
235.50 ± 15% -59.4% 95.50 numa-vmstat.node1.nr_mlock
8697 ± 3% +143.3% 21161 ± 3% numa-vmstat.node1.nr_writeback
5397979 ± 3% -10.5% 4831386 ± 4% numa-vmstat.node1.nr_written
46.25 +3.8% 48.00 vmstat.cpu.id
51.25 -2.4% 50.00 vmstat.cpu.wa
167970 -14.1% 144239 vmstat.io.bo
5469 ± 4% +6.2% 5810 ± 5% vmstat.memory.buff
3079 ± 2% -7.0% 2864 vmstat.system.cs
135.35 ±125% -100.0% 0.00 ± 3% sched_debug.cfs_rq:/.MIN_vruntime.stddev
6969 ± 19% -39.3% 4233 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
7965 ± 16% -33.0% 5335 sched_debug.cfs_rq:/.exec_clock.max
6505 ± 20% -42.4% 3749 ± 2% sched_debug.cfs_rq:/.exec_clock.min
135.35 ±125% -100.0% 0.00 ± 3% sched_debug.cfs_rq:/.max_vruntime.stddev
58920 ± 36% -25.0% 44204 ± 37% sched_debug.cpu.nr_switches.max
7912 ± 21% -22.8% 6109 ± 21% sched_debug.cpu.nr_switches.stddev
7728 ± 19% -21.8% 6040 ± 17% sched_debug.cpu.sched_count.stddev
26049 ± 36% -25.4% 19442 ± 36% sched_debug.cpu.sched_goidle.max
3614 ± 19% -22.7% 2793 ± 20% sched_debug.cpu.sched_goidle.stddev
2.625e+08 ± 2% -10.0% 2.362e+08 ± 2% perf-stat.i.branch-instructions
1.85 +0.1 1.91 perf-stat.i.branch-miss-rate%
3074 ± 2% -7.1% 2857 ± 2% perf-stat.i.context-switches
5.331e+09 ± 11% -28.5% 3.812e+09 perf-stat.i.cpu-cycles
0.03 ± 3% -0.0 0.03 ± 4% perf-stat.i.dTLB-load-miss-rate%
3.432e+08 -9.8% 3.095e+08 perf-stat.i.dTLB-loads
1.875e+08 -6.7% 1.75e+08 perf-stat.i.dTLB-stores
1.28e+09 ± 2% -9.4% 1.159e+09 ± 2% perf-stat.i.instructions
79.86 +1.0 80.89 perf-stat.i.node-load-miss-rate%
1.60 ± 2% +0.1 1.70 perf-stat.overall.branch-miss-rate%
4.16 ± 10% -20.9% 3.29 ± 2% perf-stat.overall.cpi
1756 ± 7% -21.9% 1371 ± 3% perf-stat.overall.cycles-between-cache-misses
0.24 ± 12% +24.8% 0.30 ± 2% perf-stat.overall.ipc
2.62e+08 ± 2% -10.0% 2.358e+08 ± 2% perf-stat.ps.branch-instructions
3068 ± 2% -7.0% 2851 ± 2% perf-stat.ps.context-switches
5.323e+09 ± 11% -28.4% 3.809e+09 perf-stat.ps.cpu-cycles
3.426e+08 -9.8% 3.09e+08 perf-stat.ps.dTLB-loads
1.872e+08 -6.7% 1.747e+08 perf-stat.ps.dTLB-stores
1.278e+09 ± 2% -9.4% 1.157e+09 ± 2% perf-stat.ps.instructions
6.427e+11 ± 2% -4.3% 6.153e+11 ± 2% perf-stat.total.instructions
0.75 ±173% +6700.0% 51.00 ±151% interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
396.00 ± 28% +76.3% 698.25 ± 45% interrupts.CPU0.RES:Rescheduling_interrupts
623.75 ± 57% -90.1% 61.50 ± 15% interrupts.CPU27.RES:Rescheduling_interrupts
151.00 ± 30% +45.9% 220.25 ± 11% interrupts.CPU45.NMI:Non-maskable_interrupts
151.00 ± 30% +45.9% 220.25 ± 11% interrupts.CPU45.PMI:Performance_monitoring_interrupts
80.00 ± 86% +104.4% 163.50 ± 36% interrupts.CPU45.RES:Rescheduling_interrupts
167.75 ± 30% +46.8% 246.25 ± 15% interrupts.CPU54.NMI:Non-maskable_interrupts
167.75 ± 30% +46.8% 246.25 ± 15% interrupts.CPU54.PMI:Performance_monitoring_interrupts
166.50 ± 27% +44.4% 240.50 ± 3% interrupts.CPU57.NMI:Non-maskable_interrupts
166.50 ± 27% +44.4% 240.50 ± 3% interrupts.CPU57.PMI:Performance_monitoring_interrupts
182.25 ± 18% +33.7% 243.75 ± 8% interrupts.CPU58.NMI:Non-maskable_interrupts
182.25 ± 18% +33.7% 243.75 ± 8% interrupts.CPU58.PMI:Performance_monitoring_interrupts
59.25 ± 60% +83.5% 108.75 ± 46% interrupts.CPU60.RES:Rescheduling_interrupts
179.75 ± 19% +61.9% 291.00 ± 25% interrupts.CPU62.NMI:Non-maskable_interrupts
179.75 ± 19% +61.9% 291.00 ± 25% interrupts.CPU62.PMI:Performance_monitoring_interrupts
4979 ± 5% +9.6% 5457 ± 7% interrupts.CPU65.CAL:Function_call_interrupts
4868 +25.0% 6085 ± 16% interrupts.CPU66.CAL:Function_call_interrupts
164.50 ± 28% +35.1% 222.25 ± 3% interrupts.CPU69.NMI:Non-maskable_interrupts
164.50 ± 28% +35.1% 222.25 ± 3% interrupts.CPU69.PMI:Performance_monitoring_interrupts
0.75 ±173% +6633.3% 50.50 ±153% interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
338.25 ± 59% -77.9% 74.75 ± 36% interrupts.CPU7.RES:Rescheduling_interrupts
4705 ± 18% +36.7% 6432 ± 21% interrupts.CPU70.CAL:Function_call_interrupts
21175431 -9.3% 19215476 proc-vmstat.nr_dirtied
6559304 -1.9% 6432094 proc-vmstat.nr_dirty
17335665 -6.4% 16220070 proc-vmstat.nr_file_pages
31132550 +3.7% 32279732 proc-vmstat.nr_free_pages
65684 -5.1% 62356 proc-vmstat.nr_inactive_anon
16993871 -6.5% 15880980 proc-vmstat.nr_inactive_file
67545 -5.0% 64178 proc-vmstat.nr_mapped
471.25 -51.7% 227.50 proc-vmstat.nr_mlock
2160 -2.5% 2107 proc-vmstat.nr_page_table_pages
67596 -4.8% 64379 proc-vmstat.nr_shmem
491926 -6.3% 460880 proc-vmstat.nr_slab_reclaimable
17971 +139.9% 43107 proc-vmstat.nr_writeback
21175431 -9.3% 19215476 proc-vmstat.nr_written
65684 -5.1% 62356 proc-vmstat.nr_zone_inactive_anon
16993871 -6.5% 15880980 proc-vmstat.nr_zone_inactive_file
6578500 -1.6% 6475915 proc-vmstat.nr_zone_write_pending
222873 ± 60% +83.2% 408201 ± 13% proc-vmstat.numa_foreign
1201 ± 4% +24.9% 1500 ± 25% proc-vmstat.numa_hint_faults
22710959 -9.4% 20577032 proc-vmstat.numa_hit
22687436 -9.4% 20553544 proc-vmstat.numa_local
222873 ± 60% +83.2% 408201 ± 13% proc-vmstat.numa_miss
246396 ± 55% +75.2% 431690 ± 12% proc-vmstat.numa_other
23046534 -8.5% 21092140 proc-vmstat.pgalloc_normal
1398640 +5.2% 1471305 proc-vmstat.pgfault
23033875 -8.5% 21079937 proc-vmstat.pgfree
84865276 -9.2% 77023176 proc-vmstat.pgpgout
12.13 ± 5% -1.8 10.29 ± 4% perf-profile.calltrace.cycles-pp.__GI___libc_write
11.65 ± 6% -1.7 9.99 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
11.63 ± 6% -1.7 9.98 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
10.60 ± 7% -1.6 9.00 ± 5% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write
11.17 ± 7% -1.6 9.58 ± 4% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
10.78 ± 7% -1.6 9.20 ± 5% perf-profile.calltrace.cycles-pp.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
11.09 ± 7% -1.6 9.53 ± 5% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
10.82 ± 7% -1.5 9.28 ± 5% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.93 ± 8% -1.4 8.56 ± 6% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.new_sync_write.vfs_write
2.43 ± 9% -0.6 1.82 ± 11% perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.new_sync_write
1.11 ± 15% -0.4 0.69 ± 15% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter
1.16 ± 15% -0.4 0.76 ± 4% perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.new_sync_write
1.12 ± 15% -0.4 0.75 ± 3% perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
1.19 ± 21% -0.2 0.98 ± 5% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.79 ± 14% -0.2 0.59 ± 9% perf-profile.calltrace.cycles-pp.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
1.37 ± 8% +0.2 1.54 ± 4% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.38 ± 6% +0.2 1.59 ± 3% perf-profile.calltrace.cycles-pp.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
2.29 ± 6% +0.2 2.50 ± 2% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
82.51 +1.6 84.06 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
82.51 +1.6 84.06 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
82.45 +1.6 84.01 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
83.41 +1.7 85.13 perf-profile.calltrace.cycles-pp.secondary_startup_64
12.16 ± 5% -1.8 10.33 ± 4% perf-profile.children.cycles-pp.__GI___libc_write
13.18 ± 6% -1.6 11.57 ± 4% perf-profile.children.cycles-pp.do_syscall_64
13.20 ± 6% -1.6 11.60 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
10.60 ± 7% -1.6 9.00 ± 5% perf-profile.children.cycles-pp.__generic_file_write_iter
10.78 ± 7% -1.6 9.20 ± 5% perf-profile.children.cycles-pp.ext4_file_write_iter
11.21 ± 7% -1.6 9.63 ± 5% perf-profile.children.cycles-pp.ksys_write
11.12 ± 7% -1.6 9.57 ± 5% perf-profile.children.cycles-pp.vfs_write
10.85 ± 7% -1.5 9.32 ± 5% perf-profile.children.cycles-pp.new_sync_write
9.93 ± 8% -1.4 8.56 ± 6% perf-profile.children.cycles-pp.generic_perform_write
2.44 ± 8% -0.6 1.83 ± 11% perf-profile.children.cycles-pp.ext4_da_write_end
1.38 ± 21% -0.5 0.87 ± 15% perf-profile.children.cycles-pp.__ext4_journal_stop
1.35 ± 21% -0.5 0.85 ± 15% perf-profile.children.cycles-pp.jbd2_journal_stop
1.12 ± 15% -0.4 0.71 ± 14% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.16 ± 15% -0.4 0.76 ± 4% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
1.12 ± 15% -0.4 0.75 ± 4% perf-profile.children.cycles-pp.copyin
1.20 ± 20% -0.2 0.99 ± 5% perf-profile.children.cycles-pp.pagecache_get_page
0.79 ± 14% -0.2 0.59 ± 9% perf-profile.children.cycles-pp.ext4_block_write_begin
0.48 ± 13% -0.1 0.34 ± 13% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.32 ± 28% -0.1 0.18 ± 14% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.28 ± 21% -0.1 0.18 ± 14% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.10 ± 18% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__note_gp_changes
0.09 ± 8% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.__update_load_avg_se
0.04 ± 58% +0.0 0.08 ± 15% perf-profile.children.cycles-pp.__accumulate_pelt_segments
0.12 ± 8% +0.0 0.17 ± 12% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.01 ±173% +0.1 0.07 ± 30% perf-profile.children.cycles-pp.filemap_map_pages
0.01 ±173% +0.1 0.07 ± 30% perf-profile.children.cycles-pp.rcu_gp_kthread
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.autoremove_wake_function
0.03 ±105% +0.1 0.09 ± 27% perf-profile.children.cycles-pp.kjournald2
0.03 ±105% +0.1 0.09 ± 27% perf-profile.children.cycles-pp.jbd2_journal_commit_transaction
0.00 +0.1 0.06 ± 20% perf-profile.children.cycles-pp.sched_clock_idle_wakeup_event
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.__wake_up_common
0.15 ± 24% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.fb_flashcursor
0.15 ± 24% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.bit_cursor
0.15 ± 24% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.soft_cursor
0.14 ± 26% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.memcpy_toio
0.14 ± 26% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.ast_dirty_update
0.45 ± 7% +0.1 0.54 ± 8% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.74 ± 6% +0.1 0.83 ± 5% perf-profile.children.cycles-pp._raw_spin_trylock
1.45 ± 7% +0.2 1.63 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
1.56 ± 8% +0.2 1.78 ± 3% perf-profile.children.cycles-pp.__next_timer_interrupt
2.33 ± 6% +0.2 2.55 ± 3% perf-profile.children.cycles-pp.get_next_timer_interrupt
1.41 ± 6% +0.2 1.63 ± 3% perf-profile.children.cycles-pp.rcu_sched_clock_irq
82.51 +1.6 84.06 perf-profile.children.cycles-pp.start_secondary
83.41 +1.7 85.13 perf-profile.children.cycles-pp.secondary_startup_64
83.41 +1.7 85.13 perf-profile.children.cycles-pp.cpu_startup_entry
83.51 +1.7 85.24 perf-profile.children.cycles-pp.do_idle
0.93 ± 11% -0.2 0.70 ± 14% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.28 ± 21% -0.1 0.18 ± 14% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.27 ± 21% -0.1 0.19 ± 22% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.19 ± 8% -0.1 0.13 ± 18% perf-profile.self.cycles-pp.update_blocked_averages
0.15 ± 17% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.08 ± 15% +0.0 0.11 ± 6% perf-profile.self.cycles-pp.__update_load_avg_se
0.04 ± 57% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.__accumulate_pelt_segments
0.14 ± 26% +0.1 0.21 ± 10% perf-profile.self.cycles-pp.memcpy_toio
0.01 ±173% +0.1 0.09 ± 27% perf-profile.self.cycles-pp.intel_pmu_disable_all
0.39 ± 4% +0.1 0.48 ± 8% perf-profile.self.cycles-pp.__hrtimer_next_event_base
1.17 ± 6% +0.1 1.31 ± 4% perf-profile.self.cycles-pp.rcu_sched_clock_irq
1.44 ± 8% +0.2 1.62 perf-profile.self.cycles-pp.native_irq_return_iret
2.63 +0.3 2.94 ± 4% perf-profile.self.cycles-pp.cpuidle_enter_state
fio.write_bw_MBps
280 +-+-------------------------------------------------------------------+
| :+ + + + :+ + :+ + .++.+.+ .+. |
275 +-++ + + + +.++.++ ++.++.|
| |
270 +-+ |
| |
265 +-+ |
| |
260 +-+ |
| |
255 +-+ |
O OO O OO |
250 +-+ OO OO O OO OO OO |
| |
245 +-+-------------------------------------------------------------------+
fio.write_iops
72000 +-+-----------------------------------------------------------------+
|.+ ++.++.++. +.+.++.++.++.++.+ +.++.+ ++. +. |
71000 +-+:+ + :+ :+ + ++.+.++.++.++.+ .++.|
70000 +-++ + + + |
| |
69000 +-+ |
68000 +-+ |
| |
67000 +-+ |
66000 +-+ |
| |
65000 +-+ |
64000 O-OO OO OO OO OO O O |
| O O O O |
63000 +-+-----------------------------------------------------------------+
fio.write_clat_mean_us
1.14e+06 +-+--------------------------------------------------------------+
| O O |
1.12e+06 +O+OO OO OO OO OO O OO |
O |
1.1e+06 +-+ |
| |
1.08e+06 +-+ |
| |
1.06e+06 +-+ |
| |
1.04e+06 +-+ |
| |
1.02e+06 +-++ + + .+ .+ +.++.++.++.++.+|
|+ :.++.+ .++. .+ .+ .+ .+ + + .+ + + + +.+ |
1e+06 +-+--------------------------------------------------------------+
fio.workload
2.15e+07 +-+--------------------------------------------------------------+
| + : + + + :+ + :+ +. +.++.+ |
| + + + + +.++.++.++.++.+|
2.1e+07 +-+ |
| |
| |
2.05e+07 +-+ |
| |
2e+07 +-+ |
| |
| |
1.95e+07 +-+ |
OO OO O OO |
| O OO OO OO OO O |
1.9e+07 +-+--------------------------------------------------------------+
fio.time.elapsed_time
535 +-+-------------------------------------------------------------------+
| O O OO OO OO |
530 O-OO OO O O O |
525 +-+ O |
| |
520 +-+ |
515 +-+ |
| |
510 +-+ |
505 +-+ |
| +. +. +. +. .+ .++.++.+.++.+ |
500 +-+ ++.+.+ .++.+ +.++.+ +.+ ++.+ + +.|
495 +-+ +.+ .+.++.++.++.+ |
| + |
490 +-+-------------------------------------------------------------------+
fio.time.elapsed_time.max
535 +-+-------------------------------------------------------------------+
| O O OO OO OO |
530 O-OO OO O O O |
525 +-+ O |
| |
520 +-+ |
515 +-+ |
| |
510 +-+ |
505 +-+ |
| +. +. +. +. .+ .++.++.+.++.+ |
500 +-+ ++.+.+ .++.+ +.++.+ +.+ ++.+ + +.|
495 +-+ +.+ .+.++.++.++.+ |
| + |
490 +-+-------------------------------------------------------------------+
fio.time.file_system_outputs
1.72e+08 +-+--------------------------------------------------------------+
1.7e+08 +-+ : + + + :+ + :+ +. +.++.+ |
| + + + + +.++.++.++.++.+|
1.68e+08 +-+ |
1.66e+08 +-+ |
| |
1.64e+08 +-+ |
1.62e+08 +-+ |
1.6e+08 +-+ |
| |
1.58e+08 +-+ |
1.56e+08 +-+ |
OO OO O OO |
1.54e+08 +-+ O OO OO OO OO O |
1.52e+08 +-+--------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-skl-2sp8: 72 threads Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz with 192G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1HDD/btrfs/sync/x86_64-rhel-7.6/100%/debian-x86_64-2019-05-14.cgz/300s/write/lkp-skl-2sp8/128G/fio-basic/0x200005e
commit:
ff08148702 ("blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()")
b91f4a426e ("blk-mq: always call into the scheduler in blk_mq_make_request()")
ff0814870258907f b91f4a426e30203b622094e01f6
---------------- ---------------------------
%stddev %change %stddev
\ | \
38.22 +4.2 42.42 fio.latency_100us%
0.01 ± 29% +0.0 0.02 ± 8% fio.latency_10ms%
33.42 ± 4% -5.9 27.50 ± 9% fio.latency_10us%
0.02 ± 17% +0.0 0.04 ± 4% fio.latency_20ms%
10.11 ± 13% +6.6 16.76 ± 14% fio.latency_20us%
0.05 ± 5% +0.0 0.07 ± 8% fio.latency_250ms%
13.88 ± 18% -7.1 6.77 ± 26% fio.latency_4us%
0.04 ± 12% -0.0 0.02 ± 35% fio.latency_500us%
498.84 +7.9% 538.26 fio.time.elapsed_time
498.84 +7.9% 538.26 fio.time.elapsed_time.max
1.723e+08 -10.9% 1.535e+08 fio.time.file_system_outputs
139.50 ± 2% -8.1% 128.25 ± 3% fio.time.percent_of_cpu_this_job_got
349028 -7.4% 323192 ± 3% fio.time.voluntary_context_switches
21535431 -10.9% 19191998 fio.workload
280.35 -10.9% 249.83 fio.write_bw_MBps
62914560 +10.1% 69271552 ± 4% fio.write_clat_99%_us
1001578 +12.2% 1124073 fio.write_clat_mean_us
8442623 +12.1% 9463511 fio.write_clat_stddev
71769 -10.9% 63957 fio.write_iops
54452 -11.8% 48021 meminfo.Active(file)
1950699 -21.3% 1535926 meminfo.Dirty
177986 -17.2% 147322 meminfo.max_used_kB
8292 ±113% +375.5% 39434 ± 85% numa-numastat.node0.other_node
11529753 ± 3% -11.5% 10202247 ± 2% numa-numastat.node1.local_node
11544959 ± 3% -11.6% 10210605 ± 2% numa-numastat.node1.numa_hit
57.50 +3.9% 59.75 vmstat.cpu.id
172275 -17.5% 142207 vmstat.io.bo
3935 -14.7% 3356 ± 2% vmstat.system.cs
3990 ± 2% +20.5% 4808 ± 3% slabinfo.kmalloc-8k.active_objs
1052 +20.6% 1269 ± 3% slabinfo.kmalloc-8k.active_slabs
4209 +20.6% 5078 ± 3% slabinfo.kmalloc-8k.num_objs
1052 +20.6% 1269 ± 3% slabinfo.kmalloc-8k.num_slabs
993762 ± 4% -25.2% 743653 ± 7% numa-meminfo.node0.Dirty
8242 ± 86% +2124.3% 183341 ± 52% numa-meminfo.node0.Inactive(anon)
16097 ± 42% +1053.1% 185613 ± 50% numa-meminfo.node0.Shmem
946941 ± 4% -17.3% 783272 ± 3% numa-meminfo.node1.Dirty
256213 ± 2% -75.5% 62683 ±153% numa-meminfo.node1.Inactive(anon)
112316 ± 4% -6.7% 104793 ± 3% numa-meminfo.node1.KReclaimable
258811 -73.7% 68027 ±143% numa-meminfo.node1.Mapped
112316 ± 4% -6.7% 104793 ± 3% numa-meminfo.node1.SReclaimable
259933 ± 3% -73.7% 68310 ±136% numa-meminfo.node1.Shmem
57.87 +4.3% 60.34 ± 2% iostat.cpu.idle
40.31 ± 2% -5.7% 38.01 ± 3% iostat.cpu.iowait
1.77 ± 3% -9.1% 1.61 ± 2% iostat.cpu.system
44.25 ±100% -100.0% 0.00 iostat.sda.avgqu-sz.max
287.37 ±100% -100.0% 0.00 iostat.sda.await.max
1.41 ±100% -100.0% 0.00 iostat.sda.r_await.max
0.60 ±100% -100.0% 0.00 iostat.sda.svctm.max
287.37 ±100% -100.0% 0.00 iostat.sda.w_await.max
196.93 ±100% +142.4% 477.30 iostat.sdb.await
196.94 ±100% +142.4% 477.31 iostat.sdb.w_await
269628 +5.7% 285063 ± 5% sched_debug.cpu.clock.avg
269632 +5.7% 285066 ± 5% sched_debug.cpu.clock.max
269625 +5.7% 285059 ± 5% sched_debug.cpu.clock.min
269628 +5.7% 285063 ± 5% sched_debug.cpu.clock_task.avg
269632 +5.7% 285066 ± 5% sched_debug.cpu.clock_task.max
269625 +5.7% 285059 ± 5% sched_debug.cpu.clock_task.min
1037 ± 2% +6.9% 1108 ± 4% sched_debug.cpu.curr->pid.stddev
20423 +27.9% 26120 ± 6% sched_debug.cpu.sched_count.avg
269625 +5.7% 285059 ± 5% sched_debug.cpu_clk
266935 +5.8% 282369 ± 5% sched_debug.ktime
269973 +5.7% 285400 ± 5% sched_debug.sched_clk
248672 ± 4% -25.1% 186254 ± 7% numa-vmstat.node0.nr_dirty
2063 ± 85% +2119.7% 45797 ± 52% numa-vmstat.node0.nr_inactive_anon
4027 ± 42% +1051.4% 46365 ± 50% numa-vmstat.node0.nr_shmem
2063 ± 85% +2119.7% 45797 ± 52% numa-vmstat.node0.nr_zone_inactive_anon
8708 ±105% +237.7% 29405 ± 54% numa-vmstat.node0.numa_other
236734 ± 4% -17.3% 195893 ± 3% numa-vmstat.node1.nr_dirty
64096 ± 2% -75.5% 15697 ±153% numa-vmstat.node1.nr_inactive_anon
64841 -73.7% 17067 ±143% numa-vmstat.node1.nr_mapped
65026 ± 3% -73.7% 17104 ±136% numa-vmstat.node1.nr_shmem
28075 ± 4% -6.7% 26189 ± 3% numa-vmstat.node1.nr_slab_reclaimable
5470165 -13.0% 4761134 ± 4% numa-vmstat.node1.nr_written
64096 ± 2% -75.5% 15697 ±153% numa-vmstat.node1.nr_zone_inactive_anon
9754289 ± 2% -9.5% 8829087 ± 3% numa-vmstat.node1.numa_hit
9570245 ± 2% -9.6% 8651021 ± 3% numa-vmstat.node1.numa_local
37089 -9.7% 33482 softirqs.BLOCK
63404 +12.2% 71138 ± 2% softirqs.CPU24.SCHED
160959 ± 6% +47.3% 237139 ± 18% softirqs.CPU24.TIMER
113209 ± 2% -9.2% 102824 ± 4% softirqs.CPU26.RCU
160213 ± 7% +10.1% 176471 ± 4% softirqs.CPU36.TIMER
161611 ± 10% +10.8% 179104 ± 4% softirqs.CPU38.TIMER
158202 ± 8% +14.9% 181805 ± 7% softirqs.CPU41.TIMER
161394 ± 8% +9.8% 177208 ± 4% softirqs.CPU44.TIMER
61191 ± 2% +9.7% 67142 ± 2% softirqs.CPU45.SCHED
158166 ± 8% +16.4% 184035 ± 10% softirqs.CPU45.TIMER
158609 ± 8% +11.0% 176046 ± 4% softirqs.CPU48.TIMER
159505 ± 8% +10.2% 175812 ± 4% softirqs.CPU49.TIMER
163999 ± 8% +11.0% 182032 ± 7% softirqs.CPU5.TIMER
159346 ± 8% +10.9% 176725 ± 4% softirqs.CPU51.TIMER
159810 ± 8% +10.5% 176531 ± 4% softirqs.CPU52.TIMER
62252 +10.2% 68624 ± 5% softirqs.CPU60.SCHED
153923 ± 5% +32.2% 203464 ± 25% softirqs.CPU60.TIMER
162719 ± 7% +14.9% 186976 ± 11% softirqs.CPU9.TIMER
0.00 +1.2 1.21 ± 43% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write
0.18 ±173% +1.2 1.40 ± 42% perf-profile.calltrace.cycles-pp._raw_spin_lock.__btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter
0.00 +1.2 1.23 ± 45% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
0.20 ±173% +1.2 1.43 ± 42% perf-profile.calltrace.cycles-pp.__btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.20 ±173% +1.2 1.45 ± 41% perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.14 ±173% +1.3 1.39 ± 45% perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter
0.65 ± 64% +1.6 2.27 ± 38% perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
0.21 ±173% +1.7 1.88 ± 39% perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.tick_nohz_idle_got_tick
0.13 ± 62% +0.3 0.39 ± 37% perf-profile.children.cycles-pp.can_overcommit
0.30 ± 97% +1.1 1.43 ± 42% perf-profile.children.cycles-pp.__btrfs_block_rsv_release
0.31 ± 96% +1.1 1.45 ± 41% perf-profile.children.cycles-pp.btrfs_inode_rsv_release
0.41 ± 68% +1.5 1.88 ± 39% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
0.67 ± 57% +1.6 2.27 ± 38% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
0.63 ± 71% +2.3 2.96 ± 32% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.41 ± 57% +2.7 4.14 ± 31% perf-profile.children.cycles-pp._raw_spin_lock
0.04 ± 58% +0.1 0.10 ± 49% perf-profile.self.cycles-pp.btrfs_reserve_metadata_bytes
0.03 ±100% +0.1 0.10 ± 27% perf-profile.self.cycles-pp.can_overcommit
0.63 ± 72% +2.3 2.95 ± 32% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
4.329e+08 ± 47% -33.5% 2.88e+08 ± 4% perf-stat.i.branch-instructions
3936 -14.8% 3352 ± 2% perf-stat.i.context-switches
6.07 ± 15% +38.8% 8.43 ± 16% perf-stat.i.cpu-migrations
5.515e+08 ± 37% -30.1% 3.856e+08 perf-stat.i.dTLB-loads
2.271e+08 ± 2% -10.8% 2.026e+08 perf-stat.i.dTLB-stores
1.978e+09 ± 37% -27.9% 1.426e+09 ± 3% perf-stat.i.instructions
539679 ± 3% -10.8% 481609 ± 3% perf-stat.i.node-load-misses
279956 ± 3% -14.1% 240383 ± 3% perf-stat.i.node-loads
94478 ± 23% -37.7% 58900 ± 7% perf-stat.i.node-stores
67.13 ± 14% +11.7 78.86 perf-stat.overall.node-store-miss-rate%
4.321e+08 ± 47% -33.5% 2.874e+08 ± 4% perf-stat.ps.branch-instructions
3928 -14.8% 3346 ± 2% perf-stat.ps.context-switches
6.07 ± 15% +38.7% 8.41 ± 16% perf-stat.ps.cpu-migrations
5.505e+08 ± 38% -30.1% 3.847e+08 perf-stat.ps.dTLB-loads
2.267e+08 ± 2% -10.9% 2.021e+08 perf-stat.ps.dTLB-stores
1.974e+09 ± 37% -28.0% 1.423e+09 ± 3% perf-stat.ps.instructions
538662 ± 3% -10.8% 480433 ± 3% perf-stat.ps.node-load-misses
279289 ± 3% -14.1% 239791 ± 3% perf-stat.ps.node-loads
94348 ± 23% -37.7% 58757 ± 7% perf-stat.ps.node-stores
13621 -12.0% 11980 proc-vmstat.nr_active_file
21578525 -10.9% 19231325 proc-vmstat.nr_dirtied
486311 -21.3% 382498 proc-vmstat.nr_dirty
17619220 -7.7% 16254004 proc-vmstat.nr_file_pages
31271641 +4.4% 32640654 proc-vmstat.nr_free_pages
66113 -7.0% 61506 proc-vmstat.nr_inactive_anon
17266628 -7.9% 15907732 proc-vmstat.nr_inactive_file
67965 -6.8% 63323 proc-vmstat.nr_mapped
2190 -4.6% 2089 proc-vmstat.nr_page_table_pages
69007 ± 2% -8.0% 63480 proc-vmstat.nr_shmem
56508 -5.5% 53422 proc-vmstat.nr_slab_reclaimable
6141712 -1.3% 6064303 proc-vmstat.nr_writeback
21578549 -10.9% 19231357 proc-vmstat.nr_written
13621 -12.0% 11980 proc-vmstat.nr_zone_active_file
66113 -7.0% 61506 proc-vmstat.nr_zone_inactive_anon
17266628 -7.9% 15907732 proc-vmstat.nr_zone_inactive_file
6628866 -2.7% 6447374 proc-vmstat.nr_zone_write_pending
22781579 -10.1% 20474832 proc-vmstat.numa_hit
22758080 -10.1% 20451301 proc-vmstat.numa_local
11840 ± 22% -17.0% 9824 ± 3% proc-vmstat.pgactivate
22943031 -9.9% 20668524 proc-vmstat.pgalloc_normal
1392625 +7.0% 1490792 proc-vmstat.pgfault
22469883 -8.1% 20650344 proc-vmstat.pgfree
86439201 -10.9% 77036052 proc-vmstat.pgpgout
73214 -9.9% 65995 interrupts.272:PCI-MSI.376832-edge.ahci[0000:00:17.0]
0.00 +1.3e+104% 130.75 ±153% interrupts.41:PCI-MSI.31981574-edge.i40e-eth0-TxRx-5
0.75 ±173% +5466.7% 41.75 ±149% interrupts.49:PCI-MSI.31981582-edge.i40e-eth0-TxRx-13
0.25 ±173% +1.5e+05% 369.00 ±170% interrupts.69:PCI-MSI.31981602-edge.i40e-eth0-TxRx-33
2.00 ±173% +7862.5% 159.25 ±135% interrupts.94:PCI-MSI.31981627-edge.i40e-eth0-TxRx-58
4558 ± 8% +14.1% 5199 interrupts.CPU0.CAL:Function_call_interrupts
4799 ± 2% +10.0% 5278 interrupts.CPU1.CAL:Function_call_interrupts
1172 ±131% -84.1% 186.25 ± 9% interrupts.CPU1.NMI:Non-maskable_interrupts
1172 ±131% -84.1% 186.25 ± 9% interrupts.CPU1.PMI:Performance_monitoring_interrupts
4787 ± 2% +10.2% 5273 interrupts.CPU10.CAL:Function_call_interrupts
51.00 ± 29% +255.4% 181.25 ± 91% interrupts.CPU10.RES:Rescheduling_interrupts
4790 ± 2% +10.0% 5270 interrupts.CPU11.CAL:Function_call_interrupts
4781 ± 2% +10.0% 5261 interrupts.CPU12.CAL:Function_call_interrupts
0.75 ±173% +5400.0% 41.25 ±152% interrupts.CPU13.49:PCI-MSI.31981582-edge.i40e-eth0-TxRx-13
4799 ± 2% +9.7% 5267 interrupts.CPU13.CAL:Function_call_interrupts
4776 ± 2% +9.1% 5209 ± 3% interrupts.CPU14.CAL:Function_call_interrupts
4782 ± 2% +10.1% 5263 interrupts.CPU15.CAL:Function_call_interrupts
4770 ± 2% +10.2% 5255 interrupts.CPU16.CAL:Function_call_interrupts
4803 ± 2% +15.3% 5539 ± 4% interrupts.CPU18.CAL:Function_call_interrupts
4795 ± 2% +10.1% 5277 interrupts.CPU2.CAL:Function_call_interrupts
849.75 ±130% -82.1% 152.50 ± 26% interrupts.CPU23.NMI:Non-maskable_interrupts
849.75 ±130% -82.1% 152.50 ± 26% interrupts.CPU23.PMI:Performance_monitoring_interrupts
182.50 ± 82% -74.1% 47.25 ± 31% interrupts.CPU25.RES:Rescheduling_interrupts
871.75 ±123% -79.1% 182.00 ± 6% interrupts.CPU26.NMI:Non-maskable_interrupts
871.75 ±123% -79.1% 182.00 ± 6% interrupts.CPU26.PMI:Performance_monitoring_interrupts
164.75 ±112% -99.5% 0.75 ±110% interrupts.CPU29.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
4789 ± 2% +10.2% 5278 interrupts.CPU3.CAL:Function_call_interrupts
94.25 ± 57% -58.6% 39.00 ± 35% interrupts.CPU30.RES:Rescheduling_interrupts
4304 ± 20% +26.5% 5445 ± 3% interrupts.CPU34.CAL:Function_call_interrupts
32.00 ± 34% +1215.6% 421.00 ±128% interrupts.CPU36.RES:Rescheduling_interrupts
3705 ± 48% +40.4% 5201 interrupts.CPU39.CAL:Function_call_interrupts
4791 ± 2% +10.1% 5277 interrupts.CPU4.CAL:Function_call_interrupts
771.75 ±106% -75.5% 188.75 ± 19% interrupts.CPU40.NMI:Non-maskable_interrupts
771.75 ±106% -75.5% 188.75 ± 19% interrupts.CPU40.PMI:Performance_monitoring_interrupts
21.25 ± 53% +251.8% 74.75 ± 68% interrupts.CPU44.RES:Rescheduling_interrupts
1170 ±132% -85.0% 175.75 ± 36% interrupts.CPU49.NMI:Non-maskable_interrupts
1170 ±132% -85.0% 175.75 ± 36% interrupts.CPU49.PMI:Performance_monitoring_interrupts
152.25 ±126% +235.6% 511.00 ± 94% interrupts.CPU5.RES:Rescheduling_interrupts
27.25 ± 64% +200.0% 81.75 ± 32% interrupts.CPU53.RES:Rescheduling_interrupts
211.50 ± 16% +29.8% 274.50 ± 23% interrupts.CPU57.NMI:Non-maskable_interrupts
211.50 ± 16% +29.8% 274.50 ± 23% interrupts.CPU57.PMI:Performance_monitoring_interrupts
1.75 ±173% +8957.1% 158.50 ±135% interrupts.CPU58.94:PCI-MSI.31981627-edge.i40e-eth0-TxRx-58
192.25 ± 88% -81.5% 35.50 ± 48% interrupts.CPU59.RES:Rescheduling_interrupts
4796 ± 2% +10.4% 5296 interrupts.CPU6.CAL:Function_call_interrupts
226.00 ±129% -87.6% 28.00 ± 91% interrupts.CPU65.RES:Rescheduling_interrupts
136.25 ± 73% -62.0% 51.75 ± 51% interrupts.CPU66.RES:Rescheduling_interrupts
4799 ± 2% +10.0% 5278 interrupts.CPU7.CAL:Function_call_interrupts
4794 ± 2% +11.9% 5363 ± 4% interrupts.CPU70.CAL:Function_call_interrupts
150.00 ± 48% -72.7% 41.00 ± 58% interrupts.CPU71.RES:Rescheduling_interrupts
4799 ± 2% +9.4% 5248 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
4795 ± 2% +9.9% 5269 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/lru] 4084fff845: reaim.jobs_per_min -49.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -49.9% regression of reaim.jobs_per_min due to commit:
commit: 4084fff84504864a0eed35b6f68571c23c3d7f38 ("mm/lru: replace pgdat lru_lock with lruvec lock")
https://github.com/alexshi/linux.git lru_lock
in testcase: reaim
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with following parameters:
runtime: 300s
nr_task: 1000t
test: page_test
cpufreq_governor: performance
ucode: 0x42e
test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min -51.1% regression |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1000 |
| | runtime=300s |
| | test=page_test |
| | ucode=0x43 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000t/debian-x86_64-2019-05-14.cgz/300s/lkp-ivb-2ep1/page_test/reaim/0x42e
commit:
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
4084fff845 ("mm/lru: replace pgdat lru_lock with lruvec lock")
3145e78472f7ad74 4084fff84504864a0eed35b6f68
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 86% 3:4 perf-profile.calltrace.cycles-pp.error_entry
7:4 -107% 3:4 perf-profile.children.cycles-pp.error_entry
6:4 -83% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
972.70 +112.5% 2066 reaim.child_systime
259395 -49.9% 129999 reaim.jobs_per_min
259.40 -49.9% 130.00 reaim.jobs_per_min_child
63.00 ± 4% +21.4% 76.46 ± 3% reaim.jti
259965 -49.9% 130272 reaim.max_jobs_per_min
23.13 +99.5% 46.15 reaim.parent_time
36.54 ± 6% -36.8% 23.08 ± 11% reaim.std_dev_percent
6.03 ± 5% +50.0% 9.05 ± 9% reaim.std_dev_time
302.86 +11.6% 337.84 reaim.time.elapsed_time
302.86 +11.6% 337.84 reaim.time.elapsed_time.max
2486522 ± 5% +52.6% 3794364 reaim.time.involuntary_context_switches
1.921e+09 -41.7% 1.12e+09 reaim.time.minor_page_faults
4387 +4.4% 4579 reaim.time.percent_of_cpu_this_job_got
11674 +23.9% 14468 reaim.time.system_time
1614 -37.7% 1005 ± 6% reaim.time.user_time
47643 -40.8% 28185 reaim.time.voluntary_context_switches
1200000 -41.7% 700000 reaim.workload
63709624 ± 65% -90.2% 6220124 ±137% cpuidle.C1E.time
458452 ± 85% -86.8% 60682 ±153% cpuidle.C1E.usage
1.047e+09 ± 13% -32.0% 7.116e+08 ± 9% cpuidle.C6.time
1312334 ± 2% -37.2% 824087 ± 5% cpuidle.C6.usage
8.80 ± 2% -4.1 4.69 ± 2% mpstat.cpu.all.idle%
0.00 ± 51% -0.0 0.00 ± 75% mpstat.cpu.all.soft%
80.05 +9.0 89.04 mpstat.cpu.all.sys%
11.14 -4.9 6.26 ± 6% mpstat.cpu.all.usr%
133885 ± 21% +49.9% 200661 ± 6% numa-meminfo.node1.Active
133849 ± 21% +49.9% 200594 ± 6% numa-meminfo.node1.Active(anon)
127196 ± 19% +52.1% 193485 ± 5% numa-meminfo.node1.AnonPages
871151 ± 9% +15.9% 1009475 ± 6% numa-meminfo.node1.MemUsed
1.018e+09 -41.8% 5.926e+08 numa-numastat.node0.local_node
1.018e+09 -41.8% 5.926e+08 numa-numastat.node0.numa_hit
1.027e+09 -41.4% 6.013e+08 numa-numastat.node1.local_node
1.027e+09 -41.4% 6.014e+08 numa-numastat.node1.numa_hit
79.00 +11.7% 88.25 vmstat.cpu.sy
10.75 ± 4% -48.8% 5.50 ± 9% vmstat.cpu.us
649.25 ± 2% +24.7% 809.50 vmstat.procs.r
9216 ± 4% +32.1% 12178 vmstat.system.cs
329710 +10.2% 363389 meminfo.Active
329574 +10.2% 363253 meminfo.Active(anon)
310477 +10.5% 342973 meminfo.AnonPages
1649970 +17.2% 1934191 meminfo.Committed_AS
34403 +17.4% 40399 meminfo.PageTables
28386 ± 3% +16.9% 33196 meminfo.Shmem
5.086e+08 -41.7% 2.965e+08 numa-vmstat.node0.numa_hit
5.086e+08 -41.7% 2.965e+08 numa-vmstat.node0.numa_local
33455 ± 21% +49.8% 50124 ± 7% numa-vmstat.node1.nr_active_anon
31789 ± 19% +52.1% 48355 ± 5% numa-vmstat.node1.nr_anon_pages
33454 ± 21% +49.8% 50124 ± 7% numa-vmstat.node1.nr_zone_active_anon
5.126e+08 -41.3% 3.01e+08 numa-vmstat.node1.numa_hit
5.125e+08 -41.3% 3.009e+08 numa-vmstat.node1.numa_local
2732 +4.3% 2850 turbostat.Avg_MHz
458258 ± 85% -86.8% 60455 ±154% turbostat.C1E
0.43 ± 65% -0.4 0.04 ±144% turbostat.C1E%
1302982 ± 2% -37.5% 814693 ± 5% turbostat.C6
7.14 ± 12% -2.8 4.35 ± 9% turbostat.C6%
4.29 ± 15% -56.9% 1.85 ± 19% turbostat.CPU%c1
4.27 ± 20% -34.4% 2.80 ± 13% turbostat.CPU%c6
2.71 ± 18% -35.2% 1.76 ± 15% turbostat.Pkg%pc2
82312 +10.3% 90815 proc-vmstat.nr_active_anon
77547 +10.6% 85754 proc-vmstat.nr_anon_pages
3959 +1.4% 4015 proc-vmstat.nr_inactive_anon
22345 +9.1% 24381 proc-vmstat.nr_kernel_stack
8563 +17.9% 10097 proc-vmstat.nr_page_table_pages
7098 ± 3% +17.0% 8301 proc-vmstat.nr_shmem
82312 +10.3% 90815 proc-vmstat.nr_zone_active_anon
3959 +1.4% 4015 proc-vmstat.nr_zone_inactive_anon
114831 ± 6% -48.2% 59451 ± 5% proc-vmstat.numa_hint_faults
2.045e+09 -41.6% 1.194e+09 proc-vmstat.numa_hit
2.045e+09 -41.6% 1.194e+09 proc-vmstat.numa_local
16046 -1.4% 15819 proc-vmstat.numa_other
112800 ± 7% -46.2% 60740 ± 7% proc-vmstat.numa_pages_migrated
218783 ± 6% -31.9% 148968 ± 20% proc-vmstat.numa_pte_updates
4970 ± 8% +28.3% 6376 ± 3% proc-vmstat.pgactivate
2.045e+09 -41.6% 1.194e+09 proc-vmstat.pgalloc_normal
1.922e+09 -41.6% 1.121e+09 proc-vmstat.pgfault
2.045e+09 -41.6% 1.194e+09 proc-vmstat.pgfree
112800 ± 7% -46.2% 60740 ± 7% proc-vmstat.pgmigrate_success
7988 -24.3% 6044 slabinfo.Acpi-State.active_objs
7988 -24.3% 6044 slabinfo.Acpi-State.num_objs
33449 +12.8% 37727 ± 5% slabinfo.anon_vma.active_objs
57933 +15.7% 67021 ± 5% slabinfo.anon_vma_chain.active_objs
936.75 +13.6% 1063 ± 5% slabinfo.anon_vma_chain.active_slabs
59987 +13.5% 68108 ± 5% slabinfo.anon_vma_chain.num_objs
936.75 +13.6% 1063 ± 5% slabinfo.anon_vma_chain.num_slabs
7091 ± 4% -18.4% 5785 ± 8% slabinfo.cred_jar.active_objs
7112 ± 4% -18.5% 5794 ± 8% slabinfo.cred_jar.num_objs
8309 -25.5% 6191 slabinfo.files_cache.active_objs
8351 -25.9% 6191 slabinfo.files_cache.num_objs
45787 ± 3% -12.0% 40310 ± 3% slabinfo.kmalloc-32.active_objs
45875 ± 2% -12.0% 40381 ± 3% slabinfo.kmalloc-32.num_objs
8369 ± 4% -23.8% 6376 ± 5% slabinfo.pid.active_objs
8375 ± 4% -23.9% 6376 ± 5% slabinfo.pid.num_objs
6971 ± 6% -16.1% 5849 ± 8% slabinfo.task_delay_info.active_objs
7000 ± 6% -16.2% 5864 ± 8% slabinfo.task_delay_info.num_objs
47303 +13.6% 53728 slabinfo.vm_area_struct.active_objs
1217 +11.6% 1358 slabinfo.vm_area_struct.active_slabs
48695 +11.7% 54377 slabinfo.vm_area_struct.num_objs
1217 +11.6% 1358 slabinfo.vm_area_struct.num_slabs
238.75 ± 6% +37.3% 327.75 ± 27% interrupts.35:PCI-MSI.2621441-edge.eth0-TxRx-0
168.00 ± 4% +11.6% 187.50 ± 4% interrupts.37:PCI-MSI.2621443-edge.eth0-TxRx-2
146747 +8.3% 158955 interrupts.CAL:Function_call_interrupts
624.50 ± 32% +86.1% 1162 ± 39% interrupts.CPU0.RES:Rescheduling_interrupts
3072 ± 4% +13.3% 3480 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
367.50 ±152% +290.3% 1434 ± 35% interrupts.CPU10.RES:Rescheduling_interrupts
3055 ± 4% +12.4% 3434 ± 4% interrupts.CPU11.CAL:Function_call_interrupts
2535 ± 27% +29.0% 3272 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
1544 ± 33% -60.1% 615.50 ± 58% interrupts.CPU14.RES:Rescheduling_interrupts
1237 ± 29% -50.7% 610.50 ± 69% interrupts.CPU16.RES:Rescheduling_interrupts
3121 +7.9% 3367 ± 5% interrupts.CPU20.CAL:Function_call_interrupts
1365 ± 19% -40.4% 814.00 ± 26% interrupts.CPU22.RES:Rescheduling_interrupts
3006 ± 2% +10.5% 3322 ± 6% interrupts.CPU23.CAL:Function_call_interrupts
238.75 ± 6% +37.3% 327.75 ± 27% interrupts.CPU24.35:PCI-MSI.2621441-edge.eth0-TxRx-0
3027 ± 2% +14.6% 3468 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
353.75 ±164% +523.0% 2204 ± 64% interrupts.CPU25.RES:Rescheduling_interrupts
168.00 ± 4% +11.6% 187.50 ± 4% interrupts.CPU26.37:PCI-MSI.2621443-edge.eth0-TxRx-2
754.25 ± 71% +135.3% 1775 ± 31% interrupts.CPU30.RES:Rescheduling_interrupts
874.25 ± 62% -54.5% 397.50 ± 35% interrupts.CPU36.RES:Rescheduling_interrupts
2347 ± 47% +44.5% 3391 ± 5% interrupts.CPU4.CAL:Function_call_interrupts
2714 ± 57% +54.2% 4185 interrupts.CPU40.NMI:Non-maskable_interrupts
2714 ± 57% +54.2% 4185 interrupts.CPU40.PMI:Performance_monitoring_interrupts
3065 ± 2% +15.3% 3532 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
907.00 ±166% +475.0% 5215 ± 34% interrupts.CPU42.NMI:Non-maskable_interrupts
907.00 ±166% +475.0% 5215 ± 34% interrupts.CPU42.PMI:Performance_monitoring_interrupts
3117 +10.2% 3435 ± 5% interrupts.CPU44.CAL:Function_call_interrupts
2899 ± 11% +19.5% 3463 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
3166 ± 2% +10.6% 3501 ± 3% interrupts.CPU5.CAL:Function_call_interrupts
3734 ± 69% +67.1% 6241 ± 33% interrupts.CPU5.NMI:Non-maskable_interrupts
3734 ± 69% +67.1% 6241 ± 33% interrupts.CPU5.PMI:Performance_monitoring_interrupts
3147 ± 2% +9.0% 3431 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
3070 ± 4% +11.8% 3431 ± 5% interrupts.CPU9.CAL:Function_call_interrupts
126923 ± 10% +22.5% 155478 ± 3% interrupts.NMI:Non-maskable_interrupts
126923 ± 10% +22.5% 155478 ± 3% interrupts.PMI:Performance_monitoring_interrupts
48117 ± 2% +9.4% 52662 ± 3% interrupts.RES:Rescheduling_interrupts
210.75 ± 57% -76.4% 49.75 ± 52% interrupts.TLB:TLB_shootdowns
2033 ± 32% +279.6% 7718 ± 22% sched_debug.cfs_rq:/.load.min
1.75 ± 34% +340.5% 7.71 ± 15% sched_debug.cfs_rq:/.load_avg.min
31942491 ± 10% -62.1% 12118976 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
63995238 ± 11% -68.7% 20043786 ± 14% sched_debug.cfs_rq:/.min_vruntime.max
3934145 ± 2% +53.7% 6046451 ± 17% sched_debug.cfs_rq:/.min_vruntime.min
27776660 ± 12% -80.0% 5568603 ± 32% sched_debug.cfs_rq:/.min_vruntime.stddev
1.00 ± 75% +141.0% 2.42 ± 49% sched_debug.cfs_rq:/.removed.util_avg.avg
36.75 ± 59% +91.6% 70.42 ± 20% sched_debug.cfs_rq:/.removed.util_avg.max
5.79 ± 63% +109.6% 12.13 ± 27% sched_debug.cfs_rq:/.removed.util_avg.stddev
43.75 ± 9% -24.0% 33.25 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.max
1.37 ± 46% +400.0% 6.88 ± 24% sched_debug.cfs_rq:/.runnable_load_avg.min
15.81 ± 4% -42.9% 9.03 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.stddev
1963 ± 36% +290.9% 7675 ± 23% sched_debug.cfs_rq:/.runnable_weight.min
50614534 ± 38% -82.2% 9011524 ± 57% sched_debug.cfs_rq:/.spread0.max
27776751 ± 12% -80.0% 5568954 ± 32% sched_debug.cfs_rq:/.spread0.stddev
1385 ± 2% -11.2% 1230 ± 3% sched_debug.cfs_rq:/.util_avg.max
460.29 ± 19% +43.5% 660.62 ± 5% sched_debug.cfs_rq:/.util_avg.min
165.80 ± 12% -31.9% 112.91 ± 15% sched_debug.cfs_rq:/.util_avg.stddev
3397 ± 6% -31.9% 2314 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.max
33.25 ± 87% +465.5% 188.04 ± 43% sched_debug.cfs_rq:/.util_est_enqueued.min
900.07 ± 7% -39.7% 542.94 ± 16% sched_debug.cfs_rq:/.util_est_enqueued.stddev
13.23 ± 38% +109.9% 27.76 ± 25% sched_debug.cpu.clock.stddev
13.23 ± 38% +109.9% 27.76 ± 25% sched_debug.cpu.clock_task.stddev
10646 -30.5% 7398 sched_debug.cpu.curr->pid.avg
11593 -27.1% 8448 sched_debug.cpu.curr->pid.max
1064 ± 28% -49.4% 538.36 ± 38% sched_debug.cpu.curr->pid.stddev
0.00 ± 9% +23.4% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
26.38 ± 6% -18.5% 21.50 ± 3% sched_debug.cpu.nr_running.max
1.37 ± 39% +345.5% 6.12 ± 27% sched_debug.cpu.nr_running.min
9.97 ± 9% -46.5% 5.34 ± 13% sched_debug.cpu.nr_running.stddev
30699 ± 4% +30.0% 39902 sched_debug.cpu.nr_switches.avg
43269 ± 2% +9.1% 47225 sched_debug.cpu.nr_switches.max
19331 ± 12% +87.3% 36215 sched_debug.cpu.nr_switches.min
8261 ± 16% -72.9% 2235 sched_debug.cpu.nr_switches.stddev
-15.96 +38.4% -22.08 sched_debug.cpu.nr_uninterruptible.min
30636 ± 4% +29.5% 39659 sched_debug.cpu.sched_count.avg
19009 ± 12% +86.5% 35450 sched_debug.cpu.sched_count.min
11253 ± 11% -28.3% 8063 ± 10% sched_debug.cpu.sched_count.stddev
261.80 ± 4% -37.8% 162.72 ± 7% sched_debug.cpu.sched_goidle.avg
65.79 ± 15% -54.2% 30.12 ± 13% sched_debug.cpu.sched_goidle.min
333.13 ± 10% -22.1% 259.54 ± 16% sched_debug.cpu.sched_goidle.stddev
2413 -9.0% 2196 sched_debug.cpu.ttwu_count.avg
5902 ± 2% -10.7% 5273 ± 6% sched_debug.cpu.ttwu_count.max
308.58 ± 10% +94.1% 599.08 ± 27% sched_debug.cpu.ttwu_count.min
1855 -34.4% 1217 ± 12% sched_debug.cpu.ttwu_count.stddev
1440 ± 2% -24.8% 1083 ± 13% sched_debug.cpu.ttwu_local.stddev
21655 ± 9% +80.7% 39121 softirqs.CPU0.RCU
21912 ± 9% +79.8% 39388 ± 3% softirqs.CPU1.RCU
21143 ± 9% +81.6% 38395 softirqs.CPU10.RCU
21056 ± 10% +84.7% 38900 ± 2% softirqs.CPU11.RCU
22177 ± 8% +76.8% 39220 softirqs.CPU12.RCU
20795 ± 10% +83.8% 38229 softirqs.CPU13.RCU
21124 ± 10% +80.1% 38037 softirqs.CPU14.RCU
20984 ± 10% +79.0% 37570 ± 2% softirqs.CPU15.RCU
21866 ± 10% +77.6% 38843 ± 3% softirqs.CPU16.RCU
21715 ± 10% +82.7% 39673 ± 3% softirqs.CPU17.RCU
21994 ± 11% +79.3% 39437 ± 3% softirqs.CPU18.RCU
21458 ± 9% +80.0% 38619 ± 2% softirqs.CPU19.RCU
22182 ± 12% +75.3% 38876 ± 2% softirqs.CPU2.RCU
21992 ± 10% +74.7% 38411 ± 3% softirqs.CPU20.RCU
21346 ± 11% +79.2% 38257 ± 3% softirqs.CPU21.RCU
21222 ± 11% +84.1% 39077 softirqs.CPU22.RCU
21048 ± 11% +82.9% 38501 softirqs.CPU23.RCU
20986 ± 10% +81.7% 38125 ± 3% softirqs.CPU24.RCU
21408 ± 12% +75.6% 37599 ± 2% softirqs.CPU25.RCU
23270 ± 14% +76.0% 40965 ± 3% softirqs.CPU26.RCU
22021 ± 9% +75.3% 38603 softirqs.CPU27.RCU
23292 ± 11% +63.7% 38135 ± 2% softirqs.CPU28.RCU
22004 ± 9% +80.5% 39721 ± 6% softirqs.CPU29.RCU
22604 ± 9% +75.3% 39633 ± 5% softirqs.CPU3.RCU
21583 ± 9% +79.3% 38696 softirqs.CPU30.RCU
21686 ± 11% +77.0% 38381 ± 2% softirqs.CPU31.RCU
20877 ± 9% +80.0% 37570 softirqs.CPU32.RCU
21276 ± 10% +75.8% 37410 ± 3% softirqs.CPU33.RCU
22364 ± 11% +69.1% 37816 ± 5% softirqs.CPU34.RCU
20680 ± 10% +81.3% 37487 ± 2% softirqs.CPU35.RCU
21866 ± 12% +79.5% 39241 ± 2% softirqs.CPU36.RCU
21542 ± 10% +80.3% 38843 ± 4% softirqs.CPU37.RCU
21467 ± 11% +81.9% 39043 ± 2% softirqs.CPU38.RCU
21409 ± 9% +83.8% 39356 softirqs.CPU39.RCU
22182 ± 11% +73.8% 38560 softirqs.CPU4.RCU
21401 ± 10% +81.1% 38758 softirqs.CPU40.RCU
21708 ± 10% +79.3% 38927 ± 2% softirqs.CPU41.RCU
21478 ± 10% +81.7% 39028 ± 2% softirqs.CPU42.RCU
21230 ± 10% +85.9% 39463 ± 4% softirqs.CPU43.RCU
21513 ± 10% +79.1% 38538 softirqs.CPU44.RCU
21474 ± 9% +80.4% 38736 softirqs.CPU45.RCU
21859 ± 11% +78.8% 39094 ± 3% softirqs.CPU46.RCU
20910 ± 11% +82.5% 38151 ± 3% softirqs.CPU47.RCU
21785 ± 10% +76.5% 38444 softirqs.CPU5.RCU
21838 ± 10% +76.2% 38489 ± 3% softirqs.CPU6.RCU
21716 ± 10% +79.6% 39006 softirqs.CPU7.RCU
22478 ± 7% +74.6% 39255 ± 3% softirqs.CPU8.RCU
21390 ± 10% +81.3% 38777 softirqs.CPU9.RCU
1038997 ± 10% +78.9% 1858430 softirqs.RCU
302512 -19.7% 242882 ± 2% softirqs.SCHED
9.30 ± 3% -7.5% 8.60 perf-stat.i.MPKI
1.005e+10 -15.9% 8.454e+09 perf-stat.i.branch-instructions
0.98 ± 5% -0.4 0.60 ± 3% perf-stat.i.branch-miss-rate%
64363700 -36.6% 40808867 perf-stat.i.branch-misses
2.18 ± 4% +0.3 2.49 ± 4% perf-stat.i.cache-miss-rate%
3635806 ± 3% +49.4% 5433123 ± 4% perf-stat.i.cache-misses
3.788e+08 -20.2% 3.023e+08 perf-stat.i.cache-references
9291 ± 4% +31.9% 12254 perf-stat.i.context-switches
2.89 +26.2% 3.65 perf-stat.i.cpi
1.311e+11 +4.2% 1.366e+11 perf-stat.i.cpu-cycles
455.73 ± 2% -15.5% 384.95 ± 10% perf-stat.i.cpu-migrations
52425 ± 4% -39.6% 31677 ± 4% perf-stat.i.cycles-between-cache-misses
1.60 ± 9% -0.5 1.07 ± 7% perf-stat.i.dTLB-load-miss-rate%
2.122e+08 ± 10% -49.1% 1.081e+08 ± 7% perf-stat.i.dTLB-load-misses
1.288e+10 -22.6% 9.969e+09 perf-stat.i.dTLB-loads
55918772 ± 2% -46.7% 29779156 perf-stat.i.dTLB-store-misses
8.579e+09 -46.6% 4.582e+09 perf-stat.i.dTLB-stores
25169306 -43.7% 14172167 perf-stat.i.iTLB-load-misses
467419 ± 2% -53.8% 216106 ± 6% perf-stat.i.iTLB-loads
4.739e+10 -20.5% 3.767e+10 perf-stat.i.instructions
1850 +42.6% 2637 perf-stat.i.instructions-per-iTLB-miss
0.35 -22.2% 0.28 perf-stat.i.ipc
6349930 -47.7% 3319868 perf-stat.i.minor-faults
21.87 ± 2% +2.4 24.29 ± 5% perf-stat.i.node-load-miss-rate%
191723 ± 2% +72.0% 329808 ± 11% perf-stat.i.node-load-misses
774136 ± 4% +31.3% 1016558 ± 2% perf-stat.i.node-loads
21.28 ± 3% -9.3 11.97 ± 3% perf-stat.i.node-store-miss-rate%
501246 -15.9% 421348 ± 2% perf-stat.i.node-store-misses
2299710 ± 4% +64.8% 3790508 ± 3% perf-stat.i.node-stores
6349927 -47.7% 3319860 perf-stat.i.page-faults
0.64 -0.2 0.48 perf-stat.overall.branch-miss-rate%
0.96 ± 2% +0.8 1.80 ± 4% perf-stat.overall.cache-miss-rate%
2.77 +31.1% 3.63 perf-stat.overall.cpi
36060 ± 3% -30.1% 25200 ± 4% perf-stat.overall.cycles-between-cache-misses
1.62 ± 10% -0.5 1.07 ± 7% perf-stat.overall.dTLB-load-miss-rate%
1883 +41.2% 2658 perf-stat.overall.instructions-per-iTLB-miss
0.36 -23.7% 0.28 perf-stat.overall.ipc
19.87 ± 3% +4.6 24.42 ± 6% perf-stat.overall.node-load-miss-rate%
17.92 ± 3% -7.9 10.01 ± 3% perf-stat.overall.node-store-miss-rate%
11947227 +52.0% 18158313 perf-stat.overall.path-length
1.001e+10 -15.8% 8.428e+09 perf-stat.ps.branch-instructions
64134978 -36.6% 40688738 perf-stat.ps.branch-misses
3625586 ± 3% +49.4% 5417169 ± 4% perf-stat.ps.cache-misses
3.774e+08 -20.1% 3.014e+08 perf-stat.ps.cache-references
9257 ± 4% +32.0% 12217 perf-stat.ps.context-switches
1.306e+11 +4.3% 1.362e+11 perf-stat.ps.cpu-cycles
454.09 ± 2% -15.5% 383.80 ± 10% perf-stat.ps.cpu-migrations
2.114e+08 ± 10% -49.0% 1.077e+08 ± 7% perf-stat.ps.dTLB-load-misses
1.283e+10 -22.6% 9.939e+09 perf-stat.ps.dTLB-loads
55710798 ± 2% -46.7% 29689790 perf-stat.ps.dTLB-store-misses
8.547e+09 -46.6% 4.568e+09 perf-stat.ps.dTLB-stores
25076013 -43.7% 14129653 perf-stat.ps.iTLB-load-misses
465719 ± 2% -53.7% 215462 ± 6% perf-stat.ps.iTLB-loads
4.722e+10 -20.5% 3.756e+10 perf-stat.ps.instructions
6326322 -47.7% 3309901 perf-stat.ps.minor-faults
191088 ± 2% +72.1% 328839 ± 11% perf-stat.ps.node-load-misses
771395 ± 4% +31.4% 1013531 ± 2% perf-stat.ps.node-loads
499536 -15.9% 420084 ± 2% perf-stat.ps.node-store-misses
2291599 ± 4% +64.9% 3779145 ± 3% perf-stat.ps.node-stores
6326317 -47.7% 3309894 perf-stat.ps.page-faults
1.434e+13 -11.3% 1.271e+13 perf-stat.total.instructions
64.43 -64.4 0.00 perf-profile.calltrace.cycles-pp.brk
57.66 -57.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
57.41 -57.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
54.18 -54.2 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
44.14 -44.1 0.00 perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
35.15 -35.1 0.00 perf-profile.calltrace.cycles-pp.page_test
43.40 -26.7 16.67 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.13 -25.1 0.00 perf-profile.calltrace.cycles-pp.page_fault.page_test
25.08 -25.1 0.00 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.page_test
24.56 -24.6 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
24.48 -24.5 0.00 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.page_test
24.43 -24.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
22.12 -22.1 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.page_test
29.62 -18.6 11.02 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
29.51 -18.5 10.98 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk
27.50 -17.5 9.96 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
11.75 -11.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
11.69 -11.7 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
10.43 -10.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
10.39 -10.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page
8.32 -8.3 0.00 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
12.29 -7.5 4.83 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
12.29 -7.5 4.82 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk
12.25 -7.4 4.81 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
5.56 ± 2% -5.6 0.00 perf-profile.calltrace.cycles-pp.page_fault.brk
5.43 -2.7 2.69 perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
4.97 -2.5 2.42 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
4.13 -2.1 2.04 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
2.87 -1.4 1.46 perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
2.74 -1.4 1.39 perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
2.40 -1.2 1.18 ± 2% perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.25 -1.0 1.21 perf-profile.calltrace.cycles-pp.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.70 -0.9 0.78 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
1.59 ± 2% -0.9 0.72 perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region
1.54 -0.9 0.68 ± 4% perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.53 -0.8 0.70 perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
1.68 ± 2% -0.8 0.90 perf-profile.calltrace.cycles-pp.__vma_adjust.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64
1.22 ± 2% -0.6 0.61 ± 3% perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.12 ± 3% -0.6 0.55 ± 2% perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.29 ± 3% -0.6 0.72 ± 2% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
1.26 ± 4% -0.6 0.70 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__x64_sys_brk
1.37 -0.1 1.28 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
0.00 +0.7 0.74 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
0.00 +0.8 0.83 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
0.00 +1.4 1.38 perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
0.00 +2.1 2.05 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +4.1 4.14 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +4.6 4.56 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu
0.00 +4.6 4.60 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
0.00 +4.6 4.61 ± 2% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
0.00 +8.3 8.30 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lock_page_lruvec_irq.release_pages.tlb_flush_mmu
0.00 +8.4 8.35 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lock_page_lruvec_irq.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.00 +8.4 8.39 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irq.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
0.00 +17.0 17.04 perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +22.1 22.06 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +23.7 23.71 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +23.8 23.81 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
21.34 +47.2 68.56 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
20.30 +47.7 68.03 perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
12.44 +51.7 64.09 perf-profile.calltrace.cycles-pp.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
12.29 +51.7 64.02 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +61.1 61.13 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add
0.00 +61.7 61.72 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page
0.00 +61.9 61.90 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
0.00 +68.9 68.94 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +70.1 70.13 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.00 +70.4 70.45 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.00 +70.5 70.52 perf-profile.calltrace.cycles-pp.page_fault
64.63 -64.6 0.00 perf-profile.children.cycles-pp.brk
36.35 -36.3 0.00 perf-profile.children.cycles-pp.page_test
57.79 -33.9 23.85 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
57.58 -33.8 23.74 perf-profile.children.cycles-pp.do_syscall_64
54.20 -32.1 22.07 perf-profile.children.cycles-pp.__x64_sys_brk
44.15 -27.1 17.05 perf-profile.children.cycles-pp.__do_munmap
43.41 -26.7 16.68 perf-profile.children.cycles-pp.unmap_region
29.63 -18.6 11.02 perf-profile.children.cycles-pp.tlb_finish_mmu
29.52 -18.5 10.98 perf-profile.children.cycles-pp.tlb_flush_mmu
27.87 -17.4 10.43 perf-profile.children.cycles-pp.release_pages
12.35 -7.5 4.86 perf-profile.children.cycles-pp.lru_add_drain
12.33 -7.5 4.86 perf-profile.children.cycles-pp.lru_add_drain_cpu
8.35 -4.2 4.16 perf-profile.children.cycles-pp.do_brk_flags
5.47 -2.8 2.71 perf-profile.children.cycles-pp.alloc_pages_vma
5.15 -2.6 2.52 perf-profile.children.cycles-pp.__alloc_pages_nodemask
4.26 -2.2 2.11 perf-profile.children.cycles-pp.get_page_from_freelist
3.97 -1.9 2.05 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
2.88 -1.4 1.46 perf-profile.children.cycles-pp.prep_new_page
2.75 -1.4 1.40 perf-profile.children.cycles-pp.clear_page_erms
2.45 -1.2 1.21 ± 3% perf-profile.children.cycles-pp.perf_event_mmap
2.60 -1.2 1.38 perf-profile.children.cycles-pp.prepare_exit_to_usermode
1.92 -1.1 0.83 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.27 -1.1 1.22 perf-profile.children.cycles-pp.vma_merge
1.71 -0.9 0.78 perf-profile.children.cycles-pp.flush_tlb_mm_range
1.90 ± 2% -0.9 1.01 ± 2% perf-profile.children.cycles-pp.__vma_adjust
1.60 -0.9 0.72 perf-profile.children.cycles-pp.flush_tlb_func_common
1.69 ± 3% -0.9 0.83 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.54 -0.9 0.69 ± 4% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
1.55 -0.8 0.71 perf-profile.children.cycles-pp.native_flush_tlb_one_user
1.57 -0.7 0.82 perf-profile.children.cycles-pp.native_irq_return_iret
1.10 ± 2% -0.7 0.45 ± 6% perf-profile.children.cycles-pp.selinux_vm_enough_memory
1.25 ± 2% -0.6 0.62 ± 2% perf-profile.children.cycles-pp.get_unmapped_area
1.14 ± 3% -0.6 0.56 ± 2% perf-profile.children.cycles-pp.perf_iterate_sb
1.15 ± 3% -0.6 0.58 perf-profile.children.cycles-pp.find_vma
1.30 ± 3% -0.6 0.73 ± 2% perf-profile.children.cycles-pp.unmap_vmas
1.27 ± 3% -0.6 0.71 perf-profile.children.cycles-pp.unmap_page_range
0.93 ± 2% -0.6 0.37 ± 10% perf-profile.children.cycles-pp.cred_has_capability
1.02 ± 4% -0.5 0.53 ± 4% perf-profile.children.cycles-pp.__perf_sw_event
0.68 ± 3% -0.4 0.24 ± 16% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.70 ± 2% -0.4 0.31 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
0.82 ± 5% -0.4 0.43 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
0.66 ± 2% -0.3 0.33 ± 3% perf-profile.children.cycles-pp.security_mmap_addr
0.71 ± 2% -0.3 0.38 perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.77 ± 15% -0.3 0.45 ± 8% perf-profile.children.cycles-pp.apic_timer_interrupt
0.70 ± 15% -0.3 0.40 ± 7% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.54 ± 5% -0.3 0.25 perf-profile.children.cycles-pp.vmacache_find
0.55 ± 4% -0.3 0.29 ± 7% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.45 ± 3% -0.2 0.20 ± 4% perf-profile.children.cycles-pp.sync_regs
0.41 ± 5% -0.2 0.17 ± 6% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
1.66 -0.2 1.42 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.46 ± 4% -0.2 0.22 ± 4% perf-profile.children.cycles-pp.strlcpy
0.46 ± 5% -0.2 0.24 ± 6% perf-profile.children.cycles-pp.down_write_killable
0.42 ± 2% -0.2 0.21 ± 5% perf-profile.children.cycles-pp.___might_sleep
0.42 ± 5% -0.2 0.21 ± 5% perf-profile.children.cycles-pp.__split_vma
0.50 -0.2 0.30 perf-profile.children.cycles-pp.__mod_lruvec_state
0.42 ± 3% -0.2 0.23 ± 3% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.39 ± 2% -0.2 0.20 ± 5% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.33 ± 10% -0.2 0.15 ± 13% perf-profile.children.cycles-pp.__count_memcg_events
0.32 ± 3% -0.2 0.14 ± 6% perf-profile.children.cycles-pp.free_unref_page_commit
0.53 ± 11% -0.2 0.34 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
0.34 ± 3% -0.2 0.16 ± 5% perf-profile.children.cycles-pp._cond_resched
0.36 -0.2 0.20 ± 2% perf-profile.children.cycles-pp.down_write
0.36 ± 2% -0.2 0.19 ± 3% perf-profile.children.cycles-pp.cap_vm_enough_memory
0.30 ± 9% -0.2 0.13 ± 15% perf-profile.children.cycles-pp.__mod_node_page_state
0.66 ± 4% -0.2 0.50 ± 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.31 ± 13% -0.2 0.16 ± 15% perf-profile.children.cycles-pp.perf_event_mmap_output
0.32 ± 2% -0.2 0.17 ± 5% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.30 -0.1 0.15 ± 2% perf-profile.children.cycles-pp.up_write
0.29 ± 3% -0.1 0.15 ± 8% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.26 ± 3% -0.1 0.13 ± 5% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.30 -0.1 0.17 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.39 ± 12% -0.1 0.26 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.28 ± 9% -0.1 0.15 ± 13% perf-profile.children.cycles-pp._raw_spin_lock
0.32 ± 5% -0.1 0.20 ± 5% perf-profile.children.cycles-pp.page_remove_rmap
0.22 -0.1 0.10 ± 8% perf-profile.children.cycles-pp.__tlb_remove_page_size
0.24 ± 6% -0.1 0.12 ± 4% perf-profile.children.cycles-pp.__mod_memcg_state
0.21 ± 13% -0.1 0.10 ± 18% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.30 ± 4% -0.1 0.19 ± 5% perf-profile.children.cycles-pp.__mod_zone_page_state
0.17 ± 10% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.serial8250_console_write
0.16 ± 11% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.uart_console_write
0.16 ± 11% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.wait_for_xmitr
0.16 ± 9% -0.1 0.06 perf-profile.children.cycles-pp.serial8250_console_putchar
0.18 ± 4% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.free_unref_page_prepare
0.18 ± 4% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.free_pages_and_swap_cache
0.18 ± 9% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.console_unlock
0.17 -0.1 0.08 ± 6% perf-profile.children.cycles-pp.cap_capable
0.13 ± 3% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.strlen
0.23 ± 3% -0.1 0.13 ± 3% perf-profile.children.cycles-pp.khugepaged_enter_vma_merge
0.18 ± 4% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.cap_mmap_addr
0.16 ± 6% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.uncharge_batch
0.22 ± 3% -0.1 0.13 ± 6% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.17 ± 21% -0.1 0.08 ± 59% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.17 ± 8% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.vprintk_emit
0.17 ± 2% -0.1 0.08 perf-profile.children.cycles-pp.selinux_mmap_addr
0.17 ± 6% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.18 ± 2% -0.1 0.10 ± 12% perf-profile.children.cycles-pp.memcpy_erms
0.27 ± 14% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.tick_sched_timer
0.14 ± 9% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.uncharge_page
0.15 ± 13% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.16 ± 10% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.irq_work_run_list
0.10 ± 7% -0.1 0.03 ±100% perf-profile.children.cycles-pp.kmem_cache_alloc
0.10 ± 7% -0.1 0.03 ±100% perf-profile.children.cycles-pp.vm_area_dup
0.15 ± 8% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.__get_free_pages
0.14 ± 6% -0.1 0.06 ± 13% perf-profile.children.cycles-pp.anon_vma_clone
0.14 ± 7% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.15 ± 2% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.rcu_all_qs
0.11 ± 14% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.may_expand_vm
0.14 ± 10% -0.1 0.07 ± 15% perf-profile.children.cycles-pp.up_read
0.13 ± 3% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.down_read_trylock
0.11 ± 7% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.io_serial_in
0.15 ± 15% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.irq_work_interrupt
0.15 ± 15% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.15 ± 15% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.irq_work_run
0.15 ± 15% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.printk
0.23 ± 13% -0.1 0.16 ± 6% perf-profile.children.cycles-pp.tick_sched_handle
0.19 ± 2% -0.1 0.12 ± 6% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.15 ± 5% -0.1 0.09 ± 7% perf-profile.children.cycles-pp.get_task_policy
0.09 ± 13% -0.1 0.03 ±100% perf-profile.children.cycles-pp.memcg_check_events
0.23 ± 13% -0.1 0.16 ± 6% perf-profile.children.cycles-pp.update_process_times
0.12 ± 5% -0.1 0.06 ± 14% perf-profile.children.cycles-pp.page_evictable
0.14 ± 6% -0.1 0.08 ± 23% perf-profile.children.cycles-pp.__vm_enough_memory
0.15 ± 4% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.hugepage_vma_check
0.12 ± 4% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.free_pgtables
0.11 ± 4% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.unlink_anon_vmas
0.11 ± 16% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.21 ± 3% -0.1 0.16 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.10 ± 10% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.vma_adjust_trans_huge
0.09 ± 7% -0.0 0.04 ± 60% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.12 ± 8% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.lru_cache_add_active_or_unevictable
0.26 ± 3% -0.0 0.23 ± 3% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.schedule
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.28 ± 4% +0.1 0.40 ± 3% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.34 ± 8% +0.2 0.51 ± 5% perf-profile.children.cycles-pp.__lock_text_start
0.00 +8.4 8.37 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +8.4 8.39 perf-profile.children.cycles-pp.lock_page_lruvec_irq
46.79 +19.6 66.38 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
46.54 +27.5 74.03 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
28.05 +42.5 70.54 perf-profile.children.cycles-pp.page_fault
24.57 +44.3 68.87 perf-profile.children.cycles-pp.pagevec_lru_move_fn
25.13 +45.3 70.45 perf-profile.children.cycles-pp.do_page_fault
24.55 +45.6 70.15 perf-profile.children.cycles-pp.__do_page_fault
22.19 +46.8 68.96 perf-profile.children.cycles-pp.handle_mm_fault
21.40 +47.2 68.57 perf-profile.children.cycles-pp.__handle_mm_fault
20.35 +47.7 68.07 perf-profile.children.cycles-pp.do_anonymous_page
12.45 +51.6 64.10 perf-profile.children.cycles-pp.__lru_cache_add
0.00 +66.5 66.54 perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
3.15 -1.6 1.56 perf-profile.self.cycles-pp.do_syscall_64
2.74 -1.4 1.39 perf-profile.self.cycles-pp.clear_page_erms
2.50 -1.2 1.28 perf-profile.self.cycles-pp.prepare_exit_to_usermode
1.72 ± 2% -0.9 0.83 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
1.69 ± 2% -0.9 0.83 perf-profile.self.cycles-pp.syscall_return_via_sysret
1.54 ± 2% -0.8 0.70 perf-profile.self.cycles-pp.native_flush_tlb_one_user
1.56 -0.7 0.82 perf-profile.self.cycles-pp.native_irq_return_iret
1.37 -0.7 0.68 ± 2% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.89 -0.5 0.44 ± 3% perf-profile.self.cycles-pp.get_page_from_freelist
0.84 ± 2% -0.4 0.40 ± 3% perf-profile.self.cycles-pp.__handle_mm_fault
0.67 ± 3% -0.4 0.23 ± 17% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.83 ± 6% -0.4 0.41 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.82 ± 10% -0.4 0.40 ± 8% perf-profile.self.cycles-pp.perf_iterate_sb
0.73 -0.4 0.37 ± 5% perf-profile.self.cycles-pp.__vma_adjust
0.71 ± 6% -0.3 0.36 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.66 ± 2% -0.3 0.34 ± 3% perf-profile.self.cycles-pp.perf_event_mmap
0.61 ± 5% -0.3 0.29 ± 2% perf-profile.self.cycles-pp.do_brk_flags
0.60 ± 3% -0.3 0.29 ± 5% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.53 ± 6% -0.3 0.24 perf-profile.self.cycles-pp.vmacache_find
1.03 ± 3% -0.3 0.74 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.51 -0.3 0.26 ± 8% perf-profile.self.cycles-pp.__x64_sys_brk
0.54 ± 3% -0.2 0.30 perf-profile.self.cycles-pp.find_vma
0.58 ± 3% -0.2 0.35 ± 2% perf-profile.self.cycles-pp.unmap_page_range
0.42 ± 3% -0.2 0.19 ± 8% perf-profile.self.cycles-pp.sync_regs
0.45 ± 4% -0.2 0.22 ± 4% perf-profile.self.cycles-pp.handle_mm_fault
0.27 ± 6% -0.2 0.07 ± 5% perf-profile.self.cycles-pp.__mod_zone_page_state
0.41 ± 2% -0.2 0.21 ± 5% perf-profile.self.cycles-pp.___might_sleep
0.39 ± 5% -0.2 0.20 ± 7% perf-profile.self.cycles-pp.do_anonymous_page
0.33 ± 10% -0.2 0.15 ± 13% perf-profile.self.cycles-pp.__count_memcg_events
0.33 ± 6% -0.2 0.17 ± 4% perf-profile.self.cycles-pp.vma_merge
0.30 ± 4% -0.2 0.15 ± 3% perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.29 ± 15% -0.2 0.14 ± 16% perf-profile.self.cycles-pp.perf_event_mmap_output
0.28 ± 8% -0.2 0.13 ± 15% perf-profile.self.cycles-pp.__mod_node_page_state
0.30 ± 4% -0.1 0.16 ± 5% perf-profile.self.cycles-pp.security_mmap_addr
0.64 ± 3% -0.1 0.49 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.26 ± 5% -0.1 0.12 ± 3% perf-profile.self.cycles-pp.strlcpy
0.24 ± 4% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.free_unref_page_commit
0.28 -0.1 0.14 ± 3% perf-profile.self.cycles-pp.up_write
0.30 ± 2% -0.1 0.16 ± 4% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.27 ± 3% -0.1 0.14 ± 3% perf-profile.self.cycles-pp.__might_sleep
0.27 ± 10% -0.1 0.14 ± 13% perf-profile.self.cycles-pp._raw_spin_lock
0.19 ± 6% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.page_fault
1.07 ± 2% -0.1 0.95 ± 2% perf-profile.self.cycles-pp.release_pages
0.22 ± 8% -0.1 0.11 ± 10% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.24 ± 2% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.cred_has_capability
0.22 ± 6% -0.1 0.11 ± 9% perf-profile.self.cycles-pp.down_write_killable
0.23 ± 4% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.__mod_memcg_state
0.19 ± 3% -0.1 0.08 perf-profile.self.cycles-pp.free_unref_page_list
0.21 ± 6% -0.1 0.11 ± 7% perf-profile.self.cycles-pp.__perf_sw_event
0.12 ± 3% -0.1 0.03 ±100% perf-profile.self.cycles-pp.strlen
0.17 ± 4% -0.1 0.08 ± 6% perf-profile.self.cycles-pp.free_unref_page_prepare
0.12 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp.mem_cgroup_uncharge_list
0.20 ± 6% -0.1 0.11 ± 8% perf-profile.self.cycles-pp.get_unmapped_area
0.20 ± 7% -0.1 0.11 ± 9% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.16 ± 5% -0.1 0.07 ± 15% perf-profile.self.cycles-pp.selinux_vm_enough_memory
0.17 ± 6% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.16 ± 4% -0.1 0.07 ± 5% perf-profile.self.cycles-pp.cap_capable
0.18 ± 3% -0.1 0.10 ± 9% perf-profile.self.cycles-pp.memcpy_erms
0.16 ± 24% -0.1 0.08 ± 58% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.17 ± 4% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.24 ± 5% -0.1 0.16 ± 5% perf-profile.self.cycles-pp.__mod_lruvec_state
0.16 ± 2% -0.1 0.08 ± 6% perf-profile.self.cycles-pp.cap_mmap_addr
0.26 ± 8% -0.1 0.18 ± 8% perf-profile.self.cycles-pp.page_remove_rmap
0.18 ± 3% -0.1 0.10 perf-profile.self.cycles-pp.alloc_pages_vma
0.15 ± 7% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.__lru_cache_add
0.11 ± 14% -0.1 0.03 ±100% perf-profile.self.cycles-pp.may_expand_vm
0.14 ± 3% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.selinux_mmap_addr
0.13 ± 6% -0.1 0.06 ± 7% perf-profile.self.cycles-pp._cond_resched
0.13 ± 7% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.uncharge_page
0.18 ± 4% -0.1 0.11 ± 6% perf-profile.self.cycles-pp.cap_vm_enough_memory
0.19 ± 4% -0.1 0.11 ± 3% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.15 ± 12% -0.1 0.07 ± 14% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.18 ± 3% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.15 ± 9% -0.1 0.08 perf-profile.self.cycles-pp.down_write
0.14 ± 7% -0.1 0.07 ± 12% perf-profile.self.cycles-pp.up_read
0.12 ± 4% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.down_read_trylock
0.10 ± 13% -0.1 0.03 ±100% perf-profile.self.cycles-pp.vma_adjust_trans_huge
0.11 ± 7% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.io_serial_in
0.14 ± 8% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.12 ± 4% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.15 ± 4% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.get_task_policy
0.13 ± 8% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.prep_new_page
0.12 ± 3% -0.1 0.07 ± 12% perf-profile.self.cycles-pp.rcu_all_qs
0.16 ± 6% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.12 ± 8% -0.1 0.07 ± 16% perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.09 ± 16% -0.1 0.04 ± 57% perf-profile.self.cycles-pp.__do_munmap
0.09 ± 5% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.khugepaged_enter_vma_merge
0.12 ± 3% -0.0 0.07 perf-profile.self.cycles-pp.hugepage_vma_check
0.10 ± 8% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.11 ± 7% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.09 ± 5% -0.0 0.04 ± 60% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.08 -0.0 0.04 ± 57% perf-profile.self.cycles-pp.security_vm_enough_memory_mm
0.19 ± 4% -0.0 0.15 ± 6% perf-profile.self.cycles-pp.__list_add_valid
0.16 ± 7% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.25 ± 3% -0.0 0.22 ± 3% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.10 ± 13% +0.0 0.14 ± 7% perf-profile.self.cycles-pp.__lock_text_start
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.24 ± 5% +0.1 0.36 perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.25 ± 9% +0.4 0.67 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
46.54 +27.5 74.02 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
reaim.parent_time
50 +-+--------------------------------------------------------------------+
45 +-+ O OO O |
| |
40 +-+ |
35 +-+ |
| |
30 +-+ |
25 O-OO OO OO OO OO OO OO OO O |
20 +-++.++.++.+ ++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.|
| : : |
15 +-+ : : |
10 +-+ :: |
| :: |
5 +-+ : |
0 +-+--------------------------------------------------------------------+
reaim.child_systime
2500 +-+------------------------------------------------------------------+
| |
| O OO O |
2000 +-+ |
| |
| |
1500 +-+ |
| |
1000 OO+OO OO OO OO OO OO OOO O .+ |
|+.++.++.++ +.++.++.+++.++.++.++.++.++.++.++.+++.++.++.++.++ +.++.+|
| : : |
500 +-+ : : |
| :: |
| : |
0 +-+------------------------------------------------------------------+
reaim.jobs_per_min
300000 +-+----------------------------------------------------------------+
| |
250000 +-+++.++.++ ++.++.+++.++.++.+++.++.+++.++.++.+++.++.++.+++.++.++.+|
OO OO OO OOO OO OO OOO OO |
| : : |
200000 +-+ : : |
| : : |
150000 +-+ : : |
| :: OO OO |
100000 +-+ :: |
| :: |
| :: |
50000 +-+ : |
| : |
0 +-+----------------------------------------------------------------+
reaim.jobs_per_min_child
300 +-+-------------------------------------------------------------------+
| |
250 +-+++.++.++ +.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.++.+|
OO OO OO OO OO OO OO OO OO |
| : : |
200 +-+ : : |
| : : |
150 +-+ : : |
| : : OO OO |
100 +-+ :: |
| :: |
| : |
50 +-+ : |
| : |
0 +-+-------------------------------------------------------------------+
reaim.max_jobs_per_min
300000 +-+----------------------------------------------------------------+
| |
250000 +-+++.++.++ ++.++.+++.++.++.+++.++.+++.++.++.+++.++.++.+++.++.++.+|
OO OO OO OOO OO OO OOO OO |
| : : |
200000 +-+ : : |
| : : |
150000 +-+ : : |
| :: OO OO |
100000 +-+ :: |
| :: |
| :: |
50000 +-+ : |
| : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000/debian-x86_64-2019-05-14.cgz/300s/lkp-hsw-ep4/page_test/reaim/0x43
commit:
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
4084fff845 ("mm/lru: replace pgdat lru_lock with lruvec lock")
3145e78472f7ad74 4084fff84504864a0eed35b6f68
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at_ip_perf_event_mmap_output/0x
3:4 -41% 2:4 perf-profile.calltrace.cycles-pp.error_entry
4:4 -43% 2:4 perf-profile.children.cycles-pp.error_entry
3:4 -35% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1466 +117.8% 3193 reaim.child_systime
166.67 ± 7% -14.2% 143.04 ± 3% reaim.child_utime
263757 -51.1% 129102 reaim.jobs_per_min
263.76 -51.1% 129.10 reaim.jobs_per_min_child
60.72 +5.3% 63.96 ± 2% reaim.jti
264628 -51.1% 129375 reaim.max_jobs_per_min
22.75 +104.3% 46.48 reaim.parent_time
38.75 -8.1% 35.60 ± 3% reaim.std_dev_percent
6.24 +91.2% 11.94 ± 2% reaim.std_dev_time
323.41 +5.2% 340.23 reaim.time.elapsed_time
323.41 +5.2% 340.23 reaim.time.elapsed_time.max
3342567 +17.0% 3911156 ± 3% reaim.time.involuntary_context_switches
2.081e+09 -46.2% 1.12e+09 reaim.time.minor_page_faults
6563 +4.6% 6865 reaim.time.percent_of_cpu_this_job_got
19060 +17.3% 22357 reaim.time.system_time
2166 ± 7% -53.8% 1001 ± 3% reaim.time.user_time
51472 -45.4% 28090 reaim.time.voluntary_context_switches
1300000 -46.2% 700000 reaim.workload
17.33 ± 68% +5116.8% 904.25 ± 87% meminfo.Mlocked
41362 ± 4% +10.8% 45829 ± 4% meminfo.Shmem
1741 ± 24% +410.8% 8896 ± 69% numa-meminfo.node1.Inactive(anon)
10087 +24.7% 12583 ± 18% numa-meminfo.node1.Mapped
9.23 ± 5% -4.4 4.84 ± 2% mpstat.cpu.all.idle%
81.46 +9.6 91.04 mpstat.cpu.all.sys%
9.30 ± 8% -5.2 4.12 ± 2% mpstat.cpu.all.usr%
9.33 ± 5% -46.4% 5.00 vmstat.cpu.id
80.33 +12.0% 90.00 vmstat.cpu.sy
11317 +9.3% 12371 ± 3% vmstat.system.cs
1.101e+09 -46.2% 5.929e+08 numa-numastat.node0.local_node
1.101e+09 -46.2% 5.929e+08 numa-numastat.node0.numa_hit
1.117e+09 -46.0% 6.036e+08 numa-numastat.node1.local_node
1.118e+09 -46.0% 6.036e+08 numa-numastat.node1.numa_hit
12160 ± 6% -21.0% 9607 ± 23% cpuidle.C1.usage
2431411 ± 11% -35.5% 1567349 ± 25% cpuidle.C3.usage
1.262e+09 ± 24% -56.2% 5.532e+08 ± 53% cpuidle.C6.time
1944254 ± 7% -57.4% 828830 ± 48% cpuidle.C6.usage
11443 ± 14% -41.3% 6719 ± 18% cpuidle.POLL.time
5210 ± 16% -42.9% 2977 ± 19% cpuidle.POLL.usage
2534 +4.9% 2657 turbostat.Avg_MHz
2431227 ± 11% -35.5% 1567087 ± 25% turbostat.C3
1935131 ± 7% -57.7% 819257 ± 48% turbostat.C6
5.36 ± 24% -3.1 2.23 ± 54% turbostat.C6%
4.23 ± 9% -47.2% 2.23 ± 8% turbostat.CPU%c1
2.94 ± 16% -48.6% 1.51 ± 14% turbostat.Pkg%pc2
2.33 ± 53% +5428.6% 129.00 ± 89% numa-vmstat.node0.nr_mlock
5.491e+08 -45.9% 2.968e+08 numa-vmstat.node0.numa_hit
5.491e+08 -45.9% 2.968e+08 numa-vmstat.node0.numa_local
435.33 ± 24% +410.3% 2221 ± 69% numa-vmstat.node1.nr_inactive_anon
1.67 ± 56% +5660.0% 96.00 ± 85% numa-vmstat.node1.nr_mlock
435.33 ± 24% +410.3% 2221 ± 69% numa-vmstat.node1.nr_zone_inactive_anon
5.578e+08 -45.8% 3.023e+08 numa-vmstat.node1.numa_hit
5.576e+08 -45.8% 3.021e+08 numa-vmstat.node1.numa_local
9639 -24.9% 7242 slabinfo.Acpi-State.active_objs
9639 -24.9% 7242 slabinfo.Acpi-State.num_objs
827.67 ± 6% -21.6% 648.75 ± 9% slabinfo.buffer_head.active_objs
827.67 ± 6% -21.6% 648.75 ± 9% slabinfo.buffer_head.num_objs
530.33 ± 8% +23.4% 654.50 ± 10% slabinfo.file_lock_cache.active_objs
530.33 ± 8% +23.4% 654.50 ± 10% slabinfo.file_lock_cache.num_objs
10096 -28.3% 7238 slabinfo.files_cache.active_objs
10096 -28.3% 7238 slabinfo.files_cache.num_objs
43209 ± 3% -14.8% 36832 ± 3% slabinfo.kmalloc-32.active_objs
43315 ± 3% -14.8% 36912 ± 3% slabinfo.kmalloc-32.num_objs
3284 ± 3% +13.0% 3711 ± 4% slabinfo.kmalloc-rcl-64.active_objs
3284 ± 3% +13.0% 3711 ± 4% slabinfo.kmalloc-rcl-64.num_objs
6339 ± 2% -13.2% 5502 slabinfo.mm_struct.active_objs
6473 ± 2% -13.7% 5584 slabinfo.mm_struct.num_objs
11335 +9.1% 12362 ± 2% slabinfo.proc_inode_cache.active_objs
1173 ± 3% +15.1% 1350 ± 6% slabinfo.task_group.active_objs
1173 ± 3% +15.1% 1350 ± 6% slabinfo.task_group.num_objs
85163 +5.8% 90112 proc-vmstat.nr_active_anon
77660 +6.2% 82469 proc-vmstat.nr_anon_pages
4348 +3.2% 4487 proc-vmstat.nr_inactive_anon
23218 +1.2% 23507 proc-vmstat.nr_kernel_stack
4.00 ± 88% +5531.2% 225.25 ± 87% proc-vmstat.nr_mlock
8270 +4.9% 8678 proc-vmstat.nr_page_table_pages
10346 ± 4% +10.8% 11459 ± 4% proc-vmstat.nr_shmem
16996 +1.7% 17286 proc-vmstat.nr_slab_reclaimable
38597 -1.9% 37879 proc-vmstat.nr_slab_unreclaimable
85163 +5.8% 90112 proc-vmstat.nr_zone_active_anon
4348 +3.2% 4487 proc-vmstat.nr_zone_inactive_anon
84611 ± 4% -15.5% 71535 ± 5% proc-vmstat.numa_hint_faults
2.219e+09 -46.1% 1.196e+09 proc-vmstat.numa_hit
2.219e+09 -46.1% 1.196e+09 proc-vmstat.numa_local
82924 ± 6% -15.8% 69806 ± 6% proc-vmstat.numa_pages_migrated
193476 ± 3% -25.4% 144338 ± 14% proc-vmstat.numa_pte_updates
2.219e+09 -46.1% 1.197e+09 proc-vmstat.pgalloc_normal
2.082e+09 -46.1% 1.121e+09 proc-vmstat.pgfault
2.219e+09 -46.1% 1.197e+09 proc-vmstat.pgfree
82924 ± 6% -15.8% 69806 ± 6% proc-vmstat.pgmigrate_success
10.12 ±141% +8.8e+05% 88819 ±109% sched_debug.cfs_rq:/.MIN_vruntime.avg
108957 +30.9% 142667 sched_debug.cfs_rq:/.exec_clock.avg
110770 +30.3% 144311 sched_debug.cfs_rq:/.exec_clock.max
108758 +30.7% 142195 sched_debug.cfs_rq:/.exec_clock.min
12489 ± 2% +13.3% 14151 ± 8% sched_debug.cfs_rq:/.load.avg
1216 +43.2% 1742 ± 15% sched_debug.cfs_rq:/.load.min
0.67 ± 14% +87.5% 1.25 ± 24% sched_debug.cfs_rq:/.load_avg.min
10.12 ±141% +8.8e+05% 88819 ±109% sched_debug.cfs_rq:/.max_vruntime.avg
2.14 ± 20% +35.5% 2.90 ± 11% sched_debug.cfs_rq:/.nr_spread_over.avg
8.75 ± 19% +25.8% 11.01 ± 7% sched_debug.cfs_rq:/.nr_spread_over.stddev
0.40 +139.6% 0.96 ± 41% sched_debug.cfs_rq:/.runnable_load_avg.min
12413 ± 2% +12.9% 14020 ± 8% sched_debug.cfs_rq:/.runnable_weight.avg
28531 ± 2% +297.0% 113276 ± 74% sched_debug.cfs_rq:/.runnable_weight.max
1216 +43.2% 1741 ± 15% sched_debug.cfs_rq:/.runnable_weight.min
3405 ± 6% -9.0% 3099 sched_debug.cfs_rq:/.util_est_enqueued.max
157510 +19.4% 188074 sched_debug.cpu.clock.avg
157529 +19.4% 188095 sched_debug.cpu.clock.max
157496 +19.4% 188057 sched_debug.cpu.clock.min
157510 +19.4% 188074 sched_debug.cpu.clock_task.avg
157529 +19.4% 188095 sched_debug.cpu.clock_task.max
157496 +19.4% 188057 sched_debug.cpu.clock_task.min
8951 -13.5% 7740 sched_debug.cpu.curr->pid.avg
9920 -10.7% 8854 sched_debug.cpu.curr->pid.max
0.80 +35.4% 1.08 ± 17% sched_debug.cpu.nr_running.min
20702 +34.5% 27845 ± 3% sched_debug.cpu.nr_switches.avg
35633 ± 4% +22.9% 43786 ± 3% sched_debug.cpu.nr_switches.max
7852 ± 2% +66.4% 13063 ± 14% sched_debug.cpu.nr_switches.min
20058 +36.0% 27278 ± 3% sched_debug.cpu.sched_count.avg
7324 ± 2% +72.3% 12619 ± 15% sched_debug.cpu.sched_count.min
161.98 ± 8% -26.8% 118.52 ± 7% sched_debug.cpu.sched_goidle.avg
39.40 ± 4% -33.3% 26.29 ± 6% sched_debug.cpu.sched_goidle.min
1306 +12.8% 1473 sched_debug.cpu.ttwu_count.avg
3834 ± 8% +133.4% 8949 sched_debug.cpu.ttwu_count.max
199.67 ± 7% +13.7% 226.96 sched_debug.cpu.ttwu_count.min
1017 ± 5% +67.7% 1706 ± 2% sched_debug.cpu.ttwu_count.stddev
926.63 ± 2% +24.4% 1153 sched_debug.cpu.ttwu_local.avg
3187 ± 6% +165.8% 8473 sched_debug.cpu.ttwu_local.max
164.67 ± 4% +16.5% 191.92 sched_debug.cpu.ttwu_local.min
808.95 ± 4% +93.7% 1566 sched_debug.cpu.ttwu_local.stddev
157496 +19.4% 188058 sched_debug.cpu_clk
153808 +19.9% 184389 sched_debug.ktime
158790 ± 2% +18.7% 188530 sched_debug.sched_clk
14075 ± 2% -17.1% 11666 ± 13% softirqs.CPU0.RCU
14125 ± 7% -20.6% 11218 ± 11% softirqs.CPU11.RCU
14242 ± 6% -16.8% 11844 ± 10% softirqs.CPU12.RCU
14557 ± 8% -23.8% 11094 ± 11% softirqs.CPU13.RCU
14652 ± 9% -25.0% 10982 ± 10% softirqs.CPU14.RCU
13991 ± 12% -22.9% 10787 ± 12% softirqs.CPU16.RCU
13632 ± 7% -21.6% 10689 ± 14% softirqs.CPU17.RCU
14739 ± 7% -19.4% 11880 ± 8% softirqs.CPU18.RCU
14059 ± 5% -21.5% 11035 ± 11% softirqs.CPU19.RCU
125437 +9.8% 137683 ± 3% softirqs.CPU2.TIMER
14564 ± 9% -25.0% 10929 ± 11% softirqs.CPU21.RCU
13814 ± 6% -19.9% 11067 ± 13% softirqs.CPU22.RCU
14531 ± 12% -23.4% 11131 ± 13% softirqs.CPU23.RCU
13791 ± 10% -21.0% 10894 ± 14% softirqs.CPU24.RCU
13737 ± 6% -21.9% 10721 ± 13% softirqs.CPU25.RCU
13592 ± 7% -20.2% 10850 ± 12% softirqs.CPU26.RCU
13623 ± 7% -20.8% 10792 ± 11% softirqs.CPU27.RCU
13780 ± 6% -21.5% 10811 ± 12% softirqs.CPU28.RCU
14512 ± 5% -24.5% 10964 ± 10% softirqs.CPU29.RCU
15584 ± 15% -29.0% 11065 ± 11% softirqs.CPU3.RCU
13469 ± 6% -17.1% 11170 ± 14% softirqs.CPU30.RCU
13257 ± 7% -20.5% 10538 ± 12% softirqs.CPU33.RCU
13253 ± 6% -18.2% 10843 ± 14% softirqs.CPU34.RCU
13108 ± 6% -19.4% 10563 ± 12% softirqs.CPU35.RCU
13207 ± 9% -18.3% 10793 ± 11% softirqs.CPU38.RCU
13915 ± 6% -21.5% 10922 ± 12% softirqs.CPU40.RCU
13906 ± 6% -18.9% 11284 ± 12% softirqs.CPU41.RCU
13514 ± 7% -20.2% 10790 ± 15% softirqs.CPU42.RCU
13809 ± 6% -20.8% 10932 ± 12% softirqs.CPU44.RCU
13403 ± 8% -20.3% 10684 ± 15% softirqs.CPU45.RCU
13432 ± 8% -20.1% 10728 ± 14% softirqs.CPU46.RCU
13918 ± 9% -23.2% 10689 ± 14% softirqs.CPU47.RCU
14101 ± 7% -22.1% 10984 ± 12% softirqs.CPU48.RCU
13813 ± 8% -20.3% 11014 ± 11% softirqs.CPU49.RCU
14223 ± 7% -22.3% 11045 ± 11% softirqs.CPU5.RCU
14317 ± 6% -22.0% 11174 ± 9% softirqs.CPU51.RCU
14160 ± 7% -21.5% 11113 ± 12% softirqs.CPU52.RCU
13846 ± 7% -19.6% 11131 ± 12% softirqs.CPU53.RCU
13547 ± 7% -23.0% 10437 ± 13% softirqs.CPU54.RCU
13048 ± 6% -18.8% 10591 ± 13% softirqs.CPU55.RCU
12973 ± 6% -18.7% 10548 ± 11% softirqs.CPU57.RCU
12922 ± 6% -18.9% 10479 ± 12% softirqs.CPU58.RCU
13115 ± 7% -18.4% 10706 ± 12% softirqs.CPU59.RCU
12988 ± 8% -19.0% 10517 ± 9% softirqs.CPU60.RCU
12881 ± 6% -16.3% 10775 ± 13% softirqs.CPU61.RCU
13583 ± 3% -24.1% 10307 ± 10% softirqs.CPU63.RCU
13478 ± 6% -24.0% 10248 ± 13% softirqs.CPU64.RCU
13351 ± 7% -22.3% 10369 ± 13% softirqs.CPU65.RCU
13452 ± 5% -23.7% 10270 ± 14% softirqs.CPU67.RCU
12921 ± 6% -20.0% 10340 ± 13% softirqs.CPU68.RCU
13238 ± 6% -19.4% 10676 ± 14% softirqs.CPU70.RCU
13666 ± 4% -21.4% 10735 ± 13% softirqs.CPU71.RCU
15392 ± 11% -27.4% 11168 ± 14% softirqs.CPU8.RCU
14054 ± 9% -22.8% 10849 ± 11% softirqs.CPU9.RCU
993126 ± 6% -20.1% 793410 ± 12% softirqs.RCU
459597 -21.3% 361796 softirqs.SCHED
1.339e+10 -9.1% 1.217e+10 perf-stat.i.branch-instructions
0.75 ± 6% -0.3 0.46 ± 11% perf-stat.i.branch-miss-rate%
64097573 -37.3% 40214445 perf-stat.i.branch-misses
1.17 ± 7% -0.3 0.84 ± 7% perf-stat.i.cache-miss-rate%
77074258 +55.6% 1.199e+08 perf-stat.i.cache-references
11396 +9.2% 12448 ± 3% perf-stat.i.context-switches
3.20 +16.5% 3.73 perf-stat.i.cpi
1.825e+11 +4.7% 1.911e+11 perf-stat.i.cpu-cycles
215429 ± 7% +12.7% 242698 ± 6% perf-stat.i.cycles-between-cache-misses
0.22 ± 7% -0.1 0.14 ± 17% perf-stat.i.dTLB-load-miss-rate%
30315348 ± 6% -47.4% 15951637 ± 19% perf-stat.i.dTLB-load-misses
1.63e+10 -16.1% 1.368e+10 perf-stat.i.dTLB-loads
46015056 ± 3% -47.5% 24180076 ± 3% perf-stat.i.dTLB-store-misses
7.897e+09 -47.2% 4.166e+09 perf-stat.i.dTLB-stores
57.08 -1.9 55.20 perf-stat.i.iTLB-load-miss-rate%
24949798 -43.8% 14017378 perf-stat.i.iTLB-load-misses
19196666 -39.8% 11551346 ± 2% perf-stat.i.iTLB-loads
6.091e+10 -13.7% 5.256e+10 perf-stat.i.instructions
2397 +54.1% 3696 perf-stat.i.instructions-per-iTLB-miss
0.33 -16.2% 0.27 perf-stat.i.ipc
6425208 -48.7% 3294016 perf-stat.i.minor-faults
44.31 ± 4% -11.4 32.92 perf-stat.i.node-load-miss-rate%
160907 -20.6% 127693 ± 5% perf-stat.i.node-load-misses
317812 ± 13% +53.7% 488603 ± 4% perf-stat.i.node-loads
75.03 ± 3% -24.4 50.61 ± 5% perf-stat.i.node-store-miss-rate%
318586 -45.2% 174522 perf-stat.i.node-store-misses
106148 ± 17% +99.9% 212175 ± 12% perf-stat.i.node-stores
6424752 -48.7% 3293934 perf-stat.i.page-faults
1.27 +80.3% 2.28 perf-stat.overall.MPKI
0.48 -0.1 0.33 perf-stat.overall.branch-miss-rate%
1.18 ± 7% -0.3 0.84 ± 8% perf-stat.overall.cache-miss-rate%
3.00 +21.4% 3.64 perf-stat.overall.cpi
0.19 ± 7% -0.1 0.12 ± 19% perf-stat.overall.dTLB-load-miss-rate%
56.52 -1.7 54.83 perf-stat.overall.iTLB-load-miss-rate%
2441 +53.6% 3749 perf-stat.overall.instructions-per-iTLB-miss
0.33 -17.6% 0.28 perf-stat.overall.ipc
33.83 ± 8% -13.1 20.71 perf-stat.overall.node-load-miss-rate%
75.14 ± 3% -29.8 45.33 ± 7% perf-stat.overall.node-store-miss-rate%
15177537 +68.3% 25549502 perf-stat.overall.path-length
1.335e+10 -9.1% 1.213e+10 perf-stat.ps.branch-instructions
63881985 -37.2% 40098475 perf-stat.ps.branch-misses
76825882 +55.6% 1.196e+08 perf-stat.ps.cache-references
11353 +9.3% 12411 ± 3% perf-stat.ps.context-switches
1.818e+11 +4.8% 1.905e+11 perf-stat.ps.cpu-cycles
30208362 ± 6% -47.3% 15905194 ± 19% perf-stat.ps.dTLB-load-misses
1.625e+10 -16.0% 1.364e+10 perf-stat.ps.dTLB-loads
45850169 ± 3% -47.4% 24109858 ± 3% perf-stat.ps.dTLB-store-misses
7.869e+09 -47.2% 4.154e+09 perf-stat.ps.dTLB-stores
24857831 -43.8% 13976576 perf-stat.ps.iTLB-load-misses
19125831 -39.8% 11517778 ± 2% perf-stat.ps.iTLB-loads
6.07e+10 -13.7% 5.241e+10 perf-stat.ps.instructions
6400600 -48.7% 3284324 perf-stat.ps.minor-faults
160296 -20.6% 127306 ± 5% perf-stat.ps.node-load-misses
317405 ± 13% +53.5% 487286 ± 4% perf-stat.ps.node-loads
317392 -45.2% 174011 perf-stat.ps.node-store-misses
105891 ± 17% +99.8% 211596 ± 12% perf-stat.ps.node-stores
6400561 -48.7% 3284314 perf-stat.ps.page-faults
1.973e+13 -9.4% 1.788e+13 perf-stat.total.instructions
177.00 +35.5% 239.75 ± 15% interrupts.43:IR-PCI-MSI.1572868-edge.eth0-TxRx-4
206470 +4.3% 215306 interrupts.CAL:Function_call_interrupts
2731 ± 11% +15.0% 3140 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
22.67 ±138% +25952.6% 5905 ± 57% interrupts.CPU0.NMI:Non-maskable_interrupts
22.67 ±138% +25952.6% 5905 ± 57% interrupts.CPU0.PMI:Performance_monitoring_interrupts
3910 ± 81% +51.5% 5922 ± 33% interrupts.CPU10.NMI:Non-maskable_interrupts
3910 ± 81% +51.5% 5922 ± 33% interrupts.CPU10.PMI:Performance_monitoring_interrupts
939.67 ± 11% -54.9% 424.00 ± 67% interrupts.CPU10.RES:Rescheduling_interrupts
1597 ± 35% -63.9% 576.00 ± 78% interrupts.CPU12.RES:Rescheduling_interrupts
1295 ±141% +356.2% 5907 ± 32% interrupts.CPU13.NMI:Non-maskable_interrupts
1295 ±141% +356.2% 5907 ± 32% interrupts.CPU13.PMI:Performance_monitoring_interrupts
1082 ± 31% -65.9% 369.00 ± 62% interrupts.CPU13.RES:Rescheduling_interrupts
6538 ± 28% -69.5% 1997 ±168% interrupts.CPU18.NMI:Non-maskable_interrupts
6538 ± 28% -69.5% 1997 ±168% interrupts.CPU18.PMI:Performance_monitoring_interrupts
2810 ± 2% +13.6% 3192 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
3915 ± 81% +100.6% 7856 interrupts.CPU21.NMI:Non-maskable_interrupts
3915 ± 81% +100.6% 7856 interrupts.CPU21.PMI:Performance_monitoring_interrupts
71.00 ±138% -99.6% 0.25 ±173% interrupts.CPU21.TLB:TLB_shootdowns
2751 ± 6% +13.9% 3132 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
733.33 ± 61% -85.8% 104.25 ±166% interrupts.CPU24.RES:Rescheduling_interrupts
2909 ± 6% +7.5% 3126 ± 4% interrupts.CPU25.CAL:Function_call_interrupts
2892 ± 5% +8.4% 3134 ± 3% interrupts.CPU26.CAL:Function_call_interrupts
2833 ± 3% +12.0% 3174 ± 2% interrupts.CPU27.CAL:Function_call_interrupts
1.00 ± 81% +6.9e+05% 6893 ± 24% interrupts.CPU27.NMI:Non-maskable_interrupts
1.00 ± 81% +6.9e+05% 6893 ± 24% interrupts.CPU27.PMI:Performance_monitoring_interrupts
2830 ± 3% +12.5% 3183 ± 2% interrupts.CPU28.CAL:Function_call_interrupts
1.00 ± 81% +7.9e+05% 7878 interrupts.CPU28.NMI:Non-maskable_interrupts
1.00 ± 81% +7.9e+05% 7878 interrupts.CPU28.PMI:Performance_monitoring_interrupts
531.00 ± 68% -77.7% 118.50 ±163% interrupts.CPU28.RES:Rescheduling_interrupts
2827 ± 3% +14.1% 3227 interrupts.CPU30.CAL:Function_call_interrupts
11.33 ± 81% +69482.4% 7886 interrupts.CPU30.NMI:Non-maskable_interrupts
11.33 ± 81% +69482.4% 7886 interrupts.CPU30.PMI:Performance_monitoring_interrupts
2825 ± 3% +14.2% 3226 interrupts.CPU32.CAL:Function_call_interrupts
3.00 ±118% +2.6e+05% 7897 interrupts.CPU32.NMI:Non-maskable_interrupts
3.00 ±118% +2.6e+05% 7897 interrupts.CPU32.PMI:Performance_monitoring_interrupts
3908 ± 81% +100.7% 7844 interrupts.CPU34.NMI:Non-maskable_interrupts
3908 ± 81% +100.7% 7844 interrupts.CPU34.PMI:Performance_monitoring_interrupts
2802 ± 4% +11.3% 3118 ± 3% interrupts.CPU35.CAL:Function_call_interrupts
177.00 +35.5% 239.75 ± 15% interrupts.CPU4.43:IR-PCI-MSI.1572868-edge.eth0-TxRx-4
2877 +7.7% 3098 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
3914 ± 81% +101.3% 7881 interrupts.CPU45.NMI:Non-maskable_interrupts
3914 ± 81% +101.3% 7881 interrupts.CPU45.PMI:Performance_monitoring_interrupts
2743 ± 3% +12.7% 3091 ± 5% interrupts.CPU47.CAL:Function_call_interrupts
6.00 ±117% +98041.7% 5888 ± 57% interrupts.CPU47.NMI:Non-maskable_interrupts
6.00 ±117% +98041.7% 5888 ± 57% interrupts.CPU47.PMI:Performance_monitoring_interrupts
6540 ± 28% -99.9% 6.25 ± 42% interrupts.CPU49.NMI:Non-maskable_interrupts
6540 ± 28% -99.9% 6.25 ± 42% interrupts.CPU49.PMI:Performance_monitoring_interrupts
2757 ± 3% +12.7% 3107 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
2738 ± 2% +9.4% 2997 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
2.33 ±112% +41803.6% 977.75 ±171% interrupts.CPU52.NMI:Non-maskable_interrupts
2.33 ±112% +41803.6% 977.75 ±171% interrupts.CPU52.PMI:Performance_monitoring_interrupts
2680 +12.8% 3024 ± 5% interrupts.CPU54.CAL:Function_call_interrupts
10.00 ±101% +39905.0% 4000 ± 67% interrupts.CPU54.NMI:Non-maskable_interrupts
10.00 ±101% +39905.0% 4000 ± 67% interrupts.CPU54.PMI:Performance_monitoring_interrupts
2752 ± 3% +5.9% 2913 ± 3% interrupts.CPU56.CAL:Function_call_interrupts
2654 ±138% -99.9% 2.25 ±148% interrupts.CPU57.NMI:Non-maskable_interrupts
2654 ±138% -99.9% 2.25 ±148% interrupts.CPU57.PMI:Performance_monitoring_interrupts
7862 -98.4% 123.75 ±172% interrupts.CPU63.NMI:Non-maskable_interrupts
7862 -98.4% 123.75 ±172% interrupts.CPU63.PMI:Performance_monitoring_interrupts
1802 ± 29% -80.8% 346.00 ±100% interrupts.CPU63.RES:Rescheduling_interrupts
7841 -99.6% 29.50 ±163% interrupts.CPU64.NMI:Non-maskable_interrupts
7841 -99.6% 29.50 ±163% interrupts.CPU64.PMI:Performance_monitoring_interrupts
2345 ± 19% +28.4% 3010 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
6531 ± 28% -85.0% 978.50 ±173% interrupts.CPU66.NMI:Non-maskable_interrupts
6531 ± 28% -85.0% 978.50 ±173% interrupts.CPU66.PMI:Performance_monitoring_interrupts
6522 ± 28% -84.9% 985.00 ±171% interrupts.CPU68.NMI:Non-maskable_interrupts
6522 ± 28% -84.9% 985.00 ±171% interrupts.CPU68.PMI:Performance_monitoring_interrupts
2575 ±141% +195.6% 7613 ± 5% interrupts.CPU7.NMI:Non-maskable_interrupts
2575 ±141% +195.6% 7613 ± 5% interrupts.CPU7.PMI:Performance_monitoring_interrupts
44690 ± 2% -20.9% 35360 ± 3% interrupts.RES:Rescheduling_interrupts
65.44 -43.2 22.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
65.30 -43.1 22.17 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
61.98 -41.4 20.62 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
55.05 -37.5 17.52 perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
54.66 -37.4 17.29 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
32.47 -32.5 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
32.39 -32.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
37.10 -25.6 11.50 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
37.03 -25.6 11.47 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk
34.44 -24.3 10.16 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
16.32 -16.3 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
16.27 -16.3 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
15.18 -15.2 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
15.13 -15.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page
16.67 -11.4 5.26 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
16.65 -11.4 5.25 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
16.66 -11.4 5.26 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk
5.66 -3.1 2.54 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 -2.0 1.82 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.62 -2.0 1.67 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
3.29 -1.8 1.52 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
2.84 ± 2% -1.5 1.34 perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
2.54 -1.3 1.23 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
2.43 -1.3 1.17 ± 2% perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
2.36 -1.2 1.13 ± 2% perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region
2.34 -1.2 1.12 ± 2% perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
1.78 -0.9 0.89 ± 2% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
1.14 -0.9 0.26 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.63 -0.9 0.75 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.55 -0.9 0.69 ± 2% perf-profile.calltrace.cycles-pp.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.68 -0.8 0.85 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
1.11 -0.7 0.39 ± 57% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
1.19 -0.7 0.53 ± 2% perf-profile.calltrace.cycles-pp.__vma_adjust.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64
0.00 +5.1 5.08 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu
0.00 +5.1 5.10 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
0.00 +5.1 5.11 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
0.00 +9.0 8.97 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lock_page_lruvec_irq.release_pages.tlb_flush_mmu
0.00 +9.0 9.01 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lock_page_lruvec_irq.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.00 +9.0 9.03 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irq.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
25.36 +48.1 73.51 perf-profile.calltrace.cycles-pp.page_fault
25.21 +48.2 73.46 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
24.76 +48.5 73.25 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
23.20 +49.4 72.56 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
22.68 +49.6 72.33 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
21.95 +50.1 72.01 perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
16.54 +53.0 69.50 perf-profile.calltrace.cycles-pp.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
16.42 +53.0 69.45 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +67.4 67.37 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add
0.00 +67.8 67.78 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page
0.00 +67.9 67.88 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
65.48 -43.2 22.26 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
65.34 -43.1 22.20 perf-profile.children.cycles-pp.do_syscall_64
62.01 -41.4 20.64 perf-profile.children.cycles-pp.__x64_sys_brk
55.06 -37.5 17.53 perf-profile.children.cycles-pp.__do_munmap
54.67 -37.4 17.30 perf-profile.children.cycles-pp.unmap_region
37.12 -25.6 11.51 perf-profile.children.cycles-pp.tlb_finish_mmu
37.05 -25.6 11.48 perf-profile.children.cycles-pp.tlb_flush_mmu
34.69 -24.2 10.47 perf-profile.children.cycles-pp.release_pages
16.70 -11.4 5.30 perf-profile.children.cycles-pp.lru_add_drain
16.70 -11.4 5.30 perf-profile.children.cycles-pp.lru_add_drain_cpu
5.68 -3.1 2.56 perf-profile.children.cycles-pp.do_brk_flags
3.86 -2.0 1.83 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.65 -2.0 1.67 ± 2% perf-profile.children.cycles-pp.alloc_pages_vma
3.41 -1.8 1.56 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.84 ± 2% -1.5 1.34 perf-profile.children.cycles-pp.prepare_exit_to_usermode
2.61 -1.4 1.26 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
2.43 -1.3 1.17 ± 2% perf-profile.children.cycles-pp.flush_tlb_mm_range
2.37 -1.2 1.14 ± 2% perf-profile.children.cycles-pp.flush_tlb_func_common
2.35 -1.2 1.12 ± 2% perf-profile.children.cycles-pp.native_flush_tlb_one_user
1.67 -0.9 0.77 perf-profile.children.cycles-pp.perf_event_mmap
1.79 -0.9 0.89 ± 2% perf-profile.children.cycles-pp.prep_new_page
1.56 -0.9 0.70 ± 2% perf-profile.children.cycles-pp.vma_merge
1.69 -0.8 0.85 ± 2% perf-profile.children.cycles-pp.clear_page_erms
1.33 -0.7 0.59 ± 2% perf-profile.children.cycles-pp.__vma_adjust
1.35 -0.7 0.64 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
1.27 -0.7 0.59 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
1.14 ± 2% -0.6 0.51 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64
1.02 ± 3% -0.6 0.45 ± 5% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.89 -0.5 0.39 ± 2% perf-profile.children.cycles-pp.get_unmapped_area
0.83 -0.5 0.36 perf-profile.children.cycles-pp.find_vma
0.79 -0.4 0.36 ± 5% perf-profile.children.cycles-pp.__perf_sw_event
0.79 -0.4 0.37 ± 4% perf-profile.children.cycles-pp.perf_iterate_sb
0.69 ± 6% -0.4 0.30 ± 6% perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.63 -0.3 0.29 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
0.80 -0.3 0.47 perf-profile.children.cycles-pp.unmap_vmas
0.65 ± 3% -0.3 0.33 ± 3% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.79 ± 2% -0.3 0.46 perf-profile.children.cycles-pp.unmap_page_range
0.54 ± 6% -0.3 0.23 ± 8% perf-profile.children.cycles-pp.cred_has_capability
0.45 -0.2 0.22 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
0.43 -0.2 0.20 ± 3% perf-profile.children.cycles-pp.security_mmap_addr
0.37 ± 2% -0.2 0.15 ± 3% perf-profile.children.cycles-pp.vmacache_find
0.38 -0.2 0.17 ± 2% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.44 ± 4% -0.2 0.23 ± 5% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.36 ± 8% -0.2 0.15 ± 12% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.38 ± 3% -0.2 0.18 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.35 ± 2% -0.2 0.16 ± 2% perf-profile.children.cycles-pp.down_write_killable
0.31 -0.2 0.13 ± 3% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.29 -0.2 0.13 ± 6% perf-profile.children.cycles-pp.perf_event_mmap_output
0.30 ± 3% -0.2 0.14 ± 7% perf-profile.children.cycles-pp.down_write
0.26 ± 3% -0.1 0.11 ± 4% perf-profile.children.cycles-pp._cond_resched
0.26 -0.1 0.12 ± 3% perf-profile.children.cycles-pp.sync_regs
0.29 ± 4% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.strlcpy
0.27 -0.1 0.13 perf-profile.children.cycles-pp.cap_vm_enough_memory
0.26 -0.1 0.13 ± 3% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.24 ± 10% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.22 ± 2% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.up_write
0.30 ± 3% -0.1 0.18 ± 2% perf-profile.children.cycles-pp.__mod_lruvec_state
0.26 -0.1 0.14 ± 3% perf-profile.children.cycles-pp.try_charge
0.19 ± 2% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.__mod_node_page_state
0.19 -0.1 0.08 ± 13% perf-profile.children.cycles-pp.__count_memcg_events
0.21 ± 3% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.19 ± 2% -0.1 0.09 perf-profile.children.cycles-pp.free_unref_page_commit
0.19 ± 4% -0.1 0.09 ± 7% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.22 -0.1 0.12 perf-profile.children.cycles-pp.__split_vma
1.09 -0.1 0.99 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.18 -0.1 0.08 ± 5% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.13 ± 6% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.__mod_zone_page_state
0.17 -0.1 0.08 perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.16 ± 5% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.memcpy_erms
0.12 ± 3% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.rcu_all_qs
0.16 ± 5% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.10 ± 4% -0.1 0.03 ±100% perf-profile.children.cycles-pp.lru_cache_add_active_or_unevictable
0.14 ± 3% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.cap_mmap_addr
0.10 -0.1 0.03 ±100% perf-profile.children.cycles-pp.cap_capable
0.15 ± 3% -0.1 0.07 ± 5% perf-profile.children.cycles-pp.khugepaged_enter_vma_merge
0.20 ± 2% -0.1 0.13 perf-profile.children.cycles-pp.page_remove_rmap
0.14 ± 3% -0.1 0.07 perf-profile.children.cycles-pp._raw_spin_lock
0.13 ± 7% -0.1 0.06 perf-profile.children.cycles-pp.__tlb_remove_page_size
0.11 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.09 ± 28% -0.1 0.03 ±100% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.14 ± 5% -0.1 0.07 ± 5% perf-profile.children.cycles-pp.__mod_memcg_state
0.12 ± 4% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.12 -0.1 0.06 ± 7% perf-profile.children.cycles-pp.selinux_mmap_addr
0.11 ± 8% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.down_read_trylock
0.11 ± 4% -0.1 0.06 ± 14% perf-profile.children.cycles-pp.free_unref_page_prepare
0.09 ± 5% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.uncharge_batch
0.11 ± 4% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.free_pages_and_swap_cache
0.10 ± 8% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.up_read
0.41 ± 2% +0.0 0.43 perf-profile.children.cycles-pp.apic_timer_interrupt
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.__sched_text_start
0.27 +0.1 0.40 ± 5% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.34 ± 2% +0.1 0.47 ± 3% perf-profile.children.cycles-pp.__lock_text_start
64.04 +8.9 72.94 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +9.0 9.02 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +9.0 9.04 perf-profile.children.cycles-pp.lock_page_lruvec_irq
63.87 +17.6 81.47 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
33.11 +41.7 74.76 perf-profile.children.cycles-pp.pagevec_lru_move_fn
25.39 +48.1 73.52 perf-profile.children.cycles-pp.page_fault
25.22 +48.3 73.47 perf-profile.children.cycles-pp.do_page_fault
24.79 +48.5 73.27 perf-profile.children.cycles-pp.__do_page_fault
23.24 +49.3 72.58 perf-profile.children.cycles-pp.handle_mm_fault
22.70 +49.6 72.34 perf-profile.children.cycles-pp.__handle_mm_fault
21.99 +50.0 72.03 perf-profile.children.cycles-pp.do_anonymous_page
16.55 +53.0 69.53 perf-profile.children.cycles-pp.__lru_cache_add
0.00 +73.1 73.05 perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
3.16 -1.7 1.45 perf-profile.self.cycles-pp.do_syscall_64
2.77 ± 2% -1.5 1.27 perf-profile.self.cycles-pp.prepare_exit_to_usermode
2.35 -1.2 1.12 ± 2% perf-profile.self.cycles-pp.native_flush_tlb_one_user
1.67 -0.8 0.85 ± 2% perf-profile.self.cycles-pp.clear_page_erms
1.35 -0.7 0.64 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
1.26 -0.7 0.58 ± 3% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.14 ± 2% -0.6 0.51 ± 2% perf-profile.self.cycles-pp.entry_SYSCALL_64
1.02 ± 2% -0.5 0.48 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.64 ± 2% -0.4 0.28 perf-profile.self.cycles-pp.get_page_from_freelist
0.54 -0.3 0.21 ± 2% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.57 -0.3 0.26 perf-profile.self.cycles-pp.__handle_mm_fault
0.54 -0.3 0.25 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
0.48 -0.3 0.21 ± 3% perf-profile.self.cycles-pp.perf_event_mmap
0.49 -0.3 0.23 ± 3% perf-profile.self.cycles-pp.perf_iterate_sb
0.46 -0.3 0.21 ± 2% perf-profile.self.cycles-pp.__vma_adjust
0.41 ± 5% -0.2 0.18 ± 4% perf-profile.self.cycles-pp.__do_page_fault
0.41 -0.2 0.18 ± 2% perf-profile.self.cycles-pp.do_brk_flags
0.40 ± 3% -0.2 0.18 ± 2% perf-profile.self.cycles-pp.find_vma
0.36 ± 2% -0.2 0.15 ± 3% perf-profile.self.cycles-pp.vmacache_find
0.36 ± 4% -0.2 0.16 ± 2% perf-profile.self.cycles-pp.__x64_sys_brk
0.35 ± 8% -0.2 0.15 ± 12% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.36 -0.2 0.16 ± 5% perf-profile.self.cycles-pp.handle_mm_fault
0.37 ± 3% -0.2 0.17 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.66 -0.2 0.50 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.28 -0.2 0.12 ± 4% perf-profile.self.cycles-pp.perf_event_mmap_output
0.26 -0.1 0.11 ± 6% perf-profile.self.cycles-pp.do_anonymous_page
0.24 -0.1 0.10 perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.24 -0.1 0.10 ± 8% perf-profile.self.cycles-pp.sync_regs
0.36 ± 2% -0.1 0.22 ± 3% perf-profile.self.cycles-pp.unmap_page_range
0.22 ± 5% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.21 ± 2% -0.1 0.09 ± 8% perf-profile.self.cycles-pp.up_write
0.25 ± 3% -0.1 0.14 perf-profile.self.cycles-pp.try_charge
0.20 ± 2% -0.1 0.09 ± 8% perf-profile.self.cycles-pp.vma_merge
0.19 ± 2% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.__mod_node_page_state
0.19 ± 4% -0.1 0.09 perf-profile.self.cycles-pp.__might_sleep
0.18 -0.1 0.08 ± 15% perf-profile.self.cycles-pp.__count_memcg_events
0.12 ± 6% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__mod_zone_page_state
0.16 ± 5% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.16 -0.1 0.07 ± 6% perf-profile.self.cycles-pp.down_write_killable
0.16 ± 7% -0.1 0.07 ± 5% perf-profile.self.cycles-pp.security_mmap_addr
0.17 ± 2% -0.1 0.09 ± 10% perf-profile.self.cycles-pp.cred_has_capability
0.16 ± 3% -0.1 0.07 perf-profile.self.cycles-pp.memcpy_erms
0.66 ± 3% -0.1 0.58 ± 2% perf-profile.self.cycles-pp.release_pages
0.16 ± 2% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.__perf_sw_event
0.17 ± 4% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.11 -0.1 0.03 ±100% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.14 ± 5% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.selinux_vm_enough_memory
0.15 ± 6% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.get_unmapped_area
0.15 ± 3% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.alloc_pages_vma
0.16 ± 3% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.cap_vm_enough_memory
0.10 ± 4% -0.1 0.03 ±100% perf-profile.self.cycles-pp.prep_new_page
0.14 -0.1 0.07 ± 7% perf-profile.self.cycles-pp.free_unref_page_commit
0.13 ± 3% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.12 ± 3% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.14 ± 5% -0.1 0.07 perf-profile.self.cycles-pp._raw_spin_lock
0.09 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.12 ± 3% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.cap_mmap_addr
0.13 ± 3% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.free_unref_page_list
0.12 ± 3% -0.1 0.06 perf-profile.self.cycles-pp.down_write
0.11 ± 4% -0.1 0.05 perf-profile.self.cycles-pp.__lru_cache_add
0.10 -0.1 0.04 ± 57% perf-profile.self.cycles-pp.selinux_mmap_addr
0.12 -0.1 0.06 ± 7% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.11 ± 8% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.down_read_trylock
0.13 ± 3% -0.1 0.07 perf-profile.self.cycles-pp.__mod_memcg_state
0.12 ± 4% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.strlcpy
0.10 ± 4% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.free_unref_page_prepare
0.16 ± 5% -0.1 0.11 perf-profile.self.cycles-pp.page_remove_rmap
0.07 ± 11% -0.0 0.03 ±100% perf-profile.self.cycles-pp.page_add_new_anon_rmap
0.09 ± 5% -0.0 0.05 perf-profile.self.cycles-pp.up_read
0.14 ± 6% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.__mod_lruvec_state
0.09 ± 5% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.07 ± 11% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__lock_text_start
0.26 +0.1 0.33 ± 4% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.18 ± 2% +0.3 0.44 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
63.87 +17.6 81.47 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 0f5b256b2c: will-it-scale.per_process_ops -1.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -1.2% regression of will-it-scale.per_process_ops due to commit:
commit: 0f5b256b2c35bf7d0faf874ed01227b4b7cb0118 ("[PATCH v10 3/6] mm: Introduce Reported pages")
url: https://github.com/0day-ci/linux/commits/Alexander-Duyck/mm-virtio-Provid...
in testcase: will-it-scale
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:
nr_task: 100%
mode: process
test: page_fault2
cpufreq_governor: performance
ucode: 0xb000036
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
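For reference, the "-1.2% regression" headline is just the percent change of the benchmark metric between the base and patched commits, averaged over runs. A minimal sketch of that computation, using the will-it-scale.per_process_ops values from the comparison table below:

```python
# Percent change as LKP reports it: (new - base) / base * 100.
# Values are will-it-scale.per_process_ops for the base commit
# (e10e2ab29d) and the patched commit (0f5b256b2c).
base = 83249
new = 82239

change_pct = (new - base) / base * 100
print(f"{change_pct:+.1f}%")  # prints -1.2%
```

The same formula applies to every %change column in these tables; the ± figures are the relative standard deviation across the four runs of each side.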
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2019-05-14.cgz/lkp-bdw-ep6/page_fault2/will-it-scale/0xb000036
commit:
e10e2ab29d ("mm: Use zone and order instead of free area in free_list manipulators")
0f5b256b2c ("mm: Introduce Reported pages")
e10e2ab29d6d4ee2 0f5b256b2c35bf7d0faf874ed01
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at_ip___perf_sw_event/0x
1:4 -25% :4 dmesg.WARNING:at_ip__fsnotify_parent/0x
3:4 1% 3:4 perf-profile.calltrace.cycles-pp.error_entry.testcase
3:4 1% 3:4 perf-profile.children.cycles-pp.error_entry
2:4 1% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
83249 -1.2% 82239 will-it-scale.per_process_ops
7325970 -1.2% 7237126 will-it-scale.workload
1137 ± 2% -2.8% 1105 ± 2% vmstat.system.cs
9785 +11.8% 10942 ± 5% softirqs.CPU0.SCHED
7282 ± 30% +30.5% 9504 ± 9% softirqs.CPU2.RCU
2.211e+09 -1.2% 2.185e+09 proc-vmstat.numa_hit
2.211e+09 -1.2% 2.185e+09 proc-vmstat.numa_local
2.213e+09 -1.2% 2.186e+09 proc-vmstat.pgalloc_normal
2.204e+09 -1.2% 2.178e+09 proc-vmstat.pgfault
2.212e+09 -1.2% 2.185e+09 proc-vmstat.pgfree
232.75 ± 21% +359.8% 1070 ± 66% interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
232.75 ± 21% +359.8% 1070 ± 66% interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
34.00 ± 76% +447.1% 186.00 ±114% interrupts.CPU18.RES:Rescheduling_interrupts
318.00 ± 42% -65.8% 108.75 ± 77% interrupts.CPU28.RES:Rescheduling_interrupts
173.50 ± 32% +143.7% 422.75 ± 28% interrupts.CPU36.RES:Rescheduling_interrupts
70.75 ± 73% +726.1% 584.50 ± 60% interrupts.CPU39.RES:Rescheduling_interrupts
66.75 ± 38% +78.7% 119.25 ± 19% interrupts.CPU83.RES:Rescheduling_interrupts
286.00 ± 93% -88.0% 34.25 ± 97% interrupts.CPU84.RES:Rescheduling_interrupts
41205135 -3.7% 39666469 perf-stat.i.branch-misses
1096 ± 2% -2.9% 1064 ± 2% perf-stat.i.context-switches
3.60 +3.8% 3.74 ± 3% perf-stat.i.cpi
34.38 ± 2% -4.6% 32.80 ± 2% perf-stat.i.cpu-migrations
547.67 +323.8% 2321 ± 80% perf-stat.i.cycles-between-cache-misses
71391394 -3.5% 68859821 ± 2% perf-stat.i.dTLB-store-misses
14877534 -3.1% 14415629 ± 2% perf-stat.i.iTLB-load-misses
7256992 -3.0% 7036423 ± 2% perf-stat.i.minor-faults
1.272e+08 -3.3% 1.231e+08 perf-stat.i.node-loads
2.62 +1.0 3.64 ± 25% perf-stat.i.node-store-miss-rate%
847863 +4.6% 887118 perf-stat.i.node-store-misses
31585215 -3.4% 30501990 ± 2% perf-stat.i.node-stores
7256096 -3.0% 7035925 ± 2% perf-stat.i.page-faults
0.33 ± 3% +0.0 0.34 ± 2% perf-stat.overall.node-load-miss-rate%
2.61 +0.2 2.83 ± 2% perf-stat.overall.node-store-miss-rate%
2791987 +1.1% 2822374 perf-stat.overall.path-length
41058236 -3.7% 39547998 perf-stat.ps.branch-misses
34.24 ± 2% -4.5% 32.69 perf-stat.ps.cpu-migrations
71150671 -3.5% 68679628 ± 2% perf-stat.ps.dTLB-store-misses
14827264 -3.0% 14377324 perf-stat.ps.iTLB-load-misses
7230962 -3.0% 7016784 ± 2% perf-stat.ps.minor-faults
1.268e+08 -3.2% 1.228e+08 perf-stat.ps.node-loads
844990 +4.7% 884796 perf-stat.ps.node-store-misses
31478503 -3.4% 30422235 ± 2% perf-stat.ps.node-stores
7230628 -3.0% 7016690 ± 2% perf-stat.ps.page-faults
4.10 -0.7 3.43 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte
4.13 -0.7 3.47 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
5.35 -0.7 4.69 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
5.44 -0.7 4.78 perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
7.50 -0.6 6.87 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.41 -0.6 6.78 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
53.51 -0.3 53.17 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
53.93 -0.3 53.62 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.testcase
1.90 -0.3 1.60 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range
1.92 -0.3 1.62 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
54.98 -0.3 54.69 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.testcase
55.33 -0.3 55.04 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.testcase
61.72 -0.3 61.44 perf-profile.calltrace.cycles-pp.testcase
59.26 -0.2 59.03 perf-profile.calltrace.cycles-pp.page_fault.testcase
0.74 ± 2% -0.2 0.52 ± 3% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
4.05 +0.0 4.09 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
4.08 +0.0 4.12 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
4.10 +0.0 4.14 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.59 +0.1 1.64 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.73 +0.1 1.78 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.21 +0.1 1.28 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.97 +0.1 1.03 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.38 +0.1 1.44 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
3.71 +0.1 3.79 perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
3.64 +0.1 3.73 perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu
33.13 +0.2 33.33 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
33.12 +0.2 33.32 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
31.65 +0.2 31.87 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region
31.88 +0.2 32.11 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
37.24 +0.2 37.48 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
37.24 +0.2 37.48 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
37.24 +0.2 37.49 perf-profile.calltrace.cycles-pp.munmap
37.23 +0.2 37.48 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
37.23 +0.2 37.48 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
37.23 +0.2 37.48 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
37.23 +0.2 37.48 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
28.96 +0.6 29.55 perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
28.48 +0.6 29.07 perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range
30.94 +0.7 31.65 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
31.01 +0.7 31.72 perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
6.32 -1.0 5.30 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
5.45 -0.7 4.79 perf-profile.children.cycles-pp.__lru_cache_add
5.36 -0.7 4.70 perf-profile.children.cycles-pp.pagevec_lru_move_fn
7.46 -0.6 6.82 perf-profile.children.cycles-pp.alloc_set_pte
7.51 -0.6 6.88 perf-profile.children.cycles-pp.finish_fault
53.54 -0.3 53.20 perf-profile.children.cycles-pp.__handle_mm_fault
53.96 -0.3 53.65 perf-profile.children.cycles-pp.handle_mm_fault
55.00 -0.3 54.71 perf-profile.children.cycles-pp.__do_page_fault
55.34 -0.3 55.05 perf-profile.children.cycles-pp.do_page_fault
57.38 -0.3 57.12 perf-profile.children.cycles-pp.page_fault
62.64 -0.3 62.38 perf-profile.children.cycles-pp.testcase
0.99 -0.2 0.76 perf-profile.children.cycles-pp.__list_del_entry_valid
0.39 -0.0 0.36 ± 2% perf-profile.children.cycles-pp.__mod_lruvec_state
4.11 +0.0 4.14 perf-profile.children.cycles-pp.tlb_finish_mmu
1.60 +0.0 1.65 perf-profile.children.cycles-pp.shmem_fault
1.73 +0.1 1.79 perf-profile.children.cycles-pp.__do_fault
1.40 +0.1 1.46 perf-profile.children.cycles-pp.shmem_getpage_gfp
1.23 +0.1 1.29 perf-profile.children.cycles-pp.find_lock_entry
0.97 +0.1 1.03 perf-profile.children.cycles-pp.find_get_entry
33.13 +0.2 33.33 perf-profile.children.cycles-pp.unmap_vmas
33.13 +0.2 33.33 perf-profile.children.cycles-pp.unmap_page_range
37.24 +0.2 37.49 perf-profile.children.cycles-pp.munmap
37.23 +0.2 37.48 perf-profile.children.cycles-pp.__do_munmap
37.23 +0.2 37.48 perf-profile.children.cycles-pp.__x64_sys_munmap
37.23 +0.2 37.48 perf-profile.children.cycles-pp.__vm_munmap
37.23 +0.2 37.48 perf-profile.children.cycles-pp.unmap_region
37.33 +0.2 37.57 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
37.33 +0.2 37.57 perf-profile.children.cycles-pp.do_syscall_64
35.97 +0.3 36.22 perf-profile.children.cycles-pp.tlb_flush_mmu
35.83 +0.3 36.09 perf-profile.children.cycles-pp.release_pages
32.70 +0.7 33.38 perf-profile.children.cycles-pp.free_unref_page_list
32.15 +0.7 32.84 perf-profile.children.cycles-pp.free_pcppages_bulk
64.33 +0.9 65.19 perf-profile.children.cycles-pp._raw_spin_lock
0.98 -0.2 0.75 perf-profile.self.cycles-pp.__list_del_entry_valid
0.95 -0.0 0.92 perf-profile.self.cycles-pp.free_pcppages_bulk
0.14 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.__mod_lruvec_state
0.26 +0.0 0.27 perf-profile.self.cycles-pp.handle_mm_fault
0.17 ± 4% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.__count_memcg_events
0.62 +0.1 0.68 perf-profile.self.cycles-pp.find_get_entry
0.89 +0.2 1.13 perf-profile.self.cycles-pp.get_page_from_freelist
will-it-scale.per_process_ops
86000 +-+-----------------------------------------------------------------+
|.+.+.+..+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+..+ |
84000 +-+ + |
| +.+.+.|
| O O O |
82000 +-+ O O O O O O O O O O O O O O O O O O O |
| |
80000 +-+ |
| |
78000 +-+ O |
| O O |
| O O |
76000 +-+ |
O O |
74000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
Re: [LKP] 62e5d8180c ("mm/lru: change the lru_lock iff page's lruvec is .."): BUG: sleeping function called from invalid context at include/linux/pagemap.h:468
by Philip Li
On Fri, Sep 20, 2019 at 11:49:35PM +0800, Alex Shi wrote:
>
>
> On 2019/9/20 at 7:10 AM, kernel test robot wrote:
> >
> > f8efbd33e3 mm/lru: replace pgdat lru_lock with lruvec lock
> > 62e5d8180c mm/lru: change the lru_lock iff page's lruvec is different
> > 285d41f504 mm/lru: fix the comments of lru_lock
> > +------------------------------------------------------------------------------+------------+------------+------------+
> > | | f8efbd33e3 | 62e5d8180c | 285d41f504 |
> > +------------------------------------------------------------------------------+------------+------------+------------+
> > | boot_successes | 46 | 0 | 0 |
> > | boot_failures | 1 | 18 | 17 |
> > | BUG:unable_to_handle_page_fault_for_address | 1 | | |
>
> Thanks a lot Philip!
>
> Do you still have the oops message for f8efbd33e3 above? It is buried deeper in the log and would be more valuable!
Hi Alex, you are welcome. Please allow a few days; we will check
and provide the information to you by next Monday. +Rong to help here.
>
> Thanks
> Alex
bafa3901f2: BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: bafa3901f269b766c302f893de4470df9f5766ee ("experiment")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/brauner/linux.git seccomp_notify_filter_empty
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------------------------------+------------+------------+
| | d4148b5838 | bafa3901f2 |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 12 | 0 |
| boot_failures | 0 | 11 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 0 | 11 |
| RIP:default_idle | 0 | 2 |
+-----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp@intel.com>
[ 7.132390] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:255
[ 7.135092] in_atomic(): 1, irqs_disabled(): 0, pid: 3226, name: sed
[ 7.136865] CPU: 1 PID: 3226 Comm: sed Not tainted 5.3.0-00004-gbafa3901f269b #1
[ 7.139134] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 7.141528] Call Trace:
[ 7.142554] <IRQ>
[ 7.143479] dump_stack+0x46/0x59
[ 7.144561] ___might_sleep+0xf8/0x109
[ 7.145790] mutex_lock+0x1c/0x3b
[ 7.146876] __put_seccomp_filter+0x22/0x73
[ 7.148102] free_task+0x25/0x4b
[ 7.149187] rcu_core+0x22a/0x38f
[ 7.150290] __do_softirq+0x120/0x27e
[ 7.151472] irq_exit+0x46/0x85
[ 7.152552] smp_apic_timer_interrupt+0x127/0x132
[ 7.153904] apic_timer_interrupt+0xf/0x20
[ 7.155179] </IRQ>
[ 7.156070] RIP: 0033:0x7f625f11c0e9
[ 7.157180] Code: 0f 11 29 0f 11 71 f0 0f 11 79 e0 44 0f 11 41 d0 41 0f 11 23 c3 0f 10 26 0f 10 6e 10 0f 10 76 20 0f 10 7e 30 44 0f 10 44 16 f0 <4c> 8d 5c 17 f0 48 8d 4c 16 f0 4d 89 d9 4d 89 d8 49 83 e0 0f 4c 29
[ 7.161664] RSP: 002b:00007ffe324fc658 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff13
[ 7.163865] RAX: 000055d1fa918070 RBX: 000055d1fa4aef10 RCX: 0000000000002940
[ 7.165449] RDX: 0000000000003d80 RSI: 000055d1fa4b0c80 RDI: 000055d1fa918070
[ 7.167346] RBP: 000055d1fa042a40 R08: 00000000000001ff R09: 0000000000001c20
[ 7.169141] R10: 0000000000000003 R11: 000055d1faaf5130 R12: 00000000000003c0
[ 7.171102] R13: 00007ffe324fc6d0 R14: 00007ffe324fc6f0 R15: 00000000000001eb
[ 11.771275] Kernel tests: Boot OK!
[ 11.771278]
Elapsed time: 10
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-0 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-1 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-2 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-3 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-4 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-5 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-6 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-7 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-8 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-9 256G
qemu-img create -f qcow2 disk-vm-snb-ssd-4G-fb6c0ece5cc1-10 256G
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-ssd-4G-fb6c0ece5cc1
-m 4096
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-device virtio-scsi-pci,id=scsi0
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-0,if=none,id=hd0,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-1,if=none,id=hd1,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-2,if=none,id=hd2,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-3,if=none,id=hd3,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-4,if=none,id=hd4,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-5,if=none,id=hd5,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd5,scsi-id=1,lun=5
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-6,if=none,id=hd6,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd6,scsi-id=1,lun=6
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-7,if=none,id=hd7,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd7,scsi-id=1,lun=7
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-8,if=none,id=hd8,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd8,scsi-id=1,lun=8
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-9,if=none,id=hd9,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd9,scsi-id=1,lun=9
-drive file=disk-vm-snb-ssd-4G-fb6c0ece5cc1-10,if=none,id=hd10,media=disk,aio=native,cache=none
-device scsi-hd,bus=scsi0.0,drive=hd10,scsi-id=1,lun=10
-serial stdio
-display none
-monitor null
)
append=(
ip=::::vm-snb-ssd-4G-fb6c0ece5cc1::dhcp
root=/dev/ram0
user=lkp
job=/job-script
ARCH=x86_64
kconfig=x86_64-kexec
branch=linux-devel/devel-catchup-201909192322
commit=bafa3901f269b766c302f893de4470df9f5766ee
BOOT_IMAGE=/pkg/linux/x86_64-kexec/gcc-7/bafa3901f269b766c302f893de4470df9f5766ee/vmlinuz-5.3.0-00004-gbafa3901f269b
max_uptime=600
RESULT_ROOT=/result/boot/1/vm-snb-ssd-4G/debian-x86_64-2019-05-14.cgz/x86_64-kexec/gcc-7/bafa3901f269b766c302f893de4470df9f5766ee/8
result_service=tmpfs
debug
apic=debug
sysrq_always_enabled
To reproduce:
# build kernel
cd linux
cp config-5.3.0-00004-gbafa3901f269b .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[btrfs] ce2a340b1e: kernel_BUG_at_fs/btrfs/file-item.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: ce2a340b1e09d89835a9cb820149e246a42a9432 ("btrfs: Use iomap_dio_ops.submit_io()")
https://github.com/goldwynr/linux iomap5
in testcase: xfstests
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group00
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------------------------------+------------+------------+
| | 8dd11befa6 | ce2a340b1e |
+--------------------------------------------------------------------------------+------------+------------+
| boot_successes | 13 | 70 |
| boot_failures | 5 | 78 |
| BUG:kernel_hang_in_test_stage | 5 | 4 |
| WARNING:at_fs/btrfs/space-info.h:#btrfs_space_info_update_bytes_may_use[btrfs] | 0 | 64 |
| RIP:btrfs_space_info_update_bytes_may_use[btrfs] | 0 | 64 |
| WARNING:at_fs/btrfs/inode.c:#btrfs_destroy_inode[btrfs] | 0 | 52 |
| RIP:btrfs_destroy_inode[btrfs] | 0 | 52 |
| WARNING:at_fs/btrfs/extent-tree.c:#btrfs_free_block_groups[btrfs] | 0 | 51 |
| RIP:btrfs_free_block_groups[btrfs] | 0 | 51 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 1 |
| Oops:#[##] | 0 | 1 |
| RIP:btrfs_add_ordered_sum[btrfs] | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 14 |
| kernel_BUG_at_fs/btrfs/file-item.c | 0 | 13 |
| invalid_opcode:#[##] | 0 | 13 |
| RIP:btrfs_csum_one_bio[btrfs] | 0 | 13 |
| general_protection_fault:#[##] | 0 | 1 |
| RIP:run_timer_softirq | 0 | 1 |
| RIP:native_queued_spin_lock_slowpath | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 1 |
| WARNING:at_net/sched/sch_generic.c:#dev_watchdog | 0 | 1 |
| RIP:dev_watchdog | 0 | 1 |
| WARNING:at_fs/btrfs/extent_io.c:#insert_state[btrfs] | 0 | 1 |
| RIP:insert_state[btrfs] | 0 | 1 |
+--------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 52.877675] kernel BUG at fs/btrfs/file-item.c:478!
[ 52.879320] invalid opcode: 0000 [#1] SMP PTI
[ 52.880742] CPU: 0 PID: 22551 Comm: fsstress Not tainted 5.3.0-rc2-00034-gce2a340b1e09d #2
[ 52.882536] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 52.884434] RIP: 0010:btrfs_csum_one_bio+0x448/0x4b0 [btrfs]
[ 52.885874] Code: 7c 24 40 89 4c 24 48 44 89 4c 24 4c e9 e9 fc ff ff 48 8b 7c 24 30 48 89 ee e8 94 cb 02 00 48 85 c0 49 89 c6 0f 85 5a fd ff ff <0f> 0b b8 09 00 00 00 48 8b 9c 24 d0 01 00 00 65 48 33 1c 25 28 00
[ 52.890665] RSP: 0018:ffffa04180823840 EFLAGS: 00010246
[ 52.892101] RAX: 0000000000000000 RBX: 0000000000001000 RCX: 0000000000000000
[ 52.893815] RDX: 00000000001d1000 RSI: 0000000000000000 RDI: ffff94063cc47268
[ 52.895681] RBP: 00000000001d2000 R08: 0000000000001000 R09: 0000000000000005
[ 52.897605] R10: 0000000000001000 R11: ffff9406f7652ff0 R12: ffff94063cab0000
[ 52.899871] R13: ffff940635550000 R14: 0000000000000000 R15: ffff9406c6126000
[ 52.901721] FS: 00007fc51d88db40(0000) GS:ffff9406ffc00000(0000) knlGS:0000000000000000
[ 52.903643] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 52.905235] CR2: 00007fc51d88b000 CR3: 00000000784d4000 CR4: 00000000000406f0
[ 52.906963] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 52.908692] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 52.911014] Call Trace:
[ 52.912342] ? alloc_extent_map+0x16/0x60 [btrfs]
[ 52.913699] ? btrfs_get_token_32+0x10c/0x130 [btrfs]
[ 52.915447] ? kmem_cache_alloc+0x35/0x230
[ 52.917074] ? finish_wait+0x80/0x80
[ 52.918380] ? mempool_alloc+0x67/0x1a0
[ 52.919662] ? gup_pgd_range+0x342/0xd90
[ 52.921430] ? btrfs_get_chunk_map+0x39/0xc0 [btrfs]
[ 52.923382] btrfs_submit_direct+0x611/0x650 [btrfs]
[ 52.924760] iomap_dio_submit_bio+0x74/0x90
[ 52.926204] iomap_dio_bio_actor+0x1c3/0x3f0
[ 52.927484] ? iomap_dio_bio_actor+0x3f0/0x3f0
[ 52.929402] iomap_apply+0xe2/0x1b0
[ 52.931289] iomap_dio_rw+0x2cf/0x410
[ 52.933082] ? iomap_dio_bio_actor+0x3f0/0x3f0
[ 52.934852] ? btrfs_direct_IO+0xe0/0x310 [btrfs]
[ 52.936569] btrfs_direct_IO+0xe0/0x310 [btrfs]
[ 52.937847] btrfs_file_write_iter+0x2e4/0x570 [btrfs]
[ 52.939215] ? _copy_to_user+0x69/0x80
[ 52.940379] new_sync_write+0x12d/0x1d0
[ 52.941627] vfs_write+0xbe/0x1d0
[ 52.942704] ksys_write+0xa1/0xe0
[ 52.943790] do_syscall_64+0x5b/0x1f0
[ 52.944899] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 52.946209] RIP: 0033:0x7fc51d0601b0
[ 52.947538] Code: 2e 0f 1f 84 00 00 00 00 00 90 48 8b 05 19 7e 20 00 c3 0f 1f 84 00 00 00 00 00 83 3d 19 c2 20 00 00 75 10 b8 01 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 ae fc ff ff 48 89 04 24
[ 52.951708] RSP: 002b:00007ffcb8b91738 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 52.953561] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fc51d0601b0
[ 52.956019] RDX: 0000000000014000 RSI: 000055d5b4a44000 RDI: 0000000000000004
[ 52.958640] RBP: 00000000000002c4 R08: 0000000000000003 R09: 0000000000015040
[ 52.960902] R10: 00007fc51d04ab58 R11: 0000000000000246 R12: 00000000001cd000
[ 52.962693] R13: 0000000000014000 R14: 000055d5b4a44000 R15: 0000000000000000
[ 52.964570] Modules linked in: btrfs xor zstd_decompress zstd_compress raid6_pq libcrc32c dm_mod sr_mod cdrom sg ata_generic pata_acpi bochs_drm drm_vram_helper ttm intel_rapl_msr drm_kms_helper intel_rapl_common ppdev crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel syscopyarea sysfillrect sysimgblt fb_sys_fops snd_pcm drm snd_timer aesni_intel snd crypto_simd cryptd glue_helper ata_piix soundcore joydev pcspkr serio_raw libata i2c_piix4 parport_pc parport floppy ip_tables
[ 52.977330] ---[ end trace 13b62706dd56ede1 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc2-00034-gce2a340b1e09d .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[btrfs] 9118264507: xfstests.generic.269.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 91182645075f9a41953bea703a7d10e9f661cd13 ("btrfs: stop partially refilling tickets when releasing space")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: xfstests
with following parameters:
disk: 4HDD
fs: btrfs
test: generic-group13
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
2019-09-19 11:08:30 export TEST_DIR=/fs/vda
2019-09-19 11:08:30 export TEST_DEV=/dev/vda
2019-09-19 11:08:30 export FSTYP=btrfs
2019-09-19 11:08:30 export SCRATCH_MNT=/fs/scratch
2019-09-19 11:08:30 mkdir /fs/scratch -p
2019-09-19 11:08:30 export SCRATCH_DEV_POOL="/dev/vdb /dev/vdc /dev/vdd"
2019-09-19 11:08:30 sed "s:^:generic/:" //lkp/benchmarks/xfstests/tests/generic-group13 | grep -F -f merged_ignored_files
2019-09-19 11:08:30 sed "s:^:generic/:" //lkp/benchmarks/xfstests/tests/generic-group13 | grep -v -F -f merged_ignored_files
2019-09-19 11:08:30 ./check generic/252 generic/253 generic/254 generic/255 generic/256 generic/257 generic/258 generic/260 generic/263 generic/264 generic/265 generic/266 generic/267 generic/268 generic/269 generic/270 generic/271
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 vm-snb-4G-a1e9014812f9 5.3.0-rc8-00124-g91182645075f9 #1 SMP Wed Sep 11 01:46:15 CST 2019
MKFS_OPTIONS -- /dev/vdb
MOUNT_OPTIONS -- /dev/vdb /fs/scratch
generic/252 3s
generic/253 1s
generic/254 1s
generic/255 1s
generic/256 228s
generic/257 0s
generic/258 1s
generic/260 [not run] FITRIM not supported on /fs/scratch
generic/263 21s
generic/264 [not run] xfs_io funshare failed (old kernel/wrong fs?)
generic/265 1s
generic/266 1s
generic/267 2s
generic/268 1s
generic/269 _check_dmesg: something found in dmesg (see /lkp/benchmarks/xfstests/results//generic/269.dmesg)
generic/270 [not run] disk quotas not supported by this filesystem type: btrfs
generic/271 2s
Ran: generic/252 generic/253 generic/254 generic/255 generic/256 generic/257 generic/258 generic/260 generic/263 generic/264 generic/265 generic/266 generic/267 generic/268 generic/269 generic/270 generic/271
Not run: generic/260 generic/264 generic/270
Failures: generic/269
Failed 1 of 17 tests
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc8-00124-g91182645075f9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen