[mm] f73a14220a: xfstests.btrfs.101.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: f73a14220ada7779b997116bd5c096ffb854123a ("mm: Probe for sub-page faults in fault_in_writeable()")
https://git.kernel.org/cgit/linux/kernel/git/arm64/linux.git devel/btrfs-live-lock-fix
in testcase: xfstests
version: xfstests-x86_64-99bc497-1_20211129
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group-10
ucode: 0x28
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 8 threads 1 socket Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2021-11-29 20:29:13 export TEST_DIR=/fs/sda1
2021-11-29 20:29:13 export TEST_DEV=/dev/sda1
2021-11-29 20:29:13 export FSTYP=btrfs
2021-11-29 20:29:13 export SCRATCH_MNT=/fs/scratch
2021-11-29 20:29:13 mkdir /fs/scratch -p
2021-11-29 20:29:13 export SCRATCH_DEV_POOL="/dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5 /dev/sda6"
2021-11-29 20:29:13 sed "s:^:btrfs/:" //lkp/benchmarks/xfstests/tests/btrfs-group-10
2021-11-29 20:29:13 ./check btrfs/100 btrfs/101 btrfs/102 btrfs/103 btrfs/104 btrfs/105 btrfs/106 btrfs/107 btrfs/108 btrfs/109
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 lkp-hsw-d01 5.16.0-rc2-00002-gf73a14220ada #1 SMP Sun Nov 28 21:10:09 CST 2021
MKFS_OPTIONS -- /dev/sda2
MOUNT_OPTIONS -- /dev/sda2 /fs/scratch
btrfs/100 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/100.out.bad)
--- tests/btrfs/100.out 2021-11-29 16:37:46.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/100.out.bad 2021-11-29 20:29:15.865210474 +0000
@@ -1,11 +1,3 @@
QA output created by 100
-Label: none uuid: <UUID>
- Total devices <NUM> FS bytes used <SIZE>
- devid <DEVID> size <SIZE> used <SIZE> path SCRATCH_DEV
- devid <DEVID> size <SIZE> used <SIZE> path /dev/mapper/error-test
-
-Label: none uuid: <UUID>
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/100.out /lkp/benchmarks/xfstests/results//btrfs/100.out.bad' to see the entire diff)
btrfs/101 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/101.out.bad)
--- tests/btrfs/101.out 2021-11-29 16:37:46.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/101.out.bad 2021-11-29 20:29:17.421210372 +0000
@@ -1,11 +1,3 @@
QA output created by 101
-Label: none uuid: <UUID>
- Total devices <NUM> FS bytes used <SIZE>
- devid <DEVID> size <SIZE> used <SIZE> path SCRATCH_DEV
- devid <DEVID> size <SIZE> used <SIZE> path /dev/mapper/error-test
-
-Label: none uuid: <UUID>
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/101.out /lkp/benchmarks/xfstests/results//btrfs/101.out.bad' to see the entire diff)
btrfs/102 1s
btrfs/103 2s
btrfs/104 1s
btrfs/105 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/105.out.bad)
--- tests/btrfs/105.out 2021-11-29 16:37:46.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/105.out.bad 2021-11-29 20:29:22.532210037 +0000
@@ -5,7 +5,5 @@
9802287a6faa01a1fd0e01732b732fca SCRATCH_MNT/mysnap1/foo/bar
wrote 98304/98304 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-File digest in the original filesystem after being replaced:
-de277dfac706c359613033120349cf88 SCRATCH_MNT/mysnap2_ro/foo/bar
-File digest in the new filesystem:
-de277dfac706c359613033120349cf88 SCRATCH_MNT/mysnap2_ro/foo/bar
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/105.out /lkp/benchmarks/xfstests/results//btrfs/105.out.bad' to see the entire diff)
btrfs/106 1s
btrfs/107 1s
btrfs/108 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/108.out.bad)
--- tests/btrfs/108.out 2021-11-29 16:37:46.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/108.out.bad 2021-11-29 20:29:26.064209806 +0000
@@ -8,6 +8,5 @@
File digests in the original filesystem:
fbf36a062ffcbd644b5739c4d683ccc7 SCRATCH_MNT/snap/foo
5d2c92827a70aad932cfe7363105c55e SCRATCH_MNT/snap/bar
-File digests in the new filesystem:
-fbf36a062ffcbd644b5739c4d683ccc7 SCRATCH_MNT/snap/foo
-5d2c92827a70aad932cfe7363105c55e SCRATCH_MNT/snap/bar
+failed: '/bin/btrfs send -f /fs/sda1/btrfs-test-108/1.snap /fs/scratch/snap'
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/108.out /lkp/benchmarks/xfstests/results//btrfs/108.out.bad' to see the entire diff)
btrfs/109 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/109.out.bad)
--- tests/btrfs/109.out 2021-11-29 16:37:46.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/109.out.bad 2021-11-29 20:29:27.005209745 +0000
@@ -8,6 +8,5 @@
File digests in the original filesystem:
253f558dbd25727d7d2fdb77f9ad2590 SCRATCH_MNT/snap/foo
253f558dbd25727d7d2fdb77f9ad2590 SCRATCH_MNT/snap/bar
-File digests in the new filesystem:
-253f558dbd25727d7d2fdb77f9ad2590 SCRATCH_MNT/snap/foo
-253f558dbd25727d7d2fdb77f9ad2590 SCRATCH_MNT/snap/bar
+failed: '/bin/btrfs send -f /fs/sda1/btrfs-test-109/1.snap /fs/scratch/snap'
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/109.out /lkp/benchmarks/xfstests/results//btrfs/109.out.bad' to see the entire diff)
Ran: btrfs/100 btrfs/101 btrfs/102 btrfs/103 btrfs/104 btrfs/105 btrfs/106 btrfs/107 btrfs/108 btrfs/109
Failures: btrfs/100 btrfs/101 btrfs/105 btrfs/108 btrfs/109
Failed 5 of 10 tests
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
[loop] c895b784c6: INFO:rcu_sched_self-detected_stall_on_CPU
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: c895b784c699224d690c7dfbdcff309df82366e3 ("loop: don't hold lo_mutex during __loop_clr_fd()")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
version: stress-ng-x86_64-0.11-06_20211126
with following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 60s
class: device
test: loop
cpufreq_governor: performance
ucode: 0xb000280
on test machine: 96 threads 2 sockets Ice Lake with 256G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 194.222111][ C71] rcu: INFO: rcu_sched self-detected stall on CPU
[ 194.222114][ C71] rcu: 71-....: (14353 ticks this GP) idle=dfb/1/0x4000000000000000 softirq=9274/9274 fqs=21431
[ 194.222119][ C71] (t=100001 jiffies g=12029 q=232127)
[ 194.222123][ C71] NMI backtrace for cpu 71
[ 194.222125][ C71] CPU: 71 PID: 5857 Comm: stress-ng Not tainted 5.16.0-rc2-00042-gc895b784c699 #1
[ 194.222127][ C71] Call Trace:
[ 194.222128][ C71] <IRQ>
[ 194.222129][ C71] dump_stack_lvl (lib/dump_stack.c:107)
[ 194.222137][ C71] nmi_cpu_backtrace.cold (lib/nmi_backtrace.c:113)
[ 194.222139][ C71] ? lapic_can_unplug_cpu (arch/x86/kernel/apic/hw_nmi.c:33)
[ 194.222143][ C71] nmi_trigger_cpumask_backtrace (lib/nmi_backtrace.c:62)
[ 194.222147][ C71] rcu_dump_cpu_stacks (kernel/rcu/tree_stall.h:339 (discriminator 5))
[ 194.222151][ C71] rcu_sched_clock_irq.cold (kernel/rcu/tree_stall.h:629 kernel/rcu/tree_stall.h:711 kernel/rcu/tree.c:3878 kernel/rcu/tree.c:2597)
[ 194.222153][ C71] update_process_times (arch/x86/include/asm/preempt.h:27 kernel/time/timer.c:1787)
[ 194.222158][ C71] tick_sched_handle+0x1f/0x80
[ 194.222162][ C71] tick_sched_timer (kernel/time/tick-sched.c:1426)
[ 194.222163][ C71] ? tick_sched_do_timer (kernel/time/tick-sched.c:1408)
[ 194.222164][ C71] __hrtimer_run_queues (kernel/time/hrtimer.c:1685 kernel/time/hrtimer.c:1749)
[ 194.222166][ C71] hrtimer_interrupt (kernel/time/hrtimer.c:1814)
[ 194.222168][ C71] __sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1086 arch/x86/kernel/apic/apic.c:1103)
[ 194.222172][ C71] sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1097 (discriminator 14))
[ 194.222176][ C71] </IRQ>
[ 194.222177][ C71] <TASK>
[ 194.222178][ C71] asm_sysvec_apic_timer_interrupt (arch/x86/include/asm/idtentry.h:638)
[ 194.222183][ C71] RIP: 0010:console_unlock (kernel/printk/printk.c:2719)
[ 194.222187][ C71] Code: 00 44 89 e6 48 c7 c7 40 72 c3 82 e8 99 3c 01 00 65 ff 0d d2 f9 eb 7e e9 dd fd ff ff e8 88 1b 00 00 4d 85 ed 0f 85 e3 00 00 00 <8b> 44 24 0c 85 c0 0f 84 e4 fc ff ff 31 d2 be a0 0a 00 00 48 c7 c7
All code
========
0: 00 44 89 e6 add %al,-0x1a(%rcx,%rcx,4)
4: 48 c7 c7 40 72 c3 82 mov $0xffffffff82c37240,%rdi
b: e8 99 3c 01 00 callq 0x13ca9
10: 65 ff 0d d2 f9 eb 7e decl %gs:0x7eebf9d2(%rip) # 0x7eebf9e9
17: e9 dd fd ff ff jmpq 0xfffffffffffffdf9
1c: e8 88 1b 00 00 callq 0x1ba9
21: 4d 85 ed test %r13,%r13
24: 0f 85 e3 00 00 00 jne 0x10d
2a:* 8b 44 24 0c mov 0xc(%rsp),%eax <-- trapping instruction
2e: 85 c0 test %eax,%eax
30: 0f 84 e4 fc ff ff je 0xfffffffffffffd1a
36: 31 d2 xor %edx,%edx
38: be a0 0a 00 00 mov $0xaa0,%esi
3d: 48 rex.W
3e: c7 .byte 0xc7
3f: c7 .byte 0xc7
Code starting with the faulting instruction
===========================================
0: 8b 44 24 0c mov 0xc(%rsp),%eax
4: 85 c0 test %eax,%eax
6: 0f 84 e4 fc ff ff je 0xfffffffffffffcf0
c: 31 d2 xor %edx,%edx
e: be a0 0a 00 00 mov $0xaa0,%esi
13: 48 rex.W
14: c7 .byte 0xc7
15: c7 .byte 0xc7
[ 194.222189][ C71] RSP: 0018:ffa00000222dbb50 EFLAGS: 00000206
[ 194.222191][ C71] RAX: 0000000000000000 RBX: ffffffff8357d8b8 RCX: 0000000000000008
[ 194.222192][ C71] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffffffff8357d8b8
[ 194.222193][ C71] RBP: 000000000000004d R08: 0000000000000300 R09: 0000000000000000
[ 194.222193][ C71] R10: 0000000000000001 R11: ffffffffc0598080 R12: 0000000000000000
[ 194.222194][ C71] R13: 0000000000000200 R14: 0000000000000000 R15: 0000000000000000
[ 194.222197][ C71] vprintk_emit (arch/x86/include/asm/preempt.h:85 kernel/printk/printk.c:2246)
[ 194.222200][ C71] ? kernfs_add_one (fs/kernfs/dir.c:766)
[ 194.222202][ C71] _printk (kernel/printk/printk.c:2270)
[ 194.222204][ C71] set_capacity_and_notify.cold (block/genhd.c:96)
[ 194.222207][ C71] loop_set_size+0x11/0x40 loop
[ 194.222211][ C71] loop_configure (drivers/block/loop.c:1057) loop
[ 194.222213][ C71] lo_ioctl (drivers/block/loop.c:1546) loop
[ 194.222216][ C71] blkdev_ioctl (block/ioctl.c:588)
[ 194.222221][ C71] ? do_sys_openat2 (fs/open.c:1222)
[ 194.222225][ C71] __x64_sys_ioctl (fs/ioctl.c:51 fs/ioctl.c:874 fs/ioctl.c:860 fs/ioctl.c:860)
[ 194.222230][ C71] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80)
[ 194.222231][ C71] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:113)
[ 194.222233][ C71] RIP: 0033:0x7f40e173b427
[ 194.222235][ C71] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
All code
========
0: 00 00 add %al,(%rax)
2: 90 nop
3: 48 8b 05 69 aa 0c 00 mov 0xcaa69(%rip),%rax # 0xcaa73
a: 64 c7 00 26 00 00 00 movl $0x26,%fs:(%rax)
11: 48 c7 c0 ff ff ff ff mov $0xffffffffffffffff,%rax
18: c3 retq
19: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
20: 00 00 00
23: b8 10 00 00 00 mov $0x10,%eax
28: 0f 05 syscall
2a:* 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax <-- trapping instruction
30: 73 01 jae 0x33
32: c3 retq
33: 48 8b 0d 39 aa 0c 00 mov 0xcaa39(%rip),%rcx # 0xcaa73
3a: f7 d8 neg %eax
3c: 64 89 01 mov %eax,%fs:(%rcx)
3f: 48 rex.W
Code starting with the faulting instruction
===========================================
0: 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax
6: 73 01 jae 0x9
8: c3 retq
9: 48 8b 0d 39 aa 0c 00 mov 0xcaa39(%rip),%rcx # 0xcaa49
10: f7 d8 neg %eax
12: 64 89 01 mov %eax,%fs:(%rcx)
15: 48 rex.W
[ 194.222236][ C71] RSP: 002b:00007ffdaea025a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 194.222238][ C71] RAX: ffffffffffffffda RBX: 0000000000000028 RCX: 00007f40e173b427
[ 194.222238][ C71] RDX: 0000000000000005 RSI: 0000000000004c00 RDI: 0000000000000007
[ 194.222239][ C71] RBP: 0000000000000006 R08: 0000000000000000 R09: 00007ffdaea02316
[ 194.222240][ C71] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000007
[ 194.222240][ C71] R13: 00000000000003e8 R14: 00007ffdaea06980 R15: 00007ffdaea038b0
[ 194.222242][ C71] </TASK>
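The stall duration in the trace above, t=100001 jiffies, corresponds to roughly 100 seconds of the CPU being stuck in the printk path under loop_configure(). The conversion below assumes HZ=1000; the kernel's actual CONFIG_HZ is not shown in the report.

```python
# Convert the reported RCU stall duration from jiffies to seconds.
# HZ=1000 is an assumption; the tested kernel's CONFIG_HZ may differ.
HZ = 1000
stall_jiffies = 100001  # the "t=" value from the rcu_sched stall report

stall_seconds = stall_jiffies / HZ
print(f"stall lasted about {stall_seconds:.0f} seconds")
```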
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
---
Thanks,
Oliver Sang
[sched/fair] b4d95a034c: phoronix-test-suite.tiobench.RandomWrite.64MB.8.mb_s -26.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -26.3% regression of phoronix-test-suite.tiobench.RandomWrite.64MB.8.mb_s due to commit:
commit: b4d95a034cffb1e4424874645549d3cac2de5c02 ("[PATCH 2/2] sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCs")
url: https://github.com/0day-ci/linux/commits/Mel-Gorman/Adjust-NUMA-imbalance...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 8c92606ab81086db00cbb73347d124b4eb169b7e
in testcase: phoronix-test-suite
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory
with following parameters:
test: tiobench-1.3.1
option_a: Random Write
option_b: 64MB
option_c: 8
cpufreq_governor: performance
ucode: 0x5003006
test-description: The Phoronix Test Suite is a comprehensive testing and benchmarking platform that provides an extensible framework to which new tests can easily be added.
test-url: http://www.phoronix-test-suite.com/
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/kconfig/option_a/option_b/option_c/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/Random Write/64MB/8/debian-x86_64-phoronix/lkp-csl-2sp8/tiobench-1.3.1/phoronix-test-suite/0x5003006
commit:
fee45dc486 ("sched/fair: Use weight of SD_NUMA domain in find_busiest_group")
b4d95a034c ("sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCs")
fee45dc486dd343a b4d95a034cffb1e442487464554
---------------- ---------------------------
%stddev %change %stddev
\ | \
190841 ± 4% -26.3% 140600 ± 3% phoronix-test-suite.tiobench.RandomWrite.64MB.8.mb_s
5.17 ±128% +1.3e+05% 6530 ± 64% proc-vmstat.numa_hint_faults
76503 ± 40% -25.3% 57153 ± 4% interrupts.CAL:Function_call_interrupts
4574 ± 50% -82.7% 791.14 ± 42% interrupts.CPU1.CAL:Function_call_interrupts
3.32 ± 41% +882.9% 32.65 ± 7% perf-stat.i.cpu-migrations
51246 ± 10% +104.4% 104748 ± 3% perf-stat.i.node-store-misses
1465 ± 21% -24.6% 1105 ± 13% numa-vmstat.node0.nr_active_anon
82443 ± 2% -47.6% 43196 ± 14% numa-vmstat.node0.nr_anon_pages
10866 ± 4% -8.3% 9965 ± 4% numa-vmstat.node0.nr_kernel_stack
14846 ± 15% -50.1% 7413 ± 43% numa-vmstat.node0.nr_mapped
1033 ± 2% -31.7% 706.14 ± 15% numa-vmstat.node0.nr_page_table_pages
1465 ± 21% -24.6% 1105 ± 13% numa-vmstat.node0.nr_zone_active_anon
8909 ± 26% +47.1% 13103 ± 20% numa-vmstat.node1.nr_active_file
8603 ± 15% +458.9% 48088 ± 11% numa-vmstat.node1.nr_anon_pages
8949 ± 5% +9.9% 9834 ± 4% numa-vmstat.node1.nr_kernel_stack
416.00 ± 7% +79.4% 746.14 ± 14% numa-vmstat.node1.nr_page_table_pages
8909 ± 26% +47.1% 13103 ± 20% numa-vmstat.node1.nr_zone_active_file
5844 ± 22% -24.3% 4426 ± 13% numa-meminfo.node0.Active(anon)
121357 ± 13% -45.1% 66683 ± 26% numa-meminfo.node0.AnonHugePages
329764 ± 2% -47.6% 172811 ± 14% numa-meminfo.node0.AnonPages
346450 -47.6% 181374 ± 14% numa-meminfo.node0.AnonPages.max
2050555 ± 13% -29.7% 1441806 ± 36% numa-meminfo.node0.Inactive
10866 ± 4% -8.3% 9966 ± 4% numa-meminfo.node0.KernelStack
59355 ± 15% -50.0% 29668 ± 43% numa-meminfo.node0.Mapped
2872827 ± 12% -20.3% 2288843 ± 24% numa-meminfo.node0.MemUsed
4133 ± 3% -31.6% 2829 ± 15% numa-meminfo.node0.PageTables
37735 ± 26% +47.9% 55814 ± 18% numa-meminfo.node1.Active
35639 ± 26% +47.1% 52416 ± 20% numa-meminfo.node1.Active(file)
5616 ± 27% +912.0% 56834 ± 44% numa-meminfo.node1.AnonHugePages
34408 ± 15% +459.0% 192349 ± 11% numa-meminfo.node1.AnonPages
39089 ± 19% +418.8% 202789 ± 12% numa-meminfo.node1.AnonPages.max
8950 ± 5% +9.9% 9833 ± 4% numa-meminfo.node1.KernelStack
1666 ± 6% +79.0% 2983 ± 14% numa-meminfo.node1.PageTables
4925 ± 8% -14.0% 4237 ± 8% slabinfo.kmalloc-cg-16.active_objs
4925 ± 8% -14.0% 4237 ± 8% slabinfo.kmalloc-cg-16.num_objs
3328 +11.4% 3709 ± 3% slabinfo.kmalloc-cg-192.active_objs
3328 +11.4% 3709 ± 3% slabinfo.kmalloc-cg-192.num_objs
2545 ± 3% +11.8% 2845 ± 3% slabinfo.kmalloc-cg-1k.active_objs
2545 ± 3% +11.8% 2845 ± 3% slabinfo.kmalloc-cg-1k.num_objs
1054 ± 6% +24.3% 1310 ± 3% slabinfo.kmalloc-cg-2k.active_objs
1054 ± 6% +24.3% 1310 ± 3% slabinfo.kmalloc-cg-2k.num_objs
4376 ± 5% +22.2% 5347 ± 2% slabinfo.kmalloc-cg-64.active_objs
4376 ± 5% +22.2% 5347 ± 2% slabinfo.kmalloc-cg-64.num_objs
2663 ± 7% +27.0% 3382 ± 3% slabinfo.kmalloc-cg-96.active_objs
2663 ± 7% +27.0% 3382 ± 3% slabinfo.kmalloc-cg-96.num_objs
1446 ± 9% -21.6% 1133 ± 7% slabinfo.task_group.active_objs
1446 ± 9% -21.6% 1133 ± 7% slabinfo.task_group.num_objs
14208 ± 5% -13.5% 12296 ± 3% slabinfo.vmap_area.active_objs
14213 ± 5% -13.5% 12297 ± 3% slabinfo.vmap_area.num_objs
8.25 ±110% -6.1 2.14 ±159% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
8.25 ±110% -6.1 2.14 ±159% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
8.25 ±110% -6.1 2.14 ±159% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
8.25 ±110% -6.1 2.14 ±159% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
8.25 ±110% -6.1 2.14 ±159% perf-profile.calltrace.cycles-pp.read
7.96 ±124% -5.5 2.49 ±158% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
6.44 ±111% -5.3 1.19 ±244% perf-profile.calltrace.cycles-pp.page_remove_rmap.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
6.40 ±108% -4.3 2.14 ±159% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.40 ±108% -4.3 2.14 ±159% perf-profile.calltrace.cycles-pp.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
6.40 ±108% -4.3 2.14 ±159% perf-profile.calltrace.cycles-pp.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read
6.40 ±108% -4.3 2.14 ±159% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read
5.41 ±105% -4.2 1.19 ±244% perf-profile.calltrace.cycles-pp.release_task.wait_task_zombie.do_wait.kernel_waitid.__do_sys_waitid
4.22 ±101% -4.2 0.00 perf-profile.calltrace.cycles-pp.__dentry_kill.shrink_dentry_list.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache
4.22 ±101% -4.2 0.00 perf-profile.calltrace.cycles-pp.d_invalidate.proc_invalidate_siblings_dcache.release_task.wait_task_zombie.do_wait
4.22 ±101% -4.2 0.00 perf-profile.calltrace.cycles-pp.proc_invalidate_siblings_dcache.release_task.wait_task_zombie.do_wait.kernel_waitid
4.22 ±101% -4.2 0.00 perf-profile.calltrace.cycles-pp.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache.release_task.wait_task_zombie
4.22 ±101% -4.2 0.00 perf-profile.calltrace.cycles-pp.shrink_dentry_list.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache.release_task
8.36 ±154% -4.0 4.36 ±179% perf-profile.calltrace.cycles-pp.mmput.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve
8.36 ±154% -4.0 4.36 ±179% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.begin_new_exec.load_elf_binary.exec_binprm
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.do_wait.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.41 ±105% -3.6 1.79 ±169% perf-profile.calltrace.cycles-pp.wait_task_zombie.do_wait.kernel_waitid.__do_sys_waitid.do_syscall_64
8.36 ±154% +0.1 8.49 ±177% perf-profile.calltrace.cycles-pp.begin_new_exec.load_elf_binary.exec_binprm.bprm_execve.do_execveat_common
9.47 ±137% -7.0 2.49 ±158% perf-profile.children.cycles-pp.unmap_vmas
8.25 ±110% -6.1 2.14 ±159% perf-profile.children.cycles-pp.ksys_read
8.25 ±110% -6.1 2.14 ±159% perf-profile.children.cycles-pp.vfs_read
8.25 ±110% -6.1 2.14 ±159% perf-profile.children.cycles-pp.seq_read_iter
8.25 ±110% -6.1 2.14 ±159% perf-profile.children.cycles-pp.read
7.96 ±124% -5.5 2.49 ±158% perf-profile.children.cycles-pp.zap_pte_range
7.96 ±124% -5.5 2.49 ±158% perf-profile.children.cycles-pp.unmap_page_range
6.44 ±111% -5.3 1.19 ±244% perf-profile.children.cycles-pp.page_remove_rmap
6.40 ±108% -4.3 2.14 ±159% perf-profile.children.cycles-pp.new_sync_read
6.40 ±108% -4.3 2.14 ±159% perf-profile.children.cycles-pp.proc_reg_read_iter
6.40 ±108% -4.3 2.14 ±159% perf-profile.children.cycles-pp.show_interrupts
5.41 ±105% -4.2 1.19 ±244% perf-profile.children.cycles-pp.release_task
4.22 ±101% -4.2 0.00 perf-profile.children.cycles-pp.__dentry_kill
4.22 ±101% -4.2 0.00 perf-profile.children.cycles-pp.d_invalidate
4.22 ±101% -4.2 0.00 perf-profile.children.cycles-pp.proc_invalidate_siblings_dcache
4.22 ±101% -4.2 0.00 perf-profile.children.cycles-pp.shrink_dcache_parent
4.22 ±101% -4.2 0.00 perf-profile.children.cycles-pp.shrink_dentry_list
5.41 ±105% -3.6 1.79 ±169% perf-profile.children.cycles-pp.__do_sys_waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.children.cycles-pp.waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.children.cycles-pp.kernel_waitid
5.41 ±105% -3.6 1.79 ±169% perf-profile.children.cycles-pp.do_wait
5.41 ±105% -3.6 1.79 ±169% perf-profile.children.cycles-pp.wait_task_zombie
8.36 ±154% +0.1 8.49 ±177% perf-profile.children.cycles-pp.begin_new_exec
3.82 ±101% -2.6 1.19 ±244% perf-profile.self.cycles-pp.page_remove_rmap
phoronix-test-suite.tiobench.RandomWrite.64MB.8.mb_s
220000 +------------------------------------------------------------------+
| + |
210000 |-+ :: |
200000 |-+ +.++ : +. .+ + + + + +.+ |
| + : + .+ : ++ +. + .+ : : + :+ +.+ + :: : : ++ |
190000 |.+ +: ++ + + + +.+ :: +.+ +: + + : :: :+ +|
180000 |-+ + + + + +.+ + + |
| |
170000 |-+ |
160000 |-+ |
| O |
150000 |-+ OO O O OO O O O O |
140000 |-O O O O OO OO O O |
| O O O O O O O |
130000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
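The headline -26.3% figure can be reproduced from the two per-commit means in the comparison table above (190841 mb_s for fee45dc486 vs 140600 mb_s for b4d95a034c):

```python
# Recompute the reported regression from the table's mean values.
base = 190841     # fee45dc486 mean, phoronix-test-suite tiobench mb_s
patched = 140600  # b4d95a034c mean, phoronix-test-suite tiobench mb_s

change_pct = (patched - base) / base * 100
print(f"{change_pct:.1f}%")  # matches the reported -26.3%
```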
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
Thanks,
Oliver Sang
[loop] c4249fd6a1: blktests.loop/006.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: c4249fd6a19722fa0ee4994da7c089594bbc413d ("[PATCH] loop: replace loop_validate_mutex with loop_validate_spinlock")
url: https://github.com/0day-ci/linux/commits/Tetsuo-Handa/loop-replace-loop_v...
base: https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next
in testcase: blktests
version: blktests-x86_64-f51ee53-1_20211126
with following parameters:
test: loop-group-03
ucode: 0xe2
on test machine: 4 threads Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2021-11-27 18:21:39 sed "s:^:loop/:" /lkp/benchmarks/blktests/tests/loop-group-03
2021-11-27 18:21:39 ./check loop/006 loop/007
loop/006 (change loop backing file while creating/removing another loop device)
loop/006 (change loop backing file while creating/removing another loop device) [failed]
runtime ... 31.068s
--- tests/loop/006.out 2021-11-26 23:12:29.000000000 +0000
+++ /lkp/benchmarks/blktests/results/nodev/loop/006.out.bad 2021-11-27 18:22:10.364779537 +0000
@@ -1,2 +1,3 @@
Running loop/006
+losetup: /dev/loop0: failed to set up loop device: Invalid argument
Test complete
loop/007 (update loop device capacity with filesystem)
loop/007 (update loop device capacity with filesystem) [passed]
runtime ... 0.253s
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
---
Thanks,
Oliver Sang
[sched/fair] 8d0920b981: stress-ng.sem.ops_per_sec 11.9% improvement
by kernel test robot
Greetings,
FYI, we noticed an 11.9% improvement of stress-ng.sem.ops_per_sec due to commit:
commit: 8d0920b981b634bfedfd0746451839d6f5d7f707 ("[PATCH] sched/fair: Fix detection of per-CPU kthreads waking a task")
url: https://github.com/0day-ci/linux/commits/Vincent-Donnefort/sched-fair-Fix...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 8c92606ab81086db00cbb73347d124b4eb169b7e
patch link: https://lore.kernel.org/lkml/20211124154239.3191366-1-vincent.donnefort@a...
in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with following parameters:
nr_threads: 100%
testtime: 60s
sc_pid_max: 4194304
class: scheduler
test: sem
cpufreq_governor: performance
ucode: 0x5003006
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/test/testcase/testtime/ucode:
scheduler/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/4194304/lkp-csl-2sp7/sem/stress-ng/60s/0x5003006
commit:
8c92606ab8 ("sched/cpuacct: Make user/system times in cpuacct.stat more precise")
8d0920b981 ("sched/fair: Fix detection of per-CPU kthreads waking a task")
8c92606ab81086db 8d0920b981b634bfedfd0746451
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.488e+08 +11.9% 5.023e+08 ± 2% stress-ng.sem.ops
7479868 +11.9% 8371718 ± 2% stress-ng.sem.ops_per_sec
44686811 ± 2% -43.4% 25289053 ± 9% stress-ng.time.involuntary_context_switches
19505 +13.5% 22136 ± 2% stress-ng.time.minor_page_faults
1099 +66.3% 1828 ± 4% stress-ng.time.percent_of_cpu_this_job_got
523.06 +44.7% 756.74 ± 4% stress-ng.time.system_time
159.55 ± 2% +136.8% 377.80 ± 16% stress-ng.time.user_time
2.244e+08 +11.9% 2.51e+08 ± 2% stress-ng.time.voluntary_context_switches
1.351e+08 +64.0% 2.215e+08 ± 4% cpuidle..usage
5.81 ± 44% +8.8 14.65 ± 7% mpstat.cpu.all.irq%
381.04 +2.0% 388.53 pmeter.Average_Active_Power
2457 ± 10% +26.5% 3109 ± 8% slabinfo.kmalloc-cg-16.active_objs
2457 ± 10% +26.5% 3109 ± 8% slabinfo.kmalloc-cg-16.num_objs
19769 ± 3% +18.6% 23443 ± 3% meminfo.Active
19514 ± 3% +18.8% 23188 ± 3% meminfo.Active(anon)
32952 ± 2% +15.2% 37965 ± 2% meminfo.Shmem
20.80 ± 8% +52.5% 31.71 ± 6% vmstat.procs.r
6251194 +22.7% 7669110 ± 2% vmstat.system.cs
1664035 -7.4% 1540404 vmstat.system.in
3221 ± 8% -49.1% 1640 ± 83% numa-vmstat.node0.nr_shmem
4430 ± 3% +23.6% 5476 ± 4% numa-vmstat.node1.nr_active_anon
798.40 ± 69% +400.5% 3996 ± 92% numa-vmstat.node1.nr_mapped
5018 ± 6% +56.4% 7850 ± 16% numa-vmstat.node1.nr_shmem
4430 ± 3% +23.6% 5476 ± 4% numa-vmstat.node1.nr_zone_active_anon
12885 ± 8% -49.1% 6563 ± 83% numa-meminfo.node0.Shmem
194184 ± 2% -18.6% 158144 ± 21% numa-meminfo.node0.Slab
17773 ± 3% +23.9% 22013 ± 4% numa-meminfo.node1.Active
17722 ± 3% +23.6% 21904 ± 4% numa-meminfo.node1.Active(anon)
3194 ± 69% +400.4% 15985 ± 92% numa-meminfo.node1.Mapped
1078298 ± 20% +87.5% 2021914 ± 56% numa-meminfo.node1.MemUsed
20072 ± 6% +56.5% 31404 ± 16% numa-meminfo.node1.Shmem
4878 ± 3% +18.8% 5797 ± 3% proc-vmstat.nr_active_anon
10268 +3.5% 10632 ± 2% proc-vmstat.nr_mapped
8237 ± 2% +15.2% 9491 ± 2% proc-vmstat.nr_shmem
4878 ± 3% +18.8% 5797 ± 3% proc-vmstat.nr_zone_active_anon
249939 ± 4% +58.8% 396814 ± 5% proc-vmstat.numa_pte_updates
11266 ± 3% +37.6% 15502 ± 4% proc-vmstat.pgactivate
351816 +2.0% 358879 proc-vmstat.pgfault
894.60 ± 2% +18.9% 1063 ± 3% turbostat.Avg_MHz
32.11 ± 2% +6.0 38.13 ± 3% turbostat.Busy%
55616227 ± 6% +255.0% 1.974e+08 ± 5% turbostat.C1
22.56 ± 5% +39.4 61.99 ± 2% turbostat.C1%
77386656 ± 3% -76.4% 18239341 ± 13% turbostat.C1E
47.00 ± 5% -35.6 11.41 ± 12% turbostat.C1E%
228.02 +3.2% 235.30 turbostat.PkgWatt
152.15 -2.7% 148.03 turbostat.RAMWatt
0.02 ± 78% -72.6% 0.01 ± 87% perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
3.05 ± 24% -73.8% 0.80 ± 67% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
19.32 ± 12% -32.8% 12.98 ± 29% perf-sched.total_wait_and_delay.max.ms
19.31 ± 12% -33.6% 12.83 ± 30% perf-sched.total_wait_time.max.ms
1.77 ± 6% -49.1% 0.90 ± 86% perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.16 ± 35% -53.7% 0.07 ± 67% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr_locked
3.52 ± 6% -49.0% 1.79 ± 86% perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
3.05 ± 24% -58.1% 1.28 ± 27% perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
18.83 ± 14% -39.7% 11.36 ± 31% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
1.75 ± 6% -48.9% 0.89 ± 86% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.16 ± 35% -53.7% 0.07 ± 67% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr_locked
3.50 ± 6% -48.9% 1.79 ± 86% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
18.83 ± 14% -42.6% 10.81 ± 36% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
15.68 ± 4% +31.4% 20.62 perf-stat.i.MPKI
9.107e+09 -7.1% 8.46e+09 ± 2% perf-stat.i.branch-instructions
2.07 -0.3 1.75 ± 2% perf-stat.i.branch-miss-rate%
1.826e+08 ± 2% -22.0% 1.424e+08 perf-stat.i.branch-misses
8.16 ± 18% -5.3 2.86 ± 12% perf-stat.i.cache-miss-rate%
61151992 ± 25% -64.4% 21751636 ± 15% perf-stat.i.cache-misses
7.555e+08 ± 6% +16.3% 8.789e+08 perf-stat.i.cache-references
6481277 +23.0% 7973471 ± 2% perf-stat.i.context-switches
1.86 +38.3% 2.57 ± 4% perf-stat.i.cpi
8.867e+10 ± 2% +22.7% 1.088e+11 ± 2% perf-stat.i.cpu-cycles
837057 ± 2% +208.6% 2583015 ± 3% perf-stat.i.cpu-migrations
1569 ± 17% +222.1% 5055 ± 15% perf-stat.i.cycles-between-cache-misses
0.04 ± 7% +0.1 0.09 ± 17% perf-stat.i.dTLB-load-miss-rate%
5151978 ± 6% +117.1% 11185554 ± 15% perf-stat.i.dTLB-load-misses
1.46e+10 ± 2% -16.6% 1.218e+10 ± 2% perf-stat.i.dTLB-loads
0.01 ± 6% +0.0 0.03 ± 11% perf-stat.i.dTLB-store-miss-rate%
1102322 ± 6% +127.4% 2506657 ± 9% perf-stat.i.dTLB-store-misses
7.926e+09 -6.3% 7.426e+09 ± 2% perf-stat.i.dTLB-stores
50435290 ± 2% +20.6% 60821528 ± 4% perf-stat.i.iTLB-load-misses
77671047 +17.3% 91078229 ± 3% perf-stat.i.iTLB-loads
4.716e+10 ± 2% -11.6% 4.168e+10 ± 2% perf-stat.i.instructions
1163 ± 2% -21.0% 918.78 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.54 -26.4% 0.40 ± 4% perf-stat.i.ipc
0.92 ± 2% +22.7% 1.13 ± 2% perf-stat.i.metric.GHz
1423 ± 9% -32.4% 961.53 ± 42% perf-stat.i.metric.K/sec
337.33 ± 2% -10.5% 301.77 ± 2% perf-stat.i.metric.M/sec
3728 +2.7% 3827 perf-stat.i.minor-faults
37250868 ± 24% -79.1% 7777013 ± 6% perf-stat.i.node-load-misses
1755814 ± 36% -74.2% 453701 ± 23% perf-stat.i.node-loads
9763477 ± 25% -50.5% 4836086 ± 9% perf-stat.i.node-store-misses
490531 ± 6% -30.9% 338718 ± 33% perf-stat.i.node-stores
3740 +2.6% 3839 perf-stat.i.page-faults
16.01 ± 3% +31.8% 21.09 perf-stat.overall.MPKI
2.00 -0.3 1.68 ± 2% perf-stat.overall.branch-miss-rate%
8.03 ± 19% -5.5 2.48 ± 15% perf-stat.overall.cache-miss-rate%
1.88 +39.0% 2.61 ± 4% perf-stat.overall.cpi
1519 ± 17% +237.1% 5123 ± 15% perf-stat.overall.cycles-between-cache-misses
0.04 ± 8% +0.1 0.09 ± 17% perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 7% +0.0 0.03 ± 11% perf-stat.overall.dTLB-store-miss-rate%
935.54 ± 2% -26.6% 686.48 ± 4% perf-stat.overall.instructions-per-iTLB-miss
0.53 -27.9% 0.38 ± 4% perf-stat.overall.ipc
8.962e+09 -7.1% 8.326e+09 ± 2% perf-stat.ps.branch-instructions
1.797e+08 ± 2% -22.0% 1.401e+08 perf-stat.ps.branch-misses
60177942 ± 25% -64.4% 21405117 ± 15% perf-stat.ps.cache-misses
7.434e+08 ± 6% +16.3% 8.649e+08 perf-stat.ps.cache-references
6377951 +23.0% 7846602 ± 2% perf-stat.ps.context-switches
8.726e+10 ± 2% +22.8% 1.071e+11 ± 2% perf-stat.ps.cpu-cycles
823714 ± 2% +208.6% 2541916 ± 3% perf-stat.ps.cpu-migrations
5069909 ± 6% +117.1% 11007308 ± 15% perf-stat.ps.dTLB-load-misses
1.436e+10 ± 2% -16.6% 1.199e+10 ± 2% perf-stat.ps.dTLB-loads
1084759 ± 6% +127.4% 2466721 ± 9% perf-stat.ps.dTLB-store-misses
7.8e+09 -6.3% 7.308e+09 ± 2% perf-stat.ps.dTLB-stores
49631270 ± 2% +20.6% 59853737 ± 4% perf-stat.ps.iTLB-load-misses
76432899 +17.3% 89629050 ± 3% perf-stat.ps.iTLB-loads
4.641e+10 ± 2% -11.6% 4.102e+10 ± 2% perf-stat.ps.instructions
3668 +2.6% 3764 perf-stat.ps.minor-faults
36657427 ± 24% -79.1% 7653097 ± 6% perf-stat.ps.node-load-misses
1727854 ± 36% -74.2% 446510 ± 23% perf-stat.ps.node-loads
9607970 ± 25% -50.5% 4758943 ± 9% perf-stat.ps.node-store-misses
482731 ± 6% -30.9% 333342 ± 33% perf-stat.ps.node-stores
3680 +2.6% 3776 perf-stat.ps.page-faults
2.934e+12 ± 2% -11.7% 2.591e+12 ± 3% perf-stat.total.instructions
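Each comparison line above follows the same fixed layout: base value with an optional "± stddev%", a delta (a percentage for ratios, or an absolute point change for metrics that are already percentages), the new value with an optional "± stddev%", and the metric name. As a minimal sketch of how such lines can be machine-read (a hypothetical helper for illustration, not part of the lkp/0day tooling):

```python
import re

# One lkp comparison line: "<base> [± s%]  <delta>[%]  <new> [± s%]  <metric>"
LINE_RE = re.compile(
    r"^\s*(?P<base>[\d.e+]+)\s*(?:±\s*(?P<base_sd>\d+)%)?"
    r"\s+(?P<delta>[+-][\d.]+)(?P<delta_pct>%)?"
    r"\s+(?P<new>[\d.e+]+)\s*(?:±\s*(?P<new_sd>\d+)%)?"
    r"\s+(?P<metric>\S+)\s*$"
)

def parse_lkp_line(line):
    """Parse one comparison line; returns None if the line does not match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return {
        "base": float(m["base"]),
        "new": float(m["new"]),
        "delta": float(m["delta"]),
        # Metrics that are themselves percentages (e.g. turbostat.C1%)
        # report an absolute point change, with no trailing '%'.
        "delta_is_percent": m["delta_pct"] is not None,
        "base_stddev_pct": int(m["base_sd"]) if m["base_sd"] else 0,
        "new_stddev_pct": int(m["new_sd"]) if m["new_sd"] else 0,
        "metric": m["metric"],
    }
```

For example, `parse_lkp_line("894.60 ± 2% +18.9% 1063 ± 3% turbostat.Avg_MHz")` yields the base/new values, the +18.9% change, the per-sample stddev percentages, and the metric name.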
22391 ± 4% -32.8% 15040 ± 9% softirqs.CPU0.SCHED
17344 ± 3% -31.3% 11909 ± 12% softirqs.CPU1.SCHED
16640 ± 6% -34.7% 10861 ± 11% softirqs.CPU10.SCHED
16417 ± 4% -33.4% 10931 ± 10% softirqs.CPU11.SCHED
16837 ± 8% -36.9% 10630 ± 9% softirqs.CPU12.SCHED
16286 ± 2% -30.8% 11267 ± 15% softirqs.CPU13.SCHED
16440 ± 3% -32.9% 11037 ± 9% softirqs.CPU14.SCHED
16151 ± 4% -34.1% 10639 ± 9% softirqs.CPU15.SCHED
16090 ± 3% -33.0% 10777 ± 8% softirqs.CPU16.SCHED
16372 ± 3% -31.8% 11158 ± 10% softirqs.CPU17.SCHED
16231 ± 2% -32.1% 11025 ± 8% softirqs.CPU18.SCHED
15929 ± 4% -32.7% 10727 ± 10% softirqs.CPU19.SCHED
17549 ± 5% -34.1% 11569 ± 9% softirqs.CPU2.SCHED
16270 ± 3% -33.2% 10871 ± 11% softirqs.CPU20.SCHED
16374 ± 4% -33.6% 10870 ± 7% softirqs.CPU21.SCHED
16472 ± 3% -33.1% 11021 ± 12% softirqs.CPU22.SCHED
16405 ± 2% -32.2% 11122 ± 12% softirqs.CPU23.SCHED
16580 ± 4% -36.2% 10578 ± 12% softirqs.CPU24.SCHED
15730 -32.6% 10598 ± 11% softirqs.CPU25.SCHED
15877 ± 2% -32.8% 10672 ± 10% softirqs.CPU26.SCHED
15912 ± 2% -31.3% 10925 ± 11% softirqs.CPU27.SCHED
15896 ± 2% -31.7% 10863 ± 11% softirqs.CPU28.SCHED
16045 -31.8% 10948 ± 10% softirqs.CPU29.SCHED
16489 ± 3% -31.6% 11278 ± 10% softirqs.CPU3.SCHED
15868 ± 2% -32.9% 10646 ± 11% softirqs.CPU30.SCHED
15988 ± 3% -33.2% 10687 ± 10% softirqs.CPU31.SCHED
15765 ± 2% -32.1% 10707 ± 12% softirqs.CPU32.SCHED
15797 -34.1% 10417 ± 11% softirqs.CPU33.SCHED
15921 -31.6% 10885 ± 13% softirqs.CPU34.SCHED
15881 ± 2% -31.7% 10852 ± 14% softirqs.CPU35.SCHED
16352 ± 7% -34.2% 10762 ± 14% softirqs.CPU36.SCHED
15932 ± 2% -34.1% 10493 ± 12% softirqs.CPU37.SCHED
15799 -32.2% 10707 ± 10% softirqs.CPU38.SCHED
15935 ± 2% -32.7% 10721 ± 7% softirqs.CPU39.SCHED
16240 ± 3% -35.7% 10447 ± 10% softirqs.CPU40.SCHED
16009 ± 2% -33.0% 10730 ± 13% softirqs.CPU41.SCHED
16160 ± 4% -35.7% 10387 ± 12% softirqs.CPU42.SCHED
15874 -34.5% 10403 ± 12% softirqs.CPU43.SCHED
15851 -34.2% 10431 ± 10% softirqs.CPU44.SCHED
15825 ± 3% -34.3% 10393 ± 15% softirqs.CPU45.SCHED
15785 ± 2% -32.3% 10689 ± 12% softirqs.CPU47.SCHED
16028 ± 3% -35.4% 10348 ± 12% softirqs.CPU48.SCHED
15899 ± 4% -31.2% 10939 ± 9% softirqs.CPU49.SCHED
16483 ± 3% -32.4% 11141 ± 12% softirqs.CPU5.SCHED
16548 -33.8% 10953 ± 11% softirqs.CPU50.SCHED
16411 ± 4% -31.4% 11265 ± 14% softirqs.CPU51.SCHED
15875 ± 2% -32.8% 10675 ± 9% softirqs.CPU52.SCHED
16317 ± 3% -32.1% 11079 ± 12% softirqs.CPU53.SCHED
16070 ± 2% -30.5% 11162 ± 7% softirqs.CPU54.SCHED
16195 ± 2% -32.7% 10893 ± 10% softirqs.CPU55.SCHED
16155 ± 2% -33.9% 10680 ± 10% softirqs.CPU56.SCHED
15984 ± 4% -32.8% 10739 ± 7% softirqs.CPU57.SCHED
16338 ± 3% -32.8% 10983 ± 13% softirqs.CPU58.SCHED
16604 ± 2% -35.2% 10755 ± 11% softirqs.CPU59.SCHED
16357 ± 4% -32.8% 10988 ± 12% softirqs.CPU6.SCHED
16878 ± 8% -35.2% 10934 ± 8% softirqs.CPU60.SCHED
16570 ± 4% -35.5% 10693 ± 8% softirqs.CPU61.SCHED
16652 ± 5% -35.6% 10727 ± 9% softirqs.CPU62.SCHED
16652 ± 5% -34.1% 10972 ± 9% softirqs.CPU63.SCHED
16377 ± 3% -34.0% 10811 ± 12% softirqs.CPU64.SCHED
16324 ± 3% -31.8% 11128 ± 12% softirqs.CPU65.SCHED
16442 ± 3% -32.4% 11111 ± 11% softirqs.CPU66.SCHED
16730 ± 5% -32.2% 11337 ± 12% softirqs.CPU67.SCHED
16409 ± 2% -33.4% 10934 ± 9% softirqs.CPU68.SCHED
16157 ± 2% -33.1% 10815 ± 10% softirqs.CPU69.SCHED
16004 ± 2% -32.3% 10831 ± 9% softirqs.CPU7.SCHED
16374 ± 3% -31.9% 11157 ± 10% softirqs.CPU70.SCHED
16319 -32.8% 10968 ± 9% softirqs.CPU71.SCHED
16194 ± 2% -35.4% 10455 ± 10% softirqs.CPU72.SCHED
15911 -34.8% 10370 ± 11% softirqs.CPU73.SCHED
17109 ± 8% -38.5% 10530 ± 12% softirqs.CPU74.SCHED
15953 ± 2% -31.9% 10864 ± 12% softirqs.CPU75.SCHED
15986 ± 2% -33.1% 10691 ± 13% softirqs.CPU76.SCHED
16008 -33.3% 10669 ± 11% softirqs.CPU77.SCHED
16064 ± 2% -33.4% 10705 ± 13% softirqs.CPU78.SCHED
16202 ± 5% -35.0% 10526 ± 11% softirqs.CPU79.SCHED
16228 ± 3% -32.0% 11027 ± 8% softirqs.CPU8.SCHED
15833 ± 2% -32.8% 10636 ± 12% softirqs.CPU80.SCHED
16463 ± 3% -36.3% 10493 ± 10% softirqs.CPU81.SCHED
15911 ± 2% -32.0% 10820 ± 11% softirqs.CPU82.SCHED
16034 ± 2% -34.1% 10569 ± 10% softirqs.CPU83.SCHED
15904 -33.7% 10543 ± 12% softirqs.CPU84.SCHED
15722 -33.2% 10496 ± 12% softirqs.CPU85.SCHED
16013 ± 4% -32.9% 10740 ± 11% softirqs.CPU86.SCHED
16004 ± 2% -30.6% 11114 ± 18% softirqs.CPU87.SCHED
16698 ± 10% -33.8% 11054 ± 14% softirqs.CPU88.SCHED
16043 ± 3% -33.6% 10651 ± 11% softirqs.CPU89.SCHED
16303 ± 3% -33.5% 10835 ± 10% softirqs.CPU9.SCHED
15888 ± 2% -33.1% 10633 ± 11% softirqs.CPU90.SCHED
16115 ± 5% -34.7% 10516 ± 10% softirqs.CPU91.SCHED
16135 ± 3% -34.4% 10585 ± 10% softirqs.CPU92.SCHED
15604 ± 3% -31.2% 10735 ± 8% softirqs.CPU93.SCHED
15747 ± 2% -32.1% 10696 ± 10% softirqs.CPU94.SCHED
16121 ± 2% -35.3% 10435 ± 10% softirqs.CPU95.SCHED
1560425 -33.2% 1042262 ± 10% softirqs.SCHED
10914 ± 12% -46.2% 5872 ± 16% interrupts.CPU0.RES:Rescheduling_interrupts
10326 ± 7% -47.0% 5473 ± 17% interrupts.CPU1.RES:Rescheduling_interrupts
3165 ± 18% +57.0% 4969 ± 2% interrupts.CPU10.NMI:Non-maskable_interrupts
3165 ± 18% +57.0% 4969 ± 2% interrupts.CPU10.PMI:Performance_monitoring_interrupts
11496 ± 16% -52.6% 5449 ± 15% interrupts.CPU10.RES:Rescheduling_interrupts
10368 ± 15% -48.4% 5352 ± 17% interrupts.CPU11.RES:Rescheduling_interrupts
10655 ± 14% -48.3% 5509 ± 15% interrupts.CPU12.RES:Rescheduling_interrupts
3263 ± 22% +52.7% 4981 ± 2% interrupts.CPU13.NMI:Non-maskable_interrupts
3263 ± 22% +52.7% 4981 ± 2% interrupts.CPU13.PMI:Performance_monitoring_interrupts
10447 ± 12% -46.2% 5623 ± 21% interrupts.CPU13.RES:Rescheduling_interrupts
10601 ± 12% -49.6% 5346 ± 15% interrupts.CPU14.RES:Rescheduling_interrupts
10578 ± 15% -49.4% 5354 ± 16% interrupts.CPU15.RES:Rescheduling_interrupts
10673 ± 11% -49.7% 5366 ± 16% interrupts.CPU16.RES:Rescheduling_interrupts
2596 ± 27% +91.5% 4970 ± 2% interrupts.CPU17.NMI:Non-maskable_interrupts
2596 ± 27% +91.5% 4970 ± 2% interrupts.CPU17.PMI:Performance_monitoring_interrupts
10042 ± 16% -47.9% 5234 ± 16% interrupts.CPU17.RES:Rescheduling_interrupts
10394 ± 12% -45.6% 5651 ± 17% interrupts.CPU18.RES:Rescheduling_interrupts
9978 ± 14% -46.1% 5375 ± 15% interrupts.CPU19.RES:Rescheduling_interrupts
11767 ± 21% -53.1% 5519 ± 16% interrupts.CPU2.RES:Rescheduling_interrupts
10646 ± 14% -49.4% 5390 ± 15% interrupts.CPU20.RES:Rescheduling_interrupts
2567 ± 35% +79.0% 4595 ± 18% interrupts.CPU21.NMI:Non-maskable_interrupts
2567 ± 35% +79.0% 4595 ± 18% interrupts.CPU21.PMI:Performance_monitoring_interrupts
10407 ± 13% -48.4% 5368 ± 16% interrupts.CPU21.RES:Rescheduling_interrupts
10089 ± 14% -47.2% 5329 ± 16% interrupts.CPU22.RES:Rescheduling_interrupts
2686 ± 38% +71.3% 4602 ± 18% interrupts.CPU23.NMI:Non-maskable_interrupts
2686 ± 38% +71.3% 4602 ± 18% interrupts.CPU23.PMI:Performance_monitoring_interrupts
10006 ± 14% -48.1% 5193 ± 16% interrupts.CPU23.RES:Rescheduling_interrupts
2871 ± 34% +41.6% 4065 ± 31% interrupts.CPU24.NMI:Non-maskable_interrupts
2871 ± 34% +41.6% 4065 ± 31% interrupts.CPU24.PMI:Performance_monitoring_interrupts
12976 ± 17% -56.5% 5641 ± 14% interrupts.CPU24.RES:Rescheduling_interrupts
2869 ± 35% +63.4% 4687 ± 18% interrupts.CPU25.NMI:Non-maskable_interrupts
2869 ± 35% +63.4% 4687 ± 18% interrupts.CPU25.PMI:Performance_monitoring_interrupts
12226 ± 17% -58.1% 5123 ± 11% interrupts.CPU25.RES:Rescheduling_interrupts
2926 ± 32% +72.6% 5049 interrupts.CPU26.NMI:Non-maskable_interrupts
2926 ± 32% +72.6% 5049 interrupts.CPU26.PMI:Performance_monitoring_interrupts
11990 ± 16% -56.6% 5203 ± 12% interrupts.CPU26.RES:Rescheduling_interrupts
3184 ± 31% +59.4% 5075 interrupts.CPU27.NMI:Non-maskable_interrupts
3184 ± 31% +59.4% 5075 interrupts.CPU27.PMI:Performance_monitoring_interrupts
11858 ± 17% -56.6% 5146 ± 12% interrupts.CPU27.RES:Rescheduling_interrupts
3673 ± 21% +37.9% 5066 interrupts.CPU28.NMI:Non-maskable_interrupts
3673 ± 21% +37.9% 5066 interrupts.CPU28.PMI:Performance_monitoring_interrupts
12167 ± 18% -58.4% 5060 ± 12% interrupts.CPU28.RES:Rescheduling_interrupts
3640 ± 23% +38.9% 5058 interrupts.CPU29.NMI:Non-maskable_interrupts
3640 ± 23% +38.9% 5058 interrupts.CPU29.PMI:Performance_monitoring_interrupts
11866 ± 16% -58.9% 4873 ± 11% interrupts.CPU29.RES:Rescheduling_interrupts
10672 ± 9% -48.8% 5465 ± 17% interrupts.CPU3.RES:Rescheduling_interrupts
4128 ± 2% +22.8% 5068 interrupts.CPU30.NMI:Non-maskable_interrupts
4128 ± 2% +22.8% 5068 interrupts.CPU30.PMI:Performance_monitoring_interrupts
11897 ± 19% -58.3% 4957 ± 9% interrupts.CPU30.RES:Rescheduling_interrupts
4070 +24.3% 5058 interrupts.CPU31.NMI:Non-maskable_interrupts
4070 +24.3% 5058 interrupts.CPU31.PMI:Performance_monitoring_interrupts
11771 ± 15% -56.7% 5096 ± 10% interrupts.CPU31.RES:Rescheduling_interrupts
12028 ± 19% -57.6% 5103 ± 11% interrupts.CPU32.RES:Rescheduling_interrupts
11789 ± 16% -57.4% 5023 ± 12% interrupts.CPU33.RES:Rescheduling_interrupts
11954 ± 17% -58.2% 4998 ± 11% interrupts.CPU34.RES:Rescheduling_interrupts
11922 ± 16% -57.9% 5020 ± 11% interrupts.CPU35.RES:Rescheduling_interrupts
12005 ± 16% -58.1% 5034 ± 11% interrupts.CPU36.RES:Rescheduling_interrupts
12348 ± 14% -57.4% 5257 ± 11% interrupts.CPU37.RES:Rescheduling_interrupts
12417 ± 16% -58.7% 5129 ± 12% interrupts.CPU38.RES:Rescheduling_interrupts
12090 ± 17% -58.0% 5076 ± 11% interrupts.CPU39.RES:Rescheduling_interrupts
9627 ± 14% -44.4% 5351 ± 17% interrupts.CPU4.RES:Rescheduling_interrupts
11957 ± 18% -58.6% 4947 ± 10% interrupts.CPU40.RES:Rescheduling_interrupts
12107 ± 17% -57.9% 5091 ± 14% interrupts.CPU41.RES:Rescheduling_interrupts
12168 ± 19% -56.3% 5319 ± 11% interrupts.CPU42.RES:Rescheduling_interrupts
11956 ± 18% -57.6% 5063 ± 12% interrupts.CPU43.RES:Rescheduling_interrupts
12105 ± 17% -57.5% 5149 ± 11% interrupts.CPU44.RES:Rescheduling_interrupts
11557 ± 16% -56.2% 5064 ± 11% interrupts.CPU45.RES:Rescheduling_interrupts
12108 ± 12% -58.8% 4985 ± 12% interrupts.CPU46.RES:Rescheduling_interrupts
11660 ± 18% -56.7% 5046 ± 14% interrupts.CPU47.RES:Rescheduling_interrupts
10560 ± 12% -47.5% 5542 ± 14% interrupts.CPU48.RES:Rescheduling_interrupts
10652 ± 9% -48.6% 5474 ± 15% interrupts.CPU49.RES:Rescheduling_interrupts
10503 ± 9% -48.9% 5366 ± 16% interrupts.CPU5.RES:Rescheduling_interrupts
10515 ± 9% -48.7% 5389 ± 17% interrupts.CPU50.RES:Rescheduling_interrupts
10514 ± 8% -47.8% 5486 ± 15% interrupts.CPU51.RES:Rescheduling_interrupts
11152 ± 13% -52.2% 5336 ± 17% interrupts.CPU52.RES:Rescheduling_interrupts
10148 ± 10% -48.1% 5269 ± 18% interrupts.CPU53.RES:Rescheduling_interrupts
10387 ± 9% -49.1% 5290 ± 12% interrupts.CPU54.RES:Rescheduling_interrupts
3440 ± 4% +44.1% 4955 ± 2% interrupts.CPU55.NMI:Non-maskable_interrupts
3440 ± 4% +44.1% 4955 ± 2% interrupts.CPU55.PMI:Performance_monitoring_interrupts
10878 ± 12% -50.0% 5443 ± 16% interrupts.CPU55.RES:Rescheduling_interrupts
3694 ± 7% +34.3% 4960 ± 2% interrupts.CPU56.NMI:Non-maskable_interrupts
3694 ± 7% +34.3% 4960 ± 2% interrupts.CPU56.PMI:Performance_monitoring_interrupts
10650 ± 8% -49.2% 5405 ± 17% interrupts.CPU56.RES:Rescheduling_interrupts
3609 ± 4% +37.4% 4957 ± 2% interrupts.CPU57.NMI:Non-maskable_interrupts
3609 ± 4% +37.4% 4957 ± 2% interrupts.CPU57.PMI:Performance_monitoring_interrupts
10341 ± 12% -48.7% 5304 ± 18% interrupts.CPU57.RES:Rescheduling_interrupts
3224 ± 25% +54.1% 4967 ± 2% interrupts.CPU58.NMI:Non-maskable_interrupts
3224 ± 25% +54.1% 4967 ± 2% interrupts.CPU58.PMI:Performance_monitoring_interrupts
11137 ± 11% -51.5% 5397 ± 17% interrupts.CPU58.RES:Rescheduling_interrupts
10332 ± 13% -49.5% 5216 ± 17% interrupts.CPU59.RES:Rescheduling_interrupts
10312 ± 14% -48.3% 5329 ± 18% interrupts.CPU6.RES:Rescheduling_interrupts
11594 ± 26% -53.3% 5409 ± 16% interrupts.CPU60.RES:Rescheduling_interrupts
11154 ± 15% -50.6% 5505 ± 16% interrupts.CPU61.RES:Rescheduling_interrupts
10692 ± 12% -48.1% 5546 ± 18% interrupts.CPU62.RES:Rescheduling_interrupts
10114 ± 11% -47.3% 5333 ± 16% interrupts.CPU63.RES:Rescheduling_interrupts
10960 ± 11% -51.5% 5316 ± 16% interrupts.CPU64.RES:Rescheduling_interrupts
3800 ± 8% +30.6% 4965 ± 2% interrupts.CPU65.NMI:Non-maskable_interrupts
3800 ± 8% +30.6% 4965 ± 2% interrupts.CPU65.PMI:Performance_monitoring_interrupts
10451 ± 14% -49.8% 5249 ± 16% interrupts.CPU65.RES:Rescheduling_interrupts
10984 ± 10% -49.3% 5571 ± 16% interrupts.CPU66.RES:Rescheduling_interrupts
3743 ± 10% +32.6% 4965 ± 2% interrupts.CPU67.NMI:Non-maskable_interrupts
3743 ± 10% +32.6% 4965 ± 2% interrupts.CPU67.PMI:Performance_monitoring_interrupts
10871 ± 15% -49.8% 5458 ± 15% interrupts.CPU67.RES:Rescheduling_interrupts
10692 ± 11% -49.8% 5371 ± 15% interrupts.CPU68.RES:Rescheduling_interrupts
10518 ± 11% -48.2% 5453 ± 17% interrupts.CPU69.RES:Rescheduling_interrupts
10411 ± 12% -47.1% 5507 ± 16% interrupts.CPU7.RES:Rescheduling_interrupts
10435 ± 13% -48.8% 5345 ± 16% interrupts.CPU70.RES:Rescheduling_interrupts
10532 ± 14% -50.1% 5254 ± 18% interrupts.CPU71.RES:Rescheduling_interrupts
12972 ± 17% -57.0% 5582 ± 13% interrupts.CPU72.RES:Rescheduling_interrupts
4086 ± 2% +23.5% 5049 ± 2% interrupts.CPU73.NMI:Non-maskable_interrupts
4086 ± 2% +23.5% 5049 ± 2% interrupts.CPU73.PMI:Performance_monitoring_interrupts
12049 ± 14% -56.2% 5282 ± 11% interrupts.CPU73.RES:Rescheduling_interrupts
13200 ± 28% -60.5% 5216 ± 12% interrupts.CPU74.RES:Rescheduling_interrupts
3964 ± 3% +28.0% 5075 ± 2% interrupts.CPU75.NMI:Non-maskable_interrupts
3964 ± 3% +28.0% 5075 ± 2% interrupts.CPU75.PMI:Performance_monitoring_interrupts
11601 ± 15% -55.4% 5175 ± 12% interrupts.CPU75.RES:Rescheduling_interrupts
4088 ± 2% +23.9% 5066 interrupts.CPU76.NMI:Non-maskable_interrupts
4088 ± 2% +23.9% 5066 interrupts.CPU76.PMI:Performance_monitoring_interrupts
12116 ± 16% -58.1% 5071 ± 12% interrupts.CPU76.RES:Rescheduling_interrupts
4027 ± 4% +25.7% 5064 interrupts.CPU77.NMI:Non-maskable_interrupts
4027 ± 4% +25.7% 5064 interrupts.CPU77.PMI:Performance_monitoring_interrupts
11861 ± 17% -58.5% 4926 ± 12% interrupts.CPU77.RES:Rescheduling_interrupts
4129 ± 2% +22.9% 5074 interrupts.CPU78.NMI:Non-maskable_interrupts
4129 ± 2% +22.9% 5074 interrupts.CPU78.PMI:Performance_monitoring_interrupts
11823 ± 19% -57.8% 4994 ± 12% interrupts.CPU78.RES:Rescheduling_interrupts
4072 +24.2% 5059 interrupts.CPU79.NMI:Non-maskable_interrupts
4072 +24.2% 5059 interrupts.CPU79.PMI:Performance_monitoring_interrupts
11875 ± 18% -56.8% 5132 ± 11% interrupts.CPU79.RES:Rescheduling_interrupts
3286 ± 19% +50.8% 4956 ± 2% interrupts.CPU8.NMI:Non-maskable_interrupts
3286 ± 19% +50.8% 4956 ± 2% interrupts.CPU8.PMI:Performance_monitoring_interrupts
10577 ± 10% -48.9% 5400 ± 16% interrupts.CPU8.RES:Rescheduling_interrupts
4076 +23.9% 5050 interrupts.CPU80.NMI:Non-maskable_interrupts
4076 +23.9% 5050 interrupts.CPU80.PMI:Performance_monitoring_interrupts
11729 ± 18% -56.2% 5140 ± 11% interrupts.CPU80.RES:Rescheduling_interrupts
4101 ± 3% +23.1% 5046 ± 2% interrupts.CPU81.NMI:Non-maskable_interrupts
4101 ± 3% +23.1% 5046 ± 2% interrupts.CPU81.PMI:Performance_monitoring_interrupts
11689 ± 17% -55.8% 5167 ± 11% interrupts.CPU81.RES:Rescheduling_interrupts
4064 ± 2% +25.0% 5078 ± 2% interrupts.CPU82.NMI:Non-maskable_interrupts
4064 ± 2% +25.0% 5078 ± 2% interrupts.CPU82.PMI:Performance_monitoring_interrupts
11891 ± 18% -57.5% 5058 ± 12% interrupts.CPU82.RES:Rescheduling_interrupts
4090 ± 2% +23.9% 5069 interrupts.CPU83.NMI:Non-maskable_interrupts
4090 ± 2% +23.9% 5069 interrupts.CPU83.PMI:Performance_monitoring_interrupts
12084 ± 18% -59.3% 4922 ± 11% interrupts.CPU83.RES:Rescheduling_interrupts
3969 ± 3% +27.7% 5067 interrupts.CPU84.NMI:Non-maskable_interrupts
3969 ± 3% +27.7% 5067 interrupts.CPU84.PMI:Performance_monitoring_interrupts
11904 ± 17% -56.8% 5142 ± 12% interrupts.CPU84.RES:Rescheduling_interrupts
12313 ± 14% -57.0% 5293 ± 10% interrupts.CPU85.RES:Rescheduling_interrupts
12290 ± 16% -57.7% 5199 ± 11% interrupts.CPU86.RES:Rescheduling_interrupts
11551 ± 13% -56.0% 5084 ± 10% interrupts.CPU87.RES:Rescheduling_interrupts
12229 ± 17% -59.0% 5011 ± 12% interrupts.CPU88.RES:Rescheduling_interrupts
11836 ± 15% -58.6% 4904 ± 10% interrupts.CPU89.RES:Rescheduling_interrupts
10371 ± 14% -48.0% 5396 ± 16% interrupts.CPU9.RES:Rescheduling_interrupts
12005 ± 18% -56.1% 5271 ± 11% interrupts.CPU90.RES:Rescheduling_interrupts
11714 ± 14% -55.5% 5217 ± 12% interrupts.CPU91.RES:Rescheduling_interrupts
11997 ± 16% -57.8% 5063 ± 11% interrupts.CPU92.RES:Rescheduling_interrupts
12042 ± 16% -58.1% 5051 ± 13% interrupts.CPU93.RES:Rescheduling_interrupts
12016 ± 16% -58.2% 5027 ± 11% interrupts.CPU94.RES:Rescheduling_interrupts
12255 ± 17% -60.6% 4824 ± 13% interrupts.CPU95.RES:Rescheduling_interrupts
351763 ± 3% +26.9% 446394 ± 4% interrupts.NMI:Non-maskable_interrupts
351763 ± 3% +26.9% 446394 ± 4% interrupts.PMI:Performance_monitoring_interrupts
1086124 ± 13% -53.5% 504773 ± 13% interrupts.RES:Rescheduling_interrupts
1915 ± 6% +24.5% 2384 ± 8% interrupts.TLB:TLB_shootdowns
17.37 ± 6% -16.7 0.71 ± 21% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
16.97 ± 7% -16.6 0.40 ± 87% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.do_nanosleep
30.20 ± 3% -15.8 14.37 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
27.12 ± 4% -15.8 11.34 ± 2% perf-profile.calltrace.cycles-pp.schedule.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
26.99 ± 4% -15.7 11.26 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
29.22 ± 3% -15.7 13.51 ± 2% perf-profile.calltrace.cycles-pp.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe
29.58 ± 3% -15.7 13.87 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
33.34 ± 2% -15.2 18.10 ± 2% perf-profile.calltrace.cycles-pp.__nanosleep
31.51 ± 3% -15.2 16.30 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__nanosleep
31.23 ± 3% -15.2 16.07 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
15.22 ± 7% -15.1 0.15 ±158% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule
10.72 ± 8% -10.7 0.00 perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
10.49 ± 8% -10.5 0.00 perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
5.71 ± 6% -5.5 0.17 ±158% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.25 ± 6% -5.1 0.16 ±158% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
3.43 -1.0 2.41 perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
2.40 -0.4 1.96 perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
2.35 -0.4 1.92 perf-profile.calltrace.cycles-pp.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.43 -0.2 1.25 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_start_range_ns.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.42 ± 44% +0.2 0.65 ± 3% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.55 ± 45% +0.4 0.92 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.08 ± 7% +0.4 1.47 ± 4% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
0.78 ± 44% +0.4 1.18 ± 4% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.46 ± 45% +0.5 0.91 ± 4% perf-profile.calltrace.cycles-pp.restore_fpregs_from_fpstate.switch_fpu_return.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.96 ± 44% +0.5 1.45 ± 4% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.48 ± 50% +0.5 1.02 ± 7% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.66 ± 45% +0.6 1.25 ± 3% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
0.62 ± 44% +0.6 1.23 ± 3% perf-profile.calltrace.cycles-pp.switch_fpu_return.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.33 ± 82% +0.7 0.98 ± 8% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule_idle.do_idle
0.00 +0.7 0.65 ± 13% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule
0.00 +0.7 0.67 ± 13% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule.do_nanosleep
0.35 ± 70% +0.7 1.03 ± 4% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
0.88 ± 4% +0.7 1.57 ± 40% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__nanosleep
0.22 ±122% +0.7 0.95 ± 8% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule.schedule_idle
0.00 +0.7 0.74 ± 4% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.00 +0.8 0.84 ± 13% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule_idle.do_idle
0.00 +0.9 0.87 ± 5% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
0.00 +1.0 1.01 ± 11% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.25 ±100% +1.3 1.51 ± 15% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
0.00 +1.3 1.28 ± 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
0.00 +1.4 1.37 ± 4% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle
0.00 +1.4 1.37 ± 4% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state
0.00 +1.4 1.41 ± 4% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state.cpuidle_enter
0.00 +1.4 1.44 ± 4% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle
0.11 ±200% +1.5 1.57 ± 10% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch
0.11 ±200% +1.5 1.58 ± 10% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.finish_task_switch.__schedule
0.00 +1.6 1.57 ± 3% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.19 ± 6% +1.6 6.79 ± 4% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.do_nanosleep.hrtimer_nanosleep
0.70 ± 45% +1.8 2.49 ± 4% perf-profile.calltrace.cycles-pp.update_load_avg.set_next_entity.pick_next_task_fair.__schedule.schedule_idle
1.33 ± 4% +2.0 3.35 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.94 ± 44% +2.1 3.07 ± 3% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule_idle.do_idle
0.82 ± 71% +2.2 3.03 ± 7% perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.58 ± 44% +2.3 2.88 ± 5% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.00 +2.4 2.43 ± 7% perf-profile.calltrace.cycles-pp.set_task_cpu.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
3.50 ± 6% +2.5 6.04 ± 4% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.do_nanosleep
1.40 ± 47% +2.6 3.99 ± 7% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_entity.dequeue_task_fair.__schedule.schedule
4.87 ± 4% +3.6 8.48 ± 7% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
4.83 ± 4% +3.6 8.44 ± 7% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
4.57 ± 3% +3.9 8.45 ± 2% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
4.50 ± 3% +3.9 8.39 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
3.29 ± 4% +4.1 7.38 ± 6% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup
9.21 ± 3% +8.2 17.44 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
8.51 ± 3% +8.3 16.81 ± 6% perf-profile.calltrace.cycles-pp.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
8.43 ± 3% +8.3 16.74 ± 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
34.67 +9.6 44.24 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.36 ± 2% +9.9 15.22 ± 4% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle
5.40 ± 2% +9.9 15.28 ± 4% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state
5.74 +10.0 15.75 ± 4% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state.cpuidle_enter
49.86 +11.3 61.13 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
49.43 +11.6 61.07 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
56.80 +15.9 72.69 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
56.85 +15.9 72.74 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
56.87 +15.9 72.76 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
57.48 +16.1 73.53 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
22.29 ± 3% +22.4 44.73 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle
16.99 ± 7% -16.4 0.57 ± 27% perf-profile.children.cycles-pp.newidle_balance
29.53 ± 3% -16.2 13.31 ± 2% perf-profile.children.cycles-pp.schedule
35.46 ± 2% -16.1 19.33 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
35.10 ± 2% -16.1 19.04 ± 5% perf-profile.children.cycles-pp.do_syscall_64
30.21 ± 3% -15.8 14.38 ± 2% perf-profile.children.cycles-pp.__x64_sys_nanosleep
29.24 ± 3% -15.7 13.52 ± 2% perf-profile.children.cycles-pp.do_nanosleep
29.59 ± 3% -15.7 13.87 ± 2% perf-profile.children.cycles-pp.hrtimer_nanosleep
33.48 ± 2% -15.3 18.22 ± 2% perf-profile.children.cycles-pp.__nanosleep
15.30 ± 7% -14.9 0.42 ± 30% perf-profile.children.cycles-pp.load_balance
19.78 ± 5% -14.8 4.97 ± 3% perf-profile.children.cycles-pp.pick_next_task_fair
33.95 ± 2% -12.2 21.72 ± 2% perf-profile.children.cycles-pp.__schedule
10.77 ± 8% -10.5 0.27 ± 32% perf-profile.children.cycles-pp.find_busiest_group
10.58 ± 8% -10.3 0.26 ± 33% perf-profile.children.cycles-pp.update_sd_lb_stats
2.06 ± 20% -2.0 0.06 ± 16% perf-profile.children.cycles-pp.raw_spin_rq_lock_nested
1.49 ± 11% -1.4 0.06 ± 20% perf-profile.children.cycles-pp.idle_cpu
5.09 ± 9% -1.1 4.02 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
3.44 -1.0 2.42 perf-profile.children.cycles-pp.__x64_sys_sched_yield
1.04 ± 8% -0.9 0.15 ± 15% perf-profile.children.cycles-pp.update_blocked_averages
0.79 ± 7% -0.7 0.10 ± 7% perf-profile.children.cycles-pp._find_next_bit
0.96 -0.6 0.40 ± 3% perf-profile.children.cycles-pp.do_sched_yield
0.95 ± 5% -0.4 0.50 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
1.05 ± 2% -0.3 0.72 ± 3% perf-profile.children.cycles-pp.clockevents_program_event
1.77 -0.3 1.45 ± 2% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.91 -0.3 0.63 ± 4% perf-profile.children.cycles-pp.sched_clock_cpu
0.79 ± 2% -0.2 0.57 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.76 ± 2% -0.2 0.54 ± 2% perf-profile.children.cycles-pp.lapic_next_deadline
1.30 ± 2% -0.2 1.10 ± 3% perf-profile.children.cycles-pp.update_rq_clock
0.31 ± 5% -0.2 0.11 ± 11% perf-profile.children.cycles-pp.yield_task_fair
1.02 ± 4% -0.2 0.82 ± 4% perf-profile.children.cycles-pp.sem_getvalue@@GLIBC_2.2.5
0.48 ± 3% -0.2 0.29 ± 3% perf-profile.children.cycles-pp.reweight_entity
1.44 -0.2 1.27 ± 2% perf-profile.children.cycles-pp.hrtimer_start_range_ns
0.86 ± 7% -0.1 0.72 ± 5% perf-profile.children.cycles-pp.semaphore_posix_thrash
0.41 -0.1 0.28 ± 4% perf-profile.children.cycles-pp.irq_exit_rcu
0.41 ± 8% -0.1 0.29 ± 4% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.61 ± 2% -0.1 0.49 ± 4% perf-profile.children.cycles-pp.__update_load_avg_se
0.23 ± 13% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.place_entity
0.27 ± 4% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.irq_enter_rcu
0.62 ± 2% -0.1 0.51 ± 3% perf-profile.children.cycles-pp.ktime_get
0.38 ± 4% -0.1 0.27 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.83 -0.1 0.71 ± 2% perf-profile.children.cycles-pp.load_new_mm_cr3
0.24 ± 4% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.tick_irq_enter
1.59 ± 2% -0.1 1.50 ± 2% perf-profile.children.cycles-pp.update_curr
0.35 ± 2% -0.1 0.26 ± 2% perf-profile.children.cycles-pp.pick_next_entity
0.12 ± 18% -0.1 0.04 ± 63% perf-profile.children.cycles-pp.get_nohz_timer_target
0.44 ± 3% -0.1 0.36 ± 3% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.20 ± 4% -0.1 0.12 ± 5% perf-profile.children.cycles-pp.put_prev_entity
0.49 ± 5% -0.1 0.41 ± 3% perf-profile.children.cycles-pp.sem_post@@GLIBC_2.2.5
0.37 ± 3% -0.1 0.30 ± 3% perf-profile.children.cycles-pp.save_fpregs_to_fpstate
0.26 ± 4% -0.1 0.19 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
0.45 -0.1 0.38 ± 4% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.44 ± 2% -0.1 0.38 ± 2% perf-profile.children.cycles-pp.get_timespec64
0.12 ± 6% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.rebalance_domains
0.22 ± 5% -0.1 0.16 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.24 ± 4% -0.1 0.17 ± 8% perf-profile.children.cycles-pp.tick_sched_handle
0.23 ± 5% -0.1 0.17 ± 9% perf-profile.children.cycles-pp.update_process_times
0.38 -0.1 0.32 ± 2% perf-profile.children.cycles-pp._copy_from_user
0.18 ± 9% -0.1 0.13 ± 5% perf-profile.children.cycles-pp.rb_erase
0.27 ± 4% -0.1 0.22 ± 4% perf-profile.children.cycles-pp.__calc_delta
0.18 ± 2% -0.0 0.13 ± 2% perf-profile.children.cycles-pp.__might_fault
0.44 ± 4% -0.0 0.40 ± 4% perf-profile.children.cycles-pp.__entry_text_start
0.09 ± 4% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.__list_add_valid
0.13 ± 4% -0.0 0.09 ± 13% perf-profile.children.cycles-pp.scheduler_tick
0.10 ± 3% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.07 ± 10% -0.0 0.04 ± 63% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.25 ± 3% -0.0 0.22 ± 5% perf-profile.children.cycles-pp.perf_tp_event
0.13 ± 3% -0.0 0.09 ± 7% perf-profile.children.cycles-pp.pick_next_task_idle
0.33 ± 2% -0.0 0.30 ± 4% perf-profile.children.cycles-pp.sem_trywait@@GLIBC_2.2.5
0.10 ± 3% -0.0 0.07 ± 9% perf-profile.children.cycles-pp.__enqueue_entity
0.12 ± 6% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.11 ± 6% -0.0 0.09 ± 8% perf-profile.children.cycles-pp.set_next_task_idle
0.10 ± 5% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.perf_trace_buf_update
0.23 ± 2% -0.0 0.21 ± 3% perf-profile.children.cycles-pp.update_min_vruntime
0.12 ± 4% -0.0 0.10 ± 6% perf-profile.children.cycles-pp.__cgroup_account_cputime
0.10 ± 6% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.06 ± 6% +0.0 0.08 ± 9% perf-profile.children.cycles-pp.hrtimer_reprogram
0.10 ± 5% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.12 ± 6% +0.0 0.16 ± 4% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.13 ± 8% +0.0 0.16 ± 3% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.30 ± 4% +0.0 0.33 ± 5% perf-profile.children.cycles-pp.enqueue_hrtimer
0.10 ± 44% +0.0 0.14 ± 5% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.26 ± 2% +0.0 0.30 ± 5% perf-profile.children.cycles-pp.timerqueue_add
0.08 ± 44% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.rcu_eqs_exit
0.06 ± 46% +0.0 0.10 ± 6% perf-profile.children.cycles-pp.call_cpuidle
0.03 ± 82% +0.0 0.07 ± 9% perf-profile.children.cycles-pp.cpumask_next
0.02 ± 99% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.rcu_needs_cpu
0.13 ± 3% +0.0 0.18 ± 3% perf-profile.children.cycles-pp.rcu_idle_exit
0.12 ± 3% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.rcu_dynticks_inc
0.01 ±223% +0.1 0.06 ± 10% perf-profile.children.cycles-pp.menu_reflect
0.17 ± 2% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.19 ± 2% +0.1 0.26 ± 2% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.14 ± 5% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.update_ts_time_stats
0.35 ± 2% +0.1 0.45 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
0.13 ± 3% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.nr_iowait_cpu
0.07 ± 5% +0.1 0.17 ± 2% perf-profile.children.cycles-pp.attach_entity_load_avg
0.08 ± 9% +0.1 0.19 ± 8% perf-profile.children.cycles-pp.remove_entity_load_avg
0.08 ± 5% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.cpus_share_cache
0.29 ± 3% +0.1 0.43 perf-profile.children.cycles-pp.check_preempt_curr
0.39 ± 6% +0.1 0.54 ± 5% perf-profile.children.cycles-pp.available_idle_cpu
0.51 +0.1 0.66 ± 3% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.26 ± 14% +0.2 0.42 ± 9% perf-profile.children.cycles-pp.shim_nanosleep_uint64
0.17 ± 4% +0.2 0.35 ± 3% perf-profile.children.cycles-pp.resched_curr
0.28 ± 3% +0.2 0.46 ± 2% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.15 ± 4% +0.2 0.36 ± 4% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.67 ± 2% +0.2 0.89 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.97 ± 3% +0.2 1.21 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
0.15 ± 3% +0.2 0.39 ± 3% perf-profile.children.cycles-pp.hrtimer_try_to_cancel
0.13 ± 3% +0.2 0.37 ± 3% perf-profile.children.cycles-pp.hrtimer_active
0.41 ± 9% +0.3 0.68 ± 10% perf-profile.children.cycles-pp.select_idle_cpu
0.85 ± 3% +0.3 1.14 ± 2% perf-profile.children.cycles-pp.__switch_to
1.17 ± 2% +0.3 1.48 ± 4% perf-profile.children.cycles-pp.menu_select
0.63 ± 5% +0.3 0.94 ± 3% perf-profile.children.cycles-pp.restore_fpregs_from_fpstate
1.04 ± 5% +0.4 1.40 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.95 ± 4% +0.4 1.33 ± 3% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.87 ± 4% +0.4 1.27 ± 3% perf-profile.children.cycles-pp.switch_fpu_return
0.78 ± 5% +0.4 1.20 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
1.09 ± 4% +0.8 1.93 ± 65% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.75 ± 10% +0.9 2.64 ± 5% perf-profile.children.cycles-pp.finish_task_switch
5.67 ± 6% +1.2 6.82 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
0.41 ± 12% +1.2 1.62 ± 3% perf-profile.children.cycles-pp.poll_idle
1.40 ± 3% +1.8 3.20 ± 3% perf-profile.children.cycles-pp.set_next_entity
1.06 ± 10% +2.2 3.30 ± 7% perf-profile.children.cycles-pp.set_task_cpu
3.70 ± 6% +2.4 6.08 ± 4% perf-profile.children.cycles-pp.dequeue_entity
4.07 ± 4% +3.5 7.59 ± 4% perf-profile.children.cycles-pp.update_load_avg
5.19 ± 14% +3.8 8.97 ± 8% perf-profile.children.cycles-pp.update_cfs_group
6.12 ± 6% +3.9 10.00 ± 5% perf-profile.children.cycles-pp.enqueue_task_fair
4.62 ± 3% +3.9 8.55 ± 2% perf-profile.children.cycles-pp.schedule_idle
4.42 ± 6% +4.4 8.83 ± 5% perf-profile.children.cycles-pp.enqueue_entity
5.56 ± 5% +4.4 10.01 ± 5% perf-profile.children.cycles-pp.ttwu_do_activate
12.73 ± 4% +7.9 20.66 ± 4% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
11.85 ± 4% +8.2 20.08 ± 4% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
11.75 ± 4% +8.2 19.99 ± 4% perf-profile.children.cycles-pp.hrtimer_interrupt
10.74 ± 4% +8.5 19.27 ± 5% perf-profile.children.cycles-pp.__hrtimer_run_queues
9.89 ± 4% +8.7 18.55 ± 5% perf-profile.children.cycles-pp.hrtimer_wakeup
9.83 ± 4% +8.7 18.51 ± 5% perf-profile.children.cycles-pp.try_to_wake_up
50.40 +11.4 61.77 perf-profile.children.cycles-pp.cpuidle_enter
50.37 +11.4 61.75 perf-profile.children.cycles-pp.cpuidle_enter_state
21.64 +13.9 35.50 ± 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
43.36 +15.7 59.08 perf-profile.children.cycles-pp.intel_idle
56.87 +15.9 72.76 perf-profile.children.cycles-pp.start_secondary
57.43 +16.1 73.48 perf-profile.children.cycles-pp.do_idle
57.48 +16.1 73.53 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
57.48 +16.1 73.53 perf-profile.children.cycles-pp.cpu_startup_entry
8.27 ± 8% -8.1 0.21 ± 33% perf-profile.self.cycles-pp.update_sd_lb_stats
1.47 ± 11% -1.4 0.06 ± 20% perf-profile.self.cycles-pp.idle_cpu
0.73 ± 7% -0.6 0.09 ± 9% perf-profile.self.cycles-pp._find_next_bit
2.02 ± 5% -0.6 1.38 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
0.63 ± 4% -0.5 0.17 ± 4% perf-profile.self.cycles-pp.cpuidle_enter_state
0.87 ± 5% -0.4 0.48 ± 3% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.40 ± 25% -0.3 0.06 ± 18% perf-profile.self.cycles-pp.update_blocked_averages
0.40 ± 13% -0.3 0.11 ± 4% perf-profile.self.cycles-pp.newidle_balance
0.90 ± 4% -0.2 0.67 ± 4% perf-profile.self.cycles-pp.sem_getvalue@@GLIBC_2.2.5
0.76 ± 2% -0.2 0.54 ± 2% perf-profile.self.cycles-pp.lapic_next_deadline
0.76 ± 2% -0.2 0.55 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.40 ± 12% -0.2 0.19 ± 2% perf-profile.self.cycles-pp.dequeue_task_fair
0.93 ± 2% -0.2 0.73 ± 3% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.81 ± 7% -0.2 0.64 ± 6% perf-profile.self.cycles-pp.semaphore_posix_thrash
0.60 ± 13% -0.2 0.44 ± 4% perf-profile.self.cycles-pp.enqueue_task_fair
0.40 ± 3% -0.1 0.27 ± 4% perf-profile.self.cycles-pp.reweight_entity
0.60 ± 2% -0.1 0.48 ± 4% perf-profile.self.cycles-pp.__update_load_avg_se
0.83 -0.1 0.71 ± 2% perf-profile.self.cycles-pp.load_new_mm_cr3
0.38 ± 3% -0.1 0.27 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
0.45 ± 5% -0.1 0.36 ± 3% perf-profile.self.cycles-pp.sem_post@@GLIBC_2.2.5
0.17 ± 13% -0.1 0.09 ± 7% perf-profile.self.cycles-pp.place_entity
0.43 ± 4% -0.1 0.35 ± 3% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.33 ± 3% -0.1 0.26 ± 5% perf-profile.self.cycles-pp.pick_next_task_fair
0.37 ± 2% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.save_fpregs_to_fpstate
0.37 ± 2% -0.1 0.30 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.56 ± 3% -0.1 0.49 ± 2% perf-profile.self.cycles-pp.enqueue_entity
0.30 -0.1 0.24 ± 3% perf-profile.self.cycles-pp.pick_next_entity
0.19 ± 7% -0.1 0.12 ± 16% perf-profile.self.cycles-pp.do_syscall_64
0.18 ± 5% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.__x64_sys_nanosleep
0.15 ± 3% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.schedule
0.26 ± 6% -0.1 0.21 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.26 ± 2% -0.1 0.20 ± 5% perf-profile.self.cycles-pp.ktime_get
0.09 ± 9% -0.1 0.04 ± 63% perf-profile.self.cycles-pp.__list_add_valid
0.10 ± 4% -0.0 0.06 ± 8% perf-profile.self.cycles-pp.sched_clock_cpu
0.26 ± 3% -0.0 0.22 ± 4% perf-profile.self.cycles-pp.__calc_delta
0.17 ± 9% -0.0 0.12 ± 5% perf-profile.self.cycles-pp.rb_erase
0.20 ± 3% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.__sched_yield
0.30 ± 2% -0.0 0.26 ± 6% perf-profile.self.cycles-pp.sem_trywait@@GLIBC_2.2.5
0.09 ± 7% -0.0 0.05 ± 6% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.38 ± 3% -0.0 0.35 ± 3% perf-profile.self.cycles-pp.do_nanosleep
0.12 ± 6% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.09 ± 4% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.__enqueue_entity
0.11 ± 7% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.26 -0.0 0.24 ± 3% perf-profile.self.cycles-pp.try_to_wake_up
0.11 ± 8% -0.0 0.09 ± 3% perf-profile.self.cycles-pp.hrtimer_interrupt
0.22 ± 2% -0.0 0.20 ± 4% perf-profile.self.cycles-pp.update_min_vruntime
0.09 ± 5% -0.0 0.08 ± 4% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.06 ± 45% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.05 ± 45% +0.0 0.08 ± 10% perf-profile.self.cycles-pp.hrtimer_reprogram
0.10 ± 5% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.select_idle_sibling
0.06 ± 46% +0.0 0.10 ± 6% perf-profile.self.cycles-pp.call_cpuidle
0.02 ± 99% +0.0 0.07 ± 11% perf-profile.self.cycles-pp.rcu_needs_cpu
0.11 ± 7% +0.0 0.16 ± 4% perf-profile.self.cycles-pp.rcu_dynticks_inc
0.00 +0.1 0.05 perf-profile.self.cycles-pp.tick_nohz_idle_exit
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.rcu_idle_exit
0.00 +0.1 0.07 ± 4% perf-profile.self.cycles-pp.migrate_task_rq_fair
0.08 ± 22% +0.1 0.17 ± 12% perf-profile.self.cycles-pp.poll_idle
0.23 ± 2% +0.1 0.32 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
0.13 ± 5% +0.1 0.23 ± 3% perf-profile.self.cycles-pp.nr_iowait_cpu
0.07 ± 12% +0.1 0.17 ± 2% perf-profile.self.cycles-pp.attach_entity_load_avg
0.42 +0.1 0.54 ± 3% perf-profile.self.cycles-pp.do_idle
0.25 ± 15% +0.1 0.38 ± 9% perf-profile.self.cycles-pp.shim_nanosleep_uint64
0.49 ± 4% +0.1 0.63 ± 5% perf-profile.self.cycles-pp.menu_select
0.08 ± 9% +0.1 0.22 ± 5% perf-profile.self.cycles-pp.cpus_share_cache
0.39 ± 6% +0.1 0.53 ± 5% perf-profile.self.cycles-pp.available_idle_cpu
0.16 ± 2% +0.2 0.34 ± 3% perf-profile.self.cycles-pp.resched_curr
0.66 ± 2% +0.2 0.88 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.12 ± 4% +0.2 0.34 ± 3% perf-profile.self.cycles-pp.hrtimer_active
0.97 ± 3% +0.2 1.20 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
0.27 ± 4% +0.2 0.52 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.83 ± 2% +0.3 1.13 ± 3% perf-profile.self.cycles-pp.__switch_to
0.63 ± 5% +0.3 0.94 ± 3% perf-profile.self.cycles-pp.restore_fpregs_from_fpstate
0.63 ± 13% +2.3 2.98 ± 8% perf-profile.self.cycles-pp.set_task_cpu
2.50 ± 7% +3.5 5.97 ± 5% perf-profile.self.cycles-pp.update_load_avg
5.17 ± 14% +3.8 8.95 ± 8% perf-profile.self.cycles-pp.update_cfs_group
37.34 ± 2% +5.5 42.82 ± 2% perf-profile.self.cycles-pp.intel_idle
stress-ng.time.voluntary_context_switches
3e+08 +-----------------------------------------------------------------+
| |
2.5e+08 |-OO O O O O O OO OO OO O OO OO O OO O O O OO OO |
| O O +. .+.++. +.O .+ .+ .|
|.++.+.++.+.++.+.+ ++ +.+ + + + +.+.++ +.+.+ +.++ |
2e+08 |-+ : :: : : : |
| : :: : :: |
1.5e+08 |-+ + : + : + |
| : : : : |
1e+08 |-+ : : : : |
| : : : : |
| :: :: |
5e+07 |-+ : : |
| : : |
0 +-----------------------------------------------------------------+
stress-ng.time.involuntary_context_switches
6e+07 +-------------------------------------------------------------------+
| |
5e+07 |-+ .+ .+ + |
|.+ .+. +. .+ .+.++. .+ : + : .+. :+ .++. .|
| +.+ + +.++.+ +.+ +.++ : : : + ++.+.: +.+ +.++ |
4e+07 |-+ : : : : + |
| : : : : |
3e+07 |-+ O : : : : |
| O O O OO O O O OO O OO O O::O O: : O OO O O O |
2e+07 |-OO O OO :: :O: O |
| :: :: |
| :: : |
1e+07 |-+ : : |
| : : |
0 +-------------------------------------------------------------------+
stress-ng.sem.ops
6e+08 +-------------------------------------------------------------------+
| |
5e+08 |-OO O O O O O O OO O OO O OO OO O O OO OO O OO O |
| O O .+ .+.++. +.O .+. +. .|
|.++.+.+.++.+.++.+ +.+ +.+ + + + ++.+.+ +.+.+ +.++ |
4e+08 |-+ : :: : : : |
| : :: : :: |
3e+08 |-+ + : + : + |
| : : : : |
2e+08 |-+ : : : : |
| : : :: |
| :: :: |
1e+08 |-+ : :: |
| : : |
0 +-------------------------------------------------------------------+
stress-ng.sem.ops_per_sec
9e+06 +-------------------------------------------------------------------+
| OO O O O O OO O O O O OO O OO O OO O O O O OO O O O |
8e+06 |.++. .+.++. .++.+.++.+.+.++.+. +.+ + +.+. +. +.+.+.+ +.++.|
7e+06 |-+ + + + : :: : + +.+ : : |
| : :: : :: |
6e+06 |-+ : : : : :: |
5e+06 |-+ + : + : + |
| : : : : |
4e+06 |-+ : : : : |
3e+06 |-+ : : :: |
| :: :: |
2e+06 |-+ :: :: |
1e+06 |-+ : : |
| : : |
0 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
[btrfs] a323b5f59e: xfstests.btrfs.167.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: a323b5f59eeaeadecf4ad5d0aa11fd34dbac6980 ("[PATCH 09/21] btrfs: zoned: skip zoned check if block_group is marked as copy")
url: https://github.com/0day-ci/linux/commits/Johannes-Thumshirn/btrfs-first-b...
base: https://git.kernel.org/cgit/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/linux-btrfs/9cf41680544814baa4fe3ba3424a01a474755...
in testcase: xfstests
version: xfstests-x86_64-99bc497-1_20211126
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group-16
ucode: 0x28
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
2021-11-27 18:25:51 export TEST_DIR=/fs/sdb1
2021-11-27 18:25:51 export TEST_DEV=/dev/sdb1
2021-11-27 18:25:51 export FSTYP=btrfs
2021-11-27 18:25:51 export SCRATCH_MNT=/fs/scratch
2021-11-27 18:25:51 mkdir /fs/scratch -p
2021-11-27 18:25:51 export SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
2021-11-27 18:25:51 sed "s:^:btrfs/:" //lkp/benchmarks/xfstests/tests/btrfs-group-16
2021-11-27 18:25:51 ./check btrfs/160 btrfs/161 btrfs/162 btrfs/163 btrfs/164 btrfs/165 btrfs/166 btrfs/167 btrfs/168 btrfs/169
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 lkp-hsw-d01 5.16.0-rc1-00057-ga323b5f59eea #1 SMP Sat Nov 27 20:08:36 CST 2021
MKFS_OPTIONS -- /dev/sdb2
MOUNT_OPTIONS -- /dev/sdb2 /fs/scratch
btrfs/160 3s
btrfs/161 2s
btrfs/162 2s
btrfs/163 [not run] Require module btrfs to be unloadable
btrfs/164 [not run] Require module btrfs to be unloadable
btrfs/165 2s
btrfs/166 1s
btrfs/167 - output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/167.out.bad)
--- tests/btrfs/167.out 2021-11-26 08:40:03.000000000 +0000
+++ /lkp/benchmarks/xfstests/results//btrfs/167.out.bad 2021-11-27 18:27:45.083970880 +0000
@@ -1,2 +1,5 @@
QA output created by 167
+mount: /fs/scratch: wrong fs type, bad option, bad superblock on /dev/sdb3, missing codepage or helper program, or other error.
+cat: /fs/scratch/nodatasum_file: No such file or directory
+umount: /fs/scratch: not mounted.
Silence is golden
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/167.out /lkp/benchmarks/xfstests/results//btrfs/167.out.bad' to see the entire diff)
btrfs/168 2s
btrfs/169 1s
Ran: btrfs/160 btrfs/161 btrfs/162 btrfs/163 btrfs/164 btrfs/165 btrfs/166 btrfs/167 btrfs/168 btrfs/169
Not run: btrfs/163 btrfs/164
Failures: btrfs/167
Failed 1 of 10 tests
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
Thanks,
Oliver Sang
[ramfs] 0858d7da8a: canonical_address#:#[##]
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with clang-14):
commit: 0858d7da8a09e440fb192a0239d20249a2d16af8 ("ramfs: fix mount source show for ramfs")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+------------------------------------------+------------+------------+
| | 2d93a5835a | 0858d7da8a |
+------------------------------------------+------------+------------+
| boot_successes | 17 | 4 |
| boot_failures | 0 | 13 |
| canonical_address#:#[##] | 0 | 12 |
| RIP:ntfs_update_mftmirr | 0 | 12 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 12 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 806.118664][ T1] selinux=0
[ 806.119418][ T1] softlockup_panic=1
[ 806.120350][ T1] nmi_watchdog=panic
[ 806.121180][ T1] vga=normal
[ 806.257788][ T204] /dev/root: Can't open blockdev
[ 806.259101][ T204] general protection fault, probably for non-canonical address 0xdffffc0000000003: 0000 [#1] SMP KASAN
[ 806.263082][ T204] KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f]
[ 806.264593][ T204] CPU: 1 PID: 204 Comm: mount Not tainted 5.15.0-00312-g0858d7da8a09 #1
[ 806.266012][ T204] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 806.267540][ T204] RIP: 0010:ntfs_update_mftmirr (kbuild/src/consumer/fs/ntfs3/fsntfs.c:834)
[ 806.268641][ T204] Code: 4c 89 e8 48 c1 e8 03 42 80 3c 30 00 74 08 4c 89 ef e8 f4 4b a0 ff 4d 8b 65 00 49 8d 5c 24 18 48 89 d8 48 c1 e8 03 48 89 45 90 <42> 80 3c 30 00 74 08 48 89 df e8 d1 4b a0 ff 48 89 9d 78 ff ff ff
All code
========
0: 4c 89 e8 mov %r13,%rax
3: 48 c1 e8 03 shr $0x3,%rax
7: 42 80 3c 30 00 cmpb $0x0,(%rax,%r14,1)
c: 74 08 je 0x16
e: 4c 89 ef mov %r13,%rdi
11: e8 f4 4b a0 ff callq 0xffffffffffa04c0a
16: 4d 8b 65 00 mov 0x0(%r13),%r12
1a: 49 8d 5c 24 18 lea 0x18(%r12),%rbx
1f: 48 89 d8 mov %rbx,%rax
22: 48 c1 e8 03 shr $0x3,%rax
26: 48 89 45 90 mov %rax,-0x70(%rbp)
2a:* 42 80 3c 30 00 cmpb $0x0,(%rax,%r14,1) <-- trapping instruction
2f: 74 08 je 0x39
31: 48 89 df mov %rbx,%rdi
34: e8 d1 4b a0 ff callq 0xffffffffffa04c0a
39: 48 89 9d 78 ff ff ff mov %rbx,-0x88(%rbp)
Code starting with the faulting instruction
===========================================
0: 42 80 3c 30 00 cmpb $0x0,(%rax,%r14,1)
5: 74 08 je 0xf
7: 48 89 df mov %rbx,%rdi
a: e8 d1 4b a0 ff callq 0xffffffffffa04be0
f: 48 89 9d 78 ff ff ff mov %rbx,-0x88(%rbp)
[ 806.271820][ T204] RSP: 0000:ffffc90000297c08 EFLAGS: 00010206
[ 806.272964][ T204] RAX: 0000000000000003 RBX: 0000000000000018 RCX: ffff888122c58000
[ 806.274379][ T204] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff888122a76000
[ 806.275793][ T204] RBP: ffffc90000297c90 R08: dffffc0000000000 R09: ffff888122a762a8
[ 806.277143][ T204] R10: dfffe9102454ec59 R11: 1ffff1102454ec55 R12: 0000000000000000
[ 806.278484][ T204] R13: ffff888122a76000 R14: dffffc0000000000 R15: dffffc0000000000
[ 806.279930][ T204] FS: 0000000000000000(0000) GS:ffff8883a0500000(0063) knlGS:00000000f7e8f200
[ 806.281545][ T204] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
[ 806.282669][ T204] CR2: 00000000565fa0ec CR3: 00000001229cf000 CR4: 00000000000406e0
[ 806.284123][ T204] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 806.285604][ T204] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 806.287064][ T204] Call Trace:
[ 806.287746][ T204] ? kfree (kbuild/src/consumer/mm/slub.c:4553)
[ 806.288623][ T204] ? trace_kfree (kbuild/src/consumer/include/trace/events/kmem.h:118)
[ 806.289448][ T204] ? memset (kbuild/src/consumer/mm/kasan/shadow.c:?)
[ 806.290232][ T204] put_ntfs (kbuild/src/consumer/fs/ntfs3/super.c:465)
[ 806.291046][ T204] ntfs_fs_free (kbuild/src/consumer/fs/ntfs3/super.c:1365)
To reproduce:
# build kernel
cd linux
cp config-5.15.0-00312-g0858d7da8a09 .config
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=clang-14 CC=clang-14 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
Thanks,
Oliver Sang
[xfs] d63b32921c: BUG_xfs_sxi_item(Not_tainted):Objects_remaining_in_xfs_sxi_item_on__kmem_cache_shutdown()
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: d63b32921c8b7d6bea49fb41b73fbf0efc37e376 ("xfs: port xfs_swap_extents_rmap to our new code")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git vectorized-scrub
in testcase: xfstests
version: xfstests-x86_64-99bc497-1_20211125
with following parameters:
disk: 4HDD
fs: xfs
test: xfs-reflink-rmapbt
ucode: 0x21
test-description: xfstests is a regression test suite for xfs and other file systems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 4 threads 1 sockets Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
kern :notice: [ 135.661968] XFS (sda2): Mounting V5 Filesystem
kern :info : [ 135.713059] XFS (sda2): Ending clean mount
kern :warn : [ 135.720747] xfs filesystem being mounted at /fs/sda2 supports timestamps until 2038 (0x7fffffff)
kern :notice: [ 135.868579] XFS (sda2): Unmounting Filesystem
kern :err : [ 135.979420] =============================================================================
kern :err : [ 135.988497] BUG xfs_sxi_item (Not tainted): Objects remaining in xfs_sxi_item on __kmem_cache_shutdown()
kern :err : [ 135.998874] -----------------------------------------------------------------------------
kern :err : [ 136.010122] Slab 0x00000000d1a77f73 objects=27 used=2 fp=0x0000000056213d6d flags=0x17ffffc0010200(slab|head|node=0|zone=2|lastcpupid=0x1fffff)
kern :warn : [ 136.023913] CPU: 2 PID: 6263 Comm: modprobe Not tainted 5.15.0-00146-gd63b32921c8b #1
kern :warn : [ 136.032611] Hardware name: Hewlett-Packard HP Pro 3340 MT/17A1, BIOS 8.07 01/24/2013
kern :warn : [ 136.041202] Call Trace:
kern :warn : [ 136.044390] dump_stack_lvl (kbuild/src/consumer/lib/dump_stack.c:107)
kern :warn : [ 136.048843] slab_err (kbuild/src/consumer/mm/slub.c:881)
kern :warn : [ 136.052848] ? _raw_spin_lock_irq (kbuild/src/consumer/include/linux/instrumented.h:101 kbuild/src/consumer/include/linux/atomic/atomic-instrumented.h:512 kbuild/src/consumer/include/asm-generic/qspinlock.h:82 kbuild/src/consumer/include/linux/spinlock.h:187 kbuild/src/consumer/include/linux/spinlock_api_smp.h:129 kbuild/src/consumer/kernel/locking/spinlock.c:170)
kern :warn : [ 136.057917] ? _raw_read_unlock_irqrestore (kbuild/src/consumer/kernel/locking/spinlock.c:169)
kern :warn : [ 136.063664] ? flush_all_cpus_locked (kbuild/src/consumer/mm/slub.c:2694 (discriminator 1))
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
Thanks,
Oliver Sang
[memcg, kmem] 58056f7750: hackbench.throughput 10.3% improvement
by kernel test robot
Greeting,
FYI, we noticed a 10.3% improvement of hackbench.throughput due to commit:
commit: 58056f77502f3567b760c9a8fc8d2e9081515b2d ("memcg, kmem: further deprecate kmem.limit_in_bytes")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: 144 threads 4 sockets Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
with following parameters:
nr_threads: 100%
iterations: 4
mode: process
ipc: socket
cpufreq_governor: performance
ucode: 0x700001e
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-9/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-cpl-4sp1/hackbench/0x700001e
commit:
16f6bf266c ("mm/list_lru.c: prefer struct_size over open coded arithmetic")
58056f7750 ("memcg, kmem: further deprecate kmem.limit_in_bytes")
16f6bf266c94017c 58056f77502f3567b760c9a8fc8
---------------- ---------------------------
%stddev %change %stddev
\ | \
124164 +10.3% 137012 ± 2% hackbench.throughput
553.20 -9.9% 498.51 ± 2% hackbench.time.elapsed_time
553.20 -9.9% 498.51 ± 2% hackbench.time.elapsed_time.max
3.519e+08 ± 4% -56.3% 1.539e+08 ± 2% hackbench.time.involuntary_context_switches
76944 -10.0% 69266 ± 2% hackbench.time.system_time
1046 -15.9% 880.42 ± 2% hackbench.time.user_time
9.209e+08 ± 2% -41.2% 5.414e+08 hackbench.time.voluntary_context_switches
516011 ± 17% -33.5% 343117 ± 32% numa-numastat.node1.numa_hit
260286 ± 8% -16.7% 216841 ± 15% numa-vmstat.node3.nr_shmem
607.74 -8.8% 554.41 uptime.boot
164520 +2.7% 168980 proc-vmstat.nr_slab_unreclaimable
137719 -4.5% 131582 proc-vmstat.pgreuse
0.42 ± 6% +0.1 0.53 ± 6% turbostat.C1%
2.966e+08 -18.8% 2.408e+08 ± 2% turbostat.IRQ
2746 -12.3% 2408 vmstat.procs.r
2336697 ± 3% -38.4% 1439816 vmstat.system.cs
534308 -9.9% 481527 vmstat.system.in
5546 ± 5% -12.6% 4846 ± 6% slabinfo.kmalloc-cg-192.active_objs
5546 ± 5% -12.6% 4846 ± 6% slabinfo.kmalloc-cg-192.num_objs
157208 +9.5% 172170 slabinfo.kmalloc-cg-512.active_objs
3128 +11.3% 3480 slabinfo.kmalloc-cg-512.active_slabs
200236 +11.3% 222802 slabinfo.kmalloc-cg-512.num_objs
3128 +11.3% 3480 slabinfo.kmalloc-cg-512.num_slabs
157390 +9.5% 172289 slabinfo.skbuff_head_cache.active_objs
3134 +11.2% 3485 slabinfo.skbuff_head_cache.active_slabs
200626 +11.2% 223129 slabinfo.skbuff_head_cache.num_objs
3134 +11.2% 3485 slabinfo.skbuff_head_cache.num_slabs
1.669e+10 -5.2% 1.582e+10 perf-stat.i.branch-instructions
0.80 -0.2 0.64 ± 2% perf-stat.i.branch-miss-rate%
1.312e+08 -24.3% 99283197 ± 2% perf-stat.i.branch-misses
27.80 +1.1 28.86 perf-stat.i.cache-miss-rate%
8.619e+08 -1.5% 8.491e+08 perf-stat.i.cache-references
2349252 ± 3% -38.2% 1452325 perf-stat.i.context-switches
143032 ± 2% -14.7% 121977 ± 2% perf-stat.i.cpu-migrations
2001 -3.5% 1931 perf-stat.i.cycles-between-cache-misses
0.07 ± 5% -0.0 0.06 ± 5% perf-stat.i.dTLB-load-miss-rate%
16549027 ± 4% -12.9% 14414299 ± 3% perf-stat.i.dTLB-load-misses
2.392e+10 -1.9% 2.348e+10 perf-stat.i.dTLB-loads
0.03 ± 11% +0.0 0.04 ± 6% perf-stat.i.dTLB-store-miss-rate%
4025883 ± 11% +30.5% 5252098 ± 5% perf-stat.i.dTLB-store-misses
56000903 -35.7% 35989438 ± 8% perf-stat.i.iTLB-load-misses
23047129 ± 3% -35.8% 14787249 perf-stat.i.iTLB-loads
8.356e+10 -3.9% 8.027e+10 perf-stat.i.instructions
1548 +47.8% 2288 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.18 -4.4% 0.17 perf-stat.i.ipc
869.08 +4.5% 908.26 perf-stat.i.metric.K/sec
384.93 -2.7% 374.53 perf-stat.i.metric.M/sec
7130 +3.7% 7398 ± 3% perf-stat.i.minor-faults
66.31 -6.3 60.05 ± 2% perf-stat.i.node-load-miss-rate%
39678105 -3.8% 38161927 perf-stat.i.node-load-misses
20549470 +25.2% 25719712 ± 5% perf-stat.i.node-loads
43.76 -6.0 37.72 ± 5% perf-stat.i.node-store-miss-rate%
15613166 ± 2% +10.0% 17180670 ± 4% perf-stat.i.node-store-misses
20301635 +40.5% 28518438 ± 4% perf-stat.i.node-stores
7172 +3.8% 7443 ± 3% perf-stat.i.page-faults
0.79 -0.2 0.62 ± 2% perf-stat.overall.branch-miss-rate%
28.13 +0.9 29.08 perf-stat.overall.cache-miss-rate%
5.65 +2.5% 5.79 perf-stat.overall.cpi
1942 -2.3% 1899 perf-stat.overall.cycles-between-cache-misses
0.07 ± 5% -0.0 0.06 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.03 ± 11% +0.0 0.04 ± 6% perf-stat.overall.dTLB-store-miss-rate%
1500 +49.7% 2246 ± 7% perf-stat.overall.instructions-per-iTLB-miss
0.18 -2.4% 0.17 perf-stat.overall.ipc
64.79 -5.9 58.87 perf-stat.overall.node-load-miss-rate%
40.92 -5.0 35.89 ± 3% perf-stat.overall.node-store-miss-rate%
1.641e+10 -3.5% 1.584e+10 perf-stat.ps.branch-instructions
1.288e+08 -23.6% 98487415 ± 2% perf-stat.ps.branch-misses
2337457 ± 3% -38.5% 1437257 perf-stat.ps.context-switches
130209 ± 2% -13.6% 112490 perf-stat.ps.cpu-migrations
16228057 ± 4% -11.2% 14403294 ± 3% perf-stat.ps.dTLB-load-misses
3947999 ± 10% +31.5% 5191308 ± 5% perf-stat.ps.dTLB-store-misses
54846930 -34.4% 35981347 ± 8% perf-stat.ps.iTLB-load-misses
22682918 ± 2% -35.8% 14572556 perf-stat.ps.iTLB-loads
8.23e+10 -2.4% 8.035e+10 perf-stat.ps.instructions
20.85 ± 3% +15.9% 24.16 ± 4% perf-stat.ps.major-faults
5283 +10.5% 5839 perf-stat.ps.minor-faults
39315654 -4.2% 37669399 perf-stat.ps.node-load-misses
21363970 +23.3% 26334557 ± 4% perf-stat.ps.node-loads
14602656 +11.5% 16280558 ± 2% perf-stat.ps.node-store-misses
21089334 +38.0% 29102191 ± 3% perf-stat.ps.node-stores
5304 +10.5% 5863 perf-stat.ps.page-faults
4.56e+13 -12.0% 4.011e+13 perf-stat.total.instructions
0.14 ±149% +6903.8% 10.10 ± 90% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
146.03 ± 20% -59.7% 58.91 ± 32% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
20.00 ± 16% -39.8% 12.05 ± 15% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
45.20 ± 21% -45.1% 24.80 ± 17% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
168.56 ± 19% -77.4% 38.03 ± 34% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
87.74 ± 39% -69.7% 26.57 ± 32% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
142.77 ± 20% -39.2% 86.82 ± 21% perf-sched.sch_delay.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
22.06 ± 12% -51.5% 10.70 ± 19% perf-sched.sch_delay.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
6.08 ±146% +12273.2% 752.85 ± 96% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
22236 ± 14% -62.5% 8341 ± 29% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
26048 ± 9% -52.9% 12272 ± 28% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
24687 ± 8% -61.5% 9499 ± 27% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
24838 ± 8% -70.9% 7234 ± 34% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
21611 ± 13% -65.7% 7403 ± 44% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
26017 ± 10% -53.3% 12149 ± 32% perf-sched.sch_delay.max.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
26199 ± 9% -57.5% 11127 ± 28% perf-sched.sch_delay.max.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
26.44 ± 13% -36.5% 16.79 ± 16% perf-sched.total_sch_delay.average.ms
26690 ± 10% -52.4% 12702 ± 29% perf-sched.total_sch_delay.max.ms
92.56 ± 13% -34.8% 60.38 ± 16% perf-sched.total_wait_and_delay.average.ms
53236 ± 9% -51.9% 25601 ± 30% perf-sched.total_wait_and_delay.max.ms
66.12 ± 13% -34.1% 43.58 ± 17% perf-sched.total_wait_time.average.ms
26766 ± 8% -47.8% 13965 ± 29% perf-sched.total_wait_time.max.ms
549.67 ± 18% -64.8% 193.37 ± 28% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
75.13 ± 14% -41.6% 43.84 ± 14% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
167.49 ± 17% -45.5% 91.23 ± 16% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
594.39 ± 14% -76.9% 137.13 ± 29% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
317.87 ± 21% -65.9% 108.34 ± 28% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
1953 ±115% -90.1% 193.28 ±107% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
154.92 ± 33% -82.5% 27.16 ±152% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr_locked
7364 ± 51% -83.3% 1228 ±110% perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.rmap_walk_anon.remove_migration_ptes.migrate_pages
532.84 ± 22% -42.2% 308.22 ± 23% perf-sched.wait_and_delay.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
73.20 ± 12% -48.1% 37.98 ± 19% perf-sched.wait_and_delay.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
131.17 ± 12% +54.1% 202.17 ± 14% perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
1020 ± 16% +30.8% 1335 ± 15% perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
12199 ± 15% +39.9% 17063 ± 19% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
1423 ± 12% +44.5% 2056 ± 17% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
6.00 ± 19% +88.9% 11.33 ± 15% perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
48952 ± 19% +143.5% 119182 ± 12% perf-sched.wait_and_delay.count.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
240.00 ± 17% +52.6% 366.17 ± 18% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
119.67 ± 28% +60.3% 191.83 ± 17% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
45862 ± 11% -63.5% 16731 ± 29% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
52359 ± 9% -52.3% 24957 ± 30% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
49522 ± 8% -61.6% 19021 ± 27% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
49837 ± 8% -70.5% 14699 ± 33% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
43808 ± 12% -66.0% 14910 ± 44% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
36717 ± 23% -82.8% 6317 ±122% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
31347 ± 40% -75.5% 7689 ±112% perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.rmap_walk_anon.remove_migration_ptes.migrate_pages
52076 ± 10% -52.5% 24745 ± 34% perf-sched.wait_and_delay.max.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
52504 ± 9% -57.3% 22437 ± 28% perf-sched.wait_and_delay.max.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.82 ±163% +2252.9% 42.73 ±129% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
403.64 ± 24% -66.7% 134.46 ± 28% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
55.12 ± 14% -42.3% 31.79 ± 14% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.02 ±108% +534.1% 0.13 ± 85% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.do_anonymous_page
122.29 ± 16% -45.7% 66.43 ± 16% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
425.82 ± 16% -76.7% 99.10 ± 28% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
465.09 ±223% -100.0% 0.04 ±117% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.change_prot_numa
9.43 ±125% +219.2% 30.11 ±104% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
230.14 ± 17% -64.5% 81.77 ± 28% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
70.21 ± 31% -56.3% 30.65 ± 43% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
1362 ±123% -88.1% 161.89 ± 95% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
0.17 ±118% +1194.7% 2.26 ± 80% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
154.92 ± 33% -64.9% 54.45 ± 45% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr_locked
5277 ± 37% -81.9% 953.75 ±103% perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.rmap_walk_anon.remove_migration_ptes.migrate_pages
390.06 ± 23% -43.2% 221.40 ± 23% perf-sched.wait_time.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
51.14 ± 12% -46.7% 27.27 ± 19% perf-sched.wait_time.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
42.41 ±143% +7008.4% 3014 ±126% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
17958 ± 24% -60.2% 7144 ± 33% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
24588 ± 7% -65.6% 8461 ± 28% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
26488 ± 8% -51.3% 12906 ± 31% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
20113 ± 16% -45.9% 10885 ± 38% perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
25666 ± 9% -57.7% 10860 ± 33% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
19730 ± 11% -59.6% 7972 ± 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_recvmsg.sock_recvmsg
25151 ± 8% -64.3% 8977 ± 31% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
1860 ±223% -100.0% 0.06 ±105% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.change_prot_numa
4.30 ±194% -97.7% 0.10 ±116% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.elf_map
320.22 ±175% +1509.4% 5153 ± 71% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
24087 ± 12% -61.8% 9191 ± 42% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
17911 ± 16% -62.7% 6687 ± 29% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
20724 ± 17% -79.9% 4168 ± 82% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
1.20 ±151% +2217.5% 27.84 ± 75% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
19696 ± 23% -77.1% 4500 ± 99% perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.rmap_walk_anon.remove_migration_ptes.migrate_pages
26233 ± 10% -51.2% 12801 ± 36% perf-sched.wait_time.max.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
26305 ± 9% -56.7% 11393 ± 28% perf-sched.wait_time.max.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
24961 ± 11% -45.5% 13607 ± 28% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
25452 ± 10% -47.3% 13401 ± 30% perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
84420 ± 6% -11.5% 74716 ± 8% softirqs.CPU0.RCU
86481 ± 5% -11.8% 76240 ± 9% softirqs.CPU1.RCU
87185 ± 5% -13.1% 75796 ± 9% softirqs.CPU10.RCU
92504 ± 6% -14.8% 78845 ± 7% softirqs.CPU100.RCU
92105 ± 6% -15.2% 78097 ± 7% softirqs.CPU101.RCU
92008 ± 6% -14.9% 78278 ± 7% softirqs.CPU102.RCU
92262 ± 6% -15.3% 78190 ± 6% softirqs.CPU103.RCU
92609 ± 6% -14.8% 78921 ± 7% softirqs.CPU104.RCU
92568 ± 6% -15.0% 78674 ± 7% softirqs.CPU105.RCU
93113 ± 6% -15.2% 78921 ± 7% softirqs.CPU106.RCU
94050 ± 6% -15.6% 79354 ± 8% softirqs.CPU107.RCU
86336 ± 6% -12.2% 75835 ± 7% softirqs.CPU108.RCU
85713 ± 6% -12.9% 74646 ± 9% softirqs.CPU109.RCU
85964 ± 5% -12.0% 75619 ± 9% softirqs.CPU11.RCU
85393 ± 7% -12.6% 74646 ± 9% softirqs.CPU110.RCU
85002 ± 6% -12.3% 74549 ± 8% softirqs.CPU111.RCU
95689 ± 6% -14.9% 81465 ± 10% softirqs.CPU112.RCU
95426 ± 6% -13.5% 82531 ± 8% softirqs.CPU113.RCU
93843 ± 6% -13.2% 81424 ± 9% softirqs.CPU114.RCU
94782 ± 6% -13.7% 81765 ± 8% softirqs.CPU115.RCU
95329 ± 6% -13.2% 82770 ± 9% softirqs.CPU116.RCU
95098 ± 6% -13.3% 82488 ± 9% softirqs.CPU117.RCU
94061 ± 6% -13.4% 81501 ± 8% softirqs.CPU118.RCU
94978 ± 6% -13.7% 81931 ± 9% softirqs.CPU119.RCU
85245 ± 6% -10.5% 76276 ± 9% softirqs.CPU12.RCU
94735 ± 6% -13.3% 82148 ± 9% softirqs.CPU120.RCU
95239 ± 6% -13.3% 82558 ± 9% softirqs.CPU121.RCU
94890 ± 6% -13.5% 82113 ± 9% softirqs.CPU122.RCU
94265 ± 6% -13.5% 81563 ± 8% softirqs.CPU123.RCU
95082 ± 6% -14.6% 81160 ± 10% softirqs.CPU124.RCU
95351 ± 6% -13.6% 82390 ± 9% softirqs.CPU125.RCU
81884 ± 5% -11.7% 72288 ± 8% softirqs.CPU126.RCU
81461 ± 6% -12.1% 71593 ± 8% softirqs.CPU127.RCU
88358 ± 5% -13.9% 76053 ± 9% softirqs.CPU128.RCU
88086 ± 5% -13.4% 76318 ± 8% softirqs.CPU129.RCU
14400 +9.9% 15821 ± 3% softirqs.CPU13.SCHED
88026 ± 5% -13.4% 76206 ± 8% softirqs.CPU130.RCU
88351 ± 5% -13.7% 76205 ± 9% softirqs.CPU131.RCU
87490 ± 5% -13.3% 75876 ± 9% softirqs.CPU132.RCU
87529 ± 5% -13.6% 75668 ± 9% softirqs.CPU133.RCU
87798 ± 5% -12.5% 76813 ± 9% softirqs.CPU134.RCU
88136 ± 5% -13.5% 76276 ± 9% softirqs.CPU135.RCU
88294 ± 5% -13.6% 76251 ± 9% softirqs.CPU136.RCU
87208 ± 5% -13.5% 75465 ± 9% softirqs.CPU137.RCU
87695 ± 5% -13.3% 75994 ± 9% softirqs.CPU138.RCU
88320 ± 5% -13.6% 76309 ± 9% softirqs.CPU139.RCU
86555 ± 6% -12.3% 75887 ± 8% softirqs.CPU14.RCU
14298 +7.8% 15412 ± 4% softirqs.CPU14.SCHED
88184 ± 5% -13.7% 76139 ± 9% softirqs.CPU140.RCU
87603 ± 5% -13.1% 76136 ± 9% softirqs.CPU141.RCU
87625 ± 5% -13.3% 75972 ± 9% softirqs.CPU142.RCU
88256 ± 5% -13.3% 76557 ± 9% softirqs.CPU143.RCU
87050 ± 5% -12.2% 76430 ± 8% softirqs.CPU15.RCU
81130 ± 6% -12.1% 71332 ± 8% softirqs.CPU16.RCU
82146 ± 6% -12.5% 71850 ± 9% softirqs.CPU17.RCU
14082 +10.2% 15524 ± 3% softirqs.CPU17.SCHED
93814 ± 5% -15.8% 79008 ± 7% softirqs.CPU18.RCU
92946 ± 5% -15.5% 78562 ± 7% softirqs.CPU19.RCU
86506 ± 5% -9.5% 78315 ± 6% softirqs.CPU2.RCU
92407 ± 5% -15.2% 78322 ± 7% softirqs.CPU20.RCU
92212 ± 6% -15.7% 77774 ± 7% softirqs.CPU21.RCU
92309 ± 6% -15.1% 78328 ± 7% softirqs.CPU22.RCU
92929 ± 5% -15.4% 78577 ± 7% softirqs.CPU23.RCU
93052 ± 6% -15.7% 78470 ± 7% softirqs.CPU24.RCU
92080 ± 6% -15.3% 77996 ± 7% softirqs.CPU25.RCU
91828 ± 5% -15.3% 77740 ± 7% softirqs.CPU26.RCU
92903 ± 5% -15.6% 78432 ± 7% softirqs.CPU27.RCU
92628 ± 6% -15.3% 78450 ± 7% softirqs.CPU28.RCU
92249 ± 6% -16.2% 77285 ± 8% softirqs.CPU29.RCU
86877 ± 5% -12.0% 76462 ± 8% softirqs.CPU3.RCU
91350 ± 6% -14.7% 77958 ± 8% softirqs.CPU30.RCU
91443 ± 6% -16.0% 76829 ± 5% softirqs.CPU31.RCU
86014 ± 6% -14.2% 73818 ± 8% softirqs.CPU32.RCU
85380 ± 6% -13.7% 73689 ± 8% softirqs.CPU33.RCU
86155 ± 5% -14.3% 73854 ± 8% softirqs.CPU34.RCU
86462 ± 5% -14.1% 74234 ± 8% softirqs.CPU35.RCU
92387 ± 6% -13.1% 80287 ± 9% softirqs.CPU36.RCU
92117 ± 6% -12.9% 80218 ± 8% softirqs.CPU37.RCU
90761 ± 8% -11.9% 79957 ± 9% softirqs.CPU38.RCU
90705 ± 5% -11.5% 80235 ± 9% softirqs.CPU39.RCU
86170 ± 5% -11.2% 76515 ± 7% softirqs.CPU4.RCU
13627 ± 6% +14.0% 15539 ± 3% softirqs.CPU4.SCHED
92368 ± 6% -13.7% 79721 ± 9% softirqs.CPU40.RCU
92055 ± 6% -13.0% 80123 ± 8% softirqs.CPU41.RCU
91447 ± 6% -13.0% 79565 ± 9% softirqs.CPU42.RCU
91880 ± 6% -13.2% 79751 ± 8% softirqs.CPU43.RCU
92254 ± 6% -12.7% 80501 ± 8% softirqs.CPU44.RCU
91783 ± 6% -12.8% 80015 ± 9% softirqs.CPU45.RCU
91421 ± 6% -12.9% 79645 ± 9% softirqs.CPU46.RCU
91390 ± 6% -12.8% 79722 ± 8% softirqs.CPU47.RCU
88932 ± 6% -12.8% 77543 ± 8% softirqs.CPU48.RCU
88850 ± 6% -12.4% 77848 ± 9% softirqs.CPU49.RCU
86897 ± 4% -12.3% 76176 ± 8% softirqs.CPU5.RCU
88709 ± 6% -12.5% 77631 ± 8% softirqs.CPU50.RCU
88614 ± 6% -12.5% 77564 ± 8% softirqs.CPU51.RCU
88876 ± 6% -13.3% 77084 ± 9% softirqs.CPU52.RCU
88779 ± 6% -12.4% 77752 ± 8% softirqs.CPU53.RCU
89712 ± 5% -13.5% 77589 ± 8% softirqs.CPU54.RCU
88569 ± 5% -13.2% 76854 ± 8% softirqs.CPU55.RCU
88777 ± 5% -13.1% 77130 ± 8% softirqs.CPU56.RCU
89211 ± 5% -13.8% 76929 ± 8% softirqs.CPU57.RCU
88989 ± 5% -13.0% 77400 ± 8% softirqs.CPU58.RCU
88940 ± 5% -13.1% 77250 ± 8% softirqs.CPU59.RCU
86373 ± 5% -12.3% 75749 ± 8% softirqs.CPU6.RCU
88320 ± 5% -13.0% 76839 ± 9% softirqs.CPU60.RCU
88405 ± 5% -13.2% 76706 ± 8% softirqs.CPU61.RCU
88722 ± 5% -12.8% 77340 ± 8% softirqs.CPU62.RCU
88541 ± 5% -12.9% 77089 ± 9% softirqs.CPU63.RCU
88636 ± 5% -13.6% 76600 ± 8% softirqs.CPU64.RCU
88387 ± 5% -13.4% 76579 ± 8% softirqs.CPU65.RCU
88139 ± 5% -13.3% 76411 ± 8% softirqs.CPU66.RCU
88420 ± 5% -13.3% 76702 ± 8% softirqs.CPU67.RCU
88484 ± 5% -13.2% 76794 ± 8% softirqs.CPU68.RCU
88366 ± 5% -12.7% 77154 ± 8% softirqs.CPU69.RCU
86334 ± 5% -12.1% 75872 ± 7% softirqs.CPU7.RCU
88438 ± 5% -13.4% 76599 ± 8% softirqs.CPU70.RCU
88125 ± 5% -13.3% 76369 ± 8% softirqs.CPU71.RCU
87488 ± 6% -12.1% 76899 ± 8% softirqs.CPU72.RCU
13664 ± 6% +12.1% 15314 ± 3% softirqs.CPU73.SCHED
87669 ± 6% -11.6% 77534 ± 7% softirqs.CPU74.RCU
87675 ± 5% -11.5% 77573 ± 8% softirqs.CPU75.RCU
87948 ± 5% -12.0% 77402 ± 8% softirqs.CPU76.RCU
88247 ± 5% -12.2% 77474 ± 7% softirqs.CPU77.RCU
87612 ± 5% -11.7% 77372 ± 8% softirqs.CPU78.RCU
87923 ± 6% -12.4% 77023 ± 8% softirqs.CPU79.RCU
86391 ± 5% -11.4% 76543 ± 8% softirqs.CPU8.RCU
88597 ± 6% -12.0% 78003 ± 8% softirqs.CPU80.RCU
89101 ± 6% -12.5% 77968 ± 8% softirqs.CPU81.RCU
14183 ± 2% +8.3% 15363 ± 3% softirqs.CPU81.SCHED
88545 ± 6% -12.4% 77527 ± 7% softirqs.CPU82.RCU
87757 ± 7% -11.5% 77621 ± 8% softirqs.CPU83.RCU
88307 ± 6% -11.5% 78128 ± 8% softirqs.CPU84.RCU
14211 +9.8% 15605 ± 3% softirqs.CPU84.SCHED
89048 ± 6% -13.0% 77482 ± 8% softirqs.CPU85.RCU
89209 ± 5% -12.3% 78258 ± 7% softirqs.CPU86.RCU
88691 ± 5% -12.3% 77825 ± 8% softirqs.CPU87.RCU
88012 ± 5% -11.8% 77588 ± 8% softirqs.CPU88.RCU
88781 ± 5% -11.9% 78190 ± 8% softirqs.CPU89.RCU
86346 ± 6% -10.6% 77225 ± 8% softirqs.CPU9.RCU
90192 ± 5% -14.8% 76846 ± 7% softirqs.CPU90.RCU
89520 ± 6% -14.7% 76378 ± 7% softirqs.CPU91.RCU
89174 ± 5% -14.4% 76374 ± 7% softirqs.CPU92.RCU
88839 ± 6% -14.4% 76030 ± 7% softirqs.CPU93.RCU
89065 ± 5% -14.4% 76238 ± 7% softirqs.CPU94.RCU
89354 ± 6% -14.3% 76619 ± 7% softirqs.CPU95.RCU
92662 ± 6% -14.8% 78959 ± 7% softirqs.CPU96.RCU
91610 ± 6% -14.8% 78044 ± 7% softirqs.CPU97.RCU
92230 ± 6% -15.0% 78392 ± 7% softirqs.CPU98.RCU
92985 ± 6% -14.9% 79138 ± 7% softirqs.CPU99.RCU
12886523 ± 6% -13.3% 11171782 ± 8% softirqs.RCU
112751 ± 3% -10.3% 101130 ± 7% softirqs.TIMER
6866613 ± 7% -20.9% 5431401 ± 9% interrupts.CAL:Function_call_interrupts
1109865 -13.2% 963752 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
746403 ± 6% -26.6% 547987 ± 6% interrupts.CPU0.RES:Rescheduling_interrupts
1109844 -11.5% 982545 ± 4% interrupts.CPU1.LOC:Local_timer_interrupts
784948 ± 6% -26.8% 574218 ± 6% interrupts.CPU1.RES:Rescheduling_interrupts
1109908 -11.5% 982586 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
768936 ± 6% -25.0% 576330 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
1109673 -11.3% 983929 ± 3% interrupts.CPU100.LOC:Local_timer_interrupts
933365 ± 9% -33.5% 620758 ± 11% interrupts.CPU100.RES:Rescheduling_interrupts
1109648 -11.2% 985868 ± 3% interrupts.CPU101.LOC:Local_timer_interrupts
905179 ± 4% -32.6% 610073 ± 8% interrupts.CPU101.RES:Rescheduling_interrupts
55524 ± 32% -37.9% 34488 ± 22% interrupts.CPU102.CAL:Function_call_interrupts
1109644 -11.3% 983935 ± 3% interrupts.CPU102.LOC:Local_timer_interrupts
919161 ± 4% -33.6% 610117 ± 10% interrupts.CPU102.RES:Rescheduling_interrupts
1109645 -11.3% 983934 ± 3% interrupts.CPU103.LOC:Local_timer_interrupts
913475 ± 5% -33.8% 604948 ± 10% interrupts.CPU103.RES:Rescheduling_interrupts
1109644 -11.4% 983504 ± 3% interrupts.CPU104.LOC:Local_timer_interrupts
891411 ± 4% -30.6% 618904 ± 11% interrupts.CPU104.RES:Rescheduling_interrupts
1109685 -11.3% 983840 ± 3% interrupts.CPU105.LOC:Local_timer_interrupts
916161 ± 6% -33.4% 609904 ± 12% interrupts.CPU105.RES:Rescheduling_interrupts
1109651 -11.4% 983590 ± 3% interrupts.CPU106.LOC:Local_timer_interrupts
940033 ± 4% -36.6% 596222 ± 8% interrupts.CPU106.RES:Rescheduling_interrupts
45963 ± 35% -31.8% 31357 ± 20% interrupts.CPU107.CAL:Function_call_interrupts
1109713 -11.3% 983779 ± 3% interrupts.CPU107.LOC:Local_timer_interrupts
934019 ± 10% -35.0% 606683 ± 9% interrupts.CPU107.RES:Rescheduling_interrupts
1109681 -11.2% 985738 ± 3% interrupts.CPU108.LOC:Local_timer_interrupts
907098 ± 10% -26.3% 668850 ± 4% interrupts.CPU108.RES:Rescheduling_interrupts
1109685 -11.2% 985797 ± 3% interrupts.CPU109.LOC:Local_timer_interrupts
926079 ± 10% -27.0% 675677 ± 2% interrupts.CPU109.RES:Rescheduling_interrupts
1109842 -11.5% 982553 ± 3% interrupts.CPU11.LOC:Local_timer_interrupts
777408 ± 7% -24.9% 584102 ± 6% interrupts.CPU11.RES:Rescheduling_interrupts
1109698 -11.2% 985930 ± 3% interrupts.CPU110.LOC:Local_timer_interrupts
927794 ± 6% -26.1% 685774 ± 3% interrupts.CPU110.RES:Rescheduling_interrupts
1109662 -11.1% 986214 ± 3% interrupts.CPU111.LOC:Local_timer_interrupts
910204 ± 4% -26.3% 670493 ± 3% interrupts.CPU111.RES:Rescheduling_interrupts
1109686 -11.3% 984562 ± 3% interrupts.CPU112.LOC:Local_timer_interrupts
956972 ± 6% -29.5% 674228 ± 3% interrupts.CPU112.RES:Rescheduling_interrupts
1109660 -11.1% 986024 ± 3% interrupts.CPU113.LOC:Local_timer_interrupts
911937 ± 6% -25.3% 680865 ± 3% interrupts.CPU113.RES:Rescheduling_interrupts
1109640 -11.1% 986264 ± 3% interrupts.CPU114.LOC:Local_timer_interrupts
945548 ± 10% -27.3% 687639 ± 2% interrupts.CPU114.RES:Rescheduling_interrupts
1109707 -11.1% 986350 ± 3% interrupts.CPU115.LOC:Local_timer_interrupts
965953 ± 5% -28.4% 692092 ± 4% interrupts.CPU115.RES:Rescheduling_interrupts
1109635 -11.1% 985969 ± 3% interrupts.CPU116.LOC:Local_timer_interrupts
961063 ± 9% -30.4% 669281 ± 3% interrupts.CPU116.RES:Rescheduling_interrupts
1109648 -11.1% 986070 ± 3% interrupts.CPU117.LOC:Local_timer_interrupts
953449 ± 5% -29.3% 674189 ± 4% interrupts.CPU117.RES:Rescheduling_interrupts
1109688 -11.1% 986329 ± 3% interrupts.CPU118.LOC:Local_timer_interrupts
909710 ± 9% -22.2% 707837 ± 2% interrupts.CPU118.RES:Rescheduling_interrupts
1109682 -11.1% 986282 ± 3% interrupts.CPU119.LOC:Local_timer_interrupts
951262 ± 7% -27.6% 688722 ± 4% interrupts.CPU119.RES:Rescheduling_interrupts
1109883 -11.5% 982267 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
787531 ± 3% -27.2% 573213 ± 7% interrupts.CPU12.RES:Rescheduling_interrupts
1109698 -11.1% 986222 ± 3% interrupts.CPU120.LOC:Local_timer_interrupts
943666 ± 5% -28.5% 674336 ± 4% interrupts.CPU120.RES:Rescheduling_interrupts
1109696 -11.1% 986248 ± 3% interrupts.CPU121.LOC:Local_timer_interrupts
912017 ± 7% -24.5% 688310 interrupts.CPU121.RES:Rescheduling_interrupts
1109688 -11.1% 986359 ± 3% interrupts.CPU122.LOC:Local_timer_interrupts
962703 ± 8% -29.7% 676523 ± 2% interrupts.CPU122.RES:Rescheduling_interrupts
1109653 -11.1% 986405 ± 3% interrupts.CPU123.LOC:Local_timer_interrupts
949945 ± 6% -25.6% 706314 ± 4% interrupts.CPU123.RES:Rescheduling_interrupts
1109693 -11.1% 986142 ± 3% interrupts.CPU124.LOC:Local_timer_interrupts
926407 ± 8% -25.0% 695173 ± 3% interrupts.CPU124.RES:Rescheduling_interrupts
1109690 -11.1% 986479 ± 3% interrupts.CPU125.LOC:Local_timer_interrupts
897337 ± 8% -24.5% 677912 ± 4% interrupts.CPU125.RES:Rescheduling_interrupts
1109701 -11.2% 985053 ± 3% interrupts.CPU126.LOC:Local_timer_interrupts
870200 ± 5% -27.7% 629561 ± 12% interrupts.CPU126.RES:Rescheduling_interrupts
1109638 -11.2% 985218 ± 3% interrupts.CPU127.LOC:Local_timer_interrupts
903216 ± 5% -26.2% 667011 ± 11% interrupts.CPU127.RES:Rescheduling_interrupts
1109678 -11.2% 985554 ± 3% interrupts.CPU128.LOC:Local_timer_interrupts
898010 ± 3% -27.7% 649239 ± 10% interrupts.CPU128.RES:Rescheduling_interrupts
1109640 -11.2% 985393 ± 3% interrupts.CPU129.LOC:Local_timer_interrupts
914045 ± 5% -27.6% 661981 ± 13% interrupts.CPU129.RES:Rescheduling_interrupts
1109756 -11.5% 982011 ± 4% interrupts.CPU13.LOC:Local_timer_interrupts
770920 ± 8% -26.9% 563555 ± 7% interrupts.CPU13.RES:Rescheduling_interrupts
1109642 -11.2% 985483 ± 3% interrupts.CPU130.LOC:Local_timer_interrupts
902719 ± 3% -27.4% 655719 ± 12% interrupts.CPU130.RES:Rescheduling_interrupts
1109678 -11.2% 985411 ± 3% interrupts.CPU131.LOC:Local_timer_interrupts
927667 ± 2% -29.9% 650000 ± 12% interrupts.CPU131.RES:Rescheduling_interrupts
1109655 -11.2% 985585 ± 3% interrupts.CPU132.LOC:Local_timer_interrupts
895668 ± 4% -26.7% 656131 ± 9% interrupts.CPU132.RES:Rescheduling_interrupts
1109644 -11.2% 985607 ± 3% interrupts.CPU133.LOC:Local_timer_interrupts
932719 ± 8% -28.6% 665823 ± 11% interrupts.CPU133.RES:Rescheduling_interrupts
1109654 -11.2% 985756 ± 3% interrupts.CPU134.LOC:Local_timer_interrupts
919080 ± 4% -28.9% 653480 ± 10% interrupts.CPU134.RES:Rescheduling_interrupts
1109612 -11.2% 985559 ± 3% interrupts.CPU135.LOC:Local_timer_interrupts
919061 ± 6% -29.3% 649968 ± 10% interrupts.CPU135.RES:Rescheduling_interrupts
1109671 -11.2% 985352 ± 3% interrupts.CPU136.LOC:Local_timer_interrupts
942893 ± 4% -30.7% 653873 ± 10% interrupts.CPU136.RES:Rescheduling_interrupts
1109638 -11.2% 985586 ± 3% interrupts.CPU137.LOC:Local_timer_interrupts
909483 ± 4% -26.2% 671290 ± 12% interrupts.CPU137.RES:Rescheduling_interrupts
1109667 -11.2% 985336 ± 3% interrupts.CPU138.LOC:Local_timer_interrupts
905979 -24.9% 680152 ± 12% interrupts.CPU138.RES:Rescheduling_interrupts
1109666 -11.2% 985457 ± 3% interrupts.CPU139.LOC:Local_timer_interrupts
914012 ± 2% -30.0% 639753 ± 11% interrupts.CPU139.RES:Rescheduling_interrupts
1109769 -11.5% 982477 ± 4% interrupts.CPU14.LOC:Local_timer_interrupts
786901 ± 6% -26.1% 581705 ± 5% interrupts.CPU14.RES:Rescheduling_interrupts
1109661 -11.2% 985582 ± 3% interrupts.CPU140.LOC:Local_timer_interrupts
891457 ± 4% -25.1% 667637 ± 9% interrupts.CPU140.RES:Rescheduling_interrupts
1109567 -11.2% 985643 ± 3% interrupts.CPU141.LOC:Local_timer_interrupts
920671 ± 4% -29.1% 652448 ± 10% interrupts.CPU141.RES:Rescheduling_interrupts
1109583 -11.2% 985732 ± 3% interrupts.CPU142.LOC:Local_timer_interrupts
901948 ± 3% -25.7% 670496 ± 11% interrupts.CPU142.RES:Rescheduling_interrupts
1109693 -11.2% 985489 ± 3% interrupts.CPU143.LOC:Local_timer_interrupts
897848 ± 3% -29.8% 630227 ± 11% interrupts.CPU143.RES:Rescheduling_interrupts
1109764 -11.5% 982288 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
769032 ± 4% -24.6% 579800 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
1109749 -11.5% 981954 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
753284 ± 7% -24.5% 568931 ± 7% interrupts.CPU16.RES:Rescheduling_interrupts
1109740 -11.5% 982067 ± 4% interrupts.CPU17.LOC:Local_timer_interrupts
762912 ± 3% -24.3% 577864 ± 6% interrupts.CPU17.RES:Rescheduling_interrupts
1109715 -11.3% 983766 ± 3% interrupts.CPU18.LOC:Local_timer_interrupts
862584 ± 9% -31.5% 591294 ± 8% interrupts.CPU18.RES:Rescheduling_interrupts
1109709 -11.4% 983571 ± 3% interrupts.CPU19.LOC:Local_timer_interrupts
914512 ± 5% -33.8% 605096 ± 9% interrupts.CPU19.RES:Rescheduling_interrupts
1109849 -11.5% 982602 ± 4% interrupts.CPU2.LOC:Local_timer_interrupts
763288 ± 2% -24.2% 578849 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
1109697 -11.3% 983869 ± 3% interrupts.CPU20.LOC:Local_timer_interrupts
935470 ± 7% -33.8% 619417 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
1109689 -11.3% 983811 ± 3% interrupts.CPU21.LOC:Local_timer_interrupts
953999 ± 5% -35.9% 611854 ± 9% interrupts.CPU21.RES:Rescheduling_interrupts
1109713 -11.3% 983859 ± 3% interrupts.CPU22.LOC:Local_timer_interrupts
949063 ± 4% -34.9% 617867 ± 10% interrupts.CPU22.RES:Rescheduling_interrupts
1109677 -11.3% 983782 ± 3% interrupts.CPU23.LOC:Local_timer_interrupts
916942 ± 2% -31.5% 627663 ± 9% interrupts.CPU23.RES:Rescheduling_interrupts
1109676 -11.3% 983778 ± 3% interrupts.CPU24.LOC:Local_timer_interrupts
946944 ± 7% -34.4% 621109 ± 9% interrupts.CPU24.RES:Rescheduling_interrupts
1109671 -11.3% 983788 ± 3% interrupts.CPU25.LOC:Local_timer_interrupts
927026 ± 3% -32.5% 625454 ± 10% interrupts.CPU25.RES:Rescheduling_interrupts
1109686 -11.3% 983799 ± 3% interrupts.CPU26.LOC:Local_timer_interrupts
925930 ± 4% -31.8% 631366 ± 9% interrupts.CPU26.RES:Rescheduling_interrupts
1109675 -11.3% 983850 ± 3% interrupts.CPU27.LOC:Local_timer_interrupts
930402 ± 7% -32.6% 627189 ± 8% interrupts.CPU27.RES:Rescheduling_interrupts
1109729 -11.4% 983703 ± 3% interrupts.CPU28.LOC:Local_timer_interrupts
928439 ± 4% -32.0% 630948 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
1109731 -11.9% 977654 ± 4% interrupts.CPU29.LOC:Local_timer_interrupts
917894 ± 3% -31.6% 627431 ± 7% interrupts.CPU29.RES:Rescheduling_interrupts
1109813 -11.5% 982425 ± 4% interrupts.CPU3.LOC:Local_timer_interrupts
763728 ± 4% -25.8% 566752 ± 7% interrupts.CPU3.RES:Rescheduling_interrupts
1109702 -11.4% 983710 ± 3% interrupts.CPU30.LOC:Local_timer_interrupts
959814 ± 6% -34.4% 630064 ± 11% interrupts.CPU30.RES:Rescheduling_interrupts
1109691 -11.3% 983822 ± 3% interrupts.CPU31.LOC:Local_timer_interrupts
953721 ± 7% -34.2% 627837 ± 12% interrupts.CPU31.RES:Rescheduling_interrupts
1109716 -11.4% 983559 ± 3% interrupts.CPU32.LOC:Local_timer_interrupts
929251 ± 5% -32.4% 628141 ± 10% interrupts.CPU32.RES:Rescheduling_interrupts
1109720 -11.4% 983527 ± 3% interrupts.CPU33.LOC:Local_timer_interrupts
955082 ± 8% -34.0% 629974 ± 9% interrupts.CPU33.RES:Rescheduling_interrupts
1109706 -11.4% 983432 ± 3% interrupts.CPU34.LOC:Local_timer_interrupts
933432 ± 6% -33.6% 619604 ± 9% interrupts.CPU34.RES:Rescheduling_interrupts
1109724 -11.4% 983426 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
904660 ± 3% -31.4% 620967 ± 9% interrupts.CPU35.RES:Rescheduling_interrupts
1109716 -11.1% 986109 ± 3% interrupts.CPU36.LOC:Local_timer_interrupts
908916 ± 7% -25.5% 677428 ± 2% interrupts.CPU36.RES:Rescheduling_interrupts
1109635 -11.1% 986076 ± 3% interrupts.CPU37.LOC:Local_timer_interrupts
926313 ± 8% -24.1% 702863 ± 3% interrupts.CPU37.RES:Rescheduling_interrupts
1109685 -11.1% 986009 ± 3% interrupts.CPU38.LOC:Local_timer_interrupts
951056 ± 6% -27.3% 691529 ± 4% interrupts.CPU38.RES:Rescheduling_interrupts
1109743 -11.1% 986389 ± 3% interrupts.CPU39.LOC:Local_timer_interrupts
927813 ± 7% -24.0% 705228 ± 4% interrupts.CPU39.RES:Rescheduling_interrupts
1109737 -11.5% 982501 ± 3% interrupts.CPU4.LOC:Local_timer_interrupts
766534 ± 8% -24.7% 577507 ± 6% interrupts.CPU4.RES:Rescheduling_interrupts
1109783 -11.1% 986790 ± 3% interrupts.CPU40.LOC:Local_timer_interrupts
929300 ± 3% -24.5% 702028 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
1109714 -11.2% 985893 ± 3% interrupts.CPU41.LOC:Local_timer_interrupts
952762 ± 9% -26.6% 698904 ± 3% interrupts.CPU41.RES:Rescheduling_interrupts
1109708 -11.1% 986368 ± 3% interrupts.CPU42.LOC:Local_timer_interrupts
961704 ± 6% -26.4% 707713 ± 3% interrupts.CPU42.RES:Rescheduling_interrupts
1109757 -11.1% 986228 ± 3% interrupts.CPU43.LOC:Local_timer_interrupts
952246 ± 6% -26.2% 702405 ± 2% interrupts.CPU43.RES:Rescheduling_interrupts
1109751 -11.1% 986140 ± 3% interrupts.CPU44.LOC:Local_timer_interrupts
980627 ± 2% -29.8% 688585 ± 4% interrupts.CPU44.RES:Rescheduling_interrupts
1109689 -11.1% 985981 ± 3% interrupts.CPU45.LOC:Local_timer_interrupts
976440 ± 3% -29.2% 691802 ± 3% interrupts.CPU45.RES:Rescheduling_interrupts
1109645 -11.1% 986013 ± 3% interrupts.CPU46.LOC:Local_timer_interrupts
952352 ± 2% -26.7% 697811 ± 4% interrupts.CPU46.RES:Rescheduling_interrupts
1109682 -11.1% 986172 ± 3% interrupts.CPU47.LOC:Local_timer_interrupts
957266 ± 4% -28.5% 684262 interrupts.CPU47.RES:Rescheduling_interrupts
49803 ± 13% -18.0% 40844 ± 12% interrupts.CPU48.CAL:Function_call_interrupts
1109702 -11.1% 986471 ± 3% interrupts.CPU48.LOC:Local_timer_interrupts
977427 ± 6% -27.8% 705625 ± 3% interrupts.CPU48.RES:Rescheduling_interrupts
1109658 -11.1% 986172 ± 3% interrupts.CPU49.LOC:Local_timer_interrupts
937091 ± 7% -26.2% 691409 ± 3% interrupts.CPU49.RES:Rescheduling_interrupts
1109735 -11.5% 982317 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
741049 ± 6% -23.6% 566103 ± 6% interrupts.CPU5.RES:Rescheduling_interrupts
1109692 -11.1% 986149 ± 3% interrupts.CPU50.LOC:Local_timer_interrupts
942242 ± 7% -26.2% 695845 ± 3% interrupts.CPU50.RES:Rescheduling_interrupts
1109688 -11.1% 986153 ± 3% interrupts.CPU51.LOC:Local_timer_interrupts
978237 ± 5% -27.8% 706561 ± 2% interrupts.CPU51.RES:Rescheduling_interrupts
1109680 -11.1% 986101 ± 3% interrupts.CPU52.LOC:Local_timer_interrupts
942523 ± 6% -25.9% 698844 ± 2% interrupts.CPU52.RES:Rescheduling_interrupts
1109709 -11.1% 986142 ± 3% interrupts.CPU53.LOC:Local_timer_interrupts
968770 ± 8% -28.5% 692330 ± 4% interrupts.CPU53.RES:Rescheduling_interrupts
1109706 -11.2% 985372 ± 3% interrupts.CPU54.LOC:Local_timer_interrupts
892744 ± 3% -27.4% 648201 ± 10% interrupts.CPU54.RES:Rescheduling_interrupts
1109711 -11.2% 985410 ± 3% interrupts.CPU55.LOC:Local_timer_interrupts
905314 ± 4% -26.4% 666564 ± 10% interrupts.CPU55.RES:Rescheduling_interrupts
1109760 -11.2% 985583 ± 3% interrupts.CPU56.LOC:Local_timer_interrupts
923287 ± 5% -27.0% 673797 ± 10% interrupts.CPU56.RES:Rescheduling_interrupts
1109750 -11.2% 985363 ± 3% interrupts.CPU57.LOC:Local_timer_interrupts
941749 ± 6% -27.9% 678794 ± 9% interrupts.CPU57.RES:Rescheduling_interrupts
1109711 -11.2% 985560 ± 3% interrupts.CPU58.LOC:Local_timer_interrupts
911298 ± 3% -27.4% 661172 ± 10% interrupts.CPU58.RES:Rescheduling_interrupts
1109694 -11.2% 985480 ± 3% interrupts.CPU59.LOC:Local_timer_interrupts
934580 ± 3% -29.9% 654916 ± 9% interrupts.CPU59.RES:Rescheduling_interrupts
1109735 -11.5% 982399 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
740135 ± 6% -24.3% 560193 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
1109690 -11.2% 985677 ± 3% interrupts.CPU60.LOC:Local_timer_interrupts
927684 ± 3% -28.1% 667287 ± 11% interrupts.CPU60.RES:Rescheduling_interrupts
1109694 -11.2% 985524 ± 3% interrupts.CPU61.LOC:Local_timer_interrupts
958054 ± 4% -30.3% 667940 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
1109679 -11.2% 985364 ± 3% interrupts.CPU62.LOC:Local_timer_interrupts
946262 ± 4% -30.0% 662460 ± 11% interrupts.CPU62.RES:Rescheduling_interrupts
1109678 -11.2% 985301 ± 3% interrupts.CPU63.LOC:Local_timer_interrupts
931012 ± 4% -26.6% 683774 ± 11% interrupts.CPU63.RES:Rescheduling_interrupts
1109691 -11.2% 985194 ± 3% interrupts.CPU64.LOC:Local_timer_interrupts
929091 ± 6% -29.7% 653309 ± 11% interrupts.CPU64.RES:Rescheduling_interrupts
1109744 -11.2% 985320 ± 3% interrupts.CPU65.LOC:Local_timer_interrupts
933881 ± 3% -27.6% 676297 ± 9% interrupts.CPU65.RES:Rescheduling_interrupts
1109710 -11.2% 985673 ± 3% interrupts.CPU66.LOC:Local_timer_interrupts
956361 ± 5% -29.2% 676634 ± 9% interrupts.CPU66.RES:Rescheduling_interrupts
1109687 -11.2% 985315 ± 3% interrupts.CPU67.LOC:Local_timer_interrupts
909257 ± 3% -26.6% 667439 ± 12% interrupts.CPU67.RES:Rescheduling_interrupts
1109693 -11.2% 985287 ± 3% interrupts.CPU68.LOC:Local_timer_interrupts
925288 ± 2% -28.0% 665809 ± 9% interrupts.CPU68.RES:Rescheduling_interrupts
1109675 -11.2% 985486 ± 3% interrupts.CPU69.LOC:Local_timer_interrupts
958528 ± 4% -29.0% 680393 ± 9% interrupts.CPU69.RES:Rescheduling_interrupts
31192 ± 5% -17.8% 25645 ± 15% interrupts.CPU7.CAL:Function_call_interrupts
1109763 -11.4% 982851 ± 3% interrupts.CPU7.LOC:Local_timer_interrupts
766751 ± 7% -25.8% 569219 ± 4% interrupts.CPU7.RES:Rescheduling_interrupts
1109572 -11.2% 985353 ± 3% interrupts.CPU70.LOC:Local_timer_interrupts
934309 ± 2% -28.6% 666929 ± 11% interrupts.CPU70.RES:Rescheduling_interrupts
1109770 -11.2% 985376 ± 3% interrupts.CPU71.LOC:Local_timer_interrupts
923580 ± 5% -26.7% 676802 ± 9% interrupts.CPU71.RES:Rescheduling_interrupts
48586 ± 18% -31.1% 33495 ± 20% interrupts.CPU72.CAL:Function_call_interrupts
1109690 -11.4% 983437 ± 3% interrupts.CPU72.LOC:Local_timer_interrupts
732823 ± 6% -25.1% 548780 ± 6% interrupts.CPU72.RES:Rescheduling_interrupts
1109780 -11.5% 982255 ± 4% interrupts.CPU73.LOC:Local_timer_interrupts
748198 ± 6% -25.7% 555808 ± 2% interrupts.CPU73.RES:Rescheduling_interrupts
1109731 -11.5% 982361 ± 4% interrupts.CPU74.LOC:Local_timer_interrupts
732000 ± 5% -24.4% 553248 ± 8% interrupts.CPU74.RES:Rescheduling_interrupts
1109689 -11.5% 982406 ± 4% interrupts.CPU75.LOC:Local_timer_interrupts
732437 ± 4% -23.9% 557346 ± 5% interrupts.CPU75.RES:Rescheduling_interrupts
1109783 -11.5% 982253 ± 4% interrupts.CPU76.LOC:Local_timer_interrupts
727115 -22.8% 561035 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
1109749 -11.5% 981909 ± 4% interrupts.CPU77.LOC:Local_timer_interrupts
722320 ± 5% -23.9% 549545 ± 7% interrupts.CPU77.RES:Rescheduling_interrupts
1109776 -11.5% 982201 ± 4% interrupts.CPU78.LOC:Local_timer_interrupts
774003 ± 8% -27.3% 562990 ± 5% interrupts.CPU78.RES:Rescheduling_interrupts
1109771 -11.5% 982534 ± 3% interrupts.CPU79.LOC:Local_timer_interrupts
725034 ± 4% -21.5% 569192 ± 5% interrupts.CPU79.RES:Rescheduling_interrupts
1109745 -11.5% 982390 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
766391 ± 6% -26.4% 564224 ± 5% interrupts.CPU8.RES:Rescheduling_interrupts
1109765 -11.5% 982241 ± 4% interrupts.CPU80.LOC:Local_timer_interrupts
747959 ± 5% -24.2% 566670 ± 8% interrupts.CPU80.RES:Rescheduling_interrupts
1109814 -11.5% 981873 ± 4% interrupts.CPU81.LOC:Local_timer_interrupts
747605 ± 7% -24.3% 565728 ± 7% interrupts.CPU81.RES:Rescheduling_interrupts
1109714 -11.5% 982405 ± 4% interrupts.CPU82.LOC:Local_timer_interrupts
740632 ± 4% -25.4% 552433 ± 7% interrupts.CPU82.RES:Rescheduling_interrupts
1109762 -11.5% 982189 ± 4% interrupts.CPU83.LOC:Local_timer_interrupts
752433 ± 6% -23.9% 572430 ± 6% interrupts.CPU83.RES:Rescheduling_interrupts
1109801 -11.5% 982438 ± 4% interrupts.CPU84.LOC:Local_timer_interrupts
750543 ± 6% -25.7% 557447 ± 7% interrupts.CPU84.RES:Rescheduling_interrupts
1109757 -11.5% 982181 ± 4% interrupts.CPU85.LOC:Local_timer_interrupts
710008 ± 5% -20.3% 565565 ± 7% interrupts.CPU85.RES:Rescheduling_interrupts
1109819 -11.5% 982651 ± 3% interrupts.CPU86.LOC:Local_timer_interrupts
756402 ± 4% -26.1% 559174 ± 6% interrupts.CPU86.RES:Rescheduling_interrupts
1109763 -11.5% 982634 ± 3% interrupts.CPU87.LOC:Local_timer_interrupts
763567 ± 5% -27.5% 553678 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
1109795 -11.4% 982983 ± 3% interrupts.CPU88.LOC:Local_timer_interrupts
736222 ± 4% -24.9% 552717 ± 6% interrupts.CPU88.RES:Rescheduling_interrupts
1109788 -11.4% 982800 ± 3% interrupts.CPU89.LOC:Local_timer_interrupts
729712 ± 5% -24.6% 550308 ± 6% interrupts.CPU89.RES:Rescheduling_interrupts
1109951 -11.5% 982025 ± 4% interrupts.CPU9.LOC:Local_timer_interrupts
749632 ± 7% -25.7% 556858 ± 7% interrupts.CPU9.RES:Rescheduling_interrupts
1109721 -11.4% 983595 ± 3% interrupts.CPU90.LOC:Local_timer_interrupts
889032 ± 8% -34.4% 582776 ± 9% interrupts.CPU90.RES:Rescheduling_interrupts
1109668 -11.4% 983263 ± 3% interrupts.CPU91.LOC:Local_timer_interrupts
914855 ± 5% -35.1% 593653 ± 8% interrupts.CPU91.RES:Rescheduling_interrupts
1109647 -11.3% 983819 ± 3% interrupts.CPU92.LOC:Local_timer_interrupts
948214 ± 8% -35.0% 616661 ± 10% interrupts.CPU92.RES:Rescheduling_interrupts
1109694 -11.4% 983708 ± 3% interrupts.CPU93.LOC:Local_timer_interrupts
892302 ± 7% -32.9% 598886 ± 9% interrupts.CPU93.RES:Rescheduling_interrupts
1109699 -11.4% 983475 ± 3% interrupts.CPU94.LOC:Local_timer_interrupts
913830 ± 4% -33.8% 605237 ± 10% interrupts.CPU94.RES:Rescheduling_interrupts
1109687 -11.4% 983654 ± 3% interrupts.CPU95.LOC:Local_timer_interrupts
942816 ± 8% -35.3% 610115 ± 9% interrupts.CPU95.RES:Rescheduling_interrupts
1109662 -11.3% 983930 ± 3% interrupts.CPU96.LOC:Local_timer_interrupts
921056 ± 7% -33.8% 609373 ± 10% interrupts.CPU96.RES:Rescheduling_interrupts
1109663 -11.3% 983971 ± 3% interrupts.CPU97.LOC:Local_timer_interrupts
939132 ± 5% -35.0% 610299 ± 10% interrupts.CPU97.RES:Rescheduling_interrupts
1109659 -11.3% 983854 ± 3% interrupts.CPU98.LOC:Local_timer_interrupts
905810 ± 6% -32.7% 609848 ± 9% interrupts.CPU98.RES:Rescheduling_interrupts
1109697 -11.3% 983762 ± 3% interrupts.CPU99.LOC:Local_timer_interrupts
936646 ± 7% -34.8% 610801 ± 10% interrupts.CPU99.RES:Rescheduling_interrupts
1.598e+08 -11.3% 1.417e+08 ± 3% interrupts.LOC:Local_timer_interrupts
1.274e+08 ± 2% -28.5% 91108791 interrupts.RES:Rescheduling_interrupts
21.07 ± 13% -14.9 6.16 ± 35% perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event
20.01 ± 13% -14.3 5.71 ± 38% perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow
19.91 ± 13% -14.2 5.68 ± 38% perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
11.18 ± 11% -11.2 0.00 perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve
17.91 ± 12% -11.0 6.86 ± 22% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
17.29 ± 12% -10.9 6.42 ± 23% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_sendmsg
17.15 ± 12% -10.8 6.31 ± 23% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg
17.06 ± 12% -10.8 6.26 ± 23% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
18.74 ± 11% -10.4 8.30 ± 17% perf-profile.calltrace.cycles-pp.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
12.96 ± 13% -9.5 3.45 ± 49% perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
15.35 ± 11% -9.5 5.88 ± 22% perf-profile.calltrace.cycles-pp.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
15.28 ± 11% -9.4 5.84 ± 22% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
15.16 ± 11% -9.4 5.77 ± 23% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg
10.21 ± 12% -7.0 3.21 ± 32% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
10.16 ± 12% -7.0 3.19 ± 32% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime
5.82 ± 26% -5.8 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.__get_user_nocheck_8.perf_callchain_user.get_perf_callchain.perf_callchain
5.63 ± 11% -5.6 0.00 perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb
6.42 ± 13% -5.6 0.81 ±114% perf-profile.calltrace.cycles-pp.perf_callchain_user.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
6.28 ± 14% -4.8 1.48 ± 46% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule
6.27 ± 14% -4.8 1.47 ± 46% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule
5.06 ± 17% -4.8 0.28 ±223% perf-profile.calltrace.cycles-pp.__unwind_start.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
6.24 ± 14% -4.8 1.46 ± 46% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch
6.92 ± 13% -4.5 2.40 ± 24% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
4.80 ± 13% -4.5 0.31 ±223% perf-profile.calltrace.cycles-pp.unwind_next_frame.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
6.87 ± 13% -4.5 2.38 ± 24% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
52.74 -4.5 48.28 perf-profile.calltrace.cycles-pp.__libc_write
52.06 -4.3 47.73 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
6.76 ± 11% -4.3 2.44 ± 26% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.unix_stream_read_generic
51.93 -4.3 47.62 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
6.17 ± 13% -4.1 2.04 ± 27% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
6.08 ± 11% -4.0 2.11 ± 28% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.schedule_timeout
5.56 ± 13% -3.8 1.73 ± 30% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
5.78 ± 11% -3.8 1.95 ± 30% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
5.51 ± 11% -3.7 1.83 ± 31% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair.__schedule
5.25 ± 13% -3.7 1.59 ± 32% perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.enqueue_entity.enqueue_task_fair.ttwu_do_activate
5.45 ± 12% -3.6 1.83 ± 30% perf-profile.calltrace.cycles-pp.perf_trace_sched_wakeup_template.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
5.36 ± 11% -3.6 1.77 ± 32% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair
5.08 ± 13% -3.6 1.52 ± 33% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.enqueue_entity.enqueue_task_fair
5.31 ± 12% -3.6 1.76 ± 31% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_wakeup_template.try_to_wake_up.autoremove_wake_function.__wake_up_common
5.25 ± 11% -3.5 1.73 ± 31% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.dequeue_entity
4.98 ± 13% -3.5 1.49 ± 33% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.enqueue_entity
5.10 ± 12% -3.4 1.68 ± 31% perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template.try_to_wake_up.autoremove_wake_function
5.09 ± 12% -3.4 1.67 ± 31% perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template.try_to_wake_up
5.07 ± 12% -3.4 1.66 ± 32% perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_wakeup_template
4.63 ± 13% -3.2 1.45 ± 33% perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule.schedule_timeout.unix_stream_read_generic
4.45 ± 13% -3.1 1.38 ± 33% perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule.schedule_timeout
4.07 ± 12% -2.9 1.15 ± 27% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
4.01 ± 12% -2.9 1.11 ± 28% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
3.78 ± 12% -2.8 0.99 ± 30% perf-profile.calltrace.cycles-pp.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.74 ± 12% -2.8 0.98 ± 30% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
2.40 ± 12% -1.5 0.90 ± 18% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
2.09 ± 13% -1.4 0.66 ± 46% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
47.70 -1.3 46.38 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
47.45 -1.3 46.14 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
46.94 -1.3 45.63 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.52 -1.2 45.29 perf-profile.calltrace.cycles-pp.sock_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
46.20 -1.1 45.08 perf-profile.calltrace.cycles-pp.sock_sendmsg.sock_write_iter.new_sync_write.vfs_write.ksys_write
45.87 -1.1 44.76 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write.vfs_write
1.33 ± 3% -0.6 0.78 ± 6% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.schedule_timeout.unix_stream_read_generic
0.62 ± 2% +0.2 0.78 perf-profile.calltrace.cycles-pp.__check_object_size.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.65 ± 2% +0.2 0.81 perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
0.75 ± 2% +0.2 0.92 ± 4% perf-profile.calltrace.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.62 ± 3% +0.3 0.90 ± 3% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
0.65 ± 2% +0.3 0.94 ± 3% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.81 ± 2% +0.3 1.12 ± 3% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
0.58 ± 7% +0.4 0.96 ± 4% perf-profile.calltrace.cycles-pp.__slab_free.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
0.18 ±141% +0.4 0.57 ± 6% perf-profile.calltrace.cycles-pp.mod_objcg_state.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
1.06 ± 6% +0.4 1.47 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.73 ± 7% +0.4 1.18 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.75 ± 7% +0.4 1.20 ± 7% perf-profile.calltrace.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.60 ± 12% +0.5 1.06 ± 8% perf-profile.calltrace.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.36 ± 71% +0.5 0.83 ± 9% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_objcg.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve
1.52 ± 2% +0.5 1.99 ± 2% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.50 ± 2% +0.5 1.97 ± 2% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
0.37 ± 71% +0.5 0.84 ± 8% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_objcg.obj_cgroup_uncharge_pages.kfree.skb_release_data.consume_skb
1.53 +0.5 2.01 ± 2% perf-profile.calltrace.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.09 ±223% +0.5 0.58 ± 6% perf-profile.calltrace.cycles-pp.mod_objcg_state.kfree.skb_release_data.consume_skb.unix_stream_read_generic
0.65 ± 5% +0.5 1.15 ± 6% perf-profile.calltrace.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.08 ±223% +0.5 0.59 ± 5% perf-profile.calltrace.cycles-pp.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.41 ± 71% +0.6 1.00 ± 7% perf-profile.calltrace.cycles-pp.mutex_unlock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.41 ± 71% +0.6 1.02 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.00 +0.7 0.67 ± 4% perf-profile.calltrace.cycles-pp.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all
1.12 ± 7% +0.7 1.81 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.88 ± 10% +0.7 1.57 ± 5% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.96 ± 13% +0.7 1.69 ± 7% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic
1.16 ± 9% +0.8 1.99 ± 4% perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
1.20 ± 8% +0.8 2.03 ± 4% perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic
1.22 ± 8% +0.8 2.06 ± 4% perf-profile.calltrace.cycles-pp.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
1.23 ± 8% +0.8 2.07 ± 4% perf-profile.calltrace.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.97 ± 12% +1.4 3.41 ± 7% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.skb_release_data
0.00 +1.7 1.72 ± 7% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node
3.75 ± 12% +1.8 5.55 ± 6% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic
6.27 ± 10% +2.2 8.45 ± 5% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
6.32 ± 10% +2.2 8.50 ± 5% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
7.85 ± 8% +2.4 10.30 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
4.76 ± 12% +2.6 7.32 ± 5% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg
5.38 ± 11% +3.0 8.34 ± 5% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
7.80 ± 11% +3.3 11.14 ± 5% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.skb_release_data
0.00 +3.4 3.40 ± 6% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller
46.87 +3.5 50.41 perf-profile.calltrace.cycles-pp.__libc_read
45.87 +3.7 49.56 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
45.74 +3.7 49.46 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
44.83 +3.9 48.69 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
45.02 +3.9 48.89 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
43.59 +3.9 47.47 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read
43.68 +3.9 47.56 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
44.11 +3.9 48.00 perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
44.32 +3.9 48.22 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.54 ± 9% +4.1 11.64 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
12.45 ± 10% +4.3 16.71 ± 5% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
12.50 ± 10% +4.3 16.77 ± 5% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
14.17 ± 9% +4.5 18.70 ± 4% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
14.23 ± 9% +4.5 18.75 ± 4% perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
9.88 ± 11% +4.8 14.72 ± 5% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.skb_release_data.consume_skb
11.16 ± 11% +5.6 16.78 ± 4% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kfree.skb_release_data.consume_skb.unix_stream_read_generic
12.06 ± 9% +5.9 17.92 ± 4% perf-profile.calltrace.cycles-pp.kfree.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
13.11 ± 9% +6.5 19.61 ± 4% perf-profile.calltrace.cycles-pp.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
22.76 ± 8% +7.1 29.89 ± 4% perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
22.70 ± 8% +7.1 29.82 ± 4% perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
0.00 +7.4 7.38 ± 6% perf-profile.calltrace.cycles-pp.page_counter_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb
14.56 ± 9% +7.5 22.04 ± 4% perf-profile.calltrace.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
23.80 ± 8% +7.9 31.75 ± 4% perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.00 +14.6 14.60 ± 5% perf-profile.calltrace.cycles-pp.page_counter_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve
17.18 ± 11% -16.6 0.59 ± 7% perf-profile.children.cycles-pp.page_counter_try_charge
23.30 ± 12% -15.5 7.83 ± 31% perf-profile.children.cycles-pp.perf_tp_event
22.67 ± 12% -15.1 7.58 ± 31% perf-profile.children.cycles-pp.perf_swevent_overflow
22.63 ± 12% -15.1 7.56 ± 31% perf-profile.children.cycles-pp.__perf_event_overflow
22.53 ± 12% -15.0 7.52 ± 31% perf-profile.children.cycles-pp.perf_event_output_forward
22.12 ± 12% -14.8 7.34 ± 31% perf-profile.children.cycles-pp.perf_prepare_sample
21.00 ± 12% -14.1 6.88 ± 32% perf-profile.children.cycles-pp.perf_callchain
20.89 ± 12% -14.0 6.85 ± 32% perf-profile.children.cycles-pp.get_perf_callchain
19.54 ± 10% -12.0 7.51 ± 22% perf-profile.children.cycles-pp.schedule
19.55 ± 10% -11.9 7.61 ± 21% perf-profile.children.cycles-pp.__schedule
18.11 ± 12% -10.8 7.26 ± 21% perf-profile.children.cycles-pp.__wake_up_common_lock
17.47 ± 12% -10.7 6.81 ± 21% perf-profile.children.cycles-pp.__wake_up_common
17.34 ± 12% -10.6 6.70 ± 22% perf-profile.children.cycles-pp.autoremove_wake_function
17.24 ± 12% -10.6 6.64 ± 22% perf-profile.children.cycles-pp.try_to_wake_up
18.74 ± 11% -10.4 8.30 ± 17% perf-profile.children.cycles-pp.sock_def_readable
13.76 ± 13% -9.3 4.50 ± 32% perf-profile.children.cycles-pp.perf_callchain_kernel
15.56 ± 10% -9.2 6.32 ± 21% perf-profile.children.cycles-pp.schedule_timeout
12.23 ± 12% -7.8 4.42 ± 28% perf-profile.children.cycles-pp.update_curr
11.26 ± 12% -7.3 3.96 ± 30% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
10.42 ± 13% -7.0 3.38 ± 32% perf-profile.children.cycles-pp.unwind_next_frame
6.96 ± 13% -4.8 2.12 ± 34% perf-profile.children.cycles-pp.perf_trace_sched_switch
6.72 ± 13% -4.5 2.22 ± 31% perf-profile.children.cycles-pp.perf_callchain_user
7.06 ± 13% -4.5 2.60 ± 24% perf-profile.children.cycles-pp.ttwu_do_activate
52.87 -4.4 48.42 perf-profile.children.cycles-pp.__libc_write
7.00 ± 13% -4.4 2.58 ± 24% perf-profile.children.cycles-pp.enqueue_task_fair
6.37 ± 12% -4.3 2.11 ± 31% perf-profile.children.cycles-pp.__get_user_nocheck_8
6.86 ± 11% -4.2 2.63 ± 24% perf-profile.children.cycles-pp.dequeue_task_fair
6.30 ± 13% -4.1 2.21 ± 26% perf-profile.children.cycles-pp.enqueue_entity
6.17 ± 11% -3.9 2.28 ± 27% perf-profile.children.cycles-pp.dequeue_entity
5.50 ± 12% -3.7 1.76 ± 32% perf-profile.children.cycles-pp.__unwind_start
5.52 ± 11% -3.6 1.94 ± 29% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
4.70 ± 11% -3.1 1.64 ± 19% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
4.61 ± 11% -3.0 1.56 ± 20% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
4.40 ± 13% -2.5 1.87 ± 28% perf-profile.children.cycles-pp.asm_exc_page_fault
2.99 ± 13% -2.0 1.00 ± 31% perf-profile.children.cycles-pp.__orc_find
3.90 ± 10% -1.5 2.38 ± 21% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
47.70 -1.3 46.38 perf-profile.children.cycles-pp.ksys_write
47.46 -1.3 46.14 perf-profile.children.cycles-pp.vfs_write
46.95 -1.3 45.64 perf-profile.children.cycles-pp.new_sync_write
46.55 -1.2 45.36 perf-profile.children.cycles-pp.sock_write_iter
1.67 ± 13% -1.1 0.55 ± 32% perf-profile.children.cycles-pp.orc_find
46.21 -1.1 45.09 perf-profile.children.cycles-pp.sock_sendmsg
45.90 -1.1 44.79 perf-profile.children.cycles-pp.unix_stream_sendmsg
1.66 ± 12% -1.1 0.57 ± 31% perf-profile.children.cycles-pp.unwind_get_return_address
5.94 ± 5% -1.1 4.86 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
2.00 ± 3% -0.9 1.12 ± 6% perf-profile.children.cycles-pp.switch_mm_irqs_off
1.30 ± 12% -0.8 0.46 ± 30% perf-profile.children.cycles-pp.__kernel_text_address
1.20 ± 10% -0.8 0.44 ± 28% perf-profile.children.cycles-pp.native_irq_return_iret
1.09 ± 13% -0.8 0.34 ± 31% perf-profile.children.cycles-pp.kernelmode_fixup_or_oops
1.05 ± 14% -0.7 0.32 ± 34% perf-profile.children.cycles-pp.stack_access_ok
1.01 ± 13% -0.7 0.31 ± 31% perf-profile.children.cycles-pp.fixup_exception
1.00 ± 12% -0.7 0.34 ± 30% perf-profile.children.cycles-pp.kernel_text_address
0.91 ± 13% -0.6 0.28 ± 32% perf-profile.children.cycles-pp.search_exception_tables
0.88 ± 13% -0.6 0.27 ± 32% perf-profile.children.cycles-pp.search_extable
0.84 ± 14% -0.6 0.26 ± 31% perf-profile.children.cycles-pp.bsearch
1.44 ± 14% -0.5 0.90 ± 29% perf-profile.children.cycles-pp.exc_page_fault
97.98 -0.5 97.46 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.95 ± 14% -0.5 0.46 ± 13% perf-profile.children.cycles-pp.select_task_rq_fair
97.74 -0.5 97.26 perf-profile.children.cycles-pp.do_syscall_64
0.82 ± 11% -0.4 0.42 ± 8% perf-profile.children.cycles-pp.pick_next_task_fair
0.70 ± 14% -0.4 0.31 ± 15% perf-profile.children.cycles-pp.select_idle_sibling
0.65 ± 9% -0.4 0.27 ± 28% perf-profile.children.cycles-pp.__perf_event_header__init_id
0.87 ± 12% -0.4 0.51 ± 9% perf-profile.children.cycles-pp.update_load_avg
0.51 ± 14% -0.4 0.16 ± 31% perf-profile.children.cycles-pp.cmp_ex_search
0.46 ± 11% -0.3 0.21 ± 13% perf-profile.children.cycles-pp.reweight_entity
0.44 ± 9% -0.3 0.19 ± 10% perf-profile.children.cycles-pp.load_new_mm_cr3
0.41 ± 15% -0.2 0.18 ± 16% perf-profile.children.cycles-pp.select_idle_cpu
0.46 ± 8% -0.2 0.24 ± 7% perf-profile.children.cycles-pp.switch_fpu_return
0.37 ± 8% -0.2 0.15 ± 26% perf-profile.children.cycles-pp.perf_event_pid_type
0.45 ± 3% -0.2 0.25 ± 6% perf-profile.children.cycles-pp.__switch_to
0.31 ± 11% -0.2 0.11 ± 32% perf-profile.children.cycles-pp.ftrace_graph_ret_addr
0.31 ± 12% -0.2 0.12 ± 30% perf-profile.children.cycles-pp.perf_output_begin_forward
0.31 ± 9% -0.2 0.13 ± 17% perf-profile.children.cycles-pp.sched_clock_cpu
0.30 ± 8% -0.2 0.12 ± 28% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.32 ± 16% -0.2 0.15 ± 31% perf-profile.children.cycles-pp.perf_trace_sched_migrate_task
0.37 ± 5% -0.2 0.21 ± 7% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.37 ± 16% -0.2 0.21 ± 24% perf-profile.children.cycles-pp.set_task_cpu
0.27 ± 10% -0.2 0.11 ± 18% perf-profile.children.cycles-pp.native_sched_clock
0.27 ± 17% -0.2 0.11 ± 22% perf-profile.children.cycles-pp.available_idle_cpu
0.30 ± 9% -0.2 0.15 ± 10% perf-profile.children.cycles-pp.set_next_entity
0.35 ± 5% -0.2 0.20 ± 7% perf-profile.children.cycles-pp.check_preempt_curr
0.23 ± 13% -0.2 0.08 ± 48% perf-profile.children.cycles-pp.bad_get_user
0.29 ± 10% -0.1 0.14 ± 9% perf-profile.children.cycles-pp.__switch_to_asm
0.32 ± 9% -0.1 0.18 ± 14% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.66 ± 5% -0.1 0.52 ± 6% perf-profile.children.cycles-pp.prepare_to_wait
0.30 ± 4% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.check_preempt_wakeup
0.20 ± 11% -0.1 0.06 ± 47% perf-profile.children.cycles-pp.error_entry
0.26 ± 11% -0.1 0.12 ± 10% perf-profile.children.cycles-pp.update_rq_clock
0.21 ± 16% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.__enqueue_entity
0.30 ± 15% -0.1 0.17 ± 22% perf-profile.children.cycles-pp.finish_task_switch
0.19 ± 14% -0.1 0.07 ± 48% perf-profile.children.cycles-pp.is_bpf_text_address
0.24 ± 11% -0.1 0.11 ± 13% perf-profile.children.cycles-pp.__restore_fpregs_from_fpstate
0.24 ± 12% -0.1 0.12 ± 11% perf-profile.children.cycles-pp.__update_load_avg_se
0.21 ± 16% -0.1 0.09 ± 20% perf-profile.children.cycles-pp.cpuacct_charge
0.22 ± 13% -0.1 0.10 ± 11% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.13 ± 15% -0.1 0.03 ±102% perf-profile.children.cycles-pp.perf_tp_event_match
0.16 ± 11% -0.1 0.05 ± 45% perf-profile.children.cycles-pp.perf_trace_buf_alloc
0.16 ± 16% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.put_prev_entity
0.14 ± 14% -0.1 0.04 ± 75% perf-profile.children.cycles-pp.save_fpregs_to_fpstate
0.13 ± 11% -0.1 0.04 ± 73% perf-profile.children.cycles-pp.get_callchain_entry
0.15 ± 11% -0.1 0.06 ± 46% perf-profile.children.cycles-pp.perf_trace_buf_update
0.14 ± 15% -0.1 0.05 ± 51% perf-profile.children.cycles-pp.bpf_ksym_find
0.13 ± 11% -0.1 0.05 ± 47% perf-profile.children.cycles-pp.get_stack_info
0.16 ± 4% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.resched_curr
0.11 ± 11% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.perf_instruction_pointer
0.11 ± 14% -0.1 0.04 ± 47% perf-profile.children.cycles-pp.___perf_sw_event
0.10 ± 14% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.tracing_gen_ctx_irq_test
0.13 ± 15% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.update_min_vruntime
0.14 ± 14% -0.1 0.08 ± 11% perf-profile.children.cycles-pp.__list_del_entry_valid
0.14 ± 11% -0.0 0.09 ± 12% perf-profile.children.cycles-pp.preempt_schedule_common
0.10 ± 9% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__list_add_valid
0.08 ± 6% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.iov_iter_init
0.10 ± 7% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__calc_delta
0.17 ± 4% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.asm_sysvec_reschedule_ipi
0.12 ± 6% -0.0 0.09 ± 6% perf-profile.children.cycles-pp.finish_wait
0.14 ± 3% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.06 +0.0 0.07 perf-profile.children.cycles-pp.refill_obj_stock
0.06 +0.0 0.07 ± 5% perf-profile.children.cycles-pp.check_stack_object
0.07 +0.0 0.09 ± 7% perf-profile.children.cycles-pp.rcu_all_qs
0.15 ± 3% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.__ksize
0.04 ± 45% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.irq_exit_rcu
0.10 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.wait_for_unix_gc
0.13 ± 5% +0.0 0.16 ± 4% perf-profile.children.cycles-pp.__might_fault
0.19 ± 3% +0.0 0.22 ± 4% perf-profile.children.cycles-pp.__virt_addr_valid
0.18 +0.0 0.21 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.19 ± 2% +0.0 0.22 ± 2% perf-profile.children.cycles-pp.security_socket_recvmsg
0.22 ± 2% +0.0 0.25 perf-profile.children.cycles-pp.sock_recvmsg
0.26 ± 3% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.copyin
0.22 ± 2% +0.1 0.27 ± 3% perf-profile.children.cycles-pp.__check_heap_object
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.load_balance
0.37 +0.1 0.42 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.33 ± 2% +0.1 0.38 ± 2% perf-profile.children.cycles-pp.aa_sk_perm
0.04 ± 73% +0.1 0.09 ± 13% perf-profile.children.cycles-pp.generic_file_write_iter
0.04 ± 73% +0.1 0.09 ± 13% perf-profile.children.cycles-pp.__generic_file_write_iter
0.04 ± 73% +0.1 0.09 ± 13% perf-profile.children.cycles-pp.generic_perform_write
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.__softirqentry_text_start
0.42 ± 2% +0.1 0.49 ± 4% perf-profile.children.cycles-pp._copy_from_iter
0.38 +0.1 0.45 ± 3% perf-profile.children.cycles-pp.__entry_text_start
0.00 +0.1 0.08 ± 17% perf-profile.children.cycles-pp.schedule_idle
0.26 ± 4% +0.1 0.35 ± 3% perf-profile.children.cycles-pp.__build_skb_around
0.01 ±223% +0.1 0.11 ± 41% perf-profile.children.cycles-pp.try_to_migrate_one
0.49 ± 3% +0.1 0.60 ± 5% perf-profile.children.cycles-pp.mutex_lock
0.90 ± 3% +0.1 1.01 ± 6% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.01 ±223% +0.1 0.14 ± 40% perf-profile.children.cycles-pp.try_to_migrate
0.65 +0.2 0.81 perf-profile.children.cycles-pp.simple_copy_to_iter
0.75 ± 2% +0.2 0.92 ± 4% perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
0.02 ±223% +0.2 0.22 ± 48% perf-profile.children.cycles-pp.remove_migration_pte
0.76 ± 7% +0.2 0.99 ± 12% perf-profile.children.cycles-pp.__slab_alloc
0.76 ± 6% +0.2 0.98 ± 11% perf-profile.children.cycles-pp.___slab_alloc
0.92 +0.2 1.15 ± 2% perf-profile.children.cycles-pp.__check_object_size
0.29 ± 15% +0.2 0.53 ± 7% perf-profile.children.cycles-pp.drain_stock
0.49 ± 12% +0.2 0.74 ± 19% perf-profile.children.cycles-pp.__unfreeze_partials
0.03 ±155% +0.2 0.27 ± 47% perf-profile.children.cycles-pp.remove_migration_ptes
0.34 ± 13% +0.3 0.60 ± 6% perf-profile.children.cycles-pp.refill_stock
0.40 ± 9% +0.3 0.67 ± 4% perf-profile.children.cycles-pp.unix_write_space
0.34 ± 15% +0.3 0.62 ± 7% perf-profile.children.cycles-pp.try_charge_memcg
0.03 ±161% +0.3 0.32 ± 45% perf-profile.children.cycles-pp.page_vma_mapped_walk
0.65 ± 2% +0.3 0.94 ± 3% perf-profile.children.cycles-pp.copyout
0.82 ± 2% +0.3 1.12 ± 3% perf-profile.children.cycles-pp._copy_to_iter
0.17 ± 38% +0.3 0.49 ± 39% perf-profile.children.cycles-pp.do_user_addr_fault
0.88 ± 2% +0.3 1.20 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.06 ±106% +0.4 0.41 ± 44% perf-profile.children.cycles-pp.rmap_walk_anon
1.28 ± 9% +0.4 1.63 ± 7% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.06 ±106% +0.4 0.42 ± 44% perf-profile.children.cycles-pp.migrate_misplaced_page
0.06 ±106% +0.4 0.42 ± 44% perf-profile.children.cycles-pp.migrate_pages
0.06 ±108% +0.4 0.42 ± 45% perf-profile.children.cycles-pp.do_numa_page
0.08 ± 81% +0.4 0.44 ± 43% perf-profile.children.cycles-pp.__handle_mm_fault
0.08 ± 84% +0.4 0.45 ± 42% perf-profile.children.cycles-pp.handle_mm_fault
1.79 ± 8% +0.4 2.22 ± 6% perf-profile.children.cycles-pp.mod_objcg_state
0.00 +0.4 0.43 ± 22% perf-profile.children.cycles-pp.intel_idle
0.58 ± 12% +0.4 1.01 ± 7% perf-profile.children.cycles-pp.mutex_unlock
0.76 ± 7% +0.4 1.21 ± 7% perf-profile.children.cycles-pp.skb_queue_tail
0.01 ±223% +0.5 0.46 ± 22% perf-profile.children.cycles-pp.cpuidle_enter_state
0.01 ±223% +0.5 0.46 ± 22% perf-profile.children.cycles-pp.cpuidle_enter
0.60 ± 12% +0.5 1.06 ± 7% perf-profile.children.cycles-pp.skb_unlink
1.52 +0.5 1.99 ± 2% perf-profile.children.cycles-pp.skb_copy_datagram_iter
1.50 ± 2% +0.5 1.98 ± 2% perf-profile.children.cycles-pp.__skb_datagram_iter
1.53 +0.5 2.01 ± 2% perf-profile.children.cycles-pp.unix_stream_read_actor
0.65 ± 5% +0.5 1.15 ± 6% perf-profile.children.cycles-pp.skb_set_owner_w
0.02 ±144% +0.6 0.58 ± 21% perf-profile.children.cycles-pp.start_secondary
0.02 ±144% +0.6 0.59 ± 21% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.02 ±144% +0.6 0.59 ± 21% perf-profile.children.cycles-pp.cpu_startup_entry
0.02 ±144% +0.6 0.59 ± 21% perf-profile.children.cycles-pp.do_idle
1.16 ± 9% +0.8 1.99 ± 4% perf-profile.children.cycles-pp.sock_wfree
1.21 ± 8% +0.8 2.04 ± 4% perf-profile.children.cycles-pp.unix_destruct_scm
1.22 ± 8% +0.8 2.06 ± 4% perf-profile.children.cycles-pp.skb_release_head_state
1.23 ± 8% +0.8 2.07 ± 4% perf-profile.children.cycles-pp.skb_release_all
1.58 ± 7% +1.0 2.53 ± 8% perf-profile.children.cycles-pp.get_mem_cgroup_from_objcg
1.47 ± 9% +1.1 2.54 ± 5% perf-profile.children.cycles-pp.__slab_free
2.99 ± 5% +1.1 4.13 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
7.88 ± 8% +2.4 10.33 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc_node
47.00 +3.5 50.55 perf-profile.children.cycles-pp.__libc_read
7.43 ± 12% +3.7 11.08 ± 7% perf-profile.children.cycles-pp.propagate_protected_usage
44.84 +3.9 48.70 perf-profile.children.cycles-pp.vfs_read
45.04 +3.9 48.91 perf-profile.children.cycles-pp.ksys_read
43.69 +3.9 47.56 perf-profile.children.cycles-pp.unix_stream_recvmsg
43.61 +3.9 47.49 perf-profile.children.cycles-pp.unix_stream_read_generic
44.12 +3.9 48.00 perf-profile.children.cycles-pp.sock_read_iter
44.34 +3.9 48.24 perf-profile.children.cycles-pp.new_sync_read
7.55 ± 9% +4.1 11.66 ± 4% perf-profile.children.cycles-pp.kmem_cache_free
14.24 ± 9% +4.5 18.76 ± 4% perf-profile.children.cycles-pp.kmalloc_reserve
14.20 ± 9% +4.5 18.72 ± 4% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
11.62 ± 11% +5.2 16.83 ± 6% perf-profile.children.cycles-pp.page_counter_cancel
12.06 ± 9% +5.9 17.92 ± 4% perf-profile.children.cycles-pp.kfree
18.74 ± 10% +6.4 25.17 ± 5% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
18.83 ± 10% +6.5 25.28 ± 5% perf-profile.children.cycles-pp.obj_cgroup_charge
13.12 ± 9% +6.5 19.62 ± 4% perf-profile.children.cycles-pp.skb_release_data
22.71 ± 8% +7.1 29.84 ± 4% perf-profile.children.cycles-pp.__alloc_skb
22.76 ± 8% +7.1 29.89 ± 4% perf-profile.children.cycles-pp.alloc_skb_with_frags
14.56 ± 9% +7.5 22.04 ± 4% perf-profile.children.cycles-pp.consume_skb
14.92 ± 11% +7.6 22.56 ± 5% perf-profile.children.cycles-pp.page_counter_uncharge
23.80 ± 8% +8.0 31.76 ± 4% perf-profile.children.cycles-pp.sock_alloc_send_pskb
16.54 ± 11% +8.6 25.14 ± 5% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.00 +22.0 22.04 ± 5% perf-profile.children.cycles-pp.page_counter_charge
12.79 ± 10% -12.6 0.17 ± 9% perf-profile.self.cycles-pp.page_counter_try_charge
4.56 ± 13% -3.1 1.44 ± 33% perf-profile.self.cycles-pp.unwind_next_frame
3.43 ± 14% -2.3 1.14 ± 32% perf-profile.self.cycles-pp.__get_user_nocheck_8
2.98 ± 13% -2.0 0.99 ± 32% perf-profile.self.cycles-pp.__orc_find
3.89 ± 10% -1.5 2.38 ± 21% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.61 ± 13% -1.1 0.53 ± 32% perf-profile.self.cycles-pp.orc_find
1.22 ± 12% -0.8 0.39 ± 34% perf-profile.self.cycles-pp.perf_callchain_kernel
1.20 ± 10% -0.8 0.44 ± 28% perf-profile.self.cycles-pp.native_irq_return_iret
0.92 ± 13% -0.6 0.29 ± 33% perf-profile.self.cycles-pp.stack_access_ok
1.54 ± 5% -0.6 0.92 ± 6% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.82 ± 9% -0.4 0.45 ± 9% perf-profile.self.cycles-pp.__schedule
0.50 ± 14% -0.3 0.16 ± 30% perf-profile.self.cycles-pp.cmp_ex_search
0.50 ± 12% -0.3 0.16 ± 30% perf-profile.self.cycles-pp.__unwind_start
0.48 ± 13% -0.3 0.16 ± 31% perf-profile.self.cycles-pp.kernel_text_address
0.49 ± 13% -0.3 0.22 ± 10% perf-profile.self.cycles-pp.update_curr
0.36 ± 12% -0.3 0.11 ± 31% perf-profile.self.cycles-pp.unwind_get_return_address
0.44 ± 10% -0.3 0.19 ± 10% perf-profile.self.cycles-pp.load_new_mm_cr3
0.34 ± 13% -0.2 0.10 ± 49% perf-profile.self.cycles-pp.bsearch
0.30 ± 17% -0.2 0.08 ± 47% perf-profile.self.cycles-pp.perf_callchain_user
0.45 ± 3% -0.2 0.25 ± 7% perf-profile.self.cycles-pp.__switch_to
0.27 ± 9% -0.2 0.08 ± 47% perf-profile.self.cycles-pp.get_perf_callchain
0.33 ± 10% -0.2 0.14 ± 21% perf-profile.self.cycles-pp.perf_tp_event
0.28 ± 12% -0.2 0.11 ± 30% perf-profile.self.cycles-pp.perf_output_begin_forward
0.28 ± 9% -0.2 0.11 ± 27% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.26 ± 12% -0.2 0.09 ± 48% perf-profile.self.cycles-pp.ftrace_graph_ret_addr
0.26 ± 12% -0.2 0.10 ± 46% perf-profile.self.cycles-pp.__kernel_text_address
0.26 ± 17% -0.2 0.10 ± 21% perf-profile.self.cycles-pp.available_idle_cpu
0.27 ± 10% -0.2 0.11 ± 27% perf-profile.self.cycles-pp.perf_prepare_sample
0.26 ± 9% -0.2 0.10 ± 19% perf-profile.self.cycles-pp.native_sched_clock
0.22 ± 11% -0.2 0.07 ± 48% perf-profile.self.cycles-pp.asm_exc_page_fault
0.30 ± 6% -0.1 0.16 ± 3% perf-profile.self.cycles-pp.new_sync_write
0.28 ± 10% -0.1 0.14 ± 9% perf-profile.self.cycles-pp.__switch_to_asm
0.22 ± 12% -0.1 0.08 ± 49% perf-profile.self.cycles-pp.perf_trace_sched_switch
0.40 ± 12% -0.1 0.27 ± 9% perf-profile.self.cycles-pp.update_load_avg
0.28 ± 11% -0.1 0.14 ± 14% perf-profile.self.cycles-pp.try_to_wake_up
0.21 ± 16% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.__enqueue_entity
0.24 ± 11% -0.1 0.11 ± 13% perf-profile.self.cycles-pp.__restore_fpregs_from_fpstate
0.24 ± 12% -0.1 0.12 ± 10% perf-profile.self.cycles-pp.__update_load_avg_se
0.17 ± 13% -0.1 0.05 ± 47% perf-profile.self.cycles-pp.error_entry
0.22 ± 14% -0.1 0.11 ± 16% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.20 ± 16% -0.1 0.09 ± 16% perf-profile.self.cycles-pp.cpuacct_charge
0.21 ± 14% -0.1 0.10 ± 14% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.22 ± 6% -0.1 0.12 ± 8% perf-profile.self.cycles-pp.switch_fpu_return
0.13 ± 14% -0.1 0.03 ±102% perf-profile.self.cycles-pp.perf_tp_event_match
0.28 ± 6% -0.1 0.18 ± 5% perf-profile.self.cycles-pp.prepare_to_wait
0.17 ± 10% -0.1 0.07 ± 12% perf-profile.self.cycles-pp.reweight_entity
0.13 ± 8% -0.1 0.04 ± 71% perf-profile.self.cycles-pp.get_callchain_entry
0.13 ± 14% -0.1 0.04 ± 75% perf-profile.self.cycles-pp.save_fpregs_to_fpstate
0.14 ± 15% -0.1 0.05 ± 49% perf-profile.self.cycles-pp.bpf_ksym_find
0.19 ± 11% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.pick_next_task_fair
0.16 ± 14% -0.1 0.08 ± 13% perf-profile.self.cycles-pp.update_rq_clock
0.16 ± 11% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.enqueue_entity
0.15 ± 13% -0.1 0.07 ± 12% perf-profile.self.cycles-pp.select_idle_sibling
0.34 ± 5% -0.1 0.26 ± 3% perf-profile.self.cycles-pp.sock_write_iter
0.15 ± 16% -0.1 0.07 ± 18% perf-profile.self.cycles-pp.select_idle_cpu
0.16 ± 4% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.resched_curr
0.15 ± 10% -0.1 0.08 ± 4% perf-profile.self.cycles-pp.select_task_rq_fair
0.14 ± 14% -0.1 0.08 ± 16% perf-profile.self.cycles-pp.enqueue_task_fair
0.13 ± 8% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.schedule
0.10 ± 14% -0.1 0.04 ± 71% perf-profile.self.cycles-pp.tracing_gen_ctx_irq_test
0.14 ± 10% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.dequeue_task_fair
0.12 ± 14% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.update_min_vruntime
0.13 ± 7% -0.1 0.08 ± 7% perf-profile.self.cycles-pp.do_syscall_64
0.29 ± 6% -0.0 0.24 ± 2% perf-profile.self.cycles-pp.__libc_read
0.13 ± 12% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__list_del_entry_valid
0.10 ± 9% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.__list_add_valid
0.11 ± 14% -0.0 0.06 ± 14% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.10 ± 7% -0.0 0.06 ± 6% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.08 ± 10% -0.0 0.03 ± 70% perf-profile.self.cycles-pp.iov_iter_init
0.10 ± 7% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__calc_delta
0.09 ± 7% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.finish_task_switch
0.24 ± 3% -0.0 0.20 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.08 ± 10% -0.0 0.04 ± 45% perf-profile.self.cycles-pp.dequeue_entity
0.08 ± 8% -0.0 0.04 ± 45% perf-profile.self.cycles-pp.check_preempt_wakeup
0.08 ± 11% -0.0 0.05 ± 46% perf-profile.self.cycles-pp.__perf_event_header__init_id
0.07 ± 5% -0.0 0.04 ± 44% perf-profile.self.cycles-pp.sock_sendmsg
0.11 ± 6% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.07 ± 10% -0.0 0.05 ± 7% perf-profile.self.cycles-pp.schedule_timeout
0.21 ± 2% -0.0 0.18 ± 4% perf-profile.self.cycles-pp.sock_read_iter
0.08 ± 4% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.23 ± 3% -0.0 0.21 ± 3% perf-profile.self.cycles-pp.__libc_write
0.13 ± 6% -0.0 0.11 ± 6% perf-profile.self.cycles-pp.security_file_permission
0.06 ± 6% +0.0 0.07 perf-profile.self.cycles-pp.refill_obj_stock
0.05 ± 7% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.check_stack_object
0.09 ± 7% +0.0 0.11 ± 6% perf-profile.self.cycles-pp.obj_cgroup_charge
0.05 ± 8% +0.0 0.07 perf-profile.self.cycles-pp.__cond_resched
0.05 ± 7% +0.0 0.07 ± 8% perf-profile.self.cycles-pp.rcu_all_qs
0.10 ± 4% +0.0 0.12 ± 6% perf-profile.self.cycles-pp._copy_from_iter
0.18 ± 2% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.new_sync_read
0.14 ± 3% +0.0 0.16 ± 6% perf-profile.self.cycles-pp.__ksize
0.16 +0.0 0.19 ± 3% perf-profile.self.cycles-pp.__might_sleep
0.19 ± 4% +0.0 0.22 ± 3% perf-profile.self.cycles-pp.aa_sk_perm
0.18 ± 4% +0.0 0.21 ± 4% perf-profile.self.cycles-pp.__virt_addr_valid
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.__entry_text_start
0.27 ± 2% +0.0 0.31 ± 3% perf-profile.self.cycles-pp.__alloc_skb
0.34 +0.0 0.38 ± 4% perf-profile.self.cycles-pp.kmem_cache_alloc_node
0.04 ± 45% +0.0 0.09 ± 9% perf-profile.self.cycles-pp.skb_copy_datagram_from_iter
0.21 ± 2% +0.0 0.26 ± 4% perf-profile.self.cycles-pp.__check_heap_object
0.50 ± 2% +0.1 0.54 ± 2% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
0.46 ± 4% +0.1 0.52 ± 5% perf-profile.self.cycles-pp.unix_stream_sendmsg
0.36 +0.1 0.42 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.32 ± 2% +0.1 0.38 ± 5% perf-profile.self.cycles-pp.___slab_alloc
0.22 ± 5% +0.1 0.28 ± 6% perf-profile.self.cycles-pp.unix_write_space
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.refill_stock
0.14 ± 16% +0.1 0.21 ± 3% perf-profile.self.cycles-pp.page_counter_uncharge
0.19 ± 6% +0.1 0.27 ± 5% perf-profile.self.cycles-pp.__build_skb_around
0.40 ± 2% +0.1 0.50 ± 4% perf-profile.self.cycles-pp.kfree
0.37 ± 4% +0.1 0.49 ± 6% perf-profile.self.cycles-pp.mutex_lock
0.45 ± 3% +0.1 0.59 ± 2% perf-profile.self.cycles-pp.__check_object_size
0.21 ± 8% +0.1 0.35 ± 9% perf-profile.self.cycles-pp.consume_skb
0.02 ±223% +0.1 0.16 ± 45% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.20 ± 11% +0.2 0.36 ± 7% perf-profile.self.cycles-pp.skb_release_data
1.02 ± 3% +0.2 1.22 ± 4% perf-profile.self.cycles-pp.unix_stream_read_generic
0.58 ± 2% +0.2 0.78 ± 3% perf-profile.self.cycles-pp.kmem_cache_free
0.86 ± 2% +0.3 1.19 ± 3% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.25 ± 9% +0.4 1.60 ± 6% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.4 0.43 ± 22% perf-profile.self.cycles-pp.intel_idle
0.57 ± 12% +0.4 1.00 ± 7% perf-profile.self.cycles-pp.mutex_unlock
0.76 ± 8% +0.4 1.20 ± 8% perf-profile.self.cycles-pp.obj_cgroup_uncharge_pages
0.64 ± 4% +0.5 1.14 ± 6% perf-profile.self.cycles-pp.skb_set_owner_w
0.73 ± 8% +0.5 1.25 ± 9% perf-profile.self.cycles-pp.obj_cgroup_charge_pages
0.74 ± 9% +0.6 1.30 ± 7% perf-profile.self.cycles-pp.sock_wfree
0.81 ± 11% +0.6 1.42 ± 7% perf-profile.self.cycles-pp.sock_def_readable
2.22 ± 4% +0.7 2.88 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
2.79 ± 4% +0.9 3.68 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
1.56 ± 7% +0.9 2.50 ± 8% perf-profile.self.cycles-pp.get_mem_cgroup_from_objcg
1.45 ± 9% +1.1 2.51 ± 5% perf-profile.self.cycles-pp.__slab_free
7.36 ± 12% +3.6 10.97 ± 7% perf-profile.self.cycles-pp.propagate_protected_usage
11.50 ± 11% +5.2 16.66 ± 6% perf-profile.self.cycles-pp.page_counter_cancel
0.00 +16.7 16.74 ± 6% perf-profile.self.cycles-pp.page_counter_charge
hackbench.throughput
145000 +------------------------------------------------------------------+
| O |
140000 |-+ O O O |
| OO OO O O O O O O |
135000 |-O O O OO O O O O |
| O O O O O O O |
130000 |-+ O O O O |
| OO O ++ .+.+ |
125000 |.++.+.+ +.+.++.+.++.+.+.++ : + + +.+.+ .+ |
| + ++.+ : O : : +.+ +.+ |
120000 |-+ :O : : : : : |
| : : :: : |
115000 |-+ :: : + |
| + + |
110000 +------------------------------------------------------------------+
hackbench.time.voluntary_context_switches
1.4e+09 +-----------------------------------------------------------------+
| + |
1.3e+09 |-+ + :: |
1.2e+09 |-+ :+ : : |
| : + +.+. |
1.1e+09 |.++. .+ ++.++. .++.+.+ .+ + |
1e+09 |-+ + + + + + : |
| + + : .++. .+. |
9e+08 |-+ +.+. : ++.+ ++ + |
8e+08 |-+ + |
| |
7e+08 |-+ |
6e+08 |-+ |
| OO O OO O OO O OO OO O OO O OO O OO O OO O OO O O OO O OO O OO |
5e+08 +-----------------------------------------------------------------+
hackbench.time.involuntary_context_switches
6.5e+08 +-----------------------------------------------------------------+
6e+08 |-+ + |
| + : |
5.5e+08 |-+ :: : : |
5e+08 |-+ :: : +.+. |
|. : + ++. +. |
4.5e+08 |-++. .+ ++. .+ +.+ .+ .+ |
4e+08 |-+ + + + + + : |
3.5e+08 |-+ + : : .++. .+. |
| +.+.: ++.+ ++ + |
3e+08 |-+ + |
2.5e+08 |-+ |
| |
2e+08 |-+ O O O O OO |
1.5e+08 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
[cifs] 966a3cb7c7: page_allocation_failure:order:#,mode:#(GFP_KERNEL|__GFP_COMP|__GFP_ZERO),nodemask=(null),cpuset=/,mems_allowed=
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: 966a3cb7c7db786452a87afdc3b48858fc4d4d6b ("cifs: improve fallocate emulation")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: xfstests
version: xfstests-x86_64-99bc497-1_20211122
with following parameters:
disk: 4HDD
fs: btrfs
fs2: smbv3
test: generic-group-32
ucode: 0xde
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz with 32G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
[ 417.012130][ T4464] smpboot: CPU 4 is now offline
[ 417.537914][ T4464] smpboot: CPU 7 is now offline
[ 418.548502][ T4464] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 419.141994][ T4464] smpboot: CPU 6 is now offline
[ 419.656801][ T4464] smpboot: Booting Node 0 Processor 2 APIC 0x4
[ 421.481311][ T4475] fsstress: page allocation failure: order:8, mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
[ 421.495142][ T4475] CPU: 2 PID: 4475 Comm: fsstress Not tainted 5.13.0-rc7-00007-g966a3cb7c7db #1
[ 421.504167][ T4475] Hardware name: Gigabyte Technology Co., Ltd. Z270X-UD5/Z270X-UD5-CF, BIOS F2 02/10/2017
[ 421.514129][ T4475] Call Trace:
[ 421.517378][ T4475] dump_stack (kbuild/src/consumer/lib/dump_stack.c:122)
[ 421.521463][ T4475] warn_alloc.cold (kbuild/src/consumer/mm/page_alloc.c:4066)
[ 421.526054][ T4475] ? zone_watermark_ok_safe (kbuild/src/consumer/mm/page_alloc.c:4047)
[ 421.531560][ T4475] ? __alloc_pages_direct_compact (kbuild/src/consumer/mm/page_alloc.c:4237)
[ 421.537596][ T4475] ? get_page_from_freelist (kbuild/src/consumer/mm/page_alloc.c:3778 kbuild/src/consumer/mm/page_alloc.c:3948)
[ 421.543109][ T4475] ? get_page_from_freelist+0xc80/0xc80
[ 421.548630][ T4475] __alloc_pages_slowpath+0x1a36/0x1f40
[ 421.555183][ T4475] ? kmemdup+0x30/0x40
[ 421.559193][ T4475] ? SMB2_ioctl_init+0x10a/0xb80 [cifs]
[ 421.564744][ T4475] ? warn_alloc+0x140/0x140
[ 421.569248][ T4475] ? get_page_from_freelist+0x439/0xc80
[ 421.574753][ T4475] __alloc_pages+0x4da/0x600
[ 421.579257][ T4475] ? __alloc_pages_slowpath+0x1f40/0x1f40
[ 421.585996][ T4475] kmalloc_order+0x38/0xc0
[ 421.590359][ T4475] kmalloc_order_trace+0x19/0xc0
[ 421.595240][ T4475] smb3_simple_falloc+0xde7/0x1740 [cifs]
[ 421.601005][ T4475] ? cp_new_stat+0x47a/0x5c0
[ 421.605536][ T4475] ? __ia32_sys_lstat+0x80/0x80
[ 421.610303][ T4475] ? smb3_llseek+0xb40/0xb40 [cifs]
[ 421.615512][ T4475] smb3_fallocate+0xc6/0x1700 [cifs]
[ 421.620737][ T4475] ? smb3_zero_range+0x1180/0x1180 [cifs]
[ 421.626457][ T4475] ? __do_sys_newfstat+0xbe/0x100
[ 421.631431][ T4475] ? __ia32_sys_fstat+0x80/0x80
[ 421.636226][ T4475] vfs_fallocate+0x2aa/0x9c0
[ 421.640743][ T4475] ksys_fallocate+0x3a/0x80
[ 421.645169][ T4475] __x64_sys_fallocate+0x93/0x100
[ 421.650083][ T4475] do_syscall_64+0x40/0x80
[ 421.654450][ T4475] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 421.660294][ T4475] RIP: 0033:0x7f5821e32437
[ 421.664678][ T4475] Code: 5f ba 0c 00 f7 d8 64 89 02 b8 ff ff ff ff eb ba 0f 1f 00 48 8d 05 c9 12 0d 00 49 89 ca 8b 00 85 c0 75 10 b8 1d 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 41 55 49 89 cd 41 54 49 89 d4 55 89 f5
[ 421.684458][ T4475] RSP: 002b:00007ffc9d943de8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
[ 421.692871][ T4475] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f5821e32437
[ 421.700829][ T4475] RDX: 0000000000119f63 RSI: 0000000000000001 RDI: 0000000000000004
[ 421.708782][ T4475] RBP: 00000000000008f1 R08: 000000000000006e R09: 00007ffc9d943a15
[ 421.716766][ T4475] R10: 00000000000497b8 R11: 0000000000000246 R12: 0000000000000001
[ 421.724710][ T4475] R13: 00000000000497b8 R14: 0000000000119f63 R15: 00005648b262b7c0
[ 421.732707][ T4475] Mem-Info:
[ 421.735747][ T4475] active_anon:615 inactive_anon:66590 isolated_anon:0
[ 421.735747][ T4475] active_file:4968691 inactive_file:1229171 isolated_file:0
[ 421.735747][ T4475] unevictable:378410 dirty:28134 writeback:29866
[ 421.735747][ T4475] slab_reclaimable:44960 slab_unreclaimable:259563
[ 421.735747][ T4475] mapped:10165 shmem:3088 pagetables:870 bounce:0
[ 421.735747][ T4475] free:80492 free_pcp:394 free_cma:0
[ 421.774368][ T4475] Node 0 active_anon:2460kB inactive_anon:266360kB active_file:19877576kB inactive_file:4922012kB unevictable:1513640kB isolated(anon):0kB isolated(file):0kB mapped:40660kB dirty:6272kB writeback:222620kB shmem:12352kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 202752kB writeback_tmp:0kB kernel_stack:20736kB pagetables:3480kB all_unreclaimable? no
[ 421.807506][ T4475] Node 0 DMA free:15360kB min:36kB low:48kB high:60kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 421.833616][ T4475] lowmem_reserve[]: 0 2637 27181 27181 27181
[ 421.839547][ T4475] Node 0 DMA32 free:104912kB min:6552kB low:9252kB high:11952kB reserved_highatomic:0KB active_anon:4kB inactive_anon:8460kB active_file:2547244kB inactive_file:4740kB unevictable:0kB writepending:3644kB present:2766180kB managed:2700644kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 421.868224][ T4475] lowmem_reserve[]: 0 0 24543 24543 24543
[ 421.873910][ T4475] Node 0 Normal free:171620kB min:60992kB low:86124kB high:111256kB reserved_highatomic:2048KB active_anon:2456kB inactive_anon:257900kB active_file:17350520kB inactive_file:4918300kB unevictable:1513640kB writepending:233688kB present:30654464kB managed:25563308kB mlocked:2344kB bounce:0kB free_pcp:1656kB local_pcp:1360kB free_cma:0kB
[ 421.905685][ T4475] lowmem_reserve[]: 0 0 0 0 0
[ 421.910312][ T4475] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15360kB
[ 421.922896][ T4475] Node 0 DMA32: 574*4kB (UME) 185*8kB (UME) 119*16kB (UME) 15*32kB (UME) 255*64kB (UME) 176*128kB (UM) 56*256kB (UM) 15*512kB (UME) 3*1024kB (UM) 3*2048kB (UME) 7*4096kB (M) = 104912kB
[ 421.941219][ T4475] Node 0 Normal: 3933*4kB (UMEH) 6015*8kB (UMEH) 3728*16kB (UEH) 1291*32kB (UMEH) 35*64kB (UMEH) 1*128kB (H) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 167180kB
[ 421.957640][ T4475] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 421.967216][ T4475] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 421.976519][ T4475] 6590026 total pagecache pages
[ 421.981344][ T4475] 0 pages in swap cache
[ 421.985414][ T4475] Swap cache stats: add 0, delete 0, find 0/0
[ 421.991423][ T4475] Free swap = 0kB
[ 421.995072][ T4475] Total swap = 0kB
[ 421.998732][ T4475] 8359156 pages RAM
[ 422.002497][ T4475] 0 pages HighMem/MovableOnly
[ 422.007111][ T4475] 1289328 pages reserved
[ 422.011310][ T4475] 0 pages cma reserved
[ 422.015336][ T4475] 0 pages hwpoisoned
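The failure above is an order:8 request, i.e. 256 contiguous 4 kB pages (1 MiB), while the "Node 0 Normal" free-list line reports zero free blocks of 256 kB or larger. As an illustration only (not part of the report), the following sketch parses such a /proc/buddyinfo-style line and computes the largest order that could still be satisfied; the helper name `max_free_order` is a hypothetical label chosen here, not a kernel or lkp-tests API.

```python
import re

def max_free_order(buddy_line):
    """Return the largest buddy order with at least one free block,
    parsed from a buddy free-list line as printed in the OOM report."""
    best = -1
    # entries look like "3933*4kB (UMEH)": free-block count * block size
    for count, size_kb in re.findall(r"(\d+)\*(\d+)kB", buddy_line):
        if int(count) > 0:
            # order n covers 2**n base pages of 4 kB each
            order = (int(size_kb) // 4).bit_length() - 1
            best = max(best, order)
    return best

# the "Node 0 Normal" line from the log above
line = ("Node 0 Normal: 3933*4kB (UMEH) 6015*8kB (UMEH) 3728*16kB (UEH) "
        "1291*32kB (UMEH) 35*64kB (UMEH) 1*128kB (H) 0*256kB 0*512kB "
        "0*1024kB 0*2048kB 0*4096kB = 167180kB")
print(max_free_order(line))  # 5, i.e. 128 kB -- well short of the order:8 (1 MiB) request
```

This makes the fragmentation visible: the zone holds 167 MB free in total, but nothing larger than a single 128 kB block, so any order:8 `kmalloc` must fail without compaction.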
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
Thanks,
Oliver Sang