[blackhole_dev] 509e56b37c: kernel_selftests.net.test_blackhole_dev.sh.fail
by kernel test robot
FYI, we noticed a test failure due to the following commit (built with gcc-7):
commit: 509e56b37cc32c9b5fc2be585c25d1e60d6a1d73 ("blackhole_dev: add a selftest")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: kernel_selftests
with the following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
# selftests: net: test_blackhole_dev.sh
# test_blackhole_dev: [FAIL]
not ok 13 selftests: net: test_blackhole_dev.sh
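The "not ok 13 ..." line above is TAP output, the format the kselftest runner emits, so a log-watching wrapper can flag failures mechanically. A minimal sketch (the log line is copied from this report; the parsing itself is illustrative and not part of the lkp tooling):

```shell
# Classify a kselftest TAP result line; "not ok <n> <name>" marks a failure.
line='not ok 13 selftests: net: test_blackhole_dev.sh'
case "$line" in
    "not ok "*) echo "FAIL: ${line#not ok [0-9]* }" ;;
    "ok "*)     echo "PASS: ${line#ok [0-9]* }" ;;
esac
```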
To reproduce:
# build kernel
cd linux
cp config-5.2.0-rc6-01701-g509e56b37cc32c .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[sched/fair] e1f4f2a13d: reaim.jobs_per_min -20.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -20.1% regression of reaim.jobs_per_min due to commit:
commit: e1f4f2a13d35480157bd3b83255757b8f15ed90f ("sched/fair: rework load_balance")
https://git.linaro.org/people/vincent.guittot/kernel.git sched-rework-load_balance
in testcase: reaim
on test machine: 40 threads Skylake-SP with 64G memory
with the following parameters:
runtime: 300s
nr_task: 100%
test: fserver
cpufreq_governor: performance
test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
In addition, the commit also has a significant impact on the following test:
+------------------+----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -1.4% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=threads |
| | nr_threads=50% |
| | ucode=0xb000036 |
+------------------+----------------------------------------------------------------------+
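The headline -20.1% is the relative change in reaim.jobs_per_min between the parent commit (484360) and e1f4f2a13d (387225); both values appear in the comparison table below. A sketch of how lkp's %change column is derived (the arithmetic is mine, reconstructed from those two table entries):

```shell
# %change as reported by lkp: (new - old) / old * 100
old=484360   # reaim.jobs_per_min at parent commit 6824881809
new=387225   # reaim.jobs_per_min at e1f4f2a13d
awk -v o="$old" -v n="$new" 'BEGIN { printf "%.1f%%\n", (n - o) / o * 100 }'
```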
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/100%/debian-x86_64-2019-05-14.cgz/300s/lkp-skl-sp2/fserver/reaim
commit:
6824881809 ("sched/fair: rename sum_nr_running to sum_h_nr_running")
e1f4f2a13d ("sched/fair: rework load_balance")
6824881809407252 e1f4f2a13d35480157bd3b83255
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 11% 2:4 perf-profile.calltrace.cycles-pp.error_entry
2:4 12% 3:4 perf-profile.children.cycles-pp.error_entry
1:4 8% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
3.87 -17.5% 3.20 reaim.child_systime
13.47 -6.8% 12.56 reaim.child_utime
484360 -20.1% 387225 reaim.jobs_per_min
12109 -20.1% 9680 reaim.jobs_per_min_child
97.09 -2.9% 94.30 reaim.jti
497270 -18.8% 404000 reaim.max_jobs_per_min
0.50 +25.1% 0.63 reaim.parent_time
2.38 +118.3% 5.19 reaim.std_dev_percent
0.01 +190.6% 0.03 reaim.std_dev_time
95893567 -5.2% 90925485 reaim.time.minor_page_faults
688.25 -13.4% 595.75 reaim.time.percent_of_cpu_this_job_got
466.54 -21.6% 365.87 reaim.time.system_time
1617 -11.4% 1432 reaim.time.user_time
930114 -14.5% 795589 reaim.time.voluntary_context_switches
480000 -5.0% 456000 reaim.workload
0.03 -0.0 0.02 mpstat.cpu.all.soft%
4.80 -0.9 3.90 mpstat.cpu.all.sys%
12.84 -1.6 11.25 mpstat.cpu.all.usr%
584.25 -14.2% 501.50 turbostat.Avg_MHz
21.63 ± 3% -3.7 17.96 turbostat.Busy%
64.94 ± 14% +19.6% 77.69 ± 9% turbostat.CPU%c1
63.49 ± 2% +6.3% 67.52 ± 2% turbostat.PkgWatt
82.00 +2.4% 84.00 vmstat.cpu.id
12.00 -8.3% 11.00 vmstat.cpu.us
7.00 +14.3% 8.00 vmstat.procs.r
10856 +8.4% 11767 vmstat.system.cs
14157 -0.9% 14027 proc-vmstat.nr_slab_reclaimable
29868 -2.2% 29221 proc-vmstat.nr_slab_unreclaimable
90466060 -5.4% 85612219 proc-vmstat.numa_hit
90466060 -5.4% 85612219 proc-vmstat.numa_local
28134 +21.2% 34106 proc-vmstat.pgactivate
92547154 -5.5% 87485227 proc-vmstat.pgalloc_normal
96344232 -5.2% 91372003 proc-vmstat.pgfault
92521547 -5.5% 87459976 proc-vmstat.pgfree
24219 +14.2% 27655 slabinfo.anon_vma_chain.active_objs
24308 +14.1% 27727 slabinfo.anon_vma_chain.num_objs
71134 -33.7% 47163 ± 2% slabinfo.kmalloc-32.active_objs
555.50 -33.8% 368.00 ± 2% slabinfo.kmalloc-32.active_slabs
71171 -33.7% 47163 ± 2% slabinfo.kmalloc-32.num_objs
555.50 -33.8% 368.00 ± 2% slabinfo.kmalloc-32.num_slabs
1032 ± 5% -25.6% 768.00 ± 20% slabinfo.kmalloc-rcl-128.active_objs
1032 ± 5% -25.6% 768.00 ± 20% slabinfo.kmalloc-rcl-128.num_objs
3280 ± 2% -24.4% 2480 slabinfo.pid.active_objs
3280 ± 2% -24.4% 2480 slabinfo.pid.num_objs
3793 -31.8% 2588 ± 3% slabinfo.signal_cache.active_objs
3812 -32.1% 2588 ± 3% slabinfo.signal_cache.num_objs
4389 ± 2% -17.5% 3621 ± 2% slabinfo.task_delay_info.active_objs
4389 ± 2% -17.5% 3621 ± 2% slabinfo.task_delay_info.num_objs
16772 +10.9% 18600 slabinfo.vm_area_struct.active_objs
16801 +10.9% 18632 slabinfo.vm_area_struct.num_objs
26215 -26.5% 19277 ± 10% sched_debug.cfs_rq:/.exec_clock.avg
31037 ± 4% -22.1% 24191 ± 3% sched_debug.cfs_rq:/.exec_clock.max
25628 -27.8% 18508 ± 10% sched_debug.cfs_rq:/.exec_clock.min
46.85 ± 18% +31.1% 61.42 ± 13% sched_debug.cfs_rq:/.load_avg.avg
920636 -33.9% 608944 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
1008783 ± 2% -31.7% 689236 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
900473 -34.7% 587753 ± 10% sched_debug.cfs_rq:/.min_vruntime.min
0.27 ± 5% +17.2% 0.32 ± 3% sched_debug.cfs_rq:/.nr_running.stddev
145.48 ± 36% +38.3% 201.16 ± 34% sched_debug.cfs_rq:/.util_avg.avg
128190 ± 14% +23.4% 158225 ± 4% sched_debug.cpu.avg_idle.stddev
18406 -42.6% 10573 ± 10% sched_debug.cpu.curr->pid.max
0.13 ± 50% +53.6% 0.20 ± 40% sched_debug.cpu.nr_running.avg
0.27 ± 6% +27.8% 0.34 ± 13% sched_debug.cpu.nr_running.stddev
36.54 ± 14% -26.7% 26.79 ± 5% sched_debug.cpu.nr_uninterruptible.max
-29.92 -21.7% -23.43 sched_debug.cpu.nr_uninterruptible.min
15.18 ± 10% -28.0% 10.93 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
10011 ± 2% +19.5% 11964 ± 11% sched_debug.cpu.sched_goidle.min
1220 ± 10% -17.5% 1007 ± 11% sched_debug.cpu.sched_goidle.stddev
16483 -17.6% 13588 ± 10% sched_debug.cpu.ttwu_count.avg
14875 -17.5% 12267 ± 11% sched_debug.cpu.ttwu_count.min
7897 -15.1% 6708 ± 10% sched_debug.cpu.ttwu_local.avg
8676 -16.4% 7257 ± 9% sched_debug.cpu.ttwu_local.max
332.21 ± 12% -30.4% 231.14 ± 8% sched_debug.cpu.ttwu_local.stddev
2.331e+09 -4.8% 2.219e+09 perf-stat.i.branch-instructions
31648910 ± 5% -12.9% 27554102 ± 3% perf-stat.i.branch-misses
66054221 ± 13% -20.9% 52232556 ± 11% perf-stat.i.cache-references
10908 +8.1% 11789 perf-stat.i.context-switches
2.51 ± 12% -22.8% 1.94 ± 13% perf-stat.i.cpi
2.296e+10 -14.6% 1.961e+10 perf-stat.i.cpu-cycles
4070 -40.4% 2424 perf-stat.i.cpu-migrations
2.258e+09 -4.8% 2.15e+09 perf-stat.i.dTLB-loads
1561835 -6.0% 1468144 perf-stat.i.dTLB-store-misses
1.224e+09 -4.8% 1.165e+09 perf-stat.i.dTLB-stores
1593761 ± 4% -9.5% 1442915 ± 5% perf-stat.i.iTLB-load-misses
1.468e+10 -4.8% 1.397e+10 perf-stat.i.instructions
0.50 ± 6% +18.3% 0.59 ± 5% perf-stat.i.ipc
314246 -5.1% 298289 perf-stat.i.minor-faults
250295 +4.8% 262228 perf-stat.i.node-loads
314245 -5.1% 298288 perf-stat.i.page-faults
1.36 ± 6% -0.1 1.24 ± 4% perf-stat.overall.branch-miss-rate%
1.56 -10.3% 1.40 perf-stat.overall.cpi
39.29 ± 3% -2.9 36.37 ± 5% perf-stat.overall.iTLB-load-miss-rate%
0.64 +11.4% 0.71 perf-stat.overall.ipc
2.322e+09 -4.8% 2.211e+09 perf-stat.ps.branch-instructions
31534886 ± 5% -12.9% 27455873 ± 3% perf-stat.ps.branch-misses
65820893 ± 13% -20.9% 52046355 ± 11% perf-stat.ps.cache-references
10869 +8.1% 11746 perf-stat.ps.context-switches
2.287e+10 -14.6% 1.954e+10 perf-stat.ps.cpu-cycles
4055 -40.4% 2416 perf-stat.ps.cpu-migrations
2.25e+09 -4.8% 2.142e+09 perf-stat.ps.dTLB-loads
1556081 -6.0% 1462781 perf-stat.ps.dTLB-store-misses
1.219e+09 -4.8% 1.161e+09 perf-stat.ps.dTLB-stores
1587991 ± 4% -9.5% 1437722 ± 5% perf-stat.ps.iTLB-load-misses
1.462e+10 -4.8% 1.392e+10 perf-stat.ps.instructions
313084 -5.1% 297198 perf-stat.ps.minor-faults
249390 +4.8% 261293 perf-stat.ps.node-loads
313083 -5.1% 297196 perf-stat.ps.page-faults
4.431e+12 -5.2% 4.201e+12 perf-stat.total.instructions
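The perf-stat cpi and ipc rows are ratios of the cycle and instruction counters listed above: at the parent commit, 2.296e+10 cycles over 1.468e+10 instructions yields the reported cpi of 1.56 (ipc 0.64). A sketch of that derivation, with the two counter values copied from the perf-stat.i rows (the rounding to two decimals matches the table):

```shell
# cpi = cycles / instructions, ipc = instructions / cycles (parent commit)
cycles=22960000000        # perf-stat.i.cpu-cycles   (2.296e+10)
instructions=14680000000  # perf-stat.i.instructions (1.468e+10)
awk -v c="$cycles" -v i="$instructions" \
    'BEGIN { printf "cpi=%.2f ipc=%.2f\n", c / i, i / c }'
```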
97508 ± 6% +18.6% 115660 ± 8% softirqs.CPU0.TIMER
67484 ± 2% -9.9% 60810 ± 2% softirqs.CPU1.RCU
66767 ± 2% -11.2% 59300 ± 4% softirqs.CPU10.RCU
64911 ± 2% -10.8% 57931 ± 2% softirqs.CPU12.RCU
64303 ± 2% -7.7% 59326 ± 5% softirqs.CPU13.RCU
65820 ± 3% -10.6% 58850 softirqs.CPU14.RCU
63480 ± 2% -9.8% 57277 ± 3% softirqs.CPU15.RCU
93784 ± 3% +12.1% 105169 ± 7% softirqs.CPU15.TIMER
65991 ± 2% -11.3% 58552 softirqs.CPU16.RCU
65084 ± 3% -9.3% 59027 softirqs.CPU17.RCU
65370 ± 4% -10.9% 58274 softirqs.CPU18.RCU
65250 ± 3% -10.1% 58678 ± 2% softirqs.CPU19.RCU
67231 ± 3% -13.0% 58495 softirqs.CPU2.RCU
142574 ± 8% -15.5% 120453 ± 4% softirqs.CPU2.TIMER
64807 ± 3% -10.4% 58081 ± 3% softirqs.CPU20.RCU
65897 ± 3% -12.2% 57884 softirqs.CPU21.RCU
64836 ± 5% -9.8% 58514 softirqs.CPU23.RCU
64896 -9.9% 58498 ± 2% softirqs.CPU24.RCU
63980 ± 2% -13.9% 55055 ± 3% softirqs.CPU25.RCU
62771 ± 3% -8.4% 57522 softirqs.CPU26.RCU
62248 -9.6% 56292 ± 2% softirqs.CPU27.NET_RX
64517 ± 3% -10.4% 57799 softirqs.CPU27.RCU
63900 -10.5% 57161 ± 2% softirqs.CPU28.RCU
64220 ± 3% -10.2% 57696 ± 2% softirqs.CPU29.RCU
63362 ± 3% -8.9% 57692 ± 2% softirqs.CPU30.RCU
63691 -10.5% 56985 softirqs.CPU31.RCU
64601 ± 2% -13.5% 55902 ± 3% softirqs.CPU32.RCU
64135 -11.0% 57072 ± 4% softirqs.CPU33.RCU
63714 ± 3% -11.7% 56250 ± 2% softirqs.CPU35.RCU
63763 ± 2% -9.9% 57419 ± 3% softirqs.CPU36.RCU
63773 ± 2% -10.8% 56911 softirqs.CPU38.RCU
64713 ± 5% -12.0% 56956 softirqs.CPU39.RCU
90979 ± 2% +35.7% 123432 ± 16% softirqs.CPU5.TIMER
64366 ± 2% -8.2% 59076 softirqs.CPU6.RCU
62134 ± 3% -8.0% 57156 softirqs.CPU7.NET_RX
65082 -11.8% 57371 ± 3% softirqs.CPU8.RCU
2582721 -10.1% 2322807 softirqs.RCU
1.00 ±173% +4600.0% 47.00 ±112% interrupts.63:PCI-MSI.54001693-edge.i40e-eth0-TxRx-28
148.50 ±160% -99.3% 1.00 ±100% interrupts.74:PCI-MSI.54001704-edge.i40e-eth0-TxRx-39
176898 ± 2% +77.6% 314184 interrupts.CAL:Function_call_interrupts
4539 ± 3% +69.9% 7714 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
1462 ± 5% -39.2% 889.00 ± 37% interrupts.CPU0.RES:Rescheduling_interrupts
1330 ± 4% +255.4% 4726 ± 7% interrupts.CPU0.TLB:TLB_shootdowns
4511 ± 2% +74.5% 7872 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
1347 +255.4% 4787 ± 4% interrupts.CPU1.TLB:TLB_shootdowns
4401 ± 3% +78.8% 7871 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
598.50 ± 4% -29.2% 423.50 ± 24% interrupts.CPU10.RES:Rescheduling_interrupts
1308 ± 2% +261.0% 4722 ± 5% interrupts.CPU10.TLB:TLB_shootdowns
4431 ± 2% +75.8% 7791 interrupts.CPU11.CAL:Function_call_interrupts
620.75 ± 4% -38.8% 380.00 ± 21% interrupts.CPU11.RES:Rescheduling_interrupts
1331 +250.9% 4673 ± 4% interrupts.CPU11.TLB:TLB_shootdowns
4474 ± 3% +75.3% 7843 interrupts.CPU12.CAL:Function_call_interrupts
638.75 ± 9% -52.7% 302.25 ± 18% interrupts.CPU12.RES:Rescheduling_interrupts
1335 ± 2% +256.0% 4754 ± 2% interrupts.CPU12.TLB:TLB_shootdowns
4510 ± 3% +73.1% 7809 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
635.75 ± 4% -48.4% 328.25 ± 14% interrupts.CPU13.RES:Rescheduling_interrupts
1355 ± 2% +249.0% 4730 ± 4% interrupts.CPU13.TLB:TLB_shootdowns
4370 ± 2% +80.7% 7896 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
651.00 ± 10% -51.8% 314.00 ± 11% interrupts.CPU14.RES:Rescheduling_interrupts
1413 ± 12% +237.8% 4774 ± 4% interrupts.CPU14.TLB:TLB_shootdowns
4487 +76.3% 7912 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
667.25 ± 12% -53.3% 311.75 ± 13% interrupts.CPU15.RES:Rescheduling_interrupts
1327 ± 3% +265.1% 4847 ± 2% interrupts.CPU15.TLB:TLB_shootdowns
4478 +76.0% 7880 ± 2% interrupts.CPU16.CAL:Function_call_interrupts
587.00 ± 3% -49.4% 297.25 ± 15% interrupts.CPU16.RES:Rescheduling_interrupts
1335 ± 2% +257.6% 4774 ± 3% interrupts.CPU16.TLB:TLB_shootdowns
4508 ± 3% +74.6% 7869 ± 2% interrupts.CPU17.CAL:Function_call_interrupts
597.50 ± 3% -43.8% 335.50 ± 22% interrupts.CPU17.RES:Rescheduling_interrupts
1346 ± 4% +252.7% 4747 ± 3% interrupts.CPU17.TLB:TLB_shootdowns
4437 +78.0% 7900 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
696.25 ± 15% -45.2% 381.50 ± 26% interrupts.CPU18.RES:Rescheduling_interrupts
1399 ± 10% +241.1% 4774 ± 6% interrupts.CPU18.TLB:TLB_shootdowns
4414 +76.3% 7783 ± 2% interrupts.CPU19.CAL:Function_call_interrupts
762.00 ± 14% -59.4% 309.25 ± 16% interrupts.CPU19.RES:Rescheduling_interrupts
1341 ± 2% +248.7% 4677 ± 4% interrupts.CPU19.TLB:TLB_shootdowns
4426 ± 2% +79.0% 7921 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
1324 ± 4% +260.4% 4771 ± 5% interrupts.CPU2.TLB:TLB_shootdowns
4330 +78.5% 7728 ± 2% interrupts.CPU20.CAL:Function_call_interrupts
595.75 ± 3% -48.9% 304.25 ± 15% interrupts.CPU20.RES:Rescheduling_interrupts
1264 +267.6% 4646 ± 4% interrupts.CPU20.TLB:TLB_shootdowns
4358 ± 3% +80.4% 7862 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
642.75 ± 15% -59.7% 259.25 ± 8% interrupts.CPU21.RES:Rescheduling_interrupts
1243 ± 2% +277.8% 4697 ± 4% interrupts.CPU21.TLB:TLB_shootdowns
4432 +80.1% 7984 interrupts.CPU22.CAL:Function_call_interrupts
588.75 ± 6% -45.4% 321.50 ± 20% interrupts.CPU22.RES:Rescheduling_interrupts
1270 +279.8% 4825 ± 3% interrupts.CPU22.TLB:TLB_shootdowns
4365 ± 2% +77.5% 7746 ± 2% interrupts.CPU23.CAL:Function_call_interrupts
600.50 ± 3% -47.3% 316.25 ± 20% interrupts.CPU23.RES:Rescheduling_interrupts
1252 ± 2% +265.4% 4576 ± 4% interrupts.CPU23.TLB:TLB_shootdowns
4326 ± 3% +80.9% 7826 interrupts.CPU24.CAL:Function_call_interrupts
650.00 ± 11% -50.1% 324.25 ± 31% interrupts.CPU24.RES:Rescheduling_interrupts
1243 ± 3% +278.1% 4702 ± 3% interrupts.CPU24.TLB:TLB_shootdowns
4355 ± 3% +77.3% 7721 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
651.25 ± 10% -57.9% 274.25 ± 7% interrupts.CPU25.RES:Rescheduling_interrupts
1262 ± 4% +271.3% 4685 ± 4% interrupts.CPU25.TLB:TLB_shootdowns
4363 ± 3% +78.2% 7777 ± 5% interrupts.CPU26.CAL:Function_call_interrupts
581.00 ± 5% -45.4% 317.00 ± 17% interrupts.CPU26.RES:Rescheduling_interrupts
1245 ± 3% +280.8% 4741 ± 8% interrupts.CPU26.TLB:TLB_shootdowns
4363 ± 2% +81.5% 7919 interrupts.CPU27.CAL:Function_call_interrupts
625.50 ± 7% -40.0% 375.50 ± 35% interrupts.CPU27.RES:Rescheduling_interrupts
1290 ± 2% +270.6% 4783 ± 4% interrupts.CPU27.TLB:TLB_shootdowns
1.00 ±173% +4550.0% 46.50 ±114% interrupts.CPU28.63:PCI-MSI.54001693-edge.i40e-eth0-TxRx-28
4416 +74.8% 7722 ± 2% interrupts.CPU28.CAL:Function_call_interrupts
600.50 ± 8% -54.7% 271.75 ± 6% interrupts.CPU28.RES:Rescheduling_interrupts
1262 +263.2% 4586 ± 3% interrupts.CPU28.TLB:TLB_shootdowns
4400 ± 2% +79.3% 7890 ± 3% interrupts.CPU29.CAL:Function_call_interrupts
584.50 ± 7% -44.4% 325.00 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
1267 +278.5% 4795 ± 5% interrupts.CPU29.TLB:TLB_shootdowns
4423 ± 2% +79.4% 7936 interrupts.CPU3.CAL:Function_call_interrupts
1254 ± 4% +285.7% 4838 ± 3% interrupts.CPU3.TLB:TLB_shootdowns
4377 ± 3% +79.4% 7851 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
592.50 ± 4% -47.8% 309.50 ± 21% interrupts.CPU30.RES:Rescheduling_interrupts
1272 ± 3% +274.9% 4769 ± 3% interrupts.CPU30.TLB:TLB_shootdowns
4351 ± 3% +79.7% 7819 interrupts.CPU31.CAL:Function_call_interrupts
651.50 ± 6% -57.3% 278.25 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
1284 ± 2% +271.1% 4766 ± 3% interrupts.CPU31.TLB:TLB_shootdowns
4338 ± 3% +81.8% 7886 ± 5% interrupts.CPU32.CAL:Function_call_interrupts
576.25 ± 5% -45.7% 312.75 ± 9% interrupts.CPU32.RES:Rescheduling_interrupts
1278 ± 2% +275.4% 4797 ± 5% interrupts.CPU32.TLB:TLB_shootdowns
4391 ± 3% +77.1% 7778 interrupts.CPU33.CAL:Function_call_interrupts
596.50 ± 3% -52.0% 286.50 ± 7% interrupts.CPU33.RES:Rescheduling_interrupts
1278 ± 2% +562.1% 8463 ± 78% interrupts.CPU33.TLB:TLB_shootdowns
4385 ± 2% +81.8% 7974 ± 2% interrupts.CPU34.CAL:Function_call_interrupts
642.50 -47.6% 336.50 ± 30% interrupts.CPU34.RES:Rescheduling_interrupts
1259 ± 2% +280.2% 4789 ± 5% interrupts.CPU34.TLB:TLB_shootdowns
4416 +80.0% 7951 interrupts.CPU35.CAL:Function_call_interrupts
779.25 ± 33% -56.6% 338.25 ± 10% interrupts.CPU35.RES:Rescheduling_interrupts
1268 +279.4% 4812 ± 2% interrupts.CPU35.TLB:TLB_shootdowns
4362 ± 2% +79.4% 7827 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
625.00 ± 7% -55.7% 276.75 ± 9% interrupts.CPU36.RES:Rescheduling_interrupts
1277 +270.4% 4731 ± 5% interrupts.CPU36.TLB:TLB_shootdowns
4421 +78.2% 7877 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
586.25 ± 2% -50.1% 292.50 ± 9% interrupts.CPU37.RES:Rescheduling_interrupts
1315 +261.2% 4751 ± 4% interrupts.CPU37.TLB:TLB_shootdowns
4470 +77.1% 7918 ± 2% interrupts.CPU38.CAL:Function_call_interrupts
629.50 ± 5% -46.3% 337.75 ± 21% interrupts.CPU38.RES:Rescheduling_interrupts
1294 ± 3% +267.9% 4760 ± 5% interrupts.CPU38.TLB:TLB_shootdowns
148.00 ±161% -99.5% 0.75 ±110% interrupts.CPU39.74:PCI-MSI.54001704-edge.i40e-eth0-TxRx-39
4382 ± 2% +76.0% 7711 ± 2% interrupts.CPU39.CAL:Function_call_interrupts
642.50 ± 9% -59.3% 261.75 ± 11% interrupts.CPU39.RES:Rescheduling_interrupts
1282 ± 2% +256.9% 4576 ± 4% interrupts.CPU39.TLB:TLB_shootdowns
4467 +79.4% 8013 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
1313 +270.4% 4866 ± 4% interrupts.CPU4.TLB:TLB_shootdowns
4457 ± 3% +74.3% 7767 interrupts.CPU5.CAL:Function_call_interrupts
849.50 ± 23% -60.8% 332.75 ± 7% interrupts.CPU5.RES:Rescheduling_interrupts
1334 ± 3% +252.7% 4705 ± 3% interrupts.CPU5.TLB:TLB_shootdowns
4489 +75.4% 7874 interrupts.CPU6.CAL:Function_call_interrupts
676.75 ± 13% -52.7% 320.25 ± 16% interrupts.CPU6.RES:Rescheduling_interrupts
1318 ± 2% +267.9% 4849 ± 6% interrupts.CPU6.TLB:TLB_shootdowns
4453 ± 2% +79.1% 7974 interrupts.CPU7.CAL:Function_call_interrupts
720.75 ± 20% -45.2% 395.25 ± 29% interrupts.CPU7.RES:Rescheduling_interrupts
1326 ± 3% +264.2% 4830 interrupts.CPU7.TLB:TLB_shootdowns
4509 ± 2% +76.1% 7942 interrupts.CPU8.CAL:Function_call_interrupts
762.00 ± 16% -53.0% 358.00 ± 36% interrupts.CPU8.RES:Rescheduling_interrupts
1348 ± 3% +257.9% 4827 ± 3% interrupts.CPU8.TLB:TLB_shootdowns
4498 ± 2% +74.2% 7834 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
674.00 ± 13% -47.9% 351.00 ± 24% interrupts.CPU9.RES:Rescheduling_interrupts
1345 ± 2% +251.7% 4731 ± 5% interrupts.CPU9.TLB:TLB_shootdowns
26941 ± 5% -45.2% 14752 ± 12% interrupts.RES:Rescheduling_interrupts
52147 +271.4% 193667 ± 5% interrupts.TLB:TLB_shootdowns
27.61 ± 2% -4.9 22.68 ± 8% perf-profile.calltrace.cycles-pp.string_rtns_1
4.09 ± 7% -1.9 2.24 ± 9% perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.90 ± 7% -1.8 2.06 ± 10% perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
4.31 ± 7% -1.8 2.48 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
4.31 ± 7% -1.8 2.48 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
4.58 ± 7% -1.8 2.75 ± 10% perf-profile.calltrace.cycles-pp.brk
2.80 ± 8% -1.5 1.28 ± 11% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
2.77 ± 8% -1.5 1.26 ± 11% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk
2.20 ± 11% -1.4 0.76 ± 13% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
3.42 ± 8% -1.3 2.08 ± 5% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.29 ± 3% -0.6 3.65 ± 8% perf-profile.calltrace.cycles-pp.div_long
4.66 -0.6 4.08 ± 8% perf-profile.calltrace.cycles-pp.add_int.add_int
2.41 ± 3% -0.5 1.93 ± 9% perf-profile.calltrace.cycles-pp.mem_rtns_1
3.05 ± 3% -0.4 2.64 ± 8% perf-profile.calltrace.cycles-pp.add_short.add_short
1.67 ± 2% -0.4 1.29 ± 8% perf-profile.calltrace.cycles-pp.creat
3.03 ± 5% -0.4 2.65 ± 2% perf-profile.calltrace.cycles-pp.execve
3.00 ± 4% -0.4 2.62 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
3.00 ± 4% -0.4 2.62 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.00 ± 4% -0.4 2.62 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
2.98 ± 4% -0.4 2.60 ± 2% perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
1.52 ± 2% -0.3 1.18 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
1.51 ± 3% -0.3 1.17 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.24 ± 13% -0.3 0.92 ± 17% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.36 ± 3% -0.3 1.06 ± 7% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.21 ± 3% -0.3 0.92 ± 6% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.20 ± 3% -0.3 0.92 ± 6% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 4% -0.3 0.44 ± 58% perf-profile.calltrace.cycles-pp.write
2.14 ± 4% -0.3 1.87 ± 2% perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.13 ± 4% -0.3 1.87 ± 2% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
1.92 ± 5% -0.1 1.77 ± 6% perf-profile.calltrace.cycles-pp.div_short
1.08 ± 3% -0.1 0.94 ± 4% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
1.07 ± 3% -0.1 0.93 ± 4% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
1.11 ± 3% -0.1 0.98 ± 4% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
0.57 ± 3% +0.0 0.61 ± 2% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
0.56 +0.2 0.71 ± 2% perf-profile.calltrace.cycles-pp._dl_addr
0.13 ±173% +0.6 0.73 ± 18% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.13 ±173% +0.6 0.73 ± 18% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
0.13 ±173% +0.6 0.73 ± 18% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
18.80 ± 10% +8.7 27.45 ± 9% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
18.80 ± 10% +8.7 27.45 ± 9% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
18.78 ± 10% +8.7 27.44 ± 9% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.21 ± 11% +9.0 28.18 ± 9% perf-profile.calltrace.cycles-pp.secondary_startup_64
16.88 ± 10% +9.0 25.92 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
17.10 ± 10% +9.0 26.14 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.76 ± 9% +10.1 23.87 ± 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
27.74 ± 2% -4.9 22.79 ± 8% perf-profile.children.cycles-pp.string_rtns_1
4.60 ± 7% -1.8 2.76 ± 10% perf-profile.children.cycles-pp.brk
2.25 ± 14% -1.5 0.71 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
2.40 ± 12% -1.5 0.89 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
19.05 ± 4% -1.4 17.61 ± 5% perf-profile.children.cycles-pp.do_syscall_64
19.09 ± 4% -1.4 17.66 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
4.30 ± 7% -1.3 2.96 ± 4% perf-profile.children.cycles-pp.__do_munmap
4.09 ± 7% -1.3 2.75 ± 5% perf-profile.children.cycles-pp.__x64_sys_brk
3.68 ± 8% -1.3 2.38 ± 4% perf-profile.children.cycles-pp.unmap_region
2.70 ± 9% -1.3 1.42 ± 5% perf-profile.children.cycles-pp.release_pages
3.34 ± 7% -1.2 2.15 ± 4% perf-profile.children.cycles-pp.tlb_flush_mmu
3.38 ± 7% -1.2 2.18 ± 4% perf-profile.children.cycles-pp.tlb_finish_mmu
4.29 ± 3% -0.6 3.66 ± 8% perf-profile.children.cycles-pp.div_long
4.66 -0.6 4.08 ± 8% perf-profile.children.cycles-pp.add_int
2.43 ± 3% -0.5 1.94 ± 9% perf-profile.children.cycles-pp.mem_rtns_1
1.36 ± 11% -0.5 0.91 ± 15% perf-profile.children.cycles-pp.write
3.05 ± 3% -0.4 2.64 ± 8% perf-profile.children.cycles-pp.add_short
1.70 ± 2% -0.4 1.31 ± 8% perf-profile.children.cycles-pp.creat
0.89 ± 20% -0.4 0.50 ± 26% perf-profile.children.cycles-pp.vprintk_emit
3.03 ± 5% -0.4 2.65 ± 2% perf-profile.children.cycles-pp.execve
1.28 ± 14% -0.3 0.95 ± 18% perf-profile.children.cycles-pp.menu_select
1.19 ± 14% -0.3 0.87 ± 11% perf-profile.children.cycles-pp.vfs_write
1.15 ± 14% -0.3 0.83 ± 11% perf-profile.children.cycles-pp.new_sync_write
1.20 ± 14% -0.3 0.88 ± 10% perf-profile.children.cycles-pp.ksys_write
0.64 ± 21% -0.3 0.34 ± 35% perf-profile.children.cycles-pp._fini
0.64 ± 21% -0.3 0.34 ± 35% perf-profile.children.cycles-pp.devkmsg_write
0.64 ± 21% -0.3 0.34 ± 35% perf-profile.children.cycles-pp.devkmsg_emit
0.84 ± 9% -0.2 0.63 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.79 ± 21% -0.2 0.58 ± 19% perf-profile.children.cycles-pp.console_unlock
0.72 ± 20% -0.2 0.52 ± 20% perf-profile.children.cycles-pp.serial8250_console_write
0.70 ± 20% -0.2 0.51 ± 19% perf-profile.children.cycles-pp.wait_for_xmitr
0.68 ± 21% -0.2 0.50 ± 21% perf-profile.children.cycles-pp.uart_console_write
0.66 ± 20% -0.2 0.49 ± 20% perf-profile.children.cycles-pp.serial8250_console_putchar
0.54 ± 21% -0.2 0.36 ± 20% perf-profile.children.cycles-pp.io_serial_in
0.65 ± 18% -0.2 0.48 ± 21% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.52 ± 11% -0.2 0.36 ± 8% perf-profile.children.cycles-pp.lru_add_drain
0.51 ± 11% -0.2 0.36 ± 8% perf-profile.children.cycles-pp.lru_add_drain_cpu
1.92 ± 5% -0.1 1.78 ± 6% perf-profile.children.cycles-pp.div_short
0.51 ± 17% -0.1 0.38 ± 22% perf-profile.children.cycles-pp.tick_nohz_next_event
0.53 ± 5% -0.1 0.45 ± 7% perf-profile.children.cycles-pp.kill
0.42 ± 6% -0.1 0.35 ± 2% perf-profile.children.cycles-pp.down_write
0.22 ± 9% -0.1 0.16 ± 2% perf-profile.children.cycles-pp.smpboot_thread_fn
0.40 ± 5% -0.1 0.34 ± 3% perf-profile.children.cycles-pp.__lru_cache_add
0.36 ± 3% -0.1 0.30 ± 4% perf-profile.children.cycles-pp.malloc
0.21 ± 2% -0.1 0.16 ± 15% perf-profile.children.cycles-pp.free
0.27 ± 8% -0.1 0.22 ± 8% perf-profile.children.cycles-pp.unlink
0.12 ± 9% -0.0 0.08 ± 12% perf-profile.children.cycles-pp.link
0.12 ± 3% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.rwsem_down_write_failed
0.12 ± 13% -0.0 0.10 ± 9% perf-profile.children.cycles-pp.vfprintf
0.10 ± 10% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.run_ksoftirqd
0.11 ± 7% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.compar2
0.12 ± 3% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.09 ± 4% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.mul_short
0.07 ± 14% -0.0 0.06 ± 15% perf-profile.children.cycles-pp.lseek64
0.09 ± 12% +0.0 0.12 ± 13% perf-profile.children.cycles-pp.inet_recvmsg
0.15 ± 7% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.page_add_file_rmap
0.10 ± 11% +0.0 0.13 ± 13% perf-profile.children.cycles-pp.sock_read_iter
0.32 ± 2% +0.0 0.35 perf-profile.children.cycles-pp.alloc_set_pte
0.29 ± 7% +0.0 0.33 ± 3% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.28 ± 8% +0.0 0.32 perf-profile.children.cycles-pp.vfs_read
0.17 ± 9% +0.0 0.21 ± 9% perf-profile.children.cycles-pp.___perf_sw_event
0.04 ± 58% +0.0 0.08 ± 15% perf-profile.children.cycles-pp.schedule_idle
0.24 ± 10% +0.0 0.28 ± 4% perf-profile.children.cycles-pp.ksys_read
0.14 ± 8% +0.0 0.19 ± 6% perf-profile.children.cycles-pp.__x64_sys_munmap
0.35 ± 5% +0.0 0.40 ± 5% perf-profile.children.cycles-pp.sync_regs
0.18 ± 7% +0.1 0.23 ± 4% perf-profile.children.cycles-pp.new_sync_read
0.68 ± 3% +0.1 0.74 ± 3% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.02 ±173% +0.1 0.11 ± 10% perf-profile.children.cycles-pp.wake_up_klogd_work_func
0.72 ± 2% +0.1 0.82 ± 2% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.62 ± 3% +0.1 0.72 ± 2% perf-profile.children.cycles-pp.flush_tlb_func_common
0.59 ± 3% +0.1 0.69 ± 2% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.39 ± 13% +0.1 0.49 ± 9% perf-profile.children.cycles-pp.clockevents_program_event
0.56 +0.2 0.71 ± 2% perf-profile.children.cycles-pp._dl_addr
0.41 ± 33% +0.3 0.73 ± 18% perf-profile.children.cycles-pp.start_kernel
18.80 ± 10% +8.7 27.45 ± 9% perf-profile.children.cycles-pp.start_secondary
19.22 ± 11% +9.0 28.19 ± 9% perf-profile.children.cycles-pp.do_idle
19.21 ± 11% +9.0 28.18 ± 9% perf-profile.children.cycles-pp.secondary_startup_64
19.21 ± 11% +9.0 28.18 ± 9% perf-profile.children.cycles-pp.cpu_startup_entry
17.47 ± 10% +9.4 26.82 ± 9% perf-profile.children.cycles-pp.cpuidle_enter_state
17.48 ± 10% +9.4 26.83 ± 9% perf-profile.children.cycles-pp.cpuidle_enter
14.07 ± 9% +9.9 23.99 ± 9% perf-profile.children.cycles-pp.intel_idle
27.51 ± 2% -4.9 22.62 ± 8% perf-profile.self.cycles-pp.string_rtns_1
2.25 ± 14% -1.5 0.71 ± 7% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
4.27 ± 3% -0.6 3.63 ± 8% perf-profile.self.cycles-pp.div_long
4.62 -0.6 4.06 ± 8% perf-profile.self.cycles-pp.add_int
2.40 ± 3% -0.5 1.92 ± 9% perf-profile.self.cycles-pp.mem_rtns_1
3.03 ± 3% -0.4 2.62 ± 8% perf-profile.self.cycles-pp.add_short
0.54 ± 21% -0.2 0.36 ± 20% perf-profile.self.cycles-pp.io_serial_in
0.16 ± 5% -0.1 0.10 ± 15% perf-profile.self.cycles-pp.free
0.33 ± 3% -0.1 0.28 ± 7% perf-profile.self.cycles-pp.malloc
0.11 ± 7% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.compar2
0.09 ± 7% +0.0 0.11 ± 7% perf-profile.self.cycles-pp.copy_page
0.15 ± 8% +0.0 0.18 ± 8% perf-profile.self.cycles-pp.___perf_sw_event
0.35 ± 4% +0.0 0.40 ± 6% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__fput
0.66 ± 2% +0.1 0.72 ± 3% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.44 ± 4% +0.1 0.51 perf-profile.self.cycles-pp._raw_spin_lock
0.61 ± 6% +0.1 0.69 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.59 ± 3% +0.1 0.69 perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.55 +0.1 0.70 ± 3% perf-profile.self.cycles-pp._dl_addr
0.00 +0.2 0.16 ± 22% perf-profile.self.cycles-pp.find_busiest_group
1.17 ± 3% +0.2 1.34 ± 7% perf-profile.self.cycles-pp.do_syscall_64
14.04 ± 9% +9.9 23.97 ± 9% perf-profile.self.cycles-pp.intel_idle
[The original report attaches dumb-terminal trend plots for the following metrics, each comparing bisect-good (+) samples against bisect-bad (O) samples across runs: reaim.time.user_time, reaim.time.system_time, reaim.time.percent_of_cpu_this_job_got, reaim.time.minor_page_faults, reaim.parent_time, reaim.child_systime, reaim.child_utime, reaim.jobs_per_min, reaim.jobs_per_min_child, reaim.std_dev_time, reaim.std_dev_percent, reaim.jti, reaim.max_jobs_per_min, reaim.workload. The O samples are consistently shifted relative to the + baseline, in line with the reported regression.]
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep3: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/process/50%/debian-x86_64-2019-05-14.cgz/lkp-bdw-ep3/hackbench/0xb000036
commit:
6824881809 ("sched/fair: rename sum_nr_running to sum_h_nr_running")
e1f4f2a13d ("sched/fair: rework load_balance")
6824881809407252 e1f4f2a13d35480157bd3b83255
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 150% 3:4 dmesg.WARNING:at_ip__fsnotify_parent/0x
%stddev %change %stddev
\ | \
22729 ± 24% -49.8% 11419 ± 60% numa-numastat.node1.other_node
17257 +2.8% 17744 proc-vmstat.nr_page_table_pages
11516 ± 3% -8.3% 10554 ± 3% softirqs.CPU1.SCHED
11094 ± 2% -8.0% 10212 ± 5% softirqs.CPU14.SCHED
22158 ± 11% -50.1% 11066 ± 93% numa-vmstat.node0.nr_shmem
11883 ± 13% -24.1% 9017 ± 23% numa-vmstat.node0.nr_slab_reclaimable
5938 ± 92% +190.6% 17260 ± 39% numa-vmstat.node0.numa_other
47539 ± 13% -24.1% 36069 ± 23% numa-meminfo.node0.KReclaimable
12583 ± 19% -11.4% 11145 ± 19% numa-meminfo.node0.Mapped
47539 ± 13% -24.1% 36069 ± 23% numa-meminfo.node0.SReclaimable
88559 ± 11% -50.0% 44323 ± 93% numa-meminfo.node0.Shmem
14.07 +2.0% 14.34 perf-stat.i.MPKI
1.506e+09 +1.9% 1.535e+09 perf-stat.i.cache-references
14.06 +2.0% 14.34 perf-stat.overall.MPKI
1.504e+09 +1.9% 1.533e+09 perf-stat.ps.cache-references
995.50 ± 9% -63.3% 365.00 ± 4% interrupts.34:PCI-MSI.3145731-edge.eth0-TxRx-2
740.50 -50.3% 368.25 ± 3% interrupts.38:PCI-MSI.3145735-edge.eth0-TxRx-6
995.50 ± 9% -63.3% 365.00 ± 4% interrupts.CPU13.34:PCI-MSI.3145731-edge.eth0-TxRx-2
740.50 -50.3% 368.25 ± 3% interrupts.CPU17.38:PCI-MSI.3145735-edge.eth0-TxRx-6
7.00 ± 71% +746.4% 59.25 ± 73% interrupts.CPU61.TLB:TLB_shootdowns
1326 ± 9% +94.8% 2583 ± 30% interrupts.TLB:TLB_shootdowns
3336 ± 14% -39.8% 2009 ± 22% sched_debug.cfs_rq:/.load.min
30.14 -8.5% 27.57 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.max
106.01 ± 11% -16.0% 89.03 ± 11% sched_debug.cpu.clock.stddev
106.01 ± 11% -16.0% 89.03 ± 11% sched_debug.cpu.clock_task.stddev
0.00 ± 10% -14.6% 0.00 ± 8% sched_debug.cpu.next_balance.stddev
2.32 ± 5% -9.8% 2.09 ± 6% sched_debug.cpu.nr_running.min
1.51 ± 4% -6.1% 1.42 ± 3% sched_debug.cpu.nr_uninterruptible.avg
0.70 ± 2% -0.0 0.65 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.64 -0.0 0.60 ± 4% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.56 +0.0 0.58 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.89 +0.0 0.92 perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.52 +0.0 0.54 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
0.97 +0.0 1.00 perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.04 +0.0 1.07 perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.06 +0.0 1.10 perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.pipe_wait
2.37 +0.1 2.43 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
1.92 +0.1 1.99 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
17.42 +0.3 17.73 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
1.37 -0.1 1.28 ± 3% perf-profile.children.cycles-pp.file_has_perm
0.52 -0.1 0.46 ± 8% perf-profile.children.cycles-pp.reschedule_interrupt
0.91 ± 2% -0.1 0.84 ± 3% perf-profile.children.cycles-pp.avc_has_perm
0.45 -0.0 0.42 ± 2% perf-profile.children.cycles-pp.prepare_to_wait
0.25 ± 4% -0.0 0.23 ± 7% perf-profile.children.cycles-pp.rw_verify_area
0.13 -0.0 0.11 ± 6% perf-profile.children.cycles-pp.task_tick_fair
0.28 -0.0 0.26 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.21 ± 2% -0.0 0.19 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
0.18 ± 2% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
0.18 ± 2% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.update_process_times
0.16 +0.0 0.17 ± 2% perf-profile.children.cycles-pp.resched_curr
2.19 +0.1 2.25 perf-profile.children.cycles-pp.reweight_entity
1.21 +0.1 1.28 perf-profile.children.cycles-pp.set_next_entity
17.55 +0.3 17.86 perf-profile.children.cycles-pp.pipe_wait
0.89 ± 2% -0.1 0.83 ± 2% perf-profile.self.cycles-pp.avc_has_perm
0.71 ± 2% -0.0 0.67 perf-profile.self.cycles-pp.try_to_wake_up
0.24 ± 4% -0.0 0.22 ± 6% perf-profile.self.cycles-pp.rw_verify_area
0.23 ± 4% -0.0 0.21 ± 6% perf-profile.self.cycles-pp.ksys_write
0.27 -0.0 0.25 ± 2% perf-profile.self.cycles-pp.prepare_to_wait
0.96 +0.0 0.98 perf-profile.self.cycles-pp.enqueue_task_fair
0.65 +0.0 0.67 perf-profile.self.cycles-pp.__calc_delta
0.30 +0.0 0.33 ± 4% perf-profile.self.cycles-pp.schedule
0.86 +0.0 0.89 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.28 ± 3% +0.0 0.32 ± 3% perf-profile.self.cycles-pp.rb_erase_cached
0.91 +0.0 0.95 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
***************************************************************************************************
lkp-bdw-ep3: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/threads/50%/debian-x86_64-2019-05-14.cgz/lkp-bdw-ep3/hackbench/0xb000036
commit:
6824881809 ("sched/fair: rename sum_nr_running to sum_h_nr_running")
e1f4f2a13d ("sched/fair: rework load_balance")
6824881809407252 e1f4f2a13d35480157bd3b83255
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:2 50% 1:4 dmesg.WARNING:at_ip__mutex_lock/0x
%stddev %change %stddev
\ | \
127594 -1.4% 125832 hackbench.throughput
19993169 ± 8% +43.1% 28618199 ± 13% cpuidle.C1E.time
992796 ± 4% -6.7% 926759 ± 7% numa-meminfo.node0.MemUsed
8020 ± 3% +17.0% 9387 ± 6% softirqs.CPU1.SCHED
692877 ± 4% +21.3% 840432 ± 10% turbostat.C1E
18474747 -1.1% 18272710 vmstat.system.cs
37.61 -1.3% 37.10 boot-time.boot
2865 -0.7% 2843 boot-time.idle
5762437 ± 5% -12.3% 5050885 ± 7% numa-vmstat.node1.numa_hit
5605653 ± 5% -12.6% 4896930 ± 7% numa-vmstat.node1.numa_local
18247 -0.9% 18088 proc-vmstat.nr_slab_reclaimable
19442 ± 6% +12.9% 21952 ± 4% proc-vmstat.pgactivate
610118 ± 12% -18.5% 497215 ± 11% sched_debug.cfs_rq:/.spread0.stddev
3342 ± 53% -53.2% 1563 ± 4% sched_debug.cpu.avg_idle.min
-844.09 +13.7% -959.51 sched_debug.cpu.nr_uninterruptible.min
342.78 +15.9% 397.15 ± 6% sched_debug.cpu.nr_uninterruptible.stddev
1917 ± 3% -15.1% 1627 ± 11% slabinfo.UNIX.active_objs
1917 ± 3% -15.1% 1627 ± 11% slabinfo.UNIX.num_objs
9091 ± 4% -8.2% 8348 ± 2% slabinfo.kmalloc-512.active_objs
9250 ± 4% -8.3% 8484 ± 2% slabinfo.kmalloc-512.num_objs
594.00 +23.3% 732.50 ± 7% slabinfo.skbuff_ext_cache.num_objs
3562 ± 3% -13.6% 3078 ± 9% slabinfo.sock_inode_cache.active_objs
3562 ± 3% -13.6% 3078 ± 9% slabinfo.sock_inode_cache.num_objs
12.06 +3.5% 12.49 perf-stat.i.MPKI
2.542e+10 -2.0% 2.492e+10 perf-stat.i.branch-instructions
1.93 +2.0% 1.97 perf-stat.i.cpi
20617 ± 32% -46.7% 10992 ± 51% perf-stat.i.cycles-between-cache-misses
3.761e+10 -1.8% 3.695e+10 perf-stat.i.dTLB-loads
26224314 -1.3% 25874600 perf-stat.i.dTLB-store-misses
2.385e+10 -1.7% 2.345e+10 perf-stat.i.dTLB-stores
1780038 ± 4% -12.4% 1559944 ± 6% perf-stat.i.iTLB-loads
1.263e+11 -2.0% 1.238e+11 perf-stat.i.instructions
0.52 -2.0% 0.51 perf-stat.i.ipc
12.03 +3.6% 12.46 perf-stat.overall.MPKI
1.93 +2.0% 1.97 perf-stat.overall.cpi
8150 ± 31% -35.1% 5286 ± 39% perf-stat.overall.cycles-between-cache-misses
0.52 -2.0% 0.51 perf-stat.overall.ipc
2.538e+10 -1.9% 2.488e+10 perf-stat.ps.branch-instructions
3.755e+10 -1.8% 3.689e+10 perf-stat.ps.dTLB-loads
26183203 -1.3% 25834588 perf-stat.ps.dTLB-store-misses
2.381e+10 -1.7% 2.341e+10 perf-stat.ps.dTLB-stores
1777307 ± 4% -12.4% 1557519 ± 6% perf-stat.ps.iTLB-loads
1.261e+11 -2.0% 1.236e+11 perf-stat.ps.instructions
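The cpi and ipc rows above are reciprocals, which gives an easy consistency check on the table (my own check, not part of the report):

```python
# ipc = 1 / cpi; the report rounds both to two decimals.
# Pairs are (cpi, ipc) for the base and reworked commits.
for cpi, ipc in [(1.93, 0.52), (1.97, 0.51)]:
    assert round(1 / cpi, 2) == ipc
print("cpi and ipc rows are mutually consistent")
```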
1115 -30.9% 771.00 ± 12% interrupts.32:PCI-MSI.3145729-edge.eth0-TxRx-0
1115 -30.9% 771.00 ± 12% interrupts.CPU11.32:PCI-MSI.3145729-edge.eth0-TxRx-0
6190099 +13.1% 7002452 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
6428408 +9.4% 7034643 ± 3% interrupts.CPU19.RES:Rescheduling_interrupts
6108717 ± 2% +10.8% 6765874 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
6201667 +11.4% 6911496 ± 3% interrupts.CPU20.RES:Rescheduling_interrupts
6626353 ± 5% -6.7% 6183467 ± 5% interrupts.CPU28.RES:Rescheduling_interrupts
3957 +75.5% 6944 ± 24% interrupts.CPU43.NMI:Non-maskable_interrupts
3957 +75.5% 6944 ± 24% interrupts.CPU43.PMI:Performance_monitoring_interrupts
3971 +74.9% 6947 ± 24% interrupts.CPU45.NMI:Non-maskable_interrupts
3971 +74.9% 6947 ± 24% interrupts.CPU45.PMI:Performance_monitoring_interrupts
6263858 +12.5% 7043735 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
5820732 ± 4% +15.5% 6724700 ± 5% interrupts.CPU51.RES:Rescheduling_interrupts
3957 +75.6% 6947 ± 24% interrupts.CPU52.NMI:Non-maskable_interrupts
3957 +75.6% 6947 ± 24% interrupts.CPU52.PMI:Performance_monitoring_interrupts
6188932 +14.6% 7095450 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
3970 +99.9% 7934 interrupts.CPU54.NMI:Non-maskable_interrupts
3970 +99.9% 7934 interrupts.CPU54.PMI:Performance_monitoring_interrupts
6043697 ± 4% +15.6% 6985466 ± 4% interrupts.CPU56.RES:Rescheduling_interrupts
5941533 +20.2% 7142469 ± 4% interrupts.CPU60.RES:Rescheduling_interrupts
5820575 ± 4% +15.1% 6701035 ± 4% interrupts.CPU64.RES:Rescheduling_interrupts
3964 +99.7% 7919 interrupts.CPU87.NMI:Non-maskable_interrupts
3964 +99.7% 7919 interrupts.CPU87.PMI:Performance_monitoring_interrupts
5762147 ± 4% +21.9% 7021575 ± 4% interrupts.CPU9.RES:Rescheduling_interrupts
3.35 ± 4% -0.2 3.12 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.21 -0.0 1.17 ± 2% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.55 ± 2% -0.0 0.54 ± 2% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.55 ± 2% +0.0 0.59 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.37 +0.1 1.46 ± 2% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
1.25 +0.1 1.34 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.88 ± 2% +0.1 2.01 ± 3% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.26 ±100% +0.3 0.54 ± 3% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_read.new_sync_read.vfs_read.ksys_read
0.00 +0.5 0.53 ± 4% perf-profile.calltrace.cycles-pp.__update_load_avg_se.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule
3.35 ± 4% -0.2 3.12 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.73 -0.1 2.67 perf-profile.children.cycles-pp.reweight_entity
0.20 ± 7% -0.0 0.15 ± 7% perf-profile.children.cycles-pp.inode_has_perm
1.23 -0.0 1.19 ± 2% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.22 -0.0 0.19 ± 4% perf-profile.children.cycles-pp.__x64_sys_read
0.23 ± 2% -0.0 0.22 ± 5% perf-profile.children.cycles-pp.set_next_buddy
0.39 +0.0 0.41 ± 2% perf-profile.children.cycles-pp.anon_pipe_buf_release
1.46 +0.0 1.50 perf-profile.children.cycles-pp.set_next_entity
1.39 +0.0 1.43 perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.32 ± 7% +0.1 0.38 ± 4% perf-profile.children.cycles-pp.update_cfs_rq_h_load
1.58 +0.1 1.64 perf-profile.children.cycles-pp.__update_load_avg_se
2.33 +0.1 2.41 perf-profile.children.cycles-pp.mutex_lock
0.39 ± 9% +0.1 0.48 ± 10% perf-profile.children.cycles-pp.cpus_share_cache
5.29 +0.1 5.40 perf-profile.children.cycles-pp.pick_next_task_fair
1.88 ± 2% +0.1 2.01 ± 3% perf-profile.children.cycles-pp.switch_fpu_return
2.00 +0.1 2.15 ± 2% perf-profile.children.cycles-pp.mutex_unlock
4.12 +0.2 4.36 perf-profile.children.cycles-pp.update_load_avg
3.35 ± 4% -0.2 3.12 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.77 -0.1 0.70 ± 3% perf-profile.self.cycles-pp.try_to_wake_up
0.23 ± 4% -0.0 0.18 ± 4% perf-profile.self.cycles-pp.ksys_write
0.17 ± 8% -0.0 0.14 ± 8% perf-profile.self.cycles-pp.inode_has_perm
0.19 -0.0 0.16 ± 5% perf-profile.self.cycles-pp.__x64_sys_read
0.45 -0.0 0.43 perf-profile.self.cycles-pp.check_preempt_wakeup
0.36 +0.0 0.38 perf-profile.self.cycles-pp.anon_pipe_buf_release
0.71 ± 2% +0.0 0.76 perf-profile.self.cycles-pp.vfs_read
1.29 +0.0 1.34 perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.32 ± 6% +0.0 0.37 ± 5% perf-profile.self.cycles-pp.update_cfs_rq_h_load
1.54 +0.1 1.60 perf-profile.self.cycles-pp.__update_load_avg_se
1.15 +0.1 1.21 ± 2% perf-profile.self.cycles-pp.enqueue_task_fair
0.56 ± 2% +0.1 0.62 ± 2% perf-profile.self.cycles-pp.__wake_up_common
1.44 +0.1 1.52 perf-profile.self.cycles-pp.mutex_lock
1.15 ± 2% +0.1 1.24 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
0.39 ± 9% +0.1 0.48 ± 10% perf-profile.self.cycles-pp.cpus_share_cache
1.86 ± 2% +0.1 2.00 ± 3% perf-profile.self.cycles-pp.switch_fpu_return
1.96 +0.1 2.10 perf-profile.self.cycles-pp.mutex_unlock
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[0-Day CI notification] the service had met instability issue and was downgraded
by Li, Philip
Hi all, FYI from the 0-Day CI test service team: our service hit an instability issue that prevented effective testing last week. You may see fewer bisect reports or incomplete testing. We are working to recover the service; currently, the earliest we expect it to be back to normal is tomorrow.
Thanks
c522ad0637: kernel hang in boot stage
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: c522ad0637cacca1775a3849c2b554f46577b98d ("ACPICA: Update table load object initialization")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: fio
with following parameters:
runtime: 300s
disk: 1SSD
fs: ext4
nr_task: 1
test_size: 128G
rw: write
bs: 4k
ioengine: sync
direct: direct
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
on test machine: Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
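For reference, the parameters above map to roughly the following fio job file (a sketch only; the actual lkp-generated job, and the target filename in particular, will differ):

```ini
[global]
runtime=300
time_based
ioengine=sync
direct=1
bs=4k
rw=write
size=128G
numjobs=1

[write-1ssd]
# Placeholder path: lkp formats the single SSD with ext4 and mounts it itself.
filename=/mnt/ext4/fio-testfile
```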
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
[ 5.408282] smp: Brought up 2 nodes, 72 CPUs
[ 5.409283] smpboot: Max logical packages: 2
[ 5.410282] smpboot: Total of 72 processors activated (331375.89 BogoMIPS)
[ 5.605290] node 0 initialised, 15956993 pages in 192ms
[ 5.611291] node 1 initialised, 16343433 pages in 198ms
[ 5.621060] devtmpfs: initialized
[ 5.621340] x86/mm: Memory block size: 2048MB
[ 5.622967] PM: Registering ACPI NVS region [mem 0x6b521000-0x6bf50fff] (10682368 bytes)
[ 5.623599] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[ 5.624357] futex hash table entries: 32768 (order: 9, 2097152 bytes)
[ 5.626618] pinctrl core: initialized pinctrl subsystem
[ 5.627368] PM: RTC time: 06:13:50, date: 2019-07-09
[ 5.628697] NET: Registered protocol family 16
[ 5.629457] audit: initializing netlink subsys (disabled)
[ 5.630299] audit: type=2000 audit(1562652826.903:1): state=initialized audit_enabled=0 res=1
[ 5.638286] cpuidle: using governor menu
[ 5.642292] Detected 1 PCC Subspaces
[ 5.646314] Registering PCC driver as Mailbox controller
[ 5.651409] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[ 5.659283] ACPI: bus type PCI registered
[ 5.663282] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 5.669472] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0x80000000-0x8fffffff] (base 0x80000000)
[ 5.679302] PCI: MMCONFIG at [mem 0x80000000-0x8fffffff] reserved in E820
[ 5.680293] PCI: Using configuration type 1 for base access
[ 5.685300] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[ 5.691283] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 5.702283] ACPI: Added _OSI(Module Device)
[ 5.706285] ACPI: Added _OSI(Processor Device)
[ 5.710282] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 5.715281] ACPI: Added _OSI(Processor Aggregator Device)
[ 5.720282] ACPI: Added _OSI(Linux-Dell-Video)
[ 5.725282] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 5.730282] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[ 5.776862] ACPI: 4 ACPI AML tables successfully acquired and loaded
[ 5.786441] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[ 5.809488] ACPI: Dynamic OEM Table Load:
IFWI Version: SE5C620.86B.OR.64.2018.28.1.01.0847.selfboot
Primary Bios Version: SE5C620.86B.00.01.0014.070920180847
Backup Bios Version: SE5C620.86B.00.01.0014.070920180847
System is booting from BIOS Primary Area!
BMC Firmware Version: 1.60.56383BEF
SDR Version: SDR Package 1.60
ME Firmware Version: 04.00.04.340
Platform ID: S2600WF
System memory detected: 131072 MB
Current memory speed: 2666 MT/s
Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz
Number of physical processors identified: 2
AHCI Capable Controller 1 enabling 8 ports of 6Gb/s SATA
AHCI Capable Controller 2 enabling 6 ports of 6Gb/s SATA
USB Keyboard detected
USB Mouse detected
BMC BaseBoard IP Address 1 : 192.168.3.35
BMC BaseBoard IP Address 2 : 192.168.3.35
BMC Dedicated NIC IP Address : 192.168.3.36
Press [Enter] to directly boot.
Press [F2] to enter setup and select boot options.
Press [F6] to show boot menu options.
Press [F12] to boot from network.
Best Regards,
Rong Chen
94cc25049f: hackbench.throughput -19.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -19.4% regression of hackbench.throughput due to commit:
commit: 94cc25049f407162b193b52c60859e0a8fc6c695 (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl0jffgACgkQpekojE+k")
https://github.com/zen-kernel/zen-kernel 5.2/zen-tune
in testcase: hackbench
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
nr_threads: 50%
mode: process
ipc: socket
cpufreq_governor: performance
ucode: 0xb000036
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
In addition, the commit also has a significant impact on the following tests:
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/socket/x86_64-rhel-7.6/process/50%/debian-x86_64-2019-05-14.cgz/lkp-bdw-ep3/hackbench/0xb000036
commit:
v5.2
94cc25049f (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl0jffgACgkQpekojE+k")
v5.2 94cc25049f407162b193b52c608
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:7 14% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:7 43% 3:4 dmesg.WARNING:at_ip__slab_free/0x
%stddev %change %stddev
\ | \
186160 -19.4% 149993 hackbench.throughput
1.166e+08 ± 6% +702.6% 9.361e+08 ± 5% hackbench.time.involuntary_context_switches
882616 -14.7% 753249 hackbench.time.minor_page_faults
8542 +1.6% 8676 hackbench.time.percent_of_cpu_this_job_got
39778 +8.9% 43318 hackbench.time.system_time
12454 -15.5% 10526 hackbench.time.user_time
4.887e+08 ± 3% +322.1% 2.063e+09 ± 5% hackbench.time.voluntary_context_switches
1.162e+09 -18.2% 9.504e+08 hackbench.workload
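As a sanity check (not part of the original report), the headline -19.4% follows directly from the two hackbench.throughput figures above:

```python
# Percent change of hackbench.throughput: (after - before) / before * 100
before, after = 186160, 149993
pct = (after - before) / before * 100
print(round(pct, 1))  # -19.4
```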
281856 -10.0% 253669 meminfo.SUnreclaim
93646 ± 5% -26.4% 68963 ± 23% numa-meminfo.node0.AnonHugePages
3.22 ± 14% -1.8 1.46 ± 7% mpstat.cpu.all.idle%
23.08 -3.9 19.17 mpstat.cpu.all.usr%
72.75 +8.2% 78.75 vmstat.cpu.sy
23.00 -19.6% 18.50 ± 2% vmstat.cpu.us
592.50 +25.3% 742.25 vmstat.procs.r
1124000 ± 3% +338.0% 4923650 ± 4% vmstat.system.cs
308255 +233.9% 1029354 ± 4% vmstat.system.in
1.001e+09 -76.2% 2.381e+08 ± 13% cpuidle.C1.time
70321015 -54.7% 31845456 ± 6% cpuidle.C1.usage
1.041e+08 ± 5% -54.4% 47410378 ± 4% cpuidle.C1E.time
3698696 ± 7% +82.5% 6748329 ± 9% cpuidle.C3.usage
24339475 ± 2% +137.0% 57694119 ± 6% cpuidle.POLL.time
11094713 ± 2% +209.0% 34283466 ± 7% cpuidle.POLL.usage
2712 +1.8% 2762 turbostat.Avg_MHz
70318553 -54.7% 31842326 ± 6% turbostat.C1
1.85 -1.4 0.43 ± 15% turbostat.C1%
0.19 ± 5% -0.1 0.08 ± 5% turbostat.C1E%
3697870 ± 7% +82.5% 6747475 ± 9% turbostat.C3
2.42 ± 8% -67.6% 0.79 ± 8% turbostat.CPU%c1
1.897e+08 +238.6% 6.422e+08 ± 5% turbostat.IRQ
10.55 +4.1% 10.99 turbostat.RAMWatt
1580657 +100.3% 3166230 proc-vmstat.nr_dirty_background_threshold
3165179 +150.1% 7917510 proc-vmstat.nr_dirty_threshold
4811 -2.1% 4710 proc-vmstat.nr_inactive_anon
41003 -1.1% 40544 proc-vmstat.nr_kernel_stack
21342 -2.1% 20887 proc-vmstat.nr_slab_reclaimable
70422 -9.9% 63446 proc-vmstat.nr_slab_unreclaimable
4811 -2.1% 4710 proc-vmstat.nr_zone_inactive_anon
2568016 +30.8% 3359096 ± 2% proc-vmstat.numa_hit
2539513 +31.2% 3330581 ± 2% proc-vmstat.numa_local
3444968 +77.5% 6115510 ± 3% proc-vmstat.pgalloc_normal
2444132 -7.0% 2274101 proc-vmstat.pgfault
3377366 +78.8% 6038072 ± 3% proc-vmstat.pgfree
9598 ± 2% -9.5% 8687 ± 2% slabinfo.UNIX.active_objs
9601 ± 2% -9.1% 8732 ± 2% slabinfo.UNIX.num_objs
13920 -13.4% 12051 slabinfo.files_cache.active_objs
13922 -13.4% 12062 slabinfo.files_cache.num_objs
58574 -27.8% 42305 ± 2% slabinfo.kmalloc-512.active_objs
2556 -34.4% 1678 ± 3% slabinfo.kmalloc-512.active_slabs
81806 -34.3% 53714 ± 2% slabinfo.kmalloc-512.num_objs
2556 -34.4% 1678 ± 3% slabinfo.kmalloc-512.num_slabs
10443 -11.3% 9267 ± 2% slabinfo.mm_struct.active_objs
10458 -10.9% 9323 ± 3% slabinfo.mm_struct.num_objs
13730 -12.8% 11968 ± 3% slabinfo.pid.active_objs
13730 -12.8% 11969 ± 3% slabinfo.pid.num_objs
1102 +8.6% 1197 ± 3% slabinfo.proc_dir_entry.active_objs
1102 +8.6% 1197 ± 3% slabinfo.proc_dir_entry.num_objs
716.75 ± 8% -19.1% 580.00 ± 2% slabinfo.skbuff_ext_cache.active_objs
726.00 ± 8% -10.5% 650.00 slabinfo.skbuff_ext_cache.num_objs
533.25 ± 2% -18.7% 433.75 ± 9% slabinfo.skbuff_fclone_cache.active_objs
533.25 ± 2% -18.7% 433.75 ± 9% slabinfo.skbuff_fclone_cache.num_objs
54214 -30.1% 37916 ± 2% slabinfo.skbuff_head_cache.active_objs
2412 -36.4% 1533 ± 3% slabinfo.skbuff_head_cache.active_slabs
77223 -36.4% 49092 ± 3% slabinfo.skbuff_head_cache.num_objs
2412 -36.4% 1533 ± 3% slabinfo.skbuff_head_cache.num_slabs
13279 ± 4% -9.2% 12058 ± 2% slabinfo.sock_inode_cache.active_objs
13279 ± 4% -9.2% 12059 ± 2% slabinfo.sock_inode_cache.num_objs
15048 -10.3% 13494 slabinfo.task_delay_info.active_objs
15048 -10.3% 13494 slabinfo.task_delay_info.num_objs
343.25 ± 12% -34.6% 224.50 ± 16% slabinfo.xfrm_state.active_objs
343.25 ± 12% -34.6% 224.50 ± 16% slabinfo.xfrm_state.num_objs
14.31 ± 2% +12.8% 16.15 perf-stat.i.MPKI
1.886e+10 +2.8% 1.939e+10 perf-stat.i.branch-instructions
2.02 +0.1 2.10 perf-stat.i.branch-miss-rate%
3.769e+08 +8.0% 4.069e+08 perf-stat.i.branch-misses
5.86 -0.5 5.39 perf-stat.i.cache-miss-rate%
81073345 +5.9% 85881068 ± 2% perf-stat.i.cache-misses
1.387e+09 +16.1% 1.611e+09 perf-stat.i.cache-references
1121375 ± 3% +340.7% 4942018 ± 4% perf-stat.i.context-switches
2.379e+11 +1.8% 2.421e+11 perf-stat.i.cpu-cycles
215470 ± 3% +254.7% 764231 ± 2% perf-stat.i.cpu-migrations
0.37 ± 2% +0.6 0.95 ± 2% perf-stat.i.dTLB-load-miss-rate%
1.085e+08 ± 2% +160.7% 2.828e+08 ± 2% perf-stat.i.dTLB-load-misses
0.27 ± 2% +0.0 0.29 perf-stat.i.dTLB-store-miss-rate%
56471246 ± 2% +5.2% 59429129 perf-stat.i.dTLB-store-misses
2.106e+10 -2.6% 2.051e+10 perf-stat.i.dTLB-stores
81.75 -13.6 68.15 perf-stat.i.iTLB-load-miss-rate%
50088478 +52.2% 76241583 ± 2% perf-stat.i.iTLB-load-misses
11770543 ± 2% +212.5% 36779986 ± 4% perf-stat.i.iTLB-loads
9.822e+10 +1.6% 9.977e+10 perf-stat.i.instructions
1977 -31.8% 1349 ± 3% perf-stat.i.instructions-per-iTLB-miss
3880 -7.8% 3578 perf-stat.i.minor-faults
91.26 +6.6 97.87 perf-stat.i.node-load-miss-rate%
26647037 +45.0% 38646711 ± 3% perf-stat.i.node-load-misses
2465744 ± 5% -70.4% 729956 ± 6% perf-stat.i.node-loads
87.30 -9.5 77.78 perf-stat.i.node-store-miss-rate%
14654283 +23.8% 18148009 ± 2% perf-stat.i.node-store-misses
2151357 ± 2% +155.7% 5501820 ± 4% perf-stat.i.node-stores
3880 -7.8% 3578 perf-stat.i.page-faults
14.12 +14.4% 16.15 perf-stat.overall.MPKI
2.00 +0.1 2.10 perf-stat.overall.branch-miss-rate%
5.85 -0.5 5.33 perf-stat.overall.cache-miss-rate%
2934 -3.9% 2820 ± 2% perf-stat.overall.cycles-between-cache-misses
0.37 ± 2% +0.6 0.95 ± 2% perf-stat.overall.dTLB-load-miss-rate%
0.27 ± 2% +0.0 0.29 perf-stat.overall.dTLB-store-miss-rate%
80.97 -13.5 67.46 perf-stat.overall.iTLB-load-miss-rate%
1961 -33.2% 1309 ± 3% perf-stat.overall.instructions-per-iTLB-miss
91.53 +6.6 98.14 perf-stat.overall.node-load-miss-rate%
87.20 -10.5 76.75 perf-stat.overall.node-store-miss-rate%
51810 +25.7% 65111 perf-stat.overall.path-length
1.883e+10 +2.8% 1.935e+10 perf-stat.ps.branch-instructions
3.762e+08 +8.0% 4.062e+08 perf-stat.ps.branch-misses
80939745 +5.9% 85744186 ± 2% perf-stat.ps.cache-misses
1.385e+09 +16.1% 1.608e+09 perf-stat.ps.cache-references
1119435 ± 3% +340.7% 4933555 ± 4% perf-stat.ps.context-switches
2.375e+11 +1.8% 2.417e+11 perf-stat.ps.cpu-cycles
215097 ± 3% +254.7% 762943 ± 2% perf-stat.ps.cpu-migrations
1.083e+08 ± 2% +160.7% 2.823e+08 ± 2% perf-stat.ps.dTLB-load-misses
56377190 ± 2% +5.2% 59331167 perf-stat.ps.dTLB-store-misses
2.103e+10 -2.6% 2.048e+10 perf-stat.ps.dTLB-stores
50004610 +52.2% 76113883 ± 2% perf-stat.ps.iTLB-load-misses
11750442 ± 2% +212.5% 36717457 ± 4% perf-stat.ps.iTLB-loads
9.806e+10 +1.6% 9.96e+10 perf-stat.ps.instructions
3874 -7.8% 3573 perf-stat.ps.minor-faults
26602986 +45.0% 38584440 ± 3% perf-stat.ps.node-load-misses
2461704 ± 5% -70.4% 728877 ± 6% perf-stat.ps.node-loads
14630120 +23.8% 18119069 ± 2% perf-stat.ps.node-store-misses
2147799 ± 2% +155.7% 5492741 ± 4% perf-stat.ps.node-stores
3874 -7.8% 3573 perf-stat.ps.page-faults
6.018e+13 +2.8% 6.188e+13 perf-stat.total.instructions
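The ± percentage columns throughout these perf-stat tables are plain relative deltas between the base kernel and the patched kernel. A minimal sketch recomputing two of the rows above (the input values are copied verbatim from the table; the helper name is ours, not LKP's):

```python
def rel_delta(base, patched):
    """Relative change of `patched` vs `base`, in percent."""
    return (patched - base) / base * 100.0

# Values copied from the perf-stat rows above.
itlb_misses = rel_delta(50088478, 76241583)       # perf-stat.i.iTLB-load-misses
ctx_switches = rel_delta(1119435, 4933555)        # perf-stat.ps.context-switches

print(f"iTLB-load-misses: {itlb_misses:+.1f}%")   # matches the +52.2% row
print(f"context-switches: {ctx_switches:+.1f}%")  # matches the +340.7% row
```

The same arithmetic applies to every "+N.N%" / "-N.N%" column in the sched_debug, interrupts, and softirqs tables below.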
9829288 ± 30% -65.2% 3425365 ± 88% sched_debug.cfs_rq:/.MIN_vruntime.max
1082863 ± 34% -65.5% 373825 ± 88% sched_debug.cfs_rq:/.MIN_vruntime.stddev
28943 ± 6% +94.1% 56192 ± 71% sched_debug.cfs_rq:/.load.max
116.11 ±173% +702.9% 932.30 ± 49% sched_debug.cfs_rq:/.load.min
5289 +64.3% 8689 ± 47% sched_debug.cfs_rq:/.load.stddev
9829288 ± 30% -65.2% 3425365 ± 88% sched_debug.cfs_rq:/.max_vruntime.max
1082863 ± 34% -65.5% 373825 ± 88% sched_debug.cfs_rq:/.max_vruntime.stddev
24222061 +14.2% 27660112 sched_debug.cfs_rq:/.min_vruntime.avg
25005940 +20.4% 30113412 sched_debug.cfs_rq:/.min_vruntime.max
342616 ± 4% +190.2% 994342 ± 13% sched_debug.cfs_rq:/.min_vruntime.stddev
0.07 ±110% +333.3% 0.30 ± 25% sched_debug.cfs_rq:/.nr_running.min
0.17 ± 6% -30.7% 0.12 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
0.02 ±173% +1300.0% 0.32 ± 42% sched_debug.cfs_rq:/.runnable_load_avg.min
27877 ± 4% +99.4% 55578 ± 71% sched_debug.cfs_rq:/.runnable_weight.max
36.64 ±173% +823.1% 338.18 ± 48% sched_debug.cfs_rq:/.runnable_weight.min
6634 +46.6% 9727 ± 42% sched_debug.cfs_rq:/.runnable_weight.stddev
768141 ± 41% +263.3% 2790315 ± 18% sched_debug.cfs_rq:/.spread0.max
-688157 +140.8% -1656893 sched_debug.cfs_rq:/.spread0.min
342419 ± 3% +190.2% 993722 ± 13% sched_debug.cfs_rq:/.spread0.stddev
469.77 ± 4% +13.8% 534.70 ± 3% sched_debug.cfs_rq:/.util_avg.min
238.37 -12.0% 209.81 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
318.19 ± 2% +36.6% 434.56 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.avg
956.93 +24.6% 1192 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.max
0.11 ± 66% +13360.0% 15.30 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.min
210.55 ± 2% +26.1% 265.56 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.stddev
93513 ± 6% -25.7% 69448 ± 3% sched_debug.cpu.avg_idle.avg
24.14 ± 14% +77.1% 42.76 ± 8% sched_debug.cpu.clock.stddev
24.14 ± 14% +77.1% 42.76 ± 8% sched_debug.cpu.clock_task.stddev
0.05 ±173% +1200.0% 0.59 ± 17% sched_debug.cpu.cpu_load[0].min
7.03 +43.7% 10.10 ± 38% sched_debug.cpu.cpu_load[0].stddev
26.48 ± 7% +116.1% 57.20 ± 64% sched_debug.cpu.cpu_load[1].max
5.09 ± 3% +78.3% 9.08 ± 42% sched_debug.cpu.cpu_load[1].stddev
23.75 ± 4% +133.1% 55.36 ± 67% sched_debug.cpu.cpu_load[2].max
2.34 ± 13% -31.1% 1.61 ± 10% sched_debug.cpu.cpu_load[2].min
4.21 +101.3% 8.47 ± 45% sched_debug.cpu.cpu_load[2].stddev
22.36 ± 6% +141.9% 54.09 ± 71% sched_debug.cpu.cpu_load[3].max
2.73 ± 8% -38.3% 1.68 ± 11% sched_debug.cpu.cpu_load[3].min
3.72 ± 3% +118.3% 8.11 ± 48% sched_debug.cpu.cpu_load[3].stddev
27.00 ± 11% +123.1% 60.23 ± 70% sched_debug.cpu.cpu_load[4].max
2.61 ± 7% -35.7% 1.68 ± 11% sched_debug.cpu.cpu_load[4].min
4.02 ± 7% +115.9% 8.68 ± 48% sched_debug.cpu.cpu_load[4].stddev
17756 -12.4% 15546 sched_debug.cpu.curr->pid.avg
19813 -12.4% 17357 sched_debug.cpu.curr->pid.max
605.70 ±100% +920.4% 6180 ± 41% sched_debug.cpu.curr->pid.min
3300 ± 8% -52.1% 1579 ± 21% sched_debug.cpu.curr->pid.stddev
425.25 ±118% +294.0% 1675 ± 23% sched_debug.cpu.load.min
5266 ± 2% +63.2% 8594 ± 47% sched_debug.cpu.load.stddev
500103 -49.7% 251307 sched_debug.cpu.max_idle_balance_cost.avg
507505 -37.8% 315414 ± 19% sched_debug.cpu.max_idle_balance_cost.max
500000 -50.0% 250000 sched_debug.cpu.max_idle_balance_cost.min
0.00 ± 6% +16.5% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
6.07 +24.9% 7.57 ± 3% sched_debug.cpu.nr_running.avg
15.73 ± 4% +42.1% 22.34 ± 4% sched_debug.cpu.nr_running.max
0.09 ±122% +400.0% 0.45 ± 14% sched_debug.cpu.nr_running.min
3.67 +33.5% 4.90 ± 3% sched_debug.cpu.nr_running.stddev
3971700 ± 6% +327.3% 16971708 ± 4% sched_debug.cpu.nr_switches.avg
4272583 ± 8% +331.4% 18433855 ± 5% sched_debug.cpu.nr_switches.max
3758954 ± 5% +318.6% 15735621 ± 4% sched_debug.cpu.nr_switches.min
96452 ± 24% +516.1% 594193 ± 20% sched_debug.cpu.nr_switches.stddev
24.00 -33.3% 16.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 -46.7% 1.60 sched_debug.sysctl_sched.sysctl_sched_min_granularity
4.00 -50.0% 2.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
10.55 ± 15% -10.6 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
9.52 ± 16% -9.5 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
9.45 ± 16% -9.5 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
9.09 ± 15% -9.1 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
7.90 ± 15% -7.9 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.83 ± 15% -7.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
7.80 ± 16% -7.8 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
7.51 ± 16% -7.5 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
6.42 ± 16% -6.4 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
6.25 ± 16% -6.2 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
2.12 ± 5% -1.3 0.78 ± 88% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
2.87 ± 6% -1.3 1.57 ± 33% perf-profile.calltrace.cycles-pp.secondary_startup_64
2.84 ± 6% -1.3 1.55 ± 33% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
2.84 ± 6% -1.3 1.55 ± 33% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
2.83 ± 6% -1.3 1.55 ± 33% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.30 ± 5% -1.2 1.06 ± 50% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.29 ± 5% -1.2 1.06 ± 50% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
0.73 ± 2% +0.1 0.81 ± 8% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.55 ± 4% +0.1 0.63 ± 9% perf-profile.calltrace.cycles-pp.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.68 ± 4% +0.1 0.77 ± 6% perf-profile.calltrace.cycles-pp.security_socket_recvmsg.sock_recvmsg.sock_read_iter.new_sync_read.vfs_read
0.79 ± 4% +0.1 0.89 ± 7% perf-profile.calltrace.cycles-pp.sock_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
0.94 ± 2% +0.1 1.04 ± 9% perf-profile.calltrace.cycles-pp.__slab_free.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.98 ± 4% +0.1 1.12 ± 9% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.21 ± 3% +0.2 1.38 ± 8% perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
1.14 ± 3% +0.2 1.37 ± 6% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.58 ± 4% +0.2 0.81 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
1.83 ± 4% +0.2 2.08 ± 8% perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic
1.08 ± 3% +0.2 1.33 perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
1.65 ± 5% +0.3 1.94 ± 7% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
1.83 ± 5% +0.3 2.13 ± 7% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
3.94 ± 3% +0.5 4.43 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
0.12 ±173% +0.5 0.65 ± 11% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.23 ± 4% +8.2 31.43 ± 4% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
27.95 ± 4% +8.3 36.21 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.94 ± 3% +8.4 32.32 ± 4% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
29.32 ± 4% +8.4 37.77 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
67.15 ± 3% +19.2 86.39 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
67.66 ± 3% +19.4 87.04 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
10.64 ± 15% -10.6 0.00 perf-profile.children.cycles-pp.__GI___libc_write
9.18 ± 15% -9.2 0.00 perf-profile.children.cycles-pp.__GI___libc_read
2.87 ± 6% -1.3 1.57 ± 33% perf-profile.children.cycles-pp.do_idle
2.87 ± 6% -1.3 1.57 ± 33% perf-profile.children.cycles-pp.secondary_startup_64
2.87 ± 6% -1.3 1.57 ± 33% perf-profile.children.cycles-pp.cpu_startup_entry
2.84 ± 6% -1.3 1.55 ± 33% perf-profile.children.cycles-pp.start_secondary
2.15 ± 5% -1.3 0.90 ± 65% perf-profile.children.cycles-pp.intel_idle
2.33 ± 5% -1.3 1.07 ± 51% perf-profile.children.cycles-pp.cpuidle_enter
2.33 ± 5% -1.3 1.07 ± 51% perf-profile.children.cycles-pp.cpuidle_enter_state
3.45 ± 5% -0.5 2.98 ± 8% perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
0.07 ± 10% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__vfs_read
0.07 ± 7% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__vfs_write
0.76 +0.1 0.83 ± 3% perf-profile.children.cycles-pp.mutex_lock
5.83 +0.3 6.08 perf-profile.children.cycles-pp._raw_spin_lock
5.64 +0.5 6.14 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
84.58 +1.9 86.51 perf-profile.children.cycles-pp.do_syscall_64
85.24 +1.9 87.17 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.14 ± 5% -1.3 0.89 ± 66% perf-profile.self.cycles-pp.intel_idle
0.06 ± 6% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.__vfs_write
0.13 ± 3% -0.0 0.11 ± 6% perf-profile.self.cycles-pp.wait_for_unix_gc
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.__slab_alloc
0.62 +0.0 0.65 perf-profile.self.cycles-pp.sock_wfree
0.46 ± 2% +0.1 0.52 ± 4% perf-profile.self.cycles-pp.new_sync_read
0.58 ± 4% +0.1 0.70 ± 7% perf-profile.self.cycles-pp.___slab_alloc
2.04 +0.2 2.20 ± 3% perf-profile.self.cycles-pp.unix_stream_read_generic
5.62 +0.5 6.12 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
660.00 ± 46% +80.1% 1188 ± 26% interrupts.39:PCI-MSI.3145736-edge.eth0-TxRx-7
762.75 ± 7% -8.3% 699.25 interrupts.9:IO-APIC.9-fasteoi.acpi
925673 ± 4% +580.4% 6298086 ± 7% interrupts.CPU0.RES:Rescheduling_interrupts
762.75 ± 7% -8.3% 699.25 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
914026 ± 5% +589.5% 6302138 ± 7% interrupts.CPU1.RES:Rescheduling_interrupts
909968 ± 4% +586.8% 6250075 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
901457 ± 4% +579.2% 6122367 ± 7% interrupts.CPU11.RES:Rescheduling_interrupts
885989 ± 5% +597.1% 6176190 ± 7% interrupts.CPU12.RES:Rescheduling_interrupts
900091 ± 4% +578.6% 6108137 ± 9% interrupts.CPU13.RES:Rescheduling_interrupts
916030 ± 4% +579.3% 6222876 ± 6% interrupts.CPU14.RES:Rescheduling_interrupts
907387 ± 3% +580.4% 6173726 ± 7% interrupts.CPU15.RES:Rescheduling_interrupts
906462 ± 3% +584.5% 6205125 ± 7% interrupts.CPU16.RES:Rescheduling_interrupts
892967 ± 4% +579.9% 6071200 ± 8% interrupts.CPU17.RES:Rescheduling_interrupts
660.00 ± 46% +80.1% 1188 ± 26% interrupts.CPU18.39:PCI-MSI.3145736-edge.eth0-TxRx-7
883497 ± 3% +590.0% 6095740 ± 8% interrupts.CPU18.RES:Rescheduling_interrupts
895170 ± 5% +590.4% 6180136 ± 7% interrupts.CPU19.RES:Rescheduling_interrupts
920768 ± 4% +579.8% 6259793 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
897528 ± 4% +580.2% 6104818 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
890291 ± 4% +606.9% 6293257 ± 9% interrupts.CPU21.RES:Rescheduling_interrupts
921105 ± 5% +569.1% 6162675 ± 7% interrupts.CPU22.RES:Rescheduling_interrupts
934751 ± 4% +559.2% 6162001 ± 6% interrupts.CPU23.RES:Rescheduling_interrupts
926805 ± 5% +565.8% 6170704 ± 7% interrupts.CPU24.RES:Rescheduling_interrupts
922681 ± 5% +577.7% 6252842 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
936049 ± 5% +567.3% 6246646 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
902602 ± 4% +578.3% 6122067 ± 5% interrupts.CPU27.RES:Rescheduling_interrupts
932255 ± 6% +576.9% 6310357 ± 6% interrupts.CPU28.RES:Rescheduling_interrupts
915340 ± 5% +579.8% 6222851 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
906245 ± 3% +603.2% 6373161 ± 7% interrupts.CPU3.RES:Rescheduling_interrupts
905647 ± 4% +587.3% 6224841 ± 5% interrupts.CPU30.RES:Rescheduling_interrupts
912558 ± 4% +577.6% 6183543 ± 7% interrupts.CPU31.RES:Rescheduling_interrupts
906350 ± 4% +583.7% 6196414 ± 6% interrupts.CPU32.RES:Rescheduling_interrupts
904780 ± 4% +578.3% 6137556 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
899080 ± 5% +573.5% 6054977 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
897373 ± 4% +593.8% 6225607 ± 6% interrupts.CPU35.RES:Rescheduling_interrupts
919071 ± 4% +560.4% 6069648 ± 5% interrupts.CPU36.RES:Rescheduling_interrupts
909253 ± 3% +567.4% 6068575 ± 6% interrupts.CPU37.RES:Rescheduling_interrupts
929529 ± 6% +556.6% 6102919 ± 6% interrupts.CPU38.RES:Rescheduling_interrupts
902204 ± 5% +573.2% 6073921 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
909667 ± 4% +594.3% 6315416 ± 7% interrupts.CPU4.RES:Rescheduling_interrupts
891308 ± 6% +590.8% 6157161 ± 7% interrupts.CPU40.RES:Rescheduling_interrupts
908411 ± 6% +578.3% 6161643 ± 7% interrupts.CPU41.RES:Rescheduling_interrupts
906106 ± 4% +574.9% 6115742 ± 6% interrupts.CPU42.RES:Rescheduling_interrupts
928420 ± 6% +556.2% 6092084 ± 6% interrupts.CPU43.RES:Rescheduling_interrupts
918605 ± 5% +581.0% 6255941 ± 8% interrupts.CPU44.RES:Rescheduling_interrupts
925149 ± 3% +578.0% 6272327 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
913916 ± 4% +592.9% 6332293 ± 7% interrupts.CPU46.RES:Rescheduling_interrupts
903308 ± 4% +590.5% 6237458 ± 8% interrupts.CPU47.RES:Rescheduling_interrupts
906960 ± 4% +595.7% 6309330 ± 6% interrupts.CPU48.RES:Rescheduling_interrupts
898670 ± 2% +596.3% 6257070 ± 5% interrupts.CPU49.RES:Rescheduling_interrupts
904056 ± 4% +585.4% 6196200 ± 5% interrupts.CPU5.RES:Rescheduling_interrupts
921663 ± 3% +587.9% 6340403 ± 7% interrupts.CPU50.RES:Rescheduling_interrupts
921589 ± 5% +576.6% 6235610 ± 7% interrupts.CPU51.RES:Rescheduling_interrupts
920120 ± 3% +570.5% 6169861 ± 8% interrupts.CPU52.RES:Rescheduling_interrupts
53.50 ± 79% -75.7% 13.00 ±111% interrupts.CPU52.TLB:TLB_shootdowns
907652 ± 5% +582.0% 6190117 ± 9% interrupts.CPU53.RES:Rescheduling_interrupts
907563 ± 5% +587.5% 6239226 ± 8% interrupts.CPU54.RES:Rescheduling_interrupts
916945 ± 3% +574.3% 6183137 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
895538 ± 5% +601.1% 6278286 ± 6% interrupts.CPU56.RES:Rescheduling_interrupts
884652 ± 5% +591.1% 6114236 ± 10% interrupts.CPU57.RES:Rescheduling_interrupts
892337 ± 5% +596.8% 6217512 ± 7% interrupts.CPU58.RES:Rescheduling_interrupts
914771 ± 2% +569.0% 6119824 ± 8% interrupts.CPU59.RES:Rescheduling_interrupts
914171 ± 5% +584.8% 6260168 ± 8% interrupts.CPU6.RES:Rescheduling_interrupts
902117 ± 4% +590.4% 6227990 ± 7% interrupts.CPU60.RES:Rescheduling_interrupts
885288 ± 4% +598.6% 6184631 ± 8% interrupts.CPU61.RES:Rescheduling_interrupts
878833 ± 4% +598.7% 6140823 ± 8% interrupts.CPU62.RES:Rescheduling_interrupts
899149 ± 4% +589.3% 6197789 ± 8% interrupts.CPU63.RES:Rescheduling_interrupts
900729 ± 4% +590.9% 6223358 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
136.50 ± 90% -74.5% 34.75 ±134% interrupts.CPU64.TLB:TLB_shootdowns
897133 ± 3% +600.4% 6283243 ± 8% interrupts.CPU65.RES:Rescheduling_interrupts
925816 ± 5% +556.7% 6079971 ± 7% interrupts.CPU66.RES:Rescheduling_interrupts
917527 ± 5% +570.8% 6155190 ± 5% interrupts.CPU67.RES:Rescheduling_interrupts
919350 ± 5% +567.3% 6134541 ± 7% interrupts.CPU68.RES:Rescheduling_interrupts
898265 ± 5% +592.4% 6219990 ± 6% interrupts.CPU69.RES:Rescheduling_interrupts
916558 ± 4% +583.6% 6266016 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
916060 ± 5% +573.1% 6165772 ± 6% interrupts.CPU70.RES:Rescheduling_interrupts
900704 ± 5% +585.5% 6174141 ± 7% interrupts.CPU71.RES:Rescheduling_interrupts
915386 ± 4% +591.7% 6331697 ± 7% interrupts.CPU72.RES:Rescheduling_interrupts
911494 ± 4% +580.5% 6202621 ± 5% interrupts.CPU73.RES:Rescheduling_interrupts
914152 ± 4% +579.1% 6208012 ± 5% interrupts.CPU74.RES:Rescheduling_interrupts
919469 ± 4% +567.9% 6141356 ± 7% interrupts.CPU75.RES:Rescheduling_interrupts
897632 ± 4% +600.6% 6288603 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
894843 ± 5% +585.7% 6136288 ± 7% interrupts.CPU77.RES:Rescheduling_interrupts
899897 ± 5% +569.2% 6022419 ± 7% interrupts.CPU78.RES:Rescheduling_interrupts
896111 ± 4% +591.2% 6194069 ± 6% interrupts.CPU79.RES:Rescheduling_interrupts
911418 ± 4% +572.3% 6127868 ± 7% interrupts.CPU8.RES:Rescheduling_interrupts
899998 ± 6% +586.2% 6175781 ± 5% interrupts.CPU80.RES:Rescheduling_interrupts
904633 ± 5% +580.2% 6153552 ± 6% interrupts.CPU81.RES:Rescheduling_interrupts
912989 ± 5% +574.1% 6154076 ± 5% interrupts.CPU82.RES:Rescheduling_interrupts
890987 ± 6% +587.7% 6127712 ± 6% interrupts.CPU83.RES:Rescheduling_interrupts
909524 ± 5% +574.1% 6131352 ± 6% interrupts.CPU84.RES:Rescheduling_interrupts
887671 ± 5% +598.6% 6201025 ± 6% interrupts.CPU85.RES:Rescheduling_interrupts
922918 ± 7% +567.2% 6157723 ± 5% interrupts.CPU86.RES:Rescheduling_interrupts
924721 ± 8% +564.6% 6146038 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
911674 ± 6% +575.7% 6160493 ± 9% interrupts.CPU9.RES:Rescheduling_interrupts
79925641 ± 4% +581.6% 5.448e+08 ± 6% interrupts.RES:Rescheduling_interrupts
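The per-CPU RES lines above are slices of `/proc/interrupts`; summing one row across all CPUs gives the machine-wide count reported as `interrupts.RES:Rescheduling_interrupts`. A minimal sketch, parsing a hypothetical two-CPU `/proc/interrupts` excerpt rather than a live system (the sample counts reuse the CPU0/CPU1 base values from the table):

```python
SAMPLE = """\
           CPU0       CPU1
RES:     925673     914026   Rescheduling interrupts
CAL:       1234       5678   Function call interrupts
"""

def sum_row(text, name):
    """Sum the per-CPU counts of one /proc/interrupts row."""
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == name + ":":
            return sum(int(f) for f in fields[1:] if f.isdigit())
    raise KeyError(name)

print(sum_row(SAMPLE, "RES"))  # 1839699
```

On a real machine the same function could be fed `open("/proc/interrupts").read()`, though named rows such as NMI/LOC/RES only appear on Linux.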
58677 +33.7% 78438 ± 2% softirqs.CPU0.RCU
62136 -34.5% 40712 ± 4% softirqs.CPU0.SCHED
59057 +34.9% 79666 softirqs.CPU1.RCU
58087 -36.3% 36979 ± 3% softirqs.CPU1.SCHED
58735 +36.2% 80007 ± 2% softirqs.CPU10.RCU
57523 -35.9% 36897 ± 3% softirqs.CPU10.SCHED
58893 +34.1% 78965 ± 3% softirqs.CPU11.RCU
57406 -35.4% 37075 ± 2% softirqs.CPU11.SCHED
58865 +36.3% 80236 ± 3% softirqs.CPU12.RCU
57257 -37.4% 35843 ± 3% softirqs.CPU12.SCHED
59183 +35.7% 80281 ± 3% softirqs.CPU13.RCU
57012 -36.5% 36197 ± 2% softirqs.CPU13.SCHED
59266 +37.4% 81408 ± 3% softirqs.CPU14.RCU
58226 -38.4% 35866 ± 2% softirqs.CPU14.SCHED
61379 +36.0% 83456 ± 2% softirqs.CPU15.RCU
58616 -36.6% 37181 ± 2% softirqs.CPU15.SCHED
61622 +38.5% 85353 ± 3% softirqs.CPU16.RCU
58312 -37.6% 36374 ± 4% softirqs.CPU16.SCHED
61225 +38.1% 84576 ± 3% softirqs.CPU17.RCU
57373 -36.8% 36242 ± 4% softirqs.CPU17.SCHED
61348 +38.4% 84883 ± 3% softirqs.CPU18.RCU
57212 -36.9% 36080 ± 6% softirqs.CPU18.SCHED
61419 +35.5% 83228 softirqs.CPU19.RCU
57815 -36.1% 36934 ± 2% softirqs.CPU19.SCHED
59327 +37.0% 81302 softirqs.CPU2.RCU
58339 -38.2% 36039 ± 3% softirqs.CPU2.SCHED
61521 +39.3% 85705 ± 3% softirqs.CPU20.RCU
58315 -38.7% 35730 ± 4% softirqs.CPU20.SCHED
61303 +36.5% 83660 ± 2% softirqs.CPU21.RCU
58219 -37.4% 36452 ± 5% softirqs.CPU21.SCHED
62716 +40.5% 88094 ± 5% softirqs.CPU22.RCU
59118 ± 2% -39.6% 35707 ± 5% softirqs.CPU22.SCHED
61657 +40.8% 86814 ± 3% softirqs.CPU23.RCU
58433 -40.0% 35081 ± 4% softirqs.CPU23.SCHED
61266 +41.2% 86502 ± 3% softirqs.CPU24.RCU
57837 ± 2% -38.0% 35859 ± 5% softirqs.CPU24.SCHED
61037 +40.9% 86028 ± 4% softirqs.CPU25.RCU
57656 -37.1% 36246 ± 5% softirqs.CPU25.SCHED
60944 +41.3% 86115 ± 4% softirqs.CPU26.RCU
57748 -38.8% 35337 ± 3% softirqs.CPU26.SCHED
60939 +40.7% 85741 ± 4% softirqs.CPU27.RCU
57479 ± 2% -38.3% 35452 ± 4% softirqs.CPU27.SCHED
61905 ± 2% +39.9% 86610 ± 3% softirqs.CPU28.RCU
58315 -40.5% 34682 ± 4% softirqs.CPU28.SCHED
61113 +34.2% 82040 ± 6% softirqs.CPU29.RCU
58094 -38.5% 35707 ± 5% softirqs.CPU29.SCHED
58790 +35.5% 79661 ± 2% softirqs.CPU3.RCU
57557 -37.3% 36063 ± 3% softirqs.CPU3.SCHED
60586 +36.0% 82415 softirqs.CPU30.RCU
57599 -38.2% 35574 ± 4% softirqs.CPU30.SCHED
60304 +36.5% 82319 softirqs.CPU31.RCU
57851 -37.6% 36115 ± 2% softirqs.CPU31.SCHED
60324 +36.4% 82277 softirqs.CPU32.RCU
57639 ± 2% -38.8% 35263 ± 3% softirqs.CPU32.SCHED
60037 +36.3% 81855 ± 2% softirqs.CPU33.RCU
57420 -38.9% 35079 ± 4% softirqs.CPU33.SCHED
60156 +33.8% 80467 ± 2% softirqs.CPU34.RCU
57464 ± 2% -37.9% 35700 ± 5% softirqs.CPU34.SCHED
60986 +34.6% 82080 ± 2% softirqs.CPU35.RCU
57413 ± 2% -37.1% 36086 ± 3% softirqs.CPU35.SCHED
60617 +36.6% 82795 ± 2% softirqs.CPU36.RCU
58369 -39.0% 35594 ± 5% softirqs.CPU36.SCHED
61261 +36.5% 83642 ± 2% softirqs.CPU37.RCU
58398 -41.5% 34146 ± 4% softirqs.CPU37.SCHED
60849 +36.2% 82865 softirqs.CPU38.RCU
58527 ± 2% -39.8% 35253 ± 3% softirqs.CPU38.SCHED
60454 +34.8% 81482 ± 2% softirqs.CPU39.RCU
57462 -38.8% 35174 ± 5% softirqs.CPU39.SCHED
58921 +35.6% 79901 ± 2% softirqs.CPU4.RCU
57398 -35.0% 37291 softirqs.CPU4.SCHED
59983 +35.6% 81310 ± 2% softirqs.CPU40.RCU
57495 -37.5% 35949 ± 3% softirqs.CPU40.SCHED
60086 +33.9% 80460 ± 3% softirqs.CPU41.RCU
57358 -38.7% 35182 ± 4% softirqs.CPU41.SCHED
61416 ± 2% +31.7% 80887 ± 2% softirqs.CPU42.RCU
58654 ± 2% -39.7% 35395 ± 5% softirqs.CPU42.SCHED
60646 +33.2% 80753 ± 3% softirqs.CPU43.RCU
58587 -40.2% 35012 ± 4% softirqs.CPU43.SCHED
54349 +31.1% 71257 ± 2% softirqs.CPU44.RCU
58145 -38.0% 36074 ± 3% softirqs.CPU44.SCHED
65618 ± 2% +25.6% 82444 ± 4% softirqs.CPU45.RCU
58325 -38.2% 36062 ± 2% softirqs.CPU45.SCHED
60360 +36.3% 82298 ± 2% softirqs.CPU46.RCU
58041 -38.0% 35986 ± 5% softirqs.CPU46.SCHED
59401 +36.8% 81253 ± 2% softirqs.CPU47.RCU
57339 -37.8% 35645 ± 4% softirqs.CPU47.SCHED
59694 +37.7% 82214 ± 2% softirqs.CPU48.RCU
57253 -34.6% 37460 softirqs.CPU48.SCHED
59659 ± 2% +39.4% 83171 ± 4% softirqs.CPU49.RCU
57313 -35.9% 36738 ± 4% softirqs.CPU49.SCHED
58982 +35.1% 79708 ± 2% softirqs.CPU5.RCU
57375 -36.1% 36665 ± 4% softirqs.CPU5.SCHED
59984 +37.5% 82477 softirqs.CPU50.RCU
58032 -37.5% 36284 ± 2% softirqs.CPU50.SCHED
59934 +37.6% 82466 ± 2% softirqs.CPU51.RCU
58122 -37.5% 36313 ± 4% softirqs.CPU51.SCHED
59879 +38.0% 82608 ± 2% softirqs.CPU52.RCU
57973 -36.1% 37070 ± 3% softirqs.CPU52.SCHED
59850 ± 2% +37.0% 82025 ± 2% softirqs.CPU53.RCU
57550 -36.3% 36659 ± 4% softirqs.CPU53.SCHED
59785 ± 2% +37.3% 82068 ± 2% softirqs.CPU54.RCU
57572 -35.7% 37000 ± 3% softirqs.CPU54.SCHED
59828 +34.7% 80584 ± 2% softirqs.CPU55.RCU
56891 -35.5% 36678 ± 3% softirqs.CPU55.SCHED
59674 +36.6% 81506 ± 3% softirqs.CPU56.RCU
57322 -36.9% 36170 ± 4% softirqs.CPU56.SCHED
59422 +36.8% 81278 ± 3% softirqs.CPU57.RCU
56682 -36.0% 36257 ± 5% softirqs.CPU57.SCHED
59664 +38.4% 82564 ± 4% softirqs.CPU58.RCU
58281 -37.7% 36334 ± 3% softirqs.CPU58.SCHED
60104 +37.2% 82456 ± 4% softirqs.CPU59.RCU
58250 -36.9% 36771 ± 3% softirqs.CPU59.SCHED
59052 +35.2% 79851 ± 2% softirqs.CPU6.RCU
58102 ± 2% -38.1% 35938 softirqs.CPU6.SCHED
60319 +39.1% 83925 ± 3% softirqs.CPU60.RCU
57942 -37.2% 36412 ± 4% softirqs.CPU60.SCHED
59653 +39.9% 83439 ± 3% softirqs.CPU61.RCU
57018 -35.8% 36591 ± 3% softirqs.CPU61.SCHED
59764 +40.1% 83740 ± 4% softirqs.CPU62.RCU
57479 -36.6% 36434 ± 5% softirqs.CPU62.SCHED
59996 +38.7% 83234 ± 2% softirqs.CPU63.RCU
58008 -36.4% 36870 ± 3% softirqs.CPU63.SCHED
60025 +40.3% 84231 ± 4% softirqs.CPU64.RCU
58603 -37.8% 36425 ± 3% softirqs.CPU64.SCHED
60032 +36.6% 82011 softirqs.CPU65.RCU
58419 -37.3% 36608 ± 3% softirqs.CPU65.SCHED
61568 +40.3% 86368 ± 3% softirqs.CPU66.RCU
58633 -39.6% 35410 ± 3% softirqs.CPU66.SCHED
61611 +41.0% 86891 ± 3% softirqs.CPU67.RCU
58323 ± 2% -39.7% 35142 ± 3% softirqs.CPU67.SCHED
61397 +40.8% 86423 ± 2% softirqs.CPU68.RCU
58069 -37.0% 36595 ± 5% softirqs.CPU68.SCHED
61168 +39.8% 85500 ± 3% softirqs.CPU69.RCU
57442 ± 2% -37.3% 35990 ± 6% softirqs.CPU69.SCHED
59050 +35.9% 80273 ± 2% softirqs.CPU7.RCU
58390 -38.0% 36184 ± 2% softirqs.CPU7.SCHED
61171 +39.9% 85568 ± 3% softirqs.CPU70.RCU
57497 ± 2% -37.6% 35871 ± 4% softirqs.CPU70.SCHED
61074 +40.9% 86077 ± 3% softirqs.CPU71.RCU
57464 -37.8% 35715 ± 2% softirqs.CPU71.SCHED
61345 +41.1% 86558 ± 3% softirqs.CPU72.RCU
58120 -39.6% 35127 ± 5% softirqs.CPU72.SCHED
61293 +34.2% 82238 ± 4% softirqs.CPU73.RCU
58342 -38.6% 35804 ± 4% softirqs.CPU73.SCHED
60943 +33.6% 81410 ± 4% softirqs.CPU74.RCU
57499 -38.0% 35623 ± 3% softirqs.CPU74.SCHED
56943 +32.1% 75237 ± 3% softirqs.CPU75.RCU
57730 -37.7% 35982 ± 3% softirqs.CPU75.SCHED
56725 +33.9% 75965 softirqs.CPU76.RCU
57373 -38.4% 35320 ± 2% softirqs.CPU76.SCHED
56919 +35.4% 77082 ± 2% softirqs.CPU77.RCU
57509 -38.7% 35232 ± 4% softirqs.CPU77.SCHED
56528 +36.3% 77066 ± 2% softirqs.CPU78.RCU
57255 -37.5% 35765 ± 5% softirqs.CPU78.SCHED
57562 ± 2% +33.2% 76651 ± 2% softirqs.CPU79.RCU
57279 ± 2% -36.9% 36164 ± 3% softirqs.CPU79.SCHED
58916 +36.2% 80229 ± 2% softirqs.CPU8.RCU
57850 -36.5% 36710 ± 2% softirqs.CPU8.SCHED
56932 +37.1% 78080 ± 2% softirqs.CPU80.RCU
58409 ± 2% -38.9% 35703 ± 4% softirqs.CPU80.SCHED
189033 ± 2% +11.6% 210927 ± 5% softirqs.CPU80.TIMER
56970 +37.2% 78145 ± 3% softirqs.CPU81.RCU
58583 -41.3% 34378 ± 3% softirqs.CPU81.SCHED
56840 +36.9% 77809 ± 2% softirqs.CPU82.RCU
58526 -38.1% 36249 ± 5% softirqs.CPU82.SCHED
57395 +33.7% 76732 ± 2% softirqs.CPU83.RCU
57260 ± 2% -38.4% 35262 ± 5% softirqs.CPU83.SCHED
56606 +36.5% 77282 ± 3% softirqs.CPU84.RCU
57485 -37.0% 36234 ± 2% softirqs.CPU84.SCHED
57138 +33.6% 76325 ± 2% softirqs.CPU85.RCU
57783 -38.6% 35451 ± 4% softirqs.CPU85.SCHED
57065 +31.5% 75019 ± 4% softirqs.CPU86.RCU
58430 -38.7% 35797 ± 4% softirqs.CPU86.SCHED
188971 ± 2% +24.2% 234770 ± 20% softirqs.CPU86.TIMER
56871 +33.2% 75740 ± 4% softirqs.CPU87.RCU
58398 -39.7% 35211 ± 4% softirqs.CPU87.SCHED
59024 +35.5% 79965 ± 2% softirqs.CPU9.RCU
57479 -35.9% 36869 ± 3% softirqs.CPU9.SCHED
5262948 +36.6% 7191775 ± 2% softirqs.RCU
5094938 -37.7% 3172832 softirqs.SCHED
hackbench.throughput
200000 +-+----------------------------------------------------------------+
180000 +-+.+ ++ +.+.+.+.+ +.+.++ + +.+.+.+ ++.+.+ |
| : :: : : : : : : : : : |
160000 O-+ : O O O: O O O O O O : O O : O O : O O : : |
140000 +-+ : :: : : O O O O : O O : O O : : |
| : : : : : : : : : : : : : :|
120000 +-+ : : : : : : : : : : : : : :|
100000 +-+ : : : : : : : : : : : : : :|
80000 +-+ : : : : : : : : : : : : : :|
| : : : : : : : : : : : : : :|
60000 +-+ : : : : : : : : : : : : : :|
40000 +-+ : : : : : : : : : : : : |
| : : : : : : : : : : : : |
20000 +-+ : : : : : : : : : : : : |
0 +-O-O------O---------------------O--O------------------------------+
hackbench.time.percent_of_cpu_this_job_got
9000 +-+-----------------------O-O-O-O--------O-O-----O-O-----------------+
O.+.+ O O O.+ O O.O.O.O.O +.+.+O O + O.O.+.+ O O +.+.+.+ |
8000 +-+ : : : : : : : : : : : : |
7000 +-+ : : : : : : : : : : : : |
| : : : : : : : : : : : : |
6000 +-+ : : : : : : : : : : : : : :|
5000 +-+ : : : : : : : : : : : : : :|
| : : : : : : : : : : : : : :|
4000 +-+ : : : : : : : : : : : : : :|
3000 +-+ : : : : : : : : : : : : : :|
| : : : : : : : : : : : : : :|
2000 +-+ : : : : : : : : : : : : |
1000 +-+ : : : : : : : : : : : : |
| : : : : : : : : : : : : |
0 +-O-O-------O---------------------O--O-------------------------------+
hackbench.time.voluntary_context_switches
2.5e+09 +-+---------------------------O-----------------------------------+
| O O |
| O OO O O |
2e+09 +-+ O O O O O O O O |
O O O O O O |
| O O |
1.5e+09 +-+ |
| |
1e+09 +-+ |
| |
| |
5e+08 +-+.+ +.+ .+.+.+.+ +.+.+.+ + +.+.+.+ +.++.+ |
| : : : + : : : :: : : : : :|
| : : : + : : : : :: : : : :|
0 +-O-O------O--------------------O---O-----------------------------+
hackbench.time.involuntary_context_switches
1.2e+09 +-+---------------------------------------------------------------+
| O |
1e+09 +-+ OO O O O |
| O O O O O O |
| O O O O O O O O |
8e+08 O-+ O |
| O O |
6e+08 +-+ |
| |
4e+08 +-+ |
| |
| |
2e+08 +-+ |
|.+.+. +.+. .+.+.+.+.+. .+.+.+.+. .+ .+.+.+.+. .+.++.+. .|
0 +-O-O------O--------------------O---O-----------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep3c: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
***************************************************************************************************
lkp-bdw-ep3d: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/100%/debian-x86_64-2019-05-14.cgz/300s/lkp-bdw-ep3d/shared/reaim/0xb000036
commit:
v5.2
94cc25049f (" iQEzBAABCAAdFiEEghj4iEmqxSLpTPRwpekojE+kFfoFAl0jffgACgkQpekojE+k")
                 v5.2  94cc25049f407162b193b52c608
     ----------------  ---------------------------
            fail:runs        %reproduction  fail:runs
                |                 |             |
                1:5              -20%           :2   dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
                1:5              -20%           :2   dmesg.WARNING:stack_recursion
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
7457c0da02 [ 0.733186] BUG: KASAN: unknown-crash in unwind_next_frame
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
commit 7457c0da024b181a9143988d740001f9bc98698d
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Fri May 3 12:22:47 2019 +0200
Commit: Ingo Molnar <mingo@kernel.org>
CommitDate: Tue Jun 25 10:23:50 2019 +0200
x86/alternatives: Add int3_emulate_call() selftest
Given that the entry_*.S changes for this functionality are somewhat
tricky, make sure the paths are tested every boot, instead of on the
rare occasion when we trip an INT3 while rewriting text.
Requested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
faeedb0679 x86/stackframe/32: Allow int3_emulate_push()
7457c0da02 x86/alternatives: Add int3_emulate_call() selftest
+-------------------------------------------------------+------------+------------+
| | faeedb0679 | 7457c0da02 |
+-------------------------------------------------------+------------+------------+
| boot_successes | 33 | 8 |
| boot_failures | 2 | 4 |
| WARNING:possible_circular_locking_dependency_detected | 2 | |
| BUG:KASAN:unknown-crash_in_u | 0 | 4 |
+-------------------------------------------------------+------------+------------+
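The boot_successes/boot_failures columns above come from the robot's automated bisection, which is at its core a binary search over the commit range. A toy sketch of that search — the commit list and the failing predicate are hypothetical stand-ins, not the robot's actual implementation, and unlike the real robot this sketch assumes badness is deterministic and monotonic (the 2 boot_failures on the good commit show real runs are flakier):

```python
def bisect_first_bad(commits, is_bad):
    """Binary search for the first commit where is_bad() holds.

    Assumes `commits` is ordered oldest..newest and that once a
    commit is bad, every later commit is bad too.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid + 1      # first bad commit is after mid
    return commits[lo]

# Hypothetical history around the two commits named in the table:
# 'faeedb0679' is the last good commit, '7457c0da02' introduces the splat.
history = ["aaaa000000", "bbbb111111", "faeedb0679", "7457c0da02", "cccc222222"]
bad = {"7457c0da02", "cccc222222"}
print(bisect_first_bad(history, lambda c: c in bad))  # 7457c0da02
```

`git bisect run <script>` automates the same search on a real tree, with the script's exit status playing the role of `is_bad`.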
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 0.726834] CPU: GenuineIntel Intel Core Processor (Haswell) (family: 0x6, model: 0x3c, stepping: 0x1)
[ 0.728007] Spectre V2 : Spectre mitigation: kernel not compiled with retpoline; no mitigation available!
[ 0.728009] Speculative Store Bypass: Vulnerable
[ 0.729969] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[ 0.732269] ==================================================================
[ 0.733186] BUG: KASAN: unknown-crash in unwind_next_frame+0x3f6/0x490
[ 0.734146] Read of size 8 at addr ffffffff84007db0 by task swapper/0
[ 0.734963]
[ 0.735168] CPU: 0 PID: 0 Comm: swapper Tainted: G T 5.2.0-rc6-00013-g7457c0d #1
[ 0.736266] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.737374] Call Trace:
[ 0.737697] dump_stack+0x19/0x1b
[ 0.738129] print_address_description+0x1b0/0x2b2
[ 0.738745] ? unwind_next_frame+0x3f6/0x490
[ 0.739370] __kasan_report+0x10f/0x171
[ 0.739959] ? unwind_next_frame+0x3f6/0x490
[ 0.739959] kasan_report+0x12/0x1c
[ 0.739959] __asan_load8+0x54/0x81
[ 0.739959] unwind_next_frame+0x3f6/0x490
[ 0.739959] ? unwind_dump+0x24e/0x24e
[ 0.739959] unwind_next_frame+0x1b/0x23
[ 0.739959] ? create_prof_cpu_mask+0x20/0x20
[ 0.739959] arch_stack_walk+0x68/0xa5
[ 0.739959] ? set_memory_4k+0x2a/0x2c
[ 0.739959] stack_trace_save+0x7b/0xa0
[ 0.739959] ? stack_trace_consume_entry+0x89/0x89
[ 0.739959] save_trace+0x3c/0x93
[ 0.739959] mark_lock+0x1ef/0x9b1
[ 0.739959] ? sched_clock_local+0x86/0xa6
[ 0.739959] __lock_acquire+0x3ba/0x1bea
[ 0.739959] ? __asan_loadN+0xf/0x11
[ 0.739959] ? mark_held_locks+0x8e/0x8e
[ 0.739959] ? mark_lock+0xb4/0x9b1
[ 0.739959] ? sched_clock_local+0x86/0xa6
[ 0.739959] lock_acquire+0x122/0x221
[ 0.739959] ? _vm_unmap_aliases+0x141/0x183
[ 0.739959] __mutex_lock+0xb6/0x731
[ 0.739959] ? _vm_unmap_aliases+0x141/0x183
[ 0.739959] ? sched_clock_cpu+0xac/0xb1
[ 0.739959] ? __mutex_add_waiter+0xae/0xae
[ 0.739959] ? lock_downgrade+0x368/0x368
[ 0.739959] ? _vm_unmap_aliases+0x40/0x183
[ 0.739959] mutex_lock_nested+0x16/0x18
[ 0.739959] _vm_unmap_aliases+0x141/0x183
[ 0.739959] ? _vm_unmap_aliases+0x40/0x183
[ 0.739959] vm_unmap_aliases+0x14/0x16
[ 0.739959] change_page_attr_set_clr+0x15e/0x2f2
[ 0.739959] ? __set_pages_p+0x111/0x111
[ 0.739959] ? alternative_instructions+0xd8/0x118
[ 0.739959] ? arch_init_ideal_nops+0x181/0x181
[ 0.739959] set_memory_4k+0x2a/0x2c
[ 0.739959] check_bugs+0x11fd/0x1298
[ 0.739959] ? l1tf_cmdline+0x1dc/0x1dc
[ 0.739959] ? proc_create_single_data+0x5f/0x6e
[ 0.739959] ? cgroup_init+0x2b1/0x2f6
[ 0.739959] start_kernel+0x793/0x7eb
[ 0.739959] ? thread_stack_cache_init+0x2e/0x2e
[ 0.739959] ? idt_setup_early_handler+0x70/0xb1
[ 0.739959] x86_64_start_reservations+0x55/0x76
[ 0.739959] x86_64_start_kernel+0x87/0xaa
[ 0.739959] secondary_startup_64+0xa4/0xb0
[ 0.739959]
[ 0.739959] Memory state around the buggy address:
[ 0.739959] ffffffff84007c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1
[ 0.739959] ffffffff84007d00: f1 00 00 00 00 00 00 00 00 00 f2 f2 f2 f3 f3 f3
[ 0.739959] >ffffffff84007d80: f3 79 be 52 49 79 be 00 00 00 00 00 00 00 00 f1
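Each shadow byte in the dump above describes one 8-byte granule of real memory; the `79 be 52 49 ...` values on the flagged line are not valid KASAN poison values, which is why the report class is "unknown-crash". A small illustrative decoder (not kernel code; the table follows KASAN's documented stack-redzone encoding):

```python
# Decode KASAN shadow-memory bytes, as printed in the
# "Memory state around the buggy address" dump above.
KASAN_SHADOW = {
    0x00: "all 8 bytes addressable",
    0xf1: "stack left redzone",
    0xf2: "stack mid redzone",
    0xf3: "stack right redzone",
    0xf8: "stack after scope",
}

def decode(shadow_byte: int) -> str:
    """Return a human-readable meaning for one shadow byte."""
    if 0x01 <= shadow_byte <= 0x07:
        # Partially addressable granule: only the first N bytes are valid.
        return f"first {shadow_byte} bytes addressable"
    return KASAN_SHADOW.get(shadow_byte, "uninitialized/unknown shadow value")

# The '>' line of the dump: 0x79, 0xbe, 0x52, 0x49 are not poison
# values KASAN knows, hence the "unknown-crash" classification.
for b in (0xf3, 0x79, 0xbe, 0x52, 0x49, 0x00):
    print(f"{b:02x}: {decode(b)}")
```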
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start ce0af1f7ff3c78bd4d9327a981a4c6e609200157 6fbc7275c7a9ba97877050335f290341a1fd8dbf --
git bisect good c43a6604c14fd6095c098e373f113a01fe4b2f73 # 10:26 G 11 0 0 0 Merge 'linux-review/Sergey-Senozhatsky/cifs-fix-build-by-selecting-CONFIG_KEYS/20190701-192530' into devel-hourly-2019070602
git bisect bad 592eb776845425e370f815b3f8f213b8f2d14faf # 10:36 B 0 6 21 0 Merge 'linux-review/Alan-Stern/USB-core-Fix-compiler-warnings-in-devio-c/20190627-045941' into devel-hourly-2019070602
git bisect good 5b6207d86d7b287567a39f61e9a713b3522dfd6a # 10:54 G 11 0 1 1 Merge 'tip/x86/timers' into devel-hourly-2019070602
git bisect bad 264b8443817826a38479ad52596c8ae2f854b4f6 # 11:05 B 0 2 17 0 Merge 'tip/x86/asm' into devel-hourly-2019070602
git bisect good 5b30b22f870ee5e1a3ac000bde0b582a19a5c44f # 11:21 G 11 0 0 0 Merge 'linux-review/Liu-Changcheng/RDMA-i40iw-set-queue-pair-state-when-being-queried/20190628-175445' into devel-hourly-2019070602
git bisect good 4705370afadb21a3c7b2d9bc5c5e5fdb43f918a8 # 11:49 G 11 0 4 4 Merge 'm68k/m68k-queue' into devel-hourly-2019070602
git bisect good 39081077793b516f91c47aadf935569618e7fe69 # 12:11 G 11 0 1 1 Merge 'krzk/next/dt' into devel-hourly-2019070602
git bisect good b56298be8511287dc67d9d8c501bb6703698f229 # 12:38 G 11 0 1 1 Merge 'tip/timers/vdso' into devel-hourly-2019070602
git bisect good f7ffb2c7f16fa3618b243a0dcd0bbbb65263f57b # 12:59 G 11 0 3 3 Merge 'thermal/next' into devel-hourly-2019070602
git bisect good 5e1246ff2d3707992e3bf3eaa45551f7fab97983 # 13:30 G 12 0 2 2 x86/entry/32: Clean up return from interrupt preemption path
git bisect good ea1ed38dba64b64a245ab8ca1406269d17b99485 # 13:54 G 11 0 2 2 x86/stackframe, x86/ftrace: Add pt_regs frame annotations
git bisect good faeedb0679bee39ebffc6d53111e86932dea189a # 14:21 G 11 0 0 0 x86/stackframe/32: Allow int3_emulate_push()
git bisect bad 7457c0da024b181a9143988d740001f9bc98698d # 14:29 B 0 1 16 0 x86/alternatives: Add int3_emulate_call() selftest
# first bad commit: [7457c0da024b181a9143988d740001f9bc98698d] x86/alternatives: Add int3_emulate_call() selftest
git bisect good faeedb0679bee39ebffc6d53111e86932dea189a # 14:42 G 34 0 2 2 x86/stackframe/32: Allow int3_emulate_push()
# extra tests on HEAD of linux-devel/devel-hourly-2019070602
git bisect bad ce0af1f7ff3c78bd4d9327a981a4c6e609200157 # 14:43 B 5 7 0 1 0day head guard for 'devel-hourly-2019070602'
# extra tests on tree/branch linux-next/master
# extra tests with first bad commit reverted
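The bisect log above is the output of the standard `git bisect` good/bad workflow driven by an automated boot test. The same mechanics can be demonstrated on a throwaway repository (the repo, the `status` file, and the breaking point at commit 7 are made up for illustration):

```shell
# Build a 10-commit toy history that "breaks" at commit 7, then let
# `git bisect run` find the first bad commit automatically, the way
# 0-day drives its boot test: exit 0 means good, non-zero means bad.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email robot@example.com
git config user.name robot
for i in $(seq 1 10); do
    if [ "$i" -ge 7 ]; then
        echo "broken $i" > status   # the "regression"
    else
        echo "ok $i" > status
    fi
    git add status
    git commit -qm "commit $i"
done
first=$(git rev-list --max-parents=0 HEAD)
git bisect start HEAD "$first"      # HEAD is bad, root commit is good
git bisect run sh -c 'grep -q ok status'
git bisect log | grep "first bad commit"
```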
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[ptrace] cd5bbb3047: kernel_selftests.seccomp.seccomp_bpf.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: cd5bbb3047bf73353767d70a03db986600ca372a ("ptrace: add PTRACE_GET_SYSCALL_INFO request")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
# selftests: seccomp: seccomp_bpf
# [==========] Running 74 tests from 1 test cases.
# [ RUN ] global.mode_strict_support
# [ OK ] global.mode_strict_support
# [ RUN ] global.mode_strict_cannot_call_prctl
# [ OK ] global.mode_strict_cannot_call_prctl
# [ RUN ] global.no_new_privs_support
# [ OK ] global.no_new_privs_support
# [ RUN ] global.mode_filter_support
# [ OK ] global.mode_filter_support
# [ RUN ] global.mode_filter_without_nnp
# [ OK ] global.mode_filter_without_nnp
# [ RUN ] global.filter_size_limits
# [ OK ] global.filter_size_limits
# [ RUN ] global.filter_chain_limits
# [ OK ] global.filter_chain_limits
# [ RUN ] global.mode_filter_cannot_move_to_strict
# [ OK ] global.mode_filter_cannot_move_to_strict
# [ RUN ] global.mode_filter_get_seccomp
# [ OK ] global.mode_filter_get_seccomp
# [ RUN ] global.ALLOW_all
# [ OK ] global.ALLOW_all
# [ RUN ] global.empty_prog
# [ OK ] global.empty_prog
# [ RUN ] global.log_all
# [ OK ] global.log_all
# [ RUN ] global.unknown_ret_is_kill_inside
# [ OK ] global.unknown_ret_is_kill_inside
# [ RUN ] global.unknown_ret_is_kill_above_allow
# [ OK ] global.unknown_ret_is_kill_above_allow
# [ RUN ] global.KILL_all
# [ OK ] global.KILL_all
# [ RUN ] global.KILL_one
# [ OK ] global.KILL_one
# [ RUN ] global.KILL_one_arg_one
# [ OK ] global.KILL_one_arg_one
# [ RUN ] global.KILL_one_arg_six
# [ OK ] global.KILL_one_arg_six
# [ RUN ] global.KILL_thread
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_redirected:Expected 0 (0) == msg (1)
# TRACE_syscall.ptrace_syscall_redirected: Test failed at step #15
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_errno:Expected 0 (0) == msg (1)
# TRACE_syscall.ptrace_syscall_errno: Test failed at step #15
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.ptrace_syscall_faked:Expected 0 (0) == msg (1)
# TRACE_syscall.ptrace_syscall_faked: Test failed at step #15
# [==========] Running 74 tests from 1 test cases.
# [ RUN ] global.mode_strict_support
# [ OK ] global.mode_strict_support
# [ RUN ] global.mode_strict_cannot_call_prctl
# [ OK ] global.mode_strict_cannot_call_prctl
# [ RUN ] global.no_new_privs_support
# [ OK ] global.no_new_privs_support
# [ RUN ] global.mode_filter_support
# [ OK ] global.mode_filter_support
# [ RUN ] global.mode_filter_without_nnp
# [ OK ] global.mode_filter_without_nnp
# [ RUN ] global.filter_size_limits
# [ OK ] global.filter_size_limits
# [ RUN ] global.filter_chain_limits
# [ OK ] global.filter_chain_limits
# [ RUN ] global.mode_filter_cannot_move_to_strict
# [ OK ] global.mode_filter_cannot_move_to_strict
# [ RUN ] global.mode_filter_get_seccomp
# [ OK ] global.mode_filter_get_seccomp
# [ RUN ] global.ALLOW_all
# [ OK ] global.ALLOW_all
# [ RUN ] global.empty_prog
# [ OK ] global.empty_prog
# [ RUN ] global.log_all
# [ OK ] global.log_all
# [ RUN ] global.unknown_ret_is_kill_inside
# [ OK ] global.unknown_ret_is_kill_inside
# [ RUN ] global.unknown_ret_is_kill_above_allow
# [ OK ] global.unknown_ret_is_kill_above_allow
# [ RUN ] global.KILL_all
# [ OK ] global.KILL_all
# [ RUN ] global.KILL_one
# [ OK ] global.KILL_one
# [ RUN ] global.KILL_one_arg_one
# [ OK ] global.KILL_one_arg_one
# [ RUN ] global.KILL_one_arg_six
# [ OK ] global.KILL_one_arg_six
# [ RUN ] global.KILL_thread
# [ OK ] global.KILL_thread
# [ RUN ] global.KILL_process
# [ OK ] global.KILL_process
# [ RUN ] global.arg_out_of_range
# [ OK ] global.arg_out_of_range
# [ RUN ] global.ERRNO_valid
# [ OK ] global.ERRNO_valid
# [ RUN ] global.ERRNO_zero
# [ OK ] global.ERRNO_zero
# [ RUN ] global.ERRNO_capped
# [ OK ] global.ERRNO_capped
# [ RUN ] global.ERRNO_order
# [ OK ] global.ERRNO_order
# [ RUN ] TRAP.dfl
# [ OK ] TRAP.dfl
# [ RUN ] TRAP.ign
# [ OK ] TRAP.ign
# [ RUN ] TRAP.handler
# [ OK ] TRAP.handler
# [ RUN ] precedence.allow_ok
# [ OK ] precedence.allow_ok
# [ RUN ] precedence.kill_is_highest
# [ OK ] precedence.kill_is_highest
# [ RUN ] precedence.kill_is_highest_in_any_order
# [ OK ] precedence.kill_is_highest_in_any_order
# [ RUN ] precedence.trap_is_second
# [ OK ] precedence.trap_is_second
# [ RUN ] precedence.trap_is_second_in_any_order
# [ OK ] precedence.trap_is_second_in_any_order
# [ RUN ] precedence.errno_is_third
# [ OK ] precedence.errno_is_third
# [ RUN ] precedence.errno_is_third_in_any_order
# [ OK ] precedence.errno_is_third_in_any_order
# [ RUN ] precedence.trace_is_fourth
# [ OK ] precedence.trace_is_fourth
# [ RUN ] precedence.trace_is_fourth_in_any_order
# [ OK ] precedence.trace_is_fourth_in_any_order
# [ RUN ] precedence.log_is_fifth
# [ OK ] precedence.log_is_fifth
# [ RUN ] precedence.log_is_fifth_in_any_order
# [ OK ] precedence.log_is_fifth_in_any_order
# [ RUN ] TRACE_poke.read_has_side_effects
# [ OK ] TRACE_poke.read_has_side_effects
# [ RUN ] TRACE_poke.getpid_runs_normally
# [ OK ] TRACE_poke.getpid_runs_normally
# [ RUN ] TRACE_syscall.ptrace_syscall_redirected
# [ FAIL ] TRACE_syscall.ptrace_syscall_redirected
# [ RUN ] TRACE_syscall.ptrace_syscall_errno
# [ FAIL ] TRACE_syscall.ptrace_syscall_errno
# [ RUN ] TRACE_syscall.ptrace_syscall_faked
# [ FAIL ] TRACE_syscall.ptrace_syscall_faked
# [ RUN ] TRACE_syscall.syscall_allowed
# [ OK ] TRACE_syscall.syscall_allowed
# [ RUN ] TRACE_syscall.syscall_redirected
# [ OK ] TRACE_syscall.syscall_redirected
# [ RUN ] TRACE_syscall.syscall_errno
# [ OK ] TRACE_syscall.syscall_errno
# [ RUN ] TRACE_syscall.syscall_faked
# [ OK ] TRACE_syscall.syscall_faked
# [ RUN ] TRACE_syscall.skip_after_RET_TRACE
# [ OK ] TRACE_syscall.skip_after_RET_TRACE
# [ RUN ] TRACE_syscall.kill_after_RET_TRACE
# [ OK ] TRACE_syscall.kill_after_RET_TRACE
# [ RUN ] TRACE_syscall.skip_aseccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.skip_after_ptrace:Expected 0 (0) == msg (1)
# TRACE_syscall.skip_after_ptrace: Test failed at step #17
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (1)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (2)
# seccomp_bpf.c:1781:TRACE_syscall.kill_after_ptrace:Expected 0 (0) == msg (1)
# fter_ptrace
# [ FAIL ] TRACE_syscall.skip_after_ptrace
# [ RUN ] TRACE_syscall.kill_after_ptrace
# [ OK ] TRACE_syscall.kill_after_ptrace
# [ RUN ] global.seccomp_syscall
# [ OK ] global.seccomp_syscall
# [ RUN ] global.seccomp_syscall_mode_lock
# [ OK ] global.seccomp_syscall_mode_lock
# [ RUN ] global.detect_seccomp_filter_flags
# [ OK ] global.detect_seccomp_filter_flags
# [ RUN ] global.TSYNC_first
# [ OK ] global.TSYNC_first
# [ RUN ] TSYNC.siblings_fail_prctl
# [ OK ] TSYNC.siblings_fail_prctl
# [ RUN ] TSYNC.two_siblings_with_ancestor
# [ OK ] TSYNC.two_siblings_with_ancestor
# [ RUN ] TSYNC.two_sibling_want_nnp
# [ OK ] TSYNC.two_sibling_want_nnp
# [ RUN ] TSYNC.two_siblings_with_no_filter
# [ OK ] TSYNC.two_siblings_with_no_filter
# [ RUN ] TSYNC.two_siblings_with_one_divergence
# [ OK ] TSYNC.two_siblings_with_one_divergence
# [ RUN ] TSYNC.two_siblings_not_under_filter
# [ OK ] TSYNC.two_siblings_not_under_filter
# [ RUN ] global.syscall_restart
# [ OK ] global.syscall_restart
# [ RUN ] global.filter_flag_log
# [ OK ] global.filter_flag_log
# [ RUN ] global.get_action_avail
# [ OK ] global.get_action_avail
# [ RUN ] global.get_metadata
# [ OK ] global.get_metadata
# [ RUN ] global.user_notification_basic
# [ OK ] global.user_notification_basic
# [ RUN ] global.user_notification_kill_in_middle
# [ OK ] global.user_notification_kill_in_middle
# [ RUN ] global.user_notification_signal
# [ OK ] global.user_notification_signal
# [ RUN ] global.user_notification_closed_listener
# [ OK ] global.user_notification_closed_listener
# [ RUN ] global.user_notification_child_pid_ns
# [ OK ] global.user_notification_child_pid_ns
# [ RUN ] global.user_notification_sibling_pid_ns
# [ OK ] global.user_notification_sibling_pid_ns
# [ RUN ] global.user_notification_fault_recv
# [ OK ] global.user_notification_fault_recv
# [ RUN ] global.seccomp_get_notif_sizes
# [ OK ] global.seccomp_get_notif_sizes
# [==========] 70 / 74 tests passed.
# [ FAILED ]
not ok 1 selftests: seccomp: seccomp_bpf
To reproduce:
# build kernel
cd linux
cp config-5.2.0-rc4-00289-gcd5bbb3047bf7 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[PATCH] xfs: don't update lastino for FSBULKSTAT_SINGLE
by Darrick J. Wong
From: Darrick J. Wong <darrick.wong(a)oracle.com>
The kernel test robot found a regression of xfs/054 in the conversion of
bulkstat to use the new iwalk infrastructure -- if a caller set *lastip
= 128 and invoked FSBULKSTAT_SINGLE, the bstat info would be for inode
128, but *lastip would be increased by the kernel to 129.
FSBULKSTAT_SINGLE never incremented lastip before, so it's incorrect to
make such an update to the internal lastino value now.
Fixes: 2810bd6840e463 ("xfs: convert bulkstat to new iwalk infrastructure")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
fs/xfs/xfs_ioctl.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
index 6bf04e71325b..1876461e5104 100644
--- a/fs/xfs/xfs_ioctl.c
+++ b/fs/xfs/xfs_ioctl.c
@@ -797,7 +797,6 @@ xfs_ioc_fsbulkstat(
breq.startino = lastino;
breq.icount = 1;
error = xfs_bulkstat_one(&breq, xfs_fsbulkstat_one_fmt);
- lastino = breq.startino;
} else { /* XFS_IOC_FSBULKSTAT */
breq.startino = lastino ? lastino + 1 : 0;
error = xfs_bulkstat(&breq, xfs_fsbulkstat_one_fmt);
[xfs] 490d451fa5: aim7.jobs-per-min -6.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -6.3% regression of aim7.jobs-per-min due to commit:
commit: 490d451fa5188975c21246f7f8f4914cd3f2d6f2 ("xfs: fix inode_cluster_size rounding mayhem")
https://kernel.googlesource.com/pub/scm/fs/xfs/xfs-linux.git xfs-5.3-merge
in testcase: aim7
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_src
load: 3000
cpufreq_governor: performance
ucode: 0x200005e
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.6/3000/RAID0/debian-x86_64-2019-05-14.cgz/lkp-skl-2sp7/disk_src/aim7/0x200005e
commit:
494dba7b27 ("xfs: refactor inode geometry setup routines")
490d451fa5 ("xfs: fix inode_cluster_size rounding mayhem")
494dba7b276e12bc 490d451fa5188975c21246f7f8f
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
85540 -6.3% 80176 aim7.jobs-per-min
210.66 +6.7% 224.75 aim7.time.elapsed_time
210.66 +6.7% 224.75 aim7.time.elapsed_time.max
1331 +4.0% 1384 aim7.time.system_time
19562090 -3.3% 18914487 aim7.time.voluntary_context_switches
8.45 -2.4% 8.25 iostat.cpu.system
16359 ± 24% -78.0% 3597 ±167% numa-numastat.node0.other_node
184824 -9.4% 167476 vmstat.system.cs
7985664 ± 5% -15.9% 6715392 ± 10% meminfo.DirectMap2M
344940 ± 4% -11.7% 304492 ± 11% meminfo.DirectMap4k
320.00 -2.5% 312.00 turbostat.Avg_MHz
8.74 ± 3% -1.0 7.78 ± 3% turbostat.C1%
42.55 -1.1% 42.07 turbostat.RAMWatt
9263 -6.4% 8669 ± 4% proc-vmstat.nr_shmem
687031 +3.4% 710693 proc-vmstat.numa_hit
663468 +3.6% 687162 proc-vmstat.numa_local
4201 ± 3% -17.8% 3455 ± 11% proc-vmstat.pgactivate
774593 +3.2% 799333 proc-vmstat.pgalloc_normal
715182 +4.4% 746954 proc-vmstat.pgfault
12681 ± 11% +48.8% 18874 ± 15% numa-vmstat.node0.nr_slab_reclaimable
16222 ± 24% -77.9% 3590 ±166% numa-vmstat.node0.numa_other
333.75 ± 5% -55.4% 149.00 ±101% numa-vmstat.node1.nr_inactive_file
4773 ± 2% -67.6% 1548 ±110% numa-vmstat.node1.nr_shmem
17450 ± 8% -35.4% 11267 ± 27% numa-vmstat.node1.nr_slab_reclaimable
41685 -32.5% 28147 ± 42% numa-vmstat.node1.nr_slab_unreclaimable
333.75 ± 5% -55.4% 149.00 ±101% numa-vmstat.node1.nr_zone_inactive_file
50756 ± 11% +48.8% 75513 ± 15% numa-meminfo.node0.KReclaimable
50756 ± 11% +48.8% 75513 ± 15% numa-meminfo.node0.SReclaimable
1335 ± 5% -55.2% 598.00 ±101% numa-meminfo.node1.Inactive(file)
69836 ± 8% -35.4% 45087 ± 27% numa-meminfo.node1.KReclaimable
1459618 ± 2% -22.8% 1126350 ± 21% numa-meminfo.node1.MemUsed
69836 ± 8% -35.4% 45087 ± 27% numa-meminfo.node1.SReclaimable
166770 -32.4% 112702 ± 42% numa-meminfo.node1.SUnreclaim
19096 ± 2% -67.6% 6193 ±110% numa-meminfo.node1.Shmem
236606 ± 3% -33.3% 157789 ± 36% numa-meminfo.node1.Slab
13.48 ± 11% -27.6% 9.75 ± 22% sched_debug.cfs_rq:/.util_est_enqueued.avg
17417 ± 4% +102.4% 35258 ± 25% sched_debug.cpu.avg_idle.min
200170 ± 3% -11.3% 177551 ± 3% sched_debug.cpu.nr_switches.min
224927 -9.5% 203656 sched_debug.cpu.sched_count.avg
243511 -9.9% 219425 sched_debug.cpu.sched_count.max
197972 ± 4% -10.6% 177000 ± 3% sched_debug.cpu.sched_count.min
112286 -9.5% 101655 sched_debug.cpu.sched_goidle.avg
121611 -9.9% 109518 sched_debug.cpu.sched_goidle.max
98719 ± 4% -10.5% 88352 ± 3% sched_debug.cpu.sched_goidle.min
112397 -9.5% 101751 sched_debug.cpu.ttwu_count.avg
99155 ± 4% -9.3% 89884 ± 3% sched_debug.cpu.ttwu_count.min
3.088e+09 -7.1% 2.869e+09 perf-stat.i.branch-instructions
83196224 -6.9% 77422910 perf-stat.i.branch-misses
39.31 +1.4 40.67 perf-stat.i.cache-miss-rate%
186802 -9.3% 169481 perf-stat.i.context-switches
2.246e+10 -2.8% 2.183e+10 perf-stat.i.cpu-cycles
301.96 ± 3% -14.4% 258.37 ± 5% perf-stat.i.cpu-migrations
3030899 -7.0% 2817923 ± 4% perf-stat.i.dTLB-load-misses
3.663e+09 -6.6% 3.422e+09 perf-stat.i.dTLB-loads
155564 ± 5% -8.8% 141852 ± 7% perf-stat.i.dTLB-store-misses
1.719e+09 -6.3% 1.612e+09 perf-stat.i.dTLB-stores
1132323 ± 7% -8.1% 1040500 ± 2% perf-stat.i.iTLB-load-misses
7997681 -6.1% 7508178 perf-stat.i.iTLB-loads
1.487e+10 -7.0% 1.383e+10 perf-stat.i.instructions
0.65 -3.8% 0.63 perf-stat.i.ipc
3223 -2.0% 3160 perf-stat.i.minor-faults
12807723 +1.8% 13034193 perf-stat.i.node-load-misses
2062331 +5.3% 2171538 perf-stat.i.node-loads
2819059 -7.5% 2608335 perf-stat.i.node-store-misses
130481 -5.4% 123403 perf-stat.i.node-stores
3223 -2.0% 3160 perf-stat.i.page-faults
7.83 +5.4% 8.26 perf-stat.overall.MPKI
40.38 +1.5 41.86 perf-stat.overall.cache-miss-rate%
1.51 +4.5% 1.58 perf-stat.overall.cpi
477.65 -4.4% 456.87 perf-stat.overall.cycles-between-cache-misses
0.66 -4.3% 0.63 perf-stat.overall.ipc
3.073e+09 -7.1% 2.855e+09 perf-stat.ps.branch-instructions
82794308 -6.9% 77071761 perf-stat.ps.branch-misses
185887 -9.2% 168701 perf-stat.ps.context-switches
2.236e+10 -2.8% 2.173e+10 perf-stat.ps.cpu-cycles
300.52 ± 3% -14.4% 257.28 ± 5% perf-stat.ps.cpu-migrations
3016180 -7.0% 2805027 ± 4% perf-stat.ps.dTLB-load-misses
3.645e+09 -6.6% 3.406e+09 perf-stat.ps.dTLB-loads
154823 ± 5% -8.8% 141216 ± 7% perf-stat.ps.dTLB-store-misses
1.711e+09 -6.2% 1.604e+09 perf-stat.ps.dTLB-stores
1126828 ± 7% -8.1% 1035738 ± 2% perf-stat.ps.iTLB-load-misses
7958612 -6.1% 7473724 perf-stat.ps.iTLB-loads
1.48e+10 -7.0% 1.377e+10 perf-stat.ps.instructions
3209 -1.9% 3149 perf-stat.ps.minor-faults
12744993 +1.8% 12974158 perf-stat.ps.node-load-misses
2052251 +5.3% 2161551 perf-stat.ps.node-loads
2805253 -7.4% 2596325 perf-stat.ps.node-store-misses
129870 -5.4% 122873 perf-stat.ps.node-stores
3209 -1.9% 3149 perf-stat.ps.page-faults
75633 ± 2% +13.8% 86075 ± 8% softirqs.CPU0.TIMER
66792 ± 2% +12.2% 74934 ± 9% softirqs.CPU1.TIMER
68311 +12.2% 76617 ± 11% softirqs.CPU10.TIMER
29860 ± 2% +8.8% 32499 ± 2% softirqs.CPU11.SCHED
67495 ± 3% +13.9% 76899 ± 12% softirqs.CPU11.TIMER
30084 +9.0% 32788 ± 2% softirqs.CPU12.SCHED
69794 ± 6% +11.8% 78012 ± 11% softirqs.CPU12.TIMER
30384 +7.9% 32777 ± 2% softirqs.CPU13.SCHED
68705 +14.4% 78630 ± 11% softirqs.CPU13.TIMER
29805 +7.6% 32057 ± 3% softirqs.CPU14.SCHED
66776 +12.0% 74781 ± 10% softirqs.CPU14.TIMER
55769 +10.4% 61544 softirqs.CPU15.RCU
67027 ± 2% +13.1% 75809 ± 10% softirqs.CPU15.TIMER
30420 +8.4% 32967 ± 2% softirqs.CPU16.SCHED
68713 +13.6% 78081 ± 11% softirqs.CPU16.TIMER
53587 ± 5% +14.7% 61442 softirqs.CPU17.RCU
30995 ± 2% +9.2% 33862 ± 3% softirqs.CPU18.SCHED
69520 ± 4% +17.1% 81388 ± 12% softirqs.CPU18.TIMER
30143 +10.0% 33145 ± 2% softirqs.CPU2.SCHED
68273 +15.1% 78551 ± 10% softirqs.CPU2.TIMER
68008 ± 2% +12.7% 76628 ± 9% softirqs.CPU20.TIMER
67197 ± 2% +12.9% 75868 ± 9% softirqs.CPU21.TIMER
67291 ± 2% +11.6% 75123 ± 10% softirqs.CPU22.TIMER
65606 +12.1% 73533 ± 7% softirqs.CPU23.TIMER
68790 +11.0% 76388 ± 9% softirqs.CPU26.TIMER
65092 ± 2% +13.5% 73886 ± 8% softirqs.CPU27.TIMER
67028 ± 3% +14.1% 76472 ± 9% softirqs.CPU28.TIMER
67306 ± 2% +11.3% 74920 ± 8% softirqs.CPU29.TIMER
67033 ± 2% +12.7% 75519 ± 9% softirqs.CPU30.TIMER
67820 ± 2% +12.1% 76004 ± 10% softirqs.CPU31.TIMER
66204 +11.6% 73887 ± 8% softirqs.CPU32.TIMER
67556 ± 2% +10.7% 74781 ± 8% softirqs.CPU33.TIMER
29106 +9.4% 31847 ± 3% softirqs.CPU36.SCHED
28942 ± 2% +8.9% 31512 ± 3% softirqs.CPU37.SCHED
29150 +9.7% 31984 ± 3% softirqs.CPU38.SCHED
67927 +16.7% 79249 ± 11% softirqs.CPU4.TIMER
29114 ± 2% +11.1% 32332 ± 5% softirqs.CPU40.SCHED
65775 +28.9% 84766 ± 24% softirqs.CPU40.TIMER
29101 ± 2% +7.9% 31395 ± 3% softirqs.CPU41.SCHED
29161 ± 3% +11.1% 32385 softirqs.CPU42.SCHED
64737 +15.1% 74538 ± 9% softirqs.CPU42.TIMER
29585 ± 2% +9.9% 32515 ± 3% softirqs.CPU43.SCHED
66414 ± 2% +12.1% 74465 ± 10% softirqs.CPU43.TIMER
29083 +9.1% 31718 ± 4% softirqs.CPU44.SCHED
60441 ± 3% +13.5% 68597 ± 2% softirqs.CPU45.RCU
29010 ± 2% +10.5% 32052 ± 4% softirqs.CPU45.SCHED
61581 +9.8% 67620 softirqs.CPU48.RCU
29491 +9.9% 32406 ± 3% softirqs.CPU48.SCHED
65730 +10.1% 72370 ± 7% softirqs.CPU48.TIMER
61858 +9.7% 67844 softirqs.CPU49.RCU
65309 +11.8% 72991 ± 8% softirqs.CPU49.TIMER
28966 +7.9% 31269 ± 4% softirqs.CPU50.SCHED
29145 ± 2% +8.5% 31618 ± 4% softirqs.CPU51.SCHED
61437 +11.4% 68414 ± 2% softirqs.CPU52.RCU
29869 +8.1% 32292 ± 2% softirqs.CPU52.SCHED
66283 ± 2% +11.6% 73997 ± 9% softirqs.CPU52.TIMER
30109 ± 4% +6.9% 32172 ± 2% softirqs.CPU53.SCHED
66547 ± 2% +7.4% 71490 ± 5% softirqs.CPU54.TIMER
65625 +10.3% 72402 ± 6% softirqs.CPU55.TIMER
65739 +8.7% 71459 ± 5% softirqs.CPU56.TIMER
29338 +8.5% 31825 ± 2% softirqs.CPU57.SCHED
65624 +8.3% 71073 ± 4% softirqs.CPU57.TIMER
29326 +7.9% 31644 ± 3% softirqs.CPU58.SCHED
59934 ± 3% +9.2% 65474 ± 4% softirqs.CPU6.RCU
68921 ± 3% +16.1% 79996 ± 10% softirqs.CPU6.TIMER
66408 ± 2% +7.3% 71233 ± 3% softirqs.CPU62.TIMER
28503 +8.2% 30839 ± 2% softirqs.CPU63.SCHED
66373 +9.8% 72889 ± 8% softirqs.CPU64.TIMER
28778 +8.7% 31277 ± 3% softirqs.CPU66.SCHED
30149 +11.1% 33496 softirqs.CPU7.SCHED
68019 ± 2% +32.6% 90193 ± 19% softirqs.CPU7.TIMER
66562 ± 2% +28.9% 85777 ± 22% softirqs.CPU71.TIMER
30044 +8.3% 32542 ± 2% softirqs.CPU8.SCHED
29713 +8.2% 32162 ± 3% softirqs.CPU9.SCHED
66744 ± 2% +16.2% 77537 ± 14% softirqs.CPU9.TIMER
2148925 +8.1% 2323738 ± 2% softirqs.SCHED
4878934 +10.8% 5408187 ± 7% softirqs.TIMER
171333 +7.5% 184170 interrupts.CAL:Function_call_interrupts
1042 ± 4% -11.2% 925.75 ± 5% interrupts.CPU13.RES:Rescheduling_interrupts
1046 ± 3% -11.7% 923.50 ± 3% interrupts.CPU16.RES:Rescheduling_interrupts
2227 ± 7% +16.6% 2596 interrupts.CPU18.CAL:Function_call_interrupts
1192 ± 13% -16.0% 1001 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
1026 ± 4% -10.6% 917.50 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
1529 ± 6% -22.2% 1189 ± 22% interrupts.CPU21.NMI:Non-maskable_interrupts
1529 ± 6% -22.2% 1189 ± 22% interrupts.CPU21.PMI:Performance_monitoring_interrupts
1643 ± 2% -25.4% 1225 ± 23% interrupts.CPU23.NMI:Non-maskable_interrupts
1643 ± 2% -25.4% 1225 ± 23% interrupts.CPU23.PMI:Performance_monitoring_interrupts
1062 ± 4% -11.3% 942.75 ± 3% interrupts.CPU23.RES:Rescheduling_interrupts
2291 ± 4% +12.6% 2581 ± 2% interrupts.CPU24.CAL:Function_call_interrupts
1576 ± 6% -26.5% 1159 ± 21% interrupts.CPU24.NMI:Non-maskable_interrupts
1576 ± 6% -26.5% 1159 ± 21% interrupts.CPU24.PMI:Performance_monitoring_interrupts
1623 ± 4% -23.0% 1250 ± 28% interrupts.CPU26.NMI:Non-maskable_interrupts
1623 ± 4% -23.0% 1250 ± 28% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1560 ± 3% -24.8% 1174 ± 24% interrupts.CPU28.NMI:Non-maskable_interrupts
1560 ± 3% -24.8% 1174 ± 24% interrupts.CPU28.PMI:Performance_monitoring_interrupts
2356 ± 2% +10.1% 2594 interrupts.CPU29.CAL:Function_call_interrupts
1539 ± 5% -23.3% 1181 ± 22% interrupts.CPU29.NMI:Non-maskable_interrupts
1539 ± 5% -23.3% 1181 ± 22% interrupts.CPU29.PMI:Performance_monitoring_interrupts
1427 ± 20% -32.7% 961.00 ± 3% interrupts.CPU3.RES:Rescheduling_interrupts
1630 ± 9% -21.9% 1273 ± 24% interrupts.CPU30.NMI:Non-maskable_interrupts
1630 ± 9% -21.9% 1273 ± 24% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1044 ± 5% -9.4% 946.25 ± 5% interrupts.CPU32.RES:Rescheduling_interrupts
1516 ± 4% -19.7% 1217 ± 26% interrupts.CPU34.NMI:Non-maskable_interrupts
1516 ± 4% -19.7% 1217 ± 26% interrupts.CPU34.PMI:Performance_monitoring_interrupts
1625 ± 4% -21.9% 1269 ± 23% interrupts.CPU35.NMI:Non-maskable_interrupts
1625 ± 4% -21.9% 1269 ± 23% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1098 ± 5% -13.0% 956.00 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
1580 ± 6% -22.4% 1226 ± 22% interrupts.CPU37.NMI:Non-maskable_interrupts
1580 ± 6% -22.4% 1226 ± 22% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1107 ± 5% -8.1% 1017 interrupts.CPU37.RES:Rescheduling_interrupts
1082 ± 2% -8.4% 991.50 ± 3% interrupts.CPU41.RES:Rescheduling_interrupts
1053 ± 4% -8.8% 961.25 ± 5% interrupts.CPU48.RES:Rescheduling_interrupts
1580 ± 6% -9.4% 1432 ± 5% interrupts.CPU5.NMI:Non-maskable_interrupts
1580 ± 6% -9.4% 1432 ± 5% interrupts.CPU5.PMI:Performance_monitoring_interrupts
1529 ± 9% -29.6% 1076 ± 25% interrupts.CPU51.NMI:Non-maskable_interrupts
1529 ± 9% -29.6% 1076 ± 25% interrupts.CPU51.PMI:Performance_monitoring_interrupts
1066 ± 6% -10.6% 953.00 ± 8% interrupts.CPU52.RES:Rescheduling_interrupts
1544 -31.8% 1053 ± 29% interrupts.CPU54.NMI:Non-maskable_interrupts
1544 -31.8% 1053 ± 29% interrupts.CPU54.PMI:Performance_monitoring_interrupts
1528 ± 4% -31.1% 1052 ± 26% interrupts.CPU55.NMI:Non-maskable_interrupts
1528 ± 4% -31.1% 1052 ± 26% interrupts.CPU55.PMI:Performance_monitoring_interrupts
1575 ± 3% -35.1% 1022 ± 27% interrupts.CPU56.NMI:Non-maskable_interrupts
1575 ± 3% -35.1% 1022 ± 27% interrupts.CPU56.PMI:Performance_monitoring_interrupts
1519 ± 5% -32.9% 1019 ± 26% interrupts.CPU57.NMI:Non-maskable_interrupts
1519 ± 5% -32.9% 1019 ± 26% interrupts.CPU57.PMI:Performance_monitoring_interrupts
1552 ± 2% -34.1% 1022 ± 23% interrupts.CPU58.NMI:Non-maskable_interrupts
1552 ± 2% -34.1% 1022 ± 23% interrupts.CPU58.PMI:Performance_monitoring_interrupts
1631 ± 5% -36.5% 1036 ± 27% interrupts.CPU59.NMI:Non-maskable_interrupts
1631 ± 5% -36.5% 1036 ± 27% interrupts.CPU59.PMI:Performance_monitoring_interrupts
1011 ± 3% -8.0% 930.00 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
2339 ± 3% +10.4% 2582 interrupts.CPU60.CAL:Function_call_interrupts
1576 ± 5% -37.3% 989.00 ± 27% interrupts.CPU60.NMI:Non-maskable_interrupts
1576 ± 5% -37.3% 989.00 ± 27% interrupts.CPU60.PMI:Performance_monitoring_interrupts
1552 ± 2% -28.8% 1105 ± 29% interrupts.CPU61.NMI:Non-maskable_interrupts
1552 ± 2% -28.8% 1105 ± 29% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1611 ± 4% -33.2% 1076 ± 30% interrupts.CPU62.NMI:Non-maskable_interrupts
1611 ± 4% -33.2% 1076 ± 30% interrupts.CPU62.PMI:Performance_monitoring_interrupts
1573 ± 4% -35.0% 1023 ± 26% interrupts.CPU63.NMI:Non-maskable_interrupts
1573 ± 4% -35.0% 1023 ± 26% interrupts.CPU63.PMI:Performance_monitoring_interrupts
1576 ± 2% -37.9% 978.75 ± 24% interrupts.CPU64.NMI:Non-maskable_interrupts
1576 ± 2% -37.9% 978.75 ± 24% interrupts.CPU64.PMI:Performance_monitoring_interrupts
1538 ± 4% -36.1% 982.50 ± 27% interrupts.CPU65.NMI:Non-maskable_interrupts
1538 ± 4% -36.1% 982.50 ± 27% interrupts.CPU65.PMI:Performance_monitoring_interrupts
1556 ± 9% -30.5% 1081 ± 25% interrupts.CPU66.NMI:Non-maskable_interrupts
1556 ± 9% -30.5% 1081 ± 25% interrupts.CPU66.PMI:Performance_monitoring_interrupts
1518 ± 7% -31.9% 1034 ± 29% interrupts.CPU68.NMI:Non-maskable_interrupts
1518 ± 7% -31.9% 1034 ± 29% interrupts.CPU68.PMI:Performance_monitoring_interrupts
1581 ± 6% -34.1% 1042 ± 28% interrupts.CPU69.NMI:Non-maskable_interrupts
1581 ± 6% -34.1% 1042 ± 28% interrupts.CPU69.PMI:Performance_monitoring_interrupts
1535 ± 2% -31.8% 1047 ± 31% interrupts.CPU70.NMI:Non-maskable_interrupts
1535 ± 2% -31.8% 1047 ± 31% interrupts.CPU70.PMI:Performance_monitoring_interrupts
1566 ± 3% -21.6% 1228 ± 20% interrupts.CPU71.NMI:Non-maskable_interrupts
1566 ± 3% -21.6% 1228 ± 20% interrupts.CPU71.PMI:Performance_monitoring_interrupts
109885 ± 4% -18.8% 89259 ± 12% interrupts.NMI:Non-maskable_interrupts
109885 ± 4% -18.8% 89259 ± 12% interrupts.PMI:Performance_monitoring_interrupts
0.60 ± 6% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.57 ± 8% -0.2 0.39 ± 57% perf-profile.calltrace.cycles-pp.xfs_mod_ifree.xfs_trans_unreserve_and_mod_sb.xfs_log_commit_cil.__xfs_trans_commit.xfs_create
1.14 ± 6% -0.1 1.01 perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.97 ± 6% -0.1 0.85 ± 3% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle
0.98 ± 6% -0.1 0.86 ± 3% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry
1.00 ± 9% -0.1 0.89 perf-profile.calltrace.cycles-pp.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree.xfs_inactive
0.98 ± 6% -0.1 0.86 ± 3% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary
0.83 ± 10% -0.1 0.72 perf-profile.calltrace.cycles-pp.xfs_buf_item_unlock.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree
0.90 ± 6% -0.1 0.79 ± 2% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending
1.14 ± 4% -0.1 1.04 ± 4% perf-profile.calltrace.cycles-pp.__xstat64
0.73 ± 6% -0.1 0.63 ± 3% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.77 ± 10% -0.1 0.67 ± 2% perf-profile.calltrace.cycles-pp.xfs_buf_unlock.xfs_buf_item_unlock.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit
1.00 ± 5% -0.1 0.90 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__xstat64
0.99 ± 5% -0.1 0.90 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
0.73 ± 9% -0.1 0.65 ± 2% perf-profile.calltrace.cycles-pp.up.xfs_buf_unlock.xfs_buf_item_unlock.xfs_trans_free_items.xfs_log_commit_cil
0.67 ± 10% -0.1 0.59 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.up.xfs_buf_unlock.xfs_buf_item_unlock.xfs_trans_free_items
0.85 ± 5% -0.1 0.78 ± 4% perf-profile.calltrace.cycles-pp.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
0.78 ± 4% -0.1 0.72 ± 3% perf-profile.calltrace.cycles-pp.vfs_statx.__do_sys_newstat.do_syscall_64.entry_SYSCALL_64_after_hwframe.__xstat64
0.57 ± 4% -0.0 0.54 perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc
0.80 ± 4% +0.1 0.87 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc
0.53 +0.1 0.60 perf-profile.calltrace.cycles-pp.xfs_inobt_get_rec.xfs_check_agi_freecount.xfs_difree_inobt.xfs_difree.xfs_ifree
0.74 ± 4% +0.1 0.85 perf-profile.calltrace.cycles-pp.xfs_difree_finobt.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive
0.68 ± 5% +0.1 0.78 ± 2% perf-profile.calltrace.cycles-pp.xfs_verify_ino.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_removename
0.76 ± 6% +0.1 0.88 ± 7% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.66 ± 6% +0.1 0.79 ± 3% perf-profile.calltrace.cycles-pp.xfs_verify_ino.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_addname
1.16 ± 3% +0.1 1.30 ± 2% perf-profile.calltrace.cycles-pp.xfs_check_agi_freecount.xfs_difree_inobt.xfs_difree.xfs_ifree.xfs_inactive_ifree
0.73 ± 5% +0.1 0.87 ± 5% perf-profile.calltrace.cycles-pp.xfs_inobt_get_rec.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc
0.67 ± 8% +0.1 0.81 ± 11% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.79 ± 6% +0.1 0.94 ± 3% perf-profile.calltrace.cycles-pp.xfs_verify_ino.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_lookup_int
0.88 ± 6% +0.2 1.03 ± 8% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.97 ± 5% +0.2 1.14 ± 3% perf-profile.calltrace.cycles-pp.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_removename.xfs_dir_removename
1.55 ± 4% +0.2 1.72 perf-profile.calltrace.cycles-pp.xfs_difree_inobt.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive
0.95 ± 6% +0.2 1.14 perf-profile.calltrace.cycles-pp.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_addname.xfs_dir_createname
1.16 ± 3% +0.2 1.35 ± 2% perf-profile.calltrace.cycles-pp.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_lookup_int.xfs_dir2_block_removename
2.03 ± 3% +0.2 2.27 ± 3% perf-profile.calltrace.cycles-pp.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc
4.39 ± 3% +0.3 4.67 ± 2% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat
2.42 ± 4% +0.3 2.72 perf-profile.calltrace.cycles-pp.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_fs_destroy_inode
3.65 ± 3% +0.3 3.96 ± 3% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create
2.77 ± 3% +0.3 3.08 ± 3% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
0.44 ± 58% +0.3 0.76 ± 18% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel
0.45 ± 58% +0.3 0.77 ± 18% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
3.44 ± 5% +0.3 3.77 perf-profile.calltrace.cycles-pp.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_fs_destroy_inode.destroy_inode
0.47 ± 58% +0.3 0.81 ± 18% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.47 ± 58% +0.3 0.81 ± 18% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
0.47 ± 58% +0.3 0.81 ± 18% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
3.63 ± 3% +0.3 3.97 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
3.91 ± 2% +0.4 4.27 ± 5% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.42 ± 58% +1.4 1.82 perf-profile.calltrace.cycles-pp.xfs_agino_range.xfs_verify_ino.xfs_dir_ino_validate.__xfs_dir3_data_check.xfs_dir3_data_check
1.92 ± 6% -0.2 1.69 ± 2% perf-profile.children.cycles-pp.ttwu_do_activate
1.92 ± 6% -0.2 1.68 ± 2% perf-profile.children.cycles-pp.activate_task
1.91 ± 6% -0.2 1.68 ± 2% perf-profile.children.cycles-pp.enqueue_task_fair
1.81 ± 6% -0.2 1.59 ± 2% perf-profile.children.cycles-pp.enqueue_entity
1.48 ± 6% -0.2 1.29 ± 2% perf-profile.children.cycles-pp.__account_scheduler_latency
1.54 ± 6% -0.2 1.37 ± 2% perf-profile.children.cycles-pp.try_to_wake_up
1.22 ± 6% -0.2 1.07 ± 2% perf-profile.children.cycles-pp.stack_trace_save_tsk
1.33 ± 7% -0.1 1.19 ± 2% perf-profile.children.cycles-pp.xfs_buf_item_unlock
1.13 ± 6% -0.1 1.00 ± 3% perf-profile.children.cycles-pp.arch_stack_walk
1.16 ± 6% -0.1 1.03 ± 2% perf-profile.children.cycles-pp.sched_ttwu_pending
1.24 ± 4% -0.1 1.12 ± 5% perf-profile.children.cycles-pp.xfs_dir3_data_entsize
0.84 ± 3% -0.1 0.73 ± 2% perf-profile.children.cycles-pp.xfs_errortag_test
1.15 ± 4% -0.1 1.04 ± 4% perf-profile.children.cycles-pp.__xstat64
0.79 ± 7% -0.1 0.69 ± 3% perf-profile.children.cycles-pp.unwind_next_frame
0.71 ± 3% -0.1 0.60 perf-profile.children.cycles-pp.rwsem_wake
1.08 ± 7% -0.1 0.97 ± 3% perf-profile.children.cycles-pp.up
0.59 ± 4% -0.1 0.49 perf-profile.children.cycles-pp.wake_up_q
1.09 ± 3% -0.1 1.00 ± 3% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.76 ± 4% -0.1 0.68 ± 7% perf-profile.children.cycles-pp.xfs_dir3_data_entry_tag_p
0.74 -0.1 0.67 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.85 ± 5% -0.1 0.78 ± 4% perf-profile.children.cycles-pp.__do_sys_newstat
0.79 ± 5% -0.1 0.72 ± 3% perf-profile.children.cycles-pp.vfs_statx
0.53 ± 4% -0.1 0.47 ± 3% perf-profile.children.cycles-pp.filename_lookup
0.51 ± 3% -0.1 0.46 ± 2% perf-profile.children.cycles-pp.path_lookupat
0.61 ± 5% -0.1 0.56 ± 4% perf-profile.children.cycles-pp.schedule
0.37 ± 5% -0.1 0.32 ± 6% perf-profile.children.cycles-pp.orc_find
0.27 ± 5% -0.0 0.23 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.31 ± 5% -0.0 0.27 ± 8% perf-profile.children.cycles-pp.__orc_find
0.30 ± 4% -0.0 0.27 ± 3% perf-profile.children.cycles-pp.dequeue_task_fair
0.17 ± 12% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.xlog_grant_add_space
0.10 ± 7% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.15 ± 2% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.update_curr
0.08 ± 8% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__unwind_start
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.xfs_dir2_isblock
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__next_timer_interrupt
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.05 ± 8% +0.0 0.07 ± 12% perf-profile.children.cycles-pp.menu_reflect
0.07 ± 10% +0.0 0.09 ± 8% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.15 ± 3% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.run_rebalance_domains
0.15 ± 3% +0.0 0.19 ± 10% perf-profile.children.cycles-pp.xfs_inobt_init_cursor
0.04 ± 58% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.xfs_verify_agino_or_null
0.26 ± 7% +0.0 0.30 ± 4% perf-profile.children.cycles-pp.xfs_next_bit
0.14 ± 3% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.update_blocked_averages
0.11 ± 6% +0.0 0.15 ± 10% perf-profile.children.cycles-pp._raw_spin_trylock
0.22 ± 4% +0.0 0.26 ± 10% perf-profile.children.cycles-pp.xfs_dialloc_ag_update_inobt
0.09 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.xfs_log_calc_unit_res
0.08 ± 5% +0.0 0.13 ± 6% perf-profile.children.cycles-pp.xfs_verify_fsbno
0.03 ±100% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.ret_from_fork
0.03 ±100% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.kthread
0.19 ± 9% +0.0 0.24 perf-profile.children.cycles-pp.rebalance_domains
0.31 ± 8% +0.0 0.35 ± 9% perf-profile.children.cycles-pp.rcu_core
0.19 ± 9% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.xfs_trans_add_item
0.33 ± 7% +0.1 0.39 ± 5% perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
0.31 ± 8% +0.1 0.37 ± 6% perf-profile.children.cycles-pp.xfs_iextents_copy
0.20 ± 2% +0.1 0.26 ± 9% perf-profile.children.cycles-pp.__radix_tree_lookup
0.51 ± 2% +0.1 0.58 ± 3% perf-profile.children.cycles-pp.xfs_btree_increment
0.21 ± 14% +0.1 0.28 ± 7% perf-profile.children.cycles-pp.xfs_trans_free
1.12 ± 2% +0.1 1.20 perf-profile.children.cycles-pp.xfs_btree_check_sblock
0.20 ± 5% +0.1 0.28 ± 6% perf-profile.children.cycles-pp.xfs_bmap_validate_extent
0.26 ± 6% +0.1 0.35 ± 4% perf-profile.children.cycles-pp.xfs_inobt_btrec_to_irec
0.34 ± 4% +0.1 0.44 ± 3% perf-profile.children.cycles-pp.xfs_ag_block_count
0.26 ± 8% +0.1 0.35 ± 6% perf-profile.children.cycles-pp.xfs_btree_check_ptr
0.75 ± 4% +0.1 0.85 perf-profile.children.cycles-pp.xfs_difree_finobt
0.84 ± 2% +0.1 0.95 perf-profile.children.cycles-pp.__xfs_btree_check_sblock
0.81 ± 4% +0.1 0.93 ± 6% perf-profile.children.cycles-pp.__softirqentry_text_start
1.37 ± 3% +0.1 1.50 ± 3% perf-profile.children.cycles-pp.ktime_get
0.33 ± 9% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.xfs_verify_agbno
0.33 ± 7% +0.1 0.46 ± 7% perf-profile.children.cycles-pp.xfs_btree_ptr_to_daddr
1.94 ± 4% +0.2 2.10 ± 2% perf-profile.children.cycles-pp.xfs_btree_lookup
0.97 ± 6% +0.2 1.13 ± 7% perf-profile.children.cycles-pp.clockevents_program_event
1.55 ± 4% +0.2 1.72 perf-profile.children.cycles-pp.xfs_difree_inobt
1.50 ± 4% +0.2 1.67 ± 3% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.57 ± 7% +0.2 0.82 ± 6% perf-profile.children.cycles-pp.xfs_verify_agino
0.56 ± 22% +0.3 0.81 ± 18% perf-profile.children.cycles-pp.start_kernel
2.45 ± 3% +0.3 2.71 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
4.39 ± 3% +0.3 4.67 ± 2% perf-profile.children.cycles-pp.xfs_ialloc
0.61 ± 10% +0.3 0.91 ± 2% perf-profile.children.cycles-pp.xfs_verify_dir_ino
1.70 ± 4% +0.3 2.00 ± 2% perf-profile.children.cycles-pp.xfs_inobt_get_rec
2.42 ± 4% +0.3 2.73 perf-profile.children.cycles-pp.xfs_difree
3.65 ± 3% +0.3 3.96 ± 3% perf-profile.children.cycles-pp.xfs_dialloc
2.77 ± 3% +0.3 3.08 ± 3% perf-profile.children.cycles-pp.xfs_dialloc_ag
3.44 ± 5% +0.3 3.77 perf-profile.children.cycles-pp.xfs_ifree
3.97 ± 2% +0.4 4.32 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
4.27 ± 2% +0.4 4.63 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
3.41 ± 4% +0.4 3.81 ± 3% perf-profile.children.cycles-pp.xfs_check_agi_freecount
2.76 ± 5% +0.5 3.31 perf-profile.children.cycles-pp.xfs_verify_ino
3.93 ± 4% +0.7 4.68 perf-profile.children.cycles-pp.xfs_dir_ino_validate
2.48 ± 5% +0.8 3.29 ± 2% perf-profile.children.cycles-pp.xfs_agino_range
4.27 ± 6% -0.7 3.58 ± 4% perf-profile.self.cycles-pp.__xfs_dir3_data_check
0.83 ± 4% -0.1 0.72 ± 2% perf-profile.self.cycles-pp.xfs_errortag_test
0.96 ± 2% -0.1 0.87 ± 4% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
1.04 ± 5% -0.1 0.96 ± 5% perf-profile.self.cycles-pp.xfs_dir3_data_entsize
0.62 ± 5% -0.1 0.55 ± 8% perf-profile.self.cycles-pp.xfs_dir3_data_entry_tag_p
0.58 -0.1 0.51 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.95 ± 4% -0.1 0.90 perf-profile.self.cycles-pp.xfs_log_commit_cil
0.24 ± 7% -0.0 0.20 ± 9% perf-profile.self.cycles-pp.xfs_buf_get_map
0.43 ± 3% -0.0 0.39 perf-profile.self.cycles-pp.xfs_perag_get
0.31 ± 5% -0.0 0.27 ± 8% perf-profile.self.cycles-pp.__orc_find
0.23 ± 7% -0.0 0.20 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.33 ± 3% -0.0 0.30 ± 3% perf-profile.self.cycles-pp.xfs_buf_item_init
0.17 ± 12% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.xlog_grant_add_space
0.16 ± 5% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.rwsem_down_write_failed
0.08 ± 15% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.stack_trace_save_tsk
0.10 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.xfs_btree_read_buf_block
0.06 ± 11% +0.0 0.08 ± 13% perf-profile.self.cycles-pp.update_blocked_averages
0.07 ± 11% +0.0 0.10 ± 10% perf-profile.self.cycles-pp.xfs_btree_ptr_to_daddr
0.11 ± 6% +0.0 0.15 ± 10% perf-profile.self.cycles-pp._raw_spin_trylock
0.09 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.xfs_log_calc_unit_res
0.01 ±173% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.run_timer_softirq
0.06 ± 14% +0.0 0.10 ± 12% perf-profile.self.cycles-pp.xfs_inobt_init_cursor
0.24 ± 9% +0.0 0.28 ± 5% perf-profile.self.cycles-pp.xfs_trans_alloc
0.16 ± 9% +0.0 0.21 ± 5% perf-profile.self.cycles-pp.xfs_read_agi
0.06 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.xfs_verify_fsbno
0.20 ± 13% +0.1 0.27 ± 7% perf-profile.self.cycles-pp.xfs_trans_free
0.18 ± 7% +0.1 0.25 ± 3% perf-profile.self.cycles-pp.xfs_trans_add_item
0.19 ± 2% +0.1 0.26 ± 9% perf-profile.self.cycles-pp.__radix_tree_lookup
0.23 ± 9% +0.1 0.32 ± 6% perf-profile.self.cycles-pp.xfs_verify_agbno
0.32 ± 5% +0.1 0.41 ± 3% perf-profile.self.cycles-pp.xfs_ag_block_count
0.25 ± 6% +0.1 0.35 ± 4% perf-profile.self.cycles-pp.xfs_inobt_btrec_to_irec
0.52 ± 4% +0.1 0.63 ± 3% perf-profile.self.cycles-pp.__xfs_btree_check_sblock
1.17 ± 4% +0.1 1.29 ± 5% perf-profile.self.cycles-pp.ktime_get
0.58 ± 10% +0.3 0.88 ± 2% perf-profile.self.cycles-pp.xfs_verify_dir_ino
2.23 ± 5% +0.7 2.97 ± 2% perf-profile.self.cycles-pp.xfs_agino_range
aim7.jobs-per-min
89000 +-+-----------------------------------------------------------------+
88000 +-+ .+.. .+.+.. .+. |
|.+..+.+.+.+..+.+.+ +.+ + + |
87000 +-+ + |
86000 +-+ + + |
| +.+. + |
85000 +-+ + |
84000 +-+ |
83000 +-+ |
| |
82000 +-+ |
81000 +-O O O O O O O O O |
O O O O O O O O O O O O O O O |
80000 +-+ O O O O O O O
79000 +-+-----------------------------------------------------------------+
aim7.time.elapsed_time
230 +-+-------------------------------------------------------------------+
| |
225 +-+ O O O O O O O O O O
O O O O O O O O O O O O O O O |
| O O O O O O |
220 +-+ |
| |
215 +-+ |
| .+.. |
210 +-+ +.+ |
| .. + |
|. .+.+.+..+. .+..+. .+..+. .+. .+ |
205 +-+. + + +.+. + |
| |
200 +-+-------------------------------------------------------------------+
aim7.time.elapsed_time.max
230 +-+-------------------------------------------------------------------+
| |
225 +-+ O O O O O O O O O O
O O O O O O O O O O O O O O O |
| O O O O O O |
220 +-+ |
| |
215 +-+ |
| .+.. |
210 +-+ +.+ |
| .. + |
|. .+.+.+..+. .+..+. .+..+. .+. .+ |
205 +-+. + + +.+. + |
| |
200 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen