Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
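(For reference, a minimal sketch of the mapping in question, assuming the usual <linux/preempt.h> definitions: when CONFIG_PREEMPT_COUNT is not set there is no preempt counter to maintain, so both calls collapse to a plain compiler barrier, which is exactly the barrier() instance the changelog wants to drop.)
/* Sketch only -- the !CONFIG_PREEMPT_COUNT case in <linux/preempt.h>: */
#ifndef CONFIG_PREEMPT_COUNT
#define preempt_disable()	barrier()
#define preempt_enable()	barrier()
#endif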
[lkp] [sctp] a6c2f79287: netperf.Throughput_Mbps -37.2% regression
by kernel test robot
FYI, we noticed a -37.2% regression of netperf.Throughput_Mbps due to commit:
commit a6c2f792873aff332a4689717c3cd6104f46684c ("sctp: implement prsctp TTL policy")
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: netperf
on test machine: 4 threads Ivy Bridge with 8G memory
with the following parameters:
ip: ipv4
runtime: 300s
nr_threads: 200%
cluster: cs-localhost
send_size: 10K
test: SCTP_STREAM_MANY
cpufreq_governor: performance
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/send_size/tbox_group/test/testcase:
cs-localhost/gcc-6/performance/ipv4/x86_64-rhel-7.2/200%/debian-x86_64-2015-02-07.cgz/300s/10K/lkp-ivb-d02/SCTP_STREAM_MANY/netperf
commit:
826d253d57 ("sctp: add SCTP_PR_ASSOC_STATUS on sctp sockopt")
a6c2f79287 ("sctp: implement prsctp TTL policy")
826d253d57b11f69 a6c2f792873aff332a4689717c
---------------- --------------------------
%stddev %change %stddev
\ | \
3923 ± 0% -37.2% 2462 ± 0% netperf.Throughput_Mbps
11701 ± 14% -21.9% 9139 ± 11% meminfo.AnonHugePages
642.75 ± 7% +15.8% 744.00 ± 6% slabinfo.kmalloc-512.active_objs
4993 ± 2% +14.6% 5724 ± 2% slabinfo.kmalloc-64.active_objs
4993 ± 2% +14.6% 5724 ± 2% slabinfo.kmalloc-64.num_objs
9.00 ± 0% -77.8% 2.00 ± 0% vmstat.procs.r
112616 ± 1% +19.0% 133959 ± 0% vmstat.system.cs
4053 ± 0% +6.3% 4309 ± 0% vmstat.system.in
16466114 ± 0% -37.4% 10307354 ± 0% softirqs.NET_RX
72067 ± 10% -63.5% 26326 ± 4% softirqs.RCU
8598 ± 4% +954.6% 90677 ± 1% softirqs.SCHED
605899 ± 0% -45.7% 329111 ± 0% softirqs.TIMER
2529 ± 4% -15.1% 2147 ± 1% proc-vmstat.nr_alloc_batch
1.48e+08 ± 0% -37.1% 93059843 ± 0% proc-vmstat.numa_hit
1.48e+08 ± 0% -37.1% 93059842 ± 0% proc-vmstat.numa_local
3.742e+08 ± 1% -36.9% 2.362e+08 ± 0% proc-vmstat.pgalloc_dma32
4.733e+08 ± 0% -36.6% 3e+08 ± 0% proc-vmstat.pgalloc_normal
8.476e+08 ± 0% -36.7% 5.362e+08 ± 0% proc-vmstat.pgfree
99.31 ± 0% -44.6% 55.03 ± 0% turbostat.%Busy
3269 ± 0% -44.8% 1804 ± 1% turbostat.Avg_MHz
0.05 ± 17% +51942.1% 24.72 ± 1% turbostat.CPU%c1
0.64 ± 0% +3067.3% 20.11 ± 3% turbostat.CPU%c6
20.20 ± 0% -24.7% 15.21 ± 0% turbostat.CorWatt
0.12 ± 39% +1906.4% 2.36 ± 3% turbostat.Pkg%pc2
0.46 ± 10% +1696.7% 8.27 ± 6% turbostat.Pkg%pc6
37.54 ± 0% -14.5% 32.11 ± 0% turbostat.PkgWatt
76510 ± 46% +2.6e+05% 1.954e+08 ± 1% cpuidle.C1-IVB.time
6742 ± 76% +2.9e+05% 19860856 ± 1% cpuidle.C1-IVB.usage
19769 ± 17% +5414.4% 1090172 ± 3% cpuidle.C1E-IVB.time
151.00 ± 11% +4024.8% 6228 ± 3% cpuidle.C1E-IVB.usage
33074 ± 14% +5086.6% 1715442 ± 2% cpuidle.C3-IVB.time
114.50 ± 14% +6075.1% 7070 ± 3% cpuidle.C3-IVB.usage
8006184 ± 0% +4066.3% 3.336e+08 ± 2% cpuidle.C6-IVB.time
8874 ± 0% +4197.0% 381350 ± 2% cpuidle.C6-IVB.usage
476.75 ± 92% +77822.9% 371497 ± 12% cpuidle.POLL.time
46.25 ±144% +30824.3% 14302 ± 3% cpuidle.POLL.usage
1.758e+11 ± 2% -38.4% 1.083e+11 ± 1% perf-stat.L1-dcache-load-misses
6.405e+11 ± 0% -29.7% 4.5e+11 ± 0% perf-stat.L1-dcache-loads
6.28e+09 ± 3% -30.0% 4.398e+09 ± 0% perf-stat.L1-dcache-prefetch-misses
8.142e+10 ± 1% -39.3% 4.941e+10 ± 0% perf-stat.L1-dcache-store-misses
5.32e+11 ± 0% -33.6% 3.533e+11 ± 1% perf-stat.L1-dcache-stores
5.59e+10 ± 1% -19.3% 4.512e+10 ± 1% perf-stat.L1-icache-load-misses
5.199e+10 ± 0% -42.1% 3.01e+10 ± 1% perf-stat.LLC-loads
3.066e+10 ± 1% -42.4% 1.765e+10 ± 1% perf-stat.LLC-prefetches
5.282e+10 ± 0% -40.0% 3.168e+10 ± 1% perf-stat.LLC-stores
2.776e+11 ± 0% -27.9% 2.002e+11 ± 0% perf-stat.branch-instructions
3.861e+09 ± 1% -50.8% 1.9e+09 ± 0% perf-stat.branch-load-misses
2.772e+11 ± 0% -27.8% 2e+11 ± 0% perf-stat.branch-loads
3.864e+09 ± 1% -50.8% 1.9e+09 ± 0% perf-stat.branch-misses
1.179e+11 ± 0% -43.9% 6.61e+10 ± 0% perf-stat.bus-cycles
7.126e+09 ± 16% -96.8% 2.272e+08 ± 4% perf-stat.cache-misses
1.173e+11 ± 0% -38.0% 7.278e+10 ± 0% perf-stat.cache-references
34232822 ± 1% +19.1% 40787311 ± 0% perf-stat.context-switches
3.897e+12 ± 0% -44.2% 2.174e+12 ± 1% perf-stat.cpu-cycles
12019 ± 35% +306.6% 48867 ± 1% perf-stat.cpu-migrations
4.069e+09 ± 20% -58.4% 1.694e+09 ± 46% perf-stat.dTLB-load-misses
6.421e+11 ± 0% -30.3% 4.476e+11 ± 1% perf-stat.dTLB-loads
5.285e+08 ± 22% -71.0% 1.531e+08 ± 17% perf-stat.dTLB-store-misses
5.32e+11 ± 0% -33.4% 3.544e+11 ± 1% perf-stat.dTLB-stores
3.735e+08 ± 5% -48.5% 1.923e+08 ± 3% perf-stat.iTLB-load-misses
1.803e+08 ± 52% +662.4% 1.374e+09 ± 2% perf-stat.iTLB-loads
1.505e+12 ± 0% -29.3% 1.064e+12 ± 0% perf-stat.instructions
339045 ± 0% +4.4% 354068 ± 0% perf-stat.minor-faults
339041 ± 0% +4.4% 354062 ± 0% perf-stat.page-faults
3.892e+12 ± 0% -43.9% 2.182e+12 ± 0% perf-stat.ref-cycles
3.052e+12 ± 0% -25.9% 2.261e+12 ± 0% perf-stat.stalled-cycles-frontend
34082 ± 16% -65.5% 11746 ± 83% sched_debug.cfs_rq:/.MIN_vruntime.avg
75047 ± 3% -58.2% 31366 ± 63% sched_debug.cfs_rq:/.MIN_vruntime.max
34069 ± 3% -59.9% 13660 ± 61% sched_debug.cfs_rq:/.MIN_vruntime.stddev
1403286 ± 4% -34.9% 913870 ± 14% sched_debug.cfs_rq:/.load.avg
1874082 ± 7% -27.9% 1352098 ± 19% sched_debug.cfs_rq:/.load.max
1041057 ± 0% -66.8% 345620 ± 35% sched_debug.cfs_rq:/.load.min
1063 ± 0% -54.6% 482.93 ± 15% sched_debug.cfs_rq:/.load_avg.avg
1204 ± 3% -20.6% 956.50 ± 4% sched_debug.cfs_rq:/.load_avg.max
939.38 ± 0% -86.4% 127.33 ± 31% sched_debug.cfs_rq:/.load_avg.min
104.33 ± 19% +238.6% 353.26 ± 6% sched_debug.cfs_rq:/.load_avg.stddev
34082 ± 16% -65.5% 11746 ± 83% sched_debug.cfs_rq:/.max_vruntime.avg
75047 ± 3% -58.2% 31366 ± 63% sched_debug.cfs_rq:/.max_vruntime.max
34069 ± 3% -59.9% 13660 ± 61% sched_debug.cfs_rq:/.max_vruntime.stddev
74820 ± 0% -10.4% 67031 ± 1% sched_debug.cfs_rq:/.min_vruntime.min
1091 ± 7% +378.3% 5221 ± 13% sched_debug.cfs_rq:/.min_vruntime.stddev
1.35 ± 3% -35.4% 0.88 ± 14% sched_debug.cfs_rq:/.nr_running.avg
1.83 ± 6% -29.5% 1.29 ± 19% sched_debug.cfs_rq:/.nr_running.max
1.00 ± 0% -66.7% 0.33 ± 35% sched_debug.cfs_rq:/.nr_running.min
943.28 ± 0% -61.0% 367.80 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.avg
991.42 ± 1% -21.2% 781.00 ± 14% sched_debug.cfs_rq:/.runnable_load_avg.max
894.88 ± 1% -90.9% 81.46 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.min
39.07 ± 31% +667.7% 299.97 ± 16% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-794.96 ±-98% +505.2% -4810 ±-18% sched_debug.cfs_rq:/.spread0.avg
-1935 ±-42% +442.8% -10508 ±-13% sched_debug.cfs_rq:/.spread0.min
1092 ± 7% +378.0% 5221 ± 13% sched_debug.cfs_rq:/.spread0.stddev
949.60 ± 0% -41.6% 554.85 ± 12% sched_debug.cfs_rq:/.util_avg.avg
976.08 ± 1% -9.9% 879.71 ± 1% sched_debug.cfs_rq:/.util_avg.max
925.83 ± 0% -67.0% 305.25 ± 20% sched_debug.cfs_rq:/.util_avg.min
18.71 ± 30% +1151.4% 234.18 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
2.60 ± 9% -66.3% 0.88 ± 62% sched_debug.cpu.clock.stddev
2.60 ± 9% -66.3% 0.88 ± 62% sched_debug.cpu.clock_task.stddev
941.57 ± 1% -65.8% 322.38 ± 25% sched_debug.cpu.cpu_load[0].avg
992.46 ± 1% -29.5% 700.08 ± 25% sched_debug.cpu.cpu_load[0].max
880.67 ± 4% -93.3% 58.58 ± 64% sched_debug.cpu.cpu_load[0].min
44.14 ± 46% +526.2% 276.40 ± 28% sched_debug.cpu.cpu_load[0].stddev
944.76 ± 0% -59.6% 381.23 ± 17% sched_debug.cpu.cpu_load[1].avg
980.25 ± 1% -17.5% 808.25 ± 5% sched_debug.cpu.cpu_load[1].max
900.79 ± 1% -89.8% 91.67 ± 28% sched_debug.cpu.cpu_load[1].min
30.83 ± 30% +904.7% 309.72 ± 7% sched_debug.cpu.cpu_load[1].stddev
941.95 ± 0% -61.6% 361.83 ± 18% sched_debug.cpu.cpu_load[2].avg
974.08 ± 1% -18.0% 798.29 ± 5% sched_debug.cpu.cpu_load[2].max
899.00 ± 1% -90.8% 82.42 ± 24% sched_debug.cpu.cpu_load[2].min
29.59 ± 30% +946.9% 309.73 ± 6% sched_debug.cpu.cpu_load[2].stddev
935.02 ± 0% -63.1% 344.78 ± 19% sched_debug.cpu.cpu_load[3].avg
965.83 ± 1% -18.7% 785.50 ± 5% sched_debug.cpu.cpu_load[3].max
893.04 ± 1% -91.8% 73.21 ± 26% sched_debug.cpu.cpu_load[3].min
28.77 ± 29% +976.4% 309.70 ± 6% sched_debug.cpu.cpu_load[3].stddev
922.55 ± 0% -64.2% 330.33 ± 19% sched_debug.cpu.cpu_load[4].avg
951.75 ± 1% -20.0% 761.71 ± 6% sched_debug.cpu.cpu_load[4].max
880.96 ± 0% -92.8% 63.08 ± 31% sched_debug.cpu.cpu_load[4].min
27.92 ± 26% +988.8% 304.04 ± 6% sched_debug.cpu.cpu_load[4].stddev
1302 ± 1% -15.7% 1097 ± 7% sched_debug.cpu.curr->pid.avg
798.04 ± 5% -70.5% 235.67 ± 39% sched_debug.cpu.curr->pid.min
812.92 ± 1% +22.9% 999.29 ± 4% sched_debug.cpu.curr->pid.stddev
1359793 ± 6% -34.4% 892053 ± 17% sched_debug.cpu.load.avg
1830834 ± 4% -28.5% 1308664 ± 19% sched_debug.cpu.load.max
1041241 ± 0% -62.6% 389138 ± 37% sched_debug.cpu.load.min
0.00 ± 21% +77.1% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
150630 ± 0% -24.2% 114118 ± 0% sched_debug.cpu.nr_load_updates.avg
151833 ± 0% -21.6% 119037 ± 0% sched_debug.cpu.nr_load_updates.max
149812 ± 0% -27.3% 108953 ± 1% sched_debug.cpu.nr_load_updates.min
745.51 ± 1% +502.7% 4492 ± 14% sched_debug.cpu.nr_load_updates.stddev
2.60 ± 1% -52.8% 1.23 ± 17% sched_debug.cpu.nr_running.avg
3.79 ± 3% -39.6% 2.29 ± 39% sched_debug.cpu.nr_running.max
1.62 ± 8% -74.4% 0.42 ± 34% sched_debug.cpu.nr_running.min
4242583 ± 1% +20.1% 5094187 ± 0% sched_debug.cpu.nr_switches.avg
3880089 ± 1% +24.6% 4835277 ± 1% sched_debug.cpu.nr_switches.min
487080 ± 18% -61.6% 186796 ± 37% sched_debug.cpu.nr_switches.stddev
netperf.Throughput_Mbps
4000 ++-----------------------------------*--*-----*--*--*--*--*-*--*-----+
| .*.. .*..*..* *. |
3500 *+.*..* *.*..*..*. *. |
3000 ++ : : |
| : O |
2500 O+ O O: O : O O O O O O O O O O O O O O
| : : O O O O O |
2000 ++ : : |
| : : |
1500 ++ : : |
1000 ++ : : |
| : : |
500 ++ :: |
| : |
0 ++-------*------------------------------------O----------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
[lkp] [f2fs] ec795418c4: fsmark.app_overhead -36.3% regression
by kernel test robot
FYI, we noticed a -36.3% regression of fsmark.files_per_sec due to commit:
commit ec795418c41850056feb956534edf059dc1155d4 ("f2fs: use percpu_rw_semaphore")
https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git dev-test
in testcase: fsmark
on test machine: 72 threads Haswell-EP with 128G memory
with the following parameters:
cpufreq_governor: performance
disk: 1SSD
filesize: 8K
fs: f2fs
iterations: 8
nr_directories: 16d
nr_files_per_directory: 256fpd
nr_threads: 4
sync_method: fsyncBeforeClose
test_size: 72G
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1SSD/8K/f2fs/8/x86_64-rhel/16d/256fpd/4/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsw-ep4/72G/fsmark
commit:
3bdad3c7ee ("f2fs: skip to check the block address of node page")
ec795418c4 ("f2fs: use percpu_rw_semaphore")
3bdad3c7ee72a76e ec795418c41850056feb956534
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
29551 ± 0% -36.3% 18831 ± 2% fsmark.files_per_sec
6696820 ± 1% +4.2% 6977582 ± 1% fsmark.app_overhead
322.28 ± 0% +55.9% 502.37 ± 2% fsmark.time.elapsed_time
322.28 ± 0% +55.9% 502.37 ± 2% fsmark.time.elapsed_time.max
2.756e+08 ± 0% -7.6% 2.545e+08 ± 0% fsmark.time.file_system_outputs
8913 ± 6% +106.8% 18436 ± 10% fsmark.time.involuntary_context_switches
320.80 ± 0% -2.8% 311.80 ± 0% fsmark.time.percent_of_cpu_this_job_got
993.78 ± 0% +53.2% 1522 ± 2% fsmark.time.system_time
41.53 ± 2% +11.1% 46.13 ± 1% fsmark.time.user_time
21551618 ± 0% +123.2% 48100377 ± 2% fsmark.time.voluntary_context_switches
51036 ± 6% +77.9% 90812 ± 0% meminfo.Dirty
25742 ± 49% +100.4% 51585 ± 31% numa-meminfo.node1.Dirty
6517 ± 49% +97.7% 12887 ± 31% numa-vmstat.node1.nr_dirty
311222 ± 2% +55.6% 484246 ± 7% softirqs.RCU
483341 ± 3% +34.2% 648718 ± 3% softirqs.SCHED
849788 ± 7% +27.1% 1079705 ± 5% softirqs.TIMER
8.48 ± 0% -5.0% 8.06 ± 0% turbostat.%Busy
223.50 ± 0% -7.5% 206.80 ± 0% turbostat.Avg_MHz
1.02 ± 3% +45.2% 1.48 ± 2% turbostat.CPU%c3
7420 ± 2% -18.1% 6079 ± 1% slabinfo.ext4_extent_status.active_objs
7420 ± 2% -18.1% 6079 ± 1% slabinfo.ext4_extent_status.num_objs
7676 ± 1% -12.8% 6694 ± 3% slabinfo.kmalloc-1024.active_objs
8005 ± 1% -12.3% 7016 ± 2% slabinfo.kmalloc-1024.num_objs
270.00 ± 1% -36.3% 172.10 ± 3% vmstat.io.bi
424174 ± 0% -40.5% 252324 ± 2% vmstat.io.bo
137171 ± 0% +42.3% 195205 ± 0% vmstat.system.cs
129727 ± 0% -16.4% 108410 ± 0% vmstat.system.in
115.40 ±127% -96.1% 4.50 ±189% proc-vmstat.kswapd_high_wmark_hit_quickly
12748 ± 6% +78.2% 22719 ± 0% proc-vmstat.nr_dirty
6239 ± 9% +26.7% 7908 ± 13% proc-vmstat.numa_hint_faults
5275 ± 8% +22.6% 6469 ± 11% proc-vmstat.numa_hint_faults_local
746512 ± 2% +50.2% 1121580 ± 2% proc-vmstat.pgfault
2.331e+08 ± 1% +179.8% 6.522e+08 ± 5% cpuidle.C1-HSW.time
21473442 ± 0% +125.2% 48347668 ± 2% cpuidle.C1-HSW.usage
13731593 ± 4% +1207.0% 1.795e+08 ± 16% cpuidle.C1E-HSW.time
123887 ± 3% +1237.9% 1657510 ± 18% cpuidle.C1E-HSW.usage
2.173e+08 ± 4% +181.6% 6.12e+08 ± 4% cpuidle.C3-HSW.time
479301 ± 5% +441.1% 2593727 ± 5% cpuidle.C3-HSW.usage
2.042e+10 ± 0% +52.9% 3.123e+10 ± 2% cpuidle.C6-HSW.time
22061045 ± 0% +53.2% 33801151 ± 2% cpuidle.C6-HSW.usage
1.131e+08 ± 10% +44.0% 1.629e+08 ± 7% cpuidle.POLL.time
448858 ± 0% +25.2% 561850 ± 0% cpuidle.POLL.usage
6.679e+10 ± 3% +43.1% 9.559e+10 ± 2% perf-stat.L1-dcache-load-misses
7.043e+11 ± 3% +44.4% 1.017e+12 ± 4% perf-stat.L1-dcache-loads
4.012e+11 ± 5% +44.5% 5.798e+11 ± 4% perf-stat.L1-dcache-stores
3.657e+10 ± 5% +26.0% 4.609e+10 ± 4% perf-stat.L1-icache-load-misses
3.534e+10 ± 3% +39.2% 4.919e+10 ± 5% perf-stat.LLC-loads
5.802e+09 ± 8% +77.5% 1.03e+10 ± 6% perf-stat.LLC-stores
5.894e+11 ± 4% +44.1% 8.49e+11 ± 4% perf-stat.branch-instructions
5.843e+09 ± 5% +61.1% 9.414e+09 ± 5% perf-stat.branch-load-misses
5.715e+11 ± 3% +51.0% 8.631e+11 ± 4% perf-stat.branch-loads
5.885e+09 ± 7% +69.6% 9.981e+09 ± 8% perf-stat.branch-misses
1.778e+11 ± 5% +44.2% 2.564e+11 ± 4% perf-stat.bus-cycles
7.914e+10 ± 6% +38.9% 1.099e+11 ± 5% perf-stat.cache-references
44519137 ± 0% +121.2% 98455560 ± 2% perf-stat.context-switches
4.758e+12 ± 3% +42.2% 6.765e+12 ± 4% perf-stat.cpu-cycles
422734 ± 2% +113.7% 903275 ± 3% perf-stat.cpu-migrations
1.278e+10 ± 3% +35.1% 1.727e+10 ± 4% perf-stat.dTLB-load-misses
7.029e+11 ± 4% +44.3% 1.014e+12 ± 5% perf-stat.dTLB-loads
2.886e+08 ± 4% +139.9% 6.922e+08 ± 5% perf-stat.dTLB-store-misses
3.995e+11 ± 8% +45.3% 5.807e+11 ± 3% perf-stat.dTLB-stores
1.312e+09 ± 3% +47.0% 1.928e+09 ± 7% perf-stat.iTLB-loads
2.929e+12 ± 2% +38.8% 4.064e+12 ± 6% perf-stat.instructions
729801 ± 2% +50.8% 1100280 ± 2% perf-stat.minor-faults
1.071e+08 ± 8% +14.4% 1.224e+08 ± 6% perf-stat.node-store-misses
729789 ± 2% +50.8% 1100266 ± 2% perf-stat.page-faults
3.922e+12 ± 3% +47.2% 5.775e+12 ± 4% perf-stat.ref-cycles
6824 ± 0% +55.1% 10582 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
28828 ± 12% +49.1% 42992 ± 6% sched_debug.cfs_rq:/.exec_clock.max
30.22 ± 11% +95.5% 59.08 ± 7% sched_debug.cfs_rq:/.exec_clock.min
7993 ± 6% +50.5% 12033 ± 2% sched_debug.cfs_rq:/.exec_clock.stddev
25.89 ± 18% -28.4% 18.53 ± 21% sched_debug.cfs_rq:/.load_avg.avg
7337 ± 0% +51.6% 11121 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
34800 ± 9% +43.2% 49818 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
8745 ± 6% +46.1% 12773 ± 2% sched_debug.cfs_rq:/.min_vruntime.stddev
5.47 ± 16% -29.1% 3.88 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.avg
-32162 ±-16% +39.6% -44890 ±-13% sched_debug.cfs_rq:/.spread0.min
8745 ± 6% +46.1% 12774 ± 2% sched_debug.cfs_rq:/.spread0.stddev
652.00 ± 6% -22.9% 502.53 ± 6% sched_debug.cfs_rq:/.util_avg.max
138.58 ± 6% -22.4% 107.57 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
203456 ± 0% +44.0% 293000 ± 0% sched_debug.cpu.clock.avg
203461 ± 0% +44.0% 293005 ± 0% sched_debug.cpu.clock.max
203450 ± 0% +44.0% 292994 ± 0% sched_debug.cpu.clock.min
203456 ± 0% +44.0% 293000 ± 0% sched_debug.cpu.clock_task.avg
203461 ± 0% +44.0% 293005 ± 0% sched_debug.cpu.clock_task.max
203450 ± 0% +44.0% 292994 ± 0% sched_debug.cpu.clock_task.min
21.62 ± 35% -36.8% 13.66 ± 28% sched_debug.cpu.cpu_load[3].stddev
336.16 ± 9% +23.4% 414.96 ± 8% sched_debug.cpu.curr->pid.avg
5526 ± 0% +38.9% 7674 ± 2% sched_debug.cpu.curr->pid.max
1227 ± 5% +31.3% 1611 ± 5% sched_debug.cpu.curr->pid.stddev
46943 ± 2% +55.3% 72908 ± 1% sched_debug.cpu.nr_load_updates.avg
70757 ± 3% +51.6% 107258 ± 3% sched_debug.cpu.nr_load_updates.max
31077 ± 12% +56.5% 48628 ± 12% sched_debug.cpu.nr_load_updates.min
10900 ± 4% +53.9% 16781 ± 3% sched_debug.cpu.nr_load_updates.stddev
305199 ± 0% +123.9% 683210 ± 0% sched_debug.cpu.nr_switches.avg
1286089 ± 12% +115.6% 2772732 ± 7% sched_debug.cpu.nr_switches.max
602.75 ± 5% +64.8% 993.43 ± 7% sched_debug.cpu.nr_switches.min
362058 ± 7% +117.3% 786749 ± 3% sched_debug.cpu.nr_switches.stddev
57.60 ± 13% +124.4% 129.24 ± 11% sched_debug.cpu.nr_uninterruptible.max
-105.13 ±-13% +217.5% -333.79 ±-11% sched_debug.cpu.nr_uninterruptible.min
22.80 ± 5% +220.7% 73.11 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
304847 ± 0% +124.1% 683016 ± 0% sched_debug.cpu.sched_count.avg
1283492 ± 12% +115.9% 2771349 ± 7% sched_debug.cpu.sched_count.max
373.43 ± 8% +109.1% 780.90 ± 7% sched_debug.cpu.sched_count.min
361891 ± 7% +117.4% 786838 ± 3% sched_debug.cpu.sched_count.stddev
151704 ± 0% +124.4% 340409 ± 0% sched_debug.cpu.sched_goidle.avg
639131 ± 12% +115.5% 1377132 ± 7% sched_debug.cpu.sched_goidle.max
148.13 ± 7% +104.1% 302.30 ± 7% sched_debug.cpu.sched_goidle.min
180189 ± 7% +117.6% 392032 ± 3% sched_debug.cpu.sched_goidle.stddev
152464 ± 0% +124.2% 341851 ± 0% sched_debug.cpu.ttwu_count.avg
931253 ± 13% +78.3% 1660600 ± 7% sched_debug.cpu.ttwu_count.max
201.32 ± 7% +91.7% 385.88 ± 4% sched_debug.cpu.ttwu_count.min
215408 ± 6% +95.3% 420609 ± 3% sched_debug.cpu.ttwu_count.stddev
106.18 ± 8% +89.1% 200.76 ± 4% sched_debug.cpu.ttwu_local.min
203453 ± 0% +44.0% 292997 ± 0% sched_debug.cpu_clk
200513 ± 0% +44.5% 289817 ± 0% sched_debug.ktime
0.17 ± 0% -46.7% 0.09 ± 49% sched_debug.rt_rq:/.rt_nr_running.max
0.02 ± 17% -44.8% 0.01 ± 53% sched_debug.rt_rq:/.rt_nr_running.stddev
203453 ± 0% +44.0% 292997 ± 0% sched_debug.sched_clk
3.58 ± 9% -42.2% 2.07 ± 34% perf-profile.cycles-pp.__alloc_percpu_gfp.__percpu_counter_init.f2fs_alloc_inode.alloc_inode.new_inode_pseudo
3.73 ± 5% -44.5% 2.07 ± 34% perf-profile.cycles-pp.__blk_mq_complete_request.blk_mq_complete_request.__nvme_process_cq.nvme_irq.handle_irq_event_percpu
0.93 ± 6% -41.0% 0.55 ± 35% perf-profile.cycles-pp.__f2fs_submit_merged_bio.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range
0.90 ± 8% -93.3% 0.06 ±300% perf-profile.cycles-pp.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.f2fs_do_sync_file.f2fs_sync_file
6.88 ± 7% -30.3% 4.79 ± 33% perf-profile.cycles-pp.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.f2fs_sync_file.vfs_fsync_range
1.96 ± 8% -42.4% 1.13 ± 34% perf-profile.cycles-pp.__generic_file_write_iter.f2fs_file_write_iter.__vfs_write.vfs_write.sys_write
3.81 ± 5% -44.3% 2.12 ± 34% perf-profile.cycles-pp.__nvme_process_cq.nvme_irq.handle_irq_event_percpu.handle_irq_event.handle_edge_irq
3.68 ± 9% -42.2% 2.13 ± 34% perf-profile.cycles-pp.__percpu_counter_init.f2fs_alloc_inode.alloc_inode.new_inode_pseudo.new_inode
1.59 ± 7% -43.6% 0.90 ± 34% perf-profile.cycles-pp.__percpu_counter_sum.f2fs_balance_fs.f2fs_create.path_openat.do_filp_open
2.39 ± 9% -36.4% 1.52 ± 34% perf-profile.cycles-pp.__percpu_counter_sum.f2fs_balance_fs.f2fs_write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages
2.79 ± 7% -40.8% 1.65 ± 34% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.91 ± 6% -45.2% 1.60 ± 34% perf-profile.cycles-pp.__wake_up.__wake_up_bit.end_page_writeback.f2fs_write_end_io.bio_endio
2.92 ± 6% -45.1% 1.61 ± 34% perf-profile.cycles-pp.__wake_up_bit.end_page_writeback.f2fs_write_end_io.bio_endio.blk_update_request
2.88 ± 6% -45.3% 1.57 ± 34% perf-profile.cycles-pp.__wake_up_common.__wake_up.__wake_up_bit.end_page_writeback.f2fs_write_end_io
0.98 ± 11% -62.2% 0.37 ± 81% perf-profile.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
2.56 ± 7% -44.9% 1.41 ± 34% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
4.04 ± 8% -42.3% 2.33 ± 34% perf-profile.cycles-pp.alloc_inode.new_inode_pseudo.new_inode.f2fs_new_inode.f2fs_create
2.86 ± 6% -45.3% 1.56 ± 34% perf-profile.cycles-pp.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit
3.39 ± 6% -45.2% 1.86 ± 34% perf-profile.cycles-pp.bio_endio.blk_update_request.blk_mq_end_request.nvme_complete_rq.__blk_mq_complete_request
3.73 ± 5% -44.5% 2.07 ± 34% perf-profile.cycles-pp.blk_mq_complete_request.__nvme_process_cq.nvme_irq.handle_irq_event_percpu.handle_irq_event
3.65 ± 5% -44.7% 2.02 ± 34% perf-profile.cycles-pp.blk_mq_end_request.nvme_complete_rq.__blk_mq_complete_request.blk_mq_complete_request.__nvme_process_cq
3.50 ± 5% -45.2% 1.92 ± 34% perf-profile.cycles-pp.blk_update_request.blk_mq_end_request.nvme_complete_rq.__blk_mq_complete_request.blk_mq_complete_request
2.85 ± 6% -45.3% 1.56 ± 34% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up
4.26 ± 6% -43.9% 2.39 ± 34% perf-profile.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry
20.48 ± 5% -21.1% 16.16 ± 34% perf-profile.cycles-pp.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
20.64 ± 5% -21.3% 16.25 ± 34% perf-profile.cycles-pp.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
0.81 ± 18% -85.8% 0.11 ±200% perf-profile.cycles-pp.do_write_page.write_data_page.do_write_data_page.f2fs_write_data_page.f2fs_write_cache_pages
6.86 ± 7% -30.4% 4.78 ± 33% perf-profile.cycles-pp.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.f2fs_sync_file
3.20 ± 6% -44.7% 1.77 ± 34% perf-profile.cycles-pp.end_page_writeback.f2fs_write_end_io.bio_endio.blk_update_request.blk_mq_end_request
2.54 ± 7% -44.7% 1.40 ± 33% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
3.88 ± 9% -42.5% 2.23 ± 34% perf-profile.cycles-pp.f2fs_alloc_inode.alloc_inode.new_inode_pseudo.new_inode.f2fs_new_inode
1.99 ± 5% -49.0% 1.01 ± 34% perf-profile.cycles-pp.f2fs_balance_fs.f2fs_create.path_openat.do_filp_open.do_sys_open
3.12 ± 9% -43.5% 1.76 ± 34% perf-profile.cycles-pp.f2fs_balance_fs.f2fs_write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages
1.18 ± 6% -47.6% 0.62 ± 34% perf-profile.cycles-pp.f2fs_dentry_hash.find_in_level.f2fs_find_entry.f2fs_lookup.path_openat
2.73 ± 7% -41.1% 1.61 ± 34% perf-profile.cycles-pp.f2fs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
7.86 ± 8% -52.0% 3.77 ± 34% perf-profile.cycles-pp.f2fs_find_entry.f2fs_lookup.path_openat.do_filp_open.do_sys_open
7.95 ± 8% -51.7% 3.84 ± 34% perf-profile.cycles-pp.f2fs_lookup.path_openat.do_filp_open.do_sys_open.sys_open
4.60 ± 7% -42.4% 2.65 ± 34% perf-profile.cycles-pp.f2fs_new_inode.f2fs_create.path_openat.do_filp_open.do_sys_open
1.12 ± 3% -41.5% 0.66 ± 35% perf-profile.cycles-pp.f2fs_wait_on_page_writeback.part.31.f2fs_wait_on_page_writeback.wait_on_node_pages_writeback.f2fs_do_sync_file.f2fs_sync_file
1.14 ± 3% -41.6% 0.67 ± 35% perf-profile.cycles-pp.f2fs_wait_on_page_writeback.wait_on_node_pages_writeback.f2fs_do_sync_file.f2fs_sync_file.vfs_fsync_range
5.46 ± 7% -27.3% 3.97 ± 34% perf-profile.cycles-pp.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range
4.81 ± 8% -25.6% 3.57 ± 34% perf-profile.cycles-pp.f2fs_write_data_page.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
6.86 ± 7% -30.4% 4.77 ± 33% perf-profile.cycles-pp.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file
3.38 ± 6% -45.1% 1.85 ± 34% perf-profile.cycles-pp.f2fs_write_end_io.bio_endio.blk_update_request.blk_mq_end_request.nvme_complete_rq
0.92 ± 9% -87.6% 0.11 ±201% perf-profile.cycles-pp.filemap_fdatawait_range.filemap_write_and_wait_range.f2fs_do_sync_file.f2fs_sync_file.vfs_fsync_range
8.18 ± 7% -35.5% 5.27 ± 34% perf-profile.cycles-pp.filemap_write_and_wait_range.f2fs_do_sync_file.f2fs_sync_file.vfs_fsync_range.do_fsync
1.28 ± 4% -65.3% 0.45 ± 66% perf-profile.cycles-pp.find_data_page.find_in_level.f2fs_find_entry.f2fs_lookup.path_openat
7.80 ± 8% -52.1% 3.74 ± 34% perf-profile.cycles-pp.find_in_level.f2fs_find_entry.f2fs_lookup.path_openat.do_filp_open
4.86 ± 9% -51.2% 2.37 ± 35% perf-profile.cycles-pp.find_target_dentry.find_in_level.f2fs_find_entry.f2fs_lookup.path_openat
1.80 ± 9% -43.4% 1.02 ± 33% perf-profile.cycles-pp.generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.__vfs_write.vfs_write
4.05 ± 6% -44.1% 2.26 ± 34% perf-profile.cycles-pp.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter
4.08 ± 6% -43.9% 2.29 ± 34% perf-profile.cycles-pp.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle
3.92 ± 5% -44.0% 2.20 ± 34% perf-profile.cycles-pp.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr
3.92 ± 5% -44.2% 2.19 ± 34% perf-profile.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ
1.57 ± 9% -43.7% 0.88 ± 35% perf-profile.cycles-pp.kthread.ret_from_fork
1.87 ± 9% -40.5% 1.11 ± 34% perf-profile.cycles-pp.memset_erms.__alloc_percpu_gfp.__percpu_counter_init.f2fs_alloc_inode.alloc_inode
4.10 ± 8% -42.4% 2.36 ± 34% perf-profile.cycles-pp.new_inode.f2fs_new_inode.f2fs_create.path_openat.do_filp_open
4.05 ± 8% -42.4% 2.33 ± 34% perf-profile.cycles-pp.new_inode_pseudo.new_inode.f2fs_new_inode.f2fs_create.path_openat
3.70 ± 5% -44.6% 2.05 ± 34% perf-profile.cycles-pp.nvme_complete_rq.__blk_mq_complete_request.blk_mq_complete_request.__nvme_process_cq.nvme_irq
3.83 ± 5% -44.3% 2.14 ± 34% perf-profile.cycles-pp.nvme_irq.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq
20.41 ± 6% -21.0% 16.12 ± 34% perf-profile.cycles-pp.path_openat.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
1.51 ± 10% -43.7% 0.85 ± 34% perf-profile.cycles-pp.pcpu_alloc.__alloc_percpu_gfp.__percpu_counter_init.f2fs_alloc_inode.alloc_inode
1.24 ± 10% -46.7% 0.66 ± 36% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.57 ± 9% -43.7% 0.88 ± 35% perf-profile.cycles-pp.ret_from_fork
4.22 ± 6% -42.9% 2.41 ± 34% perf-profile.cycles-pp.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
20.66 ± 5% -21.3% 16.27 ± 34% perf-profile.cycles-pp.sys_open.entry_SYSCALL_64_fastpath
2.93 ± 7% -40.9% 1.73 ± 34% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
2.82 ± 6% -45.2% 1.55 ± 34% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common
2.64 ± 6% -44.9% 1.45 ± 34% perf-profile.cycles-pp.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function
2.88 ± 7% -41.2% 1.69 ± 34% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.43 ± 3% -43.0% 0.82 ± 34% perf-profile.cycles-pp.wait_on_node_pages_writeback.f2fs_do_sync_file.f2fs_sync_file.vfs_fsync_range.do_fsync
2.86 ± 6% -45.2% 1.57 ± 34% perf-profile.cycles-pp.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit.end_page_writeback
1.05 ± 10% -51.3% 0.51 ± 51% perf-profile.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
1.05 ± 10% -51.3% 0.51 ± 51% perf-profile.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
1.25 ± 10% -46.7% 0.66 ± 36% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
0.94 ± 11% -62.2% 0.36 ± 81% perf-profile.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
12602 ± 10% +605.7% 88933 ± 4% latency_stats.avg.f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range
117.20 ± 38% +14638.0% 17272 ± 10% latency_stats.avg.f2fs_sync_fs.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 6536 ± 74% latency_stats.avg.rcu_sync_enter.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
110089 ± 5% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
123952 ± 2% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
10543 ± 6% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_fdatawrite.sync_dirty_inodes.[f2fs].write_checkpoint.[f2fs]
417255 ± 1% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs]
96289 ± 4% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
116316 ± 5% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
96158 ± 3% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1238583 ± 1% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2911068 ± 7% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 6208007 ± 2% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
0.00 ± -1% +Inf% 2257817 ± 10% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.is_checkpointed_node.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2727789 ± 1% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.need_dentry_mark.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2227301 ± 2% latency_stats.hits.call_rwsem_down_read_failed.percpu_down_read.need_inode_block_update.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
12787 ± 9% -83.0% 2177 ± 9% latency_stats.hits.call_rwsem_down_write_failed.__f2fs_submit_merged_bio.[f2fs].f2fs_submit_merged_bio_cond.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
140416 ± 3% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 2617435 ± 9% latency_stats.hits.call_rwsem_down_write_failed.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 2581381 ± 1% latency_stats.hits.call_rwsem_down_write_failed.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5283128 ± 3% latency_stats.hits.call_rwsem_down_write_failed.percpu_down_write.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
193673 ± 4% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
271526 ± 3% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 11445 ± 20% latency_stats.hits.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 24410 ± 15% latency_stats.hits.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 26517 ± 9% latency_stats.hits.percpu_down_write.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
54803 ± 8% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
7982 ± 76% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
0.00 ± -1% +Inf% 9196 ± 32% latency_stats.max.call_rwsem_down_read_failed.percpu_down_read.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 15634 ±151% latency_stats.max.call_rwsem_down_read_failed.percpu_down_read.f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync
5996 ± 54% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
18794 ± 50% -99.8% 38.20 ± 36% latency_stats.max.call_rwsem_down_write_failed.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
25320 ± 17% -99.9% 18.30 ± 42% latency_stats.max.call_rwsem_down_write_failed.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
352.80 ± 66% +25704.3% 91037 ± 7% latency_stats.max.f2fs_sync_fs.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 20403 ±131% latency_stats.max.percpu_down_write.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 11786 ± 39% latency_stats.max.percpu_down_write.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
0.00 ± -1% +Inf% 18121 ±153% latency_stats.max.rcu_sync_enter.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 23677 ± 90% latency_stats.max.rcu_sync_enter.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open
0.00 ± -1% +Inf% 14660 ± 34% latency_stats.max.rcu_sync_enter.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs]
0.00 ± -1% +Inf% 16700 ±162% latency_stats.max.rcu_sync_enter.percpu_down_write.try_to_free_nats.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
51205 ± 11% -80.7% 9868 ± 29% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
123642 ± 26% -100.0% 0.00 ± -1% latency_stats.sum.bt_get.blk_mq_get_tag.__blk_mq_alloc_request.blk_mq_map_request.blk_mq_make_request.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_page_mbio.[f2fs].write_meta_page.[f2fs].f2fs_write_meta_page.[f2fs].sync_meta_pages.[f2fs]
19533 ± 10% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.build_free_nids.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
145779 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.build_free_nids.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs]
23239 ± 49% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_convert_inline_inode.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
15290 ± 17% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
6123697 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
34789 ± 22% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
472390 ± 4% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1526265 ± 5% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
66045 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_fdatawrite.sync_dirty_inodes.[f2fs].write_checkpoint.[f2fs]
2547237 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs]
1181887 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
879407 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
658290 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 56722 ± 8% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.build_free_nids.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 201027 ± 7% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.build_free_nids.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
0.00 ± -1% +Inf% 7958 ± 50% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 73190 ± 11% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 47053 ± 52% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync
0.00 ± -1% +Inf% 11815766 ± 5% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 21331651 ± 17% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 48966334 ± 11% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
0.00 ± -1% +Inf% 17894265 ± 10% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.is_checkpointed_node.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 18580100 ± 4% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.need_dentry_mark.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 27464359 ± 6% latency_stats.sum.call_rwsem_down_read_failed.percpu_down_read.need_inode_block_update.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
22780 ± 8% -84.5% 3539 ± 10% latency_stats.sum.call_rwsem_down_write_failed.__f2fs_submit_merged_bio.[f2fs].f2fs_submit_merged_bio_cond.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
965007 ± 5% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 7963263 ± 15% latency_stats.sum.call_rwsem_down_write_failed.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 7766380 ± 4% latency_stats.sum.call_rwsem_down_write_failed.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 22993263 ± 10% latency_stats.sum.call_rwsem_down_write_failed.percpu_down_write.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 24094 ± 11% latency_stats.sum.call_rwsem_down_write_failed.percpu_down_write.try_to_free_nats.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 96995 ± 6% latency_stats.sum.call_rwsem_down_write_failed.percpu_down_write.try_to_free_nats.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
1230935 ± 4% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1133345 ± 5% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
59569 ± 41% -99.4% 369.80 ± 18% latency_stats.sum.call_rwsem_down_write_failed.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
231014 ± 22% -100.0% 103.90 ± 39% latency_stats.sum.call_rwsem_down_write_failed.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
4537935 ± 27% +306.3% 18436209 ± 6% latency_stats.sum.f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
1473434 ± 17% +530.5% 9289385 ± 11% latency_stats.sum.f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range
1018 ± 52% +80360.5% 819812 ± 11% latency_stats.sum.f2fs_sync_fs.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 517363 ± 5% latency_stats.sum.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 890753 ± 3% latency_stats.sum.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].fsync_node_pages.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 258657 ± 10% latency_stats.sum.percpu_down_write.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open
0.00 ± -1% +Inf% 70546 ± 50% latency_stats.sum.percpu_down_write.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 65607 ± 25% latency_stats.sum.percpu_down_write.write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
0.00 ± -1% +Inf% 50572 ± 81% latency_stats.sum.rcu_sync_enter.percpu_down_write.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 123301 ± 26% latency_stats.sum.rcu_sync_enter.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open
0.00 ± -1% +Inf% 110076 ± 10% latency_stats.sum.rcu_sync_enter.percpu_down_write.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs]
0.00 ± -1% +Inf% 52574 ± 79% latency_stats.sum.rcu_sync_enter.percpu_down_write.try_to_free_nats.[f2fs].f2fs_balance_fs_bg.[f2fs].f2fs_balance_fs.[f2fs].f2fs_write_data_page.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_do_sync_file.[f2fs]
689963 ± 22% -98.9% 7538 ±238% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_add_regular_entry.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
9358 ± 44% -95.3% 441.00 ± 39% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_inode.[f2fs].f2fs_do_sync_file.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
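The latency_stats entries above are dominated by percpu_down_read()/percpu_down_write() waiters introduced by the switch to percpu_rw_semaphore, with the write side additionally paying for rcu_sync_enter(). As a rough illustration only (not the f2fs patch itself; the lock name below is hypothetical), the kernel percpu-rwsem API makes the asymmetry visible: readers touch only per-cpu state, while writers must switch readers to the slow path and may wait for an RCU grace period first.
#include <linux/percpu-rwsem.h>

static struct percpu_rw_semaphore example_lock;	/* hypothetical; initialized with percpu_init_rwsem() */

/* Reader side: cheap per-cpu bookkeeping, no shared cacheline bouncing. */
static void example_reader(void)
{
	percpu_down_read(&example_lock);
	/* ... read shared metadata ... */
	percpu_up_read(&example_lock);
}

/* Writer side: forces readers onto the slow path and, through
 * rcu_sync_enter(), may block for an RCU grace period before
 * proceeding -- the expensive path under fsync-heavy load. */
static void example_writer(void)
{
	percpu_down_write(&example_lock);
	/* ... update shared metadata ... */
	percpu_up_write(&example_lock);
}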
fsmark.time.system_time
1800 ++-------------------------------------------------------------------+
| O |
1600 OOO OOOOOO OOOOOO OOOOOO OOOO O OOOOOO OOOOOO OOOOO OOOOOO O |
1400 ++ |
| |
1200 ***.** * *.*** **.******.* * |
1000 ++ ****.******.******.*** * * *** *.******.******.***
| |
800 ++ |
600 ++ |
| |
400 ++ |
200 ++ |
| |
0 ++--------------------------------------------O----------------------+
fsmark.time.elapsed_time
600 ++--------------------------------------------------------------------+
| |
500 OOO OOOOOO OOOOO OOOOO OOOOOO OOOOO OOOO O O O OOOO OO O O |
| O O O O O O |
| |
400 ++ * * * |
***.******.*****.*****.***** + ****. ****.* ****.** **.*****. *** *. *
300 ++ * * * * **|
| |
200 ++ |
| |
| |
100 ++ |
| |
0 ++--------------------------------------------O-----------------------+
fsmark.time.elapsed_time.max
600 ++--------------------------------------------------------------------+
| |
500 OOO OOOOOO OOOOO OOOOO OOOOOO OOOOO OOOO O O O OOOO OO O O |
| O O O O O O |
| |
400 ++ * * * |
***.******.*****.*****.***** + ****. ****.* ****.** **.*****. *** *. *
300 ++ * * * * **|
| |
200 ++ |
| |
| |
100 ++ |
| |
0 ++--------------------------------------------O-----------------------+
fsmark.time.voluntary_context_switches
6e+07 ++------------------------------------------------------------------+
| |
5e+07 O+ OO O O O OO O OO O OO O O OO O OO |
|OOO OOOO OO OOO OOO OO O O O O OO O OOOOOO |
| |
4e+07 ++ |
| |
3e+07 ++ |
| *****.*******.******.** ***.*******. |
2e+07 ****.* *****.*******.*** ******.****
| |
| |
1e+07 ++ |
| |
0 ++-------------------------------------------O----------------------+
fsmark.files_per_sec
35000 ++------------------------------------------------------------------+
| |
30000 ++ * * ** .** .******.****
****.* * **.*******.******.** *** **** .******. ****** |
25000 ++ * * |
| |
20000 O+OO O OO O O O O O OO O OOOOO |
|O OOOOO OO OO OOOOOO OOOOO O OO O O OO OO OO |
15000 ++ |
| |
10000 ++ |
| |
5000 ++ |
| |
0 ++-------------------------------------------O----------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
[lkp] 8296d37010: vm-scalability.throughput -42.0% regression
by kernel test robot
FYI, we noticed a -42.0% regression of vm-scalability.throughput due to commit:
commit 8296d3701014cd804bf981b5e2bf9a58b5c0e341 ("Speeds up clearing huge pages using work queue")
git://bee.sh.intel.com/git/yhuang/linux.git dbg_clear_huge_page
in testcase: vm-scalability
on test machine: 144 threads Brickland Haswell-EX with 512G memory
with following parameters:
cpufreq_governor: performance
runtime: 300s
size: 8T
test: anon-w-seq
In addition to that, the commit also has significant impact on the following tests:
+------------------+----------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 27.7% improvement |
| test machine | 72 threads Brickland Haswell-EP with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1 |
| | runtime=300s |
| | size=8T |
| | test=anon-w-seq |
+------------------+----------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 33.5% improvement |
| test machine | 88 threads Broadwell-EP with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-w-seq |
+------------------+----------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 45.6% improvement |
| test machine | 72 threads Brickland Haswell-EP with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-w-seq |
+------------------+----------------------------------------------------------------------------------+
| testcase: change | vm-scalability: |
| test machine | 4 threads qemu-system-x86_64 -enable-kvm -cpu qemu64,+ssse3 with 4G memory |
| test parameters | runtime=300s |
| | size=2T |
| | test=shm-pread-seq |
+------------------+----------------------------------------------------------------------------------+
| testcase: change | vm-scalability: |
| test machine | 72 threads Brickland Haswell-EP with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-w-seq |
| | thp_defrag=never |
| | thp_enabled=never |
+------------------+----------------------------------------------------------------------------------+
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/8T/lkp-hsx04/anon-w-seq/vm-scalability
commit:
v4.7-rc7
8296d37010 ("Speeds up clearing huge pages using work queue")
v4.7-rc7 8296d3701014cd804bf981b5e2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
79883256 ± 0% -42.0% 46349306 ± 0% vm-scalability.throughput
516.98 ± 0% +65.4% 855.09 ± 0% vm-scalability.time.elapsed_time
516.98 ± 0% +65.4% 855.09 ± 0% vm-scalability.time.elapsed_time.max
427347 ± 1% +4365.4% 19082862 ± 0% vm-scalability.time.involuntary_context_switches
27202100 ± 1% -3.8% 26170440 ± 1% vm-scalability.time.minor_page_faults
13276 ± 0% -69.6% 4036 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
34729 ± 0% -27.8% 25090 ± 0% vm-scalability.time.system_time
33909 ± 0% -72.2% 9431 ± 0% vm-scalability.time.user_time
41305 ± 0% +69327.1% 28676869 ± 0% vm-scalability.time.voluntary_context_switches
28541 ± 0% +88.3% 53732 ± 0% meminfo.KernelStack
15170 ± 3% +42.4% 21607 ± 4% softirqs.NET_RX
5954845 ± 1% +198.8% 17795401 ± 0% softirqs.RCU
547545 ± 10% +2448.4% 13953494 ± 0% softirqs.SCHED
134.75 ± 1% -39.0% 82.20 ± 0% vmstat.procs.r
7130 ± 3% +5610.4% 407164 ± 0% vmstat.system.cs
147778 ± 0% +3.2% 152445 ± 0% vmstat.system.in
11247 ± 1% +56.8% 17639 ± 0% numa-meminfo.node0.KernelStack
5822 ± 6% +106.8% 12037 ± 1% numa-meminfo.node1.KernelStack
5796 ± 4% +111.4% 12253 ± 1% numa-meminfo.node2.KernelStack
5691 ± 3% +108.3% 11852 ± 0% numa-meminfo.node3.KernelStack
1137 ± 18% +703.1% 9135 ±167% numa-meminfo.node3.Shmem
702.75 ± 1% +56.8% 1101 ± 0% numa-vmstat.node0.nr_kernel_stack
363.50 ± 6% +106.8% 751.80 ± 1% numa-vmstat.node1.nr_kernel_stack
361.00 ± 4% +112.0% 765.40 ± 1% numa-vmstat.node2.nr_kernel_stack
355.00 ± 3% +108.6% 740.40 ± 0% numa-vmstat.node3.nr_kernel_stack
283.75 ± 18% +704.8% 2283 ±167% numa-vmstat.node3.nr_shmem
559380 ± 16% +6.6e+05% 3.712e+09 ± 1% cpuidle.C1-HSW.time
46260 ± 23% +89365.4% 41386703 ± 1% cpuidle.C1-HSW.usage
14628635 ± 16% +34618.7% 5.079e+09 ± 0% cpuidle.C1E-HSW.time
28388 ± 6% +2e+05% 55926082 ± 0% cpuidle.C1E-HSW.usage
21808226 ± 3% +76644.1% 1.674e+10 ± 0% cpuidle.C3-HSW.time
61344 ± 3% +1e+05% 63558599 ± 0% cpuidle.C3-HSW.usage
5.555e+09 ± 3% +491.3% 3.285e+10 ± 0% cpuidle.C6-HSW.time
5848295 ± 3% +758.9% 50232586 ± 0% cpuidle.C6-HSW.usage
6742029 ± 19% +1671.8% 1.195e+08 ± 0% cpuidle.POLL.time
1012 ± 18% +41616.8% 422173 ± 0% cpuidle.POLL.usage
1783 ± 0% +88.3% 3358 ± 0% proc-vmstat.nr_kernel_stack
41801 ± 0% +13.4% 47397 ± 2% proc-vmstat.numa_foreign
95889 ± 1% -80.8% 18393 ± 0% proc-vmstat.numa_hint_faults
61294 ± 1% -92.3% 4715 ± 1% proc-vmstat.numa_hint_faults_local
13629995 ± 0% -32.3% 9230486 ± 0% proc-vmstat.numa_huge_pte_updates
41801 ± 0% +13.4% 47397 ± 2% proc-vmstat.numa_miss
27943 ± 1% -51.4% 13583 ± 0% proc-vmstat.numa_pages_migrated
6.983e+09 ± 0% -32.3% 4.731e+09 ± 0% proc-vmstat.numa_pte_updates
274.00 ± 4% -100.0% 0.00 ± 0% proc-vmstat.pgmigrate_fail
27943 ± 1% -51.4% 13583 ± 0% proc-vmstat.pgmigrate_success
92.53 ± 0% -43.0% 52.74 ± 0% turbostat.%Busy
2675 ± 0% -43.0% 1525 ± 0% turbostat.Avg_MHz
3.39 ± 1% +1107.2% 40.89 ± 0% turbostat.CPU%c1
0.01 ± 0% +33960.0% 3.41 ± 0% turbostat.CPU%c3
4.07 ± 4% -27.2% 2.96 ± 1% turbostat.CPU%c6
65.00 ± 2% -12.6% 56.80 ± 0% turbostat.CoreTmp
1.72 ± 6% -82.0% 0.31 ± 4% turbostat.Pkg%pc2
67.50 ± 3% -12.3% 59.20 ± 0% turbostat.PkgTmp
603.44 ± 0% -12.6% 527.24 ± 0% turbostat.PkgWatt
1029 ± 0% -42.8% 589.21 ± 0% turbostat.RAMWatt
800.50 ± 25% +105.0% 1641 ± 24% slabinfo.blkdev_requests.active_objs
800.50 ± 25% +105.0% 1641 ± 24% slabinfo.blkdev_requests.num_objs
36464 ± 3% -48.9% 18616 ± 0% slabinfo.kmalloc-256.active_objs
583.00 ± 3% -39.8% 350.80 ± 1% slabinfo.kmalloc-256.active_slabs
37354 ± 3% -39.8% 22483 ± 1% slabinfo.kmalloc-256.num_objs
583.00 ± 3% -39.8% 350.80 ± 1% slabinfo.kmalloc-256.num_slabs
50562 ± 1% +12.0% 56643 ± 1% slabinfo.kmalloc-32.active_objs
50573 ± 1% +12.0% 56646 ± 1% slabinfo.kmalloc-32.num_objs
19624 ± 1% +13.9% 22350 ± 5% slabinfo.kmalloc-512.active_objs
19793 ± 1% +13.3% 22417 ± 5% slabinfo.kmalloc-512.num_objs
197.75 ± 40% +139.4% 473.40 ± 6% slabinfo.nfs_read_data.active_objs
197.75 ± 40% +139.4% 473.40 ± 6% slabinfo.nfs_read_data.num_objs
20308 ± 0% +18.3% 24024 ± 1% slabinfo.proc_inode_cache.active_objs
20315 ± 0% +18.3% 24024 ± 1% slabinfo.proc_inode_cache.num_objs
7781 ± 1% +14.0% 8867 ± 1% slabinfo.sighand_cache.active_objs
7901 ± 1% +13.5% 8965 ± 1% slabinfo.sighand_cache.num_objs
11430 ± 1% +12.0% 12801 ± 0% slabinfo.signal_cache.active_objs
11505 ± 1% +11.6% 12836 ± 0% slabinfo.signal_cache.num_objs
2735 ± 0% +52.2% 4165 ± 0% slabinfo.task_struct.active_objs
925.75 ± 0% +51.3% 1400 ± 0% slabinfo.task_struct.active_slabs
2777 ± 0% +51.3% 4201 ± 0% slabinfo.task_struct.num_objs
925.75 ± 0% +51.3% 1400 ± 0% slabinfo.task_struct.num_slabs
0.00 ± -1% +Inf% 21653 ± 46% latency_stats.avg.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 22567 ± 45% latency_stats.avg.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 3852719 ± 0% latency_stats.hits.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 4545106 ± 0% latency_stats.hits.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 16172385 ± 0% latency_stats.hits.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1321590 ± 3% -70.2% 394183 ± 1% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 662623 ± 69% latency_stats.max.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
21951 ± 75% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk_anon.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
6386 ±110% -98.1% 120.20 ± 97% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 21653 ± 46% latency_stats.max.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 662781 ± 69% latency_stats.max.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 7092 ± 30% latency_stats.max.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 22567 ± 45% latency_stats.max.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
27271 ± 37% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1.912e+10 ± 0% latency_stats.sum.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
97956 ± 89% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk_anon.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
25625 ± 25% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
23289 ±132% -97.2% 648.80 ± 48% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 21653 ± 46% latency_stats.sum.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.558e+10 ± 0% latency_stats.sum.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 4.672e+09 ± 0% latency_stats.sum.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 22567 ± 45% latency_stats.sum.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
2506022 ± 12% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.143e+12 ± 0% +10.4% 1.262e+12 ± 0% perf-stat.L1-dcache-load-misses
1.354e+13 ± 0% +101.2% 2.726e+13 ± 0% perf-stat.L1-dcache-loads
5.788e+12 ± 0% +13.9% 6.59e+12 ± 1% perf-stat.L1-dcache-stores
1.518e+10 ± 2% +398.1% 7.562e+10 ± 1% perf-stat.L1-icache-load-misses
8.415e+09 ± 0% +192.9% 2.465e+10 ± 0% perf-stat.LLC-load-misses
2.557e+10 ± 0% +473.4% 1.466e+11 ± 0% perf-stat.LLC-loads
1.02e+11 ± 0% -40.0% 6.116e+10 ± 0% perf-stat.LLC-store-misses
3.893e+11 ± 0% -59.1% 1.592e+11 ± 0% perf-stat.LLC-stores
2.226e+13 ± 0% +57.8% 3.512e+13 ± 0% perf-stat.branch-instructions
1.417e+09 ± 6% +1809.7% 2.706e+10 ± 0% perf-stat.branch-load-misses
2.225e+13 ± 0% +57.2% 3.497e+13 ± 0% perf-stat.branch-loads
1.458e+09 ± 3% +1750.7% 2.699e+10 ± 0% perf-stat.branch-misses
6.66e+12 ± 0% -4.2% 6.377e+12 ± 1% perf-stat.bus-cycles
1.111e+11 ± 0% -21.4% 8.725e+10 ± 0% perf-stat.cache-misses
4.293e+11 ± 0% -14.2% 3.681e+11 ± 0% perf-stat.cache-references
3682641 ± 3% +9370.7% 3.488e+08 ± 0% perf-stat.context-switches
1.923e+14 ± 0% -4.7% 1.833e+14 ± 1% perf-stat.cpu-cycles
124403 ± 0% +22298.6% 27864645 ± 0% perf-stat.cpu-migrations
2.248e+09 ± 25% +320.7% 9.457e+09 ± 14% perf-stat.dTLB-load-misses
1.352e+13 ± 0% +102.1% 2.732e+13 ± 0% perf-stat.dTLB-loads
3.587e+08 ± 10% +169.6% 9.67e+08 ± 12% perf-stat.dTLB-store-misses
5.785e+12 ± 0% +14.1% 6.6e+12 ± 0% perf-stat.dTLB-stores
2.533e+08 ± 1% +70.1% 4.309e+08 ± 1% perf-stat.iTLB-load-misses
75350482 ± 1% +3093.7% 2.406e+09 ± 1% perf-stat.iTLB-loads
7.124e+13 ± 0% +63.3% 1.164e+14 ± 0% perf-stat.instructions
1.691e+09 ± 0% +919.2% 1.723e+10 ± 0% perf-stat.node-load-misses
6.683e+09 ± 0% +9.4% 7.308e+09 ± 1% perf-stat.node-loads
2.729e+08 ± 0% +9304.2% 2.566e+10 ± 0% perf-stat.node-store-misses
1.017e+11 ± 0% -64.6% 3.603e+10 ± 1% perf-stat.node-stores
1.66e+14 ± 0% -4.7% 1.582e+14 ± 1% perf-stat.ref-cycles
1.20 ± 16% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 22.47 ± 5% perf-profile.cycles-pp.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
41.58 ± 6% -36.6% 26.36 ± 25% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_unit
0.00 ± -1% +Inf% 10.67 ± 5% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 10.73 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 6.35 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 16.46 ± 3% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread
0.00 ± -1% +Inf% 1.04 ± 1% perf-profile.cycles-pp.__schedule.schedule.worker_thread.kthread.ret_from_fork
1.21 ± 16% -100.0% 0.00 ± -1% perf-profile.cycles-pp.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 11.72 ± 4% perf-profile.cycles-pp.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.87 ± 63% +2639.6% 23.77 ± 2% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
2.04 ± 15% +1432.8% 31.31 ± 4% perf-profile.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
50.52 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.clear_page_c_e.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 12.73 ± 2% perf-profile.cycles-pp.clear_page_c_e.process_one_work.worker_thread.kthread.ret_from_fork
0.87 ± 63% +2706.2% 24.48 ± 2% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
0.87 ± 63% +2638.9% 23.76 ± 2% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.85 ± 63% +2675.2% 23.52 ± 2% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 7.40 ± 4% perf-profile.cycles-pp.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
55.34 ± 4% -43.0% 31.56 ± 4% perf-profile.cycles-pp.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
41.59 ± 6% -36.6% 26.36 ± 25% perf-profile.cycles-pp.do_page_fault.page_fault.do_unit
83.81 ± 6% -55.9% 36.92 ± 25% perf-profile.cycles-pp.do_unit
1.18 ± 16% -100.0% 0.00 ± -1% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault
41.45 ± 6% -36.5% 26.33 ± 25% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_unit
0.83 ± 63% +2693.5% 23.33 ± 2% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.00 ± -1% +Inf% 32.67 ± 3% perf-profile.cycles-pp.kthread.ret_from_fork
0.00 ± -1% +Inf% 10.68 ± 5% perf-profile.cycles-pp.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 10.73 ± 4% perf-profile.cycles-pp.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 6.35 ± 4% perf-profile.cycles-pp.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 16.49 ± 3% perf-profile.cycles-pp.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread
0.00 ± -1% +Inf% 10.64 ± 5% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 10.70 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key
0.00 ± -1% +Inf% 6.31 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page
0.00 ± -1% +Inf% 16.42 ± 3% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work
0.00 ± -1% +Inf% 10.36 ± 5% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key
0.00 ± -1% +Inf% 10.27 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs
0.00 ± -1% +Inf% 6.02 ± 5% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue
0.00 ± -1% +Inf% 16.08 ± 3% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn
41.59 ± 6% -36.6% 26.36 ± 25% perf-profile.cycles-pp.page_fault.do_unit
0.00 ± -1% +Inf% 31.28 ± 3% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 16.71 ± 3% perf-profile.cycles-pp.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 32.67 ± 3% perf-profile.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 1.07 ± 2% perf-profile.cycles-pp.schedule.worker_thread.kthread.ret_from_fork
0.88 ± 63% +2699.8% 24.50 ± 2% perf-profile.cycles-pp.start_secondary
0.00 ± -1% +Inf% 32.50 ± 3% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
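The perf-profile above is dominated by __alloc_workqueue_key, apply_workqueue_attrs, flush_workqueue and
destroy_workqueue reached from clear_huge_page, with most of that time spent in mutex_optimistic_spin/osq_lock
underneath them; the latency_stats hits for the same call chains point the same way. In other words the patched
fault path appears to create a fresh workqueue for every huge-page fault, queue the clearing work, then flush and
destroy the queue, so concurrent faults serialize on the global workqueue mutex (wq_pool_mutex, per the mutex_lock
entries above). A minimal sketch of that call pattern follows; the names clear_chunk_work, clear_chunk_fn and
clear_huge_page_wq are hypothetical and only illustrate the shape implied by the profile, not the actual patch:
/*
 * Hypothetical sketch of the per-fault pattern implied by the profile
 * above (alloc_workqueue -> queue_work -> flush_workqueue ->
 * destroy_workqueue, all reached from clear_huge_page).  This is not
 * the actual patch; names and chunking are made up for illustration.
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/workqueue.h>
#define CLEAR_CHUNKS	4
struct clear_chunk_work {
	struct work_struct work;
	struct page *base;	/* head page of the huge page */
	unsigned long vaddr;	/* faulting user address */
	int start, nr;		/* subpage range this worker clears */
};
static void clear_chunk_fn(struct work_struct *w)
{
	struct clear_chunk_work *cw =
		container_of(w, struct clear_chunk_work, work);
	int i;
	for (i = cw->start; i < cw->start + cw->nr; i++)
		clear_user_highpage(cw->base + i, cw->vaddr + i * PAGE_SIZE);
}
static void clear_huge_page_wq(struct page *page, unsigned long vaddr,
			       int nr_subpages)
{
	struct clear_chunk_work cw[CLEAR_CHUNKS];
	struct workqueue_struct *wq;
	int i, chunk = nr_subpages / CLEAR_CHUNKS;
	/* Per-fault workqueue creation: serializes on wq_pool_mutex. */
	wq = alloc_workqueue("clear_huge_page", WQ_UNBOUND, 0);
	if (!wq)
		return;		/* a real implementation would fall back to a plain loop */
	for (i = 0; i < CLEAR_CHUNKS; i++) {
		cw[i].base  = page;
		cw[i].vaddr = vaddr;
		cw[i].start = i * chunk;
		cw[i].nr    = chunk;
		INIT_WORK(&cw[i].work, clear_chunk_fn);
		queue_work(wq, &cw[i].work);
	}
	flush_workqueue(wq);	/* fault path sleeps until the workers finish */
	destroy_workqueue(wq);	/* second trip through wq_pool_mutex */
}
That shape would also explain why the single-task runs further down (lkp-hsw-ep2 with nr_task=1 and lkp-bdw-ep2)
improve, since the clearing is spread over otherwise-idle CPUs, while the 144 tasks faulting concurrently on this
machine spend their time in workqueue setup/teardown and mutex spinning, consistent with the ~9370% jump in
perf-stat.context-switches and the osq_lock cycles above.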
51865 ± 31% +1461.5% 809868 ± 4% sched_debug.cfs_rq:/.MIN_vruntime.avg
620221 ± 31% +279.3% 2352308 ± 2% sched_debug.cfs_rq:/.MIN_vruntime.stddev
217836 ± 0% -15.8% 183318 ± 0% sched_debug.cfs_rq:/.exec_clock.min
1428 ± 15% +1692.5% 25603 ± 0% sched_debug.cfs_rq:/.exec_clock.stddev
8371 ± 6% +2802.6% 242990 ± 4% sched_debug.cfs_rq:/.load.avg
255027 ± 31% +530.6% 1608188 ± 17% sched_debug.cfs_rq:/.load.max
2774 ± 14% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.load.min
22973 ± 28% +1784.8% 433000 ± 3% sched_debug.cfs_rq:/.load.stddev
15.98 ± 7% +1458.1% 249.03 ± 2% sched_debug.cfs_rq:/.load_avg.avg
231.17 ± 4% +254.2% 818.76 ± 9% sched_debug.cfs_rq:/.load_avg.max
2.64 ± 14% +1206.1% 34.47 ± 7% sched_debug.cfs_rq:/.load_avg.min
34.87 ± 5% +320.6% 146.67 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
51865 ± 31% +1461.5% 809868 ± 4% sched_debug.cfs_rq:/.max_vruntime.avg
620221 ± 31% +279.3% 2352308 ± 2% sched_debug.cfs_rq:/.max_vruntime.stddev
31688756 ± 0% -75.3% 7837478 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
32039857 ± 0% -73.4% 8530591 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
31236174 ± 0% -77.1% 7142398 ± 1% sched_debug.cfs_rq:/.min_vruntime.min
172847 ± 16% +193.9% 508060 ± 1% sched_debug.cfs_rq:/.min_vruntime.stddev
0.76 ± 9% -21.0% 0.60 ± 2% sched_debug.cfs_rq:/.nr_running.avg
1.19 ± 4% +79.7% 2.15 ± 8% sched_debug.cfs_rq:/.nr_running.max
0.39 ± 14% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_running.min
0.18 ± 18% +239.8% 0.63 ± 1% sched_debug.cfs_rq:/.nr_running.stddev
2.27 ± 23% +247.9% 7.90 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
5.72 ± 5% +600.3% 40.07 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.avg
82.11 ± 49% +306.4% 333.69 ± 8% sched_debug.cfs_rq:/.runnable_load_avg.max
2.22 ± 15% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.runnable_load_avg.min
7.90 ± 39% +797.4% 70.91 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-94182 ±-133% +218.5% -299955 ± -3% sched_debug.cfs_rq:/.spread0.avg
-545371 ±-32% +82.6% -995615 ± -8% sched_debug.cfs_rq:/.spread0.min
173893 ± 16% +192.5% 508704 ± 1% sched_debug.cfs_rq:/.spread0.stddev
795.73 ± 6% -35.3% 514.90 ± 2% sched_debug.cfs_rq:/.util_avg.avg
443.89 ± 20% -63.2% 163.16 ± 11% sched_debug.cfs_rq:/.util_avg.min
986272 ± 0% -49.0% 502866 ± 1% sched_debug.cpu.avg_idle.avg
1024476 ± 1% +12.4% 1151269 ± 1% sched_debug.cpu.avg_idle.max
760003 ± 4% -95.7% 32915 ± 12% sched_debug.cpu.avg_idle.min
37266 ± 11% +642.8% 276814 ± 0% sched_debug.cpu.avg_idle.stddev
290694 ± 0% +62.4% 471958 ± 0% sched_debug.cpu.clock.avg
290826 ± 0% +62.3% 472019 ± 0% sched_debug.cpu.clock.max
290537 ± 0% +62.4% 471895 ± 0% sched_debug.cpu.clock.min
81.82 ± 24% -56.4% 35.69 ± 2% sched_debug.cpu.clock.stddev
290694 ± 0% +62.4% 471958 ± 0% sched_debug.cpu.clock_task.avg
290826 ± 0% +62.3% 472019 ± 0% sched_debug.cpu.clock_task.max
290537 ± 0% +62.4% 471895 ± 0% sched_debug.cpu.clock_task.min
81.82 ± 24% -56.4% 35.69 ± 2% sched_debug.cpu.clock_task.stddev
5.79 ± 5% +433.8% 30.90 ± 8% sched_debug.cpu.cpu_load[0].avg
81.75 ± 48% +278.6% 309.48 ± 9% sched_debug.cpu.cpu_load[0].max
2.22 ± 15% -100.0% 0.00 ± -1% sched_debug.cpu.cpu_load[0].min
7.94 ± 38% +668.4% 61.02 ± 6% sched_debug.cpu.cpu_load[0].stddev
9.53 ± 9% +741.3% 80.20 ± 4% sched_debug.cpu.cpu_load[1].avg
139.67 ± 15% +140.0% 335.23 ± 3% sched_debug.cpu.cpu_load[1].max
2.42 ± 22% -92.8% 0.17 ± 79% sched_debug.cpu.cpu_load[1].min
20.64 ± 15% +263.0% 74.94 ± 3% sched_debug.cpu.cpu_load[1].stddev
7.96 ± 6% +839.6% 74.83 ± 3% sched_debug.cpu.cpu_load[2].avg
104.56 ± 14% +178.3% 290.95 ± 3% sched_debug.cpu.cpu_load[2].max
2.42 ± 22% -70.2% 0.72 ± 26% sched_debug.cpu.cpu_load[2].min
14.10 ± 14% +324.1% 59.81 ± 4% sched_debug.cpu.cpu_load[2].stddev
7.05 ± 4% +909.5% 71.13 ± 3% sched_debug.cpu.cpu_load[3].avg
85.06 ± 11% +197.0% 252.63 ± 4% sched_debug.cpu.cpu_load[3].max
10.21 ± 14% +378.8% 48.89 ± 4% sched_debug.cpu.cpu_load[3].stddev
6.47 ± 4% +954.6% 68.18 ± 3% sched_debug.cpu.cpu_load[4].avg
67.14 ± 18% +217.5% 213.15 ± 4% sched_debug.cpu.cpu_load[4].max
2.42 ± 22% +99.2% 4.81 ± 18% sched_debug.cpu.cpu_load[4].min
7.64 ± 18% +421.8% 39.87 ± 4% sched_debug.cpu.cpu_load[4].stddev
16273 ± 8% -44.7% 9000 ± 5% sched_debug.cpu.curr->pid.avg
19035 ± 0% +39.9% 26637 ± 0% sched_debug.cpu.curr->pid.max
6025 ± 52% -100.0% 0.00 ± -1% sched_debug.cpu.curr->pid.min
3061 ± 31% +283.2% 11731 ± 1% sched_debug.cpu.curr->pid.stddev
7770 ± 9% +2964.7% 238130 ± 4% sched_debug.cpu.load.avg
171305 ± 59% +749.2% 1454745 ± 9% sched_debug.cpu.load.max
2774 ± 14% -100.0% 0.00 ± -1% sched_debug.cpu.load.min
16452 ± 50% +2477.0% 423970 ± 1% sched_debug.cpu.load.stddev
512284 ± 1% +42.2% 728214 ± 5% sched_debug.cpu.max_idle_balance_cost.max
1026 ± 62% +3364.7% 35570 ± 5% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 19% -34.1% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
235853 ± 0% +40.8% 332030 ± 0% sched_debug.cpu.nr_load_updates.avg
242944 ± 0% +55.1% 376822 ± 0% sched_debug.cpu.nr_load_updates.max
230088 ± 0% +25.5% 288846 ± 0% sched_debug.cpu.nr_load_updates.min
3297 ± 1% +1118.8% 40194 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.79 ± 12% -23.6% 0.60 ± 3% sched_debug.cpu.nr_running.avg
0.39 ± 14% -100.0% 0.00 ± -1% sched_debug.cpu.nr_running.min
0.29 ± 28% +119.2% 0.64 ± 3% sched_debug.cpu.nr_running.stddev
12486 ± 2% +9429.1% 1189888 ± 0% sched_debug.cpu.nr_switches.avg
139358 ± 11% +1029.4% 1573851 ± 0% sched_debug.cpu.nr_switches.max
1652 ± 7% +49792.2% 824412 ± 0% sched_debug.cpu.nr_switches.min
22570 ± 6% +1314.2% 319200 ± 0% sched_debug.cpu.nr_switches.stddev
0.00 ± 42% +47866.1% 1.57 ± 4% sched_debug.cpu.nr_uninterruptible.avg
12.42 ± 22% +68644.7% 8535 ± 0% sched_debug.cpu.nr_uninterruptible.max
-15.08 ±-27% +59252.4% -8952 ± -2% sched_debug.cpu.nr_uninterruptible.min
4.38 ± 11% +1.8e+05% 7864 ± 0% sched_debug.cpu.nr_uninterruptible.stddev
12908 ± 2% +9123.8% 1190660 ± 0% sched_debug.cpu.sched_count.avg
155343 ± 11% +930.8% 1601227 ± 1% sched_debug.cpu.sched_count.max
1482 ± 7% +55508.2% 824530 ± 0% sched_debug.cpu.sched_count.min
24403 ± 5% +1211.1% 319962 ± 0% sched_debug.cpu.sched_count.stddev
732.59 ± 2% +70220.5% 515160 ± 0% sched_debug.cpu.sched_goidle.avg
9711 ± 21% +7111.8% 700365 ± 0% sched_debug.cpu.sched_goidle.max
141.94 ± 9% +2.4e+05% 338951 ± 0% sched_debug.cpu.sched_goidle.min
1043 ± 13% +14535.0% 152743 ± 0% sched_debug.cpu.sched_goidle.stddev
5761 ± 2% +10460.0% 608462 ± 0% sched_debug.cpu.ttwu_count.avg
67879 ± 9% +957.4% 717750 ± 0% sched_debug.cpu.ttwu_count.max
681.25 ± 3% +74539.0% 508478 ± 0% sched_debug.cpu.ttwu_count.min
10967 ± 6% +674.4% 84933 ± 0% sched_debug.cpu.ttwu_count.stddev
4609 ± 4% +1430.2% 70534 ± 0% sched_debug.cpu.ttwu_local.avg
64856 ± 10% +30.3% 84501 ± 0% sched_debug.cpu.ttwu_local.max
377.81 ± 3% +15018.1% 57117 ± 0% sched_debug.cpu.ttwu_local.min
290531 ± 0% +62.4% 471892 ± 0% sched_debug.cpu_clk
287233 ± 1% +63.5% 469728 ± 0% sched_debug.ktime
0.11 ± 0% -40.0% 0.07 ± 0% sched_debug.rt_rq:/.rt_nr_running.max
0.01 ± 0% -35.1% 0.01 ± 15% sched_debug.rt_rq:/.rt_nr_running.stddev
0.03 ± 25% -62.4% 0.01 ± 22% sched_debug.rt_rq:/.rt_time.avg
2.82 ± 8% -61.3% 1.09 ± 30% sched_debug.rt_rq:/.rt_time.max
0.25 ± 14% -61.2% 0.10 ± 26% sched_debug.rt_rq:/.rt_time.stddev
290531 ± 0% +62.4% 471892 ± 0% sched_debug.sched_clk
vm-scalability.throughput
8e+07 *+--------------------------------*----------------*----------------*
| .. . |
7e+07 ++ . .. |
6e+07 ++ . . |
| . . |
5e+07 ++ . . |
O .. O . O O O
4e+07 ++ . .. |
| . . |
3e+07 ++ . . |
2e+07 ++ . . |
| .. . |
1e+07 ++ . .. |
| . |
0 ++---------------*--------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/size/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/1/debian-x86_64-2015-02-07.cgz/300s/8T/lkp-hsw-ep2/anon-w-seq/vm-scalability
commit:
v4.7-rc7
8296d37010 ("Speeds up clearing huge pages using work queue")
v4.7-rc7 8296d3701014cd804bf981b5e2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
2541668 ± 0% +27.7% 3245849 ± 0% vm-scalability.throughput
1505 ± 8% +81222.9% 1224180 ± 0% vm-scalability.time.involuntary_context_switches
7675912 ± 4% -74.1% 1988431 ± 0% vm-scalability.time.minor_page_faults
99.00 ± 0% -28.3% 71.00 ± -1% vm-scalability.time.percent_of_cpu_this_job_got
415.33 ± 0% -93.1% 28.47 ± -3% vm-scalability.time.system_time
466.24 ± 0% +27.9% 596.13 ± 0% vm-scalability.time.user_time
909.67 ± 2% +1.3e+05% 1223534 ± 0% vm-scalability.time.voluntary_context_switches
18262 ± 10% -19.1% 14781 ± 0% interrupts.CAL:Function_call_interrupts
162441 ± 5% -9.7% 146748 ± 0% meminfo.DirectMap4k
1145 ± 4% +2149.1% 25760 ± 0% vmstat.system.cs
0.00 ± -1% +Inf% 1222419 ± 0% latency_stats.hits.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 2.383e+08 ± 0% latency_stats.sum.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
94072 ± 19% +499.5% 563946 ± 0% softirqs.RCU
54289 ± 2% +1131.0% 668289 ± 0% softirqs.SCHED
742038 ± 0% +38.0% 1024212 ± 0% softirqs.TIMER
4039131 ± 6% -99.6% 14187 ± 0% numa-numastat.node0.numa_foreign
2527978 ± 4% -71.2% 726885 ± 0% numa-numastat.node0.numa_miss
1788832 ± 0% +26.3% 2260104 ± 0% numa-numastat.node1.local_node
2529795 ± 4% -71.8% 713317 ± 0% numa-numastat.node1.numa_foreign
1788832 ± 0% +26.3% 2260105 ± 0% numa-numastat.node1.numa_hit
4040948 ± 6% -100.0% 619.00 ± 0% numa-numastat.node1.numa_miss
1552444 ± 21% +156.3% 3978652 ± 0% cpuidle.C1-HSW.time
185441 ± 6% +202.9% 561626 ± 0% cpuidle.C1-HSW.usage
4576536 ± 11% +4216.5% 1.975e+08 ± 0% cpuidle.C1E-HSW.time
24584 ± 16% +6947.3% 1732523 ± 0% cpuidle.C1E-HSW.usage
19455172 ± 5% +15021.2% 2.942e+09 ± 0% cpuidle.C3-HSW.time
63967 ± 3% +11258.7% 7265839 ± 0% cpuidle.C3-HSW.usage
1.596e+08 ± 19% -27.9% 1.152e+08 ± 0% cpuidle.POLL.time
4311 ± 7% +396.5% 21409 ± 0% cpuidle.POLL.usage
953.33 ± 5% +70.8% 1628 ± 0% slabinfo.blkdev_requests.active_objs
953.33 ± 5% +70.8% 1628 ± 0% slabinfo.blkdev_requests.num_objs
19293 ± 2% +17.0% 22580 ± 0% slabinfo.kmalloc-256.active_objs
20002 ± 2% +19.7% 23941 ± 0% slabinfo.kmalloc-256.num_objs
15084 ± 7% +12.5% 16972 ± 0% slabinfo.kmalloc-512.active_objs
15222 ± 7% +11.8% 17014 ± 0% slabinfo.kmalloc-512.num_objs
1385 ± 4% +30.3% 1805 ± 0% slabinfo.mnt_cache.active_objs
1385 ± 4% +30.3% 1805 ± 0% slabinfo.mnt_cache.num_objs
5.28 ± 1% +7.2% 5.66 ±-17% turbostat.%Busy
125.33 ± 1% +9.3% 137.00 ± 0% turbostat.Avg_MHz
24.31 ± 0% +27.4% 30.97 ± -3% turbostat.CPU%c1
0.02 ± 0% +19350.0% 3.89 ±-25% turbostat.CPU%c3
70.38 ± 0% -15.5% 59.48 ± -1% turbostat.CPU%c6
31.62 ± 1% -31.5% 21.67 ± -4% turbostat.Pkg%pc2
96.45 ± 0% +6.4% 102.66 ± 0% turbostat.PkgWatt
35.36 ± 1% +9.2% 38.61 ± -2% turbostat.RAMWatt
6639740 ± 6% -90.2% 647733 ± 0% proc-vmstat.numa_foreign
3542448 ± 0% +15.2% 4082184 ± 0% proc-vmstat.numa_hit
919505 ± 0% -20.6% 730368 ± 0% proc-vmstat.numa_huge_pte_updates
3542447 ± 0% +15.2% 4082183 ± 0% proc-vmstat.numa_local
6639740 ± 6% -90.2% 647733 ± 0% proc-vmstat.numa_miss
4.73e+08 ± 0% -20.9% 3.74e+08 ± 0% proc-vmstat.numa_pte_updates
5894825 ± 3% -18.7% 4793507 ± 0% proc-vmstat.pgalloc_dma32
4.914e+08 ± 0% +27.1% 6.247e+08 ± 0% proc-vmstat.pgalloc_normal
9454924 ± 4% -60.4% 3746598 ± 0% proc-vmstat.pgfault
4.808e+08 ± 0% +27.5% 6.13e+08 ± 0% proc-vmstat.pgfree
921343 ± 0% +29.2% 1190559 ± 0% proc-vmstat.thp_deferred_split_page
953086 ± 0% +28.3% 1222419 ± 0% proc-vmstat.thp_fault_alloc
12910 ± 6% -90.5% 1232 ± 0% proc-vmstat.thp_fault_fallback
15951920 ± 3% -33.0% 10681427 ± 0% numa-meminfo.node0.Active
15937037 ± 3% -33.1% 10666883 ± 0% numa-meminfo.node0.Active(anon)
15854293 ± 3% -33.0% 10628859 ± 0% numa-meminfo.node0.AnonHugePages
15914382 ± 3% -33.1% 10644541 ± 0% numa-meminfo.node0.AnonPages
10390 ± 25% -35.9% 6658 ± 0% numa-meminfo.node0.Mapped
49241735 ± 1% +10.8% 54576665 ± 0% numa-meminfo.node0.MemFree
16644983 ± 3% -32.1% 11310054 ± 0% numa-meminfo.node0.MemUsed
32726 ± 3% -30.0% 22908 ± 0% numa-meminfo.node0.PageTables
57921 ± 7% +15.2% 66713 ± 0% numa-meminfo.node0.SUnreclaim
17135973 ± 2% +29.0% 22108234 ± 0% numa-meminfo.node1.Active
17121188 ± 2% +29.0% 22093251 ± 0% numa-meminfo.node1.Active(anon)
17065962 ± 2% +29.1% 22037536 ± 0% numa-meminfo.node1.AnonHugePages
17112745 ± 2% +28.8% 22046594 ± 0% numa-meminfo.node1.AnonPages
48333766 ± 0% -10.4% 43287833 ± 0% numa-meminfo.node1.MemFree
17696485 ± 2% +28.5% 22742414 ± 0% numa-meminfo.node1.MemUsed
34985 ± 2% +26.9% 44397 ± 0% numa-meminfo.node1.PageTables
29141 ± 2% -9.3% 26425 ± 0% numa-meminfo.node1.SReclaimable
3984492 ± 3% -33.1% 2666937 ± 0% numa-vmstat.node0.nr_active_anon
3979198 ± 3% -33.1% 2661266 ± 0% numa-vmstat.node0.nr_anon_pages
7741 ± 3% -33.0% 5190 ± 0% numa-vmstat.node0.nr_anon_transparent_hugepages
12310193 ± 1% +10.8% 13643946 ± 0% numa-vmstat.node0.nr_free_pages
2597 ± 25% -35.9% 1664 ± 0% numa-vmstat.node0.nr_mapped
8184 ± 3% -30.0% 5726 ± 0% numa-vmstat.node0.nr_page_table_pages
14480 ± 7% +15.2% 16678 ± 0% numa-vmstat.node0.nr_slab_unreclaimable
2203159 ± 2% -96.3% 81344 ± 0% numa-vmstat.node0.numa_foreign
1252855 ± 8% -69.9% 376939 ± 0% numa-vmstat.node0.numa_miss
4288953 ± 3% +28.8% 5522754 ± 0% numa-vmstat.node1.nr_active_anon
4282323 ± 3% +28.7% 5510523 ± 0% numa-vmstat.node1.nr_anon_pages
8336 ± 3% +29.0% 10758 ± 0% numa-vmstat.node1.nr_anon_transparent_hugepages
12074772 ± 0% -10.4% 10822526 ± 0% numa-vmstat.node1.nr_free_pages
8751 ± 2% +26.8% 11097 ± 0% numa-vmstat.node1.nr_page_table_pages
7284 ± 2% -9.3% 6606 ± 0% numa-vmstat.node1.nr_slab_reclaimable
1217231 ± 8% -73.4% 324012 ± 0% numa-vmstat.node1.numa_foreign
1146153 ± 1% +23.0% 1410186 ± 0% numa-vmstat.node1.numa_hit
1146153 ± 1% +23.0% 1410185 ± 0% numa-vmstat.node1.numa_local
2167535 ± 2% -98.7% 28416 ± 0% numa-vmstat.node1.numa_miss
7.842e+10 ± 2% +30.3% 1.022e+11 ± 0% perf-stat.L1-dcache-load-misses
9.966e+11 ± 1% +20.1% 1.197e+12 ± 0% perf-stat.L1-dcache-loads
5.952e+11 ± 2% +26.9% 7.553e+11 ± 0% perf-stat.L1-dcache-stores
2.068e+10 ± 0% +16.8% 2.416e+10 ± 0% perf-stat.L1-icache-load-misses
2.776e+08 ± 2% +134.6% 6.513e+08 ± 0% perf-stat.LLC-load-misses
8.58e+09 ± 1% +59.0% 1.364e+10 ± 0% perf-stat.LLC-loads
1.001e+09 ± 3% +52.9% 1.529e+09 ± 0% perf-stat.LLC-store-misses
4.052e+09 ± 0% +56.1% 6.325e+09 ± 0% perf-stat.LLC-stores
1.373e+12 ± 0% +24.7% 1.712e+12 ± 0% perf-stat.branch-instructions
5.29e+09 ± 3% +26.3% 6.68e+09 ± 0% perf-stat.branch-load-misses
1.399e+12 ± 2% +21.2% 1.695e+12 ± 0% perf-stat.branch-loads
5.513e+09 ± 1% +33.4% 7.353e+09 ± 0% perf-stat.branch-misses
2.717e+11 ± 0% +10.2% 2.993e+11 ± 0% perf-stat.bus-cycles
1.352e+09 ± 0% +59.2% 2.152e+09 ± 0% perf-stat.cache-misses
2.986e+10 ± 1% +51.8% 4.533e+10 ± 0% perf-stat.cache-references
1008305 ± 4% +2140.3% 22588837 ± 0% perf-stat.context-switches
6.857e+12 ± 2% +8.6% 7.444e+12 ± 0% perf-stat.cpu-cycles
55660 ± 1% +9.6% 60990 ± 0% perf-stat.cpu-migrations
2.076e+09 ± 0% +22.2% 2.536e+09 ± 0% perf-stat.dTLB-load-misses
9.948e+11 ± 4% +12.1% 1.115e+12 ± 0% perf-stat.dTLB-loads
4.047e+08 ± 0% -1.9% 3.97e+08 ± 0% perf-stat.dTLB-store-misses
6.139e+11 ± 0% +14.0% 6.998e+11 ± 0% perf-stat.dTLB-stores
6.095e+08 ± 0% +6.1% 6.466e+08 ± 0% perf-stat.iTLB-load-misses
3.27e+08 ± 3% +24.0% 4.055e+08 ± 0% perf-stat.iTLB-loads
4.604e+12 ± 2% +25.3% 5.77e+12 ± 0% perf-stat.instructions
9492594 ± 3% -60.0% 3798901 ± 0% perf-stat.minor-faults
38705676 ± 6% +916.0% 3.933e+08 ± 0% perf-stat.node-load-misses
2.455e+08 ± 2% +7.0% 2.627e+08 ± 0% perf-stat.node-loads
25987631 ± 6% +209.5% 80421284 ± 0% perf-stat.node-store-misses
9.802e+08 ± 3% +43.7% 1.408e+09 ± 0% perf-stat.node-stores
9492659 ± 3% -60.0% 3798896 ± 0% perf-stat.page-faults
6.082e+12 ± 1% +14.2% 6.945e+12 ± 0% perf-stat.ref-cycles
24.95 ± 18% -96.4% 0.91 ±-109% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
2.14 ± 8% -27.9% 1.54 ±-64% perf-profile.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.35 ± 9% -21.7% 1.06 ±-94% perf-profile.cycles-pp.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
7.53 ± 8% -19.4% 6.07 ±-16% perf-profile.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_console_drivers.constprop.23.console_unlock.vprintk_emit.vprintk_default.printk
70.21 ± 5% +12.1% 78.73 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
23.39 ± 19% -100.0% 0.00 ± -1% perf-profile.cycles-pp.clear_page_c_e.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 13.46 ± -7% perf-profile.cycles-pp.clear_page_c_e.process_one_work.worker_thread.kthread.ret_from_fork
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.console_unlock.vprintk_emit.vprintk_default.printk.perf_duration_warn
71.86 ± 5% +12.5% 80.87 ± -1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
70.14 ± 5% +12.1% 78.64 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
61.19 ± 5% +17.8% 72.06 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
24.87 ± 18% -96.6% 0.84 ±-119% perf-profile.cycles-pp.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
24.95 ± 18% -96.4% 0.91 ±-109% perf-profile.cycles-pp.do_page_fault.page_fault
24.93 ± 18% -96.5% 0.88 ±-113% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.94 ± 8% -22.4% 2.28 ±-43% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
1.40 ± 8% -20.0% 1.12 ±-89% perf-profile.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
1.92 ± 9% -17.3% 1.59 ±-62% perf-profile.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter
0.00 ± -1% +Inf% 15.67 ± -6% perf-profile.cycles-pp.kthread.ret_from_fork
3.20 ± 8% -22.3% 2.49 ±-40% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
24.96 ± 18% -96.4% 0.91 ±-109% perf-profile.cycles-pp.page_fault
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.89 ± 17% -24.7% 0.67 ±-149% perf-profile.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
6.95 ± 32% +144.6% 17.01 ± -5% perf-profile.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.printk.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
0.00 ± -1% +Inf% 14.75 ± -6% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 15.67 ± -6% perf-profile.cycles-pp.ret_from_fork
7.16 ± 8% -19.3% 5.78 ±-17% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
72.17 ± 5% +12.5% 81.17 ± -1% perf-profile.cycles-pp.start_secondary
1.42 ± 11% -20.4% 1.13 ±-88% perf-profile.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
1.02 ± 8% -24.3% 0.77 ±-129% perf-profile.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.vprintk_default.printk.perf_duration_warn.irq_work_run_list.irq_work_run
0.89 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.vprintk_emit.vprintk_default.printk.perf_duration_warn.irq_work_run_list
0.00 ± -1% +Inf% 15.43 ± -6% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 ± 0% +8e+08% 8.05 ±-12% sched_debug.cfs_rq:/.MIN_vruntime.avg
0.00 ± 0% +5.8e+10% 579.51 ± 0% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 ± 0% +4.6e+24% 67.82 ± -1% sched_debug.cfs_rq:/.MIN_vruntime.stddev
6069 ± 0% +53.5% 9314 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
47113 ± 18% +24.7% 58747 ± 0% sched_debug.cfs_rq:/.exec_clock.max
13592 ± 9% +72.6% 23454 ± 0% sched_debug.cfs_rq:/.load.avg
410493 ± 4% +96.9% 808227 ± 0% sched_debug.cfs_rq:/.load.max
66987 ± 6% +68.7% 112976 ± 0% sched_debug.cfs_rq:/.load.stddev
12.14 ± 3% +45.7% 17.69 ± -5% sched_debug.cfs_rq:/.load_avg.avg
476.40 ± 16% -45.2% 261.07 ± 0% sched_debug.cfs_rq:/.load_avg.max
0.00 ± 0% +8e+08% 8.05 ±-12% sched_debug.cfs_rq:/.max_vruntime.avg
0.00 ± 0% +5.8e+10% 579.51 ± 0% sched_debug.cfs_rq:/.max_vruntime.max
0.00 ± 0% +4.6e+24% 67.82 ± -1% sched_debug.cfs_rq:/.max_vruntime.stddev
36291 ± 8% -52.3% 17303 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
313191 ± 19% -69.5% 95523 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
1743 ± 13% +28.9% 2248 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
63122 ± 7% -75.7% 15355 ± 0% sched_debug.cfs_rq:/.min_vruntime.stddev
0.05 ± 5% +16.5% 0.06 ±-1636% sched_debug.cfs_rq:/.nr_running.avg
0.01 ± 0% +100.0% 0.03 ±-3857% sched_debug.cfs_rq:/.nr_spread_over.avg
0.93 ± 0% +50.0% 1.40 ±-71% sched_debug.cfs_rq:/.nr_spread_over.max
0.11 ± 0% +61.2% 0.18 ±-568% sched_debug.cfs_rq:/.nr_spread_over.stddev
6.26 ± 12% -23.0% 4.82 ±-20% sched_debug.cfs_rq:/.runnable_load_avg.avg
406.09 ± 16% -52.8% 191.87 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
47.79 ± 16% -45.1% 26.24 ± -3% sched_debug.cfs_rq:/.runnable_load_avg.stddev
286209 ± 19% -75.2% 70993 ± 0% sched_debug.cfs_rq:/.spread0.max
63124 ± 7% -75.7% 15355 ± 0% sched_debug.cfs_rq:/.spread0.stddev
22.63 ± 0% +28.5% 29.07 ± -3% sched_debug.cfs_rq:/.util_avg.avg
994.69 ± 0% -26.9% 727.33 ± 0% sched_debug.cfs_rq:/.util_avg.max
120.31 ± 0% -19.2% 97.22 ± -1% sched_debug.cfs_rq:/.util_avg.stddev
652924 ± 4% -70.7% 191467 ± 0% sched_debug.cpu.avg_idle.min
62909 ± 4% +154.2% 159903 ± 0% sched_debug.cpu.avg_idle.stddev
31.29 ± 19% -89.6% 3.27 ±-30% sched_debug.cpu.clock.stddev
31.29 ± 19% -89.6% 3.27 ±-30% sched_debug.cpu.clock_task.stddev
6.24 ± 13% -54.8% 2.82 ±-35% sched_debug.cpu.cpu_load[0].avg
405.91 ± 16% -70.9% 118.27 ± 0% sched_debug.cpu.cpu_load[0].max
47.77 ± 16% -67.4% 15.56 ± -6% sched_debug.cpu.cpu_load[0].stddev
7.95 ± 2% +76.1% 13.99 ± -7% sched_debug.cpu.cpu_load[1].avg
449.78 ± 13% -51.3% 219.20 ± 0% sched_debug.cpu.cpu_load[1].max
7.48 ± 4% +86.9% 13.99 ± -7% sched_debug.cpu.cpu_load[2].avg
435.38 ± 14% -49.9% 218.07 ± 0% sched_debug.cpu.cpu_load[2].max
6.95 ± 7% +103.7% 14.16 ± -7% sched_debug.cpu.cpu_load[3].avg
423.78 ± 15% -48.7% 217.47 ± 0% sched_debug.cpu.cpu_load[3].max
6.44 ± 11% +121.7% 14.29 ± -6% sched_debug.cpu.cpu_load[4].avg
411.18 ± 16% -46.7% 219.20 ± 0% sched_debug.cpu.cpu_load[4].max
13803 ± 11% +50.4% 20766 ± 0% sched_debug.cpu.load.avg
423194 ± 7% +50.1% 635245 ± 0% sched_debug.cpu.load.max
68322 ± 9% +42.2% 97131 ± 0% sched_debug.cpu.load.stddev
0.00 ± 8% -64.0% 0.00 ±-5290916% sched_debug.cpu.next_balance.stddev
13819 ± 1% +614.7% 98776 ± 0% sched_debug.cpu.nr_load_updates.avg
56631 ± 14% +333.7% 245596 ± 0% sched_debug.cpu.nr_load_updates.max
4329 ± 4% +501.6% 26043 ± 0% sched_debug.cpu.nr_load_updates.min
11927 ± 6% +301.9% 47932 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.05 ± 4% +13.4% 0.06 ±-1661% sched_debug.cpu.nr_running.avg
7481 ± 4% +1919.5% 151094 ± 0% sched_debug.cpu.nr_switches.avg
40224 ± 18% +1521.6% 652301 ± 0% sched_debug.cpu.nr_switches.max
868.31 ± 9% +15.1% 999.67 ± 0% sched_debug.cpu.nr_switches.min
7780 ± 13% +2083.9% 169922 ± 0% sched_debug.cpu.nr_switches.stddev
8.00 ± 10% +19.2% 9.53 ±-10% sched_debug.cpu.nr_uninterruptible.max
7550 ± 5% +1903.0% 151224 ± 0% sched_debug.cpu.sched_count.avg
58276 ± 18% +1012.6% 648402 ± 0% sched_debug.cpu.sched_count.max
588.11 ± 12% +20.5% 708.67 ± 0% sched_debug.cpu.sched_count.min
10040 ± 12% +1592.6% 169943 ± 0% sched_debug.cpu.sched_count.stddev
3221 ± 5% +1975.9% 66868 ± 0% sched_debug.cpu.sched_goidle.avg
19376 ± 18% +1466.6% 303542 ± 0% sched_debug.cpu.sched_goidle.max
254.87 ± 10% +20.0% 305.93 ± 0% sched_debug.cpu.sched_goidle.min
3709 ± 14% +1995.9% 77756 ± 0% sched_debug.cpu.sched_goidle.stddev
3119 ± 5% +2302.0% 74926 ± 0% sched_debug.cpu.ttwu_count.avg
21818 ± 32% +2207.3% 503414 ± 0% sched_debug.cpu.ttwu_count.max
3351 ± 23% +2595.9% 90348 ± 0% sched_debug.cpu.ttwu_count.stddev
655.26 ± 4% +1504.1% 10510 ± 0% sched_debug.cpu.ttwu_local.avg
3182 ± 18% +3784.0% 123596 ± 0% sched_debug.cpu.ttwu_local.max
170.62 ± 5% +23.5% 210.67 ± 0% sched_debug.cpu.ttwu_local.min
439.43 ± 0% +3789.0% 17089 ± 0% sched_debug.cpu.ttwu_local.stddev
0.00 ±141% +200.0% 0.01 ±-12817% sched_debug.rt_rq:/.rt_nr_running.stddev
***************************************************************************************************
lkp-bdw-ep2: 88 threads Broadwell-EP with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/8T/lkp-bdw-ep2/anon-w-seq/vm-scalability
commit:
v4.7-rc7
8296d37010 ("Speeds up clearing huge pages using work queue")
v4.7-rc7 8296d3701014cd804bf981b5e2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
0.06 ± 7% -79.8% 0.01 ± 1% vm-scalability.stddev
38146345 ± 0% +33.5% 50940657 ± 0% vm-scalability.throughput
554.56 ± 0% -26.8% 406.16 ± 0% vm-scalability.time.elapsed_time
554.56 ± 0% -26.8% 406.16 ± 0% vm-scalability.time.elapsed_time.max
346652 ± 0% +2921.7% 10474845 ± 0% vm-scalability.time.involuntary_context_switches
7889 ± 0% -56.7% 3415 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
22171 ± 0% -74.4% 5673 ± 0% vm-scalability.time.system_time
21583 ± 0% -62.0% 8200 ± 0% vm-scalability.time.user_time
58669 ± 0% +20490.9% 12080480 ± 0% vm-scalability.time.voluntary_context_switches
20909 ± 5% +13.7% 23776 ± 4% interrupts.CAL:Function_call_interrupts
1.33 ± 35% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
594.79 ± 0% -25.3% 444.24 ± 1% uptime.boot
8150 ± 1% +62.1% 13214 ± 3% uptime.idle
23305202 ± 0% +23.1% 28680490 ± 0% meminfo.AnonHugePages
20033 ± 0% +43.8% 28799 ± 0% meminfo.KernelStack
51997 ± 0% +20.8% 62802 ± 0% meminfo.PageTables
81.33 ± 0% -9.4% 73.67 ± 1% vmstat.procs.r
4618 ± 3% +5168.1% 243282 ± 0% vmstat.system.cs
93458 ± 1% +13.2% 105783 ± 0% vmstat.system.in
16988 ± 1% -14.3% 14557 ± 1% softirqs.NET_RX
4355772 ± 2% +64.9% 7182335 ± 0% softirqs.RCU
303674 ± 0% +1935.3% 6180796 ± 0% softirqs.SCHED
22334603 ± 0% -40.0% 13394325 ± 0% softirqs.TIMER
5747 ± 1% +20.2% 6906 ± 0% numa-vmstat.node0.nr_anon_transparent_hugepages
745.00 ± 1% +34.0% 998.00 ± 1% numa-vmstat.node0.nr_kernel_stack
6543 ± 1% +18.6% 7763 ± 0% numa-vmstat.node0.nr_page_table_pages
23141 ± 3% +10.3% 25535 ± 0% numa-vmstat.node0.nr_slab_unreclaimable
5917 ± 0% +18.7% 7026 ± 0% numa-vmstat.node1.nr_anon_transparent_hugepages
506.67 ± 1% +58.1% 801.00 ± 1% numa-vmstat.node1.nr_kernel_stack
1701 ± 0% +56.3% 2658 ± 25% numa-vmstat.node1.nr_mapped
6763 ± 0% +16.1% 7854 ± 1% numa-vmstat.node1.nr_page_table_pages
90.16 ± 0% -18.3% 73.69 ± 0% turbostat.%Busy
2516 ± 0% -18.8% 2043 ± 0% turbostat.Avg_MHz
5.11 ± 1% +364.6% 23.74 ± 0% turbostat.CPU%c1
0.03 ± 0% +2444.4% 0.76 ± 0% turbostat.CPU%c3
4.71 ± 1% -61.7% 1.80 ± 1% turbostat.CPU%c6
1.69 ± 1% -87.4% 0.21 ± 11% turbostat.Pkg%pc2
252.57 ± 0% +2.8% 259.69 ± 0% turbostat.PkgWatt
132.98 ± 0% -2.4% 129.82 ± 0% turbostat.RAMWatt
7716175 ± 16% +12917.5% 1.004e+09 ± 0% cpuidle.C1-BDW.time
83323 ± 13% +15468.5% 12972137 ± 0% cpuidle.C1-BDW.usage
5597051 ± 8% +17317.8% 9.749e+08 ± 0% cpuidle.C1E-BDW.time
36098 ± 2% +28933.3% 10480645 ± 1% cpuidle.C1E-BDW.usage
46920966 ± 12% +8645.3% 4.103e+09 ± 0% cpuidle.C3-BDW.time
124855 ± 10% +13265.3% 16687376 ± 0% cpuidle.C3-BDW.usage
4.774e+09 ± 1% -29.4% 3.37e+09 ± 0% cpuidle.C6-BDW.time
5056262 ± 1% +25.7% 6355777 ± 0% cpuidle.C6-BDW.usage
1549 ± 11% +6459.9% 101634 ± 1% cpuidle.POLL.usage
11825435 ± 0% +19.9% 14173072 ± 0% numa-meminfo.node0.AnonHugePages
11934 ± 1% +34.0% 15989 ± 1% numa-meminfo.node0.KernelStack
26291 ± 0% +18.4% 31133 ± 1% numa-meminfo.node0.PageTables
92565 ± 4% +10.4% 102148 ± 0% numa-meminfo.node0.SUnreclaim
124198 ± 4% +8.7% 134999 ± 0% numa-meminfo.node0.Slab
12226440 ± 1% +17.9% 14419573 ± 0% numa-meminfo.node1.AnonHugePages
8105 ± 1% +58.2% 12819 ± 1% numa-meminfo.node1.KernelStack
6834 ± 0% +55.4% 10619 ± 24% numa-meminfo.node1.Mapped
27246 ± 1% +15.5% 31478 ± 1% numa-meminfo.node1.PageTables
11383 ± 1% +23.2% 14021 ± 0% proc-vmstat.nr_anon_transparent_hugepages
1251 ± 0% +43.8% 1800 ± 0% proc-vmstat.nr_kernel_stack
13036 ± 1% +20.6% 15719 ± 0% proc-vmstat.nr_page_table_pages
112629 ± 3% -99.6% 415.00 ± 17% proc-vmstat.numa_hint_faults
88148 ± 4% -99.6% 332.00 ± 17% proc-vmstat.numa_hint_faults_local
6378041 ± 0% -100.0% 0.00 ± -1% proc-vmstat.numa_huge_pte_updates
19227 ± 3% -99.8% 32.67 ± 56% proc-vmstat.numa_pages_migrated
3.272e+09 ± 0% -100.0% 668.00 ± 36% proc-vmstat.numa_pte_updates
10408 ± 4% +8.7% 11311 ± 6% proc-vmstat.pgactivate
1190 ± 7% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_fail
19227 ± 3% -99.8% 32.67 ± 56% proc-vmstat.pgmigrate_success
82759 ±117% -99.1% 737.33 ±138% latency_stats.avg.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 996353 ± 0% latency_stats.hits.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1817330 ± 0% latency_stats.hits.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 8326013 ± 0% latency_stats.hits.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
772874 ± 6% -85.9% 109215 ± 2% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 82817 ± 49% latency_stats.max.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 83729 ± 48% latency_stats.max.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
248080 ±117% -99.7% 739.00 ±137% latency_stats.max.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 12202 ± 5% latency_stats.max.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 2.619e+09 ± 0% latency_stats.sum.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
41841 ± 99% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk_anon.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
22928 ± 28% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 4.892e+09 ± 0% latency_stats.sum.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
248278 ±117% -99.7% 759.33 ±132% latency_stats.sum.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 4.98e+09 ± 0% latency_stats.sum.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1145012 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
25922 ± 1% -55.4% 11563 ± 0% slabinfo.kmalloc-256.active_objs
414.00 ± 1% -41.1% 243.67 ± 1% slabinfo.kmalloc-256.active_slabs
26521 ± 1% -41.1% 15632 ± 1% slabinfo.kmalloc-256.num_objs
414.00 ± 1% -41.1% 243.67 ± 1% slabinfo.kmalloc-256.num_slabs
43609 ± 3% +34.5% 58654 ± 0% slabinfo.kmalloc-32.active_objs
340.33 ± 3% +34.5% 457.67 ± 0% slabinfo.kmalloc-32.active_slabs
43620 ± 3% +34.5% 58654 ± 0% slabinfo.kmalloc-32.num_objs
340.33 ± 3% +34.5% 457.67 ± 0% slabinfo.kmalloc-32.num_slabs
19070 ± 3% +33.7% 25496 ± 5% slabinfo.kmalloc-512.active_objs
298.67 ± 3% +33.6% 399.00 ± 5% slabinfo.kmalloc-512.active_slabs
19171 ± 3% +33.4% 25578 ± 5% slabinfo.kmalloc-512.num_objs
298.67 ± 3% +33.6% 399.00 ± 5% slabinfo.kmalloc-512.num_slabs
5418 ± 1% +8.6% 5885 ± 1% slabinfo.sighand_cache.active_objs
5466 ± 1% +8.5% 5931 ± 1% slabinfo.sighand_cache.num_objs
4158 ± 4% +9.9% 4569 ± 1% slabinfo.sock_inode_cache.active_objs
4158 ± 4% +9.9% 4569 ± 1% slabinfo.sock_inode_cache.num_objs
1796 ± 0% +30.4% 2342 ± 0% slabinfo.task_struct.active_objs
606.33 ± 0% +30.3% 790.33 ± 0% slabinfo.task_struct.active_slabs
1821 ± 0% +30.3% 2372 ± 0% slabinfo.task_struct.num_objs
606.33 ± 0% +30.3% 790.33 ± 0% slabinfo.task_struct.num_slabs
5.736e+11 ± 0% +4.8% 6.008e+11 ± 0% perf-stat.L1-dcache-load-misses
6.828e+12 ± 0% +33.2% 9.097e+12 ± 0% perf-stat.L1-dcache-loads
2.914e+12 ± 0% +6.0% 3.088e+12 ± 0% perf-stat.L1-dcache-stores
7.84e+09 ± 0% +174.9% 2.155e+10 ± 0% perf-stat.L1-icache-load-misses
3.023e+09 ± 0% +118.5% 6.604e+09 ± 0% perf-stat.LLC-load-misses
1.05e+10 ± 0% +145.8% 2.581e+10 ± 0% perf-stat.LLC-loads
5.669e+10 ± 0% -41.8% 3.3e+10 ± 0% perf-stat.LLC-store-misses
4.157e+11 ± 0% -15.6% 3.508e+11 ± 0% perf-stat.LLC-stores
1.117e+13 ± 0% +19.6% 1.335e+13 ± 0% perf-stat.branch-instructions
1.009e+09 ± 0% +422.6% 5.272e+09 ± 0% perf-stat.branch-load-misses
1.117e+13 ± 0% +19.7% 1.336e+13 ± 0% perf-stat.branch-loads
1.018e+09 ± 0% +420.5% 5.299e+09 ± 0% perf-stat.branch-misses
4.325e+12 ± 0% -40.4% 2.576e+12 ± 0% perf-stat.bus-cycles
1.64e+11 ± 0% -32.9% 1.101e+11 ± 0% perf-stat.cache-misses
6.914e+11 ± 0% -4.8% 6.583e+11 ± 0% perf-stat.cache-references
2556163 ± 4% +3779.8% 99173855 ± 0% perf-stat.context-switches
1.21e+14 ± 0% -40.6% 7.181e+13 ± 0% perf-stat.cpu-cycles
129774 ± 0% +6200.0% 8175820 ± 0% perf-stat.cpu-migrations
2.377e+08 ± 0% +306.5% 9.664e+08 ± 2% perf-stat.dTLB-load-misses
6.831e+12 ± 0% +33.1% 9.093e+12 ± 0% perf-stat.dTLB-loads
1.3e+08 ± 0% +63.0% 2.119e+08 ± 0% perf-stat.dTLB-store-misses
2.917e+12 ± 0% +5.9% 3.088e+12 ± 0% perf-stat.dTLB-stores
1.66e+08 ± 1% -4.0% 1.594e+08 ± 0% perf-stat.iTLB-load-misses
63264319 ± 1% +870.0% 6.136e+08 ± 0% perf-stat.iTLB-loads
3.594e+13 ± 0% +22.1% 4.39e+13 ± 0% perf-stat.instructions
22353737 ± 0% -1.5% 22022473 ± 0% perf-stat.minor-faults
4.773e+08 ± 1% +470.9% 2.725e+09 ± 0% perf-stat.node-load-misses
2.535e+09 ± 0% +52.9% 3.877e+09 ± 1% perf-stat.node-loads
1.319e+08 ± 1% +1521.6% 2.139e+09 ± 0% perf-stat.node-store-misses
5.666e+10 ± 0% -45.6% 3.084e+10 ± 0% perf-stat.node-stores
22353972 ± 0% -1.5% 22022566 ± 0% perf-stat.page-faults
9.501e+13 ± 0% -40.3% 5.67e+13 ± 0% perf-stat.ref-cycles
1.06 ± 2% -37.4% 0.66 ± 2% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 11.20 ± 0% perf-profile.cycles-pp.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
10.56 ± 5% -94.8% 0.55 ±141% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
42.60 ± 0% -60.2% 16.94 ± 4% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_unit
0.00 ± -1% +Inf% 4.53 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 5.26 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 3.15 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 5.97 ± 1% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread
1.07 ± 2% -37.7% 0.67 ± 2% perf-profile.cycles-pp.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 6.42 ± 1% perf-profile.cycles-pp.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
1.67 ± 12% +831.7% 15.59 ± 1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
1.69 ± 1% +873.4% 16.48 ± 0% perf-profile.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 1.36 ± 1% perf-profile.cycles-pp.clear_mem.process_one_work.worker_thread.kthread.ret_from_fork
48.62 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.clear_page_c_e.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 28.49 ± 2% perf-profile.cycles-pp.clear_page_c_e.process_one_work.worker_thread.kthread.ret_from_fork
1.69 ± 12% +857.6% 16.18 ± 1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
1.67 ± 13% +833.5% 15.59 ± 1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.64 ± 13% +841.8% 15.48 ± 1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 4.00 ± 3% perf-profile.cycles-pp.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
52.83 ± 0% -67.2% 17.32 ± 0% perf-profile.cycles-pp.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
10.56 ± 5% -94.8% 0.55 ±141% perf-profile.cycles-pp.do_page_fault.page_fault
42.60 ± 0% -60.2% 16.95 ± 4% perf-profile.cycles-pp.do_page_fault.page_fault.do_unit
86.70 ± 0% -49.5% 43.82 ± 3% perf-profile.cycles-pp.do_unit
1.05 ± 1% -37.6% 0.65 ± 2% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault
10.53 ± 5% -94.8% 0.54 ±141% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
42.49 ± 0% -60.3% 16.88 ± 4% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_unit
1.63 ± 13% +842.4% 15.39 ± 1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.00 ± -1% +Inf% 38.24 ± 1% perf-profile.cycles-pp.kthread.ret_from_fork
0.00 ± -1% +Inf% 4.54 ± 4% perf-profile.cycles-pp.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 5.26 ± 4% perf-profile.cycles-pp.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 3.16 ± 4% perf-profile.cycles-pp.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 5.98 ± 1% perf-profile.cycles-pp.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread
0.00 ± -1% +Inf% 4.52 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 5.24 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key
0.00 ± -1% +Inf% 3.13 ± 3% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page
0.00 ± -1% +Inf% 5.96 ± 1% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work
0.00 ± -1% +Inf% 4.27 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key
0.00 ± -1% +Inf% 4.94 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs
0.00 ± -1% +Inf% 2.96 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue
0.00 ± -1% +Inf% 5.69 ± 1% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn
10.56 ± 5% -94.8% 0.55 ±141% perf-profile.cycles-pp.page_fault
42.60 ± 0% -60.2% 16.95 ± 4% perf-profile.cycles-pp.page_fault.do_unit
0.00 ± -1% +Inf% 37.43 ± 1% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 6.06 ± 1% perf-profile.cycles-pp.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 38.24 ± 1% perf-profile.cycles-pp.ret_from_fork
1.69 ± 12% +858.6% 16.20 ± 1% perf-profile.cycles-pp.start_secondary
0.00 ± -1% +Inf% 38.08 ± 1% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
5.82 ± 86% +7.5e+06% 433918 ± 21% sched_debug.cfs_rq:/.MIN_vruntime.avg
353.62 ± 73% +1.1e+06% 3889704 ± 2% sched_debug.cfs_rq:/.MIN_vruntime.max
43.10 ± 77% +2.7e+06% 1183052 ± 12% sched_debug.cfs_rq:/.MIN_vruntime.stddev
242739 ± 0% -46.6% 129743 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
245391 ± 0% -44.4% 136382 ± 0% sched_debug.cfs_rq:/.exec_clock.max
240090 ± 0% -48.8% 122991 ± 0% sched_debug.cfs_rq:/.exec_clock.min
1116 ± 9% +435.8% 5983 ± 0% sched_debug.cfs_rq:/.exec_clock.stddev
12350 ± 10% +2254.6% 290808 ± 9% sched_debug.cfs_rq:/.load.avg
80177 ± 80% +1634.2% 1390400 ± 22% sched_debug.cfs_rq:/.load.max
5380 ± 19% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.load.min
12011 ± 72% +3401.2% 420562 ± 10% sched_debug.cfs_rq:/.load.stddev
18.85 ± 21% +1747.5% 348.21 ± 3% sched_debug.cfs_rq:/.load_avg.avg
175.43 ± 14% +626.6% 1274 ± 9% sched_debug.cfs_rq:/.load_avg.max
5.07 ± 14% +501.5% 30.48 ± 9% sched_debug.cfs_rq:/.load_avg.min
33.38 ± 10% +787.0% 296.05 ± 6% sched_debug.cfs_rq:/.load_avg.stddev
5.82 ± 86% +7.5e+06% 433918 ± 21% sched_debug.cfs_rq:/.max_vruntime.avg
353.62 ± 73% +1.1e+06% 3889704 ± 2% sched_debug.cfs_rq:/.max_vruntime.max
43.10 ± 77% +2.7e+06% 1183052 ± 12% sched_debug.cfs_rq:/.max_vruntime.stddev
21470799 ± 0% -81.9% 3882202 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
21680860 ± 0% -81.5% 4000633 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
21243753 ± 0% -82.2% 3771272 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
91646 ± 5% -41.9% 53207 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
1.03 ± 4% +84.3% 1.90 ± 7% sched_debug.cfs_rq:/.nr_running.max
0.47 ± 20% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_running.min
0.17 ± 8% +210.4% 0.54 ± 6% sched_debug.cfs_rq:/.nr_running.stddev
1.67 ± 1% +283.5% 6.42 ± 0% sched_debug.cfs_rq:/.nr_spread_over.avg
2.41 ± 0% +25.7% 3.03 ± 3% sched_debug.cfs_rq:/.nr_spread_over.stddev
0.00 ±141% +2.9e+06% 11.02 ± 29% sched_debug.cfs_rq:/.removed_load_avg.avg
0.03 ±141% +1.3e+06% 432.19 ± 14% sched_debug.cfs_rq:/.removed_load_avg.max
0.00 ±141% +1.7e+06% 60.51 ± 19% sched_debug.cfs_rq:/.removed_load_avg.stddev
0.00 ±141% +2.7e+06% 10.17 ± 29% sched_debug.cfs_rq:/.removed_util_avg.avg
0.03 ±141% +1.2e+06% 408.62 ± 14% sched_debug.cfs_rq:/.removed_util_avg.max
0.00 ±141% +1.6e+06% 56.60 ± 19% sched_debug.cfs_rq:/.removed_util_avg.stddev
9.15 ± 2% +1211.4% 120.00 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.avg
30.43 ± 9% +2319.7% 736.38 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.max
4.47 ± 22% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.runnable_load_avg.min
4.26 ± 2% +4373.5% 190.39 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-62769 ±-29% -249.1% 93587 ± 24% sched_debug.cfs_rq:/.spread0.avg
149462 ± 19% +41.8% 211876 ± 14% sched_debug.cfs_rq:/.spread0.max
-291684 ±-11% -94.1% -17117 ±-87% sched_debug.cfs_rq:/.spread0.min
92632 ± 4% -42.7% 53106 ± 7% sched_debug.cfs_rq:/.spread0.stddev
812.83 ± 2% -18.5% 662.06 ± 3% sched_debug.cfs_rq:/.util_avg.avg
1008 ± 0% +48.2% 1494 ± 8% sched_debug.cfs_rq:/.util_avg.max
495.03 ± 14% -54.3% 226.14 ± 3% sched_debug.cfs_rq:/.util_avg.min
123.98 ± 17% +103.6% 252.44 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
982698 ± 0% -57.2% 420240 ± 4% sched_debug.cpu.avg_idle.avg
726070 ± 4% -93.5% 47424 ± 46% sched_debug.cpu.avg_idle.min
54806 ± 17% +285.3% 211164 ± 7% sched_debug.cpu.avg_idle.stddev
313402 ± 0% -29.7% 220202 ± 2% sched_debug.cpu.clock.avg
313450 ± 0% -29.7% 220215 ± 2% sched_debug.cpu.clock.max
313351 ± 0% -29.7% 220187 ± 2% sched_debug.cpu.clock.min
28.21 ± 9% -72.9% 7.65 ± 10% sched_debug.cpu.clock.stddev
313402 ± 0% -29.7% 220202 ± 2% sched_debug.cpu.clock_task.avg
313450 ± 0% -29.7% 220215 ± 2% sched_debug.cpu.clock_task.max
313351 ± 0% -29.7% 220187 ± 2% sched_debug.cpu.clock_task.min
28.21 ± 9% -72.9% 7.65 ± 10% sched_debug.cpu.clock_task.stddev
9.14 ± 3% +938.7% 94.99 ± 11% sched_debug.cpu.cpu_load[0].avg
30.60 ± 10% +2355.2% 751.29 ± 5% sched_debug.cpu.cpu_load[0].max
4.47 ± 22% -100.0% 0.00 ± -1% sched_debug.cpu.cpu_load[0].min
4.27 ± 2% +3974.1% 174.13 ± 7% sched_debug.cpu.cpu_load[0].stddev
11.26 ± 19% +1493.3% 179.38 ± 0% sched_debug.cpu.cpu_load[1].avg
78.57 ± 43% +882.2% 771.71 ± 5% sched_debug.cpu.cpu_load[1].max
12.46 ± 51% +1510.5% 200.73 ± 4% sched_debug.cpu.cpu_load[1].stddev
10.72 ± 14% +1531.0% 174.82 ± 0% sched_debug.cpu.cpu_load[2].avg
65.97 ± 21% +939.0% 685.38 ± 6% sched_debug.cpu.cpu_load[2].max
4.50 ± 22% +37.6% 6.19 ± 21% sched_debug.cpu.cpu_load[2].min
10.00 ± 34% +1618.8% 171.82 ± 3% sched_debug.cpu.cpu_load[2].stddev
10.20 ± 10% +1571.8% 170.55 ± 0% sched_debug.cpu.cpu_load[3].avg
54.53 ± 1% +987.1% 592.81 ± 3% sched_debug.cpu.cpu_load[3].max
4.57 ± 20% +101.3% 9.19 ± 24% sched_debug.cpu.cpu_load[3].min
7.90 ± 21% +1707.9% 142.78 ± 3% sched_debug.cpu.cpu_load[3].stddev
9.78 ± 7% +1609.8% 167.16 ± 1% sched_debug.cpu.cpu_load[4].avg
41.93 ± 8% +1118.3% 510.86 ± 0% sched_debug.cpu.cpu_load[4].max
4.63 ± 20% +166.2% 12.33 ± 26% sched_debug.cpu.cpu_load[4].min
6.14 ± 15% +1765.6% 114.47 ± 3% sched_debug.cpu.cpu_load[4].stddev
18022 ± 3% -56.7% 7803 ± 10% sched_debug.cpu.curr->pid.avg
20735 ± 0% -13.0% 18049 ± 0% sched_debug.cpu.curr->pid.max
4933 ± 28% -100.0% 0.00 ± -1% sched_debug.cpu.curr->pid.min
3449 ± 3% +136.8% 8168 ± 3% sched_debug.cpu.curr->pid.stddev
12358 ± 10% +2281.6% 294330 ± 7% sched_debug.cpu.load.avg
80177 ± 80% +1695.8% 1439829 ± 24% sched_debug.cpu.load.max
5383 ± 19% -100.0% 0.00 ± -1% sched_debug.cpu.load.min
12027 ± 72% +3401.2% 421111 ± 10% sched_debug.cpu.load.stddev
254203 ± 0% -32.7% 171120 ± 0% sched_debug.cpu.nr_load_updates.avg
259100 ± 0% -30.5% 180126 ± 0% sched_debug.cpu.nr_load_updates.max
249692 ± 0% -34.7% 162940 ± 0% sched_debug.cpu.nr_load_updates.min
2423 ± 2% +160.1% 6304 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.47 ± 20% -100.0% 0.00 ± -1% sched_debug.cpu.nr_running.min
0.42 ± 22% +42.6% 0.59 ± 10% sched_debug.cpu.nr_running.stddev
14938 ± 3% +3239.8% 498931 ± 0% sched_debug.cpu.nr_switches.avg
91655 ± 10% +499.6% 549576 ± 0% sched_debug.cpu.nr_switches.max
3516 ± 3% +12749.6% 451805 ± 0% sched_debug.cpu.nr_switches.min
17448 ± 8% +105.0% 35768 ± 0% sched_debug.cpu.nr_switches.stddev
0.00 ± 81% +43995.2% 0.50 ± 10% sched_debug.cpu.nr_uninterruptible.avg
6.50 ± 10% +38104.4% 2483 ± 1% sched_debug.cpu.nr_uninterruptible.max
-19.30 ±-23% +12566.7% -2444 ± -2% sched_debug.cpu.nr_uninterruptible.min
3.80 ± 2% +57355.5% 2185 ± 0% sched_debug.cpu.nr_uninterruptible.stddev
15341 ± 4% +3155.1% 499372 ± 0% sched_debug.cpu.sched_count.avg
92611 ± 11% +537.2% 590150 ± 7% sched_debug.cpu.sched_count.max
3410 ± 5% +13156.9% 452059 ± 0% sched_debug.cpu.sched_count.min
18186 ± 9% +102.9% 36899 ± 5% sched_debug.cpu.sched_count.stddev
1499 ± 5% +12195.5% 184417 ± 0% sched_debug.cpu.sched_goidle.avg
6462 ± 3% +3101.8% 206908 ± 0% sched_debug.cpu.sched_goidle.max
462.57 ± 2% +34779.3% 161340 ± 0% sched_debug.cpu.sched_goidle.min
1063 ± 7% +1457.4% 16556 ± 0% sched_debug.cpu.sched_goidle.stddev
6820 ± 3% +3714.4% 260157 ± 0% sched_debug.cpu.ttwu_count.avg
44755 ± 11% +535.6% 284465 ± 0% sched_debug.cpu.ttwu_count.max
1365 ± 9% +17285.7% 237459 ± 0% sched_debug.cpu.ttwu_count.min
8587 ± 9% +105.9% 17678 ± 0% sched_debug.cpu.ttwu_count.stddev
4993 ± 4% +1002.1% 55032 ± 0% sched_debug.cpu.ttwu_local.avg
42198 ± 8% +41.4% 59677 ± 0% sched_debug.cpu.ttwu_local.max
713.23 ± 9% +6994.9% 50603 ± 0% sched_debug.cpu.ttwu_local.min
8336 ± 8% -67.6% 2699 ± 0% sched_debug.cpu.ttwu_local.stddev
313348 ± 0% -29.7% 220186 ± 2% sched_debug.cpu_clk
309751 ± 0% -29.9% 217158 ± 2% sched_debug.ktime
0.10 ± 0% +42.9% 0.14 ± 0% sched_debug.rt_rq:/.rt_nr_running.max
0.01 ± 16% +42.9% 0.02 ± 16% sched_debug.rt_rq:/.rt_nr_running.stddev
313348 ± 0% -29.7% 220186 ± 2% sched_debug.sched_clk
***************************************************************************************************
lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/8T/lkp-hsw-ep2/anon-w-seq/vm-scalability
commit:
v4.7-rc7
8296d37010 ("Speeds up clearing huge pages using work queue")
v4.7-rc7 8296d3701014cd804bf981b5e2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
0.03 ± 20% -70.6% 0.01 ± 2% vm-scalability.stddev
35657782 ± 0% +45.6% 51931484 ± 0% vm-scalability.throughput
572.79 ± 0% -31.1% 394.69 ± 0% vm-scalability.time.elapsed_time
572.79 ± 0% -31.1% 394.69 ± 0% vm-scalability.time.elapsed_time.max
328341 ± 0% +3328.2% 11256045 ± 0% vm-scalability.time.involuntary_context_switches
6719 ± 0% -59.6% 2715 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
18939 ± 0% -84.7% 2897 ± 0% vm-scalability.time.system_time
19554 ± 0% -60.0% 7821 ± 0% vm-scalability.time.user_time
41413 ± 0% +26399.0% 10974014 ± 0% vm-scalability.time.voluntary_context_switches
3201 ± 70% +118.5% 6996 ± 29% numa-numastat.node1.numa_foreign
3201 ± 70% +118.5% 6996 ± 29% numa-numastat.node1.numa_miss
611.78 ± 0% -29.1% 433.60 ± 0% uptime.boot
5217 ± 0% +92.8% 10060 ± 0% uptime.idle
4266 ± 8% +4762.3% 207457 ± 0% vmstat.system.cs
73766 ± 0% +11.8% 82478 ± 0% vmstat.system.in
26142997 ± 0% +11.8% 29217123 ± 1% meminfo.AnonHugePages
16939 ± 0% +38.2% 23407 ± 0% meminfo.KernelStack
57079 ± 0% +10.9% 63312 ± 1% meminfo.PageTables
12857 ± 2% -22.5% 9965 ± 2% softirqs.NET_RX
3500329 ± 2% +59.3% 5576495 ± 0% softirqs.RCU
254266 ± 7% +1879.7% 5033646 ± 0% softirqs.SCHED
19643831 ± 0% -44.4% 10917987 ± 0% softirqs.TIMER
13096655 ± 0% +18.8% 15558240 ± 5% numa-meminfo.node0.AnonHugePages
10091 ± 1% +29.4% 13061 ± 2% numa-meminfo.node0.KernelStack
28546 ± 2% +16.4% 33229 ± 5% numa-meminfo.node0.PageTables
13486519 ± 0% +17.3% 15818216 ± 4% numa-meminfo.node1.AnonHugePages
6843 ± 2% +51.0% 10335 ± 3% numa-meminfo.node1.KernelStack
8600 ± 30% +22.9% 10566 ± 25% numa-meminfo.node1.Mapped
29398 ± 1% +16.8% 34329 ± 3% numa-meminfo.node1.PageTables
6390 ± 0% +18.2% 7551 ± 6% numa-vmstat.node0.nr_anon_transparent_hugepages
630.33 ± 1% +29.5% 816.00 ± 3% numa-vmstat.node0.nr_kernel_stack
7136 ± 1% +15.8% 8263 ± 6% numa-vmstat.node0.nr_page_table_pages
6625 ± 0% +16.0% 7683 ± 5% numa-vmstat.node1.nr_anon_transparent_hugepages
427.33 ± 2% +50.8% 644.33 ± 3% numa-vmstat.node1.nr_kernel_stack
2156 ± 30% +22.6% 2644 ± 25% numa-vmstat.node1.nr_mapped
7390 ± 2% +15.6% 8542 ± 5% numa-vmstat.node1.nr_page_table_pages
12871 ± 0% +11.3% 14329 ± 0% proc-vmstat.nr_anon_transparent_hugepages
1058 ± 0% +38.2% 1461 ± 0% proc-vmstat.nr_kernel_stack
19143 ± 0% -98.3% 317.00 ± 16% proc-vmstat.numa_hint_faults
15441 ± 1% -98.3% 260.67 ± 15% proc-vmstat.numa_hint_faults_local
6677608 ± 0% -100.0% 0.00 ± -1% proc-vmstat.numa_huge_pte_updates
3632 ± 3% -99.6% 13.00 ± 22% proc-vmstat.numa_pages_migrated
3.424e+09 ± 0% -100.0% 328.33 ± 15% proc-vmstat.numa_pte_updates
3632 ± 3% -99.6% 13.00 ± 22% proc-vmstat.pgmigrate_success
93.71 ± 0% -18.8% 76.04 ± 0% turbostat.%Busy
2617 ± 0% -18.8% 2124 ± 0% turbostat.Avg_MHz
3.36 ± 3% +540.7% 21.55 ± 0% turbostat.CPU%c1
0.02 ± 0% +3366.7% 0.69 ± 0% turbostat.CPU%c3
2.91 ± 4% -41.2% 1.71 ± 1% turbostat.CPU%c6
0.87 ± 9% -65.8% 0.30 ± 3% turbostat.Pkg%pc2
265.69 ± 0% +4.3% 277.20 ± 0% turbostat.PkgWatt
143.43 ± 0% -1.6% 141.08 ± 0% turbostat.RAMWatt
1337329 ± 16% +51788.7% 6.939e+08 ± 0% cpuidle.C1-HSW.time
87931 ± 23% +10151.1% 9013901 ± 0% cpuidle.C1-HSW.usage
5560533 ± 10% +12553.8% 7.036e+08 ± 0% cpuidle.C1E-HSW.time
19568 ± 7% +39448.8% 7738917 ± 0% cpuidle.C1E-HSW.usage
16491714 ± 1% +17911.8% 2.97e+09 ± 0% cpuidle.C3-HSW.time
49321 ± 1% +23837.4% 11806348 ± 0% cpuidle.C3-HSW.usage
2735962 ± 3% +82.5% 4992792 ± 0% cpuidle.C6-HSW.usage
3336982 ± 19% +299.4% 13327904 ± 1% cpuidle.POLL.time
1537 ± 17% +4546.0% 71408 ± 1% cpuidle.POLL.usage
0.00 ± -1% +Inf% 670539 ± 0% latency_stats.hits.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1429117 ± 0% latency_stats.hits.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 8324214 ± 0% latency_stats.hits.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
742404 ± 14% -86.7% 98838 ± 0% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 41496 ± 6% latency_stats.max.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
15437 ± 13% -99.5% 77.00 ± 28% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 43960 ± 2% latency_stats.max.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 13836 ± 4% latency_stats.max.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1.464e+09 ± 0% latency_stats.sum.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
23344 ± 47% -98.1% 435.00 ± 6% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.931e+09 ± 0% latency_stats.sum.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 5.531e+09 ± 0% latency_stats.sum.flush_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
21445 ± 3% -54.9% 9663 ± 0% slabinfo.kmalloc-256.active_objs
345.67 ± 3% -38.7% 212.00 ± 2% slabinfo.kmalloc-256.active_slabs
22132 ± 3% -38.6% 13593 ± 2% slabinfo.kmalloc-256.num_objs
345.67 ± 3% -38.7% 212.00 ± 2% slabinfo.kmalloc-256.num_slabs
35878 ± 0% +31.6% 47218 ± 3% slabinfo.kmalloc-32.active_objs
35879 ± 0% +31.6% 47218 ± 3% slabinfo.kmalloc-32.num_objs
14274 ± 2% +43.7% 20505 ± 3% slabinfo.kmalloc-512.active_objs
14469 ± 1% +42.6% 20629 ± 2% slabinfo.kmalloc-512.num_objs
4224 ± 0% +12.2% 4742 ± 0% slabinfo.sighand_cache.active_objs
4270 ± 0% +12.1% 4787 ± 0% slabinfo.sighand_cache.num_objs
1482 ± 0% +27.3% 1886 ± 0% slabinfo.task_struct.active_objs
501.67 ± 0% +27.2% 638.33 ± 0% slabinfo.task_struct.active_slabs
1505 ± 0% +27.3% 1916 ± 0% slabinfo.task_struct.num_objs
501.67 ± 0% +27.2% 638.33 ± 0% slabinfo.task_struct.num_slabs
5.721e+11 ± 0% +3.4% 5.915e+11 ± 0% perf-stat.L1-dcache-load-misses
6.767e+12 ± 0% +16.6% 7.89e+12 ± 0% perf-stat.L1-dcache-loads
2.898e+12 ± 0% +5.5% 3.058e+12 ± 0% perf-stat.L1-dcache-stores
8.939e+09 ± 0% +201.2% 2.692e+10 ± 0% perf-stat.L1-icache-load-misses
3.817e+09 ± 0% +101.6% 7.696e+09 ± 0% perf-stat.LLC-load-misses
1.332e+10 ± 0% +95.5% 2.604e+10 ± 0% perf-stat.LLC-loads
4.801e+10 ± 0% -37.7% 2.992e+10 ± 0% perf-stat.LLC-store-misses
1.863e+11 ± 0% -45.0% 1.025e+11 ± 0% perf-stat.LLC-stores
1.108e+13 ± 0% +9.6% 1.214e+13 ± 0% perf-stat.branch-instructions
8.546e+08 ± 0% +449.5% 4.696e+09 ± 0% perf-stat.branch-load-misses
1.112e+13 ± 0% +9.3% 1.216e+13 ± 0% perf-stat.branch-loads
8.867e+08 ± 2% +438.9% 4.778e+09 ± 0% perf-stat.branch-misses
3.797e+12 ± 0% -44.6% 2.102e+12 ± 0% perf-stat.bus-cycles
5.213e+10 ± 0% -27.0% 3.804e+10 ± 0% perf-stat.cache-misses
2.075e+11 ± 0% -27.0% 1.515e+11 ± 0% perf-stat.cache-references
2438791 ± 8% +3271.1% 82214290 ± 0% perf-stat.context-switches
1.063e+14 ± 0% -44.5% 5.894e+13 ± 0% perf-stat.cpu-cycles
109620 ± 0% +7027.7% 7813382 ± 0% perf-stat.cpu-migrations
3.539e+08 ± 12% +225.6% 1.152e+09 ± 24% perf-stat.dTLB-load-misses
6.759e+12 ± 0% +16.8% 7.898e+12 ± 0% perf-stat.dTLB-loads
1.316e+08 ± 4% +103.2% 2.673e+08 ± 20% perf-stat.dTLB-store-misses
2.895e+12 ± 0% +5.7% 3.06e+12 ± 0% perf-stat.dTLB-stores
1.294e+08 ± 1% +6.3% 1.375e+08 ± 0% perf-stat.iTLB-load-misses
47846677 ± 2% +971.0% 5.124e+08 ± 0% perf-stat.iTLB-loads
3.571e+13 ± 0% +10.9% 3.958e+13 ± 0% perf-stat.instructions
5.943e+08 ± 1% +386.3% 2.89e+09 ± 0% perf-stat.node-load-misses
3.221e+09 ± 0% +50.3% 4.84e+09 ± 0% perf-stat.node-loads
1.069e+08 ± 1% +1941.7% 2.182e+09 ± 0% perf-stat.node-store-misses
4.766e+10 ± 0% -41.7% 2.779e+10 ± 0% perf-stat.node-stores
8.724e+13 ± 0% -44.5% 4.842e+13 ± 0% perf-stat.ref-cycles
1.16 ± 1% -37.2% 0.73 ± 5% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 6.53 ± 4% perf-profile.cycles-pp.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
46.49 ± 1% -77.7% 10.39 ± 2% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_unit
0.00 ± -1% +Inf% 2.27 ± 3% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 2.65 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 1.49 ± 4% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 3.67 ± 1% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread
1.17 ± 2% -37.6% 0.73 ± 4% perf-profile.cycles-pp.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 3.69 ± 4% perf-profile.cycles-pp.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
1.35 ± 24% +995.5% 14.75 ± 1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
1.81 ± 1% +465.3% 10.21 ± 0% perf-profile.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 1.66 ± 1% perf-profile.cycles-pp.clear_mem.process_one_work.worker_thread.kthread.ret_from_fork
44.34 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.clear_page_c_e.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 34.13 ± 1% perf-profile.cycles-pp.clear_page_c_e.process_one_work.worker_thread.kthread.ret_from_fork
1.36 ± 24% +1028.4% 15.35 ± 1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
1.35 ± 24% +995.5% 14.75 ± 1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.32 ± 24% +1010.6% 14.66 ± 1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 2.23 ± 3% perf-profile.cycles-pp.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault
48.10 ± 1% -76.7% 11.19 ± 0% perf-profile.cycles-pp.do_huge_pmd_anonymous_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
46.49 ± 1% -77.7% 10.39 ± 2% perf-profile.cycles-pp.do_page_fault.page_fault.do_unit
95.56 ± 1% -57.5% 40.57 ± 2% perf-profile.cycles-pp.do_unit
1.14 ± 2% -37.1% 0.72 ± 5% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.handle_mm_fault
46.37 ± 1% -77.8% 10.31 ± 3% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_unit
1.31 ± 23% +1010.7% 14.59 ± 1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.00 ± -1% +Inf% 41.80 ± 1% perf-profile.cycles-pp.kthread.ret_from_fork
0.00 ± -1% +Inf% 2.28 ± 3% perf-profile.cycles-pp.mutex_lock.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 2.66 ± 4% perf-profile.cycles-pp.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key.clear_huge_page.do_huge_pmd_anonymous_page
0.00 ± -1% +Inf% 1.50 ± 4% perf-profile.cycles-pp.mutex_lock.destroy_workqueue.clear_huge_page.do_huge_pmd_anonymous_page.handle_mm_fault
0.00 ± -1% +Inf% 3.68 ± 1% perf-profile.cycles-pp.mutex_lock.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread
0.00 ± -1% +Inf% 2.26 ± 3% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key.clear_huge_page
0.00 ± -1% +Inf% 2.64 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs.__alloc_workqueue_key
0.00 ± -1% +Inf% 1.46 ± 4% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue.clear_huge_page
0.00 ± -1% +Inf% 3.66 ± 1% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn.process_one_work
0.00 ± -1% +Inf% 2.01 ± 3% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.__alloc_workqueue_key
0.00 ± -1% +Inf% 2.39 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.apply_workqueue_attrs
0.00 ± -1% +Inf% 1.33 ± 4% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.destroy_workqueue
0.00 ± -1% +Inf% 3.39 ± 2% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pwq_unbound_release_workfn
46.49 ± 1% -77.6% 10.39 ± 2% perf-profile.cycles-pp.page_fault.do_unit
0.00 ± -1% +Inf% 41.07 ± 1% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 3.75 ± 1% perf-profile.cycles-pp.pwq_unbound_release_workfn.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 41.80 ± 1% perf-profile.cycles-pp.ret_from_fork
1.36 ± 24% +1029.9% 15.37 ± 1% perf-profile.cycles-pp.start_secondary
0.00 ± -1% +Inf% 41.62 ± 1% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
22409 ±141% +1919.1% 452462 ± 10% sched_debug.cfs_rq:/.MIN_vruntime.avg
808103 ±141% +328.0% 3458796 ± 0% sched_debug.cfs_rq:/.MIN_vruntime.max
132594 ±141% +756.8% 1136087 ± 4% sched_debug.cfs_rq:/.MIN_vruntime.stddev
252239 ± 0% -46.8% 134197 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
253927 ± 0% -44.9% 139915 ± 0% sched_debug.cfs_rq:/.exec_clock.max
250010 ± 0% -48.4% 128892 ± 0% sched_debug.cfs_rq:/.exec_clock.min
805.28 ± 11% +476.1% 4639 ± 0% sched_debug.cfs_rq:/.exec_clock.stddev
14443 ± 5% +2332.1% 351292 ± 2% sched_debug.cfs_rq:/.load.avg
64695 ± 73% +2781.4% 1864129 ± 16% sched_debug.cfs_rq:/.load.max
10818 ± 16% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.load.min
8247 ± 76% +5737.8% 481497 ± 4% sched_debug.cfs_rq:/.load.stddev
21.17 ± 7% +1806.2% 403.63 ± 1% sched_debug.cfs_rq:/.load_avg.avg
318.43 ± 40% +325.9% 1356 ± 2% sched_debug.cfs_rq:/.load_avg.max
10.13 ± 9% +225.2% 32.95 ± 9% sched_debug.cfs_rq:/.load_avg.min
44.45 ± 36% +660.6% 338.08 ± 0% sched_debug.cfs_rq:/.load_avg.stddev
22409 ±141% +1919.1% 452462 ± 10% sched_debug.cfs_rq:/.max_vruntime.avg
808103 ±141% +328.0% 3458798 ± 0% sched_debug.cfs_rq:/.max_vruntime.max
132594 ±141% +756.8% 1136087 ± 4% sched_debug.cfs_rq:/.max_vruntime.stddev
18128511 ± 0% -81.2% 3407497 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
18245060 ± 0% -80.8% 3498104 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
17989949 ± 0% -81.8% 3275951 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
0.90 ± 2% -11.6% 0.79 ± 0% sched_debug.cfs_rq:/.nr_running.avg
1.03 ± 4% +148.8% 2.57 ± 12% sched_debug.cfs_rq:/.nr_running.max
0.77 ± 16% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_running.min
0.09 ± 51% +596.9% 0.60 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
1.75 ± 4% +332.2% 7.55 ± 1% sched_debug.cfs_rq:/.nr_spread_over.avg
13.73 ± 8% +19.6% 16.43 ± 6% sched_debug.cfs_rq:/.nr_spread_over.max
2.31 ± 2% +37.5% 3.18 ± 4% sched_debug.cfs_rq:/.nr_spread_over.stddev
12.22 ± 0% +1198.9% 158.75 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
28.70 ± 5% +2520.7% 752.14 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.max
9.30 ± 15% -99.5% 0.05 ±141% sched_debug.cfs_rq:/.runnable_load_avg.min
3.04 ± 23% +7174.2% 221.37 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.stddev
7245 ±214% +1622.3% 124785 ± 17% sched_debug.cfs_rq:/.spread0.avg
123472 ± 17% +74.3% 215249 ± 9% sched_debug.cfs_rq:/.spread0.max
-132172 ±-31% -95.1% -6489 ±-60% sched_debug.cfs_rq:/.spread0.min
892.46 ± 2% -20.6% 708.40 ± 0% sched_debug.cfs_rq:/.util_avg.avg
1010 ± 0% +57.8% 1594 ± 2% sched_debug.cfs_rq:/.util_avg.max
767.80 ± 13% -70.1% 229.67 ± 11% sched_debug.cfs_rq:/.util_avg.min
59.95 ± 57% +387.2% 292.03 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
971616 ± 0% -59.2% 396870 ± 3% sched_debug.cpu.avg_idle.avg
1000000 ± 0% -11.3% 886566 ± 4% sched_debug.cpu.avg_idle.max
744054 ± 17% -93.5% 48437 ± 25% sched_debug.cpu.avg_idle.min
47455 ± 27% +302.2% 190868 ± 7% sched_debug.cpu.avg_idle.stddev
311238 ± 0% -29.1% 220771 ± 0% sched_debug.cpu.clock.avg
311271 ± 0% -29.1% 220783 ± 0% sched_debug.cpu.clock.max
311199 ± 0% -29.1% 220758 ± 0% sched_debug.cpu.clock.min
21.85 ± 23% -66.2% 7.39 ± 6% sched_debug.cpu.clock.stddev
311238 ± 0% -29.1% 220771 ± 0% sched_debug.cpu.clock_task.avg
311271 ± 0% -29.1% 220783 ± 0% sched_debug.cpu.clock_task.max
311199 ± 0% -29.1% 220758 ± 0% sched_debug.cpu.clock_task.min
21.85 ± 23% -66.2% 7.39 ± 6% sched_debug.cpu.clock_task.stddev
12.23 ± 0% +1045.2% 140.11 ± 3% sched_debug.cpu.cpu_load[0].avg
28.70 ± 5% +2496.2% 745.10 ± 0% sched_debug.cpu.cpu_load[0].max
9.30 ± 15% -99.5% 0.05 ±141% sched_debug.cpu.cpu_load[0].min
3.10 ± 22% +6762.8% 213.04 ± 1% sched_debug.cpu.cpu_load[0].stddev
13.05 ± 3% +1522.6% 211.82 ± 4% sched_debug.cpu.cpu_load[1].avg
58.90 ± 32% +1245.6% 792.57 ± 4% sched_debug.cpu.cpu_load[1].max
9.30 ± 15% -46.7% 4.95 ± 15% sched_debug.cpu.cpu_load[1].min
7.37 ± 38% +2765.4% 211.08 ± 3% sched_debug.cpu.cpu_load[1].stddev
12.87 ± 1% +1527.9% 209.56 ± 2% sched_debug.cpu.cpu_load[2].avg
50.80 ± 8% +1255.4% 688.52 ± 2% sched_debug.cpu.cpu_load[2].max
9.33 ± 15% -19.4% 7.52 ± 3% sched_debug.cpu.cpu_load[2].min
5.98 ± 1% +2866.4% 177.52 ± 2% sched_debug.cpu.cpu_load[2].stddev
12.79 ± 1% +1523.4% 207.56 ± 2% sched_debug.cpu.cpu_load[3].avg
45.87 ± 27% +1177.3% 585.86 ± 3% sched_debug.cpu.cpu_load[3].max
9.47 ± 15% +24.2% 11.76 ± 4% sched_debug.cpu.cpu_load[3].min
5.39 ± 21% +2590.3% 145.12 ± 0% sched_debug.cpu.cpu_load[3].stddev
12.63 ± 1% +1523.4% 205.07 ± 1% sched_debug.cpu.cpu_load[4].avg
40.67 ± 28% +1172.2% 517.38 ± 0% sched_debug.cpu.cpu_load[4].max
9.47 ± 15% +113.8% 20.24 ± 11% sched_debug.cpu.cpu_load[4].min
4.63 ± 27% +2401.7% 115.79 ± 1% sched_debug.cpu.cpu_load[4].stddev
17502 ± 1% -60.3% 6947 ± 2% sched_debug.cpu.curr->pid.avg
18194 ± 0% -11.1% 16177 ± 0% sched_debug.cpu.curr->pid.max
3163 ± 72% -100.0% 0.00 ± -1% sched_debug.cpu.curr->pid.min
2226 ± 27% +230.5% 7360 ± 1% sched_debug.cpu.curr->pid.stddev
15383 ± 0% +2134.6% 343744 ± 5% sched_debug.cpu.load.avg
132690 ± 1% +1116.8% 1614549 ± 11% sched_debug.cpu.load.max
10818 ± 16% -100.0% 0.00 ± -1% sched_debug.cpu.load.min
15898 ± 5% +2824.4% 464941 ± 3% sched_debug.cpu.load.stddev
500000 ± 0% +21.9% 609488 ± 4% sched_debug.cpu.max_idle_balance_cost.max
261592 ± 0% -33.6% 173665 ± 0% sched_debug.cpu.nr_load_updates.avg
265792 ± 0% -31.9% 181000 ± 0% sched_debug.cpu.nr_load_updates.max
257853 ± 0% -35.0% 167567 ± 0% sched_debug.cpu.nr_load_updates.min
1923 ± 3% +144.5% 4701 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.93 ± 3% -11.5% 0.82 ± 1% sched_debug.cpu.nr_running.avg
1.93 ± 8% +40.4% 2.71 ± 8% sched_debug.cpu.nr_running.max
0.77 ± 16% -100.0% 0.00 ± -1% sched_debug.cpu.nr_running.min
0.24 ± 8% +172.1% 0.65 ± 7% sched_debug.cpu.nr_running.stddev
16572 ± 6% +3037.0% 519887 ± 0% sched_debug.cpu.nr_switches.avg
80913 ± 9% +597.0% 563953 ± 0% sched_debug.cpu.nr_switches.max
3507 ± 0% +13837.1% 488782 ± 0% sched_debug.cpu.nr_switches.min
16011 ± 9% +24.1% 19876 ± 2% sched_debug.cpu.nr_switches.stddev
-0.00 ±-141% -1e+05% 0.47 ± 1% sched_debug.cpu.nr_uninterruptible.avg
13.77 ± 26% +30608.8% 4227 ± 0% sched_debug.cpu.nr_uninterruptible.max
-12.77 ± -4% +32511.3% -4163 ± 0% sched_debug.cpu.nr_uninterruptible.min
4.76 ± 11% +80376.7% 3832 ± 0% sched_debug.cpu.nr_uninterruptible.stddev
16778 ± 6% +2999.5% 520032 ± 0% sched_debug.cpu.sched_count.avg
80047 ± 9% +615.6% 572844 ± 1% sched_debug.cpu.sched_count.max
3396 ± 0% +14289.2% 488771 ± 0% sched_debug.cpu.sched_count.min
16232 ± 9% +25.7% 20407 ± 2% sched_debug.cpu.sched_count.stddev
1645 ± 12% +10331.5% 171604 ± 0% sched_debug.cpu.sched_goidle.avg
9799 ± 13% +1875.0% 193536 ± 0% sched_debug.cpu.sched_goidle.max
369.83 ± 3% +41879.6% 155254 ± 0% sched_debug.cpu.sched_goidle.min
1796 ± 21% +477.6% 10377 ± 0% sched_debug.cpu.sched_goidle.stddev
7732 ± 7% +3459.5% 275252 ± 0% sched_debug.cpu.ttwu_count.avg
38526 ± 8% +699.8% 308129 ± 1% sched_debug.cpu.ttwu_count.max
1447 ± 0% +17406.9% 253482 ± 0% sched_debug.cpu.ttwu_count.min
7835 ± 9% +99.2% 15610 ± 3% sched_debug.cpu.ttwu_count.stddev
5418 ± 14% +1227.9% 71951 ± 0% sched_debug.cpu.ttwu_local.avg
35944 ± 10% +112.8% 76482 ± 0% sched_debug.cpu.ttwu_local.max
796.37 ± 4% +8288.8% 66805 ± 0% sched_debug.cpu.ttwu_local.min
7076 ± 12% -57.4% 3016 ± 0% sched_debug.cpu.ttwu_local.stddev
311196 ± 0% -29.1% 220757 ± 0% sched_debug.cpu_clk
308421 ± 0% -29.3% 217995 ± 0% sched_debug.ktime
0.10 ± 0% +42.9% 0.14 ± 0% sched_debug.rt_rq:/.rt_nr_running.max
0.01 ± 15% +54.2% 0.02 ± 21% sched_debug.rt_rq:/.rt_nr_running.stddev
0.75 ± 41% +64.9% 1.23 ± 23% sched_debug.rt_rq:/.rt_time.max
311196 ± 0% -29.1% 220757 ± 0% sched_debug.sched_clk
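Note on the profile above: clear_huge_page() no longer shows the inline clear_page_c_e loop; instead it spends its time in __alloc_workqueue_key, apply_workqueue_attrs, flush_workqueue and destroy_workqueue, while the actual clearing (clear_mem, clear_page_c_e) moves under process_one_work in kworkers, and the mutex_optimistic_spin/osq_lock cycles are contention on those workqueue setup/teardown paths. A rough sketch of the shape those symbols suggest, deliberately simplified and with hypothetical names (apart from clear_mem, which is taken from the profile) -- not the patch's actual code:

#include <linux/workqueue.h>
#include <linux/highmem.h>
#include <linux/mm.h>

struct clear_chunk {                    /* hypothetical */
        struct work_struct work;
        struct page *first;
        unsigned int nr;
};

static void clear_mem(struct work_struct *work)
{
        struct clear_chunk *c = container_of(work, struct clear_chunk, work);
        unsigned int i;

        /* this loop is where the clear_page_c_e time shows up */
        for (i = 0; i < c->nr; i++)
                clear_highpage(c->first + i);
}

/* Hypothetical replacement for the old inline loop in clear_huge_page(). */
static void clear_huge_page_wq(struct page *page, unsigned int nr_subpages,
                               struct clear_chunk *chunks, unsigned int nr_chunks)
{
        struct workqueue_struct *wq;
        unsigned int i, per = nr_subpages / nr_chunks;

        /* __alloc_workqueue_key / apply_workqueue_attrs in the profile */
        wq = alloc_workqueue("clear_huge_page", WQ_UNBOUND, 0);
        if (!wq)
                return;         /* a real patch would fall back to inline clearing */

        for (i = 0; i < nr_chunks; i++) {
                chunks[i].first = page + i * per;
                chunks[i].nr = per;
                INIT_WORK(&chunks[i].work, clear_mem);
                queue_work(wq, &chunks[i].work);
        }

        flush_workqueue(wq);    /* the flush_workqueue latency_stats entries */
        destroy_workqueue(wq);  /* the destroy_workqueue latency_stats entries */
}

If the patch really does allocate and destroy a workqueue per fault as the symbols suggest, that alone would account for the large jump in context-switches, cpu-migrations and sched_goidle counts above.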
Thanks,
Xiaolong
[lkp] [BUG] 48d381633b: WARNING: CPU: 0 PID: 950 at lib/debugobjects.c:263 debug_print_object+0x7d/0x90
by kernel test robot
FYI, we noticed the following commit:
git://bee.sh.intel.com/git/wfg/linux-devel.git fix-e100e-netpoll-nick
commit 48d381633b581bcf7be0a6e297ed65ae280690ec ("BUG: sleeping function called from invalid context at kernel/irq/manage.c:110")
in testcase: boot
on test machine: 1 thread qemu-system-i386 -enable-kvm with 192M memory
caused the following changes:
+------------------------------------------------------------+------+------------+
| | v4.7 | 48d381633b |
+------------------------------------------------------------+------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 73 | 10 |
| genirq:Flags_mismatch_irq##(serial)vs.#(goldfish_pdev_bus) | 73 | 10 |
| invoked_oom-killer:gfp_mask=0x | 1 | |
| Mem-Info | 1 | |
| backtrace:SyS_clone+0x | 1 | |
| backtrace:do_sys_open+0x | 2 | |
| backtrace:SyS_open+0x | 2 | |
| backtrace:vfs_stat+0x | 1 | |
| backtrace:SyS_stat64+0x | 1 | |
| IP-Config:Auto-configuration_of_network_failed | 2 | |
| genirq:Flags_mismatch_irq##(serial)vs | 3 | |
| genirq:Flags_mismatch_irq##(serial)vs.#(goldfish_pdev_bu | 1 | |
| genirq:Flags_mismatch_irq | 2 | |
| genirq:Flags_mismatch_irq##(seri | 1 | |
| genirq:Flags_mismatch_irq##(serial)vs.#(goldfi | 1 | |
| genirq:Flags_mismatch_irq##(serial) | 2 | |
| genirq:Flags_mismatch_irq##(se | 1 | |
| genirq:Flags_mismatch_irq##(serial)vs.#(gold | 1 | |
| WARNING:at_lib/debugobjects.c:#debug_print_object | 0 | 8 |
| BUG:unable_to_handle_kernel | 0 | 8 |
| Oops | 0 | 8 |
| EIP_is_at_detach_if_pending | 0 | 8 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 8 |
+------------------------------------------------------------+------+------------+
[ 22.906262] BIOS EDD facility v0.16 2004-Jun-25, 2 devices found
[ 22.908684] PM: Hibernation image not present or could not be loaded.
[ 22.908684] PM: Hibernation image not present or could not be loaded.
[ 22.911189] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 22.911189] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 22.913531] debug: unmapping init [mem 0xb269e000-0xb275afff]
[ 22.913531] debug: unmapping init [mem 0xb269e000-0xb275afff]
[ 22.915620] Write protecting the kernel text: 15776k
[ 22.915620] Write protecting the kernel text: 15776k
[ 22.917207] Write protecting the kernel read-only data: 6080k
[ 22.917207] Write protecting the kernel read-only data: 6080k
[ 22.975948] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 22.975948] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 22.994162] random: mktemp urandom read with 14 bits of entropy available
[ 22.994162] random: mktemp urandom read with 14 bits of entropy available
[ 26.941972] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 26.941972] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 27.485632] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 27.485632] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 27.990029] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 27.990029] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 28.495632] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 28.495632] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 28.853460] ------------[ cut here ]------------
[ 28.853460] ------------[ cut here ]------------
[ 28.854581] WARNING: CPU: 0 PID: 950 at lib/debugobjects.c:263 debug_print_object+0x7d/0x90
[ 28.854581] WARNING: CPU: 0 PID: 950 at lib/debugobjects.c:263 debug_print_object+0x7d/0x90
[ 28.856736] ODEBUG: deactivate not available (active state 0) object type: timer_list hint: e1000_watchdog+0x0/0x51a
[ 28.856736] ODEBUG: deactivate not available (active state 0) object type: timer_list hint: e1000_watchdog+0x0/0x51a
[ 28.858899] CPU: 0 PID: 950 Comm: netifd Tainted: G S 4.7.0-00001-g48d3816 #1
[ 28.858899] CPU: 0 PID: 950 Comm: netifd Tainted: G S 4.7.0-00001-g48d3816 #1
[ 28.860593] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 28.860593] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 28.862388] b0091ef0
[ 28.862388] b0091ef0 b0091ef0 b0091ef0 b0091eac b0091eac b137128e b137128e b0091edc b0091edc b1041707 b1041707 b23c221c b23c221c b0091f0c b0091f0c
[ 28.864209] 000003b6
[ 28.864209] 000003b6 b23c225c b23c225c 00000107 00000107 b138adcd b138adcd 00000107 00000107 b0091f3c b0091f3c b2579700 b2579700 b2387d49 b2387d49
[ 28.866108] b0091ef8
[ 28.866108] b0091ef8 b1041755 b1041755 00000009 00000009 00000000 00000000 b0091ef0 b0091ef0 b23c221c b23c221c b0091f0c b0091f0c b0091f2c b0091f2c
[ 28.868031] Call Trace:
[ 28.868031] Call Trace:
[ 28.868582] [<b137128e>] dump_stack+0x16/0x18
[ 28.868582] [<b137128e>] dump_stack+0x16/0x18
[ 28.869536] [<b1041707>] __warn+0xc3/0xdb
[ 28.869536] [<b1041707>] __warn+0xc3/0xdb
[ 28.870453] [<b138adcd>] ? debug_print_object+0x7d/0x90
[ 28.870453] [<b138adcd>] ? debug_print_object+0x7d/0x90
[ 28.871636] [<b1041755>] warn_slowpath_fmt+0x36/0x38
[ 28.871636] [<b1041755>] warn_slowpath_fmt+0x36/0x38
[ 28.872828] [<b138adcd>] debug_print_object+0x7d/0x90
[ 28.872828] [<b138adcd>] debug_print_object+0x7d/0x90
[ 28.874057] [<b199bdb3>] ? e1000_update_stats+0x7dc/0x7dc
[ 28.874057] [<b199bdb3>] ? e1000_update_stats+0x7dc/0x7dc
[ 28.875392] [<b138b4f9>] debug_object_deactivate+0xc4/0xc6
[ 28.875392] [<b138b4f9>] debug_object_deactivate+0xc4/0xc6
[ 28.876779] [<b1086796>] detach_if_pending+0x1f/0x81
[ 28.876779] [<b1086796>] detach_if_pending+0x1f/0x81
[ 28.878019] [<b1087006>] mod_timer+0x57/0x102
[ 28.878019] [<b1087006>] mod_timer+0x57/0x102
[ 28.879151] [<b19966ed>] e1000_intr+0xf7/0xfc
[ 28.879151] [<b19966ed>] e1000_intr+0xf7/0xfc
[ 28.880267] [<b107d2ad>] handle_irq_event_percpu+0x58/0x10f
[ 28.880267] [<b107d2ad>] handle_irq_event_percpu+0x58/0x10f
[ 28.881665] [<b107f5e7>] ? handle_fasteoi_irq+0x15/0x115
[ 28.881665] [<b107f5e7>] ? handle_fasteoi_irq+0x15/0x115
[ 28.883017] [<b107d386>] ? handle_irq_event+0x22/0x4c
[ 28.883017] [<b107d386>] ? handle_irq_event+0x22/0x4c
[ 28.884298] [<b107d38d>] handle_irq_event+0x29/0x4c
[ 28.884298] [<b107d38d>] handle_irq_event+0x29/0x4c
[ 28.885430] [<b107f642>] handle_fasteoi_irq+0x70/0x115
[ 28.885430] [<b107f642>] handle_fasteoi_irq+0x70/0x115
[ 28.886678] [<b107f5d2>] ? handle_level_irq+0xd1/0xd1
[ 28.886678] [<b107f5d2>] ? handle_level_irq+0xd1/0xd1
[ 28.887897] [<b1015912>] handle_irq+0x3e/0x5c
[ 28.887897] [<b1015912>] handle_irq+0x3e/0x5c
[ 28.888914] <IRQ>
[ 28.888914] <IRQ> [<b1015649>] do_IRQ+0x48/0xfd
[<b1015649>] do_IRQ+0x48/0xfd
[ 28.890018] [<b1f665ae>] common_interrupt+0x2e/0x34
[ 28.890018] [<b1f665ae>] common_interrupt+0x2e/0x34
[ 28.891109] [<b199007b>] ? grcan_store_select+0x23/0x50
[ 28.891109] [<b199007b>] ? grcan_store_select+0x23/0x50
[ 28.892291] [<b199d119>] ? e1000_open+0xaf/0x10c
[ 28.892291] [<b199d119>] ? e1000_open+0xaf/0x10c
[ 28.893334] [<b1e46da3>] __dev_open+0x78/0xdc
[ 28.893334] [<b1e46da3>] __dev_open+0x78/0xdc
[ 28.894316] [<b1f65766>] ? _raw_spin_unlock_bh+0x2a/0x2d
[ 28.894316] [<b1f65766>] ? _raw_spin_unlock_bh+0x2a/0x2d
[ 28.895500] [<b1e47013>] __dev_change_flags+0x89/0x139
[ 28.895500] [<b1e47013>] __dev_change_flags+0x89/0x139
[ 28.896673] [<b1e5707b>] ? rtnl_lock+0xf/0x11
[ 28.896673] [<b1e5707b>] ? rtnl_lock+0xf/0x11
[ 28.897666] [<b1e470e6>] dev_change_flags+0x23/0x54
[ 28.897666] [<b1e470e6>] dev_change_flags+0x23/0x54
[ 28.898725] [<b1e5e00b>] dev_ifsioc+0x1eb/0x2cd
[ 28.898725] [<b1e5e00b>] dev_ifsioc+0x1eb/0x2cd
[ 28.899746] [<b1e5e2d4>] dev_ioctl+0xc8/0x5a4
[ 28.899746] [<b1e5e2d4>] dev_ioctl+0xc8/0x5a4
[ 28.900777] [<b1e5e39d>] ? dev_ioctl+0x191/0x5a4
[ 28.900777] [<b1e5e39d>] ? dev_ioctl+0x191/0x5a4
[ 28.901824] [<b1e2d0a1>] sock_ioctl+0x11f/0x1e8
[ 28.901824] [<b1e2d0a1>] sock_ioctl+0x11f/0x1e8
[ 28.902825] [<b1e2cf82>] ? init_once+0xd/0xd
[ 28.902825] [<b1e2cf82>] ? init_once+0xd/0xd
[ 28.903770] [<b111b580>] do_vfs_ioctl+0x7d/0x729
[ 28.903770] [<b111b580>] do_vfs_ioctl+0x7d/0x729
[ 28.904794] [<b10fd483>] ? slob_free_pages+0x2b/0x2f
[ 28.904794] [<b10fd483>] ? slob_free_pages+0x2b/0x2f
[ 28.905923] [<b10fd900>] ? __kmem_cache_free+0x25/0x30
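For reference, the ODEBUG line above ("deactivate not available (active state 0) ... hint: e1000_watchdog") is what lib/debugobjects.c prints when mod_timer() -> detach_if_pending() -> debug_object_deactivate() is handed a timer_list it has no record of, typically one that was never initialized through the debug-aware setup path or whose memory has been reused. A minimal sketch of the two cases, assuming CONFIG_DEBUG_OBJECTS_TIMERS=y (generic illustration with hypothetical names, not the e1000 code):

#include <linux/timer.h>
#include <linux/jiffies.h>

static void my_watchdog(unsigned long data)
{
        /* periodic work would go here */
}

static struct timer_list my_timer;      /* hypothetical timer */

static void good_path(void)
{
        /* debugobjects learns about the timer here ... */
        setup_timer(&my_timer, my_watchdog, 0);
        /* ... so this mod_timer() is tracked and stays silent. */
        mod_timer(&my_timer, jiffies + HZ);
}

static void bad_path(struct timer_list *t)
{
        /*
         * If 't' was never passed through setup_timer()/init_timer(),
         * or its memory was freed and reused, debugobjects has no
         * record of it.  mod_timer() -> detach_if_pending() ->
         * debug_object_deactivate() then reports
         * "ODEBUG: deactivate not available ... object type: timer_list",
         * as in the warning above.
         */
        mod_timer(t, jiffies + 1);
}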
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-c0-07282000/gcc-4.9/48d381633b581bcf7be0a6e297ed65ae280690ec/vmlinuz-4.7.0-00001-g48d3816 -append 'ip=::::vm-intel12-openwrt-i386-2::dhcp root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-intel12-openwrt-i386-2/boot-1-openwrt-i386.cgz-48d381633b581bcf7be0a6e297ed65ae280690ec-20160728-14825-1qsf38g-0.yaml ARCH=i386 kconfig=i386-randconfig-c0-07282000 branch=linux-devel/devel-spot-201607281924 commit=48d381633b581bcf7be0a6e297ed65ae280690ec BOOT_IMAGE=/pkg/linux/i386-randconfig-c0-07282000/gcc-4.9/48d381633b581bcf7be0a6e297ed65ae280690ec/vmlinuz-4.7.0-00001-g48d3816 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-intel12-openwrt-i386/openwrt-i386.cgz/i386-randconfig-c0-07282000/gcc-4.9/48d381633b581bcf7be0a6e297ed65ae280690ec/0 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 systemd.log_level=err ignore_loglevel earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 vga=normal rw drbd.minor_count=8' -initrd /fs/KVM/initrd-vm-intel12-openwrt-i386-2 -m 192 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -watchdog-action debug -rtc base=localtime -drive file=/fs/KVM/disk0-vm-intel12-openwrt-i386-2,media=disk,if=virtio -drive file=/fs/KVM/disk1-vm-intel12-openwrt-i386-2,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-intel12-openwrt-i386-2 -serial file:/dev/shm/kboot/serial-vm-intel12-openwrt-i386-2 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [x86/stacktrace] 7be83135e2: BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
by kernel test robot
FYI, we noticed the following commit:
https://github.com/jpoimboe/linux unwind-2016-07-25
commit 7be83135e2b166469de43f165e2a08c6b36f5da8 ("x86/stacktrace: convert save_stack_trace_*() to the new unwinder")
in testcase: boot
on test machine: 2 threads qemu-system-x86_64 -enable-kvm with 360M memory
caused the following changes:
+-------------------------------------------------------------------+------------+------------+
| | 12b0afa7c8 | 7be83135e2 |
+-------------------------------------------------------------------+------------+------------+
| boot_successes | 87 | 0 |
| boot_failures | 3 | 90 |
| BUG:kernel_torture_test_oversize | 2 | |
| BUG:kernel_test_crashed | 1 | |
| BUG:using_smp_processor_id()in_preemptible[#]code:swapper | 0 | 90 |
| BUG:using_smp_processor_id()in_preemptible[#]code:init | 0 | 16 |
| BUG:using_smp_processor_id()in_preemptible[#]code:systemd | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:systemd-journal | 0 | 2 |
| BUG:using_smp_processor_id()in_preemptible[#]code:rc | 0 | 12 |
| BUG:using_smp_processor_id()in_preemptible[#]code:udevd | 0 | 27 |
| BUG:using_smp_processor_id()in_preemptible[#]code:udevadm | 0 | 7 |
| BUG:using_smp_processor_id()in_preemptible[#]code:sh | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:S07bootlogd | 0 | 1 |
| BUG:using_smp_pro | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:sed | 0 | 5 |
| BUG:using_smp_processor_id()in_preemptible[#]code:touch | 0 | 2 |
| BUG:using_smp_processor_id()in_preemptible[#]code:chown | 0 | 2 |
| BUG:using_smp_proc | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:rm | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:start-stop-daem | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:id | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:S03udev | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:hwclock | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:lock_torture_st | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:ln | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:grep | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:basename | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:ls | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:input_id | 0 | 1 |
| BUG:using_smp_processor_id()in_preemptible[#]code:trinity | 0 | 1 |
+-------------------------------------------------------------------+------------+------------+
[ 7.944152] debug: unmapping init [mem 0xffff880001a74000-0xffff880001bfffff]
[ 7.944891] debug: unmapping init [mem 0xffff8800021a2000-0xffff8800021fffff]
[ 25.551987] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 25.554473] BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
[ 25.555217] caller is debug_smp_processor_id+0x17/0x19
[ 25.555698] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.7.0-01746-g7be8313 #1
[ 25.556361] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 25.557211] ffff880010b27a78 ffffffff813ff796 ffff880010b27a48 ffffffff8140878a
[ 25.557327] 0000000000000001 0000000000000001 ffffffff820d2487 ffffffff820e88d3
[ 25.557327] ffff880010aac000 0000000000016060 ffff880010b27aa8 ffffffff8142a844
[ 25.557327] Call Trace:
[ 25.557327] [<ffffffff813ff796>] dump_stack+0x10c/0x181
[ 25.557327] [<ffffffff8140878a>] ? ___ratelimit+0x196/0x1a2
[ 25.557327] [<ffffffff8142a844>] check_preemption_disabled+0x172/0x184
[ 25.557327] [<ffffffff8142a882>] debug_smp_processor_id+0x17/0x19
[ 25.557327] [<ffffffff8100ef30>] get_stack_info+0xa2/0x212
[ 25.557327] [<ffffffff8103a0d9>] unwind_next_frame+0x8d/0x120
[ 25.557327] [<ffffffff81021538>] __save_stack_trace+0x137/0x17b
[ 25.557327] [<ffffffff811043be>] ? __might_sleep+0xd2/0xdf
[ 25.557327] [<ffffffff810215ca>] save_stack_trace+0x14/0x16
[ 25.557327] [<ffffffff8129608b>] kasan_kmalloc+0xe3/0x168
[ 25.557327] [<ffffffff810215ca>] ? save_stack_trace+0x14/0x16
[ 25.557327] [<ffffffff8129608b>] ? kasan_kmalloc+0xe3/0x168
[ 25.557327] [<ffffffff812961b5>] ? kasan_slab_alloc+0xf/0x11
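The splat itself is the CONFIG_DEBUG_PREEMPT check firing because get_stack_info() reads smp_processor_id() while the caller is still preemptible, so the task could migrate and the CPU number go stale before it is used. A small sketch of the check and the usual remedies (generic illustration, not the unwinder code):

#include <linux/smp.h>
#include <linux/preempt.h>

static void preemptible_caller(void)
{
        int cpu;

        /*
         * With CONFIG_DEBUG_PREEMPT this triggers
         * "BUG: using smp_processor_id() in preemptible code"
         * because nothing prevents migration to another CPU.
         */
        cpu = smp_processor_id();
        (void)cpu;

        /* Remedy 1: pin the task while the CPU number is in use. */
        cpu = get_cpu();
        /* ... per-CPU work that must stay on 'cpu' ... */
        put_cpu();

        /*
         * Remedy 2: if an approximate CPU number is acceptable
         * (only used as a hint), raw_smp_processor_id() bypasses
         * the debug check.
         */
        cpu = raw_smp_processor_id();
        (void)cpu;
}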
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -kernel /pkg/linux/x86_64-randconfig-s2-07280741/gcc-4.4/7be83135e2b166469de43f165e2a08c6b36f5da8/vmlinuz-4.7.0-01746-g7be8313 -append 'ip=::::vm-vp-quantal-x86_64-9::dhcp root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-x86_64-9/boot-1-quantal-core-x86_64.cgz-7be83135e2b166469de43f165e2a08c6b36f5da8-20160728-127661-lncm5s-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s2-07280741 branch=linux-devel/devel-hourly-2016072804 commit=7be83135e2b166469de43f165e2a08c6b36f5da8 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s2-07280741/gcc-4.4/7be83135e2b166469de43f165e2a08c6b36f5da8/vmlinuz-4.7.0-01746-g7be8313 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-x86_64/quantal-core-x86_64.cgz/x86_64-randconfig-s2-07280741/gcc-4.4/7be83135e2b166469de43f165e2a08c6b36f5da8/0 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 systemd.log_level=err ignore_loglevel earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 vga=normal rw drbd.minor_count=8' -initrd /fs/sdh1/initrd-vm-vp-quantal-x86_64-9 -m 360 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-x86_64-9 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-x86_64-9 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [blk] ee5c4fef9f: BUG: unable to handle kernel NULL pointer dereference at 0000010b
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Minfei-Huang/blk-core-Fix-the-bad-IO-during-checking-bio/20160728-182758
commit ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f ("blk-core: Fix the bad IO during checking bio")
in testcase: boot
on test machine: 2 threads qemu-system-i386 -enable-kvm with 320M memory
caused the following changes:
+------------------------------------------------+------------+------------+
| | b013517951 | ee5c4fef9f |
+------------------------------------------------+------------+------------+
| boot_successes | 11 | 2 |
| boot_failures | 1 | 10 |
| BUG:kernel_test_crashed | 1 | |
| BUG:unable_to_handle_kernel | 0 | 8 |
| Oops | 0 | 8 |
| EIP_is_at__lock_acquire | 0 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 8 |
| IP-Config:Auto-configuration_of_network_failed | 0 | 2 |
+------------------------------------------------+------------+------------+
[ 24.378591] attempt to access beyond end of device
[ 24.378593] fd0: rw=0, want=8, limit=0
[ 24.378594] floppy: error -5 while reading block 0
[ 24.378600] BUG: unable to handle kernel NULL pointer dereference at 0000010b
[ 24.378605] IP: [<7906d275>] __lock_acquire+0xa7/0x612
[ 24.378606] *pde = 00000000
[ 24.378608] Oops: 0002 [#1] SMP
[ 24.378611] CPU: 1 PID: 574 Comm: mount Not tainted 4.7.0-rc2-00241-gee5c4fe #4
[ 24.378612] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 24.378614] task: 87152c80 ti: 883f0000 task.ti: 883f0000
[ 24.378615] EIP: 0060:[<7906d275>] EFLAGS: 00010002 CPU: 1
[ 24.378617] EIP is at __lock_acquire+0xa7/0x612
[ 24.378618] EAX: 00000007 EBX: 00000002 ECX: 00000000 EDX: 00000000
[ 24.378619] ESI: 00000001 EDI: 87152c80 EBP: 883f1c2c ESP: 883f1c00
[ 24.378620] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
[ 24.378621] CR0: 80050033 CR2: 0000010b CR3: 0f8bd000 CR4: 00000690
[ 24.378625] Stack:
[ 24.378630] 00000000 7a267440 00000202 883f1c1c 00000000 ffffffff 883f1d74 883f1c2c
[ 24.378634] 00000002 87152c80 883f1d74 883f1c64 7906da8d 00000000 00000001 00000001
[ 24.378637] 00000000 79066107 00000000 00000000 00000000 00000000 883f1d64 00000202
[ 24.378638] Call Trace:
[ 24.378640] [<7906da8d>] lock_acquire+0x60/0x7c
[ 24.378644] [<79066107>] ? complete+0x12/0x35
[ 24.378648] [<79b9a42a>] _raw_spin_lock_irqsave+0x34/0x44
[ 24.378650] [<79066107>] ? complete+0x12/0x35
[ 24.378651] [<79066107>] complete+0x12/0x35
[ 24.378654] [<79467b9a>] floppy_rb0_cb+0x31/0x38
[ 24.378656] [<7932d102>] bio_endio+0x39/0x51
[ 24.378659] [<7932ec47>] generic_make_request_checks+0x13a/0x144
[ 24.378661] [<793300ae>] generic_make_request+0x11/0x12a
[ 24.378663] [<79330293>] submit_bio+0xcc/0xd3
[ 24.378665] [<79468347>] __floppy_read_block_0+0xbc/0xfe
[ 24.378668] [<7906bfa3>] ? mark_held_locks+0x4b/0x65
[ 24.378671] [<79b9a5de>] ? _raw_spin_unlock_irqrestore+0x39/0x4b
[ 24.378672] [<79467b69>] ? floppy_find+0x3b/0x3b
[ 24.378674] [<79468955>] floppy_revalidate+0x104/0x171
[ 24.378678] [<79117276>] check_disk_change+0x41/0x4e
[ 24.378680] [<79467e9a>] floppy_open+0x20c/0x28d
[ 24.378682] [<7911697b>] __blkdev_get+0xf9/0x34f
[ 24.378684] [<79116d39>] blkdev_get+0x168/0x25c
[ 24.378689] [<790f8206>] ? path_put+0x15/0x18
[ 24.378691] [<79117061>] ? lookup_bdev+0x62/0x72
[ 24.378693] [<79117094>] blkdev_get_by_path+0x23/0x53
[ 24.378696] [<790f2820>] mount_bdev+0x2a/0x157
[ 24.378700] [<7917748a>] ext4_mount+0x10/0x12
[ 24.378702] [<7917af40>] ? ext4_calculate_overhead+0x30e/0x30e
[ 24.378704] [<790f2ad3>] mount_fs+0x53/0x110
[ 24.378708] [<79107ab4>] vfs_kern_mount+0x47/0xaa
[ 24.378710] [<79108d9b>] do_mount+0x7a6/0x8a6
[ 24.378714] [<790c35c2>] ? strndup_user+0x27/0x3f
[ 24.378717] [<79109040>] SyS_mount+0x52/0x76
[ 24.378720] [<79000f2e>] do_int80_syscall_32+0x48/0x5a
[ 24.378722] [<79b9ab2c>] entry_INT80_32+0x2c/0x2c
[ 24.378747] Code: 80 08 48 7a 74 03 8b 75 0c 83 fa 01 77 0b 8b 45 ec 8b 44 90 04 85 c0 75 12 31 c9 8b 45 ec e8 8f cc ff ff 85 c0 0f 84 f2 04 00 00 <f0> ff 80 04 01 00 00 8b 9f 58 04 00 00 89 5d e4 83 3d 08 8a bc
[ 24.378750] EIP: [<7906d275>] __lock_acquire+0xa7/0x612 SS:ESP 0068:883f1c00
[ 24.378750] CR2: 000000000000010b
[ 24.378752] ---[ end trace beb8a2f440b7388d ]---
[ 24.378753] Kernel panic - not syncing: Fatal exception
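Reading the trace: generic_make_request_checks() rejects the bio ("attempt to access beyond end of device"), ends it on the spot, and floppy_rb0_cb() then runs complete() on the completion it reaches -- presumably via bi_private; the oops is lockdep's __lock_acquire() chasing a bad pointer while taking that completion's internal wait-queue lock. A generic sketch of the synchronous-read pattern involved, with hypothetical names and not the floppy driver's actual code, just to make the trace easier to follow:

#include <linux/bio.h>
#include <linux/completion.h>

struct sync_read_ctx {                  /* hypothetical */
        struct completion done;
};

static void sync_read_endio(struct bio *bio)
{
        struct sync_read_ctx *ctx = bio->bi_private;

        /*
         * complete() takes the completion's internal wait-queue
         * spinlock.  If the bio is ended early from the submission
         * checks and bi_private (or the completion behind it) is not
         * in the state the submitter expects, lock_acquire() ends up
         * dereferencing garbage, much like the oops above.
         */
        complete(&ctx->done);
}

static void sync_read_setup(struct bio *bio, struct sync_read_ctx *ctx)
{
        init_completion(&ctx->done);
        bio->bi_private = ctx;
        bio->bi_end_io = sync_read_endio;
        /* submit_bio(...) and wait_for_completion(&ctx->done) follow. */
}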
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-s1-201630/gcc-6/ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f/vmlinuz-4.7.0-rc2-00241-gee5c4fe -append 'ip=::::vm-kbuild-yocto-i386-10::dhcp root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-i386-10/boot-1-yocto-minimal-i386.cgz-ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f-20160728-20894-1h3orba-0.yaml ARCH=i386 kconfig=i386-randconfig-s1-201630 branch=linux-devel/devel-catchup-201607281838 commit=ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f BOOT_IMAGE=/pkg/linux/i386-randconfig-s1-201630/gcc-6/ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f/vmlinuz-4.7.0-rc2-00241-gee5c4fe max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-i386/yocto-minimal-i386.cgz/i386-randconfig-s1-201630/gcc-6/ee5c4fef9f2ef03ee8f283a5b24192df00e17f0f/0 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 systemd.log_level=err ignore_loglevel earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 vga=normal rw drbd.minor_count=8' -initrd /fs/sda1/initrd-vm-kbuild-yocto-i386-10 -m 320 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -watchdog-action debug -rtc base=localtime -drive file=/fs/sda1/disk0-vm-kbuild-yocto-i386-10,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-i386-10 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-i386-10 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [mm, kasan] a6efa0b2aa: Undefined behaviour in mm/kasan/quarantine.c:102:13
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit a6efa0b2aa5568872abff95bfa7d8a4dba00f86f ("mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB")
in testcase: boot
on test machine: 1 thread qemu-system-x86_64 -enable-kvm -cpu SandyBridge with 320M memory
caused the changes below:
[ 18.666107] UBSAN: Undefined behaviour in mm/kasan/quarantine.c:102:13
[ 18.668198] member access within misaligned address ffff88000d1efebc for type 'struct qlist_node'
[ 18.670368] which requires 8 byte alignment
[ 18.671494] CPU: 0 PID: 1 Comm: swapper Not tainted 4.7.0-rc7-00368-ga6efa0b #1
[ 18.673400] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 18.675812] 0000000000000000 ffff88000d4af918 ffffffff81ac3c82 ffff88000d4af938
[ 18.678219] ffffffff81b60046 000000000000001f ffffffff8370a6c0 ffff88000d4af9d8
[ 18.680606] ffffffff81b60a2f ffffffff8370b44c 0000000041b58ab3 ffffffff82b6a7c6
[ 18.683014] Call Trace:
[ 18.683822] [<ffffffff81ac3c82>] dump_stack+0xf9/0x185
[ 18.685112] [<ffffffff81b60046>] ubsan_epilogue+0xe/0x84
[ 18.687650] [<ffffffff81b60a2f>] __ubsan_handle_type_mismatch+0x1e2/0x20a
[ 18.689369] [<ffffffff81b6084d>] ? __ubsan_handle_divrem_overflow+0x16c/0x16c
[ 18.691296] [<ffffffff81339dd2>] ? ___slab_alloc+0x710/0x93e
[ 18.692941] [<ffffffff81ac781f>] ? idr_get_empty_slot+0xddf/0xddf
[ 18.698603] [<ffffffff81343846>] quarantine_reduce+0x1d3/0x23f
[ 18.700062] [<ffffffff81341ef4>] kasan_kmalloc+0x28/0x91
[ 18.701428] [<ffffffff81341f6f>] kasan_slab_alloc+0x12/0x14
[ 18.702846] [<ffffffff8133a3c4>] kmem_cache_alloc+0x334/0x451
[ 18.704305] [<ffffffff81478c73>] ? __kernfs_new_node+0xa9/0x1ff
[ 18.705794] [<ffffffff81478c73>] __kernfs_new_node+0xa9/0x1ff
[ 18.707243] [<ffffffff81478bca>] ? kernfs_dop_revalidate+0x2c9/0x2c9
[ 18.721888] [<ffffffff81ad541c>] ? rb_first+0x35/0x8c
[ 18.723213] [<ffffffff81478843>] ? kernfs_leftmost_descendant+0x48/0x5b
[ 18.724800] [<ffffffff8147c0a7>] kernfs_new_node+0xa0/0xe2
[ 18.726201] [<ffffffff81480506>] __kernfs_create_file+0x33/0x19f
[ 18.727704] [<ffffffff81482179>] sysfs_add_file_mode_ns+0x26c/0x3cd
[ 18.729371] [<ffffffff81482505>] sysfs_add_file+0x50/0x57
[ 18.730834] [<ffffffff81483ff0>] sysfs_merge_group+0x109/0x1d4
[ 18.748017] [<ffffffff81dd60bc>] dpm_sysfs_add+0x9e/0x13e
[ 18.749196] [<ffffffff81dbb549>] device_add+0xa66/0x1034
[ 18.750342] [<ffffffff81dbaae3>] ? device_private_init+0x1e9/0x1e9
[ 18.751629] [<ffffffff81db7eac>] ? device_create_file+0x155/0x155
[ 18.752898] [<ffffffff8133a926>] ? kmem_cache_alloc_trace+0x445/0x457
[ 18.754233] [<ffffffff81dc09d3>] ? subsys_register+0x3d/0x168
[ 18.755544] [<ffffffff81dbbb31>] device_register+0x1a/0x1d
[ 18.756717] [<ffffffff81dc0a97>] subsys_register+0x101/0x168
[ 18.758022] [<ffffffff81dc3561>] subsys_system_register+0x34/0x3a
[ 18.759308] [<ffffffff86c85359>] ? edac_mc_sysfs_init+0xcf/0xcf
[ 18.769681] [<ffffffff86c85378>] edac_init+0x1f/0x70
[ 18.779343] [<ffffffff81000597>] do_one_initcall+0x14e/0x200
[ 18.780772] [<ffffffff81000449>] ? initcall_blacklisted+0x146/0x146
[ 18.790449] [<ffffffff8114c800>] ? remove_wait_queue+0x154/0x1ca
[ 18.791916] [<ffffffff8112f59a>] ? preempt_count_sub+0x18/0xd9
[ 18.793370] [<ffffffff86c01a28>] kernel_init_freeable+0x2b8/0x34c
[ 18.794868] [<ffffffff82580aba>] kernel_init+0x11/0x11b
[ 18.796185] [<ffffffff8258becf>] ret_from_fork+0x1f/0x40
[ 18.797540] [<ffffffff82580aa9>] ? rest_init+0x90/0x90
[ 18.807610] ================================================================================
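What the sanitizer is saying here is that an object whose type carries 8-byte alignment (struct qlist_node) was accessed through an address that is not a multiple of 8. The same class of diagnostic can be reproduced with a small standalone program built with -fsanitize=alignment; the node type below is a hypothetical stand-in for illustration only, not the kernel's qlist_node, and the exact wording of the runtime error may differ slightly between compilers.
----------------------------------------------------------------------------
/* misalign.c: deliberately trip the alignment sanitizer.
 * Build: gcc -O0 -g -fsanitize=alignment misalign.c -o misalign
 * Run:   ./misalign
 * Expected runtime error is of the form:
 *   "member access within misaligned address ... for type 'struct qnode',
 *    which requires 8 byte alignment"
 */
#include <stdio.h>

struct qnode {                 /* stand-in node type, NOT the kernel's qlist_node */
	struct qnode *next;    /* a pointer member gives 8-byte alignment on x86-64 */
};

int main(void)
{
	/* 8-byte-aligned backing store; +4 below makes the pointer misaligned. */
	static __attribute__((aligned(8))) char buf[sizeof(struct qnode) + 8];
	struct qnode *n = (struct qnode *)(buf + 4);

	n->next = NULL;        /* the sanitizer flags this member access */
	printf("next = %p\n", (void *)n->next);
	return 0;
}
----------------------------------------------------------------------------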
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu SandyBridge -kernel /pkg/linux/x86_64-randconfig-s4-07242348/gcc-6/a6efa0b2aa5568872abff95bfa7d8a4dba00f86f/vmlinuz-4.7.0-rc7-00368-ga6efa0b -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-x86_64-59/boot-1-yocto-minimal-x86_64.cgz-a6efa0b2aa5568872abff95bfa7d8a4dba00f86f-20160725-6441-1w86cht-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s4-07242348 branch=linux-next/master commit=a6efa0b2aa5568872abff95bfa7d8a4dba00f86f BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s4-07242348/gcc-6/a6efa0b2aa5568872abff95bfa7d8a4dba00f86f/vmlinuz-4.7.0-rc7-00368-ga6efa0b max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-x86_64/yocto-minimal-x86_64.cgz/x86_64-randconfig-s4-07242348/gcc-6/a6efa0b2aa5568872abff95bfa7d8a4dba00f86f/0 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 systemd.log_level=err ignore_loglevel earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-x86_64-59::dhcp drbd.minor_count=8' -initrd /fs/sdg1/initrd-vm-kbuild-yocto-x86_64-59 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdg1/disk0-vm-kbuild-yocto-x86_64-59,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-x86_64-59 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-x86_64-59 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [md] 8430e7e0af: BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 8430e7e0af9a15063b90343e3beebc164c8e90f3 ("md: disconnect device from personality before trying to remove it.")
in testcase: mdadm-selftests
with following parameters:
test_prefix: 10
on test machine: 8 threads Nehalem with 4G memory
caused the changes below:
+--------------------------------------------------------------+------------+------------+
| | 7ac5044722 | 8430e7e0af |
+--------------------------------------------------------------+------------+------------+
| boot_successes | 13 | 4 |
| boot_failures | 0 | 11 |
| BUG:unable_to_handle_kernel | 0 | 11 |
| Oops | 0 | 11 |
| RIP:remove_and_add_spares | 0 | 10 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 10 |
| backtrace:vfs_write+0x | 0 | 10 |
| backtrace:SyS_write+0x | 0 | 10 |
| WARNING:at_arch/x86/kernel/smp.c:#native_smp_send_reschedule | 0 | 1 |
+--------------------------------------------------------------+------------+------------+
[ 31.258520] md: bind<loop9>
[ 31.258925] md: bind<loop8>
[ 31.299842] md: bind<loop10>
[ 31.303080] BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
[ 31.303611] IP: [<ffffffff817752f6>] remove_and_add_spares+0x236/0x310
[ 31.303995] PGD 8422c067 PUD 92d3a067 PMD 0
[ 31.304363] Oops: 0000 [#1] SMP
[ 31.304590] Modules linked in: multipath loop raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq raid10 raid1 raid0 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver netconsole sg sr_mod sd_mod cdrom dcdbas ata_generic snd_hda_codec_realtek pata_acpi snd_hda_codec_generic i7core_edac coretemp kvm_intel snd_hda_codec_hdmi kvm ata_piix irqbypass crc32c_intel pcspkr serio_raw edac_core usb_storage libata snd_hda_intel firewire_ohci firewire_core crc_itu_t snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd soundcore shpchp acpi_cpufreq broadcom bcm_phy_lib
[ 31.308998] CPU: 0 PID: 876 Comm: mdadm Not tainted 4.6.0-10682-g8430e7e #1
[ 31.309361] Hardware name: Dell Inc. Studio XPS 8000/0X231R, BIOS A01 08/11/2009
[ 31.309882] task: ffff8800977da480 ti: ffff880084534000 task.ti: ffff880084534000
[ 31.310327] RIP: 0010:[<ffffffff817752f6>] [<ffffffff817752f6>] remove_and_add_spares+0x236/0x310
[ 31.310863] RSP: 0018:ffff880084537d40 EFLAGS: 00010246
[ 31.311163] RAX: 0000000000000000 RBX: ffff8800a2377600 RCX: 0000000000000000
[ 31.311532] RDX: 0000000000000000 RSI: ffff8800a2377600 RDI: ffff8800a0d9c800
[ 31.311901] RBP: ffff880084537d98 R08: 0000000000000000 R09: ffff8800a0d9c848
[ 31.312270] R10: 00007f50ade33760 R11: 0000000000000246 R12: ffff8800a2377600
[ 31.312640] R13: ffff8800a0d9c818 R14: ffff8800a0d9c800 R15: ffff8800a0d9c801
[ 31.313009] FS: 00007f50ae254700(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
[ 31.313477] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 31.313793] CR2: 0000000000000050 CR3: 000000008455b000 CR4: 00000000000006f0
[ 31.314162] Stack:
[ 31.314335] ffff8800a0d9c848 ffff880000000000 ffff8800a0d9c800 ffff880084537d68
[ 31.314909] ffffffff813cb570 ffff880084537d98 ffff8800a2377600 ffff880135d40000
[ 31.315483] 0000000000000006 0000000000000006 ffff8800a0d9c800 ffff880084537dc0
[ 31.316057] Call Trace:
[ 31.316250] [<ffffffff813cb570>] ? selinux_capable+0x20/0x30
[ 31.316567] [<ffffffff81779401>] state_store+0x81/0x410
[ 31.316868] [<ffffffff8176fda7>] rdev_attr_store+0x77/0xb0
[ 31.317179] [<ffffffff812813f7>] sysfs_kf_write+0x37/0x40
[ 31.317485] [<ffffffff81280754>] kernfs_fop_write+0x134/0x1c0
[ 31.317807] [<ffffffff811ff9f8>] __vfs_write+0x28/0x120
[ 31.318109] [<ffffffff810a3989>] ? __might_sleep+0x49/0x80
[ 31.318420] [<ffffffff810c8f35>] ? percpu_down_read+0x25/0x70
[ 31.318740] [<ffffffff81200b75>] vfs_write+0xb5/0x1a0
[ 31.319034] [<ffffffff81201eb6>] SyS_write+0x46/0xa0
[ 31.321012] [<ffffffff81919f72>] entry_SYSCALL_64_fastpath+0x1a/0xa4
[ 31.321354] Code: 00 00 00 a8 04 0f 85 6a ff ff ff 48 c7 83 e8 00 00 00 00 00 00 00 49 8b 46 08 4c 89 4d a8 48 89 de 89 4d b0 44 88 45 b8 4c 89 f7 <ff> 50 50 85 c0 44 0f b6 45 b8 8b 4d b0 4c 8b 4d a8 0f 85 33 ff
[ 31.324457] RIP [<ffffffff817752f6>] remove_and_add_spares+0x236/0x310
[ 31.324840] RSP <ffff880084537d40>
[ 31.325069] CR2: 0000000000000050
[ 31.325311] ---[ end trace ef9ffa4f5b63806d ]---
[ 31.325683] Kernel panic - not syncing: Fatal exception
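A note on the faulting address: the register dump shows RAX = 0, and the bytes at the faulting RIP ("ff 50 50" in the Code: line) decode to "callq *0x50(%rax)", i.e. an indirect call through a function pointer stored at offset 0x50 of a structure whose base pointer is NULL here, which is why CR2 reads 0x50. The standalone sketch below reproduces that generic failure mode; the structure layout and field name are invented for illustration and are not taken from the md code.
----------------------------------------------------------------------------
/* null_offset.c: why a NULL struct pointer faults at the member's offset.
 * The layout is hypothetical; only the 0x50 offset mirrors the oops above.
 * Build: gcc -O0 -g null_offset.c -o null_offset
 * Running it segfaults; the faulting address (0x50) matches offsetof() below.
 */
#include <stddef.h>
#include <stdio.h>

struct ops {
	char pad[0x50];                 /* whatever members happen to come first */
	int (*hot_remove)(void *);      /* lands at offset 0x50 */
};

int main(void)
{
	struct ops *ops = NULL;

	printf("hot_remove lives at offset %#zx\n",
	       offsetof(struct ops, hot_remove));

	/* Reading the function pointer from NULL + 0x50 faults at address 0x50,
	 * just like CR2 in the report. */
	return ops->hot_remove(NULL);
}
----------------------------------------------------------------------------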
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[tracefs] f49ffd11d0: WARNING: CPU: 0 PID: 1 at kernel/trace/trace.c:6271 init_tracer_tracefs
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux Hari-Bathini/perf-tracefs-Container-aware-tracing-support/20160728-052933
commit f49ffd11d08100b7bc21e04e06e2314371377464
Author: Hari Bathini <hbathini(a)linux.vnet.ibm.com>
AuthorDate: Thu Jul 28 02:57:49 2016 +0530
Commit: 0day robot <fengguang.wu(a)intel.com>
CommitDate: Thu Jul 28 05:29:39 2016 +0800
tracefs: add 'newinstance' mount option
When tracefs is mounted inside a container, its files are visible to
all containers. This implies that a user from within a container can
list/delete uprobes registered elsewhere, leading to security issues
and/or denial of service (Eg. deleting a probe that is registered from
elsewhere). This patch addresses this problem by adding mount option
'newinstance', allowing containers to have their own instance mounted
separately. Something like the below from within a container:
$ mount -o newinstance -t tracefs tracefs /sys/kernel/tracing
$
$
$ perf probe /lib/x86_64-linux-gnu/libc.so.6 malloc
Added new event:
probe_libc:malloc (on malloc in /lib/x86_64-linux-gnu/libc.so.6)
You can now use it in all perf tools, such as:
perf record -e probe_libc:malloc -aR sleep 1
$
$
$ perf probe --list
probe_libc:malloc (on __libc_malloc in /lib64/libc.so.6)
$
while another container/host has a completely different view:
$ perf probe --list
probe_libc:memset (on __libc_memset in /lib64/libc.so.6)
$
This patch reuses the existing code for creating new instances
under the tracefs instances directory.
Signed-off-by: Hari Bathini <hbathini(a)linux.vnet.ibm.com>
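For readers who have not met a per-mount 'newinstance' style option before (devpts offers a similar one), the idea is to detect the token in the comma-separated mount data and, when present, set up a fresh instance instead of reusing the shared one. The sketch below shows only the option-detection half as a standalone userspace program; the function name and string handling are illustrative assumptions, not the code in this patch.
----------------------------------------------------------------------------
/* newinstance_opt.c: hypothetical illustration of spotting a 'newinstance'
 * token in mount data of the form "opt1,opt2=val,...".
 * Build: gcc -O2 -Wall newinstance_opt.c -o newinstance_opt
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool wants_new_instance(char *data)
{
	char *opt;

	for (opt = strtok(data, ","); opt; opt = strtok(NULL, ","))
		if (strcmp(opt, "newinstance") == 0)
			return true;    /* caller would set up a private instance */
	return false;
}

int main(void)
{
	char shared[]   = "mode=755";
	char private_[] = "newinstance,mode=755";

	printf("mode=755             -> %s\n",
	       wants_new_instance(shared) ? "new instance" : "shared instance");
	printf("newinstance,mode=755 -> %s\n",
	       wants_new_instance(private_) ? "new instance" : "shared instance");
	return 0;
}
----------------------------------------------------------------------------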
+------------------------------------------------------------------+------------+------------+------------+
| | b18cfd29f3 | f49ffd11d0 | 468e21ad86 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 67 | 0 | 0 |
| boot_failures | 2 | 30 | 54 |
| IP-Config:Auto-configuration_of_network_failed | 2 | 2 | 4 |
| WARNING:at_kernel/trace/trace.c:#init_tracer_tracefs | 0 | 30 | 54 |
| WARNING:at_kernel/trace/trace.c:#tracer_init_tracefs | 0 | 30 | 54 |
| WARNING:at_kernel/trace/trace.c:#create_trace_option_files | 0 | 30 | 54 |
| invoked_oom-killer:gfp_mask=0x | 0 | 2 | 10 |
| Mem-Info | 0 | 2 | 10 |
| Out_of_memory:Kill_process | 0 | 2 | 10 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 0 | 2 | 10 |
| backtrace:dump_stack+0x | 0 | 30 | 54 |
| backtrace:init_tracer_tracefs+0x | 0 | 30 | 54 |
| backtrace:tracer_init_tracefs+0x | 0 | 30 | 54 |
| backtrace:kernel_init_freeable+0x | 0 | 30 | 54 |
| backtrace:ret_from_fork+0x | 0 | 30 | 54 |
| backtrace:__update_tracer_options+0x | 0 | 30 | 54 |
| backtrace:SyS_clone+0x | 0 | 0 | 3 |
| backtrace:do_execveat_common+0x | 0 | 0 | 1 |
| backtrace:SyS_execve+0x | 0 | 0 | 1 |
+------------------------------------------------------------------+------------+------------+------------+
[ 1.240349] Could not create tracefs 'uprobe_profile' entry
[ 1.241165] Could not create tracefs 'snapshot' entry
[ 1.241923] ------------[ cut here ]------------
[ 1.242684] WARNING: CPU: 0 PID: 1 at kernel/trace/trace.c:6271 init_tracer_tracefs+0x57b/0x640
[ 1.244129] Could not create tracefs directory 'per_cpu/0'
[ 1.244901] Modules linked in:
[ 1.245497] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc5-00694-gf49ffd1 #1
[ 1.246611] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 1.247872] 0000000000000000 ffff88000eccfd30 ffffffff819b0684 0000000000000000
[ 1.249265] 0000000000000009 ffff88000eccfd80 ffffffff811de4fb ffff88000eccfd70
[ 1.250652] ffffffff810c25b2 0000187f00000000 ffffffff824cde20 000000000000187f
[ 1.252049] Call Trace:
[ 1.252508] [<ffffffff819b0684>] dump_stack+0xf9/0x185
[ 1.253264] [<ffffffff811de4fb>] ? init_tracer_tracefs+0x57b/0x640
[ 1.254133] [<ffffffff810c25b2>] __warn+0x192/0x1c0
[ 1.254856] [<ffffffff810c2648>] warn_slowpath_fmt+0x68/0x80
[ 1.255669] [<ffffffff8165c0fc>] ? __create_dir+0x16c/0x1f0
[ 1.256478] [<ffffffff811de4fb>] init_tracer_tracefs+0x57b/0x640
[ 1.257335] [<ffffffff82dbb965>] ? do_early_param+0xf6/0xf6
[ 1.258136] [<ffffffff82dbb965>] ? do_early_param+0xf6/0xf6
[ 1.258933] [<ffffffff82df0857>] tracer_init_tracefs+0x98/0x20c
[ 1.259771] [<ffffffff82df07bf>] ? register_tracer+0x246/0x246
[ 1.260598] [<ffffffff81002239>] do_one_initcall+0x79/0x240
[ 1.261405] [<ffffffff82dbb965>] ? do_early_param+0xf6/0xf6
[ 1.262210] [<ffffffff82dbc951>] kernel_init_freeable+0x1eb/0x2d3
[ 1.263071] [<ffffffff82009078>] kernel_init+0x18/0x1f0
[ 1.263826] [<ffffffff8201450f>] ret_from_fork+0x1f/0x40
[ 1.264604] [<ffffffff82009060>] ? rest_init+0x90/0x90
[ 1.265547] ---[ end trace 0e8e4ebf618fbad5 ]---
[ 1.266356] Could not create tracefs 'tracing_thresh' entry
git bisect start 468e21ad86d6d4c008fd03b5dc29d43a8f9e27ef 523d939ef98fd712632d93a5a2b588e477a7565e --
git bisect bad 0f48c074e49b503217e34b3f6a158a5757218ec1 # 14:49 0- 26 Merge 'miklos-vfs/overlayfs-next' into devel-spot-201607280820
git bisect bad cf934a88e87c2f2a9cdbf78dbd4bcccc65e98162 # 14:49 0- 50 Merge 'spi/for-next' into devel-spot-201607280820
git bisect good b975c455ad5a9fdbcfaffbb4ecab4a1fa9337525 # 14:54 21+ 6 Merge 'linux-review/Nicolin-Chen/ASoC-rt5659-Add-mclk-controls/20160728-070432' into devel-spot-201607280820
git bisect good c655dcde536ef0ce1f42c9887ff7dfc010ddf1bd # 14:59 22+ 4 Merge 'linux-review/Laura-Garcia-Liebana/netfilter-nft_nth-match-every-n-packets/20160728-060610' into devel-spot-201607280820
git bisect bad 891168b2ad7ca32fdd5a8854789677bca953dbc0 # 14:59 0- 30 Merge 'block/for-linus' into devel-spot-201607280820
git bisect bad be216e3fff5c5f84bf105104ddd147f710b53905 # 14:59 0- 28 Merge 'linux-review/Hari-Bathini/perf-tracefs-Container-aware-tracing-support/20160728-052933' into devel-spot-201607280820
git bisect good b18cfd29f3039effe4d2d58f61b868f0c272584b # 15:03 63+ 0 tracefs: add instances support for uprobe events
git bisect bad f49ffd11d08100b7bc21e04e06e2314371377464 # 15:03 0- 28 tracefs: add 'newinstance' mount option
# first bad commit: [f49ffd11d08100b7bc21e04e06e2314371377464] tracefs: add 'newinstance' mount option
git bisect good b18cfd29f3039effe4d2d58f61b868f0c272584b # 15:04 63+ 0 tracefs: add instances support for uprobe events
# extra tests with CONFIG_DEBUG_INFO_REDUCED
git bisect bad f49ffd11d08100b7bc21e04e06e2314371377464 # 15:04 0- 66 tracefs: add 'newinstance' mount option
# extra tests on HEAD of linux-devel/devel-spot-201607280820
git bisect bad 468e21ad86d6d4c008fd03b5dc29d43a8f9e27ef # 15:05 0- 54 0day head guard for 'devel-spot-201607280820'
# extra tests on tree/branch linux-review/Hari-Bathini/perf-tracefs-Container-aware-tracing-support/20160728-052933
git bisect bad f49ffd11d08100b7bc21e04e06e2314371377464 # 15:11 0- 28 tracefs: add 'newinstance' mount option
# extra tests with first bad commit reverted
git bisect good 47a2d96332f81965e04f23969e5ba67c78ab12a9 # 15:41 60+ 1 Revert "tracefs: add 'newinstance' mount option"
# extra tests on tree/branch linus/master
git bisect good 194dc870a5890e855ecffb30f3b80ba7c88f96d6 # 16:08 66+ 6 Add braces to avoid "ambiguous ‘else’" compiler warnings
# extra tests on tree/branch linux-next/master
git bisect good d4e661a48572b90ab727149e2f2ec087380a573b # 18:18 66+ 4 Add linux-next specific files for 20160728
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
root=/dev/ram0
hung_task_panic=1
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
systemd.log_level=err
ignore_loglevel
earlyprintk=ttyS0,115200
console=ttyS0,115200
console=tty0
vga=normal
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation