Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
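For reference, with CONFIG_PREEMPT_COUNT=n the definitions in
include/linux/preempt.h reduce to plain compiler barriers, roughly
(a sketch from memory, not the exact upstream text):

	#ifndef CONFIG_PREEMPT_COUNT
	/* No preemption accounting: these are pure compiler barriers. */
	#define preempt_disable()	barrier()
	#define preempt_enable()	barrier()
	#endif

So in the !PREEMPT_COUNT case the preempt_disable()/preempt_enable() pair
is already nothing more than a pair of barrier()s.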
Test monitoring on custom github repo
by Thomas Garnier
Hi,
I am working on KASLR (PIE for x86_64). I previously used Kees's (CCed)
branches for lkp bot testing, but someone told me I could ask you to add a
custom github path so that all branches on it are monitored.
I pushed my changes to: https://github.com/thgarnie/linux (kasrl_pie_v2
right now)
Can you add it? Anything I need to do?
Thanks,
--
Thomas
[lkp-robot] [brd] 316ba5736c: aim7.jobs-per-min -11.2% regression
by kernel test robot
Greeting,
FYI, we noticed a -11.2% regression of aim7.jobs-per-min due to commit:
commit: 316ba5736c9caa5dbcd84085989862d2df57431d ("brd: Mark as non-rotational")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-4.18/block
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_rw
load: 1500
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
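For context, the commit under test is essentially a one-line change that marks
brd's request queue as non-rotational; roughly (helper and field names assumed
from drivers/block/brd.c and the block layer of that series, not quoted
verbatim):

	/* in brd_alloc(), once the request queue has been set up */
	blk_queue_flag_set(QUEUE_FLAG_NONROT, brd->brd_queue);

btrfs keys some of its allocator heuristics ("ssd" mode) off the rotational
flag, so that is one plausible place for the aim7 delta below to come from.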
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/btrfs/x86_64-rhel-7.2/1500/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_rw/aim7
commit:
522a777566 ("block: consolidate struct request timestamp fields")
316ba5736c ("brd: Mark as non-rotational")
522a777566f56696 316ba5736c9caa5dbcd8408598
---------------- --------------------------
%stddev %change %stddev
\ | \
28321 -11.2% 25147 aim7.jobs-per-min
318.19 +12.6% 358.23 aim7.time.elapsed_time
318.19 +12.6% 358.23 aim7.time.elapsed_time.max
1437526 ± 2% +14.6% 1646849 ± 2% aim7.time.involuntary_context_switches
11986 +14.2% 13691 aim7.time.system_time
73.06 ± 2% -3.6% 70.43 aim7.time.user_time
2449470 ± 2% -25.0% 1837521 ± 4% aim7.time.voluntary_context_switches
20.25 ± 58% +1681.5% 360.75 ±109% numa-meminfo.node1.Mlocked
456062 -16.3% 381859 softirqs.SCHED
9015 ± 7% -21.3% 7098 ± 22% meminfo.CmaFree
47.50 ± 58% +1355.8% 691.50 ± 92% meminfo.Mlocked
5.24 ± 3% -1.2 3.99 ± 2% mpstat.cpu.idle%
0.61 ± 2% -0.1 0.52 ± 2% mpstat.cpu.usr%
16627 +12.8% 18762 ± 4% slabinfo.Acpi-State.active_objs
16627 +12.9% 18775 ± 4% slabinfo.Acpi-State.num_objs
57.00 ± 2% +17.5% 67.00 vmstat.procs.r
20936 -24.8% 15752 ± 2% vmstat.system.cs
45474 -1.7% 44681 vmstat.system.in
6.50 ± 59% +1157.7% 81.75 ± 75% numa-vmstat.node0.nr_mlock
242870 ± 3% +13.2% 274913 ± 7% numa-vmstat.node0.nr_written
2278 ± 7% -22.6% 1763 ± 21% numa-vmstat.node1.nr_free_cma
4.75 ± 58% +1789.5% 89.75 ±109% numa-vmstat.node1.nr_mlock
88018135 ± 3% -48.9% 44980457 ± 7% cpuidle.C1.time
1398288 ± 3% -51.1% 683493 ± 9% cpuidle.C1.usage
3499814 ± 2% -38.5% 2153158 ± 5% cpuidle.C1E.time
52722 ± 4% -45.6% 28692 ± 6% cpuidle.C1E.usage
9865857 ± 3% -40.1% 5905155 ± 5% cpuidle.C3.time
69656 ± 2% -42.6% 39990 ± 5% cpuidle.C3.usage
590856 ± 2% -12.3% 517910 cpuidle.C6.usage
46160 ± 7% -53.7% 21372 ± 11% cpuidle.POLL.time
1716 ± 7% -46.6% 916.25 ± 14% cpuidle.POLL.usage
197656 +4.1% 205732 proc-vmstat.nr_active_file
191867 +4.1% 199647 proc-vmstat.nr_dirty
509282 +1.6% 517318 proc-vmstat.nr_file_pages
2282 ± 8% -24.4% 1725 ± 22% proc-vmstat.nr_free_cma
357.50 +10.6% 395.25 ± 2% proc-vmstat.nr_inactive_file
11.50 ± 58% +1397.8% 172.25 ± 93% proc-vmstat.nr_mlock
970355 ± 4% +14.6% 1111549 ± 8% proc-vmstat.nr_written
197984 +4.1% 206034 proc-vmstat.nr_zone_active_file
357.50 +10.6% 395.25 ± 2% proc-vmstat.nr_zone_inactive_file
192282 +4.1% 200126 proc-vmstat.nr_zone_write_pending
7901465 ± 3% -14.0% 6795016 ± 16% proc-vmstat.pgalloc_movable
886101 +10.2% 976329 proc-vmstat.pgfault
2.169e+12 +15.2% 2.497e+12 perf-stat.branch-instructions
0.41 -0.1 0.35 perf-stat.branch-miss-rate%
31.19 ± 2% +1.6 32.82 perf-stat.cache-miss-rate%
9.116e+09 +8.3% 9.869e+09 perf-stat.cache-misses
2.924e+10 +2.9% 3.008e+10 ± 2% perf-stat.cache-references
6712739 ± 2% -15.4% 5678643 ± 2% perf-stat.context-switches
4.02 +2.7% 4.13 perf-stat.cpi
3.761e+13 +17.3% 4.413e+13 perf-stat.cpu-cycles
606958 -13.7% 523758 ± 2% perf-stat.cpu-migrations
2.476e+12 +13.4% 2.809e+12 perf-stat.dTLB-loads
0.18 ± 2% -0.0 0.16 ± 9% perf-stat.dTLB-store-miss-rate%
1.079e+09 ± 2% -9.6% 9.755e+08 ± 9% perf-stat.dTLB-store-misses
5.933e+11 +1.6% 6.029e+11 perf-stat.dTLB-stores
9.349e+12 +14.2% 1.068e+13 perf-stat.instructions
11247 ± 11% +19.8% 13477 ± 9% perf-stat.instructions-per-iTLB-miss
0.25 -2.6% 0.24 perf-stat.ipc
865561 +10.3% 954350 perf-stat.minor-faults
2.901e+09 ± 3% +9.8% 3.186e+09 ± 3% perf-stat.node-load-misses
3.682e+09 ± 3% +11.0% 4.088e+09 ± 3% perf-stat.node-loads
3.778e+09 +4.8% 3.959e+09 ± 2% perf-stat.node-store-misses
5.079e+09 +6.4% 5.402e+09 perf-stat.node-stores
865565 +10.3% 954352 perf-stat.page-faults
51.75 ± 5% -12.5% 45.30 ± 10% sched_debug.cfs_rq:/.load_avg.avg
316.35 ± 3% +17.2% 370.81 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.stddev
15294 ± 30% +234.9% 51219 ± 76% sched_debug.cpu.avg_idle.min
299443 ± 3% -7.3% 277566 ± 5% sched_debug.cpu.avg_idle.stddev
1182 ± 19% -26.3% 872.02 ± 13% sched_debug.cpu.nr_load_updates.stddev
1.22 ± 8% +21.7% 1.48 ± 6% sched_debug.cpu.nr_running.avg
2.75 ± 10% +26.2% 3.47 ± 6% sched_debug.cpu.nr_running.max
0.58 ± 7% +24.2% 0.73 ± 6% sched_debug.cpu.nr_running.stddev
77148 -20.0% 61702 ± 7% sched_debug.cpu.nr_switches.avg
70024 -24.8% 52647 ± 8% sched_debug.cpu.nr_switches.min
6662 ± 6% +61.9% 10789 ± 24% sched_debug.cpu.nr_switches.stddev
80.45 ± 18% -19.1% 65.05 ± 6% sched_debug.cpu.nr_uninterruptible.stddev
76819 -19.3% 62008 ± 8% sched_debug.cpu.sched_count.avg
70616 -23.5% 53996 ± 8% sched_debug.cpu.sched_count.min
5494 ± 9% +85.3% 10179 ± 26% sched_debug.cpu.sched_count.stddev
16936 -52.9% 7975 ± 9% sched_debug.cpu.sched_goidle.avg
19281 -49.9% 9666 ± 7% sched_debug.cpu.sched_goidle.max
15417 -54.8% 6962 ± 10% sched_debug.cpu.sched_goidle.min
875.00 ± 6% -35.0% 569.09 ± 13% sched_debug.cpu.sched_goidle.stddev
40332 -23.5% 30851 ± 7% sched_debug.cpu.ttwu_count.avg
35074 -26.3% 25833 ± 6% sched_debug.cpu.ttwu_count.min
3239 ± 8% +67.4% 5422 ± 28% sched_debug.cpu.ttwu_count.stddev
5232 +27.4% 6665 ± 13% sched_debug.cpu.ttwu_local.avg
15877 ± 12% +77.5% 28184 ± 27% sched_debug.cpu.ttwu_local.max
2530 ± 10% +95.9% 4956 ± 27% sched_debug.cpu.ttwu_local.stddev
2.52 ± 7% -0.6 1.95 ± 3% perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
1.48 ± 12% -0.5 1.01 ± 4% perf-profile.calltrace.cycles-pp.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
1.18 ± 16% -0.4 0.76 ± 7% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write
1.18 ± 16% -0.4 0.76 ± 7% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.__dentry_kill.dentry_kill.dput.__fput.task_work_run
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dentry_kill.dput.__fput
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
0.90 ± 18% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.btrfs_evict_inode.evict.__dentry_kill.dentry_kill.dput
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
1.69 -0.1 1.54 ± 2% perf-profile.calltrace.cycles-pp.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
0.87 ± 4% -0.1 0.76 ± 2% perf-profile.calltrace.cycles-pp.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter
0.87 ± 4% -0.1 0.76 ± 2% perf-profile.calltrace.cycles-pp.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
0.71 ± 6% -0.1 0.61 ± 2% perf-profile.calltrace.cycles-pp.clear_state_bit.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write
0.69 ± 6% -0.1 0.60 ± 2% perf-profile.calltrace.cycles-pp.btrfs_clear_bit_hook.clear_state_bit.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need
96.77 +0.6 97.33 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.56 ± 3% perf-profile.calltrace.cycles-pp.can_overcommit.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter
96.72 +0.6 97.29 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.13 +0.8 43.91 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
42.37 +0.8 43.16 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write
43.11 +0.8 43.89 perf-profile.calltrace.cycles-pp.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
42.96 +0.8 43.77 perf-profile.calltrace.cycles-pp._raw_spin_lock.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter
95.28 +0.9 96.23 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
95.22 +1.0 96.18 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
94.88 +1.0 95.85 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
94.83 +1.0 95.80 perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.__vfs_write.vfs_write.ksys_write.do_syscall_64
94.51 +1.0 95.50 perf-profile.calltrace.cycles-pp.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write.ksys_write
42.44 +1.1 43.52 perf-profile.calltrace.cycles-pp._raw_spin_lock.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter
42.09 +1.1 43.18 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write
44.07 +1.2 45.29 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
43.42 +1.3 44.69 perf-profile.calltrace.cycles-pp.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
2.06 ± 18% -0.9 1.21 ± 6% perf-profile.children.cycles-pp.btrfs_search_slot
2.54 ± 7% -0.6 1.96 ± 3% perf-profile.children.cycles-pp.btrfs_dirty_pages
1.05 ± 24% -0.5 0.52 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.50 ± 12% -0.5 1.03 ± 4% perf-profile.children.cycles-pp.btrfs_get_extent
1.22 ± 15% -0.4 0.79 ± 8% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.81 ± 5% -0.4 0.41 ± 6% perf-profile.children.cycles-pp.btrfs_calc_reclaim_metadata_size
0.74 ± 24% -0.4 0.35 ± 9% perf-profile.children.cycles-pp.btrfs_lock_root_node
0.74 ± 24% -0.4 0.35 ± 9% perf-profile.children.cycles-pp.btrfs_tree_lock
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.__dentry_kill
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.evict
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.dentry_kill
0.90 ± 18% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.btrfs_evict_inode
0.91 ± 18% -0.3 0.57 ± 4% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.52 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.do_idle
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.children.cycles-pp.task_work_run
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.children.cycles-pp.__fput
0.90 ± 18% -0.3 0.57 ± 4% perf-profile.children.cycles-pp.dput
0.51 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.secondary_startup_64
0.51 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.cpu_startup_entry
0.50 ± 21% -0.3 0.17 ± 16% perf-profile.children.cycles-pp.start_secondary
0.47 ± 20% -0.3 0.16 ± 13% perf-profile.children.cycles-pp.cpuidle_enter_state
0.47 ± 19% -0.3 0.16 ± 13% perf-profile.children.cycles-pp.intel_idle
0.61 ± 20% -0.3 0.36 ± 11% perf-profile.children.cycles-pp.btrfs_tree_read_lock
0.47 ± 26% -0.3 0.21 ± 10% perf-profile.children.cycles-pp.prepare_to_wait_event
0.64 ± 18% -0.2 0.39 ± 9% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
0.40 ± 22% -0.2 0.21 ± 5% perf-profile.children.cycles-pp.btrfs_clear_path_blocking
0.38 ± 23% -0.2 0.19 ± 13% perf-profile.children.cycles-pp.finish_wait
1.51 ± 3% -0.2 1.35 ± 2% perf-profile.children.cycles-pp.__clear_extent_bit
1.71 -0.1 1.56 ± 2% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
0.29 ± 25% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.btrfs_orphan_del
0.27 ± 27% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.btrfs_del_orphan_item
0.33 ± 18% -0.1 0.19 ± 9% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.33 ± 19% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.__wake_up_common_lock
0.45 ± 15% -0.1 0.34 ± 2% perf-profile.children.cycles-pp.btrfs_alloc_data_chunk_ondemand
0.47 ± 16% -0.1 0.36 ± 4% perf-profile.children.cycles-pp.btrfs_check_data_free_space
0.91 ± 4% -0.1 0.81 ± 3% perf-profile.children.cycles-pp.clear_extent_bit
1.07 ± 5% -0.1 0.97 perf-profile.children.cycles-pp.__set_extent_bit
0.77 ± 6% -0.1 0.69 ± 3% perf-profile.children.cycles-pp.btrfs_clear_bit_hook
0.17 ± 20% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.16 ± 22% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.btrfs_lookup_inode
0.21 ± 17% -0.1 0.14 ± 19% perf-profile.children.cycles-pp.__btrfs_update_delayed_inode
0.26 ± 12% -0.1 0.18 ± 13% perf-profile.children.cycles-pp.btrfs_async_run_delayed_root
0.52 ± 5% -0.1 0.45 perf-profile.children.cycles-pp.set_extent_bit
0.45 ± 5% -0.1 0.40 ± 3% perf-profile.children.cycles-pp.alloc_extent_state
0.11 ± 17% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.btrfs_clear_lock_blocking_rw
0.28 ± 9% -0.0 0.23 ± 3% perf-profile.children.cycles-pp.btrfs_drop_pages
0.07 -0.0 0.03 ±100% perf-profile.children.cycles-pp.btrfs_set_lock_blocking_rw
0.39 ± 3% -0.0 0.34 ± 3% perf-profile.children.cycles-pp.get_alloc_profile
0.33 ± 7% -0.0 0.29 perf-profile.children.cycles-pp.btrfs_set_extent_delalloc
0.38 ± 2% -0.0 0.35 ± 4% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.49 ± 3% -0.0 0.46 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
0.18 ± 4% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.08 ± 5% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.btrfs_set_path_blocking
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.truncate_cleanup_page
0.80 ± 4% +0.2 0.95 ± 2% perf-profile.children.cycles-pp.can_overcommit
96.84 +0.5 97.37 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
96.80 +0.5 97.35 perf-profile.children.cycles-pp.do_syscall_64
43.34 +0.8 44.17 perf-profile.children.cycles-pp.btrfs_inode_rsv_release
43.49 +0.8 44.32 perf-profile.children.cycles-pp.block_rsv_release_bytes
95.32 +0.9 96.26 perf-profile.children.cycles-pp.ksys_write
95.26 +0.9 96.20 perf-profile.children.cycles-pp.vfs_write
94.91 +1.0 95.88 perf-profile.children.cycles-pp.__vfs_write
94.84 +1.0 95.81 perf-profile.children.cycles-pp.btrfs_file_write_iter
94.55 +1.0 95.55 perf-profile.children.cycles-pp.__btrfs_buffered_write
86.68 +1.0 87.70 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
44.08 +1.2 45.31 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
43.49 +1.3 44.77 perf-profile.children.cycles-pp.reserve_metadata_bytes
87.59 +1.8 89.38 perf-profile.children.cycles-pp._raw_spin_lock
0.47 ± 19% -0.3 0.16 ± 13% perf-profile.self.cycles-pp.intel_idle
0.33 ± 6% -0.1 0.18 ± 6% perf-profile.self.cycles-pp.get_alloc_profile
0.27 ± 8% -0.0 0.22 ± 4% perf-profile.self.cycles-pp.btrfs_drop_pages
0.07 -0.0 0.03 ±100% perf-profile.self.cycles-pp.btrfs_set_lock_blocking_rw
0.14 ± 5% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.clear_page_dirty_for_io
0.09 ± 5% -0.0 0.07 ± 10% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.17 ± 4% +0.1 0.23 ± 3% perf-profile.self.cycles-pp.reserve_metadata_bytes
0.31 ± 7% +0.1 0.45 ± 2% perf-profile.self.cycles-pp.can_overcommit
86.35 +1.0 87.39 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
aim7.jobs-per-min
29000 +-+-----------------------------------------------------------------+
28500 +-+ +.. + +..+.. +.. |
|..+ +.+..+.. : .. + .+.+..+..+.+.. .+..+.. + + + |
28000 +-+ + .. : + +. + + + |
27500 +-+ + + |
| |
27000 +-+ |
26500 +-+ |
26000 +-+ |
| |
25500 +-+ O O O O O |
25000 +-+ O O O O O O O O O
| O O O O O O O O |
24500 O-+O O O O |
24000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [sched/fair] d519329f72: unixbench.score -9.9% regression
by kernel test robot
Greeting,
FYI, we noticed a -9.9% regression of unixbench.score due to commit:
commit: d519329f72a6f36bc4f2b85452640cfe583b4f81 ("sched/fair: Update util_est only on util_avg updates")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: unixbench
on test machine: 8 threads Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz with 6G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: execl
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/300s/nhm-white/execl/unixbench
commit:
a07630b8b2 ("sched/cpufreq/schedutil: Use util_est for OPP selection")
d519329f72 ("sched/fair: Update util_est only on util_avg updates")
a07630b8b2c16f82 d519329f72a6f36bc4f2b85452
---------------- --------------------------
%stddev %change %stddev
\ | \
4626 -9.9% 4167 unixbench.score
3495362 ± 4% +70.4% 5957769 ± 2% unixbench.time.involuntary_context_switches
2.866e+08 -11.6% 2.534e+08 unixbench.time.minor_page_faults
666.75 -9.7% 602.25 unixbench.time.percent_of_cpu_this_job_got
1830 -9.7% 1653 unixbench.time.system_time
395.13 -5.2% 374.58 unixbench.time.user_time
8611715 -58.9% 3537314 ± 3% unixbench.time.voluntary_context_switches
6639375 -9.1% 6033775 unixbench.workload
26025 +3849.3% 1027825 interrupts.CAL:Function_call_interrupts
4856 ± 14% -27.4% 3523 ± 11% slabinfo.filp.active_objs
3534356 -8.8% 3223918 softirqs.RCU
77929 -11.2% 69172 vmstat.system.cs
19489 ± 2% +7.5% 20956 vmstat.system.in
9.05 ± 9% +11.0% 10.05 ± 8% boot-time.dhcp
131.63 ± 4% +8.6% 142.89 ± 7% boot-time.idle
9.07 ± 9% +11.0% 10.07 ± 8% boot-time.kernel_boot
76288 ± 3% -12.8% 66560 ± 3% meminfo.DirectMap4k
16606 -13.1% 14433 meminfo.Inactive
16515 -13.2% 14341 meminfo.Inactive(anon)
11.87 ± 5% +7.8 19.63 ± 4% mpstat.cpu.idle%
0.07 ± 35% -0.0 0.04 ± 17% mpstat.cpu.soft%
68.91 -6.1 62.82 mpstat.cpu.sys%
29291570 +325.4% 1.246e+08 cpuidle.C1.time
8629105 -36.1% 5513780 cpuidle.C1.usage
668733 ± 12% +11215.3% 75668902 ± 2% cpuidle.C1E.time
9763 ± 12% +16572.7% 1627882 ± 2% cpuidle.C1E.usage
1.834e+08 ± 9% +23.1% 2.258e+08 ± 11% cpuidle.C3.time
222674 ± 8% +133.4% 519690 ± 6% cpuidle.C3.usage
4129 -13.3% 3581 proc-vmstat.nr_inactive_anon
4129 -13.3% 3581 proc-vmstat.nr_zone_inactive_anon
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_hit
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_local
6625 -10.9% 5905 proc-vmstat.pgactivate
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgalloc_normal
2.936e+08 -12.6% 2.566e+08 proc-vmstat.pgfault
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgfree
2850 -15.3% 2413 turbostat.Avg_MHz
8629013 -36.1% 5513569 turbostat.C1
1.09 +3.5 4.61 turbostat.C1%
9751 ± 12% +16593.0% 1627864 ± 2% turbostat.C1E
0.03 ± 19% +2.8 2.80 turbostat.C1E%
222574 ± 8% +133.4% 519558 ± 6% turbostat.C3
6.84 ± 8% +1.5 8.34 ± 10% turbostat.C3%
2.82 ± 7% +250.3% 9.87 ± 2% turbostat.CPU%c1
6552773 ± 3% +23.8% 8111699 ± 2% turbostat.IRQ
2.02 ± 11% +28.3% 2.58 ± 9% turbostat.Pkg%pc3
7.635e+11 -12.5% 6.682e+11 perf-stat.branch-instructions
3.881e+10 -12.9% 3.381e+10 perf-stat.branch-misses
2.09 -0.3 1.77 ± 4% perf-stat.cache-miss-rate%
1.551e+09 -15.1% 1.316e+09 ± 4% perf-stat.cache-misses
26177920 -10.5% 23428188 perf-stat.context-switches
1.99 -2.8% 1.93 perf-stat.cpi
7.553e+12 -14.7% 6.446e+12 perf-stat.cpu-cycles
522523 ± 2% +628.3% 3805664 perf-stat.cpu-migrations
2.425e+10 ± 4% -14.3% 2.078e+10 perf-stat.dTLB-load-misses
1.487e+12 -11.3% 1.319e+12 perf-stat.dTLB-loads
1.156e+10 ± 3% -7.7% 1.066e+10 perf-stat.dTLB-store-misses
6.657e+11 -11.1% 5.915e+11 perf-stat.dTLB-stores
0.15 +0.0 0.15 perf-stat.iTLB-load-miss-rate%
5.807e+09 -11.0% 5.166e+09 perf-stat.iTLB-load-misses
3.799e+12 -12.1% 3.34e+12 perf-stat.iTLB-loads
3.803e+12 -12.2% 3.338e+12 perf-stat.instructions
654.99 -1.4% 646.07 perf-stat.instructions-per-iTLB-miss
0.50 +2.8% 0.52 perf-stat.ipc
2.754e+08 -11.6% 2.435e+08 perf-stat.minor-faults
1.198e+08 ± 7% +73.1% 2.074e+08 ± 4% perf-stat.node-stores
2.754e+08 -11.6% 2.435e+08 perf-stat.page-faults
572928 -3.4% 553258 perf-stat.path-length
unixbench.score
4800 +-+------------------------------------------------------------------+
|+ + + |
4700 +-+ + + :+ +. :+ + + |
| + + + +. : + + + + + + + .+++++ .+ +|
4600 +-+ +++ :+++ + ++: : :+ +++ ++.++++ + ++++ ++ |
| + + + ++ ++ + |
4500 +-+ |
| |
4400 +-+ |
| |
4300 +-+ |
O |
4200 +-O O O OOOO OO OOO OOOO OOOO O O |
|O OO OOOOO O O OO O O O O O OO |
4100 +-+------------------------------------------------------------------+
unixbench.workload
9e+06 +-+---------------------------------------------------------------+
| : |
8.5e+06 +-+ : |
| : |
8e+06 +-+ : |
| :: |
7.5e+06 +-+ : : + |
| +: : : + |
7e+06 +-+ + + :: : :: + + : + + + + + |
|:+ + + : :: : : :: : :+ : : ::+ :+ .+ :+ ++ ++ + ++ ::++|
6.5e+06 +-O+ +++ ++++ +++ + ++ +.+ + ++ + + + + + + + +.+++ + |
O O O O O O O |
6e+06 +O+OOO O OOOOOOOO OOOO OO OOOOOOOOO O O O OO |
| O |
5.5e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [nfsd4] 517dc52baa: fsmark.files_per_sec 32.4% improvement
by kernel test robot
Greeting,
FYI, we noticed a 32.4% improvement of fsmark.files_per_sec due to commit:
commit: 517dc52baa2a508c82f68bbc7219b48169e6b29f ("nfsd4: shortern default lease period")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: fsmark
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with following parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
fs: f2fs
fs2: nfsv4
filesize: 4M
test_size: 40G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
test-description: fsmark is a file system benchmark that tests synchronous write workloads, for example a mail server workload.
test-url: https://sourceforge.net/projects/fsmark/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/nfsv4/f2fs/1x/x86_64-rhel-7.2/1t/debian-x86_64-2016-08-31.cgz/fsyncBeforeClose/ivb44/40G/fsmark
commit:
c2993a1d7d ("nfsd4: extend reclaim period for reclaiming clients")
517dc52baa ("nfsd4: shortern default lease period")
c2993a1d7d6687fd 517dc52baa2a508c82f68bbc72
---------------- --------------------------
%stddev %change %stddev
\ | \
53.60 +32.4% 70.95 fsmark.files_per_sec
191.89 -24.4% 145.16 fsmark.time.elapsed_time
191.89 -24.4% 145.16 fsmark.time.elapsed_time.max
17.75 ± 2% +31.0% 23.25 ± 3% fsmark.time.percent_of_cpu_this_job_got
1.43 ± 2% +0.4 1.85 ± 3% mpstat.cpu.sys%
0.03 ± 3% +0.0 0.04 ± 3% mpstat.cpu.usr%
1333968 ± 3% -24.4% 1008280 ± 2% softirqs.SCHED
4580860 ± 3% -25.0% 3433796 ± 4% softirqs.TIMER
49621514 ± 50% -33.8% 32838257 ± 4% cpuidle.C3.time
8.87e+09 ± 3% -25.7% 6.588e+09 cpuidle.C6.time
9796946 ± 3% -24.8% 7369851 cpuidle.C6.usage
212766 ± 3% +33.3% 283568 vmstat.io.bo
13605317 +27.6% 17354458 vmstat.memory.cache
41139824 ± 2% -15.6% 34707711 vmstat.memory.free
0.00 +1e+102% 1.00 vmstat.procs.r
16158 ± 9% +33.0% 21495 ± 12% vmstat.system.cs
28279 ± 10% +23.0% 34796 ± 4% meminfo.Active(file)
13485862 +27.9% 17253726 meminfo.Cached
20655 ± 10% +24.7% 25748 meminfo.Dirty
12246598 +30.7% 16008540 meminfo.Inactive
12237146 +30.7% 15999087 meminfo.Inactive(file)
41246576 ± 2% -15.3% 34917557 meminfo.MemFree
123641 ± 2% +15.8% 143144 meminfo.SReclaimable
233101 +10.4% 257273 meminfo.Slab
13275 ± 28% +54.6% 20527 ± 15% numa-meminfo.node0.Active(file)
9394 ± 33% +76.6% 16592 ± 14% numa-meminfo.node0.Dirty
20060196 ± 8% -18.6% 16336481 ± 10% numa-meminfo.node0.MemFree
5768180 ± 2% +67.5% 9661137 ± 22% numa-meminfo.node1.FilePages
5162181 ± 3% +74.9% 9029558 ± 23% numa-meminfo.node1.Inactive
5158148 ± 3% +75.0% 9027345 ± 23% numa-meminfo.node1.Inactive(file)
21163215 ± 5% -12.3% 18564891 ± 6% numa-meminfo.node1.MemFree
367.00 ± 27% +82.6% 670.25 ± 40% numa-meminfo.node1.NFS_Unstable
624.00 ± 21% +95.3% 1218 ± 35% numa-meminfo.node1.Writeback
2.236e+09 ± 6% -21.0% 1.767e+09 ± 12% perf-stat.branch-misses
3.553e+09 ± 6% -12.3% 3.115e+09 ± 10% perf-stat.cache-misses
9.503e+09 ± 4% -15.0% 8.074e+09 ± 9% perf-stat.cache-references
8701 ± 13% -38.3% 5367 ± 26% perf-stat.cpu-migrations
1.037e+08 ± 5% -20.6% 82303955 ± 7% perf-stat.dTLB-store-misses
86.00 -1.2 84.80 perf-stat.iTLB-load-miss-rate%
1.33e+08 ± 5% -20.2% 1.062e+08 ± 11% perf-stat.iTLB-load-misses
543566 ± 3% -24.5% 410527 perf-stat.minor-faults
543567 ± 3% -24.5% 410533 perf-stat.page-faults
98.50 +15.0% 113.25 turbostat.Avg_MHz
2081 +10.2% 2292 ± 3% turbostat.Bzy_MHz
0.59 +0.1 0.73 ± 2% turbostat.C1%
0.15 ± 2% +0.1 0.20 ± 6% turbostat.C1E%
9795281 ± 3% -24.8% 7368291 turbostat.C6
58.70 +9.0% 64.01 ± 3% turbostat.CorWatt
9631901 ± 3% -24.9% 7237299 turbostat.IRQ
4.29 ± 3% -26.1% 3.17 ± 5% turbostat.Pkg%pc6
87.05 +6.4% 92.61 ± 2% turbostat.PkgWatt
8.29 +2.9% 8.54 turbostat.RAMWatt
3296 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_active_file
2342 ± 33% +76.4% 4131 ± 15% numa-vmstat.node0.nr_dirty
5021740 ± 8% -18.7% 4085154 ± 9% numa-vmstat.node0.nr_free_pages
764.00 ± 57% +124.9% 1718 ± 28% numa-vmstat.node0.nr_vmscan_immediate_reclaim
3297 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_zone_active_file
2472 ± 31% +72.1% 4254 ± 14% numa-vmstat.node0.nr_zone_write_pending
2418556 ± 8% +51.7% 3669187 ± 18% numa-vmstat.node1.nr_dirtied
1442200 ± 2% +67.4% 2413905 ± 22% numa-vmstat.node1.nr_file_pages
5299408 ± 5% -12.3% 4649246 ± 6% numa-vmstat.node1.nr_free_pages
1289719 ± 3% +74.9% 2255484 ± 23% numa-vmstat.node1.nr_inactive_file
85499 ± 21% +39.3% 119063 ± 22% numa-vmstat.node1.nr_indirectly_reclaimable
92.00 ± 23% +82.1% 167.50 ± 33% numa-vmstat.node1.nr_unstable
136.50 ± 25% +92.3% 262.50 ± 25% numa-vmstat.node1.nr_writeback
2415645 ± 8% +51.8% 3666735 ± 18% numa-vmstat.node1.nr_written
1289715 ± 3% +74.9% 2255489 ± 23% numa-vmstat.node1.nr_zone_inactive_file
7109 ± 10% +22.6% 8718 ± 4% proc-vmstat.nr_active_file
5188 ± 10% +24.3% 6450 proc-vmstat.nr_dirty
1326117 -4.8% 1262303 proc-vmstat.nr_dirty_background_threshold
2655481 -4.8% 2527699 proc-vmstat.nr_dirty_threshold
3372021 +27.9% 4311617 proc-vmstat.nr_file_pages
10302256 ± 2% -15.3% 8722920 proc-vmstat.nr_free_pages
3059627 +30.7% 3997741 proc-vmstat.nr_inactive_file
191536 ± 2% +31.6% 252107 proc-vmstat.nr_indirectly_reclaimable
3508 -6.4% 3283 proc-vmstat.nr_shmem
30984 ± 2% +15.6% 35811 proc-vmstat.nr_slab_reclaimable
27320 +4.5% 28553 ± 2% proc-vmstat.nr_slab_unreclaimable
275.25 ± 13% +63.5% 450.00 ± 20% proc-vmstat.nr_writeback
7109 ± 10% +22.6% 8716 ± 4% proc-vmstat.nr_zone_active_file
3059632 +30.7% 3997758 proc-vmstat.nr_zone_inactive_file
5414 ± 9% +25.8% 6812 proc-vmstat.nr_zone_write_pending
561847 ± 3% -24.2% 425887 ± 2% proc-vmstat.pgfault
951.53 ± 4% -15.3% 805.83 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
195.00 ± 3% +37.1% 267.35 ± 7% sched_debug.cfs_rq:/.load_avg.avg
0.78 -11.5% 0.69 ± 2% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 -11.1% 0.67 sched_debug.cfs_rq:/.nr_spread_over.min
129.41 ± 10% -23.9% 98.49 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
187.51 ± 4% +23.1% 230.86 sched_debug.cfs_rq:/.util_avg.avg
115822 -26.1% 85630 sched_debug.cpu.clock.avg
115824 -26.1% 85633 sched_debug.cpu.clock.max
115818 -26.1% 85627 sched_debug.cpu.clock.min
115822 -26.1% 85630 sched_debug.cpu.clock_task.avg
115824 -26.1% 85633 sched_debug.cpu.clock_task.max
115818 -26.1% 85627 sched_debug.cpu.clock_task.min
16.83 ± 13% +118.0% 36.67 ± 52% sched_debug.cpu.cpu_load[0].avg
18.88 ± 9% +71.7% 32.43 ± 20% sched_debug.cpu.cpu_load[1].avg
498.88 ± 9% +83.7% 916.50 ± 39% sched_debug.cpu.cpu_load[1].max
82.58 ± 7% +70.8% 141.02 ± 33% sched_debug.cpu.cpu_load[1].stddev
4240 -19.7% 3403 sched_debug.cpu.curr->pid.max
95563 -31.3% 65668 sched_debug.cpu.nr_load_updates.avg
100959 -30.6% 70040 sched_debug.cpu.nr_load_updates.max
93175 -32.1% 63230 sched_debug.cpu.nr_load_updates.min
600.39 ± 4% -16.6% 500.69 ± 2% sched_debug.cpu.ttwu_local.avg
115820 -26.1% 85628 sched_debug.cpu_clk
115820 -26.1% 85628 sched_debug.ktime
0.05 ± 10% +48.4% 0.08 ± 20% sched_debug.rt_rq:/.rt_time.avg
2.39 ± 10% +48.9% 3.56 ± 20% sched_debug.rt_rq:/.rt_time.max
0.34 ± 10% +48.9% 0.51 ± 20% sched_debug.rt_rq:/.rt_time.stddev
116302 -26.0% 86097 sched_debug.sched_clk
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.active_objs
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.num_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.active_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.num_objs
2640 ± 8% +25.5% 3313 ± 6% slabinfo.ext4_io_end.active_objs
2674 ± 8% +25.4% 3354 ± 6% slabinfo.ext4_io_end.num_objs
2709 ± 9% +34.4% 3641 ± 5% slabinfo.f2fs_extent_tree.active_objs
2750 ± 9% +32.8% 3651 ± 5% slabinfo.f2fs_extent_tree.num_objs
2255 ± 5% +31.6% 2969 ± 8% slabinfo.f2fs_inode_cache.active_objs
2286 ± 4% +31.2% 3000 ± 7% slabinfo.f2fs_inode_cache.num_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.active_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.num_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.active_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.num_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.active_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.num_objs
13894 ± 6% +16.3% 16158 slabinfo.kmalloc-128.active_objs
13901 ± 6% +16.3% 16161 slabinfo.kmalloc-128.num_objs
2184 ± 2% +30.5% 2851 slabinfo.nfs_inode_cache.active_objs
2209 ± 3% +30.1% 2874 ± 2% slabinfo.nfs_inode_cache.num_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.active_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.num_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.active_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.num_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.active_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.num_objs
2062 ± 15% +17.2% 2417 ± 8% slabinfo.pool_workqueue.active_objs
2062 ± 15% +18.1% 2435 ± 7% slabinfo.pool_workqueue.num_objs
121248 ± 4% +25.6% 152261 slabinfo.radix_tree_node.active_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.active_slabs
121725 ± 4% +25.3% 152514 slabinfo.radix_tree_node.num_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.num_slabs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.active_objs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.num_objs
18605 +19.2% 22180 ± 3% slabinfo.vm_area_struct.active_objs
18668 +18.9% 22193 ± 3% slabinfo.vm_area_struct.num_objs
7.75 ± 5% -22.6% 6.00 nfsstat.Client.nfs.v3.commit.percent
7.00 +14.3% 8.00 nfsstat.Client.nfs.v3.getattr.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v3.write
8.00 -25.0% 6.00 nfsstat.Client.nfs.v3.write.percent
4.50 ± 11% -33.3% 3.00 nfsstat.Client.nfs.v4.access.percent
2546 ± 8% +24.2% 3164 nfsstat.Client.nfs.v4.close
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.close.percent
2546 ± 8% +24.3% 3164 nfsstat.Client.nfs.v4.commit
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.commit.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.confirm.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v4.getattr.percent
2551 ± 8% +24.2% 3169 nfsstat.Client.nfs.v4.lookup
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.lookup_root.percent
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.null.percent
2558 ± 8% +24.1% 3174 nfsstat.Client.nfs.v4.open
17.25 ± 2% -14.5% 14.75 ± 2% nfsstat.Client.nfs.v4.open.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.pathconf.percent
6.00 -25.0% 4.50 ± 11% nfsstat.Client.nfs.v4.server_caps.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.setclntid.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.statfs.percent
10189 ± 8% +24.3% 12662 nfsstat.Client.nfs.v4.write
22.50 ± 3% +26.7% 28.50 nfsstat.Client.nfs.v4.write.percent
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.authrefrsh
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.calls
20422 ± 8% +24.2% 25365 nfsstat.Server.nfs.v4.compound
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.null.percent
2552 ± 8% +24.2% 3170 nfsstat.Server.nfs.v4.operations.access
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.close
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.commit
17858 ± 8% +24.2% 22185 nfsstat.Server.nfs.v4.operations.getattr
5101 ± 8% +24.2% 6337 nfsstat.Server.nfs.v4.operations.getfh
2551 ± 8% +24.2% 3169 nfsstat.Server.nfs.v4.operations.lookup
4.00 -25.0% 3.00 nfsstat.Server.nfs.v4.operations.lookup.percent
2558 ± 8% +24.1% 3174 nfsstat.Server.nfs.v4.operations.open
6.00 -16.7% 5.00 nfsstat.Server.nfs.v4.operations.open.percent
20417 ± 8% +24.2% 25359 nfsstat.Server.nfs.v4.operations.putfh
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltid.percent
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltidconf.percent
10189 ± 8% +24.3% 12662 nfsstat.Server.nfs.v4.operations.write
6.25 ± 6% +36.0% 8.50 ± 5% nfsstat.Server.nfs.v4.operations.write.percent
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.packets
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.tcp
10194 ± 8% +24.3% 12667 nfsstat.Server.reply.cache.misses
10229 ± 8% +24.1% 12699 nfsstat.Server.reply.cache.nocache
20423 ± 8% +24.2% 25366 nfsstat.Server.rpc.calls
19.63 ± 6% -1.8 17.83 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
10.22 ± 10% -1.8 8.43 ± 13% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
7.83 ± 11% -1.5 6.33 ± 14% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
4.69 ± 14% -1.0 3.70 ± 19% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.20 ± 14% -0.9 3.29 ± 18% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.23 ± 20% -0.3 0.89 ± 23% perf-profile.calltrace.cycles-pp.rcu_check_callbacks.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.43 ± 8% -0.3 1.16 ± 10% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.04 ± 7% -0.1 0.90 ± 8% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.08 ± 4% -0.1 0.94 ± 7% perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 9% +0.2 1.38 ± 7% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.90 ± 5% +0.2 2.14 ± 6% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
0.93 ± 7% +0.3 1.18 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_trylock.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
1.00 ± 10% +0.3 1.26 ± 12% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
1.00 ± 10% +0.3 1.26 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_run_list
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_run_list.irq_work_run
2.88 ± 3% +0.4 3.29 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
3.66 ± 3% +0.4 4.11 ± 3% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
3.05 ± 2% +0.5 3.54 ± 5% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
91.67 +0.7 92.33 perf-profile.calltrace.cycles-pp.secondary_startup_64
7.46 ± 5% +1.0 8.46 ± 4% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
56.78 +1.7 58.52 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.53 ± 10% -1.9 8.60 ± 13% perf-profile.children.cycles-pp.hrtimer_interrupt
8.14 ± 11% -1.6 6.51 ± 14% perf-profile.children.cycles-pp.__hrtimer_run_queues
4.88 ± 13% -1.1 3.79 ± 18% perf-profile.children.cycles-pp.tick_sched_timer
4.32 ± 13% -1.0 3.35 ± 18% perf-profile.children.cycles-pp.tick_sched_handle
1.82 ± 12% -0.4 1.44 ± 20% perf-profile.children.cycles-pp.scheduler_tick
1.30 ± 17% -0.4 0.93 ± 22% perf-profile.children.cycles-pp.rcu_check_callbacks
1.54 ± 9% -0.3 1.25 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.25 ± 6% -0.2 1.03 ± 13% perf-profile.children.cycles-pp.ktime_get
1.15 ± 7% -0.2 0.97 ± 6% perf-profile.children.cycles-pp.clockevents_program_event
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.children.cycles-pp.native_write_msr
1.17 ± 4% -0.2 1.01 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.56 ± 9% -0.1 0.42 ± 18% perf-profile.children.cycles-pp.enqueue_hrtimer
0.16 ± 23% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.29 ± 15% -0.1 0.21 ± 29% perf-profile.children.cycles-pp.update_rq_clock
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.children.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.children.cycles-pp.calc_global_load_tick
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.pm_qos_read_value
1.42 ± 7% +0.2 1.59 ± 8% perf-profile.children.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.children.cycles-pp._raw_spin_trylock
3.11 ± 5% +0.4 3.49 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
3.81 ± 3% +0.4 4.23 ± 3% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
3.17 +0.5 3.62 ± 5% perf-profile.children.cycles-pp.rebalance_domains
91.87 +0.7 92.53 perf-profile.children.cycles-pp.do_idle
91.67 +0.7 92.33 perf-profile.children.cycles-pp.secondary_startup_64
91.67 +0.7 92.33 perf-profile.children.cycles-pp.cpu_startup_entry
7.71 ± 5% +0.9 8.65 ± 4% perf-profile.children.cycles-pp.menu_select
0.89 ± 16% -0.3 0.58 ± 23% perf-profile.self.cycles-pp.rcu_check_callbacks
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.self.cycles-pp.native_write_msr
0.91 ± 6% -0.1 0.78 ± 5% perf-profile.self.cycles-pp.run_timer_softirq
0.42 ± 15% -0.1 0.29 ± 21% perf-profile.self.cycles-pp.timerqueue_add
0.39 ± 7% -0.1 0.28 ± 17% perf-profile.self.cycles-pp.scheduler_tick
0.29 ± 18% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.clockevents_program_event
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.self.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.self.cycles-pp.calc_global_load_tick
0.10 ± 13% -0.0 0.06 ± 20% perf-profile.self.cycles-pp.rcu_bh_qs
0.07 ± 24% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.rcu_nmi_enter
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.self.cycles-pp.pm_qos_read_value
0.38 ± 3% +0.1 0.47 ± 6% perf-profile.self.cycles-pp.rebalance_domains
0.59 ± 11% +0.1 0.69 ± 5% perf-profile.self.cycles-pp.tick_nohz_next_event
0.78 ± 6% +0.1 0.92 ± 4% perf-profile.self.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.self.cycles-pp._raw_spin_trylock
fsmark.files_per_sec
74 +-+--------------------------------------------------------------------+
72 +-O O O O O O O O O |
O O O O O |
70 +-+ O O O |
68 +-+ |
66 +-+ |
64 +-+ |
| |
62 +-+ |
60 +-+ |
58 +-+ |
56 +-+ |
|.+..+. .+..+. .+. .+.. .+. .+. |
54 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+. +..+.+.+..+.|
52 +-+--------------------------------------------------------------------+
fsmark.time.elapsed_time
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
fsmark.time.elapsed_time.max
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
nfsstat.Client.nfs.v4.write.percent
29 O-+--O-O----O-O------O--O-O-O----O--O----------------------------------+
| |
28 +-O O O O O O |
27 +-+ |
| |
26 +-+ |
| |
25 +-+ |
| |
24 +-+ |
23 +-+ :|
| :|
22 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+.. +..+.+.+..+ |
|+ + + + + + + .. + + + |
21 +-+--------------------------------------------------------------------+
nfsstat.Client.nfs.v4.commit.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
nfsstat.Client.nfs.v4.open.percent
18 +-+------------------------------------------------------------------+
|: : : : : : : + : : : + : : |
17.5 +-+ : : : : : : + : : : + : : |
17 +-+ +.+..+ +..+.+ + + +.+..+.+.+..+.+ +.+ +..+.|
| |
16.5 +-+ |
| |
16 +-+ |
| |
15.5 +-+ |
15 O-O O O O O O O O O O O O O |
| |
14.5 +-+ |
| |
14 +-+--O--------O--------------------O---------------------------------+
nfsstat.Client.nfs.v4.close.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [mm] 9092c71bb7: blogbench.write_score -12.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.3% regression of blogbench.write_score and a +9.6% improvement
of blogbench.read_score due to commit:
commit: 9092c71bb724dba2ecba849eae69e5c9d39bd3d2 ("mm: use sc->priority for slab shrink targets")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: blogbench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:
disk: 1SSD
fs: btrfs
cpufreq_governor: performance
test-description: Blogbench is a portable filesystem benchmark that tries to reproduce the load of a real-world busy file server.
test-url: https://www.pureftpd.org/project/blogbench
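For context, the commit changes how do_shrink_slab() sizes each shrinker's scan
target: instead of scaling by pages scanned versus LRU size, the target is
derived from the reclaim priority. A rough standalone sketch of the post-commit
arithmetic (an approximation, not the exact mm/vmscan.c hunk):

	/*
	 * Approximation of the scan-target math after 9092c71bb724: lower
	 * priority numbers (harder reclaim) scan a larger share of the
	 * freeable objects, and "seeks" discounts caches that are expensive
	 * to rebuild.
	 */
	static unsigned long scan_target(unsigned long freeable, int priority,
					 unsigned int seeks)
	{
		unsigned long delta = freeable >> priority;

		delta *= 4;
		return delta / seeks;
	}

That behaviour change is consistent with the slabs_scanned +184.8% and the
shrinking dentry/radix_tree_node caches in the table below.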
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/rootfs/tbox_group/testcase:
gcc-7/performance/1SSD/btrfs/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-bdw-de1/blogbench
commit:
fcb2b0c577 ("mm: show total hugetlb memory consumption in /proc/meminfo")
9092c71bb7 ("mm: use sc->priority for slab shrink targets")
fcb2b0c577f145c7 9092c71bb724dba2ecba849eae
---------------- --------------------------
%stddev %change %stddev
\ | \
3256 -12.3% 2854 blogbench.write_score
1235237 ± 2% +9.6% 1354163 blogbench.read_score
28050912 -10.1% 25212230 blogbench.time.file_system_outputs
6481995 ± 3% +25.0% 8105320 ± 2% blogbench.time.involuntary_context_switches
906.00 +13.7% 1030 blogbench.time.percent_of_cpu_this_job_got
2552 +14.0% 2908 blogbench.time.system_time
173.80 +8.4% 188.32 blogbench.time.user_time
19353936 +3.6% 20045728 blogbench.time.voluntary_context_switches
8719514 +13.0% 9850451 softirqs.RCU
2.97 ± 5% -0.7 2.30 ± 3% mpstat.cpu.idle%
24.92 -6.5 18.46 mpstat.cpu.iowait%
0.65 ± 2% +0.1 0.75 mpstat.cpu.soft%
67.76 +6.7 74.45 mpstat.cpu.sys%
50206 -10.7% 44858 vmstat.io.bo
49.25 -9.1% 44.75 ± 2% vmstat.procs.b
224125 -1.8% 220135 vmstat.system.cs
48903 +10.7% 54134 vmstat.system.in
3460654 +10.8% 3834883 meminfo.Active
3380666 +11.0% 3752872 meminfo.Active(file)
1853849 -17.4% 1530415 meminfo.Inactive
1836507 -17.6% 1513054 meminfo.Inactive(file)
551311 -10.3% 494265 meminfo.SReclaimable
196525 -12.6% 171775 meminfo.SUnreclaim
747837 -10.9% 666040 meminfo.Slab
8.904e+08 -24.9% 6.683e+08 cpuidle.C1.time
22971020 -12.8% 20035820 cpuidle.C1.usage
2.518e+08 ± 3% -31.7% 1.72e+08 cpuidle.C1E.time
821393 ± 2% -33.3% 548003 cpuidle.C1E.usage
75460078 ± 2% -23.3% 57903768 ± 2% cpuidle.C3.time
136506 ± 3% -25.3% 101956 ± 3% cpuidle.C3.usage
56892498 ± 4% -23.3% 43608427 ± 4% cpuidle.C6.time
85034 ± 3% -33.9% 56184 ± 3% cpuidle.C6.usage
24373567 -24.5% 18395538 cpuidle.POLL.time
449033 ± 2% -10.8% 400493 cpuidle.POLL.usage
1832 +9.3% 2002 turbostat.Avg_MHz
22967645 -12.8% 20032521 turbostat.C1
18.43 -4.6 13.85 turbostat.C1%
821328 ± 2% -33.3% 547948 turbostat.C1E
5.21 ± 3% -1.6 3.56 turbostat.C1E%
136377 ± 3% -25.3% 101823 ± 3% turbostat.C3
1.56 ± 2% -0.4 1.20 ± 3% turbostat.C3%
84404 ± 3% -34.0% 55743 ± 3% turbostat.C6
1.17 ± 4% -0.3 0.90 ± 4% turbostat.C6%
25.93 -26.2% 19.14 turbostat.CPU%c1
0.12 ± 3% -19.1% 0.10 ± 9% turbostat.CPU%c3
14813304 +10.7% 16398388 turbostat.IRQ
38.19 +3.6% 39.56 turbostat.PkgWatt
4.51 +4.5% 4.71 turbostat.RAMWatt
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_daemon_free_scanned
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_daemon_migrate_scanned
2444 ± 21% -63.3% 897.50 ± 20% proc-vmstat.compact_daemon_wake
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_free_scanned
755491 ± 32% -81.6% 138856 ± 28% proc-vmstat.compact_isolated
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_migrate_scanned
137.75 ± 34% +2.8e+06% 3801062 ± 2% proc-vmstat.kswapd_inodesteal
6749 ± 20% -53.6% 3131 ± 12% proc-vmstat.kswapd_low_wmark_hit_quickly
844991 +11.2% 939487 proc-vmstat.nr_active_file
3900576 -10.5% 3490567 proc-vmstat.nr_dirtied
459789 -17.8% 377930 proc-vmstat.nr_inactive_file
137947 -10.3% 123720 proc-vmstat.nr_slab_reclaimable
49165 -12.6% 42989 proc-vmstat.nr_slab_unreclaimable
1382 ± 11% -26.2% 1020 ± 20% proc-vmstat.nr_writeback
3809266 -10.7% 3403350 proc-vmstat.nr_written
844489 +11.2% 938974 proc-vmstat.nr_zone_active_file
459855 -17.8% 378121 proc-vmstat.nr_zone_inactive_file
7055 ± 18% -52.0% 3389 ± 11% proc-vmstat.pageoutrun
33764911 ± 2% +21.3% 40946445 proc-vmstat.pgactivate
42044161 ± 2% +12.1% 47139065 proc-vmstat.pgdeactivate
92153 ± 20% -69.1% 28514 ± 24% proc-vmstat.pgmigrate_success
15212270 -10.7% 13591573 proc-vmstat.pgpgout
42053817 ± 2% +12.1% 47151755 proc-vmstat.pgrefill
11297 ±107% +1025.4% 127138 ± 21% proc-vmstat.pgscan_direct
19930162 -24.0% 15141439 proc-vmstat.pgscan_kswapd
19423629 -24.0% 14758807 proc-vmstat.pgsteal_kswapd
10868768 +184.8% 30950752 proc-vmstat.slabs_scanned
3361780 ± 3% -22.9% 2593327 ± 3% proc-vmstat.workingset_activate
4994722 ± 2% -43.2% 2835020 ± 2% proc-vmstat.workingset_refault
316427 -9.3% 286844 slabinfo.Acpi-Namespace.active_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.active_slabs
318605 -9.4% 288623 slabinfo.Acpi-Namespace.num_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.num_slabs
220514 -40.7% 130747 slabinfo.btrfs_delayed_node.active_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.active_slabs
263293 -25.3% 196669 slabinfo.btrfs_delayed_node.num_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.num_slabs
6383 ± 8% -12.0% 5615 ± 2% slabinfo.btrfs_delayed_ref_head.num_objs
9496 +15.5% 10969 slabinfo.btrfs_extent_buffer.active_objs
9980 +20.5% 12022 slabinfo.btrfs_extent_buffer.num_objs
260933 -10.7% 233136 slabinfo.btrfs_extent_map.active_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.active_slabs
263009 -10.6% 235107 slabinfo.btrfs_extent_map.num_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.num_slabs
271938 -10.3% 243802 slabinfo.btrfs_inode.active_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.active_slabs
273856 -10.4% 245359 slabinfo.btrfs_inode.num_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.num_slabs
7085 ± 5% -5.5% 6692 ± 2% slabinfo.btrfs_path.num_objs
311936 -16.4% 260797 slabinfo.dentry.active_objs
7803 -9.6% 7058 slabinfo.dentry.active_slabs
327759 -9.6% 296439 slabinfo.dentry.num_objs
7803 -9.6% 7058 slabinfo.dentry.num_slabs
2289 -23.3% 1755 ± 6% slabinfo.proc_inode_cache.active_objs
2292 -19.0% 1856 ± 6% slabinfo.proc_inode_cache.num_objs
261546 -12.3% 229485 slabinfo.radix_tree_node.active_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.active_slabs
263347 -11.9% 232089 slabinfo.radix_tree_node.num_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.num_slabs
1140424 ± 12% +40.2% 1598980 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
790.55 +13.0% 893.20 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
1140425 ± 12% +40.2% 1598982 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
0.83 ± 10% +21.5% 1.00 ± 8% sched_debug.cfs_rq:/.nr_running.avg
3.30 ± 99% +266.3% 12.09 ± 13% sched_debug.cfs_rq:/.removed.load_avg.avg
153.02 ± 97% +266.6% 560.96 ± 13% sched_debug.cfs_rq:/.removed.runnable_sum.avg
569.93 ±102% +173.2% 1556 ± 14% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.42 ± 60% +501.5% 8.52 ± 34% sched_debug.cfs_rq:/.removed.util_avg.avg
19.88 ± 59% +288.9% 77.29 ± 16% sched_debug.cfs_rq:/.removed.util_avg.max
5.05 ± 58% +342.3% 22.32 ± 22% sched_debug.cfs_rq:/.removed.util_avg.stddev
791.44 ± 3% +47.7% 1168 ± 8% sched_debug.cfs_rq:/.util_avg.avg
1305 ± 6% +33.2% 1738 ± 5% sched_debug.cfs_rq:/.util_avg.max
450.25 ± 11% +66.2% 748.17 ± 14% sched_debug.cfs_rq:/.util_avg.min
220.82 ± 8% +21.1% 267.46 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
363118 ± 11% -23.8% 276520 ± 11% sched_debug.cpu.avg_idle.avg
726003 ± 8% -30.8% 502313 ± 4% sched_debug.cpu.avg_idle.max
202629 ± 3% -32.2% 137429 ± 18% sched_debug.cpu.avg_idle.stddev
31.96 ± 28% +54.6% 49.42 ± 14% sched_debug.cpu.cpu_load[3].min
36.21 ± 25% +64.0% 59.38 ± 6% sched_debug.cpu.cpu_load[4].min
1007 ± 5% +20.7% 1216 ± 7% sched_debug.cpu.curr->pid.avg
4.50 ± 5% +14.8% 5.17 ± 5% sched_debug.cpu.nr_running.max
2476195 -11.8% 2185022 sched_debug.cpu.nr_switches.max
212888 -26.6% 156172 ± 3% sched_debug.cpu.nr_switches.stddev
3570 ± 2% -58.7% 1474 ± 2% sched_debug.cpu.nr_uninterruptible.max
-803.67 -28.7% -573.38 sched_debug.cpu.nr_uninterruptible.min
1004 ± 2% -50.4% 498.55 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
2478809 -11.7% 2189310 sched_debug.cpu.sched_count.max
214130 -26.5% 157298 ± 3% sched_debug.cpu.sched_count.stddev
489430 ± 2% -16.6% 408309 ± 2% sched_debug.cpu.sched_goidle.avg
724333 ± 2% -28.2% 520263 ± 2% sched_debug.cpu.sched_goidle.max
457611 -18.1% 374746 ± 3% sched_debug.cpu.sched_goidle.min
62957 ± 2% -47.4% 33138 ± 3% sched_debug.cpu.sched_goidle.stddev
676053 ± 2% -15.4% 571816 ± 2% sched_debug.cpu.ttwu_local.max
42669 ± 3% +22.3% 52198 sched_debug.cpu.ttwu_local.min
151873 ± 2% -18.3% 124118 ± 2% sched_debug.cpu.ttwu_local.stddev
blogbench.write_score
3300 +-+------------------------------------------------------------------+
3250 +-+ +. .+ +. .+ : : : +. .+ .+.+.+. .|
|: +. .+ +.+.+.+ + + + : +. : : +. + +.+ + + |
3200 +-+ + +.+ + : + + : + + |
3150 +-+.+ ++ +.+ |
3100 +-+ |
3050 +-+ |
| |
3000 +-+ |
2950 +-+ O O |
2900 +-O O O O |
2850 +-+ O O O O O O O OO O O O |
| O O O O |
2800 O-+ O O |
2750 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
3 years, 11 months
[lkp-robot] [rbd] 2f18d46683: vm-scalability.throughput -92.3% regression
by kernel test robot
Greeting,
We noticed a -92.3% regression of vm-scalability.throughput due to commit:
commit: 2f18d46683cb3047c41229d57cf7c6e2ee48676f ("rbd: refactor rbd_wait_state_locked()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 80 threads Skylake with 64G memory
with following parameters:
runtime: 300s
test: lru-file-readonce
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
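A quick sanity check on how the headline figures in this report are derived (this is my own arithmetic, assuming the usual %change = (compare - base) / base convention, not code from lkp-tests; the throughput and median values are copied from the comparison table that follows):

def pct_change(base, compare):
    # relative change of the compare kernel against the base kernel, in percent
    return (compare - base) / base * 100.0

# vm-scalability.throughput and vm-scalability.median rows from the table below
print(f"{pct_change(21004763, 1607621):+.1f}%")   # -92.3%, the regression in the subject
print(f"{pct_change(259103, 1607621):+.1f}%")     # +520.5%, matching the median row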
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-7/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/1/vm-lkp-wsx03-8G/boot
commit:
v4.17-rc1
2f18d46683 ("rbd: refactor rbd_wait_state_locked()")
v4.17-rc1 2f18d46683cb3047c41229d57c
---------------- --------------------------
%stddev %change %stddev
\ | \
259103 +520.5% 1607621 vm-scalability.median
5.71 ± 4% -100.0% 0.00 vm-scalability.stddev
21004763 -92.3% 1607621 vm-scalability.throughput
240.41 +28.9% 309.83 vm-scalability.time.elapsed_time
240.41 +28.9% 309.83 vm-scalability.time.elapsed_time.max
439976 -85.3% 64590 ± 19% vm-scalability.time.involuntary_context_switches
62896 -89.1% 6836 vm-scalability.time.minor_page_faults
6698 -98.7% 89.00 vm-scalability.time.percent_of_cpu_this_job_got
15789 -98.4% 252.97 vm-scalability.time.system_time
316.29 -92.0% 25.44 vm-scalability.time.user_time
1292 -84.7% 197.50 ± 7% vm-scalability.time.voluntary_context_switches
4.295e+09 -88.8% 4.823e+08 vm-scalability.workload
2567 ± 65% -99.7% 8.98 boot-time.idle
278517 ± 3% -100.0% 0.00 interrupts.CAL:Function_call_interrupts
1076372 -86.0% 150299 ± 4% softirqs.RCU
493821 -100.0% 0.00 softirqs.SCHED
8295195 -97.9% 175669 ± 2% softirqs.TIMER
14.80 -14.2 0.58 ± 3% mpstat.cpu.idle%
0.00 ± 31% -0.0 0.00 mpstat.cpu.iowait%
0.01 ± 23% +0.1 0.09 ± 26% mpstat.cpu.soft%
1.67 +6.9 8.58 mpstat.cpu.usr%
72.60 ± 4% +142.1% 175.75 vmstat.memory.buff
3313366 ± 7% +50.6% 4988389 vmstat.memory.free
69.20 -98.6% 1.00 vmstat.procs.r
12208 ± 3% -66.1% 4143 ± 2% vmstat.system.cs
86363 -98.7% 1133 ± 2% vmstat.system.in
733300 ± 12% -100.0% 0.00 cpuidle.C1.time
28147 ± 6% -94.1% 1672 ± 2% cpuidle.C1.usage
15489018 ± 3% -100.0% 0.00 cpuidle.C1E.time
49325 ± 2% -100.0% 0.00 cpuidle.C1E.usage
2.758e+09 -100.0% 0.00 cpuidle.C6.time
2879847 -100.0% 0.00 cpuidle.C6.usage
8915 ± 10% -100.0% 0.00 cpuidle.POLL.time
540.00 ± 5% -100.0% 0.00 cpuidle.POLL.usage
240.41 +28.9% 309.83 time.elapsed_time
240.41 +28.9% 309.83 time.elapsed_time.max
439976 -85.3% 64590 ± 19% time.involuntary_context_switches
62896 -89.1% 6836 time.minor_page_faults
6698 -98.7% 89.00 time.percent_of_cpu_this_job_got
15789 -98.4% 252.97 time.system_time
316.29 -92.0% 25.44 time.user_time
1292 -84.7% 197.50 ± 7% time.voluntary_context_switches
5.317e+08 -100.0% 0.00 numa-numastat.node0.local_node
1.119e+08 -100.0% 0.00 numa-numastat.node0.numa_foreign
5.317e+08 -100.0% 0.00 numa-numastat.node0.numa_hit
29285955 ± 3% -100.0% 0.00 numa-numastat.node0.numa_miss
29289175 ± 3% -100.0% 0.00 numa-numastat.node0.other_node
4.015e+08 -100.0% 0.00 numa-numastat.node1.local_node
29285955 ± 3% -100.0% 0.00 numa-numastat.node1.numa_foreign
4.015e+08 -100.0% 0.00 numa-numastat.node1.numa_hit
1.119e+08 -100.0% 0.00 numa-numastat.node1.numa_miss
1.119e+08 -100.0% 0.00 numa-numastat.node1.other_node
2648 +16.3% 3079 turbostat.Avg_MHz
85.69 +13.8 99.50 turbostat.Busy%
25313 ± 6% -93.4% 1672 ± 2% turbostat.C1
45534 ± 3% -100.0% 0.00 turbostat.C1E
0.07 ± 6% -0.1 0.00 turbostat.C1E%
2877662 -100.0% 0.00 turbostat.C6
14.25 -14.2 0.00 turbostat.C6%
14.21 -96.5% 0.49 turbostat.CPU%c1
vm-scalability.throughput
2.2e+07 +-+---------------------------------------------------------------+
2e+07 +-++..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+ |
| |
1.8e+07 +-+ |
1.6e+07 +-+ |
1.4e+07 +-+ |
1.2e+07 +-+ |
| |
1e+07 +-+ |
8e+06 +-+ |
6e+06 +-+ |
4e+06 +-+ |
| |
2e+06 O-+O O O O O O O O O O O O O O O O O O O O O O
0 +-+---------------------------------------------------------------+
vm-scalability.median
1.8e+06 +-+---------------------------------------------------------------+
| |
1.6e+06 O-+O O O O O O O O O O O O O O O O O O O O O O
| |
1.4e+06 +-+ |
1.2e+06 +-+ |
| |
1e+06 +-+ |
| |
800000 +-+ |
600000 +-+ |
| |
400000 +-+ |
|..+.. .+..+..+..+..+.. .+..+.. .+..+..+..+.. .+..+ |
200000 +-+---------------------------------------------------------------+
vm-scalability.workload
4.5e+09 +-+---------------------------------------------------------------+
|..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+..+ |
4e+09 +-+ |
3.5e+09 +-+ |
| |
3e+09 +-+ |
2.5e+09 +-+ |
| |
2e+09 +-+ |
1.5e+09 +-+ |
| |
1e+09 +-+ |
5e+08 +-+O O O O O O O O O O O O O O O |
O O O O O O O O
0 +-+---------------------------------------------------------------+
vm-scalability.time.user_time
350 +-+-------------------------------------------------------------------+
|..+..+...+..+..+..+..+..+...+..+..+..+..+...+..+..+..+..+ |
300 +-+ |
| |
250 +-+ |
| |
200 +-+ |
| |
150 +-+ |
| |
100 +-+ |
| |
50 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
0 +-+-------------------------------------------------------------------+
vm-scalability.time.system_time
16000 +-+-----------------------------------------------------------------+
| |
14000 +-+ |
12000 +-+ |
| |
10000 +-+ |
| |
8000 +-+ |
| |
6000 +-+ |
4000 +-+ |
| |
2000 +-+ |
| |
0 O-+O--O--O--O--O---O--O--O--O--O--O--O--O--O--O--O---O--O--O--O--O--O
vm-scalability.time.percent_of_cpu_this_job_got
7000 +-+------------------------------------------------------------------+
|..+..+..+...+..+..+..+..+..+..+...+..+..+..+..+..+..+..+ |
6000 +-+ |
| |
5000 +-+ |
| |
4000 +-+ |
| |
3000 +-+ |
| |
2000 +-+ |
| |
1000 +-+ |
| |
0 O-+O--O--O---O--O--O--O--O--O--O---O--O--O--O--O--O--O--O---O--O--O--O
vm-scalability.time.elapsed_time
310 O-+O--O---O--O--O--O--O--O---O--O--O--O--O---O--O--O--O--O--O---O--O--O
| |
300 +-+ |
290 +-+ |
| |
280 +-+ |
| |
270 +-+ |
| |
260 +-+ |
250 +-+ |
| |
240 +-++..+...+..+..+..+..+..+...+..+..+..+..+...+..+..+..+..+ |
| |
230 +-+-------------------------------------------------------------------+
vm-scalability.time.elapsed_time.max
310 O-+O--O---O--O--O--O--O--O---O--O--O--O--O---O--O--O--O--O--O---O--O--O
| |
300 +-+ |
290 +-+ |
| |
280 +-+ |
| |
270 +-+ |
| |
260 +-+ |
250 +-+ |
| |
240 +-++..+...+..+..+..+..+..+...+..+..+..+..+...+..+..+..+..+ |
| |
230 +-+-------------------------------------------------------------------+
vm-scalability.time.minor_page_faults
70000 +-+-----------------------------------------------------------------+
|..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+...+..+ |
60000 +-+ |
| |
50000 +-+ |
| |
40000 +-+ |
| |
30000 +-+ |
| |
20000 +-+ |
| |
10000 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
0 +-+-----------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
1400 +-+------------------------------------------------------------------+
|..+..+..+...+..+..+..+..+..+..+...+..+..+..+..+..+..+..+ |
1200 +-+ |
| |
1000 +-+ |
| |
800 +-+ |
| |
600 +-+ |
| |
400 +-+ |
| |
200 O-+O O O O O O O O O O O O O O O O O O O O O O
| |
0 +-+------------------------------------------------------------------+
vm-scalability.time.involuntary_context_switches
450000 +-+----------------------------------------------------------------+
| +. +. + |
400000 +-+ |
350000 +-+ |
| |
300000 +-+ |
250000 +-+ |
| |
200000 +-+ |
150000 +-+ |
| |
100000 +-+ O O O |
50000 O-+ O O O O O O O O O O O O O O
| O O O O O |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
3 years, 11 months
[lkp-robot] [MD] 5a409b4f56: aim7.jobs-per-min -27.5% regression
by kernel test robot
Greeting,
FYI, we noticed a -27.5% regression of aim7.jobs-per-min due to commit:
commit: 5a409b4f56d50b212334f338cb8465d65550cd85 ("MD: fix lock contention for flush bios")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 4BRD_12G
md: RAID1
fs: xfs
test: sync_disk_rw
load: 600
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system level benchmark suite which is used to test and measure the performance of multiuser system.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.2/600/RAID1/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/sync_disk_rw/aim7
commit:
448ec638c6 ("md/raid5: Assigning NULL to sh->batch_head before testing bit R5_Overlap of a stripe")
5a409b4f56 ("MD: fix lock contention for flush bios")
448ec638c6bcf369 5a409b4f56d50b212334f338cb
---------------- --------------------------
%stddev %change %stddev
\ | \
1640 -27.5% 1189 aim7.jobs-per-min
2194 +37.9% 3026 aim7.time.elapsed_time
2194 +37.9% 3026 aim7.time.elapsed_time.max
50990311 -95.8% 2148266 aim7.time.involuntary_context_switches
107965 ± 4% -26.4% 79516 ± 2% aim7.time.minor_page_faults
49.14 +82.5% 89.66 ± 2% aim7.time.user_time
7.123e+08 -35.7% 4.582e+08 aim7.time.voluntary_context_switches
672282 +36.8% 919615 interrupts.CAL:Function_call_interrupts
16631387 ± 2% -39.9% 9993075 ± 7% softirqs.RCU
9708009 +186.1% 27778773 softirqs.SCHED
33436649 +45.5% 48644912 softirqs.TIMER
4.16 -2.1 2.01 mpstat.cpu.idle%
0.24 ± 2% +27.7 27.91 mpstat.cpu.iowait%
95.51 -25.6 69.94 mpstat.cpu.sys%
0.09 +0.0 0.13 mpstat.cpu.usr%
6051756 ± 3% +59.0% 9623085 numa-numastat.node0.local_node
6055311 ± 3% +59.0% 9626996 numa-numastat.node0.numa_hit
6481209 ± 3% +48.4% 9616310 numa-numastat.node1.local_node
6485866 ± 3% +48.3% 9620756 numa-numastat.node1.numa_hit
61404 -27.7% 44424 vmstat.io.bo
2.60 ± 18% +11519.2% 302.10 vmstat.procs.b
304.10 -84.9% 45.80 ± 2% vmstat.procs.r
400477 -43.5% 226094 vmstat.system.cs
166461 -49.9% 83332 vmstat.system.in
78397 +27.0% 99567 meminfo.Dirty
14427 +18.4% 17082 meminfo.Inactive(anon)
1963 ± 5% +5.4% 2068 ± 4% meminfo.Mlocked
101143 +991.0% 1103488 meminfo.SUnreclaim
53684 ± 4% -18.1% 43946 ± 3% meminfo.Shmem
175580 +571.4% 1178829 meminfo.Slab
39406 +26.2% 49717 numa-meminfo.node0.Dirty
1767204 ± 10% +37.2% 2425487 ± 2% numa-meminfo.node0.MemUsed
51634 ± 18% +979.3% 557316 numa-meminfo.node0.SUnreclaim
92259 ± 13% +551.7% 601288 numa-meminfo.node0.Slab
38969 +28.0% 49863 numa-meminfo.node1.Dirty
1895204 ± 10% +24.7% 2363037 ± 3% numa-meminfo.node1.MemUsed
49512 ± 19% +1003.1% 546165 numa-meminfo.node1.SUnreclaim
83323 ± 14% +593.1% 577534 numa-meminfo.node1.Slab
2.524e+09 +894.5% 2.51e+10 cpuidle.C1.time
50620790 +316.5% 2.109e+08 cpuidle.C1.usage
3.965e+08 +1871.1% 7.815e+09 cpuidle.C1E.time
5987788 +186.1% 17129412 cpuidle.C1E.usage
2.506e+08 +97.5% 4.948e+08 ± 2% cpuidle.C3.time
2923498 -55.7% 1295033 cpuidle.C3.usage
5.327e+08 +179.9% 1.491e+09 cpuidle.C6.time
779874 ± 2% +229.3% 2567769 cpuidle.C6.usage
6191357 +3333.6% 2.126e+08 cpuidle.POLL.time
204095 +1982.1% 4249504 cpuidle.POLL.usage
9850 +26.3% 12444 numa-vmstat.node0.nr_dirty
12908 ± 18% +979.3% 139321 numa-vmstat.node0.nr_slab_unreclaimable
8876 +29.6% 11505 numa-vmstat.node0.nr_zone_write_pending
3486319 ± 4% +55.1% 5407021 numa-vmstat.node0.numa_hit
3482713 ± 4% +55.1% 5403066 numa-vmstat.node0.numa_local
9743 +28.1% 12479 numa-vmstat.node1.nr_dirty
12377 ± 19% +1003.1% 136532 numa-vmstat.node1.nr_slab_unreclaimable
9287 +30.0% 12074 numa-vmstat.node1.nr_zone_write_pending
3678995 ± 4% +44.8% 5326772 numa-vmstat.node1.numa_hit
3497785 ± 4% +47.1% 5145705 numa-vmstat.node1.numa_local
252.70 +100.2% 505.90 slabinfo.biovec-max.active_objs
282.70 +99.1% 562.90 slabinfo.biovec-max.num_objs
2978 ± 17% +52.5% 4543 ± 14% slabinfo.dmaengine-unmap-16.active_objs
2978 ± 17% +52.5% 4543 ± 14% slabinfo.dmaengine-unmap-16.num_objs
2078 +147.9% 5153 ± 11% slabinfo.ip6_dst_cache.active_objs
2078 +148.1% 5157 ± 11% slabinfo.ip6_dst_cache.num_objs
5538 ± 2% +26.2% 6990 ± 3% slabinfo.kmalloc-1024.active_objs
5586 ± 3% +27.1% 7097 ± 3% slabinfo.kmalloc-1024.num_objs
6878 +47.6% 10151 ± 5% slabinfo.kmalloc-192.active_objs
6889 +47.5% 10160 ± 5% slabinfo.kmalloc-192.num_objs
9843 ± 5% +1.6e+05% 16002876 slabinfo.kmalloc-64.active_objs
161.90 ± 4% +1.5e+05% 250044 slabinfo.kmalloc-64.active_slabs
10386 ± 4% +1.5e+05% 16002877 slabinfo.kmalloc-64.num_objs
161.90 ± 4% +1.5e+05% 250044 slabinfo.kmalloc-64.num_slabs
432.80 ± 12% +45.2% 628.50 ± 6% slabinfo.nfs_read_data.active_objs
432.80 ± 12% +45.2% 628.50 ± 6% slabinfo.nfs_read_data.num_objs
3956 -23.1% 3041 slabinfo.pool_workqueue.active_objs
4098 -19.8% 3286 slabinfo.pool_workqueue.num_objs
360.50 ± 15% +56.6% 564.70 ± 11% slabinfo.secpath_cache.active_objs
360.50 ± 15% +56.6% 564.70 ± 11% slabinfo.secpath_cache.num_objs
35373 ± 2% -8.3% 32432 proc-vmstat.nr_active_anon
19595 +27.1% 24914 proc-vmstat.nr_dirty
3607 +18.4% 4270 proc-vmstat.nr_inactive_anon
490.30 ± 5% +5.4% 516.90 ± 4% proc-vmstat.nr_mlock
13421 ± 4% -18.1% 10986 ± 3% proc-vmstat.nr_shmem
18608 +1.2% 18834 proc-vmstat.nr_slab_reclaimable
25286 +991.0% 275882 proc-vmstat.nr_slab_unreclaimable
35405 ± 2% -8.3% 32465 proc-vmstat.nr_zone_active_anon
3607 +18.4% 4270 proc-vmstat.nr_zone_inactive_anon
18161 +29.8% 23572 proc-vmstat.nr_zone_write_pending
76941 ± 5% -36.8% 48622 ± 4% proc-vmstat.numa_hint_faults
33878 ± 7% -35.5% 21836 ± 5% proc-vmstat.numa_hint_faults_local
12568956 +53.3% 19272377 proc-vmstat.numa_hit
12560739 +53.4% 19264015 proc-vmstat.numa_local
17938 ± 3% -33.5% 11935 ± 2% proc-vmstat.numa_pages_migrated
78296 ± 5% -36.0% 50085 ± 4% proc-vmstat.numa_pte_updates
8848 ± 6% -38.2% 5466 ± 6% proc-vmstat.pgactivate
8874568 ± 8% +368.7% 41590920 proc-vmstat.pgalloc_normal
5435965 +39.2% 7564148 proc-vmstat.pgfault
12863707 +255.1% 45683570 proc-vmstat.pgfree
17938 ± 3% -33.5% 11935 ± 2% proc-vmstat.pgmigrate_success
1.379e+13 -40.8% 8.17e+12 perf-stat.branch-instructions
0.30 +0.1 0.42 perf-stat.branch-miss-rate%
4.2e+10 -17.6% 3.462e+10 perf-stat.branch-misses
15.99 +3.8 19.74 perf-stat.cache-miss-rate%
3.779e+10 -21.6% 2.963e+10 perf-stat.cache-misses
2.364e+11 -36.5% 1.501e+11 perf-stat.cache-references
8.795e+08 -22.2% 6.84e+08 perf-stat.context-switches
4.44 -7.2% 4.12 perf-stat.cpi
2.508e+14 -44.5% 1.393e+14 perf-stat.cpu-cycles
36915392 +60.4% 59211221 perf-stat.cpu-migrations
0.29 ± 2% +0.0 0.34 ± 4% perf-stat.dTLB-load-miss-rate%
4.14e+10 -30.2% 2.89e+10 ± 4% perf-stat.dTLB-load-misses
1.417e+13 -40.1% 8.491e+12 perf-stat.dTLB-loads
0.20 ± 4% -0.0 0.18 ± 5% perf-stat.dTLB-store-miss-rate%
3.072e+09 ± 4% -28.0% 2.21e+09 ± 4% perf-stat.dTLB-store-misses
1.535e+12 -20.2% 1.225e+12 perf-stat.dTLB-stores
90.73 -11.7 79.07 perf-stat.iTLB-load-miss-rate%
8.291e+09 -6.6% 7.743e+09 perf-stat.iTLB-load-misses
8.473e+08 +141.8% 2.049e+09 ± 3% perf-stat.iTLB-loads
5.646e+13 -40.2% 3.378e+13 perf-stat.instructions
6810 -35.9% 4362 perf-stat.instructions-per-iTLB-miss
0.23 +7.8% 0.24 perf-stat.ipc
5326672 +39.2% 7413706 perf-stat.minor-faults
1.873e+10 -29.9% 1.312e+10 perf-stat.node-load-misses
2.093e+10 -29.2% 1.481e+10 perf-stat.node-loads
39.38 -0.7 38.72 perf-stat.node-store-miss-rate%
1.087e+10 -16.6% 9.069e+09 perf-stat.node-store-misses
1.673e+10 -14.2% 1.435e+10 perf-stat.node-stores
5326695 +39.2% 7413708 perf-stat.page-faults
1875095 ± 7% -54.8% 846645 ± 16% sched_debug.cfs_rq:/.MIN_vruntime.avg
32868920 ± 6% -35.7% 21150379 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
7267340 ± 5% -44.7% 4015798 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
4278 ± 7% -54.7% 1939 ± 11% sched_debug.cfs_rq:/.exec_clock.stddev
245.48 ± 2% +65.3% 405.75 ± 7% sched_debug.cfs_rq:/.load_avg.avg
2692 ± 6% +126.0% 6087 ± 7% sched_debug.cfs_rq:/.load_avg.max
33.09 -73.0% 8.94 ± 7% sched_debug.cfs_rq:/.load_avg.min
507.40 ± 4% +128.0% 1156 ± 7% sched_debug.cfs_rq:/.load_avg.stddev
1875095 ± 7% -54.8% 846645 ± 16% sched_debug.cfs_rq:/.max_vruntime.avg
32868921 ± 6% -35.7% 21150379 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
7267341 ± 5% -44.7% 4015798 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
35887197 -13.2% 31149130 sched_debug.cfs_rq:/.min_vruntime.avg
37385506 -14.3% 32043914 sched_debug.cfs_rq:/.min_vruntime.max
34416296 -12.3% 30183927 sched_debug.cfs_rq:/.min_vruntime.min
1228844 ± 8% -52.6% 582759 ± 4% sched_debug.cfs_rq:/.min_vruntime.stddev
0.83 -28.1% 0.60 ± 6% sched_debug.cfs_rq:/.nr_running.avg
2.07 ± 3% -24.6% 1.56 ± 8% sched_debug.cfs_rq:/.nr_running.max
20.52 ± 4% -48.8% 10.52 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
35.96 ± 5% -42.2% 20.77 ± 9% sched_debug.cfs_rq:/.nr_spread_over.max
8.97 ± 11% -44.5% 4.98 ± 8% sched_debug.cfs_rq:/.nr_spread_over.min
6.40 ± 12% -45.5% 3.49 ± 7% sched_debug.cfs_rq:/.nr_spread_over.stddev
21.78 ± 7% +143.3% 53.00 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.avg
328.86 ± 18% +303.4% 1326 ± 14% sched_debug.cfs_rq:/.runnable_load_avg.max
55.97 ± 17% +286.0% 216.07 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.stddev
0.10 ± 29% -82.4% 0.02 ± 50% sched_debug.cfs_rq:/.spread.avg
3.43 ± 25% -79.9% 0.69 ± 50% sched_debug.cfs_rq:/.spread.max
0.56 ± 26% -80.7% 0.11 ± 50% sched_debug.cfs_rq:/.spread.stddev
1228822 ± 8% -52.6% 582732 ± 4% sched_debug.cfs_rq:/.spread0.stddev
992.30 -24.9% 745.56 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1485 -18.1% 1217 ± 2% sched_debug.cfs_rq:/.util_avg.max
515.45 ± 2% -25.2% 385.73 ± 6% sched_debug.cfs_rq:/.util_avg.min
201.54 -14.9% 171.52 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
248.73 ± 6% -38.1% 154.02 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.avg
222.78 ± 3% -15.8% 187.58 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.stddev
77097 ± 4% +278.4% 291767 ± 11% sched_debug.cpu.avg_idle.avg
181319 ± 6% +298.7% 722862 ± 3% sched_debug.cpu.avg_idle.max
19338 +392.3% 95203 ± 17% sched_debug.cpu.avg_idle.min
34877 ± 6% +303.5% 140732 ± 6% sched_debug.cpu.avg_idle.stddev
1107408 +37.6% 1523823 sched_debug.cpu.clock.avg
1107427 +37.6% 1523834 sched_debug.cpu.clock.max
1107385 +37.6% 1523811 sched_debug.cpu.clock.min
13.10 ± 9% -48.1% 6.80 ± 8% sched_debug.cpu.clock.stddev
1107408 +37.6% 1523823 sched_debug.cpu.clock_task.avg
1107427 +37.6% 1523834 sched_debug.cpu.clock_task.max
1107385 +37.6% 1523811 sched_debug.cpu.clock_task.min
13.10 ± 9% -48.1% 6.80 ± 8% sched_debug.cpu.clock_task.stddev
30.36 ± 7% +107.7% 63.06 ± 12% sched_debug.cpu.cpu_load[0].avg
381.48 ± 18% +269.8% 1410 ± 18% sched_debug.cpu.cpu_load[0].max
63.92 ± 18% +262.2% 231.50 ± 17% sched_debug.cpu.cpu_load[0].stddev
31.34 ± 5% +118.4% 68.44 ± 9% sched_debug.cpu.cpu_load[1].avg
323.62 ± 17% +349.5% 1454 ± 14% sched_debug.cpu.cpu_load[1].max
53.23 ± 16% +350.3% 239.71 ± 13% sched_debug.cpu.cpu_load[1].stddev
32.15 ± 3% +129.4% 73.74 ± 6% sched_debug.cpu.cpu_load[2].avg
285.20 ± 14% +420.8% 1485 ± 9% sched_debug.cpu.cpu_load[2].max
46.66 ± 12% +430.0% 247.32 ± 8% sched_debug.cpu.cpu_load[2].stddev
33.02 ± 2% +133.2% 77.00 ± 3% sched_debug.cpu.cpu_load[3].avg
252.16 ± 10% +481.2% 1465 ± 7% sched_debug.cpu.cpu_load[3].max
40.74 ± 8% +503.2% 245.72 ± 6% sched_debug.cpu.cpu_load[3].stddev
33.86 +131.5% 78.38 ± 2% sched_debug.cpu.cpu_load[4].avg
219.81 ± 8% +522.6% 1368 ± 5% sched_debug.cpu.cpu_load[4].max
35.45 ± 7% +554.2% 231.90 ± 4% sched_debug.cpu.cpu_load[4].stddev
2600 ± 4% -30.5% 1807 ± 4% sched_debug.cpu.curr->pid.avg
25309 ± 4% -19.5% 20367 ± 4% sched_debug.cpu.curr->pid.max
4534 ± 7% -21.2% 3573 ± 5% sched_debug.cpu.curr->pid.stddev
0.00 ± 2% -27.6% 0.00 ± 6% sched_debug.cpu.next_balance.stddev
1083917 +38.6% 1502777 sched_debug.cpu.nr_load_updates.avg
1088142 +38.6% 1508302 sched_debug.cpu.nr_load_updates.max
1082048 +38.7% 1501073 sched_debug.cpu.nr_load_updates.min
3.53 ± 6% -73.0% 0.95 ± 6% sched_debug.cpu.nr_running.avg
11.54 ± 3% -62.1% 4.37 ± 10% sched_debug.cpu.nr_running.max
3.10 ± 3% -66.8% 1.03 ± 9% sched_debug.cpu.nr_running.stddev
10764176 -22.4% 8355047 sched_debug.cpu.nr_switches.avg
10976436 -22.2% 8545010 sched_debug.cpu.nr_switches.max
10547712 -22.8% 8143037 sched_debug.cpu.nr_switches.min
148628 ± 3% -22.7% 114880 ± 7% sched_debug.cpu.nr_switches.stddev
11.13 ± 2% +24.5% 13.85 sched_debug.cpu.nr_uninterruptible.avg
6420 ± 8% -48.7% 3296 ± 11% sched_debug.cpu.nr_uninterruptible.max
-5500 -37.2% -3455 sched_debug.cpu.nr_uninterruptible.min
3784 ± 6% -47.2% 1997 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
10812670 -22.7% 8356821 sched_debug.cpu.sched_count.avg
11020646 -22.5% 8546277 sched_debug.cpu.sched_count.max
10601390 -23.2% 8144743 sched_debug.cpu.sched_count.min
144529 ± 3% -20.9% 114359 ± 7% sched_debug.cpu.sched_count.stddev
706116 +259.0% 2534721 sched_debug.cpu.sched_goidle.avg
771307 +232.4% 2564059 sched_debug.cpu.sched_goidle.max
644658 +286.9% 2494236 sched_debug.cpu.sched_goidle.min
49847 ± 6% -67.9% 15979 ± 7% sched_debug.cpu.sched_goidle.stddev
9618827 -39.9% 5780369 sched_debug.cpu.ttwu_count.avg
8990451 -61.7% 3441265 ± 4% sched_debug.cpu.ttwu_count.min
418563 ± 25% +244.2% 1440565 ± 7% sched_debug.cpu.ttwu_count.stddev
640964 -93.7% 40366 ± 2% sched_debug.cpu.ttwu_local.avg
679527 -92.1% 53476 ± 4% sched_debug.cpu.ttwu_local.max
601661 -94.9% 30636 ± 3% sched_debug.cpu.ttwu_local.min
24242 ± 21% -77.7% 5405 ± 9% sched_debug.cpu.ttwu_local.stddev
1107383 +37.6% 1523810 sched_debug.cpu_clk
1107383 +37.6% 1523810 sched_debug.ktime
0.00 -49.4% 0.00 ± 65% sched_debug.rt_rq:/.rt_nr_migratory.avg
0.03 -49.4% 0.01 ± 65% sched_debug.rt_rq:/.rt_nr_migratory.max
0.00 -49.4% 0.00 ± 65% sched_debug.rt_rq:/.rt_nr_migratory.stddev
0.00 -49.4% 0.00 ± 65% sched_debug.rt_rq:/.rt_nr_running.avg
0.03 -49.4% 0.01 ± 65% sched_debug.rt_rq:/.rt_nr_running.max
0.00 -49.4% 0.00 ± 65% sched_debug.rt_rq:/.rt_nr_running.stddev
0.01 ± 8% +79.9% 0.01 ± 23% sched_debug.rt_rq:/.rt_time.avg
1107805 +37.6% 1524235 sched_debug.sched_clk
87.59 -87.6 0.00 perf-profile.calltrace.cycles-pp.md_flush_request.raid1_make_request.md_handle_request.md_make_request.generic_make_request
87.57 -87.6 0.00 perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_write_iter.__vfs_write
87.59 -87.5 0.05 ±299% perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_write_iter.__vfs_write.vfs_write
87.51 -87.5 0.00 perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
87.51 -87.5 0.00 perf-profile.calltrace.cycles-pp.submit_bio.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_write_iter
87.50 -87.5 0.00 perf-profile.calltrace.cycles-pp.md_make_request.generic_make_request.submit_bio.submit_bio_wait.blkdev_issue_flush
87.50 -87.5 0.00 perf-profile.calltrace.cycles-pp.md_handle_request.md_make_request.generic_make_request.submit_bio.submit_bio_wait
82.37 -82.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid1_make_request.md_handle_request.md_make_request
82.23 -82.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid1_make_request.md_handle_request
87.79 -25.0 62.75 ± 8% perf-profile.calltrace.cycles-pp.raid1_make_request.md_handle_request.md_make_request.generic_make_request.submit_bio
92.78 -13.0 79.76 perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_write_iter.__vfs_write.vfs_write.ksys_write
93.08 -12.6 80.49 perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.ksys_write.do_syscall_64
93.08 -12.6 80.50 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
93.11 -12.6 80.56 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
93.11 -12.6 80.56 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
93.14 -12.5 80.64 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
93.15 -12.5 80.65 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.40 ± 2% -1.4 1.97 ± 8% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
3.33 ± 2% -1.4 1.96 ± 9% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.12 ± 2% -0.7 0.42 ± 68% perf-profile.calltrace.cycles-pp.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
1.16 ± 2% -0.6 0.60 ± 17% perf-profile.calltrace.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
0.00 +0.6 0.59 ± 15% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.raid1_write_request.raid1_make_request.md_handle_request
0.00 +0.6 0.64 ± 15% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.raid1_write_request.raid1_make_request.md_handle_request.md_make_request
0.00 +0.7 0.65 ± 10% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.do_idle
0.00 +0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry
0.00 +0.7 0.69 ± 10% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary
0.00 +0.8 0.79 ± 11% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.8 0.83 ± 7% perf-profile.calltrace.cycles-pp.__schedule.schedule.raid1_write_request.raid1_make_request.md_handle_request
0.62 ± 3% +0.8 1.45 ± 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait.__xfs_log_force_lsn
0.00 +0.8 0.83 ± 7% perf-profile.calltrace.cycles-pp.schedule.raid1_write_request.raid1_make_request.md_handle_request.md_make_request
0.63 ± 2% +0.8 1.46 ± 22% perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
0.62 ± 2% +0.8 1.46 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn
3.92 ± 2% +0.9 4.79 ± 6% perf-profile.calltrace.cycles-pp.ret_from_fork
3.92 ± 2% +0.9 4.79 ± 6% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.69 ± 2% +0.9 1.64 ± 23% perf-profile.calltrace.cycles-pp.xlog_wait.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter
0.00 +1.2 1.17 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.prepare_to_wait_event.raid1_write_request.raid1_make_request.md_handle_request
0.00 +1.2 1.23 ± 18% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.raid1_write_request.raid1_make_request.md_handle_request.submit_flushes
0.00 +1.3 1.27 ± 17% perf-profile.calltrace.cycles-pp.raid1_write_request.raid1_make_request.md_handle_request.submit_flushes.process_one_work
0.00 +1.3 1.27 ± 17% perf-profile.calltrace.cycles-pp.md_handle_request.submit_flushes.process_one_work.worker_thread.kthread
0.00 +1.3 1.27 ± 17% perf-profile.calltrace.cycles-pp.raid1_make_request.md_handle_request.submit_flushes.process_one_work.worker_thread
0.00 +1.3 1.27 ± 17% perf-profile.calltrace.cycles-pp.submit_flushes.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +1.6 1.65 ± 14% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.raid_end_bio_io
0.00 +1.7 1.71 ± 14% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.raid_end_bio_io.raid1_end_write_request
0.00 +1.7 1.71 ± 14% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.raid_end_bio_io.raid1_end_write_request.brd_make_request
0.00 +1.9 1.86 ± 13% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.raid_end_bio_io.raid1_end_write_request.brd_make_request.generic_make_request
0.00 +2.1 2.10 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn
0.00 +2.1 2.10 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync
0.00 +2.1 2.11 ± 10% perf-profile.calltrace.cycles-pp.remove_wait_queue.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter
0.00 +2.2 2.16 ± 10% perf-profile.calltrace.cycles-pp.raid_end_bio_io.raid1_end_write_request.brd_make_request.generic_make_request.flush_bio_list
2.24 ± 4% +2.2 4.44 ± 15% perf-profile.calltrace.cycles-pp.xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter.__vfs_write.vfs_write
0.00 +2.3 2.25 ± 10% perf-profile.calltrace.cycles-pp.raid1_end_write_request.brd_make_request.generic_make_request.flush_bio_list.flush_pending_writes
0.00 +2.3 2.30 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.raid1_write_request.raid1_make_request.md_handle_request
0.00 +2.4 2.35 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.raid1_write_request.raid1_make_request.md_handle_request.md_make_request
0.37 ± 65% +2.4 2.81 ± 7% perf-profile.calltrace.cycles-pp.md_thread.kthread.ret_from_fork
0.26 ±100% +2.5 2.81 ± 7% perf-profile.calltrace.cycles-pp.raid1d.md_thread.kthread.ret_from_fork
0.26 ±100% +2.5 2.81 ± 7% perf-profile.calltrace.cycles-pp.flush_pending_writes.raid1d.md_thread.kthread.ret_from_fork
0.26 ±100% +2.6 2.81 ± 7% perf-profile.calltrace.cycles-pp.flush_bio_list.flush_pending_writes.raid1d.md_thread.kthread
0.10 ±200% +2.7 2.76 ± 7% perf-profile.calltrace.cycles-pp.generic_make_request.flush_bio_list.flush_pending_writes.raid1d.md_thread
0.00 +2.7 2.73 ± 7% perf-profile.calltrace.cycles-pp.brd_make_request.generic_make_request.flush_bio_list.flush_pending_writes.raid1d
1.20 ± 3% +3.1 4.35 ± 15% perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter.__vfs_write
0.63 ± 6% +3.8 4.38 ± 27% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync
0.63 ± 5% +3.8 4.39 ± 27% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter
0.63 ± 5% +3.8 4.40 ± 27% perf-profile.calltrace.cycles-pp.remove_wait_queue.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter.__vfs_write
1.26 ± 5% +5.3 6.55 ± 27% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter
1.27 ± 5% +5.3 6.55 ± 27% perf-profile.calltrace.cycles-pp._raw_spin_lock.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter.__vfs_write
1.30 ± 4% +8.4 9.72 ± 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.33 ± 4% +8.9 10.26 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.28 ± 2% +9.1 11.36 ± 27% perf-profile.calltrace.cycles-pp.__xfs_log_force_lsn.xfs_file_fsync.xfs_file_write_iter.__vfs_write.vfs_write
1.59 ± 4% +10.4 11.97 ± 9% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.59 ± 4% +10.4 11.98 ± 9% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
1.59 ± 4% +10.4 11.98 ± 9% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
1.63 ± 4% +10.8 12.47 ± 8% perf-profile.calltrace.cycles-pp.secondary_startup_64
0.00 +57.7 57.66 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.raid1_write_request.raid1_make_request
0.00 +57.7 57.73 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.raid1_write_request.raid1_make_request.md_handle_request
0.05 ±299% +57.8 57.85 ± 9% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.raid1_write_request.raid1_make_request.md_handle_request.md_make_request
0.19 ±154% +62.5 62.73 ± 8% perf-profile.calltrace.cycles-pp.raid1_write_request.raid1_make_request.md_handle_request.md_make_request.generic_make_request
0.19 ±154% +62.6 62.76 ± 8% perf-profile.calltrace.cycles-pp.md_handle_request.md_make_request.generic_make_request.submit_bio.xfs_submit_ioend
0.19 ±154% +62.6 62.79 ± 8% perf-profile.calltrace.cycles-pp.md_make_request.generic_make_request.submit_bio.xfs_submit_ioend.xfs_vm_writepages
0.20 ±154% +62.6 62.81 ± 8% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages
0.20 ±154% +62.6 62.81 ± 8% perf-profile.calltrace.cycles-pp.submit_bio.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range
0.20 ±154% +62.6 62.82 ± 8% perf-profile.calltrace.cycles-pp.xfs_submit_ioend.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
0.29 ±125% +62.8 63.09 ± 8% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
0.29 ±126% +62.8 63.10 ± 8% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_write_iter
0.29 ±125% +62.8 63.11 ± 8% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_write_iter.__vfs_write
0.62 ± 41% +62.9 63.52 ± 7% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.xfs_file_write_iter.__vfs_write.vfs_write
88.51 -88.2 0.26 ± 19% perf-profile.children.cycles-pp.md_flush_request
87.57 -87.2 0.35 ± 19% perf-profile.children.cycles-pp.submit_bio_wait
87.59 -87.2 0.39 ± 19% perf-profile.children.cycles-pp.blkdev_issue_flush
83.26 -83.2 0.02 ±123% perf-profile.children.cycles-pp._raw_spin_lock_irq
88.85 -25.7 63.11 ± 8% perf-profile.children.cycles-pp.md_make_request
88.90 -25.7 63.17 ± 8% perf-profile.children.cycles-pp.submit_bio
88.83 -24.5 64.31 ± 8% perf-profile.children.cycles-pp.raid1_make_request
88.84 -24.5 64.33 ± 8% perf-profile.children.cycles-pp.md_handle_request
89.38 -23.5 65.92 ± 7% perf-profile.children.cycles-pp.generic_make_request
89.90 -13.4 76.51 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.79 -13.0 79.76 perf-profile.children.cycles-pp.xfs_file_fsync
93.08 -12.6 80.49 perf-profile.children.cycles-pp.xfs_file_write_iter
93.09 -12.6 80.54 perf-profile.children.cycles-pp.__vfs_write
93.13 -12.5 80.60 perf-profile.children.cycles-pp.vfs_write
93.13 -12.5 80.61 perf-profile.children.cycles-pp.ksys_write
93.22 -12.4 80.83 perf-profile.children.cycles-pp.do_syscall_64
93.22 -12.4 80.83 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
3.40 ± 2% -1.4 1.97 ± 8% perf-profile.children.cycles-pp.worker_thread
3.33 ± 2% -1.4 1.96 ± 9% perf-profile.children.cycles-pp.process_one_work
1.03 ± 7% -1.0 0.07 ± 37% perf-profile.children.cycles-pp.xlog_cil_force_lsn
1.69 ± 2% -0.7 0.96 ± 4% perf-profile.children.cycles-pp.reschedule_interrupt
1.66 ± 2% -0.7 0.94 ± 4% perf-profile.children.cycles-pp.scheduler_ipi
1.13 ± 2% -0.7 0.47 ± 11% perf-profile.children.cycles-pp.finish_wait
0.54 ± 8% -0.4 0.10 ± 38% perf-profile.children.cycles-pp.xlog_cil_push
0.49 ± 9% -0.4 0.09 ± 35% perf-profile.children.cycles-pp.xlog_write
0.10 ± 8% -0.1 0.04 ± 67% perf-profile.children.cycles-pp.flush_work
0.20 ± 5% -0.0 0.16 ± 11% perf-profile.children.cycles-pp.reweight_entity
0.06 ± 10% +0.0 0.10 ± 23% perf-profile.children.cycles-pp.brd_lookup_page
0.18 ± 5% +0.0 0.23 ± 13% perf-profile.children.cycles-pp.__update_load_avg_se
0.02 ±153% +0.1 0.07 ± 16% perf-profile.children.cycles-pp.delay_tsc
0.03 ±100% +0.1 0.08 ± 15% perf-profile.children.cycles-pp.find_next_bit
0.08 ± 5% +0.1 0.14 ± 14% perf-profile.children.cycles-pp.native_write_msr
0.29 ± 4% +0.1 0.36 ± 8% perf-profile.children.cycles-pp.__orc_find
0.40 ± 4% +0.1 0.46 ± 7% perf-profile.children.cycles-pp.dequeue_task_fair
0.11 ± 11% +0.1 0.18 ± 14% perf-profile.children.cycles-pp.__module_text_address
0.12 ± 8% +0.1 0.19 ± 13% perf-profile.children.cycles-pp.is_module_text_address
0.04 ± 50% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.08 ± 11% perf-profile.children.cycles-pp.clear_page_erms
0.00 +0.1 0.08 ± 28% perf-profile.children.cycles-pp.__indirect_thunk_start
0.01 ±200% +0.1 0.10 ± 25% perf-profile.children.cycles-pp.xfs_trans_alloc
0.00 +0.1 0.09 ± 18% perf-profile.children.cycles-pp.md_wakeup_thread
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.09 ± 20% perf-profile.children.cycles-pp.ktime_get
0.18 ± 4% +0.1 0.27 ± 12% perf-profile.children.cycles-pp.idle_cpu
0.20 ± 6% +0.1 0.30 ± 9% perf-profile.children.cycles-pp.unwind_get_return_address
0.16 ± 10% +0.1 0.25 ± 13% perf-profile.children.cycles-pp.__module_address
0.03 ±100% +0.1 0.13 ± 8% perf-profile.children.cycles-pp.brd_insert_page
0.06 ± 9% +0.1 0.16 ± 14% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 12% +0.1 0.18 ± 24% perf-profile.children.cycles-pp.bio_alloc_bioset
0.03 ± 81% +0.1 0.14 ± 27% perf-profile.children.cycles-pp.generic_make_request_checks
0.17 ± 7% +0.1 0.28 ± 11% perf-profile.children.cycles-pp.__kernel_text_address
0.11 ± 9% +0.1 0.22 ± 15% perf-profile.children.cycles-pp.wake_up_page_bit
0.16 ± 6% +0.1 0.27 ± 10% perf-profile.children.cycles-pp.kernel_text_address
0.00 +0.1 0.11 ± 11% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +0.1 0.11 ± 19% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.08 ± 10% +0.1 0.19 ± 22% perf-profile.children.cycles-pp.xfs_do_writepage
0.25 ± 4% +0.1 0.37 ± 10% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.00 +0.1 0.12 ± 13% perf-profile.children.cycles-pp.switch_mm
0.08 ± 38% +0.1 0.20 ± 19% perf-profile.children.cycles-pp.io_serial_in
0.18 ± 5% +0.1 0.31 ± 7% perf-profile.children.cycles-pp.dequeue_entity
0.00 +0.1 0.13 ± 26% perf-profile.children.cycles-pp.tick_nohz_next_event
0.06 ± 11% +0.1 0.19 ± 19% perf-profile.children.cycles-pp.mempool_alloc
0.32 ± 5% +0.1 0.45 ± 6% perf-profile.children.cycles-pp.orc_find
0.15 ± 10% +0.1 0.29 ± 19% perf-profile.children.cycles-pp.xfs_destroy_ioend
0.15 ± 11% +0.1 0.30 ± 18% perf-profile.children.cycles-pp.call_bio_endio
0.08 ± 17% +0.2 0.23 ± 25% perf-profile.children.cycles-pp.xlog_state_done_syncing
0.00 +0.2 0.15 ± 22% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.12 ± 8% +0.2 0.27 ± 23% perf-profile.children.cycles-pp.write_cache_pages
0.10 ± 16% +0.2 0.26 ± 16% perf-profile.children.cycles-pp.wait_for_xmitr
0.10 ± 19% +0.2 0.25 ± 14% perf-profile.children.cycles-pp.serial8250_console_putchar
0.10 ± 17% +0.2 0.26 ± 13% perf-profile.children.cycles-pp.uart_console_write
0.10 ± 16% +0.2 0.26 ± 15% perf-profile.children.cycles-pp.serial8250_console_write
0.11 ± 15% +0.2 0.27 ± 15% perf-profile.children.cycles-pp.console_unlock
0.09 ± 9% +0.2 0.26 ± 12% perf-profile.children.cycles-pp.scheduler_tick
0.10 ± 18% +0.2 0.28 ± 15% perf-profile.children.cycles-pp.irq_work_run_list
0.10 ± 15% +0.2 0.28 ± 14% perf-profile.children.cycles-pp.xlog_state_do_callback
0.09 ± 12% +0.2 0.27 ± 16% perf-profile.children.cycles-pp.irq_work_run
0.09 ± 12% +0.2 0.27 ± 16% perf-profile.children.cycles-pp.printk
0.09 ± 12% +0.2 0.27 ± 16% perf-profile.children.cycles-pp.vprintk_emit
0.09 ± 12% +0.2 0.27 ± 17% perf-profile.children.cycles-pp.irq_work_interrupt
0.09 ± 12% +0.2 0.27 ± 17% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.00 +0.2 0.18 ± 16% perf-profile.children.cycles-pp.poll_idle
0.30 ± 4% +0.2 0.49 ± 11% perf-profile.children.cycles-pp.update_load_avg
1.39 ± 2% +0.2 1.59 ± 6% perf-profile.children.cycles-pp.__save_stack_trace
1.43 +0.2 1.65 ± 6% perf-profile.children.cycles-pp.save_stack_trace_tsk
0.14 ± 13% +0.2 0.36 ± 13% perf-profile.children.cycles-pp.update_process_times
0.00 +0.2 0.23 ± 22% perf-profile.children.cycles-pp.find_busiest_group
0.22 ± 6% +0.2 0.45 ± 18% perf-profile.children.cycles-pp.brd_do_bvec
0.14 ± 13% +0.2 0.38 ± 14% perf-profile.children.cycles-pp.tick_sched_handle
0.10 ± 8% +0.2 0.34 ± 26% perf-profile.children.cycles-pp.xfs_log_commit_cil
0.07 ± 10% +0.3 0.33 ± 23% perf-profile.children.cycles-pp.io_schedule
0.03 ± 83% +0.3 0.29 ± 27% perf-profile.children.cycles-pp.__softirqentry_text_start
0.11 ± 5% +0.3 0.36 ± 25% perf-profile.children.cycles-pp.__xfs_trans_commit
0.06 ± 36% +0.3 0.31 ± 26% perf-profile.children.cycles-pp.irq_exit
0.08 ± 9% +0.3 0.35 ± 23% perf-profile.children.cycles-pp.wait_on_page_bit_common
0.15 ± 12% +0.3 0.42 ± 14% perf-profile.children.cycles-pp.tick_sched_timer
0.10 ± 11% +0.3 0.39 ± 22% perf-profile.children.cycles-pp.__filemap_fdatawait_range
0.06 ± 12% +0.3 0.37 ± 9% perf-profile.children.cycles-pp.schedule_idle
0.02 ±153% +0.3 0.34 ± 17% perf-profile.children.cycles-pp.menu_select
0.17 ± 5% +0.3 0.49 ± 22% perf-profile.children.cycles-pp.xfs_vn_update_time
0.19 ± 12% +0.3 0.51 ± 18% perf-profile.children.cycles-pp.xlog_iodone
0.18 ± 5% +0.3 0.51 ± 22% perf-profile.children.cycles-pp.file_update_time
0.18 ± 5% +0.3 0.51 ± 21% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
0.21 ± 11% +0.4 0.60 ± 15% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.26 ± 6% +0.4 0.69 ± 16% perf-profile.children.cycles-pp.pick_next_task_fair
1.20 ± 2% +0.4 1.64 ± 10% perf-profile.children.cycles-pp.schedule
0.28 ± 5% +0.4 0.72 ± 21% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
0.00 +0.4 0.44 ± 22% perf-profile.children.cycles-pp.load_balance
0.25 ± 8% +0.5 0.74 ± 15% perf-profile.children.cycles-pp.hrtimer_interrupt
1.30 ± 2% +0.7 2.00 ± 9% perf-profile.children.cycles-pp.__schedule
0.31 ± 8% +0.8 1.09 ± 16% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.31 ± 8% +0.8 1.09 ± 16% perf-profile.children.cycles-pp.apic_timer_interrupt
3.92 ± 2% +0.9 4.79 ± 6% perf-profile.children.cycles-pp.ret_from_fork
3.92 ± 2% +0.9 4.79 ± 6% perf-profile.children.cycles-pp.kthread
0.69 ± 2% +0.9 1.64 ± 23% perf-profile.children.cycles-pp.xlog_wait
0.08 ± 13% +1.2 1.27 ± 17% perf-profile.children.cycles-pp.submit_flushes
0.16 ± 9% +1.6 1.74 ± 4% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.17 ± 9% +2.0 2.16 ± 10% perf-profile.children.cycles-pp.raid_end_bio_io
0.21 ± 6% +2.0 2.25 ± 10% perf-profile.children.cycles-pp.raid1_end_write_request
2.24 ± 4% +2.2 4.44 ± 15% perf-profile.children.cycles-pp.xfs_log_force_lsn
0.46 ± 6% +2.3 2.73 ± 7% perf-profile.children.cycles-pp.brd_make_request
0.51 ± 6% +2.3 2.81 ± 7% perf-profile.children.cycles-pp.md_thread
0.49 ± 6% +2.3 2.81 ± 7% perf-profile.children.cycles-pp.raid1d
0.49 ± 6% +2.3 2.81 ± 7% perf-profile.children.cycles-pp.flush_pending_writes
0.49 ± 6% +2.3 2.81 ± 7% perf-profile.children.cycles-pp.flush_bio_list
1.80 ± 3% +5.6 7.44 ± 27% perf-profile.children.cycles-pp._raw_spin_lock
2.12 ± 4% +5.8 7.97 ± 20% perf-profile.children.cycles-pp.remove_wait_queue
1.33 ± 4% +8.8 10.12 ± 8% perf-profile.children.cycles-pp.intel_idle
1.37 ± 4% +9.3 10.71 ± 8% perf-profile.children.cycles-pp.cpuidle_enter_state
1.59 ± 4% +10.4 11.98 ± 9% perf-profile.children.cycles-pp.start_secondary
1.63 ± 4% +10.8 12.47 ± 8% perf-profile.children.cycles-pp.secondary_startup_64
1.63 ± 4% +10.8 12.47 ± 8% perf-profile.children.cycles-pp.cpu_startup_entry
1.63 ± 4% +10.9 12.49 ± 8% perf-profile.children.cycles-pp.do_idle
3.48 +12.2 15.72 ± 23% perf-profile.children.cycles-pp.__xfs_log_force_lsn
1.36 ± 12% +57.8 59.12 ± 10% perf-profile.children.cycles-pp.prepare_to_wait_event
0.43 ± 38% +62.4 62.82 ± 8% perf-profile.children.cycles-pp.xfs_submit_ioend
0.55 ± 29% +62.5 63.10 ± 8% perf-profile.children.cycles-pp.xfs_vm_writepages
0.55 ± 30% +62.5 63.10 ± 8% perf-profile.children.cycles-pp.do_writepages
0.55 ± 29% +62.6 63.11 ± 8% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
0.66 ± 25% +62.9 63.52 ± 7% perf-profile.children.cycles-pp.file_write_and_wait_range
0.39 ± 43% +63.6 64.02 ± 8% perf-profile.children.cycles-pp.raid1_write_request
5.43 ± 3% +64.2 69.64 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
89.86 -13.5 76.31 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.14 ± 8% -0.0 0.09 ± 19% perf-profile.self.cycles-pp.md_flush_request
0.10 ± 12% -0.0 0.07 ± 21% perf-profile.self.cycles-pp.account_entity_enqueue
0.06 ± 7% +0.0 0.08 ± 12% perf-profile.self.cycles-pp.pick_next_task_fair
0.05 ± 12% +0.0 0.08 ± 18% perf-profile.self.cycles-pp.___perf_sw_event
0.15 ± 6% +0.0 0.18 ± 9% perf-profile.self.cycles-pp.__update_load_avg_se
0.17 ± 4% +0.0 0.22 ± 10% perf-profile.self.cycles-pp.__schedule
0.10 ± 11% +0.1 0.15 ± 11% perf-profile.self.cycles-pp._raw_spin_lock
0.02 ±153% +0.1 0.07 ± 16% perf-profile.self.cycles-pp.delay_tsc
0.02 ±152% +0.1 0.07 ± 23% perf-profile.self.cycles-pp.set_next_entity
0.03 ±100% +0.1 0.08 ± 15% perf-profile.self.cycles-pp.find_next_bit
0.08 ± 5% +0.1 0.14 ± 14% perf-profile.self.cycles-pp.native_write_msr
0.01 ±200% +0.1 0.07 ± 23% perf-profile.self.cycles-pp.kmem_cache_alloc
0.29 ± 4% +0.1 0.36 ± 8% perf-profile.self.cycles-pp.__orc_find
0.14 ± 7% +0.1 0.21 ± 12% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.clear_page_erms
0.00 +0.1 0.08 ± 28% perf-profile.self.cycles-pp.__indirect_thunk_start
0.00 +0.1 0.08 ± 20% perf-profile.self.cycles-pp.md_wakeup_thread
0.34 ± 6% +0.1 0.43 ± 12% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.18 ± 4% +0.1 0.27 ± 12% perf-profile.self.cycles-pp.idle_cpu
0.16 ± 10% +0.1 0.25 ± 13% perf-profile.self.cycles-pp.__module_address
0.06 ± 11% +0.1 0.17 ± 14% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.08 ± 38% +0.1 0.20 ± 19% perf-profile.self.cycles-pp.io_serial_in
0.18 ± 5% +0.1 0.32 ± 15% perf-profile.self.cycles-pp.update_load_avg
0.00 +0.1 0.15 ± 17% perf-profile.self.cycles-pp.poll_idle
0.00 +0.2 0.15 ± 16% perf-profile.self.cycles-pp.menu_select
0.00 +0.2 0.18 ± 24% perf-profile.self.cycles-pp.find_busiest_group
0.02 ±152% +0.3 0.35 ± 21% perf-profile.self.cycles-pp.raid1_write_request
1.33 ± 4% +8.8 10.12 ± 8% perf-profile.self.cycles-pp.intel_idle
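Side note on reading the perf-profile block above: the middle column there looks like an absolute delta in percentage points rather than a relative change (e.g. the md_flush_request call path goes from 87.59 to 0.00 and is shown as -87.6, and the spinlock slowpath under prepare_to_wait_event goes from 0.00 to 57.66 and is shown as +57.7), unlike the perf-stat and headline rows where a trailing '%' marks a relative change. A tiny check of that reading (my interpretation of the columns, not documented LKP semantics):

# Assumption: for *.cycles-pp rows the middle column is compare - base in
# percentage points; the two (base, compare) pairs are copied from above.
pairs = {
    "md_flush_request call path": (87.59, 0.00),
    "native_queued_spin_lock_slowpath via prepare_to_wait_event": (0.00, 57.66),
}
for name, (base, compare) in pairs.items():
    print(f"{name}: {compare - base:+.1f} points")   # -87.6 and +57.7, matching the report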
aim7.jobs-per-min
1700 +-+------------------------------------------------------------------+
|+ ++++++ :+ ++++ ++++ +++ ++++++ + + ++++++++++++ ++ ++|
1600 +-+ + +++ + +++++ ++.++ + ++ ++ + ++ |
| |
| |
1500 +-+ |
| |
1400 +-+ |
| |
1300 +-+ |
| |
O OO OO O O O |
1200 +OO OOOOOOOOO OO OOOOOOOOOOOOOO OOOOOOOOO O |
| |
1100 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
3 years, 11 months
[lkp-robot] [shmem] a067b3644b: vm-scalability.throughput -73.2% regression
by kernel test robot
Greeting,
FYI, we noticed a -73.2% regression of vm-scalability.throughput due to commit:
commit: a067b3644bc70f2f715227bc3d22f5bc4a951dff ("shmem: Convert shmem_add_to_page_cache to XArray")
git://git.infradead.org/users/willy/linux-dax.git xarray
in testcase: vm-scalability
on test machine: qemu-system-x86_64 -enable-kvm -cpu qemu64,+ssse3 -smp 4 -m 4G
with following parameters:
runtime: 300s
size: 2T
test: shm-pread-seq
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
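One more observation before the table: the per-task median below actually improves slightly (+7.0%) while the aggregate throughput drops 73.2%, and vmstat.procs.r falls from 3 to 1, so on this 4-vCPU guest the regression reads like lost parallelism rather than a per-task slowdown. A rough arithmetic check of that reading (my inference from the numbers in the table, not vm-scalability's definition of throughput):

# Assumption: aggregate throughput ~= per-task median rate x number of tasks
# actually running; the values are copied from the comparison table below.
median_before, median_after = 243461, 260463
throughput_before, throughput_after = 973694, 260463
print(throughput_before / median_before)   # ~4.0 -> roughly all 4 vCPUs contributing
print(throughput_after / median_after)     # 1.0  -> effectively one task's worth of work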
=========================================================================================
compiler/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/x86_64-allyesdebian/debian-x86_64-2018-04-03.cgz/300s/2T/vm-kbuild-4G/shm-pread-seq/vm-scalability
commit:
108ba81fe3 ("shmem: Convert find_swap_entry to XArray")
a067b3644b ("shmem: Convert shmem_add_to_page_cache to XArray")
108ba81fe3134638 a067b3644bc70f2f715227bc3d
---------------- --------------------------
%stddev %change %stddev
\ | \
973694 ± 5% -73.2% 260463 ± 2% vm-scalability.throughput
0.18 -57.9% 0.08 ± 4% vm-scalability.free_time
243461 ± 5% +7.0% 260463 ± 2% vm-scalability.median
62943 ± 15% -86.6% 8405 ± 9% vm-scalability.time.involuntary_context_switches
41550585 ± 11% -63.7% 15073341 ± 3% vm-scalability.time.minor_page_faults
282.50 ± 3% -65.0% 99.00 vm-scalability.time.percent_of_cpu_this_job_got
653.30 ± 4% -61.9% 249.02 vm-scalability.time.system_time
222.16 ± 2% -74.2% 57.36 vm-scalability.time.user_time
3921839 ± 17% -100.0% 1366 ± 3% vm-scalability.time.voluntary_context_switches
2.553e+08 ± 5% -73.5% 67626963 ± 3% vm-scalability.workload
283.00 ± 9% +159.1% 733.25 ± 6% interrupts.CAL:Function_call_interrupts
3202 -46.3% 1721 slabinfo.page->ptl.active_objs
3927 -41.8% 2286 slabinfo.page->ptl.num_objs
83771 -10.8% 74711 softirqs.RCU
111439 ± 5% -58.7% 45980 ± 3% softirqs.SCHED
135036 ± 3% -18.9% 109566 ± 2% softirqs.TIMER
3.00 -66.7% 1.00 vmstat.procs.r
27602 ± 16% -94.0% 1651 ± 13% vmstat.system.cs
5979 ± 21% -78.4% 1294 ± 10% vmstat.system.in
403546 -36.1% 258064 meminfo.Active
403475 -36.1% 257752 meminfo.Active(anon)
1016425 +12.4% 1142559 meminfo.Inactive
1015445 +12.4% 1141822 meminfo.Inactive(anon)
11637 -51.8% 5605 meminfo.PageTables
18.92 ± 7% +46.6 65.47 mpstat.cpu.idle%
0.43 ± 35% -0.4 0.00 ±120% mpstat.cpu.iowait%
15.02 ± 28% -8.5 6.51 ± 10% mpstat.cpu.steal%
48.39 ± 9% -26.8 21.61 ± 2% mpstat.cpu.sys%
17.15 ± 6% -10.9 6.25 mpstat.cpu.usr%
101963 -36.7% 64496 proc-vmstat.nr_active_anon
385408 -1.6% 379417 proc-vmstat.nr_file_pages
253679 +12.4% 285190 proc-vmstat.nr_inactive_anon
4149 -1.5% 4086 proc-vmstat.nr_kernel_stack
2898 -51.6% 1403 proc-vmstat.nr_page_table_pages
295656 -2.0% 289677 proc-vmstat.nr_shmem
101967 -36.7% 64500 proc-vmstat.nr_zone_active_anon
253681 +12.4% 285191 proc-vmstat.nr_zone_inactive_anon
19639935 ± 5% -21.2% 15468488 ± 3% proc-vmstat.numa_hit
19639935 ± 5% -21.2% 15468488 ± 3% proc-vmstat.numa_local
14187284 ± 5% -99.9% 17509 ± 13% proc-vmstat.pgactivate
14133756 ± 4% -21.6% 11084067 ± 5% proc-vmstat.pgalloc_dma32
5499321 ± 9% -20.4% 4376536 ± 3% proc-vmstat.pgalloc_normal
41936064 ± 11% -63.1% 15463252 ± 3% proc-vmstat.pgfault
19505793 ± 5% -21.1% 15389143 ± 3% proc-vmstat.pgfree
117160 ± 3% -53.6% 54389 sched_debug.cfs_rq:/.exec_clock.avg
118940 ± 3% -39.3% 72184 ± 5% sched_debug.cfs_rq:/.exec_clock.max
114940 ± 3% -65.5% 39702 ± 5% sched_debug.cfs_rq:/.exec_clock.min
1557 ± 27% +736.8% 13030 ± 13% sched_debug.cfs_rq:/.exec_clock.stddev
126995 ± 33% -63.7% 46135 ± 96% sched_debug.cfs_rq:/.load.min
294.54 ± 4% +19.6% 352.41 ± 9% sched_debug.cfs_rq:/.load_avg.avg
139.08 ± 8% +31.9% 183.48 ± 16% sched_debug.cfs_rq:/.load_avg.stddev
455686 ± 4% -70.6% 133860 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
466686 ± 3% -65.7% 160093 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
444165 ± 4% -75.2% 110113 ± 6% sched_debug.cfs_rq:/.min_vruntime.min
8771 ± 25% +128.4% 20036 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
0.92 ± 3% -22.7% 0.71 ± 8% sched_debug.cfs_rq:/.nr_running.avg
0.62 ± 29% -60.0% 0.25 ± 57% sched_debug.cfs_rq:/.nr_running.min
0.19 ± 60% +97.2% 0.37 ± 20% sched_debug.cfs_rq:/.nr_running.stddev
90.92 ± 29% -60.6% 35.79 ±110% sched_debug.cfs_rq:/.runnable_load_avg.min
134.24 ± 15% +41.9% 190.46 ± 19% sched_debug.cfs_rq:/.runnable_load_avg.stddev
121098 ± 31% -69.9% 36507 ±109% sched_debug.cfs_rq:/.runnable_weight.min
8770 ± 25% +128.5% 20037 ± 11% sched_debug.cfs_rq:/.spread0.stddev
806.85 ± 12% -24.7% 607.84 ± 7% sched_debug.cfs_rq:/.util_avg.avg
592.62 ± 21% -55.9% 261.21 ± 28% sched_debug.cfs_rq:/.util_avg.min
163.80 ± 19% +82.8% 299.38 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
544.27 ± 26% -77.6% 121.94 ± 17% sched_debug.cfs_rq:/.util_est_enqueued.avg
817.46 ± 28% -47.0% 432.96 ± 17% sched_debug.cfs_rq:/.util_est_enqueued.max
229.71 ± 50% -99.9% 0.33 ± 50% sched_debug.cfs_rq:/.util_est_enqueued.min
1179733 ± 48% +771.7% 10283446 ± 12% sched_debug.cpu.avg_idle.avg
2186782 ± 39% +655.9% 16529998 ± 14% sched_debug.cpu.avg_idle.max
436002 ± 80% +1048.0% 5005135 ± 19% sched_debug.cpu.avg_idle.min
697066 ± 27% +539.8% 4459732 ± 23% sched_debug.cpu.avg_idle.stddev
401.04 ± 2% +31.8% 528.42 ± 13% sched_debug.cpu.cpu_load[1].max
130.00 ± 27% -48.9% 66.42 ± 14% sched_debug.cpu.cpu_load[1].min
103.93 ± 16% +73.3% 180.16 ± 14% sched_debug.cpu.cpu_load[1].stddev
366.04 ± 6% +62.6% 595.33 ± 12% sched_debug.cpu.cpu_load[2].max
139.92 ± 20% -60.0% 55.92 ± 9% sched_debug.cpu.cpu_load[2].min
87.49 ± 26% +142.0% 211.73 ± 14% sched_debug.cpu.cpu_load[2].stddev
353.79 ± 10% +81.7% 642.87 ± 11% sched_debug.cpu.cpu_load[3].max
151.04 ± 18% -72.6% 41.38 ± 20% sched_debug.cpu.cpu_load[3].min
79.58 ± 31% +205.8% 243.35 ± 14% sched_debug.cpu.cpu_load[3].stddev
337.00 ± 13% +100.0% 674.17 ± 11% sched_debug.cpu.cpu_load[4].max
158.04 ± 17% -78.4% 34.12 ± 35% sched_debug.cpu.cpu_load[4].min
71.84 ± 36% +269.3% 265.33 ± 13% sched_debug.cpu.cpu_load[4].stddev
2696 ± 9% -27.7% 1948 ± 7% sched_debug.cpu.curr->pid.avg
1372 ± 64% -82.3% 243.29 ±126% sched_debug.cpu.curr->pid.min
921.23 ± 41% +69.1% 1558 ± 8% sched_debug.cpu.curr->pid.stddev
487128 ± 14% +30.0% 633229 ± 19% sched_debug.cpu.load.max
144979 ± 24% +69.4% 245569 ± 16% sched_debug.cpu.load.stddev
3298283 ± 31% +118.2% 7198331 ± 14% sched_debug.cpu.max_idle_balance_cost.avg
5959825 ± 30% +85.5% 11054227 ± 19% sched_debug.cpu.max_idle_balance_cost.max
1278454 ± 39% +212.9% 4000812 ± 18% sched_debug.cpu.max_idle_balance_cost.min
41082 -25.0% 30809 ± 2% sched_debug.cpu.nr_load_updates.avg
41797 -14.8% 35619 ± 2% sched_debug.cpu.nr_load_updates.max
40213 -34.8% 26209 ± 3% sched_debug.cpu.nr_load_updates.min
629.61 ± 9% +489.3% 3710 ± 4% sched_debug.cpu.nr_load_updates.stddev
0.58 ± 31% -57.1% 0.25 ± 57% sched_debug.cpu.nr_running.min
953962 ± 21% -92.9% 67812 ± 10% sched_debug.cpu.nr_switches.avg
996979 ± 19% -90.9% 90867 ± 17% sched_debug.cpu.nr_switches.max
913679 ± 21% -94.9% 46191 ± 5% sched_debug.cpu.nr_switches.min
32843 ± 27% -47.3% 17308 ± 42% sched_debug.cpu.nr_switches.stddev
942452 ± 21% -94.0% 56255 ± 13% sched_debug.cpu.sched_count.avg
987126 ± 20% -92.2% 77110 ± 22% sched_debug.cpu.sched_count.max
900694 ± 21% -96.0% 36326 ± 8% sched_debug.cpu.sched_count.min
33839 ± 26% -53.1% 15870 ± 46% sched_debug.cpu.sched_count.stddev
416113 ± 22% -97.1% 11890 ± 15% sched_debug.cpu.sched_goidle.avg
429842 ± 23% -96.2% 16204 ± 20% sched_debug.cpu.sched_goidle.max
402313 ± 21% -98.2% 7433 ± 16% sched_debug.cpu.sched_goidle.min
10768 ± 48% -69.0% 3337 ± 30% sched_debug.cpu.sched_goidle.stddev
478266 ± 21% -94.1% 28209 ± 13% sched_debug.cpu.ttwu_count.avg
501683 ± 20% -92.0% 39900 ± 24% sched_debug.cpu.ttwu_count.max
453413 ± 22% -95.9% 18446 ± 3% sched_debug.cpu.ttwu_count.min
18884 ± 16% -54.4% 8612 ± 48% sched_debug.cpu.ttwu_count.stddev
37415 ± 20% -48.4% 19288 ± 26% sched_debug.cpu.ttwu_local.avg
57109 ± 13% -52.0% 27419 ± 43% sched_debug.cpu.ttwu_local.max
455859 ± 34% -39.7% 275085 ± 5% lock_stat.&(&adapter->stats_lock)->rlock.holdtime-total
16210 ± 60% -86.7% 2151 ±160% lock_stat.&(&dn_rt_hash_table[i].lock)->rlock.holdtime-max
68614 ± 23% -37.5% 42887 ± 7% lock_stat.&(&dn_rt_hash_table[i].lock)->rlock.holdtime-total
153567 ± 2% +17.9% 181023 ± 5% lock_stat.&(&dsp_lock)->rlock.holdtime-total
81334 ± 53% +89.8% 154340 ± 44% lock_stat.&(&ep->lock)->rlock.holdtime-total
13027 ± 24% +78.8% 23291 ± 12% lock_stat.&(&ep->lock)->rlock.waittime-max
74988 ± 66% +153.9% 190420 ± 38% lock_stat.&(&ep->lock)->rlock.waittime-total
34932 ± 3% +28.1% 44763 ± 17% lock_stat.&(&gc_work->dwork)->timer.holdtime-total
708791 ± 11% -99.9% 405.00 ± 7% lock_stat.&(&info->lock)->rlock.acq-bounces
4475 ± 8% +22.8% 5494 lock_stat.&(&ipvs->defense_work)->timer.holdtime-total
157167 ± 2% -27.4% 114177 ± 3% lock_stat.&(&parent->list_lock)->rlock.acq-bounces
163974 ± 4% -16.7% 136547 ± 5% lock_stat.&(&parent->list_lock)->rlock.holdtime-total
1482495 ± 7% -97.2% 42182 ± 4% lock_stat.&(&pgdat->lru_lock)->rlock.acq-bounces
3216117 ± 5% -35.9% 2060805 ± 3% lock_stat.&(&pgdat->lru_lock)->rlock.acquisitions
737686 ± 9% -99.8% 1798 ± 5% lock_stat.&(&pgdat->lru_lock)->rlock.con-bounces
737788 ± 9% -99.8% 1798 ± 5% lock_stat.&(&pgdat->lru_lock)->rlock.contentions
1474000 ± 9% -99.8% 2478 ± 5% lock_stat.&(&pgdat->lru_lock)->rlock.contentions.pagevec_lru_move_fn
12663440 ± 3% -29.1% 8973526 lock_stat.&(&pgdat->lru_lock)->rlock.holdtime-total
2071580 ± 10% -85.6% 298272 ± 25% lock_stat.&(&pgdat->lru_lock)->rlock.waittime-total
115219 ± 16% +293.6% 453481 ± 15% lock_stat.&(&pool->lock)->rlock.holdtime-total
11095 ±102% +164.7% 29370 ± 8% lock_stat.&(&pool->lock)->rlock.waittime-max
16178 ± 95% +2231.4% 377188 ± 21% lock_stat.&(&pool->lock)->rlock.waittime-total
545696 ± 22% +337.2% 2385686 ± 3% lock_stat.&(&sighand->siglock)->rlock.holdtime-total
9209588 ± 7% -100.0% 3443 ± 3% lock_stat.&(&xa->xa_lock)->rlock.acq-bounces
33330684 ± 5% -9.8% 30075070 ± 3% lock_stat.&(&xa->xa_lock)->rlock.acquisitions
750708 ± 10% -100.0% 12.75 ± 30% lock_stat.&(&xa->xa_lock)->rlock.con-bounces
751838 ± 10% -100.0% 12.75 ± 30% lock_stat.&(&xa->xa_lock)->rlock.contentions
1503651 ± 10% -100.0% 23.75 ± 27% lock_stat.&(&xa->xa_lock)->rlock.contentions.shmem_add_to_page_cache
20138731 ± 3% +37.9% 27772112 lock_stat.&(&xa->xa_lock)->rlock.holdtime-total
21047 ± 10% -71.1% 6073 ± 61% lock_stat.&(&xa->xa_lock)->rlock.waittime-max
2826405 ± 29% -99.7% 8548 ± 66% lock_stat.&(&xa->xa_lock)->rlock.waittime-total
263136 ± 5% -91.4% 22741 ± 3% lock_stat.&(&zone->lock)->rlock.acq-bounces
910134 ± 51% -65.1% 317918 ± 9% lock_stat.&(&zone->lock)->rlock.waittime-total
15925 ± 16% -21.6% 12486 ± 10% lock_stat.&(({do{constvoid*__vpp_verify=.holdtime-total
109133 ± 4% -61.8% 41693 ± 7% lock_stat.&(ptlock_ptr(page))->rlock#2.acq-bounces
68196728 ± 9% -55.9% 30053960 ± 3% lock_stat.&(ptlock_ptr(page))->rlock#2.acquisitions
172010 ±122% -85.4% 25129 ± 35% lock_stat.&(ptlock_ptr(page))->rlock#2.holdtime-max
33983810 ± 4% -70.4% 10071152 lock_stat.&(ptlock_ptr(page))->rlock#2.holdtime-total
189456 ± 3% -42.2% 109466 lock_stat.&(ptlock_ptr(page))->rlock.acquisitions
106129 ± 7% -52.1% 50795 ± 4% lock_stat.&(ptlock_ptr(page))->rlock.holdtime-total
4109768 ± 14% -82.1% 736924 lock_stat.&base->lock.acquisitions
13669 ± 18% -73.3% 3646 ± 37% lock_stat.&base->lock.holdtime-max
2209337 ± 9% -80.5% 431561 ± 4% lock_stat.&base->lock.holdtime-total
1910 ±156% +338.4% 8375 ± 43% lock_stat.&ctx->wqh.holdtime-max
1200480 ± 5% +56.4% 1877354 lock_stat.&dup_mmap_sem-R.holdtime-total
336791 ± 7% +39.0% 468161 ± 6% lock_stat.&f->f_pos_lock.holdtime-avg
74615 ± 8% +65.9% 123751 ± 33% lock_stat.&group->notification_waitq.holdtime-total
80849 ± 19% +96.4% 158807 ± 33% lock_stat.&group->notification_waitq.waittime-total
47155458 ± 10% -56.3% 20625775 ± 2% lock_stat.&mm->mmap_sem-R.acquisitions
248011 ± 20% -74.0% 64461 ± 47% lock_stat.&mm->mmap_sem-R.holdtime-max
8.109e+08 -76.2% 1.929e+08 lock_stat.&mm->mmap_sem-R.holdtime-total
386738 ± 36% -68.7% 121115 ± 15% lock_stat.&mm->mmap_sem-W.holdtime-max
20781707 ± 3% -75.2% 5146865 ± 2% lock_stat.&mm->mmap_sem-W.holdtime-total
172243 ± 3% -7.6% 159110 lock_stat.&nf_conntrack_generation-R.holdtime-total
2005217 ± 17% -98.8% 23380 lock_stat.&p->pi_lock.acq-bounces
4514659 ± 16% -90.7% 420676 ± 10% lock_stat.&p->pi_lock.acquisitions
43481206 ± 7% -40.7% 25775609 ± 9% lock_stat.&p->pi_lock.holdtime-total
35054 ± 15% -31.4% 24032 ± 10% lock_stat.&p->pi_lock.waittime-total
12830201 ± 14% -100.0% 192.00 ± 14% lock_stat.&page_wait_table[i].acq-bounces
17227249 ± 13% -100.0% 339.00 ± 9% lock_stat.&page_wait_table[i].acquisitions
4229005 ± 16% -100.0% 14.00 ± 34% lock_stat.&page_wait_table[i].con-bounces
4233464 ± 16% -100.0% 14.00 ± 34% lock_stat.&page_wait_table[i].contentions
986102 ± 13% -100.0% 9.50 ± 41% lock_stat.&page_wait_table[i].contentions.finish_wait
870129 ± 17% -100.0% 0.75 ±110% lock_stat.&page_wait_table[i].contentions.wait_on_page_bit_common
6610697 ± 16% -100.0% 17.75 ± 36% lock_stat.&page_wait_table[i].contentions.wake_up_page_bit
21461 ± 14% -77.2% 4900 ± 95% lock_stat.&page_wait_table[i].holdtime-max
44324794 ± 3% -99.9% 27092 ±130% lock_stat.&page_wait_table[i].holdtime-total
13.53 ± 46% +20884.1% 2839 ±105% lock_stat.&page_wait_table[i].waittime-avg
29849 ± 14% -60.8% 11704 ± 79% lock_stat.&page_wait_table[i].waittime-max
53128316 ± 24% -99.9% 52419 ±133% lock_stat.&page_wait_table[i].waittime-total
289983 ± 19% -50.3% 144255 ± 24% lock_stat.&pipe->mutex/1.waittime-total
16126 ± 15% +44.2% 23247 ± 15% lock_stat.&pipe->wait.holdtime-max
985.70 ± 59% +352.1% 4456 ± 19% lock_stat.&pipe->wait.waittime-avg
23510 ± 17% +100.7% 47181 ± 12% lock_stat.&pipe->wait.waittime-max
531976 ± 48% +607.7% 3764886 ± 24% lock_stat.&pipe->wait.waittime-total
4145 ± 91% +225.4% 13490 ± 10% lock_stat.&pool->lock/1.holdtime-max
8992 ± 55% +397.1% 44699 ± 37% lock_stat.&pool->lock/1.holdtime-total
1275 ±109% +431.5% 6780 ± 11% lock_stat.&pool->lock/1.waittime-avg
3026 ±118% +477.6% 17480 ± 34% lock_stat.&pool->lock/1.waittime-max
3419 ±122% +1283.1% 47299 ± 51% lock_stat.&pool->lock/1.waittime-total
3756368 ± 8% -98.8% 46374 ± 5% lock_stat.&rq->lock.acq-bounces
24049672 ± 14% -93.1% 1652462 ± 11% lock_stat.&rq->lock.acquisitions
1879491 ± 7% -99.5% 9184 ± 13% lock_stat.&rq->lock.con-bounces
1879660 ± 7% -99.5% 9189 ± 13% lock_stat.&rq->lock.contentions
1878333 ± 7% -99.6% 7513 ± 17% lock_stat.&rq->lock.contentions.__task_rq_lock
1863315 ± 8% -99.5% 8537 ± 15% lock_stat.&rq->lock.contentions.rq_lock
21165 ± 18% +29.2% 27353 ± 8% lock_stat.&rq->lock.holdtime-max
59512381 ± 13% -86.6% 7958968 ± 6% lock_stat.&rq->lock.holdtime-total
15399445 ± 22% -70.0% 4619267 ± 6% lock_stat.&rq->lock.waittime-total
21045 ± 15% +50.8% 31733 ± 11% lock_stat.&rsp->gp_wq.holdtime-max
1292559 ± 23% +1042.7% 14770713 ± 14% lock_stat.&rsp->gp_wq.holdtime-total
7115 ± 12% +70.9% 12158 ± 2% lock_stat.&rsp->gp_wq.waittime-avg
1706488 ± 26% +1149.1% 21315655 ± 13% lock_stat.&rsp->gp_wq.waittime-total
8481 ± 5% -18.4% 6918 ± 3% lock_stat.&sb->s_type->i_lock_key#1.holdtime-total
7290 ± 72% -58.1% 3053 ± 49% lock_stat.&sb->s_type->i_mutex_key#1.holdtime-total
7456886 ± 10% -33.0% 4996474 ± 9% lock_stat.&sig->cred_guard_mutex.holdtime-total
522.32 ± 36% +2030.3% 11126 ± 94% lock_stat.&sighand->signalfd_wqh.holdtime-total
12353 ± 5% -84.8% 1878 ± 4% lock_stat.&stopper->lock.acquisitions
4994 ± 2% -85.6% 718.25 ± 3% lock_stat.&stopper->lock.holdtime-total
1974810 ± 17% -100.0% 38.75 ± 23% lock_stat.&tsk->delays->lock.acq-bounces
3920656 ± 17% -100.0% 129.00 ± 13% lock_stat.&tsk->delays->lock.acquisitions
8996 ± 10% -99.4% 53.16 ± 12% lock_stat.&tsk->delays->lock.holdtime-max
982328 ± 14% -100.0% 116.84 ± 22% lock_stat.&tsk->delays->lock.holdtime-total
0.19 ±100% +3.7e+06% 7011 ± 95% lock_stat.&txwq.waittime-avg
0.22 ±101% +4.4e+06% 9686 ± 98% lock_stat.&txwq.waittime-max
0.16 ±100% +2e+06% 3138 ±119% lock_stat.&txwq.waittime-min
0.38 ±100% +5.9e+06% 22240 ±118% lock_stat.&txwq.waittime-total
3104 ± 10% -85.4% 452.72 ± 18% lock_stat.&x->wait#3.holdtime-total
20473 ± 68% -86.2% 2823 ±117% lock_stat.(&dn_route_timer).holdtime-max
106379 ± 29% -37.5% 66495 ± 4% lock_stat.(&dn_route_timer).holdtime-total
204022 ± 2% +41.4% 288477 ± 35% lock_stat.(&dsp_spl_tl).holdtime-total
14154 ± 37% -79.8% 2859 ± 49% lock_stat.(&n->timer).holdtime-total
1128 ± 6% +1423.7% 17197 ±148% lock_stat.(&net->can.can_stattimer).holdtime-total
55207 +13.7% 62786 lock_stat.(&timer.timer).acquisitions
56.56 ± 84% +1903.7% 1133 ± 19% lock_stat.(shepherd).work.holdtime-avg
12719 ± 94% +130.5% 29323 ± 6% lock_stat.(shepherd).work.holdtime-max
17528 ± 84% +1905.4% 351513 ± 19% lock_stat.(shepherd).work.holdtime-total
3486 ± 29% -44.3% 1942 ± 4% lock_stat.(work_completion)(&(&adapter->watchdo.holdtime-avg
535756 ± 28% -44.2% 298691 ± 4% lock_stat.(work_completion)(&(&adapter->watchdo.holdtime-total
13139 ± 15% +328.5% 56302 ±119% lock_stat.(work_completion)(&(&gc_work->dwork)-.holdtime-max
457.76 ± 11% +2237.9% 10702 ± 56% lock_stat.(work_completion)(&(&wb->dwork)->work.holdtime-max
3236 ± 88% +853.2% 30849 ± 54% lock_stat.(work_completion)(&(reap_work)->work).holdtime-max
101649 ± 8% +26.0% 128028 ± 12% lock_stat.(work_completion)(&(reap_work)->work).holdtime-total
10752 ± 57% -48.4% 5551 ± 93% lock_stat.(work_completion)(&task->u.tk_work).holdtime-max
29789 ± 26% -30.7% 20637 ± 21% lock_stat.(work_completion)(&task->u.tk_work).holdtime-total
13140 ± 15% +328.5% 56303 ±119% lock_stat.(wq_completion)"events_power_efficien.holdtime-max
10753 ± 57% -48.4% 5552 ± 93% lock_stat.(wq_completion)"rpciod".holdtime-max
29817 ± 26% -30.7% 20663 ± 21% lock_stat.(wq_completion)"rpciod".holdtime-total
458.81 ± 10% +2232.8% 10702 ± 56% lock_stat.(wq_completion)"writeback".holdtime-max
8932 ± 6% -11.0% 7953 ± 2% lock_stat.binfmt_lock-R.holdtime-total
25365 ± 14% +43.4% 36368 ± 30% lock_stat.console_owner.holdtime-max
1551024 ± 11% -24.2% 1176166 ± 12% lock_stat.console_owner.holdtime-total
12724 ± 94% +130.4% 29320 ± 6% lock_stat.cpu_hotplug_lock.rw_sem-R.holdtime-max
23096 ± 63% +1450.7% 358151 ± 18% lock_stat.cpu_hotplug_lock.rw_sem-R.holdtime-total
3203 ± 12% -100.0% 0.00 lock_stat.drivers/atm/iph.holdtime-total
0.00 +9e+105% 8984 ±121% lock_stat.drivers/atm/iphase.holdtime-max
0.00 +1.3e+106% 12660 ± 91% lock_stat.drivers/atm/iphase.holdtime-total
18559 -100.0% 0.00 lock_stat.drivers/tty/isi.acquisitions
11255 ± 85% -100.0% 0.00 lock_stat.drivers/tty/isi.holdtime-max
68448 ± 31% -100.0% 0.00 lock_stat.drivers/tty/isi.holdtime-total
0.00 +1.9e+106% 18787 lock_stat.drivers/tty/isicom.acquisitions
0.00 +5.2e+106% 51897 ± 16% lock_stat.drivers/tty/isicom.holdtime-total
20598124 ± 5% -20.6% 16345527 ± 2% lock_stat.fs_reclaim.acquisitions
14366 ± 20% +95.7% 28113 ± 58% lock_stat.fs_reclaim.holdtime-max
3651960 ± 2% -20.5% 2902504 lock_stat.fs_reclaim.holdtime-total
10515834 ± 14% -94.1% 620597 ± 3% lock_stat.hrtimer_bases.lock.acquisitions
23336318 ± 13% -95.7% 1000877 ± 3% lock_stat.hrtimer_bases.lock.holdtime-total
41455 ± 4% +9.8% 45529 ± 3% lock_stat.iclock_lock-R.holdtime-total
3784656 ± 17% -95.6% 167136 ± 7% lock_stat.jiffies_lock#2-R.acquisitions
9650 ± 12% -98.9% 106.17 ± 11% lock_stat.jiffies_lock#2-R.holdtime-max
697919 ± 6% -94.9% 35417 ± 6% lock_stat.jiffies_lock#2-R.holdtime-total
31117 ± 3% -38.9% 19010 ± 7% lock_stat.jiffies_lock.acq-bounces
234334 ± 5% -99.9% 319.50 ± 7% lock_stat.key#2.acq-bounces
129919 ± 5% -19.2% 104951 ± 3% lock_stat.key#2.holdtime-total
926.38 ± 82% -100.0% 0.00 lock_stat.key#2.waittime-avg
5293 ±122% -100.0% 0.00 lock_stat.key#2.waittime-max
12783 ±121% -100.0% 0.00 lock_stat.key#2.waittime-total
10681 ± 4% -96.6% 363.00 ± 2% lock_stat.key#3.acq-bounces
3278573 ± 20% -99.5% 17651 ± 3% lock_stat.latency_lock.acq-bounces
4330299 ± 16% -94.5% 238341 ± 15% lock_stat.latency_lock.acquisitions
28394 ± 31% -99.3% 199.75 ± 15% lock_stat.latency_lock.con-bounces
28437 ± 31% -99.3% 200.75 ± 15% lock_stat.latency_lock.contentions
56839 ± 31% -99.3% 399.25 ± 15% lock_stat.latency_lock.contentions.__account_scheduler_latency
2085235 ± 19% -95.1% 102278 ± 11% lock_stat.latency_lock.holdtime-total
126589 ± 21% -90.7% 11764 ± 66% lock_stat.latency_lock.waittime-total
178625 ± 19% +98.8% 355148 ± 34% lock_stat.log_wait.lock.holdtime-total
201393 ± 39% +125.0% 453153 ± 42% lock_stat.log_wait.lock.waittime-total
8387 ±106% -96.6% 284.26 ±146% lock_stat.logbuf_lock.waittime-total
2885 ± 3% -100.0% 0.00 lock_stat.mm/vmstat.c:187.holdtime-total
0.00 +4e+105% 4039 ± 41% lock_stat.mm/vmstat.c:1875.holdtime-total
29566 ± 21% -18.7% 24025 ± 6% lock_stat.mount_lock#2-R.holdtime-total
19986 ± 22% -22.1% 15571 ± 3% lock_stat.pgd_lock.holdtime-total
17894 ± 16% +61.6% 28914 ± 33% lock_stat.printing_lock.holdtime-max
10991 ± 18% +170.1% 29683 ± 71% lock_stat.rcu_callback-R.holdtime-max
143791 ± 2% -10.0% 129434 ± 2% lock_stat.rcu_node_0.acquisitions
147374 ± 13% -14.7% 125665 lock_stat.rcu_node_0.holdtime-total
6949 ± 35% -91.9% 561.12 ±112% lock_stat.rcu_node_0.waittime-max
16821 ± 29% -81.9% 3039 ± 41% lock_stat.rcu_node_0.waittime-total
1.241e+08 ± 9% -58.0% 52076183 ± 2% lock_stat.rcu_read_lock-R.acquisitions
2.913e+08 ± 4% -71.2% 84041884 lock_stat.rcu_read_lock-R.holdtime-total
17953 ± 31% -23.2% 13787 lock_stat.rename_lock#2-R.holdtime-total
9057 ± 38% +239.2% 30722 ± 55% lock_stat.slab_mutex.holdtime-max
1018 ± 94% +714.6% 8298 ± 49% lock_stat.slock-AF_INET-RPC/1.holdtime-max
8395 ± 9% -31.8% 5727 ± 5% lock_stat.speakup_info.spinlock.holdtime-total
402906 ± 36% +496.4% 2402835 ± 3% lock_stat.tasklist_lock-R.waittime-total
501739 ± 23% +365.5% 2335605 ± 4% lock_stat.tasklist_lock-W.holdtime-total
1527 ± 45% +104.4% 3121 ± 30% lock_stat.tasklist_lock-W.waittime-avg
29476 ± 3% -38.7% 18056 ± 7% lock_stat.timekeeper_lock.acq-bounces
2729 ± 79% +253.1% 9637 ± 42% lock_stat.timekeeper_lock.holdtime-max
31404402 ± 13% -83.3% 5258508 ± 3% lock_stat.tk_core-R.acquisitions
4978505 ± 10% -82.8% 853856 ± 6% lock_stat.tk_core-R.holdtime-total
2728 ± 79% +234.6% 9129 ± 49% lock_stat.tk_core-W.holdtime-max
vm-scalability.throughput
1.4e+06 +-+---------------------------------------------------------------+
| +..+ |
1.2e+06 +-+. : + .+.+. .+.+. |
| +..+.+. : +. .+.+. +.+.+. +.+.+.. |
| +.+ + .+. +. |
1e+06 +-+ + +.+. .. +.|
| + |
800000 +-+ |
| |
600000 +-+ |
| |
| |
400000 +-+ |
O O O O O O O O O O O O O O O O O O O O O O |
200000 +-+-----------------O---------------------------------------------+
vm-scalability.free_time
0.2 +-+------------------------------------------------------------------+
| .. + +.+.+..+. .+.. |
0.18 +-+..+.+.+ +. .. +.+..+.+.+.+..+.+.+..+.+ +.+.+.+..+.|
| + |
0.16 +-+ |
| |
0.14 +-+ |
| |
0.12 +-+ |
| |
0.1 +-+ |
| |
0.08 +-+ O O O O O O O O O O O |
O O O O O O O O O O O O |
0.06 +-+------------------------------------------------------------------+
vm-scalability.workload
3.5e+08 +-+---------------------------------------------------------------+
|.+. +. : .+. .+. |
3e+08 +-+ +..+.+. : : .+.+..+ +.+.+..+ +.+.+.. |
| +. : +.+ |
| + +.+. .+.+.|
2.5e+08 +-+ +.+.+. |
| |
2e+08 +-+ |
| |
1.5e+08 +-+ |
| |
| |
1e+08 +-+ |
O O O O O O O O O O O O O O O O O O O O O O |
5e+07 +-+-----------------O---------------------------------------------+
vm-scalability.time.user_time
300 +-+-------------------------------------------------------------------+
| + +..+. |
|.+..+. .. + + +. .+.+..+.+. .+.+..+. |
250 +-+ +.+ + +..+ +..+ +.+.. .+ |
| + + .+.+..+.|
| +..+ |
200 +-+ |
| |
150 +-+ |
| |
| |
100 +-+ |
| |
| O |
50 O-O--O-O-O--O-O-O--O-O----O-O-O--O-O-O--O-O-O--O-O-O------------------+
vm-scalability.time.system_time
700 +-+-------------------------------------------------------------------+
650 +-+.. + +.. +.+ +..+.|
|+ .+ +.+. .. + + + + + |
600 +-+ +. .+. + .. + +. .+ +. + +..+.+ |
550 +-+ +.+..+.+ +.+ +..+ +.+ |
| |
500 +-+ |
450 +-+ |
400 +-+ |
| |
350 +-+ |
300 +-+ |
| O O O |
250 O-O O O O O O O O O O O O O O O O O O O |
200 +-+-------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
350 +-+-------------------------------------------------------------------+
| |
300 +-+ .+ |
|.+..+. .+. + .+.+.+..+.+. .+.+..+. .+.+. .+..+.|
| +.+..+.+ +.+. +..+ +.+. +..+.+ |
250 +-+ |
| |
200 +-+ |
| |
150 +-+ |
| |
| |
100 O-O O O O O O O O O O O O O O O O O O O O O O |
| |
50 +-+-------------------------------------------------------------------+
vm-scalability.time.minor_page_faults
6.5e+07 +-+---------------------------------------------------------------+
6e+07 +-+ + |
| + ..: + + |
5.5e+07 +-+: + : +.+.. + : + : |
5e+07 +-+: : : : + : .+ : + |
4.5e+07 +-+ +..+.+ : : : +.+.+. +.+.+.. .+ : + |
4e+07 +-+ + : +. : + + : +.|
| +.+ + +.+. : |
3.5e+07 +-+ + |
3e+07 +-+ |
2.5e+07 +-+ |
2e+07 +-+ |
O O O O |
1.5e+07 +-+ O O O O O O O O O O O O O O O O O O O |
1e+07 +-+---------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
7e+06 +-+-----------------------------------------------------------------+
| |
6e+06 +-+ + + |
|+ + .+ .+. .. : + : |
5e+06 +-+ + .+.+ + : + + : .+..+.+ : .+.+ |
| + : : : : + +. + +|
4e+06 +-+ : : : : +.+.. +..+ |
| +..+ +..+ + |
3e+06 +-+ +.+.+ |
| |
2e+06 +-+ |
| |
1e+06 +-+ |
| |
0 O-O--O-O-O-O--O-O-O-O--O-O-O-O--O-O-O--O-O-O-O--O-O-----------------+
vm-scalability.time.involuntary_context_switches
100000 +-+----------------------------------------------------------------+
90000 +-++ .+ : : |
|: + + + +. +. : + + : + + |
80000 +-+ : : +.. + + : + : : :: : : :+ + |
70000 +-+ : : + : : +: : : :: : : +..+ + :|
| : : : : + :: + :: + + :|
60000 +-+ + + + + +.+. .. |
50000 +-+ + |
40000 +-+ |
| |
30000 +-+ |
20000 +-+ |
| |
10000 O-O O O O O O O O O O O O O O O O O O O O O O |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
398ceb91b8 ("RFC: debugobjects: capture stack traces at .."): BUG: kernel hang in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://anongit.freedesktop.org/drm-intel topic/core-for-CI
commit 398ceb91b81505e7138eecdac870a59e85661671
Author: Daniel Vetter <daniel.vetter(a)ffwll.ch>
AuthorDate: Tue Mar 20 17:02:58 2018 +0100
Commit: Daniel Vetter <daniel.vetter(a)ffwll.ch>
CommitDate: Mon Jun 25 18:07:12 2018 +0200
RFC: debugobjects: capture stack traces at _init() time
Sometimes it's really easy to know which object has gone boom and
where the offending code is, and sometimes it's really hard. One case
we're trying to hunt down is when module unload catches a live debug
object, in a module with lots of them.
Capture a full stack trace from debug_object_init() and dump it when
there's a problem.
FIXME: Should we have a separate Kconfig knob for the backtraces,
since they're quite expensive? Atm I'm just selecting it for the general
debug object stuff.
v2: Drop the locks for gathering & storing the backtrace. This is
required because depot_save_stack can call free_pages (to drop its
preallocation), which can call debug_check_no_obj_freed, which will
recurse into the db->lock spinlocks.
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Cc: Waiman Long <longman(a)redhat.com>
Acked-by-for-CI-testing: Chris Wilson <chris(a)chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180320160258.11381-1-dani...
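For reference, the capture step described above boils down to roughly the
following (a minimal sketch against the 4.18-era stacktrace/stackdepot
interfaces; the helper name and the entry count are illustrative and not
taken from the actual patch):

#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/stacktrace.h>
#include <linux/stackdepot.h>

/*
 * Illustrative helper, not the real patch.  Must be called without
 * db->lock held: depot_save_stack() may call free_pages() to drop its
 * preallocated page, which can recurse into debug_check_no_obj_freed()
 * and try to take the db->lock spinlocks again.
 */
static depot_stack_handle_t debug_object_save_trace(void)
{
	unsigned long entries[16];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
		.skip		= 2,	/* skip this helper and its caller */
	};

	save_stack_trace(&trace);
	return depot_save_stack(&trace, GFP_NOWAIT);
}

debug_object_init() would then stash the returned handle in the tracking
object, and the report path could rebuild the trace with depot_fetch_stack()
and dump it with print_stack_trace() when a stale object is caught.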
ebaef7b2f7 kernel/panic: Repeat the line and caller information at the end of the OOPS
398ceb91b8 RFC: debugobjects: capture stack traces at _init() time
b1ee8b0d94 Documentation: e100: Fix docs build error
+-------------------------------------------------------+------------+------------+------------+
| | ebaef7b2f7 | 398ceb91b8 | b1ee8b0d94 |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 23 | 0 | 0 |
| boot_failures | 20 | 15 | 13 |
| WARNING:possible_circular_locking_dependency_detected | 20 | | |
| BUG:kernel_hang_in_boot_stage | 0 | 15 | 13 |
+-------------------------------------------------------+------------+------------+------------+
kernel_total_size: 0x0000000003aaf000
trampoline_32bit: 0x000000000009d000
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
BUG: kernel hang in boot stage
Linux version 4.18.0-rc2-00007-g398ceb9 #1
Command line: root=/dev/ram0 hung_task_panic=1 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw link=/kbuild-tests/run-queue/yocto-lkp-nhm-dp2/x86_64-randconfig-s3-06300434/linux-devel:devel-catchup-201806300333:398ceb91b81505e7138eecdac870a59e85661671:bisect-linux-32/.vmlinuz-398ceb91b81505e7138eecdac870a59e85661671-20180630111015-8:yocto-lkp-nhm-dp2-10 branch=linux-devel/devel-catchup-201806300333 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s3-06300434/gcc-7/398ceb91b81505e7138eecdac870a59e85661671/vmlinuz-4.18.0-rc2-00007-g398ceb9 drbd.minor_count=8 rcuperf.shutdown=0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 691ef198391e10a174dedbea19727b45d453ad47 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect good 3b03431ffcdfa7ae2bbb7c3f1c71766f66849d83 # 06:06 G 10 0 5 5 Merge 'lpieralisi-pci/pci/controller-fixes' into devel-catchup-201806300333
git bisect bad 6d16a4c44349091e57d96550facbb6a7d597bd5c # 06:26 B 0 1 14 0 Merge 'linux-review/Philipp-Zabel/media-coda-mark-CODA960-firmware-version-2-1-9-as-supported/20180629-220504' into devel-catchup-201806300333
git bisect bad b65c314ab490a111080b5cd07e9aa024a439a9b2 # 06:48 B 0 2 15 0 Merge 'ath6kl/master-pending' into devel-catchup-201806300333
git bisect bad 0c633bf0e5915066d9e83ad4a68ce428b4e9621a # 07:20 B 0 1 14 0 Merge 'cifs/for-next' into devel-catchup-201806300333
git bisect good 5b6f5648fe62afa5b3fcab8ae9bdd8ae0d6ae742 # 07:40 G 10 0 2 2 Merge 'jpirko-mlxsw/petrm_erspan_lag_fixes' into devel-catchup-201806300333
git bisect bad a0275c5d4ac6da625e02a5722994c321a681977c # 08:07 B 0 5 18 0 Merge 'drm-tip/drm-tip' into devel-catchup-201806300333
git bisect good 78796877c37cb2c3898c4bcd2a12238d83858287 # 08:24 G 11 0 5 5 drm/i915: Move the irq_counter inside the spinlock
git bisect good 57e23de02f4878061818fd118129a6b0e1516b11 # 08:43 G 10 0 5 7 drm/sun4i: DW HDMI: Expand algorithm for possible crtcs
git bisect good 6e0ef9d85b99baeeea4b9c4a9777809cb0c6040a # 09:11 G 10 0 5 5 drm/amd/display: Write TEST_EDID_CHECKSUM_WRITE for EDID tests
git bisect good 81d984fa3495f93aa4e9726598a8b4767eaca86f # 09:35 G 10 0 5 5 Merge remote-tracking branch 'drm/drm-next' into drm-tip
git bisect good 186633be741825ed88fe3d92ef6f334364a26ee3 # 09:51 G 11 0 7 7 Merge remote-tracking branch 'drm-intel/drm-intel-next-queued' into drm-tip
git bisect bad b1ee8b0d945633e4165f9e160af4cda8be6497f5 # 10:10 B 0 2 15 0 Documentation: e100: Fix docs build error
git bisect good f25ae7126147dcbc5c2e80125d6ee941d0485e98 # 10:35 G 10 0 4 4 lockdep: finer-grained completion key for kthread
git bisect bad 4b06b972bbb472ad77ba86397e8548e87318f8d5 # 10:53 B 0 1 14 0 mei: discard messages from not connected client during power down.
git bisect bad 398ceb91b81505e7138eecdac870a59e85661671 # 11:19 B 0 2 15 0 RFC: debugobjects: capture stack traces at _init() time
git bisect good ebaef7b2f7d0648b79f9475b5679bff4114fc1fa # 11:44 G 11 0 7 7 kernel/panic: Repeat the line and caller information at the end of the OOPS
# first bad commit: [398ceb91b81505e7138eecdac870a59e85661671] RFC: debugobjects: capture stack traces at _init() time
git bisect good ebaef7b2f7d0648b79f9475b5679bff4114fc1fa # 11:53 G 32 0 13 20 kernel/panic: Repeat the line and caller information at the end of the OOPS
# extra tests with debug options
git bisect bad 398ceb91b81505e7138eecdac870a59e85661671 # 12:09 B 0 8 21 0 RFC: debugobjects: capture stack traces at _init() time
# extra tests on HEAD of linux-devel/devel-catchup-201806300333
git bisect bad 691ef198391e10a174dedbea19727b45d453ad47 # 12:09 B 0 13 29 0 0day head guard for 'devel-catchup-201806300333'
# extra tests on tree/branch drm-intel/topic/core-for-CI
git bisect bad b1ee8b0d945633e4165f9e160af4cda8be6497f5 # 12:16 B 0 13 26 0 Documentation: e100: Fix docs build error
# extra tests with first bad commit reverted
git bisect good b847b36176eff1c29f964bf7048fd7f8bf2f521a # 12:32 G 11 0 4 4 Revert "RFC: debugobjects: capture stack traces at _init() time"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation