[mm/debug] f675f2f91d: WARNING:at_mm/debug_vm_pgtable.c:#debug_vm_pgtable
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: f675f2f91d0457e2b430936f0d756457fdcad1ca ("[PATCH V2 1/3] mm/debug: Add tests validating arch page table helpers for core features")
url: https://github.com/0day-ci/linux/commits/Anshuman-Khandual/mm-debug-Add-m...
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------------------------------------------+---------------+------------+
| | next-20200323 | f675f2f91d |
+-------------------------------------------------------------------------------+---------------+------------+
| boot_successes | 1 | 0 |
| boot_failures | 3 | 4 |
| WARNING:suspicious_RCU_usage | 3 | 4 |
| drivers/char/ipmi/ipmi_msghandler.c:#RCU-list_traversed_in_non-reader_section | 3 | 4 |
| net/ipv4/fib_trie.c:#RCU-list_traversed_in_non-reader_section | 3 | 2 |
| WARNING:at_mm/debug_vm_pgtable.c:#debug_vm_pgtable | 0 | 4 |
| RIP:debug_vm_pgtable | 0 | 4 |
| calltrace:hrtimers_dead_cpu | 0 | 1 |
| calltrace:irq_exit | 0 | 3 |
+-------------------------------------------------------------------------------+---------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 283.486118] WARNING: CPU: 1 PID: 1 at mm/debug_vm_pgtable.c:371 debug_vm_pgtable+0x4dc/0x7e3
[ 283.487342] Modules linked in:
[ 283.487752] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc7-next-20200323-00001-gf675f2f91d045 #1
[ 283.488817] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 283.489794] RIP: 0010:debug_vm_pgtable+0x4dc/0x7e3
[ 283.490361] Code: b5 fd 48 8b 7d d0 be 20 01 00 00 e8 3d 9f b5 fd 48 8b 75 c8 48 8b 7d d0 e8 30 9f b5 fd 48 8b 75 c8 48 8b 7d d0 e8 23 9f b5 fd <0f> 0b 48 8b 75 c8 48 8b 7d d0 e8 14 9f b5 fd 0f 0b 48 8b 75 c8 48
[ 283.492577] RSP: 0000:ffff888236493ed8 EFLAGS: 00010202
[ 283.493235] RAX: 00000001e1d31025 RBX: ffff88823e7f6cd8 RCX: ffffffffffffffff
[ 283.494135] RDX: 0000000000000000 RSI: 0000000000000025 RDI: 00000001e1d31000
[ 283.495002] RBP: ffff888236493f38 R08: 0000000000000001 R09: 0000000000000001
[ 283.495858] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88821d907000
[ 283.496748] R13: ffff88821d8fc498 R14: ffff88821d8fda90 R15: ffff88821d8fc000
[ 283.497614] FS: 0000000000000000(0000) GS:ffff888237800000(0000) knlGS:0000000000000000
[ 283.498585] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 283.499290] CR2: 00000000ffffffff CR3: 00000001e1222000 CR4: 00000000000406e0
[ 283.500165] Call Trace:
[ 283.500499] ? rest_init+0x240/0x240
[ 283.500985] kernel_init+0x13/0x110
[ 283.501433] ret_from_fork+0x24/0x30
[ 283.501907] irq event stamp: 4760776
[ 283.502366] hardirqs last enabled at (4760775): [<ffffffffb481e34d>] _raw_spin_unlock_irqrestore+0x4d/0x60
[ 283.511686] hardirqs last disabled at (4760776): [<ffffffffb3c038d4>] trace_hardirqs_off_thunk+0x1a/0x1c
[ 283.512914] softirqs last enabled at (4760748): [<ffffffffb4c002cf>] __do_softirq+0x2cf/0x4ad
[ 283.514086] softirqs last disabled at (4760741): [<ffffffffb3cf4f4d>] irq_exit+0xcd/0xe0
[ 283.515114] ---[ end trace 7e3383c4261f8faa ]---
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc7-next-20200323-00001-gf675f2f91d045 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
6d25be5782 ("sched/core, workqueues: Distangle worker .."): [ 52.816697] WARNING: CPU: 0 PID: 14 at kernel/workqueue.c:882 wq_worker_sleeping
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 6d25be5782e482eb93e3de0c94d0a517879377d0
Author: Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed Mar 13 17:55:48 2019 +0100
Commit: Ingo Molnar <mingo@kernel.org>
CommitDate: Tue Apr 16 16:55:15 2019 +0200
sched/core, workqueues: Distangle worker accounting from rq lock
The worker accounting for CPU bound workers is plugged into the core
scheduler code and the wakeup code. This is not a hard requirement and
can be avoided by keeping track of the state in the workqueue code
itself.
Keep track of the sleeping state in the worker itself and call the
notifier before entering the core scheduler. There might be false
positives when the task is woken between that call and actually
scheduling, but that's not really different from scheduling and being
woken immediately after switching away. When nr_running is updated when
the task is retunrning from schedule() then it is later compared when it
is done from ttwu().
[ bigeasy: preempt_disable() around wq_worker_sleeping() by Daniel Bristot de Oliveira ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/ad2b29b5715f970bffc1a7026cabd6ff0b24076a.1532952...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
e2abb39811 sched/fair: Remove unneeded prototype of capacity_of()
6d25be5782 sched/core, workqueues: Distangle worker accounting from rq lock
9efcc4a129 afs: Fix unpinned address list during probing
+--------------------------------------------------------------------------------+------------+------------+------------+
| | e2abb39811 | 6d25be5782 | 9efcc4a129 |
+--------------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 997 | 806 | 27 |
| boot_failures | 112 | 104 | 9 |
| WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_fwd_prog | 61 | 52 | 2 |
| RIP:rcu_torture_fwd_prog | 63 | 57 | 2 |
| BUG:workqueue_lockup-pool | 12 | 16 | 1 |
| BUG:kernel_hang_in_early-boot_stage,last_printk:early_console_in_setup_code | 14 | 13 | 6 |
| BUG:kernel_timeout_in_early-boot_stage,last_printk:early_console_in_setup_code | 5 | | |
| Initiating_system_reboot | 1 | | |
| BUG:spinlock_recursion_on_CPU | 10 | 6 | |
| RIP:rcu_torture_one_read | 2 | | |
| INFO:rcu_preempt_self-detected_stall_on_CPU | 1 | 3 | |
| RIP:iov_iter_copy_from_user_atomic | 1 | 3 | |
| BUG:kernel_hang_in_boot_stage | 2 | | |
| RIP:__schedule | 4 | 2 | |
| BUG:kernel_hang_in_test_stage | 2 | | |
| RIP:rcutorture_seq_diff | 3 | | |
| BUG:spinlock_wrong_owner_on_CPU | 1 | | |
| BUG:workqueue_leaked_lock_or_atomic:rcu_torture_sta | 1 | | |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/workqueue.c | 1 | | |
| WARNING:at_kernel/workqueue.c:#worker_set_flags | 1 | | |
| RIP:worker_set_flags | 1 | | |
| BUG:scheduling_while_atomic | 1 | | |
| BUG:unable_to_handle_kernel | 2 | 1 | |
| Oops:#[##] | 2 | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 6 | 1 | |
| kernel_BUG_at_lib/list_debug.c | 4 | | |
| invalid_opcode:#[##] | 4 | | |
| RIP:__list_add_valid | 4 | | |
| RIP:__rb_erase_color | 1 | | |
| INFO:rcu_preempt_detected_stalls_on_CPUs/tasks | 1 | 7 | |
| RIP:rcu_read_delay | 1 | 1 | |
| RIP:kvm_async_pf_task_wait | 1 | | |
| RIP:d_walk | 1 | | |
| WARNING:at_kernel/workqueue.c:#wq_worker_sleeping | 0 | 8 | 1 |
| RIP:wq_worker_sleeping | 0 | 8 | 1 |
| RIP:kthread_data | 0 | 2 | |
| RIP:to_kthread | 0 | 1 | |
| RIP:wq_worker_running | 0 | 5 | 1 |
| WARNING:at_kernel/workqueue.c:#worker_enter_idle | 0 | 3 | 1 |
| RIP:worker_enter_idle | 0 | 3 | 1 |
| BUG:kernel_timeout_in_boot_stage | 0 | 1 | |
| BUG:soft_lockup-CPU##stuck_for#s![rcu_torture_fak:#] | 0 | 1 | |
| RIP:preempt_count_equals | 0 | 2 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 2 | |
| BUG:soft_lockup-CPU##stuck_for#s![rcu_torture_rea:#] | 0 | 1 | |
| RIP:ftrace_stub | 0 | 1 | |
| RIP:ftrace_likely_update | 0 | 3 | |
| RIP:simple_write_end | 0 | 1 | |
| RIP:delay_tsc | 0 | 2 | |
| RIP:check_preemption_disabled | 0 | 1 | |
+--------------------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp@intel.com>
[ 5.924439] igt_debug 0x0000000000000200-0x0000000000000600: 1024: used
[ 5.924883] igt_debug 0x0000000000000600-0x0000000000000a00: 1024: free
[ 5.925327] igt_debug 0x0000000000000a00-0x0000000000000e00: 1024: used
[ 5.925806] igt_debug 0x0000000000000e00-0x0000000000001000: 512: free
[ 5.926286] igt_debug total: 4096, used 2048 free 2048
[ 52.816697] WARNING: CPU: 0 PID: 14 at kernel/workqueue.c:882 wq_worker_sleeping+0x72/0x136
[ 52.821963] Modules linked in:
[ 52.821963] CPU: 0 PID: 14 Comm: kworker/0:1 Not tainted 5.1.0-rc3-00024-g6d25be5782e48 #1
[ 52.821963] Workqueue: rcu_gp wait_rcu_exp_gp
[ 52.821963] RIP: 0010:wq_worker_sleeping+0x72/0x136
[ 52.821963] Code: 48 8b 58 40 45 85 ed 41 0f 95 c6 31 c9 31 d2 44 89 f6 e8 c5 3f 0d 00 48 ff 05 71 aa 59 03 45 85 ed 74 17 48 ff 05 6d aa 59 03 <0f> 0b 48 ff 05 6c aa 59 03 48 ff 05 6d aa 59 03 31 c9 31 d2 44 89
[ 52.821963] RSP: 0000:ffffa66380073b78 EFLAGS: 00010202
[ 52.821963] RAX: ffff97fe37b08de0 RBX: ffffffff9c2ae960 RCX: 0000000000000000
[ 52.821963] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff9cd4e028
[ 52.821963] RBP: ffffa66380073b98 R08: 0000000000000000 R09: 0000000000000000
[ 52.821963] R10: ffffa66380073c90 R11: 0000000000000000 R12: ffff97fe37b08de0
[ 52.821963] R13: 0000000000000001 R14: 0000000000000001 R15: 0000000000005000
[ 52.821963] FS: 0000000000000000(0000) GS:ffffffff9c250000(0000) knlGS:0000000000000000
[ 52.821963] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 52.821963] CR2: 0000000000005000 CR3: 00000001b741a000 CR4: 00000000000006b0
[ 52.821963] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 52.821963] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 52.821963] Call Trace:
[ 52.821963] schedule+0x7a/0x1c0
[ 52.821963] kvm_async_pf_task_wait+0x253/0x300
[ 52.821963] do_async_page_fault+0xbf/0x141
[ 52.821963] ? kvm_async_pf_task_wait+0x5/0x300
[ 52.821963] ? do_async_page_fault+0xbf/0x141
[ 52.821963] async_page_fault+0x1e/0x30
[ 52.821963] RIP: 0010:to_kthread+0x0/0x81
[ 52.821963] Code: 03 48 ff 05 44 b0 59 03 89 de 31 c9 31 d2 48 c7 c7 a8 ea d4 9c e8 9f ec 0c 00 48 ff 05 33 b0 59 03 5b 41 5c 41 5d 41 5e 5d c3 <55> 48 ff 05 a2 a8 59 03 48 89 e5 41 55 41 54 53 45 31 ed 49 89 fc
[ 52.821963] RSP: 0000:ffffa66380073d68 EFLAGS: 00010206
[ 52.821963] RAX: 0000000000000000 RBX: ffff97fe37b52000 RCX: 0000000000000000
[ 52.821963] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff97fe37b52000
[ 52.821963] RBP: ffffa66380073d70 R08: 0000000000000000 R09: ffffffff9c2c0a18
[ 52.821963] R10: ffff97fe37b52060 R11: 0000000000000000 R12: 0000000000000000
[ 52.821963] R13: ffffffff9cd51418 R14: 0000000000002710 R15: 0000000000000000
[ 52.821963] ? kthread_data+0x15/0x22
[ 52.821963] wq_worker_running+0x15/0x53
[ 52.821963] schedule+0x1ab/0x1c0
[ 52.821963] schedule_timeout+0xd6/0x10a
[ 52.821963] ? init_timer_key+0x41/0x41
[ 52.821963] rcu_exp_sel_wait_wake+0x58c/0xae2
[ 52.821963] wait_rcu_exp_gp+0x19/0x22
[ 52.821963] process_one_work+0x387/0x5eb
[ 52.821963] ? worker_thread+0x4d2/0x5e3
[ 52.821963] worker_thread+0x401/0x5e3
[ 52.821963] ? rescuer_thread+0x492/0x492
[ 52.821963] kthread+0x19b/0x1aa
[ 52.821963] ? kthread_flush_work+0x175/0x175
[ 52.821963] ret_from_fork+0x3a/0x50
[ 52.821963] random: get_random_bytes called from init_oops_id+0x2d/0x4c with crng_init=0
[ 52.821963] ---[ end trace 0be0a8f9c0f268e1 ]---
[ 64.886064] rcu-torture: rtc: (____ptrval____) ver: 324 tfle: 0 rta: 324 rtaf: 0 rtf: 315 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 101 barrier: 0/0:0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start v5.2 v5.1 --
git bisect bad 2c1212de6f9794a7becba5f219fa6ce8a8222c90 # 23:32 B 104 1 6 273 Merge tag 'spdx-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
git bisect bad 06cbd26d312edfe4a83ff541c23f8f866265eb24 # 00:18 B 26 1 0 0 Merge tag 'nfs-for-5.2-1' of git://git.linux-nfs.org/projects/anna/linux-nfs
git bisect bad 9f2e3a53f7ec9ef55e9d01bc29a6285d291c151e # 05:24 B 269 1 20 20 Merge tag 'for-5.2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
git bisect bad ba3934de557a2074b89d67e29e5ce119f042d057 # 08:23 B 131 1 11 11 Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 90489a72fba9529c85e051067ecb41183b8e982e # 16:37 G 900 0 74 74 Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 82ac4043cac5d75d8cda79bc8a095f8306f35c75 # 17:45 B 245 1 22 22 Merge branch 'x86-cache-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad a0e928ed7c603a47dca8643e58db224a799ff2c5 # 21:01 B 315 1 22 22 Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad e00d4135751bfe786a9e26b5560b185ce3f9f963 # 23:36 B 193 1 16 16 Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad bee9853932e90ce94bce4276ec6b7b06bc48070b # 02:33 B 262 1 24 24 sched/core: Fix typo in comment
git bisect good d8743230c9f4e92f370ecd2a90c680ddcede6ae5 # 08:53 G 902 0 97 97 sched/topology: Fix build_sched_groups() comment
git bisect bad 6d25be5782e482eb93e3de0c94d0a517879377d0 # 10:00 B 62 1 12 12 sched/core, workqueues: Distangle worker accounting from rq lock
git bisect good e2abb398115e9c33f3d1e25bf6d1d08badc58b13 # 23:19 G 900 0 89 89 sched/fair: Remove unneeded prototype of capacity_of()
# first bad commit: [6d25be5782e482eb93e3de0c94d0a517879377d0] sched/core, workqueues: Distangle worker accounting from rq lock
git bisect good e2abb398115e9c33f3d1e25bf6d1d08badc58b13 # 02:02 G 1000 0 21 110 sched/fair: Remove unneeded prototype of capacity_of()
# extra tests with debug options
git bisect good 6d25be5782e482eb93e3de0c94d0a517879377d0 # 09:37 G 900 0 147 147 sched/core, workqueues: Distangle worker accounting from rq lock
# extra tests on head commit of linus/master
git bisect bad 9efcc4a129363187c9bf15338692f107c5c9b6f0 # 10:25 B 30 1 7 7 afs: Fix unpinned address list during probing
# bad: [9efcc4a129363187c9bf15338692f107c5c9b6f0] afs: Fix unpinned address list during probing
# extra tests on linus/master
# duplicated: [9efcc4a129363187c9bf15338692f107c5c9b6f0] afs: Fix unpinned address list during probing
# extra tests on linux-next/master
# 119: [89295c59c1f063b533d071ca49d0fa0c0783ca6f] Add linux-next specific files for 20200326
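The bisect log above is the robot's manual good/bad record; the same search can be scripted with `git bisect run`, which marks each checkout good or bad from a test command's exit status. A minimal self-contained sketch (the throwaway repository and the `broken` marker file are illustrative, not taken from this report):

```shell
#!/bin/bash
# Build a throwaway history where "commit 7" introduces a marker file,
# then let `git bisect run` find it automatically, the way the 0-day
# robot iterates good/bad in the log above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=lkp@example.com -c user.name=lkp commit -q --allow-empty -m base
for i in $(seq 1 10); do
    if [ "$i" -eq 7 ]; then touch broken; git add broken; fi
    git -c user.email=lkp@example.com -c user.name=lkp commit -q --allow-empty -m "commit $i"
done
# Mark HEAD bad and the root commit good, then bisect:
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
# Exit 0 = good, non-zero = bad; bisect converges on the commit adding 'broken'.
git bisect run sh -c '! test -e broken' >/dev/null
first_bad=$(git rev-parse refs/bisect/bad)
subject=$(git log -1 --format=%s "$first_bad")
echo "first bad: $subject"    # -> first bad: commit 7
```

In a real kernel bisection the test command would boot the built kernel and grep dmesg for the warning, which is essentially what the GOOD/BAD columns in the log record.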
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
[tracing] cd8f62b481: BUG:sleeping_function_called_from_invalid_context_at_mm/slab.h
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: cd8f62b481530fafbeacee0d3283b60bcf42d854 ("[PATCH 02/12 v2] tracing: Save off entry when peeking at next entry")
url: https://github.com/0day-ci/linux/commits/Steven-Rostedt/ring-buffer-traci...
in testcase: rcuperf
with following parameters:
runtime: 300s
perf_type: rcu
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------------------+------------+------------+
| | 2e2bf17ca0 | cd8f62b481 |
+----------------------------------------------------------------+------------+------------+
| boot_successes | 13 | 0 |
| boot_failures | 0 | 8 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/slab.h | 0 | 8 |
+----------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 16.821280] BUG: sleeping function called from invalid context at mm/slab.h:565
[ 16.822211] in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 158, name: rcu_perf_writer
[ 16.823164] no locks held by rcu_perf_writer/158.
[ 16.823650] CPU: 0 PID: 158 Comm: rcu_perf_writer Not tainted 5.6.0-rc6-00081-gcd8f62b481530f #1
[ 16.824856] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 16.826220] Call Trace:
[ 16.826667] dump_stack+0x16/0x18
[ 16.827265] ___might_sleep+0x104/0x118
[ 16.827947] __might_sleep+0x69/0x70
[ 16.828600] ? __fs_reclaim_release+0x17/0x19
[ 16.829374] slab_pre_alloc_hook+0x34/0x6e
[ 16.830107] __kmalloc+0x48/0xe8
[ 16.830710] ? rb_reader_unlock+0x2b/0x3c
[ 16.831205] ? trace_find_next_entry+0x84/0x133
[ 16.831723] trace_find_next_entry+0x84/0x133
[ 16.832217] trace_print_lat_context+0x4b/0x437
[ 16.832730] ? rb_event_length+0x56/0x67
[ 16.833186] ? ring_buffer_event_length+0x34/0x79
[ 16.833728] ? __find_next_entry+0xd3/0x1c2
[ 16.834190] print_trace_line+0x69d/0x767
[ 16.834666] ftrace_dump+0x285/0x437
[ 16.835055] rcu_perf_writer+0x25f/0x34e
[ 16.835476] kthread+0xdf/0xe1
[ 16.835823] ? rcu_perf_reader+0x9c/0x9c
[ 16.836264] ? kthread_create_worker_on_cpu+0x1c/0x1c
[ 16.836834] ret_from_fork+0x2e/0x38
[ 16.837270] rb_produ-160 0.... 6528532us : ring_buffer_producer: Starting ring buffer hammer
[ 16.838318] rb_produ-160 0.... 16529141us : ring_buffer_producer: End ring buffer hammer
[ 16.839349] rb_produ-160 0.... 16558157us : ring_buffer_producer: Running Consumer at nice: 19
[ 16.840382] rb_produ-160 0.... 16558162us : ring_buffer_producer: Running Producer at nice: 19
[ 16.841401] rb_produ-160 0.... 16558163us : ring_buffer_producer: WARNING!!! This test is running at lowest priority.
[ 16.854781] rb_produ-160 0.... 16558164us : ring_buffer_producer: Time: 10000604 (usecs)
[ 16.857354] rb_produ-160 0.... 16558165us : ring_buffer_producer: Overruns: 6117960
[ 16.858322] rb_produ-160 0.... 16558166us : ring_buffer_producer: Read: 4356190 (by events)
[ 16.859404] rb_produ-160 0.... 16558167us : ring_buffer_producer: Entries: 44650
[ 16.860329] rb_produ-160 0.... 16558167us : ring_buffer_producer: Total: 10518800
[ 16.861226] rb_produ-160 0.... 16558168us : ring_buffer_producer: Missed: 0
[ 16.862124] rb_produ-160 0.... 16558169us : ring_buffer_producer: Hit: 10518800
[ 16.863087] rb_produ-160 0.... 16558170us : ring_buffer_producer: Entries per millisec: 1051
[ 16.864123] rb_produ-160 0.... 16558171us : ring_buffer_producer: 951 ns per entry
[ 16.865074] rb_produ-160 0.... 16558172us : ring_buffer_producer_thread: Sleeping for 10 secs
[ 16.866105] ---------------------------------
[ 16.866725] rcu-perf: Test complete
[ 16.878198] random: fast init done
Elapsed time: 60
qemu-img create -f qcow2 disk-vm-snb-i386-9-0 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-1 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-2 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-3 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-4 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-5 256G
qemu-img create -f qcow2 disk-vm-snb-i386-9-6 256G
kvm=(
qemu-system-i386
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-i386-9.cgz
-m 8192
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-drive file=disk-vm-snb-i386-9-0,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-1,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-2,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-3,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-4,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-5,media=disk,if=virtio
-drive file=disk-vm-snb-i386-9-6,media=disk,if=virtio
-serial stdio
-display none
-monitor null
)
append=(
ip=::::vm-snb-i386-9::dhcp
root=/dev/ram0
user=lkp
job=/job-script
ARCH=i386
kconfig=i386-randconfig-g003-20200324
branch=linux-devel/devel-hourly-2020032505
commit=cd8f62b481530fafbeacee0d3283b60bcf42d854
BOOT_IMAGE=/pkg/linux/i386-randconfig-g003-20200324/gcc-7/cd8f62b481530fafbeacee0d3283b60bcf42d854/vmlinuz-5.6.0-rc6-00081-gcd8f62b481530f
max_uptime=1500
RESULT_ROOT=/result/rcuperf/rcu-300s/vm-snb-i386/yocto-i386-minimal-20190520.cgz/i386-randconfig-g003-20200324/gcc-7/cd8f62b481530fafbeacee0d3283b60bcf42d854/3
result_service=tmpfs
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
)
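The two bash arrays above are how the lkp job script composes the qemu invocation: `kvm` is the qemu argv, and `append` is joined into a single string that becomes the guest kernel command line via `-append`. A trimmed sketch of that composition (array contents shortened for illustration; the full values are in the arrays above):

```shell
#!/bin/bash
# Sketch of how the kvm/append arrays are typically combined -- not a
# verbatim excerpt of the job script.  Array contents trimmed.
kvm=(qemu-system-i386 -enable-kvm -m 8192)
append=(root=/dev/ram0 user=lkp selinux=0)

# "${kvm[@]}" expands to separate argv words; "${append[*]}" joins the
# kernel parameters with spaces into the single -append argument:
cmd=("${kvm[@]}" -append "${append[*]}")
printf '%s\n' "${cmd[*]}"
# prints: qemu-system-i386 -enable-kvm -m 8192 -append root=/dev/ram0 user=lkp selinux=0
```

The `[@]`-vs-`[*]` distinction matters here: qemu must receive the kernel parameters as one argument after `-append`, not as separate words.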
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc6-00081-gcd8f62b481530f .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[drm] 6f365e561d: WARNING:at_drivers/gpu/drm/drm_managed.c:#drmm_add_final_kfree
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 6f365e561d66553d09b832378a45d3dee5be24e1 ("drm: Set final_kfree in drm_dev_alloc")
git://anongit.freedesktop.org/drm/drm-misc drm-misc-next
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------------------+------------+------------+
| | c6603c740e | 6f365e561d |
+----------------------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 6 |
| WARNING:at_drivers/gpu/drm/drm_managed.c:#drmm_add_final_kfree | 0 | 6 |
| EIP:drmm_add_final_kfree | 0 | 6 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 2 |
| Oops:#[##] | 0 | 2 |
| EIP:check_preempt_wakeup | 0 | 2 |
| EIP:_raw_spin_unlock_irqrestore | 0 | 2 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 2 |
+----------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 243.287098] WARNING: CPU: 1 PID: 1 at drivers/gpu/drm/drm_managed.c:119 drmm_add_final_kfree+0x9e/0xc0
[ 243.287533] Modules linked in:
[ 243.287533] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G T 5.6.0-rc5-01259-g6f365e561d665 #1
[ 243.287533] EIP: drmm_add_final_kfree+0x9e/0xc0
[ 243.287533] Code: 7a c4 01 83 15 74 39 7a c4 00 0f 0b 83 05 80 39 7a c4 01 83 15 84 39 7a c4 00 eb a0 83 05 90 39 7a c4 01 83 15 94 39 7a c4 00 <0f> 0b 83 05 98 39 7a c4 01 83 15 9c 39 7a c4 00 89 73 1c 5b 5e 5d
[ 243.287533] EAX: f15d7dfc EBX: f15d7570 ECX: 002b6270 EDX: f15d7dfc
[ 243.287533] ESI: f15d7570 EDI: c0270678 EBP: c005de14 ESP: c005de0c
[ 243.287533] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00010246
[ 243.287533] CR0: 80050033 CR2: ffffffff CR3: 03d30000 CR4: 00040690
[ 243.287533] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 243.287533] DR6: fffe0ff0 DR7: 00000400
[ 243.287533] Call Trace:
[ 243.287533] drm_dev_alloc+0x74/0xc0
[ 243.287533] bochs_pci_probe+0xf2/0x320
[ 243.287533] pci_device_probe+0x1e7/0x340
[ 243.287533] really_probe+0xba/0x740
[ 243.287533] driver_probe_device+0xa2/0x2e0
[ 243.287533] ? mutex_lock_nested+0x31/0x50
[ 243.287533] device_driver_attach+0x87/0xa0
[ 243.287533] __driver_attach+0xef/0x1a0
[ 243.287533] ? device_driver_attach+0xa0/0xa0
[ 243.287533] bus_for_each_dev+0xdf/0x140
[ 243.287533] driver_attach+0x22/0x40
[ 243.287533] ? device_driver_attach+0xa0/0xa0
[ 243.287533] bus_add_driver+0x20d/0x3a0
[ 243.287533] ? pci_bus_num_vf+0x30/0x30
[ 243.287533] driver_register+0xc7/0x210
[ 243.287533] ? ast_init+0x4f/0x4f
[ 243.287533] __pci_register_driver+0x67/0x80
[ 243.287533] bochs_init+0x39/0x4f
[ 243.287533] do_one_initcall+0xfc/0x5e0
[ 243.287533] ? parse_args+0x2c0/0x520
[ 243.287533] ? rdinit_setup+0x41/0x41
[ 243.287533] kernel_init_freeable+0x475/0x555
[ 243.287533] ? rest_init+0x210/0x210
[ 243.287533] kernel_init+0x16/0x270
[ 243.287533] ret_from_fork+0x19/0x30
[ 243.287533] ---[ end trace a9ab7aa096fc5de5 ]---
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc5-01259-g6f365e561d665 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[btrfs] 1eb52c8bd8: fio.write_bw_MBps -46.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -46.1% regression of fio.write_bw_MBps due to commit:
commit: 1eb52c8bd8d6b056caa06737242830f03777da32 ("btrfs: get rid of one layer of bios in direct I/O")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: fio-basic
on test machine: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
with following parameters:
runtime: 300s
disk: 1SSD
fs: btrfs
nr_task: 100%
test_size: 128G
rw: write
bs: 4k
ioengine: sync
direct: direct
cpufreq_governor: performance
ucode: 0xb000038
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
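Outside of lkp, the job parameters listed above map roughly onto a direct fio command line. A sketch only — the mount point and `--numjobs` value are assumptions (100% of 160 threads), and the attached job.yaml remains the authoritative job definition:

```shell
#!/bin/bash
# Approximate the fio-basic job parameters above as a plain fio invocation.
# --directory is an assumed btrfs mount point; flags are standard fio options.
fio_cmd=(
    fio --name=lkp-write
        --directory=/mnt/btrfs
        --rw=write --bs=4k --direct=1 --ioengine=sync
        --size=128G --runtime=300 --time_based
        --numjobs=160 --group_reporting
)
# Print rather than run: the real job writes 128G to the test SSD.
printf '%s\n' "${fio_cmd[*]}"
```

Comparing `fio.write_bw_MBps` from such a run before and after commit 1eb52c8bd8 is what the tables below quantify.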
=========================================================================================
bs/compiler/cpufreq_governor/direct/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/direct/1SSD/btrfs/sync/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/300s/write/lkp-bdw-ex2/128G/fio-basic/0xb000038
commit:
ba9d3fc7cb ("btrfs: put direct I/O checksums in btrfs_dio_private instead of bio")
1eb52c8bd8 ("btrfs: get rid of one layer of bios in direct I/O")
ba9d3fc7cb61296b 1eb52c8bd8d6b056caa06737242
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
0.57 ± 25% -0.2 0.35 ± 3% fio.latency_1000us%
0.13 ± 9% -0.1 0.06 ± 5% fio.latency_100us%
0.18 ± 3% +24.3 24.45 ± 17% fio.latency_10ms%
0.34 ± 5% -0.1 0.23 ± 6% fio.latency_250us%
93.88 -39.1 54.78 ± 9% fio.latency_2ms%
3.95 ± 5% +15.1 19.03 ± 12% fio.latency_4ms%
0.52 ± 4% +0.1 0.58 ± 4% fio.latency_500us%
0.08 ± 20% -0.1 0.02 ± 11% fio.latency_50us%
0.35 ± 3% +0.1 0.48 ± 2% fio.latency_750us%
2.681e+08 -45.3% 1.466e+08 ± 5% fio.time.file_system_outputs
23020 -47.8% 12023 ± 6% fio.time.involuntary_context_switches
159091 -8.4% 145794 fio.time.minor_page_faults
2351 -39.8% 1415 ± 5% fio.time.percent_of_cpu_this_job_got
6836 -39.0% 4170 ± 5% fio.time.system_time
141.07 ± 2% -37.2% 88.62 ± 6% fio.time.user_time
74688391 -36.0% 47773188 ± 3% fio.time.voluntary_context_switches
33510952 -45.3% 18323895 ± 5% fio.workload
442.52 -46.1% 238.59 ± 5% fio.write_bw_MBps
1757184 +196.5% 5210112 fio.write_clat_90%_us
1945600 +187.2% 5586944 fio.write_clat_95%_us
2441216 +157.7% 6291456 fio.write_clat_99%_us
1409946 +86.2% 2625138 ± 5% fio.write_clat_mean_us
371509 ± 3% +331.6% 1603575 ± 4% fio.write_clat_stddev
113285 -46.1% 61078 ± 5% fio.write_iops
0.76 ± 6% -0.5 0.29 ± 6% mpstat.cpu.all.iowait%
28.05 -5.7 22.39 ± 2% mpstat.cpu.all.sys%
0.26 -0.1 0.17 ± 5% mpstat.cpu.all.usr%
56771 ± 4% -9.3% 51506 softirqs.CPU100.SCHED
55528 ± 3% -6.8% 51743 ± 4% softirqs.CPU148.SCHED
113291 ± 2% +24.7% 141291 ± 15% softirqs.CPU49.TIMER
1.562e+10 ± 20% +58.3% 2.472e+10 ± 18% cpuidle.C6.time
35854532 ± 5% +21.6% 43598798 ± 5% cpuidle.C6.usage
44587503 ± 5% -29.7% 31323017 ± 3% cpuidle.POLL.time
9963711 ± 5% -29.0% 7069945 ± 3% cpuidle.POLL.usage
31160 -49.6% 15709 ± 70% numa-numastat.node0.other_node
573019 ± 20% -46.9% 304525 ± 16% numa-numastat.node2.local_node
587387 ± 20% -43.6% 331125 ± 16% numa-numastat.node2.numa_hit
563047 ± 14% -47.7% 294353 ± 6% numa-numastat.node3.local_node
588001 ± 13% -45.4% 321048 ± 4% numa-numastat.node3.numa_hit
70.25 +9.6% 77.00 vmstat.cpu.id
443711 -45.4% 242237 ± 5% vmstat.io.bo
3.75 ± 11% -40.0% 2.25 ± 19% vmstat.procs.b
54.00 -21.3% 42.50 ± 2% vmstat.procs.r
501610 -11.1% 445923 vmstat.system.in
71.10 +8.7% 77.25 iostat.cpu.idle
27.88 -20.1% 22.29 ± 2% iostat.cpu.system
1.15 ± 5% -68.8% 0.36 ± 7% iostat.nvme0n1.avgqu-sz
1.08 ± 80% -52.1% 0.52 ± 5% iostat.nvme0n1.await.max
110528 -45.5% 60285 ± 5% iostat.nvme0n1.w/s
1.08 ± 80% -52.1% 0.52 ± 5% iostat.nvme0n1.w_await.max
442960 -45.3% 242228 ± 5% iostat.nvme0n1.wkB/s
96186 ± 2% +74.4% 167794 ± 15% meminfo.Active(file)
10471936 ± 4% +12.4% 11775488 ± 7% meminfo.DirectMap2M
13601 +40.4% 19096 ± 4% meminfo.Dirty
5884172 -12.5% 5148653 ± 2% meminfo.Memused
2826160 -27.3% 2053832 ± 3% meminfo.SUnreclaim
2936311 -26.3% 2164032 ± 3% meminfo.Slab
27663 -25.2% 20686 ± 3% meminfo.max_used_kB
7889 +24.8% 9844 ± 2% slabinfo.blkdev_ioc.active_objs
7903 +24.6% 9845 ± 2% slabinfo.blkdev_ioc.num_objs
16616823 -33.2% 11099554 ± 4% slabinfo.btrfs_extent_map.active_objs
296728 -33.2% 198206 ± 4% slabinfo.btrfs_extent_map.active_slabs
16616833 -33.2% 11099554 ± 4% slabinfo.btrfs_extent_map.num_objs
296728 -33.2% 198206 ± 4% slabinfo.btrfs_extent_map.num_slabs
13880 +172.9% 37881 ± 3% slabinfo.kmalloc-128.active_objs
217.25 +172.3% 591.50 ± 3% slabinfo.kmalloc-128.active_slabs
13947 +171.6% 37881 ± 3% slabinfo.kmalloc-128.num_objs
217.25 +172.3% 591.50 ± 3% slabinfo.kmalloc-128.num_slabs
12405 +24.7% 15471 ± 7% slabinfo.numa_policy.active_objs
12405 +24.7% 15471 ± 7% slabinfo.numa_policy.num_objs
101066 -8.6% 92337 ± 5% proc-vmstat.nr_active_anon
24037 ± 2% +74.7% 41999 ± 14% proc-vmstat.nr_active_file
67593 +45.1% 98064 ± 12% proc-vmstat.nr_dirtied
3423 +39.7% 4783 ± 4% proc-vmstat.nr_dirty
29713 +1.5% 30172 proc-vmstat.nr_kernel_stack
134152 -6.0% 126155 ± 4% proc-vmstat.nr_shmem
706371 -27.3% 513804 ± 3% proc-vmstat.nr_slab_unreclaimable
63441 ± 4% +35.9% 86242 ± 13% proc-vmstat.nr_written
101066 -8.6% 92337 ± 5% proc-vmstat.nr_zone_active_anon
24037 ± 2% +74.7% 41999 ± 14% proc-vmstat.nr_zone_active_file
3421 +39.9% 4785 ± 4% proc-vmstat.nr_zone_write_pending
39946 ± 19% -36.8% 25244 ± 9% proc-vmstat.numa_hint_faults
24644 ± 17% -43.6% 13893 ± 5% proc-vmstat.numa_hint_faults_local
2258358 -30.4% 1571696 ± 2% proc-vmstat.numa_hit
2164392 -31.7% 1477659 ± 2% proc-vmstat.numa_local
3576883 ± 2% -42.7% 2047837 ± 3% proc-vmstat.pgalloc_normal
2141630 ± 4% -44.4% 1190106 ± 2% proc-vmstat.pgfree
1.342e+08 -45.1% 73640384 ± 5% proc-vmstat.pgpgout
24028 ± 5% +76.3% 42370 ± 25% numa-meminfo.node0.Active(file)
3436 +37.4% 4722 ± 9% numa-meminfo.node0.Dirty
315977 ± 4% +41.5% 447077 ± 42% numa-meminfo.node0.FilePages
714746 ± 2% -27.3% 519800 ± 3% numa-meminfo.node0.SUnreclaim
737117 ± 3% -26.0% 545687 ± 4% numa-meminfo.node0.Slab
24334 ± 4% +72.9% 42072 ± 13% numa-meminfo.node1.Active(file)
3360 ± 3% +40.7% 4727 ± 8% numa-meminfo.node1.Dirty
30338 ± 52% -92.1% 2398 ± 22% numa-meminfo.node1.Inactive(file)
707990 ± 3% -27.5% 513144 ± 3% numa-meminfo.node1.SUnreclaim
730781 ± 2% -25.4% 545123 numa-meminfo.node1.Slab
261518 +9.7% 286771 ± 2% numa-meminfo.node1.Unevictable
23970 ± 4% +75.7% 42114 ± 11% numa-meminfo.node2.Active(file)
3414 ± 3% +45.9% 4980 ± 10% numa-meminfo.node2.Dirty
2089 ± 4% +464.7% 11797 ±135% numa-meminfo.node2.Inactive(file)
684851 -24.9% 514538 ± 6% numa-meminfo.node2.SUnreclaim
718787 -24.6% 541680 ± 6% numa-meminfo.node2.Slab
23770 ± 3% +72.9% 41104 ± 14% numa-meminfo.node3.Active(file)
3419 ± 2% +37.0% 4686 ± 5% numa-meminfo.node3.Dirty
621874 ± 33% -38.5% 382252 ± 8% numa-meminfo.node3.FilePages
211446 ± 95% -97.2% 5918 ± 70% numa-meminfo.node3.Inactive(anon)
8189 ± 12% -28.9% 5826 ± 2% numa-meminfo.node3.KernelStack
210498 ± 95% -96.2% 8099 ± 24% numa-meminfo.node3.Mapped
1634119 ± 14% -28.5% 1167711 ± 8% numa-meminfo.node3.MemUsed
8986 ± 65% -93.8% 558.00 ± 42% numa-meminfo.node3.PageTables
716951 ± 3% -29.5% 505575 ± 3% numa-meminfo.node3.SUnreclaim
307384 ± 66% -78.1% 67417 ± 27% numa-meminfo.node3.Shmem
748007 ± 4% -29.0% 530773 ± 3% numa-meminfo.node3.Slab
236951 ± 4% +74.8% 414267 ± 20% sched_debug.cfs_rq:/.load.avg
1051986 ± 14% +2775.4% 30248961 ± 28% sched_debug.cfs_rq:/.load.max
382024 ± 5% +599.5% 2672216 ± 29% sched_debug.cfs_rq:/.load.stddev
266.10 ± 18% +55.7% 414.41 ± 7% sched_debug.cfs_rq:/.load_avg.avg
0.24 ± 3% -14.3% 0.21 ± 5% sched_debug.cfs_rq:/.nr_running.avg
0.02 ±173% +13188.9% 2.46 ± 52% sched_debug.cfs_rq:/.removed.load_avg.avg
3.54 ±173% +10209.6% 365.13 ± 65% sched_debug.cfs_rq:/.removed.load_avg.max
0.26 ±173% +11413.7% 29.43 ± 58% sched_debug.cfs_rq:/.removed.load_avg.stddev
0.86 ±173% +9170.6% 79.74 ± 47% sched_debug.cfs_rq:/.removed.runnable_sum.avg
164.29 ±173% +6197.3% 10345 ± 28% sched_debug.cfs_rq:/.removed.runnable_sum.max
11.86 ±173% +7400.3% 889.28 ± 36% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.02 ±173% +5513.9% 0.94 ± 67% sched_debug.cfs_rq:/.removed.util_avg.avg
3.21 ±173% +3881.6% 127.74 ± 56% sched_debug.cfs_rq:/.removed.util_avg.max
0.23 ±173% +4540.6% 10.74 ± 59% sched_debug.cfs_rq:/.removed.util_avg.stddev
870.28 ± 9% +135.2% 2047 ± 25% sched_debug.cfs_rq:/.runnable_load_avg.max
237569 ± 4% +74.0% 413454 ± 20% sched_debug.cfs_rq:/.runnable_weight.avg
1050573 ± 14% +2779.2% 30248310 ± 28% sched_debug.cfs_rq:/.runnable_weight.max
383684 ± 5% +596.4% 2671965 ± 29% sched_debug.cfs_rq:/.runnable_weight.stddev
150.81 ± 7% -15.6% 127.27 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.avg
1448440 ± 18% -21.5% 1137742 ± 11% sched_debug.cpu.avg_idle.max
771.64 ± 6% -14.8% 657.35 ± 6% sched_debug.cpu.curr->pid.avg
0.00 ± 9% -15.6% 0.00 ± 6% sched_debug.cpu.next_balance.stddev
0.24 ± 6% -14.2% 0.20 ± 4% sched_debug.cpu.nr_running.avg
0.55 ± 9% +16.3% 0.64 sched_debug.cpu.nr_uninterruptible.avg
-65.21 +67.9% -109.47 sched_debug.cpu.nr_uninterruptible.min
28.57 ± 7% +30.6% 37.32 ± 8% sched_debug.cpu.nr_uninterruptible.stddev
81977 ± 23% +95.3% 160125 ± 10% sched_debug.cpu.ttwu_count.min
15904 ± 19% -72.6% 4359 ± 13% sched_debug.cpu.ttwu_local.avg
108359 ± 19% -68.6% 34043 ± 18% sched_debug.cpu.ttwu_local.max
35141 ± 19% -75.0% 8783 ± 13% sched_debug.cpu.ttwu_local.stddev
6014 ± 5% +76.3% 10603 ± 25% numa-vmstat.node0.nr_active_file
7304 ± 13% +128.6% 16699 ± 41% numa-vmstat.node0.nr_dirtied
856.00 +38.1% 1182 ± 8% numa-vmstat.node0.nr_dirty
79002 ± 4% +41.5% 111781 ± 42% numa-vmstat.node0.nr_file_pages
178849 ± 2% -27.3% 130010 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
6442 ± 15% +130.8% 14871 ± 44% numa-vmstat.node0.nr_written
6014 ± 5% +76.3% 10603 ± 25% numa-vmstat.node0.nr_zone_active_file
856.25 +38.1% 1182 ± 9% numa-vmstat.node0.nr_zone_write_pending
6091 ± 4% +72.8% 10527 ± 13% numa-vmstat.node1.nr_active_file
837.50 ± 2% +41.3% 1183 ± 9% numa-vmstat.node1.nr_dirty
7582 ± 52% -92.1% 600.00 ± 22% numa-vmstat.node1.nr_inactive_file
27196 ±162% +97.9% 53815 ± 94% numa-vmstat.node1.nr_mapped
177163 ± 3% -27.6% 128347 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
65379 +9.7% 71692 ± 2% numa-vmstat.node1.nr_unevictable
6091 ± 4% +72.8% 10527 ± 13% numa-vmstat.node1.nr_zone_active_file
7582 ± 52% -92.1% 600.00 ± 22% numa-vmstat.node1.nr_zone_inactive_file
65379 +9.7% 71692 ± 2% numa-vmstat.node1.nr_zone_unevictable
837.25 ± 2% +41.4% 1184 ± 9% numa-vmstat.node1.nr_zone_write_pending
5999 ± 4% +75.6% 10538 ± 11% numa-vmstat.node2.nr_active_file
6769 ± 3% +105.8% 13934 ± 28% numa-vmstat.node2.nr_dirtied
850.25 ± 3% +46.5% 1245 ± 10% numa-vmstat.node2.nr_dirty
522.25 ± 4% +464.6% 2948 ±134% numa-vmstat.node2.nr_inactive_file
171381 -24.9% 128699 ± 6% numa-vmstat.node2.nr_slab_unreclaimable
5917 ± 3% +104.3% 12088 ± 32% numa-vmstat.node2.nr_written
5999 ± 4% +75.6% 10538 ± 11% numa-vmstat.node2.nr_zone_active_file
522.25 ± 4% +464.6% 2948 ±134% numa-vmstat.node2.nr_zone_inactive_file
849.75 ± 3% +46.7% 1246 ± 10% numa-vmstat.node2.nr_zone_write_pending
5949 ± 3% +72.9% 10285 ± 14% numa-vmstat.node3.nr_active_file
851.00 ± 2% +37.7% 1172 ± 5% numa-vmstat.node3.nr_dirty
155526 ± 33% -38.5% 95575 ± 8% numa-vmstat.node3.nr_file_pages
52896 ± 95% -97.2% 1493 ± 70% numa-vmstat.node3.nr_inactive_anon
8188 ± 12% -28.9% 5825 ± 2% numa-vmstat.node3.nr_kernel_stack
52666 ± 94% -96.0% 2086 ± 24% numa-vmstat.node3.nr_mapped
2244 ± 65% -93.8% 139.00 ± 42% numa-vmstat.node3.nr_page_table_pages
76896 ± 66% -78.1% 16857 ± 27% numa-vmstat.node3.nr_shmem
179393 ± 3% -29.5% 126463 ± 3% numa-vmstat.node3.nr_slab_unreclaimable
5949 ± 3% +72.9% 10285 ± 14% numa-vmstat.node3.nr_zone_active_file
52896 ± 95% -97.2% 1493 ± 70% numa-vmstat.node3.nr_zone_inactive_anon
851.50 ± 2% +37.7% 1172 ± 6% numa-vmstat.node3.nr_zone_write_pending
722738 ± 17% -32.7% 486464 ± 9% numa-vmstat.node3.numa_hit
611153 ± 21% -39.0% 372997 ± 13% numa-vmstat.node3.numa_local
16.63 -15.3% 14.08 perf-stat.i.MPKI
1.078e+10 -18.9% 8.744e+09 ± 2% perf-stat.i.branch-instructions
1.00 ± 6% +0.1 1.12 ± 2% perf-stat.i.branch-miss-rate%
7.06 ± 3% +1.9 8.92 perf-stat.i.cache-miss-rate%
53207426 ± 2% -11.8% 46918906 ± 2% perf-stat.i.cache-misses
7.476e+08 ± 3% -29.0% 5.305e+08 ± 3% perf-stat.i.cache-references
1.524e+11 -19.1% 1.233e+11 ± 2% perf-stat.i.cpu-cycles
3821 ± 2% +16.8% 4464 ± 8% perf-stat.i.cpu-migrations
3170 ± 16% -15.2% 2690 ± 3% perf-stat.i.cycles-between-cache-misses
0.21 ± 8% +0.0 0.25 ± 6% perf-stat.i.dTLB-load-miss-rate%
1.167e+10 -19.1% 9.447e+09 ± 2% perf-stat.i.dTLB-loads
2.345e+09 -10.6% 2.096e+09 ± 2% perf-stat.i.dTLB-stores
28.83 ± 7% +3.2 31.99 ± 3% perf-stat.i.iTLB-load-miss-rate%
8884989 ± 7% -21.4% 6986487 ± 6% perf-stat.i.iTLB-load-misses
22097174 ± 2% -33.3% 14729666 ± 2% perf-stat.i.iTLB-loads
4.576e+10 -18.6% 3.722e+10 ± 2% perf-stat.i.instructions
0.30 +0.8% 0.30 perf-stat.i.ipc
3987 -2.5% 3887 perf-stat.i.minor-faults
95.84 +1.3 97.15 perf-stat.i.node-load-miss-rate%
24251167 -12.8% 21155958 ± 2% perf-stat.i.node-load-misses
1041611 ± 2% -37.6% 649903 ± 4% perf-stat.i.node-loads
76.92 -1.5 75.42 perf-stat.i.node-store-miss-rate%
11698329 -17.4% 9659586 ± 2% perf-stat.i.node-store-misses
3420696 -8.6% 3125060 ± 2% perf-stat.i.node-stores
3987 -2.5% 3887 perf-stat.i.page-faults
16.34 ± 2% -12.7% 14.26 perf-stat.overall.MPKI
0.93 ± 3% +0.2 1.10 ± 2% perf-stat.overall.branch-miss-rate%
7.12 ± 3% +1.7 8.85 perf-stat.overall.cache-miss-rate%
2865 -8.3% 2626 ± 2% perf-stat.overall.cycles-between-cache-misses
0.21 ± 9% +0.1 0.26 ± 6% perf-stat.overall.dTLB-load-miss-rate%
28.66 ± 7% +3.5 32.15 ± 4% perf-stat.overall.iTLB-load-miss-rate%
95.88 +1.1 97.02 perf-stat.overall.node-load-miss-rate%
77.37 -1.8 75.55 perf-stat.overall.node-store-miss-rate%
408903 +49.6% 611741 ± 3% perf-stat.overall.path-length
1.074e+10 -18.8% 8.719e+09 ± 2% perf-stat.ps.branch-instructions
53036173 ± 2% -11.7% 46815332 ± 2% perf-stat.ps.cache-misses
7.451e+08 ± 3% -29.0% 5.294e+08 ± 3% perf-stat.ps.cache-references
1.519e+11 -19.1% 1.229e+11 ± 2% perf-stat.ps.cpu-cycles
3809 ± 2% +17.2% 4463 ± 8% perf-stat.ps.cpu-migrations
1.163e+10 -19.0% 9.421e+09 ± 2% perf-stat.ps.dTLB-loads
2.337e+09 -10.5% 2.091e+09 ± 2% perf-stat.ps.dTLB-stores
8853815 ± 7% -21.3% 6971170 ± 6% perf-stat.ps.iTLB-load-misses
22023628 ± 2% -33.3% 14694997 ± 2% perf-stat.ps.iTLB-loads
4.561e+10 -18.6% 3.712e+10 ± 2% perf-stat.ps.instructions
3973 -2.4% 3877 perf-stat.ps.minor-faults
24172195 -12.7% 21107094 ± 2% perf-stat.ps.node-load-misses
1038351 ± 2% -37.5% 648941 ± 4% perf-stat.ps.node-loads
11658597 -17.4% 9635694 ± 2% perf-stat.ps.node-store-misses
3409902 -8.6% 3117668 ± 2% perf-stat.ps.node-stores
3973 -2.4% 3877 perf-stat.ps.page-faults
1.37e+13 -18.3% 1.119e+13 ± 2% perf-stat.total.instructions
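The derived rate columns in the perf-stat block above (MPKI, cycles-between-cache-misses, node-load-miss-rate%) follow from the raw counters in the same block. A minimal cross-check, with all input values copied from the base-kernel column of the report and the usual perf-stat ratio definitions assumed (note LKP computes MPKI from cache-references, not cache-misses, per 1000 instructions):

```python
# Inputs copied verbatim from the base-kernel column above.
instructions = 4.576e10    # perf-stat.i.instructions
cache_refs   = 7.476e8     # perf-stat.i.cache-references
cache_misses = 53207426    # perf-stat.i.cache-misses
cpu_cycles   = 1.524e11    # perf-stat.i.cpu-cycles
node_ld_miss = 24251167    # perf-stat.i.node-load-misses
node_loads   = 1041611     # perf-stat.i.node-loads

# Cache references per 1000 instructions (matches overall.MPKI ~16.34).
mpki = cache_refs / instructions * 1000

# Average cycles between cache misses (matches overall value ~2865;
# the overall column is computed from the .ps counters, so it differs
# slightly from this .i-based estimate).
cycles_between_misses = cpu_cycles / cache_misses

# Fraction of node loads that missed (matches ~95.88%).
node_load_miss_rate = node_ld_miss / (node_ld_miss + node_loads) * 100

print(round(mpki, 2))
print(round(cycles_between_misses))
print(round(node_load_miss_rate, 2))
```

This is only a sanity check of the report's own arithmetic; the `path-length` metric is LKP-specific (instructions per unit of benchmark work) and cannot be reproduced from the counters shown here alone.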
3869 ± 17% -43.8% 2174 ± 31% interrupts.CPU103.NMI:Non-maskable_interrupts
3869 ± 17% -43.8% 2174 ± 31% interrupts.CPU103.PMI:Performance_monitoring_interrupts
20628 ± 15% -26.7% 15129 ± 19% interrupts.CPU106.RES:Rescheduling_interrupts
3940 ± 26% -50.7% 1943 ± 54% interrupts.CPU109.NMI:Non-maskable_interrupts
3940 ± 26% -50.7% 1943 ± 54% interrupts.CPU109.PMI:Performance_monitoring_interrupts
20852 ± 19% -28.7% 14858 ± 26% interrupts.CPU113.RES:Rescheduling_interrupts
94.00 ± 49% +105.1% 192.75 ± 41% interrupts.CPU115.TLB:TLB_shootdowns
3113 ± 12% -39.6% 1881 ± 25% interrupts.CPU117.NMI:Non-maskable_interrupts
3113 ± 12% -39.6% 1881 ± 25% interrupts.CPU117.PMI:Performance_monitoring_interrupts
15318 ± 33% +37.0% 20988 ± 22% interrupts.CPU117.RES:Rescheduling_interrupts
22815 ± 19% -34.4% 14976 ± 37% interrupts.CPU122.RES:Rescheduling_interrupts
3803 ± 18% -41.6% 2221 ± 43% interrupts.CPU124.NMI:Non-maskable_interrupts
3803 ± 18% -41.6% 2221 ± 43% interrupts.CPU124.PMI:Performance_monitoring_interrupts
663.25 ± 50% +36.0% 902.00 ± 4% interrupts.CPU125.CAL:Function_call_interrupts
2736 ± 35% -50.9% 1343 ± 48% interrupts.CPU125.NMI:Non-maskable_interrupts
2736 ± 35% -50.9% 1343 ± 48% interrupts.CPU125.PMI:Performance_monitoring_interrupts
207.50 ± 24% +30.8% 271.50 ± 12% interrupts.CPU126.TLB:TLB_shootdowns
15286 ± 29% +37.8% 21061 ± 20% interrupts.CPU129.RES:Rescheduling_interrupts
192.00 ± 27% +50.7% 289.25 ± 14% interrupts.CPU129.TLB:TLB_shootdowns
3027 ± 18% -45.7% 1645 ± 51% interrupts.CPU131.NMI:Non-maskable_interrupts
3027 ± 18% -45.7% 1645 ± 51% interrupts.CPU131.PMI:Performance_monitoring_interrupts
14710 ± 28% +40.6% 20677 ± 20% interrupts.CPU133.RES:Rescheduling_interrupts
181.00 ± 13% +39.6% 252.75 ± 14% interrupts.CPU136.TLB:TLB_shootdowns
15260 ± 24% +34.3% 20497 ± 16% interrupts.CPU137.RES:Rescheduling_interrupts
174.25 ± 18% +42.9% 249.00 ± 9% interrupts.CPU138.TLB:TLB_shootdowns
3465 ± 19% -37.6% 2161 ± 25% interrupts.CPU139.NMI:Non-maskable_interrupts
3465 ± 19% -37.6% 2161 ± 25% interrupts.CPU139.PMI:Performance_monitoring_interrupts
169.00 ± 15% +44.2% 243.75 ± 10% interrupts.CPU139.TLB:TLB_shootdowns
3139 ± 11% -35.0% 2041 ± 29% interrupts.CPU140.NMI:Non-maskable_interrupts
3139 ± 11% -35.0% 2041 ± 29% interrupts.CPU140.PMI:Performance_monitoring_interrupts
12983 ± 8% -9.7% 11722 ± 3% interrupts.CPU143.RES:Rescheduling_interrupts
3586 ± 14% -29.7% 2522 ± 14% interrupts.CPU144.NMI:Non-maskable_interrupts
3586 ± 14% -29.7% 2522 ± 14% interrupts.CPU144.PMI:Performance_monitoring_interrupts
3389 ± 13% -27.1% 2469 ± 17% interrupts.CPU148.NMI:Non-maskable_interrupts
3389 ± 13% -27.1% 2469 ± 17% interrupts.CPU148.PMI:Performance_monitoring_interrupts
3892 ± 15% -22.3% 3024 ± 19% interrupts.CPU160.NMI:Non-maskable_interrupts
3892 ± 15% -22.3% 3024 ± 19% interrupts.CPU160.PMI:Performance_monitoring_interrupts
15714 ± 20% +25.5% 19727 ± 17% interrupts.CPU163.RES:Rescheduling_interrupts
2880 ± 14% -37.3% 1807 ± 32% interrupts.CPU164.NMI:Non-maskable_interrupts
2880 ± 14% -37.3% 1807 ± 32% interrupts.CPU164.PMI:Performance_monitoring_interrupts
3083 ± 3% -29.9% 2161 ± 24% interrupts.CPU165.NMI:Non-maskable_interrupts
3083 ± 3% -29.9% 2161 ± 24% interrupts.CPU165.PMI:Performance_monitoring_interrupts
12511 ± 8% -15.5% 10570 ± 10% interrupts.CPU167.RES:Rescheduling_interrupts
3183 ± 14% -30.8% 2202 ± 9% interrupts.CPU172.NMI:Non-maskable_interrupts
3183 ± 14% -30.8% 2202 ± 9% interrupts.CPU172.PMI:Performance_monitoring_interrupts
15555 ± 26% +37.3% 21360 ± 20% interrupts.CPU172.RES:Rescheduling_interrupts
3044 ± 4% -29.8% 2137 ± 12% interrupts.CPU174.NMI:Non-maskable_interrupts
3044 ± 4% -29.8% 2137 ± 12% interrupts.CPU174.PMI:Performance_monitoring_interrupts
14849 ± 26% +41.1% 20958 ± 18% interrupts.CPU175.RES:Rescheduling_interrupts
3430 ± 17% -50.6% 1693 ± 32% interrupts.CPU177.NMI:Non-maskable_interrupts
3430 ± 17% -50.6% 1693 ± 32% interrupts.CPU177.PMI:Performance_monitoring_interrupts
213.50 ± 43% -45.4% 116.50 ± 4% interrupts.CPU179.TLB:TLB_shootdowns
4220 ± 26% -51.1% 2065 ± 37% interrupts.CPU185.NMI:Non-maskable_interrupts
4220 ± 26% -51.1% 2065 ± 37% interrupts.CPU185.PMI:Performance_monitoring_interrupts
12366 ± 4% -9.3% 11218 ± 5% interrupts.CPU186.RES:Rescheduling_interrupts
4485 ± 35% -42.8% 2565 ± 26% interrupts.CPU187.NMI:Non-maskable_interrupts
4485 ± 35% -42.8% 2565 ± 26% interrupts.CPU187.PMI:Performance_monitoring_interrupts
3775 ± 28% -32.6% 2545 ± 19% interrupts.CPU20.NMI:Non-maskable_interrupts
3775 ± 28% -32.6% 2545 ± 19% interrupts.CPU20.PMI:Performance_monitoring_interrupts
3184 ± 13% -46.9% 1691 ± 23% interrupts.CPU29.NMI:Non-maskable_interrupts
3184 ± 13% -46.9% 1691 ± 23% interrupts.CPU29.PMI:Performance_monitoring_interrupts
3601 ± 18% -40.1% 2155 ± 27% interrupts.CPU32.NMI:Non-maskable_interrupts
3601 ± 18% -40.1% 2155 ± 27% interrupts.CPU32.PMI:Performance_monitoring_interrupts
658.25 ± 51% +32.6% 873.00 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
138.25 ± 45% +62.4% 224.50 ± 16% interrupts.CPU37.TLB:TLB_shootdowns
657.00 ± 50% +33.9% 879.75 ± 5% interrupts.CPU38.CAL:Function_call_interrupts
3674 ± 23% -40.7% 2179 ± 45% interrupts.CPU38.NMI:Non-maskable_interrupts
3674 ± 23% -40.7% 2179 ± 45% interrupts.CPU38.PMI:Performance_monitoring_interrupts
3842 ± 14% -46.6% 2051 ± 38% interrupts.CPU50.NMI:Non-maskable_interrupts
3842 ± 14% -46.6% 2051 ± 38% interrupts.CPU50.PMI:Performance_monitoring_interrupts
2875 ± 20% -42.9% 1641 ± 25% interrupts.CPU54.NMI:Non-maskable_interrupts
2875 ± 20% -42.9% 1641 ± 25% interrupts.CPU54.PMI:Performance_monitoring_interrupts
14360 ± 4% -14.6% 12262 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
3257 ± 18% -38.9% 1988 ± 20% interrupts.CPU57.NMI:Non-maskable_interrupts
3257 ± 18% -38.9% 1988 ± 20% interrupts.CPU57.PMI:Performance_monitoring_interrupts
332.75 ± 85% -68.2% 105.75 ±141% interrupts.CPU59.TLB:TLB_shootdowns
3046 ± 27% -43.1% 1734 ± 35% interrupts.CPU69.NMI:Non-maskable_interrupts
3046 ± 27% -43.1% 1734 ± 35% interrupts.CPU69.PMI:Performance_monitoring_interrupts
2768 ± 22% -52.3% 1320 ± 54% interrupts.CPU70.NMI:Non-maskable_interrupts
2768 ± 22% -52.3% 1320 ± 54% interrupts.CPU70.PMI:Performance_monitoring_interrupts
2896 ± 19% -33.6% 1922 ± 18% interrupts.CPU71.NMI:Non-maskable_interrupts
2896 ± 19% -33.6% 1922 ± 18% interrupts.CPU71.PMI:Performance_monitoring_interrupts
4418 ± 16% -35.8% 2838 ± 9% interrupts.CPU73.NMI:Non-maskable_interrupts
4418 ± 16% -35.8% 2838 ± 9% interrupts.CPU73.PMI:Performance_monitoring_interrupts
3848 ± 7% -26.6% 2825 ± 12% interrupts.CPU75.NMI:Non-maskable_interrupts
3848 ± 7% -26.6% 2825 ± 12% interrupts.CPU75.PMI:Performance_monitoring_interrupts
15568 ± 5% -13.5% 13469 ± 2% interrupts.CPU79.RES:Rescheduling_interrupts
4364 ± 12% -38.2% 2699 ± 21% interrupts.CPU90.NMI:Non-maskable_interrupts
4364 ± 12% -38.2% 2699 ± 21% interrupts.CPU90.PMI:Performance_monitoring_interrupts
4179 ± 6% -49.8% 2098 ± 20% interrupts.CPU94.NMI:Non-maskable_interrupts
4179 ± 6% -49.8% 2098 ± 20% interrupts.CPU94.PMI:Performance_monitoring_interrupts
15345 ± 9% -17.3% 12688 ± 6% interrupts.CPU99.RES:Rescheduling_interrupts
350.75 -56.6% 152.25 ± 60% interrupts.IWI:IRQ_work_interrupts
612737 ± 3% -24.2% 464206 ± 19% interrupts.NMI:Non-maskable_interrupts
612737 ± 3% -24.2% 464206 ± 19% interrupts.PMI:Performance_monitoring_interrupts
21.44 ± 2% -7.0 14.48 ± 33% perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
21.45 ± 2% -7.0 14.50 ± 33% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
21.37 ± 2% -6.9 14.43 ± 33% perf-profile.calltrace.cycles-pp.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter.new_sync_write.vfs_write
21.38 ± 2% -6.9 14.44 ± 33% perf-profile.calltrace.cycles-pp.generic_file_direct_write.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write
10.50 ± 3% -6.1 4.36 ± 73% perf-profile.calltrace.cycles-pp.btrfs_release_path.btrfs_free_path.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
10.50 ± 3% -6.1 4.36 ± 73% perf-profile.calltrace.cycles-pp.btrfs_free_path.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
10.39 ± 3% -6.1 4.27 ± 74% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.btrfs_release_path.btrfs_free_path.btrfs_mark_extent_written.btrfs_finish_ordered_io
9.67 ± 3% -5.8 3.86 ± 78% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.btrfs_free_path
9.68 ± 3% -5.8 3.87 ± 78% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.btrfs_release_path.btrfs_free_path.btrfs_mark_extent_written
16.52 -4.4 12.14 ± 25% perf-profile.calltrace.cycles-pp.do_blockdev_direct_IO.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter.new_sync_write
15.70 ± 2% -4.2 11.54 ± 13% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node
15.74 ± 2% -4.1 11.60 ± 13% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
15.44 -4.1 11.35 ± 24% perf-profile.calltrace.cycles-pp.btrfs_get_blocks_direct.do_blockdev_direct_IO.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter
14.74 -4.0 10.73 ± 24% perf-profile.calltrace.cycles-pp.can_nocow_extent.btrfs_get_blocks_direct.do_blockdev_direct_IO.btrfs_direct_IO.generic_file_direct_write
13.61 -4.0 9.64 ± 25% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.can_nocow_extent
13.65 -4.0 9.69 ± 25% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.can_nocow_extent.btrfs_get_blocks_direct
13.91 -3.8 10.06 ± 23% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.can_nocow_extent.btrfs_get_blocks_direct.do_blockdev_direct_IO
13.92 -3.8 10.07 ± 23% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.can_nocow_extent.btrfs_get_blocks_direct.do_blockdev_direct_IO.btrfs_direct_IO
7.93 -2.7 5.23 ± 22% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent
3.68 ± 5% -2.1 1.53 ±100% perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_space.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter.new_sync_write
3.66 ± 5% -2.1 1.52 ±100% perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_delalloc_reserve_space.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter
3.59 ± 6% -2.1 1.48 ±100% perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_delalloc_reserve_space.btrfs_direct_IO.generic_file_direct_write
3.36 ± 6% -2.0 1.35 ±100% perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_delalloc_reserve_space.btrfs_direct_IO
3.29 ± 6% -2.0 1.32 ±100% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_delalloc_reserve_space
5.31 ± 2% -1.7 3.63 ± 27% perf-profile.calltrace.cycles-pp.finish_wait.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent
7.47 ± 3% -0.9 6.57 ± 8% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
1.15 ± 6% -0.8 0.39 ±100% perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
1.15 ± 5% -0.8 0.39 ±100% perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
1.15 ± 5% -0.8 0.39 ±100% perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_items.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
1.14 ± 6% -0.8 0.39 ±100% perf-profile.calltrace.cycles-pp.__btrfs_update_delayed_inode.__btrfs_run_delayed_items.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
1.05 ± 6% -0.7 0.36 ±100% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_run_delayed_items.flush_space
1.05 ± 6% -0.7 0.36 ±100% perf-profile.calltrace.cycles-pp.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_run_delayed_items.flush_space.btrfs_async_reclaim_metadata_space
1.04 ± 6% -0.7 0.35 ±100% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode.__btrfs_run_delayed_items
1.04 ± 6% -0.7 0.35 ±100% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_inode.__btrfs_update_delayed_inode
1.23 ± 7% -0.7 0.56 ±100% perf-profile.calltrace.cycles-pp.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
1.17 ± 7% -0.6 0.52 ±100% perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper
1.17 ± 7% -0.6 0.52 ±100% perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
0.95 ± 8% -0.5 0.41 ±100% perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_remove_ordered_extent.btrfs_finish_ordered_io
0.93 ± 9% -0.5 0.40 ±100% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_remove_ordered_extent
0.70 ± 18% -0.4 0.28 ±100% perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter
0.70 ± 17% -0.4 0.29 ±100% perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_direct_IO.generic_file_direct_write.btrfs_file_write_iter.new_sync_write
0.62 ± 8% -0.3 0.28 ±100% perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.62 ± 8% -0.3 0.28 ±100% perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter_state.cpuidle_enter.do_idle
0.53 ± 3% +0.3 0.81 ± 25% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending
0.56 ± 3% +0.3 0.84 ± 24% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle
0.56 ± 3% +0.3 0.84 ± 25% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary
0.56 ± 3% +0.3 0.84 ± 25% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry
0.64 ± 2% +0.3 0.99 ± 26% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.75 ± 6% +0.4 1.18 ± 27% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.26 ±100% +0.5 0.72 ± 23% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.82 ± 6% +0.5 1.31 ± 27% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +1.3 1.31 ± 37% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_write_lock_slowpath.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
0.00 +1.6 1.58 ± 33% perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
2.91 ± 3% +4.2 7.08 ± 41% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.btrfs_tree_lock.btrfs_lock_root_node
2.92 ± 3% +4.2 7.11 ± 41% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.finish_wait.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
2.93 ± 3% +4.2 7.15 ± 41% perf-profile.calltrace.cycles-pp.finish_wait.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
4.20 ± 2% +6.7 10.93 ± 50% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node
35.04 ± 3% +6.8 41.85 ± 7% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
4.22 ± 2% +6.8 11.03 ± 50% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
35.15 ± 3% +6.9 42.01 ± 7% perf-profile.calltrace.cycles-pp.ret_from_fork
35.15 ± 3% +6.9 42.01 ± 7% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
35.14 ± 3% +6.9 42.00 ± 7% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
4.29 ± 2% +7.0 11.26 ± 50% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
33.85 ± 3% +7.2 41.02 ± 8% perf-profile.calltrace.cycles-pp.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread.kthread
33.86 ± 3% +7.5 41.36 ± 8% perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
31.32 ± 3% +7.9 39.25 ± 11% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work.worker_thread
20.65 ± 3% +12.8 33.42 ± 25% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper.process_one_work
7.60 ± 3% +12.9 20.49 ± 43% perf-profile.calltrace.cycles-pp.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
7.61 ± 3% +12.9 20.50 ± 43% perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.btrfs_work_helper
21.55 ± 2% -7.0 14.59 ± 33% perf-profile.children.cycles-pp.ksys_write
21.54 ± 2% -7.0 14.57 ± 33% perf-profile.children.cycles-pp.vfs_write
21.48 ± 2% -7.0 14.52 ± 33% perf-profile.children.cycles-pp.new_sync_write
21.73 ± 2% -7.0 14.77 ± 32% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
21.45 ± 2% -7.0 14.49 ± 33% perf-profile.children.cycles-pp.btrfs_file_write_iter
21.73 ± 2% -7.0 14.77 ± 33% perf-profile.children.cycles-pp.do_syscall_64
21.39 ± 2% -7.0 14.44 ± 33% perf-profile.children.cycles-pp.generic_file_direct_write
21.37 ± 2% -6.9 14.43 ± 33% perf-profile.children.cycles-pp.btrfs_direct_IO
10.57 ± 3% -6.2 4.41 ± 73% perf-profile.children.cycles-pp.btrfs_free_path
10.65 ± 3% -6.1 4.54 ± 72% perf-profile.children.cycles-pp.btrfs_release_path
27.59 ± 2% -5.5 22.07 ± 17% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
27.41 ± 2% -5.5 21.94 ± 17% perf-profile.children.cycles-pp.btrfs_tree_read_lock
10.82 ± 3% -4.6 6.18 ± 55% perf-profile.children.cycles-pp.__wake_up_common_lock
16.52 -4.4 12.14 ± 25% perf-profile.children.cycles-pp.do_blockdev_direct_IO
15.44 -4.1 11.35 ± 24% perf-profile.children.cycles-pp.btrfs_get_blocks_direct
14.73 -4.0 10.73 ± 24% perf-profile.children.cycles-pp.can_nocow_extent
13.92 -3.8 10.07 ± 23% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
5.67 ± 6% -3.0 2.70 ± 81% perf-profile.children.cycles-pp._raw_spin_lock
3.68 ± 5% -2.0 1.67 ± 83% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_space
3.65 ± 6% -2.0 1.65 ± 84% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
3.59 ± 6% -2.0 1.60 ± 85% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
2.31 ± 6% -1.2 1.13 ± 75% perf-profile.children.cycles-pp.btrfs_block_rsv_release
2.24 ± 6% -1.1 1.11 ± 74% perf-profile.children.cycles-pp.btrfs_inode_rsv_release
1.15 ± 6% -0.8 0.39 ±100% perf-profile.children.cycles-pp.btrfs_async_reclaim_metadata_space
1.15 ± 5% -0.8 0.39 ±100% perf-profile.children.cycles-pp.flush_space
1.15 ± 5% -0.8 0.39 ±100% perf-profile.children.cycles-pp.__btrfs_run_delayed_items
1.14 ± 6% -0.8 0.39 ±100% perf-profile.children.cycles-pp.__btrfs_update_delayed_inode
1.05 ± 6% -0.7 0.36 ±100% perf-profile.children.cycles-pp.btrfs_lookup_inode
1.23 ± 7% -0.6 0.65 ± 75% perf-profile.children.cycles-pp.btrfs_remove_ordered_extent
0.48 ± 3% -0.1 0.34 ± 38% perf-profile.children.cycles-pp.btrfs_try_granting_tickets
0.29 ± 2% -0.1 0.15 ± 53% perf-profile.children.cycles-pp.btrfs_update_inode_fallback
0.29 ± 2% -0.1 0.15 ± 53% perf-profile.children.cycles-pp.btrfs_update_inode
0.26 -0.1 0.13 ± 54% perf-profile.children.cycles-pp.btrfs_delayed_update_inode
0.27 -0.1 0.15 ± 35% perf-profile.children.cycles-pp.btrfs_map_bio
0.14 ± 3% -0.1 0.04 ±100% perf-profile.children.cycles-pp.__mutex_lock
0.18 ± 2% -0.1 0.08 ± 31% perf-profile.children.cycles-pp.submit_bio
0.13 ± 5% -0.1 0.03 ±100% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.17 ± 2% -0.1 0.08 ± 31% perf-profile.children.cycles-pp.generic_make_request
0.30 ± 2% -0.1 0.24 ± 21% perf-profile.children.cycles-pp.find_extent_buffer
0.12 -0.1 0.06 ± 63% perf-profile.children.cycles-pp.blk_mq_make_request
0.16 ± 2% -0.1 0.11 ± 36% perf-profile.children.cycles-pp.check_committed_ref
0.05 ± 8% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.05 ± 8% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.__switch_to
0.11 ± 7% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.scheduler_tick
0.07 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__unwind_start
0.05 +0.0 0.08 ± 10% perf-profile.children.cycles-pp.__switch_to_asm
0.08 ± 8% +0.0 0.11 ± 19% perf-profile.children.cycles-pp.read_tsc
0.05 ± 8% +0.0 0.08 ± 15% perf-profile.children.cycles-pp.native_sched_clock
0.17 ± 4% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.05 ± 9% +0.0 0.09 ± 16% perf-profile.children.cycles-pp.sched_clock
0.06 ± 6% +0.0 0.10 ± 18% perf-profile.children.cycles-pp.sched_clock_cpu
0.17 ± 6% +0.0 0.21 ± 13% perf-profile.children.cycles-pp.dequeue_entity
0.09 ± 4% +0.0 0.14 ± 12% perf-profile.children.cycles-pp.update_rq_clock
0.01 ±173% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.11 ± 7% +0.0 0.16 ± 23% perf-profile.children.cycles-pp.__next_timer_interrupt
0.29 +0.1 0.34 ± 4% perf-profile.children.cycles-pp.schedule_idle
0.20 ± 5% +0.1 0.25 ± 14% perf-profile.children.cycles-pp.dequeue_task_fair
0.19 ± 5% +0.1 0.25 ± 13% perf-profile.children.cycles-pp.update_process_times
0.08 ± 5% +0.1 0.15 ± 35% perf-profile.children.cycles-pp.__tree_search
0.00 +0.1 0.07 ± 25% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.20 ± 4% +0.1 0.27 ± 13% perf-profile.children.cycles-pp.tick_sched_handle
0.16 ± 5% +0.1 0.23 ± 25% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.18 ± 4% +0.1 0.25 ± 22% perf-profile.children.cycles-pp.__orc_find
0.22 ± 6% +0.1 0.30 ± 13% perf-profile.children.cycles-pp.tick_sched_timer
0.62 ± 2% +0.1 0.71 ± 8% perf-profile.children.cycles-pp.schedule
0.06 ± 6% +0.1 0.15 ± 31% perf-profile.children.cycles-pp._raw_write_lock
0.00 +0.1 0.09 ± 35% perf-profile.children.cycles-pp.btrfs_set_token_32
0.20 ± 4% +0.1 0.29 ± 24% perf-profile.children.cycles-pp.tick_nohz_next_event
0.34 ± 7% +0.1 0.44 ± 7% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.1 0.11 ± 37% perf-profile.children.cycles-pp.btrfs_submit_bio_start_direct_io
0.00 +0.1 0.11 ± 34% perf-profile.children.cycles-pp.run_one_async_start
0.25 ± 5% +0.1 0.38 ± 24% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.89 ± 2% +0.1 1.02 ± 7% perf-profile.children.cycles-pp.__sched_text_start
0.41 ± 7% +0.1 0.54 ± 10% perf-profile.children.cycles-pp.irq_exit
0.08 ± 6% +0.1 0.21 ± 40% perf-profile.children.cycles-pp.btrfs_get_token_32
0.00 +0.2 0.16 ± 33% perf-profile.children.cycles-pp.run_one_async_done
0.36 ± 5% +0.2 0.52 ± 20% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.45 ± 5% +0.2 0.61 ± 18% perf-profile.children.cycles-pp.unwind_next_frame
0.50 ± 5% +0.2 0.72 ± 23% perf-profile.children.cycles-pp.menu_select
0.67 ± 5% +0.2 0.90 ± 18% perf-profile.children.cycles-pp.arch_stack_walk
0.55 ± 6% +0.2 0.79 ± 20% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.3 0.26 ± 32% perf-profile.children.cycles-pp.btrfs_wq_submit_bio
0.71 ± 5% +0.3 0.97 ± 18% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.00 +0.3 0.28 ± 45% perf-profile.children.cycles-pp.setup_leaf_for_split
0.87 ± 5% +0.3 1.17 ± 16% perf-profile.children.cycles-pp.__account_scheduler_latency
1.10 ± 4% +0.3 1.41 ± 12% perf-profile.children.cycles-pp.enqueue_entity
1.15 ± 4% +0.3 1.47 ± 12% perf-profile.children.cycles-pp.ttwu_do_activate
1.14 ± 4% +0.3 1.46 ± 12% perf-profile.children.cycles-pp.enqueue_task_fair
1.14 ± 4% +0.3 1.47 ± 12% perf-profile.children.cycles-pp.activate_task
0.67 ± 2% +0.4 1.02 ± 26% perf-profile.children.cycles-pp.sched_ttwu_pending
1.03 ± 6% +0.4 1.46 ± 18% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.00 +0.4 0.44 ± 40% perf-profile.children.cycles-pp.btrfs_dio_private_put
0.00 +0.5 0.46 ± 39% perf-profile.children.cycles-pp.btrfs_end_dio_bio
1.15 ± 5% +0.5 1.65 ± 18% perf-profile.children.cycles-pp.apic_timer_interrupt
0.03 ±100% +0.9 0.96 ± 80% perf-profile.children.cycles-pp.btrfs_unlock_up_safe
0.05 +1.0 1.04 ± 78% perf-profile.children.cycles-pp.setup_items_for_insert
0.29 ± 5% +1.0 1.28 ± 65% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.05 +1.2 1.28 ± 70% perf-profile.children.cycles-pp.btrfs_duplicate_item
0.24 ± 6% +1.4 1.65 ± 32% perf-profile.children.cycles-pp.queued_write_lock_slowpath
35.05 ± 3% +6.8 41.85 ± 7% perf-profile.children.cycles-pp.process_one_work
35.15 ± 3% +6.9 42.01 ± 7% perf-profile.children.cycles-pp.ret_from_fork
35.15 ± 3% +6.9 42.01 ± 7% perf-profile.children.cycles-pp.kthread
35.14 ± 3% +6.9 42.00 ± 7% perf-profile.children.cycles-pp.worker_thread
33.85 ± 3% +7.2 41.02 ± 8% perf-profile.children.cycles-pp.btrfs_finish_ordered_io
33.86 ± 3% +7.5 41.36 ± 8% perf-profile.children.cycles-pp.btrfs_work_helper
31.32 ± 3% +7.9 39.25 ± 11% perf-profile.children.cycles-pp.btrfs_mark_extent_written
36.36 ± 2% +8.2 44.60 ± 12% perf-profile.children.cycles-pp.btrfs_search_slot
7.61 ± 3% +13.1 20.67 ± 43% perf-profile.children.cycles-pp.btrfs_lock_root_node
7.61 ± 3% +13.1 20.67 ± 43% perf-profile.children.cycles-pp.btrfs_tree_lock
0.13 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.select_task_rq_fair
0.14 +0.0 0.16 ± 5% perf-profile.self.cycles-pp.__sched_text_start
0.07 ± 5% +0.0 0.10 ± 18% perf-profile.self.cycles-pp._find_next_bit
0.05 +0.0 0.08 ± 10% perf-profile.self.cycles-pp.__switch_to_asm
0.04 ± 58% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.__switch_to
0.08 ± 5% +0.0 0.11 ± 19% perf-profile.self.cycles-pp.read_tsc
0.05 +0.0 0.08 ± 15% perf-profile.self.cycles-pp.native_sched_clock
0.07 +0.0 0.10 ± 10% perf-profile.self.cycles-pp.update_rq_clock
0.07 ± 10% +0.0 0.12 ± 26% perf-profile.self.cycles-pp.do_idle
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.enqueue_task_fair
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.__next_timer_interrupt
0.00 +0.1 0.06 ± 20% perf-profile.self.cycles-pp.stack_trace_save_tsk
0.06 ± 6% +0.1 0.12 ± 36% perf-profile.self.cycles-pp.__tree_search
0.17 ± 7% +0.1 0.23 ± 14% perf-profile.self.cycles-pp.unwind_next_frame
0.12 ± 6% +0.1 0.19 ± 25% perf-profile.self.cycles-pp.cpuidle_enter_state
0.10 ± 8% +0.1 0.17 ± 18% perf-profile.self.cycles-pp.prepare_to_wait_event
0.18 ± 4% +0.1 0.25 ± 22% perf-profile.self.cycles-pp.__orc_find
0.07 ± 10% +0.1 0.15 ± 48% perf-profile.self.cycles-pp.queued_read_lock_slowpath
0.00 +0.1 0.08 ± 39% perf-profile.self.cycles-pp.btrfs_set_token_32
0.06 ± 6% +0.1 0.15 ± 34% perf-profile.self.cycles-pp._raw_write_lock
0.08 ± 6% +0.1 0.20 ± 38% perf-profile.self.cycles-pp.btrfs_get_token_32
0.53 ± 3% +0.2 0.71 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.13 ± 3% +0.2 0.33 ± 19% perf-profile.self.cycles-pp.queued_write_lock_slowpath
fio.write_bw_MBps
500 +---------------------------------------------------------------------+
| .+ + + .+ |
450 |+++++ ++ ++++++.+++++++++++.+++++++++++.++++++++++.++++ ++++++ ++++|
400 |-+ |
| |
350 |-+ OO OO O O |
| OOO O O O O O |
300 |-+ O |
| |
250 |-+ OO |
200 |-+ O O O OO O OO |
| O O O O O |
150 |OOOO |
| |
100 +---------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[btrfs] c75e839414: aim7.jobs-per-min -8.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -8.9% regression of aim7.jobs-per-min due to commit:
commit: c75e839414d3610e6487ae3145199c500d55f7f7 ("btrfs: kill the subvol_srcu")
https://github.com/kdave/btrfs-devel.git misc-5.7
in testcase: aim7
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: btrfs
test: disk_wrt
load: 1500
cpufreq_governor: performance
ucode: 0x500002c
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
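As a sanity check, the headline -8.9% figure can be recomputed from the aim7.jobs-per-min means reported in the comparison table below (29533 for the parent commit vs 26893 for the patched kernel); a minimal sketch:

```python
# Recompute the headline regression from the reported aim7.jobs-per-min means.
base = 29533      # jobs-per-min at efc3453494 (parent commit)
patched = 26893   # jobs-per-min at c75e839414 ("btrfs: kill the subvol_srcu")

change_pct = (patched - base) / base * 100
print(f"{change_pct:.1f}%")  # -8.9%, matching the report's headline number
```

Note the robot reports means with ±stddev (here ±2% and ±3%), so the point estimate carries a few percent of run-to-run noise.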
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/4BRD_12G/btrfs/x86_64-rhel-7.6/1500/RAID0/debian-x86_64-20191114.cgz/lkp-csl-2ap2/disk_wrt/aim7/0x500002c
commit:
efc3453494 ("btrfs: make btrfs_cleanup_fs_roots use the radix tree lock")
c75e839414 ("btrfs: kill the subvol_srcu")
efc3453494af7818 c75e839414d3610e6487ae31451
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
3:8 -38% :7 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
3:8 -38% :7 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
29533 ± 2% -8.9% 26893 ± 3% aim7.jobs-per-min
305.05 ± 2% +9.8% 335.06 ± 3% aim7.time.elapsed_time
305.05 ± 2% +9.8% 335.06 ± 3% aim7.time.elapsed_time.max
4866516 ± 10% +37.5% 6692812 ± 8% aim7.time.involuntary_context_switches
56241 ± 2% +10.4% 62079 ± 3% aim7.time.system_time
2346399 +6.4% 2495983 ± 2% aim7.time.voluntary_context_switches
4.75 ± 7% -11.4% 4.21 ± 5% iostat.cpu.idle
4.16 ± 7% -0.5 3.66 ± 6% mpstat.cpu.all.idle%
0.05 ± 2% -0.0 0.05 ± 6% mpstat.cpu.all.usr%
430.50 ± 6% +22.0% 525.29 ± 4% vmstat.procs.r
27321 ± 3% +11.0% 30325 ± 2% vmstat.system.cs
226298 ± 6% +21.5% 274931 ± 4% meminfo.Active(file)
220802 ± 6% +22.1% 269499 ± 4% meminfo.Dirty
607.00 ±129% +62.9% 988.86 ± 81% meminfo.Mlocked
14263 ± 2% -8.1% 13105 ± 2% meminfo.max_used_kB
57241 ± 6% +22.4% 70068 ± 5% numa-meminfo.node0.Active(file)
55300 ± 6% +21.1% 66982 ± 4% numa-meminfo.node0.Dirty
1145 ± 27% +38.1% 1582 ± 14% numa-meminfo.node0.Inactive(file)
55923 ± 7% +20.9% 67615 ± 4% numa-meminfo.node1.Active(file)
54871 ± 6% +22.2% 67076 ± 4% numa-meminfo.node1.Dirty
56296 ± 6% +21.4% 68324 ± 5% numa-meminfo.node2.Active(file)
55052 ± 7% +22.8% 67589 ± 5% numa-meminfo.node2.Dirty
56396 ± 6% +20.9% 68163 ± 3% numa-meminfo.node3.Active(file)
55170 ± 6% +21.6% 67078 ± 5% numa-meminfo.node3.Dirty
56553 ± 6% +21.6% 68743 ± 4% proc-vmstat.nr_active_file
55191 ± 6% +22.1% 67372 ± 4% proc-vmstat.nr_dirty
401952 +2.0% 409958 proc-vmstat.nr_file_pages
151.75 ±129% +63.1% 247.43 ± 81% proc-vmstat.nr_mlock
56553 ± 6% +21.6% 68743 ± 4% proc-vmstat.nr_zone_active_file
54523 ± 6% +22.7% 66890 ± 4% proc-vmstat.nr_zone_write_pending
3141177 ± 2% +6.1% 3332730 proc-vmstat.pgactivate
1387953 ± 2% +7.6% 1492915 ± 2% proc-vmstat.pgfault
977.38 ± 4% +6.1% 1036 proc-vmstat.unevictable_pgs_culled
813104 ± 17% +52.3% 1238414 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
1.28 ± 31% +100.5% 2.56 ± 15% sched_debug.cfs_rq:/.nr_spread_over.avg
815153 ± 17% +52.3% 1241359 ± 9% sched_debug.cfs_rq:/.spread0.stddev
1.89 ± 14% +30.1% 2.45 ± 8% sched_debug.cpu.nr_running.avg
4.93 ± 9% +31.8% 6.50 ± 10% sched_debug.cpu.nr_running.max
0.89 ± 9% +17.5% 1.05 ± 3% sched_debug.cpu.nr_running.stddev
21574 ± 12% +17.7% 25397 ± 2% sched_debug.cpu.nr_switches.avg
19563 ± 12% +18.8% 23234 ± 2% sched_debug.cpu.nr_switches.min
20176 ± 13% +19.2% 24059 ± 2% sched_debug.cpu.sched_count.avg
27751 ± 11% +16.7% 32378 ± 4% sched_debug.cpu.sched_count.max
18768 ± 13% +19.5% 22428 ± 2% sched_debug.cpu.sched_count.min
1403 ± 7% -30.1% 980.72 ± 11% sched_debug.cpu.sched_goidle.avg
4130 ± 9% -13.1% 3587 ± 7% sched_debug.cpu.sched_goidle.max
1186 ± 8% -34.7% 774.74 ± 14% sched_debug.cpu.sched_goidle.min
16729 ± 15% +23.5% 20655 ± 4% sched_debug.cpu.ttwu_count.max
2750 ± 11% +24.1% 3413 ± 7% sched_debug.cpu.ttwu_count.stddev
397.28 ± 17% +26.8% 503.71 ± 10% sched_debug.cpu.ttwu_local.stddev
14331 ± 6% +22.4% 17535 ± 5% numa-vmstat.node0.nr_active_file
13847 ± 6% +21.1% 16769 ± 4% numa-vmstat.node0.nr_dirty
285.88 ± 27% +38.1% 394.86 ± 14% numa-vmstat.node0.nr_inactive_file
14330 ± 6% +22.4% 17534 ± 5% numa-vmstat.node0.nr_zone_active_file
285.88 ± 27% +38.1% 394.86 ± 14% numa-vmstat.node0.nr_zone_inactive_file
13684 ± 7% +21.6% 16641 ± 4% numa-vmstat.node0.nr_zone_write_pending
13987 ± 7% +20.8% 16903 ± 4% numa-vmstat.node1.nr_active_file
13727 ± 6% +22.1% 16758 ± 4% numa-vmstat.node1.nr_dirty
13987 ± 7% +20.9% 16903 ± 4% numa-vmstat.node1.nr_zone_active_file
13566 ± 6% +22.7% 16644 ± 4% numa-vmstat.node1.nr_zone_write_pending
14058 ± 6% +21.3% 17051 ± 5% numa-vmstat.node2.nr_active_file
13743 ± 7% +22.7% 16863 ± 4% numa-vmstat.node2.nr_dirty
14058 ± 6% +21.3% 17051 ± 5% numa-vmstat.node2.nr_zone_active_file
13576 ± 7% +23.4% 16751 ± 5% numa-vmstat.node2.nr_zone_write_pending
14085 ± 6% +20.9% 17023 ± 3% numa-vmstat.node3.nr_active_file
13759 ± 6% +21.8% 16758 ± 4% numa-vmstat.node3.nr_dirty
14085 ± 6% +20.9% 17023 ± 3% numa-vmstat.node3.nr_zone_active_file
13599 ± 6% +22.2% 16623 ± 4% numa-vmstat.node3.nr_zone_write_pending
42741122 ± 4% +20.8% 51618902 ± 4% perf-stat.i.cache-misses
83003095 ± 3% +15.6% 95926683 ± 4% perf-stat.i.cache-references
27176 ± 3% +10.9% 30126 ± 2% perf-stat.i.context-switches
3011 ± 2% -7.1% 2796 perf-stat.i.cpu-migrations
12750 ± 4% -16.8% 10610 ± 4% perf-stat.i.cycles-between-cache-misses
1.101e+09 ± 2% -7.3% 1.021e+09 ± 2% perf-stat.i.dTLB-stores
4473 -1.9% 4385 perf-stat.i.minor-faults
10637685 +11.6% 11868028 ± 2% perf-stat.i.node-load-misses
642328 ± 2% +24.2% 797960 ± 2% perf-stat.i.node-loads
5924816 ± 2% -6.3% 5549941 ± 2% perf-stat.i.node-store-misses
380462 -6.4% 356109 ± 2% perf-stat.i.node-stores
4473 -1.9% 4385 perf-stat.i.page-faults
1.64 ± 3% +16.1% 1.90 ± 3% perf-stat.overall.MPKI
13207 ± 4% -17.0% 10965 ± 4% perf-stat.overall.cycles-between-cache-misses
42977427 ± 4% +21.0% 52002574 ± 4% perf-stat.ps.cache-misses
82444892 ± 3% +16.0% 95657468 ± 3% perf-stat.ps.cache-references
27395 ± 3% +11.0% 30414 ± 2% perf-stat.ps.context-switches
3032 ± 2% -7.0% 2820 perf-stat.ps.cpu-migrations
1.105e+09 ± 2% -7.2% 1.025e+09 ± 2% perf-stat.ps.dTLB-stores
4397 -1.9% 4313 perf-stat.ps.minor-faults
10718746 +11.7% 11972880 ± 2% perf-stat.ps.node-load-misses
646657 ± 2% +24.5% 805277 ± 2% perf-stat.ps.node-loads
5974439 ± 2% -6.2% 5601846 ± 2% perf-stat.ps.node-store-misses
382538 -6.3% 358590 ± 2% perf-stat.ps.node-stores
4397 -1.9% 4313 perf-stat.ps.page-faults
1.537e+13 ± 2% +9.9% 1.689e+13 ± 2% perf-stat.total.instructions
27.39 -0.1 27.28 perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
35.12 +0.1 35.24 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
35.23 +0.1 35.34 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter
35.08 +0.1 35.21 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter
35.47 +0.1 35.61 perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
34.97 +0.1 35.10 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write
35.10 +0.2 35.25 perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write
35.11 +0.2 35.26 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
35.56 +0.2 35.72 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write
98.70 +0.2 98.88 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.73 +0.2 98.90 perf-profile.calltrace.cycles-pp.write
98.69 +0.2 98.86 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.71 +0.2 98.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
98.67 +0.2 98.85 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
98.63 +0.2 98.81 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
98.62 +0.2 98.80 perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
98.58 +0.2 98.77 perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_file_write_iter.new_sync_write.vfs_write.ksys_write
27.39 -0.1 27.28 perf-profile.children.cycles-pp.btrfs_dirty_pages
0.56 ± 5% -0.1 0.47 ± 8% perf-profile.children.cycles-pp.btrfs_get_extent
0.51 ± 5% -0.1 0.42 ± 8% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.52 ± 5% -0.1 0.43 ± 8% perf-profile.children.cycles-pp.btrfs_search_slot
0.42 ± 7% -0.1 0.34 ± 8% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.46 ± 6% -0.1 0.38 ± 8% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.44 ± 6% -0.1 0.36 ± 9% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
0.19 ± 14% -0.1 0.11 ± 24% perf-profile.children.cycles-pp.start_secondary
0.19 ± 16% -0.1 0.11 ± 25% perf-profile.children.cycles-pp.cpuidle_enter
0.19 ± 16% -0.1 0.11 ± 25% perf-profile.children.cycles-pp.cpuidle_enter_state
0.19 ± 14% -0.1 0.11 ± 23% perf-profile.children.cycles-pp.secondary_startup_64
0.19 ± 14% -0.1 0.11 ± 23% perf-profile.children.cycles-pp.cpu_startup_entry
0.19 ± 14% -0.1 0.11 ± 23% perf-profile.children.cycles-pp.do_idle
0.43 ± 6% -0.1 0.35 ± 10% perf-profile.children.cycles-pp.btrfs_tree_read_lock
0.18 ± 15% -0.1 0.11 ± 22% perf-profile.children.cycles-pp.intel_idle
0.29 ± 9% -0.1 0.22 ± 12% perf-profile.children.cycles-pp.osq_lock
0.20 ± 7% -0.0 0.15 ± 15% perf-profile.children.cycles-pp.finish_wait
0.40 ± 5% -0.0 0.35 ± 6% perf-profile.children.cycles-pp.do_unlinkat
0.40 ± 6% -0.0 0.35 ± 6% perf-profile.children.cycles-pp.unlink
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.do_sys_open
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.do_sys_openat2
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.do_filp_open
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.path_openat
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.creat
0.00 +0.0 0.05 perf-profile.children.cycles-pp.btrfs_calculate_inode_block_rsv_size
99.65 +0.1 99.74 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.64 +0.1 99.73 perf-profile.children.cycles-pp.do_syscall_64
35.61 +0.1 35.74 perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
35.56 +0.2 35.72 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
97.54 +0.2 97.70 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
98.73 +0.2 98.91 perf-profile.children.cycles-pp.write
98.69 +0.2 98.87 perf-profile.children.cycles-pp.ksys_write
98.68 +0.2 98.86 perf-profile.children.cycles-pp.vfs_write
98.64 +0.2 98.82 perf-profile.children.cycles-pp.new_sync_write
98.62 +0.2 98.81 perf-profile.children.cycles-pp.btrfs_file_write_iter
97.59 +0.2 97.78 perf-profile.children.cycles-pp._raw_spin_lock
98.58 +0.2 98.77 perf-profile.children.cycles-pp.btrfs_buffered_write
0.18 ± 15% -0.1 0.11 ± 22% perf-profile.self.cycles-pp.intel_idle
0.28 ± 9% -0.1 0.22 ± 12% perf-profile.self.cycles-pp.osq_lock
0.45 ± 2% -0.0 0.41 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
96.76 +0.2 96.92 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
7565 ± 5% +14.1% 8633 ± 4% interrupts.CPU10.RES:Rescheduling_interrupts
7312 ± 6% +15.9% 8477 ± 3% interrupts.CPU100.RES:Rescheduling_interrupts
7370 ± 3% +13.2% 8346 ± 3% interrupts.CPU101.RES:Rescheduling_interrupts
7551 ± 5% +11.7% 8436 ± 4% interrupts.CPU102.RES:Rescheduling_interrupts
7461 ± 4% +14.6% 8553 ± 4% interrupts.CPU103.RES:Rescheduling_interrupts
7555 ± 5% +12.8% 8521 ± 2% interrupts.CPU104.RES:Rescheduling_interrupts
7326 ± 6% +17.0% 8571 ± 2% interrupts.CPU105.RES:Rescheduling_interrupts
7372 ± 5% +16.1% 8561 ± 2% interrupts.CPU106.RES:Rescheduling_interrupts
7245 ± 6% +14.3% 8278 ± 4% interrupts.CPU107.RES:Rescheduling_interrupts
7328 ± 6% +15.6% 8469 ± 5% interrupts.CPU108.RES:Rescheduling_interrupts
7326 ± 6% +13.5% 8316 interrupts.CPU109.RES:Rescheduling_interrupts
7623 ± 6% +11.6% 8505 ± 5% interrupts.CPU11.RES:Rescheduling_interrupts
7355 ± 4% +14.8% 8442 ± 3% interrupts.CPU110.RES:Rescheduling_interrupts
7293 ± 2% +13.8% 8298 ± 5% interrupts.CPU112.RES:Rescheduling_interrupts
7336 ± 5% +15.4% 8469 ± 4% interrupts.CPU113.RES:Rescheduling_interrupts
7324 ± 5% +16.8% 8555 ± 4% interrupts.CPU114.RES:Rescheduling_interrupts
7480 ± 5% +14.2% 8538 ± 5% interrupts.CPU115.RES:Rescheduling_interrupts
7346 ± 4% +15.2% 8461 ± 2% interrupts.CPU116.RES:Rescheduling_interrupts
7307 ± 5% +15.5% 8438 ± 3% interrupts.CPU117.RES:Rescheduling_interrupts
7210 ± 4% +18.0% 8507 ± 5% interrupts.CPU118.RES:Rescheduling_interrupts
7541 ± 4% +15.9% 8743 ± 6% interrupts.CPU12.RES:Rescheduling_interrupts
7432 ± 4% +14.8% 8530 ± 4% interrupts.CPU120.RES:Rescheduling_interrupts
7335 ± 3% +16.0% 8510 ± 4% interrupts.CPU121.RES:Rescheduling_interrupts
7388 ± 5% +13.2% 8366 ± 4% interrupts.CPU122.RES:Rescheduling_interrupts
7204 ± 6% +19.2% 8587 ± 4% interrupts.CPU123.RES:Rescheduling_interrupts
7223 ± 5% +18.0% 8522 ± 6% interrupts.CPU124.RES:Rescheduling_interrupts
7323 ± 3% +14.7% 8403 ± 5% interrupts.CPU125.RES:Rescheduling_interrupts
7208 ± 5% +16.9% 8429 ± 4% interrupts.CPU126.RES:Rescheduling_interrupts
7282 ± 5% +16.1% 8451 ± 5% interrupts.CPU127.RES:Rescheduling_interrupts
7287 ± 4% +16.7% 8502 ± 3% interrupts.CPU128.RES:Rescheduling_interrupts
7258 ± 3% +16.8% 8478 ± 5% interrupts.CPU129.RES:Rescheduling_interrupts
7597 ± 3% +15.1% 8748 ± 3% interrupts.CPU13.RES:Rescheduling_interrupts
7303 ± 5% +17.6% 8592 ± 5% interrupts.CPU130.RES:Rescheduling_interrupts
7210 ± 6% +16.5% 8402 ± 6% interrupts.CPU131.RES:Rescheduling_interrupts
7399 ± 5% +12.3% 8310 ± 3% interrupts.CPU132.RES:Rescheduling_interrupts
7234 ± 4% +15.6% 8364 ± 3% interrupts.CPU133.RES:Rescheduling_interrupts
7301 ± 4% +18.4% 8641 ± 4% interrupts.CPU134.RES:Rescheduling_interrupts
7273 ± 4% +16.0% 8437 ± 4% interrupts.CPU135.RES:Rescheduling_interrupts
7327 ± 4% +14.3% 8378 ± 5% interrupts.CPU136.RES:Rescheduling_interrupts
7125 ± 4% +20.3% 8569 ± 5% interrupts.CPU137.RES:Rescheduling_interrupts
7273 ± 5% +15.6% 8408 ± 5% interrupts.CPU138.RES:Rescheduling_interrupts
7210 ± 4% +17.0% 8439 ± 4% interrupts.CPU139.RES:Rescheduling_interrupts
7576 ± 5% +15.8% 8773 ± 4% interrupts.CPU14.RES:Rescheduling_interrupts
7290 ± 6% +13.8% 8297 ± 4% interrupts.CPU140.RES:Rescheduling_interrupts
7215 ± 4% +15.7% 8349 ± 5% interrupts.CPU141.RES:Rescheduling_interrupts
7228 ± 6% +16.8% 8442 ± 3% interrupts.CPU142.RES:Rescheduling_interrupts
7341 ± 5% +14.1% 8380 ± 4% interrupts.CPU143.RES:Rescheduling_interrupts
7346 ± 5% +14.0% 8377 ± 4% interrupts.CPU145.RES:Rescheduling_interrupts
7529 ± 4% +13.0% 8505 ± 6% interrupts.CPU146.RES:Rescheduling_interrupts
7425 ± 4% +11.9% 8308 ± 3% interrupts.CPU147.RES:Rescheduling_interrupts
7367 ± 4% +16.8% 8603 ± 3% interrupts.CPU149.RES:Rescheduling_interrupts
7673 ± 4% +11.7% 8571 ± 4% interrupts.CPU15.RES:Rescheduling_interrupts
7316 ± 4% +13.1% 8276 ± 3% interrupts.CPU150.RES:Rescheduling_interrupts
7297 ± 5% +16.1% 8471 ± 3% interrupts.CPU151.RES:Rescheduling_interrupts
7285 ± 4% +16.6% 8494 ± 5% interrupts.CPU152.RES:Rescheduling_interrupts
7339 ± 6% +14.9% 8434 ± 3% interrupts.CPU153.RES:Rescheduling_interrupts
7334 ± 4% +13.4% 8320 ± 4% interrupts.CPU154.RES:Rescheduling_interrupts
7381 ± 5% +11.9% 8258 ± 4% interrupts.CPU155.RES:Rescheduling_interrupts
7317 ± 5% +14.8% 8403 ± 6% interrupts.CPU156.RES:Rescheduling_interrupts
7392 ± 5% +12.9% 8346 ± 5% interrupts.CPU157.RES:Rescheduling_interrupts
7464 ± 5% +10.3% 8233 ± 2% interrupts.CPU158.RES:Rescheduling_interrupts
7249 ± 5% +15.6% 8377 ± 4% interrupts.CPU159.RES:Rescheduling_interrupts
7581 ± 4% +14.7% 8696 ± 5% interrupts.CPU16.RES:Rescheduling_interrupts
7439 ± 5% +13.9% 8473 ± 6% interrupts.CPU161.RES:Rescheduling_interrupts
7142 ± 5% +15.0% 8212 ± 5% interrupts.CPU162.RES:Rescheduling_interrupts
7290 ± 5% +12.9% 8232 ± 4% interrupts.CPU163.RES:Rescheduling_interrupts
7246 ± 5% +15.8% 8391 ± 4% interrupts.CPU164.RES:Rescheduling_interrupts
7418 ± 4% +10.2% 8177 ± 4% interrupts.CPU165.RES:Rescheduling_interrupts
7342 ± 2% +13.7% 8351 ± 5% interrupts.CPU166.RES:Rescheduling_interrupts
7161 ± 3% +17.0% 8376 ± 4% interrupts.CPU167.RES:Rescheduling_interrupts
7278 ± 5% +15.4% 8403 ± 3% interrupts.CPU168.RES:Rescheduling_interrupts
7237 ± 2% +17.9% 8532 ± 5% interrupts.CPU169.RES:Rescheduling_interrupts
7551 ± 2% +16.9% 8830 ± 10% interrupts.CPU17.RES:Rescheduling_interrupts
7344 ± 2% +14.2% 8384 ± 5% interrupts.CPU170.RES:Rescheduling_interrupts
7381 ± 4% +15.2% 8506 ± 5% interrupts.CPU171.RES:Rescheduling_interrupts
7287 ± 3% +17.3% 8545 ± 5% interrupts.CPU172.RES:Rescheduling_interrupts
7308 ± 4% +15.5% 8438 ± 4% interrupts.CPU175.RES:Rescheduling_interrupts
7414 ± 2% +14.2% 8470 ± 5% interrupts.CPU176.RES:Rescheduling_interrupts
7348 ± 6% +14.5% 8411 ± 5% interrupts.CPU178.RES:Rescheduling_interrupts
7246 ± 4% +15.9% 8400 ± 6% interrupts.CPU179.RES:Rescheduling_interrupts
7569 ± 4% +14.3% 8651 ± 4% interrupts.CPU18.RES:Rescheduling_interrupts
7351 ± 4% +14.4% 8410 ± 5% interrupts.CPU180.RES:Rescheduling_interrupts
7404 ± 2% +13.6% 8413 ± 4% interrupts.CPU181.RES:Rescheduling_interrupts
7262 ± 4% +14.3% 8300 ± 4% interrupts.CPU182.RES:Rescheduling_interrupts
7340 ± 5% +13.0% 8294 ± 5% interrupts.CPU183.RES:Rescheduling_interrupts
7391 ± 4% +13.1% 8356 ± 5% interrupts.CPU184.RES:Rescheduling_interrupts
7276 ± 4% +15.0% 8365 ± 6% interrupts.CPU185.RES:Rescheduling_interrupts
7401 ± 4% +14.4% 8468 ± 3% interrupts.CPU186.RES:Rescheduling_interrupts
7375 ± 3% +11.7% 8238 ± 5% interrupts.CPU187.RES:Rescheduling_interrupts
7245 ± 5% +16.5% 8439 ± 6% interrupts.CPU188.RES:Rescheduling_interrupts
7435 ± 3% +14.1% 8484 ± 7% interrupts.CPU189.RES:Rescheduling_interrupts
7190 ± 4% +16.4% 8370 ± 5% interrupts.CPU190.RES:Rescheduling_interrupts
7179 ± 3% +15.2% 8272 ± 3% interrupts.CPU191.RES:Rescheduling_interrupts
7565 ± 5% +15.4% 8726 ± 3% interrupts.CPU20.RES:Rescheduling_interrupts
7510 ± 2% +14.9% 8632 ± 4% interrupts.CPU21.RES:Rescheduling_interrupts
7485 ± 6% +15.7% 8658 ± 3% interrupts.CPU22.RES:Rescheduling_interrupts
7561 ± 4% +11.4% 8423 ± 3% interrupts.CPU23.RES:Rescheduling_interrupts
7396 ± 3% +15.8% 8566 ± 3% interrupts.CPU24.RES:Rescheduling_interrupts
7488 ± 3% +17.9% 8826 ± 7% interrupts.CPU26.RES:Rescheduling_interrupts
7409 ± 4% +15.2% 8532 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
7439 ± 5% +16.6% 8670 ± 4% interrupts.CPU28.RES:Rescheduling_interrupts
7442 ± 2% +15.4% 8585 ± 2% interrupts.CPU29.RES:Rescheduling_interrupts
7617 ± 5% +23.2% 9382 ± 11% interrupts.CPU3.RES:Rescheduling_interrupts
7410 ± 3% +16.8% 8653 ± 4% interrupts.CPU30.RES:Rescheduling_interrupts
7500 ± 4% +14.2% 8562 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
7400 ± 4% +18.3% 8758 ± 4% interrupts.CPU32.RES:Rescheduling_interrupts
7360 ± 4% +16.8% 8599 ± 5% interrupts.CPU33.RES:Rescheduling_interrupts
7358 ± 5% +19.9% 8819 ± 4% interrupts.CPU34.RES:Rescheduling_interrupts
7449 ± 5% +18.8% 8853 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
7461 ± 3% +15.4% 8608 ± 4% interrupts.CPU36.RES:Rescheduling_interrupts
7381 ± 3% +18.1% 8716 ± 3% interrupts.CPU37.RES:Rescheduling_interrupts
7419 ± 5% +15.4% 8563 ± 3% interrupts.CPU38.RES:Rescheduling_interrupts
7553 ± 5% +14.4% 8641 ± 4% interrupts.CPU39.RES:Rescheduling_interrupts
7624 ± 4% +14.5% 8731 ± 3% interrupts.CPU4.RES:Rescheduling_interrupts
7392 ± 3% +16.1% 8582 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
7434 ± 5% +16.0% 8621 ± 2% interrupts.CPU41.RES:Rescheduling_interrupts
7376 ± 4% +18.2% 8716 ± 5% interrupts.CPU42.RES:Rescheduling_interrupts
7480 ± 5% +15.4% 8634 ± 4% interrupts.CPU43.RES:Rescheduling_interrupts
7445 ± 2% +16.5% 8676 ± 5% interrupts.CPU44.RES:Rescheduling_interrupts
7407 ± 4% +16.3% 8615 ± 2% interrupts.CPU45.RES:Rescheduling_interrupts
7446 ± 3% +16.2% 8653 ± 3% interrupts.CPU46.RES:Rescheduling_interrupts
7422 ± 5% +17.0% 8683 ± 3% interrupts.CPU47.RES:Rescheduling_interrupts
7484 ± 6% +15.2% 8622 ± 5% interrupts.CPU48.RES:Rescheduling_interrupts
7532 ± 2% +16.5% 8778 ± 4% interrupts.CPU50.RES:Rescheduling_interrupts
7413 ± 5% +17.6% 8721 ± 2% interrupts.CPU52.RES:Rescheduling_interrupts
7586 ± 4% +14.4% 8677 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
7557 ± 5% +14.8% 8678 ± 7% interrupts.CPU54.RES:Rescheduling_interrupts
7378 ± 4% +15.7% 8540 ± 4% interrupts.CPU57.RES:Rescheduling_interrupts
7361 ± 5% +17.5% 8650 ± 3% interrupts.CPU58.RES:Rescheduling_interrupts
7636 ± 6% +13.0% 8629 ± 3% interrupts.CPU59.RES:Rescheduling_interrupts
7555 ± 4% +17.2% 8853 ± 3% interrupts.CPU6.RES:Rescheduling_interrupts
7525 ± 4% +15.5% 8694 ± 5% interrupts.CPU60.RES:Rescheduling_interrupts
7623 ± 4% +14.0% 8690 ± 4% interrupts.CPU61.RES:Rescheduling_interrupts
7596 ± 4% +12.0% 8507 ± 5% interrupts.CPU62.RES:Rescheduling_interrupts
7604 ± 5% +12.8% 8580 ± 4% interrupts.CPU63.RES:Rescheduling_interrupts
7640 ± 4% +12.7% 8610 ± 3% interrupts.CPU64.RES:Rescheduling_interrupts
7552 ± 6% +15.5% 8720 ± 3% interrupts.CPU65.RES:Rescheduling_interrupts
7455 ± 6% +17.2% 8735 ± 4% interrupts.CPU66.RES:Rescheduling_interrupts
7612 ± 4% +11.6% 8497 ± 3% interrupts.CPU68.RES:Rescheduling_interrupts
7476 ± 5% +15.2% 8613 ± 5% interrupts.CPU69.RES:Rescheduling_interrupts
7743 ± 5% +13.3% 8775 ± 5% interrupts.CPU7.RES:Rescheduling_interrupts
7427 ± 3% +16.1% 8624 ± 4% interrupts.CPU70.RES:Rescheduling_interrupts
7468 ± 4% +14.5% 8553 ± 6% interrupts.CPU71.RES:Rescheduling_interrupts
7724 ± 4% +9.3% 8445 ± 4% interrupts.CPU72.RES:Rescheduling_interrupts
7507 ± 4% +15.4% 8665 ± 4% interrupts.CPU73.RES:Rescheduling_interrupts
7493 ± 6% +15.8% 8676 ± 6% interrupts.CPU74.RES:Rescheduling_interrupts
7704 ± 5% +9.6% 8445 ± 5% interrupts.CPU75.RES:Rescheduling_interrupts
7419 ± 5% +15.2% 8548 ± 5% interrupts.CPU76.RES:Rescheduling_interrupts
7684 ± 4% +11.5% 8564 ± 5% interrupts.CPU77.RES:Rescheduling_interrupts
7405 ± 4% +14.3% 8467 ± 3% interrupts.CPU78.RES:Rescheduling_interrupts
7532 ± 5% +13.9% 8577 ± 4% interrupts.CPU79.RES:Rescheduling_interrupts
7713 ± 2% +14.4% 8824 ± 4% interrupts.CPU8.RES:Rescheduling_interrupts
7535 ± 3% +16.3% 8760 ± 5% interrupts.CPU80.RES:Rescheduling_interrupts
7511 ± 4% +13.2% 8504 ± 4% interrupts.CPU82.RES:Rescheduling_interrupts
7497 ± 5% +12.8% 8457 ± 6% interrupts.CPU83.RES:Rescheduling_interrupts
7411 ± 6% +13.7% 8423 ± 5% interrupts.CPU84.RES:Rescheduling_interrupts
7374 ± 5% +11.6% 8233 ± 6% interrupts.CPU85.RES:Rescheduling_interrupts
7335 ± 4% +17.3% 8604 ± 3% interrupts.CPU86.RES:Rescheduling_interrupts
7537 ± 4% +12.7% 8497 ± 4% interrupts.CPU87.RES:Rescheduling_interrupts
7382 ± 6% +14.0% 8415 ± 4% interrupts.CPU88.RES:Rescheduling_interrupts
7417 ± 5% +13.6% 8429 ± 5% interrupts.CPU89.RES:Rescheduling_interrupts
7555 ± 3% +15.6% 8732 ± 4% interrupts.CPU9.RES:Rescheduling_interrupts
7421 ± 3% +12.1% 8319 ± 4% interrupts.CPU90.RES:Rescheduling_interrupts
7409 ± 5% +14.7% 8495 ± 5% interrupts.CPU91.RES:Rescheduling_interrupts
7335 ± 4% +12.4% 8242 ± 4% interrupts.CPU92.RES:Rescheduling_interrupts
7457 ± 4% +14.8% 8563 ± 4% interrupts.CPU93.RES:Rescheduling_interrupts
7352 ± 4% +11.9% 8228 ± 4% interrupts.CPU94.RES:Rescheduling_interrupts
10504 ± 3% +15.8% 12165 ± 4% interrupts.CPU95.RES:Rescheduling_interrupts
7427 ± 3% +13.0% 8394 ± 5% interrupts.CPU96.RES:Rescheduling_interrupts
7502 ± 3% +12.9% 8471 ± 4% interrupts.CPU97.RES:Rescheduling_interrupts
7539 ± 4% +12.7% 8493 ± 3% interrupts.CPU98.RES:Rescheduling_interrupts
1432373 ± 3% +14.5% 1640585 ± 3% interrupts.RES:Rescheduling_interrupts
135712 ± 3% +12.9% 153198 ± 3% softirqs.CPU0.RCU
114638 ± 2% +9.8% 125890 ± 3% softirqs.CPU0.TIMER
135086 ± 4% +13.8% 153754 ± 3% softirqs.CPU1.RCU
133934 ± 3% +14.2% 152902 ± 3% softirqs.CPU100.RCU
134695 ± 4% +13.5% 152928 ± 3% softirqs.CPU101.RCU
109307 ± 3% +8.9% 119064 ± 3% softirqs.CPU101.TIMER
134784 ± 4% +13.8% 153340 ± 3% softirqs.CPU102.RCU
109197 ± 3% +10.0% 120092 ± 2% softirqs.CPU102.TIMER
134915 ± 4% +13.6% 153291 ± 3% softirqs.CPU103.RCU
109324 ± 3% +9.4% 119572 ± 2% softirqs.CPU103.TIMER
134440 ± 3% +13.8% 152989 ± 3% softirqs.CPU104.RCU
109025 ± 2% +9.5% 119342 ± 3% softirqs.CPU104.TIMER
134989 ± 3% +13.3% 152912 ± 3% softirqs.CPU105.RCU
133957 ± 3% +14.1% 152811 ± 4% softirqs.CPU106.RCU
109121 ± 2% +9.4% 119357 ± 3% softirqs.CPU106.TIMER
133867 ± 3% +14.9% 153778 ± 3% softirqs.CPU107.RCU
133941 ± 3% +14.2% 152933 ± 3% softirqs.CPU108.RCU
108808 ± 2% +9.9% 119620 ± 3% softirqs.CPU108.TIMER
134294 ± 3% +13.8% 152828 ± 3% softirqs.CPU109.RCU
108942 ± 2% +9.7% 119493 ± 3% softirqs.CPU109.TIMER
135078 ± 4% +14.7% 154918 ± 3% softirqs.CPU11.RCU
111201 ± 4% +8.8% 121012 ± 4% softirqs.CPU11.TIMER
134040 ± 4% +14.4% 153362 ± 3% softirqs.CPU110.RCU
109312 ± 2% +9.7% 119935 ± 3% softirqs.CPU110.TIMER
134435 ± 3% +14.1% 153379 ± 3% softirqs.CPU111.RCU
109990 ± 2% +9.5% 120452 ± 3% softirqs.CPU111.TIMER
134575 ± 3% +14.4% 153949 ± 3% softirqs.CPU112.RCU
109071 ± 2% +16.9% 127528 ± 16% softirqs.CPU112.TIMER
134426 ± 4% +13.5% 152561 ± 3% softirqs.CPU113.RCU
108951 ± 3% +9.3% 119108 ± 3% softirqs.CPU113.TIMER
134164 ± 3% +13.9% 152852 ± 3% softirqs.CPU114.RCU
109006 ± 2% +9.8% 119700 ± 3% softirqs.CPU114.TIMER
134582 ± 3% +13.6% 152939 ± 3% softirqs.CPU115.RCU
109630 ± 2% +9.3% 119867 ± 3% softirqs.CPU115.TIMER
134179 ± 3% +13.8% 152727 ± 3% softirqs.CPU116.RCU
109584 ± 2% +9.8% 120288 ± 3% softirqs.CPU116.TIMER
134264 ± 3% +13.9% 152864 ± 3% softirqs.CPU117.RCU
108520 ± 2% +9.6% 118912 ± 3% softirqs.CPU117.TIMER
134170 ± 3% +13.9% 152837 ± 3% softirqs.CPU118.RCU
108312 ± 2% +9.8% 118936 ± 3% softirqs.CPU118.TIMER
134420 ± 3% +13.6% 152758 ± 3% softirqs.CPU119.RCU
108878 ± 2% +9.5% 119222 ± 3% softirqs.CPU119.TIMER
135512 ± 3% +13.2% 153456 ± 3% softirqs.CPU12.RCU
109593 ± 2% +9.7% 120187 ± 3% softirqs.CPU12.TIMER
134181 ± 3% +14.4% 153507 ± 4% softirqs.CPU120.RCU
134332 ± 3% +14.7% 154127 ± 3% softirqs.CPU121.RCU
133849 ± 3% +14.1% 152764 ± 3% softirqs.CPU122.RCU
134914 ± 4% +13.1% 152621 ± 3% softirqs.CPU123.RCU
134696 ± 3% +13.6% 153022 ± 3% softirqs.CPU124.RCU
135402 ± 3% +12.6% 152472 ± 3% softirqs.CPU125.RCU
135007 ± 4% +13.4% 153160 ± 3% softirqs.CPU126.RCU
133854 ± 4% +14.5% 153218 ± 3% softirqs.CPU127.RCU
134460 ± 3% +13.8% 153050 ± 3% softirqs.CPU128.RCU
134201 ± 3% +13.9% 152804 ± 3% softirqs.CPU129.RCU
135115 ± 3% +13.8% 153763 ± 3% softirqs.CPU13.RCU
109694 ± 2% +9.5% 120158 ± 3% softirqs.CPU13.TIMER
134142 ± 3% +13.8% 152658 ± 3% softirqs.CPU130.RCU
133403 ± 4% +14.4% 152588 ± 3% softirqs.CPU131.RCU
134376 ± 5% +13.8% 152949 ± 3% softirqs.CPU132.RCU
133792 ± 4% +14.4% 153116 ± 3% softirqs.CPU133.RCU
133926 ± 3% +14.5% 153290 ± 3% softirqs.CPU134.RCU
133781 ± 4% +14.2% 152827 ± 3% softirqs.CPU135.RCU
133615 ± 4% +14.3% 152710 ± 3% softirqs.CPU136.RCU
133861 ± 3% +14.0% 152616 ± 3% softirqs.CPU137.RCU
133358 ± 3% +14.2% 152236 ± 3% softirqs.CPU138.RCU
133429 ± 3% +14.7% 153030 ± 3% softirqs.CPU139.RCU
134852 ± 4% +13.4% 152917 ± 3% softirqs.CPU14.RCU
133888 ± 3% +13.5% 152019 ± 3% softirqs.CPU140.RCU
133469 ± 3% +14.0% 152198 ± 3% softirqs.CPU141.RCU
133562 ± 3% +14.6% 153003 ± 3% softirqs.CPU142.RCU
133249 ± 4% +14.7% 152788 ± 3% softirqs.CPU143.RCU
133480 ± 3% +14.2% 152446 ± 3% softirqs.CPU144.RCU
108241 ± 4% +14.1% 123556 ± 5% softirqs.CPU144.TIMER
133390 ± 3% +14.4% 152587 ± 3% softirqs.CPU145.RCU
107776 ± 4% +16.2% 125272 ± 6% softirqs.CPU145.TIMER
133562 ± 4% +14.9% 153417 ± 3% softirqs.CPU146.RCU
107850 ± 4% +22.1% 131638 ± 16% softirqs.CPU146.TIMER
133959 ± 4% +14.3% 153117 ± 3% softirqs.CPU147.RCU
108435 ± 4% +14.5% 124134 ± 5% softirqs.CPU147.TIMER
133766 ± 3% +13.9% 152340 ± 3% softirqs.CPU148.RCU
107808 ± 3% +14.2% 123118 ± 4% softirqs.CPU148.TIMER
133445 ± 3% +14.0% 152174 ± 3% softirqs.CPU149.RCU
107683 ± 3% +17.2% 126171 ± 10% softirqs.CPU149.TIMER
134684 ± 3% +14.1% 153726 ± 2% softirqs.CPU15.RCU
133146 ± 4% +14.1% 151963 ± 3% softirqs.CPU150.RCU
107449 ± 3% +14.5% 123038 ± 5% softirqs.CPU150.TIMER
133561 ± 4% +13.9% 152192 ± 3% softirqs.CPU151.RCU
108007 ± 4% +13.8% 122913 ± 5% softirqs.CPU151.TIMER
133723 ± 4% +14.0% 152426 ± 3% softirqs.CPU152.RCU
107782 ± 4% +14.4% 123315 ± 5% softirqs.CPU152.TIMER
134077 ± 3% +13.7% 152383 ± 3% softirqs.CPU153.RCU
107366 ± 3% +14.3% 122768 ± 5% softirqs.CPU153.TIMER
133473 ± 4% +14.1% 152260 ± 3% softirqs.CPU154.RCU
107435 ± 3% +17.3% 125982 ± 9% softirqs.CPU154.TIMER
133274 ± 4% +14.4% 152411 ± 3% softirqs.CPU155.RCU
107162 ± 3% +16.6% 124900 ± 6% softirqs.CPU155.TIMER
133723 ± 4% +13.8% 152182 ± 3% softirqs.CPU156.RCU
107539 ± 3% +14.3% 122875 ± 5% softirqs.CPU156.TIMER
133588 ± 4% +14.1% 152366 ± 3% softirqs.CPU157.RCU
107669 ± 3% +14.3% 123117 ± 5% softirqs.CPU157.TIMER
133842 ± 3% +13.9% 152394 ± 3% softirqs.CPU158.RCU
108270 ± 3% +14.2% 123615 ± 5% softirqs.CPU158.TIMER
133732 ± 3% +14.4% 153005 ± 3% softirqs.CPU159.RCU
107811 ± 3% +14.4% 123309 ± 4% softirqs.CPU159.TIMER
134902 ± 3% +14.0% 153757 ± 3% softirqs.CPU16.RCU
134086 ± 4% +13.7% 152432 ± 3% softirqs.CPU160.RCU
133629 ± 3% +15.1% 153806 ± 4% softirqs.CPU161.RCU
107629 ± 3% +15.1% 123918 ± 6% softirqs.CPU161.TIMER
133705 ± 4% +14.0% 152441 ± 3% softirqs.CPU162.RCU
107590 ± 3% +14.6% 123323 ± 5% softirqs.CPU162.TIMER
133507 ± 3% +14.1% 152281 ± 3% softirqs.CPU163.RCU
107949 ± 3% +14.5% 123559 ± 5% softirqs.CPU163.TIMER
133900 ± 3% +14.1% 152818 ± 3% softirqs.CPU164.RCU
133582 ± 3% +14.3% 152645 ± 3% softirqs.CPU165.RCU
107077 ± 3% +14.1% 122135 ± 5% softirqs.CPU165.TIMER
133964 ± 3% +13.6% 152233 ± 3% softirqs.CPU166.RCU
107083 ± 3% +14.6% 122697 ± 5% softirqs.CPU166.TIMER
133858 ± 3% +13.4% 151813 ± 3% softirqs.CPU167.RCU
133894 ± 3% +14.2% 152917 ± 4% softirqs.CPU168.RCU
115262 ± 4% +11.3% 128273 ± 3% softirqs.CPU168.TIMER
133354 ± 3% +14.6% 152857 ± 3% softirqs.CPU169.RCU
115356 ± 4% +10.7% 127705 ± 3% softirqs.CPU169.TIMER
135401 ± 4% +13.4% 153594 ± 4% softirqs.CPU17.RCU
109951 ± 2% +9.0% 119834 ± 3% softirqs.CPU17.TIMER
134371 ± 3% +13.6% 152595 ± 3% softirqs.CPU170.RCU
133674 ± 3% +13.9% 152274 ± 3% softirqs.CPU171.RCU
116126 ± 4% +10.2% 127943 ± 3% softirqs.CPU171.TIMER
133578 ± 4% +13.7% 151876 ± 3% softirqs.CPU172.RCU
115999 ± 4% +10.1% 127703 ± 3% softirqs.CPU172.TIMER
133366 ± 3% +13.8% 151826 ± 3% softirqs.CPU173.RCU
115864 ± 4% +10.6% 128113 ± 2% softirqs.CPU173.TIMER
133370 ± 3% +14.4% 152517 ± 3% softirqs.CPU174.RCU
115636 ± 4% +10.3% 127583 ± 3% softirqs.CPU174.TIMER
133065 ± 3% +14.9% 152858 ± 2% softirqs.CPU175.RCU
115610 ± 4% +11.1% 128488 ± 2% softirqs.CPU175.TIMER
133854 ± 3% +13.8% 152277 ± 3% softirqs.CPU176.RCU
115663 ± 4% +10.3% 127556 ± 3% softirqs.CPU176.TIMER
133885 ± 3% +13.9% 152466 ± 3% softirqs.CPU177.RCU
115583 ± 4% +10.1% 127237 ± 3% softirqs.CPU177.TIMER
133626 ± 3% +14.1% 152524 ± 3% softirqs.CPU178.RCU
115499 ± 4% +13.0% 130466 ± 5% softirqs.CPU178.TIMER
133822 ± 3% +13.9% 152455 ± 3% softirqs.CPU179.RCU
115860 ± 4% +10.5% 128011 ± 3% softirqs.CPU179.TIMER
134797 ± 3% +13.8% 153464 ± 3% softirqs.CPU18.RCU
109558 ± 2% +9.9% 120433 ± 3% softirqs.CPU18.TIMER
133196 ± 3% +14.5% 152516 ± 3% softirqs.CPU180.RCU
115468 ± 4% +10.5% 127646 ± 3% softirqs.CPU180.TIMER
133363 ± 3% +14.4% 152521 ± 3% softirqs.CPU181.RCU
114985 ± 4% +10.7% 127333 ± 2% softirqs.CPU181.TIMER
133105 ± 3% +14.5% 152388 ± 3% softirqs.CPU182.RCU
115503 ± 4% +10.3% 127371 ± 2% softirqs.CPU182.TIMER
133262 ± 3% +14.2% 152227 ± 3% softirqs.CPU183.RCU
115923 ± 4% +10.2% 127788 ± 3% softirqs.CPU183.TIMER
133353 ± 3% +14.5% 152736 ± 3% softirqs.CPU184.RCU
114979 ± 4% +10.8% 127392 ± 2% softirqs.CPU184.TIMER
133272 ± 4% +14.2% 152161 ± 3% softirqs.CPU185.RCU
114796 ± 4% +10.6% 126997 ± 3% softirqs.CPU185.TIMER
133099 ± 3% +14.3% 152161 ± 3% softirqs.CPU186.RCU
115256 ± 3% +10.3% 127108 ± 3% softirqs.CPU186.TIMER
133141 ± 3% +14.5% 152419 ± 3% softirqs.CPU187.RCU
115232 ± 4% +10.4% 127261 ± 2% softirqs.CPU187.TIMER
132941 ± 3% +14.9% 152717 ± 3% softirqs.CPU188.RCU
115683 ± 4% +10.6% 127917 ± 3% softirqs.CPU188.TIMER
133093 ± 3% +14.3% 152103 ± 3% softirqs.CPU189.RCU
115121 ± 3% +10.7% 127448 ± 2% softirqs.CPU189.TIMER
135315 ± 3% +13.5% 153560 ± 4% softirqs.CPU19.RCU
110300 ± 2% +9.4% 120632 ± 3% softirqs.CPU19.TIMER
133551 ± 3% +14.1% 152410 ± 3% softirqs.CPU190.RCU
115244 ± 4% +10.3% 127113 ± 2% softirqs.CPU190.TIMER
130837 ± 3% +14.5% 149778 ± 3% softirqs.CPU191.RCU
113894 ± 4% +10.6% 125947 ± 3% softirqs.CPU191.TIMER
134918 ± 4% +14.0% 153822 ± 3% softirqs.CPU2.RCU
135106 ± 3% +13.8% 153799 ± 3% softirqs.CPU20.RCU
110293 ± 2% +9.5% 120807 ± 3% softirqs.CPU20.TIMER
134667 ± 3% +14.3% 153926 ± 3% softirqs.CPU21.RCU
108946 ± 2% +9.6% 119434 ± 3% softirqs.CPU21.TIMER
134647 ± 4% +13.9% 153399 ± 3% softirqs.CPU22.RCU
134457 ± 3% +13.9% 153092 ± 3% softirqs.CPU23.RCU
109040 ± 2% +9.5% 119382 ± 3% softirqs.CPU23.TIMER
135907 ± 3% +13.2% 153794 ± 3% softirqs.CPU24.RCU
134641 ± 3% +14.1% 153599 ± 3% softirqs.CPU25.RCU
139080 ± 5% +14.2% 158844 ± 4% softirqs.CPU26.RCU
134935 ± 3% +13.9% 153741 ± 3% softirqs.CPU27.RCU
135383 ± 3% +13.4% 153563 ± 3% softirqs.CPU28.RCU
134635 ± 4% +14.5% 154126 ± 3% softirqs.CPU29.RCU
134606 ± 3% +14.3% 153867 ± 3% softirqs.CPU3.RCU
134644 ± 3% +14.3% 153919 ± 4% softirqs.CPU30.RCU
134501 ± 3% +14.1% 153503 ± 3% softirqs.CPU31.RCU
134470 ± 3% +14.3% 153640 ± 3% softirqs.CPU32.RCU
134430 ± 3% +14.2% 153548 ± 3% softirqs.CPU33.RCU
134322 ± 4% +13.9% 153006 ± 3% softirqs.CPU34.RCU
134761 ± 3% +13.4% 152813 ± 3% softirqs.CPU35.RCU
134900 ± 4% +13.4% 153036 ± 3% softirqs.CPU36.RCU
133998 ± 4% +14.9% 153954 ± 3% softirqs.CPU37.RCU
133910 ± 4% +14.7% 153652 ± 3% softirqs.CPU38.RCU
134435 ± 3% +14.2% 153583 ± 3% softirqs.CPU39.RCU
135327 ± 3% +14.4% 154792 ± 3% softirqs.CPU4.RCU
134394 ± 4% +14.3% 153571 ± 3% softirqs.CPU40.RCU
134144 ± 3% +14.6% 153786 ± 3% softirqs.CPU41.RCU
134280 ± 4% +14.0% 153085 ± 3% softirqs.CPU42.RCU
134032 ± 3% +14.5% 153452 ± 3% softirqs.CPU43.RCU
134114 ± 3% +14.1% 153003 ± 3% softirqs.CPU44.RCU
133925 ± 3% +14.2% 152958 ± 3% softirqs.CPU45.RCU
134092 ± 3% +14.2% 153134 ± 3% softirqs.CPU46.RCU
134447 ± 3% +13.5% 152553 ± 3% softirqs.CPU47.RCU
134462 ± 3% +14.2% 153526 ± 3% softirqs.CPU48.RCU
134272 ± 4% +14.6% 153858 ± 3% softirqs.CPU49.RCU
108678 ± 3% +21.0% 131480 ± 14% softirqs.CPU49.TIMER
134407 ± 3% +14.8% 154234 ± 3% softirqs.CPU5.RCU
109551 ± 2% +10.0% 120526 ± 4% softirqs.CPU5.TIMER
134199 ± 3% +13.6% 152493 ± 3% softirqs.CPU50.RCU
108813 ± 3% +16.9% 127221 ± 8% softirqs.CPU50.TIMER
134735 ± 3% +13.8% 153285 ± 3% softirqs.CPU51.RCU
109673 ± 3% +13.5% 124484 ± 5% softirqs.CPU51.TIMER
133831 ± 3% +15.0% 153841 ± 3% softirqs.CPU52.RCU
108270 ± 3% +14.5% 123963 ± 5% softirqs.CPU52.TIMER
133604 ± 3% +14.8% 153353 ± 3% softirqs.CPU53.RCU
108210 ± 3% +21.6% 131601 ± 17% softirqs.CPU53.TIMER
134222 ± 3% +14.0% 152955 ± 3% softirqs.CPU54.RCU
108651 ± 3% +13.8% 123681 ± 5% softirqs.CPU54.TIMER
133710 ± 3% +14.5% 153107 ± 3% softirqs.CPU55.RCU
108627 ± 3% +13.9% 123768 ± 5% softirqs.CPU55.TIMER
133851 ± 3% +14.4% 153064 ± 3% softirqs.CPU56.RCU
108194 ± 3% +14.6% 123974 ± 5% softirqs.CPU56.TIMER
133813 ± 4% +14.3% 152908 ± 3% softirqs.CPU57.RCU
108113 ± 4% +14.1% 123310 ± 5% softirqs.CPU57.TIMER
133907 ± 4% +14.6% 153508 ± 3% softirqs.CPU58.RCU
108210 ± 4% +21.3% 131266 ± 17% softirqs.CPU58.TIMER
133998 ± 3% +14.7% 153760 ± 3% softirqs.CPU59.RCU
108178 ± 3% +20.7% 130583 ± 13% softirqs.CPU59.TIMER
135213 ± 3% +13.6% 153623 ± 3% softirqs.CPU6.RCU
133426 ± 4% +14.7% 153041 ± 3% softirqs.CPU60.RCU
108163 ± 3% +14.6% 123926 ± 5% softirqs.CPU60.TIMER
133918 ± 4% +14.5% 153286 ± 3% softirqs.CPU61.RCU
108417 ± 3% +14.7% 124323 ± 4% softirqs.CPU61.TIMER
134579 ± 3% +13.6% 152835 ± 3% softirqs.CPU62.RCU
108879 ± 3% +14.1% 124196 ± 4% softirqs.CPU62.TIMER
134117 ± 3% +14.4% 153418 ± 3% softirqs.CPU63.RCU
108167 ± 3% +14.3% 123689 ± 4% softirqs.CPU63.TIMER
134221 ± 4% +14.2% 153231 ± 3% softirqs.CPU64.RCU
108185 ± 3% +14.5% 123891 ± 4% softirqs.CPU64.TIMER
133921 ± 3% +14.7% 153550 ± 3% softirqs.CPU65.RCU
108024 ± 4% +14.5% 123682 ± 4% softirqs.CPU65.TIMER
134312 ± 4% +14.1% 153294 ± 3% softirqs.CPU66.RCU
108547 ± 3% +13.9% 123685 ± 5% softirqs.CPU66.TIMER
133782 ± 3% +14.4% 152985 ± 3% softirqs.CPU67.RCU
108548 ± 3% +14.4% 124209 ± 5% softirqs.CPU67.TIMER
133894 ± 3% +14.3% 153020 ± 3% softirqs.CPU68.RCU
108918 ± 4% +14.0% 124173 ± 5% softirqs.CPU68.TIMER
133995 ± 4% +14.2% 153073 ± 3% softirqs.CPU69.RCU
107858 ± 3% +14.1% 123035 ± 5% softirqs.CPU69.TIMER
134915 ± 3% +13.3% 152925 ± 3% softirqs.CPU7.RCU
134484 ± 4% +14.5% 153923 ± 3% softirqs.CPU70.RCU
107728 ± 3% +14.9% 123730 ± 4% softirqs.CPU70.TIMER
133785 ± 4% +14.0% 152555 ± 3% softirqs.CPU71.RCU
133984 ± 3% +14.4% 153321 ± 3% softirqs.CPU72.RCU
115702 ± 3% +10.9% 128319 ± 3% softirqs.CPU72.TIMER
134000 ± 3% +14.5% 153461 ± 3% softirqs.CPU73.RCU
115917 ± 4% +10.6% 128242 ± 3% softirqs.CPU73.TIMER
134164 ± 3% +13.9% 152750 ± 3% softirqs.CPU74.RCU
133834 ± 3% +14.4% 153161 ± 3% softirqs.CPU75.RCU
116501 ± 4% +10.5% 128788 ± 3% softirqs.CPU75.TIMER
133974 ± 3% +14.1% 152929 ± 3% softirqs.CPU76.RCU
116454 ± 4% +10.4% 128555 ± 2% softirqs.CPU76.TIMER
134352 ± 3% +13.6% 152650 ± 3% softirqs.CPU77.RCU
116931 ± 3% +10.3% 129015 ± 3% softirqs.CPU77.TIMER
134425 ± 4% +13.8% 152928 ± 3% softirqs.CPU78.RCU
133970 ± 4% +14.2% 153041 ± 3% softirqs.CPU79.RCU
116586 ± 4% +10.4% 128681 ± 3% softirqs.CPU79.TIMER
135164 ± 3% +13.9% 153933 ± 3% softirqs.CPU8.RCU
134670 ± 4% +13.4% 152743 ± 3% softirqs.CPU80.RCU
116667 ± 4% +10.0% 128309 ± 3% softirqs.CPU80.TIMER
134130 ± 3% +14.7% 153891 ± 3% softirqs.CPU81.RCU
116138 ± 4% +10.9% 128848 ± 3% softirqs.CPU81.TIMER
133890 ± 3% +14.4% 153133 ± 3% softirqs.CPU82.RCU
116159 ± 4% +17.0% 135890 ± 12% softirqs.CPU82.TIMER
133927 ± 3% +14.1% 152802 ± 3% softirqs.CPU83.RCU
116766 ± 4% +10.4% 128926 ± 3% softirqs.CPU83.TIMER
133479 ± 3% +14.8% 153280 ± 3% softirqs.CPU84.RCU
116219 ± 4% +10.5% 128460 ± 3% softirqs.CPU84.TIMER
134146 ± 3% +14.1% 153030 ± 3% softirqs.CPU85.RCU
115760 ± 4% +10.7% 128138 ± 3% softirqs.CPU85.TIMER
133761 ± 3% +14.4% 153004 ± 3% softirqs.CPU86.RCU
116263 ± 4% +10.4% 128314 ± 2% softirqs.CPU86.TIMER
133750 ± 3% +15.0% 153843 ± 3% softirqs.CPU87.RCU
116499 ± 4% +11.4% 129755 ± 2% softirqs.CPU87.TIMER
133834 ± 4% +14.8% 153597 ± 3% softirqs.CPU88.RCU
115684 ± 3% +10.5% 127826 ± 2% softirqs.CPU88.TIMER
134153 ± 3% +14.1% 153104 ± 3% softirqs.CPU89.RCU
115866 ± 3% +10.2% 127686 ± 2% softirqs.CPU89.TIMER
134847 ± 3% +13.7% 153380 ± 3% softirqs.CPU9.RCU
133576 ± 3% +14.2% 152551 ± 3% softirqs.CPU90.RCU
115914 ± 3% +10.2% 127744 ± 3% softirqs.CPU90.TIMER
133760 ± 4% +14.7% 153437 ± 3% softirqs.CPU91.RCU
116142 ± 4% +10.4% 128208 ± 2% softirqs.CPU91.TIMER
133995 ± 4% +13.9% 152606 ± 3% softirqs.CPU92.RCU
116734 ± 4% +9.9% 128259 ± 3% softirqs.CPU92.TIMER
133780 ± 3% +14.1% 152671 ± 2% softirqs.CPU93.RCU
115982 ± 3% +10.5% 128218 ± 2% softirqs.CPU93.TIMER
133984 ± 4% +13.7% 152369 ± 3% softirqs.CPU94.RCU
115623 ± 4% +10.3% 127489 ± 3% softirqs.CPU94.TIMER
134164 ± 3% +14.0% 152971 ± 4% softirqs.CPU95.RCU
117463 ± 3% +10.4% 129720 ± 4% softirqs.CPU95.TIMER
134096 ± 4% +14.2% 153145 ± 4% softirqs.CPU96.RCU
109014 ± 2% +10.0% 119943 ± 3% softirqs.CPU96.TIMER
135115 ± 3% +13.3% 153137 ± 3% softirqs.CPU97.RCU
133945 ± 3% +14.2% 152973 ± 3% softirqs.CPU98.RCU
109148 ± 2% +9.4% 119448 ± 3% softirqs.CPU98.TIMER
134912 ± 3% +13.4% 153038 ± 3% softirqs.CPU99.RCU
25756088 ± 3% +14.1% 29384406 ± 3% softirqs.RCU
21381092 ± 3% +10.1% 23543636 ± 3% softirqs.TIMER
aim7.jobs-per-min
30500 +-------------------------------------------------------------------+
| ..+.... .|
30000 |-+ ...... ... ... |
29500 |................ ...... .. ... |
| +.. ... .. |
29000 |-+ ... ... |
28500 |-+ . ... |
| +. |
28000 |-+ O O |
27500 |-+ |
| |
27000 |-+ |
26500 |-+ |
| O |
26000 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[Btrfs] 8e7f04c76d: xfstests.btrfs.061.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 8e7f04c76ddb99524994d2d8cd45cb7f6464726d ("Btrfs: simplify inline extent handling when doing reflinks")
https://github.com/kdave/btrfs-devel.git misc-next
in testcase: xfstests
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group00
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2020-03-28 03:51:45 export TEST_DIR=/fs/vda
2020-03-28 03:51:45 export TEST_DEV=/dev/vda
2020-03-28 03:51:45 export FSTYP=btrfs
2020-03-28 03:51:45 export SCRATCH_MNT=/fs/scratch
2020-03-28 03:51:45 mkdir /fs/scratch -p
2020-03-28 03:51:45 export SCRATCH_DEV_POOL="/dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf"
2020-03-28 03:51:45 sed "s:^:btrfs/:" //lkp/benchmarks/xfstests/tests/btrfs-group00
2020-03-28 03:51:45 ./check btrfs/001 btrfs/004 btrfs/007 btrfs/010 btrfs/013 btrfs/016 btrfs/019 btrfs/022 btrfs/025 btrfs/028 btrfs/031 btrfs/034 btrfs/037 btrfs/040 btrfs/043 btrfs/046 btrfs/049 btrfs/052 btrfs/055 btrfs/058 btrfs/061 btrfs/064 btrfs/067 btrfs/071 btrfs/074 btrfs/077 btrfs/080 btrfs/083 btrfs/086 btrfs/089 btrfs/092 btrfs/095 btrfs/098 btrfs/101 btrfs/104 btrfs/107 btrfs/110 btrfs/113 btrfs/116 btrfs/119 btrfs/122 btrfs/125 btrfs/128 btrfs/131 btrfs/134 btrfs/137 btrfs/140 btrfs/143 btrfs/146 btrfs/149 btrfs/152 btrfs/155 btrfs/158 btrfs/161 btrfs/164 btrfs/167 btrfs/170 btrfs/173 btrfs/176 btrfs/179 btrfs/182 btrfs/185 btrfs/188 btrfs/191 btrfs/194 btrfs/197 btrfs/200 btrfs/203 btrfs/206
btrfs/206 - unknown test, ignored
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 vm-snb-45 5.6.0-rc5-00168-g8e7f04c76ddb9 #1 SMP Fri Mar 27 16:10:31 CST 2020
MKFS_OPTIONS -- /dev/vdb
MOUNT_OPTIONS -- /dev/vdb /fs/scratch
btrfs/001 69s
btrfs/004 170s
btrfs/007 2s
btrfs/010 151s
btrfs/013 11s
btrfs/016 2s
btrfs/019 0s
btrfs/022 5s
btrfs/025 1s
btrfs/028 31s
btrfs/031 1s
btrfs/034 8s
btrfs/037 1s
btrfs/040 1s
btrfs/043 1s
btrfs/046 18s
btrfs/049 6s
btrfs/052 17s
btrfs/055 2s
btrfs/058 3s
btrfs/061 _check_dmesg: something found in dmesg (see /lkp/benchmarks/xfstests/results//btrfs/061.dmesg)
btrfs/064 644s
btrfs/067 122s
btrfs/071 19s
btrfs/074 43s
btrfs/077 1s
btrfs/080 54s
btrfs/083 1s
btrfs/086 36s
btrfs/089 15s
btrfs/092 36s
btrfs/095 19s
btrfs/098 17s
btrfs/101 209s
btrfs/104 2s
btrfs/107 1s
btrfs/110 1s
btrfs/113 0s
btrfs/116 [not run] FITRIM not supported on /fs/scratch
btrfs/119 1s
btrfs/122 15s
btrfs/125 17s
btrfs/128 1s
btrfs/131 2s
btrfs/134 1s
btrfs/137 1s
btrfs/140 2s
btrfs/143 1s
btrfs/146 3s
btrfs/149 1s
btrfs/152 3s
btrfs/155 1s
btrfs/158 1s
btrfs/161 1s
btrfs/164 2s
btrfs/167 1s
btrfs/170 1s
btrfs/173 1s
btrfs/176 2s
btrfs/179 133s
btrfs/182 3s
btrfs/185 1s
btrfs/188 1s
btrfs/191 1s
btrfs/194 22s
btrfs/197 - output mismatch (see /lkp/benchmarks/xfstests/results//btrfs/197.out.bad)
--- tests/btrfs/197.out 2020-03-05 16:38:54.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//btrfs/197.out.bad 2020-03-28 04:34:05.170608651 +0800
@@ -3,23 +3,19 @@
Label: none uuid: <UUID>
Total devices <NUM> FS bytes used <SIZE>
devid <DEVID> size <SIZE> used <SIZE> path SCRATCH_DEV
- *** Some devices missing
raid5
Label: none uuid: <UUID>
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/btrfs/197.out /lkp/benchmarks/xfstests/results//btrfs/197.out.bad' to see the entire diff)
btrfs/200 1s
btrfs/203 1s
Ran: btrfs/001 btrfs/004 btrfs/007 btrfs/010 btrfs/013 btrfs/016 btrfs/019 btrfs/022 btrfs/025 btrfs/028 btrfs/031 btrfs/034 btrfs/037 btrfs/040 btrfs/043 btrfs/046 btrfs/049 btrfs/052 btrfs/055 btrfs/058 btrfs/061 btrfs/064 btrfs/067 btrfs/071 btrfs/074 btrfs/077 btrfs/080 btrfs/083 btrfs/086 btrfs/089 btrfs/092 btrfs/095 btrfs/098 btrfs/101 btrfs/104 btrfs/107 btrfs/110 btrfs/113 btrfs/116 btrfs/119 btrfs/122 btrfs/125 btrfs/128 btrfs/131 btrfs/134 btrfs/137 btrfs/140 btrfs/143 btrfs/146 btrfs/149 btrfs/152 btrfs/155 btrfs/158 btrfs/161 btrfs/164 btrfs/167 btrfs/170 btrfs/173 btrfs/176 btrfs/179 btrfs/182 btrfs/185 btrfs/188 btrfs/191 btrfs/194 btrfs/197 btrfs/200 btrfs/203
Not run: btrfs/116
Failures: btrfs/061 btrfs/197
Failed 2 of 68 tests
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc5-00168-g8e7f04c76ddb9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
10055da814 ("drm: Set final_kfree in drm_dev_alloc"): [ 24.948283] WARNING: CPU: 0 PID: 1 at drivers/gpu/drm/drm_managed.c:119 drmm_add_final_kfree
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/UPDATE-20200326-045230/Daniel-Ve...
commit 10055da8145261ac5f8f0edff56581bb81935505
Author: Daniel Vetter <daniel.vetter(a)ffwll.ch>
AuthorDate: Mon Mar 23 15:49:03 2020 +0100
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu Mar 26 04:52:32 2020 +0800
drm: Set final_kfree in drm_dev_alloc
I also did a full review of all callers, and only the xen driver
forgot to call drm_dev_put in the failure path. Fix that up too.
v2: I noticed that xen has a drm_driver.release hook, and uses
drm_dev_alloc(). We need to remove the kfree from
xen_drm_drv_release().
bochs also has a release hook, but leaked the drm_device ever since
commit 0a6659bdc5e8221da99eebb176fd9591435e38de
Author: Gerd Hoffmann <kraxel(a)redhat.com>
Date: Tue Dec 17 18:04:46 2013 +0100
drm/bochs: new driver
This patch here fixes that leak.
Same for virtio, started leaking with
commit b1df3a2b24a917f8853d43fe9683c0e360d2c33a
Author: Gerd Hoffmann <kraxel(a)redhat.com>
Date: Tue Feb 11 14:58:04 2020 +0100
drm/virtio: add drm_driver.release callback.
Acked-by: Gerd Hoffmann <kraxel(a)redhat.com>
Acked-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Acked-by: Sam Ravnborg <sam(a)ravnborg.org>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko(a)epam.com>
Cc: Gerd Hoffmann <kraxel(a)redhat.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko(a)epam.com>
Cc: xen-devel(a)lists.xenproject.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: David Airlie <airlied(a)linux.ie>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko(a)epam.com>
Cc: xen-devel(a)lists.xenproject.org
5745edda57 drm: add managed resources tied to drm_device
10055da814 drm: Set final_kfree in drm_dev_alloc
7defb09658 drm: Add docs for managed resources
+----------------------------------------------------------------+------------+------------+------------+
| | 5745edda57 | 10055da814 | 7defb09658 |
+----------------------------------------------------------------+------------+------------+------------+
| boot_successes | 32 | 0 | 0 |
| boot_failures | 1 | 11 | 2 |
| BUG:kernel_hang_in_test_stage | 1 | | |
| WARNING:at_drivers/gpu/drm/drm_managed.c:#drmm_add_final_kfree | 0 | 11 | 2 |
| RIP:drmm_add_final_kfree | 0 | 11 | 2 |
| WARNING:possible_circular_locking_dependency_detected | 0 | 1 | |
+----------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 24.852398] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ 24.903568] [drm] Initialized vgem 1.0.0 20120112 for vgem on minor 0
[ 24.916624] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 24.935488] [drm] Initialized vkms 1.0.0 20180514 for vkms on minor 1
[ 24.947773] ------------[ cut here ]------------
[ 24.948283] WARNING: CPU: 0 PID: 1 at drivers/gpu/drm/drm_managed.c:119 drmm_add_final_kfree+0x34/0x40
[ 24.948283] Modules linked in:
[ 24.948283] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G T 5.6.0-rc7-01663-g10055da814526 #1
[ 24.948283] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 24.948283] RIP: 0010:drmm_add_final_kfree+0x34/0x40
[ 24.948283] Code: 49 89 f4 48 83 7b 30 00 74 02 0f 0b 4c 39 e3 73 02 0f 0b 4c 89 e7 e8 eb 91 97 ff 48 8d 93 00 10 00 00 4c 01 e0 48 39 c2 72 02 <0f> 0b 4c 89 63 30 5b 41 5c 5d c3 90 55 48 89 e5 41 56 41 55 41 54
[ 24.948283] RSP: 0018:ffffac76c003bc38 EFLAGS: 00010246
[ 24.948283] RAX: ffff9c5c7375e000 RBX: ffff9c5c7375d000 RCX: 0000000000000000
[ 24.948283] RDX: ffff9c5c7375e000 RSI: ffff9c5c7375d000 RDI: 0000000080000000
[ 24.948283] RBP: ffffac76c003bc48 R08: 0000000000000001 R09: 0000000000000000
[ 24.948283] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9c5c7375d000
[ 24.948283] R13: ffff9c5c76a0e0b0 R14: ffffffffa0a96580 R15: ffffffffa0a96520
[ 24.948283] FS: 0000000000000000(0000) GS:ffff9c5c77a00000(0000) knlGS:0000000000000000
[ 24.948283] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.948283] CR2: 00007f9c456b2480 CR3: 0000000227480000 CR4: 00000000000006b0
[ 24.948283] Call Trace:
[ 24.948283] drm_dev_alloc+0x61/0x70
[ 24.948283] bochs_pci_probe+0x8a/0x160
[ 24.948283] pci_device_probe+0xbd/0x160
[ 24.948283] really_probe+0x1cd/0x460
[ 24.948283] ? rdinit_setup+0x74/0x74
[ 24.948283] driver_probe_device+0xe4/0x130
[ 24.948283] device_driver_attach+0x3d/0x60
[ 24.948283] __driver_attach+0x11b/0x130
[ 24.948283] ? device_driver_attach+0x60/0x60
[ 24.948283] bus_for_each_dev+0x56/0xa0
[ 24.948283] driver_attach+0x1d/0x20
[ 24.948283] bus_add_driver+0x122/0x200
[ 24.948283] driver_register+0xa1/0xe0
[ 24.948283] __pci_register_driver+0x6d/0x80
[ 24.948283] ? vkms_init+0x2c4/0x2c4
[ 24.948283] bochs_init+0x52/0x73
[ 24.948283] do_one_initcall+0x101/0x2d0
[ 24.948283] ? init+0x3c/0x3c
[ 24.948283] ? do_one_initcall+0x6/0x2d0
[ 24.948283] kernel_init_freeable+0x1c2/0x2c3
[ 24.948283] ? rest_init+0x220/0x220
[ 24.948283] kernel_init+0x9/0xf0
[ 24.948283] ret_from_fork+0x24/0x30
[ 24.948283] irq event stamp: 436112
[ 24.948283] hardirqs last enabled at (436111): [<ffffffff9e0d2242>] __kmalloc_track_caller+0x132/0x290
[ 24.948283] hardirqs last disabled at (436112): [<ffffffff9de01ee8>] trace_hardirqs_off_thunk+0x1a/0x1c
[ 24.948283] softirqs last enabled at (436106): [<ffffffff9fa003d7>] __do_softirq+0x3d7/0x412
[ 24.948283] softirqs last disabled at (436099): [<ffffffff9df0b1ca>] irq_exit+0x4a/0xc0
[ 24.948283] ---[ end trace 68c132b95971b4a5 ]---
[ 25.336891] [drm] Found bochs VGA, ID 0xb0c0.
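For context on the WARN above: drmm_add_final_kfree() sanity-checks that the drm_device is embedded inside the memory block registered for the final kfree, and warns when it is not. A simplified userspace sketch of that containment check follows; the struct names and the explicit size parameter (the kernel derives the size via ksize()) are illustrative, not the actual kernel code:

```c
#include <stddef.h>

struct drm_device { int dummy; };

/* Hypothetical driver struct with the drm_device embedded inside it. */
struct bochs_device {
    int some_state;
    struct drm_device dev;
};

/* Returns nonzero when dev lies entirely within [container, container+size).
 * In the kernel the allocation size comes from ksize(); here it is passed in. */
int final_kfree_contains(const struct drm_device *dev,
                         const void *container, size_t size)
{
    const char *start = (const char *)container;
    const char *d = (const char *)dev;

    return d >= start && d + sizeof(*dev) <= start + size;
}
```

With a check like this, a driver that registers a container not actually holding its drm_device (or a size too small to cover it) trips the warning, which is what the bisected commit exposed for bochs.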
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5eb10aa0823a556bd94c0b1bebe26d8fa152f619 16fbf79b0f83bc752cee8589279f1ebfe57b3b6e --
git bisect bad 0ace0054e6b0fb3897ae4544287dd29157bd9a23 # 18:12 B 0 3 19 0 Merge 'linux-review/Martin-KaFai-Lau/bpf-Add-bpf_sk_storage-support-to-bpf_tcp_ca/20200321-035231' into devel-hourly-2020032619
git bisect good 8df10e20fb6ab8494c206757b40fd0129972a31a # 19:26 G 10 0 1 1 Merge 'linux-review/Heiko-Carstens/s390-removal-of-hibernate-support/20200324-053325' into devel-hourly-2020032619
git bisect bad 9a12ccc493ff7b6fca5b64bcc1045f2104b38193 # 19:56 B 0 1 17 0 Merge 'linux-review/Cezary-Rojewski/acpi-Add-NHLT-table-signature/20200321-044951' into devel-hourly-2020032619
git bisect bad 85649f426116a4b565685c2adebd087d487e86e7 # 20:28 B 0 1 17 0 Merge 'linux-review/Chao-Yu/f2fs-clean-up-dic-tpages-assignment/20200326-063609' into devel-hourly-2020032619
git bisect bad b2b597d6fdebc14917f667159bd2da5f46f91fe7 # 21:25 B 0 3 19 0 Merge 'vfs/next.uaccess-4' into devel-hourly-2020032619
git bisect bad 5ba411db952603c3d236f30480e226d603f8ef09 # 22:26 B 0 2 18 0 Merge 'djwong-xfs/repair-fsfile-metadata' into devel-hourly-2020032619
git bisect bad fcdc1b980318fd30bca9fbc2eed9afaafa4dc354 # 23:07 B 0 3 19 0 Merge 'linux-review/UPDATE-20200326-045230/Daniel-Vetter/drm_device-managed-resources-v5/20200324-045436' into devel-hourly-2020032619
git bisect good 3f78c2e4a3afc298f330c54f29ee10c4dda78519 # 23:57 G 10 0 0 0 Merge 'linux-review/Kees-Cook/Optionally-randomize-kernel-stack-offset-each-syscall/20200326-045102' into devel-hourly-2020032619
git bisect good 8c2670ae74735aac2489236d74c1b6b1cb9d68ff # 00:43 G 10 0 0 0 Merge 'linux-review/Yue-Hu/drivers-block-zram-zram_drv-c-remove-WARN_ON_ONCE-in-free_block_bdev/20200323-185723' into devel-hourly-2020032619
git bisect good 08e467f2b085876026d8e48afee1ed4d53f0edf9 # 01:40 G 10 0 0 0 Merge 'linux-review/Pablo-Neira-Ayuso/netfilter-nft_fwd_netdev-validate-family-and-chain-type/20200325-185403' into devel-hourly-2020032619
git bisect good 93159e12353c2a47e5576d642845a91fa00530bf # 02:15 G 10 0 1 1 drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
git bisect good 29bc37f74ebbfd05c3c89efacdd4e09a362bc8ff # 02:54 G 10 0 0 0 Merge remote-tracking branch 'drm/drm-next' into drm-tip
git bisect good ca394b76cc984614bb79bf83415377d2bf4278fb # 03:28 G 11 0 0 0 Merge remote-tracking branch 'takashi/for-next' into drm-tip
git bisect bad e08f58984044bc0dcc8d25884e00f335f271b632 # 04:01 B 0 4 20 0 drm/vkms: Use drmm_add_final_kfree
git bisect good 19f102d485b9f5e03677f73133d9922e2650686f # 05:14 G 10 0 0 0 Revert "drm/i915: Don't select BROKEN"
git bisect bad 10055da8145261ac5f8f0edff56581bb81935505 # 06:04 B 0 1 17 0 drm: Set final_kfree in drm_dev_alloc
git bisect good 256fb2eb782a5b175b1d14641b7ba747efd19e7a # 08:24 G 10 0 1 1 Merge remote-tracking branch 'drm-intel/topic/core-for-CI' into drm-tip
git bisect good 6636876ca19c973c30a35b953ac0fa409065c7bd # 09:28 G 10 0 1 1 mm/sl[uo]b: export __kmalloc_track(_node)_caller
git bisect good 5745edda57538c3b1da74b2bd1ef0a81f3a987ec # 10:20 G 10 0 0 0 drm: add managed resources tied to drm_device
# first bad commit: [10055da8145261ac5f8f0edff56581bb81935505] drm: Set final_kfree in drm_dev_alloc
git bisect good 5745edda57538c3b1da74b2bd1ef0a81f3a987ec # 10:24 G 30 0 0 0 drm: add managed resources tied to drm_device
# extra tests with debug options
git bisect good 10055da8145261ac5f8f0edff56581bb81935505 # 10:58 G 10 0 0 0 drm: Set final_kfree in drm_dev_alloc
# extra tests on head commit of linux-review/UPDATE-20200326-045230/Daniel-Vetter/drm_device-managed-resources-v5/20200324-045436
git bisect bad 7defb09658f57febb56551b057c53ce6151d234c # 11:18 B 0 2 18 0 drm: Add docs for managed resources
# bad: [7defb09658f57febb56551b057c53ce6151d234c] drm: Add docs for managed resources
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
[mm] fd4d9c7d0c: stress-ng.switch.ops_per_sec -30.5% regression
by kernel test robot
Greetings,
FYI, we noticed a -30.5% regression of stress-ng.switch.ops_per_sec due to commit:
commit: fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8 ("mm: slub: add missing TID bump in kmem_cache_alloc_bulk()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 30s
test: switch
cpufreq_governor: performance
ucode: 0x500002c
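For context on the commit being blamed: SLUB's lockless fastpath pairs the per-CPU freelist pointer with a transaction id (TID) and only commits via cmpxchg when both still match the sampled values; the bug being fixed was a path that replaced the freelist without bumping the TID, so a concurrent fastpath could succeed against stale state. A minimal single-threaded userspace sketch of that generation-counter idea (simplified; names and TID_STEP are illustrative, not the kernel implementation):

```c
#include <stdint.h>

#define TID_STEP 2  /* illustrative; the kernel's step is arch-dependent */

struct cpu_slab {
    void *freelist;
    uint64_t tid;
};

static inline uint64_t next_tid(uint64_t tid)
{
    return tid + TID_STEP;
}

/* Slowpath-style update: any code that rewrites the freelist outside the
 * cmpxchg fastpath must also bump the TID; otherwise a fastpath that
 * sampled the old (freelist, tid) pair could still commit. This bump is
 * what the commit restores in kmem_cache_alloc_bulk(). */
void update_freelist(struct cpu_slab *c, void *new_freelist)
{
    c->freelist = new_freelist;
    c->tid = next_tid(c->tid);
}

/* Fastpath commit: succeeds only if both sampled values still match. */
int fastpath_cmpxchg(struct cpu_slab *c, void *old_fl, uint64_t old_tid,
                     void *new_fl)
{
    if (c->freelist != old_fl || c->tid != old_tid)
        return 0;  /* state changed since sampling: caller retries */
    c->freelist = new_fl;
    c->tid = next_tid(c->tid);
    return 1;
}
```

The reported throughput drop would then come from the extra TID bumps invalidating more fastpath attempts, trading speed for the correctness the bump guarantees.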
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/switch/stress-ng/30s/0x500002c
commit:
ac309e7744 ("Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid")
fd4d9c7d0c ("mm: slub: add missing TID bump in kmem_cache_alloc_bulk()")
ac309e7744bee222 fd4d9c7d0c71866ec0c2825189e
---------------- ---------------------------
%stddev %change %stddev
\ | \
69076693 -30.5% 47993323 stress-ng.switch.ops
2302520 -30.5% 1599758 stress-ng.switch.ops_per_sec
26.79 -9.0% 24.37 stress-ng.time.user_time
9242 ± 13% -16.2% 7749 ± 2% numa-meminfo.node0.KernelStack
2.86 ±100% -100.0% 0.00 iostat.sdb.await.max
2.86 ±100% -100.0% 0.00 iostat.sdb.r_await.max
9243 ± 13% -16.2% 7748 ± 2% numa-vmstat.node0.nr_kernel_stack
157380 ± 9% -60.3% 62515 ± 90% numa-vmstat.node0.numa_other
22499 ± 28% -41.5% 13173 ± 34% sched_debug.cfs_rq:/.spread0.max
-3319 +252.7% -11706 sched_debug.cfs_rq:/.spread0.min
-53.25 -45.1% -29.25 sched_debug.cpu.nr_uninterruptible.min
10425 ± 7% +13.3% 11813 ± 5% interrupts.CPU41.RES:Rescheduling_interrupts
10605 ± 2% +31.9% 13993 ± 23% interrupts.CPU46.RES:Rescheduling_interrupts
10804 ± 8% +13.0% 12211 ± 8% interrupts.CPU82.RES:Rescheduling_interrupts
10708 ± 3% +30.1% 13930 ± 22% interrupts.CPU94.RES:Rescheduling_interrupts
5456 ± 15% +71.7% 9369 ± 20% softirqs.CPU0.RCU
18494 ± 4% +6.9% 19771 ± 6% softirqs.CPU0.TIMER
20484 ± 14% -22.5% 15866 ± 9% softirqs.CPU27.TIMER
5114 ± 10% +64.9% 8433 ± 28% softirqs.CPU5.RCU
4841 ± 5% +45.6% 7047 ± 32% softirqs.CPU53.RCU
17421 ± 3% -9.3% 15796 ± 8% softirqs.CPU53.TIMER
18295 ± 4% -11.7% 16155 ± 7% softirqs.CPU59.TIMER
19446 ± 10% -13.6% 16803 ± 9% softirqs.CPU7.TIMER
4847 ± 7% +62.3% 7866 ± 43% softirqs.CPU8.RCU
18.36 +5.3% 19.33 perf-stat.i.MPKI
2.48 ± 3% +0.2 2.63 ± 2% perf-stat.i.cache-miss-rate%
17934024 ± 4% +10.0% 19730768 perf-stat.i.cache-misses
4.13 +4.9% 4.33 perf-stat.i.cpi
9504 -7.7% 8776 perf-stat.i.cycles-between-cache-misses
0.02 ± 3% +0.0 0.02 ± 5% perf-stat.i.dTLB-store-miss-rate%
58.48 -1.5 57.02 perf-stat.i.iTLB-load-miss-rate%
0.25 ± 2% -5.3% 0.23 perf-stat.i.ipc
94.99 -1.0 94.02 perf-stat.i.node-load-miss-rate%
6984752 ± 3% +8.0% 7545390 perf-stat.i.node-load-misses
336707 ± 4% +36.2% 458652 ± 2% perf-stat.i.node-loads
5585196 ± 3% +5.5% 5893365 perf-stat.i.node-store-misses
18.76 +4.2% 19.55 perf-stat.overall.MPKI
2.32 +0.2 2.52 ± 2% perf-stat.overall.cache-miss-rate%
4.21 +4.2% 4.38 perf-stat.overall.cpi
9662 -8.0% 8891 perf-stat.overall.cycles-between-cache-misses
0.02 ± 3% +0.0 0.02 ± 5% perf-stat.overall.dTLB-store-miss-rate%
58.68 -1.6 57.07 perf-stat.overall.iTLB-load-miss-rate%
987.32 +2.2% 1009 perf-stat.overall.instructions-per-iTLB-miss
0.24 -4.0% 0.23 perf-stat.overall.ipc
95.40 -1.1 94.27 perf-stat.overall.node-load-miss-rate%
17353488 ± 4% +10.0% 19087092 perf-stat.ps.cache-misses
4.863e+09 ± 3% -6.2% 4.562e+09 perf-stat.ps.dTLB-stores
6758402 ± 3% +8.0% 7299061 perf-stat.ps.node-load-misses
325857 ± 4% +36.2% 443722 ± 2% perf-stat.ps.node-loads
5404193 ± 3% +5.5% 5700934 perf-stat.ps.node-store-misses
1.275e+12 -6.1% 1.197e+12 ± 2% perf-stat.total.instructions
45.82 ± 36% -27.5 18.30 ± 60% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.calltrace.cycles-pp.secondary_startup_64
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
49.13 ± 32% -24.8 24.31 ± 41% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
48.65 ± 31% -24.3 24.31 ± 41% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
17.04 ± 85% +26.6 43.60 ± 25% perf-profile.calltrace.cycles-pp.ret_from_fork
17.04 ± 85% +26.6 43.60 ± 25% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
14.96 ±100% +28.6 43.60 ± 25% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
14.67 ±103% +28.9 43.60 ± 25% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
12.30 ±133% +30.0 42.32 ± 29% perf-profile.calltrace.cycles-pp.memcpy_erms.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread
12.59 ±130% +31.3 43.88 ± 24% perf-profile.calltrace.cycles-pp.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread.ret_from_fork
45.82 ± 36% -27.5 18.30 ± 60% perf-profile.children.cycles-pp.intel_idle
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.children.cycles-pp.secondary_startup_64
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.children.cycles-pp.start_secondary
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.children.cycles-pp.cpu_startup_entry
49.70 ± 31% -25.4 24.31 ± 41% perf-profile.children.cycles-pp.do_idle
49.13 ± 32% -24.8 24.31 ± 41% perf-profile.children.cycles-pp.cpuidle_enter
49.13 ± 32% -24.8 24.31 ± 41% perf-profile.children.cycles-pp.cpuidle_enter_state
17.04 ± 85% +26.6 43.60 ± 25% perf-profile.children.cycles-pp.ret_from_fork
17.04 ± 85% +26.6 43.60 ± 25% perf-profile.children.cycles-pp.kthread
14.96 ±100% +28.6 43.60 ± 25% perf-profile.children.cycles-pp.worker_thread
14.67 ±103% +28.9 43.60 ± 25% perf-profile.children.cycles-pp.process_one_work
12.59 ±130% +31.0 43.60 ± 25% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
12.59 ±130% +31.0 43.60 ± 25% perf-profile.children.cycles-pp.memcpy_erms
45.82 ± 36% -27.5 18.30 ± 60% perf-profile.self.cycles-pp.intel_idle
12.13 ±128% +31.5 43.60 ± 25% perf-profile.self.cycles-pp.memcpy_erms
stress-ng.switch.ops
8e+07 +-------------------------------------------------------------------+
| |
7e+07 |-+...+....+ +.....+....+.....+ |
6e+07 |.. : : |
| : : |
5e+07 |-+ O : O : O |
| O : O O : O O O O O |
4e+07 |-+ : : |
| : : |
3e+07 |-+ : : |
2e+07 |-+ : : |
| : : |
1e+07 |-+ : : |
| : : |
0 +-------------------------------------------------------------------+
stress-ng.switch.ops_per_sec
2.5e+06 +-----------------------------------------------------------------+
| ...+....+ +.....+....+.....+ |
|.. : : |
2e+06 |-+ : : |
| : : |
| O O : O O O : O O O O O O |
1.5e+06 |-+ : : |
| : : |
1e+06 |-+ : : |
| : : |
| : : |
500000 |-+ : : |
| : : |
| : : |
0 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/debug] f8bf55f05f: BUG:non-zero_pgtables_bytes_on_freeing_mm
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: f8bf55f05f7d665fdf5942ceb28a82089ddad44a ("mm/debug: add tests validating architecture page table helpers")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------+------------+------------+
| | bf99016ea0 | f8bf55f05f |
+-------------------------------------------+------------+------------+
| boot_successes | 20 | 0 |
| boot_failures | 0 | 20 |
| BUG:non-zero_pgtables_bytes_on_freeing_mm | 0 | 18 |
| BUG:kernel_hang_in_boot_stage | 0 | 1 |
| BUG:workqueue_lockup-pool | 0 | 1 |
+-------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 7.050643] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest0/status
[ 7.051641] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest0/status
[ 7.054849] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest0/status
[ 7.054850] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest1/status
[ 7.056452] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest1/status
[ 7.059482] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest1/status
[ 7.059483] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest2/status
[ 7.076240] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest2/status
[ 7.082878] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest2/status
[ 7.082881] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest3/status
[ 7.085979] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest3/status
[ 7.093291] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest3/status
[ 7.094066] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest5/status
[ 7.096582] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest5/status
[ 7.102576] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest5/status
[ 7.102599] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest6/status
[ 7.107367] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest6/status
[ 7.113918] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest6/status
[ 7.113920] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest7/status
[ 7.117058] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest7/status
[ 7.124533] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest7/status
[ 7.125013] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/status
[ 7.127918] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/status
[ 7.134354] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/status
[ 7.134357] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/property-foo
[ 7.137419] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/property-foo
[ 7.143157] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/property-foo
[ 7.143160] ### dt-test ### EXPECT \ : OF: overlay: node_overlaps_later_cs: #6 overlaps with #7 @/testcase-data/overlay-node/test-bus/test-unittest8
[ 7.146130] ### dt-test ### EXPECT \ : OF: overlay: overlay #6 is not topmost
[ 7.148434] OF: overlay: node_overlaps_later_cs: #6 overlaps with #7 @/testcase-data/overlay-node/test-bus/test-unittest8
[ 7.151941] OF: overlay: overlay #6 is not topmost
[ 7.152965] ### dt-test ### EXPECT / : OF: overlay: overlay #6 is not topmost
[ 7.152966] ### dt-test ### EXPECT / : OF: overlay: node_overlaps_later_cs: #6 overlaps with #7 @/testcase-data/overlay-node/test-bus/test-unittest8
[ 7.157527] ### dt-test ### EXPECT \ : i2c i2c-1: Added multiplexed i2c bus 2
[ 7.159418] i2c i2c-0: Added multiplexed i2c bus 1
[ 7.175401] ### dt-test ### EXPECT / : i2c i2c-1: Added multiplexed i2c bus 2
[ 7.175404] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest12/status
[ 7.176802] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest12/status
[ 7.183238] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest12/status
[ 7.183241] ### dt-test ### EXPECT \ : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13/status
[ 7.186407] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13/status
[ 7.193861] ### dt-test ### EXPECT / : OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13/status
[ 7.193864] ### dt-test ### EXPECT \ : i2c i2c-1: Added multiplexed i2c bus 3
[ 7.200011] i2c i2c-0: Added multiplexed i2c bus 2
[ 7.202698] ### dt-test ### EXPECT / : i2c i2c-1: Added multiplexed i2c bus 3
[ 7.209235] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-B-input) hogged as input
[ 7.212134] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-A-input) hogged as input
[ 7.213792] GPIO line 509 (line-B-input) hogged as input
[ 7.215209] GPIO line 503 (line-A-input) hogged as input
[ 7.216541] ### dt-test ### EXPECT / : GPIO line <<int>> (line-A-input) hogged as input
[ 7.218292] ### dt-test ### EXPECT / : GPIO line <<int>> (line-B-input) hogged as input
[ 7.219753] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-D-input) hogged as input
[ 7.224715] GPIO line 501 (line-D-input) hogged as input
[ 7.226085] ### dt-test ### EXPECT / : GPIO line <<int>> (line-D-input) hogged as input
[ 7.229424] ### dt-test ### EXPECT \ : GPIO line <<int>> (line-C-input) hogged as input
[ 7.231653] GPIO line 495 (line-C-input) hogged as input
[ 7.232861] ### dt-test ### EXPECT / : GPIO line <<int>> (line-C-input) hogged as input
[ 7.239447] ### dt-test ### FAIL of_unittest_overlay_high_level():2982 overlay_base_root not initialized
[ 7.240485] ### dt-test ### end of unittest - 254 passed, 1 failed
[ 7.570301] CE: hpet increased min_delta_ns to 5000 nsec
[ 7.570449] CE: hpet increased min_delta_ns to 7500 nsec
[ 7.570563] CE: hpet increased min_delta_ns to 11250 nsec
[ 7.570645] CE: hpet increased min_delta_ns to 16875 nsec
[ 7.570783] CE: hpet increased min_delta_ns to 25312 nsec
[ 9.273375] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 9.276257] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 9.279231] _warn_unseeded_randomness: 27 callbacks suppressed
[ 9.279239] random: get_random_bytes called from addrconf_dad_kick+0x55/0xb0 with crng_init=1
[ 9.292890] Sending DHCP requests .
[ 9.292901] random: get_random_bytes called from ip_auto_config+0x510/0xebf with crng_init=1
[ 9.293233] , OK
[ 9.296007] IP-Config: Got DHCP answer from 10.0.2.2, my address is 10.0.2.15
[ 9.297449] IP-Config: Complete:
[ 9.298054] device=eth0, hwaddr=52:54:00:12:34:56, ipaddr=10.0.2.15, mask=255.255.255.0, gw=10.0.2.2
[ 9.299789] host=vm-snb-6, domain=, nis-domain=(none)
[ 9.300976] bootserver=10.0.2.2, rootserver=10.0.2.2, rootpath=
[ 9.300978] nameserver0=10.0.2.3
[ 9.304270] debug_vm_pgtable: debug_vm_pgtable: Validating architecture page table helpers
[ 9.306147] random: get_random_u32 called from debug_vm_pgtable+0x2c/0x6a5 with crng_init=1
[ 9.306206] BUG: non-zero pgtables_bytes on freeing mm: -4096
[ 9.317950] Freeing unused kernel image (initmem) memory: 872K
[ 9.319376] Write protecting kernel text and read-only data: 21104k
[ 9.322456] NX-protecting the kernel data: 14216k
[ 9.324940] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 9.326221] rodata_test: all tests were successful
[ 9.327341] x86/mm: Checking user space page tables
[ 9.328395] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 9.329867] Run /init as init process
[ 9.330602] with arguments:
[ 9.331216] /init
[ 9.331698] with environment:
[ 9.332360] HOME=/
[ 9.333015] TERM=linux
[ 9.333592] user=lkp
[ 9.334128] job=/lkp/jobs/scheduled/vm-snb-6/boot-1-openwrt-i386-generic-20190428.cgz-f8bf55f05f7d665fdf5942ceb28a82089ddad44a-20200328-60525-1ymm2c4-11.yaml
[ 9.337271] ARCH=i386
[ 9.337855] kconfig=i386-randconfig-h003-20200326
[ 9.338861] branch=linux-devel/devel-catchup-202003270319
[ 9.340058] commit=f8bf55f05f7d665fdf5942ceb28a82089ddad44a
[ 9.341438] BOOT_IMAGE=/pkg/linux/i386-randconfig-h003-20200326/gcc-7/f8bf55f05f7d665fdf5942ceb28a82089ddad44a/vmlinuz-5.6.0-rc7-12014-gf8bf55f05f7d6
[ 9.343123] max_uptime=600
[ 9.343456] RESULT_ROOT=/result/boot/1/vm-snb/openwrt-i386-generic-20190428.cgz/i386-randconfig-h003-20200326/gcc-7/f8bf55f05f7d665fdf5942ceb28a82089ddad44a/8
[ 9.345009] LKP_SERVER=inn
[ 9.345337] selinux=0
[ 9.345618] apic=debug
[ 9.345908] softlockup_panic=1
[ 9.346285] nmi_watchdog=panic
[ 9.346643] prompt_ramdisk=0
[ 9.346984] vga=normal
[ 9.347272] watchdog_thresh=60
[ 9.352224] init: Console is alive
[ 9.353943] kmodloader: loading kernel modules from /etc/modules-boot.d/*
[ 9.355146] kmodloader: done loading kernel modules from /etc/modules-boot.d/*
[ 9.367256] init: - preinit -
Press the [f] key and hit [enter] to enter failsafe mode
Press the [1], [2], [3] or [4] key and hit [enter] to select the debug level
[ 10.425360] _warn_unseeded_randomness: 56 callbacks suppressed
[ 10.425370] random: get_random_bytes called from flow_hash_from_keys+0xd6/0x1c0 with crng_init=1
[ 10.425630] random: get_random_bytes called from addrconf_dad_kick+0x55/0xb0 with crng_init=1
[ 10.426200] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 11.560234] _warn_unseeded_randomness: 3 callbacks suppressed
[ 11.560242] random: get_random_u32 called from cache_alloc_refill+0x62d/0xf10 with crng_init=1
[ 11.560951] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 11.560960] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 12.584597] _warn_unseeded_randomness: 7 callbacks suppressed
[ 12.584607] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 12.587470] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 12.587477] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
[ 12.609699] mount_root: mounting /dev/root
[ 12.632418] urandom-seed: Seed file not found (/etc/urandom.seed)
[ 12.642196] procd: - early -
[ 13.190774] procd: - ubus -
[ 13.196472] random: ubusd: uninitialized urandom read (4 bytes read)
[ 13.242855] random: ubusd: uninitialized urandom read (4 bytes read)
[ 13.244220] random: ubusd: uninitialized urandom read (4 bytes read)
[ 13.249986] procd: - init -
Please press Enter to activate this console.
[ 13.327004] kmodloader: loading kernel modules from /etc/modules.d/*
[ 13.330177] kmodloader: done loading kernel modules from /etc/modules.d/*
[ 14.336904] _warn_unseeded_randomness: 102 callbacks suppressed
[ 14.336912] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 14.336922] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 14.336927] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
[ 14.556753] urandom_read: 5 callbacks suppressed
[ 14.556755] random: jshn: uninitialized urandom read (4 bytes read)
[ 14.564888] random: ubusd: uninitialized urandom read (4 bytes read)
[ 15.345890] _warn_unseeded_randomness: 615 callbacks suppressed
[ 15.345898] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 15.345905] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 15.345909] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
[ 16.361307] _warn_unseeded_randomness: 390 callbacks suppressed
[ 16.361314] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 16.361322] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 16.361327] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
[ 17.365403] _warn_unseeded_randomness: 284 callbacks suppressed
[ 17.365412] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 17.365420] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 17.365425] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
LKP: HOSTNAME vm-snb-6, MAC 52:54:00:12:34:56, kernel 5.6.0-rc7-12014-gf8bf55f05f7d6 2, serial console /dev/ttyS0
[ 18.346174] Kernel tests: Boot OK!
[ 18.346174] /lkp/lkp/src/bin/run-lkp
[ 18.346174] RESULT_ROOT=/result/boot/1/vm-snb/openwrt-i386-generic-20190428.cgz/i386-randconfig-h003-20200326/gcc-7/f8bf55f05f7d665fdf5942ceb28a82089ddad44a/8
[ 18.346174] job=/lkp/jobs/scheduled/vm-snb-6/boot-1-openwrt-i386-generic-20190428.cgz-f8bf55f05f7d665fdf5942ceb28a82089ddad44a-20200328-60525-1ymm2c4-11.yaml
[ 18.346174] result_service=raw_upload, RESULT_MNT=/inn/result, RESULT_ROOT=/inn/result/boot/1/vm-snb/openwrt-i386-generic-20190428.cgz/i386-randconfig-h003-20200326/gcc-7/f8bf55f05f7d665fdf5942ceb28a82089ddad44a/8
[ 18.346174] run-job /lkp/jobs/scheduled/vm-snb-6/boot-1-openwrt-i386-generic-20190428.cgz-f8bf55f05f7d665fdf5942ceb28a82089ddad44a-20200328-60525-1ymm2c4-11.yaml
[ 18.491299] _warn_unseeded_randomness: 216 callbacks suppressed
[ 18.491308] random: get_random_u32 called from arch_pick_mmap_layout+0x59/0x130 with crng_init=1
[ 18.491318] random: get_random_u32 called from randomize_stack_top+0x1d/0x40 with crng_init=1
[ 18.491323] random: get_random_u32 called from arch_align_stack+0x35/0x50 with crng_init=1
[ 19.892854] random: crng init done
[ 19.893497] random: 82 get_random_xx warning(s) missed due to ratelimiting
[ 23.382687] target ucode:
[ 24.383652] sleep started
[ 25.384495] /bin/wget -q http://inn:80/~lkp/cgi-bin/lkp-jobfile-append-var?job_file=/lkp/jobs/sche... -O /dev/null
[ 32.652245] kill 869 vmstat --timestamp -n 10
[ 32.652245] kill 865 cat /proc/kmsg
[ 32.652245] wait for background processes: 878 873 oom-killer meminfo
[ 34.365971] CE: hpet increased min_delta_ns to 37968 nsec
[ 35.265079] CE: hpet increased min_delta_ns to 56952 nsec
[ 36.654596] /bin/wget -q http://inn:80/~lkp/cgi-bin/lkp-jobfile-append-var?job_file=/lkp/jobs/sche... -O /dev/null
To reproduce:
# build kernel
cd linux
cp config-5.6.0-rc7-12014-gf8bf55f05f7d6 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp