[lkp] [test] f9160685f6: WARNING: CPU: 0 PID: 1 at lib/kobject.c:597 kobject_get+0xa8/0xb0
by kernel test robot
FYI, we noticed the following change on
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git 20160414-sysdata-v6
commit f9160685f621802db1a5fab14fb2bb5b8e1fbff3 ("test: add new sysdata_file_request*() loader tester")
As shown below, the warning "WARNING: CPU: 0 PID: 1 at lib/kobject.c:597 kobject_get+0xa8/0xb0" appeared with your commit.
[ 6.411218] Key type encrypted registered
[ 6.414023] test_firmware: interface ready
[ 6.414930] ------------[ cut here ]------------
[ 6.416008] WARNING: CPU: 0 PID: 1 at lib/kobject.c:597 kobject_get+0xa8/0xb0
[ 6.418641] kobject: '(null)' (ccccccd4): is not initialized, yet kobject_get() is being called.
[ 6.421176] Modules linked in:
[ 6.422588] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.6.0-rc3-00003-gf916068 #539
[ 6.424319] 00000282 00000282 ca5f7de8 c13c2cc3 ca5f7e2c c1cec998 c13c5a28 ca5f7e18
[ 6.427157] c10682e5 c1cec948 ca5f7e48 00000001 c1cec998 00000255 c13c5a28 00000255
[ 6.428725] ccccccd4 cb9da010 00000000 ca5f7e34 c1068334 00000009 00000000 ca5f7e2c
[ 6.430343] Call Trace:
[ 6.430807] [<c13c2cc3>] dump_stack+0x96/0xb3
[ 6.431716] [<c13c5a28>] ? kobject_get+0xa8/0xb0
[ 6.433681] [<c10682e5>] __warn+0xf5/0x110
[ 6.435492] [<c13c5a28>] ? kobject_get+0xa8/0xb0
[ 6.436351] [<c1068334>] warn_slowpath_fmt+0x34/0x40
[ 6.437351] [<c13c5a28>] kobject_get+0xa8/0xb0
[ 6.438298] [<c14db77d>] device_add+0xdd/0x650
[ 6.439176] [<c11c6e83>] ? kfree+0x93/0x280
[ 6.440091] [<c11930ed>] ? kfree_const+0x1d/0x30
[ 6.440967] [<c11930ed>] ? kfree_const+0x1d/0x30
[ 6.442025] [<c13c62aa>] ? kobject_set_name_vargs+0x6a/0xa0
[ 6.444059] [<c14dbef7>] device_create_groups_vargs+0xe7/0x100
[ 6.446142] [<c14dbf9f>] device_create_with_groups+0x2f/0x40
[ 6.447257] [<c14ce483>] misc_register+0x163/0x1b0
[ 6.448280] [<c1f1449f>] ? test_firmware_init+0x9c/0x9c
[ 6.449251] [<c1f144e0>] test_sysdata_init+0x41/0xc0
[ 6.450225] [<c1002166>] do_one_initcall+0xd6/0x260
[ 6.451176] [<c1f1449f>] ? test_firmware_init+0x9c/0x9c
[ 6.452175] [<c108cf93>] ? parse_args+0x353/0x5f0
[ 6.453090] [<c1ee7cc5>] ? kernel_init_freeable+0xd2/0x16f
[ 6.454192] [<c1ee7ce5>] kernel_init_freeable+0xf2/0x16f
[ 6.455223] [<c195549b>] kernel_init+0xb/0x100
[ 6.456080] [<c1960389>] ret_from_kernel_thread+0x21/0x38
[ 6.458314] [<c1955490>] ? rest_init+0xb0/0xb0
[ 6.460164] ---[ end trace b4208ac557fed46f ]---
[ 6.460742] ------------[ cut here ]------------
vm-intel12-openwrt-i386: 1 threads qemu-system-i386 -enable-kvm with 192M memory
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-i0-201615/gcc-5/f9160685f621802db1a5fab14fb2bb5b8e1fbff3/vmlinuz-4.6.0-rc3-00003-gf916068 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-intel12-openwrt-i386-16/rand_boot-1-openwrt-i386.cgz-i386-randconfig-i0-201615-f9160685f621802db1a5fab14fb2bb5b8e1fbff3-20160415-107555-1x78d4z-0.yaml ARCH=i386 kconfig=i386-randconfig-i0-201615 branch=mcgrof/20160414-sysdata-v6 commit=f9160685f621802db1a5fab14fb2bb5b8e1fbff3 BOOT_IMAGE=/pkg/linux/i386-randconfig-i0-201615/gcc-5/f9160685f621802db1a5fab14fb2bb5b8e1fbff3/vmlinuz-4.6.0-rc3-00003-gf916068 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-intel12-openwrt-i386/openwrt-i386.cgz/i386-randconfig-i0-201615/gcc-5/f9160685f621802db1a5fab14fb2bb5b8e1fbff3/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-intel12-openwrt-i386-16::dhcp drbd.minor_count=8' -initrd /fs/KVM/initrd-vm-intel12-openwrt-i386-16 -m 192 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/KVM/disk0-vm-intel12-openwrt-i386-16,media=disk,if=virtio -drive file=/fs/KVM/disk1-vm-intel12-openwrt-i386-16,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-intel12-openwrt-i386-16 -serial file:/dev/shm/kboot/serial-vm-intel12-openwrt-i386-16 -daemonize -display none -monitor null
Thanks,
Xiaolong Ye
[lkp] [mountinfo] b544fb04d8: INFO: possible circular locking dependency detected
by kernel test robot
FYI, we noticed the following change on
https://git.kernel.org/pub/scm/linux/kernel/git/sergeh/linux-security 2016-04-16/kernfs.show
commit b544fb04d8cd4558829c13b8fea88dd0ed8ab10e ("mountinfo: implement show_path for kernfs and cgroup")
As shown below, the message "INFO: possible circular locking dependency detected" appeared with your commit.
[ 41.620951] ======================================================
[ 41.621590] [ INFO: possible circular locking dependency detected ]
[ 41.622244] 4.6.0-rc3-00059-gb544fb0 #1 Not tainted
[ 41.622757] -------------------------------------------------------
[ 41.623410] systemd/1 is trying to acquire lock:
[ 41.623897] (cgroup_mutex){+.+.+.}, at: [<ffffffff81409a99>] cgroup_show_path+0x29/0x1e0
[ 41.624810]
[ 41.624810] but task is already holding lock:
[ 41.625413] (namespace_sem){+++++.}, at: [<ffffffff816ea05f>] m_start+0x5f/0x760
[ 41.626257]
[ 41.626257] which lock already depends on the new lock.
[ 41.626257]
[ 41.627124]
[ 41.627124] the existing dependency chain (in reverse order) is:
[ 41.627967]
-> #4 (namespace_sem){+++++.}:
[ 41.628522] [<ffffffff81313af5>] lock_acquire+0x135/0x250
[ 41.629203] [<ffffffff832fd9a0>] down_write+0x50/0xf0
[ 41.629830] [<ffffffff816f1918>] lock_mount+0x138/0x8a0
[ 41.630472] [<ffffffff816f3c22>] do_add_mount+0x22/0x690
[ 41.631209] [<ffffffff816f885d>] do_mount+0x52d/0x3900
[ 41.631838] [<ffffffff816fcbd0>] SyS_mount+0x90/0xd0
[ 41.632442] [<ffffffff83302600>] entry_SYSCALL_64_fastpath+0x23/0xc1
[ 41.633198]
-> #3 (&sb->s_type->i_mutex_key){+.+.+.}:
[ 41.633849] [<ffffffff81313af5>] lock_acquire+0x135/0x250
[ 41.634511] [<ffffffff832f58e9>] mutex_lock_nested+0xd9/0xb50
[ 41.635199] [<ffffffff81823f5e>] proc_setup_self+0xce/0x5b0
[ 41.635863] [<ffffffff818009e1>] proc_fill_super+0x1f1/0x330
[ 41.636530] [<ffffffff818010e9>] proc_mount+0x99/0x2f0
[ 41.637160] [<ffffffff8166b935>] mount_fs+0x65/0x330
[ 41.637771] [<ffffffff816f42f6>] vfs_kern_mount+0x66/0x400
[ 41.638519] [<ffffffff816febc5>] kern_mount_data+0x45/0xf0
[ 41.639204] [<ffffffff8180157b>] pid_ns_prepare_proc+0x1b/0x90
[ 41.639907] [<ffffffff812215a7>] alloc_pid+0xc57/0x15b0
[ 41.640538] [<ffffffff8119fa73>] copy_process+0x2493/0x5c40
[ 41.641228] [<ffffffff811a374e>] _do_fork+0x10e/0xc30
[ 41.641843] [<ffffffff811a4294>] kernel_thread+0x24/0x30
[ 41.642485] [<ffffffff832e717f>] rest_init+0x1f/0x190
[ 41.643107] [<ffffffff86fd08ca>] start_kernel+0x5c9/0x5ef
[ 41.643767] [<ffffffff86fcf2c3>] x86_64_start_reservations+0x2a/0x2c
[ 41.644532] [<ffffffff86fcf3c5>] x86_64_start_kernel+0x100/0x10d
[ 41.645263]
-> #2 (&type->s_umount_key#4/1){+.+...}:
[ 41.645900] [<ffffffff81313af5>] lock_acquire+0x135/0x250
[ 41.646549] [<ffffffff812f1e44>] down_write_nested+0x54/0x100
[ 41.647243] [<ffffffff81668378>] sget+0x4b8/0xe10
[ 41.647858] [<ffffffff818011cb>] proc_mount+0x17b/0x2f0
[ 41.648504] [<ffffffff8166b935>] mount_fs+0x65/0x330
[ 41.649120] [<ffffffff816f42f6>] vfs_kern_mount+0x66/0x400
[ 41.649797] [<ffffffff816febc5>] kern_mount_data+0x45/0xf0
[ 41.650491] [<ffffffff8180157b>] pid_ns_prepare_proc+0x1b/0x90
[ 41.651194] [<ffffffff812215a7>] alloc_pid+0xc57/0x15b0
[ 41.651866] [<ffffffff8119fa73>] copy_process+0x2493/0x5c40
[ 41.652547] [<ffffffff811a374e>] _do_fork+0x10e/0xc30
[ 41.653189] [<ffffffff811a4294>] kernel_thread+0x24/0x30
[ 41.653827] [<ffffffff832e717f>] rest_init+0x1f/0x190
[ 41.654449] [<ffffffff86fd08ca>] start_kernel+0x5c9/0x5ef
[ 41.655096] [<ffffffff86fcf2c3>] x86_64_start_reservations+0x2a/0x2c
[ 41.655861] [<ffffffff86fcf3c5>] x86_64_start_kernel+0x100/0x10d
[ 41.656631]
-> #1 (&cgroup_threadgroup_rwsem){++++.+}:
[ 41.657283] [<ffffffff81313af5>] lock_acquire+0x135/0x250
[ 41.657951] [<ffffffff832fd9a0>] down_write+0x50/0xf0
[ 41.658572] [<ffffffff812f251a>] percpu_down_write+0x8a/0xab0
[ 41.659258] [<ffffffff8142289f>] cgroup_apply_control+0x14f/0x700
[ 41.659998] [<ffffffff8142482f>] rebind_subsystems+0x3ff/0x11f0
[ 41.660724] [<ffffffff8142711f>] cgroup_setup_root+0x38f/0x990
[ 41.661424] [<ffffffff8142f7b6>] cgroup_mount+0x11d6/0x1f10
[ 41.662128] [<ffffffff8166b935>] mount_fs+0x65/0x330
[ 41.662736] [<ffffffff816f42f6>] vfs_kern_mount+0x66/0x400
[ 41.663404] [<ffffffff816f87e5>] do_mount+0x4b5/0x3900
[ 41.664027] [<ffffffff816fcbd0>] SyS_mount+0x90/0xd0
[ 41.664635] [<ffffffff83302600>] entry_SYSCALL_64_fastpath+0x23/0xc1
[ 41.665392]
-> #0 (cgroup_mutex){+.+.+.}:
[ 41.665927] [<ffffffff8130eceb>] __lock_acquire+0x57cb/0x79b0
[ 41.666635] [<ffffffff81313af5>] lock_acquire+0x135/0x250
[ 41.667291] [<ffffffff832f58e9>] mutex_lock_nested+0xd9/0xb50
[ 41.667986] [<ffffffff81409a99>] cgroup_show_path+0x29/0x1e0
[ 41.668698] [<ffffffff81832a81>] kernfs_sop_show_path+0x171/0x290
[ 41.669416] [<ffffffff8178826a>] show_mountinfo+0x32a/0xe80
[ 41.670115] [<ffffffff816e83c2>] m_show+0x72/0xb0
[ 41.670713] [<ffffffff817014ca>] seq_read+0xeda/0x1f20
[ 41.671348] [<ffffffff8165d65d>] __vfs_read+0x12d/0x910
[ 41.672023] [<ffffffff8165df6f>] vfs_read+0x12f/0x380
[ 41.672660] [<ffffffff8165f7da>] SyS_read+0x10a/0x260
[ 41.673283] [<ffffffff83302600>] entry_SYSCALL_64_fastpath+0x23/0xc1
[ 41.674060]
[ 41.674060] other info that might help us debug this:
[ 41.674060]
[ 41.674905] Chain exists of:
cgroup_mutex --> &sb->s_type->i_mutex_key --> namespace_sem
[ 41.675941] Possible unsafe locking scenario:
[ 41.675941]
[ 41.676619] CPU0 CPU1
[ 41.677139] ---- ----
[ 41.677655] lock(namespace_sem);
[ 41.678083] lock(&sb->s_type->i_mutex_key);
[ 41.678886] lock(namespace_sem);
[ 41.679591] lock(cgroup_mutex);
[ 41.680005]
[ 41.680005] *** DEADLOCK ***
[ 41.680005]
[ 41.680649] 2 locks held by systemd/1:
[ 41.681060] #0: (&p->lock){+.+.+.}, at: [<ffffffff817006ce>] seq_read+0xde/0x1f20
[ 41.682001] #1: (namespace_sem){+++++.}, at: [<ffffffff816ea05f>] m_start+0x5f/0x760
[ 41.682934]
vm-lkp-wsx03-1G: 1 threads qemu-system-x86_64 -enable-kvm -cpu host with 1G memory
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu host -kernel /pkg/linux/x86_64-randconfig-s2-04161456/gcc-5/b544fb04d8cd4558829c13b8fea88dd0ed8ab10e/vmlinuz-4.6.0-rc3-00059-gb544fb0 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-1G-9/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-s2-04161456-b544fb04d8cd4558829c13b8fea88dd0ed8ab10e-20160416-57707-qo0r06-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s2-04161456 branch=linux-devel/devel-spot-201604161417 commit=b544fb04d8cd4558829c13b8fea88dd0ed8ab10e BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s2-04161456/gcc-5/b544fb04d8cd4558829c13b8fea88dd0ed8ab10e/vmlinuz-4.6.0-rc3-00059-gb544fb0 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-lkp-wsx03-1G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-s2-04161456/gcc-5/b544fb04d8cd4558829c13b8fea88dd0ed8ab10e/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-1G-9::dhcp' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-1G-9 -m 1024 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23608-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-1G-9,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-1G-9,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-1G-9 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-1G-9 -daemonize -display none -monitor null
Thanks,
Xiaolong Ye
[lkp] [Relayout structs] 12fdf14a16: BUG: unable to handle kernel NULL pointer dereference at (null)
by kernel test robot
FYI, we noticed the following change on
git://git.infradead.org/users/willy/linux-dax.git radix-cleanups-2016-04-13
commit 12fdf14a16ac03bc6a809d00967ed969b2bcc0aa ("Relayout structs")
+-----------------------------------------------+------------+------------+
| | 3db14d3ee8 | 12fdf14a16 |
+-----------------------------------------------+------------+------------+
| boot_successes | 6 | 11 |
| boot_failures | 0 | 17 |
| BUG:unable_to_handle_kernel | 0 | 15 |
| Oops | 0 | 15 |
| RIP:_raw_spin_lock | 0 | 15 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 15 |
| backtrace:vfs_write | 0 | 17 |
| backtrace:SyS_write | 0 | 17 |
| backtrace:populate_rootfs | 0 | 15 |
| backtrace:kernel_init_freeable | 0 | 15 |
| WARNING:at_lib/list_debug.c:#__list_del_entry | 0 | 2 |
+-----------------------------------------------+------------+------------+
[ 0.774441] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.776639] PCI: CLS 0 bytes, default 64
[ 0.777819] Unpacking initramfs...
[ 2.691761] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 2.694841] IP: [<ffffffff818f75a3>] _raw_spin_lock+0x13/0x30
[ 2.696212] PGD 0
[ 2.697102] Oops: 0002 [#1] SMP
[ 2.698219] Modules linked in:
[ 2.699233] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.6.0-rc2-00049-g12fdf14 #1
[ 2.701216] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 2.703360] task: ffff880035708000 ti: ffff880035710000 task.ti: ffff880035710000
[ 2.705376] RIP: 0010:[<ffffffff818f75a3>] [<ffffffff818f75a3>] _raw_spin_lock+0x13/0x30
[ 2.707523] RSP: 0000:ffff880035713988 EFLAGS: 00010046
[ 2.708755] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 2.710216] RDX: 0000000000000001 RSI: ffff8800339d4b80 RDI: 0000000000000000
[ 2.711677] RBP: ffff8800357139a8 R08: 00000000339d4b01 R09: ffff8800339d4b90
[ 2.713134] R10: 000000000000003f R11: 0000000000000001 R12: ffff8800b39d4b80
[ 2.714652] R13: ffff8800339d4b80 R14: 0000000000000000 R15: 0000000000000001
[ 2.716143] FS: 0000000000000000(0000) GS:ffff880037000000(0000) knlGS:0000000000000000
[ 2.718214] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2.719496] CR2: 0000000000000000 CR3: 0000000001e06000 CR4: 00000000000006f0
[ 2.720971] Stack:
[ 2.721776] ffffffff811a26b8 ffffea000083fcc0 ffff88002117d108 ffff88002117d0f0
[ 2.724050] ffff880035713a00 ffffffff81174777 ffff880035713a10 ffff880035654000
[ 2.726349] ffff8800339d4b68 ffff8800339d4b98 ffffea000083fcc0 000000000000000f
[ 2.728634] Call Trace:
[ 2.729485] [<ffffffff811a26b8>] ? list_lru_del+0x58/0x130
[ 2.730756] [<ffffffff81174777>] __add_to_page_cache_locked+0x137/0x290
[ 2.732161] [<ffffffff8117492a>] add_to_page_cache_lru+0x3a/0xc0
[ 2.733484] [<ffffffff81174ad4>] pagecache_get_page+0x124/0x250
[ 2.734823] [<ffffffff81174c26>] grab_cache_page_write_begin+0x26/0x40
[ 2.736212] [<ffffffff81223218>] simple_write_begin+0x28/0x1d0
[ 2.737521] [<ffffffff81173f8f>] generic_perform_write+0xbf/0x1c0
[ 2.738856] [<ffffffff811763a0>] __generic_file_write_iter+0x190/0x1f0
[ 2.740239] [<ffffffff810a1039>] ? __might_sleep+0x49/0x80
[ 2.741507] [<ffffffff811764e4>] generic_file_write_iter+0xe4/0x1e0
[ 2.742867] [<ffffffff82022772>] ? bunzip2+0x41f/0x41f
[ 2.744072] [<ffffffff811fa3ca>] __vfs_write+0xaa/0xe0
[ 2.745305] [<ffffffff811fb209>] vfs_write+0xa9/0x190
[ 2.746510] [<ffffffff82022772>] ? bunzip2+0x41f/0x41f
[ 2.747720] [<ffffffff811fc556>] SyS_write+0x46/0xa0
[ 2.748911] [<ffffffff81fdb309>] xwrite+0x2a/0x5f
[ 2.750049] [<ffffffff81fdb3c8>] do_copy+0x8a/0xb8
[ 2.751217] [<ffffffff81fdb0f7>] write_buffer+0x23/0x34
[ 2.752442] [<ffffffff81fdb133>] flush_buffer+0x2b/0x85
[ 2.753676] [<ffffffff81fdb108>] ? write_buffer+0x34/0x34
[ 2.754959] [<ffffffff82022a15>] __gunzip+0x299/0x341
[ 2.756173] [<ffffffff82022abd>] ? __gunzip+0x341/0x341
[ 2.757401] [<ffffffff82022ace>] gunzip+0x11/0x13
[ 2.758573] [<ffffffff81fdb059>] ? md_run_setup+0x9a/0x9a
[ 2.759826] [<ffffffff81fdb9af>] unpack_to_rootfs+0x171/0x275
[ 2.761164] [<ffffffff81fdb059>] ? md_run_setup+0x9a/0x9a
[ 2.762447] [<ffffffff81fdbab3>] ? unpack_to_rootfs+0x275/0x275
[ 2.763834] [<ffffffff81fdbb13>] populate_rootfs+0x60/0x114
[ 2.765113] [<ffffffff81002123>] do_one_initcall+0xb3/0x1d0
[ 2.766533] [<ffffffff81fda11b>] kernel_init_freeable+0x192/0x21f
[ 2.767892] [<ffffffff818ea76e>] kernel_init+0xe/0x110
[ 2.769112] [<ffffffff818f7a42>] ret_from_fork+0x22/0x40
[ 2.770369] [<ffffffff818ea760>] ? rest_init+0x90/0x90
[ 2.771688] Code: 89 c6 e8 21 0f 7d ff 66 90 5d c3 0f 1f 00 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 65 ff 05 bc 5d 71 7e 31 c0 ba 01 00 00 00 <3e> 0f b1 17 85 c0 75 01 c3 55 89 c6 48 89 e5 e8 e9 0e 7d ff 66
[ 2.780902] RIP [<ffffffff818f75a3>] _raw_spin_lock+0x13/0x30
[ 2.782335] RSP <ffff880035713988>
[ 2.783326] CR2: 0000000000000000
[ 2.784314] ---[ end trace ff42f42dd0b1f55c ]---
[ 2.785491] Kernel panic - not syncing: Fatal exception
vm-lkp-wsx03-1G: 1 threads qemu-system-x86_64 -enable-kvm -cpu host with 1G memory
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu host -kernel /pkg/linux/x86_64-rhel/gcc-4.9/12fdf14a16ac03bc6a809d00967ed969b2bcc0aa/vmlinuz-4.6.0-rc2-00049-g12fdf14 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-1G-14/rand_ftq-100%-20x-100000ss-debian-x86_64-2015-02-07.cgz-x86_64-rhel-12fdf14a16ac03bc6a809d00967ed969b2bcc0aa-20160415-9415-80xf1f-0.yaml ARCH=x86_64 kconfig=x86_64-rhel branch=dax/radix-cleanups-2016-04-13 commit=12fdf14a16ac03bc6a809d00967ed969b2bcc0aa BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/12fdf14a16ac03bc6a809d00967ed969b2bcc0aa/vmlinuz-4.6.0-rc2-00049-g12fdf14 max_uptime=731 RESULT_ROOT=/result/ftq/100%-20x-100000ss/vm-lkp-wsx03-1G/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/12fdf14a16ac03bc6a809d00967ed969b2bcc0aa/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-1G-14::dhcp' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-1G-14 -m 1024 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23613-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-1G-14,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-1G-14,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-1G-14 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-1G-14 -daemonize -display none -monitor null
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Kernel Test Robot
[lkp] [mountinfo] bc61415206: INFO: possible circular locking dependency detected
by kernel test robot
FYI, we noticed the following change on
https://git.kernel.org/pub/scm/linux/kernel/git/sergeh/linux-security 2016-04-14/kernfs.show
commit bc61415206b5a88c200d1f47bce654a8964a5f4e ("mountinfo: implement show_path for kernfs and cgroup")
+----------------------------------------------------+------------+------------+
| | 90de6800c2 | bc61415206 |
+----------------------------------------------------+------------+------------+
| boot_successes | 4 | 1 |
| boot_failures | 1 | 5 |
| invoked_oom-killer:gfp_mask=0x | 1 | 1 |
| Mem-Info | 1 | 1 |
| Out_of_memory:Kill_process | 1 | 1 |
| backtrace:_do_fork | 1 | 5 |
| backtrace:SyS_clone | 1 | 1 |
| backtrace:do_compat_writev | 0 | 1 |
| backtrace:compat_SyS_writev | 0 | 1 |
| INFO:possible_circular_locking_dependency_detected | 0 | 4 |
| backtrace:do_mount | 0 | 4 |
| backtrace:SyS_mount | 0 | 4 |
| backtrace:vfs_read | 0 | 4 |
| backtrace:SyS_read | 0 | 4 |
+----------------------------------------------------+------------+------------+
[ 23.823486] input: ImExPS/2 BYD TouchPad as /devices/platform/i8042/serio1/input/input3
[ 23.912770]
[ 23.913569] ======================================================
[ 23.914920] [ INFO: possible circular locking dependency detected ]
[ 23.916286] 4.6.0-rc3-00058-gbc61415 #105 Not tainted
[ 23.917496] -------------------------------------------------------
[ 23.918857] systemd/1 is trying to acquire lock:
[ 23.920002] (cgroup_mutex){+.+.+.}, at: [<ffffffff81134cd4>] cgroup_show_path+0x31/0xed
[ 23.922397]
[ 23.922397] but task is already holding lock:
[ 23.924159] (namespace_sem){+++++.}, at: [<ffffffff8120a218>] m_start+0x22/0x83
[ 23.926537]
[ 23.926537] which lock already depends on the new lock.
[ 23.926537]
[ 23.929088]
[ 23.929088] the existing dependency chain (in reverse order) is:
[ 23.931077]
-> #4 (namespace_sem){+++++.}:
[ 23.932944] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 23.934375] [<ffffffff81cb1504>] down_write+0x51/0xb4
[ 23.935739] [<ffffffff8120c01d>] lock_mount+0x60/0x19d
[ 23.937114] [<ffffffff8120c572>] do_add_mount+0x23/0xc9
[ 23.938497] [<ffffffff8120dc12>] do_mount+0xa92/0xb75
[ 23.939856] [<ffffffff8120df22>] SyS_mount+0x77/0x9f
[ 23.941200] [<ffffffff81cb353c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 23.942731]
-> #3 (&sb->s_type->i_mutex_key){+.+.+.}:
[ 23.944734] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 23.946135] [<ffffffff81caf48d>] mutex_lock_nested+0x79/0x35f
[ 23.947588] [<ffffffff8124ff0b>] proc_setup_self+0x35/0xf9
[ 23.948999] [<ffffffff81246658>] proc_fill_super+0x86/0x95
[ 23.950418] [<ffffffff81246894>] proc_mount+0xaf/0xf2
[ 23.951788] [<ffffffff811f085b>] mount_fs+0x14/0x8c
[ 23.953118] [<ffffffff8120aa1b>] vfs_kern_mount+0x6c/0x13a
[ 23.954567] [<ffffffff8120ab02>] kern_mount_data+0x19/0x2e
[ 23.955976] [<ffffffff812469cd>] pid_ns_prepare_proc+0x1c/0x30
[ 23.957446] [<ffffffff810bcd20>] alloc_pid+0x2c6/0x387
[ 23.959001] [<ffffffff8109cf8a>] copy_process+0xb8b/0x195e
[ 23.960569] [<ffffffff8109def0>] _do_fork+0xbd/0x656
[ 23.961932] [<ffffffff8109e4b2>] kernel_thread+0x29/0x2b
[ 23.963428] [<ffffffff81ca50cb>] rest_init+0x22/0x13b
[ 23.964860] [<ffffffff827afedb>] start_kernel+0x3fd/0x40a
[ 23.966340] [<ffffffff827af308>] x86_64_start_reservations+0x2a/0x2c
[ 23.967926] [<ffffffff827af442>] x86_64_start_kernel+0x138/0x145
[ 23.969558]
-> #2 (&type->s_umount_key#4/1){+.+...}:
[ 23.971795] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 23.973285] [<ffffffff810e4add>] down_write_nested+0x4b/0xb8
[ 23.974809] [<ffffffff811ef914>] sget+0x257/0x40c
[ 23.976135] [<ffffffff81246857>] proc_mount+0x72/0xf2
[ 23.977580] [<ffffffff811f085b>] mount_fs+0x14/0x8c
[ 23.978992] [<ffffffff8120aa1b>] vfs_kern_mount+0x6c/0x13a
[ 23.980485] [<ffffffff8120ab02>] kern_mount_data+0x19/0x2e
[ 23.981969] [<ffffffff812469cd>] pid_ns_prepare_proc+0x1c/0x30
[ 23.983520] [<ffffffff810bcd20>] alloc_pid+0x2c6/0x387
[ 23.984963] [<ffffffff8109cf8a>] copy_process+0xb8b/0x195e
[ 23.986578] [<ffffffff8109def0>] _do_fork+0xbd/0x656
[ 23.988023] [<ffffffff8109e4b2>] kernel_thread+0x29/0x2b
[ 23.989498] [<ffffffff81ca50cb>] rest_init+0x22/0x13b
[ 23.990938] [<ffffffff827afedb>] start_kernel+0x3fd/0x40a
[ 23.992428] [<ffffffff827af308>] x86_64_start_reservations+0x2a/0x2c
[ 23.994050] [<ffffffff827af442>] x86_64_start_kernel+0x138/0x145
[ 23.995594]
-> #1 (&cgroup_threadgroup_rwsem){++++.+}:
[ 23.997703] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 23.999190] [<ffffffff81cb1504>] down_write+0x51/0xb4
[ 24.000629] [<ffffffff810e4e73>] percpu_down_write+0x2d/0x119
[ 24.002193] [<ffffffff81139a24>] cgroup_apply_control+0x99/0x24e
[ 24.003813] [<ffffffff8113a0d2>] rebind_subsystems+0x142/0x3d6
[ 24.005373] [<ffffffff8113ab74>] cgroup_setup_root+0x15b/0x293
[ 24.006914] [<ffffffff8113b493>] cgroup_mount+0x7e7/0xab5
[ 24.008380] [<ffffffff811f085b>] mount_fs+0x14/0x8c
[ 24.009773] [<ffffffff8120aa1b>] vfs_kern_mount+0x6c/0x13a
[ 24.011353] [<ffffffff8120db6b>] do_mount+0x9eb/0xb75
[ 24.012780] [<ffffffff8120df22>] SyS_mount+0x77/0x9f
[ 24.014182] [<ffffffff81cb353c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.015752]
-> #0 (cgroup_mutex){+.+.+.}:
[ 24.017708] [<ffffffff810ea6a9>] __lock_acquire+0x12d5/0x192a
[ 24.019243] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.020709] [<ffffffff81caf48d>] mutex_lock_nested+0x79/0x35f
[ 24.022206] [<ffffffff81134cd4>] cgroup_show_path+0x31/0xed
[ 24.023798] [<ffffffff81253a73>] kernfs_sop_show_path+0x3c/0x4c
[ 24.025358] [<ffffffff81228c16>] show_mountinfo+0x79/0x248
[ 24.026845] [<ffffffff81209f48>] m_show+0x17/0x19
[ 24.028183] [<ffffffff8120ec8e>] seq_read+0x26a/0x350
[ 24.029640] [<ffffffff811ec0f5>] __vfs_read+0x26/0xb9
[ 24.031106] [<ffffffff811ecd16>] vfs_read+0xa0/0x12e
[ 24.032530] [<ffffffff811edf28>] SyS_read+0x51/0x92
[ 24.033946] [<ffffffff81cb353c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.035565]
[ 24.035565] other info that might help us debug this:
[ 24.035565]
[ 24.038275] Chain exists of:
cgroup_mutex --> &sb->s_type->i_mutex_key --> namespace_sem
[ 24.041634] Possible unsafe locking scenario:
[ 24.041634]
[ 24.043525] CPU0 CPU1
[ 24.044712] ---- ----
[ 24.045893] lock(namespace_sem);
[ 24.047081] lock(&sb->s_type->i_mutex_key);
[ 24.048736] lock(namespace_sem);
[ 24.050305] lock(cgroup_mutex);
[ 24.051536]
[ 24.051536] *** DEADLOCK ***
[ 24.051536]
[ 24.053977] 2 locks held by systemd/1:
[ 24.055057] #0: (&p->lock){+.+.+.}, at: [<ffffffff8120ea60>] seq_read+0x3c/0x350
[ 24.057587] #1: (namespace_sem){+++++.}, at: [<ffffffff8120a218>] m_start+0x22/0x83
[ 24.073031]
[ 24.073031] stack backtrace:
[ 24.074712] CPU: 0 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00058-gbc61415 #105
[ 24.076774] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 24.079032] 0000000000000000 ffff8800365c3b20 ffffffff8154f514 ffffffff8307a760
[ 24.081431] ffffffff830c6fe0 ffff8800365c3b60 ffffffff810e7e85 ffff8800364f8040
[ 24.083759] ffff8800364f88e0 ffff8800364f88a8 0000000000000002 0000000000000002
[ 24.086148] Call Trace:
[ 24.087061] [<ffffffff8154f514>] dump_stack+0x85/0xbe
[ 24.088343] [<ffffffff810e7e85>] print_circular_bug+0x287/0x295
[ 24.089754] [<ffffffff810ea6a9>] __lock_acquire+0x12d5/0x192a
[ 24.091094] [<ffffffff81134cd4>] ? cgroup_show_path+0x31/0xed
[ 24.092437] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.093779] [<ffffffff810eb53c>] ? lock_acquire+0x188/0x223
[ 24.095132] [<ffffffff81134cd4>] ? cgroup_show_path+0x31/0xed
[ 24.096500] [<ffffffff81134cd4>] ? cgroup_show_path+0x31/0xed
[ 24.097803] [<ffffffff81caf48d>] mutex_lock_nested+0x79/0x35f
[ 24.099170] [<ffffffff81134cd4>] ? cgroup_show_path+0x31/0xed
[ 24.100561] [<ffffffff81134cd4>] cgroup_show_path+0x31/0xed
[ 24.101949] [<ffffffff8120f09b>] ? seq_printf+0x3f/0x47
[ 24.103306] [<ffffffff81134cd4>] ? cgroup_show_path+0x31/0xed
[ 24.104680] [<ffffffff81253a73>] kernfs_sop_show_path+0x3c/0x4c
[ 24.106038] [<ffffffff81228c16>] show_mountinfo+0x79/0x248
[ 24.107393] [<ffffffff81209f48>] m_show+0x17/0x19
[ 24.108618] [<ffffffff8120ec8e>] seq_read+0x26a/0x350
[ 24.109913] [<ffffffff811ec0f5>] __vfs_read+0x26/0xb9
[ 24.111177] [<ffffffff810e4b9e>] ? up_write+0x1f/0x3e
[ 24.112420] [<ffffffff811abc5c>] ? vm_mmap_pgoff+0x77/0x9b
[ 24.113762] [<ffffffff811ecd16>] vfs_read+0xa0/0x12e
[ 24.115033] [<ffffffff811edf28>] SyS_read+0x51/0x92
[ 24.116303] [<ffffffff81cb353c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.532144] systemd-journald[2066]: Fixed max_use=48.4M max_size=6.0M min_size=4.0M keep_free=72.6M
vm-lkp-wsx03-1G: 1 threads qemu-system-x86_64 -enable-kvm -cpu host with 1G memory
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu host -kernel /pkg/linux/x86_64-nfsroot/gcc-5/bc61415206b5a88c200d1f47bce654a8964a5f4e/vmlinuz-4.6.0-rc3-00058-gbc61415 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-1G-23/bisect_pft-20x-debian-x86_64-2015-02-07.cgz-x86_64-nfsroot-bc61415206b5a88c200d1f47bce654a8964a5f4e-20160415-122486-ahzbi5-0.yaml ARCH=x86_64 kconfig=x86_64-nfsroot branch=linux-devel/devel-catchup-201604151448 commit=bc61415206b5a88c200d1f47bce654a8964a5f4e BOOT_IMAGE=/pkg/linux/x86_64-nfsroot/gcc-5/bc61415206b5a88c200d1f47bce654a8964a5f4e/vmlinuz-4.6.0-rc3-00058-gbc61415 max_uptime=710 RESULT_ROOT=/result/pft/20x/vm-lkp-wsx03-1G/debian-x86_64-2015-02-07.cgz/x86_64-nfsroot/gcc-5/bc61415206b5a88c200d1f47bce654a8964a5f4e/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-1G-23::dhcp' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-1G-23 -m 1024 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23622-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-1G-23,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-1G-23,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-1G-23 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-1G-23 -daemonize -display none -monitor null
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Kernel Test Robot
[lkp] [mountinfo] f48f469be8: INFO: possible circular locking dependency detected
by kernel test robot
FYI, we noticed the following change on
https://git.kernel.org/pub/scm/linux/kernel/git/sergeh/linux-security 2016-04-16/kernfs.show
commit f48f469be8eb0985316a8562098d1224463ed4fc ("mountinfo: implement show_path for kernfs and cgroup")
+----------------------------------------------------+------------+------------+
| | b15c891b6d | f48f469be8 |
+----------------------------------------------------+------------+------------+
| boot_successes | 10 | 48 |
| boot_failures | 1 | 6 |
| BUG:kernel_test_crashed | 1 | 2 |
| INFO:possible_circular_locking_dependency_detected | 0 | 4 |
| backtrace:do_mount | 0 | 4 |
| backtrace:SyS_mount | 0 | 4 |
| backtrace:_do_fork | 0 | 4 |
| backtrace:vfs_read | 0 | 4 |
| backtrace:SyS_read | 0 | 4 |
| INFO:task_blocked_for_more_than#seconds | 0 | 2 |
| INFO:lockdep_is_turned_off | 0 | 2 |
| RIP:__default_send_IPI_dest_field | 0 | 2 |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 0 | 2 |
| backtrace:vfs_write | 0 | 2 |
| backtrace:SyS_write | 0 | 2 |
| backtrace:watchdog | 0 | 2 |
+----------------------------------------------------+------------+------------+
[ 24.384793] input: ImExPS/2 BYD TouchPad as /devices/platform/i8042/serio1/input/input3
[ 24.506141]
[ 24.506920] ======================================================
[ 24.508344] [ INFO: possible circular locking dependency detected ]
[ 24.509764] 4.6.0-rc3-00059-gf48f469 #220 Not tainted
[ 24.511030] -------------------------------------------------------
[ 24.512443] systemd/1 is trying to acquire lock:
[ 24.513630] (cgroup_mutex){+.+.+.}, at: [<ffffffff81134cf3>] cgroup_show_path+0x50/0x12d
[ 24.516123]
[ 24.516123] but task is already holding lock:
[ 24.517951] (namespace_sem){+++++.}, at: [<ffffffff8120a258>] m_start+0x22/0x83
[ 24.520307]
[ 24.520307] which lock already depends on the new lock.
[ 24.520307]
[ 24.522957]
[ 24.522957] the existing dependency chain (in reverse order) is:
[ 24.525005]
-> #4 (namespace_sem){+++++.}:
[ 24.526923] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.528384] [<ffffffff81cb1544>] down_write+0x51/0xb4
[ 24.529786] [<ffffffff8120c05d>] lock_mount+0x60/0x19d
[ 24.531219] [<ffffffff8120c5b2>] do_add_mount+0x23/0xc9
[ 24.532649] [<ffffffff8120dc52>] do_mount+0xa92/0xb75
[ 24.534056] [<ffffffff8120df62>] SyS_mount+0x77/0x9f
[ 24.535443] [<ffffffff81cb357c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.537033]
-> #3 (&sb->s_type->i_mutex_key){+.+.+.}:
[ 24.539100] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.540551] [<ffffffff81caf4cd>] mutex_lock_nested+0x79/0x35f
[ 24.542078] [<ffffffff8124ff4b>] proc_setup_self+0x35/0xf9
[ 24.543541] [<ffffffff81246698>] proc_fill_super+0x86/0x95
[ 24.545010] [<ffffffff812468d4>] proc_mount+0xaf/0xf2
[ 24.546414] [<ffffffff811f089b>] mount_fs+0x14/0x8c
[ 24.547799] [<ffffffff8120aa5b>] vfs_kern_mount+0x6c/0x13a
[ 24.549267] [<ffffffff8120ab42>] kern_mount_data+0x19/0x2e
[ 24.550740] [<ffffffff81246a0d>] pid_ns_prepare_proc+0x1c/0x30
[ 24.552269] [<ffffffff810bcd20>] alloc_pid+0x2c6/0x387
[ 24.553692] [<ffffffff8109cf8a>] copy_process+0xb8b/0x195e
[ 24.555248] [<ffffffff8109def0>] _do_fork+0xbd/0x656
[ 24.556644] [<ffffffff8109e4b2>] kernel_thread+0x29/0x2b
[ 24.558103] [<ffffffff81ca510b>] rest_init+0x22/0x13b
[ 24.559507] [<ffffffff827afedb>] start_kernel+0x3fd/0x40a
[ 24.561002] [<ffffffff827af308>] x86_64_start_reservations+0x2a/0x2c
[ 24.562619] [<ffffffff827af442>] x86_64_start_kernel+0x138/0x145
[ 24.564162]
-> #2 (&type->s_umount_key#4/1){+.+...}:
[ 24.566356] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.567813] [<ffffffff810e4add>] down_write_nested+0x4b/0xb8
[ 24.569323] [<ffffffff811ef954>] sget+0x257/0x40c
[ 24.570678] [<ffffffff81246897>] proc_mount+0x72/0xf2
[ 24.572105] [<ffffffff811f089b>] mount_fs+0x14/0x8c
[ 24.573486] [<ffffffff8120aa5b>] vfs_kern_mount+0x6c/0x13a
[ 24.574953] [<ffffffff8120ab42>] kern_mount_data+0x19/0x2e
[ 24.576427] [<ffffffff81246a0d>] pid_ns_prepare_proc+0x1c/0x30
[ 24.577956] [<ffffffff810bcd20>] alloc_pid+0x2c6/0x387
[ 24.579377] [<ffffffff8109cf8a>] copy_process+0xb8b/0x195e
[ 24.580924] [<ffffffff8109def0>] _do_fork+0xbd/0x656
[ 24.582332] [<ffffffff8109e4b2>] kernel_thread+0x29/0x2b
[ 24.583770] [<ffffffff81ca510b>] rest_init+0x22/0x13b
[ 24.585182] [<ffffffff827afedb>] start_kernel+0x3fd/0x40a
[ 24.586634] [<ffffffff827af308>] x86_64_start_reservations+0x2a/0x2c
[ 24.588231] [<ffffffff827af442>] x86_64_start_kernel+0x138/0x145
[ 24.589764]
-> #1 (&cgroup_threadgroup_rwsem){++++.+}:
[ 24.591851] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.593312] [<ffffffff81cb1544>] down_write+0x51/0xb4
[ 24.594718] [<ffffffff810e4e73>] percpu_down_write+0x2d/0x119
[ 24.596231] [<ffffffff81139a64>] cgroup_apply_control+0x99/0x24e
[ 24.597763] [<ffffffff8113a112>] rebind_subsystems+0x142/0x3d6
[ 24.599279] [<ffffffff8113abb4>] cgroup_setup_root+0x15b/0x293
[ 24.600790] [<ffffffff8113b4d3>] cgroup_mount+0x7e7/0xab5
[ 24.602270] [<ffffffff811f089b>] mount_fs+0x14/0x8c
[ 24.603649] [<ffffffff8120aa5b>] vfs_kern_mount+0x6c/0x13a
[ 24.605121] [<ffffffff8120dbab>] do_mount+0x9eb/0xb75
[ 24.606524] [<ffffffff8120df62>] SyS_mount+0x77/0x9f
[ 24.607920] [<ffffffff81cb357c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.609508]
-> #0 (cgroup_mutex){+.+.+.}:
[ 24.611446] [<ffffffff810ea6a9>] __lock_acquire+0x12d5/0x192a
[ 24.612952] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.614409] [<ffffffff81caf4cd>] mutex_lock_nested+0x79/0x35f
[ 24.615894] [<ffffffff81134cf3>] cgroup_show_path+0x50/0x12d
[ 24.617378] [<ffffffff81253ab3>] kernfs_sop_show_path+0x3c/0x4c
[ 24.618911] [<ffffffff81228c56>] show_mountinfo+0x79/0x248
[ 24.620386] [<ffffffff81209f88>] m_show+0x17/0x19
[ 24.621717] [<ffffffff8120ecce>] seq_read+0x26a/0x350
[ 24.623121] [<ffffffff811ec135>] __vfs_read+0x26/0xb9
[ 24.624549] [<ffffffff811ecd56>] vfs_read+0xa0/0x12e
[ 24.625960] [<ffffffff811edf68>] SyS_read+0x51/0x92
[ 24.627320] [<ffffffff81cb357c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 24.628886]
[ 24.628886] other info that might help us debug this:
[ 24.628886]
[ 24.631558] Chain exists of:
cgroup_mutex --> &sb->s_type->i_mutex_key --> namespace_sem
[ 24.634648] Possible unsafe locking scenario:
[ 24.634648]
[ 24.636558] CPU0 CPU1
[ 24.637721] ---- ----
[ 24.638869] lock(namespace_sem);
[ 24.640033] lock(&sb->s_type->i_mutex_key);
[ 24.641721] lock(namespace_sem);
[ 24.643293] lock(cgroup_mutex);
[ 24.644444]
[ 24.644444] *** DEADLOCK ***
[ 24.644444]
[ 24.646750] 2 locks held by systemd/1:
[ 24.660414] #0: (&p->lock){+.+.+.}, at: [<ffffffff8120eaa0>] seq_read+0x3c/0x350
[ 24.662842] #1: (namespace_sem){+++++.}, at: [<ffffffff8120a258>] m_start+0x22/0x83
[ 24.665317]
[ 24.665317] stack backtrace:
[ 24.666899] CPU: 0 PID: 1 Comm: systemd Not tainted 4.6.0-rc3-00059-gf48f469 #220
[ 24.668928] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 24.671140] 0000000000000000 ffff8800365c3b20 ffffffff8154f554 ffffffff8307a760
[ 24.673459] ffffffff830c6fe0 ffff8800365c3b60 ffffffff810e7e85 ffff8800364f8040
[ 24.675784] ffff8800364f88e0 ffff8800364f88a8 0000000000000002 0000000000000002
[ 24.678113] Call Trace:
[ 24.679005] [<ffffffff8154f554>] dump_stack+0x85/0xbe
[ 24.680299] [<ffffffff810e7e85>] print_circular_bug+0x287/0x295
[ 24.681635] [<ffffffff810ea6a9>] __lock_acquire+0x12d5/0x192a
[ 24.682992] [<ffffffff81134cf3>] ? cgroup_show_path+0x50/0x12d
[ 24.684376] [<ffffffff810eb53c>] lock_acquire+0x188/0x223
[ 24.685646] [<ffffffff810eb53c>] ? lock_acquire+0x188/0x223
[ 24.686964] [<ffffffff81134cf3>] ? cgroup_show_path+0x50/0x12d
[ 24.688356] [<ffffffff81134cf3>] ? cgroup_show_path+0x50/0x12d
[ 24.689686] [<ffffffff81caf4cd>] mutex_lock_nested+0x79/0x35f
[ 24.691040] [<ffffffff81134cf3>] ? cgroup_show_path+0x50/0x12d
[ 24.692381] [<ffffffff81134cf3>] cgroup_show_path+0x50/0x12d
[ 24.693717] [<ffffffff8120f0db>] ? seq_printf+0x3f/0x47
[ 24.694976] [<ffffffff81134cf3>] ? cgroup_show_path+0x50/0x12d
[ 24.696415] [<ffffffff81253ab3>] kernfs_sop_show_path+0x3c/0x4c
[ 24.697759] [<ffffffff81228c56>] show_mountinfo+0x79/0x248
[ 24.699081] [<ffffffff81209f88>] m_show+0x17/0x19
[ 24.700331] [<ffffffff8120ecce>] seq_read+0x26a/0x350
[ 24.701569] [<ffffffff811ec135>] __vfs_read+0x26/0xb9
[ 24.702793] [<ffffffff810e4b9e>] ? up_write+0x1f/0x3e
[ 24.704054] [<ffffffff811abc9c>] ? vm_mmap_pgoff+0x77/0x9b
[ 24.705354] [<ffffffff811ecd56>] vfs_read+0xa0/0x12e
[ 24.706572] [<ffffffff811edf68>] SyS_read+0x51/0x92
[ 24.707810] [<ffffffff81cb357c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[ 25.141431] systemd-journald[2067]: Fixed max_use=48.4M max_size=6.0M min_size=4.0M keep_free=72.6M
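The lockdep report above is the classic AB-BA inversion: the mountinfo read path holds namespace_sem and then wants cgroup_mutex, while the mount path established the reverse ordering through the dependency chain. The conventional fix is to impose a single global acquisition order so no cycle can form. A minimal userspace sketch of that discipline (names are illustrative stand-ins, not kernel code):

```python
import threading

# Illustrative stand-ins for the two kernel locks in the report above.
namespace_sem = threading.Lock()
cgroup_mutex = threading.Lock()

# Deadlock-prone pattern (what lockdep flagged):
#   CPU0: lock(namespace_sem); lock(cgroup_mutex)
#   CPU1: lock(cgroup_mutex); ... ; lock(namespace_sem)
# Fix sketched here: give every lock a rank and always acquire in
# ascending rank order, so a cycle in the lock graph is impossible.
LOCK_RANK = {id(namespace_sem): 0, id(cgroup_mutex): 1}

def acquire_in_order(*locks):
    """Acquire the given locks in ascending rank order."""
    ordered = sorted(locks, key=lambda l: LOCK_RANK[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in locks:
        lock.release()

results = []

def reader():   # stands in for the /proc mountinfo read path
    held = acquire_in_order(namespace_sem, cgroup_mutex)
    results.append("reader")
    release_all(held)

def mounter():  # stands in for the cgroup mount path
    held = acquire_in_order(cgroup_mutex, namespace_sem)
    results.append("mounter")
    release_all(held)

threads = [threading.Thread(target=f) for f in (reader, mounter) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 100: every thread completed, no deadlock possible
```

Both paths ask for the locks in opposite textual order, but `acquire_in_order` normalizes the actual acquisition order, which is exactly the invariant lockdep verifies at runtime in the kernel.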
vm-lkp-wsx03-1G: 1 threads qemu-system-x86_64 -enable-kvm -cpu host with 1G memory
vm-vp-quantal-x86_64: 2 threads qemu-system-x86_64 -enable-kvm with 360M memory
vm-client-x5355-yocto-ia32: 1 threads qemu-system-x86_64 -enable-kvm -cpu core2duo with 256M memory
vm-client-x5355-openwrt-ia32: 1 threads qemu-system-x86_64 -enable-kvm with 192M memory
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu host -kernel /pkg/linux/x86_64-nfsroot/gcc-5/f48f469be8eb0985316a8562098d1224463ed4fc/vmlinuz-4.6.0-rc3-00059-gf48f469 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-1G-20/bisect_fsmark-1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-nfsroot-f48f469be8eb0985316a8562098d1224463ed4fc-20160417-49685-1aw56ab-0.yaml ARCH=x86_64 kconfig=x86_64-nfsroot branch=linux-devel/devel-catchup-201604170503 commit=f48f469be8eb0985316a8562098d1224463ed4fc BOOT_IMAGE=/pkg/linux/x86_64-nfsroot/gcc-5/f48f469be8eb0985316a8562098d1224463ed4fc/vmlinuz-4.6.0-rc3-00059-gf48f469 max_uptime=3600 RESULT_ROOT=/result/fsmark/1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd/vm-lkp-wsx03-1G/debian-x86_64-2015-02-07.cgz/x86_64-nfsroot/gcc-5/f48f469be8eb0985316a8562098d1224463ed4fc/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-1G-20::dhcp' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-1G-20 -m 1024 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23619-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-1G-20,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-1G-20,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-1G-20 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-1G-20 -daemonize -display none -monitor null
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Kernel Test Robot
[lkp] [Add sancov plugin] 1fd239e064: WARNING: CPU: 0 PID: 0 at lib/locking-selftest.c:974 dotest+0xeb/0x86c
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git kspp/gcc-plugins
commit 1fd239e0649beffc5d1d9c3899be662463e1a8e5 ("Add sancov plugin")
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at lib/locking-selftest.c:974 dotest+0xeb/0x86c
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.6.0-rc3-00063-g1fd239e #1
[ 0.000000] ffffffff82803dd8 ffffffff837c74e8 0000000000000002 0000000000000000
[ 0.000000] 0000000000000001 0000000000000020 ffffffff82803e10 ffffffff8169f949
[ 0.000000] ffffffff82803e60 ffffffff810fbfcc 0000000000000001 000003ce00000009
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff8169f949>] dump_stack+0x1e/0x20
[ 0.000000] [<ffffffff810fbfcc>] __warn+0x30f/0x332
[ 0.000000] [<ffffffff817055dd>] ? arch_local_irq_enable+0x12/0x12
[ 0.000000] [<ffffffff810fc108>] warn_slowpath_null+0x34/0x36
[ 0.000000] [<ffffffff81767803>] dotest+0xeb/0x86c
[ 0.000000] [<ffffffff8171930d>] locking_selftest+0x136/0x1ff9
[ 0.000000] [<ffffffff83a7284a>] start_kernel+0x607/0x8b1
[ 0.000000] [<ffffffff83a704db>] x86_64_start_reservations+0xbf/0xcb
[ 0.000000] [<ffffffff83a70120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffff83a70624>] x86_64_start_kernel+0x13d/0x14a
[ 0.000000] ---[ end trace cb88537fdc8fa200 ]---
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -kernel /pkg/linux/x86_64-randconfig-s3-04141352/gcc-5/1fd239e0649beffc5d1d9c3899be662463e1a8e5/vmlinuz-4.6.0-rc3-00063-g1fd239e -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-x86_64-15/bisect_boot-1-quantal-core-x86_64.cgz-x86_64-randconfig-s3-04141352-1fd239e0649beffc5d1d9c3899be662463e1a8e5-20160414-48517-bttc3w-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s3-04141352 branch=linux-devel/devel-spot-201604141327 commit=1fd239e0649beffc5d1d9c3899be662463e1a8e5 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s3-04141352/gcc-5/1fd239e0649beffc5d1d9c3899be662463e1a8e5/vmlinuz-4.6.0-rc3-00063-g1fd239e max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-x86_64/quantal-core-x86_64.cgz/x86_64-randconfig-s3-04141352/gcc-5/1fd239e0649beffc5d1d9c3899be662463e1a8e5/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-x86_64-15::dhcp drbd.minor_count=8' -initrd /fs/sdh1/initrd-vm-vp-quantal-x86_64-15 -m 360 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-x86_64-15 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-x86_64-15 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot
[lkp] [sched/fair] 1924638bd2: vm-scalability.throughput -38.1% regression
by kernel test robot
FYI, we noticed a -38.1% vm-scalability.throughput regression on
git://bee.sh.intel.com/git/ydu19/tip flat_hierarchy_v2
commit 1924638bd249198512411221bc695f6c0fbf312a ("sched/fair: Drop out incomplete current period when sched averages accrue")
commit:
36312635c6190a9fb44ac2bfdf10ff83aa9feaba
1924638bd249198512411221bc695f6c0fbf312a
36312635c6190a9f 1924638bd249198512411221bc
---------------- --------------------------
%stddev %change %stddev
\ | \
761258 ± 1% -38.1% 471305 ± 0% vm-scalability.throughput
30766 ± 3% -69.8% 9282 ± 1% vm-scalability.time.involuntary_context_switches
50850461 ± 1% -38.1% 31490987 ± 0% vm-scalability.time.minor_page_faults
1137 ± 0% -23.1% 875.00 ± 1% vm-scalability.time.percent_of_cpu_this_job_got
3396 ± 0% -24.7% 2557 ± 1% vm-scalability.time.system_time
87.00 ± 0% +22.8% 106.83 ± 0% vm-scalability.time.user_time
55338718 ± 1% -44.3% 30844608 ± 0% vm-scalability.time.voluntary_context_switches
291952 ± 8% -25.6% 217211 ± 2% numa-numastat.node0.local_node
304311 ± 6% -23.5% 232662 ± 1% numa-numastat.node0.numa_hit
737976 ± 6% -17.7% 607289 ± 5% softirqs.RCU
1243583 ± 1% -10.0% 1119594 ± 0% softirqs.SCHED
1566834 ± 0% -12.9% 1364168 ± 0% vmstat.memory.cache
11.25 ± 11% -35.6% 7.25 ± 15% vmstat.procs.r
346168 ± 1% -41.4% 202800 ± 0% vmstat.system.cs
19722 ± 2% -27.7% 14267 ± 1% vmstat.system.in
6.05 ± 0% -19.9% 4.85 ± 1% turbostat.%Busy
155.75 ± 0% -23.1% 119.75 ± 1% turbostat.Avg_MHz
2.57 ± 3% -84.6% 0.40 ± 44% turbostat.CPU%c3
24.62 ± 4% -58.5% 10.23 ± 1% turbostat.Pkg%pc2
105.98 ± 0% +12.7% 119.44 ± 0% turbostat.RAMWatt
208886 ± 2% -11.5% 184766 ± 2% meminfo.Active
110089 ± 5% -22.8% 84937 ± 5% meminfo.Active(anon)
2055197 ± 1% +13.3% 2328473 ± 0% meminfo.Committed_AS
199489 ± 1% -36.5% 126774 ± 0% meminfo.PageTables
669070 ± 1% -26.3% 493214 ± 0% meminfo.SUnreclaim
747414 ± 1% -23.7% 570415 ± 0% meminfo.Slab
623.00 ± 26% +39.5% 869.00 ± 5% time.file_system_inputs
30766 ± 3% -69.8% 9282 ± 1% time.involuntary_context_switches
50850461 ± 1% -38.1% 31490987 ± 0% time.minor_page_faults
1137 ± 0% -23.1% 875.00 ± 1% time.percent_of_cpu_this_job_got
3396 ± 0% -24.7% 2557 ± 1% time.system_time
87.00 ± 0% +22.8% 106.83 ± 0% time.user_time
55338718 ± 1% -44.3% 30844608 ± 0% time.voluntary_context_switches
4.371e+08 ± 8% -94.1% 25888597 ± 79% cpuidle.C1-BDW.time
1701176 ± 6% -94.5% 94192 ± 1% cpuidle.C1-BDW.usage
3.255e+09 ± 2% -99.1% 29911492 ± 10% cpuidle.C1E-BDW.time
13009780 ± 4% -99.8% 20037 ± 6% cpuidle.C1E-BDW.usage
1.833e+09 ± 2% -90.0% 1.825e+08 ± 40% cpuidle.C3-BDW.time
7018257 ± 2% -96.6% 239993 ± 30% cpuidle.C3-BDW.usage
4.987e+10 ± 0% +11.2% 5.545e+10 ± 0% cpuidle.C6-BDW.time
48862155 ± 1% -14.9% 41602974 ± 2% cpuidle.C6-BDW.usage
27520 ± 5% -22.8% 21233 ± 5% proc-vmstat.nr_active_anon
49872 ± 1% -36.5% 31691 ± 0% proc-vmstat.nr_page_table_pages
167264 ± 1% -26.3% 123301 ± 0% proc-vmstat.nr_slab_unreclaimable
1942 ± 4% +16.2% 2256 ± 4% proc-vmstat.numa_hint_faults
29024 ± 6% -27.0% 21183 ± 2% proc-vmstat.pgactivate
6436 ± 5% -26.9% 4703 ± 12% proc-vmstat.pgalloc_dma32
1256452 ± 0% -10.4% 1125377 ± 0% proc-vmstat.pgalloc_normal
51707765 ± 1% -37.5% 32331204 ± 0% proc-vmstat.pgfault
1105486 ± 4% -10.6% 988169 ± 3% proc-vmstat.pgfree
3192 ± 5% -30.0% 2233 ± 7% slabinfo.UNIX.active_objs
3192 ± 5% -30.0% 2233 ± 7% slabinfo.UNIX.num_objs
7586 ± 2% -12.5% 6636 ± 1% slabinfo.files_cache.active_objs
7586 ± 2% -12.5% 6636 ± 1% slabinfo.files_cache.num_objs
12639 ± 1% -12.5% 11065 ± 1% slabinfo.pid.active_objs
12639 ± 1% -12.5% 11065 ± 1% slabinfo.pid.num_objs
5888 ± 4% -21.9% 4598 ± 7% slabinfo.sock_inode_cache.active_objs
5888 ± 4% -21.9% 4598 ± 7% slabinfo.sock_inode_cache.num_objs
2529959 ± 1% -37.4% 1584447 ± 0% slabinfo.vm_area_struct.active_objs
57499 ± 1% -37.4% 36010 ± 0% slabinfo.vm_area_struct.active_slabs
2529980 ± 1% -37.4% 1584462 ± 0% slabinfo.vm_area_struct.num_objs
57499 ± 1% -37.4% 36010 ± 0% slabinfo.vm_area_struct.num_slabs
vm-scalability.throughput
850000 ++-----------------------------------------------------------------+
| |
800000 *+*..*.*.. .*.*.. .*. .*.. .*.*. |
| *.*.*. *.*.*..*.*..* *..*.* *.*. *..*.*..*.|
750000 ++ *
700000 ++ |
| |
650000 ++ |
| |
600000 ++ |
550000 ++ |
| |
500000 ++ |
O O O O O O O O O O O O O O O O O O O O O
450000 ++--------------------O-O-O--O-O--O-O-O----------------------------+
vm-scalability.time.user_time
130 ++--------------------------------------------------------------------+
125 ++ O |
O O O O O O O |
120 ++ O |
115 ++ |
| O O |
110 ++ O O O O O O O O
105 ++ O O O O O O O O O O |
100 ++ |
| |
95 ++ |
90 ++ .*. .*. .*.*..*. |
*..*.*..*.*.. .*..*.*..*.*..*.*..*.*. *..*.*. *. *..*.*..*.*
85 ++ * |
80 ++--------------------------------------------------------------------+
vm-scalability.time.system_time
3600 ++-------------------------------------------------------------------+
*. .*.*.. .*.. .*.. .*. .*. .*. |
3400 ++*. *.*..*.*..* * *.*..* *..*.*. *..*.*. *..*.*..*.*
3200 ++ |
| |
3000 ++ |
| |
2800 ++ |
| |
2600 ++ O O O O O O
2400 ++ O O O O O O |
| O O |
2200 O+O O O O O O O O O O O O |
| O O |
2000 ++-------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
1200 ++-------------------------------------------------------------------+
1150 *+ .*.*.. .*.. .*.. .*. .*. .*..*. |
| *. *.*..*.*..* * *.*..* *..*.*. *..* *..*.*..*.*
1100 ++ |
1050 ++ |
| |
1000 ++ |
950 ++ |
900 ++ |
| O O O O O O
850 ++ O O O O O O |
800 ++ O |
O O O O O O O O O O O O O O |
750 ++ O O |
700 ++-------------------------------------------------------------------+
vm-scalability.time.minor_page_faults
5.5e+07 ++----------------------------------------------------------------+
*.*.. .*.*. .*.*.. |
| *.*.*.. .*. *..*.*.*..*.*.*.. .*.*..*. .* *.*.*..*.|
5e+07 ++ * * *. *
| |
| |
4.5e+07 ++ |
| |
4e+07 ++ |
| |
| |
3.5e+07 ++ |
| |
O O O O O O O O O O O O O O O O O O O O O
3e+07 ++-------------------O--O-O-O--O-O-O--O---------------------------+
vm-scalability.time.voluntary_context_switches
6e+07 ++---------------*------------------------------------------------+
*.*..*.*.*.. .. *. .*.. .*. .*.*.*.. |
5.5e+07 ++ *.* *..*.*.*..*.* *.*.*. *. *.*.*..*.*
| |
5e+07 ++ |
| |
4.5e+07 ++ |
| |
4e+07 ++ |
| |
3.5e+07 ++ |
O O O O O O O O O O O |
3e+07 ++ O O O O O O O O O O O O O O O O O O
| |
2.5e+07 ++----------------------------------------------------------------+
vm-scalability.time.involuntary_context_switches
35000 ++------------------------------------------------------------------+
*.*.. *. .*.. *.. |
30000 ++ *. .*.. .. *.*..* *.*..*.*.*..*.*.. + *.*
| *..*.*..*.* *.*..*.* * |
| |
25000 ++ |
| |
20000 ++ |
| |
15000 ++ |
| |
| |
10000 O+O O O O O O O O O O O O O O O O O O O O O O O O O O O O
| |
5000 ++------------------------------------------------------------------+
Test machine: lkp-bdw-ex2
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
yexl
[lkp] [test] 265bbb5704: BUG: unable to handle kernel paging request at e8e04dec
by kernel test robot
FYI, we noticed a kernel panic due to "BUG: unable to handle kernel paging request at e8e04dec"
with your commit.
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git 20160412-sysdata-v1
commit 265bbb5704d63eac3bdf81fe28b566b572a10d89 ("test: add sysdata loader tester")
[ 4.716154] Driver for 1-wire Dallas network protocol.
[ 5.108702] f71882fg: Not a Fintek device
[ 5.109238] f71882fg: Not a Fintek device
[ 5.496811] BUG: unable to handle kernel paging request at e8e04dec
[ 5.497564] IP: [<c13bff9e>] kobject_get+0xe/0xb0
[ 5.498132] *pdpt = 000000000209c001 *pde = 0000000000000000
[ 5.498782] Oops: 0000 [#1] PREEMPT SMP
[ 5.499216] Modules linked in:
[ 5.499587] CPU: 0 PID: 25 Comm: kworker/0:1 Not tainted 4.6.0-rc3-next-20160412-00002-g265bbb5 #261
[ 5.500565] Workqueue: events do_init_work
[ 5.501030] task: ca034000 ti: ca036000 task.ti: ca036000
[ 5.501608] EIP: 0060:[<c13bff9e>] EFLAGS: 00010286 CPU: 0
[ 5.502232] EIP is at kobject_get+0xe/0xb0
[ 5.502677] EAX: e8e04dcc EBX: e8e04dcc ECX: 00000000 EDX: 00000000
[ 5.503363] ESI: cb8365c0 EDI: 00000000 EBP: ca037de8 ESP: ca037dcc
[ 5.504050] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 5.504636] CR0: 80050033 CR2: e8e04dec CR3: 0209f000 CR4: 000006b0
[ 5.505314] Stack:
[ 5.505550] c193f43f cb8365b8 cb8365c0 00000000 00000000 e8e04dc4 cb8365c0 ca037e2c
[ 5.506507] c14d311d ffffffff 00000000 ca037e10 c11c2673 c118fc9d c013cd40 cb8365c0
[ 5.507450] cb8365b8 ca037e18 c118fc9d ca037e2c c13c08ba 00000000 cb8365c0 cb8365b8
[ 5.508400] Call Trace:
[ 5.508704] [<c193f43f>] ? klist_init+0x2f/0x50
[ 5.509220] [<c14d311d>] device_add+0xdd/0x650
[ 5.509738] [<c11c2673>] ? kfree+0x93/0x280
[ 5.510232] [<c118fc9d>] ? kfree_const+0x1d/0x30
[ 5.510740] [<c118fc9d>] ? kfree_const+0x1d/0x30
[ 5.511252] [<c13c08ba>] ? kobject_set_name_vargs+0x6a/0xa0
[ 5.511865] [<c14d3897>] device_create_groups_vargs+0xe7/0x100
[ 5.512504] [<c14d393f>] device_create_with_groups+0x2f/0x40
[ 5.513146] [<c14c5e23>] misc_register+0x163/0x1b0
[ 5.513685] [<c13dc44d>] do_init_work+0x4d/0xd0
[ 5.514214] [<c1085994>] process_one_work+0x2c4/0x950
[ 5.514768] [<c10858a7>] ? process_one_work+0x1d7/0x950
[ 5.515359] [<c108639a>] worker_thread+0x37a/0x640
[ 5.515908] [<c1086020>] ? process_one_work+0x950/0x950
[ 5.516477] [<c108c011>] kthread+0xa1/0xc0
[ 5.516951] [<c194af09>] ret_from_kernel_thread+0x21/0x38
[ 5.517546] [<c108bf70>] ? kthread_unpark+0x50/0x50
[ 5.518112] Code: 54 cd c1 89 55 f8 e8 2e 79 da ff e8 ba d2 ff ff 8b 55 f8 eb 9f b8 08 54 cd c1 eb c1 90 55 89 e5 56 53 89 c3 83 ec 14 85 c0 74 38 <f6> 40 20 01 74 6c b8 01 00 00 00 3e 0f c1 43 1c 8d 70 01 31 c9
[ 5.521217] EIP: [<c13bff9e>] kobject_get+0xe/0xb0 SS:ESP 0068:ca037dcc
[ 5.522006] CR2: 00000000e8e04dec
[ 5.522383] ---[ end trace e8742962ac65bac4 ]---
[ 5.522897] Kernel panic - not syncing: Fatal exception
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-i0-201615/gcc-5/265bbb5704d63eac3bdf81fe28b566b572a10d89/vmlinuz-4.6.0-rc3-next-20160412-00002-g265bbb5 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-openwrt-i386-20/rand_boot-1-openwrt-i386.cgz-i386-randconfig-i0-201615-265bbb5704d63eac3bdf81fe28b566b572a10d89-20160413-23967-gfd6wz-0.yaml ARCH=i386 kconfig=i386-randconfig-i0-201615 branch=mcgrof-next/20160412-sysdata-v1 commit=265bbb5704d63eac3bdf81fe28b566b572a10d89 BOOT_IMAGE=/pkg/linux/i386-randconfig-i0-201615/gcc-5/265bbb5704d63eac3bdf81fe28b566b572a10d89/vmlinuz-4.6.0-rc3-next-20160412-00002-g265bbb5 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-lkp-wsx03-openwrt-i386/openwrt-i386.cgz/i386-randconfig-i0-201615/gcc-5/265bbb5704d63eac3bdf81fe28b566b572a10d89/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-openwrt-i386-20::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-openwrt-i386-20 -m 192 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-openwrt-i386-20,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-openwrt-i386-20,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-openwrt-i386-20 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-openwrt-i386-20 -daemonize -display none -monitor null
Thanks,
Xiaolong Ye
[test] 4e0b3b61e0: BUG: unable to handle kernel paging request at 3003f02f
by kernel test robot
Greetings,
the 0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git 20160412-sysdata-v2
commit 4e0b3b61e0794fe9908fd5cbb111d188dbad7682
Author: Luis R. Rodriguez <mcgrof(a)kernel.org>
AuthorDate: Mon Feb 29 19:52:08 2016 -0800
Commit: Luis R. Rodriguez <mcgrof(a)kernel.org>
CommitDate: Tue Apr 12 22:31:12 2016 -0700
test: add sysdata loader tester
This add a load tester for the new extensible sysdata
file loader, part firmware_class. The usermode helper
can be safely ignored here.
The sysdata API has two main interfaces, synchronous and
asynchronous, we provide 4 types of trigger tests for
each:
* trigger_request: sync simple loader
* trigger_request_keep: sync, asks for us to manage freeing sysdata
* trigger_request_opt: sync, the file is optional
* trigger_request_opt_default: sync, optional, try test-sysdata.bin
* trigger_async_request: async simple loader
* trigger_async_request_keep: async, asks us to manage freeing sysdata
* trigger_async_request_opt: async, the file is optional
* trigger_async_request_opt_default: async, optional, try test-sysdata.bin
Signed-off-by: Luis R. Rodriguez <mcgrof(a)kernel.org>
+------------------------------------------+------------+------------+------------+
| | 9c3d978891 | 4e0b3b61e0 | b77aedea17 |
+------------------------------------------+------------+------------+------------+
| boot_successes | 70 | 4 | 9 |
| boot_failures | 0 | 20 | 12 |
| BUG:unable_to_handle_kernel | 0 | 20 | 12 |
| Oops | 0 | 20 | |
| EIP_is_at_kobject_get | 0 | 20 | 12 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 20 | 12 |
| backtrace:acpi_get_cpuid | 0 | 20 | 12 |
| backtrace:early_init_pdc | 0 | 20 | 12 |
| backtrace:acpi_early_processor_set_pdc | 0 | 20 | 12 |
| backtrace:acpi_init | 0 | 20 | 12 |
| backtrace:kernel_init_freeable | 0 | 20 | 12 |
| backtrace:misc_register | 0 | 20 | 12 |
| backtrace:do_init_work | 0 | 20 | 12 |
| Oops:#[##] | 0 | 0 | 12 |
+------------------------------------------+------------+------------+------------+
[ 0.853909] UBSAN: Undefined behaviour in drivers/acpi/acpica/dsutils.c:641:16
[ 0.854615] index -1 is out of range for type 'acpi_operand_object *[9]'
[ 0.855149] CPU: 0 PID: 1 Comm: swapper Not tainted 4.6.0-rc3-00002-g4e0b3b6 #1
[ 0.855715] ffffffff 8f445c6c 8f445c48 816bd84c 8f445c58 8172e99e 8f445c58 8276ca14
[ 0.856406] 8f445ca4 8172efa3 821f43cc 8f445c70 8276ca2c 00000296 0000312d 821c3464
[ 0.857092] 00000002 00000020 8f4c0900 00000002 8f442478 8f4c08e0 00000030 8f41ce40
[ 0.857778] Call Trace:
[ 0.857980] [<816bd84c>] dump_stack+0x16/0x1a
[ 0.858331] [<8172e99e>] ubsan_epilogue+0xe/0x40
[ 0.858706] [<8172efa3>] __ubsan_handle_out_of_bounds+0x63/0x70
[ 0.859183] [<8187b60c>] acpi_ds_create_operand+0x1ff/0x287
[ 0.859624] [<8187b786>] acpi_ds_create_operands+0xf2/0x134
[ 0.860079] [<81868e09>] ? acpi_os_release_object+0x8/0xc
[ 0.860506] [<8189bfcf>] ? acpi_ut_delete_generic_state+0x13/0x15
[ 0.860993] [<818929a5>] ? acpi_ps_pop_scope+0xb4/0x109
[ 0.861507] [<8187bd48>] acpi_ds_exec_end_op+0xe2/0x4c3
[ 0.861937] [<81891e8a>] ? acpi_ps_complete_op+0x24c/0x25f
[ 0.862379] [<818917cb>] acpi_ps_parse_loop+0x624/0x668
[ 0.862798] [<81898dd3>] ? acpi_ut_remove_reference+0x25/0x28
[ 0.863250] [<818925c8>] acpi_ps_parse_aml+0x95/0x284
[ 0.863651] [<81892fd7>] acpi_ps_execute_method+0x1a6/0x1d7
[ 0.864100] [<8188c37c>] acpi_ns_evaluate+0x1dc/0x25b
[ 0.864504] [<8188f947>] acpi_evaluate_object+0x105/0x1fb
[ 0.864940] [<8187025c>] acpi_get_phys_id+0x32/0x195
[ 0.865337] [<818703ea>] acpi_get_cpuid+0xb/0x15
[ 0.865707] [<82acbf79>] early_init_pdc+0x81/0x9a
[ 0.866087] [<8188f606>] acpi_ns_walk_namespace+0xdf/0x19c
[ 0.866524] [<8188fb8d>] acpi_walk_namespace+0x7a/0xa5
[ 0.866940] [<82acbef8>] ? acpi_processor_init+0x1e/0x1e
[ 0.867362] [<82acbfb0>] acpi_early_processor_set_pdc+0x1e/0x36
[ 0.867835] [<82acbef8>] ? acpi_processor_init+0x1e/0x1e
[ 0.868259] [<82acb7ec>] acpi_init+0x12b/0x253
[ 0.868617] [<82acb6c1>] ? acpi_sleep_init+0xdd/0xdd
[ 0.869018] [<82a91e91>] do_one_initcall+0x162/0x1e2
[ 0.869421] [<8108d054>] ? parse_args+0x1f4/0x4a0
[ 0.869800] [<82a92117>] ? kernel_init_freeable+0x206/0x317
[ 0.870242] [<82a9219b>] kernel_init_freeable+0x28a/0x317
[ 0.870671] [<81f2255c>] kernel_init+0xc/0x120
[ 0.871032] [<81095854>] ? schedule_tail+0x14/0x90
[ 0.871411] [<81f2c1e8>] ret_from_kernel_thread+0x20/0x34
[ 0.871849] [<81f22550>] ? rest_init+0xb0/0xb0
[ 0.872206] ================================================================================
[ 0.873218] ACPI: Interpreter enabled
[ 0.873530] ACPI: (supports S0 S5)
git bisect start b77aedea17833fb2011afae221a27c5d58a933ee bf16200689118d19de1b8d2a3c314fc21f5dc7bb --
git bisect bad a85ae13a2dfd31598a752cc9b37d266eec2e973d # 15:32 0- 18 Merge 'linux-review/Akshay-Adiga/cpufreq-powernv-Ramp-down-global-pstate-slower-than-local-pstate/20160413-021050' into devel-spot-201604131338
git bisect bad 86c4cdc16018ef8454dabcce5f69d90cc1a5a1ce # 15:40 0- 4 Merge 'linux-review/Maxim-Zhukov/scripts-genksyms-fix-resource-leak/20160413-045831' into devel-spot-201604131338
git bisect bad b0decdcc87f038da6a99973753c2e13bd118680f # 15:51 0- 5 Merge 'linux-review/Guodong-Xu/DTS-for-hi6220-and-HiKey/20160413-080400' into devel-spot-201604131338
git bisect bad a64fe97739da1346b0e3ffc72e69604c3bd98a23 # 15:55 0- 5 Merge 'rcu/rcu/dev' into devel-spot-201604131338
git bisect bad 9e865567fde5eb26dddcb5eb06e4139197342a41 # 16:04 0- 5 Merge 'bpf/master' into devel-spot-201604131338
git bisect bad ba106f83209f2ba98f973f5598b0ebd0f093b8f1 # 16:08 0- 8 Merge 'mcgrof/20160412-sysdata-v2' into devel-spot-201604131338
git bisect good 8eca194f57d3d88d990fe34f465ef9b1b59921fc # 16:18 24+ 0 0day base guard for 'devel-spot-201604131338'
git bisect bad c426df546829bd77e39b45dca4e17ba5e342623f # 16:25 0- 5 add test 1
git bisect bad 4e0b3b61e0794fe9908fd5cbb111d188dbad7682 # 16:33 0- 20 test: add sysdata loader tester
git bisect good 9c3d9788912c9b7c2a5b64e30fddc373baf78dd1 # 16:40 24+ 0 firmware: add an extensible system data helpers
# first bad commit: [4e0b3b61e0794fe9908fd5cbb111d188dbad7682] test: add sysdata loader tester
git bisect good 9c3d9788912c9b7c2a5b64e30fddc373baf78dd1 # 16:42 70+ 0 firmware: add an extensible system data helpers
# extra tests with DEBUG_INFO
git bisect bad 4e0b3b61e0794fe9908fd5cbb111d188dbad7682 # 16:47 0- 50 test: add sysdata loader tester
# extra tests on HEAD of linux-devel/devel-spot-201604131338
git bisect bad b77aedea17833fb2011afae221a27c5d58a933ee # 16:47 0- 12 0day head guard for 'devel-spot-201604131338'
# extra tests on tree/branch mcgrof/20160412-sysdata-v2
git bisect bad 4a99c5afa4776af8f764f2fa218816dc11cc8ade # 16:52 0- 19 add test 0010
# extra tests on tree/branch linus/master
git bisect good 1c74a7f812b135d3df41d7c3671b647aed6467bf # 16:58 67+ 2 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid
# extra tests on tree/branch linux-next/master
git bisect good 54ea972ac5feaa81d03d861e038884130f0fddbb # 17:05 69+ 2 Add linux-next specific files for 20160413
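Each `git bisect bad`/`git bisect good` line above roughly halves the remaining candidate range; the underlying search is a plain binary search over a monotonic good-then-bad predicate. A minimal sketch of that search (illustrative, not git's implementation; the toy history and commit indices are made up):

```python
def first_bad(n_commits, is_bad):
    """Binary search for the first bad commit.

    Commits are indexed oldest (0) to newest (n_commits - 1); commit 0 is
    known good, the last commit is known bad, and badness is monotonic
    (once the regression lands, it stays). This mirrors what `git bisect`
    automates over a real history.
    """
    lo, hi = 0, n_commits - 1          # invariant: lo is good, hi is bad
    tested = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid                   # first bad commit is at or before mid
        else:
            lo = mid                   # regression landed after mid
        tested += 1
    return hi, tested

# Toy history of 30 commits with the regression landing at commit 17:
# only ~log2(30) boots are needed to pinpoint it.
print(first_bad(30, lambda i: i >= 17))  # → (17, 5)
```

This is why the robot can isolate one bad commit out of a large merge window with only a handful of boot tests per bisect step.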
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation

[lkp] [huge tmpfs] d7c7d56ca6: vm-scalability.throughput -5.5% regression
by kernel test robot
FYI, we noticed a -5.5% regression in vm-scalability.throughput on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit d7c7d56ca61aec18e5e0cb3a64e50073c42195f7 ("huge tmpfs: avoid premature exposure of new pagetable")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-mmap-read-rand/vm-scalability
commit:
517348161d2725b8b596feb10c813bf596dc6a47
d7c7d56ca61aec18e5e0cb3a64e50073c42195f7
517348161d2725b8 d7c7d56ca61aec18e5e0cb3a64
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
1801726 ± 0% -5.5% 1702808 ± 0% vm-scalability.throughput
317.89 ± 0% +2.9% 327.15 ± 0% vm-scalability.time.elapsed_time
317.89 ± 0% +2.9% 327.15 ± 0% vm-scalability.time.elapsed_time.max
872240 ± 4% +8.5% 946467 ± 1% vm-scalability.time.involuntary_context_switches
6.73e+08 ± 0% -92.5% 50568722 ± 0% vm-scalability.time.major_page_faults
2109093 ± 9% -25.8% 1564815 ± 7% vm-scalability.time.maximum_resident_set_size
37881 ± 0% +586.9% 260194 ± 0% vm-scalability.time.minor_page_faults
5087 ± 0% +3.7% 5277 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
16047 ± 0% +7.5% 17252 ± 0% vm-scalability.time.system_time
127.19 ± 0% -88.3% 14.93 ± 1% vm-scalability.time.user_time
72572 ± 7% +56.0% 113203 ± 3% cpuidle.C1-HSW.usage
9.879e+08 ± 4% -32.5% 6.67e+08 ± 8% cpuidle.C6-HSW.time
605545 ± 3% -12.9% 527295 ± 1% softirqs.RCU
164170 ± 7% +20.5% 197881 ± 6% softirqs.SCHED
2584429 ± 3% -25.5% 1925241 ± 2% vmstat.memory.free
252507 ± 0% +36.2% 343994 ± 0% vmstat.system.in
2.852e+08 ± 5% +163.9% 7.527e+08 ± 1% numa-numastat.node0.local_node
2.852e+08 ± 5% +163.9% 7.527e+08 ± 1% numa-numastat.node0.numa_hit
2.876e+08 ± 6% +162.8% 7.559e+08 ± 0% numa-numastat.node1.local_node
2.876e+08 ± 6% +162.8% 7.559e+08 ± 0% numa-numastat.node1.numa_hit
6.73e+08 ± 0% -92.5% 50568722 ± 0% time.major_page_faults
2109093 ± 9% -25.8% 1564815 ± 7% time.maximum_resident_set_size
37881 ± 0% +586.9% 260194 ± 0% time.minor_page_faults
127.19 ± 0% -88.3% 14.93 ± 1% time.user_time
94.37 ± 0% +2.0% 96.27 ± 0% turbostat.%Busy
2919 ± 0% +2.0% 2977 ± 0% turbostat.Avg_MHz
5.12 ± 4% -38.7% 3.14 ± 5% turbostat.CPU%c6
2.00 ± 13% -44.8% 1.10 ± 22% turbostat.Pkg%pc2
240.00 ± 0% +4.2% 250.14 ± 0% turbostat.PkgWatt
55.36 ± 3% +16.3% 64.40 ± 2% turbostat.RAMWatt
17609 ±103% -59.4% 7148 ± 72% latency_stats.avg.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
63966 ±152% -68.4% 20204 ± 64% latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
299681 ±123% -89.7% 30889 ± 13% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 35893 ± 10% latency_stats.max.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
90871 ±125% -56.2% 39835 ± 74% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
61821 ± 22% -86.6% 8254 ± 62% latency_stats.sum.sigsuspend.SyS_rt_sigsuspend.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 59392 ±118% latency_stats.sum.throttle_direct_reclaim.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault.do_fault.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 1549096 ± 24% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
639.30 ± 8% -38.8% 391.40 ± 6% slabinfo.RAW.active_objs
639.30 ± 8% -38.8% 391.40 ± 6% slabinfo.RAW.num_objs
555.90 ± 14% -50.7% 274.10 ± 36% slabinfo.nfs_commit_data.active_objs
555.90 ± 14% -50.7% 274.10 ± 36% slabinfo.nfs_commit_data.num_objs
10651978 ± 0% -80.0% 2126718 ± 0% slabinfo.radix_tree_node.active_objs
218915 ± 0% -81.9% 39535 ± 0% slabinfo.radix_tree_node.active_slabs
12259274 ± 0% -81.9% 2213762 ± 0% slabinfo.radix_tree_node.num_objs
218915 ± 0% -81.9% 39535 ± 0% slabinfo.radix_tree_node.num_slabs
8503640 ± 1% -87.8% 1038681 ± 0% meminfo.Active
8155208 ± 1% -91.5% 692744 ± 0% meminfo.Active(file)
47732497 ± 0% +13.9% 54365008 ± 0% meminfo.Cached
38794624 ± 0% +36.4% 52899738 ± 0% meminfo.Inactive
38748440 ± 0% +36.4% 52853183 ± 0% meminfo.Inactive(file)
45315491 ± 0% -24.0% 34459599 ± 0% meminfo.Mapped
2693407 ± 5% -30.7% 1867438 ± 3% meminfo.MemFree
7048370 ± 0% -81.5% 1303216 ± 0% meminfo.SReclaimable
7145508 ± 0% -80.4% 1400313 ± 0% meminfo.Slab
4168849 ± 2% -88.1% 496040 ± 27% numa-meminfo.node0.Active
3987391 ± 1% -91.3% 346768 ± 0% numa-meminfo.node0.Active(file)
23809283 ± 0% +13.8% 27087077 ± 0% numa-meminfo.node0.FilePages
19423374 ± 0% +35.8% 26379857 ± 0% numa-meminfo.node0.Inactive
19402281 ± 0% +35.8% 26356354 ± 0% numa-meminfo.node0.Inactive(file)
22594121 ± 0% -24.1% 17153129 ± 0% numa-meminfo.node0.Mapped
1430871 ± 5% -31.2% 984861 ± 2% numa-meminfo.node0.MemFree
3457483 ± 1% -81.4% 642147 ± 0% numa-meminfo.node0.SReclaimable
3507005 ± 1% -80.3% 692577 ± 0% numa-meminfo.node0.Slab
4349443 ± 3% -87.5% 543711 ± 24% numa-meminfo.node1.Active
4181422 ± 3% -91.7% 346861 ± 1% numa-meminfo.node1.Active(file)
23896184 ± 0% +14.2% 27287954 ± 0% numa-meminfo.node1.FilePages
19329324 ± 0% +37.2% 26528591 ± 0% numa-meminfo.node1.Inactive
19304364 ± 0% +37.3% 26505692 ± 0% numa-meminfo.node1.Inactive(file)
22671758 ± 0% -23.7% 17303673 ± 0% numa-meminfo.node1.Mapped
1299430 ± 7% -32.8% 873435 ± 6% numa-meminfo.node1.MemFree
3589265 ± 1% -81.6% 661650 ± 0% numa-meminfo.node1.SReclaimable
3636880 ± 1% -80.5% 708315 ± 0% numa-meminfo.node1.Slab
994864 ± 1% -91.3% 86711 ± 0% numa-vmstat.node0.nr_active_file
5952715 ± 0% +13.8% 6773427 ± 0% numa-vmstat.node0.nr_file_pages
356982 ± 5% -31.5% 244513 ± 3% numa-vmstat.node0.nr_free_pages
4853127 ± 0% +35.8% 6590709 ± 0% numa-vmstat.node0.nr_inactive_file
394.70 ± 15% -62.9% 146.60 ± 32% numa-vmstat.node0.nr_isolated_file
5649360 ± 0% -24.1% 4288873 ± 0% numa-vmstat.node0.nr_mapped
28030 ± 53% -97.7% 648.30 ± 10% numa-vmstat.node0.nr_pages_scanned
864516 ± 1% -81.4% 160512 ± 0% numa-vmstat.node0.nr_slab_reclaimable
1.522e+08 ± 4% +155.9% 3.893e+08 ± 1% numa-vmstat.node0.numa_hit
1.521e+08 ± 4% +155.9% 3.893e+08 ± 1% numa-vmstat.node0.numa_local
217926 ± 3% -84.4% 33949 ± 2% numa-vmstat.node0.workingset_activate
60138428 ± 2% -72.5% 16533446 ± 0% numa-vmstat.node0.workingset_nodereclaim
4367580 ± 3% +158.4% 11285489 ± 1% numa-vmstat.node0.workingset_refault
1043245 ± 3% -91.7% 86749 ± 1% numa-vmstat.node1.nr_active_file
5974941 ± 0% +14.2% 6823255 ± 0% numa-vmstat.node1.nr_file_pages
323798 ± 7% -33.0% 216945 ± 5% numa-vmstat.node1.nr_free_pages
4829122 ± 1% +37.2% 6627644 ± 0% numa-vmstat.node1.nr_inactive_file
395.80 ± 8% -68.5% 124.80 ± 46% numa-vmstat.node1.nr_isolated_file
5669082 ± 0% -23.7% 4326551 ± 0% numa-vmstat.node1.nr_mapped
32004 ± 60% -99.9% 47.00 ± 9% numa-vmstat.node1.nr_pages_scanned
897351 ± 1% -81.6% 165406 ± 0% numa-vmstat.node1.nr_slab_reclaimable
1.535e+08 ± 4% +154.6% 3.909e+08 ± 0% numa-vmstat.node1.numa_hit
1.535e+08 ± 4% +154.7% 3.909e+08 ± 0% numa-vmstat.node1.numa_local
235134 ± 5% -85.7% 33507 ± 2% numa-vmstat.node1.workingset_activate
59647268 ± 1% -72.1% 16626347 ± 0% numa-vmstat.node1.workingset_nodereclaim
4535102 ± 4% +151.1% 11389137 ± 0% numa-vmstat.node1.workingset_refault
347641 ± 13% +97.0% 684832 ± 0% proc-vmstat.allocstall
7738 ± 9% +236.5% 26042 ± 0% proc-vmstat.kswapd_low_wmark_hit_quickly
2041367 ± 1% -91.5% 173206 ± 0% proc-vmstat.nr_active_file
1233230 ± 0% +11.7% 1378011 ± 0% proc-vmstat.nr_dirty_background_threshold
2466460 ± 0% +11.7% 2756024 ± 0% proc-vmstat.nr_dirty_threshold
11933740 ± 0% +13.9% 13594909 ± 0% proc-vmstat.nr_file_pages
671934 ± 5% -31.1% 463093 ± 3% proc-vmstat.nr_free_pages
9685062 ± 0% +36.5% 13216819 ± 0% proc-vmstat.nr_inactive_file
792.80 ± 10% -67.9% 254.20 ± 34% proc-vmstat.nr_isolated_file
11327952 ± 0% -23.9% 8616859 ± 0% proc-vmstat.nr_mapped
73994 ± 51% -99.1% 657.00 ± 7% proc-vmstat.nr_pages_scanned
1762423 ± 0% -81.5% 325807 ± 0% proc-vmstat.nr_slab_reclaimable
72.30 ± 23% +852.4% 688.60 ± 58% proc-vmstat.nr_vmscan_immediate_reclaim
5392 ± 2% -11.9% 4750 ± 2% proc-vmstat.numa_hint_faults
5.728e+08 ± 5% +163.4% 1.509e+09 ± 0% proc-vmstat.numa_hit
5.728e+08 ± 5% +163.4% 1.509e+09 ± 0% proc-vmstat.numa_local
5638 ± 4% -12.5% 4935 ± 3% proc-vmstat.numa_pte_updates
8684 ± 8% +215.8% 27427 ± 0% proc-vmstat.pageoutrun
3220941 ± 0% -90.2% 315751 ± 0% proc-vmstat.pgactivate
17739240 ± 1% +143.6% 43217427 ± 0% proc-vmstat.pgalloc_dma32
6.6e+08 ± 0% +138.1% 1.572e+09 ± 0% proc-vmstat.pgalloc_normal
6.737e+08 ± 0% -92.4% 51517407 ± 0% proc-vmstat.pgfault
6.767e+08 ± 0% +138.5% 1.614e+09 ± 0% proc-vmstat.pgfree
6.73e+08 ± 0% -92.5% 50568722 ± 0% proc-vmstat.pgmajfault
31567471 ± 1% +91.6% 60472288 ± 0% proc-vmstat.pgscan_direct_dma32
1.192e+09 ± 2% +84.5% 2.199e+09 ± 0% proc-vmstat.pgscan_direct_normal
16309661 ± 0% +150.4% 40841573 ± 0% proc-vmstat.pgsteal_direct_dma32
6.151e+08 ± 0% +140.8% 1.481e+09 ± 0% proc-vmstat.pgsteal_direct_normal
939746 ± 18% +101.3% 1891322 ± 6% proc-vmstat.pgsteal_kswapd_dma32
27432476 ± 4% +162.4% 71970660 ± 2% proc-vmstat.pgsteal_kswapd_normal
4.802e+08 ± 5% -81.5% 88655347 ± 0% proc-vmstat.slabs_scanned
452671 ± 2% -85.1% 67360 ± 1% proc-vmstat.workingset_activate
1.198e+08 ± 1% -72.4% 33135682 ± 0% proc-vmstat.workingset_nodereclaim
8898128 ± 1% +154.6% 22657102 ± 0% proc-vmstat.workingset_refault
613962 ± 12% -18.6% 499880 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
31.47 ± 38% +203.5% 95.52 ± 29% sched_debug.cfs_rq:/.nr_spread_over.max
6.19 ± 32% +150.9% 15.53 ± 24% sched_debug.cfs_rq:/.nr_spread_over.stddev
41.71 ± 51% -42.3% 24.07 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.avg
1094 ±106% -60.9% 427.95 ± 25% sched_debug.cfs_rq:/.runnable_load_avg.max
163.22 ± 92% -63.2% 60.09 ± 28% sched_debug.cfs_rq:/.runnable_load_avg.stddev
613932 ± 12% -18.6% 499833 ± 9% sched_debug.cfs_rq:/.spread0.stddev
35.20 ± 8% -29.1% 24.97 ± 11% sched_debug.cpu.cpu_load[0].avg
731.80 ± 11% -36.1% 467.45 ± 21% sched_debug.cpu.cpu_load[0].max
116.23 ± 10% -43.5% 65.72 ± 23% sched_debug.cpu.cpu_load[0].stddev
35.25 ± 8% -25.6% 26.23 ± 10% sched_debug.cpu.cpu_load[1].avg
722.47 ± 10% -30.2% 504.05 ± 18% sched_debug.cpu.cpu_load[1].max
115.25 ± 10% -38.5% 70.82 ± 19% sched_debug.cpu.cpu_load[1].stddev
35.37 ± 8% -22.4% 27.45 ± 8% sched_debug.cpu.cpu_load[2].avg
721.90 ± 9% -27.7% 521.60 ± 16% sched_debug.cpu.cpu_load[2].max
10.85 ± 14% +16.9% 12.68 ± 6% sched_debug.cpu.cpu_load[2].min
114.93 ± 9% -35.1% 74.62 ± 16% sched_debug.cpu.cpu_load[2].stddev
35.20 ± 8% -21.3% 27.70 ± 5% sched_debug.cpu.cpu_load[3].avg
705.73 ± 9% -29.6% 496.57 ± 13% sched_debug.cpu.cpu_load[3].max
10.95 ± 13% +18.7% 13.00 ± 4% sched_debug.cpu.cpu_load[3].min
112.58 ± 9% -34.8% 73.35 ± 12% sched_debug.cpu.cpu_load[3].stddev
34.96 ± 8% -21.7% 27.39 ± 5% sched_debug.cpu.cpu_load[4].avg
684.63 ± 10% -32.0% 465.83 ± 11% sched_debug.cpu.cpu_load[4].max
11.10 ± 12% +17.7% 13.07 ± 3% sched_debug.cpu.cpu_load[4].min
110.03 ± 9% -36.1% 70.28 ± 10% sched_debug.cpu.cpu_load[4].stddev
293.58 ± 28% +110.8% 618.85 ± 32% sched_debug.cpu.curr->pid.min
18739 ± 3% +10.5% 20713 ± 1% sched_debug.cpu.nr_switches.avg
33332 ± 10% +21.0% 40337 ± 6% sched_debug.cpu.nr_switches.max
4343 ± 10% +34.8% 5852 ± 8% sched_debug.cpu.nr_switches.stddev
19363 ± 3% +9.2% 21136 ± 1% sched_debug.cpu.sched_count.avg
20.35 ± 17% -31.5% 13.93 ± 22% sched_debug.cpu.sched_goidle.min
9245 ± 3% +12.5% 10398 ± 0% sched_debug.cpu.ttwu_count.avg
16837 ± 10% +27.0% 21390 ± 8% sched_debug.cpu.ttwu_count.max
2254 ± 8% +39.5% 3143 ± 8% sched_debug.cpu.ttwu_count.stddev
8052 ± 4% +16.2% 9353 ± 0% sched_debug.cpu.ttwu_local.avg
5846 ± 4% +11.0% 6491 ± 2% sched_debug.cpu.ttwu_local.min
1847 ± 11% +39.8% 2582 ± 8% sched_debug.cpu.ttwu_local.stddev
3.66 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault.__do_fault
0.00 ± -1% +Inf% 1.12 ± 0% perf-profile.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
0.00 ± -1% +Inf% 77.72 ± 0% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
79.28 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.filemap_fault.xfs_filemap_fault
11.43 ± 5% -89.4% 1.21 ± 4% perf-profile.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
0.00 ± -1% +Inf% 96.93 ± 0% perf-profile.cycles-pp.__do_fault.do_fault.handle_mm_fault.__do_page_fault.do_page_fault
91.04 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 96.66 ± 0% perf-profile.cycles-pp.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault.do_fault
29.86 ± 3% -96.9% 0.92 ± 19% perf-profile.cycles-pp.__list_lru_walk_one.isra.3.list_lru_walk_one.scan_shadow_nodes.shrink_slab.shrink_zone
1.59 ± 14% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault
0.00 ± -1% +Inf% 5.67 ± 5% perf-profile.cycles-pp.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages
0.00 ± -1% +Inf% 78.11 ± 0% perf-profile.cycles-pp.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
79.40 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__page_cache_alloc.filemap_fault.xfs_filemap_fault.__do_fault.handle_pte_fault
1.28 ± 4% -38.7% 0.78 ± 1% perf-profile.cycles-pp.__radix_tree_lookup.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
25.30 ± 6% -84.2% 3.99 ± 5% perf-profile.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
0.56 ± 0% +98.2% 1.11 ± 0% perf-profile.cycles-pp.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
0.00 ± -1% +Inf% 1.11 ± 0% perf-profile.cycles-pp.__xfs_get_blocks.xfs_get_blocks.do_mpage_readpage.mpage_readpages.xfs_vm_readpages
0.01 ±133% +30254.3% 2.66 ± 8% perf-profile.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.shrink_page_list
5.07 ± 25% +268.7% 18.71 ± 3% perf-profile.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
9.16 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.list_lru_add.__delete_from_page_cache.__remove_mapping.shrink_page_list
0.69 ± 64% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.list_lru_del.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault
27.69 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one.scan_shadow_nodes
10.77 ± 10% +238.5% 36.45 ± 1% perf-profile.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_zone_memcg.shrink_zone.do_try_to_free_pages
0.35 ± 9% +193.4% 1.02 ± 13% perf-profile.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_zone_memcg.shrink_zone.kswapd
12.86 ± 9% -89.4% 1.36 ± 17% perf-profile.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
1.11 ± 18% +333.5% 4.83 ± 6% perf-profile.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru
5.38 ± 5% -100.0% 0.00 ± -1% perf-profile.cycles-pp.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault.__do_fault.handle_pte_fault
0.00 ± -1% +Inf% 7.15 ± 4% perf-profile.cycles-pp.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault
0.00 ± -1% +Inf% 78.06 ± 0% perf-profile.cycles-pp.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
79.38 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.alloc_pages_current.__page_cache_alloc.filemap_fault.xfs_filemap_fault.__do_fault
0.00 ± -1% +Inf% 97.32 ± 0% perf-profile.cycles-pp.do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5.19 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_mpage_readpage.mpage_readpage.xfs_vm_readpage.filemap_fault.xfs_filemap_fault
0.00 ± -1% +Inf% 10.68 ± 1% perf-profile.cycles-pp.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault
0.72 ± 67% -98.3% 0.01 ± 87% perf-profile.cycles-pp.do_syscall_64.return_from_SYSCALL_64.__libc_fork
72.75 ± 1% -23.2% 55.88 ± 1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
0.00 ± -1% +Inf% 96.86 ± 0% perf-profile.cycles-pp.filemap_fault.xfs_filemap_fault.__do_fault.do_fault.handle_mm_fault
90.80 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.filemap_fault.xfs_filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault
2.39 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.97 ± 12% +321.3% 4.07 ± 6% perf-profile.cycles-pp.free_hot_cold_page.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
1.03 ± 9% +303.4% 4.17 ± 5% perf-profile.cycles-pp.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
0.65 ± 23% +451.9% 3.58 ± 6% perf-profile.cycles-pp.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list
0.00 ± -1% +Inf% 21.18 ± 2% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
6.22 ± 21% -100.0% 0.00 ± -1% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.filemap_fault
94.07 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.57 ± 1% +104.6% 1.16 ± 2% perf-profile.cycles-pp.isolate_lru_pages.isra.47.shrink_inactive_list.shrink_zone_memcg.shrink_zone.do_try_to_free_pages
2.96 ± 7% -30.5% 2.05 ± 9% perf-profile.cycles-pp.kthread.ret_from_fork
9.58 ± 6% -100.0% 0.00 ±229% perf-profile.cycles-pp.list_lru_add.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
1.88 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.list_lru_del.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault
29.08 ± 3% -97.0% 0.89 ± 19% perf-profile.cycles-pp.list_lru_walk_one.scan_shadow_nodes.shrink_slab.shrink_zone.do_try_to_free_pages
1.59 ± 14% -100.0% 0.00 ± -1% perf-profile.cycles-pp.lru_cache_add.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault.__do_fault
0.00 ± -1% +Inf% 5.68 ± 5% perf-profile.cycles-pp.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
5.24 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mpage_readpage.xfs_vm_readpage.filemap_fault.xfs_filemap_fault.__do_fault
0.00 ± -1% +Inf% 18.20 ± 1% perf-profile.cycles-pp.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
2.37 ± 14% +79.9% 4.27 ± 13% perf-profile.cycles-pp.native_flush_tlb_others.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
0.01 ±133% +30322.9% 2.66 ± 8% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list
5.07 ± 25% +268.8% 18.71 ± 3% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current
9.16 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.list_lru_add.__delete_from_page_cache.__remove_mapping
0.75 ± 57% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.list_lru_del.__add_to_page_cache_locked.add_to_page_cache_lru
27.68 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one
11.09 ± 10% +237.5% 37.44 ± 0% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_zone_memcg.shrink_zone
12.76 ± 9% -90.9% 1.17 ± 22% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
1.08 ± 19% +338.2% 4.75 ± 7% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add
1.81 ± 2% -73.7% 0.48 ± 1% perf-profile.cycles-pp.page_check_address_transhuge.page_referenced_one.rmap_walk_file.rmap_walk.page_referenced
3.24 ± 1% -42.5% 1.87 ± 2% perf-profile.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
2.20 ± 2% -66.0% 0.75 ± 5% perf-profile.cycles-pp.page_referenced_one.rmap_walk_file.rmap_walk.page_referenced.shrink_page_list
1.54 ± 14% -100.0% 0.00 ± -1% perf-profile.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.filemap_fault
0.00 ± -1% +Inf% 5.57 ± 5% perf-profile.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages
2.07 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.radix_tree_next_chunk.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault
3.01 ± 6% -31.7% 2.05 ± 9% perf-profile.cycles-pp.ret_from_fork
0.72 ± 67% -98.5% 0.01 ± 94% perf-profile.cycles-pp.return_from_SYSCALL_64.__libc_fork
3.15 ± 1% -46.0% 1.70 ± 1% perf-profile.cycles-pp.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
3.02 ± 2% -48.4% 1.56 ± 1% perf-profile.cycles-pp.rmap_walk_file.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list
29.08 ± 3% -97.0% 0.89 ± 19% perf-profile.cycles-pp.scan_shadow_nodes.shrink_slab.shrink_zone.do_try_to_free_pages.try_to_free_pages
28.89 ± 3% -97.1% 0.84 ± 22% perf-profile.cycles-pp.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one.scan_shadow_nodes.shrink_slab
44.93 ± 4% +21.9% 54.77 ± 1% perf-profile.cycles-pp.shrink_inactive_list.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages
33.07 ± 4% -50.8% 16.28 ± 3% perf-profile.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone.do_try_to_free_pages
1.11 ± 16% -22.6% 0.86 ± 6% perf-profile.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone.kswapd
29.15 ± 3% -96.8% 0.94 ± 16% perf-profile.cycles-pp.shrink_slab.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask
73.07 ± 1% -23.5% 55.91 ± 1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current
45.01 ± 4% +22.1% 54.95 ± 1% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask
2.35 ± 14% +78.9% 4.21 ± 13% perf-profile.cycles-pp.smp_call_function_many.native_flush_tlb_others.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
0.00 ± -1% +Inf% 55.91 ± 1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
72.76 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.filemap_fault
2.38 ± 14% +79.5% 4.28 ± 13% perf-profile.cycles-pp.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
0.58 ± 1% +51.9% 0.88 ± 14% perf-profile.cycles-pp.workingset_eviction.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
0.00 ± -1% +Inf% 96.89 ± 0% perf-profile.cycles-pp.xfs_filemap_fault.__do_fault.do_fault.handle_mm_fault.__do_page_fault
91.02 ± 0% -100.0% 0.00 ± -1% perf-profile.cycles-pp.xfs_filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 1.11 ± 0% perf-profile.cycles-pp.xfs_get_blocks.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
5.26 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.xfs_vm_readpage.filemap_fault.xfs_filemap_fault.__do_fault.handle_pte_fault
0.00 ± -1% +Inf% 18.21 ± 1% perf-profile.cycles-pp.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
lkp-hsw01: Grantley Haswell-EP
Memory: 64G
vm-scalability.time.user_time
140 ++--------------------------------------------------------------------+
|******* ****** *************************************** **** ****** ***
120 *+ * * * * * * * * |
| |
100 ++ |
| |
80 ++ |
| |
60 ++ |
| |
40 ++ |
| |
20 OOOO O O OOO O OO OO OO |
| O OOOOOO OOO O OOOO OO OO |
0 ++--------------------------------------------------------------------+
vm-scalability.time.major_page_faults
7e+08 ++*-***-----*-----**--*-----*---****-*--*-***------------*---***--*-+
** * * ***** ****** ** ********* * **** * ***************** ******
6e+08 ++ |
| |
5e+08 ++ |
| |
4e+08 ++ |
| |
3e+08 ++ |
| |
2e+08 ++ |
| |
1e+08 ++ |
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO |
0 ++------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye