[lkp] [drm] ea32de7004: BUG: lock held when returning to user space!
by kernel test robot
FYI, we noticed the following commit:
git://internal_merge_and_test_tree devel-catchup-201606211544 commit ea32de70041a0f9b8dbb8208a22badb73e40a8ce ("drm: Revamp connector_list protection")
on test machine: vm-kbuild-1G: 2 threads qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap with 1G memory
caused the following changes:
+------------------------------------------------------------------+------------+------------+
| | 760f1ee5ad | ea32de7004 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 6 | 8 |
| genirq:Flags_mismatch_irq##(serial)vs.#(goldfish_pdev_bus) | 6 | 5 |
| backtrace:do_sys_open | 6 | 5 |
| backtrace:SyS_open | 6 | 5 |
| BUG:lock_held_when_returning_to_user_space | 0 | 6 |
| is_leaving_the_kernel_with_locks_still_held | 0 | 6 |
| invoked_oom-killer:gfp_mask=0x | 0 | 2 |
| Mem-Info | 0 | 2 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 0 | 2 |
| backtrace:btrfs_test_extent_io | 0 | 2 |
| backtrace:init_btrfs_fs | 0 | 2 |
| backtrace:kernel_init_freeable | 0 | 2 |
+------------------------------------------------------------------+------------+------------+
[ 26.517205] Freeing unused kernel memory: 992K (ffff880003908000 - ffff880003a00000)
[ 26.523433]
[ 26.527208] ================================================
[ 26.532106] [ BUG: lock held when returning to user space! ]
[ 26.537158] 4.7.0-rc2-00776-gea32de7 #1 Not tainted
[ 26.543816] ------------------------------------------------
[ 26.548589] init/1 is leaving the kernel with locks still held!
[ 26.551801] 4 locks held by init/1:
[ 26.554672] #0: (drm_connector_list_srcu){......}, at: [<ffffffff81aa88c5>] drm_helper_encoder_in_use+0xa5/0x1f0
[ 26.558882] #1: (drm_connector_list_srcu){......}, at: [<ffffffff81aa88c5>] drm_helper_encoder_in_use+0xa5/0x1f0
[ 26.562997] #2: (drm_connector_list_srcu){......}, at: [<ffffffff81aa88c5>] drm_helper_encoder_in_use+0xa5/0x1f0
[ 26.567048] #3: (drm_connector_list_srcu){......}, at: [<ffffffff81aa88c5>] drm_helper_encoder_in_use+0xa5/0x1f0
[ 26.573936] systemd[1]: RTC configured in localtime, applying delta of 480 minutes to system time.
[ 26.578847] random: systemd urandom read with 21 bits of entropy available
[ 26.669696] systemd-default-display-manager-generator[314]: No /etc/X11/default-display-manager file, nothing to generate
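The splat above shows PID 1 returning to user space with four read-side acquisitions of drm_connector_list_srcu, all taken at the same call site in drm_helper_encoder_in_use, which points at an iteration helper that takes the SRCU read lock but skips the unlock on an early-exit path. Below is a minimal userspace sketch of that pattern, not the DRM code: a plain counter stands in for srcu_read_lock()/srcu_read_unlock(), and fake_srcu_depth and the encoder_in_use_* helpers are made-up names for illustration.
----------------------------------------------------------------------------
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for srcu_read_lock()/srcu_read_unlock(): lockdep complains
 * when this depth is still non-zero on return to user space. */
static int fake_srcu_depth;

static void fake_srcu_read_lock(void)   { fake_srcu_depth++; }
static void fake_srcu_read_unlock(void) { fake_srcu_depth--; }

struct connector { bool uses_encoder; };

/* Buggy pattern: an early return inside the read-side critical section
 * skips the unlock, leaking one read lock per matching call. */
static bool encoder_in_use_buggy(struct connector *c, int n)
{
	fake_srcu_read_lock();
	for (int i = 0; i < n; i++)
		if (c[i].uses_encoder)
			return true;	/* leak: no unlock on this path */
	fake_srcu_read_unlock();
	return false;
}

/* Fixed pattern: record the result and fall through to one unlock. */
static bool encoder_in_use_fixed(struct connector *c, int n)
{
	bool in_use = false;

	fake_srcu_read_lock();
	for (int i = 0; i < n; i++)
		if (c[i].uses_encoder) {
			in_use = true;
			break;
		}
	fake_srcu_read_unlock();
	return in_use;
}

int main(void)
{
	struct connector conns[2] = { { true }, { false } };

	/* Four calls, four leaked read locks: the "4 locks held" above. */
	for (int i = 0; i < 4; i++)
		encoder_in_use_buggy(conns, 2);
	printf("buggy: depth on return to user space = %d\n", fake_srcu_depth);

	fake_srcu_depth = 0;
	for (int i = 0; i < 4; i++)
		encoder_in_use_fixed(conns, 2);
	printf("fixed: depth on return to user space = %d\n", fake_srcu_depth);
	return 0;
}
----------------------------------------------------------------------------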
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-randconfig-s0-06212222/gcc-6/ea32de70041a0f9b8dbb8208a22badb73e40a8ce/vmlinuz-4.7.0-rc2-00776-gea32de7 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-1G-11/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-s0-06212222-ea32de70041a0f9b8dbb8208a22badb73e40a8ce-20160622-129410-1s349pe-1.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s0-06212222 branch=linux-devel/devel-hourly-2016062118 commit=ea32de70041a0f9b8dbb8208a22badb73e40a8ce BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s0-06212222/gcc-6/ea32de70041a0f9b8dbb8208a22badb73e40a8ce/vmlinuz-4.7.0-rc2-00776-gea32de7 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-1G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-s0-06212222/gcc-6/ea32de70041a0f9b8dbb8208a22badb73e40a8ce/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-1G-11::dhcp' -initrd /fs/sdf1/initrd-vm-kbuild-1G-11 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23010-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -device virtio-scsi-pci,id=scsi0 -drive file=/fs/sdf1/disk0-vm-kbuild-1G-11,if=none,id=hd0,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0 -drive file=/fs/sdf1/disk1-vm-kbuild-1G-11,if=none,id=hd1,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1 -drive file=/fs/sdf1/disk2-vm-kbuild-1G-11,if=none,id=hd2,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2 -drive file=/fs/sdf1/disk3-vm-kbuild-1G-11,if=none,id=hd3,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3 -drive file=/fs/sdf1/disk4-vm-kbuild-1G-11,if=none,id=hd4,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4 -pidfile /dev/shm/kboot/pid-vm-kbuild-1G-11 -serial file:/dev/shm/kboot/serial-vm-kbuild-1G-11 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [power_supply] 5bc28b93a3: kmsg.thermal_thermal_zone#:failed_to_read_out_thermal_zone(-#)
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply.git fixes
commit 5bc28b93a36e3cb3acc2870fb75cb6ffb182fece ("power_supply: power_supply_read_temp only if use_cnt > 0")
on test machine: vm-kbuild-yocto-ia32: 1 threads qemu-system-x86_64 -enable-kvm -cpu Westmere with 320M memory
caused the following changes:
[ 5.180382] __power_supply_register: Expected proper parent device for 'test_ac'
[ 5.181219] __power_supply_register: Expected proper parent device for 'test_battery'
[ 5.182252] thermal thermal_zone0: failed to read out thermal zone (-19)
[ 5.182916] __power_supply_register: Expected proper parent device for 'test_usb'
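-19 is -ENODEV. A minimal sketch of the interaction, assuming the guard the commit title describes for power_supply_read_temp(): the thermal core polls the zone while the supply's use count is still zero, so the first read fails with -ENODEV. The names here (fake_supply, fake_read_temp) are illustrative, not the power_supply code.
----------------------------------------------------------------------------
#include <errno.h>
#include <stdio.h>

/* Minimal model: a supply whose temperature is only readable once
 * registration has completed (use_cnt > 0). */
struct fake_supply { int use_cnt; int temp_mC; };

static int fake_read_temp(struct fake_supply *psy, int *temp)
{
	if (psy->use_cnt <= 0)
		return -ENODEV;		/* -19: what thermal_zone0 reports */
	*temp = psy->temp_mC;
	return 0;
}

int main(void)
{
	struct fake_supply psy = { .use_cnt = 0, .temp_mC = 42000 };
	int temp, ret;

	/* The thermal core polls the zone during registration, before
	 * the supply is usable: the read fails with -ENODEV. */
	ret = fake_read_temp(&psy, &temp);
	if (ret)
		printf("thermal zone: failed to read out thermal zone (%d)\n",
		       ret);

	psy.use_cnt = 1;		/* registration finished */
	if (!fake_read_temp(&psy, &temp))
		printf("thermal zone: temp=%d mC\n", temp);
	return 0;
}
----------------------------------------------------------------------------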
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Westmere -kernel /pkg/linux/x86_64-randconfig-s3-06222353/gcc-6/5bc28b93a36e3cb3acc2870fb75cb6ffb182fece/vmlinuz-4.7.0-rc1-00001-g5bc28b9 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-ia32-2/bisect_boot-1-yocto-minimal-i386.cgz-x86_64-randconfig-s3-06222353-5bc28b93a36e3cb3acc2870fb75cb6ffb182fece-20160623-81162-1kedsbh-1.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s3-06222353 branch=linux-devel/devel-spot-201606222332 commit=5bc28b93a36e3cb3acc2870fb75cb6ffb182fece BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s3-06222353/gcc-6/5bc28b93a36e3cb3acc2870fb75cb6ffb182fece/vmlinuz-4.7.0-rc1-00001-g5bc28b9 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-ia32/yocto-minimal-i386.cgz/x86_64-randconfig-s3-06222353/gcc-6/5bc28b93a36e3cb3acc2870fb75cb6ffb182fece/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-ia32-2::dhcp drbd.minor_count=8' -initrd /fs/sda1/initrd-vm-kbuild-yocto-ia32-2 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sda1/disk0-vm-kbuild-yocto-ia32-2,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-ia32-2 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-ia32-2 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [thermal] 71765b477c: BUG: KASAN: slab-out-of-bounds in thermal_zone_create_device_groups+0x4c3/0x62a at addr ffff88000883ec20
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 71765b477ce8a5cf0bc84a870a79835b03401df3 ("thermal: core: move thermal_zone sysfs to thermal_sysfs.c")
on test machine: vm-kbuild-yocto-ia32: 1 threads qemu-system-x86_64 -enable-kvm -cpu Westmere with 320M memory
caused the following changes:
+---------------------------------------------------------------------------+------------+------------+
| | 575e51a198 | 71765b477c |
+---------------------------------------------------------------------------+------------+------------+
| boot_successes | 16 | 0 |
| boot_failures | 10 | 44 |
| BUG:kernel_test_oversize | 6 | |
| BUG:kernel_test_crashed | 4 | |
| BUG:KASAN:slab-out-of-bounds_in_thermal_zone_create_device_groups_at_addr | 0 | 44 |
| BUG_kmalloc-#(Not_tainted):kasan:bad_access_detected | 0 | 44 |
| INFO:Allocated_in#age=#cpu=#pid= | 0 | 44 |
| INFO:Freed_in#age=#cpu=#pid= | 0 | 44 |
| INFO:Slab#objects=#used=#fp=#flags= | 0 | 44 |
| INFO:Object#@offset=#fp= | 0 | 44 |
| BUG:KASAN:slab-out-of-bounds_in_internal_create_group_at_addr | 0 | 44 |
| BUG_kmalloc-#(Tainted:G_B):kasan:bad_access_detected | 0 | 44 |
| backtrace:power_supply_register | 0 | 44 |
| backtrace:test_power_init | 0 | 44 |
| backtrace:kernel_init_freeable | 0 | 44 |
+---------------------------------------------------------------------------+------------+------------+
[ 34.228978] power_supply test_battery: uevent
[ 34.229549] power_supply test_battery: POWER_SUPPLY_NAME=test_battery
[ 34.230452] ==================================================================
[ 34.231531] BUG: KASAN: slab-out-of-bounds in thermal_zone_create_device_groups+0x4c3/0x62a at addr ffff88000883ec20
[ 34.233051] Write of size 8 by task swapper/0/1
[ 34.233728] =============================================================================
[ 34.234922] BUG kmalloc-8 (Not tainted): kasan: bad access detected
[ 34.235844] -----------------------------------------------------------------------------
[ 34.235844]
[ 34.237254] Disabling lock debugging due to kernel taint
[ 34.238051] INFO: Allocated in 0xffff88000883f028 age=18446744073709485001 cpu=0 pid=0
[ 34.247240] INFO: Freed in 0xfffefbc9 age=18446744073709485001 cpu=0 pid=0
[ 34.253883] INFO: Slab 0xffffea0000220f80 objects=23 used=19 fp=0xffff88000883f180 flags=0x4080
[ 34.254989] INFO: Object 0xffff88000883ec18 @offset=3096 fp=0xcccccccccccccccc
[ 34.254989]
[ 34.256094] Redzone ffff88000883ec10: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
[ 34.257193] Object ffff88000883ec18: cc cc cc cc cc cc cc cc ........
[ 34.258278] Redzone ffff88000883ec20: 00 00 00 00 00 00 00 00 ........
[ 34.265807] Padding ffff88000883ed60: c7 fb fe ff 00 00 00 00 ........
[ 34.267083] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B 4.7.0-rc3-00022-g71765b4 #1
[ 34.268358] ffffea0000220f80 ffff88000f9efab0 ffffffff89e344e1 ffff88000883e000
[ 34.269534] ffff88000883ec18 ffff88000f9efae0 ffffffff89ccb769 ffff880000090300
[ 34.270709] ffffea0000220f80 ffff88000883ec18 ffff88000883ec20 ffff88000f9efb08
[ 34.271881] Call Trace:
[ 34.272261] [<ffffffff89e344e1>] dump_stack+0x86/0xc0
[ 34.272954] [<ffffffff89ccb769>] print_trailer+0x115/0x195
[ 34.273673] [<ffffffff89ccf560>] object_err+0x34/0x3b
[ 34.274332] [<ffffffff89cd49e3>] kasan_report_error+0x213/0x54d
[ 34.275102] [<ffffffff89cd3c43>] ? kasan_unpoison_shadow+0x35/0x43
[ 34.275906] [<ffffffff89cd3c43>] ? kasan_unpoison_shadow+0x35/0x43
[ 34.276714] [<ffffffff89cd4d56>] kasan_report+0x39/0x3b
[ 34.277534] [<ffffffff8a718b56>] ? thermal_zone_create_device_groups+0x4c3/0x62a
[ 34.278495] [<ffffffff89cd46c7>] __asan_store8+0x61/0x6d
[ 34.279189] [<ffffffff8a718b56>] thermal_zone_create_device_groups+0x4c3/0x62a
[ 34.280127] [<ffffffff8a7160d4>] thermal_zone_device_register+0x2ac/0xa61
[ 34.281009] [<ffffffff8a715e28>] ? thermal_notify_framework+0x10/0x10
[ 34.281848] [<ffffffff8a531db9>] ? dev_warn+0xe2/0xe2
[ 34.282514] [<ffffffff89b6366f>] ? lockdep_init_map+0xf7/0x2cc
[ 34.283273] [<ffffffff8a671710>] __power_supply_register+0x5c3/0x7a3
[ 34.284103] [<ffffffff8a671ee8>] power_supply_register+0x13/0x15
[ 34.284892] [<ffffffff8ba8c21e>] test_power_init+0x35/0xe6
[ 34.285654] [<ffffffff8ba8c1e9>] ? max8925_power_driver_init+0x14/0x14
[ 34.286636] [<ffffffff8ba3135d>] do_one_initcall+0xf1/0x1aa
[ 34.287483] [<ffffffff8ba3126c>] ? start_kernel+0x485/0x485
[ 34.288306] [<ffffffff89b25760>] ? parse_args+0x60/0x4d2
[ 34.289005] [<ffffffff8ba30ad4>] ? set_debug_rodata+0x12/0x12
[ 34.289755] [<ffffffff8ba315d3>] kernel_init_freeable+0x1bd/0x24d
[ 34.290546] [<ffffffff8aa29a6e>] kernel_init+0x13/0x125
[ 34.291226] [<ffffffff8aa3542f>] ret_from_fork+0x1f/0x40
[ 34.291917] [<ffffffff8aa29a5b>] ? rest_init+0xd2/0xd2
[ 34.292586] Memory state around the buggy address:
[ 34.293202] ffff88000883eb00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 34.294120] ffff88000883eb80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
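An 8-byte write immediately past a kmalloc-8 object, hit while building the zone's sysfs groups, is the classic signature of sizing a NULL-terminated pointer array without its terminator slot. A self-contained sketch of that pattern follows (illustrative names, not the thermal code), with the correct n + 1 sizing in the code and the buggy variant noted in a comment.
----------------------------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>

/* Model of building a NULL-terminated array of attribute-group
 * pointers. The out-of-bounds write of size 8 past a kmalloc-8 object
 * is consistent with forgetting the terminator when sizing the
 * allocation. */
struct attribute_group { const char *name; };

static const struct attribute_group zone_group = { "zone" };
static const struct attribute_group trip_group = { "trips" };

int main(void)
{
	const struct attribute_group *fixed[] = { &zone_group, &trip_group };
	size_t n = sizeof(fixed) / sizeof(fixed[0]);

	/* Buggy sizing would be malloc(n * sizeof(*groups)); the
	 * groups[n] = NULL below then writes 8 bytes past the object,
	 * exactly what KASAN flags. Correct sizing reserves one extra
	 * slot for the terminator: */
	const struct attribute_group **groups =
		malloc((n + 1) * sizeof(*groups));
	if (!groups)
		return 1;

	for (size_t i = 0; i < n; i++)
		groups[i] = fixed[i];
	groups[n] = NULL;	/* in bounds only with the n + 1 sizing */

	for (size_t i = 0; groups[i]; i++)
		printf("group: %s\n", groups[i]->name);
	free(groups);
	return 0;
}
----------------------------------------------------------------------------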
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Westmere -kernel /pkg/linux/x86_64-randconfig-s3-06181227/gcc-6/71765b477ce8a5cf0bc84a870a79835b03401df3/vmlinuz-4.7.0-rc3-00022-g71765b4 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-ia32-13/bisect_boot-1-yocto-minimal-i386.cgz-x86_64-randconfig-s3-06181227-71765b477ce8a5cf0bc84a870a79835b03401df3-20160618-30208-1m0tl7f-0.yaml~ ARCH=x86_64 kconfig=x86_64-randconfig-s3-06181227 branch=linux-devel/devel-spot-201606181132 commit=71765b477ce8a5cf0bc84a870a79835b03401df3 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s3-06181227/gcc-6/71765b477ce8a5cf0bc84a870a79835b03401df3/vmlinuz-4.7.0-rc3-00022-g71765b4 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-ia32/yocto-minimal-i386.cgz/x86_64-randconfig-s3-06181227/gcc-6/71765b477ce8a5cf0bc84a870a79835b03401df3/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-ia32-13::dhcp drbd.minor_count=8' -initrd /fs/sdf1/initrd-vm-kbuild-yocto-ia32-13 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdf1/disk0-vm-kbuild-yocto-ia32-13,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-ia32-13 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-ia32-13 -daemonize -display none -monitor null
Thanks,
Xiaolong
[mm] c3e3459c92: WARNING: CPU: 1 PID: 249 at mm/util.c:519 __vm_enough_memory
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit c3e3459c92a22be17145cdd9d86a8acc74afa5cf
Author: Mel Gorman <mgorman(a)techsingularity.net>
AuthorDate: Thu Jun 23 09:59:20 2016 +1000
Commit: Stephen Rothwell <sfr(a)canb.auug.org.au>
CommitDate: Thu Jun 23 09:59:20 2016 +1000
mm: move vmscan writes and file write accounting to the node
As reclaim is now node-based, it follows that page write activity due to
page reclaim should also be accounted for on the node. For consistency,
also account page writes and page dirtying on a per-node basis.
After this patch, there are a few remaining zone counters that may appear
strange but are fine. NUMA stats are still per-zone as this is a
user-space interface that tools consume. NR_MLOCK, NR_SLAB_*,
NR_PAGETABLE, NR_KERNEL_STACK and NR_BOUNCE are all allocations that
potentially pin low memory and cannot trivially be reclaimed on demand.
This information is still useful for debugging a page allocation failure
warning.
Link: http://lkml.kernel.org/r/1466518566-30034-20-git-send-email-mgorman@techs...
Signed-off-by: Mel Gorman <mgorman(a)techsingularity.net>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Rik van Riel <riel(a)surriel.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
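A toy model of the accounting move the commit message describes, with made-up names (node_model, NR_VMSCAN_WRITE_STAT, account_vmscan_write) rather than the kernel's vmstat types: a node contains several zones and a page belongs to both, but after the patch, reclaim-writeback accounting lands on one node-level counter regardless of which zone the page came from.
----------------------------------------------------------------------------
#include <stdio.h>

enum node_stat { NR_VMSCAN_WRITE_STAT, NR_DIRTIED_STAT, NR_NODE_STATS };

struct zone_model { long nr_free_pages; };
struct node_model {
	struct zone_model zones[2];	/* e.g. DMA32 + Normal */
	long stat[NR_NODE_STATS];
};

/* One counter per node: node-based reclaim no longer cares which zone
 * inside the node the written-back page was allocated from. */
static void account_vmscan_write(struct node_model *node)
{
	node->stat[NR_VMSCAN_WRITE_STAT]++;
}

int main(void)
{
	struct node_model node0 = { .zones = { { 1000 }, { 4000 } } };

	account_vmscan_write(&node0);	/* page from zone 0 */
	account_vmscan_write(&node0);	/* page from zone 1 */
	printf("node0 NR_VMSCAN_WRITE = %ld\n",
	       node0.stat[NR_VMSCAN_WRITE_STAT]);
	return 0;
}
----------------------------------------------------------------------------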
+------------------------------------------------+------------+------------+---------------+
| | e426f7b4ad | c3e3459c92 | next-20160623 |
+------------------------------------------------+------------+------------+---------------+
| boot_successes | 93 | 0 | 0 |
| boot_failures | 12 | 16 | 29 |
| IP-Config:Auto-configuration_of_network_failed | 12 | 12 | 16 |
| WARNING:at_mm/util.c:#__vm_enough_memory | 0 | 14 | 27 |
| backtrace:vm_mmap_pgoff | 0 | 13 | 21 |
| backtrace:SyS_mmap_pgoff | 0 | 13 | 21 |
| backtrace:SyS_mmap | 0 | 13 | 21 |
| BUG:kernel_test_hang | 0 | 2 | |
| backtrace:do_execveat_common | 0 | 1 | |
| backtrace:SyS_execve | 0 | 1 | |
| backtrace:_do_fork | 0 | 0 | 6 |
| backtrace:SyS_clone | 0 | 0 | 6 |
+------------------------------------------------+------------+------------+---------------+
[ 7.529499] systemd-sysv-generator[249]: Ignoring K01watchdog symlink in rc6.d, not generating watchdog.service.
[ 7.530773] systemd-fstab-generator[247]: Parsing /etc/fstab
[ 7.535727] ------------[ cut here ]------------
[ 7.535734] WARNING: CPU: 1 PID: 249 at mm/util.c:519 __vm_enough_memory+0x6f/0x1d0
[ 7.535738] memory commitment underflow
[ 7.535738] CPU: 1 PID: 249 Comm: systemd-sysv-ge Not tainted 4.7.0-rc4-00215-gc3e3459 #1
[ 7.535739] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 7.535742] 0000000000000000 ffff88003f1f3cb8 ffffffff8143c528 ffff88003f1f3d08
[ 7.535745] 0000000000000000 ffff88003f1f3cf8 ffffffff810a9976 000002073f1f3de0
[ 7.535747] 0000000000000001 ffffffffff0a01aa 0000000000000001 ffff88003f4d52c0
[ 7.535747] Call Trace:
[ 7.535751] [<ffffffff8143c528>] dump_stack+0x65/0x8d
[ 7.535754] [<ffffffff810a9976>] __warn+0xb6/0xe0
[ 7.535756] [<ffffffff810a99ea>] warn_slowpath_fmt+0x4a/0x50
[ 7.535757] [<ffffffff8115e84f>] __vm_enough_memory+0x6f/0x1d0
[ 7.535761] [<ffffffff813d7a8e>] security_vm_enough_memory_mm+0x4e/0x60
[ 7.535765] [<ffffffff81176fc1>] mmap_region+0x131/0x590
[ 7.535767] [<ffffffff811777dd>] do_mmap+0x3bd/0x4a0
[ 7.535768] [<ffffffff8115e4d5>] vm_mmap_pgoff+0x85/0xd0
[ 7.535770] [<ffffffff811754d1>] SyS_mmap_pgoff+0xb1/0xc0
[ 7.535773] [<ffffffff81023376>] SyS_mmap+0x16/0x20
[ 7.535777] [<ffffffff81c25df6>] entry_SYSCALL_64_fastpath+0x1e/0xa8
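The "memory commitment underflow" warning fires in __vm_enough_memory when the committed-AS counter reads further below zero than per-CPU batching slack can explain, i.e. more memory was un-accounted than was ever accounted, which is the kind of imbalance a miscategorized counter move can introduce. A minimal single-threaded model of that check (BATCH, NR_CPUS and the helpers are illustrative, not the kernel's percpu_counter):
----------------------------------------------------------------------------
#include <stdio.h>

/* A batched per-CPU counter may legitimately read a little below zero
 * (up to BATCH * NR_CPUS of slack); anything beyond that indicates a
 * real accounting bug. */
#define NR_CPUS	2
#define BATCH	32

static long committed_global;
static long committed_percpu[NR_CPUS];

static void commit_adjust(int cpu, long pages)
{
	committed_percpu[cpu] += pages;
	/* Fold the per-CPU delta into the global count once it exceeds
	 * the batch threshold in either direction. */
	if (committed_percpu[cpu] >= BATCH || committed_percpu[cpu] <= -BATCH) {
		committed_global += committed_percpu[cpu];
		committed_percpu[cpu] = 0;
	}
}

static void check_underflow(void)
{
	if (committed_global < -(long)BATCH * NR_CPUS)
		printf("WARNING: memory commitment underflow (%ld)\n",
		       committed_global);
}

int main(void)
{
	/* Un-account 100 pages on CPU 0 that were never accounted:
	 * the global counter sinks past the allowed slack. */
	commit_adjust(0, -100);
	check_underflow();
	return 0;
}
----------------------------------------------------------------------------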
git bisect start 5c4d1ca9cfa71d9515ce5946cfc6497d22b1108e 33688abb2802ff3a230bd2441f765477b94cc89e --
git bisect good 8a5968b8e3f00a767f43f88805f3d288756570e0 # 05:57 22+ 6 Merge remote-tracking branch 'wireless-drivers-next/master'
git bisect good fdb4089f350a5e41bf8cf99838be4142e7fffda9 # 05:59 22+ 6 Merge remote-tracking branch 'spi/for-next'
git bisect good d262aacb56c190b7b1cfeb0b9edca24e38147ade # 06:00 22+ 4 Merge remote-tracking branch 'char-misc/char-misc-next'
git bisect good b36d4f39637feedd4de59b79f9efbe57419a6090 # 06:03 22+ 6 Merge remote-tracking branch 'pwm/for-next'
git bisect good 97730d0a214f1be8043cc2e06c7510d8a93d559b # 06:05 22+ 6 Merge remote-tracking branch 'livepatching/for-next'
git bisect good 82b3e0323b0abaa917a383a61e48b3028a14311a # 06:07 22+ 6 Merge remote-tracking branch 'rtc/rtc-next'
git bisect bad 204502b144a6e67a067aa476e4c0149815e1be8e # 06:07 0- 5 Merge branch 'akpm-current/current'
git bisect good 7f8d9cfe93d5d5fcbd3719e96782922f55c17947 # 06:32 22+ 2 thp: handle file pages in split_huge_pmd()
git bisect bad ead73aa015eae132a31e8b1324931781d5cd1ba9 # 06:49 0- 15 mm: update the comment in __isolate_free_page
git bisect good 44918a56e3af3c802cd0daeb9eba5ecc22fa4a96 # 07:04 22+ 2 mm-vmscan-move-lru-lists-to-node-fix
git bisect bad e128e3836e98482661449dc811d10741ba75be61 # 07:10 1- 23 mm, vmscan: only wakeup kswapd once per node for the requested classzone
git bisect good b4f255ed4b93bb514f0e79cdbb2c6c2620e40325 # 07:16 22+ 0 mm, vmscan: make shrink_node decisions more node-centric
git bisect good 6123aec614297a16b0446d3f4046020a7d692857 # 07:19 22+ 4 mm: move page mapped accounting to the node
git bisect good e426f7b4ade5e59ee0b504d2472d850ded146196 # 07:21 22+ 12 mm: move most file-based accounting to the node
git bisect bad 60747f09fcc202a80bba0e8a97ae0835befd34d2 # 07:21 0- 29 mm, vmscan: update classzone_idx if buffer_heads_over_limit
git bisect bad c3e3459c92a22be17145cdd9d86a8acc74afa5cf # 07:21 0- 16 mm: move vmscan writes and file write accounting to the node
# first bad commit: [c3e3459c92a22be17145cdd9d86a8acc74afa5cf] mm: move vmscan writes and file write accounting to the node
git bisect good e426f7b4ade5e59ee0b504d2472d850ded146196 # 07:24 69+ 12 mm: move most file-based accounting to the node
# extra tests with CONFIG_DEBUG_INFO_REDUCED
git bisect bad c3e3459c92a22be17145cdd9d86a8acc74afa5cf # 07:24 0- 4 mm: move vmscan writes and file write accounting to the node
# extra tests on HEAD of linux-next/master
git bisect bad 5c4d1ca9cfa71d9515ce5946cfc6497d22b1108e # 07:25 0- 29 Add linux-next specific files for 20160623
# extra tests on tree/branch linux-next/master
git bisect bad 5c4d1ca9cfa71d9515ce5946cfc6497d22b1108e # 07:25 0- 29 Add linux-next specific files for 20160623
# extra tests with first bad commit reverted
git bisect good eb910f180124bc59e1c1d2bf3a6f036526d3bf16 # 07:27 69+ 12 Revert "mm: move vmscan writes and file write accounting to the node"
# extra tests on tree/branch linus/master
git bisect good da01e18a37a57f360222d3a123b8f6994aa1ad14 # 07:46 67+ 4 x86: avoid avoid passing around 'thread_info' in stack dumping code
# extra tests on tree/branch linux-next/master
git bisect bad 5c4d1ca9cfa71d9515ce5946cfc6497d22b1108e # 07:46 0- 29 Add linux-next specific files for 20160623
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[fs] c43edc7bd9: BUG: unable to handle kernel NULL pointer dereference at 00000470
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux Deepa-Dinamani/Delete-CURRENT_TIME-and-CURRENT_TIME_SEC-macros/20160620-104147
commit c43edc7bd9c06af9a7278101d462eb0ba0299605
Author: Deepa Dinamani <deepa.kernel(a)gmail.com>
AuthorDate: Sun Jun 19 17:27:01 2016 -0700
Commit: 0day robot <fengguang.wu(a)intel.com>
CommitDate: Mon Jun 20 10:41:55 2016 +0800
fs: Replace CURRENT_TIME with current_time() for inode timestamps
CURRENT_TIME macro is not appropriate for filesystems as it
doesn't use the right granularity for filesystem timestamps.
Use current_time() instead.
CURRENT_TIME is also not y2038 safe.
This is also in preparation for the patch that transitions
vfs timestamps to use 64 bit time and hence make them
y2038 safe. As part of the effort current_time() will be
extended to do range checks. Hence, it is necessary for all
file system timestamps to use current_time(). Also,
current_time() will be transitioned along with vfs to be
y2038 safe.
Note that whenever a single call to current_time() is used
to change timestamps in different inodes, it is because they
share the same time granularity.
Signed-off-by: Deepa Dinamani <deepa.kernel(a)gmail.com>
Acked-by: Felipe Balbi <balbi(a)kernel.org>
Acked-by: Steven Whitehouse <swhiteho(a)redhat.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke(a)lab.ntt.co.jp>
Reviewed-by: David Sterba <dsterba(a)suse.com>
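The UBSAN report and the oops quoted below both land inside current_time() because the helper truncates the timestamp to the superblock's granularity and so dereferences inode->i_sb, while proc_alloc_inode() runs before alloc_inode() has attached the superblock, leaving i_sb NULL. A simplified stand-in follows; model_current_time and the *_model structs are illustrative, not the vfs code.
----------------------------------------------------------------------------
#include <stdio.h>
#include <stddef.h>
#include <time.h>

/* Minimal model of the failure: reading the timestamp granularity
 * through a superblock pointer that has not been set yet. */
struct super_block_model { unsigned int s_time_gran; };
struct inode_model { struct super_block_model *i_sb; };

static struct timespec model_current_time(struct inode_model *inode)
{
	struct timespec now;

	clock_gettime(CLOCK_REALTIME, &now);
	/* Crashes when i_sb is NULL: the "member access within null
	 * pointer of type 'struct super_block'" flagged below, with the
	 * faulting address being the offset of s_time_gran. */
	now.tv_nsec -= now.tv_nsec % inode->i_sb->s_time_gran;
	return now;
}

int main(void)
{
	struct super_block_model sb = { .s_time_gran = 1000000000 };
	struct inode_model attached = { .i_sb = &sb };
	struct inode_model detached = { .i_sb = NULL };

	printf("attached: tv_nsec=%ld\n",
	       (long)model_current_time(&attached).tv_nsec);
	/* model_current_time(&detached) would segfault, mirroring the
	 * NULL pointer dereference in the report. */
	(void)detached;
	return 0;
}
----------------------------------------------------------------------------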
+------------------------------------------+------------+------------+------------+
| | 58b11bff28 | c43edc7bd9 | ac5a28d43c |
+------------------------------------------+------------+------------+------------+
| boot_successes | 63 | 0 | 0 |
| boot_failures | 0 | 22 | 13 |
| BUG:unable_to_handle_kernel | 0 | 22 | 13 |
| Oops | 0 | 22 | 13 |
| EIP_is_at_current_time | 0 | 22 | 13 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 22 | 13 |
| backtrace:_do_fork | 0 | 22 | 13 |
+------------------------------------------+------------+------------+------------+
member access within null pointer of type 'struct super_block'
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160617-00002-gc43edc7 #1
00000000 00000000 81badd80 812d44fa 00000001 81badda0 2cb434b2 81badd8c
8131d947 81cab780 81baddac 8131dd3d 81b08fa2 81b09277 81cacd80 00200202
00000000 2cb434b2 81baddcc 811b0dc6 57677728 00000000 2cb434b2 8f414008
Call Trace:
[<812d44fa>] dump_stack+0x57/0x79
[<8131d947>] ubsan_epilogue+0xb/0x33
[<8131dd3d>] __ubsan_handle_type_mismatch+0x47/0xfb
[<811b0dc6>] current_time+0x3c/0x53
[<811e6982>] proc_alloc_inode+0x66/0x84
[<811ae68e>] alloc_inode+0x34/0x9f
[<811af51a>] new_inode_pseudo+0xb/0x58
[<811e6c49>] proc_get_inode+0xd/0x10d
[<811e6d9c>] proc_fill_super+0x53/0x93
[<811e7006>] proc_mount+0xb3/0xee
[<81196fb2>] mount_fs+0x48/0xcf
[<811b54f6>] vfs_kern_mount+0x4c/0x131
[<811b8609>] kern_mount_data+0x28/0x54
[<811e7160>] pid_ns_prepare_proc+0x13/0x35
[<81069afc>] alloc_pid+0x358/0x4a2
[<8104a72a>] copy_process+0x1297/0x1ba9
[<818475bc>] ? rest_init+0xa1/0xa1
[<818475bc>] ? rest_init+0xa1/0xa1
[<8104b253>] _do_fork+0xcf/0x4eb
[<818475bc>] ? rest_init+0xa1/0xa1
[<812ed0b6>] ? find_next_bit+0x12/0x1c
[<818475bc>] ? rest_init+0xa1/0xa1
[<8104b68b>] kernel_thread+0x1c/0x21
[<81847535>] rest_init+0x1a/0xa1
[<8209ee76>] start_kernel+0x3ca/0x3d2
[<8209e2e4>] i386_start_kernel+0xab/0xaf
================================================================================
BUG: unable to handle kernel NULL pointer dereference at 00000470
IP: [<811b0dc6>] current_time+0x3c/0x53
*pde = 00000000
Oops: 0000 [#1] PREEMPT SMP
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160617-00002-gc43edc7 #1
task: 81bb1c00 ti: 81bac000 task.ti: 81bac000
EIP: 0060:[<811b0dc6>] EFLAGS: 00210296 CPU: 0
EIP is at current_time+0x3c/0x53
EAX: 81b08e52 EBX: 00000000 ECX: 2544e35b EDX: 331b7b49
ESI: 2cb434b2 EDI: 57677728 EBP: 81baddcc ESP: 81baddb4
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
CR0: 80050033 CR2: 00000470 CR3: 02159000 CR4: 00040690
Stack:
57677728 00000000 2cb434b2 8f414008 8f414024 8008e8a8 81badddc 811e6982
81862240 8008e8a8 81baddec 811ae68e 8008e8a8 81ccbb80 81bade00 811af51a
2544e35b 8008e8a8 81ccbb80 81bade14 811e6c49 8008e8a8 81bf5ca0 8008e8a8
Call Trace:
[<811e6982>] proc_alloc_inode+0x66/0x84
[<811ae68e>] alloc_inode+0x34/0x9f
[<811af51a>] new_inode_pseudo+0xb/0x58
[<811e6c49>] proc_get_inode+0xd/0x10d
[<811e6d9c>] proc_fill_super+0x53/0x93
[<811e7006>] proc_mount+0xb3/0xee
[<81196fb2>] mount_fs+0x48/0xcf
[<811b54f6>] vfs_kern_mount+0x4c/0x131
[<811b8609>] kern_mount_data+0x28/0x54
[<811e7160>] pid_ns_prepare_proc+0x13/0x35
[<81069afc>] alloc_pid+0x358/0x4a2
[<8104a72a>] copy_process+0x1297/0x1ba9
[<818475bc>] ? rest_init+0xa1/0xa1
[<818475bc>] ? rest_init+0xa1/0xa1
[<8104b253>] _do_fork+0xcf/0x4eb
[<818475bc>] ? rest_init+0xa1/0xa1
[<812ed0b6>] ? find_next_bit+0x12/0x1c
[<818475bc>] ? rest_init+0xa1/0xa1
[<8104b68b>] kernel_thread+0x1c/0x21
[<81847535>] rest_init+0x1a/0xa1
[<8209ee76>] start_kernel+0x3ca/0x3d2
[<8209e2e4>] i386_start_kernel+0xab/0xaf
Code: f1 ff 8b 75 f0 8b 7d e8 85 db 75 0c 31 d2 b8 98 b7 ca 81 e8 43 cf 16 00 8b 5b 1c 85 db 75 0c 31 d2 b8 80 b7 ca 81 e8 30 cf 16 00 <8b> 8b 70 04 00 00 89 f8 89 f2 e8 7c ac f0 ff 83 c4 0c 5b 5e 5f
EIP: [<811b0dc6>] current_time+0x3c/0x53 SS:ESP 0068:81baddb4
CR2: 0000000000000470
---[ end trace 4d5ff9f2f68c4233 ]---
git bisect start ac5a28d43cc9c23525d92173bd64212d81295b09 ce24ed9ec7532bc79485f23dff59923c0271f6d3 --
git bisect bad b49866cd5401e20ee939655995b184f92ffa537e # 12:39 0- 20 fs: cifs: Replace CURRENT_TIME by get_seconds
git bisect bad 893073fb0993c3e3256e64b903bac8a103740c5d # 12:42 0- 22 fs: ext4: Use current_time() for inode timestamps
git bisect bad 553d024a5815863d83fa871537609883d5e4bb38 # 12:46 0- 22 fs: Replace CURRENT_TIME_SEC with current_time() for inode timestamps
git bisect good 58b11bff2839a01a08ff325c5afd51a3c01c77a8 # 12:52 22+ 0 vfs: Add current_time() api
git bisect bad c43edc7bd9c06af9a7278101d462eb0ba0299605 # 12:55 0- 6 fs: Replace CURRENT_TIME with current_time() for inode timestamps
# first bad commit: [c43edc7bd9c06af9a7278101d462eb0ba0299605] fs: Replace CURRENT_TIME with current_time() for inode timestamps
git bisect good 58b11bff2839a01a08ff325c5afd51a3c01c77a8 # 12:57 63+ 0 vfs: Add current_time() api
# extra tests with CONFIG_DEBUG_INFO_REDUCED
git bisect bad c43edc7bd9c06af9a7278101d462eb0ba0299605 # 13:04 0- 52 fs: Replace CURRENT_TIME with current_time() for inode timestamps
# extra tests on HEAD of linux-review/Deepa-Dinamani/Delete-CURRENT_TIME-and-CURRENT_TIME_SEC-macros/20160620-104147
git bisect bad ac5a28d43cc9c23525d92173bd64212d81295b09 # 13:04 0- 13 time: Delete current_fs_time() function
# extra tests on tree/branch linux-review/Deepa-Dinamani/Delete-CURRENT_TIME-and-CURRENT_TIME_SEC-macros/20160620-104147
git bisect bad ac5a28d43cc9c23525d92173bd64212d81295b09 # 13:04 0- 13 time: Delete current_fs_time() function
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good c3695331f3a326a468bd6a5b6f05b481b399726b # 14:09 66+ 1 Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
# extra tests on tree/branch linux-next/master
git bisect good ce24ed9ec7532bc79485f23dff59923c0271f6d3 # 14:13 64+ 0 Add linux-next specific files for 20160617
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
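If the script is saved as, say, reproduce.sh (the name is arbitrary), it takes the kernel image path as its single argument, e.g. ./reproduce.sh /path/to/vmlinuz-4.7.0-rc3-next-20160617-00002-gc43edc7.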
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp] d9c064699f: reaim.jobs_per_min +781.5% improvement
by kernel test robot
FYI, we noticed a +781.5% improvement of reaim.jobs_per_min due to commit:
commit d9c064699fe0b6f1dee88f3287198a87478ad443 ("lockless next_positive()")
git://internal_merge_and_test_tree revert-d9c064699fe0b6f1dee88f3287198a87478ad443-d9c064699fe0b6f1dee88f3287198a87478ad443
in testcase: reaim
on test machine: lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory
with following parameters: cpufreq_governor=performance/nr_job=1500/nr_task=100%/runtime=300s/test=five_sec
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
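Before the raw numbers: a +781.5% jobs_per_min jump together with voluntary context switches up an order of magnitude and the rwsem wait latencies (latency_stats.max lines below) collapsing is the signature of a contended lock disappearing from a hot read path. A toy pthreads sketch of that shape, contrasting a mutex-protected walk of an immutable list with a lockless one; this is illustrative, not Al Viro's next_positive() itself.
----------------------------------------------------------------------------
#include <pthread.h>
#include <stdio.h>
#include <time.h>

/* Build with: cc -O2 -pthread. The list is fully initialized before
 * any reader starts and never modified afterwards, so the lockless
 * walk is safe. */
#define NTHREADS 8
#define ITERS    200000

struct entry { int positive; struct entry *next; };

static struct entry nodes[64];
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int use_lock;

static int count_positive(void)
{
	int n = 0;

	if (use_lock)
		pthread_mutex_lock(&list_lock);
	for (struct entry *e = nodes; e; e = e->next)
		n += e->positive;
	if (use_lock)
		pthread_mutex_unlock(&list_lock);
	return n;
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++)
		(void)count_positive();
	return NULL;
}

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < 63; i++) {
		nodes[i].positive = i & 1;
		nodes[i].next = &nodes[i + 1];
	}	/* nodes[63].next stays NULL (static storage) */

	/* Dropping the lock removes all serialization from the hot
	 * path, so throughput scales with thread count. */
	for (use_lock = 1; use_lock >= 0; use_lock--) {
		double t0 = now_sec();

		for (int i = 0; i < NTHREADS; i++)
			pthread_create(&tid[i], NULL, worker, NULL);
		for (int i = 0; i < NTHREADS; i++)
			pthread_join(tid[i], NULL);
		printf("%-8s walk: %.2fs\n",
		       use_lock ? "locked" : "lockless", now_sec() - t0);
	}
	return 0;
}
----------------------------------------------------------------------------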
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_job/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/1500/100%/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw-ep2/five_sec/reaim
commit:
8d6efe8e82 ("libfs.c: new helper - next_positive()")
d9c064699f ("lockless next_positive()")
8d6efe8e8288c056 d9c064699fe0b6f1dee88f3287
---------------- --------------------------
fail:runs  %reproduction  fail:runs
%stddev      %change      %stddev
91070 ± 1% +781.5% 802778 ± 12% reaim.jobs_per_min
1264 ± 1% +781.5% 11149 ± 12% reaim.jobs_per_min_child
95.70 ± 0% +3.4% 98.98 ± 0% reaim.jti
91751 ± 1% +789.8% 816385 ± 13% reaim.max_jobs_per_min
70.98 ± 1% -88.5% 8.18 ± 12% reaim.parent_time
3.76 ± 8% -80.6% 0.73 ± 14% reaim.std_dev_percent
2.57 ± 7% -97.8% 0.06 ± 2% reaim.std_dev_time
365.03 ± 1% -16.9% 303.19 ± 0% reaim.time.elapsed_time
365.03 ± 1% -16.9% 303.19 ± 0% reaim.time.elapsed_time.max
1158439 ± 0% +482.3% 6745706 ± 12% reaim.time.involuntary_context_switches
1.552e+08 ± 0% +500.5% 9.32e+08 ± 9% reaim.time.minor_page_faults
5663 ± 0% -54.5% 2574 ± 12% reaim.time.percent_of_cpu_this_job_got
20378 ± 1% -70.4% 6040 ± 12% reaim.time.system_time
295.15 ± 0% +499.2% 1768 ± 11% reaim.time.user_time
2386984 ± 0% +1274.3% 32804128 ± 1% reaim.time.voluntary_context_switches
1596555 ± 4% +39.1% 2220674 ± 5% softirqs.RCU
498656 ± 4% +347.6% 2232137 ± 17% softirqs.SCHED
10919053 ± 2% -61.2% 4237120 ± 11% softirqs.TIMER
908627 ± 0% -10.5% 813225 ± 1% vmstat.memory.cache
87.50 ± 0% -68.9% 27.25 ± 13% vmstat.procs.r
22949 ± 1% +1117.0% 279297 ± 0% vmstat.system.cs
79846 ± 0% +1.8% 81303 ± 0% vmstat.system.in
83065045 ± 16% +459.1% 4.644e+08 ± 9% numa-numastat.node0.local_node
15331 ± 36% +252.9% 54102 ± 16% numa-numastat.node0.numa_foreign
83065046 ± 16% +459.1% 4.644e+08 ± 9% numa-numastat.node0.numa_hit
14371 ± 46% +281.0% 54748 ± 17% numa-numastat.node0.numa_miss
71704035 ± 18% +543.7% 4.616e+08 ± 10% numa-numastat.node1.local_node
12018 ± 46% +431.7% 63904 ± 14% numa-numastat.node1.numa_foreign
71704036 ± 18% +543.7% 4.616e+08 ± 10% numa-numastat.node1.numa_hit
12978 ± 50% +387.4% 63258 ± 14% numa-numastat.node1.numa_miss
80.56 ± 0% -51.6% 39.02 ± 11% turbostat.%Busy
2249 ± 0% -52.1% 1076 ± 11% turbostat.Avg_MHz
15.38 ± 1% +196.5% 45.59 ± 11% turbostat.CPU%c1
0.11 ± 4% +673.8% 0.81 ± 81% turbostat.CPU%c3
3.96 ± 4% +268.5% 14.57 ± 9% turbostat.CPU%c6
0.85 ± 5% +516.4% 5.27 ± 9% turbostat.Pkg%pc2
0.60 ± 30% +1010.0% 6.66 ± 12% turbostat.Pkg%pc6
229.54 ± 0% -6.8% 214.04 ± 2% turbostat.PkgWatt
45.83 ± 0% +4.8% 48.04 ± 0% turbostat.RAMWatt
38272094 ± 0% +2716.3% 1.078e+09 ± 3% cpuidle.C1-HSW.time
581196 ± 1% +2611.7% 15760538 ± 7% cpuidle.C1-HSW.usage
98933135 ± 0% +826.3% 9.164e+08 ± 8% cpuidle.C1E-HSW.time
1066746 ± 0% +959.2% 11299354 ± 7% cpuidle.C1E-HSW.usage
2.805e+08 ± 2% +1078.8% 3.307e+09 ± 7% cpuidle.C3-HSW.time
1072514 ± 2% +1337.7% 15419712 ± 9% cpuidle.C3-HSW.usage
4.594e+09 ± 1% +69.5% 7.788e+09 ± 7% cpuidle.C6-HSW.time
5153934 ± 1% +104.2% 10523817 ± 8% cpuidle.C6-HSW.usage
24321444 ± 16% +68.1% 40893825 ± 12% cpuidle.POLL.time
9870 ± 1% +1297.1% 137903 ± 2% cpuidle.POLL.usage
85341 ± 5% +73.2% 147814 ± 7% meminfo.Active
68959 ± 6% +90.6% 131463 ± 7% meminfo.Active(anon)
33718 ± 2% +14.6% 38631 ± 1% meminfo.AnonPages
532820 ± 0% +11.2% 592729 ± 1% meminfo.Cached
147277 ± 2% +97.6% 291021 ± 1% meminfo.Committed_AS
10210 ± 0% +30.4% 13312 ± 6% meminfo.Inactive(anon)
159630 ± 2% -84.2% 25153 ± 4% meminfo.KernelStack
20168 ± 0% +22.1% 24625 ± 3% meminfo.Mapped
6634 ± 1% +18.6% 7869 ± 1% meminfo.PageTables
83755 ± 0% -29.7% 58853 ± 0% meminfo.SReclaimable
292695 ± 1% -44.9% 161415 ± 0% meminfo.SUnreclaim
45488 ± 9% +131.4% 105276 ± 10% meminfo.Shmem
376450 ± 1% -41.5% 220270 ± 0% meminfo.Slab
44688 ± 42% +179.6% 124927 ± 9% numa-meminfo.node0.Active
36500 ± 51% +219.9% 116757 ± 9% numa-meminfo.node0.Active(anon)
18453 ± 14% +35.0% 24920 ± 6% numa-meminfo.node0.AnonPages
266991 ± 6% +30.2% 347573 ± 3% numa-meminfo.node0.FilePages
5279 ± 81% +136.7% 12499 ± 8% numa-meminfo.node0.Inactive(anon)
85960 ± 13% -84.3% 13511 ± 6% numa-meminfo.node0.KernelStack
10067 ± 26% +57.6% 15866 ± 5% numa-meminfo.node0.Mapped
155862 ± 13% -44.5% 86564 ± 3% numa-meminfo.node0.SUnreclaim
23338 ± 76% +344.8% 103819 ± 11% numa-meminfo.node0.Shmem
198354 ± 13% -40.0% 118990 ± 4% numa-meminfo.node0.Slab
72657 ± 21% -84.1% 11562 ± 6% numa-meminfo.node1.KernelStack
831298 ± 7% -19.5% 669531 ± 1% numa-meminfo.node1.MemUsed
2713 ± 24% +46.9% 3986 ± 17% numa-meminfo.node1.PageTables
41134 ± 15% -35.7% 26435 ± 15% numa-meminfo.node1.SReclaimable
135994 ± 18% -45.0% 74799 ± 3% numa-meminfo.node1.SUnreclaim
177129 ± 17% -42.8% 101235 ± 6% numa-meminfo.node1.Slab
9118 ± 51% +220.0% 29183 ± 9% numa-vmstat.node0.nr_active_anon
4621 ± 14% +34.7% 6225 ± 6% numa-vmstat.node0.nr_anon_pages
66736 ± 6% +30.2% 86889 ± 3% numa-vmstat.node0.nr_file_pages
1320 ± 81% +136.3% 3121 ± 8% numa-vmstat.node0.nr_inactive_anon
5412 ± 14% -84.4% 843.50 ± 5% numa-vmstat.node0.nr_kernel_stack
2524 ± 26% +57.0% 3962 ± 5% numa-vmstat.node0.nr_mapped
5823 ± 76% +345.6% 25950 ± 11% numa-vmstat.node0.nr_shmem
39067 ± 13% -44.6% 21636 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
78088 ± 7% +22.1% 95348 ± 5% numa-vmstat.node0.numa_foreign
41239887 ± 15% +457.2% 2.298e+08 ± 9% numa-vmstat.node0.numa_hit
41239886 ± 15% +457.2% 2.298e+08 ± 9% numa-vmstat.node0.numa_local
77648 ± 8% +23.4% 95810 ± 5% numa-vmstat.node0.numa_miss
4543 ± 21% -84.0% 727.75 ± 6% numa-vmstat.node1.nr_kernel_stack
678.00 ± 24% +46.9% 996.00 ± 18% numa-vmstat.node1.nr_page_table_pages
10289 ± 15% -35.8% 6607 ± 15% numa-vmstat.node1.nr_slab_reclaimable
34009 ± 18% -45.0% 18702 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
37769 ± 16% +74.6% 65950 ± 7% numa-vmstat.node1.numa_foreign
35944540 ± 18% +536.4% 2.287e+08 ± 9% numa-vmstat.node1.numa_hit
35944540 ± 18% +536.4% 2.287e+08 ± 9% numa-vmstat.node1.numa_local
38209 ± 17% +71.4% 65489 ± 7% numa-vmstat.node1.numa_miss
17243 ± 6% +90.6% 32873 ± 7% proc-vmstat.nr_active_anon
17020 ± 1% +9.7% 18675 ± 2% proc-vmstat.nr_alloc_batch
8455 ± 2% +14.3% 9667 ± 1% proc-vmstat.nr_anon_pages
133185 ± 0% +11.3% 148180 ± 1% proc-vmstat.nr_file_pages
2551 ± 0% +30.4% 3327 ± 6% proc-vmstat.nr_inactive_anon
10019 ± 2% -84.4% 1566 ± 4% proc-vmstat.nr_kernel_stack
5055 ± 0% +21.8% 6156 ± 3% proc-vmstat.nr_mapped
1662 ± 1% +18.4% 1967 ± 1% proc-vmstat.nr_page_table_pages
11352 ± 9% +131.8% 26317 ± 10% proc-vmstat.nr_shmem
20950 ± 0% -29.8% 14713 ± 0% proc-vmstat.nr_slab_reclaimable
73267 ± 1% -44.9% 40345 ± 0% proc-vmstat.nr_slab_unreclaimable
27349 ± 0% +331.5% 118023 ± 15% proc-vmstat.numa_foreign
53815 ± 1% -100.0% 16.25 ± 36% proc-vmstat.numa_hint_faults
35659 ± 2% -100.0% 13.00 ± 28% proc-vmstat.numa_hint_faults_local
1.548e+08 ± 0% +498.3% 9.26e+08 ± 9% proc-vmstat.numa_hit
1.548e+08 ± 0% +498.3% 9.26e+08 ± 9% proc-vmstat.numa_local
27349 ± 0% +331.5% 118023 ± 15% proc-vmstat.numa_miss
6391 ± 3% -100.0% 0.50 ±100% proc-vmstat.numa_pages_migrated
60278 ± 1% -100.0% 26.50 ± 21% proc-vmstat.numa_pte_updates
11745 ± 9% +125.8% 26524 ± 10% proc-vmstat.pgactivate
2781427 ± 16% +466.0% 15742731 ± 10% proc-vmstat.pgalloc_dma32
1.573e+08 ± 0% +494.0% 9.342e+08 ± 9% proc-vmstat.pgalloc_normal
1.56e+08 ± 0% +498.2% 9.335e+08 ± 9% proc-vmstat.pgfault
1.6e+08 ± 0% +493.6% 9.499e+08 ± 9% proc-vmstat.pgfree
515.50 ± 3% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_fail
6391 ± 3% -100.0% 0.50 ±100% proc-vmstat.pgmigrate_success
7.613e+10 ± 1% +352.9% 3.448e+11 ± 11% perf-stat.L1-dcache-load-misses
3.027e+12 ± 2% +33.6% 4.046e+12 ± 9% perf-stat.L1-dcache-loads
1.373e+12 ± 2% +68.7% 2.316e+12 ± 10% perf-stat.L1-dcache-stores
2.18e+10 ± 1% +399.7% 1.09e+11 ± 12% perf-stat.L1-icache-load-misses
2.854e+09 ± 2% +411.3% 1.459e+10 ± 7% perf-stat.LLC-load-misses
3.982e+10 ± 1% +250.1% 1.394e+11 ± 9% perf-stat.LLC-loads
2.292e+09 ± 4% +156.4% 5.876e+09 ± 7% perf-stat.LLC-store-misses
5.533e+09 ± 1% +282.7% 2.117e+10 ± 10% perf-stat.LLC-stores
1.006e+10 ± 1% +466.7% 5.704e+10 ± 9% perf-stat.branch-load-misses
1.001e+10 ± 1% +474.4% 5.747e+10 ± 9% perf-stat.branch-misses
2.096e+12 ± 1% -60.7% 8.235e+11 ± 12% perf-stat.bus-cycles
5.168e+09 ± 3% +284.7% 1.988e+10 ± 8% perf-stat.cache-misses
6.096e+10 ± 1% +364.2% 2.83e+11 ± 8% perf-stat.cache-references
8013980 ± 0% +964.6% 85319954 ± 0% perf-stat.context-switches
5.869e+13 ± 1% -61.7% 2.249e+13 ± 11% perf-stat.cpu-cycles
2432227 ± 0% +1061.9% 28259534 ± 2% perf-stat.cpu-migrations
2.128e+09 ± 1% +537.6% 1.357e+10 ± 10% perf-stat.dTLB-load-misses
3.007e+12 ± 1% +31.9% 3.965e+12 ± 10% perf-stat.dTLB-loads
5.498e+08 ± 1% +522.2% 3.421e+09 ± 10% perf-stat.dTLB-store-misses
1.373e+12 ± 2% +76.0% 2.417e+12 ± 7% perf-stat.dTLB-stores
5.121e+08 ± 3% +509.5% 3.121e+09 ± 7% perf-stat.iTLB-load-misses
5.41e+08 ± 1% +450.9% 2.98e+09 ± 10% perf-stat.iTLB-loads
1.242e+13 ± 1% +50.1% 1.865e+13 ± 11% perf-stat.instructions
1.549e+08 ± 0% +498.1% 9.266e+08 ± 9% perf-stat.minor-faults
2.667e+09 ± 3% +421.9% 1.392e+10 ± 6% perf-stat.node-load-misses
1.974e+08 ± 18% +304.1% 7.974e+08 ± 11% perf-stat.node-loads
9.775e+08 ± 7% +239.8% 3.322e+09 ± 8% perf-stat.node-store-misses
1.299e+09 ± 2% +91.8% 2.491e+09 ± 7% perf-stat.node-stores
1.549e+08 ± 0% +498.1% 9.266e+08 ± 9% perf-stat.page-faults
4.829e+13 ± 1% -60.6% 1.904e+13 ± 11% perf-stat.ref-cycles
71143 ± 2% -60.7% 27994 ± 2% slabinfo.cred_jar.active_objs
2107 ± 3% -68.2% 670.00 ± 1% slabinfo.cred_jar.active_slabs
88518 ± 3% -68.2% 28164 ± 1% slabinfo.cred_jar.num_objs
2107 ± 3% -68.2% 670.00 ± 1% slabinfo.cred_jar.num_slabs
168230 ± 1% -51.8% 81158 ± 0% slabinfo.dentry.active_objs
4074 ± 1% -52.3% 1945 ± 0% slabinfo.dentry.active_slabs
171125 ± 1% -52.2% 81727 ± 0% slabinfo.dentry.num_objs
4074 ± 1% -52.3% 1945 ± 0% slabinfo.dentry.num_slabs
3633 ± 0% +13.4% 4120 ± 1% slabinfo.files_cache.active_objs
3633 ± 0% +13.4% 4120 ± 1% slabinfo.files_cache.num_objs
69898 ± 2% -62.2% 26402 ± 1% slabinfo.kmalloc-256.active_objs
1201 ± 2% -65.2% 418.25 ± 1% slabinfo.kmalloc-256.active_slabs
76943 ± 2% -65.2% 26788 ± 1% slabinfo.kmalloc-256.num_objs
1201 ± 2% -65.2% 418.25 ± 1% slabinfo.kmalloc-256.num_slabs
999.00 ± 2% -13.5% 864.25 ± 0% slabinfo.kmalloc-32.active_slabs
127950 ± 2% -13.5% 110670 ± 0% slabinfo.kmalloc-32.num_objs
999.00 ± 2% -13.5% 864.25 ± 0% slabinfo.kmalloc-32.num_slabs
395.25 ± 5% -32.3% 267.75 ± 8% slabinfo.kmem_cache.active_objs
395.25 ± 5% -32.3% 267.75 ± 8% slabinfo.kmem_cache.num_objs
596.00 ± 5% -24.2% 452.00 ± 6% slabinfo.kmem_cache_node.active_objs
608.00 ± 5% -23.7% 464.00 ± 5% slabinfo.kmem_cache_node.num_objs
1667 ± 0% +46.4% 2440 ± 2% slabinfo.mm_struct.active_objs
1667 ± 0% +46.7% 2445 ± 2% slabinfo.mm_struct.num_objs
40356 ± 4% -82.7% 6967 ± 5% slabinfo.pid.active_objs
635.00 ± 4% -83.0% 108.25 ± 5% slabinfo.pid.active_slabs
40679 ± 4% -82.9% 6967 ± 5% slabinfo.pid.num_objs
635.00 ± 4% -83.0% 108.25 ± 5% slabinfo.pid.num_slabs
25524 ± 5% -51.9% 12272 ± 0% slabinfo.radix_tree_node.active_objs
458.75 ± 5% -52.3% 218.75 ± 0% slabinfo.radix_tree_node.active_slabs
25715 ± 5% -52.3% 12272 ± 0% slabinfo.radix_tree_node.num_objs
458.75 ± 5% -52.3% 218.75 ± 0% slabinfo.radix_tree_node.num_slabs
29275 ± 2% -31.0% 20202 ± 1% slabinfo.shmem_inode_cache.active_objs
619.00 ± 2% -32.7% 416.50 ± 1% slabinfo.shmem_inode_cache.active_slabs
30364 ± 2% -32.7% 20430 ± 1% slabinfo.shmem_inode_cache.num_objs
619.00 ± 2% -32.7% 416.50 ± 1% slabinfo.shmem_inode_cache.num_slabs
2403 ± 0% +26.9% 3049 ± 4% slabinfo.sighand_cache.active_objs
2404 ± 0% +27.2% 3058 ± 3% slabinfo.sighand_cache.num_objs
12294 ± 2% -33.7% 8151 ± 0% slabinfo.signal_cache.active_objs
449.75 ± 2% -38.7% 275.75 ± 0% slabinfo.signal_cache.active_slabs
13510 ± 2% -38.6% 8296 ± 0% slabinfo.signal_cache.num_objs
449.75 ± 2% -38.7% 275.75 ± 0% slabinfo.signal_cache.num_slabs
10311 ± 3% -82.6% 1791 ± 2% slabinfo.task_struct.active_objs
3551 ± 2% -82.6% 618.50 ± 3% slabinfo.task_struct.active_slabs
10653 ± 2% -82.6% 1857 ± 3% slabinfo.task_struct.num_objs
3551 ± 2% -82.6% 618.50 ± 3% slabinfo.task_struct.num_slabs
108393 ± 62% -100.0% 0.00 ± -1% latency_stats.avg.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
339380 ± 4% -100.0% 0.00 ± -1% latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
194559 ± 72% -100.0% 0.00 ± -1% latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_page_from_iter.pipe_write
319346 ± 51% -100.0% 0.00 ± -1% latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.dcache_readdir.iterate_dir
65382 ± 7% +1143.3% 812920 ± 49% latency_stats.hits.call_rwsem_down_read_failed.lookup_slow.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newlstat.SyS_newlstat.entry_SYSCALL_64_fastpath
3051 ± 3% +355.0% 13883 ± 34% latency_stats.hits.call_rwsem_down_write_failed.__put_anon_vma.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
9446 ± 2% +464.9% 53366 ± 45% latency_stats.hits.call_rwsem_down_write_failed.anon_vma_clone.anon_vma_fork.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
4536 ± 2% +1571.0% 75803 ± 36% latency_stats.hits.call_rwsem_down_write_failed.anon_vma_fork.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
19168 ± 26% +15919.6% 3070717 ± 23% latency_stats.hits.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
39844 ± 4% +3214.0% 1320443 ± 52% latency_stats.hits.call_rwsem_down_write_failed.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
41723 ± 4% +2552.2% 1106589 ± 52% latency_stats.hits.call_rwsem_down_write_failed.filename_create.SyS_link.entry_SYSCALL_64_fastpath
18645 ± 5% +1909.2% 374631 ± 65% latency_stats.hits.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.SyS_creat.entry_SYSCALL_64_fastpath
8799 ± 2% +555.6% 57691 ± 39% latency_stats.hits.call_rwsem_down_write_failed.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
19628 ± 21% +17477.0% 3450136 ± 24% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
8919 ± 33% +14766.9% 1326055 ± 24% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
2172 ± 8% +7081.9% 156009 ± 14% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
4901 ± 31% +14152.9% 698606 ± 24% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
2071 ± 11% +7587.3% 159204 ± 8% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
16527 ± 29% +13021.5% 2168685 ± 23% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
8410 ± 35% +16273.9% 1377127 ± 27% latency_stats.hits.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
3568 ± 9% +7720.3% 279068 ± 11% latency_stats.hits.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
1022906 ± 0% +434.9% 5471273 ± 17% latency_stats.hits.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
345570 ± 1% -76.6% 80923 ± 2% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
22583 ± 9% -100.0% 0.00 ± -1% latency_stats.hits.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5944 ± 9% +734.8% 49623 ± 19% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
9532 ± 3% +1106.8% 115038 ± 23% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
263602 ± 26% -95.6% 11602 ± 37% latency_stats.max.call_rwsem_down_read_failed.lookup_slow.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newlstat.SyS_newlstat.entry_SYSCALL_64_fastpath
157119 ±100% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk_anon.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
30500 ± 41% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
34491 ± 68% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_strings
21802 ± 18% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.count
87091 ± 51% -97.1% 2558 ± 53% latency_stats.max.call_rwsem_down_write_failed.__put_anon_vma.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
98529 ± 27% -97.6% 2349 ± 62% latency_stats.max.call_rwsem_down_write_failed.anon_vma_clone.anon_vma_fork.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
97105 ± 48% -97.0% 2885 ± 70% latency_stats.max.call_rwsem_down_write_failed.anon_vma_fork.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
432550 ± 16% -97.4% 11114 ± 33% latency_stats.max.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
172978 ± 34% -93.6% 11153 ± 36% latency_stats.max.call_rwsem_down_write_failed.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
158218 ± 36% -94.9% 8050 ± 51% latency_stats.max.call_rwsem_down_write_failed.filename_create.SyS_link.entry_SYSCALL_64_fastpath
190608 ± 33% -93.9% 11536 ± 38% latency_stats.max.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.SyS_creat.entry_SYSCALL_64_fastpath
46991 ± 36% -95.3% 2213 ± 61% latency_stats.max.call_rwsem_down_write_failed.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
398078 ± 16% -97.1% 11641 ± 38% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
234315 ± 32% -95.2% 11316 ± 36% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
87973 ± 79% -94.5% 4855 ± 23% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
278920 ± 50% -96.7% 9204 ± 26% latency_stats.max.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
89389 ± 78% -93.4% 5870 ± 57% latency_stats.max.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
364073 ± 28% -96.8% 11622 ± 36% latency_stats.max.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
355101 ± 23% -97.2% 9855 ± 37% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
231221 ± 57% -97.1% 6595 ± 51% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
262602 ± 57% -100.0% 0.00 ± -1% latency_stats.max.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2378346 ± 7% -100.0% 0.00 ± -1% latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
305022 ± 58% -100.0% 0.00 ± -1% latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_page_from_iter.pipe_write
517558 ± 27% -100.0% 0.00 ± -1% latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.dcache_readdir.iterate_dir
6266 ± 80% -80.4% 1229 ± 81% latency_stats.max.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
35278 ± 38% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
54816 ± 46% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_strings.do_execveat_common.SyS_execve.do_syscall_64
86622 ± 59% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.count.do_execveat_common.SyS_execve.do_syscall_64
858490 ± 38% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk_anon.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
456258 ± 21% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
414714 ± 18% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_strings
304261 ± 34% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk_anon.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.count
519498 ± 38% -67.1% 170685 ± 64% latency_stats.sum.call_rwsem_down_write_failed.__put_anon_vma.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
6492724 ± 22% -95.0% 324323 ± 65% latency_stats.sum.call_rwsem_down_write_failed.anon_vma_clone.anon_vma_fork.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
14659472 ± 38% +11131.9% 1.647e+09 ± 24% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
37091851 ± 6% +140.8% 89325097 ± 90% latency_stats.sum.call_rwsem_down_write_failed.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
21962599 ± 17% +168.4% 58953021 ± 93% latency_stats.sum.call_rwsem_down_write_failed.filename_create.SyS_link.entry_SYSCALL_64_fastpath
2295465 ± 14% -83.2% 386264 ± 52% latency_stats.sum.call_rwsem_down_write_failed.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
14817903 ± 41% +11928.6% 1.782e+09 ± 24% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
137.50 ±173% +9856.5% 13690 ± 72% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.do_syscall_64.return_from_SYSCALL_64
7519861 ± 42% +10135.4% 7.697e+08 ± 20% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
588722 ± 60% +664.5% 4501045 ± 56% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
4366498 ± 54% +9200.6% 4.061e+08 ± 20% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
460122 ± 40% +830.8% 4282894 ± 53% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
13164990 ± 56% +9239.9% 1.23e+09 ± 21% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 4942 ± 93% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.do_syscall_64.return_from_SYSCALL_64
7178166 ± 45% +11239.4% 8.14e+08 ± 24% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
1295552 ± 51% +514.6% 7962598 ± 55% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64
1.01e+09 ± 0% +1020.6% 1.131e+10 ± 5% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
664569 ± 92% -100.0% 0.00 ± -1% latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2265 ± 0% +1223.3% 29982 ± 17% latency_stats.sum.stop_one_cpu.sched_exec.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
2.137e+08 ± 4% -100.0% 0.00 ± -1% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
361136 ± 62% -100.0% 0.00 ± -1% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_page_from_iter.pipe_write
687053 ± 21% -100.0% 0.00 ± -1% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.dcache_readdir.iterate_dir
8673153 ± 18% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5526927 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_strings.do_execveat_common.SyS_execve.do_syscall_64
5154012 ± 40% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.count.do_execveat_common.SyS_execve.do_syscall_64
19380 ± 57% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
38337 ± 12% +506.1% 232358 ± 32% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
66129 ± 1% +765.0% 571994 ± 30% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
20464 ± 11% +477.6% 118202 ± 12% latency_stats.sum.wait_woken.inotify_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
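The latency_stats.max.* and latency_stats.sum.* rows above are the robot's rendering of /proc/latency_stats (CONFIG_LATENCYTOP): each kernel line carries a hit count, the cumulative and maximum wait in microseconds, and the blocking backtrace. A minimal reader sketch, assuming that format and an arbitrary 10 ms reporting threshold (not part of this report):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/latency_stats", "r");
	char line[4096], trace[4096];
	unsigned long count, sum_us, max_us;

	if (!f) {
		perror("/proc/latency_stats");
		return 1;
	}
	/* skip the "Latency Top version" header line */
	if (!fgets(line, sizeof(line), f)) {
		fclose(f);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%lu %lu %lu %4095[^\n]",
			   &count, &sum_us, &max_us, trace) != 4)
			continue;
		if (max_us > 10000)	/* flag waits longer than 10 ms */
			printf("count=%lu sum=%luus max=%luus  %s\n",
			       count, sum_us, max_us, trace);
	}
	fclose(f);
	return 0;
}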
26179 ± 35% -95.7% 1116 ± 62% sched_debug.cfs_rq:/.MIN_vruntime.avg
125656 ± 15% -60.4% 49801 ± 19% sched_debug.cfs_rq:/.MIN_vruntime.max
44014 ± 19% -84.1% 6993 ± 37% sched_debug.cfs_rq:/.MIN_vruntime.stddev
136443 ± 7% -59.5% 55219 ± 11% sched_debug.cfs_rq:/.exec_clock.avg
146711 ± 7% -49.2% 74586 ± 4% sched_debug.cfs_rq:/.exec_clock.max
123219 ± 6% -71.0% 35727 ± 25% sched_debug.cfs_rq:/.exec_clock.min
5754 ± 12% +213.8% 18058 ± 14% sched_debug.cfs_rq:/.exec_clock.stddev
844624 ± 9% -56.4% 368454 ± 15% sched_debug.cfs_rq:/.load.avg
784.87 ± 9% -76.5% 184.50 ± 21% sched_debug.cfs_rq:/.load_avg.avg
1631 ± 14% -67.6% 529.08 ± 8% sched_debug.cfs_rq:/.load_avg.max
407.17 ± 21% -72.7% 111.20 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
26179 ± 35% -95.7% 1116 ± 62% sched_debug.cfs_rq:/.max_vruntime.avg
125656 ± 15% -60.4% 49801 ± 19% sched_debug.cfs_rq:/.max_vruntime.max
44014 ± 19% -84.1% 6993 ± 37% sched_debug.cfs_rq:/.max_vruntime.stddev
146448 ± 7% -56.0% 64409 ± 11% sched_debug.cfs_rq:/.min_vruntime.avg
163337 ± 6% -40.8% 96735 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
131848 ± 6% -67.0% 43507 ± 25% sched_debug.cfs_rq:/.min_vruntime.min
6739 ± 11% +189.7% 19522 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
0.85 ± 9% -53.9% 0.39 ± 15% sched_debug.cfs_rq:/.nr_running.avg
94.80 ± 9% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.avg
308.23 ± 13% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.max
66.61 ± 12% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.min
28.66 ± 10% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.stddev
712.15 ± 9% -86.9% 92.96 ± 25% sched_debug.cfs_rq:/.runnable_load_avg.avg
1463 ± 13% -70.4% 433.17 ± 22% sched_debug.cfs_rq:/.runnable_load_avg.max
402.81 ± 20% -71.4% 115.08 ± 16% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-15060 ±-12% +114.7% -32329 ±-12% sched_debug.cfs_rq:/.spread0.avg
-29665 ± -4% +79.4% -53232 ±-14% sched_debug.cfs_rq:/.spread0.min
6743 ± 10% +189.5% 19524 ± 16% sched_debug.cfs_rq:/.spread0.stddev
696.44 ± 5% -48.0% 362.08 ± 17% sched_debug.cfs_rq:/.util_avg.avg
916.79 ± 6% -27.7% 662.79 ± 10% sched_debug.cfs_rq:/.util_avg.max
29.51 ± 85% +341.0% 130.12 ± 47% sched_debug.cfs_rq:/.util_avg.min
261.05 ± 16% -44.7% 144.42 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
596196 ± 4% -20.7% 472974 ± 11% sched_debug.cpu.avg_idle.avg
105698 ± 41% -62.6% 39535 ± 27% sched_debug.cpu.avg_idle.min
221788 ± 5% -10.3% 199054 ± 0% sched_debug.cpu.clock.avg
221799 ± 5% -10.3% 199063 ± 0% sched_debug.cpu.clock.max
221757 ± 5% -10.3% 198995 ± 0% sched_debug.cpu.clock.min
221788 ± 5% -10.3% 199054 ± 0% sched_debug.cpu.clock_task.avg
221799 ± 5% -10.3% 199063 ± 0% sched_debug.cpu.clock_task.max
221757 ± 5% -10.3% 198995 ± 0% sched_debug.cpu.clock_task.min
701.54 ± 10% -91.5% 59.43 ± 29% sched_debug.cpu.cpu_load[0].avg
1463 ± 13% -70.9% 426.42 ± 20% sched_debug.cpu.cpu_load[0].max
406.30 ± 20% -74.6% 103.03 ± 19% sched_debug.cpu.cpu_load[0].stddev
745.31 ± 9% -80.0% 149.24 ± 22% sched_debug.cpu.cpu_load[1].avg
1541 ± 13% -69.8% 466.00 ± 14% sched_debug.cpu.cpu_load[1].max
385.34 ± 20% -74.6% 97.89 ± 3% sched_debug.cpu.cpu_load[1].stddev
742.90 ± 9% -80.5% 144.88 ± 23% sched_debug.cpu.cpu_load[2].avg
1509 ± 14% -71.0% 437.58 ± 16% sched_debug.cpu.cpu_load[2].max
382.17 ± 20% -75.6% 93.08 ± 5% sched_debug.cpu.cpu_load[2].stddev
741.77 ± 9% -80.9% 141.46 ± 23% sched_debug.cpu.cpu_load[3].avg
1486 ± 14% -72.5% 409.17 ± 16% sched_debug.cpu.cpu_load[3].max
380.23 ± 20% -76.7% 88.75 ± 7% sched_debug.cpu.cpu_load[3].stddev
738.75 ± 10% -81.1% 139.32 ± 23% sched_debug.cpu.cpu_load[4].avg
1469 ± 14% -74.3% 377.46 ± 13% sched_debug.cpu.cpu_load[4].max
378.29 ± 20% -77.7% 84.46 ± 7% sched_debug.cpu.cpu_load[4].stddev
59821 ± 9% -60.1% 23855 ± 13% sched_debug.cpu.curr->pid.avg
845770 ± 9% -54.8% 381918 ± 14% sched_debug.cpu.load.avg
152824 ± 7% -33.2% 102060 ± 2% sched_debug.cpu.nr_load_updates.avg
163393 ± 7% -23.9% 124414 ± 2% sched_debug.cpu.nr_load_updates.max
141602 ± 7% -42.4% 81552 ± 9% sched_debug.cpu.nr_load_updates.min
5593 ± 5% +231.0% 18517 ± 27% sched_debug.cpu.nr_load_updates.stddev
1.10 ± 12% -61.0% 0.43 ± 17% sched_debug.cpu.nr_running.avg
4.74 ± 22% -55.2% 2.12 ± 29% sched_debug.cpu.nr_running.max
0.88 ± 18% -40.6% 0.52 ± 12% sched_debug.cpu.nr_running.stddev
53514 ± 6% +973.1% 574252 ± 0% sched_debug.cpu.nr_switches.avg
104000 ± 10% +670.9% 801711 ± 4% sched_debug.cpu.nr_switches.max
31623 ± 7% +1003.2% 348868 ± 12% sched_debug.cpu.nr_switches.min
15406 ± 16% +1206.8% 201325 ± 16% sched_debug.cpu.nr_switches.stddev
0.04 ±108% +1901.0% 0.84 ± 26% sched_debug.cpu.nr_uninterruptible.avg
222.77 ± 11% +11596.4% 26055 ± 35% sched_debug.cpu.nr_uninterruptible.max
-272.09 ±-14% +9926.5% -27281 ±-33% sched_debug.cpu.nr_uninterruptible.min
126.36 ± 13% +17929.5% 22781 ± 34% sched_debug.cpu.nr_uninterruptible.stddev
53491 ± 6% +998.1% 587412 ± 0% sched_debug.cpu.sched_count.avg
104371 ± 9% +696.7% 831513 ± 4% sched_debug.cpu.sched_count.max
31417 ± 7% +1023.7% 353031 ± 12% sched_debug.cpu.sched_count.min
15614 ± 14% +1245.4% 210075 ± 17% sched_debug.cpu.sched_count.stddev
16201 ± 7% +1432.8% 248336 ± 2% sched_debug.cpu.sched_goidle.avg
28765 ± 15% +1168.9% 364995 ± 6% sched_debug.cpu.sched_goidle.max
10113 ± 6% +1206.6% 132136 ± 11% sched_debug.cpu.sched_goidle.min
3599 ± 25% +2856.1% 106398 ± 17% sched_debug.cpu.sched_goidle.stddev
19670 ± 6% +1219.6% 259573 ± 2% sched_debug.cpu.ttwu_count.avg
44668 ± 13% +653.9% 336776 ± 5% sched_debug.cpu.ttwu_count.max
11720 ± 10% +1443.6% 180919 ± 6% sched_debug.cpu.ttwu_count.min
6846 ± 14% +909.6% 69121 ± 19% sched_debug.cpu.ttwu_count.stddev
13212 ± 6% +296.5% 52382 ± 16% sched_debug.cpu.ttwu_local.avg
34684 ± 13% +81.8% 63044 ± 6% sched_debug.cpu.ttwu_local.max
7123 ± 11% +494.9% 42378 ± 28% sched_debug.cpu.ttwu_local.min
221761 ± 5% -10.2% 199051 ± 0% sched_debug.cpu_clk
219375 ± 6% -10.9% 195502 ± 0% sched_debug.ktime
0.04 ± 36% -92.0% 0.00 ± 34% sched_debug.rt_rq:/.rt_nr_running.avg
0.71 ± 31% -76.7% 0.17 ± 0% sched_debug.rt_rq:/.rt_nr_running.max
0.15 ± 33% -85.6% 0.02 ± 15% sched_debug.rt_rq:/.rt_nr_running.stddev
0.06 ± 40% -56.7% 0.03 ± 18% sched_debug.rt_rq:/.rt_time.avg
1.57 ± 7% -23.6% 1.20 ± 7% sched_debug.rt_rq:/.rt_time.max
0.25 ± 18% -36.2% 0.16 ± 13% sched_debug.rt_rq:/.rt_time.stddev
221761 ± 5% -10.2% 199051 ± 0% sched_debug.sched_clk
61.95 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.____fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.88 ± 24% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.handle_mm_fault.__do_page_fault
0.59 ± 3% +827.2% 5.45 ± 12% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1.89 ± 13% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault.page_test
61.95 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__fput.____fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath
0.00 ± -1% +Inf% 2.40 ± 2% perf-profile.cycles-pp.__libc_fork
0.00 ± -1% +Inf% 1.41 ± 4% perf-profile.cycles-pp.__split_vma.isra.36.split_vma.mprotect_fixup.sys_mprotect.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.32 ± 5% perf-profile.cycles-pp._dl_addr
0.00 ± -1% +Inf% 2.45 ± 7% perf-profile.cycles-pp._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
0.00 ± -1% +Inf% 1.95 ± 2% perf-profile.cycles-pp._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64.__libc_fork
3.55 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.dcache_dir_close.__fput.____fput.task_work_run
24.58 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.next_positive.isra.13.dcache_readdir.iterate_dir.sys_getdents
49.44 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_trylock.dput.dcache_dir_close.__fput.____fput
0.00 ± -1% +Inf% 6.05 ± 18% perf-profile.cycles-pp.add_short.add_short
0.00 ± -1% +Inf% 1.08 ± 24% perf-profile.cycles-pp.alloc_pages_vma.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 6.04 ± 13% perf-profile.cycles-pp.brk
6.07 ± 6% +498.0% 36.30 ± 7% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 4.04 ± 5% perf-profile.cycles-pp.copy_process.part.30._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
6.12 ± 6% +513.9% 37.54 ± 7% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
1.27 ± 28% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
6.07 ± 6% +497.8% 36.29 ± 7% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
6.06 ± 6% +490.1% 35.75 ± 7% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
61.94 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.dcache_dir_close.__fput.____fput.task_work_run.exit_to_usermode_loop
26.43 ± 3% -96.0% 1.04 ± 6% perf-profile.cycles-pp.dcache_readdir.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 4.69 ± 14% perf-profile.cycles-pp.do_brk.sys_brk.entry_SYSCALL_64_fastpath.brk
0.00 ± -1% +Inf% 6.97 ± 4% perf-profile.cycles-pp.do_execveat_common.isra.34.sys_execve.do_syscall_64.return_from_SYSCALL_64.execve
0.00 ± -1% +Inf% 4.95 ± 7% perf-profile.cycles-pp.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 4.96 ± 7% perf-profile.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.55 ± 6% perf-profile.cycles-pp.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.04 ± 12% perf-profile.cycles-pp.do_munmap.sys_brk.entry_SYSCALL_64_fastpath.brk
0.59 ± 3% +830.1% 5.49 ± 12% perf-profile.cycles-pp.do_page_fault.page_fault
0.00 ± -1% +Inf% 1.91 ± 13% perf-profile.cycles-pp.do_page_fault.page_fault.page_test
0.53 ± 0% +668.6% 4.04 ± 27% perf-profile.cycles-pp.do_syscall_64.return_from_SYSCALL_64
0.00 ± -1% +Inf% 1.96 ± 2% perf-profile.cycles-pp.do_syscall_64.return_from_SYSCALL_64.__libc_fork
0.00 ± -1% +Inf% 7.07 ± 5% perf-profile.cycles-pp.do_syscall_64.return_from_SYSCALL_64.execve
58.00 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.dput.dcache_dir_close.__fput.____fput.task_work_run
0.00 ± -1% +Inf% 1.17 ± 5% perf-profile.cycles-pp.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.isra.34.sys_execve
90.30 ± 0% -82.2% 16.08 ± 19% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 6.00 ± 13% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.brk
0.00 ± -1% +Inf% 1.27 ± 12% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.kill
0.00 ± -1% +Inf% 0.87 ± 32% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.sync
0.00 ± -1% +Inf% 7.15 ± 5% perf-profile.cycles-pp.execve
0.00 ± -1% +Inf% 4.07 ± 6% perf-profile.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
0.00 ± -1% +Inf% 3.17 ± 9% perf-profile.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
62.06 ± 1% -99.5% 0.31 ±103% perf-profile.cycles-pp.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.34 ± 3% perf-profile.cycles-pp.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.00 ± -1% +Inf% 3.24 ± 9% perf-profile.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.isra.34.sys_execve
0.00 ± -1% +Inf% 1.03 ± 12% perf-profile.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput
0.00 ± -1% +Inf% 1.40 ± 2% perf-profile.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
0.00 ± -1% +Inf% 1.39 ± 9% perf-profile.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.26 ±100% +1790.2% 4.82 ± 11% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 1.53 ± 13% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.page_test
0.00 ± -1% +Inf% 5.62 ± 5% perf-profile.cycles-pp.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5.99 ± 6% +488.2% 35.25 ± 7% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
26.58 ± 3% -95.6% 1.17 ± 5% perf-profile.cycles-pp.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.53 ± 13% perf-profile.cycles-pp.kill
1.37 ± 24% -30.9% 0.94 ± 5% perf-profile.cycles-pp.kthread.ret_from_fork
0.00 ± -1% +Inf% 6.62 ± 8% perf-profile.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.isra.34.sys_execve.do_syscall_64
3.67 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.lockref_put_return.dput.dcache_dir_close.__fput.____fput
0.00 ± -1% +Inf% 2.45 ± 5% perf-profile.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
0.00 ± -1% +Inf% 4.09 ± 6% perf-profile.cycles-pp.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 3.19 ± 9% perf-profile.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.isra.34
0.00 ± -1% +Inf% 2.03 ± 5% perf-profile.cycles-pp.mprotect_fixup.sys_mprotect.entry_SYSCALL_64_fastpath
1.26 ± 28% -100.0% 0.00 ± -1% perf-profile.cycles-pp.multi_cpu_stop.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
0.00 ± -1% +Inf% 1.25 ± 5% perf-profile.cycles-pp.native_irq_return_iret
24.50 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.next_positive.isra.13.dcache_readdir.iterate_dir
24.63 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.next_positive.isra.13.dcache_readdir.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.59 ± 3% +824.8% 5.50 ± 12% perf-profile.cycles-pp.page_fault
0.00 ± -1% +Inf% 1.93 ± 13% perf-profile.cycles-pp.page_fault.page_test
0.00 ± -1% +Inf% 2.12 ± 14% perf-profile.cycles-pp.page_test
0.00 ± -1% +Inf% 5.21 ± 9% perf-profile.cycles-pp.perf_event_aux.part.51.perf_event_mmap.do_brk.sys_brk.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 4.00 ± 9% perf-profile.cycles-pp.perf_event_aux_ctx.perf_event_aux.part.51.perf_event_mmap.do_brk.sys_brk
0.00 ± -1% +Inf% 4.00 ± 14% perf-profile.cycles-pp.perf_event_mmap.do_brk.sys_brk.entry_SYSCALL_64_fastpath.brk
0.00 ± -1% +Inf% 0.86 ± 13% perf-profile.cycles-pp.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap
1.37 ± 23% -30.3% 0.96 ± 5% perf-profile.cycles-pp.ret_from_fork
0.53 ± 1% +664.2% 4.05 ± 27% perf-profile.cycles-pp.return_from_SYSCALL_64
0.00 ± -1% +Inf% 1.96 ± 2% perf-profile.cycles-pp.return_from_SYSCALL_64.__libc_fork
0.00 ± -1% +Inf% 7.07 ± 5% perf-profile.cycles-pp.return_from_SYSCALL_64.execve
0.00 ± -1% +Inf% 6.65 ± 8% perf-profile.cycles-pp.search_binary_handler.do_execveat_common.isra.34.sys_execve.do_syscall_64.return_from_SYSCALL_64
1.28 ± 28% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.00 ± -1% +Inf% 1.41 ± 4% perf-profile.cycles-pp.split_vma.mprotect_fixup.sys_mprotect.entry_SYSCALL_64_fastpath
6.12 ± 6% +514.2% 37.59 ± 7% perf-profile.cycles-pp.start_secondary
0.00 ± -1% +Inf% 0.88 ± 32% perf-profile.cycles-pp.sync
0.00 ± -1% +Inf% 5.96 ± 13% perf-profile.cycles-pp.sys_brk.entry_SYSCALL_64_fastpath.brk
0.00 ± -1% +Inf% 2.52 ± 7% perf-profile.cycles-pp.sys_clone.do_syscall_64.return_from_SYSCALL_64
0.00 ± -1% +Inf% 1.95 ± 2% perf-profile.cycles-pp.sys_clone.do_syscall_64.return_from_SYSCALL_64.__libc_fork
0.00 ± -1% +Inf% 7.07 ± 5% perf-profile.cycles-pp.sys_execve.do_syscall_64.return_from_SYSCALL_64.execve
0.00 ± -1% +Inf% 4.96 ± 7% perf-profile.cycles-pp.sys_exit_group.entry_SYSCALL_64_fastpath
26.58 ± 3% -95.6% 1.18 ± 5% perf-profile.cycles-pp.sys_getdents.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.64 ± 6% perf-profile.cycles-pp.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.63 ± 6% perf-profile.cycles-pp.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.12 ± 5% perf-profile.cycles-pp.sys_mprotect.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.87 ± 32% perf-profile.cycles-pp.sys_sync.entry_SYSCALL_64_fastpath.sync
62.06 ± 1% -99.5% 0.32 ±103% perf-profile.cycles-pp.syscall_return_slowpath.entry_SYSCALL_64_fastpath
61.95 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.05 ± 12% perf-profile.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.00 ± -1% +Inf% 1.03 ± 12% perf-profile.cycles-pp.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput.do_exit
0.00 ± -1% +Inf% 1.19 ± 1% perf-profile.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.do_exit
0.00 ± -1% +Inf% 0.95 ± 10% perf-profile.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec
0.00 ± -1% +Inf% 1.98 ± 4% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap.mmput
0.00 ± -1% +Inf% 0.97 ± 12% perf-profile.cycles-pp.unmap_region.do_munmap.sys_brk.entry_SYSCALL_64_fastpath.brk
0.00 ± -1% +Inf% 1.42 ± 7% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.exit_mmap.mmput.do_exit
0.00 ± -1% +Inf% 1.43 ± 7% perf-profile.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.00 ± -1% +Inf% 2.61 ± 6% perf-profile.cycles-pp.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.17 ± 4% perf-profile.cycles-pp.vma_adjust.__split_vma.isra.36.split_vma.mprotect_fixup.sys_mprotect
reaim.jobs_per_min
1e+06 ++-----------------------------------------------------------------+
900000 O+O O O O O O O O O O O O |
| O |
800000 ++ |
700000 ++ O O O O O O O O O |
| |
600000 ++ |
500000 ++ |
400000 ++ |
| |
300000 ++ |
200000 ++ |
| |
100000 *+*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*
0 ++-----------------------------------------------------------------+
reaim.jobs_per_min_child
14000 ++------------------------------------------------------------------+
O O O O O O O O O O O |
12000 ++ O O O |
| |
10000 ++ O O O O O O O O O |
| |
8000 ++ |
| |
6000 ++ |
| |
4000 ++ |
| |
2000 ++ |
*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*
0 ++------------------------------------------------------------------+
reaim.std_dev_time
4.5 ++--------------------------------------------------------------------+
| * *.*. |
4 ++ .* :+ .* .*.. *.*. .. *.*.. .*.*. .*.* |
3.5 *+*. : : *. + .* + * * *..*.* + |
| : : * * + |
3 ++ * *.*.*. |
2.5 ++ *..*.|
| *
2 ++ |
1.5 ++ |
| |
1 ++ |
0.5 ++ |
| |
0 O+O--O-O-O-O--O-O-O-O--O-O-O-O--O-O-O-O--O-O-O-O--O-------------------+
reaim.std_dev_percent
4.5 ++--------------------------------------------------------------------+
| * .* |
4 ++ .* :+ *. *.*.*. *. .* + .* |
3.5 *+*. : : *..* *.*.. : *. .. *.. + *. .*.*.*..* *. +|
| : : + + : * * *..* *
3 ++ : * * |
| * |
2.5 ++ |
| |
2 ++ |
1.5 ++ |
| |
1 ++ |
O O O O O O O O O O O O O O |
0.5 ++------------O---O------O---O--O---O-O------O-O----------------------+
reaim.jti
99 O+O-O--O-O-O-O--O-O-O-O-O--O-O-O-O--O-O-O-O--O-O-O-------------------+
| |
98.5 ++ |
98 ++ |
| |
97.5 ++ |
| |
97 ++ |
| |
96.5 ++ * * * .*. |
96 *+* + : + : : : * * *.. * *
| + + : + : : : + : + + + *. +|
95.5 ++ * :.*.* *.* *..* : .*.*. .*.* * *. .. * |
| * *.*..* *. * |
95 ++-------------------------------------------------------------------+
reaim.max_jobs_per_min
1e+06 ++-----------------------------------------------------------------+
900000 O+O O O O O O O O O O O O O |
| |
800000 ++ O |
700000 ++ O O O O O O O O |
| |
600000 ++ |
500000 ++ |
400000 ++ |
| |
300000 ++ |
200000 ++ |
| |
100000 *+*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*
0 ++-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp] [fs] 5368104fdb: stderr.sync_disk_cp (2): cannot create /fs/shm/tmpb.0153300000
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Andrey-Vagin/fs-allow-to-use-dirfd-as-root-for-openat-and-other-at-syscalls/20160618-043537
commit 5368104fdbcf6c4ede18362cc8d3f113c22f6523 ("fs: allow to use dirfd as root for openat and other *at syscalls")
in testcase: aim7
with following parameters: load=1000/test=sync_disk_cp
on test machine: lkp-a05: Atom with 8G memory
caused below changes:
sync_disk_cp (2): cannot create /fs/shm/tmpb.0153300000
sync_disk_cp (2): cannot create /fs/shm/tmpb.0153200000
disk1.c: No such file or directory
disk1.c: No such file or directory
Child #999: : No such file or directory
Child #998: : No such file or directory
Failed to execute
sync_disk_cp /fs/shm
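The workfiles sync_disk_cp fails to create are made relative to the aim7 work directory, so a plausible first check -- an assumption drawn from the commit subject, not something shown in this report -- is whether the patched *at() resolution still creates files through a directory fd. A minimal sketch; the path and filename are illustrative only:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* /fs/shm is the work directory the failing job uses */
	int dirfd = open("/fs/shm", O_RDONLY | O_DIRECTORY);
	int fd;

	if (dirfd < 0) {
		perror("open /fs/shm");
		return 1;
	}
	/* create a file relative to dirfd -- the code path the patch touches */
	fd = openat(dirfd, "tmpb.test", O_CREAT | O_RDWR, 0644);
	if (fd < 0) {
		perror("openat");
		return 1;
	}
	close(fd);
	unlinkat(dirfd, "tmpb.test", 0);
	close(dirfd);
	return 0;
}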
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[lkp] [jbd2] 41f6316ed3: kernel BUG at fs/ext4/super.c:371!
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Wang-Shilong/jbd2-wake-up-j_wait_done_commit-before-commit-callback/20160616-115406
commit 41f6316ed374dba6f9693d917c1815019f075bdf ("jbd2: wake up j_wait_done_commit before commit callback")
in testcase: ext4-frags
with following parameters: disk=1HDD
on test machine: vm-vp-1G: 2 threads qemu-system-x86_64 -enable-kvm -cpu Nehalem with 1G memory
caused below changes:
+----------------------------------------------------------------------------------------+----------+------------+
| | v4.7-rc3 | 41f6316ed3 |
+----------------------------------------------------------------------------------------+----------+------------+
| boot_successes | 666 | 10 |
| boot_failures | 89 | 4 |
| invoked_oom-killer:gfp_mask=0x | 65 | |
| Mem-Info | 79 | |
| Out_of_memory:Kill_process | 5 | |
| backtrace:_do_fork | 6 | |
| backtrace:SyS_clone | 5 | |
| INFO:suspicious_RCU_usage | 1 | |
| backtrace:rcu_torture_writer | 1 | |
| backtrace:vfs_write | 59 | |
| backtrace:SyS_write | 59 | |
| backtrace:pgd_alloc | 1 | |
| backtrace:mm_init | 1 | |
| BUG:kernel_test_crashed | 5 | |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 56 | |
| backtrace:populate_rootfs | 56 | |
| backtrace:kernel_init_freeable | 56 | |
| BUG:kernel_early-boot_hang | 3 | |
| page_allocation_failure:order:#,mode:#(GFP_NOWAIT|__GFP_HIGH|__GFP_COMP|__GFP_NOTRACK) | 14 | |
| warn_alloc_failed+0x | 14 | |
| backtrace:btrfs_submit_helper | 11 | |
| backtrace:blk_mq_run_work_fn | 4 | |
| backtrace:vfs_read | 2 | |
| backtrace:SyS_read | 2 | |
| INFO:task_blocked_for_more_than#seconds | 1 | |
| RIP:__default_send_IPI_dest_field | 1 | |
| RIP:trace_hardirqs_on_caller | 1 | |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 1 | |
| backtrace:do_utimes | 1 | |
| backtrace:SyS_utimensat | 1 | |
| backtrace:watchdog | 1 | |
| backtrace:ep_poll | 1 | |
| backtrace:SyS_epoll_wait | 1 | |
| backtrace:wb_workfn | 1 | |
| backtrace:do_mlock | 1 | |
| backtrace:SyS_mlock | 1 | |
| kernel_BUG_at_fs/ext4/super.c | 0 | 4 |
| invalid_opcode:#[##]SMP | 0 | 4 |
| RIP:ext4_journal_commit_callback | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
| backtrace:kjournald2 | 0 | 4 |
+----------------------------------------------------------------------------------------+----------+------------+
[ 126.781601] EXT4-fs (vda): couldn't mount as ext2 due to feature incompatibilities
[ 128.686809] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)
[ 128.925399] ------------[ cut here ]------------
[ 128.927927] kernel BUG at fs/ext4/super.c:371!
[ 128.930723] invalid opcode: 0000 [#1] SMP
[ 128.932964] Modules linked in: snd_pcsp
[ 128.935300] CPU: 0 PID: 5176 Comm: jbd2/vda-8 Not tainted 4.7.0-rc3-00001-g41f6316 #195
[ 128.939334] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 128.943491] task: ffff88003c86d340 ti: ffff88003ac48000 task.ti: ffff88003ac48000
[ 128.947079] RIP: 0010:[<ffffffff812aa760>] [<ffffffff812aa760>] ext4_journal_commit_callback+0x2e/0xa2
[ 128.951298] RSP: 0018:ffff88003ac4bc00 EFLAGS: 00010246
[ 128.953524] RAX: ffffffff812aa732 RBX: 0000000000000000 RCX: 0000000000000000
[ 128.956014] RDX: 0000000100000003 RSI: ffff8800392f4040 RDI: ffff880038ae67e8
[ 128.958507] RBP: ffff88003ac4bc30 R08: 0000002fb2cf811c R09: 0000000000000001
[ 128.961025] R10: ffff88003ac4bad8 R11: 0000000000000003 R12: ffff880038ae0008
[ 128.963490] R13: 0000000000000000 R14: ffff880038ae33f8 R15: ffff8800392f4040
[ 128.965834] FS: 0000000000000000(0000) GS:ffff880036a00000(0000) knlGS:0000000000000000
[ 128.969102] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 128.971189] CR2: 00007f4c56b09ad0 CR3: 0000000038b8e000 CR4: 00000000000006f0
[ 128.973487] Stack:
[ 128.974686] 0000000100000003 0000000000000000 0000000000000002 000000000fc00000
[ 128.978100] ffff880038ae67e8 ffff8800392f4040 ffff88003ac4bdc0 ffffffff812d885d
[ 128.981588] 0000000000000000 ffff88003c86dba0 0000000000000000 0000001dc629831a
[ 128.984908] Call Trace:
[ 128.986118] [<ffffffff812d885d>] jbd2_journal_commit_transaction+0x1bb7/0x1f03
[ 128.988879] [<ffffffff81ccc3c5>] ? _raw_spin_unlock_irqrestore+0x48/0x5e
[ 128.990894] [<ffffffff812dd2ef>] kjournald2+0xc5/0x26c
[ 128.992651] [<ffffffff812dd2ef>] ? kjournald2+0xc5/0x26c
[ 128.994316] [<ffffffff810e09be>] ? wake_up_bit+0x2a/0x2a
[ 128.995988] [<ffffffff812dd22a>] ? commit_timeout+0x10/0x10
[ 128.997694] [<ffffffff810c0064>] kthread+0xfb/0x103
[ 128.999270] [<ffffffff81cccdef>] ret_from_fork+0x1f/0x40
[ 129.000953] [<ffffffff810bff69>] ? kthread_create_on_node+0x1ca/0x1ca
[ 129.002823] Code: 66 90 55 48 89 e5 41 57 41 56 41 55 41 54 53 52 4c 8b 2f 4c 8b b7 48 08 00 00 41 83 e5 02 83 7e 0c 07 4d 8b a6 e8 08 00 00 75 02 <0f> 0b 49 81 c4 78 04 00 00 49 89 f7 4c 89 e7 e8 ec 17 a2 00 49
[ 129.014232] RIP [<ffffffff812aa760>] ext4_journal_commit_callback+0x2e/0xa2
[ 129.016077] RSP <ffff88003ac4bc00>
[ 129.017345] ---[ end trace 2d0f182ce31e0934 ]---
[ 129.018820] Kernel panic - not syncing: Fatal exception
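The RIP and the BUG line both decode to the top of ext4_journal_commit_callback(), which in v4.7 asserts that the commit callback runs before the transaction is marked T_FINISHED. A plausible reading of the patch subject -- inferred, not quoted from the report -- is that it moves the T_FINISHED transition (and the j_wait_done_commit wake-up that advertises it) ahead of the callback, so the assertion fires. A userspace mock of that ordering constraint, not kernel code:

#include <assert.h>
#include <stdio.h>

enum t_state { T_COMMIT_CALLBACK, T_FINISHED };

struct transaction { enum t_state t_state; };

/* stand-in for ext4_journal_commit_callback() */
static void commit_callback(struct transaction *txn)
{
	/* mirrors the BUG_ON at fs/ext4/super.c:371 in the oops above */
	assert(txn->t_state != T_FINISHED);
	printf("callback ran, state=%d\n", txn->t_state);
}

int main(void)
{
	struct transaction txn = { .t_state = T_COMMIT_CALLBACK };

	/* mainline jbd2 order: run the callback, then publish T_FINISHED
	 * and wake j_wait_done_commit */
	commit_callback(&txn);
	txn.t_state = T_FINISHED;

	/* swapping the two steps -- what the tested patch appears to do --
	 * trips the assertion, matching the oops:
	 *
	 *   txn.t_state = T_FINISHED;
	 *   commit_callback(&txn);      assert fires here
	 */
	return 0;
}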
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Nehalem -kernel /pkg/linux/x86_64-nfsroot/gcc-6/41f6316ed374dba6f9693d917c1815019f075bdf/vmlinuz-4.7.0-rc3-00001-g41f6316 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-1G-6/bisect_ext4-frags-1HDD-debian-x86_64-2015-02-07.cgz-x86_64-nfsroot-41f6316ed374dba6f9693d917c1815019f075bdf-20160617-84664-r9ff8r-0.yaml~ ARCH=x86_64 kconfig=x86_64-nfsroot branch=linux-devel/devel-hourly-2016061707 commit=41f6316ed374dba6f9693d917c1815019f075bdf BOOT_IMAGE=/pkg/linux/x86_64-nfsroot/gcc-6/41f6316ed374dba6f9693d917c1815019f075bdf/vmlinuz-4.7.0-rc3-00001-g41f6316 max_uptime=3600 RESULT_ROOT=/result/ext4-frags/1HDD/vm-vp-1G/debian-x86_64-2015-02-07.cgz/x86_64-nfsroot/gcc-6/41f6316ed374dba6f9693d917c1815019f075bdf/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-1G-6::dhcp' -initrd /fs/sdh1/initrd-vm-vp-1G-6 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23105-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdh1/disk0-vm-vp-1G-6,media=disk,if=virtio -drive file=/fs/sdh1/disk1-vm-vp-1G-6,media=disk,if=virtio -drive file=/fs/sdh1/disk2-vm-vp-1G-6,media=disk,if=virtio -drive file=/fs/sdh1/disk3-vm-vp-1G-6,media=disk,if=virtio -drive file=/fs/sdh1/disk4-vm-vp-1G-6,media=disk,if=virtio -drive file=/fs/sdh1/disk5-vm-vp-1G-6,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-vp-1G-6 -serial file:/dev/shm/kboot/serial-vm-vp-1G-6 -daemonize -display none -monitor null
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong