[x86/apic] Kernel panic - not syncing: BIOS has enabled x2apic but kernel doesn't support x2apic, please disable x2apic in BIOS.
by Huang Ying
FYI, we noticed the below changes on
https://github.com/jiangliu/linux.git x86_irq_p2v1
commit 16018101c92ad295c51fdc50a8738c46f2f1da81 ("x86/apic: Panic if kernel doesn't support x2apic but BIOS has enabled x2apic")
I think this is the intended behavior, and the kernel message is clearer now :-)
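For context, here is a minimal sketch of the kind of check the commit describes; the helper name and exact placement are illustrative assumptions, not the literal 16018101c9 diff. Firmware that has already switched the boot CPU into x2APIC mode leaves the EXTD bit (bit 10) set in the IA32_APIC_BASE MSR, and a kernel built without CONFIG_X86_X2APIC cannot drive the APIC in that mode, so the only safe option is to panic early with an actionable message instead of the older, less specific IO-APIC timer panic seen in the parent runs below.

/*
 * Illustrative sketch only -- not the literal patch.
 * IA32_APIC_BASE (MSR 0x1b) bit 10 (EXTD) is set when firmware has
 * already put the local APIC into x2APIC mode; a kernel without
 * CONFIG_X86_X2APIC cannot program the APIC in that mode.
 */
#define APIC_BASE_MSR_EXTD	(1ULL << 10)	/* x2APIC mode enabled */

static void __init sketch_validate_x2apic(void)
{
	u64 apicbase;

	rdmsrl(MSR_IA32_APICBASE, apicbase);

	if ((apicbase & APIC_BASE_MSR_EXTD) &&
	    !IS_ENABLED(CONFIG_X86_X2APIC))
		panic("BIOS has enabled x2apic but kernel doesn't support x2apic, please disable x2apic in BIOS.\n");
}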
+------------------------------------------------------------------------------------------------------------------------------------+------------+------------+
| | 123a969fab | 16018101c9 |
+------------------------------------------------------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 3 | 0 |
| boot_failures | 12 | 2 |
| Kernel_panic-not_syncing:IO-APIC+timer_doesn't_work!Boot_with_apic=debug_and_send_a_report.Then_try_booting_with_the'noapic'option | 12 | |
| backtrace:native_smp_prepare_cpus | 12 | 1 |
| backtrace:kernel_init_freeable | 12 | 2 |
| Kernel_panic-not_syncing:BIOS_has_enabled_x2apic_but_kernel_doesn't_support_x2apic,please_disable_x2apic_in_BIOS | 0 | 1 |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 0 | 1 |
| backtrace:prepare_namespace | 0 | 1 |
+------------------------------------------------------------------------------------------------------------------------------------+------------+------------+
[ 0.233055] smpboot: weird, boot CPU (#255) not listed by the BIOS
[ 0.239956] Getting VERSION: ffffffff
[ 0.244044] Getting VERSION: ffffffff
[ 0.248132] Kernel panic - not syncing: BIOS has enabled x2apic but kernel doesn't support x2apic, please disable x2apic in BIOS.
[ 0.248132]
[ 0.262788] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-rc1-ga1cae60 #1
[ 0.270659] Hardware name: Intel Corporation S2600WTT/S2600WTT, BIOS GRNDSDP1.TP2.0025.R02.1403131625 03/13/2014
[ 0.282023] 000000000000a180 ffff88085bf8be18 ffffffff81a058d6 00000000000011e2
[ 0.290318] ffffffff81ee7e13 ffff88085bf8be98 ffffffff81a03c39 00000000ffffffff
[ 0.298613] 0000000000000008 ffff88085bf8bea8 ffff88085bf8be48 ffff88085bf8be58
[ 0.306909] Call Trace:
[ 0.309647] [<ffffffff81a058d6>] dump_stack+0x4c/0x65
[ 0.315386] [<ffffffff81a03c39>] panic+0xc9/0x1f7
[ 0.320738] [<ffffffff82395dac>] enable_IR_x2apic+0x2b/0x2d
[ 0.327051] [<ffffffff82397983>] default_setup_apic_routing+0x12/0x6b
[ 0.334339] [<ffffffff82393d2a>] native_smp_prepare_cpus+0x257/0x360
[ 0.341533] [<ffffffff8238203a>] kernel_init_freeable+0xfc/0x23f
[ 0.348337] [<ffffffff819fc944>] ? rest_init+0x87/0x87
[ 0.354169] [<ffffffff819fc952>] kernel_init+0xe/0xdf
[ 0.359907] [<ffffffff81a0ccbc>] ret_from_fork+0x7c/0xb0
[ 0.365934] [<ffffffff819fc944>] ? rest_init+0x87/0x87
ACPI MEMORY or I/O RESET_REG.
APPEND dmesg: /result/lkp-hsw01/boot/performance-1/debian-x86_64.cgz/x86_64-lkp/a1cae60fa995d086df06d38695cbdf8b5ec87e73/0/.dmesg already exists
early console in decompress_kernel
Thanks,
Huang, Ying
_______________________________________________
LKP mailing list
LKP@linux.intel.com
[net/socket.c] 0cf00c6f360: -3.1% netperf.Throughput_Mbps
by Huang Ying
FYI, we noticed the below changes on
commit 0cf00c6f360a3f7b97be000520f1acde88800536 ("net/socket.c : introduce helper function do_sock_sendmsg to replace reduplicate code")
testbox/testcase/testparams: lkp-t410/netperf/performance-300s-200%-10K-SCTP_STREAM_MANY
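To make the commit concrete, below is a hypothetical sketch of the sort of consolidation the commit title describes; the signature and the kiocb plumbing are assumptions based on net/socket.c of that era, not the actual diff. The effect of the refactor is visible in the data that follows: the perf-profile and latency_stats call chains switch from ...inet_sendmsg.sock_sendmsg... to ...inet_sendmsg.do_sock_sendmsg...

/*
 * Hypothetical sketch of the deduplication described by the commit title;
 * names and signatures are assumptions, not the actual patch.  The
 * duplicated pattern lived in sock_sendmsg() and sock_sendmsg_nosec(),
 * which differed only in whether the security hook path was taken.
 */
static int do_sock_sendmsg(struct socket *sock, struct msghdr *msg,
			   size_t size, bool nosec)
{
	struct kiocb iocb;
	struct sock_iocb siocb;
	int ret;

	init_sync_kiocb(&iocb, NULL);
	iocb.private = &siocb;
	ret = nosec ? __sock_sendmsg_nosec(&iocb, sock, msg, size)
		    : __sock_sendmsg(&iocb, sock, msg, size);
	if (ret == -EIOCBQUEUED)
		ret = wait_on_sync_kiocb(&iocb);
	return ret;
}

int sock_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
{
	return do_sock_sendmsg(sock, msg, size, false);
}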
42eef7a0bb0989cd 0cf00c6f360a3f7b97be000520
---------------- --------------------------
%stddev %change %stddev
\ | \
1293786 ± 4% -17.0% 1073209 ± 4% netperf.time.voluntary_context_switches
13.73 ± 3% -8.2% 12.61 ± 1% netperf.time.user_time
1198 ± 0% -3.1% 1162 ± 0% netperf.Throughput_Mbps
220 ± 0% -2.8% 214 ± 0% netperf.time.percent_of_cpu_this_job_got
653 ± 0% -2.7% 635 ± 0% netperf.time.system_time
5.91 ± 22% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_unlock_bh.release_sock.sctp_sendmsg.inet_sendmsg.sock_sendmsg
0 ± 0% +Inf% 131 ± 1% latency_stats.avg.sctp_sendmsg.[sctp].inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
0 ± 0% +Inf% 4976 ± 0% latency_stats.max.sctp_sendmsg.[sctp].inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
15.58 ± 5% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg
21.57 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_primitive_SEND.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg
21.10 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.release_sock.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg
62.06 ± 2% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg
62.68 ± 2% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.sys_sendmsg
63.26 ± 2% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.sys_sendmsg.system_call_fastpath
0 ± 0% +Inf% 1068435 ± 4% latency_stats.hits.sctp_sendmsg.[sctp].inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
14.88 ± 1% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_backlog_rcv.release_sock.sctp_sendmsg.inet_sendmsg.sock_sendmsg
12.33 ± 4% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_do_sm.sctp_primitive_SEND.sctp_sendmsg.inet_sendmsg.sock_sendmsg
0 ± 0% +Inf% 1.404e+08 ± 3% latency_stats.sum.sctp_sendmsg.[sctp].inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
8.68 ± 5% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.sock_sendmsg
5.78 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.sock_sendmsg
4960 ± 1% -100.0% 0 ± 0% latency_stats.max.sctp_sendmsg.[sctp].inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
135 ± 3% -100.0% 0 ± 0% latency_stats.avg.sctp_sendmsg.[sctp].inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
1.755e+08 ± 5% -100.0% 0 ± 0% latency_stats.sum.sctp_sendmsg.[sctp].inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
1289900 ± 4% -100.0% 0 ± 0% latency_stats.hits.sctp_sendmsg.[sctp].inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.system_call_fastpath
0.00 ± 0% +Inf% 5.88 ± 9% perf-profile.cpu-cycles._raw_spin_unlock_bh.release_sock.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg
0.00 ± 0% +Inf% 6.16 ± 0% perf-profile.cpu-cycles.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg
0.00 ± 0% +Inf% 8.69 ± 0% perf-profile.cpu-cycles.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg
0.00 ± 0% +Inf% 12.81 ± 3% perf-profile.cpu-cycles.sctp_do_sm.sctp_primitive_SEND.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg
0.00 ± 0% +Inf% 22.40 ± 4% perf-profile.cpu-cycles.sctp_primitive_SEND.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg.___sys_sendmsg
0.00 ± 0% +Inf% 16.02 ± 0% perf-profile.cpu-cycles.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg.___sys_sendmsg
0.00 ± 0% +Inf% 20.81 ± 3% perf-profile.cpu-cycles.release_sock.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg.___sys_sendmsg
0.00 ± 0% +Inf% 64.24 ± 2% perf-profile.cpu-cycles.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.sys_sendmsg.system_call_fastpath
0.00 ± 0% +Inf% 63.64 ± 2% perf-profile.cpu-cycles.inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg.sys_sendmsg
0.00 ± 0% +Inf% 63.01 ± 2% perf-profile.cpu-cycles.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg.___sys_sendmsg.__sys_sendmsg
0.00 ± 0% +Inf% 14.61 ± 1% perf-profile.cpu-cycles.sctp_backlog_rcv.release_sock.sctp_sendmsg.inet_sendmsg.do_sock_sendmsg
1911 ± 30% +92.9% 3687 ± 19% latency_stats.sum.down.console_lock.console_device.tty_open.chrdev_open.do_dentry_open.vfs_open.do_last.path_openat.do_filp_open.do_sys_open.SyS_open
197535 ± 24% +45.6% 287661 ± 14% sched_debug.cpu#1.sched_goidle
2800 ± 7% +58.9% 4449 ± 12% cpuidle.C6-NHM.usage
8319130 ± 3% +48.0% 12315380 ± 12% cpuidle.C6-NHM.time
16345940 ± 12% +52.9% 24992140 ± 9% cpuidle.C1-NHM.time
4.83 ± 13% +52.6% 7.37 ± 10% turbostat.%c1
36210370 ± 14% +54.4% 55904924 ± 10% cpuidle.C1E-NHM.time
4763 ± 14% +70.5% 8121 ± 7% cpuidle.C3-NHM.usage
740074 ± 13% +54.9% 1146007 ± 9% cpuidle.C1E-NHM.usage
201068 ± 19% +38.3% 277987 ± 11% sched_debug.cpu#0.sched_goidle
0.23 ± 10% +60.4% 0.36 ± 27% turbostat.%c6
0.14 ± 16% +27.8% 0.17 ± 19% turbostat.%c3
198646 ± 19% +44.9% 287931 ± 11% sched_debug.cpu#2.sched_goidle
401326 ± 16% +47.4% 591389 ± 10% cpuidle.C1-NHM.usage
115 ± 10% -23.6% 88 ± 16% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.__inode_permission.inode_permission.may_open
34172 ± 7% +30.3% 44535 ± 5% softirqs.SCHED
3363437 ± 11% +34.9% 4538159 ± 14% cpuidle.C3-NHM.time
156 ± 5% +26.1% 197 ± 6% uptime.idle
212838 ± 17% +39.3% 296432 ± 7% sched_debug.cpu#3.sched_goidle
151 ± 5% -18.5% 123 ± 3% latency_stats.avg.sctp_skb_recv_datagram.[sctp].sctp_recvmsg.[sctp].sock_common_recvmsg.sock_recvmsg.___sys_recvmsg.__sys_recvmsg.SyS_recvmsg.system_call_fastpath
1137848 ± 9% +15.6% 1314801 ± 4% sched_debug.cpu#1.sched_count
1137594 ± 9% +15.5% 1314483 ± 4% sched_debug.cpu#1.nr_switches
427 ± 7% +15.8% 495 ± 5% sched_debug.cpu#3.load
607501 ± 6% -11.3% 539116 ± 4% sched_debug.cpu#2.ttwu_local
1063918 ± 4% -10.4% 952749 ± 6% latency_stats.sum.do_wait.SyS_wait4.system_call_fastpath
3700692 ± 3% +12.8% 4174394 ± 2% latency_stats.hits.sctp_skb_recv_datagram.[sctp].sctp_recvmsg.[sctp].sock_common_recvmsg.sock_recvmsg.___sys_recvmsg.__sys_recvmsg.SyS_recvmsg.system_call_fastpath
1.05 ± 4% +10.2% 1.16 ± 4% perf-profile.cpu-cycles.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.sctp_packet_transmit
2.93 ± 5% +7.9% 3.16 ± 3% perf-profile.cpu-cycles.__kmalloc_reserve.isra.26.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user
334282 ± 4% -7.3% 309726 ± 2% sched_debug.cfs_rq[1]:/.min_vruntime
0.91 ± 7% -11.0% 0.81 ± 3% perf-profile.cpu-cycles.nf_hook_slow.ip_output.ip_local_out_sk.ip_queue_xmit.sctp_v4_xmit
2.74 ± 2% +8.2% 2.97 ± 2% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_kmem_pages_node.kmalloc_large_node.__kmalloc_node_track_caller
245637 ± 3% -13.0% 213811 ± 9% sched_debug.cfs_rq[1]:/.MIN_vruntime
245637 ± 3% -13.0% 213811 ± 9% sched_debug.cfs_rq[1]:/.max_vruntime
623176 ± 4% -9.9% 561641 ± 5% sched_debug.cpu#3.ttwu_local
437 ± 2% -7.7% 403 ± 4% sched_debug.cpu#1.cpu_load[3]
1293786 ± 4% -17.0% 1073209 ± 4% time.voluntary_context_switches
27545 ± 3% +11.2% 30617 ± 1% vmstat.system.cs
94.80 ± 0% -2.9% 92.09 ± 0% turbostat.%c0
220 ± 0% -2.8% 214 ± 0% time.percent_of_cpu_this_job_got
653 ± 0% -2.7% 635 ± 0% time.system_time
lkp-t410: Westmere
Memory: 2G
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying
_______________________________________________
LKP mailing list
LKP@linux.intel.com
[sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops
by Huang Ying
FYI, we noticed the below changes on
commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in set_cpus_allowed_ptr() if task is not running")
testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1
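To make the title concrete, here is a simplified fragment of a set_cpus_allowed_ptr()-style slow path as the commit title describes it; the helper names follow scheduler code of that era, but this is illustrative, not the exact a15b12ac36ad diff. Waking the per-cpu stopper only for tasks that are actually running, and moving merely-queued tasks directly under the rq lock, is consistent with the -46.9% drop in time.voluntary_context_switches shown below.

	/*
	 * Illustrative fragment (not the exact diff): fall back to the
	 * stopper/migration thread only when the task is genuinely running
	 * on its current CPU.
	 */
	if (task_running(rq, p) || p->state == TASK_WAKING) {
		struct migration_arg arg = { p, dest_cpu };

		/* Running task: need help from the migration thread. */
		task_rq_unlock(rq, p, &flags);
		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
		tlb_migrate_finish(p->mm);
		return 0;
	} else if (task_on_rq_queued(p)) {
		/* Queued but not running: requeue it on dest_cpu directly. */
		rq = move_queued_task(p, dest_cpu);
	}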
1ba93d42727c4400 a15b12ac36ad4e7b856a4ae549
---------------- --------------------------
%stddev %change %stddev
\ | \
1517261 ± 0% +1.5% 1539994 ± 0% will-it-scale.per_process_ops
247 ± 30% +131.8% 573 ± 49% sched_debug.cpu#61.ttwu_count
225 ± 22% +142.8% 546 ± 34% sched_debug.cpu#81.ttwu_local
15115 ± 44% +37.3% 20746 ± 40% numa-meminfo.node7.Active
1028 ± 38% +115.3% 2214 ± 36% sched_debug.cpu#16.ttwu_local
2 ± 19% +133.3% 5 ± 43% sched_debug.cpu#89.cpu_load[3]
21 ± 45% +88.2% 40 ± 23% sched_debug.cfs_rq[99]:/.tg_load_contrib
414 ± 33% +98.6% 823 ± 28% sched_debug.cpu#81.ttwu_count
4 ± 10% +88.2% 8 ± 12% sched_debug.cfs_rq[33]:/.runnable_load_avg
22 ± 26% +80.9% 40 ± 24% sched_debug.cfs_rq[103]:/.tg_load_contrib
7 ± 17% -41.4% 4 ± 25% sched_debug.cfs_rq[41]:/.load
7 ± 17% -37.9% 4 ± 19% sched_debug.cpu#41.load
3 ± 22% +106.7% 7 ± 10% sched_debug.cfs_rq[36]:/.runnable_load_avg
174 ± 13% +48.7% 259 ± 31% sched_debug.cpu#112.ttwu_count
4 ± 19% +88.9% 8 ± 5% sched_debug.cfs_rq[35]:/.runnable_load_avg
260 ± 10% +55.6% 405 ± 26% numa-vmstat.node3.nr_anon_pages
1042 ± 10% +56.0% 1626 ± 26% numa-meminfo.node3.AnonPages
26 ± 22% +74.3% 45 ± 16% sched_debug.cfs_rq[65]:/.tg_load_contrib
21 ± 43% +71.3% 37 ± 26% sched_debug.cfs_rq[100]:/.tg_load_contrib
3686 ± 21% +40.2% 5167 ± 19% sched_debug.cpu#16.ttwu_count
142 ± 9% +34.4% 191 ± 24% sched_debug.cpu#112.ttwu_local
5 ± 18% +69.6% 9 ± 15% sched_debug.cfs_rq[35]:/.load
2 ± 30% +100.0% 5 ± 37% sched_debug.cpu#106.cpu_load[1]
3 ± 23% +100.0% 6 ± 48% sched_debug.cpu#106.cpu_load[2]
5 ± 18% +69.6% 9 ± 15% sched_debug.cpu#35.load
9 ± 20% +48.6% 13 ± 16% sched_debug.cfs_rq[7]:/.runnable_load_avg
1727 ± 15% +43.9% 2484 ± 30% sched_debug.cpu#34.ttwu_local
10 ± 17% -40.5% 6 ± 13% sched_debug.cpu#41.cpu_load[0]
10 ± 14% -29.3% 7 ± 5% sched_debug.cpu#45.cpu_load[4]
10 ± 17% -33.3% 7 ± 10% sched_debug.cpu#41.cpu_load[1]
6121 ± 8% +56.7% 9595 ± 30% sched_debug.cpu#13.sched_goidle
13 ± 8% -25.9% 10 ± 17% sched_debug.cpu#39.cpu_load[2]
12 ± 16% -24.0% 9 ± 15% sched_debug.cpu#37.cpu_load[2]
492 ± 17% -21.3% 387 ± 24% sched_debug.cpu#46.ttwu_count
3761 ± 11% -23.9% 2863 ± 15% sched_debug.cpu#93.curr->pid
570 ± 19% +43.2% 816 ± 17% sched_debug.cpu#86.ttwu_count
5279 ± 8% +63.5% 8631 ± 33% sched_debug.cpu#13.ttwu_count
377 ± 22% -28.6% 269 ± 14% sched_debug.cpu#46.ttwu_local
5396 ± 10% +29.9% 7007 ± 14% sched_debug.cpu#16.sched_goidle
1959 ± 12% +36.9% 2683 ± 15% numa-vmstat.node2.nr_slab_reclaimable
7839 ± 12% +37.0% 10736 ± 15% numa-meminfo.node2.SReclaimable
5 ± 15% +66.7% 8 ± 9% sched_debug.cfs_rq[33]:/.load
5 ± 25% +47.8% 8 ± 10% sched_debug.cfs_rq[37]:/.load
2 ± 0% +87.5% 3 ± 34% sched_debug.cpu#89.cpu_load[4]
5 ± 15% +66.7% 8 ± 9% sched_debug.cpu#33.load
6 ± 23% +41.7% 8 ± 10% sched_debug.cpu#37.load
8 ± 10% -26.5% 6 ± 6% sched_debug.cpu#51.cpu_load[1]
7300 ± 37% +63.6% 11943 ± 16% softirqs.TASKLET
2984 ± 6% +43.1% 4271 ± 23% sched_debug.cpu#20.ttwu_count
328 ± 4% +40.5% 462 ± 25% sched_debug.cpu#26.ttwu_local
10 ± 7% -27.5% 7 ± 5% sched_debug.cpu#43.cpu_load[3]
9 ± 8% -30.8% 6 ± 6% sched_debug.cpu#41.cpu_load[3]
9 ± 8% -27.0% 6 ± 6% sched_debug.cpu#41.cpu_load[4]
10 ± 14% -32.5% 6 ± 6% sched_debug.cpu#41.cpu_load[2]
16292 ± 6% +42.8% 23260 ± 25% sched_debug.cpu#13.nr_switches
14 ± 28% +55.9% 23 ± 8% sched_debug.cpu#99.cpu_load[0]
5 ± 8% +28.6% 6 ± 12% sched_debug.cpu#17.load
13 ± 7% -23.1% 10 ± 12% sched_debug.cpu#39.cpu_load[3]
7 ± 10% -35.7% 4 ± 11% sched_debug.cfs_rq[45]:/.runnable_load_avg
5076 ± 13% -21.8% 3970 ± 11% numa-vmstat.node0.nr_slab_unreclaimable
20306 ± 13% -21.8% 15886 ± 11% numa-meminfo.node0.SUnreclaim
10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#45.cpu_load[3]
11 ± 11% -29.5% 7 ± 14% sched_debug.cpu#45.cpu_load[1]
10 ± 12% -26.8% 7 ± 6% sched_debug.cpu#44.cpu_load[1]
10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#44.cpu_load[0]
7 ± 17% +48.3% 10 ± 7% sched_debug.cfs_rq[11]:/.runnable_load_avg
11 ± 12% -34.1% 7 ± 11% sched_debug.cpu#47.cpu_load[0]
10 ± 10% -27.9% 7 ± 5% sched_debug.cpu#47.cpu_load[1]
10 ± 8% -26.8% 7 ± 11% sched_debug.cpu#47.cpu_load[2]
10 ± 8% -28.6% 7 ± 14% sched_debug.cpu#43.cpu_load[0]
10 ± 10% -27.9% 7 ± 10% sched_debug.cpu#43.cpu_load[1]
10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#43.cpu_load[2]
12940 ± 3% +49.8% 19387 ± 35% numa-meminfo.node2.Active(anon)
3235 ± 2% +49.8% 4844 ± 35% numa-vmstat.node2.nr_active_anon
17 ± 17% +36.6% 24 ± 9% sched_debug.cpu#97.cpu_load[2]
14725 ± 8% +21.8% 17928 ± 11% sched_debug.cpu#16.nr_switches
667 ± 10% +45.3% 969 ± 22% sched_debug.cpu#17.ttwu_local
3257 ± 5% +22.4% 3988 ± 11% sched_debug.cpu#118.curr->pid
3144 ± 15% -20.7% 2493 ± 8% sched_debug.cpu#95.curr->pid
2192 ± 11% +50.9% 3308 ± 37% sched_debug.cpu#18.ttwu_count
6 ± 11% +37.5% 8 ± 19% sched_debug.cfs_rq[22]:/.load
12 ± 5% +27.1% 15 ± 8% sched_debug.cpu#5.cpu_load[1]
11 ± 12% -23.4% 9 ± 13% sched_debug.cpu#37.cpu_load[3]
6 ± 11% +37.5% 8 ± 19% sched_debug.cpu#22.load
8 ± 8% -25.0% 6 ± 0% sched_debug.cpu#51.cpu_load[2]
7 ± 6% -20.0% 6 ± 11% sched_debug.cpu#55.cpu_load[3]
11 ± 9% -17.4% 9 ± 9% sched_debug.cpu#39.cpu_load[4]
12 ± 5% -22.9% 9 ± 11% sched_debug.cpu#38.cpu_load[3]
420 ± 13% +43.0% 601 ± 9% sched_debug.cpu#30.ttwu_local
1682 ± 14% +38.5% 2329 ± 17% numa-meminfo.node7.AnonPages
423 ± 13% +37.0% 579 ± 16% numa-vmstat.node7.nr_anon_pages
15 ± 13% +41.9% 22 ± 5% sched_debug.cpu#99.cpu_load[1]
6 ± 20% +44.0% 9 ± 13% sched_debug.cfs_rq[19]:/.runnable_load_avg
9 ± 4% -24.3% 7 ± 0% sched_debug.cpu#43.cpu_load[4]
6341 ± 7% -19.6% 5100 ± 16% sched_debug.cpu#43.curr->pid
2577 ± 11% -11.9% 2270 ± 10% sched_debug.cpu#33.ttwu_count
13 ± 6% -18.5% 11 ± 12% sched_debug.cpu#40.cpu_load[2]
4828 ± 6% +23.8% 5979 ± 6% sched_debug.cpu#34.curr->pid
4351 ± 12% +33.9% 5824 ± 12% sched_debug.cpu#36.curr->pid
10 ± 8% -23.8% 8 ± 8% sched_debug.cpu#37.cpu_load[4]
10 ± 14% -28.6% 7 ± 6% sched_debug.cpu#45.cpu_load[2]
17 ± 22% +40.6% 24 ± 7% sched_debug.cpu#97.cpu_load[1]
11 ± 9% +21.3% 14 ± 5% sched_debug.cpu#7.cpu_load[2]
10 ± 8% -26.2% 7 ± 10% sched_debug.cpu#36.cpu_load[4]
12853 ± 2% +20.0% 15429 ± 11% numa-meminfo.node2.AnonPages
4744 ± 8% +30.8% 6204 ± 11% sched_debug.cpu#35.curr->pid
3214 ± 2% +20.0% 3856 ± 11% numa-vmstat.node2.nr_anon_pages
6181 ± 6% +24.9% 7718 ± 9% sched_debug.cpu#13.curr->pid
6675 ± 23% +27.5% 8510 ± 10% sched_debug.cfs_rq[91]:/.tg_load_avg
171261 ± 5% -22.2% 133177 ± 15% numa-numastat.node0.local_node
6589 ± 21% +29.3% 8522 ± 11% sched_debug.cfs_rq[89]:/.tg_load_avg
6508 ± 20% +28.0% 8331 ± 8% sched_debug.cfs_rq[88]:/.tg_load_avg
6598 ± 22% +29.2% 8525 ± 11% sched_debug.cfs_rq[90]:/.tg_load_avg
590 ± 13% -21.4% 464 ± 7% sched_debug.cpu#105.ttwu_local
175392 ± 5% -21.7% 137308 ± 14% numa-numastat.node0.numa_hit
11 ± 6% -18.2% 9 ± 7% sched_debug.cpu#38.cpu_load[4]
6643 ± 23% +27.4% 8465 ± 10% sched_debug.cfs_rq[94]:/.tg_load_avg
6764 ± 7% +13.8% 7695 ± 7% sched_debug.cpu#12.curr->pid
29 ± 28% +34.5% 39 ± 5% sched_debug.cfs_rq[98]:/.tg_load_contrib
1776 ± 7% +29.4% 2298 ± 13% sched_debug.cpu#11.ttwu_local
13 ± 0% -19.2% 10 ± 8% sched_debug.cpu#40.cpu_load[3]
7 ± 5% -17.2% 6 ± 0% sched_debug.cpu#51.cpu_load[3]
7371 ± 20% -18.0% 6045 ± 3% sched_debug.cpu#1.sched_goidle
26560 ± 2% +14.0% 30287 ± 7% numa-meminfo.node2.Slab
16161 ± 6% -9.4% 14646 ± 1% sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
351 ± 6% -9.3% 318 ± 1% sched_debug.cfs_rq[27]:/.tg_runnable_contrib
7753 ± 27% -22.9% 5976 ± 5% sched_debug.cpu#2.sched_goidle
3828 ± 9% +17.3% 4490 ± 6% sched_debug.cpu#23.sched_goidle
23925 ± 2% +23.0% 29419 ± 23% numa-meminfo.node2.Active
47 ± 6% -15.8% 40 ± 19% sched_debug.cpu#42.cpu_load[1]
282 ± 5% -9.7% 254 ± 7% sched_debug.cfs_rq[109]:/.tg_runnable_contrib
349 ± 5% -9.3% 317 ± 1% sched_debug.cfs_rq[26]:/.tg_runnable_contrib
6941 ± 3% +8.9% 7558 ± 7% sched_debug.cpu#61.nr_switches
16051 ± 5% -8.9% 14618 ± 1% sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
238944 ± 3% +9.2% 260958 ± 5% numa-vmstat.node2.numa_local
12966 ± 5% -9.5% 11732 ± 6% sched_debug.cfs_rq[109]:/.avg->runnable_avg_sum
1004 ± 3% +8.2% 1086 ± 4% sched_debug.cpu#118.sched_goidle
20746 ± 4% -8.4% 19000 ± 1% sched_debug.cfs_rq[45]:/.avg->runnable_avg_sum
451 ± 4% -8.3% 413 ± 1% sched_debug.cfs_rq[45]:/.tg_runnable_contrib
3538 ± 4% +17.2% 4147 ± 8% sched_debug.cpu#26.ttwu_count
16 ± 9% +13.8% 18 ± 2% sched_debug.cpu#99.cpu_load[3]
1531 ± 0% +11.3% 1704 ± 1% numa-meminfo.node7.KernelStack
3569 ± 3% +17.2% 4182 ± 10% sched_debug.cpu#24.sched_goidle
1820 ± 3% -12.5% 1594 ± 8% slabinfo.taskstats.num_objs
1819 ± 3% -12.4% 1594 ± 8% slabinfo.taskstats.active_objs
4006 ± 5% +19.1% 4769 ± 8% sched_debug.cpu#17.sched_goidle
21412 ± 19% -17.0% 17779 ± 3% sched_debug.cpu#2.nr_switches
16 ± 9% +24.2% 20 ± 4% sched_debug.cpu#99.cpu_load[2]
10493 ± 7% +13.3% 11890 ± 4% sched_debug.cpu#23.nr_switches
1207 ± 2% -46.9% 640 ± 4% time.voluntary_context_switches
time.voluntary_context_switches
    [ ASCII trend plot, y-axis 500-1300: [*] samples cluster around
      1200-1300 voluntary context switches; [O] samples cluster around
      550-700. ]
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying
_______________________________________________
LKP mailing list
LKP@linux.intel.com